The Truth About AI in Risk Management
Third Party Podcast: Can We Trust AI Models?

Episode Recap
If you’re looking for a "set-it-and-forget-it" solution for Third-Party Risk Management, you’re looking for a fairy tale.
In the latest episode of Third Party, hosts Jeffrey Wheatman, Bob Maley, and Ferhat Dikbiyik pull back the curtain on the most hyped technology in the industry to ask a foundational question: Can we trust AI models?
The answer isn't a simple yes or no. To understand why, we have to understand the nature of models themselves.
The Fundamental Truth: All Models Are Wrong
As Bob says, every single model we’ve ever used is wrong. The question is how wrong, and whether we understand it.
In vulnerability research, we rely on models like CVSS (the Common Vulnerability Scoring System) and EPSS (the Exploit Prediction Scoring System) to prioritize risk at scale. They are incredibly useful, but they are still just simplifications of a messy reality. AI doesn't change this fact. It just accelerates it.
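To make that concrete, here is a minimal Python sketch of the kind of prioritization these scores enable: weighting CVSS severity by EPSS exploitation probability. The weighting formula and the sample findings are illustrative assumptions, not an industry standard.

```python
# Minimal sketch: prioritizing vulnerabilities with CVSS + EPSS.
# The priority formula and the sample records below are
# illustrative assumptions, not a standard methodology.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float  # severity score, 0.0-10.0
    epss: float  # estimated probability of exploitation in 30 days, 0.0-1.0

def priority(f: Finding) -> float:
    # Weight severity by likelihood: a critical CVSS score with a
    # near-zero EPSS probability can rank below a merely "high" CVSS
    # score that is actually likely to be exploited.
    return f.cvss * f.epss

findings = [
    Finding("CVE-2024-0001", cvss=9.8, epss=0.02),
    Finding("CVE-2024-0002", cvss=7.5, epss=0.89),
    Finding("CVE-2024-0003", cvss=5.3, epss=0.46),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.cve_id}: cvss={f.cvss}, epss={f.epss}, priority={priority(f):.2f}")
```

The point isn't this particular formula. Any weighting like it is exactly the kind of simplification Bob describes, which is why an experienced researcher still has to sanity-check the ranking it produces.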
Jeffrey points out a paradox that every risk professional should memorize: The more accurate a model becomes, the harder it is to explain. The easier it is to use, the less it reflects reality. When we trust an AI model blindly because it’s "easy," we aren't managing risk; we’re participating in Accountability Theater.
Why Math Alone Fails in Cyber Risk Management
If math were the only requirement for risk management, we would have solved TPRM a decade ago. But as Ferhat explains, vulnerability research can’t rely on math alone.
Experienced researchers layer human judgment, intuition, and context on top of these models. Time and again, that "gut feeling" proves right when real-world exploitation follows.
- The AI Gap: AI is excellent at processing massive volumes of data, but it struggles with nuance and the "why" behind the numbers.
- The Context Trap: An AI might tell you a vendor is "High Risk," but without the human intuition to connect that vendor to a specific, critical business impact, that data is just noise.
The Danger of "Closed Loop" AI and Data Poisoning
The real risk isn't using AI models. It’s letting them run without a challenge. When we stop asking questions and stop validating assumptions, models don’t just fail. They quietly mislead.
In an "open loop," humans audit the results and revisit decisions. In a "closed loop," the AI evaluates its own outputs and retrains on them, effectively poisoning its own data. If the AI makes a wrong turn and then "trains" on that error, the entire risk posture collapses. This is why a "Human in the Loop" (HITL) isn't just a best practice; it's a requirement for survival.
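To make the open-loop versus closed-loop distinction concrete, here is a hedged Python sketch of a human-in-the-loop gate. The `score_vendor` model, the analyst review stand-in, and the confidence threshold are all hypothetical; the point is the shape of the loop, not the specifics.

```python
# Sketch of an open-loop (human-in-the-loop) review gate.
# score_vendor(), analyst_review(), and the 0.90 confidence floor
# are hypothetical stand-ins, not a real model or policy.

from typing import Callable

def score_vendor(vendor: str) -> tuple[str, float]:
    # Placeholder model: returns a risk label and a confidence.
    return ("High Risk", 0.72)

def analyst_review(vendor: str, label: str, confidence: float) -> str:
    # Stand-in for a real review queue; an analyst confirms or overrides.
    print(f"Review needed: {vendor} scored '{label}' at {confidence:.0%} confidence")
    return label  # assume the analyst confirms in this sketch

def open_loop_assess(vendor: str,
                     human_review: Callable[[str, str, float], str],
                     confidence_floor: float = 0.90) -> str:
    label, confidence = score_vendor(vendor)
    if confidence < confidence_floor:
        # Open loop: a low-confidence verdict is routed to a human
        # instead of being accepted automatically.
        label = human_review(vendor, label, confidence)
    return label

print(open_loop_assess("Acme Corp", analyst_review))

# A closed loop would instead do something like:
#   training_data.append((vendor, label))  # the model grades its own homework
# which is how one wrong turn quietly poisons every later decision.
```

The exact threshold is arbitrary here. What matters is that the data flows one way: model output goes to a human, never straight back into the model.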
AI as a Buzzword vs. a Differentiator in TPRM
Most vendors use AI as a buzzword to hide the "black box." To cut through the smoke, you must demand transparent math and defensibility. If a model, AI or otherwise, cannot be explained to a Board of Directors in plain English, it is a liability.
Progress in security comes from combining three things:
- Structured Models: Using AI to handle 99% of the data drudgery.
- Continuous Validation: Never letting a decision sit unexamined.
- Human Insight: Using the AI as a co-pilot to fuel, not replace, the expert’s judgment.
Achieve Defensibility Through Transparent AI
Models are inputs, not outcomes. They are part of the solution, not the entirety of it. We can trust AI models only when we have the humility to revisit their decisions and the tools to provide the "daylight" they need to function.
In an industry full of hidden math, Black Kite AI is built on a different philosophy: Transparency. By providing transparent math and open-source intelligence, Black Kite ensures that every risk score is defensible. It’s not about handing over the keys to an algorithm. It’s about using an AI co-pilot that helps you find the "missing points," vet your assumptions, and explain vendor risk to your Board with total confidence.
DON'T MISS AN EPISODE!
Subscribe to Third Party on YouTube, the podcast for the people who don’t need to ask ChatGPT what TPRM means. New episodes every other week.
Next Time on Third Party
Who actually owns risk? From CISOs to CFOs, vendors to regulators, everyone wants a say — but no one wants the responsibility. We’ll tackle why ownership is still fragmented, how to fix it, and why “shared responsibility” often means “shared confusion.”
Subscribe below.