
TPRM is feeling AI’s seismic impact—from attackers to defenders and the vendors in between.
AI is already changing third-party risk—on both sides of the glass. In this episode of the Third Party podcast, hosts Jeffrey Wheatman, Ferhat Dikbiyik, and Bob Maley strip away the hype and get practical about two truths:
1️⃣ Your vendors are using AI, often without telling you, and sometimes without even realizing where it sits in their stack.
2️⃣ Your TPRM team must use AI, not to replace judgment, but to kill the drudge work that keeps you from doing real risk management.
On the vendor side: assume every SaaS you rely on has shipped AI somewhere. That means your data may be riding shotgun with a model you didn’t approve and can’t see. Traditional “block it until we figure it out” governance won’t survive contact with reality: users will route around controls, algorithms update thousands of times a day, and you can’t re-committee every change.
The mature posture is simple and aggressive. Expect AI, require disclosure, and put guardrails where they matter: data boundaries, acceptable-use limits, and auditable trails for any AI-assisted output that touches your risk decisions. One customer story in the episode says the quiet part out loud: turning off a vendor’s AI to appease a committee also turns off your visibility into dark-web chatter and open-source intelligence (OSINT) that feed risk intelligence. You don’t reduce risk by blinding yourself.
Governance is another sore spot. Even as adoption spikes, very few organizations have genuinely mature AI governance. The panel calls out the gap directly: most councils aren’t sure what to govern or how to test, and the “maturity model” mindset collapses in a domain moving faster than cloud ever did. The directive here isn’t to stall; it’s to ship guardrails: know where models run, what they’re trained on, which data they can touch, how outputs are logged, and when a human must make the final call.
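To make that concrete, here is a minimal sketch of what “ship guardrails” could look like in practice: one inventory record per AI deployment that captures where the model runs, what it was trained on, which data it can touch, whether outputs are logged, and whether a human makes the final call. Everything here (the `AIGuardrail` record, the `violations` check, the field names) is illustrative, not a standard schema or anything prescribed in the episode.

```python
from dataclasses import dataclass

# Hypothetical guardrail record for one AI deployment in a vendor's stack.
# Field names mirror the checklist in the text; they are not a standard schema.
@dataclass
class AIGuardrail:
    system: str                # where the model runs (vendor, product, component)
    training_data: str         # what it was trained on, per vendor disclosure
    data_scope: list[str]      # which of your data classes it may touch
    outputs_logged: bool       # is every AI-assisted output auditable?
    human_final_call: bool     # must a person sign off before the output is used?

def violations(g: AIGuardrail, approved_scopes: set[str]) -> list[str]:
    """Flag the gaps the panel calls out: untracked data,
    no audit trail, no human on the risk decision."""
    issues = []
    for scope in g.data_scope:
        if scope not in approved_scopes:
            issues.append(f"{g.system}: touches unapproved data class '{scope}'")
    if not g.outputs_logged:
        issues.append(f"{g.system}: AI outputs are not logged for audit")
    if not g.human_final_call:
        issues.append(f"{g.system}: no human makes the final call")
    return issues

# Usage: register what you know, then report the gaps
# instead of blocking everything pending committee review.
inventory = [
    AIGuardrail("crm-assistant", "vendor-proprietary corpus",
                ["contact_data", "support_tickets"],
                outputs_logged=True, human_final_call=False),
]
for g in inventory:
    for issue in violations(g, approved_scopes={"contact_data"}):
        print(issue)
```

The design point: a gap report keeps systems running while surfacing exactly what the governance council needs to decide, rather than stalling on every model update.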
On the team side, Bob poses the question: Should we aim to automate TPRM 100%? You could try, but the outcomes risk being shallow or misleading. The smarter move is to automate the repetitive tasks inside TPRM and keep human judgment on the risk decisions that matter.
What you can and should automate are the repetitive, point-in-time tasks that slow analysts down. Automating questionnaire management is fair game. The best bet is to keep humans in the loop and use AI to gather evidence: retrieve policies, pull control-test artifacts, and draft responses with citations, then have an analyst validate and sign.
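As a sketch of that human-in-the-loop pattern: the AI retrieves evidence and drafts a cited response, and nothing ships without an analyst’s signature. The helpers below (`retrieve_evidence`, `draft_with_citations`, `analyst_review`) are hypothetical stand-ins for whatever retrieval and LLM tooling a team actually uses; none of them are named in the episode.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    question: str
    answer: str
    citations: list[str]            # policies / control-test artifacts cited
    signed_off_by: Optional[str] = None

def retrieve_evidence(question: str) -> list[str]:
    # Assumption: look up relevant policies and control-test artifacts.
    return ["policy/access-control.pdf#p4", "soc2/CC6.1-test-artifact.xlsx"]

def draft_with_citations(question: str, evidence: list[str]) -> Draft:
    # Assumption: an LLM drafts the response, citing each piece of evidence.
    answer = f"Drafted response to: {question} (see cited evidence)"
    return Draft(question, answer, citations=evidence)

def analyst_review(draft: Draft, analyst: str, approve: bool) -> Draft:
    # The human keeps the risk decision: no citations, no review; no signature, no answer.
    if not draft.citations:
        raise ValueError("No evidence cited; draft cannot be reviewed")
    if approve:
        draft.signed_off_by = analyst
    return draft

# Usage: AI does the drudge work, the analyst owns the call.
q = "Do you enforce MFA for all administrative access?"
draft = draft_with_citations(q, retrieve_evidence(q))
final = analyst_review(draft, analyst="analyst-01", approve=True)
print(final.signed_off_by, final.citations)
```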
In the research trenches, that approach is already paying off. Ferhat describes how automating the repetitive parts of vulnerability analysis—while keeping a person on the decision—cut cycle time from roughly a day to about two hours, freeing the team to chase higher-value work, including AI-specific risks like data poisoning and prompt injection.
The conversation lands on where AI will actually bend the curve over the next twelve months, and the theme is consistent: use AI to accelerate analysis and action, not to abdicate accountability.
Key takeaways:
- Assume every vendor has shipped AI somewhere in its stack; require disclosure instead of blanket blocking.
- Governance means guardrails (data boundaries, acceptable-use limits, logging, human sign-off), not a committee re-approving every model update.
- Automate the repetitive, point-in-time tasks like questionnaire management and evidence gathering; keep analysts on the risk decisions that matter.
- Human-in-the-loop automation already pays off: vulnerability-analysis cycle time dropped from roughly a day to about two hours.
Bottom line: AI is your copilot, not your scapegoat. Assume your vendors are already using it, force clarity on where and how, and let your own team use it to move faster on the work that matters—while a human owns the call when risk is on the line.
Don’t miss an episode! Subscribe to the show on YouTube and hit the notification bell, or catch it wherever you listen to podcasts.
Next up, we’re digging into the dark side of third-party report cards. You’ll never trust a simple score again. Stay tuned.