
Rethink What's Possible in TPRM AI: Moving from Automation to Agents

Published: Nov 24, 2025
Updated: Nov 24, 2025
Author: Dr. Ferhat Dikbiyik


Introduction

Nearly every vendor today claims to have something “AI-powered.” It’s the tagline of the moment, showing up everywhere from product pages to investor decks. In the race to keep up, many companies parade their use of AI features as a signal of value and competitiveness.

But in third-party risk management (TPRM), most of what’s marketed as “AI” isn’t really intelligent at all. It’s automation powered by basic AI techniques.

AI today is often used to make TPRM workflows faster, but when a single vendor breach can disrupt an entire supply chain overnight, speed alone isn’t enough. What security teams need is foresight: intelligence that helps them see what’s coming, not just process what’s already known.

Here’s a closer look at how AI is being used in third-party risk management today, and where its capabilities are headed next.

3 Common Ways TPRM Solutions Use AI Today

Most AI capabilities in TPRM today fall into one of three categories: natural language processing, workflow automation, or simple risk scoring. 

  1. Natural language processing (NLP)

Many platforms use natural language processing (NLP) to summarize vendor documents, such as SOC 2 reports or security questionnaires. That helps parse long documents and can save hours of manual review, but NLP can only interpret what’s written. If a vendor never discloses a control or uses vague language, the AI won’t catch it. It doesn’t reason about what’s missing—it just repeats what it sees.
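
To make that concrete, here is a minimal sketch of keyword-style extraction over a report. The control names, keyword lists, and report snippet are invented for illustration, and real NLP pipelines are far richer; the point is that anything the document never states simply comes back as "not mentioned."

```python
# Minimal sketch: flag which expected controls a report explicitly mentions.
# Control names, keyword lists, and the report text are illustrative placeholders.

EXPECTED_CONTROLS = {
    "multi-factor authentication": ["mfa", "multi-factor", "two-factor"],
    "encryption at rest": ["encryption at rest", "aes-256"],
    "incident response plan": ["incident response", "irp"],
}

def extract_controls(report_text: str) -> dict[str, bool]:
    """Mark each expected control as mentioned or not in the report text."""
    text = report_text.lower()
    return {
        control: any(keyword in text for keyword in keywords)
        for control, keywords in EXPECTED_CONTROLS.items()
    }

report = "Access requires multi-factor authentication. Data is encrypted with AES-256."
print(extract_controls(report))
# {'multi-factor authentication': True, 'encryption at rest': True,
#  'incident response plan': False}
# The parser only reports what the document states; a missing or vaguely worded
# control shows up as "not mentioned" and is never reasoned about or inferred.
```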

  2. Rule-based automation

Another common application is rule-based automation: sending reminders for overdue assessments, escalating follow-ups, or routing questionnaires. These rules help small teams manage large vendor ecosystems, but they rely on static inputs. They accelerate existing workflows without deepening visibility into, or understanding of, risk.
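
A rough sketch of what such static rules look like in practice appears below; the vendor records, due dates, and escalation threshold are placeholders, not any platform's actual logic.

```python
from datetime import date

# Minimal sketch of static, rule-based workflow automation.
# Vendor records, thresholds, and actions are illustrative placeholders.

assessments = [
    {"vendor": "Acme Cloud", "due": date(2025, 10, 1), "status": "pending"},
    {"vendor": "Globex Payroll", "due": date(2025, 12, 15), "status": "pending"},
]

def route_actions(today: date, escalation_days: int = 30) -> list[str]:
    """Apply fixed reminder/escalation rules to pending assessments."""
    actions = []
    for a in assessments:
        if a["status"] != "pending":
            continue
        days_late = (today - a["due"]).days
        if days_late > escalation_days:
            actions.append(f"escalate: {a['vendor']} ({days_late} days overdue)")
        elif days_late > 0:
            actions.append(f"remind: {a['vendor']} ({days_late} days overdue)")
    return actions

print(route_actions(today=date(2025, 11, 24)))
# ['escalate: Acme Cloud (54 days overdue)']
# The rules fire on fixed inputs (due dates, statuses); they speed up follow-up
# but add no new information about the vendor's actual risk.
```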

  3. Simple risk scoring

Finally, there is risk scoring. In most cases, these models are based on self-reported vendor data or point-in-time assessments. They are easy to communicate and can be helpful for prioritization, but they don’t reflect real-time changes in exposure. If a vendor adds a new dependency or one of its suppliers is breached tomorrow, that score won’t change until the next review cycle.
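
Here is a minimal sketch of that kind of point-in-time score. The questions, weights, and answers are illustrative, and real scoring models are considerably more involved.

```python
# Minimal sketch of a point-in-time, questionnaire-based score.
# Questions, weights, and answers below are illustrative placeholders.

WEIGHTS = {
    "has_mfa": 40,
    "encrypts_data_at_rest": 30,
    "tests_incident_response": 30,
}

def questionnaire_score(answers: dict[str, bool]) -> int:
    """Score 0-100 from self-reported yes/no answers at assessment time."""
    return sum(WEIGHTS[question] for question, answer in answers.items() if answer)

snapshot = {
    "has_mfa": True,
    "encrypts_data_at_rest": True,
    "tests_incident_response": False,
}
print(questionnaire_score(snapshot))  # 70
# The number is frozen at assessment time: a new dependency or a breach at one
# of the vendor's suppliers tomorrow leaves it untouched until the next review.
```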

All three are low-hanging fruit that improve efficiency, but none fundamentally change how we understand or manage risk.

Why Automation Alone Isn't Enough

The efficiencies created by AI-driven automation can help small TPRM teams dedicate more time to higher-value work. But speed without context can also paint an incomplete picture of risk.

I recently spoke with a large company whose TPRM team had just two people managing close to a thousand vendors. Automation helps them send assessments quickly, but in practice, they can only meaningfully monitor a small subset (around 30 vendors) at any given time. 

Now imagine the system sends questionnaires to all thousand vendors and 500 of them come back showing potential issues. Who follows up? Who interprets those results, investigates red flags, and decides which ones matter? (Remember, the team can only manage around 30 vendors.)

That’s the challenge: automation speeds up the process, but it doesn’t scale accountability. Some executives see AI as a way to reduce headcount, assuming the technology can fully manage vendor risk. But AI should augment, not replace, human expertise. There should always be a human in the loop—someone to guide how AI is used, validate outputs, and make the calls that data alone can’t.

If a vendor says their solution can automate third-party risk management 100% with AI, they’re wrong. We’re not there yet, and that’s okay. The goal isn’t full automation—it’s using AI to make people more effective at managing risk.

The Next Phase of AI in TPRM: From Automation to Coordinated Intelligence

The goal of TPRM was never supposed to be managing questionnaires. Yet over time, questionnaires have become the thing we manage, even though they were only ever a means of assessing risk. The goal is, and always has been, managing the risk itself.

Here are 3 ways predictive, collaborative intelligence can anticipate, investigate, and act—with humans in the loop:

  1. Predictive analysis

Imagine being able to forecast which vendors are most likely to experience a security incident before it happens. By analyzing massive datasets, predictive models can identify the vendors most likely to face an attack in the near term. At Black Kite, this predictive approach is already in motion with our Ransomware Susceptibility Index® (RSI™), which uses machine learning trained on thousands of real-world ransomware incidents to model breach likelihood.
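
As a generic illustration of the idea (and, to be clear, not the RSI methodology), the sketch below fits a simple classifier to synthetic external-signal features and incident labels; every value in it is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic illustration of likelihood modeling; NOT the RSI methodology.
# Features and incident labels below are synthetic placeholders.
# Columns: [open critical CVEs, exposed remote-access ports, credential leaks seen]
X = np.array([
    [12, 3, 1],
    [1, 0, 0],
    [8, 2, 1],
    [0, 0, 0],
    [15, 4, 1],
    [2, 1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = a ransomware incident followed

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new vendor's current external posture.
new_vendor = np.array([[9, 2, 1]])
likelihood = model.predict_proba(new_vendor)[0, 1]
print(f"estimated incident likelihood: {likelihood:.2f}")
```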

  2. Continuous monitoring

Today, many TPRM programs still rely on annual assessments. But AI can continuously observe external signals—such as new vulnerabilities, leaked credentials, and configuration changes—and connect those dots across the entire vendor ecosystem. That allows organizations to detect posture changes as they occur, not months later. Black Kite’s FocusTags™, for example, can automatically tag vendors linked to emerging threats, giving teams an instant way to spot and prioritize potential exposure across their supply chain.
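
Conceptually, that kind of tagging can be as simple as matching a new advisory against what is known about each vendor's stack. The sketch below illustrates the pattern only; it is not the FocusTags implementation, and the inventories and advisory are placeholders.

```python
# Conceptual sketch of advisory-driven tagging; not the FocusTags implementation.
# Vendor technology inventories and the advisory are illustrative placeholders.

vendor_tech = {
    "Acme Cloud": {"nginx", "openssl", "postgresql"},
    "Globex Payroll": {"apache httpd", "mysql"},
    "Initech CRM": {"openssl", "redis"},
}

def tag_exposed_vendors(advisory: dict) -> dict[str, str]:
    """Return every monitored vendor whose observed stack includes an affected product."""
    affected = set(advisory["affected_products"])
    return {
        vendor: advisory["tag"]
        for vendor, stack in vendor_tech.items()
        if stack & affected
    }

advisory = {"tag": "CVE-2025-XXXXX", "affected_products": ["openssl"]}
print(tag_exposed_vendors(advisory))
# {'Acme Cloud': 'CVE-2025-XXXXX', 'Initech CRM': 'CVE-2025-XXXXX'}
# Re-running this on every new advisory keeps the exposure view current instead
# of waiting for the next annual assessment.
```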

  3. Non-obvious relationship mapping

AI also has the potential to reveal connections between third parties that humans often miss. Vendors don’t operate in isolation—they share code libraries, cloud providers, and downstream partners. By mapping hidden interdependencies, AI can surface systemic risk: not just who’s vulnerable, but how that vulnerability could flow through the supply chain.
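
A toy version of that mapping looks like the sketch below. The dependency data is invented, but it shows how shared components turn separate vendor findings into one systemic picture.

```python
from itertools import combinations

# Toy sketch of surfacing non-obvious links via shared dependencies.
# The dependency data below is invented, not observed intelligence.

vendor_dependencies = {
    "Acme Cloud": {"AWS us-east-1", "log4j", "Stripe"},
    "Globex Payroll": {"Azure", "log4j"},
    "Initech CRM": {"AWS us-east-1", "Stripe"},
}

def shared_exposure() -> dict[frozenset, set[str]]:
    """Map each vendor pair to the providers or components they both rely on."""
    links = {}
    for (v1, deps1), (v2, deps2) in combinations(vendor_dependencies.items(), 2):
        common = deps1 & deps2
        if common:
            links[frozenset({v1, v2})] = common
    return links

for pair, shared in shared_exposure().items():
    print(f"{' & '.join(sorted(pair))} share: {', '.join(sorted(shared))}")
# Acme Cloud & Globex Payroll share: log4j
# Acme Cloud & Initech CRM share: AWS us-east-1, Stripe
# A compromise of one shared dependency now reads as a single systemic event
# touching several vendors, not a set of unrelated findings.
```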

These aren’t isolated AI features—they’re the foundation of a new way to manage third-party risk: rapid risk detection and response.

Just as cybersecurity teams use detection and response to stay ahead of threats, this model applies the same mindset to third-party ecosystems. It links every stage—risk hunting, risk intelligence, rapid response, and remediation tracking—into one ongoing cycle.

Agentic AI ties these stages together, with specialized agents collaborating across each phase. Together, they create a system that adapts and responds in real time, while humans guide priorities and remain accountable for decisions.

The Shift From Basic Automation to Agentic AI in TPRM

Managing third-party risk takes more than faster processes. It demands intelligence that can interpret what’s happening across the entire vendor ecosystem, recognize how risks are connected, and adjust as new information emerges.

That’s the idea behind agentic AI: a coordinated network of intelligent agents that can detect, assess, and respond to risk in real time. 

Each agent focuses on a specific task—such as tracking emerging vulnerabilities, analyzing vendor exposure, or managing remediation progress—but all share the same data and context. Working together, they give analysts a continuous, unified view of risk and reduce the time spent piecing information together.
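
In code terms, that pattern looks roughly like the toy below: two narrowly scoped agents reading and writing one shared context. This is an illustration of the pattern, not the actual architecture of the Black Kite AI Agent, and every vendor and threshold in it is hypothetical.

```python
# Conceptual toy of task-focused agents over shared context; not the Black Kite
# AI Agent's architecture, just an illustration of the coordination pattern.

shared_context = {
    "vendors": {"Acme Cloud": {"critical_cves": 3, "remediation_open": True}},
    "findings": [],
}

def vulnerability_agent(ctx: dict) -> None:
    """Flag vendors whose tracked critical CVE count crosses a threshold."""
    for vendor, data in ctx["vendors"].items():
        if data["critical_cves"] >= 3:
            ctx["findings"].append(f"{vendor}: elevated CVE exposure")

def remediation_agent(ctx: dict) -> None:
    """Queue analyst follow-up for flagged vendors with open remediation items."""
    followups = []
    for finding in ctx["findings"]:
        vendor = finding.split(":")[0]
        if ctx["vendors"][vendor]["remediation_open"]:
            followups.append(f"{vendor}: follow-up queued for analyst review")
    ctx["findings"].extend(followups)

# Each agent does one job, but both read and write the same context, so the
# analyst sees one connected picture rather than two disconnected outputs.
vulnerability_agent(shared_context)
remediation_agent(shared_context)
print(shared_context["findings"])
```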

AI has always been part of the Black Kite platform, but agentic AI represents the next phase of that evolution. Our newly released Black Kite AI Agent is a super agent built into the Black Kite platform that automatically investigates, assesses, and reports on third-party risk. Teams can interact directly with underlying sub-agents by asking plain-language questions or launching guided investigations through pre-built “Blueprints.” An analyst can ask, “Which vendors could be impacted by this new CVE?” and get an immediate answer.

Agentic AI and tools like Black Kite AI mark a major step forward for AI maturity in TPRM. The goal isn’t to replace people or simply automate workflows. It’s to help organizations see risk more clearly and act faster.

Where AI in TPRM Goes From Here

AI is reshaping third-party risk management, but not all “AI” deserves the label. Most tools today use basic AI to make existing workflows more efficient, not to deepen understanding. And that’s fine, as long as we’re honest about what those tools can and can’t do.

The value of AI in TPRM won't come from chasing speed or replacing human judgment. It will come from clarity: making risk easier to see, interpret, and act on. With agentic AI, teams can connect insights across vendors, spot exposure changes as they happen, and respond with confidence. And that's how organizations will move from reacting to risk to staying ahead of it.

Learn more about the Black Kite AI Agent, streamlining core workflows from assessments to incident response.