Understanding Agentic AI & Protocols: Use Cases, Variants, and Real-World Fit
TPRM is evolving. Are you prepared to navigate the rise of Agentic AI?
Written by: The Black Kite Research Group led by Müzeyyen Gökçen Tapkan, Director of Data Research
In 2025, AI stopped waiting to be asked. A new class of system, agentic AI, arrived with the ability to plan, decide, and act autonomously toward complex goals without requiring human input at every step. For TPRM and cybersecurity teams, this is not a future scenario. It is the infrastructure being built right now to manage vendor risk at a scale and speed no manual process can match.
The gap between teams that understand agentic AI and those that do not is becoming a security gap. This research paper by the Black Kite Research Group examines the foundational architecture of agentic systems, the three key protocols shaping how agents collaborate (MCP, A2A, and LCEL), and what each means for how your organization detects, assesses, and responds to third-party cyber risk.
Agentic AI Fundamentals
Why Agentic AI Represents a Fundamental Shift in Automation
Traditional AI models are reactive. They respond to a single prompt, then stop. Agentic AI systems are designed differently: they take initiative, maintain context across multiple steps, and pursue defined goals without waiting for continuous human direction. For risk management, this changes not just what is possible, but what is expected.
Agentic AI: An AI system designed to observe its environment, form a plan, take action, evaluate results, and loop autonomously until its goal is achieved. Unlike a prompt-based model that answers a question, an agentic system manages a project.
The Observe, Decide, Act Loop: The core operating cycle of every agentic system. The agent reads state from the world (observe), selects the next action toward its goal (decide), and executes while monitoring results (act). Because the agent owns this loop, it behaves more like a software actor than a chat interface.
Every agentic system is built on three core behaviors:
- Perception: Reading state from external systems (APIs, databases, documents, and live threat feeds) to inform the next decision
- Decision and Planning: Breaking a high-level goal into ordered steps, maintaining context across those steps, and adjusting the plan based on intermediate results
- Action: Executing steps, calling external tools, evaluating outcomes, and looping until the goal is achieved
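The observe, decide, act cycle described above can be sketched as a minimal loop in plain Python. Everything here is illustrative: the function names (`observe`, `decide`, `act`), the goal test, and the toy counter environment are assumptions for the sketch, not part of any specific agent framework.

```python
def run_agent(goal_reached, observe, decide, act, max_steps=10):
    """Minimal observe-decide-act loop: repeat until the goal test passes."""
    history = []  # persistent context carried across steps
    for _ in range(max_steps):
        state = observe()                # observe: read state from the environment
        if goal_reached(state):
            return history
        action = decide(state, history)  # decide: plan the next step using full context
        result = act(action)            # act: execute and capture the outcome
        history.append((action, result))
    return history

# Toy environment: count a value up to a target.
counter = {"value": 0}
steps = run_agent(
    goal_reached=lambda s: s >= 3,
    observe=lambda: counter["value"],
    decide=lambda s, h: "increment",
    act=lambda a: counter.__setitem__("value", counter["value"] + 1),
)
print(counter["value"])  # 3
print(len(steps))        # 3
```

The point of the sketch is that the agent, not the caller, owns the loop: the caller states a goal, and the agent iterates until the goal test passes or a step budget is exhausted.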
Why AI Agents Are More Than Fancy Prompts
The Capability Gap Between Prompts and Agents
Prompts and agents share the same underlying LLM technology. The difference is architecture. A prompt is a single instruction. An agent is a system that manages a workflow. The comparison below shows exactly where that gap opens, and why it matters for third-party risk management operations that require multi-source data, multi-step logic, and governed auditability.
“A prompt is a single instruction; an agent is a project manager.”
- Long-Term Memory and Context: Traditional LLM prompts have no persistent memory; each call starts fresh. Agents maintain context across steps, meaning a 50-step vendor impact analysis does not lose its context on step three.
- Multi-Step Planning: Prompts handle single-turn reasoning at best. Agents use native task graphs with ordered, trackable steps that adapt as conditions change.
- Tool and API Calling: A prompt can only describe an action; it cannot execute one. Tool use in agents is declarative: the agent discovers and calls the tools it needs without hard-coded integration logic.
- Cross-Agent Collaboration: Prompts have no mechanism for agent-to-agent communication. Agents use standardized message buses to coordinate across systems and organizational boundaries.
- Governance Hooks: Prompts offer only post-hoc log review. Agentic systems embed audit trails directly into execution, with every decision, every tool call, and every outcome logged in real time.
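Three of the capabilities above — persistent context, ordered multi-step plans, and built-in audit trails — can be illustrated together in a small sketch. The tool names (`lookup_vendor`, `check_breach`) and their return values are hypothetical stand-ins, not any real API.

```python
import json

# Hypothetical tool registry: the agent dispatches by tool name rather than
# hard-coding integration logic (the "declarative" tool use described above).
TOOLS = {
    "lookup_vendor": lambda name: {"vendor": name, "rating": "B"},
    "check_breach":  lambda name: {"vendor": name, "breached": False},
}

def run_workflow(plan):
    """Execute an ordered multi-step plan, keeping context and an audit trail."""
    memory, audit = {}, []
    for step in plan:                    # multi-step planning: ordered, trackable
        result = TOOLS[step["tool"]](step["arg"])
        memory[step["tool"]] = result    # context persists across steps
        audit.append({"step": step, "result": result})  # governance hook
    return memory, audit

memory, audit = run_workflow([
    {"tool": "lookup_vendor", "arg": "acme-cloud"},
    {"tool": "check_breach",  "arg": "acme-cloud"},
])
print(json.dumps(audit, indent=2))
```

A bare prompt has no equivalent of `memory` or `audit`: each call starts fresh and leaves nothing behind except whatever the caller chooses to log after the fact.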
The Protocols Powering Agentic AI Collaboration
The Hidden Architecture Behind Intelligent Automation
When multiple agents, tools, and data sources need to work together across platforms and organizations, they need a shared language. These are AI agent protocols: standardized rules governing how information is exchanged, how tasks are handed off, and how errors are handled.
Today, AI agents trying to work together each speak a different dialect: every framework (LangChain, AutoGen, CrewAI) uses its own format. Several standards have emerged to address this: MCP (Anthropic), A2A (Google), ANP (Open Source Community), ACP (IBM), and LCEL (LangChain). Each addresses a different layer of the coordination problem.
Model Context Protocol (MCP): Developed by Anthropic. A client-server protocol that connects AI agents to data sources through a single standardized interface. The server advertises available resources, tools, and prompts via JSON-RPC; the agent calls what it needs. MCP keeps data access stateless, predictable, and auditable, replacing multiple custom API integrations with one unified connection.
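Because MCP rides on JSON-RPC, its messages are easy to picture. The sketch below builds two such requests: one asking the server to list its advertised tools, and one calling a tool. The envelope shape follows JSON-RPC 2.0 and the MCP `tools/list` / `tools/call` method names; the tool name `get_vendor_rating` and its arguments are hypothetical.

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request of the kind an MCP client sends to a server."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# 1. Ask the server what tools it advertises.
list_tools = jsonrpc_request(1, "tools/list", {})

# 2. Call one of them (the tool name and arguments are illustrative).
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "get_vendor_rating",
    "arguments": {"vendor": "acme-cloud"},
})

print(json.dumps(call_tool, indent=2))
```

The discovery step is what makes MCP "stateless and predictable": the agent never needs baked-in knowledge of a data source's API, only of the protocol's envelope.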
Agent-to-Agent (A2A) Protocol: Developed by Google. Enables AI agents to communicate directly with each other as peers across different platforms, frameworks, and organizations. Unlike MCP, which connects agents to data, A2A connects agents to agents. Particularly relevant for Nth-party visibility scenarios where risk extends beyond direct vendor relationships into multi-tier supply chains.
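The agent-to-agent pattern can be sketched without any wire format at all: two peers exchange messages while keeping their internal logic private. This is a conceptual analogue of A2A's peer model, not its actual message schema; the agent names, handler logic, and indicator values are all invented for the example.

```python
class Agent:
    """Toy peer agent: exchanges messages without exposing internal state."""
    def __init__(self, name, handler):
        self.name = name
        self._handler = handler        # internal logic stays private to the peer
        self.inbox = []

    def send(self, peer, payload):
        """Deliver a message to a peer and return its reply."""
        peer.inbox.append({"from": self.name, "payload": payload})
        return peer.handle(payload)

    def handle(self, payload):
        return self._handler(payload)

# A detection agent at one institution hands indicators of compromise to an
# attribution agent at another; only the message crosses the boundary.
attribution = Agent("attribution", lambda iocs: {"campaign": "apt-x", "iocs": len(iocs)})
detector = Agent("detector", lambda _: None)

finding = detector.send(attribution, ["198.51.100.7", "bad.example"])
print(finding)  # {'campaign': 'apt-x', 'iocs': 2}
```

Contrast this with MCP: there the relationship is client-to-server (agent to data source), while here both parties are agents with their own goals and their own hidden internals.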
LangChain Expression Language (LCEL): LangChain’s declarative expression language for building adaptive LLM workflows. An entire pipeline (prompts, tools, memory, error handling, and conditional routing) is described in a single pipe-style syntax that LangChain compiles into an executable graph. LCEL’s core value is dynamic routing: workflows branch automatically based on intermediate results and confidence thresholds.
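LCEL's pipe-style composition can be illustrated with a small pure-Python analogue. This is not the real LangChain API — it is a minimal reimplementation of the idea that `a | b | c` composes steps into one invokable pipeline; the stage names and data are made up.

```python
class Runnable:
    """Pure-Python analogue of LCEL's pipe composition (not the LangChain API)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `a | b` composes into a new step that runs a, then feeds b.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Each stage is a plain function wrapped as a pipeline step.
fetch  = Runnable(lambda vendor: {"vendor": vendor, "score": 42})
enrich = Runnable(lambda d: {**d, "tier": "critical" if d["score"] < 50 else "routine"})
report = Runnable(lambda d: f"{d['vendor']}: {d['tier']}")

pipeline = fetch | enrich | report       # pipe-style syntax, as in LCEL
print(pipeline.invoke("acme-cloud"))     # acme-cloud: critical
```

The declarative payoff is that the pipeline is a value: it can be inspected, branched, or swapped out without touching the individual stages.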
Additional Emerging Protocols: ANP (Open Source) targets decentralized agent networking. ACP (IBM) targets large-scale, enterprise-governed deployments. These are not competing tools; they are different gears in the same engine, each solving a different layer of the coordination puzzle.
Real-World TPRM Use Cases
The research presents three concrete scenarios drawn from TPRM operations, one per protocol, to show exactly where each delivers the most value and why the alternatives fall short.
MCP in Action: Vendor Breach Impact Intelligence
It is 3 AM and a major cloud provider has suffered a data breach. Your TPRM team needs to immediately identify which of your 500+ vendors are affected, assess each vendor’s current risk posture, and quantify financial exposure before markets open.
The traditional approach requires:
- Querying six separate vendor databases with incompatible APIs
- Pulling Ransomware Susceptibility Index® (RSI™) scores and security ratings from multiple providers
- Correlating financial data and compliance records by hand
- 4 to 6 hours of manual work while risk compounds in the background
An MCP-enabled agent replaces all six separate integrations with one unified query, aggregating FocusTags® threat intelligence alongside vendor security data, financial records, and compliance information simultaneously and delivering a complete impact assessment in seconds. MCP’s “universal connector” approach standardizes access across every data source into a single interface, eliminating the manual correlation bottleneck entirely.
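The "one query, many sources" pattern can be sketched as a single client function fanning out over a registry of sources and merging the results — a stand-in for MCP's unified-connector role. The source names and returned fields are hypothetical.

```python
# Illustrative only: one client interface fronting several data sources,
# standing in for MCP's unified connector. Names and values are hypothetical.
SOURCES = {
    "threat_intel": lambda v: {"tags": ["breach-exposure"]},
    "security":     lambda v: {"rating": "C"},
    "financial":    lambda v: {"annual_spend": 120000},
    "compliance":   lambda v: {"soc2": True},
}

def assess_vendor(vendor):
    """One call fans out to every registered source and merges the results."""
    report = {"vendor": vendor}
    for name, fetch in SOURCES.items():
        report[name] = fetch(vendor)   # replaces one bespoke integration each
    return report

impact = assess_vendor("acme-cloud")
print(impact["security"]["rating"])  # C
```

Adding a seventh source means registering one more entry, not writing and maintaining another bespoke integration — which is the operational argument for MCP in the breach scenario above.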
A2A in Action: Cross-Organizational APT Detection
An advanced persistent threat (APT) is targeting the financial sector. Evidence surfaces simultaneously at multiple institutions, with each organization seeing only its own piece of the picture. Traditional approaches leave each institution to respond in isolation.
The A2A-enabled response unfolds in coordinated steps:
- A Network Behavior Agent at one institution detects anomalous traffic and sends indicators of compromise to a shared Attribution Agent
- A Malware Analysis Agent at a second institution analyzes related samples
- A Campaign Correlation Agent synthesizes findings across all participants
- All contributing agents receive coordinated threat intelligence in real time, without any institution exposing its internal systems
A2A is the right protocol here because this scenario requires specialized agents to collaborate across organizational boundaries. MCP would only handle data access within a single organization. Only A2A enables secure, peer-to-peer coordination between independent agents.
LCEL in Action: Dynamic Threat Routing by Confidence
A continuous monitoring pipeline processes threat signals across a large vendor portfolio at once. Not every signal warrants the same response, and LCEL handles the routing automatically based on what it finds.
- Medium-confidence signals: Flow through the standard sequential path of context enrichment, correlation, and attribution analysis
- High-confidence APT detections: Trigger parallel processing across context enrichment, attribution analysis, and impact assessment simultaneously
- Aggregated findings above the risk threshold: Generate automated SOC escalation and deploy blocking rules without manual intervention
LCEL is the right protocol here because the workflow logic must adapt dynamically to the nature of each threat. MCP would provide data access but no routing logic; A2A would coordinate agents but not manage the conditional workflow. LCEL’s declarative pipeline syntax makes complex, multi-branch logic readable and maintainable as threats evolve.
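The confidence-based branching described above can be sketched in a few lines. The thresholds (0.9, 0.5, 0.8) and stage names are illustrative assumptions, not values from any real deployment, and the "parallel" fan-out is shown sequentially for brevity.

```python
def route(signal):
    """Route a threat signal by confidence, mirroring the branching above."""
    c = signal["confidence"]
    if c >= 0.9:
        # High-confidence APT detection: fan out across all analysis paths.
        return ["context_enrichment", "attribution", "impact_assessment"]
    if c >= 0.5:
        # Medium confidence: standard sequential path.
        return ["context_enrichment", "correlation", "attribution"]
    return ["log_only"]

def escalate(findings, risk_threshold=0.8):
    """Aggregate findings; above the threshold, flag for SOC escalation."""
    risk = max(f["risk"] for f in findings)
    return {"escalate": risk >= risk_threshold, "risk": risk}

print(route({"confidence": 0.95}))
print(escalate([{"risk": 0.7}, {"risk": 0.85}]))  # {'escalate': True, 'risk': 0.85}
```

In a real LCEL pipeline this branching would live in the declarative graph itself rather than in an `if` ladder, which is what keeps multi-branch logic readable as new threat categories are added.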
What Agentic AI Still Cannot Do
Five Critical Gaps TPRM Teams Must Account For
The protocols covered in this research are production-ready for specific, well-defined use cases. But the research is equally clear that meaningful obstacles remain, and that understanding them is part of being genuinely prepared.
- Context Is Not Sticky Yet: Agents still lose long-term memory in dynamic, extended tasks. Multi-session workflows with complex state requirements remain an open challenge.
- Tool Use Is Not Fully Plug-and-Play: Autonomy often depends on tightly tuned prompts or rigid APIs. When underlying systems change, agent pipelines can break in ways that require manual intervention.
- Graceful Failure Is Rare: Agents struggle to recover and improvise when conditions change mid-execution. Robust fallback behavior is still being solved at the framework level.
- Collaboration Lacks Depth: Theory of mind, the ability to reason about another agent’s goals and beliefs, is still in early stages. Agents can coordinate tasks but cannot model intent the way human collaborators can.
- Security Still Has Blind Spots: Agents can inadvertently leak sensitive data, misuse tools, or be manipulated by adversarial prompts. Greater autonomy means a larger attack surface. Protocols organize the system; they do not guarantee it is secure.
Preparing Your TPRM Operations for Agentic AI
From Manual Correlation to Autonomous Risk Intelligence
Protecting your third-party risk management operations in the era of agentic AI requires building familiarity with these architectures now, before the pressure to deploy is high. Based on the use cases examined in this paper, here is how to think about where to start.
- Map Your Integration Landscape: Before selecting a protocol, inventory your existing data sources: vendor databases, security rating platforms, and compliance systems your team currently queries manually. These are your MCP candidates.
- Start with MCP for Data Aggregation: MCP delivers the most immediate operational value for TPRM teams, replacing manual multi-source correlation with a single standardized data access layer. It is the lowest-friction entry point into agentic architecture.
- Use A2A Where Cross-Organizational Coordination Matters: For risk operations involving shared threat intelligence across business units or partner organizations, A2A makes coordinated response possible without exposing internal systems. This is especially relevant for teams managing Nth-party and supply chain risk.
- Implement LCEL for Adaptive Workflow Orchestration: For monitoring pipelines that need to route high-priority threats differently from routine findings, LCEL provides the declarative orchestration layer that makes adaptive behavior manageable at scale.
Read the report, no download required.
See how the Black Kite AI Agent applies agentic AI to third-party risk management today.
