
Two CRQ Experts Walked Into a Webinar. Nobody's Heat Map Survived.

Cyber risk quantification isn't a trend. It's the only language that actually works in the boardroom.

Published

May 5, 2026

Authors

Laurie Asmus


Introduction

We invited CISOs and TPCRM professionals at every level to join two experts for a conversation that's long overdue: What does it actually take to get a board to trust your risk data?

Jack Jones is the founder of FAIR™ (Factor Analysis of Information Risk), the international standard for cyber risk quantification, and a Black Kite advisor. He built FAIR two decades ago when the industry told him quantifying cyber risk was impossible. The Federal Reserve now uses it internally. 

Bob Maley is Black Kite's Chief Security Officer, a FAIR practitioner, and the person responsible for protecting a vendor ecosystem using the same methodology he recommends to customers every day.

One built the model. The other lives with it. Between them, they've had this conversation with more boards, executives, and skeptical CFOs than most security teams will encounter in a career.

What they agreed on—without hesitation—is this: your board has never actually believed your risk dashboard. They've just been too polite to say so.

Here's what that means for your program.

Watch the highlight reel for Jack Jones's and Bob Maley's takes on boards, budgets, and why your AI is giving you garbage, or watch the full webinar replay at https://blackkite.com/crq-webinar.

The Dirty Secret About Color-Coded Risk

Boards aren't nodding because they trust your heat map. They're nodding because they trust you, and because they have no other frame of reference from which to push back.

Jones puts it plainly:

“They typically don't get them, and they typically don't place a lot of credibility in them. It's really how much they trust the person sitting across the table.”

Translation: Your red/yellow/green dashboard isn't informing decisions. It's just not being challenged.

That's a problem. Because if your board can't actually interpret your risk data, they can't allocate resources against it. They can't make the tradeoffs that protect the business. They're flying blind—and so are you.

How FAIR Was Born, and Why It Almost Didn't Survive

Jones didn't build FAIR in a lab. He built it to save his job.

As CISO of Nationwide Insurance, he was getting grilled by executives who wanted to know the ROI of his security spend. He had no answer. The prevailing wisdom of the time—that you simply cannot quantify cyber risk—wasn't cutting it with a board that spoke exclusively in dollars and cents.

So he built a model that could. When he published a white paper on FAIR in 2005, a prominent industry voice told him he should be prosecuted for criminal negligence—that the word “risk” should be stricken from the English language, and that compliance was the answer, full stop.

Twenty years later, the Federal Reserve uses FAIR internally to manage its own risk.

For a full breakdown of how the FAIR methodology works and how to apply it as a practitioner, the CISO's master guide to risk quantification is the place to start. 

Why Financial Risk Language Finally Lands

The industry has been slow, but the shift is real. Maley watched it happen firsthand, first at PayPal, running a global third-party risk program and reporting in colors, and now at Black Kite, helping customers make the leap to quantitative analysis.

The reason it's landing now comes down to one thing: executives have finally started demanding answers that match how they think.

“Boards don't talk in colors,” Maley says. “ROI isn't red.”

That's not a philosophical statement. It's a practical one. Security teams have always had to compete for budget against every other business priority. If you can't express the financial impact of a risk in the same language as a CFO, you will always lose that argument.

CRQ gives you that language. A 1-to-5 risk scale is numeric. It is not quantitative. Quantitative means financial exposure: annualized loss expectancy, frequency of loss events, magnitude in dollars. The moment you express it that way, a CFO can compare it to the cost of a control. The conversation stops being about security and starts being about the business.
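The arithmetic behind that comparison is simple, which is the point. A minimal sketch (the vendor figures and control cost below are hypothetical, chosen only to illustrate the comparison):

```python
def annualized_loss_expectancy(loss_event_frequency: float,
                               loss_magnitude: float) -> float:
    """ALE = expected loss events per year x average loss per event, in dollars."""
    return loss_event_frequency * loss_magnitude

# A vendor expected to suffer 0.2 breach events per year (one every
# five years) at an average cost of $1.5M per event:
ale = annualized_loss_expectancy(0.2, 1_500_000)
print(f"Annualized loss expectancy: ${ale:,.0f}")  # $300,000

# Now a CFO can weigh that exposure against the cost of a control:
control_cost = 120_000
print(f"Control cheaper than exposure: {control_cost < ale}")
```

Once risk is stated this way, "should we spend $120K on this control?" becomes an ordinary capital-allocation question rather than a security argument.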

CRQ at Scale: What It Actually Looks Like for Third-Party Risk

This is where theory gets real. Most organizations don't have three vendors. They have hundreds. Or thousands.

The good news: you don't need to run a full FAIR analysis on every single one. The discipline is knowing when to apply it, and that changes depending on where you are in the vendor lifecycle.

Onboarding is where you have leverage. 

Before a contract is signed, you have the ability to ask for more information, require compensating controls, or walk away entirely if the annualized loss exposure doesn't sit within your risk appetite. Once the ink is dry, that window closes.

Triage is where it becomes operationally essential. 

When a zero-day drops and a FocusTag® (cyber risk intelligence alert) surfaces across your vendor portfolio, you cannot work every exposure simultaneously. Financial risk quantification tells you which vendors represent the highest potential impact to your organization. And that's where you start.

This is FAIR at scale. Not every vendor gets the full treatment. But every decision gets a financial frame.
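In practice, the triage step reduces to sorting the affected portfolio by quantified exposure rather than by name or by score color. A sketch, using an invented portfolio (vendor names and dollar figures are hypothetical):

```python
# Each affected vendor carries a quantified annualized loss exposure in dollars.
affected_vendors = {
    "payments-processor": 4_200_000,
    "marketing-crm": 310_000,
    "hr-saas": 950_000,
    "logistics-api": 75_000,
}

# Work the highest financial impact first, not the first alert in the queue.
triage_order = sorted(affected_vendors.items(),
                      key=lambda item: item[1], reverse=True)

for vendor, exposure in triage_order:
    print(f"{vendor}: ${exposure:,.0f}")
```

The output puts the $4.2M payments processor at the top of the queue, which is exactly the prioritization a color-coded alert feed can't give you.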

The AI Problem Nobody Talks About

Let's address the elephant in the room, because everyone's asking: can AI just do this for you?

Short answer: no. Not out of the box.

Jones explains why. Large language models have been trained on years of cybersecurity content, most of which reflects exactly the red/yellow/green thinking we're trying to move away from. Ask a general-purpose AI to assess your third-party cyber risk, and you'll get a confident-sounding answer that is, in his words, “almost certainly going to be utter garbage.”

The model isn't lying. It's pattern-matching against the training data it has. And that data is full of the same color-coded noise everyone is trying to escape.

But here's the flip side: You can train these models. 

Jones has done it: building custom skill files that teach an AI the FAIR methodology, turning it into what he calls “a mini-me, only smarter.” Maley has built on top of that at Black Kite, adding organization-specific guardrails to produce results that are “surprisingly useful” for ongoing risk registry management.

The lesson isn't that AI can't do this. It's that you can't just hand it the question and walk away. The setup is the work.

Want a head start? Get Jack Jones’s Claude skill. 

During the webinar, Jones made an offer that doesn't come around often: he's sharing the actual Claude skill files he built, the same ones that turned a general-purpose AI into a FAIR-fluent risk analysis tool, free for the taking. The files are fully readable—no proprietary code, no hidden logic—just Jones's decades of FAIR expertise translated into plain language that any AI can work with.

Watch the recording to get the details on how to request them directly from Jones.

The “Good Enough” Question

If there's one thing that stops organizations from starting, it's the belief that they need perfect data before they can begin.

They don't.

Jones is direct: start quick and dirty. Define your loss event scenarios. Use the FAIR model at a higher level of abstraction—you almost never need to go to the deepest levels. Work with the data you have, and let the model reflect the uncertainty rather than pretending it doesn't exist.

“You will never have all the data,” Jones says. “You have what you have. And there are very specific ways of making use of that data that allow you to generate really good analyses.”

The goal isn't precision. It's defensibility. Walk into a board meeting, show your work, and answer the follow-up questions without flinching. That's the standard.

What Happens When It Works

Jones describes a transformation that happened at Nationwide after FAIR was in place: security went from being the uninvited guest at business meetings to being a team that couldn't staff fast enough to attend all the meetings they were being asked to join.

That's the real outcome. It’s not just better risk reporting. It’s a seat at the table that you didn't have to force your way into.

“My credibility went way up,” Jones says. “My problem became I didn't have enough people on my staff to attend all the meetings we were being invited to.”

The Train Has Left the Station

FAIR is no longer a fringe idea being defended against claims of criminal negligence. It's the direction the entire industry is heading, from major consulting firms and federal regulators to the boardrooms of major financial institutions.

“You can be on that train or under that train,” Jones says. “That's really kind of the choices we face as a profession.”

Black Kite's FAIR-based financial risk ratings bring this methodology to third-party cyber risk at scale, giving TPCRM programs the quantitative foundation they need to make decisions that hold up to scrutiny, not just dashboards that don't get challenged.

Ready to go deeper? 

Watch the full on-demand webinar featuring Jack Jones and Black Kite CSO Bob Maley, including a live discussion on FAIR-CAM, AI guardrails for risk analysis, and how to start your first CRQ program without drowning in data.

Watch the webinar on demand.