Written By: Gökçen Tapkan

Today, we’re seeing a lot of cybersecurity companies exaggerate claims of AI innovation in their marketing materials to generate buzz for product releases and updates.

It’s not necessarily a bad thing for a business to focus on AI innovation. However, the definition of “AI” has evolved over the past few decades. In the 1990s, AI referred more to rule-based computer systems than to machine learning (ML). Today, AI encompasses far more sophisticated technologies and functions, including deep learning algorithms and large language models (LLMs).

While current innovations are exciting, groundbreaking, and transformative for the cybersecurity industry, they also need to be better understood and differentiated from previous iterations of AI. 

These innovations also present an ethical grey area: some cybersecurity companies and security rating services claim to use AI to improve the value and accuracy of their solutions when, in reality, they don’t.

What Do Exaggerated AI Claims Look Like?

Over the past few years, we’ve seen several cybersecurity companies announce “new” AI-powered product features and innovations that actually perform very basic functions. Some examples include:

  • Information Matching and Mapping: Companies often refer to their use of AI for rule-based functions like asset attribution. While this technically falls under the definition of “AI” as a computer-based decision-making program, it doesn’t necessarily represent a sophisticated “new functionality.” At its core, asset attribution is simply a matching game — finding commonalities or crossovers between different pieces of data and reporting on that connection. 
  • Automation: Automation is not synonymous with AI, yet many companies claim their automation is cutting-edge AI. Automation refers to a rule-based process that completes a predefined function more efficiently than a human could manually. You can think of automation as an “If this, then that” rule-based program (see the sketch after this list).
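
To make the distinction concrete, here is a minimal, hypothetical sketch of both patterns in Python. The asset data, rules, and function names are invented purely for illustration; the point is that neither function learns anything from data.

    # A hypothetical sketch of the two "rule-based AI" patterns above.
    # All data and names are invented for illustration; note that nothing
    # in this code learns from data.

    # 1. Information matching and mapping: asset attribution as a lookup.
    known_assets = {"203.0.113.10": "acme-corp", "198.51.100.7": "acme-corp"}

    def attribute_asset(ip: str) -> str | None:
        """Attribute an IP address to an owner by finding a crossover
        between two pieces of data: a dictionary match, not ML."""
        return known_assets.get(ip)

    # 2. Automation: an "if this, then that" rule.
    def triage_alert(alert: dict) -> str:
        """Route an alert using a predefined rule. Faster than a human,
        but every decision was written by hand in advance."""
        if alert["severity"] == "critical" and alert["owner"] is not None:
            return "page-on-call"
        return "ticket-queue"

    print(attribute_asset("203.0.113.10"))  # acme-corp
    print(triage_alert({"severity": "critical", "owner": "acme-corp"}))  # page-on-call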

While both use cases are helpful within a product, they don’t incorporate more recent and sophisticated AI functionality like ML, deep learning and LLMs. When companies call these things “AI,” they give the impression that they’ve incorporated the latest technology. These claims set an expectation for the consumer that the product will be more accurate, more unique and more valuable than others on the market. But in reality, these organizations are leaning on a generous definition of “AI” and aren’t offering anything cutting-edge or new.

Spot the Marketing Fluff

As our CSO Bob Maley said in a recent e-book about artificial intelligence in third-party risk management (TPRM), “The market is saturated with AI-related products and services, each accompanied by its own set of promises and hype. While some of these offerings may be transformative, others may fall short of their claims. It is our responsibility [as CSOs] to sift through the noise, critically evaluate each solution, and identify those that truly have the potential to elevate our risk management capabilities.”

There are a few ways to conduct that critical evaluation, and a set of red flags every savvy buyer should look for when evaluating any AI-related claim or purchase.

Generic Statements

A dead giveaway of AI exaggeration is the use of generic statements about AI that don’t specify which type of AI is being used (e.g., ML, deep learning, or LLMs). If you spot general claims of AI usage that don’t dig into details, put a critical lens on the content.

No Use of Machine Learning or LLMs

If the company goes into detail but doesn’t mention any use of sophisticated AI like deep learning algorithms or LLMs, it’s likely they are simply using a more basic form of AI, such as asset mapping or automation. Again, while this technically counts as AI, it reflects a very outdated definition of the term.

No Mention of Data Creation or Curation Procedures

AI algorithms are only as good as the data used to build them. All good models should be supplemented by high-quality data and curated by industry experts who validate the accuracy of the information. If a company talks about AI but doesn’t discuss how human experts create and curate the data used to train its models, you should be wary about “junk in, junk out,” as well as how your company’s data is being utilized. In fact, when evaluating a tool for your organization, savvy buyers should ask whether the AI models will train on the company’s own data.
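
As a purely illustrative sketch of what such a curation step can look like, consider the following Python snippet; the record fields and validation flag are hypothetical and do not represent any vendor’s actual pipeline.

    # A hypothetical expert-curation gate applied before model training.
    # Field names and records are invented for illustration.
    raw_records = [
        {"finding": "open RDP port", "source": "scanner", "expert_validated": True},
        {"finding": "possible credential leak", "source": "paste-site", "expert_validated": False},
    ]

    def curate(records: list[dict]) -> list[dict]:
        """Keep only records a human expert has confirmed as accurate,
        so unvalidated "junk" never reaches the training set."""
        return [r for r in records if r["expert_validated"]]

    training_set = curate(raw_records)
    print(len(training_set))  # 1: only the validated finding survives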

Ask Yourself: Is AI Necessary?

Sometimes the most straightforward questions provide the most illuminating answers. Today, many companies attempt to apply AI to functions or solutions that simply don’t require it. When a company claims to use AI, ask yourself: could I perform this task without AI? The answer is often “yes” when a solution only uses AI to automate or speed up a process via a rule-based algorithm. In these cases, AI is helpful for optimizing a task, but it isn’t truly setting the company’s solution or service apart from competitors in the space.

This is also a good way to spot companies that “re-release” certain product features under the guise of AI. These companies repackage existing functionality with some type of AI, just to announce a release and latch onto the newsworthiness and buzz of the term.

How Black Kite Uses AI in Third-Party Risk Management

At Black Kite, when we talk about integrating AI into our platform, we mean that we leverage the most sophisticated and cutting-edge applications of AI in a way that truly impacts the quality and value of our service. Here are a few examples:

  • UniQuE™ Parser 3.0: One of our most sophisticated AI models is a large language model we developed in-house, called UniQuE™ Parser. This model is built on top of proprietary data generated by Black Kite and curated by our cybersecurity industry experts. UniQuE™ Parser originally operated as a compliance mapping tool that read documents and mapped them to various industry standards and Black Kite cybersecurity controls to reduce time spent on compliance efforts. That initial version utilized an off-the-shelf language model. More recently, we integrated an in-house natural language processing model, fine-tuned for the cybersecurity domain, into the Parser. With this update, UniQuE™ Parser now understands industry nuances and provides more intelligent automation than you can get with simple asset mapping tools.
  • Threat Intelligence Curation: Our threat intelligence curation capabilities use machine learning (including LLMs) to determine the value of collected risk information. While other solutions and security rating services collect all information on the web to determine risk, Black Kite curates supply chain threat intelligence data with a sophisticated AI that mirrors the lens through which a cybersecurity expert would evaluate the information. This AI model increases the accuracy of threat intelligence and risk predictions.
  • Ransomware Susceptibility Index® (RSI™): A portion of our Ransomware Susceptibility Index® uses machine learning (including neural networks) along with a deterministic function. This model increases the accuracy of risk predictions (see the sketch after this list).
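
To illustrate only the general pattern of pairing a learned model with a deterministic function, here is a toy Python sketch. The features, weights, and formula are entirely invented and do not represent Black Kite’s actual RSI model.

    # A toy illustration of blending a learned score with a deterministic
    # rule. All features, weights, and thresholds are invented; this is
    # not Black Kite's actual model.

    def learned_susceptibility(features: dict) -> float:
        """Stand-in for a trained model's probability output (0..1).
        A real system would call a fitted classifier here."""
        weights = {"leaked_credentials": 0.5, "open_rdp": 0.3, "phishing_history": 0.2}
        score = sum(weights[k] * features.get(k, 0) for k in weights)
        return min(max(score, 0.0), 1.0)

    def deterministic_adjustment(features: dict) -> float:
        """A hand-written rule layered on top of the learned score."""
        return 0.1 if features.get("critical_cve_exposed") else 0.0

    def susceptibility_index(features: dict) -> float:
        """Combine learned and deterministic components into one index."""
        return min(learned_susceptibility(features) + deterministic_adjustment(features), 1.0)

    print(susceptibility_index({
        "leaked_credentials": 1,
        "phishing_history": 1,
        "critical_cve_exposed": True,
    }))  # 0.8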

One of the most important differentiators of Black Kite’s AI functionality is the expert curation of information that we layer into the base of all our models. The quality and relevancy of training data can make or break an AI algorithm, which is why our commitment to quality data is essential for creating value for our customers.

Become a Savvy Cybersecurity Buyer 

Marketing fluff is not a new issue. Savvy buyers know that companies tend to overinflate their value or try to manipulate buyers via tactics like exaggeration or instilling fear, uncertainty, and doubt (FUD). When it comes to AI, keep a healthy dose of skepticism at the ready and prepare to ask questions to clarify the true value of the AI the vendor promotes. 

Ask vendors what is unique about their AI application and how it brings more value or increased accuracy to their product or service. Don’t forget to ask about data quality; it’s important to know which data the vendor uses to train the AI model and whether they curate or automatically generate that data — or even whether the model is trained on customer data.

With these questions in mind, you will be a more savvy cybersecurity buyer and avoid falling victim to exaggerated claims of AI innovation.

Want to learn more about how Black Kite uses AI to increase the value and accuracy of our third-party risk management platform? Check out our e-book about Artificial Intelligence in TPRM. 

Ultimately, securing a constantly shifting tech ecosystem comes down to getting the right cyber threat intelligence on relevant risks. Take our platform for a test drive and request a demo today.