Today, we’re seeing a lot of cybersecurity companies exaggerate claims of AI innovation in their marketing materials to generate buzz for product releases and updates.
It’s not necessarily a bad thing for a business to focus on AI innovation. However, the definition of “AI” has evolved over the past few decades. In the 1990s, “AI” referred more to rule-based computer systems than to machine learning (ML). Today, AI encompasses far more sophisticated technologies and functions, including deep learning algorithms and large language models (LLMs).
While current innovations are exciting, groundbreaking, and transformative for the cybersecurity industry, they also need to be better understood and differentiated from previous iterations of AI.
These new innovations also present an ethically questionable grey area: cybersecurity companies and security rating services claim to use AI to improve the value and accuracy of their solutions when, in reality, they don’t.
Over the past few years, we’ve seen several cybersecurity companies announce “new” AI-powered product features and innovations that actually perform very basic functions, such as rule-based automation and asset mapping.
While both use cases are helpful within a product, they don’t incorporate more recent and sophisticated AI functionality like ML, deep learning, and LLMs. When companies call these features “AI,” they give the impression that they’ve incorporated the latest technology. These claims set an expectation for the consumer that the product will be more accurate, more distinctive, and more valuable than others on the market. But in reality, these organizations are leaning on a generous definition of “AI” and aren’t offering anything cutting-edge or new.
As our CSO Bob Maley said in a recent e-book about artificial intelligence in third-party risk management (TPRM), “The market is saturated with AI-related products and services, each accompanied by its own set of promises and hype. While some of these offerings may be transformative, others may fall short of their claims. It is our responsibility [as CSOs] to sift through the noise, critically evaluate each solution, and identify those that truly have the potential to elevate our risk management capabilities.”
There are a few ways to conduct that critical evaluation and a set of red flags every savvy buyer should look for when evaluating any AI-related claim or purchase.
A dead giveaway of AI exaggeration is a generic statement about AI that doesn’t specify which type of AI is being used (e.g., ML, deep learning, or LLMs). If you spot general claims of AI usage that don’t dig into details, apply a critical lens to the content.
If the company goes into detail but doesn’t mention any use of sophisticated AI like deep learning algorithms or LLMs, it’s likely that they are simply using a more basic form of AI, such as asset mapping or automation. Again, while this is technically considered AI, it reflects a very outdated definition of the term.
AI algorithms are only as good as the data used to build them. All good models should be built on high-quality data curated by industry experts who validate the accuracy of the information. If a company talks about AI but doesn’t discuss how human experts create and curate the data used to train its models, you should be wary of “junk in, junk out,” as well as how your company’s data is being utilized. In fact, when evaluating a tool for your organization, ask whether the vendor’s AI models will be trained on your company’s own data.
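To make “junk in, junk out” concrete, here is a minimal, hypothetical sketch of the kind of expert-in-the-loop curation a vendor might describe: only records whose labels a human analyst has confirmed are allowed into the training set. The field and function names (Finding, validated_by_analyst, build_training_set) are illustrative assumptions, not any vendor’s actual schema.

```python
# Hypothetical sketch: keep only expert-validated records before any model training.
from dataclasses import dataclass

@dataclass
class Finding:
    text: str                    # raw observation, e.g., a scan result
    label: str                   # proposed label, e.g., "critical" or "benign"
    validated_by_analyst: bool   # has a human expert confirmed the label?

def build_training_set(findings: list[Finding]) -> list[tuple[str, str]]:
    """Filter out anything an analyst has not reviewed ("junk in, junk out")."""
    curated = [(f.text, f.label) for f in findings if f.validated_by_analyst]
    if not curated:
        raise ValueError("No validated records; refusing to train on unreviewed data.")
    return curated

raw = [
    Finding("Open RDP port on vendor gateway", "critical", validated_by_analyst=True),
    Finding("Expired TLS certificate on marketing site", "critical", validated_by_analyst=False),
]
print(build_training_set(raw))  # only the analyst-confirmed record survives
```

Asking a vendor where a validation step like this happens, and who performs it, is often enough to separate substance from spin.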
Sometimes the most straightforward questions provide the most illuminating answers. Today, many companies attempt to apply AI to functions or solutions that simply don’t require it. When a company claims to use AI, ask yourself: does this task actually require AI? The answer is often “no” when a solution only uses AI to automate or speed up a process via a rule-based algorithm. In these cases, AI is helpful for optimizing a task, but it isn’t truly setting the company’s solution or service apart from competitors in the space.
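To illustrate that distinction, here is a hypothetical, simplified contrast (not any particular vendor’s code): a fixed rule-based check that a marketer might still label “AI,” next to a classifier that actually learns its decision boundary from labeled examples. The feature names, thresholds, and toy data are assumptions made up for the example; the learned model shown is scikit-learn’s LogisticRegression.

```python
# Hypothetical contrast: hand-written rules vs. a model that learns from data.
from sklearn.linear_model import LogisticRegression

def rule_based_risk_flag(open_ports: int, days_since_patch: int) -> bool:
    """Fixed if/else logic: useful automation, but nothing here is learned."""
    return open_ports > 10 or days_since_patch > 90

# A learned alternative: fit a classifier on (toy) historical examples.
X = [[3, 10], [15, 5], [4, 120], [20, 200]]   # [open_ports, days_since_patch]
y = [0, 1, 1, 1]                               # 0 = low risk, 1 = high risk
model = LogisticRegression().fit(X, y)

print(rule_based_risk_flag(12, 30))   # True, decided by a hand-written threshold
print(model.predict([[12, 30]])[0])   # prediction learned from the labeled data
```

If a vendor’s “AI” reduces to the first function, the automation may still be useful, but it is not the sophisticated, learning-based AI the marketing implies.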
This is also a good way to spot companies that “re-release” certain product features under the guise of AI. These companies repackage existing functionality with some type of AI, just to announce a release and latch onto the newsworthiness and buzz of the term.
At Black Kite, when we talk about integrating AI into our platform, we mean that we leverage the most sophisticated and cutting-edge applications of AI in ways that truly impact the quality and value of our service.
One of the most important differentiators of Black Kite’s AI functionality is the expert curation of information that forms the foundation of all our models. The quality and relevancy of training data can make or break an AI algorithm, which is why our commitment to quality data is essential for creating value for our customers.
Marketing fluff is not a new issue. Savvy buyers know that companies tend to overinflate their value or try to manipulate buyers via tactics like exaggeration or instilling fear, uncertainty, and doubt (FUD). When it comes to AI, keep a healthy dose of skepticism at the ready and prepare to ask questions to clarify the true value of the AI the vendor promotes.
Ask vendors what is unique about their AI application and how it brings more value or increased accuracy to their product or service. Don’t forget to ask about data quality; it’s important to know which data the vendor uses to train the AI model and whether they curate or automatically generate that data — or even whether the model is trained on customer data.
With these questions in mind, you will be a savvier cybersecurity buyer and avoid falling victim to exaggerated claims of AI innovation.
Want to learn more about how Black Kite uses AI to increase the value and accuracy of our third-party risk management platform? Check out our e-book about Artificial Intelligence in TPRM.