Artificial intelligence (AI) as a field and industry has been around for approximately 70 years, and yet today, demand for AI-powered cybersecurity solutions is higher than ever. The global AI cybersecurity market reached $14.9 billion in 2021, and experts expect it to hit $133.8 billion by 2030. According to the IBM Global AI Adoption Index, 35% of companies reported using AI in their organizations in 2022, and an additional 42% reported exploring it.

In addition to the buzz caused by ChatGPT, a few factors are driving the surging demand. AI no longer needs high-end servers and expensive processors to function, which means that more organizations can adopt the technology. AI also works effectively with smaller datasets and can automate tasks like asset discovery and vulnerability management. These new use cases improve team efficiency and help make up for skills gaps and ongoing staff resourcing issues.

The relationship between AI and cybersecurity, however, is complicated: AI is both an asset and a risk to organizations. As threat actors adopt more sophisticated, AI-powered methods of attack, organizations must “fight fire with fire” by using AI to power their cyber defenses.

Moving forward, cybersecurity and AI/ML teams must work together to develop effective security solutions. As AI evolves, it’s important to examine the relationship between AI and cybersecurity from the perspectives of both security specialists and threat actors. Organizations must also consider how AI relates to third-party risk and take the necessary steps to improve their security posture against developing threats.

What Does AI Have To Do With Cybersecurity?

Quite a lot, actually. When we talk about AI and cybersecurity, it’s a two-fold conversation: how threat actors use AI and how security teams wield the technology to combat threat actors. Each part of the conversation directly impacts the other.

AI and Cybersecurity From the Security Specialist’s Perspective

Cybersecurity experts are excited about AI — it can be a powerful, proactive defense mechanism. Here are a few of the ways that security specialists utilize AI:

  • Predictive AI: Specialists use predictive analytics in AI tools to identify potential threats before they materialize. Predictive AI is among the most advanced types of AI currently available: these self-supervised systems can analyze rapidly changing environments while simultaneously learning to respond to threats more effectively in the future. Cybersecurity teams can use predictive AI to automate threat monitoring, reducing the likelihood of successful cyberattacks.
  • AI for remediation: AI-powered cybersecurity solutions help organizations develop more effective remediation strategies, and leveraging AI for automated remediation can dramatically reduce a team’s mean time to resolve (MTTR). AI analysis can detect breaches faster and advise teams on the most effective response, and automated workflows can even contain breaches without human intervention. Reducing response time can save organizations considerable money: IBM found that identifying and containing a breach in less than 200 days saves organizations an average of $1.12 million.
  • Simplified AI in LLMs: AI tools built on large language models (LLMs) like ChatGPT often rely on simple heuristics rather than complex ones. Besides being difficult to maintain, complex heuristics open LLM-based systems to data poisoning and sponge attacks. Security specialists find that opting for simple, heuristic-driven AI tools offers better security and a smaller attack surface, which is easier to analyze and monitor.
  • AI to combat malware: Security experts can train AI algorithms to recognize well-known malware and phishing attacks. Organizations use these models to detect and block malicious files, websites, and emails before they harm the organization’s network (see the sketch after this list for a simple heuristic-driven example).
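
To make the simple-heuristics idea concrete, here is a minimal sketch of a heuristic-driven phishing-URL scorer in Python. The specific features, weights, and keyword list are illustrative assumptions rather than a vetted rule set; the point is that every rule is transparent and easy to audit, which keeps the attack surface small, as described above.

```python
# Minimal sketch of a heuristic-driven phishing-URL scorer.
# The features, weights, and keywords below are illustrative assumptions.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "secure", "account"}

def phishing_score(url: str) -> float:
    """Score a URL from 0.0 (benign-looking) to 1.0 (highly suspicious)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0.0
    if host.replace(".", "").isdigit():   # raw IP address instead of a domain
        score += 0.3
    if host.count(".") > 3:               # deeply nested subdomains
        score += 0.2
    if "-" in host:                       # hyphenated look-alike domains
        score += 0.1
    if any(kw in url.lower() for kw in SUSPICIOUS_KEYWORDS):
        score += 0.2                      # credential-themed wording
    if parsed.scheme != "https":          # no TLS
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    for u in ["https://example.com",
              "http://paypal-login.verify-account.example.net"]:
        print(f"{u} -> {phishing_score(u):.2f}")
```

Because each rule is a single, human-readable check, an analyst can reason about exactly why a URL was flagged, which is much harder to do with an opaque, complex model.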

AI From the Threat Actor’s Perspective

While AI has been a boon for cybersecurity specialists, it also benefits malicious threat actors. Recently, BlackBerry found that 78% of IT professionals believe successful cyberattacks credited to ChatGPT are on the horizon. The same study revealed that 49% of IT professionals believe threat actors will use the AI chatbot to improve their technical knowledge and develop their skills. So far, security specialists have seen an increase in the following:

  • Romance scams: In a 2023 report by security company McAfee, researchers found that 7 in 10 people could not tell whether a love letter was written by a human or by AI. Why does it matter if AI can woo the average reader? Romance scams cause some of the steepest financial losses of any online crime: the Federal Trade Commission reported $547 million lost to romance scams in 2021.
  • Squirrely malware: Threat actors use AI to create malware that evades traditional detection methods. AI algorithms can generate polymorphic malware, which uses an encryption key and a mutation engine with self-propagating code to continually change its shape and signature. Cybersecurity researchers have already proven that AI tools like ChatGPT can create new strains of polymorphic malware, and the bot can even produce highly advanced malware whose files contain no overtly malicious code. Malware is already a significant concern: AAG estimates that there were 236.1 million ransomware attacks globally in the first half of 2022, accounting for roughly 20% of all cybercrimes.

Other ways threat actors use AI include data poisoning, sponge attacks, and authentication bypassing.

AI, Cybersecurity, and Third-Party Risk

In addition to dealing with increased risks from threat actors, organizations must consider how AI impacts their third-party risk, as the technology introduces new challenges when assessing vendors and partners. Many companies adopt AI tools without thoroughly assessing how those tools affect cyber risk, so organizations should carefully evaluate how their third-party vendors and partners use AI.

The Security Risks of Using AI Tools

To protect itself, an organization needs to account for security weaknesses in its own AI tools and in those of its third-party vendors. For example, many AI tools gather, store, and process significant amounts of data. Without proper cybersecurity measures, threat actors can easily access that vulnerable data.

Another threat to AI tools is model poisoning. Model poisoning occurs when malicious data or code infiltrates an AI system, most often by corrupting the data the model is trained on. The corrupted system then produces erroneous or malicious results.
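
As an illustration, here is a minimal sketch of one common poisoning technique, label flipping, built with scikit-learn on a fully synthetic dataset. The dataset, model, and poisoning fractions are assumptions chosen for the demo; real model poisoning targets production training pipelines, but the effect shown is the same: a small fraction of corrupted training labels measurably degrades the model.

```python
# Minimal sketch of label-flipping data poisoning on a synthetic dataset.
# Real model poisoning targets production training pipelines; this toy
# example only shows how corrupted labels degrade a model's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on labels with `flip_fraction` of them flipped; report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flips = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)      # evaluate on clean test labels

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} poisoned labels -> test accuracy "
          f"{accuracy_with_poisoning(frac):.3f}")
```

Running the sketch shows accuracy falling as the poisoned fraction grows, which is why vetting training data sources, both your own and your vendors', matters.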

Failing to consider how your third-party vendors secure their AI tools can expose your organization to increased concentration risk and cascading risk. If a third-party vendor experiences a data breach in its AI tools, how will it impact your company? Is your data in jeopardy? If a partner suffers a significant business disruption, will you be left without an essential service?

The Benefits of Using AI to Protect Against Third-Party Risk

On the other hand, AI can improve your organization’s ability to assess third-party risk efficiently and accurately. Your organization can leverage AI-powered tools to automatically gather and analyze data about potential vendors and gauge their level of risk. These tools use pattern recognition and large-scale data processing to surface deeper insights into potential risks, supporting more informed decisions about a third-party vendor. AI-driven systems can also continuously monitor your existing vendors for changes in their risk profiles, alerting you to emerging threats with enough time to take action and protect your organization.
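
As a simplified sketch of that kind of continuous monitoring, the Python example below compares each vendor’s latest risk score against a stored baseline and raises an alert when the score jumps past a threshold. The vendor names, scores, 0-100 scale, and alert threshold are all hypothetical; production risk platforms draw on far richer signals.

```python
# Minimal sketch of continuous vendor risk monitoring: compare each vendor's
# latest risk score against a stored baseline and flag significant jumps.
# The vendors, scores, 0-100 scale, and threshold are hypothetical.
from dataclasses import dataclass

ALERT_THRESHOLD = 15  # alert when a risk score rises by 15+ points

@dataclass
class VendorRisk:
    name: str
    baseline_score: int   # score from the last full assessment
    current_score: int    # score from the latest automated scan

def check_for_alerts(vendors: list[VendorRisk]) -> list[str]:
    """Return alert messages for vendors whose risk rose past the threshold."""
    alerts = []
    for v in vendors:
        delta = v.current_score - v.baseline_score
        if delta >= ALERT_THRESHOLD:
            alerts.append(f"ALERT: {v.name} risk rose {delta} points "
                          f"({v.baseline_score} -> {v.current_score})")
    return alerts

if __name__ == "__main__":
    portfolio = [
        VendorRisk("Acme Cloud", baseline_score=22, current_score=48),
        VendorRisk("DataCo", baseline_score=35, current_score=37),
    ]
    for alert in check_for_alerts(portfolio):
        print(alert)
```

The design choice worth noting is alerting on the change in score rather than its absolute value: a vendor whose risk profile suddenly shifts often deserves attention sooner than one with a stable, known level of risk.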

How Does Black Kite Use AI to Combat Third-Party Risk?

Black Kite leverages AI in several ways to enhance its cyber intelligence capabilities and provide comprehensive risk assessments to its customers. Here are some key aspects of how Black Kite uses AI:

  • AI-driven compliance correlation: Black Kite uses AI algorithms to estimate a company’s compliance level. This estimation considers a company’s ability to adhere to regulations and standards like NIST 800-53, ISO 27001, PCI DSS, and HIPAA. Knowing a vendor’s compliance level is necessary for organizations operating in highly regulated industries like finance and healthcare.
  • Ransomware Susceptibility Index®: Organizations utilizing Black Kite’s RSI™ can estimate a third-party vendor’s likelihood of a ransomware attack based on AI-powered data analysis of common indicators like location, industry, and annual revenue.
  • Cyber-aware AI language model: Black Kite is also developing an AI language model to enhance its compliance tools. The model will incorporate AI-driven language understanding to analyze and interpret complex cybersecurity documentation more effectively, resulting in a more accurate (and efficient) compliance risk assessment process.

The Future of AI and Cybersecurity

AI is developing rapidly, and while no one has a crystal ball to see what the future holds, Dr. Ferhat Dikbiyik, Head of Research at Black Kite, offers some predictions on the evolution of AI and cybersecurity:

  • AI-enabled cyber defense: AI technologies will be integral to most organizations’ cyber defense strategies. These technologies will enable faster threat detection and response, and improve the prediction of potential attacks. 
  • AI-powered security automation: Already, we’re seeing AI automate various security tasks (vulnerability management, threat remediation, etc.). This trend will continue, with AI playing an even more significant role in threat hunting and other aspects of cybersecurity. Consequently, security professionals can focus on high-priority tasks and strategic decision-making.
  • Public-private partnerships and global cooperation: As AI-powered cyber threats grow in scale and sophistication and the public and private sectors grapple with the ethical usage of AI and its regulation, we’re likely to see organizations and regulatory bodies collaborating globally. Organizations, governments, etc., will need to share knowledge across industries to develop the best practices for defending against cyber threats and utilizing AI technologies.

As your organization navigates AI technologies, it’s important to stay ahead of changing regulations and ensure that your company uses AI-powered tools transparently and responsibly. And, in the battle against AI-driven cyber attacks, remember to consider your third-party partner’s susceptibility and your own.

Ready to tap into AI-powered third-party risk assessments?

Let’s begin with your free RSI™ rating!