The statistics behind the surging popularity of AI tools are staggering: ChatGPT acquired 1 million users within days of its November 2022 launch. Social media juggernauts Facebook and Instagram took months to reach the same milestone.

On the one hand, the popularity of artificial intelligence (AI) tools like ChatGPT and DALL-E is exciting and raises interesting questions (and controversy) around the technology’s application. On the other hand, some portion of ChatGPT’s purported 100 million users are almost certainly threat actors leveraging the technology to improve the sophistication and impact of their attacks. And threat actors aren’t just leveraging large language models (LLMs) like ChatGPT; they’re also using a variety of AI tools, from machine learning (ML) algorithms to generative models.

As AI evolves, security professionals should “fight fire with fire” by leveraging the technology in their own security programs and studying how threat actors weaponize AI tools. Increased awareness of AI-assisted cyber attacks helps organizations adapt to the changing threat landscape and mitigate the potential effects of attacks.

Let’s examine how threat actors utilize AI tools in their attacks and how your organization can protect itself.

The Tea on ChatGPT

Some AI tools require little to no development skills to use, which has raised concerns that more threat actors will try their hand at using AI to create malicious tools. 

In fact, since ChatGPT’s launch, cybersecurity professionals monitoring hacker forums have seen a rise in threat actors experimenting with the LLM to create new malware strains and attack techniques. Here are a few of the biggest examples:

Example #1: Infostealer Malware

Less than a month after the launch of ChatGPT, a threat actor experimented with the program to create a Python-based stealer — a trojan that gathers information from a system. The code searches for common file types — like MS Office documents, PDFs, and images — and uploads them to a hardcoded FTP server.

A similar infostealer malware made its way into a fake browser extension for ChatGPT users earlier this year. The threat actors collected Facebook credentials from over 40,000 users before another threat actor wiped and ransomed the database. Both examples demonstrate how easy it is to use AI tools to develop and disseminate this type of malware.

Example #2: Encryption Tools

Around the same time the ChatGPT-developed infostealer hit the cybercriminal forums, another threat actor posted a multi-layer encryption tool. They claimed the tool’s script, created with ChatGPT, was their first effort. While the tool appeared benign, it could easily be put to malicious use, such as serving as the basis for ransomware. After investigating, security researchers found that the individual behind the encryption tool’s post had limited technical skills and no apparent background as a developer.
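
For context, “multi-layer” encryption simply means encrypting data more than once with independent keys. Below is a minimal, benign sketch of the idea using Python’s widely available cryptography library; it illustrates the technique the researchers described, not the actor’s actual script, and the key handling is purely illustrative.

```python
from cryptography.fernet import Fernet

# Each layer uses its own independently generated key.
inner = Fernet(Fernet.generate_key())
outer = Fernet(Fernet.generate_key())

plaintext = b"example sensitive contents"

# Layer 1 encrypts the plaintext; layer 2 encrypts that ciphertext again.
ciphertext = outer.encrypt(inner.encrypt(plaintext))

# Decryption peels the layers off in reverse order.
assert inner.decrypt(outer.decrypt(ciphertext)) == plaintext
print("round trip OK")
```

On its own, a script like this is harmless; the concern researchers raised is how little skill it takes to produce one and repurpose it.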

Example #3: Dark Web Marketplaces

The same security researchers from our previous examples also found ChatGPT-created dark web marketplace scripts posted on known cybercriminal forums. These scripts create a platform for threat actors to trade illegal or stolen goods — everything from drugs and ammunition to data culled by infostealer malware.

When Dr. Ferhat Dikbiyik, head of research at Black Kite, reviewed the forum posts from each of these examples, he noted that the level of written English proficiency in each post was high — higher than what he’d seen in similar posts prior to ChatGPT’s release. This increased proficiency could indicate that threat actors are utilizing the LLM both to develop malicious tools and to improve their communications. Better-written posts help malicious tools spread faster through the threat actor community and make social engineering and phishing lures more convincing.

Enough About ChatGPT, What Other AI Tools Are Threat Actors Using?

ChatGPT examples are easy to come by due to the platform’s popularity and ease of access. That doesn’t mean, however, that threat actors aren’t utilizing other AI resources. Here are a few additional examples of AI tools threat actors have been known to employ:

  • Other LLMs: ChatGPT isn’t the only LLM around (looking at you, Google Bard). In addition to using these platforms to write scripts for malware and dark web marketplaces, threat actors use them to write more persuasive copy for phishing attacks. McAfee recently found that 7 in 10 people couldn’t tell whether a love letter was written by a human or by AI.
  • ML Algorithms: Threat actors can use predictive AI and ML algorithms to determine the most effective strategies for cyber attacks and to identify new software and network vulnerabilities. In 2017, security researchers demonstrated PassGAN, an ML-powered password-guessing program, showing how such tools could be misused to quickly and accurately guess user account passwords.
  • Generative Models: In the past few years, there’s been an alarming increase in the use of fake images, videos, and audio in cyber attacks. These deepfakes can transform existing content (e.g., a publicly available video or leaked audio) or create original content. VMware found that 66% of surveyed cybersecurity professionals reported seeing deepfakes used in a cyber attack in the past 12 months.

This list is by no means exhaustive. Security experts believe threat actors may use AI to improve the efficacy of their botnets and even to hijack AI tools developed for cybersecurity purposes.

It’s also important to consider how these new techniques and attack methods may increase the impact and reach of threat actors. For example, it takes an average of 277 days to identify and contain a data breach. As AI-assisted malware grows more adaptable, it becomes harder to detect with conventional methods, increasing the likelihood that these attacks inflict significant damage.

How Can I Protect My Company Against Threat Actors Leveraging AI Tools?

With the threat from malicious use of AI tools rising, organizations must adapt their defenses to mitigate the risk. Understanding how threat actors utilize AI tools is the first step (after all, you can’t protect against threats you don’t know about!). Then, armed with greater knowledge of how cybercriminals use these tools, you can take the following steps to strengthen your organization’s security posture:

  • Improve Your Cyber Hygiene: Cyber hygiene comprises the practices that maintain system health and improve online security. Good cyber hygiene includes keeping your organization’s systems patched and updated. Also, everyone with network access should use strong, unique passwords and follow the principle of least privilege (see the password-generation sketch after this list).
  • Practice Proper Data Management: AI models used in attacks often require large amounts of data to be effective. Limiting the amount of data your organization collects, anonymizing data wherever possible (see the anonymization sketch below), and securely deleting obsolete data can reduce the damage inflicted in an AI-assisted cyber attack.
  • Leverage AI Tools for Cybersecurity: The best defense against AI-assisted attacks is often AI itself. Consider investing in AI-powered threat detection and response systems, which can analyze vast amounts of data and identify and react to threats faster than human analysts (see the anomaly-detection sketch below).
  • Consider Your Third-Party Risk: Even if you follow the previous recommendations, your security plan isn’t complete until you consider how AI-assisted cyber attacks on your third-party vendors would affect your organization. Investigating your vendors’ security postures is crucial to determining whether working with them poses an acceptable level of risk for your company.
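
First, a quick illustration of one cyber hygiene basic: generating strong, unique passwords. This minimal sketch uses only Python’s standard library; the length and character set are illustrative assumptions, so adjust them to your own password policy.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a cryptographically secure random password."""
    # string.punctuation is an illustrative choice; trim it if some
    # systems reject certain special characters.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Use a distinct password per account; never reuse one across systems.
for account in ("vpn", "email", "admin-console"):
    print(account, generate_password())
```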
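
Next, a minimal sketch of the data-management idea: pseudonymizing personally identifiable fields before storage, so stolen records are far less useful to an attacker. The field names and salt handling here are illustrative assumptions, not a complete anonymization pipeline.

```python
import hashlib
import hmac
import os

# In practice, load the salt from a secrets manager; the environment
# variable and its fallback here are purely illustrative.
SECRET_SALT = os.environ.get("ANON_SALT", "change-me").encode()

def pseudonymize(value: str) -> str:
    """Replace a PII value with a keyed, irreversible digest."""
    return hmac.new(SECRET_SALT, value.encode(), hashlib.sha256).hexdigest()

# Analytics keep their utility (totals, counts) while identity is dropped.
record = {"email": "jane@example.com", "order_total": 42.50}
record["email"] = pseudonymize(record["email"])
print(record)
```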
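
Finally, a minimal sketch of what AI-powered threat detection can look like under the hood: an Isolation Forest (via scikit-learn) trained on normal login activity flags anomalous events. The features and thresholds are illustrative assumptions, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, failed_attempts, MB_transferred]
rng = np.random.default_rng(0)
normal = rng.normal([10, 1, 50], [3, 1, 20], (500, 3))
events = np.vstack([normal, [[3, 25, 900]]])  # one suspicious 3 a.m. burst

# Train on known-good activity; contamination sets the expected outlier rate.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(events)  # -1 marks an outlier, 1 marks normal

print("flagged event indices:", np.where(flags == -1)[0])
```

Commercial detection products are far more sophisticated, but the core idea is the same: model what normal looks like and surface deviations at machine speed.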

When thinking about AI tools and third-party risk, other important questions include: If an AI-assisted cyber attack compromises my vendor, how does it affect my data, supply chain, and overall security? Tools like Black Kite’s Ransomware Susceptibility Index® can tell you the likelihood of a ransomware attack on your organization and on any third-party vendors your company works with.

Conclusion: When It Comes to AI Attacks, Knowledge is Power

In our previous piece, “The Evolution of Artificial Intelligence and Cyber Risk,” we looked at how AI has been around for 70 years but has only become widely used in the past five years. 

Because of AI’s explosive growth, security specialists are learning to leverage the technology at the same time threat actors are. In other words, we’re figuring out in real time both how to harness AI and how to protect our organizations against its malicious use.

There’s no crystal ball to tell us how threat actors will leverage AI in the future. Still, we can look at how they’re using AI now and closely follow cyber attacks and cybersecurity developments to gauge how the field is changing. It’s also important to collaborate and share information about AI-assisted attacks so that private and public sector organizations can quickly identify new threats, disseminate information, and develop security responses.

Interested in learning more about your organization’s ransomware risk?

Start with a free RSI™ rating today!