AI in Ransomware: How Threat Actors Are (and Aren’t) Using AI
By Dr. Ferhat Dikbiyik, Chief Research & Intelligence Officer
Artificial intelligence (AI) is everywhere, capturing headlines, shaping product roadmaps, and dominating boardroom discussions. But in the shadowy world of ransomware, its presence is surprisingly muted.
In our 2025 Ransomware Report, we examined not only how ransomware groups evolved over the past year, but also whether AI is playing the disruptive role many expected. What we found was something of a paradox: while AI is helping attackers fine-tune parts of their campaigns, it’s not powering the attacks themselves. In fact, despite the hype, most ransomware operators are barely scratching the surface of what AI could do.
Why? Because they don’t need to.
Why Ransomware Groups Aren’t Rushing to Adopt AI
Ransomware groups are pragmatic. They aren’t chasing innovation—they’re chasing returns. And right now, the tools they’ve relied on for years work just fine. Instead of building sophisticated, AI-powered malware, attackers are using proven, low-tech tactics that maximize profit while minimizing risk.
From our analysis of over 6,000 publicly disclosed ransomware victims between April 2024 and March 2025, one insight stood out: attackers are winning without innovation. They’re going after smaller organizations with lower cyber maturity, capitalizing on widely known vulnerabilities (often months after disclosure). They’re spinning up operations with prepackaged Ransomware-as-a-Service (RaaS) kits that cost as little as $500. And they’re leaning on affiliate networks to cast a wide net.
AI isn’t a necessity in this environment—it’s a distraction.
For all the promise of AI, ransomware actors have little incentive to overhaul what’s already working. Ransom payment amounts may be declining, but the number of victims continues to rise—up 24% year over year. In the past year alone, 52 new ransomware groups emerged, bringing the total number of active groups to 96.
And these new players aren’t sophisticated. They operate without negotiation portals and they don’t bother with theatrics—they launch quickly, demand payment, and move on. Building AI-enhanced malware simply doesn’t align with that fast-turnover model.
There are also significant technical and operational hurdles. Training custom models or deploying AI-powered malware takes time, resources, and specialized expertise. And most ransomware operators aren’t software engineers—they’re opportunists looking for the easiest path to payment.
As long as the status quo keeps paying off, there’s no reason to reinvent it. For now, threat actors are still finding success the old-fashioned way.
Where AI Is Showing Up (So Far) in Ransomware
That doesn’t mean AI is completely off the table. Some groups use it behind the scenes—not to launch attacks, but to speed up the prep work. Common ways ransomware groups are using AI today include:
- Phishing and social engineering: AI is used to generate more convincing phishing emails by mimicking the language patterns and communication styles of specific targets. Creating tailored lures that blend in with corporate communications increases the likelihood of initial compromise. Ransomware group FunkSec, for example, has said it uses AI to create phishing templates, but emphasizes that AI contributes to only about 20% of its operations.
- Victim research: Threat actors use AI to research victims before launching an attack. This includes identifying high-value targets, learning about their tech stack, and even mapping out vendor relationships. AI helps automate this process, speeding up attack planning.
- Code analysis and debugging: Some groups are using AI to analyze malware code—identifying bugs, improving stability, and refining functionality.
Our research found that real-world applications of AI in ransomware are largely limited to grunt work—support tasks that make attacks more efficient, not more advanced. The real change is happening elsewhere, as the ransomware ecosystem grows larger, messier, and harder to predict.
The Threat Is Still Evolving, Just Not How You’d Expect
Even without widespread AI adoption, the ransomware landscape has never been more chaotic and hard to pin down. Today’s threat actors are less centralized, less disciplined, and more opportunistic. Many lack the twisted “code of conduct” we saw with legacy groups like LockBit and AlphV, which once claimed they wouldn’t target hospitals or nonprofits. Those boundaries have all but disappeared.
We’re also seeing attacks on small and mid-sized businesses (SMBs), especially those earning between $4M and $6M annually, which often serve as critical vendors within larger supply chains. For threat actors, they’re not just easier to compromise—they’re more likely to pay. These targets may not make headlines, but they’re incredibly valuable to attackers looking to cause maximum disruption with minimal effort.
Re-victimization is also on the rise. In 14 cases, companies were attacked by two separate ransomware groups within a single week. Sometimes, that’s because affiliates jump ship. Other times, it’s because public disclosures make victims more visible—and more vulnerable.
Attackers may still rely on simple tools, but how they operate is changing. With more fragmentation and unpredictability, tactics will eventually evolve—and AI is likely to play a bigger role when they do.
What Happens When Ransomware AI Does Get Weaponized?
It’s only a matter of time before we see more advanced AI applications in ransomware. If and when that happens, the consequences could be severe. Imagine:
- Adaptive ransomware that evolves its behavior to avoid detection, analyzing EDR logs to slip past defenses or monitoring incident response communications to adjust ransom demands
- Deepfake video or voice messages used to impersonate executives during breaches or negotiations
- Automated negotiation tools that respond to legal, financial, or reputational signals to increase leverage
- Open-source AI models uploaded to platforms like Hugging Face that contain malicious components—not traditional malware, but mechanisms for data leakage or data poisoning
These scenarios aren’t far-fetched. The underlying capabilities exist, and the incentives to adopt them are growing. As ransomware groups face mounting pressure from law enforcement and stronger organizational defenses, they’ll inevitably look to AI to stay ahead. And organizations need to be ready before that shift happens.
What Security Leaders Should Do Now
AI-powered ransomware hasn’t arrived in full force, but the conditions are forming. The surge in re-victimization, the rise of less disciplined actors, and the increasing focus on vulnerable SMBs all point to a more tumultuous threat landscape where AI could quickly shift from background tool to frontline weapon.
To prepare for this shift, security leaders should focus on:
- Investing in AI-powered defense: If attackers begin using AI in more sophisticated ways, defenders must be equipped to counter with AI-powered tools of their own. Prioritize security solutions that use AI for behavioral detection, anomaly spotting, and real-time response. Speed and precision will be critical as tactics evolve.
- Monitoring the supply chain: Many ransomware attacks now begin through third-party vendors. In fact, ransomware was the most common known attack vector in third-party breaches, accounting for nearly 67% of all incidents. Tools like Black Kite’s Ransomware Susceptibility Index® (RSI™) can identify which vendors are most likely to be targeted based on patterns observed across thousands of real-world breaches.
- Assessing vulnerabilities across the AI stack: As organizations adopt AI tools and integrate models into their workflows, a new layer of risk emerges. Vulnerabilities in the AI tech stack can offer attackers new pathways in. Security teams should begin assessing and monitoring these vulnerabilities now, before threat actors shift their focus to AI-specific weaknesses.
- Strengthening public-private partnerships: Collaborate with industry peers, government agencies, and threat intelligence networks to stay ahead of evolving ransomware tactics. Shared insights can accelerate response times and improve collective defense.
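To make the anomaly-spotting recommendation above concrete, here is a minimal sketch of the underlying idea: flag hosts whose activity deviates sharply from their own historical baseline. The host names, event counts, and threshold below are all hypothetical; production behavioral-detection tools use far richer models than a simple z-score.

```python
from statistics import mean, stdev

def flag_anomalies(history, current, z_threshold=3.0):
    """Flag hosts whose current event count deviates sharply
    from their own historical baseline (simple z-score test)."""
    flagged = []
    for host, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # flat baseline; z-score undefined, skip
        z = (current[host] - mu) / sigma
        if z > z_threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Hypothetical per-host daily failed-login counts over one week
baseline = {
    "web-01": [3, 5, 4, 6, 5, 4, 5],
    "db-01":  [1, 2, 1, 1, 2, 1, 2],
}
today = {"web-01": 6, "db-01": 40}  # db-01 spikes sharply

print(flag_anomalies(baseline, today))
```

The point is not the statistics but the posture: defenses that baseline normal behavior per asset can catch the fast, opportunistic attacks described above even when the malware itself is unremarkable.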
Preparing for What’s Next in Ransomware
Ransomware groups don’t need AI to succeed today—they’re making easy money with easy methods. But that won’t hold forever. Today’s threat environment is anything but stable, and as the ransomware ecosystem evolves, the tactics will change. When they do, AI may shift from a fringe tool to a force multiplier.
Security leaders need to plan for what’s next—not just react to what’s already here. That starts with knowing where your organization is most exposed. And in today’s ransomware environment, where attackers are always looking for the weakest link, that exposure often lies beyond your own perimeter. Using Black Kite’s RSI™, security teams can proactively identify which vendors are most prone to ransomware, prioritize risk, and create a mitigation strategy.
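As a sketch of that prioritization step, the logic often amounts to bucketing vendors into remediation tiers by susceptibility score. The vendor names, scores, and tier cutoffs below are hypothetical illustrations, not actual RSI™ values or methodology.

```python
def prioritize_vendors(vendors, high=0.6, medium=0.4):
    """Bucket vendors into remediation tiers by susceptibility score
    (0-1 scale), most susceptible first within each tier."""
    tiers = {"high": [], "medium": [], "low": []}
    for name, score in sorted(vendors.items(), key=lambda kv: -kv[1]):
        if score >= high:
            tiers["high"].append(name)
        elif score >= medium:
            tiers["medium"].append(name)
        else:
            tiers["low"].append(name)
    return tiers

# Hypothetical vendor scores
vendors = {"acme-logistics": 0.72, "payroll-co": 0.35, "cloud-hosting": 0.51}
print(prioritize_vendors(vendors))
```

A tiered view like this lets a security team spend remediation effort where a vendor compromise is most likely, rather than treating all third parties as equally risky.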
While ransomware groups are doing fine without AI for now, cybercriminals have never been static for long. Once rudimentary tactics stop delivering, more advanced capabilities will quickly become the new baseline. And when that happens, the companies with deep visibility into their third-party risk landscape will have the upper hand.
Want more insights into how ransomware is evolving? Read our full 2025 Ransomware Report: How Ransomware Wars Threaten Third-Party Cyber Ecosystems – accessible instantly, no download required – to see what’s changing and how RSI™ helps you stay ahead.