Why Generative AI Fuels Ransomware Attacks Through Language, Not Code

Josue Ledesma

September 26, 2023


Threat actors are constantly evolving their attacks, and ransomware is no exception. Ransomware attacks increased 73% in Q2 2023 compared to the previous quarter, and there is widespread fear that artificial intelligence (AI) will be a driving force in the evolution of ransomware. Given that the average ransomware payment in 2022 was $4.7M, it makes sense that threat actors would adopt new methods to reap those rewards.

If we buy into the headlines and hype, the future looks bleak. This would mean AI is poised to help threat actors vastly speed up their attacks and create scripts and ransomware so advanced that organizations will be defenseless.

However, the truth is not so dire.  

The reality is that generative AI tools – like ChatGPT – can help threat actors, but only in very specific ways. Organizations need to know how this new technology will be used in order to better prepare for and defend against these attacks.

Here’s what’s real and what’s overhyped about the relationship between generative AI and ransomware, and how strategic approaches, like leveraging managed detection and response (MDR), can help. 


The Myth: AI Will Be Used to Write Ransomware Code 

Generative AI has become a major source of fear for the cybersecurity market. ChatGPT and AI code-writing tools like Copilot have dramatically lowered the barrier to entry for writing code, scripts, and small applications. The fear here is that these tools can be used to write even more effective ransomware and sophisticated malware that can evade detection.

In reality, this concern is overblown because of the impracticality and unreliability of the tools involved.

“Ransomware groups are mirroring the practices of legitimate businesses,” says Martin Zugec, Technical Solutions Director at Bitdefender. “They can effectively run criminal enterprises using relatively straightforward techniques. So, why would threat actors be motivated to consider adopting more complex methods when simpler ones still yield results?”
 
The adoption of tools like ChatGPT can be viewed as a risky proposition. These tools continually evolve and update. Access may not be guaranteed, and their integration can introduce unnecessary complexity into an already functional ecosystem.

Just like organizations, ransomware groups need to do their due diligence if they’re going to incorporate any new tools in their operations. They need to be sure these tools don’t expose them to risk, especially given the shift in ransomware attacks. 
 
Another myth related to AI-generated malware is that the code could be dynamic and constantly changing. In reality, nearly all malware we encounter today is dynamic; the era of static malicious code is a thing of the past. Bitdefender Labs, for instance, handles approximately 400 new threats every minute, and cybercriminals routinely adapt and evolve their malware using code repositories like GitHub.

“Most ransomware groups are focused on data exfiltration rather than encrypting data,” says Sean Nikkel, Director of Threat Research and Reporting at Bitdefender. This is much more efficient and removes the risk of a decryptor failing to work, which can damage a ransomware group’s reputation.

“Ransomware groups need to work in good faith,” says Nikkel. “If they fail to decrypt a company’s files after getting their ransom, their reputation is going to take a hit and they’ll have a difficult time getting other companies to pay the ransom. They won’t trust the [ransomware] group.”

Ransomware groups are unlikely to rely on generative AI to write malware or ransomware because the risk is too high and the payoff isn’t big enough. 

The Truth: AI Will Fuel Threat Actors With Language Barriers 

Generative AI will be a boon to threat actors whose reach is limited because they don’t speak English fluently. Phishing emails are often identified by poor grammar, typos, and an overall unnatural writing style. However, with the right prompts, scammers can easily use a tool like ChatGPT to generate natural-sounding phishing emails and improve their success rate.

This means that ransomware attacks deployed via phishing emails are likely to increase and may be more successful because the human element of detection is now mitigated. As a result, low-level scammers may flood inboxes with convincing phishing emails, and non-English-speaking ransomware groups now have a scalable way to target an English-speaking audience.

“Who are the scammers today that have a barrier to entry because they aren’t strong English speakers, or are not familiar with industry-specific or role-specific terminology?” Zugec asks. “Those are the scammers that ChatGPT will help.” 

[Screenshot: When prompted, the chatbot explained how it can work with faulty English writing.]
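
With linguistic red flags fading, detection has to lean on signals a chatbot can’t polish. Below is a minimal Python sketch of that idea, assuming a raw email saved to disk: it reads the Authentication-Results header (RFC 8601) stamped by the receiving mail server and flags failed SPF, DKIM, or DMARC verdicts, plus a Reply-To domain that doesn’t match the From domain. The file name and flagging logic are illustrative assumptions, not a production filter.

```python
from email import policy
from email.parser import BytesParser

def suspicious_headers(raw_message: bytes) -> list[str]:
    """Return reasons a message looks spoofed, based on
    authentication results rather than wording."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    reasons = []

    # RFC 8601 Authentication-Results header, added by the receiving MTA.
    auth = str(msg.get("Authentication-Results", "")).lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=fail" in auth or f"{check}=none" in auth:
            reasons.append(f"{check.upper()} did not pass")

    # A Reply-To pointing at a different domain than From is a
    # classic impersonation pattern.
    from_domain = str(msg.get("From", "")).rsplit("@", 1)[-1].rstrip(">").lower()
    reply_domain = str(msg.get("Reply-To", "")).rsplit("@", 1)[-1].rstrip(">").lower()
    if reply_domain and reply_domain != from_domain:
        reasons.append(f"Reply-To domain {reply_domain!r} differs from From domain {from_domain!r}")

    return reasons

# Example usage: scan a message saved as suspect.eml (path is hypothetical).
if __name__ == "__main__":
    with open("suspect.eml", "rb") as f:
        for reason in suspicious_headers(f.read()):
            print("FLAG:", reason)
```

The point of the sketch: even a perfectly written email still has to survive protocol-level checks the sender doesn’t control, which is why layered technical controls matter more as linguistic cues disappear.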

What to Look For: Targeted Impersonation Attacks via Generative AI 

Targeted ransomware attacks may also be able to utilize ChatGPT because it can be prompted to mimic specific writing styles, fueling impersonation attacks. For example, a ransomware group can feed ChatGPT writing samples from a top executive at a company they’re targeting, like the CEO or CFO. This can be done if they obtain the executive’s emails or if the executive has some kind of online presence, which is likely.

They can then direct the AI tool to craft emails in the style of that person, making impersonated phishing emails that much more convincing. 

This use case can be especially dangerous because it shortens one of the most time-intensive elements of these targeted impersonation attacks. A ransomware group may not be able to convincingly imitate another person’s writing on its own, but with generative AI, it can do so in just a few hours. This may lead to a more scalable operation, so high-profile executives, organizations, and highly targeted industries may see an increase in these impersonation phishing attacks, which can lead to ransomware, business email compromise (BEC) attacks, and more.

Robust Cybersecurity: Your Best Defense 

While it’s concerning that these AI applications have fallen into the wrong hands, the good news is that we have the capabilities and technology to defend against these novel attacks, and many proven approaches to cybersecurity still apply.

“Organizations shouldn’t just rely on one technology,” says Nikkel. “Defense in depth is still important in defending against ransomware.” 

“A Zero Trust approach and [having] layered security are still excellent options,” adds Zugec. “These strategies are designed to be proactive and adapt to new kinds of attacks so they’ll apply here.” 

To directly address the risk of these AI-improved phishing emails, organizations should prioritize advanced detection and response technologies. Since human detection is less reliable now, companies should think about how to mitigate risk and prevent further damage post-compromise, whether that’s spotting an unauthorized user in the network or containing an active ransomware application before it can reach your most sensitive data.
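
As an illustration of post-compromise detection, here is a minimal, hypothetical Python sketch of one common heuristic: canary (decoy) files. It plants decoys, records a hash baseline, and raises an alert the moment any decoy is rewritten or deleted – both typical side effects of bulk encryption. The directory, file names, and alert handler are assumptions for the example; in practice this job belongs to your EDR or MDR tooling.

```python
import hashlib
import time
from pathlib import Path

# Hypothetical decoy location; real deployments scatter canaries across
# the shares an encryptor is likely to touch first.
CANARY_DIR = Path("/srv/shares/finance/.canaries")
CHECK_INTERVAL = 5  # seconds between integrity sweeps

def plant_canaries(count: int = 3) -> dict[Path, str]:
    """Create decoy files and record a SHA-256 baseline for each."""
    CANARY_DIR.mkdir(parents=True, exist_ok=True)
    baselines = {}
    for i in range(count):
        path = CANARY_DIR / f"Q3_budget_{i}.xlsx"  # enticing decoy name
        path.write_bytes(f"canary-{i}".encode())
        baselines[path] = hashlib.sha256(path.read_bytes()).hexdigest()
    return baselines

def watch(baselines: dict[Path, str]) -> None:
    """Alert when any canary is modified or deleted – typical side
    effects of bulk encryption sweeping through a file share."""
    while True:
        for path, digest in baselines.items():
            try:
                current = hashlib.sha256(path.read_bytes()).hexdigest()
            except FileNotFoundError:
                alert(f"Canary removed: {path}")
                return
            if current != digest:
                alert(f"Canary rewritten: {path}")
                return
        time.sleep(CHECK_INTERVAL)

def alert(message: str) -> None:
    # Placeholder: a real responder would isolate the host and page the SOC.
    print(f"[RANSOMWARE SUSPECTED] {message}")

if __name__ == "__main__":
    watch(plant_canaries())
```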

Lastly, organizations can employ MDR providers, who offer 24/7 detection and response along with proactive threat hunting across your environments. These outsourced providers deliver defense-in-depth services, giving you a comprehensive cybersecurity strategy at a fraction of the time and resources it would take to build in-house.

Don’t fall for the fear, uncertainty, and doubt (FUD) around AI. Yes, generative AI tools may be used to improve and advance existing ransomware attacks, but an organization with the right cybersecurity tools, partners, and strategy can overcome these threats.

Author


Josue Ledesma

Josue Ledesma is a writer, filmmaker, and content marketer living in New York City. He covers cyber security, tech and finance, consumer privacy, and B2B digital marketing.
