2024 Cybersecurity Predictions for AI: A Technical Deep Dive

Martin Zugec

January 09, 2024


Welcome to our 2024 Cybersecurity Forecast Series! This is the second of our four expert blogs where we unveil key predictions for AI advancements, discuss ransomware trends, navigate the geopolitical landscape, and dive into attack surface challenges in the year ahead. You can also watch our exclusive webinar that covers these insights and answers your burning questions about what 2024 holds for cybersecurity.


 

When thinking about AI applications in cybercrime, apply the principle of Occam's Razor. Attackers don't always need fancy tools; our society still struggles with basic security practices. They adjust their tooling to counter defenses and often won't bother with complex techniques, using AI to make routine tasks easier, just as we do. In 2024, the most significant risk of AI in cybersecurity may be its transformation into a buzzword: companies might skip basic protective steps and chase theoretical AI threats with unproven security products. Maintain a balanced approach by focusing on proven strategies such as defense-in-depth and multilayered security.

General Predictions

In our recent 2024 predictions for ransomware, we stated that the behavior of ransomware groups is more predictable because they are rational, financially motivated actors. Predictions about AI are quite the opposite. As with any disruptive technology, we are currently in a phase of intense innovation in both the technology and its applications, with various approaches emerging and fading again. Everything will eventually settle into a more predictable routine, but 2024 won't be that year. Brace for a bumpy road with unexpected twists and turns. Our insights are for this year only; don't hold us to them in 2025.

AI Will Serve As Job Augmentation Rather Than Job Annihilation: AI won't replace your job, but someone who knows how to use AI effectively might. AI excels at enhancing or complementing pre-existing skills, and the same limitation applies to cybercriminals. As boring as it may sound, the most practical application of AI for cybercriminals may lie in simplifying mundane tasks. Anticipate that cybercriminals will become more productive with the assistance of AI rather than reaching entirely new levels of capability. A notable exception to this rule is deepfakes, but we'll get to those later.

Even Small Offensive Progressions With LLMs Will Be Effective: In theory, defensive applications of AI should outweigh the benefits of offensive applications, especially considering the operational security concerns facing threat actors. Cybercriminals are cautious about investing too much time, money, and effort into a system that can be rapidly altered, ruining their investment. They typically favor straightforward, repeatable, and scalable playbooks, while AI operates dynamically and undergoes continual change. Systems with complex infrastructure and a dynamic nature work better when the use case is legitimate than when intentions need to be concealed from law enforcement agencies.

For cybercriminals, we expect that AI will primarily be employed in augmented social engineering attacks, where humans remain the frontline of defense. Based on past experiences, we know that despite significant efforts in training and simulations, people are a terrible first line of defense. If our capacity to recognize suspicious behavior remains the same, even minor improvements for attackers will increase their chances of success.

Hybrid Attacks Will Blur the Lines Between Precision and Broad Tactics: Cybercriminals are likely to favor leveraging AI behind the scenes, making it increasingly challenging to identify whether an attack was executed by a sophisticated threat actor or with the assistance of AI. Initially, threat actors may actively attempt to conceal the use of AI in their operations, further complicating attribution efforts for defenders. The automation capabilities of AI will enable threat actors to introduce an individualized approach to each attack, even when executed on a large scale. Is it a targeted or broad attack, driven by humans, AI, or a combination of both? Drawing a clear line will become increasingly challenging.

LLMs Will Represent the Next Stage in Globalization: What truly sets LLMs apart from older machine learning models is their ability to process natural language. LLMs such as ChatGPT are often just confident liars, but when it comes to the English language, their proficiency is truly remarkable.

Eliminating language barriers will empower cybercriminals (and others, of course) to extend their activities to a broader audience. This extends beyond linguistic considerations to include industry terms, insider acronyms, or current events that matter to the intended audience. For instance, Slavic languages lack the concept of articles, which presents a common challenge when writing in English, even for individuals with an extensive vocabulary.

In the future, what will matter is whether you can speak the same language as AI (effective prompt engineering), not necessarily the language of your target.

2024 Is Unlikely To Be the Era of Evil LLMs: When it comes to LLMs, cybercriminals have three options. They can develop a custom LLM, which demands technical expertise and resources. However, existing malicious LLMs have shown a prevalent pattern: the majority were either attempts to scam the scammers or underperformed. Many of these models compensated for their lack of actual skill by adopting "l33t hax0r" language and are better suited to generating explicit stories than producing actual malicious content. This will change, as it's getting easier to adapt highly efficient local models (for example, Mixtral) with techniques such as QLoRA. While it's currently unlikely that this approach is sufficient for constructing 'general malicious LLMs' capable of aiding in malware development, the genuine threat lies in the potential for LLMs to be directed to scam individuals, a very real concern this year.

The second option for threat actors is to jailbreak regular LLMs. Jailbreaking LLMs requires prompt engineering skills and although it can yield results, these outcomes are typically temporary. Most importantly, the expertise required in prompt engineering tends to exclude less experienced threat actors who would benefit the most from access to LLMs.

Finally, the third option is to rely on GPTs, custom versions of ChatGPT that can be created for specific purposes. GPT Builder from OpenAI offers a platform for users to customize ChatGPT for specific tasks without coding. By providing instructions and selecting functionalities, users can create tailored GPTs, such as helping with board games, teaching math, or assisting cybercriminals. The tool prioritizes privacy controls, allowing users to manage data and interactions, and emphasizes a community-driven approach to AI development.

Predicting short-term exploitation, our bet is on GPTs being targeted by cybercriminals within the next 2-3 months. However, our ultimate expectation is that local models will become the preferred approach for cybercriminals utilizing LLMs in 2024. It's becoming increasingly easy to train and deploy powerful LLMs locally, and various vendors are even offering easy-to-use, inexpensive cloud training services that may or may not include the security safeguards needed to avoid helping threat actors develop malicious LLMs.
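
To illustrate just how low that barrier has become, here is a minimal sketch of the QLoRA-style adaptation mentioned above, built on the Hugging Face transformers, peft, and bitsandbytes libraries (assumed to be installed, along with a GPU with sufficient memory). The model name and every hyperparameter are illustrative placeholders, not recommendations.

```python
# Minimal sketch: 4-bit quantized base model + LoRA adapters (QLoRA-style).
# Assumes transformers, peft, and bitsandbytes are installed and a capable GPU is present.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",              # any local causal LM could be swapped in
    quantization_config=BitsAndBytesConfig(     # load the frozen base weights in 4-bit
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(                       # only small adapter matrices get trained
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()              # typically well under 1% of all weights

# From here, a standard transformers Trainer loop fine-tunes just the adapters
# on a domain-specific dataset, which is what makes local adaptation so cheap.
```

The point is not the specific recipe but the economics: because only a tiny fraction of the weights is trained, the same workflow that lets hobbyists build helpful domain assistants also lowers the cost of building purpose-built scam chatbots.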

Although predicting AI developments for a whole year is tricky, this one is fairly certain: in the three types of attacks highlighted below, expect the language to get better and language barriers to break down, which will lead to more endpoint compromises.

Generic Social Engineering

By “generic social engineering”, we're referring to opportunistic attacks that cast a wide net, trying to trap whomever they can. In the context of these attacks, the introduction of LLMs in the current year brings about several notable developments:

Generic Social Engineering Predictions

The Royal Decree Is In, Assistance From Commoners Will No Longer Be Needed: Chatbots powered by LLMs can remove one of the historical limitations in generic social engineering. Traditionally, certain scams, such as the well-known Nigerian schemes, relied on a basic "IQ test" strategy to filter responses. Advance fee schemes often employed narratives that could be easily identified as false with minimal critical thinking. Those who responded to scammers were more likely to become unwitting victims. This strategy was a practical means to prevent overwhelming volumes of interactions. However, the availability of human-like chatbots makes this scalability barrier obsolete, enabling more technologically adept cybercriminals to enhance the credibility of advance fee schemes. This transition will unfold gradually as chatbots become easier to implement.

Deepfakes Will Become a Real Threat: Anticipate a real challenge posed by deepfakes in 2024. Convincing deepfakes require hours of audio and video material for training, a resource typically available for public figures such as politicians or influential social media personalities. The combination of advanced technology and upcoming elections in major countries creates an ideal environment for threat actors to experiment with this technology. While deepfakes will primarily fuel misinformation campaigns, there's an expectation that they will also be employed for financial gains. A surge in takeover attempts on social media platforms, coupled with the use of deepfakes to impersonate original owners—especially in crypto-related scams—is on the horizon. It’s a good time to learn how to identify deepfakes.

Targeted Social Engineering

Just as with generic attacks, anticipate a general improvement in the quality of spear-phishing attempts. Here's what we're expecting from this class of attacks:

Targeted Social Engineering Predictions

Trickling Down of High-Tier Tactics: In recent years, we've witnessed highly sophisticated attacks that required considerable time, dedication, and investment to prepare. However, LLMs have the potential to significantly lower the entry barrier for such attacks, making them accessible to a broader range of threat actors. This not only expands the pool of potential attackers but also enables already capable threat actors to execute these sophisticated attacks more frequently.

Surge in Business Email Compromise (BEC) Attacks: As outlined in our ransomware predictions for 2024, we anticipate a surge in Business Email Compromise (BEC) attacks targeting companies of all sizes, including small businesses. This trend, which started before the introduction of LLMs, will become a significant risk in 2024. Leveraging LLMs, threat actors can easily reproduce the communication style, terminology, and insider knowledge of executives by inputting their past conversations into these models.

Deepfaked CEOs: Creating deepfakes requires extensive audio and video data to be fed into the model, a resource not commonly available for ordinary individuals. However, for public figures such as company owners and CEOs, this data is far more accessible, especially considering events like quarterly earnings calls. While deepfakes have been uncommon in the past, we anticipate encountering them more frequently in 2024. The positive development is that current research is concentrating on a broader recognition approach for deepfakes, moving away from identifying each generation model individually (you can read our research on weakly-supervised deepfake localization).

Multiple Coordinated Attacks: As the barrier for sophisticated attacks decreases, we anticipate the emergence of attacks that were previously possible but deemed too labor-intensive and logistically challenging. This includes coordinating attacks on multiple companies, especially during scenarios like acquisitions/mergers or among companies belonging to the same corporate family or cartel.

Malware Development

Through 2023, we saw plenty of news about malware developed by AI, accompanied by some bombastic claims. We analyzed that malware and were not very impressed. The quality of malware code produced by AI tends to be low, making it a less attractive option for experienced malware writers, who can find better examples in public code repositories like GitHub. Can you generate malware using ChatGPT? Yes. But is it better quality than what's already available as a commodity? No.

The silver lining is that AI helps security researchers with code analysis and reverse engineering, areas where we at Bitdefender Labs have been pioneers for quite some time. While AI may be a recent arrival in mainstream media and public awareness, its integration into cybersecurity practices is far from new. We embraced machine learning nearly 15 years ago. Remarkably, many of the threat detections we create are designed for malware that hasn't even emerged yet. As a case in point, our model that detected WannaCry (a significant event in 2017) was trained back in 2014, three years before the actual attack.

In our ongoing AI research, we explore cutting-edge approaches like Genetic AI and Generative Adversarial Networks (GANs). Genetic AI, inspired by natural selection, evolves solutions over generations, much as species evolve in nature. This learning improves threat detection without manual rules or tuning. GANs, on the other hand, operate as a dynamic duo: a generator creating realistic data and a discriminator distinguishing between real and synthetic content. Think of it as an artist (the generator) creating forgeries and an art critic (the discriminator) learning to detect them.
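
To make the generator-versus-discriminator idea concrete, below is a minimal, self-contained PyTorch sketch of an adversarial training loop on a toy one-dimensional dataset. It only illustrates the mechanics of the "artist versus critic" dynamic; the layer sizes, data, and hyperparameters are arbitrary and bear no relation to production detection models.

```python
# Toy GAN sketch: a generator (the "artist") learns to mimic a simple
# real-data distribution while a discriminator (the "critic") learns to
# tell real samples from generated ones. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(                 # noise -> synthetic sample
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
discriminator = nn.Sequential(             # sample -> probability it is real
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0              # "real" data: N(3, 0.5)
    fake = generator(torch.randn(64, latent_dim))      # synthetic data

    # 1) Train the critic to separate real from generated samples.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the artist to fool the critic.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Scaled up, this same push-and-pull is one reason adversarial training is attractive for defense: the critic keeps getting practice against synthetic content it has never seen before.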

Malware Development Predictions

Don't Expect Skynet or HAL 9000: When thinking about the latest AI malware, don't imagine a complex binary skillfully maneuvering through your network to pinpoint vulnerabilities for exploitation. Instead, picture ordinary code with minor customizations, crafted in a language of your preference. Script kiddies are more likely to find this opportunity appealing than experienced malware developers.

Mediocre Mass of Malware: Contrary to popular belief, common malware today is already dynamic and morphing; our own Bitdefender Labs process around 400 unique threats every minute (!). If the promise of ChatGPT is to bring more mediocre malware, we can handle that. Threat actors will find more use for LLMs in human languages, not machine languages. It's worth noting that, just as in real life, this won't prevent some threat actors from using AI for marketing purposes.

High-Value ICS/SCADA Targets: Every year, predictions resurface about the vulnerability of critical infrastructure to cyber attacks. Until now, this threat has been somewhat mitigated by the concept of Mutual Assured Destruction (MAD). Those with the capability to exploit these systems (typically state-sponsored threat actors) are aware of the self-destructive consequences of such attacks. However, with the assistance of AI and the ability to manipulate output programming languages, SCADA/ICS systems could become accessible to a broader range of threat actors, not necessarily at a low level but certainly at a lower tier. The knowledge required for IEC 61131-3 languages is not widespread, and AI has the potential to bridge this gap, potentially expanding the pool of actors with the capability to target these critical systems.

In the realm of malicious code, security researchers tend to outpace threat actors. The real cause for concern lies in attacks that exploit human elements. It's the threats relying on human vulnerabilities that should be the primary focus of worry.

Intelligence Gathering and OSINT

Surprisingly, one of the most common questions about AI that I receive from friends and family is about its capabilities in market research. Can it be used to gather more information about a specific company, figure out if a company is potentially fraudulent, or provide a brief overview of its current financial status? These are the same inquiries that cybercriminals seek to answer during Open-Source Intelligence (OSINT) operations. Ransomware-as-a-Service (RaaS) groups frequently include dedicated OSINT teams to assess and determine appropriate ransom demands.

Intelligence Gathering and OSINT Predictions

Ransom Calculator: Much like tools such as CrunchBase, LLMs can assist in gathering information about companies. Not only in providing overviews of a company's background and financial status but also in analyzing the latest news, including mergers and acquisitions. One of the most practical and easily monetizable uses of AI is the development of an AI-driven reconnaissance tool to analyze vast amounts of data and identify high-value targets with significant financial capabilities. By intelligently selecting and targeting organizations with the highest potential to pay substantial ransoms, ransomware groups could increase their maximum ransom demands, leading to potentially higher returns on their malicious activities.

Risks of Sensitive Data Leakage: While LLMs offer powerful capabilities, current implementations often resemble a "wild west" as companies and employees rush to include AI in their workflows. The risk of sensitive data leakage presents an intriguing opportunity for threat actors during this learning phase, especially as ransomware groups continue shifting towards data exfiltration. We wouldn't be surprised to witness a major security breach in 2024 where the target of the social engineering attack was a corporate LLM.

Analyzing Public Data Leaks: Customized messaging generated by LLMs can be used to craft convincing spear-phishing attacks, leveraging information obtained from public leaks or company websites. This automation not only scales the scope of potential targets but also enhances the authenticity of malicious communications.

Sorting Exfiltrated Data: Finally, LLMs can be leveraged by cybercriminals to sift through data exfiltrated from targeted companies. The models' understanding of natural language enables them to categorize and identify sensitive information within the stolen data. Stealing terabytes of data is one thing; finding information that can be effectively used for blackmail is another challenge altogether.

Conclusion

When considering the future of AI and its role in cybercrime, it's critical to bear in mind the "anchoring bias." While considering the evolution of offensive AI, it's equally important not to overlook the continuous advancement of defensive AI. Rather than seeking elusive solutions or quick fixes, the emphasis should remain on proven strategies that have stood the test of time—such as defense-in-depth and multilayered security. Bitdefender, with over two decades of experience, has successfully used this approach, incorporating AI as a solid step forward in our ongoing commitment to protecting customers through robust research foundations.



Dive deeper into 2024 cyber threats! Our on-demand webinar, Predictions 2024: Ransomware Evolution, AI Realities, and the Globalization of Cybercrime, goes beyond the blog, featuring live discussions on ransomware, AI/LLM, and emerging threats. Ask questions, get answers, and stay ahead. 

Author


Martin Zugec

Martin is technical solutions director at Bitdefender. He is a passionate blogger and speaker who has focused on enterprise IT for over two decades. He loves to travel, has lived in Europe and the Middle East, and now resides in Florida.
