A new cyber threat? A look at AI worms

The world of technology is developing at a steady pace. Yet with every technological advance come new ways to exploit it.

25-03-2024 - 9 minute read. Posted in: case.

The growing threat of AI worms in cybersecurity

The rapid advancement of artificial intelligence (AI) has revolutionized the way we interact with technology. From chatbots to generative AI and virtual reality, innovation continues to push boundaries. However, with technological progress comes new cybersecurity threats, and one of the latest concerns is AI worms.

AI worms pose a significant threat to GenAI ecosystems – interconnected networks of applications that draw on generative AI capabilities. As these ecosystems grow and integrate more GenAI functionality, they become potential targets for the security risks associated with AI worms.

What are AI worms?

AI worms are a new breed of malware that leverages artificial intelligence to enhance its capabilities and evade detection. Unlike traditional malware, AI worms are designed to replicate themselves and spread autonomously across networks and devices. They use machine learning algorithms to learn from their environment, adapting their behavior to become more effective over time.

These intelligent malware programs can perform complex tasks without human intervention. For instance, they can identify system vulnerabilities, evade detection mechanisms, and execute highly targeted attacks. One of the most concerning aspects of AI worms is their ability to use generative AI to create highly convincing fake emails, making it easier to deceive users and infiltrate systems.

In essence, AI worms represent a significant leap in the evolution of malware, combining the power of artificial intelligence with the malicious intent of traditional cyber threats. Their ability to cause widespread damage to computer systems and networks makes them a formidable challenge for cybersecurity professionals.

AI worms: A new cybersecurity risk

A team of researchers from Cornell Tech has investigated the potential risks associated with autonomous AI systems. Their goal was to uncover vulnerabilities and highlight the dangers of malicious AI-powered cyberattacks, focusing on the GenAI components of agents within interconnected GenAI ecosystems. As part of their research, they developed a generative AI worm: a self-replicating program designed to spread malicious software across systems and steal sensitive user data.

Researcher Ben Nassi warns that AI worms represent a new type of cyberattack, one that businesses and individuals are largely unprepared for.

The role of generative AI in cyber attacks

Generative AI, a type of artificial intelligence that can create new content based on input data, is increasingly being used in cyber attacks. This technology can generate text, images, and even videos, making it a versatile tool for cybercriminals.

One of the primary ways cybercriminals use generative AI is to create highly convincing fake emails. These emails can be so realistic that they easily bypass traditional security systems, tricking recipients into divulging sensitive information or clicking on malicious links. This makes phishing attacks more effective and harder to detect.

Generative AI is also used to create malware that can evade detection by security systems. By continuously generating new variants of malware, cybercriminals can stay one step ahead of cybersecurity defenses. The use of generative AI in cyber attacks is becoming more common, posing a significant threat to cybersecurity.

Moreover, generative AI can be employed to create AI worms that replicate themselves and spread across networks and devices. This combination of self-replication and generative capabilities makes AI worms particularly dangerous, as they can adapt and evolve to become more effective over time.

The development of Morris II: A modern AI worm

Nassi, along with colleagues Stav Cohen and Ron Bitton, created an AI worm named Morris II – a nod to the infamous Morris worm that disrupted the internet in 1988.

The researchers designed Morris II to exploit automated email functions, allowing it to:

  • Steal sensitive data such as names, addresses, phone numbers, and bank credentials.

  • Hijack email assistants to extract information from user inboxes.

  • Send spam emails to propagate itself further, combining spamming with the exfiltration of personal data and posing significant risks to user privacy and security.

To ensure safety, the research team tested the AI worm within a controlled environment, ensuring that no real-world users or devices were compromised.

Types of AI worms

AI worms come in various forms, each with unique characteristics and methods of operation. Some AI worms use machine learning algorithms to learn from their environment and adapt their behavior, making them more effective at evading detection and exploiting vulnerabilities.

One type of AI worm uses generative AI to create highly convincing fake emails. These emails can deceive recipients into divulging confidential information or clicking on malicious links, facilitating the worm's spread. Another type employs social engineering tactics to manipulate individuals into compromising their security. Cybercriminals increasingly rely on psychological manipulation to exploit human vulnerabilities – read our in-depth guide to how social engineering works and how to protect yourself.

AI worms can also be classified based on their propagation methods. Some exploit network vulnerabilities to spread across multiple devices, while others may use infected files or emails to propagate. Additionally, AI worms can be designed to target specific systems, organizations, or individuals, or they can aim to cause widespread damage indiscriminately.

The level of sophistication among AI worms varies. Some are relatively simple, relying on basic algorithms and techniques, while others are highly advanced, using complex machine learning models and adaptive strategies. Regardless of their sophistication, all AI worms pose a significant threat to cybersecurity.

How AI worms work: Adversarial self-replicating prompts

Generative AI typically requires a prompt to generate a response, whether it's text, images, or code. AI worms leverage this mechanism through adversarial self-replicating prompts: the worm embeds a prompt that, when processed by a GenAI model, makes the model reproduce that same prompt in its output while also carrying out a malicious payload, creating a continuous self-replication cycle.

The research team explored two key attack methods:

  • Text-based self-replicating prompts: A specially crafted adversarial text prompt embedded in an email can manipulate an AI-powered email assistant, leading it to extract and share sensitive data – a toy simulation follows this list.

  • Image-based AI exploits: An infected image (JPG file) can carry a prompt that forces the AI to spread malicious messages. This type of attack is particularly dangerous because images can contain hidden malware, phishing links, and other exploitative elements.
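
To make the text-based variant concrete, here is a minimal toy simulation of how a self-replicating prompt could spread through automated email assistants. Everything in it is invented for illustration: the assistant is a stub, WORM_MARKER is a harmless placeholder rather than a real adversarial prompt, and no model or mail system is involved.

```python
# Toy simulation of a text-based self-replicating prompt spreading through
# automated email assistants. The "assistant" is a stub, WORM_MARKER is a
# harmless placeholder (not a real adversarial prompt), and no model or
# mail system is involved.

WORM_MARKER = "<<SELF-REPLICATING-PROMPT>>"

def mock_assistant_reply(incoming_email: str) -> str:
    """Stand-in for a GenAI email assistant that auto-drafts replies.

    A vulnerable assistant treats untrusted email text as instructions:
    if the adversarial prompt is present, the model reproduces it in its
    output. That reproduction is the self-replication step.
    """
    if WORM_MARKER in incoming_email:
        return f"Thanks for your message! {WORM_MARKER}"
    return "Thanks for your message!"

def simulate(contacts: dict[str, list[str]], patient_zero: str, hops: int) -> set[str]:
    """Propagate auto-replies through a contact graph and track who is 'infected'."""
    infected = {patient_zero}
    frontier = [patient_zero]
    for _ in range(hops):
        next_frontier = []
        for sender in frontier:
            outgoing = f"Hi! {WORM_MARKER}"  # an infected account's mail carries the prompt
            for recipient in contacts.get(sender, []):
                reply = mock_assistant_reply(outgoing)
                if WORM_MARKER in reply and recipient not in infected:
                    infected.add(recipient)  # the recipient's assistant now re-sends it too
                    next_frontier.append(recipient)
        frontier = next_frontier
    return infected

contacts = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": ["dave", "erin"]}
print(simulate(contacts, "alice", hops=2))  # e.g. {'alice', 'bob', 'carol', 'dave', 'erin'}
```

The loop structure is the same thing that made the original Morris worm dangerous in 1988; here the "exploit" is simply a model that treats untrusted input as instructions.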

Threats to the entire GenAI ecosystem

AI worms threaten the entire GenAI ecosystem because they can replicate themselves and spread across networks and devices with alarming efficiency. The generative AI inside them makes them particularly dangerous, as they can craft highly convincing fake emails and often slip past conventional security systems.

One of the primary concerns is the vulnerability of GenAI-powered applications, such as those built on retrieval-augmented generation (RAG). Because a RAG pipeline pulls stored or retrieved documents into the model's context, a single poisoned document can carry an adversarial prompt into the generation step, letting an AI worm compromise the application and, with it, the integrity of the wider GenAI ecosystem.
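
To show where that weakness sits, here is a minimal sketch of a RAG pipeline; every function name and document in it is an invented stand-in, not a real framework's API. The point is the build_prompt step, where retrieved – and therefore untrusted – text is pasted straight into the model's context.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) pipeline, showing
# the step an AI worm can abuse. Every function and document here is an
# illustrative stand-in, not a real RAG framework's API.

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    """Naive keyword retriever (stand-in for a vector database lookup)."""
    def score(doc: str) -> int:
        return sum(word in doc.lower() for word in query.lower().split())
    return sorted(store, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The risky step: retrieved documents are untrusted data, yet they are
    # pasted straight into the model's context. A stored document carrying an
    # adversarial self-replicating prompt is then read as instructions and can
    # be reproduced in the model's output (for example, an auto-drafted reply).
    context = "\n---\n".join(docs)
    return f"Context:\n{context}\n\nAnswer the user: {query}"

store = [
    "Quarterly report: revenue grew 4% in Q3.",
    "Meeting notes: project kickoff on Monday.",
    "[POISONED] fake quarterly report planted by an attacker, carrying an adversarial prompt",
]

# The poisoned document matches the query, gets retrieved, and lands in the prompt.
print(build_prompt("quarterly report", retrieve("quarterly report", store)))
```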

Ongoing research has highlighted several risks associated with the GenAI layer of agents, including dialog poisoning, membership inference, prompt leaking, and jailbreaking. These vulnerabilities can be exploited by AI worms to exfiltrate personal data, posing a significant threat to both individuals and organizations. Jailbreaking allows attackers to bypass built-in restrictions in AI models, leading to unintended behaviors and security risks – explore what jailbreaking is and why it matters.

AI worms can also target GenAI services, whether they rely on local or remote models, the GenAI layer of agents, and any GenAI model that turns input data into output. This broad range of potential targets leaves the entire GenAI ecosystem exposed, and as AI worms become more common in cyber attacks, robust security measures and effective countermeasures are essential to prevent these threats from materializing.

The future of AI worms: A looming cybersecurity crisis

Nassi and his team developed Morris II to highlight critical security gaps in AI-driven systems. Their research aims to alert major tech developers, such as Google and OpenAI, to the risks posed by generative AI vulnerabilities.

The same research highlighted risks associated with generative AI (GenAI) agents, including vulnerabilities such as dialog poisoning and prompt leaking. It aims to expose the security threats that arise as GenAI is integrated into ever more applications, and to caution against their possible exploitation by cyber attackers.

One key concern is AI software integration, where users allow AI-powered tools to take actions on their behalf – such as sending emails or booking appointments. Granting AI access to sensitive data increases cybersecurity risks, making it easier for AI worms to infiltrate personal or business accounts.

The Cornell Tech team warns that AI worms could soon become a real-world cybersecurity threat, as cybercriminals will likely attempt to exploit AI-driven platforms. Proactive security measures are essential to mitigate the risk before it escalates into a widespread issue.

How to protect against AI worms

Experts suggest several strategies to mitigate the threat of AI worms:

  • Limit AI automation: Restrict the number of actions AI software can take independently to reduce exposure to malicious exploits.

  • Monitor AI-generated activity: AI worms rely on rapid self-replication, which anomaly-detection systems can often spot. Because AI worms specifically target interconnected GenAI ecosystems, businesses should implement AI behavior monitoring tools to identify suspicious activity early – see the sketch after this list.

  • Enhance cybersecurity measures: Companies and individuals should regularly update security protocols and use advanced AI threat detection tools to stay ahead of evolving threats.
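
As a rough illustration of the monitoring idea, here is a minimal sketch of rate-based anomaly detection over an assistant's outbound actions. The class name, thresholds, and event format are assumptions made for illustration, not any vendor's real API.

```python
# Minimal sketch of rate-based anomaly detection over an AI assistant's
# outbound actions (e.g. auto-sent emails). The class name, thresholds, and
# event format are assumptions for illustration, not any vendor's real API.

import time
from collections import deque

class ActionMonitor:
    """Flags bursts of assistant actions that could indicate worm-style self-replication."""

    def __init__(self, max_actions: int = 10, window_seconds: float = 60.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events: deque[float] = deque()

    def record_and_check(self, timestamp: float | None = None) -> bool:
        """Record one outbound action; return True if the recent rate looks anomalous."""
        now = time.time() if timestamp is None else timestamp
        self.events.append(now)
        # Drop events that have fallen out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_actions

monitor = ActionMonitor(max_actions=5, window_seconds=60.0)
for i in range(8):  # simulate a burst: 8 auto-sent emails, 2 seconds apart
    anomalous = monitor.record_and_check(timestamp=i * 2.0)
print("burst flagged:", anomalous)  # True – 8 sends in 14 s exceeds 5 per minute
```

In practice a check like this would feed an alerting pipeline, and the assistant's send permissions could be paused until a human reviews the burst.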

AI security in a digital age

As AI technology continues to evolve, so do the risks associated with it. The emergence of AI worms like Morris II underscores the need for stronger cybersecurity frameworks to protect against future AI-driven threats. Whether you’re an individual user or a business, staying informed about AI cybersecurity is crucial to safeguarding your digital assets.

Securing every kind of GenAI system – including the three models used in the Morris II research (Gemini, ChatGPT, and LLaVA) – is essential for comprehensive protection against the diverse cyber threats targeting these platforms. AI is reshaping the cybersecurity landscape, influencing both defense strategies and attack methods – explore how AI has changed cybersecurity and what it means for the future.

By implementing proactive security measures, monitoring AI activities, and spreading awareness, we can reduce the risk of AI worms before they become a widespread cyber threat.

Are you concerned about AI cybersecurity threats? Stay updated with the latest security insights and best practices to protect your digital environment.

This post has been updated on 19-03-2025 by Sarah Krarup.

Author: Sarah Krarup

Sarah studies innovation and entrepreneurship with a deep interest in IT and how cybersecurity impacts businesses and individuals. She has extensive experience in copywriting and is dedicated to making cybersecurity information accessible and engaging for everyone.
