Here Come the AI Worms: Zero-click Worms that target GenAI-Powered Applications

A team of researchers developed one of the first generative Artificial Intelligence (AI) worms, which can spread from one system to another.

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process.

“It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,”

Researchers created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988.

In a research paper and website, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text.

While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions.

For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.”

This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails. The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client.

Despite this, there are ways people creating generative AI systems can defend against potential worms, including using traditional security approaches.

With a lot of these issues, this is something that proper secure application design and monitoring could address parts of, you typically don’t want to be trusting LLM output anywhere in your application.


Keeping humans in the loop—ensuring AI agents aren’t allowed to take actions without approval—is a crucial mitigation that can be put in place.

“You don’t want an LLM that is reading your email to be able to turn around and send an email. There should be a boundary there.”

If a prompt is being repeated within its systems thousands of times, that will create a lot of “noise” and may be easy to detect.

People creating AI assistants need to be aware of the risks. This is something that you need to understand and see whether the development of the ecosystem, of the applications, that you have in your company basically follows one of these approaches, because if they do, this needs to be taken into account.

kivuti kamau

Data Modelling, Design & Development

Press ESC to close