Are generative AIs adequately protected against viruses? This is what the experiment demonstrated

Are generative AIs adequately protected against viruses? This is what the experiment demonstrated

Researchers have discovered that flaws in the design of large language model services can create a gap for the massive spread of self-replicating malicious viruses.

Generative artificial intelligence tools, such as ChatGPT or Google's Gemini, are increasingly in demand by both users and businesses and public institutions for a variety of purposes, from creating new content to planning strategies. However, free access to AIs also allows criminals and extortionists to steal your data or money using these tools. Given this danger, AI and cybersecurity experts are looking to proactively anticipate how AI will be used for criminal purposes. They recently conducted an experiment that showed what serious risks the use of artificial intelligence can lead to by those who want to hack your devices.

Cornell Tech researchers conducted an experiment in laboratory conditions to create a new type of cyber attack, which allows using generative AI services to launch the process of self-reproduction of a network worm. For this experiment, a language model-based email client was used, into which a malicious program was embedded. More specifically, they connected the ChatGPT, Gemini, and LlaMA language models to an email messaging service and then launched two waves of attacks.

n the first series of attacks, an email with a malicious command was sent to the victim's email address, which forces the language model to follow a link and find the answer there. Another command was placed on the generative AI service, which launched the process of stealing the user's personal data and sending a copy of the letter to all his contacts. Thus, the network worm spread itself independently. The same process was repeated on several devices. In the second attack, the experimental team injected a malicious command into an image, which it then sent along the same path. The image with the virus also began to quickly self-generate and infect devices.

The most important conclusion of this study is that it is extremely dangerous to give neural networks access to such important functions as sending something on behalf of a user without his authorization. A reliable formula would be to have the machine learning model wait for confirmation from the email account owner.

Representatives of OpenAI and Google were informed about the error and attacks based on it. The first company confirmed the presence of the vulnerability and said that it was already working to improve the neural network's resistance to such attacks. Google did not comment on the researchers' request.

Scientists have discovered that imperfections in the architecture of AI models are fertile ground for growing viruses and worms. The experiment clearly showed that with the help of such AI worms, attackers can start the process of self-production of worms. The main danger of such attacks is that they are very difficult to stop, since the number of sources of infection is growing at enormous speed.