Now Reading
Why AI Worms Are The Subsequent Huge Cyber Menace And How To Shield Your self Towards Them

Why AI Worms Are The Subsequent Huge Cyber Menace And How To Shield Your self Towards Them


Warning: Undefined property: stdClass::$error in /home/u135931481/domains/lavishlife.net/public_html/wp-content/themes/theissue/inc/misc.php on line 71

The world took a major leap forward when OpenAI’s ChatGPT gave the common man a taste of generative AI. Over the last two years, advances made in this field have changed the way we live and work. Be it helping with the kids’ homework, curating a unique social media caption, filling up a photo with artificial elements or solving complex mathematical problems, generative AI has taken a load off our shoulders.

Their creators also promise the utmost safety from cyber threats. However, if a recent study is to be considered, generative AI is as vulnerable to external threats as any other technology existing today.

Recently, a generative AI worm has found a loophole in the security systems of two of the world’s most popular AI chatbots – Google Gemini (erstwhile Bard) and OpenAI’s ChatGPT. When enabled, this worm can manipulate these AI chatbots in their client apps and utilise them to spread malware, conduct phishing attacks and send spam emails.

If you have signed up for a generative AI service, you need to sit up and take note. Although the teams at OpenAI and Google constantly work on releasing security patches, worms can still hunt new weaknesses in AI chatbots and figure out a loophole to further their malicious intentions.

This is why we attempt to understand AI worms and how you can stay safe from these pesky computer programs.

Let’s begin by answering the simplest questions.

What is a computer worm?

AI worm
Image credit: Emiliano Vittoriosi/Unsplash

A computer worm is essentially malware that can create and distribute duplicates to other computers over a network. While it may seem similar to a computer virus, a computer worm doesn’t require a host program or OS to disseminate copies of itself. It is self-sufficient.

Worms are among the most dangerous malware found in the digital world and can cause troubles like stealing sensitive information, corrupting files, populating system resources and installing backdoors to pave the way for hackers to gain access. The first computer worm, called Morris, was released in 1988 by scientists at Cornell University.

AI worms: How much of a threat do they pose?

Fast forward to 2024 and a successor from the same institute has created an AI worm. Aptly called Morris II, this worm goes after email clients infused with generative AI models and utilises the loopholes in these Large Langauge Models (LLMs) to conduct malicious business.

Modern generative AI chatbots usually respond to a query and try to solve the problem at hand. The researchers gave these chatbots an adversarial self-replicating prompt that triggers the GenAI model to output an input prompt. This starts a cycle of repetition, eventually breaking the LLM and causing it to act like a worm. The prompts can contain both texts and images.

These prompts can then be used to perform malicious activities like sending spam messages, exposing confidential data, generating toxic content and even disseminating propaganda. In a demonstration by Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology and Ron Bitton from Intuit, the Morris II worm was able to infect email clients using GPT 4, Gemini Pro and LLaVa generative AI models for various factors like propagation rate, replication and malicious activity.

The AI worm created by these scientists was capable of breaking the safeguards of these AI models and directing them to steal sensitive data like social security numbers and credit card information. In the wrong hands, such an AI worm can do even more damage than one can expect.

Although the Morris II worm’s sole intention was to demonstrate the vulnerability of these advanced generative AI models, the study gives the creators of these AI chatbots a notable responsibility of ensuring solid security protocols that are impenetrable by worms and other malicious programs.

See Also

How to stay safe from AI worms

At this stage, the onus falls onto companies like Google, OpenAI and every other LLM developer to ensure solid security protocols that cannot be violated by malicious text and image prompts.

However, if you use a generative AI-based email client, we advise you to stay alert for suspicious emails with unverified attachments. Avoid downloading them on your device at all costs. The same goes for files and text prompts sent on messaging platforms.

(Hero and Featured Image Credits: Courtesy Mojahid Mottakin via Unsplash)

This story first appeared on Augustman Singapore

Frequently Asked Questions (FAQs)

– What is an AI worm?
An AI worm is malware that steals your confidential information, sends spam and spreads itself using various methods.

– How can an AI worm infect ChatGPT?
An AI worm can infect ChatGPT by attacking a GPT-4-based email client using text and image prompts.

Source: Prestige Online

View Comments (0)

Leave a Reply

Your email address will not be published.

Copyright © MetaMedia™ Capital Inc, All right reserved

Scroll To Top