WormGPT on the loose

ChatGPT's evil brother, WormGPT, has emerged. We take a closer look at the malicious AI technology and make you smarter about it.

08-09-2023 - 9 minute read. Posted in: cybercrime.

WormGPT, an unsettling newcomer to the AI world, has emerged as a powerful AI tool. While this sophisticated AI model has the potential for innovative and helpful applications, it also has a darker side that can wreak havoc in the digital world.

We already have a blog post on the dark side of ChatGPT, so now we’ll discuss the phenomenon that is WormGPT and unfold an even darker side of AI.

The genesis of WormGPT: large language models

WormGPT is an AI model based on GPT-J, an open-source large language model. But it appears as an altered version, with modifications that increase its power and efficiency: the adaptation gives it unlimited character support, chat memory retention, and advanced code formatting features.

WormGPT was created to generate natural language text based on user-provided inputs. The modifications broaden its abilities beyond those of standard AI models, allowing it to generate text that can either inform or mislead.

However, the appeal of WormGPT hides a feature that threatens the digital world: it operates without the security measures and content filters that restrict the outputs of mainstream models like ChatGPT. This fundamental lack of restrictions makes it a double-edged sword that can produce anything from informative to malevolent content.

What is WormGPT?

WormGPT is a generative AI platform specifically designed to assist with criminal activities. Promoted on darknet forums as a tool for doing “all sorts of illegal stuff,” it stands as a rival to ChatGPT, a well-known generative AI engine. WormGPT is an AI module based on the GPT-J language model, trained with data sources that include malware-related information. This AI module boasts features such as unlimited character support, chat memory retention, and advanced code formatting capabilities. These attributes make it a powerful tool for creating highly convincing fake emails and malicious code, posing a significant threat to cybersecurity.

Unveiling the dark side

The lack of security filters drives WormGPT into dangerous territory, since it can create damaging text for malicious activities like:

  • Phishing

  • Fraud

  • Spreading malware

The uncontrolled power that WormGPT puts in hackers' hands challenges the ethical use of AI technology, making it a tool that cybercriminals can abuse for their own gain.

Hackers can use WormGPT's powerful capabilities to produce convincing messages that impersonate corporate employees, executing Business Email Compromise (BEC) attacks that trick users into transferring money to fraudulent accounts. These attacks demonstrate WormGPT's ability to carry out complex and risky schemes through persuasive, deceptive messages. To understand how similar threats exploit trust and human error, read our guide on what is social engineering.

WormGPT's malicious function goes even further: it is largely geared toward producing and spreading malware, having been trained on an extensive range of data sources from across the web. Cybercriminals are known to promote malicious code and hacking methods - including WormGPT itself - on dark web forums.

Here, they give many other cybercriminals the chance to exploit the AI technology's potential for sophisticated and targeted phishing attacks. These attacks can jeopardize the security and privacy of individuals and businesses and, in the worst case, lead to severe consequences and data breaches for organizations and even nations.

How WormGPT works: highly convincing fake emails

WormGPT is similar to other, more innocent software like ChatGPT: both are powerful AI systems developed using a large corpus of varied textual inputs.

WormGPT's training materials, on the other hand, take an unpleasant turn, integrating material from underground forums, hacking guides, malware samples, and fraudulent e-mail templates.

The model makes use of this information to accurately and convincingly generate phishing e-mails, produce malware, and distribute illegal content online. To personalize content and enhance its manipulative impact, it makes use of strategies like data scraping and social engineering.

WormGPT can collect personal information from websites through data scraping, and by applying social engineering techniques, attackers can craft content that appeals to the target and ultimately makes the phishing even more convincing.

The brutal consequences of executing harmful code

Interacting with WormGPT can carry severe consequences; its potential for exploitation makes it a dangerous weapon that can expose sensitive information and threaten both specific victims and society at large. People who use WormGPT for illegal activities risk legal repercussions such as monetary fines and prison sentences.

As you can tell, WormGPT poses a variety of dangers. It can execute phishing attacks to trick people into disclosing valuable financial and personal information. Furthermore, it has the ability to inject malicious code into victims’ devices and spread:

  • Viruses

  • Worms

  • Trojans

  • Ransomware

  • Spyware

  • Keyloggers

all of which can hack, steal, or encrypt sensitive data.

WormGPT is furthermore capable of hacking websites and exploiting them to launch Distributed Denial of Service (DDoS) attacks against a target website. Such attacks can make online platforms inaccessible. To learn more about how a DDoS attack works and its impact, check out our guide on what is a DDoS attack.

Risks of WormGPT attacks

WormGPT poses a significant risk to both individuals and organizations. Its ability to generate highly convincing fake emails, personalized to the recipient, makes it a potent tool for tricking people into disclosing sensitive information or installing harmful software. Beyond phishing, WormGPT can be used to create malware and cybersecurity exploits, produce inappropriate content, and execute harmful code. This makes it a valuable asset for malicious actors. The use of WormGPT can also lead to Business Email Compromise (BEC) attacks, which can result in substantial financial losses and damage to an organization’s reputation.

Working against WormGPT

The threatening presence of WormGPT requires awareness and vigilance. Maintaining AI security is crucial to prevent the misuse of technologies like WormGPT for illicit purposes. Those drawn to its malicious potential should take heed: using it for illicit purposes is both wrong and illegal, and authorities in various nations warn that violations carry legal repercussions such as fines or imprisonment.

We should, though, remember that the potential harm WormGPT can cause can be mitigated. Although it has strong persuasive abilities, its effect can be minimized with the right tools and strategies. Using techniques like signatures, algorithms, and machine learning, malicious e-mails and malware created by WormGPT can be identified and blocked.

There are many tools that can track and identify malicious code before it reaches our devices and systems. Many antivirus programs are efficient and constantly updated, following the trends in cyberspace - including the malicious ones.
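To make the signature technique mentioned above concrete, here is a minimal sketch of how signature-based detection works at its core: a file's cryptographic hash is compared against a database of known-malicious hashes. The hash in the database below is a placeholder (it is simply the SHA-256 of an empty file), not a real malware signature, and real antivirus engines combine this with heuristics and machine learning.

```python
import hashlib

# Placeholder signature database: SHA-256 hashes of known-malicious files.
# Real antivirus vendors ship constantly updated databases of such signatures.
KNOWN_MALICIOUS_HASHES = {
    # Placeholder entry: the SHA-256 of empty content, used here only for demonstration
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Compute the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_malicious(file_bytes: bytes) -> bool:
    """Return True if the file's hash matches a known-malicious signature."""
    return sha256_of(file_bytes) in KNOWN_MALICIOUS_HASHES

# Empty content matches the placeholder entry above; other content does not.
print(is_known_malicious(b""))       # → True
print(is_known_malicious(b"hello"))  # → False
```

The strength of this approach is speed and precision; its weakness is that AI-generated malware can be trivially varied to produce new hashes, which is why modern tools layer behavioral and machine-learning detection on top.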

Safeguarding against WormGPT attacks

To safeguard against WormGPT attacks, it is crucial to maintain AI security and be aware of the potential risks associated with AI-powered malware. Companies should develop comprehensive, regularly updated training programs aimed at countering BEC attacks. Employees need to be educated on the nature of BEC threats and the tactics employed by attackers. Email systems should be configured to flag messages containing specific keywords linked to BEC attacks, and potentially malicious emails should undergo thorough examination before any action is taken.
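The keyword-flagging idea described above can be sketched in a few lines. This is an illustrative example only, not a production filter: the keyword list and threshold are hypothetical, and real mail gateways combine such rules with sender verification and anomaly detection.

```python
# Hypothetical keywords often associated with BEC-style payment requests
BEC_KEYWORDS = [
    "wire transfer", "urgent payment", "change of bank details",
    "confidential request", "gift cards", "invoice attached",
]

def flag_bec_email(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag an email for manual review if it matches enough BEC keywords."""
    text = f"{subject} {body}".lower()
    hits = sum(1 for kw in BEC_KEYWORDS if kw in text)
    return hits >= threshold

# A message combining urgency and a payment request gets flagged for review
print(flag_bec_email(
    "Urgent payment needed",
    "Please process this wire transfer today and keep it confidential.",
))  # → True
```

Flagged messages should not be auto-deleted but routed for the kind of thorough human examination the training program teaches.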

If you’re looking to improve your organization’s defenses, explore our guide on why gamification in awareness training works.

Best defense against WormGPT

The best defense against WormGPT is to remain vigilant and aware of the potential risks associated with AI-powered malware. Antimalware solutions continue to be an important and effective defense against cybersecurity threats. For instance, downloading a free trial of Panda Dome can provide robust protection against malware and other cybersecurity threats. Additionally, companies should enforce stringent email verification processes and educate employees on the nature of BEC threats and the tactics employed by attackers.

To improve your organization’s resilience to such threats, check out our guide on network security: a top 10 of best practices.

Mitigating the damage

To mitigate the damage caused by WormGPT attacks, it is essential to act quickly and decisively. If an attack is suspected, the first step is to contain the damage by isolating the affected systems to prevent further unauthorized access. Companies should then conduct a thorough investigation to determine the extent of the damage and identify the vulnerabilities that were exploited. Furthermore, it is crucial to review and update security protocols to prevent similar attacks in the future. By taking these steps, organizations can minimize the impact of WormGPT attacks and strengthen their defenses against future threats.

Looking forward: Maintaining AI security

The emergence of WormGPT raises an important question: is it an isolated trend, or a sign of a more concerning future in cybersecurity? The accessibility of AI technologies is growing, and such technology now reaches people without advanced hacking or scamming knowledge - especially with phenomena like Malware-as-a-Service (MaaS) emerging as well.

The threat of more advanced technologies with malicious intentions is approaching, emphasizing the importance of developing and implementing ethical AI that keeps us on the right side of the law. If you want to learn more about how hackers use MaaS to distribute malware, read our guide on MaaS: A malicious service provider.

WormGPT’s development sheds light on the complex connection between AI technology and the risk of cyberattacks. Although its powers are remarkable and wide-ranging, its malicious use by hackers reveals a world of risks that must be averted.

This post has been updated on 22-01-2025 by Sarah Krarup.

Author Sarah Krarup

Sarah studies innovation and entrepreneurship with a deep interest in IT and how cybersecurity impacts businesses and individuals. She has extensive experience in copywriting and is dedicated to making cybersecurity information accessible and engaging for everyone.
