WormGPT: The Unseen Threat in the Cybersecurity Landscape


Published on Jul 15, 2023   —   2 min read

The advent of generative artificial intelligence (AI) has brought about a paradigm shift in various sectors. However, its misuse in the realm of cybercrime is a growing concern. A new generative AI tool, WormGPT, is making its presence felt in underground forums, providing cybercriminals with a potent weapon to launch sophisticated phishing and business email compromise (BEC) attacks.

The Intersection of AI and Cybercrime

Generative AI models, such as OpenAI's ChatGPT, have the ability to generate human-like text based on the input they receive. While this technology has numerous beneficial applications, it also opens up a new vector for cybercriminals. By leveraging these AI models, attackers can automate the creation of highly convincing fake emails, personalized to the recipient, thereby increasing the success rate of their attacks.

The emergence of WormGPT, however, has added a new dimension to this threat. Advertised as a blackhat alternative to GPT models, WormGPT is designed specifically for malicious activities, enabling even novice cybercriminals to launch sophisticated attacks swiftly and at scale.

WormGPT: A New Player in the Cybercrime Arena

WormGPT, based on the GPT-J language model, comes equipped with a range of features, including unlimited character support, chat memory retention, and code formatting capabilities. It has reportedly been trained on a diverse array of data sources, with a particular focus on malware-related data.

In tests, WormGPT has demonstrated its potential for launching sophisticated phishing and BEC attacks. For instance, when tasked with generating an email intended to pressure an account manager into paying a fraudulent invoice, WormGPT produced an email that was not only highly persuasive but also strategically cunning.

The Implications of Generative AI in Cybercrime

The misuse of generative AI in cybercrime presents several challenges:

  1. Exceptional Grammar: Generative AI can create emails with impeccable grammar, making them appear legitimate and reducing the likelihood of being flagged as suspicious.
  2. Lowered Entry Threshold: The use of generative AI democratizes the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a wider spectrum of cybercriminals.

Protecting Against AI-Driven BEC Attacks

The rise of AI in cybercrime underscores the need for robust preventative measures. Companies should develop comprehensive, regularly updated training programs to counter BEC attacks, especially those augmented by AI. Organizations should also implement stringent email verification processes and deploy systems that automatically flag emails originating outside the organization that impersonate internal executives or vendors.
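As a minimal illustration of that last measure, the sketch below flags messages whose From: header pairs an internal executive's display name with an address outside the organization's domain, a common BEC pattern. The domain and name list are hypothetical placeholders; a real deployment would pull them from a directory service and combine this check with SPF/DKIM/DMARC validation.

```python
from email.utils import parseaddr

# Hypothetical values for illustration only; a real system would load
# these from the organization's directory and mail configuration.
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}

def flag_executive_impersonation(from_header: str) -> bool:
    """Return True when the From: header combines a known executive's
    display name with a sender address outside the internal domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    is_external = domain != INTERNAL_DOMAIN
    impersonates_exec = display_name.strip().lower() in EXECUTIVE_NAMES
    return is_external and impersonates_exec
```

For example, a header like `Jane Doe <j.doe@freemail.example>` would be flagged, while `Jane Doe <jane.doe@example.com>` would pass. This display-name check is deliberately simple; on its own it will not catch lookalike domains or compromised internal accounts.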

In conclusion, while AI has brought many benefits, it has also introduced new attack vectors. Staying informed about the latest developments in AI and cybercrime, and taking proactive steps to safeguard against these emerging threats, is of paramount importance.

