
Cybercriminals Exploit ChatGPT to Launch Sophisticated Attacks


Researchers report an escalating trend of cyberattacks that leverage Large Language Models (LLMs) to generate malicious code. The attacks are typically initiated through phishing emails and deploy scripts capable of downloading diverse payloads, including Rhadamanthys, NetSupport, and LokiBot.

Analysis indicates that the scripts used in these attacks exhibit characteristics consistent with LLM-generated content, highlighting the growing threat of AI-powered malware creation. 

Phishing email with an attached password-protected ZIP file
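Defenders can often triage such attachments before they are ever opened: in a classic password-protected ZIP, the central directory is not encrypted, so the filenames inside can be listed without knowing the password. The minimal Python sketch below illustrates the idea; the archive name and extension list are illustrative assumptions, not details from the reported campaigns.

    import zipfile

    # Extensions commonly abused as first-stage droppers in phishing campaigns
    SUSPICIOUS_EXTENSIONS = {".js", ".ps1", ".vbs", ".hta", ".lnk", ".wsf"}

    def list_suspicious_members(path):
        """Return names of script-like files inside a ZIP attachment.

        The central directory of a classic password-protected ZIP is not
        encrypted, so filenames can be read even without the password.
        """
        with zipfile.ZipFile(path) as archive:
            return [name for name in archive.namelist()
                    if any(name.lower().endswith(ext)
                           for ext in SUSPICIOUS_EXTENSIONS)]

    # "invoice.zip" is a placeholder for a quarantined attachment
    hits = list_suspicious_members("invoice.zip")
    if hits:
        print("Quarantine recommended; script payloads found:", hits)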

These AI-powered models, originally developed to generate natural-language text, are being repurposed to create sophisticated phishing emails and malicious code.

Attackers use LLMs to craft highly convincing phishing messages whose attachments execute harmful scripts, such as PowerShell or JavaScript. These scripts can deploy various types of malware, including Rhadamanthys, NetSupport, and LokiBot, each serving a different nefarious purpose, such as data theft or remote control of infected systems.

LLM-generated PowerShell script

The integration of LLMs into cyberattacks represents a significant increase in the complexity and efficacy of these threats: beyond producing convincing, realistic phishing emails, the models enable the creation of complex, obfuscated code that can elude conventional detection techniques.

This dual capability makes LLMs a potent tool in the cybercriminal arsenal, lowering the barrier to entry for attackers who lack advanced technical skills but can leverage AI to produce effective attacks.

Phishing email mimicking HR notification

One of the primary concerns highlighted is the ability of LLMs to generate polymorphic malware, which can change its appearance or behavior to avoid detection by signature-based antivirus systems. 

By generating unique code each time, LLMs can produce variants of the same malware that differ enough to bypass security filters, making them particularly challenging to combat. 
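To see why hash-based signatures struggle here, consider two script fragments that perform an identical action but differ only in comments and variable names: their SHA-256 digests share nothing. The fragments in this Python sketch are made-up and inert, used purely for demonstration.

    import hashlib

    # Two functionally identical script fragments that differ only in comments
    # and variable names -- the kind of variation an LLM can mass-produce.
    variant_a = b"# fetch stage two\n$u = 'http://example.invalid/p'\n"
    variant_b = b"# retrieve next component\n$url = 'http://example.invalid/p'\n"

    # A signature built on the hash of variant A will never match variant B,
    # even though both perform exactly the same action.
    print(hashlib.sha256(variant_a).hexdigest())
    print(hashlib.sha256(variant_b).hexdigest())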

This is not only a technical challenge but also a strategic one: cybersecurity professionals must continuously update their detection algorithms and employ more sophisticated behavioral analysis techniques.
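A common behavioral technique is to score a script on the actions it attempts rather than on its exact bytes, so cosmetic rewrites do not defeat the match. The Python sketch below applies this idea to the download-and-execute pattern typical of these droppers; the indicator patterns and weights are illustrative assumptions, not a production ruleset.

    import re

    # Download-and-execute behaviours typical of PowerShell droppers; scoring
    # behaviour rather than exact bytes survives cosmetic rewrites.
    INDICATORS = [
        (r"(?i)Invoke-WebRequest|DownloadString|DownloadFile", 2),
        (r"(?i)Invoke-Expression|\biex\b", 2),
        (r"(?i)-EncodedCommand", 3),
        (r"(?i)Start-Process", 1),
    ]

    def score_script(text):
        """Sum the weights of every behavioural indicator present."""
        return sum(weight for pattern, weight in INDICATORS
                   if re.search(pattern, text))

    sample = "$d = (New-Object Net.WebClient).DownloadString($u); iex $d"
    if score_script(sample) >= 3:
        print("High-risk download-and-execute behaviour detected")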

Web page displayed during attack

While these models have legitimate uses in automating and enhancing productivity, their misuse highlights a darker potential. The ease with which they can generate convincing phishing emails or code snippets that include malware underscores the need for responsible development and deployment of AI technologies. 

In one observed attack, an LLM-generated JavaScript script was embedded within a seemingly innocuous HTML file disguised as an HR notification. When the file is opened, the script downloads and executes additional payloads, marking the transition from initial access to the payload-delivery stage of the attack lifecycle.

LLM-generated HTML file
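This delivery technique is commonly described as HTML smuggling, and detection often keys on the combination of inline script and client-side payload handling inside an email-borne HTML file. The Python sketch below is a rough illustration; the marker list, threshold, and filename are hypothetical.

    import re

    # Markers often seen in HTML smuggling lures: inline script plus
    # client-side payload reconstruction or an automatic download trigger.
    MARKERS = [
        r"(?is)<script\b",
        r"(?i)atob\s*\(",                        # base64 decoding in the browser
        r"(?i)msSaveOrOpenBlob|createObjectURL",
        r"(?i)\.click\s*\(\s*\)",                # programmatic download trigger
    ]

    def looks_like_smuggling(html):
        """Flag HTML that combines at least two delivery markers."""
        return sum(1 for m in MARKERS if re.search(m, html)) >= 2

    # "hr_notification.html" is a placeholder for a quarantined attachment
    with open("hr_notification.html", encoding="utf-8", errors="ignore") as f:
        if looks_like_smuggling(f.read()):
            print("Attachment shows HTML-smuggling characteristics")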

According to Symantec, continuous monitoring and threat intelligence are crucial, as they provide the necessary context to understand and respond to the rapidly evolving tactics employed by cybercriminals leveraging LLMs.

AI is rapidly democratizing cybercrime, giving adversaries tools to craft sophisticated phishing attacks and generate malicious code that previously required substantial expertise.

As AI capabilities increase, the threat landscape will continue to shift toward more powerful and scalable attacks, demanding equally effective countermeasures to reduce risk.
