
Cybercriminals Leverage LLM Models to Amplify Hacking Tactics


Cybercriminals are increasingly exploiting advancements in artificial intelligence, particularly large language models (LLMs), to enhance and automate their malicious operations.

While mainstream LLMs such as OpenAI’s ChatGPT and Anthropic’s Claude are equipped with robust safety features, such as alignment protocols and real-time guardrails designed to prevent unethical use, threat actors are circumventing these protections by turning to uncensored, custom-built, or jailbroken models.

This trend is reshaping the cyber threat landscape, enabling more scalable and sophisticated attacks.

Uncensored and Custom-Built LLMs

Uncensored LLMs, which lack the ethical constraints of their commercial counterparts, are now widely distributed on underground forums.

Models such as Llama 2 Uncensored, OnionGPT, and WhiteRabbitNeo are readily available for download and local deployment, often via frameworks like Ollama.

These models are stripped of safety mechanisms, allowing users to generate phishing emails, malicious code, and other illicit content with ease.

The resource requirements for running such models locally can be significant, but the payoff for cybercriminals is the ability to produce unrestricted, contextually relevant outputs that would otherwise be blocked by mainstream LLMs.
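For context on what local deployment looks like in practice, frameworks such as Ollama expose a downloaded model through a simple HTTP API on the host machine. The following is a minimal Python sketch, assuming an Ollama server running at its default port; the model name is a placeholder, not a reference to any specific uncensored model.

import json
import urllib.request

# Query a locally served model via Ollama's default HTTP endpoint.
# "some-local-model" is a placeholder; any model pulled into Ollama works.
payload = {
    "model": "some-local-model",
    "prompt": "Summarize the risks of running unrestricted local LLMs.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

The significance is not the API itself but the absence of any server-side policy layer: whatever a locally hosted model is willing to generate, it generates without external oversight.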

Beyond repurposing open-source models, some cybercriminals have developed their own LLMs tailored for offensive operations.

An uncensored LLM, OnionGPT, advertised on the hacking forum Dread.

Examples include GhostGPT, WormGPT, DarkGPT, DarkestGPT, and FraudGPT, which are marketed on dark web marketplaces.

According to a Cisco Talos report, these models advertise capabilities such as malware generation, phishing page creation, vulnerability scanning, and even automated verification of stolen credit card data.

However, the underground LLM market is rife with scams, as seen in the case of FraudGPT, where would-be buyers were defrauded by the model’s purported developer.

FraudGPT dark web homepage.

Despite these risks, the proliferation of criminally oriented LLMs highlights the demand for AI-powered hacking tools.

Jailbreaking and Model Poisoning Expand the Threat Landscape

When access to uncensored or custom models is limited, cybercriminals often resort to jailbreaking legitimate LLMs.

Jailbreaking involves using prompt engineering techniques to bypass built-in safety features.

Common methods include obfuscating harmful prompts with encoding schemes (such as Base64 or leetspeak), appending adversarial suffixes, or employing role-play scenarios that trick the model into ignoring its ethical constraints.

More advanced techniques involve context manipulation, meta prompting, or even academic framing, where malicious requests are disguised as scholarly inquiries.

This ongoing “arms race” between LLM developers and adversaries is driving rapid innovation on both sides.

The utility of LLMs for cybercrime extends far beyond generating malicious content.

Criminal forums reveal that LLMs are being integrated with external tools such as Nmap for vulnerability scanning and used to automate research, summarize technical outputs, and develop new attack strategies.

The programming capabilities of these models enable the creation of ransomware, remote access trojans, and obfuscated scripts, while their content generation features streamline the production of phishing campaigns and scam infrastructure.

LLMs themselves are not immune to attack. Threat actors are targeting AI supply chains by distributing backdoored models, often via popular repositories like Hugging Face.

These models may contain malicious code embedded in serialized files, which executes upon deserialization.
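To illustrate why pickle-serialized model files carry this risk, the following is a minimal defensive sketch (an illustration, not taken from the Talos research) that uses Python's pickletools to list the imports a pickle-based checkpoint would perform at load time, without ever deserializing it.

import pickletools

# Defensive check: enumerate the global references a pickle file would
# import when loaded. GLOBAL/STACK_GLOBAL opcodes can point at arbitrary
# callables such as os.system, which is how a backdoored checkpoint
# executes code the moment a victim calls pickle.load() or torch.load().
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}

def scan_pickle(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":  # arg looks like "os system"
            module = arg.split()[0]
            flag = "  <-- suspicious" if module in SUSPICIOUS_MODULES else ""
            print(f"references: {arg}{flag}")
        elif opcode.name == "STACK_GLOBAL":
            print("uses STACK_GLOBAL (protocol 4+); review manually")

# Hypothetical usage; zip-based PyTorch checkpoints embed the pickle as
# data.pkl inside the archive, which must be extracted first.
# scan_pickle("downloaded_model/data.pkl")

Formats such as safetensors, which store only raw tensor data, avoid this class of attack altogether.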

Additionally, LLMs utilizing Retrieval Augmented Generation (RAG) are susceptible to data poisoning, where attackers manipulate external data sources to influence model outputs or deliver harmful instructions to specific users.

As AI technology continues its rapid evolution, the adoption of LLMs by cybercriminals is expected to accelerate.

While these models do not necessarily introduce novel attack vectors, they serve as powerful force multipliers, automating and enhancing traditional cybercrime tactics.

Security researchers and AI developers face mounting pressure to strengthen model safeguards and monitor the emerging threat landscape, as the intersection of generative AI and cybercrime becomes an increasingly critical battleground.
