A new report from the S2W Threat Intelligence Center’s TALON unit has revealed a marked surge in cybercriminals weaponizing large language models (LLMs) such as ChatGPT, Claude, DeepSeek, and others to accelerate cyber-attacks.
The study, grounded in real-world observations of dark web activity, details the evolution of both exploitation tactics and threats now targeting the AI models themselves, illustrating the dual-edged nature of generative AI in cybersecurity.
The analysis highlights how LLMs, originally developed to augment productivity and automate routine tasks, have become embedded in the offensive arsenals of threat actors.
Popular cybercrime forums are now rife with discussions of LLM-based exploit generation, malware authoring, vulnerability scanning, and methods to bypass in-built security safeguards.
Notably, references to ChatGPT and similar tools are widespread, alongside detailed guides for “jailbreaking” these models to produce unauthorized code outputs.
Exploitation and Security Guardrail Evasion
A pertinent case surfaced in January 2025 when a user known as “KuroCracks” advertised an AI-developed scanner for CVE-2024-10914-a remote code execution vulnerability-on a high-profile cracking forum.
The scanner, developed using LLM support and leveraging Masscan automation, was provided as open source, with explicit mentions of using prompt engineering techniques to elicit exploit code from AI models.
According to TALON analysts, this incident exemplifies a rising trend of dark web actors actively sharing and refining circumvention strategies that undermine existing LLM safety layers.
Beyond exploit development, threat actors are increasingly distributing LLM-related research, code repositories, and fine-tuning methodologies across underground forums.
The circulation of academic and industry intelligence in these spaces signals a growing convergence between public AI advancements and their rapid adoption for malicious use cases-including the creation and sale of customized, “no-limits” models such as WormGPT, which is marketed specifically for its absence of content restrictions.
Direct Attacks on LLM Services
Recent months have witnessed a critical shift: threat actors are not merely leveraging LLMs for attack facilitation but are also directly targeting LLM infrastructures and APIs.
In February 2025, a BreachForums user called “MTU1500Tunnel” purportedly began selling an exploit for the Google Gemini API that promised to bypass balance controls and security mechanisms, raising alarms about the potential for API-level compromise and LLM-specific vulnerabilities.
Prompt injection, a type of attack where adversarial commands are smuggled into LLM queries to induce harmful behaviors, has emerged as a particularly persistent threat.
Major AI providers have scrambled to introduce more robust, multi-layered guardrails, but the pace of threat evolution continues to challenge these defenses.
The S2W TALON report underscores the need for vigilant, adaptive defense in the face of fast-evolving LLM abuse.
While LLMs hold significant promise for automated vulnerability detection and remediation-demonstrated by major initiatives such as DARPA’s AI Cyber Challenge-these same capabilities are being weaponized to automate proof-of-concept exploits, vulnerability scanning, and attack orchestration.
Industry experts recommend a holistic approach to LLM security, combining advanced technical safeguards such as input/output filtering and real-time behavioral monitoring with ongoing user education and community-driven incident response frameworks.
Continued investment in multi-layered defenses, rapid threat intelligence sharing, and ethical deployment guidelines is deemed critical to balancing the transformative potential of generative AI with its escalating risks.
As adversaries sharpen their AI-assisted tactics, organizations must proactively evolve their strategies to keep pace, ensuring that the benefits of large language models are not overshadowed by their exploitation in the cybercrime underground.
Find this Story Interesting! Follow us on LinkedIn and X to Get More Instant updates