LAMEHUG: AI-Powered Malware Strikes Organizations Using Compromised Corporate Email Accounts

Ukraine’s Computer Emergency Response Team (CERT-UA) confirmed a sophisticated cyberattack targeting the nation’s security and defense sector, attributing the operation to the Russia-linked APT28 (a.k.a. Fancy Bear, IRON TWILIGHT).

This campaign introduces a uniquely modern threat: LameHug, believed to be the first documented malware leveraging large language models (LLMs) to generate malicious system commands dynamically.

Phishing as the Entry Point

The attack began with phishing emails sent from compromised Ukrainian government email accounts, lending the messages credibility and increasing delivery success.

Malicious email delivering LameHug malware

These emails enticed recipients to open a ZIP archive named Appendix.pdf.zip, which, upon extraction, presented a .pif executable named identically to the archive, a classic masquerading tactic. This file, produced with PyInstaller from Python code, deploys the LameHug malware.
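The double-extension masquerade described above is straightforward to hunt for. A minimal sketch, assuming a simple filename-only check (extension sets chosen for illustration, with .pif included per the report):

```python
# Flag filenames that masquerade as documents by embedding a document
# extension before an executable one (e.g. "Appendix.pdf.pif", as in
# the LameHug delivery chain). Extension sets are illustrative.
DOC_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}
EXEC_EXTS = {".pif", ".exe", ".scr", ".com", ".bat", ".cmd"}

def is_masquerading(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # fewer than two extensions: nothing to masquerade
    doc_ext, exec_ext = "." + parts[-2], "." + parts[-1]
    return doc_ext in DOC_EXTS and exec_ext in EXEC_EXTS
```

A check like this belongs in mail-gateway or endpoint tooling; it costs nothing and catches the exact lure pattern used here.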

LameHug’s novelty lies in its direct integration with the Qwen 2.5-Coder-32B-Instruct LLM, accessed via the Hugging Face API.

Instead of executing hard-coded commands, LameHug receives attacker-written prompts in plain language, instructs the LLM to generate system reconnaissance and data exfiltration commands dynamically, and then executes these on the victim host.
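The request pattern this implies can be sketched without any network activity. Only the model name below comes from the CERT-UA reporting; the endpoint path and payload shape are assumptions modeled on the Hugging Face hosted inference style, for illustration of the technique:

```python
import json

# Illustrative sketch of the prompt-to-command pattern: a plain-language
# attacker prompt is wrapped into an inference-API-style payload so the
# hosted LLM returns system commands. Endpoint and payload shape are
# assumptions; only the model name is taken from the report.
MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"

def build_llm_request(prompt: str) -> dict:
    return {
        "url": f"https://api-inference.huggingface.co/models/{MODEL}",
        "body": json.dumps({"inputs": prompt,
                            "parameters": {"max_new_tokens": 256}}),
    }

req = build_llm_request("List Windows commands to enumerate hardware and users")
```

The point for defenders is that the malicious logic lives in the prompt, not the binary, so static signatures on the executable miss the actual capability.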

LLM prompts used for command generation

This approach enables highly adaptive automation, aligning malware capability and behavior with attacker intent in near real time.

Stealthy Exfiltration

The AI-driven malware instructed infected endpoints to gather extensive system information, including hardware details, running processes, network settings, user accounts, and organizational directory objects, using both native Windows commands and the WMIC utility.

The collated output is saved to info.txt in a newly created %PROGRAMDATA%\info\ directory.

Subsequently, the malware recursively searches user data folders (Documents, Desktop, Downloads) for potentially sensitive files, staging these for exfiltration.

Data is exfiltrated via SFTP or HTTP POST to attacker infrastructure, frequently over newly established connections to dedicated IPs and domains linked to APT28.

CERT-UA further uncovered additional LameHug variants (notably AI_generator_uncensored_Canvas_PRO_v0.9.exe and image.py), highlighting ongoing development and functional divergence in data theft and exfiltration strategies.

Unlike common commodity malware, the LameHug campaign situates itself at the bleeding edge of adversarial AI adoption. Effective threat hunting for such activity requires robust telemetry.

Analysts are advised to monitor system events for anomalous process creation (especially .pif files executing from %PROGRAMDATA%), unauthorized file drops in common staging locations, and reconnaissance activity such as batched execution of built-in Windows discovery binaries and WMIC commands.
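Those host-side heuristics can be sketched against process-creation telemetry. The event field names below are assumptions modeled on common Sysmon Event ID 1 exports, and the binary list is illustrative:

```python
# Hunting sketch: given process-creation events as dicts (field names
# modeled on Sysmon Event ID 1 exports; an assumption), flag .pif
# executables launched from ProgramData and known discovery binaries.
RECON_BINARIES = {"systeminfo.exe", "whoami.exe", "tasklist.exe",
                  "ipconfig.exe", "wmic.exe", "net.exe"}

def flag_event(event: dict) -> list[str]:
    image = event.get("Image", "").lower()
    reasons = []
    if image.endswith(".pif") and "\\programdata\\" in image:
        reasons.append("pif-in-programdata")
    if image.rsplit("\\", 1)[-1] in RECON_BINARIES:
        reasons.append("recon-binary")
    return reasons
```

Single recon-binary hits are noisy on their own; the useful signal is a burst of them from one parent process within a short window.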

Network telemetry is equally crucial: defenders should alert on connections to flagged addresses (e.g., 144.126.202.227 and 192.36.27.37), suspicious domains (such as stayathomeclasses[.]com), and unsolicited API interactions with major LLM providers (OpenAI, Hugging Face, Google, etc.), particularly from hosts not expected to use such services.

According to the report, CERT-UA has also published file hashes and user-agent strings observed in this campaign, supporting sample-based IOC detection.

Integration of SIEM, EDR, and advanced playbooks (for swift email forensics, targeted host isolation, and process containment) is advised for incident response.

This campaign underscores urgent lessons for corporate defenders. Phishing remains a perilous vector, especially when adversaries weaponize trusted accounts.

Organizations must reinforce user training, encourage out-of-band verification for unusual requests, and promote rapid internal reporting.

Equally, as malicious actors fuse traditional tradecraft with LLMs for code generation, obfuscation, or automation, defenders must adjust to include AI monitoring and strict controls around LLM API access, especially from sensitive systems.

Finally, the campaign validates the need for layered defense: stringent asset monitoring, endpoint visibility, robust incident response plans, and the continuous assessment of new technological attack surfaces, including those introduced by the proliferation of AI in both legitimate and malicious contexts.


Mandvi
Mandvi is a Security Reporter covering data breaches, malware, cyberattacks, data leaks, and more at Cyber Press.
