APT28 Hackers Introduce First Known LLM-Powered Malware, Integrating AI into Attack Methods

Ukraine’s Computer Emergency Response Team (CERT-UA) has made public the discovery of a groundbreaking cyber campaign: malware that natively harnesses the capabilities of large language models (LLMs).

Dubbed “LAMEHUG,” this malware family is the first documented instance of a state-sponsored threat actor, identified with moderate confidence as the Russia-linked APT28 (also known as Fancy Bear), directly integrating LLMs to automate and augment its attack flow.

The campaign, uncovered in early July 2025, primarily targeted Ukrainian government officials with sophisticated phishing lures carrying ZIP archives that contained Python executables compiled with PyInstaller.

How LAMEHUG Leverages LLM Integration

What sets LAMEHUG apart from conventional malware isn’t just its attack vector (phishing emails mimicking ministry communications, with attachments such as “Додаток.pdf.zip”) but its technical innovation: direct, programmatic calls to Alibaba’s Qwen2.5-Coder-32B-Instruct LLM via the Hugging Face API.

[Image: Attachment.pif.pdf phishing lure]

The malware submits base64-encoded prompts, which instruct the LLM to generate operating system commands tailored for information theft and reconnaissance tasks.
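
In practice, that call pattern reduces to decoding an embedded base64 prompt and POSTing it to a hosted inference endpoint. The sketch below is a minimal, benign reconstruction assuming the standard Hugging Face serverless Inference API; the token value and the prompt text are placeholders, not artifacts recovered from the samples.

```python
import base64

import requests

# Placeholder endpoint and token: illustrative, not recovered from the samples.
# CERT-UA reports the malware rotates through a pool of ~270 such tokens.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HF_TOKEN = "hf_XXXXXXXXXXXXXXXXXXXXXXXX"

# LAMEHUG embeds its instructions base64-encoded; a harmless stand-in prompt here.
ENCODED_PROMPT = base64.b64encode(b"Reply with the single word OK.").decode()

def query_llm(encoded_prompt: str) -> str:
    """Decode the embedded prompt and submit it to the hosted model."""
    prompt = base64.b64decode(encoded_prompt).decode()
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt, "parameters": {"max_new_tokens": 128}},
        timeout=30,
    )
    resp.raise_for_status()
    # The serverless text-generation API returns [{"generated_text": ...}].
    return resp.json()[0]["generated_text"]

if __name__ == "__main__":
    print(query_llm(ENCODED_PROMPT))
```

Nothing in such a request distinguishes it from a developer legitimately testing a model, which is exactly why the traffic blends into normal cloud AI usage.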

Each execution is thus context-sensitive and can bypass signature-based detection, underscoring a significant leap in malware adaptability.

CERT-UA reports several LAMEHUG variants, including executables posing as productivity tools and image generators (e.g., AI_generator_uncensored_Canvas_PRO_v0.9.exe, AI_image_generator_v0.95.exe, and scripts like Image.py).

[Image: prompts embedded in Image.py]

Throughout, the core attack logic remains consistent: encode the malicious objective as a text prompt, send it to a commercial LLM using a pool of approximately 270 authentication tokens, receive human-like synthetic commands, execute them on the infected host, and proceed to data exfiltration.

Prompts reverse-engineered from the samples instruct the LLM to, for example, gather system and user information, copy office documents, and exfiltrate this data.

Proof-of-Concept Nature

Unlike typical APT28 operations, LAMEHUG’s coding and operational sophistication appear intentionally simplistic, lacking evasion features or anti-forensics.

The AI integration is direct rather than obfuscated; prompts and API traffic are only superficially hidden via encoding.

This operational profile, paired with observed variant proliferation and differences in exfiltration (HTTP POST vs. SFTP), strongly suggests a proof-of-concept (PoC) exploratory exercise by APT28.

Ukraine’s status as a recurring testbed for Russian threat actors likely influenced its selection as the target environment.

CERT-UA attributes the campaign to APT28 with moderate confidence, referencing distinct overlaps with historical tactics and infrastructure.

This campaign highlights significant challenges ahead for enterprise defenders. Because commands are generated by a highly capable LLM in real time, static and signature-based security solutions will struggle to detect such threats.

API communications blend in with legitimate cloud AI workloads. Additionally, behavioral detection must now contend with the near-infinite variety of actions an LLM can synthesize on-demand.
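
Pending purpose-built controls, one pragmatic starting point is inventorying which hosts talk to LLM API endpoints at all. The sketch below scans a proxy log for connections to known inference domains from hosts outside an approved list; the CSV column names, domain set, and allowlist are illustrative assumptions, not values from CERT-UA’s advisory.

```python
import csv

# Illustrative values: tailor the domain set and allowlist to your environment.
LLM_API_DOMAINS = {
    "api-inference.huggingface.co",
    "huggingface.co",
    "api.openai.com",
}
APPROVED_HOSTS = {"ml-dev-01", "data-sci-lab"}  # hosts with a sanctioned AI use case

def flag_unapproved_llm_traffic(proxy_log_path: str) -> list[dict]:
    """Return proxy-log rows where a non-approved host contacted an LLM API.

    Assumes a CSV log with 'src_host' and 'dest_domain' columns.
    """
    hits = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["dest_domain"] in LLM_API_DOMAINS
                    and row["src_host"] not in APPROVED_HOSTS):
                hits.append(row)
    return hits

for hit in flag_unapproved_llm_traffic("proxy.csv"):
    print(f"ALERT: {hit['src_host']} -> {hit['dest_domain']}")
```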

The emergence of LAMEHUG portends a future where AI-powered malware is customized per-target, can rapidly adapt techniques, and uses trusted infrastructures for both command issuance and data exfiltration.

Security experts recommend adopting strict controls over AI platform usage, monitoring and restricting access to LLM services, enforcing AI-aware data loss prevention mechanisms, and employing machine-learning-driven anomaly detection at both the endpoint and network levels.
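
As one illustration of what an “AI-aware” DLP check might look like, the sketch below decodes long base64 runs found in outbound payloads and flags those that decode to prompt-like text, the same trick LAMEHUG uses to conceal its instructions. The length threshold and keyword markers are illustrative assumptions rather than tested detection logic.

```python
import base64
import re

# Base64 runs long enough to conceal a prompt (the threshold is an assumption).
B64_RUN = re.compile(rb"[A-Za-z0-9+/=]{40,}")

# Words typical of command-generation prompts (illustrative, not from the samples).
PROMPT_MARKERS = ("command", "execute", "windows", "list", "create", "copy")

def find_hidden_prompts(payload: bytes) -> list[str]:
    """Decode base64 runs in an outbound payload and flag prompt-like text."""
    findings = []
    for blob in B64_RUN.findall(payload):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not cleanly decodable text; ignore
        lowered = decoded.lower()
        if sum(marker in lowered for marker in PROMPT_MARKERS) >= 2:
            findings.append(decoded)
    return findings

# Toy outbound body carrying a base64-hidden, prompt-like instruction.
sample = b'{"inputs": "' + base64.b64encode(
    b"Create a list of Windows commands to copy documents."
) + b'"}'
for text in find_hidden_prompts(sample):
    print("Possible hidden LLM prompt:", text)
```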

Modern SASE and XDR platforms, integrating ML-based behavioral analytics and cloud access security, are presented as critical countermeasures in this new era of AI-driven cyberattacks.

The LAMEHUG operation marks a turning point: the PoC test may be simple, but it demonstrates how state-sponsored actors are now operationalizing LLMs for automated reconnaissance, exfiltration, and attack evolution, laying the groundwork for more mature, evasive, and impactful AI-powered campaigns in future conflicts.

Indicators of Compromise (IoCs)

| MD5 | SHA256 | Filename |
| --- | --- | --- |
| abe531e9f1e642c47260fac40dc41f59 | 766c356d6a4b00078a0293460c5967764fcd788da8c1cd1df708695f3a15b777 | Додаток[.]pif |
| 3ca2eaf204611f3314d802c8b794ae2c | d6af1c9f5ce407e53ec73c8e7187ed804fb4f80cf8dbd6722fc69e15e135db2e | AI_generator_uncensored_Canvas_PRO_v0.9[.]exe |
| f72c45b658911ad6f5202de55ba6ed5c | bdb33bbb4ea11884b15f67e5c974136e6294aa87459cdc276ac2eea85b1deaa3 | AI_image_generator_v0.95[.]exe |
| 81cd20319c8f0b2ce499f9253ce0a6a8 | 384e8f3d300205546fb8c9b9224011b3b3cb71adc994180ff55e1e6416f65715 | Image[.]py |

Mandvi
Mandvi is a Security Reporter covering data breaches, malware, cyberattacks, data leaks, and more at Cyber Press.