OpenAI has banned a cluster of ChatGPT accounts linked to Chinese-speaking actors who used the model to develop and refine malware and phishing tools.
The activity overlaps with industry reporting on the UTA0388 (UNKDROPPITCH) threat cluster, which has targeted Taiwan’s semiconductor sector, U.S. think tanks, and ethnic and political groups critical of the Chinese Communist Party.
Rapid Iteration of Malware and Phishing Toolchains
According to OpenAI’s October threat report, the banned accounts submitted dual-use prompts in Chinese, requesting:
- Encrypted C2 development, including AES-GCM session rekeying and keep-alive loops over HTTPS or WebSockets.
- Process enumeration and AV evasion, such as PowerShell scripts to enumerate Microsoft Edge WebView2 processes and disable certificate validation (a defensive hunting sketch appears at the end of this section).
- Phishing content generation, crafting formally polite emails in Chinese, English, and Japanese, with regional tone adjustments and cultural references.
- Automation of reconnaissance, using commodity scanners (e.g., nuclei, fscan) and scripting API calls for wallet interactions.
Despite sophisticated “glue code” assembly and localized social-engineering lures, OpenAI found no evidence that its models granted novel offensive capabilities beyond publicly documented techniques.
Requests that were clearly malicious were refused outright by ChatGPT’s safety filters.
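For defenders, the certificate-validation bypass noted in the list above maps onto a simple hunting heuristic: flag collected scripts that tamper with TLS validation. The sketch below is illustrative only and is not drawn from OpenAI’s report; the scan directory and the pattern list are hypothetical assumptions.

```python
# Minimal hunting sketch: flag PowerShell scripts that disable TLS certificate
# validation. The scan root and pattern list are illustrative assumptions,
# not indicators published by OpenAI.
import re
from pathlib import Path

# Well-known .NET/PowerShell idioms for weakening certificate validation.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ServerCertificateValidationCallback", re.IGNORECASE),
    re.compile(r"CheckCertificateRevocationList\s*=\s*\$false", re.IGNORECASE),
    re.compile(r"-SkipCertificateCheck", re.IGNORECASE),
]

def scan_scripts(root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) pairs for PowerShell scripts under `root`."""
    hits = []
    for script in Path(root).rglob("*.ps1"):
        text = script.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                hits.append((str(script), pattern.pattern))
    return hits

if __name__ == "__main__":
    # "./collected_scripts" is a hypothetical triage directory.
    for path, pattern in scan_scripts("./collected_scripts"):
        print(f"[!] {path}: matched {pattern}")
```

String matching of this kind produces false positives, since legitimate administrative scripts sometimes disable validation too, so hits should be treated as triage leads rather than verdicts.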
Enforcement and Industry Collaboration
OpenAI disabled all associated accounts and shared Indicators of Compromise (IoCs) with security partners.
Proofpoint and Volexity have both documented UTA0388’s campaigns, noting:
- Use of XenoRAT and GOVERSHELL malware variants.
- Spear-phishing targeting diplomatic and academic institutions.
- Deployment of GitHub-hosted C2 repositories.
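Teams that receive such IoC feeds typically automate matching against local telemetry. The sketch below assumes a plain-text feed containing one SHA-256 hash or domain per line; the file names and feed format are hypothetical, not the format OpenAI or its partners distribute.

```python
# Minimal IoC-matching sketch: compare local files and logged domains against
# a shared indicator feed. Feed format and file names are hypothetical.
import hashlib
from pathlib import Path

def load_feed(path: str) -> set[str]:
    """Load one indicator (SHA-256 hash or domain) per line, skipping comments."""
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    }

def sha256_of(path: Path) -> str:
    """Hash a file's contents with SHA-256."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def match_files(feed: set[str], directory: str) -> list[str]:
    """Return files whose SHA-256 appears in the indicator feed."""
    return [
        str(p)
        for p in Path(directory).rglob("*")
        if p.is_file() and sha256_of(p) in feed
    ]

def match_domains(feed: set[str], dns_log: str) -> list[str]:
    """Return logged domains (whitespace-separated) that appear in the feed."""
    return [d for d in Path(dns_log).read_text().split() if d.lower() in feed]

if __name__ == "__main__":
    feed = load_feed("ioc_feed.txt")                # hypothetical feed file
    print(match_files(feed, "./quarantine"))        # hypothetical sample directory
    print(match_domains(feed, "dns_queries.log"))   # hypothetical DNS log export
```

In production this role is usually filled by a SIEM or threat-intelligence platform; the sketch only shows how little glue is needed to act on a shared feed.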
OpenAI emphasized that while adversaries gained incremental efficiency, such as faster code iteration and multilingual phishing templates, no zero-day exploits were facilitated through ChatGPT.
| CVE Identifier | Affected Component | Typical Usage by Actors | CVSS 3.1 Score | Status |
|---|---|---|---|---|
| CVE-2021-26855 | Microsoft Exchange Server | Initial access via ProxyLogon or similar Exchange flaws | 9.1 | Patched by Microsoft |
| CVE-2023-40477 | Open-Source AES-GCM Libraries | Encryption of C2 traffic; actors used static keys | 7.5 | Mitigated by Updates |
| CVE-2022-41040 | Microsoft Exchange Server | Secondary Exchange proxy exploit chains | 8.8 | Patched by Microsoft |
| CVE-2023-24932 | Windows Print Spooler | Local privilege escalation for payload execution | 7.8 | Patched by Microsoft |
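Scores and remediation status change over time, so the identifiers above are best re-verified against the NVD before acting on them. The sketch below uses the public NVD CVE API 2.0; the JSON field paths follow that API’s documented layout, `requests` is a third-party dependency, and unauthenticated queries are rate-limited.

```python
# Minimal sketch: look up current CVSS v3.1 base scores for the CVEs in the
# table above via the public NVD CVE API 2.0.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
CVE_IDS = ["CVE-2021-26855", "CVE-2023-40477", "CVE-2022-41040", "CVE-2023-24932"]

def cvss_base_score(cve_id: str) -> float | None:
    """Return the CVSS v3.1 base score for a CVE, or None if it is not listed."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0]["cve"].get("metrics", {}).get("cvssMetricV31", [])
    return metrics[0]["cvssData"]["baseScore"] if metrics else None

if __name__ == "__main__":
    for cve in CVE_IDS:
        print(f"{cve}: {cvss_base_score(cve)}")
```

For bulk checks, requesting an NVD API key raises the rate limit.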
This enforcement action underscores the evolving threat landscape where AI tools are misused for incremental gains in cyber operations.
OpenAI reaffirms its commitment to:
- Robust policy enforcement, leveraging pattern analysis rather than isolated prompts.
- Safety improvements, strengthening detection, monitoring, and collaboration with external security researchers.
- Transparency, publishing quarterly threat reports to inform policymakers and the broader cybersecurity community.
As AI becomes more integrated into both legitimate and malicious workflows, the cybersecurity industry must continue sharing insights and IoCs to stay ahead of adaptive adversaries.