OpenAI Suspends ChatGPT Accounts Linked to Chinese Hackers Developing Malware

OpenAI has banned a cluster of ChatGPT accounts linked to Chinese-speaking threat actors who used its models to develop and refine malware and phishing tools.

The activity overlapped with industry reporting on the UTA0388 (UNK_DropPitch) threat cluster, which has targeted Taiwan's semiconductor sector, U.S. think tanks, and ethnic and political groups critical of the Chinese Communist Party.

Rapid Iteration of Malware and Phishing Toolchains

According to OpenAI’s October threat report, the banned accounts submitted dual-use prompts, largely in Chinese, requesting:

  1. Encrypted command-and-control (C2) development, including AES-GCM session rekeying and keep-alive loops over HTTPS or WebSockets (a benign illustration of standard AES-GCM usage follows this list).
  2. Process enumeration and AV evasion, such as PowerShell scripts to enumerate Microsoft Edge WebView2 processes and disable certificate validation.
  3. Phishing content generation, crafting formally polite emails in Chinese, English, and Japanese, with regional tone adjustments and cultural references.
  4. Automation of reconnaissance, using commodity scanners (e.g., nuclei, fscan) and scripting API calls for wallet interactions.

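For context on OpenAI's assessment that no novel capability was provided, the AES-GCM session encryption described in item 1 is a standard, publicly documented primitive available in mainstream libraries. The minimal Python sketch below, using the widely available cryptography package, is a purely benign illustration of routine encrypt/decrypt usage; the variable names are hypothetical and no networking or C2 logic is shown.

```python
# Benign illustration only: standard AES-GCM usage with the `cryptography` package.
# No networking, beaconing, or C2 behaviour is shown; key/nonce names are hypothetical.
from os import urandom
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # 256-bit session key
aesgcm = AESGCM(key)

nonce = urandom(12)                          # AES-GCM nonces must never repeat per key
plaintext = b"example payload"
associated_data = b"session-header"          # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext

# "Rekeying" in legitimate protocols simply means generating a fresh key after a
# time or byte threshold, so that a compromised key exposes only a bounded window.
```

The point of the sketch is that nothing here goes beyond standard library documentation, which is consistent with OpenAI's finding that the models offered only incremental efficiency, not new offensive tradecraft.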
Despite sophisticated “glue code” assembly and localized social-engineering lures, OpenAI found no evidence that its models granted novel offensive capabilities beyond publicly documented techniques.

Requests that were overtly malicious were refused outright by ChatGPT’s safety filters.

Enforcement and Industry Collaboration

OpenAI disabled all associated accounts and shared Indicators of Compromise (IoCs) with security partners.
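To illustrate what that kind of sharing can look like in practice, the sketch below packages placeholder indicators into a simple JSON feed. The structure, feed name, and indicator values are hypothetical and simplified; it is not a formal STIX 2.1 bundle and is not taken from OpenAI's report.

```python
# Simplified, hypothetical sketch of packaging IoCs for sharing with partners.
# Indicator values are placeholders; the structure is illustrative, not a STIX bundle.
import json
from datetime import datetime, timezone

iocs = [
    {"type": "domain", "value": "example-c2.invalid", "context": "suspected C2 domain"},
    {"type": "sha256", "value": "0" * 64, "context": "placeholder payload hash"},
]

feed = {
    "source": "example-ai-platform",          # hypothetical feed name
    "generated": datetime.now(timezone.utc).isoformat(),
    "indicators": iocs,
}

print(json.dumps(feed, indent=2))
```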

Proofpoint and Volexity have both documented UTA0388’s campaigns, noting:

  • Use of XenoRAT and GOVERSHELL malware variants.
  • Spear-phishing targeting diplomatic and academic institutions.
  • Deployment of GitHub-hosted C2 repositories (see the defensive triage sketch after this list).
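Because GitHub-hosted C2 blends into normal developer traffic, defenders often start by triaging proxy logs for non-browser processes fetching raw GitHub content. The sketch below is a minimal, hypothetical example of that triage step: the log path, column layout, and process allowlist are assumptions for illustration and are not drawn from the Proofpoint or Volexity reporting.

```python
# Hypothetical defensive triage sketch: flag proxy-log entries where non-browser
# processes fetch raw GitHub content, a pattern consistent with GitHub-hosted C2.
# Log path, column layout, and the process allowlist are assumptions for illustration.
import csv

SUSPECT_DOMAINS = ("raw.githubusercontent.com", "objects.githubusercontent.com")
BROWSER_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe"}

def flag_suspicious(log_path: str) -> list[dict]:
    findings = []
    with open(log_path, newline="", encoding="utf-8") as handle:
        # Assumed columns: timestamp, host, process, url
        for row in csv.DictReader(handle):
            url = row.get("url", "")
            process = row.get("process", "").lower()
            if any(domain in url for domain in SUSPECT_DOMAINS) and process not in BROWSER_PROCESSES:
                findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in flag_suspicious("proxy_log.csv"):
        print(hit["timestamp"], hit["host"], hit["process"], hit["url"])
```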

OpenAI emphasized that while adversaries sought incremental efficiency gains (for example, rapid code iteration and multilingual phishing templates), no zero-day exploits were facilitated through ChatGPT.

| CVE Identifier | Affected Component | Typical Usage by Actors | CVSS 3.1 Score | Status |
| --- | --- | --- | --- | --- |
| CVE-2021-26855 | Microsoft Exchange Server | Initial access via ProxyLogon or similar Exchange flaws | 9.1 | Patched by Microsoft |
| CVE-2023-40477 | Open-source AES-GCM libraries | Encryption of C2 traffic; actors used static keys | 7.5 | Mitigated by updates |
| CVE-2022-41040 | Microsoft Exchange Server | Secondary Exchange proxy exploit chains | 8.8 | Patched by Microsoft |
| CVE-2023-24932 | Windows Print Spooler | Local privilege escalation for payload execution | 7.8 | Patched by Microsoft |
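Defenders who want current severity and reference data for the CVEs above can query NIST's public NVD 2.0 REST API. The sketch below assumes that endpoint and the requests package, and reads the response defensively since the exact field layout may differ from this assumption.

```python
# Sketch: look up current NVD data for the CVEs listed above via the public
# NVD 2.0 REST API (https://services.nvd.nist.gov/rest/json/cves/2.0).
# Field access is defensive because the response schema may differ from this assumption.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
CVES = ["CVE-2021-26855", "CVE-2022-41040", "CVE-2023-24932", "CVE-2023-40477"]

for cve_id in CVES:
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    if not items:
        print(cve_id, "not found")
        continue
    metrics = items[0].get("cve", {}).get("metrics", {}).get("cvssMetricV31", [])
    score = metrics[0]["cvssData"]["baseScore"] if metrics else "n/a"
    print(cve_id, "CVSS 3.1:", score)
```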

This enforcement action underscores the evolving threat landscape where AI tools are misused for incremental gains in cyber operations.

OpenAI reaffirms its commitment to:

  • Robust policy enforcement, leveraging pattern analysis rather than isolated prompts.
  • Safety improvements, strengthening detection, monitoring, and collaboration with external security researchers.
  • Transparency, publishing quarterly threat reports to inform policymakers and the broader cybersecurity community.

As AI becomes more integrated into both legitimate and malicious workflows, the cybersecurity industry must continue sharing insights and IoCs to stay ahead of adaptive adversaries.
