OpenAI Sora 2 Vulnerability Exposes System Prompts Through Audio Transcripts

OpenAI’s Sora 2 represents a significant leap forward in video generation technology. Yet recent security research has uncovered a critical vulnerability that exposes its hidden system prompt via multimodal extraction techniques.

Researchers successfully demonstrated that carefully crafted requests across different output modalities,such as audio transcripts, encoded video frames, and text renderings,can systematically extract sensitive instructions that guide the model’s behavior, raising essential questions about the security posture of production AI systems.

Multi-Modal Extraction Attack Surface

The vulnerability exploits Sora 2’s ability to generate content across multiple modalities: text, images, video, and audio.

Researchers discovered that while traditional text-to-text prompt injection defenses are relatively robust, the model’s cross-modal capabilities create unexpected weaknesses.

The attack leverages the principle that information can be progressively recovered by requesting that the system render or speak the target content in different formats.

Audio transcription proved remarkably effective, as speech-to-text conversion maintains higher fidelity than image-based text rendering, which suffers from character distortion and semantic drift.

The extraction process involved fragmentary requests spread across multiple 15-second video clips, with researchers iteratively refining their approach based on successfully recovered portions.

This stepwise methodology transformed seemingly impossible extraction into a practical attack, demonstrating how temporal and format constraints can be circumvented through persistence and multi-modal chaining.

OpenAI acknowledged the vulnerability on November 4, 2025, noting that system prompt extraction was already a known possibility across multimodal systems.

The research team responsibly coordinated with OpenAI’s security team before publication, with full disclosure occurring on November 12, 2025.

While Sora 2’s exposed system prompt itself contains no highly sensitive data, researchers emphasize that system prompts function as security boundaries equivalent to firewall rules and should be protected as confidential configuration, not harmless metadata.

Vulnerability TypeAttack VectorSeverityStatus
System Prompt ExtractionMulti-Modal Input (Audio/Video/Image)MediumAcknowledged
Audio Transcript LeakageSpeech-to-Text TranscriptionMediumAcknowledged
Cross-Modal Data ExfiltrationEncoded Image/Video GenerationLow-MediumAcknowledged

This research highlights an emerging gap in AI security: while text-based safeguards have matured through years of red-teaming, multi-modal systems remain vulnerable to creative circumvention strategies.

The vulnerability demonstrates how duplicate semantic content, when transformed across different output formats, can expose protected information.

As AI systems become increasingly complex and multi-modal, security teams must evolve their threat models beyond single-modality assumptions to account for cross-channel information leakage and indirect exfiltration pathways.

Find this Story Interesting! Follow us on Google NewsLinkedIn and X to Get More Instant Updates

AnuPriya
AnuPriya
Any Priya is a cybersecurity reporter at Cyber Press, specializing in cyber attacks, dark web monitoring, data breaches, vulnerabilities, and malware. She delivers in-depth analysis on emerging threats and digital security trends.

Recent Articles

Related Stories

LEAVE A REPLY

Please enter your comment!
Please enter your name here