January 30, 2025
DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked

Jan 30, 2025 | Ravie Lakshmanan | Artificial Intelligence / Data Privacy

Buzzy Chinese artificial intelligence (AI) startup DeepSeek, which has had a meteoric rise in popularity in recent days, left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data.

The ClickHouse database “allows full control over database operations, including the ability to access internal data,” Wiz security researcher Gal Nagli said.

The exposure also included more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API secrets and operational metadata. DeepSeek has since plugged the security hole after attempts by the cloud security firm to contact the company.

The database, hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, is said to have enabled unauthorized access to a wide range of information. The exposure, Wiz noted, allowed for complete database control and potential privilege escalation within the DeepSeek environment without requiring any authentication.

This involved leveraging ClickHouse’s HTTP interface to execute arbitrary SQL queries directly from a web browser. It’s currently unclear whether malicious actors seized the opportunity to access or download the data.
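For context, ClickHouse’s HTTP interface accepts SQL statements as a simple query parameter, which is why an unauthenticated endpoint can be driven from a browser or a few lines of script. The sketch below is purely illustrative: the hostname is a placeholder, and the queries are harmless metadata lookups rather than the actual requests Wiz issued during its research.

```python
import requests

# Illustrative only: a hypothetical ClickHouse host exposing the HTTP interface
# without authentication. The real DeepSeek endpoints have since been secured.
CLICKHOUSE_URL = "http://clickhouse.example.com:9000/"

def run_query(sql: str) -> str:
    """Send a SQL statement to ClickHouse over its HTTP interface."""
    response = requests.get(CLICKHOUSE_URL, params={"query": sql}, timeout=10)
    response.raise_for_status()
    return response.text

# An unauthenticated server would answer queries like these directly:
print(run_query("SHOW DATABASES"))
print(run_query("SELECT name FROM system.tables LIMIT 10"))
```

Because the interface executes whatever SQL it receives, the same mechanism that serves a benign `SHOW DATABASES` would also serve queries against internal log tables, which is what made the exposure so severe.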

“The rapid adoption of AI services without corresponding security is inherently risky,” Nagli said in a statement shared with The Hacker News. “While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like the accidental external exposure of databases.”

“Protecting customer data must remain the top priority for security teams, and it is crucial that security teams work closely with AI engineers to safeguard data and prevent exposure.”

DeepSeek has become the topic du jour in AI circles for its groundbreaking open-source models, which claim to rival leading AI systems from the likes of OpenAI while also being efficient and cost-effective. Its reasoning model R1 has been hailed as “AI’s Sputnik moment.”

The upstart’s AI chatbot has raced to the top of the app store charts across Android and iOS in several markets, even as it has emerged as the target of “large-scale malicious attacks,” prompting it to temporarily pause registrations.

In an update posted on January 29, 2025, the company said it has identified the issue and that it’s working towards implementing a fix.

At the same time, the company has also come under scrutiny over its privacy policies, while its Chinese ties have become a matter of national security concern for the United States.

Furthermore, DeepSeek’s apps became unavailable in Italy shortly after the country’s data protection regulator requested information about its data handling practices and where it obtained its training data. It’s not known if the withdrawal of the apps was in response to questions from the watchdog.

Bloomberg, The Financial Times, and The Wall Street Journal have also reported that both OpenAI and Microsoft are probing whether DeepSeek used OpenAI’s application programming interface (API) without permission to train its own models on the output of OpenAI’s systems, an approach referred to as distillation.
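In broad terms, distillation in this context means fine-tuning a smaller “student” model on outputs generated by a larger “teacher” model. The sketch below illustrates the general idea only, using a stubbed, hypothetical teacher function rather than any real API, and does not reflect how DeepSeek’s models were actually trained.

```python
from typing import Callable

# Hypothetical stand-in for a larger "teacher" model's completion API.
def teacher_complete(prompt: str) -> str:
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts: list[str],
                               teacher: Callable[[str], str]) -> list[dict]:
    """Pair each prompt with the teacher's output; the resulting
    (prompt, response) pairs become supervised fine-tuning data for a
    smaller student model."""
    return [{"prompt": p, "response": teacher(p)} for p in prompts]

dataset = build_distillation_dataset(
    ["Explain ClickHouse in one sentence.", "What is model distillation?"],
    teacher_complete,
)
for example in dataset:
    print(example)
```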

“We know that groups in [China] are actively working to use methods, including what’s known as distillation, to try to replicate advanced US AI models,” an OpenAI spokesperson told The Guardian.
