
A recent analysis of enterprise data suggests that generative AI tools developed in China are being used extensively by employees in the US and UK, often without oversight or approval from security teams. The study, conducted by Harmonic Security, also identifies hundreds of instances in which sensitive data was uploaded to platforms hosted in China, raising concerns over compliance, data residency, and commercial confidentiality.
Over a 30-day period, Harmonic examined the activity of a sample of 14,000 employees across a range of companies. Nearly 8 percent were found to have used China-based GenAI tools, including DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (from Alibaba), and Manus. These applications, while powerful and easy to access, typically provide little information on how uploaded data is handled, stored, or reused.
The findings underline a widening gap between AI adoption and governance, especially in developer-heavy organizations where time-to-output often trumps policy compliance.
If you’re looking for a way to enforce your AI usage policy with granular controls, contact Harmonic Security.
Data Leakage at Scale
In total, over 17 megabytes of content were uploaded to these platforms by 1,059 users. Harmonic identified 535 separate incidents involving sensitive information. Nearly one-third of that material consisted of source code or engineering documentation. The remainder included documents related to mergers and acquisitions, financial reports, personally identifiable information, legal contracts, and customer records.
Harmonic’s study singled out DeepSeek as the most prevalent tool, associated with 85 percent of recorded incidents. Kimi Moonshot and Qwen are also seeing uptake. Collectively, these services are reshaping how GenAI appears inside corporate networks: not through sanctioned platforms, but through quiet, user-led adoption.
Chinese GenAI services frequently operate under permissive or opaque data policies. In some cases, platform terms allow uploaded content to be used for further model training. The implications are substantial for firms operating in regulated sectors or handling proprietary software and internal business plans.
Policy Enforcement Through Technical Controls
Harmonic Security has developed tools to help enterprises regain control over how GenAI is used in the workplace. Its platform monitors AI activity in real time and enforces policy at the moment of use.
The platform gives companies granular controls to block access to certain applications based on their headquarters location, restrict specific types of data from being uploaded, and educate users through contextual prompts.
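To make these controls concrete, the sketch below shows one way such an enforcement check might look. This is purely illustrative and assumes a simple model of the problem; it is not Harmonic's actual API or implementation. The domain list, pattern names, and `evaluate_upload` function are all hypothetical.

```python
import re

# Hypothetical policy sketch (not Harmonic's product): decide whether an
# attempted upload should be blocked, flagged with a contextual warning,
# or allowed.

BLOCKED_DOMAINS = {"chat.deepseek.com", "kimi.moonshot.cn"}  # illustrative list

# Simple patterns standing in for real data-classification logic.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_upload(destination: str, content: str) -> dict:
    """Return a policy decision for an attempted upload."""
    # Destination-based control: block disallowed applications outright.
    if destination in BLOCKED_DOMAINS:
        return {"action": "block", "reason": f"destination {destination} is disallowed"}
    # Content-based control: detect sensitive data in the payload.
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(content)]
    if hits:
        # Educate rather than silently drop: surface a contextual prompt.
        return {"action": "warn", "reason": "possible sensitive data: " + ", ".join(hits)}
    return {"action": "allow", "reason": "no policy match"}
```

A real deployment would sit inline with network or browser traffic and use far richer classifiers, but the three decision paths mirror the controls described above: blocking by application, restricting by data type, and prompting the user in context.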
Governance as a Strategic Imperative
The rise of unauthorized GenAI use inside enterprises is no longer hypothetical. Harmonic’s data show that nearly one in twelve employees is already interacting with Chinese GenAI platforms, often with no awareness of data retention risks or jurisdictional exposure.
The findings suggest that awareness alone is insufficient. Firms will require active, enforced controls if they are to enable GenAI adoption without compromising compliance or security. As the technology matures, the ability to govern its use may prove just as consequential as the performance of the models themselves.
Harmonic makes it possible to embrace the benefits of GenAI without exposing your business to unnecessary risk.
Learn more about how Harmonic helps enforce AI policies and protect sensitive data at harmonic.security.