One in fourteen workers use China-based AI apps, study shows

Research from Harmonic Security has found that employees use an average of 254 AI-enabled applications in the workplace, and that 7% of users have experimented with China-based apps.

The analysis examined 176,460 prompts submitted to various Generative AI platforms by 8,000 end-users across different companies during the first quarter of 2025. The study revealed that 6.7% of all prompts reviewed potentially disclosed company data.

Within the subset of prompts carrying potential risks, 30.8% related to legal and finance data, 27.8% involved customer data, 14.3% concerned employee data, and 10.1% exposed sensitive code. Nearly half (45.4%) of the submissions containing sensitive data originated from employees accessing Generative AI tools using personal email accounts, a practice that could circumvent IT oversight and established security protocols.

According to Harmonic Security, ChatGPT was the most commonly used tool for submitting sensitive data, accounting for 79.1% of such incidents. During the analysis period, image files represented 68.3% of all file types uploaded to ChatGPT. Other frequently uploaded file types included .pdf (13.4%), .docx (5.46%), .xlsx (4.9%), .csv (3.17%), and .pptx (1.45%).

During the same period, the research also found that 7% of users logged into China-based AI platforms, including DeepSeek, Manus, Ernie Bot, Qwen Chat, and Baidu Chat.

Alastair Paterson, Chief Executive Officer and Co-Founder of Harmonic Security, said: "There are several areas of concern. Firstly, the nature of the prompts going into the tools, which put valuable company data at risk. Secondly, that so much use is with personal accounts, which are often outside of company IT control. App providers will often stipulate they train their models on free tiers. Finally, the sheer volume of apps being used and particularly the 7% associated with Chinese tools. The Chinese government can likely just request access to this data, and data shared with DeepSeek, Manus, or Qwen should be considered property of the Chinese Communist Party."

To address the risks identified in its report, Harmonic Security recommends several measures for organisations seeking to safeguard sensitive data while enabling the use of Generative AI tools. These measures include continuous monitoring for unauthorised app use, establishing robust vetting processes, and providing sanctioned alternatives for employees. The company also advises implementing context-aware policies that are based on user roles, the sensitivity of data handled, and potential destination risks.

Harmonic recommends restricting the use of personal accounts for logging into AI platforms, employing policies and technological controls to block or limit the flow of sensitive data into personal AI accounts, and assessing return on investment by understanding how often end users rely on their licensed tools versus free versions and for which specific use cases.

The company further suggests targeted training that educates employees on safe AI practices directly relevant to their responsibilities and usage scenarios.

The findings are based on anonymised, aggregated insights from end-users working in a range of departments within Harmonic Security's customer organisations and cover usage of major platforms including OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, Perplexity AI, and Microsoft Copilot during the first three months of 2025.
