Organisations are contending with a rising volume of sensitive data being uploaded to generative AI platforms, according to new analysis by Harmonic Security. The firm's latest research found that 26.4% of all file uploads to AI-enabled tools in the third quarter of 2025 involved sensitive data, up from 22% in the previous quarter.
Data exposure growth
The study examined over three million prompts and file uploads across 300 generative and embedded AI tools used by organisations in the United States and United Kingdom. Harmonic Security's analysis found that more than half (57%) of exposed sensitive data related to business or legal information. Of this, contract and policy drafting accounted for 35%, while a separate 35% involved mergers, acquisitions, and financial forecasts.
Technical data represented 25% of all sensitive disclosures. Within this category, 65% was proprietary source code uploaded for debugging or refactoring; the remainder consisted of credentials, security keys, and incident reports. Personal or employee data, including names, addresses, HR records, and payroll details, comprised 15% of the total exposed data.
Personal account use
A significant portion of sensitive data exposure stemmed from employees using personal accounts. The analysis showed that 12% of all such exposures originated from free and personal versions of popular platforms such as ChatGPT, Gemini, Claude, and Meta AI. These accounts, which maintain history and context, can allow sensitive work-related information to remain accessible even after employees leave an organisation.
Tool proliferation
Harmonic Security's report also highlighted that the average enterprise used 27 different AI tools in the third quarter. At the same time, the number of new AI platforms introduced by employees fell from 23 in the second quarter to 11 in the third, which could suggest that employees are moving beyond experimentation and integrating AI into established workflows.
Enterprises also uploaded significantly more data than in previous periods: the average volume rose from 1.32GB in Q2 to 4.4GB in Q3, as AI becomes more embedded in day-to-day activities.
Sector response
"The challenge has shifted from adoption to control: managing the flow of sensitive information through an ecosystem that blurs the line between company and individual. BYOAI is still a big issue; 12% use of personal accounts is still too high and could proliferate further as newly-introduced AI-native browsers take hold. Therefore, governance must occur where work happens with browser-level controls enabling organizations to apply policy at the point of data loss, not retroactively," said Alastair Paterson, CEO and co-founder, Harmonic Security.