Consumers demand stronger data security against AI risks
New data from Cohesity indicates a majority of consumers globally are concerned about the extent to which their personal data is being collected by companies, particularly for use in artificial intelligence.
Consumers in the UK, US, and Australia expressed significant dissatisfaction with company data collection practices, with 73% in the UK, 81% in the US, and 82% in Australia saying the amount of personal or financial data collected was excessive. The survey, encompassing over 6,000 participants worldwide, highlights a pervasive discomfort among consumers regarding the security of their data in the face of AI utilisation.
Beyond the collection of data, consumers voiced expectations for companies to enhance their protective measures. In the UK, 73% of respondents, rising to 86% in the US and 87% in Australia, called for heightened diligence in protecting personal information post-collection. Many participants also indicated that a cyberattack could cost a company their business: over 90% across the surveyed countries said they might stop dealing with a firm that fell victim to one.
James Blake, Global Cyber Security Strategist at Cohesity, remarked on the implications of these findings: "Consumers clearly understand that companies have a lot of catching up to do in the area of data governance and security. The hunger for AI is causing some businesses to skip threat modeling and due diligence on how their data will be exposed."
"Companies looking to use AI in-house must invest in the security and hygiene of their data to maintain cyber resilience in order to satisfy these consumers that are willing to vote with their purchases. Those looking to leverage the AI capabilities of suppliers must adopt a strong and proactive approach to third-party risk. Consumer trust is quickly lost, and competitors are always just a click away, so ensuring AI strategies don't introduce additional risk to customer data is crucial," Blake said.
The fears around AI and data handling are extensive, with nearly all consumers surveyed (87% in the UK, 92% in the US, and 93% in Australia) expressing concern that AI might complicate data security and management. A significant portion also classified AI as a risk to data protection (64% in the UK, 72% in the US, and 83% in Australia).
There is also a call for greater transparency and regulation of AI use in handling personal and financial data. A considerable number of respondents (70% in the UK, 81% in the US, and 83% in Australia) are apprehensive about unregulated AI application and are demanding more transparency. Furthermore, 74% in the UK, 85% in the US, and 88% in Australia insist they should provide consent before their data is utilised in AI models.
Additionally, consumers highlighted the importance of vetting third-party providers managing sensitive data. A large majority (79% in the UK, 87% in the US, and 90% in Australia) want clarity on data sharing, with 77% in the UK, 85% in the US, and 90% in Australia expecting companies to scrutinise third-party data security practices.
Blake added: "Paying a ransom rarely results in the recovery of all data. It brings its own logistical challenges and potential criminal liability for paying sanctioned entities - not to mention rewarding criminals. It's time for companies to really focus on aligning themselves with the best cyber resiliency vendors and end the cycle. This is where Cohesity can help."