IT Brief UK - Technology news for CIOs & IT decision-makers

GenAI misuse predicted to cause over 40% of AI data breaches by 2027

Tue, 18th Feb 2025

A recent analysis by Gartner has predicted that more than 40% of AI-related data breaches will arise from the misuse of generative AI (GenAI) across borders by the year 2027.

The rapid adoption of GenAI technologies among end-users has raised concerns due to the lag in the development of data governance and security measures necessary to safeguard sensitive information. This situation is especially pronounced in the context of data localisation, given the centralised computing power that these technologies demand.

Joerg Fritsch, Vice President Analyst at Gartner, highlighted the risks: "Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated into existing products without clear descriptions or announcement." Fritsch further remarked on the changes organisations are observing in the content generated by employees using GenAI tools: "While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations."

A key challenge identified by Gartner is the lack of consistent global best practices and standards for AI and data governance. This gap leads to market fragmentation and forces companies to develop strategies tailored to specific regions, which can hinder their ability to leverage AI products and services effectively across the globe.

"The complexity of managing data flows and maintaining quality due to localised AI policies can lead to operational inefficiencies," noted Fritsch. "Organisations must invest in advanced AI governance and security to protect sensitive data and ensure compliance. This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes."

Gartner has projected that by 2027, AI governance will become a universal requirement under sovereign AI laws and regulations globally. Organisations that fail to incorporate the required governance models and controls may find themselves at a competitive disadvantage, particularly those without the resources to swiftly adapt their existing data governance frameworks, according to Fritsch.

To mitigate the risks associated with AI data breaches and to ensure compliance with emerging regulations, Gartner recommends strategic actions, beginning with enhanced data governance. This means extending data governance frameworks to include guidelines specific to AI-processed data, monitoring for unintended cross-border data transfers, incorporating data lineage, and conducting data transfer impact assessments as part of regular privacy impact assessments.

Establishing governance committees is also advised to improve oversight and transparent communication regarding AI deployments and data handling. These committees would handle technical oversight, risk and compliance management, and communication and decision reporting.

Fritsch also emphasised strengthening data security through the use of advanced technologies like encryption and anonymisation to protect sensitive information. He suggested verifying Trusted Execution Environments in specific regions and applying advanced anonymisation techniques such as Differential Privacy when data must cross territorial boundaries.
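The article does not detail how differential privacy would be applied; a minimal sketch, assuming a simple count statistic released via the classic Laplace mechanism (the function names and parameters here are illustrative, not Gartner's), looks like this:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-transform sampling."""
    # Clamp away from 0.0 so log(1 - 2*|u|) never hits log(0).
    u = max(random.random(), 1e-12) - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Smaller epsilon means more noise and stronger privacy; sensitivity is
    how much one individual's record can change the count (1 for a count).
    """
    return true_count + laplace_noise(sensitivity / epsilon)
```

In practice, an organisation would apply such noise before aggregated statistics derived from personal data cross a territorial boundary, tuning epsilon to the privacy budget its policy allows.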

Organisations are encouraged to invest in TRiSM (trust, risk, and security management) products and capabilities tailored to AI technologies. This recommendation includes AI governance, data security governance, prompt filtering and redaction, and the synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises employing AI TRiSM controls will reduce the intake of inaccurate or illegitimate information by at least 50%, thereby decreasing faulty decision-making.
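Gartner does not specify how prompt filtering and redaction would be implemented; a minimal sketch, assuming regex-based placeholder substitution before a prompt leaves the organisation (the patterns and names below are hypothetical, and a production filter would rely on a vetted PII-detection library), might look like:

```python
import re

# Hypothetical patterns for common sensitive values; illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NI": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}


def redact_prompt(prompt: str) -> str:
    """Replace each matched sensitive value with a bracketed placeholder
    before the prompt is sent to an externally hosted GenAI API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Such a filter would typically sit in a gateway between internal users and third-party GenAI endpoints, addressing the risk Fritsch describes of sensitive prompts reaching tools hosted in unknown locations.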
