IT Brief UK - Technology news for CIOs & IT decision-makers

Europe warned over AI security gap despite AI Act lead

Tue, 13th Jan 2026

Kiteworks research has found that European organisations lag global benchmarks on several security controls linked to AI systems, including anomaly detection, incident response and visibility of AI software components.

The company’s Data Security and Compliance Risk: 2026 Forecast Report surveyed security, IT, compliance and risk leaders across 10 industries and eight regions. It reported a gap between Europe’s AI regulatory direction and practical security measures used to manage AI-related risk.

Detection shortfalls

The report highlighted weaker AI anomaly detection rates in several large European markets. It put France at 32%, Germany at 35% and the UK at 37%, compared with a 40% global benchmark. The report described AI anomaly detection as the ability to identify when AI models behave unexpectedly.
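The report does not describe any specific detection method, but the control it measures can be illustrated with a minimal sketch: compare a model metric (here, output confidence) against a rolling baseline and flag sharp deviations. The metric, values and threshold below are illustrative assumptions, not from the report.

```python
import statistics

def detect_anomaly(history, value, threshold=3.0):
    """Flag a model metric (e.g. output confidence) that deviates
    sharply from its recent baseline, using a simple z-score check."""
    if len(history) < 2:
        return False
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Illustrative baseline of recent confidence scores
baseline = [0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.93, 0.92]
print(detect_anomaly(baseline, 0.92))  # typical value -> False
print(detect_anomaly(baseline, 0.40))  # sharp drop -> True
```

Production systems would track many signals (data-access patterns, output distributions, error rates), but the principle of baselining and flagging deviations is the same.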

It also pointed to training-data recovery as another area where European organisations trail. The report put European adoption at 40% to 45%, compared with 47% globally. It said Australia reached 57%.

Supply chain visibility for AI components also ranked lower in Europe, according to the report. It stated that software bill of materials visibility for AI components sat at 20% to 25% across Europe, versus 45% or more in leading regions.
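The report does not say which SBOM format respondents use; as one illustration, an AI bill of materials can be expressed loosely along the lines of CycloneDX's machine-learning BOM conventions. All component names and versions below are hypothetical.

```python
import json

# A minimal AI bill-of-materials sketch, loosely following CycloneDX
# ML-BOM conventions; every name and version here is hypothetical.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "machine-learning-model", "name": "fraud-scoring-model", "version": "2.3.0"},
        {"type": "data", "name": "transactions-training-set", "version": "2025-Q3"},
        {"type": "library", "name": "torch", "version": "2.4.1"},
    ],
}

print(json.dumps(ai_bom, indent=2))
```

Listing models and datasets alongside libraries is what distinguishes an AI SBOM from a conventional one: it makes third-party models and training data visible to the same inventory and audit processes.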

“Europe has led the world on AI governance frameworks with the AI Act setting the global standard for responsible AI deployment. But governance without security is incomplete,” said Wouter Klinkhamer, GM of EMEA Strategy & Operations, Kiteworks.

Kiteworks linked its findings to risks in breach detection and response where AI systems behave outside expected parameters. It said organisations that lack monitoring and recovery controls face difficulty identifying threats and investigating incidents involving AI models.

“When an AI model starts behaving anomalously, such as accessing data outside its scope, producing outputs that suggest compromise, or failing in ways that expose sensitive information, European organisations are less equipped than their global counterparts to detect it. That's not a compliance gap. That's a security gap,” said Klinkhamer.

Six predictions

The report set out six predictions for European organisations in 2026. It said AI-specific breach detection would lag other regions, based on the gap between European and global anomaly detection levels.

It said AI incident response would remain incomplete, and again pointed to training-data recovery levels. The report associated that control with forensic investigation of AI failures and the ability to show regulators what happened during an incident.

The report also said AI supply chain visibility would remain a blind spot. It focused on lower adoption of software bill of materials approaches for AI components. It framed this as a constraint on understanding third-party libraries, datasets and frameworks used in AI deployments.

Another prediction concerned incident preparedness with external AI suppliers. The report said only 4% of French organisations and 9% of UK organisations had joint incident response playbooks with their AI vendors. It said this left gaps in communication and containment when incidents originated in vendor systems.

The report added that AI governance evidence would remain manually generated in many European organisations. It described a pattern of “continuous but manual” compliance rather than automated evidence generation. It linked that to operational friction in producing documentation and potential disputes around insurance claims where organisations cannot demonstrate controls.


Operational exposure

Kiteworks said the implications went beyond compliance. It said AI systems increasingly process sensitive data, make autonomous decisions and integrate with critical infrastructure. It said gaps in monitoring and inventorying AI components increased exposure to adversarial inputs, data poisoning and model manipulation.

The report also described unified audit trails and training-data recovery as “keystone capabilities”. It said these measures predicted stronger outcomes across other security metrics. It said organisations that implemented them showed measurable advantages in the study.

In the report’s view, European organisations face an execution challenge as they move from policy and governance frameworks to operational security controls around AI. It said organisations that close gaps in anomaly detection, training-data recovery, supply chain visibility and vendor incident coordination would be better placed to manage compliance obligations and withstand attacks.

“The AI Act establishes what responsible AI governance looks like. The question for European organisations is whether they can secure what they're governing,” said Klinkhamer. “By end of 2026, the organisations that have closed the gap between AI policy and AI security through anomaly detection, training-data recovery, supply chain visibility and vendor incident coordination will be positioned for both compliance and resilience. Those still running AI workloads without detection capabilities will learn about their security gaps the hard way: from attackers, not auditors.”