IT Brief UK - Technology news for CIOs & IT decision-makers

AI trust paradox exposes gaps in literacy & governance

Wed, 28th Jan 2026

Informatica by Salesforce has published a study of chief data officers that points to a "trust paradox" around corporate AI use, as data leaders report rising employee confidence in AI data alongside widespread concern about data and AI literacy and weak governance.

The report, titled "CDO Insights 2026: Data governance and the trust paradox of data and AI literacy take center stage", draws on a survey of 600 data leaders. It includes results for Europe and the UK on the pace of adoption for generative AI and agentic AI, and on investment priorities for data management and workforce training.

Across Europe, 68% of respondents said their businesses will have started agentic AI pilots by the end of the first quarter of 2026. In the UK, 61% said they plan to start the shift to becoming an agentic enterprise.

European respondents also reported broad uptake of generative AI. The study said 79% of European businesses will have adopted generative AI by the end of the first quarter of 2026.

Trust Paradox

The survey findings highlighted a gap between perceived trust and perceived readiness. European data leaders said 61% of employees trust most or all of the data their organisations use for AI. UK data leaders put that figure at 52%.

At the same time, 96% of European respondents said staff need more training in AI or data literacy to use AI or its outputs responsibly. The report broke this down into 82% calling for more data literacy training and 71% calling for more AI literacy training.

Informatica linked the literacy gap to governance concerns. The study found that 77% of European respondents said their organisation's AI visibility and governance have not kept pace with employee use of AI technology.

"The promise of AI is immense, but so are the risks if you don't have confidence in a reliable data foundation," said Krish Vitaldevara, Chief Product Officer, Informatica.

"Our CDO Insights 2026 report reveals a 'trust paradox': although employees generally trust the data used for AI, many are lacking in data and AI literacy skills, and organizations lack underlying AI governance structures for achieving the responsible and ethical outcomes they desire. This poses significant risk exposure and hurts confidence in AI initiatives," said Vitaldevara.

Why AI Now

The research also asked what is driving organisations towards broader AI adoption. European respondents cited improving business decision-making and enhancing employee collaboration as the top reasons, both at 32%.

Optimising internal processes followed at 28%, with enhancing customer experience and loyalty next at 27%.

The report also examined how organisations plan to source agentic AI tools. In Europe, 55% said they planned to purchase vendor-supplied agents. In the UK, 44% said they expected to follow that route.

Another group said they would develop and manage agents in-house: 45% of European organisations and 55% of UK organisations planned to do so. Among those building internally, 21% said they plan to develop using no-code or low-code platforms.

Investment Priorities

The study suggests a shift in spending plans as AI moves from pilot phases towards broader deployment. It found that 85% of European organisations expect to spend more on data and AI management in 2026, with 23% expecting a significant increase.

When asked about the main reasons for this additional spending, European data leaders cited three areas equally, each at 44%: upskilling employees to improve data and AI fluency, improving data privacy and security, and enhancing data and AI governance.

In the UK, the report said 49% of organisations plan to invest in improving data literacy.

Production Barriers

Data quality and reliability emerged as recurring barriers. The study said 57% of European data leaders see poor data reliability as a key obstacle when moving generative AI initiatives from pilot to production. In the UK, 60% said the same.

The survey also reported concern about AI pilots progressing without fixes to underlying reliability issues. It found that 50% of European data leaders said they are very or extremely concerned that new AI pilots will move forward without addressing data reliability problems evident in previous initiatives. In the UK, 46% reported the same level of concern.

For AI agents moving into production, 51% of European organisations cited data quality and retrieval as the top challenge. Security concerns came next at 46%. Lack of agentic AI expertise followed at 45%.

Respondents also pointed to operational controls and tooling: in Europe, insufficient tools for managing AI agents were cited by 43%, a lack of safety guardrails by 41%, and observability issues by 40%.

In response, organisations reported a mix of process changes and investment in data practices. The report said 58% are improving workflows around data and AI. Another 56% are investing in data and metadata collection and management. It also found that 55% are increasing the frequency of data checks, 54% are increasing investments in data quality, and 52% are hiring or upskilling staff in this area.

"Agentic AI without strong data governance isn't innovation - it's exposure. Blind trust in AI, without the data and AI literacy to match, is giving organisations a false sense of confidence. Encouragingly, there are early signs of maturity in how organisations approach AI and agentic systems. Data leaders are recognising the risk and increasing investment in data governance and compliance foundations," said Emilio Valdés, SVP Sales International, Informatica from Salesforce.

RS Group said it has taken a governance-led approach to the adoption of AI systems. "This report highlights the significant risks of accelerating AI adoption without strong data governance and literacy. At RS Group, we address this challenge by embedding governance and accountability into how we evaluate and scale AI initiatives. For all AI initiatives, we thoroughly evaluate the technological, security, legal, and strategic implications to maximise opportunities while minimising risks. This approach helps ensure innovation moves forward responsibly, with risks understood and value clearly defined from the outset.

"Through investments in robust data-driven solutions, comprehensive upskilling, and close collaboration with partners like Informatica, we believe we are taking the essential steps to foster trusted, responsible AI that delivers real, measurable value to our customers and employees."