
Global study reveals public trust lagging behind growing AI adoption


A global study has found that trust in artificial intelligence remains a significant hurdle, despite widespread and increasing use of the technology.

The survey, titled 'Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025', was led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG. Covering over 48,000 participants across 47 countries between November 2024 and January 2025, the study is described as the most comprehensive of its kind to date.

The findings show that 66% of respondents already use AI regularly, and 83% believe the technology can offer a wide range of benefits. However, only 46% of those surveyed are willing to trust AI systems. Furthermore, 58% of global respondents consider AI to be untrustworthy, highlighting a growing disconnect between usage and confidence in the technology.

Compared with a similar 17-country study conducted before the 2022 release of systems such as ChatGPT, the researchers found that public trust has declined while concerns have grown as AI becomes more integrated into daily life.

Professor Gillespie said, "The public's trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption." She added, "Given the transformative effects of AI on society, work, education, and the economy, bringing the public voice into the conversation has never been more critical."

In the workplace, the study reports that 58% of employees actively use AI, with 31% using it at least weekly. Reported benefits include increased efficiency, improved access to information, and greater innovation. Almost half of respondents (48%) said that AI has contributed to increased revenue-generating activity.

Despite these positives, the use of AI at work also carries risks. Nearly half of employees admitted to using AI in ways that contravene company policies, such as uploading sensitive company data to public platforms like ChatGPT. There is also evidence of complacency: 66% of employees said they rely on AI-generated outputs without verifying their accuracy, and 56% admitted to making mistakes because of AI.

Over half (57%) of employees surveyed said they conceal their use of AI and present AI-generated content as their own work. Only 47% reported having received training in AI, and just 40% said their workplaces have clear policies or guidance on the use of generative AI.

Factors contributing to this behaviour include a sense of urgency: half of employees said they worry about falling behind if they do not actively use AI.

Professor Gillespie commented, "The findings reveal that employees' use of AI at work is delivering performance benefits but also opening up risk from complacent and non-transparent use. They highlight the importance of effective governance and training, and creating a culture of responsible, open and accountable AI use."

The survey also examined the impact of AI across wider society. Four in five people indicated they had personally experienced or observed benefits, such as reduced time on routine tasks, enhanced personalisation, lower costs, and improved accessibility.

Nonetheless, the same proportion expressed concerns about potential risks, with two in five noting negative impacts, including diminished human interaction, cybersecurity issues, the spread of misinformation, inaccurate outcomes, and deskilling. Specifically, 64% of respondents are worried about elections being influenced by AI-powered bots and synthetic content.

There is a strong perceived need for regulation: 70% of respondents say AI requires both national and international regulation, yet only 43% believe current laws are adequate. A large majority (87%) called for stricter laws to combat AI-generated misinformation and expect media and social media organisations to adopt stronger fact-checking processes.

Professor Gillespie said, "The research reveals a tension where people are experiencing benefits from AI adoption at work and in society, but also a range of negative impacts. This is fuelling a public mandate for stronger regulation and governance of AI, and a growing need for reassurance that AI systems are being used in a safe, secure and responsible way."

David Rowlands, KPMG International's Global Head of AI, said the latest findings point to opportunities for organisations to play a greater role in strengthening governance and building trust with employees, consumers, and regulators. "It is without doubt the greatest technology innovation of a generation and it is crucial that AI is grounded in trust given the fast pace at which it continues to advance. Organizations have a clear role to play when it comes to ensuring that AI is both trustworthy and trusted."

Rowlands also said, "People want assurance over the AI systems they use, which means AI's potential can only be fully realized if people trust the systems making decisions or assisting in them. This is why KPMG developed our Trusted AI approach, to make trust not only tangible but measurable for clients."

The study notes marked differences in attitudes and adoption between advanced and emerging economies. Adoption rates, trust, and optimism about AI are higher in emerging economies, along with reported AI literacy (64% compared to 46%) and training (50% compared to 32%). In these regions, three in five people trust AI systems, compared to only two in five in advanced economies.

Professor Gillespie said, "The higher adoption and trust of AI in emerging economies is likely due to the greater relative benefits and opportunities AI affords people in these countries and the increasingly important role these technologies play in economic development."
