IT Brief UK - Technology news for CIOs & IT decision-makers

New book urges stronger AI governance for business leaders

Fri, 21st Nov 2025

A new book examines the rise of artificial intelligence (AI) in business and society, underlining the significant governance gaps that persist as organisations roll out advanced technologies. Governing the Machine, by AI governance experts Ray Eitel-Porter, Dr Paul Dongha and Miriam Vogel, provides a practical framework for business leaders navigating the risks of AI while aiming to capture its benefits.

Governance shortfall

Recent research by EY estimates that around three-quarters of companies are already using generative AI, yet only a third reportedly have controls in place to manage the associated risks responsibly. The rapid implementation of AI is outpacing the maturity of internal policies and oversight, exposing businesses to legal, reputational, and operational vulnerabilities.

The authors, who bring experience from sectors including consulting, finance, and policy, highlight that trust in AI is essential for stakeholders ranging from employees to investors. Their work maps out the complex regulatory and ethical terrain organisations must cross to establish reliable and transparent AI usage.

Risk framework

The book introduces a structured approach for identifying and managing emerging challenges tied to AI adoption. This framework addresses nine areas: accuracy and reliability; fairness and bias; interpretability, explainability and transparency; accountability; privacy; security; intellectual property and confidentiality; workforce; and environmental and sustainability concerns.

It synthesises several leading AI risk management models and proposes an adaptable system suitable for organisations at different levels of technological maturity. The authors advocate for companies to embed these principles into governance practices to ensure compliance with existing legal standards, even as new laws are anticipated.

Industry insights

The content draws on interviews with AI executives at major firms and includes a foreword by Andrew Ng, founder of deeplearning.ai. It reflects the authors' collective experience advising corporations, public institutions, and governments on AI policy and operational strategy. Their analysis identifies commonalities in risk mitigation strategies across sectors and geographies.

Dr Paul Dongha directs Responsible AI and AI Strategy at NatWest Group, where he focuses on balancing innovation with regulatory and customer protection. Miriam Vogel leads EqualAI, supporting the development of responsible AI practices, and chairs the US National AI Advisory Committee.

Ray Eitel-Porter, Senior Research Associate at Jesus College, Cambridge and former head of Accenture's Responsible AI practice, consults for multinationals and the public sector on AI governance.

External endorsement

Reid Hoffman, co-founder of LinkedIn, commented:

"Responsible AI isn't just a technical challenge - it's a leadership imperative. This book offers essential guidance for anyone navigating the promise of deploying AI at scale."

Professor Mike Wooldridge, Ashall Professor of the Foundations of Artificial Intelligence at the University of Oxford, stated:

"Deserves to become the handbook for the field," said Wooldridge.
