Downing Street eyes creation of 'AI Safety Institute' amidst risks
Wed, 11th Oct 2023

Downing Street is reportedly considering the establishment of an 'AI Safety Institute'. The discussions come amid growing global concern over the risks posed by frontier AI, with officials arguing that international cooperation is essential to address the problem.

Downing Street officials have been travelling to meet international counterparts, aiming to agree a joint statement warning of AI risks and calling for international collaboration. The proposed institute would sit at the heart of efforts to address what officials describe as a potential threat to human life.

The envisioned 'AI Safety Institute' would provide a platform for national security-related scrutiny of AI models, with the aim of ensuring their safe and beneficial development. The idea is to give global leaders a forum in which to collaborate on the potential threats and ethical considerations posed by AI.

Last week, the prime minister's spokesperson stopped short of confirming the creation of such a body, but emphasised that collaboration was "key" to managing AI risks. Multi-country cooperation, officials argue, lays the groundwork for the safe development and application of artificial intelligence.

Summit attendees are expected to include AI pioneers such as OpenAI, Google, and Microsoft. It is hoped these leading companies will build on the AI safety commitments they initially agreed with the White House in July. The discussions are likely to explore safety, cybersecurity and national security concerns in depth.

The initiative is in line with the UK's ambition to lead on AI safety. It is hoped that the recently announced frontier AI taskforce will evolve into a permanent institution with an international presence, fostering a safer AI landscape. However, most countries are expected to want to develop their own capabilities in this space.

Oseloka Obiora, CTO at RiverSafe, commented on the issue: "AI offers immense business benefits; however, potential detrimental consequences cannot be overlooked. Businesses must act cautiously, or they put themselves at risk of significant backlash."

He continued: "An 'AI Safety Institute' would hold the reins in managing AI-associated risks, especially concerning cybersecurity. By promoting the rigorous scrutiny of frontier AI models, the Institute can better equip businesses to implement AI with robust cybersecurity measures in place to protect themselves."

A government spokesperson added, "We have been clear these discussions will explore areas for potential collaboration on AI safety research, including evaluation and standards. International dialogues are already in process, and we look forward to convening this conversation in November at the summit."