Responsible AI development in the UK: Yohan Lobo's perspective
As the UK government moves towards implementing legal frameworks for AI safety and security, Yohan Lobo, Industry Solutions Manager, Financial Services at M-Files, discusses how businesses can contribute to the responsible growth of AI research and adoption.
AI safety and security has been a hotly discussed topic in recent weeks: numerous high-profile figures expressed concern about the pace of global AI development at the UK's AI Safety Summit, held at Bletchley Park.
King Charles weighed in on the subject when virtually addressing the summit's attendees, stating: "There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure."
Additionally, in his first King's Speech, delivered on Tuesday to set out the UK government's legislative agenda for the coming session of parliament, King Charles announced the government's intention to establish "new legal frameworks to support the safe commercial development" of revolutionary technologies such as AI.
M-Files' Lobo believes that the responsible growth and adoption of AI largely depend on businesses leveraging AI solutions driven by high-quality data. Speaking on the technology's potential, Lobo maintains that "mass adoption of AI presents one of the most significant opportunities in corporate history," as businesses look to capitalise on it to increase efficiency and scale their organisations.
However, with the reservations raised at the UK's AI Safety Summit and in the King's Speech, it is evident that businesses should consider how they can protect customers. "Data quality lies at the heart of the global AI conundrum," Lobo adds, emphasising that understanding Large Language Models (LLMs) and ensuring an AI solution is reliable and accurate play a crucial role.
To validate the effectiveness of AI models, Lobo advises that businesses must control where their solutions gain their knowledge. Using trusted, internal company data increases confidence in the answers the AI provides, making these models a powerful tool for boosting organisational efficiency.
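As a rough illustration of that principle, the sketch below grounds every answer in a small set of vetted internal documents and returns the source alongside the passage, refusing to answer when no trusted document matches. The document names, the keyword-overlap scoring and the grounded_answer helper are hypothetical stand-ins for whatever retrieval layer an organisation actually uses; this is not a description of M-Files' implementation.

```python
# Illustrative sketch only: a toy "grounded answer" lookup over vetted
# internal documents, so every response can be traced to a trusted source.
# Document names and the scoring rule are hypothetical.

TRUSTED_DOCS = {
    "expenses-policy-2023.txt": "Employees may claim travel expenses within 30 days of travel.",
    "onboarding-guide.txt": "New starters complete security training during their first week.",
}

def grounded_answer(question: str) -> dict:
    """Return the best-matching passage and its source, or flag a gap."""
    q_terms = set(question.lower().split())
    best_doc, best_score = None, 0
    for name, text in TRUSTED_DOCS.items():
        score = len(q_terms & set(text.lower().split()))
        if score > best_score:
            best_doc, best_score = name, score
    if best_doc is None:
        # No trusted source covers the question: say so rather than guess.
        return {"answer": None, "source": None, "note": "no trusted source found"}
    return {"answer": TRUSTED_DOCS[best_doc], "source": best_doc}

print(grounded_answer("When must travel expenses be claimed?"))
```

The point of returning the source with the answer is that a reader can always verify where the AI's knowledge came from, which is the control over provenance Lobo describes.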
Lobo suggests that human involvement in AI integration also plays a major role in its safe and effective use. Regular audits are necessary, along with treating the AI's output as recommendations rather than instructions. This approach helps businesses maintain control over, and understanding of, how the AI functions.
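A minimal sketch of that "recommendation, not instruction" pattern might look like the following, where an AI suggestion does nothing until a named reviewer accepts or rejects it, and every decision is logged for later audit. The Recommendation class, its field names and the review helper are illustrative assumptions, not part of any particular product.

```python
# Illustrative sketch only: AI output treated as a recommendation that a
# human reviewer must accept or reject, with every decision logged so it
# can be audited later. Names and fields are hypothetical. (Python 3.10+)

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    content: str                      # what the AI suggested
    accepted: bool | None = None      # None until a human decides
    reviewer: str | None = None
    decided_at: datetime | None = None

audit_log: list[Recommendation] = []

def review(rec: Recommendation, reviewer: str, accept: bool) -> Recommendation:
    """Record a human decision; the AI suggestion never acts on its own."""
    rec.accepted = accept
    rec.reviewer = reviewer
    rec.decided_at = datetime.now(timezone.utc)
    audit_log.append(rec)
    return rec

# Hypothetical usage: the suggestion only takes effect if a human accepts it.
rec = Recommendation(content="Flag invoice #1042 as a possible duplicate")
review(rec, reviewer="finance.analyst", accept=False)
print(audit_log)
```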
Lobo concludes that "companies can contribute to the safe and responsible development of AI by only deploying AI solutions that they can trust and fully understand." This begins with controlling the data the technology is built on and ensuring that a human is involved at every stage of deployment, underlining the need to blend new-age technology with human comprehension and supervision.