EU AI Act sets new global standard for ethical AI use
The EU AI Act's entry into force on 1 August 2024 marks a pivotal moment for businesses, tech firms, and regulatory bodies across Europe.
Significant changes are expected as companies navigate the new rules aimed at ensuring the ethical and transparent use of artificial intelligence (AI). The legislation, widely regarded as the world's first comprehensive legal framework for AI, positions the EU as a front-runner in global technology governance.
Julian Mulhare, Managing Director for EMEA at Searce, highlighted the necessity for businesses to understand their obligations under the new legislation. "With the EU AI Act starting this week, businesses need to understand their new obligations to remain compliant and avoid crippling fines. Compliance with copyright laws and transparency is crucial for both general-purpose AI systems, like chatbots, and generative AI models. Detailed technical documentation and clear summaries of training data will be necessary," Mulhare stated.
Maintaining agility in AI processes, according to Mulhare, will be critical. "Companies need modular AI processes for easy updates—avoiding a complete overhaul. A dedicated team and budget for AI maintenance are essential. As AI becomes increasingly integrated, it will impact all business areas. Investing in compliance infrastructure, enhancing documentation and transparency, and instilling robust cybersecurity measures will be imperative," he added.
Pieter Arntz, Senior Threat Researcher at Malwarebytes, also raised concerns about the framework's implications. "Looking at the EU AI Act, I am immediately reminded of NIS2. They are very much alike. This is because laws are often playing catch-up with technological developments," he observed. Arntz stressed that while the law classifies AI models by risk level, interpreting those classifications in the courts could prove complex.
"Systems considered a threat to people will be banned, but this is immediately obfuscated by examples that address privacy, discrimination, and the use of biometrics. There are many cases where exceptions for law enforcement exist," Arntz elaborated. He pointed out that traditional product safety regulations are hard to translate into AI regulation, given the evolving nature of AI.
Jonathan Armstrong, Partner at Punter Southall Law, offered a legal perspective on the new regulations. "The EU AI Act represents a significant leap in AI governance, but it is not without its challenges. It's quite a hybrid piece of legislation based on influences from previous EU laws like product safety, competition, and GDPR," Armstrong explained.
Armstrong underlined the importance of preparation for businesses: "Organisations should assess their current use and planned use of AI systems. Conduct a compliance gap analysis and identify affected business areas. Building a bespoke Action Plan is essential, including training employees and raising awareness about AI risks and opportunities."
The impact of the EU AI Act won't be confined to the EU. Armstrong pointed out that UK businesses must also prepare. "The UK government's position on AI regulation has evolved, with plans for new AI legislation. This includes setting up a Regulatory Innovation Office to support existing regulators. The new law might be a simplified version of the EU AI Act," Armstrong added.
David Shepherd, Senior Vice President of EMEA at Ivanti, concurred and emphasised the need for continuous adaptation to AI regulation. "With AI regulation clearly on the government's radar, continuous monitoring of the impact of AI is crucial, so that everyone's interests and anxieties are managed," he remarked.
The consensus among experts is that while the EU AI Act will impose stringent requirements on businesses, it could also set a global standard for ethical AI use. The legislation aims to prevent misuse, ensure transparency, and protect individual rights in a rapidly digitalising world.