EU AI Act sparks significant shift toward ethical, responsible AI use
As the latest set of provisions of the European Union's Artificial Intelligence Act comes into force, organisations operating across the continent are grappling with a pivotal regulatory shift that could influence the global technology landscape.
The focus now tightly centres on general-purpose AI (GPAI) models, with the legislation expected to have a cascading impact beyond its initial scope.
Protecting EU citizens with safety, accountability and trust
Andreas Vermeulen, Head of AI at Avantra and former adviser to several government bodies on AI policy, commended the EU's comprehensive framework.
"The EU's General-Purpose AI Code of Practice defines a clear duty of care for the future of AI in the region," he said. Vermeulen argued that by "putting safeguards around general-purpose AI systems, the code ensures that EU citizens are effectively protected from potential harms - whether systemic bias, misuse, or opaque decision-making."
He acknowledged that, while the framework could slow certain developments, such caution "is deliberate and necessary: safety, accountability, and trust must come first. By prioritising responsible innovation over reckless speed, the EU is setting a benchmark for ethical AI scaling."
Organisations urged to approach AI methodically
As the new obligations take effect, organisations are urged to approach AI implementation with strategic foresight. Ann Maya, EMEA CTO at Boomi, highlighted the risks of ad-hoc or piecemeal deployment.
"As organisations shift from AI experimentation to meaningful implementation, the complexity of managing governance during that transition must be treated as a strategic priority," Maya said. "The challenge today isn't whether to adopt AI, but how to do so responsibly - with attention to risk, data security, authorisation and access, and long-term impact."
She emphasised that, particularly in Europe, "responsibility is now being codified, as key obligations under the EU AI Act come into force." Maya urged organisations to be mindful not only of regulatory compliance but the broader societal effects, particularly on jobs, decision-making, and how systems are structured.
"Success won't come from applying generative AI in isolation. It requires thoughtful integration across systems, processes, and services," she continued, advocating strategic renewal rather than mere rapid deployment. She concluded that "organisations need to consider...how to orchestrate it - connecting applications, APIs, and data in ways that deliver real-world outcomes while ensuring control remains where it matters most."
A significant milestone beyond compliance in the EU
Industry leaders and technology experts have highlighted the significance of this milestone, not only for compliance but for the broader imperative of responsible artificial intelligence deployment.
Levent Ergin, Chief Climate, Sustainability and Artificial Intelligence Strategist at Informatica, remarked that while "the EU AI Act's 2nd August enforcement date may primarily target general-purpose AI providers, it sets a clear precedent and will trickle downstream." He underlined that "enterprises must be ready to demonstrate that they are using AI in line with responsible practices, even if they're not yet legally required to do so."
Ergin believes this marks the "first true test of AI supply chain transparency". According to Ergin, "if you can't show where your data came from or how your model reasoned, your organisation's data is not ready for AI."
He cited data from Informatica's CDO Insights 2025 survey, which canvassed 600 chief data officers globally, revealing that 48% of European businesses had encountered issues with unauthorised use of personal data in generative AI projects, and 55% reported using incomplete or incorrect data as inputs.
"It's critical they get this right," he stressed, pointing to the need for a robust, centralised AI data governance framework focusing on quality, lineage, governance, and control.
This call for a unified approach is echoed by concerns about divergent regulatory regimes around the world. "It's simply too complex for large international companies to do AI regulation on a country-by-country basis," Ergin said, suggesting that enterprises adopt universal internal standards that exceed minimum legal requirements.
With the EU's new governance rules in effect, the spotlight is now firmly on AI's ethical and operational foundations. The test will be how companies meet these responsibilities, not just to regulators but to the wider society they serve.