Experts predict AI compliance & security challenges by 2025
Alastair Paterson, the Chief Executive Officer of Harmonic Security, has outlined emerging security predictions for Generative AI by 2025.
According to Paterson, incoming AI compliance frameworks will present significant organisational challenges. "The EU AI Act is the obvious candidate, but there's plenty to pay attention to in the US," he stated. "National regulatory initiatives, such as the proposed SEC rules need attention but there is a growing patchwork of state-level legislation, such as Colorado's Artificial Intelligence Act. This is just one example; there are no fewer than 15 US states that have enacted AI-related legislation, with more in development."
Concerning third-party risk, Paterson anticipates a shift in how companies manage these relationships, particularly in the context of AI services. He noted, "Going forward, I'd wager that we're going to be speaking less about the 'AI problem' and rather a third-party risk problem. Sure, you can block ChatGPT and buy an enterprise subscription to Copilot, but are you really going to block Grammarly, Canva, DocuSign, LinkedIn, or the ever-growing presence of Gemini through your Chrome browser?" He suggests that more organisations may choose to buy rather than build AI systems, resulting in complex management needs.
Addressing AI model threats, Paterson highlighted the industry's focus on improving safeguards against vulnerabilities such as prompt injections and model hallucinations. "At the beginning of 2024, we were told that there was a storm coming. We saw a proliferation of new frameworks and taxonomies for tracking these. It included prompt injections, model hallucinations, bias, and other attacks against AI models," he commented.
He added, "While there has been a host of new prompt injection techniques published, these have been relatively few public cases or organisations' in-house AI models being compromised in this way." This suggests a reliance on model providers to enhance security.
Data security is also anticipated to gain prominence. Paterson expressed concern about the current state of data protection, "When it comes to securing GenAI, one area that is still underserved is data protection. In reality, most approaches to this are using legacy technologies and approaches like regular expressions or labeling. Data security has probably been underserved for the last decade, but some recent money and investments in the space – along with the need – will catapult it to be spoken about a good deal over the next 12 months."
AI within security operations could soon become a mainstream tool, offering benefits for security programmes. Paterson said, "If 2024 was about experimentation and exploration, I anticipate that we're going to start seeing security programs more earnestly embracing AI-for-defense. I'm particularly excited to see what will come of the agentic future of AI. We've already seen plans around Google's Project Mariner, where enterprises will benefit from all sorts of AI use cases at the browser level."