Agentic AI surge in 2026 sparks fresh cyber security risks
Security specialists expect artificial intelligence systems to create new cyber risks in 2026 as organisations embed agent-based and generative models into core operations, expanding the attack surface beyond traditional infrastructure.
Executives from BeyondTrust and Check Point Software Technologies say rapid rollout of autonomous and large language model (LLM) tools will test existing governance, with attackers targeting both the models and the systems they control.
Agentic AI
Morey Haber, Chief Security Advisor at BeyondTrust, said consumer and enterprise use of operational technology (OT) offers a template for how quickly AI agents could spread through daily life and business processes.
He drew a parallel with connected devices such as cameras and thermostats, which moved from niche deployments to mass use in a short period and are now common targets for cyber attacks.
Haber expects autonomous or semi-autonomous "agentic" AI to be integrated into a wide range of tools and services over the next year, covering routine tasks and complex decision-making.
"The speed at which OT entered our homes and businesses from cameras to thermostats can be measured in days verses years or decades. While very few other technologies including electricity, television, radio, and even the Internet were adopted faster, Agentic AI is poised to dominate our lives in days as well in 2026. In the next year, nearly every technology we operate will be connected to agentic ai and claim benefits from booking travel to optimising the temperature in our homes. While some claims may really help, others will be empty promises or actually make things worse. On the pessimistic side, the rush to deploy agentic AI everywhere will lead to a plethora of attack vectors, breaches, and new security concerns due to excessive privileges, confused deputy problems, and a general lack of guardrails instrumented during typical secure by design processes. The speed to make a name with agentic AI will reflect cybersecurity an afterthought and in the end, the user community will suffer at a rapid rate of adoption and threats," said Morey Haber, Chief Security Advisor, BeyondTrust.
Security teams have raised concerns that agent-based systems, which can trigger actions across multiple applications, may be granted broad permissions by default, making any compromise more damaging.
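The control most often proposed against over-permissioned agents is deny-by-default authorisation, under which each agent is granted only the actions its task requires. The sketch below illustrates the idea; the agent names, action strings, and `ToolRequest` shape are hypothetical rather than any vendor's API.

```python
# Minimal illustrative sketch: deny-by-default gating of agent tool calls.
# Agent IDs and action names are hypothetical, not any specific product's API.
from dataclasses import dataclass

@dataclass
class ToolRequest:
    agent_id: str
    action: str          # e.g. "calendar.read", "payments.transfer"
    target: str

# Each agent is granted only the narrow set of actions its task requires.
ALLOWED_ACTIONS = {
    "travel-booker": {"calendar.read", "flights.search", "flights.book"},
    "thermostat-opt": {"thermostat.read", "thermostat.set"},
}

def authorise(request: ToolRequest) -> bool:
    """Deny by default: an action runs only if explicitly allowlisted."""
    allowed = ALLOWED_ACTIONS.get(request.agent_id, set())
    return request.action in allowed

# A compromised travel agent asking to move money is refused outright.
req = ToolRequest("travel-booker", "payments.transfer", "acct-42")
print(authorise(req))  # False
```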
Haber's comments reflect a wider debate about secure-by-design principles for AI services and the level of oversight needed when models are allowed to initiate changes without human review.
"The speed at which OT entered our homes and businesses from cameras to thermostats can be measured in days verses years or decades. While very few other technologies including electricity, television, radio, and even the Internet were adopted faster, Agentic AI is poised to dominate our lives in days as well in 2026. In the next year, nearly every technology we operate will be connected to agentic ai and claim benefits from booking travel to optimising the temperature in our homes. While some claims may really help, others will be empty promises or actually make things worse. On the pessimistic side, the rush to deploy agentic AI everywhere will lead to a plethora of attack vectors, breaches, and new security concerns due to excessive privileges, confused deputy problems, and a general lack of guardrails instrumented during typical secure by design processes. The speed to make a name with agentic AI will reflect cybersecurity an afterthought and in the end, the user community will suffer at a rapid rate of adoption and threats."
Model exposure
Jonathan Zanger, Chief Technology Officer at Check Point Software Technologies, said the use of generative AI in enterprise environments is creating a different class of risk where the model itself becomes the entry point for attackers.
He expects organisations to expand AI deployments in security operations, customer engagement, and software development, increasing dependence on LLMs hosted internally and via external providers.
Zanger pointed to prompt injection and data poisoning as techniques that allow adversaries to alter a model's behaviour, potentially without leaving traces on underlying infrastructure or networks.
"As enterprises embed generative AI into everything from customer service to threat hunting, the models themselves have become attack surfaces. In 2026, adversaries will exploit prompt injection, inserting hidden instructions into text, code or documents that manipulate an AI system's output, and data poisoning, where corrupted data is used to bias or compromise training sets. These attacks blur the boundary between vulnerability and misinformation, allowing threat actors to subvert an organisation's logic without touching its infrastructure. Because many LLMs operate via third-party APIs, a single poisoned dataset can propagate across thousands of applications. Traditional patching offers no defence; model integrity must be maintained continuously. CISOs must treat AI models as critical assets. This means securing the entire lifecycle, from data provenance and training governance to runtime validation and output filtering. Continuous red-teaming of models, zero-trust data flows, and clear accountability for AI behaviour will become standard practice," said Jonathan Zanger, Chief Technology Officer, Check Point Software Technologies.
Security researchers have warned that organisations often rely on external training data or third-party integrations without full visibility of provenance, which may complicate efforts to detect and remediate poisoning attacks.
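One baseline provenance control is to pin external datasets to cryptographic digests recorded at ingestion, so later tampering becomes detectable. A minimal sketch follows, assuming a simple manifest of file paths to SHA-256 digests; the path and digest shown are placeholders.

```python
import hashlib
from pathlib import Path

# Illustrative manifest mapping dataset files to the SHA-256 digests
# recorded when the data was first vetted. Path and digest are placeholders.
MANIFEST = {
    "data/train.jsonl": "9f2c...e1",   # truncated placeholder digest
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str]) -> list[str]:
    """Return files that are missing or no longer match their recorded digest."""
    return [p for p, digest in manifest.items()
            if not Path(p).exists() or sha256_of(p) != digest]

tampered = verify(MANIFEST)
if tampered:
    print("Provenance check failed for:", tampered)
```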
Vendors and regulators are also examining how accountability for AI decisions should be assigned within corporate structures, particularly where systems are used in regulated sectors such as finance or healthcare.
Governance focus
Haber and Zanger both highlighted a shift from perimeter-based defences to governance of AI systems, including access controls, monitoring, and clear operational boundaries for automated tools.
Haber's warning on "excessive privileges" aligns with a trend towards least-privilege models and just-in-time access in identity and access management, which many firms are now applying to machine identities as well as human users.
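In a just-in-time model, a machine identity receives a credential minted for a single action and expiring minutes later, rather than a standing key. The sketch below is schematic; the token structure and five-minute lifetime are assumptions, not the design of any particular identity product.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    identity: str
    scope: str        # the single action this token authorises
    secret: str       # opaque bearer value; would be signed and audited in practice
    expires_at: float

def mint_token(identity: str, scope: str, ttl_seconds: int = 300) -> ScopedToken:
    """Issue a short-lived credential scoped to one action (illustrative)."""
    return ScopedToken(identity, scope,
                       secrets.token_urlsafe(32),
                       time.time() + ttl_seconds)

def is_valid(token: ScopedToken, requested_scope: str) -> bool:
    return token.scope == requested_scope and time.time() < token.expires_at

tok = mint_token("agent:report-writer", "storage.read")
print(is_valid(tok, "storage.read"))   # True within the 5-minute window
print(is_valid(tok, "storage.write"))  # False: scope was never granted
```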
Zanger stressed the need to manage AI across its lifecycle, from initial data collection to deployment and ongoing testing, describing continuous validation as necessary to maintain trust in outputs.
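At runtime, continuous validation can be as simple as refusing to act on model output that fails structural and policy checks. The sketch below gates a hypothetical JSON-formatted model response before it reaches downstream systems; the required keys and denylist are illustrative assumptions.

```python
import json

# Illustrative runtime gate: model output must parse as JSON, match an
# expected shape, and avoid denylisted actions before it is acted on.
REQUIRED_KEYS = {"action", "target"}
DENYLISTED_ACTIONS = {"delete_all", "disable_logging"}

def validate_output(raw: str) -> dict | None:
    """Return the parsed response if it passes all checks, else None."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, dict):
        return None
    if not REQUIRED_KEYS <= parsed.keys():
        return None
    if parsed["action"] in DENYLISTED_ACTIONS:
        return None
    return parsed

print(validate_output('{"action": "summarise", "target": "report.pdf"}'))
print(validate_output('{"action": "delete_all", "target": "*"}'))  # None
```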
"As enterprises embed generative AI into everything from customer service to threat hunting, the models themselves have become attack surfaces. In 2026, adversaries will exploit prompt injection, inserting hidden instructions into text, code or documents that manipulate an AI system's output, and data poisoning, where corrupted data is used to bias or compromise training sets. These attacks blur the boundary between vulnerability and misinformation, allowing threat actors to subvert an organisation's logic without touching its infrastructure. Because many LLMs operate via third-party APIs, a single poisoned dataset can propagate across thousands of applications. Traditional patching offers no defence; model integrity must be maintained continuously. CISOs must treat AI models as critical assets. This means securing the entire lifecycle, from data provenance and training governance to runtime validation and output filtering. Continuous red-teaming of models, zero-trust data flows, and clear accountability for AI behaviour will become standard practice," said Zanger.