AI agents expose data, security & skills bottlenecks
Enterprise use of AI agents is expected to expose new bottlenecks in data infrastructure and security in 2026, as vendors predict consolidation around a small number of agent frameworks and warn that identity protection and specialised talent will struggle to keep pace.
Executives across infrastructure, security and access management say context handling, identity control and skills are emerging as the most pressing constraints on deploying large-scale agent-based systems, rather than model performance alone.
Vendors also expect the broad term "AI agent" to fragment into more precise categories as organisations distinguish between agents that run locally, in data centres, on behalf of users or as independent system components.
Context bottleneck
Manvinder Singh, VP of AI Product Management at Redis, said many deployments are now constrained by how quickly and consistently they can provide relevant information to agents that need to act across applications and data sources. He said developers are struggling with architectures that combine vector databases, long-term memory stores, transactional databases and caches, each with its own interface and latency profile.
"The rise of context engines: By 2026, as AI agents become deeply embedded in software and business systems, their biggest bottleneck won't be reasoning; it will be serving them the right context at the right time. Developers are realising that stitching together vector databases, long-term memory storage, session stores, SQL databases, and API caches creates a fragile patchwork of solutions," said Singh.
"The next evolution will be unified 'context engines', platforms that can store, index, and serve all forms of data through a single abstraction layer. These systems will merge structured and unstructured retrieval, manage both persistent and ephemeral memory, and dynamically route information across diverse sources. This unification will replace fragmented architectures, reduce latency, simplify development, and enable AI agents to operate with fluid, on-demand intelligence across all data modalities."
Database and infrastructure providers are racing to position their products as central repositories for retrieval-augmented generation (RAG) and long-term memory for agents, as enterprises embed agent capabilities into customer service, operations, and software development workflows.
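As a rough illustration only, the "single abstraction layer" Singh describes might look like a retrieval interface that fans queries out across otherwise separate stores and merges the results. The class, store names and retrievers below are hypothetical, not any vendor's product:

```python
# Hypothetical sketch of a "context engine": one query interface that
# routes across otherwise separate stores (all names are illustrative).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ContextEngine:
    # Each source is a named retriever: query string -> list of snippets.
    sources: dict[str, Callable[[str], list[str]]] = field(default_factory=dict)

    def register(self, name: str, retriever: Callable[[str], list[str]]) -> None:
        self.sources[name] = retriever

    def query(self, text: str, limit: int = 5) -> list[str]:
        # Fan out to every registered store, then merge and de-duplicate,
        # so the agent sees a single abstraction over all its context.
        merged: list[str] = []
        for name, retrieve in self.sources.items():
            for snippet in retrieve(text):
                tagged = f"[{name}] {snippet}"
                if tagged not in merged:
                    merged.append(tagged)
        return merged[:limit]

# In-memory stand-ins for a session store and a document/vector store.
session_memory = {"order": "User asked about order #123 yesterday."}
documents = ["Refund policy: 30 days.", "Shipping takes 3-5 days."]

engine = ContextEngine()
engine.register("session", lambda q: list(session_memory.values()))
engine.register("docs", lambda q: [d for d in documents
                                   if "refund" in q.lower() and "Refund" in d])

print(engine.query("What is the refund policy?"))
```

A production version would replace the lambdas with real vector search and database clients; the point of the sketch is that the agent only ever calls `query()`, while routing, merging and latency management live behind that one interface.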
Framework shake-out
Alongside infrastructure consolidation, Singh expects a smaller number of frameworks for building and orchestrating agent workflows to dominate over the next two years. "The agent framework wars will crystallise: By 2026, the winners in the AI Agent Framework race will finally emerge. As with every major platform era, from mobile operating systems to cloud infrastructure, network effects will drive consolidation around two or three dominant players."
"Developer familiarity, deep ecosystem investments (e.g., database vendors building integrations for memory and RAG), and the gravitational pull of community mindshare will narrow the field. LangGraph, with its early momentum and tight integration across agent orchestration and memory, is well-positioned to claim one of those top spots. But 2025 saw interesting launches and investments, including the new Microsoft Agent Framework, Google's Agent Development Kit, Amazon's Strands SDK and OpenAI's Agents SDK."
"The defining trait of the ultimate winners won't just be technical performance; it will be openness. Frameworks that encourage extensibility, embrace interoperability standards, and foster thriving third-party ecosystems will dominate. Open ecosystems allow innovation at the edges: memory stores, vector stores, shared libraries, and cross-platform compatibility will turn frameworks into self-sustaining platforms. Just as Android and iOS built empires on developer participation, the leading agent frameworks will become ecosystems where thousands of companies can innovate together," concluded Singh.
Tooling from hyperscale cloud providers, model vendors and independent projects is expanding quickly, prompting enterprises to weigh early bets on orchestration platforms that can connect to multiple models and data systems.
Identity risk
Security leaders expect the spread of AI agents into operational systems to expand the attack surface, particularly when agents have direct access to data, applications, or identity infrastructure.
David Rajkovic, Regional Vice President A/NZ at Rubrik, said Australian organisations are already concerned about identity-driven attacks and see AI as both a capability boost and a risk multiplier. "AI agents are a force multiplier, but that force cuts both ways. Our Rubrik Zero Labs research found that 98 per cent of Australian security leaders cite identity-driven attacks as their top concern. With 99 per cent already integrating or planning to integrate AI into identity systems, the stakes have never been higher."
"A compromised agent can unleash ten times the damage in one-tenth of the time. Securing AI agent identities and access controls is critical. We've already seen the impact compromised human identities can have, and it's clear agentic identities will be the next battleground in 2026."
Security teams are starting to explore policies, monitoring and governance for non-human identities that can initiate actions, call APIs and move laterally within environments at machine speed.
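As a rough illustration only (the class, scopes and policy shape are invented, not any vendor's API), governance for these non-human identities typically reduces to least-privilege scoping, short-lived credentials, and an audit trail for every action an agent attempts:

```python
# Hypothetical sketch: scoping and auditing a non-human (agent) identity.
# All names and the policy shape are illustrative assumptions.
import time

class AgentIdentity:
    def __init__(self, agent_id: str, allowed_actions: set[str], ttl_seconds: int):
        self.agent_id = agent_id
        self.allowed_actions = allowed_actions        # least-privilege scope
        self.expires_at = time.time() + ttl_seconds   # short-lived credential
        self.audit_log: list[tuple[float, str, bool]] = []

    def authorise(self, action: str) -> bool:
        permitted = action in self.allowed_actions and time.time() < self.expires_at
        # Every attempt is recorded, allowed or denied, for later review.
        self.audit_log.append((time.time(), action, permitted))
        return permitted

agent = AgentIdentity("billing-agent", {"read:invoices"}, ttl_seconds=300)
print(agent.authorise("read:invoices"))   # True: in scope and unexpired
print(agent.authorise("delete:backups"))  # False: outside the granted scope
```

The design choice worth noting is that denied attempts are logged as well as allowed ones: an agent probing outside its scope at machine speed is exactly the lateral-movement signal security teams want surfaced.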
Talent pressure
Alongside infrastructure and security, organisations expect to face constraints in specialist skills as AI agents are embedded across workflows.
Ev Kontsevoy, CEO at Teleport, said demand for employees who can design, operate and supervise agent systems will outstrip supply, even as headlines focus on automation risks. "Every technology boom creates talent market disruptions. The narrative of 'AI replacing jobs' is not correct. Instead, we face a shortage of highly skilled, AI-native talent. Every CEO is, or should be, worried about recruiting and training these AI operators who are capable of utilising AI tools in the most effective way."
"Agentic AI will be defined more granularly: During 2024 and 2025, 'AI agent' referred to software utilising LLMs for decision-making. The reality is more complicated. Some agents run in data centres, others are fully local, others act on behalf of a human owner, while others have their own identity. The industry will create more granular terms to denote the different types of AI agents and how they are deployed," said Kontsevoy.
Vendors expect this greater precision around agent definitions, coupled with consolidation in frameworks and context infrastructure, to shape how enterprises design and govern large-scale deployments over the next two years.