IT Brief UK - Technology news for CIOs & IT decision-makers
Red Hat & NVIDIA add controls for enterprise AI agents

Thu, 14th May 2026
Mark Tarre, News Chief

Red Hat has expanded its AI Factory with NVIDIA to support autonomous AI agents in enterprise settings, adding new security and management features to the jointly engineered software stack.

The changes focus on helping companies run long-lived AI agents under tighter controls across hybrid cloud environments. The platform now includes stronger policy enforcement, hardware-based protections and broader lifecycle management for models and agent activity.

Among the updates is support for OpenShell, an open source project founded by NVIDIA that provides a sandboxed runtime for autonomous AI agents. The software is designed to govern how agents execute tasks, which tools and systems they can access, and where inference is routed.

Joint engineering work is under way to integrate OpenShell with Red Hat's full-stack AI platform. Red Hat is also contributing to the upstream OpenShell project as it looks to help shape common approaches to managing autonomous agents across hybrid cloud systems.
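The announcement does not document OpenShell's interfaces, so the following is only a schematic sketch of the tool-gating pattern such a sandboxed runtime enforces: each agent action is checked against an explicit allow-list before it executes, and anything outside policy is refused. All names here (`AgentSandbox`, `PolicyViolation`) are illustrative assumptions, not OpenShell APIs.

```python
# Illustrative sketch only -- not the OpenShell API. It shows the general
# tool-gating pattern a sandboxed agent runtime enforces: every tool call
# is checked against a per-agent allow-list before it runs.

class PolicyViolation(Exception):
    """Raised when an agent attempts a tool outside its policy."""

class AgentSandbox:
    def __init__(self, agent_id, allowed_tools):
        self.agent_id = agent_id
        self.allowed_tools = dict(allowed_tools)  # tool name -> callable

    def invoke(self, tool_name, *args, **kwargs):
        # Deny by default: only explicitly permitted tools may run.
        if tool_name not in self.allowed_tools:
            raise PolicyViolation(
                f"agent {self.agent_id!r} may not call {tool_name!r}")
        return self.allowed_tools[tool_name](*args, **kwargs)

# Example policy: this agent may query an internal index but not shell out.
sandbox = AgentSandbox("report-writer", {"search": lambda q: f"results for {q}"})
print(sandbox.invoke("search", "quarterly revenue"))   # permitted
try:
    sandbox.invoke("run_shell", "rm -rf /")            # outside policy
except PolicyViolation as err:
    print("blocked:", err)
```

The deny-by-default shape is the point: a long-lived agent only ever touches tools its policy names, which is the control the article describes OpenShell providing.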

Security focus

Another addition is confidential containers with NVIDIA Confidential Computing through Red Hat OpenShift sandboxed containers, available in technology preview. This is intended to protect AI agents at runtime, including when another agent in the environment has been compromised.

Red Hat is combining that approach with a zero-trust security model that includes SELinux, FIPS compliance and NVIDIA DOCA-based runtime protection. The aim is to enforce policy controls from the data centre to edge systems while helping customers meet governance and regulatory requirements, including measures associated with the EU AI Act.

The software stack also now includes updates from Red Hat AI 3.4. These include a governed Model-as-a-Service offering through the Red Hat AI gateway, giving developers access to selected models, including NVIDIA Nemotron, through OpenAI-compatible interfaces.
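Because the gateway exposes models through OpenAI-compatible interfaces, any client built for the standard chat completions wire format can be pointed at it. A minimal sketch of that request shape, with the gateway URL, model name and token all placeholders rather than values from the announcement:

```python
# Sketch of a request to an OpenAI-compatible gateway. The base URL, model
# name and bearer token below are placeholder assumptions, not values from
# the announcement.
import json
import urllib.request

GATEWAY_URL = "https://ai-gateway.example.corp/v1/chat/completions"  # assumed

def build_request(model, prompt):
    """Build a standard OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},  # gateway-issued credential
        method="POST",
    )

req = build_request("nemotron", "Summarise this quarter's incidents.")
print(req.get_full_url())
```

The appeal of this arrangement for developers is that existing OpenAI-style client code keeps working; governance moves into the gateway, which decides which models the credential may reach.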

For operational oversight, the platform uses lifecycle management tools based on MLflow. This allows companies to trace large language model calls, tool execution and reasoning steps so they can audit how an autonomous agent reached a result.
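The announcement names MLflow as the basis for this lifecycle tooling but gives no API detail, so the sketch below is deliberately tool-agnostic: it shows the kind of append-only trace such auditing produces, with each model call, tool execution and reasoning step recorded as a timestamped event that can later be exported for review.

```python
# Tool-agnostic sketch of agent audit tracing (not MLflow's actual API):
# every model call, tool execution and reasoning step is appended to an
# event log so the agent's path to a result can be reconstructed.
import json
import time

class AgentTrace:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.events = []

    def record(self, kind, **detail):
        # kind is one of "llm_call", "tool_exec" or "reasoning"
        self.events.append({"ts": time.time(), "kind": kind, **detail})

    def export(self):
        """Serialise the full trace for an auditor."""
        return json.dumps({"agent": self.agent_id, "events": self.events})

trace = AgentTrace("report-writer")
trace.record("llm_call", model="nemotron", prompt_tokens=512)
trace.record("tool_exec", tool="search", args=["quarterly revenue"])
trace.record("reasoning", note="combined search results into summary")
print(len(trace.events))  # 3 events available for audit
```

An auditor replaying this log can answer exactly the question the article raises: which calls, tools and intermediate steps led an autonomous agent to its result.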

Infrastructure support

On the infrastructure side, Red Hat Enterprise Linux for NVIDIA 26.01 is now generally available. The release supports NVIDIA Blackwell systems, and work is also under way on support for the forthcoming NVIDIA Vera Rubin platform.

NVIDIA Run:ai, now included in NVIDIA AI Enterprise, is also available to customers using the Red Hat AI Factory with NVIDIA. The latest support extends across Red Hat's wider AI portfolio, including Red Hat Enterprise Linux AI, Red Hat OpenShift AI and Red Hat AI Enterprise.

The companies are also offering validated NVIDIA AI Blueprints and quickstart packages intended to simplify deployment of common AI patterns. Highlighted use cases include model-as-a-service, enterprise research, and enterprise retrieval-augmented generation (RAG) and RAFT deployments that apply governance controls to proprietary data.

The announcement reflects a broader industry shift as large technology providers try to turn interest in generative AI into production systems that can operate under enterprise security and compliance rules. One of the main barriers has been the difficulty of governing autonomous agents that can make repeated calls to tools, models and data sources over long periods.

Red Hat and NVIDIA are positioning their joint platform as a way to address that issue by combining software governance with infrastructure-level protections. The collaboration also underscores how AI suppliers are increasingly packaging model access, orchestration, observability and hardware support into integrated stacks aimed at large corporate buyers.

Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering, Red Hat, described the challenge in terms of maintaining control across distributed environments. "Moving AI from corporate experimentation to an industrial engine requires a sovereign, consistent foundation across the hybrid cloud. Through our strategic co-engineering efforts with NVIDIA, Red Hat provides the architectural control and open source innovation enterprises need to scale agentic AI with confidence. By delivering a hardened, zero-trust path for organisations to own their intelligence, we are enabling our customers to maintain technical independence in an increasingly complex global landscape," said Wright.

Justin Boitano, Vice President, Enterprise AI Platforms, NVIDIA, outlined NVIDIA's view of the market direction. "Agentic AI is transforming enterprise operations. Every company will need an AI factory to build, deploy and govern digital workers at scale. Red Hat and NVIDIA are co-engineering the Red Hat AI Factory with NVIDIA, bringing NVIDIA OpenShell, NVIDIA Confidential Computing and the full AI stack together so enterprises can securely run their most demanding agentic AI workloads," said Boitano.