Object storage emerges as the backbone for private AI
Object storage underpins most private AI deployments that have reached production, according to a survey of 504 large organisations running AI systems on infrastructure they control.
Research by analyst firm Freeform Dynamics found that 91% of enterprises operating private AI in live environments make meaningful use of object storage, with 44% using it extensively and 47% using it "quite a bit".
The findings suggest a shift in where infrastructure teams see bottlenecks as AI moves beyond pilots. GPUs and server capacity remain central, but respondents ranked storage performance as an equal constraint when scaling production workloads.
Storage constraints
More than half of respondents (57%) said storage performance is a priority for avoiding AI bottlenecks, compared with 54% who cited compute or GPU availability and 52% who cited network bandwidth.
The survey suggests production AI programmes are increasingly dominated by data movement and reuse across pipelines rather than by model training alone. Freeform Dynamics pointed to a need for systems that can stage, govern, protect, and reuse data for inference-driven workloads.
Metadata and mixed workloads also emerged as pressure points. Some 40% cited metadata handling at scale as a bottleneck risk, and 38% reported challenges in handling mixed workloads, such as high-throughput training and lower-latency inference in the same environment.
Private AI
The research focused on organisations running "private AI", also described as sovereign AI, where the enterprise retains control of the infrastructure used for models and data. Respondents who said all their AI work was cloud-based were excluded.
Freeform Dynamics found that 81% view private AI infrastructure they control as critical to their success, linking this to sovereignty, compliance requirements, and the need to keep data close to where AI workloads run.
Many organisations are taking a blended approach rather than building from scratch. The survey found that 44% adapt existing compute for AI, while 40% purpose-build compute. On the storage side, 42% adapt existing storage and 39% purpose-build storage.
This pattern suggests tiered and hybrid architectures are now common in production AI environments, combining faster tiers for active workloads with larger-capacity tiers for long-term retention and reuse.
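In S3-compatible object stores, tiering of this kind is often expressed directly in bucket lifecycle configuration. The sketch below is illustrative only: the bucket prefix and retention period are hypothetical, and the storage classes an S3-compatible platform supports vary by vendor. It shows a lifecycle rule that automatically moves ageing training data from a fast, frequently accessed tier to a cheaper capacity tier:

```json
{
  "Rules": [
    {
      "ID": "tier-training-data",
      "Status": "Enabled",
      "Filter": { "Prefix": "training-data/" },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        }
      ]
    }
  ]
}
```

A rule like this keeps recently written objects on the active tier for a fixed window, then demotes them for long-term retention and reuse, which is the pattern the survey's respondents describe.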
Object storage role
Within these architectures, object storage has become a core component for on-premises AI data pipelines. Respondents reported slightly deeper use of object storage than file-based storage, and far more than block-based storage.
Object storage is widely used across enterprise IT for unstructured data and large-scale archives. Its role in AI reflects the operational need to manage large volumes of training data, model artefacts, and inference inputs and outputs across multiple systems.
The findings also align with broader trends in data governance and lifecycle management. Teams running production AI face requirements for reliability, access controls, auditing, and long-term cost management, alongside performance.
Industry context
Respondents were drawn from medium to large enterprises with more than 1,000 employees, based in the United States, the United Kingdom, France, and Germany. Sectors included financial services, professional services, manufacturing, healthcare and life sciences, government, and media and entertainment.
For storage suppliers, the results point to an opportunity to position data infrastructure alongside compute in AI spending decisions. That respondents rank storage performance alongside GPUs suggests procurement and architecture discussions may broaden beyond accelerators and networking to include data infrastructure.
Scality, which commissioned the study, sells storage software focused on object storage and cyber resilience. It said S3-compatible object storage can serve as a foundational layer in the tiered environments described by respondents, alongside file systems and other storage types.
"Most industry discussion frames AI infrastructure as primarily a compute challenge," said Tony Lock, director of engagement and distinguished analyst at Freeform Dynamics.
"This research makes clear that enterprises running private AI in production are dealing with a broader systems reality. Many see a need for simple, scalable architectures that keep data close, support multiple AI genres, and balance performance with governance and cyber resilience across the full pipeline."
Scality said the findings support a more data-centric view of AI operations as inference becomes more prevalent in day-to-day use. It also pointed to demand for control and predictability in environments where data sensitivity and regulatory oversight shape deployment decisions.
"The data defines the problem, and the platform determines who scales," said Paul Speciale, chief marketing officer at Scality.
"This research validates what we see in the field: production AI success depends on how effectively teams manage and operationalise data throughout the AI lifecycle. Scality provides an S3-native, tiered, cyber-resilient foundation aligned with how enterprises are building sovereign AI today. It delivers the control, predictability, and operational resilience required to scale."
Freeform Dynamics said enterprises are converging on tiered designs that separate fast storage for active workloads from scalable capacity tiers for persistent, reusable datasets as production AI becomes increasingly inference-driven.