AI is an inherently hybrid workload. Training foundation models, spinning up proofs of concept, fine-tuning models, and deploying production environments all require access to both on-premises and cloud data, tools, and resources. Organizations need to balance GPU availability, cost, latency, and security concerns. Meanwhile, enterprise data resides across data centers, multiple clouds, and edge locations—the result of decades of digital transformation.
This distributed reality presents a fundamental challenge: 85% of AI projects never reach production, not due to insufficient compute power, but because organizations struggle to access their data seamlessly across different environments. The "train anywhere, deploy everywhere" reality of modern AI requires a complete enterprise infrastructure solution. This is the foundational challenge of AI factories: unified data access across hybrid cloud environments. Without smooth data flow as the operational foundation, even sophisticated AI initiatives can't scale effectively.
AI workloads span multiple environments by necessity. Proof-of-concept and experimental training often begin in cloud environments, where organizations can quickly access GPU resources for model development and initial experimentation. However, production training and deployment increasingly require hybrid or on-premises infrastructure to address latency-sensitive applications, security-conscious industries, and stringent regulatory requirements. Data gravity compounds this challenge, because enterprise data lives across multiple locations and can't feasibly migrate entirely to cloud environments. Decades of digital transformation have distributed critical business data across on-premises systems, various cloud providers, and edge locations. Regulatory constraints around data sovereignty and compliance further limit data movement, making centralization impossible in many enterprise scenarios.
Most organizations address this challenge through environment-specific solutions, deploying different storage systems for cloud and on-premises environments. This approach immediately creates operational challenges: Data duplication across environments drives up costs and complexity, and inconsistent management requires environment-specific tools, policies, and access methods. The result is limited mobility: Moving data between environments becomes complex, slow, and operationally risky. When data access becomes fragmented, AI teams face blocked workflows due to access delays, like factories operating without raw materials. Projects stall while waiting for data to be "prepared" for each environment, AI factory operations fragment across disconnected systems, and storage costs multiply as operations become less efficient. This fragmentation ultimately undermines the business case for AI factory investments.
NetApp addresses this challenge through one data management platform: NetApp® ONTAP® software. ONTAP runs consistently across on-premises, AWS, Azure, and Google Cloud environments, representing real cloud integration—not simply "cloud-compatible" solutions, but native services within each major cloud provider. The result is unified management through a single control plane for data across all environments, eliminating operational complexity while delivering a consistent experience regardless of deployment location.
This platform enables efficient data movement between environments without losing context, security policies, or performance characteristics. The unified approach allows enterprises to access data for AI applications without wholesale migration projects, and organizations can maintain data consistency and synchronization across hybrid environments. The approach achieves cross-cloud flexibility that avoids vendor lock-in without sacrificing performance, supported by built-in security that enables the same protection mechanisms to work across all environments. Proven scalability handles enterprise data volumes from departmental to global levels, while high availability maintains enterprise reliability standards everywhere data lives, and compliance support meets regulatory requirements regardless of data location.
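As an illustration of what policy-driven data movement between environments can look like in practice, here is a minimal sketch of building a replication request against the ONTAP REST API's SnapMirror endpoint. The cluster address, SVM names, and volume paths below are hypothetical placeholders, and the snippet only constructs the request body rather than performing a live call; a real deployment would POST it with authenticated credentials (for example via the `requests` library or the netapp-ontap Python client).

```python
import json

# Hypothetical on-premises cluster address (placeholder, not a real endpoint)
ONTAP_HOST = "https://onprem-cluster.example.com"
ENDPOINT = "/api/snapmirror/relationships"

def build_snapmirror_payload(source_path: str, destination_path: str) -> dict:
    """Build the JSON body for creating a SnapMirror relationship
    that mirrors a source volume to a destination volume."""
    return {
        "source": {"path": source_path},            # e.g. on-premises SVM:volume
        "destination": {"path": destination_path},  # e.g. cloud SVM:volume
        "state": "snapmirrored",                    # request an initialized mirror
    }

# Placeholder SVM and volume names for illustration only
payload = build_snapmirror_payload(
    "svm_onprem:vol_training_data",
    "svm_cloud:vol_training_data_mirror",
)
# In practice, this body would be POSTed to ONTAP_HOST + ENDPOINT.
print(json.dumps(payload, indent=2))
```

Because the same API surface is available wherever ONTAP runs, the same automation can target on-premises clusters or cloud instances without environment-specific tooling.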
The partnership of NetApp and NVIDIA delivers validated configurations—pretested combinations of NetApp storage with NVIDIA platforms that provide predictable AI performance. Reference architectures offer production-ready designs that eliminate deployment risk and speed time to value. And these reference architectures are backed by 600+ customer deployments demonstrating proven success in production environments.
This partnership makes cloud training and on-premises deployment seamless, from model development to production workflows, without complex data migrations. Cross-environment data access enables AI workloads to use the same data whether running in the cloud or on premises, and AI applications perform consistently regardless of deployment environment. Unified policies mean that security, compliance, and governance work consistently everywhere, breaking down the barriers between hybrid environments that have historically fragmented AI initiatives.
AI factory success begins with solving hybrid cloud data access. When data moves across environments without friction, AI innovation accelerates. Organizations that invest in unified data infrastructure today—built on the proven collaboration between NetApp and NVIDIA—position themselves to lead tomorrow's AI-driven markets.
The choice is clear: Continue struggling with fragmented data access and environment-specific solutions or build the hybrid cloud data foundation that makes AI factory success inevitable. Organizations that establish this foundation will capture competitive advantages in the AI-driven economy.
To get started, explore more about NetApp AI solutions.
Take the first steps to becoming an AI master by completing the AI Maturity self-assessment.
Mackinnon joined NetApp and the Solutions Marketing team in 2020. In her time, she has focused on Enterprise Applications and Virtualization but uncovered a passion for Artificial Intelligence and Analytics. In her current role as a Marketing Specialist, Mackinnon strives to push messaging and solutions that focus on the intersection of authentic human experience and innovative technology. With a background that spans industries like Software Development, Fashion, and small-business operations, Mackinnon approaches AI topics with a fresh, outsider perspective. Mackinnon holds a Master of Business Administration from the Leeds School of Business at the University of Colorado, Boulder. She continues to live in Colorado with an often sleeping greyhound and a growing collection of empty Margaux bottles.