At GTC 2026, we're focused on what actually matters
At NVIDIA GTC, the energy on the show floor reflects something we've been hearing from customers all year.
The AI conversation is getting practical.
The excitement around models and benchmarks hasn't gone away, but it's being joined by a much more grounded set of questions. How do we get our data ready? How do we move from a successful pilot to something that runs in production every day? How do we do that securely, at scale, without rebuilding everything from scratch?
These are infrastructure questions. And for teams actually deploying AI in their organizations, they've become the most important. The model and the compute are rarely the bottlenecks. The data is. The storage is. The governance is. That's the reality of enterprise AI in 2026, and it's what we're here to talk about.
If you're experiencing AI hype fatigue, you're in good company. For two years, the message has been that AI will transform everything. But for many organizations, the reality looks more like pilots that don't scale, data that isn't ready, and infrastructure that wasn't designed for these workloads.
So, here's what's actually changing.
Agentic AI is moving from concept to roadmap. These are AI systems that go beyond answering prompts. They reason across multiple steps, maintain context over time, and take action through tools and workflows. Every major enterprise is planning for a future where AI agents are embedded in critical business processes.
But agents need more than a capable model. They need memory. They need context. And they need access to your real enterprise data: not a curated demo dataset, but the messy, distributed data that lives across your entire organization.
The practical question for IT teams right now is no longer "should we invest in AI?" It's "what infrastructure do we need to make AI actually work?"
Every enterprise sits on massive amounts of unstructured data: documents, images, logs, communications, research, and customer records. That data can be a real competitive advantage for your AI initiatives. But raw material alone doesn't build anything.
Think of it as a supply chain. Every AI workload, whether it's training, retrieval-augmented generation, or powering an AI agent, depends on data flowing through a pipeline. That data has to be discovered across your environments, enriched and made consumable, delivered at high throughput to GPUs that can't afford to sit idle, and governed and protected the entire way.
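Those four stages can be sketched as a minimal pipeline. Everything here is illustrative: the function names, record shapes, and policy check are assumptions for the sake of the sketch, not an actual NetApp or AIDE API.

```python
# Hypothetical sketch of the discover -> enrich -> deliver -> govern flow.
# Names and record shapes are illustrative, not a real product interface.

def discover(sources):
    """Find raw records scattered across environments (on-prem, cloud)."""
    return [doc for src in sources for doc in src]

def enrich(records):
    """Make data consumable: normalize text and attach metadata."""
    return [{"text": r.strip(), "ready": True} for r in records]

def govern(records, allowed):
    """Enforce policy before anything is delivered to the GPUs."""
    return [r for r in records if allowed(r)]

sources = [["  Q3 Report  "], ["Support Ticket #42", "Draft notes"]]
batch = govern(enrich(discover(sources)), allowed=lambda r: r["ready"])
print(len(batch), "records ready for high-throughput delivery")
```

The point of the shape is the ordering: governance sits in the path, not beside it, so nothing reaches expensive compute without passing policy.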
If the pipeline is slow, fragmented, or ungoverned, projects stall and costs balloon. Expensive compute sits waiting for data that isn't ready.
This is what the NetApp data platform is built for.
At the foundation is NetApp ONTAP, the most widely deployed enterprise storage operating system, delivering unified data management across on-premises, hybrid, and cloud environments with built-in security, multi-tenancy, and zero-trust protections. Wherever your AI workloads run, native first-party cloud storage powered by ONTAP gives you seamless data connectivity and mobility across AWS, Azure, and Google Cloud. No other storage vendor comes close to that level of multicloud integration.
Our high-performance platforms, including the AFX disaggregated architecture, let you scale compute and capacity independently, so your infrastructure investment stays aligned to what you actually need. No fixed-ratio overprovisioning. No rip-and-replace when workloads change. And for the massive unstructured data lakes feeding your AI pipelines, StorageGRID delivers the durability and economics that enterprise requirements demand.
This isn't a storage platform with AI features added on. It's an intelligent data infrastructure, built over three decades and engineered for the demands of production AI.
NetApp AI Data Engine: the intelligence layer for your data supply chain
We just described the NetApp data platform as an AI supply chain. The NetApp AI Data Engine (AIDE) is the intelligence that makes that supply chain work.
AIDE is a secure, unified AI data service, co-engineered with NVIDIA based on the NVIDIA AI Data Platform reference design. It automates the hardest part of enterprise AI: turning your raw, distributed, unstructured data into fuel that AI workloads can actually consume.
AIDE is now generally available for an initial wave of customers and partners, with broad availability coming this spring. It integrates with leading AI platforms, including Microsoft 365 Copilot, Google Vertex AI, and LangChain, so you can build AI applications that securely leverage your enterprise data.
AIDE will also expand to support deployment on new NVIDIA RTX PRO 4500 GPUs and to directly integrate into existing NetApp storage environments, including AFF and FAS systems. More deployment options, meeting you where your data already lives.
Supporting NVIDIA STX and the rise of context memory
Here's something most organizations haven't fully absorbed yet: the AI industry is undergoing an architectural shift, and it centers on inference.
As agentic AI scales, agents that reason over long sessions generate massive KV caches that need to be stored, shared, and retrieved at speed. When that context exceeds local GPU memory, performance doesn't degrade gracefully. It collapses. Latency spikes, reasoning stalls, and agents hit a wall mid-session.
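To see why context memory becomes an infrastructure problem rather than a tuning problem, it helps to run the arithmetic. The sketch below sizes the KV cache for a hypothetical mid-size model; the layer count, head count, and head dimension are illustrative assumptions, not figures for any specific model.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Approximate KV cache size for one sequence.

    The factor of 2 accounts for storing both keys and values
    at every layer; dtype_bytes=2 assumes FP16/BF16 activations.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

# Illustrative model: 32 layers, 8 KV heads (grouped-query attention),
# head dimension 128, and a 128K-token agent session in FP16.
per_session = kv_cache_bytes(32, 8, 128, 128 * 1024)
print(f"{per_session / 2**30:.1f} GiB per 128K-token session")  # 16.0 GiB
```

At roughly 16 GiB per long session under these assumptions, a handful of concurrent agents exhausts local GPU memory, which is exactly when a fast external context-memory tier stops being optional.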
NVIDIA STX is a new modular reference architecture designed to address this head-on. Built on the next-generation Vera Rubin platform and BlueField-4 DPUs, STX creates purpose-built tiers for training, enterprise data, and context memory. Higher throughput, lower latency, better power efficiency.
This is not a new relationship between NetApp and NVIDIA. This is the next chapter of a partnership that has been building for more than six years.
The AI Factory is not a product. It's a methodology for implementing production AI: compute, software, and data infrastructure that work as one system.
NVIDIA brings accelerated compute and the AI software stack. NetApp brings the intelligent data infrastructure that makes it work in the real world. Together, we've co-engineered this architecture end to end, serving more than 1,000 joint customers running production AI today.
That depth isn't something you build with a press release. It's built through years of shared engineering, validated designs, and real deployments. And when NVIDIA needed enterprise storage for their own internal AI infrastructure, they chose NetApp.
For organizations navigating enterprise AI, it comes down to this: you need infrastructure that is proven, integrated, and built for production. You need a data platform that turns your enterprise data into an asset, not an obstacle. And you need a partnership that goes deeper than a logo on a slide.
That's what we're showcasing at GTC 2026.
Visit us at Booth 1907 at NVIDIA GTC in San Jose, March 16-19.
Mackinnon joined NetApp and the Solutions Marketing team in 2020. In her time, she has focused on Enterprise Applications and Virtualization, but discovered a passion for Artificial Intelligence and Analytics. In her current role as a Marketing Specialist, Mackinnon strives to push messaging and solutions that focus on the intersection of authentic human experience and innovative technology. With a background that spans industries like software development, fashion, and small business operations, Mackinnon approaches AI topics with a fresh, outsider perspective. Mackinnon holds a Master of Business Administration from the Leeds School of Business at the University of Colorado Boulder. She continues to live in Colorado with an often sleeping greyhound and a growing collection of empty Margaux bottles.