We’re hearing a lot about trusting AI.
What we talk about less is what that trust is actually built on.
AI doesn’t create insight in a vacuum. It reflects the quality, integrity, and availability of the data behind it. As AI becomes embedded in core business processes, the real question shifts from “Which model should we use?” to “Can we trust the data feeding it — everywhere it runs?”
That’s why more organizations are rethinking a familiar assumption: instead of moving massive datasets into centralized AI platforms, bring AI to the data — wherever that data already lives.
Most enterprises don’t lack data. They lack a data foundation designed for hybrid reality.
AI workflows routinely span on-premises environments, multiple clouds, and the edge. When data must be copied, reshaped, or relocated just to be usable, teams introduce latency, cost, and risk — often without realizing it.
Organizations that are making real progress with AI tend to get three things right.
Data mobility — without fragmentation
AI thrives on access. But access doesn’t require constant movement.
Running AI services close to where data already resides reduces latency and operational overhead while preserving governance. The goal isn’t data sprawl — it’s consistent access and control, regardless of location.
This is where unified data services matter. Whether data sits behind Amazon FSx for NetApp ONTAP, Azure NetApp Files, or Google Cloud NetApp Volumes, the advantage is architectural consistency. Teams work with the same data semantics, snapshots, and protection models across environments — rather than rebuilding pipelines for each one.
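As a concrete illustration, the sketch below creates a point-in-time snapshot of a volume through the ONTAP REST API, which Amazon FSx for NetApp ONTAP exposes. The endpoint address, credentials, and volume UUID are placeholders, and the equivalent operation on Azure NetApp Files or Google Cloud NetApp Volumes would go through those platforms' own management APIs.

```python
# Minimal sketch: creating a point-in-time snapshot through the ONTAP REST API.
# Assumes an ONTAP-style management endpoint is reachable (as with Amazon FSx for
# NetApp ONTAP); the hostname, credentials, and volume UUID below are placeholders.
import requests

ONTAP_HOST = "https://management.fsx.example.com"    # hypothetical endpoint
AUTH = ("fsxadmin", "example-password")              # placeholder credentials
VOLUME_UUID = "00000000-0000-0000-0000-000000000000" # placeholder volume UUID

def create_snapshot(name: str) -> dict:
    """Request a snapshot of the volume and return the API response record."""
    resp = requests.post(
        f"{ONTAP_HOST}/api/storage/volumes/{VOLUME_UUID}/snapshots",
        json={"name": name},
        auth=AUTH,
        verify=True,   # keep TLS verification on in real deployments
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(create_snapshot("pre-training-run"))
```

The point is that protection is a first-class operation on the data layer itself, so teams don't have to copy data somewhere else before they can safeguard or iterate on it.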
Cyber resilience is essential for trusted AI
As AI moves from experimentation to production, availability becomes non-negotiable.
Fraud detection, customer experience, healthcare analytics, supply chain optimization — these systems assume continuous access to data. A resilient data foundation ensures AI pipelines stay operational through infrastructure failures, outages, or regional disruptions.
Capabilities like snapshots, replication, and automated recovery aren’t just about protecting storage. They’re about keeping decision systems running when the business depends on them.
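To make that concrete, here is a minimal sketch of a pipeline read step that falls back to a replicated copy of the data when the primary is unreachable. The mount paths are hypothetical; in practice the replica would be kept current by replication between sites or regions (for example, SnapMirror).

```python
# Minimal sketch: a pipeline read step that fails over to a replicated copy of
# the data. Paths are hypothetical; imagine NFS mounts of a primary volume and
# its replica in another region.
from pathlib import Path

PRIMARY = Path("/mnt/primary/features.parquet")   # hypothetical primary mount
REPLICA = Path("/mnt/replica/features.parquet")   # hypothetical replica mount

def read_features() -> bytes:
    """Read the feature file, preferring the primary and falling back to the replica."""
    for candidate in (PRIMARY, REPLICA):
        try:
            return candidate.read_bytes()
        except OSError as err:
            print(f"read failed on {candidate}: {err}; trying next source")
    raise RuntimeError("no readable copy of the feature data was available")

if __name__ == "__main__":
    data = read_features()
    print(f"loaded {len(data)} bytes")
```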
Security and integrity — the real trust layer for AI
Security discussions often focus on access controls and encryption — necessary, but incomplete.
Trust in AI starts with data integrity. If you can’t verify where data came from, how it changed, or whether it was tampered with, you can’t fully trust the outputs — regardless of model sophistication.
In hybrid and multi-cloud environments, consistently maintaining that integrity is the challenge. The organizations getting this right are the ones building protection, immutability, and governance directly into the data layer, rather than bolting them on later.
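One lightweight way to make integrity verifiable is to record a content hash alongside basic provenance metadata when a dataset is registered, then check it before the data is used. The sketch below is illustrative only; the file names and manifest format are assumptions, not a specific product feature.

```python
# Minimal sketch: record and verify a content hash so a training job can confirm
# a dataset has not changed since it was registered. Illustrative only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(path: Path, manifest: Path) -> None:
    """Write a small provenance record: where the data is and what it hashed to."""
    manifest.write_text(json.dumps({"source": str(path), "sha256": sha256_of(path)}))

def verify(path: Path, manifest: Path) -> bool:
    """Return True only if the current hash matches the registered one."""
    record = json.loads(manifest.read_text())
    return record["sha256"] == sha256_of(path)
```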
Domino Data Lab offers a practical example of this approach.
By leveraging Amazon FSx for NetApp ONTAP, Domino enables data scientists to work directly against governed datasets without copying data into separate AI silos. Teams iterate faster, move models into production sooner, and maintain the security and compliance controls enterprises require. The takeaway isn’t the specific platform. It’s the pattern: keep data authoritative, protected, and accessible — then bring AI to it.
The industry is moving away from centralized AI hubs that depend on data gravity. The next phase of AI is distributed by design — spanning environments, resilient to disruption, and grounded in data integrity. The organizations that succeed won't be the ones chasing the newest model. They'll be the ones that get the data foundation right: mobility without fragmentation, resilience through disruption, and integrity built into the data layer.
With the right foundation, AI becomes portable, reliable, and secure — right at the source.
If AI is moving from experimentation into production in your organization, the most valuable place to start isn’t the model — it’s the data.
Evaluate whether your data foundation truly supports consistent access across on-premises, cloud, and edge environments; continuous availability when infrastructure fails; and verifiable data integrity from source to output.
That conversation tends to surface architectural gaps quickly — and it's the fastest way to turn AI ambition into something operational and trustworthy. Explore how modern hybrid and multi-cloud data platforms are enabling this shift.
Richard Hardy is the VP and Field CTO for the NetApp Cloud Business. He has been at the company for more than 25 years, helping innovate and grow new business. Most recently, he has helped customers accelerate their cloud transformation and Gen AI applications.