A few years ago, cloud conversations were mostly about speed, scale, and cost. Today, a very different question sits at the center of many boardroom discussions: who actually controls our data?
That shift is driving an explosion in sovereign cloud demand. Governments, defense organizations, healthcare systems, and financial institutions are no longer willing to live with gray areas around jurisdiction, access, or foreign influence. As a result, global spending on sovereign cloud infrastructure is projected to cross $130B by 2028, growing at well over 30% annually.
The leaders driving sovereign cloud decisions today are no longer asking whether the cloud is viable—they’re asking whether the architectural choices being made now will still hold up five or ten years from today. They are thinking in terms of jurisdictional risk, long-term control, auditability, and what happens when regulations tighten, borders shift, or new sovereign regions come online.
That lens changes the discussion. It shifts the focus away from individual services and toward fundamentals—where data lives, who ultimately controls it, and how easily it can remain governed as environments evolve. For leaders responsible for national systems, critical infrastructure, healthcare platforms, or financial institutions, sovereignty isn’t theoretical. It’s operational, reputational, and deeply tied to trust.
The same applies to architects and partners building these environments alongside hyperscalers. When sovereign deployments are designed, they need to work not just for a single region or mandate, but across commercial, government, and sovereign boundaries—without forcing reinvention every time the rules change. Consistency at the data layer becomes the difference between a design that scales and one that fragments.
Europe, in particular, is ground zero for this change.
Europe combines some of the world’s strictest privacy and data protection regulations with a deep reliance on hyperscale cloud innovation. Organizations want the speed and scale of modern cloud services, but they also need legal clarity, operational autonomy, and confidence that their data is insulated from extraterritorial jurisdiction.
That tension is exactly why the recent emergence of European-based sovereign clouds from AWS and Microsoft matters. These are not simply new regions added to an existing footprint. They are cloud environments intentionally designed to operate under European legal authority, with independent governance models, clear operational separation, and explicit sovereignty boundaries.
For many European customers, this marks the first real escape from the long-standing trade-off between innovation and control. But one critical point is still often overlooked in sovereign cloud discussions: sovereignty is not defined by where compute runs. It is defined by who controls the data. And data ultimately lives in storage.
This is where NetApp plays a very specific and very deliberate role.
NetApp cloud storage is available wherever sovereign workloads run—across AWS, Azure, and Google Cloud—including government, secret, and top-secret classifications. It already runs in AWS GovCloud and classified environments, and it will be supported in the AWS European Sovereign Cloud at launch.
That consistency matters more than it might seem at first glance.
It means customers don’t have to rethink their data architecture every time sovereignty requirements tighten. The same storage platform, the same data services, and the same operational model can be used across commercial, government, and sovereign environments.
At its core, a sovereign cloud is about operating entirely within the legal, regulatory, and operational boundaries of a specific jurisdiction — with no external interference. Gartner describes it as ensuring that data, infrastructure, and operations are protected from control or access by foreign governments.
Importantly, most sovereign workloads are not expected to live in fully isolated or air-gapped environments forever. The majority will run in public cloud regions, using software-defined sovereignty controls to meet national requirements while still benefiting from hyperscale economics and services.
That’s exactly how NetApp is designed to operate.
NetApp works alongside native hyperscaler controls such as AWS Control Tower, Azure Policy, and Google Cloud Assured Workloads to ensure data is created and stored only in approved regions. Those geographic boundaries aren’t just documented; they’re enforced.
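For readers who want to picture what “enforced” means, the sketch below shows the pattern in miniature: an allow-list of regions checked before any storage is created, so a request outside the sovereign boundary is denied rather than merely flagged. It is a purely illustrative Python example; the region names and the provision_volume helper are hypothetical, not a NetApp or hyperscaler API.

```python
# Illustrative sketch only: a provisioning guard rail that refuses to create
# storage outside an approved sovereign boundary. The allow-list and the
# provision_volume() helper are hypothetical, not a real NetApp or cloud API.

APPROVED_SOVEREIGN_REGIONS = {"eu-central-1", "europe-west3"}  # hypothetical allow-list

def provision_volume(region: str, volume_name: str) -> None:
    """Create a storage volume only if the target region sits inside the boundary."""
    if region not in APPROVED_SOVEREIGN_REGIONS:
        # Same intent as native controls like Azure Policy location rules or
        # Control Tower region restrictions: deny, don't just document.
        raise PermissionError(
            f"Region {region!r} is outside the approved sovereign boundary"
        )
    # At this point the provider-specific create call would run
    # (for example FSx for ONTAP, Azure NetApp Files, or Google Cloud NetApp Volumes).
    print(f"Provisioning {volume_name} in approved region {region}")

provision_volume("eu-central-1", "finance-data")   # allowed
# provision_volume("us-east-1", "finance-data")    # raises PermissionError
```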
Encryption is another critical piece. NetApp supports cloud-managed keys, customer-managed keys, and even external key management, where encryption keys stay on-premises or with a trusted third party. For organizations with the strictest sovereignty mandates, this means the ultimate control point — the keys — never leave their jurisdiction.
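The idea is easiest to see as envelope encryption: data is encrypted with a data key, and that data key is wrapped by a key-encryption key that never leaves the customer’s control. The Python sketch below, using the open-source cryptography package, is a simplified illustration of that pattern under those assumptions, not NetApp’s implementation.

```python
# Simplified envelope-encryption sketch (not NetApp's implementation) showing why
# external key management matters: the cloud only ever handles wrapped data keys,
# while the key-encryption key (KEK) stays on-premises or with a trusted third party.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)       # held outside the cloud, e.g. in an on-prem HSM
data_key = AESGCM.generate_key(bit_length=256)  # used to encrypt the actual data

# Encrypt the data with the data key...
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"sensitive sovereign dataset", None)

# ...then wrap the data key with the KEK. Only the wrapped key travels to the cloud.
wrap_nonce = os.urandom(12)
wrapped_data_key = AESGCM(kek).encrypt(wrap_nonce, data_key, None)

# Decryption requires the KEK holder to unwrap the data key first, so whoever
# holds the KEK inside the jurisdiction retains the ultimate control point.
unwrapped_key = AESGCM(kek).decrypt(wrap_nonce, wrapped_data_key, None)
plaintext = AESGCM(unwrapped_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive sovereign dataset"
```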
This turns sovereignty from a policy statement into something concrete, auditable, and defensible.
Sovereignty isn’t just about where data sits today; it’s about how it’s protected and how it moves when allowed.
Modern cloud storage from NetApp can add a further layer of security by enforcing immutability, helping protect sensitive datasets against tampering, ransomware, or accidental deletion—an increasingly common requirement in regulated and government environments. And because NetApp provides a common data platform across the public clouds, replication can remain encrypted end to end even across clouds, allowing organizations to move or recover data across approved regions without compromising sovereignty controls.
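As a rough illustration of what immutability enforcement means in practice, the hypothetical sketch below models a WORM-style (write once, read many) retention lock that refuses deletion until the retention period expires. The class and its behavior are illustrative only; they are not NetApp SnapLock or any provider API.

```python
# Hypothetical sketch of WORM-style retention enforcement. Illustrative only;
# it is not NetApp SnapLock or any cloud provider API.
from datetime import datetime, timedelta, timezone

class ImmutableSnapshot:
    def __init__(self, name: str, retention_days: int):
        self.name = name
        self.locked_until = datetime.now(timezone.utc) + timedelta(days=retention_days)

    def delete(self, now: datetime | None = None) -> None:
        """Deletion is refused while the retention lock is active, even for admins."""
        now = now or datetime.now(timezone.utc)
        if now < self.locked_until:
            raise PermissionError(
                f"{self.name} is immutable until {self.locked_until.isoformat()}"
            )
        print(f"{self.name} deleted after retention expiry")

snap = ImmutableSnapshot("patient-records-2025-01", retention_days=2555)  # roughly 7 years
# snap.delete()  # raises PermissionError: neither ransomware nor operator error can remove it
```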
Equally important, consistent operations and APIs across NetApp’s services in the major public clouds enable smooth, predictable mobility between private, public, and sovereign clouds. Applications, automation, and operational tooling don’t need to change just because the regulatory perimeter does.
Regulations evolve. Political realities shift. New European-based sovereign clouds emerge, while existing environments expand, fragment, or tighten their requirements. That change is constant — and unavoidable.
For most organizations, the cost of getting this wrong isn’t theoretical. It shows up as delayed programs, duplicated platforms, and years of technical debt locked in by short-term compliance decisions.
What customers don’t want is a data estate that fractures every time the rules change.
By using NetApp as the data layer across sovereign and commercial cloud environments, organizations get continuity. They can modernize applications, move workloads when permitted, and respond to new regulations without re-architecting their storage or retraining teams from scratch.
Over time, that continuity becomes more than convenience. It becomes a strategic advantage.
Sovereign cloud isn’t about pulling back from the public cloud. It’s about regaining control while continuing to move forward.
Hyperscalers like AWS and Microsoft Azure are redefining what sovereign regions look like — especially in Europe. NetApp makes sure that once data is inside those regions, it stays governed, protected, and truly sovereign.
In the end, sovereignty isn’t a checkbox. It’s a property of how data is stored, encrypted, and controlled. That’s the role NetApp plays — quietly, consistently, and exactly where it matters. To learn more, check out this podcast titled “Cloud Boundaries – Sovereignty Made Simple.”
Pravjit Tiwana is SVP & GM of NetApp’s Cloud Storage and Services business, where he leads strategy, product, engineering, and P&L across first-party hyperscaler services (Azure NetApp Files, Amazon FSx for ONTAP, Google Cloud NetApp Volumes), Cloud Volumes ONTAP, Instaclustr, and NetApp’s broader open source and cloud AI portfolio. A technology executive with more than 24 years of experience, Pravjit has built and scaled platforms from incubation to multi-billion-dollar businesses across cloud infrastructure, storage, SaaS, networking, and Web3.