
Reflecting on NetApp’s Hillsboro Data Center

NetApp’s Hillsboro data center was purpose-built to deliver speed, resiliency, and long-term flexibility, enabling the company to adapt to shifting IT demands over more than a decade. In this reflection, Ralph Renne highlights how the facility’s rapid construction, redundant design, and future-ready infrastructure continue to serve as a blueprint for hybrid operations and AI-scale workloads.

Ralph Renne, Senior Director of Workspace Experience at NetApp

Last summer, at the request of our account manager, NetApp’s IT and Workspace Experience teams hosted one of our largest Oregon-based customers at our Hillsboro data center. The customer sought to modernize their data center strategy and wanted to observe an efficient, scalable facility located near their operations. Their team, responsible for managing critical infrastructure, was preparing to retrofit existing environments and came to learn from our approach. They left with valuable insights into resiliency, power distribution, and preparing for AI-scale workloads.

As I guided them through the facility, I was reminded of what Hillsboro represents: the impact of aligning data center strategy with evolving IT needs. This facility wasn’t just built for the 2011 business forecast—it was designed to adapt. Our decision to pursue a built-to-suit colocation model, rather than a traditional build-own-operate facility, provided us with the long-term flexibility needed to respond to changing IT infrastructure demands.

For example, we once managed our own Microsoft Exchange environment. But with the adoption of Office 365, our infrastructure footprint shrank significantly, allowing us to consolidate and reduce operating costs. Hillsboro was designed with this kind of flexibility in mind, while also delivering on the speed and resiliency that enterprise IT demands, both then and now.

A rapid build with resiliency in mind

When we committed to building Hillsboro, our objective was clear: deliver a robust, resilient, future-ready facility as quickly as possible. What we accomplished exceeded expectations. We broke ground in November 2011 and went live just nine months later, in August 2012. For a Tier III+ data center built to Uptime Institute specifications, this was an incredible achievement—and one of the proudest milestones of my career at NetApp.

That accelerated timeline was made possible by leveraging modern construction techniques developed by Digital Realty. These included precast concrete panels, which are faster and more efficient than traditional tilt-up methods, and turnkey-design (TKD) electrical rooms built on skids in parallel with the facility itself. These methods were so effective that we later adopted them during the buildout of our Wichita campus.

While many enterprise-grade data centers take 18 months or more to build, Hillsboro proved that, with the right partners and approach, we could cut that time in half without compromising reliability.

Engineered for resilience and critical workloads

Speed was important, but resiliency was essential. Hillsboro was engineered with fully redundant electrical distribution pathways—A and B feeds—each supported by separate uninterruptible power supplies (UPS) and emergency generators, forming a 2N design. The data center is served by two separate 12kV utility circuits, with the only shared point being the utility substation itself.
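
To put rough numbers on why that matters: in a 2N design, the load loses power only if both independent paths fail at the same time. Here is a minimal back-of-envelope sketch in Python; the single-path availability figure is a hypothetical placeholder, not a measured value for Hillsboro, and the shared substation is ignored for simplicity.

```python
# Rough illustration of 2N power-path redundancy (not a Hillsboro model).
# Assumption: each independent path (utility feed + UPS + generator + distribution)
# is ~99.9% available; the shared utility substation is ignored for simplicity.

single_path_availability = 0.999
p_single_fail = 1 - single_path_availability

# With 2N, the load goes dark only if the A and B paths fail at the same time.
p_both_fail = p_single_fail ** 2

hours_per_year = 8760
print(f"Single path: ~{p_single_fail * hours_per_year:.1f} hours/year of expected downtime")
print(f"2N (A + B):  ~{p_both_fail * hours_per_year * 60:.2f} minutes/year of expected downtime")
```

The point isn't the exact numbers, which depend entirely on the assumed path availability, but the squaring effect: two independent paths turn hours of expected downtime into minutes.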

That level of infrastructure reliability was critical for the workloads we hosted. At the time, our flagship application was Oracle 11i with extensive customizations. Since then, our IT environment has evolved. At a recent Global All Hands, we celebrated the completion of Phase 2 of the OCEAN project, a major cloud migration of our Oracle environment. Many of our other enterprise platforms—such as SAP SuccessFactors and ServiceNow—already run in the cloud.

This shift has allowed us to scale back our on-premises production environment and consolidate operations into our disaster recovery site in Research Triangle Park (RTP), North Carolina. However, applications like Active IQ, which require high availability and consistent performance, remain in our production footprint. For that reason, I worked closely with our IT team to establish a geographically dispersed disaster recovery site for Active IQ in Santa Clara, ensuring business continuity. These transitions reaffirm the wisdom of the original built-to-suit strategy and its ability to flex as business needs evolved over the past 14 years.

A model for modernization

The customer who toured Hillsboro was especially focused on power distribution and failover strategies—particularly for workloads tied to public safety and real-time monitoring. Seeing our dual-feed design, load-transfer mechanisms, and physical layout firsthand helped their team validate key aspects of their modernization plans.

Looking ahead, the industry is shifting rapidly. Higher-density equipment, solid-state storage, and AI workloads are pushing traditional cooling methods to their limits. At NetApp, we’re already preparing for this future. Emerging hardware will require advanced liquid-cooling systems, such as liquid-to-chip cold plate heat exchangers mounted directly on CPUs and GPUs. These systems use non-conductive fluids and are far more efficient than air cooling when managing the heat output of AI and HPC infrastructure.

If you’re building a new data center or significantly retrofitting an existing one, planning for water-cooled infrastructure from day one can save considerable time and capital. At NetApp, we’re actively exploring these concepts for the next generation of data center design. AI-intensive workloads can draw five times the power of traditional IT racks, and we intend to be ready.
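
To make that concrete, here is a rough sketch of what a 5x jump in rack power means for air cooling. The rack densities and the 12 °C air temperature rise are generic assumptions used only for illustration, not NetApp design targets; the airflow estimate uses the standard approximation Q(kW) ≈ 1.21 × airflow(m³/s) × ΔT.

```python
# Rough heat-load comparison: traditional IT rack vs. an AI/HPC rack.
# All figures are illustrative assumptions, not NetApp design targets.

traditional_rack_kw = 10                 # assumed typical enterprise rack
ai_rack_kw = traditional_rack_kw * 5     # the ~5x figure cited above
delta_t_c = 12                           # assumed air temperature rise across the rack

def required_airflow_cfm(heat_kw: float, delta_t: float) -> float:
    """Approximate airflow (CFM) needed to remove heat_kw with a delta_t rise,
    using Q(kW) ~= 1.21 * V(m^3/s) * delta_t for air near sea level."""
    m3_per_s = heat_kw / (1.21 * delta_t)
    return m3_per_s * 2118.88            # convert m^3/s to cubic feet per minute

for label, kw in [("Traditional", traditional_rack_kw), ("AI/HPC", ai_rack_kw)]:
    print(f"{label:>11} rack: {kw:>2} kW of heat, ~{required_airflow_cfm(kw, delta_t_c):,.0f} CFM if air-cooled")
```

Pushing several thousand CFM through a single rack quickly becomes impractical, which is why direct liquid cooling, with its far higher heat-carrying capacity per unit volume, becomes attractive at these densities.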

Role of on-prem in a cloud-first world

A question I hear often is whether on-premises data centers still make sense in today’s cloud-first world. My answer: It depends on your scale and workload.

Production IT environments are increasingly predictable and well-suited for cloud platforms. But product development environments are far more dynamic. Our internal cost modeling shows that running large-scale ONTAP development and QA entirely in the cloud would substantially increase our operational expenses, especially when factoring in data egress fees and burst pricing.
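
Our actual model is internal, but as a minimal sketch of how such a comparison can be framed, the basic shape of the arithmetic looks something like this. Every rate and usage figure below is an invented placeholder, not a NetApp number.

```python
# Minimal sketch of a cloud-vs-on-prem monthly cost comparison for a bursty dev/QA workload.
# All rates and usage figures are invented placeholders, not NetApp's internal model.

baseline_compute_hours = 50_000    # steady-state instance-hours per month
burst_compute_hours = 20_000       # extra instance-hours during peak test cycles
data_egress_gb = 120_000           # GB pulled out of the cloud per month

committed_rate = 0.50              # $/instance-hour at committed/reserved pricing
on_demand_rate = 0.90              # $/instance-hour for burst (on-demand) capacity
egress_rate = 0.08                 # $/GB of egress

cloud_monthly = (baseline_compute_hours * committed_rate
                 + burst_compute_hours * on_demand_rate
                 + data_egress_gb * egress_rate)

# On-prem: amortized hardware + facility/colo + power + operations staff
onprem_monthly = 20_000 + 6_000 + 4_000 + 10_000

print(f"Cloud (burst + egress): ${cloud_monthly:,.0f}/month")
print(f"On-prem (amortized):    ${onprem_monthly:,.0f}/month")
```

With placeholder numbers the comparison is only illustrative, but it shows why egress fees and burst pricing tend to dominate the outcome for data-heavy, spiky workloads like ONTAP development and QA.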

For use cases that require tight control, predictable performance, and long-term cost efficiency, owning both the hardware and the facility still makes sense. At NetApp, we continue to leverage cloud services where they fit best, including Amazon FSx for NetApp ONTAP and Azure NetApp Files, while retaining strategic workloads in build-own-operate environments that give us full-stack control.

Closing reflections

Hillsboro played a pivotal role in executing NetApp’s IT strategy over the past decade. It supported product development, enterprise applications, and business growth during a period of massive transformation. Now, as we transition key workloads to the cloud and prepare for the next wave of infrastructure innovation, Hillsboro’s value remains clear.

It was never just a data center. It was a flexible platform, built fast, built right, and built for what’s next.

Explore NetApp IT case studies


Ralph Renne is Senior Director of Workspace Experience at NetApp, where he has led data center strategy and global infrastructure initiatives. A pioneer in sustainable design, Ralph contributed to the EPA and ASHRAE TC 9.9 work that defined ENERGY STAR standards for data centers and helped build one of the industry's first ENERGY STAR–rated data centers, advancing green practices worldwide.
