NetApp has been collaborating with NVIDIA on AI infrastructure solutions since 2018, enabling enterprises to infuse their businesses with data-driven insights. Today, NetApp is announcing the certification of NetApp all-flash NVMe storage and the BeeGFS parallel file system with NVIDIA DGX SuperPOD. This turnkey AI data center solution pairs high-speed all-flash storage with a high-performance parallel file system to take on the biggest AI workloads, and it makes it remarkably simple for organizations of all kinds to deploy infrastructure that meets the demands of extreme AI workloads.
NVIDIA DGX SuperPOD is an AI data center infrastructure platform delivered as a turnkey solution for IT to support the most complex AI workloads facing today’s enterprises. It simplifies deployment and management while delivering virtually limitless scalability for performance and capacity. In other words, DGX SuperPOD lets you focus on insights instead of infrastructure.
That said, even the fastest supercomputer can sit idle if it doesn’t have equally fast storage to support it. Enter the NetApp EF600, an all-flash NVMe storage system that acts as the data-streaming turbine powering DGX SuperPOD. EF600 arrays are also renowned for their price/performance, so whatever the size of your configuration, it remains cost-efficient.
In earning certification status for NVIDIA DGX SuperPOD, NetApp EF600 storage, combined with the BeeGFS parallel file system, exceeded NVIDIA’s baseline performance threshold. Each EF600 and BeeGFS-based scalable building block adds up to 76GBps of sequential read and 23GBps of sequential write performance, plus 431TB of capacity. Both capacity and performance can be easily sized and optimized for metadata operations, data storage, or any mix of the two. And with proven 99.9999% availability, the EF600 significantly reduces your system’s downtime.
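To put those per-building-block numbers in perspective, here’s a minimal back-of-the-envelope sketch in Python that simply multiplies the figures quoted above (76GBps read, 23GBps write, and 431TB per building block). The function and the four-block example are illustrative only, not a NetApp sizing tool.

```python
# Back-of-the-envelope aggregation of the per-building-block figures
# quoted in this post; the function name and example are illustrative.
READ_GBPS_PER_BLOCK = 76      # sequential read per EF600/BeeGFS building block
WRITE_GBPS_PER_BLOCK = 23     # sequential write per building block
CAPACITY_TB_PER_BLOCK = 431   # capacity per building block

def aggregate(building_blocks: int) -> dict:
    """Total throughput and capacity for a given number of building blocks."""
    return {
        "read_GBps": building_blocks * READ_GBPS_PER_BLOCK,
        "write_GBps": building_blocks * WRITE_GBPS_PER_BLOCK,
        "capacity_TB": building_blocks * CAPACITY_TB_PER_BLOCK,
    }

# Example: a hypothetical four-building-block configuration.
print(aggregate(4))  # {'read_GBps': 304, 'write_GBps': 92, 'capacity_TB': 1724}
```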
The data-fabric glue between NetApp’s storage system and NVIDIA DGX SuperPOD is the BeeGFS parallel file system. This award-winning parallel file system, delivered by ThinkParQ, features a modern user space architecture that’s used in many supercomputing environments today. No more hacking away at kernels to get your parallel file system up and running. No more hardware vendor lock-in. No more paying for premium features you don’t need. And best of all, no more complicated pricing.
“Our long-term collaboration with NetApp and NVIDIA to accelerate the DGX SuperPOD turnkey solution with our parallel file system, BeeGFS, is exciting for us as we strongly believe this will also help our customer base to further accelerate their growing performance demands,” said Frank Herold, CEO of ThinkParQ.
With BeeGFS, you get a blazing-fast HPC file system that’s automated and integrated into the overall DGX SuperPOD experience. With NVIDIA DGX SuperPOD and InfiniBand networking, NetApp storage, and the BeeGFS file system, you get a complete, fully supported, and performance-proven AI platform.
When it comes to managing the performance and capacity of your extreme AI workloads, EF600 arrays let you scale seamlessly from terabytes to petabytes (or more), one building block at a time. As you add storage building blocks, the file system’s performance and capacity grow right along with them. BeeGFS also significantly reduces your data management headaches by serving your entire storage capacity in a single namespace.
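Turning that scaling story around, the short sketch below estimates how many building blocks a hypothetical deployment might need to hit a capacity and read-throughput target, again using only the per-block figures quoted earlier; the targets themselves are made up for illustration.

```python
import math

# Per-building-block figures quoted earlier in this post.
READ_GBPS_PER_BLOCK = 76
CAPACITY_TB_PER_BLOCK = 431

def blocks_needed(target_capacity_tb: float, target_read_gbps: float) -> int:
    """Smallest number of building blocks that satisfies both targets."""
    by_capacity = math.ceil(target_capacity_tb / CAPACITY_TB_PER_BLOCK)
    by_throughput = math.ceil(target_read_gbps / READ_GBPS_PER_BLOCK)
    return max(by_capacity, by_throughput)

# Example: a hypothetical 2PB / 500GBps-read target works out to 7 building blocks.
print(blocks_needed(2000, 500))
```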
NetApp has also collaborated with NVIDIA on other AI solutions, including NetApp ONTAP® AI. Powered by NVIDIA DGX systems and available as an integrated turnkey solution, ONTAP AI provides a converged infrastructure stack that makes it easy to integrate data pipelines, deploy AI model training, and speed return on investment. Additionally, NetApp provides high-performance storage for NVIDIA DGX Foundry, delivering instant AI development infrastructure as a hosted offering for enterprises. Now, DGX SuperPOD extends our portfolio and gives NetApp the broadest array of solutions with NVIDIA.
Beyond offering enterprise applications, NetApp is collaborating with NVIDIA and Wichita State University to empower the next generation of talent. “We’re thrilled to be working with NetApp and NVIDIA to build an HPC/AI Center of Excellence with NVIDIA DGX systems so students can experience supercomputing and high-performance computing infrastructure,” said Tonya Witherspoon, Associate Vice President, Industry Engagement & Applied Learning at Wichita State University.
NetApp all-flash NVMe storage is delivering a whole lot more “super” in the data fabric that fuels DGX SuperPOD. With jaw-dropping performance from ingest through archive, we’ve once again built upon our expertise to keep the data flowing… and we’re pushing the boundaries of what’s possible with AI.
As vice president of NetApp’s Solutions and Alliances program, Phil Brotherton is responsible for leading NetApp product/service teams and strategic partners to bring customers exceptional value in advanced solution areas including AI, modern analytics, private, hybrid and public clouds, and database and enterprise applications. Prior to building and leading the Solutions and Alliances program, he founded and led NetApp’s Public Cloud business unit. He also focused on product development, global alliance, and marketing programs for grid and virtualized infrastructure environments.
Before joining NetApp in 2004, Phil spent the dot-com era in startups—leading Marketing for JNI and Acta Technologies. He started his Silicon Valley career at Hewlett-Packard where he held a variety of engineering and marketing roles building UNIX servers and enterprise storage.
Phil holds two degrees from the University of California—an MBA degree from Berkeley’s Haas School of Business and an engineering degree from UC Santa Barbara.