NetApp® ONTAP® 9.10.1 supports NVMe over TCP (NVMe/TCP). Check out how NVMe/TCP lets you do more with less.
In my previous blog post, I talked about NetApp’s release of NetApp® ONTAP® with a fully supported NVMe over TCP (NVMe/TCP) target. The most important thing about NVMe/TCP isn’t just its technical superiority in host CPU efficiency or latency, or even security and ease of deployment. It’s what those things mean for delivering faster results for your business more efficiently.
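To make the “ease of deployment” point concrete, here is a minimal sketch of connecting a Linux host to an NVMe/TCP target using the standard nvme-cli tooling. The IP address, port, and subsystem NQN shown are placeholders for illustration, not values from a real ONTAP system:

```shell
# Load the NVMe/TCP host driver (included in modern Linux kernels)
modprobe nvme-tcp

# Ask the target's discovery controller which subsystems it exports.
# The address and port below are placeholder values for this sketch.
nvme discover -t tcp -a 192.168.10.20 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder shown).
nvme connect -t tcp -a 192.168.10.20 -s 4420 \
  -n nqn.1992-08.com.netapp:sn.example:subsystem.app1

# The subsystem's namespaces now appear as ordinary block devices.
nvme list
```

No Fibre Channel HBAs, no special fabric, just the Ethernet network you already have, which is a large part of why NVMe/TCP lowers the barrier to entry compared with other NVMe-oF transports.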
NVMe/TCP is a great technology for disaggregated or asymmetric HCI configurations. The undisputed success of technologies like vSAN from VMware, or open-source offerings like Hadoop and Cassandra, points to how much value customers get from software-defined storage built on direct-attached storage (DAS). The raw throughput of these designs, built on top of “scale-out” commodity servers and storage, is impressive. Vendors like Nutanix made a lot of noise about how these DAS configurations were intrinsically faster, and went to great lengths to keep storage I/O on “local” disk. Recently, though, DAS-based software-defined storage, originally designed around hard disks and slow networks, has been complemented by easy-to-use, low-latency solid-state storage over fast Ethernet.
We now see offerings like VMware vSAN HCI Mesh and Nutanix storage-only nodes. Dell recently made a big deal about “dynamic” nodes in its previously HCI-purist VxRail configurations, which aren’t designed to serve data from local storage. This comes at a time when Dell customers are facing challenging upgrades to end-of-life ScaleIO (now PowerFlex) nodes that depend on server-based storage, with complex upgrade processes that can lead to data loss unless followed very carefully. Since Dell announced that PowerStore would be its primary focus for all-flash storage, PowerFlex customers have increasingly been asking NetApp to help them move away from that platform.
AWS neatly outlined the reason for the move toward disaggregated or asymmetric HCI:
"For many customers, operating a storage-heavy environment on VMware Cloud on AWS is likely to impact the compute footprint, increasing the hosts and overall TCO.
In this context, AWS services should be considered to offload the storage component from vSAN. Cost-optimization could then be realized through a combination of host reduction and utilization of low-cost cloud storage."
These low-cost cloud storage options now include Amazon FSx for NetApp ONTAP. FSx for ONTAP can improve TCO for VMware customers while accelerating their cloud journey by supplying a common operating environment across public, private, and hybrid clouds.
But what exactly are these “storage-heavy” environments? As I outlined in an earlier blog, for some it’s databases like Oracle or Microsoft SQL Server with traditional scale-up performance requirements. Increasingly, however, customers in our NVMe/TCP early-availability programs and users of FSx for ONTAP are asking us about containerizing big data and analytics workloads and managing them under Kubernetes.
The requirement for these workloads can be quite different from the older monolithic applications. Cloud-native architectures typically use a distributed data model with “many smaller databases, each aligning with a microservice… a design that exposes a database per microservice.”
Each of those microservices might have different storage requirements. Some of these workloads might work better with NFS, but many will want something that acts “just like DAS”—especially if they come from teams with a belief that “DAS is faster.” As this distributed data dramatically increases the number and variety of datastores, the importance of performance scalability and designing for flexibility increases as you plan your hybrid cloud journey.
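What a “database per microservice” storage model looks like in practice is one PersistentVolumeClaim per service, each able to request a different class of storage. The sketch below assumes a Kubernetes cluster with a CSI provisioner in place; the storage class name `ontap-san-nvme` and the claim name are hypothetical examples, not documented defaults:

```shell
# Sketch: a dedicated block-style volume for one microservice's database.
# The storage class name is hypothetical; in practice it would be defined
# by your CSI provisioner (for example, NetApp Trident).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce              # single-node block semantics, "just like DAS"
  resources:
    requests:
      storage: 20Gi
  storageClassName: ontap-san-nvme   # hypothetical NVMe-backed class
EOF
```

A file-oriented microservice could make an identical claim against an NFS-backed class, which is exactly the flexibility that matters as the number and variety of datastores grows.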
This is one area where I really like what VMware is doing with Tanzu by abstracting away a lot of the complexity under first-class disks and storage-policy-based management. The most natural fit for these first-class disks is VMware vSphere Virtual Volumes (vVols), which is why VMware announced vVols integration into VMware Cloud Foundation (VCF) to supply a common management framework for external storage.
This approach enables both management flexibility and the ability to get the best out of both vSAN and the gold-standard external storage and data management provided by NetApp® ONTAP®. It’s why NetApp has been working with VMware as a design partner for vVols and NVMe over Fabrics (NVMe-oF) and was the first vendor to ship end-to-end NVMe with vVols support in May 2021.
Dell and Pure Storage might try to make you think that they can rapidly copy our joint work and deliver the same kind of value today. But if you look past the marketing into the fine print, you’ll see they’re not there yet. For example, Pure’s support site says that NVMe-oF is still only supported with Purity for VMware Virtual Machine File System (VMFS), and the PowerStore document from July 2021 says, “Support for the NVMEoFC protocol does not enable VMs on external hosts to use vVOL storage on PowerStore clusters.”
Having NVMe/TCP support is a great foundation for improving performance and TCO, but the real benefit comes when you use it to add value to a larger ecosystem like vSphere, VCF, and Tanzu. NetApp has already completed a lot of work in that area with NVMe/FC and vVols. Adding another transport like NVMe/TCP is straightforward, which means that our joint customers will be able to hit the ground running soon after release. This will be true for both traditional scale-up use cases and next-generation cloud-native scale-out applications, both of which are essential parts of any hybrid cloud journey. That’s why showing leadership and having strong partnerships is critical. For a deeper dive into ONTAP and vVols, Tanzu, and VCF, check out NetApp with VMware Tanzu and Using vVols with NetApp & VMware Tanzu Basic.
If you’re interested in the work we’re doing with VMware, visit our page on virtual infrastructure management for VMware vSphere, and our market comparisons page where we detail why NetApp is best for flash.
Ricky Martin leads NetApp’s global market strategy for its portfolio of hybrid cloud solutions, providing technology insights and market intelligence on trends that impact NetApp and its customers. With nearly 40 years of IT industry experience, Ricky joined NetApp as a systems engineer in 2006 and has served in various leadership roles in the NetApp APAC region, including developing and advocating NetApp’s solutions for artificial intelligence, machine learning, and large-scale data lakes.