Hello and welcome to today's webinar: right-sizing VMware cloud compute and storage is no longer a dream. This special webinar event is produced by ActualTech Media and presented in partnership with our friends at NetApp, VMware, and AWS. Thanks so much for joining us on the webinar today. We love the ActualTech Media audience, we appreciate your support, and we want to help solve your technology challenges on events like these. Before we get started, there are just a few things you should know about today's event. My name is David Davis and I'm a partner at ActualTech Media. We do encourage your questions here in the questions pane of your audience console. Many of you have already said hello and good afternoon from across the United States and around the world. We appreciate that, but we also want your technical questions. We will be doing a dedicated Q&A session here on the live event, so keep those good questions coming during the presentation. We even have a best-question prize, which I'll talk about in just a moment. But first, I do want to encourage you to check out the handouts tab. It's there that you'll find three different resources: the VMware Cloud on AWS integration with Amazon FSx for NetApp ONTAP solution brief, the TCO calculator, and the technical deep-dive document. So make sure you check out all of those resources before you go; they won't be as readily accessible after the event as they are right now. At the end of the webinar today, I'll be announcing the winner of our $300 Amazon gift card prize. If you're watching this on demand, of course, that drawing will have already occurred. The prize terms and conditions can be found in the handouts tab as well. And then, as I alluded to, we also have our best-question prize for an additional $50 Amazon gift card.
We started this just to help shake questions loose. I know many of you out there in the audience have questions and you're just not quite sure if you should ask. The idea of this prize is that you have to ask a question to be entered into the prize drawing; we contact the prize winners after the event, and you of course must still meet the prize terms and conditions. All right, with that housekeeping out of the way, it's now my pleasure to bring in today's expert presenters. Welcome to Chuck Foley, senior director of product marketing for public cloud services at NetApp; Glenn Sizemore, a product line manager in the cloud platform business unit at VMware; and Kieran Reid, a senior partner solutions architect at Amazon Web Services. Chuck, Glenn, and Kieran, it's great to have you on the webinar. Thanks for being here. I'll first hand it off to you, Chuck. >> All right, I really appreciate it, David, and thank you to all of you for spending time with us today. And thank you especially to my counterparts up here for sharing some insights with you. Some of the things we're going to go over today are what we're hearing from the market, from you, from customers, on what they're facing in dealing with the complexities of a hybrid multicloud world and how they shift workloads from data centers up to the cloud. In particular, it's that input that's driving the investments many of us in the industry are making. Today we're going to talk about what NetApp, VMware, and AWS are doing jointly to address the issues you're facing. Then it will be a real pleasure to turn it over to Glenn and Kieran to go into an absolute deep dive. So roll up your sleeves and get out a pen and paper.
You're going to want to take a lot of notes as they talk about what's going on, the bits and bytes and moving pieces of VMware Cloud on AWS using FSx for NetApp ONTAP, with reference examples, reference architectures, customer examples, even some demos. Then we'll take your questions to figure out where you want to go next. In particular, the issues we're dealing with really focus on scale and complexity as you move into a hybrid cloud and a multicloud world. Virtually everybody is using the cloud and using infrastructure in the cloud. We're not talking about just SaaS services, but literally using IaaS and PaaS services in the cloud as well. The latest reports we find from the analyst community show a massive number, 87% of enterprises, having a hybrid cloud strategy, and an even greater amount, because not everybody still has a data center footprint, using a multicloud strategy. In essence, the days of using only one cloud footprint or using only a data center are, for most of us, gone. That move to a multi-environment scenario brings a lot of issues and changes. The biggest one is complexity: different frameworks, different lexicons, different tool sets, different competencies required. The effect on the IT ops and cloud ops staff of being in a hybrid multicloud environment is crushing a lot of companies. Just yesterday I sat down with some customers, and the hue and cry was: I have an AWS team, an Azure team, a Google team, an on-prem team. Pretty much three of the teams are doing okay. One of them lost three headcount to recruiters, and it's just killing us, because the others can't always plug right in since they don't use the same lexicon, the same tool sets, the same frameworks all the time. This complexity leads to other issues like security risks.
The more moving pieces you have, the wider your attack vectors are. So we're looking for ways to bring simplicity, and through simplicity we bring stability and security. If we do it right, you have a set of standard operating environments that you can apply wherever you need to go, whichever cloud environment you have, whichever workload you have. That's something these three companies have been working toward. We go back 20 years with VMware, as an example. We were a co-design partner for VMware's first development initiatives like vVols, NVMe, NFS 4.1, cloud-native storage, and others. And so we brought that technology and integrated it not only into our traditional data center accounts, of which there are tens of thousands, but into the cloud itself: the ability to take your well-architected, very secure, stable, and scalable VMware operating environments from on premises and lift those workloads into the cloud without having to adopt new frameworks and runbooks and policies and procedures.
vSphere is the number one platform worldwide for virtualization, and most of our customers have said: I need to lift that up into the cloud, but I need to lift it up with the scalability, performance, and cost vectors that I've become used to on premises. That's how we're working together to modernize the data center. We've had some very significant milestones, as you'll see here, as we've worked with both of these partners. If I look at VMware in particular, look at the initiatives I mentioned before: NFS datastores in a cloud-native footprint, NFS 4.1, vVols, etc. What we're really talking about today is how to bring scalable storage, storage that you can scale independently of your HCI-oriented compute and memory requirements, to the major cloud platforms. That's now available in AWS, Azure, and Google, and we're going to show you the AWS implementation today. As we build that out, we're helping customers along their journey. Customers have gotten really good at virtualizing their compute, network, and storage environments in the VMware world, but there's still a growth path as they embrace the cloud and learn how to go from optimizing what they have to accelerating their transformation into the cloud. As we talked about earlier, we've moved from a simple hybrid cloud to a hybrid multicloud world, which requires standardization across those frameworks. Now we've got to be able to accommodate the data-center-sized workloads you've grown on premises in your footprint and move them up to these three separate clouds. AWS is a great target for that, especially in light of the work we have been doing at NetApp with AWS over the past almost 10 years. We started back in 2012, when we partnered with AWS and brought out NetApp Private Storage to give scalable, secure, data-center-class NFS capabilities in the AWS cloud.
We followed that up by bringing enterprise-class storage with ONTAP, the world's leading storage operating system, into the AWS environment eight years ago, putting that capability into the hands of our customers and then layering on additional services for data protection, orchestration, and cyber resilience. This culminated in 2021, when AWS selected the ONTAP operating system to literally be a part of the AWS plumbing, if you will, and to implement ONTAP not as a third-party technology but as a first-party native service offered directly from AWS as a fully managed service for enterprise-class block and file capabilities. That really set the stage for what we're talking about today. When you look at the scale of workloads we have in the data center and lifting that up into the cloud, you need the scale of capabilities you've become used to in the data center. AWS turned to NetApp and said, we'd like to make you a part of that foundation, based upon those decades of experience, and it addresses the key challenges we're dealing with for our customers. When we polled AWS customers and asked, as you move enterprise workloads to the cloud, what are you most worried about? AWS being a trusted partner was not something they worried about; they already have that confidence. It's why it's the largest public cloud in the world. But when you look at specific technology implementations: number one, they needed technology that could reduce the cost of an ever-growing storage footprint. Number two, they needed a storage footprint in the cloud that allowed them to seamlessly and easily migrate not only their data but the processes and services around that data. Many customers have integrated environments with runbooks and processes and frameworks and API integrations they built into their on-prem environments that they would like to lift up to the cloud. And third is what I mentioned before: staff competency and training.
How do I make it simple for a data center staff to adopt and embrace an infrastructure in the cloud that mirrors what they have been used to on premises? And so that's where we're going today. We were so pleased earlier this year when VMware and AWS worked together to announce first initial availability and then general availability of VMware Cloud on AWS with NetApp-powered FSx for NetApp ONTAP as a supplemental datastore. It's a big deal for us and our customers because this is the first co-design integration that VMware has selected to help them expand their cloud-oriented footprint. So now we have in AWS VMware Cloud, a VMware-offered managed service on AWS, integrated with Amazon FSx, an Amazon first-party storage service, supporting it as an underlying storage service that can scale independently of the compute nodes and be used for workloads with a large storage footprint requiring high performance, high availability, and high capacity, still fitting into the VMware infrastructure. I'm really pleased to turn it over now to Glenn as he walks you through some of the particular aspects of VMware Cloud on AWS using FSx for NetApp ONTAP. >> Thanks, Chuck. Okay, real quick, just to explain, from the VMware perspective, why we've built this enhancement. Over the past four or five years of operating the managed service, what we've discovered is that when customers are operating inside a public cloud, they fundamentally have an expectation that they can manage compute resources independent of storage resources. Then, inside that storage umbrella, it's actually a little more nuanced: they expect to be able to manage capacity independent of performance. As we all know, this is mainly to be able to control cost and optimize operations. Now, when we take a look at the current offer inside VMC, today we use a strict HCI model.
So every time you get a host from us, you get compute, memory, and storage in the trifecta. And to be honest, this works for the vast majority of our customers, right? If you are using the CPU and memory at all inside one of our hosts, the storage we can provide inside that host is some of the most cost-effective storage you can get your hands on. But unfortunately, as we all know, not all workloads are the same, and this breaks down pretty quickly when we start to talk to a customer that has an asymmetric requirement, where their storage requirements dramatically outstrip their compute and memory requirements. Now all of a sudden, instead of looking at potentially one or two extra hosts here or there, we're looking at having to buy dozens of additional hosts with no intention of ever putting any VM workloads on them. Unfortunately, for most customers, this just broke the model and prevented them from being able to take advantage of the service. We had a lot of conversations where it was: we really want to do this, it sounds awesome, but the TCO just isn't there. So to resolve this tension, what we've done is add NFS datastore support inside the service. The initial offer went generally available this morning, a couple of hours before this call. We support up to four datastores attached to every cluster, and you can attach a datastore to as many clusters as you have inside your software-defined data center. Now, this is a brand-new integration. You do not manage datastores inside vCenter like we have traditionally for the past 20 years; the management has been moved up into the service layer. It's been integrated into the VMware Cloud API and SDDC UI, and we also have a new certification program where we require any attached datastore to be certified for use in a cloud context.
As of right now, Amazon FSx for NetApp ONTAP is the only storage solution that is certified. At the end of the day, we did this so that our customers could right-size their storage to control cost. But since we're also just doing a protocol-level extension, that means technically, if the thing we're connecting to has any kind of rich data management features, then all of those features are instantly available to our customers as well. So it really was a win-win. Now, when we were taking a look at this integration, of course you've got to build with something, and honestly, Amazon FSx for NetApp ONTAP was our peanut-butter-and-chocolate moment. We chose them to be our design partner for this integration, and we've been working with them for well over a year. Every single bit of development and QA, every little tiny bit of work, has been done on top of FSx. Really, that's because for us this is the perfect blending of two partnerships. The entirety of VMware Cloud on AWS is built on top of AWS's EC2 services, so we just know Amazon very well; we have a very deep partnership at this point, and taking on another foundational service and integrating it into the platform was a very light lift for us. At the same time, it's important what Amazon was executing here. vSphere and ONTAP are co-deployed all around the world and have been for 20-some-odd years; there's just an incredibly long history there. This is a well-known platform, and because of how it's been architected inside EC2 and the FSx offer, everything that's available to on-premises customers is available to them when they move into the cloud. So this really does provide full parity for an existing shop: if you're familiar with the solution and you're just looking to move to a managed offer, or you have a cloud mandate, it's the easy button for making those migrations simple.
And then finally, from a capabilities perspective, when we take a look at FSx, it really is capable of stepping up to the kinds of workloads we see customers asking for, with the current product supporting roughly 200 terabytes of raw SSD capacity before you add in efficiencies, and a three-nines SLA for the multi-AZ offer. This aligns perfectly with what we provide inside the service itself. Now, one thing that is unique about this is that FSx is not something VMware resells or operates; it is maintained by Amazon through their managed service. So as a customer, when you bring these datastores into the SDDC, this does change the management model a little bit. But at the same time, that also brings benefits, because again, this is ONTAP: a multi-protocol, multi-tenant operating system. You can use a single FSx file system for more than just a datastore; you can use it for your in-guest workloads. If you have SMB or NFS or iSCSI-based block workloads today and you want to lift those up into the cloud without doing refactors or migrations, then this gives you a simple button to manage those workloads as well. And again, all the rich data management features, all of the integration that customers are used to around the world, is available when using FSx. The only things not available at this time are the vSphere plugins, so your storage replication adapters and the VAAI plugins aren't available right now. But particularly if we're talking about in-guest-initiated connections, everything works: all your snapshots, all your FlexClones, SnapDrive, all that stuff. All right. Now, I mentioned that this is a customer-managed asset, right? This does change the support model for us just a tiny bit. Traditionally inside VMware Cloud on AWS, everything is VMware; we're the one stop for everything. It's one of the main value props of the service.
You have the actual software development service owners responsible for your outcomes and ensuring that your software runs as it's supposed to. When we talk about FSx, though, that model mostly stays the same. If you're having any issues with the SDDC and you suspect the storage, the first call is always to VMware; we continue to be the first call. We will validate that your SDDC is functioning properly and there aren't any issues over there. If we're not able to resolve the problem, then we will walk the customer through opening a case with AWS and doing the appropriate linking so that our two support services can work together, with your authorization. So there is a bit of a split, but at the same time, with that split comes this great power to connect these two services and leverage some of the enhanced data management functionality that we just don't have in native vSphere today. Now, one of the things we like about how Amazon has built this service is that it's outcome focused. If we take a look at how you actually configure, procure, and control an FSx file system inside EC2, there are really only two tunable parameters you have to think about. There's what they call the throughput capacity, which really controls the physical thing that ONTAP is installed into: it controls how much CPU, memory, and network performance is available to the aggregate file system. They've simplified all of these metrics just by certifying it and saying, this combination of resources can go X amount of speed. And then you have your SSD capacity, which is just the raw capacity pool available inside the ONTAP file system. Usable capacity is based on your efficiency rating, so the more dedupe, compression, and compaction savings you get, the more data you can physically fit inside an individual file system.
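The two tunables Glenn describes map directly onto the FSx for NetApp ONTAP provisioning API. As a minimal sketch, assuming the boto3 `create_file_system` call for ONTAP file systems (the subnet IDs are placeholders, and the specific values are illustrative, not a recommendation), provisioning a multi-AZ file system looks roughly like:

```python
# Sketch: the two tunables (SSD capacity pool + throughput capacity) as they
# appear in an FSx for ONTAP create request. Subnet IDs are placeholders;
# the live boto3 call is commented out since it needs AWS credentials.

def build_fsx_request(ssd_capacity_gib: int, throughput_mbps: int) -> dict:
    """Assemble a create_file_system request for a multi-AZ ONTAP file system."""
    return {
        "FileSystemType": "ONTAP",
        "StorageCapacity": ssd_capacity_gib,       # the raw SSD capacity pool (GiB)
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder IDs
        "OntapConfiguration": {
            "DeploymentType": "MULTI_AZ_1",        # the multi-AZ, three-nines offer
            "ThroughputCapacity": throughput_mbps,  # e.g. 512, as in Glenn's example
            "PreferredSubnetId": "subnet-aaaa1111",
        },
    }

request = build_fsx_request(ssd_capacity_gib=2048, throughput_mbps=512)

# import boto3
# fsx = boto3.client("fsx")
# fsx.create_file_system(**request)
```

Note how the request mirrors the talk: capacity and throughput are set independently, which is exactly the compute/capacity/performance separation customers expect in the public cloud.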
But the good thing here is that this is a very flexible platform, so it's a low-stakes investment when you deploy. When it comes to file system throughput, we can scale up or scale down with very minimal disruption; it's just a storage failover event for them to swap the controllers out. The SSD capacity pool can also be dynamically scaled; it, however, can only go in one direction. When you add capacity to the file server, that capacity is permanently added. If you don't need capacity and just need a little more performance, you can instead purchase some provisioned IOPS independently. So it's a very dynamic system where, as a customer, you can start small and then really fine-tune it in step increments, growing as you need it, without having to do massive planning cycles where you know absolutely everything that's going to happen for the next three years before you can even hit deploy. Just to get an idea of what this is capable of: again, we mentioned this is outcome focused, so if we take a look at the physical resources, if you set your file system throughput to 512 megabytes per second, for example, you're getting a controller with 32 gigs of in-memory cache and 600 gigs of extended NVMe flash cache. That's a lot of cache when you're talking about read performance, and you can see how this scales linearly as you move up through those throughput increments. Now, on the SSD capacity pool side of the house, you get two things whenever you add capacity: for every gigabyte of capacity, of course, you get a gig of capacity to store data on, but you also get an IOPS and throughput allocation for how much data can be sent from the file system down into the actual disks themselves. Generally, for each gig you get three IOPS and 768 kilobytes of data throughput. And if you scale this all the way up to the configured max, right?
That brings you to about 80,000 disk IOPS and roughly 2 gigs of disk throughput. Now, that doesn't mean you only get 80,000 IOPS of performance; there are a lot of things inside ONTAP that are very clever about getting more than the physical resources would suggest, between the caching and WAFL itself just being really good at writing. You can get a lot more than that 80,000 out of the file system. The throughput number, though, is really a reliable figure that you can depend upon when you're designing and looking at the offer. If we just focus on throughput, generally speaking, the rated number is your max for reads, and writes are about half of that. So if we're looking at the large instance, we can read up to two gigs per second, assuming there's sufficient performance on the back end, and we can expect to write about a gig per second in aggregate. So it's a really flexible system, very capable of handling modern-day workloads. And this is still kind of the early days for AWS; this product is only going to get bigger and faster as we continue to work together and move forward. Now, there is one thing you've got to know about currently, and that's because we chose to invest in and integrate this solution using NFS v3. The NFS v3 protocol currently uses a single TCP/IP connection from each host to each datastore, and the AWS backend has a rule that basically squashes any single TCP/IP session to no more than 5 Gbit/s of aggregate throughput. In practice, what this means is that each vSphere host can expect roughly 450 to 500 megabytes per second of throughput for each datastore present. Now, we can architect around this, as is shown here in our example: if we attach multiple datastores to a host, then each one of those datastores has its own 450 to 500 megabyte-per-second throughput allocation.
So we can aggregate up to four of them to get roughly two gigs per second to a single cluster, and your cluster aggregate throughput is that multiplied by however many hosts you have. Now, this does mean that if you have one monster workload, one big shark that you're trying to get inside VMC, you currently really do have to be aware of this throughput limitation and architect for it. You may need to break up the workload into multiple VMDKs and then do some guest striping or disk grouping or that sort of thing to aggregate the throughput. But for generic VM workloads, what we were seeing in our customer environments is that it's really not a concern; the system is more than fast enough, and it really does solve a lot of problems. The other thing we currently need to be aware of is the way we connect these two services. Since FSx is deployed inside a customer environment and the SDDC is deployed inside a VMware account, natively these two environments don't know how to get to each other. So we currently use VMware Transit Connect, a transit gateway managed by VMware, to connect the two services, and that is a metered data connection: data traffic going through there, whether it's read or write traffic, is metered at 2 cents per gigabyte. Now, just to give an idea of what this could look like, let's take our worst-case scenario. Assume we deploy a 2-gig-per-second file system and push it to its limit, driving roughly 1.6 gigabytes per second of aggregate throughput with roughly 100K IOPS. If we were to run that for an entire month, it would cost roughly $79,000 in data transfer fees.
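The sizing rules and the transfer-cost scenario above can be sanity-checked with quick arithmetic. A sketch using only the figures quoted in the talk (the helper names, the 475 MB/s per-datastore midpoint, and the treatment of 80,000 IOPS and 2 GB/s as hard caps are my own assumptions):

```python
# Back-of-envelope sizing from the figures in the presentation:
#   - SSD pool: 3 baseline disk IOPS and 768 KB/s per GiB, up to ~80K IOPS / ~2 GB/s
#   - NFSv3: one TCP session per host per datastore, ~450-500 MB/s each, max 4 datastores
#   - Transit Connect metering: $0.02 per GB of storage traffic

def baseline_disk_perf(ssd_gib: int) -> tuple[int, float]:
    """Baseline disk IOPS and throughput (MB/s) earned by the SSD capacity pool."""
    iops = min(3 * ssd_gib, 80_000)            # 3 IOPS per GiB, capped ~80K
    tput_mbps = min(768 * ssd_gib / 1024, 2_048)  # 768 KB/s per GiB, capped ~2 GB/s
    return iops, tput_mbps

def host_nfs_throughput_mbps(datastores: int, per_ds_cap_mbps: float = 475.0) -> float:
    """Per-host throughput: each NFSv3 datastore mount is one ~475 MB/s session."""
    return min(datastores, 4) * per_ds_cap_mbps

def monthly_transit_cost(gb_per_second: float, price_per_gb: float = 0.02) -> float:
    """Metered Transit Connect cost for sustained traffic over a 30-day month."""
    return gb_per_second * 86_400 * 30 * price_per_gb

# Four datastores per host aggregates to roughly 1.9 GB/s:
agg = host_nfs_throughput_mbps(4)          # 1900.0 MB/s
# Glenn's worst case, ~1.6 GB/s sustained for a month, lands near $83K,
# the same ballpark as the ~$79K quoted on the slide:
cost = monthly_transit_cost(1.6)
```

The point of the arithmetic is the ratio Glenn highlights next: sustained worst-case transfer fees can dwarf the monthly cost of the file system itself.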
Now, considering the file system I just described runs roughly $2,000 to $4,000 a month, these fees could potentially be an issue. It's just something we want to make sure customers are well aware of ahead of time. As I'll touch on at the end of this presentation, we are looking to remove both of the concerns I just addressed, and I'll tell you how we're going to fix them. But if you're planning migrations right now, definitely be aware of these sharp edges and make sure we account for them when we're doing your total design. And with that in mind, I'm going to hand over to my good friend and longtime partner over at Amazon, who will take you through the architecture and some additional considerations. Take it away, Kieran. >> Thank you, Glenn. I hope everyone can hear me okay. So this is the reference architecture we're going to take a look at; apologies, I'll progress to the next slide. There we go. And don't worry about all the components you're looking at here: it looks like a lot, but a lot of them are there already. Existing customers will already have most of the components on the right-hand side, but we'll refresh ourselves on those as well and look at the ones on the left, which make up the new offering we're discussing today. Starting with callout number one: that is your VMware Cloud on AWS software-defined data center, which in this example is deployed in a single-AZ configuration with a single vSphere cluster. Something else we should call out is that at the moment we don't support a stretched-cluster or multi-AZ deployment of the SDDC; that is a roadmap item which we'll probably look at later, but at the moment single-AZ is what's supported. Looking at item number two: that is the SDDC group.
You create the group as the customer, and when you do, you automatically get the VMware-managed transit gateway. It's called VMware Transit Connect, but all it is is an AWS transit gateway fully managed by VMware, automatically deployed when you create that SDDC group. Within that group you can have one or more SDDCs; most customers just start off with one and attach it to their file system. So we touched on what that VTGW is, the VMware-managed Transit Connect, and it provides the connectivity, as Glenn touched on earlier, between your SDDC and your FSx for ONTAP file system. Looking at number four: these are the virtual machines running in your SDDC that run on vSAN, and we're looking to augment that and add the NFS datastores. That brings us to number five. We deploy a multi-AZ FSx for NetApp ONTAP file system that gets deployed across two availability zones, hence the name multi-AZ, and that file system contains multiple volumes; in this reference architecture we call them DS1 and DS2. That will provide supplemental datastores to the vSphere cluster and enable you to add storage as you need it or take it away when you don't. In order to provide that end-to-end connectivity, the AWS transit gateway has elastic network interfaces which connect the customer-account VPC to the VMware-managed Transit Connect. And if you look in the middle there, we've got the floating IP, and the reason we've got that is, of course, because we deployed across two availability zones: we have a floating IP to manage failover between the two in the unlikely event anything should happen. I'll come back to number eight in just a second, because that just talks about the flow. But if we look at number nine, the next step is to configure the security group, and I saw someone call out security in the chat earlier.
In there you limit which of your SDDC's traffic can talk to the file system; you control the connectivity within the security group. Glenn's team, as well as the NetApp team along with AWS, have put together some really good documentation and click-through demos on exactly how to configure that, so it won't be too alien to you when you come to do it for the first time. Finally, we look at the storage traffic. The traffic flows from the datastores that you mounted to your ESXi hosts and goes through the VMware Transit Connect; from the Transit Connect it routes the storage traffic to the AWS transit gateway attachments in the customer VPC, where it's finally routed to the active FSx for ONTAP ENI. So that's an overview of the architecture. Again, most of it is already there for existing customers using the service today; it's the section on the left that we're now deploying and configuring, and the transit gateway binds the two together. Moving on to the next slide, we're going to talk about the connectivity overview and what happens in the unlikely event of a failure. What will happen here is, in the event of an AZ failure or any kind of disruption, let's say the EC2 instance that was actively hosting the data has unexpectedly crashed. The instance would go down, and at that point ONTAP starts the failover process non-destructively in the background. Let's see if this animation works. There we go. FSx for NetApp ONTAP is simultaneously, in the background, rerouting the network to swing the active IP address. That goes back to the floating IP address in the reference architecture, which is why I wanted to call it out there. ONTAP then restarts the storage virtual machines that are attached to those volumes and then brings up those exports.
It also updates the default route, and once that route is updated, it instantly restores the connectivity path from the vSphere hosts, which automatically reconnect to the datastores, and your workload can resume. In our testing with VMware and NetApp, we normally see the disruption last about 60 seconds, which isn't bad when an entire node goes down. The last slide we wanted to cover, and Glenn touched on this, is what's beyond datastore support: it's a multi-protocol solution. The datastores themselves are NFSv3, but if you've got guest workloads that need iSCSI or NFS 4.1, you can provision volumes from the same file system, or a brand-new one, and present them directly to the guests to meet those workload requirements. So that's something else the system is capable of. I hope that was a good overview of the solution, and now I'll hand back to Glenn to run through a few more bits. >> At a high level, we've been talking to customers for about a year now about using these two products together, and what we hear from them is that when they're looking at VMC, particularly in a hybrid context like Chuck mentioned at the very top, one of the biggest benefits of this joint offer is that they completely avoid the need to re-platform or re-architect their applications. They can accomplish those cloud-first moves, a migration, or a data center evacuation without having to reinvent the wheel. They can just pick everything up, set it down somewhere else, and keep operating and managing it the way they do today.
It also enables them to reduce the cost of running their workloads, particularly if we're talking to native VMC customers. We have customers today that are what we call storage-bound: they have hosts in their cluster purely for the purpose of adding storage, and that is not a terribly cost-effective way to operate an HCI deployment, particularly in the public cloud. By combining FSx and introducing supplemental datastores, a lot of customers will be able to reduce their costs virtually overnight once they get this integration in place. And finally, since these are two managed offers, you really get to reduce the problem space. If you're an on-premises customer today, you're responsible for everything; you hold the bag. You have to manage the entire architecture, your app stack, your customers, and so on. When you build on top of a managed offer, you start with a guaranteed outcome: VMware is saying your VMs are going to be available, and Amazon is saying your data storage is going to be available. You as a customer just manage the configuration of those assets; the day-to-day patching, security, operational readiness, and auditing are the responsibility of the providers themselves. By reducing the problem space for what you need to manage and take care of, it lets you get a lot more done. If we compare projects that typically deploy on premises with cloud-first projects, we typically see those cloud-first projects get to that first increment, that first data bit, quite a bit faster, again because they don't have to do everything. Just to put some numbers behind this, we took a couple of examples from the great many customers we've talked to. We talked to a midsize enterprise with about 5,000 employees, 351 virtual machines, and roughly 600 terabytes of storage.
This particular customer could save over a million dollars: 25% lower TCO by using FSx in combination with their SDDC. We also had another customer, a data center expansion, a test/dev environment. They were building some new apps, and this gets back to what I was talking about: they just needed a small environment; this wasn't a huge deal. Six hosts would get them up and running, but we were able to cut two of those hosts out by combining FSx, again saving about 25% TCO. The neat thing about this particular customer is that the whole reason they were moving to VMC was that it was the middle of the pandemic and they physically could not get servers. They had a project that had to go, they could not get the hardware to do it, and we were able to stand up an SDDC for them within two hours. So by combining these two services, not only are we solving customer pain points, we're doing it in a way that doesn't break the piggy bank. And honestly, the larger the data set, the better these numbers get. We talked to a government IT organization doing a modernization. It's a large environment: 6,000 virtual machines, over 300 terabytes of data capacity. The pure SDDC was going to be quite large, over 30 i3en hosts. If we compared that to an FSx environment, we were looking at reducing their costs by almost 50%, saving them almost $3 million on the total deal. And this was a big deal for them in particular, because the project is driven by a data center closure. They don't have a choice; they have to get out of the room, and VMC gives them a means where not only can they keep everything the same, right?
They continue to take advantage of the operational efficiencies they've been building on for years, and they do so, again, without breaking the piggy bank. In this case, FSx is the difference between the customer being able to afford the solution and having to go find something else out in the market. And then finally, another midsize environment. This one is interesting because the customer had a mixed workload. Again, about 5,000 employees; they had about 400 VMs with about 90 terabytes of data capacity, but they also had 150 terabytes of file storage that their build farm needed access to. In this environment, we actually did a hybrid, using two FSx file systems: one optimized for the VM workload, all-SSD capacity with large throughput because the database servers needed a lot of performance, and a second FSx file system optimized for file-based workloads, using cloud tiering and smaller throughput. The flexibility to mix and match configurations like that makes a huge difference: you can see here almost a 50% TCO reduction compared to a raw SDDC configuration. So we're super excited to have launched this solution. We think it's going to make a big difference for customers, and as we shared at VMware Explore last week, if you previously thought your workload was too big for VMC, it's time to reassess, because we think we're ready for your biggest and baddest workloads. >> And with that, I'll hand it back to Chuck. >> Thank you so much, Glenn. It's so good to work with pros. I really appreciate you and Karen walking everybody through this. As you think about all this, you might want to look for some resources.
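The savings arithmetic behind those customer examples can be sketched in a few lines. All the dollar figures below are illustrative assumptions, not quoted AWS or VMware pricing; for real numbers, use the TCO calculator in the handouts.

```python
# Illustrative TCO comparison: a pure SDDC with hosts added purely for
# storage, versus a rightsized SDDC plus FSx for ONTAP supplemental
# datastores. Host and per-TB prices are made-up placeholders.

def annual_tco(hosts: int, host_cost: float,
               fsx_tb: float = 0.0, fsx_cost_per_tb: float = 0.0) -> float:
    """Annual cost of a cluster: host subscriptions plus FSx capacity."""
    return hosts * host_cost + fsx_tb * fsx_cost_per_tb

def savings_pct(before: float, after: float) -> float:
    """Percent saved by moving from `before` to `after`."""
    return round(100 * (before - after) / before, 1)

# Hypothetical storage-bound cluster: 8 hosts, 2 of them there only for
# their storage capacity.
pure = annual_tco(hosts=8, host_cost=60_000)
hybrid = annual_tco(hosts=6, host_cost=60_000,
                    fsx_tb=100, fsx_cost_per_tb=600)
```

With these placeholder prices, dropping the two storage-only hosts and buying 100 TB of FSx capacity saves 12.5%; the pattern scales with the size of the storage overhang, which is why the larger examples above show bigger percentages.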
And these three companies have worked together to put some tools at your fingertips so you can figure out what this would look like in your environment: what would it cost, and what does it look and feel like? The first, which you see on the screen right now, is both an ROI/savings tool and a sizer. You see the URL there; it's also included in the resources and handouts. You go in through a browser, and you have the ability to profile your environment in terms of CPUs, memory, cores, storage, and throughput. So you can create an environment, as Glenn said, for your biggest and baddest workloads, and you'll be able to see what size of environment you'll need, as well as your potential cost and savings from using the new external supplemental datastore capability. If that's attractive to you and you say, "All right, I'm ready to go put hands on and get a look and feel of this environment," there's a simulator you can use, again browser-based. The URL is there for you, and it's also included in the resources and handouts, so you can see what it's like to initiate, spin up, activate, and put hands on the VMware Cloud environment in AWS. By doing these things, we're trying to make sure the community has what it needs to understand both the business aspect and the technical-competency aspect of what everyone has spoken about today. So with that, you might have a lot of questions. I'm going to flip it back to David to bring everybody into the discussion and see what questions we can answer for you. >> Absolutely. Yeah, great discussion. I learned a lot; this is a really cool solution. If you have a question out there in the audience, if you're wondering how exactly this might work, or about security, or how it would work with your applications, now is the time to get your question in, because we're kicking off our Q&A.
Let's see, the first question I see here came in from Satic in the audience, who's asking, how does support work? Basically, if I have a question about this solution while I'm using it and need some technical support, Glenn, how would I get it? >> VMware will own the case as far as we can take it. But since the FSx file system sits in a customer account, we don't have permission to open a case with AWS against an asset that doesn't belong to us. So basically, VMware will do everything we can, and if we get to a place where we need to bring FSx in on the call to help us out, our support services will walk you, the customer, through the process of opening the ticket that we need. There's a linking process that authorizes the two services to talk to one another, and at that point we take it from there. >> Got it. Okay, that sounds easy and straightforward; I appreciate you explaining it. Here's a good question. They're asking, "How has the partnership with VMware and AWS accelerated cloud adoption, Chuck?" >> Yeah. What these three companies have done in enterprise workloads is significant, and now the task, as we've talked about, is getting those workloads up into the cloud. In particular, here with AWS, there are massive footprints (again, I love how Glenn refers to them as your biggest, baddest workloads) that customers have been reticent to move into the cloud. That's kind of the 80/20 rule: so much of the footprint, in terms of compute, storage, data, and revenue generation, is locked up in data centers for a lot of companies, because customers haven't felt they had the infrastructure in the cloud to enable them to move up there and take advantage of it.
Activities like what we've done with FSx for NetApp ONTAP becoming that supplemental datastore, allowing you to scale your workloads' storage independently and realize the savings you've seen here from Glenn, are a big step. The other is that it shortens the gap in terms of competency: staff requirements, skill sets, training. That speeds cloud adoption. So if we keep doing this, if we keep making it simple to move on-premises workloads to the cloud, if we do it in such a way that no retraining is needed, no refactoring is needed, and your IT ops team has a really clear value prop for the FinOps team, we think we can continue to accelerate that adoption, and that's what you're seeing in this announcement. >> Absolutely. I love that this solution could help so many companies massively accelerate their cloud migration and be more agile; I love the case studies where companies were so much more agile because of this solution. So if a company did want to purchase the solution, with this being an integrated solution, Karen, maybe you can talk a little bit about how that would work? >> Yeah, the VMware Cloud on AWS solution is available for purchase through both AWS and VMware resellers. As Glenn touched on, VMware doesn't sell FSx at the moment, so when customers sign up for VMware Cloud on AWS, they're expected to bring their own AWS account, and they can purchase FSx for ONTAP using that account through Amazon. >> Excellent. Yeah, sounds straightforward. Here's a good question from Harvey, who wants to know: was there a clear trade-off in having multiple datastores to one host? I know this is something of a workaround for limitations, but what downsides, if any, are there to consider, Glenn? >> Yeah, honestly, the only downside is if you're in one of those workaround scenarios, right?
You're introducing a little bit of complexity, because just having multiple datastores isn't going to increase your performance unless you can also evenly distribute your workload across those datastores. If you had one VM with three VMDKs, it's very easy to put each VMDK in an independent datastore, but if one of those is your database and one is the logs, it's not distributed evenly. So it adds a little complexity in the guest operating system, and that's why we're working to remove that limitation; we don't want you to have to think about it. But honestly, 500 megabytes per second is nothing to shake a stick at; that's more than enough for most workloads. The only other consideration is that it's a shared resource. If you have 10 VMs over 10 hosts and you're evenly distributed, each one of them gets 500 megabytes per second; but if for some reason all 10 of those VMs end up on one host, they're all sharing that 500 megabytes. So there's a little complexity and some performance considerations, mainly. Design-wise, though, there's no downside to having multiple datastores or sharing them between clusters; it's one of the advantages of the solution compared to native vSAN. >> Excellent. And here's another good question, from Donnie, which I'll sum up by saying Donnie is concerned about moving his company's data from on premises into the cloud. What about security, Glenn? >> Yeah, this is a shared responsibility model. The way VMC on AWS works is that Amazon is responsible for maintaining the audits: PCI, the various ISO certifications, HIPAA, and so on. AWS maintains the audit and certifies the underlying EC2 services that we operate on top of.
And then VMware also maintains an audit and certifies all of the resources that we manage. So if you're a customer with a compliance workload, or a regulator wants you to prove you've done your due diligence, you as the customer are responsible for proving you did your controls inside the guest operating system and the application stack. VMware hands over our certification that proves we've done our part, and Amazon hands over theirs. This goes back to that operational efficiency: you don't have to maintain the entire stack. You just focus on your VMs, and the rest is managed by the providers themselves. And the same is true for FSx, by the way. >> Yeah, absolutely. I think that would make things far easier. Chuck, here's a good one for you. They want to know how important the data management capabilities, things like storage efficiency, replication, space-efficient clones, and backup, are to this integrated solution. >> That's a great list. I should have captured that and written it out; we'll have you take over a territory and start selling [laughter] ONTAP for us around the world. I think there are a couple of areas where these storage and data management capabilities are really important. Number one, it doesn't work if you don't have an ROI. If you, as an IT ops team, can't go to your FinOps team and convince them why you want to do this, you're dead in the water. When you look at the storage efficiencies, compression, compaction, dedup, and tiering, they're material: they cut the actual physical storage footprint needed by 60%, sometimes more. That translates into direct savings in your cloud environment, and most customers are using them on prem as well. That leads to number two, the fact that most customers are using them.
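That efficiency point can be put into numbers. The roughly 60% footprint reduction is the figure from the talk; the capacities below are illustrative, not from any particular customer.

```python
# Sketch of the storage-efficiency arithmetic: compression, compaction,
# dedup, and tiering shrink the physical capacity you actually have to
# provision. The 0.60 default mirrors the ~60% reduction cited above.

def physical_footprint_tb(logical_tb: float, efficiency: float = 0.60) -> float:
    """Physical capacity needed after ONTAP efficiencies shrink the data."""
    return logical_tb * (1 - efficiency)

# 600 TB of logical data, as in the midsize-enterprise example earlier,
# would need roughly 240 TB of provisioned capacity.
needed = physical_footprint_tb(600)
```

Since FSx capacity is billed on what you provision, that footprint reduction flows directly into the TCO comparisons discussed earlier.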
The second biggest holdback is "this is not the way I work; it's not my competency; it's not in my playbooks and runbooks." If, as an IT ops team, you've already given your DevOps team capabilities like FlexClone, FlexVol, and Snapshot, they're going to want and need them. You may have them built into workflows and your CI/CD pipeline, so you need to provide that same type of capability in the cloud. So number one, you have to get FinOps on your side. Number two, your operations have to be consistent from one environment to the other. Number three, your skill sets, training, and competencies: it's the people running this for you that make it work. By having the same environment on prem as in the cloud, you've removed so many obstacles, not only now, but in those first three, six, nine months of production runtime; that's really where people gauge whether this type of thing works. The really neat thing is there are other data services available in the cloud as well, simply and easily accessible through this platform, such as compliance and governance services, which you can layer on top. We're not here to go into that right now, but moving into the cloud offers a wealth of opportunity to add value to what has been your existing environment. >> Very well said; wise advice. I appreciate that. Glenn, the next question is for you. They want to know: besides cost savings, can you talk about any other benefits that the integration of VMware Cloud on AWS and FSx for ONTAP delivers? >> Yeah, there are a couple of unique scenarios that we just can't do without it. For example, App Volumes inside Horizon is currently a bit of a pain point. If you operate multiple SDDCs in different environments, keeping your content library synchronized is a bit of a chore; it requires some scripting and various assets.
Here with FSx, you just attach a datastore to all your SDDCs, put your content library on it, and everything else is handled for you. There's also the ability to move the problem somewhere else. Chuck touched on all those data management capabilities; in a pure SDDC environment, we need to do everything inside vSphere with the supported app stack we have there, and we've been managing customer environments that way. I don't want to sell our offer short: we've got quite a few customers at this point, and it's a massive operation. But nevertheless, there have been a couple of times in the past where, if we'd had the ability to say "what if we keep that data in FSx, manage it with SnapMirror, and put some snapshots on top of it," it really would have made the overall integration a lot easier for the customer, which at the end of the day is our ultimate goal. We just want to make it as easy as possible for you to operate inside the managed offer without reinventing the wheel. >> Excellent. And then what about use cases? What are the ideal use cases for VMware Cloud on AWS, and does it make sense for companies to deploy this economically? >> Yeah, honestly, probably 80 to 85% of our fleet is storage-bound, meaning the number of hosts in a given cluster is there based on the storage resources. Typically it's only a single host here or there, maybe two, but nevertheless, if you look at the cost of, say, just 10 terabytes of FSx data storage, it is absolutely more cost-effective than buying one of those hosts from me.
So in every single one of those scenarios, as a customer, when elastic DRS scales you up for storage purposes, instead of having to add another node, or run on demand and try to figure out how to delete data, you can just add a datastore, Storage vMotion some data workloads, and rightsize your SDDC. I really think that's going to be the main benefit for a lot of customers. And honestly, if that were the only thing it did, we would still have built this; everything else it provides is the cherry on top. From VMware's perspective, the ability to rightsize these SDDCs and control costs is absolutely paramount. >> Excellent. Well, I think we've covered all the best questions we have time for; we're running out of time in our webinar slot. I've learned a lot about this solution, and I know the audience has as well. If anyone has any further questions, now's the time to drop them in the questions box, and we'll get back to you after the webinar. But I really appreciate all the expert insight from Chuck, Glenn, and Kieran. Thank you all so much to our expert presenters. >> Thank you so much. Enjoyed it very much. >> Excellent. And thank you to everyone out there in the audience for joining us on the webinar today. I want to give a special thanks to NetApp, VMware, and AWS. Don't forget about the handouts tab; it's there that you can check out the resources for the VMware Cloud on AWS integration with Amazon FSx for NetApp ONTAP: the solution brief, the TCO calculator, and the tech deep dive as well. And on the screen there in the slide, we still have a link. If you mouse over that graphic on the screen, you can click on it, and it will open up the NetApp VMC on AWS NFS datastore simulator, so you can try that out for yourself right here on this webinar event.
And I'll leave that up if you want to click on it while I announce the winner of our Amazon $300 gift card. This is going out to Paul Groskoff from Illinois. Congratulations, Paul! I hope everyone learned a lot on the event today. We'll see you next time. Have a great day.
Need to extend storage-heavy VMware workloads to the cloud while controlling costs? Amazon FSx for NetApp ONTAP scales storage independently of compute to dramatically lower your TCO. Catch the details from NetApp, VMware, and AWS experts.