Hello and welcome to "Reduce AVS TCO and Increase Performance by Adding Azure NetApp Files Datastores." Today's webinar is sponsored by NetApp and produced by ActualTech Media. My name is Scott Becker. I'm from ActualTech Media, and I'm excited to be your moderator for this special event.

Before we get to today's great content, we have a few housekeeping items that will help you get the most out of this session. First, we want this to be an informative event for you, so we encourage any questions in the questions box in our webinar control panel. Not only will we have team members responding to questions during the live event, we'll also have a dedicated Q&A session at the end of the presentation where we'll discuss some of the top questions in greater detail. That Q&A panel is also the place to let us know about any technical issues you might be experiencing. A browser refresh will fix most audio, video, and slide-advancement issues, but if that doesn't work, just let us know in the Q&A and we'll provide further technical assistance.

Second, in the handout section of your webinar control panel, you'll find several resources. At the top of the list is a pair of PDFs from NetApp: one on the top reasons why the VMware and Azure combination is exactly what your enterprise applications need, and the other is the slide deck for today's presentation. Also in that section are links to the Gorilla Guide book club, where you can access ActualTech Media's printed resources on technology topics, as well as a link to the ATM event center, which has our calendar of upcoming events. I encourage you to access those resources now and share them with your friends and colleagues.

Third, at the end of this webinar we will be awarding a $300 Amazon gift card to one lucky registrant. You must be in attendance during the live event to qualify for the prize. The official terms and conditions of today's prize drawing can be found in the handout section; just scroll to the bottom and you'll find the link there. And finally, one of the best benefits of this event is the opportunity to ask a question of our expert presenters. To encourage your questions, we have an additional prize: another Amazon gift card, this one for $50, for the best question. At the end of the event, we'll look at all the questions that came in, pick out the very best one, and contact that prize winner.

And with that, let's get to today's content. I'm excited to introduce our speakers: Art Deir and Sean Loose, both technical marketing engineers at NetApp. They've got a lot of great information and insights to share with us today, and I'm really looking forward to this session. So now I'm going to turn it over to Art.

>> Hey, thank you, Scott. Thanks for the introduction, and thanks for having us on your webcast today. We'll introduce Azure NetApp Files datastores for Azure VMware Solution. My name is Art; I'm an Azure NetApp Files technical marketing engineer. Sean, how are you doing?

>> I'm doing good, Art. Thank you. My name is Sean Loose, and I am also a technical marketing engineer for Azure NetApp Files.

>> Great. So, let's look at the agenda for this session. We'll first introduce Azure VMware Solution: what it really is and what it encompasses. And we'll look at Azure NetApp Files as well; just what is that?
Then we'll talk about why Azure NetApp Files is a good solution in certain scenarios. We'll talk about storage expansion of Azure VMware Solution and why that may be required, and about migration and disaster recovery into this solution. We'll talk about when to consider Azure NetApp Files datastores. Then we have some workflow slides that we won't actually show, because they're covered in the demo that comes right after; those slides will be available in the handout. The demo will be the complete workflow of creating and connecting an Azure NetApp Files datastore, which is basically a two-phase approach. At the end we have some additional information for you, we'll review the learnings from this session, and hopefully we'll have some time for Q&A.

So what is Azure VMware Solution? What is it all about? Well, as the name suggests, it really is VMware in Azure, with the big difference being that we can now consume VMware as a service. That obviously means we no longer need to acquire any hardware for our on-premises data center. In Azure, everything is already there (to a degree, obviously, but there's a large amount of infrastructure built out for us), and using our Azure account we can acquire the use of a VMware cluster in Azure and be charged on a daily basis for the use of that cluster. That brings a lot of benefit, because we can now expand our cluster with an extra node or two if circumstances dictate, if we're running out of horsepower. That brings an unprecedented amount of flexibility: we no longer have to size for our maximum workload, because we can scale our deployment up and down to accommodate the workload. And there's some news to be had with the datastore introduction that we'll cover next.

On-premises is displayed here as well, because we strongly believe this solution will be used for migrating existing VMware workloads from on-premises into Azure, for doing disaster recovery of on-premises workloads into Azure, and to take over for DR data centers, which many customers are looking to stop using; there's some really good news on that front as well. What we don't see is customers building new workloads on top of Azure VMware Solution, because obviously there are other ways to do that with Azure-native VMs, which can utilize all kinds of storage solutions and possibly give even more flexibility. But for DR and migration, this is an excellent solution.

So let's look at the solution in a little more detail. It's scalable VMware, as I indicated. We can really just say, "We currently have five nodes; I now want a sixth node added to our cluster." So we go to the portal, change the number of nodes from five to six, click apply, and it happens: our cluster gets expanded. It's almost like magic, and it has nothing to do with any physical hardware installs. It brings a lot of ease of use, and it brings great performance as well. It comes bundled with vSAN, and vSAN, as you can see in the table here, runs on either SSDs or NVMe SSDs that are included in the bare-metal machines that sit in the infrastructure. So performance is great.
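To make that scale-out step concrete for readers who prefer automation over the portal, here is a minimal sketch using Python and the azure-mgmt-avs management SDK. The resource names are hypothetical placeholders, and model and method names can vary across SDK versions, so treat this as a starting point rather than the presenters' own tooling:

```python
# Minimal sketch: the portal's five-to-six-node scale-out, done from code.
# Assumes the azure-mgmt-avs SDK; resource names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.avs import AVSClient
from azure.mgmt.avs.models import ClusterUpdate

avs = AVSClient(DefaultAzureCredential(), "<subscription-id>")

# Equivalent to editing the node count in the portal and clicking apply.
poller = avs.clusters.begin_update(
    resource_group_name="my-avs-rg",          # hypothetical
    private_cloud_name="my-private-cloud",    # hypothetical
    cluster_name="Cluster-1",
    cluster_update=ClusterUpdate(cluster_size=6),
)
cluster = poller.result()  # long-running; returns once the host is added
print(f"Cluster size is now {cluster.cluster_size}")
```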
But there is a concern that customers have expressed, and that's around capacity. Can you talk to that a little bit, Sean, and explain what customers are reporting?

>> Yeah. So one of the shortcomings today regarding Azure VMware Solution is the lack of ability to scale the storage capacity independently of compute. If you need more storage, the only way to really get that is to add additional hosts. And what happens quite often is that we size for storage, and because we're sizing for storage, we now have wasted compute and memory resources in the environment. No one likes to spend money on resources they aren't using.

>> Yeah, it basically throws off the balance of the different components that we have, and we have a nice way to help us decide what kind of situation we're in by looking at our TCO estimator a little later on. But first, let's look at the other side of what we're introducing today, which is Azure NetApp Files.

So, what is Azure NetApp Files? Well, similar to Azure VMware Solution, this is storage technology powered by NetApp that is integrated into Azure, and here too we can consume storage volumes that we provision from the portal. We don't have to deploy anything physically; we just go to the portal and check our Azure NetApp account (we do go through accounts, and there's more detail on that later on), and we get to deploy a volume with different types of protocols. It's all file-based, and we support NFSv3, NFSv4.1, and SMB 3, and even dual protocol. In some situations, customers like to fill a volume with data from one platform, say Linux, and then open it from the Windows side, or maybe vice versa in some data analytics configurations, and we allow for that to happen as well: dual-protocol NFS and SMB access to the same volume.

We provide on-premises-class performance with multiple tiers (more on that later as well), and the three tiers are Ultra, Premium, and Standard. We have enterprise data management features included with the solution. Those include snapshots for very fast primary data protection and recovery, clones for feeding test and dev environments, and replication for replicating data to other regions, because we're now talking regions, not necessarily data centers. Backup is also available, in order to protect data not just by snapshots but by actually copying data off to other media. And it's all presented with a consistent Azure experience from a billing and support perspective, all through the portal. It's a first-party, Microsoft-supported solution, not a marketplace offering.

Workloads that we see customers deploy? Well, basically anything. Enterprise file workloads and databases are popular; we see some Oracle and a lot of SAP HANA deployed on top of Azure NetApp Files, but also containerized applications, high-performance computing, VDI, and, as we're introducing today, Azure VMware Solution. As for verticals and industries, you can see for yourself: basically every vertical and industry is deployed on this solution.

So let's move ahead and talk about why Azure NetApp Files, and we've probably given some of this away already. It really is there to resolve a problem that customers face when they're looking at migrating workloads into the cloud.
Exactly because it is file-based, customers can now transition workloads, be those VMware or enterprise NAS workloads, to the cloud without having to refactor and rearchitect. That's an important benefit, because if we don't need to change anything in the architecture of our applications and workloads, it becomes a lot easier to actually bring workloads from on-premises into the cloud. It gives much more flexibility, and it reduces risk. Risk is costly, and risk mitigation takes a lot of time, so customers are extremely happy for us to help them reduce a lot of that risk.

If we then look at where Azure NetApp Files fits in the larger picture of all services in Azure, it really is a small speck on the next slide, where we can see the Azure NetApp Files icon dropping into the storage section. There are a lot of platform services offered in Azure: support for hybrid cloud, security and management, compute services, networking services, and now, in the storage square, we find Azure NetApp Files next to disks, Azure Files, and all the other services that exist.

Great. Let's now look at the AVS part, and at how we can bring the two together, how we can bring the strength of Azure NetApp Files to support Azure VMware Solution. Azure VMware Solution has been around for a while and has already supported guest-OS-mounted storage over NFS and SMB. But as you can see on the left side of this diagram, what's new is that we can now support datastores. The feature is in preview, so it's not available for production just yet, but we do now allow the ESXi hosts to have datastores connected directly to them, and all the integration work to make that happen from the portal has been done. It's actually fairly impressive to see how it all works now. So again, what's new is that Azure NetApp Files datastores can be connected to our vSphere environment, or our Azure VMware Solution to be complete, and we are now able to store VMDKs on an Azure NetApp Files volume as well, next to the other capability. As we'll see later on, there may be some very good reasons to do just that. Before we go there, Sean, can you introduce some benefits of Azure NetApp Files datastores?

>> Yeah, absolutely. Thank you, Art. One of the big things we're trying to do with Azure NetApp Files datastores is lower your overall cost of ownership. We can actually save money by scaling storage independently of compute. We can use Azure NetApp Files to support storage-intensive AVS workloads by adding additional storage without having to add additional AVS hosts. Things like imaging archives and GIS workloads that are data-intensive can really benefit from Azure NetApp Files datastores. In addition, we can also use Azure NetApp Files datastores to create an environment with minimal compute for DR scenarios. Imagine a scenario where you're using Azure VMware Solution as a disaster recovery site for your on-premises environment. You can have a minimal three-host AVS cluster deployed.
While that normally wouldn't have enough storage to contain the entire workload, we can now extend that cluster's storage using an Azure NetApp Files datastore to give you enough capacity, and then, when a disaster event occurs, we can scale up that AVS cluster to accommodate the remaining workload, or the entire workload. So we can scale up to petabytes on Azure NetApp Files datastores without adding additional hosts.

Next, Azure NetApp Files datastores are very easy to manage, for both performance and availability. This is a fully managed solution with four nines of availability and built-in scale and performance. We can essentially resize volumes on the fly, either increasing or decreasing those volume sizes to get more performance, all non-disruptively and in near real time as business needs change. We can also move volumes from one capacity pool to another, so we can non-disruptively move volumes between the various tiers. If we have a volume in Standard, we can quickly move that volume up to Ultra to get more performance, should you need it.

Lastly, we have very flexible data management capabilities. We can take virtual-machine-consistent snapshots. We can replicate using our cross-region replication and backup capabilities; all the data management functions are built in and part of the service. We can also clone datastores, making what are called instant clones, or clones based on snapshots, so we can clone those datastores and clone your VMs very quickly. And this is all done using the resources built into the platform itself, so we're not using host resources for a lot of these functions; they're offloaded to the service itself. So those are some of the benefits of Azure NetApp Files datastores.

>> Yeah, I really like the fact that we can do the replication and cloning without actually taking resources on the AVS platform, so that leaves more resources available to run the workloads. And the same goes for the ability to scale performance: like you said, we can migrate a volume from one capacity pool to another to change its performance characteristics. When we compare this to an on-premises environment, there we always scale for the maximum performance and then have to deal with what we've got, but in the cloud things are different, and you can actually change the characteristics on the fly. We can have a volume running in the Premium tier, and if it needs double the performance, we can just move it to the Ultra tier, leave it there for as long as it takes, and then potentially move it back to the Premium tier. That can help save quite a bit of money when it comes to the total cost of ownership.

>> You got it.
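As a concrete example of the data-management side just discussed, here is a minimal sketch of taking a point-in-time snapshot of an ANF volume programmatically, assuming the azure-mgmt-netapp SDK. Every resource name below is a hypothetical placeholder, and exact signatures may differ by SDK version:

```python
# Sketch: snapshotting an ANF volume, assuming the azure-mgmt-netapp SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Snapshot

anf = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Snapshots are near-instant and space-efficient; they live alongside the
# volume and can later seed clones or restores.
snap = anf.snapshots.begin_create(
    resource_group_name="my-anf-rg",       # hypothetical
    account_name="my-netapp-account",      # hypothetical
    pool_name="pool-ultra",                # hypothetical
    volume_name="anf-datastore01",
    snapshot_name="pre-maintenance",
    body=Snapshot(location="eastus"),
).result()
print(f"Snapshot created: {snap.name}")
```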
>> Right. Let's move on to the next section, which is about migration and disaster recovery. We'll introduce just a couple of solutions for migration and DR; there are many more, and there are quite a few third-party vendors that enable migration and disaster recovery. But first of all, let's discuss HCX, because HCX is obviously a VMware product and it is included with Azure VMware Solution. It helps accelerate cloud adoption. It can help with hybrid cloud extension, because it can help with load balancing between the on-premises environment that we see on the left and the Azure VMware Solution on the right of this picture. And it can also help with bulk migration.

So we can potentially just pick up a complete workload, even if that consumes 50 VMs, and migrate all of them into Azure VMware Solution, and potentially migrate them back again as well. This brings a lot of flexibility to customers, because it basically allows them to use the cloud as an extension of their on-premises environment. They can burst workloads to the cloud for as long as it takes to utilize the extra resources that are offered, and then bring them back again, or even stop them if they're no longer needed. The usage scenarios are many: migrations and transformation, and also hardware and software refreshes and upgrades. We can potentially free up our on-premises installation by migrating workloads away, do some hardware upgrades, and migrate the workloads back again. We can consolidate smaller data centers to the cloud, or disaster recovery data centers to the cloud, as we indicated before. It allows us to go fully hybrid, where some of the workload runs on-premises and another part runs in the cloud. So rapid migration, burst capacity: all the flexibility advantages that we see in the cloud.

The other solution that we want to introduce is disaster recovery. With disaster recovery, things are slightly different than with migration. A migration is a prepared move of a workload from on-premises to the cloud, whereas with disaster recovery we're preparing for a situation where we all of a sudden need to resume a workload in the cloud, and that requires a different approach from the perspective of replication and rehydration of the replicated data in the cloud. JetStream have built a really great solution for both of these purposes. What they do is use the I/O filters that you can see underneath the VMs; through their DRVA, their DR virtual appliance, they can replicate all the writes that these VMs make to local storage over to Azure Blob storage. The reason they select Blob storage is that during normal operation we're not going to have much workload consumption in Azure, and that helps reduce the TCO of the solution. So we can continually update the Blob storage using continuous replication (this is also called continuous data protection). Then we get a choice as to whether a workload gets rehydrated when it's time to do the disaster recovery, or, for more mission-critical workloads, we can rehydrate the workloads continuously. This is where the addition of Azure NetApp Files may come in handy, because continuous replication and continuous rehydration now let us keep a mirror of the VMDKs of the workloads running on-premises in the cloud, on an Azure NetApp Files volume sitting in the Standard tier. The advantage of that is obviously that it's not as expensive as the tier required to actually run those workloads, but since the replication I/O patterns are typically not that high, in many cases a Standard volume will suffice. The advantage is that the VMDKs are there, ready to be picked up by the VMs, and that brings a lot of benefit when it comes to the time to restart or resume the workloads in Azure. So it solves the problems of our RPO and RTO all at the same time.
For some other workloads that are not critical, we may choose a different approach and just rehydrate them when the time comes. This also brings in the concept of a pilot light cluster, where we size the actual Azure VMware Solution cluster for just running the continuous replication. Then, when it's time to do the disaster recovery, we can start the critical workloads within the pilot light cluster, as it will have at least three nodes, and we can upgrade the Standard volume to a Premium or Ultra volume if performance dictates that. Then we can start rehydrating the workloads from Blob storage through the DRVA in Azure and wait for them to become ready. So it's a very good hybrid approach, with different levels of RTO and RPO and associated cost levels.

Right. So let's move on and try to figure out in what circumstances we should consider Azure NetApp Files datastores. As Sean said earlier in the intro, as soon as the storage utilization or the storage requirement is heavy relative to the CPU and memory requirements, we're going to see a mismatch, because the nodes in Azure VMware Solution come with a fixed amount of storage. So for those environments that are storage-heavy, that need much more storage than is provided by the nodes from a CPU and memory perspective, we have built a calculator. For the purpose of helping us decide whether or not to add ANF datastores, we have come up with this TCO estimator, and we'll actually go to the live site. You might want to take a note of this: it's aka.ms/avsfcalc. So just go there now.

What we'll see is a set of predefined numbers that basically describe our environment. We have a total number of VMs, a setting for the number of vCPUs per physical core, the average vCPUs per VM, average memory per VM, and average utilized storage per VM; we'll come back to that in just a second. There are also some inputs for vSAN specifics, such as the slack space, which is recommended to be at least 25%, and the vSAN on-disk format efficiency, at 90%. And we can select the storage policy: for larger deployments, customers typically choose FTT-2, which takes a 33% hit on the total usable storage capacity from vSAN. And there's a setting for dedupe and compression.

For this particular environment, I really like to use the sizing decisions table here, because it tells us, for this scenario, how many hosts we would need to run this configuration successfully, driven by CPU, memory, and storage. As we can see, for CPU we'd only need four hosts; for memory we'd need eight; but for storage we'd need 18. That's largely caused by the pretty heavy storage utilization that we see per VM here, at 500 GB. So in order to deploy 18 hosts to get sufficient capacity, we're looking at a TCO per year of almost $1.7 million. That seems pretty expensive, so let's see how we can save some of that cost. I'll close the sizing decisions to free up some screen space. Down here we'll see what happens if we were to combine the supplied space from vSAN with ANF Ultra tier volume capacity. We now need only eight hosts, because that's what was required for the memory; we no longer need more hosts for storage, because we're offloading 92 terabytes to Azure NetApp Files.
That storage will sit on an Azure NetApp Files datastore connected to our AVS environment. The TCO for this is now $1.2 million, so we're looking at a savings of 28%. And there's even more to save if we don't need Ultra tier performance for these datastores: we can scale them down to Premium, or down to Standard, and see the actual TCO savings; 46% is a pretty impressive number. We can also add additional combination settings if we have a good picture of the individual workloads, where some may require Ultra tier performance and others may be perfectly happy on Standard tier performance. Then we click this box and an extra line appears. But what I'd like to spend some time with you on, Sean, is the bottom one here: AVS with ANF for disaster recovery, as per the introduction with JetStream and HCX. Can you talk about why we only have three nodes here? That looks like a pilot light cluster configuration, right?

>> Exactly, yes. What we're able to do with Azure NetApp Files datastores, like we mentioned before, is scale the storage capacity and the compute independently. This allows us to have these pilot light clusters of only three nodes, which is the minimum for Azure VMware Solution. The minimum of three isn't going to provide enough storage to accommodate the entire disaster recovery site, or the site that we're trying to protect. So now we can add Azure NetApp Files datastores to increase the overall storage capacity of that cluster to accommodate the entire workload. And in a disaster recovery event, this allows you to run your most critical services and applications immediately: those three nodes can run critical applications such as Active Directory or DNS, things like that, so you can get up and running as soon as possible. Then we can add nodes to that pilot light cluster as needed to accommodate the remaining applications and the rest of the workload. But Azure NetApp Files datastores are what allow us to have this pilot light cluster and keep those minimum compute resources deployed until there's a disaster event.

>> This is pretty impressive. Look at the TCO savings: we're at 66% versus a full-size cluster with 18 nodes, which would be required to do a complete disaster recovery for this configuration. So there are a lot of savings to be had, and the numbers are going to differ for each of your personal circumstances. Plug your own numbers into this estimator: check the number of VMs, the number of vCPUs required, and the memory per VM. In the future, you'll be able to use existing data from your on-premises deployment, supplied by RVTools, and import those numbers into this estimator, and it will tell you how much there potentially is to save. Like we said before, if the deployment is fairly storage-heavy, there are potentially very impressive savings to be made. You can also introduce guest-connected storage, or bring over the file services that may be installed somewhere, and the savings can be even higher. If I were to introduce, say, 200 terabytes of file services, they can potentially sit on a Standard tier; look at the savings here: 80%. That's really impressive. Anything to add to this, Sean? I think this is a really clear picture and a very useful tool.

>> No, I think you've covered it.
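To make the estimator's sizing logic concrete, here is an illustrative sketch of the per-dimension host math described above. The host specs, VM counts, and efficiency factors are made-up assumptions for demonstration, not the webinar's exact inputs or published AVS SKU figures; the point is only that host count is driven by the maximum of the CPU, memory, and storage requirements, and that an ANF datastore absorbs whatever storage overflows the smaller cluster:

```python
# Illustrative sizing math behind a vSAN-only vs. vSAN+ANF comparison.
import math

vms = 500
vcpus_per_vm, vcpus_per_core = 4, 4      # avg vCPUs per VM, oversubscription
mem_per_vm_gb, storage_per_vm_gb = 16, 500

host_cores, host_mem_gb, host_raw_tb = 36, 576, 15.4  # assumed host SKU

# vSAN usable capacity per host after slack space, on-disk format
# overhead, and the FTT-2 protection hit called out in the estimator.
vsan_slack, ondisk_eff, ftt_hit = 0.25, 0.90, 0.33
usable_tb_per_host = host_raw_tb * (1 - vsan_slack) * ondisk_eff * (1 - ftt_hit)

hosts_for_cpu = math.ceil(vms * vcpus_per_vm / (host_cores * vcpus_per_core))
hosts_for_mem = math.ceil(vms * mem_per_vm_gb / host_mem_gb)
total_storage_tb = vms * storage_per_vm_gb / 1024
hosts_for_storage = math.ceil(total_storage_tb / usable_tb_per_host)

hosts_vsan_only = max(hosts_for_cpu, hosts_for_mem, hosts_for_storage)
hosts_with_anf = max(hosts_for_cpu, hosts_for_mem)  # storage no longer drives hosts
anf_offload_tb = max(0.0, total_storage_tb - hosts_with_anf * usable_tb_per_host)

print(f"hosts (vSAN only): {hosts_vsan_only}, hosts (with ANF): {hosts_with_anf}")
print(f"storage offloaded to ANF: {anf_offload_tb:.1f} TB")
```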
Let's head into the demo. But before we see the product itself, let us show you how Azure NetApp Files is constructed. So, what do we see here, Sean?

>> Yeah, thanks, Art. There are several components that make up the Azure NetApp Files solution. At the top of the hierarchy is what's called a NetApp account. A NetApp account is tied to a particular region, and you can have up to 10 NetApp accounts within your Azure subscription. Under the NetApp account we have what are called capacity pools. Capacity pools are where the billing starts. Capacity pools come in one of three tiers: Standard, Premium, or Ultra, and the tier that you select, together with the size of the volume, are the two levers that determine the performance, or the throughput, of your volumes. You can have up to 25 capacity pools within a NetApp account. And then, finally, underneath the capacity pool, we have the actual volumes. These volumes are what actually serve data. They can be anywhere from 100 gigabytes up to 100 terabytes, and these are what will translate into your Azure VMware Solution datastores. These volumes can be increased or decreased in size, and they can also be moved from one capacity pool to another, all non-disruptively, to scale the performance and size of those volumes as your business needs change, in near real time. And you can have up to 12.5 petabytes of enterprise NAS storage in a single Azure NetApp account.

>> Awesome.

>> So now that you have an understanding of how things look, we can dive into the portal and actually see what it looks like to provision an Azure NetApp Files datastore. In this environment, we've done a couple of things already. Our Azure VMware Solution private cloud is already provisioned and created, and here you can see that we already have the NetApp account created as well. So the first step is to create the capacity pool. We choose our service level as Ultra, give it the minimum size of 4 terabytes for this exercise, and leave the quality-of-service type as auto. In just a few seconds, we have our capacity pool. We select that capacity pool and click on volumes, and this is where we create the backing volume for our Azure VMware Solution datastore. We give our volume a name; in this case, we select anf-datastore01. We give the volume a size, and again, the size directly impacts the performance of that volume; in this case, we set it to the full size of the capacity pool, which is 4 terabytes. Then we make sure our network settings are correct. On the next screen, we select the NFS protocol and leave the default of NFSv3, and then make sure we check that box: Azure VMware Solution datastore. We hit next, apply any tags that you may need in your environment, and finally, review and create. We review those settings, make sure our validation has passed, and then click create. The first Azure NetApp Files volume that you create can take a few minutes, as it's building a lot of the backend networking infrastructure needed to create those private connections within your virtual network.
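For reference, here is a minimal sketch of that first demo phase (capacity pool plus backing volume) done with the azure-mgmt-netapp SDK instead of the portal. The resource names and subnet ID are hypothetical, the NetApp account is assumed to already exist (as in the demo), and field names may differ slightly between SDK versions:

```python
# Sketch: create an Ultra capacity pool and an NFSv3 volume flagged for
# AVS datastore use, assuming the azure-mgmt-netapp SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import CapacityPool, Volume

TIB = 1024**4
anf = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, account = "my-anf-rg", "my-netapp-account"  # hypothetical

# Step 1: an Ultra capacity pool at the 4 TiB minimum, QoS type "Auto".
pool = anf.pools.begin_create_or_update(
    rg, account, "pool-ultra",
    CapacityPool(location="eastus", service_level="Ultra",
                 size=4 * TIB, qos_type="Auto"),
).result()

# Step 2: a volume filling the pool; size drives throughput alongside tier.
volume = anf.volumes.begin_create_or_update(
    rg, account, "pool-ultra", "anf-datastore01",
    Volume(
        location="eastus",
        creation_token="anf-datastore01",   # export path for the NFS mount
        usage_threshold=4 * TIB,            # volume quota in bytes
        subnet_id="<delegated-subnet-resource-id>",
        protocol_types=["NFSv3"],
        avs_data_store="Enabled",           # the "AVS datastore" checkbox
    ),
).result()
print(f"Volume ready: {volume.name}")
```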
So once that volume is created, we can head back to our Azure VMware Solution private cloud, click on storage, and click on "Connect Azure NetApp Files volume" at the top. This is going to allow us to connect the volume we created as a datastore. We first select our account, our capacity pool, and finally the volume we just created. Then we select which cluster we want to add it to; in this case, we pick the default, which is Cluster-1. And then we give the datastore a name: this is the name that will appear within the vSphere client. I like to keep those names the same to avoid any confusion later on. This process to attach the Azure NetApp Files volume as a datastore only takes a few minutes; you can see there, it's already done. We click "go to resource", and now we can see that we have a brand-new datastore connected to our AVS private cluster.

We'll come in here and log into the vSphere client just to verify, and you can see that we now see that datastore. We'll go ahead and migrate a running virtual machine to it. You select "change storage only" from the migrate submenu. You're also going to need to change the VM storage policy to "Datastore Default"; that will get rid of any compatibility warnings. You see now our compatibility checks have all succeeded. We hit next and, finally, finish, and that will migrate, or Storage vMotion, that virtual machine over to our new Azure NetApp Files datastore. As you can see, it's already starting to build out the folder structure on the Azure NetApp Files datastore, and you'll see here in just a few seconds that our migration is now completed. And that's it; that's all there is to it. We just provisioned an Azure NetApp Files datastore and migrated our first virtual machine.

Just to reiterate: it can be scaled up and down independently and non-disruptively. If you need more space or more throughput, you can increase the size of that volume or even move it to a higher tier. And the same goes in the other direction: if performance requirements aren't as high, we can scale that volume down as well. We do have best practices outlined in the Azure documentation; if you head to docs.microsoft.com, you can see recommendations on the number of datastores and the sizes of those datastores as well.

>> Brilliant. This looks surprisingly like an on-premises environment, so it's great to see that it's really just the same in Azure now. Thanks for that demo; that's awesome.

>> Yeah, of course. You're welcome.
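And here is a sketch of that second phase, attaching the volume as an AVS datastore, along with the non-disruptive resize and tier-move operations just mentioned. This assumes the azure-mgmt-avs and azure-mgmt-netapp SDKs; the resource IDs and names are hypothetical, and model names may vary by SDK version:

```python
# Sketch: attach an ANF volume as an AVS datastore, then resize it and
# move it between tiers, assuming the Azure management SDKs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.avs import AVSClient
from azure.mgmt.avs.models import Datastore, NetAppVolume
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import VolumePatch, PoolChangeRequest

TIB = 1024**4
cred = DefaultAzureCredential()
sub = "<subscription-id>"
avs = AVSClient(cred, sub)
anf = NetAppManagementClient(cred, sub)

# Attach: point the cluster at the ANF volume's ARM resource ID. Keeping
# the datastore name identical to the volume name avoids confusion later.
volume_id = (f"/subscriptions/{sub}/resourceGroups/my-anf-rg"
             "/providers/Microsoft.NetApp/netAppAccounts/my-netapp-account"
             "/capacityPools/pool-ultra/volumes/anf-datastore01")
avs.datastores.begin_create_or_update(
    "my-avs-rg", "my-private-cloud", "Cluster-1", "anf-datastore01",
    Datastore(net_app_volume=NetAppVolume(id=volume_id)),
).result()

# Grow the datastore by raising the volume quota (shrinking works the same)...
anf.volumes.begin_update(
    "my-anf-rg", "my-netapp-account", "pool-ultra", "anf-datastore01",
    VolumePatch(usage_threshold=6 * TIB),
).result()

# ...or change its service level by moving it to a pool in a different tier.
anf.volumes.begin_pool_change(
    "my-anf-rg", "my-netapp-account", "pool-ultra", "anf-datastore01",
    PoolChangeRequest(new_pool_resource_id="<premium-pool-resource-id>"),
).result()
```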
>> Let's talk about what we have learned in this session. Next slide, please. We have seen that we can expand the storage capacity of AVS, so we're no longer limited to the capacity offered by vSAN when a deployment requires more storage capacity than is available from the CPU and memory perspective. Azure NetApp Files provides enterprise-class performance, availability, and data management, including efficient snapshot-based data protection and cloning; as an additional bonus, data can be protected and replicated more easily. We can save cost by scaling volume performance up and down when required: in normal operation mode, we can make a volume a Standard volume, and then, when it's time to do the disaster recovery, we can scale it up to Ultra performance to support the performance that our workloads require. We can instantly satisfy capacity requirements by dynamically changing the datastore size, effectively the volume size: if we increase the size of a volume, the datastore will grow automatically with it. Also, by adding external storage capacity, we can prevent the waste of AVS CPU and memory, because we no longer need to add more nodes just to add storage capacity. And lastly, and quite importantly, I/O and memory load is offloaded from AVS, which aids the total Azure VMware Solution performance, because a lot of the storage-related I/O activity is no longer driven by the vSphere nodes themselves.

And with that, we have come to the end of the session, and we'll open it up for Q&A. While we do so, you can get started now: here are some links for you to click to get back to the introduction of Azure VMware Solution, to learn more about Azure NetApp Files, and also to open an Azure free trial with the third link on this page. So, let's look at what questions we have.

>> All right, gentlemen, nice presentation and demo. I really appreciate that learning slide as well. Are you ready for some questions?

>> Yep.

>> Yeah, absolutely.

>> All right, super. The first one here is: Azure NetApp Files datastore support with Azure VMware Solution, is that only for on-premises NetApp customers?

>> No. Azure NetApp Files is actually a Microsoft product; this is an Azure first-party offering. You don't need to be an existing NetApp customer to use Azure or any product within Azure. So yeah, anyone can use it. You can log into the Azure portal today, search for NetApp, and you'll find Azure NetApp Files.

>> Okay, all right, great. Why is Azure enabling Azure NetApp Files storage in their Azure VMware Solution offering when they have vSAN today?

>> vSAN, as we explained in the slide deck and in part in the demo as well, comes with a fixed capacity per node, and there's no way to get more storage capacity for storage-heavy deployments. So customers were looking at adding more nodes just to add storage capacity and increase the vSAN space, but they now have an alternative: offload some of that storage, keep the number of nodes in line with the CPU and memory requirements, and still have external datastore capacity for running additional workloads. As we saw in the TCO discussion (and I recommend people actually try this for themselves at the link), if the storage requirement is much heavier than the CPU and memory requirements for the AVS deployment, then there is money to be saved, and it works out well because of the offload of storage-related I/O from vSAN. And there are additional features, obviously, with storage-based replication, called cross-region replication, to bring data to other locations as well.

>> Okay, yeah. And, Art, you mentioned the links there; I do want to bring the links back up here. This next question, Sean, you might have hit a little bit in your first answer, but the question here is: how can an Azure customer get this solution today?
>> Yeah. We actually have some great documentation out there, and I'll put a link in the chat as well. But you really just log into the Azure portal. If you already have Azure VMware Solution deployed, then you're kind of halfway there; at that point, you just go over to the Azure NetApp Files area of the Azure portal and start deploying Azure NetApp Files. It's pretty straightforward, and we have really good documentation, which I'll share a link to now.

>> Okay, all right, super. The next question is a two-parter from Naidu, so I'll ask one part at a time. The first part of the question is: how did the problem that used to constrain the expansion of storage without expanding the compute go away with your solution?

>> Well, that's basically answered by the previous question as well, to a degree. I won't call it a limitation, but the design is that each node in AVS provides a fixed amount of storage. There are three or four different node types (you may find only one or two in your subscription, because it's dependent on the region), and the capacity per node type is a fixed piece of information. We won't be able to say, "I want to attach more NVMe storage to this node," because it's a given piece of bare-metal configuration that sits in the data center. So, coming back to what I said earlier: if you want or need to add more storage capacity than what is offered by vSAN, you can add more nodes to make vSAN wider and create more capacity that way. But there's a TCO advantage in not adding more nodes and adding Azure NetApp Files datastores instead. Those Azure NetApp Files datastores will sit in the same data center, but they are not hosted on the actual AVS infrastructure; they sit externally, on the Azure NetApp Files solution. I hope that answers the question.

>> Okay, yeah. And the second part of the question is: when workloads can be moved to the cloud and brought back as needed, would we not, as a customer, incur egress costs for the data movement?

>> That's a fair point, and yes, that is the case: there will be data movement costs involved in such situations. Just moving workloads in and out because it can be done might not be the best approach, but we're talking about bursting workloads into the cloud, which may bring additional advantages to our on-premises deployment. We can either size our on-premises deployment for the maximum workload that we will ever have and add many more physical nodes to our data center, or we can size for the average workload, and whenever we need more capacity, more CPU, more memory, we can burst to the cloud and bring some of the workloads over for as long as it takes to actually run them. Think of monthly data processing, which happens in many environments. So we can now use the cloud as an expansion of what we have locally, and as long as we don't size it for the maximum and only use it when we need it, the cost can be quite limited. Makes sense?

>> Gotcha. Okay, yeah, super. We had a comment here from Scott F., who says, "What do you recommend for the headache I have trying to grasp all this great information?" [laughter] I don't know about that one, but there is a lot of great information coming people's way here. Next question.
Art, you talked about the choice between vSAN and the Azure NetApp Files datastores. Can customers move VMs between vSAN and the Azure NetApp Files datastores?

>> Absolutely, they can, and it was shown in the demo that Sean did. You can move VMs back and forth between vSAN and your ANF datastores, and also between ANF datastores. In many cases we'll have more than one ANF datastore, for all kinds of flexibility and also for workload consolidation; it may make sense to have six or eight or even more datastores. So yes, you can move workloads between datastores. And from the perspective of compatibility, they're really all the same, although the fault tolerance settings are different. There are no specific fault tolerance settings for Azure NetApp Files datastores, as the fault tolerance is built into the Azure NetApp Files platform. For vSAN datastores there obviously is a fault tolerance setting, because vSAN runs on top of the physical hardware and you need to take some protection measures against device failures on the physical nodes. In Azure NetApp Files, any device failures, network failures, even controller failures are fully transparent, and the solution will just continue to run even if such events occur.

>> Okay, yeah. You mentioned fault tolerance there. So can the NetApp Files datastores on AVS be used as a disaster recovery target, sort of as a replacement for on-premises DR data centers?

>> Absolutely, and this is where one of the other questions came up: how about cost? Interestingly, we can obviously go for the big-bang approach and disaster-recover everything that we have in our on-premises deployment into Azure this way, but we recommend at least doing some sort of classification into the most critical workloads and the less critical workloads, and then treating them accordingly. For very critical workloads that have very tight RTOs and RPOs, you can do replication-based disaster recovery using tools like JetStream, or from other vendors, to bring the data into Azure, and then actually scale up the deployment the moment the disaster recovery happens. During normal operation, when you're only doing the replication, the deployment can be very small: just the minimum-size cluster, which is only three nodes, and you can scale it up upon the disaster recovery. The same goes for the actual Azure NetApp Files volume: you can continuously rehydrate data into an Azure NetApp Files volume, so all the VMDKs are ready to be started, and that can save a lot of cost, because you can keep them on the Standard performance tier. Then, when it's time to do the disaster recovery, you can scale the volume up to Premium or Ultra, which is more expensive but also supplies more performance.

>> Right. Okay, yep, makes a lot of sense. So, it looks like we're running out of time, but, Art, any closing thoughts or recommendations for people to get started?

>> Well, we would recommend people actually try these links; copy them if they haven't already done so. And if they have further questions, feel free to schedule the VMware strategy session.
That link will actually take you to a contact form, and feel free to contact one of our cloud solution architects that way. And for questions, our email addresses are there, and we'll do our best to turn those around very quickly.

>> All right, excellent. Well, Art and Sean, thank you both for putting together a really informative presentation, for all your insights here in the Q&A, and for bringing us up to speed on NetApp. Really appreciate it.

>> You're very welcome. Thanks again for the opportunity.

>> Yeah, thank you.

>> And I will leave this slide up here, especially so you can grab that "schedule a VMware strategy session" link, and Sean and Art's emails there at the bottom, while we do our prize drawing. So, the winner of today's $300 Amazon gift card prize drawing is John Abalencia from California. Congratulations to John; we will be in touch to get you your card. And with that, on behalf of the ActualTech Media team, I want to thank NetApp again for making this event possible, and thanks, as always, for attending and for your great questions. That concludes today's event. Have a great rest of your day.
Are you challenged by vSAN storage capacity when deploying or migrating workloads to Azure VMware Solution? Learn how adding Azure NetApp Files datastores to AVS can help overcome those challenges and save costs without sacrificing performance.