So, my name is Keith Norman. For those that don't know me, I help our worldwide partner community with solutions and sales acceleration. This is an exciting announcement that we're here to make with a variety of people — and I've got no slides, so save your applause for that. For me, this whole thing started a couple of decades ago, personally getting into the VMware space at the birth of VMworld, and a lot of us looked at that as an opportunity to find something different that removed barriers and created new opportunities. We're going to talk in depth about that today. We've brought a couple of technical resources to get into the depths of both the technology and the customer implications, and I thought there was no better way to do that than to bring in a partner of ours. So we've got Mark from Presidio; I see some other partners in the room from AHEAD, and we have other partners worldwide — whether it be Accenture, Insight, CDW, the list goes on. These are the folks that really bring the diverse set of skills needed to pull this together: not just deep storage but, as you know, security, networking, and a bunch of operational complexity in getting on-prem workloads to cloud — and that's really the focus of the topic. So without a ton of extra ado, I'll turn it over to Jason, who will give you a bit of the context behind what's changed in our relationship with VMware to present this opportunity, and then you'll hear from the two technical presenters as we get into the depths of what you probably want to sink into. It's good to see all of you, and with that, here's Jason.

I manage the relationship between NetApp and VMware, and we've been really busy. Keith mentioned this partnership goes back 20 years, but over the last two years we've been especially busy together. The first thing I want to call your attention to is that this morning a joint announcement went out between the two companies — we were all asleep here in California
went out this morning at five, from the two CEOs. It talks a little bit about what's coming next, in a specific press release with AWS, but we've also been working together on Tanzu, VMware Cloud Foundation, vVols, and NVMe — several other joint technology initiatives we've been hard at work on for the last 18 months are in that announcement. Please go take a look; we're happy to take more questions on it. But today we really wanted to drill down into the hybrid multi-cloud supplemental datastore work, as Keith mentioned. Definitely take a look at the release if you get a chance — we'd love your feedback.

Now, what Nemo is going to talk to, and Mark is going to get into a lot of detail on, is this: the whole project was born from a very simple premise that came directly from our customers. This is all about customers that have decided to move their VMware workloads from on-premises into the cloud and stay on VMware. They decided not to stay on-prem, and they decided not to go cloud-native yet — they may still do so in the future — but they wanted to stay on that VMware control platform. Today, as you're aware, across all three hyperscalers, all of the VMware cloud services are built with vSAN and a hyperconverged infrastructure, so when you provision and buy the compute nodes you are building on a hyperconverged stack. Now, one thing VMware brought to our attention — and we were hearing it directly from customers too — was that they wanted more flexibility to scale storage independently of cloud compute. Today they have to buy it as a bundle, and depending on the hyperscaler there are certain configurations available. Take AWS as an example: there are two choices today. You pick your node type, and it comes with a certain amount of CPU, a certain amount of capacity, and a price. VMware has these two options available, and for a lot
of workloads that will work nicely, if your compute and storage are well balanced. But what VMware started seeing was that workloads where the storage requirement is heavier than the compute requirement posed a challenge to customers — quite frankly, on cost. They could see right away that they were buying more compute than they needed just to get the capacity they needed. This is where the whole idea of a supplemental datastore program was born. VMware came to us and said, hey, we'd like to find a way to supplement the cloud storage we have in VMC on AWS with external storage. Now, to do that, there were a couple of things they required. There was lots of storage in the cloud, but you couldn't just connect anything to the VMware cloud service. The first thing they wanted was a first-party managed native service at AWS. We actually explored this with some of our own customer-managed options, like Cloud Volumes ONTAP, and what we found is that to preserve the performance, availability, and security service levels of VMC, you needed to attach a first-party managed service — and FSx for ONTAP had just come out around that time last summer, so it was a perfect answer. The second requirement was something proven and mature; they didn't want to just attach anything to VMC. We've been working with VMware, as Keith mentioned, for over 20 years on-premises with ONTAP, and FSx for ONTAP is basically the same core storage platform enabled in the hyperscaler, so they already had a high level of comfort, knowledge, and understanding around ONTAP. And the third thing was that it had to actually fix the problem: if I have a storage-heavy workload, can this drive down the cost of the infrastructure environment and correct that storage-heavy imbalance? When you get into compression,
dedupe, thin provisioning — all the things we do in ONTAP — we found it addressed the challenge from a customer perspective. We started out in a private preview just to see how it went: we brought customers in — what do they think about it, does it work, is it the right solution for them? That went very well, so we moved into a public initial access and let customers come in and start doing POCs over the summer. That went very well too, and what's coming right now — what we're briefing you on today — is that we're going GA. We're bringing this out formally, and this is really just the beginning. It's a big milestone for us, but the roadmap will continue to evolve, and we're really excited about it. So I wanted to set a little context for you. Again, we're doing this at all three hyperscalers today — Microsoft and Google are doing a similar thing, but over at Microsoft and Google those are their services, so they're doing the integration themselves. At AWS we're working with VMware, because VMware drives the VMC service themselves. So I'm going to bring up Nemo; he's going to talk more about the architecture, get into the details with you, and show you a demo. And then Mark has been one of the great partners out on the front lines actually talking to customers about this, and he's going to share some examples they've seen.

As Jason mentioned, we have been working with VMware as well as with AWS in building this solution. This slide shows the detailed architecture, and I'm sure you have a lot of questions. When we say we've integrated, the first thing that comes to mind is: how does it work? It's all NFS, and everyone knows there can only be one single stream connection from an NFS datastore into an ESXi host — so how does it work? As you see on the screen, you have the VMware SDDC; as you know, when you deploy an SDDC it goes into
an AWS-specific VPC, so it stays there, and you have FSx for ONTAP running in another VPC, connected via a transit gateway. Now, why are we using a transit gateway? The simple reason is that FSx for ONTAP today uses a floating IP — a mechanism that lets it fail over between zones if there is any kind of outage. The floating IP doesn't reside within a CIDR of the VPC; it's a completely separate IP address associated with the file system, so that failover can happen during a zonal outage. You get a specific IP address — it can be any CIDR in your environment that doesn't overlap with your other existing CIDRs — and with the on-premises side it just works fine. Once you have that, you can connect. In the initial release, which is going to be announced tomorrow, only multi-AZ is supported with FSx for ONTAP for VMC. We are working towards single-AZ — very soon you'll hear that single-AZ is also supported — but for this conversation we're going to keep it at the multi-AZ level. Now, as you can see, it's only NFS v3; there is no iSCSI support at the moment. We start with NFS v3, which enables customers running ONTAP on-premises today to leverage SnapMirror capability. But it's not confined to NetApp customers: customers who use other storage solutions and want to move into AWS VMC can use FSx for ONTAP as the datastore for any storage-intensive workloads they have, even for performance, so they can size their VMC clusters just for the host requirements, not the storage requirements.

Can the storage itself be scaled after it's provisioned?

Yes. So the question was: can we scale the storage after it's provisioned? Yes, you can scale the storage up, and you can downsize the storage, however you want to do it. You can
increase the throughput, as well as the capacity and the IOPS, as you need, and then bring them down as well — it's purely pay-as-you-grow.

Is any of this automated? By that I mean the growth and the shrinking of the storage, but also knowing whether it's the storage that needs to be adjusted, if that's even needed.

So it's more of a sizing question — the question is whether this is automated, whether we can shrink and grow as needed. That's more of a customer-specific requirement, but it's all available via APIs, as well as a single click of a button. It depends on how you're monitoring the solution to see whether you really need that much storage; then you can just execute that command to shrink it back.

Does that command come with the solution?

It's part of the PaaS offering — AWS FSx for ONTAP is a PaaS offering, so it has all of those APIs, including REST, available.

At GA we support four datastores to start with, and as we move forward that number will increase. But keep in mind: compared with a single vSAN datastore, customers get the ability to run multiple datastores the way they have been doing on-premises. They can slice and dice their deployment, make sure their databases get specific datastores the way they do on-premises, and isolate traffic that way. Another question you might have in mind: is this going over the NSX plane? It is not. As you can see in the graphic, from the VPC it goes into the Tier-0 — by default there is a Tier-0 gateway and a Tier-1 gateway — and from Tier-0 it goes into the ESXi host directly. It's not going over the management gateway at all, so you don't have to set any firewall rules or
anything like that — it's all created and done. VMC has done more work here in automating this part, and I'll walk you through exactly how it works.

When you show that slide to a security guy, he's going to say, well, I wanted it to go over a firewall. How did you make the argument that he shouldn't care?

The way it works is that it's all happening within the AWS VPCs themselves. When the traffic is in flight, it's encrypted all the way on the FSx side; when it hits the transit gateway, that's where the SSL termination happens, and from there it's internal, because it's already on the Tier-0 by then.

From the AWS compartment into the customer's, how is key management handled, so that security is satisfied we're using keys they can control, from where data is stored on disk to where it reaches their customer?

With FSx for ONTAP you can manage the keys yourself, or customers can go with Amazon-managed keys as well.

What we have also done is build a TCO estimator that helps customers identify how much storage is required, how many nodes would be required, what it takes to run it on vSAN alone, and how much it takes to use FSxN in the solution — that way you can see how much benefit you get from a TCO standpoint.

What I'm going to do here, rather than going through the slides, is a live demo. What you see on my screen now is the VMC SDDC. As you can see, I've got a single host in there — obviously we need to pay for it, so we try to keep the footprint small — and it's an i3 host. Now, for those who haven't seen this, there is a storage option here; generally there is no storage option. This comes with version 1.18.3 and later, and tomorrow when VMware announces this they'll say 1.20.1
is the version — any SDDC running version 1.20 gets the storage option. And the storage option is cool: as you can see, it gives you the option to attach a datastore, which means you also have an API available to attach that datastore. Now, before we do that, let me show you how it works. We go back into the AWS console. Within AWS, as Keith and Jason mentioned, FSx for ONTAP is an AWS offering, so you can go into the console and search for FSx — I already have it open, but just to show you: out of the four FSx offerings, ONTAP is one of them. Since inception — it's been almost a year now — we've had a large number of customers running iSCSI as well as NFS workloads in AWS using this, and there have been many scenarios where I've personally been involved in helping customers move data from EBS onto iSCSI disks running on FSx for ONTAP, because of the efficiency capabilities we offer. So, as you can see, I've got the FSx page here. It's very simple: you go and create a file system. I'm not going to create one now — it takes 30 minutes. When I say creating a file system, that's creating the storage itself: it deploys ONTAP on two EC2 instances. I won't go into much technical detail there, because AWS generally doesn't want us to talk through exactly how it's built behind the scenes; however, it's two EC2 instances running with the ONTAP software installed, they go into two zones, and you have a floating IP address used as the endpoint to connect. Once you create the file system — I'll quickly show you, it's very simple — you go in here, and you have a quick create and a standard create. Here you would specify the throughput required
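As a rough illustration of what sits behind that create flow: the console's quick/standard create maps to the FSx `CreateFileSystem` API. This is a sketch under stated assumptions — the subnet, security-group, and CIDR values are placeholders, and the actual boto3 call is commented out since it needs AWS credentials and takes about 30 minutes to complete:

```python
# Sketch of creating a Multi-AZ FSx for ONTAP file system via the AWS API.
# All resource IDs below are hypothetical placeholders.
params = {
    "FileSystemType": "ONTAP",
    "StorageCapacity": 8192,                      # GiB; the demo uses an 8 TB file system
    "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # two subnets, one per AZ
    "SecurityGroupIds": ["sg-cccc3333"],
    "OntapConfiguration": {
        "DeploymentType": "MULTI_AZ_1",           # only multi-AZ is supported at GA
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 2048,               # MB/s; range is 128 up to 2048
        "DiskIopsConfiguration": {"Mode": "USER_PROVISIONED", "Iops": 40000},
        # Floating-IP range for the endpoints, outside any existing VPC CIDR:
        "EndpointIpAddressRange": "198.19.0.0/24",
    },
}

# import boto3
# fsx = boto3.client("fsx")
# response = fsx.create_file_system(**params)     # requires AWS credentials
```

The same parameters are exposed through Terraform and the REST API, as mentioned later in the session.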
and again, as I said, we have both multi-AZ and single-AZ, but for GA we are only supporting multi-AZ. You can specify the capacity here — it can go up to 192 TB, but from an on-premises mindset customers generally try to keep smaller datastores, so you can either create multiple file systems, one per datastore, or have a single file system and create multiple volumes in it. You can specify the IOPS — you can go up to 80,000, which is the maximum an EBS volume would give you. However, FSx for ONTAP has a Flash Cache-like capability, similar to what we have on-premises, because it uses an NVMe cache disk, so you can get more read IOPS with better latency — the reads aren't going back to the disk; they're fetched from the cache. So instead of 80,000, customers would generally see 120,000 to 160,000 depending on the block size. To give an example, I've been doing some testing with SQL and Oracle, and we saw even 220,000 IOPS at an 8K block size with FSx for ONTAP.

Yes, question: the APIs — if I want to run my shop as infrastructure-as-code, am I going to have all of these options in the APIs?

Yes. The question is whether we have Terraform support and other API support. Yes — you have Terraform support and REST API support, and the ONTAP CLI is also available.

Okay, thank you.

So once you specify that, you can specify the throughput — there's a bug on the AWS side; ideally the throughput capacity shows up there as a dropdown. It starts at 128 megabytes per second and goes up to 2 gigabytes per second, and our recommendation is to use anything from 512 up to 2 gigabytes for your datastore volumes. Once you create the file system, as you can see, it's very simple — it's just
it's the regular AWS concepts. Once you create the file system, you'll see it here — I'll show you the one already created. This is an 8 TB file system: 8 TB of capacity, 2 gigabytes of throughput, 40,000 IOPS, and it's in two availability zones, 2a and 2b, and part of one of the VPCs. I have an endpoint address that is outside the generic private IP address ranges, just to make sure there's no IP overlap — and it's up to the IT team to decide what they need there. Once this is done, you need to create a volume. For the purpose of this demo I created one, as you can see here — techfield; the DS stands for datastore 01. So I have the volume in place, and it has a junction path. Again, for those who know FSx for ONTAP: we have a tiering capability, which means any data that is inactive is automatically pushed into S3. However, for the datastore support we recommend enabling tiering only for snapshots, not the active data. The reason is that when you tier the virtual machine data, it goes into S3, so there might be delays in recovering it. This is a solution we're still building out, so we're testing it — the data is sitting within the AWS data center and S3 is in the same data center, but we're making sure tiering won't cause any issues for the virtual machines. Otherwise, if someone asks you — hey, you were at the Field Day, should tiering be enabled? — tiering can be enabled for snapshots only.

So this is ONTAP. How do I move it from this AWS-delivered service — the vSphere stuff — to some other vSphere deployment?

Let me rephrase the question: so if you want
to move from AWS VMC vSphere to Azure, or to on-premises — I'll answer in two ways. One: if a customer started with a VMC SDDC and FSx for ONTAP, and he's using ONTAP on-premises, he can always SnapMirror the data — reverse the SnapMirror relationship back on-premises. SnapMirror, as you know, is block-level replication, and it works fine. Two: if they want to go back on-premises but are using third-party storage, they can use HCX — VMware's Hybrid Cloud Extension — to migrate the VMs off, which gives you either a cold vMotion or a live vMotion. Live vMotion is serial, so you can only do one at a time; or you do bulk vMotion, which is actually bulk — you can move multiple VMs at the same time, but there will be downtime when you cut over from the cloud to on-premises. Or you can use third parties — Veeam or any other existing solution can be used to go back. The same applies to Azure AVS or Google Cloud VMware Engine, because you don't have SnapMirror support between FSx for ONTAP and any other cloud — FSx is proprietary to AWS. In those cases you use HCX, and that's what we're documenting too: HCX is the ideal solution for data movement between the clouds, although we will build other services — as Jason mentioned, this is just the beginning, and we'll have more solutions and services coming out.

Okay, so we have the volume here. If you want to mount it, all you need to do is go into the storage virtual machine, take the NFS endpoint — the one with the floating IP — go back into the VMware Cloud console, come in here, and say attach datastore. It will tell me how many clusters there are; I already have three of them attached.
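The endpoint lookup just described — grab the SVM's floating NFS IP, then paste it into the VMC attach-datastore dialog — can be sketched against the FSx API (`DescribeStorageVirtualMachines`). The IDs, junction path, and sample response values below are placeholders mimicking the shape of the real response; the live boto3 call is commented out:

```python
# Sketch: fetching the SVM's floating NFS endpoint for the VMC
# "Attach Datastore" dialog. Placeholder IDs throughout.
# import boto3
# fsx = boto3.client("fsx")
# sample_response = fsx.describe_storage_virtual_machines(
#     Filters=[{"Name": "file-system-id", "Values": ["fs-0123456789abcdef0"]}])

sample_response = {                               # stand-in for the API response
    "StorageVirtualMachines": [{
        "StorageVirtualMachineId": "svm-0123456789abcdef0",
        "Endpoints": {
            "Nfs": {
                "DNSName": "svm-0123.fs-0456.fsx.us-east-1.amazonaws.com",
                "IpAddresses": ["198.19.0.5"],    # the floating IP
            }
        },
    }]
}

svm = sample_response["StorageVirtualMachines"][0]
nfs_ip = svm["Endpoints"]["Nfs"]["IpAddresses"][0]
mount_target = f"{nfs_ip}:/tfds01"                # floating IP + volume junction path
```

The `mount_target` string (IP plus junction path) is exactly what the attach-datastore flow needs.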
I'll say no, I want a new one. Just make sure the connectivity is in place; it will validate — basically making sure it can communicate back to the VPC in AWS. Here there might be some challenges for customers who aren't AWS-ready or don't have much AWS experience: they need to make sure the network security groups on the VPC side are set to accept the traffic on both sides. Once this is done, as you can see, it's validated. That's our junction path — techfield — and you'll see FSxN is in place. Just give it a name — I'll say tfds01 — boom, done. It will go and attach the datastore. So let's look at the vCenter for the VMC SDDC; we'll try to get some logs, so let's go into Monitor... come on, the internet isn't as fast as I expect it to be... it generally takes up to a minute or two to get it mounted... there — you can see the one we mounted, techfield ds01, with the IP address. It took hardly a minute to mount a datastore, and it can go up to 100 TB in size, and you can actually shrink it — we can do that if you want. So we have it mounted. Now we go back into the volume and say resize — again, all of this is available via APIs; you can update the volume and give it a different size. Oops — when I created the volume I didn't specify storage efficiency, but this is the value: you need to enable efficiency for your datastores, because VM data, as you know, can mostly be compressed by up to 80 percent, so you should definitely enable it. I may have missed it when I created the volume, but this is captured in the best practices. Once you say update, it updates the volume; you can go back in and refresh once it completes. You can change it on the fly.
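That on-the-fly resize (and the storage-efficiency flag missed during creation) corresponds to the FSx `UpdateVolume` API. A minimal sketch, with a placeholder volume ID and the live call commented out:

```python
# Sketch of resizing an FSx for ONTAP volume and enabling storage efficiency
# in one update. The volume ID is a hypothetical placeholder.
params = {
    "VolumeId": "fsvol-0123456789abcdef0",
    "OntapConfiguration": {
        "SizeInMegabytes": 4 * 1024 * 1024,       # grow (or shrink) to 4 TiB
        "StorageEfficiencyEnabled": True,         # the best-practice flag for datastores
    },
}

# import boto3
# boto3.client("fsx").update_volume(**params)     # requires AWS credentials
```

Shrinking works the same way — just pass a smaller `SizeInMegabytes`.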
Now compare this with adding a host: with just a click of a button, or with an API, you can change the datastore size and use it for any application you have.

If you change the storage efficiency on the fly, will it go back through and take advantage of efficiencies on already-existing data?

So the question is: if we enable efficiency later, will it work? (Can they hear me and see me? Okay.) Yes — not via inline deduplication for what's already there, but anything existing will go through the background process, and for the new blocks coming in after that it will apply inline deduplication as well.

Okay, cool. What about typical data center hygiene once there's data created on these volumes? Since you're releasing this GA tomorrow, do you have backup companies on board — do you have a way to take care of all that?

To the backup companies it's just another datastore to back up from; presumably you access it through the same APIs and vSphere that you would for the vSAN data — I'm just making sure.

Yeah, so we are working on solutions. As you said, any of the existing vendors that work on the NFS side will work; however, there will be limitations. Keep in mind that in VMC you don't have full vCenter credentials — you don't have the vSphere local credentials or the administrator credentials. You have cloudadmin, which has restricted permissions, so it cannot mount a datastore — you can mount the datastore for FSx through the console, but the backup tools cannot do that directly; those permissions are cut off. Apart from that, it will work.

Okay. So, generally speaking, if your backup solution works in VMC today, this shouldn't be any different?

No — so if the
question is about NetApp SnapCenter, which works on-premises for NFS datastores: that doesn't work with VMC, for the simple reason that SnapCenter uses a locally deployed OVA plug-in, and we are moving to a remote plug-in by the end of the year.

So if I'm using a supported VMC backup, I'm saying it should work?

It will work, rather, yes. And we are working on validating those vendors as well, because this is just hitting the floor now — we're making sure the ones with fewer limitations get to customers faster.

That brings up another question, because the reason the backups won't work is that you have less access — you don't have access to the hardware; that's the whole reason for using a public cloud, so everything from the VM down you don't have access to. Are there any other things a storage administrator would be used to doing that now they don't have to do, or don't have access to do, because they don't have access to the infrastructure?

One thing I can highlight is VAAI — that's something every administrator wants to have. With VAAI and NFS you can quickly clone a VM because the work is offloaded to the back end rather than host-based replication; that's not available with FSx for ONTAP.

Okay, I was actually going to ask whether you have development underway there — because beyond things like VAAI, there are lots of other file services available in VMFS that are not available on an NFS datastore.

Yeah, we are working towards that. We have a good roadmap in place — many things are coming up that we really can't talk about here, but a lot of exciting stuff is coming that will enable the same functionality we have on-premises. I wouldn't say all of it will come apples to apples, but
eventually it'll all end up there, because our primary objective — and not just ours, VMware's too — was to make sure customers can adopt VMC for storage-intensive workloads. And as you can see, 99 percent of customers have storage-heavy workloads, because they start small and end up with huge amounts of data. That's the first use case we're trying to solve; then we're getting into the disaster recovery side, and then all the feature sets. I'm tempted to say more, because I'm an engineering guy and I have too many things on my list, but I really cannot.

There was a reason I asked.

One thing I can say: we have a similar offering in Azure — I know this session is the AWS side of it — called Azure NetApp Files with Azure VMware Solution, and there we support SnapCenter backup for virtual machines.

I know that's not the focus of today's conversation, but is that GA?

No, it's in public preview — Azure NetApp Files as a datastore for Azure VMware Solution is in public preview now. The rest — everything, no VAAI — remains the same; it's just that Azure was more lenient with us — come on in, do this — and we jumped on it and got it running. I have the demos for that, and I'll be publishing them.

About the requirement for the SDDC group — the traffic goes over Transit Connect, the vTGW, correct? How is that going to affect customers from a pricing perspective? Is there going to be any pricing break as this is offered? That's a lot of throughput.

Okay, fantastic — thanks for asking; I was transitioning to that anyway, so I'll talk about it. The question is very valid, because the traffic goes over a transit gateway — Transit Connect is nothing but an AWS transit gateway behind the scenes. Thanks to VMware, they have automated that, so all customers have to do is create a transit gateway attachment, attach it, and
that is it — problem solved. From a data transfer standpoint, yes, there is a cost, because the transit gateway charges the customer based on the number of attachments in place and the amount of data being transferred, so it purely depends on how much the customer is using — whether it's highly transactional or not. On a day-to-day basis, the change rate is generally one to two percent in an environment, but in the sizer that you'll see — and I'll talk about this in detail — we use 10 percent as the change rate just for writes, and for reads we consider 25 percent. So we're assuming up to 35 percent, and even with that, I would say it would only cost the customer about the price of adding two hosts to the SDDC, not beyond that. Again, I wouldn't call that final — it's an assumption we make. There might be customers with Oracle or SQL databases that are very transactional, continuously reading and writing data; in that case the numbers would go up. And the transit gateway model is something that will eventually change — there will be different connectivity mechanisms; the transit gateway will still remain, but to the specific question that was asked: yes, we have plans there.

And I think you said earlier that it goes through the T0 of NSX, but not the T1s. So are you recommending, or is there something in the works, to go to a multi-edge SDDC so that we're separating that out? Because traditionally you're going to get one Tier-0 in a traditional cluster — so do you recommend segmenting that off with the T0?

VMware is not much concerned from that aspect; however, we have plans to move to T1 at some point — there are some discussions going on, I would say.

Let me quickly show you the TCO estimator that we have built.
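The change-rate assumption just described can be sketched as back-of-envelope arithmetic. This is purely illustrative: the per-GB data-processing rate below is a placeholder, not current AWS Transit Gateway pricing, and the capacity figure is an arbitrary example:

```python
# Back-of-envelope transit gateway data-processing estimate using the sizer's
# assumptions: 10% of capacity written + 25% read per day (35% total churn).
capacity_gb = 100 * 1024          # example: a 100 TB datastore
daily_change = 0.10 + 0.25        # writes + reads as a fraction of capacity
tgw_rate_per_gb = 0.02            # PLACEHOLDER $/GB - check current AWS pricing

monthly_gb = capacity_gb * daily_change * 30
monthly_cost = monthly_gb * tgw_rate_per_gb
```

With these inputs, about 1.07 million GB crosses the gateway per month; attachment-hour charges would come on top of the data-processing figure.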
This is what VMware is going to point to tomorrow, because it's something I've built. How it works: it uses the VMC sizer that's available out there, the vmc.vmware.com sizer, and it also uses the AWS price list, so it's simple API calculations based on the input being given, to say: what are the numbers, and how much can you save? And I'll show you the magic now. Let's assume you have a customer with 400 virtual machines at 4 vCPUs each. Leave the storage policy; you cannot edit it anyway, because in VMC anything beyond three hosts always gets RAID 5 or RAID 6 erasure coding. For storage, let's say you've got a thousand, and this thousand is the aggregated capacity across all of the virtual machines, not just one VM. You might have 50 VMs that are really storage heavy, but that would be enough to cover the entire 400 and say this is the amount of capacity you need.

Sorry, what is the unit there?

Which one? This one? Gig. Good point, thank you; I'm going to highlight that. If you've seen the VMC sizer, this looks exactly the same, because all I'm doing is an API call, so I kept the UI similar. If the customer has RVTools output, you can just feed in that Excel file, but for the purpose of this discussion I'm keeping it at manual sizing. But that's a really good point; I can't believe I missed that.

As you can see, it's multi-AZ; single-AZ is the future, but you cannot select it yet. And you've got profiles. The assumption here is that if the customer has, say, 100 terabytes of external capacity outside of vSAN, 20% of that is for databases, and that's something the customer can change. If he changes it here and says I want 50%, it will automatically change; or let's say 30%. If you look at the profile, it shows the FSx for ONTAP sizing: how much data sits on the SSD capacity, how much of that is the performance tier, and how much is the capacity tier, which is what gets tiered to S3. For the database case I'm saying you don't tier anything; everything should sit right on the disk, so 100% sits there. Then, what storage efficiency savings do you get: is it 35%, is it 10%? You can change that. How many IOPS you need, what throughput, and that's it. Then you can specify your transit gateway attachments; by default it's two. For the data processing, let's say I've got 400; this 400 terabytes in essence means 800 terabytes, because I've got two attachments, so you can think of the amount of data as 800 terabytes. Hit submit; it's a simple calculator, and right now I'm making an API call to the VMC sizer. Coming back, you can see you get a whopping 52% savings. The VMC sizer alone would say you need 54 hosts to meet that capacity requirement, and that doesn't fly with any customer because of the TCO. That's where FSx for ONTAP really comes in: with FSx for ONTAP added, you optimize your costs and get that 52% savings.

Now, what Mark is going to talk about: if a customer wants to take one of his existing data centers that is disaster recovery only and put everything onto an AWS SDDC, then with a pilot light cluster, no, it's not even a pilot light cluster, it's without anything, because you can spin up an SDDC within two hours and have all your NFS datastores already copied using SnapMirror or any other mechanism. Now you get an 81% savings.

And what tooling would you use for the DR use case in this scenario? Would it be VMware Site Recovery, or VCDR?

Good question. VSR and VCDR are not supported with FSx for ONTAP.
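The comparison the estimator performs can be approximated as: hosts needed when vSAN must hold all the capacity, versus hosts needed for compute alone plus an FSx for ONTAP file system. The per-host and per-GB prices here are made-up placeholders, not VMC or AWS list prices:

```python
# Rough version of the TCO comparison shown in the demo: an all-vSAN SDDC
# sized for capacity vs. a smaller SDDC sized for compute plus FSx for ONTAP.
# Both prices below are placeholder assumptions, not published rates.

HOST_MONTHLY = 5000.0    # assumed cost per SDDC host per month (USD)
FSX_GB_MONTHLY = 0.15    # assumed blended FSx for ONTAP USD per GB-month

def savings_pct(vsan_hosts: int, compute_hosts: int, fsx_gb: float) -> float:
    """Percent saved by moving capacity off vSAN hosts onto FSx for ONTAP."""
    baseline = vsan_hosts * HOST_MONTHLY
    with_fsx = compute_hosts * HOST_MONTHLY + fsx_gb * FSX_GB_MONTHLY
    return round(100 * (baseline - with_fsx) / baseline, 1)
```

With these placeholder prices, 54 capacity-driven hosts versus 20 compute-driven hosts plus 200 TB on FSx works out to roughly half the cost, the same order of magnitude as the 52% figure quoted in the demo, though the real estimator pulls live sizer and pricing data.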
So would it be more akin to deregistering VMs and then re-registering them to a different vCenter?

Yes. We are working on a solution; can we mention that? Okay, we already did. We are working on an orchestration solution that will help customers, and you guys, perform disaster recovery using SnapMirror from on premises. Similarly, you can use HCX for disaster recovery, or any other existing tool that uses the VADP snapshot mechanism. It's just that VSR is not supported; we are working with the team to get it supported, and VCDR as well, since they're alike solutions.

Since we're talking about availability, I also wanted to ask: since you have a multi-AZ architecture, are you also able to support VMC stretched clusters?

At the moment, no, but it's on the roadmap; we're actually talking through that. We do not support stretched clusters at the moment. It's not that VMware doesn't support it, and anything I say here is actually on behalf of VMware, and AWS for that matter.

Going back real quickly to something Keith was mentioning about moving data: this is still the same FSx for NetApp ONTAP solution, right? So it's still that same connection they can make from multiple different endpoints; this is just adding a new endpoint to increase their capability?

Correct, and thanks for bringing up that question. It's not just a datastore. If you look at on premises, you have customers who run their data on guest-attached storage, and that's a big use case. Whoever has been using third-party backup solutions like Veeam and so on, and didn't want to go with SRM, would use guest-connected storage, and FSx is a perfect fit because it can serve any protocol, not just NFS: it can be iSCSI, it can be SMB, even multiprotocol for that matter, which lets guest virtual machines leverage that capability. That means if you have a transit gateway and you have highly performing, highly transactional virtual machines, what I've seen from my experience is that the easiest way is to switch them to guest-connected storage, in which case the traffic goes over the connected ENI rather than the transit gateway. That way you can slice and dice it. This is also covered in the documentation: there will be a document published, most probably by the end of this week, on the VMware Tech Zone, co-authored by me and one of the VMware folks, that covers all of these aspects.

My name is Mark Vaughn, if you haven't met me before; I'm with Presidio. We were very excited as we started looking at this solution. I've been working with VMware Cloud since, I guess, VMworld 2016, when it was announced, and about a month later I joined an advisory board with one or two other partners, where we spent the next year giving feedback on what we thought customers would like to see, through beta 1, beta 2, and then early access. So we've been working with this for quite a while, and we've always had challenges around storage. A lot of what we do today is still on the i3 nodes it launched with. We used the R5 nodes that came out a while back very sparingly, to do EBS-backed storage. We use i3en nodes more now; to be honest, I'm surprised we don't use i3ens more than we do. The cases where we've used them have been around database workloads, and it was for the CPU and performance gain we got out of the i3ens; for storage, it often still comes out less expensive to use multiple i3s, and that's where our challenge has been. So the TCO that Nemo was just talking about was very interesting to us, because we've really wanted to find a way to improve storage without hurting performance, which is why the R5 nodes didn't get used very much; the performance wasn't there. And Nemo also touched on this use case.

When you look at FSxN in the cloud, what we're looking at is a customer who came to us wanting to do disaster recovery. They're a NetApp customer today, so they have NetApp storage on-prem. We came to them jointly with NetApp to give them a preview of this technology, because it checked many of the boxes for disaster recovery. They're going to run a SnapMirror directly into this volume and, as Nemo just showed, attach that volume into VMware Cloud. Their goal, and what we're going to help them automate, is exactly what you brought up a minute ago: should everything go away, we're going to script running through the inventory, finding all the VMs, importing them into vCenter, and powering them on. Yes, we're very anxious to see an orchestration tool that will do this for us, but at the same time these aren't difficult things to script; as a matter of fact, there are things we do with some of our other customers, even around VSR and VCDR, where we still do a little bit of scripting around the environments, because customers don't want to run a pilot light. Is everyone familiar with the term pilot light? So with this customer, it's that same example, but here's what the end goal would be: right now we're building the environment with them and getting everything running, and the second step is to script it all, so that we can actually delete the SDDC and then have it all rebuild on demand. They would have everything mirrored on-prem; should the on-prem environment go away, scripting would create the SDDC, mount the storage, and loop through powering up all of those VMs. We're in the process of actually deploying this right now. I really like what we've been able to do with some of the DR solutions in the cloud; I still think it's one of the best examples of a good place to use VMC.
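The scripted rebuild just described (create the SDDC, mount the SnapMirrored NFS datastore, then register and power on every VM found in it) can be skeletoned as below. Every helper is a stub standing in for the real VMC and vCenter API calls (PowerCLI, pyVmomi, CloudFormation, and so on), so treat this as the shape of the flow, not working tooling:

```python
# Skeleton of the scripted DR failover flow: spin up the SDDC, mount the
# replicated NFS volume as a datastore, then walk the inventory registering
# and powering on each VM. All helpers are stubs that only record the step.

from dataclasses import dataclass, field

@dataclass
class FailoverRun:
    """Records each step; real helpers would call VMC / vCenter APIs."""
    log: list = field(default_factory=list)

    def create_sddc(self, name: str):   self.log.append(f"create-sddc {name}")
    def mount_datastore(self, ds: str): self.log.append(f"mount {ds}")
    def register_vm(self, vmx: str):    self.log.append(f"register {vmx}")
    def power_on(self, vmx: str):       self.log.append(f"power-on {vmx}")

def failover(run: FailoverRun, sddc: str, datastore: str, vmx_paths: list) -> list:
    """Serial first-pass flow; a real script would poll task status between steps."""
    run.create_sddc(sddc)
    run.mount_datastore(datastore)
    for vmx in vmx_paths:   # the inventory walk: register, then power on
        run.register_vm(vmx)
        run.power_on(vmx)
    return run.log
```

Swapping the stubs for real API calls keeps the orchestration logic unchanged, which is the "combine pieces we've scripted before" approach described above.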
A lot of our customers see that as their first stepping stone if they're not ready for something else. I'm still shocked at how many customers are afraid of putting workloads into the cloud; this becomes the easy button for them. I also look at it this way: I know a lot of you were around in the early P2V days. I always told people that for P2V there was not a tool, there was a toolbox, because no one thing would fit every situation. We find the same with a lot of the disaster recovery and business continuity solutions we try to build out. We've had customers come to us initially wanting to use SRM or VSR, until they actually tried putting a couple hundred workloads on and said, wait a minute, is there a more economical way we can do this? VCDR was a good solution that came along, but it still didn't quite check all the boxes, especially for our NetApp customers. So I see this as a very valuable tool in the toolbox going forward as we look at business continuity solutions for our customers: the ability to attach this not only for growth, so I think our production workloads are going to benefit greatly, but also for disaster recovery and the ability to leverage SnapMirror. The fact that it's supported is big for us, and we're very anxious for it to be supported on some other platforms as well.

What are y'all using? You're using this tool to get it over there, but how are you going through the process right now for the scripting?

We've scripted these in different pieces for other projects, in different ways, so we're actually going to be combining a couple of different scripts. I think we're going to try to focus more on CloudFormation where we can, but there's also some PowerCLI and other things we're having to leverage.

And are you controlling it with any one thing, like Terraform?

Not at the moment; the scripting portion is still in development. Our phase one is: build all of this, make it work, and then figure out how we can shut it all off and then rebuild it quickly.

Okay, so I think for me there's the higher-level vision versus the very detailed, low-level work that needs to be done to make it work. What's the architecture picture, from a cloud architect's view, when he or she is doing this across multiple customers' environments? Because this can be very customer specific, or specific to a group of customers. How do I create a standard practice across different underlays?

I would like to make as much of this reusable as possible. In the back of my mind I also know there are some orchestration tools coming up behind us that may pass us, or may cross the finish line about the same time we do, in which case we'll probably retool over to those, just to be more supportable and more repeatable. I do know this is definitely going to leverage, if you're familiar with it, the Fling for copying VMC environments; we'll definitely be leveraging that before we power anything down, making a copy of everything with that script and then re-importing it.

As far as making this repeatable, are you talking more like multi-tenant?

No. I think you're getting to my point, which is portability. It's cloud today, and I'm using FSxN; tomorrow I could be using some other NetApp solution, or some other vendor's solution. I don't want to throw away the high-level architecture when the low-level bits change. When Alistair and I get together, I'll draw something and Alistair builds it, and I never change my drawing when he changes how he does it. It would be nice to make this modular, so that if the solution were to change or add functionality, we could simply adjust a module, or even replace a module with something that would attach in a different way. And as I also mentioned, there may be some different attachment methodologies coming in the future. So yes, anything we script, I would like to see modular.

I hear this stuff is hard.

It's hard. What we're doing is hard, and we're still very early, so there aren't any complaints; it's just hard. With any automation there's usually the very first script that is very long and very serial, where everything goes through one at a time, and then there's the cleanup and the process of trying to make it more modular.

Do you envision having capabilities like being able to test failovers, do dry runs of failovers, commit dry runs of failovers, and things like automated failback at some point? These types of more advanced DR features?

Yes. The failback we would need to work through; I know that's a simple answer to a complex question, but I liked it, so I'm just going to leave it at yes. Ask Nemo for anything else. Our goal would be yes: there's no reason why anything we script for an initial failover could not just be run and cleaned up over and over. So as far as testing goes, yes, you should be able to run this, and I would recommend anyone run it at least two or three times a year. My goal would also be that when it comes to the script, you could break workloads up into groups, so you don't have to fail over everything; you could tell it to fail all, fail group one, or fail group two. I know that when it comes to testing you often want it to be more granular, and people don't test everything every time. So yes, we would like the automation to be something we can break up into smaller groups. And for failback, it would leverage SnapMirror again to push the workloads back.

Are you concerned that you're getting into a field where all these software companies, the Veeams and Zertos of the world, are already established?
You can't replicate what they can offer, for example something like SureBackup, which verifies that my backup actually works. Where do you see this moving in the future? Do you try to compete directly with them?

No. Like any orchestration tool that may come out from NetApp as well, we would constantly evaluate: is it worth continuing to do custom scripting, or is there an orchestration tool? Right now scripting would not be my first choice, but there's a customer need now and there are tools coming in the future, so scripting, that automation, is the way we're going to deliver this today. If you need this, you have to do it this way today; a year or two from now, maybe it's integrated into Zerto and you let Zerto do it, but if you need it today there aren't many options. And the functions and calls we're going to be making for this aren't heavy lifts; it's just the first time we're taking two or three different pieces we've done in other migrations and putting them together in a way that works directly with FSx for ONTAP.

Okay, so you're taking what you're learning with customers and making it available to other customers?

Correct.

And do you offer this, or plan to offer this, as an ongoing managed service, or is it something you're delivering to customers as a one-time thing?

Initially it would be delivered as a one-time engagement for this particular customer. We would then evaluate whether it's something we want to do more of, and if that customer were to come back and say, great, we love this, but can you maintain all of it, we definitely have an offering around that we could put together.

This is my last slide. I just wanted to talk about what got us excited when we first saw this and what we're looking at doing. I'm not even touching on the many environments we're migrating today, where all the cost savings Nemo talked about are also something we're very interested in: going back and looking at whether we can reduce our node counts, which will simplify the environment itself, with less to manage.

So yeah, I don't want to take away from how impossible this was just a little while ago. I've done way more DR than I'd like to admit, and the automation, the testing, the failing over, and, as Andy mentioned, the failing back are so difficult when you put in dependencies and stretch it across customers. The base capability of getting the storage over to AWS and attaching it to VMC on AWS is the easy part; this is the hard part.

Well, part of the reason I ask those questions is that for this to actually be a product that's usable in a lot of places, you need all those capabilities; otherwise somebody says, well, your product doesn't do the things I really need it to, so I'm just going to go home-grow my own approach. These are all things you really have to pay attention to if you start thinking about disaster recovery, and if a product doesn't supply all the capabilities you need, it's like, well, let's just hire a couple of developers and do it ourselves. I've gone down that route, and I'd rather just have somebody else figure it out across multiple customers, because they've seen more than I have.

Absolutely, and I agree. The completeness of the solution is a big part of whether you'll actually get it done right. And we definitely are looking to replicate this with other customers; I would not say we're definitely looking to productize it, because again I think there are orchestration tools from the vendors involved here, from VMware and from NetApp, that are going to come along that we'll be able to pivot to. I would love to see a day where this would work with VMware Site Recovery and give you one more option, instead of having to build it yourself.

I'd love to see that too, David, with VMware Site Recovery working. [Laughter] I can say it's working; we actually do multi-tier DR with that and other tools, because again it kind of started pricing itself out as the environment grew.

I did a podcast just a couple of days ago about how so much emphasis is on developers and being able to write code, and people don't realize that with ops there's a lot of scripting, tooling, and coding you have to write too, so I think this is awesome.

We've done that with HCX; we've done some large migrations, and when it gets to be over a thousand VMs, luckily there are APIs for HCX, so we could script building the groups, reading the groups, creating the syncs, and moving things over. Fortunately, all of the vendors involved here have been very dedicated to making sure the APIs are available as well.
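The "fail all, fail group one, fail group two" idea mentioned earlier, like the HCX group scripting just described, boils down to tagging VMs with a recovery group and selecting by tag. The group names and VM list here are purely illustrative:

```python
# Sketch of grouped failover: pick the VMs for a given recovery group so a
# granular DR test can target one tier instead of everything. The inventory
# below is illustrative; a real script would build it from vCenter or RVTools.

def select_vms(inventory: dict, group: str = "all") -> list:
    """Return the VMs to fail over; 'all' flattens every group in order."""
    if group == "all":
        return [vm for vms in inventory.values() for vm in vms]
    return list(inventory.get(group, []))

inventory = {
    "group1": ["db01", "db02"],    # databases first
    "group2": ["app01", "app02"],  # then the app tier
    "group3": ["web01"],           # then the web tier
}
```

A quarterly DR test touching only the database tier would then call `select_vms(inventory, "group1")` and feed the result into the failover script, while a full exercise uses the default `"all"`.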
The new joint integration with VMware, NetApp and AWS allows organizations to scale cloud storage independent of cloud compute to optimize costs, deploy new modern applications, and maximize the value of their existing IT investments.