Welcome to Insight session 1233-2 on NetApp TV. I am so happy and thankful that you found your way here to listen to us talk about how to get proven data protection and compliance with Amazon FSx for NetApp ONTAP.

Now imagine, if you will, that you are a technology leader inside of your organization, and your organization has been trying to transform one of your crown-jewel applications, one that's critical to your survival and critical to your customer experience, and that transformation effort has not gone well at all. In fact, it has cost the company hundreds of millions of dollars. Naturally, in that situation there's some turnover at the top, unfortunately. So you get new leadership, and they come in and say, "Hey, we have all this great experience in transforming our applications. We have great experience in DevOps. We have great experience in refactoring, replatforming, and rearchitecting applications. And because of that experience and history, we'd like to take this application and put it into AWS." Makes sense, right? It gets them access to all the new and modern data services and other compute services that AWS offers, which are just extraordinary and awesome. Okay, but there's a problem. This application is coded in COBOL and runs on an AS/400 mainframe, which is itself emulated inside of a Windows environment on traditional IT infrastructure. Well, there are a couple of questions that you'd probably ask in your mind. First and foremost, how am I going to do this? How am I going to migrate this application into the cloud? Because they would like to put it in the cloud immediately so that they can start rearchitecting everything. Let's say you're successful in doing that. Next, you have to consider: how am I going to protect this data? What am I going to do with it? How am I going to govern it and get observability into what's going on in our data? Well, I'm glad you asked, because that's exactly what we hope to talk about in this session.

So, let's go ahead and get right into it. But first, before we begin, I need to bring up our confidentiality statement. If you just take a quick second to look at it, I'll remind you of a certain phrase that I know and love from Gandalf the Grey in The Lord of the Rings: "Keep it secret. Keep it safe." Yes, I'm a geek, unapologetically.

So, I'm going to take us through the agenda real quick. We are going to talk a little bit about something called cyber resilience. I think we all know this term pretty well, but it serves as a really good term to align on. We'll follow that up with the head of product at AWS for FSx for NetApp ONTAP, Andy Crudge. He's here with me doing this session, and I can't thank him enough for the work and effort he's poured into it. Andy, thank you. I can't wait to get to your part of the session; mine's not as fun, so stay tuned, he's coming. He's going to take us through FSx for NetApp ONTAP as a service in Amazon and our partnership, and then he'll dive a little deeper into the data protection and compliance capabilities, including SnapMirror. We'll follow that up with a few brief demos, and from there I'll close us out and you can get on to your next session on NetApp TV.

All right. So I introduced a term in the agenda slide called cyber resilience. I also want to take that term and apply it to the scenario that I gave you at the beginning, when we first started this session.
I want to keep that as our context, as our definition, as we work through our storyline here. Now, the cyber threats common in the world today create a need for greater collaboration and integration among traditional security and protection teams and the solutions they source. NetApp and AWS approach managing data in the AWS cloud with that in mind, through the lens of cyber resilience. Cyber resilience solutions absolutely must protect data from a variety of threats, whether that's internal threats, external threats, or natural occurrences and events like floods and hurricanes, and they exist even to support compliance with regulatory demands and requirements. So any solution absolutely must focus on the availability of the data, the accessibility of the data, the observability of the data, and the protection of that data from any possible and perceivable threat. Cyber resilience is a holistic approach to protecting and securing data wherever it's stored. So, let's take a closer look at what I'm talking about and how the partnership between NetApp and AWS accomplishes this in AWS. And we'll start with the sub-lens of data protection.

Data is the world's most valuable asset. According to TechCrunch, the value of data is expected to reach roughly $280 billion by 2025. That number is absolutely massive. I can't even believe that I'm saying it; I can't truly quantify it in my mind, honestly. $280 billion is insane. And I have four kids; I would love $280 billion. Because it's so big, it has been called the oil of our industry. And it makes sense. I come from the oil and gas industry myself; it's where I cut my tech teeth. Data is also one of the most vulnerable assets, because of that number. So protecting and securing your data is a critical function for any of your organizations. Now, not everybody's definitions of how to protect data are the same, so let's align a little further. Modern companies are in the data business regardless of what they make or sell. Period. Oil and gas companies use data to find new oil and gas reserves, and once a reserve or well is depleted, they use that same data to better manage the well, even to protect the environment. Online retailers depend on customer data day in and day out to promote their products and services and to build better customer experiences for their customer base. Even brick-and-mortar businesses, smaller businesses, use data all the time for critical sales functions and to analyze traffic data. Another industry I always like to think about when it comes to data is healthcare. Let's talk about saving lives; forget revenue generation. Healthcare relies on data to save lives when lives are on the line, and even to cure diseases that we otherwise couldn't cure in the past. I like talking about that topic. My daughter is a byproduct of the healthcare system securing and protecting its data and keeping it accessible. They had to make critical decisions around her leukemia, and she came out a survivor. Praise God.

Accessibility of data must be ubiquitous. What do I mean by ubiquitous data access? Well, really, it means it's got to be fast access, and that fast access must be available broadly to its user base. Any loss of access can prevent businesses from serving their customers. Now, I talked about loss of access; let's talk a little bit about loss of the data itself. Also not a good situation. In many cases it's even worse.
If the data is not there, you have nothing, right? It is the absence of data. Every piece of data is an opportunity, so if you lose the data, you lose the opportunity. Recovering any lost data might also be impossible without the people, the process, and the technology. One of my favorite phrases is right there in that sentence, hint, hint. And those are the things we absolutely have to frame our minds around: think about your industry and your organization and how data is important to you. It would be hard to find somebody who would tell me, "Man, we don't use data at all." We even use it personally on our own cell phones.

Now, according to the Storage Networking Industry Association, data protection is the process of safeguarding important data from corruption, compromise, or loss, and providing the capability to restore data to a functional state should something happen to render the data inaccessible or unusable. And I really like that definition. It's broad, but I think it really helps scope our thinking down to what data protection really is. To follow that up, I'd like to walk us through a bit of a timeline. Data protection includes a variety of use cases and strategies, and like I said, definitions are a little bit off; some organizations blur the lines a bit. That's okay. For instance, some organizations may use tape backup as their disaster recovery solution. Maybe that's all they have; I saw that when I was an IT consultant. That's okay. It might meet their SLOs and SLAs. But although tape can help recover from a disaster, from an industry standpoint, tape is categorized as a backup or archiving solution. To find the right solution for your organization's needs, you need to think about your RTO and your RPO, your recovery time objective and recovery point objective. Those will dictate the right approach here, right?

So, we're starting with backup and archive. Still critical today. But as you can see, it takes a little bit longer to recover from a backup and archive; on the other hand, it is a cheaper solution. Let's fast-forward closer to today. Many organizations realized that backup and archive just wasn't enough, and disaster recovery came on the scene. This process is really about faster recovery, improving recovery to minutes and hours instead of hours and days. What it's doing is taking a system, mirroring its data, allowing that data to transfer to a nearly identical or identical system, and then recovering from that separate system manually. It's a pretty manual effort; there's not a lot of automation there. That process involves moving a lot of data and, depending on how much, can take quite a while, and it does increase cost. So your RTOs and RPOs improve and your recovery time is reduced, but your cost starts to increase here. So let's talk a little bit about continuous availability. This is an innovation and improvement on top of disaster recovery, where new technology and other things like automation are put in place to provide a near-real-time recovery. The RTOs and RPOs drastically improve here, bringing your recovery time down to seconds or minutes. But attaining this is even more expensive. These solutions or implementations shorten a typical disaster recovery process and eliminate downtime, but they result in a bit more of an increased cost. So these are all considerations we have to think about in the scenario I posed to you at the beginning, right? That did come up as a thought.
Now, there are also data protection strategies around geographies. It's got to be multi-layered, okay? We can't just think about the strategies themselves; we need to think about those strategies within the context of geography. All right, so let's take a primary data center in a single city. Long ago, in a galaxy far away, most companies had singular data centers. And in these singular data centers they had their, you know, pizza-box servers, they had their storage systems, but a power, hardware, or network outage would occur. And the second that happens, they've lost access to their data, or they've lost data, and in either case they're sitting there trying to recover manually. Okay, this is more of the backup-and-archive scenario, an early-stage DR scenario. They've got HA, but because the whole data center went down from a mass power outage or something, they're kind of stuck, right? So what did they do to improve? They added a geography and went to more of a metro setup. What I mean by metro is a little less than 500 kilometers, meaning you're probably standing up another data center in a similar geography, like in the city or on the outskirts of the city. This is an improvement. However, I live in Houston, Texas, and if you know Houston, it likes to flood. Houston also likes hurricanes; we're like a magnet for hurricanes, right? So anytime something as widespread as flooding, storms, or earthquakes occurs, guess what? That data center might go down. Now you have two data centers down in the same geography. What do those organizations do from there? Well, they'd probably add another geography, right? Houston's near Austin, Texas, so in my theoretical scenario I might put a data center up in Austin. We have a massive flood in Houston, both data centers go down, and all of a sudden I'm able to fail over to Austin. But wait: while there's a flood in Houston, I now have a fire happening in Austin, and it has brought down the power grid for our data center. On top of that, the generators didn't work. We tested them; they worked then, they're not working now. Somebody forgot to fill them. Whoops. And that, my friends, is when the cloud comes into play. The cloud has provided an extensive regional aspect to data protection and disaster recovery.

Now, pervasive throughout these geographies are threats: user threats, internal and external, and malicious actors, internal and external. Right? And so now we're moving over into the discussion around cybersecurity, or data security, and how to better secure data. Ransomware is a big topic today and is very pervasive in many conversations; I see commercials about it all the time on TV. Ransomware threats come in a variety of forms, and one feature alone won't be able to prevent all attacks. To successfully prevent damage from ransomware attacks, you need to be able to protect data, detect threats in real time, and recover rapidly to minimize the high cost of downtime. Security and compliance solutions need to provide protection, detection, and recovery for the most aggressive threats by delivering secure storage that blocks attacks in the first place, while also autonomously detecting the threats that get past the defenses, and by delivering efficient and secure recovery that restores data in seconds to enable rapid application recovery. Furthermore, we've got to have observability, governance, and compliance overlaying our data. That is critical to organizations of all sizes as well.
Knowing the kind of data you have, who has permissions, who has access to the data, what they are doing in and with it, and where it's stored: that's all essential to protecting your data and complying with regulations. The NetApp portfolio delivers intelligent and secure solutions to address the challenges of compliance and governance, and more specifically, folks, for AWS, that's included. Now, to lead us further into the discussion around AWS and FSx for NetApp ONTAP, I'll hand it over to Andy Crudge, head of product for Amazon FSx for NetApp ONTAP. He is awesome, and I can't wait to have him on here. With that, Andy, I'm going to hand it over to you; he's going to dive deeper into the details and bring this home for us. So Andy, it's all yours, man. Thank you so much.

>> All right. Thanks, Ty. Appreciate the intro. For those of you who don't know me, my name is Andy Crudge. I lead the product management team for the FSx for ONTAP service on the Amazon side. And I'm excited today to give you an overview of the FSx for ONTAP service, why we launched it, what we're seeing in terms of some of the key themes and use cases customers are running on the service, and then talk about some of the data protection capabilities and patterns we're seeing customers adopt as part of leveraging the service in their workflows.

Before I jump into the service itself, I wanted to start with some context. I've been at AWS for a bit over seven years now, and over that time I've had the opportunity to engage with a number of customers who are looking to migrate their shared storage, their network-attached storage, into the cloud. There are a few key patterns we see leading customers to want to migrate their data over to the cloud. One is that having your data in the cloud allows you to simplify how you manage it. With cloud storage, customers can launch and scale storage in minutes; with FSx for ONTAP, you click a few buttons, you get a file system a few minutes later, and scaling takes just seconds. A common theme we're seeing is that customers, as part of migrating to the cloud, are now able to leverage infrastructure-as-code tools like CloudFormation and Terraform to deploy their infrastructure more repeatably and more simply, which, again, you can do in the cloud when everything is provisionable via an API. Another key theme is the ability to leverage the cloud to reduce TCO. There are a few ways customers can do so. One is that with cloud services such as FSx for ONTAP, there are no minimum commitments. Customers can spin up a file system, run a workload against it, and if they no longer need the file system around, they can delete it, or reduce its performance, for example, and not have to pay based on the peak of what they've provisioned. We also offer a number of different storage products within AWS, including, within FSx for ONTAP, a capacity pool tier, and these storage products are fully elastic, meaning customers have the option to store their data in a number of different SKUs where they're only billed based on the data they have stored. So as they add data and as they delete data, the amount they're billed for is just what they're storing. And that allows customers to not worry about paying for excess capacity just to maintain headroom.
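Since everything is provisionable via an API, that click-through deployment can also be scripted. As a rough illustration only, here is a minimal AWS CLI sketch of creating a multi-AZ FSx for ONTAP file system; the subnet IDs are placeholders, and your capacity and throughput values will differ:

    # Create a 1 TiB multi-AZ FSx for ONTAP file system with 128 MB/s throughput.
    # subnet-0aaa and subnet-0bbb are placeholder subnets in two availability zones.
    aws fsx create-file-system \
        --file-system-type ONTAP \
        --storage-capacity 1024 \
        --subnet-ids subnet-0aaa subnet-0bbb \
        --ontap-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=128,PreferredSubnetId=subnet-0aaa"

The same parameters map directly onto CloudFormation or Terraform resources, which is what makes the repeatable, infrastructure-as-code deployments mentioned above possible.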
Lastly, we're seeing customers look to the cloud as a means to improve overall reliability for their storage and their applications. We offer a number of fully managed storage services where customers can leverage the benefits of the high durability and high availability our storage products provide, without the operational heavy lifting of managing hardware replacements, heartbeating, monitoring, software upgrades, and everything else that goes into managing and maintaining highly available infrastructure. And as part of reliability, customers in AWS can leverage a broad set of services to raise their security bar and make sure they're able to securely protect, monitor, and record access to all their data.

So when we talk to customers who are looking to make this migration to AWS and leverage the cloud for their applications and for the storage those applications rely on, a key theme we hear is that a number of customers today rely on particular storage solutions, whether it's NetApp ONTAP or other storage products they may deploy on-prem. What we've found historically is that if you're looking to make the jump to the cloud and you don't have exactly the same storage product, or a storage product with the exact same features offering like-for-like capabilities, that migration can be difficult, because as part of it, what customers need to do in some cases is rearchitect their tools and applications, recertify, and retrain their personnel on a completely different storage product. And all that does is add friction for customers trying to migrate to the cloud and leverage the benefits you can see on the slide.

So with FSx for ONTAP, our goal with this product, first and foremost, is to give customers a complete NetApp ONTAP experience. The service itself, at a high level, offers complete NetApp ONTAP file systems with all of the simplicity, agility, and scalability benefits that come from a fully managed AWS service. Put simply, FSx for ONTAP is ONTAP as an AWS service. We launched the service for a number of reasons, including making it easy for customers who rely on ONTAP, or storage products like ONTAP, to migrate to the cloud without needing to fundamentally rethink how they manage, access, and protect their data.

We built the service, again, with this like-for-like experience in mind, and there are a number of capabilities that, if you use ONTAP, I'm sure you're familiar with: capabilities such as SnapMirror and SnapVault replication; a really rich set of cost-optimization features such as compression, deduplication, and tiering that make it cost-efficient to store data; and lastly, a broad ecosystem of data-management APIs that customers leverage to create their own operational tooling, and that third-party ISVs also leverage to power their applications that access storage. Put all of these things together: when we set out to design and launch FSx for ONTAP, it was very important for us to make sure that we were giving customers a true ONTAP experience, because that was the feedback we heard from the customers we engaged with who were looking to have ONTAP's capabilities available in the cloud. And so with FSx for ONTAP, again, we offer a complete ONTAP experience.
It's ONTAP as a service, and for customers who use ONTAP or products like it, it offers a really easy path to migrate or extend ONTAP deployments over to the cloud.

There are a few key use cases we see customers run on the service. In short, when we talk to customers who use ONTAP on-prem, a lot of the same use cases they run on-prem are ones they will also run in the cloud; ONTAP on FSx works very similarly and has the same strengths in the cloud that ONTAP has on-prem for on-prem use cases. Some of the workloads we see are user and group file shares (think team home directories, for example), IT applications, databases such as SAP HANA, SQL Server, and Oracle, and line-of-business applications, and this is across a number of verticals, whether it's genomics or even EDA. And then, lastly, data protection. An interesting theme we're seeing with customers, when we talk about data protection in the context of this session, is that there are almost two concepts we commonly see. One covers the use cases where customers are leveraging FSx for ONTAP as their primary storage: the three on the left. That's where their primary data is stored; that's where the data that powers their application or workload lives. When we talk about data protection for these use cases, what we're primarily focusing on is the capabilities that make it easy for customers to ensure that the data they have stored in FSx is stored securely and is well protected.

In addition to that, customers are also leveraging FSx for ONTAP for data protection itself. What I mean is that in some use cases, for example, customers have workloads and applications that are on-premises today, and maybe they plan to keep those workloads on-prem for the foreseeable future, but they need a DR copy of that data to make sure that if something happens to that on-prem data center, they're able to fail over to another site and continue running their workload. Historically, what a lot of customers have done for this DR solution is provision an entire second data center that they replicate their data to, and that second data center is where they'd run the workload if there were an issue in the first. When we talk to customers about what they're planning to do in the future, a key theme we hear is that they're often looking to retire these secondary data centers and get out of the business of managing those second locations. And so with FSx for ONTAP, instead of replicating their data from their primary data center to a secondary data center, what they can do is replicate it from their primary data center to FSx for ONTAP. And they can do so using the exact same ONTAP replication features, such as SnapMirror or SnapVault, that they use today. What that enables customers to do is retire these secondary data centers, get out of the business of managing all that on-prem infrastructure, and have a fully managed storage solution they can use for DR, basically treating AWS as a DR site for their data.
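For readers curious what such a relationship looks like, here is a minimal ONTAP CLI sketch of replicating an on-prem volume into FSx, assuming cluster and SVM peering is already in place; the SVM and volume names are placeholders, and the XDPDefault policy is just one example of a vault-style policy that keeps a longer snapshot history on the FSx side:

    # Run on the FSx for ONTAP destination, after cluster and SVM peering.
    # onprem_svm:src_vol and dr_svm:vault_vol are placeholder names.
    vol create -vserver dr_svm -volume vault_vol -aggregate aggr1 -size 200g -type DP
    snapmirror create -source-path onprem_svm:src_vol -destination-path dr_svm:vault_vol -policy XDPDefault
    snapmirror initialize -destination-path dr_svm:vault_vol

Ty's demo later in this session walks through the full peering and mirroring sequence in detail.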
So again, when we talk about data protection, there are two kinds of use cases. One is protecting the data that's stored on FSx; that's the three use cases on the left. And then the last one, on the right, is using the service itself as a data protection copy for data, whether that data is stored on FSx or in another location.

So with that, I want to spend a few minutes talking about some of the data protection capabilities that FSx for ONTAP has to offer and that customers leverage. As Ty mentioned earlier, there are three key themes customers leverage to protect their data at scale: backup and archive, disaster recovery, and continuous availability of data. I'm actually going to go from right to left and start with continuous availability, because this is a pretty exciting capability we offer with the service.

With FSx for ONTAP, we offer two different deployment options: a single-AZ deployment option and a multi-AZ deployment option. A single-AZ file system is one where the data is replicated and highly available within a single Availability Zone, and an Availability Zone in AWS is akin to a data center on-prem. In addition to single-AZ file systems, we also offer a multi-AZ deployment option. A multi-AZ file system is one where, under the covers, all of the data in the file system is synchronously replicated across multiple Availability Zones. The data itself is accessible from multiple Availability Zones and highly available across them, because there are storage controllers, or file servers, available in multiple AZs. What multi-AZ file systems enable customers to do is have a highly resilient shared storage solution that's highly available across multiple AZs, so customers can run their applications and access shared storage from any AZ in a region, and even if there were an AZ unavailability, for example, the storage continues to maintain availability in the other Availability Zones and for the overall workload.

As an example, here's a diagram of an FSx for ONTAP multi-AZ file system. The file system itself is on the bottom half; the customer's environment is on the top half. Let's say there's a failure of a file server in the primary AZ, or the AZ itself becomes temporarily unavailable. Amazon FSx automatically fails over to serving data from a different file server in a different AZ, all clients are automatically redirected to accessing that data, and all of this happens seamlessly under the covers without impact to customers' workloads. As I mentioned, FSx is a fully managed service: if there were a file server failure, we'd automatically replace that file server under the covers, and clients would automatically fail over to it. Again, all of this occurs transparently, without impact to customer applications. The exact same thing happens with storage. If there were a storage disk failure, FSx automatically replaces it, so data remains highly durable and highly available even in the presence of individual disk failures, individual controller failures, or even the unavailability of a single Availability Zone. This is a really powerful capability that a number of customers are leveraging to provide highly resilient, continuously available storage across multiple Availability Zones in a region. The second theme we're seeing is disaster recovery.
Here, a very common pattern we see customers deploy is to use a multi-AZ file system for their primary workload. They'll have a workload, for example, running in the US East region, and that workload will rely on a multi-AZ file system. For disaster recovery, where you typically want another copy of the data in another location, what customers will sometimes do is replicate their data using SnapMirror or SnapVault into an entirely different file system, maybe in another region. Oftentimes that file system will be a single-AZ file system, which gives them a lower cost point for the DR copy. And again, this works the same way it would with any other ONTAP system. FSx for ONTAP fully supports SnapMirror and SnapVault, so you can just replicate data from one location to another, and Ty is going to show you a demo of this a little later. Similarly, and I talked about this a bit before, customers can just as easily replicate data from an existing on-prem ONTAP system into FSx: same concepts, just setting up a SnapMirror relationship from on-prem to FSx. For those of you who are familiar with SnapMirror, it's a very flexible and powerful technology. Customers can SnapMirror into FSx, out of FSx, within a region, or across regions; there are a number of different ways customers can fan in and fan out their replication relationships. From an FSx perspective, FSx for ONTAP is just another ONTAP cluster, so all of the SnapMirror and SnapVault primitives you may be familiar with work the exact same way with FSx.

Now what I want to touch on is backup and archive, the leftmost theme that Ty talked about earlier. With FSx for ONTAP, we offer a fully native, built-in backups capability, and there are three high-level ways to create backups of your volumes. The first is that Amazon FSx automatically takes daily backups of all of your volumes. This is enabled by default; you don't need to configure anything to have backups enabled. It's a really simple, powerful capability that makes sure you have an offline backup copy of your data if you ever need it. The second is that FSx for ONTAP is fully integrated with the AWS Backup service, which is a service Amazon offers that allows customers to centrally manage data protection strategies for all of their AWS resources, FSx for ONTAP included. Last but not least, you can create a backup of a volume at any time, simply by clicking the create backup button in the FSx console or calling the create-backup API, and Ty is going to show you this in the demo a bit later. Regardless of how you create a backup, all backups on FSx for ONTAP work the same way. Backups on FSx for ONTAP are highly durable, and they are forever-incremental: each backup you create is incremental relative to the backup that came before it. And they're all stored in a highly durable way, so you can easily restore your data in the event of a failure of any kind.
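As a quick illustration of that last option, here is what a user-initiated backup might look like with the AWS CLI; the volume ID is a placeholder:

    # Take a user-initiated backup of an ONTAP volume (placeholder volume ID).
    aws fsx create-backup \
        --volume-id fsvol-0123456789abcdef0 \
        --tags Key=Name,Value=data-backup

The console's create backup button and AWS Backup plans drive the same underlying operation, so however a backup is created, it behaves identically afterward.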
Lastly, what I want to talk about is SnapLock. SnapLock is a capability customers use to further protect their data, and it's a feature we brought to FSx for ONTAP just this past July, so just a few months back. We're really excited to have it as part of the service. For those of you who aren't familiar with it, SnapLock is an ONTAP feature that prevents data from being modified or deleted for a specified period of time. You may have heard of a class of features called WORM, or write once, read many, features; SnapLock is a WORM feature. There are three high-level use cases where we see customers leverage SnapLock today, and where they're looking to leverage SnapLock on FSx for ONTAP. One is to meet regulatory compliance. A number of customers are in highly regulated industries and have requirements to keep data around for a certain period of time; SnapLock is a really powerful capability that enables these highly regulated customers to meet their regulatory compliance requirements. Second, especially recently, we're seeing a pretty significant increase in customers looking to leverage SnapLock to add an additional layer of protection against ransomware attacks. The high-level idea is that in a ransomware attack, an attacker may get into your system and either delete data or modify it in such a way that they can demand a ransom from you in order for you to access that data again. By having an immutable copy of your data through SnapLock, customers can effectively give themselves a layer of protection against a ransomware attacker: if a copy of the data is fundamentally immutable, regardless of the permissions an attacker may have, the attacker can't demand a ransom from the customer, and that makes it easier for customers to protect themselves against these kinds of attacks. And lastly, by being able to show that data is not modifiable for a period of time, SnapLock is often a capability that makes it easy for customers to ensure the authenticity and integrity of data. A number of customers we engage with perform periodic checksums or things like that; instead, customers can simply put their data in a SnapLock volume, and that by itself can be an easier way to show the data has not changed over time.

I wanted to end by sharing a few details about how SnapLock specifically works on FSx for ONTAP, for those of you who may or may not be familiar with SnapLock. The first thing I want to start with is what hasn't changed. If you're familiar with SnapLock as an ONTAP capability, what hasn't changed is that SnapLock works fundamentally the same on FSx for ONTAP as it would with any other ONTAP system. You can continue using the ONTAP CLI and the ONTAP REST API to manage your SnapLock volumes. And SnapLock itself offers two retention modes, an Enterprise mode and a Compliance mode, and both are supported on FSx for ONTAP.

Now, some unique differentiators in terms of how SnapLock works on FSx for ONTAP. One is that, depending on the version of ONTAP you're running and depending on the environment, tiering of colder data to lower-cost storage is not fully supported in all SnapLock deployments. With FSx for ONTAP, it is. With FSx for ONTAP we offer two storage tiers, a high-performance SSD tier and a lower-cost capacity pool tier, and with SnapLock on FSx for ONTAP, customers can have their data automatically tiered off to lower-cost capacity pool storage and reduce their costs in doing so. Second, SnapLock is a separately licensed feature in ONTAP. With SnapLock on FSx for ONTAP, customers only pay a SnapLock license based on the data they have stored in the volume, so they don't pay for the overall file system they create.
They don't even pay based on the size of a volume; customers only pay based on the data they actually store. Third, we purpose-built SnapLock on FSx to be super simple. With SnapLock on FSx, there's no need for you to worry about installing SnapLock licenses, purchasing licenses separately, or managing compliance clocks or any of that low-level infrastructure. All of that is fully managed and taken care of by the service. And a SnapLock volume, at the end of the day, is accessible using the same exact protocols you would use to access any other volume, whether it's NFS, SMB, or even iSCSI. Lastly, SnapLock on FSx was an exciting feature for us to launch. We heard from a lot of customers who had been asking for this capability in the cloud, and with SnapLock on FSx, it's a pretty unique capability that customers now have available for the first time in the cloud. For example, with SnapLock on FSx for ONTAP, the service is now the only fully managed file storage solution in the cloud that offers WORM protection, a really exciting capability for customers who have file-based workloads and are looking for that extra layer of protection. It's the only ONTAP SnapLock, as I mentioned earlier, that allows tiering of data to lower-cost storage, and with FSx for ONTAP we also offer the only SnapLock Compliance mode, one of the two modes of SnapLock, available in the cloud. So again, a very exciting, really powerful capability that enables customers to further their data protection strategy, whether that's protecting themselves against ransomware attacks, for example, or meeting their compliance requirements.
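To make that concrete, here is a minimal ONTAP CLI sketch of creating a SnapLock Compliance volume; the SVM, volume name, size, and retention period shown are all placeholders, and on FSx the SnapLock license and compliance clock are handled by the service rather than by you:

    # Create a SnapLock Compliance volume (placeholder names and size).
    vol create -vserver svm1 -volume worm_vol -aggregate aggr1 -size 100g -snaplock-type compliance
    # Set the default WORM retention applied to files as they are committed.
    volume snaplock modify -vserver svm1 -volume worm_vol -default-retention-period 6months

Once committed, files on the volume cannot be modified or deleted until their retention period expires, which is what provides the immutability discussed above.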
So with that said, I want to pass it back to Ty, who's going to give you a demo of some of these really cool capabilities of FSx for ONTAP.

>> Andy, that was stellar. Thank you so much; I really appreciate it. Having you go through what FSx for NetApp ONTAP is really brings it home, helps us better understand what the product and service is, and makes it really real and tangible. To have you on here is an absolute blessing and pleasure, man. So, really appreciate it.

All right, so let's jump into some demos real fast. Just to preface them: the first demo I'm going to show is all about how to deploy FSx, just so you can see it in action, how to get access to it, etc., and in addition to that, how to back up a volume. From there, we'll go into another demo video that shows how to actually set up a SnapMirror relationship across regions, from one file system to another. I'm really excited about that one, too. So, let's get started here.

Now, as you can see, we're in the AWS console that we all know and love; you've probably seen it before. For me, FSx actually shows up under my recently visited services; that's because I was building this lab. But what I'm trying to show you here is that you can quickly search for it. It is a native service. This is not Marketplace; this is strictly an AWS product and service right here in the console. So I've searched for the service, and now it appears here. As you can see, I have a file system listed, and I'm going to walk you through deploying the one I already have listed; I pre-deployed it. As you can see here, you have two choices: a quick create of a file system, which is really close to push-button, and a standard create.

So, I give it a name. I have two deployment types, as Andy mentioned: multi-Availability Zone and single Availability Zone. I'm selecting multi for now; I want HA across AZs. I have a 192 TiB maximum and a 1 TiB minimum, and I'm going to just use the default here. At a minimum, we have 3 IOPS per GiB; I'm just going to select that for ease of deployment in the lab. From a network throughput perspective, we have a couple of options. Our two speed variables are IOPS and throughput; you saw the IOPS, and in this case, on throughput, I can go anywhere from 128 megabytes per second to 4 gigabytes per second. I'm going to select 128 just to keep it easy. I'm selecting the VPC I've created for this lab and the subnets I prefer, and I'm going to choose a couple of route tables I pre-created, just to set this up quickly, since we don't have all the time in the world. I'm going to let it take the IP addresses as a default. From an encryption perspective, you can use the default key provided for the FSx service, or you can use your own key that you've created for the service. In either case, everything is encrypted by default; you don't have an option, it's just natively encrypted, right? I'm going to give it a storage virtual machine name and a password, and I'm selecting the security style, in this case Linux, or Unix. I'm not joining an AD right now. I'm going to create my default volume at deployment and give it a volume size. Here I specify whether it's read-write or data protection; if we knew we were setting up a SnapMirror target, or destination, I would select data protection, but for now we're going with read-write. I have pre-canned, or default, policies that I can select, and I'm going with the default for now. We've got capacity pool tiering; we talked a little bit about tiering to save money on the storage of our data in AWS. The default cooling period is 31 days and can go up to 183 days; right now, I'm going to select the auto tiering policy just to keep our data efficient in the lab. I'm also showing you what SnapLock looks like; we did talk about the SnapLock feature, and you would enable it right there on the volume. And I would create the file system. Now, I pre-created the file system for the sake of time, but a deployment takes roughly 20 to 30 minutes at most, really, just depending on the day. So I have already deployed this one with all the same variables I just showed you.

All right. Now we're going to take a look at our volumes. This is one view of the volumes, showing only the volumes specific to this file system; I'm going to take you to the other view, which shows the volumes across all file systems in FSx. Remember, there are other flavors of FSx, right? So you can see those volumes here as well. But I'm going to select the data volume I have listed here, give it a backup name, and go ahead and back it up. That's 500 gigabytes, and it's going to take roughly 15 minutes. As you can see here, it's listed as a user-initiated backup. Fifteen minutes, 500 gigabytes, and done. I now have a backed-up volume called data backup.
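For anyone scripting this walkthrough rather than clicking through the console, the volume settings shown here (size, junction path, tiering policy) map onto the create-volume API. A minimal AWS CLI sketch, with a placeholder SVM ID:

    # Create a 500 GB read-write volume with automatic capacity-pool tiering.
    aws fsx create-volume \
        --volume-type ONTAP \
        --name data \
        --ontap-configuration 'JunctionPath=/data,SizeInMegabytes=512000,StorageVirtualMachineId=svm-0123456789abcdef0,StorageEfficiencyEnabled=true,TieringPolicy={Name=AUTO}'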
Now, let's pretend I need to go recover data out of that backup. So I'm going to select the backup and hit restore, and here you will notice that it says, at the top, create volume from backup. This is really cool. So I'm going to give it a file system ID, or target. All backups are offline, meaning that if the file system the volume was originally tied to disappears, or we lose access to it, I can take that backup and restore it to a completely separate file system, external to the original. That's what I'm actually showing here. So I give it all the specifics of a volume: the security style like you saw previously, the snapshot policy, our tiering policy, and all of this I can edit on the fly at volume creation if I know I need this volume to be different. I go ahead and hit restore, and it does a data restore. This took roughly 10 minutes when I timed it, if I remember correctly.

All right, so we now have a restored volume. What do we do with it? You mount it, right? So let's take a look at what it looks like to mount this volume from the restore. I'm going to open up an EC2 instance that's in this region; I can see this file system. First, I want to ping and make sure I can get access to it. So I go to the file system itself, and immediately you can see the IP information; here's the management endpoint. I ping that management IP, and I can see that I have connectivity to it. All right, I want to mount it, but I can't recall all the information for the volume, so let's go get that info. If I hit attach right there, it gives me the output for mounting my volume, so I can quickly copy and paste that. There's my df showing it's not mounted yet. Ah, I don't have the directory, so let's make it. You don't have to match the name, but I'm matching it. I append the command, and boom, it's mounted. Let's take a look. There it is: there's my restore, mounted in this EC2 instance, restored from the data volume. It's really that simple, folks. And it's awesome. All right, let's go to the next one.
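The mount steps in this demo boil down to a few shell commands. A minimal sketch, using a placeholder SVM NFS DNS name and junction path:

    # Mount the restored volume over NFS (placeholder DNS name and junction path).
    sudo mkdir -p /fsx/restore
    sudo mount -t nfs svm-0123456789abcdef0.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/vol_restore /fsx/restore
    df -h /fsx/restore

The attach dialog in the console generates essentially these same commands for you, which is what I copied and pasted in the demo.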
All right. So now you can see that we are back in the console. What I'm going to do in this next demo is demonstrate how simple and easy it is to set up disaster recovery between two file systems in two distinct regions. So cross-region replication is what we're targeting here. Here's my North Virginia SVM, file system, and volume; we're going to look at the prod volume, the one I created earlier, 20 gigabytes. Now we're going to go to Oregon and take a look at the file system over there. Here is the SVM, and notice there's no actual volume yet; we'll get to that in just a little bit. Now, in North Virginia, I'm going to take the IP information here and open up an EC2 instance. Let's get connected; setting up my lab here. There's my Virginia instance.

Okay, so before I dive deep into this, what I meant to go back to here is establishing VPC connectivity. In order to do that, you have to create a VPC peering connection for communication, SnapMirror communication specifically, to traverse the regions. Now, for SnapMirror, you only need a VPC peering connection; you don't need a transit gateway. For some other things, like mounting volumes across regions, you do need a transit gateway. What I'm checking here is the route table for the VPC peering. You need two CIDRs that are distinct and don't overlap, so I made them very different on purpose for the sake of the lab. It looks like I've got the subnets listed in the route table, so we're good there. I'm going to go over to Oregon and take a look at that route table as well, to make sure those CIDRs are listed in the route tables and we'll be able to communicate. All right, so we've established that we do have that, and I'll ping across the region. Now, in this case in the demo, for those eagle-eyed watchers, I pinged the wrong IP address; I actually pinged the IP address of the EC2 instance. But it did work; otherwise this demo wouldn't work.

What I'm going to do is SSH first into the North Virginia file system and take a look at all of its information. When we set up SnapMirror, the intercluster LIFs, or interfaces, are critical; those interfaces are where the SnapMirror traffic actually flows. So we're making notes right now of the different interface information, because we're going to come back to that, intentionally, right here. If you notice, I'm doing a cluster peer create; we have to peer the two file systems first. So I'm establishing a peering relationship between the two file systems and doing a show so I can see its status. What we're going to do next is go to the Oregon region, take a look at the file system on that side, and do the same thing, because we need to complete the peering process. So we'll do that here on the file system in the Oregon region. I SSH in first, and once I'm in, we do a cluster peer show just to show that nothing's there. Yep, there we go: empty table. All right, so: cluster peer create on this side of the house, using the same passphrase as on the North Virginia side. Looks successful. Let's double-check it. All right, we now have two file systems peered across regions.

The next step in establishing SnapMirror communication across the regions is to peer the SVMs. Remember, if you look back, we have a prod SVM and a DR SVM. So I'm doing a vserver peer create here; it is now queued, right? So let's do a show and take a look at it. It does show that it's initiated; it also shows that it's for SnapMirror specifically, and it is in a pending state. So let's do an accept. And it is now peered. All right, we've got the next step out of the way: we've peered the clusters, the file systems, and we've peered the SVMs. Sweet. But we're not done; we've got to do a vol create. Remember, when we looked at the Oregon file system, I hadn't created a volume yet, and that's because I wanted to show the CLI and the power of the ONTAP CLI, right here, native. So here's a vol create, just like you would see it anywhere. It is a DP volume; remember, this is a destination, or target, for the SnapMirror, so it has to be DP. And the volume list confirms that the volume is officially created. Okay, so let's do a snapmirror create and get this thing rocking and rolling. I have now created the relationship with the destination, okay? It's uninitialized, which means we have to initialize it. So we initialize our SnapMirror with the next command here. That triggers the initialization, and you can now see that it is initialized and already transferring data. We're off to the races. So we're now transferring data across the region. This took, I don't know, four or five minutes, I think, something like that. It is now completed; as you can see from the steps on the screen, the video has been fast-forwarded a little bit, but we're now snapmirrored and idle.
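Pulling the whole sequence together, here is a minimal sketch of the ONTAP CLI commands from this demo, including the failover steps that come next. The SVM and volume names are placeholders, and the angle-bracketed values stand in for your own intercluster LIF addresses and cluster names:

    # On the source (N. Virginia) file system: peer the clusters and SVMs.
    cluster peer create -address-family ipv4 -peer-addrs <Oregon intercluster LIF IPs>
    vserver peer create -vserver prod_svm -peer-vserver dr_svm -applications snapmirror -peer-cluster <Oregon cluster>

    # On the destination (Oregon) file system: complete peering with the same passphrase.
    cluster peer create -address-family ipv4 -peer-addrs <Virginia intercluster LIF IPs>
    vserver peer accept -vserver dr_svm -peer-vserver prod_svm

    # Create the DP destination volume and start replicating.
    vol create -vserver dr_svm -volume vol_dr -aggregate aggr1 -size 20g -type DP
    snapmirror create -source-path prod_svm:vol_prod -destination-path dr_svm:vol_dr -policy MirrorAllSnapshots
    snapmirror initialize -destination-path dr_svm:vol_dr

    # To fail over: quiesce and break, which makes vol_dr read-write.
    snapmirror quiesce -destination-path dr_svm:vol_dr
    snapmirror break -destination-path dr_svm:vol_dr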
Now, let's pretend there's an event. I'm going to quiesce the SnapMirror and make sure nothing's happening on it, and in order to actually fail over, we have to break the SnapMirror. So, we're going to break the SnapMirror. When we do that, and it shows broken-off here, it converts that volume from DP to read-write. So we've broken it; it's converted. Now we're going to actually go mount the SnapMirror destination in Oregon. As you can see, we double-check that we're in Oregon on my EC2 instance. It is mounted; we do a df, and you can see that mounted volume with snapmirrored data across multiple regions.

So, in a very short period of time, I have two multi-AZ file systems on each side of the United States. They are connected cross-region through the VPC peering that we double-checked. I set up the source in North Virginia with data inside of the volume. I peered the clusters, the file systems; I peered the SVMs successfully. I established and initiated the SnapMirror, the data transferred, and then I failed over to the distant-end multi-AZ file system in Oregon and mounted the volume that was transferred. Now, I sped this up; in total, without the video sped up, this is roughly 12 minutes. I mean, there's some time for transfer, etc., but it's a relatively quick process, and I hope you see the power of the fact that this is fully native to AWS. I have the CLI right here in an EC2 instance; it's all baked in right here. Super cool stuff.

All right, so we've gone through some demos. You listened to Andy tell an incredible story around all the features and functionality for data protection and compliance in FSx for NetApp ONTAP. I want to take you back to that original story I told you about the mainframe. What is all this about? This is all about raising your bar in cyber resilience within AWS, and you can do that with FSx for NetApp ONTAP. I have actually seen it done myself; that story I told you is a real and true story. When I first came to NetApp, I was a solutions engineer, as we called that role at the time. My first customer was a pretty well-known food distribution company. They had this exact scenario play out: they had a failed transformation effort, they hired some new folks to come in, and when those folks came in, they had the conundrum of what to do with this application. I mean, this thing's old school: it's an AS/400 emulated in Windows. We need to get it up into the cloud; we can't refactor, replatform, and all that stuff here on-prem, because we don't have those native services. So we need this data, and the application as it is, forklifted into the cloud, so to speak. When I approached them, I said, "Hey, I've got an idea. I know you're trying SoftNAS." Things weren't working well with SoftNAS, and they were having trouble getting the data transferred in the first place. The data stayed as files, but we basically migrated it from the storage it was on to a virtualized instance of ONTAP called ONTAP Select, and then we snapmirrored the volumes as iSCSI block volumes to the cloud. Because of the multi-protocol aspects of ONTAP, we were able to do that and achieve their cutover window in time to be fully operational in AWS. That was a mainframe inside of a mainframe in the cloud. Absolutely incredible. So, we got the migration done, but uh-oh, there was a network outage in an availability zone. Well, lucky for them, we chose a multi-availability-zone setup, with synchronization happening between two nodes in an HA pair, just like multi-AZ FSx for NetApp ONTAP. And no one knew the difference when the availability zone dropped.
In fact, I got a little notification; no one else did, and they had notifications set up. I was like, well, that doesn't look good. Because of that outage, one of the nodes did fail, but everything automatically failed over to the other AZ. No one knew the difference, and that saved them millions of dollars a minute from being offline.

Now, I can't really reference this customer publicly, so to speak. So let me give you something a little more tangible, an actual customer I can put on the screen here: S&P Global, their Market Intelligence group. They've done something similar. They sought out a better disaster recovery solution in AWS, and they found it in FSx for NetApp ONTAP. If you use this QR code, it'll link you to the blog post that gives you all the detailed architectural information they used to build their solution in AWS with FSx for NetApp ONTAP. At the end of the day, what they achieved was, first, some technical bonuses. They were used to transferring and replicating data at the database level, which was causing them different types of problems; among those problems was performance. Because they put in FSx for NetApp ONTAP, and we have SnapMirror, they can do block-based replication with SnapMirror across the regions, and they now have a far more efficient and effective disaster recovery setup for their database data, their SQL data. In addition, with the built-in efficiencies, deduplication and compression, they were able to reduce their overall storage cost in AWS. It's a really cool story; I highly encourage you to go check it out.

All right, we're coming down to the end here. You've heard so much from myself and from Andy, and you've seen some demos that hopefully brought a little bit of reality to you instead of just some slideware. That should boil down to a couple of key points. The first is ONTAP feature parity: FSx for NetApp ONTAP is ONTAP, just a managed version of it in AWS. In addition to that, AWS has put its full force and power behind automation and management; it's fully managed. So with that, it's the storage OS you know and love, right? We're raising the bar in intelligent data infrastructure, and we're raising that bar with cyber resilience, protection, and enterprise-grade data management for your data in AWS. Last and certainly not least, you have a deep partnership between AWS and NetApp. It is so cool to be a part of this partnership. I've seen it grow over the course of roughly 10 or 11 years, and it went from nothing to what it is now, an actual product and service native to AWS. Not Marketplace; this is AWS's service, one that they manage. AWS is the leader in cloud computing in the world, and you pair that with NetApp's leadership in intelligent data infrastructure, and you've got a partnership made in heaven. It's absolutely beautiful. I can't thank AWS enough. Andy, thank you so much for your partnership; it has been super incredible.

Now, because this is recorded, there isn't another session for you. There was an actual lab session, but I would encourage you to keep your eyes peeled for an immersion day coming up in January. Get with your sales counterparts at AWS and/or NetApp; they can give you information on our immersion day. The immersion day is a full hands-on lab where you can get your hands dirty with FSx for NetApp ONTAP at no cost to you, just time. It's roughly around two to three hours.
We'll walk you through these features in excruciating depth, and you'll get to build these SnapMirrors and test and play with other features, like FlexCache, for instance, and basic snapshotting. You'll get to dive deeper into how the backup technology works, etc. Really cool. Aside from that, there are some pages you can go check out online. I highly encourage the product page; use these QR codes, and they'll link to these pages for you. The product documentation is spectacular, man. I love it. I fall asleep reading product documentation; if I want to go to sleep, that's what I read. But I don't fall asleep with this documentation. It's very well written, very robust, and there's so much good information in there. Go check it out. And again, tutorials and on-demand sessions are at the bottom there; there's a plethora of information out there for you. Go check out any of these social media URLs. You can hit us up on Facebook, on Twitter, on Discord. Andy and I are on LinkedIn; we've got those links posted here on this page, so please go check us out on LinkedIn. Last and certainly not least, a huge thank you. Huge. Thank you so much. It was awesome to be a part of Insight, and it was awesome to see all of you in person at Insight for the first time in, what, four years? I mean, what in the world? So cool. Thank you for coming to this session. I hope you have a wonderful day. Go check out all the other sessions out there; they're stellar, with so much information. And with that, God bless. Thank you so much. Take care.
Explore an integrated approach to protect your data from serious threats like ransomware, unplanned data loss, or disasters. With built-in NetApp® ONTAP® data-protection capabilities in FSx for ONTAP, it’s simple to back up, archive, and [...]