BlueXP is now NetApp Console
Monitor and run hybrid cloud data services
Hi there, my name is Chris Fox and I'm a cloud technical solutions specialist with NetApp. I want to welcome you back from Insight. Today we're going to cover session 1365-2, Managing Your NetApp Data Infrastructure with BlueXP. I'll go ahead and share out my desktop, show you the presentation, and we'll walk through the whole session. So just to confirm again, this is Managing Your NetApp Data Infrastructure with BlueXP, Chris Fox, session 1365-2. Anyone who attended NetApp Insight will have seen this slide in just about every presentation we had, and for those watching this fresh, this is a confidentiality notice. It basically says we'll be discussing features and some functionality of the product. Anything that isn't already released may shift in scheduling or may not come out; this isn't a promise that a feature will be delivered, but we do want to give you some insight into what we're doing. Okay. So just setting the stage: the reason we're talking about BlueXP and managing your data infrastructure today is that it's one of the main ways NetApp is evolving our management tools to give you better ways to handle the rapid growth of data in your infrastructure. We're also looking at new and unique security threats, especially ransomware attacks, and we're looking at providing a unified view of all of our products: E-Series, StorageGRID, traditional ONTAP systems, as well as cloud resources in Google Cloud, Azure, or AWS. We want to provide one interface that can span your infrastructure on premises as well as in the cloud. So for the agenda, we're going to first define what BlueXP is so you understand what we're talking about.
I'll go through an example BlueXP use case, a typical request for storage, and how you would handle it using this interface. We'll go over the management and some of the additional tools that BlueXP provides. We'll show you ways to gather more data about your environment, not just from existing tools, but from brand-new interfaces that aggregate data from a variety of sources. We'll talk about using some of the optional add-on tools for protecting your data, whether that's our cloud backup tools or traditional things like SnapMirror replication. Also data mobility: we just mentioned SnapMirror, and we have a copy and sync tool that's heterogeneous, so it can take data from third parties and move it into the NetApp environment, or move data from NetApp environments out to third parties if that's a use case you have. Another way we've evolved NetApp's BlueXP interface is data classification and governance, which is a newer area where people are looking to NetApp for help. Traditionally, we stored your data and you possibly used other tools to back it up or manage classification. We also have a tool integrated into BlueXP that can show you how your unmanaged or unstructured data is categorized, quickly and easily. And then we'll wrap up with the conclusion. So let's go ahead and begin. BlueXP is a new interface that NetApp has developed. It aggregates several different tool sets together and provides a graphical interface to look at your environment, quickly see relationships between different entities, and then also create new relationships, or discover, deploy, and create new storage options in your environment.
It provides deep insight into your environment: things like storage utilization, power consumption, and forecasting for when you may need updates and expansions. We've also incorporated a new way to look at your licensing. Instead of having to contact NetApp or your partner to understand your license subscriptions and renewal dates, that's integrated functionality in the tool's digital wallet. And wrapped around all of this, you'll see the strong emphasis NetApp places on data security. We have ransomware protection tools that provide guidance: are you at risk? Do you have sensitive data somewhere? And if something occurs in your environment, we can even monitor for aberrations from normal behavior and alert an IT person to take a look, or you can enable the tool set to automatically react when it discovers something happening. BlueXP is traditionally going to be used by NetApp storage admins, but it has interfaces and applications that apply to a variety of people. The finance team might be interested in the digital wallet for subscriptions and licensing. Your security team can definitely benefit from alerts and updates on what's happening in the environment, and from verifying that you have protected backup data and immutable snapshots that cannot be tampered with. IT management and architects will also benefit. We have integrated tools that can do backups at the application layer and the database layer, and we have discovery tools that understand what's going on in the cloud and provide insight into financial operations, application management, Kubernetes, and containers; a lot of that tool set is all integrated together. And this spans on premises, Azure, AWS, and Google Cloud.
The other thing we're really seeing a lot of people leverage is the automation tool set. BlueXP has a fully developed REST API interface. You can use that to import and connect tools from outside of us. We also expose the JSON payloads and code for all the operations so you can integrate with third-party tools; common examples would be help desk tools, inventory tools, data management tools, and CMDBs. Another interesting approach taken with BlueXP is that a lot of the classification in the menu structure is done without using NetApp terminology for our products. You don't see product names for the most part; instead, items are defined by their functional group. Simple things like storage, health monitoring, protection, governance: you don't have to be a NetApp expert to understand what those subcategories relate to. And within there, we can go into your storage information, break out into things like Active IQ under Digital Advisor, and some of the add-on tools like classification or backup and recovery. All these sections are easy to navigate, and you don't have to know or remember a branded tool name to find the functional area you want to work with. Okay. With BlueXP, the vast majority of people will use our standard mode, which is a software-as-a-service delivery. Basically, you just go to our website, bluexp.netapp.com, which I'll show you in a demo shortly. You register with your NetApp support account, and then you'll have access to a BlueXP interface where you can start to discover your existing working environments as well as build net-new environments.
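To make the automation point concrete, here is a minimal sketch of scripting against a REST interface like BlueXP's, for example to feed a CMDB. The base URL, endpoint path, and header names below are illustrative assumptions, not documented BlueXP API calls; check the published API reference before building on anything like this.

```python
"""Hypothetical sketch of driving a BlueXP-style REST API from a script.
All endpoint and header names here are assumptions for illustration."""
import json
import urllib.request

BASE_URL = "https://example.invalid/bluexp/api"  # placeholder, not the real host


def auth_headers(token: str, account_id: str) -> dict:
    # Bearer-token auth plus an account-scoping header (names assumed).
    return {
        "Authorization": f"Bearer {token}",
        "X-Account-Id": account_id,
        "Content-Type": "application/json",
    }


def list_working_environments(token: str, account_id: str) -> list:
    # GET the working environments so an inventory tool can ingest them.
    req = urllib.request.Request(
        f"{BASE_URL}/working-environments",
        headers=auth_headers(token, account_id),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same pattern (token, account scope, JSON responses) applies whether the caller is a help desk tool, a CMDB sync job, or an ad-hoc script.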
That's what most people will do, but there are other supported options where you can download and install BlueXP on your own managed VM, and that can be hosted in a variety of environments in a more restricted setting if you have regulations or requirements. Ultimately, we can even do this in private mode, so it could be completely segregated in a dark site or an isolated DMZ environment. You will have less than full functionality of BlueXP, because some things like Digital Advisor rely on connecting to the Active IQ service, and if you don't have that access, you lose that functionality. But it is an option for people who just cannot manage their environment from a public cloud. The architecture for this would be either running on a VM or delivered as software as a service, but the main layer you interact with is a web browser. From the web browser, as long as you can connect to wherever BlueXP is installed, we make REST API connections to something called a connector, or connectors. The connector can service rights, security groups, and network settings for specific hyperscalers. It also allows on-premises discovery of your environment and connects to some of our tool sets there. So the connector is an important piece when you get into more advanced functions of BlueXP, and ultimately this can work with your ONTAP systems, whether they're on premises or in the cloud with AWS, Google, or Azure: one point of view across a variety of settings. Okay. In this example, we're going to go into the first of three demos. The example we'll talk about is a request for storage, which is probably one of the most basic things that comes to NetApp admins.
We'll also go through how to create a Cloud Volumes ONTAP instance, if that's one of the options you need for storage, some of the migration tools to move data back and forth, and the various cloud options in our environment. So let me go into my working environment here. This is the live demo environment, and we're going to start out at bluexp.netapp.com. This is a good launch point to get a lot of information about BlueXP and NetApp's various products. Along the left-hand side you'll see both our first-party NetApp offerings and the cloud providers' first-party offerings based on NetApp technology: things like Amazon FSx for NetApp ONTAP, Azure NetApp Files, and Google Cloud NetApp Volumes. That's a key point I want to pause and emphasize: ONTAP is able to run on your FAS systems, you can run it as ONTAP Select on white-box hardware that you manage or that's managed in a hosted environment, but now you can also run ONTAP in the cloud. Cloud Volumes ONTAP is one option, where you run it and manage the virtual machines, but each of the cloud vendors has also partnered closely with NetApp to provide first-party offerings they deliver to their customer base directly. You can get Amazon FSx for NetApp ONTAP directly from Amazon; that's not something that would be quoted or provided through a NetApp sales team. Azure NetApp Files, based on NetApp technology, is in Azure, and once again that's a software-as-a-service offering that provides storage for NFS and SMB file shares. And the most recent is Google Cloud NetApp Volumes, where Google is now providing a first-party offering that they manage.
The reason I emphasize this is, number one, it says a lot that these very large hyperscalers, which have a lot of resources, have chosen to partner with NetApp rather than develop their own net-new technology to try to match the same storage functionality. I think it speaks to the fact that 30-plus years of NetApp development is hard to catch up to. So instead of trying to compete (they do have some other offerings in the same storage marketplace), they also saw an advantage in getting NetApp's technology. The advanced features like SnapMirror, our storage efficiencies, and all that built-in functionality are very valuable. Amazon FSx for ONTAP is very similar to Cloud Volumes ONTAP, except it's an Amazon-managed environment, and they've tailored it to be very efficient for that environment. Azure NetApp Files is also a very high-performing environment. Both of these are where you would get support for SAP HANA or Oracle workloads, or whatever has your greatest resource requirements and lowest latency requirements. The same goes for Google Cloud NetApp Volumes. The other thing we'll talk about is that all three of these solutions are now supported back-end datastore options for VMware. So VMware, the hyperscalers, and NetApp, all three companies coming together in each of the individual solutions, provide you a supported datastore option to expand across your vSAN options. So there's a lot here. You can go through the sources and solutions; I wanted to highlight the resources, though. A lot of times you're trying to figure out what your cost would be moving into the cloud, and there are a lot of calculators on this website where you can drill down and see estimated costs, not only for the licensing from NetApp but also the infrastructure you'd have to purchase if you're going to run certain things in the cloud.
That's for Cloud Volumes ONTAP, for the first-party offerings, as well as the VMware solutions, and you can review the latest pricing. But what we'll do is go up to the right-hand corner and launch into our console, which takes us to BlueXP proper. This is the first landing page. When you get into BlueXP, I can see my canvas here, and this is the graphical interface we talked about earlier. I can quickly see the relationships between the different HA pairs or single nodes as they're defined in this cloud infrastructure. I can quickly see logos: this is AWS, this is Google, this is Azure, my on-premises systems, E-Series, StorageGRID, my all-flash systems. All of this is very easy to view. If you have a more complex environment, you can go to the tabular view and see more of a spreadsheet layout where you can quickly filter and sort, but for this environment we'll basically stay here in the canvas. So, the example I alluded to earlier: we get a request from Bob, who all of a sudden realized his hardware is no longer going to be available in his hosted environment, and he suddenly needs many terabytes of storage to host his data and applications. In fact, he only has a week to do this. He calls the storage team in a panic and asks: can you get a FAS system or some storage ordered, shipped out, installed, and set up in just a few days, so he has time to replicate his data? As anyone who's ordered NetApp systems, or any hardware, realizes, it's probably going to take longer than a day or two to get something drop-shipped in: you've got to go through quoting, work with the partner system, make sure you have the right pricing, and go through your procurement process. But luckily for Bob, with his data on the on-premises system, we can go ahead and look at details about the volumes.
We drill down in here, see what his data is, view the volume information on the on-premises system, verify the volumes he has and their sizes, and then we can even run a migration report: if we're going to move this data from one point to another, let's look at the volumes Bob needs moved, the number of files, the sizes of the files, the depth of qtrees, the number of directories, and the typical latency and performance they're expecting, so that if we're going to move it, we size it appropriately. All this information can be verified. I can go back to my canvas (that was under migration reports) and say: okay, Bob, we've luckily got Cloud Volumes ONTAP already available, we can procure new storage on demand, and I'm going to set up replication to move your data. What I'm doing, basically, is selecting his volumes and defining a SnapMirror relationship. What type of storage do I want as the destination? What do I want to call it? Do I want to load balance, or just let BlueXP put it on the best aggregate possible? Do I need any throttling as I move this data from one point to another? Do I want not only SnapMirror replication but also a backup of the data, so I have an immutable copy we can restore from? Any of your SnapMirror-defined relationships can be done here, along with snapshot policies. I can set my frequency: do I want a one-time migration, or do I want to run this periodically to keep it up to date? At the end, I see a summary of what I've selected, and because this is in the cloud and I'm going to need net-new storage, I just have to acknowledge the warning that this will incur a cost.
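The replication choices walked through on screen can be summarized as a small request object. This is a sketch only; the field names below are illustrative, not the actual BlueXP or ONTAP SnapMirror API schema.

```python
"""Hypothetical sketch of the SnapMirror replication options chosen in the
demo: source volumes, destination, schedule, optional throttle and backup."""


def build_replication_request(volumes, destination, schedule="daily",
                              throttle_mbps=None, also_backup=False):
    # One request object per SnapMirror relationship being defined.
    request = {
        "sourceVolumes": list(volumes),            # Bob's volumes to move
        "destinationWorkingEnvironment": destination,
        "schedule": schedule,                      # one-time vs. periodic sync
        "backupToObjectStore": also_backup,        # optional immutable copy
    }
    if throttle_mbps is not None:
        # Throttling caps transfer bandwidth so the move doesn't swamp links.
        request["maxTransferRateMbps"] = throttle_mbps
    return request
```

A one-time migration would pass `schedule="once"`; a DR relationship would keep a periodic schedule and likely set `also_backup=True`.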
What that means is that BlueXP is set up so it can automatically provision virtual machines, disks, network resources, and routing for you to have storage in the cloud using Cloud Volumes ONTAP. What's nice is that as a NetApp admin, I don't have to go into the hyperscaler console to set that up directly. So we set this up, and a replication will show up. This is a quick way for me to address storage requirements on demand that come up very quickly. If I didn't already have that storage in place, I could add to my working environment, so let's provision a Cloud Volumes ONTAP instance from scratch. We'll do Cloud Volumes ONTAP HA. You'll notice I can also discover my existing environment: if I already had one deployed but it just wasn't in the BlueXP workspace I'm in, I could discover it. But I'm going to add a net-new Cloud Volumes ONTAP instance. I'll call this cvo-azure-1, put in a password, and then I can toggle a couple of options on and off. Do I want to enable Data Sense compliance and reporting on my environment? I can review what that means. Do I want to enable backup technology in the cloud? Now, our backup technology based on SnapMirror replication is very efficient, but that doesn't mean you have to use it. We still support our long-term partnerships with companies like Veeam, Rubrik, Commvault, your backup partner of choice, but sometimes when you're going into the cloud it's simpler to have one tool set to manage it, so customers have asked us to also have a solution. Okay. I have the option to choose which region to deploy in, and back on bluexp.netapp.com you can see the regional maps of what we support. Then my VNets and subnets, and whether I want a single availability zone or multiple availability zones. Basically, with HA across multiple availability zones, I can have redundancy.
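The wizard choices just described (name, networking, single versus multi-AZ, and the Data Sense and cloud backup toggles) can be captured as a configuration object. This is an illustrative sketch only; the field names are assumptions, not the real BlueXP deployment schema.

```python
"""Hypothetical sketch of the Cloud Volumes ONTAP HA options toggled in the
demo. Field names here are illustrative, not a documented BlueXP payload."""


def cvo_ha_config(name, region, vnet, subnet, multi_az=True,
                  enable_data_sense=False, enable_cloud_backup=False):
    # Mirrors the wizard: name, region, networking, then optional features.
    return {
        "name": name,
        "region": region,
        "network": {"vnet": vnet, "subnet": subnet},
        # Multi-AZ HA rides out zone maintenance like an on-prem HA failover.
        "availability": "multi-az" if multi_az else "single-az",
        "addons": {
            "dataSense": enable_data_sense,
            "cloudBackup": enable_cloud_backup,
        },
    }
```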
If there's any maintenance done in an availability zone, we fail over just like an HA failover and giveback on a FAS system on premises. Okay: security groups, resource groups, networks. This will go fast because it's predefined, but when you're setting up that connector and working with your cloud team, this is an area you do want to vet and make sure everybody agrees you're following your best practices. I could create one from scratch or use an existing one. What version do I want to run here? And this is interesting, because with Cloud Volumes ONTAP you now have the option to run Professional, which includes our backup tool as a bundled license. You can do Essentials, which is very similar to your on-premises FAS system. You can do Essentials secondary, which is a DR copy that costs a little less and allows you to have failover and giveback between multiple CVO instances. Basically, I could have a primary region, and in the same region or a secondary region I could have an Essentials secondary instance receiving SnapMirror replications. If my primary goes down, the secondary takes over; I don't have to change any licensing, and I have that copy, but I don't have to pay full price from a NetApp licensing perspective for that secondary environment. Optimized is unique to Google and Azure, and it's a very low-cost version of Cloud Volumes ONTAP intended for cold storage, so it has a cost associated with IOPS in and out. You don't want to use it for things like databases or high-frequency-access data, but if you're looking at something like an archive tier, Optimized is a very cost-effective option. And you can even use Freemium if you want to just get started: you can spin up 500 GB or less for free; you get a certain amount you can spin up and test. Some people use this to test updates and patches. The other option here we didn't touch on is Edge.
Edge is where I can have replication out to remote locations. And per-node is a legacy licensing model that's no longer available for new purchase. So I'm going to choose Professional. I can choose from preconfigured designs which VM type this will provision, or I can select the specific virtual machine I want: I can go up to an E80, DS15s, DS13s, a variety of options. This isn't the full list of all the VM types supported (in this case in Azure), but it's the ones that have been tested and validated. And you do want those deployed and managed through BlueXP; you don't want to manage them outside of it, directly from, say, the Azure console. One great thing about Cloud Volumes ONTAP is that I have the ability to change the VM type after it's been deployed. So if I realize I sized it too large or too small, I can make adjustments, and if it's an HA environment, I can do a rolling upgrade of both nodes without taking an availability outage on my storage, because one node will take over the pathing for the other. Okay, let's continue. I can also choose my disk size for the aggregate. The disk size is important because once you create the first disk in an aggregate, every subsequent disk must be the same size for that aggregate, and you have up to 12 disks per aggregate in Azure. With Google and AWS it's six disks, but they can go a little larger. The number of disks you apply to an aggregate can affect your performance. Think about it: if I sized an E80, which is a very large VM in Azure that basically takes up an entire host, but I put one 1 TB disk behind it, my performance is going to be limited by that one disk; that's the SLA bottleneck. So you want to size your VM and keep your storage pretty close to that target for the SLA, and that way you're in balance.
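The aggregate sizing rules just described can be sketched as a tiny helper: all disks in an aggregate share the first disk's size, and the disk count is capped per cloud. The caps follow the talk (12 for Azure, 6 for Google and AWS); verify against current documentation before relying on them.

```python
"""Toy sketch of the per-cloud aggregate sizing rules described above."""

MAX_DISKS = {"azure": 12, "google": 6, "aws": 6}  # caps as stated in the talk


def aggregate_capacity_tb(cloud, disk_size_tb, disk_count):
    # Enforce the per-cloud disk-count cap for one aggregate.
    cap = MAX_DISKS[cloud.lower()]
    if disk_count > cap:
        raise ValueError(f"{cloud}: at most {cap} disks per aggregate")
    # Every disk must match the first disk's size, so capacity is a product.
    return disk_size_tb * disk_count
```

This also shows why the single-disk trap hurts: a huge VM over `aggregate_capacity_tb("azure", 1, 1)` is bottlenecked on one disk's SLA, regardless of the VM size.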
You may go over and have more storage, and then the bottleneck would be the VM. That's not the worst situation; you could change the VM if you need more performance, but it's something to consider when you're designing your environment. A lot of times in POCs or testing, I see people put just a single disk out there because they want to minimize cost, and then they're very frustrated with performance and think the system itself isn't going to be able to meet their needs. Trust me, this can handle very large workloads with a lot of IOPS, as long as you size the infrastructure layer appropriately. Some other options: I can adjust write speeds. Basically, this determines whether we acknowledge writes the way the NVRAM settings would on premises, or whether we let it go faster. This is an area where I'd recommend talking to professional services or your NetApp team during design about what's appropriate. One key thing I do want to mention is that we support WORM. If you have storage that requires WORM protection, you can configure the system for it. Now I have to provision at least a single volume and give it some capacity so Bob can start to move his data if he needs a new replication. And then I have my iSCSI, my SMB/CIFS, and my NFS with its different versions. I'll do NFS for this case. You'll notice this doesn't include Fibre Channel; it's not a protocol supported in the cloud. But there is one protocol not listed here that you can configure for a CVO instance from the command line, and that's the S3 protocol, so you could have that for some of your storage. Okay, and some options here about disks and storage efficiency. That's a key thing I want to mention: the reason people use Cloud Volumes ONTAP in the cloud is not just the advanced functionality NetApp has with replication and all the visibility we have from our management here.
The storage efficiency is really where you get the quick ROI and TCO. We're taking a VM in the cloud and disks allocated in the cloud, and applying NetApp storage efficiency to that environment. Your deduplication, compression, compaction, all your advanced functionality, thin provisioning, FlexClone, SnapMirror, and snapshots will save you a lot of storage versus something that isn't taking advantage of all those possible storage efficiencies. Some of the native file system storage options in the hyperscalers will not expose the full range of storage efficiencies you can take advantage of. The other thing I'll mention: when you allocate a 1 TB disk in Azure, I don't have to calculate RAID overhead or spare drives. That concept you have to deal with on hardware isn't something you deal with here. Basically, I said I want a terabyte of storage from Azure, and Azure is going to provide me a terabyte of storage. They're taking care of the RAID level on the back end, and they're taking care of the overhead of having spare drives for availability. That all meets the SLA, but you don't have to pay for that overhead, or design it into your own system, when you're sizing CVO. Okay, go ahead and hit go; I could review the details there if I wanted. We'll go back to our canvas environment, and the new instance will start to show up. While we're waiting for that to provision, I can go to My Estate, which is another area in the canvas I wanted to show, and I can detect on-premises systems; we're adding more and more that you can detect and include in the system automatically. You'll also see some of the functional features you can enable in the system here that might be of advantage to you. Okay, while that finishes up, I'm going to go back into my presentation and move to the next section. So, we just provisioned net-new storage in the cloud, and we also used it to replicate on-premises storage to the cloud.
We could also replicate between clouds. But some of the other things BlueXP provides that will give you a comfort level, if you're a NetApp admin, are our common tool System Manager, the digital health advisor (which is Active IQ), and the ability to manage your own support tickets from within the BlueXP environment. These are very important capabilities that can really add value to what you're looking at doing. I just wanted to bounce out to show you that we're going into those sections next. So I'm going to go into my second demo. Okay. Let's go into Digital Advisor. This calls up the Active IQ interface we've been using, and for anyone who's used Active IQ, it has all the same capabilities. Now, how does this know who I am or what to show me? This is where registering and linking your NetApp support account to your BlueXP account, or basically having the same account for both, allows me to see this information. I can quickly use this to review performance, patching, and updates we may need to do. Capacity efficiency is one area I like to highlight, because you can look at your storage efficiency for an on-premises system and, like I just said about CVO, if you move from an on-premises FAS to CVO you should have the same storage efficiencies. If you're looking at moving this to some third-party storage that doesn't have all the efficiencies, you'll have to factor in what you could lose: instead of reducing the amount of storage needed 11.8 to 1, I'd have to inflate it in reverse. That can be a cost comparison you want to make. Okay, let's go back. I've got my working environment, and let me go into my on-premises system here. You can see the relationship we talked about. I'm going to open up the on-premises system we used in the first part of the demo.
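The 11.8-to-1 example above is just arithmetic, but it's worth making explicit, because the re-inflation direction is what catches people out when they price third-party storage. A minimal sketch:

```python
"""Arithmetic behind the efficiency-ratio comparison described above."""


def physical_needed_tb(logical_tb, efficiency_ratio):
    # With an 11.8:1 ratio, logical data shrinks by that factor on disk.
    return logical_tb / efficiency_ratio


def inflated_tb(physical_tb, efficiency_ratio):
    # Going the other way: leaving efficient storage re-inflates the data,
    # so 10 TB physical at 11.8:1 becomes about 118 TB to buy elsewhere.
    return physical_tb * efficiency_ratio
```

So a system holding 10 TB physical at 11.8:1 would need roughly 118 TB of raw capacity on storage with no efficiencies, which is the cost factor the talk suggests comparing.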
But instead of looking at this summary view, I'm going to switch to the advanced view. This isn't limited to on-premises systems, but I want to call it out because this is where someone who has no interest in using cloud resources can get value from day one with BlueXP. Basically, I have System Manager here. I can do all the operations, provision volumes, change anything I could from System Manager directly, but it's integrated into the tool set. This might just save you a tab on your browser. What's nice is that on the back end we're integrating this to provide more and more reporting data, so you can get net-new views you don't have anywhere else. Let me go into one of those net-new views, which is the sustainability report, where I can look at my carbon footprint. I'm not sure if energy efficiency is something you have to report on, or something your organization is trying to roll up in an annual report, but this can be very useful. I've got some companies I support with operations in Europe that are using this for their annual reports to show storage efficiency and power utilization, and it will identify options where you can enable functionality to use that hardware more efficiently. It gives you a rating, broken out by specific systems, as to whether they're being utilized efficiently or whether you have operational chances to improve. So, I've shown System Manager, Active IQ, and the sustainability dashboards. So far, everything I've shown with those reports is included at no additional cost. Cloud Volumes ONTAP would be an additional cost if you were provisioning it, but discovering your on-premises systems and managing them with the tools I just showed would not require licensing. If I did have to go into licensing, I could go to my digital wallet.
Once again, this is also something that doesn't cost any extra. I can see my systems, what licenses I have, what capacity, and what subscriptions I have for various systems. I can even view my on-premises systems. If you're enrolled in our Keystone procurement option, where you can get hardware and cloud resources in a combined bucket, that's also visible and manageable from here. Okay, let's go back into the presentation and move to another section. This will cover backup and recovery and classification, which are add-on tools that cost additional money as licensed, as well as tiering. Tiering is our FabricPool technology. It lets you analyze a volume and set policies that say: after a certain amount of time, I want cold blocks, at the block level, to move to lower-cost storage, typically object storage. On premises, that could be StorageGRID or ONTAP S3. If you're tiering to the cloud, then there's a license for how much cloud tiering you want to enable, and that could go to Blob storage or S3 storage. If you're on Cloud Volumes ONTAP, you also have the cloud tiering functionality, but that's provided at no additional cost. As an example: if I have Cloud Volumes ONTAP, and we talked about the Optimized version in Azure (low cost, for cold data), that's a great option for tiering, so I can have a minimal amount of storage connected in those aggregates to my VM and have the vast majority of my storage go to a lower-cost Blob or S3 tier. It's a great way to archive data without going through something like a tape backup solution. I have a customer that likes it because, for their end users, the data, directories, and files look like they've always been there; they don't have to go through a help desk request to get something restored.
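The block-cooling policy described above boils down to a simple decision per block: anything untouched longer than the cooling period moves to the object tier. A toy sketch, with the 31-day threshold as an illustrative default rather than a documented value:

```python
"""Toy sketch of the cold-block tiering decision described above."""


def tier_for_block(days_since_last_access, cooling_days=31):
    # Cold blocks go to low-cost object storage (Blob, S3, StorageGRID);
    # recently touched blocks stay on the performance tier.
    return "object" if days_since_last_access > cooling_days else "performance"
```

Because the decision is per block, a volume can stay visible in place while most of its capacity quietly lives on the cheap tier, which is exactly the end-user transparency the talk highlights.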
And that means the IT team doesn't have to be involved if a user wants to open something that's in cold storage; you just educate users that retrieval will take longer while those blocks are restored from the cold tier. Operationally, it makes it much easier to manage any data you were going to archive, or even a volume that's still in active use. A lot of data, roughly 80 percent of it, isn't accessed again a few weeks after it's created, so tiering is a good option to reduce your overall spend.

For our third and final demo, I'm going to go into backup and recovery. We'll touch on tiering and classification, and I'll give you an overview of a few more features. Back on my main screen, if I go down to backup and recovery, this is the optional tool that you can license with NetApp. It's included with the Professional license if you license Cloud Volumes ONTAP that way, or backup and recovery can be used for on-premises systems. Here's the quick summary of how many volumes, snapshots, and replications I have set up. If I want to analyze what's going on, I can quickly see whether everything's green and healthy and whether backup jobs have completed properly. I can go into a restore, look at what data is already backed up, and restore it. That can be the full volume or individual files or folders, and it can go back to the original location it was backed up from or out to a net-new system. I can also set up backups at the application layer, so this is application-aware: I can back up SAP HANA, an Oracle database, or a variety of other systems. For virtual machines, I can back up my datastores and have this synchronized with VMware as an integrated backup solution. Same thing with Kubernetes.
If you have Kubernetes containers, NetApp has a free product called Trident that a lot of people use to manage persistent storage for containers, and those volumes can be rolled into the backup and recovery tool. We can monitor the status of various jobs and run reports. But if I go back to the first screen, we can walk through the setup for a net-new backup. Let's say I select a couple of volumes and manage my protection. I can do local snapshots, replication, or backup; label it; and save my updates, and now my backup snapshot is set. We do support 3-2-1 backup best practices: three copies of your data, in at least two different locations, one of them immutable, and that can be defined and set up here. You can also see a variety of working environments that I can switch between. When you're setting up your backup options, you have retention policies and settings for how long you want a backup kept before it rolls off.

Okay, next let's go into classification. This is a net-new technology, data classification, or Cloud Data Sense. On a volume-by-volume basis we can analyze the databases or files and tell you not only their size and location, but whether they're stale, whether anything you've classified as non-business data was detected, and where there are duplicates, so you can be more efficient in your environment. I can also look at policies you have set up, or that came out of the box, and verify: is there any data that might be sensitive or personal, and where is it located?
In other words, do I have all my payroll records on a public share that anybody on the internet can see, or are they properly stored in accounting? If I click in, I can look at the specific files that were identified as possibly containing sensitive data and take corrective action. What I mean by that is we have compliance and regulatory analysis tools that look at the data, and they're AI-driven and context-sensitive. So if we talk about going out for Chinese food at lunch, it doesn't flag that as personally identifiable information; it knows we're just hungry. But if "Chinese" refers to Bob in accounting, we're now talking about an ethnicity, and that could be flagged as sensitive data we don't necessarily want pushed somewhere. For all of these settings you can define your own search criteria and build your own policies from scratch, or take the policies that were defined out of the box and use them as templates.

The other nice thing classification can do is work across a heterogeneous environment. Our Cloud Insights tool, which we haven't talked about before, is a deep-dive reporting and analytics tool for an environment; it goes beyond what we do with Active IQ Unified Manager and Active IQ's data reporting, and it can monitor third parties. Same thing with Data Sense: if you point it at SharePoint, OneDrive, or third-party storage and give it rights, classification can work across a heterogeneous environment, which makes it very valuable if you're trying to have one reporting tool across multiple storage options. There's a variety of charts here where you can see the roll-up of the data and drill down on any of it. Classification is a very popular tool.
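To make the idea of policy-driven scanning concrete, here's a deliberately tiny sketch. Cloud Data Sense's actual engine is AI-driven and context-sensitive, nothing like this; the patterns, category names, and sample text below are all invented for illustration of the general "policy matches file content" concept.

```python
import re

# Toy, regex-based sensitive-data scan (illustrative only; the real product
# uses context-aware AI analysis, not bare pattern matching).
POLICIES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict:
    """Return {policy_name: [matches]} for every policy that fires on text."""
    hits = {}
    for name, pattern in POLICIES.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

sample = "Payroll note: Bob's SSN is 123-45-6789, contact bob@example.com."
print(scan(sample))
```

A real classifier also needs the context sensitivity described above, which is exactly what a pure regex approach cannot give you; that's the gap the AI engine fills.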
Next we'll go to tiering, and this is just a sample. There's some material we won't be able to cover in this 50-minute session that other breakout sessions touch on, or you can always engage your NetApp sales team, or your partner sales team can bring us in, and we can do a one-on-one overview of BlueXP where we explore exactly what you're interested in. From the tiering screen we can look at an environment and see what's being tiered, and I can set up my tiering. We've covered this pretty extensively, but tiering is a great cost-savings option in your environment. Once you set a policy, say data that ages anywhere from two days up to six months, it automatically applies to your environment. If someone reads a file and some of its data blocks are read, those blocks come back, and their cooling timer resets before they go back to the cold tier, so you stay in control. We do recommend you wait the two days. You do have the option to tier data as soon as it lands, but if you give it the two days, the secret is that it goes through the storage efficiencies, deduplication, compression, and compaction, before we tier it. That way you're getting the most efficient use of the tiered storage, because tiering honors those storage efficiencies, but they have to be applied at the primary source first. Tiering is a very cost-effective feature: you can go here to review your volumes, add volumes to be tiered, and control the process.

Beyond that, there are other things we didn't touch on. You have a variety of remediation tools and templates to define workflows you want to automate. We can analyze your risk for operational resiliency, things like: do I have snapshots of my environment? Do I have backups of my environment?
For disaster recovery, we can restore things in an integrated workflow, first in beta with VMware, and disaster recovery workloads and workflows will be defined there. Replication is my SnapMirror. Ransomware protection is being reworked and will be coming soon, but it's interesting because it pulls together everything from things as simple as my Active IQ, do I need to patch something, is ONTAP out of date, do I have snapshots, do I have backups of my data, what's my risk if something is infected with ransomware, all the way to Cloud Secure, a portion of Cloud Insights where an AI engine analyzes the typical behavior of the users in your environment. It understands that if Bob in accounting has never before encrypted all the data on the servers he has access to, that's unusual behavior, and maybe we want to stop Bob until someone verifies why he's doing that. If Bob is always doing encryption, it understands that's a normal operation for that account, and you're okay.

It does take a while to learn. What a lot of people do, instead of automating the blocking of the account, is set up alerts. So you let it learn for a little bit, then enable alerting, and you'll start to see some false positives or alerts that you decide to filter on. Eventually you can have it tuned so that when something unusual happens, it not only alerts the admin but can go in and lock the account so it cannot continue whatever it's doing. It can always be unlocked after someone has verified that's accepted behavior. If it isn't a ransomware attack, it might interrupt a workflow, but that's better than having all your data encrypted and held for ransom. We can also automate a snapshot of the affected environments, the volumes touched by whatever activity we detected.
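The learn-a-baseline-then-alert idea can be sketched very roughly in a few lines. Cloud Secure's real model is far more sophisticated than this; the threshold math, the `on_anomaly` hook, and the sample numbers are all invented for illustration.

```python
from collections import defaultdict
from statistics import mean, pstdev

class BehaviorMonitor:
    """Toy baseline-then-alert detector: learn each user's normal hourly
    operation count, then flag counts far above that baseline."""

    def __init__(self, min_samples: int = 5, sigma: float = 3.0):
        self.history = defaultdict(list)   # user -> past hourly op counts
        self.min_samples = min_samples     # learn before alerting
        self.sigma = sigma                 # how far past normal triggers

    def observe(self, user: str, ops_this_hour: int) -> bool:
        """Record one sample; return True if it deviates from the baseline."""
        past = self.history[user]
        anomalous = False
        if len(past) >= self.min_samples:
            mu, sd = mean(past), pstdev(past)
            anomalous = ops_this_hour > mu + self.sigma * max(sd, 1.0)
        past.append(ops_this_hour)
        return anomalous

def on_anomaly(user: str):
    # Placeholder for the real responses described above: alert the admin,
    # lock the account, and snapshot the affected volumes for a restore point.
    print(f"ALERT: unusual encryption activity from {user}")

monitor = BehaviorMonitor()
for hour_count in [2, 1, 0, 3, 2]:   # Bob's normal baseline week
    monitor.observe("bob", hour_count)
if monitor.observe("bob", 500):       # sudden mass-encryption burst
    on_anomaly("bob")
```

Note the design point the transcript makes: you run this in alert-only mode while the baseline is still thin, and only later wire the anomaly hook to enforcement like locking the account.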
That way you have a restore point as close as possible to the event. Snapshots are immutable: you can't go in and encrypt a snapshot after the fact or change the data in it, and you can incorporate things like WORM technology as well. We definitely want to protect you, and Cloud Secure is a very valuable tool within the Cloud Insights portfolio. The reason I bring it up under ransomware is that we bring all of this into net-new dashboard and mapping views that can help you quickly understand where you may have risk.

Some other things we can do: economic efficiency analyzes where you're headed and, based on your storage growth, when you may need budget activity such as procuring new FAS systems or expanding your cloud environment. If you do automation with Terraform or Ansible playbooks, we have a growing catalog listed here that's supported, and we also have access to GitHub repositories. Down here we have some of our cloud apps; they're a good way to analyze an environment that isn't NetApp-specific, it can be heterogeneous like Cloud Insights and Data Sense, and we can analyze a cloud environment and detect underutilized resources where you might be able to be more cost-efficient. It's a great way to get a handle on your cloud environment, as many people are surprised by how large those bills can grow, very rapidly.

Okay, some key takeaways, and I do appreciate you joining me for this journey through BlueXP. Wherever your data is, you want to trust whoever is storing it. I'm not disparaging any third party out there, or even the cloud hyperscalers, but NetApp has had a long relationship with our customers and a good track record of securing your data, not only making sure we don't lose any data, but going forward with things like Cloud Secure and ransomware protection. Those tools are very important to making sure that data is protected.
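On the automation catalog mentioned a moment ago: as a rough illustration of what point-and-click-to-scale looks like, here's a sketch of building fifty volume-creation requests in a loop, in Python rather than Ansible or Terraform, so it stays self-contained. The `POST /api/storage/volumes` endpoint is from the ONTAP REST API, but treat the exact field names as assumptions to verify for your ONTAP version; the SVM and aggregate names are hypothetical.

```python
TIB = 1024 ** 4  # one tebibyte in bytes

def volume_body(name: str, svm: str, aggregate: str, size_bytes: int) -> dict:
    """Build a request body for POST /api/storage/volumes (field names are
    assumptions to verify against your ONTAP version's REST docs)."""
    return {
        "name": name,
        "svm": {"name": svm},
        "aggregates": [{"name": aggregate}],
        "size": size_bytes,
    }

# Fifty volumes instead of fifty trips through the UI:
bodies = [
    volume_body(f"app_vol_{i:02d}", "svm1", "aggr1", TIB)
    for i in range(50)
]
print(len(bodies), bodies[0]["name"])  # → 50 app_vol_00
```

Each body would then be POSTed with an authenticated HTTP client, or handed to the equivalent Ansible module or Terraform resource from the catalog.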
And that includes on premises and in the cloud, one view, all manageable from BlueXP. That breaks down the data silos you may have struggled with, where you have a different interface tool for each place you keep storage and getting a holistic view is the challenge. And then automation, with the REST APIs, Ansible playbooks, and Terraform, is a great way to take things from point-and-click to scale. Instead of solving Bob's problem by moving one SnapMirror relationship over lunch, maybe that's provisioning 50 systems while we go out to lunch and come back.

There are other related resources and sessions you can explore. Now that these are recorded, you can browse them at your leisure; I'd recommend searching out some of the ones listed, or just doing a general search in the catalog. There are a lot of great resources available. You can also track some information on Insight; it's already passed, but there's historical information that might be of interest. If anybody has questions, I'm Chris Fox, NetApp cloud technical solutions specialist, and my email is listed here: chris.fox@netapp.com. Send me an email; I'd be happy to help out, either directly or by routing it to someone who can answer your question. Thank you very much for your time. At this point we'll stop sharing and shut down the recording. I hope you have a great day. Goodbye.
NetApp® BlueXP™ delivers a unified management experience for storage and data services across on-premises and cloud environments. In this session you’ll see how to discover, deploy, and manage all your NetApp storage using BlueXP. You’ll see [...]