Hello everyone, and thank you for joining us here today. Welcome to our session, and thank you for taking time from your busy schedules to join us. I'm Sean Dowell, a market strategist for VMware and virtualization here at NetApp. Before we start the session, I'll give a brief introduction before passing along to our expert, and at the end I'll also help facilitate a Q&A. The focus for today's session will be maximizing your VMware investments with intelligent data infrastructure from NetApp, which is a surprise to no one here, of course. The last year has presented us all with many challenges, but in addition to that, there have been many opportunities to take another look at our infrastructure and how we can optimize and increase the ROI of our investments. Given the new subscription model from VMware by Broadcom, it really is more important than ever to partner with companies that understand these changes and can help facilitate that transition efficiently. So let's go ahead and take a look at our agenda. Can you flip this forward one more slide? We're briefly going to touch on the long-term partnership we've had with VMware and why that is important to you. We're then going to jump in and discuss NetApp Data Infrastructure Insights and how it provides unified observability to help teams rightsize, identify waste, and identify the cause and effect of infrastructure issues. Then we'll quickly jump over to the NetApp All SAN Array and its value to VMware workloads. Lastly, we're going to jump into the industry-leading storage operating system you all know as NetApp ONTAP, and ONTAP tools for VMware. Should you have any questions throughout the webinar, please put them into the Q&A during the session, and we'll take time at the end to answer them. Any questions we're not able to get to, we'll make sure to answer offline after the session.
Again, thank you everyone for joining us today. And now let's pass this along to our expert presenter for the day. Take it away, Ken. Thanks, Sean, appreciate that. Hi folks, I'm Ken Jarris, part of our solution architect team here at NetApp. My role is part of a group of specialists that focus on certain subject matter; mine happens to be virtualization. I've been in this role for nine years at NetApp, and I've worked with our customers for the past 19 years. I was part of several account teams in my years here, so I've worked with large and small customers in addition to working with our reseller partners. I started my career over 30 years ago; the grey hair kind of indicates that. It used to be a lot darker when I started out on this whole journey. But I'm pleased to be here and help you understand how we can optimize that VMware environment and leverage our NetApp toolsets to do it. We're going to focus on Data Infrastructure Insights today, which is part of our toolset to that end. But before we get started with that conversation, let's take a look at our partnership with VMware. We have a 20-plus-year relationship as a technology partner. Our engineering teams meet on a frequent basis, and those conversations have led to joint roadmap items that VMware has added on their product side, and we've done the same thing within ONTAP. So we've made a lot of inroads around certain technologies and features that you now see in the VMware product line, and we are planning to continue that. One of the things that hasn't changed is that conversation; they value that technology partnership just as much as we do. That's led to over 20,000 joint clients between VMware and ONTAP using our technologies today.
But the shift to that subscription basis has really caused customers to need to maximize their investments, and not just around VMware licensing. With that subscription, it now means I need to make sure my infrastructure is efficient, and that's not just the storage side; it's also making sure my compute side is efficient as well. The days of running our ESX hosts at, you know, 20% CPU utilization are really gone. I can't have that idle CPU sitting around; I've got to look at higher-density workloads in my ESX environment, because I want to maximize the investment of that subscription I'm placing in that environment. At the same time, I need to eliminate the hardware that may not meet those requirements these days. That might mean an upgrade or a refresh. I might like to get as much runway out of my hardware as I can, running it as long as possible before I need to refresh it, but that shift in licensing is causing some customers to look at whether it's time to do the refresh a little sooner, so they can take full advantage and collapse and condense workloads. Also crucial is simplifying operations between the VMware environment, the admins that manage it on a day-to-day basis, and the IT teams, all by being able to gather data through APIs. And that leads into the NetApp intelligent data infrastructure, where our offerings provide the ability to run multiple workloads intelligently on the same platforms, rather than having silos of storage and data pools that are specific just to VMs, or specific to files and containers. We can bring all of that onto the same platform. So if you're a current NetApp customer, thank you; we appreciate your investment and your loyalty in leveraging us as a technology partner.
For those of you not using us today, we welcome a conversation around how we can help you in these areas; again, it's really about making things optimal and going forward from there. Now, when we look at our investment and our integration with VMware, it falls into three categories: automation and orchestration, monitoring and support, and integration and protection. Sorry, I had a little screen display change there. With those three categories, we're really looking at the capabilities around automation. VMware Cloud Foundation is really trying to encourage customers to adopt automation in the VMware environment. Some customers have gone further down that automation journey than others; some are maybe just starting and seeing what their new licensing capabilities with the subscription basis and VMware Cloud Foundation can do for them. The nice thing is, we've been doing that automation integration for quite some time: the ability to not have to learn multiple APIs, and to leverage our integration points to go back and forth very easily using our REST APIs, or in the cases where our toolsets bridge those APIs for you. Orchestration is usually about not just deploying VMs, but also, when we talk about data protection, data failover: getting those VMs failed over in those automated environments as well. Monitoring and support we're going to dive into more. And then with the data protection side of integration and protection, we're adding additional security features and also ensuring that it's very easy to get data where you need it, and get those VMs where you need to recover them, making sure they're protected at all times. Now let's talk a little bit about monitoring and support. We have capabilities starting with our primary tool integration.
We'll talk about it a little later, but that is ONTAP tools. This does datastore management, providing reports and dashboards around the environment, particularly for a specific vSphere environment. We now have the ability to work with VCF, so we can be part of an automation scheme to help automate datastore provisioning if you want to do that. We also get the information from the storage systems presented to you in vCenter, which means I don't have to go back and forth between multiple dashboards. We also integrate with Aria; we've had management packs for Aria for quite some time, with Aria Operations and Log Insight. VMware has recently started to deprecate or eliminate those particular management packs. Ours is not quite there yet; the ONTAP pack is still available, but we expect they are going to be removing it from their inventory. They haven't given us guidance yet on how we're going to integrate in the future, so stay tuned on that piece. In the meantime, we're going to focus a bit more on Data Infrastructure Insights, which is our ability to provide deep analytics and optimization tools no matter where you're running your environment: across the globe, country, state, or locale. That gives us a centralized reporting point, whereas our typical integration point is usually around your specific vSphere data center environment. So this gives us a higher vantage and a broader brush to work with. So let's talk about Data Infrastructure Insights. This is the ability to simplify your management, and you might think, coming from NetApp, that it's only going to be for NetApp technologies. That's not true; this product is actually available for heterogeneous infrastructure. So if you're not using us today, shame on you, but more importantly, we can still help you with those environments.
If you look at the number of connections we make, unless it's some really archaic technology (well, mainframes are still used today), you're going to find that we can probably connect to it. The nice thing is Data Infrastructure Insights can provide you a centralized infrastructure monitoring environment, again, no matter where it runs. This is an as-a-service offering that gives you the ability to see everything you want to connect to. So if you've got systems running around the world, we can connect all those sites, and you can see them from a centralized point of view. It helps with resource planning and optimization. We talked about that need for VMware optimization these days, which is getting very crucial; we'll get into more detail as we show you what we can do with the VMware environment specifically. But we're also trying to help you optimize your environment to see where you might have resources that are wasted, or disconnected and not being properly utilized. Troubleshooting is always the bane of anybody's working environment. We all have issues come up, we try to troubleshoot them the best we can, and sometimes we may fumble around at it. So we're going to talk about how we can help you improve the time to resolution by simplifying troubleshooting to find where the issue really does lie. And then there's infrastructure refresh: keeping track of what we need to refresh.
We have customers with very large environments using Data Infrastructure Insights today to track those assets, so they can ensure they are refreshed at the proper time and know exactly where each asset is at any given point in the refresh cycle. That way they don't have to rely on spreadsheets, or on someone else's documents being kept up to date; they can leverage the tool to keep all their assets inventoried and make sure their life cycles are kept current as well. And then there are anomalies. We also help customers determine: is this a blip, or is this something that's happened more than once? Where are these anomalies, and what types of anomalies can we help you focus in on? We'll talk about these in more detail coming up. So, rightsizing. Rightsizing has been crucial for the VMware environment these days, because we've got CPUs that are not being utilized properly on the hosts, and we also have overprovisioning in the VMs. Take a look at your environment today: how many VMs are powered off? Now, some of those might be templates, and we take account of those. But how many got powered off with an "oh, we'll delete that later"? And later never comes; you never come back around and actually delete those VMs. Or you leave them for a period of time, because what if someone wants to power one back on right after you've deleted it? Then you have to go back, restore it, and get it back in place, and that takes time as well. So they're kind of left around. We know we're decommissioning these VMs and we're going to leave them around for a little while, but we tend to forget about them over time.
So they kind of get moved off to the Island of Misfit VMs, to borrow a phrase from a holiday classic. The idea here is that now I have the ability to see exactly where my powered-on virtual guests are, and also see the trend, including how many have been removed from the environment. The other area is orphaned capacity: those VMs represent capacity on our storage systems that is basically wasted. How many datastores do we decommission? Or we're doing upgrades: we're going to go from vSphere 7 to vSphere 8, decommission the vSphere 7 environment, maybe go greenfield, move everything, and go forward from there. When I do those upgrades, what do I do with the old environment? Well, I disconnect the LUN, or I remove the NFS mount, but I still have those volumes sitting around. Do I delete them, or are they still sitting out there? We had one very large customer that was actually able to defer a storage expansion purchase, much to the detriment of their sales team and their reseller partner. They were able to defer the purchase because they found hundreds of terabytes of orphaned storage: capacity that was no longer associated with any host. That's one of the key things we can help with in Data Infrastructure Insights. Now, for VM rightsizing, we talk about being able to condense or increase the number of VMs running on an ESX host, but at times we have reservations around the resources those VMs are consuming, because we know we can't really power on everything if it's overprovisioned.
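The orphaned-capacity check described above comes down to a set difference between what's provisioned and what's still mapped. Here's a minimal sketch of that idea; the data, names, and logic are purely illustrative assumptions, not the actual Data Infrastructure Insights implementation:

```python
# Hypothetical sketch: find provisioned volumes no longer mapped to any host.
# A real collector would pull this inventory from the storage and vSphere APIs.

def find_orphaned_volumes(volumes, host_mappings):
    """Return {volume name: size in TiB} for volumes with no host mapping."""
    mapped = {m["volume"] for m in host_mappings}
    return {v["name"]: v["size_tib"] for v in volumes if v["name"] not in mapped}

volumes = [
    {"name": "ds_vsphere7_01", "size_tib": 120.0},  # left over from an upgrade
    {"name": "ds_vsphere8_01", "size_tib": 80.0},
]
host_mappings = [{"host": "esx-01", "volume": "ds_vsphere8_01"}]

orphans = find_orphaned_volumes(volumes, host_mappings)
print(orphans)  # {'ds_vsphere7_01': 120.0}
```

In practice the hard part is the inventory collection across arrays and vCenters; once that data is centralized, the reclaimable-capacity report is exactly this kind of join.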
So when we take a look at memory and CPUs, and now even more modern workloads leveraging virtual GPUs, we have to make sure we're not overprovisioning those; otherwise we're just going to run into a mess from a performance perspective. Ensuring the rightsizing efforts don't impact performance or the availability of the hosts to run those VMs is just as crucial. So we can help you with guidance around that, and it can be automated: you set the thresholds, the tool tells you where you have waste, and you go from there. Also, analyze workloads in full business context. These dashboards are available out of the box; without any customization, you can start leveraging this, get the data collected immediately, and go from there. But it often helps to understand the business context, so we can add that capability into the reporting: what department does this belong to, and what application does this particular VM or group of VMs belong to? That way I can make context-aware decisions, as opposed to an arbitrary decision that says, yeah, this is waste, get rid of it. More importantly, I can now go through and say, hey, this department has these things, can we tweak them? Or this application has excess capacity, excess provisioning of compute resources; how can I put that in context? And that brings up identifying waste. There's the old adage that one man's junk is another man's treasure, and it's really about what constitutes waste and what doesn't. Again, available out of the box with zero additional configuration, you can start identifying where you've got powered-off VMs and where you have virtual CPUs going unused. In this example here, we have 680 average virtual CPUs unused in this environment. Well, that may or may not be a good thing.
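The waste roll-up described here (powered-off VMs that aren't templates, plus allocated-but-unused vCPUs) can be sketched in a few lines. The VM records and field names below are hypothetical assumptions for illustration; this is not the product's actual data model:

```python
# Illustrative sketch of a VM waste summary like the one on the dashboard.
# Field names ("powered_on", "vcpus_used", etc.) are assumed for the example.

def summarize_waste(vms):
    """Count decommission candidates and idle vCPU allocation."""
    powered_off = [v for v in vms if not v["powered_on"] and not v["is_template"]]
    unused_vcpus = sum(v["vcpus"] - v["vcpus_used"] for v in vms if v["powered_on"])
    return {"powered_off": len(powered_off), "unused_vcpus": unused_vcpus}

vms = [
    {"name": "app-01", "powered_on": True, "is_template": False, "vcpus": 8, "vcpus_used": 2},
    {"name": "old-db", "powered_on": False, "is_template": False, "vcpus": 4, "vcpus_used": 0},
    {"name": "tmpl-win", "powered_on": False, "is_template": True, "vcpus": 2, "vcpus_used": 0},
]
print(summarize_waste(vms))  # {'powered_off': 1, 'unused_vcpus': 6}
```

Adding the business-context tags the presenter mentions (department, application) would just mean grouping this same summary by an extra key before deciding what is truly waste.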
So again, in identifying what's waste, if I add more business context around departments, applications, and the data centers where these things reside, I can start to make even better decisions. We can help you identify waste immediately, out of the box, and we can also help you hone that data and make a more intelligent decision as to what's really waste and what isn't in your environment. Next, change. Everything's about change. We as humans are not fond of change, and yet we've all chosen to work in an industry that's all about change. In our data centers, we're continually making changes: we make tweaks, we make adjustments, we have new applications coming online. But sometimes those changes cause problems, and we have to identify the cause and effect and deal with those infrastructure issues very quickly. Troubleshooting can take time; we spend our cycles trying to figure out where the problem lies. We like to fix things, and we try to figure it out quickly, but oftentimes it takes us down paths where we waste time trying to get to the true resolution. Whether you're working with support organizations, NetApp support or VMware support, or perhaps your server vendor, usually the first question is: what changed in the environment? And the answer is usually "nothing." I say that a bit facetiously: everybody says nothing changed. And that's the key thing; nothing changed in the environment, until someone finally owns up and says, oh, wait a minute, I made this little tweak. There's an old TV show whose tagline was, "Did I do that?"
And all of a sudden, you realize that a tweak made by one person, one that maybe didn't go through change control, or did but wasn't considered for its impact, is what we've been chasing our tails over for hours, days, or even weeks, trying to get to resolution and get things back to normal. So Data Infrastructure Insights provides change analysis: it shows you what changed, where the change occurred, what happened, and what got triggered, so I can reduce the mean time to resolution and improve cross-silo collaboration. Oftentimes, working with VMware customers, and when I was a VMware administrator myself handling these environments, the first point of blame was storage. It's always storage's fault until you prove otherwise. Well, now we have the ability to show verification and validation of planned changes, and more importantly, we can show you a visualization of the stack from beginning to end, from the VM to the storage system, and show you where the problem really does lie. So I can get to resolution in a much more expedited manner, which means I can go back to the other business-value activities I'm working on, and more importantly, everybody else is happy that operations are back to normal. This is a crucial capability that our customers are leveraging today, and if you're not aware of how to leverage it, we can certainly help, either as part of a trial or by getting more detailed conversations going. Let me take a quick look here: do we have anything in the chat? Not yet. Okay. So keep in mind, if you have questions about the capabilities we're covering today, feel free to put them into the chat or the Q&A section on your screen.
Let me keep moving forward. The other area is anomalies. Anomalies occur; workloads may have seasonal spikes. You're probably familiar with the retail use case, where around the holiday season no changes go in but workloads spike. We have one coming up here in the US as well with the upcoming Super Bowl; that's probably trademarked somewhere and the NFL may come after us, but we all know the big game everybody's playing for in a couple of weeks. That leads to a lot of activity and a lot of advertising dollars going into that particular event, which means spikes on websites. You may also be running a promotion that's going to drive a lot of traffic to your website, or so you hope. You see those spikes, and you might have seasonal activity around those areas that can't be effectively monitored by static thresholds. So how do you implement monitoring at scale? This is where we can help with anomaly detection that shows whether something is a weekly trend or a true anomaly, where I can show that these anomalies occurred when we've done these activities over time, and give you the ability to detect performance anomalies early for proactive resolution. What that really means is that I can see my historical trend and say, hey, we've run these events in the past; how can I track those events and have self-learning algorithms that see them occur and adapt to your workload trends, so you're ready for them ahead of time, without someone having to say, oh yeah, by the way, we've done this in the past and we saw a big spike?
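The contrast between a static threshold and an adaptive baseline can be shown with a toy rolling z-score detector. This is a minimal sketch of the general technique, not Data Infrastructure Insights' actual algorithm, and the traffic numbers are invented:

```python
# Minimal adaptive anomaly detection: flag points that deviate strongly
# from the trailing window's baseline instead of a fixed static threshold.
from statistics import mean, pstdev

def detect_anomalies(series, window=7, z_limit=3.0):
    """Return indices whose z-score vs. the trailing window exceeds z_limit."""
    anomalies = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = mean(base), pstdev(base)
        if sigma > 0 and abs(series[i] - mu) / sigma > z_limit:
            anomalies.append(i)
    return anomalies

# Steady daily traffic, then a promotion-driven spike on day 10.
iops = [100, 102, 98, 101, 99, 103, 100, 101, 99, 100, 400]
print(detect_anomalies(iops))  # [10]
```

A static threshold set high enough to ignore normal variation would either miss smaller anomalies or fire constantly as the baseline drifts; the rolling baseline adapts, which is the point the presenter is making about seasonal workloads.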
This can help give that type of data to the marketing folks or the people planning events, and help the IT folks ensure that, hey, when we ran these events in the past, this is what happened to us, and now we can make sure we're ready for them going forward. And it gives you the ability to measure any device and any metric: throughput, utilization, IOs, error counts, and more. So the idea of sitting in the dark, not sure what's going to happen when we unleash something onto the world, can go by the wayside. A lot of our customers using Data Infrastructure Insights today are doing it for just that purpose: they know they have planned events that will drive more traffic to their websites and application workloads, and they need to know what's going on. Some of those might be ticketing for certain events; others might be seasonal promotions; or they just know that every holiday season they're going to get traffic to their website. We want to make sure we can help you in those areas, and this is one of the big things we can certainly do. Next is an example of what I talked about: how we can help modernize, assess what you're running today, and help you improve your VMware licensing environment to make it more optimal, that is, getting the CPUs and servers that don't meet the requirements out of the environment.
VMware now requires you to license for those subscriptions, and this is an actual customer example. Before going into discussions with VMware about the new subscription licensing, they got the bill for their subscription, and with no change it was a 12-times increase compared to what they were paying for vSphere Enterprise. What we helped them with was assessing their environment to see where they could optimize the hosts. In this environment, they had eight hosts whose CPUs had only eight cores each. For the VMware subscription license, you don't want to put a license on a CPU that has fewer than 16 cores; that's the minimum licensing requirement under VMware's new model. So for those CPUs running only eight cores, it was a waste of effort and money to put a new license on that environment to run VCF. From that perspective, we looked and said, well, you have 80 VMs on this environment across these eight hosts, so where can we move them? Again, looking at the number of virtual CPUs and the amount of memory actually being utilized, we could also help them identify waste where VMs were overprovisioned and never utilized. That often occurs because sometimes we want to be overly proactive: we'll get application requirements that say, I need four virtual CPUs and two terabytes of memory, and we'll give them twice that number of CPUs, or maybe four times the memory, because really, we don't want to hear from them again. We'd rather they be happy and know they're running well.
And if they get a spike, we don't have to worry that they ran over their current allocations, or that they may grow without us knowing about it. They're going to add functionality to that application, and we don't want to have to go back and redo this. So overprovisioning has been the normal way we've operated VMware environments over the years. Those days may not necessarily be how we have to do things anymore: we have to make sure we're being optimal on the virtual hardware side as well, not just the physical hardware side. So we were able to take a look at the customer's workloads and say, look, if you go with the new licensing and you optimize these hosts, we can save you this much money and not have to license as much VCF. But we also had some of those servers that were under-cored; they didn't have the minimum core count required to move to the new platform and new licensing. So we said, you have two choices. You can optimize the hosts and get costs down to this point. Or, if you do a server refresh and take care of some servers that are not yet long in the tooth by your definition of how long you keep server assets around, we can turn this into an opportunity to bring in a new server farm that supports denser workloads. I can now run more VMs there, because the CPUs are more capable, and if you do that now, then in years two and three you can flatline that and make the subscription costs much easier to manage. Again, this is a real customer: they were running 39 nodes, in this case 784 cores, and they were going to be charged for 1,344 cores under the new subscription. So we helped modernize this from there.
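The gap between physical cores and billed cores comes from the per-CPU core minimum the presenter describes. The arithmetic can be sketched as below; the host configurations are hypothetical, and the 16-core figure reflects the minimum as stated in this talk (check VMware's current terms for the actual rules):

```python
# Sketch of the per-CPU (per-socket) core-minimum licensing math.
# Host data is invented for illustration.
CORE_MINIMUM = 16  # minimum licensed cores per CPU, as described in the talk

def billable_cores(hosts):
    """hosts: list of (sockets, cores_per_socket). Returns (physical, billed)."""
    physical = sum(s * c for s, c in hosts)
    billed = sum(s * max(c, CORE_MINIMUM) for s, c in hosts)
    return physical, billed

# Eight dual-socket hosts with only 8 cores per socket: each socket is
# billed at 16 cores, so you pay for double the cores you actually own.
hosts = [(2, 8)] * 8
print(billable_cores(hosts))  # (128, 256)
```

This is why retiring or consolidating under-cored hosts before the subscription renewal can cut the bill: fewer sockets padded up to the minimum means fewer billed cores for the same workload.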
Now, this is all part of what we leveraged from that end. The estimated savings was $970,000 over three years on VCF, or $757,000 if they spent the money on the new hosts. That also means power and cooling could be improved, because of the higher utilization and, in some cases, switching over from spinning disks to flash for customers who haven't done that yet. There's also third-party software in the environment that we can help you identify with Data Infrastructure Insights, where the licensing is also core-based; by reducing the number of cores those servers run on, you reduce the cores you're licensing for that software as well. Again, this is what we can provide today. We do a modernization assessment analysis as well, but the idea is you can get started just by installing Data Infrastructure Insights and letting it start collecting data, and we can help you with assessments from there. This is the type of data customers can leverage on an ongoing basis, not just to get started today, but to make sure the environment continues to be optimized going forward. Now, let's shift gears a little. We also have hardware that can help in this environment from the storage perspective: our All SAN Array for VMware. Before I get into that, Sean, do we have anything in the chat? Let me verify. We don't have anything in the chat; it appears that's not working. But we do have a couple of questions in the Q&A. I think we can wait until the end to answer those, because they're questions everyone might like to hear.
No problem, thank you. All right. Customers want to modernize their data infrastructure, and when they get data storage for VMware, they're faced with a dilemma: do you want simple, easy-to-use storage, or do you want scale-out, advanced data management capabilities? In the past, that was a trade-off. Easy, simple storage couldn't necessarily scale out with advanced functionality, or I could go the advanced route, but then it became not so easy to manage and maybe not simple to use. Well, we took a look at that, and that's why we built the All SAN Array, or ASA, capabilities for ONTAP. This is a product line that provides simplicity. It's a storage solution that's simple to deploy: anyone can deploy it, manage it, and upgrade it, so you no longer need advanced degrees or advanced knowledge of managing storage systems to do that. We took that page and brought it forward with this particular platform. Yet at the same time, we kept the power of a full-capability all-flash array: the ability to accelerate those VMware and database apps with market-leading performance and the proven reliability we've always had in our ONTAP platforms from day one. Also, affordability: you get unmatched value, and we have ROI calculators to help customers look at this. For our partners in the audience, we can help your customers with ROI comparisons against the technologies they're using today, and with how that workload can translate to leveraging ASA, to support the ROI discussion for customers who are looking to modernize and aren't sure where they want to go. The power of ONTAP has always been that it does not have to be a single workload running on a particular box. We can run VMware workloads, and we can run SAP workloads.
Kubernetes, file services, online transaction processing, and EDA workloads can all run on the same platforms. We have the ability to scale up and scale out, on premises, with our high-performance A-Series, which is our low-latency platform, and our C-Series, which is our capacity flash platform. Now, the ASA today is built on the A-Series platforms, and we're working toward bringing the C-Series to it as well. The ASA is our primary all-SAN offering. Some customers decide they really want to separate block workloads, the SAN workloads, from file workloads, and we can do that now with these capabilities, still with ONTAP at the heart of the whole thing, providing the protocols that are there today. We also have private cloud offerings: ASA is part of our FlexPod converged infrastructure. In addition, we can offer a consumption model with ASA, so customers no longer have to rely on getting the sizing exactly right up front. I have a colleague who loves to point out that we don't know what the future is going to bring or what types of workloads it may bring. I just want to know I have capacity now, that I'm able to grow, and that I don't have to guess what my future is going to be and buy three or five years of capacity today.
With our Keystone offering, I can have a capacity model that fits a consumption model, based on what I think I'm going to need, and more importantly, if it doesn't quite meet that or exceeds it, I have the ability to adjust very easily and bring in the right capacity and the right service levels under that Keystone model. ONTAP also gives you the ability to move data to the cloud, and the nice thing is that with Data Infrastructure Insights I can help you see those details in the cloud as well: measurements of what is consuming your cloud resources under those consumption models. So while we've talked about everything being on premises, we also recognize that customers are going to run things in the cloud, and we're now seeing more of a balance between the two, so we can certainly leverage those capabilities as well. The other big thing about ONTAP is security, which we have been focused on. ONTAP is the only enterprise storage validated to store top-secret data, no matter where it's running. We are on the Commercial Solutions for Classified components list with the National Security Agency, we have FIPS 140-2 validation for securing the data, we are on the Department of Defense approved products list, and we are certified for Common Criteria as well. While a lot of enterprises may not need all of those particular certifications, and FIPS 140-2 meets most everybody's needs, there are also stronger requirements around securing the data and making sure it cannot be compromised, and that is what ONTAP has been focused on.
And we're providing that to our customers today to help protect them from ransomware attacks. If you leverage our technologies to put your data there, follow our guidelines, and add a small services engagement around that, we can ensure you'll be able to recover from a ransomware attack for data residing in an ONTAP system today, which is why we position our ONTAP systems as the most secure storage on the planet. This is really crucial for a lot of customers today. In VMware environments, customers are more aware that they want to be able to mitigate those attacks, and we have those capabilities as well. We've also made licensing easier with ONTAP One: all the software features are included on board the platform, including our security and resiliency features as well as our SAN features. One of the SAN features we now have is symmetric active-active multipathing, which means all the paths available to the host are active. That mitigates what happens when a failure occurs: I don't have to worry about ALUA rebalancing things, because all paths to the host are already active and being utilized, and we're not waiting on the behind-the-scenes steps that would ordinarily have to take place for a failover event to occur. So from a resiliency standpoint, for VMware environments that require it, we have the ability to leverage that capability today. I mentioned ONTAP tools earlier when we talked about monitoring; this is also where we bring our integration point for VMware environments. This is where we can work with VCF, for example, and with workload domains. And keep in mind that at VMware Explore
this past year, VMware removed the requirement to run VCF with only vSAN. So we now have the ability to place an ONTAP system not just in a workload domain, but also within the management domain when deploying VCF. ONTAP tools can work with this environment, whether VCF is being automated or we're working from vCenter to deploy and manage datastores, all from vCenter. It also provides reporting capabilities for that particular environment, and that environment alone. Data Infrastructure Insights, by comparison, gives you observability of all your environments no matter where they're hosted: in a colo, in your data center on the West Coast, the East Coast, in Europe, South Africa, or Asia-Pacific. Those capabilities come from Data Infrastructure Insights, whereas ONTAP tools is more about helping you see things immediately within the focused environment you're looking at from vCenter, within the vSphere data center. I mentioned VMware Cloud Foundation: customers are starting to look at deploying it. Some deployed the previous iteration, when Cloud Foundation was licensed under the old model, but with the new version, everybody will typically be leveraging VMware Cloud Foundation to provide resiliency within their environment and make it more automated. We also work with VMware's vSphere Metro Storage Cluster configuration, which makes a VMware environment even more resilient.
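Going back to the symmetric active-active SAN feature for a moment, here is a toy model of why all-paths-active changes failover behavior. It is not ONTAP internals, and it deliberately simplifies ALUA (real ALUA can also drive I/O down active/non-optimized paths at a performance penalty):

```python
# Toy model of host multipathing during a controller failure.
# Illustration only: not ONTAP internals, and a simplification of ALUA.

def io_paths(paths, symmetric):
    """Paths the host is actively using for I/O."""
    if symmetric:
        # Symmetric active-active: every healthy path carries I/O.
        return [p for p in paths if p["up"]]
    # Simplified ALUA: only healthy optimized paths carry I/O.
    return [p for p in paths if p["up"] and p["optimized"]]

paths = [
    {"name": "A1", "up": True, "optimized": True},
    {"name": "A2", "up": True, "optimized": True},
    {"name": "B1", "up": True, "optimized": False},
    {"name": "B2", "up": True, "optimized": False},
]

# Controller A fails: its two paths drop.
for p in paths:
    if p["name"].startswith("A"):
        p["up"] = False

# In this simplified ALUA model, the surviving paths must first be
# promoted to optimized before full I/O resumes; with symmetric
# active-active, the B paths were already carrying I/O and just keep going.
print(len(io_paths(paths, symmetric=False)))  # 0 until promotion
print(len(io_paths(paths, symmetric=True)))   # 2 immediately
```

The difference the sketch shows is the absence of a path-state transition: with all paths active, a failure shrinks the path set but never requires promotion before I/O continues.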
And with our SnapMirror ActiveSync capability, which is our resiliency on the storage side, we can deliver near-zero RTO and RPO for a VMware environment by combining the capabilities vSphere Metro Storage Cluster provides with the mirroring replication that SnapMirror ActiveSync provides. This is where we're providing synchronous mirroring between systems, regardless of where they're located, within typical distance limitations; feel free to reach out to us if this is of interest for your configurations and how we can help you. This is built into ONTAP, so SnapMirror ActiveSync is not something you have to license separately: it's part of the replication capabilities built into the ASA model, so you can take full advantage of it. And because it's built in, ONTAP is able to consolidate those crucial workloads on the same cluster. You can enable this functionality on a per-workload basis, rather than for an entire system as these types of technologies required in the past, and it is deployable through our management interface. It also provides site redundancy, where hosts and apps can run at the primary or the secondary site, so you don't have to have one system sitting there idle and doing nothing. You can have applications running at both sites, with the ability to fail over to each other's site seamlessly and non-disruptively. It's also optimized for your environment, because you keep the other storage efficiencies we provide with an ONTAP system, and you can use the mirror copy for development and test copies as well.
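The near-zero RPO claim can be illustrated with a generic model of synchronous mirroring (a sketch of the general technique, not SnapMirror ActiveSync's actual protocol): a write is acknowledged to the host only after both sites hold it, so a failover finds the secondary fully current.

```python
# Generic sketch of synchronous mirroring (illustration only, not NetApp code).
# Every write commits at BOTH sites before it is acknowledged to the host,
# so the secondary is never behind any acknowledged write: RPO is near zero.

class SyncMirroredVolume:
    def __init__(self):
        self.primary = []    # blocks committed at the primary site
        self.secondary = []  # blocks committed at the mirror site

    def write(self, block: bytes) -> bool:
        # Commit locally and at the mirror partner...
        self.primary.append(block)
        self.secondary.append(block)
        # ...and only then acknowledge the host.
        return True

    def failover(self):
        # Nothing to resynchronize: the mirror already holds every acked write.
        return self.secondary

vol = SyncMirroredVolume()
for i in range(3):
    vol.write(f"block-{i}".encode())

# The surviving site has exactly the data the host was told was safe.
assert vol.failover() == vol.primary
```

An asynchronous mirror, by contrast, acknowledges before replicating, so whatever lag exists between the sites at failure time becomes the RPO.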
So there are functionalities we provide to make sure that the mirror copy we maintain at the other site for business continuity is a usable copy. It also leverages our proven replication technology, SnapMirror, a capability customers have relied on for decades for replication. We can take full advantage of that to deliver non-disruptive failover in the same environment for your workloads, so that at the end of the day your customers don't even know anything happened on the back end. So let's sum things up and look at what we've covered today. An efficient VMware environment is critical for optimizing those subscription costs, and we can help you with that with our Data Infrastructure Insights capability today. The out-of-the-box reports can get started just by installing the data collection tool; taking full advantage of that data immediately populates those reports, so you can start looking at the details right away. You can refine that further by adding additional context around departments and application information, which lets you make more intelligent decisions about where to optimize and drive efficiency. Those reports and insights in Data Infrastructure Insights enable customers to easily determine efficiencies, and our troubleshooting capability, seeing the infrastructure stack from beginning to end, from the VM to the storage, gives us the ability to help resolve issues in the VM environment up to 30% faster, which means a faster mean time to resolution.
And your customers are happier. Issues are going away, your application owners are pleased because you're solving their problems faster, and more importantly, your IT staff and your VMware staff can work together: when there's an issue, they know they can come together, see where to focus in that environment faster, put the problem behind them, and get back to the other things they've been tasked to do. So with that, hopefully this gives you some indication of what you can get started with. If you're curious about what's next, this URL can get you started: you can request a personalized demo, get more details and see a live environment, and more importantly, start a free trial. There are some links here as well, which will be available to attendees, with more in-depth reading material on what we've covered today, including debunking those trade-offs and optimizing your environment without compromise. We've focused on VMware today, but we can do this for other environments as well. The ONTAP tools for VMware documentation link also shows the details of how we further integrate with VMware in that environment. So with that, thank you for your time today. I know we have Q&A. Sean, were there some questions? Yeah, we do have a few questions in here. Thanks, Ken, great presentation. I'll answer the first one live, and just let you know that, based off this answer, it would be best to also get in touch with your DII experts.
There was a quick question about whether Data Infrastructure Insights provides metrics on VMware vSAN. The answer is yes, but to dive into more details, it would be good to get in touch with someone on the DII team, and we can make that happen after this webinar. Ken, another question was around cloud footprint versus on premises: is DII the best tool for our environment, given that we're on premises only? I did mention that it is a hybrid cloud tool, but if you'd like to expand on that, that would be great. Yeah. When we do data collection for an on-prem environment, the reporting that drives everything runs in the cloud, as a service. That makes it easier to maintain and manage: while you stay focused on running on prem, the as-a-service model for the reporting tool means we can add features, functions, and dashboards without you having to worry about what release level you're at or the care and feeding of that application. It's a newer model that customers, and these types of reporting tools, are moving towards, because in the past customers tended to lag behind; they might be two releases behind on a tool set, whereas with the as-a-service offering they see new capabilities immediately, just by logging in. So don't think that because you're running on premises only, this tool has to be on prem. Now, if you have a dark site, give us a call: go through the trial environment, talk to our Data Infrastructure Insights specialists, and they can discuss the dark-site-only capabilities we can offer as well.
But the point is really this: just because you're not running in the cloud doesn't mean this isn't a tool for you. The reporting functionality being available as a service is the benefit: you don't have to worry about caring for and feeding anything other than your own data. And as we provide new updates and new features, you get those just by logging in; you no longer have to go through a whole upgrade planning process. That's been the big benefit of that model for this audience. Great, thank you, Ken. So we'll jump into another question. Part of it introduced cost and what the cost model looks like. As we saw in that last link, we did share links, and we'll have a follow-up to this webinar with links to start a 30-day free trial and get walked through the whole process with the DII team, or to request a free demo. We also have a link to the DII page that walks through the different scenarios and how you can access and utilize these tools. The second part of that question, though, I think is applicable, Ken: do you need to be a medium-to-large environment to make this worth implementing, or can smaller IT departments utilize this tool as well? Yeah, we've had some shifts in that area, and this can now be applicable to smaller audiences as well. It's no longer for large-scale or medium-scale audiences only; it can be applicable to any size, and the way we're now structuring that capability gives us some leeway. So if you thought this was a tool you weren't large enough for, revisit it, because it now fits smaller audiences as well.
It can be small, medium, or large; it's not restricted by how big your IT environment is. We can certainly help you in those areas, and the 30-day trial can help you get started and see where this can be of value to you. I think you'll be pleased with some of the revisions we've made to the licensing to make that true. Great, thank you. Let's jump into a question here about vVols and ONTAP tools: what is the scale we have for ONTAP tools and vVols? Yeah. With ONTAP tools 10, we've revised our scalability. From the statistics, we have hit up to about 120,000 vVols in current ONTAP tools 10 testing, and we feel we haven't reached the limits of what ONTAP tools 10 can provide. The re-architecture of ONTAP tools 10 makes that possible: it is now Kubernetes based, running its own Kubernetes cluster, so we make it even easier for customers, who don't have to worry about deploying a cluster themselves. The scalability of the container-based microservices inside the ONTAP tools 10 appliance is what's really giving us that power: you can start small and add resources to the appliance as you need to grow out and scale out. The other area that was lacking in the past was high availability for the appliance; we now have the ability to deploy it in an HA configuration as well. So we feel we offer the largest scale available, and we really haven't reached our upper limits: our initial testing is at 120,000 vVols, a number we can now easily hit, and we feel we can go higher.
So if you looked at us in the past and felt we couldn't reach your requirements, feel free to revisit what we can do now with ONTAP tools 10, because we feel it's a powerful capability for vVols in that environment, for scaling out and scaling up. Perfect, thank you, Ken. We'll jump into one last question, which is basically around the main differences between what Data Infrastructure Insights and VMware Aria provide. Yeah. Aria covers your operations aspects: you're going to see dashboards around those areas. But with Aria it's often hard to see the bigger picture from a site-to-site perspective; you have to hop around. With DII, we provide observability across all of those environments and let you see across the whole plane. The other difference is that Aria covers the VMware environment only. While we've focused on the VMware environment in this piece, with Data Infrastructure Insights we're also looking at things outside the VMware environment, which gives you the ability to include those in your capacity planning and your resource planning as well. So while Aria is very focused on the VMware environment and is typically viewed within that context, Data Infrastructure Insights can supplement it. You saw what we can do with the reporting capabilities around anomalies and the end-to-end view; in Aria, you would probably be going to multiple locations to try to figure out those same types of things, whereas we can see them from one visualization within Data Infrastructure Insights. So it's really an augmentation and a supplement to what you can do with Aria, and more importantly, you are likely going to have things outside the VMware environment to monitor and observe as well.
That's where we can provide value with Data Infrastructure Insights: observability across everything in your infrastructure, not just the VMware environment. Great, thank you, Ken. And with that, we are all set to close out the webinar. I want to thank everyone again for taking the time to sit down and join us today. Ken, thank you for the great presentation, and thank you to the crowd for the great questions. Any questions we weren't able to answer here at the end, we will follow up on afterwards. Keep a lookout for a follow-up email, where we will provide links to the content, assets, and tools that we showed you at the end of the presentation. Thanks again, everybody. We appreciate your time.
Discover how to leverage observability data from NetApp Data Infrastructure Insights to create a "no regrets" plan to reduce waste and optimize virtual CPU, memory, and storage.