BlueXP is now NetApp Console
Monitor and run hybrid cloud data services
Data is no longer passive. It moves. It learns. It creates. It protects. We've taken three decades of trust and engineered the future. Not just storage, not just cloud, but intelligent data infrastructure. Built for speed, scale, and AI. It's the foundation that makes your data unstoppable. The core that fuels your breakthroughs. The edge that defines tomorrow. This is where innovation takes flight. This is where you unleash genius.

>> Greetings everyone, and welcome to our NetApp launch webinar. We're thrilled to have you join us today as we dive deep into some exciting announcements and explore how a unified, enterprise-grade data platform can power your organization to drive AI with unmatched scalability, resilience, and security. My name is Debbie Markham, and I'll be your host for today's session. Before we get started, I'd like to cover a few housekeeping items. We encourage you to use the Q&A function if you have questions throughout the presentation. We have experts standing by to address your questions in real time. You may recall, as part of our promotion of this webinar, that the first 100 people who registered for and attend the webinar will receive a pair of NetApp-branded earbuds. Those winners will be notified following the webinar via email. And please be sure to stay till the end. We'd love for you to participate in our post-event survey, where one lucky winner will receive a $100 Amazon gift card. And now I have the privilege of introducing our speaker, Jeff Baxter, vice president of product marketing at NetApp. Jeff has been instrumental in driving innovation and strategic initiatives at NetApp, and he's here to share his insights and expertise on how you can transform your AI strategy with our latest solutions. Jeff, the floor is yours.

>> Thanks, Debbie, and thanks everyone for joining us today. It's my pleasure to come talk to you all about some of the amazing innovations we launched just a few weeks ago at Insight 2025 and give you a recap of, frankly, the largest launch I've had the privilege of handling at NetApp, and probably the largest launch in our three-decade history, with amazing innovation around four different areas that we'll talk through: artificial intelligence, data infrastructure modernization, cyber resilience, and cloud transformation. So why do I mention those four things? Well, we believe those are the four transformations driving modern transformation in all of your businesses. And we believe that among all of them, data is at the center of every transformation. Without data, artificial intelligence doesn't matter. Cloud transformation doesn't matter. None of these can happen without data. Those transformation efforts then become your imperatives. And this is what we hear from all of our customers: they need to transform. They need to take advantage of artificial intelligence. They need to keep their data secure with cyber resilience. They need to modernize their data centers, and they need to take proper advantage of multiple different clouds, often operating in a hybrid state. And so, to help customers with all these imperatives, we have the NetApp data platform. And frankly, we've had a NetApp platform for many years.
But Insight was really the first time we started to call it the NetApp data platform, recognizing that this really is a distinct offering focused on your most important asset, your data, that provides the unified, enterprise-grade foundation for building out an intelligent data infrastructure. So when we talk about the NetApp data platform, this is essentially a quick guide; you can almost think of it as a high-level view of our entire portfolio. We build everything on a foundation of unified storage, both on premises and in the public cloud, as well as more and more work we're doing to support large-scale AI factories, sometimes called neo clouds, and to operate on sovereign clouds, either built on premises or as sovereign cloud regions within local clouds or some of the large hyperscalers. All of those can be served by a single data plane based on NetApp ONTAP, our leading data management operating system, strengthened over three decades of use across tens of thousands of customers and able to serve as the basis for your entire digital estate. On top of that, we layer multiple different data services: data protection services that you may be familiar with today, several more that we'll talk about during the course of this webinar, and then a whole new class of services focused on AI that we call our AI data engine. I'll talk in much greater detail about that over the course of this session. And then, on top of all that, we want you to be able to experience the NetApp data platform how you want to experience it: available via purchase or as a service through NetApp Keystone, all managed through a single control plane with the NetApp Console, and obviously open to your broader ecosystem, with open APIs for broad integration across the open-source continuum, including NetApp Instaclustr for fully managed, performance-optimized open source. There's obviously a ton more that NetApp does, but this is the high level of what the NetApp data platform enables and builds for customers. So let's talk about how that NetApp data platform can be used across all of these transformations. And of course we're going to start with AI. Now why do we think having the right data platform is critical for AI? The simple reality is that data and data preparation is perhaps the biggest challenge facing AI. We all think about having to build out GPU-enabled data centers, and certainly there are challenges there. We all think about how to adopt all the latest AI tools, and there are challenges there. But when you actually look at the causes of failure in AI initiatives, the top cause, or at least one of the top causes, is lack of AI-ready data. There are multiple studies, surveys, and other data points out there that say this. This particular one is from Gartner, saying that 60% of AI projects will be abandoned through the course of 2026 due to lack of AI-ready data. That means you can have all the infrastructure ready. You can have all the tools ready. You can even have the right data scientists ready. But unless they have data that is unified and made AI ready, all the tools in the world, all the smart people in the world, can't do anything without that data to operate on.
So we believe we're in the middle of an evolution where AI has in the past been focused primarily on large-scale model training in specific neo clouds, sovereign clouds, and public clouds, to the point where now we're in the era of enterprise AI. Yes, there will still be large-scale model training going on; the left side doesn't go away. But increasingly every enterprise is going to be adopting those models, maybe fine-tuning them, but using retrieval-augmented generation and inferencing, and increasingly giving them some autonomy to embrace agentic AI, to move the state of AI further and really democratize it across every company. And if you want to do that, you go from just needing high performance for model training to needing all of these enterprise-grade capabilities. So this is our premise: AI is the new premier enterprise workload, and enterprise workloads demand enterprise capabilities. It's not enough to have high performance. Yes, you have to have high performance. You have to keep the GPUs busy. You have to keep your data pipeline running. But on top of that, you need to be able to prove return on investment. It's no longer about science experiments. You need to be able to actually prove that what you're doing meets the return you're putting into it. That means you need cost efficiency built into the environment. It needs enterprise-grade resiliency, availability, and data protection all built in. You need to be able to protect the data with built-in ransomware protection and compliance. And you need to make sure, again, that you have unified and up-to-date visibility across your entire data estate. Otherwise, none of this matters if you can't get access to your data. So, at NetApp, what we focused on with this launch is announcing the creation of an enterprise-grade data platform for AI. That is the NetApp data platform, which provides enterprise resilience and performance built on a unified data foundation that allows you to have an accelerated data pipeline to deliver real business results. So today I want to talk to you about what we mean by an enterprise-grade data platform. And to do that, I want to talk about one of the first announcements we made, the amazing new NetApp AFX system. Let's take a look at it. Meet the enterprise-grade data platform for AI. NetApp AFX is disaggregated storage purpose-built for the AI-powered enterprise. AFX delivers a massive global namespace with granular, linear scalability and performance equivalent to parallel file systems, but without all the complexity. Powered by proven, enterprise-grade ONTAP intelligent data management and integrated real-time ransomware detection, NetApp AFX is the most secure storage on the planet. Unify your data estate with seamless hybrid cloud integration. Instantly and securely bring your data to all the most popular AI models on the world's largest clouds. And that's just the beginning. Your data infrastructure needs to turbocharge your data pipeline. NetApp AI data engine accelerates AI outcomes with global data visibility and semantic search, built-in data guardrails, and an efficient, always up-to-date vector database to accelerate your gen AI and agentic AI workloads with one end-to-end solution. Data teams can optimize AI pipelines and drive faster time to insight. Don't gamble your AI future on unproven solutions.
Lead with the trusted name in enterprise data infrastructure. Lead with NetApp. So that gives you a sneak peek into what we announced: the NetApp AFX AI portfolio. This is enterprise-grade disaggregated storage with the new NetApp AFX system and the ability to streamline AI data pipelines with the NetApp AI data engine. So I want to go ahead and double-click into each one of these announcements. We start with NetApp AFX. NetApp AFX is our new disaggregated storage offering built for the AI-powered enterprise. The most important thing to emphasize about NetApp AFX is that it's based upon the same NetApp ONTAP data management operating system that we've had and built upon for the last three decades. We've taken it and turbocharged it into a disaggregated architecture, but kept in place all of the enterprise-grade capabilities that ONTAP has built in over many years. The disaggregated nature allows you to scale efficiently. You can go from small systems up to exabytes to power your AI workloads, but in very efficient and granular increments; you only have to increase your capacity or your performance, not both simultaneously. You'll hear me talk about enterprise grade a lot: the ability to have the most secure storage on the planet, to govern your data, mobilize your data, and move it to wherever your GPUs live, so that you can literally bring your AI to your data or your data to the AI, all based upon an enterprise-grade underlying system. And of course, you want it to be cloud connected by design. So many of the major AI models live in the cloud that you want to be able to build and deploy AI anywhere but still have access to your data everywhere. These are fundamentally our design principles for building out NetApp AFX. So when I say massive scale: NetApp AFX, just in its launch year, moving into 2026, will be able to scale up to 128 nodes and over an exabyte of capacity. These are not architectural limits. These are really the starting point that we're going to scale the system to in 2026. Architecturally, it's able to go even higher; the sky is the limit in a lot of ways, as we continue to scale out performance and capacity in a very linear and granular fashion by just adding additional components to the AFX system. It was also, out of the gate, and this is really interesting, certified by NVIDIA as NVIDIA SuperPod certified. So it's designed for the largest instantiations, from very small AI projects all the way up to the largest SuperPods. NVIDIA took a look at this, and because it's leveraging NetApp ONTAP, which has already been certified for SuperPod, it was fairly straightforward for them to certify NetApp AFX as a SuperPod system right out of the gate, on day zero. So when I talk about enterprise-grade AI powered by ONTAP, that means all of the capabilities of ONTAP come seamlessly into NetApp AFX. That includes enterprise-grade functionality, all the reliability and resiliency that we're known for as an enterprise-grade storage provider, and all the built-in intelligent data mobility and caching capabilities. All built upon no-compromise security. Built on the world's most secure storage.
And built using zero-trust principles, with built-in 99%-accurate ransomware detection and secure multi-tenancy, so you can slice and dice your AFX infrastructure into multiple secure tenants to run multiple lines of business for their AI initiatives. Put the seamless scaling on top of that, and you're able to get HPC-class performance without the pain of running a parallel file system, and optimize your performance density on a common capacity pool. And of course, it's built for AI: unified data access across on premises and all major clouds, and an up-to-date metadata engine built directly into the AFX system to aid in AI data operations. So, I mentioned being hybrid multicloud connected. Because it's ONTAP, and because ONTAP is present and native in all the world's largest clouds, you're able to directly replicate or cache your data out to AWS, Google Cloud, or Microsoft Azure. You're able to easily exchange your data with existing parts of your NetApp data estate. So, even though this is a brand new disaggregated architecture, it's not operating in a silo. You can replicate or cache data from any existing AFF or FAS system you might already have if you're a NetApp customer. And you can certainly exchange object data, because the AFX system works on both file and object, with object stores such as NetApp StorageGRID; if you have a large object repository, you can easily move objects in and out of your new disaggregated AFX system. Taking a double-click down into what we specifically announced, there are actually just three hardware building blocks of an AFX system: the AFX 1K storage controller, which is where you get all the performance capability; the NX224 storage enclosure, which is where you add your capacity; and then the brand new element, the DX50 data compute node, which is where you run added AI data services such as the NetApp AI data engine. So let me put these together in an architecture for you. The storage controller, as I mentioned, is the AFX 1K. It runs ONTAP, and it's all connected into a high-speed, low-latency network. In our case we use Ethernet; we strongly embrace open standards, so we use an Ethernet-based network, today operating at 400 Gb, and obviously it can continue to grow in the future. And then we attach the storage enclosures directly into that Ethernet network. This means that any controller can see any storage enclosure. If you're familiar with NetApp's current architecture, right now you have pairs of controllers that can access a certain stack of storage, and they can certainly cluster together, but not every controller in the cluster can access every single SSD within the cluster. That changes with AFX: every single controller can see every single piece of storage. And what that means is that when you want to scale performance, you don't need to add any additional storage enclosures. You just add additional storage controllers, and you've automatically increased your performance. And when you want to increase your capacity, you just add storage enclosures directly to the network. So it completely frees you up to scale linearly and granularly, however you need to grow as your AI workloads grow and change. And this in and of itself is an AFX cluster.
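To make that scaling model concrete, here is a minimal illustrative sketch, not NetApp code: the per-controller throughput and per-enclosure capacity figures are invented placeholders, and the point is simply that performance and capacity scale independently.

```python
# Illustrative only: models how a disaggregated cluster scales performance
# (controllers) independently of capacity (enclosures). All figures are
# invented placeholders, not AFX specifications.
from dataclasses import dataclass

@dataclass
class AFXCluster:
    controllers: int = 2                 # AFX 1K storage controllers
    enclosures: int = 1                  # NX224 storage enclosures
    gbps_per_controller: float = 40.0    # placeholder throughput
    tb_per_enclosure: float = 500.0      # placeholder capacity

    @property
    def throughput_gbps(self) -> float:
        # Every controller sees every enclosure over the shared Ethernet
        # fabric, so aggregate throughput grows with controller count alone.
        return self.controllers * self.gbps_per_controller

    @property
    def capacity_tb(self) -> float:
        # Capacity grows with enclosure count alone.
        return self.enclosures * self.tb_per_enclosure

cluster = AFXCluster()
cluster.controllers += 4   # need more performance? add controllers only
cluster.enclosures += 2    # need more capacity? add enclosures only
print(f"{cluster.throughput_gbps} GB/s, {cluster.capacity_tb} TB")
```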
But there is one additional, optional component you can add into the AFX cluster, and that's the data compute nodes I mentioned. Again, you can have straight disaggregated storage for AI with the AFX cluster alone. But if you want to add some of these optional AI data services, you add in the DX50 data compute nodes. You start with three basic nodes and can scale up from there, depending on how much AI data service capacity and performance you need. So let's talk about the NetApp AI data engine and what it offers. This is net new innovation from NetApp. This is us bringing together a ton of different capabilities that today are often spread across ten or more different tools within an infrastructure, to directly produce AI-ready data from your infrastructure. Our goal is to take all of your data stored on ONTAP storage, starting with NetApp AFX but hopefully expanding over time, look across all that storage, find and understand all of your data, keep it current by automatically keeping our index of that data up to date, govern it with guardrails across the entire process, and then transform it so it can be directly consumed by AI apps. So this is the NetApp AI data engine. It's composed of four pieces, which I'll talk about in detail in a moment, that provide each of these steps across the entire environment: basically taking your whole collection of data, on prem and in the cloud, and at the end of the day making it so your AI apps can directly talk to the system and consume that data. It skips all the intermediary steps and produces a built-in vector database that your AI apps can talk to directly on your storage system. This is how we solve the problem I started this whole section with: 60% of AI projects failing due to lack of AI-ready data. If your storage system can directly provide that AI-ready data out of the gate, you solve those problems and you're automatically a hero, all the way down at the data infrastructure level. So the first of those four pieces is the metadata engine. The metadata engine enables your data practitioners, your data engineers and others, to work with the storage admin to easily find and understand your data. You have an easy API to query your metadata, and you can create data collections through keywords or semantic search across a metadata engine covering all of your data. So, without having to walk your data across all these different sources every single time, we automatically create a metadata engine that holds data about the data, which can easily be used by your data practitioners to find the data they need to serve their AI models. Now, on top of that metadata engine, we have a data sync capability that automatically works with all of your different ONTAP instances to enable automatic change detection. You don't have to constantly walk your files to see what changed. Every time something changes, we use our built-in SnapDiff capability to automatically pull the metadata from that change and push it into the metadata engine, so you always have the latest data available through the metadata engine, even across a sprawling data estate on prem and in the cloud.
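As an illustration of the kind of query this enables, here is a hypothetical sketch; the endpoint URL, request fields, and response shape are all assumptions made up for illustration, not the actual AI data engine API.

```python
import requests

# Hypothetical metadata-engine query; the URL and payload shape are
# invented for illustration, not the real AI data engine API.
METADATA_API = "https://afx.example.com/api/v1/metadata/search"

def find_dataset(keywords: str, modified_since: str) -> list[dict]:
    """Search the metadata (not the files themselves) across the data
    estate and return candidate files for a data collection."""
    resp = requests.post(
        METADATA_API,
        json={
            "query": keywords,  # keyword or semantic search terms
            "filters": {"modified_since": modified_since},
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]

# Build a data collection without walking every volume by hand.
candidates = find_dataset("quarterly sales forecasts", "2025-01-01")
```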
On top of that, you can then build data guardrails (there's a small illustrative sketch of the idea below). It's great to provide your data practitioners easy access to all of the data. But just because you have access to the data doesn't mean you want to train AI models on it. We all know that AI models can hallucinate. AI models can produce information that you don't want them to produce. A lot of modern AI safety work tries to put boundaries at the end of the AI process. So, if you have a chatbot, you tell the chatbot: don't give out Social Security numbers, don't give out credit card numbers. But we all know that modern prompt engineering is very sophisticated at getting AIs to spit out the data that they have. These large language models, in a way, want to help. They want to give out the data that they have. So putting guardrails at the very end of the process is not the right way to solve this. The right way is to never train that AI model on sensitive data to begin with. If you never train the model on sensitive data, if you never give it access to sensitive data, it can never disclose that sensitive data. And so what we allow you to do, at the very start of the data pipeline, is create these data guardrails. You can put in place any rule you need and create policies, or use built-in policies to automatically redact things like personally identifiable information. In the United States, for example, you could detect whether a Social Security number is there. You could look for credit card numbers. You can build these data guardrails and determine, for each data set, which guardrails to apply, so that your AI application, no matter what it gets, never gets that data; you can either automatically redact information from files or exclude them entirely. So the AI is never, ever trained on data that you don't want out in public, and this is how we make sure your data is AI ready out of the gate. Then the final step is to transform your data. We have a function we call the data curator that allows you to search across that entire metadata catalog, across the entire hybrid multicloud, find all the relevant data, create a curated data set that's automatically kept up to date and has already been through your guardrails to take out any sensitive information, and generate vector embeddings at the storage layer. The vector database we create is essentially the native language for most AI applications. Typically you would run your data through multiple different tools to finally produce a vector database that your AI applications can talk to. We instead produce the vector embeddings at the storage layer, reducing data bloat by up to 10x to optimize your cost and increase performance. And then you can point your AI applications, for example your retrieval-augmented generation applications, directly at this endpoint, enabling rapid, vector-embedded semantic search. All of this is powered by embedded NVIDIA software that we place directly within the NetApp AFX cluster. We've actually worked with NVIDIA for many years now, and we've been heads down over the last several years on embedding their capabilities directly into NetApp AFX and the AI data engine. Those data compute nodes I talked about have NVIDIA GPUs embedded directly within them. We also embed NVIDIA NIM microservices directly within the AI data engine.
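Here is the small sketch promised above: a minimal example of policy-based redaction applied before data ever reaches a model. The policy names and regular expressions are simplified stand-ins, not NetApp's actual guardrail implementation.

```python
import re

# Illustrative stand-ins for guardrail policies: each maps a name to a
# pattern that should never reach an AI model. Real guardrails would be
# far more sophisticated (classification, context, validation).
GUARDRAIL_POLICIES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def apply_guardrails(text: str, policies: list[str]) -> str:
    """Redact sensitive spans before the text is indexed or embedded."""
    for name in policies:
        text = GUARDRAIL_POLICIES[name].sub(f"[REDACTED:{name}]", text)
    return text

doc = "Customer SSN 123-45-6789, card 4111 1111 1111 1111."
print(apply_guardrails(doc, ["us_ssn", "credit_card"]))
```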
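And to show what pointing a RAG application directly at the storage-side vector endpoint could look like, here is a second hypothetical sketch; the endpoint URL and response fields are assumptions made up for illustration, not a documented interface.

```python
import requests

# Hypothetical endpoint exposed by the storage-side vector database;
# the URL and payload shape are invented for illustration.
VECTOR_ENDPOINT = "https://afx.example.com/api/v1/collections/contracts/query"

def retrieve_context(question: str, top_k: int = 5) -> list[str]:
    """Semantic search against the in-storage vector index, for use as
    grounding context in a RAG pipeline."""
    resp = requests.post(
        VECTOR_ENDPOINT,
        json={"query": question, "top_k": top_k},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"matches": [{"text": ..., "score": ...}]}
    return [m["text"] for m in resp.json()["matches"]]

context = retrieve_context("What are our standard indemnification terms?")
prompt = "Answer using only this context:\n" + "\n".join(context)
```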
Those embedded NIM microservices are what allow us to do all those vector embeddings directly within the storage cluster. We didn't have to invent the technology. We used the technology from the acknowledged industry leader, NVIDIA, and embedded it directly within the NetApp software and the NetApp systems, so you can use it at no additional cost as part of the NetApp AI data engine. I mentioned before that all of this has been validated with the NVIDIA DGX SuperPod out of the gate. And I should note that our initial release of all of this will be with the DX50 nodes that come from NetApp. We fully intend to take the AI data engine and build it into a software-defined deployment so that you can eventually deploy it on your choice of servers. For example, you might go to your server vendor and get a server based on the NVIDIA RTX Pro 6000 so that you can use the latest Blackwell-based GPUs. So we want to start with our DX50s to enable your basic AI data engine applications, and our goal in the future is to let you scale to massive limits with your own GPU infrastructure, while still having that be part of the AFX cluster for rapid, governed access to your data. And you're going to see a lot more from this NVIDIA partnership. We actually have a great conversation online between Jensen Huang and George Kurian, the two CEOs, talking about the extensive collaboration between the two companies and what we plan to do in the future. If you're interested, you can find it on our YouTube channel. But that's just the start of the combined R&D between these two industry leaders to make your data AI ready. So I want to close with where we see the advantage. We believe that only the AFX AI portfolio is purpose-built for enterprise AI. It provides a modern disaggregated architecture, like many of the AI startups, with an integrated AI data pipeline, which, again, some of the startups are starting to build; but we also have this legacy of intelligent data services, enterprise-grade reliability, integrated anti-ransomware, and native integration with all the major clouds that none of the current AI startups can hope to match. So it delivers all the same capabilities, but with that enterprise-grade pedigree and set of capabilities. And of course, it far exceeds anything available from the traditional storage vendors, who have not managed to take their multiple disparate storage OSes and turn them into modern disaggregated architectures. So that was the major announcement we made about the NetApp AFX AI portfolio. I want to cover a couple of other additional AI innovations. First, everything I've talked about so far is available through our Keystone storage-as-a-service offering. We know that AI, in a lot of cases, is, let's say, highly variable. You're just starting to bring it in as an enterprise workload, and you're not entirely sure how many resources it will consume, so you don't want to under- or overspend on it. Keystone is a perfect way to diminish your commitment risk and avoid a large investment up front, with a ready-to-go, consumption-based service that provides all of these tools and can grow as your AI projects prove themselves; as your funding for AI grows, Keystone grows with you. So, if you're an existing Keystone customer, this will slot directly into Keystone.
Even if you don't use Keystone for your main existing enterprise workloads, it may be something you want to consider for enterprise AI, just because of the uncertainty about how AI is going to go over the next few years. It provides the right partnership between NetApp and you, growing your NetApp AFX and NetApp AI data engine infrastructure only as your AI grows. We also worked with Cisco to integrate AFX with FlexPod. FlexPod we've had for over a decade as our industry-leading converged infrastructure with Cisco and NetApp. We can now add Cisco UCS servers and Cisco Nexus networking, connected directly into AFX storage, to build out an entire FlexPod AI environment. And in the cloud, we continue to integrate with AI on the cloud. As I mentioned, many of the major AI models are actually available cloud first or cloud only. And the fact that you have ONTAP native on all three of these major clouds means you can connect your data directly in. You can have things like the NetApp Volumes connector, which allows Google Cloud NetApp Volumes to connect directly into all the Google Cloud AI applications, so that any of your proprietary data stored in Google Cloud NetApp Volumes, whether you instantiated it there or replicated it from on prem, can be processed as a first-class source of information to drive your business intelligence using Google Cloud AI applications like Gemini and Vertex AI. Similarly, we announced the same sort of integration for Azure, where you can now take all of your data in Azure NetApp Files, expose it through a built-in object API, and in doing so expose that data to the entire suite of Azure data and Microsoft Fabric AI services. So all of your data security remains in place, your data remains in place in Azure NetApp Files, but all of these services, things like Azure OpenAI, Azure AI Foundry, and Azure Databricks, can now operate on your Azure NetApp Files data just as they would data on any other Microsoft Azure service. So again, this is all about being able to create that unified data estate and run any AI model anywhere against all of your data in place. And this really summarizes why NetApp is the best partner for enterprise AI. If data is the fuel for AI, then we know that tens of thousands of enterprises already store over 100 exabytes of data on NetApp systems. So all of your data that's already on NetApp systems, or can easily be placed onto NetApp systems, instantly becomes available to a broad AI ecosystem. We have truly the only enterprise-proven infrastructure for AI: the combination of the underlying technologies and the resiliency to drive these new AI applications. We know that AI is inherently hybrid, with NetApp being the only one to have first-party, native, integrated technologies on AWS, Azure, and Google Cloud. And we know that robust governance and security are necessary to protect all of that valuable data; as we've proven time and time again, NetApp is the industry's most secure storage. So, let's move on to some of the other items on our agenda, specifically data infrastructure modernization. One of the largest things we did is introduce the new AFX platform I just talked about. And this really completes, in a way, the picture we've been building over the last two to three years: an entire portfolio all powered by ONTAP.
So, common APIs, common capabilities, but built to focus on the right type of performance you need and the right type of data you're operating on. Whether it's our unified systems, starting with our FAS hybrid flash systems, up through our industry-leading AFF systems for both capacity flash and performance flash; our block-optimized systems, the NetApp ASA, in both capacity flash and performance flash; and now, optimized for unstructured data and intensive AI workloads, a new disaggregated architecture, NetApp AFX. The key part about all of this is that it's all powered by ONTAP, so your data can flow seamlessly across this entire infrastructure but still be optimized for the type of performance and the type of workload that you're building. To be frank, for some of our competitors this would involve three, four, or five different storage operating systems. We do it all with NetApp ONTAP. Another way we're helping customers is with VM migrations. We know that for many of you, there are a variety of reasons why you're looking at potentially changing the hypervisor you use to run your virtual machines. We all know there have been changes within the industry that have in some cases increased costs, and so we want to provide the flexibility for all of you to easily use any hypervisor on top of NetApp ONTAP. One of the things we've done is revitalize something we call NetApp Shift. We've actually had NetApp Shift for a while now, but in the last year we've put a lot more effort into it to add new capabilities. What Shift does is allow you to migrate a VM from VMware to Hyper-V, for example, without having to copy the data. We use our flexible cloning technology and change just the modified bits to take a VMDK file from VMware and turn it into a VHD file for Hyper-V without ever actually copying the data (there's a conceptual sketch of this below). That means your migrations are as simple as turning your VM off on VMware, running NetApp Shift for a few seconds, and bringing the VM back up on Hyper-V. It reduces those migrations to a matter of minutes. And in this announcement, we announced bidirectional migration between VMware and Hyper-V, as well as the ability to migrate from VMware to basically any QCOW2-based KVM variant. That includes Red Hat OpenShift, Oracle Linux VM, Proxmox, and many other KVM-based solutions, to all of which you'll be able to easily migrate your VMs from a VMware environment without copying the data. This is a free tool available from NetApp. All you need is ONTAP underneath to take advantage of it. You're then able to easily and seamlessly migrate your VMs between these hypervisors, giving you more flexibility to achieve the right blend of hypervisors for your own performance requirements and overall budget.
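The conceptual sketch referenced above: the types and helpers here are invented and this is not the actual NetApp Shift implementation, but it shows why clone-based conversion is fast. The raw disk blocks are shared copy-on-write, and only a small, format-specific descriptor differs between a VMDK and a VHD.

```python
# Conceptual sketch only; invented types and helpers, not NetApp Shift code.
# The point: a conversion that clones block pointers and rewrites a small
# descriptor touches metadata, not the gigabytes of underlying disk data.
from dataclasses import dataclass

@dataclass
class VirtualDisk:
    fmt: str              # "vmdk", "vhd", "qcow2", ...
    descriptor: bytes     # small, format-specific header/metadata
    block_map: list[int]  # pointers to shared on-disk blocks (copy-on-write)

def build_descriptor(fmt: str, src: VirtualDisk) -> bytes:
    # Placeholder: a real converter writes a valid VHD/QCOW2 header here.
    return f"{fmt}-header-for-{len(src.block_map)}-blocks".encode()

def shift_convert(src: VirtualDisk, target_fmt: str) -> VirtualDisk:
    """Clone the block map (cheap pointer copy) and emit a new descriptor."""
    return VirtualDisk(
        fmt=target_fmt,
        descriptor=build_descriptor(target_fmt, src),
        block_map=list(src.block_map),  # no data blocks are copied
    )

src = VirtualDisk("vmdk", b"vmdk-header", block_map=list(range(1_000_000)))
vhd = shift_convert(src, "vhd")  # finishes quickly: metadata only
```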
The other thing we did is introduce the NetApp Console. The NetApp Console is now our new centralized control plane, our centralized GUI for managing your entire storage environment and all these additional data services. It's built on a security-first foundation to provide intelligent and simplified operations. This does replace something existing NetApp customers may be familiar with: NetApp BlueXP. BlueXP became a little bit confusing in terms of whether it was a GUI, a platform, or the name of a bunch of different services. We wanted to simplify things. So now the NetApp Console is just the GUI, at console.netapp.com, where you go to manage all these services. You'll see all the other services that we have. We've taken BlueXP out of the names and made them much simpler, so, for example, NetApp Backup and Recovery. That's a transition we made at Insight as well, to reduce confusion, and you'll see us retiring the BlueXP name. It's effective immediately, though the old name will probably take a few months to drop out of all our materials. So, let's go ahead and move on to cyber resilience. As I've mentioned time and time again, NetApp is focused on being the most secure storage on the planet. And to that end, we introduced several new capabilities at NetApp Insight. The first was NetApp Ransomware Resilience. This was formerly called BlueXP ransomware protection. As I just mentioned, we're taking BlueXP out of the names, but more than that, we're upleveling this, providing truly intelligent orchestration for workload-centric ransomware defense across your entire ONTAP storage estate, both file and block. This builds in all the different technologies you see on the screen, most notably things like autonomous ransomware protection with AI, providing that 99%-plus detection of ransomware attacks across your ONTAP data estate. But we've also added new capabilities into NetApp Ransomware Resilience. One of my personal favorites is data breach detection. A lot of customers asked for this when we first unveiled ransomware detection. They said, that's great; can you also detect when people are stealing my data? That was actually a harder data science challenge, because it's a little bit easier (still hard, but a little bit easier) to detect anomalous writes than to detect an anomalous read pattern. But we've done it. We synthesize data across your entire data estate and say: hey, this user is accessing files in places they've never accessed before, or in amounts they've never accessed before, or the amount of read activity on this file system is far in excess of what we would normally expect at this time. And we'll automatically alert you, and your SIEM tool of choice, so you can immediately block that user's access until you figure out what's going on.
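As a toy illustration of the underlying idea, flagging reads that deviate far from a user's baseline, here is a minimal sketch; NetApp's actual detection combines many signals in a trained model, not a single threshold like this.

```python
from statistics import mean, stdev

# Toy anomaly check: flag a user whose read volume this hour is far above
# their historical baseline. Real breach detection combines many signals
# (new paths, file counts, timing) in a trained model, not one z-score.

def is_anomalous_read(history_gb: list[float], current_gb: float,
                      z_threshold: float = 4.0) -> bool:
    if len(history_gb) < 10:
        return False  # not enough baseline to judge
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return current_gb > mu * 2
    return (current_gb - mu) / sigma > z_threshold

# A user who normally reads ~2 GB/hour suddenly reads 400 GB.
baseline = [1.8, 2.1, 2.0, 1.9, 2.3, 2.2, 1.7, 2.0, 2.4, 1.9]
if is_anomalous_read(baseline, 400.0):
    print("ALERT: possible data exfiltration; block user, notify SIEM")
```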
We know that today's ransomware attacks are typically triple-ransom attacks. Attackers first go in and steal all your data, then they encrypt all your data, then they ransom you: we'll give you the key to unlock your data. If you pay that ransom, they sometimes give you the key. Then they'll say: oh, by the way, we have a copy of all your data, and we're going to release it into the wild or sell it if you don't pay us again. And so you pay them again. And in a lot of cases they go sell your data anyway, to make basically three paydays off of one attack. The only way to stop all of those is not just to catch it when the data is being encrypted, but to catch it when the data is being read, accessed, and stolen to start with. The goal, of course, is for attacks never to happen at all. But we all know that's not the reality. Something happens: an identity gets compromised, someone's password gets compromised, and someone has privileged access to your file system and can get at this data. But if they can only get one or two percent of your data before we alert you and you block them, that is a much better problem to have than if they manage to copy your entire data set and then encrypt it in place, causing massive downtime and massive privacy violations. All of this is built-in capability in NetApp Ransomware Resilience with the new data breach detection. The second piece we've added to NetApp Ransomware Resilience is the isolated recovery environment. This is a new capability that ensures a clean, fast, malware-free workload restoration following an attack. Detecting the attack was really just step one. There are multiple other steps to help you restore, and now NetApp Ransomware Resilience will walk you through all of those steps to ensure that, as you restore, you're doing so in essentially a clean room: you're making sure the malware is entirely removed from, or absent in, the backups you're restoring, so you know you have a known-good data set when you push it back into production. The isolated recovery environment workflow within NetApp Ransomware Resilience walks you through that entire process step by step. Another update here is to the NetApp disaster recovery service, our reliable, low-cost disaster protection for VMware workloads. This now supports Amazon EVS connected to Amazon FSx for NetApp ONTAP storage. The idea here is for customers with a single data center, say smaller customers that don't have a whole separate dedicated data center for disaster recovery. You can now purchase the service from NetApp for VMware and replicate just the data up to FSx for NetApp ONTAP. You don't need anything else really running on AWS, and only in the event of a disaster do you start up all your virtual machines on Amazon EVS and begin paying for that service on Amazon. It provides a much lower-cost way to enable disaster recovery that hopefully you never need to use, but now you have that insurance policy in case your primary data center goes down. This is also available now. So let's move on to the last piece of our agenda, and that's all around cloud transformation. As I've mentioned many times, NetApp offers the only first-party, integrated cloud experience on all three of the world's largest public clouds. One of the key announcements we made at Insight was that Google Cloud NetApp Volumes, which had previously supported only file workloads, now supports block as well. So now it is unified storage, capable of running very small to very large volumes at very low latency, under 1 millisecond, at a very economical price, and serving as shared storage for as many different VMs or other workloads as you need. This will obviously help customers who have self-managed databases.
If you've migrated a database in place from on prem and you want to move it up to Google Cloud NetApp Volumes, especially if it was already running on ONTAP on prem, you can now run it on ONTAP block storage in Google Cloud NetApp Volumes, just as you were doing on prem but now in the cloud. So it will obviously support databases and other enterprise SAN workloads. In addition, if you're running VMware or Kubernetes using block-based protocols as a backend, you can now move those workloads up into Google Cloud NetApp Volumes using the new block service as well. Another major capability we announced is the integration of FlexCache and SnapMirror into both Azure NetApp Files and Google Cloud NetApp Volumes. We've obviously had both of those technologies for replication and caching available on AFF, and more recently on AWS FSx for NetApp ONTAP. At Insight we announced that we've integrated both of them into Azure NetApp Files and Google Cloud NetApp Volumes. This obviously lets you replicate data around more freely. But one of my favorite capabilities this unlocks is the ability to create an automated, secure, global namespace spanning all three of these clouds and on prem. With FlexCache, you can literally have a folder in any of these locations that is cached at any of the other locations. So you can look in a given folder in AWS FSx for NetApp ONTAP, and you could connect an AWS AI tool to it, but the data behind it could be sitting on Google Cloud, or on AFF, or on Azure NetApp Files, or on all three. You can literally create a global namespace that joins together all of your data without having to actually copy your data. And this is really the promise of a lot of the AI architecture I was talking about earlier today: regardless of where your data is and regardless of where your AI model lives, it has complete, unified visibility into that data and can access and operate upon it. This is a capability that no one else can match. No one else even has a native presence on all three of these major clouds, much less the ability to natively cache data. No additional cost, no separate gateways, no separate software: this is all built directly into ONTAP, and it is unique to NetApp to be able to build this global namespace, without any add-ons, across the hybrid multicloud.
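To give a feel for what wiring up such a cache could look like, here is a hypothetical sketch against the ONTAP REST API; ONTAP does expose a REST API, but treat the specific endpoint, fields, and values below as assumptions for illustration rather than a verified recipe.

```python
import requests

# Hypothetical sketch: create a FlexCache volume in one location whose
# origin volume lives elsewhere (on prem, or another cloud). Endpoint
# path and payload shape are illustrative assumptions.
ONTAP_API = "https://cluster.example.com/api/storage/flexcache/flexcaches"

payload = {
    "name": "projects_cache",
    "svm": {"name": "svm_azure"},            # where the cache lives
    "origins": [{
        "volume": {"name": "projects"},      # the authoritative copy
        "svm": {"name": "svm_onprem"},
    }],
}

resp = requests.post(
    ONTAP_API,
    json=payload,
    auth=("admin", "********"),  # placeholder credentials
    timeout=60,
)
resp.raise_for_status()
# Clients near the cache now see the same folder as the origin, with data
# fetched and cached on first read rather than bulk-copied in advance.
```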
Finally, we made a ton of additional announcements at NetApp Insight that I'm summarizing on this slide. One of the advantages we have is that because these are all native offerings, the innovation on a lot of them is actually done by AWS, Microsoft, or Google, and they're constantly announcing new capabilities. So you can see that in Google Cloud NetApp Volumes they introduced, as I mentioned, FlexCache and SnapMirror; they connected the Google Cloud AI applications with NetApp Volumes; and they've added new data protection and compliance capabilities. On Amazon FSx for NetApp ONTAP, we recently enabled autonomous ransomware protection with AI. We originally only had that on prem; we're now extending it out to the cloud, so it's available on AWS FSx for NetApp ONTAP. You also have the ability to be more economical and reduce your provisioned SSD capacity in place. So if your requirements change over time and get smaller, you can reduce that provisioned SSD capacity and save money. We also introduced multiple new features into the NetApp Workload Factory to help you prepare for an EVS migration, and to analyze your SQL or Oracle environment to make sure you're complying with all best practices. For Azure NetApp Files, we again introduced those two new capabilities, FlexCache and SnapMirror, directly into Azure NetApp Files. We also made multiple improvements to the overall effective price performance of the service, as well as introducing new data protection features, including autonomous ransomware protection. Finally, in NetApp Cloud Volumes ONTAP, our marketplace-based offering available across all three of these clouds, we now allow CVO to connect directly into the Azure VMware Solution, and we've enhanced the overall I/O throughput of CVO running on Google Cloud. So, that's a ton of announcements to pack into about 45 minutes. But hopefully it gives you a taste of all the exciting innovation we unveiled at Insight 2025, and we certainly look forward to helping you learn even more. Please do reach out to us if you want to learn more about anything we've discussed, and continue to put your questions in the Q&A. Just to summarize today: we talked about everything we're doing to build the enterprise-grade data platform for AI with the NetApp AFX AI portfolio, including the new disaggregated AFX storage system as well as the NetApp AI data engine, all supported on Keystone and FlexPod and connected into all of the major clouds. We talked about how NetApp Shift will enable you to easily migrate VMs among different hypervisors, as well as the new NetApp Console, which provides a simple interface for managing your entire NetApp data estate. From a cyber resilience standpoint, the new NetApp Ransomware Resilience service takes everything we had done as the most secure storage on the planet and uplevels it to provide not just autonomous ransomware protection but data breach detection as well as an isolated recovery environment. And finally, a number of features across Google Cloud NetApp Volumes, AWS FSx for NetApp ONTAP, and Microsoft Azure NetApp Files continue to enhance the state of the art with the world's only native hybrid multicloud data management operating system, NetApp ONTAP. All of these are built upon the NetApp data platform. So with that, I want to go ahead and pass it back to Debbie.

>> Thank you, Jeff. Always such great insights, and what an exciting time for us at NetApp to have the opportunity to support our customers in building their AI and data infrastructure strategies with superior scalability, resilience, and security. You presented that very well, Jeff. Thank you. Before we wrap up, I'd like to highlight an exciting opportunity coming in December: the NetApp and NVIDIA AI Impact Roadshow. These in-person events will unite AI pioneers, advanced technology, and actionable insights to support you in creating a strong data foundation for your AI achievements. So, for those of you in New York, Minneapolis, Dallas, San Jose, or Atlanta, you'll have the opportunity to attend. We'll dive deeper into AI innovations with NetApp and NVIDIA, and you'll learn best practices for transforming your data infrastructure, hear some real-world success stories, and network with industry leaders and your fellow AI enthusiasts. Next slide, Jeff. And finally, a few important next steps.
Watch your email for all the details on how you can register for the roadshow or request an AI readiness assessment workshop. These events are designed to provide you with deeper insights and hands-on experiences to help you advance your AI initiatives. And of course, we always want to hear your feedback. So, please take a moment to scan that QR code and complete our short survey for a chance to win a $100 Amazon gift card. Again, we appreciate your participation today. We hope you found this session a valuable use of your time. Have a wonderful day, and we look forward to seeing you at our upcoming events. Thank you.
Learn about exciting NetApp AI launch announcements and discover how a unified, enterprise-grade data platform can power your organization to drive AI with unmatched scalability, resilience and security.