With all that housekeeping out of the way, I think we're ready to get into our session. I'm really excited today to introduce you to our panelists in this fireside chat. We have Andy Sayer, who's director of AI partnerships at NetApp, and we have Thomas Bien, who's chief marketing officer at Domino Data Lab. Andy, Thomas, welcome. Thank you both for being here. Hey, it's great to be here. Thank you. Hi, Scott. Thanks for having us. So we've got this fireside chat today. As AI adoption accelerates, enterprises are facing new challenges in governance, traceability, and cost management, especially as AI workloads scale across increasingly complex infrastructures. Ensuring full model provenance from predictions to underlying data, maintaining transparency and auditability, and optimizing AI infrastructure for cost efficiency are critical priorities for data science and IT leaders. Really excited to talk about all that today. Love to hear from you both: why does this conversation matter now? Well, I'll get started. There is immense value to be unlocked from AI; I'm just preaching to the choir here. But the reality for enterprises today is that they have two major challenges when it comes to AI, especially if they want to scale AI at the enterprise level. On one end, they cannot get out of the cycle from prototypes and pilots to production. I read recently in a CIO article that 88% of initiatives remain at the pilot level, so still very few of these AI initiatives make it to production. That's one challenge in terms of unlocking the business value. The other challenge is that they cannot build the trust, at the governance level or even at the cost control level, to actually use these models. And now everybody talks about agents, which are composed of models. They really have a big challenge.
They don't have the visibility to trust that the quality and the behavior are going to comply with the expectations on one end, and the regulations on the other. That's the reason why we're partnering with NetApp, and what we are seeing with our joint customers is leading companies stepping up and solving these challenges. That's what we want to share today, because that's really where the main problems are and where the main value lies with AI. But Andy, what's your perspective? Yeah. Thanks, Thomas. That's a great perspective. From NetApp's point of view, as Jensen Huang, CEO of Nvidia, recently said on stage, NetApp has about half of the world's unstructured data under management. And that gives us a unique perspective on what's happening with these enterprise pilots and what companies are able to do to turn them into production systems. One of the things that's become very clear to us is that pilots are often handcrafted. If you think about it, the data is collected and curated very carefully, and the results are great. Then when it comes time to go to production, when we're talking about massive quantities of data that get added to that model or project, suddenly things aren't so clear, and all kinds of new challenges come up in terms of data access and governance. Does the data exist on premises or is it in the cloud somewhere, and how do we get it? All of those kinds of challenges emerge in the enterprise. So it's not a slam dunk to move from a pilot to production. And yet we are seeing quite a bit of that happen today. So the time is now to start talking about lessons learned from the folks who have gone before and can help those who are on the journey today. That's a great framing. I appreciate that from both of you.
And getting into that difference between pilot and production, let's talk about a real world issue, which is AI in hybrid and multi-cloud environments. Probably not a lot of pilot programs are done in hybrid or multi-cloud environments. But how should enterprises balance AI governance, performance, and cost across cloud and on-prem environments? I'll turn to you first on that one, Thomas. Thanks, Scott. I love the way you framed it, because indeed, what we're seeing is that hybrid and multi-cloud is not just a reality, it's a necessity. But still, many enterprises are not there yet. We've seen behaviors, especially with the advent of GenAI, where a lot of things went by default to the cloud. And I want to come back to what Andy said: in the cloud, arguably, certain things are easier. The resources are available quickly, and especially if you're in a pilot where you took a fraction of the data, maybe not that representative, you can go fairly quickly. So that's been kind of a habit. But the reality enterprises are now facing is that the proven, real data is still on premises, and that's for many reasons. It might be regulatory, it might be cost related. It might also be, let's be honest, looking at the world nowadays, that some companies, and both NetApp and we are very present in highly regulated industries, are a bit cautious about their strategic options and their strategic independence. So there is, for sure, more focus on the data center, and that's especially true for AI. So there's a divide that still exists that enterprises have not solved yet. What they need, though, is to address this divide, and we would certainly not advise our customers or enterprises to go one way only.
It's going to be hybrid, for sure, and what matters is the optionality, the ability to have choices in front of you, at the data level, where Andy will provide more perspective, as well as in the way you create and operate these models. You need this optionality. And this is exactly what we've addressed with the integration of Domino and NetApp via NetApp ONTAP: the ability to make these enterprise data assets available in the most governed way, and then provide choices and empower the teams that need them to use them efficiently. What enterprises should look at is really the optionality of not bringing data to AI, which has been the standard until now, and that's why a lot of things went to the cloud, but bringing the AI workloads to where the data is, making sure they can operate in a secure environment with sufficient governance, so that the outcome, be it the model or the operation of these models, can be used in a safe way. It's something most enterprises have not really thought about yet, but now is the time if they want to seize these opportunities. So from our perspective, the optionality is really what matters, along with the ability to switch from one model to the other, from hybrid to multi-cloud, or along the life cycle. Sometimes you're going to train on a certain set of data and then take the model and run it in the cloud, knowing that the data used to train it can remain on premises or in a certain country. That is the flexibility we see becoming critical for enterprises. And the picture that emerges, by the way, is that you cannot really think "I have AI" and "I have data" separately. It's managing the two at the same time that matters.
And that's a lot of what we've been working on with the NetApp team. But Andy, what's your perspective on that from the data side? Yeah, there's a lot to unpack here, Thomas. First of all, NetApp customers enjoy the benefit of being able to leverage their data with essentially a single pane of glass across their on-premises estate as well as data estates that may reside in any of the hyperscalers. So all of that data is at the disposal of NetApp customers who are trying to harness it for AI. And we firmly believe, as Thomas said, that AI is going to continue to be a hybrid workload. It is never going to exist entirely on premises or entirely in the cloud. It's going to be that hybrid mix, because the data resides in different places for different purposes. The key, as Thomas points out, is allowing customers a choice of where they want to work with that data. For instance, if they want to leverage a stack of compute and software that exists on prem, they can absolutely do that, leveraging the tools and technologies from NetApp and Domino. If they choose to use a native cloud set of capabilities, they can leverage that as well. All of that is available as a choice to our customers. So that's a key topic here, and one we'll talk more about as we get into some of the integration we've done. A second issue, beyond simply the regulatory concerns, is that most enterprises have their own corporate governance concerns. For instance, we want to ensure that Sally from marketing is not looking at the data that Fred from finance put into the system, because that finance data may be sensitive. And for both of them, we want to ensure that the models being built don't have personally identifiable information in them, for example.
So we want to make sure that the right data is getting into the models and that the right access is given, based on the metadata that surrounds the data being used in these models. Another point worth considering is that NetApp customers enjoy the ability to bring AI to their data rather than bringing their data to AI. A lot of folks we hear in the market today are talking about setting up specialized infrastructure for AI, where you have a compute farm dedicated to AI model building, data has to get into that environment to be used, and then, once the model is built, it gets pushed back out to where the rest of the data resides. We think that's backwards. From NetApp's perspective, customers should be able to leverage the data in place to do AI, and that's a huge roadblock many customers are facing today that NetApp customers don't have to deal with. So all those concepts come together to enable the enterprise to move much more quickly to take advantage of AI. You know, Andy, I want to follow up on what you were talking about toward the end there, bringing AI to the data, and talk a little bit about the economics of AI infrastructure. Thomas, I'll turn this question to you first: how do AI storage and compute inefficiencies drive up costs, and how can enterprises optimize? Well, there are several ways. As Andy said, there's a lot to unpack there. I'll start with the fact that a lot of AI is, of course, related to computation. If you look at the cost of even the infrastructure Andy was mentioning, GPUs, it's about how long you're using them. So when it takes you a lot of time to get the data and make it available before you can use it, that's already impacting the cost. And that's true in the cloud or on premises.
So there is this hidden cost. People think about scale, people think about power, but at the end of the day all of these things come together and end up in a process that can be extremely costly when it's inefficient. So there's this compute element. There's another element I want to add, which is people time, along the life cycle of these models. Above the data you have a process, think of it a bit like an assembly line: data scientists do research and experiments, build the model, then pass the ball to IT. And today a significant part of the cost, maybe not the biggest, is waiting and doing redundant work. People wait for data and people wait for infrastructure, which creates bad behaviors, which creates other risks; we'll talk about risk a little later. But a lot of cost is created this way because of these inefficiencies. And moving the data to other places, as Andy was mentioning, creates a cascading effect, because you start having more copies of the data. Not only do you run the risk of having incoherent data, which can be very dangerous from a regulatory standpoint, but you also need to maintain this data. So when you start moving data in an ungoverned way, it snowballs, and before you know it you have debt at the data level, which is a huge challenge. All of this costs a lot of time, and therefore money, for the IT and data science teams involved, but also in the core processes, in training and in inference: if data is slow or cumbersome to access, you will have a direct impact on the cost not only to build but also to operate your models.
Keep in mind that we're talking about models that are in constant use. It's not ChatGPT, where I type a prompt, get an answer, and forget about it until tomorrow. We're talking about fraud detection; we're talking about very intensive processes to discover new drugs, and so on. So the numbers stack up very quickly, and now enterprises need to take this into account in the overall economics: is the value generated by the model actually going to match the costs implied? This is even more true because there are aggravating factors, so to speak. It was always true that there were costs coming from inefficient storage and inefficient use of resources. GenAI took that to the next level, and now with agents, which are composed of these various models, you're taking yet another step. So this is something enterprises need to address right now, at the storage level but also at the process level, in how they build and operate these models. Andy, is that what you're seeing? Yeah, let me dive into a couple of those points. One of the things we have been finding is that when you look at the profile of how compute resources are leveraged at model creation time versus inferencing time, there are some interesting observations. When you train a model, say in the cloud, you start the training process and eventually the training process ends. There's a definite start and end to model training. When you look at the inferencing side of the equation, it's basically always on. You may have hundreds or thousands of clients or users taking advantage of that model in their everyday work. So when you look at the economics, the cloud becomes a challenge for many, because you're talking about continuous resource utilization at inferencing time.
That's why we see a lot of models getting built in the cloud and then brought on premises for the inferencing portion: it's a much more predictable workload that they can scale accordingly, without having to pay a cloud provider for accessing GPUs continuously. So we see a real economic calculation there that many customers go through. And then on the data side, one of the interesting things we learned, I believe from IDC, was that the typical data scientist will copy their data 7 to 10 times in the course of building a model. So if we're talking about, say, a multi-terabyte dataset, and you're copying it 7 to 10 times, you're talking about significant storage demands. And of course you multiply that by the number of data scientists and the number of experiments running, and the numbers get very large, very quickly. NetApp has had technology for many years that has become incredibly valuable in today's AI creation world, and that is the ability to take essentially zero-footprint snapshots of that data. A data scientist can take a snapshot of the data and the model at any moment in time and save it as an immutable copy if they want, so they can go back to it for audit purposes, but also potentially share it with colleagues to split up approaches to a particular data set, or revert back to a model that was working better than where they ended up. So being able to take advantage of underlying storage technology is a huge boon to data scientists. And we're going to talk specifically about what NetApp and Domino have done together to ease that burden for the data scientist. Okay, excellent.
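As an aside, the storage arithmetic behind that 7-to-10-copies observation is easy to sketch. The dataset size, team size, and change rate below are illustrative assumptions for the sake of the example, not figures from the discussion:

```python
# Back-of-the-envelope: full data copies vs. zero-footprint snapshots.
# All figures are assumptions chosen for illustration.

dataset_tb = 5            # assumed size of one training dataset, in TB
copies_per_scientist = 8  # midpoint of the 7-10 copies cited above
scientists = 20           # assumed team size

# Naive approach: every experiment gets a full physical copy of the data.
full_copy_tb = dataset_tb * copies_per_scientist * scientists
print(f"Full copies consume:  {full_copy_tb} TB")

# Copy-on-write snapshots initially share all blocks with the source,
# so each one's incremental footprint is only the data later changed
# (assume 2% of the dataset is modified per experiment).
changed_fraction = 0.02
snapshot_tb = dataset_tb + dataset_tb * changed_fraction * copies_per_scientist * scientists
print(f"Snapshots consume:    {snapshot_tb:.0f} TB")
```

Under these assumptions, 160 logical copies shrink from 800 TB of physical storage to roughly 21 TB, which is the mechanism behind the cost reduction described above.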
The next thing I want to get into, and your mention of immutable copies is making me think about this as well, is model provenance as a governance imperative. Why is proving where AI decisions come from more important than ever? I'm happy to dig into this one, and I want to set the context with a few examples from joint customers we have with NetApp, and the types of processes we are talking about. As I mentioned earlier: drug discovery, taking clinical trial data, massive amounts of data with a very serious impact, where regulation is important but may not even be the most important point. They need to get it right, so they can take these drugs to market and have an impact. Other examples include insurance claims and financial transactions. So on one end this is very confidential data, and on the other end the outcome is something that impacts lives. We're talking about mission-critical use cases here. This is where you have a combination of factors, regulation being one of them, where you absolutely need, at the enterprise level, to understand the provenance of the data, how the data has been used, and by whom, along the whole process. Not only how the model is run, the kind of always-on examples Andy was giving before, but how the model has been built, why the decisions were made on how it was built, how the model is used, and why the model is providing this recommendation or decision as part of a certain business process. The key is to be able to understand what happened. It is a regulatory requirement: everything needs to be auditable.
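A provenance record of the kind described here can be sketched in a few lines: tie a model version to content hashes of its training data and weights, so an auditor can later verify exactly what produced a given decision. The field names, hashing scheme, and inputs below are illustrative assumptions, not the Domino or NetApp implementation:

```python
# Minimal sketch of an auditable provenance record: content-address the
# training data snapshot and the model weights, then store the record
# alongside the immutable snapshot. Field names are illustrative.
import datetime
import hashlib
import json

def sha256_of(payload: bytes) -> str:
    """Content hash used to prove exactly which bytes were involved."""
    return hashlib.sha256(payload).hexdigest()

def provenance_record(model_id: str, model_bytes: bytes,
                      dataset_bytes: bytes, trainer: str) -> dict:
    return {
        "model_id": model_id,
        "model_sha256": sha256_of(model_bytes),
        "dataset_sha256": sha256_of(dataset_bytes),  # hash of the snapshotted data
        "trained_by": trainer,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Hypothetical model and dataset stand-ins for illustration.
record = provenance_record("fraud-detector-v3", b"<model weights>",
                           b"<training data>", "alice")
audit_line = json.dumps(record, sort_keys=True)  # append to an audit log
print(audit_line)
```

If either the data or the weights change by a single byte, the hashes no longer match, which is what makes a record like this useful for the audit scenarios discussed next.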
As Andy mentioned, this is usually something that is very time consuming, because you have massive amounts of data, and fairly complex data, because everything is intertwined. But there's also the matter of understanding what happened, what worked well or did not, and being able to iterate. This is how these processes and these models get better. It's not only a compliance thing, something we do just so we can get our drugs to market; it's also inherent to the way AI improves. So you need this understanding, and the ability to pick out what works well and can be improved, or what needs to be dropped. And it's something that has evolved a lot in recent years. Customers have been doing auditability and some form of governance on certain of these processes for a very long time, 30 years for some of them. But it was usually a few technologies, small data sets, and just a few use cases. What happened over the last five years is that the number of technologies has increased, and many of them are brand new, so enterprises are not necessarily equipped. There are far more data sets available, and they're becoming more massive, an outcome, perhaps, of digital transformation. And there are more use cases. So there are at least three dimensions along which this problem of provenance needs to be solved, so that you can really drive an important impact with these models. That's the challenge enterprises are facing if they want to not only use the outcome of their AI initiatives but also keep innovating and keep improving them. Let me give you some real world examples here. NetApp has been working with customers on AI projects for seven years.
Seven years ago, we created our first reference architecture with Nvidia and built, essentially, a pod that would allow our customers to do model training and inferencing. A lot of that work over those seven years was before GenAI really came on the scene, so we were focused on predictive AI: helping manufacturers and bottling plants with defect detection, an array of different uses for AI. One of the most interesting, and probably an easy one to talk about, is self-driving vehicles. We've worked with self-driving vehicle companies who are building the systems that enable these cars to be on the road and drive safely. So what happens if a particular model used an algorithm that actually ended up causing an accident involving human lives, God forbid? Let's say that happens. What happens next? The NTSB comes to that company and says: all right, give me all your data. Let me see the model you used to train this, let me see the test data you created or leveraged, and let me see how this happened, with an eye toward how we could prevent it from happening in the future. The NTSB wants a minimum of seven years of data stored and available on hand for audit purposes. When you're talking about potentially a terabyte a day coming off any of these self-driving vehicles, in terms of data that gets reused and retrained into these models, we're talking about a massive amount of data. But the point is that regulatory environments insist on having auditable copies of models and data, especially where human lives are concerned. So this is a real concern for companies. With GenAI, we end up getting a hallucination, or we get a bad answer to something, in a one-shot environment.
We can live with that. But in an environment where that result is then leveraged into a much more complex or agentic operation, things get more complicated, and you could end up doing some harm to individuals. So my point here is that ensuring you are able to demonstrate how a conclusion was reached is important for all types of AI, and it's really ingrained in data scientists to be able to show their work, to show "this is how we got to this place." And NetApp has offered the tools, again based on technology we built 15 years ago, that are important for data scientists to leverage today. And if I may add to this: I don't want the audience to have the impression that it's just about having a copy of the data, and then you're done. Especially as we see autonomous systems, and we also see a lot of projects in defense, with drones and such. This data is not only there to be auditable after the fact. Governance is also the act of working with and using this data to adapt quickly. That can be at the operational level: hey, this model is not behaving as expected, you need to retrain it, and you can somehow automate these kinds of things. But it's also about taking the right action, sometimes not getting models to production or taking them out of production, having a specific way of working, or having some decisions or validations made by humans. So it's not only about being passive and having a memory in case something goes wrong. It's also about applying the right policies so that not only does nothing go wrong, but you keep improving. That's a big trend we're seeing. Governance has for the longest time been associated with: okay, let's do this.
So we don't have an issue. But now governance is really becoming active: a lot of our customers in regulated industries like life sciences are using governance to improve quality and alignment with business value. There's a lot of intelligence coming from this data. That's a big trend we're seeing. It's not only about keeping the authorities happy; yes, it's about avoiding the regulatory risks, but it's also about avoiding the business risk and actually fueling innovation. So governance goes from a passive tax, if you want, to becoming more of a value driver today, especially when it's done the right way, and we see a lot of developments on that front. Great. So far the conversation has been really interesting at a high level. One thing I want to make sure we capture, for viewers who maybe aren't familiar with the partnership, is whether you can talk about what each of you brings to the partnership, what the point of it is, and what you've been doing for customers. I'm happy to start here. Andy introduced NetApp, a leading solution in the enterprise data space. NetApp and Domino have actually worked together for quite a few years; we've always had integrations. But what we announced starting last September, and we've been rolling out a lot of outcomes since, is joint innovation: really looking at the problems Andy and I have spoken about during this discussion and addressing them. How can you make sure that the best of enterprise data management becomes the foundation for the way these models are created and operated? How can you improve the performance of accessing the data?
And improving data access isn't only about making it faster at the system level. It's also, as Andy was saying, the ability for a data scientist to just come in, look at the enterprise data assets that are available, take a snapshot with one button, and then have a version of the data that is fully governed, fully secured, fully authorized, and work with it. And we're talking seconds. That completely changes the way these people work, because they don't have to wait two weeks for data sets, which is very often what we hear from customers. This ability to access the data without even thinking too much about it, and to really focus on what they do best, the research work and the experiments, is critical. And thanks to the work between NetApp and Domino, it's something they have from their IDE, from the environment they work in. For the other teams involved, it also means a ton of control over the data and how the data is surfaced; they really define who has access. And it makes sure that these data assets, when used in these complex processes, training and inference, are much more performant. Compared to alternatives, we've actually seen a factor of two, twice as fast data access, which, as we spoke about, really makes an impact on the cost side. The immense value NetApp brings, if you imagine a stack with NetApp, Domino, and some of our partners like Nvidia, is really being the foundation, an enterprise foundation, with all the resilience, efficiency, and security that is needed. NetApp is a leader in data management and governance. We come with the ability to take these elements, this metadata, and bring it into the governance of the model. So the outcome for our joint customers is that they have a stack, we'll use this term, that is ready to go for their most mission-critical AI use cases of any kind.
Agents, GenAI, throw whatever technology you want at it; it's really about this foundation and the processes they're going to be able to have on top of it. That's why we're so excited, as you can tell, about the partnership. Indeed. Did I miss anything? No, I think you got it, Thomas. The only perspective I would add from NetApp is that what we've done here is essentially empower the data scientists to do their job more effectively. And we've done it in a way where they can take advantage of the wealth of data management technology available in NetApp without having to know anything about NetApp, or storage, or how to administer volumes, or any of the complexity that typically gets involved in managing the data estate at their enterprise. Instead, we've exposed those rich capabilities directly into the workspace where the data scientist lives and does their work every day. To us, that's the ultimate value: bringing the NetApp goodness, if you will, to the point of consumption, to the point of allowing people to do their work more effectively. And for us, that's a huge win. Enterprise storage has been in the domain of storage managers and storage administrators for the last 20 years, and what we've done here is expose it at the point of consumption in a way that fits their daily workload. So for us, that's the excitement of this announcement with Domino. Great, thank you for that context. So before we get into some audience questions, I wanted to get some closing thoughts from you both. What's one critical takeaway, for each of you, for IT and data science leaders looking to govern and scale AI responsibly? I'll go first. I think the main insight is that 2025 is a very exciting year. We see agents as yet another form factor.
We're getting into the composition era of AI, and this marks the need to industrialize AI. The artisanal era was a lot of fun, but it's time to industrialize: by having the right resources, which is why we're so excited about the partnership with NetApp, by putting standardized processes in place to build and operate these models, and by having the safety required, which is what governance is all about. This is where we think the partnership between NetApp and Domino changes the game. I think we're going to look back at 2025 and say this was the beginning of the industrial revolution at the AI level, in the way enterprises were able to embrace the opportunity and really start delivering AI at scale. And I think we've described very well how to get there. That's great, Thomas. The only thing I would add is some lessons learned over the last seven years with AI. The first is: don't try to unnaturally put AI in one particular place. AI needs to exist where the data lives, and to do that you need the flexibility to run AI against that data. The second is: don't make unnecessary data estate expansions when you can instead make zero-footprint snapshots of that data instantly and let everyone leverage them; that goes quite a way toward reducing overall costs. And the third is: you need to ensure that the security and governance of that data stay in place wherever the data happens to be. The ACLs, the provenance, all the attributes of that data need to follow the data regardless of how it's being used. That's something NetApp customers enjoy and have been able to leverage as they build AI into the enterprise. Okay, fantastic things to think about. Appreciate that. We do have a couple of questions coming in from the audience. First one here.
The question is: what does it mean when you say self-service data management for data scientists, and how easy is it? Either one of you want to grab that one? I'm happy to start on this one. I think Andy actually answered part of it a little earlier. What we mean is that data scientists can take their favorite IDE, connect to what's called a Domino workspace, and directly see the enterprise data sets they are authorized to access, through a joint technology called Domino Volumes for NetApp ONTAP. They take a snapshot of that data and get to work. They will also choose the infrastructure they need, so maybe they need GPUs and a certain compute framework, but in a matter of minutes, if that, they are ready to work. To understand how easy that is, just compare how the day of a data scientist starts with this, versus how the month of a data scientist goes when they have to wait for IT to provision the infrastructure and the access to the data. That's easy: one-button access to enterprise data, the infamous snapshot button that we jointly created. That's what easy means. Yeah, that's well said, Thomas. I really don't have anything to add. That is ultimately why we did this integration: to ease the burden on the data scientists and reduce the time to first token to as small as possible. All right, excellent. Another question here: what are the best examples you've seen of customers sharing GPU resources? I can start. What we've seen at some of our joint customers is that they're very serious, obviously, about infrastructure; that's why they use NetApp on the data side.
And we see more customers making investments, sometimes in their own data centers, in GPU resources, but with the same ease of access for the data scientists that we just described for data: Domino actually provides the same ease of access to GPUs. This is what the IT teams at our joint customers are leveraging to make sure they get good utilization out of these GPUs. These GPUs are very expensive to buy, and if you don't use them, that becomes even more of a waste. So what we've seen is customers pooling these resources and using Domino to manage the access. Domino allows them to supervise and measure utilization of these GPU resources and have visibility on the cost, thanks to a FinOps module that we have. So everybody is working in an informed way. From there they can define best practices, they can do chargeback, and they can manage the access, maybe not throttle it, but optimize the way these resources are used. And this is in a context we see a lot at our joint customers, where many data science teams are coming to this stack with NetApp, Domino, and, let's say, NVIDIA, and IT is offering almost a central service, if you want, to get access to these resources. So whatever the diversity of projects they might have, they come in knowing they can have instant access to these resources. And on the IT side, IT knows that they're optimized. Not only are they securing everything, of course, but they are also optimizing the cost, applying and enforcing best practices, and making sure there's an ROI on their investments. Yeah, I would just add that Domino and NetApp have been advising our customers to build centers of excellence for years now.
These centers of excellence are a pooling of resources, essentially centralizing compute environments and access to storage to allow the highest possible efficiency across the entire enterprise. Now the term of art has become AI factories, but it's the same concept. The concept is that you shouldn't have a proliferation of AI environments around the enterprise; instead, centralize those resources and manage them with tools like Domino and NetApp to take full advantage of them. And this is how you get, quite literally for some of our customers, thousands of data scientists working on thousands of models that are going into production. That's the notion of scale. That's the outcome you get from the industrial approach to AI that we spoke about a little earlier. Exactly. All right, well, I think that's a great place to leave it. Thomas, Andy, this has just been fantastic. Everybody is headed toward AI, and you guys have been working with it for a while. It's great to hear your insights from being in the field with it for years now, and it's really exciting to see this sort of infrastructure being built around AI so that the data scientists can worry about their data science problems, while you handle all the infrastructure things they shouldn't need to worry about. So, a really exciting partnership. Thank you both for spending some time with us today.
NetApp and Domino Data Lab explore how businesses can prove, govern, and optimize AI.