Good morning, good afternoon, good evening, wherever you are, whatever your time zone. Welcome. This is AI Talk, and I am your host, Kevin Crane. Welcome to the show. Today we'll be discussing ways to propel your AI initiatives into new business opportunities. Look, as we know, AI is a tremendously transformative tool, but success depends on more than just adopting the latest tools. To drive real business value, organizations must align their AI initiatives with strategic goals, ensure that they have seamless data flow, and navigate the complex AI tool landscape. So in this session, we'll explore three critical areas essential for AI success. First, we'll discuss how to align your AI data infrastructure strategy with your business goals to ensure AI investments deliver measurable impact. Next, we'll cover how getting the right data to the right tool at the right time with the right security helps models function efficiently and securely. And finally, we'll tackle the challenge of managing a fragmented AI tool ecosystem: how to simplify orchestration and integration for a more streamlined and effective AI environment. In the end, I hope that you'll have some more insights on how to turn your AI investments into tangible business outcomes. So stand by for another great AI Talk in just a second. But before we get going, I want to say thank you to everyone attending today. That includes the folks joining us live on LinkedIn, hello, and everyone joining us today on Zoom. Thank you. During today's discussion, we'd like to hear from you too, so I'd like to encourage everyone attending to participate. Join in with your comments in the chat section, and if you have a question along the way, just jump on in. I will attempt to get some of your questions into the flow of the show today, and we may have a few minutes at the end for your questions as well. All right. We have a lot to cover and a full panel today.
So let's jump on into it and introduce our panel of guests, starting with Kirsty Biddiscombe. Kirsty is a sales business development manager for NetApp. Kirsty, are you with us? I am, good morning. Hi there. Welcome aboard, Kirsty. It is so great to have you with us. Thank you for joining us. Thank you. Also joining us today is Martin Miller. Martin is a former director of AI and machine learning production operations at Levi Strauss. Martin, welcome aboard. Where are you calling in from today? Well, hello, world. I am calling in from sunny California, where it never rains. Except last week. Sunny California. All right, Martin, welcome aboard. Thank you so much for being with us. Also joining us today is Diana Kearns-Manolatos. She is a technology transformation research leader at Deloitte. Diana, welcome aboard. Where are you calling in from today? Hi, Kevin. Great to be here. I'm calling in from New York, the city that never sleeps. And rounding out our panel is Miguel Martinez, a senior deep learning data scientist at Nvidia. Miguel, welcome aboard. Where are you calling in from today? Hi, y'all. So I am in beautiful Spain. Madrid, more specifically. Wonderful Madrid. Wonderful. Well, we are covering the globe today. Thank you, everyone, for joining us. It should be a good session. I'd like to start our discussion today by drawing our attention to an article published recently by AI Business called "Orchestrating AI Agents: The Key to Unlocking Enterprise Efficiency and Growth." Now, this article highlights how businesses can maximize generative AI by using orchestrator agents, systems that coordinate multiple specialized agents. While individual agents handle specific tasks, orchestrators unify them, enabling seamless collaboration across departments like HR, IT, and customer service. If done well, this coordination transforms fragmented automation into integrated ecosystems that drive innovation, help reduce friction, and unlock the full potential of AI-driven digital transformations.
I'm wondering, I'd like to get your impressions, folks, on the article today. Kirsty, what did you think about the article? So I think it's really key to what's starting to happen around AI. We've seen the influx of people using AI with the rise of ChatGPT; essentially, it's made AI tangible. And obviously everything progresses at a phenomenal rate when you're talking about AI. So bringing agentic AI to the forefront and learning how we can automate processes and make it even more constructive to our everyday lives is absolutely incredible. So it's really worth a read just to get your mind into, you know, the art of the possible: how it's working, how it's going to be changing what we're already starting to use today. Wonderful. I'm curious, and, Martin, perhaps you can help us out. We've talked about generative AI. We've talked about agentic AI. Now we're talking about orchestrating AI agents. What's the difference, and how do they fit? That's a great question, Kevin. I think one of the things we need to step back on is, I would call it more augmented intelligence as opposed to artificial. And with that, we're able to leapfrog using some of the knowledge bases that many companies have within their own ecosystems. The author here is from ServiceNow; if the author was from, say, a Workday, those systems and ecosystems have a knowledge base included as part of the product offering. And to be able to mine that product offering with local data that's unique to the client, the customer that uses that product, is huge. And to use it intelligently. One of the challenges is that you may have more knowledge than is relevant, so some of the data needs to be graded or re-graded. Some of it is graded already; you can grade by access or by last created date. But to orchestrate it is like considering a data pipeline: you have to plan it, manage it, and keep it alive. Mhm. All right. Wonderful.
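Martin's point about grading knowledge by recency and access can be sketched in a few lines. This is a minimal illustration only: the field names, half-life, and blend weights below are invented for the example, not any particular product's scoring.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Doc:
    doc_id: str
    last_accessed: datetime  # when the document was last read
    access_count: int        # how often it has been used

def relevance_score(doc: Doc, now: datetime, half_life_days: float = 90.0) -> float:
    """Blend recency and popularity into a single grade between 0 and 1."""
    age_days = (now - doc.last_accessed).total_seconds() / 86400
    recency = 0.5 ** (age_days / half_life_days)             # exponential decay
    popularity = doc.access_count / (doc.access_count + 10)  # saturating count
    return 0.7 * recency + 0.3 * popularity

now = datetime(2025, 3, 1, tzinfo=timezone.utc)
docs = [
    Doc("pricing-faq", datetime(2025, 2, 25, tzinfo=timezone.utc), 120),
    Doc("vpn-setup-2019", datetime(2021, 6, 1, tzinfo=timezone.utc), 500),
]
ranked = sorted(docs, key=lambda d: relevance_score(d, now), reverse=True)
print([d.doc_id for d in ranked])  # the stale document ranks below the fresh one
```

A real pipeline would feed such scores into retrieval so the orchestrator leans on fresher, more-used knowledge first; the decay and weights are tuning knobs.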
Now, Diana, I know you work closely with these emerging technologies. These orchestrating AI agents have the potential to coordinate and transform otherwise fragmented automation. Tell me a little bit more, from your perspective, about where you think this is going to take us in the next few months. Yeah, absolutely. So I think we can look at these agentic capabilities and think about them in two groups: we have those agents that are performing single tasks autonomously, and then we have these connected agents that require a lot more orchestration. And so with some of the survey work that we did, we spoke in the fall with over 2,700 global enterprises, and we asked them where they are in using some of these agentic capabilities that draw on some of the more advanced reasoning capabilities powered by generative AI. And what we saw was that, even in the fall, about a quarter, 25%, of those 2,700-plus enterprises were already using AI agents or planning to use them in 2025. And Deloitte's TMT predictions show that number is expected to go up to about half of organizations by 2027. So in terms of where we see AI going: growing demand for agentic capabilities, and a growing need to think about how traditional AI capabilities, generative capabilities, and agentic and reasoning capabilities all fit together to power individual workflows and eventually workflows across the enterprise as a whole. Wow. We are really seeing a convergence of AI technologies starting to move the needle in terms of digital transformation and innovation. Miguel, what's your take on the article? Do you feel that this is just the tip of the iceberg? Well, this is a really good application of a trend that we have right now. I mean, we are moving from LLMs, from models that are basically able to answer factual things, to models that are able to plan, to reason, to schedule.
So these orchestrating agents are managed by LLMs, what we call reasoning LLMs, that are able not only to produce a single answer, but also to invoke other tools, which can be other agents, and therefore orchestrate that full execution, which is more complex and doesn't have an immediate answer. And the most important part is that this allows you to basically revisit previous steps to make sure that the path it was following is the correct one. So definitely it is the tip of the iceberg, but it is a really interesting technology and approach that we will iterate through for the next couple of years at least. Well, it is fascinating to see this innovation unfold in front of our eyes. The article is called "Orchestrating AI Agents: The Key to Unlocking Enterprise Efficiency and Growth." There is a link here in the chat feature, everyone; please go ahead and take a look at the article. What are your thoughts about the article? We'd like to hear from you too. Is this something that your organization is looking at or adopting now? What do you think? Where do you think orchestrating AI agents will take us in the next few months? All right. Well, very good. This is a subject we could cover in an entire episode, but I'd like to move on to our discussion points today, everyone. And our first discussion point today is aligning your AI data infrastructure strategy with your business goals. This is an important topic. A recent McKinsey report shows that data-driven organizations see profit growth of up to 25%, emphasizing the strong benefits of aligning data strategies with business goals. Diana, I'd like to ask you, what are some key factors to consider when aligning your AI data infrastructure with your organization's business objectives? Absolutely. So a few things to get us started. I think aligning with objectives starts with defining objectives.
And so really deciding what are the tangible business outcomes that you're looking to achieve from the AI investment is very important: not just investing in AI because it's a cool and exciting technology, but really, what is the enterprise or the business unit trying to achieve? In terms of business outcomes, we've done a tremendous amount of research in this space, and for the last three years we have tracked digital transformation measurement against a framework of 46 key indicators. I won't go through all 46, but the five categories to be thinking about are outcomes related to financial KPIs, customer-focused KPIs, process-related KPIs, workforce-related KPIs, and then areas of purpose. So that's the first thing: defining what you're trying to achieve. And then, in terms of connecting what you're trying to achieve with the implementation, a couple of things that I would add. You know, gen AI strategies are very tied to the massive amounts of data that it takes to power them. So another thing that leaders can be thinking about is: what is the real-world universe of data that they need to create and define to power that particular workflow or that particular business case? And so just that task of defining what the real-world universe looks like could be a great way to connect with the business to understand what that use case is trying to achieve, as you define the real world and bring in the data that is actually needed. A second, more tactical thing is thinking about a modern data infrastructure, and I'm sure my fellow panelists will have a lot to add on this topic, so I'll just briefly mention that some of the things to be thinking about for a modern data infrastructure will be things like vector databases to manage some of these semantic workflows and frameworks, and things like knowledge graphs to establish context for the data.
Whereas, you know, prompting is very context-based, so you want to make sure that you're connecting your AI outputs with a contextual understanding of what you're trying to achieve. And then the last thing that I'll note is that often these gen AI applications or orchestrated agents are going to be multimodal solutions, and that requires strong architectural principles and real-time data processing to achieve the desired, reliable business outputs. That's fantastic, Diana. As always, some really great points there. Now, Kirsty, Diana mentioned a modern data infrastructure. How do I assess whether my current data infrastructure is ready to support AI-driven initiatives? So there's a few things that I would suggest, and it relates to what Diana mentioned there: you know, people don't buy AI because they want AI. They buy it because it does a job for them. So, for us, we run services and we work with partners who run services. Essentially, look at your current environment to find out how much data you have readily available, and do a data assessment. We run something called an AI readiness workshop for our customers, which basically means we take into consideration, again, what Diana was saying earlier about the use cases: what industry is our customer working in? What are the top use cases that industry is particularly delivering around AI? And then really drill into what kind of data is needed for that. Now, to look at your environment itself, you need to make sure that you've got constant access to data. So you need to make sure that the speed at which you can access the data is up to speed and where it needs to be for the applications and the platforms that you're going to be using. And you also need to look at what kind of data you have, so a data assessment service on that is really important. And essentially, data points are everywhere now.
If you're looking at a retail environment, for example, they will have a centralised data centre, essentially, where they hold their information. But then they have local remote offices. So they'll have retail stores, possibly thousands of them across the country, and all of them need to be able to access the same relevant data as and when they need it. So being able to create a seamless environment where you can access the information, the data that you need to drive your models, is really key. That means making sure that you've got a data foundation that spans your entire environment and doesn't create any silos, doesn't create any latency around accessing the data. So I would definitely suggest, for those companies and people that are looking for help, work with technology companies, some of which are online today, and we can certainly come out and help you understand what data you have, where that data is, and how to manage and access it. Again, the vector database is really key to this moving forward, because of the amount of data that is going to be necessary to run some of these AI algorithms. So work with your partners, look at a seamless data platform to be able to access your data, and make sure that you can access it very securely, so that the data cannot be poisoned in any way and is reliable for you to use in your platforms. Now, Miguel, we've talked about the data infrastructure, but there's a human infrastructure at play here as well. What role does cross-departmental collaboration play in aligning AI data infrastructure with business goals, and how can organisations ensure all their stakeholders are on the same page? Okay, well, let me step back for a second.
I fully agree with everything that our colleagues have said, but I would also emphasize: try to think not only in the short term, but try to make use of technologies that have already been proven to work, because not all data requires the same type of pre-processing and compute. So try to open your mind and think that GPUs, CPUs, any piece of hardware that you incorporate in your data center has a use, and we need to think carefully about how to use that hardware. But think also about the future. Think about technologies such as RDMA, object storage, or parallel file systems, because they are all really well proven at scale and they will work. And think also about how you would potentially scale up and scale out your data centers, which means basically: how much improvement do I get by just replacing our servers while maintaining the same amount of energy? Or how much could I gain by adding more servers to our data center? Then the humans. That's the most challenging part, always, in any equation, right? I would say: try to have your people motivated, doing their life's work, and that could be enough. There are a lot of different roles in our research team. For instance, we do create huge amounts of data, terabytes and petabytes of data, and that can be done by people specialized in data curation itself, but also by data scientists. What do we do? We try to provide the tools that have been proven, some of them developed by us, that work, and basically make their life easy. We always think that easy problems should be solved without issues; we can fail on complex problems, but not on easy problems. So, for instance, transferring data through the network or cleaning up the data shouldn't be challenging. It shouldn't cost you weeks and weeks of time, because there are already many solutions.
So what I would do is basically give all those people, who cost those companies that long process of hiring, the best tools available, and you will multiply their productivity by a few times. That's the only advice. In terms of people, the roles are very diverse. Even if it is state-of-the-art science, it is not that difficult; with the right tools and with the right training, everyone can do it. That's my take on that. Now, Martin, we have a comment coming in from one of our attendees today, Carl. Thank you, Carl, for your comment and your contribution today. Martin, I'm hoping maybe you can help us out with this one. Carl says businesses have been trying to overcome silos for years, affecting many ambitious digital transformation efforts, and that we'll struggle with agentic AI in the same way. Martin, do you agree? And if so, what can we do about it? Yeah, so I'm not going to disagree with the silos, and the combative nature of how different agendas may be holding their data in those silos. And there's a place to actually build a solution around the silo, and that's something we can do today. We can leverage a lot of the infrastructure-as-code deployment methodologies to actually pull a model next to the data, which is kind of unique by today's standards versus about 15 years ago, when it was a little tougher. And you don't necessarily have to dedicate your physical compute, as in compute resources in a data center; you could still be virtualized, as in a cloud service. So silo doesn't mean bad. But if you need cooperation between the data sets, then you need to work on that cooperation. You start small, you find out what is a measurable success metric, and you work towards that. If you show success in that metric, then you grow the initiative to the siloed groups one by one. And this is how larger organizations come about solving that problem. And the next point is, you know, the word retail came up.
Data from retail usually comes in batches and streams, so there are two different types of data. It's not necessarily called real time. Real time is typically something where you make a judgment that could be life-threatening or financially threatening, usually within milliseconds to seconds. And data comes in batches: it could be every 15 minutes, five minutes, hourly, daily. And then there's other telemetry that comes from third-party services; you know, weather data impacts retail, etc. Those are all examples of ways to think about how data comes in, and there's a reason for certain types of silos. I'm not going to fight the silo. I'm going to fight for where it makes sense to put your money for your investment and show your return. Very good. We are here today with AI Talk. We're here with Martin Miller, Kirsty Biddiscombe, Diana Kearns-Manolatos, and Miguel Martinez. If you have a question or comment for our group, please pop it into the chat feature and we'll try to get it into the flow of the show. All right, folks, I'd like to move us on to our next discussion point, and that is getting the right data to the right tool at the right time with the right security. Look, according to Deloitte, 74% of companies struggle to achieve scalable value from AI integration, with over 90% facing difficulty integrating AI with existing systems. This certainly emphasizes the importance of efficient data flow and security. Now, Diana, I have invoked the name of Deloitte, so I'd like to go back to you. What are your thoughts? What are some best practices for ensuring that data flows seamlessly from various sources to the appropriate AI tools? So I think the first thing to think about is: what type of AI tools are we talking about?
Over the last decade, many organizations that have been using traditional AI tools have spent a lot of time building data lakes and predefined, rigid structures and schemas, because they're running deterministic data systems. You know, they're asking questions that have true-or-false type responses; it's zeros and ones. And so that requires a very different type of data environment and structure for these rules-based systems. Now, when we start to talk about generative AI, it's a totally different universe. We have probabilistic models that, as Kirsty mentioned, require vectorized data. That's going to be more important for understanding patterns in data when you're running inferencing. So, you know, I think it's important to be considering, in a proper framework for data management, both types of data, and what type of data architecture you need to power the solutions based on the questions that you're asking. Are you asking these yes-or-no types of questions? Are you doing probabilistic inferencing? Or are you doing both? And so, in addition to the data lakes, when we talk about data silos, one of the things to be thinking about as part of that data framework is how you bring in federated systems of data management and define ownership of the data that you're bringing into a model. So those are a few of the things that I would bring up in terms of the data environment. And then, in terms of infrastructure, I would also emphasize the importance of thinking through a hybrid-by-design infrastructure: where you're hosting the data, where inferencing is happening, and how you're thinking about scaling the workloads. I mean, Miguel made a great point about some of the scalability options that we should be thinking about.
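The vectorized, probabilistic retrieval Diana contrasts with rules-based systems can be illustrated with a tiny cosine-similarity search. The "embeddings" below are invented toy vectors; in practice they would come from an embedding model and live in a vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "index": topic name -> made-up embedding vector.
index = {
    "returns policy":  [0.9, 0.1, 0.0],
    "store locations": [0.1, 0.8, 0.3],
    "delivery times":  [0.7, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Return the k entries most similar in direction to the query vector."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

print(search([0.85, 0.15, 0.05]))  # nearest neighbours by meaning, not keywords
```

Unlike a true/false lookup, there is no exact match here: every document gets a graded similarity score, which is why this style of retrieval suits the probabilistic models Diana describes.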
So reimagining our existing data center footprint, thinking about how we have a mix of public and private cloud as well as private AI infrastructure that we might be considering across our future scaling needs. Now, Kirsty, let's talk a little bit about security. It has always been a top concern, but it seems like it's getting ever more complicated and complex now. How do we determine the right security measures when transferring and processing data for AI applications? So, for us, we actually use AI in our platforms to look at data access, for example. We've all heard of ransomware, and we all know that ransomware attacks have been happening more and more over the last ten, fifteen years. So we have adapted our platform to incorporate some predictive AI, essentially, which uses pattern recognition and identifies when somebody is trying to access data or information that they don't normally access. And this forms an awful lot of what agentic AI is going to be used for moving forward: almost unsupervised learning, if you like, a model of training your AI to understand and act upon a situation where it can see somebody is trying to access data they're not supposed to, copy data, or delete data, and actually prevent that from happening in the first place. So there are a number of attributes that we feed into the models themselves to recognize when this is happening. Something else that is really key, and the team on the call today are going to be much better at articulating this than I would be, is data poisoning. I mentioned it briefly earlier on, but it's really key to make sure that you can stop that from happening in the first place, especially when we're starting to use agentic AI.
So, for example, if we are booking a holiday, or using agentic AI to essentially make our decisions for us, we need to make sure that the decisions the AI is making are based on really specific, clean data and correct information, because otherwise it can end up causing a real issue on the back end. So the reliability of that data is absolutely critical. Another example would be autonomous vehicles. We all know that autonomous vehicles are starting to take a real place in society now. If they have access to incorrect information, then they're going to run a red light and run somebody over; they're going to drive the wrong way up a one-way street and cause an accident. That may be an extreme version of why it's so important to make sure that your data is secure, but it's the one that most people can relate to. And when you're talking about personal and private data as well, you've got to make sure that nobody can access data that they're not supposed to be accessing. So making sure you've got really clear access controls in place to begin with is really key to making sure that your AI and your business don't fail through faults, essentially. Now, Miguel, you are a data scientist with Nvidia. With data being one of the most critical elements of AI success, how can organizations prevent data silos and ensure that they are using the most accurate and relevant data? Yeah, so I love that you mentioned data silos, which have been mentioned before, because I envision completely the opposite. Let me elaborate. Before, when we had two companies or two elements that needed to interact, we needed to agree on an interface. Now that interface is human language. So the only thing that we need is an endpoint that understands your business and another endpoint that understands another business, and to make them interact.
So silos should become artificial as soon as you expose those interfaces to your data. For instance, imagine that you want to ask an agent: can you please book a couple of flights from Madrid to New York to visit Diana for these days, and this is my budget? And it will basically interact not with one broker of flights, but with any potential broker who is able to understand my request and provide the information I'm asking for. So silos, hopefully, are something of the past in the near future. But then, we are talking about security and how we expose the data, so just a couple of things. We need to remember that an important part of this equation is LLMs, models that understand human language, and they are trained on huge amounts of data. So the first step is to make sure we train those models on the highest-quality datasets possible, with the least bias and the least hate. We really do take this seriously: just yesterday, and this is not advertising, we released a new model, and I counted around 150 people involved in trying to make sure that the datasets and the model really behave as they are supposed to behave. So what else can we add to that? Once we have trained a model, we have to guard against any potential incorrect information, so we need to keep adding more layers, some of them very traditional ones. For instance, we have mentioned vector databases, which are basically a way to store data in a specific format, let's say. So we can add our typical roles-and-permissions capabilities. We can say: you cannot access information in documents of that specific level. But regardless, there is also a chance that something could leak into the LLM or leak past the permissions. So we could also have an extra layer, which is guardrails.
Guardrails basically try to make our LLMs not deviate from certain content or certain ways of approaching an answer or solution to a problem. But even then, because, for instance, Kirsty mentioned personal information, we can also make an LLM judge whether there is personal information in our answer and try to get rid of it before presenting that information to our users or our customers. So the tools are there, the capabilities are there. We just need to find blueprints, find proven solutions from the industry that you can rely on, to make this happen. We are moving really fast, and things that, let's say, three years ago we didn't have top of mind, because we had other problems to solve, we are now iterating on and making more solid and robust. So I do expect really good things on this topic. Now, Martin, we're talking about getting the right data to the right tool at the right time, with the right security. I was shocked to find in the Deloitte survey that 90% of the companies they talked to are facing difficulties integrating AI with existing systems. That seems super high. From your perspective, what are some of the common challenges in this kind of integration, and what should we do about it? Yeah, so I want to pick up where Miguel left off. First of all, you know, I appreciate the statistic; however, that statistic has a lens of who the audience was that they surveyed. When you come back to the API endpoint point of view, where you could have multiple models involved, and that is machine learning models plus agentic solutions together, you can create API endpoints that are going to stay within their guardrails. So: I'm only going to answer questions on my topic; I'm not going to deviate. And that's a safeguard. That's an important piece for the average non-technology company.
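The topic-scoped endpoint Martin describes, layered with the kind of PII scrubbing Miguel mentioned, might look roughly like this. Everything here is a toy stand-in: the topic whitelist, the stub model, and the single email regex would be real classifiers and policies in production.

```python
import re

# Hypothetical remit for this one endpoint: it only answers these topics.
ALLOWED_TOPICS = {"returns", "shipping", "pricing"}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    """Strip obvious personal identifiers before an answer leaves the system."""
    return EMAIL.sub("[redacted email]", text)

def answer(question: str, topic: str, model=lambda q: f"Stub answer to: {q}") -> str:
    # Guardrail 1: stay within this agent's remit, refuse everything else.
    if topic not in ALLOWED_TOPICS:
        return "Sorry, that's outside what this assistant can help with."
    # Guardrail 2: scrub PII from whatever the model produced.
    return redact_pii(model(question))

print(answer("When will my order arrive?", "shipping"))
print(answer("What's the CEO's salary?", "hr"))
```

The key design point is that both checks sit outside the model: the endpoint refuses off-topic requests before inference, and filters the output after, so the safeguard holds even if the model itself misbehaves.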
They should bring in partners with appropriate technology backgrounds to help them solve these problems and make sure that they don't over-invest, because it's very easy to go down the rabbit hole and spend a large amount of capital on something that doesn't deliver results. So keep it small and then grow it, as opposed to, you know, a grandiose corporate initiative. All right. Very good. We are here today with Kirsty Biddiscombe, sales business development manager at NetApp; Martin Miller, former director of AI and ML production operations at Levi Strauss; Diana Kearns-Manolatos, technology transformation research leader at Deloitte; and Miguel Martinez, senior deep learning data scientist at Nvidia. Now look, if you have some questions or comments for our group, pop them into the chat feature and we will attempt to get them into the flow of the show today. Now, Kirsty, we do have another comment coming in from our audience, this one from Rebecca. Rebecca, welcome to the show again today, and thank you for your contributions. Rebecca is asking: how do you ensure your AI data infrastructure remains flexible enough to support shifting business goals, particularly in use cases like customer intelligence or real-time decisioning? Kirsty, do you have some suggestions for Rebecca? So I would say, I like to use the analogy that your data infrastructure is like building a house. The data infrastructure is essentially the foundation, so making sure that you have the right foundation in place means that whatever you then put on top of it will work and will be stable. So for us, it's really important to make sure that data flows from one end to the other; we use the phrase "from edge to core to cloud." I think it was mentioned earlier on, I think Martin said something about it, but actually, when you're looking at AI models, starting small is generally the way that people do begin their journey.
And a huge proportion of AI models actually start out life in the cloud, because people don't want to, or don't have the funds to, invest in expensive technology from day dot without proving that the return on investment is going to be substantial for what they need. So for us, being able to connect your environment to every single data source that you have is really important. And as I say, because a lot of these platforms do start out life in the cloud, being able to access that data seamlessly, as if it was right next to you where you need it, is really crucial. So making sure that you've got a platform that gives you the flexibility to access the data, move it freely around where you need it, and get instant access to it at the right time is going to help lay that foundation for whatever you then put on top of it. So I'd say it's critical to make the data flow completely seamless, so that you're not having to copy silos of data and move them over, taking minutes or hours or even days in some instances, but can just access that data instantly regardless of where it is, which is something we help our customers do all the time. That's really crucial to make sure you don't get these bottlenecks created. And Martin, did you have something you wanted to add for Rebecca? Yeah. So one of the things I brought up a little earlier was about bringing the model closer to the data, and the solution closer to the data, which is a movement that has good legs on it, for the reason of avoiding data being transported and the time latency of data transport. So, the words "real time". And I just want to pick up on real time, because it's important to me; it has more time criticality. You know, when an incident happens, how do you reroute airplanes from an airport, like what just happened recently in London? There's traffic data, there's flight data.
All this has to be navigated through (no pun on the navigation), and that is close to a real-time type of problem. In the real world, someone started a fire and it shut down an airport, right? So that data problem is real, and there are many streams of data that feed in here. So I would call it a data engineering and data infrastructure challenge, not an AI thing, because there are decision systems that aren't AI-driven that depend on this. So look to your partners for your data, look to your partners for managing data, and you'll have a better way to approach your augmented intelligence. All right. Very good. Well, thank you for helping out Rebecca, and Rebecca, thank you for your comment today. I'm cognizant of time here, so I want to move us on to our third discussion point today, and that is simplifying tool orchestration and integration. Now, Diana, tell us a little bit more. How can organizations effectively simplify the orchestration of multiple AI tools within their tech stack? Yeah, a great question. I think maybe I'll bring us back to where we started, with the idea of orchestration agents. That's one choice we see emerging as you bring in new AI capabilities: using them to bring together existing tools and processes through APIs, across a value chain, maybe one value chain at a time. Another interesting approach that I've seen emerging is organizations thinking about how they can move away from some of their legacy infrastructure. All of our survey data shows us that legacy infrastructure, and the need to rationalize applications, is one of the major things holding organizations back from being able to move forward with existing technologies. So an API layer could be one way to get around legacy infrastructure that's holding you back, and to make the infrastructure do more for you.
Another approach that we've seen is organizations using digital twins to simulate some of the moves they'd like to make across their many different tools: if I do this, how might that impact my environment? They run that simulation before they go ahead, because, like the question we had in the chat, organizations are concerned with disrupting the flow of business. So being able to simulate some of the changes you're looking to make before you make them, with digital twin capabilities, is another approach that I've seen. And Miguel, how can we address this challenge of orchestration and integration? Can you share any tips on how to streamline AI tool integration without disrupting other critical systems or workflows within my organization? Yeah. I mean, I think of orchestration on two or three different layers: for instance, orchestration in terms of training, in terms of inference, and also in how we use our applications. I would say that consumers of this technology, or companies that are adopting it, should try to rely on other people's work. For instance, there are already existing APIs used to communicate between different agents, if we are talking about AI, or different standards for exchanging data between GPUs in remote systems, and companies should try to rely on those technologies. For instance, they should try to rely on containers that are maintained by certain people. Nvidia has a lot of them, which we call NIMs, where basically our engineers make sure that those containers use the latest versions with the security fixes, and all of that can be handled by others. And by the way, those are available; I mean, they are there to use.
But what I wanted to say is that even if your engineering team can do that, there are other people who have the skills and the experience, and once you have those components already built, it's not that difficult. We have Kubernetes, we have Slurm, we have MCP. In terms of agents, we have AIQ, which I think is another framework; we have LangChain, we have OpenAI-compatible interfaces. We just need to build on top of those established, de facto standard technologies. Others will come, but there is no need to always chase them, because at the end of the day we need to differentiate between what is research and what is a business, and the type of business and the level of investment you want to make to incorporate this technology into your company. That really makes the difference. I don't have specific tips, but basically: try to reuse what has already been tested and proven, because there is only a really limited number of players that need to be innovating continuously on how they schedule and integrate their tools. For most companies, using Kubernetes, Slurm and a few other well-known technologies, plus an inference server, is more than enough, I would say. All right. Very good. Now, Martin, we have another question coming in from our audience, this time from Lauren. Thank you so much for your contribution today. And Martin, perhaps you can help us out with this one. Lauren is asking: with so many AI tools emerging daily, how do we avoid tool fatigue while still trying to be innovative? What criteria should we use to decide which tools are truly worth integrating? Absolutely, it's a fair question. There seem to be many layers of tools that pop out daily, and I have fatigue in my eyes looking at them, to be honest and transparent. There are tools that will not be viable for you; they're just eye candy. They do one little task, and they do it well today.
But tomorrow they're almost useless, because they're not being updated at a frequency that can be maintained the way the bigger players can. So there will be tools that are valuable, and I would keep this in mind: there are going to be more solutions than you can ever evaluate. So you have to figure out what works for your organization, and also be cognizant of how their access into your organizational data works. They should come through the front door; have a process for onboarding tools, so you don't end up with 2,500 tools sitting in your organization, which can easily happen. I've witnessed it. You want to consolidate. You want to get the best of your problem solved today, move forward, and switch if you need to. Now, Christy, what's your take on this? What criteria should we use when deciding which tools to integrate? So it's such a difficult question to answer, to be honest with you. It's almost impossible, because, going right back to the beginning of this conversation, it completely depends on the use case, what you're trying to do with it, and the outcome that you want from it. The high-level suggestion I would probably make is, well, I think Miguel mentioned integration, so, like Kubernetes, look at systems that integrate with other mainstream platforms, because it means that you're using fewer interfaces. And if you can have fewer interfaces to work through for your data access and management in your AI platforms, it makes things a lot easier; whereas, as Martin said, if you have five, ten, twenty different types of tools to run a business, it gets really complicated. So look for those companies who are reputable, have integration with mainstream platforms, and will help you navigate and access the information that you need. At the same time, I wouldn't be able to name them, because there are too many out there.
But I would look at the industry that you're in and the use case that you have, and then speak to partners who work with these ISVs to understand the most useful integration points and access that they offer. All right. Well, folks, where has our time gone? I'm afraid that we are just about out of time today. But before I let you go, I'm wondering if each of you could provide us with one quick action item that our viewers and listeners can use to take advantage of your ideas and advice. Miguel, I'd like to start with you. Do you have an action item for us today? Well, I will try to answer that one by covering what we were just talking about, tool fatigue, and I think this is the best advice I can give to any company I collaborate with. Basically, when you want to solve a problem, you need to establish a baseline. So take existing solutions, measure how well they solve your problem, and establish a benchmarking dataset, a way to measure your success. After that, you need to be realistic, because, once again speaking as someone who comes from research, going from 90% to 90.1% can cost millions, and maybe for a business that's not interesting. So basically: take existing solutions, benchmark them against your problem, and decide on the benchmarking mechanism to measure your success. And when companies reach their target, they need to reconsider: should they keep investing more, or should they tackle other aspects of the company? I would suggest following that approach with data and with AI in general, because that's usually what I see the most: people want to create, train, customize, fine-tune, and they don't really know how to measure their success. So if they start from the end, that could be easier for them. That would be my suggestion. That is Miguel Martinez with Nvidia. Miguel, thank you so much for being with us today.
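Miguel's baseline-and-benchmark advice can be sketched as a tiny evaluation loop: fix a benchmark dataset, score candidate solutions against it, and use a success threshold to decide whether further investment is worthwhile. The dataset, threshold, and helper names below are purely illustrative, not a prescribed framework.

```python
def evaluate(solution, benchmark):
    """Score a candidate solution against a fixed benchmark dataset.

    `benchmark` is a list of (input, expected_output) pairs; the score
    is the fraction of cases the solution gets right.
    """
    correct = sum(1 for x, expected in benchmark if solution(x) == expected)
    return correct / len(benchmark)

def keep_investing(score, target=0.90):
    """Once the target is met, consider tackling other problems instead
    of paying millions to squeeze out the next 0.1%."""
    return score < target

if __name__ == "__main__":
    # Illustrative benchmark: classify whether a number is even.
    benchmark = [(n, n % 2 == 0) for n in range(10)]
    baseline = lambda n: n % 2 == 0  # an existing, off-the-shelf solution
    score = evaluate(baseline, benchmark)
    print(score, keep_investing(score))
```

Starting from the end in this way, with the metric fixed before any training or fine-tuning begins, is exactly what keeps the decision to "keep investing or move on" objective.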
Now, Diana, do you have an action item for us? Yeah, my action item will be what not to do. Don't fall into the trap: there is continuously going to be a new technology of the day. We were excited about generative AI; now we're excited about agents. These are compounding technologies that are very integrated, making one another better. And so, rather than asking "what should our AI strategy be?", always go back to "what is the business strategy we're trying to achieve?", so that we're using technology in support of that. And don't fall into the buzzword traps. That is Diana Kearns-Manolatos, Technology Transformation Research Leader at Deloitte. Diana, thank you for being with us; it's always great to have you. Next is Martin. Do you have an action item for us today? Oh my goodness, Kevin, of course. Well, let me give a couple of things that are real and easy to ingest. First of all, I have a podcast called Unremitted, where we talk about artificial intelligence and digital transformation with real people. A couple of books to point out: obviously, a book I co-authored, AI in a Weekend; it's a quick read, available electronically. And then here's a really good book I've been eating up. It's called How to Move Up When the Only Way Is Down, and it's from Judah Taub. It's got lessons that are extremely relevant, with the local maximum as a topic. Great book, I highly recommend it. That is Martin Miller. Martin, thank you once again for joining our show today. And finally, Christie. Excuse me, Kirsty, do you have an action item for us? My action item is probably: don't be scared of it. AI is changing the world, we can see that. But actually, it's not necessarily the exciting AI that is going to be world-changing. We're not all going to have robots stood next to us doing our shopping for us or helping us out day to day.
What we're seeing more of is the more boring AI, if you like, that's automating things, exactly like the agentic tools, taking away the mundane tasks that we don't enjoy doing and push right to the end of our to-do lists. But look around you. Look at how much you use AI daily, and consider what you would see AI as benefiting you, in your business and in your personal life, moving forward. But don't be scared of it; that would be my action item. Wonderful. That is Christy Biddiscombe, EMEA Sales Business Development Manager at NetApp. Kirsty, Martin, Diana, Miguel, it has been great speaking with you today. What a great panel; your perspectives and advice are spot on. Thank you so much, and I hope we get a chance to talk again soon. And to everyone joining us today, thank you. Join us next time on May 13th, when we return with another great panel of guests discussing the topic of keeping data private and safe in smart AI deployment. That should be another great discussion. In the meantime, if you'd like to find me and check me out, you can do so on LinkedIn; I'm happy to connect there. I'm Kevin Crane, and you can check out my weekly audio podcast, The Digital Transformation Podcast. But for now, that'll do it for this episode of AI Talk. And until next time, I am Kevin Crane. Thanks for watching.
Learn how to effectively scale AI tools across your key business processes for maximum impact. By effectively weaving AI into their key business processes, organisations can unlock significant value and drive transformative growth.