Hello everyone. Good morning, good afternoon, good evening, wherever you are, whatever your time zone. Welcome. This is Digital Transformation Talk, and I am your host, Kevin Crane. Welcome to the show. Today we will be discussing the topic of simplifying and scaling your data storage strategy. Now, in a time when data volumes are exploding and AI adoption is accelerating, getting your storage strategy right is more important than ever. In fact, according to IDC, global data is expected to more than double in the coming years, reaching 200 zettabytes. And this kind of growth demands not only scalability, but also simplicity, security and resilience. So in this session, we will focus on three key areas driving today's data strategies. First, we'll explore how to create a simpler and more secure AI data infrastructure, because AI performance depends on seamless access to clean and reliable data. Next, we'll look at why cyber resilience by design is essential to maximize data protection and security; proactive strategies are critical as threats become more sophisticated and persistent. And finally, we'll examine the latest data storage strategies that empower IT leaders across the entire environment. It should be a great session today, and we're going to dive in. But first I want to say thank you to everyone attending today, and that includes the folks joining us live on LinkedIn, and everyone joining us today on Zoom. Thank you all for being with us. Look, we'd like to encourage everyone to participate today, and we'd like to have you join in with your comments as well. So please feel free to use the chat section if you have a question along the way for our panelists; just jump on in. I will attempt to get some of your questions and comments into the flow of the show, and we'll have a few minutes, perhaps at the end, for your questions. All right, let's get to our great panel of guests today, starting with Adam Gale.
Adam is Field CTO for AI and cybersecurity at NetApp. Adam, are you with us? Hello, Adam. Thank you. Great to have you with us today. Adam, where are you calling in from? I'm calling in from Dubai at the moment. Dubai! Well, welcome aboard, Adam. It's great to have you with us. Thank you so much for being with us on the show today. Also joining us today is Morgan O'Neill. Morgan is director of data protection services at Thornton's Law. Morgan, welcome aboard. Where are you calling in from? Hi, Kevin. I'm calling in from Scotland. Oh, wonderful. Great. Morgan, it's great to have you with us. Thank you so much for joining us today. And also John Carlos. JC is information security lead for Trade Republic. JC, where are you calling in from? I'm calling from London. Hi, everyone. London. Very good. Well, welcome aboard, everybody. It is great to have you on the program today. I'm looking forward to our conversation. I'd like to start our discussion today by drawing our attention to an article published recently by TechRadar, titled "Beyond Backup: Why Cyber-Resilient Storage Needs AI-Powered Intelligence." The article talks about how modern backup solutions must evolve into cyber-resilient storage systems powered by AI, to defend proactively and not just recover data. Some of the techniques described in the article include real-time anomaly detection, spotting ransomware or insider threats before they strike, and also automated compliance enforcement. In short, AI transforms backups from a reactive safety net into an active defensive layer. I'm curious what our panel's thoughts are on the article. Did it miss anything important that we should be considering? What are some of the more important factors we should consider? Adam, tell us more. What are your thoughts on the article today? Sure. Um, I think it was actually an excellent article, and it's an area that I'm very interested in.
It's something we here at NetApp call user behavior analytics, or ransomware protection. And I think, and this is my personal opinion, this is going to be the future of cybersecurity, because we literally can't train enough people. There aren't enough people out there, and even when they are trained, they can't respond quickly enough, because these threats are now AI-led; they are sub-millisecond in speed. So we need to use the tools that are available to us. And this type of tool builds a model of your environment and sees when things are happening which are out of the norm. So me, I attend conferences, I do these sorts of things. If I started creating financial records in my company, that'd be out of the norm. And then we can do things; we can automate responses. And the article touches on this. So I think it's very pertinent, and I think it's even more pertinent to the future of cybersecurity. I hear from other CISOs that this is a game changer. Do you believe it? I do, actually. Yes, I do. And you have this already developing in the field. We have areas like finance, which has arguably been doing artificial intelligence for a very long time. You have Visa's VAAI, which is Visa Account Attack Intelligence. That is a model which will look at card-not-present transactions, which is where I buy something online without my card present, and which are often subject to more fraud than ones where the card is present, and it will give you a sub-millisecond response with a rating of whether or not that transaction is fraudulent. Now, you can only do that with the power of AI, and we're seeing more of these things creep out into industry. So I do, I believe this is the future. Now, Morgan, what are your thoughts on the article? Does it miss anything important that we should also be considering? No, I mean, I think it's a great article as well. Um, and I agree with Adam.
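The behavioral model Adam describes, building a baseline of what a user normally does and flagging out-of-norm actions, can be approximated even with simple frequency statistics. NetApp's actual user behavior analytics is far more sophisticated; the class, action names, and thresholds below are invented purely for illustration.

```python
from collections import Counter

class UserBaseline:
    """Tracks how often a user performs each action type."""
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, action: str) -> None:
        self.counts[action] += 1
        self.total += 1

    def is_anomalous(self, action: str, min_history: int = 50,
                     rarity_threshold: float = 0.01) -> bool:
        # With too little history, stay quiet (cold-start problem).
        if self.total < min_history:
            return False
        # Flag actions this user performs rarely or never.
        frequency = self.counts[action] / self.total
        return frequency < rarity_threshold

# Adam's example: a conference-going CTO suddenly creating financial records.
baseline = UserBaseline()
for _ in range(100):
    baseline.observe("read_presentation")
print(baseline.is_anomalous("create_financial_record"))  # True
print(baseline.is_anomalous("read_presentation"))        # False
```

In a production system the anomaly signal would feed the automated responses the article describes, such as snapshotting the volume or suspending the session.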
I think it makes a really strong case for AI-powered backup and storage solutions, especially, I think, in today's cyber threat landscape. It is so important, because the organizations I work with are processing a lot of different types of data, some of it sensitive depending on the sector: financial data, information about individuals. Um, and I would say I'm approaching this very much from a data protection angle; I'm looking at the privacy and security side of things. So thinking about how you can take a proactive approach using AI-powered backup and storage solutions is really important, because it can greatly help prevent data breaches, or certainly identify data breaches, through things like detecting anomalies in your systems and behaviors, which can be, um, you know, one of the first initial signs that something could be wrong, that there could be a threat actor within your system. And the impact of really significant cyber attacks, certainly the impact that I see across my client base and in the media as well, is so significant from a financial, legal, regulatory and reputational standpoint. Investing in your technology and looking at AI solutions is just such an important thing to do. It can just massively benefit organizations in navigating this really risky landscape that many are operating in at the moment. We've always wished that we could detect threats before they strike. Now, with these AI tools, we can. Yeah, it feels like we could almost be one step ahead of the game, I guess. Um, you know, it gives you that opportunity to be one step ahead of where we have been before.
Um, and for organizations that are processing information that is sensitive commercially, but also about people, um, it's so important to make sure that you've got those adequate protections. You can't afford to take for granted that your security posture is going to remain efficient and sufficient over time; we have to adapt, and we have to look at what these new solutions can offer. And so, yeah, really interesting article. Um, from a data protection angle, you know, really positive in the messaging through there. I think it's great. Now, JC, you're, uh, an information security lead at Trade Republic. Are you using AI tools now, and what has been the impact on your processes? Yes, we use them in a lot of places, especially in the same example that Adam has just provided, which is the one that Visa uses to identify if a transaction is, you know, legit or fraudulent. So we have the same models, and various other models, to identify if a transaction is secure or not. Of course, I see it more on the cybersecurity side. I work very closely with the Fraud Detection team, which is, of course, the team responsible for identifying fraud when our customers are using their cards or making transactions on the app, using our application. Um, meanwhile, on the other side, our function, for example security, we have already started to really transform everything that we do to try to automate as much as possible, because we want to be industry-leading in what we do, and we don't want to be that security function that holds people back from actually building, from developing, because we are Trade Republic. And of course we build applications; we are a fintech, and we really want to be ahead of the game. And for us to be able to achieve this, we have to provide security around what people are doing.
So if we were that sort of security function that blocks everything and really has to assess everything, we wouldn't be able to be as successful as we are today. And thanks to the utilization of AI, machine learning models and automation, we are able to detect and identify threats happening around the whole organization, because if we wanted to hire, you know, a team of people to investigate, detect and try to identify all threats across the organization, it just would not be possible. So thanks to the, you know, tools and solutions and AI capabilities available today, we are able to remain secure while enabling the developers and engineers to still be innovative and actually build solutions and be first in the market to do that. So you're really being more proactive on the security side of things, while also enabling, or not getting in the way of, process innovation and efficiency. Exactly. Which is a very hard coin to flip and, you know, to play, right? It's extremely hard to find the balance, you know: how open can you be, and how secure do we have to be? But now, with these capabilities available in the market, it's much easier to find, you know, the perfect level. Nice. All right. Well, the article is published by TechRadar. It's called "Beyond Backup: Why Cyber-Resilient Storage Needs AI-Powered Intelligence." Take a look; the link to the article is in the chat feature. Everyone, we'd like to hear what you think. Take a look at the article and let us know: is it resonating with you and what your organization is about? Is it missing something important we should consider? All right. Very good. Well, let's move on to our main discussion points today, folks, starting with creating a simpler and more secure AI data infrastructure.
This is an important topic, because as AI workloads become more data-hungry and more complex, the need for streamlined and secure infrastructure is greater than ever. In fact, over 80% of enterprises say that their data infrastructure is not ready to support AI at scale; that's according to Gartner just last year. I'm surprised at that. Adam, what do you believe are the biggest architectural challenges that organizations face when modernizing their infrastructure to support AI workloads? I think I've read the same Gartner research myself, and I think one of the challenges they point to is having the right data accessible at the right time, and good data governance. I think of this as your house and your foundations. You know: bad data in, bad AI out. It's quite simple, really, so you have to have really good data governance. Know what you've got and know where it is. Once you do that, you can really build good things, and that's what we see a lot of in the field. It's actually quite hard to do, but we have tools already in place, we have things we can use, and Morgan touched on this: such as taking PII and sensitive data, sorting it, tagging it using AI, placing it into the right bucket, putting the right protection policies on it, then including or excluding it in our models or our use cases. So that's incredibly important. And I really like the word simple, because personally, I think we over-complicate things so many times, particularly engineers, which I have a background in. We love to create these fantastically complex systems which no one else can really use. Keep it simple, and use intelligent data infrastructure, which is what we do at NetApp, to keep it simple. So those are my thoughts on that, really. We hear a lot about storage optimization. What role does storage optimization play in enabling a better, more secure, more efficient AI data pipeline? That's a great question.
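Adam's workflow of tagging sensitive data, placing it in the right bucket, attaching a protection policy, and including or excluding it from models can be sketched with a toy classifier. The two regex patterns, bucket names, and policy labels below are hypothetical; a real deployment would use far more thorough detection, including the AI-based classification Adam mentions.

```python
import re

# Hypothetical patterns for illustration only: real PII detection
# covers many more identifiers than an email address and a US SSN.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> dict:
    """Tag a record, choose a bucket and policy, and decide model use."""
    tags = [name for name, pat in PII_PATTERNS.items() if pat.search(record)]
    sensitive = bool(tags)
    return {
        "tags": tags,
        "bucket": "restricted" if sensitive else "general",
        "protection_policy": "encrypt+immutable" if sensitive else "standard",
        "include_in_training": not sensitive,
    }

print(classify("Contact alice@example.com for details")["bucket"])   # restricted
print(classify("Quarterly revenue grew 4%")["include_in_training"])  # True
```

The key design point mirrors Adam's "bad data in, bad AI out": classification happens before the data reaches a model, so inclusion or exclusion is a policy decision, not an afterthought.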
Actually, we ran into this recently with, uh, someone who was training a model, I believe it was in, uh, the medical industry, and they were creating whole new data sets every time they wanted to train their model. Incredibly inefficient, as you can imagine, uh, taking up lots of storage. So we started building snaps, which is just where we take a snapshot copy of the data; then you make a little change, you can play with it, roll back. What they didn't realize was that they could also create immutable copies, so they could stop them from being deleted, and time-bound them, because they found that engineers were playing all over the place and messing up their data sets. So they created time-bound, immutable snaps that couldn't be deleted for a week, a month, and so on. And then when we got talking about this and what they were trying to develop, we actually dug into the regulation side of it. JC touched on this: there's a really fine line, I think personally, to walk between keeping things secure and keeping things innovative; the more you secure things, the harder it gets to innovate. Do you agree, JC? Very much agree. I am in this industry as we speak, you know, working on a daily basis on the same problem that we're discussing here, and it is a very hard problem to tackle, especially when you are an organization processing, you know, multiple millions and sometimes billions of transactions per day, to find the right level of being secure and productive. It's extremely hard, and if you don't have the right technology in place, you won't be able to achieve this. You will fail.
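The time-bound, immutable snapshots Adam describes can be modeled in a few lines. This is a conceptual sketch only, not NetApp's snapshot implementation; the class and dataset name are invented. The essential behavior is that deletion is refused until a retention lock expires.

```python
from datetime import datetime, timedelta, timezone

class Snapshot:
    """A point-in-time copy that refuses deletion until its lock expires."""
    def __init__(self, dataset: str, retain_for: timedelta):
        self.dataset = dataset
        self.created = datetime.now(timezone.utc)
        self.locked_until = self.created + retain_for

    def delete(self, now=None) -> bool:
        """Return True if deletion is allowed, False while still locked."""
        now = now or datetime.now(timezone.utc)
        if now < self.locked_until:
            return False  # immutable: inside the retention window
        return True

# A week-long lock, as in Adam's example: engineers can experiment and
# roll back, but nobody can destroy the baseline copy early.
snap = Snapshot("training-set-v3", retain_for=timedelta(weeks=1))
print(snap.delete())                                           # False
print(snap.delete(now=snap.locked_until + timedelta(days=1)))  # True
```

Because a snapshot only records changes against the base copy, teams get cheap experimental branches of a training set plus a tamper-proof baseline, which is exactly the efficiency win in the medical-industry story.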
And this is the problem that, you know, many organizations face, not just fintech, but I can say in every industry I have worked in: if you don't find the right technology stack with the right level of automation and, let's say, AI, then in this day and age, with the amount of data we have and the amount of complexity, you will fail, and it will be very expensive, it will be delayed, and you're just not going to be able to achieve the innovation and the security that you want to achieve. But it's a very hard problem to tackle. It's not easy, and again, if you don't have the right technology, I can, you know, frankly say you will fail. Mm. Now, Morgan, how do legal and compliance considerations factor into the design of AI-ready infrastructure, especially in highly regulated sectors like law, for example? Yeah. Well, yeah, I mean, it's really interesting. Again, it's not an easy one. Um, because I advise my clients a lot on developing AI governance structures. So we're talking about, you know, what should you be considering when initiating a procurement exercise to solve a problem that involves, you know, an AI system? What should we be looking at? What should we be planning? How are we documenting our use cases? So for things like, you know, storage problems: what is the problem, what is the solution? Find the appropriate sort of technical solution, as Adam and JC have mentioned, and then think about, you know, how you implement that into your organization in a safe way. So going back to what Adam spoke about in relation to data structures and data classification: you know, what does your data look like? Because if your data is a bit of a mess, then your use of the system is only going to be as good as the information you put in. So if you put bad data in, you're not going to get good results out of an AI system.
So it's about doing that planning and organizing, and actually having a strategy for initiating that exercise, finding the tech, implementing it and building it into your systems, and then building it into your business processes, I should say, and then making sure there's the right level of AI literacy and some control, you know, around the use of certain systems as well. So should we be putting PII into these systems? In some cases, yes, it's absolutely essential, you know, but in other cases you might want to avoid that. What about confidential information? Um, you know, any sort of IP considerations? So we look at that whole piece, and make sure that there is, you know, a structured plan and an impact assessment carried out by the organization to document those risks, and then, you know, have corresponding mitigating controls, and then the monitoring. Once you have this in place, part of your governance structure should be to monitor how it works. You know, what are the human checks that you will put in place to just make sure that the output's correct, that you're not dealing with any bias or any, you know, strange results, things like that? But to go back to what Adam said about simple: you know, these things could really grow arms and legs, so what you're looking at is a fairly simple structure. And I always talk about this three-step approach: the initiation, the implementation and the monitoring. There are your three key areas. What are you trying to achieve? What are your use cases? How do you implement this safely? Do your risk assessments, and then have a process for monitoring the use of these systems, and make sure that you're getting what you want out of it, but also that, you know, there are no ongoing security or data protection risks. So in a nutshell, that's generally how I would approach this from a governance and legal and compliance perspective. Simple but complete. All right. Well, we have a question coming in from our audience today, folks.
Um, I'd like to put it up to the entire group; feel free to jump in if you have some feedback. This one's coming in from Clive today. Clive, thank you so much for your contribution and your comment. Clive is asking: how could organizations simplify access to diverse AI data sources without compromising security? Like, what steps are organizations already taking? Any quick wins to simplify access for folks? Anyone have a comment for Clive? I would suggest abstracting this layer. Um, so abstracting the access layer from the actual data layer. And again, make your data sets immutable so you have copies of them; if they are infected or changed or broken in any way, you can always revert back to a good copy. I think this goes back to a fundamental question of using the basic tools available to us, such as access control, something that's often overlooked. Really, we give people too much access. We should only give just-in-time permissions. One of my favorite tools here that we have, which I always joke was designed for me because it catches people doing things they shouldn't do, and I'm always doing something I shouldn't do, is MAV: multi-admin verification. It takes two people to do a big thing, so two administrators need to sign off and turn the keys at the same time, like a Hollywood movie where they launch the nuclear missiles. What often happens when you use these tools is you catch people just making a mistake. Someone looking over your shoulder says, you shouldn't really do that, you might break something. One time out of ten, you catch someone doing something nefarious. So to answer the question, I would use the tools we already have, give just-in-time permissions, and abstract the data and the access level. Yeah, I think I can add... Sorry, Kevin. Go ahead. Please go right ahead. Yes, please. No, thank you. Um, I think this question can be taken to a very complex level, or simplified, like Adam has just done, which is amazing.
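Adam's "turn the keys at the same time" picture of multi-admin verification reduces to a simple invariant: a sensitive operation runs only after two distinct administrators approve it. The class and operation name below are an illustrative sketch, not NetApp's MAV implementation.

```python
class MultiAdminVerification:
    """A sensitive operation executes only after enough distinct admins approve."""
    def __init__(self, operation: str, required_approvals: int = 2):
        self.operation = operation
        self.required = required_approvals
        self.approvers = set()

    def approve(self, admin: str) -> None:
        # A set means the same admin approving twice still counts once.
        self.approvers.add(admin)

    def execute(self) -> bool:
        """True only when the two-person rule is satisfied."""
        return len(self.approvers) >= self.required

# One admin alone cannot proceed, however many times they try.
request = MultiAdminVerification("delete snapshot vault")
request.approve("alice")
print(request.execute())   # False
request.approve("alice")   # repeat approval is ignored
print(request.execute())   # False
request.approve("bob")
print(request.execute())   # True
```

The value, as Adam notes, is less about stopping villains than about forcing a second pair of eyes onto every destructive action.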
Uh, but yeah, if I can give an example from my organization, where we have a lot of data points: what we had to do is really sit down around the table with various team members, different team heads, leads, etc., to understand where the data is coming from, all the data we have, the data that we want to use to actually train our AI models, the data that we don't want to use, and what is important to us. So the question here is asking, you know, how to simplify access to diverse AI data sources, and we as an organization have loads of data sources, and the only way to get clear, secure and robust access to all the data sources that we require was to create a robust data pipeline. So we have a robust data pipeline where we do all the necessary checks: we do data quality checks, we do security checks before the data is ingested, and then we have all the tiers of data. So of course we have development and staging, and at each layer we have different, uh, automated checks which will check the quality of the data and check the security of the data. And after the data is actually in production, and even after it has been used to train the AI model, we have all the security around it to make sure that the data is secure: you know, is it free of malware? Can we detect if a user is doing something suspicious? And we have all the other security tools around the data when it's in production. But basically, we have a very robust data pipeline to check all the data from the time the data is not yet of good use, until it is treated, up to the point where the data is actually used in the production environment. And we basically had to sit around the table with all the leads, with all the tech people, to really get to how to build this trustworthy data pipeline. And we keep enhancing it; this data pipeline is not something static, right? You don't just implement it once. You add more data, you add different tools.
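The tiered pipeline JC describes, where data must pass automated gates at each layer before reaching production, can be sketched as a promotion loop. The gate functions and tier names below are hypothetical stand-ins; a real pipeline would run schema validation, malware scanning, and classification checks at each stage, as JC outlines.

```python
# Hypothetical gates for illustration: each tier adds its own checks.
GATES = {
    "development": [lambda r: "source" in r],                    # provenance known
    "staging":     [lambda r: r.get("value") is not None],       # quality check
    "production":  [lambda r: not r.get("quarantined", False)],  # security check
}

def promote(record: dict) -> str:
    """Advance a record tier by tier, stopping at the first failed gate."""
    reached = "rejected"
    for tier, checks in GATES.items():
        if not all(check(record) for check in checks):
            break
        reached = tier
    return reached

print(promote({"source": "cards-api", "value": 42}))  # production
print(promote({"source": "cards-api"}))               # development
print(promote({"value": 42}))                         # rejected
```

The point of the structure is JC's: data that cannot prove its provenance, quality, and security never reaches the environment where models train on it, and the gates can be extended over time without redesigning the pipeline.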
It's a thing that you will continue to develop and improve. But I would say start with your data pipeline. I would just add a point to that as well: when you're designing these things, just make sure that you're adding in these controls as you go. So when we talk about data protection and security controls, we always say bake them in from the start. So when you're approaching this, when you're starting to pull your data sets together, think about that from the beginning, because what can be really frustrating for an organization is when you get several miles down the road, you've got your data pipelines, you've done all this work, and then you think, now we need to start baking in controls. So do that on the front end. Start right from the beginning, start thinking about it immediately, and make it one of your considerations. Because all of the considerations around data quality and everything JC's just mentioned are super important, but don't overlook your data protection and security obligations at the same time. Make sure that is front and center when you're making decisions about approaching these, uh, these new processing activities. I would like to, you know, double-confirm this. Yeah, please make sure you do this, because when you build the data pipeline you need to, you know, get everyone involved. If you miss, you know, the data protection team, everyone who actually needs to understand this whole process and the data you're dealing with, and you involve them afterwards, you'll be in big trouble. You will literally fail. So I highly recommend making sure to involve all the key stakeholders, including compliance and privacy and security. This is Digital Transformation Talk. We are here today with Adam Gale, Field CTO for AI and cybersecurity at NetApp. We're also here with Morgan O'Neill, director of data protection services at Thornton's Law, and John Carlos. JC is information security lead at Trade Republic.
And thank you, Clive, for your question and comment today. If you have a question or comment for our panelists, please feel free to jump into the chat feature and we'll get some of those questions and comments into the flow of the show. All right, folks, I'd like to keep cruising. Let's move on to our next discussion point, and that is cyber resilience by design: maximizing data protection and security. With global cyber attacks increasing by 38% in 2024 compared to the previous year, designing for cyber resilience is no longer optional; it's foundational to protecting sensitive data and ensuring business continuity. Adam, what does cyber resilience by design mean to you in practical terms, and how is it being implemented in enterprise storage solutions today? Yeah, this is a fascinating subject, and it's one that we're having a lot of conversations with our customers about. And it also kind of segues into operational resilience too, because the two are intertwined. And if we keep it framed in the context of AI, which we've been discussing today, the EU Artificial Intelligence Act makes reference to security in the context of AI quite a bit. They are very worried about people messing with AI. In fact, there's one part of it which talks about data poisoning: they're worried about people injecting dirty data into AI models and then making them do nefarious things. So that's a long way of saying this is a conversation we have a lot. One of the things, as I think we've already discussed here, is that it's foundational, building in from the ground up. I often talk about it being almost like a hotel, where you have an access key card to get access to the room you need, and only the room you need, and then once your access is gone, that's it, you're kicked out of the hotel. Well, at least I get kicked out. We build our systems like this, just in time: you can only go into the bits that you need to, and this really does protect things.
And then a nod to the resilience side of things, which I think is very important: as AI is introduced into high-risk areas or our critical infrastructure, we absolutely need to make it cyber resilient, because when this falls over, and it will, you need something to pick up the slack, whether it's someone with a pen and paper or another system. And we have an interesting situation at the moment where a lot of nuclear engineers are retiring and we haven't trained the next batch. We could have the same situation in the future, where AI replaces a lot of skills and knowledge, and then what happens when those AIs start falling over? Now, Morgan, how do you advise clients on cyber resilience by design? How do you advise your clients on building systems from a legal standpoint? Yeah. So, I mean, we look at the regulatory requirements that are in place at the moment, um, I suppose around the use of data, and data about individuals most often. So we're looking at the Data Protection Act, and we look at the UK GDPR, the EU GDPR where relevant, and also the EU AI Act; all of these can be quite difficult to see at once. And now we have the Data Use and Access Bill, or Data Use and Access Act, I should say, which has just received Royal Assent in the UK. So thinking about the UK aspects, I think we're seeing a bit more regulation coming in, but we're also seeing opportunity, and indeed things like the Data Use and Access Act change the way that systems can be used for things like automated decision-making; where that comes in, it allows the responsible use of AI to make automated decisions about individuals. So that's a bit of a step change. So where I would really start is, well, as I've mentioned before, what is it you're looking to do? You know, what is your use case?
Um, you know, for your particular product, and then bringing in, um, the cyber resilience side of it: you know, looking at those obligations under data protection law, which sit within the GDPR typically, and mapping out, you know, whether you are developing this process, or will be using this system, in a way that complies with those principles of data protection law, and then those other laws like the EU AI Act, depending on what it is you're doing and whether they might be relevant. And then of course the Data Protection Act and now the Data Use and Access Bill, Act, I should say, sorry; it was only passed a couple of weeks ago and I keep calling it the bill, but it is an act now. Um, we're also awaiting an update on our UK cyber legislation as well, um, which we've been threatened with for some time, and it's not quite landed; there was talk about it landing this summer. So just really being aware of those legal obligations and those regulatory obligations, and taking those obligations and, I suppose, applying them to what it is you're doing in practical terms. Um, don't overcomplicate things. Think about, okay, what is the requirement? How does this fit in with our business processes, how we use data, and how we're looking to utilise technology? And really map all that out against your obligations. It can sometimes be quite complex; it really does just depend. But usually when you break these things down, break the obligations down and understand what they mean in practical terms, that can then, you know, really help you to create sensible policies and procedures and put in place reasonable controls that make business sense. Um, and then you're able to demonstrate that accountability with those, um, legal and regulatory requirements. Now, JC, at Trade Republic, how do you approach testing and validating, uh, cyber resilience strategies in a, in a live environment? Yes.
I mean, first of all, we have to look at the regulation that we have to, you know, abide by. Trade Republic is a bank in Germany, and we are a licensed bank, which means we are regulated by banking regulation, so there are lots of security requirements, governance and compliance, you know, data protection, that we have to follow. Of course, we also need to abide by GDPR and all the other regulations that sit across all the other countries where we have, uh, a bank or where we operate as well. Um, a great example I can give: recently we had DORA, right? DORA, I say recently because it's one of the most recent, uh, requirements, and here in the question we talk a lot about, uh, resilience, and this is one of the things that we have been checking for the last, you know, year. It doesn't matter how good our fraud detection is if that system is down, right? So we really need to check and make sure that system is up and running and actually is resilient to failure. And how do we test our, uh... you know, we don't test in live production, right? We know the question you asked, how do we test this on a production system? We don't, right? We have development and staging environments so that we can test this properly. But of course, once it is in production, we have other necessary checks and tools around it to make sure that the system is running securely. So all the security of the data: who has access to it, where the data is going, any suspicious activity. Is the data classified or not? If it's not classified, we get the necessary alerts and it goes to the necessary team. So we have all the security aspects in place before we actually put the data into production, and then when the data is in production, we have all these controls around the production system. Just to build on what JC said there, if that's okay. He mentioned DORA, the Digital Operational Resilience Act. Fascinating.
Part of that is it actually calls for autonomous monitoring systems looking for things going wrong in your environment, with autonomous responses, which is what the article we read today touches on. So the regulators are asking you to use the tools the article is talking about. I think that's fascinating: even the regulators recognise that we have to use these things because the threats are increasing so much. The second point is that DORA doesn't just cover the financial sector as in the banks. It covers insurance, it covers crypto, it covers central counterparties. It's huge, because they've realised that if you compromise one, it's a chain: you can start compromising the others. So this is ever more important in our connected world. Just to quickly add to that as well, sorry, Kevin, if you don't mind. When we think about those regulated organisations that are bound by DORA and other similar regulations, their supply chain will also have those obligations passed down. So if you're a supplier of services to these regulated organisations, you can often be caught by those requirements and obligations; they will pass those obligations down within their contracts. It's really interesting, because if you're a service provider, say a managed IT services provider, you may then be caught by this, and the expectation might be that you have autonomous tools that can support and protect that organisation's environment too. I'd just like to point out something very important that Morgan and Adam have mentioned, which is the autonomy of these systems. For example, we as a bank have hundreds of suppliers. It is just impossible for us to monitor the security of all our suppliers without autonomous systems or the utilisation of AI. It's just impossible unless we spend multi-millions on people, and even that still wouldn't be reliable enough, right? 
Because it's manual work. You'd be checking a lot of technical systems; we are a very technical organisation with a lot of integrations with other organisations. Without these autonomous systems and the utilisation of AI, we just wouldn't be able to be as innovative and as secure as we are. All right, I want to be mindful of the time we have left, but we do have a couple of comments coming in from our attendees today. I'll open it up to the panel, just briefly. We have a comment from Jose, back on data poisoning. Jose is asking: is data poisoning something that threat actors are pivoting towards? Is that something we're seeing more and more often now? Anyone? I think I mentioned that, because it's something the EU AI Act actually discusses. Data poisoning isn't a new concept; we've seen that with SQL injections and things like that. But I am unaware of any real-life examples of AI being intentionally poisoned just yet, though I am aware of some fun examples. For example, Google's Gemini: you used to be able to ask Google for a pizza recipe and it would tell you to put glue in it. Now, I can't cook, and if I followed that I probably would put glue in it, because I'm useless. But the point here is that it scanned a dirty data source, Reddit, and put that in there. It's not hard to see the next logical jump: I could start feeding these things dirty data on purpose, like an autonomous driving system, and then have it do bad things in the future. Now, don't worry, autonomous driving systems aren't subject to that and they won't be; there are too many controls. But it is kind of what I think Jose is saying there. It is a sum of all fears: it could happen. I'm not aware of anything, but Morgan, JC, if you're aware of something, please do correct me. I'm not aware of any real-life examples. 
You've just given us a real-life one, but not one that is hugely impactful yet. I think it's something we definitely need to be really aware of, though. And I suppose from my perspective, you've got the outputs and that intentional injection of false information that could then change a data set, alter it, modify it, or delete a portion of it. If that was to happen and it compromised a business's data set, then that could result in them basically being subject to a data breach and to regulatory intervention, because it's not just the loss of data that constitutes a breach; the modification, alteration or partial deletion of data can also constitute a data breach. It can bring your organisation down in a moment. So I think it's definitely something to be aware of, but I don't have any anecdotes on that, I'm afraid. It is something I think we need to be mindful of. Another comment here from our audience I'd like to get to: that's a lot of acts, bills and papers. Are organisations who are compliant on paper also resilient by design, or are they polar opposites in terms of security mindset? I'm sorry, I think it was me that spouted quite a lot of different acts and bills and things like that, so apologies, I may have spoken a lot about that. I think there needs to be a balance. And again, I think it's about the practical application of the rules to your business situation and making sure you're meeting those obligations. And by design, the security and data protection aspects need to be built in and baked in. The laws and regulations don't tell you exactly how to do things; they don't tell you what products to use or what controls to set. 
They really more or less just tell you that you have to have sufficient and proportionate controls and security in place to protect your systems and data. So I think it's about designing the controls and the security levels that you require for your organisation, and that in turn helps you to demonstrate compliance, but it has to be proportionate. It's not sufficient just to be compliant on paper; you have to be doing what you're saying you're doing, and you have to be able to demonstrate accountability. And one last question from our audience before we move on, just briefly: isn't ChatGPT poisoning itself? There's so much gen AI data out there online that it's training itself on its own terrible output. What are your thoughts, folks? I have an interesting thought on that one. I love it. I do see that; it's like a snake eating its own tail, isn't it? Well, what we can do here is, as I mentioned earlier, create immutable backups of our data, a good point in time. I actually think some of the winners of the AI race are going to be the people with the cleanest, best data snapshot of what we knew was true. So if I was an organisation and I had very valuable data, and by the way, they're coming up with financial standards to monetise data and classify it on the balance sheet, it's that important, I would be protecting that data at all costs. I'd be placing it on the most secure storage platforms and making immutable copies of it, because I don't know what's been put into my system now. I'm creating PowerPoints using AI, going into my group SharePoint and so on; it's a self-fulfilling prophecy. So I'd use the tools we have now to protect our data. All right, I want to be mindful of the time we have left. Let's move along, folks, to our final discussion point today. And again, we're here with Adam Gale, Morgan O'Neill and John Carlos, JC. 
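Adam's "clean point-in-time snapshot" idea can be illustrated with a small sketch. Note the hedge: true immutability comes from the storage layer (WORM-style or object-lock features on the platform you use); this only demonstrates the verification side, fingerprinting a known-good dataset so any later silent modification, poisoned or otherwise, is detectable. The dataset contents are invented for illustration.

```python
# Illustrative only: fingerprint a point-in-time dataset so tampering
# (modification, partial deletion, poisoning) can be detected later.
import hashlib
import json

def snapshot_digest(dataset: dict) -> str:
    """Deterministic SHA-256 fingerprint of a dataset at a point in time."""
    canonical = json.dumps(dataset, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

known_good = {"customers": 1200, "model_training_rows": 50000}
baseline = snapshot_digest(known_good)

# Later: has anything been silently altered?
tampered = dict(known_good, model_training_rows=49999)
print(snapshot_digest(known_good) == baseline)  # True
print(snapshot_digest(tampered) == baseline)    # False
```

Pairing a digest like this with genuinely immutable copies gives you both the unchangeable data and the evidence that it is unchanged, which is the "what we knew was true" snapshot Adam describes.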
I want to ask a little bit about managing data across multiple cloud environments, hybrid data. As environments grow more distributed, IT leaders struggle with data across hybrid and multi-cloud environments. Adam, what trends are you seeing in multi-cloud or hybrid storage strategies that give IT leaders more flexibility and control? Well, they are definitely asking for the ability to move workloads back from the cloud or to other clouds, and as JC will probably comment here, that's part of the theme of DORA, the Digital Operational Resilience Act for the financial sector: the ability to repatriate or move critical workloads from one cloud to another or back on premises. That is something we're seeing a lot of, and luckily enough that's part of our core product portfolio. Being able to do that, and most importantly, being able to do it and prove you can do it, running a readiness drill and proving it to the regulators, because you need all this in a dashboard. If I can do it but I can't prove it, what use is it to anyone? Well, that's a good point. You have to be able to prove it; otherwise, what use is any of it, right? All right. Morgan, how can modern storage strategies help organisations stay ahead of data governance challenges? What do you recommend? I think data storage and data retention are always a challenge for organisations. The organisations I work with tend to struggle quite often with maintaining retention schedules. It can be complex, it can be very manual, and nobody has the time to go through data sets manually and work out what needs to be deleted and what falls within a retention schedule or not. So I think there is opportunity to automate and manage that: reducing storage costs, maintaining better security, and avoiding that excessive processing of personal data, which is a big no-no under the GDPR, both the EU and UK versions. 
So it helps you, I think, to put in place a more structured, efficient and compliant process for managing storage and retention of data in line with the regulations. There's real opportunity there, I think, to build that out and use automation and AI to support your compliance in relation to storage. Now, JC, I am a business guy. I believe if you can't measure it, you can't manage it. What key metrics or KPIs guide your decisions when evaluating, managing or scaling a storage solution? Oh, 100% right. If you cannot measure it, it doesn't exist. That's what I tell my team members, right? If you're building something, we need clear metrics and KPIs on exactly what we have built, and we need to show value for it. So some of the key metrics I utilise: it's first understanding what the business is trying to achieve. Sometimes we just go to the internet or talk to someone and they say, oh, use technology A, use technology B, and you just go for it. That's not the right way. First, understand your organisation and what you are trying to achieve, not just in the short term but in the long term as well. Where do you want to be, not just currently, but in the next three years? And it really depends from organisation to organisation; each organisation will have different requirements. Some of them will be more classic; some of them would like something more innovative and advanced. And of course, you need to make sure you have the appropriate technology, team and expertise, internally or externally, to help you achieve that. But truly understand what your organisation is trying to deliver and achieve. For example, we as a bank are currently going through a huge transformation programme to do with how we manage our data, and we went through a lot of planning with a lot of people, and we are still discussing things. We will always be discussing what is the best place to host our data. 
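Morgan's point about automating retention schedules lends itself to a small sketch. Everything here is an assumption made for illustration, the categories, the retention periods, and the record shape; real schedules come from your own legal and regulatory analysis.

```python
# Hypothetical sketch of automated retention: map each data category to a
# maximum retention period, then flag records that are due for deletion.
from datetime import date, timedelta

RETENTION = {  # category -> illustrative maximum retention period
    "marketing_consent": timedelta(days=365 * 2),
    "transaction_log":   timedelta(days=365 * 7),
}

def due_for_deletion(records, today=None):
    """Return IDs of records held longer than their category allows."""
    today = today or date.today()
    return [
        r["id"] for r in records
        if today - r["created"] > RETENTION[r["category"]]
    ]

records = [
    {"id": "m1", "category": "marketing_consent", "created": date(2020, 1, 1)},
    {"id": "t1", "category": "transaction_log",   "created": date(2020, 1, 1)},
]
print(due_for_deletion(records, today=date(2024, 1, 1)))  # ['m1']
```

Even a simple rule like this replaces the manual trawl through data sets that Morgan says nobody has time for, and it produces an auditable output you can show a regulator.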
One thing that I hadn't mentioned about the data: on-premises, we are actually also thinking about that. Is it worth it for us to maybe invest in some on-premises data centre and on-premises storage? But how are we going to communicate with this data and keep it reliable and resilient when we also have data in multi-cloud environments? So it's a very complex question to answer, especially when you have terabytes, petabytes of data. How do you make this decision? It is a very complex problem. But first understand what the organisation is trying to achieve in the short term, and where you want to be in the next two to three years. By asking this question, you start to find the right people and start to find the answers you're looking for. Mm. This is Digital Transformation Talk. We've been talking about simplifying and scaling your data storage strategy. We've been here today with Adam Gale from NetApp, Morgan O'Neill from Thornton's Law, and John Carlos; JC is with Trade Republic. Now, folks, I don't know where our time has gone, but we have reached the action item round of the program. I'm wondering if each of you could please provide us with one quick action item that our viewers and listeners can use to take advantage of the ideas and advice today. JC, do you have an action item for us? Yes. First of all, check where your data is. We could go into a very long conversation on this topic, and it's a hard question for you to answer, but have a think: where is your data? And by finding your data, you will start to find many more questions coming into your mind. Why do we have this data? Is the data sensitive or not? Do we encrypt it? What is the access control on it? So just by finding where your data is, it will bring up many more questions that you probably won't have the answer to, but you are on the right path. Mm. 
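JC's "first, find your data" action item is exactly what data-discovery tooling automates. As a toy sketch of the idea, and only that, here is a scan for one pattern that suggests personal data (email addresses). The file paths, contents and the single regex are invented; real discovery tools classify far more than this.

```python
# Toy data-discovery sketch: count email-like strings per "file" to show
# which data sets likely contain personal data. Illustrative only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def discover(files: dict) -> dict:
    """Map each path to the count of email-like strings it contains."""
    return {path: len(EMAIL.findall(text))
            for path, text in files.items()
            if EMAIL.findall(text)}

files = {
    "/exports/crm_dump.csv": "alice@example.com,2024;bob@example.org,2023",
    "/exports/metrics.csv": "cpu,0.91;mem,0.42",
}
print(discover(files))  # {'/exports/crm_dump.csv': 2}
```

Each hit then triggers exactly the follow-up questions JC lists: why do we hold this, is it sensitive, is it encrypted, who has access?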
That is Giancarlo, Information Security Lead at Trade Republic. JC, thank you so much for being with us today. Thank you. Morgan, do you have an action item for us? I do. To follow on from JC's point: once you've found your data, think about your strategy for AI governance. Think about what that looks like, think about what it is you're looking to do, and, as I mentioned, those three steps: initiation, implementation and monitoring. What does that look like? And build in your security and data protection from the start. Thank you, Morgan, and thank you for those three steps. That is Morgan O'Neill, director of data protection services at Thornton's Law. Now, Adam, do you have an action item for us today? I do. Unfortunately, JC stole mine, so I'm going to have to come up with another one on the spot. I would suggest looking at your data and making sure it's encrypted. Now, that sounds like a really basic requirement, but you'd be very surprised how many production workloads are not encrypted. I'd go one step further now and start asking myself: am I using quantum-proof encryption? Quantum computing is just around the corner, or so Jensen always says. So start looking to make sure that encryption is quantum-proof. That is Adam Gale, Field CTO for AI and cybersecurity at NetApp. Adam, thank you so much for being part of our program today. Adam, Morgan, John Carlos, thank you so much for being with us today. What a great panel. Your perspectives and advice are spot on. I hope we get a chance to talk again soon. And to everyone joining us today, thank you. Be sure to check out our sister show, AI Talk. Our next episode will be on July 15th, where we will be discussing leveraging AI in retail. That should be a great discussion as well. So check us out. In the meantime, if you'd like to find me, you can do so on LinkedIn; I'm happy to connect there. So check me out. 
I'm Kevin Crane and you can check out my weekly audio podcast, The Digital Transformation Podcast. But for now, that'll do it for this episode of Digital Transformation Talk. And until next time, I am Kevin Crane saying thanks for watching.