I've heard comments from a number of companies recently like, well, you're just a storage company, why would I talk to you about AI or security? I'm here with NetApp AI experts Ray White and Mike Hommer, and today we're going to answer that question, because how you store and protect your data is critical to building confidence in AI deployments in the ever-evolving AI security landscape. So what can you do right now to protect and secure your AI workspace against AI runtime manipulation, training data poisoning, and model theft? Can you protect yourself against the increasingly sophisticated use of ransomware powered by malicious AI? So, Ray, can you explain some of these new threats? What are they trying to achieve, and why should people care?

Yeah, that's a great question. Those threats were coined by Gartner last year. The manipulation of the AI runtime is really just saying there's a big attack surface out there. So think about any time I would access data, move data, copy data, or collaborate between on-premises and cloud. All of those are new doors for people to do bad things. And when you talk about data theft and model poisoning, think about manipulating the outcomes of large language models or, dare I say, the results of a business, with financial, compliance, and governance impacts. It's the wild west in the AI space, Matt.

So bring it to life for me. Give me an example of how that looks for a customer, right?

Yeah. So Matt, we're right here in Las Vegas. How could I not talk about the news, right? We've seen these newer versions of these attacks in casinos, health care, and government organizations. We saw, for example, in the Verizon data breach report that most ransomware attacks were coming from insider threats, and we're starting to see that as it relates to AI threats as well. They're hacking humans, Matt. There's no known academic countermeasure to a deepfake today. A three-second phone call, and all of a sudden they have your voice. They contact the help desk, they hack humans, they gain access to the network, they escalate privileges, and they sit, wait, and watch for bad things to happen. And then all of a sudden there's material impact to the business. On top of that, people are using Gen-AI to create malware. So this has been democratized now. These are tools the masses can use to create even higher volumes of attacks than we've ever seen before.

Yes, absolutely. So what can companies do to protect themselves against these new threats, Mike?

Well, in a way, it's the same things they've always been doing, and new things as well. When we think about AI, it's really just a new workflow, a new application. And with any security project, when you try to do it after the fact, that's when you break things, because the business is already dependent upon it. They're already getting a revenue stream from it, it's already producing results, so introducing roadblocks or security at that point can seem unnatural and slow down progress. So like any good security project, it's making sure it's designed in from the beginning. And that's where NetApp has a plethora of features, because security is not just a checkbox; there's a multitude of different things that can be done, and they're all built in. It's just a matter of partners and customers working together to ensure those are turned on to meet those security objectives.

What about protecting the AI runtime from manipulation, training data poisoning, and model theft?
It's all about making sure we have good, secure data of high integrity, and we can really break that down into three categories. We want to be able to protect what's there. When something does occur, we want to be able to detect that it has happened, and then have a path to recovery, because we need to get back to that clean information. And NetApp has feature sets across all of those categories. On the protection side, that's doing things like using FPolicy to block known ransomware from entering the box in the first place, and making sure our administrative credentials are protected with multiple layers, through multi-factor authentication or multi-admin verification, where we need multiple people to approve a process. Then when we look at detection, it's being able to alert on it through external tools or even our built-in autonomous ransomware protection, highlighting that something has occurred that's outside the norm and then immediately providing a snapshot, so we've got a recovery point as close to the incident as possible. And then on the recovery side, it's using those snapshots in layers so that we have multiple protection points, even cascading them out to air-gapped solutions with locked content, so we know we have good data we can go back to.

It feels to me like people should have those as basic expectations when they're considering that storage purchase, right?

Absolutely. And when you talk to a lot of security teams, all of their practices are around protecting that data source, yet very infrequently do they actually use the data source itself to help in that protection. We're not a security company, but we absolutely can participate in the conversation, giving teams a broader set of tools that extend deeper into that infrastructure, so that the storage helps in protecting the data alongside the rest of the security stack.

Guys, thanks so much, Ray and Mike, for your time today. It's been really interesting information. Everything you've just heard is why you need to consider storage such an important part of bringing more security to your AI workloads. NetApp AI solutions bring a comprehensive and built-in approach to data security for data that's created or ingested at any location, at any time, meaning you can confidently deploy AI workloads with the most secure storage on the planet. Click the link below to learn more about NetApp AI solutions.
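As an illustration of the protect / detect / recover workflow described above, here is a minimal sketch of how the detection and recovery pieces might be checked and exercised from Python against ONTAP's REST API. The endpoint paths and field names (for example, anti_ransomware.state) are assumptions to verify against the API reference for your ONTAP version, and the hostname, credentials, and volume name are placeholders.

    # Sketch: report autonomous ransomware protection state and take an
    # on-demand snapshot via the ONTAP REST API. Endpoints, field names,
    # host, credentials, and volume name are placeholders to verify/adjust.
    import requests
    from requests.auth import HTTPBasicAuth

    ONTAP_HOST = "cluster1.example.com"        # placeholder cluster management address
    AUTH = HTTPBasicAuth("admin", "********")  # use a least-privilege API account
    VERIFY_TLS = True                          # point at your CA bundle in production

    def get(path, params=None):
        r = requests.get(f"https://{ONTAP_HOST}/api{path}", params=params,
                         auth=AUTH, verify=VERIFY_TLS)
        r.raise_for_status()
        return r.json()

    def post(path, body):
        r = requests.post(f"https://{ONTAP_HOST}/api{path}", json=body,
                          auth=AUTH, verify=VERIFY_TLS)
        r.raise_for_status()
        return r.json()

    # Detection: list volumes and report their anti-ransomware state
    # (assumed field name: anti_ransomware.state).
    vols = get("/storage/volumes",
               params={"fields": "name,svm.name,anti_ransomware.state"})
    for vol in vols.get("records", []):
        state = vol.get("anti_ransomware", {}).get("state", "unknown")
        print(f"{vol['svm']['name']}/{vol['name']}: anti-ransomware state = {state}")

    # Recovery point: create an on-demand snapshot of an AI training-data volume
    # so there is a known-good copy to roll back to (volume name is a placeholder).
    target = get("/storage/volumes", params={"name": "ai_training_data"})["records"][0]
    snap = post(f"/storage/volumes/{target['uuid']}/snapshots",
                {"name": "pre_training_run_baseline"})
    print("Snapshot request accepted:", snap)

The protection-side settings mentioned above, such as FPolicy file blocking, multi-factor authentication, and multi-admin verification, are typically configured once by an administrator (through System Manager, the CLI, or the same REST API) rather than scripted per run.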
Unlock the secrets to safeguarding AI workflows as AI cyber attacks evolve. Discover how to combat malicious AI ransomware and protect against model theft, training data poisoning, and runtime manipulation with NetApp's AI data security solution.