Hello everyone, and welcome to NetApp on Air. We had a little loop button there that we forgot to uncheck. That happens when we're doing things live. It's a good time. Welcome to the show, everybody. We are back after an amazing show last week. We had a massive launch event launching the new A series platforms. Yes, we were just highlighting the C series. We don't have the video yet for the A series, but we'll have that for upcoming episodes for sure. Still working on getting some of that stuff. But yes, we are back here in the Datacenter Dude studios talking to you about NetApp, and we definitely wanted to connect with you, in this and upcoming episodes, on some of the things that were covered in the launch last week. We have new hardware platforms to talk about, both the AFF A series and StorageGRID. Both of those are on the books already. We've got new security pieces to talk about. That's on the books already. And of course, we can't talk about new hardware systems without talking about ONTAP, the latest version of it, the latest enhancements, and all of that good stuff. And we've got a returning guest today to talk to you about that. Before we get into that, make sure you're joining us in Discord. That is the place to be if you want to keep up with episodes here on air, have follow-up conversations, any of that kind of stuff after the show. That is where the afterparty is at. You can come hang out with us 24/7 over in Discord, regardless of where you are in the world. It is the central destination for all things NetApp community. You can find us over there. My guest today needs no introduction. He joined us last week. He's one of the most popular guests on our show over the last couple of years as we've been doing this, and he's arguably one of the most sought-after individuals for the subject matter expertise that he brings to the table: Mr. Keith A. Keith, how are you doing, sir?
Welcome back to the show. >> I'm good, Nick. Thanks for having me again. >> Yeah. Weren't you just here like a week ago? >> Oh, it's nice. And then I kind of wandered off into the woods. Had a nice week off, and now I'm back at it. So it was good. >> Right. So how was your break, first and foremost? >> The break was good. Nice to get out and not think tech for a while, but it's also nice to get back. I'm glad everybody got really excited about the launch. It's something we've been working really hard to get ready for, and it feels good to get that out there now, so everybody can start using that tech. >> Fantastic. Well, I wore my "Powered by ONTAP" shirt today because we're going to be talking about ONTAP. I wanted to ask you a few lead-up questions before we dive into the deep, techy, gooey details. I've seen a change really since probably ONTAP 9.10. I can feel something has changed: the speed at which we release things, the frequency, I should say, at which we release things has increased. There are more things going into each of those releases. Yes, some of them are staple ONTAP things that we've known and loved, but I've seen more net-new things happening over the last five to six releases than I maybe ever have. Can you help me, as someone who's been doing this for a while, understand, from your perspective as a product manager for this stuff, how does that happen? What accelerates those things? Is technology driving the bus here, or is this just a constant hamster wheel of innovation that we're on, where we just keep pushing this stuff out? >> I think there's been a fundamental change. You're right.
I think 9.10 is probably the right point where that inflection happened, and one of the big things that morphed was this: NetApp's always been a really hardcore engineering company, and we built some amazing stuff, but sometimes we would get a little too inward-looking, right? We were sort of building things because they were pretty cool and they were something that we were interested in building. But we had a really fundamental shift several years ago to be a lot more outside-in. Really make sure we look at: what are customers struggling with? What are their pain points? How is this changing? What are some things that they're wrestling with, not just today, but upcoming? And I think we placed some pretty big bets. If you're always trying to code to today's problem, you're never going to be there, right? By the time you get that problem solved, somebody else has beaten you to the punch. You really have to get better about predicting where the industry is going, where the problems are going to be a couple of years out. And I think we made some big bets, and I think they're paying off right now. >> Yeah, it feels like we're at this inflection point of ONTAP where it's gotten to this safe spot: it's constantly iterating on the good things, but we're still introducing new stuff. I look at ARP and how that's improving and evolving, because of the way that ransomware has become so prolific in the industry, and I look at some of the other things that continue to evolve. A lot of them are just staple ONTAP things that we've known and loved for decades now, and we continue to find new ways to leverage them, new ways to build solutions and reference architectures around them. So I love where it's at right now.
It's at this kind of peaceful place where, if I were running it as a customer, I would feel stable and secure and have this level of confidence in it. I can't say that about all of ONTAP's history; there were times when I was just like, "of course," right? Everybody's had those moments. But we're kind of in a sweet spot right now. Would you agree with that? >> I would totally agree, and not only looking at it from a feature-function standpoint, which is kind of how I feel you're coming at it, but also behind the scenes. Think about how ONTAP is so tightly integrated with the hardware we run it on, but at the same time we're also able to run it on these engineered appliances, we're able to run it as a virtual machine on VMware and KVM, we're able to run it on cloud infrastructure, we're able to run it as cloud first-party services. And so there's been a massive amount of optimizing ONTAP for all those different environments at the same time, and not just making it work, but optimizing it so it thrives in those other environments. So I agree. I think the fact is that, as a customer right now, a lot of them don't necessarily know where things are living. There are some that are like, "yeah, I'm on-prem," or "yeah, I'm in the cloud," but I think for most customers it's a little bit of, "yeah, I've got some things in the cloud and some things on-prem, and things are shifting around." I think the nice thing about having ONTAP be anywhere is they can be confident that they have this constant set of data features and protection regardless of where the data is supposed to be or needs to be. >> Yeah. When we first started doing the cloud stuff, I was trying to get this phrase over the hump called "ONTAP everywhere." That was what I would use to pitch to customers. Well, last week at Converge, I saw somebody had stickers made that said that. And I went, [laughter] >> we've done it.
>> Yes, we got it over the hump. People have started adopting it. So that's really the crux of this whole thing, and really what you were just outlining is this ONTAP-everywhere mantra. That's one of the beauties of its portability: ONTAP is ONTAP, and it is so versatile it can run on just about anything. The functionality and everything remains the same within it, regardless of where it's deployed, regardless of how it's configured under the covers. >> Now, we optimize each one, right? Even our hardware appliances: we're optimizing C series for capacity flash as opposed to performance. But yeah, it is optimized for each of those individual environments, and you have that consistency of protection and management that sits over top of it. >> Oh, a good question just came in before we get into 9.15 and the new business. I definitely wanted to pull this up because this is one of my lines of thought as well. Thank you for the great question here, Darth. When we deploy things to cloud, and maybe they require some different things architecturally, does that often drive innovation back into the engineered-systems side of things? Might we adopt things that we learned by deploying in non-traditional spaces like the cloud? Do we learn lessons from that and then apply them to some of the ways that we bring those back on-prem? >> Yeah, it's a great question, and it is the same codebase. But having said that, you can imagine how fast things change in the cloud. It's not like you optimize ONTAP for the cloud and you're like, "we're done," right? The cloud is evolving all the time: new infrastructure, new machine types, new services to integrate with. And so that's where the pace of innovation on the cloud side continues. It's a lot of work still; we're constantly optimizing for those cloud environments.
Typically what's introduced in the cloud versions doesn't have a lot of relevance on-prem, because it is usually that sort of optimization, but occasionally there are things, and then they find their way out in the on-prem releases. So, to the second part of that question, "hey, can we make those versions available on-prem?": there's not really any point. There's typically nothing in there that would benefit on-prem, but the on-prem features do find their way out, specifically targeted for those twice-a-year releases. >> Nice. Thanks again for the question, Darth. That's a great talking point: how a lot of that stuff could be managed on a day-to-day basis from a release-cycle point of view. What do you think, Keith? Should we dive in? Actually, let me kick this off a different way. You've been on the show now three times for three different releases. I think it was 9.12, 9.13, and 9.14. >> Yep. >> That we've talked about over the course of the last 12 to 18 months. So what's new between 9.12 and 9.15? Obviously we've got iterative updates for each of the major features and subproducts built into ONTAP, but what drove the 9.15 payload? Just give me the Keith A perspective on what really were the big bullet items that we wanted to talk about today, to give people a bit of a teaser, an upfront summary of what we're going to go over. >> Sure. I'd say priorities. The first priority was supporting the new hardware, right? The new AFF A series line. That's a tremendous amount of engineering effort right there alone, not only because it's new infrastructure, but because it has to be highly performant, highly stable, highly resilient. So that was the top priority, and a lot of work went into that.
>> And, for the record, that was the first time we've really refreshed that line fully since, I believe, 2016, maybe 2017. >> Yeah, they were definitely getting on. We've done spot releases, right? The A400 was newer, but those were sort of incremental. This is kind of the first time that we did the whole range at the same time as well. >> We went out and built a Lamborghini as well with the A900 a couple of years ago. >> Exactly. >> As a bit of a one-off, right? Just to see if we could. >> What was really unique about this was that there was a single design center across the line, and we're going to get some benefits out of that. So that was the biggest lift. Then I'd say, dropping down behind that, security is ongoing. That's a very high priority, and we're introducing some really interesting new concepts around security. We had Matt on last week, and Matt did a deep dive into some of that, so security I think is a big one. And then we had a couple of heavy hitters. FlexCache is increasingly becoming a major focus as we're getting into this hybrid environment of not only some things on-prem and some things in the cloud, but also more and more things working globally, and we're pretty excited about a major FlexCache deliverable there. Oh, and I guess business resiliency is the other one: what we're doing in the SAN space with the ASA, protecting highly available, block-based environments. We kind of previewed that in 9.14, but getting that out and ready for general availability was another big payload. >> Nice. Yeah. And guys, in case you missed anything from last week, it's all available online, right here on the YouTube channel.
You can watch the announcement video completely over again to go over the new A series lineup, Cyber Vault, and all of the other stuff that's part of that launch. We've also got an on-demand video recording, I think it was about 45 minutes, of the conversation we had with yourself, with Matt Trudin, with Chris Luth, and of course my lovely esteemed co-host, Mr. Jeff Baxter. All of that's available for you guys in case you missed it last week. Why is there a link going out? Oh, Drew is linking the shows from last week. Thank you, Drew. Good looking out, Drew, our esteemed producer behind the scenes pulling levers. He's posted links there in the chat for any of the shows that you wanted to see. Let's get into it, Keith. I'm going to pull the slides up here. Let me make sure they're on the right one. Yeah, okay. >> So, obviously, with the new A series came some new technology. We're going to go super deep on the Intel QAT stuff when I have the guys on. Guys, teaser here: June 5th, mark your calendars. Chris Luth and Mr. Scott Bell will both be returning to the show to do a complete deep dive on the A70, A90, and A1K platforms and all of the hardware gooey goodness of it. QAT is a big part of that. So Keith, when you or your team saw that, how were we going to take storage efficiency to the next level? >> Let me take a slight step back, because what you always want to deliver is the most storage efficiency you can with no impact to performance, right? That's the ultimate goal. And so we had one method of compression that we could do inline without impacting performance, but we could get much greater levels of compression.
Those greater levels did have an impact on performance, and so we thought, well, that would actually be fine for cold data. Once the data cools off, that's usually the perfect time; at that point you're rarely going to access it. But it was a hard decision as a storage admin to make: what really defines cold data, and how do I actually set that? The awesome part of QAT is that it's a hardware offload. We use it as part of the chipset, so it's not a card, it's nothing you need to add. It's just native on all of the new AFF A series platforms. And that hardware offload allows us to use the much more efficient compression with no performance impact. Therefore, it can be used on all data. So we get a kind of win-win-win: you get the better efficiency, you get no performance impact, and you get simpler management, because there's no tuning, nothing to be set up, no choices to be made. It's a pretty exciting capability to have embedded directly in the hardware platform. >> Yeah, absolutely. And is it part of the die? Is it part of the processor? Is it like a northbridge/southbridge offload chip? You said it was part of the chipset. >> Sounds like a great question for Scott Bell and Chris Luth. As a software guy, I just know we have access to it and it's always there. So I don't know. >> And it's not a card. It's embedded in. >> So I love the approach. I love the perspective of getting the most storage efficiency without affecting performance. There are so many variables in there: the platform you're doing it on, the distance and range to where your offload is going to be, or where your secondary copy might be. All of that comes into account. So what did we do?
Where did we end up landing with QAT in mind and available to us, and what was the material difference between without QAT previously and now with it? >> So where we landed was: hey, we've got this hardware offload and it's there, so let's use it. The idea was, let's use it for all data as we get data. In other words, if you're SnapMirroring data to one of these new AFF A series systems, the compression will be applied as the data comes in via SnapMirror. If you're writing data from a client, the new compression is applied inline. If you do a vol move in the cluster, it's applied during the vol move. And the kicker, the one that's really exciting, is that just by upgrading an existing system to one of these A series, your existing volumes get it too: as a background process, opportunistically, as the system has availability, it will use that QAT to apply this compression to the existing data. >> Usually when we introduce something like this, it's like, yeah, you get this better efficiency on any data you write after the upgrade. No, we're going to do it on every piece of data that one of these new A series systems can actually touch. So the kicker is: hey, I upgrade to a new A series. Yeah, it's faster. It's got better throughput. Oh, and by the way, my existing data gets smaller, so I free up space on the media I already have. >> Well, is this something that happens over time? Is this a first-boot process kind of thing after the upgrade? >> No, it'll happen over time. As I said, it'll be opportunistic, as the system has cycles and has access to the QAT. It'll do it behind the scenes. Again, we can bring our friend Chris Hurley in, another regular guest on the show, to get more into the actual background process. But the idea is, again, it does it opportunistically. So there's no performance overhead, nothing to monitor.
It will just apply over time, as the system soaks on the existing data. Of course, you have things like snapshots that need to expire and roll off to free up that space, but over time it will be applied to the existing data as well. >> Fantastic. Are there any metrics you can share on how fast that happens, or what are some of the dependencies that might get in the way of that opportunistic compression happening? >> I don't have exact numbers; it's very dependent on how busy the system is. And of course the results vary as well, but as they mentioned in the launch, on some of the data, as much as two and a half times. For example, we had an Oracle database data set where I think we were getting a 2.7-to-1 ratio before, as it existed previously, and then after we had applied the 9.15.1 code and moved the data onto a new A series, it was almost an 8-to-1 ratio on the same data at rest. So that two-and-a-half times was the reduction of the actual compressed footprint. Your mileage will vary. >> Let me repeat back what I heard. [laughter] You said 2.7-to-1 up to 8-to-1. >> Yep. >> That's not messing around. Those are some serious differences. Now, are there any unintended consequences of doing this kind of stuff? You mentioned it's not meant to affect performance; it's going to happen in the background opportunistically. I'm looking for the blind spots here as an admin and an engineer. You're throwing some fairy dust on me, and I'm not sure exactly: is this all just the magic of QAT, or is there some other secret sauce baked into this? >> Well, QAT is the enabler, right? That's the hardware capability. ONTAP is sort of the magic sauce as far as using that particular hardware; you can't do it strictly with hardware alone.
So it's the two together, right? They have to be there. There's no gotcha. The only potential gotcha is: hey, if I apply this greater level of compression and I move the data to a box that doesn't have QAT, isn't that a problem? Yeah, it could be, because now that other box has to do the decompression. So we do the same thing on egress as on ingest: as you move data off of one of the new A series systems that has QAT, we'll also undo that compression. What that means is there will be no performance impact where that data actually lands. The only gotcha, again, is that those greater efficiencies won't be realized on the other system. So that's a bit new; you'll have to plan accordingly for it. But fairly soon the whole portfolio will be refreshed with QAT, so it'll be less of an issue on an ongoing basis. >> Yeah. And I think as systems cycle out over the next year or so, with naturally occurring refreshes happening over time, we'll see that become less and less of a talking point 12 months from now. >> Well, I can just imagine the conversations: hey, this data is pretty big, let's maybe move it over to the new box, because it's probably smaller over there. And then pretty soon it's like, well, I can't put anything more in the new box. Let's maybe buy some more new boxes, right? >> Yeah. The other thing I'm thinking about is: does this have downstream effects for backups, vaulting, and things like that? For tape? That's where my head goes [laughter] when I hear about this. >> It all depends on the format, right? If you're keeping it in native ONTAP format, no, it all works just as expected. If you're dumping it to a raw format, that all gets undone as we're taking it off anyway. So no change, right? >> Yes. It all continues to work as expected.
>> Yeah. So this is one of those great things: big benefits, but nothing you actually have to do. It just happens. Those are the best ones, right? There are no new settings, no choices to make, no configuration. It's just there. >> It's really beautiful. And I'm looking at this in the context of: we have been doing this for going on 20 years. Best I can remember, dedupe came in like 2007. >> Yeah. >> When that was first coming out, in ONTAP 7. >> Back when we still called it Data ONTAP. >> Yeah. And I'm trying to think of the trajectory and the growth of it. I'm looking at the timeline of this: introducing compression, continuing to make deduplication more efficient. There's this timeline over the last 20 years of doing this kind of stuff, and somehow, some way, we continue to find new ways to make the footprint smaller and smaller, and I'm constantly amazed by yourself and the ONTAP team that, from an engineering standpoint, we're continuing to do this 20 years later. >> Well, I even zoom that out a little bit, Nick. Our goal for every release is that every version of ONTAP should get faster, right? You should get better performance with every version of ONTAP. That alone is an ambitious goal, to be able to continue to optimize code as you do that. Not to mention the fact that we're adding features as we do that. >> Yeah. >> Right? So it's like, oh, we're going to add capability and still make it faster on every single version, with better availability. It's just a very ambitious goal. And yeah, it amazes me how we're able to continue to do that, 20 years plus. >> Yeah. It's genuinely impressive. For any of the engineering folks watching: >> Yes, bravo. >> Exactly. Bravo. All right. So let's kick the slide over here and talk about the next thing.
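Before we close out the efficiency topic, here is a quick back-of-the-envelope calculation of what those quoted ratios translate to in physical space. The 2.7:1 and 8:1 figures come from the Oracle example above; the 100 TiB data-set size is a made-up number purely for illustration, and the exact multiplier depends on where the "almost 8:1" result actually lands:

```python
# Back-of-the-envelope math: physical footprint of a data set at the
# compression ratios quoted in the discussion. The 100 TiB logical
# size is hypothetical; only the ratios come from the conversation.

def physical_size(logical_tib: float, ratio: float) -> float:
    """Physical footprint (TiB) of a data set stored at a given efficiency ratio."""
    return logical_tib / ratio

logical = 100.0  # TiB of logical data (illustrative)

before = physical_size(logical, 2.7)  # pre-upgrade inline compression
after = physical_size(logical, 8.0)   # after QAT-assisted recompression

print(f"before: {before:.1f} TiB physical")  # ~37.0 TiB
print(f"after:  {after:.1f} TiB physical")   # 12.5 TiB
print(f"footprint shrinks ~{before / after:.1f}x")
```

This is why the "my existing data gets smaller after the upgrade" point matters: at these ratios, the reclaimed space on media you already own is on the order of two-thirds of the previous physical footprint.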
Anything else to close out on storage efficiencies? >> Yeah, we can go into more detail on the hardware, guys; if people have hardware questions, we can do that. >> One of the more controversial topics I've heard about over the last couple of months, and there's been some consternation here that I'm hoping you can clear up, is SnapMirror Business Continuity versus SnapMirror Active Sync. Correct me if I'm wrong, this was introduced in 9.14. >> It was done as a tech preview in 9.14, yes. >> And we're basically just bringing it to GA at this point. Is that right? >> Bringing it to GA, yeah. It was a tech preview. We had some limitations, but now it's going to be generally available. And part of what makes it GA is that all the associated off-box pieces work with it, right? So OTV and SnapCenter, all of that works with it. >> Let me back up a little bit. People get really excited about Active Sync, but I think the part that gets missed quite often is that we rebranded SnapMirror Business Continuity. So we had SnapMirror Business Continuity, which is still very viable as a configuration, with one site being active and the second site being passive with an automated failover. That still exists, but we rebranded it to Active Sync. Now, normally I hate rebrands, and with our friend Jeff, who was on last week, I was like, "Jeff, why do we rename it?" Actually, this one makes sense, because if we looked at SnapMirror, we had SnapMirror asynchronous. >> Okay. Yeah, I can decipher from the name what that means. >> We had SnapMirror synchronous. >> Yeah. >> Again, pretty straightforward. And then SnapMirror Business Continuity. I was like, what does that mean? It doesn't fit with the others. >> A flashy marketing name.
>> Yeah, that was the outcome, but it wasn't really describing what was actually happening. So in this case, with SnapMirror Active Sync, well, we have the "sync": we're building off of SnapMirror synchronous. We're adding the "active," which basically means that the failover, the automated cutover of the applications from site to site, is active. That happens automatically. And also, optionally, both sides are writable, so it is an active-active deployment. So from a naming standpoint, hey, that actually now makes sense: we have these three modes you can put SnapMirror in, and they build off of each other. >> Interesting. Okay. So what's different now about Active Sync versus the things that we had before? >> So previously, SnapMirror Business Continuity had the hooks to have a mediator and detect if an application was having a problem, with an automated failover from one side to the other. You'd resume operation, but you'd have one active site that would cut over to the other side. Now, with Active Sync, what's changed is that you can optionally also turn it into active-active, which means the LUN, and this is for SAN only, is available on both sites and writable on both, and hosts connected to those sites write locally. The data is then synchronously replicated in either direction. >> So it's bidirectional as well. >> And it's bidirectional. >> Yeah, I can't make that work in my head. [laughter] >> And again, it's MetroCluster-like, but you don't have cluster dependencies. >> Well, how do you get past the epsilon? How do you get past the "who wins"? >> Well, the mediator is sitting in the middle, right? You have a mediator that's keeping track of things and making sure that you're not just in a split-brain scenario. But the fact is that I don't need like-for-like infrastructure.
In other words, I might have a two-node cluster on one side and a six-node cluster on the other. >> Some of that kind of stuff hurts my brain as well. [laughter] >> It doesn't have to be symmetric. >> But it's all ONTAP at the end of the day. It's just about available resources to process data. That I can get my head around. But keeping track of what belongs to whom and where it was originally written, the logging of that must be insane. >> At a block level, yeah. >> It's pretty stunning. Obviously WAFL, the write-anywhere file layout that we use today, is not the same file system we started with 20 years ago, but the fact that it's evolved again to this state, to have this capability, is remarkable. >> Yeah, I think even Dave would agree this ain't your daddy's WAFL. >> Yep. [laughter] >> Or your granddaddy's WAFL. >> You mentioned storage efficiency; I had an opportunity to have a chat with Dave Hitz once, and I asked him, "Hey, when you designed this, did you have the idea of deduplication back then?" And he was like, "I had no idea." The idea of removing duplicate blocks had never even crossed his mind. But he goes, "Don't think that the WAFL we worked on then is the same as it is now. It has evolved constantly," which is pretty wild. >> Yeah. Interesting. I'm anxious to see how people use this.
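Keith's mediator answer to the "who wins?" question can be pictured with a small conceptual sketch. This is an illustrative model of third-site arbitration in general, assuming a simple "first valid claimant wins" rule; it is not ONTAP's actual algorithm, and all names below are invented for the example:

```python
# Conceptual sketch: why a third-site mediator prevents split-brain in a
# two-site synchronous-replication setup. Illustrative model only, not
# ONTAP's actual implementation.

from dataclasses import dataclass


@dataclass
class Site:
    name: str
    peer_reachable: bool      # can this site reach its replication peer?
    mediator_reachable: bool  # can this site reach the mediator?


class Mediator:
    """Third-site tiebreaker: grants survivorship to at most one site."""

    def __init__(self):
        self._winner = None

    def request_survivorship(self, site_name: str) -> bool:
        # First site to claim (while it can still reach the mediator) wins;
        # any later claim by a different site is refused.
        if self._winner is None:
            self._winner = site_name
        return self._winner == site_name


def may_continue_serving(site: Site, mediator: Mediator) -> bool:
    if site.peer_reachable:
        return True   # both sites in sync: normal active-active operation
    if not site.mediator_reachable:
        return False  # fully isolated: fence itself to avoid split-brain
    return mediator.request_survivorship(site.name)


# Inter-site link cut; both sites race to the mediator. Exactly one of
# them is allowed to keep serving the writable copy.
med = Mediator()
a = Site("site-a", peer_reachable=False, mediator_reachable=True)
b = Site("site-b", peer_reachable=False, mediator_reachable=True)
print(may_continue_serving(a, med))  # True  -> site A keeps serving
print(may_continue_serving(b, med))  # False -> site B fences itself
```

The key property is that a site which can see neither its peer nor the mediator must stop serving, so the two sites can never both declare themselves the sole owner of the LUN.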
>> I think, as someone [clears throat] who loves this kind of stuff: a lot of my background came from tier-one applications, Oracle, Microsoft Exchange, and then we got into virtualization. So that's where my head goes whenever I hear this kind of stuff: SQL clusters, Oracle clustering, Exchange clusters, all of that. So I look at this in the context of the example that's on the screen, using a VMware stretched datastore for VMs. Okay, I can get my head around that, because it's just virtual networking at the end of the day and you're making smart use of it. What I can't get my head around is the bidirectional nature of it. So theoretically, I could have a virtual machine whose VMDKs were on site A, but it was actually being hosted by B1 and B2 in that UCS chassis. Where's the latency going to start affecting things? You get into some MetroCluster-y kinds of conversations with that. So that's really where I come down to: how do people decide whether they want to leverage SnapMirror Active Sync, or do they just need to take the leap of faith and go full MetroCluster? >> It's a great question. And it certainly doesn't mean MetroCluster is going anywhere. >> Oh, no. >> The perk of Active Sync is that it's done very granularly. If you have a big enough environment, as you mentioned, for one of those key applications that demands its own storage cluster, MetroCluster is probably your right choice, because you're doing it at the cluster level. But I think a lot of customers have the masses of their applications, and then a few crown jewels that are like: I've got this application, it doesn't need its own storage system, but it needs to be highly available.
That's where SnapMirror active sync absolutely shines: I can have the general-populace VMs that have normal DR mechanisms, and then I have a few applications that absolutely have to be available. This is a great way of enabling it just for those few applications, without having to set up a whole separate infrastructure for it. >> I'm trying to get my head around the difference between MetroCluster and this, because in my head I'm going, this is maybe more of an active-passive setup versus the active-active nature you can get from MetroCluster. But at the end of the day you could achieve similar things here; it would just be that bidirectional nature instead of in parallel, as you can get when you really push MetroCluster out there. This is one I really need to spend some time on. This is one I actually want to go make a separate video on, just from an architecture perspective of the kinds of things your imagination could run wild building with this. I did notice SRM support for DR orchestration. I wanted to call that out because I think that's probably one of the most probable use cases we'll see something like this used for: a sort of instant active-passive cutover failover situation. So if you've got VMs or applications or workload stacks on side B and you're replicating that data back, you could SD-WAN over, or you could do something where you flip over and do an immediate cutover with no disruption to end users. All of that stuff is huge, and I see it as being a part of this conversation. >> Well, where that comes into play is, you mentioned latency. Everything that's written at one site needs to be written at the other site, right? So you are dealing with whatever latency you have between them.
But where that comes into play is, say you have a city where you have two data centers, maybe two or three milliseconds apart. You could have this set up between those two data centers. Now I have data center resiliency. But I also want to protect against, what if my whole city has some sort of weather event or a citywide power outage? I want to then replicate those virtual machines to the other coast and use Site Recovery Manager for that. And so I have that ability: if I lose one data center on one side or the other, nothing changes, everything just automatically recovers. But if I lose the whole city, then I initiate my DR mechanism with Site Recovery Manager and I recover on the east coast or west coast. >> We used to do this. >> Yeah. 20 years ago it was local, campus, municipal, regional, national, and you would have to build out scenarios and runbooks for each of those. Right? So I would call the one you were describing, the citywide one, the good old smoking-hole scenario: a meteor lands and it takes out the whole city, what do you do? And that's where you start getting into east coast, west coast, or even multinational. You're getting into some really lengthy stuff. There were some stories a long time ago, when the tsunami happened and Fukushima was going on in Japan, of how things were replicated to both Europe and the United States and how companies were able to just continue running, and that was leveraging SnapMirror and SRM under the covers. So there are things like that that happen out there in the world, and you've got to plan for them. I see that this is going to be incredibly powerful for users.
And I know people are upset at the branding change and the name change; just try not to get too wrapped around the axle about that, because this is bringing some really cool functionality for you guys to take advantage of. We did have a question pop in. I'm going to pull the slides down real quick so we can see this. From Tobias over on YouTube: is that FlexGroup based, and can you explain how the interfaces are configured, like IP based and/or SAN? >> Yeah, for sure. So it's not FlexGroup based, because it is SAN only; those two are not compatible at this stage. So it's using regular FlexVols. And then you can configure the interfaces. Now here's where, if you can pop the slide back up... yeah, here's where we can really bake your noodle. On the slide right now you see those ESX hosts. Look at hosts A1 and A2: right now it shows a solid line (local) and a dotted line (remote). That's what we refer to as uniform access, where those hosts have access to both clusters. Something that's actually new in 9.15, as opposed to 9.14.1, is we now also support non-uniform access, which means your ESX hosts only have local connectivity to their local cluster. So again, it could be Fibre Channel, it could be iSCSI, it doesn't really matter, but imagine that your vSphere hosts or your application server only see the local ONTAP cluster. Therefore, the only thing that spans the two sites is the SnapMirror relationship, right? The host doesn't even know about the other cluster at the other site. That's okay, because when something fails over to the other nodes, it's the same LUN, but it's not the same igroup at all, because they're completely different initiators, completely different hosts. >> And for any sort of smoking-hole scenario, we're depending on SRM or VMware's heartbeat to determine which volume they're pulling the datastore from.
Well, in this case it's not SRM, it's VMware HA, >> right? It's a single VMware cluster, right? It's regular HA. >> It's just a stretched cluster. Yeah, >> it's a stretched cluster, right? In a regular HA cluster, one host fails, VMware just automatically restarts those VMs on surviving hosts. It's the same thing here. >> So when you configure this, is there anything special about the set of interfaces you use that has to be configured in order for SnapMirror active sync to work over them? I know Tobias asked that specifically in his question. Is there any special ONTAP configuration that needs to happen to specific interfaces? We have specialized MetroCluster interfaces and FRUs, right? So I'm looking at it from the context of, is there anything special we need to do to either a set of interfaces or some other configuration, other than just enabling SnapMirror active sync and we're done? >> MetroCluster is a bit trickier because you're actually spanning the SVM across sites, and so it looks like the same storage. In this case here, they're presenting as completely different storage clusters that happen to present the exact same LUN. So the answer is no. And the good news is, if you're using it for VMware, ONTAP tools for VMware takes care of all that for you. It'll just set this up for you. Now, there are some special things you need on the networking side for the virtual machines, right? As you flip a VM from site to site, it doesn't re-IP itself. So you've got to make sure it can still get to the outside world, or people can get to it. There's some complexity there, but not really in how we present access to the actual LUNs themselves, which is pretty wild. >> So from a solution standpoint, Tobias, hopefully that answers your question. If you have some follow-ups, throw them in.
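To picture the uniform versus non-uniform access Keith described a moment ago, here's a toy Python sketch. This is not real ONTAP or VMware tooling; the path labels and site names are invented purely for illustration of which LUN paths a host would see under each mode.

```python
def visible_paths(host_site: str, access_mode: str) -> list[str]:
    """Toy model of host-to-cluster connectivity for SnapMirror active sync.

    uniform:     the host sees paths to both clusters; remote paths are
                 reported non-optimized so multipathing prefers the local copy.
    non-uniform: the host sees only its local cluster; only the SnapMirror
                 relationship spans the two sites.
    """
    sites = ["A", "B"]
    if access_mode == "uniform":
        return [
            f"cluster_{s}: {'active/optimized' if s == host_site else 'active/non-optimized'}"
            for s in sites
        ]
    # non-uniform: local connectivity only
    return [f"cluster_{host_site}: active/optimized"]
```

In the non-uniform case, a failover means completely different initiators log into the surviving cluster, which is why Keith notes the igroups differ even though the LUN is the same.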
We've got to get going; I keep asking questions about this because this is the one I hear questions about the most. Obviously VMware is a very popular solution here, but where else do we see this? Do we see this in the native tier one applications, the Oracles and the SQLs of the world? >> Yeah, you hit it spot on. It's all of those, right? It's really for any application that has its own built-in availability. The application needs the ability to go, oh, I've lost one of my nodes, I'm going to recover somewhere else. And yeah, the ones you named all have that. That's the key: it has to have that ability, within the application itself, to recognize that it's had a failure at one site and it should try to recover somewhere else. >> Nice. One more quick question before we move on, from Philippe: in a three-site scenario, A and B active plus site C as DR, if A fails, does B keep on replicating to C, >> or is there some kind of epsilon takeover cascade that happens in a three-site setup? >> I am highly confident that the answer is yes, but I reserve my right to double-check that. When you set up SnapMirror active sync with that relationship to site C, I believe it will continue to replicate if you lose site A. >> Awesome. Thank you so much for the comment there, Philippe. Let's kick over, guys. If you have any more questions about this kind of stuff, Keith and I are both in our Discord community constantly, and if Keith doesn't answer right away, I'll tag him and make sure he gets back to you. Come join us over there. ARP! I know we're going to have Matt on in a follow-up episode.
We've got to talk about CyberVault as a part of last week's launch announcement, but we cannot talk about current technology without talking about ransomware. Give me the high level, without being Matt Trudewin, >> about ARP in ONTAP. >> Yeah, sounds good. I could not be Matt Trudewin. So, high level: from ONTAP 9.10 to ONTAP 9.14 we trained a highly effective model to detect ransomware in NAS file shares within ONTAP. It does pattern recognition and file extension recognition to identify when ransomware is happening. It does it asynchronously, outside the I/O path, so there's no performance impact. It does periodic sampling based on a number of attributes; one of them is file entropy: how are the files themselves being changed? It was a very static model, but really effective. >> Now, you would train it to start off with, but that was really adjusting its sensitivity level. >> Right. The model is trained, and then you tune the sensitivity to trigger. That's what you would do in 9.10 through 9.14. Now, in ONTAP 9.15.1, what's unique is we're introducing a model that will actually continue to learn. As it's running, as it has detections and you say, no, that's a false positive, or yes, that is an attack, it continues to learn and adapt itself. It's not a fixed model; it's a fluid model that continues to improve itself. What that's done is not only increased its detection level, which was already really high (it did improve that), but where we saw our biggest gains is it reduced the false positives. Every time you told it, no, that wasn't actually an attack, that was a false positive, it would adapt so that it wouldn't trigger on that same set of conditions again. >> And that's what you want: you want to minimize that noise, so that if something does trigger, you're ready to jump on it.
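NetApp hasn't published the internals of ARP's model, but the file-entropy attribute Keith mentions is easy to illustrate. A minimal Python sketch of the idea (my own toy, not ONTAP code): encrypted output looks statistically random, so a sharp entropy jump when a file is overwritten is one signal a detector could weigh alongside extension and pattern checks. The `jump` threshold here is invented.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: low for repetitive data, approaching 8 for encrypted data."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(before: bytes, after: bytes, jump: float = 2.0) -> bool:
    """Flag a file whose entropy jumped sharply after an overwrite --
    one attribute (of several) a ransomware detector could weigh."""
    return shannon_entropy(after) - shannon_entropy(before) > jump

# Plain text reuses few byte values; the scrambled bytes cover the whole range.
plain = b"hello hello hello hello " * 40
scrambled = bytes((i * 197 + 13) % 256 for i in range(960))
```

Running this, `plain` sits around 2.3 bits/byte while `scrambled` is near 8, so the overwrite would be flagged.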
So this really brought up both the precision, meaning we're not false-triggering, and the recall, meaning we're actually detecting when an attack is really occurring. And both of those are now over 99%. >> Yeah, it's not a Chicken Little or cry-wolf kind of scenario where you get numb to the alerts. >> Exactly. >> So what's different? I know that previous to 9.15, you had to go into System Manager or the CLI and enable ARP on a volume-by-volume basis, I believe. Has that gotten more efficient? Is it still a 30-plus-day learning process? Walk me through how that operates differently, if at all, in 9.15. >> Sure. Well, initially we had that: you turn it on, we give it 30 days, and then you flip it into production mode, right? >> Well, that's two administrator touches. We didn't want that. So 9.14 is actually when we did away with that, so that when you turn it on it goes into learning mode, but once it has a pretty high level of confidence in its sensitivity, it'll automatically go into production mode. You don't need to touch it twice. And it's the same thing in 9.15: once you enable this, it basically stays in learning mode while being in production mode at the same time. It'll continue to learn and evolve. >> Yeah. Just because it kicks over to production mode doesn't mean it's not continually ingesting new behaviors, right? >> Yeah. Every time you reward it (yes, that was an attack) or shake your finger at it (no, that was a false positive), it'll adapt and get better, right? It continues to... >> You naughty ARP. >> Yeah, so it's that idea of it not being a static model. The other thing we did, which I'm very excited about, is we made it modular, separate from ONTAP.
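The learning-mode flow just described can be sketched as a small state machine. All mechanics here are invented for illustration (ONTAP's actual confidence criteria aren't public): the detector activates itself after enough samples, with no second admin touch, and admin-labelled false positives suppress future alerts on the same conditions.

```python
class AdaptiveArpModel:
    """Toy sketch: starts in learning mode, flips itself to active once it
    has seen enough traffic to trust its sensitivity, and keeps learning
    from admin feedback afterwards. Thresholds are made up."""

    def __init__(self, samples_needed: int = 1000) -> None:
        self.mode = "learning"
        self.samples = 0
        self.samples_needed = samples_needed
        self.known_benign: set[tuple] = set()  # conditions marked false positive

    def observe(self, signature: tuple, suspicious: bool) -> str:
        """Process one sampled event and return the action taken."""
        self.samples += 1
        if self.mode == "learning" and self.samples >= self.samples_needed:
            self.mode = "active"              # automatic, no second admin touch
        if not suspicious or signature in self.known_benign:
            return "ok"
        return "alert" if self.mode == "active" else "learn"

    def mark_false_positive(self, signature: tuple) -> None:
        """Admin feedback: don't trigger on this same set of conditions again."""
        self.known_benign.add(signature)
```

The key property this mimics is the one Keith highlights: production mode and learning mode aren't exclusive, and each "no, that was benign" shrinks future noise.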
One of the questions that came up was, hey, can I upgrade this model? And previously the answer was no, we upgrade the model whenever you do an ONTAP upgrade. We didn't actually have to change our model very much, because it was so comprehensive; there are new flavors of ransomware attack, but they all still work in basically the same fashion. But we have made this modular now, so if the need arose, we could issue a new model that you could bring in independent of upgrading ONTAP, which is great. If something radical changed, we could retrain a model in our labs, publish it, and you could bring it in without having to go through your normal change process of doing an ONTAP upgrade. >> That's fantastic. How do we do that? >> Well, that's a question for another time. >> I might ask you that one offline, without being live on the internet. [laughter] >> And this is again tech preview, right? So watch for this feature to be GA in a future version of ONTAP. >> Yeah, I can see where this could go. The amount of data we have available to us via AutoSupport, via documentation, via manuals and TRs and best practices that have been written and generated over the decades: there is a massive amount of learning that could be taken into account there, as far as what is legitimate information, what are legitimate blocks, what is not going to trigger these false flags, and what could be a legitimate attack above and beyond the blacklisted file extensions and the baseline of that kind of stuff. I can't wait for us to really be able to start taking advantage of some of that. But I also want to be cognizant of doing it the right way, because if you take a model and you drop it onto 10 different customer systems, there might be 10 different reactions to it from those systems.
So those are the things I would be curious about, like how we would control that kind of stuff. But that's a deeper... >> This is sort of your last defense, right? Ideally you catch any sort of attack or vulnerability before the attack starts, because with this you still have to wait for the attack to start before it triggers. This is your emergency parachute. Hopefully you've caught things before it gets to this, and hopefully it never has to trigger, but if it does, it's your safety net. >> Yeah. Awesome. More on ARP soon, guys. We're going to have Matt Trudewin come on and talk some CyberVault and the newest updates to ARP. >> I won't steal his thunder on this one either, then. >> Okay. Well, let's go over this one and what it is, because it's got a big fancy name, but what is it actually doing under the covers? >> Another tech preview, but what we're trying to do here is have ONTAP make real-time dynamic choices about how it should respond to commands that could result in data loss, data destruction, or data exfiltration, based on a trust score. Think about all the things an administrator does, and attributes of an administrator: where are they coming in from? Have we seen the device the administrator is connecting from before? Have we seen that SSH key? What time of day is it? Is this administrator doing something different than they normally do? Is this abnormal behavior? And based on that trust score, do you allow the command?
Do you maybe challenge the administrator and ask them to re-enter a multi-factor token or re-enter their password, or do you block them altogether and go, nah, this feels off, please try again later? >> And so this is early stages, it's a tech preview, but it's the ability to have ONTAP build and maintain what you can think of like a credit score, except it's a trust score, for individual administrators on the system itself. >> You know, I took Cisco to task last year for making people learn Python in order to renew their CCNP. I feel like we're getting into this world, and I blame virtualization for this at the end of the day, [laughter] where people can't be specialized anymore. So storage admins now have to learn IDS/IPS sorts of principles and things like that. They have to be security engineers, networking security engineers, for access control and things like this. But the cloud has also driven that; everybody's had to learn how to navigate the mess that AWS IAM is. Are we now asking storage administrators to be security engineers as well? I guess in a way we've always asked them to be stewards of data regardless. >> Yeah. I think, you know, this is optional, and the idea here is you're able to define within the framework what you would deem suspicious. What things should the framework look at? If a storage admin suddenly connects with a laptop he's never connected with before, >> maybe suspicious, right? That's odd. >> If he's suddenly working with storage volumes (this is down the road) that he's never worked with before, in an SVM he doesn't typically manage, is that suspicious? >> Yeah, >> maybe. Right.
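To make the trust-score idea concrete, here's a toy Python sketch built from the attributes Keith lists. The attribute names, weights, and thresholds are all invented for illustration; the actual scoring in the ONTAP tech preview isn't public. The three outcomes mirror the ones described: allow, challenge for re-authentication, or block.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    """Attributes like the ones discussed; names invented for the sketch."""
    known_device: bool   # have we seen this laptop before?
    known_ssh_key: bool  # have we seen this SSH key before?
    usual_hours: bool    # is this the admin's normal working window?
    usual_svm: bool      # is this an SVM they typically manage?

def trust_score(ctx: LoginContext) -> int:
    """Toy scoring: each familiar attribute adds trust; weights are made up."""
    weights = [(ctx.known_device, 30), (ctx.known_ssh_key, 30),
               (ctx.usual_hours, 20), (ctx.usual_svm, 20)]
    return sum(w for familiar, w in weights if familiar)

def respond(score: int) -> str:
    """The three responses described: allow, challenge for re-auth, or block."""
    if score >= 80:
        return "allow"
    if score >= 50:
        return "challenge"  # e.g. re-enter an MFA token or password
    return "block"
```

So a known device and key at an odd hour on an unfamiliar SVM would get challenged rather than blocked, which is the "softer than multi-admin verify" behavior discussed next.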
And so this has some of that fluidity. The ultimate goal is to identify a bad actor or a compromised set of credentials, right? Something that has suddenly changed outside of their normal day-to-day behavior. How can we respond and try to defend against that? >> Yeah. And a lot of these are defined by regulatory agencies, especially in the public side of the world. For any publicly traded company, or anybody that has these kinds of auditors coming in every six months asking you to check and verify things, they're going to have templates they have to apply to all systems. So this kind of stuff is probably already mandated top-down, but now we give them the ability to apply those same sorts of regulatory definitions to storage systems and to the people accessing those systems at any given time. Where I see people getting in trouble with this, honestly, is with service accounts. >> One way or the other, there's going to be a service account somewhere that has either way too many permissions or not enough permissions to do the job it exists for in the first place. That's where this stuff gets kind of tricky. If it's just individuals, yeah, you can regulate, allow, and deny all day, but people get in trouble with this kind of stuff with service accounts. If you manage these kinds of things day-to-day, you know exactly what I'm talking about. >> For sure, and the idea of this was to be a little more flexible, a little more dynamic, which is where the name came from, versus multi-admin verify, which I think we're talking about next. >> Yes. >> Multi-admin verify is much more rigid, and it's really targeted at customers who absolutely positively say, here's something that I want no single administrator to be able to do. And that is great from a regulatory standpoint.
It's pretty unfortunate for things like automation, or maybe sometimes just for an individual admin trying to get something done in a short window. So what this is trying to do is be a softer version of that: hey, normally you should be able to do your job without any sort of impedance. If you do something that's a little bit unusual, we may inconvenience you by asking you to re-enter a token or a password. If you do something really off the rails, or enough things add up, we may pump the brakes and say, whoa, let's stop what's happening here, because it seems just too suspicious. So dynamic authorization tries to be that more dynamic, more intelligent, real-time decision-making that we let ONTAP do, rather than a rigid yes-or-no type of scenario. >> Yeah, between this and CyberVault and the ARP enhancements, I've got a big show to look forward to with Matt. >> I definitely want to get into that. You mentioned the captain's missile key scenario, as I like to refer to it. I still want to make a meme video remaking Crimson Tide, that scene between Denzel Washington and Gene Hackman, but over deleting a volume. I still want to do something fun like that. Nobody steal my idea. But I think that's exactly what this is; there is no better definition of it. If you've seen the movie Crimson Tide: there is someone trying to delete something or make some kind of change, but there is an XO challenging them because the order is not clear. You have to have the approval of both of them. Now, there's not as much gravity to it as launching a nuclear attack, but this is exactly the principle behind it: making sure there is that double verification.
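The two-person rule being described can be sketched in a few lines of Python. This is a toy gate that mimics the idea of multi-admin verification, not ONTAP's actual interface or command names: a protected operation runs only after a second, distinct administrator approves it, so one compromised account can't act alone.

```python
class MultiAdminGate:
    """Toy two-person rule, Crimson Tide style: a protected command runs
    only once a second, distinct administrator has signed off."""

    def __init__(self, approvals_needed: int = 2) -> None:
        self.approvals_needed = approvals_needed
        self.pending: dict[str, set[str]] = {}  # command -> admins who signed off

    def request(self, command: str, admin: str) -> None:
        # The requester counts as the first sign-off.
        self.pending[command] = {admin}

    def approve(self, command: str, admin: str) -> bool:
        """Return True once enough *distinct* admins have signed off."""
        self.pending.setdefault(command, set()).add(admin)
        return len(self.pending[command]) >= self.approvals_needed
```

Because sign-offs are kept in a set, the requester approving their own request a second time changes nothing; only a different admin can complete it.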
And I think it's honestly one of the best things we've come up with in a long time. >> When we first introduced it in 9.11, we were very focused on your description of deleting data or exfiltrating data. >> Right. >> What we're expanding on in 9.15 is, we really took a hard look at activities a bad actor would do ahead of an attack. So what are some things you would do to grease the skids? If you just go in guns a-blazing, you're probably going to get caught and shut down pretty fast, right? But if you can get into the cluster and say, well, let's redirect the logs so they can't catch us, let's maybe drop our alerting levels, let's turn off some security measures, you can really soften the security ahead of an actual attack. And these are things that, once you set them when you set up a cluster, you probably don't want to mess with again. This is a way of locking them down and saying, yeah, I can change them if I have to, but I need a couple of administrators to approve that it is the right thing to do: redirecting those logs from this point to that point, or dropping an alert level, or changing an encryption setting. >> It's system-level change control, a change management sort of thing. And I really dig that. >> I think it takes the ticketing nature out of the boredom of change management. I like this so much because it forces teams to work together. It forces people to agree, or at least collaborate, to find common ground on how to do something the right way. These kinds of things are often misunderstood and overlooked. But the secondary effects it can have on a team of practitioners doing day-to-day general operations are massive.
Giving them levels of confidence and trust in each other, knowing who to go talk to because you need to know who has the second key: there are all kinds of secondary effects that this sort of technology has. Not to mention the fact that it's protecting your systems and your company and anything tucked away on that system that's super valuable and proprietary. >> I worked as an admin where we had really tight change control. If you wanted to make a setting change, you had to submit for change control and then wait for the change control meeting where they'd review it. That was great from a process standpoint, but if I was trying to do something malicious, they had no way of stopping me from doing it, [laughter] right? It was purely process. >> What are you going to do, submit a ticket? Hi, I'm an attacker and I would like to delete all your data. Do you approve? >> That's exactly it. So this is enforcing that process at the ONTAP level, which I think is really brilliant; it's a great way of protecting it. Now, you do have to keep things like automation in mind, because it can seriously inhibit you if you're doing high levels of automation, creating and deleting volumes. But for sensitive data and customers that have those sorts of concerns, yeah, it's a great feature. >> Yeah, awesome stuff. I know we're at time, and we're going to go a little bit over here, but we can't get out of here without talking about FlexCache write-back. I feel like this is another one we could probably spend an hour talking about on its own. Talk to me about the evolution of FlexCache and what we're doing now. I know this is more or less a GA announcement from a prior tech preview, but tell me what 9.15 means here for the listeners.
>> So what it means... you know, the great thing about FlexCache (there's a full circle for you, Nick) is we talk about how great it is that ONTAP's everywhere. Wherever we have ONTAP, I can create a volume of data and do a sparse cache of it. What I mean by that is I can keep my data in one location, where I have all my data protection and backup strategy, and yet for remote users, whether that be in the cloud or halfway around the world, I can drop a sparse cache and read-accelerate any files for them. So when they do reads, it feels local: it's a fast open, fast access. Historically, we've always had write-around, which means that at any of those locations the files are writable when you have them locally, but any writes get ingested by the local cache system, then get written back to the origin, and only then get acknowledged. So writes always incur the WAN penalty of going across. >> Which, if you're just closing a file and writing it once, is fine; I don't really care that the app took a few seconds to write. But if I'm working with something like a movie file or a CAD/CAM file, where I'm constantly changing and evolving it, and it's doing automatic saves regularly, and it's a big file, say a gig or five gigs, that's going to be really painful, incurring that WAN penalty every time it makes a little change. So what FlexCache write-back allows you to do is say, hey, I'm actually going to let these big files write locally to the cache site and acknowledge them there. The app feels like it has local write acceleration, and then I asynchronously pull that back to the origin. But we do it in a really unique way that still ensures immediate consistency across the whole ecosystem. It doesn't matter if somebody tries to read that file somewhere else.
We will make sure that any writes in the cache are actually synced back before they read it, so that all files stay immediately consistent regardless of where you come in from. So that's its sweet spot: if I need read/write access to a large file and I'm going across a high-latency WAN, that's where this technology absolutely shines. >> Well, you also made the great use case for movie editing, working with big movie files. For those unaware, there are always multiple departments working on a film. There are the sound designers putting in sound effects and music, the actual editors themselves, the compositors and people doing VFX, and then probably ten different associate editors stitching timelines together, all using the same raw footage. I look at this for a movie studio or a post-production house and go, especially in the pandemic era and the remote era that we're in now, this is a godsend for people doing that kind of collaborative work that requires one source of truth, in a way. I can't get my head around the physics of it, though. That's the part where I just can't make this make sense in my head, because you can only move data so fast. Is it all coming from the same source, from the same place, and all writing back to the same place eventually? Is this an eventually-consistent sort of thing? That's where I struggle to really get my head around the write-back nature of this. >> The analogy I use, and I don't know if it's a good one: have you ever been in a team meeting where, to prevent everybody talking at the same time, somebody has the talking stick? >> Yeah. >> That's how this works, right?
If you want to write to a file, you've got to get the talking stick, >> and the origin manages that, right? So essentially: you want to write to this file, you get the talking stick. Somebody else then goes, I want to talk, and puts their hand up. You go, oh, you're done with the talking stick? Great, give me the talking stick back. That's how the consistency works: it's passing that permission stick around to whoever is actually allowed to write to the file at a given time. >> I wasn't the only one thinking that, and I think you just answered it, but Alex also just threw it into the chat on YouTube. [laughter] There's your answer, Alex. >> You've got to get the talking stick. That's a t-shirt if I ever saw one. >> Yeah. So no, there's no risk of collision, because you have to first get that talking stick. Getting it does involve going across the WAN, so your first write does have WAN latency, because I need to get the talking stick. Once I have it, then all the writes can stay local, as long as I maintain that write delegation. >> It's the conch from Lord of the Flies. You [laughter] have to have the conch. >> Exactly, then you have permission, right? >> All right. Before we get out of here, anything else about FlexCache that we need to go over? I feel like we're not doing it justice because it's here at the end, but... >> You kind of just nailed it. >> Yeah, the movie house is a perfect use case. Engineering is another perfect use case. Another one we're seeing is even just data pipelines. In other words, hey, I work with my files on-prem in one app, and then I have a cloud service work with that same data, and that cloud service needs read/write and needs high performance. And the idea is, they don't happen at the same time.
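Stepping back to the talking-stick mechanics described above, here's a toy Python sketch of a write delegation, with made-up latency numbers to show why write-back pays off. This is my own illustration, not FlexCache's actual protocol: the first write crosses the WAN to obtain the delegation, subsequent writes acknowledge locally, and a second site must wait for the stick to be handed back.

```python
class WriteDelegation:
    """Toy 'talking stick': the origin grants one cache site at a time
    the right to absorb writes locally for a given file."""

    WAN_RTT_MS = 60.0  # assumed round trip between cache site and origin
    LOCAL_MS = 0.5     # assumed local acknowledgement time

    def __init__(self) -> None:
        self.holder: dict[str, str] = {}  # file -> site holding the delegation

    def write(self, file: str, site: str) -> float:
        """Perform one write and return its latency cost in this toy model."""
        current = self.holder.get(file)
        if current == site:
            return self.LOCAL_MS                  # we already hold the stick
        if current is None:
            self.holder[file] = site              # grab the stick from the origin
            return self.WAN_RTT_MS + self.LOCAL_MS
        raise RuntimeError(f"{current} must hand the stick back first")

    def release(self, file: str, site: str) -> None:
        """Hand the stick back so another site can write."""
        if self.holder.get(file) == site:
            del self.holder[file]
```

With these assumed numbers, 100 autosaves from one site cost about 60.5 ms for the first write plus 0.5 ms each thereafter (roughly 110 ms total), versus roughly 6,050 ms if every write crossed the WAN, which is the write-around behavior described earlier.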
This is another perfect use case, where the data lives indefinitely on prem or in the cloud, or vice versa, but on the opposite side I have local performance, because it's highly performant for that. Again, it's not on the first write, but on subsequent writes to a given file, that it really shines. So it's not just media, and it doesn't have to be long distance, but it's a way of eliminating a lot of that WAN latency that creeps in. >> I can see AI use cases with this. I can see tier-one application use cases with this, like you said, data pipelines for things like upstream database workloads, API-driven workloads. There are so many use cases that could take advantage of something like this, and if you have one, you probably know where you could use it. It's very easy for things like user shares across multiple remote sites, if you're sharing departmental shares and everybody has to share a bunch of stuff. You just got to get the talking stick. You got to get the conch. >> You got to get the conch. Yeah. [laughter] >> I'm going to go find a conch, because I use that analogy a lot. I'm going to go get one so I can actually, physically have a conch. >> There you go. [laughter] Last but certainly not least, we've got a couple of last-minute questions, just going over the QA and stability and the ongoing hardening that goes into the general release cycle of ONTAP, which is constantly impressive. I wanted to give you a chance to go through some of these and maybe shout out any of the engineers that worked on this kind of stuff. I know you love doing that. >> Oh man. Yeah, the engineering teams are huge. I hate to call out any one. I will say, closing off on FlexCache: our friend Elliot Ectton, the TME for FlexCache, is working on an upgrade to the TR for that.
That'll be a huge TR, so watch for when that's published. We'll share it once it's out. A few things in 9.15 that we did that are, again, not feature related: we're doing a bunch of things, both within ONTAP and in the ecosystem around ONTAP, around making upgrades easier. The easier and the safer we can make upgrades, the more chance customers will upgrade. And the more you upgrade, again, it's all goodness, right? You're always getting more performance with every upgrade you do, more storage efficiency, and of course all the security fixes. It's always better to stay closer to the front end of ONTAP. But we acknowledge there are things that we can do better. For example, in 9.15, one of the things we're doing is reducing the number of potential reboots. If I did an ONTAP upgrade, there could be as many as three reboots: if there's new firmware or BIOS for the system, that could be one reboot to apply that; then we'll reboot again to update firmware in any of the I/O cards; and then we do another reboot to actually upgrade ONTAP. They're all automatic, but every time you do that, it takes some time, right? And the whole time you're in that cycle, you're in that non-HA configuration. So 9.15 reduces that down to two reboots at most, and we're looking to keep pushing that down, hopefully to a single reboot to apply all firmware, all BIOS, all ONTAP in one shot. >> Nice. >> You'll see some improved tools and advisories, a better progress bar and time estimations; there's a ton of things we're going to do to make ONTAP upgrades a heck of a lot easier. >> I noticed there was a bullet on here that I wanted to ask you about specifically.
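As an aside, the reboot consolidation Keith walks through can be summarized with a toy model. The phase names below are illustrative, not actual ONTAP upgrade stages: the point is simply that coalescing the firmware passes drops the worst case from three reboots to two, with a single reboot as the stated long-term goal.

```python
def reboot_count(phases):
    """Count how many reboots an upgrade plan incurs."""
    return sum(1 for _name, needs_reboot in phases if needs_reboot)

# Pre-9.15 behavior: each phase could end in its own reboot.
old_plan = [
    ("system BIOS / platform firmware", True),
    ("I/O card firmware", True),
    ("ONTAP software image", True),
]

# 9.15 behavior as described: firmware updates are coalesced into one
# pass, so at most two reboots remain.
new_plan = [
    ("all firmware (BIOS + I/O cards)", True),
    ("ONTAP software image", True),
]
```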
We've kind of been hung up on this 12-node-for-SAN, 24-node-for-NAS thing for about a decade now, really since the birth of clustered ONTAP. Are we ever going to exceed that? I don't want to call it an artificial limit, but when are we going to take the leash off and sort of go crazy and maybe throw up a 200-node cluster? >> It's kind of an artificial limit. The reason we haven't really pushed it is because, in that same window, individual nodes have gotten faster, and now we've got some much faster hardware going in. We just look at the number of systems that are sitting at 12 nodes or 24 nodes, and it's really not that many systems out there. Now, for the customers that are sitting on those limits, we have helped them do upgrades, to burst, you know, 24 to 26 to facilitate a tech refresh, or 12 to, you know; that can be done. >> But generally, given the nature of ONTAP, you don't need that many nodes, and as you get more nodes, the inter-node communication just gets busier, right? So right now we're just not seeing a real strong need to go to something like a 200-node cluster when we can do the same thing with 12 bigger nodes or 24 bigger nodes. So it is sort of an artificial limit; there's no hard limit there. If we suddenly saw a bunch of people go, "Hey, your 24-node cluster, whatever goofiness that is, five or six million IOPS, whatever that is, I need bigger than that," let's talk; we [laughter] can make that happen if you need it. >> There are two things happening in the industry right now that made me ask that question. One of them: NetApp released a new modular chassis last week called the A1K that is screaming, redonkulous fast. >> Yep. >> Right. It's a single node, but it's still an HA-paired construct, and it's done over the cluster switch over 100 gig.
I'm assuming at some point we'll see that advance to 400 gig. And I ask that with AI in mind. I can see a world where we're doing model training with multiple exabytes of data, and we're going to need systems that not only can perform at that screaming-fast level to process data at those speeds, but can handle the sheer amount of data. >> Yep. >> The magnitude and the gravity of the amount of data that we're about to see. We saw an upswing of this in 2010-ish, plus or minus, where we saw this explosion of data. I think we're about to 10x that. My personal hypothesis, not speaking for NetApp: I think we're about to at least 10x that, because of all of these gen AI products hitting the market. There are hundreds if not thousands of them doing automatic generation of videos and text and all this stuff, and that's just a sliver of the pinky of what this thing's capable of. I think once we truly unleash the horsepower of, specifically, generative AI; it's called generative AI because it generates things, and those things are data. So when we have something that can generate all of this stuff, it's going to generate without any limits on storage. And I think we're going to see companies begin to really unleash that, and we're going to need systems like the A1K, stacked maybe to a hundred nodes, to handle both the capacity and the performance requirements of that kind of stuff, because I don't think we have any idea of what's coming, at the scale of it. >> Well, we may not, but we think we have an idea. >> We think we do, but I'm saying I think it's going to hit us in the face. >> Well, I think, as I said, we're making big bets. We've made a big bet. So, future episode plug: we'll come back and talk about what we're going to do for generative AI. >> Heck yeah. I know we've got some guys that want to get on here. I've been talking to Jose and all that kind of good stuff.
A couple of quick questions before we get out of here. Philipe asked: any timeline for 9.15 to be available at our preferred public cloud providers, either via CVO or native first party? Like, what's the lag time typically on upgrades for ONTAP, whether it's the first party or the ability to spin up CVO through BlueXP? >> Yeah. So 9.15.1 is available as a release candidate right now. Again, we're a bit fluid about how we move that to GA based on adoption and ramp-up, but it's usually a six-to-eight-week window. As far as first party: CVO we have some control of, but first party is entirely up to the cloud vendors, right? It's entirely at their discretion when they want to move to a new version. Now, they tend to be fairly aggressive on their versions, so it's usually in the months range, not years; they usually stay pretty close to it. >> Thank you very much, Philipe, for the question. I've been holding on to this one; he asked it over in Discord. Sorry for making you wait so long, Troy. It goes back to QAT. How does that work with FabricPool? How does ONTAP handle compression, more advanced compression scenarios, when you're doing things like your automated tier-offs to StorageGRID and FabricPool and things like that? >> Yeah, great question. It's all inline.
So, as we're sending things to FabricPool, as they're ingested, they'll get the additional compression of QAT. When I said we compress existing data, we will not be rewarming data just for compression; if you're sending to a public cloud, that would be a lot of egress cost, and we're not going to do that. If the data has already been tiered off using FabricPool, it'll stay tiered off. Now, if the data is brought back, or is changed on new writes, any new data that's coming in, in that case you'll get the new compression, and that is preserved when you send it off to the cold tier. So yeah, in those cases, with FabricPool, we're not going to rewarm any cold data just to compress it. That would be rather expensive. >> Yeah, I think it would just reprocess it, Troy, when it comes back. So, hopefully that answers your question, and if not, let us know in Discord. Last one, from Alex here on YouTube: for SnapMirror active sync, I understood the prereq is a SAN LUN. The question is whether that LUN can be hosted off of FAS or AFF, or only on the ASA. >> So it can be FAS, AFF, or ASA, just as you said there. It just can't be both. What I mean by that is you can have ASA to ASA, you can have AFF to AFF, AFF to FAS, even A series to C series, no problem. The only two streams you cannot cross is ASA to AFF, because your pathing is so different between the two of them. So that's the only two, you know, you don't want to cross the streams there. >> Do you see a world where that changes? >> Never rule anything out, I guess. [laughter] Never say never. >> Never say never. >> Not anytime soon, yeah. >> Gotcha. Okay.
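The FabricPool/QAT behavior Keith describes boils down to a simple policy: never rewarm already-tiered cold data just to recompress it (that would mean object-store egress cost), but give new or changed data the hardware-assisted inline compression, which then travels with it to the cold tier. A rough sketch of that decision, with invented names and not ONTAP's actual logic:

```python
def compression_action(block):
    """Decide what a toy tiering engine does with a block.

    block: dict with keys
      tier  -- "hot" or "cold" (cold = already tiered to the object store)
      dirty -- True if the block was just written or overwritten

    Illustrative only -- not ONTAP internals.
    """
    if block["tier"] == "cold" and not block["dirty"]:
        # Leave tiered data alone: rewarming it just to recompress
        # would incur object-store egress charges.
        return "leave as-is"
    # New or changed data is compressed inline (e.g. QAT-assisted) on
    # ingest, and that compression is preserved when it tiers off.
    return "compress inline"
```

So an untouched cold block returns "leave as-is", while a fresh write, or a cold block that is brought back and changed, returns "compress inline".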
It's a challenge just because you're presenting the same LUN, essentially, to the same host. Which pathing profile do you pick then? One side is doing active-active pathing, the other side is doing active-passive pathing; it's too hard to set a preferred path for the actual host. >> Yeah. Well, cool. Keith, thank you so much for all the time today, man. It's always a pleasure to hang out and rap with you for an hour here and there a few times a year to go over all of the latest updates of everything that we talk about on these shows. I genuinely enjoy having this time to spend with you and go over these things. I'm going to watch this back and look for anything we can call out, and maybe we'll do some further discussion on some of these things now that we're past the launch and we're getting out there. Official GA date for 9.15: do we have that? Is it out there already? >> The RC is out. And as I said, our GA is based on the number of customers that adopt it, the early quality that we see, the number of systems; there are a number of metrics that factor into that. So we don't have an official GA date. It's typically six to eight weeks, so early July would probably be your GA date, if I had to hazard a guess. >> There you go. I think everybody waits till service pack two anyway, so [laughter] that makes it more of an August thing, maybe, or September. But yeah. >> Oh god. All right. Well, thank you so much, Keith, for hanging out with me again, and we'll see you next time for sure. >> Sounds good. Thanks for having me, Nick. >> All right. Take care. All right, guys. There you go, Mr. Keith Asen, once again joining the show with another huge, massive episode that even I need to go back and watch through just to understand all of the stuff that went on there.
We have lots coming up for you guys. I teased it earlier in the show: we have lots of shows booked. To keep up with those, come join us in Discord; that's where you want to be. We put each of these episodes up as events, and as we generate the graphics and the descriptions and all that stuff, you'll see them as upcoming streams on the NetApp YouTube channel. So you can subscribe to us over on YouTube, or come join us in the community, and you'll see the announcements happening in real time as we put them up. So, June 5th, mark your calendars: Wednesday at 10 a.m. Pacific, we've got Chris Luth and Scott Bell coming in, going over the new A series platforms, the A70, A90, and A1K, and all the gooey, geeky hardware nerd details we're going to get into with that one. That one's going to be fun. Let's see what else we've got. We've got Matt Trudewin coming back to talk about CyberVault and some of the things we hinted at today around ARP and dynamic authorization. All of that stuff we've got upcoming for you guys. And the big one: we have confirmation that Mr. Duncan Moore will finally be joining us here on the show to go over the latest and greatest with StorageGRID. Yes, somehow we made it 18 months doing this show without talking about StorageGRID. Damn it, I'm going to fix that. I love StorageGRID; I can't wait to geek out about it. I hope you guys are excited about that one. We are going to get that booked very soon, sometime within the next one to two months, depending on schedules and things like that. So I'm very excited about the upcoming schedule here. Reminder, in case you missed it last week: registration for Insight is open. The cheapest, lowest prices you're ever going to get on it are active now. So head over to insight.netapp.com and make sure you get registered. I think that's the right website we're sending people to. But registration is open for Insight.
You can now register for it. It is September 23rd this year. You're going to hear me harp on this every week, because typically we do it at the end of October; it is at the end of September this year. I don't know about coming years, but we'll see. Yes, registration is open for Insight, and yes, we are less than five months away. So it's coming up pretty quick, guys. Don't be one of those people who looks up at the end of August and hasn't registered yet, because you'll pay the most expensive price and have like no time to prepare for it. Get your hotels and your flights booked now; get all of that stuff taken care of, because it's only going to get more and more expensive. And you know you want to be here. If you were at last year's show, shout it out in Discord: tell everybody how amazing the expo was, how fantastic all the sessions were, and how great it was to see everybody again after four long years. We're doing it even bigger this year, so get registered. We'll see you at Insight at the end of September, and we'll see you guys here on the show next week. But until then, my name is Nick Howell. Thank you guys for watching, and we'll see you next time. Take care.
Learn about the awesome new features of ONTAP 9.15: boosted performance, new security enhancements, and improved hybrid cloud setups. Get expert tips and tricks, and see how to get the most out of your NetApp systems.