Good morning everyone. I'm Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. We are here in San Jose, California this morning at the beautiful new NetApp headquarters for a special Cloud Field Day event. We are doing an exclusive one-day Cloud Field Day event focused on multicloud with NetApp. This event is packed with great speakers from NetApp. We've heard from them many times before at our Field Day events, but this is a chance to get a little bit deeper and learn a little bit more about NetApp's efforts to build and support an enterprise multicloud environment. As always, we are joined here around the table by a wonderful group of independent technical folks from around the world. We had people fly in just for this, and they are excited to hear what NetApp has to say. But of course, all of these folks don't work for NetApp. They're not paid to say good things. They're here to represent themselves and to represent the audience. That's why we call them delegates, Field Day delegates. So they are here to interrupt and ask questions and engage and be part of the conversation. Hopefully, you as a viewer will find that the folks around the table are asking the questions you want them to ask at the moments you want them to ask them, things like clarifications and digging deeper. I would also encourage you to reach out to the delegates. You can find them on Twitter using CFDX and multicloud. You can also go to the Tech Field Day website at techfieldday.com or to the NetApp websites that are hosting this live stream and learn a little bit more about them. You'll find their Twitter IDs, their LinkedIn, their blogs, and so on, and that is a great way for you to engage with them. You can pass questions and comments to them, and they'll relay them into the room. You can also follow their blogs and their social media accounts, their podcasts, their videos on YouTube and so on, and see what they have to say afterward that way. We do this sort of thing all the time with different companies, as I said, with NetApp. We're also going to be returning with another Cloud Field Day event tomorrow through Friday. You can learn more about that at techfieldday.com. While you're there at Tech Field Day, you can also see previous presentations from NetApp, which include some deep dives into some of the technologies you'll hear about today. Again, that's at Tech Field Day, or you can use your favorite Google or your favorite YouTube and find the videos that way, which is how most of our viewers come to us. If you have any questions or comments about what you're seeing here technically, I'm @SFoskett on Twitter, and you can find me on most other social media networks, and I would love to hear from you. You'll also find @GestaltIT, which is the company, and @TechFieldDay, and again we would love your feedback on this event. I know as well that NetApp would love your engagement here. You can find them on various social media networks, especially on LinkedIn; just go to /company/netapp, where this is live streaming. You can also go to netapp.tv, or of course netapp.com, to learn a lot more about their products and their services. So, without further ado, I'm going to turn it over to the folks from NetApp. And delegates, are you all ready for this? Ready for the discussion? You know, I'm never going to give you guys up.
Never going to let you down. >> No, I'm just kidding. Come on up. I had to lighten things up a little here for Phoebe. >> Thank you so much, Stephen. Well, thank you all for joining us today. We are so excited to have you all here in the beautiful NetApp office in San Jose, California, and thank you for traveling this far to come to see us as well. So, we are talking about multicloud today. But before we do that, I just want to tell a little story. We just finished our summer break here in the United States, and I remember a couple of weekends ago, I was sitting on the couch with my partner and his three kids, and we were all on our devices, because we're a family of this generation, of this millennium, and we were all watching something different on a streaming service. We had the kids watching Spider-Man and Harry Potter, I was watching Pride and Prejudice, and my boyfriend was watching something else. And it made me realize how lucky we are to have this in the palm of our hands, to be able to watch anything we want, to stream it just at the click of a button and at a really simple monthly cost. And we have a number of these services today that let us do that as well. And it's that ease and that flexibility that makes it really powerful and makes my summer holiday, my summer vacation, really easy. And it reminded me a little bit of how enterprises are approaching multicloud as well. There are lots of cloud services out there for us to choose from, different cloud providers offering different capabilities, and we like them all and we want to use them at different times. And just like streaming services, we sometimes end up with more than one, sometimes more than two. And just like streaming services, we sometimes also have a little bit of bill shock at the end of the month. But unlike our movie platforms, where we can turn off a subscription, we can't so easily move out of a single cloud provider into another one. We have a lot of other things in enterprise IT that we think about, things like security. I'm not just remembering passwords; I'm remembering all of the resources that I have in a single provider or across all of my platforms. There are a lot of requirements around data protection, around data sovereignty, about where we host our data, and of course we want to use all of the platforms and all of the capabilities within those services as well. So today we're going to be talking about some of that complexity, and we're going to be talking about the way that NetApp is helping our customers, and we're looking forward to helping a lot of people solve these problems in the multicloud. So, our agenda today is [laughter] I should get to it. Our agenda today is quite straightforward. We're going to have a discussion with one of our esteemed leaders here at NetApp, Jeff Baxter. We'll then be spending most of our time with you on demonstrations of how this actually works from the NetApp team: Chuck Foley, Vishnu Cherroali, and Greg Marino.
We will then have a break for lunch and to have a chat about that, and then we will come back for a discussion with our partners and our friends at Google, who are going to talk a little bit about how they see these file services and storage services working in Google Cloud. And finally, of course, as always, we like to have a lively discussion with you all, so please send through your questions. We really look forward to having this conversation with you, because this is, as I said, one of those really important topics that everybody is talking about these days. So, we're looking forward to sharing that with you. So, first up, I'd like to introduce Jeff Baxter. >> Hi, Phoebe. >> Hi, Jeff. >> Good to see you. >> It's like we've never met. >> Exactly. >> So, Jeff has been at... let me say that again. Jeff has been at NetApp for quite some time, actually. >> About 15 years. Yeah. Getting there. >> So, maybe could you introduce yourself? I remember when I joined NetApp, you were in a very different role with a very different experience. So perhaps introduce yourself and a little bit of your background. >> Yeah. So I've been at NetApp now for 15 years. Before that I was an end user, a Unix admin, a storage admin, right? And I ended up using NetApp not knowing what the heck it was, and ended up liking it so much I joined the company, right? So, as with many things. I joined as a systems engineer back in the day, right? And worked with end users. And then, after doing that for a while, I was CTO for the Americas, and that was cool until the 99% travel and having toddlers didn't really mix very well, right? They're like, "Who's that guy who I see every month, right?" So I switched over to the product side, and I ended up running product management for ONTAP for several years, and more recently, in the last year, I moved over to the marketing side as part of my trying to get every job in the company. So I just need to finish with legal. They say something about needing a law degree, but legal, finance, and I think housekeeping, and I will have covered every job in the company. >> Well, thank you. That's quite a storied history. And I think we are very fortunate to have you here today because of that background. So, like I said, we're talking about multicloud, and I guess you're in a perfect position to tell us: what is multicloud? How would NetApp define hybrid multicloud? >> Yeah, it's a great question. And by the way, I should reiterate for the delegates here, right? We'll do some question and answer, but this is a great time for you to jump in as well if you have opinions on all this. It's intended to be, because no one wants to listen to me talk for 25 minutes, right? That's been verified by multiple family members. So, the more you can join in on this, the better. But when we talk about hybrid multicloud, I think most people understand that instinctively at this point, right? What we mean by that. But just from a definitional standpoint, hybrid means there's usually still some connection to the data center. We certainly see companies that were born in the cloud and have lived in the cloud, that are cloud native and have never touched a data center in their life, right? An increasing number of companies like that. But if we look at the large enterprises, and still the majority of IT spend out there, there's some connection to a data center. There's something still going on in a data center.
So this concept of hybrid is still very important, both for us and for many of our partners in the hyperscalers and the public clouds, right? Working with those on-prem data centers. And then obviously multicloud is, you know, relatively self-explanatory in terms of being on one or more clouds. So we know that over 80% of cloud users today are multicloud, right, according to multiple different analysts: ESG, other people like that. And when they talk about the next three years, the number of customers that are going to be in three or more clouds is projected to more than double, right, from 31% up to about 68% of customers out there. So it's not going away. It's not subsiding. In fact, it's growing: the number of customers who are going to be out there in data centers as well as in three or more different clouds. >> Right. I have a question. >> Yeah. >> Channeling my Howard Marks here. [laughter] >> Be careful with that. >> Yeah, I know. >> Okay. I love you, Howard. So when you say corporations that are in one or more clouds, is that distributed workloads, or is that just one workload in each cloud, like best of breed? >> I think it's incredibly rare these days to have one workload that's split among multiple different clouds. I think that we're starting to see a little bit of that, especially in advanced designs where you're designing for ultra-high resiliency, right? We're used to, in the data center, talking about five 9s or six 9s. In the clouds, we're usually talking about something like four 9s, and certainly you can leverage regions and availability zones to improve on that. But there are some ultra-high-availability architectures where they're trying to, you know, look at balancing, and we certainly have some architectures, if you look at things we're doing with Spot and others, where we're starting to look at, okay, can we deploy different instances and move them around in clouds for cost efficiency? But I think the vast majority of that multicloud environment is: I have one workload on this cloud, I have another workload on this cloud, and another workload on this cloud. I think the interesting evolution that's going on right now is moving that from being accidental to being purposeful, right? So I think today people are on multiple clouds accidentally, right? They've got Office 365 over here, they've got some developers doing something on Amazon over here, and they've got an enterprise program doing something in DevOps with Google Cloud, for example, right? And it's just kind of evolved that way. >> Yes. >> We think it's going to be far more purposeful in the future. >> I want to add to that as well. When we say a workload, I think that's what we also need to clarify in a definitional sense, right? What is a workload? Is it a single instance of a particular application running, or is it the way that we are actually running our business processes? And if we think about business processes, we may collect data in one place but want to analyze it somewhere else, and that somewhere else may be in another cloud provider, or maybe from on premises into the cloud.
I think that's when we're starting to see some of those workflows becoming more multicloud. And going back to what is a workload: it could be one workload, or one application, that one team is thinking about that way. >> I like that. >> To add to your point, let's step back even further. >> Mhm. >> Let's define cloud, because I've even had some [laughter] >> We could be here all day. >> Yeah. Because I've even had some people challenge me going, "Well, SaaS isn't really part of the cloud." Are you kidding me? SaaS is the original cloud when we talk about cloud. So, you know, like you suggested, Office 365 is a cloud. Others may have workloads in AWS or GCP; that is multicloud by accident, which a lot of organizations do have. Then you add Salesforce, maybe you have Oracle. So, you know, do we include SaaS? Is multicloud inclusive of SaaS, IaaS, PaaS? >> I think it would be disingenuous to not include SaaS as part of cloud. I mean >> I would agree with you on that one. >> It's delivered as a cloud service. Now, I agree with you that it's traditionally more of a, I'll call it, easy on-ramp to the cloud, right? You're not choosing the cloud per se. You're choosing the SaaS service that you want, and in some cases the cloud delivery of it is incidental. And in a lot of cases, for the end enterprise, right, they don't have a choice as to what cloud it's being delivered on. You choose your SaaS provider, and they're making the choice on the back end as to what hyperscaler they want to run on. So I think the interesting evolution, to bring it back to this point, is becoming hybrid multicloud on purpose, right? As opposed to accidentally being hybrid multicloud because you've chosen a best-of-breed SaaS vendor for multiple different workloads, you're actually directly choosing: I want to operate with Google Cloud directly as an entity. And in some cases it's about where you have the business relationship, right? Do you have it with a SaaS provider? Are you sending your money to that SaaS provider, or are you actually sending it directly to Google Cloud or Microsoft Azure or Amazon Web Services? And that business relationship, I think, is where it starts to rotate. >> So I want to move on to some of the common misconceptions about moving into a cloud provider or a multicloud environment, like you said. What are some of those? We think, you know, moving out of a data center into the cloud should be easy, but like you said, there are some misconceptions or perhaps unknowns there. >> I think moving into the cloud in some cases is seen as a panacea, and I think anyone who's done it recognizes that it's not, in and of itself, a panacea. You are trading a certain amount of complexity within the data center. I think any of us who operate inside a data center understand that complexity, understand the needs of trying to keep something running at five 9s or six 9s. If you've ever had a lightning strike on your data center, or an actual sewage spill into your data center... that was not a good day, by the way. I'll tell you that story over dinner. Actually, I shouldn't tell the sewage spill story over dinner. It's not appetizing. But, you know, we've traded those sorts of complexities for the complexities now of operating in a cloud environment, an environment where not everything is completely under our control, and some things we only learn when we actually trip over them for the first time in the cloud.
Especially when you start to get into a multicloud environment, and you're dealing with the different APIs, the different command sets, the different portals of all of these different clouds, and trying to achieve interoperability for your business across them. >> Sure. >> Well, so hold on a second. Yeah. >> So I mean, we come back to Fala's question of >> sorry >> it's [laughter] okay. >> Of let's define cloud. I mean, from the perspective of this presentation, are we defining cloud as, quote, hyperscaler, you know, GCP, AWS, Azure? I mean, there's an entire other end of the spectrum that takes care of all of those things still, but it's still more traditional workloads, and I think that's all part of the conversation. Is that in scope here as well? >> No, absolutely in scope. I think that's a good point. Do you want to say a little bit more about that? Are you talking about cloud service providers? Is that the direction you're heading? >> Well, I'm sure that some clouds were perhaps out on a different island than [laughter] other... >> There was an inside joke there that [laughter] someone's going to explain to me at some point. >> So yeah, I mean, there are cloud service providers, but I would make the argument that once we started virtualizing workloads, we were creating clouds. >> Yeah. >> And so, you know, a hybrid cloud should take all of those things somewhat into account, and definitely we should be optimizing for each of those. You know, different workloads have different needs. The reality of it is there are still legacy applications that aren't going to be cloud native now, or probably in the next 10 years, >> right >> you know, unfortunately. >> I wish that they were gone, but yeah, just definitely let's keep that in scope. >> Yeah, I absolutely want to talk about that a little bit as well. When we think about cloud at NetApp, we see the cloud providers, and I mean all cloud providers, even private cloud providers, people providing platforms for their business units, for their application teams, for their, you know, other users, as part of the cloud story. I mean, we differentiate logically that there is a public cloud or hyperscaler cloud and a private cloud, but we really want to think about them as a single place that you would go, or something that you would build, and then the place that a developer might go or a data scientist might go to get a service. >> And so, you know, one of the things that I'm seeing definitely, and we talked about this a little bit before, Jeff, was the evolution of how platform engineering is really becoming a cool term again. It was a cool term a couple of years ago. We are trying to build these platforms, and it shouldn't matter. What you really want at the end of the day is: did I get the outcome that I was actually looking for? Do I care that it was in a public cloud provider or on premises? As long as I got the right security, I had the right service, and I have the right, you know, time to delivery, then it shouldn't really matter. And that's really one of the things that I think, you know, we're seeing a lot more of, and we'll be demonstrating. Yay. >> Yeah. [laughter] >> I think it really is that evolution that you're talking about, of cloud to... I think we have to be careful when we talk about abstraction layers, because that means something technically, right?
And when we talk about virtualization, that means something technically. But it's this concept of: instead of it having to be a net new learning curve with every new data center, cloud service provider, hyperscaler, or public cloud provider, a completely new learning curve every time, it becomes part of the platform each company is building for themselves, right? And I think that's the important evolution, where we're looking at a world where we move beyond sort of walled gardens, where each individual cloud is its own individual experience and every time you stand up a new workload, quote, it's a brand new learning experience, to: how can we abstract that up? Just as, you know, you mentioned virtualization: just as if you bring in a server from another vendor, it's not relearning everything. Maybe, going back a couple of years, you're dealing with the out-of-band management being a little bit different, right? The BIOS is a little bit different. But everything up the stack has been abstracted, has been built into a process, and is, you know, part of a workflow, right? How do we do that, but uplevel it to the cloud layer, essentially? >> So can I ask a question on your multicloud environment? In the mobile industry, right, we've got 5G being deployed by the service providers and also the enterprise, and they have very much a concept of moving data towards the edge to reduce backhaul and also latency. So how does the multicloud then incorporate data on the edge for those kinds of environments? What's your perspective there, and how are you working on that? >> So I think that's a great question, and honestly, I think it's a good open conversation. I think that we're very nascent today as to whether the edge is considered to be its own domain, its own category that's treated entirely differently, or whether the edge is an extension of the cloud and will be managed as a cloud service, right? And I'll be really honest, you know, vendors like to come up here and say we've figured everything out, right? I think we're learning as well about how we want to treat the edge, because, you know, my colleague will demonstrate our Cloud Manager software and how we think about managing resources across the cloud. When you start thinking about thousands or tens of thousands of endpoints, if every car is an endpoint, some of those traditional models start to break down, right? You're not going to model it as though you have 10,000 data centers, even though in some cases you have 10,000 data centers on wheels. So I don't think we've entirely figured this out, right? I think this is a challenge that all of us in the industry have to figure out right now: how we're going to model that, right? And is data truly going to live out in the manufacturing facility, in the car, you know, in the air? Or is it a respite for, you know, AI/ML to gather some insights, and those insights are going to be pushed back, right? And I think it's probably going to be more the latter, just because, even with 5G and everything like that, bandwidth is not increasing at the same rate as the rate of data growth, right? So we're still going to have that inequity in the pipe getting to the data. >> One of the things I see, and I'm not a storage person, so jump all over me, but one of the things I see is that when we look at data, the same piece of data needs to be in different places in the network now, to be acted on in different ways. So at the edge it's much more about responsiveness.
We want to act on it. In the center, it's more analysis. And dealing with how we handle data when it doesn't exist in one place, I think, is perhaps the biggest challenge: to rearchitect our networks that way. Does that make sense? >> Yeah, no, it makes sense. A lot of what we've been doing, especially in the AI/ML space, is moving away from the idea of storage, right, where data goes to a place, and thinking about it as a pipeline, right, where data is always in transit, and in some cases it's in more than one place, right? And so when we built out our data science toolkit and other things like that, it's based upon the assumption that there may not be an authoritative final state where the data is. It may be constantly existing in this pipeline that will go from edge to maybe a data center, right? Maybe a cloud. We think most likely it'll go from edge to cloud, right? Do some processing there, and then maybe be stored in a data center for long-term archival use. But yeah, definitely, I think it's going to be moving into more of a pipeline. >> So I have a question [clears throat] there, and this relates to the pipelines and the workflows, right? Do you guys see that we're at the point where we have workflows that maybe even the people you sell to, the people who actually manage traditional data centers now, don't understand? Do they even understand these terms? I mean, we've spent this long talking about the definition of cloud and multicloud. So, you know, do you see us at a state, or getting to a state, where people need to start really understanding what these workflows are, because that's how applications are being written? Or are the developers already there with these ideas? Because it seems like there's a mismatch between how infrastructure is and how infrastructure needs to be when we finally start developing applications this way. >> Yeah. And I think we go back, and I come from an architecture background, so I'm always asking: okay, that's cool, but so what? Why do I need this? What is the reason that we are building a really cool feature? And I think it comes back to: yeah, we need to understand what those requirements are, whether it's an application that's going to live in a place with persistent data in a single location, or whether it's a workflow and the data is going to move around. We really need to understand what it is that we are trying to do, and then the infrastructure has to support that flexibility. And I think that's what we're looking towards. Not to speak for Jeff here, but I think what we're definitely seeing is helping the infrastructure be more flexible, helping the decisions that you make be flexible, so that you are not going down a single decision path that leads you down a particular architectural route, where you then realize: oh gosh, I now need that at the edge; do I have to unwind all of those decisions? So yeah, I definitely see that happening. >> Could it also be said that if I was going to create "cloud", I'll use quotes here [laughter] >> There were air quotes, in case you weren't on camera. >> Yes, exactly. As a good operational guy, I would say, you know, cloud is an operational model. It's not a destination, right? >> We've heard that used multiple times before. Mhm.
>> And I think it's interesting, in all these different discussions that we hear, there are things that we take for granted now, like platforms. For instance, if I go to, you know, AWS, it has a GUI where I can go click, boom, and all of a sudden stuff gets spun up and it just works, right? >> And we've gotten used to this idea of cloud as that platform. So, you know, maybe that is the connector when we talk about all these multiclouds, because at the end of the day it's an operations model that takes things into assumption, kind of like a declarative language: it knows that there are things that are going to be there, and then it builds based off of that. >> I think that's exactly right. While you were saying that, I was thinking of an old SNL skit about something that was both a floor wax and a dessert topping. Right. I don't know if you're... I'm dating myself. Right. So I think cloud is honestly both of those things, and I think that's where a lot of the confusion stems from, right? Because I think, to be fair, cloud can describe, you know, the hyperscalers at a certain scale, because there does become almost like this transformation, call it a cloud singularity or something, when you get to a certain number of cores in data centers under management. Something transforms, something happens, right? Something that none of us are going to be able to achieve, even in the Fortune Five, right, within the scope of a data center. It's just more than the sum of its parts when you get to that sort of scale, not to mention the sheer number of bodies and the intelligence that people like AWS or Azure or Google Cloud can throw at problems when they want to solve them. So, it is a thing, right? I know we were joking offline before about the bumper sticker, right? Cloud is just someone else's data center. And I think that's true, but it's a massively reinvented data center, right? It's like saying that a battleship is just an upscaled sailing boat, right? I mean, they both go across the water, but one's got scale, right? But at the same time, I think what's more important is cloud as a model, right? Cloud as this way in which you operate across disparate resources and across the hybrid multicloud. >> So Jeff, I want to ask then, you know, we're here with some great presenters from NetApp. What is the NetApp perspective on this, and how is NetApp making what we would consider cloud services, whether it's a reinvented data center or whether it's workflows, how is NetApp making that better? >> Yeah. So I think the number one thing to understand about NetApp is that we have been all in on the cloud since cloud wasn't cool, right? Like, we had our pocket protectors and were saying this cloud thing is awesome back when a lot of what would have been considered traditionally on-prem companies were seeing the cloud as potentially a threat, right? And we've always seen the cloud as an opportunity. That's the reason why we have these best-of-breed partnerships with Google Cloud, with Microsoft Azure, with Amazon Web Services, right? In terms of engaging with them.
I think we did our first demo of our software running in the cloud in 2015, 2014, something like that, right? And we were talking about that; it's going on close to seven or eight years now, right? So from our perspective, the cloud is absolutely critical. It's an absolutely huge imperative, and our goal is to build this sort of ecosystem around it so that companies can take advantage of the hybrid multicloud without necessarily having to refactor all their data and all their applications, or to support them if they are refactoring, right, as they move to the cloud. >> And when you think about some of the challenges that we talked about here, you know, from edge to developers to, you know, how are we consuming the cloud? Is it something that we need through APIs? >> Yeah. >> What is happening with NetApp's platforms or products in that space to make them more friendly to people who perhaps are not, you know, familiar with NetApp from 20 years ago? >> Yeah. So, the first thing I want to do is not steal Chuck Foley's thunder for the demo later, because I think he knows where I live. You know, in Jersey. So I don't want to steal a lot of that thunder, and in fact, that's what we'll spend the majority of the day on. If people are getting tired of listening to me talk, right, cool demos are coming in less than 10 minutes. So stand by for that. Don't tune out just yet. But we're building our software to be essentially cloud agnostic and data center agnostic, right? Everything we do now is cloud first in terms of how we deliver things, delivering it as a SaaS sort of offering, right? And so you look at these services, and they're all embedded into our cloud offering such that our customers can use them across any of the hyperscalers of their choice. We even support, when you look at observability, right, and things we've done traditionally there, not just traditional NetApp things in the data center, but basically just about any storage or data infrastructure that can run in the data center. We're trying to offer that visibility for customers so they can actually see what's going on in real time, right there. And we actually think one of the key things we can do is deliver that observability, right? Because I think one of the biggest challenges is, once your data gets out in the cloud, where is it? It's kind of a "dude, where's my data" sort of moment, right? And so we provide that visibility so that we can then offer value-added services around compliance, around governance, around privacy. We really see that as the future of the company, right? We want to provide that visibility, and the way NetApp, I mean, NetApp's a for-profit company, right, the way we continue to grow and the way we continue to build this is to offer not just keeping people in walled gardens or silos, but these value-added services across their data, wherever it is. >> That's awesome. So, last question for you then: if we look at the multicloud, or the cloud as we've defined it here today, what is one thing that we would like people to do, or what's one thing that you think they should do, in order to really empower and evolve their cloud strategy? >> It's a great question. So I think the key thing to be doing right now is, instead of looking at it in silos, instead of asking what are we doing with this cloud, what are we doing with that cloud, to really try to establish what your overall multicloud strategy is moving forward, right?
And move it from being, we had this discussion at the very start, move it from being this thing that happens by accident on a workload-by-workload basis to being intentional, right? Like, how am I building a management platform for the future? I think a lot of us, for our data centers, in some ways almost lucked into having vSphere as our management console of choice, right? It kind of came with the package, right? A decade or more ago, we used to say that everyone thought the hypervisor was the secret sauce of VMware, but there were multiple hypervisors. The real magic was the first time you did a vMotion, right, between machines, and then you were hooked, right? So, [snorts] instead of having that happen by accident, how do we start to build out a control plane? How do we start to build out this sort of hybrid multicloud visibility and control intentionally, so that we're taking control of our destiny as we move into the multicloud instead of just having it thrust upon us? >> Excellent. Thank you so much. And I'll leave you with that point: take control of your multicloud destiny. Thanks so much for joining me here. >> Thank you. Thanks. >> So I'm going to pass it over to Chuck Foley now. I'm not sure if he is coming up here. Hello, Chuck. >> Good morning, Phoebe. [laughter] I hate following that guy. He's like one of the smartest guys I know, so there's no winning following him, or dogs, or children. >> All right. Well, I will [laughter] I will hand over to you. >> Hey, I really appreciate you guys spending some time with us today. You've already shared with them the agenda for the rest of the day, Phoebe, so they know how much time we have. Right. Good. There's 2 minutes and 41 seconds left in this session is what that says. So, I'm going to spend some time with you guys going a little deeper on some of the things that Jeff shared with you. Really, for the next few minutes, I've got three objectives: let you know what we're hearing, let you know what we're seeing, which is sometimes different than what we're hearing, and give you a preview of some of the investments that we're making, and that's really the big one. You have, over the past couple of years, seen some pretty significant investments from NetApp in this hybrid multicloud chaos landscape that Phoebe and Jeff spoke about, and we're making a lot more. And then in a little bit I'm going to take the chance to bring up some of my counterparts. They're going to actually dig in, roll up the sleeves, go onto a live system, and show you how these technologies work together. But to [clears throat] make the point, and to really drive home what we're seeing and what we're hearing beyond what Jeff and Phoebe shared with you: we spend a lot of money with industry analysts. We try to keep an eye on what's happening with our customers, and not only what they need today, but what they're going to need next year and the year after, because some of these go-to-market cycles are pretty long in development.
What's become really loud and clear from the analysts we're working with is that we have moved from a public-cloud-versus-private-cloud distinction to hybrid cloud. And Puma actually asked the question earlier about workloads: does that mean one workload across all of them, or is it independent workloads? The answer, by the way, is yes. I talk with customers all the time and ask them that question, and the answer is yes. To some customers it means workloads by cloud, where it makes sense. For other workloads we're seeing that a part of the workflow is in one cloud and a part of the workflow is in another cloud. I will tell you that at NetApp, we're working hard on developing what we think is a defensible lexicon, so that we can keep all of our 10,000 employees on the same page with what's a workflow, what's a workload, what's a use case under a workload, etc. But everybody seems to be coalescing around this: the world is moving their workloads to a hybrid multicloud scenario, and they're looking for a single way to interface with that. This has become what I refer to as the bleeding-from-the-neck problem that our customers are facing. It's not a question of: are you using cloud? Look at the numbers, and you might have your favorite number jockeys. I don't care if you're looking at IDC or Gartner or Forrester or GigaOm or ESG or Evaluator Group, etc. They all have great numbers, and they're all fairly well aligned. So we picked a few and put them out here so you guys see what we're seeing. 97% of the customers that have been surveyed are using cloud for IaaS. That's another question that came up: is it SaaS, PaaS, IaaS? The deepest level from a managing-an-infrastructure perspective is IaaS. It's not just: am I using O365? Am I actually using resources that I have to know and manage and buy and keep an eye on, and that I'm responsible for keeping secure? That's a pretty big deal. Most people are doing that. And hybrid multicloud is the standard. I'll give you some open-the-kimono, dirty-laundry numbers. We did some surveys of our own customer base, between I think it was December and March, to get a handle on how many of you are private cloud, how many of you are basically public cloud only, how many of you are hybrid cloud between the two, connecting workflows for DevOps or backup or analysis, and how many are hybrid multicloud. We were really surprised. I did an informal survey. I talked to some people in the sales and marketing and finance and executive ranks before we got the numbers in. I said, "If you're drawing a bell curve, where do you believe our customers are? Are they private cloud?" Private cloud meaning data centers. Are they public cloud, or hybrid where they have a data center? Because most of our NetApp customers, right, have data centers. Are they hybrid, meaning they're connecting their private data center to a public cloud? Let's consider that one, because there are a few analysts with an influence on our lexicon who are driving the message that for hybrid cloud we should keep it to one primary cloud plus your data center. Or are they hybrid multicloud? We expected a bell curve, right? A small percentage, and the numbers I was hearing were more around 20 to 25% of our customers, this was our internal assessment, were private cloud; the middle of the bell curve were hybrid cloud, meaning they have data centers and they're primarily Azure or AWS or Google, etc.; and some of the leaders, maybe 25 to 30%, were hybrid multicloud.
That number actually shocked us, because it came out that 93% of our customers surveyed were hybrid multicloud. They were very clear: I value my data center resources for what they do and what I need them for. And guess what? Azure has things it does really well. AWS has a different set of strengths that I want to leverage. Google has a different set of strengths that I want to leverage, and I want to be able to. What the heck? I've opened up to the cloud; I want to use all of them. An inordinately large number of our customers are using all of the cloud environments, or at least multiple cloud environments, to complement their data center footprint. That was an eye-opener for us, because what we are beginning to realize is that this level of complexity is not linear. It's logarithmic. It's not just: oh, I need to use this cloud and that cloud. It's multiple frameworks, multiple tool sets, multiple skills and competencies. I had a great conversation with a couple of you guys last night about what you're facing with your staff's competency and manpower, and the need to have... these are not fungible resources, where, oh, he knows this, he can do that over here because Jim Bob is on vacation. I talked with a customer about three weeks ago who said: I have different teams with different skill sets. I have an Azure team, an AWS team, and a Google team. My first two teams are doing great, but this team on this cloud over here, I lost three people to recruiters, and it's killing me, because they use a different lexicon, a different skill set, a different set of tools. I can't just pull somebody over and put them in there effectively. So our customers are saying: I want the scale and performance, the confidence, in the cloud that I had out of data center infrastructure. I want the flexibility to do it all. But the real big one, the real big one, is that last bullet: simplicity. NetApp, I need you to make it easier for me to take advantage from an infrastructure and application services and data services perspective. I need you to make it easier for me to use these different clouds. So that's a problem we're really keying on. And so our investment is: we're looking at the requirements that are coming from increased staff demands, increased competency demands. When things get complex like that, you get decreased visibility. I can't watch everything. And if you have decreased visibility, if I have too much stuff going on, your risk goes up. Not only your risk of things like breach and failure and availability and SLAs, [snorts] but your risk of cost overruns. If you guys are like most of the customers I talk to, you see it: they say, "Look, we throw it out there. We'll check later to see if we're using it." How many times, oh my gosh, at NetApp, I've got to tell you, we spend a lot of money on public cloud for all the right reasons, but we have times where we spin up an environment for a proof of concept or a demo or training or whatever, and we have to remind ourselves: go back and check that, because you've got a lot of resources spinning there that are hitting the clock. And we have a lot of customers saying one of their number one problems is cost overruns, because that takes money out of the business that could go to generating revenue, and it decreases business agility. You need to be able to move and capitalize on opportunities when you see them. As a result of that, we're investing in expanding our ability to have a common technology base everywhere.
We're really happy about what we've been able to do with ONTAP in the cloud. Jeff noted for you that it's been about nine years since we first brought ONTAP to the public cloud. It's now available in all of the public clouds: a common format for storing and managing data. And now we're going to these next layers of, how do I provide services to that data, be it data protection or governance or compliance or cyber resiliency, and how can I do that on a policy basis? Not just "I can see it and I have to go take these actions," but what if I could apply a policy that fits across all those operating environments? We're going to show you some of that. A real key thing is our increased investment in automation, AI, and ML. There's so much work to do, and I think we all realize there are just not enough people to do all that work. So what we've got to do is find ways to use AI and ML intelligently to automate a lot of the infrastructure tasks. Another great conversation we had, at least at my dinner table last night, was about where we're training our people in the industry. The question that came up from two of you is: where do people get trained on infrastructure anymore? Where do people get trained on storage and compute and provisioning volumes and aggregates and setting protection levels? Because most of the people I'm interviewing that are now coming out of college in their 20s don't even care about DevOps as much anymore as they do application development. And I'm getting smaller and smaller numbers of people who really understand the critical aspects of infrastructure that all this cool stuff sits on top of. And the feeling we have is, maybe, you know, we're going to reach a point where the generation that understands infrastructure and the generation that is moving on to some of the next things are going to have a break, and the bridge can collapse. So how can we use AI and ML and automation to do some of that infrastructure work, so that we can compete with the lack of resources, and how do we innovate in our, excuse me, our consumption models? So this is what we're going to go over with you. In just a few minutes we're going to get some hands-on with you about how we're addressing these challenges. We think that we have a really significant opportunity here if we can bring a unified hybrid multicloud data services platform. It's built on ONTAP. ONTAP is the world's leading storage operating system: block, file, data management, protection, resiliency, availability. It's used by some of the most demanding users of technology in the world, so it's really well proven. And now that we have the ability to have it in every different environment, we can bring you the capability to lay management services on top of that, so you don't have to go piece together your own. And we're working with a lot of our partners, with an open interface, so that we can bring in some of our partners' technologies as well as our own, and bring simplification to what really is a pretty complex world. I'm going to share with you a few terms before we get into the demo, so you know some of the things we're talking about. I want you to have the lexicon in your head, so that when really smart guys like Greg and Vishnu come up and show you workloads in motion, you know what they're referring to. First, we're going to talk about the ability with the NetApp platform to store your data. You know, ONTAP.
Most people in the enterprise IT industry know the capabilities of ONTAP. We now have ONTAP in all of the major clouds. Not only do we have ONTAP there, our hyperscaler partners have partnered with us to make it a part of their plumbing. We have native first-party services in the major clouds. We have Azure NetApp Files in the Azure cloud. We have Cloud Volumes Service in Google Cloud. We have Amazon FSx for NetApp ONTAP on the AWS cloud. These are services provided from your hyperscaler, through their console, through their billing mechanisms, for those that need enterprise-class storage capabilities. We then have a NetApp offering called Cloud Volumes ONTAP that spans all three clouds. So you can either use a hyperscaler-provided flavor of ONTAP or a NetApp-provided flavor, and the good news is it's all ONTAP. >> Which three clouds? >> Amazon, Azure, and Google. We also have ONTAP Select running in the IBM cloud. >> And Oracle cloud? >> At this point in time, we don't have official support in Oracle, Alibaba, and some of the other ones. But if you look at the industry, what you see is that cloud file storage is the number one fastest growing data footprint in the cloud. I mean, there's a massive object store, but file storage is growing incredibly, and so we're having a lot of conversations with other providers that might want to leverage this as well. But today it is Azure, Amazon, and Google, plus ONTAP Select in the IBM cloud. >> And Chuck, if you don't mind, one of the things, when we always talk about APIs and this idea of presenting APIs: humans don't consume APIs; humans consume software that consumes APIs. And that's why the partner thing, which you already kind of jumped right to, you're creating a partner ecosystem, which is as if not more important. Because you can have the most fantastic APIs in the world, but if you don't have hyperscaler partners, development partners, other, you know, Kubernetes CSI plugins, CNI plugins that can reach this stuff, the API is meaningless to me. And then on the cost side, too, like you talked about being sort of deeply ingrained so that you can offer services. So that means I can cut my cost down, because the cloud is cheaper as long as you're willing to pay more. But the problem is >> I love that >> that if I try to deploy ONTAP myself, sitting on top of AWS, I'm not going to do it well, because it's going to use far too much. Like, how deep? We'll obviously see it, we'll get to the real bits, but what other partner ecosystem stuff are you doing, with development partners and other software services that need storage backends, to, you know, create this unified ecosystem? >> So, a two-part answer to that. You're right. What we're doing is with the major hyperscalers, and between those three, they have the overwhelming lion's share of the public cloud footprint. So we may not be everywhere, but we are taking it in priority order based upon where our customers are spending their money. Having ONTAP as a service offered through the hyperscalers means we're getting into their plumbing, into their console, so they can engage their services. Let me give an example. Having Azure NetApp Files be a native Azure service means it participates on that control plane with other Azure services, like Azure Kubernetes Service or Azure VMware Solution. Having CVS offered by Google means that Cloud Volumes Service, or CVS, in Google Cloud can participate with the other native Google services, like GCVE, Google Cloud VMware Engine.
Having FSx for NetApp ONTAP as a native offering from AWS means it's part of the AWS control plane. It's a native service, and so if you're going to use other AWS services, they can interface natively. RDS Custom, for example, >> right? >> There's a really long list for every one, and I don't want to say that every cloud has got every service done for their NetApp ONTAP implementations yet, because the list is really long. As big as AWS and Azure and Google are, they've got a lot of work on their plates too. But I can tell you this: there is a long list of services. They prioritize them just like we're doing. They check through that list. They give validation and certification of this storage footprint with those native services. That's a real strength for our customers, because it brings them an incredible level of confidence and simplicity. But you asked about third parties. Let me give you a great example; we're going to talk about it a little later. It's VMware. We've got a massive number of customers running VMware as their hypervisor platform in their on-premises data center environments, and they've wanted to bring some of those same workloads, and all the runbooks and policies and processes they have, up to the cloud, and there have been some technological challenges. We have worked with VMware, with Azure, with AWS, and with Google to be able to lift a footprint up into the cloud by providing a supplemental datastore. We're going to go into the detail on why that's important and how we did it. But that took over a year of joint development between three of the biggest players in the industry: on any given cloud, the cloud player, VMware, and NetApp. And sometimes it's a little complex, because when you're doing it in Azure, Azure VMware Solution is an Azure service; when you're doing it in AWS, VMware Cloud is actually a VMware service running in their managed account. And we've all seen that depending on whose account it's in, and whose VPC it is in, and how you're connecting to other people's VPCs, etc., that can bring layers of complexity. But that shows you the investment we made. Nathan? >> On that same point, just defining, for something like Azure NetApp Files, right, defining that in terms of what it can do with other native Azure services, to me it sounds like almost exactly the same thing. I could do almost exactly the same thing with just native Azure Files. Then, moving to VMware: the whole reason why we're moving from vSphere on-prem to VMC on AWS, or AVS, whichever cloud solution you're utilizing, is because it's the same platform, the same core binary files. I'm assuming there's some sort of secret sauce inside these things that adds that connectivity for NetApp as well. >> Yes. And that's a great point, and that's the key. It's not just big, fast, always-available storage. It's the ability for that to play with the rest of the control plane, because, as you say, it should be as easy as that. If I'm going to use Azure as an example, since you brought that one up, I have my list of options from Microsoft: I can choose Azure Files. I can choose Azure Files Premium. What if I need some of the advantages that are not there? What if I'm running an SAP HANA in-memory database that requires guaranteed sub-millisecond response time? There's another option right there on the checklist, and it's part of my control plane.
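To make that first-party point concrete, here is a minimal sketch of what provisioning the AWS flavor, FSx for NetApp ONTAP, might look like with the standard boto3 SDK. The region, subnet ID, and sizing values are placeholder assumptions, not anything quoted in the session.

```python
# Hedged sketch: creating an FSx for NetApp ONTAP file system through the
# native AWS control plane, like any other AWS resource. The subnet ID and
# capacity numbers below are hypothetical.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

resp = fsx.create_file_system(
    FileSystemType="ONTAP",                     # the NetApp first-party flavor
    StorageCapacity=1024,                       # GiB of primary SSD capacity
    SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical subnet
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,              # MB/s
    },
)
print(resp["FileSystem"]["FileSystemId"])
```

Because it is a first-party service, the usual AWS IAM, billing, and tagging machinery applies unchanged; Azure NetApp Files and Cloud Volumes Service play the equivalent role in the Azure and Google consoles.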
When I set up my SAP environment, it's natively integrated, it has the services that I need, and it's integrated with the rest of my Azure services for backup and data protection and audit and compliance as well. Okay, I want to set some other nomenclature in front of you, so you know what we're talking about: protecting the data. It's not enough to just store it, because, as I found out when I dropped my phone in the lake three weeks ago, if you don't back up and protect that data... don't worry, I've got memories of my grandkids from last year. I just don't have pictures of them anymore. And that carries some real economic value when you're a large corporation. So you've got to protect that data. We have a service called Cloud Backup. We're going to demonstrate it for you. Cloud Backup is a SaaS-delivered service to protect data directly from an ONTAP system, either on-prem or in the cloud, directly to a secondary object store, either on-prem or in the cloud, any of the three major clouds, without needing backup software. >> Can you go cross-cloud with it? >> You can go cross-cloud, and we're going to show you, actually. So I want you to bring that up for Vishnu and Greg. I want you to say: I'm backing up from here to there. Can I back up from there to there as well? We have customers that have to do that for whatever their compliance regulations are. They need a 3-2-1, a 4-2-1, a 5-2-2, or whatever. They don't just want to go from my on-prem to this cloud; I need to back up from this cloud to that cloud. Yes, it can do that. >> Is it a backup service, or is it snapshots? >> It's a backup service that uses block-level incremental forever. So your first baseline becomes your last image, and then you update that incrementally over time, and you can restore either files or complete volumes. So for most of our customers it serves the purpose of both snapshots and backups. >> And is this a NetApp-native backup service? >> This is NetApp native. We manage it. It's a SaaS service. And again, if I have on-prem systems, it goes from ONTAP to object. You can go ONTAP to ONTAP with our SnapMirror technologies, for example, but for more of a backup versus a DR use case, a lot of our customers are saying: I want to go directly from my primary to the most inexpensive, scalable storage I can get, which in many cases is object. Point and click, set it and forget it. We're going to show you how easy it is. So remember that term, Cloud Backup.
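To picture what "block-level incremental forever" means, here is a deliberately tiny toy model in Python. It illustrates the general technique only, not NetApp's actual implementation, and it ignores real concerns such as deleted blocks, cataloging, and compression.

```python
# Toy model: after the first full baseline, each run stores only changed
# blocks; a restore overlays the incrementals on the baseline, so the
# baseline effectively stays current ("your first baseline becomes your
# last image").
from typing import Dict, List

Volume = Dict[int, bytes]  # block number -> block contents

def incremental(prev: Volume, curr: Volume) -> Volume:
    """Return only the blocks added or changed since the previous run."""
    return {n: b for n, b in curr.items() if prev.get(n) != b}

def restore(baseline: Volume, increments: List[Volume]) -> Volume:
    """Replay every incremental over the baseline to rebuild the latest image."""
    image = dict(baseline)
    for inc in increments:
        image.update(inc)  # simplified: deletions are not modeled
    return image

day0 = {0: b"boot", 1: b"data-v1"}                 # full baseline, sent once
day1 = {0: b"boot", 1: b"data-v2", 2: b"new"}      # next day's live volume
delta = incremental(day0, day1)                    # only blocks 1 and 2 move
assert restore(day0, [delta]) == day1              # whole volumes or single files
```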
Next is Cloud Sync, because we see mobility with hybrid multicloud. Mobility is becoming a much hotter issue than it was a couple of years ago. We have customers that need the ability to move data from any to any. I need to go from this storage footprint in this cloud to that storage footprint in that cloud. Maybe I need to go from NFS to object. Maybe I need to go from an iSCSI LUN to an SMB share. Maybe I need to go from S3 to Blob. You can move data any to any with some really easy, elegant drag-and-drop simplicity. We'll show you that in a little bit. It helps a lot with migration workloads, DevOps and testing workloads, and big data analysis workloads, where I say: I've got this data in four different places, and I need to get it into that bucket so I can have some analysis run on it. And by the way, some of it's from NFS and some of it's from SMB, but I need that object bucket so I can run data analysis on it. And one really exciting technology we have is Cloud Volumes Edge Cache, an edge caching technology that allows our customers to get rid of dozens, hundreds, even thousands of distributed file servers. Think of those Windows file servers that sit in every manufacturing location, distribution office, regional sales office, and so on, that somebody has to manage, back up, and keep in compliance. We've had customers that called and said, "We need to talk to you about that solution, because we just figured out our file server walked away a couple of weeks ago. It's not there. The last time anybody remembers using it was two weeks ago. Nobody saw it; it was sitting in a closet. It's gone." Literally, that kind of stuff happens. Imagine if you can consolidate those into a single cloud footprint. That's Cloud Volumes Edge Cache. >> On the tiering and optimization: how much of it is purely NetApp-specific platform development versus leveraging whatever tiering the cloud offers, in that they have to have tiering available that you leverage? Or are you basically back-ending the entire tiering? I'm giving you the tough question that we're going to give the cloud folks later as well. >> No, it's a good one. It's a bit of both. Again, it's us working with those object partners. Of course, it's very simple if we're going from an ONTAP system to an on-premises StorageGRID system; that's our object implementation, and that's the easiest. It's a little tougher if you're going to S3 or Blob or Google Cloud Storage, but we've worked with them to make that happen. You set it up; it is ONTAP source, and it's only ONTAP source, but it can be any of those object targets. We've worked with the major cloud partners, as well as our own object team of course, to make sure that I can set it up and have it tiered regardless of where I need it to go, and it can go from cloud to cloud: from a CVO in one cloud to an object store in a different cloud. >> So, right on top of that: could I do something like that for hybrid cloud, like a VMC on AWS to an AVS? >> VMC is a little tougher, because VMware has its own hypervisor layer in there that's managing the storage, if you're familiar with vSAN, for example. So you wouldn't use that for a VM. You would use it if you say: I have a VMware cloud environment here, and I need to recapture some storage space, so I'm going to tier it off. But it's not a migration. It's tiering from one working environment; it basically extends that working environment to the cloud. Let's not think of the tiering as a migration. So it's not that I'm going to tier from ESXi on-prem to AVS in the cloud. That's not it.
You would have to use a migration process for that. >> So something like Cloud Backup: could you back up information from one to the other, or vice versa? >> With Cloud Backup, I've got a source, I've got a target, and I can restore to a different target. As a matter of fact, we're going to show that today: how I can back up from one ONTAP instance to a backup store and restore either to that one or to a different target. If that different target is, for example, a CVO instance in a VMware cloud environment, you could do that. Thank you. Two terms to remember that are really topical these days, in the days of cyber resiliency. Cloud Data Sense. Cloud Data Sense is about governance and compliance. The number one thing about Cloud Data Sense: where is my data? Who has access to it? What are the permissions on it? How often has it been used? Is it even my data? I've got three petabytes out here; how much of it is non-business data, where my 5,000 employees are using me as their own personal Netflix library? Do I maybe want to see that, flag it, and move it off somewhere, or delete it because it's against policy? That's governance. The second is compliance: HIPAA, PII, GDPR, DFARS. We're going to show you some examples of that. So this is now saying: let's put some intelligence on the data. Let's allow you to govern and ensure your compliance risks are mitigated. And by the way, one of the things I love about Cloud Data Sense is that we took the perspective of: let's make this not just ONTAP, let's make it heterogeneous, because we have customers with heterogeneous stores, either from other vendors on premises or services like OneDrive and SharePoint. How can my platform provide governance, compliance, and cyber resilience capabilities to the rest of their data stores? That brings real value, because it cuts to the heart of simplification. Cloud Secure, which you'll hear referred to, adds AI-driven, ML-driven anti-ransomware capabilities; I'll get into that. For management, the cockpit of all this is Cloud Manager, a hybrid multicloud console that allows you to discover, deploy, and manage data and data services across the cloud environments. Cloud Insights is an infrastructure layer. So Cloud Manager is about deploying the data resources and the services associated with them; Cloud Insights is about what's happening at literally your system level, your storage level, for health, performance, and availability. Are you running out of resources? Are you going to hit a roadblock? Not just for storage, but for compute and for applications like databases. We'll show some of that. And then there's the developer environment of cloud native. We're not only a huge believer in but a huge investor in cloud-native Kubernetes enablement technologies. You might be familiar with Trident. It's a CSI, a Container Storage Interface driver, used to attach ONTAP to Kubernetes clusters. We started that, and it's open sourced. We have expanded it with Astra Control Service and Astra Control Center, either on-prem or cloud implementations, to bring data management, migration, portability, and protection to the cloud in the ways you've been doing in your data center.
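For a feel of what attaching ONTAP to a Kubernetes cluster looks like in practice, here is a minimal sketch of a Trident-backed StorageClass and claim. The provisioner name csi.trident.netapp.io and the ontap-nas backend type follow Trident's documented conventions; the class name, claim name, and size are illustrative, so check the Trident docs for your version.

```yaml
# StorageClass wired to Trident's CSI driver, so PVCs land on
# ONTAP-backed persistent storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"   # NFS-backed ONTAP volumes
allowVolumeExpansion: true
---
# Developers then just request storage; the snapshots, mirroring,
# and portability come from the ONTAP layer underneath.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ontap-gold
  resources:
    requests:
      storage: 10Gi
```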
So is the intended audience for that developers, so they can provision their own storage, or is it actually operators managing the storage delivered up to the platform the developers use? >> More the second. It's to give those Kubernetes development teams, SREs, and even some of the infrastructure teams the capability to continue with their CI/CD pipeline, so they can be really fast and really agile and use container-based technologies to deploy new things in real time, but do it using a real enterprise-grade persistent storage layer that has snapshots, backup, mirroring, and portability. So if I move a workload from here to here, I don't want to move that workload and lose the connection to its persistent store. One of the things we're seeing in the Kubernetes world: if I go back five years, with persistent storage, you had a capability, but the really cool thing about Kubernetes and containers, and Docker a while ago, was: I spin up an instance, I have my pod or my cluster, my application has its data, it manages it, and when it's done, everything's gone. I don't need to worry about it. And then they started to realize this is a really cool, scalable way to deploy applications overall: let's use this even for our on-prem development. And they needed to find ways to make use of larger sets of on-premises storage systems, and that's where this capability came in. So it's really targeted at people developing applications that need persistent storage for its scale and availability, and who need the manageability to run with it. Okay, now we're... >> One quick one to carry on the Kubernetes piece: what percentage of contributors to Astra Trident are outside of NetApp? I don't know if you know that number. >> I don't, but that's a really good one. I can ask one of my peers, like Zach Mitchell, to see what percentage of the contributions to the Trident GitHub come from outside. I know there are some. I know we have certain people that we look to and we say: it's really cool what you're doing with Astra Trident; what else do you think we should do with our teams? But I don't know the percentage. We can find out for you. >> Yeah. I'm just curious, because as I always tell people, putting it on GitHub does not open source make, right? Just because it's open technology doesn't mean it's necessarily an active open source ecosystem. It looks like you've got lots of issues, which is fantastic; that means people are finding stuff that they want. >> And I love that. >> So people seem to be engaging with the open community, but I'd just be interested in how broadly people outside of just the straight NetApp core are helping to drive innovation in that area. >> Yeah. Last I saw, there were over a thousand enterprises using it. But we'll find out what their contribution is. That's an interesting question. >> I want to add to that, Chuck, actually, as I think you're about to set up your demo. With open source, it also means making that code base available. So you might like Trident, but you might want to do it your own way. Does that make it supported by NetApp if you go and fork our repository and change it? No. But there could always be that potential. And of course, there are ways that those code bases may be merged.
You know, if you come up with a really great feature and we go, "actually, we really like that," well, here's the value of open source: it comes back into the community. And so we definitely invest a lot. It's not just lip service to the open source community. It's definitely looking to the community, looking to what customers are looking for, but also taking advantage of the acceleration that open source provides any project. >> Chuck, I know we had one more. Averil has a question as well. >> Yeah. Chuck, going back to your slide with your products: could you explain to me where the AI, the ML fits, and where the open interface you talked about fits? Because I'm not seeing it. >> So first of all, we are finding ways to embed AI and ML throughout all the different services. To us, that's kind of a back-end service. If we do this right, we'll have the customer-facing services, the ability to store your data, protect your data, govern your data, and then we have a back-end layer of services, which is the API layer, the tenancy layer, and so on. That's where we can engage AI and ML and plug them into each of those. They use it to different degrees. To be brutally honest, Cloud Backup doesn't use a lot of AI; Cloud Backup is "this source goes to that target." Cloud Insights, as an example, managing infrastructure, uses AI to say: what anomalies am I seeing in user behavior taking place on ONTAP data? Cloud Data Sense uses AI and ML to say: what user access anomalies am I seeing in certain data sets? ONTAP, all the way down at the storage operating system level, has the ability to say: I'm seeing things I haven't seen before, such as increasing encryption rates or aberrant user behavior that is outside policy; what step needs to be taken next? So we're using AI and ML as a technology in each of these. We're not providing an AI service to our customers; what we're doing is investing in AI and ML to drive the AIOps that we can deliver to you. >> So have you chosen a specific AI engine that you're using across these platforms, or are you using different engines? I'm trying to understand technically what it is you're using to deliver it, and what your open interface looks like. >> Yeah. So I would say that our interface and our API level go into the services; the AI enablement is under that. We're not exposing the AI service to our customers. We're not saying, "hey, we're using AI for Cloud Insights, and you can plug into the AI and make it do different things," because this is a supported service we have to offer. Okay? >> And Cloud Insights, as an example, or Cloud Backup or cloud tiering or Cloud Data Sense, those are not open source. Those are SaaS services. We're injecting them with AI, but we're not opening that layer. The layer above it is the API layer, where you can have your frameworks interface with some of those services, if that helps. >> Okay. Thanks.
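As a rough illustration of the kind of signal just described, here is a toy z-score detector over an activity series such as files-encrypted-per-minute or per-user access counts. It is only a sketch of the idea; the actual models behind these services are not detailed in this talk.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=30, threshold=3.0):
    """Flag points that sit more than `threshold` standard deviations
    above a trailing baseline (a toy z-score detector)."""
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > threshold:
            alerts.append((i, series[i]))
    return alerts

# Steady activity, then a burst that looks like ransomware encryption.
rates = [5, 6, 4, 5, 7, 5, 6, 5, 4, 6] * 3 + [250, 400, 380]
print(flag_anomalies(rates))  # flags the burst at the tail
```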
Now, one of the things I said before: not only is it all ONTAP, but it's all in one place. And this is where the rubber meets the road, because this is our Cloud Manager interface. It is a SaaS-delivered global control plane that lets you discover, deploy, and manage storage, data, and data services across the enterprise. I don't want you to even have to look at that slide, because guess what? We're going to go real time. We're going to go live. I'm going to invite a couple of my counterparts up. So, Greg Marino and Vishnu Charvali work with customers on an everyday basis. They have customers saying: "I'm migrating my SAP to the cloud; how do I do this?" "My auditors are coming down on me saying I don't meet their compliance requirements; help." "I've got to migrate XYZ workload." These guys face all kinds of scenarios. So, you guys come up. Vishnu specializes and works primarily with our Google teams; Greg specializes and works primarily with our Azure-facing teams. >> And we're going to ask Phoebe Go, who you've already met, who's literally our voice of the customer. Phoebe has polled customers and account teams and has real-world problems that customers face, and we're going to have them work those. But before that, I'm going to make sure, just real quick, that you know the layout of Cloud Manager that they're going to be using. First of all, you'll see this area here, which we call the canvas. The canvas is where you can see your environment graphically. >> Where is Cloud Manager? Is this a SaaS offer? >> Cloud Manager is a NetApp SaaS offering. We host it; you don't have to run it. And now I'll give you a clarification: if you are a secure government agency, or in a secure industry where you need to run it in your data center, there are certain services, not all of them, that can be run in a dark site mode. Generally, though, Cloud Manager and the associated services are SaaS-delivered, which is what I'm going to be showing you today. So the first thing you see on this canvas: what do you notice? This is a multicloud environment. You've got Azure resources, you've got Google resources, you've got AWS resources. I can see them and work with them in one place. Number two, on the right, you'll see the different working environments, and if I click on one, I can go deeper into it. So if I click on the CVO, an ONTAP instance, here, or if I click on the Azure NetApp Files instance in Azure down here, I can see more information about that environment. I could actually go into the console for managing that specific environment down here; I'm not going to do it right now, but you'll see that in a minute. The left navigation that you'll see us using shows you the capabilities that you have for those different working environments: for provisioning storage, for data protection, for consumption, for ransomware, and so on. >> Is all of this exposed via API, and are all of the functions presented? >> The functions are presented where they're applicable. There are some services that are applicable to one given environment but maybe not another. Maybe cloud vendor A has gone through the validation and certification for a certain service, like Cloud Backup, and cloud vendor B might not have done that yet, so it would not be presented as an option there. The APIs are open. >> So let me show you, just to give you a feel for what we can do. I'm going to add a working environment. I'll tell you what: I'm going to add an on-prem NetApp environment. What I showed you before was all cloud stuff, right? What if I want to turn this into a hybrid multicloud environment? What you just saw took seconds. Now, I had to have the credentials, and I had to know where my system was, but I've just added, right down here at the bottom, an on-prem NetApp system. If I want, I can tell that it's there, it's already set up, and it's got a certain number of volumes and capacity already configured. It was literally that easy.
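Since the APIs behind the canvas are open, the same discovery can be scripted. A minimal sketch of listing working environments over Cloud Manager's REST interface; the hostname and path follow Cloud Manager's documented REST interface, but the response field names vary by release, and the token is a placeholder, so verify against the current API reference.

```python
import requests

# Illustrative only: a connector/agent header may also be required
# for some deployments; check the current Cloud Manager API docs.
BASE = "https://cloudmanager.cloud.netapp.com"
TOKEN = "eyJ..."  # placeholder OAuth bearer token for your NetApp account

resp = requests.get(
    f"{BASE}/occm/api/working-environments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
envs = resp.json()
# The response groups environments by type, e.g. on-prem vs. CVO per cloud.
for group in envs.values():
    for env in group:
        print(env.get("name"), env.get("cloudProviderName"))
```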
I want you to think about the problem I talked about before, about your staff not having expertise in different areas. A cloud person can find and integrate this into their management framework literally in a matter of seconds. It's not just the on-prem environment, however, because what if I said: you know what, I want to add a cloud instance, maybe for a hybrid cloud workload like DevOps? I can go in and add a Cloud Volumes ONTAP instance in Azure. I don't need to know a lot about Azure. I'm going to call this one Chuck CVO. I do have to have my credentials, and the credentials have to be high enough. >> If you're trying to access this from NetApp Cloud Manager, what level of credentials do you need on the AWS side? >> You need your credentials stored for your AWS account, your Azure account, your Google project. When I bring this up, I need to re-enter my user ID and password for that account, but the account credentials were already set up. In this instance here, I already have those three accounts, so the account credentials themselves are already set up. >> Okay, so they were set the first time. >> If this was a totally blank canvas and I went to do this for the first time, I would have had to enter all of my AWS or my Google or my Azure credentials, not just my user ID and password. >> Okay. >> So, what you see is that I'm in the process of adding this. Now, what we've done is we've built in some automation to help get to self-service. It's not only "let's handle the skills gap for existing people." What if you want to get to a point where your DevOps teams or your SRE teams can provision storage according to policy, according to their role, and make it stupid easy, where they don't need to know whether this is an m5.xlarge or whether they need ST1 disks? All of that can be templatized for you. So I've said I want to add a CVO instance. It asks: do you want to turn on these services? My company has a policy that says we would like compliance, backup, and monitoring to be the default. I can say no, and I can even say why I'm saying no. This user drift behavior is captured for later analysis, so that not only we, but our customers that have people doing self-service, can determine what's working and what's not, who's deploying according to policy and who's not. Really good user behavior intelligence. I would then select the region and the availability zone, the VNet, the subnet. Yes, I have connectivity. It will allow me to either create a new resource group or use an existing one; I'm going to take the defaults here. It then asks what level of CVO I would like. I can use a premium level. By the way, we're actually providing ONTAP capability in the cloud up to 500 gig for free. It's not a trial, it's not a demo; it's yours to use, full function, up to 500 gig. Our customers find that really useful as they're determining whether a workload will fit or not. I'm going to choose that one for now; I could choose one of the others. And then let's get back to the templatized basis. The system says: my company has already set up these different profiles. Is this like a POC or a small workload that needs a certain amount of storage and SSD because you want it to perform well, or is it a cost-effective DR that's going to use standard HDD, or something in between? I'm going to choose the DR scenario, because that's what I'm going to use it for here.
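The profile idea reduces to a small lookup that expands a label into the provisioning details the user never sees. A hypothetical sketch, with made-up profile names and field names:

```python
# Hypothetical self-service profiles: the platform team encodes the
# policy once; users pick a label instead of VM and disk types.
PROFILES = {
    "poc":  {"license": "explore",  "disk": "premium_ssd"},
    "prod": {"license": "premium",  "disk": "premium_ssd"},
    "dr":   {"license": "standard", "disk": "standard_hdd"},
}

def build_cvo_request(name: str, profile: str, region: str) -> dict:
    """Expand a profile into an (illustrative) provisioning payload,
    so a DevOps user never chooses instance or storage types."""
    p = PROFILES[profile]
    return {
        "name": name,
        "region": region,
        "licenseType": p["license"],
        "diskType": p["disk"],
    }

print(build_cvo_request("chuck-cvo", "dr", "eastus"))
```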
While I'm here, I might as well assign a volume, give a size to that volume, choose the defaults, and say yes, I want to use the storage efficiencies that are already set up on the system. I confirm the configuration, and guess what? I'm done. It took about two minutes, I walked you through how to set it up, and I didn't have to know anything about Azure infrastructure: which VM to use, which storage types to use, which networking services. I just followed the defaults that are set up according to policy by the company. And what you see is that it's just been set up, and it's right there. You got a question? Somebody did. >> Yeah, I do. So yeah, it was really fast, really easy, and there are defaults set up and anybody can do it, but that can also be really freaking dangerous. So who sets up the defaults, and how are we sure? Because none of this is free. >> So you have role-based access. You're right. I happen to be signed on with administrator access. You can have different user levels of access, and you can set what they get to see and what they don't. You don't have to give everybody access to this. You don't have to give your DevOps team access to everything; you can give them access only to certain levels, because sometimes the developers are going to choose the biggest and fastest of everything. >> Of course. >> Right? Why wouldn't they? >> So you can have role-based access. >> So there's obviously a lot to pick apart here. >> Yeah, we're going to turn this into Security Field Day, right? Hold on to your hat. [laughter] >> Yeah, I don't want to go down a lot of rabbit holes here, but so: role-based access control, and you've got your policies defined by the customers themselves; they've created that. From the FinOps perspective, is there cost monitoring? Like, "thou shalt not exceed," "thou shalt not transfer data from..."? >> So there is cost monitoring. >> Yeah. >> FinOps happens to be... and we're going to talk about FinOps; I got really excited. >> That's exactly right, because FinOps is one of the biggest pushes for why we're doing this. We had a discussion last night with the delegates about trying to bring IT ops, DevOps, cloud ops, and FinOps closer together, so that FinOps are not the bad guys; FinOps are actually the people that help you get stuff done in your company. And we've built some things in here, in our consumption model capability; I want to show that in a little bit. Is it okay if I ask you to hold the thought as to how we give FinOps visibility into, control over, and access to what's actually being used and what's not, what's been provisioned, and where people stand? >> And if you can tie that back to architectural patterns and how those are defined. Your FinOps and your EA team should be closely aligned; they never are. So when an EA creates an architectural pattern and then creates a policy for it, the magic sauce would be to have a FinOps overlay that says: "Okay, this is a great idea, but you're dumping all of your Azure stuff into an S3 bucket, and your transfer costs are going to skyrocket." If there's a way to make people not do that, it would be amazing. I've got to be honest with you. >> We've done a lot. That's the kind of input on where we see it going. Okay? >> And so I can't tell you we're going to have everything today. We're starting with visibility and management control for FinOps.
>> Mhm. >> And FinOps can then provide input into the templates, which determine what each role can do. >> With tagging and alerting and so on. >> And we have the ability to get there over time. I'll have to hold my excitement. >> All right. >> Quick question: what if you're in an organization that mandates not doing anything in a portal, where everything must be infrastructure as code? Where does this play? >> Then you would be using the API layer; you don't need to use Cloud Manager. Cloud Manager is wonderful for teams that are stretched really thin, smaller teams. Some of our smaller and medium-sized customers don't have massive Google teams, Amazon teams, on-prem teams, Azure teams, and so on, or they want self-service capability. Most of our customers will use Cloud Manager for some things but also use infrastructure as code and the like for other things, and that's why the API layer is so important: they can coexist. >> And I'd like to add, we have a Terraform provider too; a lot of our customers use Terraform. >> And one other thing: when Chuck went through and created this, at the very last step you could have it spit out the REST API calls that you would want to use. There's a JSON file that you can take and templatize, so that you run through it once and then use your JSON file after that to create the common templates and keep everything consistent.
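A sketch of that last step: capture the generated payload once, then templatize it so every subsequent environment stays consistent. The field names in the JSON are placeholders, not the actual Cloud Manager schema.

```python
import copy
import json

# Pretend this JSON was captured from the "show API request" step
# of a CVO deployment (field names illustrative).
base_payload = json.loads("""{
  "name": "PLACEHOLDER",
  "region": "eastus",
  "vnetId": "vnet-dev",
  "licenseType": "standard",
  "diskType": "standard_hdd"
}""")

def from_template(name: str, region: str) -> dict:
    payload = copy.deepcopy(base_payload)
    payload["name"] = name
    payload["region"] = region
    return payload

# One captured payload, many consistent deployments.
for env in ["cvo-dev", "cvo-test"]:
    print(json.dumps(from_template(env, "westus2")))
```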
>> I'm going to be the security and auditing kid and say the things that need to be said. Where are the privileges managed? Are you delivering least-privilege credentials to this through your Azure account, and then from here there are different granularities of control for role-based access? Where is auditing happening for the changes that are occurring? Is it happening here? Is it also being pushed back to something like CloudTrail? Because we may have cloud-based infrastructure teams and security teams that will never want to interact with this interface, right? They're basically firing CloudTrail into a Rapid7 or whatever else they're using, and that's their source of truth. So where is stuff like security, auditing, and privileged use being controlled, and logged? >> Does it differ between the hyperscaler providers you work with? >> No. Pretty much the entire infrastructure is on the hyperscaler, right? Cloud Manager is essentially the hub to send and receive API calls, and as Chuck mentioned, you can use the SaaS version of it, or you can deploy the Cloud Connector, a VM essentially running in your environment, providing that access. So it's completely within your hyperscaler environment. We don't override anything the hyperscalers are doing; we're using their APIs. >> Now, I know Phoebe wants to get to some of these cool use cases. You're going to see some of them. Do you have both of them miked up and turned on? >> We do. >> Excellent. Good. Phoebe, they're just going to use the environment you're seeing, and they're going to build on this as we go. So you're seeing a model built as we go. >> Well, you saw the basic building blocks, like you said. There's a lot to unpack here, but what we generally see is that if Vishnu and Greg or I go out and talk to a customer or a partner, you're really talking about the challenges that happen after day two, the ongoing operational scenarios where they're asking: how do we solve these problems? So the way we'll set this up is: I will provide a scenario. It's generally based on real use cases, but without using real names, and I will pass it to our cloud architects to solve as they would in an engagement with an actual user. And if they solve it properly, then they don't get paged tomorrow morning and everything works well. So, the first scenario I was provided: we have a use case from a customer in Sydney. Dave says, "I am at a cybersecurity company, and we've been using Google Cloud for some time. We have legacy workloads on premises, including ones that store sample malware on NetApp arrays. We've been asked to refresh that array, and we've decided it's a good time to also refresh our backup environment." Sounds sensible. "Anyway, we want to see what's possible in the cloud; maybe use that backup data for something more than just backup. And we of course need to make sure that our backup policies are maintained." So it sounds like they want to back up their on-premises array to Google Cloud and then make it available to use in that active cloud. So Vishnu, why don't you start us off on this canvas? >> Absolutely. And before we get to the fun stuff, I would like to reiterate what Chuck mentioned earlier: we at NetApp strive to make data management as simple as possible for our customers. Our entire cloud portfolio, along with the Cloud Backup service, follows that train of thought. I'll show you how easy it is to enable backup and protect your data in just a few clicks. In this particular scenario that Phoebe went through, the customer has an on-premises cluster that needs to be backed up. So let me go add another working environment. Select on-premises ONTAP, provide the necessary networking details, and as long as we have the plumbing between our on-premises data center and the hyperscaler of choice, in this case Google Cloud, we can find our on-premises cluster, as you can see over here. Now, going to backup and restore: we have our navigation menu with all the different auxiliary services and products that Cloud Manager offers. I'll go into backup and restore. As you can see, on this particular environment we haven't enabled backup, so I would click on activate backup and select the particular environment. All my environments that are on the canvas have already been pulled up, and with a few clicks I say: all right, this particular AFF cluster on-prem, enable backup over there. >> Okay, but this is cluster-to-cluster backup at this point, correct? >> It's cluster-to-GCS backup. So directly, without any middleware or middleman in between, Cloud Manager is going to orchestrate the backup, which is snapshot-based (I believe you asked that question earlier), into GCS. >> Okay. So you're landing on an object storage platform. >> Exactly. >> Sorry, how is that handling the bucket split-up? Is that all landing in a single bucket? >> It'll be landing in a single bucket of our choice. And we'll definitely get into the restore part of it: if you do not need this data for a year, you can go into Coldline or even deep archive. So based on what the restore cadence is going to be, to make it a more cost-efficient solution, you can select the tier that you're moving the data to. >> Okay.
Are you guys defining any kind of top-end maximums in terms of how large a volume can be, or how many volumes can go into a particular job? Because object storage itself has some very definite top-end limits as to how many objects you can land in a bucket before your performance goes in the tank. >> That's a very good question. The main key factor there is that it's block-based, as Chuck mentioned earlier. The initial copy is your first snapshot, and then everything else is incremental forever. So the objects are not going to be direct files; they're going to be the changed blocks. >> Correct, but I mean, by the laws of physics and how object storage works, you can only have, you know, 5 million objects in a given bucket before things go horribly awry and the databases that power it underneath start to fall off. >> We haven't seen anyone hit that max threshold, but I can definitely get back to you on whether that is something we can change at the bucket level, or whether there's functionality within the APIs at least to get to that. I can definitely get back to you on that. >> Right. >> Is there a reason that you're using the Cloud Backup service versus just SnapMirroring it? >> SnapMirroring would move the entire volume, the data and the snapshots going with it. Here we are only taking the snapshots as a backup, which provides the form we can later restore from. That would be the difference. >> Yeah. >> The other part is, remember, Becky, we're going from ONTAP, you know, NFS or SMB or whatever he's going to use, to object, so you want to go through that conversion at the same time. It's not just SnapMirroring an ONTAP volume A to an ONTAP volume B, because that's likely to be, as he noted, larger and more expensive. It's an ONTAP volume to an object bucket, and it's not mirroring everything; only the incremental blocks that changed are sent. >> And it's more of a DR versus backup distinction: if you need an active-passive kind of environment, then you would just SnapMirror, which we'll also be going through later on. All right. >> So over here you can select the hyperscaler of choice, or even StorageGRID on-prem. I'll go with Google Cloud. >> Since Google's in the room, that's a good choice. >> Yeah, and I work as a cloud solution architect on the Google Cloud side of things too, so it's my preferred cloud. [laughter] I would provide all the necessary info to reach my Google Cloud Storage, and the region that I'm looking to back this data up to; I'm going with us-west3. And this is where we set the policy. I can create a new policy with different cadences, like hourly, daily, weekly, monthly, yearly, and provide the number of snapshots I would like to retain. Or, if Cloud Manager has already been used to do backups earlier, we have some existing policies too. And now we get to the screen asking which volumes I would like to back up. I'll go with all my volumes; we need to be in compliance. And there's a neat little feature over here: if you check this box, any new volumes that are added to this working environment will be backed up. That's especially for our compliance-driven customers: you don't want your script to fail, or a volume not to be backed up, and later get hit with compliance issues. Especially our healthcare customers with HIPAA compliance; they need to make sure the data is backed up. So that's the reason for it. >> Set and forget.
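The policy screen just described boils down to cadences with per-cadence retention counts. A toy pruner, with example numbers rather than product defaults:

```python
from datetime import datetime, timedelta

# A policy shaped like the demo's: cadences with per-cadence
# retention counts. The numbers are examples, not defaults.
POLICY = {"daily": 30, "weekly": 12, "monthly": 12}

def prune(snapshots, policy=POLICY):
    """Keep the newest N snapshots per cadence.
    `snapshots` is a list of (taken_at, cadence) tuples."""
    keep = []
    for cadence, count in policy.items():
        matching = sorted((s for s in snapshots if s[1] == cadence), reverse=True)
        keep.extend(matching[:count])
    return sorted(keep)

now = datetime(2022, 6, 1)
snaps = [(now - timedelta(days=d), "daily") for d in range(60)]
assert len(prune(snaps)) == 30  # only the newest 30 dailies survive
```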
As we talk about backups and general data protection and compliance, we open up the story about encryption. What happens where, let's say, we have a customized KMS in one cloud, and we use that to sign and encrypt? How can I possibly restore that into a secondary cloud which does not have the same target KMS? So where are encryption and decryption happening, both at rest and in transit? Sorry, we could spend four hours digging into this, but... >> So in this scenario we are moving to a GCS bucket, so we would only be restoring either to on-prem or to another ONTAP cluster within Google Cloud. If you're using Cloud KMS, say you want to bring your own keys, you can do that on the Google Cloud bucket, or you can use Google's own managed keys. It would be one or the other. >> And yeah, so when we think about where the encryption occurs and when it happens: how much is encrypted in transit as well as at rest? >> At rest, it's on the storage bucket. And our proprietary SnapMirror, or SnapMirror Cloud, encrypts the data in transit too. >> So is this only supported with the hyperscalers? Is it S3-native commands you're using to do this? And is it just the hyperscaler S3, or are we supporting S3-compatible? If I have something that does object storage on-prem or somewhere else, can I target that as a repository for this kind of capability? >> No, currently it's only S3, GCS, Azure Blob, and then StorageGRID, which is NetApp's own S3. >> S3 is, as you know, a standard that's not a standard, and so if we're going to provide a backup service, we have to validate and certify the targets. >> Okay. >> Azure Blob, Google Cloud Storage, Amazon S3, and our own StorageGRID implementations are the currently supported ones. >> Okay. And then, are you leveraging the native immutable object lock capabilities for those services? >> There is, that's recently added. I'm not getting into it, but there is a capability to do that. >> We just made an announcement last week about invoking the object lock capability in a hyperscaler. >> Okay. Not to go all the way down into the weeds, but are you using bucket policies, or are you actually doing it on the writes, so each block is individually addressed with object lock? >> It'll be each block, individually, at write time.
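For reference, per-object locking on S3, one of the validated targets, looks like this with boto3. The bucket and key names are placeholders, and the bucket must have been created with object lock enabled; this is a general S3 illustration, not the product's internal code path.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Per-object WORM retention with S3 Object Lock: no deletes or
# overwrites until the retain-until date passes.
retain_until = datetime.now(timezone.utc) + timedelta(days=30)
s3.put_object(
    Bucket="cbs-backup-bucket",                 # placeholder name
    Key="backup/volume1/block-000001",          # placeholder key
    Body=b"...backup block...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=retain_until,
)
```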
>> Now, any true backup solution is only as good as its restore capability. And I believe you asked a question earlier: where can you restore it? You can restore it back to the same on-premises cluster if that's the requirement. But in the scenario for our customer that Phoebe just went through, they have malware data that needs to be backed up, and anytime a new product release comes into the picture, they need to run a few regression tests on this malware data. This is a very cost-efficient solution where you have the backup as and when required: build a cluster in the cloud, restore the data, run your tests, and you can even automate the entire thing, then shut down the system. So it's very cost-efficient. Let me show you how easy it is to restore. I'll create a Cloud Volumes ONTAP, or CVO, cluster in GCP to showcase this. I'll again go into adding an environment, select Google Cloud Platform, and in this case I'm going with Cloud Volumes ONTAP single node. And like Chuck went through earlier, I'll provide all the necessary details. I'll provide a friendly name for my cluster, my admin user password. >> No pressure. [laughter] >> For some reason, I'm not able to... is caps lock on? >> I don't think so. >> And while you're working on that piece: obviously we'll dig into the multicloud use cases, but I think, Chuck, and everybody, the reality for most of your customers is that they generally back up and restore to a single target. This beautiful panacea of restore anywhere is, what, 0.4% of your customer base, I presume? And given you're a public company, I want to be careful how I ask a question that makes you commit to a number like that. But in the past, where I may have worked for a public company that has three letters in it, the same thing occurred: we liked the intellectual idea of "I could run it anywhere," but in reality we don't. >> I want to say that one of the use cases we mentioned earlier is that you're not necessarily restoring the entire backup into a different cloud. You generally want a file, or an area of it, for audit. And so in that case, you'd restore it and then use that data, replicate that data, and have access to that data only. And that could be anywhere. As we said, you may want that in another cloud, but you're not necessarily going to do direct restores all over the place, because that doesn't really suit most people's workflows, I would say, and definitely not the ones that we see. >> Yeah. We learned a couple of things over the past couple of years as we brought the Cloud Backup service out. We initially started with a cloud-only service, and then it became a hybrid service. What we learned is, number one (because one of the questions we get all the time is: this is ONTAP-to-ONTAP, but can it do everything? And the answer is no, it's ONTAP-to-object; we're bringing value to that ONTAP-based data, block-level incremental forever), most of our customers said that's okay: we have two to four backup segments on average anyway. We don't have the one button that backs up a thousand different environments. We don't spend a lot of money backing up a lot of our Windows file servers; we use VSS for that. We use a high-powered one over here and a cloud-based one over there. So, number one, having a segmented fit. Number two is that they helped us really understand their views on volume restore versus file-level restore versus DR. The "I want to restore anywhere" capability is not normally "I screwed up a file, let me restore it"; it's "something happened to the primary environment, and I've got to restore it." And most of our customers are going to have a DR strategy. >> Yeah. >> Excuse me for that. >> Chuck, one question I would have: incremental forever is great, but from your customer base, what are you typically seeing for retention levels? And how are you dealing with the person that's doing GFS or something like that, and then has to recover from a month ago? Do you now have to roll back through every single one of those restore points to actually get what they need? >> Go ahead, Greg. >> I was going to say: we are doing incrementals forever, but it's actually viewed as one backup volume, because of the way we do things with ONTAP, where everything is pointer-based. We're doing the same thing in the cloud as well. So you're not having to restore the first backup and then restore twenty days' worth of incrementals on top of it. It's a one-restore backup, because it's using those pointers, just like we use on the ONTAP side, to give you the ability to restore from whatever snapshot you want. >> Yeah. So you can go back to wherever you need for your RTO and RPO objectives.
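Greg's pointer-based point is worth a sketch: if every snapshot is a complete block map that shares unchanged blocks by reference, a restore reads one map instead of replaying a baseline plus a chain of incrementals. A toy model of the idea, not the actual on-disk format:

```python
class PointerSnapshots:
    """Toy model of pointer-based incremental-forever backups."""

    def __init__(self):
        self.blocks = {}       # block_id -> bytes (each block stored once)
        self.snapshots = []    # each snapshot: {logical_offset: block_id}

    def backup(self, changed):
        # Start from the previous map; only changed blocks are new.
        snap = dict(self.snapshots[-1]) if self.snapshots else {}
        for offset, data in changed.items():
            block_id = f"b{len(self.blocks)}"
            self.blocks[block_id] = data
            snap[offset] = block_id
        self.snapshots.append(snap)

    def restore(self, index):
        # One pass over one map: no chain of incrementals to replay.
        snap = self.snapshots[index]
        return {off: self.blocks[bid] for off, bid in snap.items()}

vol = PointerSnapshots()
vol.backup({0: b"AAAA", 1: b"BBBB"})   # baseline
vol.backup({1: b"bbbb"})               # incremental: one changed block
assert vol.restore(0)[1] == b"BBBB"    # any point in time, one read
assert vol.restore(1) == {0: b"AAAA", 1: b"bbbb"}
```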
>> Yeah, but not incrementals forever. >> Okay. >> Yeah, I've had it. I don't want to call out a particular product, but let's say it rhymes with Tivoli Storage Manager, something I may have used in the past that had incremental forever. And the idea that I could take my distributed environment and back it up to the mainframe using TSM was fantastic. Again, many years ago; I'm an older fella, I've been around this game for a while. But every time I would go to restore, they'd be like, "Oh yeah, we seem to have lost that particular object." So effectively it was like the old Windows 42-disc set. That was the problem we would bump into: incremental forever sounds great, but... >> That's why we have some of the search and restore capabilities. I don't know if we're going to go into it, or if we want to go to another scenario. >> Quickly, just to show you: we can do a global search. So if you lost that file and you don't know where it is, because you can back up different environments to different clouds, you can do a global search for Bob's file, for example. >> Yeah, there is a catalog as well of the backup. >> Exactly. We can quickly show you: there is restore at the volume level or restore at the file level, and you get a catalog where you can pick and choose which file to restore. Now we can go on to the next one. >> And to carry on Chris's earlier conversation about FinOps: how do we manage egress charges? I'm presuming you're doing a certain amount of compression before restore to limit egress costs and usage. >> At this point in time, there's no forced implementation of that. You get a benefit because it's block-level incremental forever: you're not restoring whole files, you're only restoring the blocks that have changed, as far back as you want to go. So you're seeing a fractional egress compared to "if I back this whole thing up, I have to bring it all back," depending on your use case. But we're not yet at the point of saying, "hey, you can only spend X amount of money in egress." It is a reality of our world, and so one of our challenges is to find creative ways to let our customers do what they need to do while facing the smallest possible egress charges. We do that in our edge cache instances; we do that with block-level incremental forever. But we don't yet have an implementation that puts a governor on how much you have coming out of here. >> Makes sense. >> And your efficiencies carry over: compression, compaction, dedup, all of that still applies.
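A back-of-envelope on the fractional-egress point. All the numbers here are assumptions for illustration, including the per-GB rate:

```python
# Restoring only changed blocks versus pulling the whole image back.
volume_gb   = 10_000   # assumed full backup image size
changed_pct = 0.02     # assumed fraction of blocks changed since the restore point
egress_rate = 0.09     # assumed $/GB internet-egress rate

full_restore  = volume_gb * egress_rate
block_restore = volume_gb * changed_pct * egress_rate
print(f"full image: ${full_restore:,.0f}  vs  changed blocks: ${block_restore:,.0f}")
# full image: $900  vs  changed blocks: $18
```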
>> All right. Well, we're going to wrap this scenario, and we're going to take a quick break. I think we probably have a lot of discussions to have in that break as well, but we'll see you back here in 15 minutes. [music] One cloud, two cloud, red cloud, blue cloud, orange cloud, green cloud, old cloud, new cloud. This one has the best security. This one has the most maturity. Some clouds let you run code on the go. Some clouds are fast or carbon zero. Some people use two or three or four. Maybe you are using more. If you need cloud integrations or to streamline operations, NetApp's here to save your bacon. From data storage to data sense, NetApp is multicloud intelligence. [music] To make your multicloud conducive, don't miss Cloud Field Day exclusive. [music] Under pressure. The cloud used to be fun. It was the terrain of hot startups and cutting-edge enterprises. Being a cloud professional was cool, and it still is. But today, you're under a lot of pressure: from finance, from developers, and from security threats, both internal and external. And things are more complex than they ever have been. There's a bajillion challenges every day. And you, well, you probably have more than a few things keeping you up at night. Hello everyone, Nick Howell here. I just wanted to jump in and give you my five things that you can do right now, today, to depressurize your cloud. Oh, and before I forget, we've got a whole page of products, services, and solutions for you to check out, with everything we're about to discuss, over at netapp.com/cloudervices. So be sure to head over and check that out when you're done here. NetApp delivers cloud data management and storage that puts you in charge and helps you de-stress, with industry-leading services built for the clouds of your choice. While the cloud offers all kinds of different application services, everything you do there is dependent on the underlying infrastructure resources. You need to understand what's going on so you can actually manage it. You need visibility across all your environments. You don't want to spend the rest of your life hovering over dashboards and manually adjusting resources like some crazed crypto trader. We have the tools, powered by AI and machine learning, that let you automate provisioning to meet the performance and scalability needs of your applications without lifting a finger. Now, if you've used the public cloud, chances are you've also experienced sticker shock. Maybe you added some storage capacity for a few terabytes of old cat videos on some high-performance, expensive storage and then completely forgot about them. I won't name any names. Look, you don't want to be that person who gets snacks and beer cut from the break room because you blew the budget, right? Trust me, you don't. The cloud operates much differently than a data center. Overprovisioning is just unnecessary, and it ends up running up the bill. You can use automation to continuously align your workload requirements to the most cost-effective resources and then tier off any cold data. Your apps get what they need, your cloud bill becomes manageable, and finance stops beating down your door. Also, just because it's in the cloud doesn't mean that it's automagically safe and protected. You have to protect everything in the cloud: data, infrastructure, and resources. But something as simple as a cloud misconfiguration can open the door to hackers in hoodies with ransomware that will seriously ruin your day. You should do everything you can to make sure you have solid security practices in place for all cloud processes, and enterprise-grade protection that automates threat monitoring and detection. It's like an extra set of eyes watching over everything 24/7. You can deflect and defeat attacks and mitigate the impact before the world, or your boss, even knows anything happened in the first place. Developers don't want to have to worry about boring stuff like storage and databases, or the endless tickets to get it all set up.
They simply want a highly reliable environment to work in. The less time DevOps teams spend on infrastructure, the more time they get to create cool stuff. But container-based application development gets tricky real fast when you're trying to deploy at scale in constantly shifting multicloud environments. Obviously, automated storage provisioning helps, but we take it a step further to make sure all of your applications are ready to run from day one. You get to enjoy the enterprise-grade benefits of powerful data management for stateful cloud applications. I mean, imagine having consistent, pervasive storage automation with powerful backup, data protection, and disaster recovery capabilities built in. Look, the cloud is complicated. I'm not trying to say it isn't, but it doesn't have to be. And here at NetApp, we've got your back. We spent the last decade building strong relationships with Azure, AWS, and Google Cloud, so you can trust that we know how to help you get the best out of whichever cloud provider you choose. Thanks for hanging out with me for a few minutes, and don't forget to head over to netapp.com/cloudervices for cool demos, how-to videos, and much more. Take care. [music]
Now, obviously, automated storage provisioning helps, but we take it a step further to make sure all of your applications are ready to run from day one. You get to enjoy the enterprise-grade benefits of powerful data management for stateful cloud applications. I mean, imagine having consistent, pervasive storage automation with powerful backup, data protection, and disaster recovery capabilities built in. Look, the cloud is complicated. I'm not trying to say it isn't, but it doesn't have to be. And here at NetApp, we've got your back. We spent the last decade building strong relationships with Azure, AWS, and Google Cloud, so you can trust that we know how to help you get the best out of whichever cloud provider you choose. Thanks for hanging out with me for a few minutes, and don't forget to head over to netapp.com/cloudservices for cool demos, how-to videos, and much [music] more. Take care. One cloud, two cloud, red cloud, blue cloud, orange cloud, green cloud, old cloud, new cloud. This one has the best security. This one has the most maturity. Some clouds let you run code on the go. Some clouds are fast or carbon zero. Some people use two or three or four. Maybe you are using more. If you need cloud integrations or to streamline operations, NetApp's here to save your bacon. From data storage to data sense, NetApp is multicloud intelligence. To make your multicloud conducive, don't miss this Cloud Field Day exclusive. So let's shift gears to another part of our business. One of the workloads that we see a lot of businesses moving into the cloud is enterprise applications like Oracle or SAP. I have a note here from a customer. Anboo writes: we have an SAP installation today, but we want more agility and scalability than what it provides across two data centers. We handle hundreds of thousands of sales orders a day, but we think this will just keep growing, and all we can see is an existing SAP installation getting more and more expensive. We know that storage is just one part of the larger architecture, and of course this migration is going to be complex. So we're talking with the cloud provider, Microsoft Azure, and our systems integrator; we're not trying to do this on our own. They recommended using Azure NetApp Files for SAP HANA to meet performance requirements. We like that it checks the boxes, but our biggest worry is that going forward with SAP in the cloud is just going to be really complicated, and we'll end up spending more time managing this environment than we would if it was just in our data centers. So, Greg, as our Azure CSA, what would you say to this customer, and also to Microsoft Azure and to systems integrators, about SAP HANA in the cloud? And what would you say to help them reduce their operational burden? >> Okay, thanks, Phoebe. I think the first thing, just before we jump in to the actual demonstration, is to talk about the use case a little bit. For the use case we're describing here, I'm going to pull up an architecture diagram just so everybody's comfortable with an SAP workload. Typically, when you migrate from on-prem into the cloud, you're going to be doing that as an IaaS-type offering.
So you'll be spinning up VMs just like you would if you were on-prem, but those VMs are going to be spun up in the cloud, and you would do that in a highly available environment. Whether it's going to be an Oracle database or an SAP database, you're going to want some type of HA environment. So what typically would happen is you would have one VM that's available in one availability set or availability zone (you can choose which way you want to go), and then you would have another VM that is in the same availability set or in a different availability zone. So you have the choice of having those. The thing with the replication for SAP, as well as Oracle, is that the applications themselves do the replication. So keep that in mind: it's not the storage doing the replication. The applications are doing the replication from one to the other. As you can see on the diagram, there's an orange line that says HANA System Replication. Between those two VMs, the data is replicating, and it's doing it with synchronous replication so that everything stays in lockstep within the environment. Okay. And then when you go to the DR side of the use case, you'll see down here that if we go into a second region for the DR side, that's actually doing replication as well, down to the DR site, but it's async replication, because we don't want to have to wait for the replication to go from, you know, West US to East US and wait for those checkpoints coming back. So that's going to be an asynchronous replication for your DR site. That's the architecture that's typically used for an SAP environment, just so that everybody's on the same page. >> A question on the synchronous replication. This is in-zone; can you guarantee sub-10 milliseconds in order to maintain synchronicity there? I'm never sure what's hiding behind it. Does the provider ensure that? >> Yeah. Within Azure, it's pretty much a 2-millisecond latency that you have inside availability zones in the same region. >> Okay, cool. >> So as long as you're in the same region, those availability zones are typically in the two-to-three-millisecond range. >> And does that fall under their (sorry, I'm asking an Azure question now) does that fall under an SLA, like, we can slip once in a while and, oopsy doodle, we'll give you a discount for 5 seconds on your bill because we went outside SLA? Or is this architecture they always ensure? >> I don't know that it's an SLA for the range between the two zones, but it is something that they target. >> Right on. >> Yep. But I'm not sure that they have an SLA on that. We'll have to get back to you on that one. >> We need Microsoft to present to us. Wink. >> Come on in. Tech Field Day is fantastic. >> Yep. So, is everybody comfortable with the architecture there, and does anybody have any other questions on it so far? I wanted to make sure everybody understands the architecture. So, if we jump over now into cloud manager, you'll notice that we have an icon here for Azure NetApp Files. I'm going to click on that and open it up just to give you an overview of how we would set this up. From a setup perspective, you would have your VMs configured just like we had talked about earlier.
And then to connect the storage up to it, you would create a set of volumes that mount up to those particular VMs to have the SAP database available. The database is going to be a collection of data and then logs, like you typically have in any database. So what you'll see here on the canvas is that for the initial primary node I've got a volume configured. It's called HANA data 01, and it's in East US 2. It's an NFSv4.1 volume; NFSv4.1 is required if you're going with the SAP environment. And we've got that provisioned in the ultra tier. The ultra tier is the highest-level performance tier, and that's what we use to satisfy the KPIs that SAP requires when you do your benchmark testing. So we adhere to that. I also have another volume down here. Let me slide up a little bit. I've got another volume over here, which is the HSR replication in the same region. That's the other volume that's going to be replicated to within the same region, and you'll see it has all the same parameters. One thing I did want to show before we move on: if you take a look at the one that was provisioned earlier, you'll see that it's at 3.1 tebibytes, and the one below it is provisioned at 3.2. >> Oops. You went all the way down to the taskbar and opened another Google tab. >> My bad. >> It's all good. We can catch up on today's news. >> Flip back across. >> Oh, just close that Google tab and move the cursor back up. >> Might want to use the mouse. >> It's going on right there. Now move it up. Get off of that. >> Let's just close this guy. How's that? There we go. So, once again, we have 3.1 provisioned on the primary side and 3.2 provisioned on the secondary side. Somebody made a mistake when they were configuring this volume. The first thing that I want to show you is that we can go into the actual volume itself, and I can do an edit on the volume. And what you'll notice is that somebody did 3.2, but they didn't do it in tebibytes, they did it in terabytes. That's the reason the volume is a little bit off. So I can simply go in and change that to the right value, and now you'll see, when we come out, the volumes are provisioned correctly at the right size. So you can easily resize those volumes as needed. And then we'll have the replication going at a synchronous level between these two volumes. Now, what I want to do next is... >> Great. So these are, just so that I can be clear, these are volumes that are provisioned in Azure, on an Azure service, that are being, I guess, populated into this canvas so that you can edit them without having to go into Azure to do it. So it's making those calls for you. >> Yes, correct. It's making the calls for you. As we go through and I add the next volume, which is going to be a new volume here, I'll walk through that in the workflow, but then it's going to be issuing the REST API calls back to Azure to actually create the volume in the Azure environment. >> Greg, you brought up an interesting thing, because we're the nerds. The storage shouldn't care about byte-level alignments up in the cloud, because we're basically obscuring and hiding that. But it's funny: we as humans always think in binary. We think of, like, you know, a gigabyte as 1024, not a thousand. Is there a performance difference in aligning to true byte level versus, you know, the easy thousands when we get to that layer?
Just because, you know, my head immediately said, "Oh, this is wrong." But does it actually matter when we're looking at ONTAP and how it does data transfer? >> No. So ONTAP is going to do everything at the 4K block level. That's how ONTAP operates. So everything is done at the 4K block level, even when we're doing it in Azure, because that is utilizing ONTAP underneath the covers (it's got a PaaS offering over top of it to simplify it). It will still be doing everything at that 4K block level. So everything will be done at the block level; you're not going to have things transitioning over two blocks. >> Okay. >> Yep. So what I wanted to do next is just show you how easily I can create the DR volume. To create a volume for the DR side, I'm going to need to put it in a different region, remember? So I'm not going to be in the same region, which means I'm going to want to create a new account. The account is based on the region that you're in. We have an account for East US 2, which we were just looking at, and the partner region for East US 2 is South Central. So I'm going to create an account for South Central, and then we'll create the volume in South Central as well for the DR region. So I'm going to go in here, select "create a new account," and I'll create an account for SAP HANA South Central. I'll select my subscription; the region is going to be South Central. >> While he does that, to Gina's point earlier, the overall subscription credentials are in there. It's his user ID and password that allow him to create this new account in the new region. >> And, not knowing what I don't know about Azure, is an account in Azure synonymous with an account in AWS? >> No. >> Okay. >> So, inside of Azure... >> That would be too easy. >> Yeah. [laughter] Inside of Azure, they typically do things as accounts from the storage layer. So, like, if you want to create a blob object, or you want to create a file share inside of a blob object, they use what's called an account. You have a storage account, and we have an Azure NetApp Files account that kind of mirrors that behavior, so that if you're familiar with all that in Azure, creating this account is very similar to doing it with a regular Azure storage account. >> Gotcha. >> We've tried to mirror that to, you know, make the learning curve a little bit easier for the engineers who are working there. The next thing that I'm going to do is create a capacity pool, and I'm just going to give this one a name. You'll notice I selected the standard tier here. What I'm doing there is selecting the performance level that we're going to be using inside of Azure. With Azure NetApp Files, you have three performance tiers: standard, premium, and ultra. The ultra tier is the one that meets the KPIs we talked about with that 3.2-tebibyte volume. What you'll notice I'm doing here, though (let me just get this going here a second), is we give it a name of DR there.
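To ground the demo in code: the account-and-pool setup Greg is clicking through can also be scripted. What follows is a minimal sketch assuming the azure-mgmt-netapp Python SDK; the subscription ID, resource group, and resource names are illustrative, not the demo environment's.

```python
# A minimal sketch of the account/pool setup from the demo, assuming the
# azure-mgmt-netapp Python SDK; IDs and names below are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import NetAppAccount, CapacityPool

TIB = 2**40  # one tebibyte in bytes; 3.2 TB (3.2e12 bytes) is ~2.91 TiB,
             # which is exactly the TB-vs-TiB mixup corrected in the demo

client = NetAppManagementClient(DefaultAzureCredential(),
                                subscription_id="<subscription-id>")

# One Azure NetApp Files account per region: here the DR partner region
# (South Central US) for the East US 2 primary, as in the demo.
client.accounts.begin_create_or_update(
    "demo-rg", "saphana-southcentral",
    NetAppAccount(location="southcentralus"),
).result()

# A Standard-tier capacity pool for the DR copy; pool size is in bytes.
client.pools.begin_create_or_update(
    "demo-rg", "saphana-southcentral", "dr-pool",
    CapacityPool(location="southcentralus",
                 service_level="Standard",
                 size=4 * TIB),
).result()
```

The resize fix shown a moment ago is the same idea: a volume's quota is expressed in bytes, so specifying tebibytes versus terabytes is just multiplying by 2**40 versus 10**12.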
Is there a policy within here to remind you that you cannot actually retract volume sizes? Because Azure has a limitation that it's unidirectional: you can scale up, you cannot scale back, on certain performance tiers, I presume. I'm always curious what's hiding behind it. >> It's not going to let you do things that are not applicable. >> Right. >> And that's one example, right? Because, for example, when you're working with CVO, you can scale up and down. You can scale from professional, which is the highest-level volume with built-in backup, down to what's called essentials. But essentials actually has subcategories, and you can only scale down to the first one. So if I want to go from CVO professional to CVO essentials for primary data, I can do that; I couldn't scale it down to secondary, because there are workflow implications. So cloud manager has logic to know what is and isn't allowed. Cloud manager is not going to let you do something that the Azure service, in this case, won't accept. >> Cool. >> Or that NetApp policies wouldn't accept. >> Right on. >> So, you'll notice that I... oops. You'll notice that I created a volume down here that is a DR volume now, and I've got it set for the standard tier, not the ultra tier. This is one way that customers can save money at the DR site: they can replicate the data over to a lower tier, and then, once you have to go to a DR event, you can elevate the tier, moving those volumes up from standard to ultra, and then, after the DR event is over, you can move them back down to standard again. You have the ability to do that via what's called a pool change: you basically move the volumes from one pool to another pool. >> So you can go from standard to premium to ultra, and down, not just up? >> You can come back down; there's just a waiting period of seven days that Azure provides. >> Say it's an actual migration, because they have different resources to... >> You have a waiting period of 7 days to come back down; you can go up any time. >> Coming back down is a 7-day cooldown period. >> When you change the performance tier, though, is there data movement in that? >> No data movement. It's all a logical movement function. The data is not actually moving; you're just changing the QoS threshold around the data. >> Okay. >> Because that service has compute and network resources, not just storage resources, right? >> Yep. >> And so this gives them the time to do that. And this is a great point he's making: I can take SAP HANA, which requires a certain throughput level, and at my DR site I don't want it sitting there for two years paying twice as much. So I pay at the standard level. If I do have to fail over, it's an immediate provision up; Azure will provision the additional resources to give the throughput required for your business. >> Yep. >> It's going to take seven days to scale it down, because they've made allocations for you, and they've got to pull those networking resources back, and they want to do that in an orderly manner. >> And if you think about it, if it's a real DR event, seven days is probably going to be nothing. >> I mean, keep that in mind. You know, most DR events are at least several weeks or months. >> Yeah.
No one's screaming, like, "Come on, we need to get the cost down within a week." We're just happy the business is up. Right. >> Exactly. [laughter] But this is one way that you can help with cost savings. So, you know, from a financial perspective, you can monetize cost savings with some of that as well. >> Yeah. And from a user perspective as well: being able to do that through cloud manager, or the Azure portal itself... during a DR event you want to make that as easy as possible, so automating it. >> Yep. And then you also have the ability, and you'll find this with a lot of SAP workloads, where they don't have everything they can move into Azure initially. They may have some secondary systems that are running on really old code that pull in, you know, contractual information. For example, one of the customers I work with has a large facility. They're doing manufacturing, but they have partners that they work with. Those partners have to sign contracts for various different things for each of their customers, and they store those contracts off in a different tier of storage. For something like that, CVO is a better fit: put that on cloud volumes. And we have a version of that that's optimized so that you can get lower costs, and it pushes most everything into the blob tier. So that allows you to keep those contracts for the seven years that you need, but to be able to import them on a daily basis into the SAP system, so all that data is there to say, hey, we've got this contract, it's been signed, here's the expiration date of it, and so forth. So it doesn't have to be just Azure NetApp Files that you use for your SAP environment; you can do that with CVO as well, and we have a lot of customers who wind up doing that. >> Do you actually tie into the optimization services that are native in the Azure platform? So it would look and say, hey, based on utilization... it has the Azure Advisor that can tell you, you can probably dial back your storage tier in order to save a couple of shekels. >> Yes. We're integrated with Azure Monitor. Because it's a first-party Azure service, we integrate with the Azure monitoring service. With Azure Monitor, we can look through metrics and throughput, and we can also set up alerts. So if you have an environment where things are going up and down, you can have alerts signal to say, hey, we've hit a threshold, we need to expand; or, hey, we've dropped below a threshold, maybe we want to shrink a little bit. You have all those capabilities. They all get bubbled up through Azure Monitor, and Azure Monitor allows you to send out notifications, so you can have pages, texts, or, you know, help desk tickets generated from those events. >> So we do want to help customers in the environments where they're most comfortable, which I think is what Greg's pointing at, because obviously there are different experience levels and different skill sets. If we were to set somebody up who's not familiar with Azure, and we just want to get those volumes up and running, cloud manager makes a lot of sense for them, whereas somebody who's already integrated with a lot of the Azure services would want to continue to use those.
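The tier elevation Greg describes, moving the DR volume from a standard pool to an ultra pool on a DR event, maps to the Azure NetApp Files "pool change" operation. Here is a hedged sketch, again assuming the azure-mgmt-netapp SDK and its pool-change call; the resource IDs and names are illustrative.

```python
# A sketch of the tier move described above: ANF changes a volume's service
# level by moving it to a pool at the target tier (a "pool change").
# Assumes the azure-mgmt-netapp SDK; IDs and names are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import PoolChangeRequest

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

ULTRA_POOL_ID = ("/subscriptions/<subscription-id>/resourceGroups/demo-rg"
                 "/providers/Microsoft.NetApp/netAppAccounts/saphana-southcentral"
                 "/capacityPools/ultra-pool")

# On a DR event, elevate the replicated volume from Standard to Ultra.
# Per the discussion: going up is immediate, coming back down has a
# seven-day cooldown on the Azure side, and no data moves; only the QoS
# threshold around the volume changes.
client.volumes.begin_pool_change(
    "demo-rg", "saphana-southcentral", "dr-pool", "hana-data-dr",
    PoolChangeRequest(new_pool_resource_id=ULTRA_POOL_ID),
).result()
```

The same call could just as well be wired to an Azure Monitor alert action, in line with the automation discussion above.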
I have a question, then. Right, because I think we've gone away from what we were talking about in the first part of the day, which was the workflows. Everything we're discussing here is a lot of very old-school ways to set up an infrastructure base, even though we're setting it up in the cloud, right? So, a question for you, especially with SAP HANA. First of all, I'm imagining that this is definitely an SAP-approved... >> Solution. Because that's a thing, right? Yeah. >> But, beyond what you just talked about: if I'm trying to set up, maybe, infrastructure as code, and I'm trying to run everything, even from on-prem all the way up, is there a way, since you're making use of, and integrated with (maybe that's not the right word), the Microsoft service to do alerts and stuff... if I'm really trying to set my applications up on this workflow basis, separated maybe from the three-tier client-server basis we're all used to architecting for, would this fit? So, here I am going from one place to another place. You're offering up the APIs. If your API is kind of bound up with the Azure API, can I call things specifically to do a specific workflow for this application, or the applications that are sitting on HANA, you know? >> Yes. So in the case of this SAP workload we're talking about, you can completely automate the entire process. You can have the SAP deployment process automated, and then you can have the storage portion of that SAP process automated, and that's going to use standard Azure APIs. We're going to be using standard Azure REST APIs for all of that; you're not going to be using NetApp APIs versus Azure APIs. Also, you can create templates for all of those, whether you want to do it in Terraform or Azure ARM templates or whatnot, so you can templatize the deployment. >> And my other question was, for a day-two thing: if you're monitoring to see what's going on, because you can run with that Azure monitoring, can you also do day two by taking the information from the monitoring and feeding it back into those deployment pipelines? Is that something that could happen? >> Yes. Something you might do would be: if you were experiencing high load, you could have it automate the process of increasing the volume size, like Greg showed, or changing the tier of that capacity pool, and you could automate that. That could even be something that would be put into a CD pipeline. >> More so on a regular basis. What they're showing is a lot of that setup, of course, but then all of the functions that are set up could be automated, because they're reflective of events that are happening, whether they're triggered or... >> And that way I could declare, this is what I want it to look like, and if it ever drifts, based on the monitoring, we could set it back to where we declared it to be. >> Yeah. I think for setting up a declarative environment like that, you need to know what you want it to look like. Cloud manager is not setting up what you want it to look like; you have to tell it what you want it to look like. >> Well, yeah, that's what I'm saying. You define that; that would be the template. And then...
If it's declarative, if you've declared it to be this, and you have a way to know how it's drifting one way or the other as different things happen, you can set it back to that. >> Yeah. And you would do that from the Azure side. >> Yes, you could do that from the Azure side. And then also with the monitoring: it's not just sending out text messages and emails. You can actually have it kick back into an Azure workflow that will actually change things. So, as Phoebe was mentioning, if you hit a performance cap all of a sudden and you need more performance, one of those things could be: hey, my performance hit 100%, trigger this workflow to give me more throughput in the volume. And then you could also have it trigger a little later, when that workload level goes down, to bring it back down again, so you can reduce the throughput levels you're paying for. >> Yep. >> Oh, okay. Thanks. Thank you, Greg. >> Yeah. And I think we used SAP as a really good use case. Like you said, it's SAP-approved and a reference architecture, but there are similar reference architectures for different workloads. We just wanted to show that the same concepts, being able to modify things when you need them, provision them when you need them, and obviously update them when situations change, apply across the board. I think you have a few there, HPC and VDI, etc. >> By the way, to Gina's point, when you ask, is this blessed by SAP: having a scalable enterprise storage platform in the cloud to run SAP HANA is kind of a big deal. There's only one place you can go for that, outside of using a dedicated disk on a hyperscaler VM, and that is NetApp. We've got it in multiple clouds. You can run with confidence in ANF, you can run with confidence in AWS using FSx for ONTAP, and know that you're running a blessed, validated environment. Because, look, if you have issues and you go to SAP, the first thing they're going to say, especially with HANA, an in-memory database, is: let's review your environment. Sorry, you can't run something this big on something this small. With each of these, you have an SAP-validated environment in either cloud, again based upon ONTAP. >> So, Chuck, while we're with you: we do have requests from customers who have invested in commitments with cloud providers and now want to change their minds. Sometimes workloads change over time, because they maybe moved faster than they planned. Maybe there was a global pandemic. Maybe they decided they didn't want a specific commitment; they've gone down a different path. And what happens is that becomes wasted money, usually, because you've paid for something that you're not using, especially when these are longer-term commitments. So, this is a customer's email: firstly, what can we do to get an understanding of the commitments that we have purchased, so that I can get better visibility, to the earlier point about FinOps? And secondly, is there anything you can do to make purchasing and license administration of all of this a lot more flexible? Because that would be really key for us in this day and age. >> Let's peel back on those, because that speaks to something we talked about earlier this morning and last night, which is the rise of FinOps, FinOps being a full-fledged citizen at the table with cloud ops, IT ops, etc. You know, up until now, FinOps has been kind of the police.
They've been the fin-cops, not FinOps. They've said, "Oh, you're going to jail. You spent too much money." But the other thing is, we as technology vendors haven't really given FinOps the kind of real-time visibility and insight that the IT team has. FinOps has had to ask, "What are you using?" Or FinOps has had to read the invoices, or go somewhere else. So one of the things we're trying to do is make sure we give FinOps a valid seat at the table. A lot of questions today about, oh, can I govern this, can I govern that? Look, I've got to admit to you, we've started to embrace the FinOps community over the last year. There's still so much more we can do. Even though we're young in this, I think we're still years ahead of the industry, and I'm going to show you why I say that. We've considered them. We've built them in. So let me go to an example. We've got a capability in cloud manager we call our digital wallet. With the digital wallet, we're giving some capabilities to the FinOps team, or to the IT and cloud ops team to work with the FinOps team. Step one is visibility. When I pull out my wallet (okay, I use a money clip, not a wallet), what do I got? Visibility. I see what I have. I know what I have. That's a big deal for a lot of enterprises, especially if you're working across three clouds and eight data centers. So we want to make sure, for what we control in NetApp: your Cloud Volumes ONTAP licenses; your data services licenses, like cloud backup, cloud data sense, cloud tiering, whatever you're using; your Keystone subscriptions. We haven't even talked about that yet today, but moving from on-prem to the cloud, a lot of customers say, "I've got X number of petabytes. I'm going to need some more, but my target is to get these workloads up in the cloud over the next 5 years. I don't want to buy a lot more capex. I just want someone to provision some storage for me to use on-prem." And that's our Keystone offering, that and a lot more. So we've brought all of these kinds of things, including the on-prem ONTAP licenses you have acquired, into one dashboard here, which we call the digital wallet. So I have visibility into my storage service entitlements, either the ONTAP footprints or the services on them. That was step one. To Phoebe's point, at least that lets me know what I have. It also lets me know when they expire, when I need to renew them, etc. But how about if I want some flexibility? Let me give you a node-based example: node-based CVO, Cloud Volumes ONTAP, licenses. We've got a lot of customers that have node-based, which means I bought an annual subscription for this. What is a not-unrealistic scenario? I'm migrating a workload to the cloud. I think I'm going to need about six different CVO instances, so I'll buy a one-year subscription to them; I think this process is going to take me a year. What if you get done sooner? That's a good thing, right? You got the project, the workload, migrated sooner. But if you got that one-year project done in seven months, that means you have six licenses with five months left. What am I going to do with them? So, what I can do here is go and say, what are my unassigned CVO licenses? I have some unassigned licenses. One of them, by the way, expired, but I keep track of that until I acknowledge it and remove it. I've got one here that, maybe I say, okay, it expires in 2024, and I don't think I'm going to use it. Why don't I exchange that? I paid this money for a CVO license, and there's X amount of time left.
Why don't I exchange that for, maybe, the cloud backup service that we spent a lot of time talking about earlier, to protect my on-prem ONTAP system, so I don't have to pay for a third party? I've already spent the money. I'm not going to do it here, because it goes through and changes actual license structures, but that's as simple as it is, provided that I have the role-based authority and access. Not everyone does. >> Is this visible... because I'm not a FinOps person, but I would imagine that they either have their own particular type of application that they are in all day long doing their FinOps thing, or they would rather get an email with a spreadsheet attached. So is this consumable in another way than this GUI? >> I can export this information. And you bring up a good point, Gina. Just two weeks ago we did a webinar with Harvard Business Review talking about the rise of FinOps, and we actually did a white paper (it's available through NetApp or Harvard Business Review) that talks about the rise of FinOps and where people are: 42% are just now implementing it, and 37% implemented it in the last year. My pure, simple country math says 79% of people are pretty darn young in FinOps, and they're just starting out. So, to your point, a lot of times the FinOps team is doing just that: I've got a spreadsheet. >> Yeah. >> Or, "I go look here." It's kind of like me: if somebody says to me, if you got hit by a truck, are your kids taken care of? So I've got to go check this retirement account, and this one, and add it all up. What we're trying to do is make those FinOps people part of the equation. By doing this, at least from a NetApp perspective, this is all real time. If it provisions something, it shows up right now. If something expires, it shows up right now. If I move it from this bucket to that bucket, it shows up right now. I won't say we're trying to replace what FinOps is doing. We're trying to give them capabilities, insight, and governance that, previous to this, would have been an after-the-fact thing, because they're looking at spreadsheets. >> I think that's awesome. But I know, for me, I have too much to look at and too many places to go now. So if you want them to adopt this process and have this information, it might be good to put it into the environment they already use, which is probably not yours. >> It can be exported, like I said, to go into what they do, or you can give them access. The other thing that we're finding is... >> Is there an API, perhaps? >> Absolutely. Yep. >> That's how I would get at some of this data. Yeah, because there are application services, like Cloudability, that pull data from AWS and Azure, and a lot of large enterprises use something like that because, I hate to say this, you know, "single pane of glass," but it gives that finance department the ability to say, here's my realm of responsibility, and go into each of the different clouds without having to call up an architect or an engineer to say, can you give me this data? >> You hit the nail on the head; that's why we're building this in here. >> And spreadsheets, I'm sorry, but spreadsheets are the lifeline of every company, but they get really big and they get cumbersome. >> And they get out of date, and people get the wrong versions. >> See, by including FinOps in this platform... and again, I was very honest with you, we're young in this FinOps journey.
We just started bringing them in over the last year. But by including them, we're making them a citizen, and we're providing the capability: if you're using one of those services you mentioned, you can start connecting to this and pulling that information. If you look at other infrastructure and storage providers in the industry, they don't have this yet, and even more so the exchange capability. As I showed you, I can just exchange from one to another. I can be on here and say, I have an on-premises environment, this is hybrid multicloud, and I have a certain amount of storage in this 400-terabyte environment, committed to 900. I've committed 900 terabytes to NetApp; I'm going to buy that over X period of time, by May 31st, 2023. What if I want to go in and provision some more, say 100 terabytes more? I can submit that provisioning. It's now been sent to NetApp, and I can check the effect afterward. We then have to go approve it and submit it, because we're managing the on-prem resource. But this gives me the ability to do that immediately, and I can even shift from my on-prem commitments to cloud storage commitments. So this is, again, trying to make FinOps part of the table, trying to give them visibility, trying to give them some real-world operational capability on a role-based access basis, and get everybody working together to figure out where we are and where we're going. >> So, the licenses in this situation, are they convertible in every direction, or is it like a... >> Okay, again, you'll hear me say this 100 times: we're young in this, but we're still two years ahead of the industry today. There are certain things that you can and can't either exchange or float. We use these different terms. A float of a license says, I've got a CVO license, for example, in Azure. >> Mhm. >> I want to move what's left of that license over to AWS. That's floating that license over. >> Okay. >> Then there's exchange, to a whole different service: I want to exchange Keystone entitlements to CVO, or I want to exchange CVO to the data services I showed you before. Both of those concepts exist right now. Because we have to manage this, we're really taking great pains to do it slowly: walk before we run, before we fly. So right now there are certain things you can do, like Keystone to CVO, like floating those CVOs, like converting CVO to the data services. But, for example, as of today, I can't say: I've committed to you, I gave you a PO for $3 million of cloud backup, and I decided I want to use some of that money for Keystone. Not today. >> Okay. >> But we're building the infrastructure, and I encourage you to keep an eye on what we're going to be doing as we go through the rest of this year and the next year, to see what's going to be possible now that we've built it. >> And is that a consistent exchange rate, or is that effectively per customer? >> It's a defined exchange rate for what the customer committed to. So it's X number of terabytes of this, or X number of terabytes of that. >> Okay. >> Or, in some cases, it's: you have this amount financially left on that license; that's what you have to apply. It's all set up and defined. I can envision a world, over time, where we get to: you've got a certain amount of tokens, and you can apply those tokens at whatever your rate is to whatever you want. But we've got to make sure we do this right from the ground up.
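On the delegates' export question: cloud manager does expose a REST API, but the endpoint, authentication, and field names in the sketch below are hypothetical placeholders. It only illustrates the "dump the digital wallet to a CSV for the FinOps team" idea raised above, not the actual API.

```python
# Purely illustrative: the URL, auth header, and license fields here are
# hypothetical placeholders, not the real cloud manager API surface.
import csv
import requests

API = "https://cloudmanager.example.com/api/licenses"  # placeholder URL

resp = requests.get(API, headers={"Authorization": "Bearer <token>"},
                    timeout=30)
resp.raise_for_status()

# Assumed response shape: a JSON list of license objects.
with open("licenses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "type", "capacity_tib",
                                           "assigned", "expires"])
    writer.writeheader()
    for lic in resp.json():
        writer.writerow({k: lic.get(k) for k in writer.fieldnames})

print("licenses.csv ready for the FinOps spreadsheet crowd")
```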
>> So I saw the dollar sign and I got excited. My expectation of a page like this, if we're having a FinOps discussion... >> Yep. >> I would want to see a larger picture of my burn. Like, I'm running in four different clouds. I've got some expectations of what it costs to run on-prem, and I've got very targeted expectations of what it's going to cost in the cloud. If I'm provisioning space on clouds 1, 2, 3, 4, 5, as a FinOps person, I want to know how much it's going to cost me. Okay, I want another 100 terabytes in AWS on X type of capacity: that's what I want to see. I'm glad that I'm seeing the licensing here, but a FinOps view would be my run rate. It would be my monthly burn. It would be my allocation and provisioning costs. >> Yeah, that's good input. So, we're getting there. Right now, we can show you what you've committed and what you've consumed. >> Right. >> We haven't yet tied it to the dollar. >> But then I have to go to Cost Explorer to figure out how much I actually did, or project what I'm going to do. So, is this going to tie into Cost Explorer and start extracting that information? >> We're going to continue to enhance ours, and we do feed that information, but I can't say we're going to feed Cost Explorer over here and Cloudability over there. We're not quite there yet. >> Not feed Cost Explorer. I mean, I'm greedy, I'm selfish, and I'm lazy, and I want a one-stop shop that will show me, okay, if I'm going to provision 100 terabytes here and 100 terabytes there, I want to know that my ultra over here is going to be an extra 2K a month, >> Right. >> and my lower-tier stuff over here, my GP3, is going to be 1K a month. >> We will get into forecasting. We've started it in some areas, so I'll give you a great example. One of the most dynamic areas is Kubernetes clusters. We haven't even yet gotten into Cloud Insights, which is monitoring and insights; Cloud Insights does monitor and manage what you're using across these different environments. And in particular, it's all about priorities, right? We're doing R&D, and because the Kubernetes world is so dynamic, we have built the ability for our Kubernetes explorer to not only determine what you're using and where, but to tie that back to the dollars you're paying for those services. We're taking a step there. So with our Kubernetes explorer, we can say: here are the workloads, here are the resources you're using, here's what you're using, and we convert that into the currency you're paying. And the reason we've really done that is that we've had some larger customers say, my file shares are actually pretty well under control; I can manage those. But I have nodes, pods, and clusters coming up everywhere, all the time, and that's way more dynamic and running out of control. So we started there. And it allows us to say to a line of business that's spinning up their shopping cart applications, not just "you used X amount of terabytes and X amount of cycles." We say: you used X amount of resources, here's what it cost, and this is what we're charging you back. >> Exactly. >> So I know I'm with you as to where we want to go in terms of forecasting and modeling. I think we've proven that we're getting the data and giving the visibility, and in some really bleeding-from-the-neck areas, like the highly dynamic Kubernetes world,
we're actually doing that chargeback capability and doing forecasting. >> Awesome. >> We'll work on the rest of it. Join us for Cloud Field Day next year, and [laughter] we'll see where we've come. Nathan? >> Yeah. And I just want to bring up, I mean, we're kind of hitting you hard on the whole FinOps model, but I think it's worth stating the power and the usability of that licensing model, and the ability to do some of these trades directly, right here. There's a lot of stuff to grow into, and we're excited about that, and that's why we're going to hit you there. But I think a lot of vendors out there that want to move to the subscription model need to stand up and pay attention, because this is it. This is doing it right. >> Yeah, I'm not casting aspersions. I'm giving you my wish list. >> And I've been pretty honest with you as to where we're young and where we're advanced. >> No, I definitely appreciate it. And I think the question I didn't hear answered (you asked it, I asked it, she asked it) is: I don't know the names of the FinOps platforms that y'all already use, but can this information, via the API, be consumed and subscribed to by the one you mentioned and the one you mentioned? >> I don't want to speak for them, though. The information can be exported. I would have to get into the specifics of any given platform, but in the world that I've seen, if you can export it, either via file or via API, you can use it. I just can't comment on any one in particular, because I don't know what they are. We can take that one. I know you've been raising your hand for a while. >> Thank you. So, we have seen a global view of what is going on, of the total cost of the consumed infrastructure. And I think the one thing that every FinOps guy wants to know is how to partition this cost, with tagging or something like that. So, is there something that lets a FinOps guy find out what resources, what team, or what project is consuming something? I know this is an application-perspective way to recalculate the cost, but sometimes FinOps guys work tightly with the DevOps team, so is there a way? >> Yeah, I can take that. When you deploy a Cloud Volumes ONTAP cluster, there's an option to provide labels, and those labels will cascade to every resource that is built in the cloud. So when you're building your reports, you can just use that label. You can push it, say, on Google Cloud, to BigQuery, or in AWS and Azure to whatever you're using for the database side of things, and then you can query it from there. >> Okay. >> Yeah, we'll get there. And there's another... I mean, we haven't even gotten into cloud data sense yet. We might do that later, but cloud data sense is really pretty powerful and cool, and we're still finding new ways to use it. It allows you to do tagging and classification of data: not only classification and sensitivity levels, but tagging of that data. So, for example, and I'm going to tie this back to your FinOps point in just a minute: we had a customer that said, we're splitting into two companies, the primary company and the NewCo. We've been together for a long time, and there's a lot of data. There's 10 years of history that has to go with this NewCo for the line of business that they do. How do we know what data to give them?
We've got data centers around the world. We've got literally hundreds of file servers around the world. How do we get their data to them? Because there are some real, serious fiduciary responsibilities once you divest a company. And what they have found is that they can use the power of cloud data sense to determine who the file owner is, what AD group they're in, and, if they want to go even further, the sensitivity level. Because some of that data is crazy complex: I'm splitting off this company, and some of it is shared data. Some is company A's, some is company B's, but some of it is really sensitive and shared, and both of them legally want access to it. So that's a use case for our cloud data sense: in this case, separation, M&A, and migration. At the same time, being able to tie data back to owners, being able to assign owners and tie that back to the AD group, means I have the underpinnings for doing some of the things you're talking about. I will not say we're there yet, but it's nice to see these pieces coming together, so that we can take your input and figure out where we want to go and what templates, dashboards, and functions we want to create. >> I'll give you some comfort on the FinOps world. I participate in finops.org, and I'm not sure if NetApp is actually active in that ecosystem. >> linuxfoundation.org and finops.org are both great resources. >> The beauty part is... I'm trying to think of the best way to describe it. A tire fire. That's what it is, a complete tire fire. There's no single place. We're all basically shipping CSVs to each other. There is no single platform that's been successful. However, you can create a good, consumable, explorable data format, because, in the end, we are no better at giving the right information to the finance people than they are at telling us what IOPS they need on the volume for their HANA, right? We are so disparate in what we understand about each other's stuff. But by working with open communities like finops.org and working with the other partners, we're basically just creating good interchange data models, and I think that's where it's going to end up. >> That's a pretty young effort, right? >> It will never get to the point where it's just, "I've got one thing. I'm going to use Airtable, and I'm going to store it in Databricks." That will never exist. But a safe interchange of data, that we can do, and it's going to be bloody JSON and CSV that we end up with. But I'm in the camp of the example you gave before. I'll say I know a similar company that also had to spin out a NewCo, and it created what I would call murky waters. It's a tricky thing, because that particular company suddenly had very different data requirements and compute requirements. Some was shared, some was separated. >> Yeah. Data sense, FinOps, it all ties together, which is why it's important that we get it in the platform, that we begin to mature it, and that we get input from people like you. We've got just a few minutes left. I want to see if you can do, like, a five-minute use case. >> A five-minute use case. >> Actually, let's talk about that metadata for a moment, since you're up here. I think that's one of those things you were just talking about.
How can you use metadata to make intelligent decisions, whether they're cost-optimization decisions or, you know, separation for privacy or for compliance reasons? That's probably an interesting use case, and definitely one that we see coming up a lot. I mean, when has compliance affected a cloud migration? Should it affect a cloud migration, and what can we do to resolve that and make it easier for people to think about when they are splitting a company, or moving into the cloud, or making an application cloud-native? >> And I think what we're finding is, wow, in this hybrid multicloud world, the things you would have thought about before, which are how big is it, how much of it is there, and what format is it, get joined by these other things: who owns it, how sensitive is it, is it governed, and can our company get hit with a 5-million-pound fine if the wrong people get into it, etc., etc. And that also has to go with your internal policies, because when you look at some of the regulations, a lot of them say you have to adhere to your existing policies. They don't even say you have to adhere to the government's; they say you have to adhere to your policies. So you have to be able to tie those off. One of the tools we have is a service called cloud data sense. It's a SaaS-delivered service to scan, classify, tag, and provide insight into your data, and then to go even further and perform some of the actions for you that normally a team would have to do, like producing GDPR reports or PII reports or DSARs. You know: employees leaving the company; okay, this guy is kind of suspect; or, he's accused us of something and we need to go look at the data. So cloud data sense gives you that, and it gives it from a lot of perspectives. One, and we mentioned some of this: hey, I'm a FinOps guy, where do I have a chance to save money? If you can govern your data and know what is where and how often it's been used, you can say, look, I have stale data, for which I've set a policy. To be honest, let me see here what the tooltip says, I don't even... okay: files last modified 3 years ago. I can set that. By the way, my tooltip tells me what it is, and it's now, since I did that, checking to see if there's anything new. I have 120,000 items; that's 100 gig of stuff that meets my category of stale data. Do I want to tier that? Do I want to get rid of it? How about non-business data? Thank heavens it's only 15 gig in this case. We have seen customers where it's terabytes and terabytes of non-business data, meaning it's of a format or a type that doesn't meet my requirements for being business data. Most of the customers I've talked to that are using that don't zap it, but they do move it to a very slow, infrequent-access tier. Or, do I have duplicate files? Do I need to investigate? I could probe down deeper into any of these, investigate, see the list of them, and start taking action on them. I can look at my data overview. So, Phoebe had mentioned a migration capability. One of the things is: can I migrate this stuff to the cloud? If it meets a policy, for example, that it's restricted or sensitive, I might have an internal policy that says it's not even allowed to go to the cloud. So I need to be able to check and see, and you'll notice it refreshes in real time, according to parameters that are set.
So it's refreshing to see if there have been any changes, by repository. How much of my data is non-sensitive, general data? How much of it meets a personal restriction that I now have to govern? How much of it is sensitive? And not only my labeling, but the permissions that are applied to it. As an example here, I have some data down here that is open to the public and is restricted. Do I really want to allow that to happen? There's lots of stuff I'm not going to go into in all the details, just in the interest of time before we take our break. But it allows me then to say: you know what, not only governance about where my data is, but how about, I need to be compliant? So I need to know how much of my data is non-sensitive, or personal, or sensitive. Now, these, by the way, are our categorizations; you can define your own. How about, in my governed data: criminal procedures data, ethnicity, reference data... there are all kinds of parameters I can choose. It also has, I think the number is, like, 28 legal compliance reports that I can pull. I'm an IT guy. I don't know what my PCI DSS report is. That's okay. I can click it, it will run, it's a standard-format report, and it will produce it for me in PDF format that I can use to answer requests on demand from my auditors, authorities, etc. This is the kind of capability where, as a storage person, if they said, tell me what your privacy risk assessment is, or give me a standard HIPAA report, a lot of storage admins would say: dude, seriously? Really? I've got to hire a contractor. >> Right. But sensitivity, like the descriptors, is that a configurable thing? I understand the need to have canned things, but I'm thinking of any organization where, say, an account number is going to be eight characters here, 72 characters there. >> So you can define your own; you can use regex expressions. If you have a 19-character pattern that looks like X, Y, and Z, you can do that. You can set your own parameters. As I said, these are the ones we've set up in this demo environment; you can craft your own. And it's meant to say: you've got a wild and woolly world of data, you need to get it under control, and we're going to give you some AI- and ML-enabled tools to do that. So not only is it looking for what's in the data, it's also allowing you to tag that data if it meets certain criteria. >> There's so much on this one that we cannot cover. >> Oh my gosh. We've got five minutes left until our break, so you're right. But it's important to know what is here. And, by the way, if you want to go deeper into any one, I can go into any of these. The one I mentioned before, of file owner: I can enter file owner names, or provide a list of names, or import a list of names or groups, user groups and permissions. I want to import a list of AD groups, and it will tell me how many items meet that criterion, and I can export that. That gets me miles ahead when you have millions and millions of objects. >> Let's hold it there, because I think that opens up a whole other can of worms. I love the metadata conversation, though, because, to that point about multicloud, this doesn't care where your data is running, whether it's in one cloud or another cloud or on premises, and I think that's part of the power of being, I don't want to say agnostic, because we do care which cloud you're in, for various reasons, cost reasons. >> Can you point them to this to show that capability?
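To make those two policies concrete before the break, here is a plain-Python illustration (not data sense itself, just the idea) of a user-defined regex classifier plus a "last modified over three years ago" staleness cutoff. The 19-character account-number format is made up for the example.

```python
# Plain-Python illustration of two policy ideas from the demo: a custom
# regex classifier and a stale-file cutoff. The account format is invented.
import re
import time
from pathlib import Path

# Hypothetical 19-character account number: 2 uppercase letters, a hyphen,
# then 16 digits (2 + 1 + 16 = 19 characters), e.g. "AB-1234567890123456".
ACCOUNT_RE = re.compile(r"\b[A-Z]{2}-\d{16}\b")

# Stale-data cutoff mirroring the demo's policy: modified > 3 years ago.
THREE_YEARS = 3 * 365 * 24 * 3600
NOW = time.time()

def scan(root: str):
    """Walk a tree, flagging stale files and files containing account numbers."""
    stale, sensitive = [], []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if NOW - path.stat().st_mtime > THREE_YEARS:
            stale.append(path)
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than fail the scan
        if ACCOUNT_RE.search(text):
            sensitive.append(path)
    return stale, sensitive

if __name__ == "__main__":
    stale, sensitive = scan("/data/shares")
    print(f"{len(stale)} stale files, {len(sensitive)} files with account numbers")
```

A real classification service adds the scale, AI/ML assistance, and reporting discussed above; the point here is only that a user-defined pattern plus a modification-time policy is the core of both checks.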
>> We can't. I think you're going to have to show them later. >> Oh, do we have time later? >> We do. We will have time later, but I want to thank you all for today. I think there's an important question that I have to ask all of our presenters, who have demoed this, I would say, countless times, across different cloud providers, to different sizes of customers, in different industries. We are here to talk about multicloud, and we are here to talk about hybrid cloud as well, and what that evolution of cloud looks like. So I want to ask: what is the key takeaway out of all of this that you've shown today? There are a lot of services, but what does the future of cloud look like? And I'm not going to start with you, Chuck; I'm going to start with Greg, and I will end with you. So, Greg, what do you think the future of cloud looks like, from an Azure CSA's perspective? >> I think with the future of cloud, I see it going into a hybrid multicloud environment, where we're going to have stuff on-prem and we're going to have stuff in individual clouds. That's kind of where we seem to be going today. But I do think, especially as things get containerized, we're going to start to see various workloads going into multiple clouds, because some of the clouds have better capabilities than others for various services. And I think we're going to see that multicloud environment starting to take shape as well, with those services linked together via the application layer. >> Sure. And we didn't get a chance to talk about Kubernetes terribly much today, but that is definitely one I know you're a fan of, and Vishnu is as well, so that's a great conversation for another time. Vishnu, you work a lot of the time in Google, which is kind of that up-and-comer. We can't call it that anymore; it's one of the largest cloud providers in the world. What do you see happening in this world of multicloud? >> So, one thing is, there's a target state where a lot of customers are trying to move to the cloud, but as NetApp we are trying to meet them where they are, right? There's always the ideal, utopian state where you are in all the clouds and you have all your workloads there, but we are trying to meet customers where they are with some of our products. And tying back to your first statement, where you and your family are watching different streaming services: I started my career off as a bioinformatics engineer. Sometimes there are services in a specific hyperscaler that would cater to my workload. How can I make sure that I can move that workload efficiently to that particular hyperscaler and utilize their services, and not worry about, all right, I need to have all my data over there? We need to get to a state where we can pick and choose, and make it way more accessible to everyone for different workloads. Rather than selling a product, we have a use case, and we should target that use case, and that's what I feel Google Cloud, AWS, Azure, everyone, is doing with our partnership, right? >> Awesome. Thank you. And, Chuck, last but not least. >> I think the cloud's evolving to a point like some of the stuff you've seen in your own homes. When I grew up, we had a telephone provider and an electric provider and a TV provider, and then it got to where some of these providers did multiple things. And you've seen now where cloud computing has evolved almost to a utility model.
A guy in his garage can have access to massive amounts of capability that was not even dreamed of 20 years ago. But where we are today in the utility model is, sure, I have an electric provider. They own the pipe, but every year or so I switch who the content provider is on that pipe, right? I get a lower rate on electricity from this guy or that guy, still delivered through my electric company, and it's become abstracted. What I mean by that is this: today we have a hybrid multicloud where you have the ability to put the right job in the right environment. You can put it on prem, you can put it in Azure, you can put it on Google. You need to know, understand, and manage that. Where we're evolving to is that my Terraform automation, for example, is driven by AI to choose the right environment based on what's happening at that moment: what's available, what's the current price, what am I using, what are my commitments, what compliance requirements have to be met in a given region that one cloud provides. I won't be choosing anymore. We will evolve to a point where my teams are deploying workloads and the infrastructure is deploying itself where it makes sense. >> Thank you so much. I love these presenters, and we had great demos and a lot of conversation. I appreciate all of your questions and your thoughts, and I think it's great food for thought for our engineers and our product leaders as well. We will be taking a break, so we will see you back here in one hour, and we will be having a great conversation with our partners at Google. >> You see what I did there? All right. Well, thank you very much. As you just heard, we will be back on the hour with a continuation of this, and we're going to hear from Google. We're also going to have a roundtable discussion with the delegates around the table and the NetApp folks, so we can talk about how this multicloud works in the real world. If you've enjoyed this discussion, I encourage you to check out some of the other NetApp presentations. If you just Google NetApp and Field Day, you'll find a great number of those, including some deep dives into the technology we've heard about today. So, if you're wondering how some of this stuff really works and what's underlying it, there have been some really great presentations there. In fact, one that I'll call out was at VMware Explore this year: NetApp went into great detail on how ONTAP works in the cloud, and I really enjoyed that one. Also, in case you missed any of this, all of it is being recorded and will be shared on NetApp channels as well as on YouTube. And as a little hint, if you go to the NetApp or the Tech Field Day page on LinkedIn, you can go back and see the recordings of this morning's sessions right now. So, if you did miss something, that's a good way to catch up. Otherwise, we're going to show you a few videos here on the stream to keep you entertained for the next hour, and we'll be back at 1 Pacific for the closeout of this special Cloud Field Day exclusive event with NetApp focused on multicloud.
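Chuck's picture of automation choosing the environment can be sketched as a simple scoring policy. The sketch below is a toy illustration in Python under assumed inputs (the environment names, prices, and certifications are all invented); a real pipeline would feed live pricing, usage, commitment, and compliance data into something like this before handing the winner to a tool such as Terraform.

```python
from dataclasses import dataclass

@dataclass
class CloudOption:
    name: str
    hourly_price: float   # hypothetical live price feed
    regions: set          # regions where capacity is available
    certifications: set   # e.g. {"HIPAA", "PCI-DSS"}

def pick_environment(options, required_region, required_certs, budget):
    """Cheapest environment satisfying region, compliance, and budget --
    the decision Chuck imagines an AI-driven pipeline making before it
    invokes infrastructure-as-code to deploy the workload."""
    eligible = [o for o in options
                if required_region in o.regions
                and required_certs <= o.certifications   # subset check
                and o.hourly_price <= budget]
    return min(eligible, key=lambda o: o.hourly_price, default=None)

clouds = [
    CloudOption("on-prem", 0.12, {"us-west"}, {"HIPAA"}),
    CloudOption("azure", 0.10, {"us-west", "eu-west"}, {"HIPAA", "PCI-DSS"}),
    CloudOption("gcp", 0.09, {"us-west"}, {"PCI-DSS"}),
]
choice = pick_environment(clouds, "us-west", {"HIPAA"}, budget=0.15)
print(choice.name if choice else "no eligible environment")  # -> azure
```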
[music] >> All right, gentlemen. We ready? >> Absolutely. Watch your step here. >> Okay. >> Nice to meet you. Thank you. All right, Chris. Two guys on a boat. Who would have imagined a gondola in Vegas, 70°? It's not Venice, but pretty close. >> Very nice. >> What do you think? This is a special ride. I'm really fortunate to have you here. >> Yeah, thank you for having us. >> Absolutely. You know, we'll talk about a few things, but let me just kick it off by learning a little bit about you, Chris, about Blackboard, and the journey that you've had so far. >> Yeah. So, Blackboard is an educational technology company. We support learners and educators around the globe: about 180 million users in about 80 countries around the world. We work very closely with the educational community. Educational technology saw very significant growth as entire countries and continents changed their approach to fully online learning at a moment's notice. I'll also say that it's been a very exciting time more recently as we've announced the merger with Anthology. Anthology is a company delivering educational technology products that focus on the full student life cycle in the North American higher education market. So it's been a really good fit, a lot of energy between the two companies as we've combined in this new organization as Anthology. >> That's a lot of change. But what I heard a lot more about was velocity, innovation, and the scale at which you had to do these things, rather than cost and optimization. That's fascinating. Can you tell us a little more? >> Yeah, absolutely. Obviously cost optimization and performance go into it, but really, when you think about the customer value, they want to know that the systems are scalable, that they're reliable, and that we're able to deliver as quickly as we possibly can. And absolutely during those times, especially during the month of March in 2020, we had to deliver almost on a daily cadence as we kept pace with the changing dynamics. But I think there's a really important story here around our journey and our partnership with NetApp. We started in managed hosting data centers that were on premises. We used a lot of NetApp in those data centers. At one point we hit, I think, about 15 petabytes of data that we were hosting, and that was great and served a great purpose, but there are challenges to how quickly you can move in physical data centers. Our provisioning processes were measured in weeks or months, and when you have to move at the scale of changing industry needs, you need to move a lot faster than that. That was really why we picked AWS as our cloud vendor of choice to move to. It's why we picked NetApp ONTAP to accelerate our journey with storage: to be able to leverage the APIs that allow our orchestration layer to take a lot of the time out of the provisioning process and spend that time on innovation. We want to spend a lot more time on delivering value, not on >> setting up and administering systems. Exactly, and I totally agree. I think the laws of physics have changed. >> Yes. >> Absolutely. >> Yep. And we need to keep pace. >> Very fascinating.
So, now that you are getting into what I would call a new world, where earlier it was almost separated between on-prem and cloud and now it's merging together and you have to take care of both communities: any learnings from it for NetApp and AWS? What can we do better to help you and your community? >> It's been a journey that we've done together, right? That we've done with AWS, with Amazon. And when I think about that journey, going from on-prem to the cloud, we have benefited so much from the ability to tap into automation and orchestration, and to tap into the reliability of NetApp ONTAP in the AWS cloud, which is something that we couldn't match with the other storage systems we tried before NetApp ONTAP was available. We want to be able to focus on driving forward the educational technology community. We want to spend a lot less time on services that are a little bit more commoditized, that we can trust our partners at AWS and NetApp to deliver. FSx for NetApp ONTAP offers a really interesting opportunity for us to benefit from some of those managed services. >> The best of both in a way, right? You get it as a service and you get it totally integrated as consumption, rather than having to manage it and operate it and run it yourself. >> Absolutely. >> All right, I'm going to switch gears for a lighter moment. You know, I can't help but notice that outside, this is really actually phenomenal. This is cool. >> This is absolutely beautiful. >> Reminds me of Venice itself. >> Yep. Absolutely. >> I think the only thing that's missing might be the singing. >> That's true. Should we try that? >> Come on, Ezio. Can you give us a little... >> [singing] >> Thank you so much. I'm sure it was all about two buddies on a boat, right? [laughter] >> Absolutely, that's all this was. No, but thank you, Chris. Thanks for your time. Really appreciate it. I hope you get some fun here, too, while you're at it, not just all business. Thank you for sharing all these details with us, and we look forward to a stronger partnership. >> This has been great. Thank you for the opportunity. It's been wonderful. >> Awesome. >> Bob's got a new big boss. He's also got a new big problem: more stringent mandates for backup and recovery. In the event of a disaster, all critical workloads must be restored within an hour (his recovery time objective), with no loss of data beyond an hour from the failure (his recovery point objective). This news makes Bob hyperventilate for about an hour. You see, Bob uses traditional backup software, so his backups take over two hours to complete, which means he's only able to run them once a day, normally between midnight and 2:00 a.m., so he doesn't impact users. Should a disaster hit, Bob's recovery point is likely to be at least a business day out of date. This news does not go over well with the new boss, who tells Bob to just do what he needs to get it done. A few days later, Bob submits a PO for new backup servers and storage, networking hardware, rack and cables, plus additional extended backup software licenses. And this goes over even less well. So Bob makes another list of possible places to find a new job.
Bob's friend DJ gets a similar mandate from her boss. But her boss also gives her data management solutions from NetApp. So when she's hit with one-hour RTO and RPO requirements, all DJ has to do is log in to NetApp Cloud Manager, open the Cloud Backup Service tab, and change her settings to create snapshots to Azure cloud every hour. And that's it. Since Cloud Backup uses block-level, incremental-forever technology, her snapshots are super fast and don't impact user availability or performance, so she's continually protected. Restores are lightning fast, too, so DJ can get users back up and running with minimal business impact. And it protects both her on-premises and cloud storage with the same policies. The NetApp cloud portfolio makes backup windows a thing of the past and keeps DJ's data, and her job, completely safe. Make your boss happy with NetApp, the cloud storage specialist. [music] Bob's company is venturing into the online wine business. Bob's no connoisseur; he likes his wine like his IT solutions, slightly chilled and served from a box. But he's excited to deploy his new Kubernetes-based transaction processing app for wine sales. Problem is, despite Kubernetes giving Bob the ability to deploy apps super quickly, his standard IT tool set offers no insight into the highly dynamic Kubernetes infrastructure, and his team isn't up to speed on containerized workloads. So their old familiar tools are all they've got. One fateful Friday, a new pinotage with a 99-point rating and a very reasonable price point hits Bob's wine store, and demand rises like a tsunami. But by 5:15, customer complaints are pouring in and thousands of bottles are trapped in online shopping carts. Bob's at a loss because his Kubernetes cluster appears to be running fine, and his team all report that the individual components they each own are fine. Bob's hit with hundreds of thousands of dollars in lost revenue before they can even identify the issue. Bob's friend DJ uses Kubernetes to deploy apps for her online chocolate shop. But unlike Bob, her team uses NetApp Cloud Insights. So when a major sugar rush hits one Friday evening, Cloud Insights automatically alerts her team that the web store app is developing a problem: a node within her cluster is failing and an outage is looming. DJ uses the Kubernetes explorer feature of Cloud Insights to get a graphical view of the entire landscape and drills down into the failing cluster. Within seconds, she's pinpointed the specific node that's over capacity and allocates a larger volume to fix the problem. This sweet solution means no downtime and no lost business. DJ celebrates by ordering up her second favorite cluster type, the kind with chocolate and peanuts, a transaction that's heartily approved by NetApp, the cloud storage specialists. [music] Bob is an IT admin. When his boss mentions an audit and asks Bob to send a report of all data backups, Bob goes into a panic. You see, Bob's company is global, with employees and data spread out in locations around the world. That means 71 different file servers with 71 different backups each cycle, and limited visibility for Bob. So when he needs to collect backup reports, Bob has to pick up the phone and call the remote office in Thailand, and Portugal, and Brazil, and every other office, and hope that whoever answers can provide an updated backup report, if their file server has been backed up at all. Bob's friend DJ is also an IT admin.
But unlike Bob, DJ uses NetApp Cloud Volumes Edge Cache to consolidate her file servers into a single footprint, accessible from any location in real time. Cloud Volumes Edge Cache automatically caches often-used files locally for each location, giving those users immediate access to whatever files they need. So when John in Boston needs to open a slide deck just updated by Jan in Bangkok, he doesn't have 10 minutes of latency waiting for the file to open; it pops up immediately. And since DJ manages the centralized storage, she takes news of an audit in stride. With a few clicks, she creates a report of the backup history, plus a complete audit and compliance report produced by NetApp Cloud Data Sense and a threat report from Cloud Secure, all of which she emails to her boss within minutes. And that's bonus points for DJ, thanks to NetApp, the cloud storage specialist. [music] Bob's company is working on a huge infrastructure project, building a highway bridge over a river gorge out west. It's not going well. On the positive side, they've got an amazing team of specialized engineers from all over the world. That's also the negative side. With the distributed team needing to collaborate on huge CAD files, extreme latency issues introduce significant unforeseen costs, risk, and delays. Even in the main office, where the data is stored, engineers have to wait for colleagues to complete work and transfer the file back; the risk of conflict and overwrite from unsynced files is just too great. Bob's only workaround is to implement a strict follow-the-sun schedule, with only one location at a time allowed to work on the CAD files, which means project overruns and exploding budgets. Even with the regimented schedule, Bob worries about conflicting updates to the files, where even a tiny discrepancy can have enormous consequences. DJ's company is in the middle of building a massive dam in the same western region, and it's going great. NetApp Cloud Volumes Edge Cache lets her team collaborate in real time on a single centralized set of files. Cloud Volumes Edge Cache automatically caches the files each engineer needs locally for fast access. It's as if the entire team were working from the same office, which means extremely low latency and no delays. Cloud Volumes Edge Cache also manages file access with real-time global locking, so even when a remote user is working off a cached copy, the CAD file remains under centralized control. This eliminates the risk of data conflicts or file overwrites and provides a single source of truth for every file across the entire project. DJ knows that Cloud Volumes Edge Cache is a future-proof solution that always delivers the right data to the right place at the right time, courtesy of NetApp, the cloud storage specialists. The demand for virtual desktops is skyrocketing, and both Bob and DJ are expected to deliver simple VDI solutions for globally distributed workspaces. For DJ, this is no problem, because her company uses NetApp Cloud Volumes Edge Cache. This means users around the world can use centralized storage as if it were local, which makes DJ, her boss, and their global workforce very happy. Bob's company doesn't use Cloud Volumes Edge Cache, so his workspaces around the world each require dedicated storage with lots of moving parts and redundant software. Bob also needs to spend time on backup, compliance, DR, and more at each site, for both user data and shared files.
It gets even more complex when Bob factors in workflows and applications that require collaboration between sites. This makes Bob and his boss very nervous. DJ's whiteboard, on the other hand, reveals a simple solution that actually is simple: a hub-and-spoke approach that connects all distributed workspaces. This means users around the world access centralized storage as if it were local, which makes collaboration a breeze, because everyone is using the same data, with only one location for DJ to back up, manage, audit, and keep compliant. Even better, NetApp Cloud Volumes Edge Cache supports FSLogix containers, so everyone's data travels with them across the globe and caches automatically anywhere they go, which means a virtual desktop that delivers local performance. Cloud Volumes Edge Cache lets DJ keep things simple and keep plenty of white space on her whiteboard, courtesy of NetApp, the cloud storage specialists. [music]
All right. Hey everybody, welcome back. I'm Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT, and we are here in San Jose at NetApp headquarters to wrap up our wonderful day, a special Cloud Field Day event focused on multicloud. This morning we heard from some NetApp folks about the reality of enterprise multicloud needs, and we saw some really great interactive and practical demonstrations of how it all works. And now we're going to hear from a great partner, a great name in the industry, and frankly somebody that is a friend of ours, here from Google Cloud, to learn a little bit more about how they work with NetApp in this multicloud environment. All of these sessions are recorded; as I said earlier, you can catch any of the recordings right now on LinkedIn. You'll also find these recordings on YouTube, on NetApp TV, and probably in a lot of other places besides. If you're wondering what this is all about, you can go to techfieldday.com and click there to learn more about what Tech Field Day is and how you could become a presenter or a delegate in the future.
And if you have any questions or comments about the event, again, I'm Stephen Foskett and I would love to hear from you; you can find me on most social media channels as SFoskett. So, I'm going to turn it over to Phoebe and Drew to get started here with Sean and Google. >> Well, thank you. I'm so excited to be wrapping up our content today with our great partner, Google, which has been a NetApp partner, and we've been friends for a really long time from an engineering perspective. So I'm really excited to be joined today by Sean Derrington and Dean Hildebrand. I'll ask you to introduce yourselves. Sean, let's start with you: a little bit about your history, so that people know who you are and where you're coming from. >> Yeah. So, Sean Derrington. I'm in product management at Google Cloud, now focusing on our storage solutions, and I do a lot of work with NetApp as a company and a partner of Google Cloud. I'm a longtime storage industry veteran, 25 years: Veritas and Symantec, a storage startup, Exablox, and so forth. This is my seventh Tech Field Day with Stephen Foskett and team, and actually Stephen and I did the very first Tech Field Day way back when, with Veritas and 3PAR, on thin provisioning. So it's great to be back. [laughter] >> And Dean, I'm really excited to have you, because I talk all the time about NFS and file services and Google, obviously, and you have just such a history in that area. So maybe introduce yourself. >> Yeah, thanks for having me. Dean Hildebrand; I'm in the Google Office of the CTO. I started school at an interesting time, right when we were redefining the NFS protocol around NFSv4, and so a lot of the work I did at that time was around scaling up NFS. A lot of those changes got into NFSv4.1, and then we started defining the NFSv4.2 protocol as well at that time. That was lots of fun. Then I went over to IBM Research and worked primarily in HPC storage there for over 10 years, and then eventually in their cloud storage offerings as well. And now I'm at Google; like I said, I'm a technical director working on our file, block, and object storage strategy. >> Yeah, that's awesome, and it's great to have you both here today. Tech Field Day is not meant to date us, but it's exciting to hear about the history and your backgrounds, because I think that's what informs a lot of our future direction as well. So, Sean, let's start with you. We're talking about file services primarily in Google today, and Google Cloud Storage as well. What are some of the use cases you're seeing for file in particular in Google Cloud today? >> You know, it's a lot of the same things we see on premises, because, as we were talking about earlier in the show this morning, there are digital-native companies that are born in the cloud, and then there are a lot of companies that are on premises coming to the cloud. And with the vast amount of data being unstructured on premises, it's a good opportunity to move it into the cloud and keep that same structure; sometimes you're going to rewrite applications and so forth.
But some of the main use cases are around disaster recovery, particularly with NetApp on premises replicating to NetApp CVO in the cloud. It's a good opportunity to get the data in sync between on-prem and cloud, with failover capabilities, and to run compute and storage where they're needed. We see a lot of the same verticals — a lot of success in financial services, retail, healthcare. Even EDA is an example: there are companies that have been able to leverage NetApp on premises for years, and now they have the opportunity to leverage CVO with FlexCache in the cloud for bursting workloads. So there's a different way people can think about leveraging cloud assets as they continue to leverage on-premises assets. And one of the other big areas is healthcare, particularly around NetApp file services. One company in particular is a global company: they have petabytes of NetApp file services across 80 different locations worldwide, and we're now in the process of centralizing all of that within six Google regions. They're using NetApp CVO in Google Cloud in combination with the edge cache you heard about earlier this morning — it used to be called Global File Cache — for the file serving. So now they're able to minimize their operational costs by getting rid of hardware that was aging out anyway, and they're leveraging a new way to manage that data more efficiently. >> Right. So it's a lot of the typical use cases for moving to the cloud: having the storage and the file services available in the cloud just like they are on premises, and the ease of not just setting them up but, I imagine, managing them as well, when you're still trying to learn this whole new architecture of cloud, right? >> Yep. >> Okay. And is there something that customers look for in the cloud because they're used to it on premises, or is there a change in perspective in the way they architect these environments? >> What we've seen is that it really depends on where that company is in their process of migrating to the cloud — whether it's all-or-nothing, or they're moving on a multi-year strategy — because companies will run a combination of VMs on our Compute Engine; they may be rewriting applications for Kubernetes, which you heard about earlier this morning as well; or they may just want to continue to leverage VMware with GCVE. So it depends on where they're coming from, in terms of whether they're going to think about things differently. But with CVO, they have all the bells and whistles they're used to on premises, right? They're able to leverage a lot of the same APIs to do some advanced things. Or they may want to go down a fully managed storage route with Cloud Volumes Service — not all the features, but you're going to have trade-offs in terms of ease of administration, directly integrated with the console, versus do-it-yourself. So it just depends on where companies are. We see it really across the board. >> Yeah, right. It's that multicloud journey, or the cloud journey, that we talk about, and where they are.
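As a rough illustration of the FlexCache bursting pattern Sean describes — an on-prem origin volume cached by a cloud-side CVO system — a minimal sketch against the ONTAP REST API might look like the following. The endpoint path and payload shape are recalled from the ONTAP API and should be verified against the documentation for your version; hostnames, credentials, SVM and volume names are all invented here.

```python
# Sketch: create a FlexCache volume in a cloud CVO system that caches an
# on-prem origin volume, so burst compute in the cloud reads hot data locally.
import requests

CVO_HOST = "cvo.example.internal"   # hypothetical CVO management endpoint
AUTH = ("admin", "password")        # use real secret handling in practice

payload = {
    "name": "eda_burst_cache",                  # cache volume on the cloud side
    "svm": {"name": "svm_cloud"},               # SVM hosting the cache
    "origins": [{                               # on-prem origin being cached
        "volume": {"name": "eda_tools"},
        "svm": {"name": "svm_onprem"},
    }],
    "size": 1 << 40,                            # 1 TiB of cache space
}

resp = requests.post(
    f"https://{CVO_HOST}/api/storage/flexcache/flexcaches",
    json=payload,
    auth=AUTH,
    verify=False,  # lab sketch only; verify TLS certificates in production
)
resp.raise_for_status()
print(resp.json())
```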
>> Do you find that — and I don't know if either of you wants to answer this — the cloud-native customers you mentioned are leaning towards file as a storage service, or is there a preference to use other things, like object? >> Yeah. I mean, we're big believers in using the right tool for the job, right? So for whatever workload they have, they really need to use the specific storage service that meets their needs. Object and file have been on different journeys, I would say, over the last five years in cloud. Object is really trying to play to its core strengths around flexibility, scalability, and global access to the data, and focus on its core use cases — content distribution, data analytics, and data preservation — delivering all of that at a very low cost. Whereas file has been on its own journey: all of the data management capabilities that exist on-prem, we've been moving into cloud — making them easy to use, easy to consume, working for non-NetApp experts who really don't want to think so much about the data management side and want to focus on their applications. And we've had a lot of success there as well: delivering dependable SLAs and RPO and RTO times, and separating out IOPS and bandwidth from capacity in terms of how people buy these things. So we've been a long way down that road. But the other thing file has is a very specific API, around POSIX. It also has its own data-sharing semantics between different users, as well as low-latency access — something much more similar to on-prem even when you're in the cloud, versus object, which has much higher latency and is really designed for larger objects. So when we talk to customers about what workloads they have and where they want to bring them, it's really about looking at the types of capabilities the different services provide and doing the match into each one of them. And both of them are, again, first-class citizens in the cloud. >> Sure. Yeah — thank you. So Sean, I want to take a segue here, because I think some of our viewers maybe aren't familiar with Google Cloud, and I know you have other storage services. We mentioned Google object storage as well as file. Can you give a rundown of what those cloud storage services are in Google, just so we're all on the same plane? >> Yeah. So depending on the application, there's a combination of object, block, or file services that we have. Dean was talking about Cloud Storage — that's our object storage, which is designed for the types of applications Dean was mentioning. It's also built on the infrastructure we use for our other services in Google, like Photos and Gmail, called Colossus, which is really a cluster file system that is planet-scale. Persistent Disk is our block storage offering, and there are a variety of Persistent Disk options based on capacity, performance, compute size, and so forth that people can choose to use with GCE and in other cases. And then Filestore is our native, fully managed NFS file storage service that customers can use.
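To make Dean's file-versus-object contrast concrete, here is a small sketch of the two access models side by side: POSIX file I/O against an NFS mount (Filestore or CVO) versus object access through the Cloud Storage client library. The mount path and bucket name are placeholders.

```python
# File vs. object access patterns. Requires: pip install google-cloud-storage
from google.cloud import storage

# File: the POSIX API — cheap random access, rename, and locking semantics
# come with the protocol, over a low-latency NFS mount.
with open("/mnt/filestore/logs/app.log", "rb") as f:
    f.seek(4096)            # jump into the middle of the file
    chunk = f.read(1024)    # read 1 KiB from that offset

# Object: whole-object or ranged GETs over HTTPS — no rename, no POSIX locks,
# higher per-call latency, but effectively unlimited scale and global reach.
client = storage.Client()
blob = client.bucket("my-analytics-bucket").blob("logs/app.log")
data = blob.download_as_bytes(start=4096, end=5119)  # ranged read of the same bytes
```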
But one of the things about Google Cloud is that we have an open ecosystem, and that's one of the reasons we've been partnering with NetApp for years. We want to do what's right for the customer, and not everything Google has is going to be right for the customer. So we try to figure out what the right option is based on — in this case — file needs. Maybe it's Filestore, but sometimes it might be CVO, or it might be CVS from NetApp. That's really the approach we have with an open partner ecosystem, in addition to the backup services we have integrated into the cloud console — and we work with Veeam and Commvault and others as well, again with that mindset of an open ecosystem. >> Okay. Yeah, and that's really interesting — I want to dive into that. When you say an open ecosystem, what does that mean from Google Cloud? Does it mean you let people build things on top of Google Cloud and then sell them through the Google console, or is there something more integrated than that? >> Yeah — well, in this case, with NetApp at Google Cloud, it's more integrated than that. A lot of times customers can build and deploy things on their own in Google Cloud, and that's one way to do it. They can also buy things through the marketplace, which is one of the relationships the two companies have, and that often helps customers with how they're going to spend and what kind of service levels are associated with support of the products in the marketplace. And then, in the case of NetApp and Google Cloud right now, we actually have sales teams focused on selling each other's products. Within NetApp there's a team that's compensated for selling CVO or CVS in Google Cloud, and similarly, within Google Cloud, we have teams compensated on selling NetApp in Google Cloud. So from that perspective, people can buy it through the marketplace, use it, and know that there's a joint support SLA associated with it. >> Yeah — support is definitely one of the big ones for customers, especially moving into the cloud, where the last thing you want to think about is who to call if something's not working right. So can you talk a little more about that relationship between Google and NetApp? I'd like to hear from both of you about this, because there's the sales partnership and the marketplace partnership you mentioned, but there's also a pretty deep engineering partnership. So Dean, maybe you could talk a little about what that's been like with NetApp over the last couple of decades. >> Yeah. I think we started working together at an engineer-to-engineer level probably around 2017 or '18. And things got really close — really working as one team, right, to deliver the services that we have on the platform.
Specifically, we built a brand-new service with CVS: it's a Kubernetes-based ONTAP service that users can now use across our entire platform. That really required engineers from both sides sitting in the same room for over a year — two years — sitting down, talking to each other, working through all the issues in terms of getting the technologies to work together: the networking, even right into the Kubernetes tech leads, in order to modify the platform we had to adapt to some of the NFS and SMB requirements that are needed. And I think that's been rewarded on both sides: we've had internal engineering awards given to teams inside of Google for the great partnership we've had, and I think the same thing has happened on the other side as well. We've won partner-of-the-year awards together, all these different things. So I think it's been a really productive, super amazing partnership, where engineers on both sides have learned a ton. >> And the other thing, going forward: between the Google Cloud PMs and engineering and the NetApp PMs and engineering, there's a lot of interaction — weekly, if not daily — on roadmap items. What are we going to do? That really comes to light in Cloud Volumes Service, right? It's directly integrated into Google Cloud's console, so customers don't have to install anything; they just navigate down the left-hand nav to Cloud Volumes Service, choose a capacity and a region, and deploy it. And that's also leading into things we're going to bring out in the future, for prioritization around what customers want. As we work joint opportunities with customers on their migration to the cloud — whether it's a migration or a hybrid deployment — it really influences, from a product perspective, which features we prioritize to deliver first, based on both teams' inputs and priorities. >> Can I ask whether you can talk about a feature or a capability that has been added to Cloud Volumes Service as a result of a joint customer? >> You can ask, but we'll have to save that for the next field day. >> Okay. [laughter] Okay, sure. The other thing I want to touch on: you mentioned earlier, Sean, how the sales teams are really tightly integrated, and I want to ask about how Google Cloud approaches some of these workloads.
When customers are coming to you asking about migrating into Google Cloud, and one of the things behind that is a storage requirement, what are some of the things you're finding the teams are able to do together to get to the customer's true requirements? >> Well, oftentimes the conversation with the customer is much bigger than "I have a storage project" — and that's actually a good thing, because it opens up the door to a lot of conversations and possibilities about what we can do in which scenario, and how much we can do. It really gets into a prioritization and planning perspective on the customer's most important priorities. Sometimes it's a data center evacuation; sometimes it's "I want to move to cloud to solve a DR problem — I need to get out of my second data center and stay in my primary"; sometimes it's "I want to move different workloads to the cloud based on my VMware estate and run that in GCVE." So it really depends on the priorities of the customers, but those conversations that are much broader than a specific storage project really open the doors to bigger opportunities. >> Sure. Okay. [clears throat] And I guess I want to look into the future of those opportunities. Where do you see your customers — what are the kinds of workloads they're starting to lean towards? They may have started with a backup or DR workload, because that's a nice way to get started and to see what the cloud is like and whether it's going to work for them. But what are they asking for now, if we can get a little more specific? Do you want to chime in? >> Yeah, sure — I'll add a few. To your point, everyone, at some level, on some workloads, started off very slowly, making sure everything was working right. Now I think we're seeing the transition to saying: well, actually, maybe I can use the cloud as the first — the only — home for this workload. But a lot of others are saying their gateway into cloud is through bursting. So they're able to burst out — put pipelines in that can burst out to cloud, leverage the compute resources inside of cloud, and then, at the end of the day, quickly shut it all down. But then that becomes its own gateway into saying: okay, now that you're bursting, that requires a lot of data movement and a lot of management on that side — what happens if we just move everything into cloud, on a per-workload basis? So now we're seeing design workstations coming up in the cloud — again, getting access to GPUs and these types of things. A lot of what used to happen on the desktop is now happening inside the cloud, and that becomes the place where the data is generated, and then the place where they can run, say, larger batch workloads or pipelines on that created data — which then leads down through the entire pipeline. So I think this evolution — the trust we've been building over the last several years — is starting to reap the hybrid and the all-in type workloads. >> Anything to add, or — >> No. Well said. >> Oh, well, there you go.
>> Well, I want to ask, then — we started talking about file services, and you mentioned NFS and how it has been around for a really long time. What do you see happening with NFS? Has it reached its potential, or is there still more to come? >> Yeah, it's interesting. I think a lot of people confuse the implementation of a server, or the implementation of an NFS client, with the actual protocol itself, and it's a little bit confusing to know where one ends and the other begins. A lot of people's perception of "this is NFS storage, it works this way" is based on on-prem — they used some version of it, or they had some old system, or whatever they were doing. So in cloud we're really trying to change the mindset a little: this type of behavior isn't fixed to the protocol. Meaning, if you did want to set up a service and have it automatically grow over time as you add data, you can do that with an NFS-based offering, right? There are a lot of things you can do that are completely separate from the protocol, and I think we're discovering those as we go through this journey over the last several years. Right now customers are really looking for that high-level data protection, the data management services, making sure everything's safe. But once we get through all of that, it'll be a lot more around: okay, I have a global team, I need to access data from these different locations, I really need to figure out how to leverage the benefits of cloud to get my team working together and leveraging all the cloud resources, as we talked about before. Those become the interesting questions, and at this point we don't see any reason why NFS or SMB or whatever cannot follow us along on that journey — again, enabling the key parts of the workloads and the APIs that people expect. >> Yeah, for sure. I think that's a good thing to know as well — knowing I don't have to learn another skill. >> Yeah. And if you want to, right — the key thing we're trying to get across is: if you want to learn and use different aspects of things, go ahead, that's great. But don't feel like it's an obligation in order to take advantage of cloud and the flexibility and the scalability. We really want to separate that out: file storage can meet all of those needs and deliver on the promise of what cloud is out there delivering. >> I'll be the guy that breaks the ice. >> Please. >> Fantastic — we actually were enjoying that; it's a good intro to the questions that can lead us in there. One of the things with storage in cloud, especially when you get into file storage: abstraction equals performance loss, no matter how you slice it — every layer of abstraction. There's a sense of an inability to measure, or to get top performance, once we get distributed cloud storage. There's a fear among us on-prem folks — right, team bare metal — we love managed latency, controlled latency, and there's a sensation, a human sensation, that we will lose that control or capability when we go to the cloud.
Can we talk about what you're doing, both with ONTAP and the partnership with NetApp, to make sure you can maintain those SLAs and SLOs, and that we don't just assume it's going to get spread all over some data center and we're going to get these disturbing latencies we can't point to? Because that's the other thing: on-prem, I can go back and say, "Now I know what happened." When I move to cloud-based storage — maybe I'm wrong — I lose the capability to get the observability to say, at this point in time, this specific thing happened, which spiked latency, which caused, you know, second-tier cache problems. I could go on all day about performance and caching — it's close to home for me — but that's a human problem: we love metal. >> We love to know that I can go touch it, throw observability at it, and see exactly what happened when. >> And I feel like I lose that sometimes when I go to cloud-based services. No offense. >> Yeah — we were just talking about this. It's actually a trade-off, right? A lot of the NetApp admins on premises love to figure out how many spindles are going to be in a box, what's the ratio, what's the utilization, how many IOPS per spindle, how do I size it, what's the cache ratio — all that good stuff everybody's used to doing. >> Is it SATA? Is it SAS? I want to know. >> Is it 5,400 RPM or 7,200 RPM? I'm going to save a few dollars by going with slower drives — can I still get the IOPS I need out of it? Right. [laughter] >> It's one of those trade-offs you don't have to worry about anymore. Even if you're using CVO in Google Cloud, it's going to run on Persistent Disk under the covers, on a compute instance type with X amount of vCPU and memory and so forth. So you have to think about things differently — as a service you're almost subscribing to. You can size that service how you'd like, and there are SLAs that go along with those services. CVO has something like a four-nines SLA, and depending on how you deploy CVO — in an HA pair, you can deploy it across two different zones within a region — you're actually increasing the number of nines of availability you can count on. We were just talking about that a little earlier, working on a project with a customer. The customer is steeped in NetApp sizing, and it's, "Well, that's not how it works on premises," and you go through the explanation: it's a different way of thinking about a service you're going to run — in this case CVO — in the cloud. I don't know if you want to add anything.
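Back-of-envelope arithmetic for the "more nines" point Sean makes about an HA pair across two zones — the numbers below are illustrative, not SLA figures, and the independence assumption is optimistic (real SLAs also cover shared components):

```python
# If each zone is independently up 99.9% of the time, an HA pair spanning two
# zones is down only when both zones are down at once.
single_zone = 0.999
both_down = (1 - single_zone) ** 2   # assumes independent zone failures
ha_pair = 1 - both_down
print(f"{ha_pair:.6f}")              # 0.999999 -> roughly six nines, in theory
```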
>> Well, I was just going to add that what that does is put the onus on us to deliver, right? To your point — "I'm going to get these crazy tail latencies," or something to that effect — how do I know I'm getting them, and how do I react when they occur? That puts the onus on us to say: look, we're giving you this service; we should be open about what it actually is — what its latency profile is, what its performance characteristics are, how you achieve those performance characteristics. So we publish benchmarks, for example, on our website, to say: hey, if you're not getting the right performance, run this; if it's still not achieving it, that's a bug. And we really try to build up this partnership with customers and partners: if you're not achieving this, it's a bug. And they're like, "Whoa, wait a second — what do you mean, a bug?" I'm like, "It's an actual bug. Open the bug, because you're not getting it." They're like, "Well, I just figured it was slow at this point." Yeah, it might be slow at this point, but that's not supposed to happen. It's going to happen, but it's not something that's supposed to. So that's where it really is about building the trust that the numbers we say you're going to get, you get — and therefore, does it matter whether it's SAS or SATA, or has some sort of write-back cache in the middle, as long as it's achieving the promise that you care about? Plus the monitoring, which, to your point, is something cloud has traditionally done really well at the higher level. But you want more detail, and that's something we're evolving over time: more and more granular statistics, I/O latency profiles. The thing I've had people who are used to on-prem say is: "I wrapped a bunch of stuff together, I built a dashboard in 10 minutes, and now I'm observing the latencies of all of my VMs. This would have taken me two days with Grafana and all these other things before, and now I just did it with some point-and-clicks." >> And even being able to see a specific workload that hits an extremely terrifying cliff on the miss-ratio curve at a certain allocation level — like, oh, I could move that around, I could choose a better way to allocate storage to it. It's tough to get that visibility from the cloud consumer side. >> Yeah. One example I was going to give is on our block storage: we actually have monitoring of how often you've hit the limit. So you know whether or not you're hitting the limit, and if you are, maybe you need to increase the size, or pay for more IOPS, or something along those lines — if you're unhappy about it. Because a lot of folks say, "No, I'm not hitting the limit," when in fact they have a very bursty workload and just couldn't see it in the monitoring, because it wasn't granular enough. So we try to build more and more of those types of metrics back to the user.
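A sketch of pulling that "did I hit the limit?" signal programmatically, reading Persistent Disk throttling counters from Cloud Monitoring. The metric type shown is our best recollection of GCE's throttled-write counter and should be checked against the current metrics list; the project ID is a placeholder. Requires the google-cloud-monitoring package and application default credentials.

```python
# List throttled-write counts per instance for the last hour.
import time
from google.cloud import monitoring_v3

project = "projects/my-project-id"   # placeholder project
client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

series = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type = "compute.googleapis.com/instance/disk/throttled_write_ops_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    throttled = sum(p.value.int64_value for p in ts.points)
    if throttled:
        # A nonzero count means the workload is bursting into the limit,
        # even if coarse averages look fine.
        print(ts.resource.labels.get("instance_id"), "throttled writes:", throttled)
```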
>> Could you talk about — oh, sorry. >> Could you talk about — I love these stories about how the mindset has to change from how we've had to do and control things on-prem to controlling different things from a higher level. Do you have any more examples of customers who may have come to you with this same type of fear, and how you've been able to help? >> That's a good segue, because Dean mentioned that the onus is on us to deliver a service, right? And one of the services we deliver really well is networking. There are over 130 points of presence where customers can get into the Google network. That healthcare company I was talking about earlier, where we were sizing the removal of multiple petabytes of filers across 80 locations — the original design was actually to put a cloud volumes edge cache at each of the locations. However, in the larger regional centers they had, we were able to eliminate the edge cache altogether, because they can get onto the network and run directly against the cloud. So now the users' data is in the cloud in one of the six regions — and we replicate it across two regions for availability — but the users are getting it directly from the cloud, and it's going right back down to them. You still have the Windows file-serving experience, but you don't need an edge cache. That's one of the things that changed based on the sizing and some of the latency testing we did — it comes down to how close the compute is to the data. We still have edge cache deployments planned at some of the sites, but we were able to alter the design and actually improve what the customer is able to do. >> Okay. I was actually going to ask about your edge cache and your remote sites, because what I'd really like to know is whether that's going to grow. I think that's a distinctive quality Google has — that outreach to the edge — but it sounds from your response like you think it's not going to grow, if you're not seeing demand for the cache. >> Which part of it is going to grow? >> I was hoping you'd tell me you're going to have more locations and more distributed clouds and things like that, but I wasn't hearing that in your response. >> Yeah — so, in this case, the customer had 80 locations and we're consolidating that into six Google Cloud regions. We have 34 Google Cloud regions today, and we've announced at least nine others that we're building out over time. So we're going to continue to expand the number of regions we're in — and consequently the number of regions Cloud Volumes ONTAP or Cloud Volumes Service is available in — in addition to increasing the number of networking points of presence. So no, our growth is certainly not slowing down in terms of building out regions. >> You announced 140 edge points around the world last year — >> It's a large number, but the cloud volumes edge cache is different from the networking edge points I was talking about. >> Oh, okay. Well, I was interested in the 140 edge points and how they would impact our customers, especially those with a distributed business environment. >> Yeah — it really helps them get into our network sooner. Once you're on our network, it's all our network: you're not going through different providers, so you have very predictable network performance and latencies that you can size into how you architect the application. >> Yeah. And do you see that growing? Because I come from a service provider background, and one point in one city is not enough. >> You want multiple points in a city — regional points, on the caches.
I think there are 2,000 edge points out there today — again, used so that wherever you are, you're just getting into the network, and as you said, the sooner you're in, the better. I don't remember the exact numbers off the top of my head, but it's large. So the real question becomes twofold. One is giving users the choice: do you want to use that type of network, or do you maybe want to save some money and just use the public internet, in terms of how you access the different zones and regions? And the second is our strategy around the ways to push out into the edges and into on-prem — the different offerings we're building, either on-prem or at edge sites, that then link into the primary — what is it, 29 regions now that we have? >> And is that exposed in NetApp? Can I control and see that in NetApp? >> Yeah — well, maybe, Sean, you could say how many regions, and where NetApp is in Google Cloud. >> Yeah. So Cloud Volumes ONTAP is available in, I think, every region, maybe less one — for some of the newest ones, as they come online, it takes a little bit of certification [clears throat] — but you can think of it as basically everywhere. And Cloud Volumes Service is in probably 80% of our regions. >> I really like what you both said there about the edge — how it brings the edge closer to where your users are. We may never think about it as an edge and a core; we might just think of everything as the edge, and I can access my Google Cloud resources wherever I happen to be. It may be over the public internet, it may be over a private direct link, but it's going to be there, and it'll be available when I need it, at the latency I need, with the performance I need. So that's excellent — a nice way to change your perspective of what cloud looks like, what data centers look like. >> Is the use case of backing GKE via a CSI driver, and that pattern, in place? I may be going outside the lines of where we are, but we didn't explore that use case yet — and I'm going to go all Kubernetes, all the time, now, if you give me a chance. >> Yeah — well, let's talk about the Kubernetes use case a little bit. I guess the first question is: is there a role for persistent storage in Kubernetes? I think the answer we'd all give is yes. But how does file look, what happens to it, and — obviously Google is in GKE — how are we working together with Google to make that easier for customers? >> Yeah, actually, this feeds into some of the stuff we were going to talk about around the future and where file is going. To my earlier point, I think we need to continue the journey of making it simple and easy to use, taking all the enterprise management features we know and love and making them cloudified, if you will — flexible, scalable, globally accessible. >> Mhm. >> But then I think the next trend is going to be much more specific variants of file storage in cloud, for the different frameworks that are emerging, like Kubernetes, right?
I think we've spent the last — it feels like five to ten years at this point — integrating the two: building CSI drivers, making them easy to launch, and hopefully abstracting away the gory details. But I don't think we've done anything specific — in terms of the NFS protocol, or the NFS client, and other things — to really adapt to per-container requirements, and I think that's going to start to be a trend. We're not just talking about a VM anymore; we're talking about very small microservices, and how do we adapt to their needs, not this VM's needs? Today we've tried to focus at the abstract level because it's a little easier, but two containers running right next to each other could have completely different needs, and we need to start building storage that adapts to those workloads and requirements — some of them might be slow and large, some might want really low latency and fast — and how do we colocate apps like that? So I think that's a really big evolution, and I think it will carry over into serverless frameworks and all the rest of the frameworks as we move forward: really adapting to their needs. We're going to start seeing these specializations. >> Yeah — I think the idea of the container being able to consume TPUs, GPUs, and high-speed storage directly for persistence, storage that also allows other things to happen to the data in that layer — we're going to see those real, true, scale-out, purpose-built systems. Maybe this is again a panacea type of thing, but looking at the pharma use cases from folks I've talked to, and the media use cases, where they're leveraging container-based services: they need high-speed storage, they need some persistence, they need to run other data management algorithms and data checking on it — so it's got all the right properties, but the only compute layer that can keep up with it is scaled-out containers. So I see that as being there. I'd be interested to see where we can build applications that can use this stuff, right? >> If we can find someone to build it, then we'll be there.
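For the per-workload storage-request pattern being discussed, a minimal sketch of the Kubernetes side might look like the following: a PVC asks a CSI-backed StorageClass (for example, one provisioned by NetApp's Trident driver) for the class of storage a particular container needs. The StorageClass name is hypothetical — it would map to whatever tiers the cluster defines — and, depending on the Python client version, the `resources` field's type name may differ slightly.

```python
# Request a low-latency, shared (RWX over NFS) volume for one workload.
# Requires: pip install kubernetes, plus a cluster with a CSI file driver.
from kubernetes import client, config

config.load_kube_config()  # use your local kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="render-scratch"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],        # shared file semantics over NFS
        storage_class_name="netapp-gold",      # hypothetical low-latency tier
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

A neighboring pod with bulk, latency-tolerant needs would make the same request against a different class (say, a hypothetical "netapp-bronze"), which is the per-container specialization Dean is pointing at.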
>> That's such a great note to end on — that future and possibility. So thank you so much, Dean and Sean, for joining me today. It's been really exciting and interesting having you here. >> Thanks, Phoebe. >> Thanks. >> All right. Up next, we're going to have a classic field day element, dating back to the very first field day: a roundtable discussion. We're going to invite our friends from NetApp back up — if we want to add some more chairs here — and we're going to have a chance for everyone to discuss with the presenters, and for the delegates to discuss among each other. As always with these roundtable discussions, the core question is kind of what you think it is: we've spent the day, we've listened to presentations, we've seen some demos, we've had questions and answers, we've heard from a partner, and it comes down to the core question of multicloud. So I will pose that premise to my esteemed colleagues around the table. Who wants to kick us off with a thought on modern multicloud? What does it mean, and what have you heard today that tells you what it means? >> I'll jump in, because we had to cut time from some of the ransomware discussion, and when we discuss multicloud, that's probably one of the quickest ways we get to "I have everything built in one location, and I need to restore it into another location." So, when we were talking about ransomware and the solutions you have around Cloud Data Sense — where are you all looking at that type of recovery, ransomware detection, and solutioning? >> Yeah, I'll take that. So, number one: acknowledge it's not just a multicloud but a multi-layer approach — it depends on where the threat is and where the layer of technology sits. Nathan, you brought up Cloud Data Sense, which is a great example, because it's one of the NetApp offerings that's not restricted to NetApp storage or NetApp environments. Cloud Data Sense has the ability to view, scan, give insights, alerts, and reporting on lots of types of storage — be it our traditional storage competitors (you might have an Isilon platform out there, an HP platform) or one of the cloud formats that isn't standard NFS, like OneDrive, SharePoint, or some of the object buckets. For Cloud Data Sense to do its job well, we had to give it the capability to look at a lot of different types of stuff. Now, what we also have to acknowledge is: I can look at a lot of places and bring a lot of value — and I encourage everybody to go to netapp.com or cloud.netapp.com and look at some of the dashboards, where you can see red- and green-light alerts on different environments where there may be issues. I'm sorry we didn't get to that today; maybe Stephen will let us do another cloud field day and we'll get into it. But to take action — especially autonomous action — sometimes we have to go deeper. As an example, if it's an ONTAP environment and we're seeing anomalous behavior, like excessive serial encryption on a volume — that's not normal. The AI engine will see that, say "that's not normal," and have steps it can take, like spinning up an immutable snapshot at that moment in time. So even if you don't see it and don't get involved for an hour, that's your worst-case fallback. You may go up to the next level: user activity. Another layer of the technology stack is the Cloud Secure capability of Cloud Insights, which can say: okay, a snapshot was taken, but what if it's not just this particular file system? What if it's a user account across a lot of different places — a really smart attacker moving around? It'll identify that anomalous behavior and not only flag it and kick off the snapshot, if that hasn't happened, but block that user. So there are different layers you've got to go to. We're looking at it as a multicloud horizontal segment, striped the other way by the layer of technology — and we'll bring value as high up the stack as we can.
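A toy version of the autonomous response Chuck describes — watch a write stream for a run of high-entropy (encrypted-looking) writes and cut a snapshot the moment the heuristic trips. The entropy test and thresholds are simplistic illustrations (ONTAP's actual detection is ML-based and far more nuanced), the snapshot endpoint shape should be checked against the ONTAP REST API docs, and the host, UUID, and credentials are placeholders.

```python
import math
import requests

CVO_HOST = "cvo.example.internal"                       # placeholder
VOLUME_UUID = "00000000-0000-0000-0000-000000000000"    # placeholder
AUTH = ("admin", "password")                            # placeholder
write_stream: list[bytes] = []   # plug in a real feed of write buffers here

def entropy(buf: bytes) -> float:
    """Shannon entropy in bits per byte; ~8.0 for encrypted/compressed data."""
    if not buf:
        return 0.0
    counts = [0] * 256
    for b in buf:
        counts[b] += 1
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def snapshot_volume(host: str, vol_uuid: str, auth) -> None:
    # Endpoint shape per the ONTAP REST API (verify for your version).
    requests.post(
        f"https://{host}/api/storage/volumes/{vol_uuid}/snapshots",
        json={"name": "anti-ransomware-trigger"},
        auth=auth,
        verify=False,  # sketch only; verify TLS in production
    ).raise_for_status()

streak = 0
for write in write_stream:
    # Count consecutive encrypted-looking writes; reset on anything normal.
    streak = streak + 1 if entropy(write) > 7.5 else 0
    if streak >= 50:  # 50 high-entropy writes in a row looks like mass encryption
        snapshot_volume(CVO_HOST, VOLUME_UUID, AUTH)
        break
```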
>> I think the other thing, to your specific scenario: if you've had a ransomware attack and you're recovering, you're not going to be able to practically recover any significant amount of data to another cloud if it's not there already, right? Bandwidth, data gravity — not to mention ingress and egress charges and all the other things that make that impractical. So your only solutions there are, one, to point-in-time restore to the cloud you're on — making the assumption that whatever got you attacked in the first place is likely not something wrong with the cloud architecture itself; that's likely to be a black-swan event, versus a more standard spear-phishing or other common compromise. So most likely, like Chuck was saying, the ability to have those point-in-time snapshots already there, consistent across all the different clouds and all managed within that software, lets you recover to that point in time effectively instantaneously. Anything that's not snapshot-based recovery — for the data sizes we're talking about, and the speed at which files can be encrypted, especially with hardware offload of encryption — you're never getting it back if you're trying to stream it as a backup. The other solution, if you truly want a cross-cloud answer for ransomware, or for, say, a massive denial-of-service attack, is having the data already there. That's where the cross-cloud capabilities Chuck and the team demonstrated come in — being able to literally drag and drop and have your data in two different places on two different clouds. It's like a mega availability zone, let's call it. I made a term up. >> Yeah, MAZ — is that a thing now? That's going to be all over Twitter. #MAZ >> Did I trademark that? Wait, no — it's NetApp work product. Darn it. >> Think of one, because that would really be appropriate. >> Because MAZE [laughter] — multi-availability-zone encryption. There we go. >> I think it makes a good point: we have the challenge of the technology being able to prevent — or to see and then ultimately prevent — and there's that weird trust question of at what point real application behavior will emulate a heuristic that represents what looks like ransomware. I was actually at SNIA's SDC last week, talking about other storage services that aim to do the same thing: the moment they see, as you said, serial encryption from the beginning, they immediately make the files immutable. >> Yep, immutable. >> So it self-detects and prevents. But there are genuine cases of things we're doing where we could be using different architectures.
Key-value stores behave fundamentally differently in how they write and reallocate data on the fly, which, by some perfectly reasonable heuristic measurement, can appear like it's doing naughty things. And then you've got this weird question of what's right: all of a sudden thousands of things get deleted at once, and you're like, "Oh god, oh no" — and it's actually because I'm just cleaning up my drive, right? There are tough cases of user behavior that's normal, or application behavior that's normal, and "normal" is such a hard thing to ascertain now. >> And it varies by workload, right? There are EDA workloads, or M&E workloads, that involve, by definition, making a million files and then deleting them a couple of minutes later — not to mention AI workloads and everything like that. So if you apply the same heuristics engine without the right intelligence behind it, and say, "Once you discover this to be the signature of an attack, apply it equally across all your volumes," you're essentially going to deny service to your own storage by making all your files immutable — and then someone will be mad. >> Well, on that same thread: as we talk about security, I've loved what I've been hearing about the Cloud Manager capability, but I do have a certain level of concern. We're now taking the storage layer — which historically has been hidden behind multiple layers of things that block it from the big wide world — and giving some manageability and observability to the outside world. What kind of controls do we have, aside from basic RBAC, to prevent or control access to that Cloud Manager endpoint for our organizations? MFA, ACLs, things like that — what are we talking about? >> I'm going to add on to that: governance, right — overall, overarching governance, from all the things you mentioned, but also data sovereignty, data residency, all of that. What are we doing about that? >> So, specifically on the hyperscalers: take AWS — there are keys; take Google Cloud — there are service accounts. We tap into what the clouds are doing in that respect and utilize it for our own products within the hyperscaler. >> Okay. >> And I wanted to add to that. Many of the conversations we have are around security and security policies, and knowing who has access — in the cloud sense, who has access to which resources, and it can be down to the individual resource: a bucket name, a single EC2 instance, or what have you. Our policies — because they do utilize, say, AWS IAM — are open; you can go in and see exactly what our policy is asking your systems to do, whether that's provisioning, deleting, or building something new. That's part of the transparency the Google team mentioned as well: you need to know what it's going to do. So it's aligning with them, and also using those native mechanisms as much as possible. >> I was just going to add — you were asking about Cloud Manager specifically. With Cloud Manager you can configure federation, so that it federates over to your own organization, and then have MFA associated with that as well — every time we sign in, we have to approve a key or hit it on an authenticator to get in. So that's another line of defense that you were kind of looking for there as well.
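To give the flavor of the scoping being described — the management plane gets a cloud identity whose policy you can read and audit — here is a pared-down, illustrative AWS IAM policy document, expressed as a Python dict. This is not NetApp's published permission set (Cloud Manager's actual required-permissions list is much longer and documented by NetApp); it only sketches the pattern of granting narrow actions and refusing non-MFA sessions.

```python
import json

# Hypothetical least-privilege policy for a management-plane service identity.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CvoProvisioning",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",   # discover and monitor instances
                "ec2:RunInstances",        # deploy new nodes
                "ec2:CreateVolume",        # provision backing EBS storage
            ],
            "Resource": "*",
        },
        {
            "Sid": "DenyWithoutMFA",       # refuse calls from sessions lacking MFA
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        },
    ],
}
print(json.dumps(policy, indent=2))
```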
>> So I'd like to extend that out, because this is all about building the storage piece of it, right? What happens when the goal is to give developers their playground to build an application or a service out to the rest of the world? If I'm a developer consuming storage to serve as part of an application, is it on them to keep that locked down to the right degree, or is that on the storage side? >> I could start off — I've got a mic. Yeah. I mean, everything starts with a good cloud architecture for your organization. We have customers who thought they were going to start small, and they built a very simplistic architecture: the networking, and their design of folders and projects and how they laid it all out, was relatively straightforward. Then they realized: oh wait, I now have 300 teams in my organization, all of them accessing different data resources; they all want staging areas and production areas; some of them are on-prem and want to access data — and then it gets really complicated. The point is, they then rewrite the entire cloud architecture: the networking design, the project and folder layout, the access control mechanisms. Once they have that, that's when NetApp comes into play — okay, now we're deploying into that architecture, and how do we make use of how they've set up the roles and the service accounts and the access control and everything else? >> And for somebody on our side — the ops side — trying to understand best practices: where would they look to get templates for that? Would we look to our very familiar on-prem storage providers, or would we look to y'all for that kind of outcome? >> Yeah — who's the guiding reference? From a governance standpoint, because governance and security are two separate functions, but they are related. >> I would take responsibility onto us, in the sense that, as I said, every cloud has its own reference architectures for how you build secure systems and deployments. So I would say you'd want to start with the specific cloud's white papers and best-practices guides on our website, and work with our team on deploying that for the organization — and then it's really about how we complement all of that with the storage services. >> Yeah, it is a shared architecture. A lot of the white papers on Google Cloud are co-authored by NetApp, because we know we have to fit into that ecosystem as well — however you've decided to architect your solution. So the reference architectures — we provide input into them, or help author them — and that's where I would definitely start: with the cloud provider. What Cloud Manager, or the services, then give you is that consistent experience if you do happen to have the same setup in other cloud providers —
knowing there's going to be a way to achieve your high-security or compliance requirements, or your role-based access — because that's the NetApp promise, in a sense, with our products: that we will be able to deliver that capability to you. >> In the end, all you've got to do is phone the Uber engineer, ask them their password, and apparently they just give it away and you take over the whole thing. [laughter] >> Yeah. >> All this stuff doesn't mean anything. >> Jeez — you were so positive until then. [laughter] >> I will say, to your original question, I think this is one of the interesting things we struggle with. If you go back 15 years and set up a storage array, you put the management port on a separate subnet and called it a day, right? It was a little bit of — I don't want to say security by obscurity, but it was close to an air gap. But not really. And I think we all struggled with the first day you could actually manage a storage array from the cloud. It was a brand-new paradigm for us, or anyone else — and this isn't NetApp-specific; it would be true for any of our competitors or cooperators in the industry. In some ways, I even — myself, I'll freely admit it — was at first kind of resistant to the idea, as someone who's been around for a while. It's like, whoa. But then you realize — the principles of zero trust: you should assume that your management network is on the public internet. The assumption of zero trust is that your data center is already compromised, and at any time it is compromised. And the good thing about putting something on the public internet is that it forces you to focus on the things that are truly important — multiple-admin verification, all the structures that Google Cloud and others have put in place — because you can't just rely on "oh, someone can't route to that IP address" as a method of security. That's not a true method of security anymore. So by removing that as a crutch — and it's a really comforting crutch, because it makes us all feel like we can fall asleep at night knowing there's a firewall and a separate subnet — we've got to adopt more secure, more sophisticated methods for this age. >> Yes — you bring up the perfect thing: assume it's compromised, and now what do you do? It's not necessarily about pre-detection. And to your point about governance, this is something that came up. We talked about Data Sense before, and Chuck brought up finding a file owner, and it's connected to Active Directory. The first thing that hits me is: okay, it's metadata storage — but is it then maintained in an immutable state somewhere as a point in time? Because, say Puml and I say something naughty at our company and we get fired, and all of a sudden our Active Directory entries are removed — and so is the metadata about the stuff we touched right before we got fired. Oh yeah — we did something naughty to the file system as well; that's what got us knocked out of the company. >> I'm always naughty. Okay. [laughter] >> Because that record itself has to be maintained for an incredibly long period of time, since we don't know when it occurs — your ransomware may have started two years ago.
Really bad things happened today, but it got in a long time ago. And that's part of the problem: having this sort of immutable, continuous record as a time series that you could go back to and say, ah, a series of events occurred — having a true, immutable reference to establish that the thing that happened today actually began six months ago. >> Well, not just ransomware — what about litigation, lawsuits? They happen. People leave companies, and the next thing there's a lawsuit over whatever situation; you've got to go back and dig up that data. >> I'd say that's one of the advantages NetApp has: having thought about this for a really long time — that people keep data for a very long time. So when it comes to moving that data into the cloud, or suddenly running different kinds of applications against it, we're still thinking about that. We know that people are going to want to keep data around for 10 years, 25 years, the lifespan of a patient, or whatever it happens to be, and that should be something the data management system can tell you. I think that's one of the core tenets for us: your data requirements don't change whether you run it in Google or on premises, and the tools should be there — whether they're autonomous in the system itself, like what Chuck mentioned, where it automatically takes a snapshot if it detects something that might be ransomware; or the more deliberate "I'm going to set a policy, and this policy tells me how long things are kept"; through to Data Sense, which is literally saying what that data is and where it is. So I think of it as almost a layer cake — I like food analogies — a layer cake, or a pancake stack, of data considerations that we've thought about through the years, and now we're bringing that across the board, wherever you go. >> Even as simple as the odd thing that — I personally experienced this, right, where we had a full audit done. I worked at a large financial services firm — just go over my LinkedIn and you'll figure out who they are — and we got regular audits, as we should, and proudly survived them all. But one finding came in. They said, "Hey, let's check: is this user still here?" And this user had been let go — normal, they were summer students. Then you go check Active Directory, and they weren't deleted — they're right there. And then we realized: oh, because they were hired back the next summer, right? So in that case, we had a file owned by somebody, and then it was deleted, or some activity occurred triggered by user behavior, and we mark that and say, all right, good, an immutable log record of that thing occurring — and then that user gets let go, and then you hire somebody with exactly the same first initial and last name. >> It happens. >> So what are the records keyed on? Is it a UUID, or something else? Anyway — questions you shouldn't have to answer for some idiot nerd like me on a panel right now, but things to think about. >> Two things I wanted to say on that whole discussion. One is that — Lisa, can we please have DoorDash bring Phoebe an Egg McMuffin, because she's obviously still hungry.
Second, [laughter] one of the things we're trying to do here, with our Cloud Volumes platform — since you bring up retention — remember, this is the customer's data. We're not becoming the data warehouse, right? There is so much data available, but it's your file systems and your data, and how long you choose to keep all of it. It's your data; it's your logs; your metadata is in your file systems. The problem is, it's been out there for years. If you go back to 1995, you could keep data for 10 years, but you just couldn't figure anything out about it. You have an employee who left; he said, "I did this" — and you could never prove it. So what we're trying to do is give you the intelligence to mine that data and make it available to you. If you have that lawsuit, you need to be able to show what did and didn't happen. >> But I just want to draw that distinction: Cloud Manager is not keeping metadata about your data at all, right? >> You have it. >> We're helping classify it — classifying your data by tagging it with the tags you wanted — but that's your data and your metadata. It's not in our account. >> So we can search for it if we need to. We need to find all thousand pictures of yours. >> But you know what? If I, in my account, say, "Look, I don't want to keep anything more than six months — nothing; delete all file systems, all objects, all this, all that," and three years later somebody brings a lawsuit against me, I can't go to NetApp and say, "I need to know what happened two years ago." Those logs are gone, because that was my data. You know what I mean? So that's the shared — and I think somebody said it — yeah, shared responsibility. Correct. >> Zero trust, shared responsibility, and the intelligence to let you do everything you need to in that model. >> We in IT call that "not my circus, not my monkeys." [laughter] >> So I want to touch on — when we're discussing compliance and governance, there's something that sits somewhere — I always think of it as a step down from governance — called standardization, and we usually think of it in terms of metadata tags, which you just brought up. What solutions, in terms of self-service — no, self-healing — and automation tools can NetApp bring to the table? Say I have gold, bronze, and whatever's between gold and bronze — silver — tiering, and I've got a workload that's on bronze and should be on gold: is there some sort of self-healing there? I have a Kubernetes cluster with different speeds — this one pod is supposed to be on this one solution — those types of self-healing. And one last use case: PII, which you showed, and which was perfect — but PII is really great when it's where it's supposed to be, and it's not always where it's supposed to be. So are these things y'all are investigating, or things you already have available?
Some of them. I can't say that we can solve all the problems in the world, but as an example... >> You could say that. >> I could say that, but [laughter] I'd have people keeping an eye on me. >> Yeah, exactly. >> So first of all, in terms of provisioning infrastructure for a type of job: we showed you something very basic here, but there are higher levels where I can create templates that say, based on this user, in this role, doing this type of job, here's the kind of infrastructure, and that gets you part of the way there. You actually saw a little of that with the SAP HANA implementation: it can be templatized and automated so that if I'm running an SAP HANA workload, it always gets the right level of storage, in this case with the right level of throughput. This comes up a lot with Kubernetes because, as we talked about, it's incredibly dynamic. A month or two ago I read that the average lifespan of a node in a cluster is measured in seconds, literally something like 87 seconds, which was crazy to me; they didn't even quote it in minutes because it would have been fractional. This stuff comes and goes, up and down, so making sure the right class of resource is applied at the right time is part of the automation we help with. In addition, on the monitoring side, our observability capabilities, provided by the Cloud Insights engine, let me set performance levels, monitor thousands of things, and build dashboards so that if something is not performing to its SLA levels, it's flagged for me. Even beyond that, the ML can learn what the likely remediation steps are. It doesn't yet act autonomously, but take this example: I have a number of processes running on a single node in a Kubernetes environment, and one of them starts to hog all the resources and begins furiously writing files. Sometimes these things only get small amounts of storage; say I had 10 GB assigned across five pods, because they come and go, they're ephemeral. Suddenly one of them is creating data fast, and Cloud Insights sees that the other four will soon be out of storage space. Before you get an out-of-space condition, it will tell you: these other four are about to be compromised; you may want to do X, Y, and Z; you may want to spin up another VM, move the offender over, and provision more storage, for example. So where we are today: (a) we can automate and templatize the provisioning of resources; (b) we can monitor against the SLAs you set (we can't set those; the user of the system has to), across thousands of different points covering storage performance, I/O throughput, network performance, compute performance, and application and database performance, and let you know when you're crossing the red line, and when you do, give you the next three things we recommend you do. That's where we are today. I can't wait for next year, when we have this same session and see. >> Yeah, it would be neat to see the ability to stream that data for persistence somewhere else.
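The out-of-space prediction described above boils down to a back-of-the-envelope calculation. This is not the Cloud Insights API, just a hypothetical sketch of projecting time-to-full from recent usage samples on a shared 10 GiB quota:

```python
import time

# Illustrative only: given periodic usage samples for pods sharing one
# 10 GiB quota, estimate the aggregate write rate and flag the volume
# when the projected time-to-full crosses a warning threshold.
QUOTA_BYTES = 10 * 2**30          # 10 GiB shared by the five pods
WARN_SECONDS = 15 * 60            # warn if projected full within 15 minutes

def time_to_full(samples):
    """samples: list of (timestamp, total_bytes_used), oldest first."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    rate = (b1 - b0) / (t1 - t0)  # bytes/sec across the sampling window
    if rate <= 0:
        return float("inf")       # usage flat or shrinking: no risk
    return (QUOTA_BYTES - b1) / rate

now = time.time()
# One "offender" pod furiously writing pushes shared usage up fast:
# 3 GiB written in the last five minutes.
samples = [(now - 300, 6 * 2**30), (now, 9 * 2**30)]
eta = time_to_full(samples)
if eta < WARN_SECONDS:
    print(f"volume projected full in {eta / 60:.1f} min; "
          "consider moving the offender or provisioning more storage")
```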
And I think that's where the larger enterprises ultimately get to: whether it's just pouring it into a Kafka subscription so they can capture the stream for themselves, or keeping it in some sort of time-series database, as you mentioned. >> Not your circus, not your monkeys: my circus. So I could now stream that data instead of waiting for it to be written and committed and then pulling it, FTPing or SFTPing it around the old-school way. But here's where I want to go: this data is being streamed somewhere. Can I stream it to a central repository and then do other exciting things, like point Google ML services at it and really see what's going on, so I can better operate my environment? >> I think it goes full circle to where we started at the very beginning of the day: it's not just collecting data about the storage that you care about, because, okay, my storage is busy; so what? Maybe that's a good thing. It's what my workflow, what my application, is trying to achieve. That's where we want to get to, holistically, with our partners at Google and all the tools we have at our disposal, and also by giving those tools to you, so the data might end up in a different system depending on who you are and where you happen to be. So, I think we have one more question. >> You're risking it. >> I'm risking it. Let's do it. >> Do we... Are we on? >> No, you go. >> Let's do it. >> At the beginning of the day, you said that multicloud initiatives have historically happened by accident: M&As, somebody swiping a credit card, whatever. >> Yeah. >> And that now people are deliberately architecting for multicloud. Without naming anyone, how many customers are deliberately creating multi-hyperscaler environments, and what does that use case look like? Do you guys want to take this? >> I'm one of them, but [laughter] >> Anyone? [laughter] >> I am a customer; I won't name my employer, but we are deliberately building a multicloud enablement program so that we can officially bring in additional cloud services outside of our existing cloud platform, governed in an enterprise way. Because, like you said, I won't call it accidental, but things just came about, so you have to pull it all together. And building a program like that is very complex, because you have to upskill your existing engineers, the ones who know hybrid, who know on-premises and the current cloud provider, and who are now faced with: wait, all these additional clouds are now official; what's going to happen with us? Are you hiring new people? It's been a fun and rough, challenging six or seven months for me personally. We're approaching it in phases. Initially: get these additional clouds up and running with all the foundational components, you know, landing zones. Bring those up first so they have all that governance. Take lessons learned from the other side and bring them over with less technical debt; let's learn from past mistakes so we don't repeat them in the new clouds, and then build a new target operating model. >> But is that the why behind it? So the why is: cloud went crazy, right? Cloud sprawl. Why did cloud go crazy? >> Yeah, it just became... well, not madness, but cloud sprawl.
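On the "stream it instead of SFTPing it" point, here is a minimal consumer sketch; the topic name, brokers, and message shape are assumptions for illustration, not a documented NetApp integration:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical wiring: consume streamed metric events from a Kafka topic
# and hand them to your own time-series store, rather than periodically
# pulling file exports around the old-school way.
consumer = KafkaConsumer(
    "storage-metrics",                      # assumed topic name
    bootstrap_servers=["broker1:9092"],     # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="metrics-archiver",
)

for msg in consumer:
    event = msg.value  # assumed shape, e.g. {"ts": ..., "volume": ..., "iops": ...}
    # Hand off to whatever time-series backend you run (InfluxDB,
    # Prometheus remote write, or BigQuery for the Google ML path).
    print(event.get("ts"), event.get("volume"), event.get("iops"))
```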
There were services out there getting deployed, >> but that still speaks to the first point, the accident. Most of it is accidental, right? >> One of the other things is that we live in a world of mergers and acquisitions. >> Sure. >> So two companies merge; one has one cloud strategy, the other has another. >> NetApp is kind of the link between the two. We cater to this cloud, we cater to that cloud, and make sure that... >> But that's still the accident. You're all still talking about the accident, right? >> Well, I will say one more thing; I get the last word. I do think there's the choice to say: I have different business units, and they want to do different things, different applications, different clouds. And there's obviously "I want to balance my risk." I think those are valid use cases, even if they're not necessarily a technical statement of how I'm going to architect things. But really, it's about wanting to use the right tool at the right time for the right job, and that may change. It's not that I'm multicloud because I'm all in on one cloud provider and then start using another and move everything there; it's because things change over time, and that may just make more sense. I don't think that's accidental. I think that's an intentional decision to put it where it makes the most sense. >> That makes sense. I get that. >> It's a little bit of accidental and it's a little bit of... >> adapting to your business needs. As your business grows, your business needs change, so you adapt. Six or seven years ago, this was the need; now that scope has changed, and the business is evolving faster than IT. IT is usually the last to jump on the train, because we're a little fearful of things, but it's the business evolving to do what it wants to do: to be competitive, to basically make money. It's our paycheck. >> The other thing is, and I think the way y'all started off was perfect: we're talking about these workflows, these pipelines, which until now was all developer CI/CD talk, and now we're actually able to deploy everything as code in the cloud. So maybe when the business conceived the application, and someone was on their laptop building it out in a particular cloud, it made sense to be on that cloud. But the services they need to scale it, deploy it globally, and make all the money off of it aren't available on the cloud they developed it on. Because the platform has become smart enough to understand which services are where, it can advise them: okay, we really need to go here, and here's the way we're going to get there. >> Some of that is pretty typical; the same thing happened on-prem. "Hey, you have a whole bunch of servers; if we put some VMs on them, we can probably do this faster for you and get more performance out of it." And that happened in the middle of things. You always want to put the train tracks out in front of the train, and it feels like you're trying to get ahead of laying that track. >> A little bit of both. >> Get ahead of the train. I like that.
>> Yeah, we're running along with the train. We're running the train. >> Yeah, trying to catch up with it. No, it's a little bit of everything, it really is, because we're trying to get ahead, trying to wrangle everything back together so we're not all off doing whatever. But our business partners are evolving, too, so we have to evolve with them. It's a very complex question. >> Yeah, you picked a very complex last question, and I think this has been a great discussion. Let's keep it going; we'll ask even harder questions next time. But I'm just going to say a huge thank you to everybody here who came up, and also for all the questions. We've loved being here with Cloud Field Day, and we're so grateful that you gave us this opportunity to have a very exclusive day with you all. So, thank you so much. >> Thank you. >> Yes, I'm going to wrap up the live stream today. First off, I would like to say thank you, of course, to our wonderful presenters, especially our partner presenters joining us here on stage. To our delegates, who took time out of their lives to come join us for this presentation: thank you as well. And to those of you who are watching: this has been a really excellent experience for us here, and I hope that came through the cameras and that you saw it from where you are at home as well. If you missed any of this, just go over to LinkedIn; you can find it on the NetApp page. You'll also find video recordings on NetApp TV as well as on YouTube, and you'll be able to catch up on any sessions you might have missed. Before we go, I will say as well that this could not have happened without Paula from NetApp back there. Thank you very much. >> And of course the NetApp support team here, I don't want to leave you out, the Gestalt IT support team, Corey back home, and Prime Image Media. We will be back with more Cloud Field Day all week, so please head over to techfieldday.com and take a look; you may see some familiar faces at Cloud Field Day. And to continue this conversation, I will also say that everyone here, in front of me and behind me, is very interested in continuing it. Check out the delegates page on the Tech Field Day website. Look them up on your favorite podcast program, look them up on Google, look them up wherever they may be, and you will find a great many things: video series, podcasts, blogs, and articles, all showing the kind of thought leadership you've been hearing all day here at this event. So please do continue the conversation with them. And with that, I guess we are all ready to take charge of our multicloud environment with NetApp. That was the title here, and that's what we focused on all day long. So let's take down the stream, and we will be back tomorrow morning at 8 for more Cloud Field Day.
We're gathering the brightest minds from NetApp and the industry to share their expertise, tips and tricks, best practices, and plenty of how-to demos to help you take charge and leave your multicloud worries behind.