All right, welcome to our session today: The First Frontier, the IT Imperative of Innovation in Higher Education. I'm so thrilled that you're with us and, more importantly, that our panelists are here with us today. I want to do a quick introduction of our panelists. But before I do that, my name is Matt Lawson. I'm the director for technical specialists at NetApp for state government, local government, and education. On my far right we first have Marissa Jules, director of architecture and infrastructure at the Georgia Institute of Technology. >> Georgia Tech. >> Thank you. We're gonna have fun today, I promise you that. Next we have Sean Riley, and I'm reading to make sure I get the titles right: research data specialist, University of Oklahoma School of Meteorology. Next we have Michael Ellis, who is a recently minted assistant director (so congratulations to Michael) at Western Oregon University. And finally we have Steve Degro, enterprise storage manager at Yale University. So let's give a quick round of applause for our panel, and we're going to have some fun today. I want to start off by hearing from everyone: tell us a little bit about your institution and about your role. We're going to mix it up a little and start with Marissa. >> Okay. Hi, everybody. I am Marissa Jules, director for architecture and infrastructure at the Georgia Institute of Technology, also known as Georgia Tech, to keep it easy. Don't mix this up with UGA or I'll be very unhappy. In my role I oversee our 24x7 operations, data centers, server and storage platforms (which includes cloud), several other platforms, and a financial operations team. I've been a NetApp customer at Georgia Tech, and previously at autotrader.com, since 2000. Been with the product for a very long time.
Um, so glad to be here. Thank you all. >> All right, thank you. Sean? >> Hi. So, like you said, I've been at OU now for about 10 years and with NetApp since 2015. I run and manage all of the research data services for the School of Meteorology. Right now I'm a one-person shop, so it's been kind of entertaining recently. >> All right, thank you. Michael? >> Thank you. My name is Michael Ellis. I have worked at Western Oregon University for 21 years, and right now I'm in charge of cybersecurity. I'm sort of a one-person team, but I manage our systems team, which includes networking, systems, storage, and the like. So we spend all day just keeping the bad guys out and keeping the data in. >> Steve Degro. I've been at Yale University for 14 years, and Bank of America for 13 years before that, all with NetApp since '97. So I'm a NetApp fanboy, if you want to think of it that way. There are three offices at Yale that manage the storage infrastructure, from the administrative side all the way through the file services, across the entire university. >> I thought that was the title I used when I was a customer: chief fanboy of NetApp. Good to see that there are other ones out there. All right. So my first question is this: data continues to grow, and as data collection grows exponentially and becomes such a critical asset for institutions of higher education, how are you leading data modernization efforts at your institution, how are you attacking the efforts around cybersecurity, and how are you partnering with NetApp to manage that effort? I want to start with you, Steve. >> Okay. So, as the data has grown: in the history of Yale, 300 years, nobody's ever deleted a file. So they're all there, and we have to keep them all up. The expectation is that it's now available 100% of the time. Researchers don't stop working.
Postdocs are there 24 hours a day, 7 days a week, and they'll be glad to let you know that they're there and that they're not getting their data if they're not. So we depend greatly on the resiliency of the infrastructure itself. We've got three different data centers, with clusters ranging from two HA pairs up to eight systems in the cluster. File services, everything, has to be up continuously. The non-disruptive upgrades have been phenomenal for us in doing that. We can upgrade anytime. Even our change management group now is just like, "Oh, you're doing an upgrade. Do it whenever you want." It's not going to break anything. In the history of my use of NetApp, I've never lost a piece of data from the operating system or the hardware. Hold on, let me find a block of wood to knock on. Right, now that it's out there, I'm going to get a call. But no, the faith in the system itself is really strong. >> Yeah, thanks. I mean, I love that, the non-disruptive upgrades. I remember when I was a customer, before I had NetApp, I'd ask for some downtime to do an upgrade, and they'd say, well, if you want to come in on Christmas Day, we'll let you do that. Gee, thank you. So getting that capability, those non-disruptive upgrades, and being able to do those upgrades in the middle of the week while the systems are up and live, with nobody knowing, that's awesome. >> Yeah, even when it happens, and it does happen, right? We have had a controller panic in the middle of the day and people don't realize it. We know it, but nobody seems to know it occurred, which is a good thing. >> Yes. Thank you. All right, Michael. >> So, I think I can probably best answer this by telling two really short stories. Historically, I think one of the things that we've enjoyed most and provided for our users is user-accessible snapshots.
And so, not only did we make the snapshots available to the users, but we made a cute video that looks like the old Mac versus PC commercial. So it's me talking to me, explaining both on a Mac and a PC how users can restore their own files, right? And that's huge, not because it's cute, but because when users call, we no longer have to do it for them. We don't have to make a ticket. We just say, "Hey, go watch the video," and the users can restore their own files themselves, whether they deleted them, overwrote them, or lost them. That's really powerful for us. The other story I think is really helpful: especially for data security and backup, we had a very clever person help us rearchitect some existing hardware and turn it into an offline backup system. That person may or may not be in the room. But we are extremely grateful, because we had existing hardware that we were able to use to make that drawbridge copy of all of our data, and that's been amazing for us. >> Great. How about you, Sean? >> Um, the first thing (I thought the mic died for a second), the first thing when we first got the NetApp was just getting rid of the data silos. Every research group had their own research storage, and they were like, "Oh, you can just figure it out." It made my life a lot easier, and it helped us get rid of a bunch of duplicate files. At one point I had five different research groups that had all downloaded the same data set 10 times each, once for each member of the research group, before I got there, and I was like, let me show you how this works. Okay, that was weird. [laughter] But also, similarly, the non-disruptive updates and letting users get their own snapshots have been tremendous. I can't count the number of times I've had grad students rm -rf:
"Oh, I was in the wrong directory." Hey, look, you don't have to redownload all your data, you don't have to reprocess all your data. And in our most recent controller update, nobody noticed anything. We swapped out four controllers, four FAS2600s, for two FAS8000-series controllers, and nobody noticed for a week while we were doing the work: we added the new ones into the cluster, moved the data, and took the old ones out. Not a single soul noticed in the weather center, which was unheard of for a lot of the different groups. >> Yeah. No, I love that story. And I'm hearing a theme here of non-disruptive operations, where you can do stuff and the users can't tell. And I resemble that comment about the rm -rf; I may have done that in the past. So, it's a great story. All right, Marissa. >> Yeah. So I think we have a lot of similarities with the other institutions here in terms of the non-disruptive upgrades and ongoing operational uptime. We actually moved into a new data center about 5 years ago and stood up a new cluster. As we moved customers into this new data center, we moved their data on the back end. They never noticed, for the most part, which was really fantastic. We've done some life cycle upgrades in that data center; again, customers never noticed, which is fantastic. And to the non-disruptive upgrades: the proof of that is, we were short a storage engineer. We have exactly one storage engineer for my team, plus half an architect and half a manager. The architect was back in the saddle trying to cover, and he went to do the pre-check, you know, to make sure that everything's right before he deploys the upgrade. What he didn't realize is that it automatically kicks off the upgrade when the pre-check is done. And so he calls me up at about 10:30 or 11 o'clock in the morning and says, "So, uh, we have an upgrade running right now. I know it wasn't planned."
"Do we need to do anything?" And it's like, well, let's just watch it and see. And nobody noticed, except for the change control folks, who were like, hey, we have to document this. >> Um, and so I think that's really a testimony to how well the system works, right? When you can do non-disruptive upgrades like that, no one really notices, which has been really good for us. We, like many large research universities, are very decentralized. So there are IT units across campus that do their own thing, that run everything from MooseFS on whitebox hardware up to maybe some small frames of other vendors' products. And so we've been encouraging them, as they've seen the reliability and stability of our storage platform, to buy in. We have a model where our customers can contribute cost as we either do life cycle upgrades or buy an expansion frame, and they get a set amount of capacity for 5 years, the life cycle, and then that's theirs. We put them in our environment, and they get to take advantage of all that non-disruptive goodness, high performance, and the ability to move workloads between tiers. >> So, Marissa, continuing on with that thought: talk to me about some of your biggest challenges as you're trying to deliver differentiated, resilient, secure solutions, and how partnering with NetApp has helped you do that. >> Yeah, I would say our biggest challenge is really with the research data, right? The enterprise data is a pretty well-known problem for us. I mean, there are white papers you can read, there are lots of customers that do that today. You just bring a little bit of money and it works, right? The research data is another story, because it's so spread out, there's so much of it, and the scale of it is so huge. And so, to that end, we're really doing two things to try to address that problem.
So one, we went out and built a very large StorageGRID install that we are in the process of moving our customers' data to. It actually has a FabricPool sitting in front of it, so you can get to it as file data or as object data. And then we have a kind of subsidized model where customers get a little bit for free and can buy more at a reasonable rate that's somewhat subsidized, and it can grow essentially infinitely as long as we can secure the funding on the back end to keep that subsidy up. That's just kicked off; it went into production this year. We've got some pretty good uptake; a lot of atmospheric data is going onto it right now as we speak. So that's one thing: having a repository that we can manage at large scale and apply some policy to. And that policy goes even further into identifying our data, which I think is really a big problem, right? We don't know what data we have. To that "never deleted a file" point, that kind of thing. We're actively looking at what data we can archive. We have this whole huge set of data, which I like to call write-once-read-maybe, that you have to store for compliance reasons or sponsor reasons. And so we're looking at coming up with an archive solution that we can literally just move it to, get it off of our NetApp infrastructure, and not have to worry about it long term. >> Well, that's great. As we're talking to other institutions about research data, a common theme a lot of institutions are dealing with is this emergence of big, massive data lakes, or next-generation data lakes, and whether they're meeting their compliance requirements, or whether there are compliance data sets in those data lakes. That's a really interesting topic. Thanks for sharing. How about you, Sean?
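For readers curious what the setup Marissa describes might look like in practice, here is a rough sketch of attaching a StorageGRID bucket as a FabricPool cloud tier in ONTAP CLI terms. Every name, endpoint, and threshold below is hypothetical, and exact commands vary by ONTAP version; treat this as an illustration, not her team's actual configuration.

```shell
# Register the StorageGRID endpoint as an object store (all names illustrative).
storage aggregate object-store config create -object-store-name sg_cold \
    -provider-type SGWS -server grid.example.edu \
    -container-name research-cold -access-key <key> -secret-password <secret>

# Attach it to an existing aggregate, making that aggregate a FabricPool.
storage aggregate object-store attach -aggregate aggr_research \
    -object-store-name sg_cold

# Tier cold blocks of a volume to the grid after roughly a month of inactivity.
volume modify -vserver svm_research -volume climate_data \
    -tiering-policy auto -tiering-minimum-cooling-days 31
```

The same StorageGRID bucket can also be reached directly over S3, which is one way the "file data or object data" access she mentions can be delivered.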
>> A lot of the same stuff. Most of our data is research-based, so it's up to the funding agency and how they tell us we have to store it. Whether it's weather radar data or the Antarctic (we're getting ready to start an Antarctica project), getting everything in line has just been super... it's a dead... I am getting a dead spot really bad. [laughter] But it's been quite helpful, and the researchers have started to really notice a difference. That's been the big thing: they've realized that, oh, I can spend a little bit more and get all this extra stuff and make my life 10 times easier, or I can do what I used to do, and if it goes away, it goes away, unless they tell me it needs to be backed up, which half the time they do and half the time they don't. >> Yeah, we'll just use this mic. No, that's great. And you said something really interesting about setting up infrastructure in Antarctica. If you need someone to bring pizza while you're setting stuff up, I will come with you and help. That sounds like an awesome thing. >> I've already asked the researcher if they need somebody to go to McMurdo for this project, because I'm the first person that'll volunteer to go. >> I'll go with you. That's great. All right, how about you, Michael? >> So for Western Oregon, historically, I think one of the biggest things we've done is this: we had our LMS hosted locally and we didn't have the performance we needed in that area, so we moved to an AFF system, and at that point we got about a 10x performance increase, which for us was really significant. More recently, we've moved our VDI workloads onto NetApp storage, and that was really critical for us, especially during COVID, when everyone went home, when students were learning remotely, when employees were at home. We were able to scale that environment very easily and very rapidly with really no performance hit.
We could deliver applications very seamlessly, we could deliver access very seamlessly, and we could do it all securely. So that worked out really great for us. >> Yeah. Um, for us, being a research institution, we're also very spread out. Every professional school has their own shadow IT, and file services were scattered everywhere. So that was the flexibility NetApp brought to us. What we offer is a full chargeback model for all of our storage; we actually sell it back to the clients. It's all blue dollars on the back end, but the first rule in data management is always make them pay for it, and they start to take care of it a little bit better. So with the flexibility from the NetApp: Storage@Yale is what our file services are called. The top tier is the active tier, with full mirroring between the sites. We were able to keep the price down on that with StorageGRID: we've got a big A700 in front, with 11 petabytes of StorageGRID behind it via FabricPool. Then we've got the workspace tier, which is just that, with no mirroring, and a whole archive system built on NetApp around that. So the whole flexibility part of it helps us keep the price down and makes the researchers happy, because they don't want to spend any money. >> Well, that's great. So it's becoming very clear to me that each of you plays a very critical role at your institution, not only in empowering our next generation but also in managing the infrastructure for key research that's happening. So talk to me about how your role ensures that you've got the data protection for maximum uptime, and that you can share data and information securely across infrastructures, or even with other institutions you share data with. Steve, we'll start with you. >> Okay. One of the secret-sauce capabilities NetApp has always had for us has been the snapshot capability.
Everybody's mentioned the functionality there: the ability for the end user to right-click, go to Properties and Previous Versions, and do their own restores has saved the help desk and my team uncountable hours. We use SVM DR for our primary administrative data protection. So we SVM DR the entire VMware environment to a second data center. In the event of a failover, it's just one command, the relationship is broken, and the VMs come up on the other side. We've just implemented the, well, we did it before it was called BlueXP, the old Cloud Manager, cloud backup service to protect that same data offsite. For the file services, we mirror those 11 petabytes from Connecticut down to Virginia every hour. So all of that data is there if we ever became, you know, the smoking hole; the data is down there and somebody can bring it back up. >> Oh, great. So you mentioned SVM DR. Do you mind telling us what that is, for anybody in the audience who's not familiar with SVM DR? >> Yeah. So basic SnapMirror at the volume level just mirrors the data in one direction and gives you the mirror on the other side. SVM DR gives you the entire SVM instance. It's really good if you've got large data shares, because SVM DR carries all of the IP addresses, all of the share information, and all of the export policies. So when you break that, the other side comes up and it looks exactly like your primary side. For VMware, or even our file shares, that's really important on the other end: we don't have to re-mirror anything or re-IP anything; it just comes up with those IP addresses. >> Great, thank you. All right, Michael, back to you on ensuring maximum uptime and sharing data securely. >> Yeah. So for Western, we've been a NetApp customer for over 15 years, and our platform and our system have just grown over time.
You know, we started with a FAS system because we just needed disk, and then eventually we needed high performance, so we got an AFF system. Well, now we have between 60 and 70 RU worth of disk and controllers, and it's large. It takes up a lot of space, a lot of power, and a lot of cooling. So we're looking to the future at the C-Series, which I know has been talked about a lot during the conference. The possibility for us is to take that much space down to probably 9 RU of disk space, still high performing, still plenty of capacity. That's a game changer for us; that's three or four racks' worth of equipment turned into, you know, 7 RU. To be able to put that in and use the data fabric to move that data over seamlessly, like folks have talked about, that's great for us, and it would even allow us to do DR and business continuity if needed. >> You know, Michael, I just want to comment. That's a really cool outcome we're seeing a lot of higher ed institutions look to achieve with some of our new technology solutions: the reduction of data center cooling and power requirements. This whole notion of, hey, we need to look for more sustainable solutions, to not only enable our next generation, but make sure we've got a planet for our next generation as well. So it's great to see that you're actually enabling higher degrees of sustainability with this consolidation effort. Thank you. >> Yeah. >> All right, Sean. >> Um, a lot of the same stuff. When we first got our NetApp, we only had one, and so we were just sshing off to our backup site. But then our federal partners started getting NetApps, and that allowed a little bit better collaboration with them within the building, because our data centers are next door to each other. [laughter]
And so that helped with that communication. In the very recent time frame, the university as a whole has actually decided to start migrating to NetApp off of Dell (or, sorry, bad word [laughter]). So right now I'm basically waiting with bated breath for them to get their systems up and running, so that instead of just having an SSH or a Globus connection to my backup sites, I can start doing SnapMirror to their data center in Tulsa, which, for those of you who are not familiar with Oklahoma, is basically on the other side of the state from the university's main campus. >> All right. Thank you. Marissa? >> Sure. Now, this is a really good question. At least in our environment, I was talking about how we consolidate around campus: we've been bringing our campus customers into our centralized services, and we're doing the same thing in central IT as well. So we're taking advantage of the ability to build larger clusters and put disparate workloads in the same cluster. Today there's one cluster in our primary data center, with eight or ten nodes, which covers our VDI infrastructure. We have a very sizable VDI infrastructure as an engineering institution: about a thousand VMs available for students, running all sorts of engineering software that you can't easily license to an end user and that requires GPUs. So we have a very large GPU-enabled VDI infrastructure that runs in the exact same cluster our enterprise applications run on, and it also has some research data on it. Just a couple of years ago, we convinced our HPC teams to stop running their own file storage for shared binaries and certain sets of data, and to let us do that for them.
So we have all that converged into one cluster, and the ability to move data from one tier to another relatively easily has been a huge win for us: being able to use that infrastructure effectively and take advantage of the right tier of performance for the set of data. That's also helped us convince other customers to come aboard, because they see that, oh, if I need more IOPS, you guys can move it for me and I won't even notice. So that's certainly one thing. In terms of sustainability, we look at trying to make the best use of the resources we have. I don't want to go too far down that road, but we have a very interesting data center where we've done a lot of work on the back end. We have a great PUE because we run at a very high internal temperature; our experience has been that the equipment runs just fine at 80 or 82 degrees, which has been helpful in saving on the power bill. And there are some other things we've done along those lines, along with heat reclamation, but that's another story. >> No, that's great. I mean, it's all part of the sustainability story. So we're going to keep the mic with you for now. >> Fun. >> But, you know, just before the session I was chatting with Michael, and Michael said, you know, I'm a security guy, I didn't think I'd get a lot out of this conference, but, wow, there's so much going on here at Insight about security. And so, let's talk a little bit about security. >> Please. >> Yes. So, what are some of your biggest security challenges, and how has NetApp helped you maintain cyber resiliency and protect against some of the sophisticated threats we're seeing, from ransomware to spyware to nation-state attackers? >> Yeah.
So that's a fantastic question, and one we've lived out fairly recently in terms of how we've been preparing for it. In the run-up to the war in Ukraine, there was fairly good intelligence and widespread belief that large federally funded institutions (we have an FFRDC that's affiliated with Georgia Tech; we do a lot of Department of Defense research, a lot of government research) were going to be targeted with cyber attacks. So, trying to get ahead of that, we did a crash effort to look at protecting ourselves from ransomware even better than we already did. We looked at the top 30 critical applications we need to run the business of the institute, went down the list, and determined whether or not each one was covered by our ransomware protection. We made a couple of interesting discoveries. One of them: about half the applications on that list were SaaS applications. I know this is a bit off topic, but I have to share it because it's important. We had just assumed that your SaaS vendors are protecting your data. Well, they say they're protecting your data. Our experience was: yeah, they're protecting it from their own disaster. They're not protecting it from you getting attacked. And so we had to do some things to make sure we're protecting that data, or get the vendor to do a better job of protecting it for us. So that was the first thing we did. The second thing: we looked at our virtualization infrastructure, because almost all the rest of the applications ran on it, and we asked, okay, we have the ability to protect this data from ransomware, but if we get hit and everything gets taken out at once, how do we get it back in a timely fashion? We started doing the math, and we were talking about weeks and weeks of recovery to restore from backup.
And so we did a crash effort to stand up a Cloud Volumes ONTAP (CVO) instance in Azure and start replicating data offsite, taking advantage of SnapVault and SnapLock to protect that data so we could do a rapid turnaround should we have a major event. CVO was fantastic for our needs because it allowed us to stand it up very quickly. It was also very expensive, which is why we went and bought a frame to put on site. That took some more time, so we implemented that exact same infrastructure in our secondary facility, and today we replicate the data there and protect it there, on our own on-prem hardware, because that cost model works much better for us. So there we go. That was fun. >> Awesome. Sean, security challenges? >> Um, a lot of the same stuff, honestly. In particular, we became very concerned during the run-up to Ukraine because we are co-located with numerous federal departments, so we knew that might put a little bit of a spotlight on us. Getting everything to our off-site and making sure that replication was happening in a timely manner was imperative, and it really showed, and it gave me more than enough ammunition to go to my director and say, "Okay, so we're going to get a second one, right, so I can do this correctly." Because, being just a department in a major research institution, sometimes you've still got to convince the people. Between that and, honestly, one of the biggest things for a lot of our data has been end users bringing in personal devices, or VPNing in from their personal devices, because they need to be able to do their research and weather doesn't stop.
Weather happens 24/7. So working with the NetApp storage and our backup vendor to get immutable backups set up of our VMs and of all the various research data sets we have has ended up being most of my last year, but it's put us in a much better place. >> All right, thank you. Michael? >> This is dangerous. You ask the security guy a security question and he can go on for hours. What do I choose? So I think, when it comes down to it at the end of the day, what we're really talking about is our data, right? You can have a cyber incident, like we had years and years ago, where we lost hundreds of machines to a ransomware attack but no data was stolen. That's a huge pain in the butt, right? But there's no data stolen, so ultimately you can fix that. But the minute a piece of sensitive data leaves your environment, you are screwed, because you can't get it back. Even if you pay the bad guy to get it back, is there really honor among thieves? Your data is the single most valuable thing you have, and the minute it's gone, it's gone, even if you get it back. So that's really what we're talking about at a storage conference: it's your data. That's the thing you have to protect. And so for every cyber risk I could talk about, it's really your sensitive data that has to be protected the most. So, everybody wants to talk about ransomware.
There's, you know, four-directional ransomware; if you haven't heard about that and you want to know more, ask about it later. But I think our biggest success in this area came from a very organic process. We had a firewall that was logging to the tune of about 25 million records a day, and the appliance could only hold two and a half days' worth of records. You can't do a forensic investigation on two and a half days' worth of records. So we figured out how to cram that into a database running on a NetApp volume. Well, after a couple of months, we realized: that's a lot of data. What could we do with it? So we started mining it, harvesting it, correlating it together. And we built this cheeky little program called Redwolf. Western Oregon's mascot is the Wolves, and so Redwolf is our security tool. We joke that Redwolf never sleeps: it runs 24/7/365, and it's always looking for attack patterns and malicious activity. Then we got it to the point where we could actually weaponize it to react in real time to attacks that we saw. I can't tell you how many times I've gotten a text that said, "Oh yeah, we just took 200 brute-force attacks on our VPN from Iran, and we shut that down in 90 seconds." Right? That's a testament to a piece of software we wrote ourselves, based off of a massive data set that was there on our NetApp appliances. Moving forward, we're really looking forward to autonomous ransomware protection. We don't have it in place today, but we're really excited to see all the things it can do. And then tamper-proof snapshots give us one more level of security as we build layer upon layer. As most of you know, there's no silver bullet; it's just lots and lots of pieces. >> Yeah, Michael, that's a very meta story: using the data to make the data more secure. I love that story. So, that's great. Steve, how about you?
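As a toy illustration of the kind of correlation Michael describes (not Redwolf itself, whose internals aren't public), the sketch below mines a made-up firewall log for a brute-force pattern: count authentication failures per source address and flag any source that crosses a threshold. The log format, addresses, and threshold are all invented for the example.

```shell
# Build a tiny fake firewall log (format invented for illustration).
cat > /tmp/fw.log <<'EOF'
2024-05-01T10:00:01 DENY vpn-auth fail src=203.0.113.7
2024-05-01T10:00:02 DENY vpn-auth fail src=203.0.113.7
2024-05-01T10:00:03 ALLOW vpn-auth ok src=198.51.100.4
2024-05-01T10:00:04 DENY vpn-auth fail src=203.0.113.7
EOF

# Count failures per source IP and print any source with 3 or more:
# a crude brute-force signal a tool could then act on (e.g. block the IP).
awk '/vpn-auth fail/ { split($NF, kv, "="); fails[kv[2]]++ }
     END { for (ip in fails) if (fails[ip] >= 3) print ip, fails[ip] }' /tmp/fw.log
```

Run against the sample log, this prints `203.0.113.7 3`; a production version would stream records continuously and feed a block list instead of printing.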
For us, it's the snapshots and the immutable snapshot capability. We've recovered from two virus attacks and one ransomware attack just by being able to recover the data. Once InfoSec said the end users and their workstations were clear, they told us what date to roll back to. We just reset the volume, and they were back online and good to go. Our information security folks have a dedicated HA cluster, I think it's an 8300 or 8200, where they run a Docker/Trident environment to do all of the file auditing and so forth, like your Redwolf. That's their dedicated environment, just for that, so they can do those scans themselves. We've tried many different ransomware tools. A lot of those look at patterns, or they look at file extensions, and we haven't found one that works yet. We're getting ready to bring Data Sense in to start categorizing things. But at a research institute, researchers will use every extension you can think of and make up their own, and there is no pattern to the way a researcher writes data. So every ransomware tool trips, and then you start whitelisting things, and if you whitelist too much, everything will get through anyway. So we haven't figured that out yet from a storage perspective. They're doing all the scanning and so forth more at the network layer. We're also looking at implementing some more zero-trust-level protection. >> That's great.
You know, it's interesting: you talk about all your researchers ingesting all this data that you're storing, and that you're looking at Data Sense. I think a lot of our higher-ed institutions are looking at Data Sense to help them rationalize and better understand those data sets. What kind of data is in them as the researchers put it in? Is there compliance-related data that someone at the institution then becomes responsible for? Data Sense is a tool that can help with that, so it's great that you're looking at it. The next question is just for you, Steve. Steve has been a great customer for over 25 years, so my question, Steve, is: what about the NetApp portfolio, or the partnership, has kept you such a loyal customer? >> I actually had a sales meeting with a company that will not be named two weeks ago, and they asked me a very similar question. They said, "So you've been with NetApp for over 25 years. Where does the loyalty come from?" And I said, "First, it came from the confidence that I haven't lost a piece of data." I can do upgrades without anything failing. When I used that vendor's equipment previously, as I told that sales group, we never had a successful upgrade. Every time we would do an upgrade, we had to get support on the line to bring a node back into the system. It just never worked, for four or five years. The faith that the system is going to be there and continue to run is huge. The other part, and I'm a total fanboy, has been the people I've learned from and met with for 25 years at NetApp. I know who to call, and they know they can call me to try new stuff. The support staff has been phenomenal. I've been through several SAMs over the years, but you get to know them, and they know your environment.
My sales engineer has been with me for 14 years. That's unheard of with any vendor, right? Every two years, somebody gets a new sales engineer or a new salesperson. My sales lead has been with me for, what, seven years now. My sales engineer helped me design; I came to Yale to set up NetApp VMware on NFS, and he was there day one. So he's watched the entire progression from 7-Mode to our cluster, all the way to this ginormous file-sharing storage environment at Yale. So it's the people, it's the relationship, their understanding of my environment. My sales engineer knows what to bring me and what not to bring me; he doesn't sell me, or talk to me about, something he knows I'm not going to need. >> That's a great story. Again, Steve, thank you for the partnership. That's awesome. All right, Michael, you mentioned zero trust. Tell me a little bit about, as you look at the NetApp portfolio and NetApp solutions, what you're looking at in that stack to help you with your zero-trust journey. >> Yeah, great question. We've gone through a lot of transition recently, leadership changes; our admin of 15 years just recently retired. So we're really looking at the entire stack differently: what's new, what's available. Storage virtual machines are one of the very first things we're going to look at, just to make sure we can keep the data separate and keep it secure. And I'm really excited about multi-admin verification. It reminds me of the old nuclear silo, you know, two people, two keys. As the security guy, I just love that picture: you can't just delete volumes, you can't just delete data, even if the admin's account got hacked. That's not going to work. So that's really exciting for me.
And then in the new architecture we're looking at as we refresh our systems, we're looking at self-encrypting disks. That's just one more layer we have to keep things secure. >> That's great. Yeah. Thank you. So, Sean, you've talked about the massive data sets of atmospheric data that you're storing, and about your geographically distributed data sets; I mean, data out in Antarctica, that's pretty cool. So as you consider those complexities, how is NetApp enabling you to manage those massive data sets and those distributed environments? >> I want to geek out for a second as a scientist, but it's been imperative. Looking at all of our various weather data sets, every other platform we've ever looked at can barely handle them; you might as well just put them on dumb disks. NetApp has basically been the only storage that handles it all: real-time radar data from radar trucks observing an active tornado, sent back to the weather center, processed in real time, and distributed in real time; the data sets from Antarctica that we're about to do; seasonal data sets. It doesn't balk at anything. It works. And on some of our data sets, NetApp is the only one I've ever used that's been able to actually dedupe anything at all. So it's been tremendous. As a scientist doing storage admin work, it's made my life a lot easier, because with a lot of the stuff I do, I'd be running around like a chicken with my head cut off if it hadn't been for NetApp. [laughter] And you mentioned sales engineers. The team I've been working with pretty much since 2015 has been invaluable. I mean, they want to understand my data.
They work with me night and day to make sure everything's working the way it's supposed to, and it's gotten to the point where, out of all the units in the National Weather Center, I'm the only one who hasn't lost a disk in the last five years because of a power outage. >> That's crazy. So again, you know, Steve, Sean, thanks for the call-out. The NetApp team has performance reviews coming up, and that's going to help, so thank you for that. Marissa, you've talked about sustainability and about research, and one of the things we see as we talk to education institutions is that it's the research departments, the IT research departments, that are being tapped first to start building out those AI infrastructures. So talk to me a little bit about how you see NetApp supporting the development of your AI infrastructure and supporting you in your sustainability efforts. >> Sure, happy to talk about that. First, I do want to mention the command-approval thing. My team turned that on a couple of months ago, and they didn't tell me. I mean, we had talked about it, and then I start getting these emails. I'm like, wait, is something wrong? Oh no, we expect that. Okay, fine. So it works. >> You didn't know that you're the other one turning the key, right? >> Yeah. Well, I didn't know I was going to be on the list. That was the real problem. But going back to the research side, especially around AI and large data sets: we have a couple of things going on. I was talking about that large research data store, which we call Cedars; I can't remember what the acronym stands for at the moment. One of the ideas with Cedars is to collect all this data from the research community, but also make it available everywhere, right?
So Cedars is not only available from a researcher's desktop or their lab, it's also connected to our HPC installation. So you can take the data you uploaded there and work on it directly from our HPC infrastructure, which is really pretty cool, and then take that exact same data set, access it from a workstation in your lab, and do some visualization on it. It really speeds that workflow up and makes it much quicker for our researchers to work with their data. Speaking of AI, though, it's kind of interesting; I can talk a little bit about this, but we're still in the very early stages. The College of Engineering has done, and I'm going to call it a pre-buy, because we have this habit of buying technology that isn't shipping yet, and it usually works out, a purchase of a large number of GPUs to power an AI maker space for the college. The idea is that every student in the College of Engineering is going to get access to this resource. They can request access, they'll have a home share, they'll have some space with data to train models, and they'll be able to go and use it, just like you use the maker space down the road where you have access to all the tools. And the idea is, we'll see what our students come up with. It's a really cool idea, and a bunch of it is underpinned by NetApp. The plan today is that all the user data will sit on NetApp, and we're hoping to have that rolled out probably by the end of spring; we'll have to see. It's still very early, but it's a really cool concept, and I'm glad we're getting to do it. And we've got a place to put all that data and share it. >> No, that's great. I've got to assume it's going to help attract students. I mean, that's a very exciting thing.
>> Yeah, my worry is when students start using some data to train something to try to crack our systems, because they have way too much time on their hands at times. >> Ethical AI. Very good. >> Yes. >> So again, thank you, panel, for your words today. I want to give everybody a chance for final thoughts: any distilled, hard-earned wisdom you can share with our audience as we close out today? >> I think the thing I'll share goes back to the data protection stuff, and I've said this before and I'll say it again if you ever hear me talk anywhere else: it's never too late to start, and every incremental thing you can do to protect your data is worth the effort, in my opinion. So often, especially in higher ed, I see my peers and even others on our campus say the problem is too big, we can't solve it. Just because you can't solve it doesn't mean you shouldn't get started and do something. It's like the old adage about the two guys hiking through the woods who come across a bear, right? Yeah, I see heads nodding. They start running away from the bear, and one looks at the other and sees this guy pulling on running shoes while they're running. He's like, "You can't outrun a bear." And the second guy says, "I don't have to. I just have to outrun you." >> I love that story. Right. Sean, final words? >> It's been mentioned a few times, but it's kind of Field of Dreams: if you build it, they will come. Researchers are set in their ways, but if you give them a reason and show them the benefit, they will come around. They will accept a new way of doing things. Some of them take a little longer than others. Some of them you may have to tell, "No, tell your funding agency. This will actually help."
But they will come around, and they will see that the benefits outweigh any discomfort of having to move some of their data from the way they used to do things. >> Thank you. Awesome. Michael? >> Yeah, I think it comes down to your data: know where it is. You can lose a lot of stuff. I think higher ed is one of the hardest types of organizations to secure. It's an open environment: people can walk in, researchers and faculty expect instant access to everything, and you have students who will install anything. It's a very difficult environment to secure, but you don't have to secure everything, right? If you know where your critical data is, you know what it is, and you can secure that, then you can work your way out from there. >> Great advice. Last but not least? >> I'm going to stick to that data protection theme. Data protection is what lets you sleep at night, knowing that you can recover, whether it's a single file, or your failover capability is tested and proven. You know you can fail over if something happens. In all of our positions, the last thing you want is an RPE, right? A resume-producing event. So it's the ability to recover that data and show that the data is there, even if people can't get to it right away. We may be able to fail over to our secondary site immediately in one command while the VMware environment has another issue. But I've presented the data; the data is there. Once they get to it, they'll work. My job is done, and the data is protected. >> Love it. We always called it a resume-generating event, but you get the idea. So, we've got time for one or two questions from the audience. Do we have any questions from the audience? All right, in the back.
>> [Audience question, largely inaudible, about funding the infrastructure.] >> Right, let me repeat the question just so we can get it on the mic here. I think the question was about managing the funds and budget for the infrastructure. Any words of advice in terms of how to make that work? Anybody want to jump at that? >> Well, first off, we are a full chargeback model, granted it's still subsidized. But whether it's the VMware and Oracle world, or Sally in the French department, or Professor Sean, everybody gets charged across the board. That at least presents ownership to the end user, and it gives us a showback for our management that says: this is what's actually being used, and when I come and ask you for funding, you've gotten some money back. Not all of it, but I can show the actual capacity usage and the growth through that. >> I think, from Western's perspective: we have 3,500 students. We don't have the same economy of scale as some of our friends here, so we've had to do things like repurpose hardware to meet a critical need instead of buying new hardware, or write our own software instead of spending $4 million on something. For us, you've got to think really creatively, you've got to innovate. I love the phrase "innovate or die," and I think we have risen to that challenge. >> Like you said, it's innovation. You've got to use the cards you're dealt and try to do what you can to make it work. Eventually the higher-ups will catch on, [laughter] though sometimes it takes the better part of a decade, but eventually the higher-ups will catch on. >> Okay, I have a two-part answer. Part one is: don't be afraid to push back on your vendors. The push to opex from capex does not work in higher ed.
At least not where I work, and not for anybody I've talked to. Right. >> I don't know what that was. >> Look at that, the ghost in the room doesn't like what I'm saying. Okay. [laughter] >> Yeah, the ghost of opex is fighting back. >> Yes. There you go. So don't be afraid to push back and tell your vendors, "This doesn't work for me." I've had to do that many times, and it doesn't always get me a solution, but at least I can have the conversation and say, "You need to sell me something I can pay for now, not something I have to pay for over the next five years, because I don't have recurring funding." So that's one. The other is something we've done that I would encourage other institutions to take a look at: find a way to make things work for your customers. We've had really good luck with this idea of bundling purchases, doing one big purchase, letting a unit buy in up front so they don't have a chargeback every year. We do chargeback services, but a lot of units have the same problem we do. So if we can do a five-year purchase and they can get five years of storage or five years of compute, that's a model that works, and it allows us to centralize and bring things together. I think that's a fairly creative way for us to do it. It's a little stressful sometimes, trying to track all that, but it's got value. So thank you; that's a great question. >> All right, we have time for one more question. Go ahead, here. >> [Audience question, partially inaudible, about selling central IT's services.] >> Let me just restate the question so we can get it on the mic. The question was: how do you sell that infrastructure? How do you sell what central IT is doing to the departments and colleges you work with? >> So that's the $64,000 question, right? For us, it's been a long time of building trust, showing things work, and showing that we can make it work for them. In our case, I came from one of the units on campus, right?
So I was trusted when I showed up in central IT. I haven't burned all that trust yet; hopefully, after seven years, there's still a little bit left. And I've really tried to pick somebody who is willing to try it, but is also known to be open about whether things work or not on campus, and make them my first partner and make it work for them. That's how I've done it, and I just build it very slowly. This is not a fast process. We've got about half the major units on campus participating in some form, and I've been working on this for seven years, right? We've probably got another seven, or ten, or twenty, and by then who knows what we'll be using. That's how it's worked for us. >> Right. Anybody else have anything to add? Marissa? >> I'm not part of our central IT group, so I'm not the best person to answer this. I mean, if anything, I'd say overcommunicate. We have in some cases gone department by department and sat down with every single department on campus to talk about: here's the change, let me show you, let me walk you through it, let me answer your specific questions. That takes a huge amount of time, but it builds a huge amount of trust. >> Show the value. We were able to show the value of the central file services versus all of these shadow IT groups trying to run their own file services and so forth, and the resiliency and the recovery we've been able to demonstrate. We're up 100% of the time, and they're not.
You know, we're central IT, so we're doing everything else and we're running these file services, whereas they've got postdocs trying to do it within their departments, with all the instability that brings. So the centralization has gone a really long way. And we're able to buy in massive quantities and keep the price extremely low, versus them going out and doing onesie-twosie kinds of purchases. That's how we've sold it. >> All right. Marissa, Sean, Michael, Steve, thank you so much. Let's give our panel a big round of applause.
University technologists are critical to our future, from empowering the next generation of ideas to enabling the real-time use of data in climate-change research or repelling hostile cyber-attacks. IT experts will explore their biggest [...]