Well, all right. I guess we'll go ahead and get started here. Uh, welcome to day three of NetApp Insight, and I appreciate all the stalwarts who actually made it to a 1 PM session on day three. So, I've been privileged to work side by side with the Lumen network storage engineers for roughly five years now on the development of the LNS platform, as their NetApp technical resource. And I've got to tell you that in the time that I've been at NetApp, and in fact since I first learned about the power of NetApp ONTAP SnapRestore in 2003, I don't think I've been as excited about a product. Um, and the reason is that Lumen literally leveraged every piece of the NetApp portfolio, and they have actually driven more innovation in our product portfolio as a result of their technical team's RESTful automation expertise, and just everything that they've done to take a whole bunch of geek speak and turn it into the most dead simple, beautiful portal on the world's most capable network. And we now have this ability to offer a comprehensive, unified product that is candidly unmatched by any of your competitors. Um, and this is a message that needs to get out. Before we get started, we have our confidentiality notice. There's practically nothing in this session that we can't speak about publicly. It's all there in the Lumen portal or on the NetApp website. And in fact, this session is being recorded and will be available on NetApp TV, I think, x number of weeks after the conference. So thanks for that. And Rob, just administratively, Rob's got a microphone, and so what we'll do is, as questions come up, Rob will bring you the microphone, if that's all right. Thank you, sir. Yeah. So just basically, you know, the products that NetApp and Lumen have jointly brought to market, both as customers and partners of each other, are just incredible.
And if you can indulge me for just a second with a story: you know, NetApp received a DDoS threat from a bad actor a while back. And the Lumen network is literally a sensor capable of detecting these things coming. So, you know, they didn't call and say, hey, do you want to increase your bandwidth? They just fixed it for us and called to let us know: hey, we've temporarily increased your bandwidth to help you mitigate this threat so that you don't need to pay a ransom or, you know, deal with that. And that's the kind of partnership that we have. So, just truly incredible. So, I'm thrilled to be here with Frank Kenny from Lumen to present this awesome technology to you. >> Well, Todd's been working on this for five years with Brad Lewis and our product team. I've been working with NetApp collaboratively on the financial analysis piece, supporting the engineers on our side and creating that innovation on the value statement. So I like to hold my own against these engineers and architects at both of our companies, and try to match wits with some smart people technically, by aligning all of the features and functionality of what this product offers to the enterprise value of our customer. So that's been a project that we're still innovating on for the last two and a half years, since I've been with Lumen. >> I'm equally grateful to be alongside Frank Kenny here. So before we get started, Rob, if I could, I'm going to ask a question, and I want one or two people to provide us an answer. You signed up for the session, right? There was intent. I want to know: what do you hope to achieve, and why does that matter? >> So I acquire partners on the indirect channel to sell our services. So one of the key components would be LNS, especially for system integrators and such. So, understanding what our future is together with Lumen LNS and NetApp, and those capabilities, and how we can differentiate in the marketplace together with those partners. >> Thank you.
>> Perfect. No, that is perfect. That's a great setup there, and I think that's exactly the message that we need to get out. And you know, this is a message for customers, but it's also a message for our internal sales teams at both NetApp and Lumen: that we built a product jointly that is entirely based on the NetApp portfolio. Which means when customers tier data out of their existing on-prem systems into an object store, it's a lot better for you, the sales rep, if you're dual-comped on that object store, and it's a lot better for you, the customer, if you're not paying for a FabricPool license, because you're using a NetApp-native-to-NetApp-native technology underneath the hood. It just happens to be a hundred times simpler to implement because of Lumen programming expertise that takes 52 NetApp products that are really complicated to use individually and makes them into a single portal. Five clicks, your customer's data, whether it's the 78% that's cold or the 20-some percent that's hot, you're getting comped on all of that data. So, I think that pretty well covered our agenda slide. [laughter] Uh, but yeah, I think, you know, the one other key thing is, we will take you through both the TCO side of that as well as the technical side of that. So that by the time you leave here, you should feel completely confident that customers who don't want to go through the hassle of building their own object store to gain cloud-level economics don't have to. So Frank, why don't you tell us a little bit more about the overall Lumen platform? >> So as we think about the purpose of what Lumen is doing, right? So Keith reminded me, right, our network is our foundation, as we were kind of prepping here. And what that does is enable us to unleash the world's digital potential. So what we do in there, there's a whole heck of a lot, 30,000 team members, that are engineering, architecting these solutions and driving value for our end customer.
And that ties directly to our mission. We are igniting business through connecting people, data, and applications securely, quickly, and effortlessly. So, within this portfolio of technologies and solutions and managed services, today we're going to drill into network storage and how we're driving that value to our own customer. >> Yeah. And a key point here is when we talk about network storage, we're talking about Lumen's edge bare metal offering being able to access that. We're talking about that being available both in the Lumen core but also on your customer premises, and we'll drill into that a little bit deeper here, but I wanted to kind of segue into that, if you can kind of speak to it. >> So within the network, right, that enables now 180,000 buildings that are on the fabric network, and that also enables 2,200 data centers. So, as we'll touch on, part of the differentiation is, if the glass can't meet the latency requirement, we can push along that whole network fabric, we can push the data closer to, and Todd mentioned this, consuming and utilizing that information closest to those application workloads. >> Exactly. And here's how that looks today in terms of, you know, if there is a core site available close enough to the city that you need. Fantastic. You can go in real time to the Lumen network map. You can see not just where on-net fiber exists but where there is an actual Lumen network storage core location. You can find out where edge bare metal is located. And the other thing you can do is you can actually click to say, hey, show me the regions that are overlaid in less than 5 milliseconds of latency. Meaning I can send the data to Lumen and I don't have to host it on-prem, because it's fast enough. But then what about those situations where it's greater than 5 milliseconds latency? Here's the great news.
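The map filter just described, show only the regions under the 5-millisecond threshold, boils down to a simple lookup. The following is an illustrative sketch only; the site names and latency figures are invented, not real Lumen network data.

```python
# Sketch of the "show me regions under 5 ms" map filter described above.
# Site names and latency figures are invented for illustration, not real Lumen data.
LATENCY_THRESHOLD_MS = 5.0

core_sites = [
    {"name": "Denver", "latency_ms": 1.8},
    {"name": "Chicago", "latency_ms": 3.2},
    {"name": "Rural-Site-A", "latency_ms": 11.4},
]

def reachable_cores(sites, threshold_ms=LATENCY_THRESHOLD_MS):
    """Return names of core sites close enough to consume storage remotely."""
    return [s["name"] for s in sites if s["latency_ms"] <= threshold_ms]

print(reachable_cores(core_sites))  # Rural-Site-A exceeds the threshold, so it drops out
```

Sites that fall outside the threshold are exactly the candidates for the drop-shipped, on-premises deployment the speakers turn to next.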
Lumen will drop-ship the storage to you and put it closer to your customer. We're going to show that in a demo today, where we're working in kind of a hostile network environment that is actually a little bit distant from the Lumen network, but we're going to maintain latency levels that are just as good as if we were right on an on-net building. Yeah. The question was, uh, adjacency considerations are different than that. Yeah. And I mean, absolutely, if you're already close to the Lumen core, why would you host a stack on site? Just light up one of your 400-gig connections and send the data straight there. But as we know, data has gravity, and every now and again data gets created in inconvenient places, like a war zone or, you know, heaven forbid, these days. And certainly having that ability, you know, to advance those missions, or advance missions where we're trying to, you know, look at climate change in a region where maybe the data is going to come in over 5G and eventually make its way onto the Lumen network. So in that situation, it does not matter that Lumen network storage isn't deployed on a Lumen endpoint. What matters is that Lumen network storage will now take that data in at an ingest rate that's incredible and send it back out as best it can over the network conditions that exist. So yeah, great question. So Frank, now that we know who Lumen is and what they can do in the context of their broader portfolio offerings, I want to dig a little bit deeper into the actual storage offering. >> So, as we touched on, right, cloud-like flexibility. Lumen network storage is powered on the ONTAP platform. So, with our performance and capacity as it relates to our network, Lumen network storage, whether it's the adaptive or object tier, right, we can optimize how those tiers are being utilized within the needs of our customer.
So as I think about all those features and functionalities, where I now spend an equal amount of time is understanding what this actually means financially to our customers, and being able to tangibly show them, not just through the network map and all the coverage maps, but mobilizing that in a meaningful, tangible, succinct way so that it's easy to consume, easy to digest. And if we could, let's touch here. As you think about location and, as Todd touched on, the gravity of the data: how long do we need it to be stored in this particular tier or chunk of storage? Again, we've got the performance and capacity criteria, based on the applications or workloads or the end-customer needs. And of course, this platform is simple, it's nimble, it's flexible by design. What that means is, if I turn our attention uniquely just to the customer: in the science of business valuation, what drives value to the enterprise? Well, in the business valuation exercise, we're looking at free cash flow, we're looking at risk and growth. And if the enterprise itself is not de-risking, is not increasing its growth potential or throughput, speed, quality, and of course is not dispersing greater cash flow, then it is by definition eroding enterprise value. And so what we spend time understanding is this particular customer, and, if it's a privately held company, we'll go to the industry, get some benchmarks, and help hold the customer's hand on: here's how Lumen, on NetApp's platform, is driving enterprise value for it. And we'll have some schedules. >> Yeah, and if I could kind of give a specific example there. You know, we have retail chains, and it's a pretty competitive environment for them these days, a lot of mergers and acquisitions. So when they think about the AI challenges they have on-prem, you know, and needing to be able to bring an army of compute, network, and storage right where the data is first generated.
So I'm watching slip-and-fall video footage and making sure that I intercept that and have my security alerted, you know, so we find out first from the AI. We don't find out first from the person who complains, right? Those are the types of things, line-queuing applications, things that they're doing, and they're sitting there going, "What? I need to outlay $20 million in capital to get this capability to 1,800 retail locations?" All of which, meanwhile, are already on the Lumen network. We're on both sides of that network connection. And they're saying, "Oh man, $20 million, that makes me ripe for a takeover. How about I rent it from Lumen, have them configure it and manage it on site for me? They're on both sides of the network connection. So to the extent that my connection's fast enough, I'll leave it in their core. To the extent that it's not fast enough and I need to bring processing power right on site, I'll have them bring it to me, and I'll do that without a crazy capital outlay that makes me an acquisition target." So, that's just one example of some of the customer problems that we have, but, you know, Frank will kind of drill into a little bit more of what we see, both in terms of general and edge storage problems that Lumen is solving for our joint customers. >> Well, I love your example, right? This just came up a week ago. I mean, this happens. You know, I support the account teams with this analytical perspective, and that capex, George touched on it on day one during the keynote: with each of our clients, we have to meet them where they are, and some of them are either a market leader or a laggard, and there's no villain in this story, but we have to help educate where the awareness and perception of these moving pieces is low. Well, we'd better jump in fast.
In particular, you just mentioned the capex exercise. If there's an opportunity for us to educate in an opex model, this comes up time and time again, where you have a legacy-minded accounting perspective in the finance and accounting or procurement teams, whereby the culture is: we always buy physical assets and then depreciate them over a particular life. As engineers and as architects, we have to immediately start thinking about the constraints within their existing workflows and bring those to light, bring them into focus for us as a team, because we're collectively trying to be a good steward. Right? I like to use the phrase fiduciarily responsible individual. So every analysis that I'm performing is pure. It's objective. It's transparent by design. So too, like in this example of the capex, understanding what their hurdle rate is from a cost of capital, we can be dangerous in the room in articulating: here's the value, right, of an opex, variable-cost model instead of outlaying that big cash sum at the beginning. >> So the other thing that confuses our customers horribly is this notion of service levels, right? And NetApp is not anti-service-level. Every single hyperscaler that we partner with, except for Lumen, obsesses over service levels. What Lumen does is they say, "Hey, we're just going to provide you great service, but we're not going to make you micromanage it." So, Lumen offers two simplified tiers of storage, which is awesome: a performance tier, what they call their adaptive tier, okay? And then they have their object tier. Okay? But the key value point across all of those is, whether you're accessing object or whether you're accessing file or block, Lumen's going to deliver the performance level that you need in a billing model that is transparent. And this is one of the areas where they've really leveraged the NetApp portfolio. You know, our Cloud Insights product gets used entirely wrong by practically everyone who uses it.
You know, they use it to shame-back, they use it to, you know, basically create these incredibly complex mathematical formulas. And when we talk about our TCO calculator today, one of the things we're going to talk about is just the simplicity of going into a portal and going: this is how much you're consuming from a latency, throughput, and capacity perspective, in real time, and you know what your bill is going to look like. And they did that all underneath the hood with Cloud Insights telemetry, but they took hundreds of, I mean, has anyone loaded up Cloud Insights and been completely overwhelmed by the 5,000 default reports available in that product? Lumen said, "Look, we can boil this down to three views," and, you know, a real-time billing perspective. And so, to me, it's so much better. Uh, I actually had the privilege of working with Frank on the TCO calculator that we'll talk about in a minute. But, as it relates back to the storage, the adaptive and object tiers: Lumen is going to get the data to the right tier. If you're buying Lumen network storage, you're already tiering naturally. If you buy file and block, we're going to tier what we can tier without sacrificing latency or our service-level commitment. Um, so you don't even have to think about that if you're an all-Lumen managed storage customer. But let's say you're a NetApp on-prem customer struggling. You're about to buy a new all-flash FAS because your adaptive tier is filling up, or what we call our tier one or performance tier, whatever term you use for it. You've got a couple of choices in that situation. You could try to build your own StorageGRID. You could send it off to Amazon and pay a tax to NetApp and a tax to Amazon for doing so. Um, you could try to send it to Azure cold storage and try to figure out their mathematical formula, where for every 10,000 IOPS you pay a penny.
Every time you read it, you pay both by the throughput and the IOPS. And I mean, we had to do f-factor label math like we were in chemistry class in high school to try to figure out what Azure cold storage was actually going to cost us. In a situation where, what happens when suddenly that real retail chain I was talking about says, wait, we thought that was cold data, there's gold in those hills, let's go mine it, right? And so they perform data mining against their object tier. Yeah, we'll show you how that looks on Azure, when you realize, oh, that cold storage wasn't so cold after all. We're in trouble now. We just got hit with that bill. Um, so again, object tier, adaptive tier, both available, in a very simplified billing model. And again, all of that on a network that, I mean, these are the areas. Yeah, I think I see one little sliver that isn't less than five milliseconds on the United States map there, you know, if I look hard. Um, but this really is the punch line, and it goes back to your opening question. What is the differentiator? >> And now, if those are the concepts of the backbone and what it enables, we're going to get into some actionable insights as to what we as a team collectively can start focusing on, with some clear deliverables. Maybe we don't have names and, you know, tasks associated, but we're jumping in right now. >> Yeah. >> Uh, knowing that I was coming to this, I read up on Lumen network storage. Can you tell me the top differentiators between the object and the adaptive tiers? When would one be used over the other? >> Yeah. So the question was, what are the differentiators between the adaptive and the object tier? And I think the biggest thing is that, as a general rule, if it's in the adaptive tier, we expect it to be incredibly low latency. In the object tier, we can typically tolerate more latency.
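The billing contrast being drawn here, a metered cold tier that charges again on reads versus a flat object tier, can be sketched with toy numbers. All rates below are invented placeholders for illustration, not actual Azure or Lumen pricing.

```python
# Toy comparison of two billing models for cold data. All rates are invented
# placeholders for illustration, not actual Azure or Lumen pricing.
def metered_cost(stored_tb, read_tb, storage_rate=5.0, egress_rate=90.0):
    """Metered cold tier: pay for capacity, then pay again per TB read back."""
    return stored_tb * storage_rate + read_tb * egress_rate

def flat_cost(stored_tb, read_tb, storage_rate=8.0):
    """Flat object tier: capacity only; reads carry no separate meter."""
    return stored_tb * storage_rate  # read_tb intentionally unused

# The surprise month: 22 TB of "cold" sales history gets mined for an AI model.
print(metered_cost(22, 22))  # 2090.0 -- the read-back charge dominates the bill
print(flat_cost(22, 22))     # 176.0 -- unchanged by the read
```

The point of the sketch is structural, not the specific numbers: under the metered model, the cost of a "cold" tier depends on a read pattern no one can predict; under the flat model, it does not.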
But here's the misconception that everybody has about objects. Everybody thinks once you put it in an object store, you never need it again and you never read it again. And that's just patently untrue. And the hyperscalers are counting on you making that mistake, because that's where they're going to hit you with a bill. Lumen is counting on you making that mistake as well [laughter], and being joyful when you see their bill, because you had the presence of mind to say: ah, okay, so when I decided to take that 22 terabytes' worth of sales history from the last 30 years and look through it to start training an AI model about consumer purchasing patterns, reading those 22 terabytes is not quite as latency-sensitive, right? So it's okay to pull it out of an object store, but it still needs to have good throughput, and so Lumen can promote that data in a way that allows you to read it quickly and make it available. So I guess the best answer to your question, which is a great one, is: if you're having to think hard about what the distinction is, you probably didn't buy Lumen network storage, and you're agonizing over a hyperscaler bill. So that, to me, is the differentiator as it relates to Lumen: whether it's adaptive, high-performance, sub-millisecond guaranteed, or whether it's object, hey, we don't think we're going to touch this data, but if and when we get proven wrong, we're not going to lose our jobs over it. And the other key thing that I like to point out about Lumen's object tier is that it's on the same 400-gig network backbone that their adaptive tier is on, right? So when it starts getting to be a throughput thing, yeah, maybe the latency is higher, but you can still pull it out of there at actual gigabyte-per-second rates. Absolutely fantastic. And again, the objects are not designed to punish you anymore.
They're designed to be available when you need them, at no additional charge other than what you paid to put them in there in the first place. >> So, folks, from an engineering standpoint, and George touched on this, and I think it was Dr. Fay too, right? Data is the foundation, not the application. >> Exactly. >> Right. Or the CRM or the business analytics tool. What can we as engineers be thinking about to help the client understand, to eliminate maybe some of that, it's not confusion, it's just, as you touched on, unawareness. We increase the perception of that distinction, because as an analyst, right, we're looking at building that model to say, for that customer, what is the ratio? But it's use-case specific. That comes up all the time. You'll never have a perfect, unilateral TCO that can be rinsed and repeated. >> Yeah. >> Because it is custom-built to that use case, but on that tier. >> Yeah. And I think if ever there was a way to reinforce the point: I mean, people don't know. They know they have an application, and they know that they want it to perform well, but if people weren't paranoid about having bad application performance, this stat would not exist. And this is a NetApp statistic. I pulled this two weeks ago from our AutoSupport database. Across all customer systems that report AutoSupport, so we're talking, you know, 20-some thousand systems reporting in, 78% of the blocks are stored in the wrong damn tier, because we're terrified of, you know, creating a bad customer experience, right? And so what if we can move that entire 78% of existing data? Where do I want to see that go? Do I want to see that go to Amazon S3? No, I don't get paid on it. Do I want to get it in Azure Blob? Of course not. Do I want it in Google object storage? No.
I want it in Lumen network storage, because when that 78% leaves the building, if it goes into Lumen's network storage, it goes right back in the building. [laughter] So, just in a better way that's more cost-effective for our customers. So again, what this stat means in plain English is: your customers, and I'm going to prove this to you in the demo. I'm going to show you how, whether these blocks live in your adaptive tier or whether they live in your object tier, the only difference is how much you decide to pay for your data. >> So just to tie a few things, wait, I just want to make sure about this: is this, for what you're saying, with more advantages, or is this, uh, like Equinix? >> No, yeah, fair question. So what happens here, and, you know, we'll address some of that in the demo, but I don't want to defer your question, because there are kind of two components to it, right? So to the second part of your question, is this like Equinix: it's way better than Equinix. And in fact, what Lumen is doing now with their 400-gig wavelength network, I should probably let you speak to that a little bit, but just in a nutshell: we are really close to this object store no matter where we are in the world. We're close to it. It's simple to connect to it, and you don't have to think about stuff like that. Right? If Lumen is your network provider, they're creating a route to that storage for you through their exchange. And even if you are not a Lumen customer today, it is a publicly available object store resource, just like Amazon S3, that has been out since, you know, 2020, I think, is when we released object. So it's been available for quite some time. And because it is based entirely on the NetApp StorageGRID product on the object tier, it is 100% compatible with customers' on-prem NetApp systems; it's license-compatible with those systems as well.
So, and I don't know as many of the bits and pieces of the network itself, but... >> Right, yeah, we have different, so there's compute, and then we've competed up against an Equinix, whereby, whether it's doing bare metal, or as an opex, or as a service, right, if they want. So in the case of a big fulfillment company, right, where they're doing a hardware refresh, we're working on those schedules right now, of their 24 data centers, or the data centers that they are managing, life-cycle management, we're doing a capex-versus-opex model. And they asked a question, and we came back with two or three different options as it relates to compute. They know they need the information at this proximity, but they had this legacy mindset of, well, I've got to buy, I don't know, $20 or $30 million worth of clusters every five years. And so I may not be answering your question directly, but there's a distinction between what Equinix is able to provide, independent and lower value, and what we've been able to enable, whether they still want to consume that bare metal or as a service. >> Yeah. And to me, at least, the irony is that, you know, a lot of folks, when they're consuming services from an Equinix, are riding on top of the Lumen network anyway, right? It's just that they're kind of adding complication to their life, versus saying, "Hey, why not be on both sides of my network? Why don't I just buy the network and the route to the storage, whether that storage sits on-prem or whether I'm just consuming it just like I would out of Google, Amazon, or Microsoft?" Which is, by the way, one of the things we can also do: you know, the connectivity exists to all of those.
So, well, like I said, I don't want to completely spoil the rest of the presentation, but great question, because that segues nicely into why this automated cold data tiering is so important to us. Because, again, there's just so much opportunity here for existing on-prem systems. I don't want to see those systems drained into object technologies that (a) don't get Lumen and NetApp sales paid, but (b) cost our customers more money. So, we all kind of know what automated cold data tiering is, or, by show of hands, have you never heard of this before? Fair. All right, awesome. Well then, I'm glad I built this. [laughter] And the net-net is: if you're a Lumen network storage customer today, you're already doing this. Your data goes into the adaptive tier, and then we figure out what needs to go into the cold tier for you. So, it's not something you have to think about, agonize over whether or not you want to do it; you just know your service level is going to be met, and the data is going to go to the right tier. But if you're an existing NetApp customer thinking about tiering, knowing you should do it, knowing you shouldn't have bought those last three AFFs, you just did it out of fear of not meeting your service level, because you haven't dealt with a vendor before who had a performant object store on a low-latency network. The best low-latency network available. Um, and so now, you know, when we talk about why Lumen chose NetApp and why NetApp chose Lumen, it was exactly that. We were the best at making tiering magical and simple. And then we put it in the hyperscalers and made it incredibly complex. You had to do endpoints and routing and VPCs and all this other stuff, you know, and you had to figure that out for yourself as a customer. And then, meanwhile, you know, why did Lumen choose NetApp? 30,000 customers in 140 countries, 2,000 technology patents.
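A minimal sketch of what an automated cold-data tiering decision looks like, assuming a FabricPool-style cooling window. The 31-day window below is an assumed placeholder, not a documented LNS setting; real policies have product-specific knobs.

```python
# Minimal sketch of an automated tiering decision with a cooling window.
# The 31-day window is an assumed placeholder, not a documented LNS setting.
SECONDS_PER_DAY = 86_400
COOLING_DAYS = 31

def pick_tier(last_access_epoch, now_epoch):
    """Recently touched blocks stay on the performance (adaptive) tier;
    blocks idle past the cooling window become object-tier candidates."""
    idle_days = (now_epoch - last_access_epoch) / SECONDS_PER_DAY
    return "adaptive" if idle_days < COOLING_DAYS else "object"

now = 1_700_000_000  # any fixed reference time works for the example
print(pick_tier(now - 5 * SECONDS_PER_DAY, now))   # adaptive
print(pick_tier(now - 60 * SECONDS_PER_DAY, now))  # object
```

The point the speakers are making is that this decision runs continuously and invisibly: the customer never sets the window or watches the demotion happen.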
And really, it was those technology patents around making tiering magical, because when Brad Lewis created this product, he had the vision for, you know, kind of a software-defined storage future. And one of the things we're going to see in this demo today is that instead of bringing an all-flash FAS on-prem for this demo, I actually lit up a software-based copy of Data ONTAP in a simple commodity VMware server. And even though I pushed 32 terabytes of data to the Lumen cloud, I never had more than two terabytes of local NVMe storage available to me throughout the demo. But through the magic of those patents, we're able to, again, exploit the entire NetApp portfolio. Which, if you have a lifetime, and I have, you know, I'm 20 years into this, so I've fully internalized how to master all of these products. But if you don't want to take that same 20 years I did, how about using the dead simple portal that Lumen provides? Right? Your engineers. I can't recall a time in my life where I've been more stressed out, but also more gratified, by, ah man, you stumped the chump again. Let me go back to product engineering and find out, you know, how our REST interface, you know, no, that's not simple enough. And that's what they harped on. They said, we need this to be something that anybody can click through and see their billing in real time, their service level in real time. They can select what services they want to provision, and they should be able to do that without having to talk to the engineering team; if they have to talk to the engineering team, we failed.
So this is kind of what it looks like in terms of an ecosystem here, and, you know, I think the it factor here, Todd, is just: any data at any place, right? We can bring it to you, right? I think that summarizes this slide. >> Yeah, and because of ONTAP, right, how did George phrase it at the keynote? AI is built on data, and data is built on NetApp, exactly. >> And so that's what ONTAP has been able to provide, or enable Lumen to create: that transparency, as Brad was striving for when they created it five and a half years ago. Transparency is not just a trending phrase; transparency has been a foundational element that we were driving to provide. >> Yeah, and I think, exactly to your point, one of the questions that we do want to address is, you know, obviously people have put their data in AWS, they've put it in Google, they've put it in Azure. There are incredible services in all of those clouds, and customers are going to want to keep their data close to those services, AI learning models. And the point is, Lumen gives your customers the ability to store their data more economically while still keeping it, I like to say, cloud-adjacent. Um, but the point is, via custom engagement today, you know, if there is a use case that you can't do through the automated portal, Lumen still has an appetite to help your customers bring their costs down in the hyperscalers, as well as on-prem, by leveraging the network storage. So let's talk a little bit about how Lumen helps customers achieve reduced TCO. >> So, Todd touched on it, right: trying to get the math right so that it's representative of the existing workloads. As I touched on early on, no TCO is going to be perfect right out of the gate, nor can it necessarily be unilaterally used across different client use cases. With that said, from the beginning, it's a perspective, it's an approach. We're committed.
Part of our growth engine is being transparent, being objective, and showing our customers just how specifically Lumen is driving value. So in this example, this is a healthcare institution wherein we had visibility, we had line of sight to what they anticipated their capex outlay would be had they chosen to build this out on-prem, maybe in a data center, but they were managing it through physical equipment on site, and that's highlighted here in the yellow. So when our sales and engineering teams get connected with a customer, this is right out of the gate. We articulate to the customer plainly, this is what we're driving for. And with a little bit of research, we can get a feel for how they're currently performing, and we benchmark against their current returns on investment. And if I draw your attention to the lower right of the table, pardon me, that's showing the cost multiplier: by choosing a competitor, or doing nothing, or doing it themselves, that's how much more money they're going to be spending. So in the case of do nothing, it's one and a half times more expensive than having Lumen manage your network storage. And the competitor is even higher. And here's a trigger point, and Todd touched on it. All of those complexities of workloads, of tiering, and the way that their apps are pulling information out of archive, cool, or hot tiers, we take a stab at based on what we've heard from our engineers. They understand, boy, the waterfall effect of that business-analytics job pulling information out... that monthly egress catches you every time. And those lumpy outlays, because, oh, we didn't realize we were going to do a data-mining scenario when we thought this was cold data, or the radiology department or some department is running analytics... how often does that happen? No one can predict it. All of those questions go away with Lumen network storage.
You don't have to worry about it because the meter isn't running. >> Yeah. And in this example here, this is just a straight-up comparison of the difference between using the Lumen network storage object tier against some of our competitors' cold tiers. And again, a big part of the difference there: if I'm an on-prem system, I'm paying for a NetApp FabricPool license plus I'm paying for the object. If I go to Lumen network storage, I'm just paying Lumen for the object. So, we've been bragging about how simple this is. Sorry... I sure can. Thank you. I was just wanting to account for the egress and zoom in on that, because it's such a significant factor. >> And to your point about predictability. >> Yeah, very significant. Thank you. And yeah, it's one of those things that's not very predictable, right? And so we have a model that allows us to play what-ifs: hey, what if we didn't think about end-of-year sales data mining, or whatever? So we throw that in as an additional egress in month 11, and we can do all of this in real time with the LNS savings calculator. So Frank's team can actually meet with you. And it's a great point; let's dive in for just a moment. Even if they are predictable... and this is a pain point we'll often ask our clients about. From a financial-lens standpoint, we're looking for that regression analysis of how costs move against sales, and so on and so forth. In this particular case, if our customers are telling us, well, costs are predictable, we can then still illustrate the value. Because, by way of an example, if seven or eight percent of their storage costs are related to egress fees, well, we know that pretty starkly right out of the gate. That goes to zero as soon as you get the Lumen network storage tiers up.
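A rough sketch of the kind of what-if math the LNS savings calculator runs. All the numbers here (the monthly storage spend, the seven-to-eight-percent egress share, the surprise month-11 egress event) are illustrative assumptions for the sketch, not Lumen pricing:

```python
# Illustrative what-if model: a metered cold tier (with egress fees and a
# surprise egress event) versus a flat-rate tier with no egress charges.
# All figures are made up; real inputs come from the customer's bills.

def annual_cost(monthly_storage, egress_share=0.0, surprise_egress=None):
    """Total 12-month cost. egress_share is the fraction of monthly spend
    attributable to egress fees; surprise_egress is an optional
    (month, extra_cost) event like an unplanned data-mining run."""
    total = 0.0
    for month in range(1, 13):
        total += monthly_storage * (1 + egress_share)
        if surprise_egress and month == surprise_egress[0]:
            total += surprise_egress[1]
    return total

metered = annual_cost(10_000, egress_share=0.08, surprise_egress=(11, 15_000))
flat = annual_cost(10_000)  # no egress fees, no surprises on the meter

print(f"metered: ${metered:,.0f}  flat: ${flat:,.0f}  "
      f"multiplier: {metered / flat:.2f}x")
```

Playing the what-ifs is just changing those inputs: drop the surprise event, move it to a different month, or raise the egress share, and the multiplier moves with it in real time.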
So, depending on the... again, this market laggard versus leader... depending on where that customer is and their level of understanding of the workloads, and here are the bills, let's meet them where they are and ask those questions of how predictable they are, and if there is a pain point there, let's run right to it and help them. >> Yeah, and just to jump in on that point of laggards versus leaders: we saw an announcement today in the keynote, right? You know, Microsoft has a beautiful 18-month-old baby, and [laughter] I mean, we love it, we're super glad that Azure NetApp Files got around to thinking about tiering four years later, right? And so, I mean, we want everybody to do this ultimately, right? So if you've sold Azure NetApp Files to your customers as a NetApp rep, you can count on having about 78% less revenue next year that you get bonused on as they move that out into Azure Blob, right? Good for them. [laughter] If, on the other hand, you sold Lumen network storage, you saved your customer money while also getting comped on the object tier, because that's moving into your native technologies: Lumen network storage object and StorageGRID, respectively. So congratulations to them for figuring that out in the year 2023. We did it in 2019, and we've been saving customers money with it for a while now. >> If I may, I think one of the considerations... you take those egress charges out of the conversation, and now I really just need to focus on passive data. >> It's a fair statement, and, you know, because of the scale Lumen's doing this on. So the question was, just, hey, they're making me think about so many different vectors, right? If I'm paraphrasing accurately: that I'm having to think about capacity, the rate, the cost, and it's all so variable and it's not very predictable. And everybody's going to do this, right? I mean, that's 78% [laughter] of hot data that's on overpriced AFF storage, you know?
I mean, that... you know, it's inefficient. It's only overpriced if you stored it in the wrong place, right? If you needed that level of performance, it's the best product out there. But if you only needed it for 22% of what you're doing, you overpaid massively, and now you're an acquisition target because you made too large a capital outlay. So, again, we've been bragging on how simple this is. So, if this looks scary to any of the sales folks by the time I click through it, then by all means, don't sell it. But I don't think it's gonna. [laughter] So how do I do all this stuff we've been talking about? Right, so we're just talking about one use case out of a thousand that we can do with Lumen. But let's just talk about this one: finding your NetApp sales rep, finding your end customer, and saying, hey, we looked at your AutoSupport data; do you realize that you have an opportunity to spend substantively less with us in the coming year and free up capital dollars? And all you need to do is click Get Started in the Lumen portal. Get set up with a portal account. Log into that portal. Select the object storage tier. Select a region that's close to you. So, I'm in Albuquerque, so I chose Phoenix, which was plenty close and plenty fast, well under five milliseconds. I created myself a user, created an access key and secret key for hooking up to the storage. If I get confused about anything along the way, I've got the Lumen network help portal, one of the best help portals I've seen any vendor build. Clean, easy to find the information. I said, "How do I create a bucket for NetApp object tiering?" And this popped up and it walked me through it step by step. >> Todd, on that security point, there was something on encryption. >> Could you just... Yeah, thanks for bringing that up. So, you know, most of our on-prem customers are taking advantage of their free encryption license, right?
So, we do AES-256 encryption, and so your data is encrypted at rest. But now, wait, I'm concerned. I'm putting this data into the Lumen cloud. Do they now have my data? No. They have the encrypted blocks. After we've done compression, after we've done compaction, after we've done deduplication, all of the wonderful things that we do to make the data smaller, we also write it to WAFL in an encrypted fashion. And all that happens before we ever send an object out. So what you have, candidly, on the Lumen network is gibberish without that private key that only you have custody of. So the really nice thing about this is it opens up a world of use cases where you're meeting AES levels of encryption standards that are absolutely going to be compliant, and not get you in a situation where, even potentially in classified environments, this could be certified for use in the future. But dialing back from that: just security-conscious environments where they don't want their hyperscalers to own their security keys. That's part of the reason why they keep stuff on-prem. Completely interoperable. So yeah, thanks for pointing that out. So again, did I have to think about any of that stuff? No, it's just built into the product. So I create a bucket, and I make sure that I have a network interface built on my system that can ping us-west1-sto.lumen.com. If I can ping that address from my network, I can store data in Lumen network storage. Now here's the cool thing. What's the bandwidth requirement, and what's the latency requirement? Anybody want to guess? Because NetApp has very specific guidance for FabricPool, right? And it turns out that if all I want to do is offload snapshots, the answer is it does not matter. It truly does not matter.
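To make the "gibberish without the key" point concrete, here's a toy sketch. This is emphatically not ONTAP's actual AES-256 implementation — it's a stand-in XOR stream cipher keyed through SHA-256, purely to illustrate the idea that the object store only ever receives ciphertext and the customer alone holds the key:

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR stream cipher (illustration only, NOT AES-256).
    The keystream is derived from the key with SHA-256 in counter mode."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(plaintext):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

# XOR is symmetric, so the same function decrypts.
toy_decrypt = toy_encrypt

key = secrets.token_bytes(32)          # only the customer holds this
data = b"cold blocks tiered out to the object store"
obj = toy_encrypt(key, data)           # what the remote bucket receives

print(obj != data)                     # the stored object is gibberish
print(toy_decrypt(key, obj) == data)   # recoverable only with the key
```

Browsing the bucket with a native S3 client, as in the demo later on, is the real-world version of reading `obj` here: without the key, the blocks carry no usable information.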
You know, if all I'm trying to do is offload tons and tons of snapshot data... that 78% is probably composed of 60% snapshot and change data that it's very unlikely I'm going to have to restore. So, just to keep this demo super fun, I did it over a 5G Verizon connection that was then uplinked to a Lumen connection, just to prove the point. So here I am: I only have a terabyte of storage available in these aggregates. Why is there so little storage available? This is literally a virtual machine. This is a NetApp ONTAP Select virtual machine. And I'm going to push 32 terabytes of data into it using a policy that says, "Hey, as the change data gets cold, send it to the cloud." So, I anticipate that my total storage requirement is 32 terabytes, so I provision two terabytes. Makes perfect sense, right? Well, it does start to make sense when you realize that ratio. So, I'm going to click Add a Cloud Tier. You can see I don't have a cloud tier configured currently. So, I add my StorageGRID cloud tier. I name it Lumen network storage. I point it at us-west1-sto.lumen.com. And then I just choose which of my local high-performance tiers, living on very expensive NVMe disk that I could only afford two terabytes of, I'm going to tier. I make sure I've got a network available, and then I choose a policy on a volume-by-volume basis. Maybe there's some data that I just want to get comfortable with first, so I'm going to create a cloud-test volume. But my projects... I'm pretty confident I can offload the snapshots, at least on those. So I'll do a snapshot-copy-only policy on those. Maybe on cloud-test I'll do an all policy and just say, hey, everything that comes into this system, I just want it to go straight to the Lumen cloud. Okay. So now, kind of risky considering I just told you I'm not at an on-net fiber Lumen location doing this demo, right?
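For anyone who prefers the ONTAP CLI to the point-and-click flow described above, the same setup looks roughly like this. The endpoint, bucket, aggregate, SVM, and volume names are placeholders standing in for the demo's values, so treat this as a sketch rather than a copy-paste recipe:

```shell
# Register the Lumen bucket (StorageGRID-based, hence SGWS) as a
# FabricPool cloud tier.
storage aggregate object-store config create \
  -object-store-name lumen-network-storage \
  -provider-type SGWS \
  -server us-west1-sto.lumen.com \
  -container-name my-tiering-bucket \
  -access-key <access-key> -secret-password <secret-key> \
  -port 443 -ssl-enabled true

# Attach the cloud tier to the local NVMe performance tier (aggregate).
storage aggregate object-store attach \
  -aggregate aggr1 -object-store-name lumen-network-storage

# Choose a tiering policy per volume: cautious on the project volumes,
# aggressive on the throwaway test volume.
volume modify -vserver svm1 -volume projects  -tiering-policy snapshot-only
volume modify -vserver svm1 -volume cloudtest -tiering-policy all
```

The volume-by-volume policy choice is the same "get comfortable first" approach as in the demo: snapshot-only offloads just snapshot copies, while all pushes every cold block straight to the cloud tier.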
I'm across the street from it. But just to make it more of a challenge on myself, and not to be spoiled, rather than using my dedicated 400-gig uplink, I thought it'd be fun to use a connection that has a max upload of 40 megabits per second and a max download of 200, sometimes with as much as 90 milliseconds of latency, and see if I could still deliver storage performance results with that. So I clicked OK. Now I've got a cloud tier, and literally in the time it took me to refresh the screen, I'd already sent eight megs worth of objects out to the cloud. I went back and checked on my interfaces to see how they were doing, and sure enough, they were pushing about five megabytes per second, which is all that connection was capable of. But I went and logged into a native S3 browser just to see what it looked like on the Lumen side. And I opened up this file, and sure enough, it was gibberish. So even if I directly access the bucket outside of the context of ONTAP, my data is not at risk. And here were the results after running this test, generating completely random data... I effectively used a ransomware algorithm to just sit there and generate encrypted data. I did that on purpose so as to make sure that I couldn't take advantage of any compression, compaction, or deduplication technologies. I wanted to see the true capability of the Lumen object tier, so I wanted to stress it out in the most evil, tech-geek kind of way. And here's the results: 30 TB of high-performance tier-1 capacity saved, using 780.9 gigs of hot blocks. But when we look at the performance before and after I put in the object tier, you can see I ran the same test three times: once before I had the cloud tier, twice after I had the cloud tier. Now, these are not earth-shattering numbers. These are intentionally embarrassing numbers, right? Because that's the whole point: I'm at a retail location.
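The ~5 MB/s the interfaces reported is exactly what a 40 Mbit/s uplink can do, and from that you can estimate how long the cold objects take to trickle out in the background. A quick sanity-check sketch (the link speed and 30 TB figure are from the demo; the estimate ignores protocol overhead, so it's a lower bound):

```python
# Sanity-check the demo's link math: a 40 Mbit/s uplink tops out at
# 5 MB/s, matching the ~5 MB/s the interfaces reported.

def mbps_to_megabytes_per_sec(mbps: float) -> float:
    return mbps / 8  # 8 bits per byte

uplink_mb_s = mbps_to_megabytes_per_sec(40)

# How long would 30 TB of cold data take to offload at that rate?
# (Decimal units: 1 TB = 1,000,000 MB. Ignores TCP/S3 overhead.)
cold_tb = 30
seconds = cold_tb * 1_000_000 / uplink_mb_s
days = seconds / 86_400

print(f"{uplink_mb_s:.0f} MB/s uplink -> ~{days:.0f} days to offload {cold_tb} TB")
```

Which is the point of the demo: the offload can take as long as the bandwidth allows, because the application never waits on it — it reads and writes the local hot tier while the cold blocks drain out in the background.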
I might be using a 5G uplink. I might be at a football event as a vendor, you know, wearing my hot-dog hat, [laughter] and the point is, all I needed was a virtual machine running on a box this big to push 700 megabytes a second of throughput at less than 50 milliseconds of round-trip application latency. And notice that the cloud tier latency is crazy, right? You're not reading that wrong. If anybody has good enough eyes to read that, that's 100,000 milliseconds of network latency I'm experiencing on a competitor network. But because I put Lumen network storage in front of it, my application sees less than 50 milliseconds of round-trip latency. So I'm able to sustain 700 to 800 megabytes per second and let NetApp offload it as bandwidth allows. And that just segues perfectly into some of the customer use cases, where you think about, well, wait, is the network going to be fast enough? No, in a lot of cases it's not. I mean, North America is pretty blessed, but we still need mobile connectivity. The data is coming to us from farms. It's coming to us from Antarctica. It's coming to us from really inconvenient places that aren't going to be on your network. So now you need to think about what I can sell where my network doesn't even exist, to help optimize for the customer until the network does exist, right? Which is kind of flipping the script a little bit on the Lumen people in this room, I think. But again, you're sitting on a gold mine there, in the sense that it extends your reach way beyond just the classic on-net fiber location that you're used to selling to. So if you want to talk through a little bit of the customer use cases here. >> Yeah. So here, right, we had no capex, only opex, right? So what I'm trying to listen for are the constraints or the pain points, right? The technology, and then aligning the technology and the features.
So in this particular case, medical: it was pay for what is used, and this comes up more times than not as it relates to efficiency of spend. And so here it was pay for what is used, extra licensing fees... as Todd touched on, even in the demo, no licensing fees. So here, technically, do you want to speak to... >> So yeah, this one, since I worked with them really directly... and I know we're getting short on time, which is great because we've had so many questions, and that's how we wanted it to be. But this one was a super fun one: Nvidia DGX 100 systems needing NFSv4.1 multi-protocol access to data that began life in a Windows Active Directory on an NTFS file system, but now they needed to access it over Unix NFSv4.1. Guess who can solve that? Lumen can. And that one might have resulted in a call back to me, but the point is that they'll get you there. And that was pretty unique. So, artificial intelligence use cases where we're reducing the number of data centers they need to manage and getting them directly piped to their data, bringing the data closer to them so that their AI model can run fast on-prem but also be backed up appropriately. We've done the same types of things for media companies, where we can massively rationalize their network. Large enterprise backup applications, another great use case where all they really need is an S3 bucket. Why would you buy and maintain your own? It's just silly. So if you want to kind of take us through our key takeaways here. >> Yeah, key takeaways, right? So Lumen network storage creates greater flexibility, increased control, transparency, and better value. And I want to go back to that 78% number, because that is inefficient. So what can we collectively be paying attention to, and what task lists can we create immediately? Because, in financial terms, that's arbitrage.
It's just inefficiency waiting to be solved, and we should be taking action on that: because of that flexibility, increased control, and aligning it to the economic value of our customer, it's a hard debate to lose when you've solved for every one of those opportunities that we've identified. >> Right. Thank you so much for taking the time to be with us today, Frank. This is fantastic technology. I could not be more excited about this product. This is the product that NetApp should be selling, Lumen should be selling, and our joint customers should be buying. If you want to get under the hood a little bit on some of the other technologies that work really well on top of Lumen network storage, here are some of the related sessions that will also be posted on NetApp TV by the end of the conference. I think it takes them a few weeks to do all the production and everything. But if you want to learn more about Lumen network storage, I tried to put at least one tiny URL in there, because some of them are a little bit long. Just fantastic technology, fantastic partnership, and let's save our customers money with this. Okay. >> If I could ask a question. >> Sure. Excellent question. So... yeah, thank you. The question was, what are we doing in the federal space, if I could paraphrase, to meet all of these mandates and regulations and things like that. So that is one area where I will say, if the hyperscalers need to catch up to Lumen on the core technology, we've got some work to do in terms of getting some of those certifications done. And I know we're in discussions with the NetApp Keystone team, because a lot of the work that they've done is completely reusable for Lumen, because it's the same underlying technology. These guys just built a better mousetrap on top of it.
So, it's actually going to be easier to certify. So, if you have customer use cases today where the only thing holding you back is those certifications, let's get with Brad Lewis and let's make sure that that's not the long pole in the tent. They do, right. Yeah. And I was hoping to have Rich Patterson here today, because he's actually on the Keystone team, but he's one of the folks that we want to work with to basically say, look, if it was a checkbox there, it's a checkbox here. And the only difference is Lumen's been QA-ing it for four years; NetApp internally has been QA-ing it for a year. [laughter] So, yep. Yes. So, it's a dual-comp model where, if you bring a deal to Rob here... he could probably speak to it better than I can, actually. >> So the dual-compensation program that NetApp supports just encourages collaboration and cooperation on the deal. We're all trying to do the best thing for our end-user customers. It encourages that come-together approach, and we're working together on opportunities. The end users consume the capacity just as they would in a traditional transaction. So everybody wins; everybody is rewarded for doing the best thing for the customer. It's a great model. >> So if they're buying it as a service, how do you sell it? >> Yeah.
And so they're buying a service, and we have mechanisms behind the scenes to account for that ourselves, to figure out how we should compensate on whatever service elements make that up... capacity elements and other things. But to the end user, it's 100% a service. >> That's exactly right. And that's really the point: every time you see yourself writing up a StorageGRID quote, call Rob and say, hey, let's bring you in on this, let's see if we can turn this into an LNS deal. >> And one of the things that we like to share with our friends who sell to the end users is that they're really missing out on opportunities to participate in efforts and initiatives with their customer, where the business was maybe looking at an opex model or some other need that's driving them to look for solutions. They may not want to go traditional. They may not want to deal with lifecycle management, daily equipment on site, all of the things that come with traditional on-premises operations. So in those scenarios, we encourage our counterparts to come work with us, go together into those end-user customers, and have those conversations. Many times they'll find out those conversations are happening already and they're just not involved. So it's a really good way to come together. Everybody works together for the betterment of their customers. Just put your team in touch with me. >> Absolutely. Any other questions? I know we're a little bit over. It's about six minutes over, and we'll lose the room in three to four minutes probably. But [laughter] any last questions? Thank you so much. Appreciate your time, and thank you, Frank.
Lumen Network Storage (LNS) offers NetApp® ONTAP® customers managed file/block/object storage and backup to the cloud with no ingress/egress charges, and no cloud-tiering license fees. Our Cost Saving Calculator provides side-by-side, fully [...]