BlueXP is now NetApp Console
Monitor and run hybrid cloud data services
Hello and welcome to today's webinar, Kubernetes, VMware, and NetApp, a match made in container heaven. Thank you so much for joining us on the webinar today. This event is brought to you in partnership with our friends at NetApp. We've got a great event lined up for you. Before we get started, there's just a few things that you should know here about the event. Uh my name is David Davis of Actual Tech Media and I'll be serving as the moderator. As always here at ActualTech Media, we want these events to be educational. We're all former IT professionals ourselves here. We know how tough it can be out there in the world of IT to solve your challenges. There's always new innovations. There's always uh new threats, new concerns, new projects. Uh it never stops. And this event is here to help you. to hopefully overcome some of your challenges and get all your questions answered. We encourage your questions there in the questions pane of the audience console and we'll be doing a Q&A session at the end of the event. So, keep those questions coming throughout the webinar. We even have a best question prize to help encourage those questions. I'll talk about that here in just a moment. Uh we also have a number of resources there in the handouts tab I want to call your attention to. Uh I see uh a link to the NetApp and VMware homepage. There's also a link to try Astra control for free and Astra download Astra Trident. Uh these are uh solutions that you'll be learning about on the webinar today, but make sure you keep those links in mind during the event. Uh I'll remind you about them of course at the end of the webinar as well. At the end of the webinar, we'll also be announcing the winner of our Amazon $300 gift card door prize. If you're watching this on demand, of course, that drawing has already occurred. The prize terms can be found there in the handouts tab. And as I mentioned, we also have our best question prize for an additional gift card, an Amazon $50 gift card. 
Uh we'll contact that prize winner after the event, but of course, you have to ask a prize to be entered into the drawing. And with that, I'm excited now to introduce you to our two expert presenters. Welcome to Chance Bingan, technical marketing engineer at NetApp. And welcome Alan Cows, also a technical marketing engineer at NetApp. Chance and Allan, it's great to have you on the event. Um, before I first hand it off to Chance, we've got a quick poll for everyone out there in the audience. The question on the screen is, what is your current Kubernetes solution? And as you know, there's multiple um flavors of Kuberneti Kubernetes in the market. So, we've tried to represent some of the most popular there on the screen. Uh but perhaps uh you have one that's not listed. That would be the other option. Perhaps you're using multiple. Uh feel free to select multiple or maybe you're still learning about Kubernetes and haven't uh implemented it yet. And so you can choose none. But uh we appreciate your feedback there on the poll. And uh look like uh looks like uh there's a lot of good responses coming in. It's hard to select awinner here. I see definitely that Tanzoo andOpen Shift and uh the Hyperscaler Solutions uh Azure EKS GKE those are kind of the leaders there on the poll. So uh good feedback. Thank you so much. We'll have other polls here during the event, but for now, I'm going to hand it off to you, Chance. Take it away. >> Thanks. Yeah, it's great to be here. Um, appreciate everyone joining. Um, so we'll go ahead and dive into it. Today, we're going to be talking about um, Kubernetes, VMware, and NetApp together. Uh, talking a lot about the VMware Tanu portfolio. And that's kind of where we're going to uh, get started there. So we're going to talk about um the NetApp and VMware partnership. We're going to go ahead and talk a little bit too about the VM the Tanzoo portfolio. 
So talking about, you know, all the different things that make up Tanzoo and Tanzoo is one of those things. It'sreally not just one product, right? It's and I saw a lot of you in the poll are actually using it already. That's fantastic. But it's uh it's kind of like Microsoft Office, right? You can have Microsoft Office, but you also are using Word, Excel, PowerPoint, right? All these different things arepart of the Microsoft Office portfolio and that's kind of what Tanzoo is. We'll look at some of the options with uh how you can manage Tanu applications and the applica you know using application data management for uh stateful application data with your containerized applications and talk a little bit about why NetApp is a good choice for your data management platform for all of yourcontainerized data needs really. So before we get started, uh usually when we do these, we uh one of the top feedback items that we get is, "Holy cow, there are a lot of acronyms. It's acronym soup." And we at NetApp love our acronyms. If you've been following us for any amount of time, you're definitely familiar with that. And our friends at VMware love their acronyms, too. So a couple I want everyone to be familiar with today is uh so starting off at the top and we're not going to go through all of these but CNS. So when I say CNS I'm talking about the cloudnative storage uh essentially management plane that exists within VSCenter and it provides integration between uh say the vSphere CSI and storage policy based management um in vsenter itself. So it provides that connect you know that connection that allows native storage policy based management to be leveraged by you know your Kubernetes administrators your DevOps teams. Um [snorts] I did just mention the word CSI. So what is a CSI? So CSI is a container storage interface. It's essentially a standardized way to uh perform data management tasks like storage management tasks through the Kubernetes framework. 
So it's a way to manage Kubernetes storage and a couple other things here. So a persistent volume or PPV if you're not familiar is essentially just um a storage allocation that can be consumed by um a container or a pod, right? And a PVC is a way to claim that storage resource and associate it with um a pod or you know um some other resources, right? TKG, Tanu Kubernetes grid and uh the various flavors of that. We'll be talking about that more in a moment. Um so the NetApp and VMware partnership u specifically when it comes to Tanzoo. Uh so this year uh we have uh over a 20-year u partnership with VMware uh where we've had uh really a great shared vision of an history of innovation looking to increase theefficiency of the data center right and that goes from the era of machine virtualization where everything was migrating from bare metal to um you know virtual machines, virtual servers, virtual desktops to take advantage of the efficiencies you can achieve with uh you know consolidation getting rid of unused compute resources and consolidating on shared storage. If you look at the um little chart there I've got on the side move this little widget out of the way. So if you look at in the middle so we're actually celebrating 10 years of Vasa and Vivalls this year. So back in 2012, NetApp was actually one of the reference platforms for the release of Vivalls. So with our 20-year history of Vasa and Vivalls, uh we've learned a lot and our Vasa provider has evolved a lot along the way. And I'll be talking more about that shortly, but that dovetales into the work that we've done over the last two years with VMware to validate solutions based on cloud foundation. Right? VMware Cloud Foundation is the new vSphere and it's going to provide that cloud-like experience for onrem services. 
So, and that's whether it's principal storage or supplemental storage with COB foundation certified both ways um internally with our testing and we've talked about it in a number of different webinars, but also with Tanu um as soon as VMware rolled out support for first class discs, we're able tovalidate that and ship a release of our Vasa provider that supports first class discs, which are essentially a VMDK that isn't owned by a VM. It's a first class citizen within the vSphere ecosystem. And that allows you to take thesestorage objects and associate them with things other than VMs such as pods for example. And then looking forward into the future with uh announcements that have been made over the last uh quarter or so where um you'll be able to take advantage of NetApp data stores as a firstparty service across all of the uh hyperscalers. Of course, you'll have to follow up those hyperscalers to see uh you know what the private previews areall about and how to get engaged with those. But those again have been announced by all three of the uh the major hyperscalers. looking into the forward uh you know in into the future we're looking at uh improving our uh our solutions to support Tanzoo as we work with VMware to get moreintegration points if you will exposed through the cloudnative storage management plane. Now when I talk about NetApp integrating with a VMware ecosystem, typically thefoundation for that integration is a free product that we have called ONAP tools for VMware vSphere. [snorts] And that really provides um seeif I can get this animation to step there.we go. Good enough. So uh thanks David.There are three main components within ONTAP tools that provide that integration. The first is the VSCenter UI uh enhancements that it provides. So what that does is provides contextsensitive menus for you to simplify operations tasks uh for your vsspere administrator. So things like you could rightclick on a data store and perform storage management tasks on it. 
Um there's also uh guey enhancements right within the actual vscenter single pane of class right the HTML 5 user interface that will show you storage details right where it lives in the environment what its actual capacity is um if you're using storage policy based management some things around that exist as well not to mention things like being able to generate reports right in the vsenter UI for things like data store capacity and performance reports um and as well as managing storage profiles which we'll talk about later. The other thing is um the API server. So [snorts] on Tools is a REST API endpoint and the point thepurpose of that is to simplify your the task of creating automated workflows within your VMware ecosystem. So for example, you could uh write a workflow that sends one REST API to ONAP tools that would perform a myriad of tasks. So instead of having to write different API calls for the ONAP storage array to VSCenter to you know any number of other things, you can send one API call and it will create volumes and lungs or exports and map initiators and then tell your host to rescan and create data stores and mount the data stores to additional hosts. All these things, all these things are simplified by just using a single REST API endpoint for you know building theseautomated workflows. It makes it a lot easier and less painful to adopt um uh you know very simple automations just to improve the efficiency of yourweb front end or self-service portal you know whatever you're using to provide your services and actually before I go ahead let me go back to that onemore time so I also will also mention um the VASA provider is part of on tap tools so that provides services for vivalls and storage policy based management and uh storage awareness. So you're able to see and manage storage with capabilities that aren't possible with just data protocols alone. 
And then the last thing there is the storage replication adapter uh the server for the storage replication adapter that provides array based replication for VMware's site recovery manager. So if you want to uh take advantage of SRM to orchestrate your DR workflows, you're able to take advantage of NetApp's snap mirror replication, even synchronous snap error replication for an RPO zero solution. So here really looking at kind of thewhole NetApp portfolio when it comes to integrating ONTAP with vSphere. So for our automation, we're going to uh have REST APIs that you can consume that I mentioned with ONAP tools. our backup and recovery application. Uh we have sample anible modules you can take advantage of as well as on tab's own anible modules you can download and integrate. Monitoring is available through a number of products [snorts] as well as integrating through uh there we go the slide steps aren't working too well for me but uh but that's okay. and then integrating through onap tools itself again vi for storage acceleration enhanced offloads reporting snapshot offload uh even without vasa all that's possible with these integration tools so let's take a look at the tanu portfolio And uh David, I see we have a question here. Yeah, the poll on the screen I want to call everyone's attention to is how familiar are you with VMware's Tanzoo offerings? Uh very familiar. Maybe you're using them already or considering using them uh somewhat familiar. Maybe you've heard of them a little familiar or not at all. So, I'm curious to hear yourfeedback on this. We appreciate your responses. Looks like kind of a tie here between somewhat familiar and uh not at all. Was still a goodportion there in between. So, thank you everyone who responded there to the poll. Uh Chance, I'll hand it back to you.>> Thanks. So, that's uh that's great. It looks like we've got the great opportunity to um to do some learning today. 
So when you look at VMware Tanzoo, uh it really does in have asolution package, right, to meet pretty much any need that you might have. And it goes all the way from Tanzoo CommunityEdition which is a free open-source free open-source free open-source um you know Kubernetes DRO from VMware to Tanu basic edition which gets you up and running with uh Tanzoo Kubernetes grid service and standalone and vSphere native pods on up to advanced edition. So if you go all the way up to advanced edition, that's essentially a completely integrated dev sec ops stack with, you know, everything you could possibly need to run, you know, your DevOps environment with a, you know, fully integrated VMware solution. Now, each of these different uh licensed editions of Tanzu have different capabilities as far as what you can do with storage. Um, so let's take a little bit of a deeper dive look at thetechnical differences. So, uh, if you remember a few years ago at VM World, uh, it's like 2018 maybe something around that time frame, VMware introduced Project Pacific. And so project pacific was this concept of running Kubernetes pods natively on the vSphere hypervisor as a first class citizen without having to uh you know set up a guest operating system and install your container runtime and everything in that. It runs natively on vSphere and one of the advantages there is it's extremely efficient. It uses um just the right number of uh re amount of resources within the hypervisor and it's extremely fast and efficient because there's you know it's essentially para virtualized. Now there are some limitations of why you might not use the vSphere native pods and they are very restrictive. They're very fast, very efficient and a and very well integrated into vSphere in terms of visibility and you know provisioning management and you know everything you can do. It's just really um really well done. Uh however, there are some restrictions. So, you can't go and log into your pod. 
You can't uh you can't use other um CSIs. You reme remember we mentioned CSIs earlier, container storage interface. You're really restricted to only using the vSphere CSI which takes advantage of vSphere storage. So, that might be uh vivalls, it can be you know any number of other things uh even VSAN. So you move up the stack a little bit to Tanzu Kubernetes grid service which is the service integrated with vSphere that lets you um manage Tanu Kubernetes grid standalone clusters within your vSphere environment.So that uh essentially you have your supervisor cluster that provides Kubernetes supervisory services within a given ESXi host cluster and then you can deploy TKG workload clusters within that same environment and manage them through that kind of single pane of glass interface. And then all the way at the top we've got Tanu Kubernetes grid integrated TKGI which is um essentially an evolution of enterprise PKS which came to VMware through the Heptio acquisition back in 2019. So that'syet another flavor. And if we look at this solution stack kind of side by side here, you can kind of see what TKGI looks like um with Bosch and running on top of NSXT for your network virtualization on the lefth hand panel there. But then on the right hand side of the slide, you're looking at um what I think isa really good way to look at aunified platform, right? And what do I mean by that? So let's say you've got uh and this is really one of the great things about the Tanzu portfolios. You can run native VMs like you know your legacy apps, your line of business apps within VMs just like you always have right alongside vere native pods for those high performant containerized workloads right alongside guest clusters running um you know TKG uh workload clusters you know all of it on the same infrastructure sharing the same resources and doing it in an efficient way. 
So, you know, looking at this a little bit further, why would you want to use Tanzoo as the foundation for a, you know, a Kubernetes self-service platform, right? So, it gives you an open source align Kubernetes solution with a very simple deployment model that's vSphere integrated and brings multicloud support, right? hybrid multi cloud support and it gives you a consistent dev sec op sec devops dev secc ops experience right um your kubernetes developers or your kubernetes admins they don't need to know if it's tanu or if it's um Anthos or vanilla kubernetes whatever it is it works exactly like any other kubernetes dro from the administrator's perspective right if you're using the vsspere CSI VM storage policies appear as storage classes in Kubernetes, there'sautomatic translation that happens. They don't need to worry about creating storage classes with that. We'll talk more about that in a minute. Um or if you're using a thirdparty CSI like Astra Trident, um that's going to give them the opportunity to take advantage of that as well. And of course, because you're using standard Kubernetes namespaces, you've got that, you know, that intrinsic secure multi-tenency that Kubernetes gives you. And you know, I just talked about the unified platform. Vsere gives you the option to have your legacy apps, your virtual servers, virtual desktops, um all the VMware services that you you've used for years, run it alongside a consolidated infrastructure with um you know with Tanu as that Kubernetes uh platform. And of course, you're leveraging trusted partners, right? So NetApp and VMware have uh they've been around together for decades now, partnering for decades now. And of course they both have global support organizations that are uh available 247 in, you know, multiple different languages all over the world. So you really have a partner there that uh you can depend on any day of the night, right? [snorts] Any time of day. 
And another thing to look at is the opportunity to elevate your existing IT skills. IT skills are something that we talk about a lot these days trying to maintain our staffing levels or even level up our staffing levels like in this example. So because all of this is vere integrated, you're able to take advantage of a lot of your pre-existingstaff skill base. So you already have experienced and really good vere administrators. It doesn't take much for one of them to take and turn up a Tanzoo workload cluster, right? Or enable workload management on an ESXi cluster and start building out pods on it. Uh so you're able to take these skills that you already have, take them up to the next level and really um enable them to be part of a DevOps team. Um, so let's dig a little bit deeper into the CNS platform, a cloudnative storage platform. So starting with vSphere 7, they introduce support for vivals, right? And so vivalls take advantage of storage policy based management to provide VM or even now pods scale granular controls uh across an environment that uh allows you to consume advanced storage array capabilities without necessarily having to become a storage administrator. The policies themselves will simply tell you what the capability is or give you the option to take advantage of a capability and you just use it. You don't have to need you don't need to know how to go configure an ONAP storage array or an element storage array or whatever the case might be. umsecure multi-tenency again intrinsic to Kubernetes namespaces and you know I mentioned that VM storage policies that are you know the foundation of policy based management that you can use with vivall's traditional storage as well even including uh you know vsan is [snorts] a storage policy based management u based storage platform [clears throat] [clears throat] [clears throat] so uh the cloud native storage platform. Sorry, I lost my train of thought. Had a message pop up. 
Um, cloud native storage plat uh management plane takes these VM storage policies and translates them into uh VM uh excuse me Kubernetes storage classes within the namespace. So you will take as a vsspere administrator you'll have a VM storage policy. You can take and associate that with a managed namespace within VSCenter. You can assign a unique quota to it. So you could take the same policy and assign it to multiple different namespaces if you want and with each one you can you know assign a specific quota. So say you have you know 10 pabytes of storage you can take the same policy assign it to everybody and give each one say a one pedabyte quota so they can't go over that and it allows you to have these fine grain granular storage controls that uh allow the VASA provider or you know whatever storage you're using to automatically provision the p persistent volumes and manage persistent volume claims for all these different uh you know storage resources based on whatever classes of service you define within VSCenter and you know all of that gives you the capability to have predictable um consistent and reliable storage services because youknow how this is going to work because you are able to take a kind of a cookie cutter approach if you will by defining quality of services you know in terms of you know how many IOPS do I want a particular VMDK to be able to consume right so you eliminate these you know noisy neighbor problems that you oftent times find in highly dense environments. Now, I've mentioned Vivalls a lot and I've mentioned ONTAB tools a lot. If you were at VM World last year or if you were at Insight um last year as well, which was right after uh VM World, weannounced a new scale out Vasa provider that's launching this year. 
So this is a whole new architecture and I mentioned we're celebrating 10 years of vivols this year at NetApp and along the way we've learned a lot with our appliance based vasa provider and what we've learned is that you need to be able to scale and you've got to be able to scale across venters. You've got to be highly available across every layer of the stack and you've got to be able to meet uh massive demand, right? Especially with containers because container sprawl is real and when you start seeing thousands and thousands of persistent volumes get deployed. Um you know you you're going to understand that vivols make a lot of sense and managing them at scale with a scale out vasa provider makes a lot of sense. So what is a sca scale out vasa provider? What is netapp's next generation vivall solution? So it's a um it's based on a containerized micros service architecture. It's uh horizontally scaling based on you know whateverparameters you define there of course there are parameters out of the box that will have it automatically scale deploying new pods to support the workload and it's also essentially you know self-healing right so as a pod if a pod dies like say you've got one of your worker nodes that it runs on uh fails for some reason new pods will get spun up to replace it on whatever hardware is remaining in the environment. [snorts] We've tested this with over a 100,000 significantly over a 100,000 vivols on a single Vasa provider across six vsenters 800 actually well over 800 ESXi hosts and this is you know well beyond a 6x scale improvement over what our legacy vos solution is able to provide. So think about whatever you know about NetApp and Vivolves. It's changing this year, right? It's uh highly available. It's highly scalable and it's highly resilient. It provides uh state-of-the-art instrumentation and visualization through built-in, you know, graphana and alerting and Prometheus. 
Everything that you expect from a modern cloudnative architecture is going to be part of this application. So, we do have some early adopter activity going on right now. If you do, if you are interested in uh learning more about this or uh possibly taking part in an early adopter program, just reach out through your normal NetApp contacts, whoever you normally deal with. Um, and they'll be able to set up a conversation about that. So, I have talked a lot about Vivos. I've talked about VSAN. I've talked about traditional storage and data management dage data management with Tanzoo in general. But what does the architecture look like? So what we're looking at here is kind of a visualization. So you've got all your Kubernetes pods and they have persistent volumes and persistent volume claims for storage running either ontap or element or um cloud volumes on tap whatever the case might be right wherever it lives. Um, here you can see the vSphere CSI, the vSphere container storage interface that lives within your Kubernetes cluster communicates directly with the CNS control plane. The CNS control plane can talk to, you know, all the rest of the VSCenter components within the VCSA and the VCSA if you're using Vivalls can talk Vasa to the Vasa providers that it has registered. So in this example where we've registered the element vasa provider and we've registered on tap tools the onap vasa provider which in turn talks to onap 9 but that's not the only option that does provide fantastic integration uh with vSphere and allows your vere administrators to have really fine grain control over your storage capabilities. But you know what? If you need some capabilities thataren't possible with the vSphere CSI, there are some things that it can do that other CSIs can't and vice versa, right? What if there's another solution that better meets your needs? So, here's where something like NetApp Astro Trident comes into play. 
[clears throat] Excuse me.So, Astro Trident brings in some new capabilities and is able to communicate directly with element or ONTAP and um you can build storage classes that expose these different capabilities to your Kubernetes administrators and but there's some difference there whether your Kubernetes administrators aremanaging these things or if your Vsspere administrators are managing these things as [snorts] well as what protocols you're using and you know where things live. Um, so you know Allan, I've got him here. I've also got Bala from the all both of these guys are from the Astro Trident team. Bala is here to help answer questions in chat. And Allan being one of the experts on Astro Trident, what are your thoughts on this architecture and where you know the difference between the Vsere CSI and the Astro Trident CSI kind of come into play?>> Uh, I think that's a great question, Chance. Um I look at mainly uh as we say here on the slide where we talk about there are some greater capability sets available when you're using uh NetApp Astrotridident. Of course Astroridant is NetApp's preferred storage integration solution for Kubernetes. It's developed inhouse. It's open source uh fully supported by NetApp uh backended by uh our element storage systems on tap storage systems uh as well as um cloud volumes on taps cloud volume services Azure NetApp files if you're working in the cloud. Um so it has a number of feature set capabilities that are not available by default with the vSphere CSI driver. 
um primarily the ability to do uh snapshotting of volumes uh and if you know NetApp you know one of ourbig things for years has been our data efficiency with our snapshots flex clones things like that um and also its ability since uh it can map to NFS directly um it can provide rewrite many volumes so that'sanother big thing but you know the rest of this presentation is going to be dedicated to storage here so uh going to go ahead and take it over to the next slide.Andso first I do want to ask I just mentioned this uh do you uh use or plan to use CSI snapshots? Yeah, this is a good question and the poll is on the screen. I encourage everyone to respond to that. We do want to hear your feedback on this. Alan, do you want to elaborate just a little bit on when you say CSI snapshots,maybe a little more definition on that for those who are not quite sure? >> So, >> So, >> So, yeah. So, I see it's uh we're actually kind of leaning towards no right there quite a bit. And I think that's uhinteresting because I see, you know, some major value with the ability to snapshot an application uh roll back the persistent volume, make a clone of that persistent volume and attach it to another application. um you know there there's quite a bit of things that you know I do in day-to-day uh with my applications in Kubernetes that uh I leverage snapshots all the time. So and then you know having you know NetApp snapshot copies uh backing those where the snapshots themselves as they're taken don't use additional disk space you know as the snapshots taken is a major boon there. So >> excellent. Well thank you for elaborating on that. Um, sounds like there's some education here to be done and so I'll hand it back to you. >> Okay. So, I'm going to continue here and I've right now take a look at the slide where I have placed the uh Astroridant CSI driver uh side by side with the vSphere CSI. 
Now the main thing you have to think about here is kind of how Chance mentioned earlier when it comes to uh ontap element the vasa provider um netapp storage systems in both of these cases are providing the backing storage. So when it comes to storage efficiencies like uh dduplication, compression, uh compaction of data, that's going to be available on the backend no matter what. Um where you get the different feature sets that um can depend on your workload performance is the fact that Astro Trident gives you the ability to create uh read write once volumes on block formats and that would be like ice scuzzy to element storage. um or ice scuzzy to ontap or it gives you the ability to do a readwrite many if you're doing NFS to ontap. Um now you see on the vSphere driver on the right NFS is a supported uh storage protocol. However, in the case of the vsspere driver uh whether it's VSAN VMFS volumes mounted by block NFS volumes that's more of a traditional uh how you do avSphere data store with NFS. It creates um the vSphere data store is mounted off an NFS server, but then it creates VMDK files in it as block devices and that is what is mapped to your container which is why the limitation is there of having the read write once volumes uh which cannot be attached to multiple containers or pods at the same time. Um so when you have workloads a IML type workloads or any type of workload where you want to do like a rolling upgrade of an application so that you can deploy the new application side by side and attach a back-end data vault u having readr many uh isa major factor there in allowing you to run those types of operations. So another thing is the way it was provisioning. If you heard Chance mention earlier, you know, Trident takes into account the Kubernetes admin or the actual DevOps engineer or user and their ability to provision storage compared to the VMware admin. 
So in this case, when you look at Trident, and I show all the different backends that are there as I mentioned before, all it takes is an initial configuration by the IT admin, where they configure the storage backends. Trident is installed in the Kubernetes deployment, they add the backends available to that cluster, and then define the storage class there in Kubernetes. Once that is done, it's all up to the end user inside of Kubernetes. So the developer can then make the PVC request. And what we have here is a PVC request where they're asking for one volume at 10 gigs, they want it to be ReadWriteMany, and they're asking for their gold-tier storage class. In that case, Trident will then go through the creation of the PVC: it will find the backing storage pool that satisfies that defined storage class. And when you set up these backends, you can choose to put them on different aggregates: if you have a NetApp FAS system with hybrid storage (spinning disk and all-flash disk), an AFF all-flash array, or cloud storage pools, Trident will find the storage pool that meets the requirements of that storage class, and that's where it will provision the volume. So it'll create the volume in that storage pool, map it to the PVC, and then go ahead and take care of mounting it into the container for you. So what I've got here quickly, and I hate to say this, you're not going to be able to escape my voice: I've got a pre-recorded demo here for a few minutes, talking over and showing how the Trident provisioning process works when you're inside a Tanzu cluster. I hope you enjoy. My name is Alan Cowles, a solutions architect here at NetApp, and today I will be demonstrating deploying an application with persistent storage in VMware Tanzu TKG 1.3.1 using NetApp Astra Trident.
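The two halves of that workflow can be sketched as Kubernetes manifests. This is illustrative only: the `gold` class name, `media` hint, and PVC name are assumptions, though `csi.trident.netapp.io` and the `backendType`/`media` parameters follow Trident's documented storage class options:

```yaml
# Admin side: a "gold" StorageClass served by Trident
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-nas
  media: ssd          # steer provisioning toward all-flash pools (illustrative)
---
# Developer side: the request described above -- one 10Gi ReadWriteMany volume, gold tier
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: gold
```

Once the admin has applied the StorageClass, the developer only ever touches the PVC; Trident handles pool selection, volume creation, and mounting behind the scenes.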
We begin our demonstration assessing our environment by running tanzu version, showing that we have installed TKG 1.3.1. We follow that by running kubectl version to show that the currently installed server version is Kubernetes v1.20.5+vmware.2. We can run kubectl get nodes and take a look at the cluster nodes we have deployed in TKG: a single control-plane node and three worker nodes. We can see that NetApp Astra Trident is installed by running kubectl get pods in the trident namespace, which shows us the Trident operator, the orchestrator, and pods installed on each node in the cluster. We can run tridentctl version against the trident namespace to see the current release that is installed, in this case 21.10.0, which is the latest release from the month of October. We can also use tridentctl to observe the backends that we have to support NetApp storage. Here we have configured an NFS backend using the ontap-nas storage driver. The backend is online and currently has zero volumes associated with it. This backend connects to an ONTAP system in my lab, K8s ONTAP, which is running ONTAP 9.9.1P2 and currently has many volumes deployed for our Kubernetes testing. Returning to our console, we can continue by installing the application using Helm: running helm install wp using the image provided by the Bitnami repo for WordPress, in the namespace wp, and setting a few options for username, password, and our global image pull secret for our Docker Hub registry. Once the chart is deployed, we can check on the status of the application by running kubectl get pods in the wp namespace, and we can see that it is pending. Running that again, we can see that one of our pods is running and the other container is currently being created. Using kubectl, we can also check on the status of our PVCs in that namespace. Notice it has created two of them; they are currently bound, and we get the PVC IDs with specific volumes.
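For reference, the sequence of checks narrated above looks roughly like this at the shell. This is a sketch, assuming a standard Trident install in the `trident` namespace; output is elided:

```console
$ tanzu version                      # TKG CLI, 1.3.1 in the demo
$ kubectl version                    # server reports v1.20.5+vmware.2
$ kubectl get nodes                  # one control-plane node, three workers
$ kubectl get pods -n trident        # operator, orchestrator, per-node pods
$ tridentctl version -n trident      # 21.10.0 in the demo
$ tridentctl get backend -n trident  # shows the ontap-nas backend online
```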
One of them is 8 gigs and the other is 10 gigs. We can take a look at these PVCs on our ONTAP system by looking up their unique IDs. First, we'll look up the trident_pvc volume whose ID ends in 4ad and take a look at the one that presents itself; in this case, it's our 8-gig volume that we just looked at. Changing the end of that to c131, we see our other volume here on our ONTAP system: it's our 10-gig volume. Returning to our console, we can run kubectl get pods in our namespace and see that our application, and both pods associated with it, are currently running. With kubectl get services in the namespace, we can see the external IP where the app is exposed. Copying and pasting that over to our browser, we can pull up our WordPress website and show that our application is running. I'd like to thank you for watching my demonstration today of deploying an application with persistent storage in VMware Tanzu TKG using NetApp Astra Trident, and I invite you to check out cloud.netapp.com/astra to get started with NetApp Astra Trident today. Okay, so hopefully I didn't bore too many of you with that, and you found it informative and useful. So, continuing our presentation, we're going side by side with the feature comparison. As we mentioned, the main thing we've already talked about is having that topology-aware volume provisioning, and noting that we have block only with the vSphere driver versus block and file with Astra Trident. There's also the vSphere driver's inability to do volume snapshots; with our poll question earlier, that didn't seem to be much of a concern, but I'm just pointing out that the feature set is there in Trident and quite useful, which you might see in our next demo. Finally, we talked about ReadWriteMany volumes for the different types of workloads. One of the benefits that you get out of the vSphere CSI driver right now is its ability to support Windows-based Kubernetes nodes.
So if you are deploying Windows containers, you'll see a major benefit there from the vSphere CSI driver. Overall, they're pretty comparable if you look at everything else: whether encryption is available, whether you can expand the volume offline, and whether you can map the volume as a raw block device or format it with the file system you want to see on there. So overall they're quite comparable, and you just have to choose what works best for you and the applications you're deploying. So, another poll question here. David? >> Yeah, the question on the screen I want to call everyone's attention to is: do your Kubernetes applications require ReadWriteMany volumes? And so, this is a specific type of volume that has multiple applications writing to it and reading from it, is that right, Alan? >> Yeah, it's the ability for different pods and nodes to all access the same volume at the same time. Like I said, it's mainly used in AI/ML-type workloads, or in any application with a rolling upgrade, where you have a persistent data set that migrates from one instance of an application to the newly upgraded version of the application. >> Okay, excellent. Thanks for clarifying that. So it looks like kind of a split between yes and no, half and half I would say in general. Alan, I'll hand it back to you. >> All right, we will continue. So we want to discuss: why do we want to use NetApp for Tanzu? Basically, the way we look at NetApp's data management systems for Tanzu and other Kubernetes distributions is that we have solutions that will fit any Kubernetes storage need. Whether you need ReadWriteMany or not, whether you need the snapshots or you don't, you can do those with Trident, or you can use the vSphere driver with NetApp as your backing storage.
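What ReadWriteMany buys you in practice is that multiple pod replicas can mount the very same claim. A minimal sketch, where the Deployment name, labels, and the claim name `shared-data` are all hypothetical:

```yaml
# Two replicas mounting the same ReadWriteMany claim ("shared-data" is hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: shared
              mountPath: /usr/share/nginx/html
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data   # must be a ReadWriteMany PVC, e.g. NFS via Trident
```

With a ReadWriteOnce claim, the second replica (or the new pod in a rolling upgrade) would fail to attach the volume if it landed on another node; with ReadWriteMany NFS, both pods mount it concurrently.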
As NetApp, we like to say we have best-in-class performance and resiliency, both on-prem and in the cloud, with ONTAP. We have built-in support for secure multi-tenancy: our Element storage systems have individual tenants created for the storage volumes that are provisioned, ONTAP systems have SVMs that help us split workloads, and Trident itself is set up with authentication to dedicated SVMs, so each user's workload can be separated from the other users in your company or in your department. We like to talk about our data fabric: the fact that we have on-prem solutions with FAS, AFF, and SolidFire; our cloud-based solutions, CVO, CVS, and ANF; and all of the technologies that allow you to connect those and bring data back and forth easily from on-prem to the cloud. And pretty well all of our solutions can be managed with REST APIs. ONTAP and Element are both designed to allow you to automate any of your workflows, and we publish Ansible modules for ONTAP management that make it easy to set up your own automated scripts and control your systems in that manner. And last but definitely not least, we have a history of trusted partnerships and a strong track record with VMware and with other third-party companies that have Kubernetes distributions out there; we work with any of their products, honestly, and we do it well. So, one more demo, and this one's a little bit of a sneak preview. This is something the mad scientists have been working on up in the lab: NetApp's Astra Control Center. It uses Trident as its actual backend into ONTAP storage systems, but it gives you the ability to snapshot an application and its backing data in a stateful manner, giving you an application-consistent snapshot.
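To give a flavor of that Ansible-driven automation: a playbook sketch using the `netapp.ontap` collection's `na_ontap_volume` module. The SVM, aggregate, and volume names and the credential variables here are all placeholders:

```yaml
# Playbook sketch: provision an NFS volume on ONTAP via the netapp.ontap collection
- hosts: localhost
  gather_facts: false
  collections:
    - netapp.ontap
  tasks:
    - name: Create an NFS volume on ONTAP
      na_ontap_volume:
        state: present
        name: demo_vol
        vserver: svm1             # placeholder SVM
        aggregate_name: aggr1     # placeholder aggregate
        size: 10
        size_unit: gb
        junction_path: /demo_vol
        hostname: "{{ ontap_host }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"
        https: true
```

Because the module is idempotent, re-running the playbook leaves an existing volume untouched, which is what makes these modules suitable for repeatable, scripted storage workflows.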
You can then migrate that, you can clone it, you can move it from one cluster to another, or you can take those snapshots and just use them for disaster recovery, like snapshots are intended for. So bear with me for a few more minutes here; I'm going to show off this demo and I hope you enjoy it. My name is Alan Cowles, a solutions architect here at NetApp, and today I'm going to demonstrate how easy it is to clone an application between clusters in VMware Tanzu TKG 1.3.1 using NetApp Astra Control Center. Starting from our dashboard, you can see that we have one managed application, two managed Kubernetes clusters, and one managed storage backend. The application that we are managing is a WordPress application installed in our Tanzu workload cluster. This cluster is but one of two clusters discovered and managed in our environment running Tanzu, with an application available to be managed. The other cluster is dedicated for WordPress and runs the same Kubernetes version. We have also discovered storage backends: our ONTAP system running ONTAP 9.9.1, and we have created a generic S3 bucket for the Tanzu Astra purpose, which is currently available for use by NetApp Astra Control Center. In order to clone an application from one cluster to another, we select the application, go to the action menu, and select Clone. We then change from our workload cluster to our WordPress cluster and click Next. We notice that the application, with its existing namespace and existing cluster, now has a new name, a new namespace, and a new cluster destination. We click Clone to continue. You can see that the application is now in the discovering process, where it will remain for a while as the application is first copied locally and then to the remote cluster. While our clone process is running, we can go ahead and take a look at our app in the console. First of all, let's look at the pods that are in our original namespace.
We can see that there are two, MariaDB and WordPress. We can then take a look at the services available in that namespace, get the IP that is exposed, and pull up our WordPress instance. The cloning process for this particular WordPress instance takes about 20 minutes, but for the purposes of this video, we have trimmed that down. When the clone is complete, the application becomes available in Astra Control Center. We can see that it is now located on the WordPress cluster, and we can pull up our console, switching our kubeconfig to the one for that cluster, in order to take a look at that environment. If we do get pods in our newly defined namespace, we see that we also have two pods here, MariaDB and WordPress. And if we take a look at the services in that new namespace, we can collect the IP from the new service, which we notice is different from the old one. We can take that IP, add it to a browser, and pull up our WordPress instance there as well. I'd like to thank you for watching my demonstration today of non-disruptively cloning an app using Astra Control Center in VMware Tanzu TKG, and I invite you to check out cloud.netapp.com/astra to get started with Astra Control Center today. Okay, so I hope you found that as entertaining and informative as I did, and just overall neat: the ability to take your application, tell it you want it to live in another cluster, and non-disruptively it moves over, creates the service needed to expose the application, and you're up and running. I find that pretty awesome. So, we're coming near the end of our presentation, and we've got a few key takeaways. First of all, we want to talk about how Tanzu has multiple editions: TKGI, TKG, TKGS, and vSphere Pods. Finding out which one fits your specific workload does require careful analysis, so figure out which one works best for you.
We believe that storage is a core requirement for Kubernetes and containerized workloads, especially those that are meant to remain persistent, and that there is more than one way to consume NetApp storage for Tanzu, whether it's the vSphere CSI driver with the VASA backend or Astra Trident. If you would like to get started with Astra Trident today, we have a GitHub link right here so that you can pull it and install it in your cluster; like I said, it's open source, and it allows you to access your NetApp clusters and provision storage straight away. We also have a demo for Astra Control if you'd like to try it; it's available at cloud.netapp.com/astra. Here's some additional information, some resources that we find valuable, so you can follow any of those links at your leisure and discover what's available there for you. So in the end, I want to thank you for joining me, and I'm going to turn this back over to David. >> Absolutely. Yeah, great presentation, Chance and Alan. I learned a lot on the event today; thank you so much for being here, and really cool demos. I know that the experts from NetApp have been answering questions electronically, and it looks like they've covered all the questions that have come in, so I really appreciate that. Alan and Chance, thank you so much for the great presentation today. >> Thank you. >> Hey, you're welcome. >> And I want to call everyone's attention to the handouts tab for additional information. There are links right there to download Astra Trident and try Astra Control for free, as well as a link to the joint VMware and NetApp homepage, where you can find additional resources. Thank you to everyone for the excellent questions that you submitted on the event. Before we go, I do want to announce the winner of our Amazon $300 gift card: this is going to Tim Cross from Missouri. Congratulations, and we'll see you next time. Have a great day. Bye-bye.