Good morning, everyone, and welcome to the eighth episode of our webinar series, Knowledge and Know-How with NetApp Support. If you've been to one of these before, this may be a review, but I wanted to first explain a little bit about why we do these webinars for our customers. Shiv, can you go to the next slide? My name is Dwight Cranford. I'm a leader in the NetApp support center, and we've got three of our technical experts presenting to you today. Shiv is one of our technical support engineers, and Scott Stanton and David Croen are two of our senior escalation engineers, and they're going to be talking about Astra Trident. Next slide.

So, to give a little idea of why we do these webinars: generally in support we only work with our customers when we're dealing with an issue or trying to resolve a problem. It's important for us to connect with you outside of working those cases, so we want to share some of the knowledge we have and the tools we use, and make it easier for you to manage your environment. We also want to build effective partnerships, put our customers first, and, like I said, be proactive. When you contact NetApp support, we want you to feel like you're contacting an extension of your own team. We also hope that by reviewing some of this content you'll resolve issues more quickly, or possibly avoid issues altogether. During the webinar we will have Q&A; you'll see a Q&A button down at the bottom of your screen. If you have any questions, please put them in there and we will make sure we get all of those questions answered. You will also receive a copy of the slides we're presenting today. So just sit back, relax, and enjoy the content. Hopefully you're going to learn something great from us today. So, Shiv, let's go ahead and get started.

>> Thanks, Dwight. Hi, everyone. Thanks so much for joining today. We'd like to officially start by introducing Astra Trident. NetApp Astra Trident is an open-source dynamic storage provisioner for Kubernetes. It is the intermediate layer between a Kubernetes cluster and a NetApp storage platform. Astra Trident allows Kubernetes environments to easily provision volumes on NetApp storage. This is especially helpful to a DevOps admin, as it can mitigate the need to learn storage administration, because all commands can be issued from the Kubernetes environment. Astra Trident is currently available for on-prem and cloud deployments of ONTAP, as well as Element OS for SolidFire.

The first thing we need to know about Astra Trident is the backend that it uses. The backend for Astra Trident defines the relationship between Trident and the NetApp storage that you're going to be using it with. The backend contains things like usernames, passwords, and how to connect to the NetApp storage, as well as some other options depending on what type of storage you're using. The types of backends that are available are ONTAP NAS and ONTAP SAN, both of which also come in an economy model that allows you to use fewer volumes, as well as SolidFire; and on the cloud side we have backends for Amazon FSx, Azure NetApp Files, and Cloud Volumes Service for GCP. A minimal sketch of what a backend definition looks like follows below.
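For reference, a minimal ontap-nas backend definition along the lines described above might look like the following sketch. The LIF addresses, SVM name, backend name, and credentials are placeholders rather than values from any real environment; the Astra Trident documentation covers the full set of options for each driver.

    {
      "version": 1,
      "storageDriverName": "ontap-nas",
      "backendName": "nas-backend",
      "managementLIF": "10.0.0.10",
      "dataLIF": "10.0.0.11",
      "svm": "svm_nfs",
      "username": "admin",
      "password": "secret"
    }

A backend like this is typically created with a command along the lines of tridentctl create backend -f backend.json -n trident.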
Next, we'd like to talk about storage classes. An easy way to think about storage classes is to think of them as you would a plane ticket. Plane tickets come in varying service levels, such as first class or economy. While on the flight, passengers are provided with different service levels, but the end goal of each ticket is to bring passengers to the same destination. The same concept applies to storage classes. Each storage class can be defined to provide different service levels, but the end goal remains the same, which is to provision a volume.

Some of the parameters you see on the screen here show the different types of backend parameters we can set. For example, if you have a storage class that you want to use SolidFire SAN with a particular number of IOPS, you can specify the backend type and the IOPS in the storage class, so that when we provision storage we use that backend and at least that number of IOPS. Some other parameters: if you have a selector set up, or if you've got labels set up in your backend, you can use a selector to choose which backend pools you want to use. In this case, this one would be for a cloud model, because it has geography=east and performance=gold; in the backend you would have performance set up at different levels depending on what type of storage is available, and also by geography. Those parameters narrow down which storage we use in our backend (a sketch of storage classes like these appears at the end of this section). So the next thing we're going to look at is persistent volume claims.

>> A persistent volume claim, or PVC, is a request for storage made by a user. When a PVC comes into existence, based on the storage class, it will reach out to Trident to provision the persistent volume on the NetApp storage. A PVC can request a size and also access modes. The three common modes are ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

>> In the example at the bottom of the screen, we can see the arrow pointing to the name of the persistent volume claim. In this case, default is the namespace that this is in, and the name of the volume claim is persistent-volume-claim-nas. That's quite a mouthful. Once we have a persistent volume claim, Trident will see that PVC request, go out to the NetApp storage and create the physical storage, and then bind a PV to the PVC.

So, looking at what a persistent volume is: a persistent volume is a storage object that's been provisioned. It is provisioned using the information in the storage class and in the persistent volume claim. You can see in the example at the bottom of this screen that the PV name is pvc- followed by a UUID created by Kubernetes; I'm not going to read the rest of that out. PVs are volume plugins like regular volumes, but their life cycle is independent of any individual pod that uses the PV. We'll look at that a little bit later as we go through this presentation.

>> Pods are also able to mount persistent volumes to store the read/write operations performed by their respective applications. A reclaim policy is how the volume should be managed once it is no longer needed. Reclaim policies come in two main flavors, the default being Delete. The Delete policy will delete the PV when the PVC is deleted, and it will delete the volume from the NetApp storage as well. Retain is a bit different in that it will allow the PV to remain on the storage and preserve the data within: removing the PVC and the pod will not immediately remove the PV from the storage. To remove the PV, you'll either need to modify the reclaim policy to Delete, or you can run the tridentctl delete volume command, as sketched below.
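Two short sketches tied to the points above. First, storage classes that pin provisioning to a backend type with a minimum IOPS, or that select backend pools by label; the class names, label values, and IOPS figure here are illustrative, not taken from the slides.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: solidfire-high-iops
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "solidfire-san"
      IOPS: "4000"
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gold-east
    provisioner: csi.trident.netapp.io
    parameters:
      selector: "performance=gold; geography=east"

Second, the two usual ways of removing a retained PV; the PV and volume names are placeholders.

    # Flip the PV's reclaim policy from Retain to Delete, so removing the PVC/PV
    # also removes the volume from the storage
    kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

    # ...or remove the volume through Trident directly
    tridentctl delete volume <volume-name> -n trident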
All right. So let's look at a hierarchy model of how this looks and how everything interacts with each other. We have the NetApp storage, and then we create a Trident backend that points to the NetApp storage and tells us which NetApp storage we're going to use and how we're going to connect to it. Then we create a storage class that points to which backend we're going to use, and gives us some parameters to know exactly which storage we're going to use through that backend. Then we create a PVC, and that PVC points to a storage class, which points to the backend, which points to the storage, and also gives us some more definitions of how we're going to use that storage, or how much storage we're going to grab. Trident takes that PVC and creates the storage, so we go back to the storage again directly, then creates the persistent volume in Kubernetes and hands the storage off to a pod; or you can just create a PVC without creating a pod and simply have the storage available to Kubernetes.

So now let's take a look at the YAML files that are required to create a storage class. A backend is required here, but we're going to skip the creation of a backend, for two reasons. First, the backend that you choose to use is probably going to be different, or could be different, from what we would use, and we would only be showing a very small sample of what is available; there are many backends to choose from. The Astra Trident documentation, which can be found at docs.netapp.com, does an exceptionally good job of going over all of the details of creating the backend file and what you can put in it.

But let's look at the storage class that we've got here. This is a simplistic version of a storage class, but it is valid YAML. First, note that the kind is StorageClass, with a capital S and a capital C; it is case sensitive, so we do need to make sure we've got the case correct. The name for this particular storage class is storage-class-nas. The backend type is ontap-nas, which matches directly with the backend we've already created. We've got the reclaim policy that Shiv told us about, set to Retain. Another option we've put in here is allowVolumeExpansion, which tells us whether or not we can resize the volume in the future. By resize I mean make it bigger, because I don't believe we allow making a volume smaller; there are some complications there.

So let's take a look at persistent volume claims. Once a backend and a storage class have been created, a persistent volume can be created through a PVC. PVCs are also defined through YAML files, and we can see in this file some of the concepts we have previously gone over, such as access mode. The YAML we're looking at here is a persistent volume claim. The name on this one is pvc-nas, and we've got the access mode there. One of the things we haven't mentioned yet is the request for storage of 1 GiB; this is where we put the size that we want our physical volume to be on our NetApp storage. And the storage class name points back to the storage class we recently created, storage-class-nas, as we showed on the previous slide. A sketch of both files follows below.
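A minimal sketch of the two files described above, reconstructed from the descriptions of the slides rather than copied from them, so the exact field values may differ.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: storage-class-nas
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "ontap-nas"
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-nas
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: storage-class-nas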
So now that we've gone over that, let's take a look at some of the methods we can use for Trident installation. Before attempting to install Astra Trident, API access must be configured for both ONTAP and SolidFire environments. If difficulties are faced during this, please contact NetApp support directly for assistance. If you're deploying Trident for a cloud environment, the correct network routing from the NetApp storage to the Kubernetes cluster is required, and IAM permissions are also needed.

>> The first installation method that we're going to look at is Helm. Helm is a package manager for Kubernetes, and it uses Helm charts to tell Helm how to install whatever package you're installing on Kubernetes. Trident ships with a Helm chart that can be used for installing through Helm. As with most Helm charts, there is a values.yaml file that allows you to change certain defaults for installation. So if you don't like the default namespace of trident and would like to change it, you can change that in the values.yaml file.

>> The Astra Trident operator enables certain forms of automation regarding installation. An operator pod is deployed into the cluster; as we can see from the kubectl get pods output here, the operator pod is listed at the bottom. This method makes modifications easier, since changes can be made without having to uninstall and reinstall Trident, because it allows the usage of the kubectl edit command. Upgrades also become smoother for this reason.

>> And as I spoke about Helm before, this Astra Trident operator method is the method that gets installed when you use Helm. As we can see in the description below, and I believe we'll see this in the demo, you've got the Trident operator pod running at the bottom of that output, and then the rest of the Trident pods that are required for running Trident.

The last installation method we're going to look at is actually the first method that was ever used for Trident, and that's tridentctl. This is the most basic method, and it will not give you an operator pod, so there are some niceties of having the operator around that you do not get with a tridentctl install. Any changes that you want to make to Trident, or any upgrades that you want to do, will require an uninstall and then a reinstall of Trident. An important note: whatever method you use to install Trident is the same method you should use for either upgrading or uninstalling. So if you've got an older installation of Trident that you installed with tridentctl and you would like to use the operator, you would have to uninstall with tridentctl and then install using the operator. If you're now using the operator and you want to switch to Helm, you would have to uninstall using the operator method and then install using Helm.

We'd like to go over a few logs we generally ask for when troubleshooting a Trident case. A support bundle includes all relevant troubleshooting logs. The command for this is tridentctl logs -a -n trident, with -n trident specifying the namespace, which in this case is trident. For operator logs, we'll usually run tridentctl logs -l trident-operator, pipe that through gzip, and redirect it into a file with a .gz extension. The logs can then be uploaded to upload.netapp.com/sg. Commands for troubleshooting will be a mix of Kubernetes kubectl and Astra Trident tridentctl commands. We won't be going over all of these today, but a quick tip is to append the -o wide parameter to certain kubectl commands to get more information. A few example install and log-collection commands are sketched below.
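For reference, here is roughly what the install paths and log-collection commands mentioned above look like as shell commands. The Helm repository URL and chart name shown are the usual public ones, but check the Astra Trident documentation for the release you are installing; the namespace and file names are common defaults.

    # Helm install (this deploys the Trident operator under the hood)
    helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
    helm install trident netapp-trident/trident-operator --namespace trident --create-namespace

    # tridentctl install (no operator pod)
    tridentctl install -n trident

    # Support bundle: logs from all Trident containers
    tridentctl logs -a -n trident

    # Operator logs, compressed for upload to upload.netapp.com/sg
    tridentctl logs -l trident-operator -n trident | gzip > trident-operator-logs.gz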
Okay, so now we'll take some questions before heading into the live demo.

>> And I'm not seeing any questions that have come in.

>> Yeah, looks like we don't have any questions. So, just a reminder, there's a Q&A button at the bottom of your Zoom screen. If you have questions, please put them in there and we'll make sure we get them answered.

>> This is Dave Crossen, and I work with Scott and Shiv on Trident. Today I'd like to show you a quick demo of the lab environment we'll be using for Trident, and hopefully this will help explain some of the concepts that Shiv and Scott have already shown us in that excellent presentation. I'll be running some scripts to help type in the commands, so that I don't make any typos and things go a little smoother, but as we go through each screen I will explain all of the commands. As already stated, please post any questions you have in the chat and we will try to answer them.

Okay, so let's get started here. Trident is created for DevOps environments that use Kubernetes, to provide access and management for applications that need persistent storage space. Trident is installed and runs in pods on a working Kubernetes cluster, and it also requires a working connection to the NetApp storage. The Kubernetes API allows the communication of objects like pods within the cluster, and Trident is an app that runs on the Kubernetes cluster, uses the Kubernetes API, and listens for REST API calls from the Kubernetes objects. The kubectl and tridentctl commands also use the same Kubernetes API. Trident listens for Kubernetes API calls from the PVC requests and then sends a REST API call to the NetApp storage to perform the volume provisioning. Trident is configured using the Kubernetes CSI standard, the Container Storage Interface, to provision volumes on NetApp storage, and Trident uses Kubernetes CRDs, custom resource definitions, which are just an extension of the Kubernetes API, to create the unique objects it needs in the Kubernetes cluster that are specific to Trident and the requirements of the NetApp storage.

So here I'm showing the tridentctl version command. We're running the latest version, which is 22.07. There's another version coming out at the end of October; we do four releases a year, so 22.07 was released in July of 2022, and we'll have a new version at the end of October, which will be 22.10. The cluster that we're running is a simple cluster that has one master node and two worker nodes, and there are currently no PVCs or PVs created at this time. As you can see, the Trident main pod is created. What we have here are two commands, kubectl get pods and kubectl get all (a sketch of these checks follows below).
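A sketch of the checks being run at this point in the demo, assuming the default trident namespace; the output will of course vary by environment.

    # Trident version (server and client)
    tridentctl version -n trident

    # Cluster layout: one master node and two worker nodes in this lab
    kubectl get nodes

    # Confirm there are no PVCs or PVs yet
    kubectl get pvc,pv --all-namespaces

    # Trident pods and related objects, with -o wide for extra detail
    kubectl get pods -n trident -o wide
    kubectl get all -n trident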
We'll just go through the kubectl get all output. When you install Trident, it installs the Trident operator, which then deploys the deployment as well as the DaemonSet. The deployment creates the Trident main pod, which is where the Trident controller is and all the application logic for Trident. There are six different containers running six different apps: the Trident main container, which is the controller that all the different pods need to be able to communicate with; a container for AutoSupport logs; the CSI provisioner, for provisioning the volumes; the CSI attacher, which attaches the volumes to the nodes; the CSI resizer, which allows for expansion of the volumes; and the CSI snapshotter, which is for creating volume snapshots. All of those containers need to be up and running for the Trident main pod to be working. Then, as you can see, a pod is created on each of the worker nodes, and that's the job of the DaemonSet: to maintain one replica, one pod, on each node. So each worker node has one Trident pod running on it, and each of those pods also talks to the Trident controller. That kind of explains the Trident deployment.

Okay. Next I just want to show a little bit more about the storage classes and the backends. Shiv and Scott had a very good slide on the storage classes. In this environment we have two storage classes set up: one for NAS, which will create NFS volumes, and one for SAN, which will create LUNs on the NetApp storage using iSCSI. Both of these policies are set to the default of Delete, to delete the volume using the CSI provisioner, and volume expansion has been enabled so that as we consume space on our NetApp storage we can increase the space as needed. Then there's a backend for each of these storage classes, one for NAS and one for SAN, and there are currently no volumes on either of those.

Let me pop out to another screen real fast just to show you. When you install Trident, and we're running version 22.07, it unbundles into a trident-installer directory, and in there we provide a sample-input directory that has example YAML you can use for backends, PVCs, pods, and storage classes. I'm in the storage class directory right now, just to give you some more examples. Storage classes are pretty much user-defined and can be arbitrary based on your needs. For example, there's one in case you're using the cloud volume service on AWS, or you can define bronze, silver, and gold for different service levels, QoS levels, and so on. There are a couple for NAS; this one includes NFS mount options that you might need when your users are accessing their application. The one I've listed out here is for the cloud, for when you have a topology where you only want to use the storage that's in a certain region or a certain zone of your cloud. So that just gives you an example, and a place to go, in your Kubernetes cluster. A sketch of a couple of these user-defined storage classes follows below.
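Sketches of the kinds of user-defined storage classes described here, loosely modeled on the sample-input examples; the class names, mount options, and topology labels are illustrative rather than copied from the installer files.

    # NAS class that passes NFS mount options through to the pods
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ontap-nas-mountoptions
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "ontap-nas"
    mountOptions:
      - nfsvers=4.1
      - nolock
    ---
    # Class restricted to a particular cloud zone via allowed topologies
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ontap-nas-us-east-1a
    provisioner: csi.trident.netapp.io
    parameters:
      backendType: "ontap-nas"
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.kubernetes.io/zone
            values:
              - us-east-1a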
Trident doesn't have a fancy GUI, because it's middleware; it's the plugin that sits on the cluster and listens for API calls from your Kubernetes end users, so we're looking at this from the command line here.

Okay. Next I just want to give an example of the backends. We talked about the storage classes, and there are two backends created, each mapping to a storage class. This is the backend using the ontap-nas driver, which will create NFS volumes, and this is the backend that will create LUNs using iSCSI connections. There are various different configs; these are fairly minimal configs for our lab, but some of the main points are that you'll need an endpoint IP address, the management LIF, in this case the address ending in 135, and you'll need that connection working so Trident can connect to the storage on that LIF. We're also calling out the data LIF; that will be where the mounts happen when you assign the pod, so it will mount the data volume using that IP address. There's also an auto export policy feature that you can use, which basically means that as you add and delete nodes in your Kubernetes cluster, Trident will update the export policy on the NetApp storage so that only the hosts inside the Kubernetes cluster are allowed access to the volume. This is the SVM that's going to be used on the storage, and then we have the credentials. In this case these are in plain text, but as Scott mentioned, you can also use secrets; we have an example of that. You can encrypt your credentials and paste them directly into the backend JSON, or you can place them in a secret YAML file that will create a secret object, and then you can just reference that secret object here.

And I'll show you, on the NetApp storage itself, this is the current NetApp storage that we're using in this lab. You can see that for NFS the management LIF is the address ending in 135 and the data LIF ends in 132, and for iSCSI it's 136. If you don't call out the data LIF, Trident will find one and use it. And there are currently, again, no volumes for Trident.

So then the next piece is, let's look at the pod and the PVC that we're going to create. In this case, let's say for an example we want to use NGINX; that's our application, and so we're going to place that in our pod. We're going to make a PVC request for a NAS volume, and that's up here. We want that volume to have ReadWriteMany access with 1 GiB of storage using the NAS storage driver, and this would be the mount point that it's going to use. So now we've issued that command and we've created a PVC and a pod. And the same thing for the SAN: here's the SAN pod and PVC. This one will actually use Alpine as the image, just to show that this is where your applications are contained, and it's going to use the SAN driver; so up here in the PVC we give it the SAN storage class, and we are requesting 1 GiB. A sketch of what the NAS PVC and pod look like follows below.
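For reference, a minimal sketch of a NAS PVC and an NGINX pod that mounts it, along the lines described above; the object names, storage class name, and mount path are illustrative and not necessarily the exact ones used in this lab.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-nas
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: storage-class-nas
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-nas
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: nas-vol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nas-vol
          persistentVolumeClaim:
            claimName: pvc-nas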
So now that we've created those, let's see if we're getting what we're expecting. Here are the PVCs that were created. We asked for one gig each, and when each PVC was requested, Trident went out to the storage and created a volume, so we've created two volumes: the one for the SAN is the volume ending in e06, as you see over here, and the one ending in 8440 is the NAS volume, and each one is 1 GiB as we requested. We've also created pods to give the user access to those applications, and those pods are assigned to worker nodes; the worker node is where the mount actually happens. So we'll go out to the worker node here and check for the mounts, and we see that there is in fact a mount point for each one of those, using the data LIF, for those PVCs. As you can see, this is the NFS one, ending in 8440, and then down here we have the iSCSI device for the SAN volume that was created.

Next I want to demonstrate what we talked about with expanding volumes as your application consumes more space; it's very simple to just request more space. What we do is update the PVC, because the PVC is the request to use a resource. Right now we've got one gig, so we're just going to go in and edit the PVC, the persistent volume claim, and ask for more space. Under the resource requests, we just change that to, let's say, three, and save the file. As soon as that is completed, the API call is sent. We can see that the PVC request, because we just updated it, is now 3 GiB, and it's already come back: the PV now registers 3 GiB, and the PV is created and bound to the PVC. Trident also has its own object to make sure that it's managing all the PVs that you create, and that's also registering 3 GiB. We can check the storage real quick and see that on the storage it is in fact showing 3 GiB. So we were able to increase the space pretty quickly and easily. Okay, that's volume expansion (the commands involved are sketched a little further below).

Okay, I think we talked about logging, so let's talk about logging a little bit, because helping you solve your problems by looking at logs is something that is near and dear to support. When you first install Trident, each pod will have its own logging, and it comes with the info level, which provides a good amount of information about the basics of creating and deleting your PVCs. That's out of the box, but in case we need a little more information, we can enable debugging. There are two ways of enabling debugging for the logs, which just increases the verbosity of the logs. One way is to increase it on the pods themselves. We do that by patching the TridentOrchestrator CRD, which has a flag for debugging. We enable that with one command, and that will enable debugging on all the different pods, as you can see here. When you do that, all these pods get updated, and they actually get restarted. So while that's running right now, you may see one of those; I don't think it's completed yet. Yeah, 18 seconds; this one hasn't restarted yet. There we go. You can see that the pods are restarting just to enable the debugging.

While that's running in the background, the other way to increase the debugging is to increase it toward the storage side. To collect verbose debugging on the API calls that are sent to the storage, you enable a debug trace flag on your backend JSON. We looked at this earlier; this is for the ontap-nas backend. You just enable the flag here, and then whenever you make an update to your backend JSON, you'll want to run an update on the backend, or you can bounce the main pod, and that will update the backend, resync everything with the storage, and also enable the flags that you want to enable. A sketch of the expansion and debugging steps follows below.
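The expansion and debugging steps just described look roughly like the following, assuming the NAS PVC is named pvc-nas, the TridentOrchestrator resource is named trident, and Trident is installed in the trident namespace; those names are common defaults rather than values confirmed from this lab.

    # Volume expansion: raise the PVC's storage request from 1Gi to 3Gi
    kubectl edit pvc pvc-nas
    # ...or patch it directly
    kubectl patch pvc pvc-nas -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'

    # Confirm the PVC, the PV, and Trident's own volume object all report the new size
    kubectl get pvc,pv
    tridentctl get volume -n trident

    # Debug logging on all Trident pods: patch the TridentOrchestrator CRD and watch
    # the pods restart as the change rolls out
    kubectl patch tridentorchestrator trident --type=merge -p '{"spec":{"debug":true}}'
    kubectl get pods -n trident

    # Storage-side API tracing: add a debug trace flag to the backend JSON, for example
    #   "debugTraceFlags": {"api": true, "method": true}
    # then push the change to the backend
    tridentctl update backend <backend-name> -f backend.json -n trident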
And then the last bit: all of that logging is available, but to make it simple, we have also created a tridentctl command that will collect all of the logs in one bundle, and that's tridentctl logs -a, for all. As you can see, it collects the logs for all of the containers, the AutoSupport, provisioner, and attacher containers that we talked about earlier, as well as a log for all of the worker nodes and the registration of those worker nodes with the controller. It collects all of that into one zip file that you can send to support, and that helps us with the analysis. So that is the end of the demo; please let us know if there are questions. I think there may be some questions there, so let me stop sharing and maybe we can get to the questions.

>> Yeah. So, there have been a few questions in the chat that we've answered, and I think Scott is answering another one right now. We did have a couple that came in through the comment section as well. One was about the recording of this webinar: yes, we definitely record these, and we'll send it out in the next couple of days. We also have a YouTube channel that I'm going to share in just a minute, so all of the webinars that we've done are posted there and you can view them at any time. One of the other questions was regarding licensing. So Scott, were you going to address that one?

>> Yes, I will. Licensing cost for Astra Trident is both an easy and a complicated answer. The easy part is that Trident itself is open source and therefore free. You can go to our GitHub page, github.com/NetApp/trident (there might be some capitalization in there, but you can search for it), and download and run Trident. The complicated part is that you will have to have NetApp storage of some type, and that costs money, and it varies. So I don't have a good answer for NetApp storage costs, but if you have NetApp storage, then you can use Trident for free.

>> Okay. Thank you very much.

>> Another question just came in about using Trident on an ONTAP simulator, or a NetApp simulator. I have not tried using a NetApp simulator, but as long as the NetApp simulator will answer the API calls, which I believe they do, then you can use Trident on a NetApp simulator.

>> Okay, thank you.

>> I think we have a couple more coming in. If we want to answer those, we can just read them out and answer them live; that may be quicker, and then we can type in an answer so that everyone sees it.

>> So, yeah, one question is: PVCs are creating FlexVols on ONTAP, and MetroClusters don't have that many FlexVols. I'm not sure what you mean by MetroClusters not having many FlexVols; I don't deal with MetroCluster a whole lot. Dave, do you know of any Trident installations on MetroCluster that you've encountered?
>> There are a lot of MetroClusters that we do deal with, but I don't know the answer to that one, so let us look that one up and we'll respond back to you directly.

>> Okay. So we'll reach out to you via email after the webinar once we have an answer to that. I'm also going to provide an email alias where, if you have follow-up questions, you can email us and ask us questions, and we'll get you the answers. So we'll take that one offline and respond back. There's also one in the chat.

>> So, as a follow-up to another question that I answered in the Q&A: how do I get Astra Control Center or Cloud Manager? Cloud Manager is in the cloud, so it is just cloud.netapp.com. Astra Control Center would be downloadable through the NetApp support site, so the best way to get that is to contact somebody on your sales team and work with them to get whatever licensing might be necessary for Astra Control Center.

Okay. And there's also one in the chat, not in the Q&A section: are there benefits to using Trident other than ease of use to provision storage for a DevOps engineer? Well, provisioning storage is what Trident does, and I can't think of anything else, Dave, but correct me if I'm missing something. Provisioning storage so that a DevOps engineer doesn't have to bug their storage admin for storage, and so that the storage admin can take snapshots of what the DevOps engineer is doing in case something bad happens, is a pretty good selling point in and of itself. Trident is used, and I've mentioned Astra Control Center so I'll throw it back out there, Trident is used in Astra Control Center. And to answer the other question that came in through the Q&A, about Trident automating the moving of container workloads between sites and/or to and from the cloud: that would be something that Astra Control Center would do, and Astra Control Center uses Trident as part of its workflow. So yes, Trident just provisions storage, but we've got other products that use that to do even more with it.

>> All right. Thanks, Scott. And I think that answers everything we have in the queue. So again, if you have questions, continue to put them in the Q&A so that everyone can see the questions and answers. For now, we'll continue on; I'll just have a few closing slides, and then we'll check for questions again, and that will conclude the webinar. So Shiv, can you go to the next slide? Thank you.

So, again, as I mentioned in the beginning, we do these webinars for you, our customers. You're going to receive a feedback survey along with the slides and the recording of the webinar. Please respond to the survey; we want to know how we can improve, what we did well, and again what we can improve on. We also want to know what items, what subjects, you would like to see presented in these webinars. We try to do these once a quarter. We haven't always been successful in that, but that is our goal, and the more content we have, the more we can get out there. So if there's something that you would like to see demonstrated or presented, please get that over to us, and we'll try to put those together for you. You can also email us. As I mentioned before, we do have an alias that you can email: it's ng-supportwebinars@netapp.com.
So if you have follow-up questions, or if you go back and watch the video and something pops up, or again if there's content you'd like to see, please send us an email. We also have our webinars out there on NetApp KBTV; the link is provided here and you will receive it as part of the slide package, so go out there and check out all the other videos that we have listed. And NetApp Insight 2022 is coming up November 1st through the 3rd. It is all virtual again this year, so it's super easy to access. I have provided the registration link for you there, so go out and register. We're going to have some always-on, on-demand videos from the support perspective, and we also have one of our sessions that you can register for and join; we've got them listed here. So again, when you get the presentation, you'll be able to see them and easily identify them, and we hope you'll go out and check out our videos and attend our session.

And that's really what we wanted to present to you today. I'll pause for a few seconds to see if we have any other questions come in. I think David is answering one; yeah, we've already answered that one about how to get Astra Control and Cloud Manager. Okay, so we'll conclude there and give everyone a few minutes back to go grab some lunch or prepare for your next meeting. Again, please be on the lookout for invites to our next webinar. We post them on our support site with a link to register there, and hopefully you'll also get an email. Please let us know what you'd like to hear. We appreciate your time, thank you for attending, and we hope you have a great day. Thank you.
Episode 8 in our NetApp Support webinar series. In this episode we give an introduction to Astra Trident.