All right, thank you everyone for joining today. Today we're going to have a session about Amazon Q Business and one of the new releases from NetApp: a connector that lets Q Business work with data sitting on ONTAP, whether that's FSx for NetApp ONTAP or ONTAP appliances on premises. We're joined today by Tariq Mohammed, a senior GTM specialist for GenAI here at AWS who covers the Amazon Q suite of products, as well as Robert Bell from NetApp. Robert is an AWS cloud product specialist. With that, I'll pass it over to Tariq to get us started. Thank you, Tariq.

Thank you, Sebastian. Pleasure to be here, and thanks for joining. We're going to talk a lot about generative AI today, and specifically about our collaboration with NetApp. If you don't know, NetApp actually holds 50% of the world's files; that's something Jensen Huang of Nvidia has shared with us. I don't want to steal your thunder, Robert, but we wanted to set the stage on the relevance of our collaboration. So let's kick things off. Economists, pundits, technologists: everyone is buzzing about generative AI, and many believe it will transform the way we work. Many people are saying that roughly 80% of work as we know it will be automated, and that for the remaining 20%, quality will improve by 10 to 100 times. So there's a lot of excitement around generative AI; I'm certain you've seen the commercials, whether from a consumer or a commercial perspective. So how does AWS play in the generative AI space? Robert, next slide.
Amazon provides a full stack of services that support our customers in generative AI. If you want to build a model yourself on top of chips, whether Nvidia's chips or AWS's own AWS Trainium and AWS Inferentia, you have that opportunity; customers and partners that do this include Anthropic. If you want to use our managed infrastructure to build and train models, you can do that via SageMaker; customers like Coinbase do. If you want to access existing models, you can do that via Amazon Bedrock; customers there include Bridgewater Associates and the New York Stock Exchange, and we have tens of thousands of customers leveraging Amazon Bedrock. And if you want to access Amazon's applications without having to worry about building, choosing, or accessing models, you can do so at the application layer, which is where Amazon Q Business sits, and that's what we'll chat about today. Customers like Principal Financial are leveraging Amazon Q Business to accelerate workforce productivity and answer questions for their customers much more quickly. Next slide, Robert.

There are four core tenets of Amazon Q Business. First, discovery: being able to find information more quickly to answer questions, whether those are internal questions, external questions from customers, or questions that come up in your day-to-day work. Then there's analyzing information, and taking action. Taking action means discrete actions: if you want to create, for example, a Jira ticket or a Salesforce opportunity, you can do so via Amazon Q Business and our plugins. And lastly, automation, as it relates to automating workflows.
So you can upload a standard operating procedure for a particular workflow, build out that workflow, and in turn have agents track its effectiveness and update it. Lots of capabilities, and the capabilities continue to grow. Next slide.

Q Business is built for the enterprise with security in mind. As you've heard many of our leaders here at AWS share, security is priority zero for us. Amazon Q Business respects your access controls: we respect the permissions in your identity provider, so if somebody does not have permission to access data before using Amazon Q Business, they will not have access to it while using Amazon Q Business. Furthermore, your data does not train the models that Amazon Q Business is built on top of; your private data is your private data. And lastly, we provide guardrails to ensure you're honoring and reflecting your organization's voice, preventing certain questions from being asked or certain answers from being returned. We want to ensure that guardrails and responsible AI are implemented for you, so that, again, the service reflects your organization's voice. Next slide, Robert.

Hundreds of customers are using Amazon Q Business today, and our capabilities continue to expand and increase. Next slide, Robert. We have more than 40 different connectors and 50 different plugins, and today we'll talk a little bit about one of the newer connectors, which Robert will highlight. Before we do that: we also have partners like Zoom who are leveraging our index, so that customers can access the same data they can access with Amazon Q Business from within, for example, Zoom calls, or Asana, and so forth.
The idea is that we want to be with you wherever you work. The partnerships continue to expand, the collaborations continue to expand, and the data connectors continue to expand as well. Next slide. And Robert, you can take it away here.

Thank you very much, Tariq. Again, I'm Robert Bell from NetApp, and I work with the AWS products that are co-developed between NetApp and Amazon. The topic here is really to emphasize the importance of bringing data into Amazon Q, and that's what I want to address with this specific example. As Tariq mentioned, there are many connectors, and this is a new connector that's just been launched, which I think might be very important, especially for large organizations that have data on premises. The challenge here really is getting the data into Amazon Q, or any other Amazon AI service, in a way that's simple but also secure. This is a challenge a lot of companies are facing now. These new services are coming out at a rapid rate and creating a whole new level of simplicity in accessing generative AI for organizations. In the past, building these applications would seem like something that would take lots and lots of resources, time, and money; with Amazon Q, it's all there. The whole complexity of building everything and putting everything together is taken care of. So it's really just a matter of deciding how you're going to use this generative AI, and I think one of the keys to that is what kind of data it's going to use as its data source, which is what gives it its unique value. So what I want to do is look at the way it's possible to overcome this challenge today in a very simple and straightforward way.
Let's start from a specific customer: a global company with data centers and employees around the world, and a lot of very valuable data collected from internal systems. This is not public information that can be downloaded off the internet; it's information they want to use to improve their customer service. This is something a lot of companies are looking into, whether for customer service, development processes, or business processes: taking the data that's already been collected, piles and piles of files with lots of information inside them, and having something very powerful that can go through it, create good summaries, extract the vital insights, and make it all very accessible to the employees providing the service. It's quite a simple example, but the constraint in a system like that is that the data is not in one place. Obviously, if it were all in one place, say in S3, it would be directly accessible. But when it's spread across different geographic regions and different types of on-premises file systems, it becomes more challenging. Here we're talking specifically about a customer with data stored in NetApp storage in the data center, so it would be NetApp hardware holding the data. And because of its sensitivity, this data, which might include information about clients or information that is private to the company, needs to be secured in many ways. Usually there are lots of security policies, privacy policies, and access controls in place.
The initial strategy, and this is what companies would do in the past, is to say: let's take this data and just copy it to S3, and then it can be accessed by the service. That was the accepted approach before these connectors existed. But moving all the data from file systems to S3 kind of flattens the data: if you've got data in directories with metadata and you put everything into object storage, you lose some of that metadata, and it has to be filled in later. You also have to keep the data in sync: because the data resides on premises, you have to keep copying it every time there's a change. There's also the issue that when files sit in on-premises file systems, the access controls are already configured, but if you move them to a new kind of storage service, you have to rewrite all of those access controls, which can be time consuming. And there's the issue of privacy regulations. There are regulations in each country, but usually they're all about not exposing any personally identifiable information, or PII, that exists in the data. It's very clear that this needs to be done, but the challenge is that often the owners of the data don't know how to extract that data, or don't even know whether there is private data inside these files. So given all that, what we're looking for in this example is a simple way of meeting all of these goals and getting the vital information that is on premises into Amazon Q, to be able to create value very fast.
Essentially, it's taking these files from different locations and aggregating them all into Amazon Q, following the model Tariq showed before: the data goes into Amazon Q, but permissions are also checked, so that when users ask questions they can only get information they are authorized to access, and any kind of private information is prevented from leaking out to users. The challenges here are, first, how do we get the data from these on-premises NetApp systems into Amazon Q? It's complex, and the data transfer can be costly and time consuming. Even so, if you're just creating a copy, or copying data to a different place, you've got multiple copies, which is an operational burden, because keeping those copies in sync, maintained, and secured is essentially a double effort. Then there's moving the data without losing the access controls: if it were possible to move this data with its access controls attached, that would obviously be ideal. And there's knowing whether the information you're moving contains any personally identifiable information. If there is any PII, like credit card numbers, customer emails, or customer addresses, it has to be removed before the data is even put into Amazon Q, because you don't want a situation where somebody can figure out personal information just through these chats. Those are the challenges; now let's see how we solve them with this new connector. The new connector is specifically for connecting NetApp storage to Amazon Q, and it really focuses on simplifying the process of bringing the data in so that it's completely seamless, without external tools, and without multiple copies.
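The permission model described here, where answers are drawn only from files a user is authorized to read, can be sketched as a filter applied at retrieval time. This is an illustrative toy, not Amazon Q Business internals; the `Document`, `allowed_groups`, and `retrieve` names are assumptions for the sketch.

```python
# Illustrative sketch of ACL-aware retrieval: documents are indexed with the
# groups allowed to read them, and retrieval filters by the asking user's groups.
# These structures are hypothetical, not Amazon Q Business internals.
from dataclasses import dataclass, field

@dataclass
class Document:
    path: str
    text: str
    allowed_groups: set = field(default_factory=set)  # taken from the source file's ACL

def retrieve(index: list[Document], query: str, user_groups: set) -> list[Document]:
    """Return only documents the user is authorized to read and that match the query."""
    return [
        doc for doc in index
        if doc.allowed_groups & user_groups        # ACL check first
        and query.lower() in doc.text.lower()      # toy relevance match
    ]

index = [
    Document("/finance/q3.pdf", "Q3 revenue summary", {"finance"}),
    Document("/hr/policy.docx", "Leave policy summary", {"hr", "all-staff"}),
]

print([d.path for d in retrieve(index, "summary", {"all-staff"})])
# -> ['/hr/policy.docx']
```

The point of the sketch is that authorization is enforced per file at query time, so the same index can safely serve users with different permissions.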
So you have just one main data source, which could be on premises, with a mirror copy that is automatically synchronized at all times, while also extending the security of the on-premises system to the AWS environment where the data is mirrored. All of these challenges together are what this connector needs to address, and obviously it needs to be easy to use and secure. At the core of this connector is an AWS service called Amazon FSx for NetApp ONTAP. This is one of the storage services provided by Amazon, designed with the NetApp ONTAP storage operating system built into it. That allows a seamless migration of data from any NetApp ONTAP system into Amazon, with the same access controls and the same feature set that exist on premises. So if you've got any kind of data in NetApp systems, it can be seamlessly moved to FSx for ONTAP, the Amazon service, and it resides in AWS with the same feature set it had on premises. This is at the core because it's the way we can bring the data seamlessly into AWS. Now, ONTAP itself, the operating system that runs on this storage, has built-in mobility features, and that makes it very easy to mirror data from, for example, on premises into AWS. That's done with a feature called SnapMirror, which does data mirroring: it takes the data that is on premises and creates an exact copy in AWS, but these copies don't have to be maintained separately. Everything that happens on premises is mirrored to AWS, so if this is an active file system that is used by the on-premises systems and constantly updated, those updates will be reflected in the copy.
The mirror copy in FSx for ONTAP resides inside the organization's AWS account. So that's one way of extending the on-premises data into AWS so it can be accessed by Amazon Q. There's another way, which is actually useful for very large data sets, and that's data caching with FlexCache, another one of the built-in features in ONTAP. You don't have to install any external devices; it's communication between one ONTAP system and another, between on premises and AWS, where only the relevant data is copied. The metadata is copied, so the systems that access the files in AWS can see which files are there, and when you want a specific file, only that file is copied. So if there is a very large data set, it could be petabytes of data, but only, let's say, 100 GB are actually needed, the result of this intelligent caching is that only those 100 GB are copied from on premises to AWS, saving time and storage costs. Yet it appears as a full file system that can be accessed by Amazon Q, and the files can be extracted. These built-in capabilities are supported on FSx for ONTAP and all of the NetApp systems, and they can be used with multiple sources: if you have global systems with file systems spread around the world, you can aggregate all of them into one FSx for ONTAP system and have all of the information there as the data source for Amazon Q. Now, the way it all fits together, and this is the connector I mentioned earlier, is with a tool called Workload Factory, a SaaS orchestration tool that glues everything together. It is the tool that creates the connector: it makes a connection between an FSx for ONTAP file system, where the files are.
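The caching behavior described here, full metadata visible up front but file contents copied only on first access, can be modeled in a few lines. This is a toy model of the FlexCache idea, not ONTAP's implementation; the `LazyCache` class and its methods are invented for illustration.

```python
# Toy model of FlexCache-style lazy caching: the cache exposes the full file
# listing (metadata) immediately, but copies a file's contents from the origin
# only on first access. Class and method names are illustrative.
class LazyCache:
    def __init__(self, origin: dict[str, bytes]):
        self._origin = origin                      # stands in for the on-premises file system
        self._cache: dict[str, bytes] = {}

    def list_files(self) -> list[str]:
        """Metadata is always visible, even for uncached files."""
        return sorted(self._origin)

    def read(self, path: str) -> bytes:
        """Fetch from origin on first read; serve from cache afterwards."""
        if path not in self._cache:
            self._cache[path] = self._origin[path]  # the only data actually transferred
        return self._cache[path]

    def bytes_transferred(self) -> int:
        return sum(len(v) for v in self._cache.values())

origin = {f"report_{i}.txt": b"x" * 1_000_000 for i in range(1000)}  # ~1 GB origin
cache = LazyCache(origin)
cache.read("report_7.txt")
print(len(cache.list_files()), cache.bytes_transferred())
# -> 1000 1000000  (full listing visible, only one file's data copied)
```

This is why a petabyte-scale source can appear as a complete file system to Amazon Q while only the files actually requested incur transfer and storage cost.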
That file system could be the source of the files itself, or it could hold files that have been mirrored from on-premises sources. Workload Factory connects it with Amazon Q so that files can be sent from FSx for ONTAP into a specific Amazon Q Business application, and they're sent in a way that includes the access controls. By looking at the directory services, each file is sent with its specific permissions, so Amazon Q knows exactly which user can access each file. In other words, when a user asks questions, it won't take any information from files that user is not allowed to access. That's one part of it; it's done through Workload Factory in the deployment process, and I'll show you a demonstration of that in a moment. Then there is the matter of data guardrails, which is optional, of course. If there might be any private information in these files, there's an additional tool that scans the files before they are sent to Amazon Q. Any file that contains PII, personally identifiable information, will be scanned, and the PII will be masked: removed and replaced with a tag showing that the information has been removed, and then passed on to Amazon Q. So if I try to find out somebody's credit card number and ask that question, Amazon Q will know the information has been removed. It doesn't have the information and can't pass it on, but it can specify that the information does not exist, and maybe even raise a flag if someone is trying to do something that is not according to policy. All of this is automated, with Workload Factory creating the connector. Once the connector has been created, it works automatically and stays in sync.
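The masking step described here, scan before ingestion and replace detected identifiers with a tag, can be sketched with simple pattern matching. The patterns and the `[REDACTED:...]` tag format are illustrative assumptions, not NetApp's actual scanning tool, which presumably detects far more PII categories than these two.

```python
# Minimal sketch of a PII-masking pass: scan text before ingestion and replace
# detected identifiers with a tag noting the removal. Patterns and tag format
# are illustrative, not the actual NetApp guardrails tooling.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a labeled redaction tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

doc = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(mask_pii(doc))
# -> Contact [REDACTED:EMAIL], card [REDACTED:CREDIT_CARD].
```

Because the tag survives in the ingested text, a downstream assistant can state that the information was removed rather than silently having no answer, which matches the behavior Robert describes.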
What I'd like to do now is show a short demonstration of how this works, and then I think we'll have some time for questions. If there are any relevant questions, feel free to interrupt me; otherwise we can take questions after the demonstration. This is an example of how this connector would be deployed, assuming the data has already been migrated or copied to Amazon FSx for NetApp ONTAP, so it's in the file system in AWS. Now we need to create the connector. This is done through the AI section of the orchestration tool, Workload Factory. You start by choosing Amazon Q. Let me just pause the video for a moment: this tool can create a connector for Amazon Q, and it can also create a knowledge base for Amazon Bedrock, so there are different options, but here we're looking at Amazon Q. Next, the connector needs to be defined. I'm choosing Amazon Q, adding a name for the connector, and choosing the region where the connector will exist, which will obviously be the region where the data is. When I'm configuring it, I can choose whether there will be data guardrails, which is the privacy scanning, and when I click the Create Connector button, it starts building the relationship between the data source, the FSx file system, and Amazon Q. In this case, I have chosen the specific application.
Before I start adding the data, I want to go to the Amazon Q application itself. The application is there, but it doesn't have any data yet, and I'm going to ask a question: what is Mauve? This is something that will be in the information I'm about to add, but at the moment it says it cannot find any information. Now I'm going back to Workload Factory, and I'm going to choose the specific place where the data is. There could be lots of folders, and within these folders I can choose a specific file or all of the files. In this case, I'm choosing a file with FDA reports about different drugs, one of them being Mauve, the subject of the question I asked before. When I add this data, the file is processed and analyzed, which takes a few minutes. Once it's processed, I can go back to the same application, ask the same question, and now it looks at this information and gives me the correct answer, with the data source cited within the text. If there were multiple files, it would show multiple numbers indicating where the information came from. I did speed this up a little, but this whole process really takes a matter of minutes to set up. All of the heavy lifting is done behind the scenes: connecting the files, transferring the files, making sure the permissions are correct, and then, if a file changes in any way, keeping these files in sync all of the time. All of this is done automatically. From the user perspective, you can create multiple applications and have multiple data sources.
And I would say the most important thing here, because of this simplicity, is that it's something you can start playing around and experimenting with. Once you have the information inside, you can see what kinds of services you can use it for, or what kind of value you can create, either internally for the organization or as services provided to end users and customers. And the advantage is that the data is brought in in a secure way, so there are no privacy or access concerns to worry about: only users who can access the information will get those answers. So this is something you can safely start experimenting with to get ideas for what kinds of applications can be built.

Robert, let me interrupt you here for a second; we've got a couple of interesting questions. Omid is asking: is it possible to integrate Amazon Q with on-premises NetApp CIFS/SMB shares? Yes. It's not a direct connection, so the on-premises system cannot send the information directly to Amazon Q, but this is done with FSx for ONTAP. You spin up FSx for ONTAP, and once it's there, you create a SnapMirror relationship, which is relatively simple to do. Once that's done, the relevant data is in FSx for ONTAP, and then, using the connector I just demonstrated, the information can be used. So essentially the answer is yes; there's just another step in between. It's not a direct connection, but the data goes from on premises to FSx for ONTAP in AWS, and from there it's directly accessed by Amazon Q. And this is true for both NFS and CIFS.
They're both supported: all of the protocols supported on premises are supported in FSx for ONTAP, so multiple versions of NFS and multiple versions of CIFS. Anything that runs on premises will run in Amazon in exactly the same way, and then it is sent to Amazon Q through an API. So basically any kind of file that is supported on ONTAP can be sent to Amazon Q, although it's mostly focused on the text and PDF files that Amazon Q can analyze for the purposes of the large language models.

Awesome. We also have a question from Syed for Tariq: how soon will Amazon Q also deploy the resources in the respective AWS subscription, rather than only listing the services? Yes, I believe that is actually a capability that Q Developer is either working on or will be available soon. That's more of a Q Developer capability than a Q Business one, but we're certainly happy to set up a time, take that offline, and unpack the question a bit more.

Awesome. And Robert, one more question, from Aman: can this be used with NetApp CVO as well, or is it just for FSx? Yes. It's the same answer I gave before: CVO is an ONTAP system, a self-managed ONTAP system deployed by the customer. The data from CVO can be mirrored and cached between CVO and FSx for ONTAP. Because it's the same ONTAP operating system at the core, you create a SnapMirror relationship, which means any data on CVO will be mirrored to FSx for ONTAP. Again, not all of the data, just the data you choose, at a per-volume level. And once it's mirrored, it's the same process we just saw, whether from CVO or from on premises.
So really, essentially, any ONTAP system. It can even be between FSx systems in different regions, if you want all of the data aggregated in one place. For practical reasons, I think if you're going to deploy Amazon Q, you want to make as many data sources available as possible, so all of the relevant data, from whatever source, is there. If those data sources are on ONTAP on premises or on CVO, you aggregate the necessary data into a single FSx for ONTAP system and have that feed into Amazon Q. Once this connector and the infrastructure are in place, there's nothing more you need to do; it all happens automatically. You don't have to maintain the system; it's fully automated, and I think that's one of the advantages here.

So I think what you're saying is: regardless of the source, as long as the data is sitting on ONTAP, you can either create a cache with FlexCache between that source data and the system you want to connect with Amazon Q Business, or mirror it, with FSx as the ultimate data source connected to Q Business. Correct. And I would say that in most cases you can just mirror the data, because the kinds of things companies start off with are typically files with a lot of text in them, and those don't take a lot of space, so mirroring them isn't really an issue. But for huge data sets with very large capacity, if only a certain portion of the data is required, caching can be used instead of synchronizing all of the data, to save time and cost. Both of these options are valid.

Awesome. That's the end of the questions we have in the chat. Okay, so I'll just go back to the slides.
This is really just to summarize what I spoke about and showed. These challenges are real; we speak to a lot of our customers, and these are real challenges that come up, because there are many issues when it comes to moving data from one type of environment to another. Once you have to start changing the data in any way, things can stop working properly, whether it's the structures, the directories, or all of the rules built around this data. So that can be a challenge, along with the effort of moving the data and sometimes even the cost of having all these systems synchronizing, moving, and creating multiple copies. This is solved with the ONTAP ecosystem: FSx for ONTAP is an AWS service, but it is part of the ONTAP system, so essentially you're extending the on-premises system to AWS while keeping all of these features and capabilities. That eliminates the complexity, and it also reduces the cost, because it's all done storage to storage, with no external tools required. That's the first part. The second part is the importance of not having to maintain duplicate copies. We all know this: once you have a second copy of data, keeping the copies in sync and applying the same management to each of them is a real headache. So it helps to have a system that does that automatically for you, not only syncing the data between on premises and AWS, but also, whenever there's a change, sending that change to Amazon Q: a file that has been sent to Amazon Q will, once it changes, be replaced by the new version. That is done automatically with this connector. Then there is the issue of access control.
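The automatic re-sync behavior described here, where only changed files are re-sent to Amazon Q, is essentially change detection by content comparison. A minimal sketch, assuming hypothetical `source` and `ingested` stores standing in for the file system and the ingested index; this is the idea, not the connector's actual mechanism.

```python
# Sketch of change-detection re-sync: compare content hashes between the source
# and the ingested copy, and re-send only files that are new or changed.
# The sync function and both stores are illustrative assumptions.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sync(source: dict[str, bytes], ingested: dict[str, str]) -> list[str]:
    """Return paths that are new or changed, and update the ingested index."""
    changed = []
    for path, data in source.items():
        h = digest(data)
        if ingested.get(path) != h:
            ingested[path] = h            # stands in for re-sending the file
            changed.append(path)
    return changed

source = {"fda_reports.txt": b"v1"}
ingested: dict[str, str] = {}
print(sync(source, ingested))   # -> ['fda_reports.txt']  (initial ingest)
source["fda_reports.txt"] = b"v2"
print(sync(source, ingested))   # -> ['fda_reports.txt']  (changed, re-sent)
print(sync(source, ingested))   # -> []  (already in sync)
```

The payoff is that repeated sync passes are cheap: unchanged files cost a hash comparison, not a transfer.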
I think that's a critical issue, because if you've got multiple directories, each with different access rules, maybe even on a per-file basis, with different departments and different types of users, having to recreate all of that from scratch is an effort, and there's a risk of making mistakes and allowing incorrect access. Once it's all in place in the directory service, that service is used when the information is sent, so the access controls are preserved all the way. And finally, there's the issue of sensitive information, which I think can be a showstopper for some organizations: not knowing whether you've got sensitive information will prevent you from moving that data anywhere. But if you have a tool that can scan the data and create a sanitized copy that is safe to use in any format, that really overcomes the roadblock. So that, in a nutshell, is what this newly introduced connector does. And now maybe I'll pass back to Tariq to summarize how these two things, the connector and Amazon Q Business, fit together.

Yes, thank you, Robert. Today, anybody with a credit card can access a chatbot and use generative AI capabilities. But as you learned today, and as you likely already know, the value of generative AI lies in accessing the right data. One of the design patterns that has taken hold in the market is RAG, retrieval-augmented generation, and Amazon Q Business is a one-stop-shop RAG-as-a-service for you to move quickly in deploying solutions and streamlining workforce productivity. Some of the other reasons hundreds of customers are leveraging Amazon Q Business include the productivity gains, speed to market, accuracy, and security.
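The RAG pattern Tariq names can be illustrated in miniature: retrieve the most relevant private documents for a question, then ground the model's prompt in them. The word-overlap scoring and prompt format below are simplified stand-ins, not what Q Business actually does.

```python
# Minimal illustration of retrieval-augmented generation (RAG): pick the most
# relevant documents from a private corpus, then build a grounded prompt.
# Scoring and prompt format are toy stand-ins for a real retriever and LLM call.
def score(question: str, doc: str) -> int:
    """Toy relevance: count of shared lowercase words."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question: str, corpus: list[str], k: int = 2) -> str:
    top = sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "Mauve is approved for treating seasonal allergies.",
    "The cafeteria menu changes weekly.",
    "Mauve dosage guidance was updated in the latest FDA report.",
]
print(build_prompt("What is mauve approved for?", corpus))
```

In a real deployment the retriever is the managed index, and the grounded prompt is what gives the model access to private data it was never trained on.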
There's also our ecosystem of partners and integrators, including NetApp, among others. And again, the security: honoring the access controls and providing guardrails that you can leverage to maintain your organization's voice. We'd love to chat more if you have additional questions, and we're happy to provide further direction as well. I'll pass the baton back over to Sebastian. Yeah, we'll stick around for a couple more minutes in case people have questions. If you do have a question, drop it in the chat, or at this point you can come off mute and ask it directly. We'll give it a couple of minutes, and if there are no questions, we'll close the call. Thank you so much for joining us today. Hi, my name is Sayid, I'm from MBARI, and I have one question. First of all, thanks to Robert and Tariq for a beautiful session. I just wanted to understand: once we're connected to Amazon through ONTAP FSx and our data is reaching Amazon, how can we integrate that with any Bedrock service we're leveraging in AWS? Tariq said Amazon Q is, I just slipped the term he used, basically "as a service." So can we leverage any of the Amazon Bedrock models with the same data that's in Amazon Q, and process it further, for example for training models or building agents? So, Amazon Q Business abstracts away the need for you to select a model via Bedrock.
But you can use the same index for the Bedrock models you're leveraging as you do for the Amazon Q Business service, and we can talk with you about that in greater detail if you're interested. The shared index is more cost effective, but it also helps with scaling, because that one index gives you access to the various data repositories. There's a bit more detail we can unpack, but the short answer is: think of Q Business as abstracting away the need to select models via Bedrock. Q Business is effectively built on top of Bedrock, and we're doing the model routing, or prompt routing, for you. And if you have one use case that's more complex, and another that's less complex where you want to use Q Business as RAG as a service, you can use the same index for both of those use cases. Thank you, thank you. Yeah, and I just want to add to that from the data source perspective: you can use the same data source for Amazon Q as you can for Amazon Bedrock. So if you've got multiple applications, some running on Q and some based on Bedrock, you can use the same data sources, and basically the same RAG behavior will work in both. Especially if you're bringing in on-prem data, as we showed in this session, I think it's very useful to have that flexibility. Each of these services has its own benefits and use cases, and a lot of organizations are going to be using multiple services, in which case it all fits together really well. Thank you, Robert. Thank you.
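The shared-data-source point can be illustrated with a small, model-agnostic step: once passages have been retrieved from the common index, the same grounding logic can feed either a Q Business application or a prompt sent directly to a Bedrock model. This is a generic RAG sketch, not a depiction of either service's internals, and the prompt wording is our own.

```python
def build_rag_prompt(question: str, passages: list[str], max_passages: int = 5) -> str:
    """Assemble a grounded prompt from passages retrieved out of a shared
    index. The same retrieval results can back a managed Q Business app
    or a do-it-yourself Bedrock call; only this last formatting step and
    the model invocation differ between the two paths."""
    context = "\n\n".join(
        f"[{i + 1}] {p}" for i, p in enumerate(passages[:max_passages])
    )
    return (
        "Answer the question using only the context below, "
        "citing passage numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Keeping retrieval and prompt assembly independent of the model is what makes the "same data, multiple services" flexibility discussed above practical.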
All right, well, if there are no other questions from the audience, thank you very much for joining us today. We really appreciate your time, and if you have any further questions, please feel free to reach back out to us.
Explore how Amazon Q for Business integrates with NetApp to simplify data management, enhance generative AI, and accelerate enterprise productivity.