Hey everyone, thank you for joining my session today. In this session, titled Raising the Bar on Your Object Storage Needs, we will explore the exciting new enhancements in StorageGRID, both in hardware and software, and discuss how these can bring value to your business. It has been a significant year for StorageGRID, and I'm excited to share these updates with you. My name is Jonathan, and I'm a product manager for NetApp StorageGRID; I've been working on this product for over five years. I initially started as a Technical Marketing Engineer, where I wrote technical content and assisted pre-sales with StorageGRID design and sizing. I have since transitioned to product management, focusing on product vision and strategy, and I really enjoy engaging with customers and partners to understand their needs and translate them into the product. Now, this is the recording for one of the INSIGHT 2024 sessions, so this is the standard confidentiality notice, saying that anything shared in this presentation is confidential and should not be disclosed, and the timing of any future releases is subject to change. To set the stage for this session, what I want to bring top of mind is that unstructured data is one of the fastest growing areas of data today, and will continue to be for the foreseeable future. Many businesses use a data strategy which involves having a large central repository known as a data lake, and this really helps break down silos and aggregate your data from multiple sources. Object storage is going to be one of the most effective underlying technologies for these vast data stores, and that's why we've seen StorageGRID being adopted by more customers, at larger scale, running more workloads. So there is no better time to be a StorageGRID customer to catch this wave. 
And for those that have already been our customers for a long time, we'll also be sharing some of the latest software feature updates and enhancements we've made to our product, and showcase how those can bring value to your business. So again, like I said, it's been a big year for StorageGRID. We had our 11.8 release in February of 2024. We also launched our new appliances in May 2024. We introduced BlueXP integration in August 2024, and our upcoming 11.9 release is expected in October/November of 2024. All right, so let me just cover the agenda for today's session. It is a 45-minute session. First, we'll start off with an introduction to the object storage landscape and market. Then I will introduce StorageGRID as a product and how it helps your object storage needs. Then we'll discuss the new and upcoming StorageGRID features and how they align with your business values. And to wrap it all up, I will provide a case study showing how StorageGRID evolves and matures with your business. So as mentioned, object storage is the new approach to solving data growth problems. Object storage is ideal for large unstructured data sets, and it's designed for scale. It comes with a few key benefits: high scalability, high durability, and metadata management. It is also a cost-effective form of storage compared to file and block, as well as accessible; it's a simple S3 protocol based on HTTP. So we are seeing these escalating volumes of unstructured data drive revenue growth in the object storage market; IDC estimates a 13.4% CAGR from 2022 to 2026. What does this imply? It implies that organizations are increasing their spend on object storage solutions as well. Businesses are seeing value in their investments, so they're seeing a return on investment for purchasing object storage. So how is this reflected in the market, and how have the use cases transitioned? 
Well, object storage was initially for more compliance and archiving use cases. The traditional use cases were system backups, and they followed a write-once, read-never workload pattern. The value proposition for an object storage vendor back then was purely focused on how cheap and deep you could be, so focused on capacity scale as well as cost efficiency. But now there are more modern use cases coming up. Cloud-native applications use object storage as their primary data model, and we've seen how the cloud has grown as well. Data analytics and AI also leverage object storage for large-scale data lakes as well as for training data. And we are seeing this actually within our product itself as we grow our third-party validations in the analytics space, for example with lakeFS or Dremio. So as object storage becomes increasingly central to your data infrastructure, moving from secondary storage to something that's primary, there are a few additional challenges you'll face. First is security, which is paramount and remains a top priority for any business. Second is going to be performance: can your object storage, now that it's facing primary workloads, support modern workloads and multiple workloads simultaneously? Cost is also going to be a challenge, not only from an initial purchase cost but also from an operational cost perspective. And finally, flexibility: this object storage platform has to be adaptable to evolving requirements and provide freedom to adjust and optimize without feeling locked into a bad object storage implementation, for example. So given these challenges, it's going to be essential to choose a reliable and proven object storage solution, and this is where NetApp StorageGRID stands out as an industry leader in the object storage market. So what is NetApp StorageGRID? NetApp StorageGRID is built to meet the demands of today's digital landscape. 
It is a scale-out, active-active, global namespace object storage solution, meaning it can support multiple tenants and multiple workloads. You can write from one part of the continent and read from another part of the continent with consistency. And I want to go over some of the key attributes before I dive into the specific features. First, StorageGRID is flexible. You can start small with a software-defined approach and grow to massive scale with more high-performance applications, and as you go through this journey, you can mix and match virtual machines and appliances. Durability is also going to be important for object storage. StorageGRID has a distributed architecture with no single point of failure, so we can tolerate multiple drive failures, node failures, or even an entire geo-distributed data center failing, or what we call a site. It also provides extreme durability, up to 15 nines, with what we call erasure coding. And finally, it's cloud integrated. Integration with public cloud platforms is crucial, especially for AI/ML applications, because the cloud does provide a lot of these tool sets. So with StorageGRID supporting various cloud integrations, we support notification, we support replication, and we also support tiering data out to the cloud. This allows StorageGRID to manage cloud-based data and local data within a single unified namespace, and that helps eliminate data silos. So NetApp StorageGRID has over a decade of object storage leadership, and we are constantly introducing new features and enhancements to stay ahead of the market, continually raising the bar. Now, StorageGRID is also a key component in the NetApp portfolio, serving as the enterprise object storage platform. We seamlessly integrate with other NetApp services and products such as BlueXP, and BlueXP hosts a variety of services including protection, so this is your backup and your recovery as well as your replication. 
It also supports mobility services, so tiering data to S3 and copying and syncing data to S3, as well as analysis, so supporting classification, a digital advisor, and a sustainability dashboard. All of these integrate with StorageGRID today. These integrations ensure that NetApp offers a cohesive and powerful ecosystem, and really enhance the overall functionality and efficiency of your StorageGRID system. Now, as a product manager, I did want to talk a little bit about the core business needs, right? It's essential to focus on what a business needs, so that you can guide the development of your product roadmap as well as your strategy. Well, businesses require intelligent data infrastructure, and I want to showcase how StorageGRID delivers on multiple fronts. But first, let's talk about the business needs. First, you want your infrastructure to be manageable, right? You want it to simplify administration and monitoring. You want to eliminate these infrastructure silos. You also want your object storage to be cost efficient: can it optimize expenses with efficient storage solutions? You also want it to be scalable: can it easily expand to meet your growing data demands? Can it onboard new applications effortlessly? Fourth is going to be performance. You want to be able to ensure that you can onboard new workloads, ensure fast and reliable data access, and meet the high performance requirements of these next-generation workloads. And finally, security: being able to protect your data with robust security measures like ransomware protection. By excelling in these five areas, StorageGRID can really drive your organization towards greater success. So now that I've covered the business needs, let's discover the key StorageGRID features that empower you to meet them. This is essentially going over all of the enhancements we've made over the past year. 
So the first feature to discuss is StorageGRID's information lifecycle management policy, also what we call ILM. When you are managing hundreds of petabytes of data, it is challenging, and you would want an automated policy to control the data effectively. And that is exactly what ILM does. ILM is StorageGRID's primary method for managing data throughout its life cycle, from creation all the way to deletion, and it allows organizations to define rules that govern how your data is stored, protected, and accessed over time. And this is all done via configuring rules and policies that are automated and object granular. So, some key benefits of ILM. First, it's going to optimize your storage utilization by ensuring that your data is stored in the most cost-effective manner. As an example, when you newly ingest an object, you may want to create three copies of it, one copy per site. This is to ensure the applications and the users have a local copy. Once these objects age out, after 30 days, you may want to erasure code them, so re-protect them as, let's say for example, EC 6+3. And that essentially reduces your overhead by half. Furthermore, objects older than six months you may want to tier out to the cloud, to AWS Glacier, further reducing your storage overhead and your storage costs. A second key benefit is to enhance data access, so ILM can ensure that your frequently accessed data is stored on your high-performance appliances. As an example, if you have a grid with an all-flash site as well as an HDD site, you could create a rule such that any objects accessed within the last seven days are stored on your flash storage, and objects not accessed in the last seven days are automatically moved to your hard disk site. ILM can also help you meet regulatory requirements by automating data retention, data sovereignty, as well as deletion processes. 
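To put rough numbers on the replication-versus-erasure-coding trade-off in the example above, here is a minimal sketch. The 3-copy and EC 6+3 schemes are the ones from the example; the helper function is just illustrative arithmetic, not anything from the product.

```python
def storage_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw capacity consumed per unit of logical data for a k+m erasure-coding scheme."""
    return (data_fragments + parity_fragments) / data_fragments

# Three full replicated copies, one per site: 3.0x raw-to-logical overhead.
replication_overhead = 3.0

# EC 6+3: nine fragments stored for six fragments' worth of data -> 1.5x.
ec_overhead = storage_overhead(6, 3)

# Moving aged objects from 3-copy replication to EC 6+3 halves the overhead.
print(replication_overhead / ec_overhead)  # 2.0
```

This is why aging objects from replication into EC 6+3 "reduces your overhead by half" as described.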
So for example, if you are in an EU region, managing servers and sites within the EU, you could have a rule that says: every object tagged with location equals EU, store only in the data center in the EU, to comply with GDPR. And finally, ILM is always going to be agile and responsive. As you scale your grid, as you add nodes or sites, you can reconfigure ILM and it will easily adjust to accommodate the data growth. This flexible policy management really ensures your storage system remains agile and responsive to any of your evolving needs. So at this point, I think it's clear that ILM is critical to your intelligent data infrastructure; it really simplifies your operations and reduces the complexity of managing large volumes of data with a single policy. Now, given that it is a single policy, we have had some customer feedback about some challenges of ILM, so I do want to spend some time and talk about that. First, your rules can get complex. If you're managing a single active policy in a diverse, multi-tenant, multi-workload environment, you could have 1,000 tenants, 10,000 tenants. You may want a rule for each one, and that can get quite complicated because you're trying to tailor a rule per tenant and per workload. Obviously, there are ways to mitigate this by applying multiple tenants to a rule, but this can cause you to update rules frequently. And if an error is made to an ILM policy, all of your objects can actually get affected, and that can increase the I/O of your system. And finally, ILM is solely controlled by the grid admin, so tenants have no visibility into ILM. If you are a service provider providing S3 as a service, the tenants may actually want to choose their own appropriate level of protection based on how much they're paying, but today they don't have any visibility. So that's why in 11.8, which we released in February 2024, we've introduced the concept of ILM policy tags. 
So ILM policy tags are a powerful enhancement designed to simplify data management for these multi-tenant and multi-workload environments. So what are ILM policy tags? Well, ILM policy tags allow admins to have multiple active ILM policies; these are kind of like micro ILM policies. You could create several; for example, in the picture I have here you have gold, silver, and bronze. So instead of relying on a single policy, you can create multiple active policies tailored to your specific needs. Now tenants can actually choose the desired policy tag for their data. For example, when they create a bucket, they can tag that bucket with a specific policy tag, and this gives them insight and control over their data management policies. So what are the benefits of ILM policy tags? First, there's more granularity, right: you get more granular data management. And second, you also reduce the risk, or the blast radius, of policy errors. So this really showcases, I think, one of our customer-driven innovations in response to customer feedback, recognizing that object storage environments are increasingly growing in scale; having this simplification really helps our customers. Now, in addition to ILM policy tags, we've actually made two significant enhancements to StorageGRID's ILM cloud tiering capabilities. These are coming from our upcoming 11.9 release, which will be coming October/November of 2024. The first enhancement we've made was support for secure temporary tokens. For ILM to tier data to the cloud, the first thing you have to do is configure a connection to a cloud bucket, which means, let's take AWS for example, you would have to create a bucket there, get the keys, and then submit them into StorageGRID. Now, if you were trying to implement a key rotation policy, you can see some of the overhead that's required. 
If your keys rotate every four months, for example, every four months you'll have to go create new keys in Amazon and update that configuration within StorageGRID. So what we've done with the Secure Token Service support is that StorageGRID can now actually talk to AWS STS, which will generate a temporary key that has its own expiry policy, and StorageGRID will automatically retrieve those short-lived credentials from AWS and use them for authentication. So this greatly reduces the overhead to implement a key rotation policy, and a key rotation policy is kind of mandatory in businesses today, because you want to reduce the impact of compromised keys and align with security best practices. The second enhancement we've made to ILM tiering to the cloud, also another security enhancement, is tiering to object-lock-enabled buckets. Previously you were not actually allowed to tier to object-lock-enabled buckets; now we've allowed our customers to do that. What this does is ensure your data is protected by object lock at all stages, from on premises to the cloud. So with the introduction of STS support and tiering to object lock buckets, we've really enhanced the security and compliance of data tiered out to the public cloud. We've made multiple enhancements to ILM because it is a critical aspect of the StorageGRID architecture. ILM defines how data is managed over time, and it really delivers savings in both storage footprint as well as operational costs. Building on the topic of buckets, I know we talked about buckets in the context of AWS, but within StorageGRID, within our own buckets, we've also made some enhancements in 11.9. First is going to be a scalability enhancement: we've increased the bucket support per tenant to 5,000. This is a 5x increase from the previous version, and the purpose of this feature is really to enhance the support for applications that use multiple buckets within a tenant, so to allow them to create more buckets per tenant. 
The second is a manageability enhancement: we've now added the concept of bucket limits. A tenant admin can now specify a max logical capacity for each bucket, and the purpose of this feature is to prevent overprovisioning and ensure that resources are allocated efficiently. So now I wanted to take a quick segue from software features. We'll get back to more later, but I want to discuss some of the hardware appliances, because we have made one of, if not the, largest hardware refreshes at NetApp today. All right. So these new appliances really support our growth in primary-only object storage use cases, and the target here is to accelerate the rollout of all of these AI/ML applications, as well as data lakehouses, which are increasingly being deployed on object storage. We've upgraded the hardware, including the CPU, the memory, and the network, in all of these appliances, and when compared to their predecessors, we've seen up to 30% performance improvement. All of these appliances require 11.8 to run. So let me quickly highlight each of these appliances. The first is our SGF6112. This is our all-flash appliance, and it really adds a new dimension to the appliance portfolio. We launched it initially with performance flash drives, so this is TLC; this is for your high-throughput, lower-latency type of appliance. But now we've also made it available with QLC drives, and this makes it one of our highest density as well as most sustainable platforms within StorageGRID today. The next appliance is the SG6160. This is our high-performance powerhouse; it is a perfect balance of performance and capacity. It has high performance because we put in a dedicated 1U compute node, which hosts flash drives for caching the metadata as well as some of the object data. But it's also high in capacity because it's a 60-drive chassis, and as well, we allow you to put two expansion shelves on top of it. 
So a single node can actually scale up to four petabytes of capacity. Now, when you're looking at an entry point into object storage utilizing an appliance, there is no better appliance than the SG5812. This is our entry-level workhorse. We also have the SG5860, which is the 60-drive variant of the SG5812, a higher-capacity, more cost-effective option. For the SG58 series, the compute and storage are combined together in a single box. And finally, we have our services appliances: the SG1100 and the SG110. Both of these had their CPU and their networking upgraded. The SG1100 is suitable for large deployments needing high performance; they run our services such as our admin as well as our gateway nodes. The SG110 is the entry-level option, more suitable for your initial, small to medium sized deployments. For full specifications and drive sizes, we have a datasheet online you can refer to. I want to double-click a little bit on the SGF6112, because we've also made recent software enhancements to it. So here's a little bit more of a deep dive into the SGF6112. It's a 1U form factor and supports 12 SSDs. For the TLC drive sizes, it ranges from 1.9 TB up to 15.3 TB. We also support FIPS drives, in 3.8 TB and 15.3 TB. And like I mentioned, we support QLC drives, starting with the 30 TB, but we're always looking to qualify larger drive sizes as they come to market. One of the interesting things is that this launched as one of our first non-E-Series platforms, in May 2023. So in 11.8 we've made some important enhancements. First, we've improved the performance, via a software enhancement, by up to 40%. We've also added a drive management UI, so this helps you manage your drives, view the layout, and do your basic troubleshooting and maintenance of drives. We've also introduced local key manager support, which allows secure management of your encryption keys without relying on third-party key managers. 
This feature actually applies to our services appliances as well. In 11.8, we've also introduced the concept of a metadata-only node, and the SGF6112 acts as our metadata-only appliance. So let me explain what a metadata-only appliance does, what it means, and how it can help you optimize workloads. In the context of our storage nodes, we have metadata as well as data, so storage nodes can provide capacity for both metadata and data. Metadata capacity controls the maximum number of objects that can be stored. Metadata is always stored on our SSDs, and it's used to track the locations of all objects across the grid, so there's also some metadata processing involved. The second part is our data capacity, and this controls the maximum amount of data that can be stored. So when you think of the role of a metadata-only node, its job is to increase your metadata capacity as well as improve the processing of your metadata. And our data-only node purely stores object data; it doesn't participate in metadata processing and doesn't store any metadata. So an enhancement we've made in 11.8 was first to introduce the concept of a software-defined metadata-only node. This was kind of a way for us to initially introduce it to the market for specific customers. And in 11.9 we further enhanced this by first having a metadata-only appliance, and we also introduced the concept of data-only nodes. This is the other side of the coin: where you have metadata-only, you also have data-only. So why would you want to split metadata and data, right? This is for specific use cases only, and let me cover two use cases on why we would do this. First, it helps speed up your mixed-node performance. So mixed nodes refers to either mixing your next-generation hardware that I talked about, which has more processing power, with older hardware, 
or it could be any differentiation between two nodes that have different performance, so for example an all-flash appliance and an HDD appliance. In a mixed-node scenario, the slower metadata processing nodes can actually drag down the overall performance of your grid. So let me give you an example. Say you purchased SGF6112 nodes; these are your all-flash. It doesn't mean you're locked into SSDs to maintain performance. You can actually add SG5812 and SG5860 nodes, which are the slower hardware, and optimize them by deploying them as data-only. Now, the second use case is really to improve small-object workloads. By adding a metadata-only node, you can enhance the small-object capacity without necessarily increasing your data capacity, and this can lead to a more balanced grid and improved cost efficiencies. For example, if your workload is small objects, you may find yourself running out of metadata earlier, so you want to expand with a metadata-only node to grow only that vector of your capacity. So the metadata-only nodes and the data-only nodes, these advancements provide customers with the tools to optimize the performance, scalability, as well as the efficiency of their storage infrastructure, and this helps meet the rising demands and diversity of your object storage workloads. The normal deployment, where nodes contain both metadata and data, will suffice for the majority of use cases. So, just as we have enhanced our storage nodes, we've also enhanced some of our load balancer nodes, and I want to talk about some of the load balancer enhancements we've made in the recent releases, 11.8 and 11.9. First of all, to provide some context, what is a StorageGRID load balancer? Well, they help us in multiple ways. First, they optimize our traffic management: they figure out where traffic needs to go and to which node. They integrate natively with our storage infrastructure, and they provide high availability and fault tolerance. 
Of course, they can be software defined, but most commonly our customers use our services appliances, so this is the SG1100 and the SG110. Load balancers are also fundamental to driving a secure infrastructure through the concept of endpoints. So for example, you have an S3 endpoint; you're actually able to restrict that endpoint to specific IPs or tenants, for example. Additionally, on that endpoint you can apply QoS policies, and we call these traffic classification policies, to prevent network congestion and ensure that no single application talking to that endpoint can monopolize the network resources, for example. So one of the enhancements we've made in 11.8 is to extend load balancer endpoints to management traffic. Initially these were only available for S3 traffic. Now, what are the benefits of this? One is that you can now apply all of that security I talked about, restricting the endpoint access and providing QoS policies on that endpoint, to your management traffic. A good example of this is you can apply a QoS policy to a management endpoint and protect your Grid Manager interface from denial-of-service attacks, for example. Now, the second benefit is that this actually increases the flexibility of our port usage. Our customers had requested for S3 traffic to sit on port 443. 443 is your default HTTPS port, meaning you can just specify the domain name; you wouldn't have to specify a specific port. And to do that port remapping, because 443 was sitting on our management port, they actually had to go do some maintenance procedures to do that port remap. It was something that was complicated, but now it's quite simple to do: you would just move your management endpoint off of port 443 and create your S3 traffic endpoint on port 443. And this really just simplifies how you want to use your ports. Now, another enhancement we've made in 11.8 is support for connection draining. 
So connection draining is actually quite a common load balancing feature. For us, as the object storage infrastructure platform, to get requests for these load balancing features, I think it really demonstrates the importance of having integrated, native load balancing. It's definitely a differentiator for StorageGRID, and we have heavy customer adoption of our load balancers. So what connection draining does is allow graceful removal of storage nodes from the load balancer pool without abruptly terminating active connections. An example use case is that you would want to use connection draining on your nodes before you take them down for maintenance or updates, and this will give you better control over the shutdown procedure as well as minimize client interruptions. Client interruptions would be your HTTP connections terminating, for example, and these termination codes are actually tracked in our load balancer endpoint logs. So that actually leads me to one of the enhancements we've made in 11.9. In 11.9 we support sending these load balancer logs, which contain key critical information about client connections, S3 traffic, and HTTP traffic, and we now allow you to export them to an external syslog server. The benefit of that is, first of all, you can export these logs into a centralized logging system where you may have enhanced log analysis tools, but it also helps provide redundancy for your logs. If you ever want to protect your logs, because they have critical information on your client traffic, you can keep some logs within StorageGRID as well as redundantly store them on the syslog server. So these are the three enhancements we've made to load balancing over the course of a year, and they've significantly improved the operational efficiency of StorageGRID. Speaking of operational efficiency, we've also made some tenant manager enhancements. 
So the first thing we've done, and this is in 11.8, is we've integrated an S3 console into the Tenant Manager. I think this integration is incredibly useful for developers, as the S3 browser allows them to quickly view and manage their data objects directly within the Tenant Manager. This eliminates the need to switch between different interfaces and really helps streamline their workflow, so your developers can focus more on coding and less on navigating through multiple tools. Also, sometimes you're not able to get a third-party application like S3 Browser and download it onto your computer if you're in a secure environment, so having this S3 browser built into the product is key. The S3 browser is in the Tenant Manager and allows you to search for objects, edit objects, upload and download, as well as edit tags, view metadata, and manage versions. The second enhancement we've made to the Tenant Manager, and this is in 11.9, which is our upcoming release, is that we added a bucket policy editor. This has been asked for by a lot of customers, and we finally were able to add it in 11.9. When you think of bucket policies, they're very important to the security of your system, because you can create policies that govern how the data is accessed within the bucket. And these can actually come in kind of a long JSON format, and before, you had to submit them through the CLI or API, and because of the intricate syntaxes involved, it's easy to end up with an error. So having a UI to upload your bucket policy makes it significantly easier to utilize, and the UI will actually also do some of that JSON syntax checking. This significantly improves the usability of bucket policies. So those two are the Tenant Manager enhancements we've made, and these are to help navigate objects as well as utilize bucket policies. Now, speaking of simplifying the UI, we've actually integrated with BlueXP. 
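Circling back to the bucket policy editor for a moment: here is the kind of policy document and syntax check involved. The principal and bucket names are hypothetical, and while the urn:sgws ARN style reflects StorageGRID's S3 policies, treat the details as a sketch rather than a validated policy.

```python
import json

# Hypothetical names throughout: a minimal S3-style bucket policy granting
# one user read-only access to a bucket and its objects.
policy_doc = """
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "urn:sgws:identity::12345:user/dev-reader"},
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["urn:sgws:s3:::demo-bucket", "urn:sgws:s3:::demo-bucket/*"]
    }
  ]
}
"""

def check_policy(doc: str) -> dict:
    """The kind of syntax check a hand-written policy needs: reject
    malformed JSON and documents missing a Statement list."""
    parsed = json.loads(doc)  # raises json.JSONDecodeError on bad JSON
    if not isinstance(parsed.get("Statement"), list):
        raise ValueError("bucket policy must contain a Statement list")
    return parsed

print(check_policy(policy_doc)["Statement"][0]["Effect"])  # Allow
```

This is the class of error (a stray comma, a misplaced brace, a missing Statement block) that the new UI editor catches before the policy is ever submitted.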
So BlueXP is a unified control plane, and it's designed to simplify the management of data across your hybrid multi-cloud environments. StorageGRID has actually always been available in BlueXP, but it was presented in a very limited form. Now, with the release in August 2024, StorageGRID is fully integrated into BlueXP, and the BlueXP UI is nearly identical to the on-premises Grid Manager interface of StorageGRID. Essentially this is direct access, so users can perform activities via BlueXP just as they would have with a standalone Grid Manager. So this, again, really simplifies your administration and, similar to the tenant enhancements, reduces the need to switch between different management tools. This is one of the features where we've integrated with something inside the NetApp portfolio. Now, the next thing I wanted to talk about is an enhancement we've made integrating with an outside platform, and this outside platform is Apache Kafka. With StorageGRID 11.8, we have integrated a Kafka producer within the StorageGRID platform, and this allows the publication of notifications to a Kafka cluster. These notifications are actually crucial, as they enable event-driven architectures. Previously we supported AWS SNS, which triggered workflows in the cloud, which makes sense because you want to leverage the necessary compute services. But we've had some customers that have actually moved these compute services back on premises, so they were asking us, hey, what is a notification system that you can support on premises? And we decided to go with Kafka because it's used by our customers; it's also a proven, robust, as well as scalable event streaming platform. So for us to provide bucket notifications to on-premises customers really enables them to integrate with more applications. So let me give you an example of how Apache Kafka and NetApp StorageGRID could work in an event-driven architecture. 
So here I have an example of a video transcoding workflow. You have unprocessed data, let's say MP4 files, and you store them into a StorageGRID bucket. Once an object is added, StorageGRID will send a message to a specific topic in Kafka notifying it that this object has been posted, and the message will contain some of the object metadata. Then you can have consumer applications or pipeline processes, in this case a media converter, which will receive that notification, pull the object from StorageGRID, do its media conversion, in this case to the WebM format, and store the result back into StorageGRID in a destination bucket, for example to make it available for streaming on various devices. This example showcases that StorageGRID can do more than just backup and archive, right? Of course, people are utilizing it for AI and ML data training direct to S3, but we also have customers running these event-driven architectures, where the data is critical to their business. And when you have data that's critical to the business, it makes sense to talk about security of that data, right? How do you protect it, given that it's so important? So let's talk about some of the enhancements we've made to security in 11.8 as well as 11.9. One of the key security features native to S3 is the concept of object lock. Object lock allows you to lock an object either in compliance mode, meaning the lock cannot be bypassed by anyone, or governance mode, meaning there is a bypass for it. Essentially, you specify a retention period, and until that retention period expires, the object can no longer be deleted. Now this is a great feature to have to protect against ransomware, but as a service provider, you may have been hesitant to enable object lock because it operates at a global scale.
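Going back to the transcoding workflow above, here is a small sketch of the consumer side: the event document mirrors the S3-style notification format that bucket notifications use, but the bucket name, object key, and the `plan_transcode` helper are all hypothetical, and a real deployment would read the message from a Kafka topic rather than a local string.

```python
import json
import posixpath

# Sketch of the S3-style event StorageGRID would publish to a Kafka
# topic when "movie.mp4" lands in the source bucket (names are made up).
notification = json.dumps({
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "raw-videos"},
            "object": {"key": "uploads/movie.mp4", "size": 1048576},
        },
    }]
})

def plan_transcode(message: str):
    """Consumer side: decide the WebM output key for each new MP4."""
    jobs = []
    for record in json.loads(message)["Records"]:
        if not record["eventName"].startswith("ObjectCreated"):
            continue  # ignore deletes and other event types
        key = record["s3"]["object"]["key"]
        stem, _ = posixpath.splitext(key)
        # (source bucket, source key, destination key for the converted file)
        jobs.append((record["s3"]["bucket"]["name"], key, stem + ".webm"))
    return jobs

print(plan_transcode(notification))
# -> [('raw-videos', 'uploads/movie.mp4', 'uploads/movie.webm')]
```

The media converter would then fetch `uploads/movie.mp4` over S3, transcode it, and PUT `uploads/movie.webm` into the destination bucket, exactly the loop described above.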
You know, it's critical to your data security and it adds value to your customers; you can even charge them for it. But one concern you might have had was: what if my tenant locks up data and then leaves the service? They could lock it maliciously or accidentally, and there would be nothing I could do; I couldn't clean up that data afterwards. This creates a real challenge in managing and reclaiming storage resources, which drives up the perceived cost of your system. And even if you're not a service provider, there are the same concerns about misconfiguration. There are things you could do; bucket policies can control this, for example, but they operate at the bucket level, and if you are serving out full tenants, then what you actually want is tenant-level controls. So with version 11.9 we now allow per-tenant restrictions, which enhances the effectiveness of S3 object lock and makes it safer to enable for your business. There are two types of restrictions you can apply per tenant. First, when you create a tenant with object lock enabled, you can choose whether to allow or disallow compliance mode. If you disallow compliance mode for a tenant, then objects can only be locked in governance mode, and governance mode is important if you're a service provider because you will always have the bypass to remove the objects. The second enforcement is maximum retention. For example, you could say that for this tenant, any object lock retention must be within three years. These enhancements reduce the risk of enabling object lock on your S3 object storage platform. They provide better control, reduce risk, and in general allow for more aggressive use of object lock: you could create tenants, set their maximum object lock retention to one year, and enable object lock across the board for your developers and clients to utilize.
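The two per-tenant restrictions can be sketched as a simple admission check on an object-lock request. This is a hypothetical model, not StorageGRID's actual implementation: the 3-year cap, the governance-only setting, and the `validate_lock` helper are illustrative values standing in for the tenant configuration described above.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tenant settings from the 11.9 per-tenant restrictions:
TENANT_MAX_RETENTION = timedelta(days=3 * 365)   # max retention: 3 years
TENANT_ALLOWS_COMPLIANCE = False                 # governance mode only

def validate_lock(mode: str, retain_until: datetime, now: datetime) -> bool:
    """Would this object-lock request be accepted for the tenant?"""
    if mode == "COMPLIANCE" and not TENANT_ALLOWS_COMPLIANCE:
        return False  # compliance mode is disallowed for this tenant
    # Requested retention must fit within the tenant's maximum.
    return retain_until - now <= TENANT_MAX_RETENTION

now = datetime(2024, 10, 1, tzinfo=timezone.utc)
print(validate_lock("GOVERNANCE", now + timedelta(days=365), now))      # -> True
print(validate_lock("COMPLIANCE", now + timedelta(days=365), now))      # -> False
print(validate_lock("GOVERNANCE", now + timedelta(days=4 * 365), now))  # -> False
```

Because governance mode always leaves the provider a bypass, a tenant configured this way can never strand data the operator cannot reclaim, which is exactly the risk the per-tenant controls remove.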
So on the topic of security, there are other enhancements I wanted to mention, though object lock is definitely one of the key ones in 11.9. We all know security is a top priority, and StorageGRID is built with security in mind. We have a variety of encryption options and a variety of endpoint protection features. In terms of encryption, we provide drive-level encryption through self-encrypting drives, object-level encryption (per object and per bucket), as well as node-level encryption. Specifically in 11.8 we've made notable enhancements to node-level encryption. First, we added support for HashiCorp as a KMIP server. Secondly, as I mentioned, we built local key management into our SGF6112 appliances. And we've also made sure that all traffic to KMIP servers is encrypted with TLS 1.3. Another enhancement is support for Unified Extensible Firmware Interface (UEFI) Secure Boot. It's a long feature name, but what it means is that the startup procedure only runs verified kernels and software. What does this prevent? It helps prevent software supply chain attacks, where an attacker may have tampered with your software image; it ensures you always boot with verified kernels and software. Additionally, in 11.8, all of our connections are kept secure via cryptographic modules that are FIPS 140-2 compliant, from the client to our load balancer as well as from the load balancer to the storage nodes. We are still pending certification for that. Now, we are nearing time, and there are many more enhancements in versions 11.8 and 11.9 than would fit into a 45-minute session; the idea of this session was to highlight the key enhancements, the ones that drive business value. So to wrap up, I want to shift gears to a case study.
And this case study is going to highlight some practical applications of these features and showcase how StorageGRID can evolve with your business needs. The case study is on Atruvia AG, an IT service provider for German financial groups. They specialize in banking information technology, so they manage a vast IT infrastructure to support those operations. As background, Atruvia started as a FabricPool customer, which means they utilized StorageGRID to lower the cost of their all-flash appliances by tiering their cold data to S3. Once they had S3 in their infrastructure, they started discovering more applications that spoke S3 natively, which led to an increase in demand. With this increase in demand, they were able to scale efficiently through non-disruptive expansions. As they added nodes, they scaled not only capacity but also performance, because both increase with node count, and all of this was done non-disruptively. They were also able to move from virtual machines to appliances. They also leveraged our ILM: as their architecture grew, they created new rules, updated rules, and created new policies to handle multiple application requirements while keeping overall storage costs low. And finally, they utilized our real-time alerts, metrics, and logs to proactively identify and address issues, which helps reduce operational costs. So from start to finish, they began with a secondary workload, specifically tiering, and grew StorageGRID into critical primary storage for business-critical applications. Now we are at the end of our session, where we can summarize some of the key takeaways.
The first one is how the object storage market is growing and how StorageGRID is positioned within the NetApp portfolio. We also discussed business needs like scalability, manageability, security, and performance, and how StorageGRID can meet those needs. We've also learned about the latest enhancements that occurred this year, from 11.8 to BlueXP integration to new hardware, all the way to the upcoming 11.9 release. And finally, we showcased how StorageGRID can evolve with your object storage needs and how these features align with your business's maturity. The final thing I wanted to add is that, as the StorageGRID product team, we're committed to helping you innovate with object storage, and that's why we've been positioning these new features and driving these enhancements. Here are some related resources you can check out. Some of them are customer sessions, if you want to learn more about how customers use StorageGRID, and some are from our Solutions Architects team, where you can learn more about StorageGRID itself. Here's a slide on how to stay connected; I'll be here at INSIGHT as well, and my email and LinkedIn information is there if anyone wants to reach out. I appreciate everyone's time. Thank you so much.
Elevate your understanding of object storage and the NetApp StorageGRID key features that align with evolving business needs, adapting to your data journey and ensuring that as your business grows, StorageGRID scales seamlessly with you.