Hello and welcome to this video on the new Azure NetApp Files feature, application volume groups for SAP HANA. You might have seen that the feature is now in general availability, and it is planned to be the standard provisioning method for Azure NetApp Files volumes used for SAP HANA. We have put together a series of short videos to explain the application volume group feature and its usage for the different SAP HANA deployments. This video will walk you through the provisioning workflow for an SAP HANA multiple-host system.

As shown in this figure, the volume provisioning for an SAP HANA multiple-host system is a two-step configuration. First, you execute the application volume group workflow for a HANA single-host system, and then you extend the configuration by adding HANA hosts. In the first demo, we already provisioned the volumes for a HANA single-host system. This demo will now show how we can extend the configuration to a multiple-host system.

When we go to the application volume group view, the volume group of our single-host system is listed. Now we are going to add a group to extend the system to a multiple-host configuration. We enter the same SID as for our single-host system before. The error message about the uniqueness of the group name can be ignored, since the host ID will be different in a multiple-host setup. We select multiple-host and provide the RAM size of the hosts. In our demo, we want to provision volumes for a 3+1 multiple-host system. Since the first host already exists, we start with host 2 and use a host count of two to get hosts 2 and 3. The application volume group workflow supports adding a maximum of five hosts in a single step. If you require more hosts, you will need to run the application volume group workflow again to add the additional hosts.

The remaining configuration steps are identical to what we have already seen in the single-host workflow. The volume preview now looks slightly different. There are only two volumes, a data and a log volume, each with the host ID as a suffix. In our example, the host IDs are 2 and 3, based on our input before. In our lab setup, the capacity pool is not large enough to provide the required throughput. Therefore, we need to reduce the throughput numbers for our volumes. Now we start the validation process, which completes successfully. As a final step, the provisioning is executed. The process in our lab took around 5 minutes.

Within the application volume group view, we now have two additional groups, one for each of the additional HANA hosts. Each of the groups includes a data and a log volume for one of the additional HANA hosts. Remember that the volume group for the first host also includes the shared and the backup volumes, which are used by all three hosts. The standard volume view shows that, for each host, the data and the log volume were provisioned on different storage endpoints with individual IP addresses. So the storage layout of our SAP HANA landscape now consists of three storage endpoints in close proximity to the HANA VMs. The log volumes of the two new hosts have been provisioned on the same storage endpoint as the log volume of the first host. For the data volumes, an additional storage endpoint has been selected. So for each individual host, data and log volumes are on different storage endpoints and have individual IP addresses.

Okay, with that, thanks a lot for watching the video. Take care and bye-bye.
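To make the extension step from the demo easier to follow, here is a minimal Python sketch of the logic it describes: each additional host gets its own group containing only a data and a log volume (the shared and backup volumes already exist in the first host's group), and at most five hosts can be added per workflow run. The SID "SH1" and the volume naming pattern used below are illustrative assumptions for this sketch, not values or an API taken from the workflow itself.

```python
from dataclasses import dataclass
from typing import List

MAX_HOSTS_PER_RUN = 5  # the workflow adds at most five hosts per run


@dataclass
class VolumePlan:
    name: str
    purpose: str  # "data" or "log"


def plan_additional_hosts(sid: str, start_host_id: int, host_count: int) -> List[List[VolumePlan]]:
    """Return one volume group (data + log volume) per additional HANA host.

    Mirrors the multiple-host extension step shown in the demo: the shared
    and backup volumes already exist in the first host's group, so each
    additional host only gets a data and a log volume, suffixed with its
    host ID. The naming pattern below is an assumption for illustration.
    """
    if host_count > MAX_HOSTS_PER_RUN:
        raise ValueError(
            f"At most {MAX_HOSTS_PER_RUN} hosts can be added per run; "
            "run the workflow again for the remaining hosts."
        )
    groups = []
    for host_id in range(start_host_id, start_host_id + host_count):
        groups.append([
            VolumePlan(name=f"{sid}-data-mnt{host_id:05d}", purpose="data"),
            VolumePlan(name=f"{sid}-log-mnt{host_id:05d}", purpose="log"),
        ])
    return groups


# Example matching the demo: a 3+1 system where host 1 already exists,
# so we start at host 2 with a host count of two to get hosts 2 and 3.
for group in plan_additional_hosts("SH1", start_host_id=2, host_count=2):
    print([volume.name for volume in group])
```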
Learn about the volume provisioning workflow for an SAP HANA multiple-host system using the Azure NetApp Files application volume group feature.