Hello and welcome to this video on the new Azure NetApp Files feature, application volume group for SAP HANA. You might have seen that the feature is now generally available and is planned to be the standard provisioning method for ANF volumes used for SAP HANA. We have put together a series of short videos to explain the application volume group feature and its usage for the different SAP HANA deployments. This video walks you through the required preparations and the provisioning workflow for an SAP HANA single-host system.

Let's first have a look at the lab setup and environment. In our lab, we created a proximity placement group and an availability set within this proximity placement group. We then requested a pinning of the AV set using the HANA pinning request form. After the pinning was done, we provisioned the VM within the AV set and the PPG. As a last preparation step on the compute side, we started the VM to create the anchor in the compute cluster and to establish the link to the network spine. On the ANF side, we created a capacity pool with manual quality of service and a delegated subnet.

Okay, with that, let's look at the recorded demo, starting with the preparations and prerequisites. We have created a proximity placement group in the West US region, and you can see that the proximity placement group includes an availability set as well as a VM. Within the availability set, we can see the VM that we are using for our HANA system, which is currently up and running. When looking at the virtual machine, we can see that it has been configured with the availability set as well as with the proximity placement group. Next, we want to look at the ANF configuration. We can see that we have a capacity pool created with the quality of service type set to Manual. From the networking perspective, within our VNet we have configured a delegated subnet for ANF which includes 251 usable IP addresses.
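The preparation steps described above can be sketched with the Azure CLI. This is a minimal sketch, not the exact commands used in the demo: all resource names, the VM size, the image URN, and the address range are placeholder assumptions, and the AV set pinning itself is a separate request to Microsoft that has no CLI command.

```shell
#!/usr/bin/env bash
# Sketch only: names, sizes, image, and addresses are placeholders.
# Requires an authenticated Azure CLI session ('az login').
RG=hana-demo-rg
LOC=westus

# Proximity placement group and an availability set inside it
az group create --name "$RG" --location "$LOC"
az ppg create --resource-group "$RG" --name hana-ppg --location "$LOC" --type Standard
az vm availability-set create --resource-group "$RG" --name hana-avset --ppg hana-ppg

# HANA VM inside the AV set (and therefore the PPG); starting it anchors
# the PPG in the compute cluster
az vm create --resource-group "$RG" --name hana-vm \
  --image "SUSE:sles-sap-15-sp4:gen2:latest" --size Standard_E32s_v5 \
  --availability-set hana-avset --ppg hana-ppg \
  --vnet-name hana-vnet --subnet vm-subnet

# ANF account and a capacity pool with manual QoS (size is in TiB)
az netappfiles account create --resource-group "$RG" --name hana-anf --location "$LOC"
az netappfiles pool create --resource-group "$RG" --account-name hana-anf \
  --name hana-pool --location "$LOC" --size 4 \
  --service-level Premium --qos-type Manual

# Delegated subnet for ANF (a /24 leaves 251 usable addresses in Azure)
az network vnet subnet create --resource-group "$RG" --vnet-name hana-vnet \
  --name anf-delegated --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.NetApp/volumes
```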
So we used a class C subnet as the delegated subnet for ANF. Now all preparations are done, and we can start provisioning the volumes for our HANA single-host system.

We select the NetApp account in our West US region, go to Application volume group, and choose Add group to add the first application volume group in our environment. The deployment type is SAP HANA. Now we have to provide a couple of input values. On the first screen, we provide the SID; in our example, it's PR1. The group gets a name and a description. We provide the memory size of the HANA node and a capacity overhead, which is used for snapshot backup operations. Next, the HANA system type must be selected: either a single-host or a multiple-host system. For our demo, we have selected the single-host HANA system. In addition, you can also select whether you want to deploy secondary volumes for a HANA system replication target or for a disaster recovery destination using cross-region replication.

On the next screen, we need to provide the proximity placement group to use, the ANF capacity pool, the virtual network, and the delegated subnet used for Azure NetApp Files. Optionally, we can add tags to our resources. Now the storage network access configuration needs to be done. The protocol type is pre-selected to NFSv4.1 and can't be changed. The export policy and rules are preconfigured to allow access from any host in the network; this can be changed in this dialog. Now a preview of the volumes is shown, and you can see the five volumes which will be provisioned by the application volume group workflow. Each volume has a proposed value for capacity and throughput, which has been calculated by the application volume group logic based on the RAM size of the HANA system. The table shown here lists the throughput numbers for the different volumes depending on the RAM size of the HANA system.
You can see that the throughput numbers for the HANA data and log volumes start with the HANA KPI values and are increased for large RAM sizes. Keep in mind that these numbers are proposals and can be adapted before the provisioning workflow, or at any time later using the normal Azure NetApp Files volume configuration. The capacity numbers are also calculated based on the RAM size of the HANA system, following the best practices for SAP HANA defined by SAP. These values can be adapted in the same way as the throughput numbers: by clicking on one of the volumes in the list, you can change the throughput or capacity values. In our lab demo, we are changing the throughput of the log backup volume; we need to change it to a lower value since our capacity pool does not have enough resources. On the same screen, you can also choose to delete a volume. Deleting volumes from the application volume group is only possible for the optional backup volumes. If you want to change a volume name, you would need to do it before you start the provisioning; the file path would then need to be changed accordingly in the protocol tab.

With the next step, a validation is executed. The validation passed successfully, and we can start the provisioning process. The final deployment takes some time; in our example, it took around 10 minutes. Now we have a new application volume group, and we can review the volumes included in the group. The volumes are also listed in the standard ANF volume view, where the mount path of each volume is shown. You can see that the data and log volumes are accessible through different IP addresses, while the two backup volumes are provisioned at another storage endpoint with a third IP address. Okay, let's summarize what we have seen in the demo.
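The shape of these proposals can be illustrated with a small script. This is an illustrative approximation only, assuming simplified rules consistent with what the video describes (data capacity of roughly 1 x RAM, log capacity of 0.5 x RAM capped at 512 GiB, and throughput that starts at the HANA storage KPI floors of 400 MB/s for data and 250 MB/s for log and grows with RAM); the real application volume group logic and its exact scaling factors may differ.

```shell
#!/usr/bin/env bash
# Illustrative sizing sketch, NOT the actual application volume group
# algorithm: capacities follow simplified SAP best-practice rules, and
# throughput scales with RAM above an assumed KPI floor.
propose() {  # usage: propose <ram_gib>
  local ram=$1
  local data_cap=$ram                       # data capacity ~ 1 x RAM
  local log_cap=$(( ram / 2 ))              # log = 0.5 x RAM, capped at 512 GiB
  if (( log_cap > 512 )); then log_cap=512; fi
  local data_tp=$(( ram / 2 ))              # assumed scaling, 400 MB/s KPI floor
  if (( data_tp < 400 )); then data_tp=400; fi
  local log_tp=$(( ram / 4 ))               # assumed scaling, 250 MB/s KPI floor
  if (( log_tp < 250 )); then log_tp=250; fi
  printf 'data %5d GiB %4d MB/s\n' "$data_cap" "$data_tp"
  printf 'log  %5d GiB %4d MB/s\n' "$log_cap" "$log_tp"
}

propose 1024   # a 1 TiB HANA node
```

For a small node the KPI floor dominates, while for large RAM sizes the proposed throughput grows beyond it, matching the pattern described above.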
The application volume group workflow provisioned five ANF volumes for our HANA single-host system, using a defined naming convention and capacity and throughput numbers based on best practices. From the infrastructure perspective, the data and log volumes have been provisioned on two different storage endpoints on an ANF cluster connected to the same network spine as the HANA VM. The data and log volumes are accessible through two different IP addresses and are mounted using these two IP addresses, providing the best performance on a Linux host. And finally, the data and log backup volumes are provisioned outside of the proximity placement group on different ANF hardware. Okay, with that, thanks a lot for watching the video. Take care and bye-bye.
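The mount layout summarized above, with data and log reached through two different storage-endpoint IPs, can be sketched as follows. The IP addresses and export paths are placeholders (the AVG naming convention produces paths similar to these), and the mount options follow commonly published ANF NFSv4.1 recommendations for HANA; verify them against current NetApp and SAP documentation before production use.

```shell
#!/usr/bin/env bash
# Sketch only: placeholder IPs and export paths. Data and log are mounted
# through different storage-endpoint IPs for best performance.
OPTS="rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys"

sudo mount -t nfs -o "$OPTS" 10.0.1.4:/PR1-data-mnt00001 /hana/data/PR1/mnt00001
sudo mount -t nfs -o "$OPTS" 10.0.1.5:/PR1-log-mnt00001  /hana/log/PR1/mnt00001
sudo mount -t nfs -o "$OPTS" 10.0.1.4:/PR1-shared        /hana/shared
```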
Learn about the required preparations and the volume provisioning workflow for an SAP HANA single host system using the Azure NetApp Files application volume group feature.