Today, many customers are using Microsoft Azure to accelerate their SAP deployments, reduce cost, and gain agility for their business processes. Many customers first embraced the DevOps paradigm in the cloud and moved their development and test SAP systems off premises. However, more and more customers are now choosing to migrate their complete SAP infrastructure, including production, into the cloud.
In this blog post, I look at how Azure NetApp Files improves the way Azure customers can provision shared file systems for their SAP landscapes.
Almost every SAP landscape requires a shared file system. It is provided via the NFS protocol on Unix or Linux operating systems, or via SMB whenever SAP systems run on Windows.
The figure above shows typical candidates for shared files.
All of these file systems have one thing in common: their performance requirements are relatively low. This stands in contrast to shared file systems used to store SAP HANA backups (file-based backups as well as automated log backups), which require significantly higher throughput.
Another challenge customers face is provisioning a highly available shared file system in the cloud. Many template deployments provision a dedicated Linux cluster just to create a highly available NFS server. Whilst Azure supports such a setup via an automated template deployment, the maintenance and patching of these Linux cluster servers is up to the customer.
Wouldn’t it be great to get rid of this unwanted burden and have something more cloud-like? If you want to easily provision, grow, shrink, snapshot, and clone an enterprise-grade NFS share without the hassle of unwanted management tasks, take a look at Azure NetApp Files.
NetApp and Microsoft developed Azure NetApp Files, a new Azure-native NFS and SMB file service based on NetApp’s proven and widely adopted ONTAP operating system. It gives all Azure customers a high-performance, low-latency file service with many of the features NetApp customers already enjoy when running their SAP systems in their on-premises datacenters, without the need for additional dedicated storage management.
Provisioning Azure NetApp Files is a two-step process:
First, customers have to create a capacity pool within their Azure NetApp Files account.
A capacity pool needs to be assigned to a ‘delegated subnet’: a subnet within the customer’s virtual network from which the IP address for the NFS export is allocated.
A capacity pool has a minimum size of 4 TByte and a service level. The service level (standard, premium, or ultra) determines the overall throughput a capacity pool is able to provide.
Important to know: both can be adapted on the fly to adjust or increase the capacity or the service level.
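To make these constraints concrete, here is a minimal Python sketch that validates a capacity pool configuration against the rules described above (minimum size of 4 TByte, one of the three service levels). The class and field names are purely illustrative and not part of any Azure SDK.

```python
from dataclasses import dataclass

SERVICE_LEVELS = {"standard", "premium", "ultra"}
MIN_POOL_SIZE_TB = 4  # a capacity pool starts at 4 TByte


@dataclass
class CapacityPool:
    size_tb: int
    service_level: str

    def __post_init__(self):
        if self.size_tb < MIN_POOL_SIZE_TB:
            raise ValueError(f"capacity pool must be at least {MIN_POOL_SIZE_TB} TByte")
        if self.service_level.lower() not in SERVICE_LEVELS:
            raise ValueError(f"unknown service level: {self.service_level}")


# Both size and service level can be changed on the fly later.
pool = CapacityPool(size_tb=4, service_level="premium")
```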
As the final step, you create a volume, specifying its name, the access privileges (i.e., the export policy), the built-in policy for snapshot-based automated backups, and lastly the quota. The quota (specified in TByte) defines the portion of the capacity pool’s overall throughput that is allocated to the volume. The formula to calculate the volume throughput is:
Throughput [MBps] = Quota [TB] * Service Level [MBps/TB].
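As a quick sanity check of this formula, here is a small Python sketch. The per-TB throughput figures for the three service levels (16, 64, and 128 MBps per TB) are assumptions based on the Azure NetApp Files documentation at the time of writing; verify the current values before sizing a real landscape.

```python
# Assumed per-TB throughput for each service level (MBps/TB);
# check the current Azure NetApp Files documentation for exact values.
SERVICE_LEVEL_MBPS_PER_TB = {"standard": 16, "premium": 64, "ultra": 128}


def volume_throughput_mbps(quota_tb: float, service_level: str) -> float:
    """Throughput [MBps] = Quota [TB] * Service Level [MBps/TB]."""
    return quota_tb * SERVICE_LEVEL_MBPS_PER_TB[service_level.lower()]


# Example: a 2 TB quota in a premium pool yields 2 * 64 = 128 MBps.
print(volume_throughput_mbps(2, "premium"))  # 128
```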
In the figure above, we’ve combined the three different file systems into one volume, both to simplify management and data protection and to benefit from higher overall performance compared to a configuration with three separate volumes.
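The same formula illustrates why the combined volume performs better. Assuming the premium figure of 64 MBps per TB from the sketch above, three separate 1 TB volumes cap each file system at 64 MBps, while a single 3 TB volume lets any one of them burst to the full 192 MBps:

```python
PREMIUM_MBPS_PER_TB = 64  # assumed premium figure, as above

# Three separate 1 TB volumes: each file system is capped individually.
separate = [1 * PREMIUM_MBPS_PER_TB for _ in range(3)]
print(separate)  # [64, 64, 64] -> at most 64 MBps per file system

# One combined 3 TB volume: the full throughput budget is shared.
combined = 3 * PREMIUM_MBPS_PER_TB
print(combined)  # 192 -> any one file system can burst to 192 MBps
```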
In the following video, I demonstrate how easy it is to use the Azure portal to configure Azure NetApp Files for your shared file systems.
Customers already use Azure NetApp Files for their SAP landscapes in US datacenters, and the preview of Azure NetApp Files in Europe has just started. If you plan to use Azure for your SAP applications, don’t miss this opportunity: simplify your setup with enterprise-grade NFS or SMB using Azure NetApp Files.
For more information, please see the following technical reports and web pages:
Bernd Herth architects and defines NetApp’s SAP solutions as a TME at the SAP Partner Port at SAP headquarters in Walldorf. He has over 25 years of experience with SAP software and in planning and architecting infrastructure solutions for SAP, and has held various positions in the SAP ecosystem. Herth has published articles and books focused on SAP technology and virtualization. He holds a master’s degree in physics and has taught computer science classes as an assistant professor.