In adopting a cloud-first strategy, one of the first questions to answer is: How do I get there? Application migration to Azure is a top priority for organizations, whether they’re moving from the data center to the cloud or migrating across clouds. Of all an organization’s applications, database workloads are often mission critical, and migrating databases to Azure successfully requires a well-defined plan. There are multiple factors to consider during the planning process, among them the need for always-on high availability, consistently high performance, data durability, security, and data migration capabilities for hybrid architectures.
Azure NetApp Files meets all these needs. It’s also quick and easy to deploy, making it ideal for demanding enterprise database workloads such as Oracle. In this blog, we’ll explore the capabilities of Azure NetApp Files to understand why it’s the right choice for deploying enterprise databases in Azure.
Data forms the backbone of all line-of-business (LOB) applications, and database systems must be able to process data efficiently while also ensuring data security and integrity. But the ability to do so largely depends on the database storage solution used. For enterprise databases, the underlying storage must offer specific capabilities and features that support large-scale production environments.
High availability. Database services require a storage solution that’s always available, without interruptions. Disruption to data access can bring your entire business operation to a standstill, often resulting in unmet SLAs and financial penalties. Ensuring highly available storage with multiple access paths to the underlying data is therefore critical. To avoid the added overhead that can come with designing and configuring storage-level redundancy, organizations should opt for solutions that have high availability built in by design. Check out our SLA document for more information.
Performance. Database performance correlates directly with the speed at which data is accessed. However, performance requirements vary by use case. For example, database backups can reside on lower-performing storage because the data is accessed infrequently, whereas production databases require faster storage for faster transaction processing. Cost optimization is top of mind in this regard, because higher-performance storage can be costly. The flexibility to choose among multiple service levels for cloud database use cases is therefore very valuable for enterprises.
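To make the cost and performance trade-off concrete, here is a minimal sizing sketch based on the published Azure NetApp Files per-TiB throughput rates (Standard 16 MiB/s, Premium 64 MiB/s, and Ultra 128 MiB/s per TiB of volume quota under automatic QoS). The quota values in the example are illustrative assumptions, not recommendations.

```python
# Rough sizing helper: estimated volume throughput per Azure NetApp Files
# service level, using the published per-TiB rates under automatic QoS.
SERVICE_LEVEL_MIBPS_PER_TIB = {
    "Standard": 16,
    "Premium": 64,
    "Ultra": 128,
}

def estimated_throughput_mibps(quota_tib: float, service_level: str) -> float:
    """Return the estimated throughput limit for a volume of the given quota."""
    return quota_tib * SERVICE_LEVEL_MIBPS_PER_TIB[service_level]

if __name__ == "__main__":
    # Example: compare a hypothetical 4 TiB data-file volume across service levels.
    for level in SERVICE_LEVEL_MIBPS_PER_TIB:
        print(f"4 TiB at {level}: {estimated_throughput_mibps(4, level):.0f} MiB/s")
```

A 4 TiB volume, for instance, would be limited to roughly 64 MiB/s at Standard but 512 MiB/s at Ultra, which is why backup targets and production data files usually belong in different service levels.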
Durability. Databases are often the backbone of mission-critical applications, and businesses cannot risk these environments going down or becoming corrupted. The integrity of every committed database transaction must be maintained at all times. To that end, database storage must be able to withstand failures, and storage-level data redundancy plays a crucial role in ensuring durability.
Rapid deployment. Database deployment issues can often be traced back to complicated storage design, poor capacity planning, and a lengthy provisioning process. In a traditional on-premises environment, deployment can take anywhere from a day (if the capacity is available) to months (for example, if additional storage hardware needs to be procured to meet demand). Faster time to market is a key success indicator, and organizations lose their competitive edge when deployments stall production. Quick deployment and automation are therefore highly desirable features for database storage platforms.
Azure NetApp Files was created by Microsoft and NetApp to address the many challenges of deploying workloads that depend on NFS file shares in Azure. Available as a first-party service in Azure, Azure NetApp Files was purpose-built to help you deploy and run Linux file share workloads. Here’s what makes Azure NetApp Files the right solution for your database workloads in Azure:
Two absolute must-haves for any production database deployed in the cloud are high availability and consistent performance. Azure NetApp Files gives you the flexibility and power to choose from multiple deployment architectures to achieve the performance benchmarks required by Azure database workloads.
Volume layout for databases. You can choose either a dedicated or a shared volume layout for your Azure databases. In a shared volume layout, all files (database data files, log files, redo logs) are placed in the same volume. This layout is better suited to DevTest environments, which are lenient about performance but still require the data to be highly available. For production workloads, it’s best to use a dedicated volume layout, in which each file type is placed in its own volume so that every volume delivers the IOPS and throughput its workload needs.
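As a rough illustration of a dedicated layout, the following sketch uses the azure-mgmt-netapp Python SDK to create separate volumes for data files, redo logs, and backups. The subscription, resource group, account, pool, and subnet names are placeholder assumptions, the capacity pools are assumed to already exist, and each pool’s service level applies to the volumes created in it.

```python
# Minimal sketch (not a production script): dedicated Azure NetApp Files
# volumes for an Oracle database, one volume per file type.
# All names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Volume

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "oracle-rg"
ACCOUNT = "anf-account"
SUBNET_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/oracle-rg"
    "/providers/Microsoft.Network/virtualNetworks/oracle-vnet/subnets/anf-subnet"
)
TIB = 1024 ** 4

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One volume per file type; the capacity pool a volume lands in determines its service level.
layout = [
    ("oradata", "ultra-pool", 4 * TIB),      # data files
    ("oralog", "ultra-pool", 1 * TIB),       # redo logs
    ("orabackup", "premium-pool", 8 * TIB),  # archive logs and backups
]

for name, pool, size in layout:
    volume = Volume(
        location="westeurope",
        usage_threshold=size,      # volume quota in bytes
        creation_token=name,       # export path
        subnet_id=SUBNET_ID,
        protocol_types=["NFSv4.1"],
    )
    client.volumes.begin_create_or_update(
        RESOURCE_GROUP, ACCOUNT, pool, name, volume
    ).result()
```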
Reference architecture: Single VM. This sample reference architecture for an Oracle deployment in Azure uses a single VM. Oracle data files and log files are placed on separate volumes. The volumes hosting the data files and redo logs can be configured with the Ultra service level for the highest storage throughput, while the archive logs and backup files can be placed on Premium service level volumes for cost optimization. Because this is a single-VM deployment, NetApp Snapshot™ technology can be used to increase the resilience of the solution by creating point-in-time copies of the data as backups, without affecting performance.
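Continuing with the same placeholder names, a point-in-time Snapshot copy of the data-file volume can be requested through the same SDK. This is a sketch only; for a database-consistent copy you would first put the database in backup mode or use a tool such as AzAcSnap, a step omitted here.

```python
# Minimal sketch: create a point-in-time snapshot of the data-file volume.
# Resource names are the placeholders used in the previous example.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Snapshot

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.snapshots.begin_create(
    "oracle-rg", "anf-account", "ultra-pool", "oradata",
    "oradata-daily",                    # placeholder snapshot name
    Snapshot(location="westeurope"),
).result()
```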
Reference architecture: Multiple VMs for high availability. In this Oracle deployment in Azure using Azure NetApp Files, the data and log files are placed in different volumes and replicated to a secondary region. The VMs and networks are placed in different availability zones to provide high availability for compute. The Ultra service level is used for the volumes hosting data files, and the Premium service level is used for the log file volumes. This architecture maintains high availability and performance of the database system at both the compute and storage levels and is recommended for mission-critical production workloads.
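Cross-region replication is set up by creating a data protection volume in the secondary region that points back at the source volume. The outline below uses the same placeholder names; the model fields reflect the azure-mgmt-netapp SDK as I understand it, so treat the details as assumptions to verify against the SDK reference. After the destination volume exists, replication still has to be authorized on the source volume (begin_authorize_replication in the same SDK) before transfers begin.

```python
# Rough sketch (placeholders throughout): destination volume in the secondary
# region that replicates the primary data-file volume on an hourly schedule.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import (
    ReplicationObject,
    Volume,
    VolumePropertiesDataProtection,
)

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

SOURCE_VOLUME_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/oracle-rg"
    "/providers/Microsoft.NetApp/netAppAccounts/anf-account"
    "/capacityPools/ultra-pool/volumes/oradata"
)

dst_volume = Volume(
    location="northeurope",              # secondary region
    usage_threshold=4 * 1024 ** 4,
    creation_token="oradata-dr",
    subnet_id="<subnet-id-in-secondary-region>",
    protocol_types=["NFSv4.1"],
    volume_type="DataProtection",        # marks this volume as a replication target
    data_protection=VolumePropertiesDataProtection(
        replication=ReplicationObject(
            endpoint_type="dst",
            replication_schedule="hourly",
            remote_volume_resource_id=SOURCE_VOLUME_ID,
        )
    ),
)

client.volumes.begin_create_or_update(
    "oracle-dr-rg", "anf-dr-account", "ultra-pool", "oradata-dr", dst_volume
).result()
```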
A useful feature of Azure NetApp Files is the ability to change the performance of a volume on the fly: you can adjust throughput by moving a volume to a different service level or by changing the volume’s allocated throughput, without any interruption to the service. With the Flexible service level, you can even change a capacity pool’s allocated throughput independently of its capacity, get the first 128 MiB/s of throughput at no additional cost, and provision higher throughput per volume than before. Increase throughput in the busy times of the month and reduce it in quieter times to save cost.
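Both kinds of adjustment can be scripted with the same SDK and the same placeholder names, as in the sketch below: raising a volume’s allocated throughput for a busy period (possible when throughput is assigned per volume, as with manual QoS pools or the Flexible service level) and moving a volume to a capacity pool at a different service level. The exact model fields are assumptions to check against the SDK reference.

```python
# Rough sketch (placeholders throughout): adjust performance without downtime.
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import PoolChangeRequest, VolumePatch

client = NetAppManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1) Raise the allocated throughput of the data-file volume for month-end
#    processing (applies to pools where throughput is assigned per volume).
client.volumes.begin_update(
    "oracle-rg", "anf-account", "ultra-pool", "oradata",
    VolumePatch(throughput_mibps=1024),
).result()

# 2) Move the backup volume to a pool at a different service level, for
#    example from Premium down to Standard during a quiet period.
standard_pool_id = (
    "/subscriptions/<subscription-id>/resourceGroups/oracle-rg"
    "/providers/Microsoft.NetApp/netAppAccounts/anf-account"
    "/capacityPools/standard-pool"
)
client.volumes.begin_pool_change(
    "oracle-rg", "anf-account", "premium-pool", "orabackup",
    PoolChangeRequest(new_pool_resource_id=standard_pool_id),
).result()
```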
Digital transformation is a necessity for any organization that wants to stay ahead of the game. Migrating databases to Azure under a cloud-first mandate as part of that transformation is no longer an elusive goal. Azure NetApp Files addresses both the immediate and long-term storage requirements of your databases in Azure.
Simplify your database migration to Azure using Azure NetApp Files and experience the benefits firsthand.
Arnt de Gier is a technical marketing engineer for NetApp based in Amsterdam, focused on Azure NetApp Files. Arnt has over two decades of experience in designing and implementing storage solutions for customers across all verticals.