Quality of service is a critical enabling technology for enterprises and service providers that want to deliver consistent primary storage performance to business-critical applications in a multitenant or enterprise infrastructure. For a broad range of business-critical applications, the key requirements are consistent and predictable performance. Unfortunately, neither consistency nor predictability is easily achievable with traditional storage arrays. The applications that require primary storage services typically demand higher levels of performance than traditional storage infrastructures readily provide. However, raw performance alone is rarely the only objective in these use cases.
QoS features exist in everything from network devices to hypervisors to storage. When multiple workloads share a limited resource, QoS helps provide control over how that resource is shared and prevents the noisiest neighbor (application) from disrupting the performance of all the other applications on the same system. QoS features, like rate limiting, prioritization, and tiering, are effective only when the scope of the problem remains small. When storage is deployed at scale, these techniques quickly fail. In fact, these features are all “bolt-on” technologies that attempt to overcome limitations in storage architectures that were never designed to deliver QoS in the first place.
To meet varying application performance requirements, the storage industry has implemented caching or tiering schemes in front of traditional disk-based systems. These schemes apply complex algorithms and predictive methodologies that shuttle data to the right media at the right time to boost performance. Costly, complex, and reactive, this approach does little to meet the predictable-performance requirements of mission-critical applications.
Resolving this disparity requires a more balanced pool of capacity and performance at the system level. From this starting point, a storage system can scale performance and capacity independently to serve the unique needs of different applications. This ability to allocate capacity and performance as separate resources is a fundamental component of building cloud infrastructures.
Each volume on the platform is configured with minimum, maximum, and burst IOPS values that are strictly enforced within the system. The minimum IOPS value guarantees a performance floor, independent of what other applications on the system are doing. The maximum and burst values control how additional performance is allocated, keeping delivery consistent across workloads. For the enterprise and service provider, QoS enables SLAs around exact performance metrics and complete control over the end-consumer's experience.
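The min/max/burst model can be illustrated with a simple credit-bucket sketch: while a volume runs below its maximum, unused headroom accrues as burst credits, which can later be spent to exceed the maximum up to the burst ceiling. This is a hypothetical illustration of the general technique, not NetApp Element's actual implementation; the class and parameter names (`VolumeQoS`, `BurstBucket`, `bank_seconds`) are invented for the example, and enforcement of the minimum under contention would be handled by a separate scheduler not shown here.

```python
from dataclasses import dataclass


@dataclass
class VolumeQoS:
    """Per-volume QoS settings in the min/max/burst IOPS model.

    Hypothetical sketch -- names and structure are illustrative only.
    """
    min_iops: int    # guaranteed floor under contention (enforced elsewhere)
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, funded by banked credits

    def __post_init__(self) -> None:
        assert self.min_iops <= self.max_iops <= self.burst_iops


class BurstBucket:
    """Credit bucket: headroom unused below max_iops accrues burst credits,
    letting the volume briefly exceed max_iops up to burst_iops."""

    def __init__(self, qos: VolumeQoS, bank_seconds: int = 60):
        self.qos = qos
        # Cap banked credits at bank_seconds worth of burst headroom.
        self.max_credits = (qos.burst_iops - qos.max_iops) * bank_seconds
        self.credits = 0

    def allowed_iops(self, demanded: int) -> int:
        """Return the IOPS granted for the next one-second interval."""
        # Banked credits raise the ceiling toward burst_iops.
        ceiling = min(self.qos.burst_iops, self.qos.max_iops + self.credits)
        granted = min(demanded, ceiling)
        if granted > self.qos.max_iops:
            # Bursting: spend credits for the excess over max.
            self.credits -= granted - self.qos.max_iops
        else:
            # Running below max: bank the unused headroom for later.
            self.credits = min(self.max_credits,
                               self.credits + (self.qos.max_iops - granted))
        return granted
```

For example, a volume configured as `VolumeQoS(min_iops=500, max_iops=1000, burst_iops=2000)` that idles for a few seconds banks enough credits to briefly sustain 2,000 IOPS before settling back to its 1,000 IOPS maximum.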
In next-generation infrastructures, raw storage performance is important, but it is the predictable and consistent delivery of that performance that ensures every application has the resources required to run without variance or interruption. In servicing these workloads, QoS enables the underlying storage architecture to:
QoS is essential for multitenant systems running varying workloads with unpredictable demands, such as databases, virtual machines, and private clouds. A multitenant system delivers the agility of a next-generation architecture that can scale flexibly with your business needs. Rather than forcing you to plan for what your architecture will look like in the future, it lets you build and run for today's demands, including:
QoS is one of the defining principles of NetApp® Element® software, the data management software that enables your cloud infrastructure. It provides the features you demand from primary storage in an innovative, automated architecture that delivers unmatched scalability with guaranteed, predictable storage performance.