
The Data Storage Trifecta: Cost, Performance and Automation

Greg Knieriemen

A couple of months ago I discussed the rise of the IT generalist, who possesses both the knowledge and the soft skills to adapt to a wide array of work environments and technologies. This movement reflects the common refrain from IT leaders to “do more with less,” but it doesn’t stop with human resources. Streamlining staff efficiency depends on three core elements of IT infrastructure: cost, performance and automation.

Managing Cost 

As most IT professionals already know, the actual cost of any data storage solution isn’t just the initial purchase price; it’s the total of all associated costs, including support and the additional human resources needed to maintain these systems. Matt Watts recently penned an excellent blog post that calls out the risks of costly support agreements that purport to deliver free controllers three years into a six-year commitment. That “free” controller could end up costing you anywhere from 50% to 300% more in support costs. These programs aren’t built for convenience; they are designed to lock you into high-margin maintenance agreements.
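To make the total-cost-of-ownership point concrete, here is a minimal sketch comparing a flat support agreement against a "free controller" offer that inflates later support years. All dollar figures and the 50% uplift are hypothetical illustrations, not actual vendor pricing.

```python
# Illustrative total-cost-of-ownership comparison: a flat support
# agreement versus a "free controller" offer that raises support
# costs in later years. Figures are hypothetical examples only.

def total_cost(purchase, yearly_support):
    """Sum the purchase price and the per-year support costs."""
    return purchase + sum(yearly_support)

# Flat agreement: $100k purchase, $10k/year support for 6 years.
flat = total_cost(100_000, [10_000] * 6)

# "Free controller" offer: same purchase, but support jumps 50% in
# years 4-6 to recoup the controller delivered in year 3.
inflated = total_cost(100_000, [10_000] * 3 + [15_000] * 3)

print(f"Flat support:    ${flat:,}")
print(f"Free controller: ${inflated:,} (+${inflated - flat:,})")
```

The comparison only works when both offers are costed over the full contract term; evaluating the year-3 "free" controller in isolation hides the back-loaded support premium.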

Driving Performance 

Doing more with less also means driving greater value in performance from your IT investment. Last month, John Martin provided a deep dive on the benefits of extending the NVMe protocol all the way to the host to drive efficiencies in application performance. In high transaction environments, where time is money, lower latency and higher speeds deliver immediate value. There’s no benefit to paying more for less performance.

Delivering Automation 

The final piece of the trifecta is automation. In IT, this is generally about protecting your data, predicting performance anomalies and preventing disruption without taxing the IT generalist with routine monitoring tasks. There are five key goals for the automation of data storage:

  1. Gain simple and secure visibility into the health of your systems. 
  2. Expose risk factors and prevent problems before they affect your business. 
  3. Lower support costs with automated case creation and parts replacement. 
  4. Resolve performance issues fast with real-time insights into system bottlenecks. 
  5. Monitor and predict capacity usage to stay a step ahead of growing data demands. 
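Goals 2 and 4 above boil down to flagging risk before a human would notice it. The sketch below shows the general idea with a simple rolling-threshold check: flag any metric sample that deviates more than three standard deviations from the recent mean. This is generic monitoring logic for illustration, not Active IQ's actual algorithm, and the latency values are made up.

```python
# A minimal sketch of threshold-based anomaly detection: flag any
# sample that deviates more than `sigma` standard deviations from
# the mean of the preceding `window` samples. Generic illustration,
# not Active IQ's actual analytics.

from statistics import mean, stdev

def anomalies(samples, window=10, sigma=3.0):
    """Return (index, value) pairs that break the rolling threshold."""
    flagged = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sd = mean(recent), stdev(recent)
        if sd and abs(samples[i] - mu) > sigma * sd:
            flagged.append((i, samples[i]))
    return flagged

# Steady latency readings (ms) with one spike a human might miss.
latency = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.9, 2.1, 2.0, 2.1, 9.5, 2.0]
print(anomalies(latency))  # flags the 9.5 ms spike at index 10
```

In practice, production systems layer smarter models (and fleet-wide baselines) on top of checks like this, but even this simple rule removes a routine monitoring task from the generalist's plate.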

For NetApp customers, the Active IQ engine uses machine learning, predictive analytics and community wisdom to create actionable intelligence that allows IT to prescriptively optimize their NetApp environment.

Many vendors provide some level of monitoring, reporting and alerting services, but Active IQ takes a broader approach, enabling customers to leverage the insights learned from the massive and diverse NetApp user base. Each day, Active IQ receives telemetry data from more than 300,000 assets around the globe, adding to a multi-petabyte data lake that processes over 10 trillion data points per month. By using predictive analytics and community wisdom, Active IQ provides customized insights and recommendations to protect and optimize your NetApp environment.  

Active IQ also provides data center–wide insights and recommendations while leveraging Active IQ Unified Manager to troubleshoot, automate and customize monitoring and management. With Active IQ Unified Manager, you can set up automated remediation actions and active management. You can also customize real-time operational reporting for critical infrastructure health and schedule polling intervals for performance and capacity.

Active IQ and Active IQ Unified Manager comprehensively optimize your data storage environment by delivering:

  • Pattern recognition that provides system-defined and dynamic thresholds, eliminating manual effort; events are categorized by criticality.  
  • Overall infrastructure visibility through a SaaS model that lets you configure IT monitoring to your specific requirements.  
  • Simple deployment with the granularity to maintain performance consistency through quality-of-service (QoS) monitoring and management.  
  • Easily viewed real-time events and topological representations of key components in an intuitive control panel.  
  • Detailed standard and customizable reports that help ensure operational excellence.  
  • Built-in analytics with easy-to-use metrics to understand and optimize storage operations and performance. 

These tools extend the value of a data storage infrastructure while automating tasks that would otherwise require specialized teams.  

By managing costs, driving performance and delivering automation, IT can do more with less. 

Greg Knieriemen

Greg Knieriemen is a NetApp Chief Technologist. He helps develop and drive the vision and application of NetApp products and solutions. Previously, Greg worked for Hitachi and was the founder of the Speaking in Tech Podcast. Greg has over 15 years of experience using, deploying and marketing enterprise IT solutions.
