
Explainable AI with MLOps powered by NetApp and Modzy

Healthcare counselor talking to patients


Sathish Thyagarajan

Extracting value from data requires the use of analytics, machine learning (ML), and artificial intelligence (AI) to distill insights from raw information. However, turning that data and those insights into a competitive differentiator that can drive business value means that AI and ML must operate at massive scale, and explainable AI (XAI) is a crucial element to make sure that those results can be trusted by humans.

In healthcare, organizations are increasingly using AI-based clinical decision support systems (CDSS) that assist clinicians in diagnosing disease and making treatment decisions to improve patient outcomes and save lives. In financial services, companies are using AI to predict liquidity balances, score credit, and optimize investment portfolios through ML-based algorithmic trading strategies. As AI advances, new concerns are emerging around transparency and the reasoning behind AI models.

As stated in the Harvard Business Review, “AI increases the potential scale of bias: Any flaw could affect millions of people, exposing companies to class-action lawsuits.” To ensure transparency and interpretability in AI-enabled predictions, companies must adopt explanatory capabilities and machine learning operations (MLOps) based approaches for building and managing AI-powered systems, which can increase trust from consumers and regulators.

NetApp and Modzy: Partners in applying AI and ML to data at scale

To address these emerging concerns, NetApp and Modzy have partnered to deliver a new way of applying AI and ML to any type of data, including imagery, audio, text, and tables, for at-scale AI and data science where trust matters.

NetApp and Modzy

MLOps solutions enable streamlined deployment, integration, and management of AI and ML models at scale. Modzy gives organizations a central location where all models can be deployed, run, and monitored, with explainability and drift detection to ensure that model performance remains robust. Modzy makes it easy to turn on XAI with a single API request, which adds explainability code directly into the model container. This includes a CUDA-capable runtime with support for NVIDIA GPUs and popular open-source techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Organizations can deploy and run their own custom-built models, open-source or commercial models, or pretrained models from Modzy’s marketplace to analyze data stored in NetApp® StorageGRID®. The StorageGRID and Modzy integration provides a central, secure solution to unify, govern, and analyze data for accelerated AI workloads. Data scientists can identify the data associated with the greatest drift and locate those data assets in StorageGRID for further analysis.
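To make the local-surrogate idea behind LIME concrete, here is a minimal, illustrative sketch in plain Python (not Modzy’s actual API; the function name and toy model are hypothetical). It perturbs an input, weights samples by their proximity to that input, and fits a weighted linear model whose coefficients act as per-feature attributions:

```python
import math
import random

def lime_explain(predict_fn, instance, num_samples=500, kernel_width=1.0):
    """Approximate a black-box model locally with a weighted linear surrogate,
    in the spirit of LIME. Returns one attribution coefficient per feature."""
    dim = len(instance)
    rows, targets, weights = [], [], []
    for _ in range(num_samples):
        # Perturb the instance with Gaussian noise.
        z = [x + random.gauss(0.0, 1.0) for x in instance]
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        # Proximity kernel: nearby perturbations count more.
        w = math.exp(-dist2 / (kernel_width ** 2))
        rows.append(z + [1.0])  # trailing 1.0 is the intercept column
        targets.append(predict_fn(z))
        weights.append(w)
    # Weighted least squares via normal equations: (X^T W X) beta = X^T W y
    n = dim + 1
    xtx = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights))
            for j in range(n)] for i in range(n)]
    xty = [sum(w * r[i] * t for r, t, w in zip(rows, targets, weights))
           for i in range(n)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, n):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * n
    for r in range(n - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, n))) / xtx[r][r]
    return beta[:dim]  # drop the intercept

# Toy black-box model: exactly linear, so the surrogate recovers its weights.
random.seed(42)
attributions = lime_explain(lambda z: 3.0 * z[0] - 2.0 * z[1] + 0.5, [1.0, 2.0])
```

Production libraries such as `lime` and `shap` add the pieces this sketch omits (categorical handling, feature discretization, Shapley-value estimation), but the attribution-by-local-surrogate principle is the same.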

Datacentric AI model retraining
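The drift detection that triggers this kind of datacentric retraining is often implemented by comparing the live feature distribution against the training baseline. As an illustrative sketch (not Modzy’s actual drift metric), one common choice is the Population Stability Index (PSI), shown here in plain Python:

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two 1-D samples.
    A common rule of thumb: PSI > 0.2 signals significant drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def fractions(sample):
        counts = [0] * bins
        for v in sample:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins so the log term below stays finite.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Identical distributions score ~0; a shifted one scores well above 0.2.
training_data = [i / 100 for i in range(100)]
live_data = [0.5 + i / 100 for i in range(100)]
drift_score = psi(training_data, live_data)
```

Running such a check per feature and sorting by score is one simple way to surface the data assets with the greatest drift for follow-up analysis.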

Security, flexibility, and compliance

The NetApp StorageGRID and Modzy integration also ensures that organizations meet the most stringent AI security requirements while retaining the flexibility to build and evolve their AI tech stacks over time. Modzy’s FISMA Moderate security controls, combined with NetApp StorageGRID WORM compliance, DataLock, and ransomware protection for cloud backups, offer customers hybrid deployment options. Those options can be optimized for hardware-based appliances; for software-based virtual machines or Docker containers running on bare-metal servers; or for a combination of virtual and physical environments.

Highly complex AI systems, or so-called “black-box” AI models used in large-scale AI processing for pattern recognition or object detection, can become too complicated even for subject matter experts to parse or understand. Additionally, regulatory momentum is building as governments call for explainability and traceability for AI-enabled systems.

Embrace explainability and improve trust in AI decisions

Explainability and monitoring for AI offer insights into various model failures, help to overcome false positives, and encourage AI acceptance with more informed decisions. Embracing the design principles of transparency, interpretability, and explainability helps to transform a black-box AI system into a differentiated white-box AI model, supporting the social right to explanation, and improving trust in AI decisions. In this effort, NetApp and Modzy join forces to bring those capabilities and help large enterprise customers to comply with regulations related to AI-enabled decision making.

As enterprises of all types embrace AI technologies, they face big data challenges from the edge to the data center to the cloud. As a cloud-led, datacentric software company, NetApp is constantly building a network of partners that can help with all aspects of constructing a data pipeline for AI solutions with its industry-leading data management capabilities. Data fabric technologies and services from NetApp, combined with Modzy’s MLOps and XAI capabilities, can jumpstart your company on the path to enabling trusted AI workloads at scale.

Learn more

To learn more about the NetApp and Modzy solution, read the white paper MLOps powered by NetApp and Modzy.

Sathish Thyagarajan

Sathish joined NetApp in 2019. In his role, he develops solutions focused on AI at the edge and cloud computing. He architects and validates AI/ML/DL data technologies, ISVs, experiment management solutions, and business use cases, bringing NetApp value to customers globally across industries by building the right platform with data-driven business strategies. Before joining NetApp, Sathish worked at OmniSci, Microsoft, PerkinElmer, and Sun Microsystems. He has an extensive background in presales engineering, product management, technical marketing, and business development. As a technical architect, his expertise is in helping enterprise customers solve complex business problems using AI, analytics, and cloud computing, working closely with product and business leaders on strategic sales opportunities. Sathish holds an MBA from Brown University and a graduate degree in Computer Science from the University of Massachusetts. When he is not working, you can find him hiking new trails at the state park or enjoying time with friends and family.

