
Connecting AI risk to real-time data decisions

NetApp + Enkrypt AI: Bringing AI risk enforcement to the data layer



Praveen Vijayaraghavan

Enterprise AI adoption is already reshaping how data is accessed inside organizations. Security leaders are watching training pipelines and AI agents read directly from enterprise file systems and object stores, often using broad service identities and operating at machine scale. In many environments, this access bypasses the application control layers that teams have relied on for years. When sensitive data is ingested into a model or embedded into a vector index, the exposure happens immediately, and it cannot simply be undone.

At NetApp, we believe security must operate where the data lives. Access decisions should reflect context and intent, rather than rely solely on static access controls. Routing data through centralized inspection layers pushes the enforcement burden onto every application and pipeline in the environment. In an AI-driven enterprise operating at machine scale, that model is no longer reliable.

Enkrypt AI and NetApp are collaborating to address this shift directly. Enkrypt AI provides deep visibility into AI system risk, evaluating model behavior, unsafe prompts, policy violations, and compliance posture. At NetApp, we are investing in a data-centric security architecture that brings data sensitivity, identity, activity, and lineage together in a unified security graph and uses that live context to automatically approve or block data access. The goal is to help teams ship AI capabilities faster while reducing exposure, preventing costly leaks, and avoiding compliance surprises.
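To make the idea concrete, here is a minimal sketch of what such a security graph might look like. Every class, node, and attribute name below is hypothetical, chosen purely for illustration; none of them correspond to actual NetApp or Enkrypt AI interfaces.

```python
# Minimal sketch of a unified security graph. All names are
# hypothetical, not NetApp or Enkrypt AI APIs.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                       # "dataset", "identity", or "workload"
    attrs: dict = field(default_factory=dict)

@dataclass
class SecurityGraph:
    nodes: dict = field(default_factory=dict)   # name -> Node
    edges: list = field(default_factory=list)   # (src, relation, dst)

    def add(self, name, kind, **attrs):
        self.nodes[name] = Node(kind, attrs)

    def link(self, src, relation, dst):
        self.edges.append((src, relation, dst))

graph = SecurityGraph()
# Data context: sensitivity and regulatory attributes live on the dataset node.
graph.add("pii-archive", "dataset", sensitivity="restricted", regulation="GDPR")
# A non-human service identity used by a training pipeline.
graph.add("train-svc", "identity", human=False)
# AI workload posture, e.g. supplied by an external AI risk feed.
graph.add("llm-finetune", "workload", ai_risk="high")
graph.link("llm-finetune", "runs_as", "train-svc")
graph.link("train-svc", "reads", "pii-archive")
```

Holding datasets, identities, and workloads in one structure is what lets later access decisions consult all three at once instead of querying separate systems.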

Unified AI and data risk visibility

AI risk and data risk are typically evaluated in parallel but rarely in combination. Security teams assess model behavior and AI governance posture, while data protection teams track sensitive datasets and regulatory exposure. Without a unified view, organizations lack clear visibility into where AI activity intersects with their most sensitive information.

In a forward-looking collaboration, AI workload posture from Enkrypt AI would be correlated with NetApp’s data context — including sensitivity classifications, regulatory attributes, identity context, lineage, and access behavior — within a unified security graph. This structure enables a precise understanding of which AI systems are interacting with regulated or high-value datasets across hybrid environments.
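Continuing the hypothetical sketch above, a simple traversal over that graph can surface exactly these intersections. The query below is illustrative only; a production security graph would use a proper graph store and query language.

```python
# Illustrative traversal over the sketch graph above: find high-risk AI
# workloads whose service identities read regulated datasets.
def risky_intersections(graph):
    runs_as = {s: d for (s, rel, d) in graph.edges if rel == "runs_as"}
    reads = {}
    for (s, rel, d) in graph.edges:
        if rel == "reads":
            reads.setdefault(s, []).append(d)
    hits = []
    for name, node in graph.nodes.items():
        if node.kind == "workload" and node.attrs.get("ai_risk") == "high":
            for ds in reads.get(runs_as.get(name), []):
                if graph.nodes[ds].attrs.get("regulation"):
                    hits.append((name, ds))
    return hits

print(risky_intersections(graph))   # [('llm-finetune', 'pii-archive')]
```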

 This consolidated view helps organizations pinpoint where AI workloads touch regulated or high-value data, prioritize controls to reduce the highest-risk exposure paths, and ultimately lower the likelihood of AI-driven data leaks and breaches, as well as the resulting reputational and regulatory impact.

Turning context into action

Unified visibility is valuable, but risk is ultimately determined when data is accessed.

When AI workload posture from Enkrypt AI is correlated with data sensitivity and regulatory attributes, that combined context can inform real-time access evaluation. For example, Enkrypt AI may identify an AI training workload as high risk based on governance policy, model configuration, or usage behavior. If that workload later attempts to bulk-read a dataset containing regulated or high-value information stored on NetApp-managed infrastructure, the access decision should reflect more than static permissions.

In a forward-looking architecture, the request would be evaluated using the full context available within the security graph: data sensitivity, regulatory classification, lineage history, identity type, access behavior, and the AI risk signals supplied by Enkrypt AI. If the combined context violates the defined policy intent, enforcement would occur at the storage I/O layer before the data is consumed.
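A simplified decision function can illustrate the shape of such an evaluation. This is a hedged sketch under assumed signal formats and thresholds, not NetApp's enforcement logic; every field name here is hypothetical.

```python
# Hedged sketch of an access-time decision that combines AI posture and
# data context. Field names and thresholds are illustrative assumptions,
# not actual NetApp or Enkrypt AI interfaces.
def evaluate_read(request, data_ctx, ai_posture, policy):
    """Return 'allow' or 'deny' for a single read request."""
    # Block bulk reads of regulated data by workloads flagged high risk.
    if (ai_posture.get("risk") == "high"
            and data_ctx.get("regulation")
            and request.get("bytes_requested", 0) > policy["bulk_threshold"]):
        return "deny"
    # Block non-human identities reading restricted data with no approved lineage.
    if (not request.get("human_identity", True)
            and data_ctx.get("sensitivity") == "restricted"
            and not data_ctx.get("lineage_approved", False)):
        return "deny"
    return "allow"

decision = evaluate_read(
    request={"bytes_requested": 50 * 2**30, "human_identity": False},
    data_ctx={"regulation": "GDPR", "sensitivity": "restricted"},
    ai_posture={"risk": "high"},                 # e.g. from an AI risk feed
    policy={"bulk_threshold": 10 * 2**30},       # 10 GiB illustrative cutoff
)
print(decision)   # deny
```

The point of the sketch is the inputs, not the rules: because the decision sees posture, sensitivity, regulation, identity type, and lineage together, it can deny a request that any one signal alone would have allowed.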

This is where AI governance transitions from advisory insight to enforceable control at the data layer.

Continuous enforcement in a dynamic environment

AI environments do not operate in static cycles, and governance cannot either. Access decisions must reflect the live intersection of AI workload posture, data sensitivity, lineage, identity context, and observed access behavior — not just the conditions that existed when a policy was written. As models retrain, pipelines evolve, and datasets are copied or repurposed, risk evaluation must happen continuously at the time of access rather than rely on expensive offline scans that quickly go stale and miss real-time changes in workload behavior and data context.

In a forward-looking collaboration, AI risk intelligence from Enkrypt AI would feed directly into NetApp’s data-centric security architecture, allowing access-time decisions to adapt as AI posture or data characteristics change. By applying that continuously updated context at the storage layer, enforcement would remain consistent even as workloads shift across hybrid environments. Governance intent would not depend on periodic reviews or manual recalibration — it would be upheld each time sensitive data is read.
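One way to picture this is a posture cache that a risk feed keeps current and that the storage layer consults on every read. The sketch below is purely illustrative, with hypothetical names; it fails closed when posture is missing or stale.

```python
# Illustrative sketch of continuous, access-time enforcement: an AI risk
# feed pushes posture updates into a cache, and every read consults the
# freshest posture instead of a stale offline scan. Names are hypothetical.
import time

class PostureCache:
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._entries = {}                 # workload -> (posture, fetched_at)

    def update(self, workload, posture):
        # Called whenever the risk feed reports a posture change.
        self._entries[workload] = (posture, time.monotonic())

    def get(self, workload):
        posture, fetched = self._entries.get(workload, (None, 0.0))
        # Missing or stale posture fails closed as "unknown".
        if posture is None or time.monotonic() - fetched > self.ttl:
            return {"risk": "unknown"}
        return posture

def on_read(workload, data_ctx, cache):
    posture = cache.get(workload)          # evaluated at access time
    if posture["risk"] in ("high", "unknown") and \
            data_ctx.get("sensitivity") == "restricted":
        return "deny"
    return "allow"

cache = PostureCache()
cache.update("llm-finetune", {"risk": "high"})
print(on_read("llm-finetune", {"sensitivity": "restricted"}, cache))  # deny
```

Failing closed on stale or missing posture is one design choice; a real system would weigh that strictness against availability for non-sensitive data.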

Governing AI at the point of data

AI is now embedded in core business workflows. Models retrain, agents act autonomously, and non-human identities operate continuously across hybrid environments. The question facing security leaders is no longer whether to adopt AI — it is how to maintain control as that adoption scales.

Sustainable AI governance requires more than monitoring models or classifying data independently. It requires aligning AI risk intelligence with the layer where data is accessed. When decisions reflect both workload posture and data sensitivity — and are applied at the point of read — governance intent holds even as systems evolve.

As we evolve this integration with Enkrypt AI, we are connecting AI risk insights with data-layer decisioning to build toward an architecture where innovation and control are not trade-offs, but coexisting outcomes.

Explore more about AI data security at NetApp.

Praveen Vijayaraghavan

Praveen Vijayaraghavan is a product leader at NetApp, leading the strategy, execution, and growth of a portfolio of products spanning infrastructure observability, data and AI governance, and security and compliance. He has previously held product leadership roles at Microsoft, X, and Teradata, building and scaling enterprise and consumer products and platforms. He holds a master's degree in computer science from the University of Minnesota, Twin Cities.
