Visual interpretation has become an integral part of communication with the emergence of image-processing technologies and the growing enterprise adoption of computer vision for understanding digital images and videos. Applications of image recognition range from object detection in self-driving cars to diagnosing pathologies for surgical planning from 3D images obtained through computed tomography (CT) or magnetic resonance imaging (MRI). Artificial intelligence (AI) in image processing opens novel business opportunities across industries. However, the more decisions organizations put into the hands of AI, the more risk they accept related to privacy, security, fairness, ethics, and regulation.
Discussion about privacy is growing across industries in this era of AI and modern data analytics. Automated techniques such as defacing and skull-stripping algorithms, which obscure an individual's facial features in structural CT and MR images, have become an essential part of privacy-preserving data sharing at biomedical research institutions. Regulations such as the GDPR, with its right to erasure, and the HIPAA Privacy Rule govern how organizations must collect, process, and erase personal information, including digital images such as photographs. For data to qualify as shareable under the HIPAA Safe Harbor de-identification standard, full-face photographic images and any comparable images must be removed. Failure to do so can lead to hefty fines for security and compliance breaches and can seriously damage an organization's reputation. Privacy becomes even more crucial when institutions outsource their machine learning (ML) and neural network workloads to cloud computing platforms.
Responsible AI helps companies build the trust, fairness, and governance that are crucial for AI at scale in large enterprises. In my previous blog post, Federated learning: Intelligence versus privacy, I described some of NetApp's enterprise-grade data privacy solutions. Expanding on those capabilities, NetApp® AI has joined forces with Protopia to continue bringing value to its customers. Image-processing algorithms use techniques such as color enhancement, noise removal, segmentation, detection, and recognition to extract semantic information from captured image data. Protopia's Stained Glass Factory is an extension to deep learning (DL) frameworks such as PyTorch. It formulates the problem of changing the input's representation while retaining the information pertinent to the functionality of a given deep neural network model.
The responsible AI solution combines NetApp's enterprise-grade data management and AI/ML model traceability capabilities with Protopia's image and data transformation software for AI inferencing tasks in a face-detection use case. We used the Face Detection Data Set and Benchmark (FDDB) to study the problem of unconstrained face detection, combined with the PyTorch machine learning framework for an implementation of FaceBoxes.
Data scientists and AI/ML engineers in large enterprises and research institutions can simply add the obfuscation code for AI inferencing deployment scenarios like these:
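Protopia's transformation is learned per model, and its API is proprietary, so the snippet below is only an illustrative, hypothetical sketch of the general pattern: obfuscate the input before it leaves the trusted boundary, and send only the transformed representation to the inference service. The function name and the uniform Gaussian noise model here are assumptions for illustration, not Protopia's implementation.

```python
import random

def obfuscate_image(pixels, noise_scale=0.1, seed=None):
    """Return a noised copy of a flat list of normalized pixel values in [0, 1].

    Hypothetical stand-in for a learned obfuscation transform: a real system
    learns where and how much noise each input element can tolerate for a
    given model, whereas this sketch applies uniform Gaussian noise.
    """
    rng = random.Random(seed)
    # Clamp each noised value back into the valid [0, 1] pixel range.
    return [min(1.0, max(0.0, p + rng.gauss(0.0, noise_scale))) for p in pixels]

# Obfuscate inside the trusted environment, then ship only the
# protected representation to the (possibly cloud-hosted) model.
original = [0.2, 0.5, 0.9, 0.0]
protected = obfuscate_image(original, noise_scale=0.05, seed=42)
```

Because the transform is tuned to a specific model, the obfuscated representation stays useful for that model's task while revealing far less to anyone else who intercepts it.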
If your image data is currently stored in an S3-compatible object storage platform, such as NetApp® StorageGRID® or Amazon Simple Storage Service (Amazon S3), then the S3 data mover capabilities of the NetApp DataOps Toolkit can be used. Data scientists and AI engineers who are looking for efficient ways to deploy AI/ML inferencing models that protect sensitive information can benefit from this solution, which combines a flexible, scale-out architecture with responsible AI practices for both on-premises and hybrid cloud deployments.
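The end-to-end flow can be sketched as: pull the image objects from S3-compatible storage, apply the obfuscation transform inside the trusted zone, and run inference only on the protected representation. The sketch below uses stand-in stub functions; in the real solution, the data movement would be handled by the DataOps Toolkit's S3 data mover and the inference by the face-detection model, so every name and payload here is hypothetical.

```python
def pull_from_object_store(bucket, key):
    """Stub for an S3 data-mover call (e.g., a DataOps Toolkit S3 pull);
    returns placeholder normalized pixel data instead of a real object."""
    return [0.1, 0.4, 0.8]

def obfuscate(pixels):
    """Stub for the learned obfuscation transform applied in the trusted zone."""
    return [round(p * 0.99, 3) for p in pixels]

def run_inference(pixels):
    """Stub for the face-detection serving endpoint."""
    return {"faces_detected": 1 if max(pixels) > 0.5 else 0}

def protected_inference(bucket, key):
    # 1. Move the image data from S3-compatible storage (StorageGRID, Amazon S3).
    raw = pull_from_object_store(bucket, key)
    # 2. Transform it before it leaves the trusted boundary.
    protected = obfuscate(raw)
    # 3. Only the protected representation reaches the inference service.
    return run_inference(protected)

result = protected_inference("images", "face-001.jpg")
```

The key design point is step 2's placement: the raw image never crosses into the inference environment, which is what makes outsourcing the model to a cloud platform less risky.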
To learn more about this solution and the validation performed by the NetApp and Protopia engineering teams, refer to the technical report Responsible AI and confidential inferencing.
Ethics in AI systems design falls within the scope of AI governance, with fairness, accountability, transparency, and safety as its core principles. As a cloud-led, data-centric software company, NetApp is uniquely positioned to offer a data fabric with industry-leading data management capabilities across the edge-core-cloud ecosystem. NetApp is committed to helping its customers build privacy-preserving, AI-at-scale solutions that enable private, secure, seamless, and ethical data analysis.
Learn more about NetApp AI solutions.
Sathish joined NetApp in 2019. In his role, he develops solutions focused on AI at the edge and in cloud computing. He architects and validates AI/ML/DL data technologies, ISV integrations, experiment management solutions, and business use cases, bringing NetApp value to customers globally across industries by building the right platform with data-driven business strategies. Before joining NetApp, Sathish worked at OmniSci, Microsoft, PerkinElmer, and Sun Microsystems. Sathish has an extensive career background in presales engineering, product management, technical marketing, and business development. As a technical architect, his expertise is in helping enterprise customers solve complex business problems using AI, analytics, and cloud computing by working closely with product and business leaders in strategic sales opportunities. Sathish holds an MBA from Brown University and a graduate degree in Computer Science from the University of Massachusetts. When he is not working, you can find him hiking new trails at the state park or enjoying time with friends and family.