
Join NetApp at GTC21

Mike McNamara

NVIDIA GTC is happening April 12-16, 2021, bringing the premier artificial intelligence (AI) and deep learning (DL) conference to audiences around the globe. NetApp is a proud Diamond Sponsor of the 2021 virtual conference, and we’d love for you to join us.

It’s going to be an exciting event, so register for free now to make sure you don’t miss it! We’re offering a number of sessions this year, falling into three focus areas:

  • New joint AI infrastructure solutions. NetApp is working closely with NVIDIA to make it easier for enterprises to gain access to the infrastructure needed to accelerate critical AI and machine learning (ML) projects.
  • Data science software. NetApp continues to expand its AI software ecosystem. We’re partnering with industry leaders like Domino Data Lab, Iguazio, and SFL Scientific to validate and integrate leading software solutions and deliver advanced services.
  • Expanded platforms. AI technology is everywhere. Our expanded platform offerings include enhancements to our own solutions and partnerships with industry leaders like Lenovo to deliver the right infrastructure for every need, from edge to core to cloud.
Read on to find out about our cutting-edge sessions and determine your GTC plan of attack. Our full announcement blog will come out on Monday, April 12.

NetApp Sessions at GTC21

Session Summary

NetApp and NVIDIA AI Infrastructure Solutions
Session SS33180: Next Generation AI Infrastructure
Session S32762: A New Approach to AI Infrastructure
Data Science Software and Partner Solutions
Session E31919: From Research to Production—Effective Tools and Methodologies for Deploying AI-Enabled Products
Session S32190: Democratizing Access to Powerful MLOps Infrastructure (with Domino Data Lab)
Session S32161: Large-Scale Distributed Training in Hybrid Cloud
Session SS33179: COVID-19 Lung CT Lesion Segmentation & Image Pattern Recognition with Deep Learning (with SFL Scientific)
Expanded Platform Solutions
Session S32187: AI Inferencing in the Core and the Cloud
Session SS33181: Go Beyond HPC—GPUDirect Storage, Parallel File Systems, and More
Session SS33178: How to Develop a Digital Architecture and Accelerate the Adoption of AI and ML Across the DoD

Full Session Abstracts

Joint NetApp and NVIDIA AI Infrastructure Solutions

Session SS33180: Next Generation AI Infrastructure

Presenters: Dan Holmay, Business Development, NetApp; Will Vick, Head of NALA DGX and Data Center Solution Sales, NVIDIA

Learn about NetApp and NVIDIA AI solutions, including the recently announced NetApp® ONTAP® AI integrated solution. You can deploy NetApp ONTAP AI as a highly flexible and scalable reference architecture or as a preconfigured integrated solution with on-site installation and comprehensive support.

See full session details

Session S32762: A New Approach to AI Infrastructure

Presenters: Nithya Natesan, Product Management Lead, NVIDIA; Scott Dawkins, Field CTO, NetApp

In this technical session, learn how to avoid the challenges that many organizations face: losing valuable time on systems provisioning and integration, workload orchestration and scheduling, and suboptimal utilization of accelerated compute resources. We'll present a simpler approach to deploying and managing AI platforms, one that gives your developers effortless, scalable access to resources in a cost-effective model and delivers large-scale AI infrastructure to your data scientists. With live demos, you’ll see how this new approach draws on NVIDIA’s own experience supporting leading-edge AI R&D.

See full session details

Data Science Software and Partnerships

Session E31919: From Research to Production—Effective Tools and Methodologies for Deploying AI-Enabled Products

Presenter: Muneer Ahmad, Data Scientist and AI Solution Architect, NetApp

In this presentation, we will discuss integrating ML systems so that they operate and scale continuously in production environments. Topics include:
  • MLOps and DataOps methodologies and how to use them.
  • ML workflows and creating an efficient MLOps/DataOps pipeline.
  • Leveraging tools and architectures to build, train, test, and deploy models in production environments.
  • Developing an end-to-end automated pipeline to optimize machine learning deployment flow.
  • Balancing performance boosts from complex ML workflow management tools against the management overhead of deploying them.
See full session details

Session S32190: Democratizing Access to Powerful MLOps Infrastructure

Presenters: Mark Cates, AI Product Manager, NetApp; Thomas Robinson, VP of Strategic Partnerships and Initiatives, Domino Data Lab

In this session, NetApp and Domino Data Lab will demonstrate how you can leverage Domino and NetApp ONTAP AI to:
  • Address the most common AI workflow pain points. Domino acts as an orchestration layer to simplify AI infrastructure configuration, giving data scientists access to the right resources—such as Spark on demand—with the touch of a button.
  • Provide a best-in-class workbench experience for data scientists. Domino supports both open source tools and proprietary software, allowing teams to work with the tools they prefer.
  • Harness the power of NVIDIA DGX systems with the Kubernetes-native Domino Compute Grid. Compute Grid provides accelerated access to NVIDIA GPU resources in DGX systems to support the full AI lifecycle, from model development to model training to production model deployment at scale.
  • Increase efficiency and performance with ONTAP. Domino and NetApp ONTAP data management software work together to maximize multi-node NVIDIA DGX POD™ utilization and automate the provisioning and scheduling of data science workloads by using custom resource definitions.
See full session details

Session S32161: Large-Scale Distributed Training in Hybrid Cloud

Presenter: Rick Huang, Data Scientist, NetApp

This session covers the details of large-scale distributed training using Azure NetApp Files, RAPIDS, and Dask for cluster-based multi-GPU, multi-node processing and training. We use RAPIDS CUDA machine learning, Azure Kubernetes Service, and NetApp Trident for persistent volumes, NetApp Snapshot™ copies, and other services on Azure to build an end-to-end AI pipeline that simulates click-through rate prediction on the Criteo Terabyte Click Logs dataset. NVIDIA CUDA, Dask distributed training, and Azure NetApp Files technologies are integrated to enable fast model deployment, data replication, and production monitoring capabilities in the public cloud.

See full session details

Session SS33179: COVID-19 Lung CT Lesion Segmentation & Image Pattern Recognition with Deep Learning (with SFL Scientific)

Presenters: Rick Huang, Data Scientist and TME, NetApp; Egor Kharakozov, Senior Data Scientist, SFL Scientific

Watch a demonstration of a system that automatically identifies and segments lesions in lung CT images of COVID-19 patients, reducing the burden on healthcare workers. Using NVIDIA Clara pretrained healthcare AI models, we quickly implemented a state-of-the-art model using custom CT data and transfer learning to optimize results. The NetApp Data Science Toolkit enabled easy versioning and traceability for data and models during experimentation and tuning.

A second demonstration provides a view of real-time face mask detection and social distancing in a clinical setting to provide caregivers with a means to ensure compliance with policies meant to protect patients and healthcare workers. Users can drill down as needed into more detailed aggregate analytics on adherence.

See full session details

Expanded Platforms

Session S32187: AI Inferencing in the Core and the Cloud

Presenter: Rick Huang, Data Scientist, NetApp

In this session, we will discuss the integration of NVIDIA Triton Inference Server with NetApp ONTAP AI for inferencing in the core and the cloud. The session explains:
  • How ONTAP AI can host inferencing workloads by combining NetApp ONTAP AI with NVIDIA DGX A100, NetApp AFF A800 storage, NVIDIA Triton Inference Server, and a Kubernetes infrastructure built using NVIDIA DeepOps.
  • A reference architecture combining NetApp ONTAP AI, NetApp StorageGRID® for object storage in the cloud, and NetApp Data Science Toolkit for seamless data snapshots between the core and the cloud.
  • A conversational AI use case with NVIDIA Jarvis client applications connected to an NVIDIA Triton Inference Server for millisecond-level inference performance and model retraining using NVIDIA NeMo.
See full session details

Session SS33181: Go Beyond HPC—GPUDirect Storage, Parallel File Systems, and More

Presenters: Joey Parnell, Software Engineer, NetApp; Abdel Sadek, TME, NetApp

IT departments are scrambling to deploy high-performing yet simple and scalable storage solutions that can meet the performance demands of AI workloads. In this session:
  • Learn how to turbocharge your NVIDIA DGX A100 systems with NVIDIA GPUDirect Storage, the massively scalable BeeGFS file system, and NetApp EF600 all-flash arrays.
  • Find out how to simplify IT workflows by provisioning your apps and storage with cloud-native tools like Kubernetes.
  • See how NetApp customers are combining NVIDIA technology with NetApp EF-Series storage to accelerate AI.
See full session details

Session SS33178: How to Develop a Digital Architecture and Accelerate the Adoption of AI and ML Across the DoD

Presenters: Kirk Kern, CTO Americas, NetApp; Lloyd Granville, Deputy CTO of Dept. of Defense and Intelligence Community, NetApp; May Casterline, Senior Data Scientist, NVIDIA

AI experts from NetApp and NVIDIA will discuss how ONTAP AI helps U.S. Department of Defense (DoD) and U.S. Postal Service (USPS) customers ingest, prepare, train on, deploy, and analyze datasets using a hybrid cloud architecture to expedite AI and ML workloads. You’ll learn how ONTAP AI:
  • Eliminates design complexities
  • Allows independent scaling of compute and storage
  • Enables you to start small and scale seamlessly
  • Provides a range of storage options that address various performance and cost points
See full session details

Don’t forget to register for free and join NetApp at NVIDIA GTC 2021.

Mike McNamara

Mike McNamara is a senior product and solution marketing leader at NetApp with over 25 years of data management and cloud storage marketing experience. Before joining NetApp over ten years ago, Mike worked at Adaptec, Dell EMC, and HPE. Mike was a key team leader driving the launch of a first-party cloud storage offering and the industry’s first cloud-connected AI/ML solution (NetApp), a unified scale-out and hybrid cloud storage system and software (NetApp), an iSCSI and SAS storage system and software (Adaptec), and a Fibre Channel storage system (EMC CLARiiON).

In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he is a member of the Ethernet Technology Summit Conference Advisory Board, a member of the Ethernet Alliance, a regular contributor to industry journals, and a frequent event speaker. Mike also published a book through FriesenPress titled "Scale-Out Storage - The Next Frontier in Enterprise Data Management" and was listed as a top 50 B2B product marketer to watch by Kapost.

