
Operationalize your AI workloads at the edge with NetApp and Lenovo for inferencing

Mike McNamara

Organizations today are generating massive volumes of data at the network edge. To gain maximum business value from smart sensors and IoT data, they are looking for a real-time event-streaming solution that enables edge computing. Computationally demanding jobs are increasingly performed at the edge, outside of data centers. Artificial intelligence (AI) inferencing is one of the drivers of this trend. Edge servers provide sufficient computational power for these workloads, especially when using accelerators, but limited enterprise-class storage is often an issue, especially in multiserver environments.

NetApp and Lenovo have partnered to develop a validated edge inferencing solution that is simple, smart, secure, and easy to manage, at an affordable price. The solution helps you meet requirements with a modern all-flash array that offers comprehensive data services, integrated data protection, seamless scalability, high performance, and cloud integration.

The Lenovo ThinkSystem SE350 is an edge server that is designed to enable traditional IT and OT applications as well as new transformative IoT and AI systems. The ThinkSystem SE350, built on the Intel Xeon D-2100 processor, is a compact, rugged system that is designed to fit into any environment. The NetApp® AFF C190 system is optimized for flash and delivers application response up to 10 times faster than hybrid arrays. If more storage capacity or faster networking is needed, the NetApp AFF A220 or NetApp AFF A250 can also be used.

All-flash storage enables you to run more workloads on a single system without compromising performance. This validated solution demonstrates high performance and optimal data management with an architecture that uses one or more Lenovo ThinkSystem SE350 edge servers interconnected with a single NetApp AFF storage system. This solution can address use cases such as autonomous vehicles, patient monitoring, cashierless payment, and inventory monitoring.

The design in Figure 1 shows how multiple ThinkSystem SE350s can be deployed in an edge environment, such as multiple retail stores. Models can be managed through a single storage node (AFF C190) and pushed out to each of the compute nodes, simplifying model management for your AI workloads. This design also provides local data storage, so you don't have to move all of the data from the compact edge servers back to the cloud. This tiering can reduce your storage costs by retaining data locally and moving only the necessary data back to the cloud.

Figure 1) Physical architecture overview.

The NetApp and Lenovo solution is a flexible scale-out architecture that is ideal for enterprise AI inference deployments. NetApp storage delivers the same or better performance as local SSD storage and offers the following benefits to data scientists, data engineers, and IT decision makers:

  • Effortless sharing of data between AI systems, analytics, and other critical business systems. This data sharing reduces infrastructure overhead, improves performance, and streamlines data management across the enterprise.
  • Independently scalable compute and storage minimize costs and improve resource utilization.
  • NetApp data compaction and deduplication reduce the amount of storage needed, and automatic cold data tiering lowers storage costs.
  • The ability to meet demanding and constantly changing business needs with seamless scalability, easy cloud connectivity, and integration with emerging applications.


To learn more about this joint solution, read the technical report and visit www.netapp.com/ai.

Mike McNamara

Mike McNamara is a senior leader of product and solution marketing at NetApp with 25 years of data management and data storage marketing experience. Before joining NetApp over 10 years ago, Mike worked at Adaptec, EMC, and HP. Mike was a key team leader driving the launch of the industry's first cloud-connected AI/ML solution (NetApp), unified scale-out and hybrid cloud storage system and software (NetApp), iSCSI and SAS storage system and software (Adaptec), and Fibre Channel storage system (EMC CLARiiON). In addition to his past role as marketing chairperson for the Fibre Channel Industry Association, he is a member of the Ethernet Technology Summit Conference Advisory Board, a member of the Ethernet Alliance, a regular contributor to industry journals, and a frequent speaker at events. Mike also published a book through FriesenPress titled "Scale-Out Storage – The Next Frontier in Enterprise Data Management".
