“This is not the singularity you are looking for.” – The Data Buddha
No need to worry. AI hasn’t decided (yet) to rise up and enslave us all. In fact, for now at least, it seems more like AI is here “to serve man,” and we aren’t talking about a cookbook.
Enter AI inference servers. They aren’t new. Inference already helps most of us with daily tasks that we probably don’t even think of as artificial intelligence. Google speech recognition, image search, and spam filtering are all examples of AI inference workloads. Today, inference, particularly recommender systems, which NVIDIA cites as the most common edge workload, extends well beyond these simple tools into more complex applications: medical diagnostics, manufacturing, agriculture, analytics, media and entertainment, and more.
NetApp® technology, in combination with the NVIDIA TensorRT Inference Server, provides an integrated edge, core, and cloud solution for these use cases and for common AI inference workloads like the ones described above.
Charles Hayes is a Product Marketing Manager focusing on hybrid cloud solutions. He’s a 20-year veteran of the storage industry who joined NetApp in September 2019. Before NetApp, he spent years defining, developing, and marketing products and solutions for SimpleTech, Iomega, EMC, and Lenovo. Charles is also a mediocre percussionist/guitarist, an old-school punk rock fan, and frequently claims he saw all the cool bands before they were cool.