Artificial intelligence (AI) has recently been described in many ways—revolutionary, an economic game changer, a “beast” that is either overhyped or underhyped. I like to think of AI as a new frontier in the great tradition of tools that have propelled humankind forward—the next stage of the information revolution, following the industrial and scientific revolutions that came before it. And like any of the significant innovations before it, AI has the potential to become a force for good, or a source of chaos.
AI holds great promise for businesses: Predictive AI, powered by machine learning, is already being used to recognize patterns, drastically improve efficiency, and solve business and social problems better and faster than anything we’ve seen. It can be used to improve medical research, like predicting how proteins fold to affect biological functions. It can help detect financial fraud to protect both customers and the company’s bottom line. It can aid natural disaster planning by better predicting crises and their ripple effects. We know because we have been helping customers achieve these AI-driven objectives for many years.
And generative AI not only recognizes patterns but also generates new patterns. This capability can enable software developers to be more productive, help content creators deliver much more immersive experiences, and make it far easier for customers, employees, citizens, and students to find the information they need.
All of these possibilities are made possible by one thing: data. This has long been true—better datasets have allowed prior generations of AI tools to deliver better predictions, and by training on very large datasets, large language models have powered generative AI to achieve new levels of capability. Current innovations are rapidly improving these foundation models by using customers’ private data to provide better context, or to fine-tune an existing model so it makes better decisions. The eminent computer scientist Peter Norvig summarizes it elegantly: “More data beats clever algorithms, but better data beats more data.”
Simply put, AI is built on a foundation of data—data storage, safety, and accessibility are critical to the insight and analysis that AI provides. And your organization’s AI capabilities are only as good as the data that fuels them.
Operationalizing AI requires managing multiple versions of models and keeping them up to date with the latest datasets. This means that massive amounts of data must flow freely—whether that is the enterprise’s own data or other relevant datasets that customers use to improve their AI systems. Of course, we know better than anyone that this isn’t the data equivalent of opening a spillway on a dam. Not only is the volume of data massive and unrelenting, but it’s scattered, often unstructured, and needs to be protected. Complex technology and disparate organizational and data silos are major hurdles to getting AI projects into production. To capitalize on the best of AI, you need the most complete, powerful, and sustainable solutions, without the bottlenecks of traditional data silos. A modern, intelligent, integrated hybrid cloud data infrastructure is the foundation of AI.
Whether you’re a small or large business, you can optimize your data engine to take advantage of the intelligent technology revolution.
By optimizing your data engine, you can have a solid foundation to unlock the power of AI while doing so responsibly, safely, and affordably.
Data and data infrastructure are what we do best at NetApp. By leveraging our expertise, you can truly hit the ground running and stay focused on your customers’ demands.
For 30 years, through successive technological and business model revolutions, NetApp has not only endured but also capitalized on each of these changes to deepen our expertise, help our customers navigate a changing world, and drive our own business success. We were here when client-server computing became the norm, when businesses migrated sales to the internet, when cloud became the disruptor, and when hybrid became the solution. As a result, we know what businesses need even as technology changes. AI is no different. We have enabled customers to use AI techniques to advance drug development and disease diagnosis, improve manufacturing and customer service, and reduce fraud, waste, and risk. And we use AI every day to make our own products and services better.
Only NetApp enables customers to integrate, access, and manage the full lifecycle of their data, and to do it for any data, for any application — including AI — anywhere.
If you’re interested in learning more about the future of AI and data, and NetApp’s position at this critical nexus, register for NetApp INSIGHT® 2023—our global tech conference for data and infrastructure thinkers, leaders, and builders.
A group of industry leaders will discuss how we can chart this path responsibly and successfully. I hope you’ll join us.
George Kurian is chief executive officer at NetApp and a member of the Board of Directors. George joined the company in 2011, bringing his passion and relentless focus on execution to his leadership roles at NetApp. He was named CEO in June 2015. He holds a Bachelor of Science in electrical engineering from Princeton University and an MBA from Stanford University.