Kubernetes technology begins with containers. Containers are a highly efficient way to separate independent workloads on a single host. This separation makes containers similar to virtual machines (VMs), but containers provide greater server efficiency than VMs do. Therefore, a host can have more containers colocated than VMs, which makes containers a technology that can scale enormously.
Harnessing all these containers under a single point of control, however, hasn't been as straightforward. It wasn't until Docker was developed that container usage became widespread. A drawback of Docker is that it can manage containers on only a single host, so for many enterprise organizations, it isn't a solution that meets their demands.
Enter Kubernetes, which enables multiple containers to run across numerous machines. Kubernetes was created by Google after more than a decade of using container orchestration internally to operate the company's public services. Google had been using containers for a long time and developed its own proprietary solutions for data center container deployment and scaling. With the development of Kubernetes, the scaling benefits that containers offer over VMs have increased exponentially. When tasks demand such a large scale, control is required over all those deployments. That's why Google and other cloud providers offer fully managed Kubernetes services.
By providing a robust solution for managing and for scaling containers and containerized applications and workloads across a cluster of machines, Kubernetes takes container deployment to a whole new level.
Why Use Kubernetes for Containers?
On deployment, a container provides process and file system separation in much the same way that a VM does, but with considerable improvements in server efficiency. That efficiency allows a much greater density of containers to be colocated on the same host.
Although container technology has been a part of UNIX-like operating systems since the turn of the century, it was only with the advent of Docker that containers really came into the mainstream.
Docker has succeeded by offering two benefits: it standardized the container runtime, and it built a container management system around that raw technology, which simplifies the process of creating and deploying containers for end users. Docker, however, can execute containers on only a single host machine. That's where Kubernetes steps in as an improvement.
Introducing NetApp Kubernetes Service
NetApp Kubernetes Service is a Kubernetes-as-a-service offering that targets hybrid and multicloud deployments. With NetApp Kubernetes Service, you can create Kubernetes clusters that are scalable, ready for production, easy to manage in one or more clouds, and managed from a single set of controls.
NetApp Kubernetes Service federates clusters, enabling cloud-based DNS to route incoming requests to back-end clusters across the entire federation. The advantages are that users gain visibility into all Kubernetes workloads and gain the ability to scale up. These benefits mean that if your organization is still growing, the solution grows with it, and if you're an enterprise, you finally have a way to be in complete control. NetApp Kubernetes Service works on all the major public clouds and with their respective Kubernetes services:
- Amazon Web Services (AWS), with Amazon Elastic Container Service for Kubernetes (Amazon EKS)
- Google Cloud Platform, with Kubernetes Engine
- Microsoft Azure, with Azure Kubernetes Service (AKS)
NetApp Kubernetes Service also supports NVIDIA graphics processing units (GPUs). And if you build clusters from raw compute instances rather than by using the cloud-managed Kubernetes services, you can do so on AWS, Google Cloud Platform, or Azure.
The ability of NetApp Kubernetes Service to deploy to any of the major hyperscalers gives you an enormous range. You also don't have to rely on building solutions by using cloud-based compute and storage directly. This kind of flexibility allows you to move workloads to the cloud platform that suits your needs most effectively. You're not confined to a Kubernetes deployment in just one cloud or between one cloud and an on-premises environment. NetApp Kubernetes Service also comes with a utility that enables you to install cloud-native solutions on demand, drawn from an extensive curated list of open-source technologies.
These applications can be deployed directly to a cluster for any number of use cases. For example, with Istio, NetApp Kubernetes Service offers an easy way to manage microservice deployments. With Helm, you can create your own Helm charts or deploy charts from the public repository on GitHub.
Trident is another solution that's preinstalled with NetApp Kubernetes Service. With Trident, NetApp solutions such as Cloud Volumes Service can satisfy persistent volume claims that are made in Kubernetes clusters. NetApp Kubernetes Service deploys Trident so that Kubernetes can interface with Cloud Volumes Service. Through this interface, Cloud Volumes Service provides scalable, high-performance data volumes that are highly available and that are easy to copy with NetApp Snapshot™ technology and to clone.
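As a sketch of how this fits together, a standard Kubernetes persistent volume claim like the one below would be satisfied by Trident, which provisions a backing volume from a NetApp storage service. The claim name and storage class name here are illustrative assumptions, not NetApp defaults:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cvs-volume-claim          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany               # NFS-backed volumes can be shared by many pods
  resources:
    requests:
      storage: 100Gi              # capacity requested from the backing service
  storageClassName: cvs-standard  # assumed name of a Trident-backed storage class
```

Applying a claim like this with `kubectl apply -f claim.yaml` prompts Trident to create and bind a matching volume, which pods can then mount like any other persistent volume.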
The Future Is Here with NetApp Kubernetes Service
The future, in which a single data fabric weaves together all data management solutions, is here. NetApp Kubernetes Service, NetApp Cloud Insights, Cloud Volumes Service, Trident, and NetApp Cloud Backup (formerly AltaVault™) all work together to provide solutions. This unique positioning overcomes one of the biggest blockers today: achieving the true cloud concept, in which workloads can be moved to the optimal home based on need, function, or cost.