Microservices are an architectural approach to creating cloud applications. Each application is built as a set of services, and each service runs in its own process and communicates through APIs. The evolution that led to cloud microservices architecture began more than 20 years ago. The concept of a services structure is older than containers and dates from before the era of modern web applications. A microservices architecture is a way of developing applications that has matured into a best practice over time.
To understand the advantages of microservices architectures today, it's critical to understand where it all started.
Initially each application, residing on a single server, comprised three layers:

- A presentation layer (the user-facing interface)
- A business logic layer
- A database layer
These layers were built in a single, intertwined stack located on a single, monolithic server in a data center. This pattern was common across every industry vertical and technology architecture. Generally speaking, an application is a collection of code modules that serve a particular function—for example, a database, various types of business logic, graphics rendering code, or logging.
In this monolithic architecture, users interacted with the presentation layer, which talked to the business logic layer and the database layer, and information then traveled back up the stack to the end user. Although this was an efficient way to organize an application, it created many single points of failure, which could result in long outages if there was a hardware failure or code bug. Unfortunately, “self-healing” did not exist in this structure. If a part of the system was damaged, it would need to be repaired by human intervention in the form of a hardware or software fix.
Furthermore, scaling any one of these layers meant purchasing an entire new server. You had to stand up another copy of the monolithic application on its own server and segment a portion of users over to the new system. This segmentation resulted in silos of user data that had to be reconciled by nightly batch reports. Thankfully, clients became thinner as webpages and mobile applications grew more popular, and new methods of application development began to take shape.
Service-oriented architecture (SOA)
By the mid-2000s, the architectures began to change so that various layers existed outside a single server and as independent service silos. Applications were designed to integrate these services by using an enterprise service bus for communication. This approach lets administrators independently scale these services by aggregating servers through proxy capabilities. The approach also enabled shorter development cycles by allowing engineers to focus on only one part of the application service structure. Decoupling services and allowing independent development required the use of APIs, the set of syntax rules that services use to communicate with each other.
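As an illustration of such an API contract, here is a minimal sketch in Python: a hypothetical "inventory" service and a client that agree on a tiny JSON request/response format. The operation name, fields, and data are all invented for the example.

```python
import json

# Hypothetical API contract: requests are {"op": ..., "args": [...]} and
# replies are {"ok": bool, ...}. All names and data here are illustrative.

def inventory_service(request_bytes):
    """Server side: parse the request, apply business logic, serialize a reply."""
    req = json.loads(request_bytes)
    if req.get("op") == "stock_level":
        result = {"sku": req["args"][0], "on_hand": 42}
        return json.dumps({"ok": True, "result": result}).encode()
    return json.dumps({"ok": False, "error": "unknown op"}).encode()

def client_call(op, *args):
    """Client side: the same syntax rules, seen from the other direction."""
    request = json.dumps({"op": op, "args": list(args)}).encode()
    return json.loads(inventory_service(request))
```

The point of the contract is that each side depends only on the agreed-upon message shapes, not on the other side's internals, which is what allows the services to be developed and scaled independently.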
SOAs also coincided with the rise of the virtual machine (VM), which made physical server resources more efficient. Services could be deployed much more quickly on smaller VMs than previous monolithic applications on bare-metal servers. With this combination of technologies, better high-availability (HA) solutions were developed, both within the services architecture and with the associated infrastructure technologies.
Today, cloud microservices break the SOA strategy down further into collections of granular functional services. Collections of microservices combine into large macroservices, providing even greater ability to quickly update the code of a single function in an overall service or larger end-user application. A microservice attempts to address a single concern, such as a data search, a logging function, or a web service function. This approach increases flexibility—for example, the code of a single function can be updated without refactoring or even redeploying the rest of the microservices architecture. The failure points are also more independent of one another, creating a more stable overall application architecture.
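A single-concern microservice of the kind described above (a data search, say) can be sketched in a few lines of Python using only the standard library. The service name, route, and catalog data are hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical single-concern microservice: it does one thing (a toy data
# search) and exposes it over a plain HTTP API.
CATALOG = {"widget": 3, "gadget": 7}

def lookup(term):
    """The single concern this service addresses."""
    return {"term": term, "count": CATALOG.get(term, 0)}

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Simple routing: /search/<term> maps straight onto the one function.
        term = self.path.rsplit("/", 1)[-1]
        body = json.dumps(lookup(term)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve it (this call blocks):
#     HTTPServer(("", 8080), SearchHandler).serve_forever()
```

Because the service owns exactly one function, it can be versioned, redeployed, or scaled without touching any other part of the application.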
This approach also creates an opportunity for microservices to become self-healing. For example, suppose that a microservice in one cluster contains three subfunctions. If one of those subfunctions fails, it can be restarted while the underlying fault is being repaired. With orchestration tools such as Kubernetes, this self-healing occurs without human intervention: it happens behind the scenes, is automatic, and is transparent to the end user.
Microservices architectures have come into use along with Docker containers—a packaging and deployment construct. VM images have been used as the deployment mechanism of choice for many years. But containers are even more efficient than VMs, allowing the code (and required code libraries) to be deployed on any Linux system (or any OS that supports Docker containers). Containers are the perfect deployment vector for microservices. They can be launched in seconds, so they can be redeployed rapidly after failure or migration, and they can scale quickly to meet demands. Because containers are native to Linux, commodity hardware can be applied to vast farms of microservices in any data center, private cloud, or hybrid multicloud.
Microservices have been intertwined with cloud-native architectures nearly from the beginning, so they have become indistinguishable from each other in many ways. Because microservices and containers are so abstracted, they can be run on any compatible OS (usually Linux). That OS can exist anywhere: in the public cloud, on premises, in a virtual hypervisor, or even on bare metal. As more development is done in the cloud, cloud-native architectures and practices have migrated back into on-premises data centers. Many organizations are constructing their local environments to share the same basic characteristics as the cloud, enabling a single development practice across all locations: cloud-native anywhere. This cloud-native approach is both made possible and made necessary by the adoption of microservices architectures and container technologies.
Microservices are decentralized and run on different servers, but they still work together for an application. Ideally, each microservice serves a single function, which enables simple routing between services with API communication. Here are some of the other benefits:
Containers are immutable: after they're deployed, they can't (or shouldn't) be altered. If a new version of code becomes available, the container is destroyed, and a new container with the latest code is deployed in its place. Containers can launch in seconds or milliseconds, so additional service components can be deployed immediately when and where they are needed. Because of their light resource footprint relative to VMs or bare metal, containers are the best deployment mechanism for microservices. Whereas a VM contains a full set of OS components, a container comprises only the microservice code itself and its supporting code libraries; everything else comes from the host OS, which is shared with the other containerized microservices on the same machine.
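A container image definition makes this contrast concrete. A minimal sketch of a Dockerfile for a small Python microservice might look like the following (the file names are hypothetical); note that it packages only the code and its dependencies, not an entire OS:

```dockerfile
# The base image supplies the shared runtime; the host supplies the kernel.
FROM python:3.12-slim

# Only the microservice code and its libraries go into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY service.py .

# Immutable by convention: to change the code, build and deploy a new image.
CMD ["python", "service.py"]
```

Rolling out a code change means building a new image and replacing the running containers, rather than patching them in place.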
Container orchestration tools have been around nearly as long as containers themselves. Thousands of microservices work together to form services and applications, and managing them manually would be almost impossible even with an army of administrators. With most IT staffing budgets staying flat, the rise of automation, and the adoption of DevOps practices, the need for good orchestration tools is paramount.
The first version of Kubernetes (K8s) was released in 2015, shortly after the rise of Docker containers, and it has quickly become the dominant orchestration tool in the container landscape. Kubernetes lets developers register a container-based application or microservice in a common registry and provide a manifest file to the Kubernetes controller. Kubernetes then deploys the application to its worker nodes according to the specifications in the manifest file, using the container image from the common registry.
“Desired state,” a key concept in Kubernetes, allows self-healing and auto-upgrade functionality to be integral to the solution. For example, suppose that your desired state is that 10 instances of a particular microservice (in a container pod) at version 1.1 should always be running. Kubernetes monitors the deployment to ensure that desired state. If one of the containers fails, and there are only 9 running, Kubernetes deploys another container and launches it to achieve the desired state of 10 instances at version 1.1. If the manifest is changed to specify version 1.2, Kubernetes sees the desired state as unmet, and performs a rolling upgrade of all 10 instances to version 1.2 (assuming that version 1.2 is an available image in the container registry).
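For example, that desired state can be written down as a Kubernetes Deployment manifest along these lines (the service name, registry, and image tag are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-svc              # hypothetical microservice name
spec:
  replicas: 10                  # desired state: 10 instances always running
  selector:
    matchLabels:
      app: search-svc
  template:
    metadata:
      labels:
        app: search-svc
    spec:
      containers:
      - name: search-svc
        image: registry.example.com/search-svc:1.1  # change to :1.2 to trigger a rolling upgrade
```

Kubernetes continuously compares the cluster's actual state against this declaration and acts on any difference, whether the difference comes from a failed container or from an edit to the manifest.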
It’s easy to see why Kubernetes has so quickly grown to dominate the container orchestration sphere, and why every major public cloud provider (Amazon Web Services, Microsoft Azure, Google Cloud Platform) offers its own managed Kubernetes service.