Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications, but before you start getting to know it in detail I would like to help you understand the need for a system like k8s. Let’s take a quick look at how software development and deployment have changed over recent years.

Splitting monolithic apps into microservices

Not long ago, applications were big monolithic systems: all their components were tightly coupled, in other words, interconnected and interdependent. Usually a single large team was in charge of such apps, managing development, deployment and delivery as one unit. As you can imagine, a change to one part of the application requires redeploying the whole application. Deploying and running a monolithic application usually requires one powerful server, or a small number of servers, with enough resources to support it. But what happens if the number of users increases? Can we scale a monolithic app? Scaling up (vertical scaling) the server is one option, for example adding more CPU or memory; it needs no changes to the app but gets expensive relatively quickly. On the other hand, scaling out (horizontal scaling) by adding more servers and running replicas of the application usually requires big changes to the code and is not always possible. So, if any part of a monolithic application is not scalable, the whole application becomes unscalable. As you can see, monolithic apps have some drawbacks, to name a few:

- Any change, however small, forces a redeployment of the entire application.
- Scaling up gets expensive quickly, and a single server's hardware has limits.
- Scaling out usually demands big code changes and is not always possible.
- One unscalable component makes the whole application unscalable.

In order to solve the issues associated with monolithic apps, a concept known as Microservices appeared: the idea is to break up the monolithic app into smaller, independently deployable components. Each microservice runs as an independent process and communicates with other microservices through simple APIs.

Monolithic vs Microservices

The image shows how a monolithic app can be split into microservices. Each new component is decoupled from the others and can be developed, deployed, updated and scaled individually. This architecture solves the drawbacks of monolithic apps. If you have a new version of a microservice, you can deploy just that component, without redeploying the entire application. Microservices also let you scale out faster: you only need to add replicas of the components that are under load. If you think about the reliability of your software, this architecture prevents the entire app from failing when a particular microservice starts to fail; your app becomes more stable. On top of this, each microservice can be written in any language, since each one is a standalone process that exposes a set of APIs for communication with the others. You can also have one team per microservice, so each team can focus on a single module of the overall system.

But what happens when the number of microservices increases? The entire system becomes difficult to configure, manage and keep running smoothly. Deployment-related decisions get harder because not only does the number of deployment combinations increase, but the number of inter-dependencies between components grows by an even greater factor. Microservices do their work together as a team, so they need to find and talk to each other, and when deploying them, someone or something needs to configure all of them properly to make them work as a single system. Also, since it is common to have separate teams developing each component, nothing prevents each team from using different libraries; divergence of dependencies, and components requiring different versions of the same library, are inevitable. Deploying applications that need different versions of shared libraries, and that have other environment specifics, can quickly become a nightmare for the ops team.
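To make "communicates through simple APIs" a bit more concrete, here is a minimal sketch of two components talking over HTTP using only the standard library. The service name, route and payload are made up for illustration; real microservices would run as separate processes on separate hosts, but the contract idea is the same.

```python
# Sketch: an "inventory" service exposing an HTTP API, and an "order"
# component calling it. Only the HTTP contract couples them — not code,
# libraries, or language.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical payload for illustration
        body = json.dumps({"sku": "A-1", "in_stock": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "order" component only needs the URL and the JSON shape
with urlopen(f"http://127.0.0.1:{server.server_port}/stock") as resp:
    stock = json.load(resp)
print(stock["in_stock"])  # -> 3
server.shutdown()
```

Because the caller depends only on the URL and the JSON shape, either side can be rewritten in another language or redeployed independently without the other noticing.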
So, how can we isolate the environment of each microservice? Let’s introduce the concept of containers.

Isolating applications using containers

When different microservices run on the same server, they will probably need different versions of the same dependency, or have different environment requirements in general. With a small number of components, it is completely reasonable to use one virtual machine per component. But what happens when the number of microservices starts to grow? You should not use a VM for each microservice unless you want to waste hardware resources and spend unnecessary money; you will also waste human effort configuring and keeping all those VMs working as expected. Instead of VMs, developers started using Linux container technologies (LXC) to isolate environments. Containers allow running multiple applications on the same host machine, exposing a different environment to each microservice and isolating them from each other. Behind the scenes, containers take advantage of two features of the Linux kernel. The first is Linux namespaces, which make sure each process sees its own personal view of the system (files, processes, network interfaces, hostname, and so on). The second is Linux control groups, known as cgroups, which limit the amount of resources a process can consume (CPU, memory, network bandwidth, and so on). While container technologies have been around for a long time, they became widely known with the rise of the Docker container platform. Docker was the first system that made containers easily portable across different machines: it simplified building and packaging our apps and their dependencies into a portable image that can be distributed to any other machine running Docker. On top of containers, it offers high-level tooling for building images, versioning and sharing them, and running them anywhere with a single command.
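To make namespaces and cgroups slightly less abstract, here is a short sketch that inspects both from a running process. It assumes a Linux host with `/proc` mounted; on other systems these paths simply don't exist.

```python
# Peek at the two kernel features containers are built on (Linux only).
import os

# Linux namespaces: each namespace a process belongs to appears as a
# symlink under /proc/self/ns. Two processes in the same namespace see
# the same inode number in the link target.
for ns in ("pid", "net", "uts", "mnt"):
    path = f"/proc/self/ns/{ns}"
    if os.path.exists(path):
        print(ns, os.readlink(path))  # e.g. "pid pid:[4026531836]"

# Control groups (cgroups): the hierarchy that limits this process's
# CPU, memory, and other resources is listed in /proc/self/cgroup.
if os.path.exists("/proc/self/cgroup"):
    with open("/proc/self/cgroup") as f:
        print(f.read().strip())
```

A container runtime creates fresh namespaces for a new process (so it gets its own hostname, PID 1, network interfaces, and filesystem view) and places it in a cgroup with resource limits; that combination is what we call a container.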

There are three main Docker concepts you should know before we start learning Kubernetes. First, images: a Docker-based container image is what you package your application and its environment into. Second, registries: a registry is a repository that stores the images users publish, making it easier to share those images between developers. The last one we have already talked about: containers, regular Linux containers created from a Docker-based container image.
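To see how the three concepts fit together, here is a minimal, hypothetical image definition; the application file and image name below are made up for illustration.

```dockerfile
# Dockerfile — the recipe Docker uses to build an *image*
FROM python:3.12-slim          # start from a base image published in a registry
COPY app.py /app/app.py        # add our (hypothetical) application code
CMD ["python", "/app/app.py"]  # what runs when a *container* starts
```

Running `docker build -t myapp:1.0 .` turns this recipe into an image, `docker push` publishes it to a registry, and `docker run myapp:1.0` starts a container from it on any machine with Docker installed.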

Docker Images, Registries and Containers

Introducing Kubernetes

As we mentioned in the earlier sections, an application can grow into a huge set of microservices, each with its own environment requirements. To solve this problem, we move all our microservices into containers using platforms like Docker. At that point we have a set of containers, and the definition now makes more sense: Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It relies on the features of containers to run applications without having to know their internal details and without having to manually deploy each application on each host. And because these apps run in containers, they don’t affect other apps running on the same server.
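As a small taste of what describing an application to Kubernetes looks like, here is a minimal Deployment manifest; the names and image are hypothetical, and declaring three replicas is all it takes to ask Kubernetes to keep three copies of the container running.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # hypothetical application name
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0      # the container image to run
```

Applying a manifest like this with `kubectl apply -f deployment.yaml` hands responsibility for deploying, scaling and restarting those containers over to the cluster.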

Kubernetes

Some of the benefits of using Kubernetes are listed below.

- It automates deployment: you declare the desired state of your app and Kubernetes makes it happen across the cluster.
- It scales horizontally: adding or removing replicas of a microservice is a single change or command.
- It keeps apps healthy: failed containers are restarted and rescheduled automatically.
- It provides service discovery and load balancing, so microservices can find and talk to each other.
- It improves hardware utilization by packing containers onto the available servers.

I hope this brief explanation was helpful for you to understand why, as developers, we need platforms like Kubernetes.