Containerization

Kubernetes #2

Published 03/11/2019; updated March 13th, 2019

What does Kubernetes do?

First of all, it gives us speed. I’m not talking about raw application speed here – not that something is displayed 2 seconds faster, no. It is about the speed with which we can make changes to the infrastructure while keeping the application available and serving production traffic. What is now a huge problem – service windows, upgrades of the Kubernetes version itself – fades away. This is possible because Kubernetes operates on three main pillars: immutability, declarative configuration and self-healing.

Let’s start with immutability. This idea, in the world of Kubernetes but also in the world of containerization in general, is based on the principle that once established, infrastructure cannot be modified by the user. The only way to change the configuration is to destroy the old infrastructure and set up a new one with the changed parameters. It sounds a bit scary, I know. But when you think about it, this approach has a huge advantage. The lack of in-place changes to potentially production infrastructure prevents human errors of the sort: “well, we changed something some time ago, but John no longer works with us and, well, we’re not entirely sure… so maybe it’s better not to update?” Such situations no longer arise, because once the infrastructure is established it has its own specific configuration, which can be reviewed, and, in case of a failure or a failed update, we can trace where the error is. The second benefit is that when you have an old version of the application (for example, the old version of a container in the repository, along with its configuration), you can always go back to it, or even leave it running for the duration of the upgrade and, in case of a problem, switch back to the old technology stack within seconds. I think it sounds better now, but the question arises: how do we embrace this at hyperscale? The answer is a declarative configuration that represents the whole infrastructure as code.

Yep! There is no clicking; we type everything in neatly and Kubernetes builds the entire infrastructure as described. Before you start thinking along the lines of “That’s not a change! Apage, Satanas!”, let me assure you, it is. Up to now we were used to the fact that when writing a script to configure something, it was necessary to spell out exactly, line by line, what the system was to do – this is called an imperative configuration. So, e.g.: create a folder, copy files from here to there, set line 37 to such and such. If we made a mistake, the script crashed and, in the best-case scenario, nothing was done, while in the worst case 99.7% of the script executed but nothing worked. An imperative configuration tells you how to get from point A to B and then to C. A declarative configuration is the opposite – we say up front that we want to get to C. We define the target state, and the platform, in this case Kubernetes, strives to achieve that goal with its own internal mechanisms.
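As a sketch of what “declaring C” looks like in practice, here is a minimal, hypothetical Deployment manifest (the name `web` and the `nginx` image are placeholders for illustration). Note that it says nothing about *how* to create containers – only how many replicas of what image should exist:

```yaml
# Illustrative manifest: we declare the target state (three replicas of an
# nginx container) and Kubernetes works out the steps to get there.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15.8
```

Such a file would typically be handed to the cluster with `kubectl apply -f web.yaml`; applying the same file twice is harmless, because we describe a state, not a sequence of steps.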

This has several advantages, the first of which is that we do not have to know all the mechanisms responsible for creating the infrastructure, which translates into time we can devote to the configuration itself. Kubernetes is a kind of framework with features prepared by someone else that we can politely use. If you want to know exactly how they work, you can immerse yourself in the code of Kubernetes itself, because it is publicly available. The second major advantage is that we do not need to care about reaching the desired state, as the platform does it for us. In short, this means that if the configuration is incorrect somewhere and the infrastructure cannot be set up, Kubernetes will clean up after itself and tell us that something is wrong in this or that line. That is the implementation side of the coin, but there is also the operational one. Because we keep everything in code, we have version control over the infrastructure configuration and the possibility of actually auditing it. Transfer of knowledge finally becomes possible and, with it, automatic documentation.

When Kubernetes receives the configuration, it will maintain the declared state: if we want to run 5 instances of a container and, say, one of those instances stops working, Kubernetes will kill that instance and set it up again. We declared 5 and not 4, so it’s time to act! This means that Kubernetes will not only bring up the infrastructure you configured, but will also take care of its health without fail.
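Self-healing goes beyond keeping the replica count. A hypothetical fragment of a Deployment spec (the `api` container, image and `/healthz` endpoint are made up for illustration) shows both mechanisms side by side – the declared count of 5, and a liveness probe that makes Kubernetes restart a container that stops answering:

```yaml
# Fragment of a Deployment spec, for illustration only.
spec:
  replicas: 5                     # the declared state: always 5 instances
  template:
    spec:
      containers:
        - name: api
          image: example/api:1.0  # placeholder image
          livenessProbe:          # if this probe fails, the container is restarted
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
```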

Since we decided that we will use containerization technology for our example of transoceanic tutoring platform, we will create it in the architecture of distributed microservices and the idea of invariability, declarative configuration and self-treatment with Kubernetes becomes quite sexy. The way we will be able to scale applications in the evening of the second hemisphere may also be encouraging – when the knowledge-seeking ladies and gentlemen, their credit cards start attacking our infrastructure. Thanks to the declarative configuration, it is very easy – just change the configuration, and Kubernetes will do the job itself. Unchanging container images, their configuration in the appropriate equally constant version will allow automatic scaling, only changing the digit from the old instance number to the new one… or the usage of the built-in WK8S (Kubernetes) autoscaling and allowing it to manage the mess by itself.
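The built-in autoscaling mentioned above can itself be declared. A sketch, assuming a Deployment named `web` exists (the name and the thresholds are illustrative), of a HorizontalPodAutoscaler in the `autoscaling/v1` form:

```yaml
# Illustrative autoscaler: keep between 5 and 50 replicas of the "web"
# Deployment, scaling on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 5
  maxReplicas: 50
  targetCPUUtilizationPercentage: 70
```

With this in place, the evening rush changes the replica count for us, and the declarative model stays intact – we declared the policy, not the numbers.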

Obviously, to run autoscaling you need appropriate hardware and a plan for DDoS, because sometimes the number of teachers willing to tutor can increase significantly and it will be necessary to scale the Kubernetes cluster itself. Here you can also see the experienced hand that rested on the shoulders of the creators of this project; undoubtedly it was covered with many furrows after innumerable failures and catastrophes, after EOF on overflowing disks and All Paths Down on storage arrays. All Kubernetes cluster nodes are identical, and the applications in containers are completely independent of each other. Auto-deployment of the next node can take place from a previously prepared image, plus a simple one-liner that joins it to the cluster.
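For a kubeadm-provisioned cluster, that one-liner is the `kubeadm join` command; the address, token and hash below are placeholders that `kubeadm token create --print-join-command` would produce for a real cluster:

```shell
# Run on the freshly booted node image to attach it to the cluster
# (all values are placeholders for illustration).
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```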

Let’s start from the opposite end to what would seem logical, i.e. not with the architecture, but with the logical entities that we need to get to know and include in our configuration, so that Kubernetes knows how to establish the declared infrastructure.

To start with, a Pod is a single container or a collection of containers described in the configuration as a certain group. It is the smallest configuration entity, representing a process that can be created and run on the cluster. Pods come in two types:

1. A Pod that runs a single container: the model called “one-container-per-Pod”, which is the most popular way of using Kubernetes. The Pod can be compared to a kind of capsule inside of which a container runs; K8s does not manage the container directly, but through the Pod itself.

2. A Pod that runs multiple containers: when we have an application that consists of multiple containers that are closely coupled and need to share resources, it’s worth packing them into a single Pod and treating them as a single unit composed of multiple containers.

A Pod has one IP address and a range of ports which are shared by the containers within it; this address is not guaranteed to persist, as the lifespan of a Pod is, by design, short. In addition, containers in a Pod can share data volumes, but it should be remembered that these exist only as long as the Pod does; when the Pod is destroyed, the volume is destroyed too, and the data on it is removed. This can be prevented by using external mechanisms, the so-called Persistent Storage.
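A hypothetical two-container Pod (the names, images and paths are invented for illustration) shows the shared-resources idea: both containers see the same `emptyDir` volume, and because they share the Pod’s network namespace, they could also reach each other on `localhost`:

```yaml
# Illustrative multi-container Pod: an nginx server plus a sidecar that
# tails its logs through a shared, Pod-lifetime volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}        # exists only as long as the Pod does
  containers:
    - name: web
      image: nginx:1.15.8
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.30
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```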

Services are another entity. Just like Pods they are mortal – they live and die – but a Service is a more permanent unit. It is a layer of abstraction that allows us to group Pods together with the applicable access rules. For example, say we have a simple application that processes images and consists of three Pod instances. The frontend of the app is not interested in which backend Pod it talks to – the backend is supposed to do its job, and so it does. Backend Pods may come and go; the clients, however, should not see any change. This type of setup is ideal for a Service – one entry point and several Pods as the backend.
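A minimal sketch of such a Service, assuming the three backend Pods carry a hypothetical label `app: image-backend` and listen on port 8080:

```yaml
# Illustrative Service: one stable entry point in front of whichever Pods
# currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  selector:
    app: image-backend   # matches the label on the backend Pods
  ports:
    - port: 80           # port the clients use
      targetPort: 8080   # port the containers listen on
```

The Service keeps its own stable cluster IP and DNS name, so backend Pods can be killed and recreated freely without the frontend noticing.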

The article has been published on IT Professional.