Once upon a time, the road to Kubernetes
The story of Kubernetes began at Google: like no other company, it needed a huge infrastructure to make its search engine and the associated advertising available to everyone. The planned, enormous growth was a challenge that gave rise to various ideas. Hardware virtualization, site reliability engineering, and container technologies form three essential pillars of the solution that eventually emerged:
Cloud Computing: Implement efficient infrastructures with virtual machines
Virtualized computing power is an important prerequisite for efficient infrastructures: it has improved the utilization of server hardware roughly fivefold. Where previously one hardware server was dedicated to a single service, many services now share one machine thanks to virtualization, raising hardware utilization from about 5 to 25 percent. Even with virtualization, however, the full potential of the resources remains untapped: if a single machine is not sufficient for peak loads, the whole system may sit idle at quieter times. As operators of an OpenStack-based public cloud, we are well aware of this.
Site Reliability Engineering: Solving Problems with Software
The approach behind Site Reliability Engineering (SRE) is to combine operations and software development. By applying the means of software development, SRE solves classic problems of IT operations. For example, if a server crashes, SRE does not primarily try to fix the problem by changing the server's configuration. Instead, the problem is addressed at the software level, so that it is automated away and solved sustainably.
The Docker Age: Optimizing development and operation with containers
Docker started in 2013 with a simple idea: applications do not always need a complete operating system with all its services, but only a small part of it. With this approach, resource-efficient images can be built that run autonomously and deliver reproducible results. For software development, but also from a business perspective, the new quasi-standard for container images was an encouraging development: developers could build software without having to pay much attention to individual environments, and containers also led to fewer errors in operation.
Docker vs. Kubernetes
Docker and Kubernetes are among the most popular container technologies. So who is ahead in the battle for supremacy in the container business? Answering this question is more difficult than one might suspect. To provide clarity, we have created an infographic that compares the essential components and features of Docker and Kubernetes.
The project “Borg” becomes Kubernetes
Google relied on containers and their advantages in many ways – even independently of Docker. To manage (or orchestrate) containers sensibly, its engineers developed the “Borg” project. Star Trek fans know the meaning of the name: the Borg are a collective that is not organized hierarchically, because all of them are connected to each other. We will see later what that means for Kubernetes.
Borg was an undisputed competitive advantage of Google because it utilized the machines much better than purely virtualized operations. Google strengthened its market position by being able to provide huge infrastructure landscapes at a much lower cost.
The big hit for the general public was the idea of making Borg available as an open-source solution. Kubernetes (Greek for helmsman; abbreviated K8s) was born.
Kubernetes & the CNCF
In 2016, the Kubernetes project was donated by Google to the Cloud Native Computing Foundation (CNCF). The CNCF was founded in 2015 as a project of the Linux Foundation and now has over 500 members consisting of developers, end users, and IT technology and service providers. The goal of this community is to shape an open source ecosystem of vendor-neutral projects.
What is Kubernetes?
The open-source platform Kubernetes orchestrates and automates the setup, operation and scaling of container applications. The architecture allows the containers to be orchestrated across multiple machines, whether they are virtualized hardware or bare metal.
Kubernetes continuously monitors the state of the applications and ensures that it matches the specified descriptions: for example, if the descriptions state that three web-server instances should always be running, Kubernetes keeps exactly three instances running. If an instance fails or a process crashes, Kubernetes restarts it. If an entire node fails or becomes unreachable, Kubernetes reschedules its pods on another node.
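Such a description is typically written as a declarative manifest. A minimal sketch (name and image are placeholders) of a Deployment that asks Kubernetes to keep three web-server instances running:

```yaml
# Hypothetical Deployment: Kubernetes reconciles the cluster
# until exactly three replicas of this pod are running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state: always three instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

If a pod crashes or a node disappears, the Deployment controller notices the gap between desired and actual state and starts a replacement.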
Kubernetes is structured according to the so-called master-slave architecture. The master component controls the nodes on which the containers run. The Kubernetes architecture includes:
The Kubernetes master is the central control element that distributes and manages the containers on the nodes. To achieve high availability, multiple masters can be run in parallel.
A node, also called a worker machine or minion, can be a virtual machine (VM) or a physical server. The pods run on the nodes.
Pods are the smallest deployable unit. They contain one or more containers that share the allocated resources.
etcd is the key-value database that stores the configuration and state of the Kubernetes cluster. Kubernetes communicates with etcd exclusively via the API server.
The API server provides access to all the information stored in etcd, making it one of the most important components of Kubernetes. Via REST interfaces, it communicates with all internal and external services of the Kubernetes cluster.
The kube-scheduler monitors and manages the utilization of the nodes by deciding, based on the available resources, on which node a pod is started.
The Controller Manager contains all control loops and is therefore an essential component for monitoring. It reads and writes all statuses via the API server.
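To make the “smallest deployable unit” concrete, here is a minimal sketch (names and images are illustrative) of a pod whose two containers share the pod's network namespace and a common volume:

```yaml
# Hypothetical pod with two containers: both reach each other via
# localhost and both mount the same emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```

The scheduler always places all containers of a pod on the same node, which is what allows them to share these resources.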
More than Kubernetes: MetaKube by SysEleven
For the IT world, Kubernetes is a revolutionary approach and makes life easier for DevOps. But the implementation of Kubernetes is complex and time-consuming. With MetaKube we offer a managed service for Kubernetes that goes beyond pure provisioning: We provide you with the powerful features of Kubernetes and have already considered other important services for you. Meta features include Backup & Recovery, Load Balancer as a Service, Monitoring and Lifecycle Management.
With MetaKube we are a member of the CNCF and thus registered with the umbrella organization that coordinates the development of the open-source solution. For this reason, we strictly ensure that MetaKube remains standards-compliant with Kubernetes. Our OpenStack-based cloud, on which we operate Kubernetes, is also 100 percent standards-compliant.
Everything you need to know about running a cluster
Building your own Kubernetes can be very complex. A Kubernetes installation alone is usually not enough, because the devil is in the details: managing all the components of a cluster. In our Managed Kubernetes 101 Guide we show you step by step which extensions exist for Kubernetes:
The container runtime runs separately from the actual container management and is controlled externally via a standardized interface (the Container Runtime Interface, CRI). Since this interface also needs to be maintained, the runtime is the first component to be integrated into Kubernetes lifecycle management.
In Kubernetes, central components can be used for standardized tasks in the software architecture. For example, a service proxy or a service mesh can be used to deploy software that fulfills global functional requirements.
A service mesh is one of the larger topics that software development on Kubernetes can expect. It adds the ability to manage the communication within the “mesh” of microservices or pods.
A message queue is used for asynchronous processing of messages between individual services. This supports the general approach of microservice architectures and thus also containerized applications.
Database management systems enjoy great popularity: thanks to their efficiency in storage, access control, and logical ordering of data, they represent the status quo in data storage.
Continuous Integration (CI) deals with the automated merging of different components during software development. This ensures that no errors creep in even when several changes are made to the same application.
App Definition & Image Build
In order to deliver an application, it is first necessary to define what an application actually is. In the world of containerization, where services are divided into microservices, this can get confusing.
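In Kubernetes terms, an application is commonly defined as a set of related manifests. A minimal sketch (all names and the image are placeholders) pairing a Deployment with a Service that exposes it:

```yaml
# Hypothetical app definition: the Deployment runs the containers,
# the Service gives them a stable in-cluster address.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shop-backend
  template:
    metadata:
      labels:
        app: shop-backend
    spec:
      containers:
      - name: api
        image: registry.example.com/shop-backend:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: shop-backend
spec:
  selector:
    app: shop-backend
  ports:
  - port: 80
    targetPort: 8080
```

Tools such as Helm build on exactly this idea and bundle such manifests into a versioned, parameterizable package.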