In this article, we will set up a learning and testing environment for Kubernetes using the basic `kubeadm` tool. In this example, we will connect two machines running the EuroLinux system. The procedure covers containerd configuration, installation of the Kubernetes tools (kubeadm, kubelet, kubectl), host network configuration (kernel, firewalld), initialization of the control-plane, implementation of the Kubernetes Networking Model and the addition of a worker node. An environment built from at least two machines makes it possible to practice node management.
Kubernetes, also known as K8s, is open-source software for automating the deployment, scaling and management of containerized applications. Kubernetes is the undisputed leader in container orchestration, that is, the maintenance and scaling of applications delivered in containers across distributed systems. It provides a portable and extensible platform for managing containerized tasks and services. In addition, it supports declarative configuration and extensive automation of processes related to the application lifecycle. In this article, we will discuss the tools for combining machines into a cluster, i.e. for scaling it.
Illustrative diagram of Kubernetes using the example of a pallet factory
An expanding pallet factory has certain resources. These include resources needed to create pallets within a given workplace – boards, nails, hammers, saws – and resources that can migrate between workplaces – employees. The accountant/HR officer and the boss perform administrative functions. Boards, nails, hammers and saws are common resources necessary for the entire pallet creation process. A single workplace is a worker node, the accountant/HR officer and the boss act as the control-plane, while the employees are pods. The accountant/HR officer keeps a database on the state of the factory (`etcd`). The boss manages and introduces changes in the operation of the factory, staying in constant contact with the accountant/HR officer. Based on the data obtained, the boss can adapt the factory to current needs (`kubectl`). If more pallets are needed, additional employees (pods) can be added to the existing workplace (provided that sufficient resources are available). If the available resources are exhausted, the boss may decide to build an additional workplace (another node). The new plant has its own resources, but it must be connected to the control-plane (`kubeadm join`). Employees (pods) move automatically between workplaces, depending on the available resources.
The key components of Kubernetes are:
- pod – the smallest Kubernetes unit. It includes one or more (rarely more than two) cooperating containers. Each pod is assigned a unique IP address
- deployment – a kind of procedure/instruction for running an application as a set of pods in the Kubernetes environment. It includes, among other things, the configuration of the pods, the number of their replicas, port mapping and external storage
- node – a machine on which a configured `kubelet` process runs, managing the pods and communicating with the control-plane
- control-plane – the node that controls the cluster
- Kubernetes Networking Model – a network model that solves the key communication issues in a Kubernetes cluster.
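To make the deployment concept concrete: a deployment is usually written as a YAML manifest. The sketch below is only an illustration – the name, labels and image are arbitrary examples – declaring two replicas of an nginx pod with container port 80 exposed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example        # arbitrary example name
spec:
  replicas: 2                # number of pod duplicates
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80  # port exposed by the container
```

Such a manifest would be applied with `kubectl apply -f deployment.yaml` once the cluster built in this article is running.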
Kubernetes cluster building steps
Step 0 – minimum requirements
- Two CPU cores are required for the control-plane. For worker nodes, one core is enough
- 2 GB of RAM
- Swap memory disabled – by default, the kubelet process does not support it. Since version v1.22, swap support is possible, but it requires additional configuration.
sudo swapoff -a
sudo sed -i '/ swap /s/^/#/' /etc/fstab
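The `sed` expression comments out every `fstab` line containing the word `swap` surrounded by spaces. To see its effect without touching the real `/etc/fstab`, it can be tried on a temporary file (the entries below are made-up examples):

```shell
# A made-up fstab sample, written to /tmp instead of the real /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /     xfs  defaults 0 0
UUID=ef56-7890 swap  swap defaults 0 0
EOF

# Comment out every line containing " swap "
sed -i '/ swap /s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
# the root filesystem line is untouched, the swap line is now commented out
```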
In this presentation of the Kubernetes cluster creation process, we use two virtual machines with EuroLinux 8 installed. Host map created for testing the procedure in the author's environment:
cat /etc/hosts
192.168.122.176 euro1
192.168.122.42 euro2
Hostnames can be set using the command:
hostnamectl set-hostname euro1
The host IP can be verified, for example, with the `ip addr` command.
Step 1 – configuring CRI (Container Runtime Interface)
The ability to manage containers is crucial for each node in the cluster. Therefore, it is necessary to install and configure an environment compatible with the Container Runtime Interface (CRI). Supported environments include:
- containerd
- CRI-O
- Docker Engine
- Mirantis Container Runtime.
In this article we will only discuss the procedure for installing containerd on EuroLinux 8.
Installation and configuration of containerd as container runtime for Kubernetes on EuroLinux
1. Add the Docker repository. containerd.io is not available from the EuroLinux repositories.
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
2. Install containerd.io.
sudo dnf install containerd.io
3. Set systemd as cgroup driver.
# Save the default containerd configuration to /etc/containerd/config.toml
sudo bash -c 'containerd config default > /etc/containerd/config.toml'
# Set systemd as the default cgroup driver
sudo sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml
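The second `sed` only flips `false` to `true` on the line containing `SystemdCgroup`. Its effect can be checked safely on a minimal sample of the relevant `config.toml` fragment, written to `/tmp` for illustration:

```shell
# A minimal sample of the runc options section of containerd's configuration
cat > /tmp/config.toml.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# The same substitution as above, applied to the sample
sed -i '/SystemdCgroup/s/false/true/' /tmp/config.toml.demo

grep SystemdCgroup /tmp/config.toml.demo
# the line now reads: SystemdCgroup = true
```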
4. Start containerd and enable it so that it also comes up after a system restart.
sudo systemctl enable --now containerd.service
Step 2 – configuration of each node's network in the Kubernetes cluster
Container and Pod network support requires loading the `overlay` and `br_netfilter` kernel modules. It is also necessary to set specific kernel parameters, as in the script below.
# required modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# required sysctl parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply changes without rebooting
sudo sysctl --system
Step 3 – installation of kubeadm, kubelet and kubectl
kubeadm – a tool used to administer the cluster. It supports tasks such as: initializing the control-plane node, joining nodes to the cluster, and managing tokens, certificates, upgrades and cluster users
kubelet – the daemon that runs on every node, manages its pods and communicates with the control-plane
kubectl – a universal CLI (Command Line Interface) tool for controlling the cluster state. It is the tool that every engineer using Kubernetes mainly works with.
Procedure for installing kubeadm, kubelet and kubectl tools on EuroLinux 8
1. Add an official Kubernetes repository.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
2. Set SELinux to permissive mode. Proper SELinux configuration is one of the advanced topics that will not be discussed in this article.
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
3. Install key Kubernetes tools.
sudo dnf install kubelet kubeadm kubectl --disableexcludes=kubernetes
4. Enable the `kubelet` daemon so that it also starts when the node is restarted.
sudo systemctl enable --now kubelet.service
5. (optional) Enable bash completion for `kubectl` and `kubeadm`. It requires the `bash-completion` package, installed with `sudo dnf install bash-completion`.
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm > /dev/null
exit # bash-completion will take effect after bash is restarted
The next stages of the cluster building process are divided into two parts: part 1, marked "I", concerns the control-plane, and part 2, marked "II", concerns the workers.
Step 4.I – opening firewalld ports for control-plane
A script that opens the ports necessary for the operation of Kubernetes
sudo firewall-cmd --permanent --add-port=6443/tcp      # Kubernetes API
sudo firewall-cmd --permanent --add-port=2379-2380/tcp # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp     # Kubelet
sudo firewall-cmd --permanent --add-port=10251/tcp     # kube-scheduler
sudo firewall-cmd --reload
Step 5.I – initialization of the first control-plane
Below is the command that initializes the Kubernetes cluster. The `--pod-network-cidr` option defines the address range for the Pod Network. If the proposed network `10.33.0.0/16` is already used by the system, another network must be selected; otherwise a conflict may occur. This option is required for the network configuration with the kube-router plugin, which will be set up in step 7.I.
kubeadm init --pod-network-cidr 10.33.0.0/16
A successful cluster initialization will be confirmed by the following message.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.176:6443 --token up2tmb.0tetele8s0w54g7a \
	--discovery-token-ca-cert-hash sha256:b0a58dc0c76af44233d6848979aa89911c17dda8a57fe2e591cc853501603752
Step 6.I – granting rights to cluster administration
The message displayed after the cluster has been correctly initialized (above) contains instructions for granting administration rights. Rights can be granted permanently to any system user, or one-time to the root user. Once access has been granted, the `kubectl` command should be functional. It can be tested with:
kubectl get nodes
NAME    STATUS     ROLES           AGE     VERSION
euro1   NotReady   control-plane   8m48s   v1.24.0
The cluster consists of one node, `euro1`. The `NotReady` status can be investigated as follows:
kubectl describe nodes euro1 | grep Ready
Ready   False   Sun, 22 May 2022 13:16:02 +0200   Sun, 22 May 2022 12:50:24 +0200   KubeletNotReady   container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
It is necessary to implement a container network interface (CNI) plugin compatible with Kubernetes.
Step 7.I – implementation of the Kubernetes Networking Model
The Kubernetes Networking Model must support several types of communication, such as:
- Container-to-Container: communication within a pod and within a single host
- Pod-to-Pod: communication between pods
- Pod-to-Service: service support, load balancing, traffic management within services
- External-to-Service: exposing services to an external network.
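As an illustration of the last two items, a Service gives a set of pods a stable address, and the `NodePort` type additionally publishes it on every node. The manifest below is only a sketch – the names are arbitrary examples:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-example      # arbitrary example name
spec:
  type: NodePort           # publishes the service on each node (External-to-Service)
  selector:
    app: nginx-example     # routes traffic to pods with this label (Pod-to-Service)
  ports:
  - port: 80               # cluster-internal service port
    targetPort: 80         # container port
    nodePort: 30080        # must lie in the default 30000-32767 range opened in step 4.II
```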
For demonstration purposes, we will configure the kube-router network plugin based on its documentation.
# Download the configuration file that implements the Pod Network and Network Policy
curl https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml -o kuberouter.yaml
# cat kuberouter.yaml  # we can inspect the network model configuration file
kubectl apply -f kuberouter.yaml
Verification of the availability of nodes in the Kubernetes network:
kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
euro1   Ready    control-plane   41m   v1.24.0
Step 8.I (optional) – enabling applications to run on the control-plane (master) node
The default scenario separates running applications from the control-plane. However, for testing or learning purposes, pods can be allowed to run on the control-plane node. Simply remove the taints:
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
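Alternatively, instead of removing the taint cluster-wide, a single pod can declare a toleration for it. A minimal sketch – the pod name and image are arbitrary examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod          # arbitrary example name
spec:
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists          # tolerate the taint regardless of its value
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
```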
Step 4.II – opening ports for worker node
Script to open ports:
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet port
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
sudo firewall-cmd --reload
Step 5.II – connection of worker node with control-plane
Execute the command from the message shown after initializing the control-plane node:
kubeadm join 192.168.122.176:6443 --token up2tmb.0tetele8s0w54g7a \
	--discovery-token-ca-cert-hash sha256:b0a58dc0c76af44233d6848979aa89911c17dda8a57fe2e591cc853501603752
The IP address in the command belongs to our test machine, so it will vary depending on the actual control-plane address; the certificate hash will also differ. Note that the token is valid for 24 hours by default – a fresh join command can be printed on the control-plane with `kubeadm token create --print-join-command`.
Once connected, you can execute the following command on a machine that has been granted Kubernetes administrator rights:
kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
euro1   Ready    control-plane   3h3m   v1.24.0
euro2   Ready    <none>          147m   v1.24.0
We have presented the procedure for combining two EuroLinux machines into a cluster using basic Kubernetes tools. Additional machines can be attached in a similar way. A dual-host environment enables extensive testing of node management in Kubernetes: it allows you to use, among others, taints and node affinity, and lets you add more nodes or configure a second control-plane node.
To test Kubernetes functionality, you can create the following deployment:
kubectl create deployment nginx-test --image nginx --replicas 4
Verification of deployment start-up:
kubectl get deployments.apps
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test   4/4     4            4           14s
kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
nginx-test-847f5bc47c-d9lwm   1/1     Running   0          32m   10.33.0.4   euro1   <none>           <none>
nginx-test-847f5bc47c-pftqc   1/1     Running   0          32m   10.33.2.3   euro2   <none>           <none>
nginx-test-847f5bc47c-r48ch   1/1     Running   0          32m   10.33.2.2   euro2   <none>           <none>
nginx-test-847f5bc47c-z5jjj   1/1     Running   0          32m   10.33.0.5   euro1   <none>           <none>
The pods ran on both the euro1 and euro2 nodes because we allowed scheduling on the control-plane in Step 8.I. If the control-plane were still marked with the `node-role.kubernetes.io/control-plane:NoSchedule` taint, all the pods would be started on the euro2 node.