How to Set Up a Multi-Node Kubernetes Cluster Control Plane on Red Hat Enterprise Linux 9 (Step-by-Step Guide)
This document focuses on installing the software for a Kubernetes control plane node (also called a master node) and creating a new Kubernetes cluster on Red Hat Enterprise Linux 9.
The installation procedure is performed by an admin user with sudo privileges.
We can create a new account with sudo privileges that can be used later for administration purposes:
# sample user
useradd admin
passwd admin
# an example of passwordless sudo config for the installation user
# sudo permission
cat <<EOF >> /etc/sudoers.d/admin
admin ALL=(ALL) NOPASSWD: ALL
EOF
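As a quick sanity check, switch to the new account and confirm that passwordless sudo works:
su - admin
sudo whoami   # should print "root" without a password prompt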
We can add a kubectl alias and bash auto-completion in the bashrc (the completion line takes effect once kubectl is installed later in this guide):
# sudo vim /etc/bashrc
# or
# vim ~/.bashrc
alias k="kubectl"
source <(kubectl completion bash)
complete -F __start_kubectl k
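Once kubectl is installed and the shell configuration is reloaded, the alias can be exercised, for example:
# reload the shell configuration
source ~/.bashrc
# the alias expands to kubectl, with tab completion available
k version --client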
In our example setup, we have three nodes that we are going to use as master nodes. Later, if we don’t have an extra worker node, we will enable user workloads on the master nodes.
In this installation, we assume that firewalld is stopped and disabled on every node.
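If firewalld is still running, it can be stopped and disabled as follows:
# stop firewalld now and prevent it from starting at boot
sudo systemctl disable --now firewalld
# confirm it is inactive
systemctl is-active firewalld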
Setup hosts information
Now populate /etc/hosts with the content below on all three VM nodes:
# vim /etc/hosts
# node IP addresses
192.168.60.85 master01.example.com master01
192.168.60.86 master02.example.com master02
192.168.60.87 master03.example.com master03
# VIP for the software load balancer
192.168.60.91 k8scluster1.example.com k8scluster1
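We can verify name resolution from each node, for example (the VIP will answer pings only after the load balancer is configured later):
# confirm the names resolve via /etc/hosts
getent hosts master01 master02 master03 k8scluster1
ping -c 1 master02.example.com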
Upgrade Node Software to Latest Level
First, upgrade the current software packages and install extra utility packages.
# upgrade current os to the latest level
sudo yum -y upgrade
# install extra packages
sudo yum -y install bash-completion tree net-tools wget traceroute jq git
[admin@master01 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 9.3 (Plow)
[admin@master01 ~]$ uname -a
Linux master01.example.com 5.14.0-362.13.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 24 01:57:57 EST 2023 x86_64 x86_64 x86_64 GNU/Linux
Install and configure prerequisites
Kubernetes 1.28+ supports swap space on Linux nodes (the NodeSwap feature is beta as of v1.28). In this installation, we are not going to disable swap space.
Enable kernel parameters
# load the required kernel modules persistently across reboots
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# enable immediately
sudo modprobe overlay
sudo modprobe br_netfilter
# either comment out the existing "net.ipv4.ip_forward" line or change it to "net.ipv4.ip_forward = 1"
sudo vim /etc/sysctl.d/99-sysctl.conf
#net.ipv4.ip_forward = 0
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# verify kernel module
lsmod | grep br_netfilter
lsmod | grep overlay
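The applied sysctl values can be confirmed directly; each should report 1:
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables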
Enable IPv6 packet forwarding options at kernel level:
# enable ipv6 forwarding
cat <<EOF | sudo tee -a /etc/sysctl.d/k8s.conf
net.ipv6.conf.all.forwarding=1
EOF
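Re-apply the settings and confirm the new value:
sudo sysctl --system
# should report 1
sysctl net.ipv6.conf.all.forwarding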
Disable SELinux
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
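Verify that the change took effect:
# should print "Permissive"
getenforce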
Container Runtime: CRI-O Installation
It’s possible to build CRI-O from source code; however, we are not going to do that here. Instead, we will add the CRI-O stable repository and install CRI-O from it.
Add the CRI-O v1.29 repository (the current stable version as of 8 January 2024) on the node:
cat <<EOF | sudo tee /etc/yum.repos.d/crio-stable.repo
[crio-stable]
name=CRI-O Stable v1.29
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.29/rpm/repodata/repomd.xml.key
EOF
Install official package dependencies:
# for v1.29+
sudo dnf install -y container-selinux
Now install cri-o:
# install cri-o
sudo dnf install -y --repo crio-stable cri-o
# enable & start services
sudo systemctl enable crio
sudo systemctl start crio
# For CRI-O, the CRI socket is available at /var/run/crio/crio.sock by default.
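If the cri-tools package is available (it is commonly installed alongside cri-o), the runtime can be queried to confirm it is responding; a quick sanity check:
# query CRI-O over its default socket
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version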
Install Kubeadm, Kubelet and Kubectl
As of our installation time, the latest Kubernetes release is v1.28, so we are going to add the matching repository.
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Now list the available versions and install the software:
# list available kubeadm versions (the repo's exclude list requires --disableexcludes)
dnf --showduplicates list kubeadm --disableexcludes=kubernetes
# install package
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# enable kubelet
sudo systemctl enable --now kubelet
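Confirm the installed versions before proceeding:
kubeadm version -o short
kubectl version --client
kubelet --version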
Follow the above procedure on all three control plane nodes and on any worker nodes.
Create a Highly Available Kubernetes Control Plane Cluster
Now, create a highly available control plane config file for kubeadm with the name kubeadm-config-ha.yaml. This setup requires a load balancer to distribute traffic among the API servers of the control plane nodes. The load balancer can be external, or software-based and installed on the control plane nodes themselves.
Load Balancer for the API Server
The combination of keepalived and haproxy can be used to create a software load balancer. There are two options for setting up a software-based load balancer on the control plane nodes (a sketch of the first option follows this list):
- Run the services on the operating system
- Run the services as static pods
For the detailed procedure and configuration files, follow the document at https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md
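As an illustration of the first option (services on the operating system), below is a minimal haproxy sketch, not a complete HA setup. It assumes haproxy is installed from the RHEL repositories, listens on the frontend port 9443 used by our controlPlaneEndpoint, and forwards to the kube-apiserver default port 6443 on the three masters. keepalived must still be configured to float the VIP 192.168.60.91 as described in the linked document.
# install the load balancer packages
sudo dnf install -y haproxy keepalived
# minimal haproxy config: frontend on 9443, backends on the kube-apiserver port 6443
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver
    bind *:9443
    default_backend kube-apiserver

backend kube-apiserver
    option tcp-check
    balance roundrobin
    server master01 192.168.60.85:6443 check
    server master02 192.168.60.86:6443 check
    server master03 192.168.60.87:6443 check
EOF
sudo systemctl enable --now haproxy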
Kubeadm Config File
We will initialize the Kubernetes cluster control plane using a configuration file named kubeadm-config-ha.yaml, which contains the following content.
# kubeadm-config-ha.yaml
kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
nodeRegistration:
  # specify the CRI socket that will be used in the k8s cluster
  criSocket: unix:///var/run/crio/crio.sock
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: k8s-dev-cluster1
controlPlaneEndpoint: "k8scluster1.example.com:9443"
# controlPlaneEndpoint: "192.168.60.91:9443"
networking:
  podSubnet: 10.244.0.0/16,2001:db8:42:0::/56
  serviceSubnet: 10.96.0.0/16,2001:db8:42:1::/112
apiServer:
  extraArgs:
    anonymous-auth: "true"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# cgroup driver configuration
cgroupDriver: systemd
# swap space configuration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap
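Before initializing, the configuration file can be sanity-checked (the validate subcommand is available in recent kubeadm releases):
# validate the kubeadm configuration file
kubeadm config validate --config kubeadm-config-ha.yaml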
Initialize the Kubernetes HA cluster control plane deployment with the above config file:
# create kubernetes cluster with configuration parameters
sudo kubeadm init --config kubeadm-config-ha.yaml --upload-certs --ignore-preflight-errors=Swap
At the end of the installation, kubeadm prints two sets of join commands: one for the remaining control plane nodes and one for worker nodes. Follow the cluster joining steps, and you will then have a working Kubernetes cluster.
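The join commands will look similar to the sketch below; the token, discovery hash, and certificate key shown here are placeholders and must be taken from your own kubeadm init output:
# on the remaining control plane nodes (placeholder values)
sudo kubeadm join k8scluster1.example.com:9443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
# on worker nodes (placeholder values)
sudo kubeadm join k8scluster1.example.com:9443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
To run kubectl as the admin user, copy the admin kubeconfig as suggested in the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config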
If you would like to run user workloads on the control plane nodes, remove the taint using the commands below:
kubectl taint node master01.example.com node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node master02.example.com node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node master03.example.com node-role.kubernetes.io/control-plane:NoSchedule-
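Finally, verify the node states and that the taints are gone (nodes report Ready only after a pod network add-on is installed):
# list nodes and their status
kubectl get nodes -o wide
# the control-plane NoSchedule taint should no longer be listed
kubectl describe nodes | grep -i taints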