With this article we start a series dedicated to Kubernetes. No previous knowledge of Kubernetes is required, although some Linux administration skills will be helpful.
Kubernetes (k8s for short) is a platform for running and managing applications in containers, so naturally it needs a containerization tool. In this post we are going to use probably the most popular of them – Docker.
In this first article we go through the installation and basic configuration of a Kubernetes cluster in a multi-machine environment. For the purpose of this post we have prepared three virtual machines (but of course you may use bare metal 😉) with CentOS 8 preinstalled. All machines are connected to the same subnet.
The canonical k8s installation distinguishes two categories of nodes:
- Master (in our case named k8admin), which runs the processes responsible for, among others, the API, the assignment of tasks to worker nodes, and the maintenance of the cluster configuration
- Worker (our k8work1 and k8work2), which use Docker to run tasks (or components carrying out business functions)
Let's start the installation.
Activities on each node
Kubernetes requires swap to be disabled on each node (kubeadm will refuse to proceed otherwise), so we turn it off with the command:
$ sudo swapoff -a
To prevent swap from being mounted again after an OS restart, it also needs to be commented out or removed from /etc/fstab.
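As a quick sketch, commenting out the swap entry can be automated with sed. We demonstrate on a sample file in /tmp (the device names are hypothetical); on a real node the target would be /etc/fstab:

```shell
# Create a sample fstab (hypothetical device names) to demonstrate the edit
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/cl-root   /      xfs    defaults   0 0
/dev/mapper/cl-swap   swap   swap   defaults   0 0
EOF

# Comment out every line that mounts swap, as we would do in /etc/fstab
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```

Only the swap line gets the leading `#`; the root filesystem entry is left untouched.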
The first thing to install is Docker. To add the Docker repository to our system we execute:
$ sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Unfortunately, the command dnf install docker-ce – which should install the newest version of Docker – did not work on CentOS 8 at the time this post was written. That's why we use the --nobest option, which lets the installer choose the first version of the package that satisfies the dependencies:
$ sudo dnf install -y --nobest docker-ce
For some reason Docker does not use the proxy settings from /etc/environment, so if there is a proxy server in your environment, you will have to define it in /etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://proxy.yoursite.local:80/"
After creating or changing this drop-in file, run sudo systemctl daemon-reload so that systemd picks it up.
After the installation is finished we enable and start the Docker service:
$ sudo systemctl enable --now docker
Now we move on to the installation of Kubernetes. Kubernetes packages are not included in the CentOS distribution, and we want the newest available version, so we create the repo file /etc/yum.repos.d/k8.repo as follows:
[k8s]
name=k8s
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
A careful reader will notice that there is no package repository dedicated to CentOS 8, so we have to use the version for the previous OS release.
Activities on master node
Next, we install the Kubernetes tools required on the master node:
$ sudo dnf install -y kubeadm kubectl
and activate the kubelet service:
$ sudo systemctl enable --now kubelet
Let's try to initialize the cluster. The command:
$ sudo kubeadm init
will give us two warnings:
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri
We are going to mitigate the first of them by opening ports required by k8s:
$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=10251/tcp
$ sudo firewall-cmd --permanent --add-port=10252/tcp
$ sudo firewall-cmd --permanent --add-port=10255/tcp
$ sudo firewall-cmd --reload
Additionally, we load the br_netfilter module:
$ sudo modprobe br_netfilter
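To make the module load persist across reboots and to let iptables see bridged traffic (a standard kubeadm prerequisite), we can additionally create two small config files. This is a sketch – the file names under /etc/modules-load.d and /etc/sysctl.d are our own choice:

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter

# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

After creating them, sudo sysctl --system applies the sysctl settings without a reboot.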
The second warning we handle according to the instructions in the message, by adding the following definition to /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
and by restarting the Docker service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Let's try to initialize the cluster again, this time with a higher log level:
$ sudo kubeadm init -v5
The operation should finish with the message:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.92.29.12:6443 --token 57snrx.7rnpbqrijssjsk83 \
--discovery-token-ca-cert-hash sha256:54efc0d9c5cb4d1e11c1c0c397a6713f9d3b727a6fedfe83c0b4db718de707d5
So, we follow the instructions:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
and now we are able to use kubectl (the basic tool for controlling the cluster) as a regular user.
The instructions also told us to deploy a pod network. We are going to set it up with Weave Net. We need to know the exact version of the cluster, which can be retrieved with the command:
$ export kubever=$(kubectl version | base64 | tr -d '\n')
and using that we initialize the network (the URL is quoted so the shell does not misinterpret the ? character):
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
Activities on worker nodes
Mindful of our experiences from the master node installation, this time we start with the firewall configuration:
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=10255/tcp
$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp
$ sudo firewall-cmd --permanent --add-port=6783/tcp
$ sudo firewall-cmd --reload
Next, proceed with the activities described in the section Activities on each node, remembering the Docker config in /etc/docker/daemon.json.
Then we install kubeadm package:
$ sudo dnf install -y kubeadm
To connect the node to the cluster we use the token returned during the master node initialization:
$ kubeadm join 10.92.29.12:6443 --token 57snrx.7rnpbqrijssjsk83 --discovery-token-ca-cert-hash sha256:54efc0d9c5cb4d1e11c1c0c397a6713f9d3b727a6fedfe83c0b4db718de707d5
The operation should finish with the message:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
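The join token from the init output expires after a while (24 hours by default). If it has expired, a fresh join command can be printed on the master with kubeadm token create --print-join-command. The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate with the standard openssl pipeline; the sketch below demonstrates it on a throwaway self-signed certificate in /tmp – on the master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate just for this demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Hash of the CA public key -- the value kubeadm expects after "sha256:"
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The pipeline prints a 64-character hex digest, i.e. the sha256:... part of the join command.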
Having done the above tasks on both worker nodes, the k8s cluster is assembled. To confirm that, we can execute the kubectl get nodes command on the k8admin node:
NAME      STATUS   ROLES    AGE   VERSION
k8admin   Ready    master   2h    v1.18.4
k8work1   Ready    <none>   12m   v1.18.4
k8work2   Ready    <none>   2m    v1.18.4
As you can see, the only node which has a role assigned is k8admin. That's normal, but if you want to assign a role (actually it is only a label) to the worker nodes, you can use the command kubectl label node k8work1 node-role.kubernetes.io/worker=worker. Here is the result of kubectl get nodes after labeling each worker node:
NAME      STATUS   ROLES    AGE   VERSION
k8admin   Ready    master   2h    v1.18.4
k8work1   Ready    worker   14m   v1.18.4
k8work2   Ready    worker   4m    v1.18.4
Pod
The cluster is established, but we want to check whether it really works – so let's deploy a task. The simplest entity which can be deployed on k8s is a pod. You may think of a pod as a virtual micro-machine, which is built upon at least one container and which has a unique IP address.
Here is our pod definition (pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: simplest
spec:
  containers:
  - name: simplest
    image: centos:8.2.2004
    command: ["/bin/sh"]
    args: ["-c", "while true;do sleep 3;done"]
The pod is built upon a CentOS container, version 8.2.2004. To keep the pod busy we define a simple shell loop.
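The keep-alive loop can be tried in any local shell. In this sketch timeout cuts it off after one second, and the exit code 124 confirms the loop would otherwise never terminate on its own – exactly what keeps the pod in the Running state:

```shell
# Run the pod's keep-alive loop locally, killed by timeout after 1 second
timeout 1 sh -c 'while true; do sleep 3; done'
echo "exit code: $?"    # timeout exits with 124 when it had to kill the command
```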
To run the pod we execute:
$ kubectl apply -f pod.yml
pod/simplest created
Let's check the pod status:
$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
simplest   1/1     Running   0          70s
As you can see, our pod works. The last thing is to log in to the pod – just as if it were a regular Linux machine:
$ kubectl exec -it simplest -- bash
[root@simplest /]# ls
bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@simplest /]# ps
  PID TTY          TIME CMD
  190 pts/0    00:00:00 bash
  208 pts/0    00:00:00 ps
The command kubectl exec -- something executes the given command on the pod – in our case, thanks to the -it switch, it opens an interactive bash session.
That's all for the installation of a simple k8s cluster. In the next articles we are going to use it to present some more complicated tasks.