Create a Kubernetes cluster “The Hard Way”

This tutorial is based on Kelsey Hightower’s Kubernetes The Hard Way guide, as well as a very nice guide by Just Me and Opensource for LXC containers.
The main difference is that I wanted to do it on premises with VMs (the original tutorial runs on Google Cloud) and with only three machines for simplicity: one Control Plane and two Workers.
The original tutorial uses six machines: three Control Planes and three Workers.

These guides are made for learning purposes, to understand in depth how the Kubernetes components work together; that is also why this setup is not recommended for production and doesn’t cover security or best practices.
The post is pretty long, so please see the summary below.

Install client tools
Create Certificate Authority (CA) and Generate TLS Certificates
Create configurations
Bootstrap Control Plane
Bootstrap workers
Routes
Test

Requirements
Debian 11. It probably works on other distros as well, but it was only tested on this one; leave a comment if it works for you elsewhere.
Set up the network with NetworkManager (nmcli)

Hostnames
Add the hosts to /etc/hosts on each node and on the client machine, with the correct IP and hostname for each; in my case:
192.168.254.40 c4-control-plane
192.168.254.41 c4-worker-1
192.168.254.42 c4-worker-2

Install the Client Tools, the software used to generate the certificates and the k8s configurations.

If you are using a Linux machine you can install these tools locally; if not, or if you prefer not to, you can install them on the Control Plane.

CFSSL: CloudFlare’s PKI/TLS toolkit (cfssl and cfssljson)

wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
sudo mv cfssl_1.4.1_linux_amd64 /usr/bin/cfssl
sudo mv cfssljson_1.4.1_linux_amd64 /usr/bin/cfssljson
sudo chmod +x /usr/bin/cfssl /usr/bin/cfssljson

kubectl: the command line tool that lets you control Kubernetes clusters

wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
sudo mv kubectl /usr/bin/
sudo chmod +x /usr/bin/kubectl

Create Certificate Authority (CA) and Generate TLS Certificates

Certificates need to be generated for the following components, plus the admin client and the service account key pair.

Control Plane:
– kube-apiserver
– kube-scheduler
– kube-controller-manager
– etcd

Workers:
– kubelet
– kube-proxy

Generate the CA configuration file

{
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}

Result: ca.pem, ca-key.pem (ca-config.json, ca-csr.json, ca.csr)
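
As an optional sanity check, you can inspect the generated CA with openssl (which should already be available on Debian):

openssl x509 -in ca.pem -noout -subject -issuer -dates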

The Admin Client Certificate

{
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
}

Result: admin-key.pem, admin.pem (admin-csr.json, admin.csr)

Client certificates for kubelet

Kubernetes uses the Node Authorizer, so each Worker node's kubelet needs a certificate with the CN system:node:[nodeName], and of course the certificates need to be signed by our CA in order to authenticate with the API.

for instance in c4-worker-1 c4-worker-2; do
cat > ${instance}-csr.json << EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

EXTERNAL_IP=$(grep ${instance} /etc/hosts | awk '{print $1}')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

Result: c4-worker-1-key.pem, c4-worker-1.pem, c4-worker-2-key.pem, c4-worker-2.pem (c4-worker-1-csr.json, c4-worker-1.csr, c4-worker-2-csr.json, c4-worker-2.csr)
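
Optionally, you can confirm that each certificate carries the expected subject (O = system:nodes, CN = system:node:<hostname>) before moving on:

for instance in c4-worker-1 c4-worker-2; do
  openssl x509 -in ${instance}.pem -noout -subject
done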

The kube-controller-manager Certificate

{
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}

Results: kube-controller-manager-key.pem, kube-controller-manager.pem (kube-controller-manager-csr.json, kube-controller-manager.csr)

The kube-proxy Certificate

{
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
}

Results: kube-proxy-key.pem, kube-proxy.pem (kube-proxy-csr.json, kube-proxy.csr)

The kube-scheduler Certificate

{
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}

Results: kube-scheduler-key.pem, kube-scheduler.pem (kube-scheduler-csr.json, kube-scheduler.csr)

The kube-apiserver Certificate

{
KUBERNETES_PUBLIC_ADDRESS="192.168.254.40"
WORKER_1=192.168.254.41
WORKER_2=192.168.254.42

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${WORKER_1},${WORKER_2},${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
}

Result: kubernetes-key.pem, kubernetes.pem (kubernetes-csr.json, kubernetes.csr)
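
A quick optional check that all the IPs and hostnames ended up in the Subject Alternative Name of the API server certificate:

openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"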

The Service Account Key Pair

{
cat > service-account-csr.json << EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
}

Result: service-account-key.pem, service-account.pem (service-account-csr.json, service-account.csr)

Distribute the Client and Server Certificates

scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem c4-control-plane:~/
for instance in c4-worker-1 c4-worker-2; do
scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done

Create configuration files

Before creating the configuration files, make sure KUBERNETES_PUBLIC_ADDRESS is set to the Control Plane's IP address.

KUBERNETES_PUBLIC_ADDRESS="192.168.254.40"

The kubelet Kubernetes Configuration File

for instance in c4-worker-1 c4-worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

Results: c4-worker-1.kubeconfig, c4-worker-2.kubeconfig
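
If you want to verify a generated kubeconfig, you can view it with kubectl; since the certificates are embedded they show up as DATA+OMITTED / REDACTED:

kubectl config view --kubeconfig=c4-worker-1.kubeconfig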

The kube-proxy Configuration File

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Result: kube-proxy.kubeconfig

The kube-controller-manager Configuration File

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

Result: kube-controller-manager.kubeconfig

The kube-scheduler Configuration File

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Result: kube-scheduler.kubeconfig

The admin Configuration File

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Result: admin.kubeconfig

Distribute the Kubernetes Configuration Files

scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig c4-control-plane:~/
for instance in c4-worker-1 c4-worker-2; do
scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

Generating the Data Encryption Config and Key (https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Result: encryption-config.yaml

Copy to the Control Plane.

scp encryption-config.yaml c4-control-plane:~/

Bootstrap the Control Plane

The following configuration needs to be done on the Control Plane machine, in my case c4-control-plane.

Install etcd

wget -q --show-progress --https-only --timestamping "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

Create etcd.service

INTERNAL_IP=192.168.254.40
ETCD_NAME=$(hostname -s)

cat << EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster c4-control-plane=https://${INTERNAL_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Enable and start etcd

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Test etcd

sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
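
You can also check the endpoint health; it should report https://127.0.0.1:2379 as healthy.

sudo ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem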

Install the Control Plane's main components: kube-apiserver, kube-controller-manager, kube-scheduler and kubectl.

sudo mkdir -p /etc/kubernetes/config
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Configure the kube-apiserver

sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml /var/lib/kubernetes/
KUBERNETES_PUBLIC_ADDRESS=192.168.254.40

cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${KUBERNETES_PUBLIC_ADDRESS} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${KUBERNETES_PUBLIC_ADDRESS}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the kube-controller-manager

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Create the kube-scheduler configuration and service

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the Control Plane Services

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Copy admin.kubeconfig and test the Control Plane

mkdir -p ~/.kube/
cp admin.kubeconfig ~/.kube/config
kubectl cluster-info

If you see the following result, you have successfully installed your Control Plane; let's move on to the Workers.

Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
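
As a further optional check you can list the component statuses (deprecated in v1.21 but still informative here) and the default namespaces:

kubectl get componentstatuses
kubectl get namespaces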

Bootstrapping the Kubernetes Worker Nodes

The following procedure has to be done on both Workers, c4-worker-1 and c4-worker-2; you can use a terminal multiplexer like tmux.

Requirements

sudo apt-get update
sudo apt-get install socat conntrack ipset wget curl iptables apparmor-utils -y

Disable swap

sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a

Download and Install Worker Binaries

wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
sudo mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/kubelet /var/lib/kube-proxy /var/lib/kubernetes /var/run/kubernetes
mkdir containerd
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/

Configure CNI Networking. Two different configs need to be generated, as the POD_CIDR of c4-worker-1 is 10.200.0.0/24 and the POD_CIDR of c4-worker-2 is 10.200.1.0/24 (see the comment below).

POD_CIDR=10.200.0.0/24
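# On c4-worker-2 use POD_CIDR=10.200.1.0/24 instead.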

cat << EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

Loopback

cat << EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}
EOF

Configure containerd

sudo mkdir -p /etc/containerd/
sudo containerd config default | sudo tee /etc/containerd/config.toml

Edit the containerd configuration

sudo vim /etc/containerd/config.toml

Below the line:
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
add:
            SystemdCgroup = true

Result:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
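
If you prefer to script this change instead of editing the file by hand, a minimal sketch is below. It assumes the SystemdCgroup key is not already present in the generated config (the case for containerd 1.4.4 here); the indentation of the appended line will differ, which TOML doesn't mind, but double-check the result with the grep.

sudo sed -i '/runtimes\.runc\.options]/a SystemdCgroup = true' /etc/containerd/config.toml
grep -A1 'runtimes.runc.options' /etc/containerd/config.toml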

Create the containerd.service systemd unit file:

cat << EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Configure the kubelet

sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
cat << EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
cgroupDriver: systemd
EOF
cat << EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the kube-proxy

sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
cat << EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
cat << EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start Worker services

sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy

At this point you should be ready; you can check the nodes from your Control Plane.

kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
c4-worker-1   Ready    <none>   23m   v1.21.0
c4-worker-2   Ready    <none>   16m   v1.21.0

Congratulations! You have successfully installed your Kubernetes cluster "The Hard Way".

Routes

Because we are using the bridge CNI plugin, we need to set up routes in order to be able to reach the pods from the other machines.

c4-control-plane

sudo nmcli connection modify enp1s0 +ipv4.routes "10.200.0.0/24 192.168.254.41"
sudo nmcli connection modify enp1s0 +ipv4.routes "10.200.1.0/24 192.168.254.42"
sudo nmcli con up enp1s0

c4-worker-1

sudo nmcli connection modify enp1s0 +ipv4.routes "10.200.1.0/24 192.168.254.42"
sudo nmcli con up enp1s0

c4-worker-2

sudo nmcli connection modify enp1s0 +ipv4.routes "10.200.0.0/24 192.168.254.41"
sudo nmcli con up enp1s0
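
You can verify that the routes are in place with ip route; on the Control Plane, for example, you should see entries similar to (device name may differ on your setup):

ip route show | grep 10.200
10.200.0.0/24 via 192.168.254.41 dev enp1s0 proto static
10.200.1.0/24 via 192.168.254.42 dev enp1s0 proto static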

Test

Create an nginx pod and make sure that it is successfully created.

kubectl run nginx --image=nginx
kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          30s   172.16.0.87   c4-worker-1   <none>           <none>
curl 172.16.0.87
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
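
As an extra, optional check of kube-proxy, you can expose the pod through a NodePort service and reach nginx via any node IP; the assigned port (shown below as a placeholder) will differ on your cluster:

kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc nginx
curl http://192.168.254.41:<nodePort>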

Create a CentOS 8 or Rocky Linux 8 kubernetes cluster with kubeadm

This post is similar to “Create a Debian 11 kubernetes cluster with kubeadm”, but of course for CentOS 8 or Rocky Linux 8.
The cluster will be composed of three machines, one control plane and two workers.
I used KVM (Kernel-based Virtual Machine) running CentOS 8 (also tested on Rocky Linux 8) and installed a minimal system with SSH.
Note: this tutorial is made for learning and doesn't cover security or best practices; in order to keep it simple we will disable firewalld and SELinux.

Test environment:
CentOS-8.4
Rocky-8.4

References
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Hostnames
c2-control-plane
c2-worker-1
c2-worker-2

The following setup has to be done on all three machines; to be more efficient you can use a terminal multiplexer like tmux.

Edit the hosts file with your favorite editor and add the following lines with the correct IP addresses.

sudo vim /etc/hosts
192.168.254.20 c2-control-plane
192.168.254.21 c2-worker-1
192.168.254.22 c2-worker-2

Disable firewalld, SELinux and swap

sudo systemctl disable firewalld
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i '/swap/d' /etc/fstab

Load the required kernel modules and set the kernel settings:
– overlay is needed for overlayfs, https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html
– br_netfilter is needed for iptables to correctly see bridged traffic, http://ebtables.netfilter.org/documentation/bridge-nf.html

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply the settings; you can also skip these commands by rebooting the machines.

sudo systemctl stop firewalld
sudo setenforce Permissive
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system
sudo swapoff -a

Install requirements

sudo yum install iproute-tc chrony -y

Install containerd

sudo yum install yum-utils -y
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y
sudo yum install containerd.io -y

Configure containerd

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Edit the containerd configuration

sudo vim /etc/containerd/config.toml

Below the line:
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
add:
            SystemdCgroup = true

Result:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

Enable and start containerd and check the status

sudo systemctl enable containerd
sudo systemctl start containerd
sudo systemctl status containerd

Install kubelet, kubeadm and kubectl

Add repository

cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Update repository

sudo yum update -y

Install the packages; in my case I installed version 1.21.0-0.

sudo yum install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 -y

Enable and start kubelet

sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service

Lock the versions in order to avoid unwanted updates via yum update

sudo yum install yum-plugin-versionlock -y
sudo yum versionlock kubelet kubeadm kubectl

The following setup has to be done on the Control Plane node only

Create cluster configuration

sudo kubeadm config print init-defaults | tee ClusterConfiguration.yaml

Modify ClusterConfiguration.yaml, replacing 192.168.254.20 with your Control Plane's IP address

sudo sed -i '/name/d' ClusterConfiguration.yaml
sudo sed -i 's/ advertiseAddress: 1.2.3.4/ advertiseAddress: 192.168.254.20/' ClusterConfiguration.yaml
sudo sed -i 's/ criSocket: \/var\/run\/dockershim\.sock/ criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml
cat << EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
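
Before running kubeadm init it doesn't hurt to confirm the edits landed; the output should show advertiseAddress: 192.168.254.20, criSocket: /run/containerd/containerd.sock and cgroupDriver: systemd.

grep -E 'advertiseAddress|criSocket|cgroupDriver' ClusterConfiguration.yaml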

Create kubernetes cluster

sudo kubeadm init --config=ClusterConfiguration.yaml --cri-socket /run/containerd/containerd.sock

Copy the kube configuration

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

From the Control Plane node you can now check your kubernetes cluster; c2-control-plane is in NotReady state because we didn't set up the networking yet.

kubectl get nodes
NAME               STATUS     ROLES                  AGE     VERSION
c2-control-plane   NotReady   control-plane,master   3m49s   v1.21.0

Set up networking with Calico

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
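
You can watch the Calico and CoreDNS pods come up before joining the workers (press Ctrl+C to stop watching):

kubectl get pods -n kube-system -w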

The following setup has to be done on the Worker nodes only

Join the other nodes to the cluster; the command must be run on the worker nodes only.
At the end of the "kubeadm init ..." command a join command was printed; it should look like:

sudo kubeadm join 192.168.254.20:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:e60463ed4aa5d49f0f41460c6904f992f0e53f1921f81dc88a80131a9be273c0

If you missed it, you can still generate a new token and print the join command (on the Control Plane) with:

kubeadm token create --print-join-command

We are ready; the setup can be validated with kubectl: all nodes are in Ready state and the kube-system pods are running.

kubectl get nodes
NAME               STATUS   ROLES                  AGE   VERSION
c2-control-plane   Ready    control-plane,master   54m   v1.21.0
c2-worker-1        Ready    <none>                 90s   v1.21.0
c2-worker-2        Ready    <none>                 86s   v1.21.0

kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-74b8fbdb46-fwx7g   1/1     Running   0          2m42s
kube-system   calico-node-5p22k                          1/1     Running   0          2m42s
kube-system   calico-node-h47c9                          1/1     Running   0          2m
kube-system   calico-node-nl4gd                          1/1     Running   0          2m4s
kube-system   coredns-558bd4d5db-f2j45                   1/1     Running   0          55m
kube-system   coredns-558bd4d5db-qcvhg                   1/1     Running   0          55m
kube-system   etcd-c2-control-plane                      1/1     Running   0          55m
kube-system   kube-apiserver-c2-control-plane            1/1     Running   0          55m
kube-system   kube-controller-manager-c2-control-plane   1/1     Running   0          55m
kube-system   kube-proxy-5988n                           1/1     Running   0          55m
kube-system   kube-proxy-6q2br                           1/1     Running   0          2m
kube-system   kube-proxy-jqdbh                           1/1     Running   0          2m4s
kube-system   kube-scheduler-c2-control-plane            1/1     Running   0          55m