Check and enable Wake-on-LAN in Linux

You can check whether WOL is enabled with ethtool:

sudo ethtool enp4s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: d

As you can see, in my case Wake-on is d (disabled).
The other possible values mean wake can be triggered by p (PHY activity), u (unicast activity), m (multicast activity), b (broadcast activity), a (ARP activity), or g (magic packet activity).

The value g is required for WOL to work, and we can change it with ethtool or NetworkManager.

With ethtool, the change does not survive a reboot:

sudo ethtool -s enp4s0 wol g

With NetworkManager, the change survives reboots:

sudo nmcli connection modify enp4s0 802-3-ethernet.wake-on-lan magic
sudo nmcli connection up enp4s0
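
You can confirm that NetworkManager stored the setting (assuming your connection profile is named enp4s0, as above):

nmcli -f 802-3-ethernet.wake-on-lan connection show enp4s0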

After applying the setting, you can check again with ethtool:

sudo ethtool enp4s0 | grep -i wake
	Supports Wake-on: pumbg
	Wake-on: g
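
To test it end to end, shut the machine down and send a magic packet from another host on the same LAN, for example with the wakeonlan tool (the MAC address below is a placeholder; use the one shown by ip link show enp4s0):

sudo apt install -y wakeonlan
wakeonlan 00:11:22:33:44:55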

Create a Debian 11 Kubernetes cluster with kubeadm

In this post, I'll keep things as simple as possible to get a Kubernetes cluster up and running.
The cluster will be composed of three machines, one control plane and two workers.
I used KVM (Kernel-based Virtual Machine) running Debian 11 and installed a minimal system with SSH.
Note: this tutorial is made for learning; it doesn't cover security or best practices.

The following setup has to be done on all three machines. To be more efficient, you can use a terminal multiplexer like tmux.

Edit the hosts file with your favorite editor and add the following lines with your own IP addresses (the addresses below are examples from the default libvirt network):

sudo vim /etc/hosts

192.168.122.10 c1-control-plane
192.168.122.11 c1-worker-1
192.168.122.12 c1-worker-2
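
You can quickly confirm that the names resolve:

getent hosts c1-control-plane c1-worker-1 c1-worker-2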

Load the required kernel modules and set the kernel parameters:
overlay is needed for overlayfs,
br_netfilter is needed for iptables to correctly see bridged traffic.

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Turn off swap, as the kubelet requires it to be disabled.

sudo sed -i '/swap/d' /etc/fstab

Apply the settings (you can also skip this step by rebooting the machines).

sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system
sudo swapoff -a
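
Optionally, confirm everything took effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
swapon --show # no output means swap is off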

Install an NTP client, otherwise etcd will be mad about clock drift.

sudo apt install -y chrony
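
You can check that the clock is synchronizing with:

chronyc tracking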

Install containerd

# requirements

sudo apt install -y curl gpg lsb-release apparmor apparmor-utils
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install

sudo apt update
sudo apt-get install -y containerd.io
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
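
Optional: Kubernetes recommends the systemd cgroup driver on systemd-based distros like Debian. If you configure the kubelet to use it (kubeadm only defaults to systemd from v1.22), containerd must match; a minimal sketch that flips the generated default:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd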

Install the Kubernetes tools (kubelet, kubeadm, kubectl), in my case version 1.21.0.

# requirements

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# install

sudo apt-get update
# install iptables and its libraries from the Debian stable suite
sudo apt install -y iptables libiptc0/stable libxtables12/stable
sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
sudo apt-mark hold kubelet kubeadm kubectl
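
Confirm the versions and that the packages are held:

kubeadm version
apt-mark showhold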

Now it's time to create our Kubernetes cluster. The following commands need to be run on the control plane only.

# 192.168.0.0/16 is Calico's default pod CIDR
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.21.0
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
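
The kubeconfig should now work:

kubectl cluster-info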

From the control plane node you can now check your Kubernetes cluster; c1-control-plane is NotReady because we didn't set up the networking yet.

kubectl get nodes
NAME               STATUS     ROLES                  AGE     VERSION
c1-control-plane   NotReady   control-plane,master   3m49s   v1.21.0

Set up networking with Calico

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
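
It can take a minute for the Calico and CoreDNS pods to come up; you can watch them with:

kubectl get pods -n kube-system -w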

Join the other nodes to our cluster; the command must be run on the worker nodes only.
At the end of the "kubeadm init ..." output you were given a join command, which should look like:

sudo kubeadm join c1-control-plane:6443 --token oxaul0.24g50wlwsp4ktiqs --discovery-token-ca-cert-hash sha256:74746c748be5fef131d9c91a591c053591b6ce1e274396bcb7c48b6e6664bded

If you missed it, you can still generate a new token (they expire after 24 hours by default) and print the join command with:

kubeadm token create --print-join-command

We are done; the setup can be validated with kubectl: all nodes are in the Ready state and the kube-system pods are running.

kubectl get nodes
NAME               STATUS   ROLES                  AGE     VERSION
c1-control-plane   Ready    control-plane,master   5m52s   v1.21.0
c1-worker-1        Ready    <none>                 2m10s   v1.21.0
c1-worker-2        Ready    <none>                 66s     v1.21.0
kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-78d6f96c7b-9q4lq   1/1     Running   0          9h
kube-system   calico-node-4mq7p                          1/1     Running   0          9h
kube-system   calico-node-8km7w                          1/1     Running   0          9h
kube-system   calico-node-sjzs4                          1/1     Running   0          9h
kube-system   coredns-558bd4d5db-7pbjx                   1/1     Running   0          9h
kube-system   coredns-558bd4d5db-ptn59                   1/1     Running   0          9h
kube-system   etcd-c1-control-plane                      1/1     Running   1          9h
kube-system   kube-apiserver-c1-control-plane            1/1     Running   0          9h
kube-system   kube-controller-manager-c1-control-plane   1/1     Running   0          9h
kube-system   kube-proxy-ls768                           1/1     Running   0          9h
kube-system   kube-proxy-mk98k                           1/1     Running   0          9h
kube-system   kube-proxy-qbxwb                           1/1     Running   0          9h
kube-system   kube-scheduler-c1-control-plane            1/1     Running   0          9h
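
As a final smoke test, you can deploy something and check that it lands on a worker (nginx here is just an example image):

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide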