Understanding Kubernetes ConfigMap with examples

ConfigMaps are objects that let us store non-confidential data as key-value pairs, separating configuration from the application.
They should not be used to store sensitive data; for that, use Secrets. They are also not designed to hold large amounts of data: a ConfigMap cannot exceed 1 MiB.
In this article, we will discuss how to create ConfigMaps and how to consume them in a Pod.
There are multiple methods by which a ConfigMap can be injected into a Pod; we will cover the first three in this post. In all three cases, the ConfigMap and the Pod must be in the same namespace.
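
As a quick illustration, besides writing a manifest, a ConfigMap can also be created imperatively (a minimal sketch; the name and keys are illustrative):

kubectl create configmap example-config --from-literal=key1=value1 --from-literal=key2=value2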

Methods:
Command and args
Environment variables
Mounted in the container via a read-only volume
Reading a ConfigMap from your own code via the Kubernetes API (not covered)

Resources
Kubernetes documentation

Command and args

We will create a ConfigMap named command-and-args-configmap and a Pod named command-and-args-pod in the default namespace.

Create the ConfigMap; it defines two keys, “message” and “sleep”.

apiVersion: v1
data:
  message: "Hello from ConfigMap"
  sleep: "10"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: command-and-args-configmap
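
As a side note, this manifest can be generated imperatively; the empty creationTimestamp in the output comes from the client-side dry run:

kubectl create configmap command-and-args-configmap --from-literal=message="Hello from ConfigMap" --from-literal=sleep="10" --dry-run=client -o yaml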

Create the Pod; the container will use the two keys “message” and “sleep”, loaded from the ConfigMap, as args.
The “message” key will be used to print “Hello from ConfigMap” in a loop, while the “sleep” value will be used to sleep for 10 seconds between messages. Note that the kubelet expands the $(MESSAGE) and $(SLEEP) references in args before the shell runs, which is also why the single-quoted 'Sleeping for $(SLEEP)' still prints the value.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: command-and-args-pod
  name: command-and-args-pod
spec:
  containers:
  - image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date) $(MESSAGE); echo 'Sleeping for $(SLEEP)'; sleep $(SLEEP); done"]
    env:
    - name: MESSAGE
      valueFrom:
        configMapKeyRef:
          name: command-and-args-configmap
          key: message
    - name: SLEEP
      valueFrom:
        configMapKeyRef:
          name: command-and-args-configmap
          key: sleep
    name: command-and-args-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the ConfigMap and Pod

kubectl create -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/command-and-args-configmap.yaml
kubectl create -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/command-and-args-pod.yaml

Check the results

kubectl logs command-and-args-pod
Sun Oct 3 15:59:52 UTC 2021 Hello from ConfigMap
Sleeping for 10
Sun Oct 3 16:00:02 UTC 2021 Hello from ConfigMap
Sleeping for 10
Sun Oct 3 16:00:12 UTC 2021 Hello from ConfigMap
Sleeping for 10
Sun Oct 3 16:00:22 UTC 2021 Hello from ConfigMap
Sleeping for 10
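
Keep in mind that environment variables are resolved only when the container starts, so editing the ConfigMap afterwards will not change a running Pod; to pick up new values you would recreate the Pod, for example (reusing the manifest from above):

kubectl delete pod command-and-args-pod
kubectl create -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/command-and-args-pod.yaml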

Environment variables

Create the ConfigMap with example connection information.

apiVersion: v1
data:
  db_host: "192.168.0.100"
  db_port: "6432"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: environment-variables-variable-configmap

Create the Pod; it is simpler but similar to the previous one, and we will run a different test.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: environment-variables-variable-pod
  name: environment-variables-variable-pod
spec:
  containers:
  - image: busybox
    command: ["sleep", "3600"]
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: environment-variables-variable-configmap
          key: db_host
    - name: DB_PORT
      valueFrom:
        configMapKeyRef:
          name: environment-variables-variable-configmap
          key: db_port
    name: environment-variables-variable-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the ConfigMap and Pod

kubectl apply -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/environment-variables-variable-configmap.yaml
kubectl apply -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/environment-variables-variable-pod.yaml

Test: let’s check the environment variables of our container.

kubectl exec environment-variables-variable-pod -- printenv | grep DB_
DB_HOST=192.168.0.100
DB_PORT=6432
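
As an alternative to referencing each key individually, envFrom can load every key of a ConfigMap as environment variables in one go. A minimal sketch of the relevant container fields only, not a full manifest; note that the keys are used as-is, so the variables would be named db_host and db_port rather than DB_HOST and DB_PORT:

    envFrom:
    - configMapRef:
        name: environment-variables-variable-configmap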

Mount a read-only volume in the container

Create the ConfigMap; we have defined two keys, “description” and “certificate”.

apiVersion: v1
data:
  certificate: |
    "This is my
    test certificate"
  description: |
    "This is my 
    test description"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: volume-mounts-configmap

Create the Pod; the “/config” folder will be mounted and two files, “description” and “certificate”, will be created, each containing the value we defined.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: volume-mounts-pod
  name: volume-mounts-pod
spec:
  containers:
  - image: busybox
    command: ["sleep", "3600"]
    name: volume-mounts-pod
    volumeMounts:
      - name: config
        mountPath: "/config"
        readOnly: true
    resources: {}
  volumes:
    - name: config
      configMap:
        name: volume-mounts-configmap
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the ConfigMap and Pod

kubectl apply -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/volume-mounts-configmap.yaml
kubectl apply -f https://gitlab.com/oueta.com/oueta/-/raw/main/kubernetes/configmap/volume-mounts-pod.yaml

Test: let’s check the mounted directory and the two files created.

kubectl exec volume-mounts-pod -- ls -l /config
total 0
lrwxrwxrwx    1 root     root            18 Oct  3 18:19 certificate -> ..data/certificate
lrwxrwxrwx    1 root     root            18 Oct  3 18:19 description -> ..data/description
kubectl exec volume-mounts-pod -- cat /config/certificate
"This is my
test certificate"
kubectl exec volume-mounts-pod -- cat /config/description
"This is my 
test description"

Changes to a ConfigMap mounted as a volume are detected by the kubelet and the projected files are updated automatically (environment variables, by contrast, are not refreshed). The detection behavior can be tuned via the configMapAndSecretChangeDetectionStrategy field of the kubelet configuration.
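
To see this in action, you can patch the ConfigMap and re-read the mounted file; propagation is not instant and, with the default kubelet settings, can take up to about a minute:

kubectl patch configmap volume-mounts-configmap --type merge -p '{"data":{"description":"updated description"}}'
kubectl exec volume-mounts-pod -- cat /config/description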

Backup and restore etcd of a Kubernetes cluster

etcd is a distributed key-value database. It is designed to store small amounts of data that fit in memory.
It supports high availability and is widely used in Kubernetes clusters.
In this post we will describe how to back up and restore the data stored in the etcd database in the context of a Kubernetes cluster.

Resources
https://etcd.io/
https://github.com/etcd-io/etcd/releases/tag/v3.5.0
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster

Note: this procedure is for demonstration purposes on a cluster created with kubeadm (one Control Plane and two Workers); depending on your setup, tailor it to your specific case!

Download and install the etcd release, which contains the server and the utilities, on your Control Plane. It makes sense to use the same version your cluster is running; in my case that is v3.5.0.

wget https://storage.googleapis.com/etcd/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
tar -zxvf etcd-v3.5.0-linux-amd64.tar.gz
sudo cp etcd-v3.5.0-linux-amd64/etcdctl /usr/bin/
sudo cp etcd-v3.5.0-linux-amd64/etcdutl /usr/bin/
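
Verify the installed binaries (both utilities ship in the same release archive):

etcdctl version
etcdutl version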

Test etcdctl by listing the namespaces. Adjust the endpoint to your setup; in my case, as I have only one Control Plane, etcd is listening only on 127.0.0.1.

ETCDCTL_API=3 sudo etcdctl get /registry/namespaces --prefix --keys-only --endpoints https://127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
/registry/namespaces/default

/registry/namespaces/home-server

/registry/namespaces/kube-node-lease

/registry/namespaces/kube-public

/registry/namespaces/kube-system

/registry/namespaces/kubernetes-dashboard
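
To avoid repeating the connection flags on every call, etcdctl also honors environment variables; a convenience sketch (note that variables exported in your shell are not passed through sudo by default, so sudo -E may be needed):

export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt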

Create a test ConfigMap before the backup

kubectl create configmap backup-etcd-test

Take the backup; the data will be saved in a file named “my-backup”

ETCDCTL_API=3 sudo etcdctl snapshot save my-backup --endpoints https://127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt
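
You can verify the integrity of the snapshot before relying on it:

sudo etcdutl snapshot status my-backup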

Delete the test ConfigMap

kubectl delete configmap backup-etcd-test

Restore the snapshot into a new data directory named my-backup.etcd

sudo etcdutl snapshot restore my-backup --data-dir my-backup.etcd
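
On a single-member cluster the defaults are fine; for a multi-member cluster the restore would typically also need the cluster topology flags. A hedged sketch, where the member name and IPs are illustrative:

sudo etcdutl snapshot restore my-backup --data-dir my-backup.etcd --name cp-1 --initial-cluster cp-1=https://10.0.0.1:2380 --initial-advertise-peer-urls https://10.0.0.1:2380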

Stop the kube-apiserver and etcd static Pods on your Control Plane by moving their manifests away. I deployed my cluster via kubeadm, so kube-apiserver runs as a static Pod. (If in your case it runs as a service, use systemctl to stop it.)

sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp
sudo mv /etc/kubernetes/manifests/etcd.yaml /tmp

Move the my-backup.etcd folder to /var/lib/etcd/

sudo rm -rf /var/lib/etcd/
sudo mv my-backup.etcd /var/lib/etcd/

Start etcd and kube-apiserver again by moving the manifests back.

sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
sudo mv /tmp/etcd.yaml /etc/kubernetes/manifests/

The recommendation would be to also restart other components like kube-scheduler, kube-controller-manager, and the kubelet, to ensure that they don't rely on stale data.
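
On a kubeadm cluster, kube-scheduler and kube-controller-manager are static Pods as well, so one way to bounce them is the same manifest trick used above, followed by a kubelet restart (a hedged sketch):

sudo mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp
sudo mv /tmp/kube-scheduler.yaml /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
sudo systemctl restart kubelet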

Test: our ConfigMap is back!

kubectl get configmap
NAME               DATA   AGE
backup-etcd-test   0      45m
kube-root-ca.crt   1      70d