Create kubeconfig with “normal user” and x509 client certificates

Kubernetes has two categories of users: “normal users” and service accounts.
Service accounts are managed by Kubernetes, while “normal users” are not; no objects are added to the cluster to represent them. In this post, we will create a “normal user” with an x509 client certificate and use it in our Kubernetes cluster.

Reference:
Kubernetes documentation

Create and sign the certificate

The following commands should be run on the control plane node, as we need the Kubernetes cluster’s CA certificate and key.

Generate our private key.

openssl genrsa -out oueta.key 2048

Create a Certificate Signing Request (CSR).
Kubernetes determines the username from the common name (CN) field in the ‘subject’ of the certificate, and the group memberships from the organization (O) fields. In my example, CN=oueta/O=Group1/O=Group2 means the user “oueta” is a member of two groups, Group1 and Group2. A user can belong to no group or to many.

openssl req -new -key oueta.key -out oueta.csr -subj "/CN=oueta/O=Group1/O=Group2"
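
If you want to double-check the subject before signing, you can optionally inspect the CSR (the exact output formatting depends on your OpenSSL version):

openssl req -in oueta.csr -noout -subject
subject=CN = oueta, O = Group1, O = Group2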

Sign the certificate with our Kubernetes cluster’s Certificate Authority (CA), valid for 365 days.

sudo openssl x509 -req -in oueta.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out oueta.crt -days 365
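
Optionally, verify the subject, issuer and validity period of the signed certificate:

openssl x509 -in oueta.crt -noout -subject -issuer -dates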

Create the kubeconfig file with our cluster and authentication information.

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://10.255.0.252:6443 --kubeconfig=oueta.kubeconfig
kubectl config set-credentials oueta --client-certificate=oueta.crt --client-key=oueta.key --embed-certs=true --kubeconfig=oueta.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=oueta --kubeconfig=oueta.kubeconfig
kubectl config use-context default --kubeconfig=oueta.kubeconfig
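
If you want to check what ended up in the file, kubectl can print it; the embedded certificate and key data is redacted by default:

kubectl config view --kubeconfig=oueta.kubeconfig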

The certificate is generated and signed and the kubeconfig is in place, so let’s test.

kubectl get pods --kubeconfig=oueta.kubeconfig
Error from server (Forbidden): pods is forbidden: User "oueta" cannot list resource "pods" in API group "" in the namespace "default"

As expected, because we don’t have any access yet. In Kubernetes we can define two types of permissions: Role and ClusterRole. With a Role object we define permissions within a single namespace, while with a ClusterRole we define cluster-scoped permissions. More information about Role-based access control (RBAC) can be found in the Kubernetes documentation.
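
A cluster admin can also quickly check what a user is allowed to do with kubectl auth can-i and impersonation, without switching kubeconfigs:

kubectl auth can-i list pods --as=oueta
no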

Create a Role and define the permissions.

cat << EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oueta-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
EOF
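
You can verify the Role with kubectl. Note that a Role is namespaced; without -n it is created in the default namespace:

kubectl describe role oueta-role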

Create a RoleBinding and bind our user to the Role we created earlier.

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oueta-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oueta-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: oueta
EOF
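
Repeating the earlier permission check, the answer should now be yes:

kubectl auth can-i list pods --as=oueta
yes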

Test

kubectl get pods --kubeconfig=oueta.kubeconfig
NAME                         READY   STATUS    RESTARTS   AGE
test-nginx-59ffd87f5-vvpdt   1/1     Running   0          111s
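
Since the Role only grants get, list and watch on pods and pods/log, everything else remains forbidden. For example, listing deployments with the same kubeconfig should fail with an error similar to the one we saw before:

kubectl get deployments --kubeconfig=oueta.kubeconfig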
