At SWITCH we are looking to provide a Container Platform as a Service solution. We are evaluating Kubernetes and OpenShift to gauge what is possible and how such a service could be structured. It would be really convenient to reuse the existing OpenStack username and password to authenticate to Kubernetes. We tested this solution and it works great.
How does it work? Let's start from the client side.
Kubernetes users access the cluster with the kubectl client. The good news is that since version v1.8.0 of the client, kubectl is able to read the usual OpenStack environment variables, contact Keystone to request a token, and forward requests to the Kubernetes cluster using that token. This was merged on the 7th of August 2017. I could not find any documentation on how to correctly configure the client to use this functionality, so I eventually wrote some documentation notes HERE.
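The client-side setup can be sketched as follows; the credential and context names ("openstackuser", "mycluster") are placeholders, and the exact commands may differ depending on your kubectl version:

```shell
# Export the usual OpenStack variables (OS_AUTH_URL, OS_USERNAME,
# OS_PASSWORD, OS_PROJECT_NAME, ...) from your openrc file.
source ~/openstackcredentials

# Tell kubectl to use the openstack auth provider for this user entry.
kubectl config set-credentials openstackuser --auth-provider=openstack

# Bind that user to your cluster in a context and activate it.
kubectl config set-context --cluster=mycluster --user=openstackuser openstackuser@mycluster
kubectl config use-context openstackuser@mycluster
```

With this in place, kubectl obtains a token from Keystone transparently on each request.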
How does it work on the Kubernetes master side?
The Kubernetes API server receives a request carrying a Keystone token; in Kubernetes terminology this is a Bearer Token. To verify the Keystone token, the API server uses a webhook. What does that mean? It means that the Kubernetes API server contacts yet another component that is able to authenticate the Keystone token.
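The webhook is wired up with a small kubeconfig-style file passed to the API server via its --authentication-token-webhook-config-file flag. A minimal sketch, assuming the webhook service listens on localhost:8443 (the address and path here are assumptions):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: keystone-auth
  cluster:
    # Where the token-authentication webhook listens (assumed address).
    server: https://localhost:8443/webhook
users:
- name: kube-apiserver
contexts:
- name: webhook
  context:
    cluster: keystone-auth
    user: kube-apiserver
current-context: webhook
```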
The k8s-keystone-auth component developed by Dims does exactly this. I tested his code and built a Docker container to integrate k8s-keystone-auth into my kube-system namespace. When you run the k8s-keystone-auth container, you pass the URL of your Keystone server as an argument.
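Running the webhook might look like the following sketch; the image name and certificate paths are assumptions, the essential part is pointing --keystone-url at your Keystone endpoint:

```shell
# Hypothetical image name; build your own from the k8s-keystone-auth sources.
docker run -d --name k8s-keystone-auth \
  -v /etc/kubernetes/pki:/etc/pki:ro \
  myregistry/k8s-keystone-auth:latest \
  ./k8s-keystone-auth \
    --tls-cert-file /etc/pki/apiserver.crt \
    --tls-private-key-file /etc/pki/apiserver.key \
    --keystone-url https://keystone.example.org:5000/v3
```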
If you are deploying your cluster with k8s-on-openstack, you will find this integration summarized in a single commit.
Now that everything is set up, I can try:
source ~/openstackcredentials
kubectl get pods
I will be correctly authenticated by Keystone, which will verify my identity, but I will not be authorized to do anything:
Error from server (Forbidden): pods is forbidden: User "email@example.com" cannot list pods in the namespace "default"
This is because we still need to set up some authorization for this Keystone user. You can find detailed documentation about RBAC, but here is a simple example:
kubectl create rolebinding saverio-view --clusterrole view --user firstname.lastname@example.org --namespace default
Now my user is able to view everything in the default namespace, and kubectl get pods will succeed.
Of course, setting up specific RBAC rules for every user is not optimal. You can at least use Keystone projects, which are mapped to kind: Group in Kubernetes. Here is an example:
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: Group
  name: <openstack_project_uuid>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
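The RoleBinding above references a Role called pod-reader that is not shown in the post; a minimal sketch of what it could look like (the exact rules are an assumption):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]        # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```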
You can then achieve a “soft multitenancy” where every user belonging to a given Keystone project is limited to a specific namespace. I call it soft multitenancy because, depending on your networking solution, pods from all namespaces could end up on the same network with a completely open policy.
I would like to thank Dims and the other people on the Slack channel #sig-openstack for the great help while developing this Kubernetes deployment.