Securing the Kubernetes API server. Part 3
Using ClusterRoles and ClusterRoleBindings
Roles and RoleBindings are namespaced resources, which means they reside in and apply to resources in a single namespace, but, as we saw, RoleBindings can refer to ServiceAccounts from other namespaces, too.
In addition to these namespaced resources, two cluster-level RBAC resources also exist: ClusterRole and ClusterRoleBinding. They’re not namespaced. Let’s see why we need them.
A regular Role only allows access to resources in the same namespace the Role is in. If you want to allow someone access to resources across different namespaces, you have to create a Role and a RoleBinding in every one of those namespaces. If you want to extend this to all namespaces (something a cluster administrator will typically need), you have to create the same pair of resources in each namespace, and remember to create them again in every namespace you add later.
As we know, certain resources aren’t namespaced at all (Nodes, PersistentVolumes, Namespaces, and so on). We have also mentioned that the API server exposes some URL paths that don’t represent resources (/healthz, for example). Regular Roles can’t grant access to those resources or non-resource URLs, but ClusterRoles can.
A ClusterRole is a cluster-level resource that allows access to non-namespaced resources or non-resource URLs, or that serves as a common role to be bound inside individual namespaces, saving you from having to redefine the same role in each of them.
Allowing Access to Cluster-Level Resources
As mentioned, a ClusterRole can be used to allow access to cluster-level resources. Let’s look at how to allow your pod to list PersistentVolumes in your cluster. First, you’ll create a ClusterRole called pv-reader:
$ kubectl create clusterrole pv-reader --verb=get,list --resource=persistentvolumes
The ClusterRole’s YAML is shown in the following listing.
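This is roughly what kubectl generates for the command above (a sketch; fields like creationTimestamp and resourceVersion are omitted):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-reader
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list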
Before we bind this ClusterRole to our pod’s ServiceAccount, we need to verify whether the pod can list PersistentVolumes. For that we run the following command inside the pod in the foo namespace:
/ # curl localhost:8001/api/v1/persistentvolumes
Note that the URL contains no namespace, because PersistentVolumes aren’t namespaced.
As expected, the default ServiceAccount can’t list PersistentVolumes. We need to bind the ClusterRole to the pod’s ServiceAccount to allow that. ClusterRoles can be bound to subjects with regular RoleBindings, so we will create a RoleBinding now:
$ kubectl create rolebinding pv-test --clusterrole=pv-reader --serviceaccount=foo:default -n foo
Can we list PersistentVolumes now? Let’s run the following command:
/ # curl localhost:8001/api/v1/persistentvolumes
User "system:serviceaccount:foo:default" cannot list persistentvolumes at the cluster scope. Hmm, that’s strange. Let’s examine the RoleBinding’s YAML in the following listing. Is it anything wrong with it?
The YAML looks perfectly fine. You’re referencing the correct ClusterRole and the correct ServiceAccount, as shown in the figure below, so what’s wrong?
Although you can create a RoleBinding and have it reference a ClusterRole when you want to enable access to namespaced resources, you can’t use the same approach for cluster-level (non-namespaced) resources. To grant access to cluster-level resources, you must always use a ClusterRoleBinding.
Luckily, creating a ClusterRoleBinding isn’t that different from creating a RoleBinding, but you’ll clean up and delete the RoleBinding first.
Then we create the ClusterRoleBinding. And after that, we check to see if we can list PersistentVolumes.
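The commands should look roughly like this (reusing the pv-test name for the new binding):
$ kubectl delete rolebinding pv-test -n foo
$ kubectl create clusterrolebinding pv-test --clusterrole=pv-reader --serviceaccount=foo:default
/ # curl localhost:8001/api/v1/persistentvolumes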
As we can see, we replaced rolebinding with clusterrolebinding in the create command and didn’t need to specify a namespace. The figure below shows what we have now. And we see that the pod can now list PersistentVolumes.
It turns out we must use both a ClusterRole and a ClusterRoleBinding when granting access to cluster-level resources. Remember that a RoleBinding can’t grant access to cluster-level resources, even if it references a ClusterRole.
We have now used ClusterRoles and ClusterRoleBindings to grant access to cluster-level resources and non-resource URLs. Now let’s look at how ClusterRoles can be used with namespaced RoleBindings to grant access to namespaced resources in the RoleBinding’s namespace.
Using ClusterRoles to Grant Access to Resources in Specific Namespaces
ClusterRoles don’t always need to be bound with cluster-level ClusterRoleBindings. They can also be bound with regular, namespaced RoleBindings. You’ve already started looking at predefined ClusterRoles, so let’s look at another one, called view, which is shown in the following listing:
$ kubectl get clusterrole view -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view
  ...
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  verbs:
  - get
  - list
  - watch
...
This ClusterRole has many rules; only the first one is shown in the listing. The rule allows getting, listing, and watching resources like ConfigMaps, Endpoints, PersistentVolumeClaims, and so on. These are namespaced resources, even though you’re looking at a ClusterRole (not a regular, namespaced Role). What exactly does this ClusterRole do?
It depends on whether it’s bound with a ClusterRoleBinding or a RoleBinding (it can be bound with either of them). If you create a ClusterRoleBinding and reference the ClusterRole in it, the subjects listed in the binding can view the specified resources across all namespaces.
If, on the other hand, you create a RoleBinding, the subjects listed in the binding can only view resources in the namespace of the RoleBinding. We will try both options now.
We will see how the two options affect our test pod’s ability to list pods. First, let’s see what happens before any bindings are in place, by running the following commands:
/ # curl localhost:8001/api/v1/pods
/ # curl localhost:8001/api/v1/namespaces/foo/pods
With the first command, we are trying to list pods across all namespaces. With the second, we are trying to list pods in the foo namespace.
The server doesn’t allow us to do either of them.
Now, let’s see what happens when we create a ClusterRoleBinding that binds the view ClusterRole to the pod’s ServiceAccount. Can the pod now list pods in the foo namespace?
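The commands should look roughly like this (the binding name view-test matches the delete command we’ll use later):
$ kubectl create clusterrolebinding view-test --clusterrole=view --serviceaccount=foo:default
/ # curl localhost:8001/api/v1/namespaces/foo/pods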
Yes, it can. Because we created a ClusterRoleBinding, it applies across all namespaces. The pod in namespace foo can list pods in the bar namespace as well, as seen below.
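Checking against the bar namespace follows the same pattern as the earlier requests:
/ # curl localhost:8001/api/v1/namespaces/bar/pods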
So, the pod is allowed to list pods in a different namespace. It can also retrieve pods across all namespaces by hitting the /api/v1/pods URL path:
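/ # curl localhost:8001/api/v1/pods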
As expected, the pod can get a list of all the pods in the cluster. To summarize, combining a ClusterRoleBinding with a ClusterRole referring to namespaced resources allows the pod to access namespaced resources in any namespace, as shown in figure below.
Now, let’s see what happens if we replace the ClusterRoleBinding with a RoleBinding.
First, delete the ClusterRoleBinding with the following command:
$ kubectl delete clusterrolebinding view-test
Next, create a RoleBinding instead. Because a RoleBinding is namespaced, we need to specify the namespace we want to create it in. Create it in the foo namespace:
$ kubectl create rolebinding view-test --clusterrole=view --serviceaccount=foo:default -n foo
We now have a RoleBinding in the foo namespace, binding the default ServiceAccount in that same namespace with the view ClusterRole. What can our pod access now?
Let’s try the following command:
/ # curl localhost:8001/api/v1/namespaces/foo/pods
It works. Now, let’s try the following commands:
/ # curl localhost:8001/api/v1/namespaces/bar/pods
/ # curl localhost:8001/api/v1/pods
As we can see, our pod can list pods in the foo namespace, but not in any other specific namespace or across all namespaces. This is visualized in the following figure.
Summarizing Role, ClusterRole, RoleBinding, and ClusterRoleBinding Combinations
We’ve covered many different combinations, and it may be hard to remember when to use each one. Let’s see if we can make sense of all of them by categorizing them per specific use case. Refer to the table below.
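This summary is reconstructed from the cases we walked through above:

For access to:                                            Role type:    Binding type:
Cluster-level resources (Nodes, PersistentVolumes, ...)   ClusterRole   ClusterRoleBinding
Non-resource URLs (/api, /healthz, ...)                   ClusterRole   ClusterRoleBinding
Namespaced resources in any namespace                     ClusterRole   ClusterRoleBinding
Namespaced resources in a specific namespace              ClusterRole   RoleBinding
  (reusing the same ClusterRole in multiple namespaces)
Namespaced resources in a specific namespace              Role          RoleBinding
  (the Role must be defined in each namespace)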
Hopefully, the relationships between the four RBAC resources are much clearer now.
Now let’s explore the pre-configured ClusterRoles and ClusterRoleBindings.
Understanding default ClusterRoles and ClusterRoleBindings
Kubernetes comes with a default set of ClusterRoles and ClusterRoleBindings, which are updated every time the API server starts. This ensures all the default roles and bindings are recreated if you mistakenly delete them or if a newer version of Kubernetes uses a different configuration of cluster roles and bindings.
We can see the default cluster roles and bindings by running the following command:
$ kubectl get clusterrolebindings
The most important roles are the view, edit, admin, and cluster-admin ClusterRoles. They’re meant to be bound to ServiceAccounts used by user-defined pods.
Allowing Read-Only Access to Resources with the view ClusterRole
We already used the default view ClusterRole in the previous example. It allows reading most resources in a namespace, except for Roles, RoleBindings, and Secrets. You’re probably wondering, why not Secrets? Because one of those Secrets might include an authentication token with greater privileges than those defined in the view ClusterRole and could allow the user to masquerade as a different user to gain additional privileges (privilege escalation).
Allowing Modification of Resources with the edit ClusterRole
Next is the edit ClusterRole, which allows us to modify resources in a namespace, but also allows both reading and modifying Secrets. It doesn’t, however, allow viewing or modifying Roles or RoleBindings. Again, this is to prevent privilege escalation.
Granting Full Control of a Namespace with the admin ClusterRole
Complete control of the resources in a namespace is granted by the admin ClusterRole. Subjects with this ClusterRole can read and modify any resource in the namespace, except ResourceQuotas and the Namespace resource itself. The main difference between the edit and the admin ClusterRoles is in the ability to view and modify Roles and RoleBindings in the namespace.
To prevent privilege escalation, the API server only allows users to create and update Roles if they already have all the permissions listed in that Role (and for the same scope).
Allowing Complete Control with the cluster-admin ClusterRole
Complete control of the Kubernetes cluster can be given by assigning the cluster-admin ClusterRole to a subject. As we have seen before, the admin ClusterRole doesn’t allow users to modify the namespace’s ResourceQuota objects or the Namespace resource itself. If we want to allow a user to do that, we need to create a RoleBinding that references the cluster-admin ClusterRole. This gives the user included in the RoleBinding complete control over all aspects of the namespace in which the RoleBinding is created.
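As a minimal sketch (the binding name ns-admin is hypothetical), granting the default ServiceAccount full control of the foo namespace would look like this:
$ kubectl create rolebinding ns-admin --clusterrole=cluster-admin --serviceaccount=foo:default -n foo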
Now we also know how to give users complete control of all the namespaces in the cluster: by referencing the cluster-admin ClusterRole in a ClusterRoleBinding instead of a RoleBinding.
Understanding the Other Default ClusterRoles
The list of default ClusterRoles includes a large number of other ClusterRoles, which start with the system: prefix. These are meant to be used by the various Kubernetes components. Among them, we will find roles such as system:kube-scheduler, which is obviously used by the Scheduler, system:node, which is used by the Kubelets, and so on.
Although the Controller Manager runs as a single pod, each controller running inside it can use a separate ClusterRole and ClusterRoleBinding (they’re prefixed with system:controller:).
Each of these system ClusterRoles has a matching ClusterRoleBinding, which binds it to the user the system component authenticates as. The system:kube-scheduler ClusterRoleBinding, for example, assigns the identically named ClusterRole to the system:kube-scheduler user, which is the username the scheduler authenticates as.
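You can inspect that binding yourself; the abbreviated output should look roughly like this (a sketch, with metadata omitted):
$ kubectl get clusterrolebinding system:kube-scheduler -o yaml
...
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-scheduler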
Granting authorization permissions wisely
By default, the default ServiceAccount in a namespace has no permissions other than those of an unauthenticated user (as you may remember from one of the previous examples, the system:discovery ClusterRole and associated binding allow anyone to make GET requests on a few non-resource URLs). Therefore, pods, by default, can’t even view cluster state. It’s up to us to grant them appropriate permissions to do that.
Obviously, giving all our ServiceAccounts the cluster-admin ClusterRole is a bad idea. As is always the case with security, it’s best to give everyone only the permissions they need to do their job and not a single permission more (principle of least privilege).
Creating Specific ServiceAccounts for each Pod
It’s a good idea to create a specific ServiceAccount for each pod (or a set of pod replicas) and then associate it with a tailor-made Role (or a ClusterRole) through a RoleBinding (not a ClusterRoleBinding, because that would give the pod access to resources in other namespaces, which is probably not what you want).
If one of our pods (the application running within it) only needs to read pods, while the other also needs to modify them, then create two different ServiceAccounts and make those pods use them by specifying the serviceAccountName property in the pod spec, as we learned. Never add all the necessary permissions required by both pods to the default ServiceAccount in the namespace.
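As a minimal sketch (the names pod-reader-sa, pod-reader, and pod-reader-binding are hypothetical), setting this up for the read-only pod could look like this:
$ kubectl create serviceaccount pod-reader-sa -n foo
$ kubectl create role pod-reader --verb=get,list,watch --resource=pods -n foo
$ kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=foo:pod-reader-sa -n foo
The pod then references the ServiceAccount in its spec:
apiVersion: v1
kind: Pod
metadata:
  name: pod-reader-pod
  namespace: foo
spec:
  serviceAccountName: pod-reader-sa   # the tailor-made ServiceAccount instead of default
  containers:
  - name: main
    image: alpine
    command: ["sleep", "99999"]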
Expecting Apps to be Compromised
Our aim is to reduce the possibility of an intruder getting hold of our cluster. Today’s complex apps contain many vulnerabilities. We should expect unwanted persons to eventually get their hands on the ServiceAccount’s authentication token, so we should always constrain the ServiceAccount to prevent them from doing any real damage.