IAM Roles for Service Accounts and Pods (IRSA)
When running Kubernetes on AWS, there are two complementary access control regimes at work. AWS Identity and Access Management (IAM) lets us grant permissions to AWS resources, for example an S3 bucket. Inside the Kubernetes cluster, the complementary system that defines permissions on Kubernetes resources is Kubernetes Role-Based Access Control (RBAC), a key mechanism for keeping the cluster secure.
If we are running the Kubernetes cluster on AWS, IAM also allows us to assign permissions to the EC2 instances that serve as the Kubernetes nodes, restricting their access to resources. The problem here is that all pods running on a node share the same set of permissions assigned to the EC2 node instance, and this violates the principle of least privilege.
Therefore we need a way to prevent pods from inheriting the permissions assigned to the node they run on, and a way to assign pods their own permission schemas. The solution to this problem is IRSA.
What is IRSA?
IRSA, which stands for IAM Roles for Service Accounts, is an AWS feature that allows us to use IAM roles at the pod level. This is done by combining an OpenID Connect (OIDC) identity provider with Kubernetes service account annotations. In other words, IRSA lets us grant the containers inside pods controlled access to AWS resources, for example access to an S3 bucket. With this feature, we no longer need to extend the permissions of the EKS node IAM role so that pods on that node can call AWS APIs.
Drilling further down into the solution: OIDC federation allows us to assume IAM roles via the Security Token Service (STS). We authenticate with an OIDC provider and receive a JSON Web Token (JWT), which in turn can be used to assume an IAM role. Kubernetes, on the other hand, can issue so-called projected service account tokens, which happen to be valid OIDC JWTs for pods. The AWS setup equips each pod with a cryptographically signed token that STS can verify against the OIDC provider of your choice to establish the pod’s identity.
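To make this concrete, here is a minimal sketch of the credential exchange that the AWS SDK or CLI performs under the hood inside an IRSA-enabled pod. The AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables are the ones injected by the EKS webhook described below; the session name is an arbitrary placeholder.

# Minimal sketch of the STS call performed inside an IRSA-enabled pod.
# AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE are injected by the EKS webhook;
# the session name is an illustrative placeholder.
aws sts assume-role-with-web-identity \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name irsa-demo \
  --web-identity-token "file://$AWS_WEB_IDENTITY_TOKEN_FILE"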
Beyond this somewhat arcane explanation, the solution is readily available in EKS, where AWS manages the control plane and runs the webhook responsible for injecting the necessary environment variables and the projected volume into pods. IRSA can also be used in Kubernetes deployments that do not use EKS, but in that case some additional work is required, whereas in EKS most of the work is already done for us.
In simple terms, we create an IAM role and associate with it a policy that grants access to AWS services, for example access to an S3 bucket. We then associate the role with a Kubernetes ServiceAccount, and this service account provides the resource permissions to the pods that use it.
So, in an EKS cluster, the high-level steps to benefit from the IRSA feature are the following (a scripted alternative using eksctl is sketched right after the list):
- Create an OpenID Connect identity provider using the OIDC provider URL that the cluster already has, with sts.amazonaws.com as the audience.
- Create the IAM policy to access the AWS service.
- Create the IAM access role for the trusted identity “Web Identity”, using the OIDC provider created in point 1.
- Attach the policy created in point 2 and create a “Trusted Relationship” to specify that only the given ServiceAccount can use this role.
- Finally, configure pods that will access the AWS resource by using the service account specified in point 4.
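As an aside, the same flow can also be scripted from the command line. Below is a rough sketch using eksctl; the cluster name my-cluster and the policy ARN are illustrative assumptions, and the policy itself still has to exist beforehand.

# Rough eksctl sketch of the same flow; "my-cluster" and the policy ARN are illustrative.
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace default \
  --name app \
  --attach-policy-arn arn:aws:iam::111122223333:policy/w-eks-s3-TestAccess \
  --approve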
To show this with an example, we will use an S3 bucket named w-eks-s3-test that has a text file inside.
We can retrieve the OIDC URL either from the AWS console or with an AWS CLI command. For simplicity we will use the console.
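For reference, the same URL can be read with the AWS CLI; the cluster name my-cluster below is an assumption.

# Read the cluster's OIDC issuer URL with the AWS CLI ("my-cluster" is an illustrative name).
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text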
With the OIDC URL we can create the new identity provider using OpenID Connect, as shown below.
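The same step can also be performed with the AWS CLI. In the sketch below, the issuer URL and the certificate thumbprint are placeholders that must be replaced with the real values of your cluster.

# CLI equivalent of the console step; the issuer URL and thumbprint are placeholders.
aws iam create-open-id-connect-provider \
  --url https://oidc.eks.us-east-2.amazonaws.com/id/EXAMPLE1234567890 \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 0123456789abcdef0123456789abcdef01234567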
With the OpenID identity provider created, we can now focus on the IAM policy and role.
This is the w-eks-s3-TestAccess IAM policy we need to create:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::w-eks-s3-test/*"
            ]
        }
    ]
}
Notice that the resource points at the S3 bucket we want the pod to have access to.
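If we prefer the CLI to the console, the policy can be created as sketched below, assuming the JSON above has been saved to a file named s3-read-policy.json.

# Create the policy from the CLI; the file name is an assumption.
aws iam create-policy \
  --policy-name w-eks-s3-TestAccess \
  --policy-document file://s3-read-policy.json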
Now we create a new IAM role named w-eks-S3AccessTest-Role and attach the policy we just created to it.
Before we finish the role creation, we add a trust relationship for a ServiceAccount, named “app”, that we will create in the next step. This service account will live in the default namespace and will be the identity allowed to assume the role we are creating.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::971393087701:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/862CC1240A3EA12FCF57C110808C9389"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-east-2.amazonaws.com/id/747541915D622A86594CCBA1716E6D5C:sub": "system:serviceaccount:default:app"
                }
            }
        }
    ]
}
This means that any pod that uses the ServiceAccount “app” in the default namespace can assume the role w-eks-S3AccessTest-Role.
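For completeness, here is how the same role could be created and the policy attached from the CLI; the trust policy is assumed to be saved as trust.json, and the policy ARN is an assumption based on the account ID used in this example.

# Create the role with the trust policy, then attach the S3 policy.
# The file name and the policy ARN are assumptions based on the example values.
aws iam create-role \
  --role-name w-eks-S3AccessTest-Role \
  --assume-role-policy-document file://trust.json
aws iam attach-role-policy \
  --role-name w-eks-S3AccessTest-Role \
  --policy-arn arn:aws:iam::971393087701:policy/w-eks-s3-TestAccess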
The next step is creating the “app” ServiceAccount, which we do with the kubectl apply command and the YAML definition below. Notice that the ServiceAccount has an annotation that carries the role ARN.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::971393087701:role/w-eks-S3AccessTest-Role
We first show that the service account does not yet exist, then create it and verify that it has been created.
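A possible sequence of commands for this check, assuming the manifest above has been saved as sa.yaml:

# Verify, create, then verify again (the manifest file name is an assumption).
kubectl get serviceaccount app -n default        # should return "NotFound" at this point
kubectl apply -f sa.yaml
kubectl describe serviceaccount app -n default   # the eks.amazonaws.com/role-arn annotation should appear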
Now we will test the configuration by creating a pod that uses the configured “app” ServiceAccount.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: app
      initContainers:
      - name: aws-cli
        image: amazon/aws-cli
        command: ['aws', 's3', 'cp', 's3://w-eks-s3-test/w-eks-s3-text', '-']
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
With this Deployment definition we can now create the pod, which has an init container and a regular container.
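One way to apply the manifest and inspect the init container output, assuming the file has been saved as nginx-deployment.yaml:

# Apply the manifest and read the init container's logs (the file name is an assumption).
kubectl apply -f nginx-deployment.yaml
kubectl logs deployment/nginx-deployment -c aws-cli -n default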
As we can see, the init container ran and wrote the content of the text file stored in the S3 bucket to standard output.
Now let's create another pod and check whether we can manually access the S3 bucket from inside its container.
First, we modify the S3 access policy as follows, so that we can also list the bucket content.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::w-eks-s3-test",
                "arn:aws:s3:::w-eks-s3-test/*"
            ]
        }
    ]
}
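If the policy is updated from the CLI rather than the console, a new default version can be pushed as sketched below; the file name and the policy ARN are assumptions based on the example values used so far.

# Publish the updated document as the new default policy version.
# The file name and policy ARN are assumptions based on the example values.
aws iam create-policy-version \
  --policy-arn arn:aws:iam::971393087701:policy/w-eks-s3-TestAccess \
  --policy-document file://s3-read-list-policy.json \
  --set-as-default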
Then, we create and apply the following deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: w-eks-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: aws-cli
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-cli
    spec:
      serviceAccountName: app
      containers:
      - name: aws-cli
        image: amazon/aws-cli
        command: ["tail", "-f", "/dev/null"]
This deployment defines a pod with a single container based on an AWS image that has the AWS CLI installed; the command tail -f /dev/null keeps the container running so that we can get a shell inside it.
Finally, after checking that the pod is running, we get inside the container with the command:
kubectl exec -it w-eks-deployment-5d8d5c5bf7-vxh9s -- /bin/bash
Once inside the container, we check S3 access by running ls and cp commands.
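For example, with the bucket and object names used in this walkthrough:

# List the bucket content, then copy the text file to standard output.
aws s3 ls s3://w-eks-s3-test
aws s3 cp s3://w-eks-s3-text-test/w-eks-s3-text - 2>/dev/null || aws s3 cp s3://w-eks-s3-test/w-eks-s3-text -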
As we have seen, a hypothetical application running inside the pod's container is able to access the S3 bucket because the pod was configured to use the ServiceAccount “app”, which in turn is allowed to assume the role w-eks-S3AccessTest-Role for that purpose.