Using IAM Roles for Kubernetes service accounts in AWS EKS using Terraform
Nov 19 2022
One of the design principles of the security pillar in the AWS Well-Architected Framework is "Implement a strong identity foundation", that is:
"Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize identity management, and aim to eliminate reliance on long-term static credentials."
Well Architected Framework - Security Pillar - Design principles
When we start using Kubernetes in AWS EKS, we might take shortcuts during the learning phase and attach every policy to a role assigned directly to our nodes. The issue is that, for the sake of convenience, we grant far more privileges than we need. It can also become a habit: the way we learn something often becomes our default pattern. So let's break that habit and explore how to assign IAM roles to Kubernetes service accounts in AWS EKS.
Note: If you are interested in learning more about how to set up the directory structure for your Terraform project, you might find my guide, Meditations on Directory Structure for Terraform Projects, useful.
What are IAM Roles for Kubernetes service accounts?
IAM Roles for Kubernetes service accounts allow us to associate an IAM role with a Kubernetes service account. This feature is made available through the Amazon EKS Pod Identity Webhook. The role associated with a service account is then used to provide AWS credentials to any pod or resource that uses that service account, and these credentials are automatically rotated before they expire.
If you have used WebIdentity federation with AWS Cognito, you will be familiar with the concept of IAM roles for service accounts.
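Under the hood, the webhook mutates pods that use an annotated service account: it mounts a projected token and injects environment variables that recent AWS SDKs pick up through their default credential chain. Once we have a pod running (we will get there below), we can see them with something like this (the pod and namespace names are placeholders for this walkthrough):
kubectl exec -n my-namespace <POD_NAME> -- env | grep AWS
# Expected (values will differ):
# AWS_ROLE_ARN=arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-test-role
# AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token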
Ok, so what do we need?
- OIDC Issuer - Already set up if you are running in EKS
- OIDC provider
- The IAM role that we want to associate with the service account
- The Kubernetes service account that we want to associate with the IAM role
Set up OIDC provider
The first step is to set up the OIDC provider. OIDC, if you are unfamiliar, is an authentication protocol similar to SAML, but it is based on JSON Web Tokens (JWT). AWS also uses it to authenticate users in AWS Cognito.
There are two parts to using OIDC: an issuer and a provider. The issuer is the entity that issues the token, and the provider is the entity that validates it. In our case, the EKS cluster hosts the issuer endpoint, and we register the provider in AWS IAM. When we create the EKS cluster, we already get an issuer endpoint. If you want to use your own OIDC issuer, you can provide that to EKS, but in this example we'll stick to the issuer already in the EKS cluster.
To get the URL of the issuer, run the following command:
aws eks describe-cluster --name temp-cluster --query "cluster.identity.oidc.issuer" --output text
You should see something like the following:
https://oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Let's first check whether an OIDC provider already exists for our cluster. Run the following command:
aws iam list-open-id-connect-providers
If you get an empty list, then you need to create the OIDC provider. Let's define it in Terraform:
locals {
  oidc_issuer = "https://oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  # We'll use the cluster_namespace and the service_account_name later
  cluster_namespace    = "my-namespace"
  service_account_name = "my-service-account"
}

# Set up the OIDC provider
data "tls_certificate" "cluster" {
  # If we created the EKS cluster in terraform, we would do something like the following
  # url = aws_eks_cluster.main.identity[0].oidc[0].issuer
  url = local.oidc_issuer
}

resource "aws_iam_openid_connect_provider" "cluster" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.cluster.certificates[0].sha1_fingerprint]
  # If we created the EKS cluster in terraform, we would do something like the following
  # url = aws_eks_cluster.main.identity[0].oidc[0].issuer
  url = local.oidc_issuer
}
Now, if we run the command to list the OIDC providers, we should see the one we just created:
aws iam list-open-id-connect-providers
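The output should look something like this (the account ID and issuer ID will differ):
{
  "OpenIDConnectProviderList": [
    {
      "Arn": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    }
  ]
}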
With the OIDC provider created, we can now create the IAM role and the Kubernetes service account.
Creating the IAM role
We will create a new IAM role, and we will call it eks-test-role. We want the pod to read a parameter from the SSM Parameter Store, so we will also create a policy for that and attach it to the same role.
data "aws_region" "current" {}
data "aws_caller_identity" "current" {}
# SSM Parameter Store Service Role
resource "aws_iam_role" "pod_sa" {
name = "eks-test-role"
assume_role_policy = data.aws_iam_policy_document.pod_sa_assume_role_policy.json
}
data "aws_iam_policy_document" "pod_sa_assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRoleWithWebIdentity"]
principals {
type = "Federated"
identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${replace(local.oidc_issuer, "https://", "")}"]
}
condition {
test = "StringEquals"
variable = "${replace(local.oidc_issuer, "https://", "")}:aud"
values = ["sts.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "${replace(local.oidc_issuer, "https://", "")}:sub"
values = ["system:serviceaccount:${local.cluster_namespace}:${local.service_account_name}"]
}
}
}
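The sub condition is what pins this role to our specific namespace and service account. After a terraform apply, we can sanity-check the trust policy with the AWS CLI:
aws iam get-role --role-name eks-test-role --query "Role.AssumeRolePolicyDocument" --output json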
We will create an additional policy allowing the pod to read any parameter in the Parameter Store. We are going to call it cluster-ssm-parameter-policy:
resource "aws_iam_role_policy_attachment" "sa_role_ssm_parameter_policy" {
policy_arn = aws_iam_policy.ssm_parameter_policy.arn
role = aws_iam_role.pod_sa.name
}
resource "aws_iam_policy" "ssm_parameter_policy" {
name = "cluster-ssm-parameter-policy"
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Action" : [
"ssm:DescribeParameters",
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:GetParametersByPath"
],
"Effect" : "Allow",
"Resource" : [
"arn:aws:ssm:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:parameter/*"
]
},
]
})
}
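To confirm the attachment went through, we can list the policies attached to the role:
aws iam list-attached-role-policies --role-name eks-test-role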
We will create a Go program that will serve as a basic HTTP server. The server's sole function is to echo the parameter value. For Kubernetes to be able to pull the image from ECR, we need to create an ECR repository. Let's do that. We are going to call it test-ecr:
# ECR
resource "aws_ecr_repository" "main" {
  name = "test-ecr"
}

resource "aws_ssm_parameter" "ecr" {
  name  = "/ecr/ecr-url"
  type  = "String"
  value = aws_ecr_repository.main.repository_url
}

resource "aws_ecr_lifecycle_policy" "main" {
  repository = aws_ecr_repository.main.name
  policy     = <<EOF
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep last 10 images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["v"],
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
EOF
}
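The aws_ssm_parameter resource stores the repository URL so we can look it up later from scripts or other stacks. Once applied, a quick check that the repository exists:
aws ecr describe-repositories --repository-names test-ecr --query "repositories[0].repositoryUri" --output text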
The following is a quick and dirty Go program that reads the parameter from the Parameter Store and echoes it:
package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Create an SSM client; the default credential chain picks up the
        // web identity token injected by the Pod Identity Webhook
        sess, err := session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"),
        })
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        svc := ssm.New(sess)

        // Get the parameter
        param, err := svc.GetParameter(&ssm.GetParameterInput{
            Name: aws.String("/my/parameter"),
        })
        if err != nil {
            // Use Fprint, not Fprintf: the error text is not a format string
            fmt.Fprint(w, "Error: ")
            fmt.Fprint(w, err.Error())
            return
        }

        // Print the value
        fmt.Fprint(w, "Greeting: ")
        fmt.Fprint(w, *param.Parameter.Value)
    })

    fmt.Println("Starting server on port 8080")
    if err := http.ListenAndServe(":8080", nil); err != nil {
        log.Fatal(err)
    }
}
We will also need a Dockerfile to build the image:
FROM golang:1.19-alpine
WORKDIR /usr/src/app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY main.go .
RUN go build -v -o /usr/local/bin/app ./...
EXPOSE 8080
CMD [ "app" ]
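The Dockerfile expects a go.mod and go.sum to exist next to main.go. If you haven't created them yet, something like the following should do it (the module name here is just a placeholder; go mod tidy will pull in the aws-sdk-go dependency and generate go.sum):
go mod init example.com/sserver
go mod tidy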
We can now build the image and push it to ECR. Let's create a file called deploy.sh and add the following content:
#!/bin/bash
# Look up the repository we created earlier by name
export AWS_ECR_URL=$(aws ecr describe-repositories --repository-names test-ecr | jq -r '.repositories[0].repositoryUri')
docker build -f Dockerfile . -t ${AWS_ECR_URL}:latest
# Strip the repository path to get the registry host for docker login
aws ecr get-login-password | docker login --username AWS --password-stdin $(echo $AWS_ECR_URL | sed -r "s/\/.*//g")
docker push ${AWS_ECR_URL}:latest
We can now run the script:
chmod +x deploy.sh
./deploy.sh
And we should have our image ready to deploy in Kubernetes.
Create the SSM parameter
We are going to create an SSM parameter that the pod will read. We are going to call it /my/parameter:
resource "aws_ssm_parameter" "my_parameter" {
name = "/my/parameter"
type = "String"
value = "Hola Mundo"
}
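After applying, we can double-check the parameter from the CLI before involving the cluster at all:
aws ssm get-parameter --name /my/parameter --query "Parameter.Value" --output text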
Create the Kubernetes configuration
We will create the Kubernetes namespace.yml, deployment.yml, service.yml, and service-account.yml files.
The namespace.yml file will create the namespace that we are going to use:
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
The service-account.yml file will create the service account that we are going to use:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ROLE_NAME>"
Replace <AWS_ACCOUNT_ID> and <ROLE_NAME> with the values you created in the previous steps.
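In our example the role name is eks-test-role; if you don't have the full ARN handy, you can fetch it with:
aws iam get-role --role-name eks-test-role --query "Role.Arn" --output text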
The service.yml file will create the service that we are going to use:
apiVersion: v1
kind: Service
metadata:
  name: sserver
  namespace: my-namespace
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector:
    app: sserver
The deployment.yml file will create the deployment that we are going to use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sserver
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: sserver
  replicas: 1
  template:
    metadata:
      labels:
        app: sserver
    spec:
      serviceAccountName: my-service-account
      containers:
        - name: sserver
          image: <ECR_URL>:latest
          ports:
            - containerPort: 8080
Replace <ECR_URL> with the URL of the ECR repository we created in the previous steps.
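Since we stored the repository URL in the /ecr/ecr-url parameter earlier, one way to retrieve it is:
aws ssm get-parameter --name /ecr/ecr-url --query "Parameter.Value" --output text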
Ok, now let's deploy everything:
kubectl apply -f namespace.yml
kubectl apply -f service-account.yml
kubectl apply -f service.yml
kubectl apply -f deployment.yml
And we should be ready to test.
Test that everything is working
We can now test if our pod uses the service account we assigned.
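First, a quick sanity check that the deployment actually came up (the deployment name sserver comes from deployment.yml):
kubectl rollout status deployment/sserver -n my-namespace
kubectl get pods -n my-namespace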
We can port-forward to our local machine and hit the endpoint to see if we get the parameter's value. Assuming we only have one pod running in the namespace, we can do the following:
kubectl port-forward $(kubectl get pods -n my-namespace --no-headers | awk '{print $1}') 8080:8080 -n my-namespace
In a different terminal, we can curl the endpoint:
curl localhost:8080
And we should see:
Greeting: Hola Mundo
Final thoughts
Setting up OIDC might be the most confusing part if you haven't worked with it before. But remember: you need an issuer, which EKS already provides (you can link to an external issuer in the same way), and a provider registered in IAM and accessible to our cluster, as we did above.
After the OIDC part is completed, we only need to create our IAM roles as we usually do, then build the association by creating a service account and assigning the role to it.
I hope this helps you to understand how to use IAM roles for service accounts in Kubernetes.
References
- AWS Documentation - Kubernetes service accounts
- AWS Documentation - Well-Architected Framework
- Illustrated Guide to OAuth and OpenID Connect