
EKS Workload Identity

This implementation guide assumes you have an existing EKS cluster and want to install RunWhen Local with workload identity (no long-lived access keys). Choose either Pod Identity or IRSA; both paths end with the same capability.


1. Prerequisites

  • Existing EKS cluster (1.24+ for Pod Identity; any version with OIDC for IRSA).
  • kubectl and helm configured for that cluster.
  • Permissions to create IAM roles/policies, EKS Access Entries, and Kubernetes resources in the cluster.

2. Namespace and Kubernetes Base

Create the namespace and (optionally) the service account. The Helm chart can create the SA; if you create it yourself, keep the name/namespace below so the rest of the guide matches.

kubectl create namespace runwhen-local

If you prefer to create the service account manually (e.g. to add IRSA annotation yourself):

kubectl create serviceaccount runwhen-local -n runwhen-local

3. IAM Role for Workload Identity

Create one IAM role that RunWhen Local and the runner will use. You’ll attach permissions in the next step and link this role to the runwhen-local service account in step 4.

3a. Trust policy

Pod Identity (EKS 1.24+, recommended):

  • Trust policy: allow pods.eks.amazonaws.com to assume the role.

IRSA (OIDC):

  • Trust policy: allow your cluster’s OIDC provider to assume the role for the specific subject system:serviceaccount:runwhen-local:runwhen-local.

Create the role in the AWS console (IAM → Roles → Create role) or via CLI. Example trust policy for Pod Identity:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}

For IRSA, choose “Web identity” as the trusted entity type, select your cluster’s EKS OIDC provider, and set the audience to sts.amazonaws.com; then restrict the trust policy to the service account (e.g. a condition on system:serviceaccount:runwhen-local:runwhen-local). Record the role ARN (e.g. arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role).
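For reference, an IRSA trust policy typically looks like the following. ACCOUNT_ID, REGION, and OIDC_ID are placeholders for your account and your cluster’s OIDC provider ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:sub": "system:serviceaccount:runwhen-local:runwhen-local",
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```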


4. Attach IAM Permissions to the Role

Add an IAM policy to the role so RunWhen can:

  • Call EKS and other AWS APIs for discovery (CloudQuery, eks:ListClusters / DescribeCluster, etc.).
  • Use STS for identity (needed for workload identity).

Minimum for EKS cluster discovery + common discovery resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RunWhenDiscovery",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:Describe*",
        "eks:List*",
        "ec2:Describe*",
        "s3:List*",
        "s3:GetBucket*",
        "iam:GetUser",
        "iam:GetRole",
        "iam:ListUsers",
        "iam:ListRoles",
        "iam:ListAccountAliases",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}

Add more actions (e.g. rds:Describe*, cloudwatch:*) if you need discovery for those resources. Attach this policy to the role you created in step 3.
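If you prefer the CLI over the console, the policy can be attached inline roughly as follows. The role name runwhen-local-role and policy name runwhen-discovery are example names from this guide; a trimmed-down policy document is shown for brevity, so substitute the full policy above:

```shell
# Save the discovery policy to a local file (trimmed example shown;
# use the full policy document from this step).
cat > /tmp/runwhen-discovery-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RunWhenDiscovery",
      "Effect": "Allow",
      "Action": ["eks:Describe*", "eks:List*", "sts:GetCallerIdentity"],
      "Resource": "*"
    }
  ]
}
EOF

# Attach it as an inline policy to the role from step 3
# (requires iam:PutRolePolicy permission in your account).
aws iam put-role-policy \
  --role-name runwhen-local-role \
  --policy-name runwhen-discovery \
  --policy-document file:///tmp/runwhen-discovery-policy.json
```

An inline policy keeps the role self-contained; use `aws iam create-policy` plus `aws iam attach-role-policy` instead if you want a reusable managed policy.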


5. Link the IAM Role to the Service Account

Link the role so that pods using the runwhen-local service account receive the IAM role credentials. Use Option A (Pod Identity) or Option B (IRSA).

Option A: Pod Identity (EKS 1.24+)

Create a Pod Identity association so the role is used for the runwhen-local SA in runwhen-local:

CLUSTER_NAME=your-eks-cluster-name
ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role
NAMESPACE=runwhen-local
SERVICE_ACCOUNT=runwhen-local

aws eks create-pod-identity-association \
  --cluster-name "$CLUSTER_NAME" \
  --namespace "$NAMESPACE" \
  --service-account "$SERVICE_ACCOUNT" \
  --role-arn "$ROLE_ARN"

Ensure the EKS Pod Identity Agent addon is enabled on the cluster. No annotation on the service account is required.

Option B: IRSA

  1. Ensure the cluster has an OIDC provider (EKS console → cluster → Configuration → OIDC).
  2. Annotate the service account with the role ARN (create the SA if the chart doesn’t manage it):
kubectl annotate serviceaccount runwhen-local -n runwhen-local \
  eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role

Use the same ROLE_ARN and ensure the role’s trust policy allows system:serviceaccount:runwhen-local:runwhen-local for this cluster’s OIDC provider.


6. Grant the Role Cluster-Reader Access to EKS

This lets the same role call the EKS cluster’s Kubernetes API (for Kubernetes discovery). Create one EKS Access Entry for the role and associate the managed view policy:

CLUSTER_NAME=your-eks-cluster-name
ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role

aws eks create-access-entry \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN" \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN" \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster

No extra Kubernetes RBAC is required for this; EKS Access Control grants the role read-only access to the cluster.


7. workspaceInfo Configuration

RunWhen Local discovers the cluster via AWS (EKS APIs) and then uses the same credentials (and EKS access) to read the cluster. Put this in a workspaceInfo.yaml that you’ll load as a ConfigMap (step 8).

  • Use workload identity only (no access keys, no assume role) for the cluster you’re installed in.
  • Set cloudConfig.kubernetes: null and use cloudConfig.aws with eksClusters so EKS discovery is used.

Example for auto-discovering EKS clusters in one region (including the one you’re on):

workspaceName: "my-runwhen-workspace"
workspaceOwnerEmail: "you@example.com"
defaultLocation: location-01
defaultLOD: detailed
cloudConfig:
  kubernetes: null
  aws:
    region: us-east-1
    useWorkloadIdentity: true
    eksClusters:
      autoDiscover: true
      discoveryConfig:
        regions:
          - us-east-1
codeCollections:
  - repoURL: "https://github.com/runwhen-contrib/aws-c7n-codecollection"
    branch: "main"
  - repoURL: "https://github.com/runwhen-contrib/rw-cli-codecollection"
    branch: "main"

Adjust region and regions to match your cluster. For a single explicit cluster instead of auto-discovery, set autoDiscover: false and list the cluster under eksClusters.clusters with name, server, and region.
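A single-cluster configuration might look like the following sketch; the cluster name and API server endpoint are placeholders, and the field names follow the description above:

```yaml
cloudConfig:
  kubernetes: null
  aws:
    region: us-east-1
    useWorkloadIdentity: true
    eksClusters:
      autoDiscover: false
      clusters:
        - name: my-eks-cluster
          server: https://EXAMPLE0123456789.gr7.us-east-1.eks.amazonaws.com
          region: us-east-1
```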


8. Load workspaceInfo into the Cluster

Create a ConfigMap from the file you created in step 7:

kubectl create configmap workspaceinfo -n runwhen-local --from-file=workspaceInfo.yaml

If the ConfigMap already exists, replace it:

kubectl create configmap workspaceinfo -n runwhen-local --from-file=workspaceInfo.yaml --dry-run=client -o yaml | kubectl apply -f -

9. Install RunWhen Local with Helm

Use the RunWhen Local Helm chart and point it at the existing namespace, service account, and workspaceInfo ConfigMap.

  • Service account: use the same runwhen-local SA in runwhen-local namespace so it gets the IAM role (Pod Identity or IRSA).
  • Runner: set runner deployment and runner pods to use the same runwhen-local service account so they share the role and EKS access.

Example values (customize image, upload, and code collections as needed):

# values.yaml (excerpt)
runwhenLocal:
  enabled: true
  serviceAccount:
    create: true
    name: runwhen-local
    # For IRSA only: uncomment and set your role ARN
    # annotations:
    #   eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role
  serviceAccountRoles:
    clusterRoleView:
      enabled: true
  workspaceInfo:
    useExistingConfigMap: true
    existingConfigMapName: workspaceinfo
    configMap:
      create: false
  discoveryKubeconfig:
    inClusterAuth:
      enabled: true
runner:
  enabled: true
  runEnvironment:
    deployment:
      serviceAccount: runwhen-local
    pod:
      serviceAccount: runwhen-local
  # ... rest of runner config (codeCollections, controlAddr, etc.)

Install (from the directory that contains the chart or use the chart reference you normally use):

helm upgrade --install runwhen-local ./path-to-chart \
  -n runwhen-local \
  -f values.yaml

If you use IRSA and the chart creates the SA, you must ensure the SA gets the eks.amazonaws.com/role-arn annotation (e.g. via runwhenLocal.serviceAccount.annotations in values or by patching after install).


10. Verify

  1. AWS identity from a pod:

    kubectl run -it --rm debug --restart=Never -n runwhen-local \
      --overrides='{"spec":{"serviceAccountName":"runwhen-local"}}' \
      --image=public.ecr.aws/aws-cli/aws-cli:latest -- \
      sts get-caller-identity

    The identity should be the role you created (e.g. runwhen-local-role).

  2. EKS cluster access:

    Same pod should be able to list nodes (cluster-reader):

    kubectl run -it --rm debug --restart=Never -n runwhen-local \
      --overrides='{"spec":{"serviceAccountName":"runwhen-local"}}' \
      --image=bitnami/kubectl:latest -- \
      get nodes
  3. RunWhen Local: Check discovery and runner pods; logs should show no AWS auth errors and successful EKS/Kubernetes discovery.


Summary Checklist

| Step | What you did |
|------|--------------|
| 1 | Confirmed existing EKS cluster and tools |
| 2 | Created namespace runwhen-local (and optionally SA runwhen-local) |
| 3 | Created IAM role with trust for Pod Identity or IRSA |
| 4 | Attached IAM policy (EKS, EC2, S3, IAM, STS, etc.) to that role |
| 5 | Linked role to SA: Pod Identity association or IRSA annotation on runwhen-local SA |
| 6 | Created EKS Access Entry for the role + AmazonEKSViewPolicy (cluster scope) |
| 7 | Wrote workspaceInfo.yaml with useWorkloadIdentity: true and eksClusters (auto or explicit) |
| 8 | Created ConfigMap workspaceinfo from workspaceInfo.yaml in runwhen-local |
| 9 | Installed Helm with SA runwhen-local, existing ConfigMap, and runner using same SA |
| 10 | Verified caller identity and cluster access from a pod using runwhen-local SA |

Single namespace, single service account (runwhen-local), single IAM role, and one EKS Access Entry give both the RunWhen Local controller and the runner cluster-reader access and AWS discovery using workload identity on your existing EKS cluster.