This implementation guide assumes you have an existing EKS cluster and want to install RunWhen Local with workload identity (no long-lived access keys). Choose either Pod Identity or IRSA; both end with the same capability.
1. Prerequisites
- Existing EKS cluster (1.24+ for Pod Identity; any version with OIDC for IRSA).
- `kubectl` and `helm` configured for that cluster.
- Permissions to create IAM roles/policies, EKS Access Entries, and Kubernetes resources in the cluster.
2. Namespace and Kubernetes Base
Create the namespace and (optionally) the service account. The Helm chart can create the SA; if you create it yourself, keep the name/namespace below so the rest of the guide matches.
```shell
kubectl create namespace runwhen-local
```
If you prefer to create the service account manually (e.g. to add IRSA annotation yourself):
```shell
kubectl create serviceaccount runwhen-local -n runwhen-local
```
3. IAM Role for Workload Identity
Create one IAM role that RunWhen Local and the runner will use. You’ll attach permissions in the next step and link this role to the runwhen-local service account in step 4.
3a. Trust policy
Pod Identity (EKS 1.24+, recommended):

- Trust policy: allow `pods.eks.amazonaws.com` to assume the role.

IRSA (OIDC):

- Trust policy: allow your cluster’s OIDC provider to assume the role for the specific subject `system:serviceaccount:runwhen-local:runwhen-local`.
Create the role in the AWS console (IAM → Roles → Create role) or via CLI. Example trust policy for Pod Identity:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```
For IRSA, choose “Web identity” as the trusted entity, select your EKS OIDC provider, and set the audience to `sts.amazonaws.com`; then restrict the trust policy to the service account (e.g. a condition on the subject `system:serviceaccount:runwhen-local:runwhen-local`). Record the role ARN (e.g. `arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role`).
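For reference, a sketch of the corresponding IRSA trust policy; `ACCOUNT_ID`, `REGION`, and `OIDC_ID` are placeholders for your account and your cluster’s OIDC provider ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:sub": "system:serviceaccount:runwhen-local:runwhen-local",
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```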
4. Attach IAM Permissions to the Role
Add an IAM policy to the role so RunWhen can:

- Call EKS and other AWS APIs for discovery (CloudQuery, `eks:ListClusters`/`eks:DescribeCluster`, etc.).
- Use STS for identity (needed for workload identity).
Minimum for EKS cluster discovery + common discovery resources:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RunWhenDiscovery",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:Describe*",
        "eks:List*",
        "ec2:Describe*",
        "s3:List*",
        "s3:GetBucket*",
        "iam:GetUser",
        "iam:GetRole",
        "iam:ListUsers",
        "iam:ListRoles",
        "iam:ListAccountAliases",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
```
Add more actions (e.g. rds:Describe*, cloudwatch:*) if you need discovery for those resources. Attach this policy to the role you created in step 3.
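If you prefer the CLI to the console, a sketch of creating and attaching the policy; the policy name `runwhen-local-discovery` and the file name `discovery-policy.json` are illustrative, and the JSON above is assumed saved to that file:

```shell
# Create the managed policy from the discovery JSON above,
# then attach it to the role created in step 3.
aws iam create-policy \
  --policy-name runwhen-local-discovery \
  --policy-document file://discovery-policy.json

aws iam attach-role-policy \
  --role-name runwhen-local-role \
  --policy-arn arn:aws:iam::ACCOUNT_ID:policy/runwhen-local-discovery
```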
5. Link the IAM Role to the Service Account
Link the role so that pods using the `runwhen-local` service account receive the IAM role credentials.
Option A: Pod Identity (EKS 1.24+)
Create a Pod Identity association so the role is used for the runwhen-local SA in runwhen-local:
```shell
CLUSTER_NAME=your-eks-cluster-name
ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role
NAMESPACE=runwhen-local
SERVICE_ACCOUNT=runwhen-local

aws eks create-pod-identity-association \
  --cluster-name "$CLUSTER_NAME" \
  --namespace "$NAMESPACE" \
  --service-account "$SERVICE_ACCOUNT" \
  --role-arn "$ROLE_ARN"
```
Ensure the EKS Pod Identity Agent addon is enabled on the cluster. No annotation on the service account is required.
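If you’re unsure whether the agent is present, a sketch using the EKS addon APIs (the addon name is `eks-pod-identity-agent`):

```shell
# Check whether the Pod Identity Agent addon is installed
aws eks describe-addon \
  --cluster-name "$CLUSTER_NAME" \
  --addon-name eks-pod-identity-agent

# Install it if the call above reports the addon is not found
aws eks create-addon \
  --cluster-name "$CLUSTER_NAME" \
  --addon-name eks-pod-identity-agent
```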
Option B: IRSA
- Ensure the cluster has an OIDC provider (EKS console → cluster → Configuration → OIDC).
- Annotate the service account with the role ARN (create the SA if the chart doesn’t manage it):
```shell
kubectl annotate serviceaccount runwhen-local -n runwhen-local \
  eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role
```
Use the same `ROLE_ARN` and ensure the role’s trust policy allows `system:serviceaccount:runwhen-local:runwhen-local` for this cluster’s OIDC provider.
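To confirm the annotation landed, a quick check (the jsonpath expression escapes the dots in the annotation key):

```shell
# Prints the role ARN bound to the service account, or nothing if unset
kubectl get serviceaccount runwhen-local -n runwhen-local \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```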
6. Grant the Role Cluster-Reader Access to EKS
So the same role can call the EKS cluster API (for Kubernetes discovery). Create one EKS Access Entry for the role and associate the managed view policy:
```shell
CLUSTER_NAME=your-eks-cluster-name
ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role

aws eks create-access-entry \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN" \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN" \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster
```
No extra Kubernetes RBAC is required for this; EKS Access Control grants the role read-only access to the cluster.
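To confirm the entry and its policy association took effect, a sketch:

```shell
# The role ARN should appear among the cluster's access entries
aws eks list-access-entries --cluster-name "$CLUSTER_NAME"

# The AmazonEKSViewPolicy association should be listed for the role
aws eks list-associated-access-policies \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN"
```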
7. workspaceInfo Configuration
RunWhen Local discovers the cluster via AWS (EKS APIs) and then uses the same credentials (and EKS access) to read the cluster. Put this in a workspaceInfo.yaml that you’ll load as a ConfigMap (step 8).
- Use workload identity only (no access keys, no assume role) for the cluster you’re installed in.
- Set `cloudConfig.kubernetes: null` and use `cloudConfig.aws` with `eksClusters` so EKS discovery is used.
Example for auto-discovering EKS clusters in one region (including the one you’re on):
```yaml
workspaceName: "my-runwhen-workspace"
workspaceOwnerEmail: "you@example.com"
defaultLocation: location-01
defaultLOD: detailed
cloudConfig:
  kubernetes: null
  aws:
    region: us-east-1
    useWorkloadIdentity: true
    eksClusters:
      autoDiscover: true
      discoveryConfig:
        regions:
          - us-east-1
codeCollections:
  - repoURL: "https://github.com/runwhen-contrib/aws-c7n-codecollection"
    branch: "main"
  - repoURL: "https://github.com/runwhen-contrib/rw-cli-codecollection"
    branch: "main"
```
Adjust `region` and `regions` to match your cluster. For a single explicit cluster instead of auto-discovery, set `autoDiscover: false` and list the cluster under `eksClusters.clusters` with `name`, `server`, and `region`.
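A sketch of the explicit single-cluster form, assuming the field names (`name`, `server`, `region`) described above; replace the placeholder values with your cluster’s:

```yaml
cloudConfig:
  kubernetes: null
  aws:
    region: us-east-1
    useWorkloadIdentity: true
    eksClusters:
      autoDiscover: false
      clusters:
        - name: my-eks-cluster
          server: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
          region: us-east-1
```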
8. Load workspaceInfo into the Cluster
Create a ConfigMap from the file you created in step 7:
```shell
kubectl create configmap workspaceinfo -n runwhen-local --from-file=workspaceInfo.yaml
```
If the ConfigMap already exists, replace it:
```shell
kubectl create configmap workspaceinfo -n runwhen-local --from-file=workspaceInfo.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
```
9. Install RunWhen Local with Helm
Use the RunWhen Local Helm chart and point it at the existing namespace, service account, and workspaceInfo ConfigMap.
- Service account: use the same `runwhen-local` SA in the `runwhen-local` namespace so it gets the IAM role (Pod Identity or IRSA).
- Runner: set the runner deployment and runner pods to use the same `runwhen-local` service account so they share the role and EKS access.
Example values (customize image, upload, and code collections as needed):
```yaml
# values.yaml (excerpt)
runwhenLocal:
  enabled: true
  serviceAccount:
    create: true
    name: runwhen-local
    # For IRSA only: uncomment and set your role ARN
    # annotations:
    #   eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/runwhen-local-role
  serviceAccountRoles:
    clusterRoleView:
      enabled: true
  workspaceInfo:
    useExistingConfigMap: true
    existingConfigMapName: workspaceinfo
    configMap:
      create: false
  discoveryKubeconfig:
    inClusterAuth:
      enabled: true
runner:
  enabled: true
  runEnvironment:
    deployment:
      serviceAccount: runwhen-local
    pod:
      serviceAccount: runwhen-local
  # ... rest of runner config (codeCollections, controlAddr, etc.)
```
Install (from the directory that contains the chart or use the chart reference you normally use):
```shell
helm upgrade --install runwhen-local ./path-to-chart \
  -n runwhen-local \
  -f values.yaml
```
If you use IRSA and the chart creates the SA, you must ensure the SA gets the `eks.amazonaws.com/role-arn` annotation (e.g. via `runwhenLocal.serviceAccount.annotations` in values or by patching after install).
10. Verify
- AWS identity from a pod (`kubectl run --serviceaccount` was removed in kubectl 1.24, so pass the SA via `--overrides`):

```shell
kubectl run -it --rm debug --restart=Never -n runwhen-local \
  --overrides='{"spec":{"serviceAccountName":"runwhen-local"}}' \
  --image=public.ecr.aws/aws-cli/aws-cli:latest -- \
  sts get-caller-identity
```

The identity should be the role you created (e.g. `runwhen-local-role`).

- EKS cluster access: the same pod setup should be able to list nodes (cluster-reader):

```shell
kubectl run -it --rm debug --restart=Never -n runwhen-local \
  --overrides='{"spec":{"serviceAccountName":"runwhen-local"}}' \
  --image=bitnami/kubectl:latest -- \
  get nodes
```

- RunWhen Local: check the discovery and runner pods; logs should show no AWS auth errors and successful EKS/Kubernetes discovery.
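A quick way to spot-check the workload itself; the deployment name `runwhen-local` is an assumption that depends on your Helm release name, so adjust as needed:

```shell
# List the pods, then scan recent logs for auth failures
kubectl get pods -n runwhen-local
kubectl logs -n runwhen-local deploy/runwhen-local --tail=200 | grep -iE 'error|denied|forbidden' || true
```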
Summary Checklist
| Step | What you did |
|---|---|
| 1 | Confirmed existing EKS cluster and tools |
| 2 | Created namespace |
| 3 | Created IAM role with trust for Pod Identity or IRSA |
| 4 | Attached IAM policy (EKS, EC2, S3, IAM, STS, etc.) to that role |
| 5 | Linked role to SA: Pod Identity association or IRSA annotation on `runwhen-local` |
| 6 | Created EKS Access Entry for the role + `AmazonEKSViewPolicy` |
| 7 | Wrote `workspaceInfo.yaml` |
| 8 | Created ConfigMap `workspaceinfo` |
| 9 | Installed Helm chart with SA `runwhen-local` |
| 10 | Verified caller identity and cluster access from a pod using the SA |
Single namespace, single service account (runwhen-local), single IAM role, and one EKS Access Entry give both the RunWhen Local controller and the runner cluster-reader access and AWS discovery using workload identity on your existing EKS cluster.