
Kubernetes Kubeconfig Setup

RunWhen discovers Kubernetes resources using standard kubeconfig files. This guide covers the two main patterns for providing cluster access, how cloud discovery providers automatically contribute kubeconfigs, and how to manually create a service account and kubeconfig from scratch.


How RunWhen uses kubeconfigs

The core configuration field is kubeconfigFile inside your workspaceInfo.yaml:

cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig

This path is resolved inside the RunWhen Local container. The /shared/ directory is the standard mount point for files you provide at deployment time.

RunWhen reads every context in the kubeconfig and, depending on your contexts: configuration, discovers resources across all of them or only the subset you specify.

By default, inClusterAuth: true causes RunWhen to also generate its own kubeconfig for the cluster it is running in (the default context). To discover external clusters you must set inClusterAuth: false and provide your own kubeconfig.
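For external clusters, the two settings work together. A minimal sketch — note that placing inClusterAuth alongside kubeconfigFile under the kubernetes block is an assumption here; confirm the exact field location against your workspaceInfo.yaml reference:

```yaml
cloudConfig:
  kubernetes:
    inClusterAuth: false                 # don't auto-generate a kubeconfig for the host cluster
    kubeconfigFile: /shared/kubeconfig   # the kubeconfig you supply for external clusters
```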


Pattern 1 — Single kubeconfig with multiple contexts

The recommended approach for multi-cluster environments is to provide one kubeconfig file that contains all cluster contexts. RunWhen will iterate over every context in that file and apply your discovery configuration to each one.

A multi-context kubeconfig looks like this:

apiVersion: v1
kind: Config
clusters:
- name: prod-cluster
  cluster:
    server: https://prod.example.com
    certificate-authority-data: <base64-ca>
- name: staging-cluster
  cluster:
    server: https://staging.example.com
    certificate-authority-data: <base64-ca>
contexts:
- name: prod-cluster
  context:
    cluster: prod-cluster
    user: runwhen-sa
- name: staging-cluster
  context:
    cluster: staging-cluster
    user: runwhen-sa
users:
- name: runwhen-sa
  user:
    token: <service-account-token>
current-context: prod-cluster
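To preview what discovery will see, you can iterate over the contexts yourself the same way RunWhen does. A sketch — it assumes kubectl is installed, the kubeconfig lives at /shared/kubeconfig, and every cluster is reachable from where you run it:

```shell
# Probe each context in the kubeconfig read-only, one cluster at a time.
for ctx in $(kubectl --kubeconfig /shared/kubeconfig config get-contexts -o name); do
  echo "== ${ctx} =="
  kubectl --kubeconfig /shared/kubeconfig --context "${ctx}" get namespaces
done
```

A context that fails here (auth error, unreachable server) will also fail during discovery, so this is a quick way to catch bad credentials before deploying.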

Merging multiple kubeconfig files into one

If you have separate kubeconfig files per cluster, merge them into a single file with kubectl:

KUBECONFIG=cluster-a.kubeconfig:cluster-b.kubeconfig:cluster-c.kubeconfig \
kubectl config view --flatten > /shared/kubeconfig

Verify the merged result:

kubectl --kubeconfig /shared/kubeconfig config get-contexts

Targeting specific contexts in workspaceInfo.yaml

Once your merged kubeconfig is in place, you can set per-context discovery options:

cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig
    contexts:
      prod-cluster:
        defaultNamespaceLOD: detailed
      staging-cluster:
        defaultNamespaceLOD: basic

Contexts not listed under contexts: fall back to the global defaultLOD.
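For instance, with a global defaultLOD set, any context you don't list keeps that level. A sketch — the placement of defaultLOD relative to the kubernetes block is an assumption; verify it against your workspaceInfo.yaml reference:

```yaml
cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig
    defaultLOD: detailed                 # applies to every context not listed below
    contexts:
      staging-cluster:
        defaultNamespaceLOD: basic       # overrides the global default for this context only
```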


Pattern 2 — Cloud provider-generated kubeconfigs

When you configure cloud discovery for Azure, GCP, or AWS, each provider can automatically generate kubeconfigs for the managed Kubernetes clusters it discovers. These are merged at runtime into your kubeconfigFile before discovery runs.

| Provider | Cluster type | How the kubeconfig is generated |
| --- | --- | --- |
| Azure | AKS | RunWhen uses your Service Principal or Managed Identity to fetch AKS credentials via the Azure API and generate a context per listed cluster |
| GCP | GKE | RunWhen uses your GCP service account to generate kubeconfigs for each GKE cluster in your project |
| AWS | EKS | RunWhen uses your AWS access key to generate kubeconfigs for each EKS cluster in your account |

This means you do not need to manually extract and maintain kubeconfigs for every cloud-managed cluster. You provide the cloud credential once and RunWhen handles the rest.

If you also provide a kubeconfigFile, the provider-generated contexts are merged into it. This lets you combine cloud-managed clusters (handled automatically) with self-managed clusters (where you supply the kubeconfig yourself) in a single discovery run.

Example — AKS clusters via Service Principal

No manual kubeconfig extraction needed. List your clusters in the Azure config block and RunWhen generates the kubeconfig context for each one:

cloudConfig:
  azure:
    spSecretName: azure-sp              # Kubernetes secret with SP credentials
    aksClusters:
      clusters:
      - name: aks-prod
        server: https://aks-prod.hcp.eastus.azmk8s.io:443
        resource_group: rg-prod
      - name: aks-staging
        server: https://aks-staging.hcp.eastus.azmk8s.io:443
        resource_group: rg-staging
  kubernetes:
    kubeconfigFile: /shared/kubeconfig  # Self-managed clusters can also be added here

At runtime, RunWhen generates contexts for aks-prod and aks-staging and merges them with any contexts already in /shared/kubeconfig.


Manually creating a service account and kubeconfig

Use this approach for self-managed clusters or any cluster where you need a dedicated least-privilege service account.

1 — Set variables

export contextName=$(kubectl config current-context)
export clusterName=my-cluster
export namespace=my-namespace
export serviceAccount=runwhen-sa
export newFile=/shared/kubeconfig

2 — Create the service account, token, and role binding

kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${serviceAccount}
  namespace: ${namespace}
---
apiVersion: v1
kind: Secret
metadata:
  name: ${serviceAccount}-token
  namespace: ${namespace}
  annotations:
    kubernetes.io/service-account.name: ${serviceAccount}
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${serviceAccount}-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: ${serviceAccount}
  namespace: ${namespace}
EOF

The built-in view ClusterRole grants read-only access across the cluster — sufficient for RunWhen discovery in most environments.
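You can confirm the binding took effect with kubectl's impersonation check. A sketch — run it against the target cluster with the variables from step 1 still exported, and note the expected answers assume no other bindings grant this service account extra permissions:

```shell
# Expect "yes": the built-in view role allows listing pods cluster-wide.
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:${namespace}:${serviceAccount}

# Expect "no": view deliberately excludes reading Secrets.
kubectl auth can-i get secrets \
  --as=system:serviceaccount:${namespace}:${serviceAccount}
```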

3 — Generate the kubeconfig

server=$(kubectl config view --minify --raw -o jsonpath='{.clusters[].cluster.server}')
ca=$(kubectl -n ${namespace} get secret/${serviceAccount}-token -o jsonpath='{.data.ca\.crt}')
token=$(kubectl -n ${namespace} get secret/${serviceAccount}-token -o jsonpath='{.data.token}' | base64 --decode)
cat >> ${newFile} << EOF
apiVersion: v1
kind: Config
clusters:
- name: ${clusterName}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${contextName}
  context:
    cluster: ${clusterName}
    namespace: ${namespace}
    user: ${serviceAccount}
users:
- name: ${serviceAccount}
  user:
    token: ${token}
current-context: ${contextName}
EOF

4 — Test the kubeconfig

KUBECONFIG=${newFile} kubectl get pods -A

Extending RBAC for additional resources

The basic view ClusterRole covers most discovery needs. If your RunWhen tasks need access to additional resource types, create a custom ClusterRole:

kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ${serviceAccount}-extended-view
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "events", "configmaps", "services"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["batch"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["autoscaling"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["*"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${serviceAccount}-extended-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ${serviceAccount}-extended-view
subjects:
- kind: ServiceAccount
  name: ${serviceAccount}
  namespace: ${namespace}
EOF