Kubernetes Kubeconfig Setup
RunWhen discovers Kubernetes resources using standard kubeconfig files. This guide covers the two main patterns for providing cluster access, how cloud discovery providers automatically contribute kubeconfigs, and how to manually create a service account and kubeconfig from scratch.
How RunWhen uses kubeconfigs
The core configuration field is kubeconfigFile inside your workspaceInfo.yaml:
```yaml
cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig
```

This path is resolved inside the RunWhen Local container. The /shared/ directory is the standard mount point for files you provide at deployment time.
RunWhen reads all contexts in the kubeconfig and, depending on your contexts: configuration, discovers resources across all of them or a specific subset.
By default, inClusterAuth: true causes RunWhen to also generate its own kubeconfig for the cluster it is running in (the default context). To discover external clusters you must set inClusterAuth: false and provide your own kubeconfig.
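Putting those two settings together, a workspaceInfo.yaml fragment for discovering only external clusters might look like the following sketch (the placement of inClusterAuth under kubernetes: is an assumption based on the other examples in this guide; check it against your RunWhen Local version):

```yaml
cloudConfig:
  kubernetes:
    inClusterAuth: false                # assumed key placement: skip the auto-generated local-cluster kubeconfig
    kubeconfigFile: /shared/kubeconfig  # user-supplied file; may contain many contexts
```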
Pattern 1 — Single kubeconfig with multiple contexts
The recommended approach for multi-cluster environments is to provide one kubeconfig file that contains all cluster contexts. RunWhen will iterate over every context in that file and apply your discovery configuration to each one.
A multi-context kubeconfig looks like this:
```yaml
apiVersion: v1
kind: Config
clusters:
  - name: prod-cluster
    cluster:
      server: https://prod.example.com
      certificate-authority-data: <base64-ca>
  - name: staging-cluster
    cluster:
      server: https://staging.example.com
      certificate-authority-data: <base64-ca>
contexts:
  - name: prod-cluster
    context:
      cluster: prod-cluster
      user: runwhen-sa
  - name: staging-cluster
    context:
      cluster: staging-cluster
      user: runwhen-sa
users:
  - name: runwhen-sa
    user:
      token: <service-account-token>
current-context: prod-cluster
```

Merging multiple kubeconfig files into one
If you have separate kubeconfig files per cluster, merge them into a single file with kubectl:
```shell
KUBECONFIG=cluster-a.kubeconfig:cluster-b.kubeconfig:cluster-c.kubeconfig \
  kubectl config view --flatten > /shared/kubeconfig
```

Verify the merged result:
```shell
kubectl --kubeconfig /shared/kubeconfig config get-contexts
```

Targeting specific contexts in workspaceInfo.yaml
Once your merged kubeconfig is in place, you can set per-context discovery options:
```yaml
cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig
    contexts:
      prod-cluster:
        defaultNamespaceLOD: detailed
      staging-cluster:
        defaultNamespaceLOD: basic
```

Contexts not listed under contexts: fall back to the global defaultLOD.
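The global fallback can be set alongside the per-context map; here is a sketch in which defaultLOD is shown as a top-level workspaceInfo.yaml key — an assumption about placement, so verify it against your workspaceInfo.yaml reference:

```yaml
defaultLOD: basic          # assumed top-level key; applies to contexts without their own entry
cloudConfig:
  kubernetes:
    kubeconfigFile: /shared/kubeconfig
    contexts:
      prod-cluster:
        defaultNamespaceLOD: detailed   # overrides the global default for this context only
```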
Pattern 2 — Cloud provider-generated kubeconfigs
When you configure cloud discovery for Azure, GCP, or AWS, each provider can automatically generate kubeconfigs for the managed Kubernetes clusters it discovers. These are merged at runtime into your kubeconfigFile before discovery runs.
| Provider | Cluster type | How the kubeconfig is generated |
|---|---|---|
| Azure | AKS | RunWhen uses your Service Principal or Managed Identity to fetch AKS credentials via the Azure API and generate a context per listed cluster |
| GCP | GKE | RunWhen uses your GCP service account to generate kubeconfigs for each GKE cluster in your project |
| AWS | EKS | RunWhen uses your AWS access key to generate kubeconfigs for each EKS cluster in your account |
This means you do not need to manually extract and maintain kubeconfigs for every cloud-managed cluster. You provide the cloud credential once and RunWhen handles the rest.
If you also provide a kubeconfigFile, the provider-generated contexts are merged into it. This lets you combine cloud-managed clusters (handled automatically) with self-managed clusters (where you supply the kubeconfig yourself) in a single discovery run.
Example — AKS clusters via Service Principal
No manual kubeconfig extraction needed. List your clusters in the Azure config block and RunWhen generates the kubeconfig context for each one:
```yaml
cloudConfig:
  azure:
    spSecretName: azure-sp   # Kubernetes secret with SP credentials
    aksClusters:
      clusters:
        - name: aks-prod
          server: https://aks-prod.hcp.eastus.azmk8s.io:443
          resource_group: rg-prod
        - name: aks-staging
          server: https://aks-staging.hcp.eastus.azmk8s.io:443
          resource_group: rg-staging
  kubernetes:
    kubeconfigFile: /shared/kubeconfig   # Self-managed clusters can also be added here
```

At runtime, RunWhen generates contexts for aks-prod and aks-staging and merges them with any contexts already in /shared/kubeconfig.
Manually creating a service account and kubeconfig
Use this approach for self-managed clusters or any cluster where you need a dedicated least-privilege service account.
1 — Set variables
```shell
export contextName=$(kubectl config current-context)
export clusterName=my-cluster
export namespace=my-namespace
export serviceAccount=runwhen-sa
export newFile=/shared/kubeconfig
```

2 — Create the service account, token, and role binding
```shell
kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${serviceAccount}
  namespace: ${namespace}
---
apiVersion: v1
kind: Secret
metadata:
  name: ${serviceAccount}-token
  namespace: ${namespace}
  annotations:
    kubernetes.io/service-account.name: ${serviceAccount}
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${serviceAccount}-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: ${serviceAccount}
    namespace: ${namespace}
EOF
```

The built-in view ClusterRole grants read-only access across the cluster, which is sufficient for RunWhen discovery in most environments.
3 — Generate the kubeconfig
```shell
server=$(kubectl config view --minify --raw -o jsonpath='{.clusters[].cluster.server}')
# ca stays base64-encoded (certificate-authority-data expects base64); only the token is decoded
ca=$(kubectl -n ${namespace} get secret/${serviceAccount}-token -o jsonpath='{.data.ca\.crt}')
token=$(kubectl -n ${namespace} get secret/${serviceAccount}-token -o jsonpath='{.data.token}' | base64 --decode)
```
```shell
cat >> ${newFile} << EOF
apiVersion: v1
kind: Config
clusters:
  - name: ${clusterName}
    cluster:
      certificate-authority-data: ${ca}
      server: ${server}
contexts:
  - name: ${contextName}
    context:
      cluster: ${clusterName}
      namespace: ${namespace}
      user: ${serviceAccount}
users:
  - name: ${serviceAccount}
    user:
      token: ${token}
current-context: ${contextName}
EOF
```

4 — Test the kubeconfig
```shell
KUBECONFIG=${newFile} kubectl get pods -A
```

Extending RBAC for additional resources
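If kubectl is not available on the machine that assembles the merged file, you can still sanity-check which contexts it contains with plain text tools. This is only a sketch that assumes the flat "- name: <context>" layout used throughout this guide (the /tmp path and sample file are hypothetical); kubectl config get-contexts remains the authoritative check:

```shell
# Hypothetical demo file standing in for /shared/kubeconfig
kubeconfig=/tmp/demo-kubeconfig

cat > "${kubeconfig}" << 'EOF'
apiVersion: v1
kind: Config
contexts:
  - name: prod-cluster
    context:
      cluster: prod-cluster
      user: runwhen-sa
  - name: staging-cluster
    context:
      cluster: staging-cluster
      user: runwhen-sa
current-context: prod-cluster
EOF

# Print one context name per line: enter the contexts: block, leave it at
# the next top-level (unindented) key, and print the value after "- name:".
awk '/^contexts:/ {in_ctx = 1; next}
     /^[^ ]/      {in_ctx = 0}
     in_ctx && /- name:/ {print $3}' "${kubeconfig}"
```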
The basic view ClusterRole covers most discovery needs. If your RunWhen tasks need access to additional resource types, create a custom ClusterRole:
```shell
kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ${serviceAccount}-extended-view
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "events", "configmaps", "services"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["batch"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["autoscaling"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["*"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${serviceAccount}-extended-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ${serviceAccount}-extended-view
subjects:
  - kind: ServiceAccount
    name: ${serviceAccount}
    namespace: ${namespace}
EOF
```

Related
- Kubernetes discovery configuration — contexts, namespaces, LOD settings, and exclusion annotations
- Microsoft Azure discovery — AKS cluster list and SP/MI authentication
- Amazon Web Services discovery — EKS and AWS credential configuration
- Google Cloud Platform discovery — GKE and GCP service account setup
- Kubernetes documentation — Service Accounts