Guides

Guides are practical, outcome-focused walkthroughs for real workflows in RunWhen. They sit between reference docs and blog posts: opinionated enough to help you execute quickly, but detailed enough to support repeatable team practices.

Use this section when you want to do something end-to-end:

  • Configure assistant behavior for your environment
  • Standardize operational workflows across teams
  • Roll out repeatable triage and remediation patterns
  • Adopt advanced integrations and automation practices

Current guides:
  • RunWhen terms for AI audiences
    A short mapping from RunWhen vocabulary (Rules, Tasks, Commands, Knowledge) to familiar agent and LLM concepts — useful when explaining the product to teams who already think in tools, RAG, and system prompts.

  • Building Operational Context
    A walkthrough for adding Rules, Commands, and Knowledge to your workspace — moving from generic AI output to investigations your team trusts enough to act on.

  • Azure OpenAI & AI Foundry — BYO LLM Setup
    Step-by-step setup for connecting RunWhen to your Azure OpenAI or Azure AI Foundry endpoint using a cross-tenant Service Principal trust model. Covers role assignments, endpoint configuration, and IP allowlisting for both endpoint types.
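
    As context for that guide: the cross-tenant Service Principal model ultimately requires granting the principal data-plane access on the Azure OpenAI resource. A minimal sketch with the Azure CLI, where `<APP_ID>`, `<SUBSCRIPTION>`, `<RG>`, and `<RESOURCE>` are placeholders for your own values and the exact role and scope depend on your deployment (the guide, not this sketch, is authoritative):

    ```shell
    # Illustrative only: grant a Service Principal the built-in data-plane role
    # on an Azure OpenAI resource. All angle-bracket values are placeholders.
    az role assignment create \
      --assignee "<APP_ID>" \
      --role "Cognitive Services OpenAI User" \
      --scope "/subscriptions/<SUBSCRIPTION>/resourceGroups/<RG>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE>"
    ```

    The guide also covers the tenant-trust configuration and IP allowlisting steps that this sketch omits.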

  • Kubernetes Kubeconfig Setup
    How to provide kubeconfigs to RunWhen: a single file with multiple contexts; cloud-provider-generated kubeconfigs (AKS, GKE, EKS) and how they merge at runtime; and step-by-step service account and RBAC setup for self-managed clusters.
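
    For self-managed clusters, the service-account flow described in that guide ends with a kubeconfig shaped roughly like the fragment below. All names, the server URL, and the token are placeholders for illustration, not values the guide prescribes:

    ```yaml
    # Hypothetical minimal kubeconfig backed by a service account token.
    # Cluster name, server address, user name, and token are placeholders.
    apiVersion: v1
    kind: Config
    clusters:
    - name: my-cluster
      cluster:
        certificate-authority-data: <base64-encoded-CA>
        server: https://my-cluster.example.com:6443
    contexts:
    - name: my-cluster-context
      context:
        cluster: my-cluster
        user: runwhen-sa
    current-context: my-cluster-context
    users:
    - name: runwhen-sa
      user:
        token: <service-account-token>
    ```

    On the merging point: kubectl-style tooling merges every file listed in the colon-separated `KUBECONFIG` environment variable at runtime, and `kubectl config view --flatten` emits the merged result as a single self-contained file.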

Related sections:

  • Common User Journeys: day-to-day usage flows by role and scenario
  • Live Demos: hands-on sandbox scenarios
  • Blog: engineering deep-dives, product updates, and field stories

What’s next

This library will continue to grow with integration guides, operational playbooks, and advanced configuration patterns. New guides will follow the same format: clear prerequisites, UI screenshots, specific examples, and measurable outcomes.