# RunWhen terms for AI audiences

If you already think in terms of agents, tools, prompts, and RAG, RunWhen fits naturally — the names are just different. This page gives you a compact mapping you can reuse in conversations, onboarding decks, or internal wiki pages.

Official spellings for platform objects (CodeBundle, RunSession, and so on) still follow the Terms and Concepts glossary.

## One-sentence mental model

RunWhen pairs an AI Assistant with Tasks (automation that runs in your environment) and layers Rules, Commands, and Knowledge on top so investigations stay consistent, scoped, and grounded in how your organization operates.

## Quick reference table

| RunWhen term | What it is in practice | Familiar AI analogy |
| --- | --- | --- |
| AI Assistant | A configured persona in a workspace with permissions, confidence thresholds, and access to Tasks and chat context | An agent or copilot with a system persona, tool access, and policy guardrails |
| Task | Executable automation (diagnostics, checks, remediation) tied to SLXs and CodeBundles; runs on your runner infrastructure | A tool or function the model can invoke — often backed by code you did not write inline in the prompt |
| SLX (Service Level Expectation) | The workspace object that groups SLI/SLO configuration with the TaskSet for a component or service | A monitored entity plus its automation bundle — the “thing we care about” and the scripts that observe it |
| CodeBundle / CodeCollection | Packaged task definitions (from Git) that power Tasks | A tool SDK or plugin package versioned like any other dependency |
| RunSession | A triggered investigation: one or more RunRequests, results, and the assistant’s report | A trace or session where the agent executed tools and synthesized output |
| Rule | Scoped natural-language guidance: how to interpret findings, what to deprioritize, severity framing | System instructions, policy, or eval rubric layered above the base model — especially for “how to read this environment” |
| Command | A named, repeatable instruction users invoke from Workspace Chat (often slash-style) | A saved prompt, macro, or agent workflow — structured intent instead of one-off chat |
| Knowledge | Curated sources (docs, runbooks) the assistant should use during reasoning | RAG over your corpus — retrieval-augmented context, managed as first-class config |
| Workflow | Event-driven automation (alerts, schedules, webhooks) that can start RunSessions or notify channels | Orchestration or event-driven agent — “when X happens, run this playbook” |

## Rules, Commands, and Tasks together

People often ask how Rules, Commands, and Tasks relate:

  • Tasks answer “what can the platform do in my systems?” — concrete executables.
  • Rules answer “how should the assistant think about what it sees?” — interpretation, noise, priorities.
  • Commands answer “how do we run the same investigation the same way every time?” — repeatable procedures in natural language that still pick the right Tasks at runtime.

That split mirrors common agent design: tools (Tasks), system prompt / policy (Rules), and templated user or operator flows (Commands). For a UI-oriented walkthrough, see Building Operational Context and Workspace Studio.
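To make the analogy concrete, here is a minimal agent-design sketch of that three-way split in plain Python. Everything in it is hypothetical and for illustration only — the function `check_pods`, the `Command` class, and the rule strings are invented stand-ins, not RunWhen APIs:

```python
from dataclasses import dataclass, field
from typing import Callable

# "Tasks": concrete executables the agent can invoke (the tools layer).
# check_pods is a hypothetical stand-in for a real diagnostic task.
def check_pods(namespace: str) -> str:
    return f"3 pods healthy in {namespace}"

TOOLS: dict[str, Callable[..., str]] = {"check_pods": check_pods}

# "Rules": natural-language policy layered above the base model
# (interpretation, noise, priorities) — illustrative examples only.
RULES = [
    "Deprioritize warnings from the staging namespace.",
    "Treat repeated restarts as high severity.",
]

# "Commands": a named, repeatable instruction that expresses intent in
# natural language but still picks the right tools at runtime.
@dataclass
class Command:
    name: str
    prompt_template: str
    tool_names: list[str] = field(default_factory=list)

    def run(self, **kwargs) -> dict:
        results = {t: TOOLS[t](**kwargs) for t in self.tool_names}
        return {
            "intent": self.prompt_template.format(**kwargs),
            "rules": RULES,          # policy travels with every run
            "results": results,      # tool outputs for the model to interpret
        }

triage = Command(
    name="/triage",
    prompt_template="Investigate health of {namespace}",
    tool_names=["check_pods"],
)

report = triage.run(namespace="payments")
print(report["results"]["check_pods"])  # -> 3 pods healthy in payments
```

The point of the sketch is the separation of concerns: swapping a tool, tightening a rule, or adding a command are independent changes, which is the same property the Tasks / Rules / Commands split gives you.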

## “Skills” in an AI conversation

“Skills” is not a separate RunWhen object in the same sense as Rules, Commands, or Tasks. In general AI product language, people use skill loosely (a capability bundle, a fine-tuned behavior, or a packaged workflow). In RunWhen, those ideas usually map like this:

| If someone says “skill” they might mean… | Closest RunWhen ideas |
| --- | --- |
| A thing the assistant can do | Tasks (and the CodeBundles that implement them) |
| How the assistant should behave | Rules (and AI Assistant configuration) |
| A reusable procedure | Commands |
| Facts the assistant should know | Knowledge |

If your audience comes from MCP (Model Context Protocol), the word skill sometimes appears alongside tools. In that framing, RunWhen Tasks are the durable, audited “tools” that run in your environment; Rules and Knowledge shape how the model uses them. The RunWhen MCP Server exposes platform operations to external clients in a similar spirit.