

Rumus’s AI assistant is more than a chat box — it’s an agent that can read files, run commands, search the web, call tools through MCP, and complete multi-step jobs end-to-end. This page gives you the lay of the land; deeper pages cover each feature in detail.

Where it lives

The AI lives in the right sidebar. Toggle it from the panel-right icon at the far right of the title bar. When the sidebar is open, you’re talking to a single conversation. Open the History view (also in the sidebar) to switch between past conversations or start a new one.

What it can do

Answer questions

Explain a command, summarize a log, sketch a fix.

Run commands

Propose a command, ask for approval (when needed), execute it in your active terminal — see Agentic execution.

Plan multi-step work

For larger jobs, the agent automatically drafts a plan and works through it — see Plan mode.

Search the web

Pull current information from the open web — see Web search.

Call your own tools

Hook up any MCP server (databases, internal APIs, ticket systems) — see MCP integration.

Follow your conventions

Always-on rules and named skills the agent invokes when relevant — see Rules & skills.

What you’ll see in the sidebar

A typical AI message has more on it than just text:
  • Reasoning blocks — collapsible “thinking” sections from reasoning models, when the model exposes them.
  • Tool calls — when the agent runs a command, fetches a URL, or calls an MCP tool, you see the call inline with its result.
  • Plan blocks — when the agent enters plan mode, the plan renders as a checklist with status icons.
  • Token info — click the small info icon on a message to see exactly how many input / output / cached tokens that response used (and what it cost on built-in models).
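The token info popover reports raw counts; the corresponding cost is simple arithmetic. The sketch below is illustrative only — the per-million-token prices are hypothetical placeholders, not Rumus's actual built-in model pricing, and it assumes cached input tokens bill at a discounted rate:

```python
def message_cost(input_tokens, output_tokens, cached_tokens,
                 price_in=3.00, price_out=15.00, price_cached=0.30):
    """Return the cost of one response in dollars.

    Prices are hypothetical, expressed per million tokens.
    Cached input tokens are assumed to bill at the cheaper cached rate.
    """
    uncached_in = input_tokens - cached_tokens
    return (uncached_in * price_in
            + cached_tokens * price_cached
            + output_tokens * price_out) / 1_000_000

# Example: 12,000 input tokens (10,000 of them cached), 800 output tokens
print(round(message_cost(12_000, 800, 10_000), 6))  # → 0.021
```

With these assumed prices, heavy cache reuse dominates the savings: the 10,000 cached tokens cost a tenth of what they would uncached.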

Picking a model

Click the model name above the input box to switch. The list includes the built-in models plus any models from providers you’ve added yourself. The model picker remembers your last choice per conversation, so you can keep parallel threads on different models.

Smart autocomplete

Separate from the chat: as you type in any terminal tab, Rumus suggests context-aware completions inline. Press Tab to accept. See Smart autocomplete.

Settings tour

The AI’s behavior is configured at Settings → AI, split across four tabs:
  • General — master enable/disable, conversation defaults, web search, browser engine
  • Models — add and manage AI providers and models; see Models & Providers
  • Conversation — auto-naming, task lists, command approval, autocomplete tuning
  • MCP — custom MCP servers and their connection status
There’s also a separate Rules & Skills entry at the same level — see Rules & skills.
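For orientation, MCP servers are typically declared as a command to launch plus its arguments and environment. A hypothetical entry is sketched below — the server name, package, and environment variable are placeholders, and the exact field names Rumus uses in the MCP tab may differ:

```json
{
  "mcpServers": {
    "tickets": {
      "command": "npx",
      "args": ["-y", "@example/ticket-mcp-server"],
      "env": { "TICKETS_API_URL": "https://tickets.internal.example" }
    }
  }
}
```

Once a server like this connects, its tools become callable by the agent the same way built-in tools are — see MCP integration.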

Privacy

  • Built-in model traffic is routed through Rumus to the upstream provider. Rumus logs token counts but not message contents — see Built-in models.
  • BYOK (your own API key) traffic goes directly from your machine to the provider you configured. Rumus never sees the prompts.
  • API keys, host credentials, and anything else sensitive live in the encrypted vault.

Where to go next

Agentic execution

What “the agent runs commands” actually means in practice.

Plan mode

How and when plans get auto-generated.

Command approval

The whitelist / blacklist that gates what runs without asking.

Rules & skills

Teach the agent your conventions once and reuse them everywhere.