

Configure your own Anthropic key to use Claude Opus, Sonnet, and Haiku directly. Requests go from your machine to Anthropic — Rumus does not proxy them.

Before you start

You need:
  • An Anthropic account and an API key from the Anthropic Console.
  • Available credit on your Anthropic account.
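
Because Rumus sends requests straight from your machine to Anthropic, you can sanity-check a key against the Messages API before adding it. A minimal sketch using only the standard library; the default model ID here is an assumption, so substitute any model your account can access:

```python
import json
import urllib.error
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"  # Rumus's default base URL

def validate_key_format(key: str) -> bool:
    """Anthropic API keys start with 'sk-ant-'; catch paste mistakes early."""
    return key.startswith("sk-ant-")

def smoke_test(key: str, model: str = "claude-haiku-4-5") -> int:
    """Send a one-token request and return the HTTP status (200 = key and model work)."""
    if not validate_key_format(key):
        raise ValueError("expected an Anthropic key with the sk-ant- prefix")
    body = json.dumps({
        "model": model,
        "max_tokens": 1,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 401 = bad key, 404 = bad model ID, 429 = rate limited
```

A non-200 status here will reproduce exactly the upstream error Rumus would surface.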

Add Anthropic in Rumus

1. Open the model settings: go to Settings → AI → Models and click Add Model.
2. Pick the provider: set Provider to Anthropic.
3. Paste your API key: paste the key from the Anthropic Console into API Key. The field is masked, and the key is stored encrypted in your local vault.
4. (Optional) Set a custom base URL: leave Base URL blank to use the default (https://api.anthropic.com). Override it only if you’re routing through an Anthropic-compatible proxy or regional endpoint.
5. Pick a model: choose from the list (Opus 4.6, Sonnet 4.6, Haiku 4.5, etc.), or toggle Enter custom ID to type a model ID manually, which is useful for snapshot or preview models.
6. Enable capabilities: switch to the Capabilities tab and enable what the model supports. For Claude models you typically want:
  • Tool Calling, required for agentic mode.
  • Vision, for image input on multimodal Claude versions.
  • Prompt Cache, since Claude supports prompt caching; leave it on to cut cost on repeated prompts.
7. Save: click Save. The model now appears in the chat model picker under Custom Models.
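
At the API level, the Tool Calling capability enabled in step 6 corresponds to the tools parameter of Anthropic's Messages API. The exact payload Rumus builds is internal; this sketch only shows the API shape, and get_weather is a made-up example tool:

```python
def build_tool_request(model: str, prompt: str) -> dict:
    """Messages API request body with one declared tool (hypothetical example)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "tools": [
            {
                "name": "get_weather",
                "description": "Look up current weather for a city.",
                "input_schema": {  # JSON Schema describing the tool's arguments
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }
```

When the model decides to call the tool, the response contains a tool_use block whose input matches this schema; agentic mode depends on that round trip.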
  Model               Good for
  Claude Opus 4.6     Hardest tasks: tricky debugging, multi-step refactors, careful reasoning
  Claude Sonnet 4.6   Default daily driver; fast, smart, agentic
  Claude Haiku 4.5    High-volume, latency-sensitive work like autocomplete

For Anthropic’s full lineup and capabilities, see the Anthropic model docs.

Tips

  • Prompt caching is on by default for Claude. With long system prompts or repeated context, cache hits can drop input cost by ~90%.
  • Extended thinking (“reasoning”) is supported. Toggle it per thread from the model picker; Anthropic bills thinking tokens as output tokens.
  • If you hit rate limits, raise them in your Anthropic Console workspace settings — Rumus surfaces the upstream error verbatim.
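
Rumus handles these settings for you, but both tips map to explicit fields in the Messages API request. A sketch of what a request with a cacheable system prompt and extended thinking looks like; the model ID and token numbers are illustrative assumptions:

```python
def build_cached_thinking_request(system_prompt: str, user_msg: str,
                                  model: str = "claude-sonnet-4-6") -> dict:
    """Request body with a cacheable system prompt and extended thinking enabled."""
    return {
        "model": model,
        "max_tokens": 4096,  # must exceed the thinking budget below
        # cache_control marks the system block as cacheable; later requests
        # that reuse this prefix read it from cache at a reduced input rate.
        "system": [
            {"type": "text", "text": system_prompt,
             "cache_control": {"type": "ephemeral"}},
        ],
        # Thinking tokens are billed as output tokens, capped by budget_tokens.
        "thinking": {"type": "enabled", "budget_tokens": 2048},
        "messages": [{"role": "user", "content": user_msg}],
    }
```

The longer and more stable the system prompt, the more the cache_control block pays off, which is why the ~90% saving shows up mainly with long repeated context.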

Troubleshooting

Invalid API key
Recheck the key in the Anthropic Console. Anthropic keys start with sk-ant-. Make sure you didn’t paste a workspace ID by mistake.

Rate limited
You’ve hit Anthropic’s per-minute or per-day limits. Wait, switch to a smaller model, or raise limits on your Anthropic workspace.

Model not found
Use Enter custom ID to type the model ID exactly (e.g. claude-opus-4-6-20251115).

Hit a snag we didn’t cover? Ask in the Rumus community.
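
Since Rumus surfaces Anthropic's errors verbatim, each case above arrives as a specific HTTP status from the API. A small lookup, assuming Anthropic's documented error types:

```python
# Anthropic API error statuses mapped to the fixes described above.
ANTHROPIC_ERRORS = {
    401: "authentication_error: recheck the sk-ant- key in the Anthropic Console",
    404: "not_found_error: wrong model ID; enter the exact ID via Enter custom ID",
    429: "rate_limit_error: wait, use a smaller model, or raise workspace limits",
    529: "overloaded_error: Anthropic is temporarily overloaded; retry later",
}

def explain(status: int) -> str:
    """Translate an upstream HTTP status into the matching troubleshooting step."""
    return ANTHROPIC_ERRORS.get(status, f"unexpected status {status}: see Anthropic docs")
```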

Next steps

Other providers

OpenAI, Google, Z.AI, DeepSeek, Kimi, Ollama, OpenAI-compatible.

AI assistant

Plan mode, sub-agents, MCP, command approval.