
Bring your own OpenAI key to use GPT-5, GPT-4.1, the o-series reasoning models, and any other model your account has access to. Requests go directly from your machine to OpenAI.

Before you start

You need:
  • An OpenAI account with API access at platform.openai.com.
  • An API key, created on the API keys page of your OpenAI dashboard.
  • Sufficient credit or an active billing setup on the OpenAI side.

Add OpenAI in Rumus

1. Open the model settings

Go to Settings → AI → Models and click Add Model.
2. Pick the provider

Set Provider to OpenAI.
3. Paste your API key

Paste your sk-... key into API Key. The key is masked in the UI and stored encrypted in your local vault.
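As a hedged illustration of the key format: the step above only guarantees the sk- prefix; the length check and the sk-proj- variant for project-scoped keys are assumptions, so this is just a paste-error catch, not validation.

```python
def looks_like_openai_key(key: str) -> bool:
    """Cheap sanity check before pasting a key into the UI.

    OpenAI secret keys start with "sk-" (project-scoped keys with
    "sk-proj-"). This only catches obvious paste mistakes; it does
    not prove the key is valid or active on OpenAI's side.
    """
    return key.startswith("sk-") and len(key) > 20

print(looks_like_openai_key("sk-proj-" + "x" * 40))  # True
print(looks_like_openai_key("my-password"))          # False
```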
4. (Optional) Set a custom base URL

Leave Base URL blank to use https://api.openai.com. Override it only if you’re using an Azure deployment fronted by an OpenAI-compatible URL or a corporate proxy. For Azure OpenAI specifically, the OpenAI-compatible provider gives you finer control over headers and query parameters.
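A minimal sketch of what the base URL controls: the client joins it with the endpoint path for each request, so overriding it only swaps the host. The /v1 path segment and the chat/completions endpoint are assumptions about the standard OpenAI-style URL layout, not something the setting itself dictates.

```python
from urllib.parse import urljoin

# Default when Base URL is left blank (per the step above); the /v1
# segment is an assumption about how the client composes request URLs.
base = "https://api.openai.com/v1/"
# A proxy or Azure-compatible front end just swaps this string, e.g.
# "https://my-proxy.example.com/v1/" (hypothetical).

endpoint = urljoin(base, "chat/completions")
print(endpoint)  # https://api.openai.com/v1/chat/completions
```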
5. Pick a model

Choose from the list (GPT-5, GPT-5 Mini, GPT-4.1, GPT-4o, o4-mini, o3, o1, etc.) or toggle Enter custom ID to type a model ID manually.
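For reference, the model ID you pick or type is sent verbatim in the request body. A hedged sketch of the payload shape, assuming the standard chat-completions format; the snapshot ID shown is only an example:

```python
import json

payload = {
    # Either a listed alias ("gpt-4.1") or a pinned snapshot ID typed
    # via "Enter custom ID". The snapshot here is illustrative only.
    "model": "gpt-4.1-2025-04-14",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
print(json.loads(body)["model"])  # gpt-4.1-2025-04-14
```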
6. Capabilities

On the Capabilities tab:
  • Tool Calling — enable for agentic features (most GPT and o-series models support it).
  • Vision — enable for GPT-4o, GPT-4.1, and other multimodal models.
  • Prompt Cache — OpenAI caches prompts automatically on eligible models; leave this enabled so Rumus accounts for cache reads.
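Why the Prompt Cache flag matters: cache hits are reported in the response's usage block, and a client can subtract them to see how many prompt tokens were billed fresh. A hedged sketch with invented numbers; the field names follow the chat-completions usage shape, but verify them against current OpenAI docs:

```python
# Example usage block shaped like a chat-completions response;
# the token counts are made up for illustration.
usage = {
    "prompt_tokens": 2048,
    "completion_tokens": 120,
    "prompt_tokens_details": {"cached_tokens": 1920},
}

cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
fresh = usage["prompt_tokens"] - cached
print(cached, fresh)  # 1920 128
```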
7. Save

The model appears in the picker under Custom Models.
Model                     | Good for
GPT-5                     | Heavy multi-step work, ambiguous prompts, hard reasoning
GPT-5 Mini / GPT-4.1 Mini | Default daily driver — strong quality, lower cost
GPT-4o                    | Vision-heavy tasks (screenshots, diagrams)
o3 / o4-mini              | Math, code reasoning, planning where extra “think time” helps

For the latest model lineup and pricing, see the OpenAI models page.

Tips

  • Reasoning models (o-series) ignore temperature and take a reasoning-effort setting instead. Rumus handles the parameter difference automatically.
  • Prompt caching is automatic on supported OpenAI models — leaving the capability flag on lets Rumus account for cache reads correctly.
  • Project-scoped keys work fine. Create a separate key per project to keep usage tracked.
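The first tip above can be sketched as a parameter switch. This is an illustration, not Rumus's actual logic: the model-prefix check is a simplification, and "medium" is one of OpenAI's documented reasoning-effort values.

```python
def request_params(model: str, temperature: float = 0.7) -> dict:
    # o-series reasoning models ignore temperature and take a
    # reasoning-effort knob instead; other models keep temperature.
    if model.startswith(("o1", "o3", "o4")):
        return {"model": model, "reasoning_effort": "medium"}
    return {"model": model, "temperature": temperature}

print(request_params("o4-mini"))  # {'model': 'o4-mini', 'reasoning_effort': 'medium'}
print(request_params("gpt-4o"))   # {'model': 'gpt-4o', 'temperature': 0.7}
```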

Troubleshooting

  • Invalid or unauthorized key — the key is missing, expired, or revoked. Recreate it in the OpenAI dashboard.
  • Model not available to your account — some models require a verified organization or a higher usage tier. Check Limits in the OpenAI dashboard.
  • Insufficient quota — add credit to your OpenAI account or attach a payment method.
  • Model not in the list — toggle Enter custom ID and paste the exact model ID (e.g. gpt-5-2025-11-04). The list ships with common defaults but not every snapshot.
Hit a snag we didn’t cover? Ask in the Rumus community.

Next steps

Other providers

Anthropic, Google, Z.AI, DeepSeek, Kimi, Ollama, OpenAI-compatible.

OpenAI-compatible

Azure OpenAI, OpenRouter, vLLM, and friends.