Updated Mar 24, 2026 | Created Mar 24, 2026

Providers and Models

Dwight separates backends (CLI tools that run agents) from providers (API services for single-shot operations). This page explains both systems and how to configure them.

[Screenshot: providers and model switching]


How It Works

  1. Backend — a CLI agent (Claude Code, Codex, Gemini, OpenCode) that handles autonomous tasks like :DwightAgent and :DwightAuto. The backend manages its own authentication.
  2. Provider — an API service (Anthropic, OpenAI, Gemini, OpenRouter, or custom) used for single-shot operations like :DwightGenSkill, :DwightRefactor, and inline modes. Requires an API key.
  3. Model — the specific model within a provider. Switch at runtime with :DwightSwitch.
:DwightBackend claude_code    " Switch backend
:DwightSwitch opus            " Switch model
:DwightProviders              " Show current provider/model/key status

Backends

The backend determines which CLI tool runs agentic tasks. Set it in setup() or switch at runtime.

Backend       CLI            Auth
claude_code   Claude Code    claude login (OAuth)
codex         OpenAI Codex   OPENAI_API_KEY env var
gemini        Gemini CLI     gcloud auth or GOOGLE_API_KEY
opencode      OpenCode       Managed by opencode

require("dwight").setup({
  backend = "claude_code",          -- default
  claude_code_bin = "claude",       -- path to binary
  claude_code_model = "sonnet",     -- or "opus", "haiku"
})

Providers

Providers handle API calls for non-agentic operations. Dwight ships with five built-in presets:

Provider        Key Env Var              Models
anthropic       ANTHROPIC_API_KEY        sonnet, haiku, opus
anthropic_max   OAuth (:DwightAuthMax)   sonnet, haiku, opus
openai          OPENAI_API_KEY           gpt-4o, gpt-4o-mini, o1, o3, o3-mini, o4-mini, gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-codex, gpt-5.1, gpt-5.1-mini, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.2, gpt-5.3-codex, gpt-5.4, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, codex-mini-latest
gemini          GEMINI_API_KEY           flash, pro
openrouter      OPENROUTER_API_KEY       sonnet, haiku, opus, gpt-4o, flash

Auto-detection: if no provider is set, Dwight checks environment variables and picks the first available.
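The detection order is not documented here, but the behavior amounts to a first-match scan over known key variables. A minimal sketch (the priority order below is an assumption, not Dwight's actual order):

```python
import os

# Illustrative mapping: provider name -> env var that unlocks it.
# The scan order here is assumed for the sketch.
PROVIDER_KEYS = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
]

def detect_provider(env=os.environ):
    """Return the first provider whose API key is set, else None."""
    for provider, var in PROVIDER_KEYS:
        if env.get(var):
            return provider
    return None
```

For example, with only OPENAI_API_KEY exported, the scan would settle on openai.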


Switching Models

:DwightSwitch sonnet            " Switch to sonnet on current provider
:DwightSwitch openai:gpt-4o     " Switch to GPT-4o via OpenAI
:DwightSwitch openrouter:opus   " Switch to Opus via OpenRouter

Tab completion shows only models available for your current backend. For claude_code, the options are haiku, sonnet, and opus — the CLI handles auth, no API key needed.
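Conceptually, completion is a prefix filter over the active backend's model list. A sketch of that idea (the model table below is assumed for illustration, not Dwight's internal data):

```python
# Hypothetical per-backend model table, for illustration only.
BACKEND_MODELS = {
    "claude_code": ["haiku", "sonnet", "opus"],
    "codex": ["gpt-4o", "o3-mini", "codex-mini-latest"],
}

def complete(backend, typed_prefix):
    """Return models for the backend that start with what the user typed."""
    models = BACKEND_MODELS.get(backend, [])
    return [m for m in models if m.startswith(typed_prefix)]
```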

You can also pass any raw model ID directly — no source-code changes needed. Dwight infers the provider from the model string prefix:

:DwightSwitch gpt-4.1-nano          " Any gpt-* → OpenAI
:DwightSwitch codex-mini-latest     " Any codex-* → OpenAI
:DwightSwitch o3-mini               " Any o<digit>* → OpenAI
:DwightSwitch claude-3-5-haiku      " Any claude-* → Anthropic
:DwightSwitch gemini-2.0-flash      " Any gemini-* → Gemini

For providers that do not match a known prefix, use :DwightAddProvider to register a named alias, or use the provider:model-id syntax (e.g. openai:some-new-model).
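The prefix rules above can be sketched as a simple classifier (the fallthrough value for unknown prefixes is an assumption):

```python
import re

def infer_provider(model):
    """Map a raw model ID to a provider by prefix, per the rules above."""
    if model.startswith(("gpt-", "codex-")) or re.match(r"o\d", model):
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gemini-"):
        return "gemini"
    return None  # unknown prefix: use provider:model-id or :DwightAddProvider
```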


Model Diversity

Use different models for test-writing vs implementation to reduce blind spots:

require("dwight").setup({
  test_model = "sonnet",       -- for /test, /stub modes
  implement_model = "opus",    -- for /code, /fix modes
})

When both are set, Dwight automatically routes to the correct model based on the mode. When unset, all modes use the default model.
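The routing rule can be sketched as follows (mode names taken from the comments in the snippet above; the config shape is an assumption for illustration):

```python
# Modes handled by each model slot, per the setup() comments above.
TEST_MODES = {"test", "stub"}
IMPLEMENT_MODES = {"code", "fix"}

def model_for_mode(mode, config):
    """Pick test_model or implement_model when set, else the default."""
    if mode in TEST_MODES and config.get("test_model"):
        return config["test_model"]
    if mode in IMPLEMENT_MODES and config.get("implement_model"):
        return config["implement_model"]
    return config["default_model"]
```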


Adding Custom Providers

For self-hosted or third-party API-compatible services:

:DwightAddProvider

The wizard prompts for: name, API format (openai/anthropic/gemini/custom), base URL, endpoint, API key env var, default model, and model aliases.

Provider configs are stored globally in ~/.config/dwight/providers.json and per-project in .dwight/providers.json (project overrides global).
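The override behavior amounts to loading both files and letting per-project entries shadow global ones. A sketch, assuming a shallow key-level merge (the paths are from the doc; the merge depth is an assumption):

```python
import json
from pathlib import Path

def load_providers(home, project):
    """Merge global and per-project provider configs; project entries win."""
    merged = {}
    for path in (home / ".config/dwight/providers.json",
                 project / ".dwight/providers.json"):
        if path.exists():
            merged.update(json.loads(path.read_text()))
    return merged
```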


Anthropic Max (OAuth)

Use your Anthropic Pro/Max subscription instead of API credits:

:DwightAuthMax

This command:

  1. Searches for existing Claude Code credentials and auto-imports them if found
  2. Falls back to manual token or API key entry
  3. Stores the token securely in ~/.config/dwight/oauth_token.json (chmod 600)
  4. Switches the active provider to anthropic_max

Token refresh is automatic when a refresh token is available.
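The secure-storage step boils down to writing the token file with owner-only permissions. A sketch (the path is from the doc; the token's JSON shape is an assumption):

```python
import json
import os
from pathlib import Path

def store_token(token, path):
    """Write an OAuth token as JSON with 0600 (owner read/write) permissions."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(token))
    os.chmod(path, 0o600)  # matches the chmod 600 noted above
    return path
```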


MCP Servers

Connect external tools via the Model Context Protocol. MCP servers provide additional context to agents.

require("dwight").setup({
  mcp_servers = {
    { name = "sqlite", command = "mcp-server-sqlite", args = { "project.db" } },
    { name = "github", command = "mcp-server-github",
      env = { GITHUB_TOKEN = os.getenv("GITHUB_TOKEN") } },
  },
})

Check server status with:

:DwightMCP

MCP resources are referenced in prompts with &server:resource_uri and resolved synchronously at prompt build time.
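Resolving those references amounts to scanning the prompt for the &server:uri pattern and substituting fetched content. A sketch, where the fetch callback stands in for the real MCP resource call (the exact reference grammar is an assumption):

```python
import re

# Assumed grammar: '&' + server name + ':' + resource URI (no spaces).
REF = re.compile(r"&(\w+):(\S+)")

def resolve_refs(prompt, fetch):
    """Replace each &server:uri reference with fetch(server, uri)."""
    return REF.sub(lambda m: fetch(m.group(1), m.group(2)), prompt)
```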


Tips

  • Use claude_code backend with anthropic_max provider. The backend handles agent tasks via CLI auth; the provider handles single-shot calls against your subscription. No API credits needed for either.
  • Check :DwightProviders when something fails. It shows the active backend, provider, model, and key status in one line.
  • Set model diversity for TDD workflows. Different models catch different bugs — using one for tests and another for implementation improves coverage.
  • Custom providers work with any OpenAI-compatible API. Local models via Ollama, LM Studio, or vLLM can be added through :DwightAddProvider.

Commands

Command              Args                                  Description
:DwightBackend       [claude_code|codex|gemini|opencode]   Get or set the CLI backend
:DwightSwitch        <model>                               Switch model (filtered by backend)
:DwightProviders     (none)                                Show current provider, model, and key status
:DwightAddProvider   (none)                                Interactive wizard to add a custom provider
:DwightAuthMax       (none)                                Authenticate with Anthropic Pro/Max subscription
:DwightMCP           (none)                                Show MCP server status

See Also

  • Configuration -- for all setup() options including backend and provider settings
  • Core Concepts -- for how providers fit into the overall architecture
  • Agent Mode -- uses the backend for autonomous tasks
  • Inline Editing -- uses the provider for single-shot API calls