# Configuration

Huginn reads its configuration from `~/.huginn/config.json`, which is created automatically on first run.

## Example config

```json
{
  "backend": {
    "type": "external",
    "endpoint": "http://localhost:11434"
  },
  "models": {
    "planner": "qwen3-coder:30b",
    "coder": "qwen2.5-coder:14b",
    "reasoner": "qwen3-coder:30b"
  },
  "context_limit_kb": 128,
  "git_stage_on_write": false,
  "allowed_tools": [],
  "disallowed_tools": []
}
```

## Options

| Key | Default | Description |
|-----|---------|-------------|
| `backend.type` | `external` | Backend provider: `managed`, `external`, `anthropic`, `openrouter`, `openai` |
| `backend.endpoint` | `http://localhost:11434` | API endpoint (for `external` and `openai`) |
| `backend.api_key` | `""` | API key or `$ENV_VAR` reference |
| `models.planner` | `qwen3-coder:30b` | Model used for planning |
| `models.coder` | `qwen2.5-coder:14b` | Model used for implementation |
| `models.reasoner` | `qwen3-coder:30b` | Model used for `/reason` |
| `context_limit_kb` | `128` | Maximum context window in KB |
| `git_stage_on_write` | `false` | Auto-stage files after writing |
| `allowed_tools` | `[]` | Whitelist of tool names (empty = all allowed) |
| `disallowed_tools` | `[]` | Blacklist of tool names |
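As one sketch of how the two tool lists combine, a session could be restricted to read-only operations by whitelisting a couple of tools. The tool names below (`read_file`, `search`) are illustrative placeholders, not necessarily names from Huginn's actual tool set:

```json
{
  "allowed_tools": ["read_file", "search"],
  "disallowed_tools": []
}
```

With a non-empty whitelist, the blacklist is typically redundant; an empty `allowed_tools` with entries in `disallowed_tools` instead permits everything except the listed tools.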

## Providers

### Managed (no external deps)

```json
{ "backend": { "type": "managed" } }
```

Huginn downloads and manages its own llama.cpp runtime.
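A fuller managed configuration might pair the managed runtime with explicit model choices. This is only a sketch that reuses the model names from the example config above; whether the managed runtime accepts these exact model identifiers is an assumption:

```json
{
  "backend": { "type": "managed" },
  "models": {
    "planner": "qwen3-coder:30b",
    "coder": "qwen2.5-coder:14b",
    "reasoner": "qwen3-coder:30b"
  }
}
```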

### Ollama

```json
{ "backend": { "type": "external", "endpoint": "http://localhost:11434" } }
```

### Anthropic

```json
{ "backend": { "type": "anthropic", "api_key": "$ANTHROPIC_API_KEY" } }
```

### OpenRouter

```json
{ "backend": { "type": "openrouter", "api_key": "$OPENROUTER_API_KEY" } }
```

### OpenAI

```json
{ "backend": { "type": "openai", "endpoint": "https://api.openai.com/v1", "api_key": "$OPENAI_API_KEY" } }
```

API keys prefixed with `$` are resolved from environment variables at startup.
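To illustrate the difference, a key can be supplied either literally or as a `$`-prefixed reference. The literal value below is a placeholder, not a real credential; in practice the environment-variable form keeps secrets out of the config file:

```json
{ "backend": { "type": "anthropic", "api_key": "sk-ant-PLACEHOLDER" } }
```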

## Switching models at runtime

Use `/switch-model` inside Huginn, or phrase the request in natural language:

```
use deepseek-r1 for reasoning
```