Configuration

Huginn reads its configuration from ~/.huginn/config.json. The file is created automatically on first run with safe defaults, and every field is optional.

Example config

{
  "backend": {
    "type": "external",
    "provider": "ollama",
    "endpoint": "http://localhost:11434"
  },
  "default_model": "qwen2.5-coder:14b",
  "context_limit_kb": 128,
  "max_turns": 50,
  "git_stage_on_write": false,
  "diff_review_mode": "auto",
  "allowed_tools": [],
  "disallowed_tools": []
}

Options

Backend

| Key | Default | Description |
|-----|---------|-------------|
| backend.type | "external" | "external" (Ollama / cloud API) or "managed" (built-in llama.cpp) |
| backend.provider | "ollama" | "ollama", "anthropic", "openai", "openrouter" |
| backend.endpoint | "http://localhost:11434" | API endpoint (external backends) |
| backend.api_key | "" | Literal key or "$ENV_VAR", resolved from the environment at startup (see example below) |
| backend.builtin_model | "" | Model name when type is "managed" |
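
Rather than storing a secret in the file, api_key can name an environment variable to resolve at startup. A minimal sketch, reusing the $ANTHROPIC_API_KEY variable from the Anthropic example further down:

{
  "backend": {
    "type": "external",
    "provider": "anthropic",
    "api_key": "$ANTHROPIC_API_KEY"
  }
}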

Model

| Key | Default | Description |
|-----|---------|-------------|
| default_model | "qwen2.5-coder:14b" | Default model for all agents (per-agent overrides via /agents swap) |
| ollama_base_url | "http://localhost:11434" | Ollama API endpoint (shorthand for backend.endpoint; see the sketch below) |
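
With the shorthand, pointing at a non-default Ollama host takes two top-level keys instead of a full backend block. A sketch, with an illustrative host address:

{
  "ollama_base_url": "http://192.168.1.50:11434",
  "default_model": "qwen2.5-coder:14b"
}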

Agentic loop

| Key | Default | Description |
|-----|---------|-------------|
| max_turns | 50 | Max tool-use iterations per turn (0 = default of 50) |
| bash_timeout_secs | 120 | Timeout (seconds) for bash tool commands |
| allowed_tools | [] | Tool whitelist; empty means all tools allowed |
| disallowed_tools | [] | Tool blacklist |
| diff_review_mode | "auto" | When to show diffs: "always", "never", "auto" |
| git_stage_on_write | false | Auto-stage files after writing |
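
Putting these together, a locked-down loop might cap iterations, whitelist a single tool, and always surface diffs. A sketch; "bash" is the only tool identifier named in this section, so the whitelist entry is an assumption:

{
  "max_turns": 20,
  "allowed_tools": ["bash"],
  "diff_review_mode": "always",
  "git_stage_on_write": true
}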

Context & memory

| Key | Default | Description |
|-----|---------|-------------|
| context_limit_kb | 128 | Max context window in KB |
| compact_mode | "auto" | Context compaction: "auto", "never", "always" |
| compact_trigger | 0.8 | Fill ratio (0.0–1.0) that triggers compaction |
| notepads_enabled | false | Enable persistent notepads |
| vision_enabled | false | Enable image/screenshot input |
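
For long sessions you might raise the context budget and compact earlier than the default 0.8 fill ratio. The values below are illustrative, not recommendations:

{
  "context_limit_kb": 256,
  "compact_mode": "auto",
  "compact_trigger": 0.7,
  "notepads_enabled": true
}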

Web UI

| Key | Default | Description |
|-----|---------|-------------|
| web.port | 8421 | HTTP port (0 = dynamic) |
| web.auto_open | false | Open browser automatically on huginn tray |
| web.bind | "127.0.0.1" | Bind address |
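
Assuming the web.* keys nest under a web object the way the backend.* keys do (not shown in the example config above), serving the UI on all interfaces with a dynamically chosen port would look roughly like:

{
  "web": {
    "port": 0,
    "auto_open": true,
    "bind": "0.0.0.0"
  }
}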

Providers

Managed (no external deps)

{ "backend": { "type": "managed" } }

Huginn downloads and manages its own llama.cpp runtime and walks you through picking a model on first run.
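
If you already know which model you want, backend.builtin_model (documented in the Backend table above) should let you pre-select it; the model name here is illustrative:

{
  "backend": {
    "type": "managed",
    "builtin_model": "qwen2.5-coder:14b"
  }
}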

Ollama

{
  "backend": {
    "type": "external",
    "provider": "ollama",
    "endpoint": "http://localhost:11434"
  }
}

Anthropic

{
  "backend": {
    "type": "external",
    "provider": "anthropic",
    "endpoint": "https://api.anthropic.com",
    "api_key": "$ANTHROPIC_API_KEY"
  },
  "default_model": "claude-sonnet-4-6"
}

OpenRouter (200+ models)

{
  "backend": {
    "type": "external",
    "provider": "openrouter",
    "api_key": "$OPENROUTER_API_KEY"
  },
  "default_model": "anthropic/claude-sonnet-4-6"
}

OpenAI

{
  "backend": {
    "type": "external",
    "provider": "openai",
    "endpoint": "https://api.openai.com/v1",
    "api_key": "$OPENAI_API_KEY"
  }
}

Switching models at runtime

Use /switch-model inside Huginn, or just ask in natural language:

use deepseek-r1 for reasoning
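
If /switch-model accepts the target model name as an argument (an assumption from the command's name, not confirmed above), the explicit form would be:

/switch-model deepseek-r1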