Configuration
Huginn reads its configuration from `~/.huginn/config.json`. The file is created automatically on first run with safe defaults, and all fields are optional.
Example config
```json
{
  "backend": {
    "type": "external",
    "provider": "ollama",
    "endpoint": "http://localhost:11434"
  },
  "default_model": "qwen2.5-coder:14b",
  "context_limit_kb": 128,
  "max_turns": 50,
  "git_stage_on_write": false,
  "diff_review_mode": "auto",
  "allowed_tools": [],
  "disallowed_tools": []
}
```
Options
Backend
| Key | Default | Description |
|---|---|---|
| backend.type | "external" | "external" (Ollama / cloud API) or "managed" (built-in llama.cpp) |
| backend.provider | "ollama" | "ollama", "anthropic", "openai", or "openrouter" |
| backend.endpoint | "http://localhost:11434" | API endpoint (external backends) |
| backend.api_key | "" | Literal key, or "$ENV_VAR" to resolve from the environment at startup (see the example after this table) |
| backend.builtin_model | "" | Model name when type is "managed" |
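A minimal sketch of the environment-variable form for api_key; MY_API_KEY is a placeholder for whatever variable you export in your shell, not a name Huginn expects:

```json
{
  "backend": {
    "type": "external",
    "provider": "openai",
    "api_key": "$MY_API_KEY"
  }
}
```

Since the key is resolved at startup, the config file itself never needs to contain the secret.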
Model
| Key | Default | Description |
|---|---|---|
| default_model | "qwen2.5-coder:14b" | Default model for all agents (per-agent overrides via /agents swap) |
| ollama_base_url | "http://localhost:11434" | Ollama API endpoint (shorthand for backend.endpoint) |
Agentic loop
| Key | Default | Description |
|---|---|---|
| max_turns | 50 | Max tool-use iterations per turn (0 = default of 50) |
| bash_timeout_secs | 120 | Timeout in seconds for bash tool commands |
| allowed_tools | [] | Tool whitelist; an empty list allows all tools (see the example after this table) |
| disallowed_tools | [] | Tool blacklist |
| diff_review_mode | "auto" | When to show diffs: "always", "never", or "auto" |
| git_stage_on_write | false | Auto-stage files after writing |
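A sketch of a locked-down loop, as referenced in the allowed_tools row above. The tool names here are illustrative placeholders; the actual identifiers depend on your Huginn build:

```json
{
  "max_turns": 25,
  "bash_timeout_secs": 300,
  "allowed_tools": ["read", "grep", "edit"],
  "diff_review_mode": "always"
}
```

Listing anything in allowed_tools switches from "everything permitted" to "only these", so combine it with diff_review_mode: "always" if you want every write reviewed.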
Context & memory
| Key | Default | Description |
|---|---|---|
| context_limit_kb | — | Max context window in KB |
| compact_mode | "auto" | Context compaction: "auto", "never", or "always" |
| compact_trigger | 0.8 | Fill ratio (0.0–1.0) that triggers compaction |
| notepads_enabled | false | Enable persistent notepads |
| vision_enabled | false | Enable image/screenshot input |
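For example, to compact earlier when running a small local model, you might cap the context and lower the trigger. A sketch; the 64 KB limit and 0.7 ratio are illustrative values, not recommendations:

```json
{
  "context_limit_kb": 64,
  "compact_mode": "auto",
  "compact_trigger": 0.7,
  "notepads_enabled": true
}
```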
Web UI
| Key | Default | Description |
|---|---|---|
| web.port | 8421 | HTTP port (0 = pick a free port dynamically) |
| web.auto_open | false | Open the browser automatically on huginn tray |
| web.bind | "127.0.0.1" | Bind address |
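For instance, to serve the UI on a dynamically chosen port and open it automatically (a sketch; note that binding to anything other than 127.0.0.1 exposes the UI to your network):

```json
{
  "web": {
    "port": 0,
    "auto_open": true,
    "bind": "127.0.0.1"
  }
}
```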
Providers
Managed (no external deps)
{ "backend": { "type": "managed" } }
Huginn downloads and manages its own llama.cpp runtime and walks you through picking a model on first run.
Ollama
```json
{
  "backend": {
    "type": "external",
    "provider": "ollama",
    "endpoint": "http://localhost:11434"
  }
}
```
Anthropic
```json
{
  "backend": {
    "type": "external",
    "provider": "anthropic",
    "endpoint": "https://api.anthropic.com",
    "api_key": "$ANTHROPIC_API_KEY"
  },
  "default_model": "claude-sonnet-4-6"
}
```
OpenRouter (200+ models)
```json
{
  "backend": {
    "type": "external",
    "provider": "openrouter",
    "api_key": "$OPENROUTER_API_KEY"
  },
  "default_model": "anthropic/claude-sonnet-4-6"
}
```
OpenAI
```json
{
  "backend": {
    "type": "external",
    "provider": "openai",
    "endpoint": "https://api.openai.com/v1",
    "api_key": "$OPENAI_API_KEY"
  }
}
```
Switching models at runtime
Use /switch-model inside Huginn, or just ask in natural language:
use deepseek-r1 for reasoning
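Assuming /switch-model takes the model name as its argument (a sketch; the exact syntax may differ in your build):

```
/switch-model deepseek-r1
```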