# Configuration

Huginn reads its configuration from `~/.huginn/config.json`. The file is created automatically on first run.
## Example config

```json
{
  "backend": {
    "type": "external",
    "endpoint": "http://localhost:11434"
  },
  "models": {
    "planner": "qwen3-coder:30b",
    "coder": "qwen2.5-coder:14b",
    "reasoner": "qwen3-coder:30b"
  },
  "context_limit_kb": 128,
  "git_stage_on_write": false,
  "allowed_tools": [],
  "disallowed_tools": []
}
```
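As a sketch of how such a file might be consumed (an illustration, not Huginn's actual implementation), a loader could merge the file's contents over the documented defaults:

```python
import json
from pathlib import Path

# Illustrative defaults, copied from the option table in this document.
DEFAULTS = {
    "backend": {"type": "external", "endpoint": "http://localhost:11434"},
    "models": {
        "planner": "qwen3-coder:30b",
        "coder": "qwen2.5-coder:14b",
        "reasoner": "qwen3-coder:30b",
    },
    "context_limit_kb": 128,
    "git_stage_on_write": False,
    "allowed_tools": [],
    "disallowed_tools": [],
}

def load_config(path=Path.home() / ".huginn" / "config.json") -> dict:
    """Read config.json if present; missing top-level keys fall back to defaults."""
    config = {**DEFAULTS}
    path = Path(path)
    if path.exists():
        config.update(json.loads(path.read_text()))
    return config
```

Any top-level key you omit from the file keeps its default, so a minimal config can set only what differs.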
## Options

| Key | Default | Description |
|---|---|---|
| `backend.type` | `external` | Backend provider: `managed`, `external`, `anthropic`, `openrouter`, `openai` |
| `backend.endpoint` | `http://localhost:11434` | API endpoint (used by `external` and `openai`) |
| `backend.api_key` | `""` | API key, or a `$ENV_VAR` reference |
| `models.planner` | `qwen3-coder:30b` | Model used for planning |
| `models.coder` | `qwen2.5-coder:14b` | Model used for implementation |
| `models.reasoner` | `qwen3-coder:30b` | Model used for `/reason` |
| `context_limit_kb` | `128` | Maximum context window, in KB |
| `git_stage_on_write` | `false` | Auto-stage files after writing |
| `allowed_tools` | `[]` | Whitelist of tool names (empty = all allowed) |
| `disallowed_tools` | `[]` | Blacklist of tool names |
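For example, to restrict Huginn to a whitelisted set of tools, list only their names in `allowed_tools` (the tool names below are illustrative; consult your build's actual tool list):

```json
{
  "allowed_tools": ["read_file", "grep", "list_dir"],
  "disallowed_tools": []
}
```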
## Providers

### Managed (no external dependencies)

```json
{ "backend": { "type": "managed" } }
```

Huginn downloads and manages its own llama.cpp runtime.

### Ollama

```json
{ "backend": { "type": "external", "endpoint": "http://localhost:11434" } }
```

### Anthropic

```json
{ "backend": { "type": "anthropic", "api_key": "$ANTHROPIC_API_KEY" } }
```

### OpenRouter

```json
{ "backend": { "type": "openrouter", "api_key": "$OPENROUTER_API_KEY" } }
```

### OpenAI

```json
{ "backend": { "type": "openai", "endpoint": "https://api.openai.com/v1", "api_key": "$OPENAI_API_KEY" } }
```
API keys prefixed with `$` are resolved from environment variables at startup.
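The resolution rule can be sketched as follows (assumed behavior inferred from the sentence above, not Huginn's actual code): a value starting with `$` is looked up in the environment; anything else is used literally.

```python
import os

def resolve_api_key(value: str) -> str:
    """Resolve a config api_key value: "$NAME" -> os.environ["NAME"], else literal."""
    if value.startswith("$"):
        # Unset variables resolve to an empty string in this sketch.
        return os.environ.get(value[1:], "")
    return value
```

With this rule, `"$ANTHROPIC_API_KEY"` in the config resolves to whatever that variable holds in the shell that launched Huginn, so the key never needs to be written into the file itself.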
## Switching models at runtime

Use `/switch-model` inside Huginn, or use natural language:

```
use deepseek-r1 for reasoning
```