# Models
Huginn uses a single configurable default model for all activity. Each named agent can override that default with its own model, so your team can still run different models for different roles.
## Default model
Set the global default in `~/.huginn/config.json`:

```json
{
  "default_model": "qwen2.5-coder:14b"
}
```

All agents fall back to this model unless they have their own override.
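The fallback rule above is simple to state precisely. A minimal sketch, assuming illustrative field names (`model`, `default_model`) rather than Huginn's actual internal API:

```python
# Sketch of the resolution rule: an agent's own model wins;
# otherwise the global default from config.json applies.
def resolve_model(agent: dict, config: dict) -> str:
    """Return the agent's model override, falling back to the global default."""
    return agent.get("model") or config["default_model"]

config = {"default_model": "qwen2.5-coder:14b"}
print(resolve_model({"name": "Steve", "model": "qwen2.5-coder:32b"}, config))
# qwen2.5-coder:32b  (agent override)
print(resolve_model({"name": "Alice"}, config))
# qwen2.5-coder:14b  (falls back to the default)
```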
## Per-agent model override
When you create or configure a named agent, you can assign it a specific model:

```
/agents swap Steve qwen2.5-coder:32b
```

Or set it at creation time. The agent's model persists across sessions in `~/.huginn/agents.json`.
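The persisted file might look roughly like this; the field names are an assumption for illustration, not a documented schema:

```json
{
  "agents": [
    { "name": "Steve", "model": "qwen2.5-coder:32b" }
  ]
}
```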
## Switching at runtime
Use `/switch-model` inside Huginn, or natural language:

```
use deepseek-r1 for this session
```
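The slash-command form presumably takes the model name as an argument, mirroring the `/agents swap` syntax above; the exact syntax is an assumption:

```
/switch-model deepseek-r1
```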
## Supported providers
### Local (managed runtime or Ollama)
Recommended models by hardware:
| RAM | Model |
|---|---|
| 8 GB | qwen2.5-coder:7b |
| 16 GB | qwen2.5-coder:14b |
| 32 GB+ | qwen2.5-coder:32b or qwen3-coder:30b |
### Anthropic

```json
{
  "backend": {
    "type": "external",
    "provider": "anthropic",
    "endpoint": "https://api.anthropic.com",
    "api_key": "$ANTHROPIC_API_KEY"
  },
  "default_model": "claude-sonnet-4-6"
}
```
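The `"$ANTHROPIC_API_KEY"` value suggests the key is read from your shell environment rather than stored in the config file; assuming that is the case, export it before launching Huginn (the key value below is a placeholder):

```shell
# Keep the real key out of config.json; Huginn reads it from the environment.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
```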
### OpenRouter

Access 200+ models from a single API key:

```json
{
  "backend": {
    "type": "external",
    "provider": "openrouter",
    "api_key": "$OPENROUTER_API_KEY"
  },
  "default_model": "anthropic/claude-sonnet-4-6"
}
```
You can mix providers per agent: for example, use a cloud model for one specialist and a local model for another.
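A mixed setup could look like the sketch below, where one agent uses an OpenRouter-routed cloud model and another a local one. The schema is an assumption based on the per-agent overrides described earlier:

```json
{
  "agents": [
    { "name": "Architect", "model": "anthropic/claude-sonnet-4-6" },
    { "name": "Runner", "model": "qwen2.5-coder:7b" }
  ]
}
```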
### OpenAI

```json
{
  "backend": {
    "type": "external",
    "provider": "openai",
    "endpoint": "https://api.openai.com/v1",
    "api_key": "$OPENAI_API_KEY"
  },
  "default_model": "gpt-4o"
}
```