# Installation

Huginn ships as a single static binary. No runtime dependencies are required: it can manage its own inference engine and models.

## macOS

```sh
curl -fsSL https://huginn.sh/install.sh | sh
```

Homebrew tap coming soon.

## Linux

```sh
curl -fsSL https://huginn.sh/install.sh | sh
```

Manual:

```sh
wget https://github.com/scrypster/huginn/releases/latest/download/huginn-linux-amd64
chmod +x huginn-linux-amd64
sudo mv huginn-linux-amd64 /usr/local/bin/huginn
```

## Windows

```powershell
irm https://huginn.sh/install.ps1 | iex
```

Manual: Download `huginn-windows-amd64.exe` from GitHub Releases and add it to your `PATH`.

Scoop package coming soon.

## Build from source

Requires Go 1.25+:

```sh
git clone https://github.com/scrypster/huginn
cd huginn
go build -tags embed_frontend -o huginn .
```

## Verify

```sh
huginn --version
```

## What Huginn creates on first run

When you run `huginn` for the first time, it sets up `~/.huginn/`:

| Path | What |
|------|------|
| `~/.huginn/config.json` | Your configuration (auto-created with defaults) |
| `~/.huginn/agents/` | Named agent configs |
| `~/.huginn/sessions/` | Session history |
| `~/.huginn/skills/` | Installed skills |
| `~/.huginn/bin/llama-server` | Inference runtime (managed mode only) |
| `~/.huginn/models/*.gguf` | Downloaded models (managed mode only) |

## Managed mode (no Ollama required)

Huginn can manage its own inference runtime (llama.cpp) and models with no Ollama dependency. Enable managed mode in `~/.huginn/config.json`:

```json
{
  "backend": {
    "type": "managed"
  }
}
```

On first run with managed mode, Huginn downloads the llama-server runtime and walks you through picking and downloading a model.

## Using Ollama

With Ollama installed and running, pull a model and start Huginn:

```sh
ollama pull qwen2.5-coder:14b
huginn
```

Or point at any OpenAI-compatible endpoint:

```json
{
  "backend": {
    "type": "external",
    "provider": "ollama",
    "endpoint": "http://localhost:11434"
  }
}
```

Huginn works with any OpenAI-compatible endpoint — Ollama, LM Studio, vLLM, or a remote server.
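As a concrete sketch for one of those alternatives: LM Studio's local server listens on `http://localhost:1234` by default and exposes an OpenAI-compatible API under `/v1`. Reusing the `external` backend shape shown above, a config along these lines should work (the `"provider": "openai"` value here is an assumption, chosen because LM Studio speaks the OpenAI protocol):

```json
{
  "backend": {
    "type": "external",
    "provider": "openai",
    "endpoint": "http://localhost:1234/v1"
  }
}
```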

## Cloud providers

Set the backend in `~/.huginn/config.json`:

Anthropic:

```json
{
  "backend": {
    "type": "external",
    "provider": "anthropic",
    "endpoint": "https://api.anthropic.com",
    "api_key": "$ANTHROPIC_API_KEY"
  }
}
```

OpenRouter (200+ models):

```json
{
  "backend": {
    "type": "external",
    "provider": "openrouter",
    "api_key": "$OPENROUTER_API_KEY"
  }
}
```

OpenAI:

```json
{
  "backend": {
    "type": "external",
    "provider": "openai",
    "endpoint": "https://api.openai.com/v1",
    "api_key": "$OPENAI_API_KEY"
  }
}
```

API keys starting with `$` are resolved from environment variables automatically.
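That resolution behaves like an ordinary environment lookup. As a hypothetical shell illustration of the idea (this is not Huginn's actual code):

```shell
# Sketch of how a "$VAR" api_key value could resolve from the environment.
export ANTHROPIC_API_KEY="sk-example"

key='$ANTHROPIC_API_KEY'        # the value as it appears in config.json
name="${key#\$}"                # strip the leading "$" -> ANTHROPIC_API_KEY
resolved="$(printenv "$name")"  # read the variable from the environment

echo "$resolved"                # prints sk-example
```

Keeping the literal `$NAME` string in `config.json` means the secret itself never lands on disk.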

## Hardware requirements

| RAM | Recommended model |
|-----|-------------------|
| 8 GB | `qwen2.5-coder:7b` (4.7 GB) |
| 16 GB | `qwen2.5-coder:14b` (8.1 GB) |
| 32 GB+ | `qwen2.5-coder:32b` (18 GB) |

GPU acceleration (Metal on macOS, CUDA on Linux) is used automatically when available.
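To see which row of the table applies to your machine, a best-effort check of total RAM (reading `/proc/meminfo` on Linux and `sysctl hw.memsize` on macOS):

```shell
# Print total RAM in GiB to match against the table above.
if [ -r /proc/meminfo ]; then
  # Linux: MemTotal is reported in kB
  kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  gib=$((kb / 1024 / 1024))
else
  # macOS: hw.memsize is reported in bytes
  bytes=$(sysctl -n hw.memsize)
  gib=$((bytes / 1024 / 1024 / 1024))
fi
echo "${gib} GiB"
```

Note the model sizes in the table are on-disk quantized sizes; leave headroom for the KV cache and your other applications.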