OpenClaw + Ollama: ollama launch vs openclaw gateway start

February 26, 2026 - Reading time: ~1 minute

When running OpenClaw with Ollama locally, there is a measurable difference between the two startup methods.

ollama launch openclaw

Observed behavior:
 • Modifies ~/.openclaw/openclaw.json
 • Registers multiple models
 • Sets agent-oriented defaults
 • Uses large context windows (e.g. 40k+ tokens)
 • Higher RAM usage
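
As a rough sketch of what that implicit reconfiguration could look like: the field names below are illustrative assumptions, not the documented openclaw.json schema, which varies by version.

```json
{
  "provider": "ollama",
  "models": ["qwen3:8b"],
  "defaults": {
    "context_window": 40960,
    "agent_mode": true
  }
}
```

The point is that these values are written for you, rather than taken from an existing config, which explains the RAM jump below: a 40k+ context window alone forces a much larger KV cache allocation than the Ollama defaults.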

Example (qwen3:8b on macOS):
 • ollama run qwen3:8b → ~5.5 GB RAM, very fast
 • ollama launch openclaw → ~10+ GB RAM, noticeably slower

openclaw gateway start

Observed behavior:
 • Uses existing configuration
 • Does not auto-register models
 • No implicit reconfiguration
 • More predictable memory footprint

Context window requirement

OpenClaw requires a minimum context window of 16,000 tokens for the Ollama backend.
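
Ollama models often default to a much smaller context window (a few thousand tokens), so meeting that minimum usually means raising num_ctx explicitly. One standard way is a derived model via a Modelfile (the name qwen3-16k below is arbitrary):

```
# Modelfile: derive a qwen3:8b variant with a 16k context window
FROM qwen3:8b
PARAMETER num_ctx 16384
```

Build it with `ollama create qwen3-16k -f Modelfile` and point OpenClaw at the derived model. Note that a larger num_ctx raises memory usage even when the context is not filled.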

Practical note

For controlled local setups, start the gateway directly with openclaw gateway start and manage models explicitly with ollama pull, instead of relying on ollama launch and its implicit reconfiguration.
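
The explicit workflow, assuming the Ollama daemon is already running locally, boils down to:

```shell
# Fetch the model yourself instead of letting the launcher register models
ollama pull qwen3:8b

# Start the gateway against the existing config; nothing is rewritten
openclaw gateway start
```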
