When Linus Torvalds released the Linux kernel in 1991, he famously called it “just a hobby, won’t be big and professional.” Thirty-five years later, it runs roughly 96% of the world’s top million web servers. OpenClaw is having a similar moment. What started as a weekend hack by Peter Steinberger now has 60,000+ GitHub stars and enterprise backing from Nvidia and AWS. The difference? You can have it running on your own server in under ten minutes.
This guide walks you through a production-ready Docker Compose deployment. Not the “it works on my laptop” kind — the kind that survives restarts, keeps your data safe, and doesn’t leak your API keys to the internet.
OpenClaw in Docker gives you process isolation, reproducible builds, and one-command updates. If you’ve self-hosted n8n with Docker before, you already know the playbook.
## Hardware You Actually Need
What you’ll learn:
- Minimum specs vs recommended specs
- When a $4 VPS is enough
- GPU requirements (spoiler: probably none)
OpenClaw’s gateway is a Node.js process. It doesn’t crunch numbers — it routes messages to AI providers and manages tool execution. That means the hardware bar is surprisingly low.
| Spec | Minimum | Recommended |
|---|---|---|
| RAM | 2 GB | 4 GB |
| Disk | 10 GB | 20 GB+ |
| CPU | 1 vCPU | 2 vCPU |
The 2GB minimum is a hard floor. During `docker build`, the `pnpm install` step will get OOM-killed on 1GB hosts. I’ve watched it happen on a $3.50 DigitalOcean droplet — the build silently dies and you stare at logs wondering what went wrong.
At runtime the story changes. The container idles at roughly 256MB for a single agent. A $4/month Hetzner CX22 handles it fine.
### Do you need a GPU?
No — unless you want to run local models through Ollama. If you’re connecting OpenClaw to Claude, GPT, or Gemini through their APIs, the gateway is pure I/O. A Raspberry Pi 5 can handle it.
For local models, an 8GB VRAM card covers 7-8B parameter models at roughly 40 tokens per second. But that’s a separate container and a separate conversation.
## The Docker Compose File
What you’ll learn:
- Copy-paste-ready `docker-compose.yml`
- Volume mounts explained
- Port mapping and networking
Here’s the production-ready compose file. Copy it, adjust the env vars, and you’re running.
```yaml
services:
  openclaw:
    image: ghcr.io/openclaw/openclaw:latest
    container_name: openclaw
    restart: unless-stopped
    ports:
      - "18789:18789"
    volumes:
      - ./openclaw-config:/root/.openclaw
      - ./openclaw-workspace:/root/openclaw/workspace
    env_file:
      - .env
    environment:
      - NODE_ENV=production
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:18789/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```
### What the volumes do
Two directories matter:

- `./openclaw-config:/root/.openclaw` — configuration, memory files, API keys, and `openclaw.json`. This is the brain. Lose this and your agent forgets everything.
- `./openclaw-workspace:/root/openclaw/workspace` — files the agent can read, write, and manipulate. Think of it as the agent’s desk.
Both mount to the host filesystem. That means you can back them up with rsync, version them with Git, or inspect them with any text editor. OpenClaw stores memory as plain Markdown — no proprietary database, no binary blobs.
### Port 18789
The web UI runs on port 18789 by default. In production, you’ll put this behind a reverse proxy (Nginx or Caddy) with TLS. Never expose 18789 directly to the internet.
## Environment Variables
What you’ll learn:
- Required vs optional env vars
- How OpenClaw resolves config from multiple sources
- Securing your `.env` file
Create a `.env` file in the same directory as your `docker-compose.yml`. Here’s what matters.
```bash
# Required: your AI provider
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Required: gateway authentication token
GATEWAY_TOKEN=your-secure-random-token

# Optional: model selection
MODEL_VERSION=claude-sonnet-4-20250514

# Optional: messaging platforms
WHATSAPP_ENABLED=true
TELEGRAM_BOT_TOKEN=your-telegram-token

# Optional: cost controls
MAX_MONTHLY_SPEND=50
LOOP_DETECTION=true
```
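The `GATEWAY_TOKEN` should be long and random. One way to generate one, assuming `openssl` is installed on the host:

```shell
# 32 random bytes, hex-encoded: a 64-character token
openssl rand -hex 32
```

Paste the output into `.env` as the value of `GATEWAY_TOKEN`.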
OpenClaw resolves environment variables from four sources and never overrides existing values. The priority order runs like this:

1. Process environment (from the Docker runtime)
2. `.env` in the working directory
3. Global `.env` at `~/.openclaw/.env`
4. Config block in `openclaw.json`
The golden rule: if a variable already exists, OpenClaw won’t touch it. This prevents your compose file from accidentally overriding something you set at the system level.
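The principle mirrors shell-style default expansion: a value that is already set wins, and a fallback applies only when nothing is set. A quick illustration in plain shell (this sketches the principle, not OpenClaw’s actual resolver):

```shell
# Simulate "never override": apply a default only when the variable is unset
GATEWAY_TOKEN="from-process-env"
GATEWAY_TOKEN="${GATEWAY_TOKEN:-from-dotenv-file}"
echo "$GATEWAY_TOKEN"    # prints "from-process-env": the earlier source wins

unset MODEL_VERSION
MODEL_VERSION="${MODEL_VERSION:-claude-sonnet-4-20250514}"
echo "$MODEL_VERSION"    # unset, so the fallback applies
```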
Lock down your `.env` file. It contains API keys worth real money. Run `chmod 600 .env` and make sure it’s in your `.gitignore`. An exposed `ANTHROPIC_API_KEY` can drain hundreds of dollars overnight if someone finds it.
```bash
chmod 600 .env
echo ".env" >> .gitignore
```
## Production Hardening
What you’ll learn:
- Reverse proxy with TLS
- Skill vetting and security
- Backup strategy for agent memory
Getting OpenClaw running is step one. Keeping it running safely is step two. Here’s what separates a toy deployment from a production one.
### Reverse proxy
Never expose OpenClaw’s port directly. Use Caddy for automatic HTTPS — it handles certificate renewal without configuration.
```
# Caddyfile
openclaw.yourdomain.com {
    reverse_proxy localhost:18789
}
```
Three lines. Caddy obtains and renews Let’s Encrypt certificates automatically.
### Skill security
This is the big one. The OpenClaw marketplace has 800+ community skills — and over 800 malicious ones have been discovered. A critical RCE vulnerability (CVE-2026-25253) was patched in February 2026 but not before it was exploited in the wild.
- Audit every skill before installing — read the `SKILL.md` and any scripts it includes
- Pin skill versions — don’t auto-update from the marketplace
- Run with least privilege — create a dedicated non-root user inside the container
- Network isolation — the agent should never reach your production databases
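One way to apply the least-privilege point in Compose is the `user:` directive plus dropped capabilities. A sketch, assuming the image tolerates a non-root UID (you may also need to `chown` the mounted volumes to match, and adjust the volume paths away from `/root`):

```yaml
services:
  openclaw:
    user: "1000:1000"    # run as an unprivileged UID:GID
    cap_drop:
      - ALL              # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true
```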
### Backups
OpenClaw stores everything as files. Back them up like files.
```bash
# Daily backup of config and workspace
tar -czf "openclaw-backup-$(date +%Y%m%d).tar.gz" \
  ./openclaw-config ./openclaw-workspace
```
Add this to a cron job. Rotate weekly. If you’re running on a VPS, most providers offer snapshot backups too — use both. The n8n encryption key guide covers a similar backup philosophy for credentials.
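The rotation half can be a `find -mtime` prune. The crontab line below is a hypothetical example with an assumed install path; note that `%` must be escaped as `\%` inside crontab command fields:

```shell
# Keep only the last 7 days of archives in the current directory
find . -maxdepth 1 -name 'openclaw-backup-*.tar.gz' -mtime +7 -delete

# Crontab entry (edit with `crontab -e`): daily backup at 03:00, then prune
# 0 3 * * * cd /opt/openclaw && tar -czf "openclaw-backup-$(date +\%Y\%m\%d).tar.gz" ./openclaw-config ./openclaw-workspace && find . -maxdepth 1 -name 'openclaw-backup-*.tar.gz' -mtime +7 -delete
```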
### Resource limits
Docker lets you cap CPU and memory. Use it. An agent stuck in a loop will consume everything available if you let it.
```yaml
services:
  openclaw:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: "1.5"
```
## Troubleshooting Common Issues
What you’ll learn:
- Build failures and OOM kills
- Authentication errors on first launch
- Container restart loops
```mermaid
flowchart TD
    A[Container won't start] --> B{Check logs}
    B -->|OOM killed| C[Increase RAM to 2GB+]
    B -->|Auth error| D[Set GATEWAY_TOKEN]
    B -->|Port conflict| E[Change host port]
    B -->|Permission denied| F[Fix volume ownership]
    C --> G[Rebuild and restart]
    D --> G
    E --> G
    F --> G

    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    classDef action fill:#e8f5e8,stroke:#2e7d32
    class A trigger
    class B,C,D,E,F process
    class G action
```
### Build fails silently
If `docker compose up --build` dies without a clear error, it’s almost always RAM. The `pnpm install` step during the image build needs 2GB. Check `dmesg | grep -i oom` on the host — if you see kill entries, upgrade the VPS or add swap.
```bash
# Add 2GB swap (temporary fix)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```
### "Error: authenticate first"
When you hit `http://localhost:18789` and see an authentication error, that’s expected. OpenClaw requires `GATEWAY_TOKEN` to be set. Pass it as a query parameter or configure your reverse proxy to inject it.
### Container restart loops
Check `docker logs openclaw --tail 50`. The usual suspects are missing environment variables (especially `LLM_PROVIDER` and the corresponding API key), corrupted config files in the mounted volume, and port 18789 already in use by another process.
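To check whether another process already holds the port, `ss` from iproute2 works (as does `lsof -i :18789`):

```shell
# Report whether anything is listening on TCP 18789
if ss -ltn 2>/dev/null | grep -q ':18789'; then
  echo "port 18789 in use"
else
  echo "port 18789 free"
fi
```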
For corrupted config, the nuclear option works: delete `./openclaw-config/openclaw.json` and let the container regenerate it on next boot. You’ll lose custom settings but keep your memory files intact.
### Volume permissions
On Linux hosts, Docker volumes sometimes create files as root. If OpenClaw can’t write to its workspace, fix ownership.
```bash
sudo chown -R 1000:1000 ./openclaw-config ./openclaw-workspace
```
The user ID 1000 matches the default non-root user inside most Docker images. Adjust if your image uses a different UID.
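To confirm the UIDs on both sides (the `docker exec` line assumes the `openclaw` container from the compose file above is running):

```shell
# Numeric UID of the current host user; files it creates carry this owner
id -u

# UID inside the running container (uncomment once the container is up)
# docker exec openclaw id -u
```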
Self-hosting OpenClaw with Docker Compose gives you complete control over your AI agent. Your data stays on your hardware. Updates are a single `docker compose pull && docker compose up -d`. And when something breaks, you have the logs, the config files, and the full source code to figure out why.
But let’s be honest. The gap between `docker compose up` and “production-ready agent that handles real work” is wider than most tutorials admit. Skill vetting, cost controls, monitoring, security hardening — it adds up to days of work before the agent earns its keep.
Skip all of this and get agents running in 5 minutes. Book a free Gap Assessment with Agent Gap — we handle the deployment, security, and ongoing management so you just use the result.