In November 2025, an Austrian developer named Peter Steinberger sat down and built something in about an hour. A personal AI assistant that lived in his WhatsApp. It could research topics, draft emails, write code, and automate tasks — all from a text message. He called it Clawdbot and pushed it to GitHub.
Two months later, 60,000 developers starred it in 72 hours. Nvidia built an enterprise platform on top of it. AWS launched managed hosting. And Steinberger got a call from Sam Altman.
That project is now called OpenClaw. Here’s what it actually is and whether you should care.
OpenClaw is an open-source AI agent that runs on your machine and acts through your everyday apps. Think of it as a personal employee that works 24/7, lives in your chat apps, and never asks for time off.
The Short Version
What you’ll learn:
- What OpenClaw does in plain English
- How it connects to AI models and your tools
- The naming drama (Clawdbot → Moltbot → OpenClaw)
OpenClaw is a local AI agent framework. That means it runs on your computer (or server) and connects to large language models — Claude, GPT, Gemini, Llama — to perform tasks on your behalf.
But it’s not just a chatbot. It’s an agent. The difference matters.
- A chatbot answers questions
- An agent takes action
OpenClaw can send emails, control your browser, read and write files, manage your calendar, run shell commands, and execute code. All triggered by a message in WhatsApp, Telegram, Slack, Discord, or any of its 12+ supported platforms.
It wraps everything in a persistent daemon with session management, memory that persists between runs, and a heartbeat scheduler that keeps it alive even when you’re asleep.
How It Actually Works
What you’ll learn:
- The architecture in 30 seconds
- Skills system explained
- What “local-first” means for your data
The core loop is simple:
```mermaid
flowchart TD
    U[You send a message] --> G[OpenClaw Gateway]
    G --> L[AI Model]
    L --> D{Decision}
    D -->|Act| T[Run Tool / Skill]
    D -->|Reply| R[Send Response]
    T --> R
    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    classDef action fill:#e8f5e8,stroke:#2e7d32
    class U trigger
    class G,L,D process
    class T,R action
```
You send a text. OpenClaw routes it to an AI model. The model decides what to do — reply, run a command, search the web, or trigger a skill. The result comes back to your chat.
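That loop can be sketched in a few lines of Python. Everything here is illustrative: `handle_message`, `call_model`, and `run_skill` are hypothetical names for this sketch, not OpenClaw's actual API.

```python
# Illustrative sketch of the route -> decide -> act loop.
# A real gateway sends chat history plus tool descriptions to an LLM;
# here call_model is a trivial stand-in so the flow is visible.

def call_model(message: str) -> dict:
    # Stand-in for the AI model's decision step.
    if message.startswith("run:"):
        return {"action": "tool", "tool": "shell", "args": message[4:]}
    return {"action": "reply", "text": f"Echo: {message}"}

def run_skill(tool: str, args: str) -> str:
    # Stand-in for dispatching to a skill directory.
    return f"[{tool}] executed with args: {args}"

def handle_message(message: str) -> str:
    decision = call_model(message)
    if decision["action"] == "tool":
        # In the real loop, the tool result goes back to your chat.
        return run_skill(decision["tool"], decision["args"])
    return decision["text"]

print(handle_message("hello"))       # a plain reply
print(handle_message("run:uptime"))  # routed to a tool
```

The important structural point is that the model, not your code, chooses the branch: the gateway just executes whatever the decision says and relays the result.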
Skills are the building blocks. Each skill is a directory with a SKILL.md file containing instructions for what the agent can do. Think of them like plugins — there are skills for browsing the web, managing files, sending emails, and hundreds more.
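For a feel of the format, a hypothetical skill living at `skills/weather/SKILL.md` might contain something like the following. The frontmatter fields shown are assumptions for this sketch; check the OpenClaw documentation for the actual SKILL.md schema.

```markdown
---
name: weather
description: Fetch the forecast when the user asks about the weather.
---

When the user asks about the weather, call the forecast tool with the
requested city and summarize the result in one sentence.
```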
If you’ve worked with n8n agentic workflows, the mental model is similar: a central orchestrator that decides which tools to call based on context. The difference is OpenClaw runs as a persistent daemon connected to your messaging apps, not as a workflow triggered by events.
Local-first means your data stays on your machine. Memory is stored as Markdown files on disk. No cloud database. No third-party storage. If you unplug the server, your data goes with it.
This matters more than most people realize. Every conversation, every decision the agent makes, every piece of context it remembers — it’s all files you own. You can back them up, version them with Git, or wipe them in seconds. Compare that to cloud-based agents where your data lives on someone else’s servers and you hope they don’t get breached.
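To make "memory is just files" concrete, here is a minimal sketch of the idea in Python. The `memory/` layout and helper names are assumptions for illustration, not OpenClaw's actual on-disk schema.

```python
# Sketch of memory-as-Markdown: every remembered note is a line in a
# plain file you can read, grep, back up, or version with Git.
from pathlib import Path

MEMORY_DIR = Path("memory")

def remember(topic: str, note: str) -> None:
    # Append a note to the topic's Markdown file, creating it if needed.
    MEMORY_DIR.mkdir(exist_ok=True)
    with (MEMORY_DIR / f"{topic}.md").open("a") as f:
        f.write(f"- {note}\n")

def recall(topic: str) -> str:
    # Return everything remembered under a topic, or nothing.
    path = MEMORY_DIR / f"{topic}.md"
    return path.read_text() if path.exists() else ""

remember("preferences", "User prefers replies under 100 words")
print(recall("preferences"))
```

Wiping the agent's memory is `rm -rf memory/`; auditing it is opening a text editor.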
The agent sidecar pattern works especially well here — you can run OpenClaw alongside your existing tools without replacing anything. The agent augments your workflow rather than taking it over.
What It Costs to Run
What you’ll learn:
- Hardware options from $0 to $1,200
- API costs: the real expense
- When you need a GPU (and when you don’t)
Here’s what trips people up: OpenClaw itself is free. MIT-licensed, open-source, costs nothing to install. The cost comes from two places — hardware and AI model API usage.
Hardware
| Setup | Cost | Best For |
|---|---|---|
| Oracle Cloud free tier | $0 | Testing, light personal use |
| VPS (Hetzner, DigitalOcean) | $4-8/mo | Stable daily agent, no GPU |
| Raspberry Pi 5 (8GB) | ~$80 one-time | Home server, always-on |
| Mac Mini M4 (16GB) | $499-599 | Cloud API + small local models |
| Mac Mini M4 (32GB) | $1,199 | Local models + OpenClaw together |
The key insight: if you connect OpenClaw to cloud AI models (Claude, GPT), you don’t need a GPU. The gateway is pure Node.js — it just routes messages. A $4/month VPS handles it fine.
You only need a GPU if you want to run local models through Ollama. An 8GB GPU handles 7-8B parameter models at ~40 tokens/second. A 16GB card covers 13B models comfortably.
API Costs
This is where it gets real. OpenClaw sends your messages to AI models and those models charge per token. The community reports:
- Light use (10-20 messages/day): $5-15/month
- Heavy use (agent loops, research, automation): $30-100/month
- Danger zone: agent loops that run unsupervised can drain hundreds of dollars overnight
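A quick back-of-the-envelope shows where the light-use figure comes from. The per-token prices below are assumed placeholders for the sketch, not any vendor's current rates; plug in your provider's actual pricing.

```python
# Rough monthly API cost estimate for a chat-driven agent.
PRICE_PER_MTOK_IN = 3.00    # assumed $/million input tokens
PRICE_PER_MTOK_OUT = 15.00  # assumed $/million output tokens

def monthly_cost(msgs_per_day: int, tokens_in_per_msg: int,
                 tokens_out_per_msg: int, days: int = 30) -> float:
    tokens_in = msgs_per_day * tokens_in_per_msg * days
    tokens_out = msgs_per_day * tokens_out_per_msg * days
    return (tokens_in / 1e6) * PRICE_PER_MTOK_IN + \
           (tokens_out / 1e6) * PRICE_PER_MTOK_OUT

# ~15 messages/day, ~2k tokens of context in, ~500 tokens out:
print(f"light use: ${monthly_cost(15, 2000, 500):.2f}/month")
```

Note how fast this scales: an unsupervised loop firing hundreds of large-context calls per hour multiplies the same arithmetic by orders of magnitude.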
Cost management is non-trivial. It requires setting spend limits, building in loop detection, and choosing the right model for each task. This is one of the areas where working with someone who’s done it before saves real money.
The Naming Drama
What you’ll learn:
- Why it was renamed twice in 48 hours
- Peter Steinberger’s journey from PSPDFKit to OpenAI
- Where the project is headed
The project has had three names in three months. Each rename tells a story.
Clawdbot (November 2025) — Steinberger’s original name. A play on Anthropic’s Claude, with a lobster mascot. Cute and memorable.
Moltbot (January 27, 2026) — Anthropic politely asked for a name change due to trademark concerns. The community ran a chaotic 5am Discord brainstorm and picked Moltbot. Molting — what lobsters do when they outgrow their shell. Fitting.
OpenClaw (January 29, 2026) — Two days later, another rename. This one was voluntary. “Open” emphasized the open-source nature. “Claw” kept the lobster identity without leaning on any vendor’s branding.
The project exploded after the Moltbot rename. 60,000+ GitHub stars in under a week. Nvidia announced NemoClaw (enterprise security layer) at GTC March 2026. AWS launched managed hosting. Microsoft, Kaspersky, and Sophos published security advisories.
On February 14, Steinberger announced he’d be joining OpenAI. The project moved to an open-source foundation. The lobster endures.
Should You Self-Host?
What you’ll learn:
- Who benefits most from OpenClaw
- The gap between “install” and “production-ready”
- When self-hosting isn’t worth it
OpenClaw is powerful. It’s also not plug-and-play for most people.
The install takes about 5 minutes. Getting it production-ready — secure, stable, cost-controlled, with the right skills configured — takes significantly longer. Our Docker Compose setup guide walks through the full process, and the MCP server ecosystem adds capability but also complexity.
| Situation | Verdict |
|---|---|
| You’re a developer who likes self-hosting | Go for it — you’ll love it |
| You’re a founder who wants AI agents but not the ops | Get help (see below) |
| You’re an enterprise evaluating AI agents | Start with NemoClaw or a managed provider |
| You’re curious and want to experiment | Try the Oracle free tier + Claude API |
The security situation is real. 800+ malicious skills have been found in the marketplace. A critical remote code execution vulnerability (CVE-2026-25253) was patched but not before it was exploited. Production deployments need:
- Skill vetting — audit every marketplace skill before installing
- Network isolation — the agent should not reach your production database
- Credential management — dedicated non-privileged accounts, not your personal login
- Cost controls — API spend limits and loop detection to prevent overnight bill surprises
- Monitoring — log every agent action for audit and debugging
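The cost-control and loop-detection items above can be combined into one small guard that sits between the agent and its tools. This is a minimal sketch under assumed names and thresholds, not OpenClaw's built-in controls.

```python
# Sketch of a spend-and-loop guard: refuse a tool call when the run's
# budget is exhausted or the agent keeps repeating the same call.

class BudgetGuard:
    def __init__(self, max_spend_usd: float = 5.0, max_repeats: int = 3):
        self.spent = 0.0
        self.max_spend = max_spend_usd
        self.max_repeats = max_repeats
        self.last_call = None
        self.repeat_count = 0

    def check(self, tool_call: str, cost_usd: float) -> bool:
        """Return True if the agent may proceed with this tool call."""
        self.spent += cost_usd
        if self.spent > self.max_spend:
            return False  # budget exhausted
        if tool_call == self.last_call:
            self.repeat_count += 1
            if self.repeat_count >= self.max_repeats:
                return False  # same call repeated: likely stuck in a loop
        else:
            self.last_call = tool_call
            self.repeat_count = 0
        return True

guard = BudgetGuard(max_spend_usd=1.0)
print(guard.check("search: openclaw docs", 0.02))
```

Wiring every tool dispatch through a check like this is what turns "hundreds of dollars overnight" into a refused call and a log line.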
The install takes 5 minutes; hardening takes hours; maintaining it takes ongoing attention. That gap between "installed" and "production-ready" is where most people get stuck.
For founders who want the agent working without managing the infrastructure, that’s exactly what Agent Gap exists for — we handle the deployment, security, and ongoing management so you just use the result.
Want AI agents handling your busywork without touching a terminal? Book a free Gap Assessment — 15 minutes, no pitch. We’ll show you where agents save you 10+ hours a week.