OpenClaw Docker Setup: Run Your Agent in a Container
If you are running OpenClaw directly on a server with npm install -g openclaw and a systemd service file, you already have a working setup. But you are also carrying technical debt. Every Node.js version mismatch, every stray dependency update, every workspace directory conflict on a multi-agent host is a reminder that running bare metal costs more than it saves. Containerizing your OpenClaw agent with Docker eliminates that debt in one shot. Portable, reproducible, trivially updatable, and ready for production. This guide walks you through the complete process: a working Dockerfile, volume management for persistent state, Docker Compose configuration with health checks and resource limits, API key handling, webhook networking, zero-downtime updates, and a multi-instance pattern for running multiple agents on a single VPS.
Why Run OpenClaw in Docker?
Docker wraps your OpenClaw agent and all of its runtime dependencies—Node.js, npm packages, configuration files—into a single, immutable image. That image runs identically on your development laptop, a $6/month VPS, or a Kubernetes cluster. Three concrete benefits make the switch worthwhile for anyone running OpenClaw in production:
- Isolation. The agent process runs in its own filesystem and process namespace. A runaway loop, a memory leak, or a rogue npm package cannot touch the host or other containers. This is especially valuable when you run multiple agents on the same host.
- Reproducibility. The same Docker image produces the same agent behavior on any machine, on any date. No more “works on my machine” debugging when deploying from a dev environment to production.
- Effortless updates. Updating OpenClaw becomes a two-command operation: pull the new image and restart the container. No manual npm update, no version conflicts with other Node.js projects on the same host.
For VPS deployments, Docker also integrates naturally with the reverse proxies (nginx, Caddy, Traefik) you are likely already running for other services. And it is the foundation for the reproducible multi-agent infrastructure pattern covered at the end of this guide.
Prerequisites: Docker and Docker Compose
The only host requirements are Docker Engine and Docker Compose. Both install cleanly on Ubuntu 22.04/24.04, Debian 12, and most other Linux distributions. OpenClaw requires Node.js 18 or later, but that is handled entirely inside the container—you never install Node.js on the host.
Install Docker Engine on Ubuntu or Debian:
# Remove old packages
sudo apt-get remove docker docker-engine docker.io containerd runc
# Install prerequisites
sudo apt-get update
sudo apt-get install ca-certificates curl
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine and Compose plugin
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify
sudo docker run hello-world
docker compose version
For Debian, replace ubuntu with debian in the repository URL and verify your codename with lsb_release -cs.
Add your user to the docker group to run Docker without sudo:
sudo usermod -aG docker $USER
newgrp docker
Log out and back in for the group change to take effect system-wide.
The Dockerfile: Building Your OpenClaw Image
A minimal Dockerfile for OpenClaw uses the official Node.js 20 slim image, installs OpenClaw globally via npm, and exposes port 3000 (the default port OpenClaw’s HTTP server listens on).
FROM node:20-slim
WORKDIR /app
RUN npm install -g openclaw
EXPOSE 3000
CMD ["openclaw", "start"]
Why node:20-slim? The slim variant includes everything Node.js needs to run OpenClaw (the runtime, npm, core libraries) while omitting build tools and package managers that would bloat the image. The resulting image is roughly 180 MB, compared to 1 GB+ for the full node:20 image.
Build the image:
docker build -t openclaw-agent .
This single line produces a tagged, versioned artifact you can push to a registry, share across hosts, and pin for reproducible deployments.
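For reproducible deployments, a variant of the Dockerfile above can pin the OpenClaw version with a build argument. The version number in the usage line below is a placeholder; substitute a real release:

```dockerfile
FROM node:20-slim
WORKDIR /app
# Defaults to latest; override at build time with --build-arg
ARG OPENCLAW_VERSION=latest
RUN npm install -g "openclaw@${OPENCLAW_VERSION}"
EXPOSE 3000
CMD ["openclaw", "start"]
```

Build it with, for example, docker build --build-arg OPENCLAW_VERSION=1.2.3 -t openclaw-agent:1.2.3 . so the image tag matches the pinned package version.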
Official image note. If an official openclaw image becomes available on Docker Hub or GitHub Container Registry, you can skip the custom build entirely. The Docker Compose examples below are designed to work with either approach—just swap the build: directive for an image: directive.
Critical: Mount Your Config and Workspace as Volumes
Every OpenClaw installation has two stateful filesystems that must persist across container restarts: the configuration file (openclaw.json) and the workspace directory (containing SOUL.md, MEMORY.md, AGENTS.md, and any other files your agent reads and writes).
If you skip volume mounts, Docker stores these files inside the container’s writable layer. Rebuild or recreate the container and everything disappears—your agent’s identity, memory, and configuration are gone.
Mount these two paths from the host into the container at runtime:
docker run -d \
--name openclaw \
-p 3000:3000 \
-v /home/user/openclaw/openclaw.json:/app/openclaw.json \
-v /home/user/openclaw/workspace:/app/workspace \
openclaw-agent
On the host, /home/user/openclaw/ is your project directory. The first volume mount binds your host-side openclaw.json to /app/openclaw.json inside the container. The second binds the workspace directory so your agent’s memory, identity, and operational instructions persist.
You can use any host path you like, but keep these principles:
- Store openclaw.json and workspace/ in the same parent directory for clarity.
- Use absolute paths in the volume source to avoid ambiguity.
- Never commit your openclaw.json to version control if it contains API tokens. Use environment variables instead (covered below).
- If you run multiple agents, give each its own parent directory.
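A quick way to scaffold that layout from the host (the base path is just an example; any absolute path works):

```shell
#!/bin/sh
set -eu
# Example base directory; adjust to taste
BASE="${BASE:-$HOME/openclaw}"

mkdir -p "$BASE/workspace"
touch "$BASE/openclaw.json"
chmod 600 "$BASE/openclaw.json"  # the config may hold tokens; owner-only access
ls "$BASE"
```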
Docker Compose: The Recommended Approach
Docker Compose bundles the image, ports, volumes, environment variables, restart policy, and any health checks into a single declarative file. This is the recommended way to run OpenClaw in Docker because it makes your entire deployment strategy versionable and auditable.
services:
  openclaw:
    build: .
    container_name: openclaw
    ports:
      - "3000:3000"
    volumes:
      - ./openclaw.json:/app/openclaw.json
      - ./workspace:/app/workspace
    environment:
      - OPENCLAW_SLACK_TOKEN=${OPENCLAW_SLACK_TOKEN}
      - OPENCLAW_LOG_LEVEL=${OPENCLAW_LOG_LEVEL:-info}
    restart: unless-stopped
Save this as docker-compose.yml in your project directory alongside openclaw.json and the workspace/ folder.
Start the agent:
docker compose up -d
View logs:
docker compose logs -f
Stop the agent:
docker compose down
The restart: unless-stopped policy means Docker restarts the container automatically if it crashes or if the host reboots—critical for a production agent that needs to stay online.
Managing API Keys Securely: Environment Variables and .env Files
Hardcoding API keys in openclaw.json or the Dockerfile is a security risk. If that file ends up in a git repository, a CI log, or a shared directory, your credentials are exposed. The standard pattern is to inject them at runtime via environment variables.
Create a .env file in the same directory as your docker-compose.yml:
OPENCLAW_SLACK_TOKEN=xoxb-your-slack-bot-token
OPENCLAW_DISCORD_TOKEN=your-discord-bot-token
# Add any other tokens your agent needs
Never commit .env to version control. Add it to your .gitignore immediately:
echo ".env" >> .gitignore
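Since the .env file holds live tokens, it is also worth restricting its permissions so other users on the host cannot read it (the token values below are placeholders):

```shell
#!/bin/sh
set -eu
umask 077                      # files created below get owner-only (600) permissions
cat > .env <<'EOF'
OPENCLAW_SLACK_TOKEN=xoxb-your-slack-bot-token
OPENCLAW_DISCORD_TOKEN=your-discord-bot-token
EOF
ls -l .env
```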
Docker Compose automatically loads variables from the .env file when you reference them with the ${VARIABLE_NAME} syntax in docker-compose.yml. If a variable has a default value, use the colon-dash syntax: ${OPENCLAW_LOG_LEVEL:-info} defaults to info if the variable is unset.
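Compose's ${VARIABLE_NAME:-default} interpolation mirrors POSIX shell parameter expansion, so you can sanity-check the syntax in any shell:

```shell
# Unset -> the default after :- is substituted
unset OPENCLAW_LOG_LEVEL
echo "${OPENCLAW_LOG_LEVEL:-info}"    # prints: info

# Set -> the variable's own value wins
OPENCLAW_LOG_LEVEL=debug
echo "${OPENCLAW_LOG_LEVEL:-info}"    # prints: debug
```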
Your openclaw.json can reference these environment variables using OpenClaw’s ${ENV_VAR} substitution syntax, keeping secrets out of the config file entirely.
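A sketch of what that could look like—the field names here are illustrative, not OpenClaw's actual schema, so check the config reference for the real keys:

```json
{
  "channels": {
    "slack": {
      "token": "${OPENCLAW_SLACK_TOKEN}"
    },
    "discord": {
      "token": "${OPENCLAW_DISCORD_TOKEN}"
    }
  }
}
```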
Networking: Getting Webhooks and Channels to Work
Webhook-based channels (Slack Events API, Discord interactions, GitHub webhooks) require OpenClaw to be reachable from the internet on port 3000. You have two options for exposing the container.
Option 1: Reverse proxy (recommended for production). Place nginx, Caddy, or Traefik in front of the OpenClaw container. The proxy handles TLS termination, rate limiting, and request filtering before forwarding traffic to the agent—http://localhost:3000 if the proxy runs directly on the host, or the container's service name if the proxy runs on a shared Docker network. This is the standard pattern for any web-exposed Docker service and keeps your OpenClaw container behind a hardened gateway.
Option 2: Socket Mode (Slack-specific). If using Slack, Socket Mode allows your agent to connect to Slack over a WebSocket connection initiated from inside the container, with no inbound port required. Configure socket_mode: true in your Slack app settings and pass the Socket Mode token via environment variables. No reverse proxy, no open ports, no TLS certificates. This is the simplest option for single-agent setups.
When using a reverse proxy, add the proxy to your Docker Compose network:
services:
  openclaw:
    # ... existing config ...
    networks:
      - proxy

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      - proxy

networks:
  proxy:
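A minimal nginx.conf for the mount above might look like this. The domain is a placeholder and TLS termination is omitted for brevity:

```nginx
server {
    listen 80;
    server_name agent.example.com;  # placeholder domain

    location / {
        # "openclaw" resolves to the container via the shared "proxy" network
        proxy_pass http://openclaw:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```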
If OpenClaw uses only polling-based channels (RSS, email, periodic API checks) and does not need inbound webhooks, you can skip the reverse proxy entirely and leave the container on a closed internal network.
Updating OpenClaw in Docker: Zero Downtime Process
One of the best reasons to use Docker is the update cycle. With a bare-metal npm installation, updating OpenClaw means npm update -g openclaw, restarting the service, and hoping no dependency conflicts surface. With Docker, the process is mechanical and reversible.
Standard update (brief downtime):
# Pull the newest prebuilt image (only applies if you use an image: directive)
docker compose pull
# Rebuild the image with the latest OpenClaw version
docker compose build --pull
# Recreate the container with zero config changes
docker compose up -d --force-recreate
Your volumes, environment variables, and restart policy all carry over automatically. If the new image has a problem, roll back in seconds:
# Point back to the previous image tag
git checkout docker-compose.yml # if you version the image tag
docker compose up -d
Zero-downtime update (rolling). For production agents that cannot tolerate even a few seconds of downtime, run two containers behind the reverse proxy:
- Build the new image.
- Start a second container with the new image on an alternate port.
- Verify the new container is healthy.
- Update the reverse proxy to point traffic to the new container.
- Stop and remove the old container.
This pattern requires a reverse proxy that can switch backends without dropping connections. Traefik discovers new containers dynamically; nginx handles the cutover with a config change followed by a graceful nginx -s reload.
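The alternate-port step can be expressed as a temporary second service. Names and ports are illustrative, and note one caveat: pointing two live agents at the same workspace may not be safe—verify OpenClaw's locking behavior before sharing state between containers:

```yaml
services:
  openclaw-green:
    build: .
    container_name: openclaw-green
    ports:
      - "3100:3000"   # alternate host port for pre-cutover verification
    volumes:
      - ./openclaw.json:/app/openclaw.json
      - ./workspace:/app/workspace
    restart: unless-stopped
```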
Multi-Instance Setup: Running Multiple Agents on One Host
Running multiple OpenClaw agents on the same VPS is one of the most common production patterns. Each agent has its own identity (SOUL.md), its own memory (MEMORY.md), its own config (openclaw.json), and often its own Slack bot token or Discord application.
With Docker, each agent is a completely independent container. No dependency conflicts, no port fights, no workspace collisions.
services:
  agent-alpha:
    build: .
    container_name: openclaw-alpha
    ports:
      - "3001:3000"
    volumes:
      - ./agents/alpha/openclaw.json:/app/openclaw.json
      - ./agents/alpha/workspace:/app/workspace
    environment:
      - OPENCLAW_SLACK_TOKEN=${AGENT_ALPHA_SLACK_TOKEN}
    restart: unless-stopped

  agent-beta:
    build: .
    container_name: openclaw-beta
    ports:
      - "3002:3000"
    volumes:
      - ./agents/beta/openclaw.json:/app/openclaw.json
      - ./agents/beta/workspace:/app/workspace
    environment:
      - OPENCLAW_SLACK_TOKEN=${AGENT_BETA_SLACK_TOKEN}
    restart: unless-stopped
Each agent container uses the same base image but runs with its own config, workspace, tokens, and host port. The reverse proxy maps different subdomains or paths to each container.
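A small script can scaffold the per-agent directory layout the compose file above expects:

```shell
#!/bin/sh
set -eu
# One config file and workspace per agent
for name in alpha beta; do
  mkdir -p "agents/$name/workspace"
  touch "agents/$name/openclaw.json"
done
find agents -maxdepth 2 | sort
```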
For Socket Mode agents, no port mapping is needed at all—each container opens its own outbound WebSocket connection to Slack. This is the cleanest multi-instance pattern because it eliminates port management entirely.
Health Checks and Resource Limits
Production containers need two safety nets: health checks that detect when the agent has stopped responding, and resource limits that prevent a single agent from consuming all host memory or CPU.
Add both to your docker-compose.yml:
services:
  openclaw:
    build: .
    container_name: openclaw
    ports:
      - "3000:3000"
    volumes:
      - ./openclaw.json:/app/openclaw.json
      - ./workspace:/app/workspace
    environment:
      - OPENCLAW_SLACK_TOKEN=${OPENCLAW_SLACK_TOKEN}
    restart: unless-stopped
    healthcheck:
      # node:20-slim does not ship curl, so probe with Node's built-in fetch
      test: ["CMD", "node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
        reservations:
          cpus: "0.1"
          memory: 128M
Health check details. Docker probes http://localhost:3000/health every 30 seconds. If the check fails three times in a row, Docker marks the container as unhealthy. Note that standalone Docker only records the unhealthy status—restart: unless-stopped reacts to the process exiting, not to failed health checks—so pair the check with an orchestrator (Swarm, Kubernetes) or a watcher container such as autoheal if you want automatic recovery. The start_period: 15s gives the agent time to boot before failed checks count against the retry limit, preventing false failures during startup.
Resource limit details. The deploy.resources block restricts the container to half a CPU core and 512 MB of RAM. These limits prevent a runaway agent from starving the host or other containers. Reserve at least 128 MB and 0.1 CPU to ensure the agent has a baseline allocation even under host pressure.
Adjust the limits based on your agent’s workload. A single-agent Slack setup typically needs 256 MB. A multi-agent host running three instances should plan for 1 GB each if they run heavy tooling or large language model inference.
For Swarm deployments, the deploy block integrates directly with the orchestrator's scheduling (Kubernetes uses its own resource model rather than Compose files). Recent Docker Compose v2 releases also apply deploy.resources limits to standalone containers, so the limits above are enforced even without an orchestrator.
Sources
- OpenClaw GitHub Repository — https://github.com/openclaw
- Docker Engine Installation on Ubuntu — https://docs.docker.com/engine/install/ubuntu/
- Docker Compose File Reference — https://docs.docker.com/compose/compose-file/
- Slack Socket Mode Documentation — https://api.slack.com/apis/connections/socket
- Docker Health Check Reference — https://docs.docker.com/reference/dockerfile/#healthcheck
- Docker Resource Constraints — https://docs.docker.com/config/containers/resource_constraints/
Related Reading on RedRook
- OpenClaw VPS Deployment: DigitalOcean, Linode, Hetzner 2026 — Step-by-step guide for deploying OpenClaw on cloud VPS providers, including firewall setup, fail2ban, and automatic backups.
- OpenClaw Security Hardening Guide 2026 — Hardening your OpenClaw instance against known CVEs, container escape vectors, and common misconfigurations.
