OpenClaw First-Time Setup: What the Onboarding Wizard Actually Does

You run openclaw onboard and a wizard starts asking questions. You probably just want to get your agent running, not learn a new config format. What is this wizard actually doing, and why does it matter?

The onboarding wizard is a bootstrap. It gets you a working agent with a single API key, which you can then instruct to switch to cheaper providers like DeepSeek or Ollama. This is the community’s standard two-phase setup: bootstrap with any key, then optimize with your agent.

This guide explains each wizard step, what it configures in your ~/.openclaw/openclaw.json file, and how to use it as a launchpad for a truly budget-friendly setup.

Before You Install Anything, Check Your Hardware

The OpenClaw gateway is lightweight. The expensive part is running AI models. Your hardware determines your cheapest path.

If you have a GPU with at least 8GB of VRAM (like an RTX 4060), you can run models locally for free with Ollama. If you don’t, you’ll use a cheap cloud API like DeepSeek or Ollama’s cloud service. Both work. You just need to know which path you’re on before you start.

Hardware Quick Reference

  • No GPU / Raspberry Pi 5: Gateway only. Use a cheap cloud API (DeepSeek via OpenRouter) or Ollama Cloud free tier. Total cost: ~$5-10/month for a VPS plus API fees.
  • GPU with 8GB VRAM (RTX 4060): Can run 7-8B parameter models locally (e.g., Llama 3.1 8B). Good for basic agent tasks. Zero API cost.
  • GPU with 12-24GB VRAM (RTX 4070 Ti, used RTX 3090): Can run 14-32B parameter models locally (e.g., Qwen3 32B). Best for serious local use. Zero API cost.
  • Apple Silicon Mac (16GB+ unified memory): Can run 7-14B models locally via Ollama. Good performance.

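As a rough sanity check, you can map reported VRAM to the tiers above. The helper below is a throwaway sketch, not part of OpenClaw; the commented-out live query assumes Linux with NVIDIA drivers installed:

```shell
# Throwaway helper: classify total VRAM (in MiB) into the local-model tiers above.
tier_for_vram() {
  mib=$1
  if [ "$mib" -ge 12000 ]; then
    echo "14-32B models"      # 12-24GB tier
  elif [ "$mib" -ge 8000 ]; then
    echo "7-8B models"        # 8GB tier
  else
    echo "cloud API"          # no suitable GPU: use DeepSeek/OpenRouter or Ollama Cloud
  fi
}

# Live usage on a machine with an NVIDIA GPU (uncomment to run):
# vram=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)
# tier_for_vram "$vram"
```

On Apple Silicon there is no separate VRAM pool; go by total unified memory instead.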
If you don’t have suitable local hardware, your cheapest path is a cheap VPS plus a cheap API. The community’s go-to is a Hetzner CX22 VPS (€4.15/month) plus the DeepSeek API ($0.14 per million input tokens).

What You Need

The onboarding wizard needs one thing to bootstrap: a working API key. It doesn’t have to be your final provider. You just need any key to get the agent running so it can configure itself for cheaper options.

  • Node.js 24 (recommended) or Node.js 22.14+: OpenClaw is a Node.js daemon. Version 24 is the current Long-Term Support (LTS) release and the one OpenClaw is tested against. Check yours with node --version.
  • One working API key (any provider): This is your bootstrap key. The easiest is an OpenRouter free tier key from openrouter.ai. You can also use a DeepSeek API key, an OpenAI key, or an existing Ollama local setup. The wizard will ask for it.
  • A terminal on macOS, Linux, or Windows (WSL2): The installation and openclaw onboard command run here.

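A quick way to confirm your Node.js version clears the 22.14 floor before installing. version_ok is a throwaway helper for this check, not an OpenClaw command:

```shell
# Throwaway helper: does a MAJOR.MINOR.PATCH version satisfy the Node.js 22.14+ floor?
version_ok() {
  major=${1%%.*}        # text before the first dot
  rest=${1#*.}          # text after the first dot
  minor=${rest%%.*}     # text before the next dot
  [ "$major" -gt 22 ] || { [ "$major" -eq 22 ] && [ "$minor" -ge 14 ]; }
}

# Compare against your actual runtime, stripping the leading "v" from `node --version`:
# version_ok "$(node --version | sed 's/^v//')" && echo "Node.js is new enough"
```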
Setting Up Your Environment

Step 1: Install OpenClaw

This gets the openclaw command onto your system. The official one-line installer is the fastest method.

curl -fsSL https://openclaw.ai/install.sh | bash

This script installs Node.js if needed, installs the OpenClaw npm package globally, and sets up the daemon. After it finishes, the openclaw command will be available.

Doing it manually? If you prefer npm directly:

npm install -g openclaw@latest

Then run openclaw onboard --install-daemon to set up the background service.

Step 2: Get Your Bootstrap API Key

You need one key to get past the wizard. Go to openrouter.ai, sign up, and open your API Keys page. Copy the key; the free tier is enough for the bootstrap.

Why OpenRouter? It gives you access to 100+ models (including DeepSeek) with one key, and the free tier is sufficient to complete setup. It’s the community’s default gateway.

If you already have Ollama running locally with a model pulled (like gemma4), you can use that instead. Your “key” for Ollama local is ollama-local.

Step 3: Run the Wizard

Now you start the interactive setup. This is where you’ll paste your bootstrap key.

openclaw onboard

The wizard starts. It will ask for your name, your agent’s name, and then the critical question: “How would you like to authenticate with your AI model provider?”

You’ll see a list. If you’re using the OpenRouter key, select openrouter. If you’re using Ollama local, select ollama. The wizard will then prompt you for the API key or base URL.

Once you provide it, the wizard writes your initial ~/.openclaw/openclaw.json config file and starts the gateway. Your agent is now running with your bootstrap provider.

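For reference, the wizard’s output looks roughly like the following. This is an illustrative sketch, not the exact schema; field names vary by OpenClaw version, and the key shown is a placeholder:

```json
{
  "agent": {
    "name": "my-agent"
  },
  "provider": {
    "type": "openrouter",
    "apiKey": "sk-or-..."
  }
}
```

Switching providers later (phase two of the setup) amounts to changing this provider block, whether you edit it yourself or let your agent do it.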