OpenClaw macOS Setup: M1/M2/M3/M4 and Intel Complete Walkthrough

You sit down at your Mac. You open Terminal. Within 15 minutes, you will have OpenClaw running as a background service on your machine, accessible at http://localhost:3000, ready to connect your AI agents, skills, and automation. This guide covers every step for both Apple Silicon (M1 through M4) and Intel Macs, with specific notes where the architectures differ.

Whether you bought the latest M4 Ultra Mac Studio, an M3 MacBook Air, or a tried-and-true Intel Mac Pro, this walkthrough gets OpenClaw running. No fluff. No skipped steps.

Apple Silicon vs Intel: What’s Different for OpenClaw

Both Apple Silicon and Intel Macs run OpenClaw identically from a user perspective. The software stack is the same. The differences are under the hood.

| Feature | Apple Silicon (M1/M2/M3/M4) | Intel Mac |
| --- | --- | --- |
| Architecture | ARM64 | x86_64 |
| Node.js binary | Native ARM (no Rosetta needed) | Native x86 |
| Unified memory | Yes – up to 128 GB on M4 Ultra | No – separate RAM/VRAM |
| Power draw at idle | ~3-5 W | ~15-25 W on modern Intel Macs |
| Local LLM capability | Excellent (see bonus section below) | GPU-limited on most models |

If you have an Apple Silicon Mac, Node.js 22 installs as a native ARM binary. It runs faster and uses less power than running through Rosetta 2. No translation layer is required at all. If you have an Intel Mac, the same installation steps work with x86 binaries. Both get the same OpenClaw experience.

The single biggest practical difference is unified memory. Apple Silicon Macs share memory between CPU and GPU. This means you can run OpenClaw alongside a local language model (like Llama 4 Scout via Ollama) on the same machine without fighting over VRAM. Intel Macs with discrete GPUs have separate video memory, which makes running large local models alongside OpenClaw harder. We cover this in the bonus section below.

Step 1: Install Node.js on macOS

OpenClaw runs on Node.js, and you need version 22 or later. macOS does not ship with Node.js at all, so you must install it yourself. You have two good options.

Open Terminal (find it in Applications > Utilities, or press Cmd+Space and type “Terminal”).

Option A: Install with nvm (Recommended)

nvm (Node Version Manager) lets you install, switch, and manage multiple Node.js versions. It is the standard tool used by professional Node.js developers.

First, install nvm:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

After the script completes, close and reopen Terminal, or run:

source ~/.zshrc

(macOS uses zsh by default since macOS Catalina 10.15. If you use bash, run source ~/.bash_profile instead.)
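If you are not sure which shell you are running, a small sketch like this picks the matching config file. The rc_file helper is hypothetical – it is not part of nvm, just a convenience for illustration:

```shell
# rc_file: map a login shell path to the config file you should source.
# (Hypothetical helper for illustration – not part of nvm.)
rc_file() {
  case "$1" in
    */zsh)  echo "$HOME/.zshrc" ;;
    */bash) echo "$HOME/.bash_profile" ;;
    *)      echo "$HOME/.profile" ;;
  esac
}

# $SHELL holds your login shell, e.g. /bin/zsh on modern macOS.
echo "Reload with: source $(rc_file "$SHELL")"
```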

Verify nvm installed:

nvm --version

You should see a version number like 0.39.7.

Now install Node.js 22:

nvm install 22
nvm use 22

Set 22 as your default:

nvm alias default 22

Verify the installation:

node --version
npm --version

Both should print version numbers. Node.js should be v22.x, npm should be 10.x or later.

Important for Apple Silicon users: nvm automatically downloads the ARM64 binary for Node.js 22 on M-series Macs. No Rosetta. No special flags. Check with node -p "process.arch" – this should print arm64 on Apple Silicon and x64 on Intel. If it prints x64 on an M-series Mac, Node.js is running through Rosetta 2 and something is off.
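To make the architecture check self-documenting, you can wrap it in a tiny helper. The classify_arch function below is illustrative, not a Node.js or OpenClaw command:

```shell
# classify_arch: turn `node -p "process.arch"` output into a readable verdict.
# (Illustrative helper – not part of Node.js or OpenClaw.)
classify_arch() {
  case "$1" in
    arm64) echo "native Apple Silicon binary – no Rosetta involved" ;;
    x64)   echo "x86_64 binary – native on Intel, Rosetta 2 on Apple Silicon" ;;
    *)     echo "unexpected architecture: $1" ;;
  esac
}

classify_arch "$(node -p 'process.arch' 2>/dev/null || echo none)"
```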

Option B: Install with Homebrew (Simpler)

If you already use Homebrew or prefer a simpler installation, this option works well.

Install Homebrew first if you do not have it:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then install Node.js 22:

brew install node@22

After installation, Homebrew will print instructions to add Node.js to your PATH. Follow them. On Apple Silicon Macs, this typically means:

echo 'export PATH="/opt/homebrew/opt/node@22/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

On Intel Macs, Homebrew installs to /usr/local, so the path is different:

echo 'export PATH="/usr/local/opt/node@22/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

Verify:

node --version
npm --version

Which option should you pick? Use nvm if you ever need to switch Node.js versions for different projects. Use Homebrew if simplicity matters most and you will only ever need one Node.js version.

Step 2: Install OpenClaw

With Node.js installed, install OpenClaw globally using npm:

npm install -g openclaw

This downloads and installs the OpenClaw CLI tool. The -g flag makes it available system-wide so you can run openclaw from any directory.

This step works identically on Apple Silicon and Intel Macs. npm handles the platform-specific binary. Check with:

openclaw --version

You should see a version number printed. If you get “command not found,” your npm global bin directory is not in your PATH. Fix it by adding the npm global prefix to your shell profile:

echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
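A small guard keeps you from appending the same export line every time you rerun setup. append_once is a hypothetical helper, shown under the assumption that your profile is ~/.zshrc:

```shell
# append_once: add a line to a file only if that exact line is not already there.
# (Hypothetical helper – avoids duplicate PATH exports in your shell profile.)
append_once() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# Usage (safe to run repeatedly):
#   append_once ~/.zshrc "export PATH=\"$(npm config get prefix)/bin:\$PATH\""
```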

Step 3: Start OpenClaw and Run the Setup Wizard

Starting OpenClaw for the first time:

openclaw start

The first run launches a setup wizard in your terminal. It will:

  1. Prompt you to configure basic settings (node name, admin email)
  2. Generate configuration files
  3. Initialize the gateway service
  4. Open your browser to http://localhost:3000 automatically

If the browser does not open automatically, navigate to http://localhost:3000 manually. You should see the OpenClaw web interface.

Firewall note: macOS may show a popup asking if you want to allow incoming connections on port 3000. Click “Allow.” This is required for remote agent connections and for the web UI to function. The firewall popup appears only on first launch.

Running on a headless Mac (macOS Server, Mac Mini without display): On headless setups, openclaw start still works. Access the web UI from another machine on your local network using your Mac’s IP address, like http://192.168.1.50:3000. Find your Mac’s IP with ipconfig getifaddr en0 (or ifconfig | grep inet).

Stop OpenClaw at any time by pressing Ctrl+C in the terminal window where it is running, or by running openclaw stop from another terminal.
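If you script against OpenClaw, it helps to wait until the web UI actually answers before proceeding. Here is a minimal sketch, assuming curl is available; wait_for_url is not an OpenClaw command:

```shell
# wait_for_url: poll a URL until it responds or the attempt budget runs out.
# (Illustrative helper, assumes curl; not part of the OpenClaw CLI.)
wait_for_url() {
  url="$1"; tries="${2:-30}"; i=0
  while [ "$i" -lt "$tries" ]; do
    curl -fsS -o /dev/null "$url" && return 0
    i=$((i + 1)); sleep 1
  done
  return 1
}

# Usage: wait up to 30 seconds for the web UI, then open it.
# wait_for_url http://localhost:3000 && open http://localhost:3000
```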

Step 4: Keep OpenClaw Running with macOS launchd

If you just run openclaw start in a terminal window, OpenClaw stops when you close that terminal or log out. For a production setup, you want OpenClaw to start automatically at login and stay running in the background. macOS’s built-in service manager, launchd, handles this.

Create a property list file for OpenClaw:

mkdir -p ~/Library/LaunchAgents
nano ~/Library/LaunchAgents/ai.openclaw.plist

Paste the following content into the file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.openclaw</string>
    <key>Program</key>
    <string>/usr/local/bin/openclaw</string>
    <key>ProgramArguments</key>
    <array>
        <string>openclaw</string>
        <string>start</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>WorkingDirectory</key>
    <string>/Users/yourusername</string>
    <key>StandardOutPath</key>
    <string>/tmp/ai.openclaw.stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/ai.openclaw.stderr.log</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>PATH</key>
        <string>/usr/local/bin:/usr/bin:/bin:/opt/homebrew/bin</string>
    </dict>
</dict>
</plist>

Important edits for your machine:

  • Replace /Users/yourusername with your actual home directory path (e.g., /Users/jane)
  • If you installed OpenClaw under an nvm-managed Node.js, run which openclaw to find its path and use that instead of /usr/local/bin/openclaw. nvm installs to something like /Users/yourusername/.nvm/versions/node/v22.x/bin/openclaw
  • If you installed via Homebrew on Apple Silicon, add /opt/homebrew/bin to the PATH variable as shown
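To avoid hand-editing mistakes, you can stamp your real paths into the template with sed. fill_plist is a hypothetical convenience, not part of OpenClaw:

```shell
# fill_plist: substitute your real openclaw path and home directory into the
# plist template. (Hypothetical helper – plain sed, nothing OpenClaw-specific.)
fill_plist() {  # usage: fill_plist <template> <openclaw-path> <home-dir>
  sed -e "s|/usr/local/bin/openclaw|$2|g" -e "s|/Users/yourusername|$3|g" "$1"
}

# Usage:
# fill_plist ai.openclaw.plist.tmpl "$(which openclaw)" "$HOME" \
#   > ~/Library/LaunchAgents/ai.openclaw.plist
```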

After saving the file, load it into launchd:

launchctl load ~/Library/LaunchAgents/ai.openclaw.plist

Check that it loaded:

launchctl list | grep openclaw

You should see ai.openclaw listed. Check the logs if something is wrong:

cat /tmp/ai.openclaw.stdout.log
cat /tmp/ai.openclaw.stderr.log

To unload the service later (stop it from auto-starting):

launchctl unload ~/Library/LaunchAgents/ai.openclaw.plist

With this plist loaded, OpenClaw starts automatically when you log in. If it crashes, launchd auto-restarts it. You never need to think about it again. Open http://localhost:3000 in your browser and OpenClaw is always there.

Apple Silicon Bonus: Running Local LLMs Alongside OpenClaw

If you have an Apple Silicon Mac, you have a significant advantage. The unified memory architecture lets you run AI models locally with zero cloud costs while OpenClaw runs alongside.

Here is the practical setup:

  1. Install Ollama

    Download from ollama.com or install via Homebrew:

    brew install ollama
  2. Pull a model

    ollama pull llama3.2:3b

    This downloads a 3B-parameter model that runs well on 8 GB RAM Macs. For machines with more memory:

    • 16 GB RAM: Try llama3.1:8b (8B parameters)
    • 24 GB+ RAM: Try llama4-scout:17b or mistral:7b
    • 48 GB+ RAM: Try llama3.3:70b or qwen2.5:32b
  3. Start Ollama

    ollama serve

    Or run Ollama as a launchd service – create an ai.ollama.plist file following the same pattern as the OpenClaw plist in Step 4.

  4. Access the model from OpenClaw agents

    Ollama exposes an API at http://localhost:11434. Point your OpenClaw agent configuration to this endpoint and use local inference instead of paid APIs.
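Following the launchd pattern from Step 4, an ai.ollama.plist might look like the sketch below. Adjust the ollama path for your install – Homebrew puts it in /opt/homebrew/bin on Apple Silicon and /usr/local/bin on Intel:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.ollama</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>serve</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```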

What can you run on each Apple Silicon model?

| Mac Model | Unified RAM | Recommended Local Model | OpenClaw Alongside |
| --- | --- | --- | --- |
| M1 MacBook Air | 8 GB | Llama 3.2 3B | Yes, smooth |
| M2 MacBook Pro | 16 GB | Llama 3.1 8B | Yes, smooth |
| M3 Pro MacBook Pro | 18 GB | Llama 4 Scout 17B (4-bit) | Yes, smooth |
| M4 Pro MacBook Pro | 24 GB | Llama 4 Scout 17B | Yes, with room to spare |
| M4 Max MacBook Pro | 48 GB or 64 GB | Llama 3.3 70B | Yes, easily |
| M4 Ultra Mac Studio | 128 GB | Llama 3.3 70B (8-bit) or Qwen 2.5 72B | Yes, easily |

The key insight: OpenClaw uses very little memory (typically 100-300 MB). The Mac’s unified memory is shared across all tasks. Running OpenClaw alongside a local model uses almost exactly the same amount of memory as running the model alone. On a 16 GB M2 MacBook Pro, you can run an 8B model and OpenClaw together with no noticeable slowdown.
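The memory arithmetic is easy to sanity-check yourself. As a rough rule, quantized weights take params × bits / 8 bytes, ignoring the KV cache and runtime overhead. est_model_gb below is an illustrative back-of-envelope helper:

```shell
# est_model_gb: rough weight-only memory estimate for a quantized model.
# params is in billions, bits is bits per weight. Ignores KV cache/overhead.
est_model_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 }'
}

est_model_gb 8 4    # 8B model at 4-bit: ~4.0 GB of weights
est_model_gb 70 4   # 70B model at 4-bit: ~35.0 GB of weights
```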

For Intel Macs, local LLM inference is GPU-limited on most models. The integrated Intel Iris graphics and even many discrete AMD GPUs lack the VRAM for models above 3B-7B parameters. You can still run small models via Ollama on Intel, but the experience is best on Apple Silicon.

Common macOS Setup Issues

Gatekeeper Blocking OpenClaw

macOS may show “OpenClaw cannot be opened because the developer cannot be verified.” This is Gatekeeper, Apple’s security system. OpenClaw is an open-source project and may not be notarized by Apple.

Fix it by removing the quarantine attribute:

xattr -dr com.apple.quarantine $(which openclaw)

If which openclaw returns nothing, find the actual path:

find /usr/local -name "openclaw" -type f
find ~/.nvm -name "openclaw" -type f

Then apply the xattr command to that path.

PATH Issues After nvm Install

nvm installs a script that needs to be sourced in your shell configuration. On modern macOS (Catalina and later), the default shell is zsh. If nvm is not found after installing, add this to ~/.zshrc:

source ~/.nvm/nvm.sh

Then reload:

source ~/.zshrc

If you use bash instead of zsh, add the same line to ~/.bash_profile.

macOS Firewall Blocking OpenClaw

When you first start OpenClaw, macOS may display a dialog: “Do you want the application ‘node’ to accept incoming network connections?” Click “Allow.”

If you accidentally clicked “Deny,” fix it in System Settings > Network > Firewall > Options. Find “node” or “openclaw” in the list and change it to “Allow incoming connections.”

Using the System Node.js

macOS does not ship with Node.js, but an older copy may already be on your Mac from a previous install – for example, the nodejs.org installer places one at /usr/local/bin/node. Such leftovers are typically older versions like 18.x or 20.x. Do not use them. OpenClaw requires Node.js 22 or later.

Check your Node.js version:

node --version

If it shows something earlier than v22, install Node.js 22 using one of the methods above. A leftover copy at /usr/local/bin/node is separate from nvm or Homebrew installations; run which node to see which copy your shell is actually using.
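You can automate the check with a one-liner that extracts the major version number. node_major is an illustrative helper, not a built-in command:

```shell
# node_major: pull the major version number out of `node --version` output.
# (Illustrative helper – works on strings like "v22.11.0".)
node_major() {
  printf '%s\n' "$1" | sed 's/^v//' | cut -d. -f1
}

node_major v22.11.0   # prints 22
# In practice: [ "$(node_major "$(node --version)")" -ge 22 ] || echo "too old"
```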

Not Restarting Terminal After Installation

After installing nvm or Homebrew, the PATH changes are only available in new terminal sessions. If you try to run openclaw immediately without restarting Terminal or sourcing your shell config, you will see “command not found.”

Always run source ~/.zshrc after installing, or close and reopen Terminal.

Not Using launchd – Process Dies When Terminal Closes

This is the most common mistake with running services on macOS. If you start OpenClaw from a terminal window and then close the terminal (or log out), OpenClaw stops. Always set up the launchd service (Step 4 above) for production use. launchd keeps OpenClaw alive, restarts it if it crashes, and starts it automatically at login.

What to Do After Install

OpenClaw is running at http://localhost:3000. Here is what to do next:

  1. Complete the setup wizard – configure your gateway, node name, and admin account
  2. Connect a client app – pair the OpenClaw mobile app (Android or iOS) using the QR code in the web UI
  3. Install a skill – browse available skills in the OpenClaw skill library and activate the ones you need
  4. Add a local LLM backend – if you have an Apple Silicon Mac, connect Ollama (from the bonus section above) as your AI model provider
  5. Secure your installation – set a strong admin password, configure TLS if exposing OpenClaw beyond localhost, and review the firewall settings
  6. Explore the API – OpenClaw exposes a REST API at http://localhost:3000/api for integration with your own tools

