Running two OpenClaw agents on the same server is a common setup: a primary agent for your main work and a secondary agent for a specific project, a separate persona, or a dedicated automation channel. The problem is that two agents on one machine share the server’s resources and can collide on ports, config files, workspace directories, database paths, and process ownership. Done carelessly, the second agent corrupts the first, or both end up in a broken state that requires manual recovery. Done correctly, two agents coexist cleanly with no interference. This article covers exactly how to do it correctly.
TL;DR
- Separate data directories: Each agent needs its own ~/.openclaw equivalent, config file, and workspace.
- Separate gateway ports: Both agents listen on HTTP ports. They cannot share the same port.
- Separate process identities: The cleanest approach is separate OS users or separate systemd service units.
- Shared infrastructure is fine: Both agents can share the same Ollama instance, the same network, and the same physical server.
Throughout this article you will see indented blocks like the ones below. Each one is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal or edit any files manually.
Why two agents conflict by default
When you run OpenClaw, the gateway process reads its configuration from a specific path, writes its data to a specific directory, and listens on a specific port. The default values for all three of these are fixed: configuration at ~/.openclaw/openclaw.json, data in ~/.openclaw/, and port 18789. If you start a second OpenClaw process without changing these defaults, both processes will attempt to own the same files and the same port simultaneously. The port conflict surfaces immediately and crashes one process. The shared data directory creates a subtler and more damaging problem: both agents read and write to the same memory database, the same workspace files, and the same log output, producing corruption that may not surface until days after it begins.
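For orientation, an explicit version of those three defaults might look like the following in ~/.openclaw/openclaw.json. Only gateway.port and the file path are confirmed above; the dataDir and workspace key names are illustrative stand-ins for whatever your OpenClaw version actually uses.

```json
{
  "gateway": {
    "port": 18789
  },
  "dataDir": "~/.openclaw",
  "workspace": "~/workspace"
}
```

A second agent must override all three of these, not just the port.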
Check my current OpenClaw configuration to understand what ports and directories my primary agent is using. Read my openclaw.json and tell me: (1) What port is the gateway listening on? If gateway.port is not explicitly set, what is the default? (2) What is the data directory path? If not explicitly set, what is the default? (3) What workspace directory is this agent using? I need these values to configure a second agent that does not conflict.
Architecture options for two agents
There are three viable approaches to running two agents on one server. Each has different trade-offs in terms of isolation, complexity, and operational overhead. Choose the one that matches your comfort level and the degree of separation you actually need.
Option 1: Separate OS users (strongest isolation). Create a second Linux user account. The second agent runs as that user, with its own home directory, its own ~/.openclaw/ path, and its own systemd service unit. The two agents have completely separate file namespaces and no shared mutable state. This is the cleanest architecture and the easiest to reason about.
Option 2: Separate data directories under one user (simpler, less isolated). Keep both agents running as the same OS user but point each one at a different data directory and config file. Use environment variables or startup flags to tell each agent where its config lives. This requires more care because both processes share the same user account, but it avoids the complexity of managing two OS users.
Option 3: Docker containers (most isolated, highest operational overhead). Run each agent in its own dedicated container. This provides complete process, filesystem, and network isolation. It is the right approach if you want strong security boundaries between agents or if the agents serve different customers or use cases with different trust requirements. The full Docker setup is covered in a separate article. This article focuses on Options 1 and 2, which cover the large majority of personal and small-team use cases without requiring container orchestration knowledge.
Help me choose the right architecture for running a second OpenClaw agent on this server. My use case for the second agent is: [describe it, e.g., a dedicated automation agent, a different persona for a separate project, a testing environment]. Based on my use case: (1) Do I need strong isolation between the agents or is light separation sufficient? (2) Would the second agent need to access any of the same files as the primary agent? (3) Recommend the right option from the three described above and explain why it fits my case.
Setting up a second agent as a separate OS user
This approach gives the cleanest separation and is the recommended path for most operators who want two fully independent agents on the same machine. The second agent runs as a different Linux user, so it cannot accidentally access or corrupt the first agent’s data regardless of configuration errors or bugs.
Help me create a second OS user for a second OpenClaw agent. Run:
sudo useradd -m -s /bin/bash agent2 to create the user with a home directory. Then set a password with sudo passwd agent2. After creating the user, confirm: (1) The user exists and has a home directory at /home/agent2, (2) The user can be switched to with sudo su - agent2.
Once the user exists, install OpenClaw for that user and create its configuration:
Set up OpenClaw for the agent2 user. Switch to that user with
sudo su - agent2 and run the OpenClaw installation. The install creates ~/.openclaw/ in the agent2 home directory, which is /home/agent2/.openclaw/. After installation, I need to configure the gateway port to something other than 18789 (my primary agent uses that port). Set gateway.port to 18790 in /home/agent2/.openclaw/openclaw.json. Confirm the port is set correctly before proceeding.
Create a dedicated systemd service unit for the second agent so it starts and stops independently of the first. Separate service units mean you can restart Agent 2 without touching Agent 1, read each agent’s logs independently via journalctl, and have each agent set to a different auto-start behavior if you want one always-on and one on-demand.
Create a systemd service unit for the second OpenClaw agent. The service should: run as user agent2, use the OpenClaw binary installed for that user, set HOME=/home/agent2 so it reads the correct config, and be named openclaw-agent2.service. Write the unit file to /etc/systemd/system/openclaw-agent2.service. After writing, run
sudo systemctl daemon-reload && sudo systemctl enable openclaw-agent2 && sudo systemctl start openclaw-agent2. Then check the status with sudo systemctl status openclaw-agent2 and confirm it is running.
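The unit file the agent produces should look roughly like this. Treat it as a sketch: the ExecStart path and the gateway subcommand are assumptions that depend on how OpenClaw was installed for the agent2 user, so verify them against the actual install before enabling the service.

```ini
# /etc/systemd/system/openclaw-agent2.service
# Sketch: ExecStart path and subcommand depend on your OpenClaw install.
[Unit]
Description=OpenClaw agent 2
After=network-online.target
Wants=network-online.target

[Service]
User=agent2
Environment=HOME=/home/agent2
ExecStart=/home/agent2/.local/bin/openclaw gateway
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Setting HOME explicitly matters: without it, systemd does not populate HOME the way a login shell does, and the agent would fall back to the wrong config path.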
Setting up a second agent under the same OS user
If you prefer to keep both agents running under the same user account, you can separate them by using different config file paths and data directories. This requires passing the config path explicitly to the second agent at startup, but avoids the overhead of managing a second OS user.
Set up a second OpenClaw data directory for a second agent running under my current user. Create the directory at ~/.openclaw-agent2/ and copy the base openclaw.json from ~/.openclaw/openclaw.json as a starting point. Then edit the copy to: (1) Change gateway.port from 18789 to 18790. (2) Change any data or database paths that point to ~/.openclaw/ to point to ~/.openclaw-agent2/ instead. Show me the diff of what changed between the original and the new config.
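After the edits, the second config might look like the sketch below. As before, gateway.port is the only key name confirmed by this article; dataDir and workspace are placeholders for your version's actual schema, so map them onto whatever path settings your config really contains.

```json
{
  "gateway": {
    "port": 18790
  },
  "dataDir": "~/.openclaw-agent2",
  "workspace": "~/workspace-agent2"
}
```

Every path that pointed into ~/.openclaw/ in the original must point into ~/.openclaw-agent2/ in the copy; one missed path is enough to recreate the shared-state problem.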
Start the second agent using the alternate config path. The exact flag depends on your OpenClaw version, but the common approach is the --config flag:
Check whether OpenClaw supports a --config flag for specifying an alternate config file path. Run:
openclaw --help 2>&1 | head -40 and show me the output. I am looking for a flag that lets me specify which openclaw.json to use at startup, so I can run two agents under the same user pointing at different configs.
Create a second systemd service unit for this approach as well, this time pointing at the alternate config:
Create a systemd service unit for the second agent using the alternate config directory. The service should: run as my current user (node), set OPENCLAW_CONFIG_DIR=/home/node/.openclaw-agent2 or equivalent environment variable, and be named openclaw-agent2.service. Check the OpenClaw documentation or existing service unit to understand what environment variable or flag controls the config directory. Write the correct unit file and enable the service.
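The same-user variant of the unit differs mainly in how it selects the config. OPENCLAW_CONFIG_DIR below is a placeholder name, not a documented variable: confirm which environment variable or flag your OpenClaw build actually honors, as the prompt above instructs, before relying on this sketch.

```ini
# /etc/systemd/system/openclaw-agent2.service -- same-user variant.
# OPENCLAW_CONFIG_DIR is a placeholder; substitute the variable or
# --config flag your OpenClaw version actually supports.
[Unit]
Description=OpenClaw agent 2 (alternate config directory)
After=network-online.target

[Service]
User=node
Environment=OPENCLAW_CONFIG_DIR=/home/node/.openclaw-agent2
ExecStart=/usr/local/bin/openclaw gateway
Restart=on-failure

[Install]
WantedBy=multi-user.target
```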
Port allocation and network separation
Each agent needs a unique port for its gateway. The default is 18789. The second agent should use a different port, typically 18790. If you plan to add a third agent later, allocate ports sequentially: 18789, 18790, 18791. Keep a written note of which port belongs to which agent to avoid confusion when checking status or connecting clients.
Verify there are no port conflicts between my two agents. Run:
ss -tlnp | grep -E '18789|18790'. Show me the full output. Confirm: (1) Is port 18789 in use, and by which process? (2) Is port 18790 in use, and by which process? (3) Is there any other process on those ports that is not an OpenClaw gateway?
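If 18790 turns out to be taken by something else, a small helper can find the next free port in the sequence. This is a sketch using bash's /dev/tcp probe: a connect that succeeds means something is already listening on that port.

```shell
# find_free_port: probe localhost ports upward from a base until one
# refuses the connection, then print it. Requires bash (/dev/tcp).
find_free_port() {
  local port=${1:-18789}
  # A successful connect inside the subshell means the port is in use.
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

find_free_port 18789
```

Whatever it prints, record the allocation in your notes so the port-to-agent mapping stays unambiguous.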
If you expose either agent’s gateway externally through a reverse proxy, configure separate virtual hosts or paths for each agent. Do not route both agents through the same upstream address. Each agent should have a distinct, non-overlapping external entry point so requests reach the intended agent consistently.
I am using Caddy as a reverse proxy and want to expose both agents externally. Agent 1 runs on port 18789 and should be accessible at agent1.example.com. Agent 2 runs on port 18790 and should be accessible at agent2.example.com. Write the Caddy configuration block for both virtual hosts. Include basic auth or a bearer token check for each, since exposing an OpenClaw gateway without authentication is a security risk.
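A sketch of what the result might look like in a Caddyfile: one virtual host per agent, each proxying to its own gateway port. The domain names and bcrypt hashes are placeholders (generate real hashes with `caddy hash-password`), and note that Caddy releases before 2.8 spell the directive `basicauth`.

```caddy
# One virtual host per agent; hashes are placeholders.
agent1.example.com {
    basic_auth {
        admin <bcrypt-hash-for-agent1>
    }
    reverse_proxy 127.0.0.1:18789
}

agent2.example.com {
    basic_auth {
        admin <bcrypt-hash-for-agent2>
    }
    reverse_proxy 127.0.0.1:18790
}
```

Keeping the upstreams on 127.0.0.1 means the gateways themselves are never reachable except through the authenticated proxy.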
Managing shared resources
While each agent needs its own data directory, config, and port, some infrastructure can be safely shared between agents. Understanding which resources can be shared and which cannot prevents unnecessary duplication while avoiding the conflicts that come from sharing things that should be separate.
Safe to share:
- The Ollama instance. Both agents can point at the same 127.0.0.1:11434 endpoint; Ollama handles concurrent requests from multiple callers.
- The physical server hardware, RAM, and CPU, which are obviously shared.
- System-level packages and binaries.
- The git remote repository, if both agents commit to the same workspace repo.

Do not share:
- The openclaw.json config file.
- The memory database (LanceDB or equivalent).
- The workspace directory with active files either agent writes to.
- The session database.
- Log files.
- Any file that either agent writes to during normal operation.
Audit my second agent’s configuration to check for any shared resources that should be separate. Read the config at [second agent config path]. For each data path configured: (1) Does this path currently point to the same location as my primary agent’s config? (2) If yes, is this a resource that is safe to share or one that needs to be separate? Flag every path that should be separated and tell me the correct path for the second agent.
Connecting channels to the right agent
If you connect messaging channels like Discord or Telegram to your agents, make sure each channel bot token or webhook points to the correct agent’s gateway port. A Telegram bot that sends messages to port 18789 will reach Agent 1. If you intend those messages to go to Agent 2, the bot must point to port 18790. Routing errors here produce confusing behavior where the wrong agent responds to messages or one agent handles requests intended for the other.
I am configuring channel integrations for my second agent. The second agent runs on port 18790. I want to connect [Discord/Telegram/other] to this agent specifically, not to my primary agent. Check the second agent’s config at [path] to see how channel integrations are configured. Tell me what I need to add or change so the [channel] integration points to port 18790 and does not overlap with my primary agent’s [channel] integration.
Monitoring and managing both agents
With two agents running, you need a way to check the status of each one, restart each one independently, and read each one’s logs without them mixing. Systemd service units with distinct names handle all of this cleanly.
Check the current status of both agents. Run:
sudo systemctl status openclaw for the primary agent and sudo systemctl status openclaw-agent2 for the second agent. Show me the status output for both. Are both running? Do the logs show any errors? What are the current PID and uptime for each?
Show me the recent logs for the second agent specifically. Run:
journalctl -u openclaw-agent2 -n 50 --no-pager. Show me the last 50 lines. Are there any errors, warnings, or unexpected restarts in the log? If yes, tell me what each error means and what I should do about it.
Memory database isolation
Each agent’s memory system writes to a database on disk. The exact path depends on the memory plugin you are using, but for the most common configuration (LanceDB), the database is stored within the agent’s data directory. If two agents share the same data directory, they share the same memory database. This produces corrupted memory state: one agent’s stored facts appear in the other agent’s recall results, memories from two different contexts get interleaved, and extraction failures in one agent’s processing corrupt the shared database for both.
Check whether my two agents are sharing a memory database. Read the memory plugin configuration for both agents. For agent 1, the config is at [agent1 config path]. For agent 2, the config is at [agent2 config path]. Compare the database path or directory configured for the memory plugin in each. If they point to the same location, tell me exactly which paths need to be changed to give each agent its own isolated memory database.
If the agents are currently sharing a memory database and you want to split them, the cleanest approach is to initialize a fresh database for the second agent rather than trying to split an existing shared database. The shared database has memories from both agents interleaved and there is no reliable automated way to attribute each memory to the correct agent. Start the second agent fresh with an empty database and let it rebuild its memory from context going forward.
Set up a fresh, isolated memory database for my second agent. (1) Update the second agent’s memory plugin config to point to a new, empty database directory at [new path]. (2) Confirm the directory does not yet exist and will be initialized fresh by the plugin. (3) Restart the second agent. (4) After restart, run a test memory store and recall on the second agent to confirm it is writing to and reading from the new isolated database, not the shared one.
Workspace directory isolation
The workspace directory is where each agent’s files live: the SOUL.md, AGENTS.md, memory files, project files, and everything the agent reads and writes during normal operation. If both agents share the same workspace, they will read and overwrite each other’s context files, checkpoint files, and daily memory logs. The second agent will inherit the first agent’s persona and operating rules, and writes from one session will corrupt the context of the other.
Set up an isolated workspace for my second agent. Create the directory at [second agent workspace path]. The second agent needs its own copies of: SOUL.md, AGENTS.md, USER.md, TOOLS.md, and MEMORY.md. Copy these files from my primary workspace as a starting point. Then update the second agent’s config to point its workspace setting to the new directory. Confirm the second agent’s workspace is set correctly before I start editing the files to customize the second agent’s identity.
After creating the isolated workspace, customize the second agent’s SOUL.md and AGENTS.md to reflect its specific role. If the second agent is for automation tasks, its SOUL.md does not need the same conversational persona rules as the primary agent. If it handles a specific project, its AGENTS.md should have project-specific protocols rather than the general-purpose rules from the primary agent. The workspace isolation makes it safe to diverge these files without affecting the primary agent.
I have set up an isolated workspace for my second agent at [path]. This agent’s purpose is: [describe purpose, e.g., dedicated automation agent for cron tasks and background processing]. Help me customize the SOUL.md for this agent to match its purpose. The primary agent’s SOUL.md is at [primary path]. Write a simplified SOUL.md for the second agent that keeps the essential identity and operating rules but removes conversational persona elements that are not relevant for an automation agent.
Cron job and automation separation
If you run cron jobs on one or both agents, make sure each cron job targets the correct agent. A cron job that fires a systemEvent into the wrong session will trigger the wrong agent, which may not have the right context, skills, or capabilities to handle that task correctly. Check each cron job’s target session and confirm it matches the intended agent.
List all cron jobs currently configured. For each cron job, tell me: (1) What session or agent does it target? (2) Is this the correct agent for this task? (3) Are there any cron jobs that should be moved from the primary agent to the second agent or vice versa? I want to make sure each automated task is running in the right agent context.
If you are migrating some cron jobs from the primary agent to the second agent, recreate them on the second agent first and verify they work correctly before removing them from the primary agent. Running both copies briefly is preferable to having a gap where neither agent is running the task.
I want to move the cron job [job name/description] from my primary agent to my second agent. (1) Show me the full configuration of this cron job on the primary agent so I have the exact schedule, payload, and delivery settings. (2) Recreate this cron job on the second agent with the same settings. (3) After confirming the new cron job is running correctly on the second agent, remove it from the primary agent. Do steps 1 and 2 first and wait for my confirmation before doing step 3.
Log separation and debugging across two agents
When something goes wrong on a server running two agents, you need to be able to check each agent’s logs independently without them mixing. Separate systemd service units give you separate log streams via journalctl. If you are writing custom log files to disk, make sure each agent writes to a different path so the logs do not interleave.
Set up a monitoring command I can use to watch both agents in real time. I want to see the last 20 lines from each agent’s journal side by side. Run:
journalctl -u openclaw -n 20 --no-pager for the primary agent and journalctl -u openclaw-agent2 -n 20 --no-pager for the second agent. Show me both outputs clearly labeled. Flag any errors in either log.
When debugging an issue that affects one agent but not the other, the separate logs make it possible to isolate the affected agent’s behavior. Look for divergence in log patterns between the two agents. If one agent is restarting frequently and the other is stable, the instability is specific to that agent’s configuration or workload, not a server-level problem.
One of my agents is behaving unexpectedly and I need to determine whether the problem is agent-specific or server-wide. Check the last 100 log lines from both agents. Compare the patterns: (1) Is the issue appearing in both agents’ logs or only one? (2) What is the timing of the errors? Do they correlate with each other? (3) Are there any resource-level errors (out of memory, disk full, process limit) that would affect both agents? Based on the comparison, tell me whether this is an agent-specific issue or a server-level issue.
Giving each agent a distinct identity
Beyond the technical separation, the two agents need to be operationally distinct in practice. If both agents use the same name, respond in the same style, and are connected to the same channels, you will not be able to tell which agent handled a given interaction or easily route specific requests to the right one. Give each agent a unique name, a distinctive greeting style, and clear channel assignments so the routing is unambiguous in daily use.
Help me differentiate my two agents clearly. My primary agent is named [name] and handles [primary role]. My second agent should be named [proposed name] and will handle [secondary role]. Update the second agent’s SOUL.md and IDENTITY.md to reflect the new name and role. The update should change: the name field, any self-references in the persona description, and the stated purpose. After updating, have the second agent introduce itself so I can confirm the new identity is active.
Resource planning for two agents
Before committing to running two agents on the same server, verify the server has enough resources to support both. OpenClaw itself is lightweight, but the model inference it calls into can consume significant RAM, especially if you are running Ollama locally. Two agents making concurrent model calls will saturate a server that could comfortably handle one.
The minimum viable resource allocation for two agents running local models is approximately 16GB RAM: roughly 8GB for the primary agent's typical model footprint (assuming phi4 or an 8B-parameter model), 8GB for the second agent's footprint, and headroom for the operating system and other processes. If either agent uses larger models (14B+ parameters), the RAM requirement increases proportionally.
Check the current server resource usage to evaluate whether it can support a second agent. Run:
free -h to show memory usage. Run: df -h / to show disk space. Run: nproc to show CPU count. Then check what models are currently loaded in Ollama: curl -s http://localhost:11434/api/tags | python3 -c "import json,sys; [print(m['name'], m.get('size','?')) for m in json.load(sys.stdin).get('models',[])]". Based on these numbers, tell me whether adding a second agent is viable without a server upgrade.
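The ~16GB estimate above can be turned into a quick pass/fail check. This sketch reads MemAvailable from /proc/meminfo (what the Linux kernel believes can be claimed without swapping) and compares it against the two-agent budget; adjust required_gb to match your actual model sizes.

```shell
# Rough headroom check against the two-agent RAM estimate.
required_gb=16
# MemAvailable is reported in kB; convert to whole GB.
avail_gb=$(awk '/MemAvailable/ {printf "%d", $2 / 1048576}' /proc/meminfo)
if [ "$avail_gb" -ge "$required_gb" ]; then
  echo "ok: ${avail_gb} GB available for two agents"
else
  echo "tight: ${avail_gb} GB available, estimate calls for ~${required_gb} GB"
fi
```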
If resources are tight, on-demand activation is a practical alternative to two always-on agents. Instead of running both agents simultaneously around the clock, configure the second agent to start when a specific task requires it and stop when the task is complete and it goes idle. This approach consumes additional RAM only during the periods when the second agent is actually working, which is significantly more efficient when its workload is intermittent rather than a continuous background presence.
My server has limited RAM and I cannot run both agents simultaneously at full capacity. I want to set up the second agent to start only when needed and stop when idle. Configure the second agent’s systemd service to not start automatically on boot. Then give me a command I can run from my primary agent to start the second agent when I need it and stop it when I am done. The start command should be:
sudo systemctl start openclaw-agent2 and the stop command should be: sudo systemctl stop openclaw-agent2. Confirm this configuration is correct.
Testing that isolation is working correctly
After completing the setup, run a deliberate series of tests to confirm the isolation is actually working and not just appearing to work. Isolation failures are frequently silent in the early days of a two-agent setup. The agents appear to operate correctly, both respond to requests, both seem independent. But they are quietly sharing state in ways that only become obvious after days or weeks of use, when memories from one agent’s sessions start appearing in the other’s recall results, or workspace files start being overwritten without any clear explanation at all. Test isolation explicitly before treating the setup as production-ready.
Run isolation verification tests for my two agents. (1) Store a unique test memory on Agent 1: something specific that Agent 2 should not know. Then recall that memory on Agent 2 and confirm it does not appear. (2) Create a test file in Agent 1’s workspace at a path that Agent 2 should not have access to. Confirm Agent 2 cannot read that file. (3) Check the active ports: confirm Agent 1 is on port 18789 and Agent 2 is on port 18790 with no overlap. Report the result of each test: pass or fail.
If any isolation test fails, do not continue using the two-agent setup until the failure is resolved. A failed isolation test means the agents are sharing state they should not share, and continued use will make the corruption progressively harder to clean up. Identify exactly which resource is shared, separate it, and re-run the isolation tests before considering the setup complete.
The isolation test failed: [describe which test failed and what happened]. Help me identify the root cause. Based on what failed, which shared resource is most likely causing it? (1) Read both agents’ configs and compare the relevant paths. (2) Identify which path needs to be changed to eliminate the sharing. (3) Tell me exactly what to change and in which config file. I will apply the fix and re-run the isolation test before proceeding.
Documenting your two-agent setup
Two agents on one server means double the configuration surface, double the number of ports and paths to track, and double the number of things that can fail in non-obvious ways when something changes. Documenting the setup while it is fresh and the decisions are clear prevents the situation that happens reliably three months later: you cannot remember which port each agent uses, you are not sure why the second agent has a different data directory name than the first, you do not recall what the second agent was originally set up to do, and debugging anything requires reading through both configs from scratch to understand the layout. A single concise reference document per agent, kept in that agent’s workspace, is all it takes to prevent every one of those problems. Write it while the setup is fresh. Ten minutes now saves an hour of confusion later.
Write a setup reference document for my two-agent configuration. Create a file at [path, e.g., workspace/INFRASTRUCTURE.md or update an existing one] that records: (1) Agent 1: name, port, config path, workspace path, data directory, systemd service name, connected channels, and primary purpose. (2) Agent 2: same fields. (3) Shared resources: what both agents share (Ollama endpoint, hardware) and what is explicitly kept separate. (4) Common management commands: how to check status, restart, read logs, and start/stop each agent. This document is the single reference for anyone managing this server.
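A skeleton like the one below is enough to start from. Every value shown is a placeholder to be replaced with your actual ports, paths, and service names.

```markdown
# INFRASTRUCTURE.md -- two-agent setup (fill in real values)

## Agent 1 -- primary
- Port: 18789
- Config: /home/node/.openclaw/openclaw.json
- Workspace: /home/node/workspace
- Service: openclaw.service
- Channels: Telegram (main bot)
- Purpose: daily driver

## Agent 2 -- automation
- Port: 18790
- Config: /home/agent2/.openclaw/openclaw.json
- Workspace: /home/agent2/workspace
- Service: openclaw-agent2.service
- Channels: none (internal only)
- Purpose: cron tasks and background jobs

## Shared
- Ollama endpoint at 127.0.0.1:11434
- Server hardware and OS packages

## Commands
- Status: `sudo systemctl status openclaw openclaw-agent2`
- Logs (agent 2): `journalctl -u openclaw-agent2 -n 50`
- Restart agent 2 only: `sudo systemctl restart openclaw-agent2`
```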
Frequently asked questions
Can both agents use the same API keys?
Yes. API keys for model providers like DeepSeek, Anthropic, or OpenAI can be shared between agents at the configuration level without any conflict. Those keys are authentication credentials with no per-instance state. However, messaging channel credentials require more care. If both agents use the same Telegram bot token, both will receive every incoming message sent to that bot and both will attempt to respond. That is almost certainly not what you want. For messaging channels, create a separate bot for each agent so incoming messages route to the intended recipient unambiguously. For model provider API keys, sharing is straightforwardly fine.
How do I know which agent is responding when I test a message?
The clearest approach is to give each agent a distinct identity in its SOUL.md or system prompt. Name them differently and give them a slightly different response style. When you test, the name or style in the response tells you which agent handled it. You can also ask each agent directly what port its gateway is running on. The response will identify which agent is active.
One agent keeps using all the RAM and slowing down the other. What do I do?
This is a resource contention problem, not a configuration conflict. The fix is to limit each agent’s memory usage at the systemd service level using the MemoryMax directive in the service unit, or to schedule resource-intensive tasks on each agent at non-overlapping times. If both agents need to run expensive model calls simultaneously, you may need more RAM or need to route one agent to a cheaper or faster model that uses less memory per call.
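At the systemd level, that limit is a small drop-in file. The numbers below are examples to tune against your actual model footprint, not recommendations.

```ini
# /etc/systemd/system/openclaw-agent2.service.d/memory.conf
# MemoryHigh throttles the service as it nears the cap;
# MemoryMax is the hard ceiling, above which the kernel kills it.
[Service]
MemoryHigh=5G
MemoryMax=6G
```

After writing the drop-in, run `sudo systemctl daemon-reload && sudo systemctl restart openclaw-agent2` for it to take effect. Note these directives require cgroup v2, the default on current distributions.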
Can I share a single workspace git repository between both agents?
Only if they write to completely non-overlapping file paths within that repository. If both agents commit to the same files, the git history will show interleaved commits from both, and a git pull in one agent’s session may pull changes made by the other agent, producing unexpected context shifts. The cleaner approach is separate git repositories for separate workspaces. If you genuinely need both agents to access the same content, use a shared read-only reference rather than a shared write target.
I restarted the server and the second agent did not come back up automatically. Why?
The systemd service for the second agent either was not enabled for auto-start or the enable command was not run. Check with sudo systemctl is-enabled openclaw-agent2. If the output is “disabled”, run sudo systemctl enable openclaw-agent2 and then sudo systemctl start openclaw-agent2. For future reboots, the service will start automatically.
Can I have one agent delegate tasks to the other?
Yes, through sessions_send or sessions_spawn tools. If Agent 1 needs Agent 2 to handle a specific task, it can send a message to Agent 2’s session directly. This requires Agent 1 to know Agent 2’s session key or label, and Agent 2 must be configured to accept incoming session messages. This is an advanced pattern that is useful for task specialization: one agent routes requests to the one best equipped to handle them. The key requirement is that both agents remain operationally independent with separate data stores. The delegation is message-passing, not shared state.
Both agents are running but one of them is writing to the other’s workspace. How do I find and fix the overlap?
Run the config audit command from the shared resources section above. Compare the data paths in both configs and identify any that point to the same directory. The overlapping path is the source of the write collision. Update the misconfigured agent’s config to point to its own dedicated directory, then restart that agent. Check the directory contents to see whether any files were created by the wrong agent and move or delete them as appropriate.
