A second OpenClaw instance appearing unexpectedly has four common causes: an unclean restart, a systemd conflict, a detached terminal session, or a scheduled cron job. This article identifies which one applies and fixes it without losing data.
TL;DR
Two OpenClaw processes running on the same port fight over incoming messages: one receives the message, the other does not, and both respond to some messages while missing others. To fix it, identify all running OpenClaw processes, stop the duplicate, confirm only one remains, and check what started the duplicate so it does not come back. The resolution takes under five minutes once you know which process to stop.
Step 1: Confirm you actually have two instances running
Before stopping anything, confirm the situation. Two OpenClaw processes on the same port is a different problem from two OpenClaw processes on different ports (which is intentional multi-agent routing and may be working correctly). The symptom that sends operators here is usually an agent that occasionally misses messages, responds twice, or behaves inconsistently, because two processes are competing to handle the same traffic. The inconsistency is the tell: a single process either handles messages or it does not, but two competing processes produce irregular patterns that fit no single-process failure mode. If 70 percent of messages are handled fine and 30 percent are dropped, and the dropped 30 percent does not correlate with message length, complexity, or time of day, duplicate instances competing for messages is the most likely explanation.

A quick mental check before running the diagnosis: did anything happen recently that could have started a second process? A server reboot, a config change that triggered a restart, a terminal session you opened and forgot to close, or an automated script that runs openclaw on a schedule. If you can trace the symptom onset to one of those events, that event is almost certainly the cause.
Check whether there are multiple OpenClaw processes running on this server. Run a process list and filter for openclaw. Show me: how many openclaw processes are running, what port each one is listening on, when each process started, and what command started each one. If two processes are on the same port, tell me which one started first.
What you are looking for in the output:
- Two processes on the same port: This is the duplicate instance problem. One needs to be stopped.
- Two processes on different ports: This may be intentional. Check whether you have a multi-agent setup configured before stopping either one.
- One process only: The duplicate is not currently running. The problem may be intermittent (a process that stops and restarts on a schedule) or the symptom you noticed may have another cause. Check the next section anyway to understand how the duplicate was starting.
Manual check if your agent is not responding
If your agent is unresponsive (possibly because two instances are fighting over the port), run the check directly in SSH: ps aux | grep openclaw shows all running openclaw processes. Note: if OpenClaw was started as a Node.js process, the process name in ps output may appear as node rather than openclaw. Use pgrep -f openclaw (full command-line match) rather than pgrep -x openclaw (exact name match) to reliably find all openclaw-related processes regardless of how they were started. ss -tlnp | grep 18789 (or your configured port) shows what is listening on the gateway port. If two PIDs appear in either output, you have a confirmed duplicate.
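The manual check can be wrapped into a small reusable helper; this is a sketch, not an official OpenClaw tool — list_procs is our name, 18789 is the assumed default port, and the [o] bracket trick keeps the check command itself out of the results:

```shell
# list_procs PATTERN — show PID, start time, elapsed time, and full command
# for every process whose command line matches PATTERN.
list_procs() {
  pids=$(pgrep -d, -f "$1" || true)
  if [ -n "$pids" ]; then
    ps -o pid,lstart,etime,args -p "$pids"
  else
    echo "no process matching '$1' found"
  fi
}

# The [o] bracket stops the pgrep command line from matching itself.
list_procs "[o]penclaw"

# What is listening on the gateway port (18789 is the assumed default):
ss -tlnp | grep 18789 || echo "nothing listening on port 18789"
```

Two PIDs in the first output, or two processes behind the port in the second, confirms the duplicate.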
Step 2: Why a second instance appears without you starting one
There are four common causes. Understanding which one applies determines both how to stop the duplicate now and how to prevent it from coming back.
Cause 1: Unclean restart left the old process running
When you restart OpenClaw (via the openclaw gateway restart command, a config apply, or a manual kill-and-restart), the old process should stop before the new one starts. If the stop signal is sent but the process does not exit cleanly before the new start command fires, both end up running. This is the most common cause and the easiest to reproduce: restart OpenClaw while it is handling a message, and the old process may linger.
Check the process start times for all running openclaw processes. If two are running, look at when each one started. Tell me whether the older one appears to be a lingering process from a recent restart (started shortly before the newer one) or whether it started days or weeks ago (suggesting a persistent duplicate rather than a restart artifact).
If the older process started within minutes of the newer one, it is almost certainly a restart artifact. Stop it by PID and the problem is resolved. If it starts coming back after every restart, the issue is in the restart mechanism itself. The stop command is not waiting for the process to fully exit before launching the new one. The practical fix for this is adding a short sleep between the stop and start commands if you are doing manual restarts, or switching to systemctl restart which handles the ordering correctly. If you are using an openclaw CLI restart command that produces this behavior, file an issue or work around it by running openclaw gateway stop, waiting a few seconds, and then running openclaw gateway start as two separate commands with a deliberate pause between them.
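The stop-pause-start workaround can be made less arbitrary by polling for the old process instead of sleeping for a fixed interval. A sketch — wait_for_exit is our helper name, and the openclaw gateway commands are used as described above:

```shell
# wait_for_exit PATTERN TIMEOUT — poll once per second until no process
# matches PATTERN or TIMEOUT seconds elapse; returns 0 if the process is gone.
wait_for_exit() {
  pattern=$1; timeout=$2; waited=0
  while pgrep -f "$pattern" >/dev/null 2>&1; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

# Usage for a manual restart with a deliberate pause:
#   openclaw gateway stop
#   wait_for_exit "[o]penclaw" 15 || echo "old process still running" >&2
#   openclaw gateway start
```

This starts the new process as soon as the old one is actually gone, rather than guessing at a sleep duration.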
Cause 2: Systemd service running alongside a manual start
If you have OpenClaw configured as a systemd service and it autostarted on boot, and you also started it manually in a terminal (or via an openclaw gateway start command that spawned a new process rather than using the existing one), both the systemd-managed process and the manually started process are now running.
Check whether OpenClaw is running as a systemd service and whether there is also a separately started openclaw process not managed by systemd. Run: systemctl status openclaw (or the relevant service name) to show the systemd service state, and ps aux | grep openclaw to show all running processes. Tell me if the PID in the systemctl output matches a PID in the ps output, or if there are additional PIDs not tracked by systemd.
If a systemd-managed process and a manually started process are both running, stop the manually started one (kill it by PID) and leave the systemd one running. The systemd-managed process will restart automatically on boot and is the one you want as your permanent process. Going forward, use systemctl restart openclaw to restart rather than manual start commands, which bypass systemd and create untracked processes.
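One way to separate the systemd-managed PID from any strays, assuming the service is named openclaw — unmanaged_pids is our helper name, not an OpenClaw or systemd command:

```shell
# unmanaged_pids SERVICE PATTERN — print every PID matching PATTERN that is
# NOT the systemd-managed main PID of SERVICE. Those are the strays to stop.
unmanaged_pids() {
  main=$(systemctl show -p MainPID --value "$1" 2>/dev/null || echo 0)
  for pid in $(pgrep -f "$2" 2>/dev/null); do
    [ "$pid" != "$main" ] && echo "$pid"
  done
  return 0
}

# Usage: stop each unmanaged duplicate, keep the systemd-managed one.
# xargs -r skips the kill entirely when there are no strays.
#   unmanaged_pids openclaw "[o]penclaw" | xargs -r kill
```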
Never start OpenClaw twice from the same config
Running openclaw gateway start when a process is already running does not replace the existing process. Depending on your OpenClaw version, it either starts a second process alongside the existing one or returns an error. If you are unsure whether openclaw is already running, check with ps aux | grep openclaw before running any start command. If it is already running and you want to restart it, use the restart command rather than stop-then-start in two separate operations.
Cause 3: A previous session running in a detached terminal
If OpenClaw was started inside a screen, tmux, or nohup session that was detached rather than closed, that process continues running in the background. If you later started OpenClaw again (perhaps not realizing the first one was still going), you now have two processes.
Check whether there are any detached terminal sessions (screen or tmux) on this server that might be running an openclaw process. List all active screen sessions with: screen -ls. List all tmux sessions with: tmux ls. For any sessions found, tell me the session names and when they were created. Also check for any nohup.out files in the home directory or workspace that might indicate a nohup-started process.
If you find a detached screen or tmux session running openclaw, attach to it to see its current state, then stop openclaw cleanly within the session before detaching again or closing it. Avoid killing the session from outside without first stopping openclaw inside it, because a hard kill may leave lock files or partial writes that cause issues on next start. A cleaner way to handle screen and tmux sessions for openclaw: do not use them for production deployments at all. They are useful for interactive debugging sessions where you want to watch live logs, but as a permanent process management method they introduce exactly the accidental duplicate problem this article covers. Run your debug session in screen or tmux, confirm what you needed to confirm, stop openclaw in the session when done, and let systemd or PM2 restart it cleanly.
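A quick sweep for detached sessions, plus a sketch of stopping openclaw inside a tmux session without attaching ("main" is a hypothetical session name):

```shell
# List detached terminal sessions that might still be hosting openclaw
screen -ls 2>/dev/null || echo "screen: no sessions (or not installed)"
tmux ls 2>/dev/null || echo "tmux: no sessions (or not installed)"

# To stop openclaw inside a tmux session without attaching, send the stop
# command directly, then confirm the process exited before closing the
# session itself:
#   tmux send-keys -t main "openclaw gateway stop" Enter
```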
Cause 4: A cron job starting openclaw on a schedule
A cron job set up to start openclaw automatically (perhaps as an “ensure it’s running” watchdog or as part of a boot script) can create a duplicate if openclaw is already running when the cron fires.
Check the crontab for all users on this server for any entries that start, restart, or reference openclaw. Check: crontab -l for the current user, sudo crontab -l for the root user, and the files in /etc/cron.d/ for any system-level openclaw entries. List every openclaw-related cron entry you find and what time it is scheduled to run.
If a cron job is starting openclaw and openclaw is already running when it fires, the fix is to change the cron command to check whether openclaw is running before starting it. The pattern is: pgrep -f "[o]penclaw" || openclaw gateway start. Two details matter here. Use -f (full command-line match) rather than -x for the same reason as in the manual check earlier: a Node.js-wrapped process may not have openclaw as its process name. And the [o] bracket trick prevents the pattern from matching the cron shell's own command line, which contains the word openclaw and would otherwise make the check succeed every time. Update any problematic cron entries to use this conditional form.
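As a crontab entry, the conditional start looks like this — a sketch assuming a five-minute watchdog schedule:

```shell
# m    h dom mon dow  command
# The [o] bracket keeps the cron shell's own command line from matching
# the pattern, so the guard only detects a real openclaw process.
*/5 * * * * pgrep -f "[o]penclaw" >/dev/null || openclaw gateway start
```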
Deciding which instance to stop
Once you know the cause, you know which process to stop. In most cases it is straightforward: stop the older one if it is a restart artifact, stop the unmanaged one if systemd is involved, stop the detached terminal one if that is the source. But if both processes have been running for a similar amount of time and you are not sure which one is “legitimate,” check which one has the active channel connection.
I have two openclaw processes running and I need to determine which one is receiving active channel messages (Telegram, Discord, or other). Check the gateway logs for both processes. Look at which PID is logging incoming message activity. Tell me the PID of the process that handled the most recent inbound message. That is the one I want to keep. Also check: are both processes using the same openclaw.json, or different configs?
The process that handled the most recent message is the one your channel is actively delivering to. Keep that one. Stop the other. If both are handling messages (intermittently, because they are racing for the same connection), the channel may be in an indeterminate state. In that case, stop both and restart cleanly from the correct startup method (systemd or your preferred process manager). After a clean restart, send a test message and verify the response includes the context and persona you expect. If the context is wrong (agent does not seem to know who it is or what workspace it is in), the surviving process may have been using a different config or workspace than expected. Check the startup command for the new process to confirm it is loading the correct openclaw.json.
Step 3: Stopping the duplicate process safely
Stopping an openclaw process cleanly matters because an unclean stop can leave lock files, open database connections, or partially written config files that cause problems when the remaining process tries to access them.
Stop the openclaw process with PID [DUPLICATE_PID] cleanly. Use a SIGTERM signal first (not SIGKILL) to give the process time to close connections and write any pending state. After sending SIGTERM, wait 5 seconds and confirm the process has exited. If it has not exited after 10 seconds, send SIGKILL. After stopping it, check that the port it was using is now free and that the remaining openclaw process is still running and responsive.
Manual stop if your agent is unresponsive
If both openclaw instances are consuming resources and neither is responding to agent commands, stop the duplicate directly in SSH. Find the PID with ps aux | grep openclaw, then run kill [PID] (SIGTERM). If it does not stop within 10 seconds, run kill -9 [PID] (SIGKILL). After stopping the duplicate, verify the remaining process is responsive: try a simple message to your agent or check curl http://127.0.0.1:18789/health if your version exposes a health endpoint.
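The SIGTERM-then-SIGKILL sequence can be scripted so the escalation only happens when needed — graceful_stop is our helper name, a sketch of the logic described above:

```shell
# graceful_stop PID — send SIGTERM, wait up to 10 seconds for the process
# to exit on its own, and only then escalate to SIGKILL.
graceful_stop() {
  pid=$1
  kill "$pid" 2>/dev/null || { echo "no such process: $pid"; return 1; }
  for _ in 1 2 3 4 5 6 7 8 9 10; do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "process $pid exited cleanly"
      return 0
    fi
    sleep 1
  done
  echo "process $pid did not exit; escalating to SIGKILL"
  kill -9 "$pid"
}

# Usage: graceful_stop [DUPLICATE_PID], then confirm the port is free:
#   ss -tlnp | grep 18789
```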
Step 4: Preventing the second instance from coming back
Stopping the duplicate now is the immediate fix. Understanding and closing the root cause is what keeps it from coming back. Each of the four causes has a specific prevention step.
Now that I have stopped the duplicate openclaw process, help me prevent it from recurring. Based on what we found about the cause: tell me the specific config change, cron update, or startup method change that prevents this from happening again. Then walk me through verifying the prevention is in place. For example, rebooting the server and checking that only one openclaw process starts, or checking that the cron job now uses the conditional start pattern.
A server reboot test is the most reliable verification. After applying the prevention fix, reboot and then check how many openclaw processes are running. A single process on the correct port confirms the fix is working. This test also catches cases where the fix addressed one startup path but a second path you had not noticed is still creating duplicates.
Use systemd as your single startup authority
The cleanest way to prevent duplicate instances long-term is to make systemd the only thing that starts openclaw. Remove openclaw from any cron jobs that start it. Never start it manually in a terminal except for temporary debugging (and stop it when done). Never start it in a screen or tmux session as the permanent method. Systemd with a properly configured unit file handles start, stop, restart, and boot autostart in a way that prevents duplicates. It also gives you systemctl status openclaw as a single source of truth for whether openclaw is running. When systemd manages openclaw, the unit file can include an ExecStartPre directive that kills any stale openclaw process before the new one starts, eliminating the restart-artifact duplicate entirely. This is the belt-and-suspenders approach: systemd serializes start/stop correctly already, and the pre-start kill is a safeguard for edge cases where a previous process did not exit cleanly. Ask your agent to add an ExecStartPre kill directive to your openclaw systemd unit file if you have been seeing restart-artifact duplicates.
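A sketch of such a unit file. The paths, user, and start command are assumptions to adapt to your installation, and it assumes the start command runs in the foreground; the leading dash on ExecStartPre tells systemd to ignore a non-zero exit (which just means no stale process was found):

```ini
# /etc/systemd/system/openclaw.service — sketch, not a canonical unit file
[Unit]
Description=OpenClaw gateway
After=network-online.target

[Service]
Type=simple
User=openclaw
# Belt-and-suspenders: remove any stale openclaw process before starting.
ExecStartPre=-/usr/bin/pkill -f openclaw
ExecStart=/usr/local/bin/openclaw gateway start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing, run systemctl daemon-reload, then systemctl restart openclaw, and confirm with systemctl status openclaw.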
What happens to your data when two instances share a config
When two openclaw instances run with the same openclaw.json and the same workspace directory, they compete for every shared resource: the session database, the memory database, the LCM database, and any lock files. The result is not just missed messages. Both processes write to the same databases simultaneously, which can corrupt session history, produce duplicate memory entries, or cause one process to overwrite data written by the other.
Check my openclaw workspace for signs that two processes were writing to the same databases simultaneously. Look for: duplicate session entries in the session database, duplicate memory entries in the memory store, any LCM database corruption errors in recent logs, and any lock files that might have been created by the duplicate process. Report what you find and whether any data repair is needed.
Memory duplicates are the most common consequence of a dual-instance situation. If your memory plugin ran on both instances simultaneously, the same fact may have been stored twice (or more). Ask your agent to check for duplicates and clean them up after you confirm only one instance is running. Session history corruption is less common but more disruptive: if two processes wrote conflicting turns to the same session record, the history may have interleaved turns from two different contexts, making it appear that the agent said things it did not say in the context you remember. If your session history looks scrambled after a dual-instance incident, it is safer to start a fresh session than to try to continue in the corrupted one. The underlying data is not deleted, so you can still recall memories and reference old sessions, but the active session context is cleaner when started fresh.
FAQ
My openclaw agent is responding twice to every message. Is that caused by a duplicate instance?
Double responses are the most recognizable symptom of two instances sharing a channel connection. Each instance receives the message independently and each generates a response. The responses arrive within a few seconds of each other and appear identical or nearly identical. Confirm this is the cause by running the process check above. If you see two openclaw processes, that is your answer. If you see only one process, double responses have a different cause: check whether your channel plugin is configured with two identical webhook subscriptions or whether a dispatcher agent in your setup is both relaying the message and responding directly.
I restarted OpenClaw and now I have two processes. What happened?
The restart command sent a stop signal to the old process and started a new one before the old one fully exited. The old process is still running (waiting to finish whatever it was doing when the stop signal arrived) alongside the new process. This is Cause 1 from the article. The old process will usually exit on its own within a minute or two once it finishes any pending work. If it is still running after five minutes, stop it manually by PID. To prevent this in future restarts, use systemctl restart openclaw instead of manual stop-start sequences, because systemd waits for the stop to complete before starting the new process. If you need to restart openclaw from within an agent command (e.g., after a config change), have your agent use the gateway restart tool rather than calling shell commands directly. The gateway restart tool is designed to handle the stop-start sequence correctly, while a raw shell command sequence leaves you responsible for managing the timing.
Can two OpenClaw instances run on the same server intentionally without causing problems?
Yes, if they are on different ports and have different workspace directories and different channel connections. Running two intentional instances is the multi-agent setup covered in the article on running two agents on one server without them conflicting. The key requirements: different gateway port in each config, different workspace directory for each instance, different channel tokens (different Telegram bot token, different Discord application, etc.), and a separate systemd service unit for each. With those four things separated, the two instances are fully independent and do not interfere with each other.
I found two openclaw processes but they are on different ports. Should I still stop one?
Only if you did not intentionally set up a two-agent configuration. Check whether there are two different openclaw.json files with two different port numbers configured. If you find them, two agents were intentionally deployed. Check whether both are functioning correctly before stopping either. If you only have one openclaw.json and you did not intentionally set up a second instance, one of the processes is using a different port than expected, possibly because your config was changed between starts. Stop the one on the unexpected port and check your config file to confirm the intended port is set correctly.
After stopping the duplicate, my agent is not responding. What went wrong?
You may have stopped the wrong process. The process you stopped was the active one handling messages, and the one still running is the older duplicate that is not connected to your channel. Check which process is now running and whether it is receiving messages by sending a test message and watching the gateway logs. If the remaining process is not receiving messages, stop it and restart openclaw cleanly using the correct method (systemctl start openclaw or your preferred process manager). A clean restart from a known-good state is faster than trying to diagnose which of two competing processes was active. Before restarting, confirm which openclaw.json you want to use. If the two processes were using different config files, you need to decide which one represents your intended setup. Check both config files for differences in model selection, port, workspace path, and plugin config. Use the correct one as the basis for the clean restart.
Will stopping a duplicate openclaw instance lose any of my current conversation history or memory?
Stopping the duplicate will not delete data, but if the duplicate was actively writing to the session or memory databases when you stop it, the partial write may leave an incomplete record. This is why stopping cleanly with SIGTERM rather than SIGKILL matters: SIGTERM allows the process to finish its current write operation and flush any pending state before exiting. After stopping the duplicate and confirming only one process is running, ask your agent to verify your last few session history entries are intact and your recent memory recall is returning expected results. Any corruption from the dual-write period can usually be resolved by removing the duplicate entries manually. The most efficient approach: ask your agent to list memory entries created in the time window when the duplicate was running, look for pairs of near-identical entries, and remove the duplicates using the memory_forget tool. For session history, corruption is harder to clean up but rarely matters for ongoing operation: the current session starts fresh and the corrupted history entries are only visible if you specifically query old sessions.
How do I set up OpenClaw so this openclaw duplicate instance problem never happens again?
Three steps. First, make systemd the only startup mechanism and remove openclaw from any other autostart paths (cron jobs, rc.local, screen sessions). Second, harden the systemd unit itself: set Restart=on-failure so a crash recovers through systemd rather than through an ad hoc watchdog, keeping systemd the single authority. Third, test the setup with a controlled reboot: boot the server, check that exactly one openclaw process is running on the correct port, and confirm the agent is responsive before declaring the setup stable. If you do not want to use systemd, a process manager like PM2 provides the same guarantees (single instance, autostart, restart-on-failure) with a slightly different interface. Beyond process management, consider adding the process count check as a daily health check: a cron job that runs every morning, counts openclaw processes, and sends a Telegram message if the count is anything other than one. This turns a reactive debugging problem into a proactive alert. By the time you notice inconsistent behavior, the duplicate may have been running for hours. An alert fires within minutes of the duplicate appearing.
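The daily health check might look like this. check_single is our helper name, and the Telegram variables in the comment are hypothetical placeholders for whatever alert channel you use:

```shell
# check_single PATTERN — succeed only when exactly one process matches
# PATTERN; otherwise print an alert line suitable for piping to a notifier.
check_single() {
  count=$(pgrep -f "$1" | wc -l)
  if [ "$count" -eq 1 ]; then
    echo "ok: one process matching $1"
    return 0
  fi
  echo "alert: process count is $count (expected 1) on $(hostname)"
  return 1
}

# Cron usage (every morning at 08:00), assuming this function is saved in a
# script; the Telegram call is a sketch with hypothetical TELEGRAM_BOT_TOKEN
# and TELEGRAM_CHAT_ID variables:
#   0 8 * * * /usr/local/bin/openclaw-healthcheck.sh || curl -s \
#     "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
#     -d chat_id="${TELEGRAM_CHAT_ID}" -d text="openclaw duplicate alert"
check_single "[o]penclaw" || true
```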
What a port conflict between two instances actually looks like
Most operators discover a duplicate instance because something behaves strangely, not because they see two processes in a list. The behaviors that point specifically to a duplicate instance rather than other problems are worth knowing so you can identify the cause faster next time.
Symptom: messages arrive but the agent does not always respond
In polling mode, only one process claims each batch of updates from the polling endpoint. If two processes are polling the same bot token, they alternate claiming message batches: Process A gets one batch, Process B gets the next. Messages going to Process A are handled. Messages going to Process B are also handled, but by an agent session that may have a different or empty context. The result is inconsistent response quality: some messages get good responses, some get worse ones, and the agent seems to “forget” context it should have from earlier in the conversation.
In webhook mode, only one process receives webhook deliveries because the webhook URL points to one endpoint. If two processes are running, only the process whose endpoint the webhook URL resolves to receives messages. The other process runs silently, doing nothing, while consuming RAM and potentially writing to shared databases.
I am experiencing inconsistent agent responses, sometimes good, sometimes as if the agent has no context. Check whether I have more than one openclaw process running and whether they are both connected to the same channel. If two processes are sharing a Telegram bot token in polling mode, they would be alternating which one handles each message batch. Confirm or rule this out.
Symptom: the agent responds twice to the same message
Double responses indicate both processes successfully handled the same message and both sent a reply. This happens more in webhook mode than polling mode, and requires a specific condition: the webhook is delivering to a reverse proxy or load balancer that is forwarding to both instances. In this case, both receive the message simultaneously, both process it, and both send a response. The user sees two messages arrive within a few seconds of each other.
Double responses can also happen when a dispatcher agent is configured to both forward to a target agent AND respond directly. This is a configuration issue rather than a duplicate instance issue, but produces the same visible symptom. Confirm which cause you have before making changes. The quickest way to tell the difference: if both responses are identical or near-identical, it is a duplicate instance (two agents with the same SOUL.md producing similar outputs). If the two responses are different in tone or content, it is a dispatcher configuration issue (two different agents with different instructions both responding). That distinction narrows the diagnosis immediately.
Symptom: agent is slower than usual for no obvious reason
Two openclaw processes running with the same workspace and model configuration consume roughly twice the RAM and CPU of a single process. If your server is at or near its resource limits, a surprise second process can push it over and cause everything to slow down: model inference takes longer, gateway responses are slower, and cron jobs back up. A duplicate instance can also increase Ollama's memory footprint: depending on Ollama's parallelism and keep-alive settings, simultaneous requests from two OpenClaw processes can keep extra model context loaded in memory. On a memory-constrained server, two instances driving the same local model at once can consume all available memory, causing the OS to start swapping and bringing everything to a crawl. The first sign of this is usually that model responses that normally take 3 to 5 seconds suddenly take 20 to 40 seconds. If you see this pattern, check for a duplicate instance before investigating model or hardware issues.
My OpenClaw agent is noticeably slower than usual. Check whether a duplicate openclaw process is contributing to the slowdown. Show me total RAM and CPU usage, broken down by process, and identify whether any openclaw processes are consuming unexpected resources. If Ollama is running, check whether it has loaded the same model twice.
After stopping the duplicate: verification checklist
Once you have stopped the duplicate and identified the root cause, run through this checklist before closing the issue. Skipping verification is how a fix that worked right now fails again in an hour.
- Only one openclaw process running: ps aux | grep openclaw shows exactly one PID (besides the grep process itself, which matches its own pattern). No extras.
- Correct port: ss -tlnp | grep 18789 (or your port) shows one listener, bound to the correct address.
- Agent responsive: Send a test message. The agent responds once, with correct context.
- Root cause closed: The startup mechanism that created the duplicate has been updated, removed, or replaced.
- Reboot test scheduled: If you have not rebooted since fixing the startup mechanism, schedule a test reboot to confirm only one process starts on boot. This is the only way to be certain: on-disk fixes that look correct in config can still have residual paths (a user-level cron, an old rc.local entry, a stale shell profile) that trigger an extra start on boot. The reboot test catches all of them at once.
- Memory and session data intact: Ask the agent to recall a recent fact and confirm session history is intact. If anything looks corrupted from the dual-write period, address it now.
Run the post-duplicate-resolution verification for me. Check: one openclaw process running, correct port bound, agent responsive, recent session history intact, and recent memory recall working. Report each check as pass or fail with the specific output you found.
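The machine-checkable items on the checklist can be run in one pass. A sketch — verify is our name, 18789 is the assumed default port, and the /health endpoint exists only on versions that expose it, as noted earlier:

```shell
# verify PATTERN PORT — run the three machine-checkable checklist items;
# prints one line per check.
verify() {
  n=$(pgrep -f "$1" | wc -l)
  echo "processes matching $1: $n (want 1)"
  ss -tln 2>/dev/null | grep -q ":$2 " \
    && echo "port $2: bound" || echo "port $2: nothing listening"
  curl -fsS "http://127.0.0.1:$2/health" >/dev/null 2>&1 \
    && echo "health endpoint: ok" || echo "health endpoint: no response"
}

verify "[o]penclaw" 18789
```

Agent responsiveness and data integrity still need the manual checks above; this only covers what a script can see.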
Using PM2 as an alternative to systemd for process management
If you are not comfortable configuring systemd service files, PM2 is a Node.js process manager that provides similar guarantees: single-instance enforcement, autostart on boot, restart-on-failure, and process monitoring. Because OpenClaw is a Node.js application, PM2 integrates naturally.
Set up PM2 to manage my OpenClaw instance as a replacement for manual starts. Install PM2 if it is not already installed, configure it to start OpenClaw using the correct command for my installation, set it to restart on failure, and configure it to autostart on system boot using pm2 startup. Then show me the PM2 status output confirming OpenClaw is running and managed by PM2.
With PM2 managing openclaw, starting a duplicate is harder because PM2 tracks the process it started. Running pm2 start openclaw twice results in PM2 starting a second named process, which it shows in the process list with a different ID. This visibility makes the duplicate easier to detect and stop: pm2 list shows all managed processes and pm2 delete [id] removes the duplicate. PM2 also provides a log aggregation feature that combines logs from all managed processes into one stream, which is useful when you need to check whether two processes were both active during a specific window. Ask your agent to use PM2 logs to show you the last 100 lines across all openclaw-related processes.
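A sketch of the PM2 workflow. The pm2 subcommands are standard PM2 CLI, but the exact arguments for starting OpenClaw depend on your installation and are shown here as assumptions:

```shell
# Is PM2 available? (install via npm if missing)
command -v pm2 >/dev/null && echo "pm2: installed" || echo "pm2: not installed (npm install -g pm2)"

# The typical management sequence:
#   pm2 start openclaw --name openclaw   # start and register the process
#   pm2 save                             # persist the process list
#   pm2 startup                          # prints the boot-hook install command
#   pm2 list                             # a duplicate shows as a second entry
#   pm2 delete <id>                      # remove the duplicate by its id
#   pm2 logs --lines 100                 # combined recent logs
```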
PM2 vs systemd: choosing one
Use systemd if your server runs Linux and you prefer managing services with standard system tools. Systemd is already present on Ubuntu, Debian, CentOS, and most modern Linux distributions. Use PM2 if you are managing multiple Node.js applications, want a richer monitoring dashboard, or find PM2’s config format more intuitive than systemd unit files. Do not use both to manage the same openclaw instance. Pick one and stick to it. Mixing process managers for the same application recreates the exact duplicate instance problem this article describes.
Brand New Claw: $37
Get OpenClaw running clean from day one
The exact systemd config, startup sequence, and process isolation setup that prevents duplicate instances, port conflicts, and the other setup failures that cost operators hours to debug.
