OpenClaw routes messages to different agents by running multiple agent instances, each with its own channel connection and config. This article covers every method for splitting message traffic across agents: by channel, by sender ID, by keyword, and by time of day.
TL;DR
The cleanest way to route different people’s messages to different OpenClaw agents is to give each agent its own channel connection: a separate Telegram bot token, a separate Discord bot, or a separate WhatsApp number. Each agent only sees messages sent to its channel. No routing logic required. If you need to split traffic within a single channel (e.g., different users in the same Telegram bot), use allowlist config to control which user IDs each agent responds to.
Why you would want this
A single OpenClaw agent handles all messages from all sources with the same persona, the same model, and the same permissions. For most personal setups that is exactly right. But routing messages to different agents is what unlocks genuinely different experiences for different users, and several real situations call for it:
- Work vs personal separation: A work agent has access to work files, work credentials, and work-specific instructions. A personal agent has access to personal calendars, home automation, and personal context. Mixing them in one agent means one agent has access to everything. Separate agents mean each has only what it needs.
- Different personas for different audiences: A customer-facing bot needs a different tone, knowledge base, and scope than an internal agent for yourself. Running them as the same agent with the same SOUL.md is awkward at best and a liability at worst.
- Different capability levels for different users: You want yourself to have full tool access including exec. A family member using the same Telegram bot should not be able to trigger exec commands. Separate agents with different tool permission configs enforce this cleanly.
- Load distribution: A high-traffic use case where message volume would overload a single agent’s context management and response queue benefits from distributing traffic across multiple agents.
- Experimentation and A/B testing: You want to test two different SOUL.md configurations or two different models on real traffic. Routing half your users to Agent A (current config) and half to Agent B (new config) lets you compare results before committing to a change. This is especially useful when evaluating a new model for your primary setup.
Method 1: Separate channel connections per agent (recommended)
The simplest and most reliable way to route different people to different agents is to give each agent its own channel connection. Agent A has Bot Token 1 and responds to messages sent to that bot. Agent B has Bot Token 2 and responds to messages sent to that bot. Users who message Bot 1 reach Agent A. Users who message Bot 2 reach Agent B. No routing logic, no allowlists, no overlap. This is also the only method that provides a true security boundary: Agent A cannot see messages sent to Agent B, and vice versa. If you have one agent with sensitive permissions and one without, separate channels ensure a user who only has access to the low-permission agent cannot accidentally or intentionally cause the high-permission agent to act.
I want to set up two separate OpenClaw agents, each with its own Telegram bot connection, so that different people reach different agents. Tell me: do I currently have a second OpenClaw instance running, or do I need to create one? What does a second instance require: separate workspace directory, separate config file, separate gateway port? Walk me through what the second agent’s config needs to look like and what is different from my current agent’s config.
Each OpenClaw instance needs:
- Its own workspace directory (e.g., `~/.openclaw-work/workspace` for the work agent)
- Its own config file pointing to that workspace
- Its own gateway port (e.g., 18790 instead of 18789) so both can run simultaneously
- Its own channel token (a different Telegram bot token, Discord bot application, or WhatsApp phone number)
- Its own systemd service or process manager entry so both start automatically
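As a concrete sketch, a second instance's config could look like the following. The field names here (`agent.name`, `workspace`, `gateway.port`, `channels.telegram.botToken`) are illustrative assumptions, not guaranteed schema; check your installed version's config reference before copying anything:

```json
{
  "agent": { "name": "WorkAgent" },
  "workspace": "/home/node/.openclaw-work/workspace",
  "gateway": { "port": 18790 },
  "channels": {
    "telegram": { "botToken": "<second bot token from @BotFather>" }
  }
}
```

The properties that matter are the ones the list above calls out: a distinct workspace path, a distinct port, and a distinct token. Everything else can mirror your primary instance.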
Creating a second Telegram bot takes two minutes
Message @BotFather on Telegram and send /newbot. Follow the prompts to name the bot and get a token. That token is your second agent’s Telegram credential. It is a completely separate bot identity from your first one. Users who message it are routed exclusively to whichever OpenClaw instance holds that token. There is no limit on how many bots you can create.
Create a second OpenClaw agent configuration for me. Use the following parameters: workspace at /home/node/.openclaw-work/workspace, gateway port 18790, agent name “WorkAgent”. Copy my current SOUL.md as a starting point but I will update it separately. Set up a separate systemd service file for this second instance so it starts automatically with the server. Show me the complete config and service file before writing anything.
The article on running two agents on one server without conflicts covers the exact config structure and port isolation in detail. If you run into gateway conflicts or workspace collisions, that article has the specific resolution steps. The key point worth stating here: when you create a second instance, its workspace is completely independent of your primary agent. SOUL.md, AGENTS.md, workspace files, memory databases, and cron jobs are all separate. Changes to the second agent’s config do not affect the primary agent’s behavior. This independence is the whole point: each agent has its own identity, its own permissions, and its own context, and there is no shared state that a message to one agent can inadvertently affect in the other.
Method 2: Sender allowlists within a single channel
If you want to keep one channel (one Telegram bot, one Discord bot) but have different agents respond to different users, OpenClaw’s sender allowlist config is the mechanism to use. Each agent instance is configured with an allowlist of user IDs it will respond to. Messages from IDs not on the allowlist are ignored by that agent.
I want to configure my OpenClaw agents so that Agent A only responds to specific Telegram user IDs and Agent B responds to a different set of user IDs, both using the same Telegram bot. Show me where the sender allowlist is configured in openclaw.json for the Telegram plugin, what the config field is called, and the format for specifying user IDs. Also tell me what happens to messages from user IDs that are not on either allowlist. Are they silently ignored or does the user get an error?
This method has a constraint worth understanding before committing to it. In OpenClaw’s architecture, each running instance independently polls or receives webhook deliveries for its configured channel, so two instances configured with the same bot token both receive every message. The allowlist on each instance then determines whether it acts on the message: the allowlist controls who responds, not who receives. Channel mechanics complicate this further. In webhook mode, the same webhook URL can only point to one endpoint at a time, so one instance must act as the primary webhook receiver and forward traffic to the second, which adds complexity. In polling mode, concurrent getUpdates calls on the same bot token are unreliable: Telegram typically rejects a second simultaneous poller with a 409 Conflict error, so do not count on two instances polling the same token. For a production setup with two instances sharing a token, webhook mode with a forwarding dispatcher is the cleaner architecture.
Overlapping allowlists cause double responses
If a user ID appears on the allowlist of two different agent instances both connected to the same bot, that user gets a response from both agents for every message they send. This is almost never the intended behavior. Before deploying allowlist-based routing, verify that every user ID appears on exactly one agent’s allowlist. Also verify that the catch-all or default behavior (what happens when a message comes from an unlisted ID) is consistent between agents.
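A quick script can catch overlaps before deployment. A minimal Python sketch, assuming you can dump each agent's allowlist into a plain list of user IDs (the agent names and IDs below are made up):

```python
def check_allowlists(allowlists):
    """Verify every user ID appears on exactly one agent's allowlist.

    allowlists: dict mapping agent name -> list of user IDs.
    Returns (duplicates, assignment), where duplicates maps each
    doubly-claimed user ID to the agents that claim it.
    """
    assignment = {}   # user ID -> first agent that claimed it
    duplicates = {}   # user ID -> all agents claiming it
    for agent, ids in allowlists.items():
        for uid in ids:
            if uid in assignment:
                duplicates.setdefault(uid, [assignment[uid]]).append(agent)
            else:
                assignment[uid] = agent
    return duplicates, assignment

# Hypothetical example: user 222 is on both lists and gets flagged.
dups, assignment = check_allowlists({
    "AgentA": [111, 222],
    "AgentB": [222, 333],
})
print(dups)  # {222: ['AgentA', 'AgentB']}
```

An empty `dups` result means every listed user reaches exactly one agent; anything else is a double-response waiting to happen.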
Method 3: OpenClaw bindings config for channel-level routing
OpenClaw’s bindings config provides a more structured way to specify which agent handles which channel or connection. Rather than configuring the channel inside the agent’s config, bindings declare an explicit mapping between a channel source and an agent session. This is the recommended pattern for operators running three or more agents with complex routing requirements.
Show me whether my current OpenClaw version supports a bindings configuration that maps channel sources to specific agent sessions. Read my openclaw.json and tell me if there is a bindings or routing section. If there is, explain how to add a binding that routes Telegram messages from user ID [X] to agent “WorkAgent” and all other Telegram messages to the default agent. If my version does not support bindings config, tell me what version introduced this feature and what the alternative is.
The bindings config pattern (when supported) looks like a routing table: each entry specifies a source (channel type plus optional filter), a target agent, and a priority order for when multiple rules match. The first matching rule wins. A catch-all rule at the lowest priority handles anything not matched by a more specific rule.
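Independent of OpenClaw's exact schema, first-match-wins routing with priorities is a small amount of logic. A Python sketch of the semantics just described; the rule shapes and agent names are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Binding:
    priority: int                    # lower number = checked first
    matches: Callable[[dict], bool]  # predicate over the inbound message
    agent: str                       # target agent for matching messages

def route(message: dict, bindings: list) -> Optional[str]:
    """Return the target agent for the first matching rule, by priority."""
    for b in sorted(bindings, key=lambda b: b.priority):
        if b.matches(message):
            return b.agent
    return None

# Hypothetical rules: one VIP user goes to WorkAgent, everyone else
# falls through to the catch-all at the lowest priority.
rules = [
    Binding(10, lambda m: m.get("user_id") == 4242, "WorkAgent"),
    Binding(99, lambda m: True, "DefaultAgent"),
]
print(route({"user_id": 4242}, rules))  # WorkAgent
print(route({"user_id": 7}, rules))     # DefaultAgent
```

Note that without the catch-all rule, `route` returns `None` for unmatched messages, which is exactly the "silently ignored or error?" question the prompt above asks you to pin down.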
Bindings vs allowlists: when to use each
Use bindings config when: you need declarative routing rules in one place, you have three or more agents, or you need priority-ordered matching (e.g., a VIP user gets Agent A, everyone else gets Agent B). Use allowlists when: you have two agents, the routing is simple (user set A gets agent A, user set B gets agent B), and you want the simplest possible config with no new concepts to learn. Use separate channel connections when: routing needs are permanent, agents have genuinely different access levels, and you want zero risk of routing errors causing the wrong agent to respond.
Method 4: Keyword and intent routing
Keyword routing uses message content rather than sender identity to decide which agent handles a message. A message that starts with “Work:” or contains a specific trigger phrase gets routed to the work agent. Everything else goes to the default agent. This is useful when the same user needs access to multiple agents depending on what they are asking, rather than needing a permanent assignment.
I want to set up keyword-based routing so that messages beginning with “Work:” or “W:” are forwarded to my work agent and all other messages go to my default agent. Tell me whether OpenClaw supports keyword routing in its native config, and if not, what the practical implementation looks like. I am willing to implement this as a cron-driven dispatcher or a plugin if native support is not available.
Native keyword routing at the gateway level depends on your OpenClaw version. If your version does not support it in config, the practical alternative is a lightweight dispatcher pattern: a single “router” agent receives all messages, inspects the content, and forwards to the appropriate agent using sessions_send. The router agent itself responds to no one. It only reads and redirects.
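Whether implemented natively or inside a router agent's instructions, the routing decision itself is trivial. A Python sketch of the prefix check described above; session names are placeholders:

```python
def pick_target(text: str, default: str = "default-agent") -> tuple:
    """Decide which agent session should handle a message.

    Messages starting with "Work:" or "W:" go to the work agent, with
    the prefix stripped so the target sees a clean message; everything
    else passes through to the default agent unchanged.
    """
    for prefix in ("Work:", "W:"):
        if text.startswith(prefix):
            return "work-agent", text[len(prefix):].strip()
    return default, text

print(pick_target("Work: summarize the standup notes"))
# ('work-agent', 'summarize the standup notes')
print(pick_target("what's for dinner?"))
# ('default-agent', "what's for dinner?")
```

Stripping the prefix before forwarding matters: the work agent should not have to know it was reached through a router.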
Set up a dispatcher pattern for me. I want a lightweight router that receives all Telegram messages, checks whether the message starts with “Work:” or “W:”, and forwards matching messages to my work agent session using sessions_send. All other messages should pass through to my default agent. Write the SOUL.md or instruction file for the router agent that implements this logic. Keep it simple. The router should not try to respond itself, only dispatch.
Keyword routing adds a response-time layer
Every message in a keyword-routing setup goes through two agents: first the router, then the target. The router adds latency equal to its own processing time before the actual agent sees the message. For conversational use where response time matters, this extra hop is noticeable. Keep the router’s instructions minimal and its model set to the fastest available option (a small local model works well for simple keyword matching). The router should complete its dispatch in under a second.
Method 5: Time-based routing for work-hours separation
Time-based routing switches which agent is active based on the time of day or day of week. During work hours, the work agent handles messages. Outside work hours, the personal agent does. This is useful when the same person needs different contexts from the same interface depending on when they are messaging.
I want to implement time-based agent routing so that on weekdays between 9am and 6pm my work agent handles messages and outside those hours my personal agent handles them. Tell me the cleanest way to implement this in OpenClaw. Options I am considering: cron jobs that enable/disable agents on a schedule, a dispatcher that checks the current time before forwarding, or a single agent that switches its SOUL.md instructions based on time. Tell me which approach has the least operational risk and what the config looks like.
The cron-based enable/disable approach is the most reliable for time-based routing. Two cron jobs: one fires at 9am weekdays to activate the work agent config and deactivate the personal agent config, and another fires at 6pm to reverse it. The risk is a message arriving during the switch window before the cron fires and being handled by the wrong agent. For most personal setups, a 1-minute window of uncertainty is acceptable. For tighter requirements, use a dispatcher that checks the current time at runtime. A runtime dispatcher that checks the clock is more accurate but has a different failure mode: if the dispatcher’s session goes stale or its process restarts, the time check may fail entirely and messages may pile up unhandled until the session recovers. Cron-based enable/disable is predictable and survives restarts cleanly, because the cron job fires regardless of agent session state.
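For the runtime-dispatcher variant, the time check is a few lines. A Python sketch of the weekday/work-hours decision; agent names are placeholders, and the hours should be adjusted to your schedule:

```python
from datetime import datetime

def active_agent(now: datetime) -> str:
    """Return which agent should handle a message at the given time.

    Weekdays 9:00-17:59 route to the work agent; all other times
    route to the personal agent.
    """
    is_weekday = now.weekday() < 5   # Monday=0 .. Friday=4
    in_hours = 9 <= now.hour < 18
    return "work-agent" if (is_weekday and in_hours) else "personal-agent"

print(active_agent(datetime(2025, 1, 7, 10, 30)))  # Tuesday 10:30 -> work-agent
print(active_agent(datetime(2025, 1, 4, 10, 30)))  # Saturday -> personal-agent
```

One subtlety worth testing explicitly: the boundary minutes (9:00 exactly, 18:00 exactly) are where cron-based and runtime-based approaches can disagree.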
Shared memory across agents
When you run multiple agents handling different message sources, a common follow-up question is whether those agents can share memory, specifically whether facts learned by the work agent are available to the personal agent and vice versa. By default they are not. Each OpenClaw instance has its own memory scope tied to its workspace and agent identity.
I am running two OpenClaw agents for different message routing purposes. Tell me whether it is possible to configure them to share the same memory database so facts stored by one agent are available to the other. If shared memory is supported, show me how to point both agents at the same LanceDB or memory plugin data directory. If it is not supported, tell me the practical workaround for passing facts between agents.
Whether shared memory is feasible depends on your memory plugin. LanceDB stores data in a directory. If both agents are configured to point their memory plugin at the same directory path, they write to and read from the same store. The risk is concurrent write conflicts if both agents store memories at exactly the same moment. For low-traffic personal setups, this risk is low. For higher-volume use, a dedicated memory service that both agents call via API is the safer architecture. A middle path that avoids both shared storage and a separate service: periodic memory sync. Each agent stores memories to its own isolated database, and a nightly cron job exports facts from one agent’s memory and imports them into the other’s. This introduces a lag (facts learned today are available to the other agent tomorrow), but avoids concurrent write risk entirely. Ask your agent to design a memory sync script if this pattern fits your use case.
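If the nightly-sync pattern fits, the sync script can stay simple. A Python sketch that merges JSON fact exports with exact-match deduplication; the flat-list export format is an assumption for illustration, since real memory plugins (LanceDB included) store richer records:

```python
import json
import tempfile
from pathlib import Path

def sync_facts(src: Path, dst: Path) -> int:
    """Merge facts exported by one agent into another agent's export.

    Assumes each agent can dump its memory as a JSON list of fact
    strings (hypothetical format; adapt to your memory plugin).
    Deduplicates exact matches and returns the number of facts added.
    """
    src_facts = json.loads(src.read_text()) if src.exists() else []
    dst_facts = json.loads(dst.read_text()) if dst.exists() else []
    known = set(dst_facts)
    new = [f for f in src_facts if f not in known]
    dst.write_text(json.dumps(dst_facts + new, indent=2))
    return len(new)

# Demo with throwaway files: one overlapping fact, one new one.
tmp = Path(tempfile.mkdtemp())
work, personal = tmp / "work.json", tmp / "personal.json"
work.write_text(json.dumps(["boss travels Friday", "VPN is flaky"]))
personal.write_text(json.dumps(["VPN is flaky"]))
added = sync_facts(work, personal)
print(added)  # 1
```

Run this as the body of the nightly cron in both directions (work to personal, personal to work) and each agent wakes up with the other's facts, one day late.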
Testing and verifying your routing setup
When you route messages to different agents, a routing misconfiguration where the wrong agent responds is hard to detect, because both agents produce valid-looking responses; they just come from the wrong context. Before putting a multi-agent routing setup into production, test each routing rule explicitly with a message that should trigger it and one that should not. The boundary cases matter more than the happy path: test a message that matches two rules simultaneously (to confirm priority ordering), a message from an unlisted user (to confirm it is handled or silently ignored as intended), and a message sent during a scheduled routing switch window (to confirm the timing boundary works). Most routing bugs live at boundaries, not at the center of the rule. Plan your tests for the edges.
Verify my multi-agent routing setup is working correctly. For each routing rule I have configured, send a test message that should trigger that rule and confirm which agent responded. Also send a test message for each rule that should NOT trigger it and confirm the correct agent (or no agent) handled it. Report the results as a table: rule, test message, expected agent, actual agent, pass or fail.
FAQ
Can one OpenClaw agent forward a message to a different agent mid-conversation?
Yes, using the sessions_send tool. An agent can call sessions_send with the target session key and the message text, and the target agent receives it as a new inbound message. The reply from the target agent comes back to the calling agent’s session, which can then forward it to the original user. This is the basis of the dispatcher pattern described in Method 4. The calling agent can also spawn a subagent with sessions_spawn for tasks that require a fresh session rather than an existing one. The distinction: sessions_send routes to a running persistent session, sessions_spawn creates a new isolated session for a single task. For multi-turn conversations where context needs to persist across multiple exchanges with the target agent, sessions_send to a persistent named session is the right call. For one-off tasks where you want a clean slate, sessions_spawn is cleaner because the sub-agent’s context does not accumulate across uses.
What happens if both agents are running and both respond to the same message?
The user receives two responses, usually within a few seconds of each other. This is disorienting and in a customer-facing setup, damaging. It happens when allowlists overlap (the same user ID is on two agents’ allowlists), when both agents are connected to the same channel token without any sender filter, or when a dispatcher forwards to a target agent but also responds itself. The fix depends on the cause: tighten allowlists if they overlap, add a sender filter if none exists, or update the dispatcher’s SOUL.md to prohibit direct responses. Test with the verification prompt above before deploying.
Can I route messages from the same Telegram user to different agents depending on what they say?
Yes, using the dispatcher/keyword routing pattern from Method 4. The dispatcher agent receives all messages from that user and routes based on content. The key implementation detail is that the dispatcher must not reply to the user itself. Its sole job is to call sessions_send to the appropriate target and then stop. If the dispatcher also replies, the user gets two responses: one from the dispatcher (usually empty or a relay confirmation) and one from the target. Set the dispatcher’s SOUL.md to explicitly prohibit user-facing replies and only dispatch.
How do I know which agent a given user is currently assigned to?
Ask your agent. If you have allowlist-based routing, ask it to read both agents’ configs and show you which user IDs are on each list. If you have bindings-based routing, ask it to show you the current bindings table. If you have keyword routing, the assignment is dynamic per message rather than per user. For dispatcher-based routing, ask your dispatcher agent to tell you which agent it would route to for a given test message. For most setups, maintaining a simple comment in your routing config that documents which user IDs go where is worth the 30 seconds it takes and prevents the “which agent does Bob use?” confusion weeks later.
Do I need a separate server for a second OpenClaw agent, or can they run on the same machine?
Same machine is fine. Two OpenClaw instances on the same server use different ports and different workspace directories. The practical constraint is RAM: each instance runs its own Node.js process and loads its own models. On a server with 4GB RAM, two instances using small local models is feasible. On a server with 2GB RAM, two instances will compete for memory and both will slow down. Check your current RAM usage before adding a second instance. Ask your agent to show you current memory usage broken down by process, then estimate whether there is headroom for a second OpenClaw process plus its model allocation. A useful rule of thumb: a minimal OpenClaw instance with no active local model needs roughly 200 to 400 MB of RAM for the Node.js process itself. Add the model size on top of that. Two instances plus two small Ollama models on a 4GB server is tight but workable. Two instances plus two full-size models will compete for memory and both will be slow.
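The rule of thumb above turns into a quick back-of-the-envelope calculation. In this sketch, the 300 MB base figure is just the midpoint of the 200-400 MB range; substitute your own measurements:

```python
def headroom_mb(total_ram_mb: int, instances: int, model_mb: list,
                base_per_instance_mb: int = 300) -> int:
    """Rough RAM headroom after running N OpenClaw instances plus models.

    base_per_instance_mb approximates each Node.js process's footprint;
    model_mb lists the resident size of each loaded local model.
    """
    used = instances * base_per_instance_mb + sum(model_mb)
    return total_ram_mb - used

# Two instances plus two ~1.3 GB local models on a 4 GB server:
print(headroom_mb(4096, 2, [1300, 1300]))  # 896 -> tight but workable
```

Anything under a few hundred megabytes of headroom means the OS will start swapping under load, and both agents will slow down together.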
Can I use OpenClaw multi-agent routing without running two separate instances?
It depends on what you mean by routing. If you want different users to get different personas or SOUL.md instructions within a single agent instance, OpenClaw does not natively support per-user personas in a single session. The agent has one identity per session. You can approximate per-user context by having the agent detect the sender ID at the start of a conversation and adjust its tone or instructions accordingly, but this requires careful SOUL.md design and is fragile compared to separate instances. For genuinely different access levels, separate instances are the reliable path. A single OpenClaw instance with session-level routing is not the right abstraction for this. Sessions in OpenClaw are not security boundaries: they are context partitions. An agent session can call tools that affect other sessions, read files shared across the workspace, and use any credentials in the config regardless of which session the request came from. Security boundaries require separate instances with separate workspaces and separate configs.
What is the openclaw bindings config format and where does it live in openclaw.json?
Bindings config, when supported, lives under the top-level bindings or routing key in openclaw.json. The exact key name and schema vary by OpenClaw version. Ask your agent to read your openclaw.json and check whether a bindings or routing key exists, then tell you its schema. If it does not exist in your current config, your version either does not support it or uses a different mechanism for channel-to-agent routing. Running openclaw config schema (if your version supports it) shows the full schema for your installed version and reveals available routing fields.
Choosing the right routing method for your situation
The five methods above are not equally suitable for all situations. The right method depends on how many users you have, how different their access requirements are, how much configuration complexity you can maintain, and whether your routing needs are static or dynamic.
Here is a decision framework that matches situation to method:
- Two users with permanently different access levels (e.g., you and a family member): Method 1 (separate channels). Create a second bot, give it its own OpenClaw instance with tighter permissions. No routing logic to maintain. The separate channel identity is the routing mechanism.
- Five to twenty users where most get the same experience but a few get different treatment: Method 2 (allowlists) or Method 3 (bindings). Use allowlists if the rules are simple lists of IDs. Use bindings if the rules have priority ordering or multiple conditions.
- One user who needs different agent behavior depending on task type: Method 4 (keyword routing). A dispatcher that reads the first word of each message and routes accordingly. Simple to implement, low maintenance, adds one processing hop.
- Work vs personal separation based on time: Method 5 (time-based). Two cron jobs are more reliable than a dispatcher checking the clock at runtime, because cron state persists across gateway restarts.
- Three or more distinct user groups with complex overlapping rules: Method 3 (bindings config) if your OpenClaw version supports it. If not, Method 1 with multiple instances, each scoped to its group.
The most common mistake is over-engineering the routing. If you have two users with different needs, two separate bots with two separate instances is a five-minute setup with zero ongoing maintenance. A dispatcher-based keyword router is a compelling architecture until you need to debug why user 7 is occasionally getting routed to the wrong agent at 11pm on a Tuesday. Start with the simplest method that covers your actual requirements. The second most common mistake is under-specifying the routing rules before implementing. Before writing any config, write out the routing rules in plain English: “User A always goes to Agent X. Users B and C go to Agent Y on weekdays. On weekends, all users go to Agent X.” If you cannot write the rules clearly in a sentence, the config will reflect that ambiguity. Clarify the rules first, then configure.
Access control and security between agents
When you run multiple agents handling different users, the security question is not just “which agent responds to whom” but “what can each agent do on my behalf.” An agent with exec tool access and file system permissions is an agent that can modify your server if it receives the right (or wrong) instruction. If one of your agents is exposed to untrusted users, that agent’s tool permissions should reflect that.
I am running two OpenClaw agents: one for myself with full tool access, and one that handles messages from other users who I do not fully trust. Tell me what tool permissions I should restrict on the second agent to prevent it from being used to execute commands, read sensitive files, or access my API keys. Show me the specific config fields that restrict exec access, file system scope, and API key exposure, and what values to set for a restricted public-facing agent.
The practical access control settings for a restricted agent are:
- exec tool restriction: Set `exec.security` to `deny` to prevent any shell command execution. This is the highest-risk tool in a public-facing agent.
- File system scope: Restrict the agent’s workspace to a subdirectory that contains only the files it needs. Avoid pointing a public-facing agent’s workspace at your main `~/.openclaw/workspace`, where your API keys, memory files, and product files live.
- API key exposure: The second agent’s openclaw.json should contain only the API keys it actually uses. If the second agent only needs to respond conversationally using a local model, it does not need your Anthropic key, your Cloudflare token, or any other credential.
- Memory isolation: Do not point the second agent’s memory plugin at the same data directory as your primary agent. Conversations from untrusted users should not mix with your personal memory store.
- Model selection: A restricted public-facing agent does not need your most capable (and most expensive) model. Route it to a fast, cheap local model or a small cloud API model. This also reduces the risk of model jailbreaks being economically costly: a local model responding to a jailbreak attempt costs nothing in API fees.
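As a sketch, a restricted agent's config might combine these settings into one openclaw.json. Only `exec.security` is named by this article; the other field names (`workspace`, `model`, `memory.dataDir`) are illustrative assumptions to show the shape, not verified schema:

```json
{
  "agent": { "name": "PublicAgent" },
  "workspace": "/home/node/.openclaw-public/workspace",
  "exec": { "security": "deny" },
  "model": { "provider": "ollama", "name": "llama3.2:3b" },
  "memory": { "dataDir": "/home/node/.openclaw-public/memory" }
}
```

The pattern to notice: every path points into the restricted agent's own directory tree, and nothing references your primary agent's workspace, keys, or memory store.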
Prompt injection risk in multi-agent routing
When a routing agent reads user messages and decides where to forward them, the message content passes through the router’s context. A malicious user who knows your router’s instructions can craft a message designed to confuse or override the routing logic. This is a prompt injection attack targeted at the routing layer. Keep your router’s instructions minimal and specific. Do not include sensitive information in the router’s SOUL.md or context. If the router is compromised by an injected instruction, the worst case should be a misrouted message, not a command executed against your server.
Monitoring a multi-agent setup
A single agent is easy to monitor: you watch one set of logs, one session history, one gateway status. Two or more agents add monitoring surface area. A message that should have reached Agent A but went to Agent B produces no error anywhere in the logs of either agent. From each agent’s perspective, it either received a message and handled it or did not receive one. The routing failure is only visible at the coordination layer.
I am running multiple OpenClaw agents with message routing between them. Set up a monitoring approach that tells me: which agent handled the last message from each user, whether any messages were dropped (received by no agent), and whether any messages were handled by more than one agent (double response). Check the recent session history across my running agents and report any anomalies.
A practical monitoring pattern for multi-agent setups is a brief daily check where you ask your primary agent to pull the last 24 hours of session activity from all running agents and flag any gaps or duplicates. This is lighter than real-time monitoring and catches the most common failure modes: a bot going offline, an allowlist misconfiguration causing double responses, or a cron routing rule firing outside its intended window. Pair this with a simple status page in your workspace: a markdown file that your monitoring cron updates after each check, showing each agent’s last-seen timestamp and current status. When you want to know if everything is working, you read one file rather than checking multiple dashboards. This file also serves as an incident record: if something broke at 2am and was automatically resolved by 3am before you woke up, the status file shows the gap. That visibility is worth the ten minutes it takes to set up the cron.
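The core of such a check is comparing each agent's last-seen timestamp against a freshness threshold. A Python sketch; how you collect the timestamps depends on your logging setup:

```python
from datetime import datetime, timedelta

def stale_agents(last_seen: dict, now: datetime,
                 max_age: timedelta = timedelta(minutes=30)) -> list:
    """Flag agents whose last inbound-message timestamp is too old.

    last_seen maps agent name -> datetime of its most recent activity.
    Returns the names of agents that have gone quiet for longer than
    max_age, which is the signal to alert on.
    """
    return [name for name, ts in last_seen.items() if now - ts > max_age]

now = datetime(2025, 1, 7, 12, 0)
print(stale_agents({
    "WorkAgent": datetime(2025, 1, 7, 11, 55),    # fresh
    "PersonalAgent": datetime(2025, 1, 7, 9, 0),  # stale -> flagged
}, now))  # ['PersonalAgent']
```

A cron job that runs this and appends one line per check to the status file gives you both the current view and the incident history described above.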
Check the health of my multi-agent routing setup. For each OpenClaw instance I have running, confirm: the gateway is up, the channel plugin is connected and receiving messages, the last inbound message timestamp is recent (not stuck), and there are no error-level log entries in the last hour. Give me a one-line status per agent: agent name, status (OK or ISSUE), and the specific issue if any.
If you want automated routing health monitoring rather than on-demand checks, a cron job on your primary agent that runs this check every 30 minutes and sends a Telegram alert when an agent is down or not responding is a low-cost, high-value automation. The Queue Commander covers exactly this pattern: a cron-driven health monitor with conditional alerting.
Queue Commander: $67
Build multi-agent workflows that run themselves
The exact config patterns for routing, dispatching, and coordinating multiple OpenClaw agents, plus the queue system that makes complex automation manageable without constant oversight. Routing is the connective tissue of a multi-agent setup; Queue Commander is the brain.
