OpenClaw sub-agents can’t access the skills I installed on my main agent

When you spawn a sub-agent in OpenClaw, it does not automatically inherit the skills you installed on your main agent. This surprises almost every operator who tries it for the first time. The sub-agent is an isolated session with its own context. It does not see your main agent’s workspace skills directory by default, and it has no way to call skill-defined tools that were never loaded into its session. This article explains why, and what to do about it.

TL;DR

Sub-agents run in isolated sessions and do not inherit skills from the main agent. To give a sub-agent access to a skill’s tools, pass the skill’s instructions and any required context explicitly in the task prompt when you spawn it. There is no config option that automatically propagates skills into sub-agent sessions.

Why sub-agents do not inherit skills

A skill in OpenClaw is a set of instructions loaded at session start from a SKILL.md file in your workspace. When your main agent starts a session, OpenClaw reads those files and injects their instructions into the agent’s active context. The agent knows how to use the skill’s tools because those instructions are present and readable in its context window from the moment it starts.

When your main agent calls sessions_spawn to create a sub-agent, that sub-agent gets a fresh, empty context. It does not share the main agent’s session. It has no access to the main agent’s active context, injected skill instructions, or loaded tool configurations. It starts from scratch, exactly the same way a brand-new OpenClaw session does when you open a fresh chat.

This is intentional. Isolation is what makes sub-agents useful for parallel work and for confining blast radius. A sub-agent that accidentally inherits permissions or context it was not meant to have is a security and reliability problem, not a convenience. The tradeoff is that you have to be explicit about what each sub-agent needs to know and what it is allowed to do. This article covers exactly how to do that.

I want to understand exactly what my sub-agents have access to when they start. Read the sessions_spawn tool documentation and tell me: do sub-agents inherit my current session’s skills, tools, or context? What does a freshly spawned sub-agent’s context look like at the moment it starts? List everything it has and everything it does not have.

What a sub-agent actually gets at startup

When you spawn a sub-agent with sessions_spawn, the sub-agent session starts with a specific set of things. Understanding this list is the foundation for building sub-agent workflows that work reliably.

What the sub-agent has at startup:

  • The task prompt you provided. This is the only content the sub-agent has at startup. Everything it knows at session start, it knows because you put it in this prompt. Nothing is added automatically.
  • The model you specified (or the system default model if you did not specify one). Model selection is critical for skill-heavy tasks. More on this below.
  • The built-in tools available to the agent definition it runs under. By default, these are the same built-in tools as your main agent (read, write, exec, web_search, web_fetch, memory_recall, message, cron, sessions_spawn, and others), subject to the allowedTools configuration for that agent definition.
  • Access to the same workspace directory as the main agent. Files on disk are shared. The sub-agent can read and write to the workspace the same way the main agent does.
  • Access to plugin-registered tools if those plugins are active system-wide. Plugins that register custom tools make those tools available to all sessions, including sub-agents.

What the sub-agent does NOT have at startup:

  • Skill instructions injected from SKILL.md files in your workspace
  • Automatic memory recall from your long-term memory store (autoRecall does not fire in spawned sub-agent sessions)
  • Any context from the main agent’s active conversation or turn history
  • Config-defined persona instructions or SOUL.md contents
  • Any session-level state the main agent has accumulated during its current session
  • Knowledge of your workspace structure, active projects, or any facts the main agent learned during its session
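To make the isolation concrete, here is the rough shape of a sessions_spawn call, sketched as JSON. The parameter names (task, model, runtime, mode, runTimeoutSeconds) are the ones used throughout this article; treat the exact schema as something to verify against your OpenClaw version's tool documentation. The point of the sketch is that the task string is the sub-agent's entire starting context:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "task": "Summarize the three most recent files in notes/ into notes/summary.md. Use the read and write tools only.",
    "model": "deepseek/deepseek-chat",
    "runtime": "subagent",
    "mode": "run",
    "runTimeoutSeconds": 120
  }
}
```

Nothing outside that task string reaches the sub-agent at startup: no skills, no memories, no conversation history.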

Workspace files are the exception

The sub-agent has access to files on disk. If your skill instructions are in a file (like SKILL.md), the sub-agent can read that file using the read tool. But it will not do this automatically. You have to include the instruction to read the file in the task prompt. The skill file is on disk and readable. The sub-agent just does not know to look for it unless you tell it to.

How to give a sub-agent access to a skill

Two approaches work in practice. The right one depends on the skill length and how often you are spawning the same sub-agent pattern.

Approach 1: Embed the skill instructions in the task prompt

This is the right approach for short skills or for one-off sub-agent tasks. Copy the relevant parts of the skill’s SKILL.md into the task prompt when you call sessions_spawn. The sub-agent has everything it needs from the start and does not need to make any additional file reads.

The advantage of this approach is speed and reliability. The sub-agent’s first action is the actual task, not a file read. There is no risk of the sub-agent failing to find or parse the skill file. The skill instructions are in the prompt, in the context, and immediately usable.

I want to spawn a sub-agent that uses my Tavily skill. Read skills/tavily/SKILL.md and tell me how long it is in characters and lines. Then help me write a sessions_spawn task prompt that includes the Tavily skill instructions at the top, so the sub-agent will know how to use the Tavily search tools without needing to read any files itself. The skill instructions should appear before the task description in the prompt.

The sub-agent then has the skill instructions in its first prompt, which is its entire context at startup. It will use the Tavily tools as described in those instructions for the duration of its session. When the session ends, the instructions are gone. The next spawn starts fresh with the same embedded prompt.
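Sketched as JSON, an embed-style spawn call might look like the following. The structure is illustrative, not a value to paste (verify parameter names against your sessions_spawn documentation), and the angle-bracket placeholder stands for the pasted skill text:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "task": "SKILL INSTRUCTIONS (mandatory, follow exactly):\n<paste the relevant contents of skills/tavily/SKILL.md here>\n\nTASK:\nResearch recent coverage of the assigned topic using the Tavily tools described above, then write your findings to research/findings.md.",
    "model": "deepseek/deepseek-chat",
    "runtime": "subagent",
    "mode": "run",
    "runTimeoutSeconds": 180
  }
}
```

Note the ordering inside the task string: skill instructions first, task description second.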

Keep prompts lean: only what’s needed

Embedding the full SKILL.md is not always necessary. Many skills have one or two key paragraphs that cover the critical behavior. Ask your main agent to extract just the essential instructions before building the spawn prompt. A lean, focused prompt produces more consistent sub-agent behavior than a long one where the key instructions are buried.

Approach 2: Tell the sub-agent to read the skill file first

This is the right approach for long skills, or when you are spawning sub-agents repeatedly for the same purpose and want to maintain the skill instructions in one place. You include a read instruction at the top of the task prompt, and the sub-agent reads the SKILL.md before doing anything else. Use the full path from your workspace root (for example, /home/node/.openclaw/workspace/skills/tavily/SKILL.md) rather than a relative path. Sub-agents inherit the same workspace directory as the main agent, so relative paths work in practice, but absolute paths are unambiguous and eliminate one class of file-not-found errors.

The advantage of this approach is maintainability. When you update the skill, you update the SKILL.md file on disk. Every subsequent sub-agent spawn picks up the new instructions automatically because the task prompt tells it to read the file rather than having the instructions embedded inline. For skills you update regularly, this avoids having to update spawn prompts manually every time the skill changes.
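A read-on-start spawn call might look like this sketch. The absolute path matches the example above, and the parameter names are the ones this article uses; check both against your own install before relying on them:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "task": "Step 1 (mandatory, never skip): use the read tool on /home/node/.openclaw/workspace/skills/tavily/SKILL.md and follow those instructions for every search in this session.\n\nStep 2: carry out the assigned research task and write the results to research/findings.md.",
    "model": "deepseek/deepseek-chat",
    "runtime": "subagent",
    "mode": "run",
    "runTimeoutSeconds": 240
  }
}
```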

Help me write a sessions_spawn task prompt for a sub-agent that handles web research. The sub-agent should start by reading skills/tavily/SKILL.md before doing anything else. After reading the skill file, it should follow those instructions for all search tasks in this session. The task prompt should make it clear that reading the skill file is the first step, not optional, and not to be skipped even if the task seems simple.

One read at the start, not per task

The sub-agent only needs to read the SKILL.md once, at session start. Once the instructions are in context, the sub-agent uses the skill for the rest of the session. For a long-running sub-agent that handles many tasks, this approach keeps the initial prompt lean while still giving the sub-agent full skill access. Do not instruct the sub-agent to re-read the skill file before each individual task. It stays in context.

A third approach: send instructions after spawn

If you are using a persistent sub-agent session (mode="session"), you can spawn the sub-agent with a minimal prompt and then send it the skill instructions as a follow-up message using sessions_send. This keeps the initial prompt short and lets you change the skill context between tasks without re-spawning. For one-shot sub-agents (mode="run"), embed or read-on-start are better choices since the session closes after the task completes.
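As a sketch, the two-message pattern could look like this. The sessions_send parameter names shown here (sessionKey, message) are assumptions for illustration only; check the actual tool schema before building on them:

```json
[
  {
    "tool": "sessions_spawn",
    "params": {
      "task": "You are a research worker. Wait for instructions before acting.",
      "runtime": "subagent",
      "mode": "session"
    }
  },
  {
    "tool": "sessions_send",
    "params": {
      "sessionKey": "<key returned by sessions_spawn>",
      "message": "SKILL INSTRUCTIONS (mandatory):\n<contents of skills/tavily/SKILL.md>\n\nFirst task: <the first task>"
    }
  }
]
```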

Which tools are actually available to sub-agents

Skill instructions tell the agent how to use tools. They do not add new tools. The tools themselves come from OpenClaw’s built-in tool registry plus any active plugins. A sub-agent has access to the same built-in tools as any other agent session, subject to the allowedTools configuration for the agent definition it runs under.

If a skill relies on a tool that is in OpenClaw’s built-in set (read, write, exec, web_search, web_fetch, memory_recall, message, cron, and others), the sub-agent can use it as long as that tool is not excluded by the allowedTools config. The skill instructions just need to be in the sub-agent’s context to tell it when and how to use the tool.

If a skill relies on a custom tool added by a plugin, the situation is different but simpler. Plugin-registered tools are available system-wide to all sessions, so the sub-agent can call them as long as the plugin is installed and running. Plugin tools are not session-specific and do not need to be passed in the prompt. Check whether the relevant plugin is active before assuming a sub-agent can use it:

List all currently active plugins and confirm which tools each one registers. I want to know which plugin-registered tools are available to a fresh sub-agent session right now. Also check whether there are any tools in my allowedTools config that would be blocked for sub-agents running under the default agent definition. Show me the current allowedTools list for that definition.

Sub-agents and allowedTools configuration

The allowedTools setting lives in the agent definition (under agents.list in openclaw.json). If you have multiple agent definitions, each has its own allowedTools list. When you specify an agentId in sessions_spawn, the sub-agent runs under that definition’s tool permissions. If you do not specify an agentId, the sub-agent runs under the default definition. Verify which definition your sub-agent uses before assuming a specific tool is available.
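For illustration only, two agent definitions with different allowedTools lists might look like the sketch below. Whether agents.list is keyed by definition ID or structured as an array can vary by version, so treat this as a shape to adapt, not a reference; the "researcher" definition name is a hypothetical example:

```json
{
  "agents": {
    "list": {
      "default": {
        "allowedTools": ["read", "write", "exec", "web_search", "sessions_spawn"]
      },
      "researcher": {
        "allowedTools": ["read", "write", "web_search", "web_fetch", "memory_recall"]
      }
    }
  }
}
```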

Model selection for skill-heavy sub-agents

This is the issue that trips up most operators once they have the config correct. The skill instructions are in the prompt. The tools are available. The sub-agent still does not use the skill correctly. The cause is almost always the model.

Weaker local models (llama3.1:8b, phi4, qwen2.5-coder at smaller sizes) struggle with multi-step skill instructions that require holding a workflow in mind across multiple tool calls. They follow the first instruction reliably. They frequently drop later instructions, skip validation steps, or produce output that partially matches the skill format. This is not a bug. It is a capability limit.

For sub-agents that need to follow a skill correctly, use a capable model explicitly:

I need to spawn a sub-agent that will follow my Tavily skill instructions to do a multi-step research task. What model should I specify in the sessions_spawn call to get the most reliable skill execution? Compare the tradeoffs between deepseek/deepseek-chat, anthropic/claude-sonnet-4-6, and ollama/phi4 for this kind of task, including cost and reliability. Give me a recommendation and the exact model parameter string to use.

As of March 2026, the reliable choices for skill-following sub-agents are deepseek/deepseek-chat (cost-effective, handles most skill workflows correctly) and anthropic/claude-sonnet-4-6 (highest reliability for complex multi-tool skills, higher cost). Local models are appropriate for simple tasks with minimal skill requirements. They are not appropriate for complex skill workflows.

Memory recall in sub-agents

Long-term memory stored via memory_store lives in your LanceDB database on disk. Sub-agents can access it, but autoRecall does not fire automatically in isolated sub-agent sessions. AutoRecall is the feature that injects relevant memories from your long-term store into the agent’s context at session start. It runs for your main agent when a session begins. It does not run for spawned sub-agents. The sub-agent will not have your memories injected at startup the way your main agent does.

If a sub-agent needs specific memories to do its job, two options work well in practice:

  • Include the relevant facts in the task prompt directly. For short, specific context (a Cloudflare zone ID, a project name, a preference, an API endpoint), just put it in the prompt. This is cheaper and more reliable than a memory query inside the sub-agent because it does not depend on the quality of the memory index or the accuracy of the recall query.
  • Tell the sub-agent to call memory_recall at the start. Include an instruction like “Before starting work, run memory_recall for [topic] and use those results to inform your approach.” The sub-agent can query the memory store the same way your main agent does, using the same scope and the same tools.

I want to spawn a sub-agent that needs access to memories about my content pipeline. What is the better approach for this specific use case: including the key facts directly in the task prompt, or telling the sub-agent to run memory_recall at startup? Walk me through the tradeoffs, then give me example task prompt text for both approaches so I can decide.

Memory scope in sub-agents

If your memory setup uses a custom scope (such as agent:main), the sub-agent needs that scope stated explicitly when calling memory_recall. It will not infer the scope from your main agent’s config. Include the scope in the task prompt: “Use memory_recall with scope agent:main for all memory queries.” Without this, the sub-agent may query the default scope and return nothing, or return unrelated memories from a different scope.
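A task-prompt fragment that pins the scope might look like this sketch (agent:main is the example scope from above, and "content pipeline" is a placeholder topic; substitute your own):

```json
{
  "task": "Before starting work, run memory_recall with scope \"agent:main\" for the topic \"content pipeline\" and use the results to inform your approach.\n\nTASK:\n<the assigned task>"
}
```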

A practical template for skill-aware sub-agents

Here is the pattern that works for production sub-agent workflows where skills matter. Your main agent uses this template to spawn a sub-agent that has everything it needs from the first message.

Help me build a reusable task prompt template for spawning a sub-agent that needs: (1) my Tavily search skill loaded from skills/tavily/SKILL.md, (2) access to memories about my project using scope agent:main, and (3) write access to files in my workspace. The template should structure the task prompt so that: first, the sub-agent reads the SKILL.md; second, it runs memory_recall for the relevant topic; third, it confirms write access by checking the workspace; then it proceeds with the assigned task. Show me the complete sessions_spawn call with all parameters including model selection, runTimeoutSeconds, and the full task prompt structure.

Once you have this template working for one skill, the pattern extends to any skill in your workspace. Swap the SKILL.md path, adjust the memory recall topic, set the model appropriate to the complexity of the skill, and the sub-agent is ready. The structure does not change. Only the content does.

Here is what a complete, correctly structured sessions_spawn call looks like for a research sub-agent that uses the Tavily skill:

Example: Tavily research sub-agent

Ask your main agent to generate this for you rather than writing it manually. The blockquote above (“Help me build a reusable task prompt template”) will produce a complete sessions_spawn call tailored to your exact skill file and workspace structure. The example below shows the shape of what you will get, not a value you paste directly.

  • task: Starts with the full contents of skills/tavily/SKILL.md, then memory_recall instructions with explicit scope, then the specific research task
  • model: "deepseek/deepseek-chat" for cost-effective skill following, or "anthropic/claude-sonnet-4-6" for maximum reliability
  • runTimeoutSeconds: 180 for a 3-5 search workflow, 300 for more complex research
  • runtime: "subagent"
  • mode: "run" for one-shot tasks, "session" for interactive workflows
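Assembled into one call, those parameters might look like the sketch below. Every value is a placeholder to adapt (paths, scope, topic, model, timeout), and the parameter names should be verified against your sessions_spawn schema:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "task": "Step 1 (mandatory): read /home/node/.openclaw/workspace/skills/tavily/SKILL.md and follow it for every search.\nStep 2: run memory_recall with scope \"agent:main\" for the relevant project topic.\nStep 3: run 3 to 5 Tavily searches on the assigned topic, summarize the results, and write them to research/findings.md.",
    "model": "deepseek/deepseek-chat",
    "runtime": "subagent",
    "mode": "run",
    "runTimeoutSeconds": 300
  }
}
```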

One pattern worth building: a session startup message that your main agent sends to any newly spawned research sub-agent immediately after creation. Rather than embedding the full skill file in every spawn prompt (which can get long), the spawn prompt can be short and the first follow-up message passes the detailed skill instructions. Ask your main agent to help you design this two-message initialization pattern if the embedded-in-prompt approach produces prompts that feel unwieldy.

Save templates as files for reuse

Once you have a spawn template that works, ask your main agent to save it as a file in your workspace (for example, templates/spawn-researcher.md). When you need to spawn that sub-agent pattern again, your main agent reads the template and constructs the sessions_spawn call from it. You get consistent sub-agent behavior without rebuilding the prompt from memory each time. Version your templates alongside your skill files: when you update a SKILL.md, update the corresponding spawn template. If a sub-agent workflow stops working after a skill update, compare old and new template versions to find the divergence. The template is infrastructure, not a throwaway prompt.

Verifying that skill access is working correctly

After spawning a skill-aware sub-agent, do not assume it worked. Confirm it. Two things to check: whether the sub-agent loaded the skill correctly, and whether it used the skill tools rather than improvising its own approach.

I just spawned a sub-agent with Tavily skill instructions in its task prompt. I want to verify it worked correctly. Check the sessions list and show me the last message from that sub-agent session. Tell me: did it use the Tavily search tool as the skill instructs, or did it fall back to a different search method? If it did not follow the skill, what should I change in the task prompt to fix it?

The clearest sign of a skill working correctly is the sub-agent calling the specific tools the skill describes in the order the skill describes them. If the sub-agent is using web_search instead of the Tavily API endpoint, the skill instructions either did not load or were not followed. The most common causes are prompt position (skill instructions buried after a long task description), model capability (too weak to follow multi-step instructions), and missing API credentials (the sub-agent tried to use the tool but lacked the key). Each of these has a distinct signature in the session history, which is why reading the full session output is always the first diagnostic step rather than guessing at the cause and patching blind.

API credentials and sub-agents

Skills that call external APIs (Tavily, OpenAI, Jina, and others) rely on API keys stored in your openclaw.json config. These keys are read by the plugin or tool at call time from the system config, not from the session context. Sub-agents can use these tools without needing the keys passed in the task prompt. If a skill is failing because of a missing key, the fix is in openclaw.json, not in the spawn prompt.

FAQ

Can I configure skills to automatically load in all sub-agents?

No. As of March 2026, there is no OpenClaw config option that automatically injects skill instructions into every spawned sub-agent. Skills are loaded at main session start from the workspace SKILL.md files. They do not propagate to isolated sub-agent sessions. The embed-in-prompt and read-on-start approaches described above are the supported patterns for giving sub-agents skill access.

If a skill tool is blocked by allowedTools, can I unblock it just for sub-agents?

Yes. The allowedTools setting is per agent definition in agents.list in your openclaw.json config. If your main agent definition has a restrictive allowedTools list, create a separate agent definition with a different allowedTools list and pass that definition’s agent ID in the agentId parameter of sessions_spawn. The sub-agent runs under that definition’s tool permissions. Each agent definition is independent.
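For illustration, a spawn call that selects an alternate definition might look like this sketch. The definition ID "researcher" is a hypothetical name; the agentId parameter is the one described above and should be checked against your sessions_spawn schema:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "agentId": "researcher",
    "task": "<task prompt, including skill instructions as described earlier>",
    "mode": "run"
  }
}
```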

Does the sub-agent’s skill access expire when the session ends?

Sub-agent sessions close when the task completes or the timeout expires. The skill instructions exist only in that session’s context window. When the session ends, they are gone. The next time you spawn a sub-agent for the same task, you start fresh with the same prompt. The SKILL.md file is still on disk and unchanged, so re-loading it costs nothing.

My sub-agent has the skill instructions in its prompt but is still not using the skill correctly. What should I check?

Three things in this order: (1) Prompt position. Skill instructions that appear after the task description may be deprioritized by the model. Put them first, before the task. (2) Model capability. Weaker models do not reliably follow multi-step skill instructions across multiple tool calls. Switch to deepseek/deepseek-chat or Sonnet for skill-heavy work. (3) Skill complexity. If the SKILL.md references external files that also need to be read, the sub-agent may be missing that content. Check whether the skill instructions are self-contained or whether they assume the agent has already read other files.

Can sub-agents I spawn also spawn their own sub-agents?

Yes, if the sub-agent’s agent definition has allowAgents configured. This enables recursive multi-agent workflows where one orchestrator spawns workers that spawn their own workers. The same skill-passing rules apply at every level. Each spawn must pass skills explicitly in the task prompt. Context does not cascade automatically down the chain at any depth. Also check that each level of sub-agent has sessions_spawn in its allowedTools list. A sub-agent that is not configured to spawn other agents will fail silently at the tool call level, not with a clear error message.

I am using the memory-lancedb-pro plugin. Do sub-agents have access to the memory database?

Yes. The LanceDB database is on disk and accessible system-wide. Sub-agents can call memory_recall and memory_store the same way the main agent does, as long as memory_recall is in their available tools. The plugin’s autoCapture and autoRecall settings apply to the main agent session only. They do not fire automatically in sub-agent sessions. To use memory in a sub-agent, include explicit memory_recall calls in the task prompt, and specify the scope explicitly if your setup uses a non-default scope.

Do I need to pass my SOUL.md or persona config to sub-agents?

Only if it matters for the specific task. For functional tasks (research, file processing, web search), the sub-agent does not need persona instructions. It just needs the tools and the task description. For tasks where voice or output format matters (drafting content, composing messages), include the relevant style guidelines in the task prompt directly rather than pointing to SOUL.md. Keep it specific to the task rather than loading the entire persona file.

What happens if the skill file I told the sub-agent to read does not exist?

The sub-agent’s read tool call returns an error. Depending on how the task prompt is written, the sub-agent either reports the error back to your main agent (if you are monitoring via sessions_history) or proceeds without the skill instructions and improvises its approach. Neither is what you want. Before building any spawn workflow that relies on reading a skill file, verify the file path is correct by asking your main agent to read it first. Use the exact path you will put in the spawn prompt.

Troubleshooting: when skills still do not work after setup

You have embedded the skill instructions. The model is capable. The sub-agent ran. The skill still did not work the way you expected. Here is how to diagnose what actually happened.

Step 1: Read the sub-agent’s session output

The first step is always to look at what the sub-agent actually did, not what you expected it to do. Sub-agents run asynchronously and their output is in the session history, not in your main chat window.

Use sessions_list to show me all sub-agent sessions from the last 30 minutes. For each one, show me the session key, the model used, the current status, and the last message the sub-agent sent. If any session is still running, tell me how long it has been active.

Once you have the session key for the sub-agent you are diagnosing, fetch its full history. Reading the complete turn sequence, from initial prompt to final output, is faster than any other diagnostic approach. Most skill failures are visible within the first three turns: either the skill instructions are absent from the prompt, the first tool call is the wrong tool, or the first tool call returns an authentication error. Finding any of these takes under a minute once you have the session history open.

Fetch the full session history for [session key]. Show me every turn in order: the initial task prompt, every tool call the sub-agent made, and every response. I want to see exactly what it did from start to finish, including any errors or unexpected tool choices.

Reading the session history tells you three things immediately: whether the skill instructions were present at the start (look at the first message), whether the sub-agent called the right tools, and where it went off-track if something failed.

The sub-agent has the instructions but ignores them

This is almost always a model problem. The sub-agent acknowledged the skill instructions, then did not follow them during the actual task. Two fixes:

  • Upgrade the model. Switch from a local model to deepseek/deepseek-chat or Sonnet. Stronger models follow multi-step skill instructions more reliably because they have larger context windows and better instruction-following behavior at long context lengths.
  • Restructure the prompt. If the skill instructions are near the bottom of a long task prompt, they get deprioritized. Move them to the top. The structure should be: (1) skill instructions, (2) any context or memory needed, (3) the specific task. Not the other way around.

I have a task prompt for a sub-agent that is not following the skill instructions correctly. The sub-agent uses web_search instead of the Tavily API calls the skill specifies. Here is the current prompt structure: [skill instructions at bottom, task at top]. Rewrite this prompt so the skill instructions appear first and are framed as mandatory behavior, not as optional context. Show me the restructured prompt.

The skill requires a tool that is not available

Some skills call tools that require a specific plugin or external service. If that plugin is not active, the sub-agent will attempt to call the tool, receive an error, and either fail silently or fall back to a built-in alternative. The session history will show the failed tool call clearly.

Silent fallback is the worst outcome

When a skill tool is unavailable, capable models do not stop and report the error. They fall back to the closest available tool and continue. A Tavily skill whose API key is missing will silently use web_search instead. The output looks correct but is not using the tool the skill specified. The only way to catch this is to read the session history and verify which tools were called.

API credentials the skill needs are not reachable

Skills that call external APIs (Tavily, Jina, OpenAI, Anthropic) rely on API keys that must be set in your openclaw.json config or in a plugin’s config section. Sub-agents do not need these keys in the task prompt. They are read from the system config at call time. If a skill is failing with an authentication error, the key is either missing from the config or the plugin that manages it is not active.

A sub-agent running my Tavily skill is failing with an authentication error. Check whether the Tavily API key is set correctly in my config and whether the Tavily plugin or skill is configured to read from the right config location. Do not show me the actual key value. Just confirm whether the key is present and in the expected location, and whether the plugin is active.

The sub-agent ran out of time mid-skill

The default runTimeoutSeconds for a spawned sub-agent is 30 seconds as of March 2026. Most skills that involve multiple tool calls take longer than 30 seconds. If the sub-agent’s session history shows it started correctly but stopped before completing, timeout is the most likely cause.

Set runTimeoutSeconds explicitly in every sessions_spawn call. For research workflows that call external APIs multiple times, 180 to 300 seconds is a reasonable starting point. For complex multi-step workflows, set it to 0 (no timeout) and monitor the first few runs to establish a realistic baseline.
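As a sketch of where the parameter sits (names as used throughout this article; verify against your schema), a multi-search research workflow might budget around 300 seconds:

```json
{
  "tool": "sessions_spawn",
  "params": {
    "task": "<read the SKILL.md, run 3 to 5 searches, summarize, write a file>",
    "runtime": "subagent",
    "mode": "run",
    "runTimeoutSeconds": 300
  }
}
```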

I need to update my sessions_spawn call for a research sub-agent that keeps timing out. The workflow involves reading a SKILL.md, running 3 to 5 Tavily searches, summarizing the results, and writing a file. Estimate how long this should take in seconds and give me the runTimeoutSeconds value I should set. Also show me exactly where in the sessions_spawn call this parameter goes.

runTimeoutSeconds: 0 means no timeout, not instant

Setting runTimeoutSeconds to 0 does not cancel the sub-agent immediately. It removes the timeout entirely, meaning the sub-agent runs until it completes or until you manually kill it with subagents(action=kill). Use this for long-running tasks where completion time is unpredictable. For short tasks, always set an explicit timeout to avoid zombie sessions that consume tokens indefinitely if something goes wrong.


Queue Commander: $67

Build multi-agent workflows that actually run reliably

Skill-passing templates, sub-agent orchestration patterns, cron job design, and the exact config that keeps complex workflows from silently failing. Built for OpenClaw operators who want automation that works while they sleep.

Get Queue Commander →

Keep Reading:

  • Queue Commander: My OpenClaw agent won’t spawn sub-agents. Fix allowAgents config, sessions_spawn tool availability, and model reliability issues that silently block sub-agent creation.
  • Queue Commander: How to run two OpenClaw agents on one server without them conflicting. Port separation, data isolation, and the exact config needed to run multiple agents side by side without interference.
  • Queue Commander: How to pass output from one OpenClaw cron run into the next. State file patterns, handoff design, and the exact cron config that chains tasks reliably across runs.