You told your OpenClaw agent to run two tasks. Task B needed the output from Task A. But Task B ran first, found nothing to work with, and either failed or produced garbage. This happens because a flat task list has no concept of order beyond priority. If both tasks look ready to run, the agent picks one without knowing one depends on the other. This article shows you exactly how to add dependency tracking, how to pass output between tasks reliably, and what to do when a dependency fails.
TL;DR
Task ordering fails for one of three reasons: no dependency column in the task list, so the agent does not know one task requires another; output passed through context instead of a file, so it disappears after compaction or a session restart; or no failure propagation, so a blocked task sits in PENDING forever when its dependency fails. All three have direct fixes. The dependency column takes five minutes. The file handoff takes one line in the task description. The failure propagation takes one instruction in the processor prompt.
Every indented block in this article is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal, edit any files manually, or navigate any filesystem.
Why tasks run out of order
When your agent works through a task list, it looks for the next available task and starts it. “Available” means highest priority among tasks that are not blocked. If Task A and Task B both show as PENDING with equal priority and neither has a dependency column, they both look identical to the agent. It picks one. Which one depends on how the list is sorted, not on whether one task logically requires the other.
This is the core problem: the agent cannot infer that Task B requires Task A’s output from the task descriptions alone, unless you tell it explicitly through the task list structure. Good intentions do not substitute for explicit tracking.
Three patterns cause this to happen in practice:
- No dependency column: the task list has no field that records “wait for task X before starting.” The agent sees all PENDING tasks as equally available and picks based on priority or position.
- Dependency column exists but the processor prompt does not check it: you added the column but forgot to update the prompt. The agent reads the column but does not act on it.
- The dependency is implicit in the task descriptions: Task B says “use the output from the research task” but there is no structural link. The agent may or may not catch the reference, and it will not wait for the research task to complete before trying.
Read my task list file. Does it have a dependency column or any field that tracks which tasks must complete before others can start? Does my task processor prompt include any instruction to check dependencies before starting a task? Tell me what is there and what is missing.
Adding a dependency column to your task list
The dependency column is simple: it holds the ID of the task that must finish before this one can start. A task with an empty dependency column is immediately available. A task with a value in the column is waiting. When the dependency is marked DONE, the waiting task becomes available automatically on the next queue run.
Here is what the column looks like in a QUEUE.md file:
| ID   | Task                         | Status  | Depends |
|------|------------------------------|---------|---------|
| T001 | Research competitor pricing  | DONE    |         |
| T002 | Write pricing comparison doc | PENDING | T001    |
| T003 | Draft email based on doc     | PENDING | T002    |
| T004 | Review quarterly metrics     | PENDING |         |
In this example, T001 is already done. T002 is available because T001 is done. T003 is waiting because T002 is not done yet. T004 has no dependency, so it is available immediately. On this queue pass the agent can pick up T002 or T004; T003 stays blocked until T002 is marked DONE.
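If you ever want to script against the queue rather than rely on the prompt alone, the table parses easily. Here is a minimal Python sketch, not a chat command; the four-column layout is an assumption based on the example above:

```python
def parse_queue(markdown: str) -> list[dict]:
    """Parse a QUEUE.md table into task records.

    Assumes the layout | ID | Task | Status | Depends |.
    An empty Depends cell means the task has no dependencies.
    """
    tasks = []
    for line in markdown.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the header row and the |---| separator row.
        if len(cells) != 4 or cells[0] == "ID" or set(cells[0]) <= {"-"}:
            continue
        tasks.append({
            "id": cells[0],
            "task": cells[1],
            "status": cells[2],
            "depends": [d.strip() for d in cells[3].split(",") if d.strip()],
        })
    return tasks
```

Parsing into records like this makes the availability rules in the rest of this article trivial to express and test.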
Add a “Depends” column to my task list file. For any tasks that need output from another task, fill in the dependency. For tasks that can run independently, leave it empty. Show me the updated table before writing it.
Updating the processor prompt to check dependencies
The dependency column is data. The processor prompt is logic. You need both. Without the processor prompt change, the agent can see the dependency column but will not necessarily wait for it.
Add this instruction to your task processor prompt:
Before starting any task, check its Depends column. If the Depends field is empty, the task is available to run. If it contains a task ID, check whether that task is marked DONE. If the dependency is DONE, the task is available. If the dependency is PENDING, IN PROGRESS, or FAILED, skip this task and look for the next available task. Do not run tasks that are waiting for an incomplete dependency.
Ask your agent to find the right place for this instruction:
Read my task processor prompt. Show me where in the prompt the dependency check should go. Then add the dependency check instruction and show me the updated prompt.
Multiple dependencies
A task can depend on more than one task. Store them as a comma-separated list in the Depends column: T001,T003. The processor prompt instruction needs one additional line: “If the Depends field contains multiple task IDs, check all of them. The task is available only when every dependency is marked DONE.” This handles fan-in patterns where multiple parallel tasks must all complete before a synthesis step can run.
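The availability rule the prompt states, including the multi-dependency case, is compact enough to express as code. A Python sketch for reference, not a chat command; the task-dict shape is an assumption:

```python
def available_tasks(tasks: list[dict]) -> list[str]:
    """Return the IDs of tasks that are available to run.

    Each task dict has "id", "status", and "depends" (a list of
    task IDs, empty if the task has no dependencies). A task is
    available only when it is PENDING and every dependency is DONE.
    """
    statuses = {t["id"]: t["status"] for t in tasks}
    return [
        t["id"]
        for t in tasks
        if t["status"] == "PENDING"
        and all(statuses.get(d) == "DONE" for d in t["depends"])
    ]
```

With the earlier example queue, this returns T002 and T004: both are PENDING with satisfied (or empty) dependencies, while T003 stays blocked behind T002.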
Passing output between tasks reliably
Dependency tracking ensures Task B does not start until Task A is done. But Task B still needs Task A’s output to work with. The two main approaches are context and files. One of them works reliably. The other does not.
Why context fails for task handoffs
Context-based handoff means Task B relies on Task A’s output still being in the agent’s active memory when Task B runs. This works in short sessions where both tasks run consecutively with nothing else in between. It breaks in three common scenarios:
- Compaction fires between the tasks: if the session context grows large enough to trigger compaction, earlier content gets summarized or trimmed. Task A’s detailed output may not survive intact.
- The session restarts between the tasks: if Task A completes in one session and Task B runs after a restart, the previous session’s context is not available to the new session.
- Other tasks run between them: if the queue processor runs other tasks between Task A and Task B, those tasks add content to context and may push Task A’s output further back, where it is less likely to be recalled accurately.
The file handoff pattern
Task A writes its output to a specific file. Task B reads that file at the start of its run. The file persists across sessions, compaction events, and time. This is the only reliable handoff mechanism for anything that matters.
The task descriptions encode the handoff directly:
T001: Research competitor pricing. Write findings to workspace/research/competitor-pricing.md.
T002: Depends: T001. Read workspace/research/competitor-pricing.md and write a comparison
document to workspace/drafts/pricing-comparison.md.
T003: Depends: T002. Read workspace/drafts/pricing-comparison.md and draft an email summary
to workspace/drafts/pricing-email.md.
Each task knows exactly where to read from and where to write to. The file path is the contract between tasks. If something goes wrong at any step, you can open the intermediate files yourself and see exactly what was produced.
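The two sides of that contract can be sketched in a few lines of Python. This is illustrative, not a chat command; the important detail is that the reader fails loudly when the upstream file is missing or empty, instead of silently working with nothing:

```python
from pathlib import Path

def write_handoff(path: str, content: str) -> None:
    """Task A's side of the contract: write output to the agreed path,
    creating the parent directory if needed."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(content, encoding="utf-8")

def read_handoff(path: str) -> str:
    """Task B's side: refuse to proceed if the upstream file is
    missing or empty, rather than producing garbage output."""
    p = Path(path)
    if not p.exists() or not p.read_text(encoding="utf-8").strip():
        raise RuntimeError(f"handoff file missing or empty: {path}")
    return p.read_text(encoding="utf-8")
```

A missing-or-empty check at read time catches the case where a task was marked DONE without actually producing its output.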
Look at the tasks in my list that depend on other tasks. For each one, where is the output of the first task supposed to come from? If any task relies on context rather than a file, update the task descriptions to specify a file path for the handoff.
File naming conventions
Use a consistent directory for task handoff files. workspace/task-output/ or workspace/pipeline/ works well. Name files by task ID so it is clear what produced what: T001-competitor-pricing.md. This makes debugging obvious: if T002 is failing, check whether T001-competitor-pricing.md exists and has content. If it does not, T001 did not finish successfully even if it was marked DONE.
Handling failures in a dependency chain
Without failure propagation, a chain of dependent tasks can get stuck silently. Task A fails and is marked FAILED. Task B is waiting for Task A and stays in PENDING. The queue processor runs, finds no available tasks (Task B is blocked, nothing else is pending), and exits without reporting anything wrong. You come back the next morning and everything looks stuck, with no obvious explanation.
The fix is failure propagation: when a task fails, any task that depends on it should also be marked as failed, with a clear note explaining why.
Add this instruction to my task processor prompt: if a task is marked FAILED, find all tasks in the queue that depend on it. Mark those tasks as FAILED with a note that says “blocked: dependency [task ID] failed.” Do not leave dependent tasks in PENDING when their dependency has failed.
With this in place, a single task failure cascades visibly through the queue. You see exactly what broke and exactly what was affected by it. Nothing sits in limbo.
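The cascade the prompt instruction describes looks like this as a Python sketch (illustrative only; the task-dict shape is an assumption). The loop repeats until nothing changes, so a failure propagates through chains of any depth:

```python
def propagate_failure(tasks: list[dict]) -> list[dict]:
    """Mark every PENDING task whose dependency has FAILED as FAILED,
    with a note naming the blocking task. Runs until no more tasks
    change, so failures cascade down the whole chain.
    """
    statuses = {t["id"]: t["status"] for t in tasks}
    changed = True
    while changed:
        changed = False
        for t in tasks:
            if t["status"] != "PENDING":
                continue
            failed = [d for d in t["depends"] if statuses.get(d) == "FAILED"]
            if failed:
                t["status"] = statuses[t["id"]] = "FAILED"
                t["note"] = f"blocked: dependency {failed[0]} failed"
                changed = True
    return tasks
```

After one call, the queue shows the original failure and every task it blocked, each with a note pointing at the cause.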
Failure alerts for dependency chains
Failure propagation keeps the task list accurate, but it does not notify you. Add an alert so you find out when a failure cascades.
Add this to my task processor: if any task is marked FAILED and it causes one or more dependent tasks to also be marked FAILED, send me a Telegram message with: the original failed task, the list of tasks that were blocked by it, and the error message from the original failure.
Debugging a stuck or blocked task
When a task is stuck and you cannot figure out why, work through this sequence:
Read my task list. Find any task that has been in PENDING or WAITING status for more than 24 hours. For each one: what is its dependency? Is the dependency marked DONE? If the dependency is DONE, why has this task not started? If the dependency is not DONE, what is blocking it?
The most common answers:
- Dependency is DONE but task never started: usually a processor prompt issue. The dependency check logic is either missing or has a bug. Check whether the dependency column value matches the exact format the prompt expects (e.g., “T001” vs “task-001”).
- Dependency is PENDING but nothing is running it: the dependency task itself is blocked by something further up the chain, or it was never picked up because its own dependency was not met. Trace back to the root of the chain.
- Dependency is FAILED and the blocked task is still PENDING: failure propagation is not configured. Add the instruction from the previous section and manually mark the blocked tasks as FAILED with a note.
- Task has no dependency but is still stuck: something else is wrong. Check whether the task processor is running at all, whether the task has exceeded its retry limit, or whether the task’s output file already exists and the processor is skipping it as “already done.”
For the stuck task [task ID], trace the full dependency chain. What does this task depend on? What does that task depend on? Go all the way up the chain until you find the root cause: the first task that is either failed, missing, or not starting for some other reason.
More complex dependency patterns
Fan-out: one task that produces output for multiple tasks
Sometimes Task A produces output that both Task B and Task C need. Both should list A in their Depends column, with no link between B and C themselves. When A completes, both become available and the processor can run them in either order.
T001: Gather raw data. Write to workspace/data/raw.md.
T002: Depends: T001. Analyze for trends. Write to workspace/analysis/trends.md.
T003: Depends: T001. Analyze for anomalies. Write to workspace/analysis/anomalies.md.
T004: Depends: T002,T003. Write final report using both analyses.
T001 runs first. T002 and T003 become available and run in either order. T004 waits for both. This pattern is efficient: the two independent analysis tasks do not block each other, so if you ever run multiple queue passes in sequence they can each complete without waiting on the other. The only hard serialization point is T004, which genuinely needs both analyses before it can synthesize a final report.
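The run order a fan-out/fan-in graph implies can be computed mechanically. A Python sketch (not a chat command) that groups tasks into stages, where every task in a stage has all of its dependencies satisfied by earlier stages:

```python
def run_stages(deps: dict[str, list[str]]) -> list[set[str]]:
    """Group tasks into ordered stages. Tasks within one stage have
    no dependencies on each other and may run in any order.

    `deps` maps each task ID to the list of IDs it depends on.
    Raises if the graph has a cycle or references a missing task.
    """
    done: set[str] = set()
    stages: list[set[str]] = []
    remaining = set(deps)
    while remaining:
        ready = {t for t in remaining if set(deps[t]) <= done}
        if not ready:
            raise ValueError("circular or missing dependency")
        stages.append(ready)
        done |= ready
        remaining -= ready
    return stages
```

For the example above this yields three stages: T001 alone, then T002 and T003 together, then T004, which matches the serialization the prose describes.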
Conditional dependencies: skip a task if a condition is not met
Sometimes Task B only makes sense if Task A produced a certain result. If the research found no relevant data, the draft email task should not run. This requires a more sophisticated check than a simple DONE/not-DONE dependency.
I want to add conditional task execution. When Task A completes, it should write a result file. If the file contains a specific outcome (for example, “no data found”), Task B should be marked SKIPPED rather than becoming available. How would I implement this check in my task processor prompt?
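One way the processor could implement that check, sketched in Python for reference. The marker string and the SKIPPED/PENDING outcomes are assumptions for illustration:

```python
from pathlib import Path

def resolve_conditional(result_path: str,
                        skip_marker: str = "no data found") -> str:
    """Decide the dependent task's status from the upstream result file.

    Returns "SKIPPED" if the result file contains the skip marker,
    otherwise "PENDING" (the dependent task becomes available).
    """
    text = Path(result_path).read_text(encoding="utf-8").lower()
    return "SKIPPED" if skip_marker in text else "PENDING"
```

Because the check reads a file rather than context, it works even if the conditional decision happens in a later session than the task that produced the result.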
Keep it simple until you need complexity
Simple linear chains (A then B then C) cover most real-world task workflows. Fan-out and conditional patterns are worth implementing only when you have a specific use case that requires them. Every additional dependency logic layer is another thing that can break. Start with straight-line chains, get them working reliably, then add complexity only when the simpler model demonstrably fails to meet your needs.
Before you build a dependency chain
Two questions worth answering before writing any dependency tracking into your queue:
Does the order actually matter for correctness, or just for your preference? If running Task B before Task A produces garbage, you need dependency tracking. If running Task B before Task A produces a slightly worse result but not a wrong one, priority ordering may be sufficient. Priority ordering is simpler to maintain and easier to debug. Use it when it is enough.
Is the output of Task A something your agent needs to read, or something a human needs to approve? If your agent reads Task A’s output to do Task B, the file handoff pattern handles it cleanly. If a human needs to review Task A’s output before Task B starts, you need a PAUSED state in your task list, not just a dependency column. These are different problems with different solutions, and conflating them leads to fragile setups where the chain waits forever because no one told it the human approved.
For the workflow I am building: which steps require my agent to have the previous step’s output to function correctly? Which steps require me to review something before continuing? Which steps are just preferences for order but could run in any sequence without breaking anything?
That classification tells you where to use dependency tracking (hard requirements), where to use PAUSED status (human review points), and where to use priority order (soft preferences). Running them all through the same mechanism adds unnecessary complexity to the cases that do not need it. The cleaner the classification up front, the easier the queue is to maintain when something breaks at 2am and you need to figure out which mechanism is responsible for the failure.
Real-world dependency chain patterns
The concepts above are easier to apply when you can see them in the context of actual tasks people run. Here are four common patterns and how dependency tracking applies to each.
Daily research and report pipeline
This is the most common pattern: gather information, process it, format it, deliver it. Each step requires the previous step’s output.
T001: Search for news in [topic] published today. Write a bullet-point list to
workspace/pipeline/daily-raw.md.
T002: Depends: T001. Read workspace/pipeline/daily-raw.md. Summarize to 5 key points.
Write to workspace/pipeline/daily-summary.md.
T003: Depends: T002. Read workspace/pipeline/daily-summary.md. Format as a Telegram
message (max 400 chars) and send to my chat.
T001 runs first, always. T002 cannot start without T001’s output file existing. T003 has content to format only because T002 produced a clean summary rather than raw search results.
The file handoff makes this testable: you can look at daily-raw.md after T001 to verify the search worked, look at daily-summary.md after T002 to verify the summary quality, then let T003 format and send. If T003 sends a bad message, you know immediately whether the problem was in the research (T001), the summary (T002), or the formatting (T003).
Content creation pipeline
Longer-form content creation has more steps and more opportunities for one task to depend on a previous decision.
T001: Research the topic [subject]. Find 5 authoritative sources. Write citations and
key points to workspace/content/research.md.
T002: Depends: T001. Read workspace/content/research.md. Write a detailed outline with
H2 and H3 headings to workspace/content/outline.md.
T003: Depends: T002. Read workspace/content/outline.md and workspace/content/research.md.
Write the full draft article to workspace/content/draft.md.
T004: Depends: T003. Read workspace/content/draft.md. Check for: em dashes, AI-isms,
passive voice, and claims that need citation. Write edited version to
workspace/content/draft-edited.md.
T005: Depends: T004. [PAUSED: human review required] Read workspace/content/draft-edited.md
and confirm ready to publish.
T005 is intentionally PAUSED. You want to review the edited draft before publishing. The chain runs automatically through T004, then stops and waits for you to release T005 to PENDING after reading the draft. This is the human-in-the-loop pattern: automation runs where you trust it, pauses where you need oversight.
Weekly data processing pipeline
Recurring pipelines that process data from an external source benefit from dependency tracking because the processing steps have strict ordering requirements.
T001: Download this week's metrics from [source]. Write raw data to
workspace/weekly/raw-2026-W12.md.
T002: Depends: T001. Read workspace/weekly/raw-2026-W12.md. Validate: check for missing
fields, obvious errors, unusual values. Write validation report to
workspace/weekly/validation-2026-W12.md.
T003: Depends: T002. Read workspace/weekly/validation-2026-W12.md. If PASS: write
cleaned data to workspace/weekly/clean-2026-W12.md. If FAIL: mark self as FAILED
with "validation failed" note.
T004: Depends: T003. Read workspace/weekly/clean-2026-W12.md. Compute this week's metrics
and compare to last week. Write analysis to workspace/weekly/analysis-2026-W12.md.
T005: Depends: T004. Send Telegram summary with key metrics changes.
T002’s validation step is important because it catches bad data before it flows into the analysis. If T003 marks itself FAILED due to validation failure, T004 and T005 are blocked and you get notified. The bad data never reaches the analysis stage.
I want to create a dependency chain for [describe your workflow]. Based on that workflow, write out the task list with appropriate dependencies, file paths for each handoff, and a PAUSED step at any point where I need to review before continuing.
Resetting a pipeline for its next run
A pipeline that runs once works naturally with the dependency schema. A pipeline that runs repeatedly (daily, weekly) needs a reset mechanism so each run starts fresh rather than seeing all tasks from the previous run as already DONE.
The cleanest approach is a reset task that runs first each cycle:
T000: Reset pipeline for new run. Set T001, T002, T003, T004, T005 back to PENDING.
Clear workspace/pipeline/ directory. Set all attempt counts to 0.
T000 has no dependencies. It runs first. After it completes, the rest of the chain is PENDING and ready to go again.
For daily pipelines, trigger the reset task via cron at the start of each day. For weekly pipelines, trigger it on the first run of the week. The key is that the reset fires before any other task in the queue has a chance to run. If the reset and Task T001 both look PENDING at the same time, the processor will run whichever comes first in the file. Put T000 at the top of the file and give it a higher priority than everything else to guarantee it runs first every time.
I have a recurring pipeline that should run every [day/week]. Add a reset task to the beginning of the pipeline that clears all task statuses back to PENDING and removes the output files from the previous run. Schedule this reset task to run at [time] as the trigger for the new cycle.
Archive before reset
Before clearing the previous run’s output files, archive them. A simple convention: move workspace/pipeline/*.md to workspace/pipeline/archive/YYYY-MM-DD/ before clearing. This means you always have the previous run’s data available for debugging or comparison. The reset task becomes: archive outputs, reset statuses, clear current pipeline directory.
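The archive step can be sketched in a few lines of Python (illustrative only; the directory layout follows the convention suggested above and should be adjusted to your setup):

```python
import shutil
from datetime import date
from pathlib import Path

def archive_and_reset(pipeline_dir: str = "workspace/pipeline") -> Path:
    """Move the previous run's output files into a dated archive
    directory, leaving the pipeline directory empty for the next run.
    Returns the archive directory path.
    """
    src = Path(pipeline_dir)
    dest = src / "archive" / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    # glob("*.md") only matches files directly in the pipeline dir,
    # so previously archived runs are left untouched.
    for f in src.glob("*.md"):
        shutil.move(str(f), dest / f.name)
    return dest
```

Run this before resetting statuses so a failed reset never costs you the previous run's data.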
When dependencies add value versus when they add complexity
Dependency tracking is the right tool when:
- Task B genuinely cannot do useful work without Task A’s output
- Running Task B before Task A would produce incorrect results or waste API calls
- You want to pause the chain at a specific point for human review
- You want failure in Task A to be visible as the cause of Task B not running, rather than as a mysterious stuck state
Dependency tracking adds unnecessary complexity when:
- Tasks are genuinely independent and just need to run in the same queue cycle
- The “dependency” is just a preference for ordering, not a hard requirement (use priority order instead)
- The pipeline has only two tasks and you will always trigger them manually in order anyway
The judgment call: if running the tasks in the wrong order produces a clearly wrong result, use dependency tracking. If running them in any order produces an acceptable result, use priority ordering instead.
A useful test: remove the dependency column from your queue temporarily and run the pipeline in a test environment. If the output is correct regardless of task order, you do not need dependency tracking. If the output breaks, you do. This empirical check is faster than reasoning about it in the abstract.
Another reliable signal: have you ever manually reordered tasks in your queue to fix a broken run? If you have edited the queue file to put tasks back in the right sequence after something ran out of order, that is the queue telling you it needs dependency tracking. The manual fix is the symptom. The dependency column is the cure. Adding it once removes the need to manually intervene every time the queue runs in a session you are not watching.
Look at my current task list. For each pair of tasks where one runs after another, tell me: is this a hard dependency (Task B fails or produces garbage without Task A’s output) or a soft dependency (Task B works better after Task A but can run independently)? Which pairs actually need the dependency column and which are just preferences?
Common questions
Can a task depend on a task in a different queue file?
Yes, but it requires extra work. By default, a task ID in the Depends column refers to tasks in the same file. To depend on a task in another file, you need to specify the full path in the Depends field (for example, “workspace/queue-B.md#T003”) and update the processor prompt to check across files. This gets complex quickly. For most setups, keeping all related tasks in the same queue file is simpler and less error-prone. Separate files work well for tasks that are genuinely independent, not for tasks that form a pipeline.
My dependency chain worked correctly once but broke on the second run. What happened?
The most common cause is that tasks from the first run are still marked DONE, so the second run treats them as already complete and never re-runs them. If your workflow is meant to run repeatedly (a daily pipeline, a weekly report), you need a reset step at the start of each run that sets all tasks back to PENDING and clears the output files. Without a reset, the second run sees the first run’s DONE tasks as already complete and skips them. Add a “reset pipeline” task with no dependencies that resets all statuses and deletes output files, and make it the first task in each cycle.
How do I handle a task that sometimes needs to run and sometimes can be skipped?
Add a SKIP status to your task list schema. A task that is SKIPPED is treated the same as DONE for dependency purposes: anything that depends on it can proceed. Your processor prompt needs one additional instruction: “If a task is SKIPPED, treat it as DONE when evaluating whether its dependents can start.” This lets a task higher in the chain mark a downstream task as SKIPPED based on its output, and the chain continues without manual intervention.
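The only change to the availability logic is widening the set of statuses that release dependents. A minimal Python sketch (illustrative, not a chat command):

```python
# Statuses that satisfy a dependency. SKIPPED counts the same as DONE
# when deciding whether a dependent task may start.
SATISFIED = {"DONE", "SKIPPED"}

def can_start(depends: list[str], statuses: dict[str, str]) -> bool:
    """True once every dependency is either DONE or SKIPPED."""
    return all(statuses.get(d) in SATISFIED for d in depends)
```

Keeping the satisfying statuses in one named set means adding future statuses (for example, a CANCELLED that should also release dependents) is a one-line change.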
What is the right way to test a new dependency chain before relying on it?
Create a test version of the chain with minimal tasks: Task A writes a single line to a file, Task B reads that file and writes a confirmation, Task C reads Task B’s output. Run it manually, verify each handoff worked, and check that the dependency checks fired correctly. Once the mechanics are confirmed, replace the test tasks with real ones. Testing the plumbing separately from the real work saves time when something goes wrong because you already know the dependency logic is sound.
Is there a way to pause a dependency chain midway through?
Yes. Add a PAUSED status. Mark any task in the chain as PAUSED and update the processor prompt to treat PAUSED the same as PENDING (do not run it) but without marking its dependents as FAILED. When you are ready to resume, reset the task from PAUSED back to PENDING. This is useful for chains that require human review between steps: Task A completes, Task B is set to PAUSED automatically, you review Task A’s output, then manually reset Task B to PENDING when you are satisfied.
My queue has 50+ tasks and the dependency chain is getting complicated to manage. What should I do?
Break the queue into logical sections. Each section is a self-contained pipeline: it has its own input, its own output files, and its own dependency chain. Sections run independently. If Section A and Section B have no dependencies between them, they can run in any order and failure in one does not block the other. The only tasks that span sections are the ones that genuinely need to. For very large queues, consider maintaining a dependency diagram alongside the queue file so you can visualize the chain at a glance rather than tracing it through the table every time something breaks.
Can I use dependency tracking with the OpenClaw cron scheduler directly, rather than a queue file?
OpenClaw’s built-in cron scheduler as of March 2026 does not natively support task dependencies. You can express dependencies through timing (schedule Task B 30 minutes after Task A is scheduled to run) but this is fragile: if Task A takes longer than expected or fails, Task B starts with stale or missing data. A queue file with a dependency column is the more robust approach. The cron scheduler triggers the queue processor, which enforces the dependency logic inside the queue file. The scheduler is responsible for when the queue runs; the queue is responsible for the order tasks run in.
How do I make one task wait for a task that runs in a different session?
Use a status file. When Task A completes, write a file to a known path: workspace/status/T001-complete.md. Task B’s processor check is: before starting, verify that workspace/status/T001-complete.md exists. If it does, start. If it does not, skip and check again on the next queue run. This works across sessions because files persist while context does not. The status file is the cross-session signal.
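The two halves of the status-file signal are small enough to sketch directly. Python for reference, not a chat command; the `<ID>-complete.md` naming follows the example above:

```python
from pathlib import Path

def mark_complete(task_id: str, status_dir: str = "workspace/status") -> None:
    """Task A's side: drop a marker file when the task finishes."""
    d = Path(status_dir)
    d.mkdir(parents=True, exist_ok=True)
    (d / f"{task_id}-complete.md").write_text("done", encoding="utf-8")

def dependency_complete(task_id: str,
                        status_dir: str = "workspace/status") -> bool:
    """Task B's side: the dependency counts as complete once its
    marker file exists. Works across sessions because files persist
    while context does not."""
    return (Path(status_dir) / f"{task_id}-complete.md").exists()
```

If the check fails, Task B simply skips this queue run and checks again on the next one; no session state is involved.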
What is the best way to visualize a complex dependency chain?
For chains with more than five tasks, a dependency diagram is worth maintaining alongside the queue file. Ask your agent to generate a simple text-based diagram: “Draw a text diagram of my task dependency chain using arrows to show which tasks depend on which.” For example: T001 → T002 → T004 and T001 → T003 → T004. That kind of map lets you spot circular dependencies (Task A depends on Task B which depends on Task A), missing links, and tasks that should be parallel but are currently serialized unnecessarily.
Is there a limit to how deep a dependency chain can be?
There is no hard technical limit in OpenClaw. The practical limit is how many sequential steps you want before you see a result. Long chains (10+ steps) increase the chance that something early in the chain fails and blocks everything that follows. Long chains also mean more time between triggering the pipeline and seeing the final output. If a chain is getting very long, consider whether some steps can be parallelized (run independently and converge at a synthesis step) or whether the chain should be broken into two separate workflows with a handoff point in between.
My task processor is supposed to run the next available task, but it always runs the same task first. Why?
The processor is likely selecting tasks by position in the file (first row that is PENDING) rather than by priority or availability. If your highest-priority available task is at the bottom of the file, it never gets selected first. Either sort your queue file by priority before each run (ask your agent to reorder PENDING tasks by priority column after each task completes) or update the processor prompt to explicitly select by highest priority among available tasks rather than first available by position.
Queue Commander
The complete autonomous task system
Complete task list structure with dependency columns, processor prompt with ordering logic already written in, failure propagation, file handoff conventions, and the reset pattern for recurring pipelines. Drop it into your agent and it handles the setup.
