How to pass output from one OpenClaw cron run into the next

Each OpenClaw cron job runs in an isolated session with no memory of previous runs. When that job finishes, the session closes and its working memory is gone. By default, your daily health check has no idea what yesterday’s health check found. Your weekly digest cannot compare this week to last week. The fix is a file-based state handoff: each run writes its output to a known location in the workspace, and the next run reads it before doing anything else. This article covers how to implement that pattern, what to store, and how to avoid common pitfalls.

TL;DR

  • Each run writes to a dated file in workspace/cron-state/; the next run reads it before generating output.
  • Use structured output (JSON or plain key:value) so the next run can parse reliably.
  • Always handle the missing-file case: the first run has nothing to read.
  • Keep state files small: store summaries and key metrics, not full outputs.
  • Clean up old files with a weekly purge step to prevent workspace clutter.

Throughout this article you will see indented blocks like the ones below. Each one is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal or edit any files manually.

Why isolated sessions need explicit state handoff

When a cron job fires, OpenClaw spawns a fresh isolated session. That session has access to your workspace files and tools, but it has zero memory of previous sessions. There is no automatic context inheritance, no shared memory with past runs, and no way for the session to ask “what did the last run find?”

This is by design. Isolated sessions are predictable and reproducible precisely because they do not carry forward state from prior runs. But it means that any continuity between runs has to be made explicit. You build it in.

The mechanism is simple: write to a file at the end of each run, read from that file at the start of the next run. The workspace directory is shared across all cron sessions and persists indefinitely between runs, so a file written by run N is available and readable by run N+1. The file system is your shared memory across isolated sessions, and it is always available.

Explain how file-based state handoff works between cron runs in OpenClaw. What is the workspace directory path, and is it accessible from an isolated cron session?

The basic pattern

Every state handoff job has the same structure in its prompt:

  1. Read the state file from the last run (if it exists).
  2. Do the work for this run.
  3. Write this run’s output to the state file for the next run to read.
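The three steps can be sketched in Python. This is an illustration of the logic the agent follows from the prompt, not OpenClaw internals; the state path and the today_pct field name are illustrative assumptions:

```python
import json
from pathlib import Path

STATE = Path("workspace/cron-state/disk-check.json")  # illustrative path

def run_once(today_pct: int) -> str:
    # Step 1: read prior state, tolerating the missing file on the first run.
    try:
        yesterday_pct = json.loads(STATE.read_text()).get("today_pct")
    except FileNotFoundError:
        yesterday_pct = None

    # Step 2: do the work (here, today's measurement is passed in).
    if yesterday_pct is None:
        message = f"Disk usage today: {today_pct}%. First run, no baseline."
    else:
        delta = today_pct - yesterday_pct
        message = (f"Disk usage today: {today_pct}%. "
                   f"Yesterday: {yesterday_pct}%. Change: {delta:+d}%.")

    # Step 3: overwrite the state file for the next run to read.
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps({"today_pct": today_pct}))
    return message
```

Called twice, the second call reports a delta; called once against an empty directory, it reports a first-run baseline.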

Here is a minimal example, a daily metric tracker that compares today’s disk usage to yesterday’s:

Create a cron job that runs daily at 8am America/New_York. Prompt: “Step 1: Read the file workspace/cron-state/disk-check.json if it exists. If it does not exist, set yesterday_pct to null. If it does exist, parse it and extract yesterday_pct. Step 2: Run df -h and extract the usage percentage for the root filesystem. Set today_pct to that number. Step 3: Build the output message: ‘Disk usage today: today_pct%. Yesterday: yesterday_pct% (or first run if null). Change: delta.’ Step 4: Write a new workspace/cron-state/disk-check.json with content: {today_pct: today_pct, date: today’s date}. Step 5: Send the output message to Telegram [your-chat-id].” Model: ollama/phi4:latest. Delivery: none (agent sends Telegram directly).

After two days of runs, you will see a message like: “Disk usage today: 47%. Yesterday: 44%. Change: +3%.” That comparison is only possible because the job writes state for the next run to read.

Choosing a state file format

The file format determines how reliably the next run can parse the output. Three options:

JSON (recommended for structured data)

Easy to read, easy to write, easy to extend. The agent can parse JSON natively and write it back without custom logic.

{
  "date": "2026-03-23",
  "disk_pct": 47,
  "memory_pct": 62,
  "status": "ok"
}

Use JSON when you are storing multiple values, when the values have types (numbers, booleans), or when the structure may grow over time.

Plain text (good for simple values)

One value per line, labeled clearly. Fast to write, easy for the agent to read without parsing.

date: 2026-03-23
disk_pct: 47
memory_pct: 62
status: ok

Use plain text for simple state files where you want the contents to be human-readable without a JSON parser.

Markdown (for prose summaries)

When the state is a summary meant to be read and compared rather than parsed, markdown works well. The next run reads the file and uses its content as context rather than extracting specific fields.

# 2026-03-23 health summary
- Disk: 47% (stable)
- Memory: 62% (up from 58% yesterday)
- Cron: all jobs ran on time
- Notes: updated phi4 model this morning

Use markdown when the next run needs to read the previous summary as context for generating a new one, rather than extracting specific numbers.

I want to set up state handoff for a daily cron job. The job tracks disk usage, memory usage, and whether all other cron jobs ran. What format should I use for the state file and what fields should it include? Show me the exact file content for a sample run.

Organizing state files

All state files should live in a dedicated directory. Two patterns work well:

Single file per job (rolling)

One file that gets overwritten on every run. The file always contains the most recent run’s state.

workspace/cron-state/health-check.json
workspace/cron-state/weekly-digest.json
workspace/cron-state/memory-audit.json

Use this when the next run only needs the immediately preceding run’s state. Simple and self-cleaning.

Dated files (historical archive)

One file per run, named by date. Keeps a history you can query.

workspace/cron-state/health-2026-03-23.json
workspace/cron-state/health-2026-03-22.json
workspace/cron-state/health-2026-03-21.json

Use this when you want to compare across multiple runs, generate trend reports, or reconstruct what happened on a specific date. The tradeoff is cleanup: add a purge step to prevent unbounded file accumulation.

Create the directory workspace/cron-state/ if it does not exist. Then list any existing files in that directory.

Handling the first run

The most common failure point in state handoff prompts is not handling the case where the state file does not exist. The first time a job runs, there is nothing to read. If the prompt tries to read a file that does not exist and does not handle the missing case, the agent will either error out or produce garbled output.

Every state handoff prompt must explicitly handle the missing file case. Three approaches:

Default values

Set default values for all state fields and use them when the file is absent:

Read workspace/cron-state/health-check.json. If the file does not exist, use these defaults: disk_pct=null, memory_pct=null, date="no prior run". If it exists, parse the JSON and extract disk_pct, memory_pct, and date.
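The defaults approach can be sketched in Python. The field names and path mirror the prompt above but are illustrative, and the merge means a field missing from the file still gets its default:

```python
import json
from pathlib import Path

DEFAULTS = {"disk_pct": None, "memory_pct": None, "date": "no prior run"}

def read_state(path: Path) -> dict:
    """Return prior state, falling back to DEFAULTS on the first run
    or when the file is missing or unparseable."""
    try:
        data = json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return dict(DEFAULTS)
    # Merge so any field absent from the file still gets its default.
    return {**DEFAULTS, **data}
```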

First-run detection

Detect the first run explicitly and change the output message accordingly:

Read workspace/cron-state/digest.json. If the file does not exist, this is the first run: skip the comparison step and note “first run, no baseline” in the output. If the file exists, use its content as the prior baseline.

Graceful skip

For jobs where the comparison is optional (not the main purpose), skip it when there is no prior state:

If workspace/cron-state/weekly-summary.json exists, read it and include a brief comparison to last week in the output. If it does not exist, skip the comparison entirely and just report this week’s data.

Full example: weekly digest with rolling comparison

Here is a complete weekly digest job that compares this week to last week:

Create a cron job that runs every Monday at 9am America/New_York. Name: “Weekly digest”. Prompt: “Step 1: Read workspace/cron-state/weekly-digest.json. If it exists, extract last_week_summary. If it does not exist, set last_week_summary to null. Step 2: Read workspace/pipeline/ARTICLE-QUEUE.md and count DONE and PENDING articles. Summarize any articles published since last Monday. Step 3: Run df -h and free -h and get current server health status. Step 4: Build this week’s summary: articles published this week, articles remaining, server health. Step 5: If last_week_summary is not null, add a ‘vs last week’ section comparing this week’s article count to last week’s. Step 6: Write workspace/cron-state/weekly-digest.json with: {date: today, articles_done: count, articles_pending: count, this_week_summary: one-paragraph summary of this week}. Step 7: Send the full digest to Telegram [your-chat-id].” Model: ollama/phi4:latest. Delivery: none.

After the first Monday, the job writes a baseline. The second Monday, it reads that baseline and adds a comparison. Each subsequent Monday builds on the prior week’s record.

Chaining two separate jobs

Sometimes the job that collects data and the job that reports on it are better separated. Job A runs at midnight and collects raw data. Job B runs at 8am and reads Job A’s output to generate a human-readable report.

The handoff is the same file-write/file-read pattern, but across two different jobs rather than within a single job.

Create two cron jobs: Job A runs at midnight and collects data (disk, memory, failed cron jobs) and writes it to workspace/cron-state/overnight-data.json. Job B runs at 7am and reads workspace/cron-state/overnight-data.json and sends a formatted Telegram summary. Job A should use delivery mode none. Job B should handle the case where overnight-data.json does not exist.

Stagger chained jobs

If Job A runs at midnight and Job B runs at 12:01am, a slow Job A may not have finished writing before Job B starts reading. Stagger chained jobs by at least 10 to 15 minutes. Midnight and 12:15am is safe. Midnight and 12:01am is not.

Cleaning up old state files

If you use dated files, add a cleanup step to a weekly or monthly maintenance job to delete files older than your retention window. A workspace full of hundreds of dated state files is slow to list and wastes disk space.

Create a cron job that runs every Sunday at 11pm America/New_York. Prompt: “Delete any files in workspace/cron-state/ that are older than 30 days. List the files deleted. If none are deleted, say ‘No cleanup needed.’” Model: ollama/phi4:latest. Delivery: announce to Telegram [your-chat-id].

For rolling files (single file overwritten each run), no cleanup is needed. The file is always the same size and always contains exactly one run’s worth of state.
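The purge logic the cleanup prompt describes looks roughly like this in Python, using file modification time as the age. The directory path and 30-day window are the assumptions from the example above:

```python
import time
from pathlib import Path

def purge_old_state(directory: Path, max_age_days: int = 30) -> list[str]:
    """Delete state files older than max_age_days; return deleted names."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for f in sorted(directory.glob("*.json")):
        if f.stat().st_mtime < cutoff:
            f.unlink()
            deleted.append(f.name)
    return deleted
```

An empty return list corresponds to the “No cleanup needed” case in the prompt.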

Making the read step reliable

The read step is where most state handoff prompts break down. A prompt that says “read the state file and extract the metrics” is fragile: if the file format changes slightly or the agent writes a field with a different name, the next run cannot find what it is looking for.

Three rules for reliable reads:

Name fields exactly the same every run

If you write disk_pct in the write step, the read step must look for disk_pct, not diskPct or disk_usage. Be explicit in the prompt about the exact field names to use.

Validate what you read

After reading the file, instruct the agent to verify that the expected fields are present before using them. If a field is missing, use the default value rather than erroring out.

Read workspace/cron-state/health-check.json. After parsing, confirm that disk_pct and memory_pct fields are present. If either is missing, use null as the value for that field. Do not error out if a field is missing.
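The validate-then-default behavior can be sketched as a small helper. The required-field names are whatever your job writes; returning warnings alongside the repaired state lets the report mention the gap instead of erroring out:

```python
def validate_state(state: dict, required: list[str]) -> tuple[dict, list[str]]:
    """Fill missing required fields with None instead of erroring out;
    return the repaired state plus warnings for the report."""
    warnings = []
    repaired = dict(state)
    for field in required:
        if field not in repaired:
            repaired[field] = None
            warnings.append(f"missing field: {field}")
    return repaired, warnings
```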

Write atomically

Instruct the agent to write the complete new state file in a single operation, not to append to or partially update an existing file. Partial writes leave the file in a broken state that the next run cannot parse.

Write the complete updated state to workspace/cron-state/health-check.json in a single write operation. The file should contain only the new state; do not append to the existing file.

Debugging a state handoff that is not working

When a chained job is not producing the expected output, the problem is almost always in one of three places: the write step did not produce a valid file, the read step looked for the wrong field name, or the file path does not match between the two steps.

Read workspace/cron-state/health-check.json and show me its exact contents. Is the file valid JSON? Are the field names spelled correctly?

List all files in workspace/cron-state/ with their sizes and last modified times.

If the state file exists but contains garbled or partial content, the write step is the problem. If the state file does not exist when it should, the job that writes it may have failed before reaching the write step, or the path in the write step does not match the path in the read step.

Trigger cron job [job-id] immediately. After it runs, read workspace/cron-state/[state-file].json and confirm it was updated. Show me the file contents and the last modified time.

Three real-world state handoff patterns

Most state handoff use cases fall into one of three patterns. Understanding which one fits your job makes the prompt easier to write.

Pattern 1: Delta tracking

The job measures something that changes over time. The state file stores the last measurement. The next run compares the new measurement to the stored one and reports the delta.

Good for: disk usage trends, memory usage trends, article count growth, error rate changes, API spend changes.

Create a cron job for delta tracking: runs daily at 8am. Reads workspace/cron-state/metrics.json for yesterday’s values. Measures today’s disk usage percentage and memory usage percentage. Calculates the change from yesterday (or notes “first run” if no prior state). Writes today’s values back to workspace/cron-state/metrics.json. Sends a Telegram summary showing today’s values and the delta from yesterday. Use ollama/phi4:latest.

Pattern 2: Accumulating summary

The job collects items over time. Each run adds new items to a running list. The state file grows (within limits) until a reporting job reads it and resets it.

Good for: collecting daily article titles for a weekly digest, logging completed tasks for a monthly review, tracking API errors for a weekly error report.

Create a cron job that runs daily at 11pm and appends today’s published article titles to workspace/cron-state/weekly-articles.json. The file format should be a JSON array of objects with date and title fields. Keep a maximum of 7 entries (one per day). On Sunday, a separate job reads this file and generates the weekly digest, then clears the file for the next week.
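The accumulate-with-a-cap step can be sketched like this. The file holds a JSON array, the whole file is rewritten each run (never appended to), and only the newest entries are kept; the 7-entry cap matches the example above:

```python
import json
from pathlib import Path

def append_entry(path: Path, entry: dict, max_entries: int = 7) -> list[dict]:
    """Add one record to a JSON-array state file, keeping only the
    newest max_entries; the complete file is rewritten each time."""
    try:
        entries = json.loads(path.read_text())
    except FileNotFoundError:
        entries = []
    entries.append(entry)
    entries = entries[-max_entries:]
    path.write_text(json.dumps(entries))
    return entries
```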

Pattern 3: Conditional trigger

The job checks a condition and writes a flag to the state file. A second job reads the flag and takes action only if the flag is set. The first job is cheap and fast; the second job does heavier work only when needed.

Good for: triggering a detailed report only when usage crosses a threshold, running a cleanup only when disk usage is above 80%, sending an alert only when something has changed since the last check.

Create two jobs: Job A runs hourly, checks disk usage, and writes workspace/cron-state/disk-flag.json with content {alert: true/false, disk_pct: current value, checked_at: timestamp}. Job B runs every 2 hours, reads workspace/cron-state/disk-flag.json, and sends a Telegram alert only if alert is true. Job B should update the flag to alert: false after sending the alert, so it does not resend for the same condition.

Managing the state directory over time

A workspace with active cron state files accumulates files at a predictable rate. If you have five daily jobs each using dated files, you add 35 files per week; after six months that is roughly 900 files, and after a year over 1,800. None of these files are large, but the directory becomes cluttered and listing it becomes slow.

The standard approach is a weekly maintenance job that runs cleanup as part of a broader housekeeping pass:

Create a weekly cron job that runs every Sunday at 11:30pm. Prompt: “Step 1: List all files in workspace/cron-state/ with their ages. Step 2: Delete any file older than 30 days. Step 3: List all files in workspace/tmp/ older than 7 days and delete them. Step 4: Report total files deleted and remaining disk usage.” Use ollama/phi4:latest. Deliver to Telegram.

Rolling files (single file overwritten each run) do not need cleanup. They maintain a constant footprint regardless of how long the job has been running. If you are setting up state handoff between cron runs for the first time, the rolling file pattern is the easiest to maintain and requires zero cleanup logic.

State files vs. agent memory

OpenClaw’s memory plugin stores facts in a vector database that can be recalled across sessions. That sounds like it could serve the same purpose as a state file, and in some cases it can. But the two mechanisms have different tradeoffs.

  • State files: precise, structured, predictable, readable by non-agent tools, no retrieval ambiguity. Best for numeric data, dates, flags, and structured records that need to be parsed reliably.
  • Agent memory: semantic search, can surface related facts you did not know to look for, works across different types of tasks. Best for preferences, qualitative observations, and context that benefits from fuzzy retrieval.

For state handoff between cron runs, state files are the right tool. Memory retrieval is probabilistic: the next run might not recall exactly the right fact at exactly the right moment. File reads are deterministic: the next run reads exactly what the last run wrote.

Use memory for: preferences, project context, qualitative notes, anything where fuzzy retrieval adds value. Use state files for: metrics, counts, dates, flags, anything that needs to be read and parsed reliably. The two systems complement each other. A cron job can write state to a file AND store a qualitative observation in memory in the same run.

My daily health check currently stores its findings in agent memory. What are the risks of relying on memory recall for cron state vs. using a dedicated state file? When would you recommend switching to file-based state?

A reusable prompt template for state handoff jobs

Here is a reusable template you can adapt for any state handoff cron job in OpenClaw. Fill in the bracketed sections for your specific use case and job type:

Step 1 - READ PRIOR STATE: Read workspace/cron-state/[filename].json. If the file does not exist (first run), use these defaults: [field1: default, field2: default]. If it exists, parse it and extract: [field1, field2, date]. Step 2 - DO THE WORK: [Your actual task: check metrics, generate content, run commands, etc.]. Step 3 - COMPARE TO PRIOR STATE: [What to compare and how to present the delta]. If prior state was null (first run), skip the comparison and note “first run”. Step 4 - WRITE NEW STATE: Write the complete updated state to workspace/cron-state/[filename].json in a single write. Include: {date: today’s date in YYYY-MM-DD, [field1]: new value, [field2]: new value}. Step 5 - DELIVER: Send the output to Telegram [your-chat-id]. Format: [describe output format].

The five-step structure (read, work, compare, write, deliver) keeps the prompt organized and ensures the write step always comes before delivery. If delivery fails, the state was still written correctly and the next run will have the right baseline. This ordering matters: write before you deliver, never after. A failed delivery is recoverable. A failed write means the next run starts with stale data.

Advanced: multi-job pipelines with shared state

For more complex setups, multiple jobs can share a single state directory with each job owning its own files. A central job can read from all of them and generate a synthesized report.

I have three daily jobs: health check (writes workspace/cron-state/health.json), article pipeline (writes workspace/cron-state/pipeline.json), and memory audit (writes workspace/cron-state/memory.json). Create a fourth job that runs at 9am and reads all three files, synthesizes a morning briefing, and sends it to Telegram. Handle the missing file case for each one independently.

The synthesizing job should read all three files before building its output. If any file is missing or stale (older than 25 hours), it should note that data is unavailable for that section rather than omitting the section silently.

Error-proofing the write step

The write step is the most critical step in any state handoff job. If it fails or produces a partial file, the next run gets bad input. Here is how to make the write step as reliable as possible.

Write only after the work is done

Do not write partial state mid-task. Collect all the values the state file needs, then write the complete file in one operation at the end. If the job fails before reaching the write step, the previous run’s state is preserved intact. This is better than a partial overwrite that corrupts the state file.

Include a checksum or version field

For critical state files, include a written_by field with the job name and a version field that increments each run. The reading job can verify these fields to confirm it is reading a valid file from the expected job.

{
  "date": "2026-03-23",
  "version": 47,
  "written_by": "daily-health-check",
  "disk_pct": 47,
  "memory_pct": 62,
  "status": "ok"
}

The version field also gives you a quick way to confirm that a job is running: if the version has not changed since yesterday, the write step did not execute.
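A sketch of a write helper that maintains the written_by and auto-incrementing version fields. The job name and extra fields are whatever your job uses; an unchanged version since yesterday means the write step did not run:

```python
import json
from datetime import date
from pathlib import Path

def write_versioned(path: Path, fields: dict, job_name: str) -> dict:
    """Write state with written_by and a version that increments each run."""
    try:
        prev_version = json.loads(path.read_text()).get("version", 0)
    except FileNotFoundError:
        prev_version = 0
    state = {"date": date.today().isoformat(),
             "version": prev_version + 1,
             "written_by": job_name, **fields}
    path.write_text(json.dumps(state))
    return state
```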

Write to a temp file first, then rename

For jobs where data integrity is important, instruct the agent to write to a temporary file first, verify the write completed successfully, then rename the temp file to the final name. This prevents a partial write from corrupting the state file that the next run depends on.

Write the state data to workspace/cron-state/health-check.tmp first. After confirming the write succeeded and the file contains valid JSON, rename it to workspace/cron-state/health-check.json. If the rename fails, report the error but do not delete the .tmp file.
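The temp-file-then-rename sequence looks like this in Python. On POSIX filesystems os.replace is atomic, so a reader never sees a half-written file; the .tmp naming mirrors the prompt above:

```python
import json
import os
from pathlib import Path

def atomic_write_state(path: Path, state: dict) -> None:
    """Write to a .tmp sibling, verify it parses, then rename over
    the final path so readers never see a partial file."""
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(state))
    json.loads(tmp.read_text())  # confirm the temp file is valid JSON
    os.replace(tmp, path)        # atomic on the same filesystem
```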

Monitoring whether the handoff is actually working

Once you have state handoff set up, check periodically that it is actually functioning. The state file should be updated on every successful run. If the last_modified time of the state file is older than one run cycle, the write step is failing silently.

List all files in workspace/cron-state/ with their last modified times. For each file, tell me the expected update frequency (daily, weekly, etc.) based on the filename and compare it to the actual last modified time. Flag any file that has not been updated within its expected cycle.

Include this check in your weekly cron audit. A state file that has not been updated is a signal that the job writing it has stopped working, and the jobs reading from it are operating on stale data.

Read workspace/cron-state/health-check.json. Check the date field. Is it today’s date? If it is more than 25 hours old, send a Telegram alert: “STALE STATE: health-check.json has not been updated since [date].”

Getting started in five minutes

If you have an existing daily cron job and want to add state handoff to it today, here is the fastest path:

  1. Decide what data from the current run is useful to the next run. Usually one to five fields.
  2. Pick a state file path: workspace/cron-state/[job-name].json.
  3. Update the existing job prompt with a read step at the start and a write step at the end.
  4. Trigger the updated job manually to confirm it writes the state file correctly.
  5. Trigger it a second time to confirm the second run reads the first run’s state correctly.

I have an existing daily cron job [describe what it does]. I want to add state handoff so each run can compare to the previous run. Show me the updated prompt with a read step at the start and a write step at the end. The state file should be at workspace/cron-state/[job-name].json.

Most existing jobs can have state handoff added in under five minutes. The prompt gets longer but the structure is the same, and the output immediately becomes richer because each run now has context from the run before it. Once you see a daily job comparing today to yesterday for the first time, you will want to add it to every job that collects data over time.

When the read step returns unexpected results

The read step fails in predictable ways. Here is a quick reference for the most common problems and their fixes.

The agent reads the file but reports wrong values

The field names in the write step do not match the field names in the read step. Check the state file contents directly and compare the actual field names to what the read step is looking for. If there is a mismatch, fix either the write step to use consistent names or the read step to look for the correct names.

The agent says the file does not exist when it clearly does

The path in the read step has a typo or uses a different base directory than the write step. The workspace path is /home/node/.openclaw/workspace. If the write step uses a relative path and the read step uses an absolute path, they may resolve to different locations. Standardize on absolute paths in both steps.

The agent reads stale data from a previous job

Two jobs are writing to the same state file and one is overwriting the other’s data. Give each job its own uniquely named state file. If jobs must share a file, designate exactly one job as the writer and the rest as readers only.

The state file grows without bound

The write step is appending to the file instead of replacing it. Instruct the agent explicitly to write the complete new state as a replacement for the existing file, not to append. Check the write prompt for any language like “add to” or “append” and replace it with “overwrite” or “write the complete state”.

Read workspace/cron-state/[state-file].json and show me its size in bytes and the number of top-level fields. If the file is larger than 10 KB, show me the full contents so I can identify what is growing unexpectedly.

Frequently asked questions

The questions below cover the edge cases and design choices that come up once state handoff is working and you start building more sophisticated pipelines on top of it.

Can two different cron jobs read from the same state file?

Yes. Multiple jobs can read the same file. Only one job should be responsible for writing it, and that job should always write the complete file (not append). If two jobs write to the same file, they will overwrite each other and the state will be unreliable.

What happens to the state file if the job fails halfway through?

If the job fails before the write step, the state file from the previous run is still intact. The next run reads the previous state, which is correct behavior. If the job fails after partially writing the new state file, the file may be corrupted. To reduce this risk, instruct the agent to write the complete new state in a single operation and only overwrite the file after confirming the new state is complete.

How large can a state file be before it causes problems?

Keep individual state files under 10 KB as a rule. The agent reads the file into context at the start of every run, so a bloated state file wastes context tokens on every execution. Store summaries and key metrics, not full output dumps. If you need to store large outputs, write them to a separate archive file and store only a reference (filename, date, summary) in the state file.

Can I use the state file to pass data to a job that runs on a different schedule?

Yes. The daily health check can write to a state file, and a weekly summary job can read it. The weekly job does not need to know when the daily job ran; it just reads the most recent state file and uses whatever is there. Include the date field in every state file so the reading job can tell how old the data is.

Is there a risk of the state file becoming stale and misleading the next run?

Yes. If the writing job stops running (due to a gateway restart, model failure, or disabled status), the state file will age out. The reading job will read old data without knowing it is old. The fix is to include a date field in every state file and instruct the reading job to check how old the data is before using it. If the data is older than expected, the reading job should note that in its output rather than presenting stale data as current.

Can I read multiple state files in a single job?

Yes. A weekly digest job might read the daily health state, the article queue state, and the memory audit state, then synthesize them into a single report. Instruct the agent to read each file, handle the missing case for each separately, and then combine the results.

What if I want to keep a full history, not just the last run?

Use dated files. Write workspace/cron-state/health-2026-03-23.json each run. When you need historical analysis, ask the agent to read all files in workspace/cron-state/ that match a name pattern and summarize the trend. Add a cleanup cron job to delete files older than 90 days to keep the directory manageable.


Go deeper

  • Cron: How to schedule a daily task in OpenClaw without building a queue system. The complete setup guide for daily cron jobs, schedule types, and delivery modes.
  • Cron: My cron job works in testing but silently does nothing in production. Five root causes of silent cron failures and the fix for each.
  • Cron: How to stop OpenClaw cron jobs from piling up when tasks run long. Concurrency limits, timeout settings, and queue depth controls.