Cron scheduling patterns that work

Cron jobs are one of the most powerful things you can do with an AI agent. They are also one of the easiest ways to quietly burn through your API budget without realizing it. Most operators set up a job or two, watch them work, and never audit them again. Six months later they have a dozen scheduled tasks, half of which are duplicates, orphans, or firing far more often than they need to. This article covers what cron jobs actually are, the three schedule types and when to use each, the patterns that work in production in 2026, and the zombie job problem that is almost certainly costing you money right now.

TL;DR

  • A cron job is a scheduled instruction. You set it once and your agent runs it automatically on a timer, forever, whether you are there or not.
  • Use cron type for anything tied to a clock time: daily briefings, nightly summaries, weekly reports.
  • Use every type for recurring background checks where exact timing does not matter: queue polling, health checks, memory cleanup.
  • Use at type for one-shot tasks that should fire once and never again: reminders, scheduled sends, delayed actions.
  • Zombie jobs are a real cost problem. Old, forgotten, or duplicate jobs keep firing and burning API tokens indefinitely. Audit yours now.
  • Always set a timezone on cron type jobs. The default is UTC and it will bite you.
  • Keep task text short and point to a file. Long instructions in the task field cost tokens on every single run.

Throughout this article you will see standalone command blocks like the ones below. Each one is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal or edit any files manually.

If you are coming from a non-technical background, the name is confusing. “Cron” comes from the Unix utility that has run scheduled tasks on servers since the 1970s. The name does not matter. What matters is what it does: a cron job is a standing instruction that runs automatically on a schedule without you being there to trigger it.

In OpenClaw, a cron job has three parts:

  • A schedule: when to fire. This could be “every day at 8am”, “every 15 minutes”, or “once at 3pm on March 20th”.
  • A task: what to do when it fires. This is the instruction your agent receives: a prompt, a file to read, a question to answer.
  • A target: whether to run in your main session or an isolated one. Isolated is safer for background work because it does not interrupt what you are doing.

Once created, the job fires on its own. You do not need to be in a chat session. You do not need to send a message. Your agent wakes up, reads the task, does the work, and goes back to sleep. This is what makes autonomous agents actually autonomous.

List all my current cron jobs. For each one, tell me: what it is called, what schedule type it uses, when it was created, when it last ran, and when it will run next.

If you have never done this audit before, the output will probably surprise you.

Every time a cron job fires, it starts a new agent turn. That turn consumes tokens: your system prompt, any injected context, the task instruction, the agent’s response, and any tool calls it makes along the way. Those tokens cost money on every API call.

The math compounds fast. A queue processor that fires every 5 minutes runs 288 times per day. If each run is 2,000 tokens with a model that costs $3 per million tokens, that is $1.73 per day just for that one job. Multiply by several jobs and you have a meaningful monthly line item for work that may or may not be doing anything useful.
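
The arithmetic behind that figure can be checked in a few lines of Python; the interval, token count, and price are the illustrative numbers from the paragraph above, not measurements from a real job.

```python
def daily_cron_cost(interval_minutes, tokens_per_run, price_per_million_tokens):
    """Estimate runs per day and dollar cost for one scheduled job."""
    runs_per_day = (24 * 60) // interval_minutes
    cost = runs_per_day * tokens_per_run * price_per_million_tokens / 1_000_000
    return runs_per_day, cost

# The example from the text: fires every 5 minutes, ~2,000 tokens per run,
# on a model priced at $3 per million tokens.
runs, cost = daily_cron_cost(5, 2_000, 3.00)
print(runs)            # 288 runs per day
print(round(cost, 2))  # 1.73 dollars per day, roughly $52/month
```

Plug in your own interval and per-run token size to see what any single job costs before you create it.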

The four ways cron jobs silently drain budgets:

1. Wrong interval for the use case

A queue processor that fires every 5 minutes when your queue gets new tasks once an hour is running 11 empty turns for every productive one. The empty turns still cost tokens. If there is nothing to do, the agent still spins up, reads the task, checks the queue, finds nothing, and shuts down. You paid for that.

Match the interval to the actual frequency of work. A queue that fills a few times a day does not need a 5-minute check. A 30-minute or hourly check is almost always sufficient and fires 6-12x fewer turns.

2. Long task descriptions

The task field is sent as part of the prompt on every run. If you wrote a 500-word instruction directly in the task field, those 500 words are resent on every single fire. At roughly 1.3 tokens per English word, that is about 650 tokens per run; for a job that runs 100 times a day, that is on the order of 65,000 extra tokens per day just from the task text itself.

The fix is one line: “Read workspace/cron-prompts/my-task.md and follow the instructions there.” The 500 words live in the file. The cron job just points to them. Same behavior, a fraction of the cost.
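
A rough Python sketch of the savings, assuming English prose runs around 1.3 tokens per word and that the task text is resent on every fire; the run count and token figures are hypothetical.

```python
def daily_task_text_tokens(runs_per_day, task_tokens):
    """Tokens spent per day on the task text alone, before any real work."""
    return runs_per_day * task_tokens

# Hypothetical figures: a ~500-word inline instruction (~650 tokens) versus
# a one-line pointer to a prompt file (~15 tokens), on a job firing 100x/day.
inline = daily_task_text_tokens(100, 650)
pointer = daily_task_text_tokens(100, 15)
print(inline - pointer)  # 63500 tokens per day saved by pointing at a file
```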

3. Broad, open-ended tasks

A task that says “do my morning briefing and also check my email summary and update my task list and write a status report” will generate a long, expensive response every time. Tasks that have clear stopping conditions (“check for PENDING tasks; if none, stop immediately”) are far cheaper because the agent does minimal work on empty runs.

The best cron job tasks are idempotent: running them on an empty queue or with nothing to do costs almost nothing. The agent reads the condition, finds nothing, writes one sentence, done.

4. Zombie jobs

This is the big one. A zombie job is a cron job that is still running but should have been stopped. You created it for a project that is finished. You created a replacement with a better schedule and forgot to delete the original. You were testing something and never cleaned up. Now both versions are firing, doing duplicate work, and you have no idea because you stopped looking at the job list months ago.

Zombie jobs are extremely common. Every operator who has been running OpenClaw for more than a few weeks has at least one. They are invisible unless you actively look for them.

List all my cron jobs including disabled ones. For each one, tell me: when was it created, when did it last run, how many times has it run total, and does its purpose still seem relevant to what I am currently doing? Flag anything that looks like it might be a duplicate, orphan, or leftover from a project that is finished.

Kill anything you cannot justify keeping. A disabled job costs nothing. A running job you forgot about costs something on every fire.

OpenClaw gives you three ways to schedule a job. They are not interchangeable. Using the wrong one for your use case is one of the most common setup mistakes.

Type 1: cron

Uses a standard cron expression to fire at specific times. Five fields: minute, hour, day of month, month, day of week. 0 8 * * * means “at 8:00am every day”. 0 9 * * 1 means “at 9am every Monday”.

Use this for anything that is tied to a clock time. Daily briefings. Weekly summaries. Monthly reports. End-of-day task reviews. The key property of cron type is that it fires at a predictable, human-readable time that stays consistent across gateway restarts.

The critical detail: cron expressions are evaluated in UTC by default. If you do not set a timezone, “0 8 * * *” fires at 8am UTC, which is 3am or 4am Eastern depending on the time of year. Always set the timezone explicitly:

Create a cron job that fires every day at 8am Eastern time. Use a cron schedule with the expression “0 8 * * *” and timezone “America/New_York”. The task should be: read workspace/cron-prompts/morning-briefing.md and follow the instructions there.

Common cron expressions for reference:

0 8 * * *       Every day at 8am
0 9 * * 1       Every Monday at 9am
0 20 * * *      Every day at 8pm
0 0 * * 0       Every Sunday at midnight
0 8,20 * * *    Every day at 8am and 8pm
0 */4 * * *     Every 4 hours
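
If you want to sanity-check an expression before scheduling it, a toy matcher covering just the subset used in this table (plain numbers, `*`, comma lists, and `*/n` steps) fits in a few lines of Python. Real cron implementations support more syntax (ranges, names, day-of-month vs day-of-week OR semantics), so treat this strictly as a sketch:

```python
from datetime import datetime

def field_matches(spec, value):
    """One cron field: '*', '*/n' steps, a number, or a comma list."""
    if spec == "*":
        return True
    if spec.startswith("*/"):
        return value % int(spec[2:]) == 0
    return value in {int(part) for part in spec.split(",")}

def cron_matches(expr, dt):
    """True if the five-field expression fires at datetime dt."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # cron uses 0 = Sunday

print(cron_matches("0 8 * * *", datetime(2026, 3, 2, 8, 0)))     # True: 8am daily
print(cron_matches("0 9 * * 1", datetime(2026, 3, 2, 9, 0)))     # True: a Monday
print(cron_matches("0 */4 * * *", datetime(2026, 3, 2, 13, 0)))  # False: 13 % 4 != 0
```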

Type 2: every

Fires on a repeating interval measured in milliseconds. It does not know what time it is: it simply counts from when the job was created, or from when the gateway last restarted, and fires again each time the interval elapses.

Use this for recurring background work where exact timing does not matter. A queue processor does not care whether it runs at 2:00 or 2:07. A health check does not need to fire at a specific hour. Anything where “every N minutes” is a more natural description than “at X o’clock” belongs here.

The catch: if your OpenClaw gateway restarts, the interval resets from the restart time. A job set to fire every hour might fire at 2:07 instead of 2:00 after a restart. For most background tasks this is fine. For anything time-sensitive, use cron type instead.

Create a cron job that runs every 30 minutes. Use an “every” schedule with everyMs set to 1800000. The task should be: read workspace/cron-prompts/queue-processor.md and follow the instructions there.

Common millisecond values:

60000     1 minute
300000    5 minutes
900000    15 minutes
1800000   30 minutes
3600000   1 hour
86400000  24 hours
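
There is no need to memorize raw millisecond values; the conversion is one multiplication:

```python
def minutes_to_ms(minutes):
    """Convert a minute interval to the millisecond value an 'every' schedule expects."""
    return minutes * 60 * 1000

print(minutes_to_ms(30))       # 1800000 -- every 30 minutes
print(minutes_to_ms(24 * 60))  # 86400000 -- every 24 hours
```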

Type 3: at

Fires once at an exact timestamp and never again. Use ISO 8601 format: 2026-04-01T09:00:00Z. After it fires, it is done. It will not run again unless you create a new job.

Use this for reminders, scheduled sends, delayed follow-ups, anything that has a specific one-time fire date. The at type is underused. Most operators default to every for things that should really be at, and end up with jobs that keep running after the relevant moment has passed.
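
A quick way to verify an at timestamp parses as intended, using only the Python standard library (the explicit strptime format handles the trailing Z on any Python version):

```python
from datetime import datetime, timezone

def parse_at_timestamp(ts):
    """Parse the UTC ISO 8601 form used above, e.g. 2026-04-01T09:00:00Z."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

fire_at = parse_at_timestamp("2026-04-01T09:00:00Z")
print(fire_at.isoformat())  # 2026-04-01T09:00:00+00:00
```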

Create a one-shot cron job that fires at 2026-04-01T14:00:00Z. The task should be: remind me that the quarterly review is due today and list any open items in workspace/reviews/q1-2026.md that are still marked incomplete.

This is the most important structural pattern for production cron jobs in 2026. Instead of writing your task logic directly in the cron job’s task field, put the full instructions in a markdown file in your workspace and have the cron job read it.

The task field becomes one line:

Read workspace/cron-prompts/queue-processor.md and follow the instructions there.

The file contains everything: the full workflow, the stopping conditions, the output format, the notification logic, the error handling steps. When you want to change how the job behaves, you edit the file. The cron job itself never needs to be touched.

Why this matters beyond cost:

  • Versioning. Your prompt files live in git. You have a full history of every change you made to the job’s logic, when you made it, and why.
  • Auditability. If a job produces unexpected output, you can read the prompt file and see exactly what it was told to do.
  • Composability. Multiple cron jobs can share prompt files. A morning briefing and an evening review might both reference the same task list file and the same format instructions.
  • Faster iteration. Changing a cron job’s behavior through the API requires finding the job ID, making the update, confirming it took. Editing a file and saving takes two seconds.

Look at all my cron jobs. For each one where the task field is longer than one sentence, help me create a corresponding prompt file in workspace/cron-prompts/ and update the task field to point to it.

An idempotent job produces the same result whether it runs once or ten times, and costs almost nothing when there is nothing to do. This is the property you want for every background task.

A queue processor is idempotent if it checks the queue, works through exactly one PENDING task, marks it DONE, and stops. If there are no PENDING tasks, it writes “nothing to process” and stops immediately. It does not matter if it fires ten times in a row. The output is always correct and the cost of empty runs is minimal.

A queue processor is not idempotent if it re-processes tasks that are already DONE, or if it tries to run multiple tasks in a single turn and leaves the queue in an ambiguous state if something goes wrong partway through.

The idempotency test: ask yourself what happens if this job fires twice in 30 seconds. If the answer is "nothing bad, the second run just finds nothing to do and exits cleanly", the job is idempotent. If the answer involves duplicate work, corrupted state, or double-sends, it is not.
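
The idempotent queue-processor behavior can be sketched as a small Python function; the PENDING/DONE statuses follow the convention used in this article, and the in-memory list stands in for the real queue file:

```python
def process_one(tasks):
    """Work through at most one PENDING task, then stop.
    Firing twice in a row is safe: the second run finds nothing and exits."""
    for task in tasks:
        if task["status"] == "PENDING":
            task["status"] = "DONE"  # stand-in for actually doing the work
            return f"completed: {task['name']}"
    return "nothing to process"

queue = [{"name": "a", "status": "DONE"}, {"name": "b", "status": "PENDING"}]
print(process_one(queue))  # completed: b
print(process_one(queue))  # nothing to process -- the double-fire test passes
```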

Look at my queue processor cron job’s task logic. Is it idempotent? If a gateway restart caused it to fire twice in quick succession, would the second run cause any problems? If not, what would need to change?

The most reliable cron setups have many small jobs, each with a single clear responsibility, rather than a few large jobs that do multiple things. This is not just an aesthetic preference. It has practical consequences.

When a large job fails, you do not know which part failed. Was it the queue check? The memory update? The notification? You have to dig through the run log to find out. When a small job fails, you know exactly what broke because there is only one thing it could have been.

When you want to change the schedule for one part of your workflow, a large job forces you to change the schedule for everything else too. Small jobs can each have the schedule that fits their actual needs.

A well-structured cron setup for an active operator might look like this:

  • Morning briefing: cron type, 8am, daily. Summarizes pending work and any overnight events.
  • Queue processor: every type, every 30 minutes. Checks for PENDING tasks, works through one, marks it DONE.
  • Memory cleanup: cron type, Sunday 2am. Reviews and deduplicates stored memories.
  • End-of-day summary: cron type, 9pm, daily. Writes what was accomplished to the daily log.
  • Weekly planning: cron type, Monday 9am. Reviews the week ahead and flags anything that needs preparation.

Five jobs, five responsibilities, no overlap. Any one of them can be modified, paused, or deleted without touching the others.

Look at my current cron jobs. Are any of them doing more than one distinct thing? If so, draft a plan to split them into separate jobs with single responsibilities.

Do this now, before you do anything else. A zombie job audit takes five minutes and almost always finds something worth killing.

A zombie job is any scheduled task that is still running but no longer serves a purpose. Common origins:

  • You were testing a new job configuration and created several versions. The test versions are still running.
  • You rebuilt a job with better logic but forgot to delete the original. Both are firing.
  • You finished a project but left the job it needed running. It fires, finds nothing relevant, exits clean, and still costs tokens every time.
  • A gateway restart created a duplicate of an existing job. OpenClaw does not deduplicate by name.

List all my cron jobs including disabled ones. For each one, show me: the name, the schedule type, the interval or expression, when it was created, when it last ran, how many total runs it has, and the full task text. I want to do a zombie audit. Flag anything that looks redundant, outdated, or like it might be duplicating another job.

For each job you cannot immediately justify, either delete it or disable it. Disabled jobs cost nothing and can be re-enabled if you realize you needed them. Deleted jobs are gone.

After the audit, check your estimated daily token spend from cron jobs:

Based on my remaining active cron jobs, estimate the daily token cost. For each job: how often does it fire per day, roughly how many tokens does a typical run consume, and what is the approximate daily cost assuming I am using deepseek-chat at $0.28 per million input tokens and $1.10 per million output tokens?

A cron job that fires is not the same as a cron job that completed successfully. Jobs can fire, start running, and then fail silently. No error, no notification, just an incomplete result. This is especially common with jobs that depend on external files, make API calls, or rely on a particular state in the workspace.

Two habits that catch silent failures before they pile up:

Output verification in the task prompt. End every cron job prompt with an explicit completion signal: “When you are done, write a one-line summary of what was completed to workspace/cron-logs/YYYY-MM-DD.md.” If the log entry is missing, the job did not finish. This gives you a lightweight audit trail without needing to read full run logs.
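
That completion-signal habit is easy to audit mechanically. A sketch, assuming each job writes one log line starting with its own name (the job names and log format here are hypothetical):

```python
def missing_completions(expected_jobs, log_lines):
    """Jobs with no completion entry in the day's log lines."""
    return [job for job in expected_jobs
            if not any(line.startswith(job + ":") for line in log_lines)]

log = ["queue-processor: processed 1 task", "morning-briefing: sent"]
print(missing_completions(
    ["queue-processor", "morning-briefing", "memory-cleanup"], log))
# ['memory-cleanup'] -- that job never wrote its completion line today
```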

Regular run history checks. Once a week, do a five-minute review:

Show me the run history for all my cron jobs from the last 7 days. For each job, tell me: how many times it was supposed to run, how many times it actually ran, and whether any runs produced empty or error output. Flag anything that looks wrong.

Timezone mismatches are the single most common cause of cron jobs firing at the wrong time. The checklist:

  • Every cron type job should have a tz field set explicitly.
  • Use IANA timezone names (America/New_York, Europe/London, Asia/Tokyo) rather than abbreviations like EST or PST. Abbreviations are ambiguous and not universally supported.
  • If your OpenClaw instance is on a VPS, the server’s local timezone may differ from yours. Do not rely on it. Set timezone explicitly on every job.
  • During daylight saving transitions, jobs with explicit IANA timezones automatically adjust. Jobs expressed in UTC offset (like UTC-5) do not.
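
The DST behavior is checkable with Python's standard-library zoneinfo: the same 8am wall-clock time maps to different UTC offsets in winter and summer, which no fixed UTC offset can reproduce:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
winter = datetime(2026, 1, 15, 8, 0, tzinfo=ny)  # EST in effect
summer = datetime(2026, 7, 15, 8, 0, tzinfo=ny)  # EDT in effect

print(winter.utcoffset() == timedelta(hours=-5))  # True: 8am Eastern is 13:00 UTC
print(summer.utcoffset() == timedelta(hours=-4))  # True: 8am Eastern is 12:00 UTC
```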

List all my cron jobs that use the cron schedule type. For each one, tell me whether a timezone is set, what it is, and what time each job fires in America/New_York. Flag any that are missing a timezone or that are firing at a time that does not match their intended purpose.

Getting the schedule and structure right is the foundation. Building a cron system that handles failures gracefully, chains tasks with dependencies, passes output between jobs, and recovers autonomously when something goes wrong requires a full architecture. That is what the complete guide covers.

Complete guide

Queue Commander

The full autonomous task system: scheduling patterns, file-based prompts, idempotent job design, error handling, task chaining, zombie audits, and failure recovery. Drop it into your agent and it configures itself.

Get it for $67 →

Advanced scheduling patterns

The dependency chain pattern

Some tasks must run in sequence but at different frequencies. A research task runs weekly; a summary task runs daily but needs the research output. The naive approach is to run research and summary on the same daily schedule, wasting API calls six days out of seven. The better pattern is a gating check: the summary task first reads a state file to see if new research is available, and only runs the summary if it finds fresh input.

I have a research task that runs weekly and a summary task that should only run after new research is available. Create a gating pattern: the summary job checks for a flag file that the research job writes when it completes. The summary only runs if the flag exists and is newer than 24 hours. Show me both job payloads.
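
The gating check reduces to a file-existence and file-age test. A Python sketch, with a temporary directory standing in for the real workspace and a hypothetical flag filename:

```python
import os
import tempfile
import time

def research_is_fresh(flag_path, max_age_hours=24):
    """True if the research job's flag file exists and is recent enough."""
    if not os.path.exists(flag_path):
        return False
    return time.time() - os.path.getmtime(flag_path) < max_age_hours * 3600

# Demo with a temporary directory standing in for the real workspace.
with tempfile.TemporaryDirectory() as d:
    flag = os.path.join(d, "research-complete.flag")
    print(research_is_fresh(flag))  # False: no flag yet, the summary job skips
    open(flag, "w").close()         # the research job writes the flag on completion
    print(research_is_fresh(flag))  # True: fresh input, the summary job runs
```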

The time-based model routing pattern

API costs are the same around the clock, but your sleeping hours are a good window for heavier tasks that are not time-sensitive. One pattern: route background jobs to a local model during active hours, so they neither burn paid tokens nor compete with your interactive sessions, and let overnight jobs use the API model when nothing else is running.

Set up time-based model routing for my cron jobs. Tasks scheduled between 10pm and 6am can use the API model since I am not actively using the agent. Tasks during waking hours should use the local model to avoid competing with my interactive usage. Show me how to implement this in cron payloads.

The burst-and-rest pattern

Some workloads generate bursts of activity followed by quiet periods. A content pipeline might process 10 articles in a row, then sit idle for a week. Rather than scheduling a fixed interval that is too fast for idle periods, use a queue-based pattern where the cron fires frequently but only does work if items are in the queue.

Set up a burst-and-rest pattern for my content pipeline. The cron fires every 15 minutes, reads the queue, and only processes work if items are waiting. If the queue is empty, it exits immediately with a short “queue empty” log entry. Show me the payload and the queue check logic.

Monitoring cron job health

Once jobs are running, they need monitoring. Silent failures are the main risk: a job that logs nothing has either not run or has been failing silently.

Set up a watchdog job that runs once per hour. It checks whether each expected job has run in the last N hours (based on its configured interval). If any job has not produced a log entry in twice its expected interval, send me a Telegram alert with the job name and the time of its last run.

The last-run timestamp pattern

Each job writes its completion timestamp to a state file. The watchdog reads state files rather than log files, which is simpler and less fragile than log parsing.

Update my existing cron jobs to write a completion timestamp to workspace/cron-state/[job-name]-last-run.txt after each successful run. Then create the watchdog job that reads these files. Show me the updated payloads.
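
The watchdog rule is plain timestamp arithmetic: a job is late when now minus its last run exceeds twice its configured interval. The job names and intervals below are examples:

```python
def late_jobs(last_runs, intervals_s, now):
    """Jobs whose last run is older than twice their configured interval (seconds)."""
    return [job for job, last in last_runs.items()
            if now - last > 2 * intervals_s[job]]

now = 1_000_000  # any reference clock, in seconds
last_runs = {"queue-processor": now - 1_000, "morning-briefing": now - 200_000}
intervals = {"queue-processor": 1_800, "morning-briefing": 86_400}
print(late_jobs(last_runs, intervals, now))  # ['morning-briefing'] missed its window
```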

Common questions

How many cron jobs is too many?

There is no hard limit, but each job fires an agent turn that uses context and may use API tokens. At 20+ active jobs with overlapping or near-overlapping schedules, context contention becomes a real issue: multiple jobs may fire within the same minute and compete for the same session resources, causing delays or failures. For dense schedules, use a queue pattern instead: one cron fires every five minutes and picks the next task from a list rather than having 20 individual crons.

My cron job runs fine manually but fails when scheduled. What is usually wrong?

Path issues. When a job runs manually from your session, it inherits your session context including the working directory. When it runs as a scheduled cron, it starts fresh with defaults. If the job references relative paths, they resolve against whatever the default working directory is, which may not be where you expect. Always use absolute paths in cron job payloads.

How do I stop a cron job from running while I am actively working?

Two options. First, disable the job before you start work and re-enable when done. Second, add a check at the start of the job payload: “If the agent is currently in an active session (has received a message in the last 30 minutes), skip this run and log ‘deferred – active session’.” This automatic deferral prevents background jobs from interrupting interactive work without requiring manual job management.

Can a cron job trigger another cron job?

Not directly via the scheduler, but a job payload can create or update a queue file that another job reads. This is the recommended pattern for job chaining: Job A completes and writes a flag file, Job B checks for the flag on its next scheduled run and processes if found. Avoid circular dependencies where A triggers B which triggers A.

My job was running correctly for months and then started failing. Nothing changed in the config. What should I check?

External dependencies change even when your config does not. Check: whether an API the job depends on updated its response format, whether a file the job reads has grown large enough to hit a context limit, whether Ollama updated a local model that changed its output format, and whether the working directory contains more files than before (affecting file glob operations). Most “nothing changed” failures are actually external changes the job was not designed to handle.

Job payload templates that work correctly out of the box

These templates cover the most common automation patterns. Copy them directly and customize the file paths, notification targets, and intervals for your specific setup.

Simple file-read-and-notify cron job

Read /home/node/.openclaw/workspace/HEARTBEAT.md. If there are any active tasks listed, send me a Telegram summary of the first three. If the file is empty or has no active tasks, log “heartbeat: no active tasks” to workspace/cron-activity.log and exit.

Queue processor cron job

Read workspace/QUEUE.md. Find the first task with status PENDING. If found: update its status to IN_PROGRESS, perform the task, update status to DONE, and log completion to workspace/cron-activity.log. If no PENDING tasks exist, log “queue check: empty” and exit.

Weekly activity summary cron job

Every Sunday at 8pm: read all entries in workspace/cron-activity.log from the past 7 days. Count completions and failures per job. Send me a weekly summary to Telegram: which jobs ran cleanly, which had failures, the failure reasons if logged, and the total API call count for the week. Then archive the log to workspace/cron-logs/week-YYYY-MM-DD.log and clear the active log file to start the new week fresh.

Daily digest cron job

Read workspace/memory/YYYY-MM-DD.md (today’s date). Summarize the five most important items in three sentences each. Send the summary to my Telegram. Log the send to workspace/cron-activity.log.

Health watchdog cron job

Check workspace/cron-state/ for last-run timestamps. For each expected job, verify the last run was within twice its configured interval. For any job that missed its window, send me a Telegram alert: “WATCHDOG: [job name] has not run since [timestamp]”. Log all results to workspace/cron-activity.log.

Pre-deployment checklist for new cron jobs

Before enabling any new cron job, run through this checklist.

  1. Task is well-defined and completable in a single agent turn.
  2. Interval is appropriate for the use case (not too frequent, not too infrequent).
  3. All file paths in the payload are absolute.
  4. The job logs its completion to the activity log.
  5. If the job fails silently, it still produces a log entry indicating the exit state.
  6. The job has been tested manually at least once before scheduling.

More common questions

My cron job ran at the wrong time. What happened?

Almost always a timezone issue. Cron expressions default to UTC. If you are not in UTC, set the timezone explicitly using an IANA name like America/New_York. Check with: list my cron jobs and show the timezone for each one that uses the cron schedule type.

My queue processor ran twice in quick succession. Why?

Usually a zombie duplicate. Two “every” jobs with the same task, one from a previous session that was never deleted. List all cron jobs including disabled ones and look for jobs with overlapping tasks. Delete or disable the one you do not need.

How do I stop a cron job without losing the configuration?

Disable it rather than deleting it. Ask your agent: disable the cron job named [name]. It stops firing until you re-enable it. The configuration, schedule, and task text are preserved. Deleting is permanent.

Can I run a cron job manually to test it?

Yes. Ask: trigger my cron job named [name] right now. It fires immediately regardless of schedule. This is the right way to test that your task logic works before waiting for the scheduled time to arrive.

How much do cron jobs actually cost per month?

It depends on frequency, model, and task length. A queue processor firing every 30 minutes with a short task and a cheap model (DeepSeek at $0.28/M input) costs roughly $0.30-0.50 per month. The same job on a flagship model (Claude Sonnet at $3/M input) costs 10x more. The biggest cost drivers are firing frequency and task description length.

What is the difference between cron type and every type?

Cron type fires at specific clock times using a cron expression. It knows what 8am means. Every type fires on a repeating interval from whenever it was last started. It does not know what time it is. Use cron for time-anchored tasks, every for background polling where exact timing does not matter.

From the same series

My OpenClaw cron job ran twice, or never ran at all

When the schedule behaves unexpectedly, here is how to diagnose exactly what went wrong.

Read →

My OpenClaw agent failed overnight and I didn’t find out until morning

How to set up alerting so failures surface before you wake up.

Read →

Task B ran before Task A finished and everything broke

How to add dependencies so tasks run in the right order every time.

Read →