How to build a weekly digest with OpenClaw that reads like a human wrote it

Most automated OpenClaw weekly digests fail at the prompt-engineering stage: the output is immediately recognizable as machine-generated, with bullet points, generic headers, and the same structure every week regardless of what actually happened. They get skimmed and ignored. This article covers how to build a weekly digest cron job in OpenClaw that actually sounds like something worth reading every week, because the prompt is explicitly designed to make it sound that way.

TL;DR

  • The content problem: Generic prompts produce generic output. Specific prompts produce specific output.
  • The format problem: Bullet lists feel mechanical. One or two strong sentences per item feel human.
  • The sourcing problem: Digests that read actual data beat digests that make claims without evidence.
  • The cadence problem: A digest that changes week to week based on what actually happened is read. One that feels the same every week is skipped.
  • The fix: Build the prompt in layers: data sources first, framing second, voice rules third.

Throughout this article you will see indented blocks like the ones below. Each one is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal or edit any files manually.

Why most automated digests fail

A weekly OpenClaw cron digest fails when the reader stops opening it. That happens faster than most people expect, usually within three or four weeks of the digest going live. The reader opens it the first time out of curiosity, the second time to verify it works, and stops after the third when they realize each issue looks identical regardless of what actually happened that week.

The root cause is almost always the prompt. A digest that reads well requires explicit editorial decisions the model will not make on its own. A prompt that says “summarize my week” produces a structural template masquerading as a summary: the same headers, the same bullet count, the same level of specificity every single time. The digest stops feeling like communication and starts feeling like a status report format that fills itself in automatically.

The fix requires rethinking what the prompt is actually doing. It is not generating a summary. It is making editorial decisions: what mattered this week, how to lead with the most important thing, which items deserve a sentence and which deserve a paragraph, what the week’s narrative arc actually was. A good digest prompt encodes those editorial decisions explicitly rather than leaving the model to guess.

Step 1: identify the actual data sources

A digest that reads real data is more specific than one that works from memory. Specificity is what makes something sound human. Before writing a single word of the prompt, decide what files and outputs the digest will actually read.

Common data sources for an OpenClaw weekly digest:

  • Article queue file: which articles were published this week, which are still pending.
  • Cron state files: what the daily health checks found over the past seven days.
  • Git log: what changed in the workspace this week.
  • API spend: how much was spent on which models this week vs. last week.
  • Memory audit: new memories stored, stale memories found, current memory count.
  • Server metrics: average disk and memory usage over the week.

List all files in workspace/cron-state/ that were modified in the last 7 days. Also show me the last 20 git commits. I want to understand what data is available for a weekly digest.

The digest will only be as specific as the data it reads. If the data sources are rich, the digest can be specific. If there are no data sources, the agent will fill in the gaps with generic claims that sound plausible but mean nothing. A prompt that says “summarize the week” with no data reads produces a summary of nothing in particular, stated confidently.
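The data inventory step above can also be sketched outside the agent. Here is a minimal Python sketch of what "list files modified in the last 7 days, plus recent commits" amounts to; the directory path mirrors the article's example and the function names are hypothetical:

```python
import subprocess
import time
from pathlib import Path

def recent_files(directory: str, days: int = 7) -> list:
    """Return files under `directory` modified within the last `days` days."""
    cutoff = time.time() - days * 86400
    root = Path(directory)
    if not root.is_dir():
        return []  # first run: the state directory may not exist yet
    return sorted(
        str(p) for p in root.rglob("*")
        if p.is_file() and p.stat().st_mtime >= cutoff
    )

def recent_commits(repo: str, count: int = 20) -> list:
    """Return the last `count` one-line commit messages from `repo`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--oneline", f"-{count}"],
        capture_output=True, text=True,
    )
    return out.stdout.splitlines()
```

Running `recent_files("workspace/cron-state/")` by hand before writing the prompt tells you whether the digest will have anything to read at all.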

Step 2: frame the prompt around narrative, not structure

The single most important change you can make to a digest prompt is replacing structure-first framing with narrative-first framing. Structure-first prompts produce outlines. Narrative-first prompts produce prose.

Structure-first (produces robotic output)

Generate a weekly digest with the following sections: Summary, Articles Published, Server Health, API Spend, Next Week Goals. Use bullet points for each section.

This prompt produces the same template every week. The model fills in the slots. Nobody reads it after week two. It is a formatting exercise, not communication.

Narrative-first (produces readable output)

Read the article queue, the daily cron state files from the past 7 days, and the last 20 git commits. Then write a weekly digest that leads with the single most important thing that happened this week, not a summary of everything. After the lead, cover the two or three things that actually moved this week; drop anything that was the same as every other week. End with one concrete thing that needs to happen before next Monday. Write in direct prose, not bullet points. No headers. Maximum 400 words.

The narrative-first version forces the model to make editorial decisions. It cannot produce the same structure every week because the lead has to come from what actually happened, not from a fixed slot labeled “Summary.”

Step 3: add explicit voice rules

Without voice rules, the model defaults to a tone that is slightly formal, slightly hedged, and uniformly the same. Adding three or four voice rules transforms the output from something that sounds like a report to something that sounds like a person.

Effective voice rules for a weekly digest:

  • No em dashes. They signal machine writing more reliably than almost any other punctuation choice.
  • No qualifiers like “it is worth noting” or “it is important to highlight.” Just say the thing.
  • Lead with the outcome, not the activity. Not “three articles were published” but “the Queue Commander section now has five articles, up from two last Monday.”
  • One sentence maximum for anything routine. If the server was fine, one sentence. If something actually broke, more.
  • Use actual numbers. Not “several articles” but “four articles.” Not “disk usage is stable” but “disk at 47%, same as last week.”

Write the digest following these voice rules: no em dashes, no qualifiers, lead with outcomes not activities, one sentence for anything routine, actual numbers everywhere. The digest should read like a sharp colleague wrote it quickly, not like a report generator filled in a template.
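These voice rules work partly because they are mechanically checkable. A minimal Python sketch of the kind of lint pass you (or a self-evaluation step) could run over a draft; the qualifier list and function name are illustrative assumptions, not part of OpenClaw:

```python
import re

# Phrases the voice rules ban outright (extend to taste).
QUALIFIERS = ("it is worth noting", "it is important to", "notably,")

def voice_violations(text: str) -> list:
    """Return a list of voice-rule violations found in a digest draft."""
    problems = []
    if "\u2014" in text or "\u2013" in text:  # em dash or en dash
        problems.append("em dash or en dash found")
    lowered = text.lower()
    for q in QUALIFIERS:
        if q in lowered:
            problems.append(f"qualifier: {q!r}")
    if not re.search(r"\d", text):
        problems.append("no numbers anywhere in the draft")
    return problems
```

An empty list means the draft passes; anything else names the exact rule it broke.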

The complete digest prompt

Here is a full cron job prompt that combines data sourcing, narrative framing, and voice rules into a single cohesive digest:

Create a cron job that runs every Monday at 8am America/New_York. Name: “Weekly digest”. Model: ollama/phi4:latest. Delivery: announce to Telegram [your-chat-id]. Prompt: “Step 1: Read workspace/pipeline/ARTICLE-QUEUE.md and count DONE vs PENDING articles. Note which articles moved to DONE in the last 7 days. Step 2: Read workspace/cron-state/health-check.json and workspace/cron-state/weekly-digest.json if they exist. Note any metrics that changed by more than 10% since last week. Step 3: Run: git -C /home/node/.openclaw/workspace log --oneline --since='7 days ago' to get this week’s commits. Step 4: Write a weekly digest of 300 to 400 words. Lead with the single most significant development this week, not a summary: the one thing that moved. Then cover two or three other items that actually changed. End with one specific thing that needs to happen before next Monday. Voice rules: no em dashes, no qualifiers, outcomes not activities, actual numbers, direct prose. Step 5: Write workspace/cron-state/weekly-digest.json with today’s article counts and a one-paragraph summary of this week for next week’s comparison.”

This prompt is longer than most, but every section earns its place. The data read steps ensure the agent has real numbers to work with. The voice rules prevent the default robotic tone. The state write step enables comparison next week.

Calibrating length for your reading habit

The right length for a weekly digest depends on one thing: how much of it you actually read. A 1000-word digest that gets skimmed is worse than a 300-word digest that gets read fully. The goal is the shortest possible digest that covers everything that actually mattered.

Three length guidelines:

  • Under 300 words: Use when the digest is a quick status check and not a decision-making input. Good for stable, low-activity weeks.
  • 300 to 500 words: The sweet spot for most operators. Enough room for three to five substantive points without becoming a reading task.
  • Over 500 words: Only when something actually significant happened that week: a launch, a breaking change, a major decision. Do not default to this length every week or the longer-than-usual length loses its signal value.

Update the weekly digest prompt: add a length rule. If fewer than 3 significant things happened this week, cap the digest at 250 words. If 3 to 5 significant things happened, aim for 350 words. If more than 5, aim for 450 words and flag that it was a heavy week in the first sentence.
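The length rule above is a simple step function, which is worth seeing spelled out. A sketch with a hypothetical function name, mapping the count of significant items to a word cap and a heavy-week flag:

```python
def length_rule(significant_count: int) -> tuple:
    """Return (word cap, whether to flag a heavy week in the first sentence)."""
    if significant_count < 3:
        return 250, False   # quiet week: keep it short
    if significant_count <= 5:
        return 350, False   # the normal sweet spot
    return 450, True        # heavy week: say so up front
```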

Making the digest adapt to what actually happened

The biggest difference between a digest that feels alive and one that feels mechanical is whether the content adapts to the week’s actual events. A static prompt produces static content. An adaptive prompt produces a digest that the reader notices is different from last week.

Two techniques for adaptive content:

The prior-week baseline

Reading last week’s digest before writing this week’s forces the model to address what changed. Without the baseline, every week’s content is generated in isolation. With it, the model can explicitly compare and highlight what moved.

Before writing the digest, read workspace/cron-state/weekly-digest.json. Use last week’s summary as context. For each item you plan to cover this week, check whether it also appeared last week. If it did and nothing changed, skip it or mention it in one clause rather than a full paragraph. Only expand on items that are different from last week.

The significance threshold

Instruct the model to only cover items above a significance threshold. Items that were the same as last week do not make the cut. Items where a number crossed a threshold, a new thing shipped, or something broke get coverage. Everything else gets silence or a one-liner.

Apply a significance filter before writing. For each potential topic: ask whether anything changed by more than 10% vs. last week, whether something shipped that was not there last Monday, or whether something broke that was working. If none of these are true for a topic, it does not get its own paragraph; at most a one-sentence mention.
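The significance filter reduces to one boolean test per topic. A minimal sketch, assuming a hypothetical function name and the 10% threshold from the prompt above:

```python
from typing import Optional

def is_significant(this_week: Optional[float], last_week: Optional[float],
                   shipped: bool = False, broke: bool = False) -> bool:
    """A topic earns its own paragraph only if it passes one of three tests:
    a >10% metric change, something new shipped, or something broke."""
    if shipped or broke:
        return True
    if this_week is None or last_week is None:
        return False  # no baseline yet (first run): nothing to compare
    if last_week == 0:
        return this_week != 0  # any movement from zero counts
    return abs(this_week - last_week) / abs(last_week) > 0.10
```

Anything that returns False gets, at most, a one-sentence mention.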

Formatting for Telegram delivery

Telegram renders markdown in messages from bots. Bold, italic, code blocks, and links all work. What does not work well is heavy nested structure or very long paragraphs that wrap awkwardly on mobile screens.

For a weekly digest delivered to Telegram, the formatting sweet spot is:

  • No headers (they render as bold text on Telegram and look like you are shouting).
  • Paragraphs of two to four sentences each with a blank line between them.
  • Bold for the single most important number or fact in each paragraph.
  • A clear ending that does not trail off; the last sentence should be the action or the headline.

Format the digest for Telegram: no headers, paragraphs of 2 to 4 sentences with blank lines between them, bold the key number in each paragraph, end with a clear action sentence. Do not exceed 4000 characters total (Telegram message limit).
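The 4000-character cap is worth enforcing mechanically, since Telegram rejects messages over its limit. A sketch of a paragraph-aware splitter (the function names are hypothetical; note a single paragraph longer than the cap would still overflow and would need sentence-level splitting):

```python
TELEGRAM_LIMIT = 4000  # stay safely under Telegram's 4096-character cap

def fits_telegram(text: str) -> bool:
    return len(text) <= TELEGRAM_LIMIT

def split_for_telegram(text: str) -> list:
    """Split a digest on blank lines into chunks that each fit one message."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = current + "\n\n" + para if current else para
        if len(candidate) <= TELEGRAM_LIMIT:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```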

Improving the digest over time

The first week’s digest will not be the best version. The prompt needs calibration based on what the actual output looks like. Here is how to iterate efficiently.

After the first scheduled run, trigger the job manually and read the output critically. Ask three questions: Did it lead with the right thing? Did it cover anything that was not actually significant? Did it miss anything that was? The answers tell you exactly which part of the prompt to adjust.

Trigger the weekly digest cron job immediately and show me the full output. Then critique it: did it lead with the most important development? Did any item get more coverage than it deserved? Did it miss anything significant from the last 7 days?

The agent will both generate the digest and critique it in the same session, which is the fastest path to improving the prompt. Adjust one thing at a time: the lead framing, the significance filter, or the voice rules, but not all three at once. Each adjustment changes the output noticeably and you want to know which change caused which improvement.

What the model actually needs to write well

A language model writing a weekly digest faces a specific challenge: it has no intrinsic preference for what to lead with, no sense of what is significant relative to last week, and no default standard for when something deserves one sentence vs. a paragraph. Every one of those decisions has to be made explicit in the prompt, or the model will substitute a generic heuristic.

The generic heuristic is usually: cover each category in order, one paragraph each, with roughly equal weight. That produces the template-digest problem. The fix is to replace the generic heuristic with specific instructions for each decision the model would otherwise guess at.

The four decisions that define digest quality:

  1. What to lead with. Specify explicitly: “lead with the single most significant development of the week, defined as the thing that most changed the state of the project.” Without this, the model leads with whatever category appeared first in its training data patterns for “weekly digest.”
  2. What to include vs. skip. Without a significance filter, the model covers everything it reads, regardless of whether it changed. Specify: “only include items where something measurably changed from last week.”
  3. How long each item gets. Specify: “one sentence for anything routine, one paragraph for anything significant, two paragraphs maximum for anything critical.” Without this, the model allocates roughly equal space to each item.
  4. How to end. Without an ending instruction, the model often ends with a vague forward-looking statement that means nothing. Specify: “end with one concrete next action, not a general statement about the upcoming week.”

Rewrite my weekly digest prompt to make all four editorial decisions explicit: what to lead with, what to include vs. skip, how long each item gets, and how to end. Show me the revised prompt before applying it.

Sourcing real numbers

The fastest way to make a digest sound specific and human is to make sure every sentence that could have a number does have a number. Generic claims like “disk usage is stable” and “several articles were published” are the fingerprint of a prompt that did not require the model to read actual data.

For each category the digest covers, identify the command or file that produces the real number, and include that read step in the prompt. The model will use whatever it finds.

Read workspace/pipeline/ARTICLE-QUEUE.md. Count: (1) articles with status DONE, (2) articles with status PENDING, (3) articles that moved to DONE in the last 7 days. Report these three numbers exactly.

Run: df -h / | tail -1 to get disk usage. Run: free -m | grep Mem to get memory. Run: uptime to get load average. Report all three as single numbers.

Run: git -C /home/node/.openclaw/workspace log --oneline --since="7 days ago" | wc -l for this week's commit count. Also: git -C /home/node/.openclaw/workspace log --oneline --since="7 days ago" | head -5 for the 5 most recent commit messages.

With these three read steps in the prompt, the model has actual numbers for every major category. It cannot fall back on vague language because the specific data is sitting directly in its context window.
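Turning those command outputs into clean numbers is the fiddly part, and it is worth knowing what the parsing looks like. A sketch of two helpers (hypothetical names) that extract the disk percentage from a `df -h / | tail -1` line and the commit count from `wc -l` output:

```python
def parse_disk_pct(df_line: str) -> int:
    """Extract the use% column from a `df -h / | tail -1` output line."""
    # e.g. "/dev/sda1  40G  18G  20G  47% /"
    for field in df_line.split():
        if field.endswith("%"):
            return int(field.rstrip("%"))
    raise ValueError("no percentage field in df output")

def parse_commit_count(wc_output: str) -> int:
    """Extract the integer from `git log --oneline ... | wc -l` output."""
    return int(wc_output.strip())
```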

Testing the digest before scheduling it

Before setting the digest on a weekly schedule, run it manually and ask the model to show its data reads before writing any prose. If the model cannot report specific numbers from specific files, the data sourcing is broken and the prose will be generic regardless of the voice rules.

Run the digest prompt but before writing any prose, show me a data summary: the exact numbers you found from each read step. I want to verify the data sourcing is working before evaluating the writing quality.

Fix any gaps in the data sourcing first. Once the numbers are real, improving the prose is straightforward. Improving the prose when the numbers are invented is futile, because the specificity that makes prose feel human has to come from somewhere real.

Common prompt mistakes and how to fix them

These are the five most common mistakes in weekly digest prompts, in order of how much they degrade output quality.

Mistake 1: generic data reads without specific commands

“Check server health” without specifying the command. The model makes a reasonable guess, often running a command that produces more output than it needs, and then summarizes that output generically. Fix: specify the exact command (e.g. df -h / | tail -1) and the exact field to extract (e.g. “the percentage in column 5”).

Mistake 2: equal weighting across all categories

The prompt lists categories without relative importance. The model gives each roughly equal space, even if only one category actually changed this week and the rest were identical to last week. Fix: add an explicit priority ranking and state maximum word counts per category.

Mistake 3: past-tense summary voice

Output sounds like a status report: “this week, three articles were published.” It is technically correct and completely useless as communication. Fix: write in present tense where possible, lead with the implication and current state. “The Queue Commander section now has eight articles” instead of “three were published this week.”

Mistake 4: no ending instruction

The digest ends with generic filler. Fix: “End with exactly one sentence naming the most important thing that needs to happen before next Monday. Not a general statement. A specific task.”

Mistake 5: missing first-run handling

The state file does not exist on the first run and the model errors or produces a no-comparison output with no explanation. Fix: “If the state file does not exist, skip comparisons and note that this is the first run and comparisons will begin next week.”
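The first-run fix is a guard, not a prompt trick, and the same guard applies if the state file is ever corrupted mid-write. A sketch with a hypothetical function name, using the state file path from the article's example prompt:

```python
import json
from pathlib import Path

def load_last_week(state_path: str):
    """Return last week's state dict, or None when there is no usable baseline."""
    path = Path(state_path)
    if not path.exists():
        return None  # first run: skip comparisons, note baseline starts next week
    try:
        return json.loads(path.read_text())
    except json.JSONDecodeError:
        return None  # corrupt state file: treat as a fresh start rather than crash
```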

Review my weekly digest cron job prompt and identify which of these five mistakes it contains. For each one found, show the specific line and the fix.

Scheduling and delivery timing

A Monday morning digest that arrives at 7:30am before the work day starts is useful context for planning the week. The same digest arriving at 2pm Monday has competed with everything that already happened that day.

Set the schedule for 15 to 30 minutes before you typically start working. If you also run daily health checks at 8am, stagger the weekly digest to 7:45am. Two messages arriving simultaneously from the same bot looks like a glitch and quietly undermines the reader’s trust that the digest is a reliable signal rather than background noise.

Create the weekly digest cron job with schedule: every Monday at 7:30am America/New_York. Use ollama/phi4:latest. Deliver to Telegram [your-chat-id] with mode announce.

Making the voice adaptive to what happened

Voice rules prevent robotic output. But voice rules alone do not make a digest worth reading. The other half is making the tone shift based on the content. A week where everything went perfectly should feel different from a week where the server had problems or an article ran into issues. Static voice rules produce static tone. Adaptive voice rules change the register based on what the model found.

Three adaptive tone instructions worth adding to any digest prompt:

  • Good week: Shorter sentences, confident phrasing, concrete outcomes. “Ten articles done, three queued, server clean.”
  • Mixed week: Acknowledge the problem directly in one sentence, then move to what is next. Do not explain it to death.
  • Bad week: Lead with what went wrong, one sentence only, then the recovery plan. Do not bury the problem in paragraph three.

Before writing the digest, classify this week as: good (everything on track or ahead), mixed (one significant problem but overall progress), or bad (major blocker or regression). Then adjust the tone accordingly: good weeks get shorter sentences and confident phrasing, mixed weeks acknowledge the problem once and move on, bad weeks lead with the problem and the recovery path.

Adding this week-classification step takes the digest from a summary to something that communicates situational awareness. The reader knows not just what happened but how to feel about it. A reader who gets the same confident tone whether the week was great or terrible stops trusting the digest. A reader who gets accurate emotional calibration keeps opening it.

Synthesizing from multiple files without losing coherence

A digest that reads five different data sources and then writes five separate paragraphs for each source is a list, not a digest. The synthesis step is where the value is: finding the through-line across sources and building a coherent narrative from it.

Instruct the model explicitly to synthesize rather than enumerate:

After reading all data sources, find the through-line. Is there a theme connecting what happened this week? For example: everything was about shipping content, or everything was about stability, or there was a clear tension between two things. Lead with that theme in one sentence, then cover the specifics. Do not write one paragraph per data source; write one narrative that happens to draw on multiple sources.

This instruction changes the output from a list of category summaries to something that has a point of view. The reader comes away knowing not just what happened but what it meant and how to interpret it in the context of the week as a whole.

Test the synthesis step by running the digest prompt with this addition: after gathering all data, write one sentence summarizing the week’s theme before writing the full digest. That sentence should not be a summary of activities but a characterization: “This was a shipping week” or “This was a stabilization week” or “This was an off week and here is why.”

Building a self-evaluation loop into the prompt

The best digest prompts include a check step that runs after the writing step but before delivery. The agent generates the draft, scores it against the quality criteria, and rewrites if any criterion fails. This catches the worst outputs without requiring manual review every week.

After writing the digest draft, score it on four criteria: (1) Does every paragraph contain at least one specific number? (2) Does the lead sentence identify the most significant development, not just the first item on a list? (3) Are all em dashes and qualifiers removed? (4) Does the ending name a specific concrete task? If any score is below passing, revise the specific failing section and deliver the revised version.

This self-evaluation costs an extra few seconds of processing time and is worth it. The model catches its own generic language, missing numbers, and trailing endings before they reach your Telegram. Over time, the prompt calibrates to the model’s tendencies and the revision rate drops as the first drafts improve.
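The four criteria in the self-evaluation step are all checkable without a model. A sketch of what the rubric looks like as code (function names are illustrative; the ending check here is a simple proxy for "names a concrete task"):

```python
import re

def score_draft(draft: str) -> dict:
    """Pass/fail checks mirroring the four digest quality criteria."""
    paragraphs = [p for p in draft.split("\n\n") if p.strip()]
    return {
        "every_paragraph_has_number": all(re.search(r"\d", p) for p in paragraphs),
        "no_em_dashes": "\u2014" not in draft,
        "no_qualifiers": "it is worth noting" not in draft.lower(),
        "ends_cleanly": draft.rstrip().endswith("."),
    }

def needs_revision(draft: str) -> bool:
    """True if any criterion fails, meaning the draft should be rewritten."""
    return not all(score_draft(draft).values())
```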

Building a sustainable weekly rhythm

A weekly digest only works if you actually read it. The best way to ensure you read it is to make sure it earns your attention every week. Three practices that sustain the habit:

  • Reply to the digest. When something in the digest prompts a question or an action, reply to the message. This creates a feedback loop and makes the digest feel like a conversation rather than a broadcast.
  • Update the prompt quarterly. Your workflow changes. The digest prompt should change with it. Every three months, re-read the prompt, remove anything that no longer applies, and add any new data sources that have become relevant.
  • Let the digest drive cron audits. Once a month, add a line to the digest prompt: “This week, also list all cron jobs and flag any that have not run in their expected cycle.” This keeps the cron health visible without requiring a separate audit job.

Add a monthly audit step to my weekly digest: on the first Monday of each month only, include a section listing all cron jobs with their last run time and flag any that have not run in more than their scheduled cycle. Skip this section on all other Mondays.

The monthly audit section is what turns the weekly digest from a passive status report into an active tool for maintaining system health. The first Monday of each month, you get both the week’s summary and a full cron health review in one combined message. The other three or four Mondays, you get the clean weekly summary with no extra overhead. One job, two functions, and the monthly audit never slips because it is wired directly into the digest instead of being a separate thing to remember to do.
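The "first Monday of each month" condition the prompt relies on is a one-line date check, which the agent evaluates at run time. For reference, a sketch with a hypothetical function name:

```python
import datetime

def is_first_monday(day: datetime.date) -> bool:
    """True when `day` is the first Monday of its month (the audit week)."""
    # weekday() == 0 is Monday; the first Monday always falls on day 1-7.
    return day.weekday() == 0 and day.day <= 7
```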

What a well-built digest actually looks like

Reading the prompt instructions is easier once you can see the target. Here is an example of what a well-built OpenClaw weekly digest looks like vs. the generic version the same data would produce without the prompt engineering.

Generic output (without prompt engineering)

Weekly Digest - Week of March 17

Summary:
This week several tasks were completed and the system remained stable.

Articles Published:
- Multiple articles were published this week.
- Several articles are still pending.

Server Health:
- The server is running normally.
- Disk usage is stable.

Next Steps:
- Continue working on remaining articles.
- Monitor server performance.

This is what a structure-first prompt with no data sourcing, no voice rules, and no significance filter produces. It is technically a weekly digest. No one reads it twice, and after a few weeks no one even opens it.

Specific output (with prompt engineering)

This was a shipping week.

Queue Commander is now at 8 of 12 articles, up from 4 last Monday. Four articles published in five days is the fastest pace since the project started. Cheap Claw is complete at 6 of 6.

Server held steady: disk at 47%, memory at 62%, no cron failures. Nothing worth flagging.

Before Monday: finish A031 and A032, which clears the Queue Commander section entirely.

Same underlying data. Completely different reading experience. The second version is the output you want to build toward: it takes 20 seconds to read, has one specific number per item, leads with the thing that actually mattered, and ends with a concrete task. The first takes twice as long to read and communicates nothing that was not already obvious.

Show me the current weekly digest output for this week. Then rewrite it to match the second example above: one through-line sentence, three items maximum with specific numbers, ending with one concrete task before Monday. Maximum 200 words.

Frequently asked questions

These questions cover the practical issues that come up once the digest is running and you start tuning it for your specific workflow.

How do I make the digest cover different things each week without rewriting the prompt?

The data sources do this automatically. If the prompt reads the actual article queue, git log, and cron state files, the output changes each week because the inputs change. The prompt stays the same; the content adapts to what is in those files. If the content is not changing week to week, the data sources are too generic or the significance filter is not aggressive enough.

The digest is always 400 words even when nothing significant happened. How do I fix that?

Add an explicit empty-week clause: “If fewer than two significant things happened this week (measured by the significance filter above), write a digest of no more than 150 words. Do not pad it to hit a word count.” The model will comply reliably if the instruction is explicit.

Can I get the digest in email instead of Telegram?

Yes, if you have an email sending channel configured in OpenClaw. Change the delivery config channel to your email channel and set the recipient to your email address. The prompt and formatting rules stay the same except email supports HTML, so you can use richer formatting than Telegram markdown allows.

The digest reads the git log but does not understand what the commits mean. How do I help it?

Improve your commit message discipline. Commits written as “fix bug” give the digest nothing to work with. Commits written as “A031 published ID=335: Queue Commander weekly digest article” give the digest enough context to interpret the week’s work. The digest output quality is directly proportional to the signal quality in the data it reads.

I want the digest to cover things outside the workspace, like what happened in the OpenClaw community. How do I add that?

Add a web search step to the prompt before the writing step. Instruct the agent to search for OpenClaw news, product updates, or community discussion from the past 7 days, and incorporate anything relevant into the digest. This turns the digest from an internal status report into something that includes external context, which is harder to replicate and more valuable to read.

The model keeps ignoring my voice rules. What is happening?

Voice rules that appear at the end of a long prompt get deprioritized. Move them to immediately before the write step, not at the end of the overall prompt. Also, make them specific and checkable: “no em dashes” is specific and the model can verify it. “Write in a human voice” is vague and the model will interpret it however it wants.

Can I have the digest rate itself after writing?

Yes. Add a final step: “After writing the digest, score it on three dimensions (1-10 each): specificity (does every claim have a number?), significance filtering (does every item pass the threshold?), voice (no em dashes, no qualifiers, outcomes not activities?). If any score is below 7, rewrite the digest and rescore before delivering.” This self-evaluation loop catches the worst outputs before they reach you, which matters most in the first few weeks when the prompt is still being tuned.


Go deeper

  • Cron: How to pass output from one OpenClaw cron run into the next. File-based state handoff between isolated sessions for weekly comparison baselines.
  • Cron: How to schedule a daily task in OpenClaw without building a queue system. The complete setup guide for cron jobs, schedule types, and delivery modes.
  • Cron: My cron job ran but the Telegram notification never arrived. Five root causes of Telegram delivery failures and the fix for each.