Your agent forgets instructions mid-session. Compaction is why.

Your session is going well. Then somewhere in the middle, the agent stops following an instruction you gave earlier. It is not ignoring you. It genuinely no longer has that instruction. Compaction ran, trimmed the part of the conversation where you gave it, and now it is gone. This article explains exactly what compaction drops first, how to identify what has already been lost, and the structural fixes that prevent important instructions from disappearing in the first place.

TL;DR

Compaction summarizes the oldest parts of your conversation to free up context space. Summaries drop specific instructions, constraints, and preferences in favor of general topic coverage. Anything you told your agent in-conversation that is not also in your system prompt or a workspace file is at risk. The fix is structural: move critical instructions out of the conversation and into places that reload every turn and cannot be compacted. That is the system prompt for standing behaviors, and auto-loaded workspace files for project-specific context.

Every indented block in this article is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal, edit any files manually, or navigate any filesystem.

What compaction actually does

When your context window fills up, OpenClaw needs to make room for new content. Compaction is how it does that. It takes the oldest parts of the conversation and summarizes them, replacing the full text with a shorter summary. The summary is compact. Room is freed. The session continues.

The problem is that a summary is not the original. A summary captures the general topic and outcome of an exchange. It does not preserve every specific instruction, constraint, edge case condition, or formatting preference you expressed along the way. A conversation exchange that was 800 tokens in full detail might become a 60-token summary that notes “user discussed formatting preferences” without preserving what those preferences were.

If you gave your agent a specific instruction during that exchange, that instruction may now be gone. The agent does not know it is missing because the summary does not flag what was dropped. From the agent’s perspective, it is operating correctly on all the context it currently has.
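The mechanism can be sketched in a few lines. This is an illustrative model, not OpenClaw's actual implementation: the turn structure, the 10x shrink ratio, and the 0.75 threshold are all assumptions for the sake of the example.

```python
def compact(turns, context_limit, threshold=0.75):
    """Summarize oldest-first until usage drops below threshold * limit.

    Each turn is a dict like {"text": ..., "tokens": ...}. Illustrative only.
    """
    turns = [dict(t) for t in turns]
    i = 0  # index of the oldest turn not yet summarized
    while i < len(turns) and sum(t["tokens"] for t in turns) > threshold * context_limit:
        full = turns[i]
        turns[i] = {
            # only a fragment of the original text survives; a specific
            # instruction buried deeper in full["text"] is simply gone
            "text": f"[summary: {full['text'][:20]}...]",
            "tokens": max(1, full["tokens"] // 10),  # e.g. 800 tokens -> ~80
        }
        i += 1
    return turns
```

The point the sketch makes concrete: the loop never inspects what a turn contains, only how old it is and how many tokens it costs, so an instruction given early gets summarized regardless of how important it was.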

> Has compaction run in this session? If yes, what was summarized? Are there any instructions I gave earlier in the session that are no longer in full detail in your current context? List anything specific that you recall being told but that is now only represented in summary form.


What gets dropped first when compaction fires

Compaction works from oldest to newest. The first content to be summarized is always the earliest part of the conversation. This ordering has predictable consequences:

  • Instructions given at session start are the most vulnerable. If you begin a session by establishing context, setting constraints, or giving behavioral instructions and then have a long conversation, those opening instructions are the first things compaction reaches.
  • Preferences mentioned once and not reinforced disappear without a trace. If you said “always number your lists” once in passing and never reinforced it, that instruction is high risk in a long session.
  • Context established before the main work began gets summarized at exactly the point where your agent most needs the background. You set up a project context in the first ten turns, then start the actual work. By turn 40, the setup context is gone.
  • Constraints you set early and forgot about are particularly dangerous. If you told your agent to never do a specific thing at the start of the session and it has been correctly following that constraint, you may not notice the constraint is gone until it violates it.

Things that survive compaction without any changes from you, regardless of session length:

  • Your system prompt: loads fresh on every turn and is never compacted. Whatever is in your system prompt is always present, always in full detail, regardless of how long the session runs.
  • Auto-loaded workspace files: configured to load at session start and reload every turn. Same protection as the system prompt for the content they contain.
  • Recent conversation turns: compaction works from the oldest end. The last 10-20 turns are almost never at risk unless you have a very small context window. Instructions you gave recently are safe.
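Why the first two survive: conceptually, the prompt is reassembled on every turn from fresh copies of the permanent pieces plus whatever the (possibly compacted) history contains. A rough model, with all names assumed rather than taken from OpenClaw's internals:

```python
def build_prompt(system_prompt: str, workspace_files: list[str], history: list[str]) -> str:
    # The system prompt and workspace files are re-read every turn, so
    # compaction never touches them. Only `history` can have been summarized.
    return "\n\n".join([system_prompt, *workspace_files, *history])
```

Whatever happens to `history`, the first two arguments arrive intact each time, which is exactly why instructions stored there cannot be compacted away.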

> What is currently in my system prompt? What workspace files load automatically at the start of each session? Are there any instructions I rely on that are only in the conversation history and not in either of those places? List every instruction that is at risk of being lost to compaction.

How to identify what has already been lost

If you suspect compaction has already dropped something important, the most direct approach is to ask. Your agent can tell you what is currently in its active context, which means it can also tell you what is not there.

> I think compaction may have run and dropped some instructions. Can you tell me: what specific formatting preferences, behavioral constraints, or project-specific instructions do you currently have in your active context? I want to compare this against what I thought I had established.

The agent’s answer will often reveal gaps. If it cannot describe a constraint you know you gave, that constraint is gone. The next step is to re-state it and then immediately move it somewhere compaction cannot reach.

> I am going to re-state an important instruction that I think was lost to compaction: [your instruction here]. Confirm you have it. Then help me add it to my system prompt or workspace files so it loads every session going forward and cannot be compacted away.

The recovery for a specific session is re-stating the instruction now. The recovery for all future sessions is moving it to the system prompt immediately. Both steps matter and neither substitutes for the other.

The structural fix: where different instructions belong

The root cause of compaction-related instruction loss is almost always the same: an instruction that should have been in the system prompt or a workspace file was instead given in conversation. The agent followed it correctly until compaction erased it, at which point it stopped following it with no indication anything had changed.

The fix is understanding where different types of instructions belong and moving them there.

System prompt: standing behaviors and non-negotiable constraints

The system prompt is for things that should always be true regardless of what you are working on, how long the session runs, or what else has been discussed. Examples:

  • Tone and style requirements (“always write in plain language, no jargon”)
  • Things the agent should never do (“never send external messages without explicit confirmation”)
  • Default behavior settings (“always number lists,” “always cite sources for factual claims”)
  • Safety constraints (“always ask before modifying any production file”)
  • Identity and persona instructions

> Read my current system prompt. Think about the instructions I have given you in conversation over the last few weeks. Are there behavioral constraints or preferences I have expressed repeatedly in conversation that should be moved permanently to my system prompt? List them with a recommended addition for each.

Workspace files: project-specific and phase-specific context

Workspace files are for context that is specific to a project, a phase of work, or a particular domain but that still needs to persist across the whole session. Examples:

  • Current project status and goals
  • Technical context about your infrastructure that the agent needs to do its job
  • Phase-specific instructions that will change when the project phase changes
  • Reference information the agent needs to access repeatedly (API endpoints, file paths, naming conventions)

> I want to create a project context file that loads automatically. What information about my current project setup should go into it? Read my recent conversation history and identify any project-specific facts or constraints I have mentioned that are not in any of my current workspace files.

What should stay in conversation

Not everything needs to move to permanent files. One-off requests, ad hoc adjustments, and context specific to a single task can stay in conversation. These are things that you do not need to persist after the current session and that you would not miss if compaction summarized them. The discipline is recognizing which category a new instruction falls into before you give it, so you know where it belongs from the start.

The two-question test

Before giving any instruction in conversation, ask two questions: (1) Would I be annoyed if my agent stopped following this in 45 minutes because compaction ran? (2) Will I want this instruction to apply in future sessions? If yes to either, the instruction belongs in the system prompt or a workspace file before this session ends. If no to both, it is fine in conversation.
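The test is something you run in your head, but it reduces to a two-input decision. A throwaway sketch:

```python
def where_it_belongs(annoyed_if_dropped: bool, needed_next_session: bool) -> str:
    """The two-question test: yes to either question means permanent storage."""
    if annoyed_if_dropped or needed_next_session:
        return "system prompt or workspace file"
    return "conversation"
```

A one-off "summarize this thread" request fails both questions and stays in conversation; "always cite sources" passes both and belongs in the system prompt.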

Adjusting the compaction threshold

Beyond moving instructions to safe locations, you can adjust when compaction fires. The compaction threshold controls how full the context window needs to be before summarization kicks in. A higher threshold lets more conversation accumulate before anything gets trimmed. A lower threshold fires compaction earlier, keeping more buffer at the top but summarizing sooner.

> What is my current compaction threshold setting? What is the maximum context window size configured for this agent? Show me both values and the config path for each.

A threshold of 0.75 (75%) means compaction fires when context is 75% full. Raising this to 0.85 gives more conversation space before anything is summarized. The tradeoff is that you leave less buffer before the window is completely full, which can make responses slightly slower on very large context windows.

For most operators, the right response to premature compaction is raising the contextTokens ceiling (so the window is larger overall) rather than raising the compaction threshold. A larger window with the same threshold means more conversation before compaction fires, without reducing the buffer.
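The arithmetic behind that recommendation, assuming compaction fires at threshold x contextTokens (all values are illustrative, not defaults):

```python
def compaction_point(context_tokens: int, threshold: float) -> int:
    """Tokens of conversation that fit before compaction fires."""
    return int(context_tokens * threshold)

baseline = compaction_point(100_000, 0.75)          # 75,000 tokens before compaction
higher_threshold = compaction_point(100_000, 0.85)  # 85,000, but only 15% buffer left
larger_window = compaction_point(160_000, 0.75)     # 120,000, buffer ratio unchanged
```

Raising the threshold buys 10,000 tokens of conversation at the cost of buffer; raising the window buys 45,000 tokens while the 25% buffer ratio stays the same.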

> What model am I currently using, and what is its maximum context window? Is my current contextTokens setting lower than that maximum? If I raised contextTokens to [X], would that allow longer sessions before compaction fires without any other config changes?

The /new rule

OpenClaw caches contextTokens in the session entry when a session first starts. Changing contextTokens in config and checking the result in the same session will still show the old value. After any change to contextTokens or compaction settings, start a new session with /new before verifying that the change took effect.

What OpenClaw preserves through compaction

OpenClaw’s compaction is not a random truncation. It summarizes older content and attempts to preserve certain sections that are more likely to contain important context. Understanding what it tries to protect helps you work with it rather than against it.

Sections that OpenClaw’s compaction typically attempts to preserve in full:

  • Session startup sections: initialization context that the agent generated at session start
  • Model routing instructions: which model to use for which tasks
  • Red lines: explicit “never do this” constraints
  • Active task state: what the agent is currently working on

What it does not preserve in full:

  • Specific formatting instructions given in conversation
  • Nuanced constraints with conditions attached (“unless X, always do Y”)
  • Reference information given in the flow of a conversation rather than in a structured file
  • Corrections you gave after the agent made a mistake (these often get summarized to “user provided correction” without the specific content)

> Based on what you know about how OpenClaw handles compaction, which parts of what we have discussed today are most at risk of being lost if compaction fires in the next hour? What should I move to a permanent location right now as a precaution?

What to do immediately after compaction fires

If compaction has just run and you noticed (either from a system message or because your agent suddenly stopped following an instruction), here is the recovery sequence.

  1. Ask what was lost. The blockquote above for identifying lost instructions covers this. Get a picture of what the agent currently knows about your constraints and preferences.
  2. Re-state the critical instructions for this session. Anything you need for the current task that was dropped, re-state now. This re-enters it into the conversation history for the remaining session.
  3. Move them to permanent storage before the session ends. Do not wait until the next session to fix the structural problem. Ask your agent to help you add the recovered instructions to your system prompt or workspace files while you have them clearly in mind.
  4. Audit your system prompt and workspace files. Compaction events are a signal that something important was in conversation that should not have been. Use this as a prompt to review what is permanently stored and add anything that was exposed as vulnerable.

> Compaction just ran. I want to recover quickly. First, tell me everything important you still know about how I want you to behave and what we are working on. Second, identify anything that feels uncertain or vague in your current context that was probably clearer earlier in the session. Third, help me add the important instructions to my system prompt so this does not happen again.

Managing instructions in very long sessions

Some work naturally produces very long sessions: research projects, complex writing tasks, multi-step problem solving. For these, a single compaction event may not be the only risk. Context can compact multiple times across a very long session, progressively reducing the detail available about earlier decisions and constraints.

The checkpoint pattern

For long sessions, a useful habit is periodic checkpointing: every 15-20 turns or whenever you reach a natural pause in the work, ask your agent to write a brief checkpoint file with the current state.

> Write a context checkpoint to workspace/.context-checkpoint.md. Include: the current task we are working on, any active constraints or instructions I have given in this session, key decisions we have made, and what the next steps are. This file will help restore context if compaction fires or if I need to continue in a new session.

The checkpoint file survives compaction because it is a workspace file. When compaction fires, the checkpoint provides a condensed but accurate record of the session’s important context, which is far better than what the automatic compaction summary preserves.
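A checkpoint written from that command might look like the following. The structure is a suggestion, not a required format, and the task and constraints shown are placeholders:

```markdown
# Context checkpoint

## Current task
Drafting the Q3 report outline (section 2 of 5 done).

## Active constraints and instructions
- Always number lists.
- Never send external messages without explicit confirmation.

## Key decisions
- Dropped the appendix; its content moves into section 4.

## Next steps
1. Finish section 3.
2. Review numbering across all sections.
```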

Splitting very long tasks across sessions

If a task is going to run long enough that multiple compaction events are likely, consider splitting it across sessions intentionally rather than letting compaction handle it. End the current session with a checkpoint, then start a fresh session that reads the checkpoint as its first action. This gives you full context at the start of each session chunk, rather than a progressively degraded context in a single very long session.

> I want to pause this task and continue it in a fresh session. Write a detailed handoff note to workspace/.context-checkpoint.md that covers everything the next session needs to pick up exactly where we are: current task, all active constraints and instructions I have given, the work completed so far, the work remaining, and the exact next step.

Session continuity is a design problem

Most compaction-related instruction loss is a symptom of treating session continuity as something the agent handles automatically, when it is actually something you need to design. Instructions that matter to you belong in structured files, not in the flow of conversation. Checkpoints preserve state between sessions. Compaction handles context limits as best it can, but it cannot preserve what was never put somewhere safe.

Recognizing compaction-related behavior changes

Compaction does not announce every instruction it drops. The most common way operators discover the problem is through behavioral changes that seem unexplained. Learning to recognize these symptoms saves time compared to debugging the agent’s behavior without knowing what changed.

Common behavioral changes that are caused by compaction:

  • Format changes mid-session: the agent was using a specific format you requested, then switched to a different format partway through. The formatting instruction was dropped.
  • Constraint violations after a long session: the agent does something you told it not to do. You are sure you gave that constraint. Compaction summarized the exchange where you gave it.
  • Loss of project context: the agent asks clarifying questions about things you already explained earlier in the session. The explanations were compacted and the detail was lost.
  • Returning to default behavior: the agent reverts to a behavior that its default instructions specify, overriding a customization you made in-session. The customization was compacted. The default is in the system prompt and survived.
  • Inconsistent tone or style: the agent was writing in a specific voice you requested, then shifted back toward its default. Same pattern.

> I noticed you changed [specific behavior] partway through this session. Is that because an instruction I gave earlier in the session is no longer in your active context? What do you currently have that relates to that behavior?

Testing your setup for compaction resilience

Before relying on a set of instructions to survive a long session, it is worth verifying that they actually will. The test is straightforward: run a session long enough that compaction fires at least once, then check whether the important instructions are still present.

> I want to test my compaction setup. What instructions are currently in my system prompt versus in my workspace files versus only in our current conversation? Then tell me: if compaction fired right now, which of these would survive intact and which would be at risk?

Run this check after any significant change to your workspace configuration. When you add a new instruction, verify it is in the right place before relying on it in a long session. When you change your system prompt, verify the change is reflected correctly in a fresh session. Small verification habits prevent the larger diagnostic sessions that follow when something has been silently wrong for weeks.

A practical verification checklist

Before beginning any long session or complex task, take two minutes to run through this checklist:

  1. Ask: “What are the three most important behavioral constraints I have set for this agent?” If the agent cannot name them correctly, they may not be in the system prompt.
  2. Check contextTokens versus the model’s maximum. If contextTokens is set low relative to the model, raise it before starting.
  3. Confirm that a checkpoint write command is ready to paste if the session runs long. Have the blockquote from the checkpoint section of this article ready to use.
  4. For project-specific sessions, confirm the project workspace file exists and has been recently updated with the current status.

None of these steps takes more than 30 seconds individually. Together they take under two minutes. The cost of skipping them is a long session where compaction strips important context and you spend time recovering rather than working.

Compaction in multi-agent and cron setups

Compaction-related instruction loss is not only a risk in interactive sessions. Cron jobs and sub-agent tasks also run in sessions with context windows. A cron job that runs a long task can trigger compaction mid-run if the task is complex enough or if there are large tool outputs. The agent following the cron job instructions may lose important constraints partway through the task.

For cron jobs, the structural protection is the same as for interactive sessions: the instructions that govern the cron job’s behavior should be in the system prompt or workspace files that load with the session, not embedded only in the cron job payload text. If the cron job payload contains a long set of instructions, consider moving those instructions to a workspace file and having the cron payload say “read workspace/cron-instructions.md for the full task specification.”
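A slimmed-down payload following that pattern might read like this (the file path is whatever you choose; `workspace/cron-instructions.md` is just the example name used above):

```
Read workspace/cron-instructions.md for the full task specification,
then carry it out. All constraints in that file apply.
```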

> I have cron jobs that run long tasks. For each cron job, how many tokens does the job payload contain? Is there any risk that compaction could fire during the job run and cause the agent to lose track of important job-specific instructions? Should any of the job instructions be moved to a workspace file instead?

Sub-agents that run in isolated sessions start fresh each time they are spawned. They get the system prompt and workspace files of their configuration, but not the conversation history of the parent session. For compaction resilience this is actually an advantage: every sub-agent spawn starts with a full context window, and whatever is in the loaded workspace files is fully present from the start. The risk for sub-agents is not compaction mid-run but the absence of context that only existed in the parent session's conversation. If a sub-agent needs to know about a constraint, a project status, or a behavioral rule that lives only in the parent conversation, include it explicitly in the task prompt at spawn time; a sub-agent has no other way to see what the parent session discussed.

> When I spawn a sub-agent for a task, what context does it have access to? Does it see my workspace files? Does it see the current conversation? Does it see my system prompt? What information do I need to explicitly pass to it that it would not have by default?

Building compaction-resistant habits from the start

The operators who never have significant compaction problems are not doing anything magical. They have a few consistent habits that eliminate most of the risk before it appears.

Write instruction-first, conversation-second. When you want to establish a new constraint or preference, write it to your system prompt or workspace file first, then confirm the agent has it by reading it back. Do not establish important constraints by saying them in conversation and hoping they persist.

Keep the system prompt focused and current. A bloated system prompt with stale instructions is not better than a lean one. Review your system prompt monthly and remove instructions that no longer reflect how you use your agent. A focused, current system prompt is easier to reason about and easier to audit.

Write checkpoints at natural pause points, not only in emergencies. A checkpoint written at the end of a productive session is infinitely more valuable than one written in a panic when compaction has already fired. The habit of writing a checkpoint every time you pause a task protects you whether compaction fires or not, because the checkpoint also preserves state across session restarts.

Treat behavioral changes as signals, not annoyances. When your agent stops following an instruction mid-session, the right response is not simply to re-state the instruction and move on. The right response is to identify why it stopped, determine whether compaction is the cause, and then fix the root cause. If compaction is the cause, move the instruction somewhere safe before the session ends. If it was a direct instruction in conversation that was never written to a permanent file, move it to the system prompt now. If it was in a workspace file that was recently modified, check the git history to see what changed and restore any missing instructions. Treating every behavioral change as a diagnostic opportunity keeps your setup accurate over time rather than gradually drifting away from how you intended the agent to behave.

> Audit my current setup for compaction resilience. Check: are all standing behavioral constraints in my system prompt? Are all project-specific instructions in workspace files that auto-load? Is there anything I rely on that only exists in conversation history? Give me a report with any gaps and recommended fixes.

Common questions

How do I know if compaction has already run and something was lost?

Ask directly: “Has compaction run in this session? If yes, what was summarized and are there any instructions from earlier in the conversation that you no longer have in full detail?” The agent can tell you what it currently has in context. If it cannot recall a specific constraint you know you gave, compaction is the likely cause. The absence of a memory is less obvious than its presence, so comparing what the agent recalls against what you know you established is the most reliable diagnostic.

Can I stop compaction from running entirely?

You can disable it in config, but this is rarely the right move. Without compaction, the context window fills completely and new responses either fail or get truncated. The practical effect is a hard session length limit rather than graceful degradation. The better approaches are: raise contextTokens so the window is large enough for your typical sessions, move critical instructions to the system prompt so they survive compaction regardless, and use checkpoints for very long sessions. Disabling compaction is a last resort for very specific use cases, not a general fix for instruction loss.

My system prompt is already long. Will adding more to it make things slower?

Modestly, yes. Every token in your system prompt is processed on every turn. Going from a 500-token system prompt to a 2,000-token system prompt adds a small amount of processing time and API cost per response. The tradeoff is usually worth it for instructions that genuinely matter and need to survive compaction. The discipline is keeping the system prompt focused on instructions that change agent behavior: put in what needs to survive, not everything you have ever said to your agent. The goal is a system prompt that is complete but lean.

I gave an important instruction and the agent is already forgetting it. What do I do right now?

Re-state the instruction now, clearly and explicitly. Then immediately ask your agent to add it to your system prompt or workspace file: “Add this to my system prompt so it persists: [instruction].” Do not wait until the session ends. If the agent is already forgetting in-session instructions, context is close to the compaction threshold and anything new you say is also at risk of being summarized before the session ends. The re-statement and the permanent storage are both necessary.

Does the compaction summary ever incorrectly represent what was said?

Yes. Compaction summaries are generated by the model and are subject to the same limitations as any model output: they can omit details, misrepresent nuanced instructions, or summarize a correction as its pre-correction form. This is one reason why critical instructions should never rely on surviving compaction intact. Assume that any instruction in the conversation summary is less precise than the original. If precision matters for a constraint, it belongs in the system prompt, not in the conversation.

How do I know when my context window is getting close to the compaction threshold?

Ask your agent periodically during long sessions: “What is my current context usage as a percentage of the configured maximum?” Most agents can report this. OpenClaw also includes context usage indicators in its dashboard. If you are above 60-70% in the middle of a working session, you are approaching the compaction zone. This is a good time to write a checkpoint and move any unprotected instructions to permanent storage before they get hit.

If I start a new session, does the previous session’s compaction summary carry over?

No. Each session starts fresh with your system prompt, workspace files, and any memory the memory system has stored. The previous session’s conversation history, including any compaction summaries, does not carry into the new session. This is why checkpoints written to workspace files are the reliable continuity mechanism: a checkpoint file persists because it is on disk, not because the session history carries over. A checkpoint written at the end of one session can be read at the start of the next.

What is the difference between compaction and the memory system?

Compaction is an in-session mechanism that summarizes old conversation turns to free up context space within the current session. It is temporary by nature: the summary stays in the session but detail is lost. The memory system (if you have one configured) is a persistent store that survives across sessions. What goes into the memory system stays there until explicitly removed. The two serve different purposes: compaction manages within-session context limits, memory manages cross-session continuity. Instructions critical enough to survive compaction belong in the system prompt; information you want to carry across many sessions belongs in memory.

Can compaction affect tool call results that are stored in the conversation?

Yes. If a tool returned a large result earlier in the session and that result is now in the compaction zone, the full result will be summarized. The agent may lose specific details from that result. For tool results you will need to reference again later in the session, consider asking your agent to write the important portions to a workspace file explicitly. A file write is permanent; a tool result in conversation history is not.


Brand New Claw

System prompt structure, workspace file setup, and compaction tuning

Every instruction in the right place so nothing gets lost mid-session. The complete first-setup guide covers which instructions belong in the system prompt, which belong in workspace files, how to tune the compaction threshold for your workload, and the checkpoint pattern for long sessions.

Get Brand New Claw for $37 →

Keep Reading:

  • Why does OpenClaw keep compacting even on short sessions? If compaction is firing before the session gets long, the context window ceiling is probably set too low.
  • Why does OpenClaw fill up context so fast even on simple tasks? What is actually consuming your context budget before you type anything, and how to measure and reduce it.
  • Why is OpenClaw so slow? It is probably your context window. Response time scales directly with context window size. How to right-size it for your actual workload.