My OpenClaw agent keeps mixing up memories from different sessions

The memory pipeline is working. Recall is surfacing results. The problem is that memories from one project or context keep appearing when you are working on something completely different. This is a scope problem, not a retrieval problem. Everything is landing in one pool, and every query pulls from that entire pool regardless of which context you are in. This article covers how to split memory into scopes that keep project knowledge isolated while still making global preferences available everywhere.

TL;DR

Everything in one scope means every recall query competes across all your contexts simultaneously. The fix is two layers: a shared scope for global facts (preferences, working style, general knowledge) and a per-project scope for task-specific memories. Configure autoCapture to write to the relevant project scope and autoRecall to query both the project scope and the shared scope. Context stays separated. Global facts stay available.

Every indented block in this article is a command you can paste directly into your OpenClaw chat. Your agent will run it and report back. You do not need to open a terminal, edit any files, or navigate any filesystem.

What scopes actually do

A scope is a named bucket. Memories written to one scope are only searchable within that scope by default. If autoCapture writes everything to a single default scope, then working on Project A will surface memories from Project B, personal preferences, old tasks, and anything else that matched the query semantically. The recall system has no way to know which context you are currently in unless the scope tells it.

Most OpenClaw setups ship with a single default scope. It works fine when one agent is doing one type of work. It breaks down when the same agent handles multiple projects, different clients, or work and personal contexts that should not bleed into each other.
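The failure mode is easy to see in miniature. This toy Python sketch is not the real OpenClaw API — the `Memory` type and `recall` helper are invented for illustration — but it shows how a single scope makes every query compete across all contexts at once:

```python
from dataclasses import dataclass

# Toy model: every memory carries a scope tag, and recall only
# searches within one scope at a time.
@dataclass
class Memory:
    scope: str
    text: str

store = [
    Memory("default", "Client A prefers weekly invoices"),
    Memory("default", "Project B uses PostgreSQL 16"),
    Memory("default", "I prefer concise output"),
]

def recall(store, query, scope):
    """Return memories in one scope whose text loosely matches the query."""
    words = query.lower().split()
    return [m.text for m in store
            if m.scope == scope and any(w in m.text.lower() for w in words)]

# With everything in "default", a preference query surfaces the
# client-specific fact AND the global preference together — that
# co-mingling is the scope bleed this article is about.
bleed = recall(store, "prefer", "default")
```

Splitting the three memories into `client:a`, `project:b`, and `agent:shared` scopes would make the same query return only what belongs to the active context.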

    Read my openclaw.json memory plugin config. What scope is autoCapture writing to? What scope is autoRecall querying? Are they the same scope? List every distinct scope that currently has memories stored in it using memory_stats.

If everything is in one scope, that is the root cause. The fix is deciding how to divide it before changing any configuration.

The two-layer pattern that actually works

The instinct when you first encounter scope bleed is to create one scope per project and fully isolate everything. That creates a different problem: facts about you that should be available everywhere (your preferences, your working style, your formatting conventions) get stuck in whichever scope they were written to and disappear from every other context.

The pattern that resolves both problems is two layers:

  • A shared scope (called something like agent:shared or global) for facts that should be available in any context: who you are, how you like things done, preferences, communication style, anything that applies regardless of what you are working on.
  • Per-project scopes (called something like project:client-a or work:q1-campaign) for memories that are specific to a project, client, or task type and should not bleed into unrelated work.

Recall queries both layers: the active project scope for context-specific memories and the shared scope for global facts. This way, project memories stay isolated from each other, but your preferences and general knowledge remain available everywhere.
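A minimal sketch of the two-layer lookup, again with invented names standing in for the real plugin calls:

```python
# Two-layer recall: query the active project scope plus the shared
# scope, and merge the results. Store shape and scope names are
# illustrative, not the actual plugin API.
def recall_two_layer(store, query, project_scope, shared_scope="agent:shared"):
    hits = []
    for scope in (project_scope, shared_scope):
        hits.extend(m for m in store
                    if m["scope"] == scope and query in m["text"].lower())
    return [m["text"] for m in hits]

store = [
    {"scope": "project:client-a", "text": "client-a deploys on fridays"},
    {"scope": "project:client-b", "text": "client-b deploys on mondays"},
    {"scope": "agent:shared",     "text": "summarize deploys in bullet points"},
]

# A deploy question asked in client-a's context sees client-a facts and
# the shared preference, but never client-b's schedule.
hits = recall_two_layer(store, "deploy", "project:client-a")
```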

    Based on the work I do with you, suggest a scope structure that separates my different contexts. What scopes would you create? What would go in the shared scope versus the project-specific scopes? Show me the proposed structure before making any changes.

Migrating existing memories to the new structure

Before creating new scopes, decide what to do with existing memories in the default scope. You have three options: keep them in the default scope as a legacy archive and create new scopes going forward; categorize and move them to the new scopes; or start fresh with a clean slate in the new scopes and let the default scope memories age out naturally.

For most setups, the pragmatic approach is a mix: move memories that are clearly project-specific to their new scopes, move clearly global preferences to the shared scope, and leave ambiguous or stale memories in the default scope where they will be gradually superseded by fresh memories in the correct scopes.

    List all memories currently in my default scope. Categorize them: which ones are global preferences that should move to the shared scope, which ones are project-specific memories that should move to a project scope, and which ones are stale or no longer relevant? Do not move anything yet. I want to review the categorization first.

Moving memories between scopes

OpenClaw’s memory tools do not have a native move operation. The process is: recall the memory from the old scope, store it in the new scope with the same content, then forget it from the old scope. For a large number of memories, this is worth doing in batches by category rather than one at a time.
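The batch move can be sketched like this. `move_memories` and the dict shape are hypothetical stand-ins for the plugin's actual recall/store/forget tools; the point is the ordering and the batching:

```python
def move_memories(store, src_scope, dst_scope, category=None, batch_size=5):
    """Move memories between scopes in confirmable batches.

    Store-then-forget ordering means a failure mid-move leaves a
    harmless duplicate rather than a lost memory.
    """
    to_move = [m for m in store
               if m["scope"] == src_scope
               and (category is None or m["category"] == category)]
    batches = []
    for i in range(0, len(to_move), batch_size):
        batch = to_move[i:i + batch_size]
        for mem in batch:
            store.append({**mem, "scope": dst_scope})  # memory_store stand-in
            store.remove(mem)                          # memory_forget stand-in
        batches.append(batch)
    return batches

# Seven default-scope preferences become one batch of 5 and one of 2,
# matching the confirm-every-5 workflow in the prompt below.
default_scope = [{"scope": "default", "category": "preference", "text": f"pref {i}"}
                 for i in range(7)]
batches = move_memories(default_scope, "default", "agent:shared",
                        category="preference")
```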

    Move all memories in my default scope that are categorized as global preferences to the shared scope (agent:shared). For each one: recall it from the default scope, store it in agent:shared with the same content and category, then forget it from the default scope. Show me the list of memories to be moved before starting. Stop after each batch of 5 so I can confirm the moves are correct.

Configuring autoCapture to write to the right scope

Once the scope structure is decided, autoCapture needs to know which scope to write new memories to. The default behavior writes to whatever scope is configured in the plugin settings. If you want new memories to go to a project-specific scope, you need to either update the config when switching projects or add a context instruction that tells the agent which scope to use for the current session.
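As a rough shape, the relevant fragment of openclaw.json might look like the following. The key names here are an assumption based on common plugin layouts — check your memory plugin's documentation for the exact schema before editing anything:

```json
{
  "plugins": {
    "memory": {
      "autoCapture": {
        "enabled": true,
        "scope": "project:client-a"
      }
    }
  }
}
```

Whatever the exact keys are in your version, the idea is the same: autoCapture has a single target scope, and that is the only place new memories land until you change it.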

    What is the current autoCapture target scope in my memory plugin config? If I want new memories captured during a specific project session to go to a project scope instead of the default, what is the cleanest way to handle that? Show me the config change and explain whether I need to restart or start a new session for it to take effect.

Using session-level scope overrides

For workflows where you switch between projects frequently, changing the config for every context switch is impractical. A cleaner approach is to specify the scope explicitly in each memory store and recall call rather than relying on the autoCapture default. This requires adding scope instructions to your agent prompt for specific project sessions.

    For this session, we are working on [project name]. Store any new memories related to this project in the scope project:[project-name]. For recalls about this project, query project:[project-name] first, then agent:shared for global preferences. For anything that is a general preference or not project-specific, store it in agent:shared.

Configuring autoRecall to query the right scopes

Recall is only useful if it searches the right scopes. For the two-layer pattern to work, autoRecall needs to query both the active project scope and the shared scope on every retrieval. Querying only the project scope misses global preferences. Querying only the shared scope misses project context.
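As with autoCapture, the config shape will vary by plugin version, but the recall side typically takes a list of scopes rather than a single one. A hypothetical fragment (key names assumed, not authoritative):

```json
{
  "plugins": {
    "memory": {
      "autoRecall": {
        "enabled": true,
        "scopes": ["project:client-a", "agent:shared"]
      }
    }
  }
}
```

If your plugin only accepts a single recall scope in config, the same effect can be achieved with explicit scope parameters in session-level recall instructions, as described in the previous section.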

    What scopes is autoRecall currently configured to query? If it is only querying one scope, what would I need to change to have it query both agent:shared and the active project scope? Show me the config change.

When to query which scope

Not every recall needs to search all scopes. A recall for project-specific technical decisions should not be pulling global preference memories that will only dilute the results. The right approach is context-aware querying: use the project scope when looking for project-specific information, the shared scope when looking for preferences or general facts, and both when the query could reasonably match either type of memory.

    I need to recall information about the data model decisions for [project name]. Search the scope project:[project-name] only. Then separately, recall my preferences for how I like technical documentation structured, searching agent:shared only. Show me the results from each search separately.

Keeping scopes clean over time

Scope bleed can creep back in gradually if the structure is not maintained. The most common ways it re-emerges: a session where the agent writes to the wrong scope because the scope instruction was not in the prompt; a config change that accidentally reverts the autoCapture target; or stale memories in an old scope that were never cleaned up and still surface in broad recalls.

    Run a scope audit. Check all existing scopes and their memory counts. Are there memories in scopes that should not exist (default scope memories that should have moved)? Are there any scopes with very few memories that might be misnamed variants of an intended scope? Flag anything that looks like it was written to the wrong place.

A monthly scope audit takes about five minutes and catches drift before it accumulates into a significant bleed problem. Add it to a monthly maintenance cron job alongside queue cleanup and log rotation.

Advanced scope design for complex setups

The two-layer pattern (shared plus project) handles most setups. For more complex configurations, a three-layer structure provides additional granularity:

  • Global scope (agent:global): Facts about the operator that never change regardless of context. Name, timezone, communication preferences, standing instructions.
  • Domain scope (domain:work, domain:personal): Context-type memories that apply across all projects within a domain. Work preferences that apply to all client projects, personal preferences that apply to all personal tasks.
  • Project scope (project:specific-name): Task-specific memories for a particular project, client, or initiative.

Recall queries progressively: project scope first (most specific), domain scope second (context-type), global scope last (universal). This structure makes sense for operators with clear work/personal separation or multiple client types that have different conventions.
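One way to sketch the progressive query, with hypothetical helper names and an early-exit once the more specific layers have enough results:

```python
# Progressive three-layer recall: most specific scope first. Broader
# layers are only consulted if the narrower ones have not already
# filled the result limit. Names and store shape are illustrative.
def recall_layered(store, query, project_scope, domain_scope, limit=5):
    layers = [project_scope, domain_scope, "agent:global"]
    results = []
    for scope in layers:
        hits = [m["text"] for m in store
                if m["scope"] == scope and query in m["text"].lower()]
        results.extend(hits)
        if len(results) >= limit:   # specific layers answered the query;
            break                   # skip the broader ones
    return results[:limit]

store = [
    {"scope": "project:acme-site", "text": "acme-site uses tailwind"},
    {"scope": "domain:work",       "text": "work docs use tailwind-style shorthand"},
    {"scope": "agent:global",      "text": "timezone is utc"},
]

# Project-level hits come back first, domain-level second.
hits = recall_layered(store, "tailwind", "project:acme-site", "domain:work")
```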

    Does my memory usage suggest I would benefit from a three-layer scope structure (global, domain, project) rather than a two-layer one (shared, project)? Based on what you know about how I use you, would the extra layer add clarity or just add complexity? Recommend one approach.

Diagnosing which memories are bleeding through

Before restructuring scopes, it helps to understand exactly which memories are causing the bleed. A targeted diagnosis tells you whether the problem is a handful of widely-applicable memories that match too many queries, or a structural issue where the entire default scope is too broad.

    I am working on [project name] and getting irrelevant memories surfacing. Run a recall for “recent work on this project” and show me all results with their scope and category. Which of the results are from a different project or context? That is the bleed I need to fix.

The results from this diagnostic tell you whether the bleed is coming from memories in the same scope that happen to match the query semantically, or from memories that should never appear in this context at all. Those are different problems with different fixes.

Semantic bleed versus scope bleed

Scope bleed (everything in one scope) and semantic bleed (memories that match queries they should not) look similar but need different solutions. Scope bleed is fixed by separating memories into scopes. Semantic bleed is fixed by making memory content more specific so it only matches the queries it is relevant to, or by deleting memories that are too vague to be useful.

    Look at the memories that keep surfacing in the wrong context. Is the text of those memories genuinely specific to the context they belong to, or is it vague enough to match many different queries? If a memory says “I prefer concise output” rather than “I prefer concise output when writing code comments,” it will match almost any recall query. Flag any memories in my default scope that are too vague and should either be made more specific or deleted.

Scope naming conventions that scale

Scope names become more important as the number of scopes grows. A consistent naming convention prevents the situation where you end up with both project:clienta and client-a because the name was not standardized and two sessions wrote to different scope names. Those memories are now in separate buckets that neither queries when searching the other.

The cleanest convention is a two-part prefix system: type:name where type is one of agent, project, domain, or client, and name is a short slug with no spaces. Examples: agent:shared, project:q2-content, domain:work, client:acme.
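The convention is simple enough to enforce mechanically. A sketch in Python — the regex encodes the type:name rule above, and the scope names in the example are illustrative:

```python
import re

# type:name — a known type prefix, a colon, then a lowercase slug of
# letters/digits with optional hyphen-separated segments, no spaces.
SCOPE_RE = re.compile(r"^(agent|project|domain|client):[a-z0-9]+(-[a-z0-9]+)*$")

def check_scopes(names):
    """Split scope names into valid ones and convention violations."""
    ok = [n for n in names if SCOPE_RE.fullmatch(n)]
    bad = [n for n in names if not SCOPE_RE.fullmatch(n)]
    return ok, bad

ok, bad = check_scopes(["agent:shared", "project:q2-content",
                        "Agent:Shared", "client-a", "domain:work"])
# "Agent:Shared" fails on capitalization; "client-a" is missing its
# type prefix — exactly the variants that create silent split buckets.
```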

    List all scope names currently in use in my memory system. Do any of them look like variants of the same intended scope (different capitalizations, slightly different names for the same thing)? Suggest a standardized naming convention and flag any scopes that should be merged or renamed.

Documenting your scope structure

Once you have a working scope structure, write it down in your AGENTS.md or a dedicated memory configuration file. Document what each scope is for, what types of memories belong there, and which scopes autoCapture should target in different contexts. This documentation prevents drift when months pass and you cannot remember why the structure was set up the way it was.

    Write a scope documentation block for my workspace. List each scope I am using, describe what it is for, specify what types of memories belong there (category types: preference, fact, decision, entity, event), and note which scope autoCapture should target for each of my main work contexts. Format it as a section I can paste into my AGENTS.md.

Testing your scope setup before relying on it

After restructuring scopes, verify the setup works as intended before trusting it with real work. Three tests cover the most important behaviors.

Test 1: Scope isolation

Write a test memory to a project scope and confirm it does not appear when querying the shared scope.

    Store a test memory in scope project:test-scope with the text “test memory for scope isolation check.” Then do a recall with the query “scope isolation check” in scope agent:shared only. The test memory should NOT appear in the results. Then recall from scope project:test-scope with the same query. It SHOULD appear. Report whether the isolation is working correctly. Then delete the test memory.

Test 2: Shared scope availability

Confirm that shared scope memories surface correctly when querying from a project context that queries both layers.

    I am in the context of project:test-scope. Recall my general formatting preferences from agent:shared. Do they appear correctly even though I am currently in a project context? Confirm that the two-scope query is returning results from agent:shared as expected.

Test 3: autoCapture target

Confirm that newly captured memories during a project session go to the correct scope and not the default.

    During this session, if you were to store a memory about a decision I made for the current project, which scope would it go to? Is that the correct project scope, or would it go to the default scope? If it would go to the default, what instruction or config change is needed to route it correctly?

Common scope configuration mistakes

These are the mistakes that reintroduce scope bleed after the initial cleanup.

Forgetting the scope in memory_store calls. If your agent makes an explicit memory_store call without specifying a scope, it writes to the plugin default regardless of any session-level scope instructions. Always include the scope parameter in explicit memory_store calls in your agent prompts.

Updating a memory without specifying scope. Memory updates may not respect session-level scope overrides if the update targets a memory by ID without an explicit scope. Check your memory plugin documentation for whether memory_update inherits the scope from the original memory or requires it to be specified again.

Using the same scope name with different capitalizations. Most memory systems treat scope names as case-sensitive strings. Agent:Shared and agent:shared are different scopes. Standardize on all-lowercase scope names and check for capitalization variants in your scope audit.

    Check my current memory plugin usage for these three common mistakes: memory_store calls without a scope parameter, memory_update calls that might write to the wrong scope, and scope name variants with different capitalizations. Flag any instances so I can fix them.

When scopes are not the right solution

Scopes solve the problem of context bleed between distinct work areas. They are not the right solution for all memory quality problems. If your recalls are surfacing irrelevant results but all the memories are genuinely in the right scope, the problem is not scope structure. It is either that the memories are not specific enough, the recall query is too broad, or the embedding model is producing poor similarity scores for your content type.

    I have restructured my scopes but recall results are still not quite right. The memories are in the correct scopes, but some irrelevant results still surface. Is this a scope problem or an embedding quality problem? Run a test recall and explain why the top results were returned. Are the similarities making sense given the query?

If the embedding model is producing poor results, the fix is either changing the embedding model or adding more specific text to memories at write time so the vectors better capture the intended meaning. The scope structure is correct; the retrieval quality is the issue.

Real-world scope structures that work

Seeing a few concrete examples of scope structures helps calibrate what “two-layer” or “three-layer” looks like in practice for different types of OpenClaw usage.

Solo operator, one main project at a time

The simplest structure that still solves scope bleed. Two scopes: agent:shared for preferences and standing instructions, and project:current that gets renamed or replaced when the active project changes. Recall always queries both. autoCapture targets project:current by default, with explicit override to agent:shared for genuinely universal facts.

This structure works well when you are not genuinely context-switching between multiple active projects. The naming is simple and there is no overhead managing many project scopes.

Freelancer with multiple active clients

Three layers work well here: agent:shared for operator-level preferences, domain:client-work for conventions that apply to all client work (billing preferences, communication style, deliverable formats), and per-client scopes like client:acme and client:beta-corp. Recall queries the active client scope, domain:client-work, and agent:shared. Client memories stay completely isolated from each other.

    I work with multiple clients and want their memory contexts fully isolated. Set up a three-layer scope structure with agent:shared for my operator preferences, domain:client-work for general client-work conventions, and individual scopes for each active client. Show me the scope list and the recall configuration before I apply anything.

Content creator with multiple topic verticals

For a content operation covering distinct topics (a tech site, a personal finance site, a UAP research site), vertical-level scopes keep topic knowledge clean. agent:shared for writing style and editorial voice, vertical:tech, vertical:finance, vertical:uap for topic-specific knowledge, sources, terminology, and audience preferences. When writing for one vertical, recall queries that vertical plus agent:shared. Terminology and audience context for other verticals do not surface.

    I create content across multiple topic verticals. Create vertical-level scopes for each topic area so that topic-specific terminology, audience context, and source preferences stay isolated. When I start a session for a specific vertical, configure recall to query that vertical’s scope plus agent:shared. Show me the proposed structure.

Scopes and compaction interaction

One detail worth understanding: compaction does not affect your memory scopes. Compaction summarizes the conversation history in the active session context, which is separate from the memory database. Memories stored via memory_store persist in the memory database regardless of whether compaction runs. What compaction does affect is the agent’s ability to recall in-session context it has not written to memory, which is why the recommendation to write important decisions to memory explicitly (rather than assuming the agent will remember them from the conversation) is important for long sessions.

    After this session compacts, will the memories I stored to project:current-project still be available? Explain the difference between session context and the memory database so I understand what compaction does and does not affect.

The answer is yes: memories in the database survive compaction. The distinction between session context and the memory database is one of the most commonly misunderstood aspects of OpenClaw memory. Session context is the active whiteboard that gets compacted. The memory database is the external file or store that persists indefinitely. They are separate systems even though both contribute to what the agent can access in a session.

Export your memories before restructuring

Any time you are about to do significant restructuring of scopes, migrate memories between scopes, or delete a scope, export the current state first. This is a two-minute operation that serves as a safety net if anything goes wrong during the restructuring.

    Before I restructure my memory scopes: list all memories across all scopes (using memory_list with no filter) and write them to workspace/memory-export-YYYY-MM-DD.md as a formatted markdown file. Include the memory ID, scope, category, and full text for each one. I want a recoverable snapshot before making any changes.

The export file is also useful for reviewing the full contents of your memory system in a readable format. Most operators have never seen all their stored memories in one place. The review often surfaces memories that are stale, duplicate, or stored in the wrong format, which are worth cleaning up while you are restructuring anyway.
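A sketch of what that export amounts to, with hypothetical field names. The actual file write is commented out so the output can be inspected first:

```python
from datetime import date

def export_memories(memories, workspace="workspace"):
    """Render all memories to a dated markdown snapshot.

    The memory dict fields (id, scope, category, text) are assumed;
    adapt them to whatever your memory_list call actually returns.
    """
    path = f"{workspace}/memory-export-{date.today().isoformat()}.md"
    lines = ["# Memory export", ""]
    for m in memories:
        lines.append(f"- **{m['id']}** `{m['scope']}` ({m['category']}): {m['text']}")
    content = "\n".join(lines) + "\n"
    # open(path, "w").write(content)  # uncomment once the preview looks right
    return path, content

path, content = export_memories([
    {"id": "m1", "scope": "agent:shared", "category": "preference",
     "text": "prefers concise output"},
])
```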

Common questions

I added a scope prefix to my memory stores but recalls still return memories from the old default scope. Why?

autoRecall is still querying the default scope. Adding a scope to memory_store writes new memories to the correct scope, but autoRecall pulls from whatever scope is configured in the plugin settings by default. You need to either update the autoRecall scope in the config or explicitly pass the scope parameter to each memory_recall call. The two operations (writing to a scope and reading from a scope) are configured separately. Changing one does not automatically change the other.

How many scopes can I create before performance degrades?

LanceDB, which underlies most OpenClaw memory setups, handles dozens of scopes without performance issues at typical memory volumes (hundreds to low thousands of memories). Performance degrades when a single scope contains tens of thousands of memories rather than when there are many scopes. For practical purposes, create as many scopes as your logical structure needs without worrying about scope count. What you should avoid is letting any single scope accumulate memories indefinitely without periodic cleanup.

My agent keeps writing project-specific memories to the shared scope. How do I stop it?

The agent is following the default autoCapture behavior, which targets whatever scope is set in the config. If autoCapture is set to agent:shared, everything goes there by default. The fix is either to update the autoCapture scope in the config for the current project context, or to add an explicit instruction in your project session prompt: “When storing new memories, use the scope project:[name] for anything related to this project. Only write to agent:shared for facts that apply across all contexts.” The explicit instruction in the prompt is more reliable than a config change for frequent context switching.

Is there a way to have memories automatically go to the right scope based on content?

Not natively. The autoCapture target is a single configured scope, not a routing system. For automatic content-based routing, you would need to write a custom extraction prompt that includes scope assignment logic, or use a post-capture cron job that reviews newly stored memories and moves ones that appear to be in the wrong scope. For most setups, explicit scope instructions in the session prompt are simpler and more reliable than attempting automatic routing.

I deleted a scope accidentally. Can I recover the memories?

If you have git history of your workspace and the memory database is tracked, you can restore the database to a previous state. If the memory database is not in git (common with LanceDB files), the memories are gone unless you have a separate backup. This is a strong argument for running the periodic memory export that writes memories to plain text files in your workspace before any destructive memory operations. Ask your agent to export all memories from a scope to a markdown file before deleting or significantly restructuring scopes.

How do I handle memories that genuinely belong in multiple scopes?

Store them in the most specific applicable scope and accept that they will not appear in unrelated scope queries. If a memory truly needs to be available everywhere, it belongs in the shared scope rather than a project scope. If it belongs in two specific project scopes but not everywhere, store it in both explicitly. Duplicating a memory across two scopes is not an error; it is the right approach when the same fact is relevant in two distinct contexts that should otherwise remain isolated from each other.

I created a new scope but my agent keeps writing to the old default scope. How do I fix it?

Two things need to change, not one. First, update the autoCapture target scope in your memory plugin config to the new scope. Second, start a new session after the config change so the updated settings take effect. In a running session, the autoCapture target is cached from when the session started. A config change during a live session does not update the current session’s behavior. After the new session starts, confirm the target scope by asking your agent to store a test memory and then check which scope it landed in.

Can I have memories that are visible in all scopes, not just one?

Not natively in most memory plugin implementations. The design is one-scope-per-memory rather than one-memory-in-many-scopes. The practical workaround is to store globally relevant memories in the shared scope and configure recall to always include the shared scope in its queries. For memories that are truly universal, the shared scope is the right place, and querying it alongside any project scope gives you cross-scope availability for those memories without needing native multi-scope storage.

How do I handle a memory that starts as project-specific and later becomes a general preference?

Move it. When you notice that a memory you originally stored in a project scope now applies more broadly, delete it from the project scope and re-store it in the shared scope with the same content. This is a normal part of scope maintenance, not a system flaw. Memory content evolves as context generalizes. The operation is: recall the memory, confirm its exact text, store it in the shared scope, then forget it from the project scope.

My memory stats show 500 memories in the default scope and 12 in my new shared scope. Do I need to migrate all 500?

No. Migrate the ones that are actively relevant and will be actively queried. For the 500 in the default scope, do a quick review: how many are from projects that are now finished and whose memories you will never need to recall? How many are stale facts that have been superseded by newer information? How many are genuinely useful global preferences or ongoing project facts? In practice, a 500-memory default scope usually has 50-100 memories worth keeping and actively using. The rest can stay in the default scope as an inert archive that does not affect your new scope structure unless you explicitly query it.

Will adding more scopes slow down my memory recall?

At typical memory volumes (hundreds to a few thousand memories), no measurable slowdown. Scope separation actually improves recall relevance and reduces the number of results returned, which makes the recall call faster because there is less to process. The scenario where scopes hurt performance is the opposite: one very large scope with tens of thousands of memories where every query has to score against the entire pool. Splitting that into multiple smaller scopes would improve performance, not degrade it.

I set up scopes for two different clients but I am still seeing one client’s memories in the other’s context. What did I miss?

Check the autoRecall scope configuration. If autoRecall is configured to query all scopes or a broad wildcard scope, it will include both client scopes in every recall regardless of which project context you are in. The recall needs to be restricted to the active client scope plus the shared scope. Also check whether the session instructions specify which project scope to use for this session. Without an explicit scope in the session context, the recall system has no way to know which client scope is the active one and may query both.

What happens to memories in a scope I deleted?

They are gone if the scope was deleted using the memory plugin’s scope deletion functionality. There is no recycle bin or soft delete in the standard implementation. Before deleting any scope, export its memories to a plain text file in your workspace as a backup. The export takes 30 seconds and protects you from permanently losing information you might need later. Ask your agent to list all memories in the scope before deletion and write them to a markdown file as a record.

Is it possible to rename a scope without migrating all its memories?

Not in most memory plugin implementations as of March 2026. Scopes are not metadata attached to a single location; they are filter criteria applied to the underlying vector store. Renaming a scope requires migrating all memories from the old name to the new name: recall each memory, re-store it with the new scope, delete the old one. For a scope with many memories, this is a batch operation rather than a quick rename. This is one more reason why defining your scope naming convention before you start storing significant volumes of memories saves work later.


Ultra Memory Claw

Complete scope architecture, migration playbook, and monthly audit cron pre-built

The two-layer and three-layer scope schemas, the memory migration workflow, autoCapture and autoRecall config for multi-scope setups, and the session-level scope override pattern. Drop it in and your memory stays organized as your usage grows.

Get Ultra Memory Claw for $37 →
