Your OpenClaw memories are the most valuable part of your setup. They hold everything your agent has learned about how you work, what you have decided, and what it should remember across sessions. Moving to a new server without backing them up first means starting over. This guide covers how to export your memories, verify the backup is complete, and restore them on a new instance without losing anything in the process.
TL;DR
- Where memories live: A LanceDB database directory, typically at ~/.openclaw/memory.lance or the path set in your plugin config.
- The backup: Stop the gateway, copy the database directory, restart. The directory contains binary files that must be copied as a unit.
- The verify step: Ask your agent to recall a specific memory after restore to confirm the backup is intact before decommissioning the old server.
The commands in this article are written as agent prompts. Paste them into your OpenClaw chat and your agent will run them. You do not need a terminal for any of this.
Step 1: Find your memory database
Before you can back anything up, you need to know where the database is. The location depends on your plugin configuration. Ask your agent:
Read my openclaw.json. Find the memory plugin configuration. What is the database path set in the plugin config? If no path is set, what is the default path OpenClaw uses for the memory database? What plugin is handling memory (memory-lancedb, memory-lancedb-pro, or something else)? Is the memory plugin currently enabled?
The response will tell you the exact path. Common locations:
- /home/node/.openclaw/memory.lance: default for most LanceDB plugin installs
- /home/node/.openclaw/workspace/memory.lance: common when the workspace is used as the base path
- A custom path if you configured dbPath in the plugin config
LanceDB is a directory, not a file
The LanceDB memory database is a directory containing multiple binary files. It looks like a folder named memory.lance or similar. You must copy the entire directory as a unit. Copying individual files inside it will produce a corrupt backup. Use cp -r or rsync -a, not individual file copies.
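The copy-as-a-unit rule can be sketched in shell. The database directory below is simulated with mktemp so the sketch is self-contained and runnable anywhere; in a real migration, the source is the path from your plugin config.

```shell
#!/bin/sh
# Simulated database directory standing in for the real one
# (in practice DB_PATH comes from openclaw.json, e.g. ~/.openclaw/memory.lance).
WORK="$(mktemp -d)"
DB_PATH="$WORK/memory.lance"
mkdir -p "$DB_PATH/data"
printf 'segment'  > "$DB_PATH/data/0000.bin"
printf 'manifest' > "$DB_PATH/manifest.bin"

# Copy the WHOLE directory as a unit -- never individual files inside it.
BACKUP="$WORK/memory.lance.bak"
cp -r "$DB_PATH" "$BACKUP"

# Quick sanity check: file counts must match.
src=$(find "$DB_PATH" -type f | wc -l)
dst=$(find "$BACKUP" -type f | wc -l)
[ "$src" -eq "$dst" ] && echo "backup OK: $src files"
```

The same structure applies with rsync -a in place of cp -r; the point is that the directory is treated as one object.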
Step 2: Check how many memories you have
Before backing up, get a baseline count. This is what you will verify against after the restore.
Show me my memory stats. How many total memories do I have? Break them down by category. What is the oldest memory and what is the most recent? Give me the exact counts so I can verify them after a restore.
Write down the total count and the breakdown. You will compare this against the restored instance to confirm the backup captured everything.
Also do a spot check on a specific memory you know exists:
Search my memories for [something specific you remember telling the agent, like your timezone, a project name, or a preference you set explicitly]. Return the exact text of the memory and its ID.
Note the exact text and ID. You will run this same search after restore as a verification probe.
Step 3: Stop the gateway before copying
Copying a LanceDB database while it is being written to can produce a corrupt backup. The gateway must be stopped before you copy the memory directory.
Stop the OpenClaw gateway service. Confirm it has stopped by checking the process status. Do not restart it until I tell you to.
You are about to lose connection
Stopping the gateway ends your current chat session. You will need to reconnect after the gateway restarts. Have the backup command ready before you stop the gateway so you can paste it into your terminal directly if needed. Alternatively, ask your agent to run the entire stop-copy-restart sequence as a single command so the connection loss is brief.
If you prefer to keep the connection live, ask your agent to run the full sequence atomically:
Run this sequence as a single operation: (1) Stop the OpenClaw gateway. (2) Copy the memory database directory to a backup location at /tmp/openclaw-memory-backup-$(date +%Y%m%d). (3) Restart the gateway. (4) Confirm the backup directory exists and report its size. Do all four steps before reporting back.
This approach minimizes downtime. The backup copy on a typical server takes under 5 seconds, so the gateway is down for less than 10 seconds total.
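A minimal sketch of that stop-copy-restart sequence. The gateway commands here are stubs for illustration (on a systemd host they would be something like `systemctl stop openclaw` and `systemctl start openclaw`, an assumption about your setup), and the database path is simulated so the sketch runs anywhere:

```shell
#!/bin/sh
# Stubs for illustration; replace with your real service commands.
stop_gateway()  { echo "gateway stopped"; }
start_gateway() { echo "gateway started"; }

# Simulated database; in practice this is the path from openclaw.json.
DB_PATH="$(mktemp -d)/memory.lance"
mkdir -p "$DB_PATH"
printf 'data' > "$DB_PATH/0000.bin"

# Unique parent dir keeps the sketch re-runnable; the guide's real
# destination is /tmp/openclaw-memory-backup-$(date +%Y%m%d).
BACKUP="$(mktemp -d)/openclaw-memory-backup-$(date +%Y%m%d)"

stop_gateway                 # (1) stop so nothing writes during the copy
cp -r "$DB_PATH" "$BACKUP"   # (2) copy the directory as a unit
start_gateway                # (3) restart immediately
du -sh "$BACKUP"             # (4) report the backup size
```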
Step 4: Copy the database
If you ran the atomic sequence above, skip to the verify step. If you stopped the gateway manually, run the copy now:
Copy the memory database directory to /tmp/openclaw-memory-backup-$(date +%Y%m%d) using cp -r. Then report the size of the backup directory and confirm it contains the same number of files as the original.
After the copy completes, confirm the backup is the right size:
Run du -sh on both the original memory database directory and the backup copy. The sizes should match. Also run ls -la on both and confirm the file count matches. Report any discrepancies.
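What that comparison looks like in shell, with simulated directories standing in for the original database and the backup:

```shell
#!/bin/sh
# Simulated original + backup with identical contents.
W="$(mktemp -d)"
mkdir -p "$W/orig" "$W/bak"
printf 'same bytes' | tee "$W/orig/f.bin" > "$W/bak/f.bin"

# Sizes (in blocks) and file counts should match exactly.
s1=$(du -s "$W/orig" | cut -f1); s2=$(du -s "$W/bak" | cut -f1)
c1=$(find "$W/orig" -type f | wc -l); c2=$(find "$W/bak" -type f | wc -l)
[ "$s1" -eq "$s2" ] && [ "$c1" -eq "$c2" ] && echo "backup verified"
```

Any discrepancy in either number means the copy was incomplete; delete the backup and redo it with the gateway stopped.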
If the sizes match, the backup is complete. Restart the gateway:
Restart the OpenClaw gateway. Confirm it started successfully and that the memory plugin loaded without errors.
Step 5: Move the backup to the new server
The backup needs to get from the old server to the new one. The options depend on what access you have:
Option A: SCP (direct server-to-server)
Transfer the memory backup directory from /tmp/openclaw-memory-backup-[date] on this server to the new server at [new server IP] using scp -r. The destination path should be /tmp/openclaw-memory-restore on the new server. Confirm the transfer completed and report the size on the destination.
Option B: rsync (more reliable for large databases)
Use rsync -av to transfer the memory backup from /tmp/openclaw-memory-backup-[date] to node@[new server IP]:/tmp/openclaw-memory-restore. Use --checksum to verify file integrity during the transfer. Report the final sync stats including bytes transferred and any errors.
Option C: Via your local machine
If direct server-to-server transfer is not available, download the backup to your local machine first, then upload it to the new server. Ask your agent to compress it first:
Compress the memory backup directory at /tmp/openclaw-memory-backup-[date] into a tar.gz archive at /tmp/openclaw-memory-backup.tar.gz. Report the compressed file size.
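A sketch of the compress step. The backup directory is simulated here; in practice the source is the dated backup directory under /tmp:

```shell
#!/bin/sh
# Simulated stand-in for /tmp/openclaw-memory-backup-[date].
W="$(mktemp -d)"
mkdir -p "$W/backup"
printf 'segment' > "$W/backup/0000.bin"

# -C keeps paths in the archive relative, so it extracts cleanly anywhere.
tar -czf "$W/memory-backup.tar.gz" -C "$W" backup

# List the archive contents to confirm it reads back before transferring.
tar -tzf "$W/memory-backup.tar.gz"
```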
Then use SCP or SFTP from your local machine to transfer the tar.gz file.
Step 6: Restore on the new server
On the new server, OpenClaw should already be installed and configured but not yet started with the memory plugin. Connect to the new server’s OpenClaw instance and run:
Read my openclaw.json and tell me the configured memory database path. Do not start the gateway yet. Is there an existing memory database at that path? If so, what is its size?
If there is an existing database at the destination path (from a fresh install or any prior use of the new instance), back it up before overwriting:
If there is an existing memory database at the configured path, move it to /tmp/openclaw-memory-original-backup before we proceed. Confirm it has been moved. Then copy the restore directory from /tmp/openclaw-memory-restore to the configured memory database path. Confirm the copy completed and the directory is in place.
After the restore copy is in place, start the gateway:
Start the OpenClaw gateway. Check the logs for any errors related to the memory plugin loading. Did it load successfully? Did it find the database? Report the first 20 lines of the startup log.
Step 7: Verify the restore
Do not decommission the old server until you have confirmed the restore is complete and accurate. Run three verification checks:
Check 1: Memory count matches
Show me my memory stats. How many total memories do I have? Break them down by category. Compare this to the baseline I noted before the backup: [paste your baseline count here]. Do the numbers match?
Check 2: Spot check a known memory
Search my memories for [the same specific thing you searched for in Step 2]. Return the exact text and memory ID. Compare to the baseline: text should be [exact text from baseline], ID should be [ID from baseline].
Check 3: Recall works end-to-end
Run a memory recall for “user preferences and working style”. Return the top 5 results. Then store a new test memory: “Memory restore verification completed on [today’s date]. All memories confirmed present.” Confirm the new memory was stored successfully. Then recall it back to confirm read-write is working on the restored database.
If all three checks pass, the restore is complete and the new server is ready to use.
Common failures during memory backup and restore
The backup directory size does not match the original
This usually means the copy was interrupted mid-way, or the source database was being written to during the copy. Stop the gateway fully before copying. If the gateway is managed by systemd, run systemctl stop openclaw and confirm it exited before starting the copy. A partial LanceDB copy cannot be recovered; delete it and start over with the gateway stopped.
The gateway starts but the memory plugin reports no memories
The database path in the config on the new server does not match where you placed the restored files. Ask your agent to read the config and confirm the exact path being used, then check whether the restored directory is at that exact path. Case sensitivity matters on Linux: /home/node/.openclaw/memory.lance and /home/node/.openclaw/Memory.lance are different directories.
Memory recall works but returns wrong results
The embedding model on the new server may be different from the one used to create the memories. LanceDB stores embeddings alongside the memory text. If the embedding model changes, the stored vectors no longer correspond to the current model’s vector space, and recall quality degrades. Check that the same embedding model is configured on both servers before migrating.
Read my openclaw.json on this server. What embedding model is configured for the memory plugin? Compare this to the old server’s config. Are they the same model?
The memory plugin fails to load after restore
LanceDB version mismatches between the old and new server can prevent the database from opening. If the plugin was updated between the old server and the new one, the database schema may not be compatible. Check the plugin version on both servers:
What version of the memory plugin is installed on this server? Check the package.json or node_modules directory for the memory-lancedb or memory-lancedb-pro package version. Report the exact version string.
If the versions differ, install the same version on the new server before restoring the database. A newer plugin may have a migration path; a downgrade typically does not.
Transfer appears complete but files are corrupt
Use rsync with the --checksum flag instead of scp for transfers over unreliable connections. rsync verifies each file after transfer and re-sends any file whose checksum does not match. For databases over 100 MB, always use rsync over scp.
Setting up scheduled memory backups
A one-time backup before migration is the minimum. For ongoing protection, set up a daily backup cron job that keeps a rolling 7-day window of memory snapshots.
Set up a daily cron job that runs at 3am server time and does the following: (1) Stops the gateway. (2) Copies the memory database directory to a backup path with today’s date in the name (e.g., /var/backups/openclaw-memory/YYYY-MM-DD). (3) Deletes backups older than 7 days from that directory. (4) Restarts the gateway. (5) Sends me a Telegram message confirming the backup completed with the backup size. Show me the cron job configuration before creating it.
Review the configuration before confirming. Make sure the backup path has enough disk space for 7 days of memory snapshots. For a typical installation with a few hundred memories, each snapshot is under 50 MB. Seven days of snapshots is under 400 MB total.
Off-server backups matter more than on-server backups. A daily cron job that backs up the memory database to a directory on the same server does not protect against server failure. For real protection, add a second step to the cron job that syncs the backup to an off-server location: an S3 bucket, a different VPS, or your local machine via rsync. The incremental sync after the first full copy is fast and cheap.
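The retention step of that cron job can be sketched as follows. The backup root is simulated so the sketch runs anywhere, and the crontab line at the end is a hypothetical example of how the full script would be scheduled:

```shell
#!/bin/sh
# Rolling retention: delete dated snapshot dirs older than 7 days.
# Simulated root; in practice something like /var/backups/openclaw-memory.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/2024-01-01" "$ROOT/recent"
touch -t 202401010000 "$ROOT/2024-01-01"    # simulate an old snapshot

find "$ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
ls "$ROOT"                                   # only "recent" should remain

# Hypothetical crontab entry wrapping the full stop-copy-prune-start script:
#   0 3 * * * /usr/local/bin/openclaw-memory-backup.sh
```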
Understanding what is in the memory database
Knowing what the database contains helps you understand what you are backing up and what could go wrong. A LanceDB memory database stores three types of data for each memory:
- The text content. The actual memory string, the category, the importance score, the scope, and any metadata attached to it. This is the part you can read.
- The vector embedding. A numerical representation of the memory’s meaning, generated by your embedding model (usually nomic-embed-text or a Jina model). This is what makes semantic search work. You cannot read it, but it is what allows recall to find memories that are conceptually related to your query even when the exact words do not match.
- The index structures. LanceDB builds and maintains index files that make search fast. These are rebuilt automatically if missing, but rebuilding takes time on large databases and the rebuild quality depends on having the right embedding model loaded.
When you copy the database directory as a unit, you copy all three. When you export memories as text only, you get the text content but lose the vector embeddings. That means the new instance has to re-embed everything from scratch, which requires the embedding model to be running and takes time proportional to your memory count.
For most migrations, copying the full directory is faster and produces better results. The text-export approach is useful when you want to inspect or edit memories before migrating, or when you are moving between incompatible LanceDB versions where the binary format changed.
How to export openclaw memories as text (when you need it)
There are two scenarios where text export makes sense over a full database copy: when you want to review and clean up memories before migrating, and when you suspect the database has corruption that would carry over if you copy it directly.
List all of my memories. For each memory, include: the memory ID, the text, the category, the importance score, the scope, and the creation date if available. Format the output as a JSON array. Write the result to a file at /tmp/memories-export-$(date +%Y%m%d).json. Confirm the file was written and report how many entries it contains.
After the export, verify it is complete:
Read the file at /tmp/memories-export-[date].json. Count the total entries. Compare this to the memory stats count. Do they match? If not, which memories are missing from the export?
Once you have a clean JSON export, you can review and edit it before importing. Remove memories you do not want to carry over, correct any that have wrong categories or importance scores, and add any you want to pre-seed on the new instance.
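An entry in the export might look like the following. The exact field names depend on your plugin version, and every value here is a made-up example; treat the shape as an assumption to check against your own export:

```json
[
  {
    "id": "mem_0193f2",
    "text": "User timezone is Europe/Berlin.",
    "category": "preference",
    "importance": 0.8,
    "scope": "agent:main",
    "createdAt": "2025-01-12T09:30:00Z"
  }
]
```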
To import the text export on the new server, the process is slower than a database copy but straightforward:
Read the file at /tmp/memories-export-[date].json. For each entry in the array, store it as a memory using the memory_store tool. Use the text, category, importance, and scope from the export. After all entries are stored, run memory stats to confirm the count matches the import file. Report how many were imported successfully and how many failed.
This process re-embeds each memory using the current embedding model on the new server. For databases with hundreds of memories, it may take 10 to 30 minutes depending on the embedding model and hardware. Let it run; interrupting it leaves a partial import.
Cleaning up memories before migrating
Migration is a good time to audit and clean the memory database before copying it to the new server. Stale, duplicate, and low-quality memories slow down recall and reduce the signal-to-noise ratio of results.
Audit my memories for quality issues. Look for: (1) duplicate or near-duplicate memories that store the same fact more than once, (2) memories with importance scores below 0.3 that are unlikely to be useful, (3) memories that reference outdated information (old server IPs, old API keys, old project names). List them by category and count. Do not delete anything yet.
Review the list before acting. Some low-importance memories are intentional; some outdated memories remain useful for historical context. After review:
Delete the memories I have identified for removal: [list the memory IDs or descriptions]. Confirm each deletion. After all deletions, run memory stats and confirm the new total count.
Then take the backup. A cleaned database migrates faster, uses less disk space, and produces better recall results on the new server.
Matching plugin configuration on the new server
A successful database restore still fails if the plugin configuration on the new server does not match the old one. Three config values must match exactly:
The embedding model
The vectors stored in the database were generated by a specific embedding model. If the new server uses a different model, the stored vectors are incompatible with the new model’s vector space. Recall will appear to work but return wrong results. Always use the same embedding model on both servers. If you are switching embedding models intentionally as part of the migration, delete all stored embeddings and re-embed everything from the text export.
What embedding model is configured in my memory plugin? What is the exact model name and where is it hosted (local Ollama, Jina API, OpenAI, etc.)? I need to confirm this matches my old server configuration before I finalize the restore.
The scope configuration
Memories are stored and retrieved by scope. If the old server used agent:main as the scope and the new server defaults to a different scope, memories stored on the old server will not be retrieved by queries on the new one even though they are in the database. Check that the scope setting matches.
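As an illustration only, a plugin block with matching path, scope, and embedding settings might look like this. The surrounding key structure and the embeddingModel key name are assumptions, so check it against your actual openclaw.json rather than copying it verbatim:

```json
{
  "plugins": {
    "memory-lancedb": {
      "dbPath": "/home/node/.openclaw/memory.lance",
      "scope": "agent:main",
      "embeddingModel": "nomic-embed-text"
    }
  }
}
```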
The plugin version
LanceDB database format changes between plugin versions. A database created under one plugin version may not open correctly under a later one if the schema changed in between. Check versions before migrating and either match the version on the new server or confirm the plugin has a migration path for the version difference.
What to do if the restore fails
If the restore does not work and you cannot get memories back from the database copy, the text export is your last resort. Even if you did not create a text export before migrating, you can create one from the original database on the old server if it is still accessible.
If neither server has a working database, check whether your git repository contains any memory files. Some OpenClaw setups commit daily memory logs to a workspace git repository. Those logs are not a complete restore but they contain enough context to recreate the most important memories manually.
Search my workspace git history for any memory files or daily log files committed in the last 30 days. List them with their dates and sizes. I need to assess whether they contain enough to reconstruct my memory state.
If you have daily memory markdown files in the workspace, use them to manually re-store the most critical facts. Prioritize: API credentials and infrastructure facts (easiest to lose, hardest to recover), standing preferences (things you have told the agent repeatedly that shape how it behaves), and active project context (what is in progress and where you left off).
Handling large memory databases
Memory databases grow over time. A setup with autoCapture enabled and active daily use can accumulate tens of thousands of memories over months of operation. Large databases have specific considerations that do not apply to smaller ones.
Estimating transfer time before you start
How large is my memory database directory? Run du -sh on the database path and report the total size. Then tell me the available disk space at /tmp and the estimated transfer time to a server with a 1 Gbps network connection.
As a rough guide: a database with 500 memories is typically 20 to 50 MB. A database with 5,000 memories is typically 200 to 500 MB. A database with 50,000 memories may exceed 2 GB. At these sizes, compression before transfer is worth the extra step. The gateway stop time for a 2 GB uncompressed copy is 30 to 60 seconds; for the same data compressed to 500 MB, it is 5 to 10 seconds.
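The back-of-envelope arithmetic behind those transfer estimates, assuming roughly 110 MB/s of effective throughput on a 1 Gbps link:

```shell
#!/bin/sh
# Rough transfer-time estimates at ~110 MB/s effective (1 Gbps link).
rate_mb_s=110
uncompressed_mb=2048      # 2 GB database copied as-is
compressed_mb=500         # same data after tar.gz compression
echo "uncompressed: ~$(( uncompressed_mb / rate_mb_s )) s"
echo "compressed:   ~$(( compressed_mb / rate_mb_s )) s"
```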
Incremental backups for active databases
For databases over 500 MB with active daily use, a full copy on every backup cycle is inefficient. LanceDB supports incremental sync because it writes new data to new files without modifying existing ones. rsync can take advantage of this: after the first full backup, subsequent rsync runs only transfer files that have changed since the last sync.
Set up an rsync-based incremental backup for my memory database. The source should be the memory database path. The destination should be /var/backups/openclaw-memory/latest. Add a --link-dest option pointing to yesterday's backup for efficient storage. The first run should do a full sync; subsequent runs should only transfer changed files. Show me the rsync command before running it.
Verifying integrity on large databases
For large databases, a size comparison alone is not sufficient to verify backup integrity. Add a checksum step:
Generate a checksum manifest for my memory database backup. Run md5sum (or sha256sum if available) on every file in the backup directory and write the results to /tmp/memory-backup-checksums.txt. After I transfer the backup to the new server, I will run the same command there and compare the outputs to verify integrity.
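The manifest generation and comparison can be sketched like this, with simulated source and destination directories standing in for the two servers:

```shell
#!/bin/sh
# Simulated source and destination copies of a backup.
SRC="$(mktemp -d)"; DST="$(mktemp -d)"
printf 'segment' > "$SRC/0000.bin"
cp "$SRC/0000.bin" "$DST/0000.bin"

# sha256sum over every file, sorted so the manifests diff cleanly.
( cd "$SRC" && find . -type f -exec sha256sum {} + | sort ) > "$SRC.sums"
( cd "$DST" && find . -type f -exec sha256sum {} + | sort ) > "$DST.sums"

# An empty diff means every file transferred intact.
diff "$SRC.sums" "$DST.sums" && echo "checksums match"
```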
On the new server after transfer, run the same checksum command and diff the two manifest files. Any line that differs indicates a file that did not transfer correctly and needs to be re-copied.
Complete pre-migration checklist
Use this checklist before every migration. Each item corresponds to a section in this article:
- Confirmed memory database path from plugin config
- Noted total memory count and category breakdown (baseline for verify step)
- Noted text and ID of a specific known memory (spot check probe)
- Cleaned up stale, duplicate, and outdated memories before backup
- Confirmed embedding model name and version (to match on new server)
- Confirmed scope configuration (to match on new server)
- Confirmed plugin version (to match on new server)
- Stopped gateway before copying database
- Copied database directory as a unit with cp -r or rsync -a
- Verified backup size matches original
- Restarted gateway after copy
- Transferred backup to new server
- Restored database to correct path on new server
- Started gateway and confirmed memory plugin loaded without errors
- Verified memory count matches baseline on new server
- Verified spot check memory returns correct text and ID
- Confirmed read-write works with a test store and recall
- Set up scheduled backups on new server
- Confirmed old server is not decommissioned until all checks pass
This checklist exists because the most common migration failure is skipping the verify step at the end. Operators stop the old server as soon as the new one appears to work, then discover a day later that recall is returning wrong results because the embedding model did not match. Keep the old server available until all three verification checks pass and you have run the new server in production for at least 24 hours. The 24-hour window catches problems that only appear after the first real workday of use, which is when autoCapture, scheduled crons, and interactive sessions all run together for the first time on the restored database.
After the migration: first week monitoring
A successful restore confirmation is a point-in-time check. The first week of operation on the new server reveals problems that did not show up in verification: recall quality drift, slower response times from the new hardware, memory extraction failing silently, or scoping issues that only appear when the conversation history grows long enough to trigger them.
Run a quick health check on day 3 and day 7 after migration:
Run a memory health check. (1) Show current memory stats and compare to the baseline count from migration day. Has the count grown as expected? (2) Run a recall for “user preferences” and confirm the results look correct. (3) Check whether any memories stored since migration have correct categories and importance scores. (4) Report the average embedding time for the last 10 stored memories if available in the logs. Flag anything that looks wrong.
If the memory count is not growing after migration, autoCapture may have stopped working. This is usually a config issue on the new server: the memory plugin is loaded but the extraction LLM config was not carried over, or the extraction model is not reachable from the new server. Check the plugin config and confirm the extraction model endpoint is accessible.
If recall is returning results that feel off, the embedding model may have changed between servers. Run the structured output benchmark from the local model testing article against the embedding endpoint to confirm it is returning valid vectors. Wrong-shaped vectors produce plausible-looking but semantically wrong recall results. The symptom is that recall returns results that are syntactically related to the query term but not conceptually relevant. A query for “user preferences” returns memories about infrastructure config instead of working style preferences. That pattern is almost always an embedding model mismatch, not a data corruption issue.
Common questions
Do I need to stop the gateway before copying the memory database?
Yes. LanceDB maintains write locks and transaction logs. Copying a database that is actively being written to can produce a corrupt or incomplete backup. Stop the gateway, confirm it has exited, then copy. The downtime is typically under 30 seconds.
My restore looks complete but the agent says it has no memories. What happened?
Check three things: first, that the database path in your config matches where you restored the files. Second, that the restored files have the same permissions as the originals (the node user needs read/write access). Third, that the gateway was fully stopped before you restored and fully started after. A partial restart can cause the plugin to initialize before the restored database is fully accessible.
How large is a typical memory database?
It depends on how long you have been running and how aggressively autoCapture is configured. A typical setup after six months of daily use with moderate autoCapture lands between 200 MB and 1 GB. The vector data (embedding dimensions times number of memories) dominates the size. At 1024 dimensions and 4 bytes per dimension, each memory uses about 4 KB for vectors alone.
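The per-memory arithmetic, assuming float32 (4-byte) embedding components:

```shell
#!/bin/sh
# Vector storage per memory: embedding dimensions x 4 bytes (float32).
dims=1024
per_memory=$(( dims * 4 ))
echo "$per_memory bytes per memory for vectors alone"
```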
Can I back up memories from one OpenClaw version and restore to a different version?
Usually yes, as long as the memory plugin version is the same or compatible. The LanceDB format is relatively stable. Where this fails is if the schema changed between plugin versions. Check the plugin changelog before attempting a cross-version restore. If the schema changed, you may need to export memories as text and re-import rather than doing a direct database file copy.
Is it safe to run backups on a live instance without stopping the gateway?
For live snapshots (an rsync copy taken while the gateway is running), the risk is low but not zero. You might capture the database mid-write, producing a snapshot where the latest transaction is incomplete. For production use or pre-migration backups, stopping the gateway is safer. For regular daily backups where losing the last few seconds of writes is acceptable, a live rsync is a reasonable tradeoff.
