Memory worked on your laptop the first day you set it up because your laptop had years of accumulated tooling that satisfied every dependency without you ever noticing. Ollama was already installed from a previous project. Build tools were already present from a Node.js setup months ago. The data directory was created automatically, owned by your user account. You moved to a VPS, followed the same steps, and now recall returns nothing, memories do not appear to be stored, or the plugin fails silently. The VPS is not a different product, but it is a different environment, and memory plugins have specific dependencies that a laptop install satisfies automatically and a fresh VPS does not.
TL;DR
Memory failures on a VPS are almost always one of four things: the embedding model is not running, the LanceDB data directory does not exist or has wrong permissions, the plugin is installed but not enabled, or a native dependency (sharp, better-sqlite3) failed to compile during npm install. Ask your agent to run the diagnostic below before changing anything.
Step 0: Confirm the plugin is loaded before anything else
Check the OpenClaw gateway startup logs. Did the memory plugin load successfully? Look for any initialization errors for the memory plugin specifically. Tell me the exact log lines for the memory plugin startup sequence.
If the plugin did not load at all, the diagnostic in the next section will not return useful results. Fix the plugin load error first.
Run the full diagnostic
Before adjusting any config, get a clear picture of what is actually broken. Memory failures on VPS installs have five common causes, and each requires a different fix. Changing things without knowing which cause you have leads to config drift and makes the problem harder to diagnose.
Run a full memory diagnostic on this VPS. This covers the most common reasons operators report memory not working on a VPS, from LanceDB setup to missing Ollama. Check: (1) Is the memory plugin installed and enabled in my config? (2) What embedding model is configured and is it reachable right now? (3) Does the memory data directory exist and does OpenClaw have read/write access to it? (4) What does memory_stats return? (5) What happens if I try to store one test memory right now? Report every finding with the exact output of each check.
The output of this diagnostic tells you which cause applies. The sections below address each one.
Cause 1: The embedding model is not running
On your laptop, Ollama was already installed and running with nomic-embed-text pulled before you set up OpenClaw. On a fresh VPS, none of that is true. The memory plugin cannot store or retrieve memories without a working embedding model. It needs to convert text to vectors, and that conversion requires a running model.
Check whether Ollama is installed and running on this server. Then check whether nomic-embed-text is available. Show me the output of both checks and tell me what is missing.
Manual check
If your agent cannot run shell checks: SSH into the VPS and run ollama list. If the command is not found, Ollama is not installed. If it is found but nomic-embed-text is not in the list, the model has not been pulled. Run ollama pull nomic-embed-text to fix it.
Installing Ollama on the VPS
Install Ollama on this server using the official install script, then pull nomic-embed-text, then verify both are working. Do not restart OpenClaw until both checks pass. Show me the output at each step.
Ollama needs to stay running after install. On a VPS, the Ollama process will stop when the install script exits unless it is configured as a systemd service. Ask your agent to check:
Is Ollama configured to start automatically on this server? Check whether a systemd service exists for it. If not, set one up so Ollama starts on boot and stays running. Show me what you find and what you plan to change before making any changes.
OLLAMA_KEEP_ALIVE
By default Ollama unloads models from memory after 5 minutes of inactivity. On a VPS where memory recall happens in bursts, the model may be unloaded between requests, causing the first recall after a gap to time out. Set OLLAMA_KEEP_ALIVE=-1 in the Ollama systemd service environment to keep models loaded permanently.
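One way to set this, assuming Ollama was installed by the official script as a systemd service named ollama (the unit name may differ on your install), is a systemd drop-in:

```ini
# /etc/systemd/system/ollama.service.d/keepalive.conf
[Service]
Environment="OLLAMA_KEEP_ALIVE=-1"
```

Apply it with sudo systemctl daemon-reload && sudo systemctl restart ollama, then confirm the variable took effect with systemctl show ollama | grep OLLAMA_KEEP_ALIVE.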
If you are using a remote embedding API instead of Ollama
Some setups use a remote embedding service (Jina, OpenAI embeddings, Cohere) instead of local Ollama. The failure mode is different: the model is reachable but the API key is wrong or the endpoint URL in the config does not match the VPS environment.
Read my memory plugin config. What embedding provider and endpoint is configured? Make a test embedding call to that endpoint right now with a short test string. Report whether it succeeds and show the response.
Cause 2: The data directory does not exist or has wrong permissions
LanceDB stores memory data on disk. It needs a directory to write to, and that directory needs to be writable by the user running OpenClaw. On a laptop, this directory was created automatically and your user account owned it. On a VPS, especially one where OpenClaw was installed with a different user than your local setup, the directory may not exist or may be owned by root.
Check the LanceDB data directory path from my memory plugin config. Does that directory exist? What are the permissions on it? What user owns it? What user is OpenClaw running as? If the directory does not exist or is not writable by the OpenClaw user, tell me exactly what needs to change.
Finding the configured data directory
The data directory path is set in your memory plugin config under something like dataPath or dbPath. Common default paths: /home/node/.openclaw/memory.lance, /home/node/.openclaw/workspace/memory.lance, or a path relative to the workspace root. If the path is relative, it is resolved from the OpenClaw process working directory, which may differ between laptop and VPS installs.
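A minimal sketch of what the relevant config section might look like. The field names here (the plugin key and dataPath) are illustrative, built from the common defaults above, not a definitive schema:

```json
{
  "plugins": {
    "memory-lancedb-pro": {
      "enabled": true,
      "dataPath": "/home/node/.openclaw/memory.lance"
    }
  }
}
```

Prefer an absolute path on a VPS so the resolved location does not depend on the process working directory.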
Fixing the data directory
Create the LanceDB data directory if it does not exist. Set the owner to the user running OpenClaw and set permissions to 755. Then verify the directory is writable by running a test write. Show me each step’s output.
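If you are doing this by hand, the fix is a sketch like the following. The path is a placeholder for whatever dataPath your config actually uses, and the chown step needs root if the directory is currently owned by another user:

```shell
# Placeholder path -- substitute the dataPath from your memory plugin config.
DATA_DIR="$HOME/.openclaw/memory.lance"

# Create the directory if missing and set standard permissions.
mkdir -p "$DATA_DIR"
chmod 755 "$DATA_DIR"

# If it is owned by root or another user, fix ownership (needs root):
#   sudo chown -R <openclaw-user>:<openclaw-user> "$DATA_DIR"

# Verify the current user can actually write here.
touch "$DATA_DIR/.write-test" && rm "$DATA_DIR/.write-test" && echo "writable"
```

If the final line does not print "writable", the directory is still not writable by the user you ran this as, and the ownership step above is the likely missing piece.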
After fixing the directory, restart OpenClaw and run a test memory store:
Store a test memory: “VPS memory setup verified on [today’s date]” with category fact and importance 0.8. Then immediately recall it using “VPS memory setup”. Confirm the store and recall both succeed.
Cause 3: The plugin is installed but not enabled
Installing a memory plugin with npm or clawhub does not automatically enable it. The plugin needs to be listed in your OpenClaw config with "enabled": true. On your laptop, you enabled it during setup. On the VPS, if you copied only some of your config or started from a fresh install, the plugin may be installed but disabled, or the wrong plugin may be enabled.
Read my openclaw.json. List every memory-related plugin entry, including any that are disabled. Tell me which one is currently active, which one my config says should be the memory provider, and whether there are any conflicts (e.g., two memory plugins both enabled, or the stock memory-lancedb enabled alongside memory-lancedb-pro).
The stock plugin conflict
If you are using memory-lancedb-pro (the extended plugin), the stock memory-lancedb plugin must be disabled. When both are enabled, the stock plugin shadows the pro plugin’s tool registrations and the pro plugin’s memory tools become unreachable. The symptom: memory_store and memory_recall exist but behave like the basic plugin with no autoCapture, no autoRecall, and no importance scoring.
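In openclaw.json terms, the fixed state looks roughly like this (the plugin keys are illustrative; match them to the actual entry names in your config):

```json
{
  "plugins": {
    "memory-lancedb": { "enabled": false },
    "memory-lancedb-pro": { "enabled": true }
  }
}
```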
Enabling the correct plugin
Show me the current plugin config for all memory-related entries. I want to enable [plugin name] and make sure no other memory plugin is also enabled. Show me the exact config changes needed before applying anything.
After enabling the correct plugin and disabling any conflicts, you need a fresh session for the change to take effect. Config changes to plugin enable/disable status are read at session start, not applied live.
Cause 4: Native dependency compilation failed
LanceDB and some memory plugins have native Node.js dependencies: compiled C++ bindings like better-sqlite3 or native LanceDB binaries. On your laptop, these compiled successfully because you had the right build tools installed. On a fresh Ubuntu VPS, the build tools (gcc, g++, make, python3) may not be present, causing the native dependency to fail during npm install with an error that looks like a network problem or a generic install failure.
Check whether the memory plugin installed correctly on this server. Look for any native dependency errors in the npm install logs or in the OpenClaw gateway logs. Also check whether build-essential and python3 are installed on this server.
Manual check for build tools
SSH into the VPS and run: gcc --version && python3 --version && make --version. If any of these fail with “command not found”, the build tools are missing. Install them with: sudo apt-get install -y build-essential python3. Then reinstall the memory plugin.
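A compact version of that check, which reports each tool separately instead of stopping at the first failure:

```shell
# Report each build tool individually; any MISSING entry means native
# npm modules will fail to compile on this machine.
for tool in gcc g++ make python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done

# Debian/Ubuntu fix for anything missing:
#   sudo apt-get update && sudo apt-get install -y build-essential python3
```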
Reinstalling after fixing build tools
Reinstall the memory plugin from scratch. First stop the OpenClaw gateway. Then remove the plugin’s node_modules and reinstall. Check the install output for any native compilation errors. If it succeeds, restart the gateway and run memory_stats to confirm the plugin loaded correctly.
If the reinstall still fails with a native compilation error after build tools are installed, the issue may be a Node.js version mismatch. LanceDB’s native bindings are compiled for specific Node.js ABI versions. Check:
What version of Node.js is running on this server? What version does the memory plugin’s package.json specify as the required Node.js version? Are they compatible?
Cause 5: Scope mismatch between laptop and VPS config
A less common but genuinely confusing failure: memories store successfully on the VPS but recall returns nothing. Everything appears to be working: no errors, memory_stats shows a count, but the agent cannot find anything when asked to recall.
This is almost always a scope mismatch. Your laptop config specified one scope (for example, agent:main) and your VPS config either has no scope set (defaulting to default or the session key) or has a different scope. Memories stored under one scope are not visible to queries using a different scope.
What scope is my memory plugin configured to use on this server? Run memory_stats with scope=agent:main and also without specifying a scope. Report what each returns. Are memories being stored under a different scope than they are being recalled under?
Scope configuration note
The memory-lancedb-pro plugin does not support a scopes.default config field. The scope must be set explicitly as agent:main (or your configured scope) in every memory tool call, and it must match what is set in the plugin’s scopes config. If the plugin config has no explicit scope and you are passing agent:main in tool calls, the plugin may be storing under a derived scope that does not match.
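As an illustration only — the exact schema belongs to your plugin version, so treat these field names as hypothetical — the goal is that the scope in the plugin config and the scope passed in tool calls are the same string:

```json
{
  "plugins": {
    "memory-lancedb-pro": {
      "scopes": ["agent:main"]
    }
  }
}
```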
Migrating your memories from laptop to VPS
Once memory is working on the VPS, you may want to bring your existing memories over from the laptop. This is not automatic. You need to copy the database file.
I want to migrate my memory database from my laptop to this VPS. What is the exact path of the LanceDB data directory on this server? What is the name of the database file or directory structure I need to copy? What is the safest procedure: stop gateway, copy, verify, restart?
The migration procedure:
- On the VPS: stop the OpenClaw gateway
- On the laptop: locate the LanceDB data directory (ask your laptop agent for the path)
- Copy the entire data directory from laptop to VPS using scp or rsync, replacing the VPS data directory
- On the VPS: verify the copied directory has the correct ownership and permissions
- On the VPS: restart the gateway and run memory_stats to confirm the count matches what was on the laptop
- Run a recall check on 3 specific memories you know were on the laptop
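The copy step itself can be sketched like this. The paths are placeholders, and in the real migration the destination is a remote target (user@vps:/path) reached over ssh. The demonstration below uses local paths so the trailing-slash semantics are visible: a trailing slash on the source copies the directory's contents, not the directory itself:

```shell
SRC="/tmp/memory.lance.demo"        # stands in for the laptop data directory
DEST="/tmp/memory.lance.migrated"   # stands in for the VPS data directory

# Simulate a minimal LanceDB directory (the real one holds a manifest,
# version directories, and index files -- copy all of it, always).
mkdir -p "$SRC" "$DEST"
touch "$SRC/_latest.manifest"

# rsync preferred (resumable, preserves permissions); cp -a as a fallback.
rsync -az "$SRC/" "$DEST/" 2>/dev/null || cp -a "$SRC/." "$DEST/"
ls "$DEST"
```

In the real migration, replace DEST with the remote path (e.g. user@your-vps:/home/node/.openclaw/memory.lance/) and run it from the laptop with the VPS gateway stopped.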
LanceDB directory structure
LanceDB stores data as a directory, not a single file. The directory contains multiple files including _latest.manifest, version directories, and index files. You must copy the entire directory, not individual files inside it. Copying only the data files without the manifest causes LanceDB to fail on open.
Preventing this from happening again
The root cause of the laptop/VPS gap is that VPS environments do not have the same implicit dependencies a developer laptop accumulates over years of use. Document what your memory setup actually requires so any future VPS migration starts from a checklist, not from debugging.
Generate a memory setup checklist for this VPS based on what is currently working. Include: Ollama version, embedding model name, data directory path, plugin name and version, scope configuration, and any environment variables set (OLLAMA_KEEP_ALIVE, etc.). Save this to a file called memory-setup-checklist.md in my workspace.
The checklist file becomes your recovery document for the next migration. Every VPS rebuild, cloud provider switch, or new team member starts from the checklist instead of from memory. The checklist also serves as a quick sanity check when memory breaks: work through each item in order and confirm the current state matches what was working before. Because the checklist is version-pinned (it includes exact Ollama version, exact plugin version, and exact Node.js version), it also surfaces version drift: if Node.js was upgraded by an automatic system update and the native plugin bindings no longer match, the checklist will show a mismatch before you spend an hour debugging in the wrong direction. The first item that does not match is your problem. This is faster than running the diagnostic blockquote above because the checklist is specific to your exact setup rather than generic, and it includes the version numbers and paths that are unique to your environment.
Update the checklist every time you make a change to the memory setup. A checklist that is 6 months out of date is worse than no checklist because it sends you down the wrong path with false confidence. Treat it as a living document, the same way you treat your openclaw.json. The maintenance cost is one blockquote run after each change. That is a reasonable investment for a document that saves hours of debugging on the next migration.
Reading the gateway logs to diagnose memory failures
When the diagnostic blockquote above does not return a clear answer, the gateway logs contain the actual error. Memory plugin failures generate log entries that point directly at the cause: a timeout reaching the embedding model, a permission denied error on the data directory, a missing module error from a failed native compile.
Check the OpenClaw gateway logs for any errors related to memory, LanceDB, embeddings, or plugin initialization. Look at the last 200 lines. Show me every line that contains ERROR, WARN, or any of those keywords.
Common log patterns and what they mean:
- Cannot find module ‘@lancedb/lancedb’: The LanceDB native package did not install. Reinstall after adding build tools.
- ENOENT: no such file or directory (on the data path): The data directory does not exist. Create it and set correct ownership.
- EACCES: permission denied: The data directory exists but is owned by a different user. Fix ownership.
- connect ECONNREFUSED 127.0.0.1:11434: Ollama is not running. Start the Ollama service.
- Error: timeout (during embedding): Ollama is running but the embedding request timed out, likely because the model is loading. Wait 30 seconds and retry, or set OLLAMA_KEEP_ALIVE=-1 to prevent model unloading.
- Plugin memory-lancedb loaded but memory tools unavailable: The stock plugin is shadowing the pro plugin. Disable the stock plugin.
Based on what you find in the logs, tell me the root cause of the memory failure and the exact fix. Do not apply anything yet. Show me the plan first.
Verifying the full memory pipeline end to end
Once you believe the issue is fixed, do not just check that memory_stats returns a number. Run a full end-to-end verification that confirms every part of the pipeline is working: store, embed, write to disk, read from disk, embed query, retrieve.
Run a complete memory pipeline test. Do this in order: (1) Store a test memory with text “Pipeline verification test [timestamp]”, category fact, importance 0.9. (2) Wait 5 seconds. (3) Recall using the query “pipeline verification test”. (4) Confirm the recalled memory matches what was stored. (5) Run memory_stats and confirm the count increased by 1. Report pass or fail at each step with the exact tool output, not a summary.
If step 1 succeeds but step 3 returns nothing, you have a store-vs-recall scope mismatch. If step 1 fails immediately, the issue is still in the embedding or write path. The five-step test tells you exactly where the break is.
After the pipeline test passes, run one more check to confirm autoCapture is working (if you have it enabled). Have a short conversation that mentions a specific fact you would not normally discuss, then check whether that fact was extracted:
My favorite color is cerulean blue. Now check whether that fact was captured in memory. Search for “favorite color” and tell me whether a new memory was stored in the last 2 minutes.
If autoCapture is enabled and working, a memory about cerulean blue should appear within a minute of the conversation. If it does not, check whether the extractMinMessages threshold is set higher than 1 in the plugin config. A threshold of 5 means no extraction runs until the conversation has at least 5 messages.
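If the threshold is the issue, the change is one field. The surrounding structure here is illustrative, not the plugin's exact schema — only extractMinMessages comes from the behavior described above:

```json
{
  "plugins": {
    "memory-lancedb-pro": {
      "autoCapture": {
        "enabled": true,
        "extractMinMessages": 1
      }
    }
  }
}
```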
VPS-specific performance considerations
Memory performance on a VPS differs from a laptop in ways that affect usability even when everything is technically working. The three most common performance gaps are disk I/O latency, RAM constraints, and cold start latency after a model unload. All three are addressable. None of them are bugs. They are environmental differences between a developer laptop and a minimal VPS that require explicit configuration to close.
Disk I/O
LanceDB is read/write intensive. VPS storage (especially shared NVMe on budget providers) has higher latency than a local SSD. If recall takes 3 to 5 seconds on a VPS that took under a second on a laptop, the issue is not a bug, it is disk latency. Check whether your VPS provider offers a faster storage tier, or move the LanceDB data directory to a RAM disk for read-heavy workloads.
RAM constraints
Ollama with nomic-embed-text loaded uses approximately 500MB of RAM. On a 1GB VPS, this leaves limited headroom for the OpenClaw Node.js process and LanceDB in-memory operations. If the VPS has less than 2GB RAM and you are running both OpenClaw and Ollama, memory operations may fail intermittently due to OOM kills. Check:
Check the current memory usage on this server. How much RAM is in use and how much is free? Has the OOM killer killed any processes recently? Check dmesg or /var/log/syslog for OOM events in the last 24 hours.
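The manual version of that check, assuming a standard Linux VPS (dmesg may need root, and the syslog path varies by distro):

```shell
# Current RAM usage.
free -h

# Recent OOM killer activity; either source may be restricted or absent.
dmesg 2>/dev/null | grep -i "out of memory" || echo "no OOM events visible in dmesg"
grep -i "out of memory" /var/log/syslog 2>/dev/null || echo "no OOM events in syslog (or file absent)"
```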
If RAM is the constraint, the options are: upgrade the VPS to 2GB minimum, switch to a remote embedding API to eliminate the Ollama RAM overhead, or reduce the Ollama model size (though nomic-embed-text is already the smallest practical embedding model for quality results).
Cold start latency
If the VPS reboots or Ollama restarts, the first memory operation after startup takes longer than normal because the embedding model needs to load into memory. This is expected behavior, not a bug. With OLLAMA_KEEP_ALIVE=-1, the model stays loaded between operations. Without it, each operation after a gap triggers a reload. On a VPS with limited RAM, the keep-alive setting trades RAM for latency. Choose based on your setup.
Docker and containerized installs
If OpenClaw is running in a Docker container on the VPS, memory failures have additional causes specific to containerized environments.
Volume mounts for the data directory
If the LanceDB data directory is inside the container filesystem rather than mounted as a volume, it is lost every time the container restarts. The data directory must be a volume mount pointing to persistent storage on the host.
Is the LanceDB data directory inside the container filesystem or mounted as a volume? Check the current container configuration and tell me whether the memory data persists across container restarts.
Ollama in a separate container
If Ollama is running in a separate container, the OpenClaw container needs to reach it at a network address, not at 127.0.0.1:11434. The embedding URL in the plugin config must use the Ollama container’s name or the host network address.
Docker networking note
With --network host, both containers share the host network and 127.0.0.1:11434 works. With bridge networking, use the Ollama container name as the hostname (e.g., http://ollama:11434) and make sure both containers are on the same Docker network. The default bridge network uses container names for DNS resolution.
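A minimal compose sketch of that layout. Service names, the image name, and the environment variable are placeholders — the key details are that the embedding URL uses the service name rather than 127.0.0.1, and that the data directory is a host volume so it survives restarts:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-models:/root/.ollama
  openclaw:
    image: your-openclaw-image          # placeholder
    environment:
      - EMBEDDING_URL=http://ollama:11434   # hypothetical variable name
    volumes:
      - ./openclaw-data:/home/node/.openclaw  # keeps memory.lance on the host
    depends_on:
      - ollama
volumes:
  ollama-models:
```

Compose puts both services on a shared network where the service name resolves via DNS, so no extra network configuration is needed for this case.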
Memory behavior after an OpenClaw or plugin update
Memory can break after updating OpenClaw or the memory plugin even on a previously working setup. The two most common causes: the plugin’s native dependencies need to be recompiled for the new Node.js version that shipped with the update, or a config schema change in the new plugin version means the old config fields are no longer recognized.
When did the memory failure start? Was there a recent OpenClaw update or memory plugin update before it stopped working? Check the gateway logs around the time the failure started for any plugin initialization errors or schema validation errors.
If the failure followed an update, the fix is usually a reinstall of the plugin to recompile native dependencies:
The memory plugin may need its native dependencies recompiled after the recent update. Stop the gateway, navigate to the plugin directory, delete node_modules, run npm install, then restart the gateway. Show me each step’s output and stop if there are any errors during npm install.
For config schema changes: check the plugin’s changelog for the new version. Schema changes are usually listed as breaking changes. The fix is updating the config fields to match the new schema, which may require adding new required fields or renaming existing ones.
Testing the embedding model in isolation
If you are not sure whether the embedding model is the problem, test it directly before involving OpenClaw at all. A direct API call to the embedding endpoint tells you immediately whether Ollama and the model are working, fully independent of any plugin or config issue on the OpenClaw side.
Make a direct API call to the Ollama embedding endpoint. Use: POST http://127.0.0.1:11434/api/embeddings with body {"model": "nomic-embed-text", "prompt": "test embedding"}. Show me the raw response. If it returns a vector (an array of numbers), the embedding model is working. If it returns an error, show me the error text.
A successful embedding call returns a JSON object with an embedding field containing an array of 768 numbers (for nomic-embed-text). If you see that, Ollama and the model are not the problem. The issue is in how the plugin is connecting to or using that endpoint. If you see an error, Ollama or the model is the issue and the fix is in the Ollama setup, not the plugin config.
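If you want a scriptable version of that success check, the shape test is just a grep for the embedding field. A sample response is inlined here so the check itself is runnable as shown; in practice you would write the curl output from the call above to the same file:

```shell
# Sample of what a successful response looks like (vector truncated).
# In practice: curl -s http://127.0.0.1:11434/api/embeddings \
#   -d '{"model":"nomic-embed-text","prompt":"test embedding"}' > /tmp/resp.json
cat > /tmp/resp.json <<'EOF'
{"embedding":[0.0132,-0.2087,0.1144]}
EOF

# Shape check: a success response has an "embedding" array; an error does not.
if grep -q '"embedding"' /tmp/resp.json; then
  echo "embedding model OK"
else
  echo "embedding call failed:"
  cat /tmp/resp.json
fi
```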
This isolation test is the fastest way to split the diagnosis in half: either Ollama is working and the problem is in the plugin, or Ollama is not working and the problem is in the Ollama setup. Two different fix paths with zero overlap. This matters because debugging the plugin config when Ollama is the actual problem wastes time, and vice versa. The isolation test gives you a definitive answer in under a minute.
Frequently asked questions
The questions below cover the failure modes that do not fit cleanly into the five causes above but come up consistently in operator setups.
The diagnostic blockquote returns results but memory_stats shows zero memories. What is happening?
The agent is fabricating a successful result rather than actually running the check. This is a model quality issue, not a memory issue. The agent is generating a plausible-sounding response based on what a successful diagnostic looks like rather than actually executing the tool calls. Confirm by asking explicitly: “Run memory_stats right now and show me the raw output from the tool call, not a summary.” If the agent cannot show you actual tool output (a JSON result with a count field), it is not running the tool. This typically means the memory tools are not registered at all, which points back to the plugin not being enabled or the wrong plugin being loaded.
Memory works in a fresh conversation but fails when the context gets long. Is this a VPS resource problem?
Possibly, but more likely it is a timeout issue. Long conversations mean more tokens in the context, which means the model takes longer to process each turn. If memory extraction runs at the end of a turn (which it does with autoCapture), a long turn may push the extraction over the plugin’s LLM timeout setting. On a VPS with a slower CPU or a slower model, this threshold is hit earlier than on a fast laptop. The fix is increasing the extraction timeout in the plugin config. For memory-lancedb-pro, this is the llm.timeoutMs setting. The default is 30 seconds. On slower hardware with larger models, 90 seconds is more appropriate.
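For memory-lancedb-pro that change looks roughly like this — the nesting around the setting is illustrative, but llm.timeoutMs and the 30-second default come from the answer above:

```json
{
  "plugins": {
    "memory-lancedb-pro": {
      "llm": {
        "timeoutMs": 90000
      }
    }
  }
}
```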
Memory worked for a few days on the VPS and then stopped. What changed?
The most common cause of memory working briefly then stopping is Ollama unloading the embedding model from memory. If OLLAMA_KEEP_ALIVE is not set, Ollama unloads models after 5 minutes of inactivity. The first recall after a gap triggers a model reload, which takes 10 to 30 seconds. If OpenClaw’s embedding request times out during that reload window, the memory operation fails silently. The fix is OLLAMA_KEEP_ALIVE=-1 in the Ollama systemd service. The second common cause is disk full: the VPS ran out of disk space and LanceDB can no longer write to the data directory. Check both.
My memory stats show a count but recall returns nothing. The memories are there but unreachable.
This is almost always a scope mismatch. Memory stats and recall use different scopes. Stats show what is in the database under whatever scope the plugin defaults to. Recall uses the scope you pass in the tool call (or the plugin default if you do not pass one). If the plugin stored memories under default and you are recalling with scope=agent:main, you will see a count in stats but get no results from recall. Check the scope in both the plugin config and the tool calls to make sure they match.
I copied my openclaw.json from the laptop to the VPS. Should that be enough to get memory working?
No. The config tells OpenClaw what plugins to use and where to look for data, but it does not install those plugins or create the data directories. A config copy without reinstalling plugins is the most common setup mistake. After copying the config, you still need to: install the memory plugin on the VPS, pull the embedding model, create the data directory, and verify the plugin loaded with memory_stats. The config is a map. You still need the territory.
The plugin install fails with a node-gyp error. How do I fix it?
node-gyp errors mean a native dependency failed to compile. The fix is almost always installing build tools: sudo apt-get install -y build-essential python3 python3-dev. On some Ubuntu VPS images, you may also need libssl-dev and libffi-dev depending on the specific native dependencies in the plugin. After installing build tools, clear the npm cache with npm cache clean --force and reinstall the plugin. If it still fails, check the Node.js version. LanceDB native bindings have specific ABI compatibility requirements.
Memory worked immediately when I set it up locally because I already had Ollama. What is the minimum I need to install on a fresh VPS?
For a local Ollama embedding setup: (1) build-essential and python3 (for native npm dependencies), (2) Ollama installed and running as a systemd service, (3) nomic-embed-text pulled, (4) OLLAMA_KEEP_ALIVE=-1 set, (5) the memory plugin installed via npm or clawhub, (6) the LanceDB data directory created with correct ownership. That is the full list. Everything else (plugin config, scope settings) is in openclaw.json and copies with your config.
Can I run memory on a VPS without Ollama?
Yes. You can use a remote embedding API instead of local Ollama. Jina Embeddings, OpenAI text-embedding-3-small, and Cohere embed are all supported by memory-lancedb-pro. Configure the embedding provider in the plugin config with the API key and endpoint. The tradeoff: remote embedding adds latency to every memory operation and costs per token. For an active setup with autoCapture enabled, this cost is non-trivial. Local Ollama with nomic-embed-text is free and fast enough for most setups once it is configured correctly.
