If your OpenClaw agent randomly stops responding, restarts on its own, or just goes quiet for no reason you can find, the server it lives on is almost certainly running out of memory. This article covers exactly what to set up in your OpenClaw VPS setup so your agent stays running, recovers when something goes wrong, and does not silently die at 3am. Paste the diagnostic prompt below to your agent first and read what it tells you before changing anything.
Before You Start
- OpenClaw is already installed and running on your VPS
- You can send messages to your agent from Discord, Telegram, or the web UI
- You have SSH access to the server, even if you prefer not to use it
- You do not need root access for most of this. Your agent can handle the checks and most changes by itself
TL;DR
A $5 VPS ships with no swap and no restart protection. OpenClaw silently dies when memory runs out, and it stays dead until you manually log in. Set up swap, configure systemd to restart the gateway on failure, tune your context window, and add log rotation. Your agent can walk you through all of it. Estimated time: 20 minutes, one paste at a time.
Jump to what you need
- Agent keeps dying silently? Start with Swap
- Agent dies and stays dead? Make OpenClaw restart itself
- Agent is slow during heavy tasks? Tune OpenClaw’s config or Node.js memory limits
- Disk filling up? Log rotation
- Want a daily health report? Know when something is about to go wrong
- Just want to verify everything? The verification sequence
What you are actually working with
A $5 VPS gives you a small slice of a physical server: shared CPU, a fixed amount of RAM, and a fixed disk. The numbers vary by provider, but in 2026 the common options look like this: Hetzner CX22 starts at around €3.99 per month in EU regions (USD pricing varies) and gives you 2 CPUs and 4GB of RAM. DigitalOcean’s basic droplet at $6 gives you 1 CPU and 1GB. Vultr’s lowest tier is $2.50 and comes with 512MB.
OpenClaw’s gateway process is a Node.js application. It uses 150 to 300 megabytes of RAM just to run, depending on how many plugins you have loaded. When your agent is actively working on something, like running a long research task, compacting context, or making several tool calls in a row, that number spikes. On a 1GB server with no swap, a spike that pushes past available RAM causes Linux to kill the process. No warning. No log entry you would easily find. The agent just stops responding.
The goal of this article is to make sure that when memory pressure hits, your server handles it gracefully instead of silently dying.
Check my server’s current memory and swap usage. Show me the total RAM, how much is in use right now, whether swap is enabled and how much, and the current disk usage on the main partition. Then tell me whether my current setup is likely to cause problems.
Your agent will show you the numbers. If swap shows as 0, that is the first problem to fix. Read on.
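If you would rather run the check yourself over SSH, the same numbers come from three standard commands:

```shell
# RAM and swap totals and current usage, in human-readable units
free -h
# Active swap devices and files (prints nothing if swap is off)
swapon --show
# Disk usage on the root partition
df -h /
```

If the Swap row in free -h reads all zeros and swapon --show prints nothing, you have no swap, and the next section is for you.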
Swap: the one thing most cheap VPS setups are missing
Swap is disk space that Linux uses as emergency overflow when RAM fills up. It is significantly slower than RAM, but it is the difference between your agent surviving a memory spike and dying silently. Most providers ship VPS instances with no swap at all. You almost certainly need to add it.
A swap file of 2 to 4 gigabytes is the right size for a VPS running OpenClaw. On a 1GB RAM server, 2GB of swap is enough. On a 4GB server, 2GB is still fine. You should not need swap often; you just need it there as a safety net.
WRITE, TEST, THEN IMPLEMENT
Creating a swap file modifies your server’s configuration in a way that persists across reboots. You can reverse it safely, but you should confirm the commands your agent gives you before running them. The agent will show you each command before executing it. Read them first.
Check whether swap is enabled on this server. If it is not, create a 2GB swap file at /swapfile, set the correct permissions, format it as swap, enable it immediately, and add it to /etc/fstab so it comes back after a reboot. Show me each command before running it, and confirm it worked by showing me the current swap status after.
After your agent runs this, you should see a swap line in the memory output showing 2048MB total. That is what success looks like. If something goes wrong midway, tell your agent exactly what error you are seeing and it will fix it.
Manual fallback (if your agent cannot access the terminal)
SSH into your server and run these commands in order:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Then add this line to /etc/fstab: /swapfile none swap sw 0 0
Verify with free -h; you should see a Swap line showing 2.0Gi total.
If fallocate fails
Some filesystems (older ext3, certain network mounts, and some cloud provider block storage) do not support fallocate. If you get an “Operation not supported” error, use dd instead:
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
Then continue with chmod and mkswap as above. The dd method is slower but works on every filesystem. Your agent knows to try this automatically if fallocate fails.
Swappiness: tell Linux when to use swap
Swappiness is a number between 0 and 100 that tells Linux how aggressively to move data to swap before RAM actually fills up. The default on most servers is 60, which tells the kernel to start swapping relatively early once memory pressure builds. In practice that can slow your agent down during routine tasks, because pages it still needs get pushed to disk. For a server running OpenClaw, you want this lower, around 10: the kernel then avoids swap until RAM is nearly exhausted, and your agent stays responsive during normal use.
Check the current swappiness value on this server. If it is not 10, change it to 10 immediately and make the change permanent by adding it to /etc/sysctl.conf. Confirm what it is after the change.
Manual fallback
SSH in and run: sudo sysctl vm.swappiness=10
To make it permanent, add this line to /etc/sysctl.conf: vm.swappiness=10
Verify: cat /proc/sys/vm/swappiness should return 10.
Make OpenClaw restart itself when it crashes
Even with swap in place, things can go wrong. A bad plugin update, a malformed config change, an out-of-memory event that swap could not catch. Any of these can stop the gateway process. Without auto-restart configured, the agent stays dead until you manually log in and bring it back up.
OpenClaw runs as a systemd service on Linux. Systemd can watch the process and bring it back automatically when it dies, but only if that restart behavior is configured. By default it is not.
Check how the OpenClaw gateway service is configured in systemd. Specifically, tell me whether Restart is set, what it is set to, and what RestartSec is. If Restart is not set to on-failure or always, show me what I need to change to make it restart automatically when it crashes.
What you want to see is Restart=on-failure and RestartSec=5. That means if the gateway process dies for any reason, systemd waits 5 seconds and starts it again. Your agent will be back without you doing anything.
WRITE, TEST, THEN IMPLEMENT
Editing a systemd service file and running systemctl daemon-reload changes how your system manages the OpenClaw process. Your agent will show you the exact file content before writing it. Read the changes before you confirm.
Update the OpenClaw systemd service to set Restart=on-failure and RestartSec=5. Also make sure StartLimitIntervalSec=60 and StartLimitBurst=5 are set so it does not loop infinitely if there is a configuration error. Show me the current service file content, the changes you are making, and then reload the daemon and confirm the service is running.
After this runs, your agent should confirm the service is active and running. You can test it works by asking your agent what would happen if the gateway process were killed. It will explain that systemd would restart it within 5 seconds.
Manual fallback
Find the service file: systemctl cat openclaw
Edit it with: sudo systemctl edit openclaw --force
Add in the [Service] section:
Restart=on-failure
RestartSec=5
StartLimitIntervalSec=60
StartLimitBurst=5
Then: sudo systemctl daemon-reload && sudo systemctl restart openclaw
Verify: systemctl status openclaw should show active (running).
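If you want to prove the restart behavior end to end rather than take systemd's word for it, you can kill the gateway yourself and watch it come back. A minimal sketch, assuming the service is named openclaw; it prints a skip message on machines where the service is not running:

```shell
# Kill the gateway's main process, then confirm systemd restarts it.
# Assumes the service name "openclaw"; falls through if it is not active.
if systemctl is-active --quiet openclaw 2>/dev/null; then
  OLD_PID=$(systemctl show openclaw -p MainPID --value)
  sudo kill -9 "$OLD_PID"
  sleep 8   # RestartSec=5 plus a little startup time
  MSG="service $(systemctl is-active openclaw); PID $OLD_PID -> $(systemctl show openclaw -p MainPID --value)"
else
  MSG="openclaw service not active here; skipping kill test"
fi
echo "$MSG"
```

A healthy result is the service back in the active state with a new, different main PID.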
What happens when the service crashes in a loop
If OpenClaw has a configuration error that causes it to crash immediately on startup, systemd will restart it 5 times (StartLimitBurst=5) within 60 seconds (StartLimitIntervalSec=60). After that, it stops trying and marks the service as “failed (Result: start-limit-hit).” You will see this in systemctl status openclaw. Before resetting, check the journal to find the crash reason: journalctl -u openclaw -n 30 --no-pager
Then, after fixing the underlying issue, reset and restart:
sudo systemctl reset-failed openclaw && sudo systemctl start openclaw
OOM score: protect OpenClaw from the Linux OOM killer
When Linux runs completely out of RAM and swap, it activates the OOM (out-of-memory) killer, a last-resort mechanism that picks a process to kill to free up memory. It tends to pick the biggest memory consumer, which on a small server is usually the OpenClaw gateway itself. You can tell Linux to deprioritize killing the OpenClaw process by lowering its OOM score adjustment.
The valid range is -1000 to 1000. Lower values mean the process is more protected from being killed. By default, all user processes start with an OOM score adjustment of 0, which means OpenClaw has no more protection than any other process on the server. A value of -300 is a reasonable middle ground: it gives OpenClaw meaningful protection while leaving headroom for critical system processes like SSH to stay alive. Setting this too aggressively (below -700) on a severely memory-constrained server can mean Linux kills the SSH daemon before OpenClaw, locking you out of your own server.
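To see where things stand right now, you can list your biggest processes alongside their current adjustments; with no tuning, everything sits at 0:

```shell
# Show the five most memory-hungry processes with their OOM score adjustments.
# More negative means less likely to be chosen by the OOM killer.
for pid in $(ps -eo pid --sort=-%mem --no-headers | head -5); do
  printf '%-8s %-20s %s\n' "$pid" \
    "$(cat /proc/"$pid"/comm 2>/dev/null)" \
    "$(cat /proc/"$pid"/oom_score_adj 2>/dev/null)"
done
```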
Check the current OOM score adjustment for the OpenClaw gateway process. If it is not set to a negative value, update the systemd service to set OOMScoreAdjust=-300, reload the daemon, and restart the service. Confirm the new score after the restart.
Why -300 and not lower
-300 makes OpenClaw significantly less likely to be killed by the OOM killer compared to most user processes. Going lower (like -700 or -1000) protects OpenClaw even more aggressively, but on a 1GB or 2GB server under extreme memory pressure, this can cause Linux to kill SSH instead. Losing SSH access to your own server is a harder problem to fix than restarting OpenClaw. On a 4GB or larger server, you can safely go lower if you want.
Manual fallback
Edit the service with sudo systemctl edit openclaw --force and add OOMScoreAdjust=-300 under [Service].
Then sudo systemctl daemon-reload && sudo systemctl restart openclaw.
Check: cat /proc/$(pgrep -o -f openclaw)/oom_score_adj should return -300. (The -o flag picks the oldest matching process, in case the pattern matches more than one PID.)
Tune OpenClaw’s config for low-memory servers
OpenClaw’s default settings are designed for machines with plenty of RAM. On a cheap VPS, a few of those defaults will cause problems. The two biggest ones are the context window size and compaction settings.
Context window size
The context window is how much conversation history OpenClaw holds in memory during an active session. Larger context means your agent remembers more of your conversation without compacting it away, but it also means more RAM in use at all times. On a server with 1GB or less, a large context window will cause the agent to crash during any task that fills it, like a long research session, a multi-step pipeline, or a conversation that runs for more than a few hours. The agent will work fine for short exchanges and then die without warning when context fills up. The specific default varies by OpenClaw version, so ask your agent what it is actually set to.
Read my openclaw.json config. What is the current context window size set to? If it is higher than 32000 tokens, recommend what I should set it to for a VPS with 1GB of RAM, explain the tradeoff, and then make the change if I say yes.
The tradeoff is real
A smaller context window means your agent compacts older conversation history more aggressively. You will still have access to that history through LCM (the context management system), but it is retrieved rather than held live in memory. For most everyday use, you will not notice the difference. For very long, complex tasks that depend on details from earlier in the session, you may notice the agent asking for clarification more often.
Compaction settings
Compaction is what OpenClaw does when the context window fills up. It summarizes old conversation history and compresses it to make room for new content. This process temporarily uses more memory while it runs, which is exactly when low-RAM servers are most likely to OOM-kill the process. You can reduce compaction pressure by setting it to trigger earlier (so it compresses smaller batches) and by using a lighter model for the compaction step.
Read my openclaw.json config. What model is currently being used for compaction? What is the compaction threshold set to? For a VPS with 1GB of RAM, what settings would reduce the memory spike during compaction? Show me the recommended settings and explain what each one does before making any changes.
Local model note
If you are running Ollama locally on the same VPS, compaction using a local model adds to memory pressure rather than reducing it. On servers with less than 4GB of RAM, using an API model for compaction (even a cheap one like deepseek-chat) is better than using a local model that needs to load into memory to run. Ollama with a 7B model needs at least 4-6GB RAM to run without constantly swapping.
Log rotation: the slow killer most guides miss
OpenClaw writes logs continuously. On a small VPS with 20 to 40GB of disk, these logs can fill the disk completely within weeks, sometimes faster if you are running lots of automated tasks. When the disk fills, OpenClaw cannot write session data, logs, or LCM entries. The crashes look completely unrelated to disk space: you will see database errors, failed writes, or the gateway simply refusing to start. If you skip this section, the crash will happen eventually. On a 20GB disk with active automated tasks, “eventually” can be two to four weeks.
Log rotation is the process of automatically archiving old logs, compressing them, and deleting logs beyond a certain age. Most servers ship with logrotate installed, but nothing configures it for OpenClaw's logs; you have to do that yourself.
Check how much disk space is currently in use on this server. Then find where OpenClaw is writing its logs. Check /var/log/openclaw, ~/.openclaw/logs, and any other locations that have log files from the gateway. Tell me the total size of those log files and whether logrotate is configured for them.
If your agent reports no logrotate config for OpenClaw, or if log files are already several hundred megabytes, deal with this now.
Create a logrotate configuration for OpenClaw that rotates logs daily, keeps 7 days of logs, compresses old logs with gzip, and handles missing log files gracefully without erroring. Show me the config before writing it, then write it to /etc/logrotate.d/openclaw and test that logrotate accepts it without errors.
Manual fallback
Create /etc/logrotate.d/openclaw with this content:
/var/log/openclaw/*.log /home/node/.openclaw/logs/*.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
create 0640 node node
}
Then test it: sudo logrotate --debug /etc/logrotate.d/openclaw
Check which user OpenClaw runs as
The create 0640 node node directive in the logrotate config assumes OpenClaw runs as the node user. If your install runs as a different user (check with systemctl show openclaw -p User; an empty value means OpenClaw is running as root), change both node references in the create line to match your actual user. If the ownership is wrong, logrotate will fail silently and logs will not rotate.
Know when something is about to go wrong before it does
The best VPS setup is one that tells you about problems before they become outages. OpenClaw can send you a memory and disk report on a schedule using the cron system, which means you can get a daily status message instead of finding out something broke when the agent stops responding.
Set up a daily cron job that runs every morning at 8am server time (ask me what timezone the server is in first, and adjust the time if needed) and sends me a brief status report. Include: current RAM usage and swap usage, current disk usage on the main partition, whether the OpenClaw gateway service is running, and a warning flag if RAM usage is above 80%, disk is above 80%, or the service is not active. Send it to me on Telegram (or Discord, whichever I have configured).
Once this is set up, you will receive a message every morning. If anything is above the warning threshold, you have time to deal with it before it becomes a crash.
If you do not have messaging configured
Ask your agent to create the cron job anyway and write the output to a log file in your workspace. You can check it any time by asking: “Show me the latest server status report.”
If you chose the log file option instead of messaging
Make sure the status report log file is included in your logrotate configuration. A daily status report that appends to a file indefinitely will eventually consume disk space. Add the path to the same logrotate config you created earlier, or ask your agent to set up a separate rotation for it.
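If you would rather wire the log-file version up by hand, here is a minimal sketch of the report script. The service name openclaw, the 80% thresholds, and the output path are assumptions; adjust them to your setup:

```shell
#!/bin/sh
# Daily status report: RAM %, disk %, gateway service state, warning flag.
# "openclaw" as the service name and the 80% thresholds are assumptions.
MEM_PCT=$(free | awk '/^Mem:/ {printf "%d", $3*100/$2}')
DISK_PCT=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')
SVC=$(systemctl is-active openclaw 2>/dev/null || echo unknown)
WARN=""
[ "$MEM_PCT" -gt 80 ] && WARN="$WARN RAM"
[ "$DISK_PCT" -gt 80 ] && WARN="$WARN DISK"
[ "$SVC" != "active" ] && WARN="$WARN SERVICE"
echo "$(date '+%F %R') mem=${MEM_PCT}% disk=${DISK_PCT}% service=$SVC warn=${WARN:-none}"
```

Save it somewhere like ~/status-report.sh, make it executable, then schedule it with crontab -e using a line such as 0 8 * * * $HOME/status-report.sh >> $HOME/status-report.log and add that log path to your logrotate config as described above.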
What not to run on a low memory OpenClaw server
Knowing what will work is only half the picture. This section covers the configurations that will cause your VPS to fall over regardless of how well everything else is set up.
Ollama on a server with less than 4GB of RAM
Running Ollama locally on the same server as OpenClaw is appealing. Local models mean no API costs. But Ollama needs RAM to load a model and keep it in memory. A 7B parameter model requires 4 to 6GB of RAM minimum. A 13B model needs 8 to 10GB. If your VPS has 1GB or 2GB of RAM, running Ollama will cause constant swap thrashing. The system spends more time moving data between RAM and disk than it does actually running your agent.
On servers with less than 4GB of RAM, use API models only. On servers with exactly 4GB, Ollama is marginal. A small model like Llama 3.1 8B with aggressive swap will work, but expect slower responses and occasional crashes.
Is Ollama installed and running on this server? If so, how much RAM are the currently loaded models using? Is there enough free RAM to run Ollama alongside OpenClaw without heavy swapping?
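The quick manual version of that check, assuming the standard Ollama CLI is what you installed (ollama ps lists currently loaded models and their memory footprint):

```shell
# Report loaded Ollama models if the CLI is present; otherwise say so.
if command -v ollama >/dev/null 2>&1; then
  ollama ps
else
  echo "ollama is not installed on this server"
fi
```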
Multiple OpenClaw instances on one VPS
Running two OpenClaw instances on the same server doubles the memory footprint. On a $5 VPS, this means both instances will compete for the same limited RAM. The usual result is that both become unstable. If you need multiple agents, run them on separate servers or use a single instance with multiple sessions.
Heavy plugin loading on low-RAM servers
Each plugin you load into OpenClaw adds to the base memory usage. Memory plugins, LCM plugins, and tool-heavy plugins are the biggest consumers. On a 1GB server, running more than 3 or 4 plugins simultaneously will push you into swap territory constantly.
List all the plugins that are currently enabled in my OpenClaw config. For each one, tell me roughly how much RAM it adds to the base process. Then tell me which ones I could disable to reduce memory usage without losing the features I use most.
Node.js memory limits: the setting most guides skip
OpenClaw runs on Node.js. By default, Node.js calculates its maximum heap size dynamically based on available system memory. On a 4GB server this works out to a reasonable figure. On a 1GB server, the dynamic calculation can still allow the heap (the memory region where Node.js stores active data) to expand well beyond available RAM before garbage collection kicks in. If you skip this step, the agent will work normally most of the time but periodically freeze for 30 to 90 seconds while Node.js reclaims memory from an oversized heap, or crash outright when the heap grows faster than GC can reclaim.
You can override this by setting the --max-old-space-size flag explicitly at startup. Setting it to 60 to 70 percent of your available RAM tells Node.js to garbage collect more aggressively before memory pressure builds up. On a 1GB server, a value of 512MB to 640MB is appropriate. On a 4GB server, 2048MB gives you room to work without forcing constant GC.
Before adding this flag, ask your agent how OpenClaw is actually being started. OpenClaw may launch via the openclaw CLI, via npx, or via a custom wrapper. The method matters because you need to pass the flag to the correct executable. Do not assume it is a bare node command.
Check how OpenClaw is being started on this server. Look at the ExecStart line in the systemd service file, and tell me exactly what command is running. Is the Node.js max-old-space-size flag set anywhere? What is the total RAM on this server, and what would the recommended max-old-space-size value be? Tell me the exact change to make based on how it is actually being launched.
Why this matters on top of swap
Swap prevents the server from dying. Node.js memory limits prevent the agent from becoming so slow that it is effectively dead. With swap but without a memory limit, Node.js can expand until it fills both RAM and swap, causing severe swap thrashing for minutes before finally garbage collecting. Setting the limit keeps the process lean and responsive.
Manual fallback (after confirming the launch method)
Once you know the exact ExecStart command, edit the service with sudo systemctl edit openclaw --force.
If it launches via a bare node call, add --max-old-space-size=512 (or whatever value is appropriate for your RAM) directly after node in the ExecStart line.
If it launches via openclaw CLI, set the environment variable instead: NODE_OPTIONS=--max-old-space-size=512 in the service’s [Service] section.
Then reload: sudo systemctl daemon-reload && sudo systemctl restart openclaw
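After restarting, you can confirm the limit actually took effect. require('v8').getHeapStatistics().heap_size_limit is a standard Node.js API; expect the reported figure to sit slightly above the flag value, since it includes heap space beyond the old generation:

```shell
# Print the effective V8 heap limit in MB under a given NODE_OPTIONS value.
# Skips quietly if node is not on PATH.
if command -v node >/dev/null 2>&1; then
  NODE_OPTIONS=--max-old-space-size=512 node -p \
    'Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576)'
fi
```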
Which $5 VPS providers work with OpenClaw in 2026: the cheap VPS breakdown
Not all cheap VPS options are equal. The hardware, network, and default configurations differ enough to matter for a long-running agent process. Here is an honest look at the main options in 2026 and what you should know before picking one.
Hetzner CX22 (~$4.90/month)
The best value option for OpenClaw in 2026. 2 vCPUs, 4GB of RAM, 40GB NVMe disk (a fast type of solid-state storage), and a generous network allocation. The 4GB RAM means you have headroom to run OpenClaw with memory plugins enabled, and still have room for compaction spikes without hitting swap constantly. Hetzner’s NVMe storage also means swap, when you need it, is faster than the HDD-backed storage many other providers use. Hetzner is available in EU and US regions as of 2026. If you have no other constraints, this is the pick.
DigitalOcean Basic ($6/month)
1 vCPU, 1GB RAM, 25GB SSD. The 1GB RAM is the limiting factor. Swap is mandatory, Ollama is off the table, and you need to be careful about how many plugins you enable. The upside is DigitalOcean’s control panel and documentation are excellent, and their managed databases and object storage integrate cleanly if you later want to extend your setup. If you are already in the DigitalOcean ecosystem, the $12/month 2GB droplet is worth the extra spend for the headroom.
Vultr Regular Cloud ($2.50/month)
512MB RAM, 1 vCPU, 10GB storage. This is technically functional for OpenClaw but not recommended for anything beyond the most minimal setup: no LCM, no memory plugins, no Ollama, aggressive context limits. With 512MB RAM and swap, OpenClaw will run, but it will be slow and will struggle with any task that involves significant tool use or context. If cost is the absolute constraint, use this tier and expect to upgrade when you hit the limits.
Oracle Cloud Free Tier (actually free)
Oracle’s Always Free tier in 2026 includes up to 4 Arm-based OCPUs and 24GB of RAM split across two instances. The catch: the ARM architecture means some Docker images and compiled dependencies need ARM-compatible versions. OpenClaw itself runs fine on ARM, but plugins with native dependencies need attention. If you are comfortable with ARM and willing to work through any compatibility issues, this is genuinely the best hardware-per-dollar option available. It costs nothing.
ARM plugin compatibility
Plugins that use native compiled dependencies (like memory-lancedb, some image processing plugins, and plugins that bundle platform-specific binaries) do not always have ARM builds available. If a plugin fails to install or crashes on startup with architecture-related errors, check whether the plugin supports ARM. Your agent can identify which of your installed plugins have native dependencies.
What CPU architecture is this server running on? Is it x86_64 or ARM? What is the total available RAM, and how does that compare to what OpenClaw needs to run comfortably with my current plugin configuration?
The OpenClaw VPS setup verification sequence: confirm everything is actually set up
After you have worked through the swap, systemd, config tuning, and log rotation steps, run this final check to confirm everything is in place. One paste. Your agent will report on all of it.
Run a full VPS health check for my OpenClaw setup. Check and report on: (1) whether swap is enabled and what size it is, (2) current swappiness value, (3) current RAM in use and free, (4) whether the openclaw gateway service is active and running, (5) whether Restart=on-failure is set in the systemd service, (6) whether OOMScoreAdjust is set and what value, (7) disk usage on the main partition, (8) whether a logrotate config exists for openclaw logs, (9) the current context window setting in openclaw.json, and (10) what model is configured for compaction. For each item, tell me whether the setting looks good for a low-RAM VPS or if I need to change something.
Save your agent’s response. It is a snapshot of your server’s current state. If something breaks in the future, running this same check again gives you something to compare against.
What good looks like when everything is set up
After completing everything in this article, your setup should look like this:
- Swap: 2GB or more, active, with swappiness at 10
- Systemd service: active (running), Restart=on-failure, RestartSec=5, StartLimitBurst=5
- OOMScoreAdjust: in the -300 to -500 range (avoid going below -700 on servers under 4GB RAM)
- Context window: 32k tokens or lower for a 1GB server; up to 64k is fine on a 4GB server
- Compaction model: an API model, not a local Ollama model (unless you have 4GB+ RAM to spare)
- Logrotate: configured, rotating daily, keeping 7 days
- Daily status cron: active, delivering a morning report to your messaging channel
- Disk usage: under 70% of total capacity
If your agent reports all of these as healthy, your setup is as stable as a $5 VPS can be. You should not see random crashes, silent deaths, or mysterious pauses under normal workloads.
What to do when a crash still happens despite all this
No setup eliminates crashes entirely. Hardware fails, providers have outages, bad plugin updates happen. When your agent goes down despite having everything configured correctly, the recovery sequence is:
- Wait 30 seconds. Systemd is likely already restarting it
- Check whether the agent is responding. If yes, the auto-restart worked.
- If still down: SSH in and check systemctl status openclaw for the current state
- If it hit the start-limit: sudo systemctl reset-failed openclaw && sudo systemctl start openclaw
- Check the journal: journalctl -u openclaw -n 100 --no-pager to find the root cause
- Fix the root cause before the next restart attempt
The journal is always the source of truth. Do not guess at the cause. Read what it says.
Want the full setup checklist?
Brand New Claw: $37
Everything in this article plus a complete hardening checklist, the exact systemd and config settings for common VPS providers, and the full monitoring cron template, formatted to copy and paste straight into OpenClaw.
Questions people actually ask about this
OpenClaw keeps crashing or my agent stopped responding. What do I check first?
Check whether the service is still running. Paste this into your OpenClaw if it is responding, or SSH in if it is not:
Check whether the openclaw gateway service is running. If it is not, tell me the last 50 lines of the service journal so I can see why it stopped.
If the service is dead, the journal will show what killed it. An OOM kill shows up as “Killed” with no other explanation. A config error shows up as a specific error message about a failed parse or missing field. A plugin crash shows up as an unhandled exception. Each of these has a different fix. Get the journal output first before doing anything else.
How do I know if my server is actually running out of memory during tasks?
Ask your agent to show you the memory situation during a task that you know causes problems:
Show me current RAM usage, swap usage, and how much swap has been used in the last hour. Also show me if there are any OOM events in the kernel log from today.
If swap usage is above zero and climbing during normal tasks, you are regularly hitting memory pressure. If the kernel log shows OOM events, the OOM killer has already fired at some point. Check whether it killed OpenClaw or something else. The process name in the OOM log will tell you.
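The manual version of that kernel-log check (reading the kernel ring buffer usually requires sudo):

```shell
# Count OOM-killer activity in the kernel log; 0 means none recorded.
OOM_LINES=$(sudo dmesg -T 2>/dev/null | grep -icE 'out of memory|oom-killer|killed process')
echo "${OOM_LINES:-0} OOM-related lines in the kernel log"
# To read the actual entries: sudo dmesg -T | grep -i 'killed process'
```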
My disk is filling up but I cannot find large log files. Where else should I look?
OpenClaw stores things beyond just log files. Session archives, LCM databases, memory databases, and workspace files can all grow over time without obvious log-style filenames.
Find the top 10 largest files and directories on this server. Include the openclaw workspace directory, the ~/.openclaw directory, any LCM database files, any session archive directories, and any log directories. Sort by size, largest first.
Common culprits: session JSON archives that were never pruned, LCM databases that grew beyond their intended size, and workspace files from long-running pipelines that were never cleaned up. Your agent can delete safe-to-delete files once you have identified what is large.
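The same hunt by hand; the two paths below are the common OpenClaw locations and may differ on your install:

```shell
# Ten largest files and directories under the usual OpenClaw locations.
# Paths are assumptions; add your workspace directory if it lives elsewhere.
du -ah ~/.openclaw /var/log/openclaw 2>/dev/null | sort -rh | head -10
```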
The openclaw gateway restarts every few minutes. What is causing it?
A restart loop is different from a one-time crash. If the service starts, runs for a short time, and then crashes over and over, the issue is a config problem, a plugin that errors on startup, or a database that got corrupted. A crash loop that hits the StartLimitBurst threshold will cause systemd to stop trying to restart. You will see “failed (Result: start-limit-hit)” in the status.
Show me the last 100 lines of the openclaw service journal including timestamps. I need to see the restart pattern, specifically what error appears right before each crash.
If the error is the same every time, that is your culprit. If the error is random or memory-related, the swap and OOM settings from earlier in this article are the fix. If systemd has stopped retrying, SSH in and run sudo systemctl reset-failed openclaw && sudo systemctl start openclaw to give it another chance after fixing the underlying cause.
Can I run OpenClaw without a public IP? My VPS only has a private network.
Yes. OpenClaw’s gateway only needs to be reachable from where you send messages: your Discord client, Telegram app, or wherever your webhook comes from. If you are using Telegram or Discord, those are outbound connections from your server to the messaging platform, not inbound connections to your server. Your VPS does not need a public IP for those to work.
If you need to access the web UI from your laptop, you will need either a public IP with the gateway port open (not recommended without auth) or an SSH tunnel from your laptop to the server. The SSH tunneling approach is covered in a dedicated article linked below.
I set everything up but my agent still occasionally goes quiet for 30 to 60 seconds. Is that normal?
Short pauses during heavy tasks are normal. The agent is waiting for API responses, running tool calls, or compacting context. Pauses of 30 to 60 seconds that happen randomly even during light use usually mean one of three things: the server is swapping (disk is slower than RAM, so operations stall), the API is rate-limiting you, or the gateway process is GC-pausing (Node.js garbage collection on a low-memory setup).
I have been experiencing 30 to 60 second pauses at random times. Check the server’s current swap activity, look at recent API error logs for rate limit errors, and check how much free memory is available right now. Tell me what is most likely causing the pauses based on what you find.
