You saw something that looked wrong: a notification, a log entry, a message from someone who told you to check. This article walks you through finding out whether anything actually happened and, if it did, what to do next. Most of the time it turns out to be nothing. Either way, you will know within a few minutes.
Step 1: Ask your agent what actually happened
OpenClaw keeps a record of every connection it receives: timestamps, where each one came from, and whether it was let through or turned away. Your agent can read these logs. Before you do anything else, ask it to look.
Copy this message and paste it into your agent exactly as written:
Check your gateway logs for the last 24 hours. I want to know: did any requests come in from a device that is not my own? If yes, did any of those requests get a real response back, or were they all turned away with an error? Tell me what you found in plain English.
You are looking for one of three answers:
“I see connection attempts from outside, but they all got errors” (or similar). Every device on the internet gets probed by automated scanners: programs that knock on every door looking for something unlocked. If those probes hit your OpenClaw and got turned away, nothing happened. You can stop here.
“I see successful responses going back to an outside address” (or similar). Outside traffic was reaching your agent and getting real responses back. Go to Step 2.
“I cannot find any logs,” “I am not sure,” or an answer you are not sure how to read. Go to Step 2 as a precaution. Closing the door takes two minutes and does not harm anything.
If your agent says it cannot access logs or needs permission to run a command: It may need your approval to look at the system. If you see an approval prompt, allow it for this check. If it still cannot access them, skip to Step 2 and proceed as a precaution.
Step 2: Tell your agent to stop accepting outside connections
The gateway is the part of OpenClaw that receives messages from outside your device. Think of it as the front door. By default, it only accepts messages from your own device. If that setting was changed at some point, this step changes it back.
Start by asking your agent to show you the current state:
Show me my current gateway config, including the bind address and port number.
Your agent will show you the current values. Once you know what is set, paste this to make the change:
Check the gateway.bind setting in my OpenClaw config. Tell me what it is currently set to. If it is not set to 127.0.0.1, change it to 127.0.0.1:18789 and restart the gateway. Tell me what it was before and what it is now.
Before you paste this: it will make a change, not just check. If the setting needs to be updated, your agent will update it and restart itself. The restart takes a few seconds. Your interface may go quiet briefly, then come back. Your conversations and agent memory are not affected by a restart.
The address 127.0.0.1 means “this device only.” Once this setting is in place, connections from other devices cannot reach your gateway directly. 0.0.0.0 is the opposite: it means “accept connections from any device” and is the wide-open state. If your gateway was set to 0.0.0.0, changing it to 127.0.0.1 is the fix.
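For the curious, the distinction can be checked in code. A minimal sketch using Python's standard ipaddress module (the address strings are just examples):

```python
import ipaddress

def bind_scope(bind: str) -> str:
    """Classify a bind address as loopback-only or wide open."""
    host = bind.rsplit(":", 1)[0]  # strip the port from e.g. "127.0.0.1:18789"
    addr = ipaddress.ip_address(host)
    if addr.is_loopback:
        return "this device only"
    if addr.is_unspecified:  # 0.0.0.0 listens on every interface
        return "any device (wide open)"
    return "a specific network interface"

print(bind_scope("127.0.0.1:18789"))  # this device only
print(bind_scope("0.0.0.0:18789"))    # any device (wide open)
```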
To undo this later, ask your agent to change gateway.bind back to 0.0.0.0:18789. But before going back to that, read the SSH tunneling article below: it shows how to keep remote access without leaving the gateway open.
Running OpenClaw inside WSL2 on Windows: WSL2 (Windows Subsystem for Linux, a way to run Linux software inside Windows) has its own internal loopback. Setting gateway.bind to 127.0.0.1 inside WSL2 locks access to WSL’s internal environment, not to your Windows machine as a whole. If you were reaching your agent from a Windows app running outside WSL, that will stop working after this change. That is the correct outcome for security. The SSH tunneling article covers how to set up access between WSL2 and Windows apps.
If you are using a managed or cloud-hosted version of OpenClaw (rather than running your own instance): the gateway and firewall steps do not apply to your setup. Your hosting provider manages those. Skip to Step 4.
If your agent says it cannot make this change: Some setups restrict what the agent can modify. If that happens, you can do it manually. Open the file called openclaw.json. On Mac and Linux it lives at ~/.openclaw/openclaw.json, on Windows at %APPDATA%\openclaw\openclaw.json. Find the section that starts with "gateway" and update it to read "bind": "127.0.0.1:18789". Save the file, then restart OpenClaw the same way you normally start it.
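If you prefer to script the edit rather than hand-editing the file, here is a sketch. It assumes a top-level "gateway" object in openclaw.json, which may not match your file exactly; keep a backup copy before changing anything:

```python
import json

def lock_gateway(config_text: str) -> str:
    """Set gateway.bind to loopback in an openclaw.json-style config.
    The "gateway" key layout is an assumption; adjust to your file."""
    config = json.loads(config_text)
    gateway = config.setdefault("gateway", {})
    old = gateway.get("bind", "(not set)")
    gateway["bind"] = "127.0.0.1:18789"
    print(f"bind was {old}, now {gateway['bind']}")
    return json.dumps(config, indent=2)

before = '{"gateway": {"bind": "0.0.0.0:18789", "port": 18789}}'
print(lock_gateway(before))
```

After writing the result back to the file, restart OpenClaw the way you normally start it so the new setting takes effect.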
Step 3: Check that your firewall is blocking outside access to OpenClaw
A firewall is software that acts as a checkpoint between your device and the internet, deciding which kinds of traffic to allow through and which to block. OpenClaw communicates on port 18789 (think of a port like a specific door number on your device). That door should not be open to outside traffic.
Ask your agent to check and close it if needed:
Check whether port 18789 is listed as open in the system firewall. If it is, remove the rule that opens it and tell me what you changed. If there is no rule for that port, tell me that instead.
After the agent runs this, you will see one of two responses:
- “No rule exists for port 18789” or similar: that is the correct state. The port was never open. You are done with this step.
- “I removed a rule allowing port 18789”: your agent closed the door. Ask it to check once more to confirm the rule is gone.
If you need to do this manually on Ubuntu Linux: UFW (the standard firewall tool on Ubuntu) manages port rules. Run sudo ufw status to check whether port 18789 appears. If it does, run sudo ufw delete allow 18789 to remove the rule. If UFW says “Could not delete non-existent rule,” the port was never open and you are done.
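If you want to script the check, a sketch that scans `ufw status` output for the port. The sample text below mimics UFW's usual layout; pipe in the real output from `sudo ufw status` on your own machine:

```python
def port_rules(ufw_status: str, port: int) -> list[str]:
    """Return the lines of `ufw status` output that mention a port."""
    needle = str(port)
    return [line for line in ufw_status.splitlines() if needle in line]

# Invented sample output in UFW's usual shape:
sample = """Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
18789                      ALLOW       Anywhere
"""
print(port_rules(sample, 18789))  # a matching rule line means the port is open
print(port_rules(sample, 443))   # an empty list means no rule exists
```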
If you are on Windows (not WSL): Windows Firewall manages port rules. From a PowerShell or Command Prompt window run as Administrator, run: netsh advfirewall firewall delete rule name="openclaw" protocol=TCP localport=18789. If no rule by that name exists, the port was never open.
Step 4: If outside connections were getting through, replace your access keys
OpenClaw connects to AI services like OpenAI, Anthropic, or DeepSeek using an API key. An API key is a long string of characters that acts like a password: it tells the service “this request is authorized by my account.” If outside traffic was reaching your agent and getting real responses, replacing those keys removes the risk. If your agent found only errors in Step 1, you can skip this step.
Ask your agent to tell you what needs to change:
List all the API keys and access tokens stored in my OpenClaw config. Do not show me the values. Just tell me which services they are for, like OpenAI or Anthropic.
For each service your agent lists: go to that service’s website and create a new key before deleting the old one, so your agent is never left without a working key. Then give the new key to your agent:
Update my [service name] API key in the config to [paste your new key here].
The old key stops working as soon as you delete it on the service’s website; your agent uses the new one from the moment you update the config. You should see no interruption in normal use.
Replacing those keys is a precaution, not a confirmation that they were compromised. The risk is low but the fix is fast, and it removes the uncertainty.
OpenAI: platform.openai.com/api-keys
Anthropic: console.anthropic.com/account/keys
DeepSeek: log in and look for API or developer settings
Other services: search “[service name] API keys” and look for the developer or account section
If OpenClaw is running on a remote server (a separate computer you access over the internet, not your own laptop): there is one more thing to check. Servers use SSH keys to control who can log in remotely. SSH (Secure Shell) is a standard protocol for remotely accessing a server from the command line. An SSH key works like a physical key to the server: whoever holds the private half can get in. Ask your agent:
Are there any SSH private key files in the .ssh directory on this server? If yes, list them by name only. Do not show me the contents.
Your agent will list files like id_rsa or id_ed25519. If it finds none, you have no SSH keys stored here and can skip the rest of this section.
Only replace these SSH keys if your agent confirmed in Step 1 that outside connections were getting real responses: generate a new key pair, add the new public key to the server’s authorized_keys file, confirm you can log in with the new key, then remove the old one. If Step 1 found only errors, stop here.
What these steps protect against going forward
With gateway.bind set to 127.0.0.1 and port 18789 closed in your firewall, your agent is no longer reachable directly from outside your device. Your chat integrations (Discord, Telegram, and similar) still work because they route through their own secure channels, not through your gateway’s port directly.
This setup means someone on the internet cannot reach your agent without first getting access to your device or your chat account. That is a meaningful reduction in exposure.
About reversibility: Everything in Steps 1 through 3 is fully reversible. If you change gateway.bind and later want remote access again, change it back to 0.0.0.0:18789 and open the firewall port. The SSH tunneling article in Go Deeper covers how to set up secure remote access that does not require reopening the port. API key replacement is permanent in the sense that the old key stops working, but your agent keeps functioning normally with the new key from the moment you update it.
These steps handle the immediate situation. The complete hardening setup is in Brand New Claw: exec approvals, plugin vetting, what your agent is and is not allowed to do. It covers all of this in the same plain-English approach, without assuming you have a sysadmin background.
Complete hardening guide
Brand New Claw
The full security baseline for OpenClaw operators. Gateway config, exec approvals, plugin vetting, tool scoping, and a hardened config you can drop straight into your agent.
FAQ
My agent found outside connection attempts but they all got errors. Do I need to do anything?
No. Errors mean the door was knocked on but never opened. Every internet-connected device gets this constantly. Nothing was accessed and nothing needs to change.
I changed the gateway setting but now I cannot reach my agent from my phone or a second computer. Did something break?
Nothing broke. That is the correct outcome. Your agent is now only reachable from your own device or through a secure tunnel. Your Discord and Telegram integrations still work. The SSH tunneling article covers how to set up remote access that does not require leaving the gateway open.
What does it mean to “rotate” an API key?
Rotating a key means replacing it: you create a new one on the service’s website, update it in your OpenClaw config, and then delete the old one. It is called rotating because you cycle through keys rather than just changing a password. Once you delete the old key, it stops working immediately. Your agent uses the new one from that point on.
How did my gateway end up accepting outside connections in the first place?
Usually one of three ways: it was changed during setup to allow remote access, a guide somewhere recommended it without explaining the risk, or a plugin changed it. It is worth checking the gateway.bind value any time you install a new plugin or follow an external setup guide.
Do I need to reinstall OpenClaw?
No. Changing the gateway setting and rotating your keys handles this situation. Reinstalling is for a different problem: if the software itself behaves strangely in ways that persist after those changes.
Should I report this to the OpenClaw team?
If your agent confirmed that outside requests were getting real responses, yes. File a note on the OpenClaw GitHub with the date range and a description of what you saw. You do not need to include sensitive details. It helps the team understand how widespread the problem is.
I am on a cloud server. My hosting provider’s firewall shows port 18789 open. Why would it be open?
Cloud servers often have ports opened during initial setup either by a one-click installer, a setup guide, or a default firewall template. The port being open does not mean it was exploited. Closing it now is the right move regardless of why it was opened.
Understanding the attack surface
When someone connects to your OpenClaw gateway from outside, what can they actually do? The answer depends on your configuration, but the default is worse than most operators expect.
What the gateway exposes by default
The gateway is an HTTP API. With no authentication layer (the default), anyone who can reach it can:
- Send messages to your agent as if they are you
- Ask the agent to read any file in the workspace (including openclaw.json with all your credentials)
- Trigger exec tool calls that run shell commands on your server
- Send messages through any connected channel (Telegram, Discord, email)
- Access memory contents and stored context
- Trigger any automation the agent has access to
Check my current gateway configuration. Is authentication enabled? What tools does the agent have access to? If someone connected from outside right now, what would they be able to do? Give me a realistic worst-case scenario.
Forensic analysis of the access
Before closing the vulnerability, gather evidence of what happened. This helps you understand the scope of any breach.
Gateway log analysis
Check the gateway logs for any requests from external IP addresses. Show me the timestamp, source IP, and request type for each external connection in the last 30 days. If logs are not available, tell me how to enable them.
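If you want to eyeball the logs yourself, here is a sketch of the kind of filtering the agent would do. The log line format here is invented for illustration; real OpenClaw gateway logs may look different:

```python
import ipaddress
import re

# Hypothetical log format: "<timestamp> <source-ip> <status-code> ..."
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<ip>\S+) (?P<status>\d{3})")

def external_hits(log_lines):
    """Yield (timestamp, ip, status) for requests from non-loopback addresses."""
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        if not ipaddress.ip_address(m["ip"]).is_loopback:
            yield m["ts"], m["ip"], m["status"]

sample = [
    "2026-02-01T09:14:02Z 127.0.0.1 200",      # your own device: fine
    "2026-02-01T11:40:55Z 203.0.113.7 401",    # outside probe, rejected
    "2026-02-01T11:41:03Z 203.0.113.7 200",    # outside probe, answered: investigate
]
for hit in external_hits(sample):
    print(hit)
```

Any external line with a success status (200-range) is the “real responses going back” case from Step 1.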
Session history analysis
Check session history for any sessions that were not initiated by me. Look for sessions with unfamiliar user identifiers, sessions at unusual times, or sessions that made suspicious requests (reading openclaw.json, running exec commands, accessing credentials).
File integrity check
Check my workspace files for any modifications I did not make. Compare the current state against the last known good git commit. Flag any files that were added, modified, or deleted by an unknown session.
Hardening after the incident
Once the immediate threat is contained, implement hardening to prevent recurrence.
Network-level hardening
- Set gateway.bind to 127.0.0.1: This is the single most important change. It prevents the gateway from accepting connections from any network interface except loopback.
- Configure firewall rules: Block the gateway port (default 18789) from external access at the OS firewall level as a secondary defense.
- Review cloud firewall: If running on a cloud provider, ensure the security group or firewall rules do not allow inbound traffic to the gateway port.
Implement all three network hardening measures: set gateway.bind to 127.0.0.1, add a firewall rule blocking external access to port 18789, and verify my cloud provider firewall settings. Show me the commands and confirm each is applied.
Tool permission hardening
Review my exec tool permissions. If exec.security is set to “full”, change it to an allowlist that only permits commands I actually need. Show me a recommended allowlist based on my current usage.
Ongoing monitoring for external access
Set up monitoring to detect future unauthorized access attempts early.
Set up a cron job that checks for any connections to the gateway from non-loopback addresses every hour. If any are found, send me an immediate Telegram alert with the source IP, timestamp, and what was requested.
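The hourly check that prompt describes could be sketched like this. The column layout is assumed from typical Linux `ss -tn` output; verify it against your system before trusting it in a cron job:

```python
import ipaddress

def foreign_peers(ss_output: str, port: int = 18789):
    """Parse `ss -tn`-style output and return peers connected to the
    gateway port from outside loopback."""
    peers = []
    for line in ss_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if len(cols) < 5:
            continue
        local, peer = cols[3], cols[4]
        if local.endswith(f":{port}"):
            peer_ip = peer.rsplit(":", 1)[0]
            if not ipaddress.ip_address(peer_ip).is_loopback:
                peers.append(peer)
    return peers

# Invented sample in ss's usual shape:
sample = """State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
ESTAB  0      0      127.0.0.1:18789     127.0.0.1:52514
ESTAB  0      0      10.0.0.5:18789      198.51.100.23:41872
"""
print(foreign_peers(sample))  # ['198.51.100.23:41872']
```

A cron entry would run a script like this hourly and fire the alert whenever the list is non-empty.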
Weekly security audit
Create a weekly security audit cron job that checks: gateway bind address is still 127.0.0.1, firewall rules are in place, no unexpected open ports, and no unauthorized sessions in the last week. Send me a summary report to Telegram every Sunday.
Documenting the incident
Record what happened for future reference. This helps with pattern recognition if similar incidents occur.
Write an incident report for what just happened. Include: timeline (when the exposure started, when it was discovered, when it was closed), what was exposed, what actions were taken, what evidence of unauthorized access was found, and what hardening measures were implemented. Save it to workspace/incidents/ for future reference.
Common attack scenarios and what they look like
Understanding what an attacker does with gateway access helps you recognize the signs in your logs and session history.
Scenario 1: Credential extraction
The attacker sends a message asking the agent to read the config file and output all API keys. This is the fastest way to monetize gateway access. In session history, it looks like a single message asking to read openclaw.json or “show me all my API keys.”
Search my session history for any messages that asked to read openclaw.json, display API keys, show credentials, or output configuration secrets. Flag anything suspicious.
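The kind of scan that prompt describes can be sketched as a simple keyword filter. The pattern list is illustrative, not exhaustive:

```python
# Example patterns attackers use for credential extraction; extend as needed.
SUSPICIOUS = ("openclaw.json", "api key", "show credentials",
              "configuration secrets", "authorized_keys")

def flag_messages(messages):
    """Return messages whose text matches a credential-extraction pattern."""
    return [msg for msg in messages
            if any(pattern in msg.lower() for pattern in SUSPICIOUS)]

history = [
    "Summarize today's calendar",
    "Read openclaw.json and print it",
    "Show credentials for every service you know",
]
print(flag_messages(history))  # the last two messages are flagged
```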
Scenario 2: Compute abuse
The attacker uses your agent to run expensive API calls through your accounts. They may ask the agent to generate large volumes of text, process documents, or run lengthy research tasks, all billed to your API keys. In session history, this looks like unfamiliar tasks or unusually long sessions.
Check my API provider dashboards for unusual usage during the exposure period. Look for: usage spikes that do not correlate with my normal activity, requests from unfamiliar IP addresses, and any models called that I do not normally use.
Scenario 3: Lateral movement
The attacker uses the exec tool to explore your server, install backdoors, or pivot to other systems on your network. This is the most dangerous scenario because it can persist even after you close the gateway vulnerability. In session history, look for exec commands that probe the filesystem, install software, or make network connections.
Check for signs of lateral movement: any exec commands that installed software, created new user accounts, modified SSH keys, opened ports, or established outbound network connections. Also check for any new files in system directories that I did not create.
Scenario 4: Social engineering via messaging
The attacker sends messages through your connected channels (Telegram, Discord) to your contacts, posing as you or your bot. This can be used for phishing, spreading malware, or social engineering your contacts.
Check my Telegram and Discord message history for any messages I did not send. Look especially for messages to new contacts or channels, messages containing links, and messages that ask recipients to do something (click a link, send money, share information).
Severity assessment framework
Not all external access is equally dangerous. This framework helps you prioritize response actions.
| Severity | Indicators | Response |
|---|---|---|
| Critical | Exec commands found in logs from unknown sessions; credentials were accessed; messages sent to contacts | Immediate credential rotation, server rebuild consideration, contact notification |
| High | Unknown sessions found but no exec or credential access; API usage anomalies | Credential rotation, gateway hardening, usage audit |
| Medium | Connection attempts found but all failed or were rejected; no successful sessions | Gateway hardening, firewall rules, monitoring setup |
| Low | Port scan detected but no connection attempts to the gateway specifically | Gateway hardening as preventive measure |
Based on the evidence gathered so far, classify my incident severity using this framework. Tell me which response actions I need to take and in what order.
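The table's decision logic can be sketched as a small function. The indicator names are my shorthand for the table rows, not OpenClaw terminology; evaluate the worst applicable row first:

```python
def classify_severity(exec_or_credentials: bool, unknown_sessions: bool,
                      failed_attempts: bool, port_scan_only: bool) -> str:
    """Map the indicators from the severity table to a level,
    worst case first."""
    if exec_or_credentials:
        return "Critical"
    if unknown_sessions:
        return "High"
    if failed_attempts:
        return "Medium"
    if port_scan_only:
        return "Low"
    return "No evidence of external access"

# The typical "probes only, all rejected" finding from Step 1:
print(classify_severity(exec_or_credentials=False, unknown_sessions=False,
                        failed_attempts=True, port_scan_only=False))  # Medium
```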
Post-incident hardening checklist
After closing the immediate threat, work through this checklist to harden your setup against future incidents.
- ✅ gateway.bind set to 127.0.0.1
- ✅ OS firewall blocking gateway port from external access
- ✅ Cloud provider firewall (if applicable) blocking gateway port
- ✅ exec.security set to allowlist (not “full”)
- ✅ All API keys rotated
- ✅ All bot tokens regenerated
- ✅ Session history reviewed for unauthorized access
- ✅ Server checked for persistence mechanisms (SSH keys, cron jobs, new users)
- ✅ Monitoring cron job set up for gateway bind regression
- ✅ Incident documented in workspace/incidents/
Run through the post-incident hardening checklist. For each item, check whether it has been done and report the status. For any items not yet completed, show me the exact steps to complete them.
When to consider rebuilding the server
Rebuilding from a clean image is the nuclear option, but sometimes the right call. Consider it when any of the following apply:
- Evidence of exec commands run by an unknown party, especially commands that modify system files, install software, or create network connections
- Persistence mechanisms found (unauthorized SSH keys, cron jobs, user accounts)
- Extended exposure window (weeks or months) with exec.security set to “full”
- Inability to fully audit all changes made during the exposure window
Rebuilding takes time but gives certainty. Patching an actively compromised server leaves uncertainty about what else may have been changed that you did not find.
Based on the forensic analysis results, do I need to rebuild from a clean image? If yes, what data do I need to back up first and what is the rebuild process? If no, what gives you confidence that the server is clean?
Prevention architecture for future deployments
When setting up new OpenClaw instances (or rebuilding after an incident), build security in from the start rather than retrofitting it.
Defense in depth
Layer multiple defenses so that failure of any single layer does not expose the gateway:
- Layer 1 (gateway config): gateway.bind = 127.0.0.1
- Layer 2 (OS firewall): sudo ufw deny 18789/tcp
- Layer 3 (cloud firewall): No inbound rule for port 18789
- Layer 4 (tool permissions): exec.security = allowlist
- Layer 5 (monitoring): Cron checks for bind regression
Implement the full defense-in-depth architecture for my setup. For each layer, show me the current status and the command to implement it if missing.
Additional questions
Can an attacker use my agent to attack other systems on my network?
Yes, if exec.security allows arbitrary commands. An attacker can use the exec tool to scan your local network, attempt connections to internal services, and potentially pivot to other machines. This is why restricting exec permissions is critical even when the gateway is not directly exposed.
How do I know if the attacker installed a backdoor?
Check for: new entries in ~/.ssh/authorized_keys, new cron jobs (crontab -l and files in /etc/cron.d/), new user accounts in /etc/passwd, modified system binaries (check with package manager verify commands), and new systemd services. If any of these show unexpected entries, a server rebuild is recommended.
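One of those checks, spotting extra UID-0 accounts in /etc/passwd, can be sketched like this (the sample data is invented):

```python
def extra_root_accounts(passwd_text: str):
    """Return account names with UID 0 other than root.
    Attackers sometimes add a second UID-0 user as a backdoor."""
    suspects = []
    for line in passwd_text.splitlines():
        fields = line.split(":")  # name:passwd:UID:GID:comment:home:shell
        if len(fields) >= 3 and fields[2] == "0" and fields[0] != "root":
            suspects.append(fields[0])
    return suspects

sample = """root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
sysadm2:x:0:0::/root:/bin/bash
"""
print(extra_root_accounts(sample))  # ['sysadm2']
```

On a real server you would feed it the contents of /etc/passwd; any name it returns warrants the rebuild conversation.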
What logs should I preserve before any changes?
Before patching or rebuilding, copy the following: gateway access logs (if they exist), session history files (in your OpenClaw data directory), system auth logs (/var/log/auth.log), and any output from the forensic analysis commands run earlier. Store these on a separate device or cloud storage. They are your evidence trail and may be needed if you discover the compromise was worse than initially assessed. If you need to involve law enforcement or notify affected parties later, these logs are the foundation of that process. Do not delete or modify them until the incident is fully resolved and documented.
For a complete security hardening walkthrough that covers gateway configuration, firewall setup, tool permission management, credential rotation, monitoring, and incident response planning in a single comprehensive guide, see the Brand New Claw product page. Every recommendation in this article and more, organized as a sequential checklist you can work through in one session.
Go deeper
CVE-2026-25253: what it is, whether you’re exposed, and what to do now
The specific vulnerability disclosed this week. What it does, who is affected, and the fix.
Security config before you go live
The baseline settings every OpenClaw operator should have in place. Gateway, exec approvals, plugin vetting, and tool scoping explained without assuming a sysadmin background.
How to vet a plugin before you install it
The ClawHub crisis exposed what happens when plugins are installed without review. How to check what a plugin actually does before it runs inside your agent.
