OpenClaw Security Issues: Add a “Security Guardrail” to Your AI Application

OpenClaw Security Issues: Add a “Security Guardrail” to Your AI Application

março 11, 2026 | NSFOCUS

In 2026, AI intelligent agent technology will usher in a full-scale explosion. As a representative project, OpenClaw (formerly known as Clawdbot and Moltbot) is highly favored for its powerful capabilities-it can integrate multi-channel communication capabilities with large language models to build customized AI assistants with persistent memory and active execution capabilities, supporting local private deployment.

However, it is precisely such a “capable assistant” that can become a “time bomb” lurking in your network.

OpenClaw’s Core Risks

OpenClaw was written by individual programmers. It took only a few months from its release to its popularity. Due to its own design characteristics, there is a natural problem of “fuzzy trust boundaries”. It has the ability to operate continuously, make independent decisions, call systems and external resources. In the absence of effective authority control, audit mechanism and security reinforcement, it will face three serious risks:

Security risk 1: Code security – two high-risk RCEs in three days, the system can be taken over maliciously

OpenClaw’s code base has been exposed to two high-risk remote code execution vulnerabilities (RCE) in a short period of time. Attackers can exploit vulnerabilities to execute arbitrary code on the target host without complex operations, achieving a one-step leap from “invasion” to “takeover”. Once successful, the host where OpenClaw is located will become the attacker’s “bottom chicken”, and the core data and internal network of the enterprise will be completely exposed to risks. This is not alarmist, but a real threat that has been actively exploited in the wild.

Security risk 2: Blind trust amplifies risks -“safety for convenience”, Agent becomes a springboard for attack

The design concept of OpenClaw emphasizes “autonomy”, and the default configuration often sacrifices safety for convenience. Many users need to give it extremely high permissions when deploying it in order to make their work more convenient, and even allow it to directly access sensitive systems or databases. This “blind trust” in Agent allows attackers to easily manipulate OpenClaw to perform unauthorized operations through induced instructions-such as reading confidential files, sending malicious emails, and laterally moving attacks on other hosts on the intranet. You think it is your right-hand man, but in fact it may be being remotely controlled by the enemy.

Security risk 3: Plug-in system becomes a breakthrough in the supply chain-lack of isolation mechanism, amplification of poisoning threats

OpenClaw supports the extension of functions through the Skills plug-in system, but this “fertile ground” has also become a paradise for attackers. The source of third-party plug-ins is unknown, the supply chain lacks review, and OpenClaw itself lacks an effective isolation mechanism for plug-in operation, making a poisoned plug-in a “Trojan horse”. Once the plug-in is installed, malicious code can run rampant with OpenClaw permissions, steal data, implant backdoors, and even spread poisoning to more users through the plug-in update mechanism. The weak links in supply chain security are infinitely magnified here.

Google banned OpenClaw overnight in February, and several giants such as Facebook, Mata, and Microsoft do not allow employees to use OpenClaw within the company. The Microsoft security team has characterized this situation as an “untrusted code execution environment with persistent credentials”, which is worth pondering for every company that is currently or plans to use AI agents.

Pain Points

In our conversations with many corporate users, we heard two typical concerns:

Customer Voice 1: “Some employees in our company have secretly deployed OpenClaw themselves. I am worried that these ‘shadow AIs’ will cause the local host port to be exposed, causing information leakage. But I don’t even know where they are, let alone control them. “

Customer Voice 2: “Our business unit has officially deployed OpenClaw, but I want to know what external access it does? Are these visits legal and compliant? Is there a risk of being exploited?”

Faced with new AI agents such as OpenClaw, traditional security solutions seem to be powerless:

Traffic content is invisible: OpenClaw user-side API mainly performs regular HTTPS calls, traffic is encrypted and transmitted, and traditional application identification methods are completely invalid.

Port identification is easy to bypass: Although OpenClaw has a default port, it is very easy to modify. Relying solely on port identification not only has low accuracy, but is also easy to be bypassed by attackers.

NSFOCUS OpenClaw Security Protection Solution: “AI Unified Threat Management + NSFOCUS Firewall”

  • Accurate identification: The AI Unified Threat Management has built-in AI agent discovery capabilities, which can actively scan the intranet environment and accurately identify which hosts have deployed AI agents such as OpenClaw.
  • Flexible control: According to enterprise policies, illegally deployed OpenClaws can be isolated from the network and legally deployed OpenClaws can be tracked throughout the process.
  • Defense in depth: The firewall conducts real-time analysis of OpenClaw session access, identifies risks such as malicious URLs, intrusion threats, viruses, etc., and ensures that every access is safe and controllable.

When enterprises hand over their core business to large models, the risks of illusions, data leakage, prompt word injection, etc. of large language models themselves become the “Achilles heel” of enterprises. With more than 20 years of practical experience in offense and defense, NSFOCUS is committed to becoming the most reliable “co-pilot” in our customers’ intelligent transformation, helping them see the security conditions clearly and warn of risks, so that customers can focus on accelerating and surpassing business innovation and drive towards an intelligent future together.