China’s National Computer Network Emergency Response Technical Team (CNCERT) has issued a cybersecurity warning regarding the use of OpenClaw, an open-source and self-hosted autonomous artificial intelligence agent previously known as Clawdbot and Moltbot. According to Chinese authorities, the platform’s weak default security configurations combined with its privileged system-level access could expose organizations to serious cyber threats if not properly secured.
In a notice published on WeChat, CNCERT highlighted that OpenClaw’s autonomous capabilities – designed to execute tasks, browse the web, and interact with system resources – could be exploited by threat actors to gain control of endpoints. Because the AI agent operates with elevated privileges to perform automated tasks, attackers may leverage security gaps to manipulate the system or extract sensitive information.
Cyber Technology Insights: AI Security Risks in 2026: How Organizations Can Protect Against Breaches
A major concern raised by cybersecurity experts involves prompt injection attacks, where malicious instructions embedded within web content can influence the AI agent’s behavior. If OpenClaw accesses such content during browsing or analysis tasks, attackers could trick the system into leaking confidential information. These attacks are often referred to as indirect prompt injection (IDPI) or cross-domain prompt injection (XPIA) because the malicious instructions are delivered through external sources rather than through direct interaction with the AI model.
Researchers warn that adversaries can exploit legitimate AI features – such as webpage summarization or content analysis – to run manipulated instructions. These techniques have been associated with various malicious outcomes, including search engine manipulation, evasion of automated moderation systems, and generation of biased or misleading content.
The risks are not purely theoretical. Security researchers from PromptArmor recently demonstrated how messaging platforms like Telegram or Discord could be used to trigger data exfiltration when interacting with OpenClaw. In this scenario, attackers manipulate the AI agent into generating a URL that includes confidential information within its query parameters. When the link is previewed automatically by messaging applications, sensitive data could be transmitted to attacker-controlled servers without requiring the user to click the link.
CNCERT also outlined several additional security concerns related to the AI agent’s design. One risk involves the possibility that OpenClaw could unintentionally delete important files or data due to misinterpreting user instructions. Another involves malicious “skills” uploaded to repositories such as ClawHub. If installed, these skills could execute arbitrary commands or deploy malware on compromised systems. Additionally, attackers may exploit newly disclosed vulnerabilities within OpenClaw to gain unauthorized access or extract sensitive information.
Chinese authorities warned that such risks could be particularly damaging in critical sectors such as finance, energy, and government operations. In these environments, a successful compromise could result in the exposure of proprietary business data, confidential code repositories, or even the disruption of essential operational systems.
Cyber Technology Insights: Meta Doubles Down on Teen AI Safety – A Big Moment for CyberTech
To mitigate these threats, CNCERT recommends several security practices for organizations deploying OpenClaw. These include restricting network access to the platform’s management ports, isolating the agent within containerized environments, avoiding storage of credentials in plaintext, and installing AI skills only from verified sources. Administrators are also advised to disable automatic skill updates and ensure the system is consistently updated with the latest security patches.
Reports also indicate that Chinese authorities have taken precautionary measures by restricting government agencies and state-owned enterprises from installing OpenClaw AI applications on workplace computers. The restrictions are reportedly intended to limit potential security exposure linked to the platform.
Meanwhile, the rapid popularity of OpenClaw has created opportunities for cybercriminals. Security researchers have identified malicious GitHub repositories posing as legitimate OpenClaw installers. These repositories distribute malware, including Atomic Stealer, Vidar Stealer, and a Golang-based proxy malware known as GhostSocks, using deceptive installation instructions.
Investigators found that some malicious repositories gained high visibility in search results, making them appear as legitimate download sources for users seeking OpenClaw installation files for Windows or macOS systems.
The warnings highlight growing concerns around the cybersecurity implications of autonomous AI agents. As organizations increasingly adopt AI-driven tools capable of interacting with external data sources and executing tasks automatically, experts emphasize the importance of strong security configurations and continuous monitoring to prevent exploitation.
Cyber Technology Insights: Global Cybercrime Surge: How Criminals Get Resources So Easily
To participate in our interviews, please write to our CyberTech Media Room at info@intentamplify.com
🔒 Login or Register to continue reading




