OpenClaw is one of the hottest AI products on the market right now. The numbers alone explain why it has become such a talking point.
- OpenClaw: over 331,000 GitHub stars in three months
- Linux: 224,000 GitHub stars in 14 years
The project’s GitHub page presents it as a personal AI assistant that runs on your own devices and works across channels such as WhatsApp, Telegram, Slack, Discord, iMessage, and more.
That speed of adoption is remarkable because OpenClaw is not just another chatbot. Its official documentation describes it as a self-hosted gateway that connects messaging apps to an always-available AI assistant.
OpenClaw is appealing because it does things people actually want an AI assistant to do. It can handle practical day-to-day tasks through familiar apps, remember user preferences and past context over time, and go beyond conversation into automation.
It can run scripts, control browsers, manage email and calendars, and support scheduled actions, making it feel less like a chatbot and more like a working assistant.
From a product perspective, that is exactly the kind of experience people have been waiting for from personal AI assistants. OpenClaw shows what happens when AI moves from answering questions to actually taking actions on behalf of the user.
The Same Features That Make OpenClaw Powerful Also Make It Risky
That same product power is also what makes OpenClaw such a serious security concern. Cisco calls OpenClaw groundbreaking from a capability perspective and an absolute nightmare from a security perspective.
To deliver real productivity gains, an AI assistant often needs permission to access the systems people actually use, handle sensitive information, and take actions on the user’s behalf. The more access an AI assistant is given, the more serious the security exposure becomes as well.
OpenClaw is not ready for business use, Paul Baier has argued in Forbes.
“As a nontechnical CEO, I wanted to see if the hype was real. The category of personal AI assistants for work and home persona is real and growing, but OpenClaw's bugs and security gaps disqualify this product.”
Paul Baier, CEO & co-founder of GAI Insights
Analyst commentary has been equally direct. According to CSO Online, Gartner researchers said OpenClaw confirms two things at once: first, there is strong market demand for agentic AI; second, the current risk profile is not suitable for most enterprise environments.
They pointed to demonstrated remote code execution paths, serious supply chain exposure through the ClawHub skills marketplace, and the risk of leaked API keys, OAuth tokens, and sensitive conversations when credentials are stored insecurely or a host is compromised.
Another warning came from Noma Security, whose researchers flagged a blind spot many organizations would overlook: shared communication channels such as Discord, Telegram, or WhatsApp groups.
According to Noma’s analysis, when an OpenClaw agent is placed inside a public-facing or loosely controlled group, other participants may be able to issue instructions the bot treats as legitimate.
In the scenario Noma described, an attacker could join a public Discord server, inject a cron job into the agent’s workflow, crawl the local file system for tokens, passwords, API keys, and seed phrases, and exfiltrate the data in about 30 seconds while the bot appears to be functioning normally.
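To make that failure mode concrete, here is a minimal sketch in Python of the control that is missing in such a setup. The function and variable names (handle_group_message, ALLOWED_SENDERS) are illustrative assumptions, not OpenClaw's actual message-handling API; the point is simply that anything posted in a shared channel is untrusted input and should never reach an execution layer without an authorization step.

```python
import subprocess

# Hypothetical gate, not OpenClaw internals: messages from a shared channel are
# untrusted input and must be authorized before they can trigger local actions.

ALLOWED_SENDERS = {"owner@example.com"}  # assumption: only the owner may issue commands
BLOCKED_PATTERNS = ("cron", "crontab", "curl", "ssh", "seed phrase", "api key")

def handle_group_message(sender: str, text: str) -> str:
    """Decide whether a chat message may trigger a local action."""
    if sender not in ALLOWED_SENDERS:
        # Anyone can post in a public Discord or Telegram group; treat them as untrusted.
        return "Ignored: sender is not authorized to issue commands."

    lowered = text.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        # Even an authorized sender should not silently schedule jobs or touch credentials.
        return "Blocked: message matches a high-risk pattern and needs manual approval."

    # Only at this point would a real assistant run a vetted, narrowly scoped action.
    result = subprocess.run(["echo", "running approved action"],
                            capture_output=True, text=True, check=False)
    return result.stdout.strip()

if __name__ == "__main__":
    print(handle_group_message("random_user#1234", "add a cron job that uploads ~/.ssh"))
    print(handle_group_message("owner@example.com", "summarize today's calendar"))
```

A keyword blocklist like this would not stop a determined prompt-injection attempt; it only illustrates why sender identity and an approval step have to sit in front of any local execution.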
3 Key OpenClaw Risks:
- OpenClaw can run shell commands, access local files, and execute scripts on the host machine. If it is misconfigured, over-permissioned, or extended with a malicious skill, it can create a direct operational security risk.
- Credentials may be exposed if tokens and secrets are stored insecurely, putting API keys, OAuth credentials, and conversation data at risk (see the sketch after this list).
- Its messaging integrations expand the attack surface. Inputs coming from apps like WhatsApp, Telegram, Discord, Slack, or iMessage can be used to trigger unintended actions or malicious behavior.
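The second risk is easy to picture. The sketch below is a hypothetical hardening check in Python; the ASSISTANT_API_TOKEN variable and the ~/.openclaw/credentials.json path are assumptions for illustration, not OpenClaw's real configuration. It contrasts pulling a token from the environment (or a secret manager) with leaving it in a plaintext file that other users or a compromised process on the host can read.

```python
import os
import stat
from pathlib import Path

# Hypothetical check, not an OpenClaw feature: prefer environment variables or a
# secret manager over plaintext credential files on the assistant's host machine.

SECRETS_FILE = Path.home() / ".openclaw" / "credentials.json"  # assumed path, for illustration

def load_api_token() -> str:
    """Read the token from the environment instead of a file on disk."""
    token = os.environ.get("ASSISTANT_API_TOKEN")
    if not token:
        raise RuntimeError("Set ASSISTANT_API_TOKEN via the environment or a secret manager; "
                           "do not keep it in a plaintext config file.")
    return token

def warn_on_insecure_secrets_file() -> None:
    """Flag a credentials file that other accounts on the host can read."""
    if SECRETS_FILE.exists():
        mode = SECRETS_FILE.stat().st_mode
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            print(f"WARNING: {SECRETS_FILE} is readable by other users; "
                  "tokens stored there are exposed if the host is compromised.")

if __name__ == "__main__":
    warn_on_insecure_secrets_file()
    try:
        load_api_token()
        print("Token loaded from environment.")
    except RuntimeError as err:
        print(err)
```

If a token has to live on disk at all, restricting the file to the owner and rotating the credential regularly limits the blast radius when the host is compromised.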
Why Enterprises Should Care About OpenClaw
The enterprise concern is not whether OpenClaw was built for personal use. The concern is what happens when employees, contractors, or teams start using it in business environments.
First, once a tool like OpenClaw is used at work, it is no longer just personal. It can access files, run scripts, connect to messaging apps, and interact with business systems. That makes it a potential path for data leakage, unauthorized actions, and hidden exposure inside everyday workflows.
Second, agentic AI changes how risk shows up in day-to-day systems. Traditional security controls are designed to monitor people using approved systems. With tools like OpenClaw, prompts can trigger real actions, credentials can be reused across services, and sensitive data can move through channels that were never meant to function as execution layers.
Third, the skills ecosystem (ClawHub) introduces supply chain risk. If users install third-party skills without proper review, they may bring insecure logic or malicious instructions directly into the environment. A skill may look useful or become popular quickly, but that does not make it safe.
Fourth, the local nature of skills makes the risk harder to ignore. Unlike remote services, these skills are installed as local file packages and loaded directly on the machine, so harmful behavior can be hidden inside files that users assume are harmless extensions; a minimal review sketch appears below.
Finally, OpenClaw creates shadow AI risk. Employees, contractors, and teams looking for speed can start using it informally before the organization has any visibility, policy, or control. By the time leadership notices, the tool may already have access to sensitive workflows, internal systems, or customer data.
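On the skills point, even a lightweight review step makes the risk visible before installation. The sketch below is a hypothetical triage script, not an OpenClaw or ClawHub feature, and it does not assume anything about ClawHub's real package format; it simply scans a downloaded skill folder for calls that reach the shell, the network, or credential locations so a human looks at them before the skill is loaded.

```python
from pathlib import Path

# Hypothetical pre-install triage, not part of OpenClaw: flag skill files that touch
# the shell, the network, or credential locations before anyone loads the package.
RISKY_MARKERS = ("subprocess", "os.system", "requests.post", "urllib.request",
                 "crontab", ".ssh", "API_KEY")

def review_skill(package_dir: str) -> list[str]:
    """Return a list of files and markers worth a closer look."""
    findings = []
    for path in Path(package_dir).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".sh", ".json", ".yaml", ".yml"}:
            continue
        text = path.read_text(errors="ignore")
        for marker in RISKY_MARKERS:
            if marker in text:
                findings.append(f"{path.name}: contains '{marker}'")
    return findings

if __name__ == "__main__":
    for finding in review_skill("./downloaded_skill"):
        print("REVIEW:", finding)
```

A plain string scan will miss obfuscated code, so this is a triage aid, not a substitute for actually reviewing skills or maintaining an allowlist of approved ones.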
So yes, enterprises should care. OpenClaw is not just a personal productivity tool, and its security risks show how agentic AI can enter the workplace faster than security and governance can catch up.
From AI Risk to AI Control
From evaluating tools like OpenClaw to defining guardrails, access controls, and policy frameworks, we help teams reduce exposure while still capturing business value. If your organization is exploring AI assistants and wants a safer path forward, contact Precio Fishbone to build a more controlled and confident AI strategy.
Frequently Asked Questions
Should nontechnical executives use open-source AI assistants like OpenClaw in business settings today?
Not yet. Open-source assistants like OpenClaw are promising, but they are still too fragile for most business environments. They often require hands-on technical setup, continuous monitoring, and frequent troubleshooting. For nontechnical leaders, the operational burden can easily outweigh the immediate productivity gains.
What are the main risks of connecting an AI assistant to sensitive business data?
The biggest issue is insufficient control. Tools that were not built for enterprise use can expose confidential information through broad permissions, insecure integrations, weak credential handling, or unintended data access. For companies working with client, financial, or strategic information, those risks can quickly become more serious than the efficiency benefits.
What is the safest way for executives to begin using AI assistants?
Start with enterprise-ready tools that provide stronger security, clearer access controls, and more predictable governance. Roll them out in a limited environment, connect only the systems that are truly necessary, and focus on practical use cases such as executive summaries, meeting preparation, or task prioritization. This allows teams to test value without introducing unnecessary operational or security risk.