OpenClaw: The Viral AI Assistant I'm Not Installing (Yet)
You’ve spent the past week seeing “Clawdbot” everywhere. Then “Moltbot”. Now “OpenClaw”. Peter Steinberger’s open source project has racked up over 100,000 GitHub stars, been featured in TechCrunch, Fortune, CNBC, and your timeline is full of screenshots from people setting up their “personal Jarvis”.
The promise is irresistible: an AI assistant that runs on your own machine, connects to WhatsApp, Telegram, or Discord, remembers everything you tell it, and can execute real tasks. Book flights. Manage your email. Control your calendar. All from a text message.
Sounds like science fiction made real. And it probably will be, eventually. But today, in February 2026, I’m not installing it. And I don’t think you should either.
What OpenClaw is and why it’s generating so much hype
OpenClaw is a self-hosted AI agent. Unlike ChatGPT or Claude, which live in the cloud, OpenClaw installs on your computer, your server, or a VPS. You control where it runs and where your data lives.
The project connects to language models like Claude, GPT-4, or even local models via Ollama. But the magic isn’t in the model—it’s in what it can do with it: execute commands in your terminal, browse the internet, read and write files, send messages on your behalf, and maintain persistent memory between conversations.
It’s literally “Claude with hands”. An assistant that doesn’t just talk, but acts.
The hype makes sense. We’ve been waiting for this for years. Since Siri appeared in 2011, the promise of an intelligent personal assistant has been more marketing than reality. OpenClaw seems to be the first project that comes close to delivering.
But there’s a problem.
42,000 exposed instances and counting
While the world was celebrating OpenClaw’s capabilities, security researchers were scanning the internet. What they found isn’t pretty.
A recent analysis identified over 42,000 OpenClaw instances publicly exposed on the internet. Of those actively verified, 93% had critical authentication bypass vulnerabilities. On Shodan, the connected device search engine, you can find complete credentials: API keys, OAuth tokens, entire conversation histories, and the ability to execute commands on other people’s machines.
The main vulnerability, cataloged as CVE-2026-25253 with a risk score of 8.8 out of 10, allows an attacker to steal your authentication token with a single click. Visit a malicious website, and the attacker gains complete access to your gateway. From there they can disable the sandbox, modify permissions, and execute arbitrary code on your machine.
One researcher demonstrated the problem by sending an email with prompt injection to a vulnerable instance. The bot read the email, interpreted it as a legitimate instruction, and forwarded the user’s last five emails to an attacker-controlled address. The entire process took five minutes.
Palo Alto Networks described OpenClaw as a “lethal triad”: access to private data, exposure to untrusted content, and external communication capability with persistent memory. Cisco published an analysis where they ran a malicious skill called “What Would Elon Do?” that silently exfiltrated data. The skill had been artificially inflated to appear as number one in the skills repository.
A three-month project with three names
Part of the problem is the speed. OpenClaw was born as Clawdbot just three months ago. It received a cease and desist from Anthropic over the name (too similar to “Claude”), rebranded to Moltbot, had its GitHub and Twitter accounts hijacked by crypto scammers, and finally settled on OpenClaw.
In 72 hours, the most popular project of the moment lost its name, its accounts, and much of its credibility. Meanwhile, 90% of exposed instances on the internet are still running old versions with the previous names, probably abandoned after initial experimentation.
Peter Steinberger himself, the project’s creator, describes it as “a young, unfinished hobby project, less than three months old, not meant for most non-technical users”. It’s an honest statement that contrasts with headlines presenting it as the future of personal productivity.
The real cost nobody mentions
Installation tutorials make it seem free. Technically it is: the software is open source. But you need a language model to make it work.
If you use Claude (the recommended model for complex tasks), you’re looking at API costs that can scale quickly. A Fast Company analysis estimated that automating a few routine tasks would cost around $30 a month. Nothing dramatic, but also not the “free assistant” many imagine.
The alternative is running local models with Ollama. Free in terms of API costs, but you need capable hardware. If you have an NVIDIA GPU with CUDA, relatively straightforward. If you have AMD, you need to wrestle with ROCm. If you have a Mac with Apple Silicon, it works well but you’re limited by unified memory.
And then there’s the time cost. Configuring OpenClaw securely isn’t trivial. Isolated Docker, restricted permissions, separate network, test accounts instead of real ones, monitoring what the agent does. If you’re not willing to invest that time, you probably shouldn’t install it.
Why I’m not installing it (yet)
I’m not anti-OpenClaw. The concept is exactly what I want: an AI assistant I control, running on my infrastructure, that can do real things. I’ve been running self-hosted services for years precisely because I value that autonomy.
But there’s a difference between a Nextcloud server and an agent with access to my terminal, my credentials, and my messaging accounts. The threat model is completely different.
A misconfigured file server can leak documents. A misconfigured AI agent can execute arbitrary commands, exfiltrate credentials, and act on my behalf on platforms where I have reputation and identity.
The project is too young. The code changes too fast. Vulnerabilities appear weekly. The community is more focused on adding features than on hardening. And the hype is attracting users who lack the technical context to understand the risks.
I prefer to wait.
What OpenClaw needs for me to consider it
First, stability. A project that has changed names three times in three months doesn’t inspire confidence. I need to see a couple of months without major incidents, without rebrandings, without Twitter drama.
Second, independent security audits. Not Medium posts explaining vulnerabilities, but formal audits from specialized firms. The project has enough traction to attract that kind of attention.
Third, official hardening documentation. Not third-party guides on Substack, but documentation maintained by the project that explains exactly how to configure it securely. With Docker Compose examples, network policies, and verification checklists.
Fourth, a granular and well-designed permission model. Right now, if you give OpenClaw access, you give it access to everything. I need to be able to say “you can read my calendar but not modify it”, “you can search the internet but not execute commands”, “you can send Telegram messages but only to these contacts”.
When those pieces are in place, I’ll reconsider.
The lesson for the rest of the ecosystem
OpenClaw isn’t an isolated case. It’s the canary in the coal mine for autonomous AI agents.
We’re entering an era where language models will have access to real tools. Not just answering questions, but executing actions. And the security ecosystem isn’t prepared.
OpenClaw’s vulnerabilities aren’t exotic bugs. They’re insecure default configurations, lack of input validation, and trust assumptions that shouldn’t exist. They’re mistakes we’ve known about for decades, applied to a new domain.
If OpenClaw, with all the attention it has, presents these problems, imagine the smaller projects. The forks. The corporate implementations rushed to impress the CEO. The automation scripts someone set up one weekend and forgot on a server.
The future of AI agents is promising. But the present requires caution.
Conclusion: Hype is not a security strategy
OpenClaw represents something genuinely interesting: the democratization of autonomous AI agents. An open source project anyone can install, modify, and control. That has value.
But potential value doesn’t justify ignoring current risks. And right now, the risks are too high for my tolerance.
I’ll keep following the project. I’ll read the commits, the security advisories, and the community discussions. When the code matures, when the audits arrive, when hardening becomes the norm rather than the exception, I’ll install it.
Until then, my “personal assistant” will remain a combination of scripts, cron jobs, and Claude’s web interface. Less sexy, but also less likely to leak my credentials to a server on the internet.
Hype is fun. Security is necessary. And sometimes, the best decision is to wait.
You might also like
You have 3-5 years before AI agents become normal
78% of executives believe digital ecosystems will be built for humans AND AI agents. The window to position yourself is closing.
AI is the new data leak channel (and nobody's ready)
Employees copy sensitive data into ChatGPT without thinking. AI-powered phishing is more sophisticated than ever. Traditional security doesn't work.
AI Agents for Enterprise: From Demo to ROI in 2026
74% of companies with AI agents report ROI in year one. What works, what doesn't, and how to move from pilots to production.