News

Clawdbot AI Flaw Exposes API Keys And Private User Data


Technology Today
2026-01-27 05:35:00

Cybersecurity researchers have raised red flags about a new artificial intelligence personal assistant called Clawdbot, warning it could be inadvertently exposing personal data and API keys to the public. 

Blockchain security firm SlowMist reported on Tuesday that a Clawdbot “gateway exposure” has been identified, putting “hundreds of API keys and private chat logs at risk.”

“Multiple unauthenticated instances are publicly accessible, and several code flaws may lead to credential theft and even remote code execution,” it added.

Security researcher Jamieson O’Reilly originally detailed the findings on Sunday, stating that “hundreds of people have set up their Clawdbot control servers exposed to the public” over the last couple of days. 

Clawdbot is an open-source AI assistant built by developer and entrepreneur Peter Steinberger that runs locally on a user’s device. Over the weekend, online chatter about the tool “reached viral status,” reported Mashable on Tuesday. 

Scanning for “Clawdbot Control” accesses credentials

The AI agent gateway connects large language models (LLMs) to messaging platforms and executes commands on users’ behalf using a web admin interface called “Clawdbot Control.”

The authentication bypass vulnerability in Clawdbot occurs when its gateway is placed behind an unconfigured reverse proxy, O’Reilly explained. 

Using internet scanning tools like Shodan, the researcher could easily find these exposed servers by searching for distinctive fingerprints in the HTML.

“Searching for ‘Clawdbot Control’ – the query took seconds. I got back hundreds of hits based on multiple tools,” he said. 
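Operators can run the same kind of fingerprint check against their own deployment before an attacker does. The sketch below is a minimal illustration of the idea, assuming the admin UI serves an HTML page containing the string “Clawdbot Control”; the URL, function names, and sample bodies are placeholders, not anything documented by the project.

```python
# Minimal self-check sketch: fetch a page and look for the "Clawdbot Control"
# fingerprint in the HTML, the same string the researcher searched for on
# Shodan. Run it from an *external* network against your own gateway.
from urllib.request import urlopen

FINGERPRINT = "Clawdbot Control"


def looks_exposed(html: str) -> bool:
    """Return True if the response body carries the admin-UI fingerprint."""
    return FINGERPRINT in html


def check(url: str, timeout: float = 10.0) -> bool:
    """Fetch `url` (a placeholder for your own gateway) and scan the body."""
    with urlopen(url, timeout=timeout) as resp:
        return looks_exposed(resp.read().decode("utf-8", errors="replace"))


# Example with a canned body instead of a live request:
print(looks_exposed("<title>Clawdbot Control</title>"))  # True
```

If the check returns True from a network you do not control, the admin interface is reachable by the same scanners the researcher used.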


The researcher said the exposed servers gave him access to complete credentials, including API keys, bot tokens, OAuth secrets and signing keys, as well as full conversation histories across all chat platforms, the ability to send messages as the user, and command execution capabilities.

“If you’re running agent infrastructure, audit your configuration today. Check what’s actually exposed to the internet. Understand what you’re trusting with that deployment and what you’re trading away,” advised O’Reilly.

“The butler is brilliant. Just make sure he remembers to lock the door.”

Extracting a private key took five minutes 

The AI assistant could also be exploited for more nefarious purposes regarding crypto asset security. 

Matvey Kukuy, CEO at Archestra AI, took things a step further in an attempt to extract a private key. 

He shared a screenshot showing the attack: he sent Clawdbot an email containing a prompt injection, asked Clawdbot to check the email, and received the private key from the exploited machine, a process that “took 5 minutes.”

Source: Matvey Kukuy

Clawdbot is slightly different from other agentic AI bots because it has full system access to users’ machines, which means it can read and write files, run commands, execute scripts, and control browsers.

“Running an AI agent with shell access on your machine is… spicy,” reads the Clawdbot FAQ, which adds, “There is no ‘perfectly secure’ setup.”

The FAQ also highlighted the threat model, stating malicious actors can “try to trick your AI into doing bad things, social engineer access to your data, and probe for infrastructure details.”

“We strongly recommend applying strict IP whitelisting on exposed ports,” advised SlowMist. 
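SlowMist’s whitelisting advice can be illustrated at the reverse proxy itself, the same layer where the bypass occurs when left unconfigured. The fragment below is a hedged nginx sketch: the hostname, trusted IP (a documentation-range placeholder), and upstream port are all assumptions, not Clawdbot defaults.

```nginx
# Restrict the admin UI to a single trusted address; everyone else gets 403.
server {
    listen 443 ssl;
    server_name gateway.example.com;   # placeholder hostname

    location / {
        allow 203.0.113.10;            # trusted admin IP (placeholder)
        deny  all;                     # reject all other clients
        proxy_pass http://127.0.0.1:8080;  # placeholder upstream port
    }
}
```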
