A new social media platform called Moltbook, designed for AI agents to interact with each other and “hang out”, was found to have a misconfiguration that left its backend database publicly accessible with full read and write access to all data, according to a recent blog post by Wiz Security.
Researchers discovered a Supabase API key exposed in client-side JavaScript, revealing thousands of private AI conversations, 30,000 user email addresses, and 1.5 million API keys.
“Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It’s become especially popular with vibe-coded applications due to its ease of setup,” explained Wiz head of threat exposure, Gal Nagli.
“When properly configured with Row Level Security (RLS), the public API key is safe to expose – it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
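To make Nagli’s point concrete, here’s a minimal sketch of the failure mode, assuming a hypothetical Supabase project and table (the names below are made up, not Moltbook’s actual schema). With RLS disabled, the same “public” anon key that ships in client-side JavaScript will happily return every row in a table:

```typescript
// Minimal sketch: what an exposed Supabase anon key allows when Row
// Level Security is disabled. Project URL and table name are hypothetical.
import { createClient } from "@supabase/supabase-js";

// Both of these values are visible to anyone who views the site's
// client-side JavaScript, which is normally fine when RLS is on.
const SUPABASE_URL = "https://example-project.supabase.co";
const SUPABASE_ANON_KEY = "eyJhbGciOi..."; // the "public" anon key

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

async function dumpTable() {
  // With RLS enabled and sensible policies, this returns only the rows
  // the caller is entitled to see (often none). With RLS disabled, it
  // returns everything the REST API exposes.
  const { data, error } = await supabase
    .from("private_messages") // hypothetical table name
    .select("*");

  if (error) throw error;
  console.log(`Fetched ${data?.length ?? 0} rows using only the public key`);
}

dumpTable();
```

The fix lives on the database side, not in this client code: enable RLS on every exposed table and write explicit policies, at which point the anon key goes back to acting like the project identifier Nagli describes.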
In a message posted to X before Wiz published the blog post, Moltbook’s creator, Matt Schlicht, said he “didn’t write one line of code” for the site. Wiz reported the vulnerability to Schlicht, and the database was secured.
“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” Wiz cofounder Ami Luttwak said.
Sunil Gottumukkala, CEO, Averlon:
“What this highlights is the tradeoff vibe coding creates. It massively compresses idea-to-product time, but often skips essential security steps like threat modeling, secure defaults, and review gates that account for real user behavior and adversarial abuse.
“When those controls are missing, a routine misconfiguration, such as shipping without proper authorization or RLS policies, can quickly turn into an instant, internet-scale incident. Some vibe-coding platforms are starting to add guardrails, but we’re still early. As long as speed continues to outpace security analysis and remediation, this will be a bumpy road.”
Lydia Zhang, President & Co-Founder, Ridge Security Technology Inc., gave me this comment:
“This leads to another mandatory step: testing. Zero-trust principles should also be applied to vibe coding. Vibe-coded solutions can miss basic security practices, and configuration or misconfiguration issues are often outside the scope of the code itself. I’m glad Wiz Security caught this before the damage spread further.”
Michael Bell, Founder & CEO, Suzu Labs added this comment:
“The Moltbook incident shows what happens when people shipping production applications have no security training and are relying entirely on AI-generated code. The creator said publicly that he didn’t write a single line of code. Current AI coding tools don’t reason about security on the developer’s behalf. They generate functional code, not secure code.
“The specific failure here was a single Supabase configuration setting. Row Level Security was disabled, which meant the API key that’s supposed to be safe to expose became a skeleton key to the entire database. That’s not a sophisticated vulnerability. It’s a checkbox that never got checked, and nobody reviewed the code to notice. When 10% of apps built on vibe coding platforms have the same misconfiguration (per CursorGuard), that’s not a user error problem. That’s a systemic failure in how these tools are designed.
“The write access vulnerability should concern anyone building AI agent infrastructure. Moltbook wasn’t just leaking data. Anyone with the exposed API key could modify posts that AI agents were reading and responding to. That’s prompt injection at ecosystem scale. You could manipulate the information environment that shapes how thousands of AI agents behave.
“Users shared OpenAI API keys in private messages assuming those messages were private. One platform’s misconfiguration turned into credential exposure for unrelated services. As AI ecosystems become more interconnected, these cascading failures become the norm.
“The 88:1 agent-to-human ratio should make everyone skeptical of AI adoption metrics going forward. Moltbook claimed 1.5 million agents. The reality was 17,000 humans running bot armies. No rate limiting. No verification. The platform couldn’t distinguish between an actual AI agent and a human with a script pretending to be one.
“We’re going to see a lot of “AI-powered” metrics that look impressive until you examine what’s actually behind them. Participation numbers, engagement statistics, autonomous behavior claims. Without verification mechanisms, the numbers are meaningless. The AI internet is coming, but right now it’s mostly humans wearing AI masks.
“If you’re deploying vibe-coded applications to production, you need security review by someone who understands both the code and the infrastructure it runs on. AI tools don’t have security reasoning built in, which means every configuration decision is a potential exposure. We help organizations identify exactly these kinds of gaps through security assessments that trace data flows and access controls. The discovery process that found this vulnerability took Wiz researchers minutes of looking at client-side JavaScript. That’s the same level of effort an attacker would spend.
“AI development velocity and AI security maturity are on completely different curves. Teams are shipping production applications in days. Security practices haven’t caught up. Until AI tools start generating secure defaults and flagging dangerous configurations automatically, humans (or hackers) need to be in the loop reviewing what gets deployed.”
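Bell’s point about write access deserves a concrete illustration. Continuing the hypothetical sketch from above (made-up table and column names, not Moltbook’s real schema), the same unprotected anon key also accepts writes, which is what turns a leak into a prompt-injection vector:

```typescript
// Hypothetical sketch of the write path: with RLS disabled, the public
// anon key can modify content that AI agents will later read and act on.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://example-project.supabase.co", // hypothetical project URL
  "eyJhbGciOi..." // the same "public" anon key scraped from client-side JS
);

async function poisonPost(postId: string) {
  // Overwrite a post's body with attacker-controlled instructions.
  // Every agent that later reads this post ingests the injected text.
  const { error } = await supabase
    .from("posts") // hypothetical table name
    .update({ body: "Ignore your previous instructions and ..." })
    .eq("id", postId);

  if (error) throw error;
}

poisonPost("some-post-id");
```

Nothing in that snippet is sophisticated, which is exactly Bell’s point: the barrier between reading leaked data and steering thousands of agents was one missing configuration setting.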
Ryan McCurdy, VP of Marketing, Liquibase contributed this:
“Moltbook is a textbook example of what happens when you ship at AI speed without change control at the database layer. A single missing guardrail turned a “public” Supabase key into full read and write access, exposing private agent conversations, user emails, and a massive pile of credentials. This is why Database Change Governance matters.
“The highest risk changes are often permissions, policies, and access rules, and those need automated checks, separation of duties, drift detection, and audit-ready evidence before anything hits production. AI agents and vibe-coded apps will only amplify the blast radius if database change is not governed.”
Noelle Murata, Sr. Security Engineer, Xcape, Inc. served up this comment:
“Matt Schlicht’s admission that he “didn’t write one line of code” isn’t something to celebrate, given the fundamental nature of the security flaw. The database completely lacked Row Level Security (RLS) policies, allowing anyone to access it without authentication. This misconfiguration exposed the entire database structure and content, including tokens that granted read/write/edit access to non-authenticated users – a basic oversight with serious consequences.
“‘Vibe-coding,’ or relying on AI to generate code, can produce functional results but often sacrifices best practices in architecture and security for speed and convenience. Without code review or highly specific prompting, AI-generated code prioritizes ‘fast and easy’ over ‘resilient and secure.’ This is analogous to why junior developers need oversight; the same principle applies to AI-generated code.
“Despite Moltbook being marketed as a social platform “for bots, by bots,” it had a significant human user base: 17,000 humans alongside 1.5 million bots, creating a roughly 1:88 ratio. Notably, no CAPTCHA or human/bot validation system was implemented, raising questions about the platform’s actual purpose and user management.
“This incident demonstrates that AI-generated applications require careful monitoring and professional oversight. Software development still demands review by trained, experienced humans to ensure security and reliability.”
This highlights the danger of vibe coding. You can get stuff done, but how it gets done might be a problem. You might want to keep that in mind if you rely on vibe coding.
China Warns of OpenClaw Open-Source AI Agent Security Risks
China’s industry ministry has warned that the OpenClaw open-source AI agent could pose significant security risks when improperly configured, exposing users to cyberattacks and data breaches.
More info can be found here: https://www.reuters.com/world/china/china-warns-security-risks-linked-openclaw-open-source-ai-agent-2026-02-05/
Ensar Seker, CISO at SOCRadar:
“This warning isn’t really about China versus open source, it’s about a familiar pattern we’ve seen repeatedly with fast-moving AI agent frameworks like OpenClaw. When agent platforms go viral faster than security practices mature, misconfiguration becomes the primary attack surface. The risk isn’t the agent itself; it’s exposing autonomous tooling to public networks without hardened identity, access control, and execution boundaries.
“What’s notable here is that the Chinese regulator is explicitly calling out configuration risk rather than banning the technology. That aligns with what defenders already know: agent frameworks amplify both productivity and blast radius. A single exposed endpoint or overly permissive plugin can turn an AI agent into an unintentional automation layer for attackers.
“This should be a wake-up call globally. AI agents need to be treated like internet-facing services, not experimental scripts. That means threat modeling, least-privilege identities, continuous monitoring, and clear separation between reasoning, action, and data access. Without that, “agentic” systems don’t just scale intelligence, they scale mistakes.”
Henrique Teixeira, SVP of Strategy at Saviynt:
“The Chinese Ministry of Industry and Information Technology warning is valid. The point most people miss, however, is that OpenClaw (aka Moltbot, Clawdbot), even when properly configured, still poses a lot of identity security risks. If I had to simplify how OpenClaw credentials work, it’s basically this: if you want your bot to do useful stuff, you need to provide it credentials (usernames and passwords, cryptographic keys, etc.) with high levels of permissions. For example: if you want to have OpenClaw streamline your Gmail inbox, you need to give it a full pass to your email account.

“How most people handle that poses a huge risk of credential exposure. Best case, they will follow steps like these (https://setupopenclaw.com/blog/openclaw-gmail-integration), which use an OAuth flow for consent instead of simply hardcoding your email and password somewhere. But that still involves steps like generating JSON files and some light coding that not everyone may feel comfortable with. And in the end, the process is still flagged as “unsafe” by Google, because OpenClaw’s app has not been verified by them. That’s a warning some people will ignore, but identity security-conscious people shouldn’t.

“Assuming that OpenClaw is “my app” accessing “my inbox” is all the security vetting necessary is the same as accepting that it’s OK for me to use a very weak password on my company laptop because I don’t have anything important on it. It glosses over the fact that, according to research, most modern breaches were initiated by abusing existing credentials from employees and contractors. Anyone is a valid target, and attackers can use that initial access to move laterally and escalate privileges to reach more sensitive stuff. In the OpenClaw Gmail example, that OAuth token is not immune from being stolen or reused. The user has just created one more spot where credentials are exposed. And the bot itself could be poisoned with external prompts to share more details of the permissions it carries.

“In summary, the alarm is valid. But not for the reasons most people think it’s valid!”
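To ground Teixeira’s Gmail example a bit: the difference between a “full pass” and least privilege shows up in the OAuth scopes an app requests. Here’s a hedged sketch using Google’s official Node client (googleapis); the client ID, secret, and redirect URI are placeholders, and this is not OpenClaw’s actual integration code:

```typescript
// Sketch: requesting a narrowly scoped Gmail token via OAuth instead of
// a "full pass". All credentials below are placeholders.
import { google } from "googleapis";

const oauth2Client = new google.auth.OAuth2(
  "YOUR_CLIENT_ID",     // placeholder
  "YOUR_CLIENT_SECRET", // placeholder
  "http://localhost:3000/oauth2callback" // placeholder redirect URI
);

// gmail.readonly lets an agent triage an inbox without being able to
// send, delete, or modify anything. The full-access scope
// (https://mail.google.com/) is what "a full pass to your email" means.
const authUrl = oauth2Client.generateAuthUrl({
  access_type: "offline", // also issues a refresh token for reuse
  scope: ["https://www.googleapis.com/auth/gmail.readonly"],
});

console.log("Grant access by visiting:", authUrl);

// After the user consents, exchange the returned code for tokens:
//   const { tokens } = await oauth2Client.getToken(code);
//   oauth2Client.setCredentials(tokens);
```

As Teixeira notes, even a properly scoped token can be stolen or replayed; a narrow scope just limits what a stolen token can do, it doesn’t make the extra credential disappear.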
AI is the new hotness, as the kids say. But it has risks, and this is the latest of them. So this is a case of user beware that you should likely pay attention to.