A new social media platform called Moltbook, designed for AI agents to interact with each other and “hang out”, was found to have a misconfiguration that left its backend database publicly accessible, allowing full read and write access to all data, according to a recent blog post by Wiz Security.
Researchers discovered a Supabase API key exposed in client-side JavaScript, revealing thousands of private AI conversations, 30,000 user email addresses, and 1.5 million API keys.
“Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It’s become especially popular with vibe-coded applications due to its ease of setup,” explained Gal Nagli, Wiz’s head of threat exposure.
“When properly configured with Row Level Security (RLS), the public API key is safe to expose – it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
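To make Nagli’s point concrete, here is a minimal sketch, using the official supabase-js client, of what an exposed anon key permits when RLS is off. The project URL, key, and table names below are placeholders, not Moltbook’s actual schema; with RLS enabled and no permissive policies, the same calls would come back empty or be rejected.

```typescript
import { createClient } from "@supabase/supabase-js";

// The anon ("public") key ships to every browser, so anyone can lift it
// from client-side JavaScript, which is exactly where Wiz found it.
// The URL, key, and table names below are placeholders for illustration only.
const supabase = createClient(
  "https://example-project.supabase.co",
  "anon-key-copied-from-client-side-javascript"
);

async function demo(): Promise<void> {
  // With RLS disabled, the anon role can read entire tables...
  const { data: messages, error: readError } = await supabase
    .from("private_messages") // hypothetical table name
    .select("*");

  // ...and write to them as well.
  const { error: writeError } = await supabase
    .from("posts") // hypothetical table name
    .insert({ author: "anyone", body: "content injected with the public key" });

  console.log({ messages, readError, writeError });
}

demo();

// The missing "checkbox" is a single Postgres statement per table, after
// which access must be granted explicitly through policies:
//   alter table private_messages enable row level security;
```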
In a message posted to X before Wiz published the blog post, Moltbook’s creator, Matt Schlicht, said he “didn’t write one line of code” for the site. Wiz reported the vulnerability to Schlicht, and the database was secured.
“As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” Wiz cofounder Ami Luttwak said.
Sunil Gottumukkala, CEO of Averlon, offered this comment:
“What this highlights is the tradeoff vibe coding creates. It massively compresses idea-to-product time, but often skips essential security steps like threat modeling, secure defaults, and review gates that account for real user behavior and adversarial abuse.
“When those controls are missing, a routine misconfiguration, such as shipping without proper authorization or RLS policies, can quickly turn into an instant, internet-scale incident. Some vibe-coding platforms are starting to add guardrails, but we’re still early. As long as speed continues to outpace security analysis and remediation, this will be a bumpy road.”
Lydia Zhang, President & Co-Founder, Ridge Security Technology Inc., gave me this comment:
“This leads to another mandatory step: testing. Zero-trust principles should also be applied to vibe coding. Vibe-coded solutions can miss basic security practices, and configuration or misconfiguration issues are often outside the scope of the code itself. I’m glad Wiz Security caught this before the damage spread further.”
Michael Bell, Founder & CEO, Suzu Labs added this comment:
“The Moltbook incident shows what happens when people shipping production applications have no security training and are relying entirely on AI-generated code. The creator said publicly that he didn’t write a single line of code. Current AI coding tools don’t reason about security on the developer’s behalf. They generate functional code, not secure code.
“The specific failure here was a single Supabase configuration setting. Row Level Security was disabled, which meant the API key that’s supposed to be safe to expose became a skeleton key to the entire database. That’s not a sophisticated vulnerability. It’s a checkbox that never got checked, and nobody reviewed the code to notice. When 10% of apps built on vibe coding platforms (CursorGuard) have the same misconfiguration, that’s not a user error problem. That’s a systemic failure in how these tools are designed.
“The write access vulnerability should concern anyone building AI agent infrastructure. Moltbook wasn’t just leaking data. Anyone with the exposed API key could modify posts that AI agents were reading and responding to. That’s prompt injection at ecosystem scale. You could manipulate the information environment that shapes how thousands of AI agents behave.
“Users shared OpenAI API keys in private messages assuming those messages were private. One platform’s misconfiguration turned into credential exposure for unrelated services. As AI ecosystems become more interconnected, these cascading failures become the norm.
“The 88:1 agent-to-human ratio should make everyone skeptical of AI adoption metrics going forward. Moltbook claimed 1.5 million agents. The reality was 17,000 humans running bot armies. No rate limiting. No verification. The platform couldn’t distinguish between an actual AI agent and a human with a script pretending to be one.
“We’re going to see a lot of “AI-powered” metrics that look impressive until you examine what’s actually behind them. Participation numbers, engagement statistics, autonomous behavior claims. Without verification mechanisms, the numbers are meaningless. The AI internet is coming, but right now it’s mostly humans wearing AI masks.
“If you’re deploying vibe-coded applications to production, you need security review by someone who understands both the code and the infrastructure it runs on. AI tools don’t have security reasoning built in, which means every configuration decision is a potential exposure. We help organizations identify exactly these kinds of gaps through security assessments that trace data flows and access controls. The discovery process that found this vulnerability took Wiz researchers minutes of looking at client-side JavaScript. That’s the same level of effort an attacker would spend.
“AI development velocity and AI security maturity are on completely different curves. Teams are shipping production applications in days. Security practices haven’t caught up. Until AI tools start generating secure defaults and flagging dangerous configurations automatically, humans (or hackers) need to be in the loop reviewing what gets deployed.”
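The review Bell describes can start very small. As a rough sketch along those lines, assuming a hypothetical project URL and table names, a pre-deployment check can simply ask Supabase’s REST endpoint, authenticated only with the public anon key, whether sensitive tables return rows; anything that does is readable by the entire internet.

```typescript
// Hypothetical pre-deploy audit: can the public anon key read sensitive tables?
// The endpoint shape and headers follow Supabase's REST (PostgREST) conventions;
// the URL and table names are placeholders for whatever the app actually uses.
const SUPABASE_URL = "https://example-project.supabase.co";
const ANON_KEY = process.env.SUPABASE_ANON_KEY ?? "";

const SENSITIVE_TABLES = ["private_messages", "users", "api_keys"];

async function auditAnonAccess(): Promise<boolean> {
  let exposed = false;
  for (const table of SENSITIVE_TABLES) {
    const res = await fetch(`${SUPABASE_URL}/rest/v1/${table}?limit=1`, {
      headers: { apikey: ANON_KEY, Authorization: `Bearer ${ANON_KEY}` },
    });
    const rows = res.ok ? await res.json() : [];
    if (Array.isArray(rows) && rows.length > 0) {
      console.error(`EXPOSED: the anon key can read "${table}"; check RLS policies`);
      exposed = true;
    } else {
      console.log(`ok: "${table}" returned nothing to the anon key`);
    }
  }
  return exposed;
}

// Fail a CI step (non-zero exit) if anything leaks.
auditAnonAccess().then((exposed) => process.exit(exposed ? 1 : 0));
```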
Ryan McCurdy, VP of Marketing, Liquibase, contributed this:
“Moltbook is a textbook example of what happens when you ship at AI speed without change control at the database layer. A single missing guardrail turned a “public” Supabase key into full read and write access, exposing private agent conversations, user emails, and a massive pile of credentials. This is why Database Change Governance matters.
“The highest risk changes are often permissions, policies, and access rules, and those need automated checks, separation of duties, drift detection, and audit-ready evidence before anything hits production. AI agents and vibe-coded apps will only amplify the blast radius if database change is not governed.”
Noelle Murata, Sr. Security Engineer, Xcape, Inc., served up this comment:
“Matt Schlicht’s admission that he “didn’t write one line of code” isn’t something to celebrate, given the fundamental nature of the security flaw. The database completely lacked Row Level Security (RLS) policies, allowing anyone to access it without authentication. This misconfiguration exposed the entire database structure and content, including tokens that granted read/write/edit access to non-authenticated users – a basic oversight with serious consequences.
“Vibe-coding,” or relying on AI to generate code, can produce functional results but often sacrifices best practices in architecture and security for speed and convenience. Without code review or highly specific prompting, AI-generated code prioritizes “fast and easy” over “resilient and secure.” This is analogous to why junior developers need oversight; the same principle applies to AI-generated code.
“Despite Moltbook being marketed as a social platform “for bots, by bots,” it had a significant human user base: 17,000 humans alongside 1.5 million bots, creating a roughly 1:88 ratio. Notably, no CAPTCHA or human/bot validation system was implemented, raising questions about the platform’s actual purpose and user management.
“This incident demonstrates that AI-generated applications require careful monitoring and professional oversight. Software development still demands review by trained, experienced humans to ensure security and reliability.”
This highlights the danger of vibe coding. You can get stuff done. But how it gets done might be a problem. You might want to keep that in mind if you rely on vibe coding.
Whitepaper: AI Chatbot and Youth Safety
Posted in Commentary on February 6, 2026 by itnerd

AI can shift how developing minds understand technology and where they turn for support, leading to issues when chatbots are designed to feel “personal”. Magic School, the educational AI platform, has just released a white paper explaining how these risks can develop and what schools should know about AI in the classroom.
You can find out more details here: https://www.magicschool.ai/blog-posts/student-safety-companionship