Lovable access issue exposes project data and credentials on AI coding platform

A security issue involving AI coding platform Lovable allowed users to access other users’ project data, including source code, database credentials, AI chat histories, and customer data, according to reports and user disclosures.

The issue was publicly highlighted after a user demonstrated that a free account could access data across projects created before November 2025.

Lovable initially stated there was no data breach, describing the behavior as expected for public projects, but later acknowledged a backend error that temporarily enabled access to AI chat data. The company updated its visibility and permission settings following the incident and said the issue had been addressed.

The incident involved the exposure of project-level data within the platform environment; there was no confirmation of a broader system compromise. Reporting indicates the issue went unresolved for some time after it was first reported, with changes implemented only later.

Ryan McCurdy, VP of Marketing at Liquibase, had this to say:

   “This incident is a reminder that the risk in AI-generated development is not just bad code. It is bad control design. When application creation speeds up, permissions, secrets exposure, and database access paths can become part of the attack surface just as quickly. If teams do not put governed change, least-privilege access, and clear separation between public artifacts and sensitive backend context in place, AI can amplify operational risk faster than traditional review processes can catch it.”
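McCurdy’s point about separating public artifacts from sensitive backend context can be made concrete. Below is a minimal TypeScript sketch of an allow-list serializer; the `Project` shape and field names are hypothetical illustrations, not Lovable’s actual schema:

```typescript
// Hypothetical project record. Field names are illustrative only,
// not Lovable's actual schema.
interface Project {
  id: string;
  name: string;
  ownerId: string;
  sourceCode: string;
  dbConnectionString: string; // sensitive backend context
  stripeSecretKey: string;    // sensitive backend context
  aiChatHistory: string[];    // sensitive backend context
  visibility: "public" | "private";
}

// The only shape an unauthenticated, public request should ever see.
interface PublicProjectView {
  id: string;
  name: string;
}

// Allow-list serializer: public payloads are built by explicitly
// copying safe fields. A sensitive column added later stays private
// until someone deliberately exposes it here.
function toPublicView(project: Project): PublicProjectView {
  return { id: project.id, name: project.name };
}
```

The design choice is deny-by-default: serializing with an allow-list, rather than stripping known-bad fields from the full record, means a newly added secret can never leak just because nobody remembered to put it on a blocklist.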

John Carberry, Solution Sleuth at Xcape, Inc., adds this comment:

   “The Lovable data exposure incident highlights a catastrophic failure in the fundamental security architecture of AI-powered “vibe coding” platforms. By failing to implement basic ownership validation on its API endpoints, a textbook Broken Object Level Authorization (BOLA) flaw, Lovable allowed any user to traverse project IDs and scrape the source code, database credentials, and AI chat histories of others.

   “For security leaders, the primary risk is a silent supply chain compromise: while Lovable claims no “breach” of its own servers, the exposure of third-party secrets like Stripe and Supabase keys means the applications built on the platform are now effectively backdoored.

   “Technically, the crisis was compounded by a February 2026 backend regression that re-opened access to sensitive chats and a response cycle that spent 48 days ignoring a bug bounty report. Organizations must treat AI-generated code with extreme caution, ensuring that “vibe coding” speed doesn’t bypass mandatory secret scanning, environment variable isolation, and the hard-won security logic of the last twenty years.

   “Lovable proved that while AI can write your code, it can’t write your common sense, especially when “public by default” includes your Stripe secret keys.”
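To make Carberry’s BOLA point concrete, here is a minimal Express-style sketch of the missing ownership check. The route, the in-memory store, and the toy header-based auth are all hypothetical stand-ins, not Lovable’s actual code:

```typescript
import express from "express";

interface Project {
  id: string;
  ownerId: string;
  visibility: "public" | "private";
  sourceCode: string;
}

// In-memory stand-in for the platform's real data store.
const projects = new Map<string, Project>([
  ["p1", { id: "p1", ownerId: "alice", visibility: "private", sourceCode: "..." }],
]);

const app = express();

// A BOLA flaw serves whatever ID the caller asks for. The fix is an
// explicit object-level check before the record leaves the server.
app.get("/projects/:id", (req, res) => {
  // Toy auth for the sketch: a real platform would verify a session
  // or signed token instead of trusting a header.
  const userId = req.header("x-user-id") ?? null;
  const project = projects.get(req.params.id);

  // Answer "missing" and "not yours" identically so callers cannot
  // enumerate project IDs to learn which ones exist.
  if (!project || (project.visibility !== "public" && project.ownerId !== userId)) {
    res.status(404).json({ error: "not found" });
    return;
  }
  res.json(project);
});

app.listen(3000);
```

The essential property is that authorization happens per object, not just per route: knowing a valid project ID is never, on its own, enough to read the project.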

Hannah Perez, Director of Marketing at Suzu Labs, followed up with this:

   “As we move toward AI-generated software, the ‘shared responsibility model’ is becoming dangerously blurred. Users expected a private sandbox for innovation, but instead found a communal space with paper-thin walls.

   “Lovable’s eventual pivot is welcome, but the delay between the initial report and the actual fix suggests that AI startups are currently outpacing their own security protocols, which is to be expected for most of them. In the rush to ‘vibe code,’ fundamental safety is being treated as a post-launch patch rather than a requirement. For this industry to mature, Secure by Default must be the non-negotiable standard for any platform handling sensitive IP and source code.”
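Perez’s call for Secure by Default can also be shown in a few lines. A minimal sketch, assuming hypothetical `createProject` and `publishProject` helpers; the point is that a project stays private unless its owner deliberately publishes it:

```typescript
import { randomUUID } from "node:crypto";

interface Project {
  id: string;
  name: string;
  visibility: "public" | "private";
}

// Secure-by-default creation: omitting visibility yields a private
// project, so nothing becomes public by accident.
function createProject(name: string, visibility?: "public" | "private"): Project {
  return { id: randomUUID(), name, visibility: visibility ?? "private" };
}

// Publishing is a separate, deliberate step that can be confirmed,
// logged, and audited.
function publishProject(project: Project, ownerConfirmed: boolean): Project {
  if (!ownerConfirmed) {
    throw new Error("publishing requires explicit owner confirmation");
  }
  return { ...project, visibility: "public" };
}

// Usage: a new project starts private.
const draft = createProject("my-app");
console.log(draft.visibility); // "private"
```

This flips the failure mode: a bug in this design leaves a project more private than intended, rather than exposing its secrets to the world.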

Vishal Agarwal, CTO of Averlon, provided this comment:

   “It’s one thing to have access to the sauce. It’s another to have access to its recipe. With inadvertent leakage of chat history, attackers gain access to reconnaissance information that can be leveraged to target the organization more precisely.

   “What makes sophisticated attackers dangerous isn’t just their technical capability; it’s their detailed understanding of the target’s systems. Exposing chat history and source code together hands that understanding directly to an attacker.”

All of this highlights the fact that AI has to be part of your security planning. Otherwise, really bad things will happen, and this incident is a case in point.
