Lovable.dev Security Flaw Exposes Risks in AI-Built Apps
- Covertly AI
- 6 days ago
- 4 min read

Lovable.dev is under growing pressure after multiple reports described a serious security flaw that may have exposed thousands of older projects to anyone with a free account. The issue reportedly affected projects created before a November 2025 cutoff and allowed unauthorized users to access full source code, database credentials, AI chat histories, and even live customer data. Researchers said the flaw was a Broken Object Level Authorization problem, meaning the platform checked whether someone was logged in but did not properly verify whether that person actually owned the project being requested. In practical terms, that meant a user could allegedly retrieve highly sensitive information with only a few API calls, raising major concerns about how securely the platform handled older projects.
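The authentication-versus-authorization distinction described above can be shown in a few lines. This is a hypothetical sketch of the flaw class, not Lovable's actual code; the project names, fields, and function names are all illustrative:

```python
# Illustrative data store: two projects owned by different users.
PROJECTS = {
    "proj_101": {"owner": "alice", "source": "app code", "db_url": "secret-a"},
    "proj_102": {"owner": "bob", "source": "other code", "db_url": "secret-b"},
}

def get_project_vulnerable(session_user, project_id):
    """Checks only that the caller is logged in (authentication)."""
    if session_user is None:
        raise PermissionError("401 Unauthorized")
    # BOLA: ownership of project_id is never verified, so any
    # logged-in user can read any project's code and credentials.
    return PROJECTS[project_id]

def get_project_fixed(session_user, project_id):
    """Also verifies the caller owns the requested object (authorization)."""
    if session_user is None:
        raise PermissionError("401 Unauthorized")
    project = PROJECTS[project_id]
    if project["owner"] != session_user:
        raise PermissionError("403 Forbidden")  # ownership check added
    return project
```

In the vulnerable version, `get_project_vulnerable("bob", "proj_101")` happily returns Alice's source and database URL; the fixed version raises a 403 for the same call. That one missing comparison is the entire difference between the two behaviors described in the reports.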
The scope of the exposed information is what makes the incident especially alarming. In one reported example, a researcher accessed an active admin panel tied to a real nonprofit and found names, job titles, LinkedIn profiles, contact details, and other personal information connected to real professionals. Reports also said the platform’s stored AI conversation histories were visible, which matters because developers often paste error logs, schema details, tokens, and credentials into those chats while debugging. That means the exposure was not limited to code repositories or backend settings. It may also have included sensitive conversations that revealed how apps were built, what data structures they used, and which secrets had been shared during development.
The response timeline has become a major source of criticism. The flaw was reportedly disclosed to Lovable through HackerOne on March 3, 2026, yet weeks later legacy projects were still said to be exposed. According to the reports, Lovable added ownership checks for newer projects while leaving older ones accessible, creating a divide where recently created projects returned a forbidden response but pre-existing ones did not. Critics argued that this approach effectively protected future users while leaving long-standing customers vulnerable. Lovable later responded publicly by saying it had not suffered a data breach and that part of the issue involved unclear documentation around what public visibility meant. Even so, that explanation did not fully address why projects could allegedly be enumerated so easily, why sensitive AI chats were exposed, or why older projects were not fully brought under the same protections.

The controversy has also renewed attention on Lovable's earlier security troubles. A previous flaw, tracked as CVE-2025-48757, reportedly exposed more than 170 production applications because Row Level Security was missing on Supabase databases used by Lovable-generated apps. That earlier case centered on insecure defaults in applications built through the platform. The newer reports appear more serious because they point to an authorization weakness in Lovable's own platform architecture. Together, the two incidents suggest a pattern in which important security controls were either missing or added only after researchers demonstrated the damage they could cause. For a tool marketed as a fast path to launching real products, that pattern is difficult to ignore.
The broader concern extends far beyond Lovable itself. AI coding platforms promise that users can describe an app in plain language and launch it quickly, often without deep programming knowledge. But the same speed that makes these tools appealing can also make security gaps easier to miss and easier to scale. Industry research highlighted in the coverage found high rates of flaws in AI generated applications, including insecure object references, missing access controls, exposed secrets, and weak authentication patterns. Other reports noted that many vibe coding users are non developers, which increases the chances that unsafe defaults will make it into production unnoticed. The result is a growing class of apps that may work well on the surface while quietly exposing business logic, user data, and infrastructure credentials underneath.
For developers and founders using Lovable or similar tools, the takeaway is urgent and practical. Any older project should be treated with caution until it is thoroughly reviewed and confirmed secure. That means rotating credentials, checking whether secrets were ever pasted into AI chats, auditing database permissions, reviewing access logs, and testing whether user data can be reached through predictable endpoints. The appeal of AI app builders is obvious, especially for teams that want to move fast, but incidents like this show that speed cannot replace basic authorization, secure defaults, and careful access control. If AI built products are going to handle real users, real money, and real customer records, the platforms behind them will need to prove that security is part of the product itself, not something clarified only after exposure is discovered.
Works Cited
“Lovable Left Thousands of Projects Exposed for 48 Days And Still Hasn’t Fixed It.” Cyber Kendra, Apr. 2026, www.cyberkendra.com/2026/04/lovable-left-thousands-of-projects.html.
“Lovable.dev Fixed a Critical Bug for New Projects and Left Every Old One Exposed.” Glitchwire, 20 Apr. 2026, glitchwire.com/news/lovable-dev-fixed-a-critical-bug-for-new-projects-and-left-every-old-one-exposed/.
“The CVE That Exposed 170 Lovable Apps And What It Means for Your Vibe-Coded App.” Wolfgang Solutions, 18 Apr. 2026, wolfgangsol.com/blog/lovable-cve-2025-48757-vibe-coding-security.
“Lovable AI: How the AI App Builder Works and What You Should Know Before Using It.” Scalevise, scalevise.com/resources/lovable-ai-design-tool/. Accessed 20 Apr. 2026. 
Solomakha, Vlad. “Lovable.dev AI: Features, Pricing, And Alternatives.” Banani, 10 Dec. 2024, www.banani.co/blog/lovable-dev-ai-pricing-and-alternatives. 
.png)





Comments