Research Reveals Security Flaws in AI-Generated Apps That Expose Corporate Data
Cybersecurity researchers found that thousands of publicly accessible applications built with AI coding tools contain sensitive corporate information.

Cybersecurity firm RedAccess has discovered approximately 5,000 publicly accessible applications containing sensitive corporate data among 380,000 assets built using AI-powered coding platforms. The research, independently verified by Axios and Wired, highlights significant security vulnerabilities in applications created through "no-code" or "low-code" AI tools.
The exposed data included patient conversations from healthcare facilities, internal financial information from a Brazilian bank, shipping company vessel schedules, and active clinical trial listings from UK health companies. Researchers also found phishing sites impersonating major brands including Bank of America, FedEx, and McDonald's built on these platforms.
The security issues stem from default privacy settings on AI coding platforms that make applications publicly accessible unless users manually change them to private. Many of these applications are also indexed by search engines, making them discoverable to anyone. The platforms examined included Lovable, Base44, Replit, and the deployment service Netlify.
Separate research by Escape.tech in October found over 2,000 high-impact vulnerabilities and 400 exposed secrets including API keys in 5,600 publicly available AI-generated applications. IBM's 2025 Cost of a Data Breach Report found that 20% of organizations experienced breaches linked to unauthorized AI use, adding an average of $670,000 to breach costs.
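The exposed secrets Escape.tech describes are typically found by pattern-matching the client-side code of deployed apps. A minimal sketch of that detection approach, with hypothetical patterns (real scanners use far larger rule sets; these two regexes are illustrative assumptions, not Escape.tech's actual rules):

```python
import re

# Illustrative patterns only; production secret scanners maintain
# hundreds of provider-specific rules.
SECRET_PATTERNS = {
    # AWS access key IDs follow a well-known "AKIA" + 16-character format.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic hard-coded key assignment, e.g. api_key = "..." or "api_key": "...".
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a page or JS bundle."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running such patterns over the HTML and JavaScript that an AI platform serves publicly is enough to surface hard-coded credentials, since no-code tools often embed configuration directly in the client bundle.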
The findings reflect a broader trend of "shadow AI": unauthorized use of AI tools by employees that creates security blind spots for organizations. Gartner forecasts that by 2028, the spread of AI-generated code will increase software defects by 2,500%, because AI tools lack awareness of broader system architecture and business rules.
Cybersecurity experts recommend that organizations implement discovery scanning for AI coding platform domains, require security reviews before deployment, and extend existing application security pipelines to cover citizen-developed applications. The platforms involved said they are investigating the reported vulnerabilities, though some disputed the research methodology and timeline for disclosure.
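The discovery-scanning recommendation amounts to enumerating hostnames an organization's employees might have created on AI platform domains and checking which ones actually resolve. A minimal sketch, assuming hypothetical platform domains (the real hostnames Lovable, Base44, Replit, and Netlify assign to deployments may differ):

```python
import socket

# Assumed deployment domains for illustration; verify the actual
# hostname schemes each platform uses before scanning.
PLATFORM_DOMAINS = ["lovable.app", "base44.app", "replit.app", "netlify.app"]

def candidate_hosts(org_names, domains=PLATFORM_DOMAINS):
    """Build candidate hostnames that citizen developers might have deployed."""
    return [f"{name}.{domain}" for name in org_names for domain in domains]

def discover(org_names, resolver=socket.gethostbyname):
    """Return candidate hosts that resolve in DNS, i.e. likely live apps.

    The resolver is injectable so scans can be tested or rate-limited.
    """
    found = []
    for host in candidate_hosts(org_names):
        try:
            resolver(host)
            found.append(host)
        except OSError:
            pass  # no DNS record; nothing deployed under this name
    return found
```

Hosts that resolve would then feed into the organization's existing application security pipeline for review, which is the "extend existing pipelines" step the researchers describe.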