198 iOS Apps Leaking Data: When AI, Mobile Apps, and Security Don’t Align

Researchers uncovered 198 AI-related iOS apps that are leaking data. Learn what went wrong, how AI-generated code contributed, and what your team can do to strengthen mobile app security.

TL;DR – Key Takeaways

  • 198 iOS apps leaked sensitive data, including private chats, email addresses, and more
  • The incident was due to backend misconfigurations and weak data protections
  • App Store approval is not proof of security or data protection

According to a recent TechRadar article, a project from CovertLabs uncovered 198 AI-related iOS apps that were leaking sensitive user data, such as private chats, email addresses, phone numbers, authentication tokens, and location information, into publicly accessible systems. The worst of these apps exposed hundreds of millions of messages and user identifiers, all due to backend misconfigurations and weak data protections. 

According to the article, “The fact that many of the leakiest apps – including Chat & Ask AI, GenZArt, Kmstry and Genie – are related to AI isn’t too surprising. In the rush to capitalize on the AI goldmine, it’s likely that many developers have cut corners or implemented lax security measures in order to get their app out the door and onto the App Store.”

Why AI Makes iOS App Security Harder, Not Easier

While AI is a powerful development tool, it also introduces new security challenges that most traditional toolchains weren’t built to handle. Studies consistently show that AI-generated code often mirrors existing insecure patterns rather than improving on them. Models may recommend libraries with known vulnerabilities, deprecated frameworks, or poorly maintained dependencies simply because they are common in training data. 

Without strong validation and review, teams end up scaling the same mistakes faster and embedding risk deeper into their applications.
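One practical validation step is to cross-check every pinned dependency against known security advisories before accepting an AI-suggested addition. The sketch below illustrates the idea in Python; the package names, versions, and advisory data are invented for illustration, not taken from any real advisory feed.

```python
# Hypothetical sketch: reject AI-suggested dependencies whose pinned
# versions appear in a security advisory list. All package names and
# versions below are invented for illustration.

ADVISORIES = {
    # package -> set of versions with known vulnerabilities (hypothetical)
    "legacy-http-client": {"1.2.0", "1.2.1"},
    "oldcrypto": {"0.9.4"},
}

def vulnerable_pins(pinned):
    """pinned: dict of package -> version taken from the lockfile.
    Returns only the entries that match a known advisory."""
    return {pkg: ver for pkg, ver in pinned.items()
            if ver in ADVISORIES.get(pkg, set())}

pins = {"legacy-http-client": "1.2.1", "jsonlib": "3.0.0"}
print(vulnerable_pins(pins))  # {'legacy-http-client': '1.2.1'}
```

In practice this check belongs in CI, backed by a real advisory database rather than a hard-coded dictionary, so that a flagged dependency blocks the merge instead of shipping.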

Basic iOS Security Gaps Became Massive Leaks

The iOS leaks weren’t the result of a zero-day exploit or cutting-edge attack — they were caused by basic configuration errors and a lack of secure development practices. Systems were left open to the public, and authentication tokens and chat logs were exposed to anyone who knew where to look. That’s the kind of thing traditional scoping or superficial review won’t catch.

AI generation accelerates how quickly an app can be built and released, but speed doesn’t equate to secure design. AI systems don’t necessarily understand secure coding practices, and developers relying on them without thorough security validation are essentially outsourcing blind spots. 

The Broader Implications

What’s happening with these 198 iOS apps is a snapshot of a larger shift:

  • Supply chain risk is rising. AI-generated suggestions, including third-party SDKs, may bring hidden vulnerabilities into your codebase.
  • App stores aren’t a safety net. Presence in an official app store should never be mistaken for proof of security. Zero-day vulnerabilities can slip through the cracks and malware can intentionally trick security checks. 
  • Regulatory exposure is real. Vulnerable apps can violate privacy and security frameworks like GDPR, CCPA, and PCI DSS when they leak personal data.

In short, AI is reshaping the mobile app landscape faster than existing security safeguards can adapt.

What Security and Product Teams Should Do About It

If you’re responsible for risk, development, or compliance, here’s what needs to change:

  1. Treat AI output as untrusted until validated. AI may suggest code that functions, but that doesn’t mean it is secure or compliant.
  2. Understand your data flows end-to-end. Misconfigured storage buckets, open databases, and improperly scoped APIs are more common than you think, and they won’t be found by surface-level checks alone.
  3. Integrate security testing early and continuously. Mobile app security testing should be part of the CI/CD pipeline, not an afterthought. Quokka’s Q-mast delivers static, dynamic, and behavioral analysis to uncover risks in code, libraries, and dependencies. 
  4. Vet apps your team uses. Just because an app is in the official app store doesn’t mean it’s safe. Q-scout enables companies to vet the mobile apps on their MDM-managed devices to flag security, privacy, and compliance risks. 

The Hard Truth

The same shortcuts that lead to exposed databases and leaked chat logs also make it easier for threat actors to use AI to mass-produce convincing scam apps and outright malware. As AI lowers the barrier to building and shipping mobile apps, it’s important for developers and organizations to pay close attention to mobile app risks.
