Hundreds of Thousands of Mobile Apps May Now Be Exposing AI Access

Google API keys were once considered safe to embed in mobile apps. But with Gemini, those same keys can now enable access to AI services and billable resources — quietly turning legacy best practices into a growing mobile security risk.


TL;DR – Key Takeaways

  • Gemini turned previously public API keys into sensitive credentials
  • Hardcoded Google API keys now expose AI access and billing risk
  • Q-mast customers can quickly identify and remediate this risk


For years, Android developers followed official guidance from Google and hard-coded API keys for services like Google Maps and Firebase directly into their mobile applications. Those keys were described as identifiers for billing and project tracking rather than secrets protecting sensitive customer data. The practice became standard across the ecosystem, and today a large share of mobile applications expose these keys, because doing so was not considered a security flaw but simply part of normal configuration.

That assumption no longer holds true.

With the introduction of Gemini APIs inside Google Cloud projects, the security implications of those same exposed keys changed in a way that many mobile teams did not anticipate. When Gemini is enabled on a project, an API key that previously allowed access to relatively limited services may now permit calls to generative AI endpoints, large language model inference, and other advanced capabilities that carry higher cost and potentially greater data sensitivity. The key itself has not changed, but the permissions and services attached to it can now be far more powerful than when the app was first shipped.

In practical terms, this means thousands of applications that embedded “non-sensitive” API keys years ago may now be exposing credentials that provide access to AI functionality and billable LLM resources.

In fact, scanning the latest 250,000 apps in our database, we found that 39.5% of them contain hardcoded Google API keys. This scan returned more than 35,000 unique keys.

Because Android applications can be easily unpacked and inspected, extracting these keys requires minimal technical skill, and automated scraping at scale is entirely feasible. What used to be low-risk visibility has quietly turned into a meaningful attack surface, particularly for organizations that enabled Gemini without reassessing how their mobile clients authenticate to backend services.

The Real Risks

The attack is trivially simple. An attacker decompiles an application package downloaded from the store using any of several freely available open-source tools, runs a simple regular-expression match over the decompiled code to identify the hard-coded API keys, and issues a single curl command against the Gemini API’s /files endpoint. Instead of a 403 error, they receive a 200 response. From there, they can access any files or cached content the project owner has stored through the Gemini API, exhaust quotas to shut down legitimate services, or run up AI inference charges that can reach thousands of dollars per day on a single victim account. The attacker never needs to touch the target’s infrastructure, because the key ships inside the mobile app, right where Google told the developer to put it.
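The extraction step above can be sketched in a few lines of Python. The "AIza" prefix pattern matches the publicly documented Google API key format (four-character prefix plus 35 URL-safe characters); the directory layout, function names, and probe URL builder are illustrative assumptions, not Quokka tooling or a complete attack script.

```python
import re
from pathlib import Path

# Google API keys follow a well-known format: "AIza" followed by
# 35 URL-safe characters (letters, digits, "-", "_").
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def extract_keys(decompiled_dir: str) -> set[str]:
    """Scan a decompiled APK tree (e.g. apktool output) for hardcoded keys."""
    keys: set[str] = set()
    for path in Path(decompiled_dir).rglob("*"):
        if path.is_file():
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip it
            keys.update(KEY_PATTERN.findall(text))
    return keys

def probe_url(key: str) -> str:
    """Build the files-list request an attacker would curl. A 200 response
    means the key reaches the Generative Language API; a 403 means the
    project has restricted it."""
    return f"https://generativelanguage.googleapis.com/v1beta/files?key={key}"
```

Because the scan is just a recursive regex over decompiled text, it is easy to automate across thousands of downloaded packages, which is what makes large-scale scraping feasible.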

Beyond potential cost abuse through automated LLM requests, organizations must also consider how AI-enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key. Even if no direct customer data is accessible, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing-identifier model developers relied upon.

How Organizations Can Fix This Issue

The immediate remediation steps are straightforward. Developers should audit their Google Cloud projects for the Generative Language API, review API key configurations for unrestricted keys or keys with explicit Gemini access, and verify that none of those keys appear in client-side code, public repositories, or shipped application packages. Any exposed key should be rotated without delay.
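As part of such an audit, an organization can probe its own keys the same way an attacker would. Below is a minimal self-audit sketch, assuming the publicly documented v1beta files endpoint; the verdict labels and function names are illustrative assumptions.

```python
import urllib.error
import urllib.request

GEMINI_FILES = "https://generativelanguage.googleapis.com/v1beta/files"

def classify(status: int) -> str:
    """Map the HTTP status of a files-endpoint probe to an audit verdict:
    200 means the key can call the Generative Language API."""
    return "EXPOSED" if status == 200 else "RESTRICTED"

def audit_key(key: str, timeout: float = 10.0) -> str:
    """Probe the Gemini files endpoint with one of your own keys."""
    try:
        with urllib.request.urlopen(f"{GEMINI_FILES}?key={key}",
                                    timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)  # 403 etc. means the key is restricted
```

A key that comes back "EXPOSED" should be rotated and replaced with a properly restricted key, ideally with API calls moved behind a backend the organization controls.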

For organizations using Q-mast, Quokka’s mobile app security testing solution, hardcoded keys are flagged so that customers can easily find and fix this issue.

A Broader Warning

The pattern here is not unique to Google. As AI capabilities get bolted onto existing platforms at speed, legacy credential architectures are being asked to do jobs they were never designed for. Keys that were safe under one set of platform capabilities become sensitive under another, and the gap between a platform’s security posture and developers’ understanding of it can widen faster than anyone notices.

The developers who embedded these keys in their Android apps and web pages were not making mistakes. They were following instructions from a vendor they trusted. The lesson is not that developers should stop trusting vendor documentation. It’s that platforms have a responsibility to notify users when the security properties of their credentials change, and to build systems where the consequences of a key going public are bounded and well understood. When those systems fail, thousands of developers can find themselves holding credentials that they believed were harmless and that have quietly become a serious liability.


Copyright © 2026, Quokka. All rights reserved.