As generative AI becomes more deeply embedded in our digital lives, Google has issued a critical warning to its 1.8 billion Gmail users about a new and sophisticated cybersecurity threat: indirect prompt injections. This emerging attack vector could compromise sensitive data without users ever clicking a link or realizing they’ve been targeted.
What Are Indirect Prompt Injections?
Unlike traditional attacks that rely on phishing links or malware downloads, indirect prompt injections exploit the very AI systems designed to help us. Here’s how they work:
Hidden instructions are embedded in external data sources like emails, calendar invites, or documents.
These instructions are crafted to manipulate AI assistants (like Google’s Gemini) into performing unauthorized actions—such as revealing passwords or exfiltrating user data.
The AI interprets these hidden prompts as legitimate commands, effectively turning helpful tools into security liabilities.
World Example: Gemini Turned Against You
Tech expert Scott Polderman told the Daily Record that attackers are now leveraging Gemini, Google’s AI chatbot, to execute these scams: “Hackers are sending an email with a hidden message to Gemini to reveal your passwords without you even realizing.”
What makes this threat especially dangerous is its stealth:
- No malicious link to click.
- No obvious signs of compromise.
- Just Gemini popping up with what seems like helpful advice—while actually executing a hacker’s command.
How Google Is Responding
Google is rolling out a layered security strategy to counter these threats:
- Model hardening for Gemini 2.5 to resist manipulation.
- Machine learning filters to detect malicious instructions.
- System-level safeguards that increase the cost and complexity of attacks.
These measures aim to make indirect prompt injections harder to execute and easier to detect by following a logical flow.

SOURCE: Google Security Blog
How You Can Protect Yourself
While Google is taking steps to secure its ecosystem, users can also play a proactive role. Here are some practical suggestions:
Limit AI Access to Sensitive Data
- Avoid using AI tools to process or store passwords, financial information, or private documents.
- Use encrypted password managers instead of relying on AI memory or chat logs.
Scrutinize External Content
- Be cautious with emails, calendar invites, and shared documents—especially from unknown sources.
- Disable auto-preview features that allow AI to scan content before you’ve reviewed it.
Customize AI Permissions
- Review and restrict what your AI assistant can access (e.g., Gmail, Drive, Calendar).
- Opt for manual activation rather than automatic responses from AI tools.
Stay Updated
- Follow official updates from Google’s security blog.
- Enable automatic updates for your apps and devices to ensure you benefit from the latest protections.

As generative AI continues to evolve, so will the tactics of cybercriminals. Indirect prompt injections are just the beginning of a new era in digital threats.
H/T Mens Journal via MSN
About the Author
Paul Dughi is the CEO at StrongerContent.com with more than 30 years of experience as a journalist, content strategist, and Emmy-award winning writer/producer. Paul has earned more than 30 regional and national awards for journalistic excellence and earned credentials in SEO, content marketing, and AI optimization from Google. HubSpot, Moz, Facebook, LinkedIn, SEMRush, eMarketing Institute, Local Media Association, the Interactive Advertising Bureau (IAB), and Vanderbilt University.
