

State-sponsored hackers exploit Google Gemini’s email summary feature to launch sophisticated phishing attacks without traditional malware markers
Key Takeaways
- Google Gemini vulnerability exploits email summaries by allowing attackers to hide malicious instructions in emails using invisible HTML/CSS formatting, bypassing traditional security filters without attachments or links
- Nation-state actors from over 20 countries actively use Gemini for cyberattacks, with Iranian and Chinese groups leading usage for reconnaissance, malware development, and phishing email creation
- Google’s $10 billion Workspace revenue faces trust risks as prompt injection attacks demonstrate persistent security challenges despite layered AI defenses and ongoing mitigation efforts
Introduction
Google’s Gemini AI assistant faces a critical security vulnerability that allows cybercriminals to manipulate email summaries and redirect users to phishing sites without using traditional attack methods. The exploit leverages indirect prompt injections hidden within seemingly legitimate emails that Gemini processes when generating summaries for users.
This security flaw represents a significant evolution in phishing tactics, as attackers can now bypass conventional email security filters by embedding invisible malicious instructions directly in email content. The vulnerability persists despite Google’s implementation of multiple defense layers and previous reports of similar issues throughout 2024.
Key Developments
Mozilla security researcher Marco Figueroa disclosed the vulnerability through the 0din bug bounty program, demonstrating how attackers can exploit Gemini’s email summarization feature. The attack method involves embedding malicious instructions in email bodies using HTML and CSS techniques that render text invisible to users by setting font size to zero and color to white.
When users request Gemini to generate email summaries, the AI processes these hidden commands and incorporates fabricated security warnings into its responses. Figueroa’s demonstration showed Gemini generating fake alerts about compromised Gmail passwords complete with fraudulent support numbers, creating convincing phishing attempts that users are more likely to trust.
Google’s Threat Intelligence Group published comprehensive research documenting how state-sponsored hacking groups from China, North Korea, Iran, Russia, and over a dozen other countries have been using Gemini to streamline cyberattacks. The research covers adversarial AI activity throughout 2024, focusing on government-backed advanced persistent threat actors and coordinated information operations.

Market Impact
The vulnerability poses significant risks to Google’s Workspace business, which generates an estimated $10 billion in annualized revenue. Enterprise customers investing in AI-powered productivity tools face potential security exposure that could impact adoption rates and customer retention.
Cybersecurity vendors are positioning themselves to address the emerging threat landscape around large language model vulnerabilities. The persistence of prompt injection attacks despite mitigation efforts creates new market opportunities for specialized LLM monitoring and detection services.
Traditional email security solutions face challenges adapting to AI-powered attack vectors that bypass conventional filtering mechanisms. Security providers must develop new approaches to detect and prevent prompt injection attacks that exploit AI assistants rather than relying on malicious attachments or links.
Strategic Insights
Google’s aggressive integration of Gemini across its Workspace suite represents a strategic bet on AI as a core differentiator against Microsoft and other cloud productivity competitors. However, the security vulnerabilities highlight the challenges of deploying advanced AI at scale while maintaining enterprise-grade security standards.
The shift in phishing tactics from traditional malicious links and attachments to AI manipulation represents a fundamental change in the threat landscape. Organizations must adapt their security awareness training and technical controls to address attacks that exploit the trustworthiness of AI-generated content.
Iranian actors demonstrated the heaviest usage of Gemini for both advanced persistent threats and information operations, with over 30% of activity linked to APT42, a group targeting military and political figures in the US and Israel. Chinese APTs focused on reconnaissance, coding development, and post-compromise activities including lateral movement and data exfiltration.
Expert Opinions and Data
Marco Figueroa, GenAI Bug Bounty Programs Manager at Mozilla, recommends implementing detection systems to identify and neutralize hidden content in email text along with post-processing filters to flag suspicious AI-generated summaries. According to BleepingComputer, Figueroa’s research demonstrates the persistent nature of these vulnerabilities despite previous mitigation attempts.
A Google spokesperson acknowledged ongoing improvements through red-teaming exercises to strengthen defenses against adversarial attacks, with some enhancements currently being implemented or near deployment. However, Google maintains there is no current evidence of Gemini being manipulated as described in Figueroa’s report.
Google’s Threat Intelligence Group reports that while nation-state actors have not developed novel attack methods using AI, generative AI tools allow threat actors to operate faster and at higher volume. The research indicates that Iranian actors used Gemini extensively for crafting legitimate-looking phishing emails, utilizing text generation and editing capabilities for translation and sector-specific targeting.
According to M-Trends 2025, email phishing accounts for 14% of initial infection vectors in cyber incidents, demonstrating the continued effectiveness of email-based attacks. Google claims its Gmail defenses block over 99.9% of spam, phishing attempts, and malware, though the Gemini vulnerability introduces new attack vectors not covered by traditional filtering.
Conclusion
The Google Gemini vulnerability exposes fundamental challenges in securing AI-powered productivity tools as they become integral to business operations. While Google implements layered defenses and collaborates with security researchers, the persistence of prompt injection attacks demonstrates that AI security remains an evolving challenge requiring continuous adaptation.
The widespread use of Gemini by nation-state actors across multiple countries indicates that AI tools are becoming standard components of cyber operations, even if they have not yet enabled entirely new attack methodologies. Organizations deploying AI assistants must balance productivity benefits against emerging security risks that traditional defenses may not adequately address.