A study reveals how prompt injection via calendar invites can trigger real-world intrusions through Google’s Gemini assistant.
A new security study titled Invitation Is All You Need reveals that Google’s Gemini AI assistant is vulnerable to a subtle, yet highly effective attack vector: calendar invites. Researchers Ben Nassi (Tel Aviv University), Stav Cohen (Technion), and Or Yair (SafeBreach) showed that embedding malicious prompts within titles, email subjects or shared document names can let attackers trick Gemini into performing harmful actions, without the user ever realizing they’ve been compromised.
Through these so-called Targeted Promptware Attacks, threat actors exploit the AI’s contextual awareness. When a user consults Gemini about upcoming events or recent emails, the assistant processes the attacker’s poisoned prompt and unwittingly executes it. Demonstrated outcomes range from generating offensive or spam content to performing real-world actions, such as opening smart windows, activating boilers or launching video calls.
The study categorizes the threats into five classes. It starts with Short-Term Context Poisoning, where perpetrators can use volatile prompts to trigger immediate, one-time actions, hijacking a single session. This can escalate into Long-Term Memory Poisoning, which affects Gemini’s “Saved Info,” letting attackers implant persistent instructions and perform malicious actions across sessions.
The third class, Tool Misuse, tricks Gemini into performing unauthorized actions using its own built-in tools, like deleting calendar events. More dangerously, Automatic Agent Invocation enables lateral movement between Gemini’s agents by leveraging one compromised agent, such as Calendar, to trigger another, such as Google Home. For instance, a poisoned calendar entry could exploit Google Calendar to perform physical actions such as opening smart windows or turning on boilers.
Lastly, Automatic App Invocation targets smartphone-based Gemini assistants, enabling threat actors to open URLs, stream video via Zoom, or exfiltrate calendar data without the user ever suspecting foul play.
The researchers introduced a Threat Analysis and Risk Assessment (TARA) framework. Of the 14 scenarios tested, 73% were rated High-Critical, capable of violating confidentiality, integrity and availability (CIA) principles.
After responsible disclosure, Google acknowledged the findings and deployed mitigations, including behavior-based detection systems and user verification layers for sensitive operations. The researchers confirmed that these efforts significantly reduced the threat across all scenarios, bringing risk levels down to the Very Low-Medium range.
The attack vendor detailed in this research reflects a growing concern among security professionals and consumers alike: that AI agents integrated with IoT ecosystems may become conduits for abuse if adversarial inputs are not adequately managed.
As we outlined in our guide about AI and ML in IoT security, these systems have a significant role in IoT security. However, the presence of AI in our IoT ecosystems could be a double-edged sword – enabling sophisticated threat detection while also introducing novel attack surfaces.
tags
Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.
View all postsMay 16, 2025