Your AI Browser Could Be Hijacked by a Simple Hidden Message, Researchers Warn

Vlad CONSTANTINESCU

August 22, 2025

Promo
Protect all your devices, without slowing them down.
Free 30-day trial
Your AI Browser Could Be Hijacked by a Simple Hidden Message, Researchers Warn

Invisible prompts on websites could trick AI assistants into exposing your most sensitive data.

Rising risks in agentic browsing

The next generation of AI-powered browsers is moving beyond simple summarization to performing real-world tasks such as booking flights or handling banking requests for users. While this ushers in a whole new world of convenience and efficiency, it also brings various drawbacks, especially concerning security.

As users place more trust in these AI entities, the avenue for exploitation expands. AI agents are now entrusted with logged-in sessions to critical services like healthcare, corporate systems and finance. A simple hallucination or misinterpretation could lead to severe consequences, potentially exposing users’ credentials or personal information.

Prompt injection vulnerabilities

During an examination of competitors like Perplexity’s Comet browser, Brave researchers uncovered a severe vulnerability: Comet treats webpage content, without distinction, as part of the user’s command. This oversight enables so-called indirect prompt injection, where a seemingly benign webpage, or even a Reddit comment with hidden instructions, can manipulate the AI into navigating to sensitive sites, extracting data, or exfiltrating it covertly, all without explicit user consent.

Jarringly, a similar technique was recently demonstrated in research targeting Google’s Gemini AI. In that scenario, researchers have shown how attackers could embed malicious instructions in Google Calendar invites, hidden in event titles, which Gemini unwittingly executed when users asked about their schedule. Outcomes ranged from smart home takeovers to email exfiltration and Zoom manipulation.

The need for better security

In most of these scenarios, old-guard security solutions like same-origin policy or CORS (Cross-Origin Resource Sharing) fall short, mainly because AI agents operate with the user’s session-level privileges. Instead of compromising a single site, prompt injection can transcend domain boundaries by exploiting an AI agent’s contextual understanding.

Whether intentionally injected into a calendar invite or accidentally left on a web page or in an email, hidden instructions can render current defense models insufficient.

Embracing a secure AI-fueled future

To protect users, developers must implement new safeguards. For instance, browsers should be able to separate trusted user commands from untrusted content, such as indirect prompts left on web pages. Agents must require explicit user confirmation for sensitive operations.

Furthermore, agentic browsing should be bound by its own confines, segregated from routine tasks, with clear visual cues and permission constraints.

That’s not to say traditional security defenses should be left out. Established defenses like antivirus software still play a crucial role in blocking malware, phishing attempts and exploit kits that often serve as stepping stones for more advanced AI-driven attacks. Solutions like Bitdefender Ultimate Security can help create a layered defense, ensuring that, while new risks like prompt injection demand attention, the foundational protections against long-standing cyber threats remain firmly in place.

tags


Author


Vlad CONSTANTINESCU

Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.

View all posts

You might also like

Bookmarks


loader