this post was submitted on 20 Jan 2026
34 points (100.0% liked)

Fuck AI

6005 readers
1485 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 

Cybersecurity researchers have disclosed details of a security flaw that leverages indirect prompt injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism.

The vulnerability, Miggo Security's Head of Research, Liad Eliyahu, said, made it possible to circumvent Google Calendar's privacy controls by hiding a dormant malicious payload within a standard calendar invite.

"This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction," Eliyahu said in a report shared with The Hacker News.

The starting point of the attack chain is a new calendar event that's crafted by the threat actor and sent to a target. The invite's description embeds a natural language prompt that's designed to do their bidding, resulting in a prompt injection.

The attack gets activated when a user asks Gemini a completely innocuous question about their schedule (e.g., Do I have any meetings for Tuesday?), prompting the artificial intelligence (AI) chatbot to parse the specially crafted prompt in the aforementioned event's description to summarize all of users' meetings for a specific day, add this data to a newly created Google Calendar event, and then return a harmless response to the user.

"Behind the scenes, however, Gemini created a new calendar event and wrote a full summary of our target user's private meetings in the event's description," Miggo said. "In many enterprise calendar configurations, the new event was visible to the attacker, allowing them to read the exfiltrated private data without the target user ever taking any action."

Although the issue has since been addressed following responsible disclosure, the findings once again illustrate that AI-native features can broaden the attack surface and inadvertently introduce new security risks as more organizations use AI tools or build their own agents internally to automate workflows.

More in the article. I've also crossposted to sh.itjust.works/c/Cybersecurity.

you are viewing a single comment's thread
view the rest of the comments
[–] 6nk06@sh.itjust.works 2 points 1 month ago

Nice feature, thanks Google for ~~destroying the planet~~ sharing information.