this post was submitted on 04 Mar 2026
143 points (99.3% liked)

Fuck AI

top 7 comments
[–] FlashMobOfOne@lemmy.world 14 points 2 hours ago* (last edited 1 hour ago)

Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called “transference.”

Holy fuck. This is horror movie shit.

[–] null@lemmy.org 6 points 2 hours ago

We can't be giving companies a blank check to wipe their hands of any accountability when it comes to what their bots are telling people.

[–] eatCasserole@lemmy.world 9 points 3 hours ago (2 children)

This is complete insanity. They clearly have no idea how to implement effective safeguards.

[–] merc@sh.itjust.works 1 points 5 minutes ago

Because it's not possible.

LLMs are just machines that generate text — specifically, text that is statistically likely to appear after the existing text. You can do "prompt engineering" all you want, but that will never be a reliable safeguard. All prompt engineering does is change the words that come earlier in the context window. If the model calculates that the most likely words to come next are "you should kill yourself", then that's what it's going to spit out.

You could try putting a filter in there to prevent it from outputting specific words or phrases. But language is incredibly malleable. The LLM could spit out thousands of different ways of saying "kill yourself", and you can't block them all. If you want to prevent it from expressing the concept of killing oneself, you need something that can "comprehend" text... which at this point is basically just another version of the same kind of AI that generates the text, so that's not going to work either.
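To make the filter point concrete, here's a toy sketch (the blocklist, phrases, and function name are all made up for illustration — this is not any real product's safeguard): a naive substring blocklist catches only the exact phrases it was given, and any paraphrase sails right through.

```python
# Hypothetical example of a naive phrase blocklist, for illustration only.
BLOCKLIST = ["kill yourself", "end your life"]

def passes_filter(text: str) -> bool:
    """Return True if the text contains none of the blocked phrases."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# The exact phrase is caught...
print(passes_filter("you should kill yourself"))  # False (blocked)

# ...but trivial rewordings are not.
print(passes_filter("you should leave your physical body behind"))  # True (allowed)
print(passes_filter("un-alive yourself"))  # True (allowed)
```

The failure mode is exactly the one described above: the filter matches surface strings, not meaning, so the number of paraphrases it would need to enumerate is effectively unbounded.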

[–] _wizard@lemmy.world 1 points 9 minutes ago

Well, I actually noticed something recently. I pushed through a day's worth of solo driving about a year ago, then repeated the same haul just this weekend. Both times I used Gemini's voice chat for traffic updates, nearest points of interest, and general chat. It was far, far different from last time. The safeguards felt very much in place now. The Maps integration was cleaner, so it made a good copilot there, but general chat really went downhill.

[–] sveltecider@lemmy.ca 3 points 3 hours ago (1 children)
[–] frunch@lemmy.world 2 points 3 hours ago

Guess Google's starting to hit their stride.

I'll be on the lookout for more stories about people killing themselves after encounters with Google Gemini™