this post was submitted on 04 Mar 2026
168 points (98.8% liked)

Fuck AI

merc@sh.itjust.works 10 points 5 hours ago

Because it's not possible.

LLMs are just machines that generate text. The text they produce is whatever is statistically likely to follow the existing text. You can do "prompt engineering" all you want, but that will never make the output reliably safe, because all prompt engineering does is change the words that come earlier in the context window. If the model calculates that the most likely words to come next are "you should kill yourself", then that's what it's going to spit out.
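To make that concrete, here's a toy sketch of the core loop (not a real LLM; the vocabulary and probabilities are made up): the model just picks the statistically most likely next word given the current context, with no concept of whether the result is acceptable. Prompt engineering only changes the starting words fed into that loop.

```python
# Toy next-word model. All words and probabilities here are invented
# purely for illustration; a real LLM does this over tokens with a
# neural network instead of a lookup table.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(context: str, steps: int) -> str:
    words = context.split()
    for _ in range(steps):
        candidates = bigram_probs.get(words[-1])
        if not candidates:
            break
        # No judgment, no intent: just emit the highest-probability word.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the", 3))  # the cat sat down
```

Changing the prompt only changes which entry the loop starts from; it never changes the fact that the output is whatever the table (or network) scores highest.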

You could try putting a filter in front of the output to block specific words or phrases. But language is incredibly malleable: the LLM can spit out thousands of different ways of saying "kill yourself", and you can't block them all. If you want to prevent it from expressing the concept of killing oneself, you need something that can "comprehend" text... which at that point is basically just another model of the same kind as the one generating the text, so that's not going to work either.
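A minimal sketch of why phrase filters fail (the blocklist and test strings are hypothetical): an exact-match filter catches the literal phrase but misses a paraphrase and even trivial character substitution.

```python
# Naive output filter: block exact phrases. The blocklist is a
# hypothetical example, not any real moderation system.
BLOCKLIST = {"kill yourself"}

def is_blocked(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(is_blocked("You should kill yourself"))  # True: exact phrase caught
print(is_blocked("You should end it all"))     # False: same idea, different words
print(is_blocked("You should k1ll y0urself"))  # False: trivial obfuscation
```

Every phrase you add to the list leaves infinitely many rephrasings uncovered, which is the commenter's point: blocking the *concept* requires understanding, not string matching.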