Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] nonentity@sh.itjust.works 19 points 12 hours ago (2 children)

LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they can prove to be truly effective and defensible in an application, I’m OK with them being used in targeted ways much like any other specialised tool in a kit.

That said, I have yet to identify any use of LLMs today which clears my technical and ethical barriers to justify their use.

My experience to date is that the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded the respect and deference I’d extend to anyone who adheres to a faith I don’t share.

[–] hector@lemmy.today 1 points 2 hours ago

I mean, I think one legitimate use is sifting through massive tranches of information and pulling out everything on a subject. Like the Epstein files: take whatever isn't redacted in the half of the pages they actually released, and pull out every mention of, say, the boss of the company that ultimately owns the company you work for, or the president.

ProPublica uses it for something of that sort, anyway; in an article I read a couple of years ago they explained how they used it to sift through tranches of information. That seemed like a rare case where this technology could actually be useful.
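For what it's worth, the mechanical part of that workflow is simple to sketch. Something like the Python below, assuming an OpenAI-compatible API; the model name, chunk size, and prompt are placeholders, not whatever ProPublica actually ran:

```python
# Sketch only: chunk a long document and ask an LLM to quote every mention of a name.
# Assumes an OpenAI-compatible endpoint; model, chunk size, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunks(text, size=8000):
    """Split a long document into pieces small enough for one request."""
    for i in range(0, len(text), size):
        yield text[i:i + size]


def find_mentions(document, name, model="gpt-4o-mini"):
    """Return the passages the model quotes as mentioning `name`."""
    hits = []
    for piece in chunks(document):
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Quote verbatim every passage in the text that mentions the named "
                            "person. If there are none, reply with exactly NONE."},
                {"role": "user", "content": f"Name: {name}\n\nText:\n{piece}"},
            ],
        )
        answer = resp.choices[0].message.content.strip()
        if answer != "NONE":
            hits.append(answer)
    return hits
```

You'd still have to check every quote against the source pages, of course, which is presumably why a newsroom treats it as a lead generator rather than a result.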

[–] pkjqpg1h@lemmy.zip 2 points 12 hours ago (4 children)

What do you think about these:

Translation
Grammar
Text editing
Categorization
Summarization
OCR
[–] nonentity@sh.itjust.works 5 points 3 hours ago

LLMs can’t perform any of those functions, and the output from tools that are infected with them and claim to can only ever be intrinsically imprecise, and should never be trusted.

[–] AnnaFrankfurter@lemmy.ml 6 points 9 hours ago

Translation isn't as easy as taking a word and replacing it with another word from a different language with the same definition. Yes, a technical document or something similar can be translated word for word. But jokes, songs, and a lot of other things differ from culture to culture. Sometimes an author chooses a specific word in a certain language, rooted in a certain culture, precisely because it can be interpreted in multiple ways and reveals a hidden meaning to readers.

And sometimes, to convey the same emotion to a reader from a different language and culture, the text has to be changed heavily.

[–] Catoblepas@piefed.blahaj.zone 7 points 10 hours ago (1 children)

OCR isn’t a large language model. That’s why you sometimes get garbled nonsense from poor-quality scans or damaged text: it isn’t determining the statistically most likely next word, it’s matching the input against possible individual characters.
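For instance, plain Tesseract does the whole job with no language model in the loop; a minimal sketch, assuming the tesseract binary plus the pytesseract and Pillow packages are installed:

```python
# Classical OCR: character/line recognition on the image, no LLM involved.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scan.png"))
print(text)  # a bad scan just comes out garbled; nothing "predicts" a plausible fix
```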

[–] pkjqpg1h@lemmy.zip -2 points 9 hours ago* (last edited 9 hours ago)

I mean using LLMs for OCR, like Gemini 3 Flash or Kimi K2.5.
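i.e. you hand the page image to a vision-capable model and ask for a transcript. A rough sketch against OpenRouter's OpenAI-compatible API; the model ID is a placeholder, check what's actually listed:

```python
# "OCR" via a multimodal LLM: send the page image, ask for a verbatim transcript.
# Uses OpenRouter's OpenAI-compatible API; the model ID below is a placeholder.
import base64
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")

with open("scan.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="some-vendor/some-vision-model",  # placeholder: pick a vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe all text in this image exactly as written."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```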

[–] FrowingFostek@lemmy.world 2 points 11 hours ago (1 children)

Not OP. I wouldn't call myself tech savvy, but categorization of the files on my computer sounds kinda nice. I just can't trust these clowns to keep all my data local.

[–] pkjqpg1h@lemmy.zip 1 points 11 hours ago

There are some providers with Zero Data Retention; you can check on OpenRouter.
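For example, with the OpenAI client pointed at OpenRouter you can pass a provider preference so the request only goes to endpoints that say they don't retain or train on prompts. A rough sketch; the exact provider-preference field is from memory, so double-check it against OpenRouter's current docs:

```python
# Sketch: ask OpenRouter to route only to providers that don't retain prompt data.
# The "provider" preference object is an assumption from memory; verify against the docs.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")

resp = client.chat.completions.create(
    model="some-vendor/some-model",  # placeholder model ID
    messages=[{"role": "user", "content": "Suggest a folder name for: tax_receipts_2024.zip"}],
    extra_body={"provider": {"data_collection": "deny"}},  # assumed field name; check docs
)
print(resp.choices[0].message.content)
```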