this post was submitted on 23 Feb 2026
287 points (98.3% liked)
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
You're getting downvoted because it sounds like you're defending the topic at hand. It shows how little most people understand about the inner workings of an LLM. Hell, even the experts aren't completely sure; they ran with what was working and have been tweaking along the way when things got too ugly. And as others have brought up, they used everything they could grab to make it happen, without concern for legality or future backlash. For science... and profit. I don't see a way to go backwards at this point, with AI embedded into everything (where it's suited and where it's not). For science... no, wait, that's definitely for profit. And because of your points, there's no real way to filter out or carve out what should have been restricted from being used, because the training data isn't sitting in the model in that form. We need to do something, and quickly, but we have to work with the beast we've made.
Laws are notorious for moving far slower than the tech they try to control. And this time it can't be retroactive. Well, I mean, it could be... if we banned all existing LLMs and related AI work and started over. Good luck with that kind of legislation.
To be fair, the big AI companies are just applying the science in order to profit from it. The science behind LLMs is innocent enough. It's some very specific, money-making applications of that science that are pissing people off.
Reading all these replies... ugh. It's so obvious none of these people understand how LLMs work, or how the training happens.
Somehow people got it into their heads that LLMs are "plagiarism machines" and that image stuck. LLMs aren't copying anything when they generate output! When one does reproduce training text verbatim, that's a memorization flaw, and AI researchers are constantly trying to spot and fix cases like that. Why? Because those same flaws let third parties probe and copy how their models work, and they can create security issues.
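To make the "generating, not copying" point concrete, here's a toy sketch. This is a bigram model over a made-up twelve-word corpus, nothing like a real transformer, but the generation principle is the same one the comment describes: each next token is sampled from a learned probability distribution over the vocabulary, rather than retrieved as a stored span of training text.

```python
import random
from collections import defaultdict, Counter

# Hypothetical tiny corpus, purely for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which word follows which. A real LLM learns a far
# richer conditional distribution, but it is still statistics, not a
# database of stored documents.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """Generate text token by token, sampling each next word from the
    learned conditional distribution. No training sentence is looked
    up or copied; novel combinations can and do come out."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break  # dead end: no observed continuation
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

print(generate("the", 6))
```

Memorization in this picture would be the degenerate case where the learned distribution puts all its weight on one long training sequence, which is exactly the kind of flaw researchers try to detect and reduce.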