this post was submitted on 23 Feb 2026
287 points (98.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


AI and legal experts told the FT this “memorization” ability could have serious ramifications on AI groups’ battle against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs “learn” from copyrighted works but do not store copies.

Sam Altman would like to remind you that each Old Lady at a Library consumes 284 cubic feet of Oxygen a day from the air.

Also, hey, at least they probably made sure to destroy the physical copy they ripped into their hopelessly fragmented CorpoNapster fever dream; the law is the law.

[–] supersquirrel@sopuli.xyz 3 points 2 days ago* (last edited 2 days ago) (1 children)

That's all it is. It's not a database! It hasn't memorized anything. It hasn't encoded anything. You can't decode it at all because it's a one-way process.

No, it isn't a one-way process; the whole point of this article is that you functionally can decode it.

[–] riskable@programming.dev 2 points 1 day ago (1 children)

You can functionally copy Shakespeare with enough random words being generated. That's the argument you're making here.

If you prompt an LLM to finish sentences enough times (like the researchers did, referenced in the article) you can get it to output whatever TF you want.

Wait: Did you think the researchers got these results on the first try? You do realize they passed zillions of prompts into these LLMs until it matched the output they were looking for, right?

It's not like they said, "spit out Harry Potter" and it did so. They gave the LLM partial sentences and just kept retrying until it generated the matching output. The output that didn't match was discarded and then the final batch of matching outputs were thrown together in order to say, "aha! See? It can regurgitate text!"

Try it yourself: Take some sentences from any popular book, cut them in half, and tell Claude to finish them. You'll be surprised. Or maybe not if you remember that RNG is at the core of all LLMs.
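The retry-until-it-matches procedure being described could be sketched roughly like this. The `complete()` function below is a hypothetical stand-in for a real LLM API call, and the sample text is just an illustration; the point is the loop-and-compare logic:

```python
def complete(prefix: str) -> str:
    """Hypothetical stand-in for an LLM completion call.
    An actual experiment would query a model API here."""
    # For demonstration, pretend the model has memorized this line:
    known_text = "It was the best of times, it was the worst of times"
    if known_text.startswith(prefix):
        return known_text[len(prefix):]
    return "something unrelated"

def memorization_check(sentence: str, split_at: int, tries: int = 100) -> bool:
    """Cut a sentence in half, ask for completions repeatedly, and
    report whether any attempt reproduces the original ending verbatim.
    Non-matching attempts are simply discarded, as in the study."""
    prefix, target = sentence[:split_at], sentence[split_at:]
    for _ in range(tries):
        if complete(prefix).startswith(target):
            return True
    return False

print(memorization_check(
    "It was the best of times, it was the worst of times", 25))
# prints True: the stand-in "model" reproduces the ending verbatim
```

With a real model the inner call is stochastic, which is exactly why the discard-and-retry loop matters: one verbatim hit among many misses still counts as a match.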

[–] supersquirrel@sopuli.xyz 1 points 1 day ago

You can functionally copy Shakespeare with enough random words being generated. That's the argument you're making here.

No, it is not; that would be writing Shakespeare by combining random words, and LLMs are not capable of that level of artistry. There is no randomness to them. All they can do is calculate the probabilities of pre-existing connections and give you the most boring, obvious one.
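Taking "the most boring, obvious" connection every time is what's known as greedy decoding: always pick the single highest-probability next token. A toy sketch, with an invented bigram probability table standing in for a real model's learned weights:

```python
# Toy greedy decoder. The probability table is made up for
# illustration; a real LLM computes these from learned weights.
NEXT_TOKEN_PROBS = {
    "to":  {"be": 0.6, "go": 0.3, "see": 0.1},
    "be":  {"or": 0.7, "the": 0.3},
    "or":  {"not": 0.8, "else": 0.2},
    "not": {"to": 0.9, "be": 0.1},
}

def greedy_continue(token: str, steps: int) -> list[str]:
    """Always take the single most probable next token --
    no sampling, just the 'obvious' choice at every step."""
    out = [token]
    for _ in range(steps):
        options = NEXT_TOKEN_PROBS.get(out[-1])
        if not options:
            break
        out.append(max(options, key=options.get))
    return out

print(" ".join(greedy_continue("to", 4)))
# prints: to be or not to
```

For what it's worth, deployed models usually sample from the distribution with a temperature setting rather than decoding greedily, which is where the RNG the other commenter mentions comes in; the sketch above shows only the purely deterministic case being described here.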