this post was submitted on 03 Feb 2026
334 points (94.4% liked)

Technology


In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

[–] FauxLiving@lemmy.world 3 points 3 hours ago (1 children)

https://arstechnica.com/features/2025/06/study-metas-llama-3-1-can-recall-42-percent-of-the-first-harry-potter-book/

The claim was "Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit."

In this test, they did not get a model to produce an entire book, no matter the prompt.

A measurement was considered successful if the model could reproduce 50 tokens (so, fewer than 50 words) at a time.

The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word.
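The procedure described above can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the function names, the window stride, and the `seq_prob` callback (which would come from the model's per-token log-probabilities in the real study) are all assumptions.

```python
# Sketch of the memorization measure: split a token sequence into
# overlapping 100-token windows, use the first 50 tokens as the prompt,
# and count a passage as "memorized" if the model's probability of
# emitting the next 50 tokens verbatim exceeds 0.5.

def windows(tokens, size=100, stride=10):
    """Yield overlapping windows of `size` tokens (stride is an assumption)."""
    for i in range(0, len(tokens) - size + 1, stride):
        yield tokens[i:i + size]

def memorized_fraction(tokens, seq_prob, threshold=0.5):
    """seq_prob(prompt, continuation) -> probability that the model emits
    `continuation` verbatim after `prompt` (product of per-token probs).
    Returns the fraction of windows counted as memorized."""
    hits = total = 0
    for w in windows(tokens):
        prompt, continuation = w[:50], w[50:]
        total += 1
        if seq_prob(prompt, continuation) > threshold:
            hits += 1
    return hits / total if total else 0.0
```

The key point the comment is making is visible in the code: nothing here ever generates a whole book, and nothing is ever sampled at all; each window only asks whether 50 tokens would be reproduced with better-than-even odds.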

Even then, they didn't ACTUALLY generate these; they even admit that it would not be feasible to generate some of these 50-token (at most 50 words, by the way) sequences:

the authors estimated that it would take more than 10 quadrillion samples to exactly reproduce some 50-token sequences from some books. Obviously, it wouldn’t be feasible to actually generate that many outputs.
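A back-of-the-envelope sketch shows why numbers that large fall out of this kind of measurement. This is an illustration under an assumed uniform per-token probability, not the study's actual estimate:

```python
# Even if the model had a coin-flip (50%) chance of matching each
# individual token, the chance of reproducing all 50 in a row is
# 0.5 ** 50, so you'd expect to need on the order of 1 / 0.5**50
# samples to see one exact reproduction.
p_token = 0.5
p_sequence = p_token ** 50
expected_samples = 1 / p_sequence
print(f"{expected_samples:.2e}")  # on the order of 1e15, about a quadrillion
```

Sequences the model finds less likely than that push the expected sample count into the quadrillions, which is why the authors computed probabilities from the model instead of sampling outputs.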

[–] NostraDavid@programming.dev 2 points 2 hours ago

The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”

In this test they did not get a model to produce an entire book with the right prompt.

For context: these two sentences are 46 tokens / 210 characters, as per https://platform.openai.com/tokenizer.

50 tokens is just about two sentences. This comment is about 42 tokens itself.