this post was submitted on 03 Feb 2026
312 points (94.6% liked)


In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

[–] Wispy2891@lemmy.world 36 points 7 hours ago (1 children)

It's not a secret; it was their defence when they got sued for copyright infringement. Instead of downloading all the books from Anna's Archive like Meta, they buy a copy, cut the binding, scan it, then destroy it. "We bought a copy for personal use then use the content for profit, it's not piracy"

[–] FauxLiving@lemmy.world 17 points 6 hours ago (1 children)

“We bought a copy for personal use then use the content for profit, it’s not piracy”

That is an accurate summary of how the courts have ruled.

Downloading books without paying is illegal copyright infringement.

Using the data from the books to train an AI model is 'sufficiently transformative' and so falls under the fair use exemption to copyright protection.

[–] ch00f@lemmy.world 5 points 5 hours ago (2 children)

Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.

[–] FauxLiving@lemmy.world 5 points 2 hours ago (3 children)

That's quite a claim; I'd like to see it. Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

I doubt that this is the case, as one of the features of chatbots is the randomization of the next token, which is done by treating the model's output vector as a softmaxed probability distribution. That means every single token has a chance to deviate from the source material, because it is selected randomly. Getting a complete reproduction would be on the order of winning 250,000 dice rolls in a row.
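As a rough, self-contained illustration of what that sampling step does (the 95% per-token figure below is made up for illustration, not measured from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn a model's raw output vector (logits) into a softmaxed probability
    distribution and sample the next token from it -- the randomization step
    described above."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy illustration of the odds: even if the "correct" next token is chosen with
# 95% probability at every step (a made-up number), the chance of a verbatim
# reproduction shrinks geometrically with length.
p_per_token = 0.95
for n_tokens in (50, 1_000, 250_000):
    log10_p = n_tokens * math.log10(p_per_token)
    print(f"{n_tokens:>7} tokens: roughly 1 in 10^{abs(log10_p):.0f}")
```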


In any case, the 'highly transformative' standard was set in Authors Guild v. Google, Inc., No. 13-4829 (2d Cir. 2015). In that case Google made digital copies of tens of millions of books and used their covers and text to make Google Books.

As you can see here: https://www.google.com/books/edition/The_Sunlit_Man/uomkEAAAQBAJ, Google completely reproduces the cover, and you can search the text of the book (so you could, in theory, return the entire book through searches). You could actually return a copy of a Harry Potter novel that way (and a high-resolution scan, or even an exact digital copy, of the cover image).

The judge ruled:

Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use.

In cases where people have attempted to claim copyright damages against entities training AI, the finding has essentially been 'if they paid for a copy of the book, then it is legal'. This is why Meta lost their case against the authors: they were sued for 1) pirating the books and 2) using them to train a model for commercial purposes, and the judge struck 2) after citing the 'highly transformative' nature of language models versus books.

[–] MangoCats@feddit.it 1 points 59 minutes ago (1 children)

Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

Start with the first line of the book (enough that it won't be confused with other material in the training set...) and the LLM will return some of the next line. Feed it that and it will return some of what comes next; rinse, lather, repeat. Researchers have gotten significant chunks of novels regurgitated this way.
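The loop is roughly this (a hypothetical sketch; `complete()` stands in for whatever completion API or local model call is being used, it is not a real library function):

```python
def extract_passage(complete, seed_text, rounds=20, chunk_tokens=50):
    """Repeatedly feed the model its own continuation to try to pull more of a
    memorized passage out of it. `complete(prompt, max_tokens)` is assumed to
    return the model's raw continuation as a string."""
    text = seed_text
    for _ in range(rounds):
        continuation = complete(text, max_tokens=chunk_tokens)
        if not continuation.strip():
            break                  # model stopped producing anything new
        text += continuation       # rinse, lather, repeat
    return text
```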

[–] FauxLiving@lemmy.world 1 points 25 minutes ago (1 children)

Start with the first line of the book (enough that it won’t be confused with other material in the training set…) and the LLM will return some of the next line. Feed it that and it will return some of what comes next; rinse, lather, repeat. Researchers have gotten significant chunks of novels regurgitated this way.

This doesn't seem to be working as you're describing.

[–] MangoCats@feddit.it 1 points 21 minutes ago (1 children)

That's what I read in the article; the "researchers" may have been using other interfaces. Also, since that "research" came out, I suspect the models have been adjusted to prevent the appearance of copying...

[–] FauxLiving@lemmy.world 1 points 11 minutes ago

I'm running the dolphin model locally. It's an abliterated model, which means it has been fine-tuned not to refuse any request, and since it is running locally, I also have access to the full output vectors, like the researchers used in the experiment.
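If anyone wants to poke at this themselves, this is roughly how you get at those output vectors with Hugging Face transformers (a sketch; the model ID is just one example of a dolphin checkpoint, swap in whatever you actually run locally):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example dolphin checkpoint; substitute whatever abliterated model you run locally.
model_id = "cognitivecomputations/dolphin-2.9-llama3-8b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Mr. and Mrs. Dursley, of number four, Privet Drive,"
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # the full output vector for the next token
probs = torch.softmax(logits, dim=-1)        # the softmaxed distribution mentioned above

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode([int(idx)])!r}: {p.item():.3f}")
```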

I replied to another comment, in detail, about the Meta study and how it isn't remotely close to 'reproduces a full book when prompted'.

In the study, they were trying to reproduce 50-token chunks (a token is less than a word, so under 50 words) when given the previous 50 tokens. They found that for some sections (around 42% of the ones they tried) the model was able to reproduce the next 50 tokens better than 50% of the time.

Reproducing some short sentences from some of a book some of the time is insignificant compared to something like Google Books, which will copy the exact snippet of text from its 100% perfect digital copy and show you exact digital copies of book covers, etc.

This research is of interest to the academic study of AI, in the subfields focused on understanding how models represent data internally. It doesn't have any significance when talking about copyright.

[–] Repelle@lemmy.world 1 points 1 hour ago (1 children)
[–] FauxLiving@lemmy.world 1 points 1 hour ago

This is the same study as the other reply, so same response.

[–] Giloron@programming.dev 1 points 1 hour ago (1 children)
[–] FauxLiving@lemmy.world 2 points 1 hour ago (1 children)

https://arstechnica.com/features/2025/06/study-metas-llama-3-1-can-recall-42-percent-of-the-first-harry-potter-book/

The claim was "Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit."

In this test they did not get a model to produce an entire book with the right prompt.

Their measurement counted a passage as memorized if the model could reproduce 50 tokens (so, fewer than 50 words) at a time.

The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word.

Even then, they didn't ACTUALLY generate these; they even admit that it would not be feasible to generate some of these 50-token (which is, at most, 50 words, by the way) sequences:

the authors estimated that it would take more than 10 quadrillion samples to exactly reproduce some 50-token sequences from some books. Obviously, it wouldn’t be feasible to actually generate that many outputs.
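For what it's worth, the measurement they describe boils down to something like this (a sketch, not the authors' code; `token_logprobs()` is a stand-in for however you read per-token log probabilities out of your model):

```python
import math

def is_memorized(token_logprobs, prefix_tokens, target_tokens, threshold=0.5):
    """Per the article's description: given a 50-token prefix, estimate the
    probability that the model reproduces the next 50 tokens verbatim by summing
    the log probability it assigns to each target token, then check whether that
    probability clears 50%. `token_logprobs(prefix, targets)` is assumed to
    return one log probability per target token."""
    logprobs = token_logprobs(prefix_tokens, target_tokens)
    p_exact = math.exp(sum(logprobs))
    return p_exact > threshold
```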

[–] NostraDavid@programming.dev 2 points 1 hour ago

The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”

In this test they did not get a model to produce an entire book with the right prompt.

For context: These two sentences are 46 Tokens/210 Characters, as per https://platform.openai.com/tokenizer.

50 tokens is just about two sentences. This comment is about 42 tokens itself.
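You can check counts like that locally with the tiktoken package (assuming the GPT-3.5/GPT-4-era cl100k_base encoding; other models tokenize differently):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/GPT-4-era encoding; others differ slightly
text = ('The claim was "Yet most AI models can recite entire Harry Potter books '
        'if prompted the right way, so that\'s all bullshit." In this test they '
        'did not get a model to produce an entire book with the right prompt.')
tokens = enc.encode(text)
print(len(tokens), "tokens,", len(text), "characters")
```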

[–] MangoCats@feddit.it 0 points 1 hour ago (2 children)

You may not have a photographic memory, but dozens of flesh-and-blood humans do. Is it "illegal" for them to exist? They can read a book and then recite it back to you.

[–] Taleya@aussie.zone 0 points 47 minutes ago (1 children)

Can't believe I have to point this out to you, but machines are not human beings.

[–] MangoCats@feddit.it 1 points 19 minutes ago

Point is: some humans can do this without a machine. If a human is assisted by a machine to do something that other humans can do but they cannot, is that illegal?

[–] vaultdweller013@sh.itjust.works 0 points 49 minutes ago

Those are human beings, not machines. You are comparing a flesh-and-blood person to a souped-up autocorrect program that is fed data and regurgitates it back.