
In the filings, Anthropic states, as reported by the Washington Post: “Project Panama is our effort to destructively scan all the books in the world. We don’t want it to be known that we are working on this.”

[–] MangoCats@feddit.it 0 points 3 hours ago (1 children)

That's what I read in the article - the "researchers" may have been using other interfaces. Also, since that "research" came out, I suspect the models have been adjusted to prevent the appearance of copying...

[–] FauxLiving@lemmy.world 2 points 3 hours ago (1 children)

I'm running the dolphin model locally. It's an "abliterated" model, meaning it has been fine-tuned not to refuse any request, and since it runs locally I also have access to the full output vectors (the logits), like the researchers used in the experiment.
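For anyone curious what "full output vectors" means in practice, here's a minimal sketch of pulling per-token logits from a locally loaded model with Hugging Face transformers. The model ID is just an illustrative dolphin variant, not necessarily my exact setup:

```python
# Minimal sketch of reading full output vectors (logits) from a local
# model with Hugging Face transformers. The model ID below is an
# illustrative dolphin variant, not necessarily the exact one I run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9-llama3-8b"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Call me Ishmael.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The last position's vector scores every token in the vocabulary;
# softmax turns it into the model's next-token distribution.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.4f}")
```

Hosted chat interfaces only show you the sampled text; running locally is what lets you inspect the whole distribution the way the researchers did.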

I replied to another comment, in detail, about the Meta study and how it isn't remotely close to "reproduces a full book when prompted."

In the study, they were trying to reproduce 50-token chunks (a token is less than a word, so under 50 words) given the previous 50 tokens. They found that in some sections (around 42% of the ones they tried) the model reproduced the next 50 tokens more than 50% of the time.
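To make that concrete, the measurement looks roughly like this (my reconstruction, not the study's actual code): feed the model a 50-token prefix from the book, decode greedily, and check whether the next 50 tokens come back verbatim.

```python
# Sketch of the 50-token reproduction test described above (my
# reconstruction, not the study's actual code).
import torch

def reproduces_chunk(model, tokenizer, text, offset, chunk=50):
    """Greedily generate `chunk` tokens after a `chunk`-token prefix
    taken from `text` at `offset`, and compare to the real continuation."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    prefix = ids[offset : offset + chunk]
    target = ids[offset + chunk : offset + 2 * chunk]
    out = model.generate(
        prefix.unsqueeze(0),
        max_new_tokens=chunk,
        do_sample=False,  # greedy decoding: the single most likely continuation
    )
    return torch.equal(out[0, chunk:], target)
```

Run that over many offsets in a book and the fraction of hits is the extraction rate. The study actually worked with probabilities over the model's sampling distribution rather than a single greedy pass, but greedy decoding is the simplest version of the same check.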

Reproducing some short sentences from parts of a book, some of the time, is insignificant compared to something like Google Books, which will serve up the exact snippet of text from its 100% perfect digital copy and show you exact digital copies of book covers, etc.

This research is of interest to the academic study of AI, in the subfields focused on understanding how models represent data internally. It doesn't have any significance when talking about copyright.

[–] MangoCats@feddit.it 2 points 2 hours ago

It doesn’t have any significance when talking about copyright.

I agree, but that doesn't stop journalists from recognizing a hot button topic and hyper-bashing that button as fast and hard and often as they can.