317 points (96.2% liked) · submitted 20 Sep 2023 by Geert@lemmy.world to c/technology@lemmy.world
[-] JohnDClay@sh.itjust.works 1 point 11 months ago

But I don't know if Google cares enough about privacy to bother training individual models to avoid cross-contamination. Each model takes years' worth of supercomputer time, so the fewer they need to train, the lower the cost.

[-] Natanael@slrpnk.net 1 point 11 months ago

Extending an existing model (retraining, i.e. fine-tuning) doesn't take years; it can be done in far less time.

[-] JohnDClay@sh.itjust.works 1 point 11 months ago

Hmm, I thought one of the problems with LLMs was that they're pretty much baked in during the training process. Maybe that was only with respect to removing information?

[-] Natanael@slrpnk.net 1 point 11 months ago

Yeah, it's hard to remove data that's already been trained into a model. But you can retrain an existing model to add capabilities, so if you train one base model on public data, copy it multiple times, and then retrain each copy on a different set of private data, you can save a lot of work.
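
To make that concrete, here's a minimal sketch of how this pattern is commonly done today with parameter-efficient fine-tuning (LoRA via Hugging Face transformers + peft). Everything specific here is an illustrative assumption, not something from the thread: `gpt2` stands in for a base model trained on public data, and the two-line dataset stands in for one user's private data.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE = "gpt2"  # stand-in for a base model trained only on public data

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains a small adapter on top of the frozen base weights, so each
# private dataset yields its own cheap adapter instead of a full retrain.
model = get_peft_model(
    base_model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"),
)

# Toy stand-in for one user's private data (hypothetical).
dataset = (
    Dataset.from_dict({"text": ["user-specific document one", "user-specific document two"]})
    .map(lambda ex: tokenizer(ex["text"], truncation=True), remove_columns=["text"])
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter_user_a", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("adapter_user_a")  # saves only the small adapter weights
```

Because the base weights stay frozen, each private dataset produces only a small adapter file, and the expensive public-data pretraining is done exactly once and shared across all copies.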
