this post was submitted on 21 Nov 2024
165 points (97.7% liked)

Technology


Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at exposing unreported CSAM at scale.
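The announcement is about classifying *unknown* material at upload time, as opposed to matching uploads against hashes of already-reported files. Below is a minimal sketch of how such an upload-screening hook might combine the two approaches; the `ModerationClient` class, `screen_upload` function, and threshold value are hypothetical illustrations, not Thorn's or Hive's actual interfaces.

```python
# Illustrative only: a hypothetical upload-time moderation hook combining
# known-hash matching with a classifier for previously unseen material.
import hashlib
from dataclasses import dataclass


@dataclass
class ModerationResult:
    flagged: bool
    reason: str


class ModerationClient:
    """Hypothetical wrapper around a hosted classification endpoint."""

    def classify(self, data: bytes) -> float:
        # A real integration would call the vendor's API and return a
        # confidence score in [0, 1]; this is just a stub.
        raise NotImplementedError


def screen_upload(data: bytes, known_hashes: set[str],
                  client: ModerationClient,
                  threshold: float = 0.9) -> ModerationResult:
    # Step 1: exact-hash match catches previously reported (known) material.
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_hashes:
        return ModerationResult(True, "matched known-content hash list")

    # Step 2: a classifier scores the file so unknown material can be
    # routed to human review instead of being published automatically.
    score = client.classify(data)
    if score >= threshold:
        return ModerationResult(True, f"classifier score {score:.2f} above threshold")

    return ModerationResult(False, "no hash match, score below threshold")
```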

[–] sexual_tomato@lemmy.dbzer0.com 14 points 5 months ago* (last edited 5 months ago) (2 children)

Jesus Christ. If someone ever got their hands on this model, they could use it to generate new material. The grossest possible AI model to date.

[–] Kbobabob@lemmy.world 1 point 5 months ago (1 child)

I thought being able to do that was already a thing. This is designed to do the opposite.

I know, I know... bad actors and such.

[–] NauticalNoodle@lemmy.ml 1 point 5 months ago* (last edited 5 months ago)

...but if simple possession defines who a bad actor is...

The irony of this never ceases to amaze me.