A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10 if users send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of celebrities, including images of minors in swimsuits, but is particularly notable because it plainly and openly shows one of the most severe harms of generative AI tools: easily creating nonconsensual pornography of ordinary people.

50 comments
[-] Ultragigagigantic@lemmy.world 4 points 8 months ago

It's gonna suck no matter what now that the technology is available. Perhaps in a bunch of generations there will be a massive cultural shift to something less toxic.

May as well drink the poison if I'm gonna be immersed in it. Cheers.

[-] VinnyDaCat@lemmy.world 3 points 8 months ago

I was really hoping that with the onset of AI people would be more skeptical of content they see online.

This was one of the reasons. I don't think there's anything we can do to prevent people from acting like this, but what we can do as a society is adjust to it so that it's not as harmful. I'm still hoping that it eventually becoming easily accessible and usable will help people to look at all content much more closely.

[-] afraid_of_zombies@lemmy.world 4 points 8 months ago

It's stuff like this that makes me against copyright laws. To me it is clear and obvious that you own your own image, and it is far less obvious to me that a company can own an image that someone drew multiple decades ago and that everyone can identify. And yet one is protected and the other isn't.

What the hell do you own if not yourself? How come a corporation has more rights than we do?


We need to shut this whole coomer thing down until we work out wtf is going on in their brains.

[-] noxy@yiffit.net 2 points 8 months ago

I wonder how holodecks handle this...

[-] shasta@lemm.ee 2 points 8 months ago

They send you to therapy because "it's not healthy to live in a fantasy."

[-] antlion@lemmy.dbzer0.com 3 points 8 months ago

Don’t be like Lt Reg Barclay

[-] 9488fcea02a9@sh.itjust.works 1 points 8 months ago* (last edited 8 months ago)

Probably the same types of guardrails ChatGPT has when you ask it to tell you how to cook meth or build a dirty bomb.

And maybe Data was distributing jailbroken holodeck programs for pervs on the ship

this post was submitted on 29 Mar 2024
341 points (93.4% liked)

Technology
