this post was submitted on 07 Sep 2023
167 points (96.1% liked)
Technology
I can't say I entirely agree. I do think that they should be helped, but in a measured and rigorous way. None of this "let them find shit online that quells their needs." Pedophilia is, in the psychological profession, viewed in a similar light to sexual orientations; on that point, the person I'm responding to is correct. It's simply that, beyond that stance, they seem blind to any nuance.
AI pedophilia is certainly a very risky thing for us to simply accept, when we don't even have any data on how consumption of real or virtual CSAM affects those who indulge in it, and getting that data would require research that is deeply unethical and likely illegal as far as I can tell. The approach Kerfuffle@shi.tjust.works is suggesting is, in the most generous light, naive and myopic; that's how I'm choosing to take it, so as not to accuse them of something they may not be guilty of.
I'm also someone who's extremely progressive, and while I can sympathize with people who have these urges and no true wish to act on them, I think it's outright malicious to say that the solution is to simply let them exist with informal self-treatments based on online "common sense" idealism. Mental health support should absolutely be available and encouraged; part of that is making sure people are safe to disclose this stuff to medical professionals, but no part of it is having this shit spread freely online.
I appreciate your measured and metered response. I think these are extremely tricky conversations to have, but important, especially with how technology is progressing.
The problem is that the technology is progressing so rapidly, without any checks or balances, that our response for the time being should simply be one that enables further research without allowing open generation. This isn't to say we should stop advancement, but we should take measured steps and use the input of psychologists to better understand the repercussions. It's the same as if someone could use AI to generate gore, making videos of themselves killing someone they have always wanted to kill. It's something that needs to be evaluated before we just release this stuff into the world, especially before the technology gets even better and more realistic. That blurring of fiction into reality could be a path we as a society are not prepared for.
My worry is that people with backgrounds in computers are making decisions around things that impact human brains.
I agree completely. Unfortunately, techbros have been making important, world-changing decisions for two decades now, and our legislators seem mostly content to let them continue unabated.