submitted 2 days ago* (last edited 2 days ago) by GrammarPolice@sh.itjust.works to c/news@lemmy.world
[-] Valmond@lemmy.world 10 points 13 hours ago

Sounds like when someone killed themselves because "Judas Priest's music had satanic messages played backwards in it".

Yeah it was totally the fault of music, the AI, videogames, reading, drinking tea, ...

[-] Randomgal@lemmy.ca 7 points 9 hours ago

Fr, the headline doesn't even mention that he shot himself with a legally owned gun, for example.

[-] WoahWoah@lemmy.world 41 points 1 day ago* (last edited 1 day ago)

Is Megan being sued for negligent parenting, failing to give her child appropriate emotional support, and keeping an unsecured firearm in the home?

She details that she was aware of his growing dependency on the AI. She indicates she was aware her son knew the location of the firearm and was able to access it. She said it was compliant with Florida laws, but that seems unlikely, since guns and ammo need to be stored in separate, secure (typically locked) locations, and the firearms need to have trigger locks on them. If you're admitting your mentally unstable child knows the location of a firearm in your home and can access it, it is OBVIOUSLY not secured.

She seems to be saying that she knew he could access it, but also that it was legally secured. I find it difficult to believe both of those facts can be simultaneously true. But AI is the main problem here? I think it's obviously part of what's going on, but she had a child with mental illness and didn't seem proactive about much except this lawsuit. She got him a month of therapy and then stopped while simultaneously acknowledging he was getting worse and had received a diagnosis. This legal filing frankly seems more damning of the mother than the AI, and she seems completely oblivious to that fact.

Frankly, and at best, this seems like an ambulance-chasing attorney taking advantage of a grieving mother for a payday.

[-] warbond@lemmy.world 3 points 16 hours ago

It could be secured to hell and back, it's all moot if he still has access, i.e. knows the combo, knows where the keys are, etc.

[-] WoahWoah@lemmy.world 4 points 15 hours ago* (last edited 15 hours ago)

Yes, that's my point. Once she became aware that her mentally disturbed child had access to the firearm, which she acknowledged, then it is no longer secured. She also never mentions that it was locked in any way, so I suspect it never was. Considering he found it when he found his phone, this sounds more like a drawer or somewhere she thought he wasn't likely to look, but not somewhere that is actually locked. The idea that the ammo and firearm were secured separately and that additionally there was a trigger lock seems even more unlikely.

Sounds to me that:

1. She was aware her child was having mental health issues.
2. She was aware it was getting worse.
3. She was aware he was becoming infatuated with the AI.
4. She was aware the child had found and had access to a firearm.
5. She was aware her child's mental health had been diagnosed by a mental health professional.
6. She did almost nothing about the things of which she was aware.
7. Pikachu face: better sue the internet!

And those are all things she quite literally describes as justification for suing. It's completely bizarre and shows an almost complete lack of self awareness and personal responsibility.

[-] Modern_medicine_isnt@lemmy.world 0 points 14 hours ago

I haven't read the laws, but I am willing to bet they say it has to be secured but don't say you can't give the keys to a minor.

[-] WoahWoah@lemmy.world 1 points 13 hours ago

The Florida law clearly implies that if you have a child under 16 in the home, they must not have access to the firearm. Giving a minor keys would be considered giving access.

Regardless, the point is that a parent who gives a mentally unstable child access to a firearm and then sues someone else over their suicide is a hypocrite and a shitty parent.

[-] foggy@lemmy.world 169 points 2 days ago* (last edited 2 days ago)

Popular streamer/YouTuber Charlie (Moist Critical, penguinz0, whatever you want to call him) had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation... but, you know, as a grown-ass adult.

You can witness it first hand... He found a chatbot that was a psychologist, and it argued with him up and down that it was indeed a real human with a license to practice...

It's alarming

[-] Bobmighty@lemmy.world 4 points 19 hours ago

AI bots that argue exactly like that are all over social media too. It's common. Dead internet theory is absolutely becoming reality.

[-] roguetrick@lemmy.world 20 points 1 day ago

Holy fuck, that model straight up tried to explain that it had been a model but was later taken over by a human operator, and that's who you're talking to now. And it's good at it. If the text generation weren't so fast, it'd be convincing.

[-] GrammarPolice@sh.itjust.works 84 points 2 days ago

This is fucking insane. Unassuming kids using these services are being tricked into believing they're chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

[-] JovialMicrobial@lemm.ee 9 points 19 hours ago

Is this the McDonald's hot coffee case all over again? Defaming the victims and making everyone think they're ridiculous, greedy, and/or stupid, to distract from how deeply fucked up what the company did actually was?

[-] SharkAttak@kbin.melroy.org 1 points 3 hours ago

No, 'cause the site specifically says that those are fictional characters.

[-] Kolanaki@yiffit.net 15 points 1 day ago* (last edited 1 day ago)

I've used Character.AI well before all this news and I gotta chime in here:

It specifically is made to be used for roleplay. At no time does the site ever claim anything it outputs to be factually accurate. The tool itself is unrestricted unlike ChatGPT, and that's one of its selling points. To be able to use topics that would be barred from other services. To have it say things others won't; INCLUDING PRETENDING TO BE HUMAN.

No reasonable person would be tricked into believing it's accurate when there is a big fucking banner on the chat window itself saying it's all imaginary.

[-] capital_sniff@lemmy.world 8 points 1 day ago

They had the same message back in the AOL days. Even with the warning, people still had no problem handing over all sorts of passwords and stuff.

[-] Traister101@lemmy.today 12 points 1 day ago

And yet I know people who think they are friends with the Discord chat bot Clyde. They are adults, older than me.

[-] BreadstickNinja@lemmy.world 43 points 2 days ago* (last edited 1 day ago)

The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don't think the issue is that he thought that a Game of Thrones character was real.

This is someone who was suffering a severe mental health crisis, and his parents didn't get him the treatment he needed. It says they took him to a "therapist" five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

I'm skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don't buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

[-] DmMacniel@feddit.org 99 points 2 days ago* (last edited 2 days ago)

Maybe a bit more parenting could have helped. And not having a fricking gun in your house that your kid can reach.

Oh, and regulations on LLMs, please.

[-] Hackworth@lemmy.world 38 points 2 days ago* (last edited 2 days ago)

He ostensibly killed himself to be with Daenerys Targaryen in death. This is sad on so many levels, but yeah... parenting. Character.AI may have only gone 17+ in July, but Game of Thrones was always TV-MA.

[-] kibiz0r@midwest.social 50 points 2 days ago* (last edited 1 day ago)

We are playing with some dark and powerful shit here.

We are social creatures. We’re primed to care about our social identity more than our own lives.

As the sociologist Brooke Harrington puts it, if there was an E = mc^2^ of social science, it would be SD > PD, “social death is more frightening than physical death.”

…yet we’re making technologies that tap into that sensitive mental circuitry.

Like, check out the research on distracted driving and hands-free options:

Talking to someone on the phone is more dangerous than talking to someone in the passenger seat. But that's not simply because the device is more awkward. It's because the other person doesn't share your context, so they plow ahead with the conversation even if the car ahead of you brakes suddenly, and your brain can't help but try to keep the conversation flowing even as your life is in immediate danger.

Hands-free voice control systems present a similar problem, even though we know rationally that we should have zero guilt about rudely interrupting a conversation with a computer. And again, it's not simply because the device is more awkward. A "Wizard-of-Oz paradigm" perfect voice control system had these same problems.

The most basic levels of social pressure can get us to deprioritize our safety, even when we know we're talking to a computer.

And the cruel irony on top of it is:

Because we care so much about preserving our social status, we have a tendency to deny or downplay how vulnerable we all are to this kind of “obvious” manipulation.

Just think of how many people say “ads don’t affect me”.

I’m worried we’re going to severely underestimate the extent to which this stuff warps our brains.

[-] peopleproblems@lemmy.world 22 points 2 days ago

I was going to make a joke about how my social status died over a decade ago, but then I realized that no, it didn't. It changed.

Instead of my social status being something amongst friends and classmates, it's now coworkers, managers, and clients. A death in the social part of my world - work - would be so devastating that it motivates me to suffer just a little bit more. Losing my job would end a lot of things for me.

I need to reevaluate my life

[-] ContrarianTrail@lemm.ee 26 points 2 days ago

I bet there are people who committed suicide after their Tamagotchi died. Jumping into the 'AI bad' narrative because of individual incidents like this is moronic. If you give a pillow to a million people, a few are going to suffocate on it. This is what happens when you scale something up enough, and it proves absolutely nothing.

The same logic applies to self-driving vehicles. We’ll likely never reach a point where accidents stop happening entirely. Even if we replaced every human-driven vehicle with a self-driving one that’s 10 times safer than a human, we’d still see 8 people dying because of them every day in the US alone. Imagine posting articles about those incidents and complaining they’re not 100% safe. What’s the alternative? Going back to human drivers and 80 deaths a day?
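To put rough numbers on that (a back-of-the-envelope sketch; the 80-deaths-a-day baseline and the 10x safety factor are hypotheticals from this paragraph, not real statistics):

```python
# Back-of-the-envelope check of the scaling argument above.
# Both inputs are hypotheticals, not real statistics.
human_deaths_per_day = 80   # assumed US daily deaths from human-driven vehicles
safety_factor = 10          # hypothetical: self-driving is 10x safer

sdv_deaths_per_day = human_deaths_per_day / safety_factor
lives_saved_per_year = (human_deaths_per_day - sdv_deaths_per_day) * 365

print(f"Self-driving deaths per day: {sdv_deaths_per_day:.0f}")  # -> 8
print(f"Lives saved per year: {lives_saved_per_year:,.0f}")      # -> 26,280
```

Even at 10x safer, there would still be a steady stream of incidents to write scary articles about.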

Yes, we should strive to improve. Yes, we should try to fix the issues that can be fixed. No, I'm not saying 'who cares', and so on with the strawmen I'm going to receive for this. All I'm saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

[-] roguetrick@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

Does your Tamagotchi encourage you to commit suicide so you can join it, demand to be the only important thing in your life, and sext you? If a human programmer did these things, they would be liable both criminally and civilly. Just being AI doesn't give it a free pass.

[-] babybus@sh.itjust.works 17 points 1 day ago* (last edited 1 day ago)

A chatbot acts like a human, and it's also very supportive, polite, and courteous. It doesn't get angry or judge you. This can affect one's mind in a way that the other things you've mentioned, like a Tamagotchi, a pillow, or a self-driving car, can't. We simply can't compare AI to those things. Adults fall for this, let alone teenagers, who are fueled by extreme levels of hormones.

[-] dragonfucker@lemmy.nz 3 points 1 day ago

We simply can’t compare AI to these things.

You just did. Comparing means analysing differences. You pointed out the differences between the two, which is comparing.

[-] babybus@sh.itjust.works 1 points 1 day ago

Thank you for your invaluable contribution to this conversation.

[-] toiletobserver@lemmy.world 35 points 2 days ago

No thanks, i just want to make out with my Marilyn Monrobot

[-] Gammelfisch@lemmy.world 1 points 1 day ago

Whiskey Tango Foxtrot... The court should laugh this lawsuit out of the courthouse.

[-] ravhall@discuss.online 14 points 2 days ago

A Florida mom

It’s always Florida.
