This post was submitted on 20 Mar 2025 · 104 points (100.0% liked)

Technology

top 26 comments
[–] OsrsNeedsF2P@lemmy.ml 37 points 2 months ago (1 children)

About 3 percent of students in the study had positive mental health outcomes, reporting that talking to the chatbot "halted their suicidal ideation." But researchers also found "there are some cases where their use is either negligible or might actually contribute to suicidal ideation."

This is referring to a bot designed to help people struggling with mental health, and that's actually a big deal. That number is way too low.

[–] GammaGames@beehaw.org 26 points 2 months ago (1 children)

“hey, I know you feel like killing yourself, but if it happens then we’ll just replace you with a shitty bot” probably isn’t as helpful as they thought it would be. It’s violating and ghoulish.

[–] OsrsNeedsF2P@lemmy.ml 11 points 2 months ago* (last edited 2 months ago) (2 children)

I hate this attitude of "well, if you can't get a professional therapist, figure out how to get one anyway." There needs to be an option for people who either can't afford or can't access a therapist. I would have loved for AI to fill that gap. I understand it won't be as good, but in many regions the wait-list for therapy is far too long, and something is better than nothing.

[–] TehPers@beehaw.org 11 points 2 months ago (3 children)

Someone close to me gave up on the hotlines in the US and now just uses ChatGPT. It's no therapist, but at least it'll hold a conversation. If only the hotlines here weren't so absurdly understaffed.

[–] Powderhorn@beehaw.org 7 points 2 months ago

I've given up on crisis lines. Their whole premise seems to be "get back to being comfortable with the oppressive system, you little bitch."

[–] Megaman_EXE@beehaw.org 4 points 2 months ago (1 children)

I've used one called Pi, which I'm assuming is some kind of offshoot of ChatGPT or something.

You don't have to sign up or anything (for now) which is cool. But I assume they harvest all our data and information.

I tested once to see if I could break it, and from my brief tests it seemed to never break character or tell me anything bad or negative, which I thought was interesting (and good!).

[–] Powderhorn@beehaw.org 4 points 2 months ago (1 children)

I actually used Pi as my intro to generative LLMs. It was... I guess not encouraging self-harm, but so fucking irritating that it made me want to. Always with the irrelevant supportive words that I guess work if you're a teen?

[–] Megaman_EXE@beehaw.org 1 points 2 months ago

Lol yes, that was the one downside I was going to mention. I wasn't sure if it was unique to my situation, but I found it would lead me down a logical path. It would ask me if I had tried various solutions.

Eventually, I would hit a point where it didn't know where to go any further, and it would land on "here are some things you can do," but those options were things I was already trying and failing with.

So that was fun. In a way, it was great at confirming that I had thought of all the logical options.

[–] Alice@beehaw.org 1 points 2 months ago

I tried AI once but it just kept telling me to call the hotlines. Useless.

[–] wizardbeard@lemmy.dbzer0.com 9 points 2 months ago

I would have loved AI to fill that need as well, but it's not an adequate tool for the job.

[–] Powderhorn@beehaw.org 19 points 2 months ago (1 children)

Imagine a 3% success rate being acceptable in any situation. That tends to get you fired.

[–] sqgl@beehaw.org 5 points 2 months ago* (last edited 2 months ago) (1 children)

3% success vs. what? 6% sent over the edge? 10%? 20%?

If the journalist asked for a specific figure and the researchers evaded the question, that should be stated in the article.

[–] Powderhorn@beehaw.org 6 points 2 months ago (2 children)

I don't much like that take. Ars commits excellent journalism.

From the story:

About 3 percent of students in the study had positive mental health outcomes, reporting that talking to the chatbot "halted their suicidal ideation." But researchers also found "there are some cases where their use is either negligible or might actually contribute to suicidal ideation."

[–] sqgl@beehaw.org 3 points 2 months ago (1 children)

I don't think they contacted the researchers, and the linked study doesn't seem to give the answer (I spent a few minutes looking).

[–] Powderhorn@beehaw.org 2 points 2 months ago (1 children)

I generally don't go about doing research for free.

[–] sqgl@beehaw.org 2 points 2 months ago (1 children)

Ars offers its articles for free while most publications have a paywall, so I imagine funding isn't as generous as it would have been 30 years ago, when such publications were in magazine format.

[–] Powderhorn@beehaw.org 3 points 2 months ago (1 children)

Ars is actually my only paid subscription. I didn't need to subscribe, but I wanted to support their journalism.

[–] sqgl@beehaw.org 1 points 2 months ago

I mistakenly thought you were the actual journalist. But I should always presume the journalist will see my comments and therefore not be so harsh, especially when I'm reading for free.

FWIW, I also pay to subscribe to an (Australian) online newspaper that is free to read, just like you do. The difference is that I rarely read it, since I'm largely on top of those topics already. I'm just glad it exists for others, because it's well researched and presented.

[–] prole@lemmy.blahaj.zone 18 points 2 months ago (2 children)

Wasn't this an episode of Black Mirror back when it was still really good?

[–] halykthered@lemmy.ml 4 points 2 months ago

Yeah, I know someone who won't watch that episode again because of how unsettling it is. Luckily for them, it's slowly becoming reality.

[–] somegeek@programming.dev 3 points 2 months ago (1 children)

I watched until season 6 and all of it was really amazing. Is S7 bad?

[–] prole@lemmy.blahaj.zone 2 points 2 months ago* (last edited 2 months ago)

Nah, not particularly... If you liked it up until then, you will probably like it.

It's not that I think it got bad or anything, but I think there was a noticeable drop in quality from season 3 to season 4 (the second Netflix season).

The first two seasons, when it was still on Channel 4 in the UK, were just so fucking good. Only a few episodes each, but man. And the Christmas episode with Jon Hamm, goddamn. So fucking good. Some of the best sci-fi ever put to film imo.

Then Netflix bought it. Season 3 was good, it had some bangers (I imagine Charlie Brooker had some of the plots ready to go already). Then... I don't know, maybe it's because they were pushed to write like 3x more episodes per season? The quality suffered.

The show is still solid, and I will watch the new season for sure. But I don't know, it's just not the same as it was.

[–] jlow@beehaw.org 16 points 2 months ago

Oh hey, reality being even worse than Black Mirror again 😿

This will become more common. I've already seen dead people's likenesses used in scams on the internet.