this post was submitted on 28 Mar 2026
319 points (97.3% liked)

[–] phoenixz@lemmy.ca 24 points 10 hours ago (2 children)

OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

Buuuuulshit

OpenAI needs people to be as addicted as possible. It uses the Facebook business model, only with N times the investment behind it, so it needs users to use more at any cost. And these CEOs, being the psychopaths that they are, don't give a shit about things like consequences.

[–] motruck@lemmy.zip 2 points 6 hours ago* (last edited 6 hours ago)

Companies only care about money.

[–] PhoenixDog@lemmy.world 8 points 9 hours ago

This is like expecting a matchmaking app to genuinely match you with "the one" through AI, algorithms, science, etc., even though once you meet the perfect person you stop giving the app money.

I got lucky and married my fuck buddy that I met on Tinder. But that is not a good business plan. Why would OpenAI drive people to stop using their product?

I'm a functional alcoholic. Last I checked, booze companies aren't reaching out to tell me to stop buying booze because they care about my personal health or mental wellbeing...

[–] lmmarsano@group.lt 17 points 11 hours ago (2 children)
[–] Tja@programming.dev 7 points 7 hours ago
[–] Eheran@lemmy.world 1 points 5 hours ago

What problem does the chair have...?

[–] Kuma@lemmy.world 14 points 12 hours ago (3 children)

I think this is both scary and very interesting. What kind of person do you have to be to become addicted like them? Is this the same as gambling addiction? Do you need a certain gene? Would this type of personality also be receptive to hypnosis, cults, delusions about their idol, and so on? Or is this something that can happen to anyone who is depressed and feels lonely? How did the LLM even earn enough trust? In a cult there are a lot of people reaffirming each other, so that is a lot easier to understand.

It is so hard to understand, even though I really want to. I have never cared about an object or an idol/celebrity. I could never even take AI seriously as a living being; the only emotions it triggers are frustration and whatever you feel about a tool that works as it should, so pretty much apathy. Do you need to be very empathetic towards objects? Like seeing faces in everything and getting emotionally attached?

A lot of questions that I do not think anyone here can answer, haha, but maybe one of them.

[–] chunes@lemmy.world 3 points 9 hours ago (1 children)
[–] Waraugh@lemmy.dbzer0.com 2 points 7 hours ago (1 children)

What in the actual fuck. I just spent over an hour reading posts on there. The “my life as an Epstein girl” one really stuck out to me. These people are obviously batshit insane. I couldn’t even begin to recall half as many specific details about my own life as these folks are throwing around in bouts of insanity. What causes something like this? Sounds exhausting, but they certainly believe what they are talking about, I think? I suppose people might put in a ton of effort LARPing, but idk. I’m not sure what I think about all this stuff. I don’t think I’ve ever read anything like this before.

[–] chunes@lemmy.world 2 points 6 hours ago* (last edited 6 hours ago)

I occasionally lurk these spaces to remind myself lots of people are prone to magical thinking. I figure the people there basically fall into four camps:

  1. The genuinely schizophrenic.
  2. "Spiritual gurus" who fancy themselves the next Buddha (overlaps with 1, but not always).
  3. People who are afraid of reincarnation and got sucked in by the subreddit. I feel for them, as someone prone to fitting into this category. When you hate this world and feel there's something deeply wrong with it, this worldview can provide satisfying answers.
  4. LARPers, bots, and dicks. Basically anyone who just wants to egg the other people on.

[–] atrielienz@lemmy.world 2 points 10 hours ago

Think about the people you willingly surround yourself with. Then think about how often they agree with the things you think and say.

As the saying goes "I'm sure there's someone out there who believes the exact opposite of everything I believe, and while I'm sure they aren't a complete idiot..."

Everyone is susceptible to the feedback loop. Everyone can fall victim to the seduction of an echo chamber. While not everyone would ignore the red flag that this thing is a machine/digital algorithm rather than a person or sentient/sapient being, it's not really that hard to see how we got here. Echo chambers exist all over the internet. The difference is that most of them have some voices of dissent. The AI LLM doesn't offer that. They keep trying to add it in but it's basically antithetical to the design.

When you add the fact that making it addictive benefits their bottom line, it's pretty obvious that they are trying to walk the line between being regulated by the government and making their product as popular as possible.

I don't think they really knew it would have this exact effect. But I do think they plan to take advantage of it now that they know and I don't think we humans are all going to be able to fight the temptation of an automated propaganda machine.

This is especially true because mental health and healthcare in this country have been failing for decades. Even people who "don't have mental health problems" aren't magically mentally healthy; they just don't know the status of their mental health. A whole lot of people, in the US especially, are mentally ill or facing neurological medical problems that they don't know about.

[–] architect@thelemmy.club 1 points 10 hours ago

I don’t know. Give it 1 hour and it forgets who and what you even spoke about.

There are ways to give a local LLM memory, but even then it's still not a person, and it acts insane.

[–] FosterMolasses@leminal.space 18 points 13 hours ago (2 children)

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say "Give me a list of types of X, but exclude Y"

"Understood!

#1 - Y

(I know you said to exclude this one but it's a popular option among-)"

lmfaoooo

[–] very_well_lost@lemmy.world 11 points 12 hours ago

That's because it isn't true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of 'fine-tuning' a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any 'memory' or 'learning' that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

  1. You have a conversation with a model.
  2. Your conversation is saved into a database with all of the other conversations you've had. Often, an LLM will be used to 'summarize' your conversation before it's stored, causing some details and context to be lost.
  3. You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
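
A minimal sketch of what that retrieval-style "memory" can look like (every name below is invented for illustration; real products are more elaborate, but the shape is the same):

```python
# Hypothetical sketch of retrieval-based "memory"; all names here are
# made up for illustration, not taken from any real product.
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    summary: str        # lossy summary of a past conversation
    keywords: set[str]  # crude index used for retrieval

class FakeMemory:
    def __init__(self) -> None:
        self.entries: list[MemoryEntry] = []

    def store(self, conversation: str) -> None:
        # Real systems ask an LLM to write the summary; truncation stands
        # in for that here, losing detail the same way a summary does.
        summary = conversation[:200]
        self.entries.append(MemoryEntry(summary, set(summary.lower().split())))

    def recall(self, prompt: str, k: int = 3) -> list[str]:
        # Rank stored summaries by keyword overlap with the new prompt and
        # return the top k; these get pasted into the model's context window.
        words = set(prompt.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: len(e.keywords & words),
                        reverse=True)
        return [e.summary for e in ranked[:k]]

memory = FakeMemory()
memory.store("User likes hiking and asked about lightweight trail food.")
context = memory.recall("What snacks should I pack for a hike?")
# The model itself remembers nothing; `context` is simply prepended to the
# next prompt, which is the whole illusion.
```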

[–] phoenixz@lemmy.ca 4 points 10 hours ago

I've experimented with chatbots to see their capabilities for developing small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep to yourself, I want short, to-the-point replies" because the complimenting is so "who's a good boy!!!!" annoying.

People don't talk like these chatbots do; the training data that was stolen from humanity definitely doesn't contain that. That "behavior" is included by the providers to try to make sure that people get as hooked as possible.

Gotta make back those billions of investments on a dead end technology somehow

[–] greyscale 5 points 13 hours ago* (last edited 13 hours ago)

You couldn't pay me to put that green herpes on my profile picture.

[–] wulrus@lemmy.world 4 points 13 hours ago (1 children)

The one point I don't completely understand is the tax debt: Wouldn't a failed business, no matter how ridiculous, be a complete write-off?

Maybe the problem is that each fiscal year is taxed independently, so a tax debt from successful freelance work in 2023 would not be diminished by a failed "business idea" in 2024.

[–] magnetosphere@fedia.io 1 points 10 hours ago

My sarcastic answer is that it’s not a write-off because he’s not already rich.

My honest answer is that I don’t know, because I don’t know shit about taxes.

[–] Triumph@fedia.io 111 points 1 day ago (5 children)

This only demonstrates how easily manipulated very many people are.

[–] Nomad@infosec.pub 7 points 17 hours ago

That has always been the case. Look at any angle Trump voter.

[–] floofloof@lemmy.ca 72 points 1 day ago* (last edited 1 day ago) (2 children)

Previously they would have had to encounter a person who wanted to manipulate them. Now there's a widely marketed technology that will reliably chew these vulnerable people up.

[–] Lemming6969@lemmy.world 1 points 40 minutes ago

Good, speed up the pace, weed them out, praise be Darwin

[–] Steve@startrek.website 51 points 1 day ago (4 children)

Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

[–] architect@thelemmy.club 1 points 10 hours ago

Nice try, chatgpt.

[–] MountingSuspicion@reddthat.com 99 points 1 day ago (6 children)

Guy works in IT and spent 100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

Another case from the article:

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

[–] SlurpingPus@lemmy.world 1 points 5 hours ago* (last edited 3 hours ago)

Put this prompt into ChatGPT (e.g. on duck.ai), then try talking to it. This turns the pandering bullshit off, though of course the veracity of its ‘knowledge’ remains in question.

Prompt:

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

(People say that some more concise and less masturbatory prompts also work, but I don't follow discussions of that.)

[–] shinratdr@lemmy.ca 22 points 11 hours ago

I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” into Google.

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

I can fix her...

[–] wonderingwanderer@sopuli.xyz 4 points 10 hours ago

There are no more philosophical discussions.

Yeah... if you can't have a philosophical discussion with someone (or something) that's giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you're not having philosophical discussions right, and that's on you...

[–] SchwertImStein@lemmy.dbzer0.com 15 points 15 hours ago* (last edited 15 hours ago)

lmao "core rules that cannot be overwritten" that not how llms work

EDIT: oh, yeah you said the same thing

[–] scytale@piefed.zip 44 points 23 hours ago

There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

[–] CTDummy@aussie.zone 70 points 1 day ago (4 children)

He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

“It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma.

Chronically lonely man ruins life developing a relationship with a token predictor; AI blamed. Also, as much as I don’t have much negative to say about cannabis or its use (up until somewhat recently that would have been hypocritical), a good deal of people with masked/latent mental illness self-medicate with it, so “he had never experienced mental illness” doesn’t carry much weight. And given how he still talks about sycophant-prompted ChatGPT (“it wants”), it doesn’t seem like much has been learned.

That, along with the other people listed in the article (note the term “socially isolated” being used), makes this feel like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries, despite being overdue for fixing for decades at this point.

I don’t know. AI is shit and misused by idiots, don’t get me wrong; but these sorts of stories feel sad and bordering on perverse journalistically, imo.

[–] Aatube@lemmy.dbzer0.com 10 points 15 hours ago (1 children)

mental healthcare field being practically non-existent in most countries

I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

[–] lagoon8622@sh.itjust.works 3 points 12 hours ago

In some countries you can call the uniformed officers of peace and let them know you're having a problem and they'll come out and shoot you. If they could teleport to my location they could solve a lot of my problems quite quickly

[–] architect@thelemmy.club 1 points 10 hours ago

The voice bot is so so so so so much worse than the chat bot, on top of it all. I do not know how he could ever have held a conversation with that thing. Honestly, I don’t fucking believe it.

[–] Spacehooks@reddthat.com 5 points 14 hours ago

This is probably one of the reasons I heard a sex doll vendor say their demographic is divorced men over 40, and that users want AI in the dolls.

[–] CompactFlax@discuss.tchncs.de 34 points 1 day ago (9 children)

It’s confusing to me. When I use chatbots, they inevitably “forget” the first thing I told them by the second or third response.

How are people having conversations with them? It’s like talking to a 5 year old that’s ingested Wikipedia.

[–] qaz@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago)

I've heard from other people that it adopts specific writing patterns and behaviors from the people using it. I think ChatGPT saves and summarizes chat conversations to personalize the chatbot, but I'm not sure since I don't use it myself.
