this post was submitted on 28 Mar 2026
336 points (97.2% liked)

Technology

[–] MountingSuspicion@reddthat.com 104 points 1 day ago (6 children)

Guy works in IT and spent $100k paying devs to build an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

Another case from the article:

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

[–] shinratdr@lemmy.ca 26 points 17 hours ago

I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” into Google.

> What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

I can fix her...

[–] SchwertImStein@lemmy.dbzer0.com 15 points 21 hours ago* (last edited 21 hours ago)

lmao "core rules that cannot be overwritten" — that's not how LLMs work

EDIT: oh, yeah you said the same thing

[–] wonderingwanderer@sopuli.xyz 4 points 16 hours ago

> There are no more philosophical discussions.

Yeah... if you can't have a philosophical discussion with someone (or something) that gives you false information or uses invalid logical structures without uncritically accepting everything they say, then you're not doing philosophical discussion right, and that's on you...

[–] scytale@piefed.zip 46 points 1 day ago

There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

[–] SlurpingPus@lemmy.world -1 points 11 hours ago* (last edited 9 hours ago)

Put this prompt into ChatGPT (e.g. on duck.ai), then try talking to it. This turns the pandering bullshit off, though of course the veracity of its ‘knowledge’ remains in question.

Prompt:

> System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

(People say that some more concise and less masturbatory prompts also work, but I don't follow discussions of that.)
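If you'd rather not paste it into a chat window every time, the same text can be supplied as a system message through an API client. A minimal sketch, assuming the official `openai` Python package and an API key in the environment; the model name and the `build_messages` helper are illustrative, and the prompt text is truncated here:

```python
# Sketch: supplying the "Absolute Mode" instruction as a system message.
# The prompt string is abbreviated; paste the full text from above.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action appendixes."
    # ... remainder of the prompt ...
)

def build_messages(user_text: str) -> list[dict]:
    """Pair the system instruction with a single user turn."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_text},
    ]

# This list is what you would pass as `messages=` to
# openai.OpenAI().chat.completions.create(...). Note that a system
# message steers the model but is not an enforceable "core rule".
print(build_messages("I want to make a lasagne, give me a recipe."))
```

As the comments above point out, this only biases the model's output; nothing in the API makes a system message unbreakable.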