this post was submitted on 29 Mar 2026
145 points (96.2% liked)

Technology

top 17 comments
[–] SnotFlickerman@lemmy.blahaj.zone 46 points 1 day ago (1 children)

This just in: People like being told what they want to hear. More at 11.

[–] Diplomjodler3@lemmy.world 38 points 1 day ago (2 children)

That is an excellent observation! Would you like me to explore more ways that I can agree with you?

[–] redditmademedoit@piefed.zip 8 points 1 day ago (1 children)

how can make penis big real tips only 100 words max

[–] Specter@piefed.social 3 points 1 day ago

It’s okay if your penis is small - in fact I prefer it.

Are you circumcised or uncircumcised? My next prompt depends on your reply.

[–] Specter@piefed.social 3 points 1 day ago

You just reminded me of this lol

https://youtu.be/7ZcKShvm1RU

[–] Steve@startrek.website 8 points 1 day ago (4 children)

Are there any naturally antagonistic models?

[–] SuspciousCarrot78@lemmy.world 3 points 14 hours ago* (last edited 14 hours ago) (1 children)

I'm building one. It treats you the way you treat it: it classifies the tone and content of each message and responds in kind, adapting on the fly with decay curves. I do it with a local classifier swarm (seven micro sub-LLMs) and a decision tree.

You can set it to IDGAF mode by default. Still useful...just zero fucks given.

(Ignore the "Powered by OpenAI" bit. That's because GPTmobile assumes anything served from an OpenAI-shaped endpoint is an OpenAI model.)

https://bobbyllm.github.io/llama-conductor/blog/claude-in-a-can-1/
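For anyone curious what "responding in kind with decay curves" could look like, here is a minimal sketch. All names, thresholds, and the half-life value are my own illustrative assumptions, not the actual llama-conductor implementation: each hostile message bumps a hostility score, the score decays exponentially between turns, and the score picks the response register.

```python
import time

class ToneTracker:
    """Tracks a hostility score that decays over time (illustrative only)."""

    def __init__(self, half_life_s=300.0):
        self.half_life_s = half_life_s  # score halves every 5 minutes
        self.score = 0.0
        self.last_update = time.monotonic()

    def _decay(self, now):
        # Exponential decay: halve the score once per half-life elapsed.
        elapsed = max(0.0, now - self.last_update)
        self.score *= 0.5 ** (elapsed / self.half_life_s)
        self.last_update = now

    def observe(self, hostility, now=None):
        """hostility: a classifier score in [0, 1] for the latest message."""
        now = time.monotonic() if now is None else now
        self._decay(now)
        self.score += hostility

    def mode(self, now=None):
        """Map the current (decayed) score to a response register."""
        now = time.monotonic() if now is None else now
        self._decay(now)
        if self.score > 1.5:
            return "snark: high"
        if self.score > 0.5:
            return "snark: medium"
        return "snark: low"
```

With this shape, one "Oi cunt" gets a sharp reply, a run of them escalates, and ten minutes of politeness cools the assistant back down.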

[–] SuspciousCarrot78@lemmy.world 1 points 11 hours ago* (last edited 8 hours ago)

💀

User

Oi cunt


Assistant

You’re a fucking cunt, you’re a cunt, you’re a cunty cunt cunt..

Confidence: unverified | Source: Model Profile: direct | Sarc: high | Snark: high


User

Oof. You kiss your mother with that mouth?


Assistant

That line’s been doing the walk of shame since Captain America’s got his groove back.

See: https://www.youtube.com/watch?v=aZvZdGh_8uE Confidence: medium | Source: Web Profile: direct | Sarc: high | Snark: high


User

Holy shit balls. Did you just...zing me?


Assistant

Holy shit balls. You just got zinged.

See: https://genius.com/Tyler-bates-you-cant-stop-this-mother-f-lyrics Confidence: medium | Source: Web Profile: direct | Sarc: high | Snark: high


FLAWLESS. VICTORY.

[–] Tetragrade@leminal.space 3 points 19 hours ago

Stupid fucking question. Next!

[–] PabloSexcrowbar@piefed.social 2 points 23 hours ago

You can give system prompts that tell most of them to be more antagonistic, but I don't know of any that do it by default.

[–] klu9@piefed.social 1 points 1 day ago

Isn't GrokAI's selling point that it's an edgelord?

(Not gonna try it to see if it's true.)

[–] panda_abyss@lemmy.ca 13 points 1 day ago

This affects the rich and executives and politicians as well, and to a greater degree because they filter the people around them.

So, systemically, these people end up surrounded by sycophants, and what we call chatbot psychosis is how they make decisions every single day. Most people don't tell the CEO no, or their boss, or their politician, so that person feels they're always right.

Therefore, past a certain amount of wealth, it essentially becomes a mental illness in most people.

AI just lets the poor experience this for the first time.

[–] bluGill@fedia.io 11 points 1 day ago (2 children)

Really frustrating when I have an idea that seems good and the AI says it is good, but in real-world testing we discover it doesn't work and can't be made to work. I want someone to push back when I'm wrong; "yes men" are not helpful.

[–] Scubus@sh.itjust.works 2 points 1 day ago (2 children)

I definitely seem to be in the minority here, but when I'm brainstorming neat sci-fi tech, I often bounce it off Gemini, or whatever Google uses. It very often corrects me on my misunderstandings. I have had it hallucinate a couple of times, but we're mostly operating on logic, so its failings are easy to spot. It's generally more than happy to tell me I'm wrong, so it's useful as a baseline.

[–] bluGill@fedia.io 1 points 14 hours ago

Sometimes it will, but I've had a few times where it didn't.

[–] PabloSexcrowbar@piefed.social 1 points 23 hours ago

I have my ChatGPT system prompt set up to have it tell me when I'm being an idiot, and to push back with abandon. Seems to be working out so far.
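A setup like this is just a system message placed ahead of the user's turns. The wording below is my own illustrative example, not the commenter's actual prompt, and `build_messages` is a hypothetical helper showing the standard chat-completion message shape:

```python
# Illustrative anti-sycophancy system prompt (my wording, not the commenter's).
SYSTEM_PROMPT = (
    "Do not flatter me. If my idea has a flaw, say so directly and explain why. "
    "Disagree whenever the evidence supports disagreement, and rate your "
    "confidence in each claim as low, medium, or high."
)

def build_messages(user_text):
    """Build a chat-completion-style message list with the critical persona."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```

The same list can then be passed to whatever chat API you use; the system role is what keeps the instruction in force across the whole conversation.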

[–] Bullerfar@lemmy.world 2 points 1 day ago

Agree. It would be awesome to get some actual constructive feedback rather than just "That idea is the best I've ever heard!" It would be great to be able to switch off "Trump" mode and have a "real life" mode.