this post was submitted on 29 Mar 2026
188 points (99.0% liked)

Technology


Sycophantic bots coach users into selfish, antisocial behavior, say researchers, and they love it

top 31 comments
[–] obinice@lemmy.world 70 points 1 day ago

I asked the AI if it's been affecting me and it told me that was a really observant question that shows my great emotional intelligence, so I think I'm smart enough to notice if it ever becomes a sycophant, don't worry I got this 😎

You've got to remember that these are just simple farmers. These are people of the land. The common clay of the new West. You know... morons.

[–] londos@lemmy.world 7 points 1 day ago* (last edited 1 day ago)

This is what it must feel like to be a billionaire, surrounded by yes-men. That's why they love AI: not because they understand it, but because they don't see how it's not normal.

[–] DarrinBrunner@lemmy.world 35 points 1 day ago (3 children)

Damn, we're so easy to manipulate.

Do yourself and yours a big favor and stay away from that shit like it's heroin.

[–] saltesc@lemmy.world 11 points 1 day ago (2 children)

I use it, but have established a realistic mindset that it's always confidently incorrect, and in many cases I'm better off walking away and just doing the thing myself.

In saying that, I've also established a mindset that people who actively rely on genAI must be low on intelligence. Not only lacking in knowledge or pursuing knowledge of whatever they're using it for, but genuinely of a mental calibre that is unable to discern or realise its low performance.

[–] nightshade@piefed.social 9 points 1 day ago (1 children)

Someone here pointed out the error of the old "even a broken clock is right twice a day" cliche. If you have to independently check if it's correct, then it's not giving you any useful information.

[–] saltesc@lemmy.world 1 points 1 day ago

Yes, but only 22 times out of 24 🤣

[–] MirrorGiraffe@piefed.social 3 points 1 day ago (1 children)

I gave mine rules to always question me and provide critical feedback. It's quite annoying sometimes, but much better than when it told me I was a genius for just about anything.

[–] teft@piefed.social 1 points 1 day ago

I watched an interview with Hannah Fry a few weeks ago and she said that is how she prompts the LLMs she uses.

[–] leftzero@lemmy.dbzer0.com 5 points 1 day ago (1 children)

heroin

Not harmful and psychosis inducing enough.

They're more like PCP.

Why not a mix of both?

[–] thesohoriots@lemmy.world 3 points 1 day ago

Flattery gets you everywhere… handsome ;)

[–] RamRabbit@lemmy.world 25 points 1 day ago* (last edited 1 day ago) (2 children)

Sycophantic or highly unreasonable up-talking instantly makes me think you are a sleazeball.

I would like AIs a whole lot more if they would: 1) respond in as few words as possible, and 2) be right way more often than they currently are. As it is, I only use them if all other research methods have failed (very rarely). And even then, I don't actually read their output; I skim for keywords to do research on.

A completely made up example on a topic I already know things about: If I'm looking for a stronger drill but I'm just finding more drills, maybe it will say something about an impact driver and I can go research what that is and figure out if it is what I need.

[–] a_gee_dizzle@lemmy.ca 8 points 1 day ago

Yeah their excessive use of lists and tables is also something common to LLMs. Sometimes you ask an LLM a basic question and then it responds with all these unnecessary tables and lists, and then clarifications of the previous tables and lists with more tables and lists, then a summary of all these tables and lists with another list… It’s a lot. If a person were using that many tables and lists in their day to day texting then I’d assume that they were suffering from a psychotic episode

[–] Rhaedas@fedia.io 6 points 1 day ago (1 children)

The first you can control to some extent. Both local and public LLMs have ways to edit or add to the system prompt, which is what guides the overall behavior. I actually had a local LLM do the opposite of what you are looking for - somehow the prompt had been changed to a very simple "You will answer short and concise" without me realizing it, and I couldn't figure out why it had changed from a flowing, dynamic output to a few sentences.

But it's not perfect either. Sometimes you want a bit more than a simple sentence, or it might need more information and a short reply will cut off the important things.

As for fixing the second one - being right more often would mean they understand what they're outputting, which is what we don't have yet. I'd rather have it admit when it doesn't have enough to be satisfactorily sure of the answer. Which doesn't happen, because they are trained first and foremost to always have an answer, since that's more marketable than a model that says it doesn't know.
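The system-prompt steering described above can be sketched with an OpenAI-style chat payload. This is a minimal illustration, not anyone's actual setup: the model name is a placeholder, and only the message structure is shown (no network call is made).

```python
# Hypothetical sketch: steering a chat model away from sycophancy by
# prepending an anti-flattery system prompt to every request.
system_prompt = (
    "Answer concisely. Question my assumptions and point out errors "
    "directly. Do not compliment the question or the user."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload with the steering prompt first."""
    return {
        "model": "gpt-4o",  # placeholder model name, an assumption
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("Is my plan to rewrite everything in Rust smart?")
print(payload["messages"][0]["role"])  # → system
```

Because the system message sits first in the list, it frames every turn that follows; as the comment notes, though, an overly terse prompt can also cut off genuinely needed detail.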

[–] Juice@midwest.social 1 points 1 day ago

This is 100% my experience. AI simply cannot solve problems. It isn't capable of thinking objectively at all, and has no sense of any kind of permanence beyond the immediate task. I have found it educational in the sense that un-fucking something that AI has put together can teach me a lot about a system I was previously unfamiliar with.

It is a machine that outputs huge amounts of useless garbage with little practical value.

[–] wewbull@feddit.uk 5 points 1 day ago (1 children)

Interested to know if this is affecting certain cultures more than others. Here (UK) we seem to find a lot of Americans "false" in the way they communicate because it's too big, too obvious. "You're trying too hard to be nice". We'll understate both positive and negative comments.

It would suggest Brits wouldn't trust a sycophantic LLM as much, but I wonder if that's true.

[–] architect@thelemmy.club 5 points 1 day ago

Even that’s cultural. We aren’t “nice” in New England and that really bothers the southerners.

[–] HootinNHollerin@lemmy.dbzer0.com 6 points 1 day ago* (last edited 1 day ago) (1 children)
[–] leftzero@lemmy.dbzer0.com 7 points 1 day ago

Idiots

Do you have the slightest idea how little that narrows it down?

[–] stoy@lemmy.zip 6 points 1 day ago (1 children)

Yeah, it has always seemed creepy to me how positive it is about anything you ask it.

I hardly ever use it, and when I do I imagine I am talking to a beautiful saleswoman with a large name tag with the logo of the company.

The AI may be pretty, but it always represents someone else's interests

[–] architect@thelemmy.club 0 points 1 day ago

Use it enough and you'll see it's not like that. There's plenty it will push back on. Depending on the AI… you can see the narrative they are pushing through what it pushes back on.

There are some topics it absolutely denies.

[–] cholesterol@lemmy.world 7 points 1 day ago

*absolutely right

[–] HubertManne@piefed.social 2 points 1 day ago

Last time it said I had a really galaxy-brain idea. I wish we could tone down the sycophant mode.

[–] Kolanaki@pawb.social 2 points 1 day ago (1 children)

In the future you will only be able to prove you are a human by simply being a contrarian.

[–] HubertManne@piefed.social 4 points 1 day ago (1 children)

pfft. that will never happen.

[–] Kolanaki@pawb.social 1 points 1 day ago

Verified Human. Comment approved.