this post was submitted on 17 Mar 2025
582 points (96.9% liked)

Technology


Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
50 comments
[–] rottingleaf@lemmy.world 15 points 3 days ago

That's called a self-proving statement.

[–] Kolanaki@pawb.social 34 points 4 days ago* (last edited 4 days ago)

They're right. AI is smarter than them.

[–] singletona@lemmy.world 40 points 4 days ago

Am American.

....this is not the flex that the article writer seems to think it is.

[–] Naevermix@lemmy.world 14 points 3 days ago (1 children)

Hallucination comes off as confidence. Very human-like behavior, tbh.

[–] forrcaho@lemmy.world 2 points 2 days ago

As far as I can tell from the article, the definition of "smarter" was left to the respondents, and "answers as if it knows many things that I don't know" is certainly a reasonable definition -- even if you understand that, technically speaking, an LLM doesn't know anything.

As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty "smart":

what's a good word to describe the people in a poll who answer the questions? I didn't want to use "subjects" because that could get confused with the topics covered in the poll.

"Respondents" is a good choice. It clearly refers to the people answering the questions without ambiguity.

The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.

[–] GoodOleAmerika@lemmy.world 6 points 3 days ago

"US".... Even LLM won't vote for Trump

[–] blady_blah@lemmy.world 15 points 3 days ago (2 children)

You say this like this is wrong.

Think of a question that you would ask an average person, then think of what the LLM would respond with. The vast majority of the time, the LLM would be more correct than most people.

[–] LifeInMultipleChoice@lemmy.dbzer0.com 17 points 3 days ago (1 children)

A good example is the post on here about tax brackets. Far more Republicans didn't know how tax brackets worked than Democrats. But every mainstream language model would have gotten the answer right.
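
For anyone curious, here's a minimal sketch of the marginal-bracket arithmetic that trips people up (the rates and thresholds below are invented for illustration, not real tax figures):

```python
# A minimal sketch of marginal tax brackets. The rates and thresholds
# below are made up for illustration; they are not real IRS figures.
BRACKETS = [(0, 0.10), (10_000, 0.20), (50_000, 0.30)]  # (lower bound, rate)

def marginal_tax(income: float) -> float:
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        # Each rate applies only to the slice of income inside its bracket.
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

# Earning $60,000 does not mean paying 30% on all of it:
# 10% of the first $10k + 20% of the next $40k + 30% of the last $10k
print(f"${marginal_tax(60_000):,.2f}")  # $12,000.00
```

The point being: moving into a higher bracket only raises the rate on the income above that bracket's threshold, never on the whole paycheck.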

[–] JacksonLamb@lemmy.world 8 points 3 days ago (9 children)

Memory isn't intelligence.

[–] conditional_soup@lemm.ee 8 points 3 days ago (2 children)

This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit-generator toys.

[–] jj4211@lemmy.world 5 points 3 days ago

It's already happened at my work. People swear an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
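
For illustration, a hypothetical sketch of that failure mode (every name here is invented):

```python
# Hypothetical sketch: the LLM "remembers" a method the real client never had.
class BackendClient:
    """The API the backend team actually shipped."""
    def get_user(self, user_id: int) -> dict:
        return {"id": user_id, "name": "example"}

client = BackendClient()
print(client.get_user(42))   # fine: this method really exists
client.get_user_summary(42)  # AttributeError: the LLM made this one up
```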

[–] communism@lemmy.ml 8 points 3 days ago

Given the US adults I see on the internet, I would hazard a guess that they're right.

[–] Th4tGuyII@fedia.io 33 points 4 days ago

LLMs are made to mimic how we speak, and some can even pass the Turing test, so I'm not surprised that people who don't know better think of these LLMs as conscious in some way or another.

It's not necessarily a fault of those people; it's a fault of how LLMs are purposefully misadvertised to the masses.

[–] Arkouda@lemmy.ca 32 points 4 days ago (13 children)

"Nearly half" of US citizens are right, because about 75% of the US population is functionally or clinically illiterate.

load more comments (13 replies)
[–] Comtief@lemm.ee 17 points 4 days ago (3 children)

LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.

[–] ripcord@lemmy.world 26 points 4 days ago (1 children)

They're not even that smart.

[–] joel_feila@lemmy.world 8 points 3 days ago (1 children)

Not even that smart. There was a study recently showing that simple questions like "when was Huckleberry Finn first published?" had a 60% error rate.

[–] aesthelete@lemmy.world 13 points 4 days ago

They're right

[–] curiousaur@reddthat.com 5 points 3 days ago* (last edited 3 days ago) (1 children)

This is hard to quantify. I use them constantly throughout my work day now.

Are they smarter than me? I'm not sure. Haven't thought too much about it.

What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.

I can read error stacks coming from my code, but before I've even read the first few lines, the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is it right, optimal, and free of new bugs? Maybe 75% of the time at this point. I can coax it, iterate on the solution myself, or fix it entirely myself with the understanding of the bug that it gave me. The same bug might have taken me hours to figure out on my own.

My point is, I'm not sure how to compare smarter vs orders of magnitude faster.

[–] fyzzlefry@retrolemmy.com 5 points 3 days ago

Are you smarter than a calculator?

[–] bjoern_tantau@swg-empire.de 20 points 4 days ago

I know enough people for whom that's true.

[–] jh29a@lemmy.blahaj.zone 7 points 3 days ago

Do the other half believe it is dumber than it actually is?

[–] kipo@lemm.ee 12 points 4 days ago* (last edited 4 days ago) (1 children)

No one has asked, so I am going to ask:

What is Elon University and why should I trust them?

[–] Patch@feddit.uk 15 points 4 days ago

Ironic coincidence of the name aside, it appears to be a legit bricks-and-mortar university in the town of Elon, North Carolina.
