
Bitch if I wanted the robot, I’d ask it myself (well, I’d ask the Chinese one)! I’m asking you!

[–] Moss@hexbear.net 29 points 1 week ago (3 children)

My friend pulled out her phone to ask ChatGPT how to play a board game last night, and despite all of us yelling at her that ChatGPT doesn't know anything, she persisted. Then the dumbass LLM made up some rules because it doesn't know anything.

[–] dat_math@hexbear.net 16 points 1 week ago* (last edited 1 week ago) (2 children)

> My friend pulled out her phone to ask ChatGPT how to play a board game last night, and despite all of us yelling at her that ChatGPT doesn't know anything, she persisted. Then the dumbass LLM made up some rules because it doesn't know anything.

Do you think they took home the lesson that LLMs don't possess knowledge or reasoning?

[–] Moss@hexbear.net 4 points 1 week ago (1 children)

i fucking hope, but she didn't really pay much attention to us lol

[–] dat_math@hexbear.net 4 points 1 week ago

> she didn't really pay much attention to us

Why do people do things like this? What is the point of playing a game with your friends if you won't listen to or pay attention to them?

[–] jsomae@lemmy.ml 1 points 1 week ago (1 children)

Why would she take away that lesson? It produced a list of rules for the game that looked approximately right.

[–] dat_math@hexbear.net 3 points 1 week ago (1 children)

Presumably her friends corrected her and showed her why the "generated" rules were incorrect... at least that's what I would expect of my friends

[–] jsomae@lemmy.ml 4 points 1 week ago

I hope so. You have to be patient in circumstances like that.

[–] Waldoz53@hexbear.net 12 points 1 week ago

one of my friends did the same thing and it provided incorrect information about the game's rules lmao

[–] jsomae@lemmy.ml 5 points 1 week ago* (last edited 1 week ago) (1 children)

If you tell people that ChatGPT doesn't know anything, they will only think you're obviously wrong when it gives them apparently correct answers. You should tell people the truth -- the harm in ChatGPT is that it is generally subtly wrong in some way, and often entirely wrong, but it always looks plausibly right.

[–] THEPH0NECOMPANY@hexbear.net 4 points 1 week ago (1 children)

Yea, one of the worst aspects of AI is definitely how confidently incorrect it can be. I had this issue using DeepSeek and had to turn on the mode where you can see what it's thinking, and often it will say something like:

"I can't analyze this properly, let's assume this..." Then it confidently spits out an answer based on that assumption. At this point I feel like AI is good for 100-level CS students that don't want to do their homework, and that's about it.
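
For anyone curious what "the mode where you can see what it's thinking" looks like outside the app: a minimal sketch of reading the visible reasoning trace over DeepSeek's OpenAI-compatible API, where the deepseek-reasoner model returns its chain of thought in a reasoning_content field alongside the final answer. The API key and the question are placeholders, and field/model names may change, so check the current docs.

```python
# Minimal sketch, assuming DeepSeek's OpenAI-compatible chat endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",              # placeholder
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "How do you win at Carcassonne?"}],
)

msg = resp.choices[0].message
print("--- reasoning trace ---")
print(msg.reasoning_content)  # the "what it's thinking" part
print("--- final answer ---")
print(msg.content)
```

Printing the trace is exactly how you catch the "let's assume this..." moments the comment above describes: the wrong assumption shows up in reasoning_content before it contaminates the answer in content.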

[–] jsomae@lemmy.ml 4 points 1 week ago

Same, I just tried DeepSeek-R1 on a question I invented as an AI benchmark. (No AI has been able to answer this simple question even remotely correctly, though obviously I won't reveal the question here.) Anyway, R1 was constantly making wrong assumptions, but also constantly second-guessing itself.

I actually do think the "reasoning" approach has potential though. If LLMs can only come up with right answers half the time, then "reasoning" allows multiple attempts at a right answer. Still, results are unimpressive.
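
The arithmetic behind that "multiple attempts" intuition is easy to sketch. This toy model is not from the thread, and it assumes attempts are independent, which real LLM samples are not, so treat the numbers as an upper bound on the benefit.

```python
# Toy model: if each independent attempt is right with probability p,
# how much do k attempts help, under "at least one right" (best-of-n,
# requires a way to verify answers) vs. a strict majority vote?
from math import comb

def at_least_one_right(p: float, k: int) -> float:
    """P(at least one of k independent attempts is correct)."""
    return 1 - (1 - p) ** k

def majority_right(p: float, k: int) -> float:
    """P(a strict majority of k independent attempts is correct)."""
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

p = 0.5  # "right answers half the time", per the comment above
for k in (1, 3, 5):
    print(k, round(at_least_one_right(p, k), 3), round(majority_right(p, k), 3))
# k=3: at least one right 87.5%, majority right still 50%
```

Note the catch: at p = 0.5 a majority vote gains nothing, since voting only helps once single attempts are right more than half the time, and "at least one right" only helps if you can actually check the answers.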