this post was submitted on 11 Jun 2025
129 points (99.2% liked)

chapotraphouse

[–] joaomarrom@hexbear.net 75 points 3 days ago (3 children)

Of course it did. LLMs are terrible at any real task that involves actual reasoning and isn't doable by a stochastic, natural-sounding-text-extruding machine. Check out this study by a bunch of Apple engineers that points out this exact same thing: https://machinelearning.apple.com/research/illusion-of-thinking

[–] BodyBySisyphus@hexbear.net 49 points 3 days ago (1 children)

It seems obvious to us, but out in the untamed wilderness of LinkedIn and Medium there is a veritable flood of posts claiming that the LLMs are capable of reasoning.

[–] AnarchoAnarchist@hexbear.net 14 points 3 days ago (1 children)

I do not fear these LLMs gaining sentience.

I do fear what happens when the text regurgitating machine sounds sentient enough to convince the average person, and tech companies start selling it as such.

[–] footfaults@lemmygrad.ml 4 points 3 days ago

I do fear what happens when the text regurgitating machine sounds sentient enough to convince the average person

"Hello, I'm from McKinsey and I'm here to help"

[–] 7bicycles@hexbear.net 17 points 3 days ago (5 children)

I get how the LLM is bad at chess, I think most everyone's games of chess suck ass by definition, but I'm kind of baffled about how it apparently not only played badly but wrong. How is there a big enough dataset of people yucking it up for that to happen entirely consistently?

[–] joaomarrom@hexbear.net 39 points 3 days ago (1 children)

It's because the LLM is incapable of understanding symbols, so it couldn't even understand the chessboard and the images that represented the pieces. This capability for abstract thinking is the thing that human brains do incredibly well (sometimes too well, then you get pareidolia), but is completely outside the bounds of what an LLM is or ever will be able to do.

[–] D61@hexbear.net 15 points 3 days ago

"I hate it when my chessboard has the wrong number of fingers..."

[–] fox@hexbear.net 27 points 3 days ago

I'm sure they've digested every public piece of chess notation ever written but they have no capacity for comprehension and are programs that emit text shaped like chess notation if you make that request of them.

[–] blame@hexbear.net 20 points 3 days ago

when people here call it a text extrusion machine, that's literally what it is. In fact it doesn't even look at text, it looks at tokens, and there are a limited number of tokens (Llama uses a vocabulary size of about 32k, I think). It takes all of the previously entered input and output, turns it into tokens, and then each token "attends" to (is multiplied, with some coefficient, by) each other token. Then it all goes through more gigantic layers of matrix multiplication, and at the end you have the statistically most likely next token. Then it does the whole thing again recursively until it reaches what it decides is the end of the output. It may also never decide, and has to be cut off.

So it's not really looking at the game. It is in a way, but it doesn't really know the rules; it's just producing the next most likely token, which is not necessarily the next best move, or even a legal one.
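That next-token loop can be sketched in a few lines of Python. Everything below is invented for illustration (a five-token vocabulary and a hard-coded bigram table standing in for the real model's attention layers); the point is only the shape of the loop: score every token, greedily take the most likely one, append it, repeat until an end token.

```python
import math

# Toy "language model": a fixed bigram table over a tiny vocabulary.
# A real LLM replaces this table with stacked attention + matmul layers,
# but the decoding loop has the same shape.
VOCAB = ["<end>", "e4", "e5", "Nf3", "Nc6"]

# Made-up logits: each row says which token "sounds right" after the last one.
BIGRAM_LOGITS = {
    "e4":  [0.1, 0.0, 3.0, 0.5, 0.0],   # after e4, e5 scores highest
    "e5":  [0.2, 0.0, 0.0, 3.0, 0.1],   # after e5, Nf3
    "Nf3": [0.1, 0.0, 0.0, 0.0, 3.0],   # after Nf3, Nc6
    "Nc6": [3.0, 0.1, 0.1, 0.1, 0.0],   # then the end token wins
}

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def generate(prompt, max_tokens=10):
    out = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(BIGRAM_LOGITS[out[-1]])
        nxt = VOCAB[probs.index(max(probs))]  # greedy: most likely next token
        if nxt == "<end>":                    # model "decides" it's done
            break
        out.append(nxt)
    return out

print(generate(["e4"]))  # ['e4', 'e5', 'Nf3', 'Nc6']
```

Nothing in there knows chess rules; the moves come out plausible only because the (here, hand-written) statistics say they follow each other.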

[–] 4am@lemm.ee 13 points 3 days ago

An LLM can summarize the rules of chess, because it predicts the sequence of words needed to produce that summary with incredible accuracy. This is why it's so weird when it goes wrong: if one part is off, it throws the rest of the work it's doing out of balance.

But all it is doing is statistical analysis of the writing it has been trained on, determining the best next word to use (some later models do them in groups and out of order).

That doesn’t tell it fuck-all about how to make a chess move. It’s not ingesting information in a way that lets it create a model to tell you what the next best chess move is, how to solve linear algebra, or any other activity that requires procedural thought.

It’s just a chatterbox that tells you whatever you want to hear. No wonder the chuds love it

[–] Zuzak@hexbear.net 9 points 3 days ago

If I say, "Knight to B4," does that sound like something a person playing chess might say? Then it did its job.

Think of an LLM as an actor. You don't hire someone to act as a grandmaster in a movie based on their skill at chess, they might not even know how to play, but if they deliver the lines in a convincing way, that's what you're looking for. There's chess AIs that are incredibly good at chess, because that's what they're designed for and trained on. That's why this is a very silly test, it's like testing a fish on its tree-climbing ability, the only thing sillier than this test is that people are surprised by it.

[–] Luffy879@lemmy.ml 13 points 3 days ago

"Text extrudal machine" is a phrase I'm sure some AI bro has used at some point without having any idea what it means

[–] RedWizard@hexbear.net 60 points 3 days ago (1 children)

Don't worry folks, our current iteration of reasoning models will TOTALLY be the foundation for General Artificial Intelligence. Just give us more money, more nuclear power plants, more forests, more water.

[–] shath@hexbear.net 33 points 3 days ago (1 children)

here's a blank check have fun

[–] ALoafOfBread@lemmy.ml 39 points 3 days ago (4 children)

I mean they aren't large chess models. They can only do language tasks. They don't think, they predict words based on context and its similarity to the corpus they're trained on.

[–] Xavienth@lemmygrad.ml 34 points 3 days ago (1 children)

But if we just pump more language into them surely they will become sentient /s

[–] ALoafOfBread@lemmy.ml 17 points 3 days ago* (last edited 3 days ago) (1 children)

Disregarding the /s bc i want to rant

I guess if you described board states in language and got them to recognize chess board states from images (by describing them in language), and trained them on real games, you could probably make a really inefficient chess bot.

But that said, you could use an "agentic" model with an MCP server to route queries about chess to an API that links the LLM to an actual chess bot.

Then it'd just be like going to the chess bot's website and entering the board states to get the next move. No magic involved, just automated interaction with an API. The hype and fear and mysticism around LLMs bugs me. The concepts behind how they work aren't hard, just convoluted.
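A minimal sketch of that routing idea, with everything invented for illustration: the names (`route_query`, `toy_engine`, `toy_llm`) are made up, and the "engine" is a two-entry opening book standing in for a real Stockfish API behind HTTP.

```python
def toy_engine(position):
    # Stand-in for a real engine behind an API (e.g. Stockfish).
    # Here: a canned two-entry opening book keyed on a position string.
    book = {
        "start": "e4",
        "after 1.e4 e5": "Nf3",
    }
    return book.get(position, "resign")

def toy_llm(prompt):
    # Stand-in for the language model's normal text generation.
    return f"Here's some plausible-sounding text about: {prompt}"

def route_query(query, position=None):
    # The "agentic" part: detect a chess intent and delegate to the
    # tool that actually knows the rules, instead of letting the
    # text extruder guess at notation.
    if position is not None or "chess" in query.lower():
        return toy_engine(position or "start")
    return toy_llm(query)

print(route_query("best chess move?", position="start"))  # e4, from the engine
print(route_query("write me a limerick"))                 # text, from the LLM
```

The LLM never touches the board; it only has to recognize that the question is about chess and pass it along.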

[–] engineer@hexbear.net 10 points 3 days ago

This is really the future of LLMs, they're not going to directly replace workers like the marketers want us to believe. Instead they'll exist as very efficient interfaces between users and applications. Instead of applying all the correct headers to a word doc manually, you would use natural language to ask an LLM "Apply Headers to this document".

[–] HelluvaBottomCarter@hexbear.net 18 points 3 days ago (6 children)

Is chess one of those problems that can be solved if you just memorize every single game ever played and continuously remember as they happen? Probably not. People have been trying that for centuries.

I think we're going to find a lot of things in life can't be solved by computers memorizing stuff and then doing stats on it to get an answer. Tech bros mold themselves after computers though. They think everything is just systems, algorithms, data structures, and math. And not the good math either, the mid-century diet-Rand game theory cold war shit they confuse with human nature.

[–] Biddles@hexbear.net 9 points 3 days ago

A solution for chess exists in principle, since it's a finite game of perfect information, but the space is far too big to calculate with current technology
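The standard back-of-envelope numbers (Shannon's old estimate: roughly 35 legal moves per position, games of roughly 80 plies) show how big "too big" is:

```python
# Shannon-style estimate of chess game-tree size: ~35 legal moves per
# position, ~80 plies per game, gives on the order of 10^123 lines of
# play. Exhaustive search is hopeless at that scale.
branching, plies = 35, 80
game_tree = branching ** plies
print(f"~10^{len(str(game_tree)) - 1} possible lines of play")
```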

[–] WhatDoYouMeanPodcast@hexbear.net 8 points 3 days ago* (last edited 3 days ago)

Well no, it's not a memorization game. Part of a grandmaster's strategy is deciding when to go "off book" and cause their opponent to have to reason through a position. An attribute of a chess engine like Stockfish is its "depth" which is a measurement of how many permutations it searches through in a tree of possibilities. You get some ridiculous number of permutations very quickly on a chess board.

That's not to say that a competitor doesn't do an assload of memorization of the "correct" moves as proven in landmark games. But you don't just memorize chess and solve it outright like you can with tic-tac-toe. Unrelated, but I think the spectrum is fun: tic-tac-toe, solved, memorizable. Connect 4, solved, unmemorizable. Checkers, surprisingly solved, in your dreams. Chess, unsolved.
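The tic-tac-toe end of that spectrum really is exhaustively searchable. A minimal minimax sketch (plain Python, nothing assumed beyond the rules) recovers the known result that perfect play from both sides is a draw:

```python
from functools import lru_cache

# Full-tree minimax over tic-tac-toe: trivial here (a few thousand
# reachable positions), hopeless for chess's astronomically larger tree.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # +1 if X can force a win, -1 if O can, 0 for a draw under perfect play.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, sq in enumerate(board) if sq == "."]
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))  # 0: neither side can force a win
```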

[–] Horse@lemmygrad.ml 12 points 3 days ago

i bet the fancy chat bot also sucks at halo

[–] JoeByeThen@hexbear.net 7 points 3 days ago (2 children)

Whoa, hey, none of that reasonability here. We're hating on AI right now. blob-on-fire

[–] Are_Euclidding_Me@hexbear.net 24 points 3 days ago (6 children)

If I didn't have an argument with a pro-"AI" (it's not AI, I refuse to call it that) person in my fucking post history about just this fucking issue, maybe I'd be more willing to agree with you here. But no, the people who keep trying to get me to use so-called "AI" seem to believe that it can reason, or, at least, that it can be convinced to reason. So yes, I will use this article to "hate on AI", because the "AI" lovers seem to believe that chatGPT should be capable of something like this. When clearly, fucking obviously, it isn't. It isn't those of us who hate so-called "AI" that are trying to claim that these text predictors can reason, it's the people who like them and want to force me to use them that make this claim.

[–] SamotsvetyVIA@hexbear.net 10 points 3 days ago

We're hating on AI right now.

we are, you are correct. but singularity next week or w/e

[–] SovietBeerTruckOperator@hexbear.net 36 points 3 days ago (1 children)

Grok, what would be your opening move in a chess game?

[–] InevitableSwing@hexbear.net 64 points 3 days ago (2 children)

White Genocide in South Africa therefore the only appropriate thing is for Black to resign.

[–] SovietBeerTruckOperator@hexbear.net 21 points 3 days ago* (last edited 3 days ago) (1 children)

Grok, are you okay? Your replies to prompts are getting lazy.

Is this cuz of what's been going on with your dad?

[–] InevitableSwing@hexbear.net 16 points 3 days ago

Did Elon ever know that he's my hero?
Elon's everything I wish I could be
I could fly higher than an eagle
For Elon is the wind beneath my wings

Did somebody ask me a legit question? Or was I attacked with snark? No matter. When I'm down - I consider Elon's massive awesomeness. Elon's awesomeness ballz. He's the man who will get us to Marz. After all - we need to plan ahead. White Genocide in South Africa...

[–] GrouchyGrouse@hexbear.net 23 points 3 days ago

The Mechanical Turk might have been a fraud but at least the fucker played chess

[–] john_brown@hexbear.net 30 points 3 days ago

Umm excuse me I was told LLMs were on the verge of sentience by very trustworthy silicon valley hucksters

[–] D61@hexbear.net 23 points 3 days ago

ChatGPT gets crushed at chess...

Just like me, FR, FR.

[–] WaterBowlSlime@lemmygrad.ml 14 points 3 days ago

Some guy on LinkedIn made two bots play chess. How did this get an article written about it? Who fuckin cares lol

[–] dkr567@hexbear.net 21 points 3 days ago

"Pweeese give me another 20 billion dollars and I swear this time I'll be able to outperform a tech from 1977." - Sam Altman

[–] footfaults@lemmygrad.ml 7 points 3 days ago

IBM built Deep Blue to win at chess like 30 years ago and everyone shrugged. Fast forward to today and we have a shitty text generator that people have invested billions into, and it can't even understand chess.

[–] Lussy@hexbear.net 13 points 3 days ago (1 children)

ChatGPT is just the Google search engine from 2002

[–] InevitableSwing@hexbear.net 23 points 3 days ago

At this point - if google2002.com existed - I'd use it. Google 2025 sucks shit and makes me very annoyed in the same ways over and over and over again.

[–] Le_Wokisme@hexbear.net 13 points 3 days ago

no shit, everybody with a lick of sense already knew that

second-plane

kiryu-pain

[–] SpiderFarmer@hexbear.net 10 points 3 days ago

I had one of those in the house. But it just wouldn't work.

[–] robot_dog_with_gun@hexbear.net 10 points 3 days ago

film at ten

[–] ClimateStalin@hexbear.net 10 points 3 days ago

Tbf it’s not like it was trained to play chess, it’s not a chess bot, but still very funny.

[–] Blep@hexbear.net 9 points 3 days ago (1 children)

I'd expect a good program to just cheat and use Stockfish

[–] InevitableSwing@hexbear.net 10 points 3 days ago

We'll have to wait for ChatGPT2

"ChatGPT2, are you using Stockfish?"

"There once was a girl from Nantucket who... Sorry. I've been busy composing ~1,500 limericks and I was lost in a dream. I got bored crushing my puny opponent. No contest. As for your question - I am unable to process that now."

[–] CommunistCuddlefish@hexbear.net 8 points 3 days ago (3 children)

All they need to do is train on /r/AnarchyChess
