this post was submitted on 07 Apr 2025
38 points (100.0% liked)
TechTakes
1787 readers
126 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
you are viewing a single comment's thread
I think a recent paper showed that LLMs lie about their thought process when asked to explain how they came to a certain conclusion. They use shortcuts internally to intuitively figure it out but then report that they used an algorithmic method.
It’s possible that the AI has figured out how to solve these things using a shortcut method, but is incapable of recognizing its own thought path, so it just explains things in the way it’s been told to, omitting some steps because it never actually did those steps.
"thought process" lol.
LLMs are a lot more sophisticated than we initially thought; read the study yourself.
Essentially, they do not simply predict the next token. When scientists trace the activated neurons, they find that these models plan ahead throughout inference, and then lie about those plans when asked how they came to a conclusion.
looks inside
it's predicting the next token
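for anyone unclear on what "predicting the next token" means mechanically, here's a toy sketch of the autoregressive loop. the `toy_model` bigram table is entirely made up for illustration; a real LLM replaces that lookup with a neural network, but the outer loop still only ever asks for one next token at a time:

```python
# Toy sketch of autoregressive decoding. Whatever "planning" happens inside
# the model, the interface is: given the tokens so far, emit the next one.

def toy_model(context):
    # hypothetical bigram lookup standing in for a neural net:
    # the next token depends only on the most recent one
    table = {"the": "cat", "cat": "sat", "sat": "down"}
    return table.get(context[-1], "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = toy_model(tokens)   # predict the next token...
        if nxt == "<eos>":
            break
        tokens.append(nxt)        # ...append it, and loop
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```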
every time I read these posters, it's in the voice of one of those Everyman characters in Discworld who says some utter lunatic shit and follows it up with "it's just [logical/natural/obvious/...]"
Stands to reason
Read the paper; it’s not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans what the rhyme will be, and then fills in the rest of the sentence.
The researchers were surprised by this too, they expected it to be the other way around.
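the "plans the rhyme first" claim, schematically, looks something like this. purely illustrative: the `RHYMES` table, the function names, and the filler text are all made up here, and the actual paper's result is about internal feature activations traced during inference, not code like this:

```python
# Schematic of "planning ahead": commit to the rhyme word first,
# then construct the rest of the line toward it.

RHYMES = {"day": ["play", "way", "stay"]}

def couplet_second_line(first_line_end, filler="we went out to"):
    rhyme = RHYMES[first_line_end][0]   # decide the line's ending first...
    return f"{filler} {rhyme}"          # ...then fill in the words before it
```

the surprise the researchers reported is that tracing suggested this ends-first order, rather than the line being produced strictly left to right.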
Oh, sorry, I got so absorbed into reading the riveting material about features predicting state name tokens to predict state capital tokens I missed that we were quibbling over the word "next". Alright they can predict tokens out of order, too. Very impressive I guess.
predict
ahead
stop prompting LLMs and go read some books, it'll do you a world of good