this post was submitted on 18 Apr 2025
280 points (98.6% liked)
ChatGPT
I have a compsci background and I've been following language models since the days of the original GPT and BERT. Still, the weird and distinctive behavior of LLMs hadn't really clicked for me until recently, when I thought carefully about what "model" means, as you described. It's simulating what a conversation with another person might look like structurally, and it can do so with impressive detail. But there is no depth to it, so logic and fact-checking are completely foreign concepts in this realm.
Looking at it this way, it also suddenly becomes very clear why it is so unproductive and meaningless when people frustratedly tell an LLM things like "that didn't work, fix it": what would follow that kind of prompt in a human-to-human conversation? Structurally, an answer that looks very similar! So the LLM will once more produce a structurally similar answer, but there is literally no reason why it would be any more "correct" than the prior output.
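To make that concrete, here's a deliberately toy sketch in plain Python, not any real model or API. The `sample_reply` function and its templates are made up purely to show the shape of the problem: a "fix it" follow-up just feeds a longer context back into the same sampling step, and nothing in that step checks whether the new answer is actually correct.

```python
import random

def sample_reply(context: list[str]) -> str:
    # Stand-in for next-token sampling: pick a structurally plausible
    # reply shape based only on what kind of message came last.
    # There is no verification step anywhere in here.
    last = context[-1].lower()
    if "fix it" in last or "didn't work" in last:
        templates = [
            "Apologies for the confusion. Here is a corrected version: ...",
            "You're right, that was wrong. Try this instead: ...",
        ]
    else:
        templates = ["Sure! Here's an implementation: ..."]
    return random.choice(templates)

conversation = ["Write me a function that parses this file."]
conversation.append(sample_reply(conversation))    # first attempt
conversation.append("That didn't work, fix it.")   # frustrated follow-up
conversation.append(sample_reply(conversation))    # same process, now wrapped in an apology

print("\n".join(conversation))
```

The second call is the same function over a slightly longer context; the only thing that changed is that an apology is now the structurally expected opener.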
That's right, you have it exactly. When the prompt says the prior output was wrong, the program is expected to apologize and regenerate a different answer, but it produces that answer with exactly the same process that produced the wrong one.