LLMs work by hallucinating; the wild shit that gets shared isn't an accident, it's how they generate all their output.
people have trained models on their own internal document sets and they still get things wrong; they are simply not useful for facts. they don't think, they don't have knowledge, they just pull scrabble tiles in a clever statistical way that fools you into trusting it.
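
to make the "scrabble tiles" point concrete, here's a minimal sketch of next-token sampling, which is roughly what generation boils down to. the vocabulary and scores are invented for illustration, not taken from any real model:

```python
import math
import random

# toy "logits" a model might assign to candidate next tokens --
# these words and scores are made up for illustration
logits = {"Paris": 4.1, "Lyon": 2.3, "Berlin": 1.9, "purple": 0.4}

# softmax: turn raw scores into a probability distribution
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# sample: the model doesn't "know" an answer, it draws a token
# in proportion to probability, so low-probability junk like
# "purple" still comes out some fraction of the time
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token, probs)
```

every token is drawn this way; a "correct" answer and a hallucination are produced by the exact same mechanism.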
that's a tooling/prompting/context-window-management problem. it can be solved with proper programming procedures and smart memory management.
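
for what it's worth, here's a minimal sketch of what "context window management" usually means in practice: retrieve the relevant internal docs and pin them into the prompt so the model quotes them instead of free-associating. everything here (the docs, `retrieve()`, the prompt shape) is hypothetical, not any real library's API:

```python
# toy internal document set, invented for illustration
docs = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Receipts are required for expenses over $25.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    # toy relevance score: count words shared with each doc
    q_words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    # stuff the retrieved text into the context and instruct the
    # model to answer only from it
    context = "\n".join(retrieve(question))
    return (
        "Answer ONLY from the context below. "
        "If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many vacation days do I accrue?"))
```

whether that actually "solves" hallucination or just reduces it is exactly what the parent comment is disputing: the model still samples tokens the same way, the retrieved context just tilts the distribution.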