this post was submitted on 16 Dec 2025
20 points (83.3% liked)

Technology

1383 readers

A tech news sub for communists

founded 3 years ago

I have not read the article yet, but I think this is a good topic to discuss here.

[–] robot_dog_with_gun@hexbear.net -2 points 2 months ago (1 children)

LLMs work by hallucinating. the wild output that gets shared isn't an accident; it's the same process that generates everything they say.

people have trained models on internal document sets and they still get things wrong. they are simply not reliable for facts: they don't think, they don't have knowledge, they just pull scrabble tiles in a clever statistical way that fools you into trusting them.
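the "scrabble tiles" picture above is roughly next-token sampling: the model assigns a score to every candidate token and draws one at random, weighted by those scores. a minimal sketch of that mechanism, with made-up token names and scores (the real vocabulary and logits come from a trained network, not a hand-written dict):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Draw the next token from a softmax over the model's scores.

    The token names and scores are hypothetical; a real model produces
    logits over tens of thousands of tokens. Nothing here checks truth,
    only relative likelihood.
    """
    rng = random.Random(seed)
    # higher temperature flattens the distribution, making unlikely
    # ("hallucinated") picks more common; lower temperature sharpens it
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # weighted random draw from the distribution
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding at the tail

# hypothetical scores for the word after "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "Mars": 0.5}
print(sample_next_token(logits, temperature=1.0, seed=0))
```

the point of the sketch: "Mars" always has nonzero probability, so a confident-sounding wrong answer is a possible output of the normal process, not a malfunction.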

[–] percyraskova@lemmygrad.ml 7 points 2 months ago

that's a tooling/prompting/context-window-management problem. it can be solved with proper programming procedures and smart memory management.
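one common version of the tooling this comment is gesturing at is retrieval-grounded prompting: fetch the relevant passages first, fit them into the context window, and instruct the model to answer only from them. a minimal sketch, with a hypothetical helper name and placeholder documents (real systems would add a retriever, token counting, and citation checks):

```python
def build_grounded_prompt(question, documents, max_chars=2000):
    """Assemble a prompt that pins the model to retrieved source text.

    Instead of relying on the model to 'remember' facts, retrieved
    passages are placed in the context window and the instructions
    restrict the answer to them. Documents here are placeholders.
    """
    # keep only as much retrieved text as fits the context budget
    context, used = [], 0
    for doc in documents:
        if used + len(doc) > max_chars:
            break
        context.append(doc)
        used += len(doc)
    sources = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:"
    )

docs = ["The v2.3 release deprecated the legacy auth endpoint."]
print(build_grounded_prompt("Which release deprecated legacy auth?", docs))
```

this doesn't make sampling stop being statistical, but it shifts the task from recall to reading comprehension, which is where the disagreement between the two comments really lies.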