this post was submitted on 11 Mar 2026
166 points (98.3% liked)

Technology

Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is "substantially, and unavoidably." Even under optimal conditions, with the best model and the temperature chosen specifically to minimize fabrication, the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.
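As a rough illustration of the kind of aggregation behind numbers like these, the sketch below averages per-run fabrication fractions into per-model rates, then takes the best and median model. All names and figures here are hypothetical placeholders, not the benchmark's actual schema or data.

```python
# Hypothetical sketch: aggregate per-run fabrication fractions into
# per-model rates. Data and model names are illustrative only.
from statistics import median

# Each run: (model, fraction of answers judged fabricated in that run).
runs = [
    ("model-a", 0.012), ("model-a", 0.011),
    ("model-b", 0.060), ("model-b", 0.055),
    ("model-c", 0.260), ("model-c", 0.240),
]

def fabrication_rates(runs):
    """Mean fabrication fraction per model across its runs."""
    by_model = {}
    for model, frac in runs:
        by_model.setdefault(model, []).append(frac)
    return {m: sum(v) / len(v) for m, v in by_model.items()}

rates = fabrication_rates(runs)
best = min(rates.values())     # the "floor": best model's rate
mid = median(rates.values())   # the median model's rate
```

With real per-run judgments in place of the placeholder tuples, `best` and `mid` would correspond to the "best model" and "median model" figures quoted above.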

[–] Zink@programming.dev 5 points 1 day ago

I'm no expert and don't care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.

So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly "this passes for something a person on the internet might write."

It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.

What LLMs give you is a passable, elaborate forum post replying to your question, written by an extremely confident internet rando. But it's done at computer speed and global scale!