this post was submitted on 07 Mar 2026
969 points (98.9% liked)

Technology

[–] presoak@lazysoci.al 2 points 12 hours ago (1 children)

LLMs have been trained on absolute garbage

That depends on the LLM.

Specialized medical LLMs are actually quite accurate.

[–] badgermurphy@lemmy.world 1 points 12 hours ago* (last edited 8 hours ago) (1 children)

I'm sure the quality of LLM output varies a lot with the scope it covers and the training data set.

However, if it were possible to get an LLM to be "quite accurate" in any given context, a path to profitability for that tool would be easy to find, and I don't think we have seen that materialize anywhere.

I believe the best they can manage is being "more accurate" than the mean, but still not accurate enough to reliably make anyone money*.

*Nvidia notwithstanding

[–] Routhinator@startrek.website 2 points 2 hours ago

Moreover, until you can consistently get the same output from the same input with an LLM, the entire tech is unreliable garbage.
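The nondeterminism being complained about mostly comes from the sampling step, not the model itself. A minimal sketch (toy logits standing in for one decoding step of a hypothetical model, not any real API) of why greedy decoding is repeatable while temperature sampling is not — though in practice even "deterministic" settings on real serving stacks can drift due to floating-point and batching effects, which is part of the complaint:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a next-token index from raw logits.

    temperature == 0 -> greedy decoding (argmax): fully deterministic.
    temperature > 0  -> softmax sampling: output can vary run to run
                        unless the RNG is seeded identically.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy logits for one decoding step (illustrative values only).
logits = [2.0, 1.0, 0.5, 0.1]

# Greedy: five calls, five identical answers.
greedy = [sample_token(logits, 0, random.Random()) for _ in range(5)]

# Sampled at temperature 1.0: answers can differ between runs.
sampled = [sample_token(logits, 1.0, random.Random()) for _ in range(5)]
print(greedy, sampled)
```

Seeding the RNG makes sampling reproducible on one machine, but that guarantee rarely survives different hardware, batch sizes, or serving backends.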