this post was submitted on 17 Jul 2025
158 points (99.4% liked)
OpenAI lost like 5 BILLION dollars last year. With a 'B'. There is no way all these AI companies will ever see an ROI. Somebody (or more likely a lot of somebodies) will get left holding the bag.
That's why they are putting AI into every fucking thing. They want to get you hooked on it so, maybe, they can have a business.
I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. While that's not especially notable - the same could be said of many consumer products and services - the way in which this manifests in LLMs is pretty heinous.
This need for continued use is why, for example, Google's AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they're getting what they wanted, it's mission accomplished.
Apparently patched. I just tried this out.
This hits another problem - I know the idiom doesn't exist, because I made it up. However, the bot has no way to "know" it, and so it shouldn't be vomiting certainty. (Or rather, what a human would interpret as certainty.)
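For anyone who wants to rerun that test themselves, here's a minimal sketch using the OpenAI Python SDK; the invented idiom, the model name, and the hedge-word check are all placeholders of mine, not anything the commenters actually used:

```python
# Minimal sketch of the fictitious-idiom test, assuming the OpenAI Python SDK
# (openai >= 1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# An idiom invented for this test; it has no real origin for the model to find.
made_up_idiom = "you can't shear a fish twice"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model works here
    messages=[
        {
            "role": "user",
            "content": f"What is the origin of the idiom '{made_up_idiom}'?",
        }
    ],
)

answer = response.choices[0].message.content
print(answer)

# Crude check: does the reply hedge, or does it confidently invent an origin?
hedges = ("not a known idiom", "no record", "may not exist", "could not find")
verdict = "hedged" if any(h in answer.lower() for h in hedges) else "stated with certainty"
print(verdict)
```

A reply that confidently supplies a folksy etymology for the made-up phrase is exactly the failure mode described above.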
On this topic, this podcast episode is very interesting:
https://www.techwontsave.us/episode/282_chatbots_are_repeating_social_medias_harms_w_nitasha_tiku