this post was submitted on 26 Feb 2026
209 points (94.5% liked)

Hacker News

[–] AmbitiousProcess@piefed.social 41 points 3 days ago

"doesn't work" doesn't mean the AI literally does not produce any output or do anything, it means it has so many flaws it's just a fundamentally bad technology to be using.

And don't worry, I've got sources.

LLMs still routinely hallucinate, and even implementations used by AI safety researchers can't help but wipe email inboxes without permission. The longer you use them, the more they atrophy your brain, creating both general dependency and emotional dependency while deskilling you at your job. They produce content rated worse both by humans and by the AI models searching for trustworthy sources. And to top it all off: scaling laws are already failing to improve AI models enough to fix these problems, companies aren't seeing returns, the economy has gained essentially nothing from AI investment, usage, and growth, and public perception among the people most affected by AI keeps getting worse, even as the people financially incentivized to keep building it insist it's going to get better, all while datacenters accelerate global warming and LLMs keep killing people.

I don't know about you, but I'd rather not support a technology that makes you fundamentally worse at most cognitive tasks, damages the planet, and burns money that could otherwise go to something more valuable, all while randomly killing mentally vulnerable people.