[–] footprint@lemmy.world 12 points 21 hours ago (1 children)

This would be a good comparison if all it took for the Hindenburg to explode was asking it to role-play as a ship that could explode. Conscious effort had to be expended to make the thing fail, whereas most models start to fail spectacularly if you use them in good faith for more than like 30 minutes.

[–] lmr0x61@lemmy.ml 5 points 20 hours ago

That’s a good point. The precarity of AI is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.

I really don’t think we have a good historical analogue to illustrate the scale of the risk. The only possible exception I can think of is mutually assured destruction during the Cold War, but that hinged on a single decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make it. That, or the collapse of the global climate, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the result of even a small set of actors, but of many unregulated companies with incentives to be reckless, making negative outcomes not only more probable but more numerous. And those incentives are only growing as the funding starts to dry up (AI hasn’t really proven it can deliver a return on investment).

Something, and possibly many somethings, will go horribly wrong. Some already have: AI use by students at all levels is robbing them of their education and their actual value to the workforce, and it’s accelerating the climate collapse (maybe that’s the only analogous crisis). But it remains to be seen what goes wrong next (not if), or how much worse it gets.

But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at our heads than the AI bubble.