this post was submitted on 19 Feb 2026
141 points (97.3% liked)

Technology

top 23 comments
[–] tal@lemmy.today 2 points 4 hours ago

Wooldridge sees positives in the kind of AI depicted in the early years of Star Trek. In one 1968 episode, The Day of the Dove, Mr Spock quizzes the Enterprise’s computer only to be told in a distinctly non-human voice that it has insufficient data to answer. “That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he said. “Maybe we need AIs to talk to us in the voice of the Star Trek computer. You would never believe it was a human being.”

Hmm. That's probably a pretty straightforward modification for existing LLMs, at least at the token level.

You can obtain token probabilities, so you can give some out-of-band estimate of confidence in a response, down to the token level. You don't really need to change anything for that, just expose some data.
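
A rough sketch of what "just expose some data" could look like, using the Hugging Face transformers library (the model name and prompt are placeholders; any causal LM works the same way):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; swap in whatever you actually run
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The ship's computer replied:", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    return_dict_in_generate=True,
    output_scores=True,  # keep the per-step logits instead of discarding them
)

# Turn the per-step logits into log-probabilities of the tokens that were chosen.
scores = model.compute_transition_scores(out.sequences, out.scores, normalize_logits=True)

# Print each generated token alongside the model's own probability for it.
generated = out.sequences[0, inputs["input_ids"].shape[1]:]
for tok, logp in zip(generated, scores[0]):
    print(f"{tokenizer.decode([tok.item()])!r}  p={logp.exp().item():.3f}")
```

Low per-token probabilities don't map cleanly onto "this claim is wrong", but it's data that chat interfaces currently throw away.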

And you could make the AI aware of its own neural net's confidence level, feed the confidence back into the neural net for subsequent tokens, and see if you can get it to take that information into account.

https://en.wikipedia.org/wiki/Recurrent_neural_network

In artificial neural networks, recurrent neural networks (RNNs) are designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
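
As a toy illustration of that "fed back as input" loop (PyTorch, with arbitrary sizes, not anything resembling a real LLM):

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
sequence = torch.randn(1, 5, 8)         # 1 sequence, 5 time steps, 8 features per step

hidden = torch.zeros(1, 1, 16)          # initial hidden state
for t in range(sequence.shape[1]):
    step = sequence[:, t:t + 1, :]      # one time step at a time
    output, hidden = rnn(step, hidden)  # previous hidden state is fed back in
```

Actually wiring a confidence signal back in, as suggested above, would mean making it part of the input at each step and training the model to use it, which is more than just exposing data.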

[–] friend_of_satan@lemmy.world 3 points 5 hours ago* (last edited 5 hours ago) (1 children)

Tangent: that pic reminds me of the terrorizing tit in Everything You Always Wanted to Know About Sex (*But Were Afraid to Ask)

[–] thatradomguy@lemmy.world 1 points 5 hours ago

Honestly, the first thing that came to mind was booby.

[–] XLE@piefed.social 18 points 9 hours ago (1 children)

“It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

Is it promising though, Michael Wooldridge? Have you recently attended any magic shows and become excited by the potential of invisibility technology?

[–] Zink@programming.dev 1 points 6 hours ago* (last edited 6 hours ago)

Oh touché, not Michael Wooldridge! The technology has created an entire segment of the economy worth many trillions of dollars based on NOTHING BUT promises! We are living in a promise-based economy!

/s but not really

[–] UnspecificGravity@piefed.social 25 points 13 hours ago (1 children)

The difference being that the Hindenburg was a perfectly functioning rigid airship that had a lot of inherent risks due to the nature of its design.

AI isn't good enough at its actual job to be in this position. The risk of AI is people pretending that it works when it doesn't. It would be like if you made a blimp and filled it with carbon dioxide and people kept buying tickets and just sitting there waiting for it to take off.

[–] criss_cross@lemmy.world 2 points 3 hours ago

With society insisting that it'll take off in the future and that only suckers would leave.

[–] ReverendIrreverence@lemmy.world 6 points 12 hours ago (2 children)

Except for one person on the ground, the only people harmed in the Hindenburg disaster were the ones on board. If you're not "on board" when the AI bubble pops and burns, I expect you will not be hurt as much as those blindly taking that ride.

[–] GreenBeard@lemmy.ca 17 points 12 hours ago

Unfortunately, we're not all the ones who decide whether we're on board or not. Our employers are. We live in a world where profits are privatized and losses are socialized, so when this goes, it's going to hurt the general public a lot more than it will ever hurt the Epstein Class.

[–] discocactus@lemmy.world 4 points 11 hours ago* (last edited 11 hours ago) (2 children)

On board means being part of the utility grid and industrial food infrastructure, sooooo

[–] entropicdrift 1 points 3 hours ago

And if you have a retirement account with investments of basically any kind, you're on board too. The entire US economy is hinging on AI at this point, to a deranged degree. Almost more than on oil.

[–] FauxLiving@lemmy.world 1 points 7 hours ago

When the AI bubble crashes, they'll use less grid power, on account of not existing.

[–] footprint@lemmy.world 12 points 14 hours ago (1 children)

This would be a good comparison if all it took for the Hindenburg to explode was just asking it to role-play as a ship that could explode. Conscious effort had to be expended to make that thing fail, but most models start to fail spectacularly if you use them in good faith for more than like 30 minutes.

[–] lmr0x61@lemmy.ml 5 points 13 hours ago

That’s a good point. The precarity of the AI bubble is, as far as I’ve seen, unprecedented in human history. There simply hasn’t been anything that undergirds so much of the world economy and can fail so catastrophically in so many ways.

I really don’t think we have a good historical analogue to illustrate the scale of the risk. The only possible exception I can think of is mutually assured destruction during the Cold War, but that hinged on only one decision by one of (arguably) two individuals at any given time, both of whom were highly incentivized not to make that decision. That, or the global climate’s collapse, but even that overlaps significantly with the bubble. With AI, compared to MAD at least, each catastrophic outcome isn’t the result of even a small set of actors, but of many unregulated companies with incentives to be reckless (making negative outcomes not only more probable but more numerous). And those incentives keep increasing as the funding starts to dry up (AI hasn’t really delivered a proper ROI).

Something, and possibly many somethings, will go horribly wrong. Some already have, like AI use by students at all levels robbing them of their education and their actual value to the workforce, and the acceleration of the climate collapse (maybe that’s the only analogous crisis). But it remains to be seen what goes wrong (not if), and how much worse it gets.

But the truth is, I’m still relatively young. I’m just old enough to get a hint of the world’s workings, scale, and stakes. And in my life, nothing has seemed more like a loaded gun pointed at our heads than the AI bubble.

[–] Sims@lemmy.ml 5 points 13 hours ago

..but giving AI technology to Psycho Corporations that have an openly declared goal of not caring about anything but profits is not a problem. Got it..

Jeebus, "The Guardian" is infested with no/slow-thinking child 'journalists'..

[–] RobotToaster@mander.xyz 8 points 15 hours ago* (last edited 15 hours ago) (2 children)

A disaster that caused a lot of bad publicity even though the majority of the people on board (62 of 97) survived, and that may have been caused by sabotage?

[–] XLE@piefed.social 1 points 10 hours ago

I appreciate the people who help make sure AI doesn't receive an ounce of the credit it doesn't deserve

[–] AstralPath@lemmy.ca 2 points 15 hours ago

No!

Fire BIG. Big fire bad!

Run away!

[–] doug@lemmy.today 6 points 15 hours ago

“Oh the inhumanity!”

[–] BeigeAgenda@lemmy.ca 4 points 14 hours ago

And now we hear stories about how easy it is to hack systems with built-in LLMs. When you think about it, they are basically trained to be as helpful and forthcoming as possible, and then we give them the keys to the system!

[–] UnderpantsWeevil@lemmy.world 2 points 14 hours ago

The Hindenburg was a hiccup in history relative to the fallout from an AI bust.

[–] Lembot_0006@programming.dev 1 points 15 hours ago (1 children)

What? Global interest? Self-driving cars? Hindenburg? Is this professor a cat? Markov chain? The provided info is so crazy that I decided to NOT read the article.

[–] TropicalDingdong@lemmy.world 1 points 15 hours ago

Hydrogen buildup?