this post was submitted on 17 Jul 2025
158 points (99.4% liked)

Tech

Back in 1999, Wall Street lost its collective mind over the internet. Companies with no revenue were suddenly worth billions, “eyeballs” were treated as currency, and market analysts predicted a frictionless future where everything would be digital. Then the bubble burst. Between March 2000 and October 2002, an estimated five trillion dollars in market value vanished into thin air.

Today, it is happening again. This time, the magic word is not “.com.” It is “AI.” According to Torsten Slok, the influential chief economist at Apollo Global Management, a major global investment firm, the current AI-driven market bubble is even more stretched than the dot-com frenzy of the late 1990s. And he has the data to prove it.

“The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” Slok wrote in a recent research note that was widely shared across social media and financial circles.

[–] Prox@lemmy.world 46 points 4 days ago (7 children)

OpenAI lost like 5 BILLION dollars last year. With a 'B'. There is no way all these AI companies will ever see an ROI. Somebody (or more likely a lot of somebodies) will get left holding the bag.

[–] flango@lemmy.eco.br 20 points 4 days ago (5 children)

That's why they are putting AI into every fucking thing. They want to get you hooked on it so, maybe, they can have a business.

[–] BenVimes@lemmy.ca 11 points 4 days ago* (last edited 4 days ago) (2 children)

I realized a while back that one of the primary goals of these LLMs is to get people to continue using them. While that's not especially notable - the same could be said of many consumer products and services - the way in which this manifests in LLMs is pretty heinous.

This need for continued use is why, for example, Google's AI was returning absolute nonsense when asked about the origins of fictitious idioms. These models are designed to return something, and to make that something pleasing to the reader, truth and utility be damned. As long as the user thinks that they're getting what they wanted, it's mission accomplished.

[–] lvxferre@mander.xyz 3 points 3 days ago* (last edited 3 days ago)

Apparently patched. I just tried this out:

[Prompt] Explain the origin of the English idiom "when the cracker breaks".

[Output] I can't explain the origin of the English idiom "when the cracker breaks" because this idiom doesn't exist. It's possible you're misremembering or combining parts of other idioms.

Perhaps you're thinking of:

  • "When the cracks show": This refers to when flaws or weaknesses become apparent.
  • "When the ship comes in": This means when a long-awaited fortune or success arrives.
  • "Crackerjack": This is an adjective meaning excellent or first-rate.

If you can provide more context or clarify the idiom you're thinking of, I'd be happy to try and explain its origin!

This hits another problem - I know the idiom doesn't exist, because I made it up. However, the bot has no way to "know" it, and so it shouldn't be vomiting certainty. (Or rather, what a human would interpret as certainty.)
