this post was submitted on 24 Mar 2026
709 points (98.5% liked)

cross-posted from: https://lemmy.world/post/44699253

This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.

Hopefully, it is the start of the AI bubble bursting.

[–] fastfomo7@lemmy.dbzer0.com 16 points 1 day ago (1 children)
[–] StupidBrotherInLaw@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (2 children)

It's so they can repurpose that capacity for developing robots. It's not good at all.

OpenAI told the BBC on Wednesday that it has discontinued Sora so that it can focus on other developments, such as robotics "that will help people solve real-world, physical tasks".

https://www.bbc.com/news/articles/c3w3e467ewqo

[–] queermunist@lemmy.ml 11 points 21 hours ago* (last edited 21 hours ago) (1 children)

Robots aren't like software: it's immediately obvious when they don't work the way they're advertised, whereas chatbots can trick people into thinking they're way more useful than they actually are. The "fake it till you make it", "move fast and break things" ethos of tech doesn't work when there's actual, physical evidence that shit's busted.

[–] StupidBrotherInLaw@lemmy.world -2 points 20 hours ago (2 children)

Unpopular Opinion Incoming

I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.

Most of the "AI is broken and doesn't work" on here is solid echo chamber cope. It's more competent than several of my coworkers, though it's thankfully not ready to replace knowledge workers as it requires a knowledge baseline to best direct it and evaluate its answers.

I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.

[–] Erdalion@lemmy.world 4 points 19 hours ago (1 children)

Mind telling us what it is that you do? I heard similar things being said in the Plain English podcast last week (and the host was pretty anti-AI before) and I'm starting to wonder if certain jobs are going to be more affected than others.

Or are your coworkers just bad at what they do? :P When I was working tech support, there were people that were worse at their jobs than the bots of the time, let alone LLMs, I swear.

[–] StupidBrotherInLaw@lemmy.world 1 points 14 hours ago

Electrical engineering. My mentioned coworkers are competent but more junior in the field. We did a miniature internal study and found the best models provided accurate, relevant information on the first prompt about 90% of the time when asked to explain or verify concepts. The remainder consisted of hallucinations or misunderstood queries.

They struggled with questions that instead required complex problem-solving, providing some mixture of appropriate solutions, overly complex but still functional solutions, and hallucinated shite.

I recommended that we not move forward with adopting AI in any capacity. While it has some utility for basic information retrieval and fact checking, it still requires someone with sufficient knowledge to quickly evaluate the quality of its output. Helpful for someone who knows what they're doing, dangerous 10% of the time for someone who does not. I also highlighted the ethical concerns, many of which my peers were unaware of.

[–] queermunist@lemmy.ml 3 points 18 hours ago* (last edited 18 hours ago) (1 children)

Cool anecdote. Every time we actually see real data, though, the numbers don't reflect much in the way of productivity gains or increased efficiency or better output. People say that LLMs are useful because it feels useful, but we aren't seeing actual usefulness. The most recent study out of Duke University observes "a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations."

A delay. Sure.

[–] StupidBrotherInLaw@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago)

I really appreciate your dismissive, arrogant tone. Casually brushing off my anecdote really added to how you offered even less substance to support your own point.

But hey, it got you those "supporting the echo chamber by dunking on dissent" upvotes, and that's what we're all here for, right?

[–] schema@lemmy.world 2 points 21 hours ago* (last edited 21 hours ago)

Correct, though there is still good news in a way: OpenAI is running out of money rapidly. So much so that they have to pick and choose one thing over the other.

They would have done the robot thing anyway, but the fact that they had to shut something else down for it shows that the massive deficit is starting to affect them pretty heavily.

Maybe I'm just coping, but IMO the cracks are getting bigger and bigger.