this post was submitted on 22 Aug 2025
65 points (100.0% liked)

technology

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020
[–] came_apart_at_Kmart@hexbear.net 45 points 8 months ago (2 children)

my clue that the trumpets of the AI collapse are tuning up is that, last week, the least tech savvy person I know in my cohort was telling me, the person everyone they know goes to for random technical assistance/context, about how powerful "AI" (LLM) is and how it's about to take over everything.

it's like that bit about how, when the shoeshine kid and your gardener have stock tips, it's time to get out of the market because now literally everyone is regurgitating the "New Paradigm!" cliches.

[–] yogthos@lemmygrad.ml 13 points 8 months ago

I imagine the flop of ChatGPT 5, along with it becoming clear that current-gen models aren't living up to the hype, might be starting to cool investor expectations.

[–] Dirt_Possum@hexbear.net 12 points 8 months ago (1 children)

I've been having an ongoing argument about this exact thing for the past month with a 70-something step-relative I see often, who has always come to me for computer advice. I've tried to let it go many times, but she keeps hammering at it, even bringing it up out of the blue. Since you mentioned something similar, forgive me for popping in with my own rant here, but it has been really getting on my nerves.

She absolutely will not hear that calling it "AI" doesn't make it actually intelligent, that the label is a marketing scam, and that it has zero chance of developing general intelligence. It's been disappointing because, like I said, she used to trust me about computer stuff, but now she angrily asks me "do you even know about <some pop sci "expert"> and the projects they're working on?!", namedropping all these supposedly "respected" scientists she's been reading on the impending AI apocalypse and thinking I'm uninformed for not knowing them. It's like, shit lady, I used to argue with Ray Kurzweil's singularity nuts 12 years ago about this same sort of garbage. She's actually fairly youthful in her views for a boomer: she's a sci-fi fan, prides herself on being socially progressive, and frequently talks about how much she loves science, but has always had a real "woowoo" new-agey bent to that.

The conversation first came up because she told me about how she's been literally losing sleep with actual insomnia, thinking about AI with respect to what it will mean for her grandkids, what will happen to them in a world where machines become "ever more" intelligent, repeating talking points she heard somewhere about how "once machines become intelligent, they'll have no use for us and only see us as a threat." So many brainworms to sift through, from colonialist thinking to buying into "AI" hype. I did my best to disabuse her of this belief to begin with, partly just to help her sleep better at night, though I admit I did remind her there are many, many other real things to worry about regarding the world her grandkids will be inheriting. But she's sticking to it vehemently. She's been going off on me about how "all the top thinkers on the subject" agree with her, insisting that even Stephen Hawking thought AI would be a disaster (she thought that was an ace in the hole because I used to like discussing theoretical physics with her and never had the heart to differentiate for her the real but niche contributions Hawking made from his celebrity). Rather than think about what I said as someone who tends to know more about this kind of thing, she has decided I don't actually know anything.

It really has been a trip watching how the propaganda/hype mill and the shoehorning of "AI" into everything has broken so many brains who only 5 years ago would have laughed at anyone else for thinking the plot of Terminator was really happening.

[–] Frogmanfromlake@hexbear.net 7 points 8 months ago

Better keep an eye on her. I know a number of people who fit that description and they gradually fell into a right-wing rabbit hole. Usually anti-vax or anti-Covid precautions were the final stepping stones.

[–] TankieTanuki@hexbear.net 35 points 8 months ago (1 children)
[–] Frogmanfromlake@hexbear.net 5 points 8 months ago

Have any of the other three made a return?

[–] peeonyou@hexbear.net 34 points 8 months ago (3 children)

Honestly, I can't imagine these LLMs are actually contributing any sort of benefit when you consider the amount of trash you have to wade through and fix once they've done what they've done. For every quickly typed-up professional e-mail or procedure they produce, they're wasting multiple hours of programmer time by introducing BS into codebases and trampling over coding conventions, which then has to be reviewed and fixed. I imagine it will get to the point where AI can do things on its own without the hallucinations and the flat-out errors and whatnot, but it ain't now and I don't think it's anytime soon.

[–] yogthos@lemmygrad.ml 24 points 8 months ago (2 children)

I find they have practical uses once you spend the time to figure out what they can do well. For example, for coding, they can do a pretty good job of making a UI from a json payload, crafting SQL queries, making endpoints, and so on. Any fairly common task that involves boilerplate code, you'll likely get something decent to work with. I also find that sketching out the structure of the code you want by writing the signatures for the functions and then having LLM fill them in works pretty reliably. Where things go off the rails is when you give them too broad a task, or ask them to do something domain specific. And as a rule, if they don't get the task done in one shot, then there's very little chance they can fix the problem by iterating.
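To make the "sketch the signatures, let the LLM fill them in" workflow concrete, here's a hypothetical example (the function name and shape are mine, not from any particular project): you write the signature and doc comment, and the model produces something like the body below.

```javascript
/**
 * Build a parameterized INSERT statement for the given table and row object.
 * You write this signature and comment; the LLM fills in the body.
 */
function buildInsert(table, row) {
  const cols = Object.keys(row);
  // $1, $2, ... placeholders, one per column
  const placeholders = cols.map((_, i) => `$${i + 1}`).join(", ");
  return {
    text: `INSERT INTO ${table} (${cols.join(", ")}) VALUES (${placeholders})`,
    values: cols.map((c) => row[c]),
  };
}
```

Exactly the kind of boilerplate task where there's one obvious idiomatic answer, so the model rarely goes off the rails.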

They're also great for working with languages you're not terribly familiar with. For example, I had to work on a JS project using React, and I haven't touched either in years. I know exactly what I want to do and how I want the code structured, but I don't know the nitty-gritty of the language. LLMs are a perfect bridge here because they'll give you idiomatic code without you having to constantly look stuff up.

Overall, they can definitely save you time, but they're not a replacement for a human developer, and the time saving is mostly a quality of life improvement for the developer as opposed to some transformational benefit in how you work. And here's the rub in terms of a business model. Having what's effectively a really fancy autocomplete isn't really the transformative technology companies like OpenAI were promising.

[–] Chana@hexbear.net 14 points 8 months ago (2 children)

With React I would be surprised if it was really idiomatic. The idioms change every couple of years, and state management has its quirks.

[–] Andrzej3K@hexbear.net 6 points 8 months ago (1 children)

I think that's going to change now though, as a result of LLMs. We're going to be stuck with whatever was the norm when the data was harvested, forever.

[–] Chana@hexbear.net 2 points 8 months ago

Assuming the use of these tools is dominant over library developers. Which I don't think it will be. But they may write their libraries in a way that is meant to be LLM-friendly. Simple, repetitious, and with documentation and building blocks that are easily associated with semi-competent dev instructions.

[–] yogthos@lemmygrad.ml 5 points 8 months ago (1 children)

It uses hooks and functional components, which is how most people are doing it from what I know. I also find the code DeepSeek and Qwen produce is generally pretty clear and to the point. At the end of the day, what really matters is that you have clean code that you're going to be able to maintain.

I also find that you can treat components as black boxes. As long as it's behaving the way that's intended it doesn't really matter how it's implemented internally. And now with LLMs it matters even less because the cost of creating a new component from scratch is pretty low.

[–] Chana@hexbear.net 2 points 8 months ago

Does it memoize with the right selection of stateful variables by default? I can't imagine it does without a very specific prompt or unless it is very simple boilerplate TODO app stuff. How about nested state using contexts? I'm sure it can do this but will it know how best to do so and use it by default?

In my experience, LLMs produce a less repeatable and correct version of what codegen tools do, more or less. You get a lot of repetition and inappropriate abstractions.

Also just for context, hooks and functional components are about 6-7 years old.
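For readers outside React, the memoization point above is about dependency arrays: `useMemo` only recomputes a value when the variables you list as dependencies change, and an LLM that omits one serves stale data silently. A toy stand-in for the hook (illustrative only, nothing like React's real per-component implementation) shows the mechanism:

```javascript
// Toy stand-in for React's useMemo: recompute only when deps change.
function makeUseMemo() {
  let lastDeps = null;
  let lastValue;
  let computeCount = 0;
  function useMemo(compute, deps) {
    const changed =
      lastDeps === null || deps.some((d, i) => d !== lastDeps[i]);
    if (changed) {
      lastValue = compute();
      computeCount += 1;
      lastDeps = deps;
    }
    return lastValue; // stale if a real dependency was left out of deps
  }
  useMemo.count = () => computeCount;
  return useMemo;
}
```

The failure mode being described: if the model lists the wrong stateful variables in `deps`, nothing errors, the memoized value just quietly stops tracking reality.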

[–] Andrzej3K@hexbear.net 4 points 8 months ago (1 children)

I find Gemini really useful for coding, but as you say it's no replacement for a human coder, not least because of the way it fails silently. E.g., in my experience it will always come up with the hackiest solution imaginable for any sort of race condition, so someone has to be there to say WTF GEMINI, ARE YOU DRUNK. I think there is something kind of transformative about it: it's like going from a bicycle to a car. But the thing is, both need to be driven, and the latter has the potential to fail even harder.

[–] yogthos@lemmygrad.ml 5 points 8 months ago

Exactly, it's a tool, and if you learn to use it then it can save you a lot of time, but it's not magic and it's not a substitute for understanding what you're doing.

[–] Chana@hexbear.net 21 points 8 months ago

The most useful application is in making garbo marketing images for products that used to be 100% photoshopped instead. Cool, your fake product has an "AI" water splash instead of one from Getty. Nothing of value gained or lost, except a recognition of how meaningless it is.

[–] MolotovHalfEmpty@hexbear.net 3 points 8 months ago

Also, the reason all the hype and 'culture' around these products focuses on individual end users (write me a poem, be a chatbot, make me Pixar art, etc.) is because they're good at being flexible, at applying the algorithm to different shallow tasks. But when it comes to specific, repeated, reliable use cases for businesses, they're much, much worse. The error rates are high, their actual capacity for 'institutional memory' and reliable repetition is poor, and if you're replicating a known process previously done by people, you still have to train or recruit new people to get the best out of the tech.

[–] happybadger@hexbear.net 32 points 8 months ago (1 children)
[–] BodyBySisyphus@hexbear.net 14 points 8 months ago (1 children)

Yeah, the next nightmare is starting to get tired of waiting. doomer

[–] jackmaoist@hexbear.net 10 points 8 months ago

They can make a bubble about Quantum Computing as a treat.

[–] frogbellyratbone_@hexbear.net 22 points 8 months ago (1 children)

this isn't me fanboying LLM corporations. pop pop pop. this article is fucking stupid though.

On Tuesday, tech stocks suffered a shock sell-off after a report from Massachusetts Institute of Technology (MIT) researchers warned that the vast majority of AI investments were yielding “zero return” for businesses.

no they didn't. :// there was a small 1.5% "shock sell-off" (fucking lol) before rebounding. they're only down 0.5% over the past 5 days.

even softbank, who the article focuses on, is up 36.5% (god damn) over the past month. that's huge.

this week’s sell-off has yet to shift from a market correction to a market rout

omg stfuuuuuuuuuu. a correction means -10%; we're at barely -0.5%, not even a twentieth of the way there.

[–] Carl@hexbear.net 30 points 8 months ago (1 children)

come to think of it, "the market responded to an MIT study suggesting that the technology is worthless" is far too coherent for the stock market. The crash/bubble pop will come because a black cat crossed someone's path or a meteor is seen in the sky over the Bay Area.

[–] Formerlyfarman@hexbear.net 10 points 8 months ago (1 children)

It's always those "Comet sighted" events.

[–] Florn@hexbear.net 8 points 8 months ago

I wish I lived in more enlightened times.

[–] LangleyDominos@hexbear.net 17 points 8 months ago

Unfortunately it will probably be like the dotcom crash. Websites/services only became stronger afterwards, becoming inseparable from daily life. If a crash happens this year, the Facebook of AI is coming around 2030.

[–] Rom@hexbear.net 13 points 8 months ago

LET'S FUCKING GOOOOOOOO lets-fucking-go