this post was submitted on 22 Aug 2025
[–] peeonyou@hexbear.net 34 points 8 months ago (3 children)

Honestly, I can't imagine these LLMs are actually contributing any sort of benefit when you consider the amount of trash you have to wade through and fix once they've done what they've done. For every quickly typed-up professional e-mail or procedure they produce, they're wasting multiple hours of programmer time by introducing bs into codebases and trampling over coding conventions, which then has to be reviewed and fixed. I imagine it will get to the point where AI can do things on its own without the hallucinations and the flat-out errors and whatnot, but it ain't now and I don't think it's anytime soon.

[–] yogthos@lemmygrad.ml 24 points 8 months ago (2 children)

I find they have practical uses once you spend the time to figure out what they can do well. For example, for coding, they can do a pretty good job of making a UI from a JSON payload, crafting SQL queries, making endpoints, and so on. For any fairly common task that involves boilerplate code, you'll likely get something decent to work with. I also find that sketching out the structure of the code you want by writing the signatures for the functions and then having the LLM fill them in works pretty reliably. Where things go off the rails is when you give them too broad a task, or ask them to do something domain-specific. And as a rule, if they don't get the task done in one shot, there's very little chance they can fix the problem by iterating.
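
The signature-sketching workflow looks something like this (a hypothetical example; the function name and data shape are made up for illustration). The doc comment and signature are written by hand, and the body is the kind of thing an LLM will usually fill in correctly in one shot:

```javascript
// Hand-written sketch: the name, signature, and doc comment pin down
// exactly what you want before handing the stub to the LLM.

/**
 * Sum order totals per customer.
 * @param {{customerId: string, total: number}[]} orders
 * @returns {Map<string, number>} customerId -> summed total
 */
function totalsByCustomer(orders) {
  // Body of the sort an LLM tends to fill in reliably:
  const totals = new Map();
  for (const { customerId, total } of orders) {
    totals.set(customerId, (totals.get(customerId) ?? 0) + total);
  }
  return totals;
}
```

The point is that the human keeps control of the structure and the contract, and the LLM only fills in mechanical boilerplate.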

They're also great for working with languages you're not terribly familiar with. For example, I had to work on a JS project using React, and I haven't touched either in years. I know exactly what I want to do and how I want the code structured, but I don't know the nitty-gritty of the language. LLMs are a perfect bridge here because they'll give you idiomatic code without you having to constantly look stuff up.

Overall, they can definitely save you time, but they're not a replacement for a human developer, and the time saving is mostly a quality of life improvement for the developer as opposed to some transformational benefit in how you work. And here's the rub in terms of a business model. Having what's effectively a really fancy autocomplete isn't really the transformative technology companies like OpenAI were promising.

[–] Chana@hexbear.net 14 points 8 months ago (2 children)

With React I would be surprised if it was really idiomatic. The idioms change every couple of years, and there are state-management quirks on top of that.

[–] Andrzej3K@hexbear.net 6 points 8 months ago (1 children)

I think that's going to change now though, as a result of LLMs. We're going to be stuck with whatever was the norm when the data was harvested, forever

[–] Chana@hexbear.net 2 points 8 months ago

Assuming the use of these tools is dominant over library developers. Which I don't think it will be. But they may write their libraries in a way that is meant to be LLM-friendly. Simple, repetitious, and with documentation and building blocks that are easily associated with semi-competent dev instructions.

[–] yogthos@lemmygrad.ml 5 points 8 months ago (1 children)

It uses hooks and functional components, which is how most people are doing it from what I know. I also find the code DeepSeek and Qwen produce is generally pretty clear and to the point. At the end of the day, what really matters is that you have clean code that you're going to be able to maintain.
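
For anyone not following the React jargon, the hooks + functional-component style looks roughly like this. This is a toy sketch: `useState` here is a one-shot stand-in defined locally so the snippet runs on its own; in real React it's imported from `react` and `setCount` triggers a re-render, and the component would return JSX rather than a plain object:

```javascript
// Toy one-shot useState so this runs without React installed.
// The real hook comes from 'react' and re-invokes the component on updates.
function useState(initial) {
  let value = initial;
  const setValue = (next) => { value = next; };
  return [value, setValue];
}

// A functional component with a hook: the currently idiomatic React style,
// as opposed to the older class components with this.state and lifecycle
// methods.
function Counter({ start = 0 }) {
  const [count, setCount] = useState(start);
  // Real React would return JSX here; a plain object stands in.
  return { text: `Count: ${count}`, onClick: () => setCount(count + 1) };
}
```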

I also find that you can treat components as black boxes. As long as it's behaving the way that's intended it doesn't really matter how it's implemented internally. And now with LLMs it matters even less because the cost of creating a new component from scratch is pretty low.

[–] Chana@hexbear.net 2 points 8 months ago

Does it memoize with the right selection of stateful variables by default? I can't imagine it does without a very specific prompt or unless it is very simple boilerplate TODO app stuff. How about nested state using contexts? I'm sure it can do this but will it know how best to do so and use it by default?
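
The dependency-array pitfall being described can be shown with a toy single-slot version of React's `useMemo` (not the real hook, just a minimal sketch of its caching behavior): if the computation reads a value that isn't listed in the deps array, the memoized result silently goes stale:

```javascript
// Toy single-slot memoizer mimicking useMemo's deps comparison:
// recompute only when some entry in the deps array changed.
function makeUseMemo() {
  let lastDeps = null;
  let lastValue;
  return function useMemo(compute, deps) {
    const changed = lastDeps === null || deps.some((d, i) => d !== lastDeps[i]);
    if (changed) {
      lastValue = compute();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const useMemo = makeUseMemo();
let a = 1, b = 10;

let v1 = useMemo(() => a + b, [a]); // computed fresh: 11
b = 20;
let v2 = useMemo(() => a + b, [a]); // STALE: still 11, because b is read
                                    // by the computation but missing from deps
a = 2;
let v3 = useMemo(() => a + b, [a]); // deps changed, recomputed: 22
```

This is exactly the kind of subtle, silent wrongness that's easy for an LLM (or a person) to generate and hard to catch in review.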

In my experience, LLMs produce a less repeatable and correct version of what codegen tools do, more or less. You get a lot of repetition and inappropriate abstractions.

Also just for context, hooks and functional components are about 6-7 years old.

[–] Andrzej3K@hexbear.net 4 points 8 months ago (1 children)

I find Gemini really useful for coding, but as you say it's no replacement for a human coder, not least because of the way it fails silently. E.g., in my experience it will always come up with the hackiest solution imaginable for any sort of race condition, so someone has to be there to say WTF GEMINI, ARE YOU DRUNK. I think there is something kind of transformative about it: it's like going from a bicycle to a car. But the thing is, both need to be driven, and the latter has the potential to fail even harder.

[–] yogthos@lemmygrad.ml 5 points 8 months ago

Exactly, it's a tool, and if you learn to use it then it can save you a lot of time, but it's not magic and it's not a substitute for understanding what you're doing.

[–] Chana@hexbear.net 21 points 8 months ago

The most useful application is in making garbo marketing images for products that used to be 100% photoshopped instead. Cool, your fake product has an "AI" water splash instead of one from Getty. Nothing of value gained or lost, except a recognition of how meaningless it is.

[–] MolotovHalfEmpty@hexbear.net 3 points 8 months ago

Also, the reason all the hype and 'culture' around these products focus on individual end users (write me a poem, be a chatbot, make me Pixar art, etc.) is because they're good at being flexible, at applying the algorithm to different shallow tasks. But when it comes to specific, repeated, reliable use cases for businesses, they're much, much worse. The error rates are high, their actual capacity for 'institutional memory' and reliable repetition is poor, and if you're replicating a known process previously done by people, you still have to train or recruit new people to get the best out of the tech.