this post was submitted on 22 Aug 2025
65 points (100.0% liked)
technology
I find they have practical uses once you spend the time to figure out what they can do well. For example, for coding, they can do a pretty good job of making a UI from a JSON payload, crafting SQL queries, making endpoints, and so on. For any fairly common task that involves boilerplate code, you'll likely get something decent to work with. I also find that sketching out the structure of the code you want by writing the signatures for the functions, and then having the LLM fill them in, works pretty reliably. Where things go off the rails is when you give them too broad a task, or ask them to do something domain specific. And as a rule, if they don't get the task done in one shot, then there's very little chance they can fix the problem by iterating.
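To make the signature-sketching workflow concrete, here's a minimal sketch (the function and its purpose are made up for illustration, not from any real project): the developer writes the signature and doc comment, and the LLM is asked to fill in the body, which a human then reviews.

```javascript
// Developer-written sketch: just the signature and a doc comment.
// The body below is the kind of thing an LLM would fill in,
// which you then review like any other code.

/** Parse "key=value" pairs from a query string into an object. */
function parseQuery(queryString) {
  const params = {};
  for (const pair of queryString.split("&")) {
    if (!pair) continue;
    const [key, value = ""] = pair.split("=");
    params[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return params;
}

// returns { a: "1", b: "hello world" }
console.log(parseQuery("a=1&b=hello%20world"));
```

The point is that the structure and the contract stay under your control; only the mechanical part is delegated.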
They're also great for working with languages you're not terribly familiar with. For example, I had to work on a JS project using React, and I haven't touched either in years. I know exactly what I want to do, and how I want the code structured, but I don't know the nitty gritty of the language. LLMs are a perfect bridge here because they'll give you idiomatic code without you having to constantly look stuff up.
Overall, they can definitely save you time, but they're not a replacement for a human developer, and the time saving is mostly a quality of life improvement for the developer as opposed to some transformational benefit in how you work. And here's the rub in terms of a business model. Having what's effectively a really fancy autocomplete isn't really the transformative technology companies like OpenAI were promising.
With React I would be surprised if it was really idiomatic. The idioms change every couple of years, and React has its own state management quirks.
I think that's going to change now though, as a result of LLMs. We're going to be stuck with whatever was the norm when the data was harvested, forever
That assumes the use of these tools becomes dominant among library developers, which I don't think it will. But they may start writing their libraries in a way that is meant to be LLM-friendly: simple, repetitious, and with documentation and building blocks that are easily associated with semi-competent dev instructions.
It uses hooks and functional components which are the way most people are doing it from what I know. I also find the code DeepSeek and Qwen produce is generally pretty clear and to the point. At the end of the day what really matters is that you have clean code that you're going to be able to maintain.
I also find that you can treat components as black boxes. As long as it's behaving the way that's intended it doesn't really matter how it's implemented internally. And now with LLMs it matters even less because the cost of creating a new component from scratch is pretty low.
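A plain-JS sketch of that black-box idea (the names here are hypothetical): two implementations with the same interface are interchangeable as long as the observable behavior matches, so it's cheap to regenerate one from scratch.

```javascript
// Hypothetical "component": takes a list of items, returns a label string.
function renderBadgeV1(items) {
  return items.map((item) => item.name.toUpperCase()).join(", ");
}

// A from-scratch rewrite (e.g. LLM-generated): different internals,
// same contract, so callers don't care which one they get.
function renderBadgeV2(items) {
  const labels = [];
  for (const { name } of items) {
    labels.push(name.toUpperCase());
  }
  return labels.join(", ");
}

const items = [{ name: "alpha" }, { name: "beta" }];
console.log(renderBadgeV1(items) === renderBadgeV2(items)); // true
```

As long as you test against the interface rather than the internals, swapping implementations is low-risk.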
Does it memoize with the right selection of stateful variables by default? I can't imagine it does without a very specific prompt or unless it is very simple boilerplate TODO app stuff. How about nested state using contexts? I'm sure it can do this but will it know how best to do so and use it by default?
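For context, React's documented dependency check for useMemo/useEffect is a shallow comparison of the dependency array with Object.is. A plain-JS sketch of that rule (this mirrors the documented behavior, it is not React's actual implementation):

```javascript
// Sketch of useMemo-style caching: recompute only when some dependency
// changes by Object.is. Picking the *right* deps array is exactly the
// part an LLM is likely to get wrong.
function makeMemo() {
  let prevDeps = null;
  let cached;
  return function memo(compute, deps) {
    const changed =
      prevDeps === null ||
      deps.length !== prevDeps.length ||
      deps.some((dep, i) => !Object.is(dep, prevDeps[i]));
    if (changed) {
      cached = compute();
      prevDeps = deps;
    }
    return cached;
  };
}

const memo = makeMemo();
let calls = 0;
const expensive = () => { calls += 1; return calls; };

memo(expensive, [1, "a"]); // computes
memo(expensive, [1, "a"]); // same deps: cached, not recomputed
memo(expensive, [2, "a"]); // dependency changed: recomputes
console.log(calls); // 2
```

If the deps list is too narrow you get stale values; too broad and the memoization is pointless, which is why this is a good test of whether generated React code is actually idiomatic.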
In my experience, LLMs produce a less repeatable and correct version of what codegen tools do, more or less. You get a lot of repetition and inappropriate abstractions.
Also just for context, hooks and functional components are about 6-7 years old.
I find Gemini really useful for coding, but as you say it's no replacement for a human coder, not least because of the way it fails silently, e.g. in my experience it will always come up with the hackiest solution imaginable for any sort of race condition, so someone has to be there to say WTF GEMINI, ARE YOU DRUNK. I think there is something kind of transformative about it: it's like going from a bicycle to a car. But the thing is both need to be driven, and the latter has the potential to fail even harder.
Exactly, it's a tool, and if you learn to use it then it can save you a lot of time, but it's not magic and it's not a substitute for understanding what you're doing.