this post was submitted on 26 Feb 2026
209 points (94.5% liked)
Hacker News
Absolutely... Thank you, from the very depths of my heart and soul... dear Tom Gracey, programmer, artist... for the marvel you do... for the wisest attitude, for the belief in humans... in effort... in art...
Usually only the last statement is true.
I think there is quite a bit more subtlety than that.
Yes, just asking an LLM, even the latest versions, to write some code goes from “wow, that’s pretty good” to “eh, not great but ok for something I’m going to use once and then not care about” to “fucking terrible” as the size goes up. And all the agents in the world don’t really make it better.
But… there are a few use cases that I have found interesting.
I like to add:
as well when I’m using it to help me learn a new language.
Just ask those people to read engrish; they'll stand a chance of understanding the issue. It's put together from clues about how the language works, and pieces can be copy-pasted, but without the knowledge to string it all together cohesively. Maybe not the best example, but coding is a language like English is a language, and we take a lot of our knowledge for granted when it comes to our intimate relationship with language.
Thank you... When I read "engrish"... my heart skipped a beat...
This assumes you never review it, meaning it’s at best an argument against vibe coding. It’s not an argument against using LLMs for coding in general.
Additionally, I’ve been writing software for a living for almost 30 years, and I could say the exact same thing about a lot of human generated code I’ve reviewed during that time. I don’t even know how often I’ve explained basic stuff like “security goes in the backend, not in the frontend” to humans.
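To make the "security goes in the backend, not in the frontend" point concrete, here's a minimal sketch in Python. All names (`client_ui_allows_delete`, `delete_user`, the toy in-memory `db`) are hypothetical illustrations, not anything from the thread; the point is only that a frontend check can be skipped entirely, so the server must enforce the rule itself.

```python
# Sketch of why authorization belongs in the backend: a client-side
# check is cosmetic and can always be bypassed by calling the
# endpoint directly. Names here are made up for illustration.

def client_ui_allows_delete(user: dict) -> bool:
    # Frontend check: only decides whether to *show* the delete button.
    return user.get("is_admin", False)

def delete_user(requester: dict, target: str, db: set) -> bool:
    # Backend check: enforced on every request, whether or not the
    # frontend ever showed the button.
    if not requester.get("is_admin", False):
        return False  # rejected server-side
    db.discard(target)
    return True

db = {"alice", "bob"}
attacker = {"name": "mallory", "is_admin": False}

# The attacker skips the UI and hits the endpoint directly...
result = delete_user(attacker, "alice", db)
print(result, sorted(db))  # backend refuses: False ['alice', 'bob']
```

If the only check lived in `client_ui_allows_delete`, the attacker's direct call would have succeeded.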
I certainly do code, and if I don’t understand what the LLM outputs, it doesn’t go in the project.
I’m a software engineer, I can’t judge LLMs in most other domains. I also don’t think there are no problems. A tool doesn’t have to be 100% problem free to be useful as long as you recognize the limitations.
I don’t see a problem with this. The post even mentions pulling code from stackoverflow, which is the same. But nobody ever argued that it has no uses in coding because you still have to read the code.
Honestly at this point any article just flat out dismissing LLMs for coding only reads to me like the author isn’t even trying to stay up to date. Which is understandable if they don’t like AI but makes posting about it a bit pointless.
A year ago I would have had a similar opinion to the author’s, but in the last 3-4 months specifically, it feels like AI-based tools made a huge leap. I went from using short snippets for learning to letting AI implement entire features and being actually happy with the result.
There is however still a pretty big difference between what it produces for common problems vs. what it produces for specialized difficult ones. It’s also inherently better at some languages than others based on the availability of up-to-date training material. So you need some amount of breadth in your projects to accurately judge it.
If you only try some AI service in free mode on one thing every month, for example, you’ll just have this very polarized opinion that’s either “AI is useless” or “AI can do everything”, but you won’t have a good idea of what it can and can’t do.
I've seen this claim made basically weekly for the last couple of years; if we were really having "generational leaps" monthly, these LLMs would actually be capable of doing what people claim they can.
It’s just my experience as someone who was pretty much forced to use AI for coding by my employer for the last few years. For the longest time it was completely useless. And then it suddenly wasn’t. I’m sure you’ll keep hearing this kind of story though, because people have different requirements and AI assisted coding or even agents don’t have to start working for everybody at the same time.
"Additionally, I’ve been writing software for a living for almost 30 years, and I could say the exact same thing about a lot of human generated code I’ve reviewed during that time. I don’t even know how often I’ve explained basic stuff like “security goes in the backend, not in the frontend” to humans."
This is the part I find so funny, as if all humans, hell, all DEVS, are actually capable of writing perfect code every time. Edit: (Reread and realized I didn't phrase this right) Whereas a normal- to lower-end dev wouldn't be able to write that program in less time than an LLM could whip up a similarly buggy program.
Like do y'all (expert coders who write things like this article) interact with software outside of what you actually write? I've literally never worked with a larger program that didn't have some kind of bug or strange behavior, it's just how it goes.
I do think it's an interesting dynamic we are seeing play out, I've been wanting to learn and get better at code and this is simultaneously a great time to learn and a horrible time to learn lmao.
Too many people assume that, since genAI is a machine, it'll never make any mistakes.
Maybe if you're only working with languages and features that are well documented and have a lot of examples out there. I've been trying to use LLM coding to assist me with a process automation at work, and the results are a couple steps up from dog vomit more often than not.
AI code assistants aren't making big strides; you're likely just seeing them refine common scenarios to the point where they become very usable for your specific use cases.
Sure. How much the language or features change is also important. For example Claude can build entire iPhone apps in Swift but you bet they’re going to be full of warnings about things that are illegal now and you bet if there’s any concurrency stuff it’s going to be a wild mix of everything async that ever existed in Swift. It makes sense too because LLMs are trained on code that’s, on average, outdated.
But what it’s good at and what it’s not good at is just part of what you need to know when using AI, just like with any other tool. I have projects too where it can at best replace google, so I don’t try to make it implement those by itself.
Sounds like Tom tried LLM-assisted coding once about 6 model release cycles ago and hasn't revisited it.
ultra copium.jpg
Neat strawman, but I don't see improvements plateauing yet.
soul.yml for some of these people lol
Just ask any model to write minimally complex formally verified code and watch it crash and burn.
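For a sense of what "formally verified code" means here, a toy example in Lean 4 (chosen by me, not from the thread): a theorem whose proof the compiler checks mechanically. Even at this trivial scale the proof leans on an exact standard-library lemma name (`List.reverse_reverse`); real verification tasks chain thousands of such steps, which is where models tend to crash and burn.

```lean
-- A tiny verified fact: reversing a list twice gives the list back.
-- The proof defers to the standard-library lemma List.reverse_reverse;
-- the compiler rejects the file if the proof doesn't hold.
theorem rev_rev (l : List Nat) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

Unlike ordinary code, a hallucinated lemma name or a subtly wrong proof step here isn't a latent bug, it's an immediate compile failure, so there's no "looks plausible" middle ground for a model to hide in.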