this post was submitted on 26 Feb 2026
209 points (94.5% liked)

Hacker News

Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

[–] Passerby6497@lemmy.world 3 points 2 days ago (1 children)

A year ago I would have had a similar opinion to the author's, but in the last 3-4 months specifically, it feels like AI-based tools have made a huge leap. I went from using short snippets for learning to letting AI implement entire features and actually being happy with the result.

Maybe if you're only working with languages and features that are well documented and have a lot of examples out there. I've been trying to use LLM coding assistants to help with process automation at work, and the results are a couple steps up from dog vomit more often than not.

AI code assistants aren't making big strides; you're likely just seeing them refine common scenarios to the point where they become very usable for your specific use cases.

[–] setsubyou@lemmy.world 2 points 2 days ago

Sure. How much the language or features change is also important. For example Claude can build entire iPhone apps in Swift but you bet they’re going to be full of warnings about things that are illegal now and you bet if there’s any concurrency stuff it’s going to be a wild mix of everything async that ever existed in Swift. It makes sense too because LLMs are trained on code that’s, on average, outdated.
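The "wild mix of everything async that ever existed in Swift" point can be sketched concretely. A minimal, hypothetical example of what such generated code often looks like: a GCD-era completion-handler API and modern async/await living in the same file, bridged with a continuation. All function names here are illustrative, not from any real app.

```swift
import Foundation

// Pre-async/await style: GCD plus an @escaping completion handler.
// This is the pattern older training data is full of.
func fetchUser(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        completion("alice")
    }
}

// Modern structured-concurrency style.
func fetchScore() async -> Int {
    return 42
}

// A call site forced to mix both worlds: the old callback API is
// wrapped in a checked continuation so it can be awaited.
func loadProfile() async -> (String, Int) {
    let name = await withCheckedContinuation { cont in
        fetchUser { cont.resume(returning: $0) }
    }
    let score = await fetchScore()
    return (name, score)
}
```

Each half works on its own; the problem is that generated code tends to interleave them arbitrarily, and under Swift 6's strict concurrency checking a lot of the older patterns now produce exactly the warnings described above.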

But what it’s good at and what it’s not good at is just part of what you need to know when using AI, like with any other tool. I have projects too where it can at best replace Google, so I don’t try to make it implement those by itself.