this post was submitted on 28 Mar 2026
117 points (88.7% liked)
Technology
How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems... and Linux developers are talking about ramping up its usage?
Obviously Microsoft is still worse here, but what are these trajectories?
What I think you are also seeing is AI sucking at some things and doing better than humans in others.
AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing code for a very direct, well-scoped small problem.
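To illustrate the kind of "direct, well-scoped small problem" being described: here's a hypothetical little function (the `slugify` name and behavior are my own example, not from the thread) together with the sort of edge-case unit tests LLMs tend to generate reliably.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and collapse runs of non-alphanumerics into '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Typical machine-generated tests: short, exhaustive on edge cases.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --  ") == ""            # all-punctuation input
assert slugify("Already-fine") == "already-fine"
```

A task like this has a crisp spec and no product nuance, which is exactly where the commenter says AI does well.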
AI is just OK at understanding product nuance and choices during larger implementations, or getting end to end coding right for any complex use cases.
Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development succeed at something that Microsoft (which has the inside track with AI, far more money, and far more maturity) failed at?
Could be a lot of reasons. A big one I see, working at a large company myself, is that AI needs to draw from a lot of data to do its work, and a huge amount of contextual data too. A company like MSFT inevitably needs to provide AI with a walled-off, curated set of data and prevent any of it from leaking. Its AIs will not have access to the same amount of data an AI outside MSFT can draw from.
Leaking? Microsoft basically owns OpenAI. They pull the data in and don't need it to go out. The whole industry is fighting to close off competition, meaning they know they're on top.
So do you have any reason to assume the open-source community's use of these (closed-source) other models is somehow bucking all real-world evidence to the contrary, or are we just hoping and praying?
The variable you're missing is time. There was a big shift in quality by Christmas, and the latest models are much better programmers than models from one year ago. The quality is improving so fast that most people still think of AI as a "slop generator", when it can actually write good code and find real bugs and security issues now.
As someone who has to sift through other people's LLM code every day at my job, I can confirm it has definitely not gotten better in the past three months.
We require you to submit a markdown plan before working on a feature, which must include full context, scope, and implementation details. Also a verification-tests markdown file covering the happy path and the critical failure modes that would affect customers, along with how the tests were performed. Both must be checked in with the commit. More complex, large features require UML diagrams of architecture, sequences, etc. to be checked in too.
If your plan or verification docs have wrong context, miss obvious implementation flaws, or have bad coupling, architecture, interfaces, boundary conditions, missing test cases, etc., then the PR is rejected.
Every developer's performance is judged as a systems engineer. Thoughtless features without systems docs and continued lack of improvement in your systems thinking gets you PIPed.
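A minimal sketch of what such a plan file might look like (the headings, feature name, and table entries are illustrative guesses, not the commenter's actual template):

```markdown
# Plan: retry-with-backoff for upload API   <!-- hypothetical feature -->

## Context
Why the change is needed; link to the issue and the affected modules.

## Scope
What is in scope for this commit, and explicitly what is not.

## Implementation
Key interfaces, coupling points, and boundary conditions considered.

## Verification
| Case                    | Type         | How tested      |
|-------------------------|--------------|-----------------|
| Normal retry succeeds   | Happy path   | Unit test       |
| Backoff cap is reached  | Failure mode | Unit + manual   |
```

The point of the process seems to be that the reviewable artifact is the systems thinking itself, not just the resulting diff.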
That's the thing though. Even if the code is good, the plans are good, the outputs are good, etc, it still devolves into chaos after some time.
If you use AI to generate a bunch of code you then don't internalize it as if you wrote it. You miss out on reuse patterns and implementation details which are harder to catch in review than they are in implementation. Additionally, you don't have anyone who knows the code like the back of their hand because (even if supervised) a person didn't write the code, they just looked over it for correctness, and maybe modified it a little bit.
It's the same reason why sometimes handwritten notes can be better for learning than typed notes. Yeah, one is faster, but the intentionality of slowing down and paying attention to little details goes a long way toward making code last longer.
There's maybe something to be said about using LLMs as a sort of sanity check code reviewer to catch minor mistakes before passing it on to a real human for actual review, but I definitely see it as harmful for anything actually "generative"
How do you manage?
The work-life balance is otherwise pretty good and my manager/direct coworkers are chill 🤷
Otherwise I would have lost motivation a long time ago
The other missing variable is actually knowing how to use the tools. Vibe coding still produces slop. Good AI-generated code requires understanding what you're trying to achieve and giving the AI clear context on what design paradigms to follow, what libraries to use and so on. Basically, if you know how to write good code without AI, it can help you to do so faster. If you don't, it'll help you to write slop faster. Garbage in, garbage out.
This is a good answer. AI tools won't make someone who has not yet developed programming skills into a good programmer. For someone who has a good grasp of implementation patterns and the toolkit for a given tech stack, these tools can speed things up by putting you in the role of a senior programmer reviewing code from multiple newbies.
I'm finding that for it to work well, you have to split things up into very small pieces. You also have to really own your AI automation prompts and scripts. You can't just copy what some YouTuber did and expect it to work well in your environment.
I used to feel the same way, but I've come to realize it's slop that just looks better on the surface, not slop that is actually better.
At least it compiles most of the time now. But it's never quite right... Every time I have Claude write some section of code, 6 more things spring up that need to be fixed in the new code. Never-ending cycle. On the surface the code appears more readable, but it's not.