this post was submitted on 28 Mar 2026
117 points (88.7% liked)

Technology
[–] XLE@piefed.social 71 points 1 day ago (2 children)

How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems... and Linux developers are talking about ramping up its usage?

Obviously Microsoft is still worse here, but what are these trajectories?

[–] kreskin@lemmy.world 16 points 9 hours ago* (last edited 9 hours ago) (1 children)

What I think you're also seeing is that AI sucks at some things and does better than humans at others.

AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing code for a small, well-scoped, clearly defined problem.

AI is just OK at understanding product nuance and choices during larger implementations, or at getting end-to-end coding right for any complex use case.

[–] XLE@piefed.social -2 points 8 hours ago (1 children)

Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development succeed at something that Microsoft (which has an inside track with AI, far more money, and far more maturity) failed at?

[–] kreskin@lemmy.world 3 points 7 hours ago (1 children)

Could be a lot of reasons. A big one I see, working at a large company myself, is that AI needs to draw from a lot of data to do its work, including a huge amount of contextual data. A company like MSFT inevitably has to give its AI a walled-off, curated set of data and prevent any of it from leaking. Its AIs will not have access to the same amount of data as an AI outside MSFT can draw from.

[–] XLE@piefed.social 1 point 6 hours ago

Leaking? Microsoft basically owns OpenAI. They pull the data in and don't need it to go out. The whole industry is fighting to close off competition, which means they know they're on top.

So do you have any reason to assume the open-source community's use of these (closed-source) other models is somehow bucking all real-world evidence to the contrary, or are we just hoping and praying?