this post was submitted on 28 Mar 2026
113 points (88.4% liked)

[–] XLE@piefed.social 70 points 22 hours ago (2 children)

How did I end up on a timeline where Microsoft is talking about rolling back AI in its OS and practically acknowledging vibe coding caused problems... and Linux developers are talking about ramping up its usage?

Obviously Microsoft is still worse here, but what are these trajectories?

[–] kreskin@lemmy.world 14 points 7 hours ago* (last edited 7 hours ago) (1 children)

What I think you are also seeing is AI sucking at some things and doing better than humans in others.

AI is pretty great at adding unit tests to code, for example, where humans do a just-OK job. Or at writing code for a very direct, well-scoped, small problem (see the toy example below).

AI is just OK at understanding product nuance and choices during larger implementations, or at getting end-to-end coding right for complex use cases.
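
To make that concrete, here is the sort of tiny, self-contained problem, plus the unit tests around it, that the tools tend to nail. This is purely a made-up illustration, not output from any particular tool or code from any real project:

# Hypothetical example: a small, well-scoped function and the kind of unit tests
# an assistant will happily generate for it. Nothing here comes from a real codebase.
import unittest

def parse_version(s: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = s.split(".")
    return int(major), int(minor), int(patch)

class TestParseVersion(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("6.12.1"), (6, 12, 1))

    def test_rejects_garbage(self):
        # "not-a-version".split(".") yields one piece, so unpacking raises ValueError.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()

Narrow inputs, obvious edge cases, no product context needed. The further a task gets from that, the shakier the output.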

[–] XLE@piefed.social -2 points 6 hours ago (1 children)

Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development be able to succeed at something that Microsoft (which has an inside track with AI, far more money, and far more maturity) failed at?

[–] kreskin@lemmy.world 3 points 5 hours ago (1 children)

Could be a lot of reasons. A big one I see, working at a large company myself, is that AI needs to draw on a lot of data to do its work. A huge amount of contextual data too. A company like MSFT inevitably needs to provide AI with a walled-off, curated set of data and prevent any of it from leaking. Its AIs will not have access to the same amount of data an AI can draw on outside MSFT.

[–] XLE@piefed.social 1 points 3 hours ago

Leaking? Microsoft basically owns OpenAI. They pull the data in and don't need it to go out. The whole industry is fighting to close off competition, meaning they know they're on top.

So do you have any reason to assume the open-source community's use of these (closed-source) other models is somehow bucking all real-world evidence to the contrary, or are we just hoping and praying?

[–] Mongostein@lemmy.ca 98 points 1 day ago (5 children)

Linux kernel czar?

I’m curious about this but I refuse to click the link because that just sounds so fucking stupid.

[–] daychilde@lemmy.world 1 points 3 hours ago

Your loss. The Register has been rock-solid tech news (if a bit cheeky) for decades.

[–] wewbull@feddit.uk 6 points 11 hours ago

We Brits use Czar as a colloquialism for "person in charge of...".

So the head of the water regulator might be referred to as the water Czar (and they deserve a similar fate).

[–] inari@piefed.zip 69 points 1 day ago (12 children)

The headline is stupid but the article is interesting. Greg is saying that since last month, for some unknown reason, AI bug reports have gotten good and useful, and have become something current Linux maintainers can handle.

[–] justOnePersistentKbinPlease@fedia.io 39 points 1 day ago (1 children)

Yeah, but then the article says that the "good" ones still need reams of human work to make them acceptable.

Article is propaganda.

[–] inari@piefed.zip 23 points 1 day ago (1 children)

Greg says they're mostly small bug fixes and that the current maintainers can handle it; not sure where you're getting the "reams" bit from.

[–] justOnePersistentKbinPlease@fedia.io 13 points 1 day ago (1 children)

Says in the article that they aren't good to go, needing code review, code cleanup, comment and documentation cleanup, etc.

[–] inari@piefed.zip 26 points 1 day ago

Yeah I mean, the goal is not to replace code maintainers, only to assist them in their work. Greg in general seems optimistic about it:

"I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better."

[–] deadbeef79000@lemmy.nz 16 points 23 hours ago

It's an affectation of The Register; they like reporting real news with a sometimes quirky voice. It's also British, so some of the language and humour doesn't quite work as well in other parts of the world.

[–] frongt@lemmy.zip 12 points 1 day ago

That's The Register's style. They're a little weird with their copy, but their reporting has been solid, in my experience.

[–] riskable@programming.dev 18 points 1 day ago (4 children)

Either a lot more tools got a lot better,

That's what it was. Even the free, open source models are vastly superior to the best of the best from just a year ago.

People got it into their heads that AI is shit when it was shit, and decided at that moment that it was going to be stuck in that state forever. They forget that AI is just software, and software usually gets better over time. Especially open source software, which is what all the big AI vendors are building their tools on top of.

We're still in the infancy of generative AI.

[–] frongt@lemmy.zip 27 points 1 day ago

I tried one for the first time yesterday. It was mediocre at best. Certainly not production code. It would take just as much effort to refine it as it would to just write it in the first place.

[–] XLE@piefed.social 10 points 22 hours ago (2 children)

If you read AI critics, you will see people presenting solid financial evidence of the failure of AI companies to do what they promised. Remember Sam Altman promised AGI in 2025? I certainly do, and now so do you.

Do you have any concrete evidence that this financial flop will turn around before it runs out of money?

[–] riskable@programming.dev 7 points 14 hours ago (1 children)

Assume all the big AI firms die: Anthropic, OpenAI, Microsoft, Google, and Meta. Poof! They're gone!

Here would be my reaction: "So anyway... have you tried GLM-7? It's amazing! Also, there's a new workflow in ComfyUI I've been using that works great to generate..."

Generative AI is here to stay. You don't need a trillion dollars worth of data centers for progress to continue. That's just billionaires living in an AGI fantasy land.

[–] XLE@piefed.social 0 points 11 hours ago (2 children)

I'm sick and tired of AI fans making statements like

Generative AI is here to stay

without evidence.

Citation needed.

[–] riskable@programming.dev 5 points 7 hours ago (1 children)

Um... Where would it go? I've got about 30 models on my machine right now and I download new ones to try out all the time.

Are you suggesting that they'd all just magically disappear one day‽
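
For what it's worth, running one of the models already on disk looks roughly like this. A minimal sketch using llama-cpp-python; the model path is just a placeholder for whatever open-weights GGUF file you happen to have:

# Minimal local-inference sketch (assumes `pip install llama-cpp-python`).
# The model path is a placeholder for any open-weights GGUF file already on disk;
# nothing here talks to a cloud service or needs an API key.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-open-weights.gguf", n_ctx=4096)

result = llm("Summarize what a kernel oops is in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])

Once the weights are on your drive, nobody going bankrupt can reach in and delete them.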

[–] XLE@piefed.social -1 points 6 hours ago (1 children)

Where do you think the "new ones" are coming from?

[–] riskable@programming.dev 3 points 3 hours ago (1 children)

Same places as usual: Academia and open source foundations.

That's where 99% of all advancements in AI come from. You don't actually think Big AI is paying as many people to do computer science and mathematics research as all the universities in the world (with computer science programs)?

It's the same shit as always: Big companies commercialize advancements and discoveries made by scientists and researchers from academia (mostly) and give almost nothing back.

Big AI has partnerships with tons of schools, and if it weren't for that, they wouldn't be advancing the technology as fast as they are. In fact, the only reason many of these discoveries are made public at all is the agreements with the schools requiring that the discoveries/papers be published (so the schools, professors, researchers, and students can get credit).

Like I was saying before: You don't need a trillion dollars in data centers to do this stuff. Almost all the GPUs and special chips bought (and preordered, sigh) by Big AI are being used to serve their customers (at great expense), not for training.

Training used to be expensive, but so many advancements have been made that this is no longer the case. Instead, most of the resources going into "AI data centers" (and research) are about making inference more efficient. That's the step that comes after you give an AI a prompt.

Training a super modern AI model can be done with a university's data center or a few hundred thousand to a few million dollars of rented GPUs/compute. It doesn't even take that long!

Generative AI improves at a ridiculously fast rate. In nearly all the ways you could think of: Training, inference (e.g. figuring out user intent), knowledge, understanding, and weirder, fluffier stuff like "creativity" (the benchmarks of which are dubious, BTW).

[–] XLE@piefed.social -1 points 3 hours ago* (last edited 3 hours ago)

Before we spin into a tangent about theory and "what ifs" etc, care to link me to all these great models from academics and open-source institutions?

Because right now, the only companies I see making advancements in "AI" are burning through obscene amounts of cash, with no end in sight.

And there is no evidence the cost of inference is going down, and even Anthropic admits training will continue burning resources.

[–] unpossum@sh.itjust.works -2 points 11 hours ago (1 children)
[–] XLE@piefed.social 2 points 9 hours ago

Oh wow, comparing a thing to a completely different thing without demonstrating the comparison is valid.

Exactly the non-evidence I expected.

[–] freeman@sh.itjust.works 11 points 17 hours ago

Whether AI can reliably detect issues and generate working code is a whole different thing from CEOs' delusions and hyperbole to game the market. Their financial success is also irrelevant; in fact, it's better if the sub/token model fails and we are left with locally run models.

[–] 4am@lemmy.zip 3 points 1 day ago

They should all be destroyed

[–] AliasAKA@lemmy.world 1 points 23 hours ago (1 children)

Traditional software was developed by humans as an artifact that got better to the degree that humans improved it for some task, and even that was not guaranteed. Windows 11 is proof of that, and there is a laundry list of regressions and bugs introduced into software developed by humans. I acknowledge you say usually, and especially for open source; I lukewarm-agree with that statement but disagree that large LLMs or other generative models will follow this trend, and merely want to point out that software usually introduces bugs as it's developed, which are hopefully fixed by people who can reason over the code.

Which brings us to AI models, and really they should just be called transformer models; they are statistical tensor-product machines. They are not software in the traditional sense. They are trained to match their training input in a statistical sense. If the input data is corrupted, the model will actually get worse over time, not better. If the data is biased, it will get worse over time, not better. With the amount of slop generated on the web, it is extraordinarily hard to denoise and decide what's good data and what's bad data that shouldn't be used for training. Which means the scaling we've seen with increased data will not necessarily hold. And there's no clear indication that scaling the model size, which is already largely impractical, is having some synergistic or emergent effect as hoped and hyped.

Also, we’re really not in the infancy of AI. Maybe the infancy of widespread hype for it, but the idea of using tensor products for statistical learning algorithms goes back at least as far as Smolensky, maybe before, and that was what, 1990?

We are in the infancy of quantum-style compute, I'd say, so we really don't have much to draw on there beyond theoretical models.

Generative LLM models have largely plateaued in my opinion.

[–] Peruvian_Skies@sh.itjust.works 3 points 13 hours ago

We're in the infancy of AI in the sense that widespread use, testing and properly-funded development of these technologies only began a few years ago when massively parallelized processing became affordable enough, even though the concepts are older. You could say we're in the infancy of practical AI, not theoretical.

[–] SaneMartigan@aussie.zone 11 points 1 day ago

Video killed the radio czar?

[–] KiwiTB@lemmy.world 9 points 1 day ago

Sounds like time for a new czar
