this post was submitted on 25 Feb 2026
16 points (90.0% liked)

Technology

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


What will happen if the Linux kernel starts having AI generated code in it?

top 22 comments
[–] cecilkorik@piefed.ca 9 points 6 hours ago

Code is code. "Hello, my name is Fred." is an equally valid sentence whether a human wrote it or an AI wrote it. It doesn't contain some magical AI pollution that makes it different from the normal human-written sentence "Hello, my name is Fred."

AI-written code, especially in largely "vibe coded" projects, is often produced by people with zero experience in maintaining or developing software. They don't understand what bad code is, and they have no way of recognizing it when they see it. If AI writes bad code that works, it is indistinguishable from good code that works, at least from the perspective of somebody who knows nothing about coding; and an increasing amount of code, in an increasing number of projects, is written and submitted by exactly such people.

The problem with AI is that it cannot be trusted to write good code. This is a problem when people who don't know any better start trusting it to write code. It sometimes, often, writes REALLY BAD CODE. Code with GIANT security flaws. Code that is unmaintainable, that does not fit with the rest of the code or the goals of the project.

The Linux kernel famously does not trust anyone except Linus. It has a very thorough process for reviewing all code that is suggested for inclusion, and all code is reviewed in extensive and sometimes expletive-laden detail, by Linus himself if it gets that far.

Bad AI code will be caught by the same process that catches bad human written code (which there is also a fair bit of).

Good AI code will be fine, because it gets reviewed by the same process that reviews good human written code, to ensure that it is, in fact, good code. Who actually wrote it is largely irrelevant, as long as the code is high quality and written in a safe, reliable way, because good code is still good code.

Vibe code, on the other hand, is not always good code. Sometimes it is good enough code. Often it is atrocious code. It is not a substitute for experienced software developers; if anything, it makes them more necessary. Reviewing complex AI-generated code quickly and accurately enough to keep up with the relentless onslaught of still more AI-generated code is no minor feat, and it takes skilled, experienced software developers to accomplish it.

[–] CanadaPlus 9 points 12 hours ago

Either it will work, or it won't and it will get removed.

I'm not really sure what's interesting about this.

[–] JohnEdwa@sopuli.xyz 21 points 15 hours ago* (last edited 15 hours ago) (1 children)

AI code is like alternative medicine: it's only called that when it's bad and doesn't work. If it does work, it's just called code. And the issue isn't using code made by AI, it's when people who don't know how to code assume the AI does, and blindly use its output without checking. That's very unlikely to happen with the Linux kernel, as the entire project is basically one constant code review, where it really doesn't matter whether bad code was written by a human or an AI.

Even Torvalds has used AI to help with his projects, because it would be kinda silly not to.

[–] N0x0n@lemmy.ml -5 points 5 hours ago* (last edited 5 hours ago) (2 children)

alternative medicine

Uhhhh... Did you know that most widely used medications are derived from plant secondary metabolites? Without "alternative medicine" our ancestors wouldn't have survived, and you and I wouldn't even be here talking about it.

Your argument falls apart the moment you realize how powerful and unique mother nature is... You just have to close your eyes, shut down your brain, and open your mind and heart.

[–] mech@feddit.org 7 points 5 hours ago

The plants that could be proven to work are now just called medicine.
Alternative medicine is medicine that's not scientifically proven to work.

[–] ruuster13@lemmy.zip 3 points 5 hours ago (1 children)
[–] luciole@beehaw.org 1 points 1 hour ago

Bro you're on Beehaw here. So please be nice. Objectively pointing out what's wrong with their comment is good. Simply not engaging is alright too.

[–] hendrik@palaver.p3x.de 17 points 17 hours ago* (last edited 17 hours ago) (1 children)

Nothing? I mean an if/else works the same way, no matter if it's written by a human or an AI or a cat or whatever...
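(A trivial sketch of that point, using a made-up `classify` function purely for illustration: the branch below executes identically no matter who or what typed it.)

```python
def classify(n: int) -> str:
    # The exact same if/else runs whether a human, an AI,
    # or a cat walking across the keyboard produced it.
    if n % 2 == 0:
        return "even"
    else:
        return "odd"

print(classify(4))  # even
print(classify(7))  # odd
```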

The Linux kernel developers are opinionated, though. Everything gets quite an amount of scrutiny. There will be several people having their eyes on submissions. They're looking for security vulnerabilities. They're adamant on maintainability. Have a standard on how to phrase things, indent lines... Send in the patches... They generally have high standards. I mean if someone submits some AI slop, there's a high chance it just gets declined and they're getting scolded for doing it.

There's of course always the chance someone tries to sneak something in. Or it creeps in on its own. But it's the same for bugs or security attacks. And maybe some of the devs work for companies who push AI and they'll do silly things. But the Linux community is pretty strong. They'll find a way to handle it. And maybe in the far future, AI will get as good as human programmers and there won't be an issue accepting AI code, because it has the same quality as human code. But that's science fiction as of now.

[–] XLE@piefed.social 2 points 1 hour ago (1 children)

Code is code supposedly, but choosing to pursue it with AI is a real danger: how much AI slop will be presented to developers by morons or trolls?

Here are 49 examples of worthless AI slop in just one project that wasted developers' time. All of them supposedly fix security bugs.

I can't think of a faster way to demoralize developers than a torrent of AI-powered slop spam, and considering Debian lost its entire data protection team, this seems like the worst possible time to make circumstances worse.

[–] hendrik@palaver.p3x.de 1 points 35 minutes ago* (last edited 28 minutes ago)

That's correct. I went with OP's original question, what happens after it happened... Not sure what OP meant, they're nowhere in the comments... Maybe they're a bot as well, and we're subject to the very same thing we're talking about, right now...

But sure. All the fabricated pull requests, issue reports etc. are massively problematic. We've got quite a bit of bot activity. Then we also need to protect our servers and platforms from their crawlers, which effectively DDoS everyone... Documentation went down the drain, StackOverflow, Reddit... The industry is trying to get rid of entry-level programmer positions, so you'll have a bad time entering the job market as a programmer of any kind... We're just drowning in all that stuff. Supply chains are also affected by AI; people need to choose between existing libraries, with their licensing and costs, or replacing them with something an AI generated...

[–] unreliable@discuss.tchncs.de 5 points 17 hours ago (1 children)

That would mean all the good developers are dead and the world is in a death spiral

[–] entropicdrift 2 points 13 hours ago

That's a bit extreme of a take

[–] Jobe@feddit.org 4 points 17 hours ago

Considering the level of oversight and quality control, probably not much. The only way the general public would know is that people would complain very loudly.

[–] TehPers@beehaw.org 3 points 17 hours ago

Do we know it doesn't?

[–] Sebbe@lemmy.sebbem.se 3 points 17 hours ago

The code reviews will get tougher.

[–] Twig@sopuli.xyz 3 points 17 hours ago

Probably get forked.

[–] Skyline969@piefed.ca 3 points 17 hours ago (1 children)
[–] 6jarjar6 6 points 17 hours ago (1 children)

How do you know it doesn't already?

[–] draco_aeneus@mander.xyz 5 points 16 hours ago (1 children)

We cannot know, in the same way we cannot know that it doesn't contain code that is hand-written on graph paper and scanned in via OCR.

The standards for code submissions to the kernel are extremely high, and its review process is strict and thorough. There are no barriers stopping LLM-generated code from entering the code base, but the bar for code quality itself is so high that you have to submit code at the level of a seasoned, competent engineer.

Ultimately, does it matter that the code was LLM written if the quality is sufficiently high?

[–] village604@adultswim.fan 3 points 16 hours ago (1 children)

Exactly. AI generated code is only a bad thing if it's blindly pushed to production without any sort of review. A lot of the use of AI in coding is to do the simple mundane work that an entry level dev could do.

[–] otter@lemmy.zip 1 points 11 hours ago

Oh, I need a bunch of slightly different unit tests to cover a bunch of different cases. {AI} write the unit tests for me to review.

This is my main use case, and it ends up saving a lot of time not writing boilerplate code.
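A rough sketch of what that use case looks like. Everything below is hypothetical (the `parse_port` helper and its cases are invented for illustration, not taken from any real project): the tedious part is enumerating near-identical test cases, which an AI can draft and a human can review at a glance.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Table-driven cases: slightly different inputs, same assertion shape.
valid_cases = {"80": 80, "443": 443, "65535": 65535}
invalid_cases = ["0", "65536", "-1", "http"]

for raw, expected in valid_cases.items():
    assert parse_port(raw) == expected

for raw in invalid_cases:
    try:
        parse_port(raw)
    except ValueError:
        pass  # expected: bad input must be rejected
    else:
        raise AssertionError(f"expected {raw!r} to be rejected")

print("all cases pass")
```

The reviewer's job shrinks to scanning the case tables for gaps, rather than typing out each nearly identical test by hand.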

[–] riskable@programming.dev 2 points 16 hours ago

The assumption here is that the AI-generated code wasn't reviewed and polished before submission. I've written stuff with AI and sometimes it does a fantastic job. Other times it generates code that's so bad it's a horror show.

Over time, it's getting a little bit better. Years from now it'll be at that 99% "good enough" threshold and no one will care that code was AI-generated anymore.

The key is that code is code: As long as someone is manually reviewing and testing it, you can save a great deal of time and produce good results. It's useful.