this post was submitted on 25 Feb 2026
16 points (90.0% liked)
Technology
you are viewing a single comment's thread
It won’t.
How do you know it doesn't already?
We cannot know, in the same way we cannot know that it doesn't contain code that is hand-written on graph paper and scanned in via OCR.
The standards for kernel code submissions are extremely high, and the review process is strict and thorough. There is no barrier stopping LLM-generated code from entering the code base, but the bar for quality is so high that any submission has to read like the work of a seasoned, competent engineer.
Ultimately, does it matter that the code was LLM written if the quality is sufficiently high?
Exactly. AI-generated code is only a bad thing if it's blindly pushed to production without any review. A lot of the use of AI in coding is to do the simple, mundane work that an entry-level dev could do.
This is my main use case too; it ends up saving a lot of time I'd otherwise spend writing boilerplate code.
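As a hypothetical illustration of the kind of boilerplate meant here (the class and fields are made up for the example): the hand-written `__init__`, `__eq__`, and `__repr__` methods a dev would otherwise type out by rote, which Python's `dataclasses` module (or an LLM) can generate mechanically.

```python
from dataclasses import dataclass

# Without @dataclass, __init__, __repr__, and __eq__ would each be
# written by hand -- the mundane, mechanical code the comment refers to.
@dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)
print(p)               # Point(x=1, y=2)
print(p == Point(1, 2))  # True
```

The generated methods are trivial to review, which is exactly why this kind of output is low-risk compared to letting a model write novel logic.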