this post was submitted on 11 Mar 2026
33 points (97.1% liked)
Games
21258 readers
Tabletop, DnD, board games, and minecraft. Also Animal Crossing.
Rules
- No racism, sexism, ableism, homophobia, or transphobia. Don't care if it's ironic; don't post comments or content like that here.
- Mark spoilers
- No bad mouthing sonic games here :no-copyright:
- No gamers allowed :soviet-huff:
- No squabbling or petty arguments here. Remember to disengage and respect others' choice to do so when an argument gets to be too much
- Anti-Edelgard von Hresvelg trolling will result in an immediate ban from c/games and submitted to the site administrators for review. :silly-liberator:
founded 5 years ago
you are viewing a single comment's thread
Is AI code bad? Like, I mean, on top of any ethical concerns, AI art just kinda sucks.
We're expected to use it at work and I've been using Claude a lot lately. Tbh, LLM coding assistants have come a very long way since the early days of Copilot. Frankly, I've worked with several other (more senior, even) engineers that Claude could code circles around. Not many, I could count them on my fingers, but it's more competitive than some other folks have let on.
The other repliers are correct that these tools can easily spit out buggy code that sneaks its way into the codebase due to lack of oversight, test coverage, and general guardrails. This is pretty easy to spot with various services you likely use or have used (Amazon has had several outages recently, for example). In my own workplace we have seen significantly more code being merged and a correlated increase in bug density (which is multiplicative with the increase in code being merged). There are definitely problems with relying on LLMs too much.
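To make that concrete, here's a hypothetical sketch (not from anyone's actual codebase) of the kind of subtle bug that sails through review when there's no test coverage: a Python mutable default argument that silently leaks state between calls.

```python
# Hypothetical example of a plausible-looking helper that "works" in a quick
# manual check but carries a classic bug: the default list is created once
# and shared across every call.

def collect_buggy(item, bucket=[]):  # buggy: mutable default argument
    bucket.append(item)
    return bucket

def collect_fixed(item, bucket=None):  # fixed: fresh list per call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

# Even one cheap unit test exposes the difference:
assert collect_buggy("a") == ["a"]
assert collect_buggy("b") == ["a", "b"]   # state leaked from the first call
assert collect_fixed("a") == ["a"]
assert collect_fixed("b") == ["b"]        # no leakage
```

The first call to either function looks identical, which is exactly why this class of bug slips past a reviewer who's skimming merged code at 2x the old volume.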
People are still learning what these tools are good at. Right now that seems to be boilerplate generation, following very common or explicitly defined conventions, and unit test generation. That's not a lot, but it's absolutely not nothing. People seem to think their program/app/service is a special snowflake with special requirements only understandable by greybeards. That is not at all the case: most programming in industry is gluing together existing tools and solutions in various arrangements, then putting a little proprietary sprinkle on top. This has been the state of software development for decades at this point.
Like most other social issues, the underlying problem is capitalism. As with the advent of every other form of industrial automation, the mere existence of LLMs causes capitalists to demand an increase of production from the existing workforce. Of course quality control is going to be a problem.
I dislike LLMs because of their impact on the environment and the way they're being shoved into every product. I also genuinely enjoy programming, so LLMs were something I really avoided until the office started demanding it. I'm trying to lean into it now, though. I don't care for the product my company sells (the tech is fine and even interesting, just not a product or field that seems worthy of spending so much energy on), I don't like how many hours I work, and I'd rather be spending time organizing and with my family. So I've been offloading a lot of the work of getting deliverables out the door to Claude, then pivoting over to organizing work while it churns. I'm lucky in that the product I work on can't hurt anyone if a bug gets deployed; I can just log on the next day and fix it, nbd. Obviously that's not the case for all software, but it is the case for most of it. Frankly, I strongly encourage other workers whose jobs LLMs can do large swaths of to do the same. Talk to coworkers, do some work for whatever org you're a member of (you are a member of an org, right?), and let Claude churn out shit in the background.
People have written so much shit about this that writing more just feels like pissing into an ocean of piss, but in brief, AI code:
In the spirit of the GNU project re-defining well-known acronyms and abbreviations, I've noticed developers on the Guix mailing lists referring to LLMs as "License Laundering Machines."
Your points are all correct but for the first one.
The dangerous thing about LLM-generated code is not that it generally looks correct but isn’t. The danger is it oftentimes is correct and oftentimes isn’t.
The fact that it can be actually correct is dangerous. It lulls actual programmers into a false sense of security with it. It makes them cognitively lazy. And then when it turns out that it produces something wrong it slips by.
And even worse, what it assuredly does is convince bosses and non-programmers that THEY are correct and know even better than people who actually studied programming and learned the craft!
I never believed "anyone can code", a goal aggressively pursued and promoted in the 2010s, was a worthwhile objective. Perhaps anyone can. Maybe anyone can be a mathematician. Maybe anyone can be an electrician. But I always saw it for what it was: a naked attempt to devalue the skill of programming and make the labor for it cheap.
Now anyone can be tricked into thinking they can code. Good or bad, it doesn’t matter. The software is about to get a lot worse.
Yeah, sorry, just never thought about this. But it sounds like it would be shitty even if someone self-hosted their own LLM for it.
A lot of it is about how it's used. I think the second point is the most important. A lot of [software] engineering is familiarity with the topic and tools used. The mental map of the architecture of how everything fits together is powerful, and giving that all up to an LLM is a huge loss if you are using it to write anything more than a basic function.
In my practice I use it in a couple of spots:
Using it more than that feels like a heavy risk of brain drain to me.
The only okay use for AI-generated code is short one-time use scripts.
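Agreed, meaning something like this hypothetical throwaway: a one-shot script to normalize filenames in a scratch directory (paths and rules made up for illustration), run once with a dry run first and then deleted.

```python
# One-time cleanup sketch: lowercase filenames and swap spaces for
# underscores. Dry-run by default so you can eyeball it before committing.
from pathlib import Path

def normalized(name: str) -> str:
    # lowercase, spaces -> underscores
    return name.lower().replace(" ", "_")

def rename_all(directory: Path, dry_run: bool = True) -> None:
    for path in sorted(directory.glob("*.txt")):
        target = path.with_name(normalized(path.name))
        if target != path:
            print(f"{path.name} -> {target.name}")
            if not dry_run:
                path.rename(target)

assert normalized("My Notes.TXT") == "my_notes.txt"
```

If the script gets something wrong, the blast radius is one directory of scratch files, which is exactly why this is the low-stakes niche where generated code is tolerable.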
AI generated code is so much worse. AI generated/assisted art can at least hypothetically serve a space-filling role in a sort-of-OK fashion for a small project with no budget, particularly if it's curated and being used as a fancy Photoshop tool to merge sketches and references instead of making things from whole cloth. The worst case scenario with art is that it's just not really that great. With local models, which are the only ones that can actually be controlled to any meaningful extent anyway, it's also a lightweight program that's no more energy intensive than playing a modern game.
AI generated code is an active cognitohazard and a massive threat vector. It takes the actual core of a project, something that has to be designed cohesively and made to work with countless moving parts in an intelligent manner, and replaces it with the equivalent of massaging copy/pasted Stack Overflow answers until they squeak through a compiler without crashing. It can spew out boilerplate GUIs and stuff you might find in an "intro to making [whatever sort of thing]" tutorial, but in a nonsensical and impossible-to-follow way. The inherent, inevitable end result of using it is an abomination that can't be maintained and can't be fixed, because it's eldritch madness spewed out by an unthinking machine and trying to follow it is physically painful. These code-generating LLMs are also part of the massively bloated and inefficient datacenter models that can't be run locally.
But the code my coworkers write is also a cognitohazard, so the AI agents are better, since I can actually call them out for crap.