this post was submitted on 22 Dec 2025
14 points (100.0% liked)

TechTakes

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Merry Christmas, happy Hanukkah, and happy holidays in general!)

[–] YourNetworkIsHaunted@awful.systems 5 points 7 hours ago (2 children)

So maybe I'm just showing my lack of actual dev experience here, but isn't "making code modifications algorithmically at scale" kind of definitionally the opposite of good software engineering? Like, I'll grant that stuff is complicated but if you're making the same or similar changes at some massive scale doesn't that suggest that you could save time, energy and mental effort by deduplicating somewhere?

[–] swlabr@awful.systems 5 points 5 hours ago* (last edited 4 hours ago) (1 children)

The short answer is no. Outside of this context, I'd say the idea of "code modifications algorithmically at scale" is the intersection of code generation and code analysis, all of which are integral parts of modern development. That being said, using LLMs to perform large scale refactors is stupid.
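To make that intersection concrete, here's a minimal sketch of what deterministic "code modification at scale" looks like without an LLM: an `ast.NodeTransformer` that renames every call to one function. The names `old_api` / `new_api` are hypothetical, and this is a toy (a real tool would preserve formatting and handle imports), but the key property holds: it applies the exact same rewrite to every file, every time.

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Rewrite every call to `old_api` as a call to `new_api` (names are hypothetical)."""

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # rewrite nested calls first
        if isinstance(node.func, ast.Name) and node.func.id == "old_api":
            node.func = ast.Name(id="new_api", ctx=ast.Load())
        return node

def rewrite(source: str) -> str:
    """Parse source, apply the rename, and emit the modified code."""
    tree = RenameCall().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(rewrite("x = old_api(1) + old_api(old_api(2))"))
# prints: x = new_api(1) + new_api(new_api(2))
```

Unlike an LLM, this is reviewable once and then trustworthy across a million call sites, which is the whole point of doing it algorithmically.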

[–] V0ldek@awful.systems 4 points 2 hours ago (1 children)

This is like the entire fucking genAI-for-coding discourse. Every time someone talks about LLMs in lieu of proper static analysis I'm just like... Yes, the things you say are of the shape of something real and useful. No, LLMs can't do it. Have you tried applying your efforts to something that isn't stupid?

If there's one thing that coding LLMs do "well", it's expose the need for code-generation frameworks. All of the enterprise applications I've worked on in recent years were, by volume, mostly boilerplate and glue. If a statistically significant portion of a code base is boilerplate and glue, then the magical statistical machine will mirror that.

LLMs may simulate filling this need in some cases but of course are spitting out statistically mid code.

Unfortunately, committing engineering effort to write code that generates code in a reliable fashion doesn't really capture the imagination of money or else we would be doing that instead of feeding GPUs shit and waiting for digital God to spring forth.
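For contrast, the unglamorous "code that generates code" approach is mostly templating over a declarative spec. A toy sketch (the spec format, class name, and fields are all invented for illustration):

```python
# Toy boilerplate generator: emit a dataclass from a declarative field spec.
# The spec shape and the "User" example are invented for illustration.
SPEC = {"name": "User", "fields": [("id", "int"), ("email", "str")]}

def generate(spec: dict) -> str:
    """Render a dataclass definition from the spec, deterministically."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {spec['name']}:",
    ]
    for fname, ftype in spec["fields"]:
        lines.append(f"    {fname}: {ftype}")
    return "\n".join(lines)

print(generate(SPEC))
```

The output is identical on every run and can be reviewed once, which is exactly the property that "statistically mid" generation gives up.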

[–] sailor_sega_saturn@awful.systems 6 points 6 hours ago (1 children)

This doesn't directly answer your question but I guess I had a rant in me so I might as well post it. Oops.


It's possible to write tools that make point or incremental changes with targeted algorithms in a well-understood problem space, producing safe (or probably safe) changes that get reviewed by humans.

Stuff like turning raw pointers into smart pointers, reducing string copying, eliminating certain classes of runtime crashes, etc. You can do a lot if you hand-code C++ AST transformations using the clang / LLVM tools.
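For a taste of this without writing LibTooling code yourself, clang-tidy ships checks for exactly these kinds of mechanical rewrites. A `.clang-tidy` config like the one below (the check names are real; picking these three is just an example) lets `clang-tidy -fix` apply them across a codebase:

```yaml
# Example .clang-tidy: enable only a few mechanical, auto-fixable checks.
# modernize-make-unique/shared: raw new -> std::make_unique / std::make_shared
# performance-unnecessary-copy-initialization: flag avoidable copies
Checks: '-*,modernize-make-unique,modernize-make-shared,performance-unnecessary-copy-initialization'
FormatStyle: file
```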


Of course "let's eliminate 100% of our C code with a chatbot" is... a whole other ballgame and sounds completely infeasible except in the happiest of happy paths.

In my experience even simple LLM changes are wrong somewhere around half the time. Often in disturbingly subtle ways that take an expert to spot. Also in my experience if someone reviews LLM code they also tend to just rubber stamp it. So multiply that across thousands of changes and it's a recipe for disaster.

And what about third-party libraries? Corporate code bases are built on mountains of MIT-licensed C and C++ code, but surely those won't all switch languages. That means they'll have a bunch of leaf code in C++ and either need a C++-compatible target language, or have to call all the C++ code via subprocesses, the C ABI, or cross-language wrappers. The former is fine in theory, but I'm not aware of any suitable languages today. The latter can have a huge impact on performance if too much data needs to be serialized and deserialized across that boundary.

Windows in particular also has decades of baked in behavior that programs depend on. Any change in those assumptions and whoops some of your favorite retro windows games don't work anymore!


In the worst case they'd end up with a big pile of spaghetti that mostly works as it does today but that introduces some extra bugs, is full of code that no one understands, and is completely impossible to change or maintain.

In the best case they're mainly using "AI" for marketing purposes, will try to achieve their goals using more or less conventional means, will ultimately fall short (hopefully not wreaking too much havoc in the process), give up halfway, and declare the whole thing a glorious success.

Either way, any kind of large-scale rearchitecting that isn't seen through to the end will cause the codebase to have layers. There's the shiny new approach (never finished), the horrors that lie just beneath (also never finished), and the horrors that lie just beneath the horrors (probably written circa 2003). New employees start by being told about the shiny new parts. The company will keep a dwindling cohort of people in some dusty corner who have been around long enough to know how the decades of failed architecture attempts are duct-taped together.

[–] Soyweiser@awful.systems 2 points 49 minutes ago

Some of the horrors are also going to be load-bearing in ways people don't properly realize, because the space of computers that can run Windows is so vast.

Something like that happened with Twitter: when Musk did his bull-in-a-china-shop impression on the stack, they cut out some code that millions of Indian users on old phones needed to access the Twitter app.