this post was submitted on 01 Mar 2026
United States | News & Politics
Emphasis on consistently.
Coding: AI slop gives devs undue confidence to introduce glaring bugs, security holes, and unmaintainable structures, because they aren't accustomed to doing proper code review (which is now their actual role: reviewing bad junior-dev code). It seems to work great at first, then racks up a massive cost later in the form of fixing its problems. Of course, you can also just not fix those problems and live with terrible security, constantly rewriting half the codebase to implement a single feature. LLMs can reproduce patterns but can't really think. You will end up spending just as much time, if not more, building something half-decent with one, and then likely end up not properly understanding what was built. And God help you if you want to build against version 4.3 of some library rather than the much more publicly documented version 3.x.
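To make the "glaring security holes" point concrete, here's a minimal sketch of the single most common example: a string-interpolated SQL query of the kind LLMs routinely emit because it dominates their training data, next to the parameterized version. The table and function names are made up for illustration; it uses Python's built-in sqlite3.

```python
import sqlite3

# In-memory database purely for demonstration; schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.commit()

def find_user_unsafe(name):
    # Typical AI-generated pattern: query built by string interpolation.
    # Works fine in a demo, but attacker-controlled input can rewrite the query.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection succeeded
print(find_user_safe(payload))    # returns []: input treated as a literal string
```

Both versions pass a quick "does it find alice" smoke test, which is exactly why this class of bug sails through when nobody is doing a real review.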
Automation: I dunno, the only real-life examples of automation I've seen were catastrophes, because the person trusted a broken implementation. They were really excited at first and then had a bad time a couple of months later. But I'm sure there are cases where "good enough" meshes reasonably well with the capabilities of LLMs.
Research: Oh, I strongly discourage this. These are pattern-regurgitation machines: they reproduce what is common, and that is not the same as what is true. And that's before accounting for "hallucinations", which are really just more of the same pattern-making, no different from the non-hallucinations except obviously wrong rather than subtly wrong. This is a surefire way to unlearn how to do good research and adopt false ideas without even knowing it.
Re: reading and believing headlines: yes, that will also lead you astray. That doesn't make the lie-regurgitation machine a good idea for most topics.
Re: "Arrogance and virtue signaling" I have absolutely no idea what you are referring to.
These are all examples showing that you don't know how to use GPTs effectively. You're not even trying. It's a tool. It's not a replacement brain.