this post was submitted on 07 Mar 2026
219 points (98.7% liked)

Fuck AI

6241 readers
825 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 24 comments
[–] Renat@szmer.info 1 points 4 hours ago* (last edited 4 hours ago)

So teaching chickens to use AI would be useful to make fried chicken.

[–] gothic_lemons@lemmy.world 148 points 2 days ago (2 children)

“My thinking wasn’t broken, just noisy — like mental static,” the senior manager continued. “What finally snapped me out of it was realizing I was working harder to manage the tools than to actually solve the problem.”

Omg constantly fact checking and tweaking the lying machine is actually slower than just thinking. Who would have guessed?

[–] SupraMario@lemmy.world 15 points 1 day ago

It also kills your ability to solve problems, since you're hitting that easy button constantly.

[–] arcine@jlai.lu 27 points 2 days ago* (last edited 2 days ago) (2 children)

This is exactly the kinda thing keeping me away from so-called "modern" programming languages.

Too many tools and nonsense to manage; I want to program the computer, not "orchestrate a build pipeline" with 10+ different tools running on different machines, each doing some nonsense with its own bespoke syntax and quirks...

[–] boonhet@sopuli.xyz 4 points 1 day ago

It's modern javascript/typescript you're talking about, isn't it?

They're an edge case IMO; the real issue there is modernizing old languages. Take something like Go or Rust as an example of a modern language: it's actually nice, because the tooling is standardized for the language. In particular, dependency management is built into the standard tooling.

Now you can always make it more complex by doing things like including docker in your build pipeline and maybe even creating pipelines that automatically deploy to kubernetes on a successful build... But all that is completely optional for like 99% of scenarios.

[–] MountingSuspicion@reddthat.com 14 points 2 days ago

Not sure what you're considering "modern", but as someone who cut their teeth on C++ and still actually enjoys it, plenty of modern languages have their uses. It really depends on what you're looking for, but I've spent years in C++ and will still use Python as a go-to for small projects.

[–] ItsMeForRealNow@lemmy.world 7 points 1 day ago

Decades of ADHD prepared me for this

[–] ThePowerOfGeek@lemmy.world 54 points 2 days ago (3 children)

Interesting article.

I would share this with my colleagues on our 'AI Discussions' channel. But I know what the result will be. "Those people just aren't using the agents correctly", "they need to provide the agents with moar context!!1!", "this article is bad because I don't like what it says", "those respondents are just lazy or stupid".

Personally, I've noticed this kind of mental exhaustion myself. I've tried leaning more heavily into AI usage because my employer encourages it. But it's usually so damn frustrating.

I've found even the better/cutting edge LLMs struggle with basic troubleshooting, even when you provide them with solid context and try to keep the scope limited. Half the time they do great, but the other half they fail pretty spectacularly, and I end up wasting time trying to police/hand-hold them.

And I can't even rely on these LLMs to reliably perform more menial tasks like formatting CSV data into JSON. They usually just stop the conversion at some arbitrary point, or they fuck up the structure of the output. Again, no matter how much context and detail I provide them.

These are all things some of my colleagues have found as well. Meanwhile, I'm also seeing other people become overly reliant on LLMs/agents, and accept whatever slop they produce as gospel while claiming it as their own work.

And that's not even covering the knowledge/skill atrophy that I've witnessed. A lot of people learn and hone skills through repetition. But overuse of AI kills that opportunity, while offering unreliable immediate results.

[–] floquant@lemmy.dbzer0.com 3 points 1 day ago

And I can't even rely on these LLMs to reliably perform more menial tasks like formatting CSV data into JSON. They usually just stop the conversion at some arbitrary point, or they fuck up the structure of the output. Again, no matter how much context and detail I provide them.

Use them to produce a script that converts CSV into JSON instead of having the LLM do it directly. More transparent, reliable, and resource-efficient.
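For anyone who wants the concrete version: a minimal sketch of such a script, using only the Python standard library (the function name and sample data here are just illustrative, not anything the LLM would necessarily produce):

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text (with a header row) into a JSON array of objects."""
    # DictReader maps each row to {header: value}; list() consumes every
    # row, so nothing gets silently truncated partway through.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

sample = "name,score\nalice,10\nbob,7\n"
print(csv_to_json(sample))
```

Unlike asking the model to emit the JSON directly, a script like this is deterministic: every row is converted, the structure is always valid JSON, and you can rerun it on any file without burning more tokens.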

[–] qarbone@lemmy.world 22 points 2 days ago (2 children)

LLMs are only more efficient when you don't even know how to start doing a thing. Once you have even a primer on any subject, you'd probably be better off muddling through to a solution on your own.

[–] AdamBomb 1 points 1 day ago

This is where they are most useful for me

yeah, i'd much rather learn how to do something than have an LLM do it for me. but that might be why i'm me. i like to gather knowledge like a magpie

[–] john_lemmy@slrpnk.net 11 points 2 days ago (1 children)

This is exactly my experience. Even when sharing milquetoasty articles, like the AGENTS.md one just to test the waters.

[–] farting_gorilla@lemmy.world 1 points 1 day ago (1 children)

Mind sharing that article here?

[–] john_lemmy@slrpnk.net 3 points 1 day ago

I can't find the article, but this was the paper: https://arxiv.org/abs/2602.11988.

I just wanted to point out that the myriad of "best practice" articles and information on AI are not very rigorous. So, not even necessarily an argument against AI (the paper sure isn't). Even then it got pushback.

[–] henfredemars@infosec.pub 41 points 2 days ago (1 children)

Maybe pursuing infinite, unbounded productivity gains forever isn’t sustainable. Maybe we need to build systems that work for people.

[–] JustTesting@lemmy.hogru.ch 6 points 1 day ago (1 children)

It's a symptom of the times. Saw an ad earlier this week: "Tired? Stressed? This robs your body of important nutrients, get this vitamin pill to replenish them!" Instead of, you know, reducing the stress.

Somehow, maybe through the societal focus on individualism, we've gotten to the point where it's all the individual's fault. It's all about doing whatever it takes to get ahead, screw your body and your peers; if you can't take it, you're too weak, a loser. But we'll sell you products to cope! And people internalize this and are proud of working more efficiently at the cost of their health. That's the really sad part: many actually start wanting this.

The system actively works against people and tries to train them to yearn for more of the same.

[–] henfredemars@infosec.pub 1 points 1 day ago* (last edited 1 day ago)

Colloquially, it’s the rat race. There’s nothing inherently wrong with striving nor sacrifice, but we must ask: to what end? At what cost? To what success if it costs you your soul? It better be worth it.

Social focus on individualism sounds like a dog’s focus on chocolate.

[–] Kolanaki@pawb.social 3 points 1 day ago

Brain fry used to be something you needed drugs to get.

[–] mudkip@lemdro.id 4 points 1 day ago

This should be a surprise to absolutely nobody.

[–] ssfckdt@lemmy.blahaj.zone 2 points 1 day ago

Increased speed? Maybe half the time.

[–] Semi_Hemi_Demigod@lemmy.world 19 points 2 days ago

I'm seeing this with my boss. He used to be a little scatterbrained, but the more he uses AI the less he actually gets done and the more frantic he seems doing it.

[–] eestileib@lemmy.blahaj.zone 3 points 1 day ago

Kinda sounds like lack of sleep?

I'll use them for very simple CRUD apps with easily verifiable logic, for work, because my employer cares about speed more than quality. But that's about all I'll use them for.