this post was submitted on 27 Mar 2026
165 points (88.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


It has to be pure ignorance.

I've only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before I form an opinion).

Holy shit, it's bad. Every single time I use it, I waste hours. Even on simple tasks, it gets details wrong. I correct it constantly. Then I come back a couple of months later, open the same module to do the same task, and it gets it wrong again.

These aren't even tools. They're just shit. An idiot intern is better.

It's infuriating that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because of young engineers trusting a slop machine to be accurate on the details. We will look back on this as the worst era in computing.

top 50 comments
[–] foxwolf@pawb.social 3 points 15 hours ago

Oooh buddy, it isn't even young engineers using these to destroy their designs. I was at a building construction conference recently where one of the presentations was about how AI is going to "give us so much time back" as designers. He then told us about how AIs still hallucinate math, and that the AI companies are not liable for their output. After the presentation, I and another person asked him who exactly the liability will lie with, and how someone could protect themselves from it without spending all the time we "save" meticulously checking the outputs. His response was to generate thousands of outputs for the same task and then only check "the best versions." Okay, so how will we know which are the "best" without meticulously checking thousands of them?

Anyway, afterwards, I asked my colleagues from all around the country who were at the conference for their opinions on AI and the presentation, and most of these 50-60 year old men told me they regularly use it in their work already. So be prepared for things constructed in the past few years to be incredibly dangerous facilities to be in or near.

[–] Witchfire@lemmy.world 1 points 14 hours ago* (last edited 14 hours ago)

A previous job forced us to use them; I spent more time getting the damn thing to work than actually doing work.

[–] GreenKnight23@lemmy.world 7 points 1 day ago (2 children)

holy fucking shit man. this community has a clear astroturfing problem.

[–] bridgeenjoyer@sh.itjust.works 1 points 8 hours ago (1 children)

I don't really know what the word means. I was posting my experiences.

[–] GreenKnight23@lemmy.world 2 points 7 hours ago (1 children)

not anything to do with your post. it's the comments here.

fuckai used to be a community where no exceptions were made for AI. it's quite literally in the name of the community.

so many posts from this community over the last 3-4 weeks have had an increasing number of users "astroturfing" that AI has its uses and can be helpful sometimes. I kind of feel like these comments are made disingenuously, as a way to silence the community at large by over-commenting in a community that was created literally to hate AI, no exceptions.

anyway, won't stop me from never using AI. If anything it'll just make me read more books from before AI was a thing.

[–] bridgeenjoyer@sh.itjust.works 2 points 7 hours ago

OHH I thought the opposite.

[–] jj4211@lemmy.world 2 points 18 hours ago

A lot of people have their livelihoods tied to the narrative that the LLM deserves every cent of investment. The fact that its utility is more limited is an existential threat to their careers.

The truth that it is selectively useful gives them a thread of hope, but the fact that it is useless for a lot of stuff drives irritation. We don't make a distinction between the sorts of work an LLM can and can't do, so people end up completely dumbfounded by the other perspective.

[–] utopiah@lemmy.world 4 points 1 day ago (1 children)

An idiot intern is better.

Well, 100%, because the intern WILL eventually learn. That's the entire difference. It won't be about adjusting the prompt, or adding yet another layer of "reasoning", or waiting for the next "version" with a different code name and a .1% larger dataset. No, you'll point out to the intern that they made a mistake, try not to call them an idiot, explain WHY it's wrong, optionally explain how to do it right, and THEN the next time they'll avoid it or fix it.

That's the entire point of having an intern: initially they suck, BUT as you train them, they stop sucking! Meanwhile an LLM, despite the technical jargon hijacked by the marketing department, doesn't "learn" (from "machine learning"), or train (from "training dataset"), or have "neurons" (from "artificial neural networks"); it's just statistics on the next most probable word, sounding right with 0 "reasoning".

[–] jj4211@lemmy.world 2 points 18 hours ago (1 children)

Had a person a few years back who would never ever learn.

In fact, a way I have expressed my opinion of LLM is that it is like working with that useless guy, except at least faster.

Based on my experience, the broader company is chock full of the never-learn developers, and I suppose I can see why they see value in the LLM, but either way their product sucks and no one likes it.

[–] bridgeenjoyer@sh.itjust.works 1 points 15 hours ago (1 children)

You're so right.

And if the person sucks that bad, get rid of them

[–] jj4211@lemmy.world 2 points 15 hours ago (1 children)

Yeah, but the same bad management that keeps thinking LLMs are magic are the same bad management that kept that guy around.

Every interaction that guy had where a senior tech dared to say he was useless ultimately landed the senior tech in hot water with management, as he would claim "you aren't providing what he needs to succeed; he is very skilled and willing to work, but you never told him how, or gave him access, or (a million other excuses that were generally lies)".

After a way-too-long career with us, he finally overplayed his hand by making the same old claims to the manager about no one giving him what he needed to work. Except he forgot that this time the manager himself was the one who had been directing him, so he was accidentally accusing the manager of lying to himself.

Finally, the one person with credibility in the manager's eyes was on the receiving end of this guy's grift.

[–] bridgeenjoyer@sh.itjust.works 1 points 8 hours ago

It's all a grift in the end!

That's why you'll mostly see conservatives/Nazis in love with LLMs. It fits their propaganda agenda perfectly.

[–] douglasg14b@lemmy.world 9 points 1 day ago (3 children)

I know this community is all about fuck AI, but this is just straight echo chambering.

But honestly your post sounds like you're just not using it right? You can get pretty good results with it with enough guardrails. Just because you can't get the results you want doesn't mean that no one can.

That said, fuck AI. It's all a bunch of bullshit, but denying real results just means you're sticking your head in the sand and that's not how you fix this problem.

[–] T156@lemmy.world 2 points 23 hours ago

Or that it's not right for their use case.

Like someone dumping a bunch of data into an LLM and trying to get it to turn the data into a chart or something. It can work, but it was never designed to be used in that manner.

I've got an acquaintance who does that, despite the fact that Python would be a better tool for the job.
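For the chart-from-data case, the non-LLM route is short and fully deterministic. A sketch in Python (the column names and figures here are made up for illustration):

```python
import csv
import io
from collections import defaultdict

# Hypothetical data; in practice this would be read from a file.
raw = """region,amount
north,10
south,5
north,7
"""

# Aggregate amounts per region -- the kind of summarization
# people ask an LLM to do, done exactly, in a few lines.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["region"]] += float(row["amount"])

for region in sorted(totals):
    print(f"{region}: {totals[region]:g}")
# -> north: 17
#    south: 5
```

Feeding `totals` into a plotting library from there is mechanical; the point is the numbers are computed, not guessed.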

Personally, I sometimes run a few saved images through a multi-modal 8-billion-parameter local model on my computer to automate giving them more descriptive names than randomnumbers.png, and that seems to work fine. I could do it by hand, but it would take hours or days instead of minutes, and since it's not too important, it doesn't matter if it's occasionally wrong. The resource usage is also less of an issue, since it's my own computer.
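That renaming workflow can be sketched roughly like this; `describe_image` is a stand-in stub for whatever local-model call would actually be made, and the extensions and word limit are assumptions:

```python
import os

def describe_image(path):
    """Placeholder for a call to a local multi-modal model.
    Stubbed so the sketch runs without any model installed."""
    return "gray tabby cat on windowsill"

def slugify(text, max_words=5):
    """Turn a free-text description into a safe filename stem."""
    cleaned = "".join(c if c.isalnum() or c == " " else " " for c in text.lower())
    return "_".join(cleaned.split()[:max_words])

def rename_images(folder):
    """Rename every image in `folder` to a slug of its description."""
    for name in os.listdir(folder):
        base, ext = os.path.splitext(name)
        if ext.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        new_name = slugify(describe_image(os.path.join(folder, name))) + ext
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
```

A real version would also need to handle name collisions and model failures, which fits the commenter's point: fine for low-stakes chores, where a wrong answer costs nothing.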

[–] utopiah@lemmy.world 3 points 1 day ago (1 children)

pretty good results with it with enough guardrails

examples?

[–] arbitrary_sarcasm@lemmy.world 5 points 1 day ago (1 children)

For a research project, I had to convert 20+ projects from a dataset into a new format. The old format was simply a single script per project that builds it, but I needed a format with a Dockerfile and a script. It would've taken me around a week to do it all one by one.

I got Claude to do it in 2 hours.
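For a sense of how mechanical that kind of conversion is, a hand-rolled sketch for one project might look like this (the base image, script name, and template are assumptions, not the commenter's actual setup):

```python
import os

# Hypothetical Dockerfile that just delegates to the project's
# existing build script.
DOCKERFILE_TEMPLATE = """\
FROM ubuntu:22.04
WORKDIR /project
COPY . /project
RUN chmod +x ./build.sh
CMD ["./build.sh"]
"""

def wrap_project(project_dir):
    """Drop a Dockerfile next to a project's existing build script."""
    build_script = os.path.join(project_dir, "build.sh")
    if not os.path.isfile(build_script):
        raise FileNotFoundError(build_script)
    with open(os.path.join(project_dir, "Dockerfile"), "w") as f:
        f.write(DOCKERFILE_TEMPLATE)
```

The hard part in practice is that each project's script has its own dependencies and quirks, which is presumably where the LLM saved the time.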

I know people hate AI in this community, but to say it doesn't do anything good or to insult all people who use it is just pure negativity.

[–] bridgeenjoyer@sh.itjust.works 1 points 8 hours ago

That's good. It has use cases. But is the monetary and earth-destroying cost worth it? Not in the slightest.

[–] cooligula@sh.itjust.works 12 points 1 day ago* (last edited 1 day ago) (1 children)

I agree... Saying LLMs are good at nothing is just plain ignorance... One can disagree with the philosophy or dislike the hallucinations, but they are definitely good at some things.

[–] Lutra@lemmy.world 2 points 1 day ago

I think the intern comparison fits. The root of the problem is that AI can be very good at the things it is good at. That leads humans to believe that it is good at other things. This is often untrue.

Often the things it is good at are in the set of "problems machines are good at". Most professionals, people who are trained and experienced in their field, face problems that are NOT in that set. They are skilled, experienced problem solvers, solving difficult, real-world problems. Not generic workers, or "human resources".

The belief at the top is often that this machine, which is "so impressive", must therefore be good at everything. And this gets pushed down to where people experience that same truth: the machine is incredibly good at the things it's good at, but it sucks at doing what they do.

paraphrasing my grandpa - "To a suit with a hammer, everything looks like a nail"

[–] jj4211@lemmy.world -1 points 19 hours ago* (last edited 18 hours ago) (1 children)

I think it really depends on the task.

There are folks who manage to build whole careers out of basically pasting stuff from documentation and Stack Overflow to implement very basic things over and over again, praying it works and doesn't need debugging. They hate coding, but it was heralded as doctor/lawyer-level pay while being way easier to get into. LLMs can largely replace the work for those folks. These are humans I would never have trusted with anything significant, and I only gave them low-stakes, low-risk stuff to keep them busy because management wants them utilized. Sure, they spend a month failing to deliver something basic, but management is happy enough.

Then there are folks who mostly live in code that is needed because it truly doesn't already exist. Those folks will find LLMs relatively less useful. Now, those folks do end up with braindead chores on occasion. Change from library x to y because, while they both do the same thing, x got discontinued. An LLM can be useful at accelerating that, because it's obvious work, but not quite as simple as search-and-replace. Or if you want to define some argv parsing, you can let a codegen do that, because it's easy but tedious.
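The argv-parsing chore is the classic example of easy-but-tedious boilerplate; a hypothetical Python version (flag names and defaults are invented) looks like:

```python
import argparse

def build_parser():
    """The easy-but-tedious argv boilerplate -- hypothetical flags."""
    p = argparse.ArgumentParser(description="Example tool")
    p.add_argument("input", help="input file to process")
    p.add_argument("-o", "--output", default="out.txt",
                   help="where to write results")
    p.add_argument("-v", "--verbose", action="store_true",
                   help="enable chatty logging")
    p.add_argument("--retries", type=int, default=3,
                   help="network retry count")
    return p

args = build_parser().parse_args(["in.txt", "-v"])
print(args.input, args.verbose, args.retries)  # -> in.txt True 3
```

Nothing here requires judgment, which is exactly why codegen handles it fine and why it says little about harder work.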

To go back to the days of car analogies: imagine some tech people got the world excited because they created tools to automate engineering in motor sports. People come out saying how it helped them engineer their own vehicle, and stories claim the most prolific motor racing is being taken over. You, as an F1 engineer, see it as mostly useless, but everyone keeps talking about how it's going to replace engineers. Turns out everyone is actually talking about go-karts, and it is true that it works okay for that, and that go-karts are way more common than F1 cars.

The problem is that, to the world, programming is programming, without distinction, and even the people in charge of the F1-type work don't know the difference, because they were never technical either.

[–] bridgeenjoyer@sh.itjust.works 1 points 15 hours ago (1 children)

I don't really mind people using it for simple dumb tasks. I'm just sick of boosters saying how great it is. If you're an intelligent person, you realize real fast how stupid it is at real work.

But we have destroyed the economy and planet and given all of our data to billionaires to do it. Not fucking worth it in the SLIGHTEST.

Maybe if THE PEOPLE owned all the data centers and models I could get behind it. But that'll never happen.

[–] jj4211@lemmy.world 2 points 14 hours ago

This is why I'm hoping the bubble pops soon. Too many people trying to gaslight about the utility of it for their self interest.

If it were just "boringly useful, but not mind-bogglingly profitable somehow", then I'm sure I'd no longer have a ton of people trying to micromanage my use of LLMs all around me.

Currently, my management has dictated that our failure to see "magic results" is because we just haven't been trained enough, and is paying for and mandating over 200 hours of training on how to use LLMs correctly. The grift is insane, since the whole point of the LLM is that it shouldn't need training to use, but here we are: people found a way to grift training onto a "doesn't need training" solution that doesn't work as advertised...

On a call with a partner, after they demonstrated their software and everything it does, one of our executives kept insisting that they needed to use an LLM, and then it would be even better.

So ready to have execs stop caring, and then maybe I'll somehow appreciate the residual utility of it, whatever it is.

[–] AnotherPenguin@programming.dev 3 points 1 day ago* (last edited 1 day ago)

For programming, it's at least a good way to speed up things that you know how to do but that take time to type, or whose syntax you don't remember. But relying on AI any more than that usually means you'll be adding free technical debt and debugging time, or becoming dependent on it.

[–] TBi@lemmy.world 52 points 2 days ago (3 children)

Generally, I equate positivity about LLMs with people's technical ability: I find that the more they say AI is good, the worse a programmer they are.

[–] Valmond@lemmy.dbzer0.com 3 points 1 day ago

Might be some Dunning-Kruger curve there. Not to toot my own horn, but I know my way around, and I only use AI for programming when I more or less already know how the thing should work. That means I verify and fix any problems before committing any code. It does speed up the process; it's a tiny bit simpler than checking stuff on Stack Overflow, IMO.

Now, if you don't know your way around and "trust" the output of an LLM, boy are you in trouble 😵‍💫.

[–] okamiueru@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

It also says a lot about their inability to identify bullshit

[–] bridgeenjoyer@sh.itjust.works 34 points 2 days ago

Technical literacy in general. My friend who thinks it's the greatest thing ever is an idiot with technology (and life in general).

[–] AlecSadler@lemmy.dbzer0.com 16 points 2 days ago (2 children)

There are right tools and wrong tools depending on the application.

There are right ways to use said tools and wrong ways... like you wouldn't use a Phillips-head screwdriver on a flat-head screw.

I guarantee your company's provided tool is Copilot or OpenAI based, which is already bottom of the barrel for usefulness.

[–] CompactFlax@discuss.tchncs.de 38 points 2 days ago (1 children)

every time I use it, i waste hours

Yes, exactly this. It looks good, then I ask it to tweak something. It tweaks, but now something else needs adjustment. Then it comes back unusable.

It ends up taking the same amount of time as doing it myself. There's some value, perhaps, in the novelty or engagement that keeps me focused, but it's not more efficient.

When it does work, I'm always worried it's an illusion and I've missed something. Like how you send an email and immediately see the typo.

People who love it, love it because they don’t need to or care about having accuracy and precision in their work. Sales and marketing, management, etc. Business idiots.

[–] 100_kg_90_de_belin@feddit.it 5 points 1 day ago (1 children)

I cut my LLM usage to almost zero for environmental and political reasons, but it was helpful enough that I wish it could be sustainable, and not just another tool in the dystopian takeover of the world.

[–] IronBird@lemmy.world 4 points 1 day ago* (last edited 11 hours ago) (1 children)

local models are advanced enough that you can run them as needed without a datacenter.

the datacenter craze is basically just an excuse to get the banks (and eventually the American taxpayer, via bailouts when they fail) to fund your nepotistic infrastructure rollout.

the entire US economy is built around a purposeful boom/bust system, as it's very efficient at "bagging" people who don't know the rules.

[–] wewbull@feddit.uk 1 points 14 hours ago

Creating them still took a huge power investment, though.

[–] AdamBomb 3 points 1 day ago (4 children)

Yeah, don’t generate code with it. Treat it like StackOverflow. It does pretty good at that.

[–] BlameTheAntifa@lemmy.world 5 points 1 day ago

This is the only way I use it, and I do it grudgingly only because AI has ironically also ruined the web and web search. It’s also a last resort for when Kagi isn’t helping.

[–] cypherpunks@lemmy.ml 28 points 2 days ago (2 children)

you're obviously prompting it wrong, and/or not using the latest models ~/s~

[–] bonenode@piefed.social 6 points 2 days ago (2 children)

I still think back to a LinkedIn post I saw from someone talking about LLMs and throwing in the sentence:

”Apparently I am really great at making super prompts!”

Which is probably something the LLM told them, and they have lost all self-reflection, so...

[–] GrindingGears@lemmy.ca 1 points 1 day ago

Oh man I deleted LinkedIn last month, and March has been glorious.

I literally feel like there's hope for humanity now. Like just a little glimmer of it. I didn't even really use it, but it somehow sucked my soul and shattered it into 1,000 pieces.

LLM: Wow, you are so good at this. Are you sure this is your first time? ...Oh, prompt me harder, daddy.
