this post was submitted on 28 Feb 2026
149 points (99.3% liked)

Fuck AI


So this is a bit of a weird one. I have a tech background, which gives me some extra authority, but I'm working in a non-tech field at the moment. They're taking this event pretty seriously: they're flying me out from Beirut to one of their Gulf offices, so I would hope they'll make the trip worth it and not just listen for AI boosterism.

I'm one of the few people on my team who doesn't use these tools every day. The only benefit I've ever gotten was as an alternative thesaurus when search engine results don't give me exactly what I want, and as a supercharged content-aware fill on my personal laptop when I'm dicking around editing images. I use a low-memory local model from 2022. This is all I need or want. I also understand the tech on a much more fundamental level thanks to my background. Not to give out too much information, but we built simulated pseudo tensor cores on an FPGA in university. I'm not a machine learning engineer, but I understand what this thing is better than anyone else in that meeting will - and certainly better than anyone with AI in their job title.

I have to temper the fuck out of my tone and frustration to make sure I can get messaging across. I also need to be careful not to come off as panicking about AI "stealing" my job. I have a completely different career path than all those business people, so I don't know if that's something on their mind. I also, you know, work from another country. I'm the cheap offshore labor.

My point is that I'm obviously not going to word all of the below this way. I also have to pick my battles, because most of the people with serious authority haven't had a real job in years and think their magic workplan generator and semi-reliable banana bread ratio calculator are the future of work, humanity, and consciousness.

Within my organization, I've seen people with years of knowledge and experience throw it out the window because of the magic text box in their pocket. I've seen people with very passable English push their work through a slop extruder to "make the wording more natural" - when it actually makes it look more generic. I've also had someone in the chain of custody of my hard work do this to something I made, making the information in it more generic and diluting my effort.

Company policy has banned external chatbots because Microslop Copilot is "more secure". I used to use GPTZero as a detection tool: I'd put in particularly egregious paragraphs and send a screenshot to whoever "wrote" it, like "Hey, this reads really badly and I expected actual tailored analysis here. Please write this yourself; it's okay if it ends up shorter." Slop makes our hard work look cheap! But GPTZero offers an LLM service, I think it offers a "de-roboticization" service, and it has fucking GPT in the name, so it's blocked now.

However, despite the ban, I'm still getting ChatGPT links sent to my Whatsapp from superiors asking me if I "checked this" or "if we're covering all of this", with the most generic ass information in there. The corpus of the web is largely Western and this shit just does not apply here. You know it doesn't apply here. If we were having a face to face conversation and I suggested this stuff you'd be shocked, boss man. What the fuck.

I hear people in meetings and in the offices when I fly in openly talking about ChatGPT being "better" and using it on their phones. I'm not fighting for Copilot's market share here, I want these people to use their brains!

So many little things as well. Feedback on my work comes back vaguer now, like someone brute-forcing a prompt instead of actually, you know, being a part of the process of doing the work. People who need time to write English, or who are not confident in their English, are not gradually improving their language skills. Some interns and juniors don't learn anything, and are outright awful at looking up obscure information the old-fashioned way.

Over the last few months, I've helped push some work friends off paying for ChatGPT, after relentlessly bombarding them with "You already know this", "This sounds off, you worded it better to me over lunch", "This contradicts our call with those guys, don't you remember the argument you made?", that kind of thing. I find it funny that the antidote to this shit is to be 1% more conscious about your work.

I can also probably score a lot of brownie points by overemphasizing my mini pc / raspi homelab situation and using it to do "AI", kind of reassuring them that I am not insulting their digital false idol.

Thing is, with war tensions (usually it's them asking about my safety, ha), this meeting has been pushed forward, but they seem adamant about having it.


I would prefer not to enter job specifics for obvious reasons, but I do want to emphasize that the work we do can have direct positive impact on people's lives and has done so already. Part of what keeps me sane in the corporate machine is the fact that I've somehow found myself in a position to nudge typically unfeeling processes into marginally improving the material conditions of normal people.


Oh but you're a dbzer0 user, that's a pro-AI instance!

Yes and no. The admin is upfront about this being a facet of technology they are interested in, in the technical sense. I am as well. Their focus is on mass proliferation of this stuff with user control. I can't say I share their views on this tech era to a tee, but it does not give me the heebie jeebies the way mainstream machine learning worship does. They also seem horrified at the social phenomenon that is modern "AI", so... it's not that big of a deal. I don't hate the tech when it's in a whitepaper or running on a university server semantically indexing its digital library. I hate it when it kills the web and the brains of the people around me.

AI worship and AI financing is also a bit different in the Middle East, but this is not the place for me to complain about that. Let's just say there's layers. Let's just say a lot of shit keeps me up at night.

[–] 18107@aussie.zone 20 points 3 days ago* (last edited 3 days ago) (2 children)

AI is a mile wide and skin deep.
It will tell you many things about a wide variety of topics, but it can only provide answers that appear correct on the surface.

Another analogy: ask someone to multiply two three-digit numbers in their head and write the answer in less than five seconds. Most people can guess that the answer will have about six digits, and most people can write down a random six-digit number. Very few are capable of checking whether the given answer is correct.
An AI gives you the equivalent of that six-digit number. If you don't know the answer, it looks impressive. It's only when you are capable of finding the answer for yourself that you realise the AI is usually wrong.
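To make the analogy concrete (the numbers here are arbitrary, purely illustrative): a random number with the right digit count looks like a plausible answer, and only actually doing the multiplication tells you whether it's right.

```python
import random

a, b = 347, 862
guess = random.randint(100000, 999999)  # "plausible": right number of digits
actual = a * b                          # 299114, found by actually multiplying

print(f"guess={guess} actual={actual} correct={guess == actual}")
```

The guess passes a surface check (six digits, looks like a product of two three-digit numbers) while being wrong essentially every time.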

LLMs are made to be really good at language. They are also made to be confident. They will always give well-written answers with the highest confidence.
If you want to rephrase an email or improve a resume, an LLM can give valuable feedback on various snippets. That doesn't mean it's always right, and it doesn't mean you should always throw away what you have in favour of the LLM output.

One of the biggest downsides I've personally experienced (and you've made reference to) is gradually falling out of the practice of thinking. Thinking is a skill that takes constant practice, and it's really easy to get into the habit of relying on an AI instead. In less time than you'd expect, you're out of practice and unable to do simple tasks that used to be easy.
This wouldn't be a big deal if AI worked all the time, but in any case where it can't give an answer, you can no longer fill in the blanks.

In programming you have 3 tiers of errors: compiler errors, runtime errors, and logic errors.
The easiest is compiler errors - the compiler can often tell you exactly how to fix it. Runtime errors are harder to identify, but an AI can help to resolve them.
The hardest is logic errors. These do not crash the program and do not notify you of their existence. An AI will usually not notice these errors.
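As a toy illustration (a made-up function, not anything from this thread), here is the kind of logic error that runs cleanly, produces output, and is simply wrong:

```python
def average(values):
    total = 0
    # Logic bug: range(1, len(values)) silently skips the first element.
    # No compiler error, no runtime error - the result is just wrong.
    for i in range(1, len(values)):
        total += values[i]
    return total / len(values)

print(average([10, 20, 30]))  # prints 16.666..., but the real mean is 20.0
```

Nothing flags this unless someone who knows the expected answer looks at the output, which is exactly the kind of checking that stops happening when neither the AI nor the prompter is thinking.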

When programming yourself, you often think of all the ways you could solve the task, and the act of thinking often brings edge cases and logic issues to mind. When asking an AI to do the work, the AI does not think and the prompter does not think, so no-one preempts any logic errors. This is already leading to massive amounts of technical debt, the extent of which is yet to be fully realised. One only has to look at recent Windows 11 bugs to see how quality is reduced and debugging time is increased whenever AI is used.

Writing code is 5% of the time/cost, and maintaining code is the other 95%. AI can reduce the writing time, but drastically increases maintenance costs as a side effect. If you want to run a business for any reasonable period of time, you want the exact opposite.

The use of AI actively de-skills workers, increases subtle mistakes, reduces proofreading and error checking, and makes the company reliant on a costly external tool that could change or disappear at any moment.

[–] Tar_alcaran@sh.itjust.works 4 points 3 days ago

It's only when you are capable of finding the answer for yourself that you realise the AI is usually wrong.

Exactly. The hard part is explaining that LLMs don't give the right answer; they give an answer that appears correct to the average person. Sometimes, for trivial stuff, a thing that appears correct is correct. For complex matters, you need to ask yourself: "if I asked my aunt to Google this, would I use that answer for my company?"

If the answer is yes, then by all means, use LLMs for your work.

[–] tinyvoltron@discuss.online 2 points 2 days ago (1 children)

I'm curious what you think of using AI for one-time tasks, something that won't have to be supported later. Like a grand poobah says "scrape all the code in our GitHub instance looking for XYZ". Not that specifically, but some kind of one-off. That's usually where I use it. I can slap together something that mostly works in much less time than I could even type it, then spend a little time fixing weirdness. As soon as I run the script I move on and never look at it again.
I've had success with this approach. I would never let AI write something I plan on using in a prod setting.
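For what it's worth, that kind of throwaway scan is often small enough to write by hand too. A minimal sketch, assuming local clones under a `repos/` directory and a made-up pattern to hunt for (both the directory name and the regex are placeholders, not anything from this thread):

```python
import os
import re

PATTERN = re.compile(r"requests\.get\([^)]*verify=False")  # hypothetical thing to find
ROOT = "repos"  # assumed directory containing local clones

def scan(root):
    """Walk every .py file under root and collect lines matching PATTERN."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if PATTERN.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # throwaway script: just skip unreadable files

    return hits

for path, lineno, line in scan(ROOT):
    print(f"{path}:{lineno}: {line}")
```

Run once, eyeball the output, delete. The maintenance-cost argument upthread mostly doesn't apply here, which is probably why this use case feels different.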

[–] 18107@aussie.zone 2 points 2 days ago

If it really is a simple one off project in a language I'll never use again, I'll use AI.

The downside is that if I didn't use AI, then I would have learnt the basics of the language and would be able to do the next project much faster. Now, if I want to do another project in the same language, I'm starting from the beginning.

It depends entirely on how much you value learning.