this post was submitted on 28 Feb 2026

TechTakes


cross-posted from: https://lemmy.dbzer0.com/post/64538524

So this is a bit of a weird one. I have a tech background, which gives me some extra authority, but I'm working in a non-tech field at the moment. They're taking this event pretty seriously; they're flying me out from Beirut to one of their Gulf offices, so I'd hope they'll make the trip worth it and not just be listening for AI boosterism.

I'm one of the few people on my team who doesn't use these tools every day. The only benefit I've ever gotten was as an alternative thesaurus when search engine results don't give me exactly what I want, and as a supercharged content-aware fill on my personal laptop when I'm dicking around editing images. I use a low-memory local model from 2022. This is all I need or want. I also understand the tech on a much more fundamental level thanks to my background. Not to give out too much information, but we built simulated pseudo tensor cores on an FPGA in university. I'm not a machine learning engineer, but I understand what this thing is better than anyone else in that meeting will, and certainly better than anyone with AI in their job title.

I have to temper the fuck out of my tone and frustration to make sure I can get my messaging across. I also need to be careful not to come off as panicking about AI "stealing" my job. I have a completely different career path from all those business people, so I don't know if that's something on their minds. I also, you know, work from another country. I'm the cheap offshore labor.

I'm obviously not going to word all of the below this way, is my point. I have to pick my battles as well, because most of the people with serious authority haven't had a real job in years and think their magic workplan generator and semi-reliable banana bread ratio calculator are the future of work, humanity, and consciousness.

Within my organization, I've seen people with years of knowledge and experience throw it out the window because of the magic text box in their pocket. I've seen people with very passable English push their work through a slop extruder to "make the wording more natural", when it actually makes it look more generic. I've also had experiences where someone in the chain of custody of my hard work did this to something I'd made, making the information within it more generic and diluting my effort.

Company policy has banned external chatbots because Microslop Copilot is "more secure". I used to use GPTZero as a detection tool: I'd put in particularly egregious paragraphs and send a screenshot to whoever "wrote" it, as in "Hey, this reads really badly and I expected actual tailored analysis here. Please write this yourself; if it ends up shorter, that's okay." Slop makes our hard work look cheap! But GPTZero offers an LLM service of its own (I think a "de-roboticization" service), and it has fucking GPT in the name, so it's blocked now.

However, despite the ban, I'm still getting ChatGPT links sent to my WhatsApp from superiors asking me if I "checked this" or "if we're covering all of this", with the most generic-ass information in there. The corpus of the web is largely Western, and this shit just does not apply here. You know it doesn't apply here. If we were having a face-to-face conversation and I suggested this stuff, you'd be shocked, boss man. What the fuck.

I hear people in meetings and in the offices when I fly in openly talk about ChatGPT being "better" and using it on their phones. I'm not fighting for Copilot's market share here; I want these people to use their brains!

So many little things as well. Feedback on my work comes back more vague now, like someone brute-forcing a prompt instead of actually, you know, being part of the process of doing the work. People who need time to write English, or who aren't confident in their English, are not gradually improving their language skills. Some interns and juniors don't learn anything, and are outright awful at looking up obscure information the old-fashioned way.

Over the last few months, I've helped push some work friends off paying for ChatGPT, after relentlessly bombarding them with "You already know this", "This sounds off, you worded it better to me over lunch", "This contradicts our call with those guys, don't you remember the argument you made?", that kind of thing. I find it funny that the antidote to this shit is to be 1% more conscious about your work.

I can also probably score a lot of brownie points by overemphasizing my mini PC / RasPi homelab situation and using it to do "AI", kind of reassuring them that I'm not insulting their digital false idol.

Thing is, with war tensions (usually it's them asking about my safety, ha), this meeting has been pushed forward, but they seem adamant about having it.


I would prefer not to enter job specifics for obvious reasons, but I do want to emphasize that the work we do can have direct positive impact on people's lives and has done so already. Part of what keeps me sane in the corporate machine is the fact that I've somehow found myself in a position to nudge typically unfeeling processes into marginally improving the material conditions of normal people.


Oh but you're a dbzer0 user, that's a pro-AI instance!

Yes and no. The admin is upfront about this being a facet of technology they're interested in, in the technical sense. I am as well. Their focus is on mass proliferation of this stuff with user control. I can't say I share their views on this tech era to a tee, but that doesn't give me the heebie-jeebies the way mainstream machine learning worship does. Also, they seem horrified at the social phenomenon that is modern "AI", so... it's not that big of a deal. I don't hate the tech when it's in a whitepaper or running on a university server semantically indexing its digital library. I hate it when it kills the web and the brains of the people around me.

AI worship and AI financing is also a bit different in the Middle East, but this is not the place for me to complain about that. Let's just say there's layers. Let's just say a lot of shit keeps me up at night.

top 8 comments
[–] CinnasVerses@awful.systems 1 points 48 minutes ago

Baldur Bjarnason has a whole book on the business risks of LLM use https://illusion.baldurbjarnason.com/

[–] reallykindasorta@slrpnk.net 17 points 21 hours ago

It sounds to me like you'll be an excellent representative without a lot of prep. Your honest perspective and examples are balanced, and you seem to understand that CEO vibe-based business decisions can be swayed by practical considerations.

My non-tech office job has pushed AI a bit, but mostly backed off when I demonstrated that I had considered where I could integrate it, but that the technology wasn't stable/reliable enough to build into workflows that are supposed to be consistent year over year. The models are constantly being updated, so you don't get the same behavior day to day. This issue would be alleviated somewhat with a custom model, where the question becomes whether it's more efficient/cost-effective to train a custom model vs. whatever the current workflow is.
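One way to make that instability concrete for a business audience is a snapshot test on any model-dependent workflow step. This is a minimal sketch: the `summarize` function here is a hypothetical stand-in for whatever hosted model call a workflow uses, made deterministic only so the example is runnable. The point is the shape of the argument, not a real API:

```python
# Snapshot test: record the output of a model-dependent workflow step once,
# then check that today's output still matches. With a self-hosted,
# version-pinned model the check stays green; with a hosted model that is
# silently updated, it starts failing with no change on your side.

import hashlib

def summarize(text: str, model_version: str) -> str:
    # Hypothetical stand-in for a hosted model call. Deterministic only
    # because this fake "model" is a fixed function of its inputs.
    digest = hashlib.sha256(f"{model_version}:{text}".encode()).hexdigest()
    return f"summary-{digest[:8]}"

def snapshot_check(text: str, expected: str, model_version: str) -> bool:
    """Return True if the current output still matches the recorded snapshot."""
    return summarize(text, model_version) == expected

doc = "Quarterly report text"
snapshot = summarize(doc, model_version="v1")  # recorded when the workflow was built

# Same pinned version: the step is reproducible.
assert snapshot_check(doc, snapshot, model_version="v1")

# Vendor silently ships "v2": the same input no longer matches the snapshot,
# and every downstream step that assumed stable behavior is now suspect.
assert not snapshot_check(doc, snapshot, model_version="v2")
```

Under this framing, "the model got updated" is no longer an abstract complaint; it's a failed regression test on a process that's supposed to be consistent year over year.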

Another big concern in my opinion is company/project longevity. AI bubble aside, tech companies are constantly changing their structures and priorities. If we adapt our workflows to piggyback on this or that model and it gets cut by Google or whoever, we're left in a bit of a tight spot.

[–] slazer2au@lemmy.world 4 points 17 hours ago

I would bring up that AI output is not copyrightable, so anyone can take your employees' work and say it's their own. Not sure how your legal or manglement teams would like that.

Also, I would question a language tool that fails 30% of math questions.
https://www.theregister.com/2026/02/26/ai_models_get_better_at/

[–] fiat_lux@lemmy.world 3 points 19 hours ago

Without knowing what sort of work, either conceptually or on what tech layer (if it's tech based at all), it's very difficult to be of direct help. My advice would be to talk to the senior workers in your company (obviously not managers) and set up some individual chats where you make it very clear their anonymity will be respected. Or maybe you can send out a survey?

You did mention though that the work you do helps people. I would dig a bit further into that, and ask which coworkers/customers the LLMs help, either directly or indirectly. Because the research is indicating that if you're from a less privileged demographic, that can change substantially.

Here's a small set of articles and papers which might give you ideas for topics you might explore relevant to your area:

There seems to be an increasing focus on the MENA area especially, and that might mean you should look into how geopolitical guardrails affect responses, but I don't have any good links at hand for that.

[–] jaschop@awful.systems 3 points 18 hours ago* (last edited 18 hours ago)

I would assume that you will only get across a very limited amount of information. If you pack them with details, they will zone out; if you can focus on very few arguments, something might stick. If you have the background knowledge to bring up points as needed, that's great of course.

If I were trying to sway some business people, I'd try this angle: AI intensification creates a dependence on your AI model vendor and endangers your human capital. Your AI vendor is knowingly selling you broken goods so they can satisfy their desperate bubble economics. Your people are (on average) dabbling with AI, but diving into it too much can cause mental health issues (an in-progress paper tries to look at this [1]). And furthermore, you're endangering the maintenance and transfer of critical know-how, because people are burying critical business processes in slop that sort of works but no one understands (throwback to the '80s, when similar things happened with classical automation [2]).

[1] https://archive.is/20260212071631/https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it [2] https://www.sciencedirect.com/science/article/abs/pii/0005109883900468

[–] Paragone@lemmy.world -1 points 19 hours ago (1 children)

You might want to ask one of your legal people about the liability exposure you're collectively in for when an AI "hallucination" gets you cracked/hacked..

To me, the single most-important point is that if YOU or ME sign-off on something, then WE, the INDIVIDUAL have either authored it or vetted it.

That's untrue, now, & there are consequences of that.

I'd want a leaderboard of people fired for signing-off on AI-slop.

I expect it'd get full, fast, & I expect that at least 85% of the people hired into knowledge-work would be fired on that rule, simply because the brainwashed population can't understand why that would ever be a problem,

but if the rule was held to through the years, it'd turn into a stupendous competitive advantage: authenticity & integrity that NO competitor would duplicate.

Just my take..

_ /\ _

[–] Paragone@lemmy.world -5 points 19 hours ago (1 children)

LMAO..

Get some AI chatbots to help you identify points for your talk!

& "Presenting to Win" by Weissman, along with his "Questions Under Fire" ( or whatever that's called ) both are recommended, too.

Get a home-run, Hoomin.

( :

_ /\ _

[–] mawhrin@awful.systems 1 points 7 hours ago

you're misreading the room.