this post was submitted on 03 Apr 2026

Ask Experienced Devs

I work at a giant tech company. I'm in the upper levels (the CTO is my boss's boss).

I think our C-suite has truly lost its mind around AI. Last year I thought their delusions were just optics for the market. I thought they were hedging bets, playing the short game for their RSUs, and I forgave any low-IQ moments and off-key messages as stemming from fear, caution, or divided attention after they let AI grifters infect their circle and feed them bad lines.

But lately I've been seeing stuff that's so catastrophically stupid, and so all-in, that I think they really have lost their minds.

I know how this works in big-co systems of people. Their stupidity is amplified a thousandfold at lower levels, like a game of crack the whip. I'm used to them firing legions of people for no reason other than deciding contractors are good this month, no wait, now they're bad. And lately they've been so incredibly, almost unbelievably stupid. It will take a year for this to trickle down from the top to everyone, but this whole company has already started recentering, orbiting a new sun. This new sun has nothing to do with making money, making things people want to buy, or reality. Our whole center of gravity is shifting to internal optics and politics around AI. I can already see pathological low-skill people from long-ago enshittified companies starting to win this new game.

AI is not going to kill this company. This company will kill itself because it has lost its mind thinking it needs to be "AI-native."

Is this everywhere, or can I go somewhere that's still smart? Is this a temporary cycle? Are people just freaking out because of WWIII?

top 27 comments
[–] grrgyle@slrpnk.net 2 points 46 minutes ago

Been seeing "AI-native" too, and I work at a university. I forgive them for following the money, but the other techs and I have to constantly remind leadership that the average person is not excited about LLMs, and that going all-in could do reputational and credibility damage to the institution we currently work at.

[–] wizardbeard@lemmy.dbzer0.com 2 points 1 hour ago* (last edited 1 hour ago)

I work for a financial company where our largest clients look to us for fiscally conservative actions. Thankfully, that trickles down to create an IT division that is aware of current trends, but usually doesn't chase the bleeding edge.

Let the risk takers tank the challenges, and we'll come in after and benefit from what's already been figured out.

We just opened Copilot chat to our users late last year. We discovered, and squashed, a whole slew of people trying to shadow IT their way into "supercharging their workflows with AI". A few were fired for shoving private info into public and unapproved models, but not nearly enough.

[–] Kissaki@programming.dev 1 points 1 hour ago

I work for a small ~30-person company with various customers, including some very big names. We're very deliberate about where tools like those could help us, where it's worth the exploration and investment. We want to be innovative and have the expertise, but at the same time, be reasonable and sound. We're also very conscious of data sharing and safeguards, in part out of necessity, because we can't just share our customers' code or data with third parties.

Excitement, commitment, use, and hopes of using AI tools differ between colleagues. What we can use and how differs between projects.

So yes, there are definitely other kinds of companies and environments out there.

[–] Get_Off_My_WLAN@fedia.io 5 points 2 hours ago (1 children)

My company made the announcement during a meeting that we're going full AI. Our website is going to get sloppified, our software is going to get sloppified, and it's going to let our clients sloppify even harder. They're expecting everybody to start using AI in our workflows somehow, not just engineers. They're going to look at the numbers and ask questions if anybody isn't using Claude enough.

I'm protected for now because my environment is a bit more locked down, so they can't expect me to use AI, yet.

They can fire me when that day comes. I refuse to use generative AI.

[–] SlykeThePhoxenix@programming.dev 2 points 2 hours ago* (last edited 2 hours ago)

"Let them eat slop" - Queen Marie-Antoinette

(Yes I know she didn't actually say it but let me have it)

[–] Pucker8736@piefed.social 5 points 3 hours ago (1 children)

Yes. Also big tech

We've been commanded to rework all workflows to be agentic, whatever the hell that means.

[–] grrgyle@slrpnk.net 1 points 41 minutes ago

It means prompts are triggered by events like crons or webhooks, and using text files to keep models from losing "context" 🙄
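In practice the whole pattern boils down to something like this (a minimal sketch; `ask_model`, `on_event`, and the file name are all made up for illustration, with the actual LLM call stubbed out):

```python
# "Agentic" workflow sketch: an event (a cron tick, a webhook) triggers a
# prompt, and a plain text file serves as the model's persistent "context".
from pathlib import Path

CONTEXT_FILE = Path("context.txt")  # illustrative name

def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"response to: {prompt!r}"

def on_event(event: str) -> str:
    # Re-read the accumulated context so the model doesn't "forget" it.
    context = CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else ""
    prompt = f"{context}\nEvent: {event}\nWhat should happen next?"
    answer = ask_model(prompt)
    # Append this exchange so the next trigger sees it.
    with CONTEXT_FILE.open("a") as f:
        f.write(f"Event: {event}\nAnswer: {answer}\n")
    return answer

print(on_event("cron: nightly report"))
```

That's it. That's the revolutionary architecture: a scheduler, a prompt, and a file.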

[–] ClockworkOtter@lemmy.world 6 points 3 hours ago

I think somewhere in my organisation there's a very strong bulwark against AI encroachment, and I suspect it's our IT security because data protection and privacy are pretty much their top priority (we take it seriously in general). I don't think I've read anything published or sent internally that reads like AI slop.

[–] Ephera@lemmy.ml 9 points 5 hours ago (1 children)

Not as bad here, but they have started setting goals for how much AI we should use. No one knows what the fuck that means. Just this magical belief that any use of AI somehow increases productivity, despite studies showing that it can actually lower productivity.

[–] baggachipz@sh.itjust.works 1 points 1 hour ago

We have a goal for everybody in the company, from sales to senior dev: “Demonstrate the use of AI in daily activities”. What the fuck does that even mean? Like, is the office manager supposed to answer phones with ChatGPT?

[–] fleck@lemmy.world 10 points 5 hours ago (1 children)

At a standup meeting, the CEO asked whether we use AI, and I was the only one who said that I don't use it at all, or only very rarely, which started a little discussion. Overall, their position is somewhat moderate. They do fall for the hype a lot (especially with the recent Claude stuff), but it did not seem that using it was a requirement for us. They were curious why I don't embrace it more, and I said that I can feel myself getting more and more stupid when using these tools, due to the mental offloading. This seemed to resonate a bit; at least I could feel that my coworkers in the room got my point, despite remaining silent.

Coworker recently came to my desk and jokingly asked why I was typing out code by hand when I could ask Claude to generate it for me, but there was also a bit of seriousness to it, so I cringed a lot

[–] regedit@lemmy.zip 4 points 2 hours ago (1 children)

I said that I can feel myself getting more and more stupid when using these tools, due to the mental offloading.

It takes a lot of self-reflection to notice how well you understood a thing before using genAI, and how quickly that understanding starts to disappear once you use genAI a lot more. I, too, noticed a similar reduction in my ability and understanding of topics the more I used genAI. Now I only use it as a last resort, or when searching online would return too many unrelated results because my description of the problem is too generic.

A senior web dev is all about it at my work and I can't really stand him nor genAI so I'm glad I don't work directly with him or his team!

[–] fleck@lemmy.world 2 points 1 hour ago

I really noticed it when, instead of thinking about how to solve X, my mind started phrasing a prompt to ask how to solve X instead. Does that make sense? I found this to be a dangerous, almost evil thing, and I'm sure it is the same with my coworkers, even if they don't like to admit it. I still do this sometimes but am doing my best to unlearn it. And the crazy thing is, I did not even use it that much, only very occasionally, similar to what you mentioned you do. I do not wish to know how cooked the brains of "vibe coders" are by now...

[–] cbazero@programming.dev 23 points 7 hours ago (1 children)

More than 95% of management is utterly incompetent. All they do is chase the newest grift, because they are neither capable of actually evaluating something based on necessity or need, nor willing to. Examples of this are Agile, cloud (AWS, Azure, etc.), AI, and many more.

[–] Retail4068@lemmy.world -1 points 6 hours ago

Old man still yelling at the cloud 🤣

[–] bigfish@lemmy.dbzer0.com 33 points 9 hours ago (2 children)

Yep. Big healthcare, same thing.

It's like they don't realize that AI, at best, is as good as having a mailroom full of overeager interns. Who in their right mind would want to put that in front of clients?! Or worse, have it run critical business systems.

[–] grrgyle@slrpnk.net 1 points 39 minutes ago

It blows my mind how many of the dress-up-for-meetings crew can't see how bad and, like, unprofessional AI-generated content looks

[–] pelespirit@sh.itjust.works 21 points 8 hours ago (1 children)

AI at best is as good as having a mailroom full of overeager interns.

That is the best description of AI that I've seen.

[–] Pucker8736@piefed.social 3 points 3 hours ago

Even better, it's overeager interns that don't learn.

[–] Maddier1993@programming.dev 6 points 6 hours ago* (last edited 6 hours ago) (1 children)

Any sort of tech company has always been a headless chicken. It's just that when software developers were the hot thing on the horizon, we were oblivious to the stupidity that thrives in corporate.

If you take a business that maximizes profit, and sample opinions from the business side about the business domain, you would be forgiven for assuming the replies are nothing short of genius. However, the facade of brilliance falls apart the moment you sample anything adjacent to the business, or unrelated to it. You realize that experience in making a lot of money does not translate well to other endeavors where making money is not the main concern.

Now that we have been milked dry like aphids in an ant farm and all but kicked out of the money-making complex, we start to realize we were just cattle all along, and that the praise for our stature in the industry as the proverbial "genius" of the entire operation was just flattery to assuage whatever discomfort we felt from the corporate bullshit. They succeeded in avoiding drawing our ire towards the incredibly narrow-sighted and hollow endeavor that is making the line go up exponentially.

Now that facade has fallen face first at the dawn of the LLM era. The arrogance that hid itself from software engineers and developers, but was all too apparent to blue-collar workers and "lesser" corporate functions, has shown its face to us, the enlightened morons.

Thanks for coming to (hehe) my TED talk

[–] baggachipz@sh.itjust.works 1 points 1 hour ago

I’ve been in the software industry for 30 years. There has always been an active disdain for programmers, since we have been relatively scarce and moody and mysterious to the suits. Every so often, another product comes along which purports to finally free the organization of these pesky programmers and let the normies do the work faster and better. Management falls hard for the grift, and things end up worse off as a result. We’re in another one of those right now.

All that “genius” bullshit has always been a thin veneer, said through gritted teeth. They’ll come crawling back soon enough. But it does suck in the meantime.

[–] onlinepersona@programming.dev 15 points 9 hours ago* (last edited 9 hours ago) (1 children)

I'm cleaning up AI slop but have been tasked with using more AI to do it faster and deliver more features. Doesn't matter to me. I'm getting paid 🤷 It'll be somebody else's problem soon enough, if the company survives.

My eye will be on the job market soon and I do wonder what it will be like. It seems like AI is eroding the minds of many.

[–] tyler@programming.dev 17 points 8 hours ago (1 children)

It’s terrible; every single job posting requires using AI. That’s how you can tell it’s all a house of cards. Imagine reading a job post that said you’d be required to use JetBrains IDEs. You’d be thinking, “what the hell, why is this in a job post??”

[–] kunaltyagi@programming.dev 2 points 2 hours ago

Some teams have plugins and workflows around particular IDEs. Though I don't think any team has a workflow around AI that can't be done with human intelligence

[–] halcyoncmdr@piefed.social 6 points 8 hours ago

Nope, definitely taken the position that LLM-based "AI" is actively harmful, at best.

That being said... It is a casino. So it makes sense they are generally pretty risk-averse.

[–] red_tomato@lemmy.world 3 points 7 hours ago

I’m lucky to be at a company that hasn’t gone too deep down the AI rabbit hole. They’ve set up Claude Code for us and encourage us to use it if we want, but that’s about it.

[–] OwOarchist@pawb.social 6 points 9 hours ago