this post was submitted on 10 Mar 2026
688 points (99.3% liked)

Technology


Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

top 50 comments
[–] Zink@programming.dev 29 points 3 hours ago (1 children)

"Huge rich company responsible for hosting like half of the fucking internet spent the last year pushing code to global-scale production without so much as a review by a senior engineer."

That's how I read that headline.

[–] Thermite@lemmings.world 5 points 2 hours ago* (last edited 2 hours ago) (1 children)

I read it as "now a senior developer will be at fault for all AI code." Do you think they will have time to review all that code properly and still do their jobs?

[–] sakuraba@lemmy.ml 1 points 1 hour ago

They will save time by making them go pee in bottles

[–] laranis@lemmy.zip 33 points 6 hours ago (1 children)

How in the glorious fuck was this not a thing from the start? In a system this big and this critical, all code should be reviewed by cognizant individuals. Anyone who thought an LLM would be perfect and not need code review has their head so far up their ass they can see through their pee hole.

[–] titanicx@lemmy.zip 12 points 5 hours ago

If you do this, you signal that the AI isn't ready for production, which limits your sales group's ability to market it. Which is, in reality, the actual case: AI sucks and should never be trusted.

[–] merc@sh.itjust.works 51 points 7 hours ago (3 children)

What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.

What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.

This is a great way to force senior devs to take the blame for things. But if they actually want to avoid outages rather than just assign blame for them, they'll need submitters to make small, self-contained changes that the submitter understands and can explain clearly. Wouldn't it be simpler just to say "no AI"?
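If a team did want to enforce "small changes only" mechanically, one option is a CI gate on diff size. This is purely a hypothetical sketch: the threshold, the branch name, and the whole approach are my assumptions, not anything the thread or Amazon describes.

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # hypothetical threshold; tune per team


def changed_lines(numstat: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        parts = line.split("\t")
        if len(parts) < 3:
            continue
        added, deleted = parts[0], parts[1]
        # Binary files report "-" instead of counts; skip those fields.
        if added.isdigit():
            total += int(added)
        if deleted.isdigit():
            total += int(deleted)
    return total


def main() -> int:
    # Compare the branch under review against main (branch name assumed).
    numstat = subprocess.run(
        ["git", "diff", "--numstat", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    n = changed_lines(numstat)
    if n > MAX_CHANGED_LINES:
        print(f"Change touches {n} lines (limit {MAX_CHANGED_LINES}); split it up.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this doesn't make reviews good, of course; it just makes the "thousands of plausible lines at once" failure mode harder to merge.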

[–] Joeffect@lemmy.world 6 points 3 hours ago* (last edited 3 hours ago) (1 children)

If you ask a writer what AI is good for, they'll say it's good for art, but never use it for writing, because it's terrible at writing.

If you ask an artist what AI is good for, they'll say it's good for writing, but never use it for art, because it's terrible at art.

[–] Mongostein@lemmy.ca 1 points 1 hour ago

Conclusion… it’s good at neither… or am I missing your point?

[–] Earthman_Jim@lemmy.zip 15 points 7 hours ago* (last edited 7 hours ago)

AI's greatest feature in the eyes of the Epstein class is the ability to shift responsibility. People will do all kinds of fucked up shit if they can shift the blame to someone else, and AI is the perfect bag holder.

Just ask the school full of little girls in Iran that was likely a target picked by AI using out-of-date information saying it was a barracks. Why bother confirming the target with current intel from the ground when nobody's going to take the blame anyway?

[–] monkeyslikebananas2@lemmy.world 1 points 7 hours ago (1 children)

Or I suppose add extra work by walking an AI tool through making small incremental changes.

[–] merc@sh.itjust.works 4 points 4 hours ago (1 children)

In my experience, LLMs suck at making smart, small changes. To do that, they'd need to "understand" the entire codebase, and that's expensive.
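The "expensive" part can be made concrete with back-of-envelope arithmetic. Every number below is an illustrative assumption of mine, not a figure from the thread:

```python
# Rough cost of an LLM "understanding" a whole codebase, illustrative only.
lines_of_code = 500_000    # a mid-sized service (assumed)
tokens_per_line = 10       # rough average for source code (assumed)
context_window = 200_000   # a typical large-model context window (assumed)

total_tokens = lines_of_code * tokens_per_line
fraction_fitting = context_window / total_tokens

print(f"codebase ≈ {total_tokens:,} tokens; "
      f"the window holds about {fraction_fitting:.1%} of it")
```

Under these assumptions the model sees only a few percent of the code at once, which is why small, surgical edits that depend on global context are exactly where it struggles.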

[–] monkeyslikebananas2@lemmy.world 2 points 3 hours ago

Yeah that’s what I mean by extra work. I can make the change myself or I can argue with claude code until it does what I want.

[–] nightlily@leminal.space 11 points 6 hours ago (1 children)

If my job ends up being reviewing AI code spammed at me by vibe coding juniors all day, I’m joining a nunnery.

[–] Repelle@lemmy.world 3 points 2 hours ago

If nunneries are as gay as I always imagined in my head, I’m in.

[–] WraithGear@lemmy.world 13 points 7 hours ago* (last edited 7 hours ago) (1 children)

Or, hear me out: they could build it themselves so they don't have to chase hallucinations. As a matter of fact, let's cut the AI out of the project entirely and leave it to summarizing emails.

[–] laranis@lemmy.zip 7 points 6 hours ago

This, 1000x. You think that senior dev got to that level hoping one day all they'd have to do is evaluate randomly generated code? No! They want to create, build, design, integrate, share. Cut out the useless middle step and get back to the work these professionals have dedicated their careers to.

[–] Bytemeister@lemmy.world 11 points 7 hours ago (2 children)

AI is an assistant, not a replacement. It amazes me that Amazon, Microsoft, Google, and all these "tech leader" companies are going to make the same tech fuckup multiple times.

[–] Earthman_Jim@lemmy.zip 3 points 7 hours ago* (last edited 7 hours ago)

If only the lessons were painful for them and not just us/the workers.

[–] laranis@lemmy.zip 1 points 6 hours ago

Wonder what the turnover rate in executives is. I bet it is about 8 years.

[–] pedroapero@lemmy.ml 62 points 10 hours ago* (last edited 10 hours ago) (1 children)

Yes, so now when there's a success, it gets attributed to AI. When there's an outage, that's the fault of humans not reviewing correctly. These senior engineers will get fucked in all scenarios.

[–] IratePirate@feddit.org 41 points 10 hours ago* (last edited 10 hours ago) (6 children)

Precisely. From Cory Doctorow's latest, very insightful essay on AI, where he talks about the promise of AI replacing 9 out of 10 radiologists:

"if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[–] DarrinBrunner@lemmy.world 10 points 9 hours ago

Couldn't they, I don't know, just go back to people writing the code, and stop using AI to do something it clearly can't handle? Just an idea.

I guess they've invested (thrown) so much money into this thing that they're determined to make it work. Also, I know they've gone insanely deep into debt, and if it doesn't work they're going to lose an eye-watering amount of money; perhaps the bubble bursting will be the catalyst that brings down the entire world economy.

Oh, so yeah, they do have great incentive to make this work, but I don't see it happening. As usual, they fuck up and the rest of us pay the bill. None of the billionaires will suffer any more than loss of face over this. Even if they've broken laws, all they ever get is a small fine and a slap on the back, "Better luck, next time, ol' boy!"

[–] Simulation6@sopuli.xyz 35 points 15 hours ago (1 children)

I always treated a code review like a dissertation defense. Why did you choose to implement the requirement this way? Answers like "I found a post on Stack Overflow" or "the AI told me to" would only move the question back one step: why did you choose to accept this answer?
I was a very unpopular reviewer.

[–] PlutoniumAcid@lemmy.world 9 points 12 hours ago

Likely, but you did not let poor code pass. That is valuable.
