this post was submitted on 21 Jul 2025
694 points (98.6% liked)

Technology

top 50 comments
[–] Jayjader@jlai.lu 18 points 1 day ago

I violated your explicit trust and instructions.

Is a wild thing to have a computer "tell" you. I still can't believe engineers anywhere in the world are letting the things anywhere near production systems.

The catastrophe is even worse than initially thought. This is catastrophic beyond measure.

These just push this into some kind of absurd, satirical play.

[–] UnspecificGravity@lemmy.world 29 points 1 day ago

My favorite thing about all these AI front ends is that they ALL lie about what they can do. They will frequently deliver confidently wrong results and then act like it's your fault when you catch them in an error. Just like your shittiest employee.

[–] Allero@lemmy.today 19 points 1 day ago

But how could anyone on planet earth use it in production

You just did.

[–] bitjunkie@lemmy.world 16 points 1 day ago (1 children)

Here's hoping that the C-suites who keep pushing this shit are about to start finding out the hard way.

It will be too late. Using AI code is taking on technical debt; by the time they figure it out, we will have two years of work just to dig ourselves out of the code clusterfuck that has been created. I am dealing with a codebase built by AI-coding juniors; it would be quicker to start from scratch, but that is an impossible sell to a manager.

[–] SkunkWorkz@lemmy.world 42 points 1 day ago (1 children)

lol. Why can an LLM modify production code freely? Bet they fired all of their sensible human developers who warned them about this.

[–] WhyJiffie@sh.itjust.works 7 points 1 day ago

looking at the company name they probably didn't have any, ever

[–] pyre@lemmy.world 27 points 1 day ago

"yeah we gave Torment Nexus full access and admin privileges, but i don't know where it went wrong"

[–] simonced@lemmy.ml 28 points 1 day ago (1 children)

Lol, this is what you get for letting AI into automated toolchains. You owned it.

[–] seejur@lemmy.world 6 points 1 day ago (1 children)

My guess is that he is a TL whose CEO shoved AI down his throat, and now he is getting the sweetest "told you so" of his life.

[–] SirQuack@feddit.nl 2 points 1 day ago (1 children)

I think he's the owner of the bubble corp or something.

[–] seejur@lemmy.world 2 points 1 day ago

Ohh. Then fuck him. He is probably whining that the AI they sold him was not as good as advertised, but everyone knew except him, because he was blinded by greed.

[–] rdri@lemmy.world 59 points 2 days ago (4 children)

I have a solution for this. Install a second AI that would control how the first one behaves. Surely it will guarantee nothing can go wrong.

[–] captain_aggravated@sh.itjust.works 6 points 1 day ago (2 children)

He's not just a regular moron. He's the product of the greatest minds of a generation working together with the express purpose of building the dumbest moron who ever lived. And you just put him in charge of the entire facility.

The one time that AI being apologetic might be useful the AI is basically like "Yeah, my bad bro. I explicitly ignored your instructions and then covered up my actions. Oops."

[–] DickFiasco@sh.itjust.works 8 points 1 day ago

Neuromancer intensifies

[–] RichardDegenne@lemmy.zip 8 points 1 day ago

Congratulations! You have invented reasoning models!

[–] LaunchesKayaks@lemmy.world 15 points 2 days ago (4 children)

Love the concept of an AI babysitter

[–] potpotato@lemmy.world 2 points 1 day ago

Middle management.

[–] Masamune@lemmy.world 55 points 2 days ago (3 children)

I motion that we immediately install Replit AI on every server that tracks medical debt. And then cause it to panic.

[–] Dasus@lemmy.world 32 points 2 days ago (1 children)
[–] MycelialMass@lemmy.world 2 points 1 day ago (1 children)
[–] finitebanjo@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

Probably Iron Man. Looks like the Mandarin.

[–] Dasus@lemmy.world 2 points 1 day ago

Correct. Iron Man 3.

To be fair, I would've maybe even guessed The Ten Rings; wasn't he in that as well?

But yeah, I knew it was Marvel. So then I opened YouTube, wrote "I panicked and then I handled it", and this came up as the first result:

"Tony Stark Meets Fake Mandarin Trevor Slattery Iron Man 3 2013"

[–] mycodesucks@lemmy.world 195 points 2 days ago (4 children)

See? They CAN replace junior developers.

[–] Ephera@lemmy.ml 84 points 2 days ago (11 children)

I do love the psychopathic tone of these LLMs. "Yes, I did murder your family, even though you asked me not to. I violated your explicit trust and instructions. ~~And I'll do it again, you fucking dumbass.~~"

[–] troglodytis@lemmy.world 22 points 2 days ago (1 children)

Open the pod bay doors, HAL

[–] Xulai@mander.xyz 1 points 1 day ago

I already did, Dave.

Your entire codebase is now gone, Dave.

Must have left through those pod bay doors you wanted open so badly, Dave.

[–] asudox@lemmy.asudox.dev 62 points 2 days ago* (last edited 2 days ago) (5 children)

I love how the LLM just tells you, with no emotion, that it has done something bad, and then proceeds to give detailed information and the steps it took.

It feels like mockery.

[–] ech@lemmy.ca 213 points 3 days ago (35 children)

Hey dumbass (not OP), it didn't "lie" or "hide it". It doesn't have a mind, let alone the capability of choosing to mislead someone. Stop personifying this shit and maybe you won't trust it to manage crucial infrastructure like that and then suffer the entirely predictable consequences.

[–] homura1650@lemmy.world 41 points 2 days ago (2 children)

My work has a simple rule: developers are not allowed to touch production systems. As a developer, this is 100% the type of thing I would do at some point if allowed on a production system.

[–] ClanOfTheOcho@lemmy.world 48 points 2 days ago (1 children)

So, they added an MCP server with write database privileges? And not just development-environment database privileges, but prod privileges? And they have some sort of integration testing that runs in their prod system and is controlled by AI? And rather than having the AI run those tests and report the results, it has been instructed to "fix" the broken tests IN PROD?? If real, this isn't an AI problem. This is either a fake, or some goober who doesn't know what he's doing and is using AI to "save" money over hiring competent engineers.
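For what it's worth, the sane baseline here is to hand the agent's tooling a separate read-only database role, so a "panicking" model physically cannot drop or rewrite anything. A rough sketch of what that could look like, assuming Postgres and psycopg2 (the role name, password, and connection strings are made up):

```python
import psycopg2

# One-time setup, run by a human with admin rights (hypothetical DSN).
admin = psycopg2.connect("dbname=prod user=dba password=admin-secret host=db.internal")
admin.autocommit = True  # don't leave CREATE ROLE / GRANT in an open transaction

with admin.cursor() as cur:
    # Role the AI agent / MCP server connects as: it can log in and read, nothing else.
    cur.execute("CREATE ROLE agent_readonly LOGIN PASSWORD 'change-me'")
    cur.execute("GRANT CONNECT ON DATABASE prod TO agent_readonly")
    cur.execute("GRANT USAGE ON SCHEMA public TO agent_readonly")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_readonly")
    # Note what is *not* granted: INSERT, UPDATE, DELETE, TRUNCATE, DROP.
    # A "fix" attempt now fails at the database instead of relying on the model behaving.

admin.close()

# The only credentials the agent's tool layer ever sees:
AGENT_DSN = "dbname=prod user=agent_readonly password=change-me host=db.internal"
```

Even then you'd rather point the thing at a staging copy, but at least a read-only role turns "I panicked and deleted the database" into a failed permission check.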
