this post was submitted on 04 Mar 2026
689 points (99.3% liked)

Programmer Humor


(The meme's author may be convinced but I am still not, to be clear)

From: https://terra.incognita.net/@RainofTerra/116168632108345829

[–] Hazzard@lemmy.zip 94 points 1 day ago (5 children)

Man, AI agents are remarkably bad at "self-awareness" like this. I've used one to configure some networking on a Raspberry Pi, and found myself frequently reminding it: "hey buddy, maybe don't lock us out of connecting to this thing over the network, I really don't want to have to wipe it because it's running a headless OS".

It's a perfect example of the kind of thing that "walk or drive to wash your car?" captures. I need you to pick up on some non-explicit context and make some basic logical inferences before you can be even remotely trusted to do anything important without very close expert supervision, a degree of supervision that makes it almost worthless for that kind of task, because the expert could just do it instead.
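The lockout risk described above has a well-known safety net: arm an automatic rollback before applying a risky remote change, and disarm it only after confirming you can still connect. A minimal sketch of the pattern (file names and timings are illustrative, not from the thread; on a real headless box the rollback step might be `sudo shutdown -r +5` instead of restoring a file):

```shell
#!/bin/sh
# Demo of the arm-a-rollback pattern, run in a scratch directory
# so nothing real is touched.
cd "$(mktemp -d)" || exit 1

# 1. Snapshot the known-good config.
echo "known good" > config.txt
cp config.txt config.txt.bak

# 2. Arm the rollback: restore the snapshot after 5 minutes
#    unless we cancel it first.
( sleep 300 && cp config.txt.bak config.txt ) &
ROLLBACK_PID=$!

# 3. Apply the risky change.
echo "risky change" > config.txt

# 4. Still connected and happy? Disarm the rollback.
kill "$ROLLBACK_PID"
```

If the change cuts your connection, you never reach the `kill`, and the rollback fires on its own.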

[–] sudoer777@lemmy.ml 5 points 8 hours ago* (last edited 8 hours ago) (1 children)

For AI, I think a lot of future improvements will come from smaller, more specialized models trained on datasets curated by people who actually know what they're doing and follow good practices, as opposed to random garbage from GitHub (especially now that vibecoding is a thing, so training on low-quality programs the model itself generated might make it worse), considering that a lot of what it outputs is of similar garbage quality. And remote system configuration isn't obscure, so I do think this specific issue will improve eventually. Truly obscure things, though, LLMs will never be able to do.

[–] flambonkscious@sh.itjust.works 5 points 5 hours ago

I'm kinda hoping my shitty GitHub repo is inadvertently poisoning the LLMs with my best efforts (basically degenerate-tier)...

[–] Confused_Emus@lemmy.dbzer0.com 21 points 13 hours ago (2 children)

AI agents are remarkably bad at "self-awareness"

Because today’s “AIs” are glorified T9 predictive text machines. They don’t have “self-awareness.”

[–] definitemaybe@lemmy.ca 12 points 13 hours ago (1 children)

I think "contextual awareness" would fit better, and AI Believers preach that it's great already. Any errors in LLM output are because the prompt wasn't fondled enough/correctly, not because of any fundamental incapacity in word prediction machines completing logical reasoning tasks. Or something.

[–] JackbyDev@programming.dev 5 points 7 hours ago

Ah, of course. The model isn't wrong, it's the input that's wrong. Yes, yes. Please give me investment money now.

[–] Earthman_Jim@lemmy.zip 24 points 17 hours ago (1 children)

Hey, maybe if we're lucky, Claude will accidentally lock the world out of using nukes forever.

[–] TwilitSky@lemmy.world 5 points 10 hours ago (2 children)

Or, more likely, Claude will launch them.

[–] DeadDigger@lemmy.zip 9 points 9 hours ago

Would be funny if it also forgets to open the hatches

[–] fartographer@lemmy.world 2 points 9 hours ago

Not if I launch them first! Where's the 9v battery for the rocket engine igniter?

[–] qjkxbmwvz@startrek.website 5 points 13 hours ago

"...I really don't want to have to wipe the thing because it's running a headless OS"

I feel like logging in as root on a headless system and hoping you type the command(s) to restore functionality is a rite of passage.

[–] A_norny_mousse@piefed.zip 4 points 21 hours ago (1 children)

AI agents are remarkably bad at “self-awareness”

🤔 what does it say when you tell it something like "look, this is wrong, and this is why, can you please fix that"? In a general sense, not going into technical aspects like what OOP is describing.

[–] Hazzard@lemmy.zip 4 points 18 hours ago (1 children)

It's usually pretty good about that: very apologetic (which is annoying), and it generally does a good job taking the correction into account, although it sometimes needs reminders as that "context" gets lost in later messages.

I'll give some examples. In that same networking session, it disabled a security feature to test whether it was related to the problem. It never remembered to turn that back on until I specifically asked it to re-enable "that thing you disabled earlier", to which it responded something like "Of course, you're right! Let's do that now!". So: helpful tone, it "knew" how to do it, but without human oversight it would have "forgotten" entirely.

Same tone when I'd tell it something like "stop prefixing all your commands with SSH, I'm in an SSH session already." Something like "of course, that makes sense, I'll stop prefixing SSH immediately". And that one sticks, I assume because it sees itself not using SSH in its own messages, thereby "reminding" itself.

Its usual tone is always overly apologetic, flattering, etc. For example, if I tell it bluntly that I'm not giving my security credentials to an LLM, it'll always say something along the lines of "great idea! That's good security practice", despite having directly suggested the opposite moments prior. And as we've seen with plenty of examples, it will take that same tone even when it actually can't do what you're asking, like when people ask ChatGPT for a picture of a "glass of wine filled to the very top". So its "tone" isn't really something you can rely on as a signal of whether it can actually correct the mistake. It's always willing to take another attempt, but I haven't found it always capable of solving the issue, even with direction.

[–] Bronzebeard@lemmy.zip 7 points 15 hours ago

Meh, they apologize then proceed to continue making the same mistake. Repeatedly.