this post was submitted on 27 Feb 2026
938 points (99.4% liked)

Programmer Humor

[–] NeatNit@discuss.tchncs.de -1 points 16 hours ago (3 children)

What would be a "nearly impossible" task in this post-AI world? Short of the provably impossible tasks like the busy beaver problem (and even then, you would be able to make an algorithm that covers a subset of the problem space), I really can't think of anything.

[–] MoffKalast@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago) (1 children)

Reliability. We can do pretty much anything... with a 5% success rate. Deep learning can take any input, approximate any function and generate the required output, but it's only as good as the training set and most of them suck. Or it needs to be so large and complex that it's not fast enough.

[–] NeatNit@discuss.tchncs.de 1 points 44 minutes ago (1 children)

Yeah, of course. I think I was misunderstood, which is probably why I got so many downvotes.

Most tasks are possible (and often trivial, given access to the right library) with traditional programming. When a task can be done that way, it's by far the best approach.

Of the things that are not reasonably doable this way, like determining whether a photo is of a bird as in the comic, quite a lot of them are possible nowadays with machine learning (AKA "AI"), and often trivial given access to the right pre-trained model. And in this realm, I would say success rates are very often higher than that. Image recognition is insanely good.

What I'm asking is, what's a task that's virtually impossible both with programming and with machine learning?

"Mission critical" tasks which require very high and provable reliability, such as self-driving cars, technically fit this question, but I think that's ignoring the point of the question.

And if you were going to mention counterexamples where specially crafted images get mislabeled by AI: this is akin to attacking vulnerabilities in traditional software, which have always existed. If you're making a low-stakes app or a game, this doesn't matter.

[–] MoffKalast@lemmy.world 1 points 21 minutes ago* (last edited 20 minutes ago)

I think if we're looking at it conceptually, it has to be something that is too complex to do reliably with traditional heuristics and also doesn't let us generate enough data for good DL results.

There's also liability to consider, for cases like airplanes and trains. Trains are dead simple to automate, but there needs to be someone there for long-tail events, to make people feel safer, and as a fall guy in case of accidents. So in practice it's impossible to automate beyond subways, where you control the entire environment, despite the tech being fully capable of it. Same goes for airliners: they practically fly themselves, but you need two pilots there anyway, just in case.

[–] hemko@lemmy.dbzer0.com 33 points 16 hours ago (4 children)

Deterministic answers from AI

[–] Tlaloc_Temporal@lemmy.ca 4 points 10 hours ago

I think more important would be non-chaotic answers. It doesn't matter too much if they're not identical, as long as the content is roughly the same. But if you can get significantly different answers from trivial changes in prompt wording, that really does break things.

Still doesn't mean it's correct though.

[–] Vigge93@lemmy.world 10 points 15 hours ago (1 children)

Most AI is deterministic; only a small subset of AI is non-deterministic, and in those cases it's often by design. Also, in many cases the AI itself is deterministic, but we choose to use its output in a non-deterministic way: the AI gives a probability output, and will always give the same probabilities for the same input, but instead of always choosing the option with the highest probability, we choose based on the probability weights, leading to a non-deterministic output.

TL;DR: Non-determinism in AI is often not an inherent property of the model, but a choice in how we use the model.
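A minimal Python sketch of that distinction, using a made-up stand-in model (the names and probabilities here are invented for illustration): the model's distribution is the same on every call; non-determinism only enters if we choose to sample from it rather than take the argmax.

```python
import random

def model(prompt: str) -> dict[str, float]:
    # Stand-in for a real model: for the same input it always
    # returns the same probability distribution (deterministic).
    return {"yes": 0.6, "no": 0.3, "maybe": 0.1}

def greedy(probs: dict[str, float]) -> str:
    # Deterministic use: always pick the highest-probability option.
    return max(probs, key=probs.get)

def sample(probs: dict[str, float]) -> str:
    # Non-deterministic use: draw an option according to its weight.
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights)[0]

probs = model("is it a bird?")
assert greedy(probs) == greedy(model("is it a bird?"))  # identical every run
# sample(probs) may return "yes", "no", or "maybe" on different runs
```

Same model both times; only the decision rule applied to its output changes.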

[–] hemko@lemmy.dbzer0.com 5 points 14 hours ago* (last edited 14 hours ago)

Okay, probably fair. I've only been working with LLMs that are extremely non-deterministic in their answers. You can ask the same question 17 times and the answers will have some variance.

You can ask an LLM to create OpenTofu scripts for deploying infrastructure based on the same architectural documents 17 times, and you'll get 17 different answers. Even if some, most, or all of them still get the core principles right, and follow industry best practices in details (i.e. things we usually consider obvious, such as enforcing TLS 1.2) that were not specified, you still get large differences in the actual code generated.

As long as we cannot trust the output to be deterministic, we can't truly trust that what we request from the LLM is actually what we want, thus requiring human verification.

If we write IaC for OpenTofu or whatnot, we can mostly trust that what we specify is what we will receive, but with the ambiguity of AI we can't currently be sure whether the AI is filling in gaps we didn't know about. With a known provider, say the azurerm provider, we can always tell which defaults we did not specify.

[–] pebbles@sh.itjust.works 9 points 15 hours ago (1 children)

Wouldn't you just set the temperature to 0?

[–] sudoMakeUser@sh.itjust.works 6 points 14 hours ago (1 children)

Still going to be non-deterministic for any commercial AIs offered to us. It's a weird technology. I had a link to an article explaining why but I can't find it anymore.

[–] pebbles@sh.itjust.works 3 points 9 hours ago* (last edited 9 hours ago)

Ah yeah I was wrong. You set top-k to 1 to get a deterministic output.
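For anyone following along, a rough Python sketch of why both knobs point at the same thing (toy logits, not a real serving stack): temperature rescales the logits before the softmax, so temperature → 0 collapses the distribution onto the argmax, while top-k=1 skips the lottery entirely by keeping only the single highest logit. Real hosted APIs can still vary slightly at temperature 0 due to floating-point and batching effects, which is likely what the article was about.

```python
import math, random

def softmax(logits, temperature=1.0):
    # Temperature divides the logits before normalizing; lower
    # temperature sharpens the distribution toward the argmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_sample(logits, k, temperature=1.0):
    # Keep only the k highest logits, then sample among them.
    # With k=1 this is plain argmax: deterministic by construction.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in ranked], temperature)
    return random.choices(ranked, weights=probs)[0]

logits = [2.0, 1.0, 0.5]
assert top_k_sample(logits, k=1) == 0  # always index 0, the top logit
```

With k=1 the `random.choices` call has only one candidate, so the randomness is vacuous; with k>1 and temperature>0 you get the usual non-deterministic sampling.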

[–] gothic_lemons@lemmy.world 7 points 16 hours ago (1 children)

Do you have a link explaining what deterministic means in the context of AI? Preferably for noobs

[–] nogooduser@lemmy.world 20 points 16 hours ago (1 children)

Deterministic means for the same input you always get the same output.

For AI, it would mean that if you ask it a question multiple times using exactly the same words, you would get the same answer every time.
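A noob-friendly toy example in Python (nothing to do with any real AI model, just the concept):

```python
import random

def deterministic(x):
    # Same input always produces the same output.
    return x * 2

def non_deterministic(x):
    # Same input can produce a different output on every call.
    return x * 2 + random.random()

assert deterministic(21) == deterministic(21)  # holds on every run
# non_deterministic(21) == non_deterministic(21) will almost never hold
```

An LLM that answers the same prompt differently each time behaves like the second function.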

[–] gothic_lemons@lemmy.world 3 points 14 hours ago
[–] Fatal@piefed.social 4 points 12 hours ago* (last edited 12 hours ago)

I think 100% autonomous robotics and driving is still at least 5-10 years away, even with large research teams working on it. I mean truly robust AI that can handle any situation you throw at it, with zero intervention needed.