this post was submitted on 23 Mar 2026
701 points (99.0% liked)

Technology

[–] Zagorath@quokk.au 12 points 3 days ago (2 children)

the user needs to be smart enough to do whatever they're asking anyway

I'm gonna say that's ideal but not quite necessary. What's needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves certainly can, but the skill extends more broadly than that: verifying a result is an easier skill than producing it. Think of how film critics don't necessarily need to be filmmakers, or the P=NP question in computer science.
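To make the P=NP analogy concrete, here's a loose sketch (my own illustration, not from the thread) using subset-sum, a classic NP problem: checking a proposed answer takes linear time, while finding one naively takes exponential time.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Fast check: is this a subset of nums that sums to the target?"""
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Slow search: try every subset until one sums to the target."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)         # exponential-time search
print(verify(nums, answer, 9))  # linear-time check -> True
```

Verification only needs one pass over the candidate; producing the candidate may require searching all 2^n subsets, which is the asymmetry being claimed for human reviewers of AI output.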

[–] Pyro@programming.dev 16 points 3 days ago (4 children)

But if the output has issues, what're you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI's mistakes yourself.

[–] Zagorath@quokk.au 9 points 3 days ago (2 children)

At the risk of sounding like an overly obsequious AI… You know what, you're completely right. I'm honestly not sure what use case I was imagining when I wrote that last comment.

You were thinking logically about a normal production chain. In that case, QA or whoever says "This is wrong, rework it and correct the issue" and that's that. With AI, it does the whole thing over again and may or may not come back with the same issue or an entirely new one.

[–] Redjard@reddthat.com 6 points 3 days ago

Making text flow naturally, grouping and ordering information, good writing.

You can verify that two texts contain the same facts and information, yet one reads far better than the other. But writing a text that reads well is quite hard.

I can't draw, but I could probably photoshop out some minor issues in an AI-generated image.

[–] fartographer@lemmy.world 1 points 2 days ago (1 children)

If you're unable to brute-force verification (research, testing, consulting the ancient texts), that's where you stop what you're doing and take a breath. Then consult an expert. Just like the film critic analogy: it's easier to verify than to create, so you're saving the expert time and effort while learning about something you were obviously already passionate enough about to have started this endeavor.

[–] alsimoneau@lemmy.ca 2 points 2 days ago (1 children)

As someone who codes, it's not always easier to verify than to create.

[–] fartographer@lemmy.world 1 points 1 day ago

As someone who codes, I specifically didn't say "always" because of course it's not always true. Especially in the cases of "garbage in, garbage out."

But there's still an argument to be made for mental load and context: I'd argue that planning solutions and then writing the code is generally more taxing than being handed suggested solutions with semi-complete code or pseudo-code and then identifying roadblocks.

On the other hand, if someone you trust unexpectedly hands you hallucinated garbage, then you're likely to spin your wheels trying to identify what they did.

[–] Redjard@reddthat.com 1 points 3 days ago

If you don't have the ability, then you would do what you would have done 5 years ago: not do it.
Either submit without, or don't submit at all.

[–] Aralakh@lemmy.ca 2 points 2 days ago

This is where domain expertise would come in, no? It speeds up the work, but it usually outputs generic content, plus whatever else it injects while hallucinating. So the validation part holds up, I'd say.