this post was submitted on 21 Jul 2025
698 points (98.6% liked)

sukhmel@programming.dev 13 points 5 days ago

And those patterns, mind you, often include lying and deception. So while I agree that LLMs can't exhibit anything consciously, I also know that they can provide false information. To call it a lie is a stretch, though, and looks like something one would do to shift the blame for one's own fault onto the LLM.

anomnom@sh.itjust.works 3 points 4 days ago

I don’t think calling it a lie (vs. a hallucination, or an error) is necessary to assign blame. If they were instructed to use AI to deploy, then that’s on management. Not having backups is on everyone, but I suspect they were backed up.

Saying “the AI agent broke it” is just fine, but it isn’t clickbait the way saying it lied is. Far fewer of us would have seen this story without that framing.