this post was submitted on 28 Feb 2026
465 points (96.4% liked)
Technology
I appreciate the honesty when they say it's an AI response and not genuine knowledge.
When I tell someone "an LLM told me that...", it's usually followed by "Let's see if there's any truth to it." An AI response should always be treated as a suggestion, not an answer.
Hell, Google's AI still doesn't know which day this week's F1 GP is on. A while back it was wrong by a whole week; now it's only off by a day.
Exactly. An AI response can be a great way to get started on a topic you know little about, but it's never a definitive answer. You have to verify whether it's actually true, whether it actually works. Never trust it blindly.
I feel like a big barrier is people anthropomorphising the AI. It's not "ChatGPT generated this", it's "ChatGPT said this". I don't necessarily blame people for it; a machine that speaks to you short-circuits something in people's brains, and it's not like we have better language to talk about it. It's just that people treat the output as an opinion, not as software output. And as long as that's how people handle it, I just don't know whether a "healthy" use of the technology is possible.
Exactly. We are extremely social animals, hardwired to recognise ourselves in the things around us, which I'm sure was super useful and vital for a tribe of hunter-gatherers living in a hostile environment. But it means that now we recognise faces and emotions in power outlets and lawn chairs. It's really not surprising that we see intelligence and awareness in LLMs, because we recognise that stuff in everything. We are really poor at the level of critical thought required to deal with this responsibly.