this post was submitted on 08 Mar 2026
688 points (95.7% liked)

Off My Chest


I’ve been working with so many students who turn to AI as a first resort for everything. The second a problem stumps them, it’s AI. Their first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

[–] BranBucket@lemmy.world 4 points 9 hours ago* (last edited 8 hours ago) (1 children)

It's not that I think there are no legitimate uses for AI, or that it couldn't be used as a learning tool.

It's that I doubt it's better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you're describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.

[–] SuspciousCarrot78@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

Perhaps only because ubiquity and speed favour sloppiness. As a thought experiment, imagine if you could only use AI once a day, for one question. Asking questions would suddenly become expensive.

They would require careful thinking and pre-planning, followed by careful rumination on the answer and possible follow-ups.

That’s obviously an extreme example, but it’s not that dissimilar to how people use tools like LexisNexis or IBISWorld - expensive research tools where the cost naturally forces you to think about the question before asking it.

In that sense the issue may not be the medium itself so much as the cost structure of the interaction.

When answers are instant and effectively unlimited, people tend to outsource thinking. When access is constrained, the incentive flips and the thinking moves back to the question.

Which is to say: the tool probably amplifies existing habits rather than creating them. People who already interrogate sources will interrogate AI outputs. People who don’t, won’t.

[–] BranBucket@lemmy.world 2 points 3 hours ago (1 children)

I would ask it a careful question, and I would get a well-worded, persuasive, but ultimately careless reply that just repeats information, devoid of any new reasoning or insight.

I would carefully ruminate on this reply and find that, at best, it's factually correct because it echoes the training data fed into the model, and that although it sounds highly persuasive, it will likely need additional work to be adapted to the specific context and details of my situation.

But that's not my main complaint. My complaint is that the medium used seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves to Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

[–] SuspciousCarrot78@lemmy.world 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

OK but is that an AI problem or a people problem?

I think the Postman point is a fair one. The way information is presented absolutely affects how people reason with it. A fluent conversational answer can feel authoritative in a way that a messy set of search results doesn't.

But that problem isn't unique to LLMs. Every medium that compresses information into something smooth and persuasive has created the same concern.

Books did it, newspapers did it, television did it, and search engines arguably did it as well.

The real question is whether the medium determines behaviour or just amplifies existing habits.

People who already interrogate sources tend to interrogate AI outputs as well. People who don't… won't.

I suspect there's a bigger issue here than “LLM bad”. We've been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

We've become (for lack of a better term) mentally flabby - me included.

[–] BranBucket@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

If I'm arguing in good faith, it's both. We have a tool that uses us: a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I'm going to define as the useful retention and application of that information, and not just for winning trivia night), and as a species we can't seem to stop letting ourselves be suckered by it.

In the same vein, Postman also argued that this sort of change is both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it: writing did it, books did it, television, search engines, etc. And so much money has been spent on making this a thing that we're going to have to contend with it until it undeniably starts costing more than it's worth - and if that cost is cultural or societal rather than financial, it might never go away.

> I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

I don't pretend to speak for the man, but I think Postman would agree with you - and he thought it started in the 1860s, with the telegraph.