It's not about AI; it's about how people are USING AI.
Take, for example, this recent video from Language Jones, showing how to use AI to leverage your native intelligence for language learning (yes, it's from a PhD in linguistics, and yes, he cites research; "always bring receipts" is logic 101). He shows how AI works best as a Socratic tutor, forcing you to generate answers rather than replacing thinking.
https://www.youtube.com/watch?v=xQXiSGDXknA
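To make "Socratic tutor" concrete, here's a rough sketch of the pattern (my own illustration, not from the video - the OpenAI client usage and model name are assumptions; swap in whatever you actually use):

```python
# A minimal "Socratic tutor" wrapper: the system prompt forbids direct
# answers, so the learner has to generate each attempt themselves.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a Socratic language tutor. Never give the answer directly. "
    "Respond only with guiding questions, counter-examples, or requests "
    "for the learner to attempt the phrase themselves first."
)

def tutor_turn(history: list[dict], learner_message: str) -> str:
    """One tutoring turn: record the learner's attempt, return guidance."""
    history.append({"role": "user", "content": learner_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "system", "content": SOCRATIC_PROMPT}] + history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history: list[dict] = []
print(tutor_turn(history, "How do I say 'I went to the market' in Spanish?"))
# The point: you should get back something like "What's the first-person
# preterite of 'ir'?" rather than the translated sentence itself.
```

The whole trick is in the prompt: the model is constrained to make you do the generating, which is where the learning happens.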
When used properly, AI is a force multiplier par excellence. When used the way you're likely encountering it (young cohort? poor attention spans? no training in formal reasoning or logic?), then yeah... "shit's fucked" (in the Australian vernacular).
I used to teach biomed, just before AI took over (circa 2013-2019). Attention spans were already alarmingly low, and we had to institute movement breaks, intermissions, breakouts, etc. I had to fucking tap dance out there - anything to keep "engagement" high and avoid the dreaded attrition KPIs.
The days of students being able to concentrate for 60+ minutes straight are likely gone. Hell, there's an oft-repeated meme stat that the average attention span on a digital device has dropped from two and a half minutes in 2004 to 47 seconds today. Even if you find its provenance dubious, it points at something real: people have trouble paying attention.
But...that's not AI's fault. The "shit was already fucked".
I think there's something (still) to be said for the classical education method. We need things like that. We need to teach our young ones about things like "intuition pumps" and "street epistemology", reasoning, etc. And we can use ShitGPT to do it.
Take a simple example: a student uses ChatGPT to write an essay on climate policy. The AI generates a claim. Now ask: "What would prove this wrong?" If they can't answer - if they can't articulate what evidence or logic would falsify it - they don't understand it.
They've outsourced the reasoning. That's the difference.
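You can even turn that into a drill. A toy sketch (the claim and the list of "vague answers" are made up for illustration; no AI required for this part):

```python
# The falsifiability check as a drill: the student must state what
# evidence would disprove the claim before keeping it in the essay.
def falsifiability_check(claim: str) -> bool:
    """Return True only if the student can articulate a disproof condition."""
    answer = input(f'Claim: "{claim}"\nWhat evidence would prove this wrong? > ')
    vague = {"", "nothing", "idk", "i don't know"}
    if answer.strip().lower() in vague:
        print("No falsification condition given - the reasoning was outsourced.")
        return False
    print("Good: you understand the claim well enough to test it.")
    return True

falsifiability_check("Carbon pricing always reduces emissions.")
```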
It's not easy out there; it never was. But there's a confluence of factors (popular culture, digital devices, changing demographics, family dynamics, "education" being streamlined as vocational pre-training etc etc ad infinitum) that certainly seem to be actively hostile towards developing thinkers.
Here endeth the pro-clanker sermon.
Ramen; may we be blessed by his noodly appendage.
PS: I'm actually pretty hostile to AI myself and have been working on an open-source engineering approach to mitigate some of these issues. Happy to share it if you're curious (not selling anything; it's open source, just something I'm using to try to solve this sort of issue for myself).
I dislike guns. When used properly, they're really fun: they're used to shoot spinning discs out of the sky. But that's not how they're used. Regardless of how the inventor of guns intended them to be used, and regardless of how much better off we'd all be if everyone just shot spinning discs out of the sky, people by and large use them for violence. Without guns, they'd find it much harder to kill other people. So, I dislike guns.
I dislike AI.
That analogy only works if AI ends up being mostly used for harm. Guns were designed to apply lethal force, so misuse is built into the tool.
AI is closer to something like a spreadsheet or search engine - a general tool that can be used well or badly depending on the user.
If the argument is really about risk tolerance, that's fair, but that's a very different claim from saying the tool itself is inherently comparable to a weapon.
Really appreciate you taking the time to write this out. People forgetting how to learn is my biggest concern with AI, along with a dead-internet-theory scenario where almost nothing new is being created by people.
What you articulated about the first concern really did leave me with more hope for the future than I had previously. One of the best comments I’ve read on this platform.
Sorry to see some of the replies making tired political quips instead of critiquing your actual points head-on.
Thank you for saying so. I appreciate it. As always I could be wrong - I'm just a meat popsicle.
See? Civil discourse. Still possible. Even in 2026. Thumbs up to you, friend.
It's not that I think there are no legitimate uses for AI, or that it couldn't be used as a learning tool.
It's that I doubt it's better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you're describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.
Perhaps only because ubiquity and speed favour sloppiness. As a thought experiment, imagine if you could only use AI once a day, for one question. Asking questions would suddenly become expensive.
Questions would require careful thinking and pre-planning, followed by genuine rumination on the answer and possible follow-ups.
That’s obviously an extreme example, but it’s not that dissimilar to how people use tools like LexisNexis or IBISWorld - expensive research tools where the cost naturally forces you to think about the question before asking it.
In that sense the issue may not be the medium itself so much as the cost structure of the interaction.
When answers are instant and effectively unlimited, people tend to outsource thinking. When access is constrained, the incentive flips and the thinking moves back to the question.
Which is to say: the tool probably amplifies existing habits rather than creating them. People who already interrogate sources will interrogate AI outputs. People who don’t, won’t.
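If you wanted to enforce that constraint rather than just imagine it, here's a toy sketch (ask_model is a placeholder for whatever client you'd actually use):

```python
# Toy version of the "one question a day" constraint: the wrapper refuses
# to forward a prompt once the daily budget is spent, so the expensive
# step becomes composing the question, not reading the answer.
import datetime

class RationedAssistant:
    def __init__(self, ask_model, questions_per_day: int = 1):
        self.ask_model = ask_model
        self.budget = questions_per_day
        self.used = 0
        self.day = datetime.date.today()

    def ask(self, prompt: str) -> str:
        today = datetime.date.today()
        if today != self.day:  # new day: reset the budget
            self.day, self.used = today, 0
        if self.used >= self.budget:
            return "Budget spent. Write the question down and sleep on it."
        self.used += 1
        return self.ask_model(prompt)

assistant = RationedAssistant(ask_model=lambda p: f"(model answer to: {p})")
print(assistant.ask("What's the strongest objection to my claim?"))
print(assistant.ask("Follow-up?"))  # refused - the scarcity does the teaching
```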
I would ask it a careful question, and I would get a well-worded, persuasive, but ultimately careless reply that just repeats information, devoid of any new reasoning or insight.
I would carefully ruminate on this reply and find that, at best, it's factually correct because it echoes the training data fed into the model, and that, however persuasive it sounds, it still needs additional work to be adapted to the specific context and details of my situation.
But that's not my main complaint. My complaint is that the medium itself seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves to Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.
Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.
OK but is that an AI problem or a people problem?
I think the Postman point is a fair one. The way information is presented absolutely affects how people reason with it. A fluent conversational answer can feel authoritative in a way that a messy set of search results doesn't.
But that problem isn't unique to LLMs. Every medium that compresses information into something smooth and persuasive has created the same concern.
Books did it, newspapers did it, television did it, and search engines arguably did it as well.
The real question is whether the medium determines behaviour or just amplifies existing habits.
People who already interrogate sources tend to interrogate AI outputs as well. People who don't… won't.
I suspect there's a bigger issue here than “LLM bad”. We've been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.
We've become (for lack of better words) mentally flabby - me included.
If I'm arguing in good faith, it's both. We have a tool that uses us: a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I'll define as the useful retention and application of that information, not just winning trivia night), and as a species we keep letting ourselves be suckered by it.
In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it: writing did it, books, television, search engines, etc. And so much money has been spent on making this a thing that we're going to have to contend with it until it undeniably starts costing more than it's worth - and if that cost is cultural or societal rather than financial, it might never go away.
I don't pretend to speak for the man, but I think Postman would agree with you - and he thought it started in the 1860s, with the telegraph.
No, it's about AI.
No, it's about you.
Those who funded the Austrian artist fully agree.
Ah, we’ve reached the Austrian painter stage already.
https://politicaldictionary.com/words/godwins-law/