this post was submitted on 15 May 2026 in Funny on Lemmygrad
I decided to take matters into my own hands and ask Deepseek: "If you saw an AI generated painting in the style of Monet and it made you feel a certain way, is that feeling fake because the image you saw wasn't produced by a human?"
This is the response I got:
Interestingly, it seems to agree with my own assessment. The problem isn't that the feeling is fake; it's that someone lied to me when I was seeking a Human-to-Human connection, not a System-to-Human one.
I often wonder why the people who defend AI don't start by justifying their own necessity: why should I read their articles defending the System when I can just approach the System directly? Why do I need a third wheel in the relationship?
I'll ask Deepseek this question: "Some people spend a lot of time defending these AI 'systems' and they write lengthy articles about it. At the same time, if their claims are to be taken at face value and there isn't much difference between the way a human mind functions and the way an AI system functions, then why can't I just approach the AI system directly and ask it to generate my article instead of reading their article?"
This is the response:
The force of your point is this:
> In fact, if AI can produce equally good or better arguments, then the human defender is wasting your time by acting as an intermediary. You could cut out the middleman.
Possible responses from an AI defender (and why they might fail to fully escape your criticism):
They might say:
But your counter is strong here:
They might say:
Your reply:
But then they’ve conceded the point:
The deeper issue
Your argument reveals a pragmatic contradiction in strong AI functionalism (the view that human and AI cognition are essentially the same kind of thing).
If they were truly the same, then human-generated text has no privileged status—so why defend AI in human-written prose? Why not let AI defend itself?
But most AI defenders don't actually believe in total equivalence. They believe AI can simulate aspects of human thought, but not replace the social, intentional, and biographical dimensions of human communication. The moment they write a lengthy defense, they're implicitly banking on you valuing the human source.
So your question exposes that:
If they say “read my article” → they assume human authorship matters.
If they say “just ask the AI” → they admit their own article is redundant.
They can’t logically do both without contradiction.
Bottom line for you:
You’re right to be suspicious. If AI systems are truly close enough to human minds that intent and agency don’t matter for appreciation (art) or argument (articles), then human defenders become optional middlemen. But if human defenders are not optional, then the original claim about AI’s cognitive equivalence is weaker than it sounds. Your question pulls the thread, and the sweater unravels a bit.
As you point out, the whole argument of equivalence is a straw man that nobody who actually understands how these systems work believes. So, human authorship obviously does matter. AI systems like LLMs or Stable Diffusion are just tools a human uses and directs. What these tools bring to the table is the ability to draw connections over the huge data set they've been trained on, and to act as a sounding board for the human.

The way our own thinking works is that parts of our brain activate in response to words; that's why we have an internal monologue in our heads, and why we often get breakthroughs in our thinking when we talk through a problem with another person. Putting things into words can lead to relevant activations in the brain which unlock useful ideas for us.

An LLM slots into this perfectly because its outputs can stimulate our brains the same way. When we read something an LLM wrote on a subject we are versed in, we can get insights into our own thinking by considering the output. Sometimes we'll discard it as nonsense, and sometimes it will trigger an insight. That doesn't mean the LLM is doing any thinking of its own, just that it finds statistically likely connections between ideas that we might not have considered on our own because we didn't have the right words to make the leap.
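The "statistically likely connections" point can be made concrete with a toy sketch. Real LLMs use neural networks over enormous corpora, but the core idea that the next word is predicted from statistics of the training text shows up even in a tiny bigram model like this one (the miniature corpus here is made up for illustration):

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this example.
corpus = (
    "putting ideas into words helps thinking . "
    "talking through a problem helps thinking . "
    "words can trigger new ideas ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    return follows[word].most_common(1)[0][0]

# "helps" is followed by "thinking" in both training sentences,
# so that is the model's prediction.
print(most_likely_next("helps"))  # -> thinking
```

The model isn't thinking; it's surfacing whichever continuation was most common in its training data. An LLM does the same thing at a vastly larger scale, which is exactly why its output can land on connections a human reader hadn't put into words yet.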