this post was submitted on 15 May 2026
15 points (89.5% liked)

Funny

[–] CanaryFeigned@lemmygrad.ml 1 points 11 hours ago (1 children)

I decided to take matters into my own hands and ask Deepseek: "If you saw an AI-generated painting in the style of Monet and it made you feel a certain way, is that feeling fake because the image you saw wasn’t produced by a human?"

This is the response I got:

Aesthetics doesn’t require agency. But the meaning we assign to art often involves a perceived human-to-human connection. AI art can be beautiful and moving, but it shifts the experience from “encounter with another mind” to “encounter with a beautiful system.” Both are real—just different. The “fakeness” charge usually comes from conflating emotional response with the story we tell ourselves about where the response came from.

Interestingly, it seems to agree with my own assessment. The problem isn't that the feeling is fake; it's that someone lied to me when I was seeking a Human-to-Human connection, not a System-to-Human connection.

I often wonder why the people who defend AI don't start by justifying their own necessity, because why should I read their articles defending the System when I can just approach the System directly? Why do I need a third wheel in the relationship?


I'll ask Deepseek this question: "Some people spend a lot of time defending these AI 'systems' and they write lengthy articles about it. At the same time, if their claims are to be taken at face value and there isn't much difference between the way a human mind functions and the way an AI system functions, then why can't I just approach the AI system directly and ask it to generate my article instead of reading their article?"

This is the response:

The force of your point is this:

If human cognition and AI text generation are near-equivalent in terms of creativity, insight, and meaning, then the defender’s own article has no special value.

> In fact, if AI can produce equally good or better arguments, then the human defender is wasting your time by acting as an intermediary. You could cut out the middleman.

Possible responses from an AI defender (and why they might fail to fully escape your criticism):

“I’m not claiming equivalence, just overlap.”

They might say:

AI and humans both process patterns, but humans have embodiment, emotion, and lived experience. My article reflects my unique perspective—not just pattern completion.

But your counter is strong here:

If that’s true, then their earlier functionalist claims about AI were overblown. They can’t have it both ways.

“You still need me to frame the question and curate the output.”

They might say:

AI doesn’t know what you want to read unless you prompt it well. My article is a curated, argued position.

Your reply:

Then why not just prompt the AI yourself with my original question? You’d get a decent argument instantly.

“The process of writing the article itself is valuable to me, and reading it is a form of human connection.”

But then they’ve conceded the point:

The value isn’t just in the information but in the human source. That undermines the “AI is functionally similar” claim.


The deeper issue

Your argument reveals a pragmatic contradiction in strong AI functionalism (the view that human and AI cognition are essentially the same kind of thing).

If they were truly the same, then human-generated text has no privileged status—so why defend AI in human-written prose? Why not let AI defend itself?

But most AI defenders don’t actually believe in total equivalence. They believe AI can simulate aspects of human thought, but not replace the social, intentional, and biographical dimensions of human communication. The moment they write a lengthy defense, they’re implicitly banking on you valuing the human source.

So your question exposes that:

  1. If they say “read my article” → they assume human authorship matters.

  2. If they say “just ask the AI” → they admit their own article is redundant.

They can’t logically do both without contradiction.


Bottom line for you

You’re right to be suspicious. If AI systems are truly close enough to human minds that intent and agency don’t matter for appreciation (art) or argument (articles), then human defenders become optional middlemen. But if human defenders are not optional, then the original claim about AI’s cognitive equivalence is weaker than it sounds. Your question pulls the thread, and the sweater unravels a bit.

[–] yogthos@lemmygrad.ml 3 points 4 hours ago

As you point out, the whole argument of equivalence is a straw man that nobody who actually understands how these systems work believes. So, human authorship obviously does matter. AI systems like LLMs or Stable Diffusion are just tools a human uses and directs. What these tools bring to the table is the ability to draw connections over the huge data set they've been trained on, and to act as a sounding board for the human.

The way our own thinking works is that parts of our brain activate in response to words; that's why we have an internal monologue in our heads, and why we often get breakthroughs in our thinking when we talk through a problem with another person. Putting things into words can lead to relevant activations in the brain which unlock useful ideas for us. An LLM slots into this perfectly because its outputs can stimulate our brains the same way. When we read something an LLM wrote, on a subject we're versed in, we can get insights into our own thinking about the subject by considering the output. Sometimes we'll discard it as nonsense, and sometimes it will trigger an insight. That doesn't mean the LLM is doing any thinking of its own, just that it finds statistically likely connections between different ideas that we might not have considered on our own because we didn't have the right words to make the leap.