this post was submitted on 15 May 2026
Funny
Funniest content on all Lemmygrad
founded 2 years ago
As you point out, the whole equivalence argument is a straw man that nobody who actually understands how these systems work believes. So yes, human authorship obviously does matter. AI systems like LLMs or Stable Diffusion are just tools that a human uses and directs. What these tools bring to the table is the ability to draw connections across the huge datasets they've been trained on, and to act as a sounding board for the human.

The way our own thinking works is that parts of our brain activate in response to words; that's why we have an internal monologue, and why we often get breakthroughs when we talk through a problem with another person. Putting things into words can trigger relevant activations in the brain that unlock useful ideas for us. An LLM slots into this perfectly because its outputs can stimulate our brains the same way. When we read something an LLM wrote on a subject we're versed in, we can get insights into our own thinking by considering the output. Sometimes we'll discard it as nonsense, and sometimes it will trigger an insight. That doesn't mean the LLM is doing any thinking of its own, just that it surfaces statistically likely connections between ideas that we might not have considered on our own because we didn't have the right words to make the leap.