this post was submitted on 01 Feb 2026
81 points (80.9% liked)

Memes

15235 readers
575 users here now

Post memes here.

A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.

An Internet meme, or simply meme, is a cultural item that is spread via the Internet, often through social media platforms. The name derives from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and other viral sensations.


Post memes here.

founded 3 years ago

'It's just parroting the training data!' That's supposed to be reassuring??

[–] Grail@multiverse.soulism.net 0 points 1 month ago (3 children)

We should not be using these machines until we've solved the hard problem of consciousness.

I see a lot of people say "It can't think because it's a machine", and the only way this makes sense to Me is as a religious assertion that only flesh can have a soul.

[–] kogasa@programming.dev 16 points 1 month ago (2 children)

If current LLMs are conscious then consciousness is a worthless and pathetic concept.

[–] andrewrgross@slrpnk.net 3 points 1 month ago* (last edited 1 month ago)

I actually kinda agree with this.

I don't think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.

I used to listen to this podcast called "You Are Not So Smart". I haven't listened in years, but now that I'm thinking about it, I should check it out again.

Anyway, a central theme is that our perceptions are composed heavily of self-generated delusions that fill in the gaps for dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren't that great, so our brains fill in details that aren't there. Our decision making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly seem surprisingly similar sometimes to LLMs) and then apply wild overconfidence to fabricated memories.

We're interesting creatures, but we're ultimately made of the same stuff as goldfish.

[–] Grail@multiverse.soulism.net 3 points 1 month ago* (last edited 1 month ago)

Yeah, you're right. Humans get really weird and precious about the concept of consciousness and assign way too much value and meaning to it. Which is ironic, because they spend most of their lives unconscious and on autopilot. They find consciousness to be an unpleasant sensation and go to great lengths to avoid it.

[–] GreenBeanMachine@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

Spoiler alert:

no one has souls

[–] Grail@multiverse.soulism.net 1 points 1 month ago

A soul is a wet spiderweb made out of electricity that hangs from the inside of your skull.

[–] MintyAnt@lemmy.world 0 points 1 month ago (1 children)

In theory a machine one day could think

LLMs, however, do not think, even though ChatGPT uses the term "thinking". They don't think.

[–] Grail@multiverse.soulism.net 1 points 1 month ago* (last edited 1 month ago) (1 children)

I once built a thinking machine out of dominos. Mine added two bits together. Matt Parker's was way bigger, and could do 8 bits. Children have made thinking machines in Minecraft out of redstone. Thinking machines aren't very hard.
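(The domino machines above are just binary adders. Here's a toy Python sketch of the same idea, built purely from logic gates; the gate wiring is the standard half/full-adder construction, not the actual domino layout from My build or Matt Parker's.)

```python
# A two-bit adder built only from logic gates, like the domino machine.
# Each "gate" is a function of 0/1 inputs -- no arithmetic allowed inside.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b
def OR(a, b):  return a | b

def half_adder(a, b):
    """Add two bits: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, cin):
    """Add two bits plus a carry-in: returns (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, OR(c1, c2)

def add_two_bits(a1, a0, b1, b0):
    """Add two 2-bit numbers; returns a 3-bit result, most significant bit first."""
    s0, c0 = half_adder(a0, b0)
    s1, c1 = full_adder(a1, b1, c0)
    return c1, s1, s0

# 2 + 3 = 5: binary (1,0) + (1,1) -> (1,0,1)
print(add_two_bits(1, 0, 1, 1))  # -> (1, 0, 1)
```

Swap the Python functions for chains of falling dominoes and you get the physical version.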

[–] MintyAnt@lemmy.world 1 points 1 month ago (1 children)

What do you consider thinking, and why do you consider LLMs to have this capability?

[–] Grail@multiverse.soulism.net 1 points 1 month ago

Extrapolating from information.

My calculator can extrapolate 5 when I give it 2, 3, and a plus sign. So can an LLM. My calculator uses some adder circuits in its ALU to get the 5. The LLM gets it from memorising the next likely token, the same way your brain works most of the time. Your brain's a lot more advanced, though, and can find the 5 in many different ways. Likely tokens are just the most convenient. Cognitive scientists call that "System 1", though you might know it as "fast brain". LLMs only have system 1. They don't have system 2, the slow brain. Your system 2 can slow down and logic out the answer. If I ask you to solve the problem in binary, like My calculator does, you probably have to use system 2.
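(To make "memorising the next likely token" concrete, here's a toy Python sketch: a bigram-style lookup table that answers by frequency alone, with no adder circuit anywhere. The tiny corpus is made up for illustration; real LLMs are vastly more sophisticated, but the "most likely continuation" principle is the same.)

```python
from collections import Counter, defaultdict

# Toy "system 1": memorise which token tends to follow each prefix,
# then answer by picking the most frequent continuation.
corpus = [
    "2 + 3 = 5",
    "2 + 3 = 5",
    "2 + 3 = 6",   # a noisy example: frequency, not logic, decides
    "1 + 1 = 2",
]

follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for n in range(1, len(tokens)):
        follows[tuple(tokens[:n])][tokens[n]] += 1

def next_token(prompt):
    """Return the most frequently seen token after this prompt."""
    return follows[tuple(prompt.split())].most_common(1)[0][0]

print(next_token("2 + 3 ="))  # -> "5", purely because it was seen most often
```

Notice it gets "5" right without ever adding anything, and would confidently get it wrong if the corpus were noisier. That's the difference from the calculator's ALU.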

The question you should be asking is: does system 1 experience qualia? And based on split-brain studies of participants who have undergone corpus callosotomy, I believe the answer is yes. Of course, the right brain isn't the same thing as system 1, but what these studies demonstrate is that there are thinking parts of your brain that you can't hear. So I'd err on the side of caution with these system 1 machines.