submitted 2 weeks ago* (last edited 2 weeks ago) by Hohsia@hexbear.net to c/philosophy@hexbear.net

I don’t know how there aren’t a myriad of problems associated with attempting to emulate the brain, especially with the end goal of destroying livelihoods and replacing one indentured servant with another. In fact, that’s what prompted this post: an advertisement for a talk hosted by my alma mater’s philosophy department asking what happens when we see LLMs discover phenomenological awareness.

I admit that I don’t have a ton of formal experience with philosophy, but I took one course in college that will forever be etched into my brain. Essentially, my professor explained to us the concept of a neural network and how with more computing power, researchers hope to emulate the brain and establish a consciousness baseline with which to compare a human’s subjective experience.

This didn’t use to be the case, but in a particular sector, most people’s jobs are just showing up at work, getting on a computer, and having whatever (completely unregulated and resource-devouring) LLM give them answers they could find themselves, quicker. And shit like Neuralink exists, and I think the next step will be to offer that with a ChatGPT integration or some dystopian shit.

Call me crazy, but I don’t think humans are as special as we think we are, and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer. Hell, we collectively decided to slaughter en masse another group of feeling beings (animals) to appease our tastebuds, a lot of us are thoroughly entrenched in our digital boxes because opting out would mean losing things we take for granted, and any discussion of these topics is taboo.

Data-obsessed weirdos are a genuine threat to humanity; consciousness emulation never should have become a conversation piece in the first place without first understanding its downstream implications. Feeling like a certified Luddite these days.

all 50 comments
[-] RaisedFistJoker@hexbear.net 52 points 2 weeks ago* (last edited 2 weeks ago)

LLMs don’t think; they are statistical models that are good at predicting what word might come next after another word.

I admit that I don’t have a ton of formal experience with philosophy, but I took one course in college that will forever be etched into my brain. Essentially, my professor explained to us the concept of a neural network and how with more computing power, researchers hope to emulate the brain and establish a consciousness baseline with which to compare a human’s subjective experience.

Right now the AI capitalists are tricking you. That’s OK; they have a lot of money to spend on propaganda. The current form of AI text generation is a highly advanced chatbot that is very effective at making us humans believe it’s close to consciousness. It isn’t. We are a long way off from consciousness in silicon; our technology just isn’t there yet.

[-] Formerlyfarman@hexbear.net 22 points 2 weeks ago* (last edited 2 weeks ago)

Exactly. Wolfram Alpha is much more like a brain than the LLMs.

[-] imogen_underscore@hexbear.net 25 points 2 weeks ago* (last edited 2 weeks ago)

i think the through line from "sufficiently advanced computer + software" to "conscious brain akin to a human one" is basically made up nonsense conceived by SF writers and propagated as being actually real by tech bros. i don't think it's something worth taking seriously at all

[-] BoxedFenders@hexbear.net 13 points 2 weeks ago

Human consciousness is an emergent property of neurons firing in our brain. Unless you attribute consciousness to some external mystical force, replicating it should theoretically be possible. I'm not saying LLMs are the path to get there or that we are anywhere close to it, but it seems inevitable that it is eventually achieved.

[-] GaveUp@hexbear.net 15 points 2 weeks ago

All the math done to estimate the computation required shows absurd numbers required at minimum

Capitalists will never try to truly emulate a human brain because it's infinitely cheaper to just hire/breed/enslave real ones to do whatever you need
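One common back-of-envelope version of that math can be sketched in a few lines. The figures below are widely cited rough approximations, not measurements, and the one-op-per-synaptic-event assumption is deliberately charitable:

```python
# Back-of-envelope cost of brute-force brain emulation, using widely
# cited ballpark figures (assumptions, not measurements).
synapses = 1e14          # roughly 100 trillion synapses in a human brain
signals_per_sec = 100    # generous upper-end average rate per synapse
ops_per_signal = 1       # charitably, one operation per synaptic event

brain_ops_per_sec = synapses * signals_per_sec * ops_per_signal
print(f"~{brain_ops_per_sec:.0e} synaptic ops/sec")  # ~1e+16
```

Even this charitable count lands around 10^16 operations per second, sustained, before anyone has a theory of which operations to run.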

[-] Saeculum@hexbear.net 2 points 2 weeks ago

All the math done to estimate the computation required shows absurd numbers required at minimum

Nature fit it into a space the size of a human head with a bunch of redundancy through an unconscious process of trial and error.

[-] imogen_underscore@hexbear.net 4 points 2 weeks ago* (last edited 2 weeks ago)

i personally do believe in the human soul and don't think rationalist vulgar materialism can fully explain consciousness, so yeah, I guess we may just fundamentally disagree there. it doesn't even have to be something "mystical" though; it could just be something totally unknown to science that can never be replicated in silicon.

even if you still think it's possible, it's plain that the current extinction event and the technological setbacks/energy crises it will bring is going to prevent much progress being made towards the currently science fiction-level technology and energy required to get even close. far from "inevitable" in my view, and ultimately a total waste of time and resources. may as well say Dyson spheres, another thing made up by SF writers, are inevitable. energy crises, tech setbacks and population destruction will always get in the way.

it's utopian to a cartoonish extent; hundreds or thousands of years of end-stage communism would be needed for this kind of stuff to even begin being feasible. and if we had that, I would hope creating AI slaves wouldn't be very high on the agenda. that's why I think taking it seriously is a waste of time.

[-] BoxedFenders@hexbear.net 5 points 2 weeks ago

even if you still think it's possible, it's plain that the current extinction event and the technological setbacks/energy crises it will bring is going to prevent much progress being made towards the currently science fiction-level technology and energy required to get even close.

No disagreement from me on this point. And by no means did I mean its inevitability will play out in our lifetime, or even centuries from now; just that it is theoretically possible and there is no physical limitation that forbids it, unlike, say, faster-than-light space travel. But since you believe in human souls I'm curious: would you ever concede that a sufficiently advanced machine could be conscious, or would you dismiss it as trickery of code? Does consciousness only arise when a soul is assigned to an organism with 46 chromosomes?

[-] imogen_underscore@hexbear.net 4 points 2 weeks ago* (last edited 2 weeks ago)

I'm not really interested in any kind of debate about this sorry. you're not being rude or anything I just find the idea tedious, as I said it's clear we just disagree on a basic thing here and I'm fine with keeping it that way

[-] Saeculum@hexbear.net 1 points 2 weeks ago

could just be something totally unknown to science

Any thoughts on brain organoid computers related to this?

[-] 4am@lemm.ee 8 points 2 weeks ago

The problem isn’t that it’s not real; it’s that men with vast fortunes are trying to chase that fictional dragon in the name of profits and they’re selling out the entire species in several critical ways as we teeter on the precipice of an existential crossroads.

And it’s literally for fucking nothing.

[-] Hohsia@hexbear.net 6 points 2 weeks ago* (last edited 2 weeks ago)

Yeah, sorry, I might’ve jumped the gun there, but I’m legitimately starting to contemplate peacing out of corporate America in general, because all they can talk about is “using AI to increase productivity” (read: replace workers), and I haven’t been able to escape the talk that whatever a human can do can be done by a computer (in the context of replicating monotonous computer-touching tasks).

I guess the whole sentience-vs-not question is a completely separate one entirely (and a moot one by the sound of it), but that has not stopped the powers that be from dumping as much money as possible into these treat printers.

And after all, this isn’t the first time I haven’t been able to outrun the propaganda. I guess it’s just becoming increasingly difficult to sift through the bullshit, and that’s another reason why I think the question of consciousness won’t matter in the short term. But we do know that consciousness is just the result of activity in the brain (and we can prove this with MRIs of stroke victims and such).

[-] WhyEssEff@hexbear.net 20 points 2 weeks ago* (last edited 2 weeks ago)

As a data science undergrad, knowing generally how they work, LLMs are fundamentally not built in a way that could achieve a measure of consciousness.

Large language models are probability-centric models. They essentially ask: given my quintillion sentences and quadrillion paragraphs on hand, which word probably comes next, given the chain of output so far and the input? This makes them really good at producing something that is voiced coherently. However, this is not reasoning; this is parroting. It's a chain of dice rolls, weighted by all writing ever, that creates something that reads like a good output against the words of the input.
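That "chain of dice rolls" can be shown in miniature. The words and weights below are invented for illustration; a real model derives them from billions of learned parameters over a huge vocabulary, but the sampling step is the same kind of weighted roll:

```python
import random

# Toy next-word probability table (invented numbers, illustration only).
NEXT_WORD = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt, max_words, seed=0):
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        probs = NEXT_WORD.get(tuple(words[-2:]))
        if probs is None:
            break  # unseen context: nothing to roll on
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])  # the dice roll
    return " ".join(words)

print(generate("the cat", 4))
```

Nothing in there models a cat or a mat; it only models which strings tend to follow which.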

The entire idea behind prompt engineering is that these models cannot achieve internal reasoning, and thus you have to trick them into talking around themselves in order to write out the lines of logic that they can then reference in their own output.

I do not think AGI or whatever they're calling Star Trek-tier AI will arise out of LLMs and transformer models. I think it is fundamentally folly. I think what I see as fundamental elements of consciousness are just not covered at all by it (such as subjectivity) or are something I just find sorely lacking even despite the advances in development (such as cognition). Call me a cynic, I just truly think it's not going to come out of genAI (as we generally understand the technology behind it for the past couple years) and further research into it.

[-] GaveUp@hexbear.net 17 points 2 weeks ago

We're never going to develop artificial consciousness before the world burns up or humans nuke each other to death, don't worry

[-] Saeculum@hexbear.net 1 points 2 weeks ago

Climate change won't kill all of us, and neither could a worst-case nuclear exchange. It'll be a problem for someone eventually.

[-] 2Password2Remember@hexbear.net 15 points 2 weeks ago

LLMs are dogshit tech and nothing to be worried about. capitalists will use them to replace jobs, but they won't be better at the jobs than humans, so in the long run they're going to come to nothing

Death to America

[-] sexywheat@hexbear.net 9 points 2 weeks ago

Well, the main problem is that capitalists don't want works of art, and they don't want jobs to be done exceptionally well; they just want something good enough, which is exactly what LLMs produce.

[-] ped_xing@hexbear.net 15 points 2 weeks ago
[-] Nakoichi@hexbear.net 9 points 2 weeks ago

man I cannot wait for season 2

[-] TreadOnMe@hexbear.net 14 points 2 weeks ago

If you've ever watched The Big Bang Theory, you will begin to realize that much of the tech sector is literally run by people who think like Sheldon.

[-] dat_math@hexbear.net 13 points 2 weeks ago

our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer.

I think there's already some weak evidence supporting the notion that we'll do this with living neurons in a dish before we successfully simulate it in silico

Feeling like a certified Luddite these days

Big same, dawg

[-] SchillMenaker@hexbear.net 10 points 2 weeks ago

Human beings aren't all that special but biology is infinitely special. We have absolutely no fucking clue how consciousness works, how could we possibly hope to simulate it with a completely different mechanism?

Why does everyone head to the trenches about consciousness this or consciousness that? Why even try to emulate the brain when transformers produce the results they do? All these systems need to do is approximate human behavior convincingly enough and they can automate most professional jobs out there, and in doing so upend society as we know it. These systems don't need a soul to scab for labor.

[-] Saeculum@hexbear.net 2 points 2 weeks ago

What's the observable difference between something that closely emulates human behaviour and something with a "soul"?

What difference does it make? Does Bob from accounting have a soul? I can't answer that either.

[-] griefstricken@lemmy.ml 1 points 1 week ago

Well I am confident an AI could replicate the kind of alt account activity you like, and isn't that more important? Filling the trough and appearing to be in good company?

[-] griefstricken@lemmy.ml 2 points 1 week ago

What is the observable difference between a rock and a person who keeps their mouth shut? When you only live on the internet they are indistinguishable.

[-] Saeculum@hexbear.net 2 points 1 week ago

We know a world exists outside of the internet, as far as we know anything.

We might choose to believe in a soul, but with no evidence there's not really any point in bringing it up as a quality something can have.

To say that something does or does not have value because of the presence of a soul is the same as saying that something doesn't have value because I've decided it doesn't have the intangible property of valuableness.

[-] griefstricken@lemmy.ml 2 points 1 week ago

Souls are dumb ideas. All of the suffering, boredom, the heavens and hells are here on earth.

[-] ShimmeringKoi@hexbear.net 9 points 2 weeks ago* (last edited 2 weeks ago)

And shit like Neuralink exists, and I think the next step will be to offer that with a ChatGPT integration or some dystopian shit.

Cannot wait to see a really shitty, stupid version of Upgrade play out in real time as lonely and depressed people start taking Grok advice from the honest to god voice in their head

[-] Nacarbac@hexbear.net 11 points 2 weeks ago

The best? worst? blurst? part of it being that the LLM isn't even close to self-aware or alive. It isn't even really appropriate to talk about how it lacks consciousness, in the same way that one doesn't talk about the sand on a beach having a soul.

It's just everywhere, and Very Serious (Wealthy) People are saying it's alive, and if you don't believe in it either then maybe you're not being productive and don't deserve a job where they give you a free* wire in your head.

*hahahahaha

[-] Formerlyfarman@hexbear.net 7 points 2 weeks ago* (last edited 2 weeks ago)

You probably could simulate an intelligent creature on one of those 3 GHz Pentium processors. There are creatures with relatively simple nervous systems that are capable of complex thinking, like the pack hunting velvet worms, or those tool-using ants.

The thing is that the theory for how that works does not seem to exist. ChatGPT is a glorified Markov chain. It does not understand what it is saying. It can't; it just calculates which is the most likely token to produce next. It does not manipulate any categories or do anything resembling thought. Wolfram Alpha does. That's the true state of the art. But there is a long way to go.
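A Markov chain in the literal sense fits in a dozen lines. The corpus below is invented for illustration: count which word followed which, then always emit the biggest count. Real LLMs condition on far longer contexts with learned weights rather than raw counts, but the "most likely next token" mechanic is the same shape:

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = ("the worm hunts in packs . the ant uses tools . "
          "the worm uses slime . the ant hunts alone .").split()

# The entire "model": counts of which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_next(word):
    # No categories, no understanding; the biggest count just wins.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("worm"))
```

There is no representation of worms or ants anywhere in the model, only co-occurrence counts, which is the point being made above.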

[-] BodyBySisyphus@hexbear.net 4 points 2 weeks ago

pack hunting velvet worms

jesus-christ

[-] Hohsia@hexbear.net 3 points 2 weeks ago* (last edited 2 weeks ago)

Researchers have also inserted brain organoids into living organisms, which is essentially a man-made horror beyond my comprehension.

[-] NuraShiny@hexbear.net 6 points 2 weeks ago* (last edited 2 weeks ago)

"Oh no my phone keyboard knows the next word I want to type almost like it's intelligent!"

These are not intelligent programs. They don't have memory of the past beyond a few previous prompts. You are giving these tech bros way too much credit. It's like snake oil: of course Doctor Health's snake oil can cure all ailments! Please buy it now, while it's this cheap! We just need to get to the step where we tar and feather these fucks for their lies.

[-] Saeculum@hexbear.net 2 points 2 weeks ago

LLMs almost certainly aren't going to be where it eventually comes from, but I have no doubt we'll get there some day.

[-] spicehoarder@lemm.ee 1 points 2 weeks ago

LLMs are the Vacuum Tubes of AI, we just need to invent the transistor.

[-] NuraShiny@hexbear.net 1 points 2 weeks ago

No doubt? Okay. What makes you so sure?

[-] Saeculum@hexbear.net 1 points 2 weeks ago* (last edited 2 weeks ago)

Nature has already shown us that it's possible, and anything nature can do, we can iterate and improve upon.

[-] Saeculum@hexbear.net 5 points 2 weeks ago* (last edited 2 weeks ago)

and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer.

People already have children all the time, this is just what humans do.

[-] Tom742@hexbear.net 4 points 2 weeks ago

Hell, we collectively decided to slaughter en masse another collective group with feeling

My first thought was when we wiped out the Neanderthals

[-] AmericaDelendaEst@hexbear.net 3 points 2 weeks ago* (last edited 2 weeks ago)

Not that I'd know from experience, but dating on Reddit is so much more fun when you get messaged and, unless they happen to use any of the key phrases you've learned are code for "I'm a fucking bot," you might spend 20-30 minutes "in conversation" before they start to plug their totally real OnlyFans.

Think about all the resources wasted on AI and then think about how many of these bots are running at all times just trying to grift some horny freak out of $3

how much AI is currently focused on AI-AI interactions as bots attempt to trick bots into subscribing to their onlyfans???

[-] Philosophosphorous@hexbear.net 3 points 2 weeks ago

i don't know how they expect to 'establish a consciousness baseline' without a theory of information processing that can explain subjective experience. what is the algorithm that makes something experience? simulating a human brain on a computer will no more produce subjective experience than simulating a bladder will produce actual piss on your desk, as far as we know. it might tell us something about how consciousness works regardless, just as a bladder simulation can inform us about how a real bladder works even though you couldn't replace someone's actual organ with the simulation. regardless i am sure the information processing capabilities of humans will be fully outclassed by computers eventually, or well enough to justify replacing paid human workers at least. although even current primitive LLMs require a lot of energy. like any other industrial revolution it will only be used to extract more profit instead of bettering society.

[-] Saeculum@hexbear.net 2 points 2 weeks ago

and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer.

People already have children.

this post was submitted on 11 Nov 2024
48 points (98.0% liked)
