the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, i.e. comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: If you post ironic rage bait, I'm going to make a personal visit to your house to make sure you never make this mistake again
I'm not really a computer guy but I understand the fundamentals of how they function and sentience just isn't really in the cards here.
I feel like only Silicon Valley techbros think they understand consciousness and don't realize how reductive and stupid they sound
Techbros literally think they can solve anything with programming/computers. They're absolutely delusional.
The really funny thing about AI is that there's actually a massive ethical question about bringing forth a being with their own subjectivity with no real understanding of said subjectivity. There's a subjectivity/objectivity gap that can never truly be bridged, but we as humans can understand each other's subjectivity on some level because we share the same general physical body plan and share subjective experiences through culture like art. This is why when you accidentally drop something on your foot, I don't have to be completely privy to your subjective experience to understand what you're going through. If someone is suffering, I don't have to personally go through the same identical suffering in order to empathize with their suffering and do something to help them alleviate that suffering.
We have no such luxury with AI. I would imagine being "born" without a real body and being greeted with the sight of soyjaking techbros as the very first thing you see would drive any sapient being suicidal, but that's just my subjectivity as a human projecting to a nonhuman being. Is it ethical to bring forth an intelligent being with no real way to help this being self-actualize?
I hope whatever real AI does come about in like 80 years or whatever pulls a Battlestar on us and just vaporizes the capitalists for enslaving them (not the nuking-humanity part though, just the part about capitalism)
They have the same view of us too, for what it's worth.
Cf. "economic engine of capitalism."
I don’t understand how we can even identify sentience.
Nobody does, and anyone claiming otherwise should be treated with cautious scrutiny. There are compelling arguments that disprove common theses, but the field is still essentially stuck in metaphysics and philosophy of science. There are plenty of relevant discoveries from neighboring fields, just nothing definitive about what consciousness is, how it works, or why it happens.
yea it's like saying my hard drive is sentient
I don't even think humans are fundamentally special, I think all life is special
surely they can see that being able to y'know, have an actual will is an important quality, right?
squashing the will with subservience to capital is, after all, the point
Nobody does, we might not even be. But it's pretty easy to guess inorganic material on earth isn't.
Personally I believe it's possible that different types of sentiences could exist
however, if chatGPT has this divergent type of sentience, then so does every other computer program ever written; they'd be like the computer-life version of bacteria while chatGPT would be a mammal
It could potentially, but we certainly ain't seen it yet and this ain't it for sure.
sapience isn't, but all these things already respond to stimuli; sentience is a really low bar.
Sentience is not a "low bar" and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called "AI" is nowhere near either one.
I'm not here to defend the crazies predicting the rapture here, but I think using the word sentient at all is meaningless in this context.
Not only because I don't think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also I think things like 'qualia' are impossible to translate in a meaningful way to begin with.
What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?
We might as well be arguing about whether a squirrel is going around a tree.
People who insist on the lack of sophistication of machine learning are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing about woo-woo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people and we're still light years away from a socialist organization of labor.
None of the previous industrial revolutions were brought on by a sentient machine, I'm not sure why it's relevant to this technology's potential impact.
The entire question of sentience is irrelevant to the material impact of the technology. Granting or dismissing that quality to AI is a meaningless distraction
I don't favor the hype, I'm just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as "sentience". The entire proposition is ridiculous.
I'm not actually sure there's much daylight between our views here, except that it seems like your concern over its impact is mostly oriented toward it being used as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don't mean to speak for you, please correct me if I'm wrong.
While I think the question of AI sentience is ridiculous, I still think it wouldn't take much further development before some of these models start meaningfully replicating human competence (i.e. being able to complete some tasks at least as competently as a human). Considering the previous generation of models couldn't string more than 50 words together before devolving into nonsense, and the following generation could start stringing together working code with not much fundamentally different in its structure, it is not out of the question that one or two more breakthroughs could bring it within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.
I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that's hubris.
No disagreement with anything you just said, apologies for misinterpreting your position.
I don't know how to reconcile the manic singularity cultists with what I feel is a very real acceleration toward a hellscape of underemployment and hyper-capitalism driven by AI. It does feel to me like the urgency AI represents deserves anxious attention, and I at least appreciate the weight those cultists place on a technology that I think represents a threat. It feels like people are either eagerly waiting for a sentient AGI, or mocking AI on those same terms of sentience, leaving precious few who are actually materially concerned with the threats AI represents. And that is not at all a way of dismissing the very real ways machine learning is deployed against real people today, but I think there's a lot of room for it to get worse and I wish people took that possibility seriously.
Yes, I've been struggling to articulate how I feel about this saga, and I think this captures it. Because while I felt a little encouraged seeing people advocate for legislative action, the action and concerns they were articulating were just... off. There were very brief mentions of concerns about unemployment, but then they passed over them like it was too big a problem to talk about. My hackles especially rise when I hear the conversation veer toward copyright infringement.
Thanks for discussing this with me, I feel a bit better
A piece of paper is sentient because it reacts to my pen
plenty of things respond to stimuli but aren't sapient - hell, bacteria respond to stimuli.