this post was submitted on 03 May 2025
132 points (100.0% liked)

chapotraphouse

[–] Jabril@hexbear.net 9 points 3 days ago* (last edited 3 days ago) (2 children)

Of course LLMs aren't a simulation of consciousness with the same abilities as a human. The idea is that if a model were trained first on Marxist theory and history, and then took in further information through that perspective, there could be a point where it can be used to simulate economic models that would be useful for economic planning. It could be used to simulate contradictions and formulate strategies for navigating them in organizing spaces. It could be used for propaganda purposes: if an average person asks it questions, it would default to discussing the topic from a revolutionary angle. If some American goes on Deepseek to ask how to convince their boss to give them a raise, Deepseek should default to teaching them how to unionize their workplace instead of just helping them craft a good argument to convince the boss. There are a lot of use cases for an LLM trained this way, and this kind of work would pave the way for greater advancements as the technology develops and we inevitably get closer to a science-fiction understanding of AI, which is obviously not what an LLM is.

There are a lot of leftists here who have a reactionary stance on this technology because of the way it is being used by capitalists, in the same way that anarchists have a reactionary stance on the state because of the way it is being used by capitalists. My wishcasting fan fiction about a "good" AI existing one day in the future, as opposed to the far more commonly held idea that an AI of this type, if ever developed, would kill us all, is obviously bloomer cope. We'll be long dead before the technology gets there, because the great minds of the left take a post about a Marxist-Leninist dialectical materialist bot as a serious analysis of humanity's current technological progress and feel the need to critique it.

Edit: this post isn't directed at the person I am responding to in particular; there has just been an ongoing undercurrent around this issue, which is what I am speaking to more broadly.

[–] Wheaties@hexbear.net 3 points 2 days ago (1 children)

I think computers themselves are already the under-utilized tool for economic planning and coordination. Perhaps LLMs (or some other sort of trained neural net) have a role in that, though I think the way it spits out answers without being able to go back and follow the steps it took to get there makes it a bit unreliable for economic modeling. Honestly, I've seen a pretty compelling case put forward that all we really need is some open-source algebra equations, and a dedicated network for incorporating worker feedback and real time data.
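The "open-source algebra" case usually refers to input-output planning, where meeting a target of final consumption just means solving a linear system. A minimal sketch, with entirely made-up coefficients and demands (two hypothetical sectors, direct 2x2 inversion rather than a real solver):

```python
# Toy Leontief input-output model: producing one unit of sector j
# consumes A[i][j] units of sector i. Solving (I - A) x = d gives
# the gross output x each sector must produce so that final
# consumer demand d is met after inter-sector inputs are covered.

# Hypothetical technical coefficients, sectors = [steel, grain]
A = [[0.1, 0.3],
     [0.2, 0.4]]
d = [50.0, 60.0]  # final demand per sector

# Build (I - A)
M = [[1 - A[0][0],    -A[0][1]],
     [   -A[1][0], 1 - A[1][1]]]

# Direct 2x2 inversion: x = (I - A)^-1 d
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
x = [(M[1][1] * d[0] - M[0][1] * d[1]) / det,
     (M[0][0] * d[1] - M[1][0] * d[0]) / det]

print(x)  # gross output targets per sector
```

Real planning models have thousands of sectors and need sparse solvers plus the feedback network mentioned above, but the core computation is exactly this kind of open, auditable algebra rather than an opaque model.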

And, I'm... skeptical about the effectiveness of online messaging in general. It's good for getting ideas out there, but real organizing happens offline, between people. Ideally, we want them to be able to recognize, analyze, and work through contradictions themselves - rather than relying on the computer to hand them answers.

[–] Jabril@hexbear.net 1 points 1 day ago

I generally agree with all of this where we are now, especially the last part in regards to organizing.

I'm imagining two concurrent timelines: one where machine learning and related technology continues to develop at an increasingly rapid pace, and another where westerners are under the heel of capitalism, increasingly desperate for change, but more or less alienated from revolutionary theory and practice because their settler/colonial/fascist base ideologies prevent them from accepting the solutions to their problems.

Playing out that scenario, I could foresee a time where the technology (all machine learning, neural networks, "AI," and not LLMs particularly) has advanced to a point that it is more useful than the average American leftist in finding solutions to American problems, because American leftists are inhibited by the aforementioned ideologies and show no signs of letting them go. Many are doubling down these days.

Even now, the most talented organizers I know are mostly bogged down by the reproductive labor of keeping organizations afloat. If even a third of that could be offloaded to machines and just touched up by humans, it would save hundreds of hours a year that could go back into human-to-human interaction.

[–] Le_Wokisme@hexbear.net 8 points 3 days ago (1 children)

Of course LLMs aren't a simulation of consciousness with the same abilities of a human, the idea is that if a model was trained first on Marxist theory

there could be a point where this can be used to simulate economic models

but they don't simulate anything, they're word calculators
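"Word calculator" is roughly accurate at the cartoon level: given the preceding words, the model computes a score for every possible next word and emits a likely one. A toy illustration using bigram counts on a made-up corpus (a real LLM replaces the count table with a neural network over subword tokens, but the interface is the same: context in, next-word distribution out):

```python
from collections import Counter, defaultdict

# A toy "word calculator": count which word follows which in a tiny
# made-up corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Most likely next word, given only the single preceding word."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat" (seen twice, vs. once each for others)
```

Nothing in the table models the cat, the mat, or anything else in the world; it only models which words co-occur, which is the sense in which "calculating words" is not the same as simulating a system.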

[–] Jabril@hexbear.net 2 points 3 days ago (1 children)

How are you defining simulation? With currently available platforms we can already generate images, videos, 3D models, and text, interpret data, and train models on particular data to base any of that generation on.

[–] Le_Wokisme@hexbear.net 9 points 3 days ago (1 children)

dumping out scrabble tiles doesn't simulate systems.

[–] Jabril@hexbear.net 2 points 3 days ago (2 children)

AI is already being used to assist simulation. One team used it to train robots by taking photos of a room and having AI-built simulations train the robot's movements virtually, instead of having it physically repeat the tasks in a real space. A quick search will yield many examples of work being done now that will make the kinds of simulations you don't see today feasible in 5-10 years.
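The core idea of that robotics work is learning a policy inside a simulated model of the room rather than through physical trials. A minimal sketch, with everything a toy stand-in (a 5-cell corridor instead of a photo-derived room model, tabular Q-learning instead of whatever the actual teams used):

```python
import random

N = 5               # corridor cells 0..4, goal at the right wall
ACTIONS = (-1, +1)  # step left / step right

def step(state, action):
    """Simulated room dynamics: move, clamp at the walls, reward at goal."""
    nxt = max(0, min(N - 1, state + action))
    return nxt, (1.0 if nxt == N - 1 else 0.0), nxt == N - 1

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]   # value of each action in each cell

# Training happens entirely in the simulator: thousands of virtual
# attempts, zero wear on a physical robot.
for _ in range(500):
    s = 0
    for _ in range(20):
        a = random.randrange(2)      # explore randomly in the sim
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update toward reward plus discounted future value
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# "Deploy" the learned policy: greedy, bounded rollout from the left wall.
s, path = 0, [0]
for _ in range(N):
    if s == N - 1:
        break
    a = 0 if Q[s][0] > Q[s][1] else 1
    s, _, _ = step(s, ACTIONS[a])
    path.append(s)
print(path)  # the robot walks straight to the goal
```

The gap between this and the real systems is enormous, but the structure is the same: a model of the environment stands in for reality, and the expensive trial-and-error happens inside it.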

[–] Le_Wokisme@hexbear.net 7 points 3 days ago (1 children)

the AI simulations train the robot on movements virtually

that sounds like not an LLM

[–] Jabril@hexbear.net 1 points 2 days ago

I didn't say it was an LLM, other people brought up LLMs in response to my comment

[–] awth13@hexbear.net 6 points 3 days ago (1 children)

You are confusing the wider field of machine learning, which has been developing in strides throughout the 2010s (and before that, really) without the media overhyping it to the point that people think machines can think now, with LLMs, which birthed the media hype cycle that is the subject of criticism in this thread.

[–] Jabril@hexbear.net 1 points 2 days ago (2 children)

My original comment was about AI, other people brought up LLMs in response to that. I'm not confusing anything.

[–] Le_Wokisme@hexbear.net 4 points 2 days ago (1 children)

if you made a comment about AGI on a post about an LLM and only said "AI", there is zero context clue for us to think you meant a different topic.

[–] Jabril@hexbear.net 0 points 2 days ago* (last edited 2 days ago)

my response to the OP was about a fictional communist AI to save humanity, clearly riffing on the OP's title, which prompted all the debate perverts to come out and make sure everyone understands that LLMs aren't actually HAL 9000.

[–] awth13@hexbear.net 3 points 2 days ago* (last edited 2 days ago) (1 children)

Of course LLMs aren't a simulation of consciousness with the same abilities of a human, the idea is that if a model was trained first on Marxist theory and history before taking in more information through that perspective there could be a point where this can be used to simulate economic models that would be useful for economic planning.

You couldn't make it any more confusing than talking about LLMs and economic model simulation in a single sentence then.

[–] Jabril@hexbear.net 1 points 2 days ago (1 children)

Is it confusing, or are you just so locked in on your special interest that you are ignoring the context? I made a comment about a future CPC AI that I have imagined for fun. Someone responded to inform me about how LLMs work, which isn't what I was talking about. I responded saying that of course that is true, then elaborated on the idea they had misunderstood in my original comment.

You even left out the part where I clarified that LLMs as they stand are a part of paving the way towards the idea I brought up. If something like what I have imagined for kicks is ever made, LLMs will certainly be a part of its development.

I'm sure I could have been more concise, but considering you used the word "gaslighting" to describe what you feel my comment was, it seems like you're just reaching heavily for the outcomes you seek.

[–] awth13@hexbear.net 3 points 2 days ago (1 children)

It appears to be confusing because other people also read your comments in the same way as me. Thank you for clarifying though, I understand that it must be frustrating getting your thoughts hijacked like that! Before I say anything else, I'd also like to clarify that, in my first comment, I didn't mean your comment in particular – I was replying to someone who already replied to you after all – but a wider trend I can't describe more specifically without naming names, which I don't want to do. With that being said,

LLMs will certainly be a part of its development.

Why certainly? That's the point where what you are saying now can feel like part of that LLM hype bullshit because I don't see how a chatbot can help a planned economy. Other machine learning models, sure, and I've fantasised about this before too, but LLMs seem to be orthogonal to this use case. Or do you rather mean that the insights obtained while developing LLMs can help us towards those better machine learning applications?

[–] Jabril@hexbear.net 2 points 1 day ago (1 children)

Yet others are not reading it that way, so maybe there are multiple people who are just skimming over what I said to find what they want to go in on.

I appreciate you clarifying your original comment.

Or do you rather mean that the insights obtained while developing LLMs can help us towards those better machine learning applications?

Yes, exactly this. I don't think the future technology we have both fantasized about is just a beefed up chat gpt, but that the field of machine learning as a whole will advance and LLMs are a part of that process.

[–] awth13@hexbear.net 2 points 1 day ago (1 children)

I apologise for misunderstanding you. I agree; it's just that everyone is already really tired of the LLM hype machine that keeps claiming AI will take over any moment now, when we don't even know if or when that future technology will be achieved. Personally, I think the LLM hype is counterproductive to that effort, which is why I use such strong terms when discussing it.

[–] Jabril@hexbear.net 1 points 1 day ago

Thanks for apologizing.

I definitely understand where you are coming from, although I do think there is a Luddite-esque angle that tries to reject "AI" as bad because of the LLM hype and the negative uses pushed by capitalists. "AI" is already putting people out of work and being used in a lot of industries; some of those uses (like in medicine) are actually really promising, while others are pretty terrible.

Either way, ending capitalism is the only way to ensure that there is any future where the technology is a net positive.

I do think that with the rate of climate collapse, there's a good chance we won't see it reach the point of being advanced enough to be liberating.