this post was submitted on 21 Feb 2026
11 points (76.2% liked)

Technology

1378 readers

A tech news sub for communists

founded 3 years ago

A blog post I found in response to Cory Doctorow taking a pro-LLM stance in a recent post of his.

[–] allende2001@lemmygrad.ml 16 points 2 days ago* (last edited 2 days ago) (2 children)

Obligatory Intellectual property in the times of AI mention

@CriticalResist8@lemmygrad.ml

Also, a comment I found under this article:

LLMs are based on extraction, exploitation and subjugation

So is torrenting. This is a very capitalist argument, coming from someone who self-identifies as a communist: that one deserves to reap rewards for adding value to humanity through some form of gatekeeping, and is entitled to a reward from that gatekeeping. You're literally arguing on the side of Elsevier and JSTOR against Aaron Swartz.

What does it matter if human knowledge is available as a book or an LLM? The important part is that all of humanity has access to it.

Omelas is an almost perfect city. Rich, democratic, pleasant. But it only works by having one small child in perpetual torment.

Walking away from Omelas doesn't stop that child's perpetual torment. Your choice is merely ignorance and cowardice in front of injustice. Choosing to stay in Omelas and poison its democratic system to lead to its downfall is arguably the more moral option. Let's not even get into the argument about how Germany made the Eurozone its Omelas at the expense of deficit-prone southern Europe, and how, by your argument, you should leave Germany.

If everything is somehow “free and open” then we have won.

Your moral choice not to use LLMs is the same as abandoning Omelas and the eternally tormented child; it serves as nothing but intellectual onanism. Distilling GPT 5 and Opus 4.6, commoditising the petaflop (see George Hotz), and deploying efficient models on Huawei chips is the same as causing rot in Omelas from the inside, making the billions invested into AI worthless and tearing down the system that is perpetually tormenting that child. It is the only way forward.

Cory was right to label this "neolib purity testing", because 1) it sides with capital (see the above point re: torrenting), 2) it tries to don the mantle of dialectical materialism while viewing this issue through a lens of individualist action and static morality, and 3) it endlessly criticises power instead of aiming to claim and wield it for good.

Also, one (out of many) of Cory's points mentioned in the last paragraph of the comment:

Purity culture is such an obvious trap, an artifact of the neoliberal ideology that insists that the solution to all our problems is to shop very carefully, thus reducing all politics to personal consumption choices:

https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

Also also, @yogthos@lemmygrad.ml

[–] yogthos@lemmygrad.ml 10 points 2 days ago
[–] CriticalResist8@lemmygrad.ml 12 points 2 days ago (3 children)

Well, that's only my two cents and I don't know either of these writers, but I read the original post and the response with great interest. It seems clear to me that Cory was writing a newsletter (literally updating people about his blog), which is why the LLM portion is small. That doesn't mean it can't be critiqued, but Cory was clearly not writing a manifesto for LLMs there, so it seems odd to me, again not knowing anything about the writers or publications, to want to write a whole response to a small anecdote/rant about someone's own use of LLMs:

The writer of the Tante piece contradicts themselves a few times and doesn't make a great case for their position, and a lot of their arguments rely on pure idealism. It seems to me they have not examined their own line on AI, and therefore rely on a hodgepodge of things they've heard and things they've arrived at implicitly, glued together. As a result the response is full of contradictions, and it seems more like what they took issue with was that someone they read had a positive opinion of AI.

Some of the contradictions:

artifacts have built-in politics deriving from their structure. A famous example is the nuclear power plant: Due to the danger of these plants and their needs with regards to resources as well as security, power plants imply a certain form of political arrangement based on having a strong security force/army and a way to force these facilities (and facilities to store the waste in) upon communities potentially against their will.

But then:

That does not mean that it is impossible to take certain technologies or artifacts and try to reframe them, change their meaning. In some way computers are one such example: They were first used by governments, banks and other corporations to reach their goals but were then taken and reframed as devices intended to support personal liberation

The computers used in banks were used differently from the personal computers that made their way into our homes. They had different protocols in place for how they could be used and connect to the network: having to log your work, not leaving personal files on them, archiving work files every X years, etc. So we see that it is indeed not the technology itself that's problematic, but how it is used, and I suppose this is what separates marxists from vibes-based leftism. If someone is punching you and you punch them back, you might both be considered violent, but we see that this violence has a character: someone was attacking, and the other was defending themselves.

Their argument also doesn't explain why the USSR was interested in machine learning and AI (yes, neural networks are not new; they were being tested with vacuum tubes as far back as the 50s, and the USSR was very big into them), or why China is making so many models. They ascribe a universal character to different systems instead of taking them in their particular material context. Frankly there is very little material analysis at all in both pieces, because neither looks at things in their totality and in the material world. Any analysis or critique of AI that ignores what's happening in wholly different states just won't produce anything actionable.

They further contradict the earlier argument here:

A search engine scrapes pages to build an “index” in order to let people find those pages. The scraping has value for the page and its owner as well because it leads to more people finding it and therefore connecting to the writer, journalist, musician, artist, etc. Search engines create connection.

AI scrapers do not guide people towards the original maker's work. They extract it and reproduce it (often wrongly). "AI"s don't point out to the web for you to find others' work to relate to; they keep you in their loop and give you the answer, cutting off any connection to the original sources.

While the technology of scraping is the same, the purpose and material effects of those two systems is massively different.

(emphasis mine). They agree that things develop differently. But as for the difference they try to draw between a search engine and AI, they start to twist themselves into knots there to prop up their deeply held beliefs. There is nothing that says a search engine has to be this way, or that an LLM has to be that way. Moreover, Google has been under fire for years for trying to get people to stay in its 'loop' and not leave the results page, and there are LLM providers that focus on making their models work as search engines.

And then contradict this anyway:

Technologies are embedded not only in their deployment but also in their creation, conceptualization. They carry the understanding of the world that their makers believe in and reproduce those. A bit like an LLM reproduces the texts it learned from: It might not always be a 100% identical replica but it’s structurally so similar that the differences are surface level.

So they're indirectly saying a search engine has only surface-level differences from an LLM, which cancels their entire point that search engines are different enough to be good while LLMs are still bad. That part felt like they didn't have a problem with the stuff they grew up with because it was normalized, but couldn't articulate why, and so produced this contradiction.

Another contradiction:

And freedom is not the only value that we care about. Making everything “free” sounds cool but who pays for that freedom? Who pays for us having for example access to the freedom an open weight LLM brings? Our freedom as users rests on the exploitation of and violence against the people suffering the data centers, labeling the data for the training, the folks gathering the resources for NVIDIA to build chips. Freedom is not a zero-sum game but a lot of the freedoms that wealthy people in the right (which I am one of) enjoy stem from other people’s lack thereof.

And two paragraphs later:

But the argument against using LLMs is not about shopping and markets at all. My not using LLMs does not influence anything in that regard, Microsoft will just keep making the data center go BRRRRRRR.

Exactly; the technology will exist whether we use it or not, you can't put the toothpaste back in the tube, as they say. So what's the solution? Pretend it doesn't exist? Berate people into compliance? It's like saying I don't want to use drones in my war because they're pretty scary and barbaric. Sure, but the adversary is using them and will be very happy that you are not using drones against them (see the Hedgehog 25 military exercise in Estonia). It's self-defeating to refuse to use something because it's the "tool of the enemy", which is the grown-up way of saying it has cooties. They talk about not building the torment nexus, but what if building some of the torment nexus allowed us to destroy it for good? We know this intuitively as marxists: there will not be socialism without capitalism first.

On top of which there are several words that betray what the author really thinks, such as putting AI in quotes (because it's not actually "intelligent", you see), calling it stochastic, or the "(often wrongly)" aside in a paragraph I quoted. Those are the sorts of keywords you see repeated all the time on Twitter by people who briefly tried ChatGPT in 2023 when it was underbaked, made up their minds about it then, and never tried LLMs again or followed what has been happening since.

[–] cfgaussian@lemmygrad.ml 9 points 2 days ago* (last edited 2 days ago) (2 children)

Good response. It seems to me that both parties here don't really have fully coherent arguments. There is indeed a tech-libertarian bent to the arguments made by Cory Doctorow, which is not always compatible with how a Marxist analysis would approach this topic. On the other hand, the critique also seems to stray too far into moralism and the idea of expressing your ethics through consumption, which, again, is not the dialectical-materialist outlook. In fact it has somewhat of a superstitious feel to it, like you are incurring "bad karma" by using certain products.

For example: In the piece they mention that using WhatsApp is problematic because Meta is problematic. That is a moralistic argument. A more practical reason why you shouldn't use WhatsApp is security and data privacy: backdoors for intelligence agencies, and the fact that companies like Google, Meta or Apple are embedded with the security state. And it depends on what you use it for. Context matters. Using it to discuss revolutionary organizing - maybe not a great idea; using it for a parent discussion group about your kid's school - probably fine. It's not so much what you use but how you use it.

[–] ksynwa@lemmygrad.ml 7 points 2 days ago (1 children)

Yeah, there are strange takes on both sides. Doctorow for example talks about liberating LLMs, and his act of liberation is just... using a local model. Normal people cannot liberate LLMs because, aside from the extremely high level of expertise required, it takes a bonkers amount of resources. As such, the only thing adjacent to liberation happening right now is the release of Chinese open source models. No one gives a fuck about Mistral, Trinity or Llama. But Doctorow does not get into the details of this liberation.

Tante's post on the other hand also has some good bits. Purity testing is an idealistic distraction, but purity testing is also often used as a strawman to distract from what he called negative externalities. The part about Omelas, though, is just crazy. Who reads that story and concludes that the correct choice is to walk away?

[–] darkernations@lemmygrad.ml 7 points 2 days ago* (last edited 2 days ago)

It's only through ML that the contradictions are sublimated, i.e. the solution to privatised control is not to unsocialise the labour but to socialise control of the means of production as an extension of socialised labour... which is why China is winning, whereas the best the West has to offer to counter what they consider "corporatism" is limited: either the anti-AI artisanal variety of reaction, or a "pro-AI" stance that is some version of what Doctorow offers as a solution, i.e. individualised "emancipation" (and I still think your article was a good share)

[–] darkernations@lemmygrad.ml 4 points 2 days ago* (last edited 2 days ago)

I used to consider marxism-leninism as anti-moralism and anti-purity/anti-dogma, but I am not so sure now. Now I wonder if we just have a different set of morals and dogma - a scientific one.

https://redsails.org/aristocratic-marxism/

[–] darkernations@lemmygrad.ml 6 points 2 days ago
[–] allende2001@lemmygrad.ml 2 points 2 days ago

Great response!
