this post was submitted on 12 Apr 2025
29 points (91.4% liked)

Ask Lemmygrad

951 readers
62 users here now

A place to ask questions of Lemmygrad's best and brightest

founded 2 years ago

Hey there, sometimes I see people say that AI art is stealing real artists' work, but I also saw someone say that AI doesn't steal anything, does anyone know for sure? Also here's a twitter thread by Marxist twitter user 'Professional hog groomer' talking about AI art: https://x.com/bidetmarxman/status/1905354832774324356

[–] darkernations@lemmygrad.ml 21 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

The privatisation of the technology is bad, but not the technology itself. Labor should be socialised, and to be against this is not marxist.

Proprietorship is heavily baked into our modern cultures due to liberalism, so you are going to hear a lot of bad takes such as "stealing", or moralism based on the subjective aesthetic quality of a given piece of AI art (even if you were to homogenise the level of "quality" and call it substandard, all that would mean is that the technology should improve. Talk, for example, of the "soul" of art is just metaphysical nonsense: human beings and their productions do not possess some other-worldly mysticism) - even from people who consider themselves marxists and communists.

The advance of technology at the cost of an individual's job is the fault of the organisation and allocation of resources, i.e. capital, not the technology itself. Put it this way: people can be free to make art however they want to, and their livelihood should not have to depend on it.

If you enjoyed baking but lamented the industrialisation and mechanisation of baking because it cost you your livelihood, and you said it was because the machines were stealing your methods and the taste of the products wasn't as good, would we still consider that a marxist position? Of course not.

The correct takes can be found here:

If you're a marxist, do not lament the weaver for the machine (Alice Malone): https://redsails.org/the-sentimental-criticism-of-capitalism/

Marxism is not workerism or producerism; both could lead to fascism.

Artisans concerned about proletarianisation, as they effectively lose their labor-aristocratic status or path to the petite-bourgeoisie, may attempt to protect their material position and have reactionary takes. Again, this obviously is not marxist.

TLDR - bidetmarxman is correct. I would argue a lot of so-called socialists need self-reflection, but like I said, their views probably reflect their relative class positions, and it is really hard to convince someone to go against their perceived personal material benefits.

[–] yogthos@lemmygrad.ml 14 points 2 weeks ago (27 children)

What people are really upset with is the way this technology is applied under capitalism. I see absolutely no problem with generative AI itself, and I'd argue that it can be a tool that allows more people to express themselves. People who argue against AI art tend to conflate the technical skill and the medium being used with the message being conveyed by the artist. You could apply the same argument to somebody using a tool like Krita and claim it's not real art because the person using it didn't spend years learning how to paint using oils. It's a nonsensical argument in my opinion.

Ultimately, art is in the eye of the beholder. If somebody looks at a particular image and that image conveys something to them or resonates with them in some way, that's what matters. How the image was generated doesn't really matter, in my opinion. You could make a comparison with photography here as well: a photographer doesn't create the image that the camera captures; they have an eye for selecting scenes that are visually interesting. You can give a camera to a random person on the street, and they likely won't produce anything you'd call art. Yet give the same camera to a professional, and you're going to get very different results.

Similarly, anybody can type some text into a prompt and produce some generic AI slop, but an artist would be able to produce an interesting image that conveys some message to the viewer. It's also worth noting that workflows in tools like ComfyUI are getting fairly sophisticated, and go far beyond typing a prompt to get an image.

My personal view is that this tech will allow more people to express themselves, and slop will look like slop regardless of whether it's made with AI or not. If anything, I'd argue that lowering the barrier to making good-looking images means that people will have to find new ways to make art expressive beyond just technical skill. This is similar to the way graphics stopped being the defining characteristic of video games. Often, it's indie games with simple graphics that end up being far more interesting.

[–] darkernations@lemmygrad.ml 5 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

It appears some artisans who consider themselves marxist want to claim an exception for themselves: that the mechanisation and automation of production by capital, through the development of technology, in an attempt to push back against the falling rate of profit, can apply to everyone else but not to them - and when it happens to them, then apparently the technology itself is the problem.

[–] Munrock@lemmygrad.ml 13 points 2 weeks ago (6 children)

The messaging from the anti-generative-AI people is very confused and self-contradictory. They have legitimate concerns, but when the people who say "AI art is trash, it's not even art" also say "AI art is stealing our jobs"...what?

I think the "AI art is trash" part is wrong. And it's just a matter of time before its shortcomings (aesthetic consistency, ability to express complexity, etc.) are overcome.

The push against developing the technology is misdirected effort, as it always is with liberals. It's just delaying the inevitable. Collective effort should be aimed at affecting who has control of the technology, so that the bourgeoisie can't use it to impoverish artists even more than they already have. But that understanding is never going to take root in the West because the working class there have been generationally groomed by their bourgeois masters to be slave-brained forever losers.

[–] amemorablename@lemmygrad.ml 8 points 2 weeks ago

It can be frustrating sometimes. I've encountered people online whose takes on other things I respected, and then they would go viciously anti-AI in a very simplistic way. Having followed the subject in a lot of detail - engaging directly with services that use AI and with the people who use those services, and trying to discern what stance makes sense and why - their position would feel very shallow and knee-jerk to me. I saw, for example, how with one AI service, Replika, there were on the one hand people whose lives were changed for the better by it, and on the other hand people whose lives were thrown for a loop (understatement of the century) when the company acted duplicitously and started filtering its model in a ham-fisted way that made it act differently and reject people over things like a roleplayed hug. There's more to that story, some of which I don't remember in as much detail now because it happened over a year ago (maybe over two years ago? has it been that long?). But the point is, I have seen people talk directly about how AI made a difference for them in some way. I've also seen people hurt by it, usually as an indirect result of a company's poor handling of it as a service.

So there are the fears that surround it, and then there is what is happening day to day, and those two things aren't always the same. Part of the problem is that the techbro hype can be so viciously pro-AI that it comes across as nothing more than a big scam, like NFTs. And people are not wrong to think the hype is overblown. They are not wrong to understand that AI is not a magic tool that is going to gain self-awareness and save us from ourselves. But it does do something, and that something isn't always a bad thing. And because it does do positive things for some people, some people are going to keep trying to use it, no matter how much it is stigmatized.

[–] LeGrognardOfLove@lemmygrad.ml 7 points 2 weeks ago (1 children)

It's a disruptive new technology that disrupts an industry which already has trouble giving people a living in the western world.

The reaction is warranted, but it's now a fact of life. It just shows how stupid our value system is, and most liberals have trouble reconciling that their hardship is due to their value and economic system.

It's just another means of automation and should be seized by the experts to gain more bargaining power; instead they fear it and bemoan reality.

So nothing new under the sun...

[–] Munrock@lemmygrad.ml 6 points 2 weeks ago (1 children)

It's a disruptive new technology that disrupts an industry which already has trouble giving people a living in the western world.

Yes, and the solution to the new trouble is exactly the same as the solution to the old trouble, but good luck trying to tell that to liberals when they have a new tree to bark up.

[–] amemorablename@lemmygrad.ml 13 points 2 weeks ago

It's a multifaceted thing. I'm going to refer to it as image generation, or image gen, because I find that's more technically accurate than "art" and doesn't imply some connotation of artistic merit that isn't earned.

Is it "stealing"? Image gen models have typically been trained on a huge amount of image data, in order for the model to learn concepts and be able to generalize. Whether because of the logistics of getting permission, a lack of desire to ask, or a fear that permission would not be given and the projects wouldn't be able to get off the ground, I don't know, but many AI models, image and text, have been trained in part on copyrighted material that they didn't get permission to train on. This is usually where the accusation of stealing comes in, especially in cases where, for example, an image gen model can almost identically reproduce an artist's style from start to finish.

On a technical level, the model is generally not going to reproduce exact things exactly, and it doesn't have any human-readable internal record of an exact thing, like you might find in a text file. It can imitate, and if overtrained on something it might produce it so similarly that it seems like a copy, but some people get confused and think this means models have a "database" of images in them (they don't).

Now, whether this changes anything as to "stealing" or not, I'm not taking a strong stance on here. If you consider it something where makers of AI should be getting permission first, then obviously some are violating that. If you only consider it theft when an artist's style can be reproduced to the extent that the artist isn't needed to make things highly similar to their own work, some models are also going to be a problem in that way. But this is also getting into...

What's it really about? I cannot speak concretely by the numbers, but my analysis is that a lot of it boils down to anxiety over being replaced existentially and anxiety over being replaced economically. The second one largely seems to be a capitalism problem; it didn't start with AI, but has arguably been hypercharged by it. Where image gen differs is that it's focused on generating an entire image from start to finish. This is different from, say, drawing a square in an image-editing program, where the tool helps with components of the drawing but you still do most of the work. It means someone who understands little to nothing about the craft can prompt a model to make something roughly like what they want (if the model is good enough).

Naturally, this is a concern from the standpoint of ventures trying either to drastically reduce the number of artists or to replace them entirely.

Then there is the existential part, and this I think is a deeper question about generative AI that has no easy answer, but once again, it is something art has been contending with for some time because of capitalism and now has to confront much more drastically in the face of AI. Art can be propaganda, it can be culture and the passing down of stories (Hula dance), or, as is commonly said in the western context in my experience, it can be a form of self-expression. Capitalism has long been watering down "art" into as much of a money-making formula as possible, not caring about the "emotive" stuff that matters to people. Generative AI is, so far, the peak of that trajectory. That's not to say the only purpose of generative AI is to degrade or devalue art, but it enables about as "meaningless" a content mill as capitalism has managed so far.

It is, in other words, enabling the production of "content" that is increasingly removed from any authentic human experience or messaging. What implications this can have, I'm not offering a concluding answer on. One concern I've had, and that I've seen others voice, is the cyclical nature of AI: because it can only generalize so far beyond its dataset, it reproduces a particular snapshot of a culture at a particular point in time, which might make the capitalistic feedback loop in culture worse.

But I leave it at that for people to think about. It's a subject I've been over a lot with a number of people and I think it is worth considering with nuance.

[–] hexthismess@hexbear.net 12 points 2 weeks ago

AI steals from others' work and makes a slurry of what it thinks you want to see. It doesn't elevate art, it doesn't further an idea, doesn't ask you a question. It simply shows you pixels in an order it thinks you want, based on patterns its company stole when they trained it.

The only use for AI "art" is for soulless advertising.

[–] Arachno_Stalinist@lemmygrad.ml 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I believe the main issue with AI currently is its lack of transparency. I do not see any disclosure of how the AI gathers its data (though I'd assume they just scrape it from Google or other image sources), and I believe this is why many of us think AI is stealing people's art (even though art can just as easily be stolen with a simple screenshot without any AI, and stolen art being put on t-shirts was a thing even before the rise of AI - not that this makes AI art theft any less problematic or demoralizing for aspiring artists). Also, the way companies like Google and Meta use AI raises tons of privacy concerns IMO, especially given their track record of stealing user data even before the rise of AI.

Another issue I find with AI art/images is just how spammy they are. Sometimes I search for references to use for drawing as a hobby (oftentimes various historical armors, because I'm a massive nerd), only to be flooded with AI slop that pretty much never gets the details right.

I believe that if AI models were primarily open-source (like DeepSeek), trained on data voluntarily given by real volunteers, AND transparent enough to tell us what data they collect and how, then much of the hate AI is currently receiving would probably dissipate. Also, AI art as it currently exists is soulless as fuck IMO. One of the only successful implementations of AI in creative works I have seen so far is probably Neuro-Sama.

[–] yogthos@lemmygrad.ml 10 points 2 weeks ago

I very much agree, and I think it's worth adding that if open source models don't become dominant then we're headed for a really dark future where corps will control the primary means of content generation. These companies will get to decide what kind of content can be produced, where it can be displayed, and so on.

The reality of the situation is that no amount of whinging will stop this technology from being developed further. When AI development occurs in the open, it creates a race-to-the-bottom dynamic for closed systems. Open-source models commoditize AI infrastructure, destroying the premium pricing power of proprietary systems like GPT-4. No company is going to be spending hundreds of millions training a model when open alternatives exist. Open ecosystems also enjoy stronger network effects attracting more contributors than is possible with any single company's R&D budget. How this technology is developed and who controls it is the constructive thing to focus on.

[–] Commiejones@lemmygrad.ml 10 points 2 weeks ago (2 children)

AI art is stealing even less than piracy is. If copying a digital movie without paying for it isn't stealing, then how is generating a digital image based on thousands of digital images?

Intellectual property is a bad thing. It is a cornerstone of modern capitalism. The arguments for AI art being stealing all hinge on the false premise that intellectual property law is fair, equitable, and just, rather than a way for capitalists to maintain their monopolies on ideas.

Yes, artists deserve to be paid a fair share for their efforts, but only as much as everyone else. The issue is that artists are losing control of the means of production. Artists are rightly upset, but this should bring them into solidarity with the working classes. Instead, they want the working classes to rally to their cause. They want workers, who have had no control over the means of production for hundreds of years, to rise up and fight for the artists' right to control their means of production. It's neo-Luddism: they are railing against machines for stealing their jobs instead of railing against the capitalists hoarding all the wealth. It's individualism bordering on narcissism. It lacks class consciousness.

This topic can be a good entry point for agitation if you have a soft touch. It's hard to sound like you are on the side of an artist who feels they are being stolen from while convincing them that it is actually not theft, and explaining that their fear of losing their livelihood is real but is a feeling the entire working class has been battling with for centuries.

Artists' visceral feelings about AI are very valuable, because most of the working class has been desensitised to their lot in life. If artists were able to use their skills to remind the masses of this injustice, it could go a long way toward raising class consciousness. But since artists have been separated from the working class by their control of their means of production, getting them to pivot can be hard, as with any other petite bourgeoisie.

AI art is stealing even less than piracy is. If copying a digital movie without paying for it isn't stealing, then how is generating a digital image based on thousands of digital images?

Humans also do this all the time. Going along with the IP mafia on AI would be like requiring every even vaguely impressionist painter to pay royalties to Claude Monet, or every conventional fantasy author to pay Tolkien's estate - except Tolkien also got his inspiration from previous works, so I guess whoever are the lawful inheritors of Snorri Sturluson and Elias Lönnrot suddenly become very rich, except that they also compiled their works based on... and so on and on and on.

Intellectual property is a bad thing.

Even if we ignore every other impact of IP, it was historically always used by the publishing industry against the individual artist.

[–] Dengalicious@lemmygrad.ml 10 points 2 weeks ago (3 children)

It's not stealing, in the same way that studying the classics in an art class isn't stealing. We should still critique it, however, on environmental grounds.

[–] yogthos@lemmygrad.ml 4 points 2 weeks ago (4 children)

The good news is that efficiency is rapidly improving, so the energy-use problem does look like it's being solved. There is a lot of incentive to reduce energy costs as well, which means there is a concerted effort being applied here.

[–] KrasnaiaZvezda@lemmygrad.ml 9 points 2 weeks ago (1 children)

My take on it is this: when big corporations are doing it for (direct or indirect) profit, it's stealing. It was trained on the work of artists, after all, and that's the only reason they can make good images. If they could make a model that doesn't require using others' images, that would solve the issue, but at least under capitalism that is too expensive to happen now, so it won't.

Personal use can be fine, I guess, and can even allow for more creativity - for example, people using image generation to make new images/art/assets based on their own work/photos. An indie game/movie/etc. where the person uses AI to expand the scale of what they can do is a great use of AI, I'd say. Giving someone the ability to do something bigger/better than they could by themselves is what a tool should do, and gen AI should be such a tool for artists too.

There are more cases, but they might be harder to come to a conclusion on, especially as they exist in a capitalist setting.

[–] yogthos@lemmygrad.ml 9 points 2 weeks ago

Incidentally, there's a similar case of corporate freeloading when it comes to open source. Corporations use projects developed by volunteers and save billions of dollars in the process, but rarely contribute anything back or help fund the projects they depend on.

[–] pcalau12i@lemmygrad.ml 8 points 1 week ago

A lot of computer algorithms are inspired by nature. Sometimes when we can't figure out a problem, we look at how nature solves it, and that inspires new algorithms. One class of problems computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone's voice is different, the same person may speak in different tones, and there may be different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

Computer scientists recognized that computers are very rigid logical machines that process instructions serially, like stepping through a logical proof, whereas brains are decentralized, massively parallel computers that process everything simultaneously through a network of neurons. A brain's "programming" is determined by the strengths of the connections between those neurons, which are analogue rather than digital and only produce approximate solutions, without the rigor of a traditional computer.

This led to the birth of the artificial neural network: a mathematical construct describing a system of neurons and the configurable strengths of all the connections between them. From that, mathematicians and computer scientists figured out ways such a network could be "trained", i.e. its pathways configured automatically so that it "learns" new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you can also simulate it on a traditional computer in software.
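The construct described above can be sketched in a few lines of numpy. Everything here is illustrative: the network sizes, learning rate, and the toy task (XOR) are made-up choices, but the mechanics (connection strengths, forward propagation, training by nudging the strengths to reduce error) are the real thing in miniature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 2 inputs -> 8 hidden neurons -> 1 output.
# The network's entire "programming" lives in these connection strengths.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

# Training data: the XOR function, a classic task no single neuron can learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, out

# "Training" = repeatedly nudging every connection strength downhill on the error.
losses = []
for step in range(2000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1.0 - out)        # error signal at the output layer
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = d_out @ W2.T * (1.0 - h ** 2)          # backpropagated to the hidden layer
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad                      # gradient-descent update

print(losses[0], losses[-1])   # the error shrinks as the strengths adapt
```

Note that nothing here stores the training examples; after training, all that remains is the adjusted numbers in W1, b1, W2, b2.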

Computer scientists quickly found that by applying this construct to problems like speech recognition, they could supply the neural network tons of audio samples along with their transcribed text, and the network would automatically find patterns and generalize from them; when brand-new audio is recorded, it can transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the approach started to be used in many places; language translation software, for example, is also based on artificial neural networks.

Recently, people have figured out that this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated descriptive tags, and it learns to generalize the patterns associating the images with the tags. Depending on how you train it, this can go both ways: there are img2txt models, called vision models, that can look at an image and tell you in written text what it contains, and there are txt2img models, which take a description of an image and generate an image based on it.

All this technology is ultimately the same: text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing: taking a neural network and a large dataset of inputs and outputs, and training the network so it generalizes patterns from that data and can thus produce appropriate responses to brand-new data.

A common misconception about AI is that it has access to a giant database and that the outputs it produces are just stitched together from that database, kind of like a collage. However, that's not the case. The neural network is always trained with far more data than could possibly fit inside it, so it is impossible for it to remember its entire training set (if it could, this would lead to a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up "distilled" in the neural network is just a big file called the "weights" file: a list of all the neural connections and their associated strengths.
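A rough back-of-envelope calculation makes this concrete. The figures below are assumptions, chosen only to be in the ballpark of publicly described image models and web-scale image datasets, not the specs of any particular system:

```python
# Why the training set cannot be "inside" the weights file: compare the
# size of the weights to the size of the data the model was trained on.
params = 3.5e9               # assume ~3.5 billion connection strengths
bytes_per_param = 2          # stored as 16-bit floats
weights_gb = params * bytes_per_param / 1e9

images = 2e9                 # assume ~2 billion training images
avg_image_kb = 100           # assume ~100 KB per image on average
dataset_gb = images * avg_image_kb * 1000 / 1e9

budget = params * bytes_per_param / images   # storage "budget" per image

print(weights_gb)   # the whole model fits in a few gigabytes
print(dataset_gb)   # the training data is hundreds of terabytes
print(budget)       # only a few bytes per training image: no room for copies
```

Under these assumptions the model has about 3.5 bytes of capacity per training image, versus ~100,000 bytes per actual image, which is why only generalized patterns, not the pictures themselves, can survive training.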

When the AI model is shipped, it is not shipped with the original dataset and it is impossible for it to reproduce the whole original dataset. All it can reproduce is what it "learned" during the training process.

When the AI produces something, it starts with an "input" layer of neurons, somewhat like sensory neurons; the input may be the text prompt, an image, or something else. It then propagates that information through the network, and at the end an "output" layer of neurons, somewhat like motor neurons, is associated with some action, like plotting a pixel with a particular color value or writing a specific character.

There is a setting called "temperature" that injects random noise into this "thinking" process, so that if you run the algorithm many times you will get different results from the same prompt, because the process is nondeterministic.
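A minimal sketch of how temperature-based sampling works. The scores (logits) and the three-token vocabulary are made up for illustration; real models do this over tens of thousands of tokens or pixel values:

```python
import numpy as np

def sample(logits, temperature, rng):
    # Divide the raw scores by the temperature, then softmax and draw.
    # Low temperature sharpens the distribution toward the top choice;
    # high temperature flattens it, letting unlikely choices through.
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

rng = np.random.default_rng(42)
logits = [2.0, 1.0, 0.2]   # hypothetical scores for three candidate "tokens"

cold = [sample(logits, 0.1, rng) for _ in range(200)]
hot = [sample(logits, 2.0, rng) for _ in range(200)]

print(cold.count(0))      # nearly every cold draw picks the top token
print(sorted(set(hot)))   # hot draws regularly hit all three tokens
```

This is the randomness that makes repeated runs of the same prompt produce different outputs.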

Would we call this process of learning "theft"? I think it's weird to say so, personally; it is directly inspired by how biological systems learn, of course with some differences to make it more suited to run on a computer, but the broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as studying a bunch of photos as references to learn to draw. Am I "stealing" those photos when I then draw an original picture of my own? People who claim AI is "stealing" either don't understand how the technology works, or reach for claims like "it doesn't have a soul so it doesn't count," or point to differences between AI and humans which are real but aren't relevant differences.

Of course, this only applies to companies that scrape data that really was posted publicly for everyone to look at freely, like on Twitter. Some companies have been caught scraping data that was never made public: Meta, for instance, got in trouble for scraping libgen, much of whose content is supposed to be behind a paywall elsewhere. However, the law already protects people whose paywalled data gets illegally scraped (Meta is being sued over this), so it's already on the side of the content creator here.

Even then, I still wouldn't consider it "theft." Theft is when you take something from someone and thereby deprive them of its use. This case would be piracy: copying someone's intellectual property for your own use without their permission, which ultimately doesn't deprive the original person of its use. At best you can say that in some cases AI art, and AI technology in general, is based on piracy. But this is definitely not a universal statement. And personally, I don't even like IP laws, so I'm not exactly the most anti-piracy person out there lol

[–] m532@lemmygrad.ml 7 points 2 weeks ago (1 children)

I don't understand how every picture is supposed to be "art". Art is subjective. To me only a Slammer or similar is art.

[–] WaterBowlSlime@lemmygrad.ml 6 points 2 weeks ago (2 children)

It's a cultural thing for white people. Hell, I'd say that it's one of the only things that's truly white culture. When the word "art" gets used on something, it's basically the same as anointing it. Any and every drawing, sculpture, text, video, and song is "art" by default. And some take it further to define everything made by a person as "art" too. Even though it's vague as hell, it's a deadly serious topic.

I'm not saying that other cultures don't have similar ideas, but that it's weird the way that white people universally agree that protecting "art", whatever that is, is of the utmost importance.

It has always bewildered me about the English language that every creation is called "art". In Polish, art is "sztuka", a word with quite a few meanings, but the most relevant is "a field of artistic activity distinguished by the aesthetic values it represents", the common understanding being that it has to actually represent some aesthetic values. This of course causes unending discussion about what art is, and about its subjectivity, to the point that the adage "art is in the eye of the beholder" became universally accepted.

[–] amemorablename@lemmygrad.ml 5 points 2 weeks ago

I like to bring up the example of Hawaiian Hula dance for that kind of reason. It's a case where a kind of "art" is inseparable from culture, heritage, and the passing down of stories, which is a lot different from simply doing what you feel like, as a form of "self-expression." Not that I'm judging the "self-expression" point of view as bad; I'm just adding to the notion that what is considered art, and the importance attached to it, is not the same across cultures.

[–] June@lemmygrad.ml 7 points 2 weeks ago* (last edited 2 weeks ago)

i don't think it's stealing; all current artists invariably owe a debt to the work of those who came before them

[–] Makan@lemmygrad.ml 6 points 2 weeks ago

It looks fugly, that's the deal with it

[–] ksynwa@lemmygrad.ml 5 points 1 week ago

I don't wanna get too deep into the weeds of the AI debate, because I frankly have a knee-jerk dislike for AI, but from what I can skim of hog groomer's take, I agree with the sentiment. A lot of the anti-AI sentiment is based on longing for an idyllic utopia where a cottage industry of creatives exists, protected from technological advancement. I think this is an understandable reaction to big tech trying to cause mass unemployment and climate catastrophe for a dollar while bringing down the average quality of creative work. But stuff like this prevents sincere consideration of if and how AI can be used as tooling by honest creatives to make their work easier, faster, or better. That kind of nuance currently has no place in the mainstream, because the mainstream has been poisoned by a multi-billion-dollar flood of marketing material from big tech, consisting mostly of lies and deception.

[–] MasterDeeLuke@lemmygrad.ml 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The only major issue I personally see with it is the fakes/deepfakes. The quality is still pretty subpar right now, yes, but that's something that will get better over time, just like computer graphics did over the last few decades. Being against AI art just because it's easy seems like a rather reactionary take to me; Marxists shouldn't be in favor of intentionally gatekeeping things behind innate ability or years of expensive study.

As for the deal with artists, ideally there should be a fine distinction between personal and commercial use that empowers indie artists while holding large corporations accountable for theft.
