this post was submitted on 12 Apr 2025
30 points (94.1% liked)

Ask Lemmygrad


A place to ask questions of Lemmygrad's best and brightest


Hey there. Sometimes I see people say that AI art is stealing real artists' work, but I've also seen someone say that AI doesn't steal anything. Does anyone know for sure? Also, here's a twitter thread by Marxist twitter user 'Professional hog groomer' talking about AI art: https://x.com/bidetmarxman/status/1905354832774324356

[–] pcalau12i@lemmygrad.ml 2 points 5 hours ago

A lot of computer algorithms are inspired by nature. Sometimes when we can't figure out a problem, we look at how nature solves it, and that inspires new algorithms. One problem computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone's voice is different, and even the same person may speak in different tones, with different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

Computer scientists recognized that computers are very rigid logical machines that process instructions serially, like stepping through a logical proof, but brains are very decentralized and massively parallelized computers that process everything simultaneously through a network of neurons. A brain's "programming" is determined by the strengths of the connections between its neurons, which are analogue rather than digital, produce approximate solutions, and aren't as rigorous as a traditional computer.

This led to the birth of the artificial neural network. This is a mathematical construct that describes a system with neurons and configurable strengths of all its neural connections, and from that mathematicians and computer scientists figured out ways that such a neural network could also be "trained," i.e. to configure its neural pathways automatically to be able to "learn" new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you could also simulate it on a traditional computer in software.
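To make that concrete, here's a minimal sketch in Python (a toy illustration only, nothing like a production model): a tiny network whose entire "program" is its connection strengths, trained by repeatedly nudging those strengths to reduce error.

```python
import numpy as np

# Toy neural network: 2 inputs -> 3 hidden neurons -> 1 output.
# The "programming" lives entirely in the connection strengths (weights),
# which start out random and get adjusted during training.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: no single rule separates these

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: propagate the input through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight in the direction that reduces error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]: the network "learned" XOR
```

Nobody wrote an IF/ELSE rule for XOR here; the behaviour emerges from adjusting connection strengths against examples, which is the same broad principle behind the big models.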

Computer scientists quickly found that by applying this construct to problems like speech recognition, they could supply the neural network with tons of audio samples and their transcribed text, and the neural network would automatically find patterns in the data and generalize from them, so that when brand new audio was recorded it could transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the technique started to be implemented in many places; language translation software, for example, is also based on artificial neural networks.

Recently, people have figured out that this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated tags that describe them, and it will learn to generalize patterns that associate the images with the tags. Depending upon how you train it, this can go both ways. There are img2txt models, called vision models, that can look at an image and tell you in written text what the image contains. There are also txt2img models, which you can feed a description of an image and it will generate an image based upon it.

All the technology is ultimately the same between text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing: taking a neural network with a large dataset of inputs and outputs and training it so it generalizes patterns and thus can produce appropriate responses to brand new data.

A common misconception about AI is that it has access to a giant database and the outputs it produces are just stitched together from that database, kind of like a collage. However, that's not the case. The neural network is always trained with far more data than could ever fit inside the network itself, so it is impossible for it to remember its entire training data (if it could, this would lead to a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up "distilled" in the neural network is just a big file called the "weights" file, which is a list of all the neural connections and their associated strengths.
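A quick back-of-the-envelope calculation shows why memorization is impossible. The figures below are rough, illustrative assumptions (not specs for any particular model), but the orders of magnitude are the point:

```python
# Why the weights file can't be a hidden database of the training set.
# All figures below are rough, illustrative assumptions.
params = 8e9             # an 8-billion-parameter model
bytes_per_param = 2      # 16-bit weights
weights_tb = params * bytes_per_param / 1e12    # ~0.016 TB (about 16 GB)

tokens_seen = 15e12      # on the order of trillions of training tokens
bytes_per_token = 4      # very rough average for text
data_tb = tokens_seen * bytes_per_token / 1e12  # ~60 TB

print(f"weights: ~{weights_tb * 1000:.0f} GB, training data: ~{data_tb:.0f} TB")
# The weights are thousands of times smaller than the data they were
# trained on, so storing the dataset verbatim is physically impossible;
# only generalized patterns survive the compression.
```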

When the AI model is shipped, it is not shipped with the original dataset and it is impossible for it to reproduce the whole original dataset. All it can reproduce is what it "learned" during the training process.

When the AI produces something, it first has an "input" layer of neurons, kind of like sensory neurons; that input may be a text prompt, an image, or something else. It then propagates that information through the network, and when it reaches the end, the final set of neurons is the "output" layer, which is kind of like motor neurons in that each is associated with some action, like plotting a pixel with a particular color value, or writing a specific character.

There is a feature called "temperature" that injects randomness into this "thinking" process, so that if you run the model many times with the same prompt, you will get different results; its thinking is nondeterministic.
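In practice, temperature is usually implemented by scaling the network's raw output scores before randomly sampling from them. A minimal sketch (the scores are made up, not from any real model):

```python
import numpy as np

# Temperature scales the raw output scores (logits) before they are
# turned into probabilities and sampled. Low temperature -> nearly
# deterministic; high temperature -> more varied output.
def sample(logits, temperature, rng):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                         # for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]  # made-up scores for three candidate tokens
rng = np.random.default_rng(42)
for t in (0.1, 1.0, 2.0):
    print(f"T={t}:", [sample(logits, t, rng) for _ in range(10)])
# At T=0.1 the top-scoring token wins almost every time;
# at T=2.0 the choices spread out across all three.
```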

Would we call this process of learning "theft"? I think it's weird to call it "theft," personally. It is directly inspired by how biological systems learn, of course with some differences to make it more suited to run on a computer, but the very broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as looking at a bunch of photos as reference to learn to draw. Am I "stealing" those photos when I then draw an original picture of my own? People who claim AI is "stealing" either don't understand how the technology works, or reach for claims like it doesn't have a soul so it doesn't count, or point to differences between AI and humans which do exist but aren't relevant differences.

Of course, this only applies to companies that scrape data that really is posted publicly for everyone to freely look at, like on Twitter or something. Some companies have been caught illegally scraping data that was never put anywhere publicly, like Meta, which got in trouble for scraping libgen, where a lot of the material is supposed to be behind a paywall. However, the law already protects people whose paywalled data is illegally scraped, and Meta is being sued over this, so the law is already on the side of the content creator here.

Even then, I still wouldn't consider it "theft." Theft is when you take something from someone and thereby deprive them of its use. This would instead be piracy: copying someone's intellectual property for your own use without their permission, which ultimately doesn't deprive the original person of the use of it. At best you can say that in some cases AI art, and AI technology in general, can be based on piracy. But this is definitely not a universal statement. And personally I don't even like IP laws, so I'm not exactly the most anti-piracy person out there lol

[–] ksynwa@lemmygrad.ml 3 points 20 hours ago

I don't wanna get too deep into the weeds of the AI debate because I frankly have a knee-jerk dislike for AI, but from what I can skim of hog groomer's take, I agree with their sentiment. A lot of the anti-AI sentiment is based on longing for an idyllic utopia where a cottage industry of creatives exists, protected from technological advancements. I think this is an understandable reaction to big tech trying to cause mass unemployment and climate catastrophe for a dollar while bringing down the average level of creative work. But stuff like this prevents sincerely considering if and how AI can be used as tooling by honest creatives to make their work easier or faster or better. This kind of nuance as of now has no place in the mainstream, because the mainstream has been poisoned by a multi-billion dollar flood of marketing material from big tech consisting mostly of lies and deception.

[–] Munrock@lemmygrad.ml 14 points 1 day ago (4 children)

The messaging from the anti-generative-AI people is very confused and self-contradictory. They have legitimate concerns, but when the people who say "AI art is trash, it's not even art" also say "AI art is stealing our jobs"...what?

I think the "AI art is trash" part is wrong. And it's just a matter of time before its shortcomings (aesthetic consistency, ability to express complexity, etc.) are overcome.

The push against developing the technology is misdirected effort, as it always is with liberals. It's just delaying the inevitable. Collective effort should be aimed at affecting who has control of the technology, so that the bourgeoisie can't use it to impoverish artists even more than they already have. But that understanding is never going to take root in the West because the working class there have been generationally groomed by their bourgeois masters to be slave-brained forever losers.

[–] LeGrognardOfLove@lemmygrad.ml 7 points 1 day ago (1 children)

It's a disruptive new technology that disrupts an industry that already has trouble providing a living to people in the Western world.

The reaction is warranted, but it's now a fact of life. It just shows how stupid our value system is, and most liberals have trouble reconciling that their hardship is due to their value and economic system.

It's just another means of automation and should be seized by the experts to gain more bargaining power; instead they fear it and bemoan reality.

So nothing new under the sun...

[–] Munrock@lemmygrad.ml 6 points 23 hours ago (1 children)

It's a disruptive new technology that disrupts an industry that already has trouble providing a living to people in the Western world.

Yes, and the solution to the new trouble is exactly the same as the solution to the old trouble, but good luck trying to tell that to liberals when they have a new tree to bark up.

[–] LeGrognardOfLove@lemmygrad.ml 4 points 15 hours ago

I tried but they are so far into thinking that communism does not work ...

[–] yogthos@lemmygrad.ml 6 points 1 day ago (1 children)
[–] thefreepenguinalt@lemmygrad.ml 2 points 1 day ago (2 children)

I would argue that generated images that are indistinguishable from human art should require an AI-use disclosure. The difference between computer-generated images and human art is that computers do not know why they draw what they draw. Meanwhile, every decision made by a human artist is intentional. That is where I draw the line. Computer-generated images don't have intricate meaning; human-created art often does.

[–] yogthos@lemmygrad.ml 8 points 1 day ago (1 children)

I don't really see how a human curating an image generated by AI is fundamentally different from a photographer capturing an interesting scene. In both cases, the skill is in being able to identify an image that's interesting in some way. I see AI as simply a tool that an artist can use to convey meaning to others. Whether the image is generated by AI or any other method, what ultimately matters is that it conveys something to the viewer. If a particular image evokes an emotion or an idea, then I don't think it matters how it was produced. We also often don't know what the artist was thinking when they created an image, and often end up projecting our own ideas onto it that may have nothing to do with the original meaning the artist intended.

I'd further argue that the fact that it is very easy to produce high fidelity images with AI makes it that much more difficult to actually make something that's genuinely interesting or appealing. When generative models first appeared, everybody was really impressed with being able to make good looking pictures from a prompt. Then people quickly got bored, because all these images end up looking very generic. Now that the novelty is gone, it's actually tricky to make an AI generated image that isn't boring. It's a similar phenomenon to what we saw happen with computer game graphics. Up to a certain point people were impressed by graphics becoming more realistic, but eventually it just stopped being important.

[–] IWantToMakeProgress@hexbear.net 3 points 1 day ago* (last edited 1 day ago) (1 children)

Kind of unrelated, but if you were to start learning about AI today, how would you do it, with the goal of getting help with programming (and generating images as a side objective)?

Having checked the news for quite some time, I see AI is here to stay, not as something super amazing but as a useful tool. So I guess it's time to adapt or be left behind.

[–] yogthos@lemmygrad.ml 6 points 1 day ago (1 children)

For programming, I find DeepSeek works pretty well. You can kind of treat it like a personalized StackOverflow. If you have a beefy enough machine, you can run models locally. For text-based LLMs, ollama is the easiest way to run them, and you can connect a frontend to it; there are even plugins for VSCode, like Continue, that can work with a local model. For image generation, stable-diffusion-webui is pretty straightforward; comfyui has a bit of a learning curve, but is far more flexible.
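To give a feel for how simple the local setup is, here's a minimal sketch of querying an ollama server from Python over its local REST API (this assumes ollama is installed and running and you've pulled a model; the model name below is just an example, substitute whatever you actually pulled):

```python
import json
import urllib.request

# ollama serves a local HTTP API on port 11434 by default.
payload = {
    "model": "llama3",  # example name; use whatever model you've pulled
    "prompt": "Explain Python list comprehensions in two sentences.",
    "stream": False,    # return one JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```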

Thank you, I'll check them out.

[–] Horse@lemmygrad.ml 5 points 1 day ago (1 children)

every decision made by a human artist is intentional

the weird perspective in my work isn't an artistic choice, i just suck at perspective lol

Yes but you intentionally suck, otherwise you would just train for thousands more hours. Or be born with more talent. /s

[–] amemorablename@lemmygrad.ml 8 points 1 day ago

It can be frustrating sometimes. I've encountered people online before who I otherwise respected in their takes on things, and then they would go viciously anti-AI in a very simplistic way. Having followed the subject in a lot of detail, engaging directly with services that use AI and people who use those services, and trying to discern what makes sense as a stance to have and why, their takes would feel very shallow and knee-jerk to me. I saw for example how with one AI service, Replika, there were on the one hand people whose lives were changed for the better by it, and on the other hand people whose lives were thrown for a loop (understatement of the century) when the company acted duplicitously and started filtering their model in a hamfisted way that made it act differently and reject people over things like a roleplayed hug. There's more to that story, some of which I don't remember in as much detail now because it happened over a year ago (maybe over two years ago? has it been that long?). But the point is, I have directly seen people talk of how AI made a difference for them in some way. I've also seen people hurt by it, usually as an indirect result of a company's poor handling of it as a service.

So there are the fears that surround it and then there is what is happening in the day to day, and those two things aren't always the same. Part of the problem is the techbro hype can be so viciously pro-AI that it comes across as nothing more than a big scam, like NFTs. And people are not wrong to think the hype is overblown. They are not wrong to understand that AI is not a magic tool that is going to gain self-awareness and save us from ourselves. But it does do something and that something isn't always a bad thing. And because it does do positive things for some people, some people are going to keep trying to use it, no matter how much it is stigmatized.

[–] DamarcusArt@lemmygrad.ml 1 points 1 day ago (1 children)

Ok. Let's be real here. How many of you defending AI art have used it to make porn? Be honest with yourselves. Could something like that be clouding your views of it?

[–] amemorablename@lemmygrad.ml 7 points 23 hours ago (4 children)

I know someone who was better able to process childhood trauma with the help of AI-assisted writing. I will let that speak for itself.

[–] Arachno_Stalinist@lemmygrad.ml 11 points 1 day ago* (last edited 1 day ago) (2 children)

I believe the main issue with AI currently is its lack of transparency. I do not see any disclosure of how the AI gathers its data (though I'd assume they just scrape it from Google or other image sources), and I believe this is why many of us think AI is stealing people's art (even though art can just as easily be stolen with a simple screenshot, without any AI, and stolen art being put on t-shirts was a thing even before the rise of AI; not that this makes AI art theft any less problematic or demoralizing for aspiring artists). Also, the way companies like Google and Meta use AI raises tons of privacy concerns IMO, especially given their track record of stealing user data even before the rise of AI.

Another issue I find with AI art/images is just how spammy they are. Sometimes I search for references to use for drawing (oftentimes various historical armors, because I'm a massive nerd) as a hobby, only to be flooded with AI slop that pretty much never gets the details right.

I believe that if AI models were primarily open-source (like DeepSeek), trained on data voluntarily given by real volunteers, AND transparent enough to tell us what data they collect and how, then much of the hate AI is currently receiving would probably dissipate. Also, AI art as it currently exists is soulless as fuck IMO. One of the only successful implementations of AI in creative works I have seen so far is probably Neuro-Sama.

[–] yogthos@lemmygrad.ml 10 points 1 day ago

I very much agree, and I think it's worth adding that if open source models don't become dominant then we're headed for a really dark future where corps will control the primary means of content generation. These companies will get to decide what kind of content can be produced, where it can be displayed, and so on.

The reality of the situation is that no amount of whinging will stop this technology from being developed further. When AI development occurs in the open, it creates a race-to-the-bottom dynamic for closed systems. Open-source models commoditize AI infrastructure, destroying the premium pricing power of proprietary systems like GPT-4. No company is going to spend hundreds of millions training a model when open alternatives exist. Open ecosystems also enjoy stronger network effects, attracting more contributors than is possible with any single company's R&D budget. How this technology is developed and who controls it is the constructive thing to focus on.

[–] big_spoon@lemmygrad.ml 0 points 22 hours ago

well... in my experience, one side (people who draw, well or badly, and mostly make a living from porn commissions) complains that AI art is stealing their money and produces "soulless slop"

and the other side (gooners without money, and techbros) argues that this is the future of eternal pleasure: making lewd pics of big-breasted women without dealing with artistic divas, paying money, or "wokeness"
