I don't wanna get too deep into the weeds of the AI debate because I frankly have a knee-jerk dislike for AI, but from what I can skim of hog groomer's take, I agree with their sentiment. A lot of the anti-AI sentiment is based on longing for an idyllic utopia where a cottage industry of creatives exists, protected from technological advancements. I think this is an understandable reaction to big tech trying to cause mass unemployment and climate catastrophe for a dollar while bringing down the average level of creative work. But stuff like this prevents sincerely considering if and how AI can be used as tooling by honest creatives to make their work easier or faster or better. This kind of nuance as of now has no place in the mainstream, because the mainstream has been poisoned by a multi-billion dollar flood of marketing material from big tech consisting mostly of lies and deception.
Ask Lemmygrad
A place to ask questions of Lemmygrad's best and brightest
Ok. Let's be real here. How many of you defending AI art have used it to make porn? Be honest with yourselves. Could something like that be clouding your views of it?
I know someone who was better able to process childhood trauma with the help of AI-assisted writing. I will let that speak for itself.
Glad that helped them, and it was probably a hell of a lot cheaper than a psychologist would've been, but we aren't talking about a chatbot, we're talking about AI-generated art, using the colloquial meaning of "pretty pictures".
The so-called defenses of "AI art" in this thread seem to have been mostly about generative AI as a whole, so text is included under that umbrella, as far as I'm concerned. Also, for all the annoying (in my view) trend of calling AI pictures "AI art", writing is often considered an artform too, so...
Anyway, I don't really have time to get into a long thing right now (or at least, my version of long), but the point of my comment was "here's something that is actually happening with a real person who uses AI" instead of projections of motives onto people that ignore the content of what has been said so far in this thread. Generative AI is one area where I can confidently say I am probably way more familiar with it than most people and this implication of "clouded views" is a conversation-ender kind of comment, not something that clarifies anything.
I posted this comment in a pretty snide way, that's for sure. But I think it does sum up a lot of my views of AI art well. A lot of defenders of it aren't looking for genuine use cases, they're just demanding unlimited access to the treat machine, and that's sad to see in a space like this.
If you've read my other comments on AI pictures specifically and would like to discuss what I have said in them, then sure. But if I'm coming across as defensive, it's because of the sheer amount of people here who have presupposed what anti-AI picture people believe. If people are going to be talking past me, I'm going to be making snide comments about them. I do think a lot of people are becoming addicted to the treat generators, and as such will rationalise away their addiction and start accusations against people who "want to take their treats away", without really examining whether this, as it currently exists, is actually good for society. A lot of them seem to presuppose a kind of "platonic ideal" of AI art that just brings joy to people, rather than the capitalist treat machine it is currently being used for.
(Fair warning, I have time to do a long thing now... bear with me, or don't, up to you.)
I'd have to go find those other comments of yours, but for the moment, I will say, I kind of get it. I do remember seeing at least one comment that was sniping at anti-AI views and being uncharitable about it, and I kinda tried to just skate past that aspect of it and focus on my own read of the situation, but I probably should have addressed it because it was a kind of provocation in its own way.
But yeah, I can get defensive on this subject myself because of how often anything nuanced gets thrown out. Personally, I've put a lot of thought into what way and how I use generative AI and for what reasons, and one of my limits is I don't share AI-generated images beyond a very limited outlet (I'm not sharing them on the wider internet, on websites where artists share things). Another is that I don't use AI-generated text in things I would publish and only use it for my own development, whether that's development as a writer or like a chatbot to talk about things, etc.
Can they be "treat generators" in a way? Yeah, I guess that's one facet of them. But so is high speed internet in general. It's already been the case before generative AI kicked into high gear that people can find novel stimuli online at a rate they can't possibly "use up" all of or run out of fully because of the rate at which new stuff is being produced. The main difference in that regard is generative AI is more customizable and personal. But point is, it's not as though it's the only source of "easy treats". Probably the most apt comparison to it in that way is high speed internet itself along with the endless churn of "new content".
Furthermore, part of the reason I chose the example I did in my previous post is that while, yes, there are people who use generative AI for porn, or "smut" as some would call it in the case of text generation, the way you posed your post left essentially no way to respond to it directly without walking into a setup that makes the responder look bad. If the person says no, I don't use it for that, you could just say, "Well, I meant the people who do, and I'm sure some do." And if the person says yes, I do use it for that, you can say, "Hah, got you! That is what your position boils down to and now I'm going to shame you for use of pornography." It also carries an implication that that one specific use would cloud someone's judgment and other uses wouldn't, which makes it sound like a judgment specifically about pornography that has nothing to do with AI. That's a whole other can of worms in itself, especially when we're talking about "porn" that involves no real people vs. when it does (the second being where the most intense and justifiable opposition to porn usually is).
Phew. Anyway, I just wish people on either end of it would do less sniping and more investigating. They don't have to change their views drastically as a result. Just actually working out what is going on instead of doing rude guesses would go a long way. Or at the very least, when making estimations, doing it from a standpoint of assuming relatively charitable motives instead of presenting people in a negative light.
The messaging from the anti-generative-AI people is very confused and self-contradictory. They have legitimate concerns, but when the people who say "AI art is trash, it's not even art" also say "AI art is stealing our jobs"...what?
I think the "AI art is trash" part is wrong. And it's just a matter of time before its shortcomings (aesthetic consistency, ability to express complexity, etc.) are overcome.
The push against developing the technology is misdirected effort, as it always is with liberals. It's just delaying the inevitable. Collective effort should be aimed at affecting who has control of the technology, so that the bourgeoisie can't use it to impoverish artists even more than they already have. But that understanding is never going to take root in the West because the working class there have been generationally groomed by their bourgeois masters to be slave-brained forever losers.
It's a disruptive new technology that disrupts an industry that already has trouble providing a living for people in the Western world.
The reaction is warranted, but it's now a fact of life. It just shows how stupid our value system is, and most liberals have trouble reconciling that their hardship is due to their value and economic system.
It's just another means of automation and should be seized by the experts to gain more bargaining power; instead they fear it and bemoan reality.
So nothing new under the sun...
It's a disruptive new technology that disrupts an industry that already has trouble providing a living for people in the Western world.
Yes, and the solution to the new trouble is exactly the same as the solution to the old trouble, but good luck trying to tell that to liberals when they have a new tree to bark up.
I tried, but they are so far into thinking that communism does not work ...
it's basically this
I would argue that generated images that are indistinguishable from human art should require an AI use disclosure. The difference between computer-generated images and human art is that computers do not know why they draw what they draw, whereas every decision made by a human artist is intentional. That is where I draw the line. Computer-generated images don't have intricate meaning; human-created art often does.
I don't really see how a human curating an image generated by AI is fundamentally different from a photographer capturing an interesting scene. In both cases, the skill is in being able to identify an image that's interesting in some way. I see AI as simply a tool that an artist can use to convey meaning to others. Whether the image is generated by AI or any other method, what ultimately matters is that it conveys something to the viewer. If a particular image evokes an emotion or an idea, then I don't think it matters how it was produced. We also often don't know what the artist was thinking when they created an image, and often end up projecting our own ideas onto it that may have nothing to do with the original meaning the artist intended.
I'd further argue that the fact that it is very easy to produce high-fidelity images with AI makes it that much more difficult to actually make something that's genuinely interesting or appealing. When generative models first appeared, everybody was really impressed with being able to make good-looking pictures from a prompt. Then people quickly got bored because all these images end up looking very generic. Now that the novelty is gone, it's actually tricky to make an AI-generated image that isn't boring. It's similar to the phenomenon we saw with computer game graphics: up to a certain point people were impressed by graphics becoming more realistic, but eventually it just stopped being important.
Kind of unrelated, but if you were starting to learn about AI today, how would you go about it with regard to help with programming (with generating images as a side objective too)?
Having checked the news for quite some time, I see AI is here to stay, not as something super amazing but as a useful tool. So I guess it's time to adapt or be left behind.
For programming, I find DeepSeek works pretty well. You can kind of treat it like a personalized StackOverflow. If you have a beefy enough machine, you can run models locally. For text-based LLMs, ollama is the easiest way to run them, and you can connect a frontend to it; there are even plugins for vscode like continue that can work with a local model. For image generation, stable-diffusion-webui is pretty straightforward; comfyui has a bit of a learning curve, but is far more flexible.
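If it helps, here's roughly what getting started with ollama looks like on the command line. This is just a sketch: the model name is an example (pick whatever fits your hardware), and it assumes you've already installed ollama from their site.

```shell
# Download a code-oriented model (example model name; smaller tags exist
# for weaker hardware, e.g. a 1.5b variant)
ollama pull qwen2.5-coder:7b

# Chat with it interactively in the terminal
ollama run qwen2.5-coder:7b

# ollama also serves a local HTTP API on port 11434, which is what
# frontends and editor plugins like continue talk to:
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "Write a Python function that reverses a string",
  "stream": false
}'
```

The continue plugin basically just points at that local API endpoint, so once `ollama run` works, hooking up the editor is mostly configuration.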
Thank you, I'll check them out.
every decision made by a human artist is intentional
the weird perspective in my work isn't an artistic choice, i just suck at perspective lol
Yes but you intentionally suck, otherwise you would just train for thousands more hours. Or be born with more talent. /s
It can be frustrating sometimes. I've encountered people online before who I otherwise respected in their takes on things and then they would go viciously anti-AI in a very simplistic way and, having followed the subject in a lot of detail, engaging directly with services that use AI and people who use those services, and trying to discern what makes sense as a stance to have and why, it would feel very shallow and knee-jerk to me. I saw for example how with one AI service, Replika, there were on the one hand people whose lives were changed for the better by it and on the other hand people whose lives were thrown for a loop (understatement of the century) when the company acted duplicitously and started filtering their model in a hamfisted way that made it act differently and reject people over things like a roleplayed hug. There's more to that story, some of which I don't remember in as much detail now because it happened over a year ago (maybe over two years ago? has it been that long?). But point is, I have seen directly people talk of how AI made a difference for them in some way. I've also seen people hurt by it, usually as an indirect result of a company's poor handling of it as a service.
So there are the fears that surround it and then there is what is happening in the day to day, and those two things aren't always the same. Part of the problem is the techbro hype can be so viciously pro-AI that it comes across as nothing more than a big scam, like NFTs. And people are not wrong to think the hype is overblown. They are not wrong to understand that AI is not a magic tool that is going to gain self-awareness and save us from ourselves. But it does do something and that something isn't always a bad thing. And because it does do positive things for some people, some people are going to keep trying to use it, no matter how much it is stigmatized.
A mechanical arm in a factory back in the 80s wasn't as effective a worker as a person; it was, however, much cheaper to run than hiring a worker. Something doesn't need to be a perfect replacement for it to replace workers.
Funny how so many alleged "socialists" stop caring about workers losing their jobs and bargaining power the instant the capitalists try to replace them with a funny treat machine.
Well... in my experience, one side (people who draw, well or badly, and mostly live off making porn commissions) complain that AI art is stealing their monies and produces "soulless slop",
and the other side (gooners without money and techbros) argue that this is the future of eternal pleasure, making lewd pics of big-breasted women without dealing with artistic divas, paying money, or "wokeness".
I believe the main issue with AI currently is its lack of transparency. I don't see any disclosure of how the AI gathers its data (though I'd assume they just scrape it from Google or other image sources), and I believe this is why many of us feel that AI is stealing people's art. (Even though art can just as easily be stolen with a simple screenshot, and stolen art being put on t-shirts was a thing before the rise of AI, that doesn't make AI art theft any less problematic or demoralizing for aspiring artists.) Also, the way companies like Google and Meta use AI raises tons of privacy concerns IMO, especially given their track record of stealing user data even before the rise of AI.
Another issue I find with AI art/images is just how spammy they are. Sometimes I search for references to use for drawing as a hobby (oftentimes various historical armors, because I'm a massive nerd), only to be flooded with AI slop, which pretty much never gets the details right.
I believe that if AI models were primarily open-source (like DeepSeek), trained on data voluntarily given by real volunteers, AND transparent enough to tell us what data they collect and how, then much of the hate AI is currently receiving would probably dissipate. Also, AI art as it currently exists is soulless as fuck IMO. One of the only successful implementations of AI in creative works I have seen so far is probably Neuro-Sama.
I very much agree, and I think it's worth adding that if open source models don't become dominant then we're headed for a really dark future where corps will control the primary means of content generation. These companies will get to decide what kind of content can be produced, where it can be displayed, and so on.
The reality of the situation is that no amount of whinging will stop this technology from being developed further. When AI development occurs in the open, it creates a race-to-the-bottom dynamic for closed systems. Open-source models commoditize AI infrastructure, destroying the premium pricing power of proprietary systems like GPT-4. No company is going to keep spending hundreds of millions training a model when open alternatives exist. Open ecosystems also enjoy stronger network effects, attracting more contributors than is possible with any single company's R&D budget. How this technology is developed and who controls it is the constructive thing to focus on.