this post was submitted on 15 May 2026
15 points (89.5% liked)

Funny


Funniest content on all Lemmygrad

founded 2 years ago
 
top 48 comments
[–] CanaryFeigned@lemmygrad.ml 1 points 9 hours ago (1 children)

I decided to take matters into my own hands and ask Deepseek: "If you saw an AI generated painting in the style of Monet and it made you feel a certain way, is that feeling fake because the image you saw wasn’t produced by a human?"

This is the response I got:

Aesthetics doesn’t require agency. But the meaning we assign to art often involves a perceived human-to-human connection. AI art can be beautiful and moving, but it shifts the experience from “encounter with another mind” to “encounter with a beautiful system.” Both are real—just different. The “fakeness” charge usually comes from conflating emotional response with the story we tell ourselves about where the response came from.

Interestingly, it seems to agree with my own assessment. The problem isn't that the feeling is fake; it's that someone lied to me when I was seeking a Human-to-Human connection, not a System-to-Human one.

I often wonder why the people who defend AI don't start by justifying their own necessity, because why should I read their articles defending the System when I can just approach the System directly? Why do I need a third wheel in the relationship?


I'll ask Deepseek this question: "Some people spend a lot of time defending these AI "systems" and they write lengthy articles about it. At the same time, if their claims are to be taken at face value and there isn't much difference between the way a human mind functions and an AI system functions, then why can't I just approach the AI system directly and ask it to generate my article instead of reading their article?"

This is the response:

The force of your point is this:

If human cognition and AI text generation are near-equivalent in terms of creativity, insight, and meaning, then the defender’s own article has no special value.

> In fact, if AI can produce equally good or better arguments, then the human defender is wasting your time by acting as an intermediary. You could cut out the middleman.

Possible responses from an AI defender (and why they might fail to fully escape your criticism):

“I’m not claiming equivalence, just overlap.”

They might say:

AI and humans both process patterns, but humans have embodiment, emotion, and lived experience. My article reflects my unique perspective—not just pattern completion.

But your counter is strong here:

If that’s true, then their earlier functionalist claims about AI were overblown. They can’t have it both ways.

"You still need me to frame the question and curate the output."

They might say:

AI doesn’t know what you want to read unless you prompt it well. My article is a curated, argued position.

Your reply:

Then why not just prompt the AI yourself with my original question? You’d get a decent argument instantly.

“The process of writing the article itself is valuable to me, and reading it is a form of human connection.”

But then they’ve conceded the point:

The value isn’t just in the information but in the human source. That undermines the “AI is functionally similar” claim.


The deeper issue

Your argument reveals a pragmatic contradiction in strong AI functionalism (the view that human and AI cognition are essentially the same kind of thing).

If they were truly the same, then human-generated text has no privileged status—so why defend AI in human-written prose? Why not let AI defend itself?

But most AI defenders don't actually believe in total equivalence. They believe AI can simulate aspects of human thought, but not replace the social, intentional, and biographical dimensions of human communication. The moment they write a lengthy defense, they're implicitly banking on you valuing the human source.

So your question exposes that:

  1. If they say “read my article” → they assume human authorship matters.

  2. If they say “just ask the AI” → they admit their own article is redundant.

They can’t logically do both without contradiction.


Bottom line for you

You’re right to be suspicious. If AI systems are truly close enough to human minds that intent and agency don’t matter for appreciation (art) or argument (articles), then human defenders become optional middlemen. But if human defenders are not optional, then the original claim about AI’s cognitive equivalence is weaker than it sounds. Your question pulls the thread, and the sweater unravels a bit.

[–] yogthos@lemmygrad.ml 1 points 1 hour ago

As you point out, the whole argument of equivalence is a straw man that nobody who actually understands how these systems work believes. So, human authorship obviously does matter. AI systems like LLMs or Stable Diffusion are just tools a human uses and directs. What these tools bring to the table is the ability to draw connections over the huge data set they've been trained on, and to act as a sounding board for the human.

The way our own thinking works is that parts of our brain activate in response to words; that's why we have an internal monologue in our heads, and why we often get breakthroughs when we talk through a problem with another person. Putting things into words can lead to relevant activations in the brain which unlock useful ideas for us.

An LLM slots into this perfectly because its outputs can stimulate our brains the same way. When we read something an LLM wrote, on a subject we are versed in, we can get insights into our own thinking about the subject by considering the output. Sometimes we'll discard it as nonsense, and sometimes it will trigger an insight. It doesn't mean the LLM is doing any thinking of its own, just that it finds statistically likely connections between ideas that we might not have considered on our own because we didn't have the right words to make the leap.

[–] Munrock@lemmygrad.ml 4 points 16 hours ago

Reminds me of how you can post a description about a Capitalist state, without naming it, in the vernacular that people are trained to use for AES states (regime, totalitarian, etc) and the vast majority of them will assume you're talking about North Korea. We are more like LLMs than we'd like to admit.

[–] Amnesigenic@lemmy.ml 1 points 14 hours ago (2 children)

Random sampling of people on Twitter, no evidence of any meaningful qualifications among responders, this is methodologically worthless AI propaganda

[–] ComradeSalad@lemmygrad.ml 4 points 13 hours ago

You mean that a random collection of twitter engagement bots isn't indicative of anything?! Shocking.

[–] amemorablename@lemmygrad.ml 2 points 12 hours ago (1 children)

I disagree, I think it does a good job of illustrating how reactive and poorly thought out anti-AI positions can be. Not all anti-AI views are like this, and not all of them are invalid, but it is definitely a thing that happens, though it mainly seems to be an online thing, this tendency to take any opportunity to put AI down. Typically when AI comes up offline, I see much more mixed takes on it and more openness if I go into details I understand about how it works.

[–] Amnesigenic@lemmy.ml 1 points 12 hours ago (1 children)
[–] amemorablename@lemmygrad.ml 3 points 11 hours ago

It is unironically a cool story, and the moral of it is that nuance is nearly always more beneficial than reactivity. Reactivity has its place in moments of crisis, in the short term, but it can't sustain itself without the nuance of plodding organization behind it. And the moment you dig into that plodding nature of organization, the need for nuance becomes self-evident. Because no matter what outcome you want most, or think you want most, the details won't bend so easily, and the only way to get them to bend is to pick apart their dynamics. That's why socialism is a science, not only the energy of upset with the existing system. All the rage in the world won't make a firearm hit its mark. Rage alone is impotence and disconnection from political power. But organization alone, without motivating energy, will struggle to sustain itself when it inevitably encounters obstacles. So both can play their part, provided they work together. Disjointed individualist upset won't change anything, though.

[–] Imnecomrade@lemmygrad.ml 4 points 1 day ago* (last edited 1 day ago) (1 children)

This reminds me of a similar experience on lemmy.ml, where yogthos posted a blurry picture with meme text, and another person and I first believed it was AI because the weird context of the image looked unrealistic. Then a couple of other people questioned whether it was really AI, and a reverse image search led them to a post of the image that existed pre-AI. I did some extra digging and found related images on a firefighter magazine website that included the location and time the event occurred, and I deleted my previous comment upon discovering this.

I do try to resist commenting on a subject before doing an investigation, but sometimes I am guilty of being too quick to make an assumption. With AI, though, it's really hard to take any media as truth anymore, so I am very skeptical of everything I see at this point.

[–] yogthos@lemmygrad.ml 8 points 1 day ago

Same, AI generated media is getting to the point where it's very difficult to tell if something is real or not. It's going to be an interesting adjustment period where we can no longer take video or photographic evidence at face value. I can see cameras using something like RSA going forward: the camera signs the image with a cryptographic signature using its private key, and then anybody can verify that the image came from the camera using the public key published by the manufacturer. I actually recall reading a paper a little while back which explained how this could work, and how you could embed signature data in a way that would survive retouching software too.
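The sign-then-verify flow described above can be sketched with textbook RSA. Everything here is a toy: the primes are tiny demo numbers and the variable names are made up for illustration; a real camera would hold a 2048+ bit key managed by a vetted crypto library, not hand-rolled math.

```python
import hashlib

# Toy textbook-RSA key pair. A real camera would embed a much larger
# private key, and the manufacturer would publish the public half (n, e).
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent
d = 2753         # private exponent, satisfies e*d ≡ 1 (mod (p-1)*(q-1))

def sign(image_bytes: bytes) -> int:
    """Camera side: hash the image, sign the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(digest, d, n)

def verify(image_bytes: bytes, signature: int) -> bool:
    """Anyone's side: recompute the digest and check the signature
    using only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == digest

photo = b"raw sensor data from the camera"
sig = sign(photo)
print(verify(photo, sig))            # True: the genuine signature checks out
print(verify(photo, (sig + 1) % n))  # False: an altered signature fails
```

The point is just the asymmetry: only the camera can produce a valid signature, but anyone with the published public key can check it.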

[–] amemorablename@lemmygrad.ml 7 points 1 day ago (1 children)

Since people have already commented on this regarding AI, I'll mention something else: I see this type of thing as a common practice on the western English-speaking internet in general, regardless of subject matter. I say western internet because it's what I'm familiar with; I don't know whether it applies elsewhere.

It goes something like this: Person A says X (X is posited/presumed to be true by Person A's phrasing). Responders nearly all reply as if X is true, without question. As if the important thing is not accuracy of information, but being among the first to be an authoritative commentator on the subject. My hypothesis is that this is mainly a western thing because of the cultural superiority / supremacy view (the idea being that this view is feeding into people's need to appear like they can casually speak with authority on anything at the drop of a hat, no matter what the question is).

Were people prioritizing accuracy instead of rushing to have an authoritative opinion, they may be more apt to pause and go, "Hold on a second, is this person telling the truth? After all, many untrue things are said on the internet. Maybe I should look up what a Monet looks like for reference and compare." The alternative, in other words, is investigation rather than people believing they can "cudgel their brains" to "develop an idea".

[–] yogthos@lemmygrad.ml 6 points 1 day ago (1 children)

The way language and culture shape our thinking is a really interesting subject to consider. So much of our understanding of the world is implicit in how we communicate with one another, and it's something we tend to take completely for granted. That's one good reason to learn different languages: they directly expand your mental toolkit.

[–] amemorablename@lemmygrad.ml 5 points 1 day ago

Yes, for sure! I tend to end up thinking of it mainly in a fucked up context because of what colonialism did with it (like punishing people for using words in their native language), but there can be a positive side to it too. Learning to go outside a colonized language to re-embrace a native one; learning multiple languages to better understand different perspectives and cultures; learning to go outside a native language that has been part of imperialist/colonial power structures in order to see a language that hasn't.

[–] Marat@lemmygrad.ml 3 points 2 days ago (1 children)

Hmm, yes, those replies are pretentious, I suppose. But it's Twitter, so your point is more than a little muted.

And there is a much more salient point to be made here: its converse.

Let's say AI had made this. OK. What has the world gained? An exact copy of a Monet painting.

If AI had actually generated another painting in the style of Monet, what would that do?

If an art student had shown this to me, I would say they have a good grasp of how to paint, but they just copied a Monet painting. Anyone can copy. Was Monet good because he copied other people?

And, unfortunately, there is no one standard of what makes good art. In a previous discussion I said that there is an ephemeral aspect to art, and I stand by that. Even Socialist Realism has that aspect, even if the meaning is clearer than in more avant-garde works.

[–] yogthos@lemmygrad.ml -1 points 2 days ago (1 children)

The appreciation of art is firmly in the eye of the beholder. If you see a painting and it evokes some emotion, or triggers an interesting thought or feeling, why does it matter how it was created? Most of the time you don't have any personal connection with a human artist, and you have no idea what they were thinking when they created a piece of art. All you know is how it makes you feel. And agency isn't even required. You can appreciate a beautiful sunset or a landscape that wasn't created with any intent by anyone. So, if you saw an AI generated painting in the style of Monet and it made you feel a certain way, is that feeling fake because the image you saw wasn't produced by a human?

[–] Marat@lemmygrad.ml 0 points 2 days ago (1 children)

I'll admit my previous comment was guilty of this too, but you're looking at this the wrong way.

The purpose of art in a society is to be new or innovative in some way, shape, or form. Individually, yes, someone can be satisfied watching sunsets or looking at the same Monet painting all the time. But what art affects society? The new, the unknown, the undiscovered.

Let's go back to our art student for a moment. Will they be the new Monet by making paintings like Monet? No. It can look good, and be good, but it won't affect society or win them higher praise beyond their technical ability.

I can write a technically very good passage or book or whatever. That does take some skill, certainly. But if writing was boiled down to copying whatever was technically good, what writing would there be?

This can extend to even the technical fields. Why do engineers exist? Because you cannot simply copy and paste a solution to every problem. You can certainly have common tools and techniques, same as art, but for society to progress you inherently need the new.

Art presents the same problem as evolution, because art is evolutionary. As much as people divide art history into epochs and ages, the creation of art is much like the evolution of a species: small changes over a longer period of time than people expect, quantitative changes making eventual qualitative changes.

Ergo, that's the problem of AI art. I know no one who wouldn't find a sunset beautiful; there have certainly been only a very small number in the whole history of the world. But are we satisfied with just recursion? Would you be satisfied if every museum were simply representations of sunsets, or pictures of sunsets? I doubt it.

[–] yogthos@lemmygrad.ml 2 points 2 days ago (1 children)

I think you might be misunderstanding how image generation models like Stable Diffusion actually work here. They encode a compressed statistical model of the patterns that exist across the billions of images the training data was broken down into. When you give Stable Diffusion a text prompt, it doesn't go find a relevant image and regurgitate it. What actually happens is that it starts from random noise and iteratively denoises it in a latent space, guided by the probability distribution it learned during training.

And that process is not actually all that different from the way humans imagine or combine visual ideas. We absorb visual patterns from countless examples over our lifetimes and internalize them. When you draw a picture of a cat you do not pull up a specific cat image you memorized from a photo to copy. Drawing is a process of generating novel combinations based on general patterns we associate with the subject.

Similarly, the weights of a model encode statistical regularities: pixel co-occurrence, shapes, textures, object relationships, and higher-level visual concepts. When it generates an image, it combines those patterns in ways that were never present in the training data. That is how Stable Diffusion produces entirely novel images that have never been seen before, and why it can be prompted to combine concepts from different domains in creative ways.
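As a loose illustration of the "learned statistics, not retrieval" point, here is a toy sketch: it fits per-pixel statistics to a tiny made-up "training set" and then samples a brand-new "image" from them. This is only an analogy (the data, the 4-pixel images, and the mean/spread "model" are all invented for the demo); real diffusion models learn a vastly richer distribution and sample by iterative denoising in a latent space.

```python
import random

random.seed(0)  # deterministic for the demo

# A tiny "training set" of 2x2 grayscale images, flattened to 4 pixels.
training = [
    [0.1, 0.9, 0.2, 0.8],
    [0.2, 0.8, 0.1, 0.9],
    [0.0, 1.0, 0.3, 0.7],
]

def fit(images):
    """'Training': reduce the data to per-pixel mean and spread.
    This stands in for the learned statistical model."""
    count = len(images)
    means = [sum(img[i] for img in images) / count for i in range(4)]
    stds = [(sum((img[i] - means[i]) ** 2 for img in images) / count) ** 0.5
            for i in range(4)]
    return means, stds

def sample(means, stds):
    """'Generation': draw each pixel from the learned statistics,
    producing an image that appears nowhere in the training set."""
    return [random.gauss(m, s) for m, s in zip(means, stds)]

means, stds = fit(training)
new_image = sample(means, stds)
print(new_image not in training)  # True: novel output, not a stored copy
```

Even this crude version shows the shape of the argument: what gets stored is a distribution over patterns, and generation is sampling from it, not looking up an original.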

[–] Marat@lemmygrad.ml 1 points 1 day ago (1 children)

This doesn't really change the argument, and for your information, I do know how so-called "AI" works.

In fact I'd argue that's the crux of my point.

What do these probability functions do? They create emphasis. This, of course, is not all of what art is; there is no single definition of what art is, by its nature. However, emphasis is a large part of what makes a style unique.

Let's say I were to ask an AI for a Soviet-style propaganda poster. What would I get? Most likely, a poster with harsh contrasting colours and an emphasis on deep or bright reds and yellows. Ignore the content; it can be anything. I could make one with Joe Biden or Donald Trump or Augusto Pinochet or Netanyahu, but that's simple enough. It is probabilistically determining that this figure would be in the center or near-center of the image, doing some action.

Objectively it is making something never seen before. That student from before did too, didn't they? I find it highly unlikely that, even copying Monet's style, they somehow reproduced exactly something someone had made before.

But that's not the point. Just like how when looking at modern society you need to term labour in "socially necessary labour time," here too you need to look at it socially. Yes, objectively, these models are making new things, and can even combine them to make new things. But I'm struck by a quote Dan Olsen had about Doug Walker, saying, "He is a fundamentally incurious person who thinks a groundbreaking idea is 'what if Batman met Mario.'"

To put it simply, AI art can't experiment. Obviously you can regenerate a prompt. Let's say you only had training data of socialist realist paintings. Would you be able to make "Beat the Whites with the Red Wedge"? Or vice versa. Perhaps that's a strawman. What if you gave it training data of all the non-cubist paintings to that point, plus training data of real-world images: would it be able to make a cubist painting?

There's also a consideration to be made: could it make an inverse? Many movements were made as the inverse of a movement of their time. Realism itself was an inverse of the romantic movement. Could I ask an AI to generate the inverse of a romantic painting? Or the inverse of an anime style?

"Ah-hah" you say "well you can ask it to generate it with specific charecteristics." Well, what would those characteristics be? Maybe you could ask it to generate the inverse of anime charecters, small eyes, textured skin, etc., but what's the inverse of the style?

Does AI understand humans? That's the key question. I don't mean this metaphysically; I mean it literally.

Can someone who hasn't seen the world, hasn't seen other people, hasn't lived, produce new art? If you put a homunculus in a box and showed it all the art of humanity, would it be able to make something new?

I'd say no.

Let's go back to your sunset example. When someone draws a sunset, do they simply copy it? Ignoring the difficulty that would actually entail, no. They stylize it. They emphasize, they warp it, not toward objective reality but toward what they feel.

If you strapped a camera onto a person and recorded every frame of their life from birth to death, could you get it to make an oil painting of a sunset?

Sure, a human wouldn't be able to either, but they could make their own "training data," so to speak. They can make it, change it, improve it. Conversely, AIs can't train on themselves, or else the recursion gets even worse.

[–] yogthos@lemmygrad.ml 2 points 1 day ago (1 children)

I get what you are saying and I think you are making a good point about the limits of using AI as a tool to produce art as it exists right now. But here is the thing, human art does not come from a vacuum either. It comes from material conditions, from culture, from all the art we have seen before and our experience of the world we live in. That is also a form of training data, just more varied and embodied than what the LLM gets.

You say a human can make an inverse of a style like realism being an inverse of romanticism. But that inverse was not random inspiration that came from some separate realm of Platonic objects. It came from a specific social and political context the artist is embedded in. Realism emerged because artists were responding to industrialization, to photography, to new ideas about truth and observation. They did not just think what if I do the opposite of romanticism in a vacuum. New styles are a reaction to the material conditions people experience. These are the same conditions that shaped their training data of the world.

Now imagine you take that homunculus in a box and show it all the art of humanity. You say it could not make something new, but could a human raised in complete sensory isolation with only a library of art make something truly new? That's also highly unlikely, because novelty in human art comes from bumping up against reality, from making mistakes, from having a body that feels cold and hunger. On top of that, it is ultimately the human user who generates the prompts. The idea to create something doesn't originate within the model; it's just a medium somebody uses because they had a thought in their head and wanted to visualize it in some way.

The issue is not that AI cannot experiment either; you could create a system where an agent produces images, people rate them on how much they like them, and it learns the style people appreciate most. This type of thing has already been done, by the way. The key gap is that the training data these models are fed is narrow and static while ours is dynamic and embodied. But say you gave an AI a robot body and let it wander the world for twenty years, letting it accumulate embodied experiences: its outputs would end up having the same material basis as human art. And it wouldn't be because it has a soul; it would simply be the product of a richer set of statistical patterns to draw from and new ways to contextualize them.

You say AI cannot train on itself without recursion getting worse. That is true for current methods, but humans can also get stuck in recursive loops. Think of artists who just copy their own style forever which is a form of recursion too. The ones who do break out do so by encountering new experiences.

So I do not think the difference is of kind, but rather one of breadth and embodiment. The human advantage is that we are embedded in the messy physical and social world. But we should not treat that as some sort of a metaphysical miracle.

[–] Marat@lemmygrad.ml 4 points 1 day ago (1 children)

The homunculus thing was simply to avoid the logical problems of somehow having someone who is both completely sensory-deprived and able to draw and such.

No, a completely sensory-deprived human with all the knowledge of art would not be able to make something new. That's exactly the point I'm trying to make. If an AI cannot experience, then it cannot make anything new, just as a human who cannot experience cannot.

If you want to talk about the future, then that may be a different matter. But it would take a big leap in technology.

[–] yogthos@lemmygrad.ml 3 points 1 day ago (1 children)

But my point is that the AI already has a lot of existing human art baked into it, and the human driving it provides the genuinely novel experience you talk about. The idea for a prompt comes from the human user of the system; the system then combines all its training to produce an image, and the human reviews it, decides whether it matches what they intended, and iterates. So that's precisely where novelty comes from with the technology as it works today. What the model does is automate the mechanical process of producing the image. The person doesn't need to learn how to paint or use Photoshop; they just bring their creativity to the table.

[–] Marat@lemmygrad.ml 4 points 1 day ago* (last edited 1 day ago) (1 children)

Art is a lot more than just the concept of what you want. I'm not saying something has intrinsic value just by being worked on, no matter its content, but fundamentally, by its very nature, you cannot mathematically/statistically create art that appeals on any higher level of value. Maybe now it's good, but if this technology doesn't improve rapidly, I'm sure in 5 or 10 years everyone is going to be sick of it. Edit: actually, people are already sick of it. On social media it'll persist because the bar is through the floor and it's free, but otherwise I doubt anything more is going to come of it.

Also, saying humans can get in a recursion loop is fundamentally different from AI. You're a Marxist; you should know the difference. Corruption can occur in socialist countries; corruption will occur in capitalist ones. It's the same here. Humans, individually, can get stuck in loops, but humanity rarely stays stuck in one for long. AI fundamentally will get stuck, since it can't cannibalize its own creations.

At the very least, it is not you or me making this art. If we asked someone else to make a concept for us, that's not us doing the work. It's the programmers who make these AIs who are, effectively, making the art. They at the very least make the machine that makes the art.

[–] yogthos@lemmygrad.ml 4 points 1 day ago (1 children)

As a materialist, I absolutely do think that art is something you can mathematically/statistically create. Thinking otherwise basically involves believing in magic. As a Marxist, I think humans are a product of the material world, and that our brains evolved to do computation through selection pressures in the environment. In my view, the brain is a biological computer that evolved to let us interact with our environment more predictably, which created obvious survival advantages for us. I don't think what the brain does is exclusive to biology, and I see no reason why artificial systems would not be able to do similar types of computation.

I don't really see any evidence to support the argument that what the brain does is fundamentally different from AI. There's certainly a huge difference in complexity, and I'm not suggesting that LLMs are qualitatively comparable to human brains here. But on a fundamental level, I think both are inference engines that build an internal model through reinforcement training. I'm also not sure what you're basing the argument on that AI must fundamentally get stuck in loops.

And the programmers making these AIs are not making this art any more than Adobe makes art by creating Photoshop. The art is the product of human intention and the tool. If we asked someone to draw an idea we had, to a specification we came up with, then we would be collaborating on art.

[–] redLinuxCowboy@lemmygrad.ml 1 points 1 day ago (1 children)

I think that art requires suffering. If there were a way to make an art piece without suffering or spending any time on it, it would be worthless.

I'm not even talking about studying it, although that also requires a lot of effort, but about the process of making art: trying again and again, making errors, fixing them, making more errors, until it's something you're at least fine with. This applies not only to drawing but to every artistic expression. I think that if we remove that part from the process of creating art, we strip it of any worth.

That's why hand-made stuff is still relevant. A hand-made mug might be worth more to you because you made it, or someone you love made it, or you just bought it at a markup from some weird, funny-looking guy who makes mugs, even though there are factory-made mugs that are objectively better at an objectively better price. So maybe AI does the same thing as the human brain but faster, just like factory machines can do things better than humans. But the time, effort, and suffering of making something is what gives art any value, in my opinion. Any kind of work requires effort, but while in production faster and cheaper is the desired thing, art has different goals; I think the human effort to make a pretty picture or depict an idea is more desirable than whatever AI might provide.

Although I do agree that if your goal is to depict your idea and you lack the skills required but really want to, using AI is a way, especially if you lack the time and/or will to commit to learning art and the finances to commission an artist. But if I see a painting, I can be impressed by the artist learning all the skills required, all the details or even the lack of them; it is cool. If I see an AI-generated image, the best I can say is that it might be a neat idea; I couldn't possibly comment on the execution, just as I couldn't comment on a factory-made mug. (Well, I could say that the factory is a cool thing too, and that AI generating pictures is pretty cool too; I just can't agree that it can be considered art.) In art, as I see it, the skill of the author is equally important as the idea, if not more so. Because whatever idea you might put into it, people will interpret it however they see it, unless you just spell it out, and if you do, what's even the point?

Sorry if it's a bit incoherent; I'm also thinking while writing.

[–] yogthos@lemmygrad.ml 3 points 1 day ago

I mean that's where the whole term artisanal comes in. People place more value on things that were crafted by hand end to end as opposed to manufactured using an industrial process. That's why I don't really see image generation replacing traditional art. People still paint on canvas even though photoshop exists, and we tend to place more value on a physical painting than a digital one.

I'd argue AI generated content serves a different purpose from high art though, same way a mug made in a factory coexists with artisanal mugs you'd attach sentimental value to. For example, take the whole Iranian Lego videos that turned out to be extremely effective propaganda. Nobody would argue they're high art you'd display in a gallery, but they are effective, they evoke an emotional response, and they convey the message in a clever and creative way.

Before AI tools were available, the US effectively had a monopoly on such messaging because it was a highly labor intensive process that could only be accomplished by skilled 3d artists. It took a company the size of Pixar to produce something equivalent. But now, this process has been democratized, and a small studio in Iran can punch way above their weight to challenge imperial propaganda.

I think that's where the real value for this stuff comes in. Iran proved that we can use it for effective messaging. We can make memes and catchy short videos that promote our ideas and do it in a way that connects with people.

[–] CriticalResist8@lemmygrad.ml 3 points 2 days ago (1 children)

Reminds me of wine tasters who pick apart all the nuances of the wine and then crumble when tested in a double-blind experiment

[–] yogthos@lemmygrad.ml 3 points 2 days ago

lol that's exactly what this is

[–] cfgaussian@lemmygrad.ml 3 points 2 days ago* (last edited 2 days ago) (4 children)

Art appreciation has always been a class signifier, a way for rich snobs to show to other rich snobs that, unlike the unwashed masses, they "get it". It reinforces class boundaries on the cultural level. And the educated middle class apes this behavior in an attempt to signal higher social status. That's why so much of art criticism is entirely ad hoc nonsense, pseudo-objective rationalization for an entirely subjective assessment. It's as pretentious as when you hear a wine snob talk about "notes of X, Y or Z" when there is zero of that ingredient actually in the wine or any of the manufacturing process (the foodie equivalent of homeopathy).

[–] davel@lemmygrad.ml 6 points 2 days ago* (last edited 2 days ago) (1 children)

It’s as pretentious as when you hear a wine snob talk about “notes of X, Y or Z” when there is zero of that ingredient actually in the wine or any of the manufacturing process

Not to defend wine snobs, but they aren’t talking about X, Y, or Z literally being in the wine. They’re talking about flavors that a wine has in common with X, Y, or Z. We don’t have a rich vocabulary for flavors like we do with colors, so analogies are used (and anyway, we do the same with some colors, like lilac and salmon).

[–] cfgaussian@lemmygrad.ml -1 points 2 days ago

Ok, fair point, except that for regular people it still all just tastes like wine.

[–] MasterBlaster@lemmygrad.ml 1 points 2 days ago* (last edited 2 days ago)

Great comment, it resonated with me. It brought me back to being a tryhard teenager and the type of faux-intellectualism I tried to ape. Back then it seemed so weird and pointless, like I couldn't understand the broader context. I wanted to be like the intellectuals I looked up to, but couldn't find an end to what they were doing.

[–] big_spoon@lemmygrad.ml -1 points 2 days ago

fascinating take...

[–] yogthos@lemmygrad.ml -1 points 2 days ago

I couldn't have put it better

[–] Belly_Beanis@hexbear.net 1 points 2 days ago (1 children)

JFC that's arguably his most famous painting.

[–] DonLongSchlong@lemmygrad.ml 0 points 2 days ago (2 children)

Did he not paint, like, a billion of those water lily paintings? I'm pretty sure some of his paintings are the exact same, just with some colours switched out lol

[–] CriticalResist8@lemmygrad.ml 3 points 2 days ago (1 children)

he did, and while the purple water lilies actually existed in his garden, he painted them with very bright blues and purples because his cataracts made those colors weaker in his vision, so to him they looked normal. when he finally got surgery he was appalled at what he had painted and in his rage destroyed some of his paintings from that period.

[–] dazaroo@lemmygrad.ml 0 points 1 day ago

The backstory of the pieces makes looking at them more interesting

[–] Belly_Beanis@hexbear.net 2 points 2 days ago

Yeah, over 250. The one in the OP is specifically famous because it's part of a Munich art museum's collection of modern art, the Neue Pinakothek. The museum is famous for having had a lot of its art confiscated by the Nazis during WWII. It's home to a lot of Picassos, Van Goghs, Monets, Manets, and works by Max Liebermann.

[–] big_spoon@lemmygrad.ml -1 points 2 days ago (1 children)

it triggers me a little, but when I remember that PragerU video talking about art... PragerU was kinda right?

[–] yogthos@lemmygrad.ml -1 points 2 days ago

even a broken clock... 🤣

[–] Cletus@lemmygrad.ml 0 points 2 days ago (4 children)

So the art isn't that good after all, or everyone replying has no clue what they're talking about and is easily influenced. Or both.

[–] Amnesigenic@lemmy.ml 2 points 14 hours ago

It's a random group of whoever responded first on Twitter; option 2 seems likely

[–] Marat@lemmygrad.ml 5 points 1 day ago

Or people are just paranoid about the AIs made by pedophile billionaires and have a negative reaction to them

So the art isn't that good after all

many such cases

[–] bennieandthez@lemmygrad.ml -1 points 2 days ago (1 children)

lol that's a hilarious experiment

[–] yogthos@lemmygrad.ml -2 points 2 days ago