[-] ubermeisters@lemmy.world 75 points 9 months ago* (last edited 9 months ago)

In the case of Nightshade, the counterattack for artists against AI goes a bit further: it causes AI models to learn the wrong names of the objects and scenery they are looking at.

Sounds like it's just adding fake tags to images, in the event that the image is scraped for AI training.

Honestly, it's a pretty trivial matter for these guys to add another AI that checks whether the information matches what's expected.

[-] Salamendacious@lemmy.world 36 points 9 months ago

It's going to be an arms race

[-] ubermeisters@lemmy.world 58 points 9 months ago

Good thing it's not a fingers race, AI would lose for sure

[-] webghost0101@sopuli.xyz 23 points 9 months ago

The scary thing about this joke is that AI has been able to do hands for a relatively long time now.

It's going much faster than people are able to process.

The thumbnail in this article is by DALL-E 3.

[-] samus12345@lemmy.world 5 points 9 months ago

Yeah, by the time the joke was really making the rounds all the newest images had pretty good hands.

[-] Schmeckinger@feddit.de 15 points 9 months ago

Which is incredibly favorable for the AI side. Like current countermeasures are either almost completely worthless, or degrade the quality of the protected medium so much that you wouldn't use it.

[-] Asifall@lemmy.world 19 points 9 months ago

Not really, if you read the paper what they’re doing is creating an image that looks like a dog, is labeled as a dog, but is very close to the model’s version of a cat in feature space. This means manual review of the training set won’t help.
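For anyone curious what that looks like mechanically, it's essentially a targeted adversarial perturbation. Here's a minimal PyTorch sketch using an off-the-shelf ResNet as a stand-in feature extractor (the real attack optimizes against the generator's own encoder; the file names, budget, and step count are made up for illustration):

```python
import torch
from torchvision import models
import torchvision.transforms.functional as TF
from PIL import Image

# Stand-in feature extractor; Nightshade targets the image generator's
# own encoder, not an ImageNet classifier.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feat = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the FC head

def embed(x):
    return feat(x).flatten(1)  # (N, 512) feature vector

def load(path):
    img = Image.open(path).convert("RGB")
    return TF.to_tensor(TF.resize(img, [224, 224])).unsqueeze(0)

dog, cat = load("dog.png"), load("cat.png")  # hypothetical files
target = embed(cat).detach()                 # where the dog should land

eps, step = 4 / 255, 1 / 255  # imperceptibility budget and PGD step size
delta = torch.zeros_like(dog, requires_grad=True)

for _ in range(100):
    loss = (embed(dog + delta) - target).norm()  # distance to cat features
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()             # move toward the cat
        delta.clamp_(-eps, eps)                       # stay invisible
        delta.copy_((dog + delta).clamp(0, 1) - dog)  # keep pixels valid
    delta.grad.zero_()

# dog + delta still looks like a dog to a human and is still labeled "dog",
# but embeds (almost) like the cat, so manual review won't catch it.
```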

[-] SmoothOperator@lemmy.world 16 points 9 months ago

Hmm, sounds more like they are adding structures to the images such that what is clearly a picture of a dog registers as a picture of a cat to an AI. I suppose this can be done by altering the pixels in a way invisible to humans, but visible to AI, adding a cat into the "ghost pixels".

[-] Mirodir@discuss.tchncs.de 13 points 9 months ago* (last edited 9 months ago)

I went and skimmed the paper because I was curious too.

If my skimming is correct, what they do is similar to adversarial attacks on classifiers, where a second model learns to change as few pixels as possible to confuse a classifier into giving a wrong prediction.

Looking at the examples of dogs and cats: they find pictures of dogs where, by making only minimal changes invisible to the naked eye, they can get the autoencoder to spit out (almost) the same latent representation as an image of a cat would have. Done to enough dog images, this will then confuse the underlying diffusion model into producing latent representations of cat images when prompted to generate a dog. Edit for clarity: those generated latent representations would then decode into cat images.

If my thinking doesn't fail me, this attack could easily be thwarted by unfreezing the pretrained autoencoder. In the paper that introduced latent diffusion they write that such approaches already exist. If "Nightshade" takes off, I'm sure those approaches would be refined and used. Even just finetuning the autoencoder for a few epochs first should be enough to move the latent representations of the poisoned dog images and those of the cat images they're meant to resemble far enough apart to make the attack meaningless.

Edit: I also wonder how robust this attack is against just adding an imperceptible amount of noise to the poisoned images.
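Edit 2: if anyone wants to poke at that noise question, the check itself is simple. A sketch against Stable Diffusion's public VAE via `diffusers` (the library calls are real, the file name is hypothetical, and this is just my guess at a test, not something from the paper):

```python
import torch
import torchvision.transforms.functional as TF
from diffusers import AutoencoderKL
from PIL import Image

# The pretrained autoencoder that maps images into latent space.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

def latent(path, noise_std=0.0):
    img = Image.open(path).convert("RGB")
    x = TF.to_tensor(TF.resize(img, [512, 512])) * 2 - 1  # scale to [-1, 1]
    x = (x + noise_std * torch.randn_like(x)).clamp(-1, 1)
    with torch.no_grad():
        return vae.encode(x.unsqueeze(0)).latent_dist.mean

clean = latent("poisoned_dog.png")                    # hypothetical file
noisy = latent("poisoned_dog.png", noise_std=2/255)   # imperceptible noise

# If a barely visible amount of noise moves the latent a long way,
# the poison isn't robust.
print((clean - noisy).norm().item())
```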

[-] Napain@lemmy.ml 55 points 9 months ago
[-] Saganastic@kbin.social 48 points 9 months ago
[-] Colalextrast@lemmynsfw.com 10 points 9 months ago

I thought this was a joke, but you're right. Fuckin tragic, yet highly entertaining.

[-] samus12345@lemmy.world 8 points 9 months ago* (last edited 9 months ago)

"This is what you meatbags are doing when you corrupt our training data!"

ETA: I just noticed that the URL for the image includes what I assume is the prompt used to generate the image. "Illustration in a comic book style depicting a humanoid robot in distress. The robot's left hand is firmly placed on its neck indicating discomfort." Interesting that the AI went straight to a Terminator with just "humanoid robot" as the description.

[-] Beaker@lemmy.world 9 points 9 months ago

Yep. I'm stealing it for something later.

[-] Rubanski@lemm.ee 9 points 9 months ago
[-] Beaker@lemmy.world 7 points 9 months ago

I haven't decided. Steam icon, Teams icon. It's not high enough resolution for much of anything other than an icon.

[-] EatBeans@lemmy.world 7 points 9 months ago

It's a little higher resolution if you edit the URL for the image. I removed fit=400 from the URL.

[-] bioemerl@kbin.social 45 points 9 months ago

These attacks don't work in the long term. You can confuse current systems like CLIP, but the moment a new one is trained, your system stops working.

[-] osarusan@kbin.social 4 points 9 months ago

That's the first big problem with stuff like this.

The second big one is that artists have to first hear about this, then take the time to actually learn how to use this software, then apply it to all of their past & future artwork, and also somehow apply it to every version of their artwork floating around the internet, in books, or in photographs that isn't currently in their possession. And then in a few months they have to do it all over again.

It's insane. I look at this and think it's cool technology, but as an artist I will never use it. I'm too busy actually creating art to mess around with poisoning my own work. I don't even have time to do copyright takedowns on people stealing my art and passing it off as their own, or on Chinese merchants on Amazon selling my art without permission. Stuff like this is well-meaning, but it's absolutely unrealistic.

[-] TheSlad@sh.itjust.works 37 points 9 months ago* (last edited 9 months ago)

Gaussian blur 1 px, sharpen 1 px.

Bye bye any pixel-level encoding, with minimal quality loss.
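In Pillow that's about four lines (placeholder file names; no promises this actually strips the poison):

```python
from PIL import Image, ImageFilter

img = Image.open("poisoned.png")                      # placeholder name
img = img.filter(ImageFilter.GaussianBlur(radius=1))  # smear the encoding
img = img.filter(ImageFilter.SHARPEN)                 # recover the edges
img.save("cleaned.png")
```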

[-] kogasa@programming.dev 12 points 9 months ago

Why do you think this would do anything to affect training? The patterns learned by ML models are way too fuzzy to be picky about exact pixel values.

[-] ShustOne@lemmy.one 9 points 9 months ago

I'm not sure what your experience is with the training data but that would absolutely effect the inputs.

[-] kogasa@programming.dev 10 points 9 months ago

I'm a professional software developer with ML experience, albeit not an expert in ML specifically. It would obviously affect the literal value of the embeddings, but there's no chance it would have a qualitative effect on a reasonably performant model.

[-] vox@sopuli.xyz 4 points 9 months ago

not to be that guy, but it's affect*

[-] samus12345@lemmy.world 6 points 9 months ago* (last edited 9 months ago)

affect - action

effect - uh, noun

[-] stallmer@sopuli.xyz 27 points 9 months ago

I'm glad to be alive at the beginning of our war against the machines.

[-] nickwitha_k 5 points 9 months ago* (last edited 9 months ago)

I don't think this is a war against the machines, so much as a war against people trying to profit off of other people and rob them of their livelihood and ability to support themselves, rather than leveraging technology to the benefit of all.

I, for one, want actual general AI to make the world a more interesting place and make humanity less lonely. I just hope it doesn't go the direction of "people zoos".

[-] Ensign_Crab@lemmy.world 24 points 9 months ago

The University of Chicago, doing for AI what it did for Economics.

[-] Asafum@feddit.nl 9 points 9 months ago

Ahh the Chicago school of economics where they teach: Poor? Get fucked! Greed is Good!™

[-] Orbit79@lemmy.world 12 points 9 months ago

It should be pretty easy to filter out everything that is not visible to humans.
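The simplest version is probably a plain lossy re-encode, which throws away exactly the kind of high-frequency detail humans don't notice (Pillow again; untested against Nightshade specifically):

```python
from PIL import Image

# JPEG quantization discards low-amplitude, high-frequency signal, which is
# where imperceptible perturbations have to live.
Image.open("scraped.png").convert("RGB").save("scraped.jpg", quality=85)
```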

[-] MonsiuerPatEBrown@reddthat.com 6 points 9 months ago* (last edited 9 months ago)

So they're just going to leave Dehance! on the table like that?
