this post was submitted on 18 Apr 2025
11 points (100.0% liked)

Ask Lemmy

13 comments
[–] ptz@dubvee.org 16 points 6 days ago* (last edited 6 days ago) (3 children)

This probably isn't a super-helpful answer, but for the most part, I haven't needed to use any (yet?). Dunno if it's just me, but pretty much every AI-generated image still just looks "off" and uncanny in a perceptible and slightly off-putting way.

That said, there are occasional false positives depending on the lighting, focus, and filters used for legit photographs. No false negatives yet, though.

[–] Opinionhaver@feddit.uk 7 points 6 days ago* (last edited 6 days ago)

No false negatives yet, though.

Good old toupee fallacy

[–] Sixtyforce@sh.itjust.works 7 points 6 days ago (1 children)

Some people were fooled by manual edits in Photoshop before this, so I'm sure there's a gradient.

So far I'm with you; I can tell, at least for now.

[–] ptz@dubvee.org 6 points 6 days ago* (last edited 6 days ago)

Yeah, not sure how long that'll hold up for me, but for now, so far so good.

The rule of thumb used to be "look at the hands", but I use a combo of focus, lighting, perspective, background objects (especially ones with text), color saturation, common sense (e.g. 'could this even be remotely real?'), etc. The scary part is that if someone were to run one of those through a filter and present it as grainy CCTV footage, all of that (minus the common sense part) would be lost and I'd likely be stumped.

[–] Rhynoplaz@lemmy.world 4 points 6 days ago

No false negatives yet, though.

Can't be sure of that. There may have been some that you didn't suspect were AI, so you didn't bother investigating.

For the most part you're right. I can often catch them just by noticing mistakes, but we never know how many REALLY good ones slipped through the cracks.

[–] Corno@lemm.ee 5 points 6 days ago* (last edited 6 days ago) (1 children)

Artist here. I don't use any of those; I just use my eyes. Using AI to try to detect AI always rubs me the wrong way because the false positive rate is high. I've even seen Van Gogh paintings get wrongly labeled as AI-generated.

[–] 1rre@discuss.tchncs.de 1 points 6 days ago

I've even seen Van Gogh paintings get wrongly labeled

That's way less surprising than an indie artist's work being wrongly labeled. It's nothing about the quality; Van Gogh paintings are just likely to be heavily overrepresented in the training dataset.

[–] TwistyLex@discuss.tchncs.de 3 points 6 days ago

A thought: any AI-image detector is a de facto trainer for AI-image generators. It necessarily becomes a kind of arms race, in the same way that spam generators test their payloads against spam filters.

[–] Opinionhaver@feddit.uk 3 points 6 days ago

I don't feel like I need one. If it's badly made, I can usually tell with high confidence that it's AI, and if it's made so well that I can't tell, I generally don't care either.

[–] tal@lemmy.today 3 points 6 days ago* (last edited 6 days ago)

I don't, and to be blunt, I don't think that there is going to be any approach that is effective in the long run. I can think of many technical approaches, but it's fundamentally playing a game of whack-a-mole; the technology is changing, so trying to identify flaws in it is hard. And false negatives may be a serious problem, because now you've got a website saying that something is a real photograph. In some cases, it may be useful to identify a particular image as being generated, but I think that it will very much be an imperfect, best-effort thing, and it will get harder over time.

I am pretty sure that we're going to have a combination of computer vision software and generative AIs producing 3D images at some point, and a lot of these techniques go out the window then.

I'm also assuming that you're talking about images generated to look like photographs here.

  • Looking for EXIF metadata flagging the image as being AI-generated. Easiest approach, but this is only going to catch images where nobody is intentionally trying to pass the generated image off as real. (A minimal sketch of this check appears after this list.)

  • It's possible for a generated image to contain elements that look very similar to images in its training set. Taking chunks of the image and then running TinEye-style fuzzy hashing on them might theoretically turn up an image that was in the training set, which would be a giveaway. I don't know the degree to which TinEye can identify portions of images; it can do it to some degree. If I had to improve on TinEye, I'd probably do something like edge detection, vectorization, and then measuring angles between lines and proportional distances to line intersections. (A rough sketch of the chunk-hashing idea follows the list.)

  • Identifying lighting issues. This requires computer vision software. Some older models will produce images with elements that have light sources coming from different directions. I'm pretty sure that Flux, at least, has some level of light-source identification run on its source material; otherwise I don't see how it could achieve the images it does.

  • Checking to see whether an image is "stable" with a given model. Normally, images are generated by an iterative process, and typically the process stops when the image is no longer changing. If you can come up with exactly the same model and settings used to generate the image, and the person who generated it ran generation until it was stable, and they're using settings and a model that converge on a stable output, then an image being perfectly stable is a giveaway. The problem is that there is a vastly proliferating number of models out there, not to mention potential settings, and no great way to derive those from the image. Also, you have to have access to the model, which won't be the case for proprietary services (unless you are that proprietary service). You might be able to identify an image created by a widely-used commercial service like Midjourney, but I don't think that you're going to have as much luck with the huge number of models on civitai.com or similar. (A sketch approximating this idea also follows the list.)

  • One approach that was historically used to identify manipulated images was looking for image compression artifacts: you can see non-block-aligned JPEG compression artifacts, say. You might be able to do that with some models that have low-quality images in their training data, but I'm skeptical that it'd work with better ones. (A recompression-check sketch follows the list as well.)

  • You can probably write software to look for some commonly-seen errors today, like malformed fingers, but honestly, I think that that's largely trying to look for temporary technical flaws, as these will go away.
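
For the metadata bullet above, here's a minimal sketch in Python, assuming Pillow is installed. The info keys and software strings it looks for ("parameters", "prompt", "workflow", and the generator names) are just examples of markers some generators are known to write, not an exhaustive or guaranteed list:

```python
# Sketch: flag images whose embedded metadata mentions an AI generator.
# Requires: pip install pillow
from PIL import Image

# Example markers only: some generators write keys/strings like these,
# but their absence proves nothing.
SUSPECT_INFO_KEYS = {"parameters", "prompt", "workflow"}   # PNG text chunks
SUSPECT_SOFTWARE = ("stable diffusion", "midjourney", "dall-e", "flux", "comfyui")

def metadata_flags(path: str) -> list[str]:
    flags = []
    img = Image.open(path)

    # PNG-style text chunks end up in img.info
    for key in img.info:
        if key.lower() in SUSPECT_INFO_KEYS:
            flags.append(f"info key present: {key}")

    # TIFF/EXIF tag 305 is the 'Software' field
    software = str(img.getexif().get(305, "")).lower()
    if any(name in software for name in SUSPECT_SOFTWARE):
        flags.append(f"EXIF Software field: {software}")

    return flags

if __name__ == "__main__":
    import sys
    print(metadata_flags(sys.argv[1]) or "no generator metadata found")
```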
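For the chunk-and-fuzzy-hash bullet, a rough sketch using the imagehash library's perceptual hash as a stand-in for whatever TinEye does internally. The reference_hashes index of training-set images is hypothetical and would have to be built offline; the tile size and distance threshold are arbitrary choices:

```python
# Sketch: split an image into tiles, perceptually hash each tile, and compare
# against a (hypothetical) index of hashes built offline from reference images.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

TILE = 256        # tile size in pixels; arbitrary choice
THRESHOLD = 8     # max Hamming distance to call two hashes "similar"; tune to taste

def tile_hashes(path: str):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    for x in range(0, w - TILE + 1, TILE):
        for y in range(0, h - TILE + 1, TILE):
            tile = img.crop((x, y, x + TILE, y + TILE))
            yield (x, y), imagehash.phash(tile)

def find_matches(query_path: str, reference_hashes: dict):
    """reference_hashes maps a source name to an imagehash.ImageHash, built offline."""
    hits = []
    for pos, qh in tile_hashes(query_path):
        for name, rh in reference_hashes.items():
            if qh - rh <= THRESHOLD:   # ImageHash subtraction gives the Hamming distance
                hits.append((pos, name, qh - rh))
    return hits
```

A perceptual hash is a much blunter instrument than a full reverse-image search, so treat any hit as a lead rather than proof.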
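For the stability bullet, a sketch that approximates the idea by pushing the image back through a candidate model's img2img pass at low strength and measuring how much it changes. The model ID, prompt, and strength are placeholders, and as noted above you'd need the exact model and settings the creator used for the result to mean much. Assumes the diffusers library and a CUDA GPU:

```python
# Sketch: approximate the "stability" test by running the image back through a
# candidate model's img2img pass at low strength and measuring how much it changes.
# The model ID, prompt, and strength are placeholders, not known-good settings.
# Requires: pip install diffusers transformers torch pillow numpy (and a CUDA GPU)
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

def stability_score(path: str, prompt: str,
                    model_id: str = "runwayml/stable-diffusion-v1-5") -> float:
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id,
                                                          torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    original = Image.open(path).convert("RGB").resize((512, 512))
    regenerated = pipe(prompt=prompt, image=original,
                       strength=0.2, guidance_scale=7.5).images[0]

    # Mean absolute per-pixel difference; lower means the image changed less
    # under this particular model, i.e. it is more "stable" for it.
    diff = np.abs(np.asarray(original, dtype=np.float32) -
                  np.asarray(regenerated, dtype=np.float32))
    return float(diff.mean())
```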
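For the compression-artifact bullet, a sketch of a classic error-level-analysis-style check: recompress the JPEG at a known quality and look at where the recompression error is uneven. The quality value is an arbitrary choice, and interpreting the resulting map is left to the human:

```python
# Sketch: error-level-analysis (ELA) style check. Re-save the JPEG at a known
# quality and look at where the recompression error is unusually uneven; the
# quality value here is an arbitrary choice.
# Requires: pip install pillow numpy
import io
import numpy as np
from PIL import Image, ImageChops

def ela_map(path: str, quality: int = 90) -> np.ndarray:
    original = Image.open(path).convert("RGB")

    # Recompress in memory at a fixed quality level.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    recompressed = Image.open(buf).convert("RGB")

    # Per-pixel difference; regions whose error level differs sharply from the
    # rest of the image are worth a closer look.
    return np.asarray(ImageChops.difference(original, recompressed), dtype=np.float32)

if __name__ == "__main__":
    import sys
    err = ela_map(sys.argv[1])
    print(f"mean error {err.mean():.2f}, max error {err.max():.2f}")
```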
[–] IronKrill@lemmy.ca 1 points 6 days ago

My eyes. Everything I've seen of AI detectors does not inspire confidence in me. I don't feel like I can trust their results due to the inevitability of incorrect guesses, so I don't bother using them.

[–] SaltSong@startrek.website 1 points 6 days ago

I use my wife. She's an artist, and has an eye for such things.

[–] vvilld@lemmy.dbzer0.com 0 points 6 days ago

Do you really need one at this point? AI images are really easy to notice.