this post was submitted on 12 Mar 2025
68 points (100.0% liked)

SneerClub

1050 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

While this linear model's overall predictive accuracy barely outperformed random guessing,

I was tempted to write this up for Pivot but fuck giving that blog any sort of publicity.

the rest of the site is a stupendous assortment from a very narrow field of focus, which made this ideal for SneerClub and not just TechTakes

top 42 comments
[–] ikidd@lemmy.world 8 points 1 day ago

I think it should just look for a crucifix around their necks. Should be 95% effective.

[–] BlueMonday1984@awful.systems 11 points 1 day ago (2 children)

I was tempted to write this up for Pivot but fuck giving that blog any sort of publicity.

On the one hand, I can see you not wanting to give the fucker attention, on the other hand, AI's indelible link to fascism is something which needs to be hammered home and shit like this gives you a golden opportunity to do it.

[–] dgerard@awful.systems 10 points 1 day ago (1 children)

I have a half written text about working definitions of intelligence in the AI field and whoops, it's all racism!

[–] dgerard@awful.systems 16 points 1 day ago (3 children)

wot i got so far:

Current “artificial general intelligence” researchers have a repeated habit of using a definition of “intelligence” from psychologist and ardent race scientist Linda Gottfredson. The definition looks innocuous, but was from Gottfredson’s 1994 Wall Street Journal op-ed, “Mainstream Science on Intelligence,” a farrago of race science put forward as a defense of Charles Murray’s book The Bell Curve — signed off by 52 other race scientists, 20 of whom were from the Pioneer Fund.

Gottfredson’s piece was cited in Shane Legg’s Ph.D. dissertation “Machine Super Intelligence,” in which he called it “an especially interesting definition as it was given as part of a group statement signed by 52 experts in the field” and said that it therefore represented “a mainstream perspective” — an odd way to refer to Pioneer Fund race scientists. Somehow, this passed Legg’s dissertation committee.

The definition made it from Legg’s Ph.D. into Microsoft and OpenAI’s “Sparks of AGI” paper, and from there to everyone else who copies citations to fill out their bibliography. When called out on this, Microsoft did finally remove the citation.

[–] YourNetworkIsHaunted@awful.systems 4 points 17 hours ago (1 children)

Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman's perspective: "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." (Aside: it doesn't do our credibility any favors that for all the concern about the source I had to actually track all the way to Microsoft's paper to find the quote at issue.)

The core issue is obviously that apparently they either took it completely out of context or else decided the fact that their source was explicitly arguing in favor of specious racist interpretations of shitty data wasn't important. But it also feels like breaking down the idea itself may be valuable.

Like, is there even a real consensus that those individual abilities or skills are actually correlated? Is it possible to be less vague than "among other things?" What does it mean to be "more able to learn from experience" or "more able to plan" that is rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)

I don't know enough about the state of non-racist neuroscience or whatever the relevant field is to know if these are even the right questions to ask, but it feels like there's more room to question the definition itself than we've been taking advantage of. If nothing else the vagueness means that we haven't really gotten any more specific than "the brain's ability to brain good."

[–] dgerard@awful.systems 4 points 9 hours ago

this is notes, not a thesis, i have the links to hand

[–] BlueMonday1984@awful.systems 8 points 1 day ago

A piece like this would dovetail nicely with Baldur's deep-dive into AI's link to esoteric fascism. Hope to see it get finished.

[–] mountainriver@awful.systems 3 points 1 day ago (1 children)

They removed the citation, but did they keep the definition?

These are AI bros, and should be assumed to be both racist and lazy. Of course they kept it.

[–] Architeuthis@awful.systems 7 points 1 day ago* (last edited 1 day ago)

The post is using traditional, orthodox, frankincense-scented machine learning techniques though; they aren't just asking an LLM.

This is AI from when we were using it to decide if an image is of a dog or a cat, not how to best disenfranchise all creatives.

[–] MrSulu@lemmy.ml 15 points 1 day ago

I'm guessing that no one copied this guy's science homework when he was at school.

[–] mountainriver@awful.systems 48 points 2 days ago

That was gross.

On a related note, one of my kids learnt about how phrenology was once used for scientific racism and my other kid was shocked, dismayed and didn't want to believe it. So I had to confirm that yes people did that, yes it was very racist, and yes they considered themselves scientists and were viewed as such by the scientific community of the time.

I didn't inform them that phrenology and scientific racism is still with us. There is a limit on how many illusions you want to break in a day.

[–] blakestacey@awful.systems 31 points 1 day ago (4 children)

Hashemi and Hall (2020) published research demonstrating that convolutional neural networks could distinguish between "criminal" and "non-criminal" facial images with a reported accuracy of 97% on their test set. While this paper was later retracted for ethical concerns rather than methodological flaws,

That's not really a sentence that should begin with "While", now, is it?

it highlighted the potential for facial analysis to extend beyond physical attributes into behavior prediction.

What the fuck is wrong with you?

[–] dgerard@awful.systems 12 points 1 day ago (1 children)

What the fuck is wrong with you?

the blog tagline is "Dysgenics, forecasting, machine learning, sociology, physiognomy, IQ, simulations", so he tells us straight up what's wrong with him

[–] Architeuthis@awful.systems 4 points 16 hours ago

I hate being reminded that, besides phrenology, physiognomy is also a thing.

[–] Soyweiser@awful.systems 11 points 1 day ago* (last edited 1 day ago) (1 children)

The implication here that it isn't methodologically flawed is quite something.

E: and I don't have the inclination to do the math, but a 97% accuracy seems to be on the unusable side considering the rate of 'criminals' vs. non-criminals in the population. (Yeah, see also 'wtf even is a criminal'.)

[–] sc_griffith@awful.systems 11 points 1 day ago (1 children)

about 3 in 100 americans are in prison, on parole, etc. so if that's the definition of a criminal, you would get 97% accuracy by just guessing not criminal every time
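The arithmetic behind that point can be sketched in a couple of lines (the ~3% figure is the one stated above; the trivial-classifier framing is just an illustration):

```python
# Base-rate check: with ~3% prevalence, a classifier that ALWAYS predicts
# "not criminal" already matches the paper's headline 97% accuracy.
prevalence = 0.03  # ~3 in 100 Americans in prison, on parole, etc.

def always_negative_accuracy(p: float) -> float:
    """Accuracy of the trivial classifier that never predicts positive."""
    return 1.0 - p

acc = always_negative_accuracy(prevalence)
print(f"Trivial classifier accuracy: {acc:.0%}")  # 97%
```

Which is why headline accuracy alone says nothing about a classifier on a heavily imbalanced class.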

[–] Soyweiser@awful.systems 5 points 1 day ago

Also an extremely good false positive rate

[–] swlabr@awful.systems 18 points 1 day ago

it highlighted the potential for facial analysis to extend beyond physical attributes into behavior prediction.

bouba/kiki prison industrial complex

Racist ideology predicted by a degenerated frontal lobe!

It's not related to their skull shape, they just have brain damage.

[–] swlabr@awful.systems 17 points 1 day ago (1 children)

Oh hey it’s a rehash of the pedosmile maddox post from a billion internet years ago.

[–] Soyweiser@awful.systems 16 points 1 day ago* (last edited 1 day ago) (1 children)

The war on weird-looking people continues. (The false positive/negative rate of this bs is immense. Wait, a 69% success rate? Oh god, the false positives on that are going to be immense. Even worse, the model works worse than random chance on an online game dataset, and the statistical uselessness of 69% given the low number of pedos in the general public isn't even mentioned in the conclusions. Toss this where it belongs: in the dustbin of history.)
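The false-positive point can be made concrete with Bayes' rule. The numbers below are illustrative assumptions, not from the post: the claimed 69% is treated as both sensitivity and specificity, and a 1% base rate is picked out of thin air to stand in for "low amount in the general public":

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Fraction of people flagged positive who actually are positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 69% sensitivity AND specificity, 1% base rate.
ppv = positive_predictive_value(0.69, 0.69, 0.01)
print(f"PPV: {ppv:.1%}")  # ~2%: nearly every person flagged is a false positive
```

Even granting the model its own headline number, roughly 98 out of every 100 people it flags would be innocent.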

[–] AllNewTypeFace@leminal.space 7 points 1 day ago (1 children)

The war on weird looking people continues.

Well, maybe if they had more eugenic facial symmetry and stronger jawlines, they’d be able to find age-appropriate sexual partners…

[–] froztbyte@awful.systems 14 points 1 day ago

let's not (even in jest)

[–] blakestacey@awful.systems 22 points 2 days ago (1 children)

(At the brainstorming session for terrible software names)

"PedoAI!"

[–] threeduck@aussie.zone 5 points 1 day ago
[–] dohpaz42@lemmy.world 14 points 2 days ago (2 children)

I picked the wrong week to be an older, white, overweight man. 😱

[–] Soyweiser@awful.systems 6 points 1 day ago* (last edited 1 day ago)

Don't worry, the people who would go and accuse you of being a pedophile would do so with or without this tool. It would just give them faux legitimacy.

E: post + profile picture was a lol moment however.

[–] Architeuthis@awful.systems 12 points 1 day ago (1 children)

His commenters really didn't like the 'white' part.

[–] sinedpick@awful.systems 9 points 1 day ago (1 children)

holy fuck

Interesting study, but I am skeptical that this result applies to the general population (without the "convicted" qualifier).

If non-whites are more violently criminal than whites, then we can expect them to be imprisoned earlier in life for any violent crime, of which pedophilia will be a small subset.

So we have more convicted white paedos because ....... the coloreds do more crimes??! what in the actual fuck did I just read?

[–] ICastFist@programming.dev 5 points 1 day ago (1 children)

No, no, it means non-white pedos will be jailed earlier in their lives. Makes perfect sense, dinnit?

[–] Architeuthis@awful.systems 4 points 16 hours ago

I like how this presupposes that unlike the tendency towards violent crime, pedophilia is equally distributed among all races and skull bump arrangements.

[–] BlueMonday1984@awful.systems 11 points 2 days ago
[–] Amoeba_Girl@awful.systems 12 points 2 days ago (3 children)

Is this some sort of bait or is he really expecting this to help in whatever way it is he imagines this would help?

[–] Amoeba_Girl@awful.systems 24 points 2 days ago* (last edited 2 days ago)

Oh, third option, he really wants to do phrenology and he figures if he does it on paedos no one will mind

We'll continue that trend: predicting pedophilic behavior based solely on facial features, bringing levity to an otherwise serious crime.

what is wrong with you

[–] Architeuthis@awful.systems 9 points 1 day ago

Could be an SSC-type situation: you write an interminable pretend-research post, in a superficially serious manner, on an obviously flawed premise, and let the algorithm find it an audience of people who mostly won't read it but will be left with the impression that the premise is at least defensible.

This will be made considerably easier once siskind puts it in his regular link roundup with a cheeky comment about how he doesn't really truly endorse this sort of thing.

[–] antifuchs@awful.systems 12 points 2 days ago (1 children)

Really wonder if his own pictures are included in the training set (as negative examples, of course)

[–] Soyweiser@awful.systems 4 points 1 day ago

Yeah, including people you don't like 'accidentally' is a big risk. Also, by definition the data only includes known/convicted pedos.

[–] riskable@programming.dev 10 points 2 days ago (2 children)

If you really want AI to catch pedophiles you need to train it with a database of priests and pastors.

[–] Amoeba_Girl@awful.systems 4 points 1 day ago

I appreciate the sentiment but the overwhelming majority of child sexual abuse is done by family. So maybe try a database of parents, I don't know.

[–] ICastFist@programming.dev 3 points 1 day ago

Also a photo of all of Epstein's friends and "ex" friends

[–] swlabr@awful.systems 5 points 1 day ago

Currently the comments section has one thread and you can probably guess what it’s about. (Hint: the post concludes something about the average SI pedophile)