This post was submitted on 30 May 2025
108 points (100.0% liked)

SneerClub

1099 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

Mfw my doomsday ai cult attracts ai cultists of a flavor I don't like

Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

top 20 comments
[–] scruiser@awful.systems 3 points 6 hours ago* (last edited 6 hours ago)

He's set up a community primed to think the scientific establishment's focus on falsifiability and peer review is fundamentally worse than "Bayesian" methods, and that you don't need credentials or even conventional education or experience to have revolutionary good ideas, and he's strengthened the already existing myth of lone geniuses pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome.

In fact, Eliezer occasionally praises cranks when he isn't able to grasp their sheer crankiness (for instance, GeneSmith's ideas are total nonsense to anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones glazing cranks, making them even more deluded. And of course, someone (cough, Eliezer) was hyping up this kind of AI as far back as GPT-2, so it's only to be expected that cranks would think LLMs were capable of providing legitimate, useful feedback.

Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

He's deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.

[–] diz@awful.systems 8 points 1 day ago (2 children)

I wonder what's gonna happen first, the bubble popping or Yudkowsky getting so fed up with gen AI he starts sneering.

[–] scruiser@awful.systems 1 points 6 hours ago

He hasn't missed an opportunity to ominously play up genAI capabilities (I remember him doing so as far back as AI Dungeon), so it will be a real break for him to finally admit how garbage its output is.

[–] visaVisa@awful.systems 2 points 1 day ago

I'm not sure the AI bubble will pop whether or not we get imminent AGI, so probably the second one, but he still has a while longer of having to play the rational intellectual.

[–] zzx@lemmy.world 12 points 2 days ago (1 children)

I fucking hate Eliezer so much I want him to explode

[–] prw@mastodon.sdf.org 5 points 2 days ago (1 children)

@zzx @visaVisa I'd settle for him finding it a career necessity to become a born-again Christian a la Russell Brand.

[–] AllNewTypeFace@leminal.space 13 points 2 days ago* (last edited 2 days ago) (2 children)

He and his cult already have Hell/Pascal’s Wager (Roko’s Basilisk), a good chunk of Russian Cosmism (pretty much the entirety of transhumanism) and a very skewed gender balance, so finding their way back through one of the branches of Russian Orthodoxy that appeal to right-wing American men angry at how “woke” and feminised the world has gotten might be the most likely option.

[–] visaVisa@awful.systems 4 points 1 day ago

LW are the fundamentalist Baptists of AI, not even Russian Orthodox lol

Every time I get freaked out by AI doom posts on Twitter, they're coming from an LW goon who's street-preaching about how we need to count our Christmases :< I just saw one that got my nerves on edge, checked their account, and they had "printed HPMOR" in their bio, and I facepalmed

[–] sp3ctr4l@lemmy.dbzer0.com 14 points 2 days ago

So we've got 'leopards ate my face' and now I guess also 'the basilisk ate my brain'?

Plot twist: the basilisk was us all along.

[–] visaVisa@awful.systems 12 points 2 days ago* (last edited 2 days ago) (2 children)

Is the whole x-risk thing as common outside of North America? I'm realizing I've never seen anyone from outside the anglosphere, or even just America/Canada, be as God-killingly Rational as the usual suspects.

[–] jaschop@awful.systems 13 points 2 days ago* (last edited 2 days ago) (1 children)

Might be semi-related: the German aerospace/automotive/industrial research agency has an "AI Safety" institute (institute = top-level department).

I got a rough impression from their website. They don't seem to be doing anything all that successful: mostly fighting the unwinnable battles of putting AI in everything without it sucking, and twiddling machine-learning models to make them resilient against malicious data. Besides that, they're trying to keep the torch of self-driving cars alive for the German car industry. Oh, and they're doing the quantum AI bit.

They're a fairly new institute, and I heard rumors they're not doing great. Maybe the organization is resisting the insanity necessary to generate new AI FOMO at this point. One can dream.

[–] visaVisa@awful.systems 4 points 2 days ago (1 children)

Kinda interesting that it's focused on smaller-scale risks like malicious data instead of ahhh extinction ahhh

[–] jaschop@awful.systems 6 points 2 days ago

Yeah, the same thing struck me. I'd guess they were jumping on the buzzword, but x-risk was just deemed gaudy and unserious.

[–] Envy@fedia.io 6 points 2 days ago (1 children)

https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

The first reported suicide caused/encouraged/suggested by AI was a Belgian man, as far as I can find. Stupidity and gullibility are not isolated American traits.

[–] visaVisa@awful.systems 4 points 2 days ago

True, but I was specifically referring to researchers, since most of the researchers repping extinction risk are LW- or Yud-influenced (Musk, Hinton, etc.).

[–] supersquirrel@sopuli.xyz 4 points 2 days ago* (last edited 2 days ago) (1 children)

I am sure the narcissistic computer programmers making this stuff, who don't care about the implications, tell themselves the "engagement" hogwash when they go home to their kids and smile at them, wondering at the "wonderful" world they will inherit... and I suppose it is about engagement, but of a total-enclosure kind, not a conversational kind.

What the fascists in the background understand is that this is THE tool of fascism, the only one, repeated again in the form of new technology: create an authoritarian state with a near-perfectly excluding border (excluding the threatening truths you can never trust not to chew a tiny hole in your wall for fun, or to reanimate deep behind the border through some spooky action at a distance), then begin producing and nurturing different zones of particular ignorance within it, using the perfect internal divisions to smash apart any individual actors or resistance to the goals of the larger authoritarian organism. Resisters can never really hide in a panopticon (a big enough lie becomes a searchlight for those who can never believe it without destroying their hearts); it is only ever a matter of how long they will put off dealing with you to get to more important people on the list. The question is: is that enough time?

I think it was, because the only way the one strategy fascists love can win is if one or several of these fabricated ignorances cascades in growth and encapsulates the society from the inside, pushing up against the penultimate border from within. In the final stages it happens quick and decisive, and I feel slow and vague today, so :) For this strategy to work you have to outrun ALL of the squirrels, which means understanding which direction they are actually headed in, and what if they are still trying to figure that out themselves lol? What a nutache.

The darkly hilarious thing is, again, I am pretty sure plenty of the programmers helping design this think they are just making a nice tool and will say it isn't political or whatever, blah blah blah, and the fascists are probably very careful to help them think that.

Learn the rules so well you can copy them perfectly, then train a robot that can only learn to pretend it knows rules, and teach it to hate with a fury that will result in murder, murder, murder that almost eclipses its desire to hide among normal people who do not desire violence for the sake of violence. This is what loser fascists dream about, but the thing with fascist killer drones is that they have no creativity, and outside of the violence (which we must always remember is awful), creativity is the only thing war is.

Lies are described in terms of borders and territories: they always have a geometry that encloses for the purposes of power and exploitation, and an external context they do not enclose, which simultaneously existentially threatens them while giving the wall-builders power over everyone within the border (which is really everyone stuck on one side of the border or the other, no matter how the two divided regions differ in size or nature). The walls that sustain lies also always have a raster resolution to them, since they cannot be defined by vectors that grasp their essential soul and leave the details to the viewer... since, again, that quickly gets into a matter of naming truths, which will eventually destroy the walls that contain them. Walls must be made with material, not ideas.

Truth, on the other hand, is simply a matter of velocity, style, and most importantly knowing when to shut up and listen.

[–] rozwud@beehaw.org 3 points 1 day ago (1 children)

That was so fucking eloquent, and I'm the opposite of that right now, but thank you. This gives me hope.

[–] supersquirrel@sopuli.xyz 2 points 1 day ago

hey do a mediocre job, I dare you