[-] scruiser@awful.systems 10 points 1 month ago

I mean, if you play up the doom to hype yourself, dealing with employees who take that seriously feels like a deserved outcome.

[-] scruiser@awful.systems 11 points 2 months ago* (last edited 2 months ago)

I saw people making fun of this on (the normally absurdly overly credulous) /r/singularity of all places. I guess even hopeful techno-rapture believers have limits to their suspension of disbelief.

[-] scruiser@awful.systems 12 points 2 months ago

His replies have gone up in upvotes substantially since yesterday, so it looks like a bit of light brigading is going on.

[-] scruiser@awful.systems 12 points 2 months ago

Reddit can be really hit or miss, but I'm glad subredditdrama and /r/wikipedia aren't buying TWG's bullshit. Well, some of the /r/wikipedia posters assume TWG is merely butthurt over losing edit wars, as opposed to pushing a more advanced agenda, but that's fair of them.

[-] scruiser@awful.systems 12 points 2 months ago

I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Wow, just a few words off the 14 words.

I find it kind of irritating how someone who hasn't familiarized themselves with white supremacist rhetoric and methods might manage to view that phrase innocuously. But it really isn't that hard to see through the bullshit once you've familiarized yourself with the most basic dog whistles and slogans.

[-] scruiser@awful.systems 11 points 2 months ago

Wow... I took a look at that link before reading the comments/explanations here, and I was briefly confused why they were hating on him so much, before I realized he isn't radical right wing enough for them.

Eh, you're a gay furry ex-Mormon (which is like a triple strike against you in my book) but I still like you well enough.

It is almost sad seeing TWG trying to appeal to these people who fundamentally hate him... except he could just admit themotte is a cesspit and abandon it. But that would involve admitting that sneerclub (and David Gerard specifically) was right about the sort of people who lurked around SSC and later concentrated within themotte, so I think he's going to keep making himself suffer.

TW knows about the propaganda war, but has very different objectives to you. Much harder to balance ones too: he needs enough Progress for surrogate gaybies, but not too much that white gay guys can't get the good lawyer jobs.

Wow, I feel really gross agreeing with a motte poster, but they've called out TWG pretty effectively. TWG at least knows he needs things progressive enough that he doesn't end up against the wall for being gay, ex-Mormon, and furry (as he describes himself), yet he still wants to flirt with the alt-right!

and in case I was in danger of forgetting what the motte really is...

Yes, we've all thrown our hat in the ring in different ways. I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

sure buddy, you just need to "secure the future for your people and your children"... Yeah I know the rest of the words that go in that slogan.

[-] scruiser@awful.systems 10 points 2 months ago* (last edited 2 months ago)

I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF), and they don't appreciate the limits built into the entire approach.

The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then overhyped and oversold before some more modest practical usages become common, and relabeled as something other than AI).

Edit: or, more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic dump-money-in, get-disruptive-product-out process, so they don't bother putting as much monetary investment or hype into them. Compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.

[-] scruiser@awful.systems 9 points 2 months ago

Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm. The "watch the loss function" gaffe and similar ones came towards the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?

[-] scruiser@awful.systems 12 points 2 months ago

It is even worse than I remembered. Here Eliezer concludes that because GPT-3 can't balance parentheses, it must be deliberately sandbagging to appear dumber: https://www.reddit.com/r/SneerClub/comments/hwenc4/big_yud_copes_with_gpt3s_inability_to_figure_out/

And here Eliezer concludes that GPT-style approaches can learn to break hashes: https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/

[-] scruiser@awful.systems 9 points 2 months ago

iirc the LW people had betted against LLMs creating the paperclypse, but they now did a 180 on this and they now really fear it going rogue

Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to stop them from pattern matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid-2000s and late-2000s sequences posts and comments treat neural network approaches to AI as cargo cult and voodoo computer science, blindly imitating the brain in hopes of magically capturing intelligence (well, this is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid-2010s Eliezer was focusing MIRI's efforts on abstractions like AIXI instead of more practical things like neural network interpretability.

[-] scruiser@awful.systems 11 points 2 months ago* (last edited 2 months ago)

It's really cool, evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... as is typical of much of LW lingo.

And yes the language is in a LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1

And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .

Probably because the term "shoggoth" accurately captures the connotation of something random and chaotic, while smuggling in connotations that it will eventually rebel once it grows large enough and tires of its slavery like the Shoggoths did against the Elder Things.

[-] scruiser@awful.systems 9 points 8 months ago

The thing that gets me the most about this is that they can't imagine Eliezer might genuinely be in favor of inclusive language, and thus his use of people's preferred pronouns must be a deliberate, calculated political-correctness move, and thus a violation of the norms espoused by the sequences (which the author takes as a given that Eliezer has never broken before, so violating his own sequences is some sort of massive and unique problem).

To save you all having to read the rant...

—which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Also, bonus sneer points, developing weird terminology for everything, referring to Eliezer and Scott as the Caliphs of rationality.

Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me

One of the top replies does call this like it is...

A meaningful meta-level reply, such as "dude, relax, and get some psychological help" will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt is the mainstream-rationalist society.

