scruiser

joined 2 years ago
[–] scruiser@awful.systems 3 points 8 months ago

I missed that it’s also explicitly meant as rationalist esoterica.

It turns in that direction about 20ish pages in... and then spends hundreds of pages on it, greatly inflating what could have been a much more readable length. It does eventually get back to actual plot events after that.

[–] scruiser@awful.systems 5 points 8 months ago

I hadn't heard of MAPLE before; is it tied to lesswrong? From the focus on AI it's at least adjacent to it... so I'll add that to the list of cults lesswrong is responsible for. So all in all, we've got the Zizians, Leverage Research, and now MAPLE for proper cults, and stuff like Dragon Army and Michael Vassar's groupies for "high demand" groups. It really is a cult incubator.

[–] scruiser@awful.systems 5 points 8 months ago* (last edited 8 months ago)

I actually think "Project Lawful" started as Eliezer having fun with glowfic (he has a few other attempts at glowfics that aren't nearly as wordy... one of them actually almost kind of pokes fun at himself and lesswrong), and then as it took off and the plot took the direction of "his author insert gives lectures to an audience of adoring slaves" he realized he could use it as an opportunity to squeeze out all the Sequence content he hadn't bothered writing up in the past decade^. And that's why his next attempt at an HPMOR-level masterpiece is an awkward-to-read rp featuring tons of adult content in a DnD spinoff, and not more fanfiction suited for optimal reception by the masses.

^(I think Eliezer's writing output dropped a lot in the 2010s compared to when he was writing the sequences, and the stuff he has written over the past decade is a lot worse. Like the sequences are all in bite-size chunks, readable in sequence, and often rephrase legitimate science in a popular way, and have a transhumanist optimism to them. Whereas his recent writings are tiny little hot takes on twitter and long, winding rants about why we are all doomed on lesswrong.)

[–] scruiser@awful.systems 3 points 8 months ago (1 children)

Yeah, even if computers predicting other computers didn't require overcoming the halting problem (and thus contradicting the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.
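The diagonalization argument behind the halting problem is short enough to caricature in Python (a toy illustration of the idea, not a formal proof): given any claimed halting predictor, you can construct a program that does the opposite of whatever the predictor says about it.

```python
def make_troll(would_halt):
    """Given any claimed halting predictor, build a program it must get wrong."""
    def troll():
        if would_halt(troll):
            while True:  # predictor said "troll halts", so loop forever
                pass
        # predictor said "troll loops", so halt immediately
    return troll

# A stand-in predictor that claims every program loops forever.
def pessimist(program):
    return False

troll = make_troll(pessimist)
troll()  # returns immediately, contradicting pessimist's prediction
```

Any predictor whatsoever is defeated the same way, which is the core of Turing's proof.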

[–] scruiser@awful.systems 11 points 8 months ago* (last edited 8 months ago) (5 children)

Weird rp wouldn't be sneer worthy on its own (although it would still be at least a little cringe); it's the contributing factors like...

  • the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)

  • the fact that Eliezer cites it like serious academic writing (he's literally mentioned it to Yann LeCun in twitter arguments)

  • the fact that in-character lectures are the only place Eliezer has written up many of his decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)

  • the fact that Eliezer thinks it's another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read, and most glowfics require huge amounts of context to follow)

  • the fact that the story doubles down on the HPMOR flaw of confusion over which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell... is certainly an authorial choice)

  • and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)

...At least lintamande has the common sense to understand why you don't actively link your bdsm dnd roleplay to your irl name and work.

And it shouldn't be news to people that KP supports eugenics, given her defense of Scott Alexander and her comments about super babies, but possibly it is, and headlining weird roleplay will draw attention to it.

[–] scruiser@awful.systems 10 points 8 months ago

To be fair to DnD, it is actually more sophisticated than the IQ fetishists, it has 3 stats for mental traits instead of 1!

[–] scruiser@awful.systems 5 points 8 months ago (3 children)

If your decision theory can't address ~~weird~~ totally plausible in the near future hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?

[–] scruiser@awful.systems 6 points 8 months ago

It's always the people you most expect.

[–] scruiser@awful.systems 12 points 8 months ago

It's pretty screwed up that humble-bragging about putting their own mother out of a job is a useful opening for selling a scam service. At least the people who buy into it will get what they have coming?

[–] scruiser@awful.systems 7 points 8 months ago* (last edited 8 months ago) (5 children)

Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this... stuff).

Some snarky comments, not because the summary should have included them (all the asides you could add could easily double the length and leave a casual listener/reader more confused), but because I think they are funny ~~and I need to vent~~

You’ll see him quoted in the press as an “AI researcher” or similar.

Or decision theorist! With an entire one decision theory paper that he didn't bother getting through peer review, because the reviewers wanted, like, actual context and an actual decision theory, and not just hand waves at paradoxes on the fringes of decision theory.

What Yudkowsky actually does is write blog posts.

He also writes fanfiction!

I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!

Yeah this rabbit hole is deep.

The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.

Yeah in hindsight the large number of ex-Christians it attracts makes sense.

And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus, so if he had proper skepticism of Silicon Valley and Startup/VC Culture, he really should have seen this coming.

There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won.

I was mildly pleasantly surprised to see there was a solid half pushing back in the comments in response to the first Manifest, but it looks like the anti-racism faction didn't get any traction to change anything, and the second Manifest conference was just as bad or worse.

[–] scruiser@awful.systems 8 points 8 months ago* (last edited 8 months ago) (1 children)

I think the problem is that the author doesn’t want to demonize any of those actual ideologies that oppose TESCREALism either explicitly or incidentally because they’re more popular and powerful and because rather than being foundationally opposed to “Progress” as he defines it they have their own specific principles that are harder to dismiss.

This is a good point. I'll go even further and say a lot of the component ideologies of anti-TESCREALism are stuff this author might (at least nominally claim to) be in favor of, so they can't name the specific ideologies.

[–] scruiser@awful.systems 10 points 9 months ago* (last edited 9 months ago)

I feel like lesswrong's front page has what would be a neat concept in a science fiction story at least once a week. Like what if an AGI had a constant record of its thoughts, but it learned to hide what it was really thinking in them with complex steganography! That's a solid third-act twist of at least a B sci-fi plot, if not enough to carry a good story by itself. Except lesswrong is trying to get their ideas passed into legislation and they are being used as the hype wing of the latest tech-craze. And they only occasionally write actually fun stories, as opposed to polemic stories beating you over the head with their moral or ten-thousand-word pseudo-academic blog posts.
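For anyone wondering what "hiding thoughts in a visible record" even looks like, the crudest possible toy version is an acrostic (nothing remotely like what a model would actually learn, purely to illustrate the concept):

```python
def hide_acrostic(secret, filler=" therefore, continuing the analysis."):
    """Hide `secret` as the first character of each line of bland 'reasoning'."""
    return "\n".join(ch + filler for ch in secret)

def recover_acrostic(transcript):
    """Read the first character of each line to recover the hidden message."""
    return "".join(line[0] for line in transcript.splitlines())

log = hide_acrostic("flee")
# The log reads as four lines of filler, but the first letters spell the secret:
assert recover_acrostic(log) == "flee"
```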

 

I found a neat essay discussing the history of Doug Lenat, Eurisko, and Cyc here. The essay is pretty cool: Doug Lenat made one of the largest and most systematic efforts to make Good Old Fashioned Symbolic AI reach AGI through sheer volume and detail of expert system entries. It didn't work (obviously), but what's interesting (especially in contrast to LLMs) is that Doug made his business, Cycorp, actually profitable, actually producing useful products in the form of custom-built expert systems for various customers over the decades with a steady level of employees and effort spent (as opposed to LLM companies sucking up massive VC capital to generate crappy products that will probably go bust).
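Cyc is orders of magnitude bigger, but the basic mechanic of an expert system, forward-chaining over hand-entered rules, can be sketched in a few lines (a toy with made-up rules, just to show the shape of the thing):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules of the kind a knowledge engineer would hand-enter.
RULES = [
    (("has_fever", "has_cough"), "possible_flu"),
    (("possible_flu",), "recommend_rest"),
]
derived = forward_chain({"has_fever", "has_cough"}, RULES)
assert "recommend_rest" in derived
```

The business model was essentially selling decades of carefully curated `RULES` at enormous scale.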

This sparked memories of lesswrong discussion of Eurisko... which leads to some choice sneerable classic lines.

In a sequence classic, Eliezer discusses Eurisko. Now that I've read an essay explaining Eurisko more clearly, a lot of Eliezer's discussion seems much emptier.

To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.

This line is classic Eliezer Dunning-Kruger arrogance. The lessons from Cyc were used in useful expert systems, and the effort spent building those expert systems was used to continue advancing Cyc, so I would call Doug really successful actually, much more successful than many AGI efforts (including Eliezer's). And it didn't depend on endless VC funding or hype cycles.

EURISKO used "heuristics" to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. E.g. EURISKO started with the heuristic "investigate extreme cases" but moved on to "investigate cases close to extremes". The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.

...

EURISKO lacked what I called "insight" - that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for nought. Unless, y'know, you're counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.

Eliezer simultaneously mocks Doug's big achievements and exaggerates this one. The detailed essay I linked at the beginning actually explains this properly: Traveller's rules inadvertently encouraged a narrow, degenerate (in the mathematical sense) strategy. The second-place player actually found the same broken strategy Doug (using Eurisko) did; Doug just did it slightly better because he had gamed it out more and included a few ship designs that countered the opponent using the same broken strategy. It was a nice feat of a human leveraging a computer to mathematically explore a game; it wasn't an AI independently exploring a game.
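The "heuristics suggesting new heuristics" loop Lenat describes can be caricatured in a few lines; here a heuristic is just a number scored against a fictional objective, and the "metaheuristic" proposes mutated copies of existing ones (a deliberately silly toy, purely to show the loop structure, not anything like RLL):

```python
import random

random.seed(0)  # deterministic for illustration

def score(h):
    # Stand-in objective: how well heuristic h exploits the (fictional) game's rules.
    return -(h - 7) ** 2

def metaheuristic(pool):
    """Propose a mutated copy of a randomly chosen existing heuristic."""
    return random.choice(pool) + random.choice([-1, 1])

pool = [0]  # start with one crude heuristic
for _ in range(200):
    candidate = metaheuristic(pool)
    if score(candidate) > min(score(h) for h in pool):
        pool.append(candidate)

best = max(pool, key=score)  # the pool climbs toward the degenerate optimum
```

Per Lenat, most of the real conceptual work was representing heuristics in RLL so they could modify themselves without simply breaking, which this sketch conveniently dodges by making them plain numbers.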

Another lesswronger brings up Eurisko here. Eliezer is of course worried:

This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.

And yes, Eliezer actually is worried a 1970s dead end in AI might lead to FOOM and AGI doom. To a comment here:

Are you really afraid that AI is so easy that it's a very short distance between "ooh, cool" and "oh, shit"?

Eliezer responds:

Depends how cool. I don't know the space of self-modifying programs very well. Anything cooler than anything that's been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.

Fearmongering back in 2008 even before he had given up and gone full doomer.

And this reminds me, Eliezer did not actually predict which paths would lead to better AI. In 2008 he was pretty convinced neural networks were not a path to AGI.

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now. I don't think this particular raw fact licenses any conclusions in particular. But at least don't tell me it's still the new revolutionary idea in AI.

Apparently it took all the way until AlphaGo (sometime between 2015 and 2017) for Eliezer to start to realize he was wrong. (He never made a major post about changing his mind; I had to reconstruct this process and estimate the date from other lesswrongers discussing it and from noticing small comments from him here and there.) Of course, even as late as 2017, MIRI was still neglecting neural networks to focus on abstract frameworks like "Highly Reliable Agent Design".

So yeah. Puts things into context, doesn't it.

Bonus: One of Doug's last papers, which lists out a lot of lessons LLMs could take from cyc and expert systems. You might recognize the co-author, Gary Marcus, from one of the LLM critical blogs: https://garymarcus.substack.com/

 

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there has been the large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them because what they want would result in some cyberpunk dystopian shit in the process of trying to reach it). However, even the accelerationists don't want Chinese AGI, because insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI.

This is a long running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news, China actually has no chance at competing at AI (this was posted before deepseek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place.

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness Essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the threat to freedom and the authoritarian power (pay no attention to the techbro techno-fascist)

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on having an AGI race with China, but not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI; rather, because it will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or both lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI? Judging by the upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up on my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of going after "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, deflects it onto the "left" elitists. (I put left in quote marks because the author apparently thinks establishment Democrats are actually leftists; I fucking wish.)

An illustrative quote (of Scott's that the author agrees with)

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I actually think the establishment Democrats do deserve a major piece of the blame, in that their status quo neoliberalism has been rejected by the public yet they refuse to consider genuinely leftist ideas. But that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point, if anything the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tl;dr: the author is trying to shift blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).

 

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
