[-] scruiser@awful.systems 17 points 2 months ago

> First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set.

An extra bit of labeling on your training data set really doesn't help you that much. LLMs already make up plausible-looking citations and website links (and other data types) that are actually complete garbage even though their training data has valid citations and website links (and other data types). Labeling things as "fact" and forcing the LLM to output stuff with that "fact" label will get you output that looks (in terms of statistical structure) like valid labeled "facts" but has absolutely no guarantee of being true.
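A toy sketch of the point (my own illustration, not anyone's actual training pipeline): a word-bigram model trained only on strings labeled as facts will generate output with the right label and the right statistical shape, while freely recombining the pieces into labeled falsehoods.

```python
# Toy demo: train a word-bigram model on "fact"-labeled true statements,
# then sample from it. Samples carry the label and mimic the structure,
# but nothing ties them to truth.
import random
from collections import defaultdict

training_facts = [
    "<fact> Paris is the capital of France </fact>",
    "<fact> Tokyo is the capital of Japan </fact>",
    "<fact> Ottawa is the capital of Canada </fact>",
]

# Count word-to-word transitions across the labeled corpus.
transitions = defaultdict(list)
for line in training_facts:
    words = line.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def sample(seed=None):
    """Walk the bigram chain from the opening label to the closing one."""
    rng = random.Random(seed)
    word, out = "<fact>", ["<fact>"]
    while word != "</fact>":
        word = rng.choice(transitions[word])
        out.append(word)
    return " ".join(out)

# Every sample is structurally a labeled "fact", yet the city and country
# are chosen independently, so "<fact> Paris is the capital of Japan </fact>"
# comes out just as readily as anything true.
print(sample())
```

The label propagates; the truth value doesn't, because the model only ever learned the statistics of labeled text.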

[-] scruiser@awful.systems 13 points 2 months ago

Sneerclub tried to warn them (well, not really, but some of our mockery could be interpreted as warning) that the tech bros were just using their fear mongering as a vector for hype. Even as far back as the OG mid-2000s lesswrong, a savvy observer could note that much of the funding they received was a way of accumulating influence for people like Peter Thiel.

[-] scruiser@awful.systems 14 points 2 months ago

Nice effort post! It feels like the LLM is pattern matching to common logic tests even when that is the totally incorrect thing to do. Which is pretty strong evidence against LLMs properly doing reasoning, as opposed to getting logic tests, puzzles, and benchmarks right through sheer memorization and pattern matching.

[-] scruiser@awful.systems 23 points 2 months ago

Which, to recap for everyone, involved underpaying and manipulating employees into working as full time general purpose servants. Which is pretty up there on the scale of cult-like activity out of everything EA has done. So it makes sense she would be trying to pull a switcheroo as to who is responsible for EA being culty...

[-] scruiser@awful.systems 13 points 2 months ago

Roko is also violating their rules of assuming charity and good faith about everything and going meta whenever possible. Because defending racists and racism is fine, as long as your tone is careful enough and you go up a layer of meta to avoid discussing the object-level claims.

[-] scruiser@awful.systems 31 points 3 months ago* (last edited 3 months ago)

They are more defensive of the racists in the other blog post on this topic: https://forum.effectivealtruism.org/posts/MHenxzydsNgRzSMHY/my-experience-at-the-controversial-manifest-2024

Maybe it's because the HBDers managed to control the framing with the other thread? Or because the other thread systematically refuses to name names, but this thread actually did name them, and the conversation shifted out of a framing that could be controlled with tone-policing and freeze peach appeals into actual concrete discussion of specific blatantly racist statements (it's hard to argue someone isn't racist and transphobic when they have articles with titles like "Why Do I Hate Pronouns More Than Genocide?").

[-] scruiser@awful.systems 16 points 3 months ago

Did you misread, or are you making a joke (sorry, the situation is so absurd it's hard to tell)? Curtis Yarvin is Moldbug, and he was the one hosting the afterparty (he didn't attend the Manifest conference himself). So apparently there were racists too cringy even for Moldbug-hosted parties!

[-] scruiser@awful.systems 13 points 3 months ago* (last edited 3 months ago)

I don't think even that does it. Richard Hanania, one of Manifest's promoted speakers, wrote "Why Do I Hate Pronouns More Than Genocide?".

[-] scruiser@awful.systems 25 points 3 months ago* (last edited 3 months ago)

There's more shit gems in the comments, but I think my summary captures most of the major points. One more comment that stuck out:

> Being a Republican is equally as compatible with EA as being a Democrat. Lots of people on both sides have incompatible views. I honestly think you just haven't met enough Republicans!

Yes, this is actually true, and it is a bad thing and an indictment of EA.

Edit 1: There is another post clarifying that it wasn't mostly racists (https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest ), but:

1) this is sneerclub, not careful-count-of-the-exact-percentage-of-racists-and-racist-talks-to-avoid-hurting-feelings club;

2) if you sit down at a table with 3 Neo-Nazis, there are 4 Neo-Nazis sitting down;

3) "full" is a subjective description, so yes, it's valid. Two major racists would be more than my quota;

4) see sidebar on debate.


So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although it does at least note his HBD sympathies) and still identifies a count of 8 full racists. It mentions a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic to distinguish what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and open discourse of ideas by banning racists, etc.).

[-] scruiser@awful.systems 12 points 6 months ago* (last edited 6 months ago)

So, I was morbidly curious about what Zack has to say about the Brennan emails (as I think they've been under-discussed, if not outright deliberately ignored, in lesswrong discussion), and I found to my horror that I actually agree with a side point of Zack's. From the footnotes:

> It seems notable (though I didn't note it at the time of my comment) that Brennan didn't break any promises. In Brennan's account, Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.
>
> To see why the lack of a promise is potentially significant, imagine if someone were guilty of a serious crime (like murder or stealing billions of dollars of their customers' money) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the reporter.

Of course, Zack's ultimate conclusion on this subject is, I think, the exact opposite of the correct one:

> I think that to people who have read and understood Alexander's work, there is nothing surprising or scandalous about the contents of the email.
>
> I think the main reason someone would consider the email a scandalous revelation is if they hadn't read Slate Star Codex that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse

Gee Zack, I wonder why so many people misread Scott? ...It's almost like he is intentionally misleading about his true views in order to subtly shift the Overton window of rationalist discourse, and intentionally presents himself as simply committed to charitable discourse while actually having a hidden agenda! And the bloated length of Scott's writing doesn't help with clarity either. Of course Zack, who writes tens of thousands of words to indirectly complain about perceived hypocrisy of Eliezer's in order to indirectly push gender essentialist views, probably finds Scott's writings a perfectly reasonable length.

Edit: oh, and an added bonus on the Brennan emails... Seeing them brought up again, I connected some dots I had missed. I had seen (and sneered at) this Yud quote before:

> I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly.

But somehow I had missed, or didn't realize, that the subtext was the emails that laid bare Scott's racism:

> (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)

Hmm... I'm not sure whether to update (usage of rationalist lingo is deliberate and ironic) in the direction of "Eliezer is stubbornly naive on Scott's racism" or "Eliezer is deliberately covering for Scott's racism". Since I'm not a rationalist, my probabilities don't have to sum to 1, so I'm gonna go with both.

[-] scruiser@awful.systems 14 points 6 months ago* (last edited 6 months ago)

> ghost of 2007!Yud

This part gets me the most. The current day Yud isn't transphobic (enough? idk) so Zack has to piece together his older writings on semantics and epistemology to get a more transphobic gender essentialist version of past Yud.

[-] scruiser@awful.systems 13 points 8 months ago

The hilarious part to me is that they imagine Eliezer moderates himself or self-censors particularly in response to sneerclub. Like, of all the possible reasons why Eliezer may not want to endorse transphobic rhetoric about pronouns (concern about general PR besides sneerclub, a more complex nuanced understanding of language, or even genuine compassion for trans people), sneerclub's disapproval is the one that sticks out to the author. I guess good job on us? Keep it up!


This is a classic Sequences post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other Sequences posts. It is also especially ironic given Eliezer's recent switch to doomerism, with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.

