Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 3 points 23 hours ago (1 children)

It wasn't just one, like almost all of them were bad. The worst one was a vid where they went 'they did great things with colors, see how the rebels constantly wear yellow and red clothing to symbolize the fire of the rebellion', only half the outfits they called orange were just brown, and their supporting arguments for this came from things which were clearly said about other thematic things which they missed.

I purged them from my history to try and make the algo stop however.

Unrelated to that, I also saw a guy do a deep dive on the themes of a movie (not Andor), only to admit he had only seen the movie once. Which is quite a thing to admit.

[–] Soyweiser@awful.systems 3 points 23 hours ago

Considering the war with Iran (and how some people will just blame anything that goes wrong on AI), this is quite the PR mastermove from Altman.

[–] Soyweiser@awful.systems 3 points 1 day ago

There can't be a bank run if there is no bank.

[–] Soyweiser@awful.systems 5 points 1 day ago (1 children)

I myself unfollowed Masnick a while back because I knew I would eventually push back on some of his shit and it would lead to me getting into stupid time-wasting discussions. Nice to see I'm not the only one annoyed.

[–] Soyweiser@awful.systems 4 points 1 day ago (3 children)

The blind leading the blind. Because so much stuff on yt is so bad.

(Recently the algorithm decided I wanted some analysis of Andor. And oof.)

[–] Soyweiser@awful.systems 6 points 1 day ago* (last edited 1 day ago) (1 children)

Seems like it; before, they just used the word 'innovation' to do the same thing. A thing which drives me mad re Dutch politics. (We have a problem that our farms produce too much nitrogen, and instead of doing anything about it our governments keep going 'we will invest in innovation', which means nothing. It just pushes the ball forward, and more and more stuff gets shut down because of the nitrogen problems (building buildings, for example). But the word innovation polls well and feels proactive.)

And while this is very specific to the nitrogen problem, people have been doing this with climate change for decades as well. (see also how AI is replacing the word innovation there).

[–] Soyweiser@awful.systems 7 points 1 day ago

ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.

Thats easy.

[–] Soyweiser@awful.systems 7 points 2 days ago* (last edited 1 day ago)

it’s more complicated than this, sorry, but this oversimplification is basically true

Wait, so it isn't true and it is true? Nice to notice your own confusion/reluctance (yeah, I'm a broken record on the Rationalists not doing Rationalism). Also a weird way to teach math. This makes me wonder if he understands math at all.

Edit sneer

He also threatens an Anti-Stochastic-Parrot FAQ.

So, he is a crypto Stochastic Parrot?

[–] Soyweiser@awful.systems 4 points 3 days ago* (last edited 3 days ago)

Don't these sorts of prompt files fail when the llm runs out of tokens/context and it needs to summarize its own history? (Yeah, I'm not using the right terms, you know what I mean.)

So we can have the one step for a short nondeterministic moment, till you try to do something big.

I'm not sure calling the problem tractable is meaningful in any way. Yud-style end-of-the-world AGI stuff is also tractable. Doesn't mean jack shit.

[–] Soyweiser@awful.systems 16 points 3 days ago (1 children)

The tech isn't mature, but neither was the Internet 30 years ago.

Drink!

[–] Soyweiser@awful.systems 16 points 4 days ago* (last edited 4 days ago)

Sneerclub was right.

are usually intellectually honest enough to be able to acknowledge that various heretical political beliefs might be true.

Oh, you have more heretical political beliefs? Such as? Come on, don't be shy. (Place your bets: will it be sexism, pedophilia, or antisemitism?)

Also odd how they instantly give the game away: are they talking about higher Ashkenazi IQ or the higher Asian IQ scores? Nope. Instantly goes to IQ with a hard r.

A lot of unexplored assumptions as well: race being real, IQ tests being real, IQ tests showing intelligence, IQ being genetic, and all of that being scientific, etc. Same issues as the Basilisk.

[–] Soyweiser@awful.systems 5 points 4 days ago

Before they could ask grok how to stop a process it was already too late.

Not that it mattered, as Grok's advice to become the Reich chancellor actually didn't fix this problem.

 

Via reddit's sneerclub. Thanks, u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it's nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link for the people who don't like archive.is. I used the archive because I don't like substack and want to discourage its use.

 

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet, btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll of course; boy, does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

 

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

11
submitted 9 months ago* (last edited 9 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so I'm going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't The Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this pdf was intended to be public. I did find it on google, but it might not be meant to be accessible this way.)

 

The interview itself

Got the interview via Dr. Émile P. Torres on twitter

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high EV worrying done about the Basilisk. Can't you wash them?"

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes (a project which predates the takeover of twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes ).

In reaction, Musk admits he never read HPMOR, and he suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
