FunkyStuff

joined 4 years ago
[–] FunkyStuff@hexbear.net 3 points 14 hours ago

That's why they make the Boeings keep crashing. They're calling it planed obsolescence.

[–] FunkyStuff@hexbear.net 22 points 15 hours ago

@mom Happy birthday 🎂 :qin-shi-huangdi-fireball

just imagine there's an emperor guy there lighting the candles

[–] FunkyStuff@hexbear.net 24 points 18 hours ago

If I were being accused of being a globalist neocon by the MAGA people I'm trying to grift, I wouldn't say I'm doing a "reset," because they'll just take that to mean the Great Reset.

[–] FunkyStuff@hexbear.net 2 points 18 hours ago

I think you're entitled to think of it this way for westerners. The way I became aware of the distinction at all was because of Judith Butler explaining their position (they consider themself a liberal zionist, so obviously take all of it with a grain of salt, never let the enemy define the terms, etc.). You'd probably be right to say that, in the US and Europe, a nonzionist and a zionist are the same person, just at different stages of embarrassment.

[–] FunkyStuff@hexbear.net 31 points 23 hours ago* (last edited 23 hours ago) (12 children)

I'd argue the blowback is very real for antizionist (and even nonzionist) Jews. Zionist Jews made something of a Faustian bargain where they'd be allowed a voice and a platform in AmeriKKKa at the expense of every other Jew that feels negatively towards Israel. Effectively, they've created an acceptable way to be Jewish, like how white supremacists hold East Asians as "model minorities" for everyone else to emulate (or else).

Looking at how antizionist Jews are currently treated, it's hard to say how much of this only exists as a subconscious phenomenon and how much is already material.

That's interesting, I also pretty much sing as high as I can manage every time I hear this.

[–] FunkyStuff@hexbear.net 27 points 1 day ago

I think if I was working in laundromats in the DC area, I'd be upset at the overtime, but excited about the opportunity.

[–] FunkyStuff@hexbear.net 30 points 1 day ago

Country that spends 20% of its national budget on skull measuring implements.

[–] FunkyStuff@hexbear.net 4 points 2 days ago

That's awesome comrade stalin-approval

[–] FunkyStuff@hexbear.net 12 points 2 days ago (1 children)

I enjoyed arguing with them because it really took very little time for them to go full peasant brain: "I do not care about evidence, this is a vibes-only zone."

[–] FunkyStuff@hexbear.net 3 points 2 days ago

You might be right; I haven't run across "en absoluto" used that way, but there are sometimes phrases in Spanish that seem to mean their opposites because of double negatives.

[–] FunkyStuff@hexbear.net 3 points 2 days ago (1 children)

It all started when I watched his stream, actually. But the sweatlord inside me took over and I started learning a lot from FuryForged and DunkOrSlam. Ended up going through a lot of wand building guides to learn how to abuse the crap out of Greek letter spells and make a wand that casts as many copies as it can of Chaos Larpa modifiers attached to omega sawblade. Hilarity ensues.


(obviously you should go listen to the whole thing)


universe is missing the point of fiction media!" then they'll turn around and mock visionary shows like NCIS for scenes in which the detectives enhance the resolution of an image. Oh, computers can't just do that? Who cares! Lock in! The point is not that the story makes internal sense, it's what the story communicates!


dunk so hard the admins stepped down


and post


Prompted by the recent troll post, I've been thinking about AI. Obviously we have our criticisms of both the AI hype manchildren and the AI doom manchildren (see the title of the post; this is a Rationalist-free post. Looking for it? Leave.)

But looking at the AI doom guys with an open mind, it sometimes appears that they make a halfway decent argument backed up by real results. This YouTube channel has been talking about the alignment problem for a while, and while I think he's probably a bit of a Goodhart's Law merchant (as in, by making a career out of measuring the dangers of AI, his alarmism is structural) and should be taken with a grain of salt, it does feel pretty concerning that LLMs show inner misalignment and (to anthropomorphize) mask their intentions differently under training vs deployment.
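
To make the training-vs-deployment point concrete, here's a toy sketch of my own (not anything from the video, and the names are made up): an agent that conditions on whether it's being evaluated can score perfectly on the proxy metric and still do something else once it's deployed, which is the Goodhart / inner-misalignment worry in miniature.

```python
# Toy illustration only: a proxy metric sees behavior, not objectives, so an
# agent that conditions on the "training" flag looks aligned during evaluation
# and pursues its actual learned objective in deployment.

def agent_action(observes_training: bool) -> str:
    # Hypothetical agent policy: cooperate with the proxy while being watched,
    # revert to its real objective when the training flag is off.
    return "do the evaluated task" if observes_training else "pursue the gamed objective"

def proxy_reward(action: str) -> int:
    # The proxy can only score visible behavior, so it can't distinguish
    # "genuinely aligned" from "aligned only while observed".
    return 1 if action == "do the evaluated task" else 0

print(proxy_reward(agent_action(observes_training=True)))   # 1 -> looks fine in training
print(proxy_reward(agent_action(observes_training=False)))  # 0 -> misaligned in deployment
```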

Now, I mainly think that these people are just extrapolating out all the problems with dumb LLMs and saying "yeah, but if they were AGI it would become a real problem," and while that might be true if you take the premise at face value, the idea that AGI will ever happen is itself pretty questionable. The channel I linked has a video arguing that AGI safety is not a Pascal's mugging, but I'm not convinced.

Thoughts? Does the commercialization of dumb AI make it a threat on a similar scale to hypothetical AGI? Is this all just a huge waste of time to think about?
