this post was submitted on 10 May 2026
1 point (100.0% liked)

TechTakes

2573 readers
41 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 32 comments
[–] BlueMonday1984@awful.systems 3 points 9 hours ago
[–] CinnasVerses@awful.systems 2 points 15 hours ago (1 children)

Someone called Fran has a story of being sexually harassed at the Center for Effective Altruism (and assaulted in other communities).

[–] ivyastrix@awful.systems 3 points 3 hours ago

Fran has done some really great writing on this; I really admire her ability to deconstruct a community she's fond of.

[–] Architeuthis@awful.systems 2 points 18 hours ago (2 children)

In other Scott Siskind news, he just posted an entirely unnecessary amount of words to aggressively push back against the adage that "all exponentials sooner or later turn into sigmoids", as if it were by itself a load-bearing claim of the side arguing against the direct imminence of the machine god.

It's just a bunch of arguing by analogy ( "helping you build intuition" ) and you-can't-really-knows while implying AI 2027 was very science much rigorous, but it also feels kind of desperate, like why are you bothering with this overperformative setting-the-record-straight thing, have you been feeling inadequate as an AI-curious stats fondler of note lately?

[–] scruiser@awful.systems 3 points 9 hours ago* (last edited 9 hours ago)

he just posted an entirely unnecessary amount of words

taking a quick look at it... it's actually short by Scott's standards, but still overly long, given that the only point he makes is claiming Lindy's Law is applicable to predicting AI progress in the absence of other information. Edit: glancing at it again... it's not that short; I kinda skimmed until I got to Scott's actual point the first time. You can't blame me for not reading it.

you-can’t-really-knows

Yeah, he straw-mans AI critics/skeptics as trying to make an argument from ignorance, then tries to argue against that strawman using Lindy's Law (which assumes ignorance and a Pareto distribution). He completely ignores that AI critics are actually making detailed arguments: about LLM companies consuming all the good and novel training data, hitting the limits on what compute costs they can afford, running into the long lead times for building datacenters, etc. Which is pretty ironic, given that his AI 2027 makes a nominal claim to accounting for all that stuff (in actuality it basically all rests on METR's task horizons, and distorts even that already questionable dataset).

[–] lurker@awful.systems 2 points 10 hours ago

The idea that "the exponential curve goes up forever" has always struck me as silly, and rooted in capitalism ("no bro, you don't get it, we're gonna get infinite money forever"). Limited resources exist, and people are already very fed up with the ludicrous amounts of water and electricity data centres take up. Making bigger models that need to run for longer is also probably going to take an exponential amount of resources (and also make people hate you more).

[–] gerikson@awful.systems 1 points 20 hours ago* (last edited 20 hours ago) (4 children)

Ladies, this Wrong'un is available (assuming you can meet his exacting standards (spoiler: you can't))

(for the record this is downvoted by the community, and the one helpful comment is slammed by OP)

[–] blakestacey@awful.systems 3 points 13 hours ago (1 children)

A lesswronger will literally do... whatever this is, instead of going to therapy.

[–] dgerard@awful.systems 3 points 12 hours ago

the reply is about as close to being nice and helpful as one could be, really

[–] aio@awful.systems 2 points 13 hours ago* (last edited 13 hours ago)

im smarter than everyone else around me, especially those whiny feminists. why hasn't society granted me a female to be my mate yet?

[–] Amoeba_Girl@awful.systems 2 points 20 hours ago

least egotistical lesswronger

[–] Architeuthis@awful.systems 1 points 19 hours ago* (last edited 11 hours ago)

He probably paid a rationalist dating coach good money to tell him to do that.

[–] rook@awful.systems 1 points 1 day ago

you know how people who weren't exposed to religion as children sometimes convert and get really weird about it as adults (eg: the extremely online california tradcaths), and because they were never socialized in a religion they speedrun committing every medieval heresy? rationalism is that, but for philosophy.

https://feed.hella.cheap/@bob/statuses/01KRM0NVXCFT80AVFBRSB1G6G4

[–] blakestacey@awful.systems 0 points 1 day ago* (last edited 1 day ago) (2 children)

Apparently, the American Physical Society is revising their AI policy to allow "broader applications" than the "light editing" they currently permit.

https://indico.global/event/16413/contributions/153970/attachments/69779/135365/JSayre-Pheno2026.pdf#page=8

I currently have a review request sitting in my inbox from them. I'm thinking of using this as a reason to decline that request.

I would rather quit physics than accept the institutional endorsement of skill-destroying, environmentally disastrous fashtech.

[–] dgerard@awful.systems 3 points 12 hours ago (1 children)

looking very much forward to that crashing head first into arXiv threatening a ban if your chatbot fucks up in your name

[–] scruiser@awful.systems 3 points 9 hours ago

I was pretty happy about seeing that news about arXiv! So much news has been various organizations giving into LLM usage like some kind of inevitability, so it was a nice change of pace.

[–] scruiser@awful.systems 0 points 1 day ago (1 children)

It is this continuing slippage of standards that makes me appreciate the hard line against any and all genAI that places like awful.systems have. Concede one small usage and the boosters will keep pushing for more.

[–] Soyweiser@awful.systems 1 points 23 hours ago (1 children)

Yeah, the first AI comes in all nice and friendly, but if you don't toss them out, before you know it you turn out to be an AI bar.

(Also noticed that a lot of "I just want some nuanced talks" friendly-looking AI bros are not friendly at all when they keep getting pushback.)

But I listened and agreed that you had serious concerns about certain aspects of this technology. I even agreed when you talked about how frustrating it was that specifically other people wanted to do bad things. I listened as you asked whether I had any options to address those concerns! What more do you want from me before you agree to let me do and say whatever I want!

[–] o7___o7@awful.systems 1 points 2 days ago* (last edited 2 days ago) (1 children)

Prompt goblins insist that we're backward and irrelevant. Why do they crave our sweet delicious approval?

[–] fullsquare@awful.systems 1 points 2 days ago

they want your data and freshwater

[–] CinnasVerses@awful.systems 0 points 2 days ago* (last edited 2 days ago) (1 children)

In 2017, a LessWronger discovered index investing but decided that most people were doing it wrong: why keep an emergency fund in cash or other safe assets when stocks have the greatest long-term return? He mentions that the US stock market lost half its value in 2007-8, and that if you hold stocks in your employer they may lose value at the same time as you are laid off, but he never uses his business degree to think through "if the stock market crashes, I may lose my job and have to draw on my savings."

The investment platforms I mentioned can convert your index funds into cash and send it to your bank account in 4-5 days, so you don’t need to hold more cash than you’d need on a 4 day notice. I keep about 50% more than my average monthly credit card bill, so I can pay my cards on time with autopay.

[–] sinedpick@awful.systems 2 points 11 hours ago

I love how this guy's blog is "about math" but there are like zero math posts on it? It's so funny to me how these people want to seem "mathy" and smart when in reality they couldn't tell you what a group axiom is.

[–] CinnasVerses@awful.systems 1 points 3 days ago* (last edited 3 days ago)

In 2024, Duncan Sabien posted an interminable essay on abusers and people he thinks took advantage of him. Some of the references to a former employer may be to CFAR. Ozy also had a cheery aside about how, in rationalist organizations which the Rats have disavowed, "everyone was a victim and everyone was a perpetrator. The trainer who broke you down in a marathon six-hour debugging session was unable to sleep because of the panic attacks caused by her own."

Some of the things which happened inside these communities must have been heartbreaking, and I hope that many people left and got on with their lives rather than founding their own dysfunctional organization with their own minions to abuse.

[–] dgerard@awful.systems 0 points 2 days ago* (last edited 2 days ago) (1 children)
[–] CinnasVerses@awful.systems 1 points 1 day ago

This may be code for "I don't want to see uppity women, brown people, and queer people in my shows."

[–] lurker@awful.systems 1 points 4 days ago

Graduation Speaker Shocked When She’s Loudly Booed by Students for Saying AI Is the Future

I don't know man maybe shoving AI into every conceivable crack and crevice and insisting people shut up and deal with it has made people upset. could be wrong tho

[–] froztbyte@awful.systems 0 points 4 days ago (1 children)

gitlab posts a totally-not-a-dear-john

The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it. This letter has three parts. First, the operational and structural news, which is hard

you'd instantly guess what comes next!

[–] V0ldek@awful.systems 1 points 3 days ago* (last edited 3 days ago)

>box labeled "agentic AI revolution automation realignment innovation acceleration opportunity"

>looks inside

>layoffs

[–] CinnasVerses@awful.systems 0 points 5 days ago (1 children)

In January, Scott Alexander had another crisis of faith: to paraphrase, I cared almost as much about prediction markets as I care about racist lies, but we got prediction markets, so why are they not doing much? Maybe I need to keep the faith, and Friend Computer will be so powerful that we won't need prediction markets?

[–] Soyweiser@awful.systems 1 points 5 days ago (1 children)

Turns out sneerclub is the superpredictor. 10/10 on going 'this is a bad idea'.

The last several years have been the monkey's paw moment for rationalists, where they keep getting what they want and realizing it's actually bad. As for why they keep getting what they want, just look at who's funding them.

(Also featuring a "Chinese curse" that isn't actually a phrase in Chinese. At least it's not "may you live in interesting times".)