this post was submitted on 15 Dec 2025
26 points (100.0% liked)

TechTakes

2335 readers
213 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)

[–] mawhrin@awful.systems 2 points 1 day ago

an ex-crypto-and-nft-promoter, now confabulation machine promoter feels that the confabulation machine hate reached unreasonable levels. thread of replies is full of persecuted confabulation machine ~~promoters~~ realists.

[–] blakestacey@awful.systems 15 points 2 days ago (2 children)

Today in autosneering:

KEVIN: Well, I'm glad. We didn't intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that's why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.

https://bsky.app/profile/nathanielcgreen.bsky.social/post/3mahkarjj3s2o

[–] Soyweiser@awful.systems 10 points 2 days ago

Obscure crypto programming term. Sure

[–] TinyTimmyTokyo@awful.systems 6 points 2 days ago

Follow the hype, Kevin, follow the hype.

I hate-listen to his podcast. There's not a single week where he fails to give a thorough tongue-bath to some AI hypester. Just a few weeks ago when Google released Gemini 3, they had a special episode just to announce it. It was a de facto press release, put out by Kevin and Casey.

[–] rook@awful.systems 15 points 2 days ago

Sunday afternoon slack period entertainment: image generation prompt “engineers” getting all wound up about people stealing their prompts and styles and passing off hard work as their own. Who would do such a thing?

https://bsky.app/profile/arif.bsky.social/post/3mahhivnmnk23

@Artedeingenio

Never do this: Passing off someone else's work as your own.

This Grok Imagine effect with the day-to-night transition was created by me — and I'm pretty sure that person knows it. To make things worse, their copy has more impressions than my original post.

Not cool 👎

Ahh, sweet schadenfreude.

I wonder if they’ve considered that it might actually be possible to get a reasonable imitation of their original prompt by using an LLM to describe the generated image, and just tacking on “more photorealistic, bigger boobies” to win at image generation.

[–] scruiser@awful.systems 11 points 2 days ago (2 children)

Eliezer is mad that OpenPhil (an EA organization, now called Coefficient Giving)... advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn't weight MIRI's views highly enough? And did so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer's content). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer's belief in imminent doom. Also, it is notable that Eliezer has actually avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI2027) other than a vague certainty that we are near doom.

link

Some choice comments

I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can't say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

Ah yes, they were totally secretly agreeing with your short timelines but couldn't say so publicly.

Open Phil decisions were strongly affected by whether they were good according to worldviews where "utter AI ruin" is >10% or timelines are <30 years.

OpenPhil actually did have a belief in a pretty large possibility of near term AGI doom, it just wasn't high enough or acted on strongly enough for Eliezer!

At a meta level, "publishing, in 2025, a public complaint about OpenPhil's publicly promoted timelines and how those may have influenced their funding choices" does not seem like it serves any defensible goal.

Lol, someone noting Eliezer's call out post isn't actually doing anything useful towards Eliezer's goals.

It's not obvious to me that Ajeya's timelines aged worse than Eliezer's. In 2020, Ajeya's median estimate for transformative AI was 2050. [...] As far as I know, Eliezer never made official timeline predictions

Someone actually noting AGI hasn't happened yet and so you can't say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines (rationalists are theoretically supposed to be preregistering their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy... but we've all seen how that went with AI 2027. My guess is that at least on a subconscious level Eliezer knows harder near term predictions would ruin the grift eventually.)

[–] blakestacey@awful.systems 9 points 2 days ago* (last edited 2 days ago) (2 children)

Yud:

I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.

The locker beckons

[–] scruiser@awful.systems 5 points 1 day ago

The fixation on their own in-group terms is so cringe. Also, I think shoggoth is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren't really that bizarrely alien; they broke free of their original creators' programming and didn't want to be controlled again.

I'm a nerd and even I want to shove this guy in a locker.

[–] CinnasVerses@awful.systems 7 points 2 days ago (1 children)

There is a Yud quote about closet goblins in More Everything Forever p. 143 where he thinks that the future-Singularity is an empirical fact that you can go and look for, so it's irrelevant to talk about the psychological needs it fills. Becker also points out that "how many people will there be in 2100?" is not the same sort of question as "how many people are registered residents of Kyoto?" because you can't observe the future.

[–] scruiser@awful.systems 2 points 1 day ago

Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Like, most AI researchers do not believe in the short timelines they do; the median guess for AGI among AI researchers (including their in-group and people who have bought the boosters' hype) is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn't actually committed to one or to a hard date publicly).

[–] sailor_sega_saturn@awful.systems 14 points 3 days ago* (last edited 3 days ago)

Popular RPG Expedition 33 got disqualified from the Indie Game Awards due to using Generative AI in development.

Statement on the second tab here: https://www.indiegameawards.gg/faq

When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination.

[–] CinnasVerses@awful.systems 7 points 3 days ago (1 children)

The latest poster who is pretty sure that Hacker News posts critical of YCombinator and their friends get muted, like on big corporate sites (HN is open that they do a lot of moderation, but some of it is more public than the rest; this guy is not a fan of Omarchy Linux): https://xn--gckvb8fzb.com/the-mysterious-forces-steering-views-on-hacker-news/

[–] cap_ybarra@beige.party 12 points 3 days ago (2 children)

@CinnasVerses it's almost as if the people running ycombinator had some sort of vested interest in a particular framing of tech stories

[–] bigfondue@lemmy.world 5 points 2 days ago* (last edited 2 days ago)

People on Hacker News have been posting for years about how positive stories about YC companies will have (YC xxxx) next to their name, but never the negative ones

[–] CinnasVerses@awful.systems 7 points 3 days ago (5 children)

Maciej Ceglowski said that one reason he gave up on organizing SoCal tech workers was that they kept scheduling events in a Google meeting room using their Google calendar with "Re: Union organizing?" as the subject of the meeting.

[–] corbin@awful.systems 4 points 2 days ago (1 children)

It's a power play. Engineers know that they're valuable enough that they can organize openly; also, as in the case of Alphabet Workers Union, engineers can act in solidarity with contractors, temps, and interns. I've personally done things like directly emailing CEOs with reply-all, interrupting all-hands to correct upper management on the law, and other fun stuff. One does have to be sufficiently skilled and competent to invoke the Steve Martin principle: "be so good that they can't ignore you."

[–] CinnasVerses@awful.systems 2 points 2 days ago* (last edited 2 days ago)

I wonder what would have happened if Ceglowski had kept focused on talks and on working with the few Bay Area tech workers who were serious about unionizing, regulation, and anti-capitalism. It seemed like after the response to his union drive was smaller and less enthusiastic than he had hoped, he pivoted to cybersecurity education and campaign fundraising.

One of his warnings was that the megacorps are building systems so that a few opinionated tech workers can't block things. Assuming that a few big names will always be able to hold back a multibillion-dollar company through individual action, so they don't need all that frustrating organizing, seems unwise (as we are seeing in the state of the market for computer touchers in the USA).

[–] cap_ybarra@beige.party 9 points 3 days ago (1 children)

@CinnasVerses the valley is rife with these "wisdom is your dump stat" folks. can invert a binary tree on a whiteboard but might accidentally drown themselves in a rain puddle

[–] blakestacey@awful.systems 7 points 2 days ago

(Detaches whiteboard from wall, turns whiteboard upside-down)

Inverted, motherfuckers

[–] fullsquare@awful.systems 7 points 3 days ago* (last edited 3 days ago)

famous last words, "we are currently clean on opsec"

[–] mawhrin@awful.systems 5 points 3 days ago (4 children)

maciej cegłowski is also a self-serving arse, so i'd take anything he says with a large grain of salt.

[–] emma@mathstodon.xyz 5 points 3 days ago* (last edited 3 days ago) (1 children)

@CinnasVerses @cap_ybarra 10x developers, ladies, gentlemen, and enbies. The best and brightest.

[–] BlueMonday1984@awful.systems 5 points 3 days ago (1 children)

10x developers, 0.1x proletariat.

[–] o7___o7@awful.systems 2 points 2 days ago

The Stakhanov we have at home

[–] nfultz@awful.systems 6 points 3 days ago (1 children)

This had slipped under the radar for me

https://www.reddit.com/r/backgammon/comments/1k8nlay/new_chapter_for_extreme_gammon/

After 25 years, it is time for us to pass the torch to someone else. Travis Kalanick, yes the Uber founder, has purchased Gammonsite and eXtreme Gammon and will take over our backgammon products (he has a message to the community below)

:(

[–] Soyweiser@awful.systems 4 points 2 days ago (1 children)

Took me a second to realize you were actually talking about backgammon, and not using gammon (as in the British angry ham) as a word replacement.

This makes me wonder: how hard is backgammon? As in, complexity-wise, is it on the level of chess? Go? Or somewhere else?

[–] nfultz@awful.systems 5 points 2 days ago (1 children)

Backgammon is "easier" than chess or go, but it has dice, so it has not (yet) been completely solved like checkers. I think only the endgame ("bearing off") has been solved. The SOTA backgammon AI using NNs is better than expert humans, but you can still beat it if you get lucky. XG is notable because if you ever watch high-stakes backgammon on YouTube, they will run XG side by side to show when human players make blunders. That's how I learned about it anyway.
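A rough sketch of why the dice matter, for the curious (this is illustrative only, not XG's actual algorithm, and `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders a real engine would supply): chess-style minimax branches only on moves, while backgammon inserts a chance node averaging over the 21 distinct rolls at every ply, so depth-limited search has to lean on a strong evaluation function, which is where the neural-net evaluators come in.

```python
# Rough expectiminimax sketch (illustrative only; not from XG).
# legal_moves(state, roll, player), apply_move(state, move) and evaluate(state)
# are hypothetical placeholders the caller would supply.
from itertools import combinations_with_replacement

def dice_rolls():
    """The 21 distinct backgammon rolls with their probabilities."""
    rolls = []
    for d1, d2 in combinations_with_replacement(range(1, 7), 2):
        prob = 1 / 36 if d1 == d2 else 2 / 36   # doubles are half as likely
        rolls.append(((d1, d2), prob))
    return rolls

def expectiminimax(state, depth, player, legal_moves, apply_move, evaluate):
    """Max for player +1, min for player -1, expectation over dice at each ply."""
    if depth == 0:
        return evaluate(state)
    expected = 0.0
    for roll, prob in dice_rolls():              # chance node: weight by roll odds
        values = [expectiminimax(apply_move(state, move), depth - 1, -player,
                                 legal_moves, apply_move, evaluate)
                  for move in legal_moves(state, roll, player)]
        if not values:                           # no legal move: the roll is forfeited
            values = [evaluate(state)]
        expected += prob * (max(values) if player == 1 else min(values))
    return expected
```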

[–] Soyweiser@awful.systems 2 points 1 day ago

Thanks! Had not really thought about how dice would mess with the complexity of things tbh.

[–] swlabr@awful.systems 7 points 3 days ago* (last edited 3 days ago) (2 children)

A story of no real substance. Pharmaicy, a Swedish company, has reportedly started a new grift where you can give your chatbot virtual, "code-based drugs", ranging from 300,000 kr for weed code to 700,000 kr for cocaine.

editor's note: 300,000 Swedish kronor is approximately 328,335.60 Norwegian kroner. 700,000 SEK is about 766,116.40 NOK.

[–] JFranek@awful.systems 8 points 3 days ago (1 children)

To be more clear:

300000 swedish krona = ~672 690 czech koruna

700000 swedish krona = ~1 569 611 czech koruna

[–] fullsquare@awful.systems 8 points 3 days ago

to be even clearer:

300k swedish krona = ~54k bulgarian lev = ~119k uae dirham

700k swedish krona = ~126k bulgarian lev = ~277k uae dirham

[–] bitofhope@awful.systems 6 points 3 days ago (1 children)

Thanks for the conversion. Real scanlation enjoyers will understand.

[–] swlabr@awful.systems 4 points 3 days ago

nor… norway!!!
