this post was submitted on 09 Mar 2026
11 points (86.7% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] YourNetworkIsHaunted@awful.systems 5 points 2 hours ago (1 children)

The FT reports, citing Amazon insiders, that the company is investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.

The FT also links to several of its previous stories on related issues. I haven't had time to breach the paywalls to read further, but the line that caught my eye was this:

The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.

To be honest, this is why I'm skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it's only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what's happening is, of course, a different story.

[–] mirrorwitch@awful.systems 2 points 1 hour ago

So oil prices are down again, on nothing but a promise from Trump and a promise from the EU. The economy has proven remarkably resilient, as far as I can tell; the attack on Iran was, like, the 17th piece of wild nonsense from the US regime that I thought would trigger a major recession, and didn't.

I mean don't get me wrong, things are much worse now than 3 years ago, clearly. But they're not like, Great Depression worse. They're not even 2008 worse. It's just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher etc.) that people got used to as the new normal. People can get used to lots of things.

To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it's holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I'm told it's down more often, and often in "soft downtime" like notifications not working, or DMs, or some other feature, or it's working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.

I predict that 1) AWS will get slower and costlier thanks to "AI", with more downtime and more stress on the workers; 2) leadership will refuse to see, admit, or even consciously register this; and 3) the degraded services will become the new normal. I predict similar developments for the socioeconomic situation of the world, too, though I'm not ruling out a spiral into a full-blown recession either.

[–] nfultz@awful.systems 4 points 12 hours ago

https://helenofdestroy.substack.com/p/grand-theft-reality h/t naked capitalism

Those interested in upgrading to the full RealityPlus™ experience will soon have not one but three styles of brain chip to choose from, expanding Big Parasite’s vertically-integrated propaganda pipeline into a perfect server-to-cerebrum delivery system while realizing the transhumanist dream of merging with the machines. Sam Altman’s brain-chip company is even called Merge Labs, because subtlety is for poor people. Yes, the guy who says human children waste more energy than OpenAI’s planet-liquidating data centers will be playing tug-of-war for direct access to your cognition with Musk and Mark Zuckerberg. Coverage of this assault on privacy already reads like articles about AI from five years ago: You don’t want a brain implant? Are you some kind of Luddite? Better get over it: “avoiding brain-to-text devices will feel like avoiding smartphones.” It’s not like Meta’s underpaying African contractors to watch you through your augmented-reality Raybans while you shit or something. Why is Meta’s glasses project head Rocco Basilico seemingly named after Roko’s Basilisk, the AI bogeyman who will go back in time to torture you if you don’t help create it? Is Roko’s Basilisk…Jewish? Remember to smile for Sam Altman’s soul-sucking WorldCoin orb or you won’t get your UBI!

[–] corbin@awful.systems 7 points 17 hours ago

Previously, on Awful, I predicted that Oracle would be all-in on the bubble:

Microsoft knows that there’s no money to be made here, and is eager to see how expensive that lesson will be for Oracle; Oracle is fairly new to the business of running a public cloud and likely thinks they can offer a better platform than Azure, especially when fueled by delicious Arabian oil-fund money.

But, uh, there's not going to be any Arabian money while we're dancing in the desert, blowing up the sunshine. The lawnmower is now running low on gas. Today, Oracle continues to make astoundingly bad business decisions:

Oracle is the only major player funding the AI buildout with debt, carrying over $100 billion on its books while free cash flow has gone negative.

[–] blakestacey@awful.systems 8 points 20 hours ago
[–] aninjury2all@awful.systems 6 points 20 hours ago (4 children)
[–] istewart@awful.systems 5 points 17 hours ago

Hmm, he's still sticking to tweet-threads on Twitter. We'll know he's fully cracking when he resorts to Ackman-style unreadable text blocks on there.

[–] gerikson@awful.systems 6 points 19 hours ago* (last edited 19 hours ago)

I somehow missed how American leftists were instrumental in urging the Iranian people to oust the Shah. Leftists like... Jimmy Carter[1]

edit: this is typical US-centrism; other people don't have any agency, it's all about America


[1] I know, I know, about as left-wing as Genghis Khan

[–] sansruse@awful.systems 6 points 19 hours ago (2 children)

to what extent does he actually believe this? is that even a meaningful question? i think this narrative is way too esoteric and absurd to really convince anyone, so it doesn't even appear valuable if his goal is to flood the zone with post-truth nonsense.

I mean it's not too far off from the standard color revolution conspiracy theories where nefarious American intelligence agents and NGOs are working towards regime change and civil strife across the world in order to advance their sinister ideology. But where the "classical" color revolution conspiracy serves to undermine anticommunist movements in Eastern Europe surrounding the fall of the Soviet Union by positioning them as patsies or victims of the CIA, this newer variant that Moldbug is working with is trying to discredit American domestic anti-imperial/anticolonial/antifascist sentiments by positioning them as puppeteers of oppressive foreign regimes. Kind of an uno reverse card being played on the original story, but one that fits with how the American right conceptualizes itself and its domestic opposition.

[–] istewart@awful.systems 8 points 17 hours ago (1 children)

"Aging left" has lost "vitality" - he's phoning this one in, straight out of the house style guide.

[–] aninjury2all@awful.systems 3 points 5 hours ago* (last edited 5 hours ago)

Tudeh are western stooges

Moldbug 🤝 Iranian Government

[–] Soyweiser@awful.systems 2 points 19 hours ago

Really weird focus on Brooklyn.

[–] CinnasVerses@awful.systems 6 points 20 hours ago* (last edited 20 hours ago) (2 children)

Does anyone know a summary of the shakeup at CFAR in 2016? In January, Anna Salamon promised LessWrong that "CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world." In December she announced a pivot to preventing the Reign of Steel. Julia Galef left that year and has not been very visible since. Her husband Luke Muehlhauser is OpenPhil's Managing Director for AI Governance & Policy, so still Roko-curious.

LessWrongers sometimes say that Michael Vassar influenced the curriculum of CFAR's workshops even though he was no longer employed by a Rationalist charity. Brent Dill was living in Berkeley participating in rationalist events at that time.

[–] CinnasVerses@awful.systems 6 points 11 hours ago* (last edited 11 hours ago) (1 children)

CFAR seems to have pivoted back to focusing on the workshops. Their winter 2025/2026 fundraiser only raised $10k with a goal of $125k. The curriculum sounds very New Age:

If you’ve been to a CFAR workshop in the ~2015-2020 era, you should expect that current ones: ... Have roughly 1/3rd new content, mostly aimed at practical ways to be less “seeing like a state” when applying rationality techniques, and to be more “a proud gardener of the living processes inside you / a free person with increasing powers of authorship.” (We've been calling this thread "honoring who-ness.")

No masks in their photo of a workshop posted February 2025 (2024 was a pretty bad year for airborne infections where I live, and alienated, educated young people are more likely to wear respirators than normies, so I would expect to see someone in that room wearing an N95 or Flo). If building warm and nurturing relationships is important, then it helps to be able to eat together and see each other's faces. The venue is about a 90-minute drive from Oakland, CA (the East Bay).

This paragraph leapt out at me:

On Day 4 of the four day workshop, we spent three and a half hours on an activity called Questing, in which participants took turns being the “hero” (who worked on whatever they liked) and the “sidekick” (who assisted at the hero’s direction) for ~10 minute chunks. This activity was extremely well-liked (did best of all activities on our survey; many said many great things about it).

If you read that and say "doesn't that sound like Effort Exchange in the Dragon Army Barracks?" you should go home and rethink the regrettable things you learn on the Internet. I look forward to reading the book on LessWrong, the splinter sects, and just how much they had in common after a hard day gardening in a post-apocalyptic wasteland.

Before FTX collapsed, my model of LW was something like cryptozoology enthusiasts who trade posts and sometimes meet at a con; now it's more like Scientology. Early Scientology offered a community and a path to self-improvement.

[–] YourNetworkIsHaunted@awful.systems 1 points 5 minutes ago

Somehow I had never found that dragon army retrospective before and had the fascinating experience of wanting to explain to someone that "no, what you're describing is actually a cult. Like, you're describing being a cult leader." Which is usually not the person to whom the cult dynamic needs to be identified and explained.

[–] CinnasVerses@awful.systems 7 points 17 hours ago* (last edited 17 hours ago)

The four-day live-in rationality workshops at CFAR remind me of the live-in blog fests and conferences at Lighthaven. Someone in the comments to the January 2016 posts asks why pay $4,000 for a workshop in the SF Bay Area when you can learn similar content at a college where you live or from free online courses (the commenter later recanted this blatant heresy). It's hard to argue that in-person events in the SF Bay Area are an efficient use of funds, but they let people who already live there keep themselves busy.

Hello from the Center for Applied Rationality! ... We have a new experimental mini-workshop coming up soon (June 2025) and hopefully more workshop content to follow after! ... Pricing is $750 for the CFAR event, plus another $450 to sign up for Arbor (at Lighthaven in Berkeley). This is notably cheaper than the $3900 we've historically charged for most mainline CFAR workshops, since it's a more experimental program -- future workshops will likely be more expensive than this test. https://less-wrong.livejournal.com/4396115.html

This post claims that they could not find anyone doing anything similar (https://acritch.com/cfar-scaling/). I know a US military veteran who had a critical-thinking course he pulled out whenever he had a training day to occupy, so maybe they needed to look outside their bubble?

[–] lurker@awful.systems 7 points 22 hours ago* (last edited 13 hours ago) (2 children)

Anthropic is suing the Pentagon

This whole saga is a resounding “everyone sucks here”, but I’m gonna have to side with Anthropic on this one because at least they have some incredibly basic standards, which is far more than I can say for the current government and OpenAI. The real best outcome, though, is if the government and the AI industry destroy each other.

(this has now been deemed high-quality enough for its own post)

[–] scruiser@awful.systems 8 points 18 hours ago* (last edited 18 hours ago)

The specific article's framing pisses me off...

Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.

As to who picked a fight with whom: the DoD wanted to change the terms of their contract, and Anthropic apparently compromised on every term except mass surveillance of Americans (fuck the rest of the world, I guess) and fully autonomous weapons (cause a human clicking "yes to confirm" makes slop-bot-powered drones so much better). This wasn't good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So framing it as Anthropic "picking a fight" is bullshit. I mean, they did kind of bring it on themselves by hyping up their slop machine like it was sci-fi AGI, but they didn't start the fight.

For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.

So they find a quote about contracts, but a Supply Chain Risk isn't just the DoD deciding on contracts; it is a specific power with specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court's composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch's judgement, even if the process behind the judgement was "Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on Twitter, and the departments underneath them rushed to retroactively invent a saner rationalization". If the DoD had just ended the contract (without all the public threats of SCR or invoking the Defense Production Act), Anthropic wouldn't be in a position to sue and this drama wouldn't have been as publicized in the first place.

But the lawsuit itself takes a dramatically different tone.

Yeah, because one is the language of a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is the language of a court case about the actual law.

[–] gerikson@awful.systems 5 points 19 hours ago (1 children)
[–] scruiser@awful.systems 6 points 18 hours ago

If the DoD accidentally pops the AI bubble by triggering a cascade when Anthropic runs into issues; then later loses the court case in a humiliating enough way; then loses a civil case, with the money going to pay the debts owed in Anthropic's bankruptcy proceedings; and the American public blames the Trump administration, the Republican party, the parts of the Democratic Party that acted as pathetic enablers, and the tech CEOs (without letting any of them shift the blame to the others) for the following economic depression... I would count that as a relative win?

[–] froztbyte@awful.systems 5 points 1 day ago

Missed this when it was first going around

Leave it to fucking zuckco to figure out the worst way to make They Live happen

[–] BlueMonday1984@awful.systems 9 points 1 day ago (2 children)

Starting this Stubsack off, I've found another FOSS project that hit the digital krokodil - ntfy.sh v2.18.0 was written by AI

[–] mirrorwitch@awful.systems 8 points 1 day ago

I feel like at this point I want to highlight the ones that took a clear stance against LLM code. On a chardet thread, people listed:

  • Gentoo
  • Servo
  • Loupe
  • Qemu
  • postmarketOS
  • GoToSocial
  • Zig

[–] BasiqueEvangelist@awful.systems 4 points 1 day ago (1 children)

guess i'll have to write my own unifiedpush provider
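
For context, the core of such a provider is just a relay: an application server POSTs a message to a per-registration endpoint URL, and the provider hands it to the subscribed device. A toy sketch of that pattern follows; the routes, token handling, and polling-based delivery are simplifications assumed for illustration, not the actual UnifiedPush server spec.

```python
# Toy relay sketch: accept pushes at a per-token endpoint and let a client
# collect them later. A real provider would deliver over a persistent
# connection and follow the UnifiedPush spec; this only shows the shape.
from collections import defaultdict, deque

from flask import Flask, jsonify, request

app = Flask(__name__)
queues: dict[str, deque] = defaultdict(deque)  # token -> pending messages

@app.post("/push/<token>")
def push(token: str):
    # Application servers POST the raw message body to the endpoint URL.
    queues[token].append(request.get_data(as_text=True))
    return "", 201

@app.get("/poll/<token>")
def poll(token: str):
    # Stand-in for real delivery: the client drains its queue by polling.
    pending = list(queues[token])
    queues[token].clear()
    return jsonify(pending)

if __name__ == "__main__":
    app.run(port=8080)
```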

[–] antifuchs@awful.systems 4 points 1 day ago (1 children)

I’m still happy with Pushover. Hasn’t changed in a decade (and a half?! Been using that since 2012, damn), works really pretty well.

It’s not self-hosted but when there are push notification services on the path, nothing really is.
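
For anyone who hasn't used it, the service is basically one HTTP call. A minimal sketch of sending yourself a notification via Pushover's documented messages endpoint; the token and user key values are placeholders you'd get from your own Pushover account and application:

```python
# Minimal Pushover notification: POST your app token, user key, and message
# to the messages endpoint. Placeholder credentials below, not real ones.
import requests

APP_TOKEN = "your-application-token"   # from a Pushover application you create
USER_KEY = "your-user-key"             # from your Pushover account page

resp = requests.post(
    "https://api.pushover.net/1/messages.json",
    data={
        "token": APP_TOKEN,
        "user": USER_KEY,
        "title": "backup finished",
        "message": "nightly backup completed without errors",
    },
    timeout=10,
)
resp.raise_for_status()  # Pushover returns JSON; HTTP 200 means the message was queued
```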

[–] anise@quokk.au 5 points 1 day ago (1 children)

there's also overpush, which is meant as a self-hostable drop-in replacement for pushover and does not use ai afaict.

[–] antifuchs@awful.systems 1 points 4 hours ago

Ooh, that’s really cool. Also, the section on the various e2e methods is very sobering; it reads as a pretty thorough indictment of all the chat systems out there.

[–] o7___o7@awful.systems 5 points 1 day ago

Never have so few been so unsatisfied to be so correct.