this post was submitted on 14 Jul 2025
19 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] blakestacey@awful.systems 6 points 2 hours ago

Evan Urquhart:

I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using it and I couldn't stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.

[–] YourNetworkIsHaunted@awful.systems 6 points 3 hours ago (1 children)

Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because it's a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.

It's a machine learning system, not an actual human boss. The system is set up to try and find the breaking point, where if you finish your route on time it assumes you can handle a little bit more and if you don't it backs off.

The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard, while the algorithm that creates the routes is designed to make doing so just barely possible. Because it's not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesn't appear to recognize as options) effectively pushes the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turns that edge into the standard. And that's how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
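
To make the dynamic concrete, here's a minimal sketch of that kind of feedback loop. Everything in it (the function name, step sizes, stop counts) is made up for illustration; it's not Amazon's actual system, just the shape of the ratchet described above.

```python
# Hypothetical sketch only: made-up names and numbers, not Amazon's algorithm.

def adjust_route_size(planned_stops: int, finished_on_time: bool,
                      step_up: int = 5, step_down: int = 10) -> int:
    """Nudge the next route's planned stops based on whether today's was finished on time."""
    if finished_on_time:
        # Finished on time, so the system assumes the driver can handle a bit more.
        return planned_stops + step_up
    # Missed the target, so it backs off.
    return max(planned_stops - step_down, 0)

# A driver who skips breaks keeps "finishing on time", so the planned load
# ratchets upward until taking the legally protected breaks means missing
# the minimum standard the rest of the organization enforces.
stops = 150
for finished in (True, True, True, False):
    stops = adjust_route_size(stops, finished_on_time=finished)
    print(stops)  # 155, 160, 165, 155
```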

Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP don't explicitly call for anything illegal and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.

[–] nightsky@awful.systems 3 points 1 hour ago

If anyone else is wondering: DSP here I think stands for "Delivery Service Partner", and driver for someone driving a vehicle. (I assumed the context "Digital Signal Processing" and driver as in "device driver" at first and was quite confused :P)

On-topic: I think regulation needs to come down hard on the delivery industry in general, be it parcels or food or whatever; working conditions there have been terrible for a long time.

[–] BlueMonday1984@awful.systems 10 points 5 hours ago* (last edited 5 hours ago)

Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

I've got time, so I'll fire off a sidenote:

In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

[–] TinyTimmyTokyo@awful.systems 10 points 16 hours ago (4 children)

Daniel Koko's trying to figure out how to stop the AGI apocalypse.

How might this work? Install TTRPG aficionados at the chip fabs and tell them to roll a saving throw.

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don't roll a 1 are destroyed on the spot.

And if that doesn't work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.

"For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes."

[–] bitofhope@awful.systems 10 points 7 hours ago (1 children)

Suppose further that enough powerful people are concerned about the poverty in Ireland, anti-catholic discrimination, food insecurity, and/or loss of rental revenue, that there's significant political will to Do Something. Should we ban starvation? Should we decolonise? Should we export their produce harder to finally starve Ireland? Should we sign some kind of treaty? Should we have a national megaproject to replace the population with the British? Many of these options are seriously considered.

Enter the baseline option: Let the Irish sell their babies as a delicacy.

[–] maol@awful.systems 4 points 4 hours ago* (last edited 4 hours ago)

Funnily enough, there are a lot of data centres in Ireland. Maybe there will be a missile strike and Ireland's population will shrink back to 19th century numbers

[–] Soyweiser@awful.systems 8 points 9 hours ago* (last edited 7 hours ago)

The sanctions and inspections idea is so silly, esp after what the USA/Trump did to Iran. (I mean the US deciding that Iran wasn't keeping their end of the bargain and was still making uranium. So after the deal ended, Iran started to make more uranium for real. Gg everyone).

Also 'cull the gpus': [angry gamer noises]

I'm not gonna advocate for it to happen but I'm pretty sure the world would be overall in a much healthier place geopolitically if someone actually started yeeting missiles into major American cities and landmarks. It's too easy to not really understand the human impact of even a successful precision strike when the last times you were meaningfully on the other end of an airstrike were ~20 and ~80 years ago, respectively.

[–] BlueMonday1984@awful.systems 9 points 11 hours ago (1 children)

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.

Ah, yes, artificially kneecap chip fabs' yields, I'm sure that will go over well with the capitalist overlords who own them

Someone didn't get the memo about nVidia's stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missile'd?

[–] yellowcake@awful.systems 9 points 22 hours ago

Ian Lance Taylor (of GOLD, Go, and other tech fame) had a take on chatbots and AGI that I was glad to see from an influential figure in computing. https://www.airs.com/blog/archives/673

The summary is that chatbots are not AGI, that using the current AI wave as the usher to AGI is not it, and that he all-around dislikes, in a very polite way, that chatbot LLMs are seen as AI.

Apologies if this was posted when published.

[–] nfultz@awful.systems 6 points 1 day ago* (last edited 1 day ago)

Nikhil's guest post at Zitron just went up - https://www.wheresyoured.at/the-remarkable-incompetence-at-the-heart-of-tech/

EDIT: the intro was strong enough I threw in $7. Second half is just as good.

[–] TinyTimmyTokyo@awful.systems 12 points 1 day ago (3 children)
[–] bitofhope@awful.systems 7 points 17 hours ago

The whole internet loves Éspèrature Trouvement, the grumpy old racist! 5 seconds later: We regret to inform you the racist is not that old and actually has a pretty normal name. Also don't look up his RuneScape username.

[–] maol@awful.systems 7 points 23 hours ago (1 children)

Fucking hell. Not the most important part of the story, but his elaborate lies about being Jewish are very very weird. Kind of like white Americans pretending that they're Cherokee I guess?

[–] TinyTimmyTokyo@awful.systems 16 points 21 hours ago (1 children)

It's not that weird when you understand the sharks he swims with. Race pseudoscientists routinely peddle the idea that Ashkenazi Jews have higher IQs than any other ethnic or racial group. Scoot Alexander and Big Yud have made this claim numerous times. Lasker pretending to be a Jew makes more sense once you realize this.

[–] maol@awful.systems 4 points 4 hours ago

I'm aware of the idea, but it's still very weird for someone to pretend to be Jewish and also be a Nazi!

[–] Architeuthis@awful.systems 9 points 1 day ago* (last edited 1 day ago)

also here https://awful.systems/post/4995759

The long and short of it is Mother Jones discovered TPO's openly Nazi alt.

[–] nfultz@awful.systems 12 points 2 days ago (1 children)

https://www.profgalloway.com/ice-age/ Good post until I hit the below:

Instead of militarizing immigration enforcement, we should be investing against the real challenge: AI. The World Economic Forum says 9 million jobs globally may be displaced in the next five years. Anthropic’s CEO warns AI could eliminate half of all entry-level white-collar jobs. Imagine the population of Greece storming the shores of America and taking jobs (even jobs Americans actually want), as they’re willing to work 24/7 for free. You’ve already met them. Their names are GPT, Claude, and Gemini.

Having a hard time imagining 300 but AI myself, Scott. Could we like, not shoehorn AI into every other discussion?

[–] Soyweiser@awful.systems 13 points 1 day ago* (last edited 1 day ago) (3 children)

Iirc Galloway was a pro-cryptocurrency guy. So this tracks.

E: imagine if the 3d printer people had the hype machine behind them like this. 'China better watch out, soon all manufacturing of products will be done by people at home'. Meanwhile China: [Laughs in 大跃进].

[–] mlen@awful.systems 8 points 1 day ago (2 children)

I think that 3D printing never picked up because it's one of those things that empowers people, e.g. to repair stuff or build their own things, so the number of opportunities to grift seems to be smaller (although I'm probably underestimating it).

Most of the recently hyped technologies had goals that were exact opposites of empowering the masses.

[–] swlabr@awful.systems 6 points 1 day ago (1 children)

Tangential: I’ve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didn’t pollute the spaces they operate in. The example I’ve heard of is artist alleys at conventions: a 3D printer person will set up a stall and sell plastic models of dragons or pokemon or whatever. Everything is terrible!

[–] BlueMonday1984@awful.systems 5 points 11 hours ago

Tangential: I’ve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didn’t pollute the spaces they operate in.

So, essentially AI slop, but with more microplastics. Given the 3D printer bros are much more limited in their ability to pollute their spaces (they have to pay for filament/resin, they're physically limited in where they can pollute, and they produce slop much slower than an LLM), they're hopefully easier to deal with.

[–] Soyweiser@awful.systems 6 points 1 day ago

I think that is it tbh. There was no big centralized profit, so no need to hype it up.

[–] nfultz@awful.systems 6 points 1 day ago

I liked his stuff on WeWork back in the day. Funny how he could see one tech grift really clearly and fall for another. Then again, WeWork is in the black these days. Anyway, I think Galloway pivoted (apologies) to Men's Rights lately, and he also gave some money to UCLA Extension (i.e. not the main campus), which is a bit hard to interpret.

[–] fullsquare@awful.systems 6 points 1 day ago (1 children)

yeah lol ez just 3dprint polypropylene polymerization reactor. what the fuck is hastelloy?

[–] Soyweiser@awful.systems 9 points 1 day ago (1 children)

Yeah, but we never got that massive hype cycle for 3D printers. Which in a way is a bit odd, as it could have happened. Nanomachines! Star Trek replicators! (Getting a bit off-topic from Galloway being a cryptobro).

[–] scruiser@awful.systems 7 points 1 day ago

I can imagine it clearly... a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted-line projection of when 3D printers would get down to nanotech scale. 3D printer-related companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3D printing technology). Everyone would be buying 3D printers at home, and lots of shitty startups would be selling crappy 3D-printed junk.

[–] gerikson@awful.systems 17 points 2 days ago* (last edited 2 days ago) (7 children)

Here's an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don't Get Why Normies Don't Freak Out:

For quite a while, I've been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

(Dude then goes on to try to game-theorize this, I didn't bother to poke holes in it)

The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of “omnicide” is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

At least one commenter gets it:

Most people distinguish between intentional acts and shit that happens.

(source)

Edit: never read the comments (again). The commenter referenced above obviously didn't feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than about traffic fatalities. Nice "save", dipshit.

[–] lagrangeinterpolator@awful.systems 13 points 2 days ago (2 children)

Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?

One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?

(I don't know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)
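
(Concretely, that last point is just the conjunction rule: every extra assumption a scenario requires can only lower its overall probability, since each conditional factor is at most 1. A one-line version:)

```latex
% Conjunction rule: each additional required assumption A_k can only shrink
% the probability of the full scenario.
P(A_1 \cap A_2 \cap \dots \cap A_n)
  = P(A_1)\prod_{k=2}^{n} P(A_k \mid A_1,\dots,A_{k-1})
  \le P(A_1)
```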

You know, I hadn't actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal's wager. Only instead of Heaven being infinitely good if you convert there's some infinitely bad thing that happens if you don't do whatever Eliezer asks of you.

[–] fullsquare@awful.systems 13 points 2 days ago (2 children)

reverse dust specks: how many LWers would we need to permanently deprive of internet access to see rationalist discourse die out?

load more comments (2 replies)
load more comments (6 replies)