
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] BlueMonday1984@awful.systems 3 points 1 hour ago (1 children)

The HarfBuzz maintainer has drunk the slop-aid - Baldur has commented on it, warning of the potentially catastrophic consequences:

Fonts are a lucrative target. They require a complex parser, usually written in a language that isn't memory safe, and often directly exposed to outside data (websites, PDFs, etc. that contain fonts). This means a flaw could lead to the worst-case attack scenario: arbitrary code execution. HarfBuzz is pretty much the only full-featured library that takes font files, parses them, and returns glyphs ready to render. It is ubiquitous. A security flaw in HarfBuzz could make a good portion of the world's user-facing software (i.e. that renders text) unsafe.
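To make concrete how much parsing sits between a hostile font file and glyphs on screen, here's a minimal sketch of the standard HarfBuzz C shaping flow (the filename and text are placeholders, error handling omitted); everything before the glyph array comes back is operating on attacker-supplied bytes:

```c
#include <hb.h>      /* HarfBuzz core API; build with: cc demo.c $(pkg-config --cflags --libs harfbuzz) */
#include <stdio.h>

int main(void) {
    /* The (placeholder) font file is the untrusted input: HarfBuzz parses its tables. */
    hb_blob_t *blob = hb_blob_create_from_file("untrusted.ttf");
    hb_face_t *face = hb_face_create(blob, 0);
    hb_font_t *font = hb_font_create(face);

    /* Shape some text against that font: this walks the parsed font data. */
    hb_buffer_t *buf = hb_buffer_create();
    hb_buffer_add_utf8(buf, "hello", -1, 0, -1);
    hb_buffer_guess_segment_properties(buf);
    hb_shape(font, buf, NULL, 0);

    /* Result: glyph IDs, ready for a renderer to draw. */
    unsigned int n;
    hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, &n);
    for (unsigned int i = 0; i < n; i++)
        printf("glyph %u\n", info[i].codepoint);

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
    hb_face_destroy(face);
    hb_blob_destroy(blob);
    return 0;
}
```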

[–] mawhrin@awful.systems 1 points 1 hour ago

luis villa, who as a lawyer really should have known better, and who is self-reportedly a friend of behdad and a confabulation/war-machine promoter, decided to come to the rescue, calling the above (a) an attack and (b) slander.

[–] sc_griffith@awful.systems 5 points 2 hours ago (1 children)

one of the brain geniuses at bluesky

while describing how his phone overheated and died: Okay, so full transparency. I did dip it in a pool after it said it was too hot. was told not to do that before doing so, but my hubris said "nah these things are waterproof" and into the swim it went. go in the pool when I want to cool down, why can't my phone?

[–] anise@quokk.au 1 points 1 hour ago

how… what… how… why… why would you think…

[–] swlabr@awful.systems 6 points 8 hours ago (1 children)

scott jumpscare

[–] mirrorwitch@awful.systems 10 points 10 hours ago (1 children)

in the past 24 hours I was fooled by 3 pieces of fake news in a row:

  • that Kurds from Iraq were crossing the border to fight in Iran
  • that Windows 12 would be AI-centred or require an AI chip to work (I helped spread this)
  • that Spain has capitulated and let the US use its ports for war (erroneously claimed by a WH official).

I know that fake news can be made organically and has been since forever, and I'm doing selection bias here, but I can't help but picture the misinformation engines firehosing bullshit constantly until some of it catches and spreads.

[–] gerikson@awful.systems 5 points 8 hours ago (1 children)

yeah it's bad

otoh awareness I think is spreading

swedish public broadcasting has regular "spot the fake" pieces on their website

I think giving a sensationalist bit of news 6 hours to "mature" before amplifying it is a good idea.

[–] o7___o7@awful.systems 3 points 3 hours ago

I like this. News is a frittata; it needs time to set before consuming.

[–] swlabr@awful.systems 5 points 11 hours ago (1 children)

I’ve been seeing some people (not here, I’ve been taking a break) saying that we shouldn’t be mean to clankers by bringing up Kant’s position on being nice to animals. Well. Fuck all that.

[–] anise@quokk.au 2 points 1 hour ago (1 children)

animals are like sentient beings y'know, a clanker is a… matrix or a bunch of matrices or something

[–] lagrangeinterpolator@awful.systems 1 points 5 minutes ago

Hey, you're selling them short: there are also ReLU and softmax activation functions thrown around here and there. Clankers aren't just linear transformations!

[–] BlueMonday1984@awful.systems 13 points 16 hours ago (4 children)

Recently discovered (indirectly, through fedi) that Donald Knuth got oneshot by Claude - feeling the itch to write about tech's vulnerability to LLMs because of it.

[–] YourNetworkIsHaunted@awful.systems 3 points 1 hour ago (1 children)

Even in Knuth's account it sounds like the LLM contribution was less in solving the problem and more in throwing out random BS that looked vaguely like different techniques were being applied until it spat out something that Knuth and his collaborator were able to recognize as a promising avenue for actual work.

His bud Filip Stappers rolled in to help solve an open digraph problem Knuth was working on. Stappers fed the decomposition problem to Claude Opus 4.6 cold. Claude ran 31 explorations over about an hour: brute force (too slow), serpentine patterns, fiber decompositions, simulated annealing. At exploration 25 it told itself “SA can find solutions but cannot give a general construction. Need pure math.” At exploration 30 it noticed a structural pattern in an earlier solution. Exploration 31 produced a working construction.

I am not a mathematician or computer scientist and so will not claim to know exactly what this is describing and how it compares to the normal process for investigating this kind of problem. However, the fact that it produced 4 approaches over 31 attempts seems more consistent with randomly throwing out something that looks like a solution rather than actually thinking through the process of each one. In a creative exploration like this where you expect most approaches to be dead ends rather than produce a working structure maybe the LLM is providing something valuable by generating vaguely work-shaped outputs that can inspire an actual mind to create the actual answer.

Filip had to restart the session after random errors and had to keep reminding Claude to document its progress. The solution covers only one type of case; when Claude tried to continue another way, it “seemed to get stuck” and eventually couldn’t run its own programs correctly.

The idea that it's ultimately spitting out random answer-shaped nonsense also follows from the amount of babysitting that was required from Filip to keep it actually producing anything useful. I don't doubt that it's more efficient than I would be at producing random sequences of work-shaped slop and redirecting or retrying in response to a new "please actually do this" prompt, but of the two of us only one is demonstrating actual intelligence and moving towards being able to work independently. Compared to an undergrad or myself I don't doubt that Claude has a faster iteration time for each of those attempts, but that's not even in the same zip code as actually thinking through the problem, and if anything it serves as a strong counterexample to the doomer critihype about the expanding capabilities of these systems.

This kind of high-level academic work may be a case where random slop is actually useful, but that's an incredibly niche area and does not do nearly as much as Knuth seems to think it does in terms of justifying the incredible cost of these systems. If anything, the narrative that "AI solved the problem" is giving Anthropic credit for the work that Knuth and Stappers were putting into actually sifting through the stream of slop and identifying anything useful. Maybe babysitting the slop sluice is more satisfying or faster than going down every blind alley on your own, but you're still the one sitting in the river with a pan, and pretending the river is somehow pulling the gold out of itself is just damn foolish.

[–] lagrangeinterpolator@awful.systems 1 points 44 minutes ago* (last edited 30 minutes ago)

I am a computer science PhD so I can give some opinion on exactly what is being solved.

First of all, the problem is very contrived. I cannot think of what the motivation or significance of this problem is, and Knuth literally says that it is a planned homework exercise. It's not a problem that many people have thought about before.

Second, I think this problem is easy (by research standards). The problem is of the form: "Within this object X of size m, find any example of Y." The problem is very limited (the only thing that varies is how large m is), and you only need to find one example of Y for each m, even if there are many such examples. In fact, Filip found that for small values of m, there were tons of examples of Y. In this scenario, my strategy would be "random bullshit go": there are likely so many ways to solve the problem that a good idea is literally just trying stuff and seeing what sticks (a throwaway sketch of that loop follows the list below). Knuth did say the problem was open for several weeks, but:

  1. Several weeks is a very short time in research.
  2. Only he and a couple friends knew about the problem. It was not some major problem many people were thinking about.
  3. It's very unlikely that Knuth was continuously thinking about the problem during those weeks. He most likely had other things to do.
  4. Even if he was thinking about it the whole time, he could have gotten stuck in a rut. It happens to everyone, no matter how much red site/orange site users worship him for being ultra-smart.
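For what it's worth, "random bullshit go" is easy to sketch. The candidate type and the check below are made-up stand-ins (not the actual digraph problem), but when examples of Y are dense in X, blind sampling plus a cheap verifier really does find one quickly:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CAND_LEN 8

/* Draw a random candidate with entries in [0, m). */
static void random_candidate(int *c, int m) {
    for (int i = 0; i < CAND_LEN; i++)
        c[i] = rand() % m;
}

/* Placeholder verifier for "is this an example of Y?" (here: any repeated entry). */
static bool is_example(const int *c) {
    for (int i = 0; i < CAND_LEN; i++)
        for (int j = i + 1; j < CAND_LEN; j++)
            if (c[i] == c[j])
                return true;
    return false;
}

int main(void) {
    srand((unsigned)time(NULL));
    int m = 10, c[CAND_LEN];
    /* Try random candidates until one passes the cheap check. */
    for (long tries = 1;; tries++) {
        random_candidate(c, m);
        if (is_example(c)) {
            printf("found an example after %ld tries\n", tries);
            return 0;
        }
    }
}
```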

I guess "random bullshit go" is served well by a random bullshit machine, but you still need an expert who actually understands the problem to read the tea leaves and evaluate if you got something useful. Knuth's narrative is not very transparent about how much Filip handheld for the AI as well.

I think the main danger of this (putting aside the severe societal costs of AI) is not whether doing this is faster or slower than just thinking through the problem yourself. It's that relying on AI atrophies your ability to think, and eventually even your ability to guard against the AI bullshitting you. The only way to retain a deep understanding is to constantly be in the weeds thinking things through. We've seen this story play out in software before.

[–] lagrangeinterpolator@awful.systems 8 points 4 hours ago* (last edited 3 hours ago)

Baldur Bjarnason's essay remains evergreen.

Consider homeopathy. You might hear a friend talk about “water memory”, citing all sorts of scientific-sounding evidence. So, the next time you have a cold you try it.

And you feel better. It even feels like you got better faster, although you can’t prove it because you generally don’t document these things down to the hour.

“Maybe there is something to it.”

Something seemingly working is not evidence of it working.

  • Were you doing something else at the time which might have helped your body fight the cold?

  • Would your recovery have been any different had you not taken the homeopathic “remedy”?

  • Did your choosing of homeopathy over established medicine expose you to risks you weren’t aware of?

Even when looking at Knuth's account of what happened, you can already tell that the AI is receiving far more credit than its actual contribution warrants. There is something about a nondeterministic slot machine that makes it feel far more miraculous when it succeeds, while reliable tools that always do their job are boring and stupid. The downsides of the slot machine never register in comparison to the rewards. Does it feel so miraculous when I get an idea after experimenting in Mathematica?

I feel like math research is particularly susceptible to this, because it is the default that almost all of one's attempts do not succeed. So what if most of the AI's attempts do not succeed? But if it is to be evaluated as a tool, we have to check if the benefits outweigh the costs. Did it give me more productive ideas, or did it actually waste more of my time leading me down blind alleys? More importantly, is the cognitive decline caused by relying on slot machines going to destroy my progress in the long term? I don't think anyone is going to do proper experiments for this in math research, but we have already seen this story play out in software. So many people were impressed by superficial performances, and now we are seeing the dumpster fire of bloat, bugs, and security holes. No, I don't think I want that.

And then there is the narrative of not evaluating AI as an objective tool based on what it can actually do, but instead as a tidal wave of Unending Progress that will one day sweep away those elitists with actual skills. Random lemmas today mean the Millennium Prize problems tomorrow! This is where the AI hype comes from, and why people avoid, say, comparing AI with Mathematica. To them I say good luck. We have dumped hundreds of billions of dollars into this, and there are only so many more hundreds of billions of dollars left. Were these small positive results (and significant negatives) worth hundreds of billions of dollars, or perhaps were there better things that these resources could have been used for?

[–] mirrorwitch@awful.systems 8 points 10 hours ago (1 children)

ooh gooods nooo now all the Claude slurpers are going to refer to this forever as definitive proof of how legitimately useful LLMs have got, it "solved" a math problem for Donald Knuth! :<

[–] gerikson@awful.systems 7 points 8 hours ago (2 children)

A lobster invokes the classic argument from authority:

First Terence Tao and now Donald Knuth.

If you're still on the fence about AI, you have to take it seriously now.

yeah b/c I'm a professional computer scientist ...

I was pissed when my (non-academic) friends saw this and immediately started talking about how mathematicians and computer scientists need to use AI from now on.

[–] nightsky@awful.systems 7 points 5 hours ago

If you’re still on the fence about AI, you have to take it seriously now.

But... why?

Always remember that Nobel disease is a thing.

The one I often think about is the person who invented PCR and then later claimed to have had an encounter with a fluorescent talking raccoon of possibly extraterrestrial origin.

[–] lurker@awful.systems 7 points 15 hours ago* (last edited 15 hours ago)

oh hey I remember reading that Donald Knuth paper earlier today, when it got posted by an AI youtube channel as 'proof' AI is on the path to AGI

[–] dgerard@awful.systems 7 points 16 hours ago (1 children)

jesus fuck https://urbit.org/blog/olif-and-urbit-ids

with urbit, you can now sniff each other's farts

[–] swlabr@awful.systems 5 points 11 hours ago (2 children)

Istg this has come up before, i am just too lazy to prove it. Still. Why would anyone want this

[–] anise@quokk.au 1 points 1 hour ago

It has, but I honestly thought it was fake and/or satire

[–] gerikson@awful.systems 6 points 8 hours ago

thought it was satire, genuinely surprised it's an official Urbit-sponsored project

also very much goes against the grain of elevating the mind over the body which is the vibe I get from urbit and environs

[–] CinnasVerses@awful.systems 9 points 22 hours ago (1 children)

Blast from the past: in 2014, Scott Alexander posted a take on marijuana legalization which showed excellent knowledge of medical papers but huge gaps in his knowledge of what brown people or smart policy reformers have to say. David Gerard and Christopher Hallquist show up in the comments, and there's a digression on how pot affects your IQ with gwern chipping in. Alexander came back in 2018 promising that he was right all along, with a footnote about how some people in the comments told him that people like smoking weed and he did not know how to process that, because his utilitarian calculation said it was bad for society.

[–] corbin@awful.systems 4 points 15 hours ago* (last edited 15 hours ago) (2 children)

It's curious how, in terms of utilitarianism, the 2014 post has people doing arithmetic to estimate QALYs but the 2018 post is more of a handwave where Scoot repeats the 2014 numbers verbatim. Advocates of decriminalization and legalization have long argued that the QALYs saved by releasing people from prison and no longer sentencing them (easily 20+ QALYs/person) and not arresting people for possession in the first place (0.5 QALYs/person-arrest) are significant to society at large, even if there were quantifiable health risks.

TBH I think that Scoot got a bit of a tough surprise when data actually came in on cannabis usage; it's now accepted cannabis lore that cannabis can cause onset of e.g. schizophrenia, at a rate of something like 1 in 2000 users, but the numbers on causing cancer never materialized. Meanwhile the case studies treating e.g. epilepsy have multiplied to the point where, again, it's now accepted lore that some epileptics find relief by using products made from high-CBD strains.

Choice sneer from the second post, from somebody with an extremely-relevant Moray avatar:

Yeah but you know what would achieve better results? Criminalizing driving.

Edit: grammar and also the extremely-relevant link. Pass the Moray, please~

[–] CinnasVerses@awful.systems 5 points 15 hours ago* (last edited 15 hours ago)

I didn't know that Moray in QC was around in 2018!

That is a good example because it shows the failure of imagination (can imagine the end of the world, can't imagine working public transit and public policy to discourage driving), and because if he thought it through he might get to "hmm, some people like to drive, but it's bad for public and social health, how can we discourage it while preserving liberties?"

I really wonder what he did as a medical student in Cork other than study and read racist Tumblr accounts. Did his friends never drag him to Amsterdam to ride a bike and eat an edible?

[–] CinnasVerses@awful.systems 4 points 15 hours ago

Another surprise is that illegal weed still has 30% of the market in Canada. I don't know how much of that is consumer inertia ("My buddy Mike always gets me the good stuff eh") and how much is avoiding taxes.

[–] nfultz@awful.systems 5 points 21 hours ago (1 children)

https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/

Some of my faculty have called for a campus-wide boycott. Relatedly, the Scott Galloway scoreboard is up to a $250m hit to tech market cap: https://www.resistandunsubscribe.com/

[–] lurker@awful.systems 5 points 18 hours ago (1 children)

while OpenAI deserves every bit of flak they get, it's comical to see people who criticise OpenAI for creating a 'war machine' turn around and praise Anthropic, when Anthropic was, by their own admission no less, the first to start using AI for military purposes

I mean, I can understand the argument that Anthropic at least maintained a fig leaf of ethics, but notably, based on Saltman's statements, OpenAI does still feel the obligation to maintain those optics; they're just not nearly as credible at doing so.

[–] mirrorwitch@awful.systems 10 points 1 day ago

Zac Bowden at Windows Central

The good news is the report is false. According to contacts that are familiar with the Windows roadmap, there is no plan to ship a Windows 12 this year. In fact, I understand that the Windows roadmap for 2026 is all about fixing Windows 11 and attempting to improve its reputation by addressing top feedback such as reducing AI bloat across the OS.

"We have heard your complaints about lead in the paint, and our roadmap for Leaded Paint 2026 is all about improving its reputation by making the lead easier to swallow"

[–] BlueMonday1984@awful.systems 9 points 1 day ago* (last edited 1 day ago)

The purpose of AI is theft, part infinity: chardet steals LGPL code for profit using Claude

[–] corbin@awful.systems 5 points 1 day ago

Blast from the past: I realized that I didn't have the exact link detailing why nickpsecurity was banned from Lobsters, but now I do. You'll have to click the little [+] to see his comments. He's still active on HN and Reddit; he's gone full MAGA, which is ~~100% predictable~~ a surprising turn for somebody who constantly preaches born-again Christian ~~bigotry~~ peace and love. I really do wish that Lobsters did the whole turn-you-into-a-tree thing (sure, crucifixion, or maybe Peneus-style or Pequenino-style) for banned users rather than forcing folks to dig through archives.

[–] BlueMonday1984@awful.systems 6 points 1 day ago (1 children)

John Scalzi's shitcanned any book club plans for the foreseeable future, and AI spammers are the reason why.

[–] mirrorwitch@awful.systems 4 points 1 day ago

From the comments:

Today I got yet another AI huckster email offering to promote my book, but that book turned out to be AI slop published under my name on Amazon. (I have contacted Amazon.) The AIs are eating their young. This would be funny if it wasn’t really happening.
