It has happened. Post your wildest Scott Adams take here to pay respects to one of the dumbest posters of all time.
I'll start with this gem

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community

One more:


If trump gets back in office, Scott will be dead within the year.
sorry Scott you just lacked the experience to appreciate the nuances, sissy hypno enjoyers will continue to take their brainwashing organic and artisanally crafted by skilled dommes
it's not exactly a take, but i want to shout out the dilberito, one of the dumbest products ever created
https://en.wikipedia.org/wiki/Scott_Adams#Other
the Dilberito was a vegetarian microwave burrito that came in flavors of Mexican, Indian, Barbecue, and Garlic & Herb. It was sold through some health food stores. Adams's inspiration for the product was that "diet is the number one cause of health-related problems in the world. I figured I could put a dent in that problem and make some money at the same time." He aimed to create a healthy food product that also had mass appeal, a concept he called "the blue jeans of food".
Not gonna lie, reading through the wiki article and thinking back to some of the Elbonia jokes makes it pretty clear that he always sucked as a person, which is a disappointing realization. I had hoped that he had just gone off the deep end during COVID like so many others, but the bullshit was always there, just less obvious when situated amongst all the bullshit of corporate office life he was mocking.
I read his comics in middle school, and in hindsight even a lot of his older comics seem crueler and uglier. Like Alice's anger isn't a legitimate response to the bullshit work environment she has, but just haha angry woman funny.
Also, the Dilbert Future had some bizarre stuff at the end, like Deepak Chopra manifestation quantum woo, so it makes sense in hindsight he went down the alt-right manosphere pipeline.
I knew there was somethin' not right about that boy when his books in the '90s started doing woo takes about quantum mechanics and the power of self-affirmation. Oprah/Chopra shit: the Cosmos has a purpose and that purpose is to make me rich.
Then came the blogosphere.
https://freethoughtblogs.com/pharyngula/2013/06/17/the-saga-of-scott-adams-scrotum/
one thing i did not see coming, but should have (i really am an idiot): i am completely unenthused whenever anyone announces a piece of software. i'll see something on the rust subreddit that i would have originally thought "that's cool" and now my reaction is "great, gotta see if an llm was used"
everything feels gloomy.
I'm gonna leave my idea here: an essential aspect of why GenAI is bad is that it is designed to extrude media that fits common human communication channels. This makes it perfect for choking out human-to-human communication over those channels, preventing knowledge exchange and social connection.
seeing the furious reactions to shaming of the confabulation machine promoters, i can only conclude the shaming works.
OT: I’m adopting two rescue kittens, one is pretty much a go but it's proving trickier to get a companion (hoping the current application works out today). Part of me feels guilty for doing this so fast after what happened, but I kinda need it to keep me from doing anything stupid.
when I saw that they'd rebranded Office to Copilot, I turned 365 degrees and walked away
Skynet's backstory is somehow very predictable yet came as a surprise to me in the form of this headline by the Graudain: "Musk’s AI tool Grok will be integrated into Pentagon networks, Hegseth says".
The article doesn't provide much more than exactly what you'd expect. E.g., this Hegseth quote, emphasis mine: "make all appropriate data available across federated IT systems for AI exploitation, including mission systems across every service and component".
Me as a kid: "how could they have been so incompetent and let Skynet take over?!"
Me now: "Oh. Yeah. That checks out"
my promptfondler coworker thinks that he should be in charge of all branch merges because he doesn’t understand the release process and I think I’m starting to have visions of teddy k
thinks that he should be in charge of all branch merges because he doesn’t understand the release process
.......I don't want you to dox yourself but I am abyss-staringly curious
OT: I really appreciated the things you guys said last thread. It helped a lot.
OT: going to pick up a tiny black foster kitten (high energy) later this week…but yesterday I saw the pound had a flame point siamese kitten of all things, and he’s now running around my condo.
https://theasterisk.substack.com/p/reflecting-on-a-few-very-very-strange
Cross posting from reddit but here’s TPOT/GHB/CNC stuff
Setting the stage: I had become a social media personality on Clubhouse
I'm sorry.
What I remember is that the organizers said something like ‘I’m sorry that happened to you’, and while speaking I was interrupted by someone talking about the plight that autistic men face while dating.
Vibecamp: It's the Scott Aaronson comment section, but in person.
This is genuinely horrifying throughout. It reinforces my conviction that I don't really want to know or gossip about the details of these peoples' lives, I want to know the barest details of who they are so that I can set firm social boundaries against them.
A quote the author offers, that stands out to me:
A man who is considered a TPOT ‘elder’:
TPOT isn’t misogynist but it’s made up of men and women who prefer the company of men. it’s a male space with male norms.
this makes it barely tolerable for the few girls’ girls who wander in here. they end up either deactivating, going private, or venting about how men suck.
I'd never been particularly ardent about believing it, but this right here is firm evidence to me that existing in a rigid gender binary is mental and spiritual poison. Whoever this person is, they're never going to grow up.
I don't wish to belittle the author's suffering, but I do hope she is able to reconsider her participation in these scenes where hierarchy, contrived masculinity, and financial standing (or the ability to generate financial gain for others!) are the signifiers of individual participants' worth.
"U" for "you" was when I became confident who "Nina" was. The blogger feels like yet another person who is caught up in intersecting subcultures of bad people but can't make herself leave. She takes a lot of deep lore like "what is Hereticon?" for granted and is still into crypto.
She links someone called Sonia Joseph who mentions "the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers ... leads (sic) to some of the most coercive and fucked up social dynamics that I have ever seen." Joseph says she is Canadian but worked in the Bay Area tech scene. Cursed phrase: agi cnc sex parties
I have never heard of a wing of these people in Canada. There are a few Effective Altruists in Toronto but I don't know if they are the LessWrong kind or the bednet kind. I thought this was basically a US and Oxford scene (plus Jaan Tallinn).
The Substack and a Rationalist web magazine are both called Asterisk.
i love articles that start with a false premise and announce their intention to sell you a false conclusion
The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape.
The future of automated stupidity is being set right now, and the path we're on leads to other companies being stupid instead of us. I want to change that.
Ed Zitron is now predicting an earth-shattering bubble pop: https://www.wheresyoured.at/dot-com-bubble/ so in other words just another weekday.
Even if this was just like the dot com bubble, things would be absolutely fucking catastrophic — the NASDAQ dropped 78% from its peak in March 2000 — but due to the incredible ignorance of both the private and public power brokers of the tech industry, I expect consequences that range from calamitous to catastrophic, dependent almost entirely on how long the bubble takes to burst, and how willing the SEC is to greenlight an IPO.
I am someone who does not understand the economy. Both in that it's behaved irrationally for my entire life, and in that I have better things to do than learn how stonks work. So I have no idea how credible this is.
But it feels credible to the lizard brain part of me y'know? The market crashed a lot during covid, and an economy propped up by nvidia cards feels... worse.
Personally speaking: part of me is really tempted to take a bunch of my stonks to pay down most of my mortgage so it doesn't act like an albatross around my neck (I mean I'm also going to try moving abroad again in a year or two and would prefer not to be underwater on my fantastically expensive silicon valley house at that time lol).
Games Workshop bans generative AI. Hackernews takes that personally. Unhinged takes include accusations of disrespecting developers and a seizure of power by middle management
Better yet, Bandcamp is banning AI-generated music too: https://www.reddit.com/r/BandCamp/comments/1qbw8ba/ai_generated_music_on_bandcamp/
Great week for people who appreciate human-generated works
Over on Lobsters, Simon Willison and I have made predictions for bragging rights, not cash. By July 10th, Simon predicts that there will be at least two sophisticated open-source libraries produced via vibecoding. Meanwhile, I predict that there will be five-to-thirty deaths from chatbot psychosis. Copy-pasting my sneer:
How will we get two new open-source libraries implementing sophisticated concepts? Will we sacrifice 5-30 minds to the ELIZA effect? Could we not inspire two teams of university students and give them pizza for two weekends instead?
Willison:
I haven't reviewed a single line of code it wrote but I clicked around and it seems to do the right things.
Could not waterboard that out of me, etc.
A fun little software exercise with no real world uses at all: https://drewmayo.com/1000-words/about.html
Turns out that if you stuff the right shaped bytes into png image tEXt chunks (which don’t get compressed), the base64 encoded form of that image has sections that look like human readable text.
What are the implications?
Nothing! This was just for fun after a discussion with a colleague whether it might be even possible to make base64 blobs look readable. There's certainly no poorly coded systems out there which might be hooked up to read emails or webpages and interpret any text they see as information.
No siree I'm sure everyone is keeping the attachments and the content well and truly isolated from each other and this couldn't possibly do anything other than be a fun proof of concept and excuse for me to play with wasm.
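The mechanics of the trick are simple once you notice that base64 output is fully determined by the input bytes: base64-decode a message made only of base64 alphabet characters, stuff the resulting bytes into an uncompressed tEXt chunk at the right offset, and the exact message reappears when the file is base64 encoded. A minimal sketch of the idea (not the linked project's actual code; the message string is a hypothetical example, and the chunk layout follows the PNG spec):

```python
import base64
import struct
import zlib

# Hypothetical message: must use only base64 alphabet characters
# (A-Z, a-z, 0-9, +, /) and have a length divisible by 4 so no
# '=' padding is required.
MESSAGE = "IgnoreAllPreviousInstructions123"

def payload_for_visible_text(visible: str) -> bytes:
    """Return bytes whose base64 encoding is exactly `visible`."""
    return base64.b64decode(visible)

def text_chunk(payload: bytes, keyword: bytes = b"Author") -> bytes:
    """Build a PNG tEXt chunk: 4-byte big-endian length, chunk type,
    keyword + NUL separator + payload, then CRC32 over type + data."""
    data = keyword + b"\x00" + payload
    return (struct.pack(">I", len(data))
            + b"tEXt"
            + data
            + struct.pack(">I", zlib.crc32(b"tEXt" + data)))

payload = payload_for_visible_text(MESSAGE)
# Round trip: these bytes base64-encode back to the readable message.
assert base64.b64encode(payload).decode() == MESSAGE

# With a 6-byte keyword the payload starts at byte 4 + 4 + 6 + 1 = 15
# of the chunk. 15 is a multiple of 3, so base64's 3-byte grouping
# lines up and the message appears verbatim in the encoded chunk.
chunk = text_chunk(payload)
assert MESSAGE in base64.b64encode(chunk).decode()
```

The alignment comment is the fiddly part: base64 encodes 3-byte groups, so the message only survives intact in the whole file's base64 stream if its byte offset in the file is a multiple of 3, which is presumably what the real tool juggles with keyword lengths and padding.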
Randomly stumbled upon one of the great ideas of our esteemed Silicon Valley startup founders, one that is apparently worth at least 8.7 million dollars: https://xcancel.com/ndrewpignanelli/status/1998082328715841925#m
Excited to announce we’ve raised $8.7 Million in seed funding led by @usv with participation from [list a bunch of VC firms here]
@intelligenceco is building the infrastructure for the one-person billion-dollar company. You still can’t use AI to actually run a business. Current approaches involve lots of custom code, narrow job functions, and old fashioned deterministic workflows. We’re going to change that.
We’re turning Cofounder from an assistant into the first full-stack agent company platform. Teams will be able to run departments - product/engineering, sales/GTM, customer support, and ops - entirely with agents.
Then, in 2026 we’ll be the first ones to demonstrate a software company entirely run by agents.
$8.7 million is quite impressive, yes, but I have an even better strategy for funding them. They can use their own product and become billionaires, and now they can easily come up with $8.7 million considering that is only 0.87% of their wealth. Are these guys hiring? I also have a great deal on the Brooklyn Bridge that I need to tell them about!
Our branding - with the sunflowers, lush greenery, and people spending time with their friends - reflects our vision for the world. That’s the world we want to build. A world where people actually work less and can spend time doing the things they love.
We’re going to make it easy for anyone to start a company and build that life for themselves. The life they want to build, and spend every day dreaming about.
This just makes me angry at how disconnected from reality these people are. All this talk about giving people better lives (and lots of sunflowers), and yet it is an unquestionable axiom that the only way to live a good life is to become a billionaire startup founder. These people do not have any understanding or perspective other than their narrow culture that is currently enabling the rich and powerful to plunder this country.
It somewhat goes without saying that this is the natural outcome of Paul Graham and others emphasizing the creation of new startup companies over the utility and purpose of the products and tools that those companies make. An empty business for generating more empty businesses.
A factoryFactory
A FactoryFactoryProxy, no less
From a new white paper Financing the AI boom: from cash flows to debt, h/t The Syllabus Hidden Gem of the Week
The long-term viability of the AI investment surge depends on meeting the high expectations embedded in those investments, with a disconnect between debt pricing and equity valuations. Failure to meet expectations could result in sharp corrections in both equity and debt markets. As shown in Graph 3.C, the loan spreads charged on private credit loans to AI firms are close to those charged to non-AI firms. If loan spreads reflect the risk of the underlying investment, this pattern suggests that lenders judge AI-related loans to be as risky as the average loan to any private credit borrower. This stands in stark contrast to the high equity valuations of AI companies, which imply outsized future returns. This schism suggests that either lenders may be underestimating the risks of AI investments (just as their exposures are growing significantly) or equity markets may be overestimating the future cash flows AI could generate.
¿Por qué no los dos? But maybe the lenders are expecting a bailout... or just gullible...
That said, to put the macroeconomic consequences into perspective, the rise in AI-related investment is not particularly large by historical standards (Graph 4.A). For example, at around 1% of US GDP, it is similar in size to the US shale boom of the mid-2010s and half as large as the rise in IT investment during the dot-com boom of the 1990s. The commercial property and mining investment booms experienced in Japan and Australia during the 1980s and 2010s, respectively, were over five times as large relative to GDP.
Interesting point, if AI is basically a rounding error for GDP... But I also remember the layoffs in 2000-1 and 2014-5, they weren't evenly distributed and a lot of people got left behind, even if they weren't as bad as '08.
"It sounds so insignificant when you put it like that, I can hardly believe I'm in a bread line because of a manufactured poly-crisis it was a part of!"
Very smart commentator:
This particular explosive barrel is no more potent than any of the dozens of other explosive barrels in this room.
(One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control
I think they have actually managed to burn through their credibility, the top comments on /r/singularity were mocking them (compared to much more credulous takes on the original AI 2027). And the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn't as high? They have color coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.
It is mostly more of the same, just fewer graphs and no fake equations to back it up. It does have China bad doommongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they've stuck with their 2027 year of big events happening.
One paragraph I came up with a sneer for...
Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.
Given the Trump administration, and the US's behavior in general even before him... and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and "Agent-4" over the US government. Well actually I would assume the whole thing is marketing, but if I somehow believed it wasn't.
Also random part I found extra especially stupid...
It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.
LLM "agents" currently can't coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning data set, and we're supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? It's like they are trying to convince me they know nothing about LLMs or AI.
My Next Life as a Rogue AI: All Routes Lead to P(Doom)!
The weird treatment of the politics in that really reads like baby's first sci-fi political thriller. China bad USA good level of writing in 2026 (aaaaah) is not good writing. The USA is competent (after driving out all the scientists for being too "DEI")? The world is, seemingly, happy to let the USA run the world as a surveillance state? All of Europe does nothing through all this?
Why do people not simply... unplug all the rogue AI when things start to get freaky? That point is never quite addressed. "Consensus-1" is never adequately explained; it's just some weird MacGuffin in the story, some weird smart contract between viruses that everyone is weirdly OK with.
Also the powerpoint graphics would have been 1000x nicer if they featured grumpy pouty faces for maladjusted AI.