They're LWers, they already baked their psyche long ago
BlueMonday1984
Machine learning is essentially AI with a paper-thin disguise, so that makes sense
Zed Run: a play-to-earn (P2E) virtual horse NFT racing game. Defunct as of February, probably due to rug pulling; its creators are pivoting to “Zed Champions”, which is… pretty much the exact same thing, with likely the same fate.
They're also (indirectly) competing with Umamusume: Pretty Derby, which offers zero P2E elements, but does offer horse waifus and actual entertainment value. Needless to say, we both know who's winning this particular fight for people's cash.
EquineChain: a blockchain platform for tracking horse care history, because apparently people don’t trust horse caregivers and need GPUs to remember how much ivermectin and ketamine their show-ponies have mainlined.
It'd arguably be helpful if the caregivers are helping themselves to the stash, but I doubt there's anything stopping them from BSing the blockchain, too.
But how are they going to awkwardly cram robots in everywhere, to follow up the overwhelming success of AI?
Good question - AFAICT, they're gonna struggle to find places to cram their bubble-bots into. Plus, nothing's gonna stop Joe Public from wrecking them in the streets - and given we've already seen Waymos getting torched and Lime scooters getting wrecked, these AI-linked 'bots are likely next on the chopping block.
Don't we know it.
To my knowledge, previous bubbles happened in the background, with the general public feeling little effect from them.
Here, the entire bubble has happened directly in the public eye, either through breathless hype being shoved down their throats or through the bubble's negative externalities directly impacting their lives.
With that in mind, I expect this upcoming winter to be particularly long, with public mockery/condemnation of AI as its particular hallmarks.
Glad I could help with writing this.
I've already predicted that AI will completely and permanently disappear once the bubble bursts, and between AI's utterly radioactive public image and businesses' increasing realisation that AI is a useless money pit, it's a prediction I've only grown more confident in over time.
Part of me suspects that particular pivot is gonna largely fail to convince anyone - paraphrasing Todd In The Shadows' "Witness" retrospective: other tech bubbles may have failed harder than AI, but nothing has failed louder.
The notion that "AI" means sentient chatbots/slop generators is very firmly stuck in the public consciousness, and pointing to AI being useful in some niche area isn't gonna paper over the breathlessly-promoted claims of Utopian Superintelligence When It's Done™ or the terabytes upon terabytes of digital slop polluting the 'net.
I doubt it'll stop the worst people we know from trying, though - they're hucksters at heart, getting caught and publicly humiliated is unlikely to stop 'em.
If you wanna say “but AI is here to stay!” tell us what you mean in detail. Stick your neck out. Give your reasons.
I'm gonna do the exact opposite of this ending quote and say AI will be gone forever after this bubble (a prediction I've hammered multiple times before).
First, the AI bubble has given plenty of credence to the notion that building a humanlike AI system (let alone superintelligence) is completely impossible, something I've talked about in a previous MoreWrite. Focusing on a specific wrinkle, the bubble has shown the power of imagination/creativity to be the exclusive domain of human/animal minds, with AI systems capable of producing only low-quality, uniquely AI-like garbage (commonly known as AI slop, or just slop for short).
Second, the bubble's widespread and varied harms have completely undermined any notion of "artificial intelligence" being value-neutral as a concept. The large-scale art theft/plagiarism committed to create the models behind this bubble (Perplexity, ChatGPT, CrAIyon, Suno/Udio, etcetera), the large-scale harms enabled by those models (plagiarism/copyright infringement, worker layoffs/exploitation, enshittification), and their heavy use for explicitly fascist ends (which I've noted in a previous MoreWrite) have all provided plenty of credence to notions of AI as a concept being inherently unethical, and plenty of reason to treat use of/support of AI as an ethical failing.
The users who choose Cursor are hardcore vibe addicts. They are tech incompetents who somehow BSed their way into a developer job. They cannot code without a vibe coding bot. I compared chatbots to gambling and cocaine before, and Cursor fans are the most abject gutter krokodil addicts.
They're also easily comparable to psychics in how they con people, and of course there's all the reports of them crippling critical thinking and generally making you stupider.
So, these things are essentially brainrot-as-a-service.
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself
Well, I guess there's your answer - "philosophy teaches you how to avoid falling for hucksters"
It's also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing, plagiarism-fuelled gaslighting engine that is AI, and actively look down on anyone who is more skilled than them or willing to develop their skills.
It would be really funny if Devin caused a financial crash this way