I increasingly feel that bubbles don't pop anymore; they slowly fizzle out as we just move on to the next one, all the way until the macroeconomy is 100% bubbles.
Architeuthis
Love how the most recent post on the AI2027 blog starts with an admonition to please not do terrorism:
We may only have 2 years left before humanity’s fate is sealed!
Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.
Most of the rest is run-of-the-mill EA-type fluff, such as a list of influential professions and positions you should insinuate yourself into, but failing that you can help immanentize the eschaton by spreading the word and giving us money.
Grok, find me a neoliberal solution to the problem of being unable to monetize your progeny by having your sons till the fields and selling your daughters off.
You're just in a place where the locals are both not interested in relitigating the shortcomings of local LLMs and tech-savvy enough to know that a "long term memory caching system" is just you saying stuff.
Hosting your own model and adding personality customizations is just downloading ollama and inputting a prompt that maybe you save as a text file after. Wow what a fun project.
Neil Breen of AI
ahahahaha oh shit
Man wouldn't it be delightful if people happened to start adding a 1.7 suffix to whatever he calls himself next.
Also, Cremieux being exposed as a fake-ass academic isn't bad for a silver lining; no wonder he didn't want the entire audience of a sure-to-go-viral NYT column immediately googling his real name.
edit: his sister keeps telling on him on her timeline, and taking her at her word, he seems to be a whole other level of a piece of shit than he'd been letting on, yikes.
Actually Generate Income.
Apparently linkedin's cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.
Zack of SMBC has thoughts on it:
[actual excerpt omitted, follow the link to read it]
We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).
Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.
Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.
Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”
Alt text: screenshot from South Park's Scientology episode featuring the iconic chyron "This is what scientologists actually believe," with "scientologists" crossed out and replaced with "rationalists"
If anybody doesn't click: Cremieux and the NYT are trying to jump-start a birther-type conspiracy about Zohran Mamdani. The NYT respects Crem's privacy and doesn't mention he's a raging eugenicist trying to smear a POC candidate; he's just an academic and an opponent of affirmative action.
Penny Arcade chimes in on corporate AI mandates: