Taking a shot in the dark, journalistic incidents like Bloomberg's failed tests with AI summaries and the BBC's complaints about Apple AI mangling headlines probably helped accelerate that fall to earth - for any journalists reading about or reporting on such shitshows, it likely shook their faith in AI's supposed abilities in a way failures outside their field didn't.
BlueMonday1984
And by "more fuckable", he means "refusing/unable to consent".
In other news, Jazza's AI-generated cousin is back to continue pretending to be an actual artist. This time, it's by actively denigrating the works of Studio Ghibli:
Unsurprisingly, he is getting raked over the coals by basically everyone. He's also having an utter meltdown in the replies.
In case you missed it, a couple sneers came out against AI from mainstream news outlets recently - CNN's put out an article titled "Apple’s AI isn’t a letdown. AI is the letdown", whilst the New York Times recently proclaimed "The Tech Fantasy That Powers A.I. Is Running on Fumes".
You want my take on this development, I'm with Ed Zitron: this is a sign of an impending sea change. Looks like the bubble's finally nearing its end.
In other news, Elon Musk's personal chatbot has proudly proclaimed it's available on Telegram, and its proclamation got picked up by The Verge:
Right now, the integration is limited to "Grok's available as an optional chatbot", but going by what I've seen on BlueSky, people are already taking this as their cue to jump ship to Signal.
>proceed to punt this goal decades or centuries by helping to justify a tech bubble which consumes tons of R&D resources for no apparent benefit and will bind further resources in the future to adapt to an aggravated climate crisis, and also inspiring a slew of technofascists too dumb to tell the difference between tech that benefits mankind and tech that exploits and oppresses
Not to mention, the aforementioned bubble's given us shit like some jackasses' ghoulish (and failed) attempt to "revive" George Carlin, attempts to automate end-of-life care, "AI seances" designed to scam the grieving, and God-knows-what-else.
So, the very concept of "defeating death with technology" has probably been thoroughly discredited as impossible, inherently ghoulish, or a combo of the two.
It is technically correct to call Yud a "renowned AI researcher", but saying someone's renowned in a pseudoscience such as AI is hardly singing their praises.
It was a collection of random Ghibli memes an AI bro had compiled.
Yud was right - we should bomb the shit out of AI servers!
Not to prevent a superintelligent AI from becoming sentient and killing us all, but because this shit should not be allowed to fucking exist
EDIT: For context, this was reacting to Erikson showing me AI-generated Ghibli memes.
In other news, the Open Source Initiative has publicly bristled against the EU's attempts to regulate AI, to the point of weakening said attempts.
Tante, unsurprisingly, is not particularly impressed:
Thank you OSI. To protect the purity of your license – which I do not consider to be open source – you are working towards making it harder for regulators to enforce certain standards within the usage of so-called “AI” systems. Quick question: Who are you actually working for? (I know, it is corporations)
The whole Open Source/Free Software movement has run its course and has been very successful for business. But it feels like somewhere along the line we as normal human beings have been left behind.
You want my opinion, this is a major own-goal for the FOSS movement - sure, the OSI may have been technically correct where the EU's demands conflicted with the Open Source Definition, but neutering EU regs like this means any harms caused by open-source AI will be done in FOSS's name.
Considering FOSS's complete failure to fight corporate encirclement of their shit, this isn't particularly surprising.
I'm kinda tired, but this puzzle's shoved itself into my brain. The obvious solution I can see is, roughly speaking:
- Take the duck and carrot across
- Take the duck back
- Take the duck and potato across
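Since the puzzle's rules aren't actually spelled out here, here's a quick sanity check of those steps as a sketch, assuming the usual constraints (the duck eats the carrot or the potato if left alone with either, and the boat holds you plus at most two items - both assumptions on my part):

```python
# Assumed rules (not stated in the post): the duck can't be left
# unattended with the carrot or the potato, and each crossing can
# carry at most two items.
FORBIDDEN = ({"duck", "carrot"}, {"duck", "potato"})

def run(moves):
    """Play out a list of crossings; each move is the set of items carried."""
    near, far = {"duck", "carrot", "potato"}, set()
    side = "near"
    for cargo in moves:
        src, dst = (near, far) if side == "near" else (far, near)
        assert len(cargo) <= 2 and cargo <= src, "illegal move"
        src -= cargo
        # The bank you just left must hold no forbidden pair.
        assert not any(pair <= src for pair in FORBIDDEN), f"something got eaten: {src}"
        dst |= cargo
        side = "far" if side == "near" else "near"
    # Success means everything ended up on the far bank with you.
    return side == "far" and far == {"duck", "carrot", "potato"}

print(run([{"duck", "carrot"}, {"duck"}, {"duck", "potato"}]))  # True
```

Three crossings is also the minimum here, since you end on the far bank only after an odd number of trips and one trip can't carry all three items.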
In other news, the Guardian landed an exclusive scoop on cuts to "AI cancer tech funding in England". Baldur Bjarnason's given his commentary:
You want my opinion, future machine learning research is probably gonna struggle to get funding once the bubble bursts, both due to the "AI" stench rubbing off on the field, and due to gen-AI sucking up all of the funding that would've gone towards actually useful shit. (Arguably, it's already struggling even before the bubble's burst.)