I've created a new godlike AI model. It's the Eliziest yet.
YourNetworkIsHaunted
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn't sound like it's actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you're editing chatbot output you're still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
I get the idea they're going for: that coding ability is a leading indicator for progress towards AGI. But even if you ignore how nonsensical the overall graph is, the argument itself still begs the question of how much actual progress and capability it reflects, i.e. whether it's writing code rather than spitting out code-shaped blocks of text that happen to compile.
NANDA claims that agentic AI — or the thing of that name that they’re selling — will definitely learn real good without training completely afresh.
Given their web3 roots, I feel like we should point out that blockchain storage systems are famously cheap and efficient to update and modify, so this claim actually seems perfectly reasonable to me /s.
Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.
And once it does they'll quietly stop talking about it for a while to "focus on the human stories of those affected" or whatever until the nostalgic retrospectives can start along with the next thing.
Oxford Economist in the NYT says that AI is going to kill cities if they don't prepare for change. (Original, paywalled)
I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But at the same time, as orthodox economists so frequently do, his analysis only hints at the political factors in the relevant decisions, which are if anything more important than technological change alone.
In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being "sprawling, unionized compounds" (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can't unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, one that had allowed them to support themselves and their families at a certain quality of life, was still gone.
This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters then it becomes largely irrelevant whether LLM-based "AI" products and services can actually perform as advertised. Rather than being the central cause of this disruption it becomes the excuse, and so it just has to be good enough to create the narrative. It doesn't need to actually be able to write code like a junior developer in order to change the senior developer's job to focus on editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that it's not going to "snap back" when the AI bubble pops because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.
Even if they aren't actively relying on each other here, I would assume we're reaching a stage where all of the competing LLMs are using basically the entire Internet as their training data, and while there is going to be some difference based on the reinforcement learning process, there's still going to be a lot of convergence.
I found this article in Fortune that similarly says 95% of GenAI pilots at companies fail to have a positive impact on the bottom line. They spend a lot of ink trying to sidestep the obvious explanation in favor of talking about the ways people are probably just prompting it wrong, and I couldn't be bothered to fill out the form asking MIT's group for access to the underlying report.
Okay so I know GPT-5 had a bad launch and has been getting raked over the coals, but AGI is totally still on, guys!
Why? Because trust me it's definitely getting better behind the scenes in ways that we can't see. Also China is still scary and we need to make sure we make the AI God that will kill us all before China does because reasons.
Also, despite talking about how much of the lack of progress is down to the consumer model and how this is a cost-saving measure, there's no reference to the work of folks like Ed Zitron on how unprofitable these models are, much less the recent discussions on whether GPT-5 as a whole is actually cheaper to operate than earlier models given the changes it necessitates in caching.
Yeah. He spends a fair chunk of ink (I can't be bothered finding the quote) trying to reassure readers that despite throwing out literal Frankfurt School Cultural Marxism dog whistles he's definitely not anti-progressive, and in fact the robot apocalypse cultists are the real progressives. It smacks of Jordan Peterson and Moldbug, wrapped in a less competent and readable version of Scott's beigeness.
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What's going to happen when they retire and there's nobody with that knowledge and experience to take their place? I'll have sold off my shares by then, I'm sure.