Enjoy reporting the oil prices while it’s still legal
kibiz0r
Imagine losing your job to a recession that’s being masked by an AI bubble, and The Atlantic believes the CEO when they say it was because of AI
Dijkstra on the foolishness of natural language programming
But like, what does he know? He wasn’t an AI-native vibe orchestrator.
A man
A plan
Amygdala
- Tool allows you to generate output without understanding or accountability
- Continue rewarding output only
- End up with an extreme lack of understanding and accountability
shocked pikachu
Fear is, famously, an excellent impetus for rational decision-making. (/s just in case)
I agree with your position on copyright, but not on AI.
AI is not:
- “Stealing” digital goods
- …of which there are infinite copies
- …and for which “ownership” is a dubious and antisocial concept
But AI is:
- Enclosing the digital commons
- Interfering with free association
- Neglecting mutual obligations of collaborative works
- Polluting our global collaboration infrastructure
- Sowing epistemic chaos
- Enabling more exploitative work conditions
- Concentrating even more wealth in the hands of the Nerd Reich
The bonkers thing is: if you wanted to attack Iran, as the US, you’d need to be willing to cut ties with the GCC.
That means investing in renewables, having close petrodollar allies outside of the GCC, having a way to stabilize USD without the petrodollar (global free trade with big trade deficits is an easy way), and keeping energy demand fairly predictable.
Instead we got: repealed investments in renewables, every single Western power pissed off, tariffs, and spiking energy demand from reckless data center build-outs.
Yes. AI allows the user to separate output from understanding, accountability, and obligation. It can launder intention just as well as inattention. AI is the ultimate tool of fascism.
Edit: But I should mention, this is not new. Institutions have been pursuing techniques for this long before AI. Everything Was Already AI
Just to be clear: this is not about protecting people.
This is just another squeeze, wringing the next few drops of accountability out of their sector.
They’re not really employing the drivers, so they’re not responsible for vetting them. And they’re not really selling rides, so they’re not responsible for what happens during one.
So what’s next? “Oh, we told drivers to get interior cameras, we told riders to be careful, we gave them checkboxes!”
Anything at all that they can spin as a value-add to shareholders, rather than allowing for any amount of responsibility towards the well-being of people who interact with their systems.
Science is probably the hardest resource-allocation challenge there is. The timing, nature, and size of the payoffs are unpredictable, and the labor is rarely fungible. A genius in one very specific niche may be utterly useless doing anything else, and you can’t reliably predict whether their research will be worthwhile.