kibiz0r

joined 2 years ago
[–] kibiz0r@midwest.social 2 points 3 hours ago

Science is probably the hardest resource-allocation challenge there is. The timing/nature/size of the payoffs are rather unpredictable, and the labor is rarely fungible. Geniuses in one very specific niche may be utterly useless doing anything else, and you can’t reliably predict whether their research will be worthwhile.

[–] kibiz0r@midwest.social 2 points 5 hours ago

Enjoy reporting the oil prices while it’s still legal

[–] kibiz0r@midwest.social 8 points 6 hours ago

Imagine losing your job to a recession that’s being masked by an AI bubble, and The Atlantic believes the CEO when they say it was because of AI

[–] kibiz0r@midwest.social 22 points 12 hours ago* (last edited 12 hours ago) (2 children)

Dijkstra on the foolishness of natural language programming

But like, what does he know? He wasn’t an AI-native vibe orchestrator.

[–] kibiz0r@midwest.social 12 points 12 hours ago

A man

A plan

Amygdala

[–] kibiz0r@midwest.social 12 points 13 hours ago
  • Tool allows you to generate output without understanding or accountability
  • Continue rewarding output only
  • Extreme lack of understanding and accountability

shocked pikachu

[–] kibiz0r@midwest.social 121 points 14 hours ago* (last edited 13 hours ago) (9 children)

Fear is, famously, an excellent impetus for rational decision-making. (/s just in case)

[–] kibiz0r@midwest.social 10 points 16 hours ago

I agree with your position on copyright, but not on AI.

AI is not:

  • “Stealing” digital goods
  • …of which there are infinite copies
  • …and for which “ownership” is a dubious and antisocial concept

But AI is:

  • Enclosing the digital commons
  • Interfering with free association
  • Neglecting mutual obligations of collaborative works
  • Polluting our global collaboration infrastructure
  • Sowing epistemic chaos
  • Enabling more exploitative work conditions
  • Concentrating even more wealth in the hands of the Nerd Reich

[–] kibiz0r@midwest.social 8 points 1 day ago* (last edited 1 day ago) (2 children)

The bonkers thing is: if you wanted to attack Iran, as the US, you’d need to be willing to cut ties to the GCC.

That means investing in renewables, having close petrodollar allies outside of the GCC, having a way to stabilize USD without the petrodollar (global free trade with big trade deficits is an easy way), and keeping energy demand fairly predictable.

Instead we got: repealing investments in renewables, pissing off every single Western power, tariffs, and spiking energy demands due to reckless data center build-outs.

[–] kibiz0r@midwest.social 3 points 1 day ago* (last edited 1 day ago)

Yes. AI allows the user to separate output from understanding, accountability, and obligation. It can launder intention just as well as inattention. AI is the ultimate tool of fascism.

Edit: But I should mention, this is not new. Institutions have been pursuing techniques for this long before AI. Everything Was Already AI

[–] kibiz0r@midwest.social 7 points 1 day ago

Just to be clear: this is not about protecting people.

This is just another squeeze, wringing the next few drops of accountability out of their sector.

They’re not really employing the drivers, so they’re not responsible for vetting them. And they’re not really selling rides, so they’re not responsible for what happens during one.

So what’s next? “Oh, we told drivers to get interior cameras, we told riders to be careful, we gave them checkboxes!”

Anything at all that they can spin as a value-add to shareholders, rather than allowing for any amount of responsibility towards the well-being of people who interact with their systems.

 

Learned about it from this episode of the Team Human podcast

But what is happening in Hong Kong is they come up with a slogan, which is translated as Do Not Split, which is, we know that some people are willing to be confrontational with riot police.

And when they are, that's going to cost the state in terms of not only resources, but it's going to cost the state in terms of political capital and support. And we know that there are some people who are not willing to do that. And we are going to abide by the protocol of Do Not Split, which means that we're not going to criticize them openly, and they're not going to criticize us openly.

If we're the pacifists, we're not going to have them criticize us for being sort of like, I don't know, limpid or flaccid or not courageous or whatever. And we're not going to criticize them for being more confrontational. And the thing is that the support is also tacit.

It's not like they have to come out and tell the media, oh, we approve of our more sort of confrontational colleagues. They just keep quiet. They just keep quiet.

Understanding that a range of tactics is probably going to be necessary. Nobody really knows what's going to work. But if everybody's pushing back against a particularly violent state, then everybody's really on the same side.

 

I feel like there’s a substantial overlap between “fuck AI” and “fuck entitled misanthropic man-children”.

 

“Be indigestible. Grow spikes.”

 

From Wikipedia:

.kkrieger (from Krieger, German for "warrior") is a first-person shooter video game created by German demogroup .theprodukkt (a former subdivision of Farbrausch), which won first place in the 96k game competition at Breakpoint in April 2004. The game has never been fully released, remaining instead in the beta stage of development as of 2025, which renders it a perpetual beta.

 

Generative AI is the nuclear bomb of the information age

 

cross-posted from: https://lemmy.world/post/10961870

To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy: AI researcher Connor Leahy says regulating deepfakes is the first step to avert AI wiping out humanity

 