theluddite

joined 2 years ago
[–] theluddite@lemmy.ml 2 points 1 day ago

I get your point, and it's funny, but it's different in important ways that are directly relevant to the OP article. The parent comment uses the instrumental theory of technology to dismiss the article, which roughly argues that antidemocracy is a property of AI. I'm saying not only that that's a valid argument, but that these kinds of properties are important, cumulative, and can fundamentally reshape our society.

[–] theluddite@lemmy.ml 6 points 1 day ago (2 children)

I don’t like this way of thinking about technology, which philosophers of tech call the "instrumental" theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don't help us explain. Similarly, society shapes the way that we make technology.

In making technology, engineers and designers are constrained by the rules of the physical world, but those rules underconstrain the design. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, and those values accumulate over time. To return to the example of mass transit vs cars: these obviously have different embedded values within them, which then go on to shape the world that we make around them. We wouldn't even be fighting about self-driving cars had we made different technological choices a while back.

That said, on the other side, just because technology is more than a tool and does have values embedded within it doesn't mean that its use is deterministic. People find subversive ways to use technologies that go against the values built into them.

If this topic interests you, Andrew Feenberg's book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.

[–] theluddite@lemmy.ml 1 points 1 month ago

Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum's book. It'd save me a lot of talking!

[–] theluddite@lemmy.ml 17 points 1 month ago

I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."

[–] theluddite@lemmy.ml 8 points 1 month ago

These articles frustrate the shit out of me. They accept both the company's own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.

[–] theluddite@lemmy.ml 52 points 1 month ago* (last edited 1 month ago) (10 children)

I am once again begging journalists to be more critical ~~of tech companies~~.

> But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

> [...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not to normal people, who drive almost exclusively during their commutes (probably the most dangerous time to drive, since it's precisely when everyone else is driving too).

We also need to know how often humans intervene in the supposedly autonomous operations. The latest data we have on this, leaked a while back, showed that Cruise (a different company) cars actually required more human labor than taxis, with more than one employee per car.

edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own published data without knowing how the sausage is made, they can spin it however they want.

edit2: Updated to say that journalists should be more critical in general, not just about tech companies.

[–] theluddite@lemmy.ml 10 points 2 months ago

David Graeber's Debt: The First 5000 Years. We all take debt for granted. It's fascinating to learn how differently we've thought about it over the millennia and how much of our modern world makes more sense when understood through its lens.

[–] theluddite@lemmy.ml 2 points 2 months ago

No need to apologize for length with me basically ever!

I was thinking of it the way you set it up in your second paragraph, but even more stripped down. The algorithm has N content buckets to choose from; once it chooses, the success metric is how much of the video the user watched. For simplicity, users can only keep watching or log off. For small N, I think that @kersplomp@programming.dev is right that it's the multi-armed bandit problem, if we assume that user preferences are static. If we introduce the complexity that users prefer familiar things, which I think is pretty fair, so that users are more likely to keep watching from a bucket they already know, I'd expect exploration to get heavily disincentivized and exhibit some pretty weird behavior, while exploitation becomes much more favorable. What I like about this is that, with only a small deviation from a classic problem, it would help explain what you also explain: getting stuck in corners.

Once you allow user choice beyond consume/log off, I think your way of thinking about it, as a turn based game, is exactly right, and your point about bin refinement is great and I hadn't thought of that.
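A minimal sketch of the stripped-down model above: an epsilon-greedy bandit over N buckets where the reward (a stand-in for watch time) gets a bonus proportional to how familiar the bucket already is. Every parameter value here (bucket count, epsilon, bonus size) is an illustrative assumption, not a claim about any real recommender.

```python
import random

def simulate_feed(n_buckets=5, steps=2000, epsilon=0.1,
                  familiarity_bonus=0.5, seed=0):
    """Toy recommender: each turn, pick a content bucket epsilon-greedily;
    reward is the bucket's static base appeal plus a bonus that grows with
    how much of the user's history was already spent in that bucket."""
    rng = random.Random(seed)
    base_appeal = [rng.random() for _ in range(n_buckets)]  # static user preferences
    counts = [0] * n_buckets        # how often each bucket was served
    est_value = [0.0] * n_buckets   # running mean reward per bucket
    for t in range(1, steps + 1):
        if rng.random() < epsilon:
            arm = rng.randrange(n_buckets)                       # explore
        else:
            arm = max(range(n_buckets), key=est_value.__getitem__)  # exploit
        familiarity = counts[arm] / t    # share of history in this bucket
        reward = base_appeal[arm] + familiarity_bonus * familiarity
        counts[arm] += 1
        est_value[arm] += (reward - est_value[arm]) / counts[arm]  # incremental mean
    return counts

counts = simulate_feed()
# The served-bucket distribution ends up much more concentrated than the
# uniform share steps / n_buckets: the familiarity bonus rewards the
# incumbent bucket, so the agent digs itself into a corner.
```

The key deviation from the classic bandit is that `reward` depends on `counts`, i.e. on the agent's own past choices, which is exactly the self-dependency being discussed.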

[–] theluddite@lemmy.ml 4 points 2 months ago

Yeah I really couldn't agree more. I really harped on the importance of other properties of the medium, like brevity, when I reviewed the book #HashtagActivism, and how those too are structurally right wing. There's a lot of scholars doing these kinds of network studies and imo they way too often emphasize user-user dynamics and de-emphasize, if not totally omit, the fact that all these interactions are heavily mediated. Just this week I watched a talk that I thought had many of these same problems.

[–] theluddite@lemmy.ml 1 points 2 months ago

I knew you were the person to call :)

[–] theluddite@lemmy.ml 3 points 2 months ago (4 children)

Thanks!

> I feel enlightened now that you called out the self-reinforcing nature of the algorithms. It makes sense that an RL agent solving the bandits problem would create its own bubbles out of laziness.

You're totally right that it's like a multi-armed bandit problem, but maybe with so many possibilities that searching is prohibitively expensive, since the space of options to search is much bigger than the rate that humans can consume content. In other ways, though, there's a dissimilarity because the agent's reward depends on its past choices (people watch more of what they're recommended). It would be really interesting to know if anyone has modeled a multi-armed bandit problem with this kind of self-dependency. I bet that, in that case, the exploration behavior is pretty chaotic. @abucci@buc.ci this seems like something you might just know off the top of your head!
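A back-of-envelope illustration of the "search is prohibitively expensive" point: even a recommender that does nothing but explore can only touch as many arms as the user consumes items. The arm and step counts below are made-up numbers chosen to show the scale mismatch, not drawn from any real platform.

```python
import random

def coverage_fraction(n_arms=100_000, steps=5_000, seed=1):
    """Best-case exploration: sample uniformly at random, with no
    exploitation at all. A user consuming one item per step can see at
    most `steps` distinct arms, so coverage is bounded by steps/n_arms."""
    rng = random.Random(seed)
    seen = {rng.randrange(n_arms) for _ in range(steps)}  # distinct arms sampled
    return len(seen) / n_arms

frac = coverage_fraction()
# frac can never exceed steps / n_arms = 0.05 here: at least 95% of the
# content space is never sampled even once, so the agent's estimates for
# most arms never move off their prior.
```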

> Maybe we can take advantage of that laziness to incept critical thinking back into social media, or at least have it eat itself.

If you have any ideas for how to turn social media against itself, I'd love to hear them. I worked on this post unusually long for a lot of reasons, but one of them was trying to think of a counter strategy. I came up with nothing though!

[–] theluddite@lemmy.ml 9 points 2 months ago (1 children)

Yup. Silicon-washing genocidal intention is almost certainly the most profitable use of AI we've come up with so far.

 

Dávila's "Blockchain Radicals" argues that the left ought to embrace blockchain. Here's my 2 part review. The first critiques the book's approach to argumentation, and the second examines Dávila's own Breadchain Cooperative.

This is my longest post yet because the theory the book presents is palatable to developers. It does to political theory what tech people always do: Confidently assume their skills apply in a field they don't bother to understand. The consequences are predictable. This, then, is an intervention directed at that mode of thinking, an examination of how bad theory leads to bad practice, and, most importantly, an attempt to stop would-be activists from getting caught up in this mess.

tl;dr Breadchain's use of the term "cooperative" is fraudulent, and it is, structurally, a grift, whatever Dávila's intentions might be.

 

Though wrapped in the aesthetic of science, this paper is a pure expression of the AI hype's ideology, including its reliance on invisible, alienated labor. Its data was manufactured to spec to support the authors' pre-existing beliefs, and its conclusions are nothing but a re-articulation of their arrogance and ideological impoverishment.
 

#HashtagActivism is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context, but its analysis overlooks that Twitter is actually a heavy-handed intermediary. It imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

2
submitted 6 months ago* (last edited 6 months ago) by theluddite@lemmy.ml to c/luddite@lemmy.ml
 

Regulating tech is hard, in part because computers can do so many things. This makes them useful but also complicated. Companies hide in that complexity, rendering undesirable behavior illegible to regulation: Regulating tech becomes regulating unlicensed taxis, mass surveillance, illegal hotels, social media, etc.

If we actually want accountable tech, I argue that we should focus on the tech itself, not its downstream consequences. Here's my (non-environmental) case for rationing computation.

 

Until recently, platforms like Tinder and Uber couldn't have existed. They need the intimate data that only mobile devices can provide, which they use to mediate human relationships. They never own anything. In some ways, this simplifies their task, because owning things is hard, but human activities are complicated, making them illegible to computers. As tech companies become more powerful and push deeper into our lives, here's a post about that tension and its consequences.
