Robots Are Already Killing People
(www.theatlantic.com)
This just feels like non-technical fear mongering. Frankly, the term “AI” is way too overused for any of this to be useful: Autopilot, manufacturing robots, and ChatGPT are all distinct systems with their own concerns, tradeoffs, and regulatory issues, and trying to lump them together reduces the capacity for discussion down to a single (not very useful, imo) take.
Editing for clarity: I’m all for discussing more regulation and caution, but conflating tons of disparate technologies still, imo, muddies the waters of public discussion.
If you read the article, the concern is how those disparate technologies are converging.
I read the article, and I stand by my statement: “AI” does not apply to self-driving cars in the same way it applies to robotics used by law enforcement. These are two separate categories of problems, and I don’t see how some unified frustration at AI or robotics applies to both.
Self-driving cars have issues because the machine learning algorithms used to train them are not sufficient to navigate the complexities of roads, and there is no human fallback. (See: Autopilot.)
Robotics use by law enforcement has issues because it removes the human factor from enforcement, which raises the question of whether deadly force is ever justified (does a suspect pose a danger to any officer if there is no human contact?). There are also worries about dehumanization, as well as other factors like data collection. These mostly aren’t even self-driving; from what I understand, law enforcement remote-pilots them.
These are separate problem spaces: they aren’t deadly in the same ways, they aren’t attractive in the same ways, and they should be treated and analyzed as distinct problems. By reducing everything to “AI” and “robots”, you create a problem that makes sense only to the technically uninclined, and you blur any meaningful discussion of the specifics of each issue.
What would've been high risk? Well:
That does make sense, considering ELIZA from the 60s would fit this description. It pretty much repeated what you wrote to it in a different style.
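For anyone who hasn't seen ELIZA, the trick being described is roughly this: swap first- and second-person words and echo the user's own statement back as a question. A minimal sketch (not ELIZA's actual code; the reflection table and wording are illustrative) looks like:

```python
import re

# Illustrative, heavily simplified pronoun-reflection table
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Echo the input back with first/second person swapped, ELIZA-style."""
    words = [REFLECTIONS.get(w, w) for w in re.findall(r"\w+", statement.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(reflect("I am worried about my job"))
# -> Why do you say you are worried about your job?
```

No understanding of the content is involved; it's pure pattern substitution on the surface text.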
I don't see how generative AI can be considered high risk when it's literally just fancy keyboard autofill. If a doctor asks ChatGPT what the correct dose of medication for a patient is, it's not ChatGPT that should be considered high risk but rather the doctor.
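To make the "keyboard autofill" analogy concrete, here is a toy sketch (the tiny corpus and names are purely illustrative, and real language models are vastly more sophisticated) of picking the next word purely from which word most often followed it before, with no notion of whether the answer is correct:

```python
from collections import Counter, defaultdict

# Toy corpus: the model only learns which word tends to follow which
corpus = "the correct dose depends on the patient and the correct dose varies".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0] if following[word] else "?"

print(autocomplete("correct"))  # -> "dose" (a plausible continuation, not a verified fact)
```

The output is whatever continuation is statistically common, which is exactly why responsibility for acting on it sits with the human reading it.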
Isn't that The Atlantic's MO?