submitted 9 months ago* (last edited 9 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/asklemmy@lemmy.ml

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control and dangerous AI that has decided "humans are the problem." (I mean, that's a little sci-fi anyway; an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the same CPU power and somehow still compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it is clear he couldn't give a living flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a fucking shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman's younger, lesser-known sister, has long maintained that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know this, or vet it. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.

[-] intensely_human@lemm.ee 7 points 9 months ago

It doesn’t matter if anyone cares about the safety of AGI.

AGI is a direct source of power, much like any weapon. As soon as AGI exists, we will exist in a state of warfare, because the "big guns" will be out.

I know I’m having trouble articulating this point, but it’s very important to understand. AGI is like a nuclear weapon: once a person has it, it doesn’t matter how much others may want to regulate them. It’s just not possible to regulate.

The ONLY strategy that gives us hope of surviving AGI’s emergence without being enslaved is to spread AGI far and wide to ensure a multipolar AGI ecosystem, which will force AGI to learn prosocial interaction as a means of ensuring its own survival.

And if you want to come at me with “AGI doesn’t inherently have a self interest”, consider that the same is true of nuclear weapons. And yet nuclear weapons get their interests from their wielders. And the only way to stay safe from nuclear weapons is also to proliferate them far and wide so that there is a multipolar ecosystem of nuclear weapons, ensuring those holding nuclear weapons have to play nice to ensure their own survival.

All of this talk about restricting AGI will only have the effect of concentrating it in a few hands, leading to the very nightmare the regulators are trying to avoid.

If the regulators had succeeded, and the US had been the only nation to possess nuclear weapons in the long run, humanity would have suffered massively from that lack of parity. Let me be less coy: humanity would have suffered under the brutality of repeated nuclear holocausts as the interests of the few led to further and further justification of larger and larger strikes.

Nuclear weapons cannot be regulated by law. They can only be regulated by other nuclear weapons. Same is true of AGI.

[-] KeenFlame@feddit.nu 1 points 8 months ago* (last edited 8 months ago)

Okay. It's not only a weapon though.

[-] intensely_human@lemm.ee 1 points 8 months ago

It doesn’t need to be only a weapon for any of this to apply. Same as nuclear fission.

[-] KeenFlame@feddit.nu 1 points 8 months ago* (last edited 8 months ago)

"It doesn't matter if anyone cares about the safety of agi"

It does matter. And it doesn't apply, because it's not just a weapon. It matters how it acts towards humans ethically, in so many ways other than indiscriminate slaughter.
