submitted 9 months ago* (last edited 9 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/asklemmy@lemmy.ml

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control and dangerous AI that has decided "humans are the problem." (I mean, that's a little sci-fi anyway; an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage conceivably leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the CPU power and somehow still compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it's clear he couldn't give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a fucking shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman's younger, lesser-known sister, has held for a long time that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know this or have vetted it. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.
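The resource argument in point 1 can be sanity-checked with a quick back-of-envelope calculation. (Illustrative numbers only; parameter counts for frontier models aren't public, so the figures below are assumptions, not anyone's actual specs.)

```python
# Back-of-envelope: could a large model's weights even fit on a consumer PC?
# All figures here are hypothetical, for illustration.

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights (fp16 = 2 bytes per parameter)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

big_model = weight_memory_gb(1000)  # a hypothetical 1-trillion-parameter model
consumer_pc_ram = 32                # GB; a generously specced consumer machine

print(f"Weights alone: {big_model:.0f} GB")                      # 2000 GB
print(f"Fits in a consumer PC? {big_model <= consumer_pc_ram}")  # False
```

And that's just storing the weights, before any memory for actually running inference, which is the gist of the "it can't hop onto your laptop" point.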

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.

[-] tsonfeir@lemm.ee 5 points 9 months ago

Any space. Any place. Money wins.

[-] Uranium3006@kbin.social 5 points 9 months ago

trusting corporations is always a bad idea

[-] redballooon@lemm.ee 5 points 9 months ago

Hey I am not an AI , I have real feelings, and you hurt them by calling me a looser ass chucklefucks!

[-] SnotFlickerman@lemmy.blahaj.zone 5 points 9 months ago

looser ass

You might want to go see a doctor about them loose stools!

[-] Socsa@sh.itjust.works 4 points 8 months ago

I think it will be fine as long as we don't give the AI thumbs.

[-] Even_Adder@lemmy.dbzer0.com 4 points 9 months ago

Like someone else said: "OpenAI has been a farce ever since they disabled access to GPT-3 for the sake of security."

[-] Tolstoshev@lemmy.world 4 points 9 months ago

The naive irony of all the Less Wrong people discussing letting the AI out of the box when we all know there won’t be a box at all.

[-] lurch@sh.itjust.works 4 points 9 months ago

You're right, but there are other dangers, e.g.:

  1. Using it for high-frequency trading, where it behaves brutally wrong and ruins an important company/bank, or crashes the market in a very problematic way.

  2. Using it to control heavy machinery or weapons.

The danger at the moment is human recklessness. When they give that Reaper drone an AI pilot so it can react before the humans at the controls even know it's in trouble, that's when shit is about to go sideways. It won't cause the end of the world, but death, destruction, and maybe even another war.

[-] MudMan@kbin.social 4 points 9 months ago

"Safeguarding AGI" is as much of a concern as making sure the terrorists don't get warp drives.

But then, armies of killer teenagers radicalized by playing Mortal Kombat was never going to be a thing, either, and we spent decades arguing with politicians about that one. Once the PR nightmare is out it's really hard to put back in the box. Lamp. Bag. Whatever metaphor I'm going for here.

[-] rip_art_bell@lemmy.world 4 points 9 months ago* (last edited 9 months ago)

Well, to be fair, from what I've been hearing, one of the big points of contention of the internal battle at OpenAI was safety itself. Like some on the board being concerned about the "make your ChatGPT" feature debuting at the dev conference thing. So at least some people care. Which is more than I would have thought...

I do like the word "chucklefucks", though.

[-] hoshikarakitaridia@sh.itjust.works 4 points 9 months ago

Totally agree. Looks like the whole fight was the OpenAI board firing Altman over safety concerns, but unexpectedly the whole team took his side.

[-] shiveyarbles@beehaw.org 4 points 9 months ago

There are plenty of people who say they care. They're all lying tho

[-] AdrianTheFrog@lemmy.world 4 points 8 months ago

This should not be a surprise to anyone

[-] 31337@sh.itjust.works 3 points 9 months ago

Agree. Ever since they started lobbying politicians, it's been clear that "safety" is just a pretext for regulatory capture.

[-] Kidplayer_666@lemm.ee 2 points 9 months ago

If they wanted to safeguard AI, they would actually make the models public. Bad actors are bound to get them anyway; hiding them behind secrecy is unlikely to work. And I mean, an AI could make a virus infecting most infrastructure on the planet (Amazon and Google data centres) and then shut it down or use it for its own purposes. As several programming memes point out, the entire modern web infrastructure is surprisingly dependent on just a few APIs and tools.

[-] vexikron@lemmy.zip 2 points 7 months ago

All of these people who make being very concerned about AGI part of their public, and apparently also their actual, personas are hypocrites at best and con artists at worst.

How many of such people express vehement public opposition to granting automated military systems the ability to decide whether to fire or not fire?

We are /just about/ to blow through that barrier, into building software systems that totally remove the human operator from that part of the equation.

Then we end up pretty quickly with a SkyNet drone air force, and not long after that it's actually conceivable we end up with something like ED-209 as well, except it's a Boston Dynamics robot mule that can be configured either for hauling cargo or with a mounted rifle or grenade launcher or something like that.

this post was submitted on 20 Nov 2023
363 points (88.4% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion


founded 5 years ago