lurker

joined 1 month ago
[–] lurker@awful.systems 9 points 1 day ago* (last edited 1 day ago)

oh hey I remember reading that Donald Knuth paper earlier today, when it got posted by an AI youtube channel as 'proof' AI is on the path to AGI

[–] lurker@awful.systems 6 points 1 day ago (1 children)

while OpenAI deserves every bit of flak they get, it's comical to see people who criticise OpenAI for creating a 'war machine' turn around and praise Anthropic, when they were, by their own admission no less, the first people to start using AI for military purposes

[–] lurker@awful.systems 3 points 2 days ago* (last edited 2 days ago) (1 children)

That’s precisely what I was thinking. Obviously I don’t want everyone to die, but if you forced me to choose between apocalypse scenarios, I’m picking something painless and instantaneous (a super-virus that kills immediately, or a bullet to the head) over being slowly tortured to death by something like nuclear radiation or extreme heat

[–] lurker@awful.systems 7 points 2 days ago* (last edited 2 days ago) (1 children)

Altman claimed that the company would “amend our deal” to add the prohibition of “deliberate tracking, surveillance, or monitoring of US persons or nationals.”

...so the original statement was a lie then? the CEO who is notorious for being a liar lied? I am very surprised about this information.

[–] lurker@awful.systems 4 points 2 days ago* (last edited 2 days ago) (4 children)

I mean, yeah, I guess in a competition between getting a bullet directly through my brain, getting all my limbs chainsawed off with my head last, and being drowned in boiling water, the bullet would win every time. Though the perversely funniest outcome would be if superintelligence turns out to be completely impossible and we fuck ourselves over with garbage-to-mediocre AI embedded in all our critical infrastructure

[–] lurker@awful.systems 9 points 2 days ago* (last edited 2 days ago)

another Onion banger for these trying times

“Then you wake up in a cold sweat and can’t breathe at all, almost like you’re drowning—I guess from the weight of untold mobs of people leaping on you and ripping you apart”

the real Scam Altman would never feel any kind of remorse or emotion about this

[–] lurker@awful.systems 7 points 3 days ago* (last edited 3 days ago) (15 children)

This piece is on how doomers and rationalists have made everything worse with their "AGI is nigh" shtick, ending up giving AI companies way more power than they should have and getting chatbots into the military, where they will almost certainly fuck up and kill people

[–] lurker@awful.systems 9 points 3 days ago (1 children)

God I hate Anthropic defenders pretending they're the saints of the Earth

[–] lurker@awful.systems 4 points 4 days ago* (last edited 4 days ago) (1 children)

I do think the "all of humanity" stuff is a little overblown, but this is legitimately dumb and dangerous: it will get a ton of innocents killed and allow the military to dodge accountability. Letting ChatGPT potentially run an autonomous weapon with zero oversight is phenomenally stupid, and the tech is nowhere near reliable enough to pull off the kind of precision and decision-making military campaigns require, which is what Marcus is saying

[–] lurker@awful.systems 8 points 5 days ago

this truly is the dumbest timeline huh

 

this was already posted on reddit sneerclub, but I decided to crosspost it here so you guys wouldn’t miss out on Yudkowsky calling himself a genre-savvy character, and taking what appears to be a shot at the Zizians

 

originally posted in the thread for sneers not worth a whole post, but then I changed my mind and decided it is worth a whole post, because it is pretty damn important

Posted on r/HPMOR roughly one day ago

full transcript:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

 

I searched for “eugenics” on yud’s xcancel (I will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about?
