[-] BigMuffin69@awful.systems 20 points 3 months ago

e/acc bros in tatters today as Ol' Musky comes out in support of SB 1047.

Meanwhile, our very good friends line up to praise Musk's character. After all, what's the harm in trying to subvert a lil democracy/push white replacement narratives/actively harm lgbt peeps if your goal is to save 420^69 future lives?

Some rando points out the obvious tho... man who fled California 'due to regulation' (and ofc the woke mind virus) wants legislation enacted where his competitors are, instead of in the beautiful lone star state 🤠 🤠 🤠 🤠 🤠

[-] BigMuffin69@awful.systems 18 points 4 months ago

Ah, I see TWG made the rookie mistake of thinking they could endear themselves to internet bigots by carrying water for them. ^Also, fuck this nazi infested shithole. Absolute eye bleach.

[-] BigMuffin69@awful.systems 21 points 4 months ago

Wishful thinking on my part to think their sexism/eugenics posting was based on ignorance instead of deliberately being massive piles of shit. Don't let them know Iceland has the highest number of chess GMs per capita or else we'll get a 10,000 page essay about how proximity to volcanoes gives +20 IQ points.

[-] BigMuffin69@awful.systems 18 points 4 months ago

Without fail in the comments section, we have Daniel Kokotajlo (the philosophy student turned AI safety advocate who recently got canned at OAI) making the claim that "we [= Young Daniel and our olde friend Big Yud] are AI experts and believe that risking full scale nuclear war over data centers is actually highly rational™" :)

...anyways, what were we saying about David Gerard being a bad faith actor again?

[-] BigMuffin69@awful.systems 22 points 4 months ago* (last edited 4 months ago)

Holy fuck David, you really are living rent free in this SOB's head.

[-] BigMuffin69@awful.systems 19 points 4 months ago

https://www.nature.com/articles/d41586-024-02218-7

Might be slightly off topic, but an interesting result using adversarial strategies against RL-trained Go machines.

Quote: If humans are able to use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.

Me thinks the AI bros jumped the gun a little too early declaring victory on this one.

[-] BigMuffin69@awful.systems 21 points 5 months ago* (last edited 5 months ago)

"Reasoning: There is not a well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above."

Wow, what are the odds! The exact same transformer paradigm that OAI co-opted from Google is also the key to solving 'system 2' reasoning, metacognition, recursive self-improvement, and the symbol grounding problem! All they need is a couple trillion more dollars of VC investment, a couple of goat sacrifices here and there, and AGI will just fall out. They definitely aren't tossing cash into a bottomless money pit chasing a dead-end architecture!

... right?

[-] BigMuffin69@awful.systems 20 points 5 months ago

If you really wanna just throw some fucking spaghetti at the wall, YOU CAN DO THAT WITHOUT AI.

i have found I get .000000000006% less hallucination rate by throwing alphabet soup at the wall instead of spaghett, my preprint is on arXiV

[-] BigMuffin69@awful.systems 22 points 5 months ago

https://xcancel.com/AISafetyMemes/status/1802894899022533034#m

The same pundits have been saying "deep learning is hitting a wall" for a DECADE. Why do they have ANY credibility left? Wrong, wrong, wrong. Year after year after year.

Like all professional pundits, they pound their fist on the table and confidently declare AGI IS DEFINITELY FAR OFF and people breathe a sigh of relief. Because to admit that AGI might be soon is SCARY. Or it should be, because it represents MASSIVE uncertainty.

AGI is our final invention. You have to acknowledge the world as we know it will end, for better or worse. Your 20 year plans up in smoke. Learning a language for no reason. Preparing for a career that won't exist. Raising kids who might just... suddenly die. Because we invited aliens with superior technology we couldn't control.

Remember, many hopium addicts are just hoping that we become PETS. They point to Iain Banks' Culture series as a good outcome... where, again, HUMANS ARE PETS. THIS IS THEIR GOOD OUTCOME.

What's funny, too, is that noted skeptics like Gary Marcus still think there's a 35% chance of AGI in the next 12 years - that is still HIGH! (Side note: many skeptics are butthurt they wasted their career on the wrong ML paradigm.)

Nobody wants to stare in the face the fact that 1) the average AI scientist thinks there is a 1 in 6 chance we're all about to die, or that 2) most AGI company insiders now think AGI is 2-5 years away. It is insane that this isn't the only thing on the news right now.

So... we stay in our hopium dens, nitpicking The Latest Thing AI Still Can't Do, missing forests from trees, underreacting to the clear-as-day exponential. Most insiders agree: the alien ships are now visible in the sky, and we don't know if they're going to cure cancer or exterminate us. Be brave. Stare AGI in the face.

This post almost made me crash my self-driving car.

[-] BigMuffin69@awful.systems 21 points 5 months ago* (last edited 5 months ago)

No, they never address this. And as someone who personally works on large scale optimization problems for a living, I do think it's difficult for the public to understand that, no, a 10,000 IQ super machine will not be able to just "solve these problems" in a nanosecond like Yud thinks. And it's not like, well, the super machine will just avoid having to solve them. No. NP-hard problems are fucking everywhere. (Fun fact: for many problems of interest, even approximating the solution to a given accuracy is NP-hard, so heuristics don't even help.)

I've often found myself frustrated that more computer scientists who should know better simply do not address this point. If verifying solutions is exponentially easier than coming up with them for many difficult problems (all signs point to yes), and if a superintelligent entity actually did exist (I mean, does a SAT solver count as a superintelligent entity?), it would probably be EASY to control: it would have to spend eons and massive amounts of energy coming up with its WORLD_DOMINATION_PLAN.exe, and you can't hide a supercomputer grinding through a calculation like that. Someone running the machine, seeing it output TURN ALL HUMANS INTO PAPER CLIPS, would say, 'ah, we are missing a constraint here, it thinks this optimization problem is unbounded' <- this happens literally all the time in practice. Not the world domination part, but a poorly defined optimization problem that is unbounded. But again, it's easy to check that the solution is nonsense.
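To make the verify-vs-solve asymmetry concrete, here's a toy sketch in plain Python (subset-sum standing in for the NP-hard problem; all names are mine, purely illustrative): checking a proposed certificate takes linear time, while finding one by brute force blows up as 2^n.

```python
from collections import Counter
from itertools import combinations

def verify(nums, target, candidate):
    # Checking a certificate is cheap: multiset containment + one sum, O(n).
    return not (Counter(candidate) - Counter(nums)) and sum(candidate) == target

def solve(nums, target):
    # Finding a certificate by brute force: O(2^n) subsets in the worst case.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset hits the target

nums = [3, 34, 4, 12, 5, 2]
plan = solve(nums, 9)               # the expensive direction
print(plan, verify(nums, 9, plan))  # the cheap direction: instant audit
```

Which is the whole point: even if some oracle spits out WORLD_DOMINATION_PLAN.exe, auditing the output is the easy direction; conjuring it is the exponentially expensive one.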

I know Francois Chollet (THE GOAT) has talked about how there are no unending exponentials, and the faster the growth, the faster you hit constraints IRL (running out of data, running out of chips, running out of energy, etc.), and I've definitely heard professional shit poster Pedro Domingos explicitly discuss how NP-hardness strongly implies EA/LW type thinking is straight up fantasy, but the list of people I can think of off the top of my head who have discussed this is short.

Edit: bizarrely, one person who I didn't mention who has gone down this line of thinking is Ilya Sutskever; however, he has come to some frankly... uh... strange conclusions -> the only way to explain the successful performance of ML is to conclude that these models are Kolmogorov minimizers, i.e., by optimizing for loss over a training set, you are doing compression, and doing compression optimally means solving an undecidable problem. Nice theory. Definitely not motivated by bad sci-fi mysticism imbued with pure distilled hopium. But from my arm-chair psychologist POV, it seems he implicitly acknowledges that for his fantasy to come true, he needs to escape the limitations of Turing Machines, so he has to somehow shoehorn a method for hypercomputation into Turing Machines. Smh, this is the kind of behavior reserved for aging physicists, amirite lads? Yet in 2023, it seemed like the whole world was succumbing to this gaslighting. He was giving this lecture to auditoriums filled with tech bros, shilling this line of thinking to thunderous applause. I have olde CS prof friends who were like, don't we literally have mountains of evidence this is straight up crazy talk? Like, you can train an ANN to perform addition, and if you can look me straight in the eyes and say the absolute mess of weights that results looks anything like a Kolmogorov minimizer, then I know you are trying to sell me a bag of shit.
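For anyone who wants to poke at the addition example themselves, a minimal sketch (scikit-learn; the setup and numbers are mine, purely illustrative): fit a small MLP on f(a, b) = a + b and compare the pile of learned weights against the actual minimal program.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(5000, 2))  # random (a, b) pairs
y = X.sum(axis=1)                         # target: a + b

# Even a modest MLP lugs around thousands of parameters...
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(X, y)

n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("learned parameters:", n_params)          # thousands of floats
print("2 + 3 =", mlp.predict([[2.0, 3.0]])[0])  # close to 5, not exactly 5

# ...while the Kolmogorov-minimal "model" for the same task is, roughly:
add = lambda a, b: a + b  # a handful of bytes, exact for all inputs
```

The net will happily approximate addition, but nobody staring at those thousands of floats would call them the shortest program for the job.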

[-] BigMuffin69@awful.systems 20 points 6 months ago* (last edited 6 months ago)

This gem from 25-year-old Avital Balwit, Chief of Staff at Anthropic and researcher of "transformative AI" at Oxford’s Future of Humanity Institute, discussing the end of labour as she knows it. She writes:

"The general reaction to language models among knowledge workers is one of denial. They grasp at the ever diminishing number of places where such models still struggle, rather than noticing the ever-growing range of tasks where they have reached or passed human level. [wherein I define human level from my human level reasoning benchmark that I have overfitted my model to by feeding it the test set] Many will point out that AI systems are not yet writing award-winning books, let alone patenting inventions. But most of us also don’t do these things. "

Ah yes, even though the synthetic text machine has failed to achieve a basic understanding of the world generation after generation, it has been able to produce ever larger volumes of synthetic text! The people who point out that it still fails basic arithmetic tasks are the ones who are in denial, the god machine is nigh!

Bonus sneer:

Ironically, the first job to go the way of the dodo was researcher at FHI, so I understand why she's trying to get ahead of the fallout of losing her job as chief Dario Amodei wrangler at OpenAI 2: Electric Boogaloo.

Idk, I'm still workshopping this one.

🐍
