[-] ChairmanMeow@programming.dev 11 points 13 hours ago

Eh, I have a few things from Kickstarter that were successful. Exploding Kittens is probably the most successful of the ones I own.

[-] ChairmanMeow@programming.dev 4 points 22 hours ago

Isn't Umbraco the one that struggled loading a page that didn't exist, taking several seconds to load the PageNotFound page and causing very high CPU load in the meantime? Like, an issue they had for years?

Somehow I don't have great faith in that solution, but perhaps it's improved in recent years.

[-] ChairmanMeow@programming.dev 1 points 2 days ago

RFCs aren't really law, you know. Implementations can deviate; it just means less compatibility.

[-] ChairmanMeow@programming.dev 2 points 2 days ago

> What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.

This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in tractable time, which is a known NP-hard problem. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could reuse the exact same proof presented in the paper).
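Schematically, the reduction being described here works like this (my paraphrase of the argument above, not the paper's own notation):

$$\text{AI-by-Learning} \in \mathsf{P} \;\Longrightarrow\; \text{known NP-hard problem} \in \mathsf{P} \;\Longrightarrow\; \mathsf{P} = \mathsf{NP}$$

Taking the contrapositive: as long as P ≠ NP, no tractable (polynomial-time) learning procedure of this kind can exist.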

They merely mentioned these methods to show that it doesn't matter which method you pick. The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.

> But it's easy to just define general intelligence as something approximating what humans already do.

No, General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.

[-] ChairmanMeow@programming.dev 4 points 2 days ago

Yes, hence we're not "right around the corner"; it's a figure of speech that uses spatial distance as a metaphor for how far away we are from something.

[-] ChairmanMeow@programming.dev 2 points 2 days ago

Not just that, they've proven it's not possible using any tractable algorithm. If it were, you'd run into a contradiction. Their example covers basically every machine learning algorithm we know, but the proof generalizes.

[-] ChairmanMeow@programming.dev 4 points 3 days ago

> Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

That's assuming that we are a general intelligence. I'm actually unsure if that's even true.

> That doesn't mean they've proven there's no pathway at all.

True, they've only calculated it'd take perhaps millions of years, which might be accurate; I'm not sure what kind of compute global evolution over trillions of organisms across millions of years adds up to. And yes, perhaps some breakthrough happens, but it's still very unlikely and definitely not "right around the corner" as the AI-bros claim (and that near-future claim is what the paper set out to disprove).

[-] ChairmanMeow@programming.dev 5 points 3 days ago

Haha it's good that you do though, because now there's a helpful comment providing more context :)

[-] ChairmanMeow@programming.dev 8 points 3 days ago

I was more hinting that through conventional computational means we're just not getting there, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where that might come from, but it's still far-fetched.

But yes, you're absolutely right that QC in general isn't a magic bullet here.

[-] ChairmanMeow@programming.dev 21 points 3 days ago

The actual paper is an interesting read. They present an actual computational proof, stating that even if you have essentially infinite memory, a computer that's a billion times faster than what we have now, perfect training data that you can sample without bias, and you're only aiming for an AGI that performs slightly better than chance, it's still completely infeasible to do within the next few millennia. Ergo, it's definitely not "right around the corner". We're light-years off still.

They prove this by showing that if you could train an AI in a tractable amount of time, you would have proven P=NP. Thus, training an AI is NP-hard. Given the minimum amount of data that needs to be learned to do better than chance, this results in a ridiculously long training time well beyond the realm of what's even remotely feasible. And that's assuming you don't even have to deal with all the constraints that exist in the real world.
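To get a feel for the scale (illustrative numbers of my own, not figures from the paper): suppose the training work grows exponentially and a given problem size needs on the order of $2^{100}$ basic operations. A machine doing $10^{18}$ operations per second (roughly a billion times a typical desktop core) would still need

$$\frac{2^{100}}{10^{18}\,\text{ops/s}} \approx 1.3 \times 10^{12}\,\text{s} \approx 40{,}000\ \text{years},$$

and nudging that exponent up even slightly pushes the total past any number of millennia.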

We perhaps need some breakthrough in quantum computing in order to get closer. That is not to say that AI won't improve; it'll get a bit better. But there is a computationally proven ceiling here, and breaking through that is exceptionally hard.

It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we're not as smart as we think we are either.

[-] ChairmanMeow@programming.dev 3 points 4 days ago

Some have some kind of date tracking built in. But it's fairly rare.

[-] ChairmanMeow@programming.dev 20 points 5 days ago

Trump's attempt at making other Muslim countries make peace with Israel without properly addressing the Palestinian question is something Hamas cited as part of their 'casus belli', the reason they attacked Israel. They feared that if their supposed "allies" made peace, the Palestinian cause would be lost.

Trump didn't really de-escalate tensions; rather, he provoked some (e.g. the embassy move) and tried to ignore other rising tensions because addressing them would be too difficult. One can easily argue his actions were the indirect cause of the current mess.
