[-] kbal@fedia.io 19 points 5 months ago

The usual AI pumpers have suggested the bots are superhuman!

They are technically correct. The AI is superhuman when it comes to playing Go, by most measures. I don't know about "far" superhuman - the usual type of machine wins against professional human players every single time if the humans are trying to play well, but it's not as if it's off in another dimension playing moves we could never possibly comprehend. Many strong Go players probably still disagree with me there, but it is at least not as unlikely as some assumed at first that we can understand what it's doing. Its moves can usually be analysed and understood with enough effort, and where they can't, the difference measured in points won or lost in the game is often small. Its main advantage is being inhumanly precise, never making the kind of small errors in judgement that humans always do. Over the course of a lengthy game of Go that gradually adds up to an impressively large margin of victory.

KataGo is not an artificial general intelligence. It is a Go-playing intelligence. And this class of flaw that's been found in it is due to the particular algorithm it uses (essentially the same one as AlphaGo). It lacks basic human common sense, having found no need or ability to develop that in its training. Where humans playing the game can easily count how much space a group has and act accordingly, the program has only its strict Monte Carlo-based way of viewing the game and has no access to such basic general-purpose tools of reasoning. It can only consider one move at a time, and this lets it down in carefully constructed situations that do not normally occur in human play, since humans wouldn't fall for something so stupid.
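(To make the "counting space" point concrete: here's a minimal sketch of the explicit check a human does at a glance, counting a group's stones and liberties with a flood fill. This is just illustrative Python I made up for the comment; it isn't taken from KataGo and isn't how the engine represents the game internally.)

```python
# Minimal illustrative sketch (not KataGo code): count the stones and
# liberties of the group containing a given point, the explicit kind of
# "how much space does this group have" check a human does at a glance.

def group_and_liberties(board, row, col):
    """Return (stones, liberties) for the group at (row, col).

    board is a list of lists containing 'B', 'W', or '.' for empty points.
    """
    color = board[row][col]
    assert color in ("B", "W"), "start point must be a stone"
    size = len(board)
    stones, liberties = set(), set()
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in stones:
            continue
        stones.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":
                    liberties.add((nr, nc))
                elif board[nr][nc] == color:
                    stack.append((nr, nc))
    return stones, liberties


if __name__ == "__main__":
    # Tiny 5x5 example: the two-stone white group in the corner
    # has two liberties left.
    board = [list(row) for row in [
        "WW...",
        "B....",
        ".....",
        ".....",
        ".....",
    ]]
    stones, libs = group_and_liberties(board, 0, 0)
    print(f"{len(stones)} stones, {len(libs)} liberties")
```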

Its failing is much more narrow than those of the LLM chatbots that everyone loves so much, but not so different in character. The machines are superhumanly good at the things they're good at. That's not too surprising; so is a forklift. But when their algorithms fail them, in situations that to naive humans appear very similar to the ones they're good at, they're not good at all. When it works, it's superhuman in many ways. When it goes wrong, it's often wrong in ways that seem obviously stupid.

I suspect that this problem the machines have with playing Go would be an excellent example for the researchers to work with, since it's relatively easy to understand approximately why the machines are going wrong and what sort of thing would be required to fix it; and yet it's very difficult to actually solve the problem in a general way through the purely independent training that was the great achievement of AlphaGo Zero, rather than giving up and hard-coding a fix for this one thing specifically. With the much more numerous and difficult failure modes they have to work with, the LLM people lately seem busy hacking together crude and imperfect fixes for one thing at a time. Maybe if some of them have time to take a break from that, they could learn something from the game of Go.

[-] o7___o7@awful.systems 12 points 5 months ago

The AI is superhuman when it comes to playing Go, by most measures.

Except beating humans, apparently.

[-] Deebster@programming.dev -5 points 5 months ago* (last edited 5 months ago)

Humans can't beat AI at Go, aside from these exploits that we needed AI to tell us about first.

Lee Sedol managed to win one game against AlphaGo in 2016 (and AlphaGo Zero was beating AlphaGo 100-0 a year later). That was basically the last time humans got on the scoreboard.

[-] BigMuffin69@awful.systems 11 points 4 months ago* (last edited 4 months ago)

Humans can’t beat AI at Go, aside from these exploits

kek, reminds me of when I was a wee one and I'd 0 to death chain grab someone in smash bros. The lads would cry and gnash their teeth about how I was only winning b.c. of exploits. My response? Just don't get grabbed. I'd advise "superhuman" Go systems to do the same. Don't want to get cheesed out of a W? Then don't use a strat that's easily countered by monkey brains. And as far as designing an adversarial system to find these 'exploits', who the hell cares? There's no magic barrier between internalized and externalized cognition.

Just get good bruv.

[-] imadabouzu@awful.systems 7 points 4 months ago

I appreciate this perspective, especially

There’s no magic barrier between internalized and externalized cognition.

I think it's increasingly clear that cognition is networking: no matter how you are constructed, it's both internal and external, and in a sense the objects aren't the important thing (the relationships are).

Like, maybe there aren't shortcuts. If you want perfect Go play you may very well have to pay the full inductive price. And even then, congrats, but Go still exists.

It's interesting to see how Chess has continued to be relevant, hell, possibly even more popular than it's ever been, due to increased accessibility, alternative formats, and embracing the performance aspects of the game.
