This isn't my field, and the undergraduate philosophy classes I took more than 20 years ago probably haven't left me well equipped to understand this paper. So I'll admit I'm probably out of my element, but I do want to understand.
That being said, I'm not reading this paper the same way you are.
But they've defined the AI-by-Learning problem in a specific way; the paper gives an informal definition of it. I read that definition as hinging on the need to sample from D, that is, to "learn."
But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that it's about an entire class of methods: learning from a perfect sample of intelligent outputs in order to itself mimic intelligent outputs.
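For what it's worth, here's a toy sketch of how I'm picturing that framing. To be clear, this is my own illustration and not the paper's formalism: D, sample_from_D, the memorizing learner, and the tiny situation space are all made up for the example. The only point is that the learner never sees D itself, only finitely many samples from it, and is judged on how well its output mimics fresh draws from D.

```python
import random

# Toy sketch of the "learn by sampling from D" framing (my own illustration,
# not the paper's formalism). D stands in for a distribution over
# (situation, behaviour) pairs representing "humanlike" behaviour.
# The learner never sees D itself, only finitely many samples from it.

N_SITUATIONS = 1_000  # made-up, tiny situation space for the toy example


def sample_from_D(rng: random.Random) -> tuple[int, int]:
    """Draw one (situation, behaviour) example from the stand-in oracle D."""
    situation = rng.randrange(N_SITUATIONS)
    behaviour = (situation * 2654435761) % 10  # opaque 'ground-truth' behaviour
    return situation, behaviour


def learn(examples: list[tuple[int, int]]):
    """A placeholder 'learning algorithm': memorize the examples it has seen."""
    table = dict(examples)
    return lambda situation: table.get(situation, 0)  # default guess when unseen


def agreement(model, rng: random.Random, trials: int = 10_000) -> float:
    """Estimate how often the learned model matches fresh draws from D."""
    hits = sum(model(s) == b for s, b in (sample_from_D(rng) for _ in range(trials)))
    return hits / trials


rng = random.Random(0)
examples = [sample_from_D(rng) for _ in range(5_000)]
model = learn(examples)
print(f"agreement with fresh samples from D: {agreement(model, rng):.1%}")
```

In this toy, brute memorization looks great only because the situation space is tiny; my understanding of the paper's result is that it's about whether any tractable learner can pull this off when the space of situations and behaviours is astronomically large.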
The paper defines it formally, but as I read it, that formalization is just defining an approximation of human behavior and saying that achieving that approximation by inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which human behavior would by definition satisfy. That's where the circular reasoning comes in, and whether human behavior fits some other definition of AGI doesn't actually affect the proof here. They're proving that learning to be human-like is intractable, not that achieving AGI is itself intractable.
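If it helps, here's roughly how I'd write down that "formalized approximation" in PAC-learning style. This is my own paraphrase, not the paper's actual definition, and the symbols (A for the learning algorithm, S for the sample of size n, ε and δ for the error and confidence parameters) are my choices:

```latex
\Pr_{S \sim D^{n}}\Big[\ \Pr_{s \sim D}\big[\, A(S)(s) = \text{humanlike behaviour on } s \,\big] \ \ge\ 1 - \varepsilon \ \Big] \ \ge\ 1 - \delta
```

Read that way, the intractability claim (as I understand it) is that no algorithm A within reasonable resource bounds can satisfy this guarantee in general, which is exactly a statement about learning to be human-like rather than about whether human-like behaviour could exist at all.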
I think it's an important distinction, if I'm reading it correctly. But if I'm not, I'm also happy to be proven wrong.