[LeCun][AGI][Choe] AGI vs. Human-Level AI
May 17, 2022
LeCun's thoughts on AGI are very insightful, as always!
Here are my thoughts.
(3) we are still missing some fundamental concepts:
I think this will be the key problem to solve. What are the “principles”?
(4) some of these concepts are possibly “around the corner” (like generalized self-supervised learning)
I hope these are within reach. Some of them may come from neuroscience, perhaps in collaboration with AI/ML.
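To make “generalized self-supervised learning” a little more concrete: the defining idea is that the supervision signal comes from the data itself (predict a held-out or future part of the input from the rest) rather than from external labels. Below is a minimal sketch of that idea; the toy data, dynamics, and variable names are my own illustration, not anything from LeCun's post.

```python
import numpy as np

# Toy self-supervised setup: the prediction target is just the data itself,
# shifted by one time step -- no external labels are involved.
rng = np.random.default_rng(0)
A_true = np.linalg.qr(rng.normal(size=(8, 8)))[0] * 0.9   # hidden dynamics of the "world"
obs = np.zeros((1000, 8))
for t in range(1, 1000):
    obs[t] = obs[t - 1] @ A_true + 0.1 * rng.normal(size=8)

x, y = obs[:-1], obs[1:]          # input = observation now, target = observation next
W = np.zeros((8, 8))              # linear "world model" to be learned
lr = 0.5
for _ in range(2000):
    grad = x.T @ (x @ W - y) / len(x)   # gradient of the mean-squared prediction error
    W -= lr * grad

print("avg squared prediction error:", np.mean((x @ W - y) ** 2))
```

The same recipe (predict the masked/future part of the input from the observed part) is what scales up, with much richer models, to modern self-supervised learners.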
And some thoughts on what LeCun thinks is important:
- scaling alone might not work: yep!
- learning like babies: yep!
- predict changes in the environment due to action: yep!
- predict the effect of a sequence of actions: yep!
- The others are generally agreeable, except: “all of this in ways that are compatible with gradient-based learning”. I think this depends on whether intelligence is purely describable by objective functions and optimization, and I am not sure that it is. Also, there are non-gradient-based methods that are pretty powerful, such as evolutionary learning.
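As a concrete illustration of what a non-gradient-based method looks like, here is a minimal evolution-strategies sketch: candidate parameters are perturbed, evaluated, and the best one is kept, with no gradient computed anywhere. The toy fitness function, population size, and step size are my own choices, purely for illustration.

```python
import numpy as np

# Minimal (1, 20)-style evolution strategy: no gradients, only fitness evaluations.
rng = np.random.default_rng(0)

def fitness(params):
    # Toy objective (illustrative only): higher is better, optimum at params == 3.
    return -np.sum((params - 3.0) ** 2)

parent = np.zeros(10)             # current best parameter vector
sigma = 0.5                       # mutation step size
for gen in range(200):
    children = parent + sigma * rng.normal(size=(20, 10))   # mutate
    scores = np.array([fitness(c) for c in children])       # evaluate
    best = children[np.argmax(scores)]                       # select
    if fitness(best) > fitness(parent):                      # keep only improvements
        parent = best

print("best fitness found:", fitness(parent))
```

Nothing here requires the objective to be differentiable, which is exactly why such methods sit outside the “compatible with gradient-based learning” constraint.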
Finally, some notable omissions:
- Discussion of evolution is missing. Evolution gave rise to intelligence, and without considering evolution we may not be able to understand intelligence.
- On a related note, the importance of body morphology, and how the agent can change the environment itself (e.g., tool construction, agriculture, architecture), is not considered.
Quote from a Facebook post by Yann LeCun:
About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:
(0) there is no such thing as AGI. Reaching “Human Level AI” may be a useful goal, but even humans are specialized.
(1) the research community is making some progress towards HLAI
(2) scaling up helps. It’s necessary but not sufficient, because….
(3) we are still missing some fundamental concepts
(4) some of those new concepts are possibly “around the corner” (e.g. generalized self-supervised learning)
(5) but we don’t know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can’t predict how long it’s going to take to reach HLAI.
I really don’t think it’s just a matter of scaling things up.
We still don’t have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.
Some may believe scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe “reward is enough”.
Yet others believe that explicit symbol manipulation is necessary.
A few don’t believe gradient-based learning is part of the solution.
I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing, like babies.
- learn to predict how one can influence the world through taking actions.
- learn hierarchical representations that allow long-term predictions in abstract representation spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason and plan.
- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.
The solution is not just around the corner.
We have a number of obstacles to clear, and we don’t know how.