Report on the Fourth Conference on Artificial General Intelligence
September 3, 2011 by Ben Goertzel
The Fourth Conference on Artificial General Intelligence (AGI-11) was held on Google’s campus in Mountain View (Silicon Valley), California, in the first week of August 2011. This was the largest AGI conference yet, with more than 200 people attending, and it had a markedly different tone from the prior conferences in the series.
A number of participants noted that there was less of an out-of-the-mainstream, wild-eyed maverick feel to the proceedings, and more of a sense of “business as usual” or “normal science” — a sense in the air that AGI is obviously an important, feasible R&D area to be working on, albeit a bit “cutting-edge” compared to the majority of (more narrowly specialized) AI R&D.
I think this difference in tone was due partly to the Google and Bay Area location, and partly to the fact that the conference was held in close spatiotemporal proximity to two larger and older AI-related conferences, AAAI-11 and IJCNN-11. IJCNN was held just before AGI in San Jose, and AAAI just after AGI in San Francisco — so a number of academic AI researchers who usually attend the larger conferences, but not AGI, decided to try out AGI as well this year. Complementing this academic group, there was also a strong turnout from the Silicon Valley software industry and from the Bay Area futurist and transhumanist community.
Tutorials
The first day of the conference was occupied by tutorials on the LIDA and OpenCog systems and on the Church probabilistic programming language. The second day comprised two workshops: one on self-programming in AGI systems, and the other the traditional “Future of AGI” workshop, which was particularly lively due to the prominence of future-of-technology issues in Bay Area culture (the conference site was not far from the headquarters of a variety of futurist organizations such as Singularity University, the Singularity Institute for AI, and the Foresight Institute). Most of the talks from the Future of AGI workshop have corresponding papers or presentations on the conference’s schedule page — with themes such as:
Steve Omohundro, Design Principles for a Safe and Beneficial AGI Infrastructure
Anna Salamon, Can Whole Brain Emulation help us build safe AGI?
Carl Shulman, Risk-averse preferences as AGI safety technique
Mark Waser, Rational Universal Benevolence: Simpler, Safer, and Wiser than “Friendly AI”
Itamar Arel, Reward Driven Learning and the Risk of an Adversarial Artificial General Intelligence
Ahmed Abdel-Fattah & Kai-Uwe Kuehnberger, Remarks on the Feasibility and the Ethical Challenges of a Next Milestone in AGI
Matt Chapman, Maximizing The Power of Open-Source for AGI
Ben Goertzel and Joel Pitt, Nine Ways to Bias Open-Source AGI Toward Friendliness
Norvig, Dickmanns, Sloman, Boyden, Shi
The final two days constituted the conference proper, with technical talks corresponding to papers in the conference proceedings, which were published in Springer’s Lecture Notes in Artificial Intelligence series. Videos of the conference talks, including the workshops and tutorials, will be posted by Google over the coming months and linked from the conference website.
Peter Norvig, Google’s director of research and co-author of the best-selling AI textbook (whose latest edition does mention AGI, albeit quite briefly), gave brief opening remarks. He didn’t announce any grand Google AGI initiatives, making clear that his own current research focus lies elsewhere than the direct pursuit of powerful artificial general intelligence. Yet he also made clear that he sees much of the research going on at Google as part of an overall body of work that is ultimately building toward advanced AGI.
The four keynote speeches highlighted different aspects of the AGI field, as well as the strongly international nature of the AGI community.
Ernst Dickmanns, from Germany, reviewed his pioneering work on self-driving cars from the 1980s, which in some ways was more advanced than the current self-driving car work being conducted by Google and others. He wrapped up with a discussion of general lessons for AGI implied by his experience with self-driving cars, including the importance of adaptive learning and of “dynamic vision” that performs vision in close coordination with action.
Aaron Sloman, from Britain, discussed “toddler theorems” — the symbolic understandings of the world that young children learn and create based on their sensorimotor and cognitive experiences. He challenged the researchers in the audience to understand and model the kind of learning and world-modeling that crows or human babies do, and sketched some concepts that he felt would be useful for this sort of modeling.
MIT’s Ed Boyden reviewed his recent work on optogenetics, one of the most exciting and rapidly developing technologies for imaging the brain. This is a very important area: as was noted in the conference’s Special Session on Neuroscience and AGI, the main factor currently holding back the design of AGI systems based on human brain emulation is the lack of appropriate tools for measuring what’s happening in the brain. We can’t yet measure the brain well enough to construct detailed dynamic brain simulations, and Boyden’s work is one of the approaches that, step by step, is seeking to overcome this barrier.
Zhongzhi Shi, from the Chinese Academy of Sciences in Beijing, described his integrative AGI architecture, which incorporates aspects from multiple Western AGI designs into a novel overall framework. He also stressed the importance of cloud computing for enabling practical experimentation with complex AGI architectures like the one he described.
Neuroscience and AGI
In addition to the regular technical AGI talks, there was a Special Session on Neuroscience and AGI, led by neuroscientist Randal Koene, probably the world’s most prominent advocate of mind uploading, or what he now calls “substrate-independent minds.” Most of the AGI field today is only loosely connected to neuroscience; yet, in principle, nearly every AGI researcher would agree that careful emulation of the brain is one potential path to AGI, and one with a high probability of eventually succeeding.
The Special Session served to bring neuroscientists and AGI researchers together, to see what they could learn from each other. Neuroscience is not yet at the point where one can architect an AGI based solely on neuroscience knowledge, yet there are many areas where AGI can draw inspiration from neuroscience.
Demis Hassabis emphasized that AGI currently lacks any strong theory of how sensorimotor processing interfaces with abstract conceptual processing, and suggested some ways that neuroscience may provide inspiration here, e.g., analysis of cortical-hippocampal interactions. Another point raised in discussion was that reinforcement learning research could draw inspiration from the various ways in which the brain treats internal, intrinsic rewards (alerting or surprisingness) comparably to explicit external rewards.
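To make that last point concrete, here is a minimal Python sketch of one standard, curiosity-style way an intrinsic surprise signal can be fed into the very same learning update as external reward. This is a generic construction, not anything presented at the session; the toy environment, names, and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))               # action-value estimates
model = np.ones((n_states, n_actions, n_states))  # transition counts (Laplace prior)

alpha, gamma, beta = 0.1, 0.9, 0.5  # learning rate, discount, curiosity weight

def step(s, a):
    """Toy environment (purely illustrative): action 1 from the last state pays off."""
    s2 = (s + a) % n_states
    r_ext = 1.0 if (s == n_states - 1 and a == 1) else 0.0
    return s2, r_ext

s = 0
for t in range(2000):
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    s2, r_ext = step(s, a)

    # Intrinsic reward: the surprisal of the observed transition under the
    # agent's learned world model -- the "alerting" signal mentioned above.
    p = model[s, a, s2] / model[s, a].sum()
    r_int = -np.log(p)
    model[s, a, s2] += 1

    # Both reward streams enter one and the same temporal-difference update,
    # mirroring the suggestion that the brain handles them comparably.
    r = r_ext + beta * r_int
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
```

The design point is simply that nothing downstream of the reward sum needs to know which portion came from the world and which from the agent’s own surprise.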
Kurzweil and Solomonoff prizes
Three prizes were awarded at the conference: two Kurzweil Prizes and one Solomonoff Prize.
The Kurzweil Prize for Best AGI Paper was awarded to Linus Gisslen, Matt Luciw, Vincent Graziano and Juergen Schmidhuber for their paper entitled Sequential Constant Size Compressors and Reinforcement Learning. This paper represents an effort to bridge the gap between the general mathematical theory of AGI (which in its purest form applies only to AI programs that achieve massive general intelligence by using unrealistically large amounts of processing power) and the practical business of building useful AGI programs.
Specifically, one of the key ideas in the general theory of AGI is “reinforcement learning” — learning via reward signals from the environment — but the bulk of the mathematical theory of reinforcement learning assumes that the AI system has complete visibility into its environment. Obviously this is unrealistic: no real-world intelligence has full knowledge of its environment. The award-winning paper describes a novel, creative method of using recurrent neural networks to apply reinforcement learning to partially observable environments, indicating a promising research direction for those who wish to make reinforcement learning algorithms scale up to real-world problems, such as those human-level AGIs will have to deal with.
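To illustrate the general recipe (and emphatically not the authors’ actual algorithm), here is a rough numpy sketch: a recurrent network compresses the observation history into a constant-size state, and ordinary Q-learning then operates on that state. For simplicity the recurrent “compressor” here is fixed and random, reservoir-style, whereas the award-winning paper trains its compressor; the toy task and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

obs_dim, hid_dim, n_actions = 3, 32, 2
W_in = rng.normal(0, 0.5, (hid_dim, obs_dim))   # fixed random input weights
W_rec = rng.normal(0, 0.1, (hid_dim, hid_dim))  # fixed random recurrent weights
w_q = np.zeros((n_actions, hid_dim))            # learned linear Q-function

alpha, gamma, eps = 0.05, 0.95, 0.1

def env_step(t, a):
    """Toy partially observable task: reward depends on a cue shown several
    steps earlier and no longer visible, so memory of the history is required."""
    cue = (t // 10) % 2
    obs = np.zeros(obs_dim)
    obs[cue if t % 10 == 0 else 2] = 1.0        # cue visible only every 10th step
    r = 1.0 if (t % 10 == 9 and a == cue) else 0.0
    return obs, r

h = np.zeros(hid_dim)
obs, _ = env_step(0, 0)
for t in range(5000):
    h = np.tanh(W_in @ obs + W_rec @ h)         # constant-size summary of history
    q = w_q @ h
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(q.argmax())
    obs2, r = env_step(t + 1, a)
    h2 = np.tanh(W_in @ obs2 + W_rec @ h)
    # Standard Q-learning, applied to the compressed state rather than to the
    # (insufficient) raw observation.
    td = r + gamma * (w_q @ h2).max() - q[a]
    w_q[a] += alpha * td * h
    obs = obs2
```

The point of the construction is that once the history is squeezed into a fixed-size state, the standard fully observable RL toolkit applies unchanged.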
The 2011 Kurzweil Award for Best AGI Idea was awarded to Paul Rosenbloom for his paper entitled From Memory to Problem Solving: Mechanism Reuse in a Graphical Cognitive Architecture. Rosenbloom has a long history in the AI field, including co-creating the classic Soar AI architecture in the 1980s. While still supporting the general concepts underlying his older AI work, his current research focuses more heavily on scalable probabilistic methods — but more flexible and powerful ones than Bayes nets, Markov logic networks, and other currently popular techniques.
Extending his previous work on factor graphs as a core construct for scalable uncertainty management in AGI systems, his award-winning paper shows how factor-graph mechanisms devised for memory can also be used for problem-solving tasks. In the human brain there is no crisp distinction between memory and problem solving, so it is conceptually satisfying to see AGI approaches that also avoid this sort of crisp distinction. It is as yet unclear to what extent any single mechanism can achieve all the capabilities needed for human-level AGI. But taking a single powerful and flexible mechanism like factor graphs and seeing how far one can push it is a very interesting and valuable research direction, and Dr. Rosenbloom’s paper is a wonderful example of this sort of work.
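As a toy illustration of this mechanism-reuse idea (and not, to be clear, Rosenbloom’s actual graphical architecture), the sketch below runs the same sum-product message-passing computation over a two-variable factor graph for two different purposes: conditioning on a known value, which looks like memory retrieval, and summing out an unknown, which looks like inference. All factors are made up.

```python
import numpy as np

# Two binary variables A and B, a prior factor on A, and a pairwise factor f(A, B).
prior_A = np.array([0.7, 0.3])
f_AB = np.array([[0.9, 0.1],
                 [0.2, 0.8]])    # f_AB[a, b]

def marginal_B(evidence_A=None):
    """Sum-product on this tiny graph: pass a message from A through f_AB to B."""
    msg_A = prior_A.copy()
    if evidence_A is not None:   # memory-style use: clamp A to a known value
        msg_A = np.zeros(2)
        msg_A[evidence_A] = 1.0
    msg_to_B = msg_A @ f_AB      # sum over a of msg_A(a) * f(a, b)
    return msg_to_B / msg_to_B.sum()

print(marginal_B(evidence_A=1))  # "retrieval": what does knowing A=1 say about B?
print(marginal_B())              # "problem solving": infer B with nothing clamped
```

One message-passing routine, two cognitive-sounding uses; Rosenbloom’s paper develops this kind of reuse far beyond such a toy.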
The 2011 Solomonoff AGI Theory Prize — named in honor of AGI pioneer Ray Solomonoff, who passed away in 2010 — was awarded to Laurent Orseau and Mark Ring, for a pair of papers titled Self-Modification and Mortality in Artificial Agents and Delusion, Survival, and Intelligent Agents. These papers explore aspects of theoretical generally intelligent agents inspired by Marcus Hutter’s AIXI model (a theoretical AGI system that would achieve massive general intelligence using infeasibly large computational resources, but that may potentially be approximated by more feasible AGI approaches).
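For readers who want the formal flavor: in Hutter’s notation, the AIXI agent at cycle k chooses its action roughly as follows (this is the standard expectimax formulation from Hutter’s work, paraphrased from memory rather than taken from either prize paper):

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \left[ r_k + \cdots + r_m \right]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over candidate environment programs, ℓ(q) is the length of q, and m is the horizon: the agent maximizes expected future reward under a Solomonoff-style mixture over all environments consistent with its history, which is exactly why it requires infeasible computation, and why analyses like Orseau and Ring’s proceed mathematically rather than experimentally.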
The former paper considers some consequences of endowing an intelligent agent of this nature with the ability to modify its own code; and the latter analyzes aspects of what happens when this sort of theoretical intelligent agent is interfaced with the real world. These papers constitute important steps in bridging the gap between the abstract mathematical theory of AGI, and the real-world business of creating AGI systems and embedding them in the world.
Hybridization
While there was a lot of strong and interesting research presented at the AGI-11 conference, I think it’s fair to say that there were no dramatic breakthroughs presented. Rather, there was more of a feeling of steady incremental progress. Also, compared to previous years, there was less of a feeling of separate, individual research projects working in a vacuum — the connections between different AI approaches seem to be getting clearer each year, in spite of the absence of a clearly defined common vocabulary or conceptual framework among various AGI researchers.
Links were built between abstract AGI theory and practical work, and between neuroscience and AGI engineering. Hybridization of previously wholly different AGI architectures was reported (e.g., the paper I presented, describing the incorporation of aspects of Joscha Bach’s MicroPsi system in my OpenCog system). All signs of a field that’s gradually maturing.
A Sputnik of AGI
These observations lead me inexorably to some more personal musings on AGI. I can’t help wondering: Can we get to human-level AGI and beyond via step-by-step, incremental progress, year after year?
It’s a subtle question, actually. It’s clear that we are far from having a rigorous scientific understanding of how general intelligence works. At some point, there’s going to be a breakthrough in the science of general intelligence — and I’m really looking forward to it! I even hope to play a large part in it. But the question is: will this scientific breakthrough come before or after the engineering of an AGI system with powerful, evidently near-human-level capability?
It may be that we need a scientific breakthrough in the rigorous theory of general intelligence before we can engineer an advanced AGI system. But … I presently suspect that we don’t. My current opinion is that it should be possible to create a powerful AGI system by proceeding step by step from the current state of knowledge — doing engineering inspired by an integrative conceptual, though not fully rigorous, understanding of general intelligence.
If this is right, then we can build a system that will have the impact of a “Sputnik of AGI” by combining variants of existing algorithms in a reasonable cognitive architecture, guided by a solid conceptual understanding of mind. And then, by studying this Sputnik AGI system and its successors and variants, we will be able to arrive at the foreseen breakthrough in the science of general intelligence. This, of course, is what my colleagues and I are trying to do with the OpenCog project — but the general point I’m making here is independent of our specific OpenCog AGI design.
Anyway, that’s my personal view of the near- to mid-term future of AGI, which I advocated in asides during my OpenCog tutorial, and various discussions at the Future of AGI Workshop. But my view on these matters is far from universal among AGI researchers — even as the AGI field matures and becomes less marginal, it is still characterized by an extremely healthy diversity of views and attitudes! I look forward to ongoing discussions of these matters with my colleagues in the AGI community as the AGI conference series proceeds and develops.
Mostly, it’s awesome to even have a serious AGI community. It’s hard sometimes to remember that 10 years ago this was far from the case!
Read more: http://goo.gl/FV90i