Report from the Panel Chairs (PDF)
Slides from Briefing at IJCAI 2009 (PDF)
The AAAI President has commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel will consider the nature and timing of potential AI successes, and will define and address societal challenges and opportunities in light of these potential successes. In reflecting on the long term, panelists will review expectations and uncertainties about the development of increasingly competent machine intelligences, including the prospect that computational systems will achieve "human-level" abilities along a variety of dimensions, or surpass human intelligence in a variety of ways.
The panel will appraise societal and technical issues that would likely come to the fore with the rise of competent machine intelligence. For example, how might AI successes in multiple realms and venues lead to significant or perhaps even disruptive societal changes?
The committee’s deliberation will include a review of and response to concerns about the potential for loss of human control of computer-based intelligences and, more generally, the possibility of foundational changes in the world stemming from developments in AI. Beyond concerns about control, the committee will reflect on the potential socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, on changes in perceptions about machine intelligence, and on likely changes in human-computer relationships.
In addition to projecting forward and making predictions about outcomes, the panel will deliberate about actions that might be taken proactively over time in the realms of preparatory analysis, practices, or machinery so as to enhance long-term societal outcomes.
On issues of control and, more generally, the evolving human-computer relationship, writings such as statistician I. J. Good's on the prospect of an "intelligence explosion," followed by mathematician and science fiction author Vernor Vinge's essays on an inevitable march toward an AI "singularity," propose that major changes might flow from the unstoppable rise of powerful computational intelligences. Popular movies have portrayed computer-based intelligence to the public with attention-catching plots centering on the loss of control of intelligent machines. Well-known science fiction stories have included reflections (such as the "Laws of Robotics" described in Asimov's Robot Series) on the need for and value of establishing behavioral rules for autonomous systems. Discussion, media coverage, and anxieties about AI in the public and scientific realms highlight the value of investing more thought, as a scientific community, in perceptions, expectations, and concerns about long-term futures for AI.
The committee will study and discuss these issues and will address in its report the myths and potential realities of anxieties about long-term futures. Beyond reflecting on the validity of such concerns about disruptive futures among scientists and the lay public, the panel will consider the value of formulating guidelines for research and of creating policies that might constrain or bias the behaviors of autonomous and semiautonomous systems so as to address those concerns.
Cochairs
Eric Horvitz and Bart Selman
Panel
Margaret Boden, Craig Boutilier, Greg Cooper, Tom Dean, Tom Dietterich, Oren Etzioni, Barbara Grosz, Eric Horvitz, Toru Ishida, Sarit Kraus, Alan Mackworth, David McAllester, Sheila McIlraith, Tom Mitchell, Andrew Ng, David Parkes, Edwina Rissland, Bart Selman, Diana Spears, Peter Stone, Milind Tambe, Sebastian Thrun, Manuela Veloso, David Waltz, Michael Wellman
Focus Groups
- Pace, Concerns, Control, Guidelines (Chair: David McAllester)
- Potentially Disruptive Advances: Nature and Timing (Chair: Milind Tambe)
- Ethical and Legal Challenges (Chair: David Waltz)
Comments on the study should be sent to aifutures@aaai.org.