AAAI 2016 Spring Symposia
March 21–23, 2016
Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department
- AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics
- Challenges and Opportunities in Multiagent Learning for the Real World
- Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform
- Ethical and Moral Considerations in Non-Human Agents
- Intelligent Systems for Supporting Distributed Human Teamwork
- Observational Studies through Social Media and Other Human-Generated Content
- Well-Being Computing: AI Meets Health and Happiness Science
AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics
AI has the potential to mitigate human error by reducing car accidents, airplane accidents, and other mistakes made mindfully or inadvertently by humans. One worry about this bright future is that jobs may be lost. Another is the loss of human control. Despite the loss of all aboard several commercial airliners in recent years, commercial airline pilots reject being replaced by AI.
An even greater concern, raised by physicist Stephen Hawking, entrepreneur Elon Musk, and Bill Gates, is the existential threat that AI may pose to humanity. Alex Garland, director of the recent film Ex Machina, has countered their warnings.
Across a wide range of occupations and industries, human error is the primary cause of accidents. In general aviation, the FAA attributed accidents primarily to stalls and controlled flight into terrain. Exacerbating the sources of human error, safety is one area where organizations often skimp to save money. The diminution of safety coupled with human error led to the 2010 explosion that doomed the Deepwater Horizon in the Gulf of Mexico. Human error emerges as a top safety risk in the management of civilian air traffic control. Human error was also the cause attributed to the sinking of Taiwan’s Ocean Researcher V last fall. And human behavior is a leading cause of cyber breaches.
Humans cause accidents through a lack of situational awareness, convergence on incomplete beliefs, or emotional decision making (for example, the Iranian Airbus erroneously downed by the USS Vincennes in 1988). Other factors contributing to human error include poor problem diagnoses; poor planning, communication, and execution; and poor organizational functioning.
We want to explore the human role in causing accidents and the use of AI in mitigating human error; in reducing problems with teams, such as suicide (for example, the Germanwings copilot, Andreas Lubitz, who deliberately crashed his commercial aircraft, killing all 150 people aboard); and in reducing mistakes by military commanders (for example, the 2001 sinking of the Japanese fisheries training ship Ehime Maru by the USS Greeneville).
In this symposium, we want a rigorous view of AI and its possible applications to mitigate human error, to find anomalies in human operations, and to determine, when teams have gone awry, whether and how AI should intercede in the affairs of humans.
Papers should address “AI and the mitigation of human error,” specify the relevance of their topic to AI, or state how AI can be used to address their issue.
Organizing Committee
Ranjeev Mittu, Naval Research Laboratory, ranjeev.mittu@nrl.navy.mil; Gavin Taylor, United States Naval Academy, taylor@usna.edu; Don Sofge, Naval Research Laboratory, don.sofge@nrl.navy.mil; and W. F. Lawless, Technical Consultant, w.lawless@icloud.com
For More Information
For more information, please visit the supplemental symposium website.
Challenges and Opportunities in Multiagent Learning for the Real World
Developing efficient methods for multiagent learning has been a long-standing research focus in the artificial intelligence, game theory, control, and neuroscience communities. As a growing number of agents are deployed in complex environments for scientific research and human well-being, there are increasing demands to design efficient learning algorithms that can be used in these real-world settings (including accounting for uncertainty, partial observability, sequential settings and communication restrictions). These challenges exist in many domains, such as underwater exploration, planetary navigation, robot soccer, stock-trading systems, and e-commerce.
Multiagent learning has had many successes, but significant challenges remain. For this symposium, we are interested both in improving existing methods and in integrating methods from different areas of AI.
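As a small illustration of the kind of methods in scope, the sketch below shows independent Q-learning in a repeated two-player coordination game, a classic baseline that exposes the nonstationarity each agent faces when other agents are also learning. The game, payoffs, and parameters are illustrative assumptions, not anything prescribed by the symposium.

```python
# A minimal sketch of independent Q-learning in a repeated two-player
# coordination game; the game and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def payoff(a1, a2):
    # Both agents are rewarded for choosing the same action, penalized otherwise.
    return (1.0, 1.0) if a1 == a2 else (-1.0, -1.0)

n_actions, alpha, epsilon = 2, 0.1, 0.1
q1 = np.zeros(n_actions)   # each agent keeps its own action-value estimates
q2 = np.zeros(n_actions)

def choose(q):
    # Epsilon-greedy action selection.
    return int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(q))

for _ in range(5000):
    a1, a2 = choose(q1), choose(q2)
    r1, r2 = payoff(a1, a2)
    # Each agent updates independently, treating the other agent as part of a
    # nonstationary environment -- the core difficulty noted above.
    q1[a1] += alpha * (r1 - q1[a1])
    q2[a2] += alpha * (r2 - q2[a2])

print("Agent 1 values:", q1.round(2))
print("Agent 2 values:", q2.round(2))
```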
Topics
Topics of interest include the following:
- Learning in sequential settings and dynamic environments (such as stochastic games, decentralized POMDPs, and their variants)
- Learning with partial observability
- Learning with various communication limitations
- Learning in ad-hoc teamwork scenarios
- Scalability through swarms versus intelligent agents
- Bayesian nonparametric methods for multiagent learning
- Deep learning methods for multiagent learning
- Transfer learning in multiagent settings
- Applications of multiagent learning to real-world systems
The purpose of this symposium is to bring together researchers from the machine learning, control, neuroscience, robotics, and multiagent learning/planning communities to discuss how to broaden the scope of multiagent learning research and address the fundamental issues that hinder the applicability of multiagent learning to complex real-world problems.
Submissions
We solicit new work (up to 8 pages) and previously published work (2 pages) on related topics, as well as position papers (up to 8 pages). Additional details can be found at the supplemental symposium website.
Organizing Committee
Christopher Amato, University of New Hampshire, USA; Miao Liu, Massachusetts Institute of Technology, USA; Frans Oliehoek, University of Amsterdam, NL / University of Liverpool, UK; Karl Tuyls, University of Liverpool, UK; Jonathan P. How, Massachusetts Institute of Technology, USA; Peter Stone, University of Texas at Austin, USA
For More Information
For more information and submission instructions, please visit the supplemental symposium website.
Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform
Advances in sensor and communication technologies have facilitated progress in computing research on physical platforms. The field of human-robot interaction (HRI) has grown significantly in the last decade and a half and actively brings together an interdisciplinary community of researchers across computing, AI, robotics, and social science. However, progress has been limited by the lack of affordable, general-purpose, modular robot hardware with accompanying low-level software. Such a platform would enable large numbers of computing researchers to enter the field, develop and test algorithms, and conduct statistically significant user studies by deploying systems in the real world and collecting user data to inform further computational research in HRI.
The goal of this symposium is to kick off the process of community-informed design and development of a low-cost HRI hardware and software platform, to be developed by the symposium organizers with NSF support. The hardware design will involve advances in user-centric yet affordable design, safety, modularity, generality, and system integration. The software design will involve novel general algorithms and robust open-source code bases that enable the hardware platform to operate “out of the box” with a set of socially intelligent behavior primitives, allowing computing researchers to focus on their areas of interest without having to develop low-level robot control algorithms and code. Both the robot hardware and the software necessary for pursuing the computational, AI, and noncontact HRI challenges have unique requirements driven by the need to be socially aware and socially expressive. The resulting platforms must be capable of recognizing social signals, reasoning over those signals, and generating appropriate behaviors in response.
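To make the notion of socially intelligent behavior primitives concrete, the hypothetical sketch below shows one way such a primitive could be exposed to computing researchers; the class names, method names, and thresholds are illustrative assumptions, not the API of the platform to be developed.

```python
# A hypothetical sketch of a "behavior primitive" interface; names and
# thresholds are illustrative and not part of any actual platform.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SocialSignals:
    """Perceptual inputs a primitive reasons over (illustrative fields)."""
    person_distance_m: float   # proxemics: how far the user is standing
    gaze_on_robot: bool        # whether the user is looking at the robot
    speech_text: str           # last recognized utterance, if any

class BehaviorPrimitive(ABC):
    """Recognize social signals, reason over them, and generate a response."""

    @abstractmethod
    def act(self, signals: SocialSignals) -> str:
        """Return a high-level robot command for the current signals."""

class MaintainSocialDistance(BehaviorPrimitive):
    """Example primitive: keep a comfortable conversational distance."""

    def __init__(self, target_m: float = 1.2):
        self.target_m = target_m

    def act(self, signals: SocialSignals) -> str:
        if signals.person_distance_m > self.target_m + 0.3:
            return "approach"
        if signals.person_distance_m < self.target_m - 0.3:
            return "back_away"
        return "hold_position"

# A researcher would compose primitives like this one rather than writing
# low-level control code.
primitive = MaintainSocialDistance()
print(primitive.act(SocialSignals(2.0, True, "hello")))   # -> "approach"
```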
The symposium will present initial hardware design ideas and plans, along with exploratory exercises to determine the usability of proposed software systems as well as the fit of their capabilities with the community’s needs. Our “design by quorum” approach is combined with a modular design that centers on creating a community-vetted standard and builds on recent technologies to minimize cost. The symposium will address computing challenges that bridge AI, human-computer interaction (HCI), service robotics, and other related areas. Advances made through discussion at the symposium will therefore serve to push the field forward, impacting the computing community at large, including AI and robotics.
Topics
Symposium topics include, but are not limited to the following:
- Recognition and generation of fundamental social behaviors, such as spacing (that is, where to be), eye gaze (that is, where to look), natural language (that is, what to say), body language (that is, how to act), and timing (that is, when to act), among others
- Dialog/interaction management, decision-making, and learning
- Computational models of social dynamics and interaction patterns in human-robot interactions
- Mapping, localization, path-planning, and navigation in human environments
- Context/situational awareness and scene understanding in human-robot social interactions
- Online adaptation to human social behavior and interaction contexts
- Long-term learning of human behaviors, preferences, and needs
- Software architectures, tools, and systems for facilitating human-robot interactions
- Sensor, mechanical, and computational hardware for enabling human-robot interactions
- Ethics in the design of social robot hardware and software
Format
The symposium will be a combination of presentations, posters, invited talks, plenary sessions, and breakouts, to maximize participant interaction. It will include the following:
- Paper presentation sessions for accepted full papers on the topics of (1) social robot design and applications and (2) computational methods and software for robust social robot behaviors
- Poster sessions for accepted short papers and position papers on topics similar to those in the paper presentations
- Demos (optional) for accepted full, short, and position papers
- Plenary talks by invited speakers
- Panel discussions on (1) the design of social robots to support strong computational methods and applications and (2) the design of robot behaviors and software to further HRI research
- Breakout sessions in which participants will (1) design an ideal social robot platform to support strong computational methods and applications, (2) formalize robot behaviors and software architectures for robust HRI systems, (3) prototype social robot hardware, software, and/or applications given hands-on materials, and (4) discuss new collaborative research efforts
Submissions
Prospective authors are invited to submit full papers (6–8 pages) and/or short/position papers (2–4 pages) in PDF format to EasyChair. Accepted papers will be published in a technical report in the AAAI Digital Library.
Organizing Committee
Maja Mataric (University of Southern California, USA, mataric@usc.edu); Mark Yim (University of Pennsylvania, USA, yim@grasp.upenn.edu); Ross Mead (University of Southern California, rossmead@usc.edu)
For More Information
For more information and submission instructions, please visit the supplemental symposium website.
Ethical and Moral Considerations in Non-Human Agents
The moral implications of our technological creations have long been a staple of science-fiction rumination. Speculative films from Metropolis to 2001: A Space Odyssey to Blade Runner and RoboCop explore the idea that autonomous intelligence without moral constraint must inevitably lead to deadly hazard. Indeed, this anxiety long precedes the modern age of science (and its fiction), as demonstrated by the popularity of the tales of Frankenstein, the Golem of Prague, and the Sorcerer’s Apprentice. We have always worried about the unintended consequences of our complex creations.
Artificial intelligence has now reached a point – not least in the public imagination, and in the prognostications of thought leaders in other fields – where these moral concerns have become the substance of science fact. Our machines are tasked with ever more autonomous decisions that directly impact on the well-being of other humans. There is a very real need to imbue our AI creations with a robust moral sense that real people would recognize as a functional model of human morality.
Many researchers are now endeavoring to bring the moral dimension of autonomous nonhuman agents to the public eye. Academic groups, such as the International Committee for Robot Arms Control, are attempting to forestall or halt the militarization of autonomous agents. South Korea and other countries are working to adapt their legal systems to account for issues of responsibility, liability, insurance, and so on within this new technological realm.
In this symposium, we aim to bring together AI researchers, legal practitioners, philosophers of ethics, and neurocognitive scientists to shed light on the problems of designing and regulating ethically and morally informed autonomous systems as they become part of our everyday lives. We expect a stimulating interdisciplinary debate that will break new ground on this important and timely topic.
Topics
The symposium will cover the following topics (though papers on other related topics are also welcome):
- Modeling evolution and emergence of moral norms
- Designing moral regulations for autonomous systems
- Role of analogies and metaphors in moral reasoning and argumentation
- Ontologies for moral and ethical reasoning
Submissions
Submission format: short papers (2 pages, or approximately 2,000 words), regular papers (3 to 6 pages, or approximately 3,000 to 6,000 words), or posters, submitted as PDF files. Please follow AAAI style guidelines.
Please submit through the symposium Easychair account.
Keynote Speakers
- Ron Arkin, Georgia Institute of Technology, USA
- Amit Pandey, Aldebaran Robotics, Paris, France
- Luís Moniz Pereira, Universidade Nova de Lisboa, Portugal
Organizing Committee
Cochairs: Bipin Indurkhya (Jagiellonian University, Cracow, Poland) and Georgi Stojanov (The American University of Paris, France)
Members: Joanna Bryson (University of Bath, UK), Tom Lenaerts (Université Libre de Bruxelles, Belgium), Tony Veale (University College Dublin, Ireland)
Program Committee
Peter Asaro (The New School, New York, USA), Tony Belpaeme (Plymouth University, UK), Simon Hegelich (University of Siegen, Germany), Dietmar Janetzko (Cologne Business School, Germany), Frederic Kaplan (EPFL, Lausanne, Switzerland), Michał Klincewicz (Jagiellonian University, Cracow, Poland), Patrick Lin (California Polytechnic State University, USA), Thomas McDonnell (Pace University, USA), Luís Moniz Pereira (Universidade Nova de Lisboa, Portugal), Susan Perry (The American University of Paris, France), Regina Rini (New York University, USA), Claudia Roda (The American University of Paris, France).
For More Information
For more information and submission instructions, please visit the supplemental symposium website.
Intelligent Systems for Supporting Distributed Human Teamwork
Distributed teamwork has become more common as technology enables groups of people who are spread over vast distances, and who have fewer opportunities for synchronous interaction, to work together on complex tasks extended in time. This symposium will convene AI, HCI, and social science researchers to identify challenges in developing intelligent systems for supporting human teamwork, along with multidisciplinary approaches to overcoming them. Participants will consider ways to combine insights from AI research on complex, highly distributed artificial teams with results of HCI and social science investigations of human teams to enable the development of effective tools for supporting teamwork in areas such as healthcare, education, and disaster relief.
Cross-disciplinary expertise is essential for pushing forward the boundaries of systems for supporting distributed human teamwork. For example, systems might benefit from intelligent algorithms that reduce coordination overhead, but assumptions AI methods make for computer-agent environments often poorly match people’s capabilities. Integrating key ideas from social science and HCI research into the design of AI methods will enable the development of systems that address people’s core needs, adequately consider cultural and organizational factors, and make reasonable assumptions.
Planned activities include the following:
- Overviews of teamwork research, to provide an understanding of the diverse problems studied in the social sciences, HCI, and AI; their methods and theories; and the main challenges to existing theories and methods
- Interdisciplinary working groups, to identify challenges in specific application domains, gaps between existing approaches and desired solutions, and potential approaches within and across fields
- Short talks and poster presentations from accepted papers
- Looking forward: synthesizing discussions and planning next activities (for example, a website with cross-disciplinary resources or a vision paper)
Submissions to participate should include a position paper and a link to a related published paper.
Topics
Topics of interest include, but are not limited to the following:
- Novel teamwork problems, challenges and opportunities for cross-disciplinary approaches
- Teamwork theories
- Case-studies of (successful or failed) complex teamwork (with or without technology support)
- Representations, algorithms or interaction methods for teamwork support
- Systems (design, implementation or deployment efforts) for supporting teamwork
- Empirical testbeds
Contact
Ofra Amir (oamir@seas.harvard.edu).
Organizing Committee
Ofra Amir (Harvard University), Krzysztof Gajos (Harvard University), Ya’akov (Kobi) Gal (Ben-Gurion University), Barbara Grosz (Harvard University), Jonathan Grudin (Microsoft Research), Robert Kraut (Carnegie Mellon University), Gary Olson (University of California, Irvine), Peter Stone (University of Texas, Austin).
For More Information
For more information and submission instructions, please visit the supplemental symposium website.
Observational Studies through Social Media and Other Human-Generated Content
While using the Internet and mobile devices, people create data, whether intentionally or unintentionally, through their interactions with messaging services, websites, and other applications and devices. This means that experiments on populations of unprecedented size can be performed across a wide variety of topics. Our symposium will focus on the observational studies that arise from these interactions and data, with an emphasis on studies that can support causal inference.
Human-generated content in general, and social media in particular, is a rich repository of data for observational studies across many areas: public health, with research on the prevalence of disease and on the effects of media on the development of disease; medicine, with the ability to detect mental disease in individuals using social media; education, to optimize teaching and exams; and sociology, to test theories previously examined only on very small populations. These studies have drawn on data including social media, search engine logs, location traces, and other forms of human-generated content.
While many past studies showed correlations between variables of interest, some studies were able to show causal relationships through natural experiments or by linking data sources. Our symposium focuses on all aspects of causal inference from human-generated content, with an emphasis on studies that developed novel methods for identifying and using natural experiments, or other methods for inferring causality.
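As a concrete illustration of one such technique (propensity score matching, listed among the topics below), the sketch below estimates the effect of a hypothetical exposure from synthetic observational data; the variables and data are invented for illustration and are not drawn from any of the studies mentioned above.

```python
# A minimal sketch of propensity-score matching on synthetic observational
# data; all variables here are illustrative, not from any real study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical dataset: user covariates X, a binary "exposure" (for example,
# seeing a health-related post), and a later outcome of interest.
n = 2000
X = rng.normal(size=(n, 3))
treated = X[:, 0] + rng.normal(size=n) > 0      # exposure depends on covariates
outcome = 0.5 * treated + X[:, 0] + rng.normal(size=n)

# 1. Estimate propensity scores P(exposure | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each exposed user to the unexposed user with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = outcome[~treated][idx.ravel()]

# 3. Compare outcomes between exposed users and their matched controls.
att = outcome[treated].mean() - matched_controls.mean()
print(f"Estimated effect of exposure on the exposed: {att:.2f}")
```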
Topics
Topics include the following:
- Interpreting user-generated data, including text, structured data, and temporal data
- Causal analyses in social media, for example, using propensity score matching and causal graphs
- Identifying natural experiments and using them to understand causal inferences
- Identifying population, reporting and other biases in social media
- Applications and domain-specific explorations
- Novel methods for preserving privacy
- Ethical codes and implications
Format
The symposium will consist of 2-3 days of invited and submitted talks, poster presentations, and panel discussions.
Submissions
We invite submissions of extended abstracts of up to 4 pages in PDF format. Submissions should be made via the symposium website and should not be anonymized. The program committee will select talks and poster presentations based on topical relevance, technical contribution, and general interest to the community.
Organizing Committee
Elad Yom-Tov (Microsoft Research, eladyt@microsoft.com); Munmun De Choudhury (Georgia Tech, munmund@gatech.edu); Emre Kıcıman (Microsoft Research, emrek@microsoft.com)
For More Information
For more information and submission instructions, please visit the supplemental symposium website.
Well-Being Computing: AI Meets Health and Happiness Science
Well-being computing is an information technology that aims to promote psychological well-being (that is, happiness) and maximize human potential. Our environment escalates stress, provides unlimited caffeine, distributes nutrition-free “fast” food, and encourages unhealthy sleep behavior. To address these problems, well-being computing provides a way to understand how our digital experience affects our emotions and our quality of life, and how to design better well-being systems that put humans at the center.
Today, great advances are being made both in the science of health and well-being and in artificial intelligence (AI). Synergy between these two fields can bear fruit in well-being computing. It is now very important to share these scientific findings and AI methodologies for better human-centric system design; well-being computing is where AI meets the health and happiness sciences.
This symposium aims to share the latest progress, current challenges, and potential applications for improving our health and happiness in the context of well-being computing. Work on evaluating digital experiences and on understanding human health and happiness from the viewpoint of well-being computing is also welcome. The symposium will bring together an interdisciplinary group of researchers to discuss possible solutions for our health and happiness by focusing on AI techniques.
Scope of Interests
Topics of interest include, but are not limited to, the following:
- Methods for quantifying our health, happiness, and well-being
- Methods for analyzing health and wellness data to discover new meaning, including discovery informatics technologies and cognitive and biomedical modeling
- Methods for designing better health and well-being spaces
- Applications, platforms, and field studies
Format
The symposium will be organized around invited talks, presentations, posters, and interactive demos.
Submissions
Interested participants should submit either full papers (6–8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state your preferred presentation type (long paper (6–8 pages), short paper (1–2 pages), demonstration, or poster presentation). The electronic version of your paper should be sent to aaai2016-wc@cas.hc.uec.ac.jp by October 9, 2015.
Contact
Takashi Kido (Ph.D, Computer Science)
RIKEN GENESIS CO., LTD.
Toppan Building Higashikan 3F
Taito-Ku, Taito, 1-5-1, Tokyo, 110-8560, Japan
TEL: +81-3-3839-8043
FAX: +81-3-3835-7154
E-mail: kido.takashi@gmail.com
Organizing Committee
Takashi Kido, Cochair (RIKEN GENESIS, Japan); Keiki Takadama, Cochair (The University of Electro-Communications, Japan)
For More Information
For more information, please visit the supplemental symposium website.