AAAI 2015 Fall Symposium Descriptions
The Association for the Advancement of Artificial Intelligence is pleased to present the 2015 Fall Symposium Series, to be held Thursday through Saturday, November 12-14, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the six symposia are as follows:
- AI for Human-Robot Interaction
- Cognitive Assistance in Government and Public Sector Applications
- Deceptive and Counter-Deceptive Machines
- Embedded Machine Learning
- Self-Confidence in Autonomous Systems
- Sequential Decision Making for Intelligent Agents
AI for Human-Robot Interaction
This symposium will strengthen and bring together the community of researchers working on the AI challenges inherent to human-robot interaction (HRI) to share the most exciting research in this area while cultivating a vibrant, interconnected research community.
Humans and human environments bring with them inherent uncertainty in dynamics, structure, and interaction. Human-robot interaction (HRI) aims to develop robots that are intelligent, autonomous, and capable of interacting with, modeling, and learning from humans. These goals are at the core of AI.
The field of HRI is a broad community encompassing robotics, AI, HCI, psychology and social science. In this meeting, we seek to bring together and strengthen the subset of the HRI community that is focused on the AI challenges inherent to HRI. While HRI work is seen across a variety of venues (for example, HRI, RSS, ICRA, IROS, Ro-Man, RoboCup, and more), AI-HRI seeks to serve as the gathering point for the AI-focused community within HRI.
We will build on last year’s success, with a heavier emphasis on both sharing research results and presenting and discussing current work in the field.
Planned activities include the following:
Keynote talks: These talks will offer different perspectives on AI-HRI and showcase recent advances that bring us closer to the reality of humans interacting with robots on everyday tasks.
Breakout groups: These discussions will focus on (1) defining potential grand challenges, application domains, and metrics for community adoption, and (2) how to communicate AI-HRI work to both the AI and HRI communities (and beyond).
Student talks/Poster sessions: These sessions will highlight state-of-the-art research in AI-HRI.
Team building: A large part of this effort is to bring together a community of researchers, strengthen old connections, and build new ones. Ample time will be provided for networking and informal discussions.
Organizing Committee
Bradley Hayes (Yale University), Matthew C. Gombolay (Massachusetts Institute of Technology), Brenna D. Argall (Northwestern University), Bilge Mutlu (University of Wisconsin-Madison), Julie A. Shah (Massachusetts Institute of Technology), Sonia Chernova (Worcester Polytechnic Institute), Andrea L. Thomaz (Georgia Institute of Technology), Kris Hauser (Duke University), Brian Scassellati (Yale University)
For More Information
For more information, please see the supplemental symposium website.
Cognitive Assistance in Government and Public Sector Applications
This symposium will present innovative contributions to the research, development and application of cognitive assistance technology for use in Government (executive agencies, legislative and judicial branches), military, police, education, healthcare, and social services. Topics include the following:
- Use case and usage scenarios
- Human/cognitive assistant (Cog) interfaces, interaction, and human factors
- Which tasks to automate or delegate to Cogs, how to automate them, and how much to automate
- Techniques for allocating tasks between human and Cog team members
- User adoption issues, how to build trust in Cogs' results/decisions, and overcoming fears
- How to train/instruct Cogs
- Technology developments
- Results from early Cog R&D
- Metrics for Cogs, test data, benchmarking performance, and methodology for iterative improvement
- Mitigation of detrimental impacts of Cogs (loss of situational awareness, human skill atrophy)
- Economics of cognitive assistance systems
- Policy issues with using Cogs
Cognitive assistance is “a systematic approach to increasing human intellectual effectiveness” that assumes “computational assists to human decision making are best when the human is thought of as a partner in solving problems and executing decision processes, where the strengths and benefits of machine and humans are treated as complementary co-systems.”
The first quotation is from Augmenting Human Intellect: A Conceptual Framework, by Douglas C. Engelbart, October 1962. The second is from Complex Operational Decision Making in Networked Systems of Humans and Machines by the Committee on Integrating Humans, Machines and Networks; National Research Council, 2014.
The organizing committee for this symposium starts with the following shared assumptions:
- We are in the early phase of a true Second Industrial Revolution, with at least as significant an impact, and an accelerated pace of adoption.
- Cognitive assistance presents opportunities and challenges to the work of governments – some in common with knowledge work in other domains, some distinct to the government domain.
- The opportunities and challenges for adoption of cognitive assistance technology in government are not widely appreciated among government decision makers, nor is there clear consensus on a research agenda specific to this domain to address the opportunities and challenges.
- Progress in adopting cognitive assistance will require expertise in multiple disciplines, including software engineering, artificial intelligence, cognitive science, and human factors, among others.
This symposium will bring together academe, industry, and government to discuss the opportunities and challenges of creating cognitive assistance (Cog) systems. We plan to have talks that present the state-of-the-art in Cogs, and panel discussions on open challenges and opportunities. An emphasis will be on soliciting the participation of people currently engaged in the application of cognitive assistance systems in the public sector, to facilitate an exchange of lessons learned as well as to help scope a research agenda based on current gaps.
Organizing Committee
Chuck Howell (MITRE), Scott Kordella (MITRE), Frank Stein (IBM), Edward B. Rockower (Naval Postgraduate School), Hamid R. Motahari Nezhad (IBM), Murray Campbell (IBM), Jim Spohrer (IBM), Gary Klein (MITRE), Lashon Booker (MITRE)
For More Information
For more information, please see the supplemental symposium website.
Deceptive and Counter-Deceptive Machines
This symposium examines the potential roles and means for deceptive and counter-deceptive machines, and the ethical and social implications thereof.
From the Turing Test to HAL 9000 to Blade Runner to today’s Ex Machina, both rigorous and popular analysis of deception and counter-deception has been part of AI, and part of the larger world fascinated by AI. Moreover, deceptive and counter-deceptive machines are a foreseeable byproduct of our technologized society wherein intelligent systems are rapidly becoming more active and interactive in human physical, economic, and social spheres.
Currently, socialized AI systems are being advanced in areas such as affective and persuasive computing, social and cognitive robotics, human-robot interaction, multi-agent and decision-support systems, and e-commerce. The general belief is that socialization enables or significantly improves system efficiency and efficacy. But then, what is the role of deception, even altruistic deception, and counter-deception in these systems? For example: Does robo-therapy or affective computing engender false beliefs that AI artifacts are fully sentient and, specifically, genuinely empathetic? Should AI produce machines that deceive for the greater good (for example, espionage) or should that role be the exclusive province of humans?
The symposium will focus on the emerging science and engineering of machine deception and counter-deception. It will explore questions such as: How and when can machines deceive us and each other? Can we effectively use machines to counter deception perpetrated by machines, and by humans? Can there be both a science and engineering of machine deception and counter-deception? If so, what would it look like? What ethical or policy principles might guide the science of machine deception and counter-deception?
Organizing Committee
Micah H. Clark (Office of Naval Research, micah.clark@navy.mil), Selmer Bringsjord (Rensselaer Polytechnic Institute, selmer@rpi.edu), Paul Bello (Naval Research Laboratory, paul.bello@nrl.navy.mil)
For More Information
For more information, please see the supplemental symposium website.
Embedded Machine Learning
The Embedded Machine Learning symposium will study the challenges that arise when machine learning is embedded as a component in large complex systems. We seek quality contributions describing recent or ongoing work in the scope of the symposium as described below. Both theoretical and applied work are solicited; demonstrations of experimental and/or deployed systems are especially encouraged. Potential issues are listed in the following paragraphs.
Methods for optimizing learning, decisions, and actions to maximize total system performance rather than traditional losses/metrics such as accuracy, AUC, or log-loss.
- The operating point for an ML system that optimizes its accuracy as a stand-alone model can be very different from the operating point that yields the best system-level accuracy. In healthcare, the critical region is often at one end of the precision-recall tradeoff, while in credit card fraud it may be at the other extreme. The model that is best overall may be very different from the models that are best at either of these extremes, and the methods for training, regularization, and model selection may differ dramatically depending on where the critical operating region lies (see the threshold sketch after this list).
- The metrics used to assess system-level performance can be very different from the ones traditionally used in machine learning, and it may be difficult to train to these metrics so that models learn the right thing.
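As a toy illustration of the operating-point issue above, the following sketch (assuming scikit-learn and NumPy; the data and cost figures are invented for illustration) compares the threshold that maximizes stand-alone accuracy with the threshold that minimizes an asymmetric system-level cost of the kind that arises in fraud or healthcare settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented, imbalanced toy data and a simple scorer.
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

def accuracy(t):
    return np.mean((scores >= t) == y_te)

def system_cost(t, fn_cost=50.0, fp_cost=1.0):
    # Illustrative asymmetric cost: a missed positive (e.g., undetected fraud)
    # is far more expensive to the overall system than a false alarm.
    pred = scores >= t
    return fn_cost * np.sum(~pred & (y_te == 1)) + fp_cost * np.sum(pred & (y_te == 0))

thresholds = np.linspace(0.01, 0.99, 99)
print("threshold maximizing stand-alone accuracy:", max(thresholds, key=accuracy))
print("threshold minimizing system-level cost:   ", min(thresholds, key=system_cost))
```

On imbalanced data with asymmetric costs, the two thresholds typically differ substantially, which is exactly why training and model selection depend on the critical operating region.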
Beyond prediction to action: ML models often predict class probabilities or scores for ranking. However, the decision the system must make is an action, such as “Should we send a coupon to this customer?”, “Should we use this treatment on this patient?”, “How much should we invest in this stock?”, and so on. What is the best way to couple machine learning to utility models so that the most effective decision/action can be made?
Approaches for making the system and its decisions robust to its machine learning components.
- Methods for making the system robust to changes in the ML component that occur when the ML component is updated or retrained.
- Techniques that permit the system to detect when the ML component appears to be broken and is no longer providing accurate/useful predictions.
- Algorithms that allow the ML components to be robust to the changes in input distribution that are likely to occur when embedded in a real, living breathing system that changes over time.
- Methods by which the ML component can detect when it is being applied to cases for which it was not trained and thus is unlikely to make accurate predictions. For example, a model trained on data from “regular” hospitals should automatically detect and raise red flags if it is deployed at a children’s hospital, where it sees a very different distribution of patients than the distribution(s) on which it was trained and has competency (a minimal red-flag sketch appears after this list).
- Approaches that estimate the embedded risk of machine learning components.
- What to do when features that were available at training time break, or are no longer available? More generally, what to do when the feature space changes over time? Can transfer be employed? Systems are upgraded, new sensors become available, old sensors can be removed or may malfunction. Does the whole system stop and wait for retraining, or is there a fall-back “limp-home” mode that allows the system to continue executing, possibly with reduced accuracy or perhaps just reduced speed?
- Sequential decision making. How should current reinforcement learning methods be adapted to evolving systems?
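The red-flag bullet above can be made concrete with a very small sketch (assuming NumPy and SciPy; the feature, the two populations, and the alert threshold are invented for illustration): compare the deployment-time distribution of a feature against the distribution seen at training time.

```python
import numpy as np
from scipy.stats import ks_2samp

def distribution_shift_alert(train_feature, live_feature, p_threshold=1e-3):
    """Flag a likely train/deployment mismatch for one feature using a
    two-sample Kolmogorov-Smirnov test. The threshold is illustrative."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold, stat, p_value

rng = np.random.default_rng(0)
train_ages = rng.normal(55, 15, size=10_000)  # e.g., adult-hospital patients
live_ages = rng.normal(8, 4, size=500)        # e.g., children's-hospital patients

alert, stat, p = distribution_shift_alert(train_ages, live_ages)
if alert:
    print(f"red flag: input distribution shift detected (KS={stat:.2f}, p={p:.1e})")
```

A deployed component would of course monitor many features (or a learned density) rather than one, but the principle is the same: the ML component should know the distribution it has competency on.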
Techniques for avoiding “Tower of Babel” issues that occur in large systems so that components can talk to each other.
- If an ML model predicts scores, and the scale/range of these scores changes when the model is retrained, this can break the rest of the system because it no longer understands the “language” being spoken by the model (a calibration sketch illustrating one mitigation appears after this list). How do we handle these kinds of issues when probabilities are not the best answer (for example, when predicting rankings), or when the predictions from the model are structured (for example, parse trees)?
- Can the ML system provide confidences in its predictions so that other parts of the system can intelligently decide when to accept or reject the predictions?
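One common way to keep this “language” stable is for the ML component to publish calibrated probabilities rather than raw scores, so that downstream consumers depend on a fixed [0, 1] interface even when the underlying model and its raw score scale change. A minimal sketch, assuming scikit-learn (the base model and calibration method are illustrative choices):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The raw model's score scale may drift between retrains; wrapping it in a
# calibrator exposes a stable probability interface to the rest of the system.
raw_model = GradientBoostingClassifier(random_state=0)
calibrated = CalibratedClassifierCV(raw_model, method="isotonic", cv=3)
calibrated.fit(X_tr, y_tr)

# Downstream components consume probabilities in [0, 1], never raw scores.
p_positive = calibrated.predict_proba(X_te)[:, 1]
print(p_positive[:5])
```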
Methods for dealing with feedback loops that naturally arise when deployed systems affect future training sets that will be used to update learned models.
Debugging embedded machine learning. Debugging is critical in all large, complex systems: how do we detect, isolate, and repair problems when failures result from imperfect interactions between multiple system components?
- Modularity and edit-ability. Is it possible to build components with well-defined functions so that programmers know where to look when a specific behavior breaks? Is it possible to build components so that they may be updated, replaced, or re-trained individually without needing to update or re-train the entire system?
- Complex systems are often built and then maintained by complex organizations that change over time. Can we build components (including the machine learning components) so that it is easier for individuals in the organization to develop, debug, and refine them?
- Intelligibility. Often there is a tradeoff between accuracy and intelligibility in machine learning — models such as logistic regression are intelligible, but not always as accurate as less intelligible models such as boosted trees, random forests, neural nets, and SVMs. Intelligibility, both of the model and of the individual predictions it makes, can be critical in some applications to ensure that the model is good, to help engineers improve/refine the model over time, and to aid in understanding where fault lies in a complex system.
- Hierarchy. How to decompose a complex learning problem into a hierarchy of simpler learning problems (for example, layered learning as in soccer robots)?
Analyses of how to employ representations learned by other models and then freeze them for reuse by components built on top of (downstream from) them:
- Representations learned by deep learning for object recognition on similar objects sometimes can be reused for recognizing new objects on which they were never trained (a freezing-and-reuse sketch appears after this list). This can make components more general and robust, because the learned representation may be more general than what would have been learned from the specific classes and data available in the application-specific training set.
- How and when to retrain learned representations. Do all components downstream from the representations need to be retrained? After retraining, does the new system start making mistakes on cases that were easy before, or is there a way to “blend” new models with previous models so that the strengths of the previous models are retained while the improved accuracy of the new model is incorporated?
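A minimal sketch of freezing a previously learned representation for reuse by a downstream component, assuming PyTorch and a recent torchvision (the ResNet-18 backbone and the ten-class head are arbitrary illustrations, not a recommendation):

```python
import torch.nn as nn
from torchvision import models

# Load a representation learned on one object-recognition task (ImageNet)...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...freeze it so retraining the downstream component cannot disturb it...
for param in backbone.parameters():
    param.requires_grad = False

# ...and attach a new head for the classes of the new task (10 here, arbitrarily).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters remain trainable.
trainable = [name for name, p in backbone.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```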
Speed and performance issues that arise when learning is embedded in a complex system that has real-time or near-real-time performance requirements.
- What does the learning accuracy versus system accuracy versus speed tradeoff curve look like?
Interactive Human-in-the-Loop Systems: Some applications require complex, possibly fine-grained, interaction between humans and the system or parts of the system.
- What are the appropriate interfaces and information presentations for human consumption and human control?
- How does the need for human interaction influence the design of the system into components?
Symposium Organizers
Rich Caruana (rcaruana@microsoft.com), Senior Scientist, Microsoft Research, 18517 201st Ave NE, Woodinville WA 98077; Tom Dietterich (tgd@cs.orst.edu), Distinguished Professor and Director of Intelligent Systems, School of Electrical Engineering and Computer Science, 1148 Kelley Engineering Center, Oregon State University, Corvallis OR 97331-5501; Dragos Margineantu (dragos.d.margineantu@boeing.com), Technical Fellow, Boeing Research and Technology, M/C 4C-77, P.O. Box 3707, Seattle, WA 98124
For More Information
For more information, please see the supplemental symposium website.
Self-Confidence in Autonomous Systems
Modern civilian and military systems have created a demand for sophisticated intelligent machine autonomy with human supervision and coordination in uncertain dynamic environments. These “on-the-loop” human roles have shifted emphasis away from traditional human “in-the-loop” capabilities, and they raise questions about when and how mutual communication of operational intent and of the perceived capabilities of autonomous agents can impact human-autonomy coordination.
This symposium will explore the possibilities for augmenting human-machine dialog through communication of an autonomous agent’s “sense of confidence,” that is, the agent’s perceived ability to effectively execute assigned tasks. Such reporting goes above and beyond mere assessment of probabilities for modeled outcomes or successful task completion. Rather, “self-confidence” summarizes an agent’s holistic assessment of robustness regarding its ability to achieve assigned goals (within a defined region of autonomous behavior) in spite of: (1) uncertainties in its knowledge of the world, (2) uncertainties of its own state/self, and (3) uncertainties about its reasoning processes and execution capabilities.
This symposium aims for a holistic interdisciplinary discussion of the factors that contribute to the perception, quantification and understanding of various types of uncertainty in modern (and soon-to-be-realized) autonomous systems. We invite contributions from researchers in AI/expert systems, human factors, autonomous robotics and control/complex systems engineering, and other related disciplines that explore several key questions for this newly emerging topic, including the following:
- What does “self-confidence” mean in the context of autonomous systems?
- What factors influence self-confidence?
- How can self-confidence actually be computed?
- How can/should self-confidence actually be communicated?
This symposium will feature invited talks and contributed paper presentations, as well as panel discussions and group breakout sessions focusing on the implementation of self-confidence in real autonomous systems. Presentations will address one or more of the following themes:
Competence: How can an autonomous agent determine whether a “task situation” actually falls within (or is about to reach) its designed competency boundary?
Information adequacy: Are the data/knowledge available to an autonomous agent sufficient to effectively assess the situation and develop an appropriate course of action?
Quantification and expression: How can self-confidence be consistently calculated and communicated?
Questions should be directed to Nisar Ahmed (Nisar.Ahmed@colorado.edu), Nicholas Sweet (Nicholas.Sweet@colorado.edu), or Andrew Hutchins.
Organizing Committee
Nisar Ahmed (University of Colorado Boulder), Nicholas Sweet (University of Colorado Boulder), Ugur Kuter (Smart Information Flow Technologies), Christopher Miller (Smart Information Flow Technologies), Andrew Hutchins (Duke University), and Mary Cummings (Duke University)
For More Information
For more information, please see the supplemental symposium website.
Sequential Decision Making for Intelligent Agents
Sequential decision making under uncertainty (SDM) is a powerful paradigm for probabilistic planning. The emergence of various models that analyze it under different sets of assumptions, for example, as single and multiagent MDPs and POMDPs, has gone hand in hand with the split of this field into many subareas, each with a quite distinct research community. The SDMIA Fall Symposium aims to bring the researchers of computational sequential decision making under uncertainty together, in order to facilitate the cross-pollination of ideas across these communities and thus accelerate the development of the larger field. The symposium will have ample room for discussions and interaction. Furthermore, we intend to reflect on the current state of the field, both in terms of theory and applications, and, more importantly, ways to shape its future.
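For readers coming from outside these subcommunities, the sketch below shows the kind of model the symposium treats as common ground: a toy two-state MDP solved by value iteration (assuming NumPy; the transition probabilities and rewards are invented for illustration).

```python
import numpy as np

# A tiny MDP: 2 states, 2 actions. P[a][s, s'] is the transition probability,
# R[s, a] the expected immediate reward; all numbers are illustrative only.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.1, 0.9]]])   # action 1
R = np.array([[1.0, 0.0],                  # rewards in state 0 for actions 0/1
              [0.0, 2.0]])                 # rewards in state 1 for actions 0/1
gamma = 0.95

# Value iteration: repeatedly apply the Bellman optimality backup.
V = np.zeros(2)
for _ in range(10_000):
    Q = R + gamma * np.einsum("ast,t->sa", P, V)   # expected return of (s, a)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal values:", V, "optimal policy:", Q.argmax(axis=1))
```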
Topics
- Novel insights in modeling sequential decision making (SDM)
- Recent advances in solution methods
- Benchmark problems and benchmarking
- Real-world applications and application domains
- Model specification and induction
- Methods for transferring solutions from one model to another
All accepted papers will be scheduled for oral presentation and papers will be made available online. Selected original (not previously published) papers can be published in the AAAI Press Technical Reports series. At least one author of each accepted paper is required to register and attend the symposium to present the work.
Organizing Committee
Matthijs Spaan, Chair (Delft University of Technology, m.t.j.spaan@tudelft.nl, Mekelweg 4, 2628 CD, Delft, The Netherlands. Tel. +31152781102); Frans Oliehoek (University of Amsterdam / University of Liverpool, fao@liverpool.ac.uk); Christopher Amato (University of New Hampshire, camato@cs.unh.edu); Andrey Kolobov (Microsoft Research, akolobov@microsoft.com); Pascal Poupart (University of Waterloo, ppoupart@uwaterloo.ca)
For More Information
For more information, please see the supplemental symposium website.