AAAI 2018 Spring Symposium Series
Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department
March 26–28, 2018 Stanford University, Palo Alto, California USA
Call for Participation
Important Deadlines
- October 27, 2017: Submissions due to organizers
- November 27, 2017: Notifications of acceptance sent by organizers
- January 29, 2018: Accepted camera-ready copy due to AAAI
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, is pleased to present the 2018 Spring Symposium Series, to be held Monday through Wednesday, March 26-28, 2018 at Stanford University. The titles of the seven symposia are as follows:
- AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents
- Artificial Intelligence for the Internet of Everything
- Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI
- Data Efficient Reinforcement Learning
- The Design of the User Experience for Artificial Intelligence (the UX of AI)
- Integrating Representation, Reasoning, Learning, and Execution for Goal Directed Autonomy
- Learning, Inference, and Control of Multi-Agent Systems
AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents
Artificial intelligence has become a major player in today’s society, and this has inevitably generated a proliferation of thoughts and sentiments on several related issues. Many, for example, have felt the need to voice, in different ways and through different channels, their concerns about possible undesirable outcomes caused by artificial agents, the morality of their use in specific sectors, such as the military, and the impact they will have on the labor market. The goal of this symposium is to gather a diverse group of researchers from many disciplines, including computer science, philosophy, economics, and sociology, and to ignite a scientific discussion on these topics.
The symposium welcomes contributions on a broad set of topics related to the ethics, safety, and trustworthiness of AI. Our focus will be to adopt a scientific approach to help understand and put into perspective the multitude of opinions that have been expressed on these matters. Submissions may focus on specific technical details or bring a more general point of view on the desiderata for AI with respect to society. We also welcome papers describing both short-term and long-term analyses of AI’s impact on different aspects of society.
Topics
Topics of interest include the following:
- Architectures for ensuring ethical behavior
- Value alignment in autonomous systems
- Safety and trustworthiness in artificial agents’ design
- Moral decision making in autonomous systems
- The impact of AI on jobs and issues like technological unemployment
- Autonomous agents in the military
- Autonomous agents in commerce and other domains
- The societal impacts of AI
- Measuring progress in AI
- The future of AI
Format
The symposium will include invited talks, presentations of accepted papers, group work sessions and panels.
Submissions
We solicit full papers (6 to 8 pages) and short papers (2 to 4 pages). Submissions are invited from all perspectives of interest (see the list above) and can include recent or ongoing research, position papers, and surveys. Please submit through EasyChair using the AAAI template.
Main Contact
K. Brent Venable
Tulane University and IHMC
Department of Computer Science
Stanley Thomas Hall, 303c,
6823 St. Charles Ave. New Orleans, LA 70118
kvenabl@tulane.edu
Organizing Committee
Francesca Rossi (IBM Watson and University of Padova, frossi@math.unipd.it), K. Brent Venable (Tulane University and IHMC, kvenabl@tulane.edu), Toby Walsh (Data61, UNSW and TU Berlin, Toby.Walsh@nicta.com.au)
For More Information
sites.google.com/site/aiandsocietyss18
Artificial Intelligence for the Internet of Everything
From the perspective of artificial intelligence (AI), for this AAAI symposium on the Internet of Everything (IoE), we desire participants who can discuss the potential meaning, value and effect that the Internet of Things (IoT) may have on ordinary life, in the business or industrial world, on the battlefield (IoBT), in the medical field (IoMT) or on other intelligent agents (IoIT). We leave the topic open-ended for this AAAI symposium. We will consider all papers with an AI perspective that address how IoE affects sensing, perception, cognition and behavior or causal relations, whether the context is clear or uncertain and whether for mundane decisions; decisions made for business, industry or government; complex decisions on the battlefield; life and death decisions in the medical arena; or decisions affected by other intelligent agents and machines. We are interested in practical, measurement and theoretical issues and research questions about how these “things” may affect individuals, teams and society or each other across different units of analysis; or how existing systems and human interactions may affect these “things.” We are especially interested in what may happen when these things begin to reason, communicate and act on their own, whether as autonomous agents or interdependently with other things in autonomous teams. Must IoE systems speak only to humans, to each other, or both? Will each IoE system be an independent system; an interdependent system; or a combination? Regardless, our ultimate goal is to use AI to advance autonomy and autonomic fundamentals to improve the performance of individual agents and hybrid teams of humans, machines, and robots for the betterment of society.
Participants: In 2 to 8 pages submitted to the organizers, we desire participants who can discuss the foundations, metrics or applications of IoE systems (or IoT, IoBT, etc.) and how these systems will affect targeted audiences or society. The topic is open-ended. We will consider all papers that address how IoE systems affect humans or other smart systems. Our ultimate goal is to advance IoE theory and concepts with AI to improve society. We plan a follow-on book with expanded contributions.
Organizing Committee
Ranjeev Mittu (ranjeev.mittu@nrl.navy.mil), Donald Sofge (donald.sofge@nrl.navy.mil), Ira S. Moskowitz (ira.moskowitz@nrl.navy.mil), Naval Research Laboratory; Stephen Russell, Army Research Laboratory (stephen.m.russell8.civ@mail.mil); and W. F. Lawless, Paine College (w.lawless@icloud.com)
For More Information
sites.google.com/site/internetofeverythingioe
Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI
Recent AI technologies (such as deep learning and other advanced machine learning methods) will undoubtedly change the world. However, excessive expectations for AI (such as science-fiction visions of general-purpose AI) and threat narratives (such as fears that AI will take away jobs) distort the judgment of many people. What we must do first is correctly understand the possibilities and limitations of current machine intelligence.
Understanding machine intelligence in the human health and wellness domains remains especially challenging. Although statistical machine learning predicts the future based on past data, it is difficult to respond to new events that have never been seen in the past. How to create new value that genuinely makes people happy is one of the most important challenges in well-being AI. For this purpose, we need to share interdisciplinary scientific findings between the human sciences (brain science, biomedical healthcare, psychology, and others) and AI.
One of the important keywords in this year’s symposium is cognitive bias. As big data becomes increasingly personal, AI technologies that manipulate the cognitive biases inherent in people’s minds have evolved; examples include social media, such as Twitter and Facebook, and commercial recommendation systems. The “echo chamber effect” is known to make it easy for people with the same opinion to form communities, creating the impression that everyone holds the same opinion. Recently, there have been movements to exploit such cognitive biases in the political world as well. Advances in big data and machine learning should not be allowed to create new, overlooked threats to enlightenment thought.
The second important keyword in this symposium is humanity. One of the purposes of AI is to pursue the question “what is intelligence?” Early AI researchers focused their efforts on rational thinking, such as mathematical theorem proving and chess. However, rational thinking is now rapidly being taken over by machines, and many people may have begun to believe that irrational thinking is the root of humanity. Empirical and philosophical discussions on AI and humanity are welcome at this symposium.
This symposium aims to share the latest progress, current challenges, and potential applications related to AI for health and well-being. Work on the evaluation of digital experiences and the understanding of human health and well-being is also welcome.
Format
The symposium will be organized around invited talks, presentations, posters, and interactive demos.
Submissions
Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state your presentation type: long paper (6–8 pages), short paper (1–2 pages), demonstration, or poster presentation. The electronic version of your paper should be sent to aaai2018-bmi@cas.lab.uec.ac.jp.
Organizing Committee
Cochairs: Takashi Kido (Preferred Networks, Inc., Japan) and Keiki Takadama (The University of Electro-Communications, Japan). For a full list of organizers and program committee members, please refer to the URL below.
For More Information
sites.google.com/view/bmi-aaai-2018
Data Efficient Reinforcement Learning
Sequential decision making (SDM) is an essential component of autonomous systems. Although significant progress has been made towards developing algorithms for solving isolated SDM tasks, these algorithms often require large amounts of experience before achieving acceptable performance. This is particularly true of high-dimensional tasks, such as robotic control or general game-playing environments.
Aiming at efficient reinforcement learning algorithms that can generalize well to unobserved environments or situations, a multitude of methods have been proposed. These include inverse reinforcement learning, imitation learning, lifelong learning, multitask learning, transfer learning, and model-based reinforcement learning.
This symposium will give reinforcement learning researchers an opportunity to present their work and to discuss recent developments in each of the above fields. We request submissions that present novel results and algorithms, with an emphasis on approaches that are also theoretically grounded. We are also particularly interested in RL benchmarking and applications in fields including robotics, medicine, game playing, and real-world large-scale applications.
Topics
Topics of interest include but are not limited to the following:
- Reinforcement learning
- Model-based reinforcement learning
- Inverse reinforcement learning
- Deep reinforcement learning
- Transfer learning
- Multitask reinforcement learning
- Lifelong reinforcement learning
- Large-scale application of reinforcement learning
- Novel applications of reinforcement learning
- Benchmarking reinforcement learning algorithms
- Learning from demonstration
- Imitation learning
- Multiagent reinforcement learning
- Deep multiagent reinforcement learning
- Model-based multiagent reinforcement learning
- Transfer in multiagent reinforcement learning
- Multitask multiagent reinforcement learning
Format
The symposium will include invited talks, presentations on accepted papers, and discussions. Invited speakers and presentations will be announced after the submissions. More information will appear on the supplementary symposium website.
Submissions
We solicit both long (6 pages plus an additional page for references) and short (3 pages plus an additional page for references) submissions on topics related to the above. Papers positioning the field and describing open challenges will also be considered. Submissions should follow the AAAI conference format. Long papers will be scheduled for oral presentation and short papers for poster presentation. Submissions should be forwarded to the main contact.
Main Contact
Haitham Bou Ammar
haitham@prowler.io
PROWLER.io., 29 Austin Drive
Cambridgeshire, CB 2 9BB, United Kingdom
+44-7780562871
Organizing Committee
Haitham Bou Ammar (Main Contact), Dongho Kim (dongho@prowler.io), Enrique Munoz de Cote (enrique@prowler.io), James Hensman (james@prowler.io), PROWLER.io, UK; Matthew E. Taylor (taylorm@eecs.wsu.edu), Washington State University, USA
For More Information
The Design of the User Experience for Artificial Intelligence (the UX of AI)
AI is increasingly common in consumer and professional contexts such as voice personal assistants, photo tagging, autonomous vehicles, creative tools, and medical image analysis. But the interaction and user experience design strategies and patterns for these applications are still in their early stages and have many challenges ahead.
This symposium will bring together a diverse group of people involved in the design of AI products and services from large scale, deployed commercial applications and advanced research to speculative futures, and from the worlds of HCI, design, TEI, HRI, and AI.
Topics
- Communication and collaboration
- Automation, agency and control
- Bias, trust, and power
- Multidevice, multitouchpoint behavior
- Service design and AI
- Design for a world of simultaneous multiple predictive systems
- The design of products without traditional screen interfaces and interaction modalities.
- The design of AI collaborators for creative work.
- AI tools for non-AI specialists.
- The design of hybrid human-AI collaborations.
- Communicating AI to end users.
Format
The symposium will be a combination of presentations, posters, invited talks, plenary sessions, and breakouts, to maximize participant interaction. All attendees will be required to give a short (20-minute) presentation on their work or a subject of interest. We will alternate between these short presentations and design explorations in breakouts and large group discussions.
Submissions
Prospective participants are invited to submit one or more of the following: Short position papers (2–4 pages) in PDF format. Please follow AAAI style guidelines. Your position statement should include a short description motivating your interest in the topic, and a short bio that includes a description of your current area of research or practice.
In your proposal, indicate and elaborate on one of the following as your format:
- A 20 minute presentation
- A poster in pdf format. The poster should be printable as a 30″ x 40″/A0. Initial submissions can include a draft of the poster.
- A 3 minute or shorter video in a common file format (AVI, MP4, and others).
- An interactive demo. Interactive demos should be clear, interactive and focus on research and practice demonstrations illustrating an aspect of artificial intelligence and user experience. No product pitches will be accepted.
- A panel proposal. Panel proposals should include a 400-word description of the topic and panel members (potential and agreed).
Video and interactive demos should be accompanied by an extended abstract (1–2 pages, PDF) of up to 2000 words. Initial submissions can include a draft/rough cut/storyboard of the video/interactive demo with a text description of its contents.
Submissions should not be anonymized. If your proposal is accepted, you will have the option of including an updated paper in the technical report.
Questions
Mike Kuniavsky
mikek@parc.com
Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304
+1 650 812 4847
Organizing Committee
Mike Kuniavsky (PARC), Elizabeth Churchill (Google), Molly Wright Steenson (Carnegie Mellon University), Phil Van Allen (Art Center College of Design)
For More Information
mikek-parc.github.io/AAAI-UX-AI
Integrating Representation, Reasoning, Learning, and Execution for Goal Directed Autonomy
Recent advances in AI and robotics have led to a resurgence of interest in the objective of producing intelligent agents that help us in our daily lives. Such agents must be able to rapidly adapt to the changing goals of their users, and the changing environments in which they operate.
These requirements lead to a balancing act that most current systems have difficulty contending with: on the one hand, human interaction and computational scalability favor the use of abstracted models of problems and environments; on the other hand, generating goal-directed behavior in the real world typically requires accurate models that are difficult to obtain and computationally hard to reason with.
This symposium addresses the core research gaps that arise in designing autonomous systems that execute their actions in complex environments using imprecise models. The sources of imprecision may range from computational pragmatism to imperfect knowledge of the actual problem domain. Some of the research directions that this symposium aims to highlight are as follows:
- Hierarchical approaches for goal directed autonomy in physically manifested intelligent systems (for example, robotics)
- Formalizations for knowledge representation and reasoning under uncertainty for real-world systems and their simulations, including those based on logic as well as on probability theory
- Tradeoffs between model verisimilitude, scalability, and executability in sequential decision making
- Bridging the gaps between abstract models and reality in sequential decision making
- Online model learning and model improvement during execution
- Identifying modeling errors during plan execution
- Integrated approaches for learning representations and execution policies
- Analysis and use of abstractions in autonomous reasoning and execution
Topics
We invite paper submissions on relevant topics, which include, but are not limited to the following:
- Hierarchical representation, reasoning, and planning
- Behavior synthesis and execution in robotics
- Planning and reasoning with abstract models while ensuring executability
- Abstraction from controls to logic
- Execution monitoring of autonomous systems
- Performance evaluation of executable autonomous systems
- Integrated task and motion planning
- Reasoning in the presence of abstraction
- Online model learning and model improvement
- Detecting model errors during execution
- Integrated representation and policy learning
Submissions
We invite submissions of full papers (6–8 pages) and short/position papers (2–4 pages). We also solicit system demonstrations that highlight how some of the challenges of interest to this symposium were handled.
Papers should be submitted via EasyChair. Detailed instructions will be made available closer to the deadline.
Organizing Committee
Siddharth Srivastava (Arizona State University), Shiqi Zhang (Cleveland State University), Nick Hawes (University of Birmingham), Erez Karpas (Technion – Israel Institute of Technology), George Konidaris (Brown University), Matteo Leonetti (University of Leeds), Mohan Sridharan (The University of Auckland), Jeremy Wyatt (University of Birmingham)
For More Information
siddharthsrivastava.net/sirle18/
Learning, Inference, and Control of Multi-Agent Systems
We live in a multiagent world. To be successful in that world, intelligent agents need to learn to consider the agency of others. They will compete in marketplaces, cooperate in teams, communicate with others, coordinate their plans, and negotiate outcomes. Examples include self-driving cars interacting in traffic, personal assistants acting on behalf of humans and negotiating with other agents, swarms of unmanned aerial vehicles, financial trading systems, robotic teams, and household robots.
There has been great work on multiagent learning in the past decade, but significant challenges remain, including the difficulty of learning an optimal model/policy from a partial signal, the exploration versus exploitation dilemma, the scalability and effectiveness of learning algorithms, avoiding social dilemmas, learning emergent communication, learning to cooperate/compete in nonstationary environments with distributed simultaneously learning agents, and convergence guarantees.
Topics
We are interested in various forms of multiagent learning for this symposium, including the following:
- Learning in sequential settings in dynamic environments (such as stochastic games, decentralized POMDPs and their variants)
- Learning with partial observability
- Dynamics of multiple learners using (evolutionary) game theory
- Learning with various communication limitations
- Learning in ad-hoc teamwork scenarios
- Scalability through swarms vs. intelligent agents
- Bayesian nonparametric methods for multiagent learning
- Deep learning and reinforcement learning methods for multiagent learning
- Transfer learning in multiagent settings
- Applications of multiagent learning
The purpose of this symposium is to bring together researchers from machine learning, control, neuroscience, robotics, and multiagent communities with the goal of broadening the scope of multiagent learning research and addressing the fundamental issues that hinder the applicability of multiagent learning for complex real-world problems. This symposium will present a mix of invited sessions, contributed talks and a poster session with leading experts and active researchers from relevant fields. Furthermore, the symposium is designed to allow plenty of time for discussions and initiating collaborations.
Submissions
Authors can submit papers of 2–6 pages, which will be reviewed by the organizing committee. The papers can present new work or a summary of recent work. Submissions will be handled through EasyChair (URL to be announced on the website).
Organizing Committee
Christopher Amato (Northeastern University), Thore Graepel (Google DeepMind), Joel Leibo (Google DeepMind), Frans Oliehoek (University of Liverpool), Karl Tuyls (Google DeepMind and University of Liverpool)