AAAI 2019 Spring Symposium Series
March 25–27, 2019
Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department
Call for Participation
Important Deadlines
- November 2, 2018: Submissions due to organizers (unless otherwise noted)
- December 3, 2018: Notifications of acceptance sent by organizers
(The aforementioned deadlines are suggested by AAAI to ensure timely notification to authors of inclusion in the symposium program. Organizers, however, may elect to extend one or both of these deadlines. Please consult the individual symposium supplementary websites (linked from the symposium descriptions on the AAAI website) for complete information.)
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, is pleased to present the 2019 Spring Symposium Series, to be held Monday through Wednesday, March 25–27, 2019 at Stanford University. The topics of the nine symposia are:
- Artificial Intelligence (AI), Autonomous Machines and Human Awareness: User Interventions, Intuition and Mutually Constructed Context
- Beyond Curve Fitting — Causation, Counterfactuals and Imagination-Based AI
- Combining Machine Learning with Knowledge Engineering
- Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness
- Privacy-Enhancing Artificial Intelligence and Language Technologies
- Story-Enabled Intelligence
- Towards Artificial Intelligence for Collaborative Open Science
- Towards Conscious AI Systems
- Verification of Neural Networks
Artificial Intelligence (AI), Autonomous Machines and Human Awareness: User Interventions, Intuition and Mutually Constructed Context
The present economic impact of machine learning (ML), a subset of artificial intelligence (AI), is estimated in the trillions of dollars, with the prospect of even larger disruptions to come. Applications of ML and other AI algorithms are propelling unprecedented economic impacts across industry, the military, medicine, finance and more. But as autonomous machines become ubiquitous, problems with ML have surfaced. Early on, Judea Pearl warned AI scientists that they must “build machines that make sense of what goes on in their environment,” a warning that, left unheeded, may impede the further development of these machines. For example, self-driving vehicles often rely on sparse data; self-driving cars have already been involved in three fatalities, including that of a pedestrian; and ML is as yet unable to explain the contexts within which it operates.
We propose that these three seemingly unrelated problems require an interdisciplinary approach to solve. For example, we prefer papers that address how user interventions may improve the context for autonomous machines operating in unfamiliar environments or experiencing unanticipated events; how autonomous machines can be taught to explain contexts, by reasoning, inference or causality, to humans relying on intuition; and, for mutual context, how these machines may interdependently affect human awareness, teams and society, and how these “machines” may be affected in turn. In short, can context be mutually constructed and shared between machines and humans? By extension, we are interested in whether shared context follows when machines begin to think or, like humans, develop subjective states that allow them to monitor and report on their interpretations of reality, forcing scientists to rethink the general model of human social behavior. If dependence on ML continues or grows, we and the public are also interested in what happens to the context shared by users, teams of humans and machines, or society when these machines malfunction. As we “think through this change in human terms,” our ultimate goal is for AI to advance the performance of autonomous machines and of teams of humans and machines for the betterment of society wherever these machines interact with humans or other machines.
Topics
Topics include, but are not limited to, autonomy, teams, and machine explanations of context.
Format
Invited talks will be 60 minutes each and paper presentations will be 30 minutes each.
Submissions
We prefer papers by participants who can discuss the meaning, value, and interdependent effects on context wherever these AI-driven machines may interact with humans or other autonomous agents. Papers should be single-column MS Word or LaTeX documents, either an extended abstract (1–2 pages) or up to 8 pages long (if LaTeX, please submit a PDF file). Please use APA style for citations and references. We plan to publish revised papers in a book after the symposium. Please send submissions to the co-organizers.
Organizing Committee
Ranjeev Mittu and Don Sofge (Naval Research Laboratory, {ranjeev.mittu,don.sofge}@nrl.navy.mil); W.F. Lawless (Chair) (Paine College, w.lawless@icloud.com)
For More Information
sites.google.com/site/aaai19sharedcontext
Beyond Curve Fitting — Causation, Counterfactuals and Imagination-Based AI
AI and machine learning have received enormous attention from the general public in recent years. Most of this attention stems from the successful application of deep neural networks in computer vision, natural language, and reinforcement learning. Nevertheless, despite significant progress, the picture is not yet complete. In a recent interview with Quanta Magazine, Professor Judea Pearl of UCLA noted that “All the impressive achievements of deep learning amount to just curve fitting.” This observation generated surprise and commotion among practitioners, but it was not rhetorical. Rather, it was a theoretically grounded observation concerning the intrinsic limitations of data-centric systems that are not guided by models of reality. Such systems may excel at constructing highly complex functions associating an input X with an output Y, but they are unable to reason about novel actions and hypothetical counterfactuals in a broad collection of never-before-seen situations. Causality is a critical component in the design of AI systems that has commonly been overlooked, despite its centrality to the ways scientists probe reality, how we perceive the world, how we act upon it, and, ultimately, how we understand ourselves. To meet this challenge, the next generation of intelligent systems must be endowed with causal capabilities, which translate into more explainable, robust, efficient, and generalizable decision-making.
The central aim of this symposium is to bring together researchers to discuss the integration of causal, counterfactual, and imagination-based reasoning into data science and AI, building a richer framework that will support both research and industrial applications in the coming decade. Our discussion will be based on the Ladder of Causation architecture (as discussed in the new book The Book of Why: The Science of Cause and Effect), which provides a general way to integrate current correlation-based data mining approaches (level 1) with causal interventions (level 2) and counterfactual or imagination-based reasoning (level 3). This architecture provides a unifying theme for the symposium. We invite contributions from researchers in all relevant disciplines, including computer science, psychology, cognitive science, neuroscience, bioinformatics, engineering, mathematics, and philosophy.
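To make the three levels concrete, the following minimal sketch (ours, for illustration only; the structural causal model and its parameters are invented and not part of this call) contrasts level-1 association with level-2 intervention on a toy model in which a hidden confounder drives both treatment and outcome:

```python
# Toy structural causal model (SCM), hypothetical:
#   U ~ Bernoulli(0.5)         (hidden confounder)
#   X := U with prob. 0.9      (treatment influenced by U)
#   Y := U, flipped 10%        (outcome driven by U alone)
# X has no causal effect on Y, yet X and Y are strongly associated.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sample(do_x=None):
    """Draw from the SCM; do_x=0/1 forces X, cutting the U -> X edge."""
    u = rng.integers(0, 2, n)
    if do_x is None:
        x = np.where(rng.random(n) < 0.9, u, 1 - u)
    else:
        x = np.full(n, do_x)
    y = np.where(rng.random(n) < 0.1, 1 - u, u)
    return x, y

# Level 1 (association): P(Y=1 | X=1) -- what pure curve fitting estimates.
x, y = sample()
print(f"P(Y=1 | X=1)     ~ {y[x == 1].mean():.2f}")  # ~0.82, confounded

# Level 2 (intervention): P(Y=1 | do(X=1)) -- requires the causal model.
_, y_do = sample(do_x=1)
print(f"P(Y=1 | do(X=1)) ~ {y_do.mean():.2f}")       # ~0.50, no causal effect
```

Level 3 (counterfactuals) would additionally require abduction over the hidden U for a specific individual, which observational data alone cannot supply.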
Topics
Topics of interest include but are not limited to the following:
- Algorithms for causal inference and learning
- Causal analysis of biases in data science and fairness analysis
- Causal and counterfactual explanations
- Causal reinforcement learning, planning, and plan recognition
- Imagination and creativity
- Fundamental limits of current learning and inference algorithms
- Applications of causal inference and connections with the three-layer hierarchy
Format
The symposium will include invited talks, presentations of accepted papers, and discussions. More information will appear on the supplementary symposium website.
Submissions
We solicit both long (7 pages including references) and short (3 pages including references) papers on topics related to the above. Position papers, application papers, and challenge tasks will also be considered. Submissions should follow the AAAI conference format.
Organizing Committee
Elias Bareinboim (Purdue University), Sridhar Mahadevan (Adobe and UMass), Prasad Tadepalli (Oregon State), Csaba Szepesvari (DeepMind and University of Alberta), Judea Pearl (University of California, Los Angeles)
For More Information
Combining Machine Learning with Knowledge Engineering
Significant developments in machine learning methods have recently resulted in their rapid adoption across various applications. Typically, such machine learning methods assist with handling complex situations for which explicit knowledge is not available or is only tacit.
Many business cases and real-life scenarios using machine learning methods demand explanations of results and behavior. This is particularly the case where decisions can have serious consequences. Furthermore, application areas such as banking, insurance, and medicine are highly regulated and require compliance with laws and regulations. Such application-specific knowledge cannot be learned but needs to be represented, which is the domain of knowledge engineering and knowledge representation. Moreover, recent results indicate that explicitly represented application knowledge could help data-driven machine learning approaches converge faster on sparse data and be more robust against noise.
Knowledge-based systems that make knowledge explicit have been used for decades. Such systems are often based on logic and thus can explain their conclusions. These systems typically require a higher initial effort during development than systems that use unsupervised learning approaches. However, symbolic machine learning and ontology learning approaches are promising for reducing the effort of knowledge engineering.
Machine learning is most suitable for building AI systems based on tacit knowledge: it helps to solve complex tasks based on real-world data instead of pure intuition. Knowledge engineering, on the other hand, is appropriate for representing expert knowledge that people are aware of and that has to be considered for compliance reasons or for explanations. Because of these complementary strengths and weaknesses, there is an increasing demand for the integration of knowledge engineering and machine learning.
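As a minimal sketch of such an integration (ours, for illustration; the loan scenario, the rule set, and the ml_score stub are all hypothetical), explicit, explainable rules can be layered over a learned score so that regulated constraints always hold and every rejection carries a stated reason:

```python
# Hypothetical hybrid of a learned score and explicit compliance rules.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    income: float
    requested: float

def ml_score(a: Applicant) -> float:
    """Stand-in for a trained model's approval score (tacit knowledge)."""
    return min(1.0, a.income / (a.requested + 1e-9))

# Explicit knowledge: regulatory rules that cannot be learned from data
# and must hold regardless of what the model outputs.
RULES = [
    (lambda a: a.age >= 18, "applicant must be an adult (compliance)"),
    (lambda a: a.requested <= 10 * a.income, "loan capped at 10x income"),
]

def decide(a: Applicant):
    for rule, reason in RULES:
        if not rule(a):
            return "reject", f"rule violated: {reason}"  # explainable outcome
    score = ml_score(a)
    return ("approve" if score > 0.5 else "reject"), f"model score {score:.2f}"

print(decide(Applicant(age=17, income=40_000, requested=5_000)))
print(decide(Applicant(age=35, income=40_000, requested=30_000)))
```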
This symposium aims to bring together researchers and practitioners from the machine learning and knowledge engineering communities to work on joint AI that is explainable, compliant, and grounded in domain knowledge. Participants will benefit from each other’s experience, avoiding common pitfalls and laying the groundwork for synergetic cooperation, with the aim of identifying the most promising areas for quick wins.
Topics
Among relevant topics are the following:
- Knowledge Representation and Reasoning
- Ontologies
- Rule-Based Systems
- Semantic Web
- Machine Learning
- Deep Learning
- Neural Networks
- Knowledge Engineering and Management
- Causal Explainability
- Learning Cause-and-Effect Relationships
- Using Knowledge to Guide Machine Learning
Format
The symposium will involve presentations of accepted full and position papers, posters, (panel) discussions, demonstrations, and plenary sessions with breakouts (if required) to foster interaction and contribution among the participants. The schedule will be announced on the symposium website.
Submissions
We solicit full papers, position papers, and poster abstracts on topics related to the above; submissions can include recent or ongoing research, surveys, and business/use cases. Furthermore, proposals for (panel) discussions and demonstrations are also very welcome.
- Full papers (up to 12 pages) and position papers (3 to 5 pages) will be peer-reviewed.
- Posters can be proposed by submitting an extended abstract (1 to 2 pages).
- Discussion proposals (1 to 2 pages) should contain a description of the specific topic, a list of questions, and a discussion moderator. For a panel discussion, a list of confirmed panel members should be included.
- Demonstration proposals (1 to 2 pages) should focus on research and business related to the symposium; pure product presentations and advertising are not desired.
All submissions should follow the AAAI formatting instructions provided in the AAAI Author Kit. They should be submitted via EasyChair and will be reviewed by the program committee.
Accepted, camera-ready papers will be published on the established open-access proceedings site CEUR-WS (preprints may be published on arXiv and Zenodo). Authors must grant publication permission prior to the symposium and present their paper at the symposium for it to be published.
Organizing Committee
Andreas Martin (cochair and main contact | andreas.martin@fhnw.ch) and Knut Hinkelmann (cochair | knut.hinkelmann@fhnw.ch) FHNW University of Applied Sciences and Arts Northwestern Switzerland, School of Business, Riggenbachstrasse 16, 4600 Olten, Switzerland
For a full list of organizing committee members, please refer to the URL that follows.
For More Information
Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness
Interpretable AI is artificial intelligence (AI) whose actions can be easily understood by humans. Recently, the European Union’s new General Data Protection Regulation (GDPR) has raised concerns about emerging tools for automated individual decision-making. These tools use algorithms to make decisions based on user-level profiles, with the potential to significantly affect users.
Especially in the human health and wellness domains, interpretable AI remains a huge challenge. For example, “evidence-based medicine” requires us to show the current best evidence when making decisions about the care of patients; “Why did the system make this prediction?” will be a key question. The central goal of this symposium is to discuss the technical and philosophical challenges of interpretability for well-being AI.
One important keyword for this symposium is “cognitive bias.” As big data becomes increasingly personal, AI technologies that manipulate the cognitive biases inherent in people’s minds have evolved in social media (for example, Twitter and Facebook) and in commercial recommendation systems. The “echo chamber effect” is known to make it easy for people with the same opinion to form communities, creating the impression that everyone holds that opinion. Advances in big data and machine learning should not overlook these new threats to enlightenment thought.
The second important keyword for this symposium is “social embeddedness.” We welcome diverse discussions on the relationships between AI and society. Topics on the social embeddedness of AI may include issues such as “AI and future economics” (for example, basic income and the impact of AI on GDP) or “the well-being society” (for example, citizen happiness and quality of life).
This symposium aims to share the latest progress, current challenges, and potential applications related to the AI aspects of health and well-being. Work on the evaluation of digital experience and the understanding of human health and well-being is also welcome.
Format
The symposium will consist of invited talks, paper presentations, posters, and interactive demos.
Submissions
Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state the preferred presentation type (long paper (6–8 pages), short paper (1–2 pages), demonstration, or poster presentation). The electronic version of your paper should be sent to aaai2019-iaw@cas.lab.uec.ac.jp.
Organizing Committee
Takashi Kido, Cochair (Preferred Networks, Inc., Japan) and Keiki Takadama, Cochair (The University of Electro-Communications, Japan). For a full list of organizers and program committee members, please refer to the URL that follows.
For More Information
sites.google.com/view/iaw-aaai-2019
Privacy-Enhancing Artificial Intelligence and Language Technologies
This symposium will bring together researchers in privacy and researchers in either artificial intelligence (AI) or human language technologies (HLTs), so that we may collectively assess the state of the art in this growing intersection of interests. Privacy remains an evolving and nuanced concern of computer users, as new technologies that use the web, smartphones, and the internet of things (IoT) collect a myriad of personal information. Rather than viewing AI and HLT as problems for privacy, the goal of this symposium is to “flip the script” and explore how AI and HLT can help meet users’ desires for privacy when interacting with computers.
Topics
We will focus on two loosely-defined research questions:
- How can AI and HLT preserve or protect privacy in challenging situations?
- How can AI and HLT help interested parties (for example, computer users, companies, regulatory agencies) understand privacy in the status quo and what people want?
The symposium will consist of invited speakers, oral presentations of submitted papers, a poster session, and panel discussions. This event is a successor to the 2016 AAAI Fall Symposium on Privacy and Language Technologies.
Submissions
The symposium invites 2–6 page papers (excluding references) describing new contributions, works in progress, and positions on research in the intersection between privacy and AI/HLT. Submissions should be anonymized.
Topics of interest include, but are not limited to, the following:
- AI/HLT-driven personalization of privacy assistance
- Uses of AI/HLT to enhance privacy-enhancing technologies (PETs)
- AI/HLT-assisted privacy of online social media users
- AI/HLT-driven simplification or summarization of privacy policies
- AI/HLT analysis of privacy regulations
- Privacy-preserving methods of data mining and text mining (a toy sketch follows this list)
- Ontologies and knowledge bases for privacy
- User studies of AI/HLT-driven systems that support privacy
- Ethical ramifications of AI/HLT in support of privacy
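As a toy illustration of the privacy-preserving data mining point above (ours, not part of this call; the records and epsilon values are invented), a differentially private count releases an aggregate statistic while bounding what it reveals about any one person:

```python
# Epsilon-differentially-private counting via the Laplace mechanism.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(records, predicate, epsilon: float) -> float:
    """Count matching records, adding Laplace noise scaled to the query's
    sensitivity (1: adding or removing one person changes the count by
    at most 1), which yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical records: whether a user's posts mention a health condition.
records = [{"mentions_health": rng.random() < 0.3} for _ in range(10_000)]
for eps in (0.1, 1.0):
    est = dp_count(records, lambda r: r["mentions_health"], eps)
    print(f"epsilon={eps}: noisy count ~ {est:.0f}")  # smaller eps, more noise
```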
The symposium will welcome submissions from researchers who consider any combination of artificial intelligence, human language technologies, or privacy to be their primary area, recognizing that all are becoming interdependent.
Submissions should be in AAAI format. The review process will be double-blind, and accepted papers will be permitted up to one additional page in their final manuscripts. Proceedings will appear on CEUR-WS.org.
Organizing Committee
Shomir Wilson, Lead Organizer (Pennsylvania State University), Sepideh Ghanavati (University of Maine), Kambiz Ghazinour (Kent State University), Norman Sadeh (Carnegie Mellon University)
For More Information
Story-Enabled Intelligence
Systems that can describe their own behavior exhibit intelligence of a higher order. Storytelling-like capabilities empower systems to explain their decisions, describe their activities, align their present situation against precedent, consider hypothetical alternatives, diagnose their mistakes, learn from stories, and generalize their experiences. A number of research efforts have independently explored aspects of such machine-generated descriptions, but interaction between specialized subareas of AI remains sparse in the literature.
In pursuit of a unified approach, four common challenges emerge: (1) Architecture — How can we augment existing opaque systems so as to add a layer of explainability? Alternatively, how can we design systems so as to incorporate explainability from the bottom up? (2) Representation — How can we design systems out of composable, explainable parts? Which representations capture application-specific information and expose constraint? (3) Procedure — How do we effectively identify and deploy sources of constraint? How do we develop anytime explanations, and how can these explanations integrate new information? What capabilities, such as matching against precedent or reasoning about hypotheticals, are enabled by explainable parts? (4) Cognition — What kinds of explanations are most effective for human users? How do humans produce and consume stories? How do we tailor explanations for different audiences and purposes?
Topics
- Computational models of human storytelling and understanding
- Systems that explain when they fail
- Large-scale architectures built from communicating parts
- Problem-solving programs that tell their own story
- Systems that combine information from many sources
- Tools that summarize and interpret
- Tools that support planning and exploration of alternative scenarios
- Self-explaining software engineering tools
- Systems that provide compositional explanations of what they’re doing
Format
The symposium will include invited talks, presentations of accepted papers, poster sessions and panel discussions.
Submissions
We solicit full papers (6 to 8 pages), short papers (2 to 4 pages), and one-page proposals for panel discussions (as detailed on our symposium homepage). We invite submissions not only from AI but from broadly related fields and perspectives of interest; submissions may include recent or ongoing research, position papers, and surveys. Submissions should be formatted according to the AAAI template and submitted via EasyChair.
Main Contact
Dylan Holmes (MIT), 77 Massachusetts Ave, Bldg 32-258, Cambridge, MA 02139, story-enabled-intelligence@mit.edu
Organizing Committee
Leilani H. Gilpin (MIT, lgilpin@mit.edu), Dylan Holmes (MIT, dxh@mit.edu), Jamie C. Macbeth (Smith College, jmacbeth@smith.edu)
For More Information
logical.ai/story-enabled-intelligence
Towards Artificial Intelligence for Collaborative Open Science
The Towards Artificial Intelligence for Collaborative Open Science symposium will explore how artificial intelligence and computational tools can accelerate the pace of scientific discovery. Of special interest is how machines can assist human collaboration and knowledge sharing in open, networked science. The symposium will bring together researchers in basic science, computer science, statistics, psychology, and economics, among other areas of academia, industrial research, and the nonprofit sector.
The symposium will highlight research directions in networked, machine-driven science, such as the following:
- AI and NLP methods for mining the scientific literature
- Meta-learning, meta-analysis, and model aggregation for open science
- Knowledge representation for the scientific process, for example, for datasets or data analysis
- Knowledge representation for scientific knowledge, for example, in biomedicine
- Software tools and formats for disseminating scientific knowledge
- Online platforms for collaborative basic science or data science
- Empirical studies of open scientific collaboration and innovation
- Incentives and rewards in open science
Format
Over its two and a half days, the symposium will feature invited talks and paper presentations, as well as discussion sessions to explore future directions for AI in open, networked science.
Submissions
We invite submissions of research papers (6 pages, excluding references) and short papers (2 pages), for work-in-progress or position pieces. Papers should be submitted through EasyChair and will be reviewed by the organizing committee. For submission instructions and important deadlines, please visit the symposium website.
Main Contact
Evan Patterson (Stanford University, epatters@stanford.edu)
Organizing Committee
Ioana Baldini (IBM Research AI, ioana@us.ibm.com), Peter Bull (DrivenData, peter@drivendata.org)
For More Information
Towards Conscious AI Systems
The study of consciousness remains a challenge that spans multiple disciplines. Consciousness has a demonstrated, although poorly understood, role in shaping human behavior. The processes underpinning consciousness may be crudely replicated to build better AI systems. Such a “top-down” perspective on AI readily reveals the gaps in current data-driven approaches and highlights the need for “better AI.” At the same time, the process of designing AI systems creates an opportunity to better explain biological consciousness and its importance in system behavior.
Measuring the components that may lead to consciousness (for example, modeling and assessing others’ behaviors, or calculating utility functions not only for an individual agent but also for an interacting society of agents) is increasingly important to address concerns about the surprising capabilities of today’s AI systems.
The symposium is an excellent opportunity for researchers considering consciousness as a motivation for “better AI” to gather, share their recent research, discuss the fundamental scientific obstacles, and reflect on how it relates to the broader field of artificial intelligence and robotics.
Research on consciousness and its realization in AI systems motivates efforts to account, with scientific rigor, for the motivations of AI systems, the role of sociality with and between machines, and how to implement machine ethics.
The meeting will offer a platform to discuss the connection between AI systems and other fields such as psychology, philosophy of mind, ethics, and neuroscience.
Topics
Some of the topics that the symposium will cover include the following:
- Recent work on conscious AI systems
- Embodied conscious AI systems
- Self-reflective higher-order AI systems
- Ethical issues involving conscious AI systems
- Trust in conscious AI systems
- Social robotics and conscious AI systems
- Consciousness, the theory of mind and artificial emotions
- The role of episodic memory in conscious AI systems
- Design strategies versus developmental approaches
- Symbolic versus deep neural networks in conscious AI systems
- Measurement of consciousness in AI systems
- Physicalist models of consciousness
- Philosophy of mind and machine consciousness
- Conscious processes and time
- Implementing neuroscience of consciousness in AI systems
- Computational models of consciousness
Submissions
Authors are invited to submit full papers (6 pages) or position and work-in-progress papers (2 pages). Papers should be submitted via the AAAI Spring Symposium Easychair website. Submission deadline is November 2, 2018. All submissions will be peer-reviewed. The symposium proceedings will be published online via CEUR-WS.org. Selected, high-quality papers will be considered for a book.
Organizing Committee
Antonio Chella (University of Palermo and ICAR-CNR, Palermo, Italy), David Gamez (Middlesex University, London, UK), Patrick Lincoln (SRI International), Riccardo Manzotti (IULM University, Milan, Italy), Jonathan Pfautz (DARPA)
For More Information
diid.unipa.it/roboticslab/consciousai
Verification of Neural Networks
Methods based on machine learning are increasingly being deployed for a wide range of problems, including recommender systems, machine vision, and autonomous driving. While machine learning has made significant contributions to such applications, concerns remain about the lack of methods and tools to provide formal guarantees about the behaviours of the resulting systems.
In particular, for data-driven methods to be usable in safety-critical applications, including autonomous systems, robotics, cybersecurity, and cyber-physical systems, it is essential that the behaviours generated by neural networks can be predicted and understood at design time. In the case of systems that are learning at run-time, it is desirable that any system changes respect a given safety-envelope for the system.
While the literature on the verification of traditional systems is extensive, results and efforts on the verification of neural networks have been limited until recently. One challenge is that results are being published across several research communities, including formal verification, security and privacy, systems, and AI. The symposium intends to bring together researchers from all of these communities working on a range of techniques for the verification of neural networks. The key objectives include the presentation of recent work in the area, discussion of key difficulties, collection of community benchmarks, and fostering of collaboration.
Format
The symposium will include invited speakers, contributed papers, demonstrations, breakaway sessions, and panel sessions.
Topics
Topics covered by the symposium include but are not limited to the following:
- Formal specifications for neural networks and systems based on them;
- SAT-based and SMT-based methods for the verification of machine learning systems (a toy encoding follows this list);
- Mixed-integer linear programming methods for the verification of neural networks;
- Testing approaches to neural networks;
- Optimisation-based methods for the verification of neural networks;
- Statistical approaches to the verification of neural networks.
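As a toy illustration of the SMT-based style listed above (ours, not part of this call; the two-layer network and the property are invented), the Z3 solver can prove an input-output property of a tiny ReLU network by searching for a counterexample:

```python
# SMT-based verification of h = ReLU(2x - 1), y = 1 - h, for x in [0, 1].
# "unsat" for the negated property means y >= 0 holds on the whole domain.
from z3 import Real, If, Solver, sat

x, h, y = Real("x"), Real("h"), Real("y")

s = Solver()
s.add(0 <= x, x <= 1)             # input domain
pre = 2 * x - 1                   # affine layer
s.add(h == If(pre >= 0, pre, 0))  # exact ReLU encoding
s.add(y == 1 - h)                 # output layer
s.add(y < 0)                      # negation of the property y >= 0

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("verified: y >= 0 for all x in [0, 1]")
```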
Submissions
We will consider two types of submissions: previously published papers and novel contributions. Each submission must be clearly identified as belonging to one of these categories. Submissions of previously published papers pertaining to the topics above will be lightly reviewed by PC members and the PC chairs. Submissions of novel material will be reviewed following conference standards.
Organizing Committee
Clark Barrett (Stanford, USA), Alessio Lomuscio (Imperial College London, UK)