AAAI 2020 Fall Symposium Series / AAAI 2020 Spring Symposium Partial Colocation
Call for Participation
Sponsored by the Association for the Advancement of Artificial Intelligence
Washington, DC, November 11–14, 2020
- Submissions due to organizers (please see individual descriptions for extensions): July 30, 2020
- Notifications of acceptance sent by organizers: August 13
- FSS final papers due to organizers: September 25 (recommended)
- Registration deadline: September 17
The topics of the six fall symposia are as follows:
- AI for Social Good
- Artificial Intelligence in Government and Public Sector
- Postponed until 2021: Cognitive Systems for Anticipatory Thinking
- Conceptual Abstraction and Analogy in Natural and Artificial Intelligence
- Physics-Guided AI to Accelerate Scientific Discovery
- Trust and Explainability in Artificial Intelligence for Human-Robot Interaction
Recent developments in the availability of big data and computational power continue to revolutionize several domains, opening new opportunities and challenges. In this symposium, we highlight two specific themes, humanitarian relief and healthcare, where AI could be used for social good to achieve the United Nations Sustainable Development Goals in areas that touch every aspect of human, social, and economic development. We expect the symposium to identify the critical needs and pathways for responsible AI solutions for achieving these goals, which demand holistic thinking on optimizing the trade-off between automation benefits and their potential side effects.
AI technology can have an incredible impact on how we address humanitarian issues and achieve development goals worldwide. This symposium will focus on all aspects of humanitarian relief operations supported by novel uses of AI technologies, from enabling missing persons to be located and leveraging crowdsourced data for early warning and rapid response to emergencies, to increasing situational awareness and improving logistics and supply chain management.
Similarly, healthcare is one of the foremost challenges of today’s world, as highlighted by the recent COVID-19 pandemic, which has brought it to the forefront of global discourse. This symposium will also focus on a broad range of AI healthcare applications and the challenges they encounter, including but not limited to automation bias, prescriptive AI models, explainability, privacy and security, transparency, and decision rights, especially in the context of deploying AI in healthcare.
The symposium builds on our continued efforts to bring AI community members together around the themes mentioned above, including last year’s successful AAAI Fall Symposium on AI for Social Good.
This symposium will bring together AI researchers, domain scientists, and policymakers to exchange problems and solutions, to identify synergies across different application domains, and to lead to future collaborative efforts.
The symposium solicits paper submissions from participants (2–6 pages). Submissions of the following flavors are sought: (1) research ideas, (2) case studies (or deployed projects), (3) review papers, (4) best practice papers, and (5) lessons learned. The format is the standard double-column AAAI Proceedings Style.
All submissions will be peer-reviewed. Some will be selected for spotlight talks, and some for the poster session. We plan to create a CEUR workshop proceedings volume.
Muhammad Aurangzeb Ahmad (University of Washington Tacoma and KenSci Inc), Hemant Purohit (George Mason University), Oshani Seneviratne (Rensselaer Polytechnic Institute)
For More Information
Please see our Symposium Website
AI is becoming ubiquitous, proving useful across societal, governmental, and public sector applications. However, AI in government at the federal, state, and local levels, and in related education and public health institutions (hereafter referred to as the public sector), faces its own unique challenges. AI systems in the public sector will be held to a high standard since they must operate in support of the public good. They will face increased scrutiny and stringent requirements for ethical operation, accountability, transparency, fairness, security, explainability, cost-effectiveness, policy and regulatory compliance, and operation without unintended consequences.
We invite thoughtful contributions — papers and panel proposals — that present novel technical approaches to meeting these requirements and lessons learned from current implementations. We hope to provide some coverage of the use of AI to respond to the COVID-19 pandemic and to address fairness, whether through paper presentations, panel discussions, or invited keynotes.
Potential topic areas include (in no particular order):
Technical papers that advance the state-of-the-art on applying AI in public sector applications — innovative approaches to solving the problems of building applications that meet the challenges described above.
- Responsible, safe, and trustworthy
- Verification and validation for deep learning
- Robustness and resiliency
- Public sector interaction paradigms
- Leveraging AI innovation in open source
- Operation and adaptation to multiple domains
Practice papers that describe current uses of AI in the public sector — applications that are early adopters of AI, role of public/private partnerships in accelerating development and adoption, timely response to societal challenges such as the COVID-19 pandemic response (public health, medical, social, economic), and demonstrations of beta and in-production applications.
- Early areas for adoption of AI
- Role of public-private partnerships
- Using AI to encourage public service innovation
- Translating from .com to .gov
- Systematic approach for the use of AI in the public sector
- Cultivating AI literacy
- AI engineering best practices
- Incentivizing AI engineering best practices
The symposium will include presentations of accepted papers in both oral and panel discussion formats, together with invited speakers and demonstrations. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion. Full-length papers must be no longer than eight (8) pages, including references and figures, and are required for those submitting technical papers as described above. Short submissions can be up to four (4) pages in length and can be used for practice papers as described above, work in progress, system demonstrations, or panel discussions.
Please submit via the AAAI EasyChair.org site, choosing the AAAI/FSS-20 Artificial Intelligence in Government and Public Sector track. Please submit by August 5. Contact Frank Stein (firstname.lastname@example.org) for the extended CfP and for any questions.
Frank Stein, Chair (IBM), Erik Blasch (USAF), Mihai Boicu (GMU), Lashon Booker (Mitre), Michael Garris (NIST), Mark Greaves (PNNL), Eric Heim (CMU-SEI), David Martinez (MIT-LL), Tien Pham (CCDC ARL), Alun Preece (Cardiff University), Peter Santhanam (IBM), Jim Spohrer (IBM)
For More Information
Please contact symposium chair, Frank Stein, at email@example.com for additional information.
Postponed until 2021.
Current AI systems largely lack the ability to form humanlike concepts and abstractions. Understanding what concepts are — how they are formed, how they can be abstracted and flexibly used in diverse situations via analogy, and how they compose to produce new concepts — is not only key to a deeper understanding of intelligence, but will be essential for engineering non-brittle AI systems: ones that can robustly adapt their knowledge to diverse situations and modalities. Such an understanding will require collaboration among AI researchers and cognitive scientists studying the nature and development of concepts from different perspectives. This symposium will bring together leading researchers across disciplines to discuss the mechanisms underlying concepts, abstraction, and analogy in natural and artificial intelligence.
The following are examples of the questions we plan to address in depth at the symposium:
- What are potential AI applications in which humanlike concept formation, abstraction, and analogy could improve performance and make systems more robust?
- What is known in psychology and neuroscience about the mechanisms by which humans (and non-human animals) develop and use concepts, form abstractions, and make analogies? How can such mechanisms inspire AI research?
- Can gradient-descent-based systems learn to produce analogical reasoning on novel problems? What can the state-of-the-art in inductive program synthesis teach us about abstraction and reasoning?
- How can abstraction and analogy-making abilities in AI systems be assessed? What can we do to ensure that performance on a test will guarantee generalization?
- Can we discover general computational mechanisms for abstraction and analogy by focusing on idealized microdomains, or could the real challenges lie in interfacing analogical mechanisms with a vast array of commonsense knowledge?
- If we want machines to creatively invent wholly new theories from data, like scientists do, what roles would abstraction, analogy, and strong generalization play?
Each day of the symposium will consist of invited talks (with significant time for audience questions), a poster session, and a panel discussion.
Submit an abstract (at most one page) describing research related to any of these questions, including preference for talk or poster, by August 7, 2020 to the Symposium EasyChair site.
Melanie Mitchell (Santa Fe Institute, firstname.lastname@example.org); François Chollet (Google); Kevin Ellis (MIT)
Main Contact: Melanie Mitchell, Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, email@example.com.
For More Information
Please see the supplemental symposium website.
Physics-guided AI is an emerging research paradigm that aims to integrate physics into AI models and algorithms in a principled manner, so as to learn patterns and relationships from data that are not only accurate on validation data but also physically consistent with known scientific theories. Physics-guided AI is ripe with research opportunities to influence fundamental advances in AI for accelerating scientific discovery, and it has already begun to gain attention in several scientific communities, including fluid dynamics, quantum chemistry, biology, hydrology, and climate science. The goal of this symposium is to nurture the community of researchers working at the intersection of AI and science and engineering by providing a common platform to cross-fertilize ideas from diverse fields and shape the vision of this rapidly growing field.
We encourage participation on topics that explore any form of synergy between scientific principles and AI, machine learning, data mining, and high-performance computing methods. Examples of relevant topics include (but are not limited to) the following:
- The use of physical constraints in supervised and unsupervised machine learning methods
- Approaches to encode scientific knowledge in deep learning architecture
- Physics-guided generative and reinforcement learning methods
- Discovery of physically interpretable laws from data
- Hybrid constructions of physics-based and machine learning models
- Architectural and algorithmic improvements enabled by AI in scientific computing
- Software development facilitating the inclusion of physics domain knowledge in learning
- Novel techniques for using data to calibrate parameters and system states in scientific models
Our symposium will involve a mix of activities, including keynote and invited talks, breakout sessions, panel discussions, and poster sessions. Day 1 activities will center on the theme of “How can physics help AI?” while Day 2 activities will focus on the other side of the coin in physics-guided AI: “How can AI help physics?”
We are currently accepting paper submissions for position, review, or research articles in two formats: (1) short papers (2-4 pages) and (2) full papers (6-8 pages). Extended versions of articles in submission at other venues are acceptable as long as they do not violate the dual-submission policy of the other venue. We also encourage early drafts of on-going research with preliminary insights/results that contribute to the symposium agenda. All submissions will undergo peer review and authors will have the option to publish their work in an open access proceedings site.
Submissions should be formatted according to the AAAI template and submitted via EasyChair.
Anuj Karpatne (Virginia Tech; firstname.lastname@example.org), Ramakrishnan Kannan (Oak Ridge National Laboratory), Yan Liu (University of Southern California), Jonghyun Harry Lee (University of Hawaii at Manoa), Vipin Kumar (University of Minnesota).
Main Contact: Anuj Karpatne (Dept. of Computer Science, Virginia Tech, Blacksburg, VA 24061; 540-231-6420, email@example.com)
For More Information
Please see the symposium website.
The Artificial Intelligence for Human-Robot Interaction Symposium has been a successful venue of discussion and collaboration since 2014. During that time, the sub-topics of trust and explainability in robotics have been rapidly growing, with major research efforts at universities and laboratories across the world.
It is generally believed that trust is crucial for the adoption of both AI and robotics, particularly when transitioning technologies from the lab to industrial, social, and consumer applications. Enabling a robot to provide explanations is one approach to fostering this trust.
Over the course of the two-day meeting, we will host a collaborative forum for discussion of current efforts in trust for AI-HRI, with a sub-session focused on the related topic of explainable AI (XAI) for HRI. Additionally, the symposium will include other topics related to AI for HRI.
Topics include but are not limited to the following:
- Trust and Explainability in HRI
- Architectures and systems supporting autonomous HRI
- Interactive task learning
- Interactive dialog systems and natural language
- Field studies, experimental, and empirical HRI
- Tools for autonomous HRI
- Robot planning and decision-making
- Ethics in HRI
- AI for social robots
- Fielding, deployment, and experimentation for autonomous robots
- Knowledge representation and reasoning to support HRI and robot tasking
- Full papers (6-8 pages) highlighting state-of-the-art HRI-oriented research on trust, explainability, and other related topics.
- Short papers (2-4 pages) outlining new or controversial views on AI-HRI research or describing ongoing AI-oriented HRI research.
- Tool papers (2-4 pages) describing novel software, hardware, or datasets of interest to the AI-HRI community.
Papers are to be submitted through the AAAI EasyChair site. Proceedings will be published through arXiv.
Authors will be notified as to whether they have been assigned a full-length or lightning presentation slot. Authors assigned to lightning talks will be invited to participate in a poster session.
Shelly Bagchi (National Institute of Standards and Technology), Jason R. Wilson (Franklin and Marshall College), Muneeb I. Ahmad (Heriot-Watt University), Christian Dondrup (Heriot-Watt University), Zhao Han (UMass Lowell), Justin W. Hart (University of Texas Austin), Matteo Leonetti (University of Leeds), Katrin Solveig Lohan (University of Applied Sciences Ostschweiz OST), Ross Mead (Semio), Emmanuel Senft (University of Wisconsin, Madison), Jivko Sinapov, Communications cochair (Tufts University), Megan L. Zimmerman (National Institute of Standards and Technology)