Call for Participation
Sponsored by the Association for the Advancement of Artificial Intelligence
The Westin Arlington Gateway
Washington, DC, November 4–6, 2021
- August 13, 2021: Submissions due to organizers (please see individual descriptions for extensions)
- September 3, 2021: Notifications of acceptance sent by organizers
The Association for the Advancement of Artificial Intelligence is pleased to present the 2021 Fall Symposium Series, to be held November 4-6, at the Westin Arlington Gateway in Arlington, Virginia. The topics of the seven symposia are:
- Artificial Intelligence for Human-Robot Interaction (AI-HRI)
- Artificial Intelligence in the Government and Public Sector
- Cognitive Systems for Anticipatory Thinking
- Computational Theory of Mind for Human-Machine Teams
- Human Partnership with Medical AI: Design, Operationalization, and Ethics
- Science-Guided AI
- Where AI meets Food Security: Intelligent Approaches for Climate-Aware Agriculture
An informal reception will be held on Thursday, November 4. A general plenary session, in which the highlights of each symposium will be presented, will be held on Friday, November 5.
Symposia generally range from 40 to 75 participants each. Participation will be open to active participants as well as other interested individuals on a first-come, first-served basis. Each participant will be expected to attend a single symposium. Registration information will be available on the AAAI website in August 2021.
For further information, please contact:
AAAI Fall Symposium Series
2275 East Bayshore Road, Suite 160
Palo Alto, California 94303
Interested individuals should submit a paper or abstract by the deadline listed above. Please send submissions directly to the individual symposium according to its directions. Do not mail submissions to AAAI. See the appropriate section in each symposium description for specific submission requirements.
Most symposium organizers have elected to accept submissions via the AAAI Fall Symposium EasyChair site at https://easychair.org/conferences/?conf=fss21. Please be sure to select the appropriate symposium when submitting your work.
Artificial Intelligence for Human-Robot Interaction (AI-HRI)
The Artificial Intelligence for Human-Robot Interaction (AI-HRI) Symposium has been a successful venue for discussion and collaboration since 2014. This year, we aim to review the community's achievements over the last decade, identify the challenges ahead, and welcome new researchers who wish to take part in this growing community. Taking this wide perspective, this year's symposium has no single leading theme, and we encourage AI-HRI submissions from across disciplines and research interests. Moreover, with the rising interest in AR and VR as part of an interaction, and given the difficulties of running physical experiments during the pandemic, this year we specifically encourage researchers to submit work that does not include a physical robot in its evaluation but promotes HRI research in general. In addition, acknowledging that ethics is an inherent part of human-robot interaction, we encourage submissions on ethics for HRI.
Topics of interest include:
- Ubiquitous HRI, including AR and VR
- Ethics in HRI
- Trust and explainability in HRI
- Interactive task learning
- Interactive dialog systems
- Field studies and experimental/empirical HRI
- Robot planning and decision-making
- AI for social robots
- Knowledge representation and reasoning to support HRI and robot tasking
Authors may submit under one of these paper categories:
- Full papers (6-8 pages) highlighting state-of-the-art HRI-oriented research.
- Short papers (2-4 pages) describing ongoing AI-oriented HRI research.
- Tool papers (2-4 pages) describing novel software, hardware, or datasets of interest to the AI-HRI community.
Please see the AAAI Author Kit to ensure that your submission has the proper formatting.
Contributions may be submitted here: https://easychair.org/my/conference?conf=fss21
The AI-HRI 2021 program chairs can be reached at firstname.lastname@example.org.
Reuth Mirsky (University of Texas, Austin), Megan L. Zimmerman (National Institute of Standards and Technology), Shelly Bagchi (National Institute of Standards and Technology), Jason R. Wilson (Franklin & Marshall College), Muneeb I. Ahmad (Heriot-Watt University), Christian Dondrup (Heriot-Watt University), Zhao Han (UMass Lowell), Justin W. Hart (University of Texas Austin), Matteo Leonetti (University of Leeds), Ross Mead (Semio), Emmanuel Senft (University of Wisconsin, Madison), Jivko Sinapov, Communications Co-Chair (Tufts University)
For More Information
For more information and updates, please visit the Symposium Website.
Artificial Intelligence in Government and Public Sector
AI is becoming ubiquitous, emerging from specialized niches to broad utility across societal, governmental, and public sector applications. However, AI in government at the federal, state, and local levels, and in related education and public health institutions (hereafter referred to as the Public Sector), faces its own unique challenges. AI systems in the public sector will be held to a high standard since they must operate in support of the public good. These systems will face increased scrutiny and stringent requirements for ethical operation, accountability, transparency, fairness, security, explainability, cost-effectiveness, policy, regulatory compliance, and operation without unintended consequences.
We invite thoughtful contributions, whether papers, speaker proposals, panel proposals, or posters, that present novel technical approaches to meeting these requirements or lessons learned from current implementations.
Technical papers that advance the state of the art in applying AI to public sector applications, describing innovative approaches to building applications that meet the challenges described above, including: Trust and Transparency, Bias and Fairness, Verification and Validation, Privacy and Safety, Robustness and Resiliency, Accountability and Responsibility, Interaction Paradigms, AI Open-Source Innovation, AI for Accelerating Discovery.
Practice papers that describe, demonstrate, analyze, or evaluate current or potential uses of AI in the public sector, including: Successful Transitions, Engineering Best Practices, Challenges and Lessons Learned, Systematic Approaches and Methodologies, Translating from .com to .gov, Early Areas of Adoption (Early Adopters), Role of Public/Private Partnership, Encourage Public Service Innovation, Cultivating AI Literacy, Incentivizing and Acquisition.
The symposium will include presentations of accepted papers in oral, poster, and panel discussion formats, together with invited speakers and demonstrations. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion, following AAAI format. Full-length papers must be no longer than eight (8) pages, including references and figures, and are required for those submitting technical papers as described above. Short submissions can be up to four (4) pages in length and can be used for practice papers as described above, work in progress, system demonstrations, or panel discussions.
Submit by August 13 via the AAAI EasyChair site, choosing the AAAI/FSS-21 Artificial Intelligence in Government and Public Sector track.
Erik Blasch (USAF) Co-chair, Mihai Boicu (GMU) Co-chair, Nathaniel D. Bastian (USMA), Lashon Booker (MITRE), Michael Garris (MITRE), Mark Greaves (PNNL), Michael Majurski (NIST), Kathy McNeill (DoL), Tien Pham (ARL), Alun Preece (Cardiff University), Ali Raz (GMU), Peter Santhanam (IBM), Jim Spohrer, Frank Stein, Utpal Magla (IBM)
For More Information
Contact: Mihai Boicu (email@example.com)
Cognitive Systems for Anticipatory Thinking
Anticipatory thinking – the deliberate, divergent consideration of relevant possible futures – is a key cognitive process for risk management. The cognitive systems community has taken steps toward incorporating risk management into solutions and designing scenarios where risk is present. These initial efforts have demonstrated the value of incorporating risk management into AI systems' decision-making.
These cognitive systems efforts to manage risk have largely been made in isolation, without real-world deployments. In contrast, data-driven AI systems are being deployed in the real world, but without tools for risk management. The lack of such tools results in inconsistent regulatory and legal policies, limits the widespread adoption of AI systems, increases the risk to the public, and sows mistrust of autonomous systems.
This year's COGSAT brings together the cognitive systems and statistical learning communities to develop risk management capabilities for 3rd wave autonomous systems (which DARPA defines as context-adapting systems). Autonomous agents with 3rd wave risk management capability will make decisions like an insurance agent assessing premiums: identify the perils for a context (life, auto, home) and recognize when the risk of those perils changes. However, 3rd wave agents would constantly reassess risks and mitigate them through a variety of actions.
Specifically, we seek the community’s input to achieve the following goals:
- Refine challenges to spur research in AT and risk management
- Identify benchmark domains, measures, and metrics for AT
- Develop 3rd wave autonomy capabilities based on AT
Picking up where COGSAT 2019 left off, COGSAT 2021 will introduce two challenges, in perception and cognition, with an example domain for each. In the perception challenge, an autonomous vehicle requires pre-hoc assessment of errors and risk management. The key research question is how to assess, test, and evaluate a perception system in a self-driving car for its ability to handle out-of-sample, never-before-seen images. Perception in self-driving cars remains unsolved; there are many well-documented perception errors where anticipatory thinking, in the form of self-explanation, can help mitigate error and failure states.
The cognition challenge examines the disconnect between an agent’s action model and changes in risk exposure. We use Dungeon Crawl Stone Soup, a character-development game, as an example domain due to its catastrophic failures (permanent death) and many opportunities to manage this changing risk. The research question is how to identify when risk changes and mitigate exposure.
We invite 2-page (+1 for references) submissions in AAAI format that address one of the three stated goals.
Dr. Adam Amos-Binks, Chief AI Scientist (Applied Research Associates, Inc.), Dr. Dustin Dannenhauer, Scientist (Wright State Research Institute), Dr. Rogelio E. Cardona-Rivera, Assistant Professor (University of Utah), Dr. Gene Brewer, Associate Professor (Arizona State University), Dr. Leilani Gilpin, Researcher (Massachusetts Institute of Technology)
For More Information
Please see the supplemental website.
Computational Theory of Mind for Human-Machine Teams
Humans intuitively combine pre-existing knowledge with observations and contextual clues to construct rich mental models of the world around them and use these models to evaluate goals, perform thought experiments, make predictions, and update their situational understanding. When the environment contains other people, humans use a skill called theory of mind (ToM) to infer their mental states from observed actions and context and predict future actions from those inferred states. When humans form teams, these models can become extremely complex. High-performing teams naturally align key aspects of their models to create shared mental models of their environment, equipment, team, and strategies. ToM and the ability to create shared mental models are key elements of human social intelligence. Together, these two skills form the basis for human collaboration at all scales, whether the setting is a playing field or a military mission. The purpose of this symposium is to bring together researchers from computer science, cognitive science, and social science to discuss the creation of artificial intelligence systems that can generate theory of mind, exhibit social intelligence, and assist human teams.
Topics of interest include:
- Research on artificial social intelligence
- Computational theory of mind
- Teamwork theories relevant for agent-support systems
- Decision making models for teamwork
- Collective intelligence models
- Machine learning models of theory of mind
- Neural networks
- Inverse reinforcement learning
- Multi-agent reinforcement learning
- Nature and timing of agent advice
- Natural language studies on team communication
The first day will be devoted to invited talks and paper presentations with subsequent days to be composed of panels and discussion groups. Hybrid attendance will be supported.
Please submit via the AAAI FSS-21 EasyChair site. We accept the following types of submissions in AAAI format:
- full papers (6-8 pages + references)
- short papers (2-4 pages + references)
- summaries of previously published papers (1 page)
Position papers about computational theory of mind and artificial social intelligence are welcome, as are empirical studies. The organizers will invite a subset of submissions to be included in either a Springer volume or a journal special issue.
Joshua Elliott (DARPA), Nik Gurney (University of Southern California), Guy Hoffman (Cornell), Lixiao Huang (Arizona State University), Ellyn Maese (Gallup), Ngoc Nguyen, (Carnegie Mellon University), Gita Sukthankar (University of Central Florida), Katia Sycara (Carnegie Mellon University)
For More Information
Main Contact: Gita Sukthankar (University of Central Florida), firstname.lastname@example.org
Supplemental Website: https://sites.google.com/view/tomforteams/
Human Partnership with Medical AI: Design, Operationalization, and Ethics
This symposium touches upon two crucial themes of Clinical AI research: (1) barriers of trust in autonomous medical advisory systems and (2) challenges for Clinical AI adoption (design, deployment, evaluation, and sustainability of Clinical AI).
Topics of interest may include, but are not limited to:
- Trust metrics for medical AI systems
- Trust in hardware, software, and computing as components in an autonomous medical advisory system
- Trust and Explainable medical AI
- Trust and Human-Machine/Human-Computer Collaboration
- Individual and team medical AI support for remote and austere environments
- Robustness and reliability in medical AI
- Psychology, cognition, and human factors in relation to trust in Medical AI interaction and teaming
- Design research to inform Clinical AI
- Empirical studies examining clinician and patient perceptions and risk-beliefs of Clinical AI
- Metrics for and measures of Clinical AI uptake, utility, and impact on users
- Implementation Science strategies to inform and assess Clinical AI practice integration
- Medicolegal and ethical requirements and moral and professional responsibilities in an era of Clinical AI
- Differences and synergies in how the AI, HCI, and healthcare research communities approach clinical AI; potential strategies to better bridge these communities
Participants will be invited to prepare extended versions of their position papers for a follow-up special topics issue through the Frontiers in Computer Science and Frontiers in Psychology journals.
This symposium will include keynote speakers, position paper presentations and discussion, break-out sessions and interactive expert panel conversation.
This symposium will be of interest to academia, industry, government, and the broader healthcare community. Participants are invited to submit 2-6 page articles, which can be framed as speculative position papers, research outcomes, case studies, survey papers, or best practice/guidelines papers. Papers should be submitted to the AAAI Fall 2021 Symposia EasyChair site, selecting the appropriate track and theme.
Thomas E. Doyle, Associate Professor, Department of Electrical and Computer Engineering, McMaster University, Hamilton, Ontario, Canada (email@example.com); Aisling Kelliher, Associate Professor, Department of Computer Science, Virginia Tech, Blacksburg, Virginia, USA (firstname.lastname@example.org).
Reza Samavi, Ryerson University, Canada (email@example.com); Barbara Barry, Mayo Clinic Division of Health Care Delivery Research, USA, (firstname.lastname@example.org); Steven Yule, University of Edinburgh, UK, (email@example.com); Sarah Parker, Virginia Tech, USA, (firstname.lastname@example.org); Michael Noseworthy, McMaster University, Canada (email@example.com); Qian Yang, Cornell University, USA, (firstname.lastname@example.org)
For More Information
Please see the symposium website.
Science-Guided AI
Science-Guided AI (SGAI) is an emerging research paradigm that aims to integrate scientific domain knowledge (from disciplines like physics, chemistry, biology, climate science, economics, and many other branches of science with a rich corpus of well-established theory) into AI models and algorithms in a principled manner, to learn patterns and relationships from data that are not only accurate but also consistent with established scientific theories. Science-guided AI is ripe with research opportunities to influence fundamental advances in AI for accelerating scientific discovery and has already begun to gain attention in several scientific communities. The goal of this symposium is to nurture the community of researchers working at the intersection of AI and science and engineering areas, by providing a common platform to cross-fertilize ideas from diverse fields and shape the vision of the rapidly growing field of science-guided AI.
We encourage participation on topics that explore any form of synergy between scientific principles and AI/machine learning (ML) methods. Examples of relevant submissions include (but are not limited to):
- AI/ML algorithms that employ soft or hard scientific constraints in the learning process.
- Methods to encode scientific knowledge in AI/ML model architectures.
- Science-guided generative or reinforcement learning methods.
- Approaches that use scientific knowledge for post-facto verification of AI results, along the lines of explainable AI.
- AI models that employ scientific knowledge as 'hints' (i.e., weak supervision with scientific knowledge).
- Surrogate and reduced-order modeling methods.
- Discovery of governing equations from data using AI models.
- Hybrid constructions of science-based and AI/ML-based models.
- Software development facilitating the inclusion of scientific knowledge in learning.
- Novel techniques in inverse modeling and system identification with AI.
- Novel techniques for using data to calibrate parameters and system states in scientific models.
Our symposium will involve a mix of activities including keynote and invited talks, breakout sessions, panel discussions, and poster sessions.
We are currently accepting paper submissions for position, review, or research articles in two formats: (1) short papers (2-4 pages) and (2) full papers (6-8 pages). All submissions will undergo peer review and authors will have the option to publish their work in an open access proceedings site.
Anuj Karpatne (Virginia Tech, email@example.com), Nikhil Muralidhar (Virginia Tech, firstname.lastname@example.org), Naren Ramakrishnan (Virginia Tech, email@example.com), Vipin Kumar (University of Minnesota, firstname.lastname@example.org), Ramakrishnan Kannan (Oak Ridge National Laboratory, email@example.com), Jonghyun Harry Lee (University of Hawaii at Manoa, firstname.lastname@example.org)
For More Information
Please see the symposium supplemental website.
Where AI meets Food Security: Intelligent Approaches for Climate-Aware Agriculture
This cross-disciplinary symposium brings together international participants from three key audiences: (1) researchers in Artificial Intelligence (AI), Machine Learning (ML) and/or Robotics; (2) Agriculture and/or Climate scientists; and (3) industry, government, and policy stakeholders. The aim is to engage these groups in broad discussion and multi-disciplinary exploration of the many ways in which AI-grounded tools and techniques can be applied to reshape global agriculture. The symposium focuses on the agriculture domain, with emphasis on climate-aware methods, including smart approaches to water and soil monitoring and management, as well as decision making in agricultural production, using intelligent, data-driven, adaptive, trustworthy, science-based models (e.g., to support management, prediction, and long-term planning). Internationally renowned experts in these areas will participate through invited talks and panel discussions, introducing concepts and challenges to AI researchers. Interested participants from the AI community who address these issues are invited to submit papers.
Topics of interest include both Technical and Practice streams, as described below.
Technical papers that advance the state of the art by applying AI, ML, and/or Robotics to agriculture:
- Deep Learning
- Computer/Machine Vision/Image Processing
- Data Mining
- Sensor Fusion
- Reinforcement Learning
- Interactive Learning
- Active Learning
- Multi-agent Systems
Practice papers that describe current uses of AI, ML, and/or Robotics in agricultural applications; the role of integrating soil, water, and/or climate modeling into agricultural models; the role of AI in agricultural economics; and/or demonstrations of beta and in-production applications.
The symposium will include presentations of accepted papers in oral, panel, and/or demonstration formats. Authors are invited to submit either a full-length technical paper (up to 7 pages, excluding references) or a short paper (up to 3 pages, excluding references) as a position paper, practice paper, work in progress, system demonstration, or panel discussion proposal. Please use the AAAI formatting guidelines available here: http://www.aaai.org/Publications/Templates/AuthorKit21.zip
Instructions will be available here: https://sites.google.com/view/waif21/submission
Papers are due on Friday 13 August 2021 (any time on earth).
Prof. Elizabeth Sklar (University of Lincoln, UK), Dr. Audrey Reinert (University of Oklahoma, U.S.), Prof. David Ebert (University of Oklahoma, U.S.), Prof. Melba Crawford (Purdue University, U.S.), and Prof. Peter Wilson (University of Bath, UK).