AAAI 2018 Fall Symposium Series
Important Deadlines
- July 20, 2018: Submissions due to organizers (please see individual descriptions for extensions)
- August 10, 2018: Notifications of acceptance sent by organizers
Symposia
- Adversary-Aware Learning Techniques and Trends in Cybersecurity
- Artificial Intelligence for Synthetic Biology
- Artificial Intelligence in Government and Public Sector
- A Common Model of Cognition
- Gathering for Artificial Intelligence and Natural Systems
- Integrating Planning, Diagnosis and Causal Reasoning
- Interactive Learning in Artificial Intelligence for Human-Robot Interaction
- Reasoning and Learning in Real-World Systems for Long-Term Autonomy
Adversary-Aware Learning Techniques and Trends in Cybersecurity
Machine learning-based intelligent systems have experienced massive growth over the past few years and are close to becoming ubiquitous in the technology surrounding our daily lives. However, a critical challenge in machine learning-based systems is their vulnerability to security attacks from malicious adversaries. This vulnerability is further aggravated because it is nontrivial to establish the authenticity of the data used to train the system, and even innocuous perturbations to the training data can be used to manipulate the system’s behavior in unintended ways.
This symposium will address the overarching need to make automated, machine learning-based systems more robust and resilient against adversarial attacks, so that humans can use them in a safe and sustained manner. Discussions and ideas generated in the symposium will be used to determine a roadmap for adversarial learning, identifying immediate technological enablers and hurdles as well as a far-term vision for the field.
Topics
Topics of interest include, but are not limited to the following:
- Adversary-aware machine learning — reinforcement learning, lifelong learning, deep learning
- Generative adversarial networks
- Adversary-aware prediction, forecasting, and decision-making techniques
- Game theory and game playing to counter adversarial learning
- Distributed, multiagent systems
- Adversarial issues and techniques for cyber-physical systems
- Operations research
- Applications of adversarial learning
- Security threats and vulnerabilities of adversarial learning
Format
The two-day symposium will feature a keynote address, invited talks, peer-reviewed paper and poster presentations, and a breakout session with topic-oriented working groups followed by a panel discussion.
Submissions
Authors are invited to submit full papers (6–8 pages) or short position or work-in-progress papers (2–4 pages). Papers should be submitted via EasyChair (details forthcoming). All submissions will be peer-reviewed. The symposium proceedings will be published online via arXiv. Selected high-quality papers will be considered for a special issue of a leading archival journal in the field of intelligent systems and cybersecurity.
Main Contact
Joseph Collins
Section Head, Distributed Systems Section
U.S. Naval Research Laboratory
4555 Overlook Ave SW
Washington, D.C. 20375
8;ins@nrl.navy.mil
Organizing Committee
Joseph Collins (US Naval Research Laboratory), Prithviraj (Raj) Dasgupta (University of Nebraska, Omaha), Krishnendu (Kris) Ghosh (Miami University of Ohio), Amitabh Mishra (US Army CERDEC), Ranjeev Mittu (US Naval Research Laboratory)
Organizing Committee Email: alecaaaifss18@gmail.com
For More Information
For more information, please see the supplemental symposium website.
Artificial Intelligence for Synthetic Biology
Our primary goal for this symposium is to begin to connect and build mutually beneficial collaborations between the AI and the synthetic biology communities. There are many problems and applications at the intersection of these two fields.
Synthetic biology is the systematic design and engineering of biological systems. It is a relatively new field and holds the potential for revolutionary advances in medicine, materials engineering, environmental remediation, and more. Synthetic biology can be used for a range of diverse goals, for example developing genetic programs to identify and kill cancer cells or designing plants that can extract pollutants from the ground. AI has the capacity to advance the progress of synthetic biology, and to help realize these goals.
Topics
Topics of interest include (but are not limited to) the following:
- Managing design complexity, for example, current design happens at a low level analogous to writing programs in assembly language
- Emerging techniques and tools in synthetic biology produce large amounts of data; understanding and processing this data provides avenues for AI techniques to make a big impact
- Data-driven modeling of biological systems presents opportunities to apply AI techniques; work is needed to help predict the outcome of genetic modification, identify the root cause of failure in a circuit, and predict the effect of a circuit on its host organism
- Most organism engineering workflows have little automation and rely heavily on domain expertise, only some of which is shared in publications. Tools that support or carry out information integration and informed decision making can improve the efficiency and speed of organism engineering
The symposium will include: introductions to each domain to ensure it is accessible to attendees from both backgrounds; identification of open problems and challenges at the intersection of AI and synthetic biology; contributed talks; panel discussions; and small and large group discussions mixing synthetic biologists and AI researchers.
Submissions
Full papers (7 pages) should present either a problem in the synthetic biology space that AI techniques might address, optionally with a description of a technique that addresses it, or a technique from AI that is relevant to synthetic biology problems.
Short position papers (2 pages) should outline new or controversial views of the intersection of AI and synthetic biology research or describe ongoing AI/synthetic biology research.
Organizing Committee
Aaron Adler (BBN Technologies), Mohammed Ali Eslami (Netrias, LLC), Jesse Tordoff (Massachusetts Institute of Technology), and Fusun Yaman (BBN Technologies)
For More Information
For more information, please see the supplemental symposium website.
Artificial Intelligence in Government and Public Sector
The democratization of AI has begun. AI is no longer reserved for a few highly specialized enterprises. As easy-to-configure AI methods proliferate, we see that simple, localized, but nonetheless very useful AI applications are beginning to pervade society. Government and the public sector are not immune from this trend.
However, AI in government and the public sector faces unique challenges and opportunities. These systems will be held to a higher standard since they are supposed to operate for the “public good.” They face increased scrutiny for transparency, fairness, explainability, and operation without unintended consequences. Governments provide critical services and are expected to be the provider of last resort, sometimes backstopping the commercial sector. How can the development, deployment, and use of these systems be managed to ensure they meet these requirements by design and in practice? How can their use be proactively monitored to ensure their operations meet these objectives in practice?
Topics
This symposium will focus on these unique elements of government and public sector AI systems. We invite contributions addressing topics including the following:
- Adoption: Best practices for ensuring adoption and acceptance of AI in Government – navigating the environmental challenges to plan, build, and deploy AI in government.
- Applications: Public sector problems where AI can play an important role without deep new experimentation, for example, fighting terrorism, serving vulnerable populations, understanding regulations, and combating child trafficking.
- Transparency: Ensuring transparency and comprehensibility in the governmental use of AI, to avoid anti-democratic preferential access and treatment for select members of society.
- Security: Ensuring that AI systems are designed and built to be robust and resilient in the face of systemic risks, cyberattacks, external manipulation, and other threats.
- Fairness: Developing AI methods to support auditing in order to detect bias, and then benchmark any efforts to mitigate unwanted bias.
- Innovation: Using AI to encourage public service innovation. What areas are less immediately approachable by AI, but still pose an urgent need, and hence offer significant financial and social reward for experimentation by public administrators?
- Ecosystem: Translating from .com to .gov – looking at the reality that .gov adoption of AI is not in the same ecosystem as a commercial company. How can one establish and foster public-private partnerships around AI methods and services to the benefit of both?
- Standards: Developing a systematic approach for the use of AI in government (for example, policies, methodologies, guides) or elements in support of such use (for example, taxonomies, ontologies).
Submissions
The symposium will include presentations of accepted papers in both oral and panel discussion formats, together with invited speakers and software demonstrations. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion. Full-length papers must be no longer than eight (8) pages, including references and figures. Short submissions can be up to four (4) pages in length and describe speculative work, work in progress, system demonstrations, or panel discussions.
Please submit via the AAAI EasyChair site choosing the Artificial Intelligence in Government and Public Sector track. Please submit by July 20.
Organizing Committee
Frank Stein, Chair (IBM), Mihai Boicu (George Mason University), Lashon Booker (Mitre), Chris Codella (IBM), Michael Garris (NIST), Eduard Hovy (Carnegie Mellon University), Chuck Howell (Mitre), Anupam Joshi (University of Maryland Baltimore County), Ned McCulloch (IBM), Alun Preece (Cardiff University), Jim Spohrer (IBM), John Tyler
A Common Model of Cognition
This symposium is a direct follow-on to the 2017 AAAI Fall Symposium on A Standard Model of the Mind. Our goal is to engage the international research community in developing a common model of cognition — that is, a community consensus concerning mental structures and processes to the extent that such exists — with a focus specifically on human-like minds, including artificial minds that are either inspired by human ones or are similar because of common functional goals. After the first meeting, we formed online working groups covering the following topics: (1) procedural and working memories; (2) declarative memory; (3) metacognition and reflection; (4) language processing; (5) emotion, mood, affect and motivation; (6) higher-level knowledge, rational and social constraints; (7) lower-level neural and physiological constraints; and (8) perceptual and motor systems. The intent of these working groups is to develop a statement of the best consensus in each area given the community’s current understanding of these components of cognition and how they fit together. The goal of this year’s meeting is to provide a forum for extending the model based on the progress made in the working groups while engaging new participants in the process. Interested people can participate in the effort by subscribing to the Common Model list and joining the working groups of interest. (List archives provide instructions on joining the working groups.)
Format
There will be a combination of parallel working group sessions that focus on the major components, and plenary sessions for working group presentations and discussions of general topics drawn from submitted papers. There also will be a poster session for accepted papers not presented in the other sessions.
Submissions
Full papers (up to 6 pages) or short position papers (2 pages) can be submitted to sm@ict.usc.edu by July 20, 2018. They can address fundamental issues with the concept of a common model, describe alternative formulations, or make proposals for extension to the common model or its components. While contributions from all perspectives are welcome, those arising from a cognitive architecture approach — and yielding implications for the computational structure and function of the mind and its parts — are expected to be most directly relevant.
Organizing Committee
John Laird (University of Michigan, laird@umich.edu), Christian Lebiere (Carnegie Mellon University, cl@cmu.edu), Paul S. Rosenbloom (University of Southern California, rosenbloom@usc.edu)
For More Information
People considering writing position papers are encouraged to visit the symposium website, which has additional background resources. You can also contact any member of the organizing committee.
Gathering for Artificial Intelligence and Natural Systems
Nature provides hundreds of millions of years of R&D by virtue of genetic variation, adaptation, and speciation. Technology inspired by nature has the potential to be revolutionary while saving energy, resources, and time. Several such inventions have already proved the creativity and efficiency of designs inspired by nature. For example, researchers have been able to create antibacterial surfaces inspired by cicada wings and gecko skin, and the shape of the engine of the Shinkansen bullet train is said to have been inspired by the kingfisher’s beak. While these examples are exciting, they are not as numerous as they could be. Biomimicry — the art of innovation inspired by nature — happens by serendipity. We believe that artificial intelligence could help take serendipity out of the loop.
The goal of this symposium is to bring together researchers from academe, government, and industry to collaborate at the intersection of artificial intelligence and biomimicry.
Topics
We welcome submissions on topics including (but not limited to) the following:
- Nature-inspired algorithms for AI
- AI-enabled biomimetic technology
- Data curation and data sets for biomimicry
- Computational creativity for biomimicry
- Ontologies and taxonomies for knowledge transfer between engineering and biology
- Computational tools and methods for biomimicry
- Applications supporting bioinspired design and innovation
Format
The 2.5-day symposium is focused on: (1) AI for bioinspiration and (2) nature-inspired AI. Each day will have keynotes and panels, interspersed with invited technical talks. In addition, we plan to include early-afternoon poster sessions. The last half-day of the symposium is dedicated to round tables with the objective of creating white papers with actionable items on different collaborative projects and ideas.
Submissions
Submissions should be labelled as full paper, technical brief, or work in progress.
- Full papers (maximum of 6 pages, including references): Papers detailing solutions or approaches.
- Technical briefs (maximum of 2 pages): These are typically position papers or concept papers.
- Work in progress (maximum of 2 pages): To be presented as a poster; however, for reviewing purposes, please submit a description of your work of at most 2 pages.
Organizing Committee
Ioana Baldini (IBM Research AI), Richard ‘Doug’ Riecken (Air Force Office of Scientific Research), Prasanna Sattigeri (IBM Research AI), Vikram Shyam (NASA Glenn Research Center)
For More Information
Send email including GAINS in the subject to Ioana Baldini at ioana@us.ibm.com or Vikram Shyam at vikram.shyam-1@nasa.gov.
Integrating Planning, Diagnosis and Causal Reasoning
Planning, plan execution, diagnosis, and causal explanation have each been examined by various research efforts, but discussion of the linkages between them in the literature is still somewhat sparse. When considering how to integrate these functions, at least three questions must be considered:
- System integration: how to integrate planning, plan execution, diagnosis, and causal explanation in a single system?
- Model / Belief updates: when the unexpected happens, how does the system change its internal representation so future plans are effective?
- Replanning: what to do now that the unexpected has happened?
Topics
This symposium will raise awareness, promote discussion, and encourage cross-fertilization of ideas from the following topics:
- Integration of planning and plan execution
- Theory (for example, flexibility versus uncertainty, replanning versus contingent planning algorithms)
- Practice (technologies, architectures, system integration)
- Integration of planning and fault management (diagnosis, prognostics, anomaly detection) technologies:
- Planning to diagnose (active diagnosis)
- Planning and fault model integration (impact of diagnosis algorithms on plan model revision, level of abstraction of models)
- Integration of planning and causal explanation (state/event estimation and prediction) technologies:
- Improving or revising plans based on inferred causal explanations
- Revising long-term models based on causal explanation
Accepted symposium papers, posters, and presentations will be archived as a set of working papers; we welcome resubmissions of relevant work from other venues provided the authors confirm doing so does not create reviewing or copyright conflicts.
Building from prior workshops in closely related areas, this symposium, the first in a series, is intended to become a forum for discussing the integration of planning systems and execution with specialized subareas of artificial intelligence. The focus of this iteration is on gathering researchers from the automated planning, planning and execution, model-based diagnosis, explanation, causal inference, and root-cause analysis communities to discuss topics at their intersection.
Format
We will allocate considerable time to presentations from different viewpoints, as well as panels for discussion of disagreements and future joint projects, such as competitions.
Submissions
Regular papers should be up to 8 pages. Position papers should be up to 4 pages; submit to EasyChair.
Organizing Committee
Jeremy Frank (NASA Ames Research Center, jeremy.d.frank@nasa.gov), Matthew Molineaux (Wright State University, matthew.molineaux@wright.edu), Mark (Mak) Roberts (Naval Research Laboratory, mark.roberts@nrl.navy.mil)
For More Information
For more information, please see the supplemental symposium website.
Interactive Learning in Artificial Intelligence for Human-Robot Interaction
The goal of this year’s Artificial Intelligence (AI) for Human-Robot Interaction (HRI) symposium is to bring together the large community of researchers working on interactive learning scenarios for interactive robotics. While current HRI research involves investigating ways for robots to effectively interact with people, HRI’s overarching goal is to develop robots that are autonomous while intelligently modeling and learning from humans. These goals greatly overlap with central goals of AI and interactive machine learning, making HRI an extremely challenging problem domain for interactive learning, one that will elicit fresh problem areas for robotics research. Present-day AI research still does not widely consider situations that involve interacting directly with humans and within human-populated environments, which present inherent uncertainty in dynamics, structure, and interaction. We believe that the HRI community already offers a rich set of principles and observations that can be used to structure new models of interaction. The human-aware AI initiative has primarily been approached through human-in-the-loop methods that use people’s data and feedback to improve refinement and performance of the algorithms, learned functions, and personalization. We thus believe that HRI is an important component to furthering AI and robotics research.
Our symposium will focus on one common area of interest within the broader theme that HRI is an AI problem and AI is an HRI problem: interactive machine learning for interactive robotics. We believe that the fusion of HRI and interactive learning may provide new insights and discussions that could benefit both fields. The symposium will include research talks and discussions to share work in this intersectional area, guidance on how best to frame AI-centric HRI work within AI venues, and a great deal of community building through discussion and tutorials.
In addition to oral and poster presentations of accepted papers, this year’s symposium will include panel discussions, position talks, keynote presentations, and a hack session with ample time for networking.
Submissions
Authors may submit under one of three paper categories:
- Full papers (6–8 pages) highlighting state-of-the-art HRI-oriented interactive learning research, HRI research focusing on the use of autonomous AI systems, or the implementation of AI systems in commercial HRI products.
- Short position papers (3–4 pages) outlining new or controversial views on AI-HRI research or describing ongoing AI-oriented HRI research.
- Tool papers (1–2 pages) describing novel software, hardware, or datasets of interest to the AI-HRI community.
In addition, philosophy and social science researchers are encouraged to submit short papers suggesting AI advances that would facilitate the design, implementation, or analysis of HRI studies. Industry professionals are encouraged to submit short papers suggesting AI advances that would facilitate the development, enhancement, or deployment of HRI technologies in the real world.
Organizing Committee
Kalesha Bullard (Georgia Institute of Technology), Nick DePalma (FutureWei Technologies), Richard G. Freedman (University of Massachusetts Amherst/SIFT), Bradley Hayes (University of Colorado Boulder), Luca Iocchi (Sapienza University of Rome), Katrin Lohan (Heriot-Watt University), Ross Mead (Semio), Emmanuel Senft (Plymouth University), Tom Williams (Colorado School of Mines)
For More Information
For more information, please see the supplemental symposium website.
Reasoning and Learning in Real-World Systems for Long-Term Autonomy
Over the past decade, decision-making agents have been increasingly deployed in industrial settings, consumer products, healthcare, education, and entertainment. The development of drone delivery services, virtual assistants, and autonomous vehicles has highlighted numerous challenges surrounding the operation of autonomous systems in unstructured environments. These include mechanisms to support autonomous operations over extended periods of time, techniques that facilitate the use of human assistance in learning and decision-making, learning to reduce the reliance on humans over time, addressing the practical scalability of existing methods, relaxing unrealistic assumptions, and alleviating safety concerns about deploying these systems.
This symposium aims to identify the challenges and bridge the gaps between theoretical frameworks for planning and learning in autonomous agents and the requirements imposed by deployment in the real world. Our goal is to help identify research avenues that can move the AI community beyond highly theoretical results for simple domains or highly engineered one-shot solutions for realistic applications. We seek papers that find a common middle ground between theory and applications, and analyze the lessons learned from these efforts, particularly with respect to long-term autonomy.
Topics
- Decision-making representations, models, and algorithms for the real world
- Hierarchical and multiobjective solutions for scalable planning and learning
- Efficient integrations of task and motion planning
- Integrating planning, reasoning, and learning for long-term deployments
- Safety in real-world decision-making and learning
- Scalable multiagent and human-in-the-loop techniques
- Proactively incorporating human feedback in decision-making
- Leveraging the complementary capabilities of humans and robots in real-world tasks
- Evaluation metrics for long-term autonomy
- Case studies and descriptions of deployed autonomous systems
- Lessons learned from deployed applications of autonomous systems
Format
The symposium will combine invited talks, presentations, and discussions from both an AI and a robotics perspective.
Submissions
We invite submissions of full papers (6–8 pages) and short papers (3–4 pages) following AAAI style guidelines. Full papers can present novel work or summarize a collection of recent work. Short papers can present preliminary work, describe new real-world challenge problems, or present a position related to these topics.
Organizing Committee
Kyle Wray (University of Massachusetts Amherst), Julie Shah (Massachusetts Institute of Technology), Peter Stone (University of Texas at Austin), Stefan Witwicki (Nissan Research Center), Shlomo Zilberstein (University of Massachusetts Amherst)
For More Information
For more information, please see the supplemental symposium website.