Call for Papers for the Special Track on Safe and Robust AI
AAAI-23 will feature a special track on Safe and Robust Artificial Intelligence (SRAI). This special track, new for AAAI-23, focuses on the theory and practice of safety and robustness in AI-based systems. AI systems are increasingly being deployed throughout society in domains such as data science, robotics and autonomous systems, medicine, economics, and safety-critical systems. Despite this growing adoption, AI systems have fundamental limitations and practical shortcomings that can result in catastrophic failures. In particular, many AI algorithms deployed today cannot guarantee safe and successful operation and lack robustness in the face of uncertainty.
To be reliable, AI systems need to be robust to disturbances, failures, and novel circumstances. Furthermore, they need to offer assurance that they will avoid unsafe and irrecoverable situations. To push the boundaries of AI systems' reliability, this special track at AAAI-23 will focus on cutting-edge research on both the theory and practice of developing safe and robust AI systems. Specifically, the goal of this special track is to promote research that studies 1) the safety and robustness of AI systems, 2) AI algorithms that are able to analyze and guarantee their own safety and robustness, and 3) AI algorithms that can analyze the safety and robustness of other systems. For acceptance into this track, we expect papers to make fundamental contributions to safe and robust AI and to address the complexity and uncertainty inherent in real-world applications.
In short, the special track covers topics related to the safety and robustness of AI-based systems and to the use of AI-based technologies to enhance both their own safety and robustness and that of other critical systems, including but not limited to:
- Safe and Robust AI Systems
- Safe Learning and Control
- Quantification of Uncertainty and Risk
- Safe Decision Making Under Uncertainty and Limited Information
- Robustness Against Perturbations and Distribution Shifts
- Detection and Explanation of Anomalies and Model Misspecification
- Formal Methods for AI Systems
- On-line Verification of AI Systems
- Safe Human-Machine Interaction
Submissions to this special track will follow the regular AAAI technical paper submission procedure, but authors must select the Safe and Robust AI special track during submission.
Special track co-chairs:
- Chuchu Fan (Massachusetts Institute of Technology)
- Ashkan Jasour (NASA/Caltech JPL)
- Reid Simmons (Carnegie Mellon University)
Safe and Robust AI Keywords
- SRAI: Safe AI Systems
- SRAI: Robust AI Systems
- SRAI: Safe Learning
- SRAI: Safe Control
- SRAI: Uncertainty Quantification
- SRAI: Risk Quantification
- SRAI: Safe Decision Making Under Uncertainty
- SRAI: Robustness Against Perturbations
- SRAI: Robustness Against Distribution Shifts
- SRAI: Anomaly Detection and Explanation
- SRAI: Model Misspecification Detection and Explanation
- SRAI: Formal Methods for AI Systems
- SRAI: On-line Verification of AI Systems
- SRAI: Safe Human-Machine Interaction