The 38th Annual AAAI Conference on Artificial Intelligence
February 20-27, 2024 | Vancouver, Canada
Main Conference Timetable for Authors
Note: all deadlines are "Anywhere on Earth" (AoE, UTC-12)
November 2-7, 2023
Author feedback window
December 9, 2023
Notification of final acceptance or rejection
December 19, 2023
Submission of paper preprints for inclusion in electronic conference materials
February 20-27, 2024
AAAI-24 conference
Previous Deadlines
July 4, 2023
AAAI-24 web site open for author registration
July 11, 2023
AAAI-24 web site open for paper submission
August 8, 2023
Abstracts due by 11:59 PM UTC-12
August 15, 2023
Full papers due by 11:59 PM UTC-12
August 18, 2023
Supplementary material and code due by 11:59 PM UTC-12
September 25, 2023
Registration, abstracts and full papers for NeurIPS fast track submissions due by 11:59 PM UTC-12
September 27, 2023
Notification of Phase 1 rejections
September 28, 2023
Supplementary material and code for NeurIPS fast track submissions due by 11:59 PM UTC-12
Call for the Special Track on Safe, Robust and Responsible AI
AAAI-24 will feature a special track on Safe, Robust and Responsible Artificial Intelligence (SRRAI). This special track focuses on the theory and practice of safety and robustness in AI-based systems and on adherence to responsible AI principles. AI systems are increasingly deployed throughout society in domains such as data science, robotics and autonomous systems, medicine, economics, and safety-critical systems. With the recent explosion of interest in generative AI, the accessibility and applicability of foundation models have grown dramatically.
Despite their growing ubiquity, AI systems have fundamental limitations and practical shortcomings that can result in catastrophic failures. In particular, many AI algorithms deployed today cannot guarantee safe and successful operation and lack robustness in the face of uncertainty. Generative AI systems give rise to a further suite of difficulties, such as hallucination, information leakage, and toxicity.
To be reliable, AI systems must be robust to disturbances, failures, and novel circumstances, and they must offer assurance that they will avoid unsafe and irrecoverable situations. To push the boundaries of AI systems' reliability, this special track at AAAI-24 will focus on cutting-edge research on both the theory and practice of developing safe, robust, and responsible AI systems. Specifically, the goal of this special track is to promote research that studies (1) the safety and robustness of AI systems, (2) AI algorithms that are able to analyze and guarantee their own safety and robustness, (3) AI algorithms that can analyze the safety and robustness of other systems, and (4) mechanisms for building responsible and trustworthy AI systems. Papers accepted to this track are expected to make fundamental contributions to safe, robust and responsible AI and to demonstrate applicability to the complexity and uncertainty inherent in real-world applications.
In short, the special track covers topics related to the safety and robustness of AI-based systems and to the use of AI-based technologies to enhance the safety and robustness of both themselves and other critical systems, including but not limited to:
- Safe and Robust AI Systems
- Safe Learning and Control
- Quantification of Uncertainty and Risk
- Safe Decision Making Under Uncertainty and Limited Information
- Robustness Against Perturbations and Distribution Shifts
- Detection and Explanation of Anomalies and Model Misspecification
- Formal Methods for AI Systems
- On-line Verification of AI Systems
- Safe Human-Machine Interaction
- Transparency, Interpretability, and Explainability of AI Systems
- Fairness and Equity in Decision Making
- Issues Specific to Generative AI (e.g., hallucination, toxicity, information leakage, prompt injection)
Submissions to this special track follow the regular AAAI technical paper submission procedure, but authors must select the Safe, Robust and Responsible AI special track at submission time.
Special Track Co-Chairs:
- Chuchu Fan (Massachusetts Institute of Technology)
- Tatsunori Hashimoto (Stanford University)
- Ashkan Jasour (NASA/Caltech JPL)
- Balaraman Ravindran (Indian Institute of Technology Madras)
- Reid Simmons (Carnegie Mellon University)
Safe, Robust and Responsible AI Keywords
- SRAI: Safe AI Systems
- SRAI: Robust AI Systems
- SRAI: Safe Learning
- SRAI: Safe Control
- SRAI: Uncertainty Quantification
- SRAI: Risk Quantification
- SRAI: Safe Decision Making Under Uncertainty
- SRAI: Robustness Against Perturbations
- SRAI: Robustness Against Distribution Shifts
- SRAI: Anomaly Detection and Explanation
- SRAI: Model Misspecification Detection and Explanation
- SRAI: Formal Methods for AI Systems
- SRAI: On-line Verification of AI Systems
- SRAI: Safe Human-Machine Interaction
- SRAI: Explainability and Interpretability
- SRAI: Factuality and Grounding of Generative AI systems
- SRAI: Security Risks of Generative AI
- SRAI: Privacy Preserving Generative AI