The 39th Annual AAAI Conference on Artificial Intelligence
February 25 – March 4, 2025 | Philadelphia, Pennsylvania, USA
Main Conference Timetable for Authors
Note: all deadlines are “anywhere on earth” (UTC-12)
July 8, 2024
AAAI-25 web site open for paper submission
August 7, 2024
Abstracts due at 11:59 PM UTC-12
August 15, 2024
Full papers due at 11:59 PM UTC-12
August 19, 2024
Supplementary material and code due by 11:59 PM UTC-12
October 14, 2024
Notification of Phase 1 rejections
November 4-8, 2024
Author feedback window
December 9, 2024
Notification of final acceptance or rejection (Main Technical Track)
December 19, 2024
Submission of camera-ready files (Main Technical Track)
February 27 – March 4, 2025
AAAI-25 Conference
Note: Deadlines are track-specific and may differ from those listed above. Track-specific deadlines are listed on their respective CFP.
Call for the Special Track on AI Alignment
AAAI-25 is pleased to announce a special track focused on AI Alignment. This track recognizes that as we build increasingly capable AI systems, it becomes crucial to ensure that their goals and actions are aligned with human values. To accomplish this, we need to understand the risks these systems pose and research methods to mitigate those risks. The track covers many different aspects of AI Alignment, including but not limited to the following topics:
- Value alignment and reward modeling: How do we accurately model a diverse set of human preferences, and ensure that AI systems are aligned to these same preferences?
- Scalable oversight and control: How can we effectively supervise, monitor and control increasingly capable AI systems? How do we ensure that such systems behave according to predefined safety considerations?
- Robustness and security: How do we create AI systems that work well in new or adversarial environments, including scenarios where a malicious actor is intentionally attempting to misuse the system?
- Interpretability: How can we understand and explain the operations of AI models to a diverse set of stakeholders in a transparent and methodical manner?
- Governance: How do we put in place policies and regulations that manage the development and deployment of AI models to ensure broad societal benefits and fairly distributed societal risks?
- Superintelligence: How can we control and monitor systems that may, in some respects, surpass human intelligence and capabilities?
- Evaluation: How can we evaluate the safety of models and the effectiveness of various alignment techniques, including both technical and human-centered approaches?
- Participation: How can we actively engage impacted individuals and communities in shaping the set of values to which AI systems align?
The goal of this track at AAAI-25 is to bring these problems to the forefront of the academic and research communities, highlighting these challenges as fundamental research questions on the same level as more traditional AI capabilities research.
This page outlines the specific focus of the AI Alignment Track, as well as the review criteria unique to this track. The logistical submission process for the track will mirror that of the main AAAI-25 conference, and more information can be found in the main AAAI-25 Call for Papers.
Submissions to this special track will follow the regular AAAI technical paper submission procedure, but authors need to select the AI Alignment (AIA) special track. There will be no transfer of papers between the AAAI-25 main track and the AI Alignment track; therefore, authors will need to decide to which track they want to submit their paper. Papers submitted to this track will be evaluated using the following criteria, which are similar to, but slightly different from, the criteria used for main track submissions.
- Relevance to AI Alignment: Does the paper address a problem central to the challenge of developing safe and secure AI systems, aligned with human values?
- Engagement with existing literature: Does the paper situate itself within the field of AI Alignment, engaging with previous approaches to the problem and relevant methods that researchers have developed to address it?
- Methodological or analysis novelty: Does the paper present a new method or bring a new perspective/analysis to the topic?
- Quality of evaluation: Is the method or approach evaluated sufficiently to demonstrate its utility for advancing or understanding the objectives of AI Alignment?
Submission Limit
AAAI-25 is enforcing a strict submission limit. Each individual author is limited to no more than 10 submissions across the AAAI-25 main track and the two special tracks (AIA and AISI), and authors may not be added to papers following submission (see the main AAAI-25 Call for Papers for policies about author changes).
Questions and Suggestions
For questions concerning author instructions and conference registration, write to aaai25@aaai.org. For topics specific to the AI Alignment track, please contact the track chairs at aaai25aialignment@aaai.org.
AI Alignment Track Co-Chairs
Dylan Hadfield-Menell (Massachusetts Institute of Technology, USA)
Lindsay Sanneman (Massachusetts Institute of Technology, USA)
Cem Anil (Anthropic, USA)
Joe Benton (Anthropic, USA)
Yoshua Bengio (Mila, Canada)
AI Alignment Keywords
- AIA: Value Alignment
- AIA: Preference Modeling
- AIA: Scalable Oversight
- AIA: Corrigibility and Controllability
- AIA: Robustness
- AIA: Safety Constraints
- AIA: Interpretability
- AIA: Governance
- AIA: Superintelligence
- AIA: Evaluation
- AIA: Participation
- AIA: Other