The 38th Annual AAAI Conference on Artificial Intelligence
February 20-27, 2024 | Vancouver, Canada
Main Conference Timetable for Authors
Note: all deadlines are “Anywhere on Earth” (AoE, UTC-12)
July 4, 2023: AAAI-24 web site open for author registration
July 11, 2023: AAAI-24 web site open for paper submission
August 8, 2023: Abstracts due at 11:59 PM UTC-12
August 15, 2023: Full papers due at 11:59 PM UTC-12
August 18, 2023: Supplementary material and code due by 11:59 PM UTC-12
September 25, 2023: Registration, abstracts, and full papers for NeurIPS fast track submissions due by 11:59 PM UTC-12
September 27, 2023: Notification of Phase 1 rejections
September 28, 2023: Supplementary material and code for NeurIPS fast track submissions due by 11:59 PM UTC-12
November 2-5, 2023: Author feedback window
December 9, 2023: Notification of final acceptance or rejection
December 19, 2023: Submission of paper preprints for inclusion in electronic conference materials
February 20-27, 2024: AAAI-24 conference
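All deadlines above are interpreted as “Anywhere on Earth” (UTC-12). For authors who want to double-check a deadline against their own clock, the following minimal Python sketch converts an AoE deadline to a local time zone; it is purely illustrative (not an official tool), the deadline value is the full-paper deadline from the table above, and the target time zone is an arbitrary example.

    from datetime import datetime, timezone, timedelta
    from zoneinfo import ZoneInfo

    # "Anywhere on Earth" is a fixed offset of UTC-12.
    AOE = timezone(timedelta(hours=-12), name="AoE")

    # Full papers due August 15, 2023, 11:59 PM AoE (from the timetable above).
    deadline = datetime(2023, 8, 15, 23, 59, tzinfo=AOE)

    # Convert to a local zone (America/Vancouver chosen purely as an example).
    print(deadline.astimezone(ZoneInfo("America/Vancouver")))
    # -> 2023-08-16 04:59:00-07:00

Any IANA time zone name accepted by zoneinfo can be substituted for the example zone.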
AAAI-24 Keywords
Submission Groups
- Application Domains (APP)
- Cognitive Modeling & Cognitive Systems (CMS)
- Computer Vision (CV)
- Constraint Satisfaction and Optimization (CSO)
- Data Mining & Knowledge Management (DMKM)
- Game Theory and Economic Paradigms (GTEP)
- Humans and AI (HAI)
- Intelligent Robotics (ROB)
- Knowledge Representation and Reasoning (KRR)
- Machine Learning (ML)
- Multiagent Systems (MAS)
- Philosophy and Ethics of AI (PEAI)
- Planning, Routing, and Scheduling (PRS)
- Reasoning under Uncertainty (RU)
- Search and Optimization (SO)
- Natural Language Processing (NLP)
Keywords and Subtopics
Application Domains (APP)
- APP: Humanities & Computational Social Science
- APP: Internet of Things, Sensor Networks & Smart Cities
- APP: Misinformation & Fake News
- APP: Mobility, Driving & Flight
- APP: Natural Sciences
- APP: Other Applications
- APP: Security
- APP: Social Networks
- APP: Software Engineering
- APP: Transportation
- APP: Web
Cognitive Modeling & Cognitive Systems (CMS)
- CMS: Adaptive Behavior
- CMS: Affective Computing
- CMS: Agent Architectures
- CMS: Analogy
- CMS: Applications
- CMS: (Computational) Cognitive Architectures
- CMS: Computational Creativity
- CMS: Conceptual Inference and Reasoning
- CMS: Neural Spike Coding
- CMS: Other Foundations of Cognitive Modeling & Systems
- CMS: Simulating Human Behavior
- CMS: Social Cognition And Interaction
- CMS: Symbolic Representations
Computer Vision (CV)
- CV: Representation Learning for Vision
- CV: Large Vision Models
- CV: 3D Computer Vision
- CV: Adversarial Attacks & Robustness
- CV: Applications
- CV: Medical and Biological Imaging
- CV: Biometrics, Face, Gesture & Pose
- CV: Computational Photography, Image & Video Synthesis
- CV: Bias, Fairness & Privacy
- CV: Interpretability, Explainability, and Transparency
- CV: Image and Video Retrieval
- CV: Language and Vision
- CV: Learning & Optimization for CV
- CV: Low Level & Physics-based Vision
- CV: Motion & Tracking
- CV: Multi-modal Vision
- CV: Object Detection & Categorization
- CV: Other Foundations of Computer Vision
- CV: Scene Analysis & Understanding
- CV: Segmentation
- CV: Video Understanding & Activity Analysis
- CV: Vision for Robotics & Autonomous Driving
- CV: Visual Reasoning & Symbolic Representations
Constraint Satisfaction and Optimization (CSO)
- CSO: Applications
- CSO: Constraint Learning and Acquisition
- CSO: Constraint Optimization
- CSO: Constraint Programming
- CSO: Constraint Satisfaction
- CSO: Distributed CSP/Optimization
- CSO: Mixed Discrete/Continuous Optimization
- CSO: Other Foundations of Constraint Satisfaction
- CSO: Satisfiability
- CSO: Satisfiability Modulo Theories
- CSO: Search
- CSO: Solvers and Tools
Data Mining & Knowledge Management (DMKM)
- DMKM: Anomaly/Outlier Detection
- DMKM: Applications
- DMKM: Conversational Systems for Recommendation & Retrieval
- DMKM: Data Compression
- DMKM: Data Stream Mining
- DMKM: Data Visualization & Summarization
- DMKM: Graph Mining, Social Network Analysis & Community Mining
- DMKM: Intelligent Query Processing
- DMKM: Knowledge Acquisition from the Web
- DMKM: Linked Open Data, Knowledge Graphs & KB Completion
- DMKM: Mining of Spatial, Temporal or Spatio-Temporal Data
- DMKM: Mining of Visual, Multimedia & Multimodal Data
- DMKM: Other Foundations of Data Mining & Knowledge Management
- DMKM: Recommender Systems
- DMKM: Representing, Reasoning, and Using Provenance, Trust & Context
- DMKM: Rule Mining & Pattern Mining
- DMKM: Scalability, Parallel & Distributed Systems
- DMKM: Semantic Web
- DMKM: Web
Game Theory and Economic Paradigms (GTEP)
- GTEP: Adversarial Learning
- GTEP: Applications
- GTEP: Auctions and Market-Based Systems
- GTEP: Behavioral Game Theory
- GTEP: Cooperative Game Theory
- GTEP: Coordination and Collaboration
- GTEP: Equilibrium
- GTEP: Fair Division
- GTEP: Game Theory
- GTEP: Imperfect Information
- GTEP: Mechanism Design
- GTEP: Other Foundations of Game Theory & Economic Paradigms
- GTEP: Social Choice / Voting
Humans and AI (HAI)
- HAI: Applications
- HAI: Brain-Sensing and Analysis
- HAI: Crowd Sourcing and Human Computation
- HAI: Emotional Intelligence
- HAI: Game Design — Procedural Content Generation & Storytelling
- HAI: Game Design — Virtual Humans, NPCs and Autonomous Characters
- HAI: Human-Aware Planning and Behavior Prediction
- HAI: Human-Computer Interaction
- HAI: Human-in-the-loop Machine Learning
- HAI: Intelligent User Interfaces
- HAI: Interaction Techniques and Devices
- HAI: Learning Human Values and Preferences
- HAI: Other Foundations of Human Computation & AI
- HAI: Planning and Decision Support for Human-Machine Teams
- HAI: Teamwork, Team formation
- HAI: Understanding People, Theories, Concepts and Methods
- HAI: User Experience and Usability
- HAI: Voting
Intelligent Robotics (ROB)
- ROB: Behavior Learning & Control
- ROB: Cognitive Robotics
- ROB: Human-Robot Interaction
- ROB: Learning & Optimization for ROB
- ROB: Localization, Mapping, and Navigation
- ROB: Manipulation
- ROB: Motion and Path Planning
- ROB: Multi-Robot Systems
- ROB: Multimodal Perception & Sensor Fusion
- ROB: Other Foundations and Applications
- ROB: State Estimation
Knowledge Representation and Reasoning (KRR)
- KRR: Action, Change, and Causality
- KRR: Applications
- KRR: Argumentation
- KRR: Automated Reasoning and Theorem Proving
- KRR: Common-Sense Reasoning
- KRR: Computational Complexity of Reasoning
- KRR: Description Logics
- KRR: Diagnosis and Abductive Reasoning
- KRR: Geometric, Spatial, and Temporal Reasoning
- KRR: Knowledge Acquisition
- KRR: Knowledge Engineering
- KRR: Knowledge Representation Languages
- KRR: Logic Programming
- KRR: Nonmonotonic Reasoning
- KRR: Ontologies
- KRR: Other Foundations of Knowledge Representation & Reasoning
- KRR: Preferences
- KRR: Qualitative Reasoning
- KRR: Reasoning with Beliefs
Machine Learning (ML)
- ML: Deep Learning Algorithms
- ML: Deep Neural Architectures and Foundation Models
- ML: Deep Learning Theory
- ML: Active Learning
- ML: Adversarial Learning & Robustness
- ML: Applications
- ML: Bayesian Learning
- ML: Bio-inspired Learning
- ML: Calibration & Uncertainty Quantification
- ML: Causal Learning
- ML: Classification and Regression
- ML: Clustering
- ML: Dimensionality Reduction/Feature Selection
- ML: Distributed Machine Learning & Federated Learning
- ML: Ensemble Methods
- ML: Ethics, Bias, and Fairness
- ML: Privacy
- ML: Transparent, Interpretable, Explainable ML
- ML: Evaluation and Analysis
- ML: Evolutionary Learning
- ML: Feature Construction/Reformulation
- ML: Graph-based Machine Learning
- ML: Auto ML and Hyperparameter Tuning
- ML: Imitation Learning & Inverse Reinforcement Learning
- ML: Kernel Methods
- ML: Learning on the Edge & Model Compression
- ML: Learning Preferences or Rankings
- ML: Learning Theory
- ML: Learning with Manifolds
- ML: Matrix & Tensor Methods
- ML: Multi-class/Multi-label Learning & Extreme Classification
- ML: Multi-instance/Multi-view Learning
- ML: Multimodal Learning
- ML: Deep Generative Models & Autoencoders
- ML: Neuro-Symbolic Learning
- ML: Online Learning & Bandits
- ML: Optimization
- ML: Information Theory
- ML: Other Foundations of Machine Learning
- ML: Probabilistic Circuits and Graphical Models
- ML: Quantum Machine Learning
- ML: Reinforcement Learning
- ML: Statistical Relational/Logic Learning
- ML: Representation Learning
- ML: Scalability of ML Systems
- ML: Semi-Supervised Learning
- ML: Structured Learning
- ML: Time-Series/Data Streams
- ML: Transfer, Domain Adaptation, Multi-Task Learning
- ML: Life-Long and Continual Learning
- ML: Unsupervised & Self-Supervised Learning
Multiagent Systems (MAS)
- MAS: Adversarial Agents
- MAS: Agent Communication
- MAS: Agent-Based Simulation and Emergent Behavior
- MAS: Agent/AI Theories and Architectures
- MAS: Agreement, Argumentation & Negotiation
- MAS: Applications
- MAS: Coordination and Collaboration
- MAS: Distributed Problem Solving
- MAS: Mechanism Design
- MAS: Modeling other Agents
- MAS: Multiagent Learning
- MAS: Multiagent Planning
- MAS: Multiagent Systems under Uncertainty
- MAS: Other Foundations of Multi Agent Systems
- MAS: Teamwork
Philosophy and Ethics of AI (PEAI)
- PEAI: Accountability, Interpretability & Explainability
- PEAI: AI & Epistemology
- PEAI: AI & Jobs/Labor
- PEAI: AI & Law, Justice, Regulation & Governance
- PEAI: Applications
- PEAI: Artificial General Intelligence
- PEAI: Bias, Fairness & Equity
- PEAI: Morality & Value-based AI
- PEAI: Philosophical Foundations of AI
- PEAI: Privacy & Security
- PEAI: Safety, Robustness & Trustworthiness
- PEAI: Societal Impact of AI
Planning, Routing, and Scheduling (PRS)
- PRS: Activity and Plan Recognition
- PRS: Applications
- PRS: Deterministic Planning
- PRS: Learning for Planning and Scheduling
- PRS: Mixed Discrete/Continuous Planning
- PRS: Model-Based Reasoning
- PRS: Optimization of Spatio-temporal Systems
- PRS: Other Foundations of Planning, Routing & Scheduling
- PRS: Plan Execution and Monitoring
- PRS: Planning under Uncertainty
- PRS: Planning with Language Models
- PRS: Planning with Markov Models (MDPs, POMDPs)
- PRS: Planning/Scheduling and Learning
- PRS: Replanning and Plan Repair
- PRS: Routing
- PRS: Scheduling
- PRS: Scheduling under Uncertainty
- PRS: Temporal Planning
Reasoning under Uncertainty (RU)
- RU: Applications
- RU: Causality
- RU: Decision/Utility Theory
- RU: Graphical Models
- RU: Other Foundations of Reasoning under Uncertainty
- RU: Probabilistic Programming
- RU: Relational Probabilistic Models
- RU: Sequential Decision Making
- RU: Probabilistic Inference
- RU: Stochastic Optimization
- RU: Uncertainty Representations
Search and Optimization (SO)
- SO: Adversarial Search
- SO: Algorithm Configuration
- SO: Applications
- SO: Combinatorial Optimization
- SO: Distributed Search
- SO: Evaluation and Analysis
- SO: Evolutionary Computation
- SO: Heuristic Search
- SO: Learning to Search
- SO: Local Search
- SO: Metareasoning and Metaheuristics
- SO: Mixed Discrete/Continuous Search
- SO: Non-convex Optimization
- SO: Other Foundations of Search & Optimization
- SO: Sampling/Simulation-based Search
Natural Language Processing (NLP)
- NLP: Safety and Robustness
- NLP: Applications
- NLP: Conversational AI/Dialog Systems
- NLP: Discourse, Pragmatics & Argument Mining
- NLP: Ethics — Bias, Fairness, Transparency & Privacy
- NLP: Generation
- NLP: Summarization
- NLP: Information Extraction
- NLP: Interpretability, Analysis, and Evaluation of NLP Models
- NLP: Language Grounding & Multi-modal NLP
- NLP: (Large) Language Models
- NLP: Learning & Optimization for NLP
- NLP: Machine Translation, Multilinguality, Cross-Lingual NLP
- NLP: Other
- NLP: Lexical Semantics and Morphology
- NLP: Sentence-level Semantics, Textual Inference, etc.
- NLP: Question Answering
- NLP: Sentiment Analysis, Stylistic Analysis, and Argument Mining
- NLP: Text Classification & Sentiment Analysis
- NLP: Syntax — Tagging, Chunking & Parsing
- NLP: Speech
Choosing the best keyword(s) in the AAAI-24 Main Track
AAAI is a broad-based AI conference, inviting papers from different subcommunities of the field. It also encourages papers that combine different areas of research (e.g., vision and language; machine learning and planning). Finally, it invites methodological papers focused on diverse application areas such as healthcare or transportation.
In AAAI-24, authors are asked to choose one primary keyword (mandatory) and, optionally, up to five secondary keywords. With 300 keywords available to choose from, picking the best ones for a paper can be confusing. This brief guide describes some high-level principles for choosing the best keywords.
The main purpose of keywords is to enable finding the most appropriate reviewers for each submission, which is what this guide focuses on. Note, however, that there are a variety of other signals beyond keywords to match reviewers and papers, so not everything hinges on this choice.
In the end, choosing the best keywords is an art; making poor choices about keywords can increase your chance of getting suboptimal reviews. This guide aims to help authors understand the reasoning process to allow for the best possible matching of papers with qualified reviewers.
Choosing the primary keyword
The main principle for choosing a paper’s primary keyword is to identify the subarea to which the paper makes its main contribution. It should follow that a reviewer who is an expert in that subarea will be positioned to evaluate the paper most effectively.
Most of the time, it is best to start with the top-level area (e.g., computer vision, knowledge representation) that describes the paper’s methodological focus and then pick the best-fitting keyword within that area.
However, a sizable number of papers describe work at the intersection of different fields. To give some examples, consider papers:
- developing general machine learning methods but primarily motivated by problems in NLP
- studying bias in machine learning models applied to healthcare
- designing a novel elicitation mechanism for crowdsourcing
- combining different methodological subareas of AI in an integrated way, e.g., using machine learning to solve satisfiability problems.
In all such settings, it becomes trickier to choose the best primary keyword. Here are some rules of thumb:
(1) Focus on where the primary contribution lies and which community will benefit the most from reading the paper. For example, if an ML algorithm is demonstrated on both computer vision and NLP applications, it is best kept under ML (as it is a general advance, with NLP and vision serving only as applications). If, however, the paper is heavily motivated by details of a particular class of ML problems (e.g., proposing an algorithm that leverages the specific structure of images or of language), then picking a keyword that focuses on this class of problems (vision; NLP) is more appropriate.
(2) For papers with specific applications (e.g., healthcare or transportation), the application is typically NOT the primary keyword. Usually, an AAAI main track paper makes methodological advances that lead to an impact on an application; choose a primary keyword based on the methodology. There is one exception to this rule: if the impact on the application area is much more impressive than the methodological innovation, your paper may have the best chance with the application area as the primary keyword. That said, you should carefully consider whether such a paper is more appropriate for the track on AI for Social Impact or for IAAI; note that each of these evaluates papers according to different criteria than the AAAI main track.
(3) For papers genuinely at the intersection of different fields, carefully scan all keywords. It is possible that a joint keyword already exists in the list. For example, an ML paper studying bias applied to healthcare may naturally use the keyword “ML: Ethics, Bias, and Fairness” as the primary area (since healthcare is the application component).
(4) However, it may be that no keyword adequately captures the intersection of AI fields to which the paper makes its primary contribution. In such cases, a judgment call is necessary about which community is likely to best appreciate the work. For example, if a paper tackles satisfiability by using ideas from machine learning within a satisfiability algorithm, its most fundamental impact is likely to be on the design of satisfiability solvers rather than on the design of new learning algorithms; hence, “CSO: Satisfiability” would be a good choice for the primary keyword. However, if the paper solves the satisfiability problem using a deep neural network with significant innovations in machine learning, then a machine learning keyword will be a better fit.
Choosing secondary keywords
When choosing secondary keywords, it is helpful to consider two questions. First, all things being equal, what beyond the primary keyword should the reviewers be expert in? Second, if no single reviewer is likely to tick all the boxes, how would you describe experts outside the primary subarea who would add important perspectives to the paper’s review? For example, a paper on ML fairness applied to healthcare should choose “APP: Healthcare, Medicine & Wellness” as a secondary keyword. A paper using machine learning for satisfiability should choose whichever of the two areas is not the primary keyword as a secondary keyword.
As an extreme example, if mixed discrete/continuous search is used to solve a routing problem that arises when considering privacy issues in a navigation-based game, with the main contribution being to the routing problem (i.e., “PRS: Routing” is the primary keyword), then the paper may benefit from having multiple secondary keywords: “SO: Mixed Discrete/Continuous Search”, “PEAI: Privacy & Security”, and “APP: Games”.
Every attempt is made to find reviewers who cover all specified keywords. In some cases this will be hard: such reviewers may not exist, and each reviewer can only review a limited number of papers. On the other hand, be careful what you wish for: adding secondary keywords can be a double-edged sword. If the paper’s contributions appear relatively simple from the point of view of an expert in a secondary keyword, that expert may give a poor rating, perhaps overlooking the paper’s value in another domain. In such situations, it is better to omit the secondary keyword, provided that experts in the primary keyword, with broad (but not deep) knowledge of other fields of AI, will still understand the paper.