The 38th Annual AAAI Conference on Artificial Intelligence
February 20-27, 2024 | Vancouver, Canada
AAAI-24 Workshop Program
Sponsored by the Association for the Advancement of Artificial Intelligence
February 26-27, 2024 | Vancouver Convention Centre – West Building | Vancouver, BC, Canada
W1: “Are Large Language Models Simply Causal Parrots?” (LLM-CP)
The aim of LLM-CP is to bring together researchers interested in identifying to which extent we can consider the output and internal workings of LLMs to be causal by investigating their reasoning and inference capabilities (on a causal, logical, and cognitive level of representation/description).
- Large Language Models
- Foundation Models
- Analytical Philosophy
1-day workshop; No restrictions on attendance
4-5 page papers (excl. references); resubmissions and papers under review are allowed; submissions may be original research, benchmark, or retrospection papers
Submission Details and Portal at https://llmcp.cause-lab.net/submit-llmcp
Workshop Chair: Matej Zečević (TU Darmstadt, firstname.lastname@example.org)
Amit Sharma (Microsoft, email@example.com)
Lianhui Qin (UCSD, firstname.lastname@example.org)
Devendra Singh Dhami (Eindhoven University of Technology, TU Darmstadt & hessian.AI, email@example.com)
Aleksander Molak (CausalPython, firstname.lastname@example.org)
Matej Zečević (TU Darmstadt, email@example.com)
W2: AI for Credible Elections: A Call To Action with Trusted AI
This is a full-day workshop held on either February 26 or 27.
The objectives of the workshop are:
- To provide a forum for discussing new approaches and challenges in building AI for conducting elections, and for exchanging ideas about how to move the area forward. The workshop facilitates cross-geographical exchange of ideas through case studies and experts from countries including the US, India, Estonia, Brazil, Canada, and Ireland, thus providing diverse insights into election processes and the role of AI.
- To promote trustworthy AI. The workshop explores innovative approaches and methodologies that enhance the trustworthiness of AI technologies used in election processes, addressing concerns related to disinformation, security, and transparency.
- To advance transparency. The workshop discusses strategies for promoting transparency in the election process, including the use of technology for data management, validation, and the establishment of best practices.
- To address open research problems. The workshop encourages the identification and discussion of open research problems in the field of AI for credible elections, with a focus on areas where Trusted AI can offer solutions.
The workshop topics cover AI use in elections, for example:
- For voters
- Helping groups with special needs, like seniors or first-time voters, understand election processes
- Helping voters understand issues, candidates and parties
- Reducing cost of voting
- For candidates
- Organizing candidate campaigns
- Detecting, informing about, and managing mis- and disinformation
- Managing narratives: candidate, party and opposition
- For election organizers
- Identifying and validating voters
- Communicating election information to voters in an understandable way
- Possible legal and regulatory gaps and solutions
- Assessing the pulse of voting
- Expediting results computation and dissemination
- Detecting, informing about, and managing election mis- and disinformation, as well as increasingly sophisticated deepfakes
- Promoting transparency in the election process
- Technology for data management and validation
- Standardizing a secure stack for verifying AI innovations
The workshop is organized into about four sessions based on technical areas spanning technology (AI, security) and the humanities (political science, journalism). Sessions include invited speakers, paper presentations, and panel discussions.
There is no maximum number of attendees.
Either extended abstracts (4 pages) or full papers (7 pages)
Submission site: Easychair (URL to be announced)
Workshop Chair: Biplav Srivastava (University of South Carolina), firstname.lastname@example.org
Anita Nikolich (University of Illinois-Urbana Champaign), email@example.com
Andrea Hickerson (University of Mississippi), firstname.lastname@example.org
Tarmo Koppel (Tallinn University), email@example.com
Chris Dawes (New York University), firstname.lastname@example.org
Sachindra Joshi (IBM Research), email@example.com
Ponnurangam Kumaraguru (International Institute of Information Technology), firstname.lastname@example.org
W3: AI for Digital Human
It is a natural desire for human beings to investigate the digital world, e.g., the metaverse. Digital human avatars, as the most common representation of human beings in the digital space, are a fundamental element of the metaverse. Accordingly, there is a growing interest in developing AI tools to facilitate the process and improve the quality of digital human creation. This workshop aims to bring together researchers interested in the latest advancements in the field of digital human and how artificial intelligence can be leveraged to improve the quality and efficiency of the process.
This workshop covers a broad scope of digital humans, encouraging theoretical contributions, downstream applications, and discussion of social impacts. It offers an opportunity to systematically explore digital humans from a unified artificial-intelligence perspective, and it is open not only to computer science researchers but also to experts in law, education, psychology, sociology, etc. In this workshop, the latest developments in these fields can be brought together to inspire interdisciplinary study of AI and digital humans.
Topics of interest include, but are not limited to, the following:
- AI models and algorithms for digital human modeling; explicit and implicit representations; AI-empowered rendering techniques such as neural rendering; learning strategies that are more effective, efficient, and resource-friendly
- Downstream tasks of digital humans; large foundation models for digital human generation; machine learning for face, body, hair and clothing reconstruction; face/body animation with audio or texts
- The social impact of AI-generated characters. For example, the potential to transform industries including health and education, the potential risk of creating fake media, invading people’s privacy, impacting human workers in certain industries, etc.
- Other relevant applications and methods, e.g. digital human in VR and metaverse, etc.
This will be a one-day workshop. In the morning session, we will have three invited speakers and a panel discussing the important challenges in the field of AI for digital humans. In the afternoon session, we will have an oral session in which authors of accepted submissions share their work. If we receive many strong submissions, we will also organize a poster session. Additionally, we will organize a competition together with this workshop, and the winners will be announced during the meeting. We expect around 50 attendees, and the workshop is open to all AAAI-24 participants.
- Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices)
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices)
All papers must be submitted in PDF format, using the AAAI-24 author kit. All submissions should be done electronically via CMT.
The submission deadline is November 24, 2023 (11:59 PM PST).
Submission site: https://cmt3.research.microsoft.com/AI4DH2024
Yichao Yan, Shanghai Jiao Tong University (email@example.com)
Di Xu, Huawei Cloud Computing Technologies Co., Ltd (firstname.lastname@example.org)
Haozhe Jia, Huawei Cloud Computing Technologies Co., Ltd
Matthias Nießner (Technical University of Munich)
Jiankang Deng (Imperial College London)
KangKang Yin (Simon Fraser University)
W4: AI for Education: Bridging Innovation and Responsibility
This two-day workshop explores the innovations in artificial intelligence (AI), specifically generative-AI (GenAI), in educational applications, and discusses the related ethical implications of responsible-AI (RAI). Over two days, attendees will examine GenAI technologies, potential vulnerabilities, and the development of RAI standards in an educational context. Through a variety of formats like papers, demonstrations, posters, a global competition on Math reasoning, and opportunities to hear experts and representatives from various communities, participants will explore AI’s impact on instruction quality, learner outcomes, and ethics. This workshop ultimately aims to inspire novel ideas, foster partnerships, and navigate the ethical complexities of AI in education.
Day-1.
- Cognitive and learning science principles for improving GenAI-based learning frameworks and tutoring systems
- Mitigating “hallucinations” in large language models (LLMs) and improving factual validity for education
- Improving mathematical and scientific reasoning capabilities of LLMs
- Methods and metrics for evaluating GenAI content
- AI-based methods to evaluate students, monitor their progress, and personalize their education
- Analyses, studies, and solutions for preserving academic integrity amid the abundance of GenAI models and generated educational content
- Benchmark datasets for GenAI applications in education
- Next-generation infrastructure to support research on GenAI for education
Day-2.
- Responsible AI (RAI; e.g., fairness, accountability, interpretability, and transparency) for consequential decision-making in education (e.g., admissions, early warning systems, grading)
- Methodological contributions and impact of responsible AI in education, including but not restricted to generative modeling, predictive modeling, causal inference, reinforcement learning, and data collection
- AI for better student outcomes: applications that use AI to enhance educational interventions under resource constraints and inequity
- Privacy, security, and AI regulation in education: equity, pipelines, and representation; surveillance; platform governance; regulating large/foundation models in education settings
- Social and cultural impacts of AI in education; historical perspectives and critical theory
Format of Workshop
We have a two-day workshop with distinct themes and events on each day. Day-1 will focus on innovations in AI for education specifically highlighting GenAI and a variety of different events including keynotes, invited presentations/posters/demos, a debate all with representation from research and allied communities, and finally, a global challenge on math reasoning. Day-2 will highlight topics on responsibility in AI for education with a keynote, a stellar lineup of invited speakers, a moderated discussion session, expert panels highlighting critical issues in responsible AI, spotlight talks and a poster session to highlight accepted papers. Participants will be encouraged to think critically, ask questions, identify edge cases for demos, and come together to brainstorm solutions.
We expect to attract around 75 attendees and 50 submissions. We are offering a small number of travel scholarships to promote attendance by underrepresented students/postdocs.
We welcome different kinds of submissions:
- Short papers (2 pages**): demo papers and work-in-progress papers. These submissions will be exhibited as posters or demonstrations.
- Full papers (up to 6 pages**): novel research papers, appraisal papers of existing methods and tools (e.g., lessons learned), benchmark datasets highlighting applications of GenAI, and evaluatory papers which revisit the validity of domain assumptions.
- Global challenge on math problem solving and reasoning: we invite researchers and practitioners worldwide to investigate the opportunities of automatically solving math problems via LLM approaches. More details about this competition can be found at https://ai4ed.cc/competitions/aaai2024competition.
Accepted papers will be invited to submit an extended version, addressing the remarks of the reviewers, to PMLR (https://proceedings.mlr.press/).
**Page lengths exclude references and supplementary materials. Previously published work will not be accepted for the workshop, but papers published solely as preprints are welcome.
Submission Site Information
All submissions must follow the PMLR style template.
To ensure a fair review process, all submissions will be evaluated through a double-blind review. All submissions must be made through the OpenReview portal for the workshop (https://bit.ly/ai4ed-aaai-openreview). Authors must have an OpenReview account in order to make submissions. Please adhere to the submission guidelines outlined on the workshop website.
Main contact. Debshila Basu Mallick (OpenStax, Rice University); Steering Committee: Muktha Ananda (Google), Debshila Basu Mallick (OpenStax, Rice University), Jill Burstein (Duolingo), Lydia Liu (Cornell), Zitao Liu (Guangdong Institute of Smart Education, Jinan University, Guangzhou, China), James Sharpnack (Duolingo), Jack Wang (Adobe), Serena Wang (UC Berkeley)
Subcommittee Leads. Program committee and proceedings. Isabelle Guyon (Google); Andrew Lan (UMass at Amherst); James Sharpnack (Duolingo); Competition. Jiahao Chen (TAL Education Group, China); Liang Xu (TAL Education Group, China); Zitao Liu (Guangdong Institute of Smart Education, China); Isabelle Guyon (Google); Simon Woodhead (EEDI, UK); Panagiota Konstantinou (EEDI, UK); Debate. Dongwook Yoon (University of British Columbia, Canada); Travel Scholarship. Muktha Ananda (Google); Simon Woodhead (EEDI, UK)
W5: AI in Finance for Social Impact
The main objective of the workshop is to delve into how AI is shaping the financial sector’s focus on social impact, in response to consumer consciousness and regulatory pressures. This event highlights AI’s ethical applications, including the promotion of financial inclusion, facilitation of ESG investing, and the development of privacy-preserving solutions. Attendees can engage with experts from industry and academia to exchange ideas and share best practices.
Topics for contributed papers should lie at the intersection of AI, Finance, and Social Good.
Topics include, but are not limited to:
- Generative models and data-driven simulation
- Planning, Search, Constraint-based Reasoning, Optimization, and Reinforcement Learning
- Multi-agent systems and game-theoretic analysis of financial markets
- Transformer models, Self-supervised Learning
- Natural Language Processing, including Large Language Models (LLMs), Speech Analysis and Conversational Dialogue Modeling
- Meta Learning, Federated Learning, Representation Learning, Causal Learning and Transfer Learning
- Computer vision
- Graph theory and Network Analysis
- Data annotation, acquisition, augmentation, and feature engineering
- Semi-structured data modeling
- Validation and Calibration of financial models
Potential applications of interest in Finance and Social Good may include but are not limited to:
- Combating Financial Crime including Fraud Detection
- Developing and implementing AI solutions for ESG (environmental, social, and governance) investing
- Data Privacy
- Financial safety and education for vulnerable and/or underrepresented populations
- Large language models for socially responsible finance
- Decentralized Finance (DeFi) frameworks & benchmarks
- Bias analysis and mitigation in AI Models for financial decision making
- Explainability and fairness for financial AI & ML systems
- Responsible AI in finance
Format of Workshop
The workshop will follow a one-day format, focusing on paper presentations, poster sessions, keynote and invited talks.
We anticipate attracting a minimum of 75 and potentially up to 100 attendees.
Authors of accepted paper presentations and posters are invited to attend. In addition, anyone interested in AI, finance, and social good is welcome to attend.
All contributions must be original and unpublished, and should not be under consideration by other conferences or journals. Submissions will undergo a peer review process using a double-blind system. The evaluation criteria include relevance to the workshop, novelty, technical contribution, impact significance, clarity, and reproducibility. To ensure consistency, all submissions must adhere to the AAAI-24 formatting guidelines (https://aaai.org/aaai-conference/submission-instructions/) and utilize the corresponding LaTeX style files.
Three types of submissions are accepted:
- full research papers, which should not exceed 8 pages (including references)
- short/poster papers, which should not exceed 4 pages (including references)
- extended abstracts which should not exceed 2 pages. These abstracts serve as a platform for initial idea exploration.
The submission process will take place via Microsoft CMT via https://cmt3.research.microsoft.com/AIFinSI2024/Track/1/Submission/Create. If you have any queries regarding the submission process, please contact us at email@example.com.
Accepted papers require in-person presentation by at least one author. All accepted papers will be published on the workshop website, and authors are encouraged to also share their work on platforms like arXiv or other online repositories.
- Submission deadline: Friday, 24 November 2023 (anywhere on earth)
- Notification of acceptance/rejection: Monday, 11 December 2023
Tucker Balch, Managing Director, J.P. Morgan AI Research firstname.lastname@example.org
Bo Li, University of Illinois at Urbana Champaign, email@example.com
Suchetha Siddagangappa, J.P. Morgan AI Research firstname.lastname@example.org
Dhagash Mehta, BlackRock, email@example.com
Rachneet Kaur, J.P. Morgan AI Research, firstname.lastname@example.org
Elaine Shi, Carnegie Mellon University and University of Maryland, email@example.com
Kassiani Papasotiriou, J.P. Morgan AI Research, firstname.lastname@example.org
David Byrd, Bowdoin College, email@example.com
W6: AI to Accelerate Science and Engineering (AI2ASE)
Scientists and engineers in diverse application domains increasingly rely on computational and artificial intelligence (AI) tools to accelerate scientific discovery and engineering design. AI, machine learning, and reasoning algorithms are useful for building models and supporting decision-making toward this goal. We have already seen several success stories of AI in applications such as materials discovery, ecology, wildlife conservation, and molecule design optimization. This workshop aims to bring together researchers from AI and diverse science/engineering communities to achieve the following goals: (1) identify and understand the challenges in applying AI to specific science and engineering problems; (2) develop, adapt, and refine AI tools for novel problem settings and challenges; (3) build community and provide education to encourage collaboration between AI researchers and domain-area experts.
Submissions must be formatted in the AAAI submission format. All submissions should be done electronically via CMT.
We welcome submissions of long (max. 8 pages), short (max. 4 pages), and position (max. 4 pages) papers describing research at the intersection of AI and science/engineering domains including chemistry, physics, power systems, materials, catalysis, health sciences, computing systems design and optimization, epidemiology, agriculture, transportation, earth and environmental sciences, genomics and bioinformatics, civil and mechanical engineering etc.
The submission deadline is November 21st, 2023 (11:59 PM PST).
This year’s theme is AI for Materials and Manufacturing. Our invited speakers and panelists from both the AI and materials/manufacturing communities include:
- Prof. Roman Garnett, Washington University in St. Louis
- Prof. Ying Diao, University of Illinois at Urbana-Champaign
- Prof. Tao Sun, Northwestern University
- Prof. John Gregoire, California Institute of Technology
- Prof. Vahid Babaei, Max Planck Institute for Informatics
- Prof. Koji Tsuda, University of Tokyo
Workshop Organizing Committee
Aryan Deshwal (Washington State University)
Jana Doppa (Washington State University)
Syrine Belakaria (Washington State University)
Kaiyan Qiu (Washington State University)
Yolanda Gil (University of Southern California)
W7: AI-based Planning for Cyber-Physical Systems (CAIPI)
AI-Planning for real-world Cyber-Physical Systems (CPS) is challenging. Planning algorithms must deal with high CPS complexities and large data quantities, all while maintaining their performance. Often, this overstrains conventional planning algorithms from individual research directions, such as symbolic or sub-symbolic planning.
Recent approaches in AI-Planning include neuro-symbolic architectures, Large Language Models (LLMs), deep reinforcement learning, and extensions of symbolic planning paradigms. The performance of such novel methods makes them especially well suited to the complexity of real-world CPS.
This workshop aims at soliciting discussion and collaboration between researchers on novel methods in AI-planning with a focus on applications in CPS.
- Novel applications of planning for CPS
- Combinations of Machine Learning, sub-symbolic and symbolic planning
- Usage of LLMs for planning
- Reinforcement / imitation / policy learning for decision-making in CPS
- Automated generation of planning domain descriptions
- Multi-objective decision-making
- Reconfiguration of CPS as planning tasks
- Knowledge representations, semantic methods and ontologies
- Online planning after faults and disturbances
This one-day workshop will include presentations, a poster session, an invited keynote talk and a planning competition.
We aim to facilitate discussions between researchers at all levels, from those just starting their careers to those with extensive experience in the field. To this end, attendance is open to the following participants:
- Authors of accepted workshop papers
- Authors of accepted posters
- Conference guests
- Participants in the planning competition
Maximum number: 50 participants
We accept the following submission types:
- short paper (4 pages not including references / appendix)
- full paper (8 pages not including references / appendix)
Papers should be submitted in the AAAI format. Papers previously submitted to other journals or conferences are welcome. If a paper has already been rejected from a different conference, a significant effort to address the criticisms should be made before submission. The review process will be single-blind.
Alexander Diedrich, Helmut-Schmidt-University, firstname.lastname@example.org
Jonas Ehrhardt, Helmut-Schmidt-University, email@example.com
René Heesch, Helmut-Schmidt-University, firstname.lastname@example.org
Niklas Widulle, Helmut-Schmidt-University, email@example.com
W8: AIBED: Artificial Intelligence for Brain Encoding and Decoding
This workshop aims to explore the intersection of AI and neuroscience, focusing on how AI, particularly deep artificial neural networks, can facilitate the encoding and decoding of brain activities. We will first delve into the principles of brain encoding and decoding, examining how the brain processes and encodes information into neural signals, and how these signals can be decoded to understand cognition. Next, we will discuss the challenges in encoding and decoding high-dimensional neural imaging data, including but not limited to the complexity of brain signal representations, the scarcity of data annotations, and the need for model generalizability. Finally, we will consider the implications of these AI-driven advances in brain encoding and decoding for neuroscience, including understanding cognitive functions, diagnosing neurological disorders, and developing brain-computer interfaces.
- Understanding Brain Encoding and Decoding:
- Analyzing the processes of brain information processing and neural signal encoding
- Utilizing AI to model complex neural processes and facilitate cognition understanding
- Decoding from brain activities to reconstruct perceived or imagined linguistic, visual and audio information with AI
- Addressing Challenges in Processing Neural Imaging Data:
- Proposing AI solutions to process neural images, such as denoising, registration, and slicing
- Leveraging AI’s proficiency in managing high-dimensional data to innovate solutions of representing brain signals
- Implications in Neuroscience:
- Considering the impact of AI developments on cognitive neuroscience
- Aiding in diagnosing neurological disorders with AI
Format and Attendance
This will be a 1 day workshop with keynotes, poster presentations, and panel discussions.
We will invite keynote speakers and all authors of accepted papers. Other interested AAAI attendees can also attend, following AAAI’s related policy. The expected number of attendees is about 50-75.
We accept both short papers with no more than 4 pages and long papers with no more than 7 pages.
Submission Site Information: https://openreview.net/group?id=AAAI.org/2024/Workshop/AIBED
Mingxiao Li, firstname.lastname@example.org
Zijiao Chen, email@example.com
Jiaxin Qing, firstname.lastname@example.org
Xinpei Zhao, email@example.com
Tiedong Liu, firstname.lastname@example.org
Dr. Wei Huang, email@example.com
W9: Artificial Intelligence for Cyber Security (AICS)
The AICS workshop will focus on the application of artificial intelligence to problems in cyber security. While AI and ML have shown an astounding ability to automatically analyze and classify large amounts of data in complex scenarios, these techniques are still not widely adopted in real-world security settings, especially in cyber systems. The workshop will address technologies and their applications in security, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction.
This year the workshop emphasis will be on applications of generative AI, including LLMs, to cybersecurity problems as well as adversarial attacks on such models.
We invite work at the intersection of AI (all AI topics in AAAI) and cybersecurity that helps improve the understanding of this complex space.
Topics of interest include, but are not limited to:
- Machine learning (including RL) approaches to make cyber systems secure and resilient
- Natural language processing techniques
- Anomaly/Threat detection techniques
- Big Data noise reduction techniques
- Adversarial Learning
- Deception in Learning
- Human behavioral modeling, being robust to human errors
- Formal reasoning, with focus on human behavior element, in cyber systems
- Game Theoretic reasoning in cyber security
- Adversarial robust AI metrics
- Multi-agent interaction/agent-based modeling in cyber systems
- Modeling and simulation of cyber systems and system components
- Decision making under uncertainty in cyber systems
- Automation of data labeling and ML techniques that learn to learn in security
- Quantitative human behavior models with application to cyber security
- Operational and commercial applications of AI in security
- Explanations of security decisions and vulnerability of explanation techniques
- The use of foundation models, e.g., LLMs, in cybersecurity
Format of Workshop
The workshop will have two invited talks. We will host a series of short (15-20 minute) presentations of the papers accepted for this workshop. Finally, we will hold a moderated roundtable/panel discussion between members of the AI community on practical issues related to the use of AI and security.
The AICS workshop will be a one-day meeting, from roughly 9am to 5pm.
Attendance
The organizers do not impose any criteria to attend, other than what AAAI registration imposes. The maximum number of attendees is to be determined by the room size.
We accept only full-length papers (minimum 6 pages, up to 8 pages overall, in the AAAI-24 format).
Submissions are not anonymized.
Submission Site Information: https://cmt3.research.microsoft.com/AICS2024/
James Holt, firstname.lastname@example.org
Edward Raff, email@example.com
Ahmad Ridley, firstname.lastname@example.org
Dennis Ross, Dennis.Ross@ll.mit.edu
Ankit Shah, email@example.com
Arunesh Sinha, firstname.lastname@example.org
Diane Staheli, email@example.com
Allan Wollaber, Allan.Wollaber@ll.mit.edu
Workshop Committee: The Program Committee is still to be determined.
W10: Artificial Intelligence for Operations Research
Operations Research (OR) utilizes sophisticated analytical methods to facilitate optimal decision-making. It is an interdisciplinary branch that draws from fields such as mathematics, statistics, and computer science and specializes in modelling and solving complex problems in various sectors, including business, government, healthcare, and engineering. The traditional OR process can be segmented into three distinct steps:
- OR Modelling – creating a mathematical optimization model based on project goals and constraints.
- Model Solving – designing algorithms to solve mathematical optimization models.
- Solution Evaluation – deploying and assessing the derived solution.
In today’s digitized world, the integration of Artificial Intelligence (AI) within OR is not just beneficial; it is essential. Cutting-edge language models like GPT-4 are poised to transform the mathematical modelling paradigm. Techniques like deep learning algorithms and evolutionary strategies have transformative capabilities that can significantly speed up optimization algorithms. Our workshop aims to showcase how AI can revolutionize the field of operations research, particularly in the modelling and solving phases.
Specific topics of interest for the workshop include (but are not limited to)
- Language Model-driven OR Modelling
- AI in Data Generation and Refinement
- Learning-based Optimization Algorithms
- End-to-End Learning and Optimization
- AI for Decision-Making under Uncertainty
The workshop is planned as a one-day event. It will feature:
- Invited Talks: Leading researchers will share their insights and findings.
- Panel Discussion: Experts from academia and industry will discuss and debate relevant topics.
- Poster Session: All submitted papers will have the opportunity for a poster presentation, allowing attendees to engage in detailed discussions about the research.
- Length: Technical papers can be up to 7 pages, not including references and appendices.
- Format: Submissions must be in PDF format, prepared using the AAAI-24 author kit.
- Review Process: All papers will undergo a peer-review process. Selected papers will be presented in the poster session.
- Awards: A “Best Paper Award” will be given, accompanied by a cash prize of $1000.
- Publication: All accepted papers will be featured in a special issue of INFOR (https://www.tandfonline.com/toc/tinf20/current)
- Submission Portal: Papers should be submitted via the INFOR official website (https://www.tandfonline.com/toc/tinf20/current)
- Due date: Submissions are due on Nov. 24th.
- Jie Wang (University of Science and Technology of China, firstname.lastname@example.org)
- Giuseppe Carenini (University of British Columbia, email@example.com)
- Claudia D’Ambrosio (Centre National de la Recherche Scientifique & École Polytechnique, firstname.lastname@example.org)
- Bissan Ghaddar (Ivey Business School, email@example.com)
- Yong Zhang (Huawei Technologies Canada Co., Ltd, firstname.lastname@example.org)
- Zirui Zhou (Huawei Technologies Canada Co., Ltd, email@example.com)
- Zhenan Fan (Huawei Technologies Canada Co., Ltd, firstname.lastname@example.org)
W11: Artificial Intelligence for Time Series Analysis (AI4TS): Theory, Algorithms, and Applications
Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, biological sciences, and environmental sciences. Given the availability of massive amounts of data, their complex underlying structures/distributions, and high-performance computing platforms, there is a great demand for developing new theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, causal analysis) in various types of applications.
The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions, key technical issues, and present solutions to tackle related challenges in practical applications. The workshop will focus on both the theoretical and practical aspects of time series data analysis and aims to trigger research innovations in theories, algorithms, and applications. We will invite researchers and AI practitioners from the related areas of machine learning, data science, statistics, econometrics, and many others to contribute to this workshop.
This workshop encourages submissions of innovative solutions for a broad range of time series analysis problems. Topics of interest include but are not limited to the following:
- Time series forecasting and prediction
- Spatio-temporal forecasting and prediction
- Time series anomaly detection and diagnosis
- Time series change point detection
- Time series classification and clustering
- Time series similarity search
- AI-inspired approaches for time series similarity search
- Time series indexing
- Time series compression
- Time series pattern discovery
- Interpretation and explanation in time series
- Causal inference in time series
- Bias and fairness in time series
- Federated learning and security in time series
- Benchmarks, experimental evaluation, and comparison for time series analysis tasks
We plan to organize a full day workshop, consisting of keynote presentations, oral/poster paper presentations, awards announcements, and a panel discussion.
Researchers, students, and practitioners in AI, machine learning, data mining, and time series analysis.
Submissions should be 4-7 pages long, excluding references, and follow the AAAI 2024 template. Submissions are single-blind; author identities will be visible to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).
Dongjin Song, University of Connecticut, Storrs, Connecticut
Qingsong Wen, DAMO Academy, Alibaba Group (U.S.) Inc., Bellevue, Washington
Yao Xie, Georgia Institute of Technology, Atlanta, Georgia
Cong Shen, University of Virginia, Charlottesville, Virginia
Sanjay Purushotham, University of Maryland Baltimore County, Baltimore, Maryland
Shirui Pan, Griffith University, Queensland, Australia
Tim Januschowski, Zalando, Berlin, Germany
Haifeng Chen, NEC Labs America, Princeton, New Jersey
Yuriy Nevmyvaka, Morgan Stanley, New York City, New York
W12: Artificial Intelligence with Biased or Scarce Data (AIBSD)
Despite notable advancements, integrating Artificial Intelligence (AI) into practical uses like autonomous vehicles, industrial robotics, and healthcare remains a formidable task. This complexity arises from the diverse and rare occurrences in the real world, necessitating AI algorithms to train on extensive data. However, these domains often suffer from data scarcity, making it difficult to gather raw or annotated data. Even when data is available, inherent biases creep in during collection, leading to skewed models. To address these concerns and with AI’s growing prominence, we aim to establish a platform for academics and industry professionals to deliberate on the challenges and remedies in constructing AI systems when confronted with limited data and biases.
We invite the submission of original and high-quality research papers in the topics related to biased or scarce data. The topics for AIBSD 2024 include, but are not limited to (please see the workshop website for more topics of interest):
- Algorithms and theories for explainable and interpretable AI models.
- Application-specific designs for explainable AI, e.g., healthcare, autonomous driving, etc.
- Algorithms, theories, or performance characterization for trustworthy AI models or learning AI models under bias and/or data scarcity.
- Limitation of or methods incorporating large language models under bias and/or data scarcity settings.
- Brave new ideas to learn AI models under bias and scarcity.
This one-day workshop will include invited talks from keynote speakers and oral/spotlight presentations of the accepted papers. Each oral presentation will be allocated 10-15 minutes, and each spotlight presentation will be 5 minutes. There will be live Q&A sessions at the end of each talk and oral presentation.
We expect 50-75 participants, and potentially more based on our past experience. We cordially welcome researchers, practitioners, and students from academia and industry who are interested in understanding and discussing how data scarcity and bias can be addressed in AI.
We welcome full paper submissions (up to 7 pages, excluding references or supplementary materials). The paper submissions must be in pdf format and use the AAAI official templates. All submissions must be anonymous and conform to AAAI standards for double-blind review. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings. At least one author of each accepted submission must present the paper in person at the workshop.
Workshop Chair and Committee
- Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories, email@example.com)
- Abhishek Aich (NEC Laboratories, America, firstname.lastname@example.org)
- Ziyan Wu (UII America, Inc., email@example.com)
W13: Cooperative Multi-Agent Systems Decision-Making and Learning: From Individual Needs to Swarm Intelligence
With the tremendous growth of AI technology, robotics, IoT, and high-speed wireless sensor networks (such as 5G) in recent years, an artificial ecosystem has gradually formed, termed artificial social systems, that encompasses AI agents ranging from software entities to hardware devices. How to integrate artificial social systems into human society so that the two coexist harmoniously is a critical issue for the sustainable development of human beings. To that end, rational decision-making and efficient learning from multi-agent system (MAS) interaction are preconditions for guaranteeing that multi-agent systems operate safely, balancing group utilities and system costs in the long term, and satisfying group members' needs in their cooperation. The main interest of this workshop is techniques for modeling cooperative MAS decision-making and learning from a cognitive modeling perspective. It will bring together researchers interested in MAS who use artificial intelligence, mathematical proof, statistical analysis, software simulation, and hardware demonstration to answer questions about making rational decisions and learning efficiently from interactions in multi-agent cooperation.
We solicit contributions from topics including but not limited to:
- MAS cognitive modeling
- Intrinsically motivated AI agent modeling in MAS
- Innate-values-driven reinforcement learning
- MAS deep reinforcement learning
- Multi-objective MAS decision-making and learning
- Adaptive learning with social rewards
- Cognitive models in swarm intelligence and robotics
- Game-theoretic approaches in MAS decision-making
- Consensus in MAS collaboration
- Trust-based MAS decision-making and learning
- Trustworthy AI agents in human-robot interaction
- Cognitive model application in intelligent social systems
Format of Workshop
The workshop will be a full-day workshop with a mix of keynotes, contribution talks, focused discussions, and poster sessions. The morning session will consist of talks and discussions about the challenges of decision-making and learning based on cognitive modeling at cooperative MAS and AI solutions’ vision. The afternoon session will consist of invited talks by various AI experts to discuss algorithmic approaches to various multi-agent/robot decision-making and learning problems and spark discussion on connecting AI technologies with real problems.
Our invited speakers and panelists include Katia Sycara (Carnegie Mellon University), Maria Gini (University of Minnesota Twin Cities), Marco Dorigo (Université Libre de Bruxelles (ULB)), Brian Scassellati (Yale University), Michael L. Littman (Brown University), Christopher Amato (Northeastern University), Matthias Scheutz (Tufts University), Peter Stone (University of Texas at Austin), Sven Koenig (University of Southern California), Bo An (Nanyang Technological University), Marco Pavone (Stanford University) and more.
The attendance is mainly from the AI and robotics community. However, researchers and practitioners whose research might apply to cooperative MAS decision-making and learning or who might be able to use those techniques in their research are welcome.
Submissions can contain relevant work at any stage, including work that was recently published, is under submission elsewhere, was only recently finished, or is still ongoing. Authors of papers published or under submission elsewhere are encouraged to submit these papers or short versions (including abstracts) to the workshop to educate other researchers about their work, as long as resubmissions are clearly labeled to avoid copyright violations.
We welcome contributions of both short (2-4 pages) and long papers (6-8 pages) related to our stated vision in the AAAI 2024 proceedings format. Position papers and surveys are also welcome. The contributions will be non-archival but will be hosted on our workshop website. All contributions will be peer reviewed (single-blind).
Submission Site Information
Contributions are to be submitted to: https://easychair.org/conferences/?conf=aaai2024cmasdlworksh
Affiliation: Computer Science and Information Systems Department, Bradley University
Matthew E. Taylor
Affiliation: Computer Science Department, University of Alberta
Affiliation: Aeronautics and Astronautics Department, Stanford University
Affiliation: Department of Computer Science, University of Texas at Austin
Affiliation: Robotics Engineering Department, Worcester Polytechnic Institute
Affiliation: Computer Science and Information Systems Department, Bradley University
Affiliation: College of Aeronautics and Engineering, Kent State University
More information and submission details can be found on our website:
W14: Deployable AI (DAI)
Artificial Intelligence (AI) has evolved into a vast interdisciplinary research area, and we have reached an era where AI is beginning to be deployed as real-world solutions across various sectors/domains. Over the last few years, Generative AI, in the form of models like GPT-4 and Bard, has not only garnered interest across several sectors and shown tremendous success in various tasks, but has also begun to be applied in various sectors, often in naive ways. Moving to a wider scope of deployment of these AI models in the real world is not simply a matter of translational research and engineering; it also requires addressing several fundamental research questions and issues involving algorithmic, systemic, and societal aspects, while adhering to Responsible AI standards with respect to fairness, ethics, privacy, explainability, and security. This will be the second edition of the DAI Workshop, after the DAI Workshop @ AAAI 2023.
The 2nd Workshop on Deployable AI (DAI 2024) will be held at the AAAI 2024 conference on February 26th, with a special thematic focus on “Responsible AI”. The goal of this workshop is to bring together AI (fundamental and applied) researchers, domain experts from multiple disciplines, and computer/software architects in a single venue to enable and enhance research on getting AI models ready to be deployed in the real world in a responsible manner. The workshop is organized by faculty members and researchers from the Indian Institute of Technology (IIT) Madras and supported by the Robert Bosch Centre for Data Science and AI (RBCDSAI) and the Centre for Responsible AI (CeRAI).
In this workshop, we intend to focus on research that proposes models that can be deployed as real-world solutions and, more importantly, techniques/strategies that enable and ensure the ideal deployment of AI models as real-world solutions while adhering to the various standards and aspects of deployability and Responsible AI. Works may address several aspects and research questions of Deployable/Responsible AI and their implications for each other: for instance, does addressing the robustness of an AI system affect its fairness towards the system's beneficiaries? Does distilling to the edge affect the robustness of the system? DAI Workshop @ AAAI 2023 was the first workshop to put several aspects of deployability together in a single venue, and the quality of discussion at the last event emphasized the need to enable the discovery of synergies across these aspects and of potential conflicts between the different requirements of deployability. This workshop further aims to bring together researchers who can build real-world-ready solutions from multiple disciplines/domains with computer researchers and strategists who can ensure and enable the deployability of such AI-based solutions in a responsible manner. The workshop also intends to serve as an interdisciplinary platform for these researchers to spark interesting discussions that could provoke constructive thought on how we can ensure the responsible design, development, and deployment of AI models.
We are particularly interested in participants who can contribute to theory and techniques/strategies to ensure adherence to the various aspects of deployability of AI models into the real world as socially impactful solutions.
We also invite people to present their already published works in this multidisciplinary platform for knowledge transfer as well as potentially constructive/critical feedback and thought provoking discussions regarding design, development and deployment of AI models and their adherence to Responsible AI standards.
- Deployable AI — Concepts and Models
- Responsible AI
- Language Models & Deployability
- Explainable and Interpretable AI
- Human-in-the-loop AI
- Online Learning and Transfer Learning
- Few Shot Learning Models
- Fairness and Ethics in AI
- Safety, Security and Privacy in AI
- Cryptography and AI
- Integrity and Robustness in AI
- Computational Scalability and Reliability in AI
- AI on the Edge
- Distilled and Lightweight AI Models
- Learning from Drifting Data Distributions
- AI models and social impact
Invited Talks: Talks by eminent researchers in the field.
Contributed Talks: Short talks based on the accepted papers.
Poster session: Poster presentation of all accepted papers.
Panel Discussion: A panel discussion by invited speakers and organizers about the future challenges.
You are invited to submit:
- Poster/short/position papers (up to 4 pages)
- Full papers (up to 7 pages)
The submissions should adhere to the AAAI paper guidelines available at https://aaai.org/aaai-publications/aaai-publication-policies-guidelines/
Submissions can be made on the EasyChair portal using the following link: https://easychair.org/conferences/?conf=dai2024
Accepted submissions will have the option of being posted online on the workshop website or uploaded to arXiv. The submissions need to be anonymized.
Workshop Submissions Due to Organizers: November 24, 2023
Organizers send acceptance/rejection letters to participants: December 11, 2023
Workshop to be held on: February 26th, 2024
- Balaraman Ravindran (Senior Member AAAI, Associate Program Chair AAAI 2023), Chair and main contact, firstname.lastname@example.org, RBCDSAI, Indian Institute of Technology Madras, Chennai, 600036.
- Arun Rajkumar, email@example.com, Department of Computer Science and Engineering, IIT Madras.
- Harish Guruprasad, firstname.lastname@example.org, RBCDSAI, Indian Institute of Technology Madras.
- Chandrashekar Lakshminarayanan, email@example.com, RBCDSAI, Indian Institute of Technology Madras.
- Gokul S Krishnan, firstname.lastname@example.org, CeRAI, Indian Institute of Technology Madras.
- Devika Jay, email@example.com, CeRAI, Indian Institute of Technology Madras.
- Sanjay Karanth, firstname.lastname@example.org, CeRAI, Indian Institute of Technology Madras.
W15: Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs
Developmental AIs acquire competences like human children do. Starting from innate competences, they learn by interacting with objects in the environment, including people and other AI agents.
The mainstream approaches for creating AIs are the deep learning AI approaches (e.g., generative LLMs) and the traditional symbolic AI approach. These approaches have led to valuable AI systems and impressive feats. However, manually constructed AIs are generally brittle even in circumscribed domains. Generative AIs make strange mistakes and do not notice them. In both approaches the AIs cannot be instructed easily, fail to use common sense, and lack curiosity. They have abstract knowledge but lack social alignment.
The promise of developmental AIs is that they will acquire self-developed and socially developed competences like people do. They would address the shortcomings of current mainstream AI approaches, and ultimately lead to sophisticated forms of learning involving critical reading, provenance evaluation, and hypothesis testing.
However, developmental AI projects have not yet fully reached toddler-level competences, corresponding to human development at about two years of age, before speech is fluent. They have not bridged the Reading Barrier: skillfully and skeptically drawing on online information resources like those that power today’s LLMs. This workshop is about the challenges and prospects for creating developmental AIs that are robust and human-compatible.
The Science of Developmental AI (“science”)
- How kids learn and think: findings and experiments from neuroscience, psychology, machine learning, AI, and research on how educational techniques promote learning.
- Competence testing for embodied AIs.
- Efficient and ethical participation by humans in training and testing AIs.
- Learning environments for task-focused teaching, multi-agent free play, training paradigms.
- Perceptual grounding, cognitive grounding, and common grounding.
Bootstrapping Developmental AI (“engineering”)
- Machine learning for embodied AIs.
- Simulators and robot platforms.
- Cognitive bootstrapping trajectories and curriculum design.
- Communication and linguistic competences for speech and reading.
- Techniques for ensuring that developmental AIs learn human-compatible values and drives.
Other topics: Should developing AIs grow physically as they mature cognitively? How should research on bootstrapping developmental AIs account for economic and social issues and opportunities for ubiquitous embodied AIs?
This workshop will include keynote presentations, short research presentations, and interactive working sessions. The workshop will run from 9:00AM to 5:00PM on the workshop days.
Attendance will be limited to about 40 participants.
Submissions can be uploaded via the OpenReview BDAI website https://openreview.net/group?id=AAAI.org/2024/Workshop/BDAI. Applications should include a technical paper or position paper of 4-6 pages.
The conference organizers will select 4-8 workshop participants to give short research talks to seed the working group discussions.
The organizers of this workshop plan to announce a publication venue ahead of the workshop.
Mark Stefik (Workshop point of contact); SRI International – PARC; email@example.com
Prof. Angelo Cangelosi; University of Manchester; firstname.lastname@example.org
Prof. Benjamin Kuipers; University of Michigan; email@example.com
Prof. Celeste Kidd; UC Berkeley; firstname.lastname@example.org
Bob Price; SRI International – PARC; email@example.com
Charles Ortiz; SRI International – PARC; firstname.lastname@example.org
W16: EIW-III: The 3rd Edge Intelligence Workshop on Large Language and Vision Models
The third edition of the Edge Intelligence Workshop (EIW-III) focuses on the edge deployment of large language and vision models and on how to make them more efficient in terms of data, model, training, and inference, especially on edge devices. This is an interdisciplinary research topic that covers the theory, hardware, and software aspects of AI models, targeting large language models (LLMs) and computer vision (CV) models specifically. The workshop program offers an interactive platform for gathering experts and talent from different research areas in academia and industry through invited talks, panel discussions, paper submissions, reviews, interactive posters, and oral presentations.
The scope of this workshop includes, but is not limited to, the following topics: Efficient Pre-training and Fine-tuning; Data Efficiency; Efficient Deployment; Hardware-Aware Acceleration; Memory and Communication Optimization; Other Efficient Applications of LLMs and CV Models.
This is a 1-day workshop with panel discussions, paper submissions, and keynote presentations. Paper submissions will be double-blind. Two one-hour poster sessions are reserved for authors of accepted papers to present their work in more detail and discuss it with attendees. Moreover, we will have one panel discussion, where all invited speakers will engage in a Q&A session with the attendees and the organizers. We also plan to include a best paper award with a prize offered by our industrial sponsors.
We expect over 30 contributed paper submissions and about 50 active participants in the field. Our workshop will attract participants from a large range of fields, including machine learning, deep learning, natural language processing, computer vision, computer hardware, computer software, algorithms, black-box and numerical optimization.
Papers should be no longer than 4 pages in the AAAI 2024 conference format. Already published papers are not encouraged for submission, but you may submit arXiv.org versions of submitted papers. Moreover, work presented at the main AAAI 2024 conference should not be submitted to the workshop. To encourage higher-quality submissions, our sponsors are offering Best Paper and Best Poster Awards to outstanding original oral and poster presentations (upon nomination by the reviewers).
Submission Site Information
Vahid Partovi Nia email@example.com
Warren Gross firstname.lastname@example.org
Vahid Partovi Nia, Huawei Noah’s Ark Lab and Ecole Polytechnique de Montreal, email@example.com
Warren Gross, McGill University, firstname.lastname@example.org
Andrea Lodi, Technion-Cornell Institute Cornell Tech, email@example.com
Shah Rokh Valaee, University of Toronto, firstname.lastname@example.org
Melika Payvand, UZH and ETH Zurich, email@example.com
Mehdi Rezagholizadeh, Huawei Noah’s Ark Lab, firstname.lastname@example.org
Habib Hajimolahoseini, Huawei Toronto Research Centre, email@example.com
Mouloud Belbahri, Layer6AI TDBank, firstname.lastname@example.org
W17: FACTIFY 3.0 – Workshop Series on Multimodal Fact-Checking and Hate Speech Detection
FACTIFY 3.0 is a workshop series on multimodal fact-checking and hate speech detection. We are organizing two shared tasks, namely Factify 3.0 and Dehate. We (tentatively) plan to invite: Prof. Eduard Hovy, Professor at the Language Technologies Institute, Carnegie Mellon University; Prof. Yejin Choi, Professor at the University of Washington; and Zhe Gan, a Staff Research Scientist at Apple AI/ML who has worked on large-scale multimodality.
The length of the workshop is 1 day.
- Long papers: Novel, unpublished, high quality research papers. 10 pages excluding references.
- Short papers: 5 pages excluding references.
- Previously rejected papers: You can attach reviewer comments from previously rejected papers (AAAI, NeurIPS) and a 1-page cover letter explaining the changes made.
- Extended abstracts: 2 pages excluding references. Non-archival. These can be previously published papers or work in progress.
- All papers must be submitted via our EasyChair submission page.
- Regular papers will go through a double-blind peer-review process. Extended abstracts may be either single blind (i.e., reviewers are blind, authors have names on submission) or double blind (i.e., authors and reviewers are blind). Only manuscripts in PDF or Microsoft Word format will be accepted.
- Paper template: http://ceur-ws.org/Vol-XXX/CEURART.zip or https://www.overleaf.com/read/gwhxnqcghhdt
- Amitava Das, AI Institute, University of South Carolina, email@example.com
- Amit P. Sheth, AI Institute, University of South Carolina, firstname.lastname@example.org
- Aman Chadha, Stanford AI, Amazon, email@example.com
- Asif Ekbal, IIT Patna
- Anku Rani, AI Institute, USC, firstname.lastname@example.org
- Parth Patwa, UCLA, email@example.com
- Suryavardan Suresh, NYU
- Megha Chakraborty, AI Institute, USC, firstname.lastname@example.org
W18: Graphs and more Complex Structures For Learning and Reasoning (GCLR)
In today’s rapidly evolving technological landscape, we confront the intricate challenges of complex systems head-on. Simple graph-based modeling often fails to capture the inherent complexities of such systems, so we turn to a diverse array of complex graph structures: knowledge graphs, attributed graphs, multilayer graphs, hypergraphs, and more. These structures provide more accurate representations of these intricate systems.
In the midst of this complexity, the importance of trustworthy AI, particularly in foundational model research, cannot be overstated. Ensuring ethical, explainable, and fair AI aligns perfectly with the nuances of complex systems. Trustworthy AI hinges on our ability to understand and make transparent AI algorithms that grapple with intricate interactions within these systems. Simultaneously, the reliability of foundation models plays a pivotal role in various AI applications reliant on complex graph-based data.
This workshop aims to bring researchers from these diverse but related fields together and spark interesting discussions on new, challenging applications that require complex system modeling and on discovering ingenious reasoning methods. We have invited several distinguished speakers whose research interests span the theoretical to experimental aspects of complex networks.
We invite submissions from participants who can contribute to the theory and applications of modeling complex graph structures such as hypergraphs, multilayer networks, multi-relational graphs, heterogeneous information networks, multi-modal graphs, signed networks, bipartite networks, temporal/dynamic graphs, etc. The topics of interest include, but are not limited to:
- Fairness-aware Learning in Complex Graphs
- Benchmarking Foundation Models with Complex Data
- Privacy Preservation in Complex Graphs
- Causal Inference and Complex Networks
- Knowledge Graph-enhanced Foundation Models
- Theoretical analysis of graph algorithms or models
- Network representation learning and manifold embedding methods
- Optimization methods for graphs/manifolds
- Link analysis/prediction, node classification, clustering for complex graph structures
- Probabilistic and graphical models for structured data
- Knowledge graph construction
- Social network analysis and measures
- Constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP)
The papers will be presented in poster format and some will be selected for oral presentation.
Through invited talks and presentations by the participants, this workshop will bring together current advances in network science and machine learning, with a focus on trustworthy AI and foundation models, and set the stage for continuing interdisciplinary research discussions.
- Poster/short/position papers submission deadline: Nov 17, 2023
- Full paper submission deadline: Nov 17, 2023
- Paper notification: Dec 11, 2023
We invite submissions to the AAAI-24 workshop on Graphs and more Complex structures for Learning and Reasoning to be held on February 26 or 27, 2024. We welcome the submissions in the following two formats:
- Poster/short/position papers: We encourage participants to submit preliminary but interesting ideas that have not been published before as short papers. These submissions would benefit from additional exposure and discussion that can shape a better future publication. We also invite papers that have been published at other venues to spark discussions and foster new collaborations. Submissions may consist of up to 4 pages plus one additional page solely for references.
- Full papers: Submissions must represent original material that has not appeared elsewhere for publication and that is not under review for another refereed publication. Submissions may consist of up to 7 pages of technical content plus up to two additional pages solely for references.
The submissions should adhere to the AAAI paper guidelines available at https://aaai.org/aaai-conference/aaai-24-call-for-proposals/.
Accepted submissions will have the option of being posted online on the workshop website. For authors who do not wish their papers to be posted online, please mention this in the workshop submission. The submissions need to be anonymized.
See the webpage https://sites.google.com/view/gclr2024/submissions for detailed instructions and the submission link.
- Format of the workshop: This is a 1-day workshop involving talks by pioneering researchers from the respective areas, poster presentations, and short talks on accepted papers.
- Attendance: The eligibility criteria for attending the workshop will be registration in the conference/workshop as per AAAI norms. We expect 50-65 people in the workshop.
- Workshop Chair: Balaraman Ravindran
- Affiliation: Indian Institute of Technology Madras, India
- Email: email@example.com
- Workshop Committee:
- Balaraman Ravindran, Indian Institute of Technology Madras, India; primary contact (firstname.lastname@example.org)
- Ginestra Bianconi, Queen Mary University of London, UK (email@example.com)
- Philip S. Chodrow, Middlebury College, USA (firstname.lastname@example.org)
- Nitesh Chawla, University of Notre Dame, USA (email@example.com)
- Tarun Kumar, Hewlett Packard Labs, Bengaluru, India (firstname.lastname@example.org)
- Deepak Maurya, Purdue University, USA (email@example.com)
- Revathy Venkataramanan, Univ. of Southern California and Hewlett Packard Labs, USA (firstname.lastname@example.org)
- Rucha Bhalachandra Joshi, NISER Bhubaneswar, India (email@example.com)
W19: Health Intelligence (W3PHIAI-24)
The integration of information from now widely available -omics and imaging modalities at multiple time and spatial scales with personal health records has become the standard of disease care in modern public health. Moreover, given the ever-increasing role of the World Wide Web as a source of information in many domains, including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. Advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.
Furthermore, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All these changes require novel solutions, and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks.
The workshop will include original contributions on theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications, with a focus on applications in population and personalized health. The scope of the workshop includes, but is not limited to, the following areas:
- Knowledge Representation and Extraction
- Integrated Health Information Systems
- Patient Education
- Patient-Focused Workflows
- Shared Decision Making
- Geographical Mapping and Visual Analytics for Health Data
- Social Media Analytics
- Epidemic Intelligence
- Predictive Modeling and Decision Support
- Semantic Web and Web Services
- Biomedical Ontologies, Terminologies, and Standards
- Bayesian Networks and Reasoning under Uncertainty
- Temporal and Spatial Representation and Reasoning
- Case-based Reasoning in Healthcare
- Crowdsourcing and Collective Intelligence
- Risk Assessment, Trust, Ethics, Privacy, and Security
- Sentiment Analysis and Opinion Mining
- Computational Behavioral/Cognitive Modeling
- Health Intervention Design, Modeling and Evaluation
- Online Health Education and E-learning
- Mobile Web Interfaces and Applications
- Applications in Epidemiology and Surveillance (e.g., Bioterrorism, Participatory Surveillance, Syndromic Surveillance, Population Screening)
- Hybrid methods, combining data-driven and predictive forward models
- Response to Covid-19
- Computational models of ageing
In addition to presentations, posters, and demos, we invite participants to present in a special track focused on social determinants of health and equitable healthcare. This special track will highlight work using AI to address the inequalities experienced across healthcare systems.
We invite workshop participants to submit their original contributions following the AAAI format through EasyChair. Three categories of contributions are sought: full-research papers up to 8 pages; short papers up to 4 pages; and posters and demos up to 2 pages. Submissions to the special track can be either full or short papers, and we ask authors to select the special track option during submission.
Martin Michalowski, PhD, FAMIA (Co-chair), University of Minnesota; Arash Shaban-Nejad, PhD, MPH (Co-chair), The University of Tennessee Health Science Center – Oak-Ridge National Lab (UTHSC-ORNL) Center for Biomedical Informatics; Simone Bianco, PhD (Co-chair), Altos Labs – Bay Area Institute of Science; Szymon Wilk, PhD, Poznan University of Technology; David L. Buckeridge, MD, PhD, McGill University; John S. Brownstein, PhD, Boston Children’s Hospital
W20: Human-Centric Representation Learning (HCRL)
Representation learning has become a key research area in artificial intelligence, with the goal of automatically learning meaningful representations of data for a wide range of tasks. However, existing approaches often fail to consider the human perspective, leading to representations that may not be interpretable or relevant to both models and humans. Indicatively, in self-supervised learning, existing models operate on the sample level and do not account for multiple views/modalities belonging to the same person. We invite researchers, practitioners, and industry experts to submit original research papers on all aspects of representation learning, with a focus on human-centric data beyond commonly used ML benchmarks.
- Effectiveness of self-supervised, semi-supervised, or supervised representation learning approaches in a human-centric context, such as through user studies or benchmarking experiments
- Learning and fine-tuning with human feedback and interaction (e.g., human-in-the-loop systems such as RLHF).
- Efficacy of multimodal data in learning approaches, including the integration of visual, audio, time-series, and text data sources.
- Representation learning for novel and underrepresented data sources.
- Explainable and interpretable aspects of the learned representations.
- Novel ways of encoding non-language data into pre-trained models and LLMs.
- Human-centric applications: Speech and audio processing, pose estimation, affective computing, activity recognition, biosignal analysis (ECG, EEG, EMG, PPG, EDA, and others), electronic health records, imaging, and wearable data.
All papers should be a maximum of 4 pages in length, plus additional pages for references and supplementary materials, using AAAI’s template. Publication in the workshop is considered non-archival but all accepted papers will be hosted on our website (with permission). We welcome submissions currently under consideration in other venues. Submissions will go through a double-blind review process.
Format: Our 1-day workshop will include oral presentations, posters, invited keynotes, and a panel.
Attendance: The workshop will be open to everyone registered. In case of increased interest, authors with accepted work will be prioritized.
Submission Site Information: https://cmt3.research.microsoft.com/HCRL2024
- Dimitris Spathis, Nokia Bell Labs & University of Cambridge (firstname.lastname@example.org)
- Aaqib Saeed, Eindhoven University of Technology (email@example.com)
- Ali Etemad, Queen’s University (firstname.lastname@example.org)
- Stefanos Laskaridis, Brave (email@example.com)
- Chi Ian Tang, Nokia Bell Labs (firstname.lastname@example.org)
- Patrick Schwab, GSK (email@example.com)
- Shohreh Deldari, University of New South Wales (firstname.lastname@example.org)
- Shyam Tailor, Google (email@example.com)
- Sana Tonekaboni, University of Toronto (firstname.lastname@example.org)
W21: Imageomics: Discovering Biological Knowledge from Images using AI
Imageomics is an emerging scientific field that uses images, ranging from microscopic cell images to videos of charismatic megafauna, to automatically extract biological information, specifically traits, for understanding the evolution or function of living organisms. A central goal of Imageomics is to make traits computable from images by grounding AI models in existing scientific knowledge. The goal of this workshop is to nurture the community of researchers working at the intersection of AI and biology and shape the vision of the nascent yet rapidly growing field of Imageomics.
We encourage participation on a broad range of topics that explore AI/ML techniques to understand characteristic patterns of organisms from image or video data. Examples of research questions include (but are not limited to): (1) What are the types and characteristics of knowledge and data in biology that can be integrated into AI methodologies, and what are the mechanisms for this integration? (2) How best can new knowledge exposed by ML be translated back into the knowledge corpus of biology? (3) How best can we inform and catalyze a community of practice to utilize and build upon Imageomics to address grand scientific and societal challenges? (4) How can foundation models in vision and language impact biology or benefit from biological knowledge?
Our half-day workshop will include keynote/invited talks, contributed paper presentations, a poster session, and a panel discussion.
We welcome participation from anyone interested in learning about the field of Imageomics, including (a) biologists working on problems with image data and biological knowledge available such as phylogenies, taxonomic groupings, ontologies, or evolutionary models, and (b) AI researchers working on topics such as explainability, generalizability, inductive bias, open world and fine-grained recognition, foundation models, and novelty detection, who are looking for novel interdisciplinary research problems.
We are accepting paper submissions for position, review, or research articles as short papers (2-4 pages, excluding references). Shorter versions of articles in submission or accepted at other venues are acceptable as long as they do not violate the dual-submission policy of the other venue. All submissions will undergo peer review, and authors will have the option to publish their work in arXiv proceedings.
Submission Site Information
Anuj Karpatne (email@example.com, primary contact), Yu Su (firstname.lastname@example.org), Wei-Lun Chao (email@example.com), Charles Stewart (firstname.lastname@example.org), Tilo Burghardt (email@example.com), Tanya Berger-Wolf (firstname.lastname@example.org)
W22: Large Language Models for Biological Discoveries
Rapid advances in large language models (LLMs) provide an unprecedented opportunity to further scientific inquiry across disciplines. Despite remarkable feats in natural language tasks, the potential of LLMs beyond natural language has yet to be realized. This workshop brings together diverse researchers from computer science, information science, and molecular, cellular, and systems biology to focus on the unique challenges of applying LLMs to advance biological discoveries. Objectives include formulating new problem spaces, standardized datasets, and community-accepted benchmarks; accounting for experimental error and quantifying uncertainty; and injecting prior biological knowledge. The workshop will additionally address how progress purchased by scale leaves many academic researchers out of important discoveries, and will ask how we can make such research accessible and inclusive to power innovation at the intersection of LLMs and biology.
Topics are organized around three research themes:
- Foundational Models
  - Focus on accessible, lightweight models, including beyond-attention paradigms
- Bridging the Gap: LLMs for Biological Problems
  - Emphasis on recent trends of adopting and adapting foundational models for biological research
- Next Scientific Breakthroughs
  - Focus on new problems, standardized datasets, and benchmark metrics
Format of Workshop
The workshop is planned for half a day.
We will structure the workshop into three sessions. The first session will have invited talks from high-impact researchers. The second session will contain presentations by authors of accepted papers. These will vary in length depending on the submission type. The final session will contain a panel discussion and feature both senior and up-and-coming researchers. The invited talks, the paper presentations, and the panel discussion will reflect the three main themes of the workshop.
Invited speakers and other attendees will fall into three groups:
- foundational LLM researchers
- biological researchers who have started to utilize LLMs
- biological researchers with a track record in ML but not LLMs
This is the first offering of this workshop, the first of its kind. We hope to attract at least 50 attendees and do not expect to exceed 100.
To reflect the disciplinary diversity, we will encourage submissions of varying length:
- 1-page position papers
- 4-page papers with focus on breaking results, datasets, benchmarks
- 6-8-page papers for more detailed investigations.
Submission Site Information:
Authors will submit at: https://easychair.org/my/conference?conf=llms4bio24
The submission site will be listed both in the CFP and our workshop website https://llms4science-community.github.io/aaai2024.html.
We will handle author submission enquiries at email@example.com.
All accepted papers will be published at https://github.com/LLMs4Science-Community.
Amarda Shehu, firstname.lastname@example.org
Amarda Shehu, George Mason University, email@example.com
Yana Bromberg, Emory University, firstname.lastname@example.org
Liang Zhao, Emory University, email@example.com
W23: Learnable Optimization (LEANOPT)
The AAAI Workshop on Learnable Optimization (LEANOPT) builds on the momentum generated over the past six years, in both the operations research (OR) and machine learning (ML) communities, towards establishing modern ML methods as a “first-class citizen” at all levels of the OR toolkit.
While much progress has been made, many challenges remain due in part to data uncertainty, the hard constraints inherent to OR problems, and the high stakes involved. LEANOPT will serve as an interdisciplinary forum for researchers in OR and ML to discuss technical issues at this interface and present new ML approaches and software tools that accelerate classical optimization algorithms (e.g., for continuous, combinatorial, mixed-integer, stochastic optimization) as well as novel applications.
LEANOPT will place particular emphasis on:
- Learning to optimize (L2O) methods for solving constrained optimization problems.
- Predict-then-optimize/decision-focused learning.
- ML for heuristic and exact algorithms.
- New Graph Neural Networks for solving constrained optimization problems.
- Reinforcement Learning approaches for dynamic decision-making.
- New applications that can benefit from learnable optimization under uncertainty.
We invite researchers to submit extended abstracts (2 pages) describing novel contributions and preliminary results on the topics above. Submissions tackling new problems or more than one of the aforementioned topics simultaneously are encouraged.
We aim to accommodate an audience of up to 50 attendees. The attendees will be a mix of workshop organizers, invited speakers, and invited researchers with accepted abstracts.
We are planning an in-person workshop to be held at AAAI-24. LEANOPT will be a one-day workshop consisting of a mix of events: multiple invited talks by recognized speakers from both OR and ML covering central theoretical, algorithmic, and practical challenges at this intersection; a poster session for accepted abstracts; and a hands-on programming session featuring two open-source libraries, NeuroMANCER and PyEPO.
Jan Drgona (Pacific Northwest National Laboratory)
Elias B. Khalil (University of Toronto), Pascal Van Hentenryck (Georgia Institute of Technology), Jan Drgona (Pacific Northwest National Laboratory), Ferdinando Fioretto (Syracuse University), Draguna Vrabie (Pacific Northwest National Laboratory), Priya Donti (Massachusetts Institute of Technology)
W24: Machine Learning for Cognitive and Mental Health
This workshop has three primary goals:
- bring together experts from multiple disciplines working on machine learning (ML) and cognitive and mental health (CMH) to learn from each other,
- encourage the development of shared goals and approaches across these communities, and
- stimulate creation of better multimodal technologies for real-world CMH impact
To achieve these goals, our workshop will invite diverse research combining different fields of ML and CMH, including computer vision (CV), natural language processing (NLP), multimodal learning, signal processing, human-computer interaction, neuroscience, psychiatry, and psychology. We especially encourage submissions of research combining multiple data sources and multimodal learning for CMH. Recently, a few works have started to exploit the synchronization of multimodal streams to improve prediction of patient status or response to treatment in CMH applications.
This workshop encourages and promotes research efforts towards more inclusive multimodal technologies for CMH applications and tools to assess those methods. We invite papers on topics of interest that include (but are not limited to):
- Datasets and benchmarks (e.g. speech, videos, EEG, fMRI, wearable sensors) for CMH
- Multi-task learning for CMH
- Multimodal or cross-modal learning for CMH combining modules such as imaging, language, speech, videos, genomic data, spatial-temporal data, EHR records
- Evaluation and analysis of models for CMH
- Interpretability of ML models for CMH
- Bias and fairness of ML models for CMH
- Multilingual machine learning for CMH
- CMH disease classification and prediction
- CMH biomarkers for measuring response to treatments
- CMH disease model-building and clinical decision support
- Distributed and federated learning for CMH applications
The workshop will be 1-day long including keynote speakers, oral and poster, panel and discussion (Birds of a Feather) sessions. You can see program details here: https://winterlightlabs.github.io/ml4cmh2024/program/index.html.
We are inviting submissions of short papers (4 pages) or long papers (8 pages). Authors can submit an unlimited number of pages for references and supplementary material, but supplementary material will not necessarily be reviewed. All submissions must be fully anonymized to preserve the double-blind reviewing policy; insufficiently anonymized submissions may be desk-rejected. Submit your work through OpenReview. The deadline for paper submission is November 24th, 2023, AOE. Use a paper template from the AAAI 2024 Author Kit and select keywords from this keyword list.
In addition, we will host a mentorship program to increase reach and to help researchers from across the world who are new to this field improve the quality of their papers before the submission deadline. If you are interested in the mentorship program, please submit your work before November 1st, 2023, AOE; you will be assigned a mentor who will review your paper and give you feedback by November 10th, leaving you enough time to improve your paper before the final submission. The reviewers of your final paper will be different from your mentor.
Marija Stanojevic (Winterlight Labs), Elizabeth Shriberg (Ellipsis Health), Paul Pu Liang (Carnegie Mellon University), Jelena Curcic (Novartis Institute for Biomedical Research), Zining Zhu (University of Toronto), Malikeh Ehghaghi (Lavita), Ali Akram (Winterlight Labs)
W25: Neuro-Symbolic Learning and Reasoning in the era of Large Language Models (NuCLeaR)
We are thrilled to announce the workshop on Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models (NuCLeaR) at AAAI 2024, to be held on February 26-27, 2024. This workshop aims to provide a dedicated platform for researchers to present and share their cutting-edge advancements in the next generation of neuro-symbolic AI (NSAI). By creating an environment conducive to knowledge exchange and the exploration of innovative ideas, we aim to foster collaboration and inspire new breakthroughs.
We invite submissions on topics and questions related to NSAI, including but not limited to:
- Survey of recent NSAI methods and applications
- Neuro-symbolic approaches for learning and reasoning
- Programming languages and tools for integration of learning and reasoning formalisms
- Declarative languages for ML
- Using probabilistic inference or logical inference in training deep models
- Deep learning and logical reasoning over structured and relational data
- Integration of non-differentiable optimization models in learning
- NSAI role in grounding LLMs, and enhancing performance
- Data efficiency, scalability, and evaluation benchmarks in NSAI
- Integration of expert knowledge in learning
- Related topics, e.g., neural program synthesis, program induction, concept learning, compositional generalization, and multimodal applications
Format of Workshop:
This will be a 1.5 day workshop (Day 1: NSAI methods, Day 2: NSAI applications), with 8 to 10 invited talks, and 10-15 contributed talks from the top-ranked submitted papers. All contributed papers will have the option to join the poster session. The workshop will have 2 panel discussions, at the end of each day.
Anyone interested in AI is welcome to join the workshop; there are no prerequisites. There is no maximum number of attendees other than the capacity of the room.
Papers must be formatted in AAAI two-column, camera-ready style (see https://aaai.org/authorkit24-2/).
All submissions should be anonymous to enable double-blind review. Submissions may consist of 4-8 pages for a full paper. We also allow a 2-page extended abstract for work that has already been published.
Submission Site Information:
- Day 1 – am: Alexander Gray, firstname.lastname@example.org
- Day 1 – pm: Pranava Madhyastha, email@example.com
- Day 2 – am: Asim Munawar, firstname.lastname@example.org
- Asim Munawar, IBM Research, email@example.com
- Elham J. Barezi, Michigan State University, firstname.lastname@example.org
- Pranava Madhyastha, City University of London, email@example.com
- Abulhair Saparov, New York University, firstname.lastname@example.org
- Alexander Gray, IBM Research, email@example.com
W26: Privacy-Preserving Artificial Intelligence
The rise of machine learning, optimization, and Large Language Models (LLMs) has created new paradigms for computing, but it has also ushered in complex privacy challenges. The intersection of AI and privacy is not merely a technical dilemma but a societal concern that demands careful consideration.
The Privacy Preserving AI workshop, in its 5th edition, will provide a multi-disciplinary platform for researchers, AI practitioners, and policymakers to focus on the theoretical and practical aspects of designing privacy-preserving AI systems and algorithms. The emphasis will be placed on policy considerations, broader implications of privacy in LLMs, and the societal impact of privacy within AI.
We invite three categories of contributions: technical (research) papers, position papers, and systems descriptions on these subjects:
- Differential privacy applications
- Privacy and Fairness interplay
- Legal frameworks and privacy policies
- Privacy-centric machine learning and optimization
- Benchmarking: test cases and standards
- Ethical considerations of LLMs on users’ privacy
- The impact of LLMs on personal privacy in various applications like chatbots, recommendation systems, etc.
- Case studies on real-world privacy challenges and solutions in deploying LLMs
- Privacy-aware evaluation metrics and benchmarks specifically for LLMs
- Interdisciplinary perspectives on AI applications, including sociological and economic views on privacy
- Evaluating models to audit and/or minimize data leakages
The workshop will be a one-day meeting. It will include a number of technical sessions; a poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited speakers covering crucial challenges for the field of privacy-preserving AI applications; a tutorial talk; and roundtables. It will conclude with a panel discussion.
Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-24 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentation at the workshop.
Submission site: https://cmt3.research.microsoft.com/PPAI2024
- November 15, 2023 – Submission Deadline
- December 12, 2023 – NeurIPS/AAAI Fast Track Submission Deadline
- December 22, 2023 – Acceptance Notification
- January 10, 2024 – Student Scholarship Program Deadline
- February 26, 2024 – Workshop Date
Ferdinando Fioretto, firstname.lastname@example.org (University of Virginia)
Ferdinando Fioretto, email@example.com (University of Virginia)
Juba Ziani, firstname.lastname@example.org (Georgia Institute of Technology)
Christine Task, email@example.com (Knexus Corporation)
Niloofar Mireshghallah, firstname.lastname@example.org (University of Washington)
Pascal Van Hentenryck, email@example.com (Georgia Institute of Technology)
Supplemental workshop site: https://ppai-workshop.github.io/
W27: Public Sector Large Language Models: Algorithmic and Sociotechnical Design
Just as pre-trained Large Language Models (LLMs) like GPT-4 have revolutionized the field of AI with their remarkable capabilities in natural language understanding and generation, LLM-powered systems also see great potential for contributing to the wellbeing of our society through public sector applications, which often feature governmental, publicly-funded and non-profit organizations. This workshop brings together researchers and practitioners from AI, HCI, and social sciences to present their research around LLM-powered tools in the context of public sector applications.
Track 1: Developing LLM-powered tools for positive outcomes.
This track welcomes use-inspired research that develops LLM-powered tools to make positive real-world impact in the public sector. Application domains include, but are not limited to, education, urban planning, public health, agriculture, environmental sustainability, and social welfare and justice. We also welcome submissions that tackle the common challenges in public sector LLM research, such as: costly data collection, significant problem scoping process with domain experts, impact evaluation, high-stake domains yet far less resources (budget, allocated staff effort, AI expertise), and vulnerable stakeholders. We particularly encourage submissions that report on practically realizing these LLM-based systems in the real world, success stories and lessons learned.
Track 2: Designing, deploying and evaluating LLM-powered services with directly impacted communities.
This track invites submissions that investigate how LLM-powered systems fit (or misfit) into the human organizations that implement them, as well as how those systems might directly impact community stakeholders who are on the receiving end of those services. Topics of interest include, but are not limited to, participatory design on LLM-based tools/systems, understanding public sector’s needs and challenges of integrating LLMs, interfacing LLMs with public sector organizations and end users, human oversight of LLMs, the role of LLMs in social justice and equity. We particularly encourage submissions from HCI and social science communities, as well as public sector organizations.
This will be a full-day workshop with several invited talks, panel discussions, a community session featuring public sector organizations, a number of spotlight talks, and a poster session.
Attendance is open to all. At least one author of each accepted submission is expected to present at the workshop.
We welcome both novel work and recently published papers. Submissions are non-archival. Details can be found on workshop website.
Ryan Shi (University of Pittsburgh), Hong Shen (Carnegie Mellon University), Sera Linardi (University of Pittsburgh), Lei Li (Carnegie Mellon University), Fei Fang (Carnegie Mellon University).
For any enquiry please contact firstname.lastname@example.org.
Submission Site Information
W28: Recommendation Ecosystems: Modeling, Optimization and Incentive Design
The workshop centers on the multi-faceted landscape of Recommender Ecosystems (RESs), which couple the behaviors of users, content providers, vendors, and advertisers to determine the long-term behavior of the platform. While prevalent in numerous online products, the modeling, learning, and optimization technologies traditionally used in recommenders prioritize interactions with a single user. Recent research has delved into multi-agent dynamics and economic interactions within RESs, encompassing areas like fairness, popularity bias, market design, social dynamics, and more. Despite its significance, this research remains fragmented across various academic domains. This workshop aspires to bridge these communities, emphasizing the convergence of diverse topics like game-theoretic models, AI techniques, and social dynamics to holistically comprehend RESs. By fostering interdisciplinary dialogue, the workshop aims to spotlight the complexities of RESs, engendering fresh insights and solutions.
Topics of interest include, but are not limited to:
- Game-theoretic models and mechanism design in recommender systems
- State of the art techniques, like generative models or reinforcement learning, for promoting the health and diversity of recommender ecosystems.
- Social dynamics, filter bubbles, and polarization in recommender systems
- Fairness and bias in recommender systems.
- Multi-stakeholder recommendation, including users, businesses, and advertisers
- Understanding user/creator/vendor behavior and incentives in recommender ecosystems, and their interactions.
- Interdisciplinary approaches to the study of agent interactions in recommender systems (e.g., incorporating behavioral psychology, sociology, and economics insights).
This one-day workshop will include keynotes by five invited speakers, short spotlight talks for selected accepted papers, Q&A with the speakers, and a poster session of all accepted papers. See the workshop website for an up-to-date schedule.
Attendance is open to all registered participants.
We welcome submissions of long (up to 8 pages), short (up to 4 pages), and position (up to 4 pages) papers. Submissions must be formatted in the AAAI format and anonymized for double blind review.
Submission Site: Submissions will be collected through OpenReview, see the webpage for details: https://sites.google.com/view/recommender-ecosystems/home
Omer Ben-Porat (Assistant Professor at Technion—Israel Institute of Technology, email@example.com)
Sarah Dean (Assistant Professor at Cornell, firstname.lastname@example.org)
Martin Mladenov (Research Scientist at Google Research, email@example.com)
Guy Tennenholtz (Research Scientist at Google Research, firstname.lastname@example.org)
Craig Boutilier (Principal Scientist at Google Research)
Robin Burke (Professor at University of Colorado)
Anca Dragan (Associate Professor at UC Berkeley)
Mounia Lalmas (Senior Director of Research at Spotify)
David Parkes (Professor at Harvard University)
Moshe Tennenholtz (Professor at Technion—Israel Institute of Technology)
W29: Responsible Language Models (ReLM)
The objectives of this workshop are to facilitate collaboration between NLP researchers, domain experts, and industry to explore strategies for the responsible and safe utilization of large language models; examine risks of bias in LLMs from diverse perspectives; integrate technological insights with policy perspectives to enable comprehensive understanding and dialogue around responsible LLM development; and advocate for implementation of policies and standardized protocols for responsible LLM deployment. The goals are to promote multi-disciplinary dialogue, identify and mitigate risks, establish best practices, and shape policy to ensure LLMs are developed and deployed ethically, safely, and for the benefit of society.
We are interested in, but not limited to, the following topics: explainability and interpretability techniques for different LLM training paradigms; privacy, security, data protection and consent issues for LLMs; bias and fairness quantification, identification, mitigation and trade-offs for LLMs; robustness, generalization and shortcut learning analysis and mitigation for LLMs; uncertainty quantification and benchmarks for LLMs; ethical AI principles, guidelines, dilemmas and governance for responsible LLM development and deployment.
Format of Workshop
This is a one-day workshop. For more details, please refer to the workshop website.
Attendance: Approximately 60.
All papers will undergo double-blind peer review. Submissions can be long papers of 8 pages or short papers of 4 pages (excluding references and appendix). One additional content page is allowed for the camera-ready version.
Supplementary material such as appendices, proofs, and derivations may be attached to the paper; however, reviewers are not obligated to review it. Both archival and non-archival submissions are accepted. Non-archival submissions can include ongoing work and can be subsequently or concurrently submitted to other venues. The archival status need only be determined after a paper has been accepted.
Submission Site Information: TBD, which will be posted later on the workshop website.
Faiza Khan Khattak (Vector Institute for AI, email@example.com), Lu Cheng (UIC, firstname.lastname@example.org), Sedef Akinli-Kocak (Vector Institute for AI, email@example.com), Mengnan Du (NJIT, firstname.lastname@example.org), Fengxiang He (University of Edinburgh, F.He@ed.ac.uk), Bo Li (UChicago, email@example.com), Blessing Ogbuokiri (York University, firstname.lastname@example.org), Shaina Raza (Vector Institute for AI, email@example.com), Laleh Seyed-Kalantari (York University, firstname.lastname@example.org), Yihang Wang (UChicago, email@example.com), Xiaodan Zhu (Queen’s University, firstname.lastname@example.org), Graham W. Taylor (University of Guelph, email@example.com).
W30: Scientific Document Understanding
Due to the fast growth of scientific publications, keeping abreast of new findings and recognizing unsolved challenges are becoming more difficult for researchers in various fields. It is thus necessary to be equipped with state-of-the-art technologies to effectively combine precious findings from diverse scientific documents into one easily accessible resource. Due to its importance, there have been some efforts to achieve this goal for scientific document understanding (SDU). However, despite all of the recent progress, the fragmented research focusing on different aspects of this domain necessitates a forum for researchers from different perspectives to discuss achievements, new challenges, new resource requirements, and impacts of scientific document understanding on various fields. Furthermore, the recent introduction of advanced resources and tools designed for the processing of scientific documents, such as large language models (LLMs) and generative AI systems like Galactica (Taylor et al. 2022) and Med-PaLM (Singhal et al. 2023), opens up new opportunities to advance research and applications of scientific document understanding. The SDU workshop is thus designed to specifically address these gaps for the scientific community. In addition to the recent focus on scholarly text processing and document understanding in natural language processing, this workshop extends SDU to other scientific areas, including but not limited to scientific image processing, automatic programming, knowledge graph manipulation, and data management. We hope that this workshop will foster collaborations with researchers working on different scientific and AI areas for SDU.
Topics of Interest
The goal of the SDU workshop is to gather insights into recent advances and remaining challenges in scientific document understanding (SDU). To this end, the topics of interest for this workshop include but are not limited to:
- Information extraction and information retrieval for scientific documents
- Question answering and question generation for scholarly documents
- Word sense disambiguation, acronym identification, expansion, and definition extraction
- Developing LLMs/generative AI models specific to scientific domains
- Instruction tuning and in-context learning with LLMs for scientific documents
- Document summarization, text mining, document topic classification, and machine reading comprehension for scientific documents
- Graph analysis applications including knowledge graph construction and representation, graph reasoning, and querying knowledge graphs
- Multi-modal and multi-lingual scholarly text processing
- Biomedical image processing, scientific image plagiarism detection, and data visualization
- Code/Pseudo-code generation from text and image/diagram captioning
- New language understanding resources such as new syntactic/semantic parsers, language models, or techniques to encode scholarly text
- Survey or analysis papers on scientific document understanding and new tasks and challenges related to each scientific domain
- Factuality, data verification, and anti-science detection
SDU will be a one-day workshop with an expected 50 attendees. The full-day workshop will start with opening remarks, followed by research paper presentations in the morning. The post-lunch session includes invited talks and a panel discussion on resources, findings, and upcoming challenges for SDU. We will end the workshop with closing remarks.
Authors are invited to submit their unpublished work that represents novel research. The papers should be written in English using the CEUR Template and follow the CEUR Workshop Proceedings formatting guidelines. Authors can also submit supplementary materials, including technical appendices, source codes, datasets, and multimedia appendices. All submissions, including the main paper and its supplementary materials, should be fully anonymized. All papers will be double-blind peer-reviewed. SDU accepts two types of papers (Note: we don’t enforce any hard page limit):
- Long technical papers with a recommended length of up to 10 pages + references
- Short papers with a recommended length between 3 and 5 pages + references
Two reviewers with relevant technical expertise will review each paper. Authors of accepted papers will present their work in either the oral or poster session. At least one author of each accepted paper should register for the conference and present the work at the workshop. Submissions should be made electronically in PDF format via Microsoft CMT; SDU will not accept submissions through other mechanisms such as email.
- Mihir Parmar, Arizona State University, firstname.lastname@example.org
- Thien Huu Nguyen, University of Oregon, email@example.com
- Chien Van Nguyen, University of Oregon, firstname.lastname@example.org
- Franck Dernoncourt, Adobe Research, email@example.com
W31: Sustainable AI
While AI has made remarkable achievements across various domains, there remain legitimate concerns regarding its sustainability. The pursuit of enhanced accuracy in tackling large-scale problems has led to the adoption of increasingly deep neural networks, resulting in elevated energy consumption and the emission of carbon dioxide, which contributes to climate change. As an illustration, researchers estimated that the training of a state-of-the-art deep learning NLP model alone generated approximately 626,000 pounds of carbon dioxide emissions. The environmental sustainability of AI is not the only concern; its societal impact is equally significant. Ethical considerations surrounding AI, including fairness, privacy, explainability, and safety, have gained increasing attention. For instance, biases and privacy issues associated with AI can limit its widespread application in various domains. Furthermore, AI has the potential to make a profound societal impact by directly addressing contemporary sustainability challenges. Climate modeling, urban planning, and design (for mitigating urban heat islands or optimizing renewable energy deployment), as well as the development of green technologies (such as advanced battery materials or optimized wind/ocean turbine design), are areas where AI techniques can be extensively applied. Leveraging AI in these areas is crucial for ensuring a net benefit to sustainability.
Submissions should explicitly focus on the sustainability perspective, encompassing
- Environmental Sustainability
- AI algorithms for different vertical sustainable domains, e.g., smart grid scheduling, climate modeling, energy and sustainable material development, etc.
- Efficient and low-carbon AI
- Data-efficient AI
- Model compression
- Power-aware efficient AI accelerators
- Nature-inspired optimization for sustainable AI
- Societal Sustainability
- Explainable and trustworthy AI, and fairness
- Privacy and safety of AI with sustainable societal impact
Format of Workshop
The workshop is a full-day event. We plan to invite two distinguished keynote speakers and organize a panel discussion where participants can engage in an exchange of ideas on critical topics related to sustainable AI. Additionally, we will consider all submissions for both oral and poster presentations.
Authors of selected papers will have the opportunity to submit their work to a special issue on Sustainable AI in collaboration with our partnering journals, IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) and the World Scientific Annual Review of AI. We are also in the process of creating an edited book on sustainable AI, based on selected and invited papers, to be published by World Scientific.
Papers must follow the AAAI author kit style, be submitted in PDF format, and be anonymized. Submissions may include up to 6 pages of technical content plus additional pages for references.
- Xiaoli Li (Nanyang Technological University/A*STAR, Singapore) firstname.lastname@example.org; email@example.com
- Joey Tianyi Zhou (A*STAR CFAR, Singapore) firstname.lastname@example.org
- Callie Hao (Georgia Institute of Technology, USA) email@example.com
- Vijay Janapa Redi (Harvard University, USA) firstname.lastname@example.org
- Yung-Hsiang Lu (Purdue University, USA) email@example.com
W32: Synergy of Reinforcement Learning and Large Language Models
Large Language Models (LLMs) such as ChatGPT and GPT-4 have ushered in a new era of AI capabilities, while Reinforcement Learning (RL) has made significant strides in various domains. This workshop aims to explore the exciting potential of integrating LLMs and RL to enhance AI’s capabilities further. Our primary objectives are to foster collaboration, share insights, and promote discussions on how these two fields can mutually benefit from each other.
We invite contributions across a broad spectrum of themes within the convergence of LLMs and RL, encompassing, but not limited to, the following:
- Planning: Exploring how RL techniques can empower LLMs to make informed decisions over time, leading to coherent and goal-oriented interactions.
- Exploration: Investigating strategies for LLMs to adaptively explore their environment using RL, optimizing the balance between proactively generating high-quality responses while exploring user preferences.
- Personalization: Examining how LLMs can dynamically tailor their responses to individual user preferences and behaviors through RL, thereby enhancing user satisfaction.
- Rich Representation: Exploring how LLMs can encode intricate environmental nuances, enabling RL agents to operate effectively in complex, non-Markovian scenarios.
- Explainability: Investigating how LLMs can serve as interpreters, making RL agents’ decisions more interpretable and transparent to humans.
- Task Decomposition: Discussing ways in which LLMs can assist RL agents in breaking down high-level goals into manageable sub-tasks, optimizing problem-solving strategies.
The workshop will be organized as a full-day event, featuring a mix of invited talks, panel discussions, paper presentations, and poster sessions. We encourage active participation, small-group discussions, and networking opportunities to facilitate knowledge exchange and collaboration.
We expect 4-8 page anonymous submissions, excluding references and supplemental materials. Submissions will be peer reviewed in a double-blind fashion. Our workshop is non-archival: ongoing and unpublished work is welcome, but work already published at any venue will not be considered.
Submit to: https://cmt3.research.microsoft.com/RLLLM2024
- Alborz Geramifard (Meta)
- Yuxi Li (AlphaAgent.net)
- Minmin Chen (Google)
- Dilek Hakkani-Tur (UIUC)
W33: Workshop on Ad Hoc Teamwork
Research on ad hoc teamwork (AHT) has been around for more than two decades (Rovatsos, Weiß, and Wolf 2002; Bowling and McCracken 2005), but it was first introduced as a formal challenge by Stone et al. (2010). The challenge discussed in these papers is: “To create an autonomous agent that is able to efficiently and robustly collaborate with previously unknown teammates on tasks to which they are all individually capable of contributing as team members.”
This workshop aims to foster collaboration among the research communities working on the various research topics related to AHT and their real-world applications. At the same time, we hope this event can play an important role in attracting new researchers to the field and connecting them with established researchers in areas related to AHT.
We encourage the submission of papers on finished and ongoing work under the following topics:
- Ad hoc teamwork
- Zero-shot coordination
- Adaptive learning agents
- Cooperation with new teammates in game settings
- Agent modelling
- Theory of mind of new teammates
- Plan and Goal Recognition
- Learning communication protocols on-the-fly
- Human-Agent and Human-Robot Interactions
- Emergence of social norms
This workshop will be held as a single-day event with the following activities. Through playing the Hanabi card game, a popular benchmark environment for AHT, junior and senior members of the research community will interact and identify existing challenges in AHT. We will then host long talks by two invited speakers who are senior members of the AHT community. Other attendees will also have the opportunity to deliver short talks on their submitted papers and to engage in spontaneous discussions through a poster session. Finally, a moderated discussion will bring all participants together to exchange views and ideas on interesting challenges and research directions for AHT. Given the interactive nature of these activities, we expect the workshop to be held in person.
We openly invite anyone with an interest in AHT-related topics to attend the workshop. However, only participants with accepted papers can present during the lightning talks and poster session. We expect at most 50 people in attendance for this workshop.
Authors should submit their papers via the following link: https://easychair.org/conferences/?conf=waht24
Each submission must be in PDF format and follow the AAAI-24 author kit. Short papers may be at most two pages long, excluding references and supplementary materials; the main content of each full paper may be at most eight pages long.
The workshop chairs and their respective email addresses are listed below:
- Elliot Fosong (firstname.lastname@example.org)
- Hasra Dodampegama (email@example.com)
- Arrasy Rahman (firstname.lastname@example.org)
- Ignacio Carlucho (email@example.com)
- Reuth Mirsky (firstname.lastname@example.org)
- Stefano Albrecht, The University of Edinburgh, email@example.com
- Mohan Sridharan, The University of Birmingham, firstname.lastname@example.org
- Peter Stone, The University of Texas at Austin, email@example.com
Further information on this workshop will be provided on the workshop website.
W34: XAI4DRL: eXplainable Artificial Intelligence for Deep Reinforcement Learning
Despite the recent progress in deep reinforcement learning (DRL), the black-box nature of deep neural networks and the complex interaction among various factors, such as the environment, reward policy, and state representation, raise challenges in understanding and interpreting DRL models’ decision-making processes. To address these issues, this workshop aims to develop methods, techniques, and frameworks to enhance the explainability and interpretability of DRL algorithms, and to define standardized metrics and protocols to evaluate the performance and transparency of autonomous systems.
- XAI methods for Deep Learning
- Evaluation of XAI methods
- RL and DRL interpretability
- XAI-based Augmentation for DRL
- Current Trends and Challenges in explaining DRL
- Reinforcement Learning-based XAI methods
- Debugging DRL using XAI
- Applications of DRL combined with XAI to real-world tasks
- Position papers on the workshop’s topic
This will be a one-day workshop. There will be 4 invited speakers and oral presentations of the best submitted contributions. Additionally, there will be a poster session for all accepted contributions. To encourage discussion and promote audience engagement, we plan to allocate equal time to the presentation and the Q&A session in each talk.
Everyone is welcome to attend the workshop. At least one author of each accepted paper needs to register and attend the workshop to present their work during the contributed talks and the poster session.
List of relevant publications:
- Heuillet, A., Couthouis, F., & Díaz-Rodríguez, N. (2021). Explainability in deep reinforcement learning. Knowledge-Based Systems, 214, 106685.
- Peng, X., Riedl, M., & Ammanabrolu, P. (2022). Inherently explainable reinforcement learning in natural language. Advances in Neural Information Processing Systems, 35, 16178-16190.
- Sequeira, P., & Gervasio, M. (2020). Interestingness elements for explainable reinforcement learning: Understanding agents’ capabilities and limitations. Artificial Intelligence, 288, 103367.
We accept both long papers (7 pages excluding references) and short papers (4 pages excluding references). Papers and associated data (code, supplemental material) must be anonymized and follow the same template as the AAAI track.
- Leilani Gilpin (University of California Santa Cruz, firstname.lastname@example.org)
- Roberto Capobianco (Sony AI, Sapienza University of Rome, email@example.com)
- Alessio Ragno (Sapienza University of Rome, firstname.lastname@example.org)
- Biagio La Rosa (Sapienza University of Rome, email@example.com)
- Michela Proietti (Sapienza University of Rome, firstname.lastname@example.org)
- Oliver Chang (University of California Santa Cruz, email@example.com)
W35: XAI4Sci: Explainable machine learning for sciences
As the deployment of machine learning technology becomes increasingly common in applications of consequence, such as medicine or science, the need for explanations of the system output has become a focus of great concern. Unfortunately, many state-of-the-art models are opaque, making their use challenging from an explanation standpoint, and current approaches to explaining these opaque models have stark limitations and have been the subject of serious criticism.
The XAI4Sci: Explainable Machine Learning for Sciences workshop aims to bring together a diverse community of researchers and practitioners working at the interface of science and machine learning to discuss the unique and pressing needs for explainable machine learning models to support science and scientific discovery. These needs include the ability to (1) leverage machine learning as a tool to make measurements and perform other activities in a manner comprehensible to and verifiable by the working scientists, and (2) enable scientists to utilize the explanations of the machine learning models in order to generate new hypotheses and to further knowledge of the underlying science.
The XAI4Sci workshop invites researchers to contribute short papers that demonstrate progress in the development and application of explainable machine learning techniques to real-world problems in the sciences (including but not limited to physics, materials science, earth science, cosmology, biology, chemistry, and forensic science). The target audience comprises members of the scientific community interested in explainable machine learning and researchers in the machine learning community interested in scientific applications of explainable machine learning. The workshop will provide a platform to facilitate a dialogue between these communities to discuss exciting open problems at the interface of explainable machine learning and science. Leading researchers from both communities will cover state-of-the-art techniques and set the stage for this workshop.
applications of explainable machine learning techniques to real-world problems in sciences (including but not limited to, physics, materials science, earth science, cosmology, biology, chemistry, medicine, and forensic science); explainable machine learning; applied machine learning
Format of Workshop
XAI4Sci will be a one-day meeting. There will be a series of invited talks as well as two poster sessions where participants can present their work, with ample time for discussions.
The workshop is open to the public. There is no upper bound on the number of attendees; however, the number of posters we can accommodate is limited to 30.
All presenters will be invited to submit a short conference paper (4 pages) that will be published on the workshop website. The paper template will be made available on the workshop website. We have also contacted the editors of the journal Machine Learning: Science and Technology to propose a special issue on explainable machine learning in the sciences, to which selected papers would be invited.
Submission Site Information: firstname.lastname@example.org
Justyna P. Zwolak, email@example.com
Justyna P. Zwolak
Applied and Computational Mathematics Division
National Institute of Standards and Technology
Gaithersburg, MD, USA
Information Access Division
National Institute of Standards and Technology
Gaithersburg, MD, USA