March 27–29, 2023 | Hyatt Regency, San Francisco Airport, California
Sponsored by the Association for the Advancement of Artificial Intelligence
The Association for the Advancement of Artificial Intelligence presents the 2023 Spring Symposium Series, to be held March 27–29 at the Hyatt Regency, San Francisco Airport, California.
Symposia generally range from 40–75 participants each. Participation is open to active participants as well as other interested individuals on a first-come, first-served basis. Each participant is expected to attend a single symposium.
The program included the following eight symposia:
- SS-23-01 AI Climate Tipping-Point Discovery
- SS-23-02 AI Trustworthiness Assessment
- SS-23-03 Challenges Requiring the Combination of Machine Learning and Knowledge Engineering (AAAI-MAKE)
- SS-23-04 Computational Approaches to Scientific Discovery
- SS-23-05 Evaluation and Design of Generalist Systems (EDGeS): Challenges and methods for assessing the new generation of AI
- SS-23-06 HRI in Academia and Industry: Bridging the Gap
- SS-23-08 On the Effectiveness of Temporal Logics on Finite Traces in AI
- SS-23-09 Socially Responsible AI for Well-being
Fee Schedule
Member: $395.00
Nonmember: $560.00
Student Member: $225.00
Nonmember student: $335.00
AAAI Silver Registration
(Includes AAAI membership, plus the conference registration fee)
Regular One-Year: $530.00
Regular 3-Year: $800.00
Regular 5-Year: $1,170.00
Student (One-Year): $275.00
Visa Information
Letters of invitation can be requested by accepted SSS-23 authors or by registrants who have completed registration with payment. If you are attending SSS-23 and require a letter of invitation, please send the following information to sss@aaai.org:
First/Given Name:
Family/Last Name:
Position:
Organization:
Department:
Address:
City:
State:
Zip/Postal Code:
Country:
Email:
Are you an author of a paper?
Paper title:
If not an accepted author, Registration Confirmation ID#:
AAAI Code of Conduct for Events and Conferences
All persons, organizations, and entities that attend AAAI conferences and events are subject to the standards of conduct set forth in the AAAI Code of Conduct for Events and Conferences.
Disclaimer
In offering the Hyatt Regency San Francisco Airport (hereinafter referred to as ‘Supplier’), and all other service providers for the AAAI Spring Symposium Series, the Association for the Advancement of Artificial Intelligence acts only in the capacity of agent for the Supplier, which is the provider of hotel rooms and transportation. Because the Association for the Advancement of Artificial Intelligence has no control over the personnel, equipment or operations of providers of accommodations or other services included as part of the Symposium program, AAAI assumes no responsibility for and will not be liable for any personal delay, inconveniences or other damage suffered by symposium participants which may arise by reason of (1) any wrongful or negligent acts or omissions on the part of any Supplier or its employees, (2) any defect in or failure of any vehicle, equipment or instrumentality owned, operated or otherwise used by any Supplier, or (3) any wrongful or negligent acts or omissions on the part of any other party not under the control, direct or otherwise, of AAAI.
SUBMISSION REQUIREMENTS
Interested individuals should submit a paper or abstract by the deadline listed below, unless otherwise indicated by the symposium organizers on their supplemental website. Please submit your work directly to the individual symposium according to its directions. Do not mail submissions to AAAI. See the appropriate section in each symposium description for specific submission requirements.
SUBMISSION SITE
Most symposium organizers have elected to accept submissions via the AAAI Spring Symposium EasyChair site at https://easychair.org/my/conference?conf=sss23. Please be sure to select the appropriate symposium when submitting your work. For those not using EasyChair, please see the individual symposia for submission site details.
IMPORTANT DATES
January 7: AAAI opens registration for Spring Symposium Series
January 15 (recommended): Papers due to organizers
January 31: Organizers send notifications to authors
February 10 (recommended): SSS final papers due to organizers
February 17: Invited Participant registration deadline
March 4: Registration deadline
Monday, March 27
9:00 AM – 5:30 PM: Symposia Sessions
6:00 PM – 7:00 PM: Reception
Tuesday, March 28
9:00 AM – 5:30 PM: Symposia Sessions
6:00 PM – 7:30 PM: Plenary Session
Wednesday, March 29
9:00 AM – 12:30 PM: Symposia Sessions
Christopher Geib (SIFT)
Ron Petrick (Heriot-Watt University)
SSS-23 Symposium Cochairs
ssschairs@aaai.org
General inquiries regarding the symposium series should be directed to AAAI at sss@aaai.org.
AI Climate Tipping-Point Discovery (ACTD)
AI Climate Tipping-Point Discovery (ACTD) is an emerging area of research that aims to integrate artificial intelligence with traditional climate modeling methods to enable scientific researchers to better understand climate tipping points. Critical tipping points of concern exist in Earth systems, including massive shifts in ocean currents, cryosphere collapse, forest dieback, and permafrost thaw. The study of tipping points further includes the concept of cascading tipping points, where one or more tipping points trigger further tipping points. Looking forward, positive tipping points may be triggered, or negative tipping points averted, by leveraging climate interventions such as carbon sequestration, marine cloud brightening, and stratospheric aerosol injection. ACTD raises many urgent research questions about rapidly changing and interconnected global systems. This emerging field will also be of interest beyond the climate community, as tipping-point discovery methods also apply to social, political, and economic systems. The resulting collaborations could accelerate scientific discovery by overcoming existing limitations in the state-of-the-art climate modeling approaches used for tipping-point discovery.
Topics
The goal of this symposium is to explore how traditional climate modeling methods and AI can be combined for climate tipping-point discovery, and to bring together the community of researchers working at the intersection of AI, dynamical systems, and climate science to help shape the vision of this important field.
Submissions
We encourage participation on topics that explore any form of synergy between artificial intelligence (AI) methods, dynamical systems, and climate modeling for tipping point discovery.
We are soliciting paper submissions for position, review, or research articles in two formats: (i) short papers (2-4 pages, excluding references) and (ii) full papers (6-8 pages, excluding references).
Submissions should be sent via EasyChair: https://easychair.org/conferences/?conf=sss23
Important Dates
Abstract paper submission: January 29th, 2023
Full paper submission: February 5th, 2023
Notification of acceptance: February 26th, 2023
Camera ready paper submission: March 19th, 2023
Registration deadline: March 4, 2023
Symposium dates: March 27–29, 2023
Symposium Chair
Jennifer Sleeman, Johns Hopkins Applied Physics Laboratory; jennifer.sleeman@jhuapl.edu; phone (301) 785-5593
For More Information:
https://secwww.jhuapl.edu/EventLink/Event/220
Members
Jennifer Sleeman jennifer.sleeman@jhuapl.edu
Anand Gnanadesikan gnanades@jhu.edu
Yannis Kevrekidis yannisk@jhu.edu
Jay Brett Jay.Brett@jhuapl.edu
Themistoklis Sapsis sapsis@mit.edu
Tapio Schneider tapio@caltech.edu
AI Trustworthiness Assessment
The accelerated developments in the field of Artificial Intelligence (AI) hint at the need to consider “Trust” as a design principle rather than an option. Moreover, the design of AI-based critical systems, such as those in avionics, mobility, defense, healthcare, finance, and critical infrastructures, requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (regulators, auditors, developers, customers, technical inspection companies, reinsurance companies, end-users) for different reasons. Such assessment begins in the early stages of development, including the definition of the specification requirements for the system, the analysis, the design, and so on.
Judging AI-based systems merely by their accuracy percentage is highly misleading. Due to the multi-dimensional nature of trust and trustworthiness, one of the main issues is to establish objective attributes such as correctness, data quality, resilience, robustness, safety, security, explainability, fairness, and privacy, map them onto the AI processes, and provide methods and tools to assess them. The choice of attributes depends on contextual elements such as the criticality of the application, the expected use, and the nature of the stakeholders involved.
Topics
The goal of this symposium is to grow a community of researchers and practitioners for AI trustworthiness assessment, leveraging AI science, system and software engineering, metrology, and SSH. It aims to explore innovative approaches, metrics, methods, and tools, with a particular focus on the following topics (but not limited to them):
- Dataset qualification
- Performance indicators (accuracy, robustness, etc.)
- Test examples generation
- Risks and vulnerabilities
- Non-functional requirements such as accountability, reliability, privacy etc.
- Explainability and interpretability
- Trustworthiness measure and trade-offs (metrics, indicators…)
- Processes and governance mechanisms in organizations
- Sectorial specificities, applications
Format
The symposium will consist of keynotes, presentations of papers, a poster session, round-table discussions, and one final general discussion.
Submissions
Full papers: 8 pages; poster papers: 4 pages.
Online submission: http://aita.sciencesconf.org
Symposium Chair
Bertrand Braunschweig, Confiance.ai scientific coordinator; bertrand.braunschweig@irt-systemx.fr; phone (+33) 6 7429 2861
Organizing Committee
Bertrand Braunschweig, Confiance.ai; bertrand.braunschweig@irt-systemx.fr
Stefan Buijsman, TU Delft; S.N.R.Buijsman@tudelft.nl
Faïcel Chamroukhi, SystemX, faicel.chamroukhi@irt-systemx.fr
Fredrik Heintz, Linköping University, fredrik.heintz@liu.se
Foutse Khomh, Polytechnique de Montréal and Mila, foutse.khomh@polymtl.ca
Juliette Mattioli, Thales, juliette.mattioli@thalesgroup.com
Maximilian Poretschkin, Fraunhofer IAIS; Maximilian.Poretschkin@iais.fraunhofer.de
For More Information
Please see aita.sciencesconf.org
Challenges Requiring the Combination of Machine Learning and Knowledge Engineering (AAAI-MAKE)
The AAAI-MAKE 2023 symposium aims to bring together researchers and practitioners from machine learning and knowledge engineering to reflect on how combining the two fields can help tackle fundamental societal, environmental, business, and scientific AI challenges. The symposium also aims to make datasets, ontologies, initial research findings, and implications about future sociotechnical impacts available to academia and practitioners as accessible challenges.
Topics and Format
- Machine Learning, Deep Learning, and Neural Networks
- Knowledge Engineering, Representation, and Reasoning
- AI Challenges with Data Sets and/or Knowledge Graphs
- Hybrid (Human-Artificial) Intelligence and Human-in-the-Loop AI
- Commonsense and Explainable AI
- Hybrid AI and Neuro-symbolic AI
- Human-Centered AI, Dialogue Systems and Conversational AI
The symposium involves presentations of accepted papers, challenges, industry side-tutorial events, (panel) discussions, demonstrations, and plenary sessions. Since AAAI-MAKE is a dedicated symposium for combining machine learning and knowledge engineering, contributions and challenges should address hybrid (artificial) intelligence settings. For more details, see the following webpage: https://www.aaai-make.info.
Submission
We solicit challenge/short and position/full papers that can include recent or ongoing research, challenges with data sets (if available), and surveys. Application scenarios and requirements from industry would be highly beneficial and are most welcome.
Given the time frame of this 2023 symposium, single-blind review will be based on extended abstracts of challenge, position, full, or short papers.
- Challenge papers (5 to 9 pages) should describe future societal, environmental, business, or scientific challenges requiring hybrid AI and, where possible, provide datasets or ontologies. In a follow-up event, solutions will be proposed for these challenges.
- Position/full papers (10 to 16 pages) and short papers (5 to 9 pages) can include recent or ongoing research, business cases, application scenarios, and surveys.
- Industrial side-tutorial event or demonstration proposals (less than 5 pages) should focus on business or research related to the symposium topics, excluding product advertising.
- Discussion proposals (1 to 2 pages) should contain a description of the specific topic with a list of questions and a discussion moderator.
All submissions must follow the formatting instructions (https://www.aaai-make.info/authors). Accepted papers will be published on CEUR-WS, an established open-access proceedings site, and datasets should be uploaded to Zenodo. Please submit the extended abstract of your challenge, position, full, or short paper through EasyChair: https://easychair.org/conferences/?conf=sss23.
Important Dates
- Abstract submission: January 17, 2023
- Notification: January 31, 2023
- Registration: February 17, 2023
- Camera-ready submission: March 4, 2023
- Symposium: March 27–29, 2023
Organizing Committee
- Andreas Martin (primary contact), FHNW University of Applied Sciences and Arts Northwestern Switzerland.
- Hans-Georg Fill, University of Fribourg, Switzerland.
- Aurona Gerber, University of Pretoria, South Africa.
- Knut Hinkelmann, FHNW University of Applied Sciences and Arts Northwestern Switzerland.
- Doug Lenat, Cycorp Inc., Austin, TX, USA.
- Reinhard Stolle, Argo AI GmbH, München, Germany.
- Frank van Harmelen, VU University, Amsterdam, Netherlands.
Computational Approaches to Scientific Discovery
The discovery of scientific knowledge is one of the highest forms of human accomplishment. Over the past four decades, scientific discovery has been tackled by computer scientists, mathematicians, physicists, philosophers, psychologists, and statisticians. This has produced multiple paradigms that come from distinct intellectual traditions, publish in different venues, and use their own terminologies. The symposium will convene these communities in an effort to overcome conceptual divides, increase interaction, and foster a unified community.
Despite the paradigms’ differences, they share assumptions about the nature of discovery that offer potential for bridging the current gap:
- Scientific discovery is not solely about data or models, but about finding relations between them;
- Discovered models should not only make predictions but also provide deeper accounts that are consistent with scientific theory; and
- Discovery should produce models that are interpretable and stated in established scientific formalisms.
To build on these themes, the symposium will organize sessions around types of scientific models (e.g., qualitative structures, causal relations, numeric equations, processes) rather than methodological paradigms. Moreover, presenters will abstract away from algorithmic and mathematical details to focus on:
- The original discovery problem they wanted to solve;
- How they formulated the problem in computational terms;
- What data and knowledge they provided to their system;
- How they represented the system’s inputs and outputs;
- What criteria the system used to evaluate candidate models; and
- How they interpreted results that the system generated.
Structuring talks in this way should increase communication among discovery researchers who come from different backgrounds and who favor different methods.
Submissions:
Authors should submit abstracts of proposed talks through the AAAI Spring Symposium EasyChair site: https://easychair.org/conferences/?conf=sss23, along with one or two references with links to related papers. These should be one page in 11-point font and need not follow AAAI format.
Submissions are due January 15, 2023. The organizing committee will select abstracts that cover a broad range of discovery problems and approaches, with preference given to ones that address the questions listed above.
Authors of accepted abstracts will be invited to write a full paper for distribution to symposium participants and for possible inclusion in a special issue of a refereed journal.
Organizing Committee:
Youngsoo Choi (Lawrence Livermore National Laboratory, choi15@llnl.gov),
Saso Dzeroski (Jozef Stefan Institute, saso.dzeroski@ijs.si),
J. Nathan Kutz (University of Washington, kutz@uw.edu),
Pat Langley (Stanford University, langley@stanford.edu)
Contact: Pat Langley (langley@stanford.edu)
For More Information:
For additional details, please see the supplementary symposium site at http://cogsys.org/symposium/discovery-2023/
Evaluation and Design of Generalist Systems (EDGeS): Challenges and methods for assessing the new generation of AI
With the advent of large domain-universal models, we are witnessing a trend towards generalist AI systems, no longer restricted to narrow tasks. Alongside this trend has been a resurgence of research in symbolic AI, to support common sense reasoning, explanation, and learning with limited training data. There are few extant strategies for assessing modern generalist AI systems, symbolic AI systems, or combinations thereof; assessing the next generation of AI will require novel tools, methods, and benchmarks that address both reasoning and generalist systems, individually and combined holistically.
Topics
In the interest of fostering discussion of methodologies for understanding and assessing AI in the domains of reasoning and generativity, we are accepting submissions on topics including, but not limited to:
- Novel training protocols for achieving generalist performance in reasoning and generative tasks
- Limitations of current approaches in AI/Machine learning
- Novel methodologies to assess progress on increasingly general AI
- Methods and tools for identification of vulnerabilities in modern reasoning systems
- Quantifiable approaches to assessing ethical robustness of generalist AI systems
- Relevant architectures involving neuro-symbolics, neural network-based foundation models, generative AI, common sense reasoning, statistical and relational AI
- Architecture for systems capable of reasoned self-verification and ethical robustness
- Description of novel systems that combine reasoning with generativity
Format
The symposium will include invited talks and contributed paper presentations by leading researchers and technical experts, as well as panel discussions and group breakout sessions focusing on the implementation of competency assessment in real autonomous systems.
Submissions
Participants will be invited to submit:
- Full technical papers (6-8 pages)
- Technical presentations (including an abstract of under 2 pages and a biography of the main speaker)
- Position papers (4-6 pages)
Manuscripts must be submitted as PDFs via EasyChair: https://easychair.org/conferences/?conf=sss23.
Please align with AAAI format: https://aaai.org/wp-content/uploads/AuthorKit23.zip
Important Dates (subject to change; please check the symposium website for the latest updates)
- Abstract submission: 16th of January 2023
- Notification: 6th of February 2023
- Registration: 27th of February 2023
- Camera-ready submission: 3rd of March 2023
Organizing Committee
- Joscha Bach, Intel Labs, Committee Co-Chair
- Amanda Hicks, Johns Hopkins University Applied Physics Lab, Committee Co-Chair
- Tetiana Grinberg, Intel Labs
- John Beverley, University at Buffalo
- Steven Rogers, Air Force Research Laboratory
- Grant Passmore, Imandra
- Ramin Hasani, MIT CSAIL
- Casey Richardson, S&P Global
- Richard Granger, Dartmouth College
- Jascha Achterberg, University of Cambridge
- Kristinn R. Thórisson, Reykjavik University, IIIM
- Luc Steels, Barcelona Supercomputing Center
- Yulia Sandamirskaya, Neuromorphic Computing lead, Intel Labs
For more information: https://www.cognitive-ai.org/edges-23
HRI in Academia and Industry: Bridging the Gap
The use of robots that operate in spaces where humans are present is growing at a dramatic rate. We are seeing more and more robots in our warehouses, on our streets, and even in our homes. All of these robots will interact with humans in some way, and to be successful, their interactions with humans will have to be carefully designed. The field of Human-Robot Interaction (HRI) has been growing at the intersection of robotics, AI, psychology, and a number of other fields for over a decade. However, until quite recently, it has been a largely academic area, with university researchers proposing, implementing, and reporting on experiments at a limited scale. With the current increase in commercially available robots, HRI is starting to make its way into industry in a meaningful way.
This symposium is intended to bring together HRI researchers and practitioners from both academia and industry to find common ground, understand the different constraints at play, and figure out how to effectively work together.
Themes
- The Constraints and Needs of HRI in Industry
- The Relevance and Innovation of Academic HRI
- Interaction between Academic and Industrial HRI: Publications, Conferences, Tools & Technology Resources, and Experiments
- HRI Education and Training: What do Academia and Industry Need?
Contributions
We solicit contributions in the following formats, addressing one or more of the symposium themes. More details on the themes, along with some of the questions we are interested in, can be found on the symposium website: https://sites.google.com/view/aaai-hri-bridge
- Long Papers (6 to 8 pages, AAAI format), describing HRI work, with at least one section devoted to addressing some of the questions outlined above.
- Case Studies (2 to 4 pages, AAAI format), describing some HRI work done in industry or in academia, with at least one section devoted to addressing the themes outlined above.
- Position Papers (2 to 4 pages, AAAI format), directly discussing themes outlined above.
- Short Papers (2 to 4 pages, AAAI format), summarizing some late-breaking or early stage HRI work.
Organizing Committee
- Bill Smart, Amazon Lab126 and Oregon State University
- Hae Won Park, Amazon Lab126 and MIT
- Chien-Ming Huang
- Anastasia K. Ostrowski, MIT
- Ross Mead, Semio
- Elaine Short, Tufts University
For More Information
For more information, see the symposium web site: https://sites.google.com/view/aaai-hri-bridge
On the Effectiveness of Temporal Logics on Finite Traces in AI
Temporal logics, such as Linear Temporal Logic (LTL), are widely adopted as a logical specification language in Formal Methods. They are also getting increasing attention from the AI Community. In AI, however, there is often a need to interpret such logics over finite traces, rather than the traditional infinite-trace interpretation. This is evident, for instance, in works on Planning, Reinforcement Learning, and Business Process Management. An important computational feature of working with finite traces is that it allows one to use standard finite-state automata to model and reason, rather than the more complex omega-automata used for infinite traces. This is a great simplification that has already had a significant impact on many areas of AI and CS.
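To give a flavor of the finite-trace setting, consider a standard textbook example (included here only as an illustration): the response property “every request is eventually granted,” written in LTL syntax as

\[
\varphi \;=\; \mathsf{G}\bigl(\mathit{request} \rightarrow \mathsf{F}\,\mathit{grant}\bigr).
\]

Interpreted over a finite trace π0 π1 … πn−1, φ requires that every position carrying request be followed, at or after that position and no later than position n−1, by one carrying grant. The satisfying finite traces form a regular language, accepted by an ordinary deterministic finite automaton, whereas the infinite-trace reading of the same formula requires a Büchi (omega-)automaton.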
This symposium aims to bring together researchers working with temporal logics on finite traces, in basic research and applications, in order to foster a common space to discuss current results and future directions, and to facilitate the emergence of teams working across different areas.
Topics:
Topics of interest span the use of temporal logics over finite traces, including (but not limited to) the following areas:
- AI Planning
- Verification and Synthesis
- Reinforcement Learning
- Automated Reasoning
- Knowledge Representation
- Multi-Agent Systems
- Robotics
- Motion and Task Planning
- Discrete-Event Control
- Workflow Management
- Conversational Systems
- Automated Service Composition
- Business Process Management
- Fintech
- Cyber Security
- Human Computer Interaction
- Natural Language Processing
Format:
The symposium will consist of contributed presentations, panel sessions, and keynote talks.
Submissions:
We invite submissions for presentations, not papers. Submissions should have a single main author, and each author can have no more than one submission. Each submission must not exceed 2 pages, including references (in AAAI style). There will be no formal proceedings. Submissions should be uploaded via the AAAI SSS-23 EasyChair site: https://easychair.org/conferences/?conf=sss23.
Organizing Committee:
- Suguman Bansal, Co-Chair, Georgia Institute of Technology, suguman@seas.upenn.edu
- Antonio Di Stasio, Co-Chair, Sapienza University of Rome, distasio@diag.uniroma1.it
- Sasha Rubin, Co-Chair, University of Sydney, sasha.rubin@sydney.edu.au
- Shufang Zhu, Co-Chair, Sapienza University of Rome, zhu@diag.uniroma1.it
For More Information
Please see the symposium website: https://ltlf-symposium.github.io/
Socially Responsible AI for Well-being
For our happiness, it is not enough for AI to be productive, driving exponential growth or economic/financial supremacy; it should also be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, an AI diagnosis system should provide responsible results (e.g., a high-accuracy diagnostic result with an understandable explanation), but the results should also be socially accepted (e.g., the data used for AI (machine learning) should not be biased; that is, the amount of training data should be balanced across races and/or locations). As this example shows, AI decisions affect our well-being, which underscores the importance of discussing “What is socially responsible?” in the many well-being situations that will arise in the coming AI age.
The first perspective is “(Individually) Responsible AI”, which aims to clarify what kinds of mechanisms and issues should be taken into consideration when designing Responsible AI for well-being. One goal of Responsible AI for well-being is to provide responsible results for our health, whose condition may change from day to day. Since such changes in health condition are often caused by our environment, Responsible AI for well-being is expected to provide responsible results by understanding how our digital experience affects our emotions and our quality of life.
The second perspective is “Socially Responsible AI”, which aims to clarify what kinds of mechanisms and issues should be taken into consideration when implementing social aspects in Responsible AI for well-being. One aspect of social responsibility is fair decisions, meaning that the results of AI should be equally useful for all people. For this issue, we need to tackle the “bias” problem in AI (and in humans) to achieve fairness. Another aspect of social responsibility is the applicability of knowledge across people. For example, health-related knowledge found by AI for one person (e.g., tips for good sleep) may not be useful for other people, in which case such knowledge is not socially responsible. For these issues, we need to find ways to keep machines from absorbing human biases, by understanding what counts as fair and providing socially responsible results.
Topics:
We welcome technical and philosophical discussions on “Socially Responsible AI for Well-being” in the design and implementation of ethics, machine learning software, robotics, and social media (but not limited to these). For example, interpretable forecasts, sound social media, helpful robotics, fighting loneliness with AI/VR, and promoting good health are all within the scope of our discussions.
Format:
The symposium will consist of invited talks, presentations, posters, and interactive demos.
Submissions:
Authors should submit either full papers of up to 8 pages or extended abstracts of up to 2 pages. Extended abstracts should state the presentation type (short paper (1–2 pages), demonstration, or poster presentation). All submissions should be uploaded to AAAI’s EasyChair site at https://easychair.org/conferences/?conf=sss23 and, in addition, emailed to aaai2023-srai@cas.lab.uec.ac.jp by January 15, 2023.
Submission deadline: January 15, 2023
Author notification: January 31, 2023
Camera-ready papers: March 15, 2023 (subject to change)
Registration deadline: March 4, 2023
Symposium: March 27–29, 2023
Organizing Committee:
Co-chairs: Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan)
For More Information
Please see the symposium website: http://www.cas.lab.uec.ac.jp/wordpress/aaai_spring_2023/