AAAI-22 Workshop Program

The Thirty-Sixth AAAI Conference on Artificial Intelligence
February 28 and March 1, 2022
Vancouver Convention Centre
Vancouver, BC, Canada

AAAI is pleased to present the AAAI-22 Workshop Program. Workshops will be held Monday and Tuesday, February 28 and March 1, 2022. The final schedule will be available in November. The AAAI-22 workshop program includes 39 workshops covering a wide range of topics in artificial intelligence. Workshops are one day unless otherwise noted in the individual descriptions. Registration in each workshop is required of all active participants, and is also open to all interested individuals. Workshop registration is available to AAAI-22 technical registrants at a discounted rate, or separately to workshop-only registrants. Registration information will be mailed directly to all invited participants in December.

Important Dates for Workshop Organizers


  • November 12: Submissions due (unless noted otherwise)
  • December 3: Notification of acceptance (unless noted otherwise)
  • December 17: Early registration deadline
  • February 28 – March 1: AAAI-22 Workshop Program

 


W1: Adversarial Machine Learning and Beyond

Although machine learning (ML) approaches have demonstrated impressive performance on various applications and driven significant progress in AI, the potential vulnerability of ML models to malicious attacks (e.g., adversarial/poisoning attacks) raises severe concerns in safety-critical applications. Adversarial ML can also create data privacy and ethical issues when ML techniques are deployed in real-world applications. Counter-intuitive behaviors of ML models erode public trust in AI techniques, suggesting that a fundamental rethinking of machine learning/deep learning methods may be urgently needed. This workshop aims to discuss important topics in adversarial ML to deepen our understanding of ML models in adversarial environments and to build reliable ML systems in the real world.

Topics

  1. Malicious attacks on ML models that identify their vulnerabilities in black-box/real-world scenarios.
  2. Novel algorithms and theories to improve model robustness.
  3. Benchmarks to reliably evaluate attacks/defenses and measure the real progress of the field.
  4. Theoretical understanding of adversarial ML and its connection to other areas.
  5. The positive/negative social impacts and ethical issues related to adversarial ML.
  6. The consideration and experience of adversarial ML from industry and policy making.
  7. Positive applications of adversarial ML, i.e., adversarial for good.

Format

This is a one-day workshop, planned with a 10-minute opening, 6 invited keynotes, ~6 contributed talks, 2 poster sessions, and 2 panel discussions. We’ll also host a competition on adversarial ML along with this workshop.

We expect about 60–85 participants, including the program committee, invited speakers, panelists, authors of accepted papers, competition winners, and other interested attendees.

Submissions

We welcome submissions that have not been published in any peer-reviewed venue; work currently under review elsewhere is eligible. Accepted papers will be allocated either a contributed talk or a poster presentation. Submissions, including full papers (6-8 pages) and short papers (2-4 pages), should be anonymized and follow the AAAI-22 Formatting Instructions (two-column format) at https://www.aaai.org/Publications/Templates/AuthorKit22.zip.

Submit to: https://openreview.net/group?id=AAAI.org/2022/Workshop/AdvML

Workshop Chair

Yinpeng Dong (dyp17@mails.tsinghua.edu.cn, 30 Shuangqing Road, Haidian District, Tsinghua University, Beijing, China, 100084, Phone: +86 18603303421)

Organizing Committee

Yinpeng Dong (Tsinghua University, dyp17@mails.tsinghua.edu.cn), Tianyu Pang (Tsinghua University, pty17@mails.tsinghua.edu.cn), Xiao Yang (Tsinghua University, yangxiao19@mails.tsinghua.edu.cn), Eric Wong (MIT, wongeric@mit.edu), Zico Kolter (CMU, zkolter@cs.cmu.edu), Yuan He (Alibaba, heyuan.hy@alibaba-inc.com)


W2: AI for Agriculture and Food Systems (AIAFS)

An increasing world population, coupled with finite arable land, changing diets, and the growing expense of agricultural inputs, is poised to stretch our agricultural systems to their limits. By the end of this century, the earth’s population is projected to increase by 45% while available arable land decreases by 20%, along with changes in what crops these arable lands can best support; this creates an urgent need to enhance agricultural productivity by 70% before 2050. Current rates of progress are insufficient, making it impossible to meet this goal without a technological paradigm shift. There is increasing evidence that AI technology has the potential to enable this paradigm shift. This AAAI workshop aims to bring together researchers from core AI/ML, robotics, sensing, cyber-physical systems, agriculture engineering, plant sciences, genetics, and bioinformatics communities to facilitate the increasingly synergistic intersection of AI/ML with agriculture and food systems. Outcomes include outlining the main research challenges in this area, potential future directions, and cross-pollination between AI researchers and domain experts in agriculture and food systems.

Topics

Specific topics of interest for the workshop include (but are not limited to) foundational and translational AI activities related to:

  • Plant breeding
  • Precision agriculture and farm management
  • Biotic/Abiotic stress prediction
  • Yield prediction
  • Agriculture data curation
  • Annotation efficient learning
  • Plant growth and development models
  • Remote sensing
  • Agricultural robotics
  • Privacy-preserving data analysis
  • Human-in-the-loop AI
  • Multimodal data fusion
  • High-throughput field phenotyping
  • (Bio)physics aware hybrid AI modeling
  • Development of open-source software, libraries, annotation tools, or benchmark datasets

Format

The workshop will be a one-day meeting comprising invited talks from researchers in the field, spotlight lightning talks, and a poster session where contributing paper presenters can discuss their work. Attendance is open to all registered participants.

Submissions

Submitted technical papers can be up to 4 pages long (excluding references and appendices). Position papers are welcome. All papers must be submitted in PDF format using the AAAI-22 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation.

Submission site: https://openreview.net/group?id=AAAI.org/2022/Workshop/AIAFS

Organizing Committee

Girish Chowdhary (University of Illinois, Urbana Champaign), Baskar Ganapathysubramanian (Iowa State University; contact: baskarg@iastate.edu), George Kantor (Carnegie Mellon University), Soumyashree Kar (Iowa State University), Koushik Nagasubramanian (Iowa State University), Soumik Sarkar (Iowa State University), Katia Sycara (Carnegie Mellon University), Sierra Young (North Carolina State University), Alina Zare (University of Florida, Gainesville)

Additional Information

Supplemental workshop site: https://aiafs-aaai2022.github.io/


W3: AI for Behavior Change

In decision-making domains as wide-ranging as medication adherence, vaccination uptake, college enrollment, retirement savings, and energy consumption, behavioral interventions have been shown to encourage people towards making better choices. It is important to learn how to use AI effectively in these areas to motivate and help people take actions that maximize their welfare.

At least three research trends are informing insights in this field. First, large data sources, both those conventionally used in the social sciences (EHRs, health claims, credit card use, college attendance records) and unconventional ones (social networks, fitness apps), are now available and are increasingly used to personalize interventions. These datasets can be leveraged to learn individuals’ behavioral patterns, identify individuals at risk of making sub-optimal or harmful choices, and target them with behavioral interventions to prevent harm or improve well-being. Second, psychological experiments in laboratories and in the field, often in partnership with technology companies (e.g., using apps) to measure behavioral outcomes, are increasingly used to inform intervention design. Finally, there is growing interest in AI in moving beyond traditional supervised learning approaches towards learning causal models, which can support the identification of targeted behavioral interventions. These research trends inform the need to explore the intersection of AI with behavioral science and causal inference, and how they can come together for applications in the social and health sciences.

This workshop will build upon the successes and lessons of last year’s AI for Behavior Change workshop, and will focus on advances in AI and ML that aim to (1) design and target optimal interventions; (2) explore bias and equity in the context of decision-making; and (3) exploit datasets in domains spanning mobile health, social media use, electronic health records, college attendance records, fitness apps, etc. for causal estimation in behavioral science.

Topics

The goal of this workshop is to bring together the causal inference, artificial intelligence, and behavior science communities, gathering insights from each of these fields to facilitate collaboration and adaptation of theoretical and domain-specific knowledge amongst them. We invite thought-provoking submissions on a range of topics in fields including, but not limited to, the following areas:

  • Intervention design
  • Adaptive/optimal treatment assignment
  • Heterogeneity estimation
  • Targeted nudges
  • Bias/equity in algorithmic decision-making
  • Mental health/wellness
  • Habit formation
  • Social media interventions
  • Psychological science
  • Precision health
  • Vaccine Hesitancy/Vaccine Uptake

Format

The full-day workshop will begin with a keynote talk, followed by an invited talk and contributed paper presentations in the morning. The post-lunch session will feature a second keynote talk and two invited talks. Papers better suited for a poster than a presentation will be invited to a poster session. We will also select some of the best posters for spotlight talks (2 minutes each). The workshop will end with a panel discussion among top researchers in the field.

Invited Speakers

Colin Camerer (California Institute of Technology), Susan Murphy (Harvard University)

Submissions

The audience of this workshop will be researchers and students from a wide array of disciplines including, but not limited to, statistics, computer science, economics, public policy, psychology, management, and decision science, who work at the intersection of causal inference, machine learning, and behavior science. AAAI, specifically, is a great venue for our workshop because its audience spans many ML and AI communities. We invite novel contributions following the AAAI-22 formatting guidelines, camera-ready style. Submissions will be peer reviewed (single-blind) and assessed on novelty, technical quality, significance of impact, interest, clarity, relevance, and reproducibility. We accept two types of submissions: full research papers of up to 8 pages and short/poster papers of 2-4 pages; references do not count towards the page limit. Submissions will be accepted via the EasyChair submission website.

Organizing Committee

Lyle Ungar (University of Pennsylvania, ungar@cis.upenn.edu), Rahul Ladhania (University of Michigan, ladhania@umich.edu; primary contact), Linnea Gandhi (University of Pennsylvania, lgandhi@wharton.upenn.edu), Michael Sobolev (Cornell Tech, michael.sobolev@cornell.edu)

Additional Information

Supplemental workshop site: https://ai4bc.github.io/ai4bc22/

Contact

For any questions, please reach out to us at ai4behaviorchange at gmail dot com


W4: AI for Decision Optimization

This AAAI-22 workshop on AI for Decision Optimization (AI4DO) will explore how AI can be used to significantly simplify the creation of efficient, production-level optimization models, thereby enabling their much wider application and the resulting business value. The desired outcome of this workshop is to drive forward research and seed collaborations in this area by bringing together machine learning and decision-making from the lens of both dynamic and static optimization models.

Topics

Topics include, but are not limited to: learning optimization models from data; constraint and objective learning; AutoAI and AutoRL, especially when combined with decision optimization models or environments; incorporating the inaccuracy of automatically learnt models into the decision-making process; and using machine learning to efficiently solve combinatorial optimization models.

Format

In addition to the keynote and presentations of accepted works, the workshop will include both a general discussion session on defining and addressing the key challenges in this area, and a “lightning tutorial” session that will include brief overviews and demos of relevant tools, including open-source frameworks such as Ecole.

Attendance

Participation by researchers from a wide variety of areas is encouraged, including data science, machine learning, symbolic AI, mathematical programming, constraint optimization, reinforcement learning, dynamic control, and operations research.

Submissions

Authors are invited to submit contributions in the AAAI-22 proceedings format. We will accept original papers up to 8 pages in length (including references), as well as position papers and work-in-progress papers up to 4 pages in length (not including references).
Submissions will be made through EasyChair at the AAAI-22 Workshop AI4DO submission site.

Workshop Chairs

Prof. Bistra Dilkina (USC, dilkina@usc.edu) and Dr. Segev Wasserkrug (IBM Research, segevw@il.ibm.com)

Organizing Committee

Prof. Andrea Lodi (Jacobs Technion-Cornell Institute – IIT, andrea.lodi@cornell.edu) and Dr. Dharmashankar Subramanian (IBM Research, dharmash@us.ibm.com)

Additional Information

https://research.ibm.com/haifa/Workshops/AAAI-22-AI4DO/


W5: AI for Transportation

In recent years, machine learning techniques (e.g., support vector machines (SVM), decision trees, random forests) and deep learning techniques (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN)) have been widely applied to image recognition and time-series inference for intelligent transportation systems (ITS). For instance, advanced driver assistance systems and autonomous cars have been developed based on AI techniques to perform forward collision warning, blind spot monitoring, lane departure warning, traffic sign recognition, traffic safety, infrastructure management and congestion control, and so on. Autonomous vehicles can share their detected information (e.g., traffic signs, collision events) with other vehicles via vehicular communication systems (e.g., dedicated short range communication (DSRC), vehicular ad hoc networks (VANETs), long term evolution (LTE), and 5G/6G mobile networks) for cooperation. However, achieving the performance and efficiency required by real-time applications remains a major challenge for these techniques.

The aim of this workshop is to solicit both original research and review articles on various disciplines of ITS applications, in particular AI techniques for ITS time-series data analyses, ITS spatio-temporal data analyses, advanced traffic management systems, advanced traveler information systems, commercial vehicle operation systems, advanced vehicle control and safety systems, advanced public transportation services, advanced information management services, etc.

Topics

  • AI for ITS time-series and spatio-temporal data analyses
  • AI for the applications of transportation
  • AI for image recognition
  • Applications and techniques in image recognition based on AI techniques for ITS
  • Applications and techniques in autonomous cars and ships based on AI techniques
  • AI for quality of service in VANET
  • AI for infrastructure management and congestion

Format

The one-day workshop will be organized around paper presentations.

Submissions

Full papers: 6-8 pages
Poster/short/position papers: 2-4 pages

Submission URL: https://easychair.org/conferences/?conf=aaai-2022-workshop

Workshop Chairs

Wenzhong Guo (Fuzhou University, fzugwz@163.com), Chin-Chen Chang (Feng Chia University, alan3c@gmail.com), Chi-Hua Chen (Fuzhou University, chihua0826@gmail.com), Haishuai Wang (Fairfield University & Harvard University, hwang@fairfield.edu)

Session Chairs

Feng-Jang Hwang (University of Technology Sydney), Cheng Shi (Xi’an University of Technology), Ching-Chun Chang (National Institute of Informatics, Japan)

Additional Information

Excellent papers will be recommended for publication in SCI- or EI-indexed journals. Detailed information can be found on the workshop website.

Workshop URL: https://rail.fzu.edu.cn/info/1014/1064.htm

Contact

Prof. Chi-Hua Chen
Email: chihua0826@gmail.com
Postal address: No.2, Xueyuan Rd., Fuzhou, Fujian, China
Telephone: +86-18359183858


W6: AI in Financial Services: Adaptiveness, Resilience & Governance

The financial services industry relies heavily on AI and machine learning solutions across all business functions and services. However, most models and AI systems are built with conservative operating-environment assumptions due to regulatory compliance concerns. In recent years, the Covid pandemic has triggered major shifts across the globe. These abrupt changes upended the environmental assumptions used by AI/ML systems and their corresponding input data patterns. As a result, many AI/ML systems faced serious performance challenges and failures. Industry-wide reports highlight large-scale remediation efforts to fix the failures and performance issues. Yet most of these efforts also exposed the challenges of model governance and compliance processes.

Topics

This workshop starts with acknowledging the fundamental challenges of robustness and adaptiveness in financial services modeling and explores systematic solutions to solve these underlying problems to prevent future failures. Some of the key questions to be explored include:

  • Why did so many AI/ML models fail during the pandemic?
  • What are the primary lessons learned from the model failures?
  • What techniques and approaches can be used to detect and effectively manage similar scenarios in the future?
  • What approaches emerge in building fundamentally robust and adaptive AI/ML systems?
  • How can the financial services industry balance regulatory compliance and model governance pressures with adaptive models?

Format

The workshop will take place in person and will span over one day. It will include multiple keynote speakers, invited talks, a panel discussion, and two poster sessions for the accepted papers.

Papers can be submitted as an extended abstract (4-page limit excluding references) or a short paper (6-page limit excluding references). Submissions should follow the AAAI-22 formatting guidelines: https://aaai.org/Conferences/AAAI-22/aaai22call/.

Important Dates

Paper Submission: November 12, 2021, 11:59 pm (anywhere on earth)
Author Notification: December 3, 2021
Full conference: February 22 – March 1, 2022
Workshop: February 28 – March 1, 2022

Additional Information

The workshop page is https://sites.google.com/view/aaaiwfs2022, and it will include the most up-to-date information, including the exact schedule.

Organizing Committee

Naftali Cohen (JP Morgan Chase & New York University), Eren Kurshan (Bank of America & Columbia University), Senthil Kumar (Capital One), Susan Tibbs (Financial Institutions Regulatory Authority, FINRA), Tucker Balch (JP Morgan Chase & Georgia Institute of Technology), and Kevin Compher (Securities Exchange Commission)


W7: AI to Accelerate Science and Engineering (AI2ASE)

Scientists and engineers in diverse domains are increasingly relying on AI tools to accelerate scientific discovery and engineering design. This workshop aims to bring together researchers from AI and diverse science/engineering communities to achieve the following goals:

1) Identify and understand the challenges in applying AI to specific science and engineering problems
2) Develop, adapt, and refine AI tools for novel problem settings and challenges
3) Community-building and education to encourage collaboration between AI researchers and domain area experts

Topics

Specific topics in the context of scientific discovery and engineering design include (but are not limited to):

  • Methods to combine scientific knowledge and data to build accurate predictive models
  • Adaptive experiment design under resource constraints
  • Learning cheap surrogate models to accelerate simulations
  • Learning effective representations for structured data
  • Uncertainty quantification and reasoning tools for decision-making
  • Explainable AI for both prediction and decision-making
  • Integrating AI tools into existing workflows
  • Challenges in applying and deploying AI in the real world

Format

This will be a one-day workshop with a number of paper presentations and poster spotlights, a poster session, several invited talks, and a panel discussion.

Invited Speakers

Prof. Max Welling, University of Amsterdam and Microsoft Research
Prof. José Miguel Hernández-Lobato, University of Cambridge
Prof. Connor Coley, Massachusetts Institute of Technology
Prof. Andrew White, University of Rochester
Dr. Rocío Mercado, Massachusetts Institute of Technology

We will close the workshop with a panel discussion, in which the audience can ask follow-up questions and help identify the key AI challenges for pushing the frontiers in chemistry.

Submissions

We welcome submissions of long (max. 8 pages), short (max. 4 pages), and position (max. 4 pages) papers describing research at the intersection of AI and science/engineering domains including chemistry, physics, power systems, materials, catalysis, health sciences, computing systems design and optimization, epidemiology, agriculture, transportation, earth and environmental sciences, genomics and bioinformatics, civil and mechanical engineering etc.

Submissions must be formatted in the AAAI submission format (https://www.aaai.org/Publications/Templates/AuthorKit22.zip). All submissions should be made electronically via EasyChair.

Submission site: TBD

Organizing Committee

Aryan Deshwal (Washington State University, aryan.deshwal@wsu.edu), Syrine Belakaria (Washington State University, syrine.belakaria@wsu.edu), Cory Simon (Oregon State University, cory.simon@oregonstate.edu), Jana Doppa (Washington State University, jana.doppa@wsu.edu), Yolanda Gil (University of Southern California, gil@isi.edu)

Additional Information

Supplemental workshop site: https://ai-2-ase.github.io/

For general inquiries about AI2ASE, please write to the lead organizer aryan.deshwal@wsu.edu or jana.doppa@wsu.edu.


W8: AI-Based Design and Manufacturing (ADAM) (Half-Day)

Advances in complex engineering systems such as manufacturing and materials synthesis increasingly seek artificial intelligence/machine learning (AI/ML) solutions to enhance their design, development, and production processes. However, despite increasing interest from various subfields, AI/ML techniques are yet to fulfill their full promise in achieving these advances. Key obstacles include lack of high-quality data, difficulty in embedding complex scientific and engineering knowledge in learning, and the need for high-dimensional design space exploration under constrained budgets. The first AAAI Workshop on AI for Design and Manufacturing, ADAM, aims to bring together researchers from core AI/ML, design, manufacturing, scientific computing, and geometric modeling. Our intent is to facilitate new AI/ML advances for core engineering design, simulation, and manufacturing. Objectives of ADAM include outlining the main research challenges in this area, cross-pollinating collaborations between AI researchers and domain experts in engineering design and manufacturing, and sketching open problems of common interest.

Topics

We invite paper submission on the following (and related) topics:

  • New theory and fundamentals of AI-aided design and manufacturing,
  • Novel AI-based techniques to improve modeling of engineering systems,
  • Integration of AI-based approaches with engineering prototyping and manufacturing,
  • Novel methods to learn from scarce/sparse, heterogeneous, or multimodal data,
  • Novel ML methods in the computational material and physical sciences,
  • Novel ML-accelerated optimization for conceptual/detailed system design,
  • Novel AI-enabled generative models for system design and manufacturing,
  • ML-guided rare event modeling and system uncertainty quantification,
  • Development of software, libraries, or benchmark datasets, and
  • Identification of key challenges and opportunities for future research.

Format

The workshop will be a half-day meeting comprising several invited talks from distinguished researchers in the field, spotlight lightning talks and a poster session where contributing paper presenters can discuss their work, and a concluding panel discussion focusing on future directions. Attendance is open to all registered participants.

Submissions

Submitted technical papers can be up to 4 pages long (excluding references and appendices). Position papers are welcome. All papers must be submitted in PDF format using the AAAI-22 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation.

Submission site: https://openreview.net/group?id=AAAI.org/2022/Workshop/ADAM

Organizing Committee

Aarti Singh (Carnegie Mellon University), Baskar Ganapathysubramanian (ISU), Chinmay Hegde (New York University; contact: chinmay.h@nyu.edu), Mark Fuge (University of Maryland), Olga Wodo (University of Buffalo), Payel Das (IBM), Soumalya Sarkar (Raytheon)

Additional Information

Workshop website: https://adam-aaai2022.github.io/


W9: Artificial Intelligence for Cyber Security (AICS)

The workshop will focus on the application of AI to problems in cyber-security. Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability. Additionally, adversaries continue to develop new attacks. Hence, AI methods are required to understand and protect the cyber domain. These challenges are widely studied in enterprise networks, but many gaps remain in research and practice, along with novel problems in other domains.

This year the AICS emphasis will be on practical considerations when deploying AI systems for security in the real world, with a special focus on the convergence of AI and cyber-security in the biomedical field.

In general, AI techniques are still not widely adopted in the real world. Reasons include: (1) a lack of certification of AI for security; (2) a lack of formal study of the implications of practical constraints (e.g., power, memory, storage) for AI systems in the cyber domain; (3) known vulnerabilities such as evasion and poisoning attacks; (4) a lack of meaningful explanations for security analysts; and (5) a lack of analyst trust in AI solutions. The research community needs to develop novel solutions for these practical issues.

The biomedical space has seen a flurry of activity recently, and cyber criminals have amplified their efforts with health-related phishing attacks, spreading misinformation, and intruding into health infrastructure. These developments raise several security considerations: (1) securing personal health information, genetic material, intellectual property, and digital health records; (2) balancing privacy rights and data ownership concerns in solutions using network and mobile data; and (3) defending AI-for-biology use cases to deter automated attacks at scale.

Topics

Topics of interest in the biomedical space include:

  • Securing personal information, genomics, and intellectual property
  • Adversarial attacks and defenses on biomedical datasets
  • Detecting and preventing spread of misinformation
  • Usable security and privacy for digital health information
  • Phishing and other attacks using health information
  • Novel use of biometrics to enhance security
  • Threats to biometric security

Topics of general interest to cyber-security include:

  • Machine learning (including RL) security and resiliency
  • Natural language processing
  • Anomaly detection
  • Noise reduction
  • Adversarial learning
  • Formal reasoning
  • Game-theoretic reasoning
  • AI assurance and securing AI systems
  • Multi-agent interaction modeling
  • Modeling and simulation of cyber systems
  • Decision-making under uncertainty
  • Automation of data labeling and ML techniques
  • Quantitative human behavior models
  • Operational and commercial applications of AI
  • Explanations of security decisions and vulnerability of explanations
  • Human-AI teaming for cyber security

Submissions

Submission site: https://easychair.org/conferences/?conf=aics22

Organizing Committee

Tamara Broderick (MIT CSAIL, tamarab@mit.edu), James Holt (Laboratory for Physical Sciences, USA, holt@lps.umd.edu), Edward Raff (Booz Allen Hamilton, USA, Raff_Edward@bah.com), Ahmad Ridley (National Security Agency), Dennis Ross (MIT Lincoln Laboratory, USA, dennis.ross@ll.mit.edu), Arunesh Sinha (Singapore Management University, Singapore, aruneshs@smu.edu.sg), Diane Staheli (MIT Lincoln Laboratory, USA, diane.staheli@ll.mit.edu), William W. Streilein (MIT Lincoln Laboratory, USA, wws@ll.mit.edu), Milind Tambe (Harvard University, USA, milind_tambe@harvard.edu), Yevgeniy Vorobeychik (Washington University in Saint Louis, USA, eug.vorobey@gmail.com), Allan Wollaber (MIT Lincoln Laboratory, USA, Allan.Wollaber@ll.mit.edu)

Additional Information

Supplemental workshop site: http://aics.site/


W10: Artificial Intelligence for Education (AI4EDU)

Technology has transformed over the last few years, turning futuristic ideas into today’s reality. AI is one of these transformative technologies, now achieving great success in various real-world applications and making our lives more convenient and safer. AI is shaping the way businesses, governments, and educational institutions operate, and is making its way into classrooms, schools, and districts across many countries.

In fact, increasingly digitized education tools and the popularity of online learning have produced an unprecedented amount of data that provides invaluable opportunities for applying AI in education. Recent years have witnessed growing efforts from the AI research community devoted to advancing education, and promising results have been obtained in solving various critical problems. For example, AI tools are built to ease the workload for teachers: instead of grading each piece of work individually, which can take a significant amount of extra time, intelligent scoring tools allow teachers to have their students’ work graded automatically. In the coronavirus era, which has required many schools to move to online learning, the ability to give feedback at scale could provide needed support to teachers. Moreover, various AI-based models are trained on massive student behavioral and exercise data to take note of a student’s strengths and weaknesses, identifying where they may be struggling. These models can also generate instant feedback to instructors and help them improve their teaching effectiveness. Furthermore, by leveraging AI to connect disparate social networks amongst teachers, we may be able to provide greater resources for their planning, which have been shown to significantly affect students’ achievement.

Despite gratifying achievements that demonstrate the great potential and bright prospects of introducing AI into education, developing and applying AI technologies in educational practice is fraught with unique challenges, including, but not limited to, extreme data sparsity, lack of labeled data, and privacy issues. Hence, this workshop will focus on research progress in applying AI to education and on recent advances in handling the challenges encountered in AI educational practice.

Format

We propose a full day workshop with the following sessions:

  • Oral presentations: 10-minute presentations for oral papers.
  • Poster session: One poster session for all accepted papers, allowing interaction and personal feedback on the research.
  • Keynotes and invited talks: Several keynotes and invited talks by leading researchers in the area.
  • Panel discussion: Interactive Q&A session with a panel of leading researchers.

Attendance

25-50 people

Submissions

The workshop solicits paper submissions from participants (2–6 pages). Abstracts of the following flavors will be sought: (1) research ideas, (2) case studies (or deployed projects), (3) review papers, (4) best practice papers, and (5) lessons learned. The format is the standard double-column AAAI Proceedings Style. All submissions will be peer-reviewed. Some will be selected for spotlight talks, and some for the poster session.

Organizing Committee

Zitao Liu (main contact), TAL Education Group, liuzitao@tal.com, http://www.zitaoliu.com

Jiliang Tang (Michigan State University, tangjili@msu.edu, https://www.cse.msu.edu/~tangjili/), Lihan Zhao (TAL Education Group, zhaolihan@tal.com), and Xiao Zhai (TAL Education Group, zhaixiao@tal.com)

Additional Information

Workshop URL: http://ai4ed.cc/workshops/aaai2022


W11: Artificial Intelligence Safety (SafeAI 2022)

The accelerated developments in the field of Artificial Intelligence (AI) hint at the need for considering Safety as a design principle rather than an option. However, theoreticians and practitioners of AI and Safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, that force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, while considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI Safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, considering potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.

This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:

  • What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
  • How can we engineer trustable AI software architectures?
  • How can we make AI-based systems more ethically aligned?
  • What safety engineering considerations are required to develop safe human-machine interaction?
  • What AI safety considerations and experiences are relevant from industry?
  • How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
  • How can we develop solid technical visions and new paradigms about AI Safety?
  • How do metrics of capability and generality, and the trade-offs with performance affect safety?
The main interest of the workshop is a new perspective on system engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustable intelligent autonomy.

Topics

Contributions are sought in (but are not limited to) the following topics:

  • Safety in AI-based system architectures
  • Continuous V&V and predictability of AI safety properties
  • Runtime monitoring and (self-)adaptation of AI safety
  • Accountability, responsibility and liability of AI-based systems
  • Uncertainty in AI
  • Avoiding negative side effects in AI-based systems
  • Role and effectiveness of oversight: corrigibility and interruptibility
  • Loss of values and the catastrophic forgetting problem
  • Confidence, self-esteem and the distributional shift problem
  • Safety of AGI systems and the role of generality
  • Reward hacking and training corruption
  • Self-explanation, self-criticism and the transparency problem
  • Human-machine interaction safety
  • Regulating AI-based systems: safety standards and certification
  • Human-in-the-loop and the scalable oversight problem
  • Evaluation platforms for AI safety
  • AI safety education and awareness
  • Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

Format

To deliver a truly memorable event, we will follow a highly interactive format that will include invited talks and thematic sessions. The thematic sessions will be structured into short pitches and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters and paper discussants. The workshop will be organized as a full day meeting. Attendance is virtual and open to all. At least one author of each accepted submission must register and present the paper at the workshop.

Submissions

You are invited to submit:

  • Full technical papers (6-8 pages),
  • Proposals for technical talks (up to a one-page abstract including a short bio of the main speaker), and
  • Position papers (4-6 pages).

Manuscripts must be submitted as PDF files via EasyChair online submission system.

Please keep your paper format according to AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from: https://www.aaai.org/Publications/Templates/AuthorKit22.zip.

Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymous submissions.

Organizing Committee

Huáscar Espinoza (ECSEL JU), José Hernández-Orallo (Universitat Politècnica de València, Spain), Cynthia Chen (University of Hong Kong, China), Seán Ó hÉigeartaigh (University of Cambridge, UK), Xiaowei Huang (University of Liverpool, UK), Mauricio Castillo-Effen (Lockheed Martin, USA), Richard Mallah (Future of Life Institute, USA), John McDermid (University of York, UK), Gabriel Pedroza (CEA LIST)

Additional Information

Supplemental workshop site: http://safeaiw.org/


W12: Artificial Intelligence with Biased or Scarce Data

Despite rapid recent progress, it has proven to be challenging for Artificial Intelligence (AI) algorithms to be integrated into real-world applications such as autonomous vehicles, industrial robotics, and healthcare. A primary reason for this is the inherent long-tailed nature of our world, and the need for algorithms to be trained with large amounts of data that includes as many rare events as possible. However, these real-world applications typically translate to problem domains where it is extremely challenging to even obtain raw data, let alone annotated data. Even in cases where one is able to collect data, there are inherently many kinds of biases in this process, leading to biased models.

In light of these issues, and the ever-increasing pervasiveness of AI in the real world, we seek to provide a focused venue for academic and industry researchers and practitioners to discuss research challenges and solutions associated with building AI systems under data scarcity and/or bias.

Topics

We invite the submission of original, high-quality research papers on topics related to biased or scarce data. Topics for AIBSD 2022 include, but are not limited to:

  • Algorithms and theories for explainable and interpretable AI models.
  • Application-specific designs for explainable AI, e.g., healthcare, autonomous driving, etc.
  • Algorithms and theories for learning AI models under bias and scarcity.
  • Performance characterization of AI algorithms and systems under bias and scarcity.
  • Algorithms for secure and privacy-aware machine learning for AI.
  • Algorithms and theories for trustworthy AI models.
  • The role of adjacent fields of study (e.g., computational social science) in mitigating issues of bias and trust in AI.
  • Continuous refinement of AI models using active/online learning.
  • Meta-learning models from various existing task-specific AI models.
  • Brave new ideas to learn AI models under bias and scarcity.

Format

This one-day workshop will include invited talks from keynote speakers and oral/spotlight presentations of the accepted papers. Each oral presentation will be allocated 10-15 minutes, while spotlight presentations will be 2 minutes each. There will be live Q&A sessions at the end of each talk and oral presentation.

Attendance

We expect 50-75 participants, and potentially more based on our past experience. We cordially welcome researchers, practitioners, and students from academia and industry who are interested in understanding and discussing how data scarcity and bias can be addressed in AI.

Submissions

We welcome full paper submissions (up to 8 pages, excluding references or supplementary materials). The paper submissions must be in pdf format and use the AAAI official templates. All submissions must be anonymous and conform to AAAI standards for double-blind review. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings. At least one author of each accepted submission must present the paper at the workshop.

Submit to: https://cmt3.research.microsoft.com/AIBSD2022

Organizing Committee

Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories, kp388@cornell.edu), Ziyan Wu (UII America, Inc., wuzy.buaa@gmail.com)

Additional Information

Supplemental workshop site: https://aibsdworkshop.github.io/2022/index.html


W13: Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations (CLeaR)

This workshop brings together researchers from diverse backgrounds with different perspectives to discuss languages, formalisms, and representations that are appropriate for combining learning and reasoning. The workshop aims at bridging formalisms for learning and reasoning, such as neural and symbolic approaches, probabilistic programming, differentiable programming, Statistical Relational Learning, and the use of non-differentiable optimization in deep models. It highlights the importance of declarative languages that enable such integration by covering multiple formalisms at a high level, and points to the need for a new generation of ML tools that help domain experts design complex models in which they can declare their knowledge about the domain and use data-driven learning models based on various underlying formalisms. The workshop emphasizes the importance of integrative paradigms for solving the new wave of AI applications.

Topics

The main research questions and topics of interest include, but are not limited to:

  • Programming Languages, Domain specific languages, Libraries and software tools for integration of various learning and reasoning paradigms
  • Integration of probabilistic inference in training deep models,
  • Integration of neuro and symbolic approaches,
  • Integration of logical inference in training deep models,
  • Integration of Deep Learning and Relational Learning,
  • Integration of Deep learning and Constraint programming,
  • Declarative languages and differentiable programming,
  • Integration of declarative and procedural domain knowledge in learning,
  • Integration of non-differentiable optimization models in learning.

Format

This will be a one day workshop, including four invited speakers, one panel session, a number of oral presentations of the accepted long papers and two poster sessions for all accepted papers including short and long.

Attendance

We will issue a public call, and we assume the workshop will be of interest to many AAAI main conference attendees; we expect 50 participants. We allow both short (2-4 pages) and long (6-8 pages) papers. We will also accept extended abstracts of relevant, recently published work.

Submissions

Papers will be submitted via the OpenReview system (pending approval): https://openreview.net/forum?id=6uMNTvU-akO

Organizing Committee

Workshop Chair: Parisa Kordjamshidi, +1-2174187004, kordjams@msu.edu

Organizing Committee: Parisa Kordjamshidi (Michigan State University, kordjams@msu.edu), Behrouz Babaki (Mila/HEC Montreal, behrouz.babaki@mila.quebec), Sebastijan Dumančić (KU Leuven, sebastijan.dumancic@cs.kuleuven.be), Alex Ratner (University of Washington, ajratner@cs.washington.edu), Hossein Rajaby Faghihi (Michigan State University, rajabyfa@msu.edu), Hamid Karimian (Michigan State University, karimian@msu.edu), Dan Roth (University of Pennsylvania, danroth@seas.upenn.edu), and Guy Van den Broeck (University of California Los Angeles, guyvdb@cs.ucla.edu)

Additional Information

Supplemental workshop site: https://clear-workshop.github.io


W14: Deep Learning on Graphs: Methods and Applications (DLG-AAAI’22)

Topics of interest (including but not limited to)

We invite submission of papers describing innovative research and applications around the following topics. Papers that introduce new theoretical concepts or methods, help to develop a better understanding of new emerging concepts through extensive experiments, or demonstrate a novel application of these methods to a domain are encouraged.

  • Graph neural networks on node-level, graph-level embedding
  • Joint learning of graph neural networks and graph structure
  • Graph neural networks on graph matching
  • Dynamic/incremental graph-embedding
  • Learning representation on heterogeneous networks, knowledge graphs
  • Deep generative models for graph generation/semantic-preserving transformation
  • Graph2seq, graph2tree, and graph2graph models
  • Deep reinforcement learning on graphs
  • Adversarial machine learning on graphs
  • Spatial and temporal graph prediction and generation

And with particular focuses but not limited to these application domains:

  • Learning and reasoning (machine reasoning, inductive logic programming, theory proving)
  • Natural language processing (information extraction, semantic parsing, text generation)
  • Bioinformatics (drug discovery, protein generation, protein structure prediction)
  • Program synthesis and analysis
  • Reinforcement learning (multi-agent learning, compositional imitation learning)
  • Financial security (anti-money laundering)
  • Cybersecurity (authentication graph, Internet of Things, malware propagation)
  • Geographical network modeling and prediction (Transportation and mobility networks, social networks)
  • Computer vision (object relation, graph-based 3D representations like mesh)

Format

Our program consists of two sessions: an academic session and an industry session. The academic session will focus on the most recent research developments on GNNs in various application domains, while the industry session will emphasize practical industrial product development using GNNs. We will also have a panel discussion on the present and future of GNNs in both research and industry. In addition, several invited speakers with distinguished professional backgrounds will give talks on frontier topics in GNNs.

Desired length of the workshop: full day (~8 hours)

Attendance

Estimated audience size: 400-500 attendees (based on the number of attendees at the previous DLG workshops at KDD’19, AAAI’20, KDD’20, and AAAI’21). About 7-8 invited speakers who are distinguished professionals in deep learning on graphs will present frontier research topics. All the workshop chairs, most of the committee members, and the authors of the accepted papers will also attend the workshop.

Submissions

Submissions are limited to a total of 5 pages for initial submission (up to 6 pages for final camera-ready submission), excluding references or supplementary materials, and authors should only rely on the supplementary material to include minor details that do not fit in the 5 pages. All submissions must be in PDF format and formatted according to the new Standard AAAI Conference Proceedings Template. Following this AAAI conference submission policy, reviews are double-blind, and author names and affiliations should NOT be listed. Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings.

Submit to: https://easychair.org/conferences/?conf=dlg22

Workshop Chair

  • Lingfei Wu (JD.Com Silicon Valley Research Center), lwu@email.wm.edu, 757-634-5455, https://sites.google.com/a/email.wm.edu/teddy-lfwu/
  • Jian Pei (Simon Fraser University), jian_pei@sfu.ca, 778-782-6851, https://sites.google.com/view/jpei/jian-peis-homepage
  • Jiliang Tang (Michigan State University), tangjili@msu.edu, 408-744-2053, https://www.cse.msu.edu/~tangjili/
  • Yinglong Xia (Facebook AI), yinglongxia@gmail.com, 213-309-9908, https://sites.google.com/site/yinglongxia/
  • Xiaojie Guo (JD.Com Silicon Valley Research Center), Xguo7@gmu.edu, 571-224-5527, https://sites.google.com/view/xiaojie-guo-personal-site

Please email to Lingfei Wu: lwu@email.wm.edu for any query.

Organizing Committee

Yuanqi Du, George Mason University, USA; Jian Pei, Simon Fraser University, Canada; Charu Aggarwal, IBM Research AI, USA; Philip S. Yu, University of Illinois at Chicago, USA; Xuemin Lin, University of New South Wales, Australia; Jiebo Luo, University of Rochester, USA; Lingfei Wu, JD.Com Silicon Valley Research Center, USA; Yinglong Xia, Facebook AI, USA; Jiliang Tang, Michigan State University, USA; Peng Cui, Tsinghua University, China; William L. Hamilton, McGill University, Canada; Thomas Kipf, University of Amsterdam, Netherlands

Potential Workshop Committee

  • Ibrahim Abdelaziz, (IBM Research AI)
  • Sutanay Choudhury (Pacific Northwest National Lab)
  • Lingyang Chu (Simon Fraser University)
  • Tyler Derr (Michigan State University)
  • Stephan Günnemann (Technical University of Munich)
  • Balaji Ganesan, (IBM Research AI)
  • William L. Hamilton (McGill University)
  • Tengfei Ma (IBM Research AI)
  • Tian Gao (IBM Research AI)
  • Thomas Kipf (University of Amsterdam)
  • Renjie Liao (University of Toronto)
  • Yujia Li, (DeepMind)
  • Shen Wang, (University of Illinois at Chicago)
  • Liana Ling (IBM Research AI)
  • Yizhou Sun (University of California, Los Angeles)
  • Hanghang Tong (Arizona State University)
  • Richard Tong (Squirrel AI Learning)
  • Jian Tang (Mila)
  • Lingfei Wu (JD.Com Silicon Valley Research Center)
  • Qing Wang (IBM Research AI)
  • Yinglong Xia (Facebook AI)
  • Liang Zhao (George Mason University)
  • Dawei Zhou (Arizona State University)
  • Zhan Zheng (Washington University in St. Louis)
  • Feng Chen (University at Albany – State University of New York)

Additional Information

Workshop URL: https://deep-learning-graphs.bitbucket.io/dlg-aaai22/


W15: DE-FACTIFY: Multi-Modal Fake News and Hate-Speech Detection

Combating fake news is one of today’s burning societal crises. It is difficult to expose false claims before they create significant damage. Automatic fact/claim verification has recently become a topic of interest among diverse research communities. Research efforts and datasets on text-based fact verification exist, but there has been little attention to multi-modal or cross-modal fact verification. This workshop will encourage researchers from interdisciplinary domains working on multi-modality and/or fact-checking to come together and work on multimodal (images, memes, videos, etc.) fact-checking. At the same time, multimodal hate-speech detection is an important problem that has not received much attention. Lastly, learning joint modalities is of interest to both the Natural Language Processing (NLP) and Computer Vision (CV) communities.

Topics

The workshop is a forum to bring attention to collecting, measuring, managing, mining, and understanding multimodal disinformation, misinformation, and malinformation data from social media. It covers (but is not limited to) the following topics:

  • Development of corpora and annotation guidelines for multimodal fact checking
  • Computational models for multimodal fact checking
  • Development of corpora and annotation guidelines for multimodal hate speech detection and classification
  • Computational models for multimodal hate speech detection and classification
  • Analysis of diffusion of Multimodal fake news and hate speech in social networks
  • Understanding the impact of the hate content on specific groups (like targeted groups)
  • Fake news and hate speech detection in low resourced languages

Format

It is a one-day workshop and includes invited talks, interactive discussions, paper presentations, shared-task presentations, a poster session, etc. We expect 60-70 participants. Our preliminary plan for the schedule is as follows:

DEFACTIFY@AAAI-22 Program [tentative]
———————————————————–
9:00AM-9:15AM
Inauguration
A brief summary of the shared tasks – number of participants, best results

Session 1 – multimodal fact checking
Workshop papers – 9:30AM – 10:30AM

10:30AM – 11:00AM
Break

11:00AM – 12:00pm
Invited talk 1 – Prof. Rada Mihalcea, University of Michigan

12:00pm – 1:00pm
Lunch

Session 2 – Best 4/5 papers from FACTIFY & MEMOTION shared task
Workshop papers – 1:00PM – 2:00PM

2:00PM – 3:30PM
Invited talk 2 – Prof. Louis-Philippe Morency, CMU

3:30PM – 4:00PM
Break

Session 3 – multimodal hate speech
Workshop papers – 4:00PM – 5:00PM

5:00PM-5:15PM
Closing – vote of thanks

Submissions

We encourage long papers, short papers, and demo papers. Submissions will undergo double-blind review. Accepted papers are likely to be archived; we are in conversation with several publishers and will make an announcement once confirmed.

Primary Contact

Amitava Das (Wipro AI Labs; amitava.santu@gmail.com)

Organizing Committee

Workshop Chairs: Amitava Das (Wipro AI Labs) [India], Amit Sheth (University of South Carolina) [USA], Tanmoy Chakraborty (IIIT Delhi) [India], Asif Ekbal (IIT Patna) [India], Chaitanya Ahuja (CMU) [USA]

Student Volunteers

Parth Patwa (UCLA) [USA], Parul Chopra (CMU) [USA], Amrit Bhaskar (ASU) [USA], Nethra Gunti (IIIT Sri City) [India], Sathyanarayanan R. (IIIT Sri City) [India], Shreyash Mishra (IIIT Sri City) [India], S. Suryavardan (IIIT Sri City) [India]

Web Chair

Vishal Pallagani (University of South Carolina)

Additional Information

Supplemental workshop site: https://aiisc.ai/defactify/


W16: Dialog System Technology Challenge (DSTC10)

The main goal of the Dialog System Technology Challenge (DSTC) workshop is to share the results of the five main tracks of the tenth Dialog System Technology Challenge (DSTC10). We encourage all teams who participated in the challenge to join the workshop. In addition, any other work on dialog research is welcome in the general technical track.

Topics

Dialog systems and related technologies, including natural language processing, audio and speech processing, and vision information processing.

Format

A two-day workshop to share knowledge and research on the five tracks of DSTC10 and a general related technical track. This will include invited talks, poster sessions, and a panel to discuss the achievements of the past DSTC series and future directions.

Attendance

Attendance is open to any interested participants at AAAI-22. We will specifically invite participants of the DSTC10 tasks, track organizers, and authors of accepted papers in the general technical track.

Submissions

Submissions must follow the formatting guidelines for AAAI-22. All submissions must be anonymous and conform to the AAAI standard for double-blind review. Papers may consist of up to seven pages of technical content plus up to two additional pages for references. Papers will be received via the CMT system.

Submission site: https://cmt3.research.microsoft.com/DSTC102022

Organizing Committee Chair

Koichiro Yoshino,
Address: 2-2-2, Seika, Sohraku, Kyoto, 6190288, Japan
Affiliation: RIKEN
Phone: +81-774-95-1360
Email: koichiro.yoshino@riken.jp

Workshop Co-chair

Yun-Nung (Vivian) Chen
Address: No. 1, Sec. 4, Roosevelt Rd., Taipei, Taiwan
Affiliation: National Taiwan University
Phone: +1-412-465-0130
Email: yvchen@csie.ntu.edu.tw

Workshop Co-chair

Paul Crook
Address: 1 Hacker Way, Menlo Park, CA, USA
Affiliation: Facebook
Phone: +1-650-885-0094
Email: pacrook@fb.com

Additional Information

DSTC 10 home: https://dstc10.dstc.community/home
DSTC 10 CFPs: https://dstc10.dstc.community/calls_1/call-for-workshop-papers


W17: Engineering Dependable and Secure Machine Learning Systems (EDSMLS 2022) (Half-Day)

Nowadays, machine learning solutions are widely deployed. Like other systems, ML systems must meet quality requirements. However, ML systems may be non-deterministic; they may re-use high-quality implementations of ML algorithms; and the semantics of the models they produce may be incomprehensible. Consequently, standard notions of software quality and reliability such as deterministic functional correctness, black-box testing, code coverage, and traditional software debugging become practically irrelevant for ML systems. This calls for novel methods, methodologies, and tools to address the quality and reliability challenges of ML systems.

In addition, broad deployment of ML software in networked systems inevitably exposes ML software to attacks. While classical security vulnerabilities are relevant, ML techniques have additional weaknesses, some already known (e.g., sensitivity to training data manipulation), and some yet to be discovered. Hence, there is a need for research and practical solutions to ML security problems.
With these in mind, this workshop solicits original contributions addressing problems and solutions related to dependability, quality assurance and security of ML systems. The workshop combines several disciplines, including ML, software engineering (with emphasis on quality), security, and game theory. It further combines academia and industry in a quest for well-founded practical solutions.

Topics

Topics of interest include, but are not limited to:

  • Vulnerability, sensitivity and attacks against ML
  • Adversarial ML and adversary-based learning models
  • Strategy-proof ML algorithms
  • Case studies of successful and unsuccessful applications of ML techniques
  • Correctness of data abstraction, data trust
  • Choice of ML techniques to meet security and quality
  • Size of the training data, implied guarantees
  • Application of classical statistics to ML systems quality
  • Sensitivity to data distribution diversity and distribution drift
  • The effect of labeling costs on solution quality (semi-supervised learning)
  • Reliable transfer learning
  • Software engineering aspects of ML systems and quality implications
  • Testing of the quality of ML systems over time
  • Debugging of ML systems
  • Quality implication of ML algorithms on large-scale software systems

Format

One day, comprising a keynote, paper presentations, and panel sessions. Full papers are allocated a 20-minute presentation and 10-minute discussion; short papers, a 10-minute presentation and 5-minute discussion.

Submissions

Full (8 pages) and short (4 pages, work in progress) papers, AAAI style. Submission at: https://easychair.org/my/conference?conf=edsmls2022. Authors of accepted papers will be invited to participate.

Organizing Committee

Onn Shehory, Bar Ilan University (onn.shehory@biu.ac.il), Eitan Farchi, IBM Research Haifa (farchi@il.ibm.com), Guy Barash, Western Digital (Guy.Barash@wdc.com)

Additional Information

Supplemental workshop site: https://sites.google.com/view/edsmls-2022/home


W18: Explainable Agency in Artificial Intelligence

As Artificial Intelligence (AI) begins to impact our everyday lives, industry, government, and society with tangible consequences, it becomes increasingly important for a user to understand the reasons and models underlying an AI-enabled system’s decisions and recommendations. Explainable Agency captures the idea that AI systems will need to be trusted by human agents and, as autonomous agents themselves, “must be able to explain their decisions and the reasoning that produced their choices” (Langley et al., 2017). While most work on XAI has focused on opaque learned models, this workshop also highlights the need for interactive AI-enabled agents to explain their decisions and models.

This workshop aims to bring together researchers and practitioners working on different facets of these problems, from diverse backgrounds to share challenges, new directions, recent research results, and lessons from applications. We especially welcome research from fields including but not limited to AI, human-computer interaction, human-robot interaction, cognitive science, human factors, and philosophy.

Topics

With this in mind, we welcome relevant contributions on the following (and related) topic areas:

  • Explainable Agents
  • Explainable/Interpretable Machine Learning
  • Explainable Reinforcement Learning
  • Explainable Planning
  • Agent Policy Summarization
  • Human-AI Interaction
  • Human-Robot Interaction
  • Cognitive Theories
  • Philosophical Foundations
  • Interaction Design for XAI
  • XAI Evaluation
  • Fairness, Accountability and Transparency
  • XAI Domains and Benchmarks
  • Interactive Teaching Strategies and Explainability
  • Intelligent Tutoring
  • User Modeling

Submissions

Submissions must be in PDF format, written in English, and formatted according to the AAAI camera-ready style. All papers will be peer-reviewed in a single-blind process (i.e., please include author names, affiliations, and email addresses on your first page).

Submitted papers will be assessed based on their novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. The following paper categories are welcome:

  • Novel Research Contribution describing original methods and/or results (6 pages plus references)
  • Surveys summarizing and organizing recent research results (up to 8 pages plus references)
  • Demonstrations detailing applications of research findings, and/or debating relevant challenges and issues in the field (4 pages plus references)

Submission site: https://sites.google.com/view/eaai2022/call

Organizing Committee

Silvia Tulli (Dept. Computer Science and Engineering, INESC-ID, IST Ulisboa, Lisbon, Portugal – currently at Sorbonne University, Paris, France – silvia.tulli@gaips.inesc-id.pt), Prashan Madumal (Science and Information Systems, University of Melbourne, Parkville, Australia – pmathugama@student.unimelb.edu.au), Mark T. Keane (School of Computer Science, University College Dublin, Dublin, Ireland – mark.keane@ucd.ie), David W. Aha (Navy Center for Applied Research in AI, Naval Research Laboratory, Washington, DC, USA – david.aha@nrl.navy.mil)

Program Committee

Adam Johns (Drexel University, Philadelphia, PA USA), Tathagata Chakraborti (IBM Research AI, Cambridge, MA USA), Kim Baraka (VU University Amsterdam, Netherlands), Isaac Lage (Harvard University, Cambridge, MA USA), David Martens (University of Antwerp, Belgium), Mohamed Chetouani (Sorbonne Université, Paris, France), Peter Flach (University of Bristol, United Kingdom), Kacper Sokol (University of Bristol, United Kingdom), Ofra Amir (Technion, Haifa, Israel), Dimitrios Letsios (King’s College London, London, United Kingdom)

Additional Information

Supplemental workshop site: https://sites.google.com/view/eaai-ws-2022/topic


W19: Graphs and More Complex Structures for Learning and Reasoning (GCLR)

The study of complex graphs is a highly interdisciplinary field that aims to understand complex systems by using mathematical models, physical laws, and inference and learning algorithms. Complex systems are often characterized by several components that interact with each other in multiple ways. Such systems are better modeled by complex graph structures such as edge- and vertex-labeled graphs (e.g., knowledge graphs), attributed graphs, multilayer graphs, hypergraphs, and temporal/dynamic graphs. In this second edition of the GCLR (Graphs and more Complex structures for Learning and Reasoning) workshop, we will focus on various complex structures along with inference and learning algorithms for them. Current research in this area focuses on extending existing ML algorithms, as well as network science measures, to these complex structures. This workshop aims to bring researchers from these diverse but related fields together to discuss new challenging applications that require complex system modeling and to discover ingenious reasoning methods. We have invited several distinguished speakers whose research interests span the theoretical and experimental aspects of complex networks.

Topics

We invite submissions from participants who can contribute to the theory and applications of modeling complex graph structures such as hypergraphs, multilayer networks, multi-relational graphs, heterogeneous information networks, multi-modal graphs, signed networks, bipartite networks, temporal/dynamic graphs, etc. The topics of interest include, but are not limited to:

  • Constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP)
  • Learning with Multi-relational graphs (alignment, knowledge graph construction, completion, reasoning with knowledge graphs, etc.)
  • Learning with algebraic or combinatorial structure
  • Link analysis/prediction, node classification, graph classification, clustering for complex graph structures
  • Network representation learning
  • Theoretical analysis of graph algorithms or models
  • Optimization methods for graphs/manifolds
  • Probabilistic and graphical models for structured data
  • Social network analysis and measures
  • Unsupervised graph/manifold embedding methods

The papers will be presented in poster format and some will be selected for oral presentation. Through invited talks and presentations by the participants, this workshop will bring together current advances in Network Science as well as Machine Learning, and set the stage for continuing interdisciplinary research discussions.

Important Dates

Poster/short/position papers submission deadline: Nov 5, 2021
Full paper submission deadline: Nov 5, 2021
Paper notification: Dec 3, 2021

Format

This is a 1-day workshop involving talks by pioneer researchers from respective areas, poster presentations, and short talks of accepted papers.

Attendance

Attendance requires registration for the conference/workshop as per AAAI norms. We expect 50-65 people at the workshop.

Submissions

We invite submissions to the AAAI-22 workshop on Graphs and more Complex structures for Learning and Reasoning, to be held virtually on February 28 or March 1, 2022. We welcome submissions in the following two formats:

  • Poster/short/position papers: We encourage participants to submit preliminary but interesting ideas that have not been published before as short papers. These submissions would benefit from additional exposure and discussion that can shape a better future publication. We also invite papers that have been published at other venues to spark discussions and foster new collaborations. Submissions may consist of up to 4 pages plus one additional page solely for references.
  • Full papers: Submissions must represent original material that has not appeared elsewhere for publication and that is not under review for another refereed publication. Submissions may consist of up to 7 pages of technical content plus up to two additional pages solely for references.

The submissions should adhere to the AAAI paper guidelines.

Accepted submissions will have the option of being posted online on the workshop website. Authors who do not wish their papers to be posted online should mention this in their submission. Submissions must be anonymized.

Submission Site: See the webpage https://sites.google.com/view/gclr2022/submissions for detailed instructions and the submission link.

Workshop Chair

Balaraman Ravindran (Indian Institute of Technology Madras, India – ravi@cse.iitm.ac.in)

Workshop Committee

Balaraman Ravindran (Indian Institute of Technology Madras, India; primary contact: ravi@cse.iitm.ac.in), Kristian Kersting (TU Darmstadt, Germany, kersting@cs.tu-darmstadt.de), Sriraam Natarajan (University of Texas at Dallas, USA, Sriraam.Natarajan@utdallas.edu), Ginestra Bianconi (Queen Mary University of London, UK, ginestra.bianconi@gmail.com), Philip S. Chodrow (University of California, Los Angeles, USA, phil@math.ucla.edu), Tarun Kumar (Indian Institute of Technology Madras, India, tkumar@cse.iitm.ac.in), Deepak Maurya (Purdue University, USA, maurya@cse.iitm.ac.in), Shreya Goyal (Indian Institute of Technology Madras, India, Goyal.3@iitj.ac.in)

Additional Information

Workshop URL: https://sites.google.com/view/gclr2022/


W20: Health Intelligence (W3PHIAI-22)

Public health authorities and researchers collect data from many sources and analyze these data together to estimate the incidence and prevalence of different health conditions, as well as related risk factors. Modern surveillance systems employ tools and techniques from artificial intelligence and machine learning to monitor direct and indirect signals and indicators of disease activity for early, automatic detection of emerging outbreaks and other health-relevant patterns. To provide proper alerts and timely responses, public health officials and researchers systematically gather news and other reports about suspected disease outbreaks, bioterrorism, and other events of potential international public health concern from a wide range of formal and informal sources. Given the ever-increasing role of the World Wide Web as a source of information in many domains, including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. This is especially the case for non-traditional online resources such as social networks, blogs, news feeds, Twitter posts, and online communities, given the sheer size and ever-increasing growth and change rate of their data. Web applications along with text processing programs are increasingly being used to harness online data and information to discover meaningful patterns identifying emerging health threats. The advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.

Moreover, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All these changes require novel solutions, and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks. The goal of this workshop is to focus on creating and refining AI-based approaches that (1) process personalized data, (2) help patients (and families) participate in the care process, (3) improve patient participation, (4) help physicians utilize this participation to provide high quality and efficient personalized care, and (5) connect patients with information beyond that available within their care setting. The extraction, representation, and sharing of health data, patient preference elicitation, personalization of “generic” therapy plans, adaptation to care environments and available health expertise, and making medical information accessible to patients are some of the relevant problems in need of AI-based solutions.

Topics

The workshop will include original contributions on theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications, with a focus on applications in population and personalized health. The scope of the workshop includes, but is not limited to, the following areas:

  • Knowledge Representation and Extraction
  • Integrated Health Information Systems
  • Patient Education
  • Patient-Focused Workflows
  • Shared Decision Making
  • Geographical Mapping and Visual Analytics for Health Data
  • Social Media Analytics
  • Epidemic Intelligence
  • Predictive Modeling and Decision Support
  • Semantic Web and Web Services
  • Biomedical Ontologies, Terminologies, and Standards
  • Bayesian Networks and Reasoning under Uncertainty
  • Temporal and Spatial Representation and Reasoning
  • Case-based Reasoning in Healthcare
  • Crowdsourcing and Collective Intelligence
  • Risk Assessment, Trust, Ethics, Privacy, and Security
  • Sentiment Analysis and Opinion Mining
  • Computational Behavioral/Cognitive Modeling
  • Health Intervention Design, Modeling and Evaluation
  • Online Health Education and E-learning
  • Mobile Web Interfaces and Applications
  • Applications in Epidemiology and Surveillance (e.g., Bioterrorism, Participatory Surveillance, Syndromic Surveillance, Population Screening)
  • Hybrid methods, combining data driven and predictive forward models
  • Response to Covid-19

We also invite participants to an interactive hack-a-thon. The theme of the hack-a-thon will be decided before submission closes and will focus on finding creative solutions to novel problems in health. Participants will be given access to publicly available datasets and will be asked to use tools from AI and ML to generate insight from the data. Examples of the datasets which may be considered are the DBTex Radiology Mammogram dataset and the Johns Hopkins COVID-19 case reports. The aim of the hack-a-thon is not only to foster innovation and potentially provide answers to outstanding research problems, but also to engage the community and create new collaborations.

Submissions

We invite workshop participants to submit their original contributions following the AAAI format through EasyChair. Three categories of contributions are sought: full-research papers up to 8 pages; short papers up to 4 pages; and posters and demos up to 2 pages. Participants in the hack-a-thon will be asked to either register as a team or be randomly assigned to a team after registration. Their results will be submitted in either a short paper or poster format. Dataset(s) will be provided to hack-a-thon participants.

Organizing Committee

Martin Michalowski, PhD, FAMIA (Co-chair), University of Minnesota; Arash Shaban-Nejad, PhD, MPH (Co-chair), The University of Tennessee Health Science Center – Oak-Ridge National Lab (UTHSC-ORNL) Center for Biomedical Informatics; Simone Bianco, PhD (Co-chair), IBM Almaden Research Center; Szymon Wilk, PhD, Poznan University of Technology; David L. Buckeridge, MD, PhD, McGill University; John S. Brownstein, PhD, Boston Children’s Hospital

Additional Information

Workshop URL: http://w3phiai2022.w3phi.com/


W21: Human-Centric Self-Supervised Learning (HC-SSL)

Self-supervised learning (SSL) has shown great promise in problems involving natural language and vision modalities. Nonetheless, human-centric problems (such as activity recognition, pose estimation, affective computing, BCI, health analytics, and others) rely on information modalities with specific spatiotemporal properties. To adapt SSL frameworks to build effective deep learning solutions for human-centric data, a number of key challenges and opportunities need to be explored. The goal of the inaugural HC-SSL workshop is to highlight and facilitate discussions in this area, expose attendees to the emerging potential of SSL for human-centric representation learning, and promote responsible AI within the context of SSL.

Topics

The workshop invites contributions on novel methods, innovations, applications, and broader implications of SSL for processing human-related data, including (but not limited to):

  • activity recognition
  • pose estimation
  • speech processing
  • affective computing
  • biomedical signal analysis/modeling (EEG, ECG, PPG, EMG, fMRI, IMU, medical/clinical data, etc.)

In addition to the above, papers that consider the following are also invited:

  • responsible development of human-centric SSL (e.g., safety, limitations, societal impacts, and unintended consequences)
  • ethical and legal implications of using SSL on human-centric data
  • implications of SSL on robustness and fairness
  • implications of SSL on privacy and security
  • interpretability and explainability of human-centric SSL frameworks

Manuscripts that fit only certain aspects of the workshop are also invited. For example:

  • if your work broadly addresses the use of unlabeled human-centric data with unsupervised or semi-supervised learning
  • if your work focuses on architectures and frameworks for SSL for sensory data beyond CV and NLP (but not necessarily human-centric data)

Format

The workshop will be a 1-day event with a number of invited talks by prominent researchers, a panel discussion, and a combination of oral and poster presentations of accepted papers.

Submissions

  • The AAAI template https://aaai.org/Conferences/AAAI-22/aaai22call/ should be used for all submissions.
  • Two types of submissions will be considered: full papers (6-8 pages + references), and short papers (2-4 pages + references).
  • Publication in HC-SSL does not prohibit authors from publishing their papers in archival venues such as NeurIPS/ICLR/ICML or IEEE/ACM Conferences and Journals. We also welcome submissions that are currently under consideration in such archival venues.
  • Submissions will go through a double-blind review process.

Submission site: https://cmt3.research.microsoft.com/AAAI2022HCSSL/Submission/Index

Workshop Chair

Ali Etemad (Queen’s University, ali.etemad@queensu.ca)

Organizing Committee

Ali Etemad (Queen’s University, ali.etemad@queensu.ca), Ahmad Beirami (Facebook AI, ahmad.beirami@gmail.com), Akane Sano (Rice University, akane.sano@rice.edu), Aaqib Saeed (Philips Research & University of Cambridge, aqibsaeed@protonmail.com), Alireza Sepas-Moghaddam (Socure, alireza.sepasm@socure.com), Mathilde Caron (Inria & Facebook AI, mathilde@fb.com), Pritam Sarkar (Queen’s University & Vector Institute, pritam.sarkar@queensu.ca), Huiyuan Yang (Rice University, hy48@rice.edu)

Additional Information

Supplemental website: https://hcssl.github.io/AAAI-22/


W22: Information-Theoretic Methods for Causal Inference and Discovery (ITCI’22)

Causal inference is one of the main areas of focus in the artificial intelligence (AI) and machine learning (ML) communities. Causality has received significant interest in ML in recent years, in part due to its utility for generalization and robustness. It is also central to tackling decision-making problems such as reinforcement learning, policy design, and experimental design. Information-theoretic approaches provide a novel set of tools that can expand the scope of classical approaches to causal inference and discovery problems in a variety of applications. Some examples of the success of information theory in causal inference are the use of directed information, minimum entropy couplings, and common entropy for bivariate causal discovery; the use of the information bottleneck principle, with applications to the generalization of machine learning models; and the analysis of causal structures of deep neural networks with information theory.
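As a toy illustration of the kinds of quantities involved, the mutual information between two discrete variables can be read off directly from their joint distribution via the identity I(X;Y) = H(X) + H(Y) - H(X,Y). The sketch below is our own, not part of any workshop material; function names are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint pmf given as a 2-D array."""
    px = joint.sum(axis=1)   # marginal of X
    py = joint.sum(axis=0)   # marginal of Y
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Perfectly correlated bits: I(X;Y) = H(X) = 1 bit.
print(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])))  # 1.0
# Independent bits: I(X;Y) = 0 bits.
print(mutual_information(np.outer([0.5, 0.5], [0.5, 0.5])))    # 0.0
```

The harder problems discussed at the workshop concern estimating such quantities from finite samples and continuous data, where plug-in formulas like this one no longer suffice.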

The goal of ITCI’22 is to bring together researchers working at the intersection of information theory, causal inference, and machine learning in order to foster new collaborations and provide a venue to brainstorm new ideas; to showcase causal inference and discovery to the information theory community as an application area and highlight important technical challenges motivated by practical ML problems; to draw the attention of the wider machine learning community to the problems at the intersection of causal inference and information theory; and to demonstrate the utility of information-theoretic tools for tackling causal ML problems.

Topics

Topics include but are not limited to:

  • Novel algorithmic solutions to causal inference or discovery problems using information-theoretic tools or assumptions.
  • Applications of causal inference and discovery in machine learning/deep learning motivated by information-theoretic approaches (e.g. information bottleneck principle)
  • Characterization of fundamental limits of causal quantities using information theory.
  • Identification of information-theoretic quantities relevant for causal inference and discovery.

Format

ITCI’22 will be a one-day workshop. The program consists of poster sessions for accepted papers, and invited and spotlight talks. Attendance is open to all; at least one author of each accepted paper must be virtually present at the workshop.

Submissions

Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-22 author kit, and must be anonymized. Papers will be peer-reviewed and selected for spotlight and/or poster presentation at the workshop.

Submission site: https://cmt3.research.microsoft.com/ITCI2022

Organizing Committee

Murat Kocaoglu, Chair (Purdue University, mkocaoglu@purdue.edu), Negar Kiyavash (EPFL, negar.kiyavash@epfl.ch), Todd Coleman (UCSD, tpcoleman@ucsd.edu)

Additional Information

Supplemental workshop site: https://sites.google.com/view/itci22


W23: Information Theory for Deep Learning (IT4DL)

Despite the great success of deep neural networks (DNNs) in many artificial intelligence (AI) tasks, they still suffer from limitations such as poor generalization to out-of-distribution (OOD) data, vulnerability to adversarial examples, and their “black-box” nature. Furthermore, DNNs are data-greedy in the context of supervised learning, and are not well developed for learning with limited labels, for instance in semi-supervised, self-supervised, or unsupervised settings.

Information theory has demonstrated great potential to solve the above challenges. In recent years, various information theoretic principles have also been applied to different deep learning related AI applications in fruitful and unorthodox ways. Notable examples include the information bottleneck (IB) approach on the explanation of the generalization behavior of DNNs and the information maximization principle in visual representation learning.

With the rapid development of advanced techniques at the intersection of information theory and machine learning, such as neural network-based and matrix-based mutual information estimators, tighter information-theoretic generalization bounds, deep generative models, and causal representation learning, information-theoretic methods can provide new perspectives on the central issues of generalization, robustness, and explainability in deep learning, and offer new solutions to a range of deep learning related AI applications.

This workshop aims to bring together both academic researchers and industrial practitioners to share visions on the intersection between information theory and deep learning, and their practical uses in different AI applications.
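As a small, hedged illustration of the matrix-based estimators mentioned above: the matrix-based Rényi α-entropy is computed from the eigenvalue spectrum of a trace-normalized Gram matrix, sidestepping density estimation entirely. The numpy sketch below is our own simplification; the kernel width, default α, and function name are our choices, not taken from the workshop.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy: S_alpha(A) = log2(sum_i lam_i**alpha) / (1 - alpha),
    where lam_i are the eigenvalues of a trace-normalized Gram matrix A."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    K = np.exp(-sq / (2 * sigma ** 2))   # Gaussian Gram matrix, K_ii = 1
    A = K / n                            # so trace normalization is simply K / n
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return float(np.log2(np.sum(lam ** alpha)) / (1 - alpha))

# Eight points spread far apart act like eight distinct states: entropy -> log2(8) = 3 bits.
X = 100.0 * np.arange(8, dtype=float)[:, None]
print(matrix_renyi_entropy(X))  # 3.0
```

Because the estimate depends only on a Gram matrix over mini-batch samples, this style of estimator plugs directly into DNN training objectives, which is part of its appeal for the applications listed below.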

Topics

The workshop organizers invite paper submissions on the following (and related) topics:

  • Information theoretic quantities (entropy, mutual information, divergence) estimation
  • Information theoretic methods for out-of-domain generalization and relevant problems (such as robust transfer learning and lifelong learning)
  • Information theoretic methods for learning from limited labelled data, such as few-shot learning, zero-shot learning, self-supervised learning, and unsupervised learning
  • Information theoretic methods for the robustness of DNNs in AI systems
  • The explanation of deep learning models (in AI systems) with information-theoretic methods
  • Information theoretic methods in different AI applications (e.g., NLP, healthcare, robotics, finance)

Format

This will be a one-day workshop featuring invited speakers, poster presentations, and short oral presentations of selected accepted papers.

Submissions

We invite submissions of technical papers of up to 7 pages excluding references and appendices. Extended abstracts of up to 2 pages are also welcome. Submissions must be in PDF format, written in English, and formatted according to the AAAI camera-ready style. All papers will be peer reviewed, single-blind.

Submit to: Submissions should be made via EasyChair at https://easychair.org/conferences/?conf=it4dl.

Organizing Committee

Jose C. Principe (University of Florida, principe@cnel.ufl.edu), Robert Jenssen (UiT – The Arctic University of Norway, robert.jenssen@uit.no), Badong Chen (Xi’an Jiaotong University, chenbd@mail.xjtu.edu.cn), Shujian Yu (UiT – The Arctic University of Norway, yusj9011@gmail.com)

Additional Information

Supplemental workshop site: https://www.it4dl.org/


W24: Interactive Machine Learning

Recent years have witnessed growing interest in human-AI systems, with the increasing realisation that machines can indeed meet the objectives they are given; the real question is whether they have been given the right objectives. Interactive Machine Learning (IML) is concerned with the development of algorithms that enable machines to cooperate with human agents. A key challenge is how to integrate people into the learning loop in a way that is transparent, efficient, and beneficial to the human-AI team as a whole, supporting different requirements and users with different levels of expertise.

Advances in IML promise to make AIs more accessible and controllable, more compatible with the values of their human partners and more trustworthy. Such advances would enrich the range of applicability of semi-autonomous systems to real-world tasks, most of which involve cooperation with one or more human partners. This workshop aims to bring together researchers from industry and academia and from different disciplines in AI and surrounding areas to explore challenges and innovations in IML.

Topics

  • Novel mechanisms for eliciting and consuming user feedback, recommender, structured and generative models, concept acquisition, data processing, optimization
  • HCI and visualization challenges
  • Analysis of human factors/cognition and user modelling
  • Design, testing and assessment of IML systems
  • Studies on risks of interaction mechanisms, e.g., information leakage and bias
  • Business use cases and applications

Format

This one-day workshop will consist of: (1) an ice-breaking session, (2) paper presentations, (3) a poster session, and (4) an ideation brainstorming session. We have the following keynote speakers confirmed: Andreas Holzinger (Medical Univ. of Graz), Cynthia Rudin (Duke Univ.) and Simone Stumpf (Univ. of London).

Attendance

Attendance is open to all with prior registration for the workshop/conference. At least one author of each accepted submission must register and present their paper at the workshop. Expected attendance is 40-50 people.

Submissions

Submissions should be formatted using the AAAI-2022 Author Kit. Long papers (up to 6 pages + references) and extended abstracts (2 pages + references) are welcome, including resubmissions of already accepted papers, work-in-progress, and position papers. The review process will be single blind.

Submit to: https://easychair.org/conferences/?conf=imlaaai22

Workshop Chair

Elizabeth Daly
Address: IBM Dublin Technology Campus, Dublin 15, Ireland
Email: elizabeth.daly@ie.ibm.com

Organizing Committee

Elizabeth Daly, IBM Research, Ireland (elizabeth.daly@ie.ibm.com), Öznur Alkan, IBM Research, Ireland (oalkan2@ie.ibm.com), Stefano Teso, University of Trento, Italy (stefano.teso@unitn.it), Wolfgang Stammer, TU Darmstadt, Germany (wolfgang.stammer@cs.tu-darmstadt.de)

Additional Information

Workshop URL: https://sites.google.com/view/aaai22-imlw


W25: Knowledge Discovery from Unstructured Data in Financial Services (Half-Day)

Knowledge discovery from various data sources has gained the attention of many practitioners in recent decades. Its capabilities have expanded from processing structured data (e.g., database transactions) to unstructured data (e.g., text, images, and videos). In spite of substantial research focusing on discovery from news, web, and social media data, its application to datasets in professional settings, such as financial filings and government reports, still presents huge challenges. In the financial services industry in particular, a large amount of financial analysts’ work requires knowledge discovery and extraction from different data sources, such as SEC filings and industry reports, before they can conduct any analysis. This manual extraction process is usually inefficient, error-prone, and inconsistent. It is one of the key bottlenecks for financial services companies seeking to improve their operating productivity. These challenges call for robust artificial intelligence (AI) algorithms and systems to help. The automated processing of unstructured data to discover knowledge from complex financial documents requires a series of techniques such as linguistic processing, semantic analysis, and knowledge representation and reasoning. The design and implementation of these AI techniques to meet financial business needs requires a joint effort between academic researchers and industry practitioners.

Topics

  • Representation learning, distributed representation learning, and encoding in natural language processing for financial documents;
  • Synthetic or genuine financial datasets and benchmarking baseline models;
  • Transfer learning application on financial data, knowledge distillation as a method for compression of pre-trained models or adaptation to financial datasets;
  • Search and question answering systems designed for financial corpora;
  • Named-entity disambiguation, recognition, relationship discovery, ontology learning and extraction in financial documents;
  • Knowledge alignment and integration from heterogeneous data;
  • Using multi-modal data in knowledge discovery for financial applications;
  • AI assisted data tagging and labeling;
  • Data acquisition, augmentation, feature engineering, and analysis for investment and risk management;
  • Automatic data extraction from financial filings and quality verification;
  • Event discovery from alternative data and its impact on an organization’s equity price;
  • AI systems for relationship extraction and risk assessment from legal documents;
  • Accounting for Black-Swan events in knowledge discovery methods

Although textual data is prevalent in a large amount of finance-related business problems, we also encourage submissions of studies or applications pertinent to finance using other types of unstructured data such as financial transactions, sensors, mobile devices, satellites, social media, etc.

Format

This half-day workshop will focus on research into the use of AI techniques to extract knowledge from unstructured data in financial services. The program will include invited talks, paper presentations, and a panel discussion. We plan to invite 2-4 keynote speakers from prestigious universities and leading industrial companies. The workshop plans to invite about 50-75 participants.

Submissions

All submissions must be original contributions and will be peer reviewed, single-blind. All submissions must follow the AAAI-22 formatting guidelines (camera-ready style). We accept two types of submissions: full research papers no longer than 8 pages (including references) and short/poster papers of 2-4 pages.

Submission site: https://easychair.org/conferences/?conf=kdf22

Organizing Committee

Chair: Xiaomo Liu (J.P. Morgan Chase AI Research, xiaomo.liu@jpmchase.com)

Zhiqiang Ma (J.P. Morgan Chase AI Research), Armineh Nourbakhsh (J.P. Morgan Chase AI Research), Sameena Shah (J.P. Morgan Chase AI Research), Gerard de Melo (Hasso Plattner Institute), Le Song (Mohamed bin Zayed University of Artificial Intelligence)

Additional Information

Workshop URL: https://aaai-kdf.github.io/kdf2022/


W26: Learning Network Architecture during Training

A fundamental problem in the use of artificial neural networks is that the first step is to guess the network architecture. Fine-tuning a neural network is very time-consuming and far from optimal. Hyperparameters such as the number of layers, the number of nodes in each layer, the pattern of connectivity, and the presence and placement of elements such as memory cells, recurrent connections, and convolutional elements are all manually selected. If the architecture turns out not to be appropriate for the task, the user must repeatedly adjust it and retrain the network until an acceptable architecture is obtained.

There is now a great deal of interest in finding better alternatives to this scheme. Options include pruning a trained network or training many networks automatically. In this workshop we would like to focus on a contrasting approach, to learn the architecture during training. This topic encompasses forms of Neural Architecture Search (NAS) in which the performance properties of each architecture, after some training, are used to guide the selection of the next architecture to be tried. This topic also encompasses techniques that augment or alter the network as the network is trained. An example of the latter is the Cascade Correlation algorithm, as well as others that incrementally build or modify a neural network during training, as needed for the problem at hand.
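As a toy sketch of the "grow the network during training" idea: the loop below adds one candidate hidden unit at a time, feeding it the raw inputs plus all previously kept units (as in cascade architectures), and keeps the unit only if it lowers the training error. This is a deliberately simplified stand-in of our own, not Fahlman and Lebiere's actual Cascade-Correlation algorithm, which trains each candidate to maximize correlation with the residual error rather than drawing it at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a linear readout alone cannot fit.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def fit_readout(hidden, y):
    """Least-squares linear readout on hidden features (plus a bias column)."""
    Hb = np.hstack([hidden, np.ones((hidden.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Hb, y, rcond=None)
    return w, Hb @ w

hidden = np.empty((X.shape[0], 0))      # start with no hidden units
_, pred = fit_readout(hidden, y)
err = float(np.mean((pred - y) ** 2))   # bias-only model: MSE = 0.25 on XOR

for _ in range(200):                    # cap on candidate units tried
    if err <= 1e-6 or hidden.shape[1] >= 20:
        break
    # Candidate unit sees the raw inputs plus all previously kept units.
    inp = np.hstack([X, hidden, np.ones((X.shape[0], 1))])
    v = rng.normal(size=inp.shape[1])
    unit = np.tanh(inp @ v)
    cand = np.hstack([hidden, unit[:, None]])
    _, pred2 = fit_readout(cand, y)
    err2 = float(np.mean((pred2 - y) ** 2))
    if err2 < err:                      # keep the unit only if it helps
        hidden, err = cand, err2

print(f"kept {hidden.shape[1]} hidden units, train MSE {err:.2e}")
```

With three or more independent hidden units, the four XOR points can be fit exactly by the linear readout, so the loop typically stops after only a handful of accepted candidates; the architecture, not just the weights, is determined by training.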

Main Objectives

Our previous workshop at AAAI-21 generated significant interest from the community. We hope to build upon that success.

Our goal is to build a stronger community of researchers exploring these methods, and to find synergies among these related approaches and alternatives. Eliminating the need to guess the right topology in advance of training is a prominent benefit of learning network architecture during training. Additional advantages are possible, including decreased computational resources to solve a problem, reduced time for the network to make predictions, reduced requirements for training set size, and avoiding “catastrophic forgetting”. We would especially like to highlight approaches that are qualitatively different from some popular but computationally intensive NAS methods.

As deep learning problems become increasingly complex, network sizes must increase and other architectural decisions become critical to success. The deep learning community must often confront serious time and hardware constraints from suboptimal architectural decisions. The growing popularity of NAS methods demonstrates the community’s hunger for better ways of choosing or evolving network architectures that are well-matched to the problem at hand.

Topics

Methods for learning network architecture during training, including incrementally building neural networks during training, as well as new performance benchmarks for such methods. Novel approaches and works in progress are encouraged.

Format

Invited speakers, panels, poster sessions, and presentations.

It is anticipated that this will be an in-person workshop, subject to changing travel restrictions and health measures. We will also have a video component for remote participation.

Attendance

Attendance is open to all, subject to any room occupancy constraints. At least one author of each accepted submission must be present at the workshop.

Submissions

Please refer to, and submit through, the Learning Network Architecture During Training workshop website, which has more detailed information.

Organizing Committee

Scott E. Fahlman, School of Computer Science, Carnegie Mellon University (sef@cs.cmu.edu), Edouard Oyallon, Sorbonne Université – LIP6 (Edouard.oyallon@lip6.fr), Dean Alderucci, School of Computer Science, Carnegie Mellon University (dalderuc@cs.cmu.edu)


W27: Machine Learning for Operations Research (ML4OR) (Half-Day)

The AAAI Workshop on Machine Learning for Operations Research (ML4OR) builds on the momentum that has developed over the past five years, in both the OR and ML communities, towards establishing modern ML methods as a “first-class citizen” at all levels of the OR toolkit. ML4OR will serve as an interdisciplinary forum for researchers in both fields to discuss technical issues at this interface and present ML approaches that apply to basic OR building blocks (e.g., integer programming solvers) or specific applications.

Topics

ML4OR will place particular emphasis on: (1) ML methodologies for enhancing traditional OR algorithms for integer programming, combinatorial optimization, stochastic programming, multi-objective optimization, location and routing problems, etc.; (2) Deep Learning (DL) approaches that can exploit large datasets, particularly Graph Neural Networks (GNNs) and Deep Reinforcement Learning (DRL); (3) End-to-end learning methodologies that bridge the gap between ML model training and downstream optimization problems that use ML predictions as inputs; (4) Datasets and benchmark libraries that enable ML approaches for a particular OR application or challenging combinatorial problems.

Format

ML4OR is a one-day workshop consisting of a mix of events: multiple invited talks by recognized speakers from both OR and ML covering central theoretical, algorithmic, and practical challenges at this intersection; a number of technical sessions where researchers briefly present their accepted papers; a virtual poster session for accepted papers and abstracts; a panel discussion with speakers from academia and industry focusing on the state of the field and promising avenues for future research; an educational session on best practices for incorporating ML in advanced OR courses including open software and data, learning outcomes, etc.

While we are planning an in-person workshop to be held at AAAI-22, we aim to accommodate attendees who may not be able to travel to Vancouver by allowing participation via live virtual invited talks and virtual poster sessions.

Submissions

We invite researchers to submit either full-length research papers (8 pages) or extended abstracts (2 pages), describing novel contributions and preliminary results, respectively, on the topics above; a more extensive list of topics is available on the Workshop website. Submissions tackling new problems or more than one of the aforementioned topics simultaneously are encouraged. Submissions will be collected via the OpenReview platform; the URL is forthcoming on the Workshop website.

Organizing Committee

Ferdinando Fioretto (Syracuse University), Emma Frejinger (Université de Montréal), Elias B. Khalil (University of Toronto), Pashootan Vaezipoor (University of Toronto)

Additional Information

Workshop URL: https://ml4or.github.io/


W28: Optimal Transport and Structured Data Modeling (OTSDM)

The last few years have seen the rapid development of mathematical methods for modeling structured data coming from biology, chemistry, network science, natural language processing, and computer vision applications. Recently developed tools and cutting-edge methodologies coming from the theory of optimal transport have proved to be particularly successful for these tasks. A striking feature of much of this recent work is the application of new theoretical and computational techniques for comparing probability distributions defined on spaces with complex structures, such as graphs, Riemannian manifolds and more general metric spaces.

This workshop aims to provide a premier interdisciplinary forum for researchers in different communities to discuss the most recent trends, innovations, applications, and challenges of optimal transport and structured data modeling.

Topics

The goal of this workshop is to bring together the optimal transport, artificial intelligence, and structured data modeling communities, gathering insights from each of these fields to facilitate collaboration and interaction. We invite thought-provoking submissions and talks on a range of topics in these fields. The topics of interest include but are not limited to:

Theoretical and Computational Optimal Transport:

  • Optimal transport theory, including statistical and geometric aspects;
  • Gromov-Wasserstein distance and its variants;
  • Geometry of spaces of structured data;
  • Computational optimal transport.

Optimal Transport-Driven Machine Learning:

  • Bayesian inference for/with optimal transport;
  • Gromovization of machine learning methods;
  • Optimal transport-based generative modeling;
  • Optimal transport-based machine learning paradigms;
  • Trustworthy machine learning from the perspective of optimal transport.

Optimal Transport-Based Structured Data Modeling:

  • Optimal transport-based analysis of structured data, such as networks, meshes, sequences, and so on;
  • The applications of optimal transport in molecule analysis, network analysis, natural language processing, computer vision, and bioinformatics.

Format

The full-day workshop will start with two long talks and one short talk in the morning. The post-lunch session will feature one long talk, two short talks, and a poster session. We will end the workshop with a panel discussion among invited speakers from different fields to identify future directions.

Invited Speakers

Long talks (50 mins):
Gabriel Peyré (Mathematics, CNRS Senior Researcher);
Yusu Wang (Mathematics, Professor in CSE, UCSD);
Caroline Uhler (Statistics and CS, Associate Professor in EECS and IDSS, MIT).

Short talks (25 mins):
Titouan Vayer (Mathematics, Postdoctoral Researcher at ENS Lyon);
Tam Le (Computer Science, Research Scientist at RIKEN);
Dixin Luo (Computer Science, Assistant Professor in CS, Beijing Institute of Technology).

Submissions

We invite the submission of papers of 4-6 pages. References will not count towards the page limit. Papers must be in PDF format, in English, and formatted according to the AAAI template. Submissions will be peer-reviewed (single-blind) and assessed on their novelty, technical quality, significance, clarity, and relevance to the workshop topics. Submissions introducing interesting experimental phenomena and open problems of optimal transport and structured data modeling are welcome as well. Submissions that are already accepted or under review at another conference, or already accepted for a journal, will not be considered. This policy also applies to papers that overlap substantially in technical content with papers previously published, accepted, or under review.

The submission website is https://cmt3.research.microsoft.com/OTSDM2022.

Key Dates

All times are 23:59 AoE (Anywhere on Earth).

  • Submission deadline: November 12, 2021
  • Notification date: December 3, 2021
  • Workshop day: February 28 or March 1, 2022

Organizing Committee

Hongteng Xu (Renmin University of China, hongtengxu@ruc.edu.cn, main contact), Julie Delon (Université de Paris, julie.delon@u-paris.fr), Facundo Mémoli (Ohio State University, facundo.memoli@gmail.com), Tom Needham (Florida State University, tneedham@fsu.edu)

Additional Information

Workshop site: https://ot-sdm.github.io


W29: Practical Deep Learning in the Wild (PracticalDL2022)

Deep learning has achieved significant success for artificial intelligence (AI) in multiple fields. However, research in the AI field also shows that the performance of deep models in the wild is far from practical, owing to a lack of model efficiency and of robustness to open-world data and scenarios. Regarding efficiency, it is impractical to train a neural network containing billions of parameters and then deploy it to an edge device. Regarding robustness, noisy input data frequently occurs in open-world scenarios, which presents critical challenges for building robust AI systems in practice. Existing research also suggests that there is a trade-off between the robustness and accuracy of deep learning models.

These complex demands have brought profound implications and an explosion of interest in the topic of this workshop, namely building practical AI with efficient and robust deep learning models. To the best of our knowledge, this is the first workshop to focus on practical deep learning in the wild.

Topics

The workshop organizers invite paper submissions on the following (and related) topics:

  • Network compression
  • Adversarial attacks on deep learning systems
  • Neural architecture search (NAS)
  • Robust architectures against adversarial attacks
  • Hardware implementation and on-device deployment
  • Benchmarks for evaluating model robustness
  • On-device learning
  • Few-shot detection
  • New methodologies and architectures for efficient and robust deep learning

Important Dates

  • November 14, 2021 – Submission Deadline
  • December 3, 2021 – Acceptance Notification
  • February 28, 2022 – Workshop Date

Format

The workshop will be a 1.5-day meeting.

The workshop will include several technical sessions; a virtual poster session where presenters can discuss their work and further foster collaborations; multiple invited talks covering crucial aspects of practical deep learning in the wild, especially efficient and robust deep learning; tutorial talks; a challenge on efficient deep learning with solution presentations; and a concluding panel discussion.

Attendance

Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Submissions

Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-22 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentations at the workshop.

The submission website is https://cmt3.research.microsoft.com/PracticalDL2022.

Invited Speakers

Alan Yuille (Professor, Johns Hopkins University); Hao Su (Assistant Professor, UC San Diego); Rongrong Ji (Professor, Xiamen University); Xianglong Liu (Professor, Beihang University); Jishen Zhao (Associate Professor, UC San Diego); Tom Goldstein (Associate Professor, University of Maryland); Cihang Xie (Assistant Professor, UC Santa Cruz); Yisen Wang (Assistant Professor, Peking University); Bohan Zhuang (Assistant Professor, Monash University)

Workshop Chair

Haotong Qin (Beihang University), Yingwei Li (Johns Hopkins University), Ruihao Gong (SenseTime Research), Xinyun Chen (UC Berkeley), Aishan Liu (Beihang University), Xin Dong (Harvard University)

Workshop Committee (Incomplete list)

Jindong Guo (University of Munich), Yuhang Li (Yale University), Yiming Li (Tsinghua University), Yifu Ding (Beihang University), Mingyuan Zhang (Nanyang Technological University), Jiakai Wang (Beihang University), Jinyang Guo (University of Sydney), Renshuai Tao (Beihang University)

Additional Information

Workshop site: https://practical-dl.github.io/


W30: Privacy-Preserving Artificial Intelligence

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: They often reveal personal sensitive information that can be exploited, without the knowledge and/or consent of the involved individuals, for various purposes including monitoring, discrimination, and illegal activities.

The third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22) builds on the success of the previous editions, PPAI-20 and PPAI-21, to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and societal impact of privacy in AI.

Topics

The workshop organizers invite paper submissions on the following (and related) topics:

  • Applications of privacy-preserving AI systems
  • Attacks on data privacy
  • Differential privacy: theory and applications
  • Distributed privacy-preserving algorithms
  • Human rights and privacy
  • Privacy and fairness
  • Privacy policies and legal issues
  • Privacy-preserving optimization and machine learning
  • Privacy-preserving test cases and benchmarks
  • Surveillance and societal issues

Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.

Format

The workshop will be a one-day meeting. It will include a number of technical sessions; a virtual poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited talks covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and a tutorial talk; it will conclude with a panel discussion. Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Submissions

Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-22 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentation at the workshop.

The submission website is https://cmt3.research.microsoft.com/PPAI2022.

Organizing Committee

Ferdinando Fioretto (Syracuse University), Aleksandra Korolova (University of Southern California), Pascal Van Hentenryck (Georgia Institute of Technology)

Additional Information

Supplemental Workshop site: https://aaai-ppai22.github.io/


W31: Reinforcement Learning for Education: Opportunities and Challenges

RL4ED is intended to facilitate tighter connections between researchers and practitioners interested in the broad areas of reinforcement learning (RL) and education (ED). The workshop will focus on two thrusts: 1) Exploring how we can leverage recent advances in RL methods to improve state-of-the-art technology for ED; 2) Identifying unique challenges in ED that can help nurture technical innovations and next breakthroughs in RL.

Topics

Topics of interest include but are not limited to: (1) Survey papers summarizing recent advances in RL with applicability to ED; (2) Developing toolkits and datasets for applying RL methods to ED; (3) Using RL for online evaluation and A/B testing of different intervention strategies in ED; (4) Novel applications of RL for ED problem settings; (5) Using pedagogical theories to narrow the policy space of RL methods; (6) Using RL methodology as a computational model of students in open-ended domains; (7) Developing novel offline RL methods that can efficiently leverage historical student data; (8) Combining statistical power of RL with symbolic reasoning to ensure the robustness for ED.

Format

This 1-day workshop will include a mixture of invited speakers, panels (including discussion with the audience), and presentations from authors of accepted submissions.

Attendance

We welcome attendance from individuals who do not have something they’d like to submit but who are interested in RL4ED research. If you are interested, please send a short email to rl4edorg@gmail.com and we can add you to the invitee list.

Submissions

We welcome two types of submissions:

    • Research track papers reporting the results of ongoing or new research, which have not been published before. In particular, we encourage papers covering late-breaking results and work-in-progress research.
    • Encore track papers that have been recently published, or accepted for publication in a conference or journal.

Submissions are due by 12 November 2021. Please refer to https://rl4ed.org/aaai2022/index.html for additional information.

Submission URL: https://easychair.org/conferences/?conf=rl4edaaai22.

Workshop Chairs

      • Neil T. Heffernan, Worcester Polytechnic Institute (Worcester, MA, USA)
        Email: nth@wpi.edu, Webpage: https://www.neilheffernan.net/
      • Andrew S. Lan, University of Massachusetts Amherst (Amherst, MA, USA)
        Email: andrewlan@cs.umass.edu, Webpage: https://people.umass.edu/~andrewlan/
      • Anna N. Rafferty, Carleton College (Northfield, MN, USA)
        Email: arafferty@carleton.edu, Webpage: https://sites.google.com/site/annanrafferty/
      • Adish Singla, Max Planck Institute for Software Systems (Saarbrucken, Germany)
        Email: adishs@mpi-sws.org, Webpage: https://machineteaching.mpi-sws.org/adishsingla.html

Additional Information

Supplemental Workshop site: https://rl4ed.org/aaai2022/index.html


W32: Reinforcement Learning in Games (RLG)

Games provide an abstract and formal model of environments in which multiple agents interact: each player has a well-defined goal, and rules describe the effects of interactions among the players. The first achievements in playing these games at a super-human level were attained with methods that relied on and exploited manually designed domain expertise (e.g., chess, checkers). In recent years, we have seen examples of general approaches that learn to play these games via self-play reinforcement learning (RL), as first demonstrated in Backgammon. While progress has been impressive, we believe we have just scratched the surface of what is possible, and much work remains to be done in order to truly understand the algorithms and learning processes within these environments.

Topics

The main objective of the workshop is to bring researchers together to discuss ideas, preliminary results, and ongoing research in the field of reinforcement learning in games.

We invite participants to submit papers by the 12th of November, based on but not limited to, the following topics: RL in various formalisms: one-shot games, turn-based, and Markov games, partially-observable games, continuous games, cooperative games; deep RL in games; combining search and RL in games; inverse RL in games; foundations, theory, and game-theoretic algorithms for RL; opponent modeling; analyses of learning dynamics in games; evolutionary methods for RL in games; RL in games without the rules; search and planning; and online learning in games.

Format

RLG is a full-day workshop. It will start with a 60-minute mini-tutorial covering the basics of RL in games, and will include 2-4 invited talks by prominent contributors to the field, paper presentations, a poster session, and will close with a discussion panel. Attendance is expected to be 150-200 participants (estimated), including organizers and speakers.

Submissions

Papers must be between 4 and 8 pages in the AAAI submission format, with the eighth page containing only references. Papers will be submitted electronically using EasyChair. Accepted papers will not be archived, and we explicitly allow papers that are concurrently submitted to, currently under review at, or recently accepted by other conferences or venues.

Submission instructions will be available at the workshop web page.

Workshop Chair

Viliam Lisy (viliam.lisy@fel.cvut.cz)

Organizing Committee

Viliam Lisy (Czech Technical University in Prague, viliam.lisy@fel.cvut.cz), Noam Brown (Facebook AI Research, noambrown@fb.com), Martin Schmid (DeepMind, mschmid@google.com)

Additional Information

Supplemental Workshop site: http://aaai-rlg.mlanctot.info/


W33: Robust Artificial Intelligence System Assurance (RAISA) (Half-Day)

The workshop on Robust Artificial Intelligence System Assurance (RAISA) will focus on research, development and application of robust artificial intelligence (AI) and machine learning (ML) systems. Rather than studying robustness with respect to particular ML algorithms, our approach will be to explore robustness assurance at the system architecture level, during both development and deployment, and within the human-machine teaming context. While the research community is converging on robust solutions for individual AI models in specific scenarios, the problem of evaluating and assuring the robustness of an AI system across its entire life cycle is much more complex. Moreover, the operational context in which AI systems are deployed necessitates consideration of robustness and its relation to principles of fairness, privacy, and explainability.

RAISA’s systems-level perspective will be emphasized via three main thrusts:

      • AI System Robustness: Participants will consider techniques for detecting and mitigating vulnerabilities at each of the processing stages of an AI system, including: the input stage of sensing and measurement, the data conditioning stage, the training and application of machine learning algorithms, the human-machine teaming stage, and operational use.
      • The robust development and assured deployment of AI systems: Participants will discuss how to leverage and update common software development paradigms, e.g., DevSecOps, to incorporate relevant aspects of system-level AI assurance.
      • The impact of robustness assurance on other AI ethics principles: RAISA will also explore aspects related to ethical AI that overlap and interact with robustness concerns, including security, fairness, privacy, and explainability.

Topics

AI threat modeling, AI system robustness, explainable AI, system lifecycle attacks, system verification and validation, robustness benchmarks and standards, robustness to black-box and white-box adversarial attacks, defenses against training, operational and inversion attacks, AI system confidentiality, integrity, and availability, AI system fairness and bias

Format

Full-day event featuring a panel, invited and keynote speakers, and presentations selected through a CFP.

Attendance

25-50 attendees, including invited speakers and authors of accepted papers

Submissions

PDF suitable for the arXiv repository (4 to 8 pages). Previously published work (or work under review) is acceptable.

Submission link: https://easychair.org/cfp/raisa-2022

Workshop Chair

William Streilein, MIT Lincoln Laboratory, 244 Wood St., Lexington, MA, 02420, (781) 981-7200, wws@ll.mit.edu

Organizing Committee

Olivia Brown (MIT Lincoln Laboratory, Olivia.Brown@ll.mit.edu), Rajmonda Caceres (MIT Lincoln Laboratory, Rajmonda.Caceres@ll.mit.edu), Tina Eliassi-Rad (Northeastern University, teliassirad@northeastern.edu), David Martinez (MIT Lincoln Laboratory, dmartinez@ll.mit.edu), Sanjeev Mohindra (MIT Lincoln Laboratory, smohindra@ll.mit.edu), Elham Tabassi (National Institute of Standards and Technology, elham.tabassi@nist.gov)

Additional Information

Workshop URL: https://sites.google.com/view/raisa-2022/


W34: Scientific Document Understanding (SDU) (Half-Day)

Scientific documents such as research papers, patents, books, or technical reports are one of the most valuable resources of human knowledge. At the AAAI-22 Workshop on Scientific Document Understanding (SDU@AAAI-22), we aim to gather insights into the recent advances and remaining challenges in scientific document understanding. Researchers from related fields are invited to submit papers on recent advances, resources, tools, and upcoming challenges for SDU. In addition, we propose a shared task on one of the challenging SDU problems, i.e., acronym extraction and disambiguation in multilingual text.

Topics

Topics of interest include but are not limited to:

      • Information extraction and information retrieval for scientific documents;
      • Question answering and question generation for scholarly documents;
      • Word sense disambiguation, acronym identification and expansion, and definition extraction;
      • Document summarization, text mining, document topic classification, and machine reading comprehension for scientific documents;
      • Graph analysis applications including knowledge graph construction and representation, graph reasoning, and querying knowledge graphs;
      • Biomedical image processing, scientific image plagiarism detection, and data visualization;
      • Code/pseudo-code generation from text, and image/diagram captioning;
      • New language understanding resources such as new syntactic/semantic parsers, language models, or techniques to encode scholarly text;
      • Survey or analysis papers on scientific document understanding and new tasks and challenges related to each scientific domain;
      • Factuality, data verification, and anti-science detection

Shared Task

Acronyms, i.e., short forms of long phrases, are common in scientific writing. To push forward the research on acronym understanding in scientific text, we propose two shared tasks on acronym extraction (i.e., recognizing acronyms and phrases in text) and disambiguation (i.e., finding the correct expansion for an ambiguous acronym). Participants are welcome to submit their system reports to be presented at the workshop.

Format

SDU will be a one-day workshop. It will start with opening remarks followed by long research paper presentations in the morning. The post-lunch session includes invited talks, shared task winners’ presentations, and a panel discussion on the resources, findings, and upcoming challenges. SDU will also host a session for presenting the short research papers and the system reports of the shared tasks.

Attendance

SDU is expected to host 50-60 attendees. Invited speakers, committee members, authors of research papers, and participants in the shared tasks are invited to attend.

Submissions

Submissions should follow the AAAI 2022 formatting guidelines and the AAAI 2022 standards for double-blind review including anonymous submission. SDU accepts both long (8 pages including references) and short (4 pages including references) papers. Accepted papers will be published in the workshop proceedings. System reports should also follow the AAAI 2022 formatting guidelines and have 4-6 pages including references. System reports will be presented during poster sessions.

Please submit papers and system reports via EasyChair.

Organizing Committee

Thien Huu Nguyen (University of Oregon, thien@cs.uoregon.edu), Walter Chang (Adobe Research, wachang@adobe.com), Amir Pouran Ben Veyseh (University of Oregon, apouranb@uoregon.edu), Viet Dac Lai (University of Oregon, viet@uoregon.edu), Franck Dernoncourt (Adobe Research, franck.dernoncourt@adobe.com)

Additional Information

Workshop URL: https://sites.google.com/view/sdu-aaai22/home


W35: Self-Supervised Learning for Audio and Speech Processing

Babies learn their first language through listening, talking, and interacting with adults. Can AI achieve the same goal without much low-level supervision? Inspired by this question, there is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised learning utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data available on the web to train large networks and solve complicated tasks. BERT and GPT in NLP and SimCLR and BYOL in CV are famous examples in this direction. Recently, self-supervised approaches for speech/audio processing have also been gaining attention. Two workshops on similar topics were hosted at ICML 2020 and NeurIPS 2020, and both received positive feedback and overwhelming participation. We are excited to continue promoting innovation in self-supervision for the speech/audio processing fields and inspiring these fields to contribute to the general machine learning community. The goal of this workshop is to connect researchers in self-supervision inside and outside the speech and audio fields to discuss cutting-edge technology, inspire ideas and collaborations, and drive the research frontier.

Topics

The workshop welcomes the submission of work on, but not limited to, the following research directions:
  • New self-supervised proxy tasks or new approaches using self-supervised models in speech and audio processing.
  • Theoretical or empirical studies focusing on understanding why self-supervision methods work for speech and audio.
  • Exploring the limits of self-supervised learning approaches for speech and audio processing, for example, adverse environment conditions, multiple languages, or generalization across downstream tasks.
  • Comparison or integration of self-supervised learning methods and other semi-supervised and transfer learning methods in speech and audio processing tasks.
  • Self-supervised learning approaches involving the interaction of speech/audio and other modalities.

The workshop also welcomes participants of the SUPERB and Zero Speech challenges to submit their results.

  • SUPERB is a benchmarking platform that allows the community to train, evaluate, and compare the speech representations on diverse downstream speech processing tasks. The challenge requires participants to build competitive models for diverse downstream tasks with limited labeled data and trainable parameters, by reusing self-supervised pre-trained networks.
  • The Zero Speech challenge is to build language models based only on audio or audio-visual information, without using any textual input. The trained models are intended to assign scores to novel utterances, assessing whether they are possible or likely utterances in the training language.

Format

This one-day workshop will bring concentrated discussions on self-supervision for the field of speech/audio processing via a keynote speech, invited talks, contributed talks and posters based on community-submitted high-quality papers, and results presentations from the SUPERB and Zero Speech challenges.

Attendance

Attendance is open to all; at least one author of each accepted submission must be physically/virtually present at the workshop. We expect ~60 attendees.

Submissions

Papers must be between 4 and 8 pages in the AAAI submission format, submitted to one of three tracks: regular paper, SUPERB result paper, or Zero Speech result paper. Each paper will be reviewed by three reviewers in a double-blind process. Accepted papers will not be archived but will be hosted on the workshop website. We allow papers that are concurrently submitted to or currently under review at other conferences or venues; we encourage authors to contact the organizers to discuss possible overlap.

Submission Site: https://cmt3.research.microsoft.com/SAS2022

Organizing Committee

Abdelrahman Mohamed (Facebook, abdo@fb.com), Hung-yi Lee (NTU, hungyilee@ntu.edu.tw), Shinji Watanabe (CMU, shinjiw@ieee.org), Tara Sainath (Google, tsainath@google.com), Karen Livescu (TTIC, klivescu@ttic.edu), Shang-Wen Li (Facebook, shangwel@fb.com), Ewan Dunbar (University of Toronto, ewan.dunbar@utoronto.ca), Emmanuel Dupoux (EHESS/Facebook, dpx@fb.com)

Additional Information

Workshop URL: https://aaai-sas-2022.github.io/


W36: Trustable, Verifiable and Auditable Federated Learning

Federated learning (FL) is a promising machine learning approach that trains a collective machine learning model on data owned by various parties without sharing the raw data. It leverages many emerging privacy-preserving technologies (SMC, homomorphic encryption, differential privacy, etc.) to protect data owners' privacy in FL. It has gained popularity in domains such as image classification, speech recognition, smart cities, and healthcare. However, FL also faces multiple challenges that may limit its application in real-world scenarios. For example, FL is still at risk of various kinds of attacks that may result in leakage of individual data sources' private information or in degraded joint model accuracy. In other words, many existing FL solutions are still exposed to various security and privacy threats. This workshop aims to bring together FL researchers and practitioners to address these additional security and privacy threats and challenges in FL, paving the way for its mass adoption and widespread acceptance in the community. The discussions in the workshop can lead to FL solutions that are more accurate, robust, and interpretable, and that gain the trust of FL participants.
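
For readers unfamiliar with the setting, the core FL loop the description alludes to can be sketched in a few lines: each party trains on its own private data, and a server only averages the resulting model weights (the FedAvg idea). The linear-regression task, learning rate, and function names below are all illustrative assumptions, not part of any workshop system.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One party's local training: a few gradient steps of linear
    regression on its private data. Only the weights leave this function."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_average(w, parties):
    """Server step: average the locally updated weights (FedAvg);
    the raw (X, y) data never leaves each party."""
    return np.mean([local_update(w.copy(), X, y) for X, y in parties], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three parties, each holding its own private dataset drawn from the same model.
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = federated_average(w, parties)
assert np.allclose(w, true_w, atol=0.05)
```

Many of the workshop topics (secure aggregation, differential privacy, attacks and defenses) concern exactly this weight-exchange step, since even shared weights can leak information about the underlying data.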

Topics

Topics of interest include, but are not limited to:

  • Interpretable Federated Learning
  • Trade-Off between Privacy-Preserving and Explainable Federated Learning
  • Federated Learning Multi-Party Computation
  • Federated Learning Homomorphic Encryption
  • Federated Learning Differential Privacy
  • Federated Transfer Learning
  • Federated Learning Personalization Techniques
  • Federated Learning Attacks and Defenses
  • Federated Learning Blockchain Network
  • Federated Learning Secure Aggregation
  • Federated Learning Fairness and Accuracy
  • Federated Learning with Non-IID Data
  • Federated Learning Incentive Mechanism
  • Federated Learning Meets Mean-Field Game Theory
  • Federated Learning-based Corporate Social Responsibility
  • Socially Responsible Federated Learning
  • Decentralized Federated Learning
  • Vertical Federated Learning

Format

The workshop is a full day. Each accepted paper presentation will be allocated between 15 and 20 minutes. The invited speakers, who are well-recognized experts in the field, will each give a 30-minute talk.

Attendance

Previous editions of this workshop typically received around 40-60 submissions each, of which around 20-30 papers were accepted. When held physically, each workshop attracted around 150-300 participants.

Submissions

We invite long research papers (8 pages) and demo papers (4 pages), including references. Submitted papers must be written in English and in PDF format according to the AAAI camera-ready style.

The submission website is https://easychair.org/conferences/?conf=fl-aaai-22.

Organizing Committee

Qiang Yang, Hong Kong University of Science and Technology/ WeBank, China, (qyang@cse.ust.hk ), Sin G. Teo, Institute for Infocomm Research, Singapore (teosg@i2r.a-star.edu.sg), Han Yu, Nanyang Technological University, Singapore (han.yu@ntu.edu.sg), Lixin Fan, WeBank, China (lixinfan@webank.com), Chao Jin, Institute for Infocomm Research, Singapore (jin_chao@i2r.a-star.edu.sg), Le Zhang, University of Electronic Science and Technology of China (zhangleuestc@gmail.com), Yang Liu, Tsinghua University, China (liuy03@air.tsinghua.edu.cn), Zengxiang Li, Digital Research Institute, ENN Group, China (lizengxiang@enn.cn)

Additional Information

Workshop site: http://federated-learning.org/fl-aaai-2022/


W37: Trustworthy AI for Healthcare

In this workshop, we aim to address the trustworthiness of clinical AI solutions. We aim to bring together researchers in AI, healthcare, medicine, NLP, social science, etc., and to facilitate discussions and collaborations in developing trustworthy AI methods that are reliable and more acceptable to physicians. Previous healthcare-related workshops focused on how to develop AI methods to improve the accuracy and efficiency of clinical decision-making, including diagnosis, treatment, and triage; the trustworthiness of clinical AI methods was not discussed. In our workshop, we specifically focus on trustworthiness issues in AI for healthcare, aiming to make clinical AI methods more reliable in real clinical settings and willingly used by physicians.

Topics

  • Interpretable AI methods for healthcare
  • Robustness of clinical AI methods
  • Medical knowledge grounded AI
  • Physician-in-the-loop AI
  • Security and privacy in clinical AI
  • Fairness in AI for healthcare
  • Ethics in AI for healthcare
  • Robust and interpretable natural language processing for healthcare
  • Methods for robust weak supervision

Format

The workshop will be a one-day event, featuring speakers, panelists, and poster presenters from machine learning, biomedical informatics, natural language processing, statistics, and behavioral science.

Attendance

At AAAI 2021, we successfully organized this workshop (https://taih20.github.io/). We received 38 paper submissions and accepted 23 of them. The workshop attracted about 100 attendees.

Submissions

We invite submissions of full papers, as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted. Papers should be up to 4 pages in length (excluding references), formatted using the AAAI template. All submissions should be anonymous. Accepted papers may also be submitted to other conference venues; this workshop has no archival proceedings.

The submission website is https://cmt3.research.microsoft.com/TAIH2022.

Organizing Committee

  • Pengtao Xie (main contact), Assistant Professor, University of California, San Diego, pengtaoxie2008@gmail.com Engineer Ln, San Diego, CA 92161 (Tel)4123206230
  • Marinka Zitnik, Assistant Professor, Harvard University, marinka@hms.harvard.edu 10 Shattuck Street, Boston, MA 02115 (Tel)6503086763
  • Byron Wallace, Assistant Professor, Northeastern University, byron@ccs.neu.edu 177 Huntington Ave, Boston, MA 02115 (Tel)4135120352
  • Eric P. Xing, Professor, Carnegie Mellon University, epxing@cs.cmu.edu 5000 Forbes Ave, Pittsburgh, PA 15213 (Tel)4122682559
  • Ramtin Hosseini, PhD Student, University of California, San Diego, rhossein@eng.ucsd.edu (Tel) 3104293825

Additional Information

Workshop site: https://taih21.github.io/


W38: Trustworthy Autonomous Systems Engineering (TRASE-22)

Advances in AI technology, particularly in perception and planning, have enabled unprecedented advances in autonomy, with autonomous systems playing an increasingly important role in day-to-day life, with applications including IoT, drones, and autonomous vehicles. In nearly all applications, the reliability, safety, and security of such systems is a critical consideration. For example, failures in IoT can result in infrastructure disruptions, and failures in autonomous cars can lead to congestion and crashes. While there have been extensive independent research threads on the safety and reliability of specific sub-problems in autonomy, such as robust control, as well as recent considerations of robust AI-based perception, there has been considerably less research investigating robustness and trust in end-to-end autonomy, where AI-based perception is integrated with planning and control in an open loop. This workshop on Trustworthy Autonomous Systems Engineering (TRASE) offers an opportunity to highlight state-of-the-art research in trustworthy autonomous systems, as well as to provide a vision for future foundational and applied advances in this critical area at the intersection of AI and cyber-physical systems.

The mission of the TRASE workshop is to bring together researchers from multiple engineering disciplines, including Computer Science and Computer, Mechanical, Electrical, and Systems Engineering, who focus their energies on understanding both specific TRASE subproblems, such as perception, planning, and control, and robust, reliable end-to-end integration of autonomy.

Topics

We are interested in a broad range of topics, both foundational and applied. Topics of interest include, but are not limited to:

  • Security and reliability of AI
  • Robust visual perception
  • Robust control
  • Reliability in real-time systems
  • Robust dynamical systems
  • Robust planning
  • Ethics and fairness in autonomous systems
  • Robust multiagent systems
  • Robust robotic design, particularly of autonomous drones and/or vehicles

Important Dates

  • Submissions due: November 12
  • Acceptance decisions: December 3
  • Workshop dates: February 28/March 1

Submissions

Paper submissions will be in two formats: research papers (8 pages) and position papers (4 pages):

  • Research papers (8 pages in length for main content + 2 pages for references in AAAI format): we are soliciting research papers, both relatively mature and early stage, on foundational and applied topics related to autonomous systems engineering.

  • Position papers (4 pages in length for main content + 2 pages for references in AAAI format): we are seeking position papers that advocate for a particular approach or set of approaches, or present an overview of a promising relevant research area.

The submission website is https://easychair.org/conferences/?conf=trase2022.

Organizing Committee

Yevgeniy Vorobeychik (Washington University in St. Louis), Bruno Sinopoli (Washington University in St. Louis), Jinghan Yang (Washington University in St. Louis), Bo Li (UIUC), Atul Prakash (University of Michigan)

Additional Information

Supplemental Workshop site: https://jinghany.github.io/trase2022/


W39: Video Transcript Understanding (Half-Day)

Videos have become an omnipresent source of knowledge: courses, presentations, conferences, documentaries, live streams, meeting recordings, vlogs. This has created a strong demand for transcript understanding. However, the quality of audio and video content shared online and the nature of speech and video transcripts pose many challenges to existing natural language processing techniques.

Topics

At the AAAI 2022 Workshop on Video Transcript Understanding (VTU @ AAAI 2022), we aim to bring together researchers from various domains to make the best of the knowledge that all these videos contain. Researchers from related domains are invited to submit papers on recent advanced technologies, resources, tools and challenges for VTU. We will also organize 3 shared tasks in this workshop: punctuation restoration, domain adaptation for punctuation restoration, and chitchat detection.

Format

The workshop will be organized as a half-day event with two invited speakers, followed by presentations of accepted papers (both ordinary papers and shared task papers).

Attendance

We expect around 40 attendees, including invited speakers, authors of accepted papers, and shared task participants.

Submissions

The VTU workshop accepts both short papers (4 pages) and long papers (8 pages). Papers must be submitted through EasyChair.

Submission URL: https://easychair.org/my/conference?conf=vtuaaai2022.

Workshop Chairs

  • Franck Dernoncourt;
    88 E San Fernando St, San Jose, CA 95113, United States; dernonco@adobe.com
  • Viet Dac Lai
    234 Deschutes Hall, University of Oregon, Eugene, OR 97403, USA; vietl@cs.uoregon.edu; (+1)541-515-2996

Organizing Committee

Cesa Salaam (Howard University, USA), Hwanhee Lee (Seoul National University, South Korea), Jaemin Cho (University of North Carolina at Chapel Hill, USA), Jielin Qiu (Carnegie Mellon University, USA), Joseph Barrow (University of Maryland, USA), Mengnan Du (Texas A&M University, USA), Minh Van Nguyen (University of Oregon, USA), Nicole Meister (Princeton University, USA), Sajad Sotudeh Gharebagh (Georgetown University, USA), Sampreeth Chebolu (University of Houston, USA), Sarthak Jain (Northeastern University, USA), Shufan Wang (University of Massachusetts Amherst, USA)

Additional Information

Supplemental Workshop site: https://vtuworkshop.github.io/2022/
