The Thirty-Fifth AAAI Conference on Artificial Intelligence
February 8-9, 2021
A Virtual Conference
AAAI is pleased to present the AAAI-21 Workshop Program. Workshops will be held virtually Monday and Tuesday, February 8-9, 2021. The final schedule will be available in October. The AAAI-21 workshop program includes 26 workshops covering a wide range of topics in artificial intelligence. Workshops are one day unless otherwise noted in the individual descriptions. Registration in each workshop is required of all active participants, and is also open to all interested individuals. Workshop registration is available to AAAI-21 technical registrants at a discounted rate, or separately to workshop-only registrants. Registration information will be mailed directly to all invited participants in December.
- W1: Affective Content Analysis (AffCon@AAAI 2021)
- W2: AI for Behavior Change
- W3: AI for Urban Mobility
- W4: Artificial Intelligence Safety (SafeAI)
- W5: Combating Online Hostile Posts in Regional Languages during Emergency Situations (CONSTRAINT-2021)
- W6: Commonsense Knowledge Graphs (CSKGs)
- W7: Content Authoring and Design
- W8: Deep Learning on Graphs: Methods and Applications (DLG-AAAI’21)
- W9: Designing AI for Telehealth
- W10: 9th Dialog System Technology Challenge (DSTC9)
- W11: Explainable Agency in Artificial Intelligence
- W12: Graphs and More Complex Structures for Learning and Reasoning (GCLR)
- W13: 5th International Workshop on Health Intelligence (W3PHIAI-21)
- W14: Hybrid Artificial Intelligence
- W15: Imagining Post-COVID Education with AI
- W16: Knowledge Discovery from Unstructured Data in Financial Services
- W17: Learning Network Architecture During Training
- W18: Meta-Learning and Co-Hosted Competition
- W19: Meta-Learning for Computer Vision (ML4CV)
- W20: Plan, Activity, and Intent Recognition (PAIR) 2021
- W21: Privacy-Preserving Artificial Intelligence
- W22: Reasoning and Learning for Human-Machine dialogs (DEEP-DIAL21)
- W23: Reinforcement Learning in Games
- W24: Scientific Document Understanding
- W25: Towards Robust, Secure and Efficient Machine Learning
- W26: Trustworthy AI for Healthcare
W1: Affective Content Analysis (AffCon)
AffCon-2021 is the fourth Affective Content Analysis workshop @ AAAI. The workshop series (i) builds upon the state of the art in neural and AI methods for modeling affect in interpersonal interactions and behaviors, and (ii) brings together a confluence of research viewpoints from several disciplines.
The word ‘affect’ refers to emotion, sentiment, mood, and attitudes including subjective evaluations, opinions, and speculations. Psychological models of affect are adopted by several disciplines to conceptualize and measure users’ opinions, intentions, motives, and expressions. Computational models of such measurement may not recognize the context of affect generated in and through human interactions.
Topics
The 2021 edition of the workshop takes interaction in yet another new direction. A large share of created content is the outcome of collaboration. A basic question worth examining is whether and how collaboration among creatives impacts the affective characteristics of the content. A follow-up question is how to model and computationally measure affect in collaborative creation.
This year, collaboration takes on extra meaning in a physically distanced world, and understanding the dynamics of affect in collaborative content is all the more topical. The theme for AffCon@AAAI-2021 is ‘Affect in Collaborative Creation’. This theme is relevant for increasingly decentralized workplaces, asynchronous collaborations, and computer-mediated communication. Studying and codifying user reactions in this setting can help us understand society and aid the development of better tools for content analysis.
We invite research spanning both creation and consumption of content, especially for cooperative tasks. Our focus on affective content in collaborations includes, but is not limited to, collectively created content, reactions in groups, interactions through avatars, and multi-modal interfaces. We also strongly encourage research that explores these themes, and group dynamics in content creation and in affective reactions.
- Affect in Collaborative Content
- Affect in Communication co-creation
- Affective Reactions in Co-creation and collaboration
- Affectively responsive interfaces
- Deep learning-based models for affect modeling in content (image, audio, and video)
- Mirroring affect
- Psycho–demographic Profiling
- Affect–based Text Generation
- Multi-modal Affect
- Stylometrics, Typographics, and Psycho-linguistics
- Cognitive and psychological computational models of creativity
- Affective needs and Firm-Consumer co-creation Behavior
- Computational models for Consumer Behavior theories of innovation
- Affective Lexica for Online Marketing Communication
- Affective human-agent, human-computer, and human-robot interaction
We especially invite papers investigating multiple related themes, industry papers, and descriptions of running projects and ongoing work. To address the scarcity of standardized baselines, datasets, and evaluation metrics for cross-disciplinary affective content analysis, submissions describing new language resources, evaluation metrics, and standards for affect analysis and understanding are also strongly encouraged.
Shared Task: CL-Aff
We are pleased to announce the 2021 CL-Aff Shared Task: GoTeam, which will examine the challenges of affect in collaboration in a multimodal dataset comprising both textual communication and interpersonal actions.
Format
This full-day workshop will have several prominent interdisciplinary invited speakers from the fields of linguistics, psychology, and marketing science to lead the presentation sessions. Each session will consist of a keynote, followed by research presentations, and a short informal poster session. There are expected to be approximately 70-80 attendees.
Submissions
Submissions should be made via EasyChair and must follow the formatting guidelines for AAAI-2021 (use the 2020 AAAI Author Kit). All submissions must be anonymous and conform to AAAI standards for double-blind review. Both full papers (8 pages including references) and short papers (4 pages including references) that adhere to the 2-column AAAI format will be considered for review.
Organizing Committee
Niyati Chhaya, Primary Contact (Adobe Research, nchhaya@adobe.com), Kokil Jaidka (Nanyang Technological University, kokil.j@gmail.com), Jennifer Healey (Adobe Research, jehealey@adobe.com), Lyle Ungar (University of Pennsylvania, ungar@cis.upenn.edu), Atanu R Sinha (Adobe Research, atr@adobe.com)
W2: AI for Behavior Change
In domains as wide-ranging as medication adherence, vaccination, college enrollment, retirement savings, and energy consumption, behavioral interventions have been shown to encourage people towards making better choices. For many applications of AI in these areas, one needs to design systems that learn to motivate people to take actions that maximize their welfare. Large data sources, both conventionally used in social sciences (EHRs, health claims, credit card use, college attendance records) and unconventional (social networks, fitness apps), are now available, and are increasingly used to personalize interventions. These datasets can be leveraged to learn individuals’ behavioral patterns, identify individuals at risk of making sub-optimal or harmful choices, and target them with behavioral interventions to prevent harm or improve well-being. At the same time, there is an increasing interest in AI in moving beyond traditional supervised learning approaches towards learning causal models, which can support the identification of targeted behavioral interventions. These research trends inform the need to explore the intersection of AI with behavioral science and causal inference, and how they can come together for applications in the social and health sciences.
This workshop will focus on AI and ML-based approaches that can (1) identify individuals in need of behavioral interventions, and/or predict when they need them; (2) help design and target optimal interventions; and (3) exploit observational and/or experimental datasets in domains including social media, health records, claims data, fitness apps, etc. for causal estimation in the behavior science world.
Topics
The goal of this workshop is to bring together the causal inference, artificial intelligence, and behavior science communities, gathering insights from each of these fields to facilitate collaboration and adaptation of theoretical and domain-specific knowledge amongst them. We invite thought-provoking submissions on a range of topics in these fields, including, but not limited to the following areas:
- Intervention design
- Adaptive treatment assignment
- Heterogeneity estimation
- Optimal assignment rules
- Targeted nudges
- Observational-experimental data
- Mental health/wellness; habit formation
- Social media interventions
- Precision health
Format
The full-day workshop will start with a keynote talk, followed by an invited talk and contributed paper presentations in the morning. The post-lunch session will feature a second keynote talk, two invited talks, and contributed paper presentations. Papers better suited to a poster than to an oral presentation will be invited for a poster session. We will also select up to 5 best posters for spotlight talks (2 minutes each). We will end the workshop with a panel discussion by top researchers from these fields to lay out future directions and enhancements for this workshop.
Invited Speakers
Invited speakers will include Susan Athey, keynote (Economics of Technology, Stanford University), Sendhil Mullainathan, keynote (Computation and Behavioral Science, University of Chicago), Eric Tchetgen Tchetgen (Statistics, University of Pennsylvania), Jon Kleinberg (Computer Science, Cornell University), and Munmun De Choudhury (Interactive Computing, Georgia Tech).
Submissions
The audience of this workshop will be researchers and students from a wide array of disciplines including, but not limited to, statistics, computer science, economics, public policy, psychology, management, and decision science, who work at the intersection of causal inference, machine learning, and behavior science. AAAI, specifically, is a great venue for our workshop because its audience spans many ML and AI communities. We invite novel contributions following the AAAI formatting guidelines, camera-ready style. Submissions will be peer-reviewed (single-blind). Submissions will be assessed based on their novelty, technical quality, significance of impact, interest, clarity, relevance, and reproducibility. We accept two types of submissions — full research papers no longer than 8 pages and short/poster papers with 2-4 pages. References will not count towards the page limit. Submissions will be accepted via the EasyChair submission website.
Organizing Committee
Lyle Ungar (University of Pennsylvania, ungar@cis.upenn.edu), Sendhil Mullainathan (University of Chicago, Sendhil.Mullainathan@chicagobooth.edu), Eric Tchetgen Tchetgen (University of Pennsylvania, ett@wharton.upenn.edu), Rahul Ladhania, primary contact (University of Michigan, ladhania@umich.edu)
Additional Information
Supplemental workshop site: https://ai4bc.github.io/ai4bc21/
For general inquiries about AI4BC, please write to ai4behaviorchange@gmail.com.
W3: AI for Urban Mobility
This workshop aims to provide a forum for bringing together experts from the different fields of AI to discuss the challenges related to any area of Urban Mobility, from the perspective of how AI techniques can be leveraged to address these challenges and whether novel AI techniques have to be developed.
Topics
This workshop seeks papers ranging from experience reports to descriptions of new technology that leverages various AI techniques for innovation in any area of Urban Mobility and Transportation, such as (but not limited to): traffic signal control, vehicle routing, autonomous driving, multi-modal planning, and on-demand transport.
Format
This one-day workshop will consist of paper presentations; each presentation will be allocated between 10 and 15 minutes. The program will also include an invited talk from an expert in the field, and a panel composed of AI experts and transportation experts, with the aim of identifying promising areas of work. The expected attendance is approximately 50 people. Authors of accepted papers will be invited to deliver a talk, and well-recognized experts in the field will be invited to participate in the panel or to deliver an invited talk.
Submissions
Two types of papers can be submitted. Full technical papers (up to 8 pages) are standard research papers. Short papers (2 to 4 pages) can either describe a particular application or focus on open challenges. All papers should conform to the AAAI style template. Submissions should be made via EasyChair at https://easychair.org/conferences/?conf=ai4um.
Organizing Committee
Lukas Chrpa (Czech Technical University in Prague, chrpaluk@fel.cvut.cz), Mauro Vallati (University of Huddersfield, m.vallati@hud.ac.uk), Scott Sanner (University of Toronto, ssanner@mie.utoronto.ca), Stephen S. Smith (Carnegie Mellon University, sfs@cs.cmu.edu), Baher Abdulhai (University of Toronto, baher.abdulhai@utoronto.ca)
Additional Information
Supplemental workshop site: http://aium2021.felk.cvut.cz
W4: Artificial Intelligence Safety (SafeAI 2021)
The accelerated developments in the field of Artificial Intelligence (AI) hint at the need for considering Safety as a design principle rather than an option. However, theoreticians and practitioners of AI and Safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, that force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, while considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI Safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, considering potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, to build, evaluate, deploy, operate and maintain AI-based systems that are demonstrably safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
- How can we engineer trustable AI software architectures?
- How can we make AI-based systems more ethically aligned?
- What safety engineering considerations are required to develop safe human-machine interaction?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and new paradigms about AI Safety?
- How do metrics of capability and generality, and their trade-offs with performance, affect safety?
The main interest of the workshop is a new perspective on systems engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustable intelligent autonomy.
Topics
Contributions are sought in (but are not limited to) the following topics:
- Safety in AI-based system architectures
- Continuous V&V and predictability of AI safety properties
- Runtime monitoring and (self-)adaptation of AI safety
- Accountability, responsibility and liability of AI-based systems
- Uncertainty in AI
- Avoiding negative side effects in AI-based systems
- Role and effectiveness of oversight: corrigibility and interruptibility
- Loss of values and the catastrophic forgetting problem
- Confidence, self-esteem and the distributional shift problem
- Safety of AGI systems and the role of generality
- Reward hacking and training corruption
- Self-explanation, self-criticism and the transparency problem
- Human-machine interaction safety
- Regulating AI-based systems: safety standards and certification
- Human-in-the-loop and the scalable oversight problem
- Evaluation platforms for AI safety
- AI safety education and awareness
- Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
Format
To deliver a truly memorable event, we will follow a highly interactive format that will include invited talks and thematic sessions. The thematic sessions will be structured into short pitches and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters and paper discussants. The workshop will be organized as a full day meeting. Attendance is virtual and open to all. At least one author of each accepted submission must register and present the paper at the workshop.
Submissions
You are invited to submit:
- Full technical papers (6-8 pages),
- Proposals for technical talks (up to a one-page abstract including a short bio of the main speaker), and
- Position papers (4-6 pages).
Manuscripts must be submitted as PDF files via the EasyChair online submission system.
Please format your paper according to the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from: http://www.aaai.org/Publications/Templates/AuthorKit20.zip.
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.
Organizing Committee
Huáscar Espinoza (Commissariat à l´Energie Atomique, France), José Hernández-Orallo (Universitat Politècnica de València, Spain), Cynthia Chen (University of Hong Kong, China), Seán Ó hÉigeartaigh (University of Cambridge, UK), Xiaowei Huang (University of Liverpool, UK), Mauricio Castillo-Effen (Lockheed Martin, USA), Richard Mallah (Future of Life Institute, USA), John McDermid (University of York, UK)
Additional Information
Supplemental workshop site: http://safeaiw.org/
W5: Combating Online Hostile Posts in Regional Languages during Emergency Situations (CONSTRAINT-2021)
Online hostile posts (e.g., hate speech, fake news) can have severe consequences, and therefore their detection is of paramount importance for societal causes. Another important aspect is the early detection and prevention of hostile posts in response to a critical/emergency situation (e.g., COVID-19, the US presidential election) where (mis)information spreads rapidly. Moreover, hostile posts are not limited to English; they appear in a variety of regional/local languages (e.g., Indian and some European languages). The challenges in automatic identification of hostile posts are more critical for these regional languages, primarily because of the lack of resources for them.
With the CONSTRAINT-2021 workshop, we aim to draw the attention of the research community to these important aspects of hostile post detection and to provide a platform for high-quality research in this domain.
Topics
We invite the submission of original and high-quality research papers in the relevant fields of misinformation. The list of possible topics includes, but is not limited to: fake news and hate speech detection in regional languages or code-mixed/code-switched environments; evolution of fake news and hate speech; early detection of hostile posts; claim detection and verification related to misinformation; psychological study of the users/spreaders of hostile posts; hate speech normalization; resource/tool creation for combating hostile posts.
Shared Task
To support research on hostile post detection, we are also organizing a shared task on fake news detection in microblogging posts. We plan to release two annotated datasets (English and Hindi) for the shared task. We invite researchers to participate and submit their systems. Selected teams will be asked to submit system description papers. Shared task site: https://constraint-shared-task-2021.github.io/.
Attendance
The workshop is open to all researchers, academicians, and industry personnel working in the relevant field. The expected attendance is approximately 100.
Submissions and Notifications
This one-day workshop will follow a two-phase reviewing process, alongside the release of the shared task. The schedule is as follows:
Phase 1
- Oct 20, 2020: Phase 1 full papers due
- Nov 20, 2020: Notification for phase 1 papers
- Dec 1, 2020: Camera ready submission due of accepted papers
Phase 2
- Dec 5, 2020: Phase 2 full papers due (Only papers rejected from AAAI’21 main track will be considered. Authors need to submit the full reviews and ratings obtained from AAAI)
- Dec 20, 2020: Notification for phase 2 papers
- Dec 30, 2020: Camera ready submission due of phase 2 accepted papers
Shared Task
- Oct 1, 2020: Release of the training set
- Dec 1, 2020: Release of the test set
- Dec 10, 2020: Deadline for submitting the final results
- Dec 12, 2020: Announcement of the results
- Dec 30, 2020: System paper submission deadline
Regular papers (maximum 12 pages) should be prepared in English and follow the Springer CCIS template, downloadable from here (https://www.springer.com/series/7899). All papers must be submitted via our EasyChair submission page and will go through a double-blind peer-review process. Only manuscripts in PDF or Microsoft Word format will be accepted.
All submissions must be made via EasyChair portal at the following link: https://easychair.org/conferences/?conf=constraint2021
Main Contact:
Tanmoy Chakraborty (IIIT Delhi, India, tanmoy@iiitd.ac.in)
Committees
Steering Committee: Tanmoy Chakraborty (IIIT Delhi, India, tanmoy@iiitd.ac.in), Kai Shu (Illinois Institute of Technology, USA, kshu@iit.edu), H. Russell Bernard (Arizona State University, USA, ufruss@ufl.edu), Huan Liu (Arizona State University, USA, huanliu@asu.edu)
Organizing Committee: Tanmoy Chakraborty (IIIT Delhi, tanmoy@iiitd.ac.in), Md Shad Akhtar (IIIT Delhi, shad.akhtar@iiitd.ac.in)
Shared Task Organizing Committee: Tanmoy Chakraborty (IIIT Delhi, tanmoy@iiitd.ac.in), Md Shad Akhtar (IIIT Delhi, shad.akhtar@iiitd.ac.in), Asif Ekbal (IIT Patna, asif@iitp.ac.in), Amitava Das (Wipro Research, amitava.das2@wipro.com)
Additional Information
Supplemental workshop site: http://lcs2.iiitd.edu.in/CONSTRAINT-2021/
Twitter handle: @CONSTRAINT_AAAI
W6: Commonsense Knowledge Graphs (CSKGs)
Commonsense knowledge graphs (CSKGs) are sources of background knowledge that are expected to contribute to downstream tasks like question answering, robot manipulation, and planning. The knowledge covered in CSKGs varies greatly, spanning procedural, conceptual, and syntactic knowledge, among others. CSKGs come in a wider variety of forms compared to traditional knowledge graphs, ranging from (semi-)structured knowledge graphs, such as ConceptNet, ATOMIC, and FrameNet, to the recent idea to use language models as knowledge graphs. As a consequence, traditional methods of integration and usage of knowledge graphs might need to be expanded when dealing with CSKGs. Understanding how to best integrate and represent CSKGs, leverage them on a downstream task, and tailor their knowledge to the particularities of the task, are open challenges today. The workshop on CSKGs addresses these challenges, by focusing on the creation of commonsense knowledge graphs and their usage on downstream commonsense reasoning tasks.
Topics
Topics of interest include, but are not limited to:
- Creation/extraction of new CSKGs
- Integration of existing CSKGs
- Exploration of CSKGs
- Impact of CSKGs on downstream tasks
- Methods of including CSKG knowledge in downstream tasks
- Probing for knowledge needs in downstream tasks
- Evaluation data/metrics relevant for CSKGs
- Identifying and/or filling gaps in CSKGs
Format
The workshop will consist of: (1) two keynote talks, (2) a panel discussion on ‘Are language models enough?’, (3) presentations of full, short, and position papers, and (4) a discussion session.
Submissions
We welcome submissions of long (max. 8 pages), short (max. 4 pages), and position (max. 4 pages) papers describing new, previously unpublished research in this field. The page limits are including the references. Submissions must be formatted in the AAAI submission format. All submissions should be done electronically via EasyChair.
Submission site: https://easychair.org/conferences/?conf=cskgsaaai21
Organizing Committee
Filip Ilievski (Information Sciences Institute, University of Southern California, ilievski@isi.edu), Alessandro Oltramari (Bosch Research and Technology Center, Pittsburgh, Alessandro.Oltramari@us.bosch.com), Deborah McGuinness (Rensselaer Polytechnic Institute, dlm@cs.rpi.edu), Pedro Szekely (Information Sciences Institute, University of Southern California, szekely@usc.edu)
Additional Information
Supplemental workshop site: https://usc-isi-i2.github.io/AAAI21workshop/
W7: Workshop on Content Authoring and Design (CAD2021)
The goal of the Content Authoring and Design (CAD2021) workshop at AAAI is to engage the AI and NLP community around the open problems in authoring, reformatting, optimization, enhancement, and beautification of different forms of content, ranging from articles, news, presentation slides, flyers, and posters to any material one can find online, such as social media posts and advertisements. Content Authoring and Design refers to the interdisciplinary research area spanning Artificial Intelligence, Computational Linguistics, and Graphic Design. The area addresses open problems in leveraging AI models to assist users during their creative process by estimating the author's and audience's needs so that the outcome is aesthetically appealing and effectively communicates its intent.
Topics
The goal of the workshop is to gather insights from Artificial Intelligence, Computational Linguistics, Graphic Design and Creativity, Marketing Science, E-learning, and Social Media Analysis for content presentation enhancement, specifically for user-generated content and text in online marketing and education. Specific topics in this field include, but are not restricted to:
- Emphasis selection for written text in social media data or presentation slides
- Font selection based on input text or other design elements
- NLP for color distributions recommendation
- Text simplification or automatic text editing for representation improvement
- Text appropriateness analysis
- Marketing and brand alignment analysis
- AI-assisted slide authoring
- Document space optimization
- Multi-modal content emotion and sentiment analysis
- Metrics to assess the visual appeal of content
- Machine learning approaches to rate the visual appeal of content
- Related AI or AI-assisted approaches to improve the content layout
Shared Task: Presentation Slides Emphasis Selection
We propose a new shared task where participants will be expected to design automated approaches to predict emphasis in presentation slides with the goal of improving their comprehensibility and visual appeal. This shared task builds on our recent SemEval 2020 shared task on “Task 10: Emphasis Selection for Written Text in Visual Media.” More information about the shared task is provided on the workshop website.
Format
We will hold a one-day workshop where approximately two-thirds of the time will be devoted to presentations of regular workshop submissions and an invited talk. The rest of the day will be devoted to the shared task overview and papers.
Submissions
We encourage 2 types of submissions: archival submission of novel and unpublished work, and non-archival submissions that present recently published work.
Archival Submissions: Submissions should report original and unpublished research on topics of interest to the workshop. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings. Archival submissions accepted for presentation at the workshop must not be or have been presented at any other meeting with publicly available proceedings.
Non-archival submissions: We welcome submissions of a one-page abstract describing work recently published but that is of relevance to the topics of the workshop. The goal is to increase the visibility of work in this emerging area and facilitate researchers and practitioners with common research interests to meet each other and learn about efforts in this space.
We welcome long (up to 8 pages), short (up to 4 pages) and one-page abstracts. Long/short paper submissions must use the AAAI official templates. The submission site can be found on the workshop website.
Organizing Committee
Thamar Solorio (University of Houston, tsolorio@uh.edu), Franck Dernoncourt (Adobe Research, franck.dernoncourt@adobe.com), Amirreza Shirani (University of Houston, ashirani@uh.edu), Nedim Lipka (Adobe Research, lipka@adobe.com), Paul Asente (Adobe Research, asente@adobe.com), Jose Echevarria (Adobe Research, echevarr@adobe.com)
Additional Information
Supplemental workshop site: https://ritual.uh.edu/aaai-21-workshop-on-content-authoring-and-design/
W8: Deep Learning on Graphs: Methods and Applications (DLG-AAAI’21)
Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques that were disruptive for Euclidean data such as images or sequence data such as text are not immediately applicable to graph-structured data. This gap has driven a tide of research on deep learning for graphs, on tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.
This wave of research at the intersection of graph theory and deep learning has also influenced other fields of science, including computer vision, natural language processing, inductive logic programming, program synthesis and analysis, automated planning, reinforcement learning, and financial security. Despite these successes, graph neural networks (GNNs) still face many challenges, namely:
- Modeling highly structured data with time-evolving, multi-relational, and multi-modal nature. Such challenges are profound in applications in social attributed networks, natural language processing, inductive logic programming, and program synthesis and analysis. Joint modeling of text or image content with underlying network structure is a critical topic for these domains.
- Modeling complex data that involves mapping between graph-based inputs and other highly structured output data such as sequences, trees, and relational data with missing values. Natural Language Generation tasks such as SQL-to-Text and Text-to-AMR are emblematic of such challenges.
This one-day workshop aims to bring together both academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop will consist of contributed talks, contributed posters, and invited talks on a wide variety of methods and applications. Work-in-progress papers, demos, and visionary papers are also welcome. This workshop intends to share visions for investigating new approaches and methods at the intersection of graph neural networks and real-world applications.
Topics
We invite submission of papers describing innovative research and applications around the following topics. Papers that introduce new theoretical concepts or methods, help to develop a better understanding of new emerging concepts through extensive experiments, or demonstrate a novel application of these methods to a domain are encouraged.
- Graph neural networks on node-level, graph-level embedding
- Joint learning of graph neural networks and graph structure
- Graph neural networks on graph matching
- Dynamic/incremental graph-embedding
- Learning representation on heterogeneous networks, knowledge graphs
- Deep generative models for graph generation/semantic-preserving transformation
- Graph2seq, graph2tree, and graph2graph models
- Deep reinforcement learning on graphs
- Adversarial machine learning on graphs
- Spatial and temporal graph prediction and generation
We have a particular focus on, but are not limited to, the following application domains:
- Learning and reasoning (machine reasoning, inductive logic programming, theory proving)
- Natural language processing (information extraction, semantic parsing, text generation)
- Bioinformatics (drug discovery, protein generation, protein structure prediction)
- Program synthesis and analysis
- Reinforcement learning (multi-agent learning, compositional imitation learning)
- Financial security (anti-money laundering)
- Cybersecurity (authentication graph, Internet of Things, malware propagation)
- Geographical network modeling and prediction (transportation and mobility networks, social networks)
Submissions
Submissions are limited to a total of 5 pages for the initial submission (up to 6 pages for the final camera-ready submission), excluding references and supplementary materials; authors should only rely on the supplementary material to include minor details that do not fit in the five pages. All submissions must be in PDF format and formatted according to the standard AAAI Conference Proceedings template. Following the AAAI conference submission policy, reviews are double-blind, and author names and affiliations should NOT be listed. Submitted papers will be assessed based on their novelty, technical quality, potential impact, and clarity of writing. For papers that rely heavily on empirical evaluations, the experimental methods and results should be clear, well-executed, and repeatable. Authors are strongly encouraged to make data and code publicly available whenever possible. The accepted papers will be posted on the workshop website and will not appear in the AAAI proceedings. Special issues in flagship academic journals are under consideration to host extended versions of the best/selected papers from the workshop.
Submission site: http://deep-learning-graphs.bitbucket.io/dlg-aaai21/
Organizing Committee
Lingfei Wu (IBM Research AI), Jiliang Tang (Michigan State University), Yinglong Xia (Facebook AI), Jian Pei (Simon Fraser University)
Additional Information
The workshop supplementary site URL will be available soon.
W9: Designing AI for Telehealth
Although telehealth technology has been present for years, the appearance of the COVID-19 pandemic has dramatically accelerated its growth, making this workshop especially timely. Investment is currently strong for numerous types of telehealth systems, many of which include AI components, and leading enterprises in this work now recognize the importance of participatory design. Accordingly, the main objective of this workshop is to gather representatives of AI research and development with those of other stakeholder domains to create a community for sustaining valuable participatory design dialogue.
Topics
Some design topics that emerge during the workshop discussion will likely be unfamiliar at first to some of the stakeholder communities present, promising a useful gain in shared knowledge. For the AI community, different types of telehealth tend to share abstract design topics that include accuracy, reliability, validity, economy, autonomy, application requirements, privacy, and cybersecurity.
Format
Consistent with its main objective, the workshop welcomes participants who represent distinct but interacting communities of stakeholders in the design of AI applications for telehealth, joining participants from AI with those representing patients, physicians, nursing practice, nursing education, healthcare administration, and governmental, pharmaceutical, insurance, and legal enterprises. Ideally, this one-day gathering will contain a presentation representing each of these communities, followed by ample time for open forum discussion.
The timeliness of its topic, the use of a virtual medium, and AAAI’s discounted workshop registration for AAAI-21 technical registrants, plus a workshop-only registration option, all suggest that this workshop will attract at least sixty attendees. Of course, its organizers also seek a gathering size that allows meaningful participation for everyone; fortunately, the workshop is equipped with a website that will post the CFP and accepted workshop papers (pending authors’ permissions) as well as sustain afterward the valuable dialogue generated during the event.
Submissions
Requests to participate and (optional) paper submissions should be emailed to the workshop chair by November 9, 2020. Papers should be 6-to-8-page Word documents. Invitations to participate and decisions concerning the selection of papers for presentation will be provided by November 30, 2020.
Submit to Workshop Chair, Ted Metzler at tmetzler@okcu.edu.
Organizing Committee
Ted Metzler (Oklahoma City University, Kramer School of Nursing, tmetzler@okcu.edu), Lundy Lewis (Southern New Hampshire University, Computer Information Systems, l.lewis@snhu.edu), Elizabeth Diener (Oklahoma City University: Kramer School of Nursing, ejdiener@okcu.edu)
Additional Information
Supplemental workshop site: http://shapingsmarttechnology.org (click on TELEHEALTH to access the workshop page).
W10: 9th Dialog System Technology Challenge (DSTC-9)
DSTC, the Dialog System Technology Challenge, has been a premier research competition for dialog systems since its inception in 2013. Given the remarkable success of the first eight challenges, we are organizing the ninth edition of DSTC this year, and we will have a wrap-up workshop at AAAI-21.
Topics
The main goal of this workshop is to share the results of the following four main tracks of DSTC9:
- Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access (Amazon Alexa AI & National Taiwan University)
- Multi-domain Task-oriented Dialog Challenge II (Microsoft Research AI & Tsinghua University)
- Interactive Evaluation of Dialog (CMU & USC)
- SIMMC: Situated Interactive Multi-Modal Conversational AI (Facebook Assistant & Facebook AI)
Format
The two-day workshop will include welcome remarks, track overviews, invited talks, oral presentations, and discussions about future DSTCs. We invite all the teams who participated in DSTC9 to submit their work to this workshop. In addition, any other general technical paper on dialog technologies is also welcome.
Submissions
Submissions must follow the formatting guidelines for AAAI-2021 (use the AAAI Author Kit). All submissions must be anonymous and conform to AAAI standards for double-blind review. Papers must adhere to the 2-column AAAI format. Up to 7 pages of technical content plus up to two additional pages solely for references will be considered for review.
Submission site: https://dstc9.dstc.community/paper-submission
Organizing Committee
Workshop Chairs: Abhinav Rastogi (Google Research, USA, abhirast@google.com), Yun-Nung (Vivian) Chen (National Taiwan University, Taiwan, y.v.chen@ieee.org)
Challenge Chair: Chulaka Gunasekara (IBM Research AI, USA, Chulaka.Gunasekara@ibm.com)
Publication Chair: Luis Fernando D’Haro (Universidad Politécnica de Madrid, Spain, lfdharo@die.upm.es)
Publicity Chair: Seokhwan Kim (Amazon Alexa AI, USA, seokhwk@amazon.com)
Additional Information
Supplemental workshop site: https://sites.google.com/dstc.community/dstc9/
W11: Explainable Agency in Artificial Intelligence
As artificial intelligence has become tightly interwoven with society, with tangible consequences and influence, calls for the explainability and interpretability of these systems have become increasingly prevalent. Explainable AI (XAI) attempts to alleviate concerns about transparency, trust, and ethics in AI by making systems accountable, interpretable, and explainable to humans. This workshop aims to encapsulate these concepts under the umbrella of Explainable Agency and to bring together researchers and practitioners working on different facets of explainable AI, from diverse backgrounds, to share challenges, new directions, and recent research in the field. We especially welcome research from fields including, but not limited to, artificial intelligence, human-computer interaction, human-robot interaction, cognitive science, human factors, and philosophy.
Topics
XAI has received substantial but disjoint attention in different sub-areas of AI, including machine learning, planning, intelligent agents, and several others. There has been limited interaction among these subareas on XAI, and even less work has focused on promoting and sharing sound designs, methods, and measures for evaluating the effectiveness of explanations (generated by AI systems) in human subject studies. This has led to uneven development of XAI, and its evaluation, in different AI subareas. We aim to address this by encouraging a shared definition of Explainable Agency and by increasing awareness of work on XAI throughout the AI research community and in related disciplines (e.g., human factors, human-computer interaction, cognitive science). With this in mind, we welcome contributions on the following (and related) topic areas:
- Explainable/Interpretable Machine Learning
- Fairness, Accountability, and Transparency
- Explainable Planning
- Explainable Agency
- Human-AI Interaction
- Human-Robot Interaction
- Cognitive Theories
- Philosophical Foundations
- Interaction Design for XAI
- XAI Evaluation
- Agent Policy Summarization
- XAI Domains and Benchmarks
- Interactive Teaching Strategies and Explainability
- User Modelling
- Surveys on Explainability
Format
The workshop will be a two-day meeting, with invited talks, panels, paper presentations, lightning presentations, and a discussion. We expect ~100 participants and potentially more due to the popularity of this topic and the virtual nature of this workshop.
Submissions
We invite the submission of papers describing novel research contributions (6 pages), survey papers (up to 8 pages), or demonstrations (4 pages). Submissions must be in PDF format, written in English, and formatted according to the AAAI camera-ready style. All papers will be peer-reviewed (single-blind).
Submission site: https://easychair.org/my/conference?conf=xaiaaai21
Organizing Committee
Prashan Madumal, Silvia Tulli, David Aha, Rosina Weber
Additional Information
Supplemental workshop site: https://sites.google.com/view/xaiworkshop/topic
W12: Graphs and More Complex Structures for Learning and Reasoning (GCLR)
The study of complex graphs is a highly interdisciplinary field that aims to study complex systems using mathematical models, physical laws, inference and learning algorithms, etc. Complex systems are often characterized by several components that interact with each other in multiple ways. Such systems are better modeled by complex graph structures such as edge- and vertex-labelled graphs (e.g., knowledge graphs), attributed graphs, multilayer graphs, hypergraphs, temporal/dynamic graphs, etc. In this GCLR (Graphs and more Complex structures for Learning and Reasoning) workshop, we will focus on various complex structures along with inference and learning algorithms for these structures. Current research in this area focuses on extending existing ML algorithms as well as network science measures to these complex structures. This workshop aims to bring researchers from these diverse but related fields together and to spark discussions on new, challenging applications that require complex system modeling and on discovering ingenious reasoning methods. We have invited several distinguished speakers whose research interests span from the theoretical to the experimental aspects of complex networks.
Topics
We invite submissions from participants who can contribute to the theory and applications of modeling complex graph structures such as hypergraphs, multilayer networks, multi-relational graphs, heterogeneous information networks, multi-modal graphs, signed networks, bipartite networks, temporal/dynamic graphs, etc. The topics of interest include, but are not limited to:
- Constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP)
- Learning with Multi-relational graphs (alignment, knowledge graph construction, completion, reasoning with knowledge graphs, etc.)
- Learning with algebraic or combinatorial structure
- Link analysis/prediction, node classification, graph classification, clustering for complex graph structures
- Network representation learning
- Theoretical analysis of graph algorithms or models
- Optimization methods for graphs/manifolds
- Probabilistic and graphical models for structured data
- Social network analysis and measures
- Unsupervised graph/manifold embedding methods
Papers will be presented in poster format, and some will be selected for oral presentation. Through invited talks and presentations by the participants, this workshop will bring together current advances in network science as well as machine learning, and set the stage for continuing interdisciplinary research discussions.
Format
This is a one-day workshop involving talks by pioneer researchers from respective areas, poster presentations, and short talks of accepted papers. The eligibility criteria for attending the workshop will be registration in the conference/workshop as per AAAI norms. We expect 50-65 people in the workshop.
Submissions
We invite submissions to the AAAI workshop on Graphs and more Complex structures for Learning and Reasoning to be held virtually on February 8 or 9, 2021. We welcome submissions in the following two formats:
- Extended abstracts: We encourage participants to submit preliminary but interesting ideas that have not been published before as extended abstracts. These submissions would benefit from additional exposure and discussion that can shape a better future publication. We also invite papers that have been published at other venues to spark discussions and foster new collaborations. Submissions may consist of up to 4 pages plus one additional page solely for references.
- Full papers: Submissions must represent original material that has not appeared elsewhere for publication, and that is not under review for another refereed publication. Submissions may consist of up to 7 pages of technical content plus up to two additional pages solely for references.
The submissions should adhere to the AAAI paper guidelines available at https://aaai.org/Conferences/AAAI-21/aaai21call/
Accepted submissions will have the option of being published on the workshop website. For authors who do not wish their papers to be posted online, please mention this in the workshop submission. The submissions need to be anonymized.
See the webpage https://sites.google.com/view/gclr2021/submissions for detailed instructions and submission link. Extended abstracts and full papers are due November 9, 2020.
Organizing Committee
Balaraman Ravindran, Chair (Indian Institute of Technology Madras, India, ravi@cse.iitm.ac.in), Kristian Kersting (TU Darmstadt, Germany, kersting@cs.tu-darmstadt.de), Sarika Jalan (Indian Institute of Technology Indore, India, sarika@iiti.ac.in), Partha Pratim Talukdar (Indian Institute of Science, India, ppt@iisc.ac.in), Sriraam Natarajan (University of Texas Dallas, USA, Sriraam.Natarajan@utdallas.edu), Tarun Kumar (Indian Institute of Technology Madras, India, tkumar@cse.iitm.ac.in), Deepak Maurya (Indian Institute of Technology Madras, India, maurya@cse.iitm.ac.in), Nikita Moghe (The University of Edinburgh, UK, nikita.moghe@ed.ac.uk), Naganand Yadati (Indian Institute of Science, India, y.naganand@gmail.com), Jeshuren Chelladurai (Indian Institute of Technology Madras, India, jeshurench@gmail.com), Aparna Rai (Indian Institute of Technology Guwahati, India, raiaparna13@gmail.com).
Additional Information
Supplemental workshop site: https://sites.google.com/view/gclr2021/
W13: 5th International Workshop on Health Intelligence (W3PHIAI-21)
Public health authorities and researchers collect data from many sources and analyze these data together to estimate the incidence and prevalence of different health conditions, as well as related risk factors. Modern surveillance systems employ tools and techniques from artificial intelligence and machine learning to monitor direct and indirect signals and indicators of disease activities for early, automatic detection of emerging outbreaks and other health-relevant patterns. To provide proper alerts and timely response, public health officials and researchers systematically gather news and other reports about suspected disease outbreaks, bioterrorism, and other events of potential international public health concern, from a wide range of formal and informal sources. Given the ever-increasing role of the World Wide Web as a source of information in many domains including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. This is especially the case for non-traditional online resources such as social networks, blogs, news feeds, Twitter posts, and online communities, given the sheer size and ever-increasing growth and change rate of their data. Web applications along with text processing programs are increasingly being used to harness online data and information to discover meaningful patterns identifying emerging health threats. The advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.
Moreover, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All of these changes require novel solutions and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks. The goal of this workshop is to focus on creating and refining AI-based approaches that (1) process personalized data, (2) help patients (and families) participate in the care process, (3) improve patient participation, (4) help physicians utilize this participation in order to provide high quality and efficient personalized care, and (5) connect patients with information beyond that available within their care setting. The extraction, representation, and sharing of health data, patient preference elicitation, personalization of “generic” therapy plans, adaptation to care environments and available health expertise, and making medical information accessible to patients are some of the relevant problems in need of AI-based solutions.
Topics
The workshop will include original contributions on theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications, with a focus on applications in population and personalized health. This workshop is especially interested in hearing about the challenges and problems data science and AI can address related to the global pandemic, and relevant deployments and experiences in gearing AI to cope with COVID-19. The scope of the workshop includes, but is not limited to, the following areas:
- Knowledge Representation and Extraction
- Integrated Health Information Systems
- Patient Education
- Patient-Focused Workflows
- Shared Decision Making
- Geographical Mapping and Visual Analytics for Health Data
- Social Media Analytics
- Epidemic Intelligence
- Predictive Modeling and Decision Support
- Semantic Web and Web Services
- Biomedical Ontologies, Terminologies, and Standards
- Bayesian Networks and Reasoning under Uncertainty
- Temporal and Spatial Representation and Reasoning
- Case-based Reasoning in Healthcare
- Crowdsourcing and Collective Intelligence
- Risk Assessment, Trust, Ethics, Privacy, and Security
- Sentiment Analysis and Opinion Mining
- Computational Behavioral/Cognitive Modeling
- Health Intervention Design, Modeling and Evaluation
- Online Health Education and E-learning
- Mobile Web Interfaces and Applications
- Applications in Epidemiology and Surveillance (e.g. Bioterrorism, Participatory Surveillance, Syndromic Surveillance, Population Screening)
- Explainable AI (XAI) in Health and Medical domain
- Precision Medicine and Health
Format
The workshop will be two full days, consisting of a welcome session, keynote and invited talks, full/short paper presentations, demos, and posters. The organizers have experience hosting virtual conferences and as such have innovative ideas for engaging participants with both presentations and posters.
Submissions
We invite researchers and industrial practitioners to submit their original contributions following the AAAI format through EasyChair (https://easychair.org/conferences/?conf=w3phiai21). Three categories of contributions are sought: full-research papers up to 8 pages; short papers up to 4 pages; and posters and demos up to 2 pages.
Organizing Committee
Martin Michalowski, Cochair (University of Minnesota – Twin Cities, martinm@umn.edu), Arash Shaban-Nejad, Cochair (The University of Tennessee Health Science Center – Oak-Ridge National Lab (UTHSC-ORNL) Center for Biomedical Informatics, ashabann@uthsc.edu), Szymon Wilk (Poznan University of Technology), David L. Buckeridge (McGill University), John S. Brownstein (Boston Children’s Hospital, Harvard University), Byron C. Wallace (Northeastern University), Michael J. Paul (The University of Colorado Boulder).
Additional Information
Supplemental workshop site: http://w3phiai2021.w3phi.com/
W14: Hybrid Artificial Intelligence
Increased specialization in the sub-fields of AI has led to extreme fragmentation of the field. This has inhibited research on more complete AI systems that will require the coordination of multiple cognitive faculties (language, vision, and reasoning) coupled with an ability to act and effect changes in the real world. This workshop will bring together researchers from NLP, computer vision, reasoning, and action/robotics to explore end-to-end hybrid AI (HAI) systems.
Topics
Among the questions that participants will discuss and seek to answer are:
- AI architectures: What should the representational and computational boundaries between modules look like? Can we characterize the best ways in which information shared between modules should be packaged?
- Information revision: How should such systems handle feedback loops and the revision of information between modules?
- Bounded rationality: How can processing resources best be managed between modules?
- Neuro-symbolic: How can machine learning be parceled out among components or can end-to-end learning result in systems that coordinate multi-faculty capabilities? Can HAI systems provide a testing ground for neuro-symbolic computing?
- AI Challenge problems: Are there particular challenge problems that would be most appropriate for the study of such systems that would encourage progress in this area?
Format
HAI will be a one-day workshop that will include presentations of accepted papers, a talk by an invited speaker and a panel discussion to discuss the above questions. Attendance is open to all.
Submissions
Full papers (maximum of 8 pages in length) that address the above questions or that report on efforts to combine multiple cognitive/action modules (together with lessons learned) may be submitted. Position papers (maximum of 4 pages in length) may also be submitted. Note that this workshop is not intended to explore AI systems that consist solely of multiple AI technologies (e.g., symbolic and neural nets): the technologies must be deployed in service of multiple coordinated functions (e.g., language plus vision).
Submission site: https://easychair.org/conferences/?conf=hai2021
Please follow the AAAI formatting guidelines (https://www.aaai.org/Publications/Templates/AuthorKit20.zip).
Organizing Committee
Charles Ortiz, Chair (Palo Alto Research Center (PARC), USA, cortiz@parc.com), Sven Dickinson, (University of Toronto and Samsung Toronto AI Research Center, Canada, sven@cs.toronto.edu), Ron Kaplan (Independent and Stanford, USA, Ron.kaplan@post.harvard.edu), Michael Thielscher (University of New South Wales, Australia, mit@unsw.edu.au)
Additional Information
Supplemental workshop site: https://sites.google.com/view/aaai2021workshop/home
W15: Imagining Post-COVID Education with AI
COVID-19 has brought upon us the inevitable transformation towards virtual education. The ensuing need for scalable, personalized learning systems has led to an unprecedented demand for understanding large-scale educational data. However, the field of Artificial Intelligence in Education (AIEd) has received relatively little academic attention compared to the mainstream machine learning areas like vision, natural language processing, and healthcare. In this workshop, we plan to invite AIEd enthusiasts from all around the world through three different channels. First, we will call for papers related to important AIEd topics that can help us imagine what new education will look like. Second, we propose a shared task about various AIEd problems on the largest public dataset, EdNet, which contains 123M interactions from more than 1M students. Finally, we host a global challenge on Kaggle for a fair comparison of state-of-the-art Knowledge Tracing models and invite technical reports from winning teams. Through these initiatives, we aim to provide a common ground for researchers to share their cutting-edge insights on AIEd and encourage the development of practical and large-scale AIEd methods of lasting impact.
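To make the knowledge tracing task concrete for readers new to AIEd, the sketch below shows the prediction target in its simplest possible form: estimating, before each interaction, the probability that a student's next response will be correct. It is only an illustrative per-student running-accuracy baseline under an assumed (student_id, is_correct) log layout; it is not the shared task's reference model and does not reflect EdNet's actual schema.

```python
# Minimal illustrative knowledge-tracing baseline: predict whether a student's
# next response is correct from their running accuracy so far.
# Hypothetical data layout; not the EdNet shared-task reference model.
from collections import defaultdict

def predict_next_correctness(interactions):
    """interactions: list of (student_id, is_correct) in time order.
    Returns one predicted probability per interaction, each made
    *before* seeing that interaction's outcome."""
    seen = defaultdict(lambda: [0, 0])   # student_id -> [num_correct, num_total]
    preds = []
    for student, correct in interactions:
        n_correct, n_total = seen[student]
        preds.append((n_correct + 1) / (n_total + 2))  # Laplace-smoothed estimate
        seen[student][0] += int(correct)
        seen[student][1] += 1
    return preds

if __name__ == "__main__":
    log = [("s1", True), ("s1", True), ("s1", False), ("s2", True)]
    print(predict_next_correctness(log))  # [0.5, 0.666..., 0.75, 0.5]
```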
Topics
We invite high-quality paper submissions on topics including, but not limited to, the following:
- Deploying Educational Systems in Real World
- Pre-deployment Considerations of Educational Systems
- Behavioral Testing of Intelligent Tutoring Systems
- User Interface for Interactive Educational Systems
- A/B Testing of Educational Systems
- Interpretability in AIEd
- AI for Formative Learning
- Knowledge Tracing (Response Prediction, Response Correctness Prediction)
- Educational Content Recommendation
- Question Difficulty Prediction
- Score Prediction
- Automated Essay Scoring
- Personalized Curriculum Generation
- Application of Deep Learning in Learning Sciences
- Role of Artificial Intelligence in Remote Learning
- Student Monitoring
- Teacher-Educational System Integration
Format
We plan to host a one-day workshop which consists of the following programs (tentative):
- Invited talks, including Daniela Rus, MIT
- Presentations from
- Kaggle competition top competitors
- Shared task participants
- Authors of submitted papers
- Interactive session
- Panel discussion
Attendance
We expect 25-50 attendees, with invitations extended to the following groups:
- Workshop organizers (8)
- Top-ranking Kaggle competition participants (5-20)
- Authors of shared task technical papers (5-20)
- Authors of submitted papers (5-20)
- Invited speakers (2)
Submissions
Submissions of papers including Kaggle competition technical papers, shared task technical papers and general submissions should follow the AAAI format and can be up to 8 pages excluding references and appendices. Submissions should be made in PDF format through OpenReview. Papers will be peer-reviewed and selected for oral or poster presentations at the workshop. Attendance is open to all, and at least one author of each accepted submission must be present at the workshop.
Submission Site: OpenReview, https://easychair.org/cfp/TIPCE2021
Organizing Committee
Paul Kim, Chair (Stanford University), Neil Heffernan, Chair (Worcester Polytechnic Institute), Jineon Baek (University of Michigan), Hoonpyo Jeon (Stanford University), Byungsoo Kim (Riiid! AI Research), Jamin Shin (Riiid! AI Research), Dongmin Shin (Riiid! AI Research), Youngduck Choi (Yale University)
Additional Information
Supplemental workshop site: https://sites.google.com/view/tipce-2021/home
W16: Knowledge Discovery from Unstructured Data in Financial Services (KDF)
Knowledge discovery from various data sources has gained the attention of many practitioners over the past decades. Its capabilities have expanded from processing structured data (e.g. DB transactions) to unstructured data (e.g. text, images, and videos). In spite of substantial research focusing on discovery from news, web, and social media data, its application to data in professional settings such as legal documents, financial filings, and government reports still presents major challenges. One reason is that the precision and recall requirements for extracted knowledge to be used in business processes are stringent; another is that the signals gathered from these knowledge discovery tasks are usually very sparse, which makes the generation of supervision signals quite challenging.
In the financial services industry, in particular, a large amount of financial analysts’ work requires knowledge discovery and extraction from different data sources, such as SEC filings, loan documents, and industry reports, before the analysts can conduct any analysis. This manual extraction process is usually inefficient, error-prone, and inconsistent, and it is one of the key bottlenecks preventing financial services companies from improving their operating productivity. These challenges call for robust artificial intelligence (AI) algorithms and systems to help. The automated processing of unstructured data to discover knowledge from complex financial documents requires a series of techniques such as linguistic processing, semantic analysis, and knowledge representation and reasoning. Designing and implementing these AI techniques to meet the needs of financial business operations requires a joint effort between academic researchers and industry practitioners.
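As a deliberately simple illustration of the kind of linguistic processing involved, the sketch below runs an off-the-shelf named-entity recognizer over a sentence resembling a financial filing. The spaCy model and the example sentence (company and bank names included) are stand-ins chosen for illustration; production-grade extraction from SEC filings would require domain-specific models and far more careful handling.

```python
# Illustrative sketch (not a workshop artifact): extracting named entities from
# a sentence resembling a financial filing with an off-the-shelf spaCy model.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("On March 31, 2020, Acme Corp entered into a $250 million credit "
        "agreement with First Example Bank.")
doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. DATE, ORG, and MONEY spans
```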
Topics
We invite submissions of original contributions on methods, theory, applications, and systems on artificial intelligence, machine learning, natural language processing and understanding, big data, statistical learning, data analytics, and deep learning, with a focus on knowledge discovery in the financial services domain. The scope of the workshop includes, but is not limited to, the following areas:
- Representation learning, distributed representations learning and encoding in natural language processing for financial documents;
- Synthetic or genuine financial datasets and benchmarking baseline models;
- Transfer learning application on financial data, knowledge distillation as a method for compression of pre-trained models or adaptation to financial datasets;
- Search and question answering systems designed for financial corpora;
- Named-entity disambiguation, recognition, relationship discovery, ontology learning and extraction in financial documents;
- Knowledge alignment and integration from heterogeneous data;
- Using multi-modal data in knowledge discovery for financial applications
- AI assisted data tagging and labeling;
- Data acquisition, augmentation, feature engineering, and analysis for investment and risk management;
- Automatic data extraction from financial filings and quality verification;
- Event discovery from alternative data and its impact on an organization’s equity price;
- AI systems for relationship extraction and risk assessment from legal documents;
- Accounting for Black-Swan events in knowledge discovery methods.
Based on the reflection and feedback from our AAAI-20 KDF workshop, this workshop is particularly interested in financial domain-specific representation learning, open financial datasets and benchmarking, and transfer learning application on financial data.
Although textual data is prevalent in many finance-related business problems, we also encourage submissions of studies or applications pertinent to finance using other types of unstructured data such as financial transactions, sensors, mobile devices, satellites, and social media.
Format
KDF 2021 is a one-day VIRTUAL workshop. The program of the workshop will include invited talks, spotlight paper presentations, and lightning poster presentations. We cordially welcome researchers, practitioners, and students from academia and industrial communities who are interested in the topics to participate; at least one author of each accepted submission must be present at the workshop.
Submissions
We invite submissions of relevant work that would be of interest to the workshop. All submissions must be original contributions, following the AAAI-21 formatting guidelines. We accept two types of submissions: full research papers of no more than 8 pages and short/poster papers of 2-4 pages. Submissions will be accepted via the EasyChair submission website https://easychair.org/my/conference?conf=kdf21. For general inquiries about KDF or submission questions, please write to inquiry.kdf2021 at easychair.org.
Organizing Committee
Xiaomo Liu (S&P Global), Zhiqiang Ma (S&P Global), Manuela M. Veloso (J.P. Morgan), Sameena Shah (J.P. Morgan), Armineh Nourbakhsh (J.P. Morgan), Gerard de Melo (University of Potsdam), Le Song (Georgia Institute of Technology and Ant Financial), Quanzhi Li (Alibaba Group)
Additional Information
Supplemental workshop site: https://aaai-kdf.github.io/kdf2021
W17: Learning Network Architecture during Training
A fundamental problem in the use of artificial neural networks is that the first step is to guess the network architecture. Fine-tuning a network architecture by hand is very time consuming and rarely yields an optimal design. Hyperparameters such as the number of layers, the number of nodes in each layer, the pattern of connectivity, and the presence and placement of elements such as memory cells, recurrent connections, and convolutional elements are all manually selected. If it turns out that the architecture is not appropriate for the task, the user must repeatedly adjust the architecture and retrain the network until an acceptable architecture has been obtained.
There is now a great deal of interest in finding better alternatives to this scheme. Options include pruning a trained network or training many networks automatically. In this workshop we would like to focus on a contrasting approach: learning the architecture during training. This topic encompasses forms of Neural Architecture Search (NAS) in which the performance properties of each architecture, after some training, are used to guide the selection of the next architecture to be tried. It also encompasses techniques that augment or alter the network as it is trained. Examples of the latter include the Cascade-Correlation algorithm and other methods that incrementally build or modify a neural network during training, as needed for the problem at hand.
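As a rough illustration of what "learning the architecture during training" can mean in its simplest form, the sketch below widens a small MLP's hidden layer whenever the training loss plateaus, preserving already-trained weights and adding freshly initialized units. This is only a toy sketch of the general idea, not an implementation of Cascade-Correlation or of any specific NAS method; the plateau criterion, layer sizes, and training data are arbitrary placeholders.

```python
# Toy sketch: grow a network during training when the loss stops improving.
import torch
import torch.nn as nn

def widen(model, extra_units):
    """Return a new 2-layer MLP with extra hidden units, copying the trained
    weights of `model` into the enlarged layers (new units start random)."""
    old_in, old_h = model[0].in_features, model[0].out_features
    old_out = model[2].out_features
    new = nn.Sequential(nn.Linear(old_in, old_h + extra_units), nn.Tanh(),
                        nn.Linear(old_h + extra_units, old_out))
    with torch.no_grad():
        new[0].weight[:old_h] = model[0].weight
        new[0].bias[:old_h] = model[0].bias
        new[2].weight[:, :old_h] = model[2].weight
        new[2].bias.copy_(model[2].bias)
    return new

model = nn.Sequential(nn.Linear(4, 2), nn.Tanh(), nn.Linear(2, 1))
x, y = torch.randn(256, 4), torch.randn(256, 1)   # toy regression data
best, stall = float("inf"), 0
for epoch in range(300):
    # plain SGD, recreated each epoch so it stays valid after the model grows
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if loss.item() < best - 1e-4:
        best, stall = loss.item(), 0
    else:
        stall += 1
    if stall >= 10:                     # loss has plateaued: grow the network
        model, best, stall = widen(model, 2), float("inf"), 0
```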
Main Objectives
Our goal is to build a stronger community of researchers exploring these methods, and to find synergies among these related approaches and alternatives. Eliminating the need to guess the right topology in advance of training is a prominent benefit of learning network architecture during training. Additional advantages are possible, including decreased computational resources to solve a problem, reduced time for the network to make predictions, reduced requirements for training set size, and avoiding “catastrophic forgetting.” We would especially like to highlight approaches that are qualitatively different from some popular but computationally intensive NAS methods.
As deep learning problems become increasingly complex, network sizes must increase and other architectural decisions become critical to success. The deep learning community must often confront serious time and hardware constraints from suboptimal architectural decisions. The growing popularity of NAS methods demonstrates the community’s hunger for better ways of choosing or evolving network architectures that are well-matched to the problem at hand.
Topics
Methods for learning network architecture during training, including incrementally building neural networks during training, and new performance benchmarks for such methods. Novel approaches and works in progress are encouraged.
Format
The workshop will include invited speakers, panels, virtual poster sessions, and presentations. Attendance is open to all; at least one author of each accepted submission must be virtually present at the workshop.
Submissions
Please refer and submit through the workshop website listed below.
Organizing Committee
Scott E. Fahlman (School of Computer Science, Carnegie Mellon University, sef@cs.cmu.edu), Kate Farrahi (Electronics and Computer Science Department, University of Southampton, k.farrahi@soton.ac.uk), George Magoulas (Department of Computer Science and Information Systems, Birkbeck College, University of London, gmagoulas@dcs.bbk.ac.uk), Edouard Oyallon (Sorbonne Université – LIP6, Edouard.oyallon@lip6.fr), Bhiksha Raj Ramakrishnan (School of Computer Science, Carnegie Mellon University, bhiksha@cs.cmu.edu), Dean Alderucci (School of Computer Science, Carnegie Mellon University, dalderuc@cs.cmu.edu)
Additional Information
Supplemental workshop site: https://www.cs.cmu.edu/~sef/AAAI-2021-Workshop.htm
W18: Meta-Learning and Co-Hosted Competition
The performance of many machine learning algorithms depends highly upon the quality and quantity of available data and on (hyper-)parameter settings. In particular, deep learning methods, including convolutional neural networks, are known to be ‘data-hungry’ and require properly tuned hyper-parameters. Meta-learning is a way to address both issues. Simple but effective approaches reported recently include pre-training models on similar datasets. This way, a good model or good hyperparameters can be pre-determined, or learned model parameters can be transferred to the new dataset. As such, higher performance can be achieved with the same amount of data, or similar performance with less data (few-shot learning). This workshop, with a co-hosted competition, will focus on meta-learning and few-shot learning.
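A minimal sketch of the "pre-train on similar data, then transfer" idea described above is given below: an ImageNet-pretrained backbone is kept frozen and only a small new task head is fine-tuned on a (possibly few-shot) target dataset. The 5-way head, the random tensors standing in for target data, and all hyperparameters are placeholders; this is not the competition's baseline.

```python
# Illustrative transfer-learning sketch: reuse parameters pre-trained on a
# large, related dataset and fine-tune only a small task head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)   # newer torchvision prefers weights=...
for p in backbone.parameters():
    p.requires_grad = False                    # keep transferred features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # new 5-way task head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# a toy "few-shot" batch standing in for the real target dataset
images = torch.randn(10, 3, 224, 224)
labels = torch.randint(0, 5, (10,))
for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
```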
Topics
Please note that papers beyond the scope of the competition are also welcome. We welcome all types of submissions that feature meta-learning and few-shot learning, but have a specific focus on the following topics:
- evaluation protocols and standardized benchmarks
- generalization of meta-learning techniques across diverse datasets
- papers that describe submissions to the co-hosted ChaLearn competition
- traditional meta-learning, including active testing, meta-features and meta-datasets
- few-shot learning techniques, such as MAML and Matching Networks
Format
The workshop will be held virtually, like all workshops at AAAI. We will organize a one-day workshop, featuring high-profile keynote speakers, presentations of a selection of accepted submissions, and a panel discussion. All other accepted papers will present their work in a virtual poster session. We already have the following keynote speakers confirmed: Chelsea Finn, Oriol Vinyals, Lilian Weng, and Richard Zemel.
Submissions
Papers must be formatted in AAAI two-column, camera-ready style. We welcome two types of submissions: regular papers (max. 7 pages, including references) and short papers (max. 4 pages, including references). All accepted papers will be hosted on the website of the workshop. Authors of accepted regular papers can opt in to the formal PMLR proceedings. Submissions are due December 1, 2020.
Submission site: https://cmt3.research.microsoft.com/METALEARNCC2021
Organizing Committee
Adrian El Baz (INRIA and Université Paris Saclay, France), Isabelle Guyon (INRIA and Université Paris Saclay, France, ChaLearn, USA), Zhengying Liu (INRIA and Université Paris Saclay, France), Jan N. van Rijn (LIACS, Leiden University, the Netherlands), Sebastien Treguer (INRIA and Université Paris Saclay, France, ChaLearn, USA), Joaquin Vanschoren (Eindhoven University of Technology, the Netherlands)
Additional Information
Supplemental workshop site: https://metalearning.chalearn.org/
W19: Meta-Learning for Computer Vision (ML4CV)
Machine learning, and in particular deep learning, has delivered a significant boost in performance on various computer vision tasks such as object recognition, face recognition, and semantic segmentation over the last decade. Despite this success, the learning mechanism of modern systems remains surprisingly narrow compared to the way humans learn. For example, unlike most current systems, which learn a single model from a single data set, humans acquire knowledge from diverse experiences over many years. As an alternative, meta-learning and lifelong learning (also known as never-ending learning) have been emerging as a new paradigm in the machine learning literature.
Meta-learning and lifelong learning relate to the human ability to continuously learn new tasks with very limited labeled training data. In current computer vision practice, we train one architecture for every individual problem; as soon as the data distribution or the problem statement changes, the machine learning algorithm has to be retrained or redesigned. Further, once the model is updated to incorporate a newer data distribution or task, the knowledge learnt from the previous task is “forgotten.” Meta-learning focuses on designing models that utilize prior knowledge learnt from other tasks to perform a new task, and in doing so attempts to build models for “general artificial intelligence.”
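To make the "use prior tasks to perform a new task" idea concrete, here is a compact sketch of one widely used formulation, a MAML-style inner/outer loop: each task adapts a copy of the shared parameters on a small support set, and the shared initialization is then updated from the adapted models' query-set losses. The toy sine-wave regression tasks, network size, and learning rates are arbitrary placeholders, not part of the workshop.

```python
# Compact MAML-style meta-learning sketch on toy 1-D regression tasks.
import torch

def model(params, x):
    """A tiny functional MLP so adapted parameters stay differentiable."""
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def make_task():
    """A random sine-wave regression task: (support set, query set)."""
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3
    x = torch.rand(20, 1) * 10 - 5
    y = amp * torch.sin(x + phase)
    return (x[:10], y[:10]), (x[10:], y[10:])

def init_param(*shape):
    return (torch.randn(*shape) * 0.1).requires_grad_()

params = [init_param(1, 32), init_param(32), init_param(32, 1), init_param(1)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(300):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                         # a meta-batch of 4 tasks
        (xs, ys), (xq, yq) = make_task()
        # inner loop: one gradient step on the task's support set
        support_loss = ((model(params, xs) - ys) ** 2).mean()
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # outer objective: how well the adapted parameters do on the query set
        meta_loss = meta_loss + ((model(adapted, xq) - yq) ** 2).mean()
    meta_loss.backward()
    meta_opt.step()
```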
Topics
The scope of the workshop includes, but is not limited to, the following topics:
- Efficient models of meta learning for computer vision
- Lifelong learning for computer vision
- Never-ending multimodal networks for computer vision
- Robust approaches to address catastrophic forgetting
- Imitation learning for visual understanding
- Neural architecture search
- Active domain generalization
- Meta domain generalization
- Domain-shift detection
- Learning to learn
- AutoML
- Meta-learning applications in visual domains including biometrics, medical imaging and action recognition.
Format
This is a one-day workshop and will be organized as a combination of presentations by invited speakers and paper presentations. The papers submitted to the workshop will be peer reviewed. Attendance is open to all interested participants.
Submissions
Full papers should be submitted following the AAAI paper submission guidelines. We plan to publish the papers accepted in this workshop as a book with Springer in 2021.
Submission Site: https://cmt3.research.microsoft.com/ML4CV2021
Organizing Committee
Mayank Vatsa (IIT Jodhpur, mvatsa@iitj.ac.in), Richa Singh (IIT Jodhpur, richa@iitj.ac.in), Nalini Ratha (SUNY Buffalo, nratha@buffalo.edu), Vishal Patel (Johns Hopkins University, vpatel36@jhu.edu), Surbhi Mittal, web chair (IIT Jodhpur, mittal.5@iitj.ac.in)
Additional Information
Supplemental workshop site: http://iab-rubric.org/mel4cv/
W20: Plan, Activity, and Intent Recognition (PAIR) 2021
This workshop seeks to bring together researchers and practitioners from diverse backgrounds to share ideas and recent results. It will aim to identify important research directions and opportunities for the synthesis and unification of representations and algorithms for recognition. Contributions of research results are sought in the following areas:
- Plan, activity, intent, or behavior recognition
- Adversarial planning, opponent modeling
- Modeling multiple agents, modeling teams
- User modeling on the web and in intelligent user interfaces
- Acquaintance models
- Plan recognition and user modeling in marketplaces and e-commerce
- Intelligent tutoring systems (ITS)
- Machine learning for plan recognition and user modeling
- Personal software assistants
- Social network learning and analysis
- Monitoring agent conversations (overhearing)
- Observation-based coordination and collaboration (teamwork)
- Multi-agent plan recognition
- Observation-based failure detection
- Monitoring multi-agent interactions
- Uncertainty reasoning for plan recognition
- Commercial applications of user modeling and plan recognition
- Representations for agent modeling
- Modeling social interactions
- Inferring emotional states
- Reverse engineering and program recognition
- Programming by demonstration
- Imitation
Due to the diversity of disciplines engaging in this area, related contributions from other fields are also welcome.
Submissions
All submissions must be original. If a work is also under submission to the main conference or to a different conference, this should be indicated in the title. Papers must be in trouble-free, high-resolution PDF format, formatted for US Letter (8.5″ x 11″) paper, using Type 1 or TrueType fonts. Submissions are anonymous and must conform to the AAAI-21 instructions for double-blind review. All questions about submissions should be emailed to Sarah Keren at sarah.e.keren@gmail.com.
Full Papers: We accept full paper submissions. Papers must be formatted in AAAI two-column, camera-ready style. Submissions may have up to 9 pages with pages 8 and 9 containing nothing but references.
Demo Track: This year the PAIR workshop will include a demo track. Authors are required to submit two items: (1) a 2-page short paper describing their system, formatted in AAAI two-column style, and (2) a video (of duration up to 10 minutes) of the proposed demonstration. Slides are also permitted in lieu of video, but greater weight will be given to submissions accompanied by videos. The paper must present the technical details of the demonstration, discuss related work, and describe the significance of the demonstration. We welcome submission of demos submitted to the demo session of the main conference. The demo track will be chaired by Dr. Ramon Fraga Pereira and Dr. Mor Vered. Questions regarding demos should be referred to ramonfpereira@gmail.com or mor.vered@unimelb.edu.au.
Organizing Committee
Sarah Keren (primary contact) (Harvard University, School of Engineering and Applied Sciences, sarah.e.keren@gmail.com or skeren@seas.harvard.edu), Reuth Mirsky (University of Texas, Department of Computer Science, reuth@cs.utexas.edu), Christopher Geib (SIFT LLC, cgeib@sift.net)
Additional Information
Supplemental workshop site: http://www.planrec.org/PAIR/Resources.html
W21: Workshop on Privacy-Preserving Artificial Intelligence (PPAI-21)
The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: they often reveal sensitive personal information that can be exploited, without the knowledge and/or consent of the individuals involved, for various purposes including monitoring, discrimination, and illegal activities.
The second AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-21) held at the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) builds on the success of last year’s AAAI PPAI to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and societal impact of privacy in AI.
PPAI-21 will place particular emphasis on: (1) algorithmic approaches to protect data privacy in the context of learning, optimization, and decision making that raise fundamental challenges for existing technologies; (2) privacy challenges created by government and tech-industry responses to the Covid-19 outbreak; (3) social issues related to tracking, tracing, and surveillance programs; and (4) algorithms and frameworks to release privacy-preserving benchmarks and data sets.
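To ground topic (1) for readers less familiar with the area, the sketch below shows one of the simplest privacy-preserving primitives, the Laplace mechanism for an epsilon-differentially-private counting query. The toy data, predicate, and epsilon value are illustrative placeholders; real deployments require far more careful privacy accounting.

```python
# Illustrative sketch (not a PPAI-21 artifact): releasing a count with
# epsilon-differential privacy via the Laplace mechanism.
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    """Release a counting query with epsilon-differential privacy.
    A counting query has L1 sensitivity 1, so the noise scale is 1/epsilon."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 47, 52, 61, 29, 44]                      # toy "sensitive" records
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))    # noisy count of people 40+
```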
Topics
The workshop organizers invite paper submissions on the following (and related) topics:
- Applications of privacy-preserving AI systems
- Attacks on data privacy
- Differential privacy: theory and applications
- Distributed privacy-preserving algorithms
- Human rights and privacy
- Privacy issues related to the Covid-19 outbreak
- Privacy policies and legal issues
- Privacy preserving optimization and machine learning
- Privacy preserving test cases and benchmarks
- Surveillance and societal issues
Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.
Format
The workshop will be a one-and-a-half-day meeting. The first session (half day) will be dedicated to privacy challenges, particularly those raised by Covid-19 tracing and tracking policy programs. The second, day-long session will be dedicated to the workshop’s technical content on privacy-preserving AI. The workshop will include a number of (possibly parallel) technical sessions; a virtual poster session where presenters can discuss their work, with the aim of further fostering collaborations; multiple invited speakers covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and a number of tutorial talks. It will conclude with a panel discussion. Attendance is open to all. At least one author of each accepted submission must be present at the workshop.
Submissions
Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-21 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentation at the workshop.
Submission site: https://cmt3.research.microsoft.com/PPAI2021
Organizing Committee
Ferdinando Fioretto (Syracuse University), Pascal Van Hentenryck (Georgia Institute of Technology), Richard W. Evans (Rice University)
Additional Information
Supplemental workshop site: https://ppai21.github.io/
W22: Reasoning and Learning for Human-Machine Dialogs (DEEP-DIAL21)
Natural conversation is a hallmark of intelligent systems, and thus dialog systems have been a key sub-area of Artificial Intelligence research for decades. Chatbots are their most recent incarnation and have been widely adopted, particularly during the recent COVID-19 pandemic, as sources of information. Given the increasing interest, there has been a surge in the development of easy-to-use platforms to rapidly create dialog agents at different levels of sophistication. Further, with the rapid advances in natural language generation models, there is a need to foster and guide research on the development and deployment of dialog systems toward what users actually value.
The COVID-19 pandemic has been an opportunity to validate the relevance of collaborative assistance technologies for real-world needs. Chatbots have been increasingly used for seeking advice and providing assistance related to symptoms, health facilities, and public policies. The aim is to implement technical systems that smartly adapt their functionality to their users’ individual needs and requirements and solve problems in close cooperation with users. Such systems need to enter into a dialog and convincingly explain their suggestions and decision-making behavior.
Such applications highlight future research directions for the community. There is a need to build dialog systems that can explain their reasoning and can stand up to the ethical standards demanded in real-life settings. The impressive gains of learning-based models in discovering insights from data have to be married with prior knowledge (e.g., common-sense and spatio-temporal knowledge) to be usable by everyday users. There is an urgent need to highlight the crucial role that reasoning methods, such as constraint satisfaction, planning, and scheduling, can play in building end-to-end conversation systems that evolve over time. These systems have to be deployable at lower cost and usable in situations with limited device capabilities and network connectivity.
These form the motivation for the fourth edition of the Workshop on Reasoning and Learning for Human-Machine Dialogues. The past editions of the workshop were huge successes attracting 100+ AI researchers to discuss a variety of topics. DEEP-DIAL 21 will have reviewed paper presentations, invited talks, panels, and open contributions of datasets and chatbots.
Topics
With these motivations, some areas of interest for the workshop, but not limited to, are:
- Dialog Systems
- Design considerations for dialog systems
- Evaluation of dialog systems, metrics
- Open-domain dialog and chat systems
- Task-oriented dialog
- Style, voice, and personality in spoken dialog and written text
- Novel Methods for NL Generation for dialogs
- Early experiences with implemented dialog systems
- Mixed-initiative dialog where a partner is a combination of agent and human
- Hybrid methods
- Reasoning
- Domain model acquisition, especially from unstructured text
- Plan recognition in natural conversation
- Planning and reasoning in the context of dialog systems
- Handling uncertainty
- Optimal dialog strategies
- Learning
- Learning to reason
- Learning for dialog management
- End2end models for conversation
- Explaining dialog policy
- Practical Considerations
- Responsible chatting
- Ethical issues with learning and reasoning in dialog systems
- Corpora, Tools, and Methodology for Dialog Systems
- Securing one’s chat
Submissions
Submissions must be formatted in AAAI two-column, camera-ready style. Regular research papers may be no longer than 7 pages, where page 7 must contain only references, and no other text whatsoever. Short papers, which describe a position on the topic of the workshop or a demonstration/tool, may be no longer than 4 pages, references included. The accepted papers will be linked from the workshop website to the public versions on ArXiv.
Organizing Committee
Sathyanarayanan N. Aakur (Oklahoma State University, USA), Ullas Nambiar (Accenture, India), Imed Zitouni (Google, USA), and Biplav Srivastava (AI Institute, University of South Carolina, USA)
Additional Information
Supplemental workshop site: https://sites.google.com/view/deep-dial2021
W23: Reinforcement Learning in Games
Games provide an abstract and formal model of environments in which multiple agents interact: each player has a well-defined goal, and rules describe the effects of interactions among the players. The first achievements in playing these games at a super-human level were attained with methods that relied on and exploited manually designed domain expertise (e.g. chess, checkers). In recent years, we have seen examples of general approaches that learn to play these games via self-play reinforcement learning (RL), as first demonstrated in Backgammon. While progress has been impressive, we believe we have just scratched the surface of what is possible, and much work remains to be done in order to truly understand the algorithms and learning processes within these environments.
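As a toy illustration of the self-play idea (and nothing more), the sketch below learns to play Nim, where players alternately remove 1-3 sticks and whoever takes the last stick wins, by playing against itself and backing up the terminal result with Monte Carlo value updates. The game, hyperparameters, and tabular representation are deliberately simplistic placeholders rather than any method discussed at the workshop.

```python
# Toy self-play sketch: tabular value learning for Nim via Monte Carlo updates.
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(sticks_remaining, action)]
alpha, epsilon, episodes = 0.1, 0.1, 50_000

def legal(sticks):
    return [a for a in (1, 2, 3) if a <= sticks]

def choose(sticks):
    if random.random() < epsilon:
        return random.choice(legal(sticks))
    return max(legal(sticks), key=lambda a: Q[(sticks, a)])

for _ in range(episodes):
    sticks, history = 10, []           # history of (state, action) per move
    while sticks > 0:
        a = choose(sticks)
        history.append((sticks, a))
        sticks -= a
    # The player who made the last move wins (+1); the opponent loses (-1).
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward               # alternate perspective each ply

# With enough episodes, the greedy policy tends to leave a multiple of
# 4 sticks whenever that is possible.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, 11)})
```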
The main objective of the workshop is to bring researchers together to discuss ideas, preliminary results, and ongoing research in the field of reinforcement learning in games.
We invite participants to submit papers by November 9 on topics including, but not limited to, the following: RL in various formalisms: one-shot games, turn-based and Markov games, partially-observable games, continuous games, cooperative games; deep RL in games; combining search and RL in games; inverse RL in games; foundations, theory, and game-theoretic algorithms for RL; opponent modeling; analyses of learning dynamics in games; evolutionary methods for RL in games; RL in games without the rules; Monte Carlo tree search; online learning in games.
Format
RLG is a full-day workshop. It will start with a 60-minute mini-tutorial covering the basics of RL in games, will include 2-3 invited talks by prominent contributors to the field, paper presentations, and a poster session, and will close with a discussion panel. Attendance is expected to be 150-200 participants, including organizers and speakers.
Submissions
Papers must be between 4 and 8 pages in the AAAI submission format, with the eighth page containing only references. Papers will be submitted electronically using EasyChair. Accepted papers will not be archival, and we explicitly allow papers that are concurrently submitted to, currently under review at, or recently accepted in other conferences/venues.
Submissions should be sent to: Martin Schmid (mschmid@google.com)
Workshop Chair
Martin Schmid (mschmid@google.com)
Organizing Committee
Marc Lanctot (DeepMind, lanctot@google.com), Julien Perolat (DeepMind, perolat@google.com), Martin Schmid (DeepMind, mschmid@google.com)
Additional Information
Supplemental workshop site: http://aaai-rlg.mlanctot.info/
W24: Scientific Document Understanding
Scientific documents such as research papers, patents, books, and technical reports are among the most valuable resources of human knowledge. At the AAAI-21 Workshop on Scientific Document Understanding (SDU@AAAI-21), we aim to gather insights into the recent advances and remaining challenges in scientific document understanding. Researchers from related fields are invited to submit papers on recent advances, resources, tools, and upcoming challenges for SDU. In addition, we propose a shared task on one of the challenging SDU tasks, i.e., acronym identification and disambiguation in scientific text.
Topics
Topics of interest include but are not limited to:
- Information extraction and information retrieval for scientific documents;
- Question answering and question generation for scholarly documents;
- Word sense disambiguation, acronym identification and expansion, and definition extraction; document summarization, text mining, document topic classification, and machine reading comprehension for scientific documents;
- Graph analysis applications including knowledge graph construction and representation, graph reasoning and query knowledge graphs;
- Biomedical image processing, scientific image plagiarism detection, and data visualization; code/pseudo-code generation from text and image/diagram captioning; new language understanding resources such as syntactic/semantic parsers, language models, or techniques to encode scholarly text;
- Survey or analysis papers on scientific document understanding and new tasks and challenges related to each scientific domain;
- Factuality, data verification, and anti-science detection
Shared Task
Acronyms, i.e., short forms of long phrases, are common in scientific writing. To push forward the research on acronym understanding in scientific text, we propose two shared tasks on acronym identification (i.e., recognizing acronyms and their long-form phrases in text) and acronym disambiguation (i.e., finding the correct expansion for an ambiguous acronym). Participants are welcome to submit their system reports to be presented in the workshop poster session.
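For illustration only, the snippet below implements a crude surface-pattern heuristic for the two subtasks: it flags all-caps tokens as acronym candidates and, when an acronym appears in parentheses, tries to recover a long form from the immediately preceding words. It is a toy example to make the task definitions concrete, not the shared-task baseline or its evaluation protocol.

```python
# Crude, illustrative acronym identification/expansion heuristic.
import re

def find_acronyms(sentence):
    """Return {acronym: expansion_or_None} using simple surface patterns."""
    results = {}
    # Pattern 1: "... long form (ACRONYM)": take as many preceding words as the
    # acronym has letters and check that the initial letters line up.
    for m in re.finditer(r"\(([A-Z]{2,10})\)", sentence):
        acro = m.group(1)
        words = sentence[:m.start()].split()[-len(acro):]
        if len(words) == len(acro) and all(
                w[0].lower() == c.lower() for w, c in zip(words, acro)):
            results[acro] = " ".join(words)
        else:
            results[acro] = None
    # Pattern 2: bare all-caps tokens with no parenthesized expansion nearby.
    for token in re.findall(r"\b[A-Z]{2,10}\b", sentence):
        results.setdefault(token, None)
    return results

print(find_acronyms(
    "We train a convolutional neural network (CNN) and compare it with an LSTM."))
# {'CNN': 'convolutional neural network', 'LSTM': None}
```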
Format
SDU will be a one-day workshop. The full-day workshop will start with an opening remark followed by long research paper presentations in the morning. The post-lunch session includes the invited talks, shared task winners’ presentations, and a panel discussion on the resources, findings, and upcoming challenges. SDU will also host a poster session for presenting the short research papers and the system reports of the shared tasks. SDU is expected to host 50-60 attendees. Invited speakers, committee members, authors of research papers, and participants of the shared task are invited to attend.
Submissions
Submissions should follow the AAAI formatting guidelines and the AAAI 2021 standards for double-blind review including anonymous submission. SDU accepts both long (8 pages including references) and short (4 pages including references) papers. Accepted papers will be published in the workshop proceedings. System reports should also follow the AAAI formatting guidelines and have 4-6 pages including references. System reports will be presented during poster sessions.
Submission site for papers and system reports: https://easychair.org/conferences/?conf=sduaaai21
Organizing Committee
Thien Huu Nguyen (University of Oregon, thie@cs.uoregon.edu), Walter Chang (Adobe Research, wachang@adobe.com), Amir Pouran Ben Veyseh (University of Oregon, apouranb@uoregon.edu), Leo Anthony Celi (Harvard University and MIT, lceli@bidmc.harvard.edu), Franck Dernoncourt (Adobe Research, franck.dernoncourt@adobe.com)
Additional Information
Supplemental workshop site: https://sites.google.com/view/sdu-aaai21/home
W25: Toward Robust, Secure and Efficient Machine Learning
Machine learning technology has been improving with every passing day and has been applied extensively to nearly every corner of society, offering substantial benefits to our daily lives. However, machine learning models face various threats. For example, it is known that machine learning models are vulnerable to adversarial samples. The existence of adversarial examples reveals that current machine learning models are fragile and can be easily fooled, leading to serious security concerns in machine learning systems such as autonomous driving vehicles or face recognition systems.
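To make the vulnerability concrete, the sketch below crafts an adversarial input with the fast gradient sign method (FGSM), one standard attack from this literature. The untrained toy classifier, random "image," and epsilon value are placeholders; a real demonstration would use a trained model and a real data set.

```python
# Illustrative FGSM sketch: a small, worst-case perturbation of the input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in input image
y = torch.tensor([3])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # perturb along the loss gradient sign

with torch.no_grad():
    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```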
More recently, due both to data privacy requirements, as specified in the European Union’s General Data Protection Regulation (GDPR), and to the limitations of computation power, the training of machine learning models has extended from centralized to decentralized settings (i.e., distributed or federated learning), where the model is exposed to even more threats. For example, in a federated learning setting, any client can mount attacks such as backdoor attacks on the global model, since clients have direct access to it. How to prevent privacy leakage during the information exchange of decentralized training is also a critical issue.
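The decentralized setting described above can be made concrete with a minimal federated averaging (FedAvg) sketch: each client trains locally on its own data, and only model weights, never raw data, are sent back and averaged into the global model. The linear model, random client data, and round count are placeholders, and the sketch deliberately ignores the attacks and privacy leaks just mentioned, which is exactly the gap the solicited work addresses.

```python
# Minimal FedAvg sketch: local client updates followed by weight averaging.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=0.1):
    model = copy.deepcopy(global_model)        # client starts from global weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(5, 1)
clients = [(torch.randn(32, 5), torch.randn(32, 1)) for _ in range(3)]  # toy data

for rnd in range(10):                          # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fedavg(states))
```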
At the same time, computation efficiency is a major concern for modern deep learning, in both inference and training. For inference, edge devices are preferred for better privacy, but they have very limited computational resources. For training, gradient or weight exchange is necessary for decentralized training, but such exchange requires communication, which may be slow. Furthermore, models that are robust to adversarial attacks usually require longer training time and orders of magnitude more computation (FLOPs) than normal networks.
This one-day workshop intends to bring experts from the machine learning, security, and federated learning communities together to work more closely on addressing these concerns. Specifically, we seek to study threats to machine learning not only in a single-node setting but also in a distributed setting, as well as potential defense strategies in both settings. In summary, we seek a holistic solution for robust, secure, and efficient machine learning.
Topics
- Theoretical contributions of adversarial machine learning.
- Training data poisoning, adversarial learning, and adversarial attacks and defenses.
- Secure machine learning
- Privacy-preserving machine learning
- Privacy attacks such as membership inference, and model inversion
- Model compression and efficiency improvement in both training and inference
- Efficiency improvement of information exchange in distributed training
Format
This one-day workshop will be held as an online Zoom meeting and will include invited speakers, presentations, and panel and group discussions.
Submissions
Submissions can be full technical papers (up to 8 pages) or short papers (up to 4 pages), and should be formatted in the AAAI style.
Submission site: https://easychair.org/conferences/?conf=rseml2021
Workshop General Chair
Qiang Yang (WeBank and Hong Kong University of Science and Technology)
Organizing Committee
Dawn Song (University of California, Berkeley), Song Han (Massachusetts Institute of Technology), Han Yu (Nanyang Technological University), Lixin Fan (WeBank), KamWoh Ng, main contact (jinhewu@webank.com or kamwoh@gmail.com)
Additional Information
Supplemental workshop site: http://federated-learning.org/rseml2021/
W26: Trustworthy AI for Healthcare
AI for healthcare has emerged as a very active research area in the past few years and has made significant progress. While existing results are encouraging, few clinical AI solutions are deployed in hospitals or actively utilized by physicians. A major problem is that existing clinical AI methods are not sufficiently trustworthy. For example, existing approaches make clinical decisions in a black-box way, which renders the decisions difficult to understand and less transparent. Existing solutions are not robust to small perturbations or potentially adversarial attacks, which raises security and privacy concerns. In addition, existing methods often exhibit biases with respect to specific ethnic groups or subpopulations. As a result, physicians are reluctant to use these solutions, since clinical decisions are mission-critical and have to be made with high trust and reliability.
Topics
In this workshop, we aim to address the trustworthy issues of clinical AI solutions. We invite submissions of short papers (up to 4 pages excluding references, in AAAI format) studying trustworthy AI for healthcare. The topics include but are not limited to:
- Interpretable AI methods for healthcare
- Robustness of clinical AI methods
- Medical knowledge grounded AI
- Physician-in-the-loop AI
- Security and privacy in clinical AI
- Fairness in AI for healthcare
- Ethics in AI for healthcare
- Robust and interpretable natural language processing for healthcare
- Methods for robust weak supervision
Format
The workshop will feature 14 invited talks, given by the following distinguished speakers (alphabetic order).
- Lawrence Carin, Professor, Duke University
- Emily Fox, Associate Professor, University of Washington
- Russ Greiner, Professor, University of Alberta
- Joyce Ho, Assistant Professor, Emory University
- Tommi Jaakkola, Professor, Massachusetts Institute of Technology
- Heng Ji, Professor, University of Illinois at Urbana-Champaign
- Sanmi Koyejo, Assistant Professor, University of Illinois at Urbana-Champaign
- Yan Liu, Associate Professor, University of Southern California
- Sendhil Mullainathan, Professor, University of Chicago
- Susan Murphy, Professor, Harvard University
- Tristan Naumann, Senior Researcher, Microsoft Research
- Lucila Ohno-Machado, Professor, University of California San Diego
- Rajesh Ranganath, Assistant Professor, New York University
- Jimeng Sun, Professor, University of Illinois at Urbana-Champaign
Submissions
The AAAI 2021 Workshop on Trustworthy AI for Healthcare invites submissions of short papers (up to 4 pages excluding references, in AAAI format) about trustworthy issues in clinical AI.
Organizing Committee
Pengtao Xie (University of California, San Diego, pengtaoxie2008@gmail.com), Marinka Zitnik (Harvard University, marinka@hms.harvard.edu), Byron Wallace (Northeastern University, byron@ccs.neu.edu), Jennifer G. Dy (Northeastern University, jdy@ece.neu.edu), Eric Xing (Carnegie Mellon University, epxing@cs.cmu.edu)
Additional Information
Supplemental workshop site: https://taih20.github.io/
Contact: taih.aaai21.ws@gmail.com