AAAI-20 Workshop Program

February 7-8, 2020
New York, New York, USA

AAAI is pleased to present the AAAI-20 Workshop Program. Workshops will be held Friday and Saturday, February 7-8, 2020 at the New York Hilton Midtown in New York, New York, USA. The final schedule will be available in November, and exact locations will be determined in January. The AAAI-20 workshop program includes 23 workshops covering a wide range of topics in artificial intelligence. Workshops are one day unless noted otherwise in the individual description. Participation in each workshop is in the range of 25-65 participants, and participation is usually by invitation from the workshop organizers. However, most workshops also allow general registration by other interested individuals. Please note that there is a separate registration fee for attendance at a workshop. Workshop registration is available for workshop only registrants or for AAAI-20 technical registrants at a discounted rate. Registration information will be mailed directly to all invited participants in December.

Important Dates for Workshop Organizers


  • November 15, 2019: Submissions due (unless noted otherwise)
  • December 4, 2019: Notification of acceptance (unless noted otherwise)
  • December 13, 2019: Early registration deadline
  • February 7-8, 2020: AAAI-20 Workshop Program

W1 — Affective Content Analysis (AffCon 2020): Interactive Affective Response

Analysis of content to measure affect and how it is experienced is a growing multidisciplinary research area that still has little cross-disciplinary collaboration. The artificial intelligence (AI) and computational linguistics (CL) communities are making strides in identifying and measuring affect from user signals, especially in language, while the human-computer interaction (HCI) community independently explores affect through user experience evaluations. Consumer psychology and marketing pursue a different direction, grounding affect in its theoretical underpinnings as well as its real-world applications.

The third Affective Content Analysis workshop aims to bring together researchers from computer science, psychology, and marketing science for stimulating discussions on affect, with a focus on language and text. In addition, this workshop offers a Shared Task with a new corpus, to spur the development of new approaches and methods for affect identification.

Topics

The theme of AffCon 2020 is the study of affect in response to interactive content that may evolve over time. The word ‘affect’ is used to refer to emotion, sentiment, mood, and attitudes, including subjective evaluations, opinions, and speculations. Psychological models of affect have been adopted by other disciplines to conceptualize and measure users’ opinions, intentions, and expressions. However, the context-specific characteristics of human affect suggest the need to measure it in ways that recognize multiple interpretations of human responses.

We invite papers that model and measure affect and that identify the affect-related dimensions best suited to studying consumer behavior. Such work, in turn, allows data models to represent behavior more faithfully and hence to guide firms' decisions and actions more effectively. We welcome submissions on topics including, but not limited to, the following:

  • Deep learning-based models for affect modeling in content (image, audio, and video)
  • Psycho-demographic profiling
  • Affective human-agent, -computer, and -robot interaction
  • Mirroring affect
  • Affect-aware text generation
  • Measurement and evaluation of affective content
  • Consumer psychology at scale from big data
  • Modeling consumers’ affective reactions
  • Affect lexica for online marketing communication
  • Affective commonsense reasoning
  • Multimodal emotion recognition and sentiment analysis
  • Affect and cognitive content measurement in text
  • Affect in communication
  • Affectively responsive interfaces
  • Computational models for consumer behavior theories
  • Psycho-linguistics, including stylometrics and typography
  • Bridging the gap between consumer psychology and computational linguistics

We especially invite papers investigating multiple related themes, industry papers, and descriptions of running projects and ongoing work. To address the scarcity of standardized baselines, datasets, and evaluation metrics for cross-disciplinary affective content analysis, submissions describing new language resources, evaluation metrics, and standards for affect analysis and understanding are also strongly encouraged.

Shared Task: CL-Aff

There is a growing interest in understanding how humans initiate and hold conversations. The affective understanding of conversations focuses on the problem of how speakers use affect to react to a situation and to each other. We introduce the OffMyChest Conversation dataset, and invite submissions for the Computational Linguistics Affect Understanding (CL-Aff) Shared Task on Affect in Conversations.

Format

This full-day workshop will feature several prominent interdisciplinary invited speakers from the fields of linguistics, psychology, and marketing science to lead the presentation sessions. An afternoon poster session will invite papers deemed better suited to a poster or a demo than to an oral presentation. The workshop will end with a fishbowl-style discussion among the organizers and participants to decide on future directions for the workshop and the research community. Attendance: 60-70 attendees expected.

Submissions

EasyChair Submission URL

Submissions should be made via EasyChair and must follow the formatting guidelines for AAAI-20 (use the AAAI Author Kit). All submissions must be anonymous and conform to AAAI standards for double-blind review. Both full papers (8 pages including references) and short papers (4 pages including references) that adhere to the 2-column AAAI format will be considered for review.

Cochairs

Niyati Chhaya, Primary Contact (Adobe Research, nchhaya@adobe.com); Kokil Jaidka (Nanyang Technological University, kokil.j@gmail.com); Jennifer Healey (Adobe Research, jehealey@adobe.com); Lyle Ungar (University of Pennsylvania, ungar@cis.upenn.edu); Atanu R Sinha (Adobe Research, atr@adobe.com)

Workshop Chair Address

Niyati Chhaya, Adobe Research
Adobe Systems, Prestige Platina Tech Park, Marathahalli-Sarjapur Outer Ring Rd, Kadubeesanahalli, Bengaluru- 560087, India
nchhaya@adobe.com

Additional Information

Workshop URL


W2 — Artificial Intelligence for Cyber Security (AICS)

The workshop will focus on the application of artificial intelligence to problems in cyber security. This year’s AICS emphasis will be on human-machine teaming within the context of cyber security problems, and it will specifically explore collaboration between human operators and AI technologies. The workshop will address applicable areas of AI, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction. Further, cyber security application areas, with a particular emphasis on the characterization and deployment of human-machine teaming, will be the focus. Additional areas with similar challenges and solution spaces (e.g., genomic big data, astronomy, and cyberbiosecurity) may also be discussed.

As cyber security has rapidly matured, data collection has become easier to instrument, implement, and carry out. This has led to a massive increase in the amount of data that must be analyzed to achieve situational awareness, a scale that is beyond human capabilities. Additionally, with concurrent advancements in machine learning, there are algorithms and tools with the impressive ability to automatically analyze and classify massive amounts of data in complex scenarios, but deploying them in specific domains can be challenging. Together, these trends have created an environment of increased reliance on AI-based systems to help humans cope with the scale of cyber security problems.

Because humans must interact with at least parts of these AI systems, many challenges arise. Principal among them are: (1) determining optimal techniques to improve AI performance given targeted, limited human input; (2) understanding the extent to which the interaction between humans and AI introduces an attack surface for adversarial techniques to influence the performance of both the human and computer systems; (3) establishing and quantifying trust between humans and AI systems; (4) providing explainable AI where humans are required to do ‘last mile’ synthesis of information provided by a black box algorithm; and (5) defining the scope in which an AI system can operate autonomously in distinct cyber security domains while maintaining safety. A successful framework for the interaction between humans and AI is extremely important as machine learning-based AI capabilities become incorporated into everyday life. Human-computer interactions will continue to increase. If they are not accurate, robust, trustworthy, explainable, and safe, the systems will be prone to failure even if the underlying algorithms and/or people are individually effective.

For this workshop we consider general challenges 1-5 in the domain of cyber security as a focus application area. Cyber security is difficult to perform because of its high reliance on subject matter expertise to recognize anomalies in cyber data. Because AI systems are not yet well suited for these context-generating tasks in cyber security, there is a human-in-the-loop requirement for most cyber security applications. Cyber security thus provides a unique case study in exploring the relationship between AI systems and humans, because each relies on input from, and must parse output from, the other.

This year we are asking the AI for cyber security community to submit solutions to a challenge problem. The challenge problem (http://aics.site/AICS2020/challenge.html) is focused on a representative cyber security task that generally requires human interaction.

Understanding and addressing challenges associated with systems that involve human-machine teaming requires collaboration between several different research and development communities, including artificial intelligence, cyber security, game theory, machine learning, human factors, and formal reasoning. This workshop is structured to encourage a lively exchange of ideas between researchers in these communities from the academic, public, and commercial sectors.

Topics

Topics of interest include, but are not limited to:

  • Human-machine teaming and human-computer interaction
  • Adversarial machine learning
  • Cyberbiosecurity
  • Machine learning approaches to make cyber systems secure and resilient
  • Formal reasoning in cyber systems, with a focus on the human element
  • Game Theoretic reasoning in cyber security
  • Robust AI metrics
  • Multi-agent interaction/agent-based modeling in cyber systems
  • Modeling and simulation of cyber systems and system components
  • Decision making under uncertainty in cyber systems
  • Automation of data labeling and ML techniques that learn to learn
  • Quantitative human behavior models with application to cyber security
  • Operational and commercial applications of AI

Challenge Problem

For information on this year’s AICS challenge problem: http://aics.site/AICS2020/challenge.html

Format

The workshop will include invited speakers, presentations, and panel and group discussions.

Submissions

Two types of submissions are solicited:

  • Full-length papers (up to 8 pages in AAAI format)
  • Challenge problem papers (up to 8 pages in AAAI format)

Submissions are not anonymized. Please submit a PDF via the AICS Workshop website.

The deadline to submit papers is November 15, 2019.

Publication

Accepted papers will be published in the AICS Workshop proceedings on arXiv after the event.

Cochairs

Dennis M. Ross (MIT Lincoln Laboratory, MA, USA), Diane P. Staheli (MIT Lincoln Laboratory, MA, USA), David R. Martinez (MIT Lincoln Laboratory, MA, USA), William W. Streilein (MIT Lincoln Laboratory, MA, USA), Arunesh Sinha (Singapore Management University, Singapore), Milind Tambe (Harvard University, MA, USA)

Additional Information

Supplemental workshop site: http://aics.site/AICS2020


W3 — Artificial Intelligence for Education

Artificial Intelligence (AI) has dramatically transformed a variety of domains. However, education, a crucial component of our society, remains relatively under-explored. In fact, increasingly digitized education tools and the popularity of massive open online courses have produced an unprecedented amount of data that provides invaluable opportunities for applying AI in education. Recent years have witnessed growing efforts from the AI research community devoted to advancing education, and although this work is still at an early stage, promising results have been achieved on various critical problems. For example, knowledge tracing, an intrinsically difficult problem due to the complexity of the human learning process, has been addressed successfully with powerful deep neural networks that can take full advantage of massive student exercise data. Beyond improving student learning efficiency, similar progress has been made in other areas of education. For instance, researchers have worked to reduce the monotonous and tedious grading workload of teaching professionals by building automatic grading systems underpinned by effective models from natural language processing. Despite these successes, developing and applying AI technologies to education is fraught with unique challenges, including, but not limited to, extreme data sparsity, lack of labeled data, and privacy issues. Therefore, it is timely and necessary to provide a venue that brings together academic researchers and education practitioners (1) to discuss the principles, limitations, and applications of AI for education; and (2) to foster research on innovative algorithms, novel techniques, and new applications of AI to education.

Topics

We encourage submissions on a broad range of AI technologies for various education domains. Topics of interest include but are not limited to the following:

  • Emerging technologies in education
  • Evaluation of education technologies
  • Immersive learning and multimedia applications
  • Implications of big data in education
  • Self-adaptive learning
  • Individual and personalized education
  • Intelligent learning systems
  • Intelligent tutoring and monitoring systems
  • Automatic assessment in education
  • Automated grading of assignments
  • Learning technology for lifelong learning
  • Course development techniques
  • Mining and web mining in education
  • Learning tools: experiences and case studies
  • Lifelong education
  • MOOCs and data analytics
  • Social media in education
  • Smart education
  • Education analytics approaches, methods, and tools
  • Knowledge management for learning
  • Learning analytics and educational data mining
  • Smart classroom
  • Dropout prediction
  • Knowledge tracing
  • Tracking learning activities
  • Uses of multimedia for education
  • Wearable computing technology in e-learning
  • Analysis of communities of learning
  • Computer-aided assessment
  • Automated feedback and recommendations
  • Big data analytics for education

Format

The workshop is a full day. It will include 3-4 keynote speeches, a paper and poster discussion session, and a panel discussion about “ethics in AIED.”

Submissions

We invite the submission of novel research papers (6 pages), demo papers (4 pages), and visionary papers (4 pages), as well as extended abstracts (2 pages). Submissions must be in PDF format, written in English, and formatted according to the AAAI camera-ready style. All papers will be peer reviewed in a single-blind process. Submitted papers will be assessed based on their novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. All papers must be submitted via the EasyChair system. For any questions about the workshop and submissions, please send email to wangzh65@msu.edu.

Submission site: https://easychair.org/conferences/?conf=ai4edu

Organizing Committee

Jiliang Tang (Michigan State University), Zitao Liu (TAL Education Group), Kaitlin Torphy (Michigan State University), Ken Frank (Michigan State University), Zhiwei Wang (Michigan State University)

Additional Information

Supplemental workshop site: http://www.cse.msu.edu/~wangzh65/AI4EDU/index.html


W4 — Artificial Intelligence in Team Sports

Sports is a domain that has grown significantly over the last 20 years to become a key driver of many economies. According to a recent report (https://urlzs.com/tsvbY), the estimated size of the global sports industry is $1.3 trillion, with an audience of over 1 billion. As the market has grown, so has the amount of data that is collected. This means there are a number of challenging prediction and optimisation problems in sports but, so far, such problems have largely been dealt with by domain experts (e.g., coaches, managers, scouts, and sports health experts) using basic analytics.

The growing availability of datasets in sports presents a unique opportunity for the artificial intelligence (AI) and machine learning (ML) communities to develop, validate, and apply new techniques in the real world. In team sports, real-world data is available over long periods of time, about the same individuals and teams, in a variety of environmental contexts, thereby creating a unique live test-bed for AI and ML techniques. While research in AI for team sports has grown over the last 20 years, it is as yet unclear how individual efforts relate to or build upon each other, as they tend to focus either on specific types of team sports or on specific prediction and optimisation problems that are but one part of the whole field. Hence, this workshop will help fuel discussions in the area and bring together the AI and sports analytics communities to encourage new research that will benefit both communities and industry.

Topics

We invite high-quality paper submissions on topics including, but not limited to, the following:

  • Match Outcome Prediction
  • Tactical Decision Making, Player Acquisition and Performance Prediction
  • Fantasy Sports Games
  • Injury Prediction and Prevention

We are keen to see applications of AI techniques such as:

  • Machine Learning and Deep Learning
  • Computer Vision
  • Optimization
  • Multi-Agent Systems
  • Human-Machine Interaction

Format

The workshop will be a full-day event and will include a mix of invited talks and peer-reviewed papers (talks and poster sessions). It will conclude with a panel discussion.

Submissions

Two forms of submissions are invited:

  • Short position papers (2-4 pages) describing initial work or real-world results of applications of AI.
  • Full technical papers (6-8 pages) describing original research. Note that there will be no formal publication of workshop proceedings, and hence we will accept resubmissions from other venues.

Manuscripts must be submitted as PDF files via the EasyChair online submission system.

Please format your paper according to the AAAI Formatting Instructions (two-column format).
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper).

Organizing Committee

Ryan Beal (University of Southampton, UK), Sarvapali Ramchurn (University of Southampton, UK), Georgios Chalkiadakis (Technical University of Crete, Greece), Onn Shehory (Bar Ilan University, Israel), Tim Swartz (Simon Fraser University, Canada)

Additional Information

Supplemental workshop site: https://www.ai-teamsports.com


W5 — Artificial Intelligence of Things (AIoT)

The Internet of Things (IoT) is a disruptive technology that extends data collection to almost everything around us and enables these things to react through intelligent data processing. Gartner estimates that the number of connected things will grow to over 20 billion by 2020. With recent innovations in network and chip technology, devices are becoming smarter, with increasing compute power, bandwidth, and storage available on the device. This enables intelligent decision making and information transfer through devices. Insights derived from data generated by IoT devices power new business scenarios and ensure the long-term success of existing businesses. Major IT solution providers have been investing in building IoT data platforms to help customers develop IoT solutions in industry sectors such as smart cities, manufacturing, health care, and transportation. These business scenarios impose technical challenges and opportunities in building intelligent cloud and edge solutions. This workshop provides a forum for researchers, data scientists, and practitioners from both academia and industry to present the latest research results, share practical experience of building AI-powered IoT solutions, and network with colleagues.

Topics

IoT is an interdisciplinary field that intersects with devices, sensor networks, stream analytics, and machine learning. This workshop focuses on technologies that enable machine learning algorithms to run on resource-constrained, secure, and connected devices. The workshop encourages submissions of innovative technologies and applications that enable IoT scenarios. Topics of interest include, but are not limited to:

  • On device machine learning algorithms
  • Real-time computer vision and speech processing
  • Learning-enabled IoT applications
  • AI for Edge computing
  • AI for IoT security and privacy
  • Low-power AI for IoT systems
  • Distributed inferencing and learning
  • Optimized Blockchain for IoT
  • 5G and IoT

Format

The workshop will be a full-day event with a keynote, invited talks, technical paper presentations, and a project showcase.

Submissions

We solicit original papers in two formats, technical papers (6 pages) and project showcase papers (2 pages), in AAAI format. Submitted papers will be peer-reviewed and selected for presentation. Accepted papers will be published on the workshop’s website.

Submission site: https://cmt3.research.microsoft.com/AIOTW2019/Submission/Index

Organizing Committee

General Chair: Yiran Chen (Duke University, yiran.chen@duke.edu)
Program Chairs: Jian Zhang (Microsoft, jianzha@microsoft.com), Jian Tang (DiDi ChuXing, tangjian@didiglobal.com)
Steering Committee: Jie Liu (Harbin Institute of Technology), Jieping Ye (DiDi Chuxing), Marilyn Claire Wolf (Georgia Tech), Mani Srivastava (UCLA), Michael I. Jordan (UC Berkeley), Victor Bahl (Microsoft), Vijaykrishnan Narayanan (Penn State University)

Additional Information

Supplemental workshop site: https://aiotworkshop.github.io/


W6 — Artificial Intelligence Safety (SafeAI)

Safety in Artificial Intelligence (AI) should not be an option, but a design principle. However, there are varying levels of safety, diverse sets of ethical standards and values, and varying degrees of liability, which can only be addressed by taking into account trade-offs and alternative solutions. A holistic analysis should integrate the technological and ethical perspectives into the engineering problem, considering both the theoretical and practical challenges of AI safety. This new view must cover a wide range of AI paradigms, considering systems that are application-specific, and also those that are more general, providing information about risk. We must bridge the short-term with the long-term perspectives, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe.

This workshop seeks to explore new ideas on AI safety with particular focus on the following questions:

    • What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety and what are the gaps?
    • How can we engineer trustable AI software architectures?
    • How can we make AI-based systems more ethically aligned?
    • What safety engineering considerations are required to develop safe human-machine interaction?
    • What AI safety considerations and experiences are relevant from industry?
    • How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
    • How can we develop solid technical visions and new paradigms about AI Safety?
    • How do metrics of capability and generality, and trade-offs with performance affect safety?

The main interest of this workshop is to look holistically at AI and safety engineering, jointly with the ethical and legal issues, to build trustable intelligent autonomous machines. As part of a “sister” workshop (AISafety 2019), we started the “AI Safety Landscape” initiative. This initiative aims at defining a multi-faceted and integrated “view” of the current needs, challenges, and the state of the art and practice of this field.

Topics

Contributions are sought in (but are not limited to) the following topics:

        • Safety in AI-based system architectures
        • Continuous V&V and predictability of AI safety properties
        • Runtime monitoring and (self-)adaptation of AI safety
        • Accountability, responsibility and liability of AI-based systems
        • Effect of uncertainty in AI safety
        • Avoiding negative side effects in AI-based systems
        • Role and effectiveness of oversight: corrigibility and interruptibility
        • Loss of values and the catastrophic forgetting problem
        • Confidence, self-esteem and the distributional shift problem
        • Safety of Artificial General Intelligence (AGI) systems and the role of generality
        • Reward hacking and training corruption
        • Self-explanation, self-criticism and the transparency problem
        • Human-machine interaction safety
        • Regulating AI-based systems: safety standards and certification
        • Human-in-the-loop and the scalable oversight problem
        • Evaluation platforms for AI safety
        • AI safety education and awareness
        • Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others

Format

To deliver a truly memorable event, we will follow a highly interactive format that will include invited talks and thematic sessions. The thematic sessions will be structured into short pitches and a common panel slot to discuss both individual paper contributions and shared topic issues. Three specific roles are part of this format: session chairs, presenters and paper discussants. The workshop will be organized as a full day meeting. Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Submissions

You are invited to submit:

        • Full technical papers (6-8 pages),
        • Proposals for technical talks (up to a one-page abstract including a short bio of the main speaker),
        • Position papers for general topics (2-4 pages), and
        • Position papers for the AI Safety Landscape (2-4 pages).

Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=SafeAI2020

Please format your paper according to the AAAI Formatting Instructions (two-column format). The AAAI author kit can be downloaded from https://www.aaai.org/Publications/Templates/AuthorKit20.zip

Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process. However, we will also accept anonymized submissions.

Chairs

Huáscar Espinoza (Commissariat à l´Energie Atomique, France), José Hernández-Orallo (Universitat Politècnica de València, Spain), Cynthia Chen (University of Hong Kong, China), Seán Ó hÉigeartaigh (University of Cambridge, UK), Xiaowei Huang (University of Liverpool, UK), Mauricio Castillo-Effen (Lockheed Martin, USA), Richard Mallah (Future of Life Institute, USA), John McDermid (University of York, UK).

Additional Information

Supplemental workshop site: http://safeaiw.org/


W7 — Cloud Intelligence: AI/ML for Efficient and Manageable Cloud Services

Digital transformation is happening in all industries. Running businesses on top of cloud services (e.g., SaaS, PaaS, IaaS) is becoming the core of digital transformation. However, the large scale and high complexity of cloud services bring great challenges to the industry. Artificial intelligence and machine learning will play an important role in efficiently and effectively building and operating cloud services. We envision that, with the advance of AI/ML and other related technologies, the cloud industry will achieve significant progress in the following aspects while sustaining exponential growth:

Highly resilient cloud services. Cloud services will have built-in capabilities for self-monitoring, self-diagnosis, and self-healing, all with minimal human intervention and without compromising service quality or user experience.

Intelligence at users’ fingertips. Users can easily use, maintain, and troubleshoot their workloads, or get efficient support, on top of the underlying cloud service offerings.

Highly efficient and effective DevOps (Development and Operations). Engineers are empowered with intelligent tools to (1) build new service capabilities and smoothly roll them out to users; and (2) efficiently operate production services.

The industry is calling for AIOps solutions but is still at an early stage of realizing this vision. We advocate the urgency of driving and accelerating AI/ML for efficient and manageable cloud services through collaborative efforts in multiple areas, including but not limited to artificial intelligence, machine learning, software engineering, data analytics, and systems.

This workshop provides a forum for researchers and practitioners to present the state of research and practice in AI/ML for efficient and manageable cloud services, and to network with colleagues.

Topics

The workshop aims to create an interdisciplinary forum for researchers and practitioners from the fields mentioned above. It encourages submissions on innovative technologies and applications that leverage AI/ML for efficient and manageable cloud services. Topics of interest include AI/ML-related techniques, methodologies, and experiences for cloud intelligence and DevOps solutions, including:

        • New design, development, and operational patterns
        • Development of cloud services
        • Deployment and integration testing
        • System configuration
        • Service quality monitoring and anomaly detection
        • Resource scheduling and optimization
        • Capacity/workload management and prediction
        • Hardware/software failure prediction
        • Auto-diagnosis and problem localization
        • Incident management
        • Auto service healing
        • Data center management
        • Customer support
        • Security
        • Privacy

Format

The workshop will be a full-day event with a keynote, invited talks, technical paper presentations, and a project showcase.

Submissions

We solicit original papers in two formats, technical papers (6 pages) and project showcase papers (2 pages), in AAAI format. Submitted papers will be peer-reviewed and selected for presentation.

Submission site: https://cmt3.research.microsoft.com/CIEMCS2020/Submission/Index

Organizing Committee

Program Chair: Jian Zhang (Microsoft, jianzha@microsoft.com)
Steering Committee: Ricardo Bianchini (Microsoft Research Redmond), Mike Dahlin (Google), Marcus Fontoura (Microsoft Azure), Ahmed E. Hassan (Queen’s University), Erik Meijer (Facebook), Tao Xie (Peking University), Dongmei Zhang (Microsoft Research Asia), Yuanyuan Zhou (UCSD)

Additional Information

Supplemental workshop site: https://cloudintelligenceworkshop.github.io/


W8 — Deep Learning on Graphs: Methodologies and Applications (DLGMA’20)

Deep learning models are at the core of Artificial Intelligence research today. It is well-known that deep learning techniques that were disruptive for Euclidean data such as images or sequence data such as text are not immediately applicable to graph-structured data. This gap has driven a surge of research on deep learning for graphs, addressing tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.

This wave of research at the intersection of graph theory and deep learning has also influenced other fields of science, including computer vision, natural language processing, inductive logic programming, program synthesis and analysis, automated planning, reinforcement learning, and financial security. Despite these successes, graph neural networks (GNNs) still face many challenges, namely,

        • Modeling highly structured data with time-evolving, multi-relational, and multi-modal nature. Such challenges are profound in applications in social attributed networks, natural language processing, inductive logic programming, and program synthesis and analysis. Joint modeling of text or image content with underlying network structure is a critical topic for these domains.
        • Modeling complex data that involves mapping between graph-based inputs and other highly structured output data such as sequences, trees, and relational data with missing values. Natural Language Generation tasks such as SQL-to-Text and Text-to-AMR are emblematic of such challenges.

This one-day workshop aims to bring together academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop will consist of contributed talks, contributed posters, and invited talks on a wide variety of methods and applications. Work-in-progress papers, demos, and visionary papers are also welcome. This workshop intends to share visions of investigating new approaches and methods at the intersection of Graph Neural Networks and real-world applications.

Topics of Interest (including but not limited to)

We invite submission of papers describing innovative research and applications around the following topics. Papers that introduce new theoretical concepts or methods, help to develop a better understanding of new emerging concepts through extensive experiments, or demonstrate a novel application of these methods to a domain are encouraged.

        • Graph neural networks on node-level, graph-level embedding
        • Graph neural networks on graph matching
        • Dynamic/incremental graph embedding on heterogeneous networks, knowledge graphs
        • Deep generative models for graph generation/semantic-preserving transformation
        • Graph2seq, graph2tree, and graph2graph models
        • Deep reinforcement learning on graphs
        • Adversarial machine learning on graphs
        • Spatial and temporal graph prediction and generation

And with particular focuses but not limited to these application domains:

        • Learning and reasoning (machine reasoning, theory proving)
        • Natural language processing (information extraction, semantic parsing, text generation)
        • Bioinformatics (drug discovery, protein generation, protein structure prediction)
        • Program synthesis and analysis
        • Reinforcement learning (multi-agent learning, compositional imitation learning)
        • Financial security (anti-money laundering)
        • Cybersecurity (authentication graph, Internet of Things, malware propagation)
        • Geographical network modeling and prediction (transportation and mobility networks, social networks)

Submissions

Submissions are limited to a total of 4 pages for initial submission (up to 5 pages for final camera-ready submission), excluding references or supplementary materials, and authors should only rely on the supplementary material to include minor details that do not fit in the 4 pages. All submissions must be in PDF format and formatted according to the AAAI style file. Special issues in flagship academic journals are under consideration to host the extended versions of best/selected papers in the workshop.

Submission link: https://easychair.org/conferences/?conf=dlgma2020

Camera-ready deadline for final accepted papers: December 20, 2019

Cochairs

Lingfei Wu (IBM Research AI), Jian Tang (Mila-Quebec AI Institute), Yinglong Xia (Facebook AI), Charu Aggarwal (IBM Research AI)

The full workshop committee is listed at the supplemental workshop site.

Additional Information

Please contact Lingfei Wu at lwu@email.wm.edu for more information.


W9 — The Eighth Dialog System Technology Challenge (DSTC8)

DSTC, the Dialog System Technology Challenge, has been a premier research competition for dialog systems since its inception in 2013. Given the remarkable success of the first seven challenges, we are organizing the eighth edition of DSTC this year and we will have a wrap-up workshop at AAAI-20.

Topics

The main goal of this workshop is to share the results of the following four main tracks of DSTC8:

        • Multi-Domain Task-Completion Dialog Challenge
        • NOESIS II: Predicting Responses, Identifying Success, and Managing Complexity in Task-Oriented Dialogue
        • Audio Visual Scene-Aware Dialog
        • Schema-Guided Dialogue State Tracking

Format

The one-day workshop will include welcome remarks, track overviews, invited talks, oral presentations, poster sessions, and discussions about future DSTCs.

Submissions

We invite all teams that participated in DSTC8 to submit their work to this workshop. In addition, any other general technical paper on dialog technologies is also welcome. Submissions must follow the formatting guidelines for AAAI-20 (use the AAAI Author Kit). All submissions must be anonymous and conform to AAAI standards for double-blind review. Papers that adhere to the 2-column AAAI format, up to 8 pages long with page 8 containing nothing but references, will be considered for review.

Submission link: https://sites.google.com/dstc.community/dstc8/paper-submission

Organizing Committee

Workshop Chair: Michel Galley (Microsoft Research AI)
Challenge Chair: Seokhwan Kim (Amazon Alexa AI)
Publication Chair: Chulaka Gunasekara (IBM Research AI)
Publicity Chair: Sungjin Lee (Amazon Alexa AI)

For a full listing of the track organizers and the steering committee members, please refer to the supplemental workshop website.

Additional Information

Supplemental workshop site: https://sites.google.com/dstc.community/dstc8/home


W10 — Engineering Dependable and Secure Machine Learning Systems

Nowadays, machine learning (ML) solutions are widely deployed. Like other systems, ML systems must meet quality requirements. However, ML systems may be non-deterministic; they may re-use high-quality implementations of ML algorithms; and the semantics of the models they produce may be incomprehensible. Consequently, standard notions of software quality and reliability such as deterministic functional correctness, black-box testing, code coverage, and traditional software debugging become practically irrelevant for ML systems. This calls for novel methods, methodologies, and tools to address the quality and reliability challenges of ML systems.

In addition, broad deployment of ML software in networked systems inevitably exposes the ML software to attacks. While classical security vulnerabilities are relevant, ML techniques have additional weaknesses, some already known (e.g., sensitivity to training data manipulation), and some yet to be discovered. Hence, there is a need for research as well as practical solutions to ML security problems.

With these in mind, this workshop solicits original contributions addressing problems and solutions related to dependability, quality assurance and security of ML systems. The workshop combines several disciplines, including ML, software engineering (with emphasis on quality), security, and algorithmic game theory. It further combines academia and industry in a quest for well-founded practical solutions.

Topics of interest include, but are not limited to:

        • Software engineering aspects of ML systems and quality implications
        • Testing of the quality of ML systems over time
        • Debugging of ML systems
        • Quality implication of ML algorithms on large-scale software systems
        • Case studies of successful and unsuccessful applications of ML techniques
        • Correctness of data abstraction, data trust
        • Choice of ML techniques to meet security and quality
        • Size of the training data and implied guarantees
        • Application of classical statistics to ML systems quality
        • Sensitivity to data distribution diversity and distribution drift
        • The effect of labeling costs on solution quality (semi-supervised learning)
        • Reliable transfer learning
        • Vulnerability, sensitivity and attacks against ML
        • Adversarial ML and adversary-based learning models
        • Strategy-proof ML algorithms

Submissions

We solicit original papers in two formats, full (8 pages) and short (4 pages, work in progress), in AAAI format. Submission will be via EasyChair (a URL will be provided soon). All authors of accepted papers will be invited to participate. The workshop will include paper presentation sessions. Full papers are allocated a 20-minute presentation plus a 10-minute discussion; short papers a 10-minute presentation plus a 5-minute discussion.

Organizing Committee

Eitan Farchi (DE, Software Testing Analysis and Reviews, IBM Research, Haifa, farchi@il.ibm.com), Onn Shehory (Intelligent Information Systems, Graduate School of Business Administration, Bar Ilan University, onn.shehory@biu.ac.il), Guy Barash (Machine Learning and Algorithm Development, Western Digital Corporation, Israel, Guy.Barash@wdc.com)

Additional Information

Supplemental workshop site: https://sites.google.com/view/edsmls2020/home


W11 — Evaluating Evaluation of AI Systems

The last decade has seen massive progress in AI research, powered by crowdsourced datasets and benchmarks such as ImageNet, Freebase, and SQuAD, as well as widespread adoption and increasing use of AI in deployed systems. A crucial ingredient is the role of crowdsourcing in operationalizing empirical ways of evaluating, comparing, and assessing this progress.

The focus of this workshop is not on evaluating AI systems, but on evaluating the quality of evaluations of AI systems. When these evaluations rely on crowdsourced datasets or methodologies, we are interested in the meta-questions around characterization of those methodologies. Some of the expected activities in the workshop include:

  • Asking the question of “what makes evaluations good?”
  • Defining “what good looks like” in evaluations of different types of AI systems (image recognition, recommender systems, search, voice assistants, etc.)
  • Collecting, examining, and sharing current evaluation efforts, whether comprehensive evaluations of one system or competitive evaluations of multiple systems, with the goal of critically evaluating the evaluations themselves
  • Developing an open repository of existing evaluations, with methodology fully documented and raw data and outcomes available for public scrutiny

Crowdsourced datasets for evaluating AI systems’ success at tasks such as image labeling and question answering have proven to be powerful enablers for research. However, the adoption of such datasets is typically driven by the mere existence and size of a dataset, without proper scrutiny of its scope, quality, and limitations. While crowdsourcing has enabled a burst of published work on specific problems, determining whether that work has resulted in real progress requires a deeper understanding of how a dataset supports the scientific or performance claims of the AI systems it is used to evaluate. This workshop will provide a forum for growing our collective understanding of what makes a dataset good, the value of improved datasets and collection methods, and how to inform the decisions of when to invest in more robust data acquisition.

Topics

We invite scientific contributions and position papers on the following topics:

  • META-EVALUATION: Quality of evaluation approaches, datasets / benchmarks
    • Characteristics of a ‘good’ dataset / benchmark?
    • Shortcomings of existing evaluation approaches, datasets / benchmarks?
    • Building new / improving existing metrics
    • Measuring trustworthiness, interpretability and fairness of crowdsourced benchmark datasets
    • Measuring added value of improvements to previous versions of benchmark datasets
    • Comparative evaluations between mainstream AI systems, e.g. recommenders, voice assistants, etc.
    • Measuring quality of guidelines for content moderation, search evaluation, etc.
    • Comparison of results between offline (e.g. crowdsourced) and online (e.g. A/B testing) evaluations?
    • Open questions and challenges in meta-evaluation?
  • TRANSPARENCY: Making quality and characteristics of (crowdsourced) benchmark datasets transparent and explainable
    • Reproducibility of crowdsourced datasets
    • Replicability of crowdsourced evaluations of AI systems
    • Explainability of crowdsourced evaluations to different stakeholders, e.g. users, scientists, developers
  • RESOURCE BUILDING: Making existing evaluation methodologies, raw data and outcomes, discoverable, fully documented and available for public scrutiny
    • How do we make evaluations and related datasets archival and discoverable?
    • What can we learn from other systematic evaluation efforts and communities such as TREC, SIGIR, etc.?

Submissions

Submission information will be available shortly.

  • Submission Deadline: November 22, 2019
  • Notification of acceptance: December 11, 2019
  • Final camera-ready papers due: December 18, 2019

Organizing Committee

Praveen Paritosh, Kurt Bollacker, Maria Stone, Lora Aroyo
rigorous-evaluation@googlegroups.com

Additional Information

Supplemental workshop site: http://eval.how/aaai-2020/


W12 — Generalization in Planning

Humans are good at solving sequential decision-making problems, generalizing from few examples, and transferring this knowledge to solve new, unseen problems. These capabilities remain longstanding open problems for Artificial Intelligence (AI). In the last decade, the planning community has improved the performance of automated planning systems for decision-making problems by incorporating novel search techniques and heuristics. Meanwhile, the learning community has made major breakthroughs in reinforcement learning techniques for solving planning problems. However, industry-level scalability and skill/task generalization still remain open challenges for current AI tools.

This workshop will feature a mix of invited talks, survey talks in a highlights format, as well as presentations of submitted papers. We aim to synthesize and highlight recent research on the topic from multiple sub-fields of AI, including those of reinforcement learning, classical planning, planning under uncertainty, as well as learning for planning. At the end of the workshop we expect to come up with new insights and topics to address the challenges of generalization in planning.

Topics

Topics of interest to this workshop bring together research being conducted in a range of areas, including classical planning, knowledge engineering, partial policies and reinforcement learning, plan verification, and model checking. Potential topics include but are not limited to:

        • Learning and deriving generalized plans
        • Learning generalizable policies with reinforcement learning
        • Transfer learning of generalizable policies
        • Representation of generalizable solutions
        • Deriving domain control knowledge and partial policies with planning and learning
        • Program synthesis
        • Heuristics for plan and policy generalization
        • Generation and detection of good examples for planning and learning
        • Generalized planning for problems with partial observability and/or noise
        • Learning models for generalizable planning
        • Model checking for generalization guarantees

Format

The workshop will feature multiple invited plenary and highlight talks as well as presentations of submitted technical and position papers. It will also include discussion sessions tuned to the topics presented at the workshop.

Submissions

Papers should be submitted via EasyChair at https://easychair.org/conferences/?conf=genplan20.

Submission deadline: November 1, 2019
Notification: December 4, 2019

Organizing Committee

Javier Segovia-Aguas (Institut de Robòtica i Informàtica Industrial (IRI), Spain, jsegovia@iri.upc.edu), Siddharth Srivastava (Arizona State University, USA, siddharths@asu.edu), Raquel Fuentetaja (Universidad Carlos III de Madrid, Spain, rfuentet@inf.uc3m.es), Aviv Tamar (Israel Institute of Technology, Israel, aviv.tamar.mail@gmail.com), Anders Jonsson (Universitat Pompeu Fabra, Spain, anders.jonsson@upf.edu)

Additional Information

Supplemental workshop site: https://sites.google.com/view/genplan20/


W13 — Health Intelligence

Public health authorities and researchers collect data from many sources and analyze these data together to estimate the incidence and prevalence of different health conditions, as well as related risk factors. Modern surveillance systems employ tools and techniques from artificial intelligence and machine learning to monitor direct and indirect signals and indicators of disease activity for early, automatic detection of emerging outbreaks and other health-relevant patterns. To provide proper alerts and timely response, public health officials and researchers systematically gather news and other reports about suspected disease outbreaks, bioterrorism, and other events of potential international public health concern, from a wide range of formal and informal sources. Given the ever-increasing role of the World Wide Web as a source of information in many domains including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. This is especially the case for non-traditional online resources such as social networks, blogs, news feeds, Twitter posts, and online communities, given the sheer size and ever-increasing growth and change rate of their data. Web applications along with text processing programs are increasingly being used to harness online data and information to discover meaningful patterns identifying emerging health threats. The advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.

Moreover, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All of these changes require novel solutions and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks. The goal of this workshop is to focus on creating and refining AI-based approaches that (1) process personalized data, (2) help patients (and families) participate in the care process, (3) improve patient participation, (4) help physicians utilize this participation in order to provide high quality and efficient personalized care, and (5) connect patients with information beyond that available within their care setting. The extraction, representation, and sharing of health data, patient preference elicitation, personalization of “generic” therapy plans, adaptation to care environments and available health expertise, and making medical information accessible to patients are some of the relevant problems in need of AI-based solutions.

Topics

The workshop will include original contributions on theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications, with a focus on applications in population and personalized health. The scope of the workshop includes, but is not limited to, the following areas:

        • Knowledge Representation and Extraction
        • Integrated Health Information Systems
        • Patient Education
        • Patient-Focused Workflows
        • Shared Decision Making
        • Geographical Mapping and Visual Analytics for Health Data
        • Social Media Analytics
        • Epidemic Intelligence
        • Predictive Modeling and Decision Support
        • Semantic Web and Web Services
        • Biomedical Ontologies, Terminologies, and Standards
        • Bayesian Networks and Reasoning under Uncertainty
        • Temporal and Spatial Representation and Reasoning
        • Case-based Reasoning in Healthcare
        • Crowdsourcing and Collective Intelligence
        • Risk Assessment, Trust, Ethics, Privacy, and Security
        • Sentiment Analysis and Opinion Mining
        • Computational Behavioral/Cognitive Modeling
        • Health Intervention Design, Modeling and Evaluation
        • Online Health Education and E-learning
        • Mobile Web Interfaces and Applications
        • Applications in Epidemiology and Surveillance (e.g., Bioterrorism, Participatory Surveillance, Syndromic Surveillance, Population Screening)
        • Explainable AI (XAI) in Health and Medical domain
        • Precision Medicine and Health

Format

The workshop will consist of a welcome session, a keynote talk, full/short paper presentations, demos, and posters.

Submissions

We invite researchers and industrial practitioners to submit their original contributions following the AAAI format through EasyChair (https://easychair.org/conferences/?conf=w3phiai20). Three categories of contributions are sought: full-research papers up to 8 pages; short papers up to 4 pages; and posters and demos up to 2 pages.

Organizing Committee

Martin Michalowski, Cochair (University of Minnesota – Twin Cities, martinm@umn.edu); Arash Shaban-Nejad, Cochair (The University of Tennessee Health Science Center – Oak Ridge National Laboratory (UTHSC-ORNL) Center for Biomedical Informatics, ashabann@uthsc.edu); Szymon Wilk (Poznan University of Technology); David L. Buckeridge (McGill University); John S. Brownstein (Boston Children’s Hospital, Harvard University); Byron C. Wallace (Northeastern University); Michael J. Paul (The University of Colorado Boulder)

Additional Information

Supplemental workshop site: http://w3phiai2020.w3phi.com/


W14 — Intelligent Process Automation

How can we free people from the mundane and repetitive parts of their daily workload? Robotic Process Automation (RPA) addresses this problem by developing software agents (robots) that can mimic human users to perform a variety of business tasks on their computers.

Current RPA systems are mostly rule-based. Artificial Intelligence (AI) promises to take RPA to new heights, but so far the AI research efforts related to the different aspects of RPA have been largely isolated. This AAAI-2020 workshop aims to bridge the gap between the rapidly growing RPA software industry and the AI research community.

Topics

Technical topics include, but are not limited to:

        • demo2process (learning a task-completion software agent from human demonstrations or behaviour logs): interactive task learning, imitation learning, program induction, programming by example, process mining, …
        • text2process (learning a task-completion software agent from step-by-step natural language text descriptions of the process): learning by instruction, conversational machine learning, natural language programming, natural language grounding, …
        • task2process (learning a task-completion software agent directly from the task as defined by an environment with its reward function or some input/output examples): reinforcement learning, neural program synthesis, Bayesian program learning, …

The common theme is that the learning system’s output is not simply a class label or numerical prediction but a structured, executable process, which makes these problems more challenging than most of today’s machine learning research problems. In addition, such automated processes should be safe, robust, and explainable.
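
As a minimal illustration of this distinction (a purely hypothetical sketch; the Step and Process classes and the toy invoice actions below are not drawn from the workshop or any RPA system), the following Python fragment shows how a learned artifact could itself be a structured object that is executed step by step, rather than a single predicted label or number:

# Hypothetical sketch: a learned "process" as a structured, executable object.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Step:
    name: str
    action: Callable[[Dict[str, Any]], None]  # each step mutates a shared state

@dataclass
class Process:
    steps: List[Step]

    def execute(self, state: Dict[str, Any]) -> Dict[str, Any]:
        # Executing the artifact step by step is what distinguishes it
        # from a flat classification or regression output.
        for step in self.steps:
            step.action(state)
        return state

# A toy invoice-handling process that a demo2process/text2process/task2process
# learner might output; in practice the steps would be induced from
# demonstrations, instructions, or rewards rather than written by hand.
process = Process(steps=[
    Step("open_invoice", lambda s: s.update(invoice="INV-001")),
    Step("extract_total", lambda s: s.update(total=42.0)),
    Step("enter_total", lambda s: s.update(entered=s["total"])),
])
print(process.execute({}))  # {'invoice': 'INV-001', 'total': 42.0, 'entered': 42.0}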

Submissions

This workshop also encourages submissions on the following interdisciplinary topics:

        • human-in-the-loop: the interaction between human users and software robots.
        • human-outside-the-loop: the social and organizational impacts of software robots.

We welcome submissions of long (6-8 pages) or short (2-4 pages) papers describing new, previously unpublished research in this field. All submissions should be made electronically through EasyChair. Accepted papers will be published on arXiv and included in non-archival workshop proceedings. Blue Prism will sponsor a Best Paper Award of $1000.

Format

This workshop will last one full day. It will include invited talks, presentations of contributed papers, and a panel discussion. A complimentary lunch will be provided for workshop participants at Blue Prism’s New York office.

Organizing Committee

Dell Zhang, Cochair (Birkbeck, University of London, Malet Street, London WC1E 7HX, UK, dell.z@ieee.org), Andre Freitas, Cochair (University of Manchester, Kilburn Building, Oxford Road, Manchester M13 9PL, UK, andre.freitas@manchester.ac.uk), Dacheng Tao (University of Sydney, dacheng.tao@sydney.edu.au), Dawn Song (UC Berkeley, dawnsong@cs.berkeley.edu)

Additional Information

Supplemental workshop site: https://www.blueprism.com/events/AAAI-20-Workshop-on-Intelligent-Process-Automation


W15 — Interactive and Conversational Recommendation Systems (WICRS)

This workshop is positioned as a forum to present and discuss novel research directions in interactive and conversational recommender systems as well as constituent AI technologies that represent the next generation of recommender systems and personalized, conversational assistants.

Historical work on recommender systems focused on interactive and conversational aspects, as evidenced by the large literature on critiquing-based interaction dating back to the 1990s. With the dawn of the Netflix Challenge, a sizable fraction of recommendation research shifted away from interaction and conversation and focused more on formal machine learning aspects of training objectives and optimization on offline data. However, recent years have seen an increase in work on interactive, sequential (e.g., session-based) interactions with recommender systems. Furthermore, the rise of conversational AI-based assistants in the form of Apple’s Siri, Amazon’s Alexa, and the Google Assistant has re-invigorated interest in dialog-based sequential interaction, albeit with a limited degree of personalization.

Encouraged by this recent interest in interactive and conversational recommender systems, the workshop aspires to bring together AI researchers from recommender systems, machine and reinforcement learning, dialog systems, natural language processing, human-computer interaction, psychology, and econometrics for a day of research presentations and open discussion about the future of this high-impact and highly cross-disciplinary research area.

Topics

Topics include (but are not limited to):

        • Goal-directed and Personalized Conversational AI
        • Critiquing-based Recommendation Systems
        • Reinforcement Learning in Multi-turn Interactions
        • User Privacy and Security
        • Multimodal Context and Situation-aware Modeling
        • Preference Elicitation and Preference Construction
        • Recommendation with Complex Preferences
        • Explanations and Endorsements in Recommendation
        • Human-Computer Interaction in Recommendation
        • Grounding Dialog in Preferences and Constraints
        • Natural Language Expression of Preferences
        • Expressing Preferences over Latent Embeddings
        • Simulation Environments and Benchmark Datasets
        • Evaluation and Metrics
        • User Choice Modeling
        • Theory of Mind and Mental Model Representations

Submissions

We welcome previously unsubmitted work, papers submitted to the main AAAI conference, and papers reporting research already published, provided they align well with the workshop topic.

Three types of submissions are solicited:

        • Full-length papers (up to 7 pages + 1 page for references in AAAI format)
        • Challenge or position papers (2 pages + 1 page for references in AAAI format)
        • Already published papers (1 page: an abstract in AAAI format with a link to the full paper)

Paper Submissions should be made through the workshop EasyChair web site: https://easychair.org/conferences/?conf=wicrs20

Organizing Committee

Scott Sanner (University of Toronto), Tyler Lu (Google Research), Joyce Chai (University of Michigan), Deepak Ramachandran (Google Research)
Email: wicrs20@easychair.org

Additional Information

Supplemental workshop site: https://sites.google.com/view/wicrs2020


W16 — Knowledge Extraction from Unstructured Data in Financial Services

Knowledge discovery from unstructured data has gained the attention of many practitioners over the past decades. Although much AI research has focused on data sources such as news, the web, and social media, applying these methods to data in professional settings, such as legal documents and financial filings, still presents huge challenges.

In the financial services industry in particular, a vast amount of analysis work requires knowledge discovery from various data sources, such as SEC filings, loan documents, and industry reports. Manual knowledge discovery and extraction is usually inefficient, error-prone, and inconsistent, and it is one of the key bottlenecks for financial services companies in improving their operating productivity. Furthermore, alternative data such as social media feeds and news are gaining traction as promising knowledge sources for financial institutions, as they provide additional perspectives for investment decisions. However, the valuable knowledge is always commingled with immense noise, and the precision and recall requirements for extracted knowledge to be used in business processes are stringent.

These challenges call for robust artificial intelligence (AI) algorithms and systems. Designing and implementing such AI techniques to meet the needs of financial business operations requires a joint effort between academic researchers and industry practitioners.

Topics

We invite submissions of original contributions on methods, applications, and systems on artificial intelligence, machine learning, and data analytics, with a focus on knowledge discovery and extraction in the financial services domain. The scope of the workshop includes, but is not limited to, the following areas:

        • Knowledge representation
        • Natural language processing and understanding for financial documents
        • Search and question answering systems designed for financial corpora
        • Named-entity recognition, disambiguation, relationship discovery, ontology learning and extraction from financial documents
        • Knowledge alignment and integration from heterogeneous data
        • AI assisted data tagging and labeling
        • Data acquisition, augmentation, feature engineering, and analysis for investment and risk management
        • Automatic knowledge extraction from financial filings and quality verification
        • AI systems for relationship extraction and risk assessment from legal documents
        • Event discovery from alternative data and impact on organization equity price

We also encourage submissions of studies or applications pertinent to finance that use other types of unstructured data, such as financial transactions, sensors, mobile devices, satellites, and social media.

Format

KDF is a one-day workshop. The program of the workshop will include invited speakers, paper presentations, and poster sessions.

We cordially welcome researchers, practitioners, and students from academic and industrial communities who are interested in the topics to participate; at least one author of each accepted submission must be present at the workshop.

Submissions

We invite submissions of relevant work that may be of interest to the workshop. All submissions must be original contributions, following the AAAI-20 formatting guidelines. We accept two types of submissions: full research papers of no more than 8 pages and short/poster papers of 2-4 pages.

Submissions will be accepted via the EasyChair submission website: https://easychair.org/conferences/?conf=kdf2020

Organizing Committee

Xiaomo Liu (S&P Global, xiaomo.liu@spglobal.com), Sameena Shah (S&P Global, sameena.shah@spglobal.com), Manuela M. Veloso (J.P. Morgan Chase and Carnegie Mellon University, manuela.veloso@jpmchase.com), Quanzhi Li (Alibaba Group, quanzhi.li@alibaba-inc.com), Le Song (Ant Financial and Georgia Institute of Technology, e.song@antfin.com)

Additional Information

Supplemental workshop site: https://aaai-kdf2020.github.io/


W17 — Plan, Activity, and Intent Recognition (PAIR)

Plan recognition, activity recognition, and intent recognition all involve making inferences about other actors from observations of their behavior, i.e., their interaction with the environment and with each other. The observed actors may be software agents, robots, or humans. This synergistic area of research combines and unifies techniques from user modeling, machine vision, intelligent user interfaces, human/computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. It plays a crucial role in a wide variety of applications including:

        • Assistive technology
        • Software assistants
        • Computer and network security
        • Behavior recognition
        • Coordination in robots and software agents
        • E-commerce and collaborative filtering

This widespread diversity of applications and disciplines, while producing a wealth of ideas and results, has contributed to fragmentation in the field, as researchers publish relevant results in a wide spectrum of journals and conferences. As there is no commonly accepted conference for this work, the workshop provides a valuable venue to discuss, standardize, and improve upon past work in this subfield.

This workshop seeks to bring together researchers and practitioners from diverse backgrounds to share ideas and recent results. It aims to identify important research directions and opportunities for synthesis and unification of representations and algorithms for recognition. Contributions of research results are sought in the following areas:

        • Plan, activity, intent, or behavior recognition
        • Adversarial planning, opponent modeling
        • Modeling multiple agents, modeling teams
        • User modeling on the web and in intelligent user interfaces
        • Acquaintance models
        • Plan recognition and user modeling in marketplaces and e-commerce
        • Intelligent tutoring systems (ITS)
        • Machine learning for plan recognition and user modeling
        • Personal software assistants
        • Social network learning and analysis
        • Monitoring agent conversations (overhearing)
        • Observation-based coordination and collaboration (teamwork)
        • Multi-agent plan recognition
        • Observation-based failure detection
        • Monitoring multi-agent interactions
        • Uncertainty reasoning for plan recognition
        • Commercial applications of user modeling and plan recognition
        • Representations for agent modeling
        • Modeling social interactions
        • Inferring emotional states
        • Reverse engineering and program recognition
        • Programming by demonstration
        • Imitation

Due to the diversity of disciplines engaging in this area, related contributions from other fields are also welcome.

This year’s workshop will center on the relationship between data-driven and model-based approaches to recognition and the need to bridge the gap between the two. We hope this workshop will provide opportunities and incentives for future work. Specifically, we intend to extend our community by reaching out to recognition researchers from the machine learning community and encouraging them to submit their work to the workshop.

Submissions

All submissions must be original. If a work is also under submission to the main conference or to a different conference, this should be noted in the title. Papers must be in trouble-free, high-resolution PDF format, formatted for US Letter (8.5″ x 11″) paper, using Type 1 or TrueType fonts. Submissions are anonymous and must conform to the AAAI-20 instructions for double-blind review.

CFP website (EasyChair): https://easychair.org/cfp/PAIR2020
Submission website (EasyChair): https://easychair.org/conferences/?conf=pairaaai2020

Notification: December 10, 2019

Full Papers
We accept full paper submissions. Papers must be formatted in AAAI two-column, camera-ready style; see the 2020 author kit for details: http://www.aaai.org/Publications/Templates/AuthorKit20.zip
Submissions may have up to 8 pages with page 8 containing nothing but references. The last page of final papers may contain text other than references, but all references in the submitted paper should appear in the final version, unless superseded.

Demo Track
This year the PAIR workshop will include a demo track. Authors are required to submit two items: (1) a 2-page short paper describing their system, formatted in AAAI two-column style, and (2) a video (of duration up to 10 minutes) of the proposed demonstration. Slides are also permitted in lieu of video, but greater weight will be given to submissions accompanied by videos. The paper must present the technical details of the demonstration, discuss related work, and describe the significance of the demonstration. The demo track will be chaired by Dr. Mor Vered; questions regarding demos should be referred to mor.vered@unimelb.edu.au.

Posters
Authors whose papers are accepted to the workshop will be able to present their work in a joint poster session and panel.

Cochairs

Sarah Keren (primary contact)
Harvard University,
School of Engineering and Applied Sciences
Cambridge, MA
Email: sarah.e.keren@gmail.com or skeren@seas.harvard.edu

All questions about submissions should be emailed to Sarah Keren at sarah.e.keren@gmail.com.

Reuth Mirsky
University of Texas
Department of Computer Science
Austin, TX
Email: reuth@cs.utexas.edu

Christopher Geib
SIFT LLC
319 1st Ave. North, Suite 400
Minneapolis MN 55401-1689
Email: cgeib@sift.net

Additional Information
Supplemental workshop site: http://www.planrec.org/PAIR/Resources.html


W18 — Privacy-Preserving Artificial Intelligence

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. Indeed, much scientific and technological growth in recent years, including in computer vision, natural language processing, transportation, and health, has been driven by large-scale data sets which provide a strong basis to improve existing algorithms and develop new ones. However, the large-scale and longitudinal collection and archiving of these data sets raise significant privacy concerns. They often reveal sensitive personal information that can be exploited, without the knowledge and/or consent of the individuals involved, for various purposes including monitoring, discrimination, and illegal activities.

The goal of the AAAI-20 Workshop on Privacy-Preserving Artificial Intelligence is to provide a platform for researchers to discuss problems and present solutions related to privacy issues arising within AI applications. The workshop will focus on both theoretical and practical challenges arising in the design of privacy-preserving AI systems and algorithms. It will place particular emphasis on algorithmic approaches to protect data privacy in the context of learning, optimization, and decision making that raise fundamental challenges for existing technologies. Additionally, it will welcome algorithms and frameworks to release privacy-preserving benchmarks and datasets.

Topics

We invite paper submissions on the following (and related) topics:

        • Applications of privacy-preserving AI systems
        • Architectures and privacy-preserving learning protocols
        • Constraint-based approaches to privacy
        • Differential privacy: theory and applications
        • Distributed privacy-preserving algorithms
        • Human-aware private algorithms
        • Incentive mechanisms and game theory
        • Privacy-preserving machine learning
        • Privacy-preserving algorithms for medical applications
        • Privacy-preserving algorithms for temporal data
        • Privacy-preserving test cases and benchmarks
        • Privacy and policy-making
        • Secure multi-party computation
        • Secret sharing techniques
        • Trade-offs between privacy and utility

Position, perspective, and vision papers are also welcome. Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and datasets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.

Format

The workshop will be a full day and will include a mix of invited speakers and peer-reviewed papers (talks and poster sessions), concluding with a panel discussion. Attendance is open to all. At least one author of each accepted submission must be present at the workshop.

Submissions

Submissions of technical papers can be up to 7 pages excluding references and appendices. Short or position papers of up to 4 pages are also welcome. All papers must be submitted in PDF format, using the AAAI-20 author kit. Papers will be peer-reviewed and selected for oral and/or poster presentation at the workshop.

Submission Site: https://easychair.org/conferences/?conf=ppai20

Cochairs

Ferdinando Fioretto (Georgia Institute of Technology, fioretto@gatech.edu, http://nandofioretto.com), Pascal Van Hentenryck (Georgia Institute of Technology, pascal.vanhentenryck@isye.gatech.edu, http://pwp.gatech.edu/pascal-van-hentenryck/), Rachel Cummings (Georgia Institute of Technology, rachelc@gatech.edu, https://pwp.gatech.edu/rachel-cummings/)

Additional Information

Supplemental workshop site: https://www2.isye.gatech.edu/~fferdinando3/cfp/PPAI20


W19 — Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL20)

Natural conversation is a hallmark of intelligent systems. Unsurprisingly, dialog systems have been a key sub-area of AI for decades. Their most recent form, chatbots, which can engage people in natural conversation and are easy to build in software, have been in the news a lot lately. There are many platforms to create dialogs quickly for any domain based on simple rules. Further, there is a rush by companies to release chatbots to show their AI capabilities and gain market valuation. However, beyond basic demonstrations, there is little experience in how chatbots can be designed and used for real-world applications that require decision making under practical constraints of resources and time (e.g., sequential decision making) while being fair to the people they interact with.

The workshop is the third edition of the Workshop on Reasoning and Learning for Human-Machine Dialogues. Both the first edition, DEEP-DIAL18 (http://www.zensar.com/deep-dial18), held at AAAI-18 in New Orleans, USA in February 2018, and the second edition, DEEP-DIAL19 (https://sites.google.com/view/deep-dial-2019/), held at AAAI-19 in Honolulu, Hawaii, USA in January 2019, were huge successes, attracting 100+ AI researchers to discuss a variety of topics.

DEEP-DIAL20 will feature presentations of reviewed papers, invited talks, panels, and open contributions of datasets and chatbots. The workshop is partially funded by a grant from the AI Journal.

There is ever-increasing interest in and need for innovation in human-technology interaction, as addressed in the context of companion technology. Here, the aim is to implement technical systems that smartly adapt their functionality to their users’ individual needs and requirements and are even able to solve problems in close cooperation with human users. To this end, they need to enter into a dialog and should be able to convincingly explain their suggestions and their decision-making behavior.

From the research side, statistical and machine learning methods are well entrenched for language understanding and entity detection. However, the wider problem of dialog management is largely unaddressed, with mainstream tools supporting only rudimentary rule-based processing. There is an urgent need to highlight the crucial role that reasoning methods, such as constraint satisfaction, planning, and scheduling, working together with learning, can play in building end-to-end conversation systems that evolve over time. From the practical side, conversation systems need to be designed to work with people in a manner that lets them explain their reasoning, convince humans about choices among alternatives, and stand up to the ethical standards demanded in real-life settings.

Topics

With these motivations, some areas of interest for the workshop, but not limited to, are:

        • Dialog Systems
          • Design considerations for dialog systems
          • Evaluation of dialog systems, metrics
          • Open domain dialog and chat systems
          • Task-oriented dialogs
          • Style, voice and personality in spoken dialogue and written text
          • Novel Methods for NL Generation for dialogs
          • Early experiences with implemented dialog systems
          • Mixed-initiative dialogs where a partner is a combination of agent and human
          • Hybrid methods
        • Reasoning
          • Domain model acquisition, especially from unstructured text
          • Plan recognition in natural conversation
          • Planning and reasoning in the context of dialog systems
          • Handling uncertainty of conversation and data
          • Optimal dialog strategies
        • Learning
          • Learning to reason
          • Learning for dialog management
          • End2end models for conversation
          • Explaining dialog policy
        • Practical Considerations
          • Responsible chatting
          • Ethical issues with learning and reasoning in dialog systems
          • Corpora, Tools and Methodology for Dialogue Systems
          • Securing one’s chat

Submissions

Submissions must be formatted in AAAI two-column, camera-ready style (see https://www.aaai.org/Publications/Templates/AuthorKit20.zip). Regular research papers may be no longer than 7 pages, where page 7 must contain only references, and no other text whatsoever. Short papers, which describe a position on the topic of the workshop or a demonstration/tool, may be no longer than 4 pages, references included. The accepted papers will be linked from the workshop website to the public versions on ArXiv.

Electronic submissions to be uploaded at https://easychair.org/conferences/?conf=deepdial20

Notifications: November 30, 2019
Camera-ready copy due: December 3, 2019

Workshop Chairs

Ullas Nambiar (Accenture, India), Imed Zitouni (Google, USA), Kshitij Fadnis (IBM Research, USA), Biplav Srivastava (IBM, USA)

Additional Information

Supplemental workshop site: https://sites.google.com/view/deep-dial2020


W20 — Reasoning for Complex Question Answering

Question Answering (QA) has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding, and in measuring the progress of machine intelligence in general. At AAAI-20, the Reasoning for Complex Question Answering (RCQA) workshop series will feature a special focus on Commonsense Reasoning and the overall umbrella area of Machine Common Sense (MCS). Commonsense Reasoning has long been a problem of interest to the AI community and has spurred various efforts over the years that seek to standardize and encode commonsense knowledge for use by AI systems. Machine Common Sense (MCS), taking after the name of a recent DARPA program, has once again become an area of focus, in large part due to the realization that MCS may be one of the biggest missing components in the transition of current-day narrow AI systems into truly broader general AI systems.

Making progress on the MCS problem unlocks many potential solution techniques for AI systems and makes them truly useful in various real-world domains in related fields such as NLP and computer vision. However, such progress is contingent on addressing key problems such as: how to obtain commonsense knowledge; how to encode it into usable models and structures that are suitably agnostic to the specific reasoning technique employed; how to (automatically) construct large-scale and cross-domain knowledge bases that store such knowledge; and how to measure the progress of state-of-the-art models and demonstrate the advantage of using commonsense knowledge on standardized datasets.

Submissions

The workshop welcomes three kinds of paper submissions:
(i) challenge papers (up to 2 pages long) that describe a new challenge in the workshop’s focus area;
(ii) short papers (up to 4 pages long) which focus on a single, specific contribution; and
(iii) full papers (up to 8 pages long).

In each case, additional pages beyond the limit may be used for references only. Submission length should be commensurate with the contribution of the paper, following the rough guidelines above. Submissions must be formatted in the AAAI submission format. Submissions will be reviewed for their relevance to the workshop topic, relevance to this year’s special focus, novelty of ideas, significance of results, and reusability of the contributions. The workshop is non-archival, and we welcome relevant submissions that have been published elsewhere in the very recent past.

Organizing Committee

Kartik Talamadupula, IBM Research
Vered Shwartz, University of Washington / AI2
Jay Pujara, ISI / USC
Rachel Rudinger, AI2 / UMD
Mausam, IIT Delhi
Nanyun Peng, ISI / USC
Pavan Kapanipathi, IBM Research

Additional Information

Supplemental workshop site: https://rcqa-ws.github.io/


W21 — Reinforcement Learning in Games

Games provide an abstract and formal model of environments in which multiple agents interact: each player has a well-defined goal, and rules describe the effects of interactions among the players. The first achievements in playing these games at a super-human level were attained with methods that relied on and exploited manually designed domain expertise (e.g., chess, checkers). In recent years, we have seen examples of general approaches that learn to play these games via self-play reinforcement learning (RL), as first demonstrated in Backgammon. While progress has been impressive, we believe we have just scratched the surface of what is possible, and much work remains to be done in order to truly understand the algorithms and learning processes within these environments.

The main objective of the workshop is to bring researchers together to discuss ideas, preliminary results, and ongoing research in the field of reinforcement learning in games.

Topics

We invite participants to submit papers based on, but not limited to, the following topics:

        • RL in various formalisms: one-shot games, turn-based and Markov games, partially observable games, continuous games, cooperative games
        • Deep RL in games
        • Combining search and RL in games
        • Inverse RL in games
        • Foundations, theory, and game-theoretic algorithms for RL
        • Opponent modeling
        • Analyses of learning dynamics in games
        • Evolutionary methods for RL in games
        • RL in games without the rules
        • Monte Carlo tree search and online learning in games

Format

RLG is a full-day workshop. It will start with a 60-minute mini-tutorial covering the basics of RL in games, followed by 2-3 invited talks by prominent contributors to the field, paper presentations, and a poster session, and will close with a discussion panel.

Submissions

Papers must be 4-8 pages in the AAAI submission format, with the eighth page containing only references. Papers will be submitted electronically using EasyChair. Accepted papers will not be archival, and we explicitly allow papers that are concurrently submitted to, currently under review at, or recently accepted at other conferences or venues.

Organizing Committee

Julien Perolat, Chair (DeepMind, perolat@google.com), Marc Lanctot (DeepMind, lanctot@google.com), Martin Schmid (DeepMind, mschmid@google.com)

Additional Information

Supplemental workshop site: http://aaai-rlg.mlanctot.info/


W22 — Reproducibility in AI (RAI 2020) — Future Direction and Reproducibility Challenge

Artificial Intelligence (AI), like any science, must rely on reproducible experiments to validate results. However, reproducing results from AI research publications is not easily accomplished. This may be because AI research has its own unique reproducibility challenges, including, for example, (1) the use of analytical methods that are still a focus of active investigation and (2) problems due to non-determinism in standard benchmark environments and variance intrinsic to AI methods. Acknowledging these difficulties, empirical AI research should be documented properly so that the experiments and results are clearly described.
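
As a minimal illustration of coping with such non-determinism (a generic sketch, not an AAAI or workshop recommendation; noisy_experiment below is a hypothetical stand-in for a stochastic training run), one common practice is to seed every source of randomness explicitly and to report results aggregated over several independent runs:

# Hypothetical sketch: seed runs explicitly and report mean and spread over seeds.
import random
import statistics

def noisy_experiment(seed: int) -> float:
    """Stand-in for a stochastic training run; returns a toy 'score'."""
    rng = random.Random(seed)          # seed every source of randomness explicitly
    return 0.80 + rng.gauss(0, 0.02)   # variance intrinsic to the method

scores = [noisy_experiment(seed) for seed in range(10)]   # several seeded runs
print(f"mean={statistics.mean(scores):.3f} std={statistics.stdev(scores):.3f}")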

AAAI does not provide any recommendations on how researchers can enhance the reproducibility of their work. In this AAAI-20 workshop, we aim to develop such recommendations, and to encourage future AAAI conferences to implement them.
The goal is to finalize recommendations for AAAI 2021 and discuss how these should be evaluated. This year’s workshop is a continuation of the AAAI 2019 Workshop on Reproducible AI (RAI 2019), where we had several presentations and attendees began discussing such recommendations.

Reproducibility Challenge
As input to the discussion on recommendations, we will emphasize the submission and acceptance of papers in which researchers describe their experiences attempting to reproduce a paper (or papers) accepted at a previous AAAI conference (i.e., try to reproduce the results from a previous AAAI conference paper and report your results).

Submissions should contain a description of the experiment, whether the results of the original paper were reproduced or not, a discussion of reproducibility challenges, lessons learned, and recommendations for best practices, as well as a short note on each of the 24 variables presented by Gundersen, Gil, and Aha (2018) (https://folk.idi.ntnu.no/odderik/RAI-2020/On_Reproducible_AI-preprint.pdf).

Topics

Any topics related to reproducible AI are welcome, including position papers, surveys, recommendations, and comparisons of AI reproducibility with other fields of research. Our focus is especially on practical solutions for how to improve the reproducibility of research presented at AAAI.

Relevant reading: See suggested reading list at https://folk.idi.ntnu.no/odderik/RAI-2020/Suggested_Readings.pdf.

Format

The workshop will span a full day and will include invited talks, oral and poster presentations of submitted work, and a panel and open discussion on how to make research results presented at AAAI reproducible.

Submissions

Each submission will be in the form of a paper of at most 8 pages including references, using the main AAAI conference format. Authors can optionally anonymize their submissions. Papers should be submitted via EasyChair. Oral presentation authors and poster session participants will be selected from the submissions. Please send an email to the workshop chairs if you are considering submitting a paper.

Submission site: https://easychair.org/conferences/?conf=rai2020

Chairs

Odd Erik Gundersen (Norwegian University of Science and Technology, odderik@ntnu.no),
David W. Aha (Naval Research Laboratory, david.aha@nrl.navy.mil), Daniel Garijo (University of Southern California, dgarijo@isi.edu)

Additional Information

Supplemental workshop site: https://folk.idi.ntnu.no/odderik/RAI-2020/


W23 — Statistical Relational AI (StarAI)

The purpose of the Statistical Relational AI (StarAI) workshop is to bring together researchers and practitioners from three fields: logical (or relational) AI/learning, probabilistic (or statistical) AI/learning, and neural approaches to AI/learning with knowledge graphs and other structured data. These fields share many key features and often solve similar problems and tasks. Until recently, however, research in them has progressed independently with little or no interaction. The fields often use different terminology for the same concepts and, as a result, keeping up with and understanding the results in the other fields is cumbersome, thus slowing down research. Our long-term goal is to change this by achieving synergy between logical, statistical, and neural AI. As a stepping stone towards realizing this big-picture view of AI, we are organizing the Ninth International Workshop on Statistical Relational AI at AAAI-20.

Topics

StarAI is currently provoking a lot of new research and has tremendous theoretical and practical implications. The focus of the workshop will be on general-purpose representation, reasoning and learning tools for StarAI as well as practical applications. Specifically, the workshop will encourage active participation from researchers in the following communities, and integration thereof: satisfiability, knowledge representation, constraint satisfaction and programming, (inductive) logic programming, graphical models and probabilistic reasoning, statistical learning, relational embeddings, neural-symbolic integration, graph mining and probabilistic databases. It will also actively involve researchers from more applied communities, such as natural language processing, information retrieval, vision, semantic web and robotics. We seek to invite researchers in all subfields of AI to attend the workshop and to explore together how to reach the goals imagined by the early AI pioneers.

Format

StarAI will be a one-day workshop with short paper presentations, a poster session, and three invited speakers. Attendance is open to all.

Submissions

Authors should submit one of the following:

        • a full paper reporting on novel technical contributions or work in progress (AAAI style, up to 7 pages excluding references),
        • a short position paper (AAAI style, up to 2 pages excluding references),
        • an already published work (verbatim, no page limit, citing original work) in PDF format via EasyChair.

All submitted papers will be carefully peer-reviewed by multiple reviewers and low-quality or off-topic papers will be rejected.

Submission site: https://easychair.org/conferences/?conf=starai2020

Organizing Committee

Sebastijan Dumančić (KU Leuven, sebastijan.dumancic@cs.kuleuven.be), Angelika Kimmig (Cardiff University, KimmigA@cardiff.ac.uk), David Poole (UBC, poole@cs.ubc.ca), Jay Pujara (USC, jay@cs.umd.edu)

Additional Information

Supplemental workshop site: http://www.starai.org/2020
