AAAI 2020 Spring Symposium Series
March 23–25, 2020
Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department
Call for Participation
Important Deadlines
- November 1, 2019: Submissions due to organizers (unless otherwise noted in individual descriptions)
- December 6, 2019: Notifications of acceptance sent by organizers
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, is pleased to present the 2020 Spring Symposium Series, to be held Monday through Wednesday, March 23–25, 2020 at Stanford University. The titles of the eight symposia are as follows:
- Applied AI in Healthcare: Safety, Community, and the Environment
- Artificial Intelligence in Manufacturing
- AI Welcomes Systems Engineering: Towards the Science of Interdependence for Autonomous Human-Machine Teams
- Challenges and Opportunities for Multi-Agent Reinforcement Learning
- Combining Artificial Intelligence and Machine Learning with Physical Sciences
- Combining Machine Learning and Knowledge Engineering in Practice
- Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools, and Risks
- Towards Responsible AI in Surveillance, Media, and Security through Licensing
AAAI Spring Symposium Submission Site
Most organizers have elected to use the AAAI Spring Symposium EasyChair site for receipt of submissions. If specified in the individual symposium description, please submit your work via the AAAI Spring Symposium EasyChair site. Please be sure to select the appropriate symposium when submitting your work.
Applied AI in Healthcare: Safety, Community, and the Environment
This symposium will discuss ways to solve health-related, real-world issues in various emerging, ongoing, and underrepresented areas. Our international team works primarily on AI-assisted healthcare, with a specific focus on improving safety, the community, and the environment through the latest technological advances in our respective fields. We also want to improve the design and deployment of new technologies by considering the broader contexts in which they will be used. Our group’s mission is to bring engineers, computer and data scientists, physician specialists, epidemiologists, public health researchers, ergonomists, ethicists, social scientists, designers, safety personnel, and other scholars and healthcare professionals together to share and foster ideas.
Topics
This symposium focuses on the following three themes related to potentially challenging issues in healthcare AI: (1) Workplace safety and the environment; (2) Caregiving in the community; and (3) Specialized applications of AI in healthcare.
Workplace Safety and the Environment: There is a significant knowledge gap in the workplace AI arena. Our current areas of interest include (but are not restricted to) the role of machine learning in optimizing workplace ergonomics, business travel, hospital ergonomics, and monitoring environmental hazards, including automating wildfire smoke or air quality predictions via proprietary Google database sets and other global satellite systems.
Caregiving in the Community: The introduction of digital technology and AI in care work, along with the broad adoption of network infrastructures and the increasing attention to community-based care, creates opportunities to provide healthcare in novel ways. We consider how emerging AI technologies may bring different forms of care back into homes and communities through health monitoring applications, assistive robotics, telehealth platforms, automated transportation, and other innovations. We will also discuss existing technical, social, design, and practice-oriented aspects of care work and technology, and new conceptual frameworks of care work and design paradigms for assistive technologies that can enable distributed, scalable, sustainable, and maintainable healthcare in 21st-century communities.
Specialized Applications of AI in Healthcare: This theme aims to share the latest progress, current challenges, and potential applications related to the use of AI in healthcare. Submissions may focus on specific and technical details or bring a more general point of view on the use of AI in healthcare and the improvement of public and population health.
Format
The symposium will include invited talks, presentations of accepted papers, group work sessions, and panels. Use cases, current and potential application scenarios, and requirements from industry are encouraged. Invited speakers and presentations will be announced after the submission deadline. More information will appear on the supplementary symposium website.
Submissions
Interested participants should submit either full papers (6–8 pages), short papers (2–4 pages), or extended abstracts (2 pages maximum). Submissions are invited from all perspectives of interest (see the Topics section). Submissions can include recent or ongoing research, position papers, and surveys or reviews.
Submissions will be handled through the AAAI Spring Symposium EasyChair site.
Organizing Committee
Rajan Puri (Stanford University, purir@stanford.edu), Samira Rahimi (McGill University, Samira.rahimi@mcgill.ca), Selma Šabanović (Indiana University, Bloomington, elmas@indiana.edu)
For More Information
For more information about the Applied AI in Healthcare: Safety, Community, and the Environment symposium, please see the supplementary symposium site.
Artificial Intelligence in Manufacturing
Nearly one half million unfilled manufacturing positions (Deloitte), low productivity, and strong competition for new products are driving a significant need for increasingly intelligent, agile, and collaborative manufacturing. Significant advanced manufacturing activities are underway across the world by global manufacturers in a multiplicity of verticals, including pharmaceuticals, medical products, automotive, aerospace, consumer goods, construction, power and hand tools, materials, and industrials. Intelligent machines are addressing an increasingly complex range of manufacturing tasks, including product design, material handling, machine tending, assembly, packaging, and distribution. Intelligent systems can provide a range of potential benefits, including speed, flexibility, and accuracy.
Topics
Intelligent manufacturing necessarily cuts across AI disciplines, drawing from (supervised and unsupervised) machine learning, image understanding, spoken language dialogue, media extraction, search, and knowledge representation, among other areas. This symposium will focus on new directions in the burgeoning area of advanced manufacturing. These will include, but are not limited to, the following:
- Generative design for products and processes
- Predictive analytics (for example, avoiding down time, reducing costly failures)
- Collaborative robots (for example, for picking, machine tending, kitting)
- Human machine teaming on the manufacturing floor
- Intelligent industrial Internet of Things (IIoT)
- Smart planning algorithms for automated factories
- Mobile material planning and autonomous vehicles
- Spoken dialogue interfaces to machines, factories, and supply chains
- Automated defect detection
- Knowledge based modeling and simulation of manufacturing
- Smart warehouses
- Evaluation of advanced manufacturing
Submissions
Potential participants are invited to submit a paper of 1,500–6,000 words proposing questions, reporting work in progress, discussing applications and/or use cases, or making theoretical contributions. Submissions should be made in PDF format via the SSS-20 EasyChair Site. Submissions are due no later than 1 November 2019.
Demonstrations will be considered as well. Please submit a demonstration description of at least one page but no longer than the full paper specifications. The symposium format will be a series of short presentations, demonstrations, and joint working sessions aimed at understanding key impediments to progress, formulating new research areas, and identifying opportunities for cross-site collaboration (for example, data or insight sharing, open algorithms, use cases). We will target around 40 participants who are active in the field.
Organizing Committee
Peter Friedland (USAF), Benjamin Gibbs (Ready Robotics), Jim Hendler (RPI), John Manferdelli (Northeastern University), Mark Maybury, chair (Stanley Black and Decker, mark.maybury@sbdinc.com), Manish Mehta (Stanley Black and Decker Silicon)
For More Information
For more information, please contact Mark Maybury at mark.maybury@sbdinc.com
AI Welcomes Systems Engineering: Towards the Science of Interdependence for Autonomous Human-Machine Teams
Compared to a collection of the same individuals acting independently, the members of an interdependent team are significantly more productive. Yet interdependence is insufficiently studied to provide an efficient operational architecture for human-machine or machine-machine teams. Interdependence in a team creates bistable effects among humans, characterized by tradeoffs that affect the design, performance, networks, and other aspects of operating autonomous human-machine teams.
To solve these next-generation problems, the AI and systems engineering (SE) of human-machine teams require multidisciplinary approaches. The science of interdependence for autonomous human-machine teams requires contributions not only from AI, including machine learning (ML), and from SE, including the verification and validation of systems using AI or ML, but also from other disciplines, to establish an approach that allows a human and a machine to operate as teammates. This includes simulation and training environments where humans and machines can co-adapt, with stable outcomes assured by evidence-based frameworks. As a general rule, users interfacing with machine learning algorithms require the information fusion (IF) of data to achieve limited autonomous operations, but as autonomy increases, a wider spectrum of features, such as transfer learning, becomes necessary.
Fundamentally, for human-machine teams to become autonomous, the science of how humans and machines operate interdependently in a team requires contributions from, among others, the social sciences, to study how context is interdependently constructed among teammates, how trust is affected when humans and machines depend upon each other, how human-machine teams are to train with each other, and how human-machine teams need a bidirectional language of explanation; the law, to determine legal responsibilities for misbehavior and accidents; ethics, to know the limits of morality; and sociology, to guide appropriate team behaviors across society and different cultures, the workplace, healthcare, and combat. We need to know the psychological impact on humans when teaming with machines that can think faster than humans, whether in relatively mundane situations such as self-driving cars; in more complex but still traditional decision situations such as combat (that is, in-the-loop; for example, the Navy’s Ghost Fleet, the Army’s self-driving combat convoy teams, and the Marine Corps’ ordnance disposal teams); or in the more daunting scenarios with humans as observers of decisions (that is, on-the-loop; for example, the Air Force’s aggressive, dispensable, attritable drones flying wing for an F-35).
Topics
Topics will include AI and machine learning; autonomy; systems engineering; human-machine teams (HMT); machine explanations of decisions; and context.
Format
The format of the symposium will include invited talks (60 minutes) and regular speakers (30 minutes).
Submissions
Between November 1, 2019 and January 15, 2020, contributors should submit to the organizers an abstract, outline, or paper of 2–8 pages, using APA references. A call for book chapters will follow after the symposium.
Organizing Committee
W.F. Lawless (Paine College, w.lawless@icloud.com), Ranjeev Mittu and Don Sofge (Naval Research Laboratory, DC), Thomas Shortell (Lockheed Martin and INCOSE), and Tom McDermott (Stevens Institute of Technology)
For More Information
For more information, please see the supplementary symposium site.
Challenges and Opportunities for Multi-Agent Reinforcement Learning
We live in a multiagent world, and to be successful in that world, intelligent agents will need to learn to take into account the agency of others. They will need to compete in marketplaces, cooperate in teams, communicate with others, coordinate their plans, and negotiate outcomes. Examples include self-driving cars interacting in traffic, personal assistants acting on behalf of humans and negotiating with other agents, swarms of unmanned aerial vehicles, financial trading systems, robotic teams, and household robots.
Topics
There has been substantial progress on multiagent reinforcement learning (MARL) in the past decade, but significant challenges remain, including the following:
- The difficulty of learning an optimal model or policy from a partial signal
- Learning to cooperate or compete in nonstationary environments with distributed, simultaneously learning agents (see the sketch after this list)
- The interplay between abstraction and influence of other agents
- The exploration versus exploitation dilemma
- The scalability and effectiveness of learning algorithms
- Avoiding social dilemmas
- Learning emergent communication
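To make the nonstationarity challenge concrete, the following minimal sketch (ours, not part of the call; it assumes only NumPy and a toy coordination game) trains two independent Q-learners simultaneously. Because each agent treats the other as part of a fixed environment while both keep adapting, neither faces a stationary learning problem.

```python
# Minimal sketch: two independent Q-learners in a repeated 2x2 coordination game.
# Each agent ignores the other's learning process, which is exactly what makes
# the joint problem nonstationary from either agent's point of view.
import numpy as np

rng = np.random.default_rng(0)

# Both agents receive reward 1 only when they pick the same action.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

n_actions = 2
q = [np.zeros(n_actions), np.zeros(n_actions)]  # one stateless Q-table per agent
alpha, epsilon = 0.1, 0.1

def select(q_values):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values))

for step in range(5000):
    a0, a1 = select(q[0]), select(q[1])
    r = payoff[a0, a1]  # shared reward
    # Independent updates: each agent learns as if the other were static.
    q[0][a0] += alpha * (r - q[0][a0])
    q[1][a1] += alpha * (r - q[1][a1])

print("Agent 0 Q-values:", q[0])
print("Agent 1 Q-values:", q[1])
```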
The purpose of this symposium is to bring together researchers in multiagent reinforcement learning, but also more widely machine learning and multiagent systems, to explore some of these and other challenges in more detail. The main goal is to broaden the scope of MARL research and to address the fundamental issues that hinder the applicability of MARL for solving complex real-world problems.
We aim to organize an active symposium, with many interactive (brainstorm/breakout) sessions. We are hopeful that this will form the basis for ongoing collaborations on these challenges among the attendees, and we aim for several position papers as concrete outcomes.
Submissions
Authors can submit papers of 1–4 pages that will be reviewed by the organizing committee. We are looking for position papers that present a challenge or opportunity for MARL research, on a topic the authors wish not only to discuss but also to work on with other participants during the symposium. We also welcome (preliminary) research papers that describe new perspectives on dealing with MARL challenges, but we are not looking for summaries of current research; papers should clearly state some limitation(s) of current methods and potential ways these could be overcome. Submissions will be handled through the AAAI Spring Symposium EasyChair site.
Organizing Committee
Christopher Amato (Northeastern University), Frans Oliehoek (Delft University of Technology), Shayegan Omidshafiei (Google DeepMind), Karl Tuyls (Google DeepMind)
For More Information
For more information, please see the supplementary symposium site.
Combining Artificial Intelligence and Machine Learning with Physical Sciences
With recent advances in scientific data acquisition and high-performance computing, artificial intelligence (AI) and machine learning (ML) have received significant attention from the applied mathematics and physical sciences community. From successes reported by industry, academia, and the research communities at large, we observe that AI and ML hold great potential for leveraging scientific domain knowledge to support new scientific discoveries and enhance the development of physical models for complex natural and engineering systems.
For example, deep learning supports the discovery of new materials and new high-energy physics from numerous computer simulations and experiments by allowing us to learn low-dimensional manifolds underlying the acquired data, so the system of interest can be represented parsimoniously and effectively. ML has offered new insights into adaptive numerical discretization schemes and numerical solvers, which are clearly distinct from traditional mathematical theories. AI also provides a new way of generalizing constitutive physics laws based on big scientific data sets.
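As one illustration of learning such a low-dimensional representation, the sketch below (ours, not part of the call) trains a small autoencoder on synthetic snapshots standing in for simulation output; it assumes PyTorch, and the data, architecture, and dimensions are purely illustrative.

```python
# Minimal sketch: learn a parsimonious 2-d representation of 100-d "snapshots"
# that actually lie on a low-dimensional manifold, using an autoencoder.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic snapshots driven by two hidden latent factors.
latent_true = torch.rand(256, 2)
basis = torch.randn(2, 100)
snapshots = torch.tanh(latent_true @ basis)

encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 100))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(500):
    code = encoder(snapshots)      # low-dimensional representation
    recon = decoder(code)          # reconstruction of the full field
    loss = nn.functional.mse_loss(recon, snapshots)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("reconstruction MSE:", float(loss))
```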
Despite this progress, there are still many open questions. Our current understanding is limited regarding how and why AI/ML work and why they can be predictive. AI has been shown to outperform traditional methods in many cases, especially with high-dimensional, inhomogeneous data sets. However, a rigorous understanding of when AI/ML is the right approach is largely lacking: for what class of problems, underlying assumptions, available data sets, and constraints are these new methods best suited? The lack of interpretability in AI-based modeling and related scientific theories makes them insufficient for high-impact, safety-critical applications such as medical diagnoses, national security, and environmental contamination and remediation. With transparency and a clear understanding of the data-driven mechanism, the desirable properties of AI should be best utilized to extend current methods in modeling for physics and engineering problems. At the same time, handling expensive training costs and large memory requirements for ever-increasing scientific data sets is becoming more and more important to guarantee scalable scientific machine learning.
This symposium aims to present the current state of the art and identify opportunities and gaps in AI/ML-based physical science. The symposium will focus on challenges and opportunities for increasing the scale, rigor, robustness, and reliability of physics-informed AI necessary for routine use in science and engineering applications, and will discuss potential researcher-AI collaborations to significantly advance diverse scientific areas and transform the way science is done.
Topics
Authors are strongly encouraged to present papers that combine and blend physical knowledge and artificial intelligence/machine learning algorithms. Topics of interest include but are not limited to the following:
- Artificial intelligence/machine learning frameworks that can seamlessly synthesize models, governing equations, and data (a minimal sketch follows this list)
- Algorithms for scalable physics-informed learning
- Stability and error analysis for physics-informed learning
- Software development facilitating the inclusion of physics domain knowledge in learning
- Applications incorporating domain knowledge into machine learning
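As a concrete, hedged example of blending governing equations and data, the sketch below (ours, not part of the call; assuming PyTorch) fits a small network to a few noisy observations while also penalizing violations of the toy ODE du/dx + u = 0 at collocation points, in the spirit of physics-informed learning.

```python
# Minimal physics-informed sketch: loss = data misfit + ODE residual.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Sparse, noisy observations of the true solution u(x) = exp(-x).
x_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-x_data) + 0.01 * torch.randn_like(x_data)

# Collocation points where only the governing equation is enforced.
x_phys = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

for step in range(3000):
    loss_data = nn.functional.mse_loss(net(x_data), u_data)   # fit the data
    u = net(x_phys)
    du_dx = torch.autograd.grad(u.sum(), x_phys, create_graph=True)[0]
    loss_phys = (du_dx + u).pow(2).mean()                      # enforce du/dx + u = 0
    loss = loss_data + loss_phys
    opt.zero_grad()
    loss.backward()
    opt.step()

print("u(2.0) ≈", float(net(torch.tensor([[2.0]]))), "vs exact exp(-2) ≈ 0.135")
```

The relative weighting of the data and physics terms is itself a research question; the equal weighting here is only for illustration.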
Format
The symposium will be organized around invited talks, presentations, and posters.
Submissions
Interested participants should submit either extended abstracts (2–4 pages) or full papers (6 pages maximum) for position and work-in-progress pieces. Submissions should be formatted according to the AAAI template and submitted via the AAAI Spring Symposium EasyChair site.
Main Contact
Jonghyun Harry Lee (University of Hawaiʻi at Mānoa, jonghyun.harry.lee@hawaii.edu)
Organizing Committee
Jonghyun Harry Lee (University of Hawaiʻi at Mānoa), Eric Darve (Stanford University), Peter Kitanidis (Stanford University), Matthew Farthing (U.S. Army Engineer Research and Development Center), Tyler Hesser (U.S. Army Engineer Research and Development Center). For a full list of organizers and program committee members, please refer to the supplementary symposium site.
For More Information
For more information, please see the supplementary symposium site.
Combining Machine Learning and Knowledge Engineering in Practice
Machine learning helps to solve complex tasks based on real-world data instead of pure intuition. It is most suitable for building AI systems when knowledge is not explicitly known or is tacit. While machine learning is now able to master data-intensive learning tasks, there are still challenges. Many tasks require large amounts of training data, especially tasks where the events to be predicted are rare.
Many business cases and real-life scenarios demand background knowledge and explanations of results and behavior. In medicine, for instance, physicians will likely overrule suggestions if there is no adequate explanation for them. In the self-driving car domain, where safety and control are fundamental, there is demand for symbolic approaches that can adequately complement machine learning. Moreover, conversational agents require domain knowledge and contextual information to provide satisfactory responses. Furthermore, application areas such as banking, insurance, and the life sciences are highly regulated and thus require compliance with laws and regulations. This specific application knowledge needs to be represented, which is the area of knowledge engineering.
Knowledge engineering and knowledge-based systems, which make expert knowledge explicit and accessible, are often based on logic and thus can explain their conclusions. These systems typically require a higher initial effort during development than systems that use machine learning approaches. However, symbolic machine learning and ontology learning approaches are promising for reducing the effort of knowledge engineering.
Because of their complementary strengths and weaknesses, there is an increasing demand in business to integrate knowledge engineering and machine learning for complex business scenarios. Focusing on only one aspect will not exploit the full potential of AI. Explicitly represented application knowledge could assist data-driven machine-learning approaches to converge faster on sparse data and to be more robust against noise, which results in cost efficiency and effectiveness for business.
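As a small, hedged illustration of how explicit knowledge can complement a learned model, the sketch below (ours, not part of the call; the loan-approval setting, feature names, and the rule itself are hypothetical) lets a declarative regulatory rule override a statistical classifier whenever it applies, making those decisions directly explainable.

```python
# Minimal sketch: a learned classifier combined with an explicit, engineered rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy historical data: features are [income_score, debt_ratio].
X = rng.random((200, 2))
y = (X[:, 0] - X[:, 1] > 0.1).astype(int)  # pattern hidden in past decisions

model = LogisticRegression().fit(X, y)

# Hypothetical regulatory rule: debt ratio above 0.9 must always be rejected.
def decide(income_score, debt_ratio):
    if debt_ratio > 0.9:
        return 0, "rejected by rule: debt ratio exceeds regulatory limit"
    pred = int(model.predict([[income_score, debt_ratio]])[0])
    return pred, "decided by learned model"

print(decide(0.8, 0.95))  # rule fires; the decision is fully explainable
print(decide(0.8, 0.20))  # statistical model decides
```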
This symposium aims to bring together practitioners and researchers in machine learning and knowledge engineering from companies, research centers, and academia to work jointly on AI that solves real business problems by being explainable and grounded in domain knowledge.
Topics
Among relevant topics are the following:
- Enterprise AI
- Machine Learning
- Knowledge Engineering, Representation, and Reasoning
- Hybrid AI
- Explainable AI
- Conversational AI
- Deep Learning and Neural Networks
- Rule-based Systems
- Recommender Systems
- Scene Interpretation Systems
- Ontologies and Semantic Web
Use cases, application scenarios, and requirements from industry are highly beneficial and most welcome.
Format
The symposium involves presentations of accepted position, full, and short papers; industry side-tutorial events; (panel) discussions; demonstrations; and plenary sessions.
Submissions
We solicit papers that can include recent or ongoing research, business cases, application scenarios, and surveys. Position and full papers (5 to 12 pages), short papers (2 to 4 pages) and poster abstracts (1 to 2 pages) will be peer-reviewed by the program committee to ensure academic integrity.
Industrial side-tutorial event or demonstration proposals (1 to 2 pages) should focus on business or research related to the symposium topics; extensive product advertising is not desired.
Discussion proposals (1 to 2 pages) should contain a description of the specific topic.
All submissions must reflect the formatting instructions provided in the Author Kit and be submitted through the AAAI Spring Symposium EasyChair site. Accepted and camera-ready papers shall be published on the established open-access proceedings site CEUR-WS.
Symposium Cochairs
Andreas Martin (main contact, mail@aaai-make.info) and Knut Hinkelmann, FHNW University of Applied Sciences and Arts Northwestern Switzerland, School of Business, Riggenbachstrasse 16, 4600 Olten, Switzerland
Organizing Committee
Hans-Georg Fill (University of Fribourg, Switzerland), Aurona Gerber (University of Pretoria, South Africa), Knut Hinkelmann (FHNW University of Applied Sciences and Arts Northwestern Switzerland), Doug Lenat (Cycorp, Inc., Austin, TX, USA), Andreas Martin (FHNW University of Applied Sciences and Arts Northwestern Switzerland), Reinhard Stolle (Autonomous Intelligent Driving GmbH, München, Germany), Frank van Harmelen (VU University, Amsterdam, Netherlands)
For More Information
For more information, please see the supplementary symposium site.
Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools, and Risks
Increasing global threats and competition have underscored the importance of artificial intelligence (AI) research and applications to confront adversaries. Warfighters in all domains (land, sea, air, undersea, and cyber) utilize AI to support smart sensors, robotics, cyber honey pots, virtual swarms, and war games. Furthermore, multidomain decision makers, from central command centers to the distributed tactical edge, also seek trusted AI as assistants and automation tools to overcome cognitive overload. These domains underscore warfighters’ need for both perceptive and reasoning AI to help in modern combat. This need has motivated the US Department of Defense Strategy (2018) to state that AI and resultant defense applications are the “very technologies that ensure we will be able to fight and win the wars of the future.”
Ubiquitous sensors in smart cities, smart seas, smart homes, and the Internet of Things (IoT) introduce a wide range of industrial and societal challenges, with an overwhelmingly large volume, high velocity, and wide variety of data (for example, databases, images, text, audio, and video). Concurrent with the explosion in data volume, the drive to handle ever larger big data has generated significant breakthroughs in data models, cloud computing, parallel and distributed computing, and, more importantly, advanced deep analytics, including machine learning (ML) and artificial intelligence (AI) algorithms.
Deep analytics algorithms have pushed AI capabilities toward near-human or superhuman intelligence on some tasks. AI capabilities are categorized by different types of learning: supervised learning, which requires labeled big data for backpropagation; unsupervised learning, for data mining, data discovery, statistical pattern recognition, and anomaly detection; semi-supervised learning, for accelerated training, transfer learning, and changing scenarios; and reinforcement learning, which requires big data gathered through trials and rewards. Reinforcement learning draws broadly on Markov decision processes (MDPs), genetic algorithms, game theory, control theory, temporal difference methods, and sequential learning. Due to these breakthroughs, deep analytics and AI support an abundant set of applications in the commercial world that demonstrate enormous potential, but many intractable challenges remain. For example, academic and industrial AI applications focus on machine vision, speech recognition, chat understanding, and autonomous driving, achieving automation and accuracy that can surpass human experts. However, these existing industrial applications may not adequately address or transfer to specific defense problems with intelligent adversaries, sparse data, and unique sensors.
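For readers less familiar with learning from trials and rewards, the sketch below (ours, not part of the call) shows tabular Q-learning, a temporal-difference method, on a toy chain MDP; the environment, parameters, and reward structure are illustrative only.

```python
# Minimal sketch: tabular Q-learning on a 5-state chain MDP.
# Actions move left (0) or right (1); reaching state 4 yields reward 1 and ends the episode.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    s = 0
    while s != 4:  # state 4 is terminal
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # Temporal-difference update toward the bootstrapped target.
        target = r + (0.0 if s_next == 4 else gamma * np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(np.argmax(Q[:4], axis=1))  # learned policy for nonterminal states: always move right
```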
This symposium will explore the potentials, theories, practices, tools, and risks related to deep models, AI, and networks in defense-specific applications. For example, the DoD’s challenges include not only the volume and velocity of big data but also veracity and variation arising from bad, corrupted, or missing data. Adequate labeled samples for classification tasks may be lacking, so alternatives include synthetic and simulation data. Furthermore, tactical environments often involve shorter time scales and fewer resources for learning. Also, unlike many commercial applications, defense applications access data sources stored in a distributed environment that must be fused and analyzed together to form a coherent and holistic battlespace picture. Defense analytics over multisource data must satisfy requirements for real-time processing, high data rates, and limited channels, while also being subject to strict security across all domains.
In summary, four of the main challenges in using defense AI are (1) lack of adequate samples for classification learning, (2) short time scales for adaptive learning, (3) limited computational resources for multisource edge learning, and (4) adversarial behavior that demands robust learning. A representative scenario is identifying a rarely observed object for novel mission planning: there is little training data, relatively little time to integrate recent observations into training, only a network of high-powered desktops available for training, and adversaries trying to jam or corrupt the sensors. For these reasons, fast optimization methods, generative modeling, and transfer learning methods are of particular interest. Since resources are always limited, the research community needs to discuss how data science, ML, AI, and multidomain fusion can be applied to different levels of defense operations, such as strategic, operational, and tactical operations.
While the potential is great, the risks are not trivial either, and they require policy positions and discussion. What are the risks that challenge or potentially compromise fundamental human capabilities in the long run by applying AI technologies? How will AI shape the manpower requirements and costs for future defense organizations and systems? What level of explainable learning and decision-making, as well as human-in-the-loop AI, is necessary? What are the ethical and legal consequences of using this technology? Where are the boundaries, if any? What quality assurance approaches are relevant for defense applications?
The goal of the symposium or working group is to foster collaborations and form communities around the theories and practices of deep models for defense applications. We solicit unclassified research, papers, and innovative ideas in the following areas (but not limited to them) for defense applications.
Topics
What are the potentials, theories, practices, tools, and risks of using the following deep models (that is, models with a large number of parameters that can be trained on big data)?
- Deep data fusion models
- Various types of machine learning models (for example, supervised learning, reinforcement learning, and unsupervised learning)
- Deep learning models such as deep machine vision and image processing models
- Pattern recognition and anomaly detection algorithms
- Generative adversarial networks (GANs)
- Network models
- Graph models
- Game theory models
- Link analysis models
- Parallel and distributed computing models
- Smart data outputs from deep analytics
- Visualizations and depictions of smart data outputs
- Decision making models
- Cognitive models
- Fusing and jointly optimizing AI and human capabilities, including optimized human-in-the-loop AI
- Advanced optimization algorithms and online learning
- Cyber security
- Open AI
- Legal and ethical considerations
- Evaluation and assessment considerations
Format
The symposium will consist of keynote talks, invited talks, tutorials, oral presentations, poster/demo presentations, and panel discussions. We will also invite a panel of experts from defense organizations, funding agencies, and contractors to discuss these topics as part of the symposium activities.
Submissions
Regular papers should be 6–8 pages and position papers 2–4 pages; both should be submitted via the AAAI Spring Symposium EasyChair site.
Chair
Ying Zhao, Information Sciences Department, Naval Postgraduate School, Monterey, CA 93943, yzhao@nps.edu
Organizing Committee
Doug Lange (Naval Information Warfare Center, Pacific), Tony Kendall (Naval Postgraduate School), Erik Blasch (Air Force Office of Scientific Research), Arjuna Flenner (GE Aviation Systems), Bruce Nagy (NAVAIR China Lake), Richard Arthur (US Navy, NAVAIR China Lake)
For More Information
For more information, please see the supplementary symposium site.
Towards Responsible AI in Surveillance, Media, and Security through Licensing
This symposium will focus on the creation of end-user and source code licenses that developers may include with AI software to restrict its use in surveillance, media, and security. Developing technology licenses that restrict use requires consensus around how responsible use should be defined for different domains or applications, what types of clauses should be included in such a license, and how such licenses could be enforced from a legal standpoint. Thus, the symposium seeks participation from a diverse interdisciplinary group who can help formulate the challenges, risks, and specific conditions the licenses should seek to address in future iterations. The symposium will provide an invaluable opportunity to bring together experts in AI, in the legal community, and in a variety of applied domains to discuss what types of clauses would be appropriate and enforceable and then develop them.
Topics
Surveillance: This includes both overt and covert deployment of AI models for collecting and analyzing personal data by individuals, groups, companies or government.
Media: This includes the use of AI models in the creation of synthetic text, image, video or audio data for the purposes of entertainment, advertising, propaganda or education and the algorithmic targeting of people with this content, or other non-synthetic content.
Security: This includes the use of AI models in systems used in military and humanitarian applications.
Domains of Expertise
- Transportation (for example, autonomous vehicles, drones)
- Employment (for example, hiring, workplace decision making)
- Healthcare (for example, service providing and delivery, disease risk prediction, biomedical research, deanonymization)
- Education (for example, content recommendation, education delivery, etc.)
- Law enforcement and Judiciary (for example, surveillance, criminal risk prediction, AI applications in law)
- News and social media (for example, machine generated news, images, propaganda, content recommendation, fake news)
- Satellite reconnaissance and surveying (for example, applications for agriculture, deforestation, mining)
- Military applications (for example, autonomous or semiautonomous weapons)
- Essential service delivery and tracking (for example, delivery of healthcare, housing, food or medicine aid)
- Others not mentioned (please describe)
Format
Over its two and a half days, the symposium will feature invited talks and paper presentations, followed by breakout group sessions to explore future directions for AI licensing in these domains.
Each topic area will have participants focused on developing high-priority use cases. Participants will have access to crowd sourced ideas and they will also be free to define their own applications that need to be restricted via clauses. Legal experts will be part of the symposium, providing advice and guidance.
The primary tangible outcome would be the development of use cases to inform a set of licenses. The results would be disseminated through domain- and application-specific licenses, grounded in contract law, for use by software providers and researchers/developers.
Submissions
We will consider two types of submissions: case studies of areas of misuse (or potential misuse) (6–8 pages), or position papers (2–4 pages) addressing one of the following areas: surveillance, media, or security, or some combination. Please clearly indicate the type and area in the submission. Please also include whether your expertise is in AI, law, or one of the domains of expertise previously listed. Submit via the AAAI Spring Symposium EasyChair site.
Organizing Committee
Danish Contractor (IBM Research, dcontrac@in.ibm.com), Julia Haines (Google), Daniel McDuff (Microsoft Research), Brent Hecht (Northwestern University), Christopher Hines (K&L Gates)
For More Information
For more information, please see the supplementary symposium site.