AAAI 2017 Spring Symposia
March 27–29, 2017
Sponsored by the Association for the Advancement of Artificial Intelligence
In cooperation with the Stanford University Computer Science Department
- AI for the Social Good
- Computational Construction Grammar and Natural Language Understanding
- Computational Context: Why It’s Important, What It Means, and Can It Be Computed?
- Designing the User Experience of Machine Learning Systems
- Interactive Multisensory Object Perception for Embodied Agents
- Learning from Observation of Humans
- Science of Intelligence: Computational Principles of Natural and Artificial Intelligence
- Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing
AI for the Social Good
A rise in real-world applications of AI has stimulated significant interest from the public, media, and policy makers, including the White House Office of Science and Technology Policy (OSTP). Along with this increasing attention have come media-fueled concerns about purported negative consequences of AI, concerns that often overlook the societal benefits that AI is delivering and can deliver in the near future. This symposium will focus on the promise of AI across multiple sectors of society. We seek to bring together AI researchers, practitioners, experts, and policy makers from a wide variety of domains.
The focus of this symposium is on a broad scope of research topics. We are especially interested in addressing societal challenges that have not yet received significant attention from the AI community or from the constellation of AI sub-communities. We encourage submissions from researchers interested in applying AI methods to tackle unsolved societal challenges in a measurable manner, as well as from practitioners, experts, and policy makers in domains that could greatly benefit from the introduction of AI-based systems.
Topics of interest include but are not limited to the following:
- Urban Computing
- Smart Transportation
- Big Data Driven Urban Planning
- Urban Energy
- Healthcare
- Smart Elderly Care
- Virtual Healthcare Assistants
- Effective Disease Prevention Information Dissemination
- Public Welfare and Social Justice
- Preventing Adverse Police-Public Interactions
- Child and Women Empowerment
- Poverty Alleviation
- Sustainability
- Wildlife Protection
- Ecological Modeling
- Climate Change
- Security
- Physical Infrastructure Security
- Cyber Security
- International Relations
Format
The symposium will include invited talks, presentations on accepted papers, discussion, and a panel. We will announce the list of invited speakers and presentations after submissions are reviewed. In addition, symposium participants will be divided into interdisciplinary working groups based on their area of work. These working groups will be tasked with identifying societal problems or domains where AI could be applied, and then identifying the challenges in those specific application domains, along with potential AI approaches to tackle them.
Submissions
Submissions should be no longer than 6 pages (excluding references) in standard double-column AAAI format. Submit to EasyChair.
Organizing Committee
Eric Horvitz (Microsoft Research, Redmond, USA), Barbara Grosz (Harvard University, Cambridge, MA, USA), Amy Greenwald (Brown University, RI, USA), David Parkes (Harvard University, MA, USA), Carla Gomes (Cornell University, NY, USA), Stephen Smith (Carnegie Mellon University, PA, USA), Gregory Hager (Johns Hopkins University, MD, USA), Ann W. Drobnis (Computing Research Association, USA), Nicole Sintov (Ohio State University, OH, USA), Milind Tambe (University of Southern California, CA, USA), Amulya Yadav (University of Southern California, CA, USA), Fei Fang (Harvard University, MA, USA), Bryan Wilder (University of Southern California, CA, USA)
For More Information
For more information, please visit the supplemental symposium website.
Computational Construction Grammar and Natural Language Understanding
Computational construction grammar is emerging as a new line of research into language modeling that promises to yield advances in natural language processing (NLP) and natural language understanding (NLU), with applications to web querying, intelligent tutoring, translation, summarization, language grounding in robots, and more. Natural language understanding, which entails systems that take action without human intervention, remains intractable with current template-based or statistical approaches.
The characteristic features of the construction grammar approach and their relevance to NLP or NLU include the following:
1. Tight integration of morphosyntax and semantics/pragmatics — supporting better semantic analysis and language production, as well as cleaner handling of non-standard language such as metaphors, reduced constructions, and idioms.
2. Smooth integration of many different linguistic perspectives (phrase structure, functional and dependency structure, frame semantics, information structure, etc.).
3. Bi-directionality, in the sense of using the same architecture and the same language model for comprehension and formulation — supporting faster learning, a more compact representation of language knowledge, self-monitoring in language production, and better prediction in language comprehension.
4. Closer fit with human language processing and learning — allowing enhanced contact with the psycholinguistic community and leading to more intuitive language interfaces to autonomous systems.
The primary goals of this AAAI symposium are (1) to draw the attention of the AI community to the challenging technical issues and opportunities that the constructional perspective allows, (2) to nurture an emergent community of computational construction grammar developers that is well integrated within the broader AI community, (3) to compare existing implementations, understand open challenges, exchange technical solutions, and build up user communities, (4) to standardize emerging corpora and establish challenges and evaluation criteria, and (5) to survey the application potential of construction grammar in NLP/NLU applications.
Areas of Interest
This symposium primarily targets researchers in natural language processing or understanding and computational linguistics, but it will have broader appeal to the larger AI community, specifically researchers in knowledge representation, human-machine interaction, and machine learning. Its relevance to a wide range of AI applications (from intelligent robots to web querying) is clear.
The symposium will address different aspects of computational construction grammar and NLU research. There will be sessions for each of these:
- Theory and Linguistics
- Formalisms for construction grammar
- Natural Language Understanding (NLU)
- Constructicons and corpora annotated for construction grammar
- Construction grammar learning and adaptation
- Applications
Submissions
Please submit suggestions for panels and sessions by October 1 to either: steels@arti.vub.ac.be or feldman@icsi.berkeley.edu
Paper submissions may take the form of long papers (4-8 pages in AAAI format, including references) for oral presentation or short papers (up to 4 pages in AAAI format, including references) for poster presentation.
Chairs
Luc Steels
(steels@arti.vub.ac.be)
Institut de Biologia Evolutiva, Universitat Pompeu Fabra – CSIC
Doctor Aiguader, 88. 08003 Barcelona, Spain
Jerome Feldman
(feldman@icsi.berkeley.edu)
International Computer Science Institute
1947 Center St. Suite 600
Berkeley, CA 94704-1198, USA
For More Information
For more information, please visit the supplemental symposium website.
Organizing Committee Members
Adele Goldberg (Psychology of Language Lab, Princeton University, USA, adele@Princeton.edu), Katrien Beuls (Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium, katrien@arti.vub.ac.be), Nancy Chang (Machine Intelligence Group, Google Research, USA, nchang@gmail.com)
Computational Context: Why It’s Important, What It Means, and Can It Be Computed?
Context reputedly explains how the environment influences human perception, cognition and action. Context can be clear, uncertain or an illusion. Clear contexts: A chaplain giving last rites; a visit with a doctor; a policeman asking for registration and insurance. It is the word sequence in a sentence that allows humans to learn an unknown word. Context-specific dependencies have been applied in cancer and biomedical research. An organization’s context is its management, culture and systems. Uncertain contexts: The fog of war; a jury’s reaction to counter-arguments; a shout to “Abandon ship!”
Is context an illusion? Individuals are affected by illusions; for example, humans are prey to Adelson's checkerboard illusion, while a photometer is not. Rovelli, a physicist, wrote that "reality is not as it appears." In 1944, supporting Einstein's theory of relativity, a New York Times editorial declared that the physical world was "largely illusory." After reviewing numerous behavioral and social data sets (for example, polls) in the search for context, Dzhafarov and colleagues concluded that "none of these data provides any evidence for contextuality." Their conclusion indirectly supports Bekenstein, who suggests that the holographic principle may finally place the human struggle to interpret quantum mechanics into a more rational or intuitive context.
However, even outside of awareness, individuals act differently depending on whether they are alone or in a team. Can computational context with AI adapt to clear and to uncertain contexts, to change over time, and to individuals, machines, and robots in teams? If a program automatically "knows" the context that improves performance, it may not matter whether context is clear, uncertain, or illusory. This idea agrees with the Department of Defense's need for a hybrid team automatically "having a common perception of the surrounding world and able to place it into context." We believe that integrating systems to work together for the members of a hybrid team will present a computational challenge, but that it will also offer an opportunity to advance the science of context in teams, one of our research interests.
Participants
We seek participants who can discuss the meaning, value, and effect context has on performance. The topic is open-ended. We will consider all papers that address how context affects perception, cognition, and behavior, whether the context is clear, uncertain, or illusory. Our ultimate goal is to advance the automatic construction of context with AI to improve the performance of agents and of hybrid human, machine, and robot teams.
Organizers
Ranjeev Mittu (Naval Research Laboratory, ranjeev.mittu@nrl.navy.mil); W.F. Lawless (Paine College, w.lawless@icloud.com); Don Sofge (don.sofge@nrl.navy.mil); David Aha (david.aha@nrl.navy.mil)
For More Information
For more information, please visit the supplemental symposium website.
Designing the User Experience of Machine Learning Systems
The value proposition of consumer-facing machine learning systems is a better user experience: less cognitive load and hands-off automation. But how will that experience actually be delivered when the underlying systems behave in unpredictable, often inscrutable, ways? How will designers, who are ultimately responsible for the experience of a new technology, work with this new design material?
Today, semiautonomous, machine-learning-driven predictive systems appear in consumer-facing domains from smart homes to self-driving vehicles. Such systems aim to do everything from keeping plants healthy and homes safe to "nudging" people to change their behavior. They are the keystone of many consumer Internet of Things products and services. However, despite all the promise, there has been little discussion about how the design of such learning, adaptive, predictive systems will actually deliver multi-touchpoint, multi-device experiences, the form taken by modern cloud-based services (Facebook, Uber, Nest, Echo, and others).
This symposium aims to bridge the worlds of user experience design, service design, HCI, HRI and AI to discuss common challenges, identify key constituencies, and compare approaches to designing such systems. We welcome submissions from people working in design, industry, research and academe.
Topics
- Application- and domain-specific UX challenges versus general UX design challenges
- Communication of machine learning to end users, explanation of predictive behavior and expectation setting
- Potential constituencies of ML UX
- Designing for a machine-learning world of multiple predictive systems
- Multi-device, multi-touchpoint behavior
- Service design and machine learning
- Design deals in material properties; what are the material properties of predictive machine learning systems?
Format
The symposium will be a combination of presentations, posters, invited talks, plenary sessions, and breakouts, to maximize participant interaction. All attendees will be required to give a short (20-minute) presentation on their work or a subject of interest. We will alternate between these short presentations, design explorations in small groups, and large-group discussions.
Submissions
Prospective participants are invited to submit one or more of the following (please follow AAAI style guidelines):
- A short position paper (2–4 pages) in PDF format.
- A 30″×40″/A0 poster in PDF format.
- A 3-minute or shorter video in a common file format (AVI, MP4, etc.).
- An interactive demo.
- A panel proposal. Panel proposals should include a 400-word description of the topic and potential and agreed panel members.
Videos and interactive demos should be accompanied by an extended abstract (1–2 pages, PDF) of up to 2,000 words. Initial submissions can include a draft, rough cut, or storyboard of the poster/video/interactive demo with a text description of its contents.
Organizing Committee
Mike Kuniavsky (PARC, mikek@parc.com), Elizabeth Churchill (Google), Molly Wright Steenson (Carnegie Mellon University)
For More Information
For more information, please visit the supplemental symposium website.
Interactive Multisensory Object Perception for Embodied Agents
For a robot to perceive object properties with multiple sensory modalities, it needs to interact with objects through action. This interaction requires that the agent be embodied (that is, the robot interacts with the environment through a physical body within that environment). A major challenge is to get a robot to interact with the scene quickly and efficiently while utilizing multiple sensory modalities to perceive and reason about objects. The fields of psychology and cognitive science have demonstrated that humans rely on multiple senses (for example, auditory, haptic, and tactile) in a broad variety of contexts ranging from language learning to learning manipulation skills.
How do we collect large datasets from robots exploring the world with multisensory inputs, and what algorithms can we use to learn and act with this data? While the community has focused on how to deal with visual information (for example, deep learning for visual features), there have been far fewer explorations of how to utilize and learn from the very different scales of data collected from very different sensors.
The goal of this symposium is to bring together researchers from the fields of AI and robotics who share the goal of advancing the state of the art in robotic perception of objects. The symposium will consist of invited speakers, poster and breakout sessions, and panels over two days.
Submissions
We welcome abstract submissions of prior or ongoing work related to multisensory perception and embodied agents. Submissions should be 2–4 pages in length, plus an extra page for references, in AAAI format. Topics include (but are not limited to) the following:
- Multisensory perception
- Psychology of sensory inputs
- Robot learning using multisensory data
- Representations of multisensory perception and action
- Real-time perception and decision making using multisensory input
- Learning algorithms for auditory, visual, and haptic data
- Multisensory data collection
- Algorithms for embodied agents to interact with the real world
Organizing Committee
Vivian Chu (Georgia Institute of Technology), Jivko Sinapov (University of Texas at Austin), Jeannette Bohg (MPI for Intelligent Systems), Sonia Chernova (Georgia Institute of Technology), Andrea L. Thomaz (University of Texas at Austin)
For More Information
For more information, please visit the supplemental symposium website.
Learning from Observation of Humans
Learning from observation (LFO), also known as learning from demonstration, imitation learning, or behavioral cloning, and related to programming by demonstration and apprenticeship learning, studies how computers can learn to perform complex tasks by observing and thereafter imitating the performance of a (human) actor. LFO offers the promise of allowing machines to learn complex behaviors that would be difficult to program manually (for example, driving vehicles, robotic motion, and playing videogames). Modern training, education, and entertainment applications make extensive use of virtual agents, which must display complex intelligent behaviors. Driving simulations require realistically moving vehicles, simulated military training environments require friendly and unfriendly forces that present realistic and intelligent tactical behaviors, and computer games require artificial characters whose believable behaviors heighten the immersiveness of the games. Handcrafting those behaviors requires significant resources and is highly error-prone, making it practical only for small, well-defined behaviors. LFO offers a promising alternative.
After several decades of research, however, there are still a significant number of open research problems in LFO, ranging from algorithmic challenges (for example, designing learning algorithms that do not make the common supervised learning independent and identically distributed assumption, which is violated in most LFO settings), to evaluation methodologies, and human-computer/robot interaction (for example, designing algorithms that can learn from humans, or understanding how humans behave when they teach artificial agents). This symposium aims at advancing the state of the art in LFO and related disciplines by bringing together researchers from a broad set of backgrounds, and establishing bridges between the different communities working in these problems.
Submissions
We solicit both long (6 pages plus one for references) and short (3 pages plus one for references) submissions on related topics, as well as position papers (6 pages plus one for references). Submissions should be in AAAI conference format. All accepted papers will be scheduled for both poster and oral presentations. All presenters will be invited to provide a demonstration of their work during the poster session.
Organizing Committee
Santiago Ontañón (Drexel University), Avelino J. González (University of Central Florida), José L. Montaña (University of Cantabria, Spain)
For More Information
For more information, please visit the supplemental symposium website.
Science of Intelligence: Computational Principles of Natural and Artificial Intelligence
Science of intelligence is an emerging field dedicated to developing a computation-based understanding of intelligence — both natural and artificial — and to establishing an engineering practice based on that understanding.
This symposium provides a unique opportunity to bring together experts in artificial intelligence, cognitive science, and computational neuroscience to share and discuss the advances and the challenges of the study of the computational principles of natural and artificial intelligence.
The symposium will address ways to unify the computational principles of natural and artificial intelligence.
Keynote speakers will include James DiCarlo (MIT), Li Fei-Fei (Stanford), Surya Ganguli (Stanford), Samuel Gershman (Harvard), Kristen Grauman (University of Texas at Austin), Gabriel Kreiman (Harvard), Karen Livescu (Toyota Technological Institute at Chicago), Lakshminarayanan Mahadevan (Harvard), Aude Oliva (MIT), Pietro Perona (Caltech), Tomaso Poggio (MIT), Lorenzo Rosasco (Italian Institute of Technology), Amnon Shashua (Hebrew University and CTO at Mobileye), Joshua Tenenbaum (MIT), Shimon Ullman (Weizmann Institute), Patrick Winston (MIT), Daniel Yamins (Stanford), Alan Yuille (Johns Hopkins)
The three-day symposium will consist of keynote and invited talks, poster presentations, panel discussions, and a doctoral consortium.
Submissions
Authors wishing to participate are invited to submit an abstract (2–4 pages) describing work in progress or recently published work. Selected abstracts will be published as an AAAI technical report, and the authors will present their work as an invited talk or a poster presentation.
Students who submit an abstract will be eligible for the doctoral consortium. Selected students will be assigned a mentor from among the keynote speakers to discuss their work and future career plans.
Chairs
Gemma Roig (MIT)
gemmar@mit.edu
The Center for Brains Minds and Machines
Dept. of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Bldg. 46-5155, 77 Massachusetts Avenue
Cambridge, MA 02139
Tel: 617-324-3684
Xavier Boix (National University of Singapore)
elexbb@nus.edu.sg
Dept. of Electrical and Computer Engineering
National University of Singapore
#E4-06-21, 4 Engineering Drive 3
117583, Singapore
For More Information
For more information, please visit the supplemental symposium website.
Wellbeing AI: From Machine Learning to Subjectivity Oriented Computing
Wellbeing AI is an intelligent information technology that aims to promote psychological wellbeing (that is, happiness) and maximize human potential. Today's workplace environment escalates stress, provides unlimited caffeine, distributes nutrition-free "fast" food, and encourages unhealthy sleep behavior. While recent technological advances bring many truly great benefits, there is an opportunity to rethink the impact of digital technologies on human health and wellbeing. Wellbeing AI provides a way to understand how our digital experience affects our emotions and quality of life, and how to design a better wellbeing system that puts humans at the center.
Recently, deep learning and other advanced machine learning technologies have revolutionized computer vision, speech recognition, and natural language processing, and have brought promising results in many other areas. Despite this, applying these AI advances to human health and wellness problems remains challenging.
One of the big challenges is to understand human subjective knowledge and use it to design better health and wellbeing systems. We define subjectivity-oriented computing as an approach to designing and understanding computational systems through human subjective knowledge. The Oxford philosopher J. R. Lucas argued that an intelligent being must have self-awareness. This symposium discusses subjective intelligence by learning from the human self-awareness process.
Today's wellbeing science (or positive psychology) articulates that positive mental attitudes, including self-awareness, can have a huge impact not only on the prevention of disease, but also on maximizing human potential. It is now very important to connect these scientific findings with AI methodologies for better human-centric system design.
For example, we see the following four technical challenges:
(1) Representation of subjective knowledge
First, we need to represent tacit, subjective human health and wellness knowledge in an explicit and quantifiable way. Much of the knowledge in wellbeing science is subjective. For example, the fuzzy properties of subjective word embeddings in human health and wellness might be better represented with concrete mathematical structures.
(2) Deep Learning and other quantitative methods for Health and Wellness
Second, we need to explore advanced machine learning technologies, such as deep learning and other quantitative methods, in health and wellness domains. Right now, machine learning research is interested in getting computers to understand data that humans do: images, text, sounds, and so on. However, the focus is going to shift to getting computers to understand things that humans don't. We need to build a bridge that allows humans to understand these things.
(3) Models, Reasoning and Inference
Third, reasoning about data through representations should be understandable and accountable to humans. For example, we need to develop powerful tools for understanding what, exactly, deep neural networks and other quantitative methods are doing. Beyond increasing the accuracy of predictions, we need to understand causality with reliable models, reasoning, and inference.
(4) Better Wellbeing System Design
Furthermore, we need to understand the human. While recent technological advances bring many truly great benefits, there is an opportunity to rethink their impact. We need to understand how the AI revolution affects our emotions and our quality of life, and how to design a better wellbeing system that puts humans at the center.
This symposium aims to share the latest progress, current challenges, and potential applications related to AI for health and wellbeing. Work on the evaluation of digital experience and the understanding of human health and wellbeing is also welcome.
Invited Speakers
Steve Cole (UCLA, USA), Christopher Re (Stanford, USA), Kenji Suzuki (University of Tsukuba, Japan)
Submissions
Interested participants should submit either full papers (8 pages maximum) or extended abstracts (2 pages maximum). Extended abstracts should state your preferred presentation type: long paper (6–8 pages), short paper (1–2 pages), demonstration, or poster presentation. The electronic version of your paper should be sent to aaai2017-wba@cas.hc.uec.ac.jp by October 28, 2016.
Organizing Committee
Cochairs: Takashi Kido (Rikengenesis, Japan) and Keiki Takadama (The University of Electro-Communications, Japan)
Programming Committee
Melanie Swan (DIYgenomics, USA), Katarzyna Wac (Stanford University, USA and University of Geneva, Switzerland), Ikuko Eguchi Yairi (Sophia University, Japan), Fumiko Kano (Copenhagen Business School, Denmark), Chirag Patel (Stanford University, USA), Rui Chen (Stanford University, USA), Ryota Kanai (University of Sussex, UK.), Yoni Donner (Stanford, USA), Yutaka Matsuo (University of Tokyo, Japan), Eiji Aramaki (University of Tokyo, Japan), Pamela Day (Stanford, USA), Tomohiro Hoshi (Stanford, USA), Miho Otake (Chiba University, Japan), Yotam Hineberg (Stanford, USA), Yukiko Shiki (Kansai University, Japan), Takashi Maruyama (Stanford, USA), Katsunori Shimohara (Doshisha University, Japan)
For More Information
For more information, please visit the supplemental symposium website.