AAAI Mobile Robot Competition and Exhibition
The Fifteenth Annual Robot Competition and Exhibition will be held in
Boston, MA, from July 16–20, 2006, in conjunction with the Twenty-First National Conference on Artificial Intelligence. This
page contains information and links for potential
participants. The original call for participation is also available. Questions or comments should be sent to the general
cochairs for 2006:
- Paul E. Rybski (firstname.lastname@example.org)
- Jeffrey Forbes (email@example.com)
All teams should watch the competition page for updated
information about the events.
The list of papers to be presented at AAAI-06 can be found in the AAAI-06 proceedings.
The Robot Competition organizers would like all interested parties to contact them early if they are considering participating. To take part in rules discussion and to keep up with the latest events, they are organizing an email list for this year’s event.
To subscribe to the AAAI 2006 e-mail discussion list, send e-mail to Paul
E. Rybski, who will subscribe you to a listserv (you can unsubscribe at any time).
There are two steps for registration:
- Register your team’s intent to participate with the mobile robot
competition/exhibition web site.
- Once your registration has been approved (by e-mail notification),
you will be required to complete the AAAI registration form and
submit it with your payment.
You can see the currently registered teams at the competition site.
Deadlines for Conference Abstracts
- April 11 Deadline for submission of optional two-page conference
abstracts to organizers (note: abstracts are not required for
participation)
- April 18 Notification of acceptance/rejection
- April 25 Final camera-ready submissions due (note: full
registration is also required on this date if the abstract is to
be published)
Deadlines for Registration
- May 15 Deadline for teams to register their intent to
participate on the web site.
- May 16 Teams are notified whether their application for
registration has been accepted.
- May 22 Team leaders send all paperwork to AAAI
Schedule at the Conference
- July 16 Robot teams arrive at venue and set up
- July 17-19 Robot Competition/Exhibition Events
- July 20 Robot Competition/Exhibition Workshop
Scavenger Hunt
Zach Dodds, Harvey Mudd University
Paul Oh, Drexel University
Robots search the conference hotel area for a checklist of given
objects, such as people or information located at specific places or
available at specific times. This task requires robots to navigate and
map a dynamic area with moving objects and people in order to acquire
the items on the checklist.
We welcome a variety of teams to enter with one or more robots and/or
human operators, yet every entrant must demonstrate AI techniques
during the competition. A key aspect of this event is having the
robots interact with people in the environment during timed missions
run throughout the course of the conference. More specific rules and
guidelines will be posted shortly. We encourage urban search and
rescue teams with AI components to consider joining this event.
In this year’s scavenger hunt robot competition, entries will reason about and navigate the conference’s foyer, hallways, and rooms. We welcome entries from any subfield of AI with a spatial reasoning component. Systems incorporating spatial-reasoning techniques from disparate subfields, e.g., natural language processing, human-robot interaction, multiagent cooperation, and/or sliding-scale autonomy are encouraged, as are more traditional entries focusing on navigation and mapping.
The competition will consist of two phases: a demonstration created primarily by each participating team and a challenge directed by the contest judges.
The demonstration phase
In the demonstration, participants will show off their system’s abilities within the conference environment. For example, a participant might wish to demonstrate the ability of a robot to follow a trail of colored paper in the conference hall, receive a designated visual cue, and then head to an “X” marking the spot of their treasure. Another entry might create a spatiotemporal map of the people around it and their interactions. This might be accomplished through sensor observations or by interacting with passersby. During this demonstration phase, participants largely set their own goals and exhibit what their system is capable of. This phase of the competition is especially friendly to other robotic contest entries, educational projects, and systems that take novel AI approaches to environmental reasoning.
The challenge phase
In the predetermined challenge, robots will have to identify the location of a number of objects chosen by the judges (“the hunt”). These objects will be selected to respect the primary sensing modalities of each entrant: generality is an objective, but not the only one. Examples of some of the possible objects appear below. This task requires robots to explore a dynamic area, including moving objects/people, in order to acquire objects to satisfy the checklist.
In addition to this scavenger hunt challenge, judges may ask participants to vary the operating conditions from their demonstration phase in order to explore the limits of their system’s capabilities.
Each contestant will be given a list of items to find. To be as concrete as possible, these items will be chosen from the linked scavenger hunt item page; all are available at very low cost from Wal-Mart or similar stores.
System-specific Accommodations Welcome
To encourage all kinds of entries, participants may substitute the above items with others, if the sensor suite of their robotic system requires it. Accommodations will be worked out on an individual basis – teams are welcome to bring their own “objects” to find during the challenge phase in any case. In general, system-specific alterations to accentuate the capabilities (and downplay limitations) of each entry are welcome.
For example, an entry with range sensors but without a camera might substitute distinguishable architectural features, such as a particular corridor or niche, for the above items. A system using audio input might seek out the source of a particular voice or recording being played. Systems whose focus is human interaction might seek out a particular individual or, more generally, a person exhibiting a specified behavior.
Regardless of approach, robots should report the location of the scavenger hunt items they find. This report may be in the form of a natural language utterance, a map of the environment showing the location of items, or if the item can be manipulated, by picking up the object and returning it to the starting point.
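The map-style report described above can be sketched as follows. This is purely illustrative (the class, item names, and coordinates are invented, not part of the contest rules): it records where each found item lies and renders the result as simple natural-language statements.

```python
# Illustrative sketch (not from the contest rules) of one possible report
# format: a sparse map recording where each scavenger-hunt item was found,
# rendered as natural-language statements. Names and coordinates are made up.

class ItemMap:
    def __init__(self):
        self.items = {}  # item name -> (x, y) position in meters

    def record(self, item, x, y):
        """Note the location at which an item was found."""
        self.items[item] = (x, y)

    def report(self):
        """Render the map as simple natural-language statements."""
        return [f"Found {item} at ({x:.1f} m, {y:.1f} m)."
                for item, (x, y) in sorted(self.items.items())]

m = ItemMap()
m.record("beach ball", 4.0, 2.5)
m.record("stuffed animal", 10.2, 7.8)
for line in m.report():
    print(line)
```

A richer entry might back the same report with a full occupancy-grid or landmark map; the point is only that the robot's spatial reasoning ends in a report a judge can check.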
Contestants may enter a team of robots and will be more favorably judged if they demonstrate some form of cooperation.
We recognize that direct comparison of potentially very different entries is not easy. However, the judges will base their fundamental assessments on the extent and success of the spatial reasoning that each system demonstrates, given the naturalness and perceived difficulty of its operating conditions. The minimum requirement is a mobile robot.
Each judge on the panel will give a subjective score between 1 and 10 for each entry. Their scores will be averaged to produce a final score. The following criteria will be used as a guide for the judges’ considerations.
- Autonomy and shared autonomy We welcome a variety of teams to enter with one or more robots and/or human operators, though every entrant must demonstrate AI techniques during the competition. In particular, we encourage urban search and rescue teams with AI components to consider joining this event. Approaches resulting in systems with shared autonomy or full autonomy will be considered on equal footing. In shared-autonomy systems, judges will consider the naturalness of both the interface and the delegation of tasks to the robotic system and its human assistants. In fully autonomous systems, the extent of that autonomy will be evaluated.
- Environmental modification Ideally, an entry would interact with the conference environment without modifying the environment itself. By default, the staging area for the scavenger hunt will be a foyer of the conference venue, along with its accompanying hallways and rooms. The environment will not be engineered for the event, except that the density of people will be kept relatively low: crowding around a robot will not be allowed. Participants are also welcome to demonstrate capabilities under restricted conditions. In such cases the nature and extent of the restrictions should be well understood and conveyed to the judges.
- Unexpected, dynamic, and/or human interactions A key aspect of the scavenger hunt competition is having robots interact with people present in the environment. This category will assess systems’ ability to handle unmodeled activity or changes in the environment. Robustness to such phenomena is a hallmark of intelligent spatial reasoning. As with the other judging criteria, participants may request that onlookers and judges keep to specific types of interactions. Robotic systems that make such requests for themselves will be judged even more favorably.
- Accuracy In order to convey its reasoning about the environment, each scavenger hunt entry should create and convey one or more representations of its surroundings. Many such “maps” are possible, e.g., traditional dense maps, sparse, loosely-connected collections of landmark locations, networks of learned parameters, or other summaries of the systems’ spatial input data. Novel representations or approaches integrating diverse facets of AI are welcome. Judges will consider both the accuracy and utility of these representations in the demonstration and challenge phases of the competition.
- Range and completeness Judges will assess the subset of the conference environment which each system can cope with, especially in light of the particular sensors available to each entry. For example, a system equipped with a laser range finder would be expected to reason about a larger swath of area than one with only a set of IR sensors. “Completeness” considerations include the variety of sensory modalities supported and their extent. For example, can a system locate objects not on the floor? Can a system distinguish objects using visual, auditory, direct range-sensing, or other means?
- Speed is desirable, but it is not as important as a system’s ability to interact with and reason about the (relatively) unmodified conference environment.
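The scoring scheme described above (each panel judge gives a subjective score from 1 to 10, and the scores are averaged into a final score) can be sketched as follows; the function name is illustrative, not part of any official scoring software.

```python
# Minimal sketch of the stated scoring scheme: each judge gives a
# subjective score from 1 to 10, and an entry's final score is the
# average. The function name is illustrative.

def final_score(judge_scores):
    """Average a panel's subjective 1-10 scores for one entry."""
    if not judge_scores:
        raise ValueError("at least one judge score is required")
    for s in judge_scores:
        if not 1 <= s <= 10:
            raise ValueError("scores must be between 1 and 10")
    return sum(judge_scores) / len(judge_scores)

print(final_score([7, 8, 6, 9]))  # 7.5
```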
The contestants will be evaluated on overall success as well as on any particular abilities they incorporate into their solutions, such as:
- Technical Innovation
- Novel spatial-reasoning approaches
- Mapping and navigation strategy
- Object Recognition
- Object Manipulation
- Multi-Agent Cooperation
Linked here is a pdf showing the atrium area of the conference venue – this is where the robot competition and exhibition will be held, including the scavenger hunt.
An overall first place will be determined. Additional places and prizes for innovative aspects of specific solutions may also be awarded.
Human Robot Interaction
Matthias Scheutz, University of Notre Dame
The Human-Robot Interaction event focuses on engaging interactions
between robots and people. It includes a more structured version of
last year’s Open Interaction Event as well as the past Robot
Challenge. Teams are asked to submit entries of their own tasks for
any of seven interaction categories. The first six demonstrate
particular aspects of human-robot interaction, while the seventh is an
“integration category,” for which only tasks that demonstrate aspects
from at least three of the first six categories are eligible.
The past Robot Challenge is now viewed as a formalized instance of
category 7 with a fixed task specification: starting at the entrance to
the conference center and finding the registration desk, registering for
the conference, performing volunteer duties as required, interacting with
conference attendees, and finally reporting at a prescribed time to a
conference hall to give a talk.
Building on the success of the Open Interaction Event in 2005, the goal of the AAAI 2006 Human-Robot Interaction event is to demonstrate engaging interactions between people and robots. Unlike last year, this year’s event will provide a more structured framework for competition that will both allow teams to compete directly in seven pre-defined categories and allow judges to better evaluate the employed AI techniques and their level of sophistication. Critically, all categories will be aimed at human-robot interaction and involve activities that intrinsically integrate perception and action. In addition, all categories will also involve one or more higher-level AI techniques (e.g., natural language understanding, reasoning, learning):
- Recognition of and reaction to human motions and/or gestures (e.g., mimicking human motion; naming a human’s action; following a hand motion command; determining the referent of a pointing finger; etc.)
- Emotion recognition and appropriate emotion expression (e.g., noticing surprise in a human face and expressing surprise in response; determining frustration in the human’s voice and expressing regret for being slow in understanding; noticing somebody said or did something funny and laughing at the right moment; etc.)
- Natural language understanding and action execution (e.g., following requests from humans to move in particular ways; getting a requested item from some other person; understanding descriptions of directions and applying them to lead other people to specific places; etc.)
- Perceptual learning through human teaching and subsequent recognition and categorization of people, objects, locations or actions (e.g., remembering the face of a person; learning of a location in the environment and being able to remember and recognize it; learning what it means to “turn around” and being able to repeat it; learning what an object like a Coke can looks like and recognizing it among other different objects; etc.)
- Perception, reasoning, and action (e.g., noticing that a person moved behind an obstacle and did not reappear, concluding that the person must still be behind the obstacle, and announcing where the person is; determining that an object that the robot cannot move is blocking its way, noticing that a person is moving a similar object in a different location, and asking a person close-by to help with moving the object; etc.)
- Shared attention, common workspace, intent detection (e.g., remembering referents from previous sentences and being able to disambiguate “this” and “that”; following human eye gaze to determine objects of interest in the environment and using shared attention in constructing referents in sentences or picking topics of conversation — “did you see that? the door just closed”; deriving human intent from multimodal information including gestures, body language, facial expressions, head movements, prosodic information, and linguistic expression — “I see you are pressed for time, maybe we can chat later.” derived from watching the person after the person nervously turned the head left and right and looked at their watch, or “Would you like me to get you a drink?” after observing that the person had noticed that everybody in their group had a drink; etc.)
- Integration challenge: demonstration of an extended, multimodal interaction that combines at least three (!) of the above six categories. As a specific instance, with clearly specified conditions, the previous AAAI Robot Challenge would fall under this category (i.e., (1) starting at the entrance to the conference center and finding the registration desk, (2) registering for the conference, (3) performing volunteer duties as required, (4) interacting with conference attendees, and (5) finally reporting at a prescribed time to a conference hall to give a talk). However, other integration projects are also welcome. Teams who are planning to participate in category 7 should submit a short description of their “integration challenge” to the chair for approval.
All entrants may compete in as many categories as they like, but must compete in at least one. Beyond meeting the basic requirements for a category, we are looking for systems that are interesting and fun to interact with.
Information for Participants:
During the exhibition, each team will be given at least two specific time slots during which their entry will be featured. One of the time slots will be used for judging. Teams may also practice or demonstrate their entries at other times during the exhibition as long as they do not conflict with the featured team. Each team will need to discuss the needs of their entry (area required, etc.) so that we can best coordinate.
As in last year’s competition, a combination of ratings from audience members, other teams, and a judge panel will be used to determine the winners. Winners will be determined for each individual category and certificates will be awarded for outstanding or creative examples of different types of AI and social interaction. A separate award will be given to any team meeting the AAAI Robot Challenge in category 7 (that team may or may not be the winner of category 7). Moreover, an overall winner of the “Open Interaction Event” will be determined based on the individual category results. Final judging policies will be discussed prior to the event.
The regular audience at AAAI is becoming increasingly habituated to robots wandering around and tends not to pay them much attention anymore. You will get individuals coming up to your robots and “kicking the tires” a bit (hopefully figuratively, but sometimes literally). You will want to make sure your robot can grab attention. Visitors to the conference will tend to crowd the robot in groups as they come through (particularly during breaks in the conference talks), so your robot ought to be able to handle a press of people and deal with the situation robustly.
For mobile and wandering robots, try to keep some distance so that it doesn’t look like you are shepherding or controlling the robot. Do make sure to have somebody on hand to talk to the audience and answer questions (and to step in if anything goes wrong!), but it is important that your entry be able to stand on its own without need for explanation.
The Robot Exhibition
Debra Burhans, Canisius College
The mission of the Robot Exhibition is twofold. First, it
demonstrates state-of-the-art research in a less structured
environment than the competition events; the exhibition gives
researchers an opportunity to showcase current robotics and
embodied-AI research that does not fit into the competition
tasks. Second, it provides a venue for faculty using robotics in
education to present their approaches and experiences. We encourage
participation from all areas.
All participants in the robotics events (competitive teams or teams participating only in the exhibition) will have the opportunity to have a booth on display in the robotics exhibition area during the technical sessions, at the reception on Monday, and at the poster session on Wednesday. Because cleanup and the workshop take place on Thursday, there will be no exhibition events at that time.
The Mobile Robot Workshop
Bob Avanzato, Penn State Abington
There will be a robotics workshop on the last day of the conference.
Teams who receive travel support are required to attend and present at the workshop.
All other participants are strongly encouraged to attend and present. A
research paper will be required within one month after the end of the
workshop and will be published in a workshop proceedings by AAAI.