Sixteenth Annual AAAI Mobile Robot Competition
The Sixteenth Annual AAAI Mobile Robot Competition and Exhibition was held Monday–Thursday, July 23–26, in the Balmoral room.
This year’s robot competition and exhibition brought together teams from universities, colleges, and research laboratories to compete and to demonstrate state-of-the-art research in robotics and artificial intelligence.
Mobile Robot Workshop
The robot events commence with a workshop in which participants describe the research behind their entries. The workshop includes a panel of academic, industrial, and government roboticists addressing “The Personal Robotics Revolution: Where Does It Stand and Where Is It Going?”
Semantic Robot Vision Challenge
In this competition, robots are given a list of objects that they must locate and recognize. To determine what these objects look like, the robots are given an opportunity to search the web for images of the listed objects before beginning the physical search. This competition aims to push the state of the art in semantic image understanding by requiring that robots make use of the wealth of unstructured image data that exists on the Internet today.
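As a rough sketch of that workflow (not the challenge infrastructure itself), the Python fragment below gathers web images for each named object, builds a simple ORB-feature model per object with OpenCV, and then scores camera frames against those models. The object list and the search_web_images helper are hypothetical placeholders for whatever a team actually uses.

import cv2

OBJECT_LIST = ["red stapler", "coffee mug", "tennis ball"]   # example list

def search_web_images(query, count=20):
    """Hypothetical helper: return a list of BGR images for the query."""
    raise NotImplementedError("plug in a web image search service here")

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# 1. Before the physical search: build a descriptor bank per listed object
#    from downloaded web images.
models = {}
for name in OBJECT_LIST:
    banks = []
    for img in search_web_images(name):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, des = orb.detectAndCompute(gray, None)
        if des is not None:
            banks.append(des)
    if banks:
        models[name] = cv2.vconcat(banks)

# 2. During the physical search: score each camera frame against each model
#    and report the best-matching object.
def recognize(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, des = orb.detectAndCompute(gray, None)
    if des is None or not models:
        return None
    scores = {name: len(matcher.match(des, bank))
              for name, bank in models.items()}
    return max(scores, key=scores.get)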
The Robot Exhibition
The mission of the Robot Exhibition is twofold. The first goal is to demonstrate state-of-the-art research in a less structured environment than the competition events; the exhibition gives researchers an opportunity to showcase current robotics and embodied-AI research that does not fit into the competition tasks. The second is to provide a venue for faculty using robotics in education to present their approaches and experiences.
Additional Information
Updates to the robot program are available on the supplemental robot pages. A PDF version of the call for participation is also available.
General Cochairs
Jeffrey Forbes, Duke University (forbes@cs.duke.edu)
Paul Oh, Drexel University (paul@cbis.ece.drexel.edu)
Semantic Robot Vision Cochairs
Paul Rybski, Carnegie Mellon University
Alexei Efros, Carnegie Mellon University
Exhibition Chairs
Research: Andrea L. Thomaz, Massachusetts Institute of Technology
Education: Zach Dodds, Harvey Mudd College
Mobile Robot Workshop Chair
Chad Jenkins, Brown University
Robot Teams
Brown University
Robotics, Learning and Autonomy at Brown
Event: Robot Exhibition
Harvey Mudd College
Event: Robot Exhibition
Kansas State University
KSU Willie
Event: Robot Competition
Princeton University/University of Illinois at Urbana-Champaign
OPTIMOL
Team Members: Fei-Fei Li (Princeton), Jia Li (Illinois), Juan Carlos Niebles (Illinois), Brendan Collins (Princeton), Rahul Mehta (Illinois)
Event: Robot Competition and Exhibition
OPTIMOL is a novel, automatic dataset-collection and model-learning system for object categorization developed by a joint UIUC-Princeton team. Our algorithm mimics the human learning process of iteratively accumulating model knowledge and image examples. As a fully automated system, OPTIMOL uses the Internet as a (nearly) unlimited source of images. Learning and image collection are performed by applying object recognition techniques iteratively and incrementally. The goal of this work is to exploit this tremendous web resource to learn robust object category models for detecting and searching for objects in real-world cluttered scenes.
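The description above is an iterative collect-classify-retrain loop. The sketch below is not the OPTIMOL algorithm itself, only a minimal Python illustration of that general loop using scikit-learn's incremental SGDClassifier; the feature extractor, the image source, and the acceptance threshold are assumed placeholders.

import numpy as np
from sklearn.linear_model import SGDClassifier

def extract_features(image):
    """Placeholder: map an image to a fixed-length feature vector."""
    raise NotImplementedError

def fetch_image_batches(keyword):
    """Placeholder: yield batches (lists) of candidate web images."""
    raise NotImplementedError

def learn_category(keyword, seed_images, background_images,
                   accept_threshold=0.9):
    # Initialize an incremental classifier from a few seed examples of the
    # category (label 1) and some background images (label 0).
    model = SGDClassifier(loss="log_loss")
    X = np.array([extract_features(im)
                  for im in seed_images + background_images])
    y = np.array([1] * len(seed_images) + [0] * len(background_images))
    model.partial_fit(X, y, classes=np.array([0, 1]))

    dataset = list(seed_images)
    # Iteratively classify new web images; accept confident positives into
    # the dataset and update the model with them.
    for batch in fetch_image_batches(keyword):
        Xb = np.array([extract_features(im) for im in batch])
        probs = model.predict_proba(Xb)[:, 1]
        accepted = [im for im, p in zip(batch, probs) if p >= accept_threshold]
        if accepted:
            Xa = np.array([extract_features(im) for im in accepted])
            model.partial_fit(Xa, np.ones(len(accepted), dtype=int))
            dataset.extend(accepted)
    return model, dataset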
Southern Illinois University, Edwardsville
Fishtank Assassin
Event: Robot Exhibition
University of British Columbia
UBC LCI Robotics
Event: Robot Competition
University of Manitoba
Keystone Mixed Reality
Event: Robot Exhibition
University of Washington
Team Sunflowers
Team Contact: Masaharu Kobashi
Event: Robot Competition and Exhibition
Our robot interprets its environment using vision alone; it does not use any range finder. It has two video cameras whose pan, tilt, vergence, focus, and exposure are controlled by the onboard computers to perform active vision. It can accommodate up to five ATX-size computer motherboards to handle the CPU-intensive vision computation. The robot is designed for both indoor and outdoor use, with two powerful motors and a sturdy chassis that can carry up to 250 pounds of batteries for extended operation of its computers.
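As an illustration of the kind of active-vision control described above, the Python sketch below runs a simple proportional loop that re-centers a detected target by nudging pan and tilt; the PanTiltUnit class, the camera interface, and the gains are hypothetical, not the team's actual hardware API.

import time

class PanTiltUnit:
    """Hypothetical interface to a pan/tilt camera head (angles in degrees)."""
    def nudge(self, d_pan, d_tilt):
        raise NotImplementedError

def track_target(camera, ptu, detect_target,
                 frame_w=640, frame_h=480, gain=0.05, period=0.05):
    """Keep a detected target near the image center by nudging pan/tilt."""
    while True:
        frame = camera.read()              # assumed camera interface
        target = detect_target(frame)      # (x, y) pixel position or None
        if target is not None:
            x, y = target
            err_x = x - frame_w / 2        # positive: target right of center
            err_y = y - frame_h / 2        # positive: target below center
            # Proportional correction toward the target.
            ptu.nudge(d_pan=gain * err_x, d_tilt=-gain * err_y)
        time.sleep(period)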
Virginia Tech
RoMeLa: Robotics and Mechanisms Laboratory
Team Members: Dr. Dennis Hong, Karl Muecke, Brad Pullins, and Gabriel Goldman
Event: Robot Exhibition
“DARwIn: Dynamic Anthropomorphic Robot with Intelligence” is a humanoid bipedal robot research platform for studying dynamic gaits and locomotion. Outfitted with a sensor suite and onboard computers, DARwIn can also perform complex high-level tasks and autonomous behaviors such as playing soccer. DARwIn will be the first and only US entry in the humanoid division of the international autonomous robot soccer competition, RoboCup.