Localization in mobile robotics is a well-studied problem in many environments. Map building with occupancy grids (probabilistic, grid-based maps of the environment) is also fairly well understood. However, accomplishing both localization and mapping at once has proven to be a difficult task: updating a map with new sensor information before determining the correct adjustment to the robot's starting position and heading can be disastrous. We hope to solve this problem by performing localization on higher-level objects computed from the raw sensor readings. Any object that can be detected from a single sensor reading can be used to aid the localization process. In addition, once the data from the robot's sensors have been turned into a map with human-identifiable features, the map can easily be augmented with additional information about those objects (unique names, attributes, etc.).
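The occupancy grids mentioned above are commonly maintained with a per-cell Bayesian update in log-odds form, where each sensor reading shifts a cell's belief toward occupied or free. A minimal sketch of this standard technique (the class and parameter names, grid size, and fixed inverse-sensor-model probability are illustrative assumptions, not details from this paper):

```python
import math

class OccupancyGrid:
    """Toy log-odds occupancy grid (hypothetical names; for illustration only)."""

    def __init__(self, width, height, p_prior=0.5):
        # Each cell stores the log-odds of being occupied; the prior of 0.5
        # corresponds to log-odds 0 (completely unknown).
        self.log_odds = [[math.log(p_prior / (1 - p_prior))] * width
                         for _ in range(height)]

    def update(self, x, y, p_occ):
        # Bayesian update: add the log-odds of the inverse sensor model's
        # occupancy probability for this reading.
        self.log_odds[y][x] += math.log(p_occ / (1 - p_occ))

    def probability(self, x, y):
        # Convert log-odds back to a probability in [0, 1].
        l = self.log_odds[y][x]
        return math.exp(l) / (1 + math.exp(l))

grid = OccupancyGrid(10, 10)
grid.update(3, 4, 0.9)  # a sensor reading says cell (3, 4) is likely occupied
grid.update(3, 4, 0.9)  # a second agreeing reading strengthens the belief
print(round(grid.probability(3, 4), 2))  # prints 0.99
```

The hazard the text describes follows directly from this update rule: if the robot's pose estimate is wrong, each `update` call lands on the wrong cells, and the erroneous evidence accumulates in the grid.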