ZoomNet: Part-Aware Adaptive Zooming Neural Network for 3D Object Detection

Authors

  • Zhenbo Xu, University of Science and Technology of China
  • Wei Zhang, Baidu Research
  • Xiaoqing Ye, Baidu Research
  • Xiao Tan, Baidu Research
  • Wei Yang, University of Science and Technology of China
  • Shilei Wen, Baidu Research
  • Errui Ding, Baidu Research
  • Ajin Meng, University of Science and Technology of China
  • Liusheng Huang, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i07.6945

Abstract

3D object detection is an essential task in autonomous driving and robotics. Though great progress has been made, challenges remain in estimating 3D poses for distant and occluded objects. In this paper, we present a novel framework named ZoomNet for stereo-imagery-based 3D detection. The pipeline of ZoomNet begins with an ordinary 2D object detection model, which is used to obtain pairs of left-right bounding boxes. To further exploit the abundant texture cues in RGB images for more accurate disparity estimation, we introduce a conceptually straightforward module, adaptive zooming, which simultaneously resizes 2D instance bounding boxes to a unified resolution and adjusts the camera intrinsic parameters accordingly. In this way, we are able to estimate higher-quality disparity maps from the resized box images and then construct dense point clouds for both nearby and distant objects. Moreover, we propose learning part locations as complementary features to improve robustness against occlusion, and put forward the 3D fitting score to better estimate the 3D detection quality. Extensive experiments on the popular KITTI 3D detection dataset indicate that ZoomNet surpasses all previous state-of-the-art methods by large margins (improved by 9.4% on APbv (IoU=0.7) over pseudo-LiDAR). An ablation study also demonstrates that our adaptive zooming strategy brings an improvement of over 10% on AP3d (IoU=0.7). In addition, since the official KITTI benchmark lacks fine-grained annotations such as pixel-wise part locations, we also present our KFG dataset, which augments KITTI with detailed instance-wise annotations including pixel-wise part location, pixel-wise disparity, etc. Both the KFG dataset and our code will be publicly available at https://github.com/detectRecog/ZoomNet.
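The adaptive zooming step described above couples a crop-and-resize of an instance box with a matching update of the camera intrinsics, so that disparity estimated on the zoomed crop remains geometrically consistent. A minimal sketch of that intrinsics bookkeeping, assuming a simple pinhole model (the function name and box/size conventions here are illustrative, not from the paper):

```python
import numpy as np

def adaptive_zoom_intrinsics(K, box, target_size):
    """Adjust pinhole intrinsics K (3x3) after cropping `box` and
    resizing it to `target_size` (hypothetical helper for illustration).

    box:         (x0, y0, x1, y1) in pixels of the original image
    target_size: (width, height) of the unified, zoomed resolution
    """
    x0, y0, x1, y1 = box
    tw, th = target_size
    sx = tw / (x1 - x0)   # horizontal zoom factor
    sy = th / (y1 - y0)   # vertical zoom factor

    K_new = K.astype(float).copy()
    K_new[0, 0] = K[0, 0] * sx            # focal length fx scales with the zoom
    K_new[1, 1] = K[1, 1] * sy            # focal length fy scales with the zoom
    K_new[0, 2] = (K[0, 2] - x0) * sx     # principal point shifts with the crop,
    K_new[1, 2] = (K[1, 2] - y0) * sy     # then scales with the resize
    return K_new
```

With larger effective focal lengths, the same object subtends more pixels, which is what allows finer disparity estimation for distant instances.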

Published

2020-04-03

How to Cite

Xu, Z., Zhang, W., Ye, X., Tan, X., Yang, W., Wen, S., Ding, E., Meng, A., & Huang, L. (2020). ZoomNet: Part-Aware Adaptive Zooming Neural Network for 3D Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12557-12564. https://doi.org/10.1609/aaai.v34i07.6945

Section

AAAI Technical Track: Vision