Learning Basis Representation to Refine 3D Human Pose Estimations

Authors

  • Chunyu Wang, Microsoft Research Asia
  • Haibo Qiu, University of Science and Technology of China
  • Alan L. Yuille, Johns Hopkins University
  • Wenjun Zeng, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v33i01.33018925

Abstract

Estimating 3D human poses from 2D joint positions is an ill-posed problem, and it is further complicated by the fact that the estimated 2D joints usually contain errors to which most 3D pose estimators are sensitive. In this work, we present an approach to refine inaccurate 3D pose estimations. The core idea is to learn a set of bases that tightly approximate the low-dimensional pose manifold, so that a 3D pose is represented by a convex combination of the bases. First, this representation ensures that, globally, the refined poses stay close to the pose manifold, avoiding the generation of illegitimate poses. Second, the learned bases also guarantee that the distances among the body joints of a pose remain within reasonable ranges. Experiments on benchmark datasets show that our approach produces more legitimate poses than the baselines; in particular, the limb lengths are closer to the ground truth.
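The sketch below is only an illustration of the convex-combination representation described in the abstract, not the authors' implementation or learning procedure. The bases here are random placeholders (in the paper they are learned to cover the pose manifold), the joint count and basis count are assumed values, and the refinement step is posed as a generic constrained least-squares projection onto the convex hull of the bases.

```python
# Illustrative sketch only: refine a noisy 3D pose by expressing it as a
# convex combination of (placeholder) bases. Not the authors' method.
import numpy as np
from scipy.optimize import minimize

J, K = 17, 50                              # assumed: number of joints, number of bases
rng = np.random.default_rng(0)
bases = rng.standard_normal((K, 3 * J))    # stand-in for learned pose bases
noisy_pose = rng.standard_normal(3 * J)    # stand-in for an inaccurate 3D estimate

def refine(pose, bases):
    """Project a pose onto the convex hull of the bases:
    min_w ||w @ bases - pose||^2  s.t.  w >= 0, sum(w) = 1."""
    K = bases.shape[0]
    objective = lambda w: np.sum((w @ bases - pose) ** 2)
    constraints = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, None)] * K
    w0 = np.full(K, 1.0 / K)               # start from the uniform combination
    res = minimize(objective, w0, bounds=bounds,
                   constraints=constraints, method='SLSQP')
    return res.x @ bases                   # refined pose lies in the convex hull

refined_pose = refine(noisy_pose, bases)
```

Because the output is constrained to the convex hull of the bases, properties shared by all bases (such as reasonable inter-joint distances) carry over to the refined pose, which is the intuition the abstract describes.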

Published

2019-07-17

How to Cite

Wang, C., Qiu, H., Yuille, A. L., & Zeng, W. (2019). Learning Basis Representation to Refine 3D Human Pose Estimations. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8925-8932. https://doi.org/10.1609/aaai.v33i01.33018925

Section

AAAI Technical Track: Vision