3D Human Pose Estimation via Explicit Compositional Depth Maps

Authors

  • Haiping Wu, McGill University
  • Bin Xiao, Bytedance

DOI:

https://doi.org/10.1609/aaai.v34i07.6923

Abstract

In this work, we tackle the problem of estimating 3D human pose in camera space from a monocular image. First, we propose densely-generated limb depth maps, which are well aligned with image cues, to ease the learning of body-joint depths. Then, we design a lifting module from 2D pixel coordinates to 3D camera coordinates that explicitly takes the depth values as input and is consistent with the camera perspective projection model. We show that our method achieves superior performance on the large-scale 3D pose datasets Human3.6M and MPI-INF-3DHP, setting a new state of the art.
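The lifting step described in the abstract follows the standard pinhole perspective projection model: given a joint's pixel coordinates and its predicted depth, the 3D camera-space position is recovered by back-projection through the camera intrinsics. The sketch below is a minimal NumPy illustration of that geometric relation, not the authors' implementation; the function name and array shapes are assumptions for the example.

```python
import numpy as np

def lift_to_camera_space(uv, depth, K):
    """Back-project 2D pixel coordinates to 3D camera coordinates.

    Inverts the pinhole projection u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv    : (N, 2) array of pixel coordinates (u, v)
    depth : (N,)   per-joint depth Z (e.g. read off a predicted depth map)
    K     : (3, 3) camera intrinsic matrix
    Returns an (N, 3) array of (X, Y, Z) in camera space.
    """
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    X = (uv[:, 0] - cx) * depth / fx
    Y = (uv[:, 1] - cy) * depth / fy
    return np.stack([X, Y, depth], axis=1)
```

Because the lifting is an exact inverse of the projection, projecting the recovered 3D points back through `K` reproduces the input pixels, which makes the module easy to sanity-check.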

Published

2020-04-03

How to Cite

Wu, H., & Xiao, B. (2020). 3D Human Pose Estimation via Explicit Compositional Depth Maps. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12378-12385. https://doi.org/10.1609/aaai.v34i07.6923

Section

AAAI Technical Track: Vision