Abstract:
In this paper, we present a two-step methodology to improve existing human pose estimation methods from a single depth image. Instead of learning the direct mapping from the depth image to the 3D pose, we first estimate the orientation of the standing person seen by the camera and then use this information to dynamically select a pose estimation model suited for this particular orientation. We evaluated our method on a public dataset of realistic depth images with precise ground-truth joint locations. Our experiments show that our method decreases the error of a state-of-the-art pose estimation method by 30%, or reduces the size of the needed learning set by a factor larger than 10.
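The abstract describes a two-step pipeline: estimate the person's orientation, then route the depth image to the pose model trained for that orientation. Below is a minimal sketch of that control flow, assuming discrete orientation bins; `estimate_orientation`, `PosePipeline`, `N_BINS`, and the per-bin models are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of the two-step pipeline (hypothetical interfaces; the paper
# does not specify them). Step 1 estimates the standing person's orientation
# from the depth image; step 2 selects the pose model trained for the closest
# orientation bin and runs it on the same image.

import numpy as np

N_BINS = 8  # assumed number of discrete orientation bins


def estimate_orientation(depth_image: np.ndarray) -> float:
    """Placeholder for the orientation estimator (angle in degrees)."""
    raise NotImplementedError


class PosePipeline:
    def __init__(self, orientation_models):
        # orientation_models: list of N_BINS pose estimators, one per bin
        self.models = orientation_models

    def predict(self, depth_image: np.ndarray) -> np.ndarray:
        # Step 1: estimate the person's orientation with respect to the camera.
        angle = estimate_orientation(depth_image) % 360.0
        # Step 2: pick the model trained for the matching orientation bin.
        bin_index = int(angle // (360.0 / N_BINS)) % N_BINS
        model = self.models[bin_index]
        # The selected model maps the depth image to 3D joint positions.
        return model.predict(depth_image)
```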