Abstract:
Accurately predicting 3D body joint positions in real time from a depth
image is the cornerstone of many safety, biomedical, and entertainment
applications. Despite the high quality of depth images, the accuracy of
existing human pose estimation methods from single depth images remains
insufficient for some applications. To improve accuracy, we propose
leveraging a rough orientation estimate to dynamically select a 3D joint
position prediction model specialized for that orientation. This
orientation estimate can be obtained in real time either from the image
itself or from any other cue, such as tracking. We demonstrate the merits
of this general principle on a pose estimation method similar to the one
used with Kinect cameras. Our results show that accuracy improves by up
to 45.1% compared to a method that uses the same model for all
orientations.