[en] This paper describes the video and audio tools that have been implemented
in a real-time system to immerse a user in a virtual
scene. The video tools include motion detection, skin detection
by thresholding, shadow detection and extraction, and finally the
detection of the user's head and hands. Once this is done, the user (who
is surrounded by a matrix of loudspeakers) is able to move a
sound source in the horizontal plane around him. Moreover, the
sound is auralized by convolution with (directional) room impulse
responses, which have been pre-computed by a ray tracing
method. The different sound contributions are distributed to the
individual loudspeakers by applying the VBAP technique.
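
The record does not detail the thresholding rule, but skin detection by thresholding is typically carried out in a chrominance space. The following is a minimal sketch under that assumption; the YCbCr bounds are common literature values, not parameters taken from the paper:

    import numpy as np

    def skin_mask(rgb):
        """Binary skin mask from an RGB image (H x W x 3, uint8).

        Pixels are classified by fixed thresholds in the YCbCr space;
        the ranges below are common literature values, not the paper's
        own parameters.
        """
        rgb = rgb.astype(np.float32)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # RGB -> Cb, Cr (ITU-R BT.601, full range)
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)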
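Auralization by convolution amounts to filtering a dry (anechoic) source signal with each pre-computed directional room impulse response, yielding one directional contribution per response. A minimal sketch, assuming the responses are already available as arrays (the paper obtains them offline by ray tracing):

    import numpy as np
    from scipy.signal import fftconvolve

    def auralize(dry, rirs):
        """Convolve a dry signal with one room impulse response per
        direction. Each returned signal is one directional contribution,
        to be distributed to the loudspeakers afterwards (here by VBAP).
        """
        return [fftconvolve(dry, rir) for rir in rirs]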
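VBAP (vector base amplitude panning) places a virtual source between the pair of loudspeakers that encloses its direction, solving a small linear system for that pair's gains. The 2-D sketch below uses a hypothetical four-speaker layout; vbap_2d and its arguments are illustrative names, and the formulation is standard pairwise VBAP (Pulkki), not code from the paper:

    import numpy as np

    def vbap_2d(source_az_deg, speaker_az_deg):
        """Pairwise 2-D VBAP gains.

        speaker_az_deg must list the loudspeaker azimuths in order
        around the listener (adjacent speakers less than 180 degrees
        apart), so that adjacent entries form the active pairs.
        Returns one gain per loudspeaker; only the pair enclosing the
        source direction is non-zero.
        """
        az = np.radians(speaker_az_deg)
        src = np.radians(source_az_deg)
        p = np.array([np.cos(src), np.sin(src)])   # source unit vector
        gains = np.zeros(len(az))
        for i in range(len(az)):
            j = (i + 1) % len(az)
            # Columns are the unit vectors of loudspeakers i and j
            L = np.column_stack(([np.cos(az[i]), np.sin(az[i])],
                                 [np.cos(az[j]), np.sin(az[j])]))
            g = np.linalg.solve(L, p)
            if np.all(g >= -1e-9):   # source lies between speakers i and j
                gains[i], gains[j] = g[0], g[1]
                break
        # Constant-power normalization across the active pair
        return gains / np.linalg.norm(gains)

    # Example: a source at 30 degrees with speakers at 0, 90, 180, 270 degrees
    print(vbap_2d(30.0, [0, 90, 180, 270]))

Only the two gains of the enclosing pair are non-zero, which is what lets the system move the source smoothly around the horizontal loudspeaker matrix described in the abstract.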
Disciplines :
Electrical & electronics engineering
Author, co-author :
Dardenne, Renaud
Embrechts, Jean-Jacques ; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Techniques du son et de l'image
Van Droogenbroeck, Marc ; Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Télécommunications
Werner, Nicolas
Language :
English
Title :
A video-based human-computer interaction system for audio-visual immersion
Publication date :
March 2006
Audience :
International
Main work title :
Proceedings of SPS-DARTS
Pages :
23-26
Peer reviewed :
Peer reviewed
Name of the research project :
CINEMA
Funders :
DGTRE - Région wallonne. Direction générale des Technologies, de la Recherche et de l'Énergie