Background subtraction is usually based on low-level or hand-crafted features such as raw color components, gradients, or local binary patterns. As an improvement, we present a background subtraction algorithm based on spatial features learned with convolutional neural networks (ConvNets). Our algorithm uses a background model reduced to a single background image and a scene-specific training dataset to feed ConvNets that learn how to subtract the background from an input image patch. Experiments conducted on the 2014 ChangeDetection.net dataset show that our ConvNet-based algorithm at least matches the performance of state-of-the-art methods, and that it even outperforms them significantly when scene-specific knowledge is considered.
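As a rough illustration of the patch-based input described in the abstract, the sketch below pairs each patch of the current frame with the co-located patch of the single background image and stacks them along a channel axis, forming the two-channel inputs a scene-specific ConvNet would classify as foreground or background. The helper name, the patch size, and the stride are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def make_patch_pairs(frame, background, patch_size=27, stride=27):
    """Pair each grayscale frame patch with the co-located background patch.

    Returns an array of shape (N, 2, patch_size, patch_size): channel 0 is
    the frame patch, channel 1 the background patch. A ConvNet trained on
    such pairs can learn to decide whether the central pixel is foreground.
    """
    h, w = frame.shape
    half = patch_size // 2
    pairs = []
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            fp = frame[y - half:y + half + 1, x - half:x + half + 1]
            bp = background[y - half:y + half + 1, x - half:x + half + 1]
            pairs.append(np.stack([fp, bp], axis=0))  # (2, P, P)
    return np.array(pairs)

# Toy example: a synthetic 108x108 grayscale frame and background image.
rng = np.random.default_rng(0)
frame = rng.random((108, 108))
background = rng.random((108, 108))
pairs = make_patch_pairs(frame, background)  # shape (16, 2, 27, 27)
```

In practice the background image itself would be estimated from the video (e.g. by temporal filtering), and the labeled scene-specific patches would serve as the ConvNet's training set.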
Research center:
Department of Electrical Engineering and Computer Science (Montefiore Institute), Signal and Image Exploitation (INTELSIG)
Disciplines:
Computer science
Author, co-author:
Braham, Marc ; Université de Liège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Télécommunications
Van Droogenbroeck, Marc ; Université de Liège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Télécommunications
Document language:
English
Title:
Deep Background Subtraction with Scene-Specific Convolutional Neural Networks
Publication date:
May 2016
Event name:
IEEE International Conference on Systems, Signals and Image Processing (IWSSIP)
Event location:
Bratislava, Slovakia
Event date:
23-25 May 2016
Event scope:
International
Title of the main work:
IEEE International Conference on Systems, Signals and Image Processing (IWSSIP), Bratislava 23-25 May 2016