References of "Van Droogenbroeck, Marc"
M4Depth: A motion-based approach for monocular depth estimation on video sequences
Fonder, Michaël ULiege; Ernst, Damien ULiege; Van Droogenbroeck, Marc ULiege

E-print/Working paper (2021)

Getting the distance to objects is crucial for autonomous vehicles. In instances where depth sensors cannot be used, this distance has to be estimated from RGB cameras. As opposed to cars, the task of estimating depth from on-board cameras is made complex on drones because of the lack of constraints on motion during flights. In this paper, we present a method to estimate the distance of objects seen by an on-board camera by using its RGB video stream and drone motion information. Our method is built upon a pyramidal convolutional neural network architecture and uses time recurrence in tandem with geometric constraints imposed by motion to produce pixel-wise depth maps. In our architecture, each level of the pyramid is designed to produce its own depth estimate based on past observations and information provided by the previous level in the pyramid. We introduce a spatial reprojection layer to maintain the spatio-temporal consistency of the data between the levels. We analyse the performance of our approach on Mid-Air, a public drone dataset featuring synthetic drone trajectories recorded in a wide variety of unstructured outdoor environments. Our experiments show that our network outperforms state-of-the-art depth estimation methods and that the use of motion information is the main contributing factor for this improvement. The code of our method is publicly available on GitHub; see https://github.com/michael-fonder/M4Depth

Peer Reviewed
Camera Calibration and Player Localization in SoccerNet-v2 and Investigation of their Representations for Action Spotting
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Magera, Floriane ULiege et al

E-print/Working paper (2021)

Soccer broadcast video understanding has been drawing a lot of attention in recent years among data scientists and industrial companies. This is mainly due to the lucrative potential unlocked by effective deep learning techniques developed in the field of computer vision. In this work, we focus on the topic of camera calibration and on its current limitations for the scientific community. More precisely, we tackle the absence of a large-scale calibration dataset and of a public calibration network trained on such a dataset. Specifically, we distill a powerful commercial calibration tool into a recent neural network architecture on the large-scale SoccerNet dataset, composed of untrimmed broadcast videos of 500 soccer games. We further release our distilled network, and leverage it to provide three ways of representing the calibration results along with player localization. Finally, we exploit those representations within the current best architecture for the action spotting task of SoccerNet-v2, and achieve new state-of-the-art performance.

Exoplanet imaging data challenge: benchmarking the various image processing methods for exoplanet detection
Cantalloube, F.; Gomez Gonzalez, Carlos; Absil, Olivier ULiege et al

in Schreiber, L.; Schmidt, D.; Vernet, E. (Eds.) Adaptive Optics Systems VII (2020, December 13)

The Exoplanet Imaging Data Challenge is a community-wide effort meant to offer a platform for a fair and common comparison of image processing methods designed for exoplanet direct detection. For this purpose, it gathers on a dedicated repository (Zenodo) data from several high-contrast ground-based instruments worldwide, in which we injected synthetic planetary signals. The data challenge is hosted on the CodaLab competition platform, where participants can upload their results. The specifications of the data challenge are published on our website https://exoplanet-imaging-challenge.github.io/ . The first phase, launched on the 1st of September 2019 and closed on the 1st of October 2020, consisted of detecting point sources in two common types of datasets in the field of high-contrast imaging: data taken in pupil-tracking mode at one wavelength (subchallenge 1, also referred to as ADI) and multispectral data taken in pupil-tracking mode (subchallenge 2, also referred to as ADI+mSDI). In this paper, we describe the approach, organisational lessons learnt, and current limitations of the data challenge, as well as preliminary results of the participants’ submissions for this first phase. In the future, we plan to provide permanent access to the standard library of datasets and metrics, in order to guide the validation and support the publications of innovative image processing algorithms dedicated to high-contrast imaging of planetary systems.

SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos
Deliège, Adrien ULiege; Cioppa, Anthony ULiege; Giancola, Silvio et al

E-print/Working paper (2020)

Understanding broadcast videos is a challenging task in computer vision, as it requires generic reasoning capabilities to appreciate the content offered by the video editing. In this work, we propose SoccerNet-v2, a novel large-scale corpus of manual annotations for the SoccerNet video dataset, along with open challenges to encourage more research in soccer understanding and broadcast production. Specifically, we release around 300k annotations within SoccerNet's 500 untrimmed broadcast soccer videos. We extend current tasks in the realm of soccer to include action spotting and camera shot segmentation with boundary detection, and we define a novel replay grounding task. For each task, we provide and discuss benchmark results, reproducible with our open-source adapted implementations of the most relevant works in the field. SoccerNet-v2 is presented to the broader research community to help push computer vision closer to automatic solutions for more general video understanding and production purposes.

Peer Reviewed
Real-Time Semantic Background Subtraction
Cioppa, Anthony ULiege; Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege

in Proceedings of the IEEE International Conference on Image Processing (ICIP) (2020, October)

Semantic background subtraction (SBS) has been shown to improve the performance of most background subtraction algorithms by combining them with semantic information derived from a semantic segmentation network. However, SBS requires high-quality semantic segmentation masks for all frames, which are slow to compute. In addition, most state-of-the-art background subtraction algorithms are not real-time, which makes them unsuitable for real-world applications. In this paper, we present a novel background subtraction algorithm called Real-Time Semantic Background Subtraction (denoted RT-SBS), which extends SBS to real-time constrained applications while maintaining similar performance. RT-SBS effectively combines a real-time background subtraction algorithm with high-quality semantic information, which can be provided at a slower pace, independently for each pixel. We show that RT-SBS coupled with ViBe sets a new state of the art for real-time background subtraction algorithms and even competes with non-real-time state-of-the-art ones. Note that we provide Python CPU and GPU implementations of RT-SBS at https://github.com/cioppaanthony/rt-sbs

Peer Reviewed
Summarizing the performances of a background subtraction algorithm measured on several videos
Pierard, Sébastien ULiege; Van Droogenbroeck, Marc ULiege

in Proceedings of the IEEE International Conference on Image Processing (ICIP) (2020, October)

There exist many background subtraction algorithms to detect motion in videos. To help compare them, datasets with ground-truth data, such as CDNET or LASIESTA, have been proposed. These datasets organize videos in categories that represent typical challenges for background subtraction. The evaluation procedure promoted by their authors consists of measuring performance indicators for each video separately and averaging them hierarchically, first within a category, then between categories, a procedure which we name “summarization”. While summarization by averaging performance indicators is a valuable effort to standardize the evaluation procedure, it has no theoretical justification and it breaks the intrinsic relationships between summarized indicators. This leads to interpretation inconsistencies. In this paper, we present a theoretical approach to summarize the performances for multiple videos that preserves the relationships between performance indicators. In addition, we give formulas and an algorithm to calculate summarized performances. Finally, we showcase our observations on CDNET 2014.
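
As a small illustration of the problem the abstract describes (the counts below are invented for the example and are not taken from the paper), averaging per-video indicators can disagree with the same indicator computed from pooled counts:

```python
def f1(tp, fp, fn):
    """F1 score computed from confusion-matrix counts."""
    return 2 * tp / (2 * tp + fp + fn)

# Invented (tp, fp, fn) counts for two videos of different difficulty.
videos = [(90, 10, 10), (5, 20, 20)]

# Averaging per-video F1 scores, as in the criticised evaluation procedure.
avg_f1 = sum(f1(*v) for v in videos) / len(videos)

# F1 computed on the pooled counts gives a different number, showing that
# averaging indicators breaks their relationship to the underlying counts.
tp, fp, fn = (sum(c) for c in zip(*videos))
pooled_f1 = f1(tp, fp, fn)
```

Here the averaged F1 is 0.55 while the pooled F1 is 0.76, which is the kind of interpretation inconsistency that motivates a principled summarization.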

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Pierard, Sébastien ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set. In this method, if a first condition is met the pixel is assigned to the background pixel set, and if the first condition is not met and a second condition is met, the pixel is assigned to the foreground pixel set. The method comprises a step (S100) of calculating a probability that the pixel belongs to a foreground-relevant object according to a semantic segmentation algorithm, the first condition is that this probability that the pixel belongs to a foreground-relevant object does not exceed a first predetermined threshold, and the second condition is that a difference between this probability that the pixel belongs to a foreground-relevant object and a baseline probability for the pixel equals or exceeds a second predetermined threshold.
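
A minimal sketch of the two-condition decision rule described in the abstract; the threshold values and the fallback behaviour when neither condition is met are illustrative assumptions, not taken from the patent:

```python
def assign_pixel(p_semantic, p_baseline, t1, t2):
    """Assign a pixel given the probability p_semantic that it belongs to a
    foreground-relevant object (from a semantic segmentation algorithm) and
    a per-pixel baseline probability p_baseline.

    First condition:  p_semantic does not exceed threshold t1 -> background.
    Second condition: p_semantic exceeds p_baseline by at least t2 -> foreground.
    """
    if p_semantic <= t1:
        return "background"
    if p_semantic - p_baseline >= t2:
        return "foreground"
    # Neither condition met: the abstract leaves this case to another rule;
    # here we simply flag the pixel as undecided.
    return "undecided"

assign_pixel(0.05, 0.02, 0.3, 0.5)  # -> "background"
assign_pixel(0.95, 0.10, 0.3, 0.5)  # -> "foreground"
```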

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Cioppa, Anthony ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set using asynchronous semantic post-processing to improve motion detection by an imaging device, the pixel belonging to an image of a chronological sequence of images taken by the imaging device that includes background and foreground objects.

Peer Reviewed
Asynchronous Semantic Background Subtraction
Cioppa, Anthony ULiege; Braham, Marc ULiege; Van Droogenbroeck, Marc ULiege

in Journal of Imaging (2020), 6(20), 1-20

The method of Semantic Background Subtraction (SBS), which combines semantic segmentation and background subtraction, has recently emerged for the task of segmenting moving objects in video sequences. While SBS has been shown to improve background subtraction, a major difficulty is that it combines two streams generated at different frame rates. This results in SBS operating at the slowest frame rate of the two streams, usually the one of the semantic segmentation algorithm. We present a method, referred to as “Asynchronous Semantic Background Subtraction” (ASBS), able to combine a semantic segmentation algorithm with any background subtraction algorithm asynchronously. It achieves performance close to that of SBS while operating at the fastest possible frame rate, the one of the background subtraction algorithm. Our method consists of analyzing the temporal evolution of pixel features to possibly replicate the decisions previously enforced by semantics when no semantic information is computed. We showcase ASBS with several background subtraction algorithms and also add a feedback mechanism that feeds the background model of the background subtraction algorithm to upgrade its updating strategy and, consequently, enhance the decision. Experiments show that we systematically improve the performance, even when the semantic stream has a much slower frame rate than that of the background subtraction algorithm. In addition, we establish that, with the help of ASBS, a real-time background subtraction algorithm, such as ViBe, stays real-time and competes with some of the best non-real-time unsupervised background subtraction algorithms, such as SuBSENSE.
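
The asynchronous idea can be sketched per pixel as follows; the colour-distance test and the threshold `tau` are simplified assumptions for illustration, not the exact pixel features used in the paper:

```python
def asbs_decision(bs_label, semantic_label, cur_color, ref_color,
                  past_semantic_label, tau=20):
    """Per-pixel decision sketch in the spirit of ASBS.

    - If a fresh semantic decision is available, use it.
    - Otherwise, if the pixel's colour is still close to the colour observed
      when semantics last ruled on this pixel, replicate that past decision.
    - Otherwise, fall back to the raw background subtraction label.
    """
    if semantic_label is not None:
        return semantic_label
    if past_semantic_label is not None and abs(cur_color - ref_color) <= tau:
        return past_semantic_label
    return bs_label

# No fresh semantics and the colour barely changed:
# the past semantic decision is replicated.
asbs_decision("foreground", None, 102, 100, "background")  # -> "background"
```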

Peer Reviewed
A Context-Aware Loss Function for Action Spotting in Soccer Videos
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Giancola, Silvio et al

in IEEE Conference on Computer Vision and Pattern Recognition. Proceedings (2020, June)

In video understanding, action spotting consists of temporally localizing human-induced events annotated with single timestamps. In this paper, we propose a novel loss function that specifically considers the temporal context naturally present around each action, rather than focusing on the single annotated frame to spot. We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline. We show the generalization capability of our loss for generic activity proposals and detection on ActivityNet, by spotting the beginning and the end of each activity. Furthermore, we provide an extended ablation study and display challenging cases for action spotting in soccer videos. Finally, we qualitatively illustrate how our loss induces a precise temporal understanding of actions and show how such semantic knowledge can be used for automatic highlights generation.

Peer Reviewed
Multimodal and multiview distillation for real-time player detection on a football field
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Noor, Ul Huda et al

in IEEE Conference on Computer Vision and Pattern Recognition. Proceedings (2020, June)

Monitoring the occupancy of public sports facilities is essential to assess their use and to motivate their construction in new places. In the case of a football field, the area to cover is large, so several regular cameras would be needed, which makes the setup expensive and complex. As an alternative, we developed a system that detects players from a single cheap wide-angle fisheye camera assisted by a single narrow-angle thermal camera. In this work, we train a network with a knowledge distillation approach in which the student and the teacher have different modalities and a different view of the same scene. In particular, we design a custom data augmentation combined with a motion detection algorithm to handle the training in the region of the fisheye camera not covered by the thermal one. We show that our solution is effective in detecting players on the whole field filmed by the fisheye camera. We evaluate it quantitatively and qualitatively in the case of an online distillation, where the student detects players in real time while being continuously adapted to the latest video conditions.

Principes des télécommunications analogiques et numériques: manuel des répétitions
Van Droogenbroeck, Marc ULiege; Latour, Philippe ULiege; Wagner, Jean-Marc et al

Learning material (2020)

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Pierard, Sébastien ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set. In this method, if a first condition is met the pixel is assigned to the background pixel set, and if the first condition is not met and a second condition is met, the pixel is assigned to the foreground pixel set. The method comprises a step (S100) of calculating a probability that the pixel belongs to a foreground-relevant object according to a semantic segmentation algorithm, the first condition is that this probability that the pixel belongs to a foreground-relevant object does not exceed a first predetermined threshold, and the second condition is that a difference between this probability that the pixel belongs to a foreground-relevant object and a baseline probability for the pixel equals or exceeds a second predetermined threshold.

Peer Reviewed
Ghost Loss to Question the Reliability of Training Data
Deliège, Adrien ULiege; Cioppa, Anthony ULiege; Van Droogenbroeck, Marc ULiege

in IEEE Access (2020), 8

Supervised image classification problems rely on training data assumed to have been correctly annotated; this assumption underpins most works in the field of deep learning. In consequence, during its training, a network is forced to match the label provided by the annotator and is not given the flexibility to choose an alternative to inconsistencies that it might be able to detect. Therefore, erroneously labeled training images may end up “correctly” classified in classes to which they do not actually belong. This may reduce the performance of the network and thus incite researchers to build more complex networks without even checking the quality of the training data. In this work, we question the reliability of the annotated datasets. For that purpose, we introduce the notion of ghost loss, which can be seen as a regular loss that is zeroed out for some predicted values in a deterministic way and that allows the network to choose an alternative to the given label without being penalized. After a proof-of-concept experiment, we use the ghost loss principle to detect confusing images and erroneously labeled images in well-known training datasets (MNIST, Fashion-MNIST, SVHN, CIFAR10), and we provide a new tool, called the sanity matrix, for summarizing these confusions.
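
As a hypothetical sketch of the ghost loss principle (the exact loss used in the paper may differ), one can deterministically zero out the error term of the top-scoring non-label class, so the network may favour one alternative class without penalty:

```python
import numpy as np

def ghost_mse(output, target_onehot):
    """Per-class squared error in which the term of the highest-scoring
    non-target class (the 'ghost') is zeroed out, leaving the network free
    to put mass on one alternative class without being penalized."""
    output = np.asarray(output, dtype=float)
    target = np.asarray(target_onehot, dtype=float)
    err = (output - target) ** 2
    non_target = np.where(target == 0)[0]          # candidate ghost classes
    ghost = non_target[np.argmax(output[non_target])]
    err[ghost] = 0.0                               # no penalty for the ghost
    return float(err.sum())
```

For an output `[0.1, 0.7, 0.2]` with the one-hot label `[1, 0, 0]`, the error of class 1 (the strongest alternative) is dropped, so the loss is 0.81 + 0.04 = 0.85 instead of 1.34.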

Analysis and Design of Telecommunications Systems: Manual of Exercises
Van Droogenbroeck, Marc ULiege; Wagner, Jean-Marc; Pierlot, Vincent et al

Learning material (2020)

Peer Reviewed
Ordinal Pooling
Deliège, Adrien ULiege; Istasse, Maxime; Kumar, Ashwani et al

in 30th British Machine Vision Conference (2020)

In the framework of convolutional neural networks, downsampling is often performed with an average-pooling operation, where all the activations are treated equally, or with a max-pooling operation that only retains the element with maximum activation while discarding the others. Both of these operations are restrictive and have previously been shown to be sub-optimal. To address this issue, a novel pooling scheme, named ordinal pooling, is introduced in this work. Ordinal pooling rearranges all the elements of a pooling region in a sequence and assigns a different weight to each element based upon its order in the sequence. These weights are used to compute the pooling operation as a weighted sum of the rearranged elements of the pooling region. They are learned via standard gradient-based training, allowing the network to learn a behavior anywhere in the spectrum from average-pooling to max-pooling in a differentiable manner. Our experiments suggest that it is advantageous for networks to perform different types of pooling operations within a pooling layer and that a hybrid behavior between average- and max-pooling is often beneficial. More importantly, they also demonstrate that ordinal pooling leads to consistent improvements in accuracy over average- or max-pooling operations while speeding up the training and alleviating the issue of choosing the pooling operations and activation functions to be used in the networks. In particular, ordinal pooling mainly helps on lightweight or quantized deep learning architectures, as typically considered, e.g., for embedded applications.
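
A small NumPy sketch of the pooling operation described above; the descending sort order and the weight values are illustrative assumptions (in the paper, the rank weights are learned by gradient descent):

```python
import numpy as np

def ordinal_pool(region, weights):
    """Sort the activations of a pooling region and return their weighted
    sum, with one weight per rank in the sorted sequence."""
    ranked = np.sort(np.asarray(region, dtype=float))[::-1]  # descending
    return float(np.dot(weights, ranked))

region = [0.2, 0.9, 0.5, 0.1]

# Uniform weights recover average-pooling...
avg = ordinal_pool(region, np.full(4, 0.25))

# ...while a one-hot weight on rank 0 recovers max-pooling, so learned
# weights can interpolate between the two behaviors.
mx = ordinal_pool(region, np.array([1.0, 0.0, 0.0, 0.0]))
```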

Image classification using neural networks
Van Droogenbroeck, Marc ULiege; Deliège, Adrien ULiege; Cioppa, Anthony ULiege

Patent (2019)

A computer-implemented method for training a neural network for classifying image data, and a related computer program product, are disclosed. A labelled input data set comprising a plurality of labelled image data samples is provided together with a neural network. The neural network comprises an input layer, at least one intermediate layer, and an output layer having one channel per label class. Each channel provides a mapping of labelled image data samples onto feature vectors. Furthermore, the input layer of a decoder network, which reconstructs image data samples at its output, is connected to the output layer of the neural network. A classifier predicts class labels as the labels of those channels for which a normed distance of its feature vector relative to a pre-determined reference point is smallest. A loss function for the neural network is suitable for steering, for each channel, the feature vectors onto which image data samples of the associated class are mapped, into a convex target region around the pre-determined reference point.

Image classification using neural networks
Van Droogenbroeck, Marc ULiege; Deliège, Adrien ULiege; Cioppa, Anthony ULiege

Patent (2019)

A computer-implemented method for training a neural network for classifying image data, and a related computer program product, are disclosed. A labelled input data set comprising a plurality of labelled image data samples is provided together with a neural network. The neural network comprises an input layer, at least one intermediate layer, and an output layer having one channel per label class. Each channel provides a mapping of labelled image data samples onto feature vectors. Furthermore, the input layer of a decoder network, which reconstructs image data samples at its output, is connected to the output layer of the neural network. A classifier predicts class labels as the labels of those channels for which a normed distance of its feature vector relative to a pre-determined reference point is smallest. A loss function for the neural network is suitable for steering, for each channel, the feature vectors onto which image data samples of the associated class are mapped, into a convex target region around the pre-determined reference point.

Statistical analysis of modulated codes for robot positioning -- Application to BeAMS
Pierlot, Vincent ULiege; Van Droogenbroeck, Marc ULiege

E-print/Working paper (2019)

Positioning is a fundamental issue for mobile robots. Therefore, a performance analysis is suitable to determine the behavior of a system and to optimize its working. Unfortunately, some systems are only evaluated experimentally, which makes the performance analysis and design decisions very unclear. In [4], we proposed a new angle measurement system, named BeAMS, that is the key element of an algorithm for mobile robot positioning. BeAMS introduces a new mechanism to measure angles: it detects a beacon when it enters and leaves an angular window. A theoretical framework for a thorough performance analysis of BeAMS has been provided to establish the upper bound of the variance and to validate this bound through experiments and simulations. It has been shown that the estimator derived from the center of this angular window provides an unbiased estimate of the beacon angle. This document complements our paper by going into further details related to the code statistics of modulated signals in general, with an emphasis on BeAMS. In particular, the probability density function of the measured angle had previously been established under the assumption that there is no correlation between the times a beacon enters the angular window or leaves it. This assumption is questionable and, in this document, we reconsider it and establish the exact probability density function of the angle estimated by BeAMS (without this assumption). The conclusion of this study is that the real variance of the estimator provided by BeAMS was slightly underestimated in our previous work. In addition to this specific result, we also provide a new and extensive theoretical approach that can be used to analyze the statistics of any angle measurement method with beacons whose signal has been modulated.
To summarize, this technical document has four purposes: (1) to establish the exact probability density function of the angle estimator of BeAMS; (2) to calculate a practical upper bound of the variance of this estimator, which is of practical interest for calibration and tracking (see Table 1, on page 13, for a summary); (3) to present a new theoretical approach to evaluate the performance of systems that use modulated (coded) signals; and (4) to show how the variance evolves exactly as a function of the angular window (while remaining below the upper bound).

Peer Reviewed
Mid-Air: A multi-modal dataset for extremely low altitude drone flights
Fonder, Michaël ULiege; Van Droogenbroeck, Marc ULiege

in IEEE Conference on Computer Vision and Pattern Recognition. Proceedings (2019, June)

Flying a drone in unstructured environments with varying conditions is challenging. To help produce better algorithms, we present Mid-Air, a multi-purpose synthetic dataset for low altitude drone flights in unstructured environments. It contains synchronized data of multiple sensors for a total of 54 trajectories and more than 420k video frames simulated in various climate conditions. In this work, we motivate design choices, explain how the data was simulated, and present the content of the dataset. Finally, a benchmark for positioning and a benchmark for image generation tasks show how Mid-Air can be used to set up a standard evaluation method for assessing computer vision algorithms in terms of robustness and generalization. We illustrate this by providing a baseline for depth estimation and by comparing it with results obtained on an existing dataset. Mid-Air is publicly downloadable, with additional details on the data format and organization, at http://midair.ulg.ac.be
