References of "Van Droogenbroeck, Marc"
Soccer Player Tracking, Re-Identification, Camera Calibration and Action Spotting - SoccerNet Challenge 2022
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Giancola, Silvio et al

Conference given outside the academic context (2022)

In this video, we present our new SoccerNet Challenges for CVPR 2022! We introduce the three tasks of calibration, re-identification and tracking on soccer games, in partnership with EVS Broadcast Equipment, SportRadar and Baidu Research. We also reiterate our previous action spotting and replay grounding Challenge at the ActivityNet workshop.

Peer Reviewed
Survey and synthesis of state of the art in driver monitoring
Halin, Anaïs ULiege; Verly, Jacques ULiege; Van Droogenbroeck, Marc ULiege

in Sensors (2021), 21(16), 1-49

Road vehicle accidents are mostly due to human errors, and many such accidents could be avoided by continuously monitoring the driver. Driver monitoring (DM) is a topic of growing interest in the automotive industry, and it will remain relevant for all vehicles that are not fully autonomous, and thus for decades for the average vehicle owner. The present paper focuses on the first step of DM, which consists of characterizing the state of the driver. Since DM will be increasingly linked to driving automation (DA), this paper presents a clear view of the role of DM at each of the six SAE levels of DA. This paper surveys the state of the art of DM, and then synthesizes it, providing a unique, structured, polychotomous view of the many characterization techniques of DM. Informed by the survey, the paper characterizes the driver state along the five main dimensions—called here “(sub)states”—of drowsiness, mental workload, distraction, emotions, and under the influence. The polychotomous view of DM is presented through a pair of interlocked tables that relate these states to their indicators (e.g., the eye-blink rate) and the sensors that can access each of these indicators (e.g., a camera). The tables factor in not only the effects linked directly to the driver, but also those linked to the (driven) vehicle and the (driving) environment. They show, at a glance, to concerned researchers, equipment providers, and vehicle manufacturers (1) most of the options they have to implement various forms of advanced DM systems, and (2) fruitful areas for further research and innovation.

Peer Reviewed
SoccerNet challenge: Task presentation and winner announcement
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Giancola, Silvio et al

Conference (2021, June 19)

One-hour talk on our SoccerNet challenge, including the task presentation and winner announcement at the ActivityNet workshop of CVPR 2021. We define two tasks in the SoccerNet challenge: (i) action spotting: identify the exact timestamp for the occurrence of 17 different classes of actions, (ii) replay grounding: given a replay sequence, identify the exact timestamp of the occurrence (in the game) of the actions replayed. All results are available in our public leaderboard. SoccerNet is the largest video dataset for soccer understanding, with 500 games covering 3 seasons of 6 major European football leagues and a total of 300K+ manual annotations. We provide timestamp annotations for 17 types of actions (goal, corner, free-kick, …), enriched with extra attributes (team and action visibility). We also provide the temporal boundaries for game replays, along with the type of camera shot among 13 possibilities (main camera, close-up, behind the goal, ...), the camera transition (abrupt, smooth, logo), and the pointers towards the actions replayed. For a subset of games, we also provide the timestamps for all the camera changes, with the type of camera used and the type of transition, for a deeper understanding of the broadcasting process.

Peer Reviewed
Camera Calibration and Player Localization in SoccerNet-v2 and Investigation of their Representations for Action Spotting
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Magera, Floriane ULiege et al

in IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2021, June)

Soccer broadcast video understanding has been drawing a lot of attention in recent years among data scientists and industrial companies. This is mainly due to the lucrative potential unlocked by effective deep learning techniques developed in the field of computer vision. In this work, we focus on the topic of camera calibration and on its current limitations for the scientific community. More precisely, we tackle the absence of a large-scale calibration dataset and of a public calibration network trained on such a dataset. Specifically, we distill a powerful commercial calibration tool into a recent neural network architecture on the large-scale SoccerNet dataset, composed of untrimmed broadcast videos of 500 soccer games. We further release our distilled network, and leverage it to provide three ways of representing the calibration results along with player localization. Finally, we exploit those representations within the current best architecture for the action spotting task of SoccerNet-v2, and achieve new state-of-the-art performances.

Peer Reviewed
SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos
Deliège, Adrien ULiege; Cioppa, Anthony ULiege; Giancola, Silvio et al

in IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2021, June)

Understanding broadcast videos is a challenging task in computer vision, as it requires generic reasoning capabilities to appreciate the content offered by the video editing. In this work, we propose SoccerNet-v2, a novel large-scale corpus of manual annotations for the SoccerNet video dataset, along with open challenges to encourage more research in soccer understanding and broadcast production. Specifically, we release around 300k annotations within SoccerNet's 500 untrimmed broadcast soccer videos. We extend current tasks in the realm of soccer to include action spotting, camera shot segmentation with boundary detection, and we define a novel replay grounding task. For each task, we provide and discuss benchmark results, reproducible with our open-source adapted implementations of the most relevant works in the field. SoccerNet-v2 is presented to the broader research community to help push computer vision closer to automatic solutions for more general video understanding and production purposes.

Peer Reviewed
Variability of people's activity space using GPS-based trajectory data
Moncayo Unda, Milton Giovanny ULiege; Van Droogenbroeck, Marc ULiege; Saadi, Ismaïl ULiege et al

in The role of transport in urban, energy and climate transitions (2021, May)

The increasing popularity, usage, and ubiquity of mobile devices, as well as their easy access to global positioning systems (GPS), have promoted new research approaches in the transportation field. The ease of collecting data makes it possible to extract and classify episodes of people's daily activity based on their location data, without additional information. This study presents a spatial activity analysis of a group of people from Quito, most of them from the Central University of Ecuador. The spatial variability analysis of the trajectories generated by GPS tracking allows us to identify common daily, weekly, or monthly travel-behavior patterns. The method, developed in Python and R, provides all the tools needed for the initial processing of the participants' trajectory and spatial information, which can later be analyzed together with demographic data.

M4Depth: A motion-based approach for monocular depth estimation on video sequences
Fonder, Michaël ULiege; Ernst, Damien ULiege; Van Droogenbroeck, Marc ULiege

E-print/Working paper (2021)

Getting the distance to objects is crucial for autonomous vehicles. In instances where depth sensors cannot be used, this distance has to be estimated from RGB cameras. As opposed to cars, the task of estimating depth from on-board mounted cameras is made complex on drones because of the lack of constraints on motion during flights. In this paper, we present a method to estimate the distance of objects seen by an on-board mounted camera by using its RGB video stream and drone motion information. Our method is built upon a pyramidal convolutional neural network architecture and uses time recurrence in tandem with geometric constraints imposed by motion to produce pixel-wise depth maps. In our architecture, each level of the pyramid is designed to produce its own depth estimate based on past observations and information provided by the previous level in the pyramid. We introduce a spatial reprojection layer to maintain the spatio-temporal consistency of the data between the levels. We analyse the performance of our approach on Mid-Air, a public drone dataset featuring synthetic drone trajectories recorded in a wide variety of unstructured outdoor environments. Our experiments show that our network outperforms state-of-the-art depth estimation methods and that the use of motion information is the main contributing factor for this improvement. The code of our method is publicly available on GitHub; see https://github.com/michael-fonder/M4Depth

Analysis and Design of Telecommunications Systems: Manual of Exercises
Van Droogenbroeck, Marc ULiege; Wagner, Jean-Marc; Pierlot, Vincent et al

Learning material (2021)

Exoplanet imaging data challenge: benchmarking the various image processing methods for exoplanet detection
Cantalloube, F.; Gomez Gonzalez, Carlos; Absil, Olivier ULiege et al

in Schreiber, L.; Schmidt, D.; Vernet, E. (Eds.) Adaptive Optics Systems VII (2020, December 13)

The Exoplanet Imaging Data Challenge is a community-wide effort meant to offer a platform for a fair and common comparison of image processing methods designed for exoplanet direct detection. For this purpose, it gathers, on a dedicated repository (Zenodo), data from several high-contrast ground-based instruments worldwide in which we injected synthetic planetary signals. The data challenge is hosted on the CodaLab competition platform, where participants can upload their results. The specifications of the data challenge are published on our website (https://exoplanet-imaging-challenge.github.io/). The first phase, launched on the 1st of September 2019 and closed on the 1st of October 2020, consisted in detecting point sources in two types of data sets common in the field of high-contrast imaging: data taken in pupil-tracking mode at one wavelength (subchallenge 1, also referred to as ADI) and multispectral data taken in pupil-tracking mode (subchallenge 2, also referred to as ADI+mSDI). In this paper, we describe the approach, organisational lessons learnt and current limitations of the data challenge, as well as preliminary results of the participants' submissions for this first phase. In the future, we plan to provide permanent access to the standard library of data sets and metrics, in order to guide the validation and support the publications of innovative image processing algorithms dedicated to high-contrast imaging of planetary systems.

Peer Reviewed
Real-Time Semantic Background Subtraction
Cioppa, Anthony ULiege; Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege

in Proceedings of the IEEE International Conference on Image Processing (ICIP) (2020, October)

Semantic background subtraction (SBS) has been shown to improve the performance of most background subtraction algorithms by combining them with semantic information, derived from a semantic segmentation network. However, SBS requires high-quality semantic segmentation masks for all frames, which are slow to compute. In addition, most state-of-the-art background subtraction algorithms are not real-time, which makes them unsuitable for real-world applications. In this paper, we present a novel background subtraction algorithm called Real-Time Semantic Background Subtraction (denoted RT-SBS) which extends SBS for real-time constrained applications while keeping similar performances. RT-SBS effectively combines a real-time background subtraction algorithm with high-quality semantic information which can be provided at a slower pace, independently for each pixel. We show that RT-SBS coupled with ViBe sets a new state of the art for real-time background subtraction algorithms and even competes with the non real-time state-of-the-art ones. Note that we provide Python CPU and GPU implementations of RT-SBS at https://github.com/cioppaanthony/rt-sbs

Peer Reviewed
Summarizing the performances of a background subtraction algorithm measured on several videos
Pierard, Sébastien ULiege; Van Droogenbroeck, Marc ULiege

in Proceedings of the IEEE International Conference on Image Processing (ICIP) (2020, October)

There exist many background subtraction algorithms to detect motion in videos. To help compare them, datasets with ground-truth data such as CDNET or LASIESTA have been proposed. These datasets organize videos in categories that represent typical challenges for background subtraction. The evaluation procedure promoted by their authors consists in measuring performance indicators for each video separately and averaging them hierarchically, within a category first, then between categories, a procedure which we name “summarization”. While the summarization by averaging performance indicators is a valuable effort to standardize the evaluation procedure, it has no theoretical justification and it breaks the intrinsic relationships between summarized indicators. This leads to interpretation inconsistencies. In this paper, we present a theoretical approach to summarize the performances for multiple videos that preserves the relationships between performance indicators. In addition, we give formulas and an algorithm to calculate summarized performances. Finally, we showcase our observations on CDNET 2014.
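The inconsistency that the abstract alludes to is easy to reproduce on toy numbers. The sketch below (illustrative values only, not taken from the paper) shows that summarizing precision and recall by averaging and then deriving F1 does not give the same number as averaging the per-video F1 scores, so the usual relationship between the summarized indicators is broken.

```python
# Illustration of "summarization" by averaging and the inconsistency
# it creates between performance indicators. Values are made up.

import statistics

# Per-video (precision, recall) pairs for one category
videos = [(0.9, 0.3), (0.5, 0.8)]

def f1(p, r):
    return 2 * p * r / (p + r)

# Average each indicator separately, then derive F1 from the averages ...
mean_p = statistics.mean(p for p, r in videos)
mean_r = statistics.mean(r for p, r in videos)
f1_of_means = f1(mean_p, mean_r)        # 0.616

# ... versus averaging the per-video F1 scores directly.
mean_f1 = statistics.mean(f1(p, r) for p, r in videos)  # about 0.533

# The two summaries disagree: after averaging, precision, recall,
# and F1 no longer satisfy the F1 formula simultaneously.
```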

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Pierard, Sébastien ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set. In this method, if a first condition is met the pixel is assigned to the background pixel set, and if the first condition is not met and a second condition is met, the pixel is assigned to the foreground pixel set. The method comprises a step (S100) of calculating a probability that the pixel belongs to a foreground-relevant object according to a semantic segmentation algorithm, the first condition is that this probability that the pixel belongs to a foreground-relevant object does not exceed a first predetermined threshold, and the second condition is that a difference between this probability that the pixel belongs to a foreground-relevant object and a baseline probability for the pixel equals or exceeds a second predetermined threshold.
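The two conditions in the claim can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the threshold values, and the fallback to the decision of an underlying background subtraction algorithm when neither condition is met are assumptions.

```python
# Hedged sketch of the two-condition pixel classification described above.
import numpy as np

def classify_pixels(p_fg, baseline, bgs_mask, t1=0.1, t2=0.4):
    """p_fg: per-pixel probability of belonging to a foreground-relevant
    object (semantic segmentation); baseline: per-pixel reference
    probability; bgs_mask: decision of a background subtraction
    algorithm (True = foreground), used here as the assumed fallback."""
    out = bgs_mask.copy()
    cond1 = p_fg <= t1                       # first condition
    cond2 = (p_fg - baseline) >= t2          # second condition
    out[cond1] = False                       # assign to background
    out[~cond1 & cond2] = True               # assign to foreground
    return out                               # otherwise: keep fallback
```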

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Cioppa, Anthony ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set using asynchronous semantic post-processing to improve motion detection by an imaging device, the pixel belonging to an image of a chronological sequence of images taken by the imaging device that includes background and foreground objects.

Peer Reviewed
Using mobile phones to collect data including user feedback for the analysis of urban mobility
Moncayo Unda, Milton Giovanny ULiege; Saadi, Ismaïl ULiege; Van Droogenbroeck, Marc ULiege et al

Conference (2020, July)

Sensors integrated in mobile phones, such as the accelerometer, gyroscope and GPS, provide information that can be used to study mobility and transportation phenomena. In particular, the information collected from the signals of these sensors enables the determination of transport mode choice, movement or activity detection, route choice, etc. In this paper, we present a novel data collection platform based on a custom mobile application for Android OS devices, which integrates data from complementary sensors of the smartphone (luminosity, proximity, step counter, battery level) and other sources like weather information. Simultaneously, data is collected by interacting with the users through a mechanism of notifications. Via this interaction, a regular feedback about the users' mobility patterns is obtained. The feedback constitutes a first step to validate the information collected by the smartphone. The results of the data collection show how the combination of data can provide useful information in the field of mobility and transportation.

Peer Reviewed
Asynchronous Semantic Background Subtraction
Cioppa, Anthony ULiege; Braham, Marc ULiege; Van Droogenbroeck, Marc ULiege

in Journal of Imaging (2020), 6(20), 1-20

The method of Semantic Background Subtraction (SBS), which combines semantic segmentation and background subtraction, has recently emerged for the task of segmenting moving objects in video sequences. While SBS has been shown to improve background subtraction, a major difficulty is that it combines two streams generated at different frame rates. This results in SBS operating at the slowest frame rate of the two streams, usually being the one of the semantic segmentation algorithm. We present a method, referred to as “Asynchronous Semantic Background Subtraction” (ASBS), able to combine a semantic segmentation algorithm with any background subtraction algorithm asynchronously. It achieves performances close to those of SBS while operating at the fastest possible frame rate, being the one of the background subtraction algorithm. Our method consists in analyzing the temporal evolution of pixel features to possibly replicate the decisions previously enforced by semantics when no semantic information is computed. We showcase ASBS with several background subtraction algorithms and also add a feedback mechanism that feeds the background model of the background subtraction algorithm to upgrade its updating strategy and, consequently, enhance the decision. Experiments show that we systematically improve the performance, even when the semantic stream has a much slower frame rate than the frame rate of the background subtraction algorithm. In addition, we establish that, with the help of ASBS, a real-time background subtraction algorithm, such as ViBe, stays real time and competes with some of the best non-real-time unsupervised background subtraction algorithms such as SuBSENSE.
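The replication idea can be sketched as follows, under stated assumptions: the abstract does not give the exact pixel features or decision rule, so the code below simply replicates the cached semantic decision for pixels whose color has barely changed since the last semantically-processed frame. The function name and the threshold are illustrative.

```python
# Hedged sketch of asynchronous replication of semantic decisions.
import numpy as np

def asbs_step(frame, bgs_mask, last_sem_frame, last_sem_mask, tau=15.0):
    """frame: current HxWx3 image; bgs_mask: decision of the (fast)
    background subtraction algorithm (True = foreground);
    last_sem_frame / last_sem_mask: image and semantic decision
    stored at the last semantic update."""
    out = bgs_mask.copy()
    # Per-pixel color distance to the frame of the last semantic decision
    dist = np.linalg.norm(frame.astype(float) - last_sem_frame.astype(float),
                          axis=2)
    stable = dist <= tau                  # pixel barely changed since then
    out[stable] = last_sem_mask[stable]   # replicate the cached decision
    return out
```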

Peer Reviewed
A Context-Aware Loss Function for Action Spotting in Soccer Videos
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Giancola, Silvio et al

in IEEE Conference on Computer Vision and Pattern Recognition. Proceedings (2020, June)

In video understanding, action spotting consists in temporally localizing human-induced events annotated with single timestamps. In this paper, we propose a novel loss function that specifically considers the temporal context naturally present around each action, rather than focusing on the single annotated frame to spot. We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of 12.8% over the baseline. We show the generalization capability of our loss for generic activity proposals and detection on ActivityNet, by spotting the beginning and the end of each activity. Furthermore, we provide an extended ablation study and display challenging cases for action spotting in soccer videos. Finally, we qualitatively illustrate how our loss induces a precise temporal understanding of actions and show how such semantic knowledge can be used for automatic highlights generation.

Peer Reviewed
Multimodal and multiview distillation for real-time player detection on a football field
Cioppa, Anthony ULiege; Deliège, Adrien ULiege; Noor, Ul Huda et al

in IEEE Conference on Computer Vision and Pattern Recognition. Proceedings (2020, June)

Monitoring the occupancy of public sports facilities is essential to assess their use and to motivate their construction in new places. In the case of a football field, the area to cover is large, thus several regular cameras should be used, which makes the setup expensive and complex. As an alternative, we developed a system that detects players from a unique cheap and wide-angle fisheye camera assisted by a single narrow-angle thermal camera. In this work, we train a network in a knowledge distillation approach in which the student and the teacher have different modalities and a different view of the same scene. In particular, we design a custom data augmentation combined with a motion detection algorithm to handle the training in the region of the fisheye camera not covered by the thermal one. We show that our solution is effective in detecting players on the whole field filmed by the fisheye camera. We evaluate it quantitatively and qualitatively in the case of an online distillation, where the student detects players in real time while being continuously adapted to the latest video conditions.

Principes des télécommunications analogiques et numériques: manuel des répétitions
Van Droogenbroeck, Marc ULiege; Latour, Philippe ULiege; Wagner, Jean-Marc et al

Learning material (2020)

Foreground and background detection method
Van Droogenbroeck, Marc ULiege; Braham, Marc ULiege; Pierard, Sébastien ULiege

Patent (2020)

The present invention concerns a method for assigning a pixel to one of a foreground pixel set and a background pixel set. In this method, if a first condition is met the pixel is assigned to the background pixel set, and if the first condition is not met and a second condition is met, the pixel is assigned to the foreground pixel set. The method comprises a step (S100) of calculating a probability that the pixel belongs to a foreground-relevant object according to a semantic segmentation algorithm, the first condition is that this probability that the pixel belongs to a foreground-relevant object does not exceed a first predetermined threshold, and the second condition is that a difference between this probability that the pixel belongs to a foreground-relevant object and a baseline probability for the pixel equals or exceeds a second predetermined threshold.

Peer Reviewed
Ghost Loss to Question the Reliability of Training Data
Deliège, Adrien ULiege; Cioppa, Anthony ULiege; Van Droogenbroeck, Marc ULiege

in IEEE Access (2020), 8

Supervised image classification problems rely on training data assumed to have been correctly annotated; this assumption underpins most works in the field of deep learning. In consequence, during its training, a network is forced to match the label provided by the annotator and is not given the flexibility to choose an alternative to inconsistencies that it might be able to detect. Therefore, erroneously labeled training images may end up “correctly” classified in classes to which they do not actually belong. This may reduce the performances of the network and thus incites practitioners to build more complex networks without even checking the quality of the training data. In this work, we question the reliability of the annotated datasets. For that purpose, we introduce the notion of ghost loss, which can be seen as a regular loss that is zeroed out for some predicted values in a deterministic way and that allows the network to choose an alternative to the given label without being penalized. After a proof of concept experiment, we use the ghost loss principle to detect confusing images and erroneously labeled images in well-known training datasets (MNIST, Fashion-MNIST, SVHN, CIFAR10) and we provide a new tool, called sanity matrix, for summarizing these confusions.
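One plausible instantiation of the ghost-loss idea can be sketched as follows. The abstract only states that a regular loss is zeroed out for some predicted values in a deterministic way, so the sketch below zeroes the squared-error term of the strongest non-target class; the paper's actual formulation may differ, and the function name is hypothetical.

```python
# Hedged sketch: a per-class squared error in which the term of the
# strongest non-target prediction (the "ghost") is zeroed, so the
# network may favour an alternative class without being penalized.
import numpy as np

def ghost_loss(pred, target_onehot):
    """pred, target_onehot: 1-D arrays of class scores / one-hot label."""
    err = (pred - target_onehot) ** 2
    # Index of the strongest prediction among non-target classes
    masked = np.where(target_onehot == 1, -np.inf, pred)
    ghost = int(np.argmax(masked))
    err[ghost] = 0.0          # deterministic zeroing of the ghost term
    return float(err.sum())
```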
