Eprint already available on another site (E-prints, working papers and research blog)
Looking for a reference for large datasets: relative reliability of visual and automatic sleep scoring
Muto, Vincenzo; Berthomier, Christian; Schmidt, Christina et al.
2019
 

Files


Full Text
Muto2019Lookingforareferenceforlargedatasetsrelativereliabilityofvisualandautomaticsleepscoring.pdf
Publisher postprint (341.58 kB)


Details



Keywords :
Sleep; Autoscoring; Scoring variability
Abstract :
[en] Study Objectives: New challenges in sleep science require describing fine-grained phenomena or dealing with large datasets. Besides the human-resource challenge of scoring huge datasets, inter- and intra-expert variability may also reduce the sensitivity of such studies. Searching for a way to disentangle the variability induced by the scoring method from the actual variability in the data, visual and automatic sleep scorings of healthy individuals were examined.
Methods: A first dataset (DS1, 4 recordings) scored by 6 experts plus an autoscoring algorithm was used to characterize inter-scoring variability. A second dataset (DS2, 88 recordings) scored a few weeks later was used to investigate intra-expert variability. Percentage agreements and Conger's kappa were derived from epoch-by-epoch comparisons on pairwise, consensus and majority scorings.
Results: On DS1 the number of epochs of agreement decreased as the number of experts increased, in both majority and consensus scoring, with agreement ranging from 86% (pairwise) to 69% (all experts). Adding autoscoring to the visual scorings changed the kappa value from 0.81 to 0.79. Agreement between the expert consensus and autoscoring was 93%. On DS2, intra-expert variability was evidenced by a systematic decrease in kappa between autoscoring and each single expert across datasets (0.75 to 0.70).
Conclusions: Visual scoring induces inter- and intra-expert variability, which is difficult to address, especially in big-data studies. When proven reliable and perfectly reproducible, autoscoring methods can cope with intra-scorer variability, making them a sensible option when dealing with large datasets.
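As an illustration of the two agreement measures named in the abstract, the sketch below computes epoch-by-epoch pairwise percentage agreement and Conger's multi-rater kappa in Python with NumPy. It is not the authors' implementation; the data layout (one integer stage code per epoch and per scorer) and all names are assumptions made for the example.

# Minimal sketch, assuming scores are given as an (n_epochs, n_scorers) array
# of integer sleep-stage codes, one column per scorer.
import numpy as np
from itertools import combinations

def pairwise_percent_agreement(labels: np.ndarray) -> float:
    """Mean proportion of epochs on which each pair of scorers agrees."""
    n_epochs, n_scorers = labels.shape
    agreements = [
        np.mean(labels[:, g] == labels[:, h])
        for g, h in combinations(range(n_scorers), 2)
    ]
    return float(np.mean(agreements))

def conger_kappa(labels: np.ndarray) -> float:
    """Conger's (1980) multi-rater generalisation of Cohen's kappa.

    Chance agreement is averaged over scorer pairs using each scorer's own
    stage marginals; kappa = (Po - Pe) / (1 - Pe). With two scorers this
    reduces to Cohen's kappa.
    """
    n_epochs, n_scorers = labels.shape
    stages = np.unique(labels)
    # Per-scorer marginal distribution over stages, shape (n_scorers, n_stages).
    marginals = np.array([
        [np.mean(labels[:, g] == s) for s in stages] for g in range(n_scorers)
    ])
    p_o = pairwise_percent_agreement(labels)
    pair_pe = [
        float(np.dot(marginals[g], marginals[h]))
        for g, h in combinations(range(n_scorers), 2)
    ]
    p_e = float(np.mean(pair_pe))
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: 3 hypothetical scorers over 5 epochs (0=Wake, 1=N1, 2=N2, 3=N3).
scores = np.array([[0, 0, 0], [1, 1, 2], [2, 2, 2], [3, 3, 3], [1, 1, 1]])
print(pairwise_percent_agreement(scores), conger_kappa(scores))

Consensus and majority scorings, as used in the paper, would be derived separately (e.g. a majority vote per epoch) before being compared against individual scorers or the autoscoring output with the same measures.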
Disciplines :
Neurology
Neurosciences & behavior
Computer science
Life sciences: Multidisciplinary, general & others
Human health sciences: Multidisciplinary, general & others
Author, co-author :
Muto, Vincenzo; Université de Liège - ULiège > CRC In vivo Imaging-Sleep and chronobiology
Berthomier, Christian; PHYSIP, Paris, France
Schmidt, Christina; Université de Liège - ULiège > CRC In vivo Imaging-Sleep and chronobiology
Vandewalle, Gilles; Université de Liège - ULiège > CRC In vivo Imaging-Sleep and chronobiology
Jaspar, Mathieu; Université de Liège - ULiège > Département de Psychologie > Ergonomie et intervention au travail
Devillers, Jonathan
Gaggioni, Giulia
Chellappa, Sarah Laxhmi
Meyer, Christelle
Phillips, Christophe; Université de Liège - ULiège > CRC In vivo Im.-Neuroimaging, data acquisition & processing
Salmon, Eric; Université de Liège - ULiège > Département des sciences cliniques > Neuroimagerie des troubles de la mémoire et revalid. cogn.
Berthomier, Pierre; PHYSIP, Paris, France
Prado, J.; PHYSIP, Paris, France
Benoit, O.; PHYSIP, Paris, France
Brandewinder, Marie; PHYSIP, Paris, France
Mattout, J.; Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR 5292, University of Lyon 1, Lyon, France
Maquet, Pierre; Centre Hospitalier Universitaire de Liège - CHU > Département de médecine interne > Service de neurologie
Language :
English
Title :
Looking for a reference for large datasets: relative reliability of visual and automatic sleep scoring
Publication date :
13 March 2019
Available on ORBi :
since 09 January 2020
