Article (Scientific journals)
Automatic Sleep Stage Classification Using Nasal Pressure Decoding Based on a Multi-Kernel Convolutional BiLSTM Network.
Lee, Minji; Kang, Hyeokmook; Yu, Seong-Hyun et al.
2024. In IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32, p. 2533-2544

Files


Full Text
Automatic_Sleep_Stage_Classification_Using_Nasal_Pressure_Decoding_Based_on_a_Multi-Kernel_Convolutional_BiLSTM_Network.pdf
Publisher postprint (7.37 MB) Creative Commons License - Attribution



Details



Keywords :
Humans; Male; Adult; Female; Young Adult; Nose/physiology; Healthy Volunteers; Sleep, REM/physiology; Wakefulness/physiology; Sleep Stages/physiology; Neural Networks, Computer; Polysomnography; Deep Learning; Pressure; Algorithms; biomedical application; healthcare; nasal pressure; Sleep stage classification; Biomedical applications; F1 scores; Kappa values; Multi-kernel; Rapid eye movement; Sleep stages classifications; Nose; Sleep Stages; Sleep, REM; Wakefulness; Internal Medicine; Neuroscience (all); Biomedical Engineering; Rehabilitation
Abstract :
[en] Sleep quality is an essential component of a healthy human life, yet sleep disorders such as sleep apnea are widespread. The gold standard for investigating sleep and its dysfunction is polysomnography, which draws on an extensive range of signals for sleep stage classification. However, full polysomnography requires many sensors, which makes the setup cumbersome and the sleep itself uncomfortable, and therefore imposes a significant burden. In this study, sleep stage classification was performed using a single nasal pressure signal, dramatically reducing the complexity of the recording and thereby improving the much-needed clinical applicability. Specifically, we propose a deep learning architecture consisting of multi-kernel convolutional neural networks and a bidirectional long short-term memory network for sleep stage classification. Sleep stages of 25 healthy subjects were classified from nasal pressure into three classes (wake, rapid eye movement (REM), and non-REM) and four classes (wake, REM, light sleep, and deep sleep). Using leave-one-subject-out cross-validation, the 3-class model achieved an overall accuracy of 0.704, an F1-score of 0.490, and a kappa value of 0.283; the 4-class model achieved an overall accuracy of 0.604, an F1-score of 0.349, and a kappa value of 0.217. These results exceeded those of four comparative models, including the class-wise F1-scores. This demonstrates the feasibility of a sleep stage classification model based solely on easily applied, highly practical nasal pressure recordings, which could also be combined with interventions to help treat sleep-related diseases.
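
The abstract describes a multi-kernel convolutional BiLSTM applied to a single nasal-pressure channel. As a rough illustration of that kind of architecture (not the authors' exact model), the following PyTorch sketch runs parallel 1-D convolutional branches with different kernel sizes over one sleep epoch, concatenates their feature maps, and passes them through a bidirectional LSTM and a linear classifier for the 3-class problem (wake, REM, non-REM). The kernel sizes, filter counts, pooling factor, hidden size, sampling rate, and epoch length below are assumptions made for illustration only.

# Hedged sketch of a multi-kernel CNN + BiLSTM sleep-stage classifier.
# All hyperparameters (kernel sizes, channel counts, 100 Hz sampling rate,
# 30-s epochs) are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class MultiKernelConvBiLSTM(nn.Module):
    def __init__(self, n_classes=3, kernel_sizes=(3, 7, 15),
                 n_filters=32, lstm_hidden=64):
        super().__init__()
        # One 1-D convolutional branch per kernel size, applied in parallel
        # to the single-channel nasal-pressure epoch.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, n_filters, k, padding=k // 2),
                nn.BatchNorm1d(n_filters),
                nn.ReLU(),
                nn.MaxPool1d(4),
            )
            for k in kernel_sizes
        ])
        # Bidirectional LSTM over the concatenated multi-kernel feature maps.
        self.bilstm = nn.LSTM(
            input_size=n_filters * len(kernel_sizes),
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, x):
        # x: (batch, 1, samples per epoch), e.g. a 30-s epoch at 100 Hz = 3000 samples.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = feats.permute(0, 2, 1)         # (batch, time, features) for the LSTM
        out, _ = self.bilstm(feats)
        return self.classifier(out[:, -1, :])  # logits for wake / REM / non-REM


if __name__ == "__main__":
    model = MultiKernelConvBiLSTM(n_classes=3)
    dummy_epoch = torch.randn(8, 1, 3000)      # batch of 8 random 30-s epochs
    print(model(dummy_epoch).shape)            # torch.Size([8, 3])

In the leave-one-subject-out protocol reported in the abstract, each of the 25 subjects would in turn serve as the test set while the model is trained on the remaining 24, with overall accuracy, F1-score, and kappa aggregated across the 25 folds.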
Disciplines :
Neurosciences & behavior
Author, co-author :
Lee, Minji ;  The Catholic University of Korea, Department of Biomedical Software Engineering, Bucheon, South Korea
Kang, Hyeokmook;  Hanwha Systems, Soldier Combat Research and Development Team, Land Combat System Center, Seoul, South Korea
Yu, Seong-Hyun;  Chungbuk National University, Department of Computer Science, Cheongju, South Korea
Cho, Heeseung ;  Korea University, Department of Artificial Intelligence, Seoul, South Korea
Oh, Junhyoung;  Seoul Women's University, Division of Information Security, Seoul, South Korea
van der Lande, Glenn ;  Université de Liège - ULiège > GIGA
Gosseries, Olivia  ;  Université de Liège - ULiège > GIGA > GIGA Neurosciences - Coma Science Group
Jeong, Ji-Hoon ;  Chungbuk National University, Department of Computer Science, Cheongju, South Korea
Language :
English
Title :
Automatic Sleep Stage Classification Using Nasal Pressure Decoding Based on a Multi-Kernel Convolutional BiLSTM Network.
Publication date :
2024
Journal title :
IEEE Transactions on Neural Systems and Rehabilitation Engineering
ISSN :
1534-4320
eISSN :
1558-0210
Publisher :
Institute of Electrical and Electronics Engineers Inc., United States
Volume :
32
Pages :
2533 - 2544
Peer reviewed :
Peer Reviewed verified by ORBi
Funders :
NRF - National Research Foundation of Korea
CBNU - Chungbuk National University
Funding text :
This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government [Ministry of Science and ICT (MSIT)] (Artificial Intelligence Innovation Hub) under Grant RS-2021-II212068; in part by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) under Grants RS-2023-00252624 and RS-2024-00336880; and in part by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) through the Agriculture and Food Convergence Technologies Program for Research Manpower Development Program funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) under Grant RS-2024-00398561.
Available on ORBi :
since 28 February 2025

Statistics

Number of views :
47 (1 by ULiège)
Number of downloads :
51 (1 by ULiège)
Scopus citations® :
8
Scopus citations® without self-citations :
3
OpenCitations :
0
OpenAlex citations :
8
