Sensors
  • Article
  • Open Access

30 July 2019

A Novel Human Respiration Pattern Recognition Using Signals of Ultra-Wideband Radar Sensor

1 Department of Computer Engineering, Gachon University, Seongnam 13120, Korea
2 Department of Energy IT, Gachon University, Seongnam 13120, Korea
* Author to whom correspondence should be addressed.
This article belongs to the Section Biosensors

Abstract

Recently, various studies have been conducted on sleep quality in the medical and healthcare fields. Sleep analysis in these areas is typically performed through polysomnography. However, since polysomnography involves attaching sensor devices to the body, the inconvenience and sensitivity of physical contact can make accurate sleep measurement difficult. In recent years, research has therefore focused on sensors such as ultra-wideband (UWB) radar, which can acquire bio-signals even in a non-contact environment. In this paper, we acquire respiratory signal data using UWB radar and propose a 1D CNN (one-dimensional convolutional neural network) model that classifies and recognizes five respiration patterns (Eupnea, Bradypnea, Tachypnea, Apnea, and Motion) from the signal data. In the proposed model, we also find the optimal parameter range through recognition rate experiments over combinations of parameters (layer depth, kernel size, and number of kernels). The average recognition rate of the five breathing patterns obtained with the proposed method was 93.9%, which is about 3%~13% higher than that of conventional methods (LDA, SVM, and MLP).

1. Introduction

There are a variety of existing methods for testing human sleep, both to diagnose diseases and to improve sleep quality. Sleep analysis for disease diagnosis is mainly performed through polysomnography. Recently, in the healthcare field, sleep-breathing analysis has also been studied to improve sleep quality.
Polysomnography measures several items through sensors and devices. The basic items are sleep phase, arousal, respiratory flow, respiratory ability, and blood oxygen saturation [1,2]. The sensor devices used to measure these items are either attached directly to the body or applied to the oral and nasal cavities. Polysomnography can therefore be performed only at facilities equipped with such devices. As a result, the environment in which the user normally sleeps is changed, and it may be difficult to obtain accurate data because of shallow or poor sleep. To minimize these problems, portable sleep inspection devices (PSG Type II and III) are used, but they can also affect sleep because multiple sensors (nasal pressure transducer, chest belt, etc.) are attached to the body [3,4,5]. Therefore, when collecting breathing-related information, non-contact sensors can reduce sleep disturbance compared with conventional methods.
In the current healthcare field, research is underway to analyze sleep quality by detecting sleep apnea or snoring with various sensors [6,7,8,9]. However, the healthcare sector analyzes sleep using less information than polysomnography, so the information provided to users is limited. Therefore, various data acquisition methods are required for accurate sleep quality analysis. In recent years, devices that measure data with non-contact sensors have emerged for user convenience, in place of contact-type sensors.
The UWB (Ultra-WideBand) sensor, which in the past was used as a method for wireless communication, has recently been adopted as a non-contact sensor and implemented as a radar system, and it is used in healthcare research as well as in obstacle detection [10,11,12,13,14]. In particular, UWB radar is known to be able to detect respiratory and pulse signals owing to its signal characteristics, and it has been used to detect apnea and respiratory rate [15,16,17,18,19]. However, in order for it to eventually be used for polysomnography or healthcare, it is necessary to recognize different breathing patterns, not only to measure the number of breaths per minute or to detect apnea.
Recently, recognition-rate enhancement methods using artificial neural networks have been studied for signal pattern recognition [20,21,22,23,24]. Existing UWB radar-based methods for recognizing apnea patterns rely on classical machine learning algorithms or on breathing-frequency detection [15,16,17,18,19]. Recognition by detecting the respiration frequency performs well only when the respiration signal is extracted smoothly and without noise; if a human motion signal resembles the respiration signal, it is difficult to recognize the correct pattern. If artificial neural network technology, which has been actively studied recently, is applied to breathing pattern recognition, better performance than conventional methods can be expected.
Therefore, in this paper, we propose a novel method that learns and detects five signal patterns, three general respiration patterns (eupnea, bradypnea, and tachypnea), apnea, and body motion, using a one-dimensional convolutional neural network (1D CNN). The proposed method constructs an experimental set of neural network parameters and finds an appropriate parameter range by testing all cases in the set. We also show the superiority of the proposed method over the conventional machine learning algorithms LDA, SVM, and MLP.

3. Proposed Method

3.1. Layer Design of 1-D CNN

The structure of the proposed 1D convolutional neural network is shown in Figure 6. The overall structure, an input layer, convolutional layers, and a dense layer, is that of a typical convolutional neural network. The feature map extracted from the last convolutional layer is serialized and then connected to the fully-connected layer, with dropout randomly selecting only a specified proportion of the features. Dropout is a technique that randomly selects only part of the features, which improves performance compared with using all features [33,34].
Figure 6. Proposed 1D Convolutional Neural Network structure.
Because the features are selected at random in each training run, the output layer results differ from run to run. In this paper, we set the dropout ratio to 0.6 and use only 60% of the features during training.
In addition, we use the depth of the convolutional layers, the kernel size of each convolutional layer, the number of kernels, and the number of neurons in the fully-connected layer as the parameters for finding the optimal model.
To construct the 1D CNN with the maximum recognition rate, experiments are performed over various combinations of these parameters, and an optimal parameter range is obtained as a result.
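For concreteness, the following sketch builds a 1D CNN of the kind described above, parameterized by the convolutional layer depth (CLD), kernel size (KS), kernel count (KC), and dense-layer neuron count (DLNC), with the dropout ratio of 0.6 applied before the fully-connected layer. The framework (Keras), the pooling layers, and the compile settings are our assumptions and are not stated in the paper; the 250-sample input length corresponds to the 10 s windows described in Section 4.2.

```python
# Hypothetical sketch of the described 1D CNN structure (Keras assumed).
# depth, kernel_size, kernel_count, and dense_neurons are the parameters
# searched by the optimization algorithms below.
from tensorflow import keras
from tensorflow.keras import layers, models

def build_1d_cnn(depth, kernel_size, kernel_count, dense_neurons,
                 input_length=250, num_classes=5):
    model = models.Sequential()
    model.add(keras.Input(shape=(input_length, 1)))
    for _ in range(depth):                      # CLD: convolutional layer depth
        model.add(layers.Conv1D(kernel_count,   # KC: number of kernels
                                kernel_size,    # KS: kernel size
                                activation='relu', padding='same'))
        model.add(layers.MaxPooling1D(2))       # pooling is an assumption
    model.add(layers.Flatten())                 # serialize the feature map
    model.add(layers.Dropout(0.6))              # dropout ratio used in the paper
    model.add(layers.Dense(dense_neurons, activation='relu'))   # DLNC
    model.add(layers.Dense(num_classes, activation='softmax'))  # 5 patterns
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```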
The algorithm for finding the optimal parameters consists of two steps. The first-stage algorithm finds the optimal convolutional layer depth, and the second-stage algorithm finds the optimal parameter range at the depth determined in the first stage. To find the optimal parameter range, the experiment is repeated N times at the depth determined in step 1, and the parameters of the highest-recognition-rate case in each repetition are included in the optimal parameter range.
Algorithm 1 first constructs a set of experimental parameters to find the depth of the convolutional layer. Then, the accuracy and the average accuracy over all cases (z) of the experimental parameter set are calculated while increasing the depth of the convolutional layer within a range that keeps the learning time acceptable. Finally, the depth value at which the average accuracy per depth is maximal is used as the optimal depth parameter ($D_{opt}$). A code sketch of this procedure is given after the algorithm listing.
Algorithm 1 Algorithm for finding the optimal convolutional layer depth parameter ($D_{opt}$)
(1)
Set the convolutional layer depth parameter ($CLD_n$).
$CLD_n,\ (CLD_n \in CLD,\ n = 1, 2, 3, \ldots, T)$ when $CLD = \{1, 2, 3, \ldots, T\}$
(2)
Set the kernel size ($KS_j$).
$KS_j,\ (KS_j \in KS,\ j = 1, \ldots, 10)$ when $KS = \{5, 9, 13, 17, 21, 25, 29, 33, 37, 41\}$
(3)
Set the kernel count ($KC_k$).
$KC_k,\ (KC_k \in KC,\ k = 1, \ldots, 5)$ when $KC = \{32, 64, 128, 256, 512\}$
(4)
Set the dense layer neuron count ($DLNC_l$).
$DLNC_l,\ (DLNC_l \in DLNC,\ l = 1, \ldots, 4)$ when $DLNC = \{256, 512, 1024, 2048\}$
(5)
Define the convolutional layer parameter combination ($Conv_{nth}$, $n = 1, \ldots, T$).
$\odot$: combination over all cases of the elements of the sets.
$Conv_{nth}: \{KS_j \odot KC_k\}_{j = 1, \ldots, 10;\ k = 1, \ldots, 5}$
(6)
Define the dense layer parameter ($Dense_{mth}$, $m = 1, 2$).
$Dense_{mth}: \{DLNC_l\}_{l = 1, \ldots, 4}$
(7)
Define $Param_{nth}$ by the combination ($\odot$) of the parameters of (1), (5), and (6) at convolutional layer depth $n$.
$Param_{nth}: \{CLD_n \odot Conv_{nth} \odot Dense_{mth}\}_{n = 1, 2, 3;\ m = 1, 2}$
(8)
The recognition rate of each parameter combination and the average recognition rate at convolutional layer depth $n$ are calculated as follows.
Begin Loop:
for (n = 1; n <= T; n++)
Configure the $z$ parameter combinations $CP_{nth}$ (combination of parameters) for all cases at depth $n$.
$CP_{nth}: \{Param_{nth}^{1}, \ldots, Param_{nth}^{z}\}$
Configure $Acc_{nth}$ by calculating the recognition rate of each $Param_{nth}^{p}$ ($p = 1, \ldots, z$) in $CP_{nth}$.
$Acc_{nth}: \{acc(Param_{nth}^{1}), \ldots, acc(Param_{nth}^{z})\}$
Compute the average recognition rate $\mu Acc_{nth}$ and store it together with the depth value $n$ in $Result_n$.
$\mu Acc_{nth} = \frac{1}{z} \sum_{p=1}^{z} acc(Param_{nth}^{p})$
$Result_n: \{n,\ \mu Acc_{nth}\}$
End Loop
(9)
Find the maximum $\mu Acc_{nth}$ among the $T$ results and set the corresponding depth value $n$ as $D_{opt}$.
$D_{opt} = \{n \mid n \in Result_n,\ \arg\max_n(\mu Acc_{nth})\}$
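The following is a minimal code sketch of Algorithm 1, assuming the hypothetical build_1d_cnn helper above and generic training/validation arrays; the training settings are placeholders. For brevity the sketch shares one kernel setting across all convolutional layers, whereas the algorithm enumerates the settings of each layer separately.

```python
# Sketch of Algorithm 1: choose the convolutional layer depth whose average
# recognition rate over all kernel-size / kernel-count / dense-neuron cases
# is highest. build_1d_cnn is the hypothetical helper sketched earlier;
# x_* / y_* arrays and the training settings are assumptions.
from itertools import product
import numpy as np

KS = [5, 9, 13, 17, 21, 25, 29, 33, 37, 41]   # kernel sizes
KC = [32, 64, 128, 256, 512]                  # kernel counts
DLNC = [256, 512, 1024, 2048]                 # dense layer neuron counts

def find_optimal_depth(x_train, y_train, x_val, y_val, max_depth=3):
    mean_acc = {}                              # average accuracy per depth
    for depth in range(1, max_depth + 1):      # CLD = 1 .. T
        accs = []
        for ks, kc, dlnc in product(KS, KC, DLNC):
            model = build_1d_cnn(depth, ks, kc, dlnc)
            model.fit(x_train, y_train, epochs=40, batch_size=10, verbose=0)
            _, acc = model.evaluate(x_val, y_val, verbose=0)
            accs.append(acc)
        mean_acc[depth] = np.mean(accs)
    # D_opt: the depth with the maximum average recognition rate
    return max(mean_acc, key=mean_acc.get)
```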
Algorithm 2 performs N repetitions of training to find stable optimal parameters at the convolutional layer depth determined above, and derives the optimal parameter range from the parameter values of the highest-recognition-rate case in each repetition. A code sketch follows the algorithm listing.
Algorithm 2 Algorithm for finding the optimal range of 1D CNN parameters
(1)
Within the optimal depth $D_{opt}$, define $Param_{D_{opt}}$ by the combination ($\odot$) of the parameters $Conv_{nth}$ and $Dense_{mth}$.
$Param_{D_{opt}} = \{D_{opt} \odot Conv_{nth} \odot Dense_{mth}\}$
(2)
At the optimal depth $D_{opt}$, the recognition rate of every parameter combination is computed, and this is repeated $N$ times as follows.
Begin Loop:
for (t = 1; t <= N; t++) {
Construct a parameter set $CP_{D_{opt}}$ (combination of parameters) for all $z$ cases.
$CP_{D_{opt}}: \{Param_{D_{opt}}^{1}, \ldots, Param_{D_{opt}}^{z}\}$
Configure $Acc_{D_{opt}}$ by calculating the recognition rate of each $Param_{D_{opt}}^{p}$ ($p = 1, \ldots, z$) in $CP_{D_{opt}}$.
$Acc_{D_{opt}}: \{acc(Param_{D_{opt}}^{1}), \ldots, acc(Param_{D_{opt}}^{z})\}$
Store in $Result_t$ the combination with the maximum recognition rate among the $z$ $Param_{D_{opt}}$.
$Result_t: \{Param_{D_{opt}}^{p} \mid Param_{D_{opt}}^{p} \in CP_{D_{opt}},\ \arg\max_p(acc(Param_{D_{opt}}^{p}))\}$, $p = 1, \ldots, z$
}
End Loop
(3)
From $Result_t$ ($t = 1, \ldots, N$), find the optimal parameter ranges ($KS_{opt}$, $KC_{opt}$, $DLNC_{opt}$).
$KS_t = \{KS \mid KS \in Param_{D_{opt}}^{p},\ Param_{D_{opt}}^{p} \in Result_t\}$,
$KC_t = \{KC \mid KC \in Param_{D_{opt}}^{p},\ Param_{D_{opt}}^{p} \in Result_t\}$,
$DLNC_t = \{DLNC \mid DLNC \in Param_{D_{opt}}^{p},\ Param_{D_{opt}}^{p} \in Result_t\}$
$\min(KS_t) \le KS_{opt} \le \max(KS_t)$
$\min(KC_t) \le KC_{opt} \le \max(KC_t)$
$\min(DLNC_t) \le DLNC_{opt} \le \max(DLNC_t)$
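A minimal sketch of Algorithm 2 under the same assumptions; best_params_for is hypothetical and stands in for one complete pass over all z parameter combinations at depth D_opt (as in the Algorithm 1 sketch).

```python
# Sketch of Algorithm 2: repeat the full search N times at the chosen depth
# D_opt and take the min/max of the winning parameters as the optimal ranges.
# best_params_for(d_opt) is a hypothetical callable that returns the
# (ks, kc, dlnc) tuple with the highest recognition rate for one repetition.
def find_optimal_ranges(d_opt, n_iterations, best_params_for):
    winners = [best_params_for(d_opt) for _ in range(n_iterations)]  # Result_t
    ks_vals = [w[0] for w in winners]
    kc_vals = [w[1] for w in winners]
    dlnc_vals = [w[2] for w in winners]
    return {
        "KS_opt": (min(ks_vals), max(ks_vals)),
        "KC_opt": (min(kc_vals), max(kc_vals)),
        "DLNC_opt": (min(dlnc_vals), max(dlnc_vals)),
    }
```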

3.2. Optimal 1D CNN Parameters

The performance of the proposed method depends on the combination of CNN parameters, so the goal is to find a combination with optimal performance. To maximize the recognition rate of the proposed network structure, we construct a set of parameter values to be optimized, as shown in Table 2, and test the recognition rate of every possible combination of these values. The experimental parameters are CLD (convolutional layer depth), KS (kernel size of the convolutional layer), KC (kernel count of the convolutional layer), and DLNC (neuron count of the dense layer). Since it is not feasible to test the recognition rate for every natural number each parameter could take, we configure a set of candidate values for each item, as shown in Table 2, and test all cases from these sets.
Table 2. The parameters used in the proposed method.
The data used to find the optimal parameters consist of 1000 data sets, 200 for each of the five respiration patterns, and the ratio of the training set to the validation set is 6:4. Recognition rate experiments are conducted for all combinations at each convolutional layer depth. The number of values each parameter can take according to the convolutional layer depth (CLD) is shown in Table 3. The number of cases is calculated by multiplying the number of values each layer can take: 800 cases when CLD is 1, 40,000 when it is 2, and 2,000,000 when it is 3.
Table 3. Number of parameter combination cases according to the depth of the convolutional layer (CLD).
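The counts in Table 3 follow from the set sizes in Table 2: each convolutional layer contributes 10 kernel sizes × 5 kernel counts = 50 cases, and two dense layers (consistent with $Dense_{mth}$, $m = 1, 2$, in Algorithm 1) contribute 4 × 4 = 16 cases, so the total is 50^CLD × 16. A quick check:

```python
# Verify the combination counts in Table 3, assuming two dense layers each
# drawing from the four DLNC values (Dense_mth, m = 1, 2).
KS = [5, 9, 13, 17, 21, 25, 29, 33, 37, 41]
KC = [32, 64, 128, 256, 512]
DLNC = [256, 512, 1024, 2048]
per_conv_layer = len(KS) * len(KC)      # 10 * 5 = 50 cases per conv layer
dense_cases = len(DLNC) ** 2            # 4 * 4 = 16 dense-layer cases
for cld in (1, 2, 3):
    print(cld, per_conv_layer ** cld * dense_cases)  # 800, 40000, 2000000
```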
Figure 7 shows 100 samples chosen at equal intervals from the recognition rate results over all parameter combinations at each depth. The average recognition rate at CLD = 1 is about 87.4%; at CLD = 2 it is about 90.7%, an increase of about 3.3%. The average recognition rate at CLD = 3 is 92.6%, an increase of about 1.9% over CLD = 2. A large further improvement is not expected even if the number of convolutional layers is increased, and finding optimal parameters at deeper depths would require too much training time because there are too many cases.
Figure 7. Recognition rate according to convolutional layer depth (CLD).
Therefore, in this paper, we experiment with parameter combinations for a convolutional layer depth of 3 ($D_{opt} = 3$) and find the optimal parameters. However, in the proposed 1D CNN structure, even with identical parameters the training result differs slightly each time because of dropout, so a single fixed parameter set cannot be chosen. Instead of a single parameter set, we aim to find an optimal parameter range within the parameter sets of Table 2. To obtain this range, the optimal convolutional layer depth is set to 3 ($D_{opt} = 3$) and a total of 10 iterations are performed (N = 10). In each iteration, as in Figure 8, the parameter set $Param_{D_{opt}}^{p}$ ($p = 1, \ldots, z$) with the maximum value among the two million recognition rate results is extracted.
Figure 8. For 10 training iterations, the parameter combination (Param_max) with the maximum recognition rate (Accuracy_max) is found in each iteration.
In addition, the points with the highest recognition rate in each of the 10 repeated training runs are shown in Figure 9.
Figure 9. Among the results of the 10 repeated training runs at CLD = 3, the points with the highest recognition rate in each repetition are marked.
Figure 9 shows that the training results over the combinations of experimental parameters in Table 2 form a gentle Gaussian-like curve, and that the points with the highest recognition rates are concentrated near its center. The 10 parameter sets with the highest recognition rates in Figure 9 are listed in Table 4. The highest accuracy, 93.76%, is obtained in the 8th iteration.
Table 4. The parameter combination with the maximum recognition rate in each of the 10 repeated training runs.
We can derive the optimal ranges shown in Table 5 by taking the optimal kernel size ($KS_{opt}$) and kernel count ($KC_{opt}$) of the convolutional layers and the optimal neuron count of the dense layer ($DLNC_{opt}$) from Table 4. If parameters selected from Table 5 (the range of optimal parameters) are applied to the proposed neural network, an average accuracy of 93.57% can be expected.
Table 5. Experimental parameters and optimization parameter range of proposed 1-Dimension Convolutional Neural Network (1D CNN) model.

4. Experiments

4.1. Data Gathering Environment

The respiration data were collected using a UWB (ultra-wideband) radar with the specifications shown in Table 6 below. (All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of GRRC-Gachon2017(B02).) The device was connected to a PC via a UART serial interface, and data were collected using a program that measures the respiratory rate per minute and stores the respiratory signal data.
Table 6. Specifications of the UWB radar used for breathing signal data collection.
The program that measures the respiratory rate per minute and stores the respiratory signal data is shown in Figure 10. The upper part of the screen shows the raw or filtered signal, the center shows the respiration signal, and the lower part shows the respiration rate per minute calculated from the respiration signal.
Figure 10. UWB Radar PC program for extracting respiratory signal data.
The display area of the raw and filtered signals shows the amount of signal reflected back at each distance, or the raw signal after filtering. In this paper, a Kalman filter was applied to remove noise from the raw signal, with parameter values of 0.01 and 0.1 used as the default settings. In addition, extracting the respiration signal requires the distance between the sensor and the person; in this experimental environment, the distance between the sensor and the human thorax was set to 20 cm.
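The paper does not state which roles the values 0.01 and 0.1 play; the sketch below assumes they are the process-noise and measurement-noise variances of a simple scalar (constant-level) Kalman filter, which is one common way to denoise such a signal.

```python
import numpy as np

def kalman_smooth(signal, q=0.01, r=0.1):
    """Scalar Kalman filter for denoising a 1-D radar signal.

    q and r are assumed to be the process- and measurement-noise variances;
    the paper only states that 0.01 and 0.1 were used as defaults.
    """
    x_est = float(signal[0])   # initial state estimate
    p_est = 1.0                # initial estimate covariance
    smoothed = np.empty(len(signal), dtype=float)
    for i, z in enumerate(signal):
        p_pred = p_est + q                 # predict (constant-level model)
        k = p_pred / (p_pred + r)          # Kalman gain
        x_est = x_est + k * (z - x_est)    # update with measurement z
        p_est = (1.0 - k) * p_pred
        smoothed[i] = x_est
    return smoothed
```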
The equipment for collecting the respiration signal is shown in Figure 11.
Figure 11. UWB Radar installation environment for collecting respiratory signal data.
A UWB radar device is placed at a distance of 20 cm from the chest of a person lying on a bed in a normal state, and signals for the five breathing patterns are collected from the device. The respiratory rate per minute is displayed in the bottom area of the program while the respiratory signal data are collected, and the signals of eupnea, bradypnea, tachypnea, and apnea are classified based on this rate. Because it is difficult to collect all patterns of data while a person is actually sleeping, the four breathing-pattern signals and the motion signal data were intentionally generated and collected, within the corresponding ranges of breaths per minute, while the subjects were awake.

4.2. Learning and Test Dataset

The patterns of the UWB respiration signals collected for training are shown in Figure 12. Eupnea, bradypnea, and tachypnea are similar in shape, but their signal intensities and cycles differ. The apnea and movement patterns are clearly distinguishable from the other breathing patterns.
Figure 12. Five signal patterns for learning.
The respiration signal data are stored by the program as a frame number, a time stamp, and the respiration signal values, as shown in Figure 13, and the number of data items stored at one point is 660. The UWB radar used in the experiment generates data at 25 frames per second. To acquire the experimental data used for training, about 15,000 data items are stored by measuring each pattern for 10 min.
Figure 13. Breathing signal data extracted through UWB Radar.
The stored data are not used for training immediately, but only after the data length has been processed. In general, devices that detect the respiratory state in polysomnography measure the respiratory amplitude for at least 10 s to distinguish between respiratory states [35]. Therefore, only 250 of the 660 data items, corresponding to 10 s, are used. In addition, as shown in Figure 14, time-shifting every 0.5 s is applied to each breathing-pattern recording to form training data of various shapes. Through this process, the training and test data sets are organized so that each data item contains only one pattern. Finally, 500 data sets are constructed for each breathing pattern.
Figure 14. Construction of learning data by various time-shifting in one breathing pattern.
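As an illustration of this windowing step, the following sketch cuts 10 s (250-sample) windows with a 0.5 s shift from a longer respiration sequence. The step of 12 frames (0.5 s at 25 fps, rounded) and the array layout are assumptions; the paper does not state the exact number of windows taken per recording.

```python
import numpy as np

FRAME_RATE = 25                 # UWB radar frames per second
WINDOW = 10 * FRAME_RATE        # 250 samples = 10 s, as used in the paper
STEP = round(0.5 * FRAME_RATE)  # 0.5 s shift (12 frames after rounding)

def time_shift_windows(signal, window=WINDOW, step=STEP):
    """Cut overlapping 10 s windows out of one breathing-pattern recording."""
    windows = []
    for start in range(0, len(signal) - window + 1, step):
        windows.append(signal[start:start + window])
    return np.asarray(windows)

# Example: a 660-sample recording would yield (660 - 250) // 12 + 1 = 35 windows.
```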
A total of 10 people participated in the data collection, and each subject's age, height, weight, and respiratory signal data for the five patterns were collected as shown in Table 7. From each participant, 250 data items were collected, 50 for each pattern, for a total of 2500 data items. The volunteers range from their mid-20s to early 40s and roughly span the average to obese South Korean physique.
Table 7. Data provider’s body information and five respiration patterns.

4.3. Comparison of Accuracy with Other Recognition Methods

To test the proposed 1D CNN model, we set the parameter values shown in Table 8 by selecting parameters from the parameter ranges in Table 5. The result of training on each breathing pattern is shown in Figure 15. For training, the number of epochs is set to 40 and the batch size to 10; training is almost complete by epoch 15, after which there is almost no fluctuation in accuracy or loss. After training, the final validation accuracy was 0.948 and the validation loss was 0.183.
Table 8. To evaluate the performance of the proposed 1-D CNN model, each parameter value is selected from the optimal parameter range.
Figure 15. Performance results obtained using the optimal parameters of the proposed 1D CNN model.
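For reference, a hypothetical training call matching these settings (40 epochs, batch size 10), using the build_1d_cnn sketch and the window arrays assumed earlier; the specific kernel size, kernel count, and dense neuron count here are placeholder picks from the search space, not the values of Table 8.

```python
# Hypothetical training call with the settings reported above (40 epochs,
# batch size 10). Parameter values are illustrative picks, not Table 8 values;
# x_train / y_train / x_val / y_val are assumed to exist.
model = build_1d_cnn(depth=3, kernel_size=21, kernel_count=128,
                     dense_neurons=1024)
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=40, batch_size=10)
print(history.history['val_accuracy'][-1])  # final validation accuracy
```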
To evaluate the performance of the proposed method, the recognition rates for the respiratory patterns were compared with those of the traditional machine learning algorithms LDA, SVM, and MLP. The data set used for the performance evaluation contains a total of 2500 data items, 500 for each breathing pattern; 1500 of these were used for training and the remaining 1000 for testing. The recognition rates of the proposed method and the conventional methods are shown as confusion matrices in Figure 16.
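A minimal sketch of such a baseline comparison using scikit-learn; the baseline hyperparameters are not given in the paper, so library defaults are assumed, and the inputs are taken to be the flattened 250-sample windows with integer pattern labels.

```python
# Sketch of the baseline comparison: train LDA, SVM, and MLP on the same
# 1500/1000 split and report accuracy and the confusion matrix.
# Hyperparameters are scikit-learn defaults; the paper does not specify them.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

def compare_baselines(x_train, y_train, x_test, y_test):
    baselines = {'LDA': LinearDiscriminantAnalysis(),
                 'SVM': SVC(),
                 'MLP': MLPClassifier(max_iter=500)}
    for name, clf in baselines.items():
        clf.fit(x_train, y_train)
        y_pred = clf.predict(x_test)
        print(name, accuracy_score(y_test, y_pred))
        print(confusion_matrix(y_test, y_pred))
```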
Figure 16. Comparison of recognition rate for breathing pattern among the proposed method and the traditional methods.
Comparing the recognition results for each breathing pattern, the conventional methods tend to confuse eupnea, bradypnea, and tachypnea with one another, and their recognition rates are lower than that of the proposed method. In terms of average recognition rate, LDA achieved about 80.4%, SVM about 86%, MLP about 90.9%, and the proposed method about 93.9%, an improvement of between about 3% and 13.5%.

5. Conclusions

In this paper, to analyze sleep quality, we extracted human respiration signals using a UWB radar device, a non-contact sensor, and classified five types of respiration from the extracted signals using the proposed method, a respiratory pattern recognition algorithm based on a 1D convolutional neural network.
Previous studies using respiration data from UWB radar devices cover only apnea recognition or measurement of the respiratory rate per minute. For accurate sleep analysis, however, not only apnea but also various other breathing patterns should be recognized. In the proposed method, we designed a 1D CNN-based learning model that recognizes and classifies five signal patterns, eupnea, bradypnea, tachypnea, apnea, and motion, and found the range of optimal parameters for the model through various experiments. The proposed method improved the breathing pattern recognition rate by between 3% and 13.5% compared with the conventional methods.
Since the proposed method can detect not only simple apnea but also various other breathing patterns, it is expected to be useful for analyzing respiratory disorders such as bradypnea and tachypnea.

Author Contributions

S.-H.K. performed calculation and wrote the paper; and Z.W.G. and G.-T.H. supervised the research direction.

Funding

This work was supported by the GRRC program of Gyeonggi province (GRRC-Gachon2017(B02), Bio-Data Construction and Prediction based on Artificial Intelligence).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Norifumi, T.; Alain, B.; Karen, R. Sleep and Depression. J. Clin. Psychiatry 2005, 66, 1254–1269. [Google Scholar]
  2. Marino, M.; Li, Y.; Rueschman, M.N.; Winkelman, J.W.; Ellenbogen, J.M.; Solet, J.M.; Dulin, H.; Berkman, L.F.; Buxton, O.M.; Inoshita, C.; et al. Measuring Sleep: Accuracy, Sensitivity, and Specificity of Wrist Actigraphy Compared to Polysomnography. Sleep Res. Soc. 2013, 36, 1747–1755. [Google Scholar] [CrossRef]
  3. Sleep Technology: Technical Guideline, “Standard Polysomnography”; American Association of Sleep Technologists: Chicago, IL, USA, 2012.
  4. Zou, D.; Grote, L.; Peker, Y.; Lindblad, U.; Hedner, J. Validation a Portable Monitoring Device for Sleep Apnea Diagnosis in a Population Based Cohort Using Synchronized Home Polysomnography. Sleep Res. Soc. 2006, 29, 367–374. [Google Scholar] [CrossRef] [PubMed]
  5. Collop, N.A.; Anderson, W.M.; Boehlecke, B.; Claman, D.; Goldberg, R.; Gottlieb, D.J. Portable Monitoring Task Force of the American Academy of Sleep Medicine. Clinical guidelines for the use of unattended portable monitors in the diagnosis of obstructive sleep apnea in adult patients. J. Clin. Sleep Med. 2007, 3, 737–747. [Google Scholar] [PubMed]
  6. Berry, R.B.; Hill, G.; Thompson, L.; McLaurin, V. Portable Monitoring and Autotitration versus Polysomnography for the Diagnosis and Treatment of Sleep Apnea. Sleep Res. Soc. 2008, 31, 1423–1431. [Google Scholar]
  7. Surrel, G.; Aminifar, A.; Rincon, F.; Murali, S.; Atienza, D. Online Obstructive Sleep Apnea Detection on Medical Wearable Sensors. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 762–772. [Google Scholar] [CrossRef]
  8. Lee, H.; Lee, J.; Kim, H.; Ha, J.; Lee, K. Snoring detection using a piezo snoring sensor based on hidden Markov models. Inst. Phys. Eng. Med. 2013, 34, 41–49. [Google Scholar] [CrossRef]
  9. Nandakumar, R.; Gollakota, S.; Nathaniel, M.D. Contactless Sleep Apnea Detection on Smartphones. In Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, Florence, Italy, 18–22 May 2015; pp. 45–57. [Google Scholar]
  10. Milici, S.; Lázaro, A. Wireless Wearable Magnetometer-Based Sensor for Sleep Quality Monitoring. IEEE Sens. J. 2018, 18, 2145–2152. [Google Scholar] [CrossRef]
  11. Nguyen, V.; Pyun, J. Location Detection and Tracking of Moving Targets by a 2D IR-UWB Radar System. Sensors 2015, 15, 6740–6762. [Google Scholar] [CrossRef]
  12. Khawaja, W.; Sasaoka, K.; Guvenc, I. UWB radar for indoor detection and ranging of moving objects: An experimental study. In Proceedings of the International Workshop on Antenna Technology (iWAT), Cocoa Beach, FL, USA, 29 February–2 March 2016. [Google Scholar]
  13. Li, C.; Mak, P.; Gómez-García, R.; Chen, Y. Guest Editorial Wireless Sensing Circuits and Systems for Healthcare and Biomedical Applications. IEEE J. Emerg. Sel. Top. Circuits Syst. 2018, 8, 161–164. [Google Scholar] [CrossRef]
  14. Li, C.; Lubecke, V.M.; Boric-Lubecke, O.; Lin, J. A Review on Recent Advances in Doppler Radar Sensors for Noncontact Healthcare Monitoring. IEEE Trans. Microw. Theory Tech. 2013, 61, 2046–2060. [Google Scholar] [CrossRef]
  15. Kim, M.; Pan, S.B. Deep Learning based on 1-D Ensemble Networks using ECG for Real-Time User Recognition. IEEE Trans. Ind. Inform. 2019. [Google Scholar] [CrossRef]
  16. Tran, V.P.; Al-Jumaily, A.A.; Islam, S.M.S. Doppler Radar-Based Non-Contact Health Monitoring for Obstructive Sleep Apnea Diagnosis: A Comprehensive Review. Big Data Cogn. Comput. 2019, 3, 21. [Google Scholar] [CrossRef]
  17. Javaid, A.Q.; Noble, C.M.; Rosenberg, R.; Weitnauer, M.A. Towards Sleep Apnea Screening with an Under-the-Mattress IR-UWB Radar Using Machine Learning. In Proceedings of the IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015. [Google Scholar]
  18. Huang, X.; Sun, L.; Tian, T.; Huang, Z.; Clancy, E. Real-time non-contact infant respiratory monitoring using UWB radar. In Proceedings of the IEEE 16th International Conference on Communication Technology (ICCT), Hangzhou, China, 18–20 October 2015. [Google Scholar]
  19. Fedele, G.; Pittella, E.; Pisa, S.; Cavagnaro, M.; Canali, R.; Biagi, M. Sleep-Apnea Detection with UWB Active Sensors. In Proceedings of the IEEE International Conference on Ubiquitous Wireless Broadband (ICUWB), Montreal, QC, Canada, 4–7 October 2015. [Google Scholar]
  20. Lazaro, A.; Girbau, D.; Villarino, R. Analysis of Vital Signs Monitoring Using an IR-UWB Radar. Prog. Electromagn. Res. 2010, 100, 265–284. [Google Scholar] [CrossRef]
  21. Cho, H.; Yoon, S.M. Divide and Conquer-Based 1D CNN Human Activity Recognition Using Test Data Sharpening. Sensors 2018, 18, 24. [Google Scholar]
  22. Kravchik, M.; Shabtai, A. Detecting Cyber Attacks in Industrial Control Systems Using Convolutional Neural Networks. In Proceedings of the 2018 Workshop on Cyber-Physical Systems Security and PrivaCy, Toronto, ON, Canada, 15–19 October 2018; pp. 72–83. [Google Scholar]
  23. Kim, T.; Lee, J.; Nam, J. Sample-Level CNN Architectures for Music Auto-Tagging Using Raw Waveforms. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018. [Google Scholar]
  24. Lee, S.; Yoon, S.M.; Cho, H. Human activity recognition from accelerometer data using Convolutional Neural Network. In Proceedings of the IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Korea, 13–16 February 2017. [Google Scholar]
  25. Venkatesh, S.; Anderson, C.R.; Rivera, N.V.; Buehrer, R.M. Implementation and analysis of respiration-rate estimation using impulse-based UWB. In Proceedings of the MILCOM 2005—2005 IEEE Military Communications Conference, Atlantic City, NJ, USA, 17–20 October 2005. [Google Scholar]
  26. Staderini, E.M. UWB radars in medicine. IEEE Aerosp. Electron. Syst. Mag. 2002, 17, 13–18. [Google Scholar] [CrossRef]
  27. Zetik, R.; Sachs, J.; Thoma, R.S. UWB short-range radar sensing - The architecture of a baseband, pseudo-noise UWB radar sensor. IEEE Instrum. Meas. Mag. 2007, 10, 39–45. [Google Scholar] [CrossRef]
  28. Wirth, S.; Seywert, L.; Spaeth, J.; Schumann, S. Compensating Artificial Airway Resistance via Active Expiration Assistance. Respir. Care 2016, 61, 1597–1604. [Google Scholar] [CrossRef]
  29. de Beer, J.M.; Gould, T. Principles of artificial ventilation. Anaesth. Intensive Care Med. 2013, 14, 83–93. [Google Scholar] [CrossRef]
  30. Bernardi, P.; Cicchetti, R.; Pisa, S.; Pittella, E.; Piuzzi, E.; Testa, O. Design, Realization, and Test of a UWB Radar Sensor for Breath Activity Monitoring. IEEE Sens. J. 2013, 14, 584–596. [Google Scholar] [CrossRef]
  31. Fan, D.; Ren, A.; Zhao, N.; Yang, X.; Zhang, Z.; Shah, S.A.; Hu, F.; Abbasi, Q.H. Breathing Rhythm Analysis in Body Centric Networks. IEEE Access Wearable Implant. Devices Syst. 2018, 6, 32507–32513. [Google Scholar] [CrossRef]
  32. Loughlin, P.C.; Sebat, F.; Kellett, J.G. Respiratory Rate: The Forgotten Vital Sign—Make It Count! Jt. Comm. J. Qual. Patient Saf. 2018, 44, 494–499. [Google Scholar] [CrossRef] [PubMed]
  33. Manjunatha, R.G.; Ranjith, N.; Meghashree, Y.V.; Rajanna, K.; Mahapatra, D.R. Identification of different respiratory rate by a piezo polymer based nasal sensor. In Proceedings of the IEEE Sensors, Baltimore, MD, USA, 3–6 November 2013. [Google Scholar]
  34. Elleuch, M.; Maalej, R.; Kherallah, M. A New Design Based-SVM of the CNN Classifier Architecture with Dropout for Offline Arabic Handwritten Recognition. Procedia Comput. Sci. 2016, 80, 1712–1723. [Google Scholar] [CrossRef]
  35. Xiao, T.; Li, H.; Ouyang, W.; Wang, X. Learning Deep Feature Representations with Domain Guided Dropout for Person Re-Identification. In Proceedings of the The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1249–1258. [Google Scholar]
