Article

Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study

1 School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
2 Electrical Engineering Technical College, Middle Technical University, Al Doura 10022, Baghdad, Iraq
3 School of Nursing and Midwifery, University of South Australia, Adelaide, SA 5001, Australia
4 Joint and Operations Analysis Division, Defence Science and Technology Group, Melbourne, Victoria 3207, Australia
* Author to whom correspondence should be addressed.
Academic Editor: Changzhi Li
Sensors 2017, 17(2), 286; https://doi.org/10.3390/s17020286
Received: 18 November 2016 / Revised: 27 January 2017 / Accepted: 30 January 2017 / Published: 3 February 2017
(This article belongs to the Special Issue Non-Contact Sensing)

Abstract

The objective of this study was to design a non-invasive system for observing respiratory rates and detecting apnoea using analysis of real-time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen caused by the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid-motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h per subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate and safe, and has low computational complexity, making it an efficient alternative for non-contact home sleep monitoring and advancing health care applications.
Keywords: apnoea; apparent life-threatening event; Microsoft Kinect sensor; real-time image sequence analysis; motion magnification; motion detection

1. Introduction

Central apnoea is defined as cessation of respiration for 20 s or more, or a shorter pause accompanied by cyanosis, obvious pallor, hypotonia or a marked decrease in heart rate, with no respiratory effort [1]. Central apnoea is characterized by a lack of breathing control during sleep due to failure of the brain to correctly signal the muscles responsible for breathing. Other forms of apnoea include obstructive and mixed aetiologies, in which inspiratory effort still occurs but without effective air flow; these are not discussed further in this work. Home apnoea monitoring may be an appropriate intervention for some children. Whilst there is no scientific evidence that a home apnoea monitor will prevent sudden unexpected infant death, including sudden infant death syndrome [1,2], there are situations where home monitors may be warranted. These include premature infants who have a prolonged course of apnoea (prolonged pauses in breathing) and bradycardia after discharge from hospital, who are generally monitored until 43 weeks postmenstrual age, when this most likely resolves. Infants who have medical conditions affecting breathing regulation or who have unstable airways, infants who are technology-dependent, such as those requiring home respiratory support, tracheostomies or oxygen therapy for chronic lung disease, and infants who have experienced an apparent life-threatening event (ALTE) requiring significant resuscitation may also be discharged home from hospital with an apnoea monitor [1,3].
Current methods for apnoea monitoring can be problematic. The most widely used form of non-invasive home respiratory monitoring for infants is transthoracic impedance (TTI). This contact method requires the application of standard electrocardiogram (ECG) leads or two electrodes placed on the thorax and secured by a chest belt, and regular infant movement can disrupt the signal. Constant false alarms generated by this technology may cause significant parental frustration and associated non-compliance [4]. Monitors with leads may also increase the risk of strangulation or entrapment of the infant [5]. Home apnoea monitors are also expensive, with an estimated monthly cost of $300–$400 US [6]. Therefore, there is a significant need for a reliable remote monitoring system to observe respiratory activity in infants and young children, particularly in home applications. Several researchers have explored a variety of remote methods for respiratory monitoring, including methods based on the radar (Doppler) effect [7,8,9,10]. However, using the Doppler effect in sleep monitoring requires specialist hardware to produce radar output signals and receive a reliable return signal. Current studies suffer significant signal to noise ratio (SNR) decreases at distances larger than 1 m between the subject and the antennas due to increased free space loss [11]. Furthermore, the radar antenna must be directed towards the chest wall, and any irregular movement of the subject introduces noise [7,8,9,10]. In addition, focused radar energy may yet be found to have harmful side effects on biological tissue [12]. A study by Yang et al. [13] proposed a portable wireless monitoring system for sleep apnoea detection based on active radio frequency identification (RFID) technology. Although this system was designed to consume little power and reduce the overall cost, it still requires sensors to be placed on the patient's body, which may cause discomfort during sleep.
In addition, the signal transmitted from on-body sensors to the RFID reader may be distorted by patient movement and signal interference. Another study by Yang et al. [14] presented a novel wireless transducer based on analogue technology for remote monitoring of a single sleeping participant without any physical contact. Their study provided continuous sleep monitoring, improving patient comfort and minimizing healthcare costs. However, the amplitude modulation, filtering, amplification and separation of analogue signals must be considered carefully when noise resulting from patient movement and signal interference falls within the working frequency. Another study by Jones et al. [15] used pressure sensors placed above and below the bed to monitor respiratory activity without any restrictions on the patient; a limitation is that the measurements are again affected by patient movement. Non-contact methods based on electromagnetic sensors have also been developed for sleep monitoring [16,17,18]. Electromagnetic sensors have the advantage of not requiring adhesive sensors and electrodes, and the signal is un-attenuated by bone and skin. However, these methods require subjects to remain stationary and are limited in their range [17]. Thermal video cameras have been used in several studies [12,19,20] to monitor respiratory activity by detecting carbon dioxide emissions or by determining skin temperature differences between inspiration and expiration. Although thermal cameras are attractive for monitoring respiratory rates, their measurements are affected by head rotation, irregular movements and particularly any apparatus that covers the face. Thus, thermal cameras cannot determine respiratory activity when the nasal region is not clearly visible for analysis.
Some studies [21,22,23] used image sequence analysis of video captured by camera as a remote method to measure respiratory activity, analyzing the optical flow of chest surface movements resulting from respiration. Because these studies relied on optical flow calculations, computational complexity, motion artefacts and lighting conditions were the main drawbacks. Image sequence analysis based on a motion magnification system was recently used by Al-Naji and Chahl [24] to measure the respiratory activity of a baby in different sleep positions. Although that study succeeded in detecting respiratory rate and breathing time parameters in different positions (even in the presence of a blanket), the data had to be interpreted after the video recordings were analyzed, and abnormal events could not be captured in real time. Other studies [25,26] used 3D surface information of the patient's chest and abdomen captured by time of flight (ToF) cameras to detect respiratory motion. These studies are based on the time variation of signals obtained by averaging the range values at two fixed points. Though using 3D surface information is conceivable in theory, the measurements depend on the distance between the patient and the camera, they are somewhat affected by noise and motion artefacts, and ToF cameras are expensive. Several researchers [27,28,29,30] have used camera-based photoplethysmography imaging (PPGI) signals to determine variations in skin blood volume resulting from cardiorespiratory rhythms. These studies were affected by lighting conditions, skin tone and distance [28], which may cause background noise falling within the frequency band of interest. Such methods cannot detect respiratory activity in unclear regions of interest (ROIs); therefore, PPGI cannot operate on a subject who changes position.
To overcome these problems, this paper proposes a new real-time vision system based on the Kinect sensor (developed by Microsoft, Redmond, WA, USA) to monitor respiratory activity in any given sleep position, under any light conditions (even in a dark environment) and whether the subject is covered or not, while being reliable, safe and cost effective. Furthermore, the proposed system incorporates several improvements, including wavelet decomposition, image de-noising and resampling, to enhance performance beyond standard video magnification systems in terms of noise removal, video quality and execution time, making it suitable for real-time use.
The rest of the paper is organized as follows: Section 2 reviews previous related work based on the Kinect sensor. Section 3 presents a description of the Kinect sensor. Section 4 describes the materials and methods. The results and discussion are presented in Section 5 and, finally, Section 6 concludes the paper.

2. Related Works

Recently, there has been considerable research interest in using a Kinect sensor as a non-contact device for tracking and detecting respiratory activity. For example, Xia and Siochi [31] proposed a respiratory monitoring system based on the Kinect v1 sensor. This study used depth images captured by the Kinect sensor to determine the average depth over a thoracic region of interest. The ROI was determined manually by placing a translation surface on the patient's thorax in the center of the image. Although this system was implemented in real time, it required a clear ROI for measurement. Other studies [32,33] presented sleep monitoring systems using a Kinect v1 sensor, in which thorax movements are detected by tracking over time the depth information recorded during sleep. Because these studies have no ROI tracking system, the ROI must be in the center of the image, and any unexpected movement distorts and biases the results. Other studies used a Kinect sensor to analyze breathing activity and sleep disorders based on depth map information recorded while the patient sleeps facing the Kinect sensor [34,35,36,37]; their limitations relate to unclear ROIs and subject movement during the measurement. Another study by Harte et al. [38] proposed a system for analyzing chest wall motion based on four Kinect sensors placed around the subject at a distance of 1 m to create a 3D time-varying view of the patient's torso. A benefit of using data from four Kinect sensors is the ability to analyze chest wall motion even with moving subjects, which may be useful in scenarios such as measuring dynamic hyperinflation during exercise. However, this study did not consider the movement of the diaphragm and was prone to errors in the 3D reconstruction due to a design limitation: the sensors are unsynchronized in time and frequency.
Other studies [39,40] utilized a Kinect-based system to address patient set-up misalignment and respiratory motion during radiotherapy. They used depth map information for patient set-up and breathing motion management using several ROIs within the abdominal-thoracic area. However, there is only one accurate position, namely the subject facing the Kinect sensor, and any movement introduces further sources of error. A study by Lee et al. [41] proposed a sleep monitoring system based on a Kinect v2 sensor to detect sleep patterns and postures during sleep without any attached devices. They used depth information to detect all human body joints and thus gather movement, posture and sleep information. However, it is difficult to differentiate the subject's front from their back, and the body could not be covered by a blanket in their study. Our current study develops a real-time vision system based on a Microsoft Kinect v2 sensor to detect breathing activity by tracking the region of interest bounded by five joints corresponding to the chest and abdomen areas. Our system can measure respiratory rates and detect apnoea in any sleep position, regardless of light conditions and whether the subject is covered.

3. Kinect Sensor

Kinect v1 was released as a peripheral device by Microsoft (Redmond, WA, USA) in 2010 for the Xbox 360 gaming console and, due to strong market demand, Microsoft later made it compatible with Windows via the Microsoft Kinect Software Development Kit (SDK) and a power adaptor [42,43,44,45]. The next generation of the Kinect sensor (Kinect v2), released in 2014, has advantages over the original Kinect technology in terms of performance, accuracy and field of view [42,43,44,45]. This is because the Kinect v2 uses ToF technology [46] for depth measurement instead of the structured light coding technology [47] used in Kinect v1. A comprehensive comparison between the Kinect technologies can be found in [45]. Table 1 summarizes some comparative specifications of Microsoft Kinect v1 and v2.
Microsoft Kinect v2 has three optical sensors: an RGB camera, an IR sensor and an IR projector, which provide three outputs: an RGB image, an IR image and a depth image. It can provide body tracking, 3D body reconstruction, skeletal tracking, joint tracking and human recognition based on information obtained from the depth and colour sensors at any ambient temperature. Because the Kinect v2 is commercially available, offers these features at low cost and is designed for sustained commercial use, it is an attractive device for many biomedical applications in both domestic and clinical environments [31,32,33,34,35,36,37,38,39,40,41]. Figure 1 shows the external view of the Microsoft Kinect v2 sensor.

4. Materials and Methods

4.1. Experimental Setup

Five subjects (three males and two females) with ages ranging from 1 to 5 years, weights from 10 to 17 kg and heights from 75 to 107 cm participated in our sleep monitoring experiments. The required ethical approval was granted by the UniSA Human Research Ethics Committee and the study was carried out following the rules of the Declaration of Helsinki of 1975. Written informed consent was obtained from the parents before commencing the experiments. We performed the sleep monitoring in a home environment for approximately 3 h per subject and repeated the experiments at different times (night and daytime), under different light conditions (well-lit and dark environments) and with the subjects with and without a blanket, to obtain sufficient data. The Kinect sensor was located in front of the subject at an angle of 45° and a distance of 2.5 m. Reference respiration recordings were obtained with a Piezo respiratory belt transducer (MLT1132, ADInstruments, NSW, Australia) and a commercial product (Misfit Sleep Monitor, Misfit, Burlingame, CA, USA) for validation purposes. The proposed system was implemented in MATLAB R2016a (MathWorks, NSW, Australia) under the Microsoft Windows 10 operating system, with Visual Studio 2013 and the Kinect SDK.

4.2. System Design/Overview

The overview of the proposed system is presented in Figure 2.

4.3. Data Analysis

A Microsoft Kinect v2 sensor was used in this study to detect and track the movement of the chest and abdomen caused by inhalation and exhalation. Based on depth information, the Kinect v2 sensor tracks movements of the human body by determining the positions of 25 skeletal joints, as shown in Figure 3.
We tracked the area bounded by five joints that corresponds to the chest and abdominal areas: the left and right shoulder joints, the spine shoulder joint and the left and right hip joints. We stored the x, y and z positions of these five joints by modifying the respective joint functions provided in the Kinect library and used them to determine an automatic ROI.
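As a rough illustration of the auto-ROI step, the region bounded by the five tracked joints can be taken as their bounding box. The sketch below is a minimal Python example, not the authors' implementation: the joint names, the use of 2D image coordinates and the sample values are hypothetical (the Kinect SDK reports joints in 3D camera space, which would first be projected into the image plane).

```python
import numpy as np

def auto_roi(joints):
    """Hypothetical sketch: derive a chest/abdomen ROI bounding box from the
    five tracked joints (left/right shoulder, spine shoulder, left/right hip),
    given here as (x, y) image coordinates."""
    pts = np.array([joints[k] for k in
                    ("shoulder_left", "shoulder_right", "spine_shoulder",
                     "hip_left", "hip_right")], dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    # ROI returned as (x, y, width, height)
    return (x_min, y_min, x_max - x_min, y_max - y_min)

# Illustrative joint positions only
roi = auto_roi({"shoulder_left": (100, 80), "shoulder_right": (160, 80),
                "spine_shoulder": (130, 85), "hip_left": (105, 180),
                "hip_right": (155, 180)})
```

Recomputing this box every frame lets the ROI follow the subject, which is what allows measurement in any sleep position.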

4.3.1. Modified Motion Magnification System

To provide real-time magnification, better noise performance and support for larger magnification factors than linear Eulerian video magnification (EVM), wavelet pyramid decomposition [48,49] and image de-noising [50] based on a linear averaging filter were used to enhance the performance of standard video magnification techniques [51,52]. Details of the main equations and of how the video magnification technique works can be found in [51,52,53].
In the modified motion magnification system, frame sequences from the RGB camera are converted to the YIQ colour space to separate the intensity information from the colour information. Only the Y channel is downsized, using the Lanczos resampling method [54], to reduce processing time. The Y channel is then decomposed into different spatial frequency bands using wavelet pyramids. Temporal band-pass filtering is applied to each level of the wavelet pyramid to extract the frequency bands of interest. The extracted band-passed signals are multiplied by a magnification factor (M) to amplify the signals of interest, and are then collapsed to obtain the magnified signals. The magnified signals are filtered based on image de-noising [50] to increase the signal to noise ratio and resized back to the original size. The magnified signals are then added back to the input signals to obtain the processed Y channel. The processed Y channel is concatenated with the original I and Q channels along the array dimension and converted to RGB to obtain the final output. Since the RGB sensor is not effective in a dark environment, the modified motion magnification system is also applied to the frame sequences obtained from the Kinect IR sensor under poor lighting.
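The core amplify step (band-pass the temporal signal at each pixel, scale by M, add back) can be sketched as follows. This is a minimal Python/NumPy illustration under stated simplifications: the wavelet pyramid, Lanczos resampling and de-noising stages are omitted, and an ideal FFT band-pass stands in for the temporal filter; the synthetic 0.4 Hz "breathing" signal is illustrative only.

```python
import numpy as np

def magnify_temporal(frames, fs, f_lo, f_hi, M):
    """Amplify temporal variations in a (T, H, W) luminance stack:
    ideal FFT band-pass each pixel's time series to [f_lo, f_hi] Hz,
    scale by magnification factor M, and add back to the input."""
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    spec = np.fft.rfft(frames, axis=0)
    spec[~keep] = 0                         # keep only the band of interest
    band = np.fft.irfft(spec, n=T, axis=0)  # band-passed motion signal
    return frames + M * band

# Synthetic example: a tiny 0.4 Hz oscillation at 30 fps is amplified ~11x.
fs, T = 30.0, 300
t = np.arange(T) / fs
frames = 0.01 * np.sin(2 * np.pi * 0.4 * t)[:, None, None] * np.ones((1, 4, 4))
out = magnify_temporal(frames, fs, 0.2, 1.0, M=10)
```

The band [0.2, 1.0] Hz is chosen here because it spans typical infant and child respiratory rates (roughly 12–60 breaths/min).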

4.3.2. Motion Detection Based on Frame Differencing

The objective of motion detection based on frame differencing is to extract respiratory motion in the selected ROI across consecutive video frames in order to recognize the presence of breathing. The frame differencing method considers a pixel to be moving if its intensity has changed significantly between two consecutive frames [55]:
|I_t(i, j) − I_(t−1)(i, j)| ≥ τ     (1)
where I_t(i, j) and I_(t−1)(i, j) represent the image intensities of the current and previous frames respectively, and τ is a threshold describing a significant intensity change, with a range from 0 to 255. If the difference is greater than or equal to τ, the pixel is considered to be caused by breathing movement; otherwise it is considered a no-breathing-movement pixel.
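Equation (1) amounts to a per-pixel threshold on the absolute intensity difference between consecutive frames. A minimal Python/NumPy sketch (the 3 × 3 frames and the centre-pixel change are illustrative only):

```python
import numpy as np

def motion_mask(curr, prev, tau=110):
    """Frame differencing per Equation (1): a pixel is marked as 'moving'
    when its intensity changed by at least tau between consecutive frames.
    Cast to int first so uint8 subtraction cannot wrap around."""
    return (np.abs(curr.astype(int) - prev.astype(int)) >= tau).astype(np.uint8)

prev = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200          # a large intensity change at the centre pixel
mask = motion_mask(curr, prev)
```

The resulting binary mask feeds the respiratory calculations described in the next section.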

4.4. Respiratory Calculations

After determining the motions above threshold τ in the ROI across all frame sequences caused by respiratory movement, a binary thresholding operation followed by a sequence of binary morphological filters was performed to separate the pixels corresponding to breath movement from those without movement. A threshold of ≥110 was used to generate the binary images, in which pixels corresponding to changes from breath movement were set to one and pixels unchanged by breath movement were set to zero. The binary matrix was then converted into a binary vector with values 0 and 1, where 0 represents the dark area and 1 represents the white area in the image. Since the Kinect sensor operates at a frame rate of 30 fps, the frame interval is:
frame interval = 1/fps = 1/30 = 0.0333 s     (2)
Respiratory rate readings are not instantaneous because they rely on calculating the difference between two consecutive breaths in the frame sequences (N). Let A_i be a vector of length N for a number of consecutive breaths:
A_i = [A_i(1), A_i(2), A_i(3), …, A_i(N)]
where A_i is a binary vector containing values 0 and 1: zeroes represent frames without detected respiratory motion and ones represent frames with detected respiratory motion.
To determine the differences between adjacent elements of A_i, let B = diff([0 A_i]), which returns a vector of length N. To determine the positions of the nonzero values in B, we apply C = find(B), which returns a vector of length M containing the positions of the nonzero elements. By calculating the differences among the C values and multiplying them by the frame interval, we obtain the vector of respiratory cycles (R_c) over time (t). This vector is stored in the MATLAB workspace. For the next N-frame sequence, we repeat the previous steps to obtain further respiratory cycles, and so on. We can then detect apnoea and calculate the respiratory rate (breaths/min) using the following relations:
R_c,current = { normal, if R_c,current < 10 s; apnoea, if R_c,current ≥ 10 s }     (3)
respiratory rate = 60 s / R_c     (4)
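The cycle extraction and Equations (2)-(4) can be sketched in Python as follows. The NumPy calls mirror the MATLAB `diff` and `find` steps described above; the synthetic binary motion vector is illustrative only (alternating 30-frame motion/no-motion blocks, i.e. a 2 s breath cycle at 30 fps).

```python
import numpy as np

FPS = 30
FRAME_INTERVAL = 1.0 / FPS            # Equation (2): 0.0333 s per frame

def respiratory_cycles(A, fps=FPS):
    """Python equivalent of the MATLAB steps: B = diff([0 A]), C = find(B),
    then differences between successive nonzero positions, scaled by the
    frame interval, give intervals between breath transitions in seconds."""
    B = np.diff(np.concatenate(([0], A)))
    C = np.flatnonzero(B)             # positions of breath transitions
    return np.diff(C) / fps

def classify(cycles, apnoea_threshold=10.0):
    """Equation (3): flag any interval of 10 s or longer as apnoea."""
    return ["apnoea" if c >= apnoea_threshold else "normal" for c in cycles]

def respiratory_rate(cycle):
    """Equation (4): breaths per minute from a cycle duration in seconds."""
    return 60.0 / cycle

# Synthetic motion vector: 30 frames of motion, 30 without, repeated 5 times.
A = np.tile(np.array([1] * 30 + [0] * 30), 5)
cycles = respiratory_cycles(A)        # 1 s between transitions
# A full breath (rise-to-rise) is 2 s here, i.e. 30 breaths/min.
rate = respiratory_rate(2.0)
```

Note that each transition interval is half a breath cycle in this example; a sustained interval of 10 s or more with no transition at all is what Equation (3) flags as apnoea.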
Generally, the respiratory rate varies with age, but the normal range is 30–60 breaths/min for infants, 22–28 breaths/min for children, 16–20 breaths/min for teenagers and 14–18 breaths/min for adults [56]. A pre-set alarm function is triggered in this study when the results fall outside these ranges.

5. Results

The capturing of RGB, depth, IR, skeleton tracking and body index from the Kinect v2 sensor is shown in Figure 4.
The experimental results obtained from the five subjects in many positions were grouped into two scenarios. In the first scenario, the experiments were carried out in a well-lit environment and the frame sequences obtained from the RGB sensor were processed through the proposed system. The second scenario is a poorly lit (or dark) environment, in which the frame sequences obtained from the IR sensor were processed. For both scenarios, experiments were performed with and without a blanket. Because all participants in this study were apparently healthy, we asked them to hold their breath during the measurement to create a situation mimicking apnoea. In the first scenario, the real-time breathing simulation signal, over a five-minute frame sampling window, was compared to the reference signal for a healthy subject, who was asked to hold his breath for 10 s and 18 s during the measurement at a normal illumination level, as shown in Figure 5.
The breathing simulation signal against the reference signal is also shown in Figure 6 when the subject was covered by a blanket and asked to hold his breath for 18 s and 10 s during the measurement in the same illumination level.
In the second scenario, the breath-hold signals were compared to the reference signals for a healthy subject (with and without a blanket), where the subject was asked to hold his breath twice for 20 s during the measurement, as shown in Figure 7 and Figure 8 respectively.
From the previous figures, the proposed system could detect the periods of simulated apnoea of 10 s (approximately 300 frames, 10 s/0.0333 s), 18 s (approximately 540 frames) and 20 s (approximately 600 frames). Our system recognized the periods of simulated apnoea and measured respiratory rates using Equations (3) and (4) respectively. Breaths per minute can be determined by multiplying the corresponding number of frames by the sampling period between video frames given in Equation (2); one minute corresponds to approximately 1800 frames (60 s/0.0333 s). We counted the number of breathing pulses in 60 s to determine the respiratory rate per minute. The respiratory rate measurements from Figure 5, Figure 6, Figure 7 and Figure 8 were 24, 26, 25 and 26 breaths/min respectively, against 23, 24, 24 and 24 breaths/min obtained from the reference data.
Statistical analysis based on a correlation plot and the Bland-Altman method [57] was used to measure the agreement of the proposed system with the reference method. The reference respiratory data were obtained by recording the spontaneous respiratory signal of each subject using a Piezo respiratory belt attached to a PowerLab-based computer (ADInstruments, NSW, Australia), which performs analogue to digital conversion and recording of the time-series signal. The respiratory rate values for the Piezo respiratory belt were extracted by counting the number of peaks in the time-series signal and used as references for comparison with those obtained by the proposed system at the same time. Agreement was evaluated using the mean difference (bias) between the measurements and the 95% limits of agreement (bias ± 1.96 SD), where SD is the standard deviation of the differences in paired measurements. The statistical analysis based on the correlation plot and Bland-Altman method for respiratory rate measurements in the first scenario without a blanket is shown in Figure 9.
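The agreement statistics described above follow the standard Bland-Altman formulas and can be sketched as below; the measured and reference values in the example are illustrative only, not the study's data.

```python
import numpy as np

def bland_altman(measured, reference):
    """Bland-Altman agreement statistics: mean difference (bias) and the
    95% limits of agreement (bias ± 1.96 * SD of the paired differences)."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired respiratory rates (breaths/min)
measured  = [24, 26, 25, 26, 23, 25]
reference = [23, 24, 24, 24, 23, 24]
bias, loa_low, loa_high = bland_altman(measured, reference)
```

Roughly 95% of paired differences are expected to fall between the two limits when the differences are approximately normally distributed.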
It is clearly observed from Figure 9 that there is a positive linear relationship between the measured values and reference data with a slope, intercept and sum of square error (SSE) of 1.04, −0.682 and 0.49 respectively. The correlation coefficients (Pearson correlation coefficient and Spearman’s Rho coefficient) were 0.9922 and 0.9859 respectively.
The mean difference of the Bland-Altman plot was 0.061% and 95% limits of agreement were −0.91 and +1 respectively with a reproducibility coefficient of 0.97 breaths/min, whereas the statistical analysis for respiratory rate measurements with a blanket is shown in Figure 10. Figure 10 presents a positive linear relationship between the measured values and reference data. The slope, intercept and SSE were 0.98, 0.64 and 0.81 respectively with 0.9764 and 0.9781 of Pearson and Spearman coefficients respectively. The mean difference was 0.28 and 95% limits of agreement were −1.3 and +1.8 with a reproducibility coefficient of 1.6 breaths/min. The results of the statistical analysis for the second scenario without a blanket are presented in Figure 11.
As shown in Figure 11, the slope, intercept and SSE were 1.07, −1.2 and 0.55 respectively with 0.9894 and 0.9832 of Pearson and Spearman coefficients respectively. The mean difference was 0.16% and 95% limits of agreement were −0.98 and +1.3 with a reproducibility coefficient of 1.1 breaths/min, whereas the statistical analysis for respiratory rate measurements with a blanket is shown in Figure 12.
Figure 12 presents a positive linear relationship between the measured values and reference data. The slope, intercept and SSE were 1.08, −1.13 and 0.9 respectively with 0.9725 and 0.9634 of Pearson and Spearman coefficients respectively. The mean difference was 0.48% and 95% limits of agreement were −1.3 and +2.3 with a reproducibility coefficient of 1.8 breaths/min.
Comparing the statistics from Figure 9, Figure 10, Figure 11 and Figure 12 reveals that the proposed system works slightly better in the first scenario than in the second. We also noted that the statistics obtained from subjects without a blanket were better than those obtained with a blanket. This is because the proposed system may fail to track the body when it is fully covered by a blanket (including the head), since the Kinect sensor cannot track the human joints in this situation. Nevertheless, the resulting cross correlation coefficient of the proposed system across all scenarios was 0.9812, which is considered suitable for biomedical applications. Compared to other studies on respiratory monitoring, our correlation coefficient was better than the 0.96 [31], 0.98 [36], 0.8656 [38] and 0.90, 0.93 [39] reported previously.

6. Conclusions

Current apnoea monitors used in the home environment are known to be very frustrating to families due to the regular false alarms that they generate. They are also very expensive. Therefore, we have proposed a real-time monitoring system based on a Microsoft Kinect v2 sensor to calculate respiratory rate and detect apnoea, which may be utilized in both home and clinical environments. Our system relies on image information captured by the three sensors built into the Kinect and analyzes it through real-time motion magnification and motion detection to detect respiratory activity. We developed an enhanced video magnification technique suited to real-time applications. The experimental results for five subjects with different ages, sleep poses and light conditions indicate that our proposed system has the potential to measure respiratory rates and detect apnoea even in dark environments. The experiments for both scenarios (well-lit and dark environments), with and without a blanket, showed that the correlation coefficient between the measured and reference data was very good (0.9812) and acceptable for biomedical applications. Our system may also be used in the future to detect other vital signs and sleep-disorder anomalies in other populations, providing a comfortable sleep environment whilst monitoring. We therefore believe that this system could potentially be at the forefront of modern respiratory monitoring technology. Further studies with larger numbers of subjects are clearly needed to confirm these findings, and future studies should include more sleep apnoea events to determine the clinical usefulness of the proposed system.

Acknowledgments

The authors thank the volunteers who participated in this study and the staff at the School of Engineering, University of South Australia, Mawson Lakes campus for their technical support to this work.

Author Contributions

A. Al-Naji conceived the algorithm, performed the experiments, and wrote the draft manuscript. J. Chahl and S-H. Lee supervised the work and contributed with valuable discussions and scientific advice. K. Gibson provided clinical support and some suggestions for the proposed system. All authors read and approved the final manuscript.

Conflicts of interest

The authors of this manuscript have no conflict of interest relevant to this work.

References

  1. American Academy of Pediatrics. Apnea, sudden infant death syndrome, and home monitoring. Pediatrics 2003, 111, 914–917. [Google Scholar]
  2. Strehle, E.M.; Gray, W.K.; Gopisetti, S.; Richardson, J.; McGuire, J.; Malone, S. Can home monitoring reduce mortality in infants at increased risk of sudden infant death syndrome? A systematic review. Acta Paediatr. 2012, 101, 8–13. [Google Scholar] [CrossRef] [PubMed]
  3. Red Nose. National Scientific Advisory Group (NSAG). Information Statement: Home Monitoring; National SIDS Council of Australia: Melbourne, Australia, 2016; Available online: https://rednose.com.au/downloads/Home_Monitoring-Safe_Sleeping-Information_Statement.pdf (accessed on 25 December 2016).
  4. Bennett, A.D. Home apnea monitoring for infants. Adv. Nurse Practitioners. 2002, 10, 47–54. [Google Scholar]
  5. Sheppard, I.; Morris, L.; Blackstock, D. Medication Safety Alerts. Can. J. Hosp. Pharm. 2004, 57, 176–179. [Google Scholar]
  6. Fu, L.Y.; Moon, R.Y. Apparent life-threatening events (ALTEs) and the role of home monitors. Pediatr. Rev. 2007, 28, 203–208. [Google Scholar] [CrossRef] [PubMed]
  7. Beelke, M.; Angeli, S.; Del Sette, M.; Gandolfo, C.; Cabano, M.E.; Canovaro, P.; Nobili, L.; Ferrillo, F. Prevalence of patent foramen ovale in subjects with obstructive sleep apnea: A transcranial Doppler ultrasound study. Sleep Med. 2003, 4, 219–223. [Google Scholar] [CrossRef]
  8. Marchionni, P.; Scalise, L.; Ercoli, I.; Tomasini, E. An optical measurement method for the simultaneous assessment of respiration and heart rates in preterm infants. Rev. Sci. Instrum. 2013, 84, 121705. [Google Scholar] [CrossRef] [PubMed]
  9. Min, S.D.; Yoon, D.J.; Yoon, S.W.; Yun, Y.H.; Lee, M. A study on a non-contacting respiration signal monitoring system using Doppler ultrasound. Med. Biol. Eng. Comput. 2007, 45, 1113–1119. [Google Scholar] [CrossRef] [PubMed]
  10. Uenoyama, M.; Matsui, T.; Yamada, K.; Suzuki, S.; Takase, B.; Suzuki, S.; Ishihara, M.; Kawakami, M. Non-contact respiratory monitoring system using a ceiling-attached microwave antenna. Med. Biol. Eng. Comput. 2006, 44, 835–840. [Google Scholar] [CrossRef] [PubMed]
  11. Droitcour, A.; Lubecke, V.; Lin, J.; Boric-Lubecke, O. A microwave radio for Doppler radar sensing of vital signs. In Proceedings of the 2001 IEEE MTT-S International Microwave Symposium Digest, 20–24 May 2001; Volume 1, pp. 175–178.
  12. Abbas, A.K.; Heiman, K.; Orlikowsky, T.; Leonhardt, S. Non-contact respiratory monitoring based on real-time IR-thermography. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 7–12 September 2009; pp. 1306–1309.
  13. Yang, Y.; Karmakar, N.; Zhu, X. A portable wireless monitoring system for sleep apnoea diagnosis based on active RFID technology. In Proceedings of the Microwave Conference Proceedings (APMC), Melbourne, Australia, 5–8 December 2011; pp. 187–190.
  14. Yang, Y.; Zhu, X.; Ma, K.; Simorangkir, R.B.; Karmakar, N.C.; Esselle, K.P. Development of Wireless Transducer for Real-Time Remote Patient Monitoring. IEEE Sens. J. 2016, 16, 4669–4670. [Google Scholar] [CrossRef]
  15. Jones, M.H.; Goubran, R.; Knoefel, F. Reliable respiratory rate estimation from a bed pressure array. In Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 6410–6413.
  16. Mahdavi, H.; Rosell Ferrer, F.J. Electromagnetic coupling simulagions for a magnetic induction sensor for sleep monitoring. In Proceedings of the XXII Congreso Anual de la Sociedad Española de Ingeniería Biomédica, Barcelona, Spain, 26–28 November 2014; pp. 1–4.
  17. Scalise, L.; De Leo, A.; Primiani, V.M.; Russo, P.; Shahu, D.; Cerri, G. Non-contact monitoring of the respiration activity by electromagnetic sensing. In Proceedings of the 2011 IEEE International Workshop on Medical Measurements and Applications Proceedings (MeMeA), Bari, Italy, 30–31 May 2011; pp. 418–422.
  18. Seeton, R.; Adler, A. Sensitivity of a single coil electromagnetic sensor for non-contact monitoring of breathing. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 518–521.
  19. Fei, J.; Pavlidis, I. Analysis of breathing air flow patterns in thermal imaging. In Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 946–952.
  20. Murthy, R.; Pavlidis, I.; Tsiamyrtzis, P. Touchless monitoring of breathing function. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; Volume 1, pp. 1196–1199.
  21. Frigola, M.; Amat, J.; Pagès, J. Vision Based Respiratory Monitoring System. In Proceedings of the 10th Mediterranean Conference on Control and Automation (MED 2002), Lisbon, Portugal, 9–13 July 2002; pp. 9–12.
  22. Nakajima, K.; Matsumoto, Y.; Tamura, T. Development of real-time image sequence analysis for evaluating posture change and respiratory rate of a subject in bed. Physiol. Meas. 2001, 22, 21–28. [Google Scholar] [CrossRef]
  23. Nakajima, K.; Osa, A.; Miike, H. A method for measuring respiration and physical activity in bed by optical flow analysis. In Proceedings of the 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 30 October–2 November 1997; Volume 5, pp. 2054–2057.
  24. Al-Naji, A.; Chahl, J. Remote respiratory monitoring system based on developing motion magnification technique. Biomed. Signal Process. Control. 2016, 29, 1–10. [Google Scholar] [CrossRef]
  25. Christian, S.; Jochen, P.; Joachim, H. Time-of-flight sensor for respiratory motion gating. Med. Phys. 2008, 35, 3090–3093. [Google Scholar]
  26. Falie, D.; David, L.; Ichim, M. Statistical algorithm for detection and screening sleep apnea. In Proceedings of the International Symposium on Signals, Circuits and Systems (ISSCS 2009), Iasi, Romania, 9–10 July 2009; pp. 1–4.
  27. Blanik, N.; Abbas, A.K.; Venema, B.; Blazek, V.; Leonhardt, S. Hybrid optical imaging technology for long-term remote monitoring of skin perfusion and temperature behavior. J. Biomed. Opt. 2014, 19. [Google Scholar] [CrossRef] [PubMed]
  28. Bousefsaf, F.; Maaoui, C.; Pruski, A. Continuous wavelet filtering on webcam photoplethysmographic signals to remotely assess the instantaneous heart rate. Biomed. Signal Process. Control. 2013, 8, 568–574. [Google Scholar] [CrossRef]
  29. Nilsson, L.; Johansson, A.; Kalman, S. Monitoring of respiratory rate in postoperative care using a new photoplethysmographic technique. J. Clin. Monit. Comput. 2000, 16, 309–315. [Google Scholar] [CrossRef] [PubMed]
  30. Verkruysse, W.; Svaasand, L.O.; Nelson, J.S. Remote plethysmographic imaging using ambient light. Opt. Express 2008, 16, 21434–21445. [Google Scholar] [CrossRef] [PubMed]
  31. Xia, J.; Siochi, R.A. A real-time respiratory motion monitoring system using KINECT: Proof of concept. Med. Phys. 2012, 39, 2682–2685. [Google Scholar] [CrossRef] [PubMed]
  32. Yang, C.; Cheung, G.; Chan, K.L.; Stankovic, V. Sleep monitoring via depth video compression & analysis. In Proceedings of the IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Chengdu, China, 14–18 July 2014; pp. 1–6.
  33. Yu, M.C.; Wu, H.; Liou, J.L.; Lee, M.S.; Hung, Y.-P. Breath and Position Monitoring during Sleeping with a Depth Camera. In Proceedings of the International Conference on Health Informatics, Algarve, Portugal, 1–4 February 2012; pp. 12–22.
  34. Aoki, H.; Nakamura, H.; Fumoto, K.; Nakahara, K.; Teraoka, M. Basic study on non-contact respiration measurement during exercise tolerance test by using kinect sensor. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; pp. 217–222.
  35. Centonze, F.; Schatz, M.; Prochazka, A.; Kuchynka, J.; Vysata, O.; Cejnar, P.; Valis, M. Feature extraction using MS Kinect and data fusion in analysis of sleep disorders. In Proceedings of the 2015 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), Prague, Czech Republic, 29–30 October 2015; pp. 1–5.
  36. Ortmüller, J.; Gauer, T.; Wilms, M.; Handels, H.; Werner, R. Respiratory surface motion measurement by Microsoft Kinect. Curr. Dir. Biomed. Eng. 2015, 1, 270–273. [Google Scholar] [CrossRef]
  37. Tanaka, M. Application of depth sensor for breathing rate counting. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–5.
  38. Harte, J.M.; Golby, C.K.; Acosta, J.; Nash, E.F.; Kiraci, E.; Williams, M.A.; Arvanitis, T.N.; Naidu, B. Chest wall motion analysis in healthy volunteers and adults with cystic fibrosis using a novel Kinect-based motion tracking system. Med. Biol. Eng. Comput. 2016, 54, 1631–1640. [Google Scholar] [CrossRef] [PubMed]
  39. Kumagai, S.; Uemura, R.; Ishibashi, T.; Nakabayashi, S.; Arai, N.; Kobayashi, T.; Kotoku, J.I. Markerless Respiratory Motion Tracking Using Single Depth Camera. Open J. Med. Imaging 2016, 6, 20–31. [Google Scholar] [CrossRef]
  40. Tahavori, F.; Adams, E.; Dabbs, M.; Aldridge, L.; Liversidge, N.; Donovan, E.; Jordan, T.; Evans, P.; Wells, K. Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology. In SPIE Medical Imaging: International Society for Optics and Photonics; SPIE: Orlando, FL, USA, 2015; Volume 9415, pp. 1–7. [Google Scholar]
  41. Lee, J.; Hong, M.; Ryu, S. Sleep monitoring system using kinect sensor. Int. J. Distrib. Sens. Netw. 2015, 2015, 1–9. [Google Scholar] [CrossRef]
  42. Samir, M.; Golkar, E.; Rahni, A.A.A. Comparison between the KinectTM V1 and KinectTM V2 for respiratory motion tracking. In Proceedings of the 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 19–21 October 2015; pp. 150–155.
  43. Kim, C.; Yun, S.; Jung, S.-W.; Won, C.S. Color and depth image correspondence for Kinect v2. In Advanced Multimedia and Ubiquitous Engineering; Springer: New York, NY, USA, 2015; pp. 111–116. [Google Scholar]
  44. Yang, L.; Zhang, L.; Dong, H.; Alelaiwi, A.; El Saddik, A. Evaluating and improving the depth accuracy of Kinect for Windows v2. Sens. J. 2015, 15, 4275–4285. [Google Scholar] [CrossRef]
  45. Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image Underst. 2015, 139, 1–20. [Google Scholar] [CrossRef]
  46. Mutto, C.D.; Zanuttigh, P.; Cortelazzo, G.M. Time-of-Flight Cameras and Microsoft Kinect (TM); Springer Publishing Company, Incorporated: New York, NY, USA, 2012. [Google Scholar]
  47. The Primesensor Reference Design. Available online: http://www.primesensor.com (accessed on 25 December 2016).
  48. González-Audícana, M.; Saleta, J.L.; Catalán, R.G.; García, R. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299. [Google Scholar] [CrossRef]
  49. Tsai, D.-M.; Chiang, C.-H. Rotation-invariant pattern matching using wavelet decomposition. Pattern Recognit. Lett. 2002, 23, 191–201. [Google Scholar] [CrossRef]
  50. Patidar, P.; Gupta, M.; Srivastava, S.; Nagawat, A.K. Image de-noising by various filters for different noise. Int. J. Comput. Appl. 2010, 9, 45–50. [Google Scholar] [CrossRef]
  51. Wadhwa, N.; Rubinstein, M.; Durand, F.; Freeman, W.T. Phase-based video motion processing. ACM Trans. Graph. 2013, 32. [Google Scholar] [CrossRef][Green Version]
  52. Wu, H.-Y.; Rubinstein, M.; Shih, E.; Guttag, J.V.; Durand, F.; Freeman, W.T. Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph. 2012, 31, 65. [Google Scholar] [CrossRef]
  53. Al-Naji, A.; Chahl, J. Non-contact heart activity measurement system based on video imaging analysis. Int. J. Pattern Recognit. Artif. Intell. 2017, 31, 1–21. [Google Scholar] [CrossRef]
  54. Madhukar, B.; Narendra, R. Lanczos resampling for the digital processing of remotely sensed images. In Proceedings of the International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking; Springer: New York, NY, USA, 2013; pp. 403–411. [Google Scholar]
  55. Martínez-Martín, E.; del Pobil, Á.P. Motion detection in static backgrounds. In Robust Motion Detection in Real-Life Scenarios; Springer: New York, NY, USA, 2012; pp. 5–42. [Google Scholar]
  56. Hossain, M.S.; Hossain, M.E.; Khalid, M.S.; Haque, M.A. A Belief Rule-Based (BRB) Decision Support System for Assessing Clinical Asthma Suspicion. In Proceedings of the Scandinavian Conference on Health Informatics, Grimstad, Norway, 21–22 August 2014; Linköping University Electronic Press, 2014; pp. 83–89. [Google Scholar]
  57. Bland, J.M.; Altman, D.G. Statistical methods for assessing agreement between two methods of clinical measurement. Int. J. Nurs. Stud. 2010, 47, 931–936. [Google Scholar] [CrossRef]
Figure 1. Microsoft Kinect v2 sensor.
Figure 2. System overview. The proposed system comprises a Kinect sensor connected to a laptop via the Microsoft Kinect adapter; a real-time system based on motion magnification and motion detection; and the built-in Kinect software library, including body index and skeleton tracking.
Figure 3. Skeletal joints provided in the Kinect code library. The region of interest is the yellow pentagon defined by the five points.
Figure 4. Red, green and blue (RGB) image, depth map, infrared image, skeleton tracking, and body index from the Kinect v2 sensor.
Figure 5. A five-minute recording of a healthy subject who was asked to hold his breath twice during the measurement, for the first scenario (without a blanket).
Figure 6. A five-minute recording of a healthy subject who was asked to hold his breath twice during the measurement, for the first scenario (with a blanket).
Figure 7. A five-minute recording of a healthy subject who was asked to hold his breath twice during the measurement, for the second scenario (without a blanket).
Figure 8. A five-minute recording of a healthy subject who was asked to hold his breath twice during the measurement, for the second scenario (with a blanket).
Figure 9. (a) Correlation plot; (b) Bland-Altman plot of the difference between measured data and reference data (first scenario without a blanket).
Figure 10. (a) Correlation plot; (b) Bland-Altman plot of the difference between measured data and reference data (first scenario with a blanket).
Figure 11. (a) Correlation plot; (b) Bland-Altman plot of the difference between measured data and reference data (second scenario without a blanket).
Figure 12. (a) Correlation plot; (b) Bland-Altman plot of the difference between measured data and reference data (second scenario with a blanket).
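The agreement statistics behind correlation and Bland-Altman plots such as these can be computed in a few lines of NumPy. The sketch below is generic; the function name and the sample data in the usage note are illustrative, not the study's measurements.

```python
import numpy as np

def agreement_stats(measured, reference):
    """Pearson correlation, Bland-Altman bias, and 95% limits of agreement
    between two series of respiratory-rate measurements."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    r = np.corrcoef(measured, reference)[0, 1]   # correlation-plot statistic
    diff = measured - reference                  # per-sample disagreement
    bias = diff.mean()                           # Bland-Altman mean difference
    sd = diff.std(ddof=1)                        # spread of the differences
    return r, bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, a measured series that deviates from its reference by alternating ±0.2 breaths/min would give a correlation near 1, zero bias, and limits of agreement symmetric about zero.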
Table 1. Comparative specifications of Microsoft Kinect v1 and v2.
Feature | Kinect v1 | Kinect v2
Depth sensor type | Structured light | Time of Flight (ToF)
Red, Green & Blue (RGB) camera resolution | 640 × 480, 30 fps | 1920 × 1080, 30 fps
Infrared (IR) camera resolution | 320 × 240, 30 fps | 512 × 424, 30 fps
Field of view of RGB image | 62° × 48.6° | 84.1° × 53.8°
Field of view of depth image | 57° × 43° | 70° × 60°
Operative measuring range | 0.8 m–4 m (default); 0.4 m–3.5 m (near) | 0.5 m–4.5 m
Skeleton joints defined | 20 joints | 25 joints
Maximum skeletons tracked | 2 | 6