Open Access Letter

Life Signs Detector Using a Drone in Disaster Zones

1. Electrical Engineering Technical College, Middle Technical University, Baghdad 1022, Iraq
2. School of Engineering, University of South Australia, Mawson Lakes, SA 5095, Australia
3. Joint and Operations Analysis Division, Defence Science and Technology Group, Melbourne, VIC 3207, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(20), 2441; https://doi.org/10.3390/rs11202441
Received: 15 September 2019 / Revised: 13 October 2019 / Accepted: 17 October 2019 / Published: 21 October 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

In the aftermath of a disaster, such as an earthquake, flood, or avalanche, ground search for survivors is usually hampered by unstable surfaces and difficult terrain. Drones now play an important role in these situations, allowing rescuers to locate survivors and allocate resources to saving those who can be helped. The aim of this study was to explore the utility of a drone equipped for human life detection with a novel computer vision system. The proposed system uses image sequences captured by a drone camera to remotely detect the cardiopulmonary motion caused by periodic chest movement of survivors. The results for eight human subjects and one mannequin in different poses show that motion detection on the body surface of survivors is likely to be useful for detecting life signs without any physical contact. The results presented in this study may lead to a new approach to life detection and remote life sensing assessment of survivors.
Keywords: cardiopulmonary motion; motion detection; drone; UAV; OpenPose; denoising; wavelet

1. Introduction

In disaster relief, the search for survivors is a challenge for lifesavers: reaching a survivor can be difficult, potentially unsafe, and very stressful, leading to cognitive and physical fatigue. Recent advances in unmanned aerial vehicle (UAV, now synonymously "drone") technologies make them well suited to this purpose, and they can assist the effort to find living survivors in the aftermath of a disaster [1,2,3]. Current ground-based options for detecting humans, such as rescue robots, are expensive, require invasive equipment, and require a team of human operators within the search and rescue operation [4,5]. Furthermore, accessing a disaster zone is more difficult for rescue robots than for rescue workers and rescue dogs due to the limited mobility of wheeled and tracked platforms. To address these challenges, we are developing a highly autonomous drone to assist first responders in their search for survivors in disaster zones.
The task of observing and analysing humans from an aerial robot in outdoor environments has been of interest to the computer vision community for the last decade. Techniques to address the problem of estimating human pose and trajectory from the sky have been suggested. For example, Perera et al. [6,7] proposed an approach to estimating the gait sequence and movement trajectory of human subjects from video captured by a drone. However, their proposed approach was limited to a fully visible subject and a limited range of poses such as standing or walking. Another study by Lygouras et al. [8] used UAV videos for the identification and location of open water swimmers. The proposed system detected the swimmers' presence by applying a deep learning architecture and was capable of accurately detecting swimmers without apparent human intervention. However, the proposed system may fail in some environmental conditions, and no life detection was attempted in that study. Other techniques for detecting human subjects in aerial videos have been proposed for search and rescue operations in disaster scenarios [2,9,10,11,12]. The main focus of these studies was to determine humans' locations and identify various human poses on the ground without any detection of the subjects' life signs. Other techniques using aerial thermal videos have been suggested. For example, a study by Kang et al. [13] proposed a technique for detecting and tracking human subjects from the air based on thermal images captured by a UAV. A study by Portmann et al. [14] used aerial thermal images captured by a UAV for human detection and tracking. Another study by Rudol and Doherty [15] proposed a technique based on a combination of thermal and colour cameras to detect humans using standard hardware onboard an autonomous UAV in outdoor environments. A combination of thermal and colour cameras attached to a UAV was also used by Rivera et al. [16] to detect humans with geolocation of the survivors.
These studies, however, have several disadvantages, such as short-range detection, low resolution, high cost, and motion artefacts, and none of them established detection of human life signs. Thermal cameras only detect live subjects where there is thermal contrast against the background; they cannot distinguish deceased subjects by temperature alone, and they have limited ability to detect living subjects in warm environments, against warm backgrounds, or in the presence of insulating clothing. A notable system proposed by Al-Kaff et al. [17] was developed to detect humans lying on the ground in unconstrained poses using colour and depth data obtained from sensors on board the UAV. Although this system was efficient at detecting humans in many poses, it was limited to detecting human forms, without any aspect of life signs detection, and was limited to close range. Recent studies by Al-Naji et al. [18,19] proposed a robust technique for monitoring human vital signs (heart rate and breathing rate) remotely from video taken by a drone. The technique used imaging photoplethysmography (iPPG) based on skin colour analysis of the facial region at a distance of 3 meters. However, because the technique relied on skin colour analysis rather than motion analysis, it is difficult to apply when the region of interest (ROI) is obscured by clothing, debris, or blood, or when the subject is lying face down. In addition, the technique was limited to one pose, with the subject standing in front of the UAV, and the experiments did not control for human forms without life signs.
Therefore, the main contribution of the current study is a new life signs detector system based on a drone observer, intended for disaster zones, that can efficiently distinguish living from non-living human subjects in different poses at distances limited by optics (4–8 meters in these experiments).
This paper is organised as follows: Section 2 presents the methods and materials of the proposed life detector system, including the subjects and ethics considerations, experimental setup, data collection, system framework, and data analysis. The MATLAB graphical user interface (MathWorks, NSW, Australia) and the experimental results of the proposed life signs detection system with different poses are presented in Section 3. Section 4 discusses the experimental results, limitations, and directions of future work. Finally, concluding remarks are outlined in Section 5.

2. Methods and Materials

2.1. Human Ethics Considerations

The research procedure described in this study was carried out in accordance with the ethical standards of the Declaration of Helsinki (ethical principles for medical research involving human subjects), and approved by the University of South Australia, Human Research Ethics Committee (HREC) (Mawson Lakes, South Australia, Protocol number: 0000035185). A written description of the experimental procedures was provided to the participants who signed an informed consent form before commencing the experiment.
This study is the first of its type and was performed with a range of living subjects and a mannequin to emulate a deceased person. Studies with the deceased require a degree of perceived societal benefit more fitting for mature development and certification of products resulting from work on this topic. We have thus focused on a high diversity of appearance among living human subjects rather than on incorporating deceased persons.

2.2. Experimental Setup and Data Acquisition

The video data was captured using a GoPro Hero 4 Black action camera (GoPro Inc., San Mateo, CA, USA) with a replacement lens (5.4 mm, 10 MP, infrared (IR) cut filter) to reduce the fish-eye effect and narrow the field of view. To minimise camera vibration and stabilise the footage, the camera was attached to the drone with a three-axis gimbal (3DR Solo gimbal, 3DR Inc., Berkeley, CA, USA). The gimbal handles camera attitude and stabilisation autonomously, records stabilised video within 0.1 degrees of pointing accuracy, and minimises the motor vibration transferred to the camera.
The videos were sampled at 25 frames per second at a resolution of 3840 × 2160 pixels with an image sensor sensitivity of ISO 400. The videos were captured at an altitude of between 4 and 8 metres from the subject, as shown in Figure 1.
The experiment was performed using eight human subjects (four males: one South Asian, one Middle Eastern, and two Caucasian; four females: one Sub-Saharan African, two Middle Eastern, and one Asian) aged from 20 to 40 years, and one full-body male mannequin. All human subjects were asked to lie down on the ground in different poses and to breathe normally. The mannequin, with a height of 1.96 m, fully clothed, and wearing a black wig, was used to simulate a deceased subject. It had a realistic face, and both its arms and head were adjustable to different angles to create different poses. The videos were acquired over three days in daylight and in relatively low wind, for one minute per subject, repeated at different poses to obtain sufficient video data for experimental purposes. However, only 30 s of each video was used, which reduced the execution time while yielding the same detection accuracy as longer recordings. A human subject and the mannequin lying on the ground next to each other were also recorded by the drone. The data acquisition of eight human subjects and one mannequin in different poses is shown in Figure 2.

2.3. System Framework and Data Analysis

The schematic diagram of the proposed system used as a life signs detector from video data captured by a drone is demonstrated in Figure 3.
The motion of the chest wall due to cardiopulmonary activity in live subjects directly causes variations in the reflected intensity values in the image sequences. To detect this motion, several image and video processing techniques are applied in this work to automatically analyse and extract luminance values of interest from the video data. Firstly, each video is converted to an image sequence and saved as Joint Photographic Experts Group (JPEG) format files on a personal computer (PC) for further analysis. The drone camera captured the video in the red, green and blue (RGB) colour model, so, to obtain the image intensity data as the luminance component, MATLAB's built-in function 'rgb2ycbcr' was used to convert the RGB colour model into the YCbCr colour model using the following equation:
\[
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} =
\begin{bmatrix}
 65.481 & 128.553 &  24.966 \\
-37.797 & -74.203 & 112     \\
112     & -93.786 & -18.214
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix} +
\begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix},
\]
where the $Y$ component represents the luminance values, ranging from 16 to 235, while the $C_b$ and $C_r$ components are the chrominance values, ranging from 16 to 240 [20,21].
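The conversion above can be illustrated with a minimal NumPy sketch (a Python stand-in for MATLAB's `rgb2ycbcr`, not the authors' code), applying the same BT.601 matrix and offsets to RGB values normalised to [0, 1]:

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr matrix and offset (inputs R, G, B in [0, 1]),
# matching the conversion equation used by MATLAB's rgb2ycbcr.
M = np.array([[ 65.481, 128.553,  24.966],
              [-37.797, -74.203, 112.0  ],
              [112.0,   -93.786, -18.214]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] to YCbCr.

    Y lands in [16, 235]; Cb and Cr land in [16, 240]."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb @ M.T + OFFSET
```

For example, pure white maps to (Y, Cb, Cr) = (235, 128, 128) and pure black to (16, 128, 128), the extremes of the luminance range quoted above.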
Secondly, the chest region, where the cardiopulmonary motion is most pronounced, was automatically selected using a body joint estimation approach called OpenPose [22]. OpenPose is an efficient and robust approach to the challenges associated with unconstrained scenes and multiple subjects, producing high-quality parses of body poses at low computational cost [22]. The OpenPose approach can estimate 135 key-points in a single image, including 18 body joints per subject. However, we only used the torso joints (neck, left shoulder, right shoulder, left hip, and right hip) to choose the chest region. After determining the detected joints that indicate upper body pose, the torso area was divided into upper and lower torso regions. We considered the upper torso region as our ROI, as illustrated in Figure 4.
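Given the five torso keypoints, deriving the upper-torso box is simple geometry. The sketch below (our own illustration, not part of OpenPose; the keypoint coordinates in the example are hypothetical) takes the torso bounding box and keeps its upper half as the ROI:

```python
import numpy as np

def upper_torso_roi(neck, l_shoulder, r_shoulder, l_hip, r_hip):
    """Derive an upper-torso bounding box (x0, y0, x1, y1) from five
    torso keypoints, each given as (x, y) pixel coordinates.

    The torso box spans the shoulders and hips; splitting it at its
    vertical midpoint keeps the upper half, where chest motion is
    most pronounced."""
    pts = np.array([neck, l_shoulder, r_shoulder, l_hip, r_hip], dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    y_mid = (y0 + y1) / 2.0   # split the torso into upper/lower halves
    return x0, y0, x1, y_mid  # keep the upper half as the ROI
```

With hypothetical keypoints neck (50, 10), shoulders (30, 20) and (70, 20), hips (35, 80) and (65, 80), this yields the box (30, 10, 70, 45).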
Thirdly, the luminance pixel values over the image sequences of the selected ROI from the Y component were averaged as follows:
\[
I_{Y_{avg}}(t) = \frac{\sum_{x,y \in ROI} I(x,y,t)}{|ROI|},
\]
where $I(x,y,t)$ is the luminance pixel value at image location $(x,y)$ at time $t$ in the image sequence, and $|ROI|$ is the size (in pixels) of the selected ROI.
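The averaging step amounts to one spatial mean per frame. A minimal NumPy sketch (assuming the luminance frames are already stacked into an array; the names are ours, not the paper's):

```python
import numpy as np

def roi_luminance_series(frames_y, roi):
    """Average the Y (luminance) channel inside the ROI for each frame.

    frames_y : array of shape (n_frames, height, width) of luminance values
    roi      : (x0, y0, x1, y1) bounding box in pixel coordinates
    Returns the 1-D signal I_Yavg(t), one sample per frame."""
    x0, y0, x1, y1 = roi
    return frames_y[:, y0:y1, x0:x1].mean(axis=(1, 2))
```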
To reduce motion artefact noise in the $I_{Y_{avg}}(t)$ signal, resulting from drone movement during video recording, a wavelet signal denoising method using an empirical Bayesian method with a Cauchy prior, followed by smoothing with a moving average filter, was used. MATLAB's built-in function 'wdenoise' was first used to denoise the signal, as shown in Figure 5b, with a Daubechies (dbN) wavelet family and a level-dependent universal soft threshold that estimates the variance of the noise from the wavelet coefficients at each resolution level. MATLAB's built-in function 'smooth' was then used to smooth the denoised signal, as shown in Figure 5c.
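The smoothing stage can be sketched without MATLAB. The function below is a NumPy approximation of `smooth(signal, span)` with its default moving-average method; it shrinks the window symmetrically near the ends, as MATLAB does, so input and output lengths match. (This covers only the smoothing step, not the `wdenoise` wavelet stage, which has no direct standard-library equivalent.)

```python
import numpy as np

def moving_average(signal, span=5):
    """Smooth a 1-D signal with a centred moving-average filter,
    analogous to MATLAB's smooth(signal, span).

    Near the ends the window is shrunk symmetrically so the output
    has the same length as the input."""
    signal = np.asarray(signal, dtype=float)
    half = span // 2
    out = np.empty_like(signal)
    for i in range(len(signal)):
        k = min(i, len(signal) - 1 - i, half)  # symmetric shrink at edges
        out[i] = signal[i - k:i + k + 1].mean()
    return out
```

Note that a linear ramp passes through unchanged (a property of centred averaging), while an isolated spike is spread over the window.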
Lastly, to identify the peaks in the smoothed signal, their locations, and their periodicity, peak detection based on the wavelet transform was used. The continuous wavelet transform (CWT) is defined as the inner product of the input signal, $I_{Y_{avg}}(t)$, and scaled, shifted versions of a mother wavelet function, $\psi$. Mathematically, the CWT of the signal $I_{Y_{avg}}(t)$ at scale $x$ and shift $y$ is described as [23,24]:
\[
W(x,y) = \int_{-\infty}^{+\infty} I_{Y_{avg}}(t)\, \psi_{x,y}(t)\, dt, \qquad \psi_{x,y}(t) = \frac{1}{\sqrt{x}}\, \psi\!\left(\frac{t-y}{x}\right),
\]
where $I_{Y_{avg}}(t)$ is the signal after denoising and smoothing, and $\psi_{x,y}(t)$ is $\psi(t)$ scaled by $x$ and shifted by $y$. The CWT coefficients contain patterns of peaks and periodicity which can be used to detect the number, locations, and strengths of peaks in $I_{Y_{avg}}(t)$ at scales of similar size to $\psi_{x,y}(t)$. Varying $x$ in $\psi_{x,y}(t)$ yields wavelets of different widths, so all peaks in $I_{Y_{avg}}(t)$, regardless of their width, can be detected. According to the patterns of peaks and their periodicity, our proposed system can detect life signs of the target subject.
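The decision logic that follows peak detection can be illustrated with a toy NumPy version. The paper uses CWT-based peak detection; here plain local maxima stand in for it, and the breathing-rate and periodicity thresholds are illustrative assumptions of ours, not values from the paper:

```python
import numpy as np

def detect_life_sign(signal, fps=25.0, min_bpm=6, max_bpm=60, cv_max=0.5):
    """Toy life-sign decision from a denoised/smoothed chest-ROI signal.

    A subject is declared ALIVE when the number of detected peaks
    corresponds to a plausible breathing rate and the peak intervals
    are roughly periodic (low coefficient of variation). Thresholds
    (min_bpm, max_bpm, cv_max) are illustrative, not from the paper."""
    s = np.asarray(signal, dtype=float)
    # Local maxima as a simplified stand-in for CWT-based peak detection.
    peaks = [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] > s[i + 1]]
    duration_min = len(s) / fps / 60.0
    rate = len(peaks) / duration_min              # peaks per minute
    if len(peaks) < 3 or not (min_bpm <= rate <= max_bpm):
        return "DECEASED", len(peaks)
    intervals = np.diff(peaks)
    cv = intervals.std() / intervals.mean()       # periodicity check
    return ("ALIVE" if cv <= cv_max else "DECEASED"), len(peaks)
```

On a clean 0.31 Hz sinusoid sampled for 30 s at 25 fps (emulating breathing motion), this returns "ALIVE" with 10 evenly spaced peaks; a flat signal, like the mannequin's ROI, returns "DECEASED" with 0 peaks.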

3. Experimentation and Results

A graphical user interface (GUI) panel was implemented in MATLAB R2019a (MathWorks, NSW, Australia) under the Microsoft Windows 10 operating system. The panel allows the user to load video data captured by the drone, automatically select the ROI where chest motion is most apparent, and execute the algorithm. The GUI displays the video information, the selected ROI, the input signal, the denoised/smoothed signal, and the number of peaks, and indicates whether the target subject is alive or deceased, as illustrated in Figure 6 and Figure 7, respectively. To detect chest motion on a human subject, the proposed life signs detector extracts the signal from the selected ROI, applies the denoising/smoothing methods, and then calculates the number of peaks and their periodicity for each signal. From the number and periodicity of the peaks, the system determines whether the target subject is alive or deceased. Figure 6 shows the GUI main panel of the proposed life detector system for a human subject.
Our proposed system considered the mannequin to be a deceased subject since the mannequin had no chest-region motion, so the input signal from the selected ROI has no peaks. Even if some peaks generated by drone motion and other motion artefacts appear in the signal, the periodicity of those peaks is nonuniform and the system will still classify the mannequin as a deceased subject. Figure 7 shows the GUI main panel of the proposed life detector system for the deceased subject (mannequin).
The experimental results were carried out for all subjects at four different poses (back position, side position facing the camera, side position non-facing the camera, and stomach position) as shown in Figure 8. Table 1 demonstrates the life detection results of eight human subjects and one mannequin at the poses shown in Figure 8 based on motion caused by cardiopulmonary activity.
The results in Table 1 show that motion detection from the chest region could be effectively and efficiently used to count the motion peaks in each recording and thus to determine whether the subject is alive or deceased, for all eight human subjects and the mannequin in different poses.
To detect life signs for multiple subjects, we recorded a video for both the human subject and the mannequin lying on the ground next to each other. The proposed system succeeded in recognising the life signs of these subjects and in determining which was alive and which was not, as shown in Figure 9.

4. Discussion

With the data set available, we have demonstrated that motion-based breathing detection from an aerial platform is possible with high certainty under the conditions tested. The technique could be included in mapping software to enhance drone generated maps of disaster scenes with life sign annotations although substantially more automation and integration would be required.
The system we have proposed successfully distinguished live subjects from a mannequin in daylight, in calm conditions, using a powerful PC equipped with MATLAB software and a human operator. Each of these conditions is a limitation. Although the participants comprised a reasonable subset of humanity, with South Asian, East Asian, African, Middle Eastern, and European individuals, some combinations of gender and appearance were not tested. The mannequin was somewhat realistic; however, a human observer can tell that it is not human from around 8 m, the longest range tested. Since the experiments were carried out at altitudes of 4–8 m, the upper-torso ROI contained enough detail to extract the body movements caused by breathing with high detection accuracy. Videos recorded above 8 m had small ROIs without enough image detail at our recording resolution (3840 × 2160 pixels); data recording was therefore limited to this altitude range to maintain a reasonably sized ROI. Whilst the OpenPose method performed well, there is no doubt that field conditions and partial occlusion of subjects would create challenging situations.
The range limit of the camera is driven more by the stability of the gimbal than by the optics or the dynamics of the platform. Although ranges of 4–8 m were tested, the clean waveforms in Figure 5 suggest that longer focal length optics would deliver detection at multiples of this range. This possibility depends on the performance of the gimbal, which might degrade considerably under less calm conditions.
Since the means of detection was focused on the cardiopulmonary motion in an ROI, it is also possible that the periodic chest movement alone might be a cue for finding occluded or camouflaged subjects. Such an approach would require significant dwell time on each scene imaged, but it might yield a new capability.
Several studies [25,26,27,28,29,30] have utilised video magnification techniques with human subjects to reveal physiological signs in an appropriate ROI when the cardiopulmonary motion is difficult to see with a vision system. These studies may provide further insight for future work directions.

5. Conclusions

Globally, recovery of survivors in a disaster zone is a time critical, high-priority activity. In this paper, we have demonstrated for the first time a technology, using standard colour cameras fitted by default to consumer drones, that can detect the presence or absence of life signs from humans and human-shaped objects lying in many poses on the ground. Under the test conditions, the results obtained at distances of 4–8 metres from the subject demonstrated the robustness of the proposed system for detecting life signs in different poses with an accuracy of 100%. We noted that the proposed system is able to efficiently detect the life signs of multiple subjects, showing promise as a future tool for search and rescue operations. Future work will focus on more realistic scenarios and further automation of the technique to ease operational application of the technology. Development of a software simulator for a breathing human in a cluttered simulated scene is a must for achieving a reliable and tested capability, due to the large number of variables and scenarios.

Author Contributions

Conceptualization, A.A.-N.; Data curation, A.A.-N.; Funding acquisition, J.C.; Investigation, A.A.-N., A.G.P. and S.L.M.; Methodology, A.A.-N. and S.L.M.; Project administration, A.A.-N. and J.C.; Resources, A.A.-N. and A.G.P.; Software, A.A.-N. and A.G.P.; Supervision, J.C.; Writing—original draft, A.A.-N.; Writing—review & editing, A.A.-N., A.G.P., S.L.M. and J.C. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the staff and participants in the Electrical Engineering Technical College, Middle Technical University and School of Engineering, University of South Australia for their support to conduct this work.

Conflicts of Interest

The authors of this manuscript have no conflict of interest relevant to this work.

References

  1. Mayer, S.; Lischke, L.; Woźniak, P.W. Drones for Search and Rescue. In 1st International Workshop on Human-Drone Interaction; Ecole Nationale de l’Aviation Civile [ENAC]: Glasgow, UK, 2019. [Google Scholar]
  2. Bogue, R. Search and rescue and disaster relief robots: Has their time finally come? Ind. Robot Int. J. 2016, 43, 138–143. [Google Scholar] [CrossRef]
  3. Hildmann, H.; Kovacs, E.; Saffre, F.; Isakovic, A. Nature-Inspired Drone Swarming for Real-Time Aerial Data-Collection Under Dynamic Operational Constraints. Drones 2019, 3, 71. [Google Scholar] [CrossRef]
  4. Casper, J.; Murphy, R.R. Human-Robot interactions during the Robot-Assisted urban search and rescue response at the world trade center. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2003, 33, 367–385. [Google Scholar] [CrossRef] [PubMed]
  5. Doroodgar, B.; Liu, Y.; Nejat, G. A learning-based Semi-Autonomous controller for robotic exploration of unknown disaster scenes while searching for victims. IEEE Trans. Cybern. 2014, 44, 2719–2732. [Google Scholar] [CrossRef] [PubMed]
  6. Perera, A.G.; Al-naji, A.; Law, Y.W.; Chahl, J. Human detection and motion analysis from a quadrotor UAV. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2018; Volume 405, p. 012003. [Google Scholar]
  7. Perera, A.G.; Law, Y.W.; Al-naji, A.; Chahl, J. Human motion analysis from UAV video. Int. J. Intell. Unmanned Syst. 2018, 6, 69–92. [Google Scholar] [CrossRef]
  8. Lygouras, E.; Santavas, N.; Taitzoglou, A.; Tarchanidis, K.; Mitropoulos, A.; Gasteratos, A. Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UAV for Search and Rescue Operations. Sensors 2019, 19, 3542. [Google Scholar] [CrossRef] [PubMed]
  9. Doherty, P.; Rudol, P. A UAV search and rescue scenario with human body detection and geolocalization. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin, Heidelberg, 2007; pp. 1–13. [Google Scholar]
  10. Andriluka, M.; Schnitzspan, P.; Meyer, J.; Kohlbrecher, S.; Petersen, K.; Von Stryk, O.; Roth, S.; Schiele, B. Vision based victim detection from unmanned aerial vehicles. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; IEEE Publishing: San Antonio, TX, USA, 2010; pp. 1740–1747. [Google Scholar]
  11. Câmara, D. Cavalry to the rescue: Drones fleet to help rescuers operations over disasters scenarios. In Proceedings of the 2014 IEEE Conference on Antenna Measurements & Applications (CAMA), Antibes Juan-les-Pins, France, 16–19 November 2014; IEEE Publishing: San Antonio, TX, USA, 2014; pp. 1–4. [Google Scholar]
  12. Sulistijono, I.A.; Risnumawan, A. From concrete to abstract: Multilayer neural networks for disaster victims detection. In Proceedings of the 2016 International Electronics Symposium (IES), Denpasar, Indonesia, 29–30 September 2016; IEEE Publishing: San Antonio, TX, USA, 2016; pp. 93–98. [Google Scholar]
  13. Kang, J.; Gajera, K.; Cohen, I.; Medioni, G. Detection and tracking of moving objects from overlapping EO and IR sensors. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; IEEE Publishing: San Antonio, TX, USA, 2004; p. 123. [Google Scholar]
  14. Portmann, J.; Lynen, S.; Chli, M.; Siegwart, R. People detection and tracking from aerial thermal views. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; IEEE Publishing: San Antonio, TX, USA, 2014; pp. 1794–1800. [Google Scholar]
  15. Rudol, P.; Doherty, P. Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; IEEE Publishing: San Antonio, TX, USA, 2008; pp. 1–8. [Google Scholar]
  16. Rivera, A.; Villalobos, A.; Monje, J.; Mariñas, J.; Oppus, C. Post-disaster rescue facility: Human detection and geolocation using aerial drones. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; IEEE Publishing: San Antonio, TX, USA, 2016; pp. 384–386. [Google Scholar]
  17. Al-Kaff, A.; Gómez-Silva, M.J.; Moreno, F.M.; de la Escalera, A.; Armingol, J.M. An Appearance-Based tracking algorithm for aerial search and rescue purposes. Sensors 2019, 19, 652. [Google Scholar] [CrossRef] [PubMed]
  18. Al-Naji, A.; Perera, A.G.; Chahl, J. Remote measurement of cardiopulmonary signal using an unmanned aerial vehicle. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2018; Volume 405, p. 012001. [Google Scholar]
  19. Al-Naji, A.; Perera, A.G.; Chahl, J. Remote monitoring of cardiorespiratory signals from a hovering unmanned aerial vehicle. Biomed. Eng. OnLine 2017, 16, 101. [Google Scholar] [CrossRef] [PubMed]
  20. John, N.; Viswanath, A.; Sowmya, V.; Soman, K. Analysis of various color space models on effective single image super resolution. In Intelligent Systems Technologies and Applications; Springer: Cham, Switzerland, 2016; pp. 529–540. [Google Scholar]
  21. Kumar, A.; Kaur, A.; Kumar, M. Face detection techniques: A review. Artif. Intell. Rev. 2019, 52, 927–948. [Google Scholar] [CrossRef]
  22. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.-E.; Sheikh, Y. OpenPose: Realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv 2018, arXiv:1812.08008. [Google Scholar] [CrossRef] [PubMed]
  23. Wee, A.; Grayden, D.B.; Zhu, Y.; Petkovic-Duran, K.; Smith, D. A continuous wavelet transform algorithm for peak detection. Electrophoresis 2008, 29, 4215–4225. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, Z.M.; Chen, S.; Liang, Y.Z.; Liu, Z.X.; Zhang, Q.M.; Ding, L.X.; Ye, F.; Zhou, H. An intelligent Background-Correction algorithm for highly fluorescent samples in Raman spectroscopy. J. Raman Spectrosc. 2010, 41, 659–669. [Google Scholar] [CrossRef]
  25. Alinovi, D.; Ferrari, G.; Pisani, F.; Raheli, R. Respiratory rate monitoring by video processing using local motion magnification. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; IEEE Publishing: San Antonio, TX, USA, 2018; pp. 1780–1784. [Google Scholar]
  26. Aubakir, B.; Nurimbetov, B.; Tursynbek, I.; Varol, H.A. Vital sign monitoring utilizing Eulerian video magnification and thermography. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; IEEE Publishing: San Antonio, TX, USA, 2016; pp. 3527–3530. [Google Scholar]
  27. Al-naji, A.; Chahl, J. Contactless cardiac activity detection based on head motion magnification. Int. J. Image Graph. 2017, 17, 1750001. [Google Scholar] [CrossRef]
  28. Al-naji, A.; Chahl, J.; Lee, S.-H. Cardiopulmonary signal acquisition from different regions using video imaging analysis. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2019, 7, 117–131. [Google Scholar] [CrossRef]
  29. Ordóñez, C.; Cabo, C.; Menéndez, A.; Bello, A. Detection of human vital signs in hazardous environments by means of video magnification. PLoS ONE 2018, 13, e0195290. [Google Scholar] [CrossRef] [PubMed]
  30. Wu, H.-Y.; Rubinstein, M.; Shih, E.; Guttag, J.V.; Durand, F.; Freeman, W.T. Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph. 2012, 31, 65. [Google Scholar] [CrossRef]
Figure 1. Experimental setup of the proposed life detector system.
Figure 2. Data acquisition from human subjects and a mannequin at different poses.
Figure 3. Schematic diagram demonstrating the process by which contactlessly obtained video data were acquired using a drone to detect life signs of both human subjects and a mannequin.
Figure 4. ROI selection based on the OpenPose approach, where the chest region was selected by drawing a bounding box (red) around the upper torso (magenta) for (a) a human subject, (b) a mannequin.
Figure 5. Wavelet signal denoising method: (a) input signal $I_{Y_{avg}}(t)$, (b) denoised signal using wdenoise(I_Yavg, 4, 'Wavelet', 'db20', 'DenoisingMethod', 'UniversalThreshold', 'ThresholdRule', 'Soft', 'NoiseEstimate', 'LevelDependent'), and (c) smoothed signal based on a moving average filter with a span of 5.
Figure 6. The graphical user interface (GUI) main panel of the proposed life detector system for the human subject (alive).
Figure 7. The graphical user interface (GUI) main panel of the proposed life detector system for the mannequin (deceased).
Figure 8. Subjects at different poses: (a) Pose 1: back position, (b) Pose 2: side position facing the camera, (c) Pose 3: side position non-facing the camera, and (d) Pose 4: stomach position.
Figure 9. Multiple subject detection.
Table 1. The experimental results of eight human subjects and one mannequin at different poses.

Subject      Pose      No. of Peaks    Proposed Life Signs Detector Output
Subject 1    Pose 1    13              ALIVE
Subject 1    Pose 2    13              ALIVE
Subject 1    Pose 3    12              ALIVE
Subject 1    Pose 4    12              ALIVE
Subject 2    Pose 1     9              ALIVE
Subject 2    Pose 2     9              ALIVE
Subject 2    Pose 3     9              ALIVE
Subject 2    Pose 4     9              ALIVE
Subject 3    Pose 1    11              ALIVE
Subject 3    Pose 2    11              ALIVE
Subject 3    Pose 3    11              ALIVE
Subject 3    Pose 4    12              ALIVE
Subject 4    Pose 1    11              ALIVE
Subject 4    Pose 2    10              ALIVE
Subject 4    Pose 3    11              ALIVE
Subject 4    Pose 4    11              ALIVE
Subject 5    Pose 1     8              ALIVE
Subject 5    Pose 2     8              ALIVE
Subject 5    Pose 3     8              ALIVE
Subject 5    Pose 4     8              ALIVE
Subject 6    Pose 1     7              ALIVE
Subject 6    Pose 2     8              ALIVE
Subject 6    Pose 3     8              ALIVE
Subject 6    Pose 4     8              ALIVE
Subject 7    Pose 1    11              ALIVE
Subject 7    Pose 2    10              ALIVE
Subject 7    Pose 3    10              ALIVE
Subject 7    Pose 4    11              ALIVE
Subject 8    Pose 1    10              ALIVE
Subject 8    Pose 2    10              ALIVE
Subject 8    Pose 3    10              ALIVE
Subject 8    Pose 4     9              ALIVE
Mannequin    Pose 1     1              DECEASED
Mannequin    Pose 2     1              DECEASED
Mannequin    Pose 3     0              DECEASED
Mannequin    Pose 4     0              DECEASED