Estimation of vital signs using image processing techniques has already been shown to have potential for supporting remote medical diagnostics and replacing traditional measurements, which usually require special hardware and electrodes placed on the body. In this paper, we further extend studies on contactless Respiratory Rate (RR) estimation from extremely low-resolution thermal imagery by enhancing the acquired sequences using Deep Neural Networks (DNN). To perform an extensive benchmark evaluation, we acquired two thermal datasets using FLIR®
cameras with spatial resolutions of 80 × 60 and 320 × 240 from 71 volunteers in total. In-depth analysis of the proposed convolution-based Super Resolution (SR) model showed that images downscaled by a factor of 2 and then super-resolved using Deep Learning (DL) can lead to better RR estimation accuracy than the original high-resolution sequences. In addition, when an estimator based on the dominant peak in the frequency domain is used, SR can outperform the original data for a downscale factor of 4 and images as small as 20 × 15 pixels. Our study also showed that RR estimation accuracy is better for super-resolved data than for images whose color changes were magnified using algorithms previously applied in the literature for enhancing vital-sign patterns.