Figure 1.
Participant equipment distribution. The plot shows participants who already had the required equipment to take part in the study; the remaining participants were sent the equipment if they resided within the UK.
Figure 2.
Final setup of the research study, where the Wellue pulse oximeter and the ARPOS system acquire data simultaneously [59].
Figure 3.
The optical extinction spectra of oxy-haemoglobin (red line) and deoxy-haemoglobin (blue line) within blood. Figure generated using data from Prahl [60].
Figure 4.
The spectral extinction coefficient differences between deoxy-haemoglobin and oxy-haemoglobin (deoxy minus oxy, shown with a black line). Shaded areas correspond to the spectral regions of the colour camera channels red, green, and blue; the grey shaded area is the near-infrared (IR) spectral region above 800 nm, which IR cameras, such as the Microsoft Kinect games console camera, are able to detect. Figure generated using data from Prahl [60].
Figure 5.
ARPOS live application demo image. The image shows people's faces identified on the left (monitor screen); the heart rate is plotted as the blue line and the blood oxygenation level as the red line, with the shaded regions representing the potential error.
Figure 6.
Sample reading from the system. The image shows the regions of interest (forehead, cheeks, and lips) automatically extracted from the face.
Figure 7.
Screenshot of the data acquisition study. A screenshot of what participants see during data acquisition, showing that their face has been identified correctly and a log that updates them about the next steps.
Figure 8.
Illustration of the data acquisition study design, showing how the data were acquired in the study.
Figure 9.
ARPOS Data Processing Flow Diagram. The diagram shows the acquisition and analysis steps for calculating heart rate and blood oxygenation level from the obtained image data.
Figure 10.
Blocking Queue Collection Concept. The image shows one thread that puts frame data into a blocking queue collection and another thread that takes the data from the queue and processes it (extracting face data and writing it to disk).
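The producer–consumer pattern behind Figure 10 can be sketched as follows. This is a minimal illustration in Python using a thread-safe blocking queue, not the actual ARPOS acquisition code; `camera.read`, `extract_face`, and `write_to_disk` are hypothetical placeholders.

```python
import queue
import threading

# Minimal sketch of the blocking-queue concept from Figure 10.
frame_queue = queue.Queue(maxsize=128)  # blocks the producer when full

def acquire_frames(camera, stop_event):
    """Producer: push raw frames into the queue as they arrive."""
    while not stop_event.is_set():
        frame = camera.read()        # hypothetical camera API
        frame_queue.put(frame)       # blocks if the queue is full

def process_frames(stop_event):
    """Consumer: take frames off the queue, extract the face, write to disk."""
    while not stop_event.is_set() or not frame_queue.empty():
        try:
            frame = frame_queue.get(timeout=1)  # blocks until a frame arrives
        except queue.Empty:
            continue
        face = extract_face(frame)   # hypothetical face-extraction step
        write_to_disk(face)          # hypothetical persistence step
        frame_queue.task_done()

# threading.Thread(target=acquire_frames, args=(camera, stop_event)).start()
# threading.Thread(target=process_frames, args=(stop_event,)).start()
```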
Figure 11.
Flow chart showing post-processing, where ROIs are extracted from participants’ faces using Dlib in Python.
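The ROI extraction step in Figure 11 can be approximated with Dlib's 68-point landmark predictor. The sketch below is illustrative only; the landmark indices chosen for each region, the padding, and the predictor file path are assumptions rather than the ARPOS implementation.

```python
import cv2
import dlib
import numpy as np

# Rough sketch of Dlib-based ROI extraction (Figure 11). Landmark index
# choices and the predictor path are illustrative assumptions.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_rois(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()])

    def crop(point_indices, pad=5):
        region = pts[point_indices]
        x0, y0 = region.min(axis=0) - pad
        x1, y1 = region.max(axis=0) + pad
        return image_bgr[max(y0, 0):y1, max(x0, 0):x1]

    return {
        "lips": crop(range(48, 68)),               # mouth landmarks
        "right_cheek": crop([1, 2, 3, 31, 48]),    # approximate cheek area
        "left_cheek": crop([13, 14, 15, 35, 54]),  # approximate cheek area
        # The forehead lies above the 68-point model; approximate it from the brows.
        "forehead": image_bgr[max(faces[0].top() - 40, 0):pts[19:25, 1].min(),
                              pts[19, 0]:pts[24, 0]],
    }
```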
Figure 12.
Colour ROI obtained from a participant’s face.
Figure 13.
IR ROI obtained from a participant’s face. Images have been modified for clarity.
Figure 14.
Sliding Windows Concept. The scale shown in the image represents the number of seconds; grouped frame data are held inside a window of a particular size (4, 10, or 15 s). The window slides by 1 s and passes the data from that time window to the ARPOS system to measure the vital signs for that specific window.
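A minimal sketch of the sliding-window concept from Figure 14, assuming the per-frame signal is stored as timestamps plus one averaged sample per frame; the ARPOS implementation may organise its frame data differently.

```python
import numpy as np

def sliding_windows(timestamps, samples, window_s=15, step_s=1):
    """Yield (t_start, window_samples) for each 1 s slide of the window."""
    timestamps = np.asarray(timestamps, dtype=float)
    samples = np.asarray(samples)
    t_start, t_end = timestamps[0], timestamps[-1]
    while t_start + window_s <= t_end:
        mask = (timestamps >= t_start) & (timestamps < t_start + window_s)
        yield t_start, samples[mask]
        t_start += step_s

# Usage: each yielded window would be passed to the vital-sign estimation step.
# for t, win in sliding_windows(ts, green_channel, window_s=15):
#     estimate_vitals(win)   # hypothetical downstream call
```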
Figure 15.
Window Sliding over signal data. A participant’s data are shown as an example, where a 15 s window selects colour and IR data and slides by 1 s.
Figure 16.
ARPOS system and ground truth comparison for all 40 participants for HR (BPM) in resting and active states. For the resting state, the plot with the lowest RMSE value is shown, using FastICA with pre-processing technique 6 for fps greater than 15 and pre-processing technique 7 for fps lower than 15, as detailed in Section 10 and Table 4. For the active state, the plot with the lowest RMSE value is shown, using PCAICA and the same pre-processing techniques as the resting state. The data from each participant are shown for 60 s, where larger and darker sample points indicate overlapping data at that point.
Figure 17.
Microsoft (MS) face detection library vs. Haar cascade classifier (Viola–Jones). The Microsoft face detection library detects faces in different cases compared to the Haar cascade classifier.
Figure 18.
HR (BPM) RMSE values for all participants for resting and active states. The plots show the mean error, mean absolute error (MeanAbs), RMSE, and r values for different noise reduction algorithms, where FastICA has the best values (lowest error values with the highest r correlation values) compared to the rest of the algorithms. The signal data were also processed with no noise reduction algorithm, represented by ‘None’ on the plot, and compared with the rest of the noise reduction algorithms.
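The evaluation measures reported in Figure 18 correspond to standard error statistics. The sketch below computes them with their textbook definitions, which may differ slightly from the exact formulas used for the plots.

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate(estimated_hr, ground_truth_hr):
    """Mean error, mean absolute error, RMSE, and Pearson's r vs. ground truth."""
    est = np.asarray(estimated_hr, dtype=float)
    ref = np.asarray(ground_truth_hr, dtype=float)
    diff = est - ref
    return {
        "mean_error": diff.mean(),
        "mean_abs_error": np.abs(diff).mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
        "r": pearsonr(est, ref)[0],
    }
```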
Figure 19.
HR (BPM) evaluation measures for participants with darker (left side) and white (right side) skin pigmentation. The plot shows the evaluation measures, including RMSE and r correlation, for resting and active states measured over participants with darker and white skin pigmentation.
Figure 20.
SpO2 (%) evaluation measures for participants with darker (left side) and white (right side) skin pigmentation. SpO2 evaluation measures (including RMSE) for resting and active states measured over all participants.
Figure 21.
HR (BPM) RMSE values for participants with darker and white skin pigmentation. The plot shows HR (BPM) RMSE values averaged over resting and active states for participants with darker and white skin pigmentation.
Figure 22.
SpO2 (%) RMSE values for participants with darker and white skin pigmentation. The plot shows SpO2 (%) RMSE values averaged over resting and active states for participants with darker and white skin pigmentation.
Figure 23.
SpO2 (%) RMSE values for all participants for resting and active states. The plots show the mean absolute error (MeanAbs) and RMSE for different noise reduction algorithms, where ‘None’ has the highest RMSE compared to the rest of the algorithms. The signal data were also processed with no noise reduction algorithm, represented by ‘None’ on the plot, and compared with the rest of the noise reduction algorithms.
Figure 24.
Pre-processing techniques applied to darker-skin participants’ data in combination with noise reduction algorithms. The y-axis shows HR (BPM) RMSE for darker-skin participants and the x-axis shows the noise reduction algorithms applied with different pre-processing techniques.
Figure 25.
Pre-processing techniques applied to white participants’ data in combination with noise reduction algorithms. The y-axis shows HR (BPM) RMSE for white-skin participants and the x-axis shows the noise reduction algorithms applied with different pre-processing techniques.
Figure 26.
SNR by ROI over larger skin pixel areas. SNR for ROIs over all participants, comparing larger and smaller skin pixel areas. The comparison shows that larger areas of skin pixels generate a higher SNR than smaller pixel areas.
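Figure 26 compares SNR across ROIs. One common rPPG SNR definition (spectral power near the estimated pulse frequency and its first harmonic, relative to the remaining pass-band power) is sketched below; the exact SNR definition used here may differ, so treat this as an assumption.

```python
import numpy as np

def snr_db(signal, fs, pulse_hz, band=0.1, f_lo=0.7, f_hi=4.0):
    """SNR (dB): power near pulse_hz and its harmonic vs. remaining pass-band power."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    in_band = ((np.abs(freqs - pulse_hz) <= band) |
               (np.abs(freqs - 2 * pulse_hz) <= band))
    passband = (freqs >= f_lo) & (freqs <= f_hi)
    signal_power = spectrum[in_band & passband].sum()
    noise_power = spectrum[passband & ~in_band].sum()
    return 10 * np.log10(signal_power / noise_power)
```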
Figure 27.
SNR by Channel Type. SNR by channel for all participants.
Figure 28.
Window RMSE comparison. HR (BPM) RMSE comparison between different window sizes for all participants.
Figure 29.
Window RMSE comparison. SpO2 (%) RMSE comparison between different window sizes for all participants.
Figure 30.
Ground truth selection method. HR (BPM) RMSE comparison between ground truth obtained using the averaging method (average value over a specific window size, for example, a 4 s window) and the last-second method (latest value from the window), grouped by resting and active states.
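The two ground-truth selection methods compared in Figure 30 can be written down directly; this sketch assumes the pulse oximeter provides one reading per second within each window.

```python
import numpy as np

def ground_truth_average(oximeter_values):
    """Averaging method: mean of all readings in the window."""
    return float(np.mean(oximeter_values))

def ground_truth_last_second(oximeter_values):
    """Last-second method: latest reading in the window."""
    return float(oximeter_values[-1])
```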
Figure 31.
Makeup RMSE comparison for HR (BPM). Participant with lipstick, for the lip region, where the left bar shows RMSE with makeup (PIS-3252) and the right bar shows RMSE without makeup (PIS-3252P2).
Figure 32.
HR (BPM) RMSE for white and darker skin participants with a beard. Figure showing RMSE bar plots where different noise reduction algorithms have been applied to the data of participants (12.5%) with a heavy beard and white or darker skin pigmentation.
Figure 33.
HR (BPM) RMSE for white skin participants with and without a beard. Figure showing RMSE bar plots where different noise reduction algorithms have been applied to the data of participants (5%) with a heavy beard and white skin pigmentation.
Figure 34.
HR (BPM) RMSE for white skin participants with and without a beard. Figure showing RMSE bar plots where different noise reduction algorithms have been applied to the data of participants (7.5%) with a heavy beard and white skin pigmentation.
Figure 35.
HR (BPM) RMSE comparison for pre-processing techniques with noise reduction algorithms applied for participants with fps lower than or equal to 15.
Figure 36.
HR (BPM) RMSE comparison for pre-processing techniques with noise reduction algorithms applied for participants with fps greater than 15.
Figure 37.
Participants retook the study when their data had low fps. The HR (BPM) RMSE of the retaken studies decreased considerably after the fps increased.
Figure 38.
Time taken to process participants’ data, grouped by algorithm (for five components at a time). The plot shows the execution time of different noise reduction algorithms, including the entire process of obtaining the vital signs: spatial pooling, pre-processing, the noise reduction algorithm, FFT, and filtering. Jade takes the most execution time compared to the rest of the algorithms. The remaining algorithms have an execution time of less than 5 ms, which makes them appear as a single line. Zoomed-in views showing the execution time of the remaining noise reduction algorithms are given for individual components in Figure 39 and for all components in Figure 40.
Figure 39.
Time taken to process participants’ data, grouped by algorithm (for a single component at a time). The plot shows the execution time of different noise reduction algorithms, including the entire process of obtaining the vital signs: spatial pooling, pre-processing, the noise reduction algorithm, FFT, and filtering. Processing each component individually takes double the time compared to processing all components at the same time, as shown in Figure 40.
Figure 40.
Time taken to process participants’ data, grouped by algorithm (without Jade and for five components at a time). The plot shows the execution time of different noise reduction algorithms, including the entire process of obtaining the vital signs: spatial pooling, pre-processing, the noise reduction algorithm, FFT, and filtering. Processing all components at the same time takes 50% less time than processing each component individually, as shown in Figure 39.
Table 1.
Previous systems acquiring only HR (BPM) (lab-based).
| Year | Author | Participants | SD 1 | RMSE 1 | R 1 |
|---|---|---|---|---|---|
| 2010 | Poh et al. [19] (sitting still) | 12 | 2.29 | 2.29 | 0.98 |
| 2010 | Poh et al. [19] (with slight movement) | 12 | 4.59 | 4.36 | 0.95 |
| 2010 | Poh et al. [19] reported by Hassan et al. [20] | 12 | 12.82 | 21.08 | 0.34 |
| 2010 | Poh et al. [19] reported by Waqar et al. [21] | 12 | 14.57 | 17.70 | 0.33 |
| 2011 | Poh et al. [22] | 12 | 0.83 | 2.29 | 1.00 |
| 2011 | Poh et al. [22] reported by Hassan et al. [20] | 20 | 12.66 | 14.01 | 0.44 |
| 2011 | Poh et al. [22] reported by Waqar et al. [21] | 5 | 18.12 | 18.02 | 0.14 |
| 2013 | Monkaresi et al. [23] (ICA) | 18 | 25.54 | 35.31 | 0.53 |
| 2013 | Monkaresi et al. [23] (ICA+KNN) | 18 | 4.33 | 4.33 | 0.97 |
| 2013 | Monkaresi et al. [23] (ICA+KNN+Regression) | 18 | 13.70 | 13.69 | 0.58 |
| 2014 | Li et al. [24] VideoHR database | 10 | 0.72 | 1.27 | 0.99 |
| 2014 | Li et al. [24] MAHNOB-HCI database | 27 | −3.30 | 7.62 | 0.81 |
| 2014 | Li et al. [24] reported by Hassan et al. [20] | 20 | 9.53 | 12.47 | 0.53 |
| 2015 | Kumar et al. [25] (still) | 12 | - | 15.74 | - |
| 2015 | Kumar et al. [25] (reading) | 5 | - | 55.34 | - |
| 2015 | Kumar et al. [25] (watching video) | 5 | - | 97.51 | - |
| 2015 | Kumar et al. [25] (talking) | 5 | - | 67.08 | - |
| 2022 | Zheng et al. [26] (low illumination) | 40 | 5.64 | 7.63 | 0.85 |
| 2022 | Zheng et al. [26] (average illumination) | 40 | 4.55 | 6.28 | 8.75 |
| 2022 | Zheng et al. [26] (high illumination) | 40 | 3.54 | 5.09 | 0.86 |
| 2022 | Zheng et al. [26] (unbalanced illumination) | 40 | 4.96 | 7.33 | 0.84 |
| 2022 | Zheng et al. [26] (slight head movement) | 40 | 5.95 | 7.03 | 0.85 |
Table 2.
Previous systems acquiring only SpO2.
| Year | Author | Participants | RMSE 2 |
|---|---|---|---|
| 2021 | Mathew et al. [27] (Model1 PD) | 14 | 3.07 |
| 2021 | Mathew et al. [27] (Model1 PU) | 14 | 2.16 |
| 2022 | Casalino et al. [28] (still) | 10 | 1.879 |
| 2022 | Casalino et al. [28] (talking) | 10 | 1.188 |
| 2022 | Casalino et al. [28] (slight rotation) | 10 | 1.881 |
| 2022 | Casalino et al. [28] (some rotation) | 10 | 1.063 |
| 2016 | Van Gastel et al. [29] (still) | 4 | 1.33 |
| 2016 | Van Gastel et al. [29] (some movement) | 4 | 1.64 |
Table 3.
Participant Information.
| Description | Total | Percentage |
|---|---|---|
| Participant’s Country | 40 | 100% |
| United Kingdom | 23 | 57.5% |
| Pakistan | 16 | 40% |
| Malta | 1 | 2.5% |
| Participant’s Gender | 40 | 100% |
| Female | 25 | 62.5% |
| Male | 15 | 37.5% |
| Participant’s Age | 40 | 100% |
| 18–30 | 24 | 60% |
| 30–40 | 8 | 20% |
| 40–50 | 5 | 12.5% |
| 51–60 | 1 | 2.5% |
| 61 or above | 2 | 5% |
| Participant’s Skin Pigmentation | 40 | 100% |
| White | 21 | 52.5% |
| Asian White | 1 | 2.5% |
| Brown | 14 | 35% |
| Darker | 4 | 10% |
| Black | 0 | 0% |
| Participant’s Ethnicity | 40 | 100% |
| European | 21 | 52.5% |
| Asian (South) | 18 | 45% |
| Asian (Other) | 1 | 2.5% |
| Participants asked to repeat research study from Europe | 3 | 7.5% |
| Participants asked to repeat research study from Asia | 3 | 7.5% |
| Participants consented but did not participate | 3 | 7.5% |
| Participants data acquisition stopped due to health concern | 1 | 2.5% |
| Participants consented to the research study (including repeat participants) | 40 | 100% |
| Total participant’s data analysed (including repeated studies) | 40 | 100% |
Table 4.
Pre-processing Technique Combinations.
| Pre-Processing Type | Description |
|---|---|
| Type 1 | No processing |
| Type 2 | Only normalise signal |
| Type 3 | Interpolate, apply Hamming, smooth, median filter, and normalise |
| Type 4 | Detrend, interpolate, apply Hamming, smooth, median filter, and normalise |
| Type 5 | Interpolate, apply Hamming, and normalise |
| Type 6 | Detrend, interpolate, apply Hamming, and normalise |
| Type 7 | Detrend, upsample, interpolate, apply Hamming, and normalise |
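As an illustration of how the steps in Table 4 compose, the sketch below implements pre-processing Type 4 (detrend, interpolate, apply Hamming, smooth, median filter, and normalise) with common NumPy/SciPy operations; the resampling rate, smoothing length, and median-filter kernel are illustrative assumptions, not the ARPOS parameters.

```python
import numpy as np
from scipy.signal import detrend, medfilt

def preprocess_type4(timestamps, samples, fs=30, smooth_len=5, med_len=5):
    """Sketch of pre-processing Type 4 from Table 4 on one ROI signal."""
    t = np.asarray(timestamps, dtype=float)
    x = np.asarray(samples, dtype=float)

    x = detrend(x)                                   # remove linear trend
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)     # uniform time grid
    x = np.interp(t_uniform, t, x)                   # interpolate onto the grid
    x = x * np.hamming(len(x))                       # apply Hamming window
    x = np.convolve(x, np.ones(smooth_len) / smooth_len, mode="same")  # smooth
    x = medfilt(x, kernel_size=med_len)              # median filter
    return (x - x.mean()) / (x.std() + 1e-12)        # normalise
```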
Table 5.
ARPOS system error measures for vitals over all participants.
| Vital Type | State Type | | |
|---|---|---|---|
| HR (BPM) | Resting state | ±0.5 | ±5.5 |
| HR (BPM) | Active state | ±1.88 | ±9.3 |
| SpO2 (%) | Resting state | ±2 | ±2 |
| SpO2 (%) | Active state | ±2 | ±2 |
Table 6.
HR evaluation measures from the ARPOS system.
| Analysis Type | SD(SE) 4 | RMSE 4 | r 4 |
|---|---|---|---|
| All participants, resting with FastICA | 0.018 | 7.8 | 0.85 |
| All participants, active with PCAICA | 0.014 | 15 | 0.75 |
| White participants, resting with FastICA | 0.015 | 6.5 | 0.87 |
| White participants, resting with PCAICA | 0.015 | 6.7 | 0.86 |
| White participants, resting with PCA | 0.017 | 7.53 | 0.81 |
| Darker participants, resting with FastICA | 0.0289 | 9.1 | 0.78 |
| Darker participants, resting with PCAICA | 0.027 | 12.32 | 0.64 |
| Darker participants, resting with PCA | 0.032 | 12.97 | 0.73 |
| White participants, active with FastICA | 0.018 | 18.5 | 0.72 |
| Darker participants, active with FastICA | 0.028 | 17.9 | 0.73 |
| White participants, active with PCAICA | 0.017 | 15.45 | 0.77 |
| Darker participants, active with PCAICA | 0.025 | 14.59 | 0.71 |
Table 7.
SpO2 evaluation measures from the ARPOS system.
| Analysis Type | RMSE 5 |
|---|---|
| All participants (resting state) using FastICA | 2.5 |
| All participants (active state) using PCAICA | 2.5 |
| White participants from Europe (resting state) using FastICA | 2.5 |
| Darker participants from Asia (resting state) using PCAICA | 2.27 |
| White participants from Europe (active state) using PCAICA | 2.3 |
| Darker participants from Asia (active state) using PCAICA | 2.7 |