Article

Application of Feedforward and Recurrent Neural Networks for Fusion of Data from Radar and Depth Sensors Applied for Healthcare-Oriented Characterisation of Persons’ Gait

Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Radioelectronics and Multimedia Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
Sensors 2023, 23(3), 1457; https://doi.org/10.3390/s23031457
Submission received: 7 December 2022 / Revised: 22 January 2023 / Accepted: 26 January 2023 / Published: 28 January 2023

Abstract
In this paper, the usability of feedforward and recurrent neural networks for the fusion of data from impulse-radar sensors and depth sensors, in the context of healthcare-oriented monitoring of elderly persons, is investigated. Two methods of data fusion are considered, viz., one based on a multilayer perceptron and one based on a nonlinear autoregressive network with exogenous inputs. These two methods are compared with a reference method with respect to their capacity for decreasing the uncertainty of estimation of a monitored person's position, as well as the uncertainty of estimation of several parameters enabling medical personnel to make useful inferences on the health condition of that person, viz., the number of turns made during walking, the travelled distance, and the mean walking speed. Both artificial neural networks were trained on synthetic data. The numerical experiments show the superiority of the method based on the nonlinear autoregressive network with exogenous inputs. This may be explained by the fact that, for this type of network, the prediction of the person's position at each time instant is based on the positions of that person at the previous time instants.

1. Introduction

The life expectancy at birth, estimated for the global population in 2019, was ca. 73 years; it has been rising over the last decades and is predicted to reach 77 years by the second half of the twenty-first century. At the same time, the global fertility rate (i.e., the number of live births per woman over a lifetime) is decreasing [1]. As a consequence, the global population is ageing: the share of the European and North American population aged at least 65 years is expected to reach 25% in 2050. Hence the growing importance of research on new technologies that could be employed in monitoring systems supporting care services for elderly persons. Falls are among the most frequent reasons for the admission and long-term hospital stay of elderly persons [2]; therefore, the monitoring systems developed for the sake of those persons should enable the prediction and prevention of dangerous events, such as a person's fall and a harmful long lie after the fall, as well as the detection of those events [3,4]. The relevance of features related to gait analysis in the monitoring of elderly persons, and in particular in fall prevention, has been emphasised in several papers [5,6,7,8].
Elderly persons are often reluctant to use the existing monitoring techniques because those techniques may infringe on their privacy or require the constant wearing of additional devices [9,10]. As a result, two relatively new non-invasive and non-intrusive monitoring techniques are attracting the growing attention of researchers, viz., techniques based on depth sensors [11,12,13,14,15,16,17,18,19,20] and radar sensors [21,22,23,24,25,26,27,28,29,30,31,32].
A system based on the impulse-radar sensors and depth sensors may not only be used for localisation of a monitored person in their apartment, but also for estimation of several healthcare-related parameters, enabling medical personnel to make useful inferences on the health condition of that person [33,34]. These parameters include, but are not limited to: the estimates of the time spent in motion and travelled distance (both helpful in assessing the level of physical activity), the number of turns made during motion (considered an indicator of mental health condition [35]) and the mean walking speed (considered highly informative with respect to the overall health status [36]).
It has been recently shown by the author and his collaborators that the fusion of measurement data from the impulse-radar sensors and depth sensors may enable reliable estimation of the monitored person’s position [37], and that a multi-layer perceptron, trained on synthetic data, may be effectively used for that purpose [38]. In the case of the monitoring of elderly persons, training of neural networks may be a particularly challenging task: involving elderly persons in the experimentation aimed at the acquisition of the reference data could be problematic both for medical and ethical reasons. The use of synthetic data for training of these networks may solve this problem.
In this paper, the idea of applying artificial neural networks, trained on synthetic data, for fusion of the measurement data from the impulse-radar sensors and depth sensors, is further explored. The multilayer perceptron treats all the measurement data independently and does not take into account the movement history, i.e., the dependence of the current position and direction of the movement on the past position and direction of the movement. This characteristic of a movement may be exploited in the process of data fusion by means of a recurrent neural network, which takes into account the previous values of its output for predicting the current value of its output.
The novelty of the research results presented in this paper consists in the application of a new method, based on a nonlinear autoregressive network with exogenous inputs, for the fusion of data from impulse-radar sensors and depth sensors, as well as in a systematic comparison of that method with a method based on a multilayer perceptron and with one reference method, within a comprehensive programme of experimentation based on real-world data, involving:
  • The localisation of a person walking around the monitored area according to various predefined movement scenarios.
  • The estimation of several parameters, carrying information important for medical experts, on the basis of the estimated movement trajectories.
This programme included, in particular, experiments aimed at the investigation of the influence of the obstacles, occluding a monitored person, and the walking speed of that person on the accuracy of the estimation of the parameters.

2. Compared Methods of Data Fusion

2.1. Method Based on Nonlinear Autoregressive Network with Exogenous Inputs

This method (called NARX method hereinafter) is based on a nonlinear autoregressive network with exogenous inputs, with the following structure (Figure 1a):
  • Four input neurons—the input data represent two pairs of the x-y coordinates acquired by means of the impulse-radar sensor and depth sensor;
  • Six neurons in a single hidden layer;
  • Two output neurons—the output data represent one pair of fused x-y coordinates; the output values, delayed by two time instants, are fed back to the hidden layer.
The implementation of the neural network has been based on the procedures available in the MATLAB Deep Learning Toolbox [39].
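While the network itself has been implemented in MATLAB, its closed-loop operation can be illustrated with a short sketch. The following Python/NumPy fragment is a minimal illustration, not the author's implementation: the layer sizes and the two-instant output feedback follow the description above, whereas the hyperbolic-tangent activation, the linear output layer, and the random placeholder weights (standing in for trained parameters) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(6, 4))   # hidden-layer weights for the 4 exogenous inputs
W_fb = rng.normal(size=(6, 4))   # hidden-layer weights for 2 outputs x 2 delays
b_h = np.zeros(6)
W_out = rng.normal(size=(2, 6))  # linear output layer
b_o = np.zeros(2)

def narx_fuse(radar_xy, depth_xy):
    """Closed-loop pass over aligned radar/depth sequences of shape (N, 2)."""
    fused = np.empty_like(radar_xy, dtype=float)
    y_prev1 = np.zeros(2)        # output delayed by one time instant
    y_prev2 = np.zeros(2)        # output delayed by two time instants
    for n in range(len(radar_xy)):
        u = np.concatenate([radar_xy[n], depth_xy[n]])   # exogenous inputs
        fb = np.concatenate([y_prev1, y_prev2])          # delayed outputs
        h = np.tanh(W_in @ u + W_fb @ fb + b_h)          # hidden layer
        fused[n] = W_out @ h + b_o                       # fused x-y estimate
        y_prev2, y_prev1 = y_prev1, fused[n]
    return fused
```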
The artificial neural network was trained on synthetic data generated according to the methodology described in detail in [38]. The synthetic data representative of the measurement data acquired by means of the impulse-radar sensors were corrupted with zero-mean red noise and smoothed using a moving-average filter. The generation of the synthetic data representative of the measurement data acquired by means of the depth sensor involved the modelling of the silhouette of a moving person by means of an ellipse following a sine-shaped trajectory oscillating around the reference trajectory; the generation of those data took into account the fact that the depth sensors can "see" only one side of the body.
Four different reference trajectories, presented in Figure 2, were used for generation of the synthetic data. For each trajectory, forty realisations of the data were generated for each type of sensor, but in the case of a depth sensor, twenty of those realisations were fragmented to represent the corruption of the measurement data due to the occlusion. The data acquisition rate for the depth sensor and for the impulse-radar sensors was set to 10 Hz. All the generated trajectories are presented in Figure 3.
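A hypothetical sketch of this generation procedure is given below; the noise level, the moving-average window, and the oscillation amplitude and period are illustrative assumptions (the actual values follow [38]), and the oscillation of the silhouette centre is simplified to a fixed-axis sine.

```python
import numpy as np

def synth_radar(ref_xy, sigma=0.05, win=5, rng=None):
    """Corrupt a reference trajectory (N, 2) with zero-mean red noise
    and smooth it with a moving-average filter."""
    rng = rng or np.random.default_rng()
    red = np.cumsum(rng.normal(0.0, sigma, size=ref_xy.shape), axis=0)
    red -= red.mean(axis=0)                         # enforce zero mean
    noisy = ref_xy + red
    kernel = np.ones(win) / win
    return np.column_stack([np.convolve(noisy[:, k], kernel, mode="same")
                            for k in range(2)])

def synth_depth(ref_xy, amp=0.10, period=1.0, fs=10.0):
    """Model the silhouette centre as oscillating sinusoidally around the
    reference trajectory (a simplified stand-in for the ellipse model)."""
    t = np.arange(len(ref_xy)) / fs
    offset = amp * np.sin(2.0 * np.pi * t / period)
    return ref_xy + np.column_stack([offset, np.zeros_like(offset)])

def fragment(xy, start, stop):
    """Represent occlusion by removing a contiguous fragment of the data."""
    return np.delete(xy, slice(start, stop), axis=0)
```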

2.2. Method Based on Multilayer Perceptron

This method (called MLP method hereinafter), introduced by the author in [38], is based on a multilayer perceptron with four input neurons, eight neurons in a single hidden layer, and two output neurons (Figure 1b).
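For contrast with the NARX sketch in Section 2.1, a pointwise fusion step of such a perceptron may be sketched as follows (again with an assumed activation function and placeholder weights); note the absence of any feedback, so each time instant is processed independently.

```python
def mlp_fuse(radar_xy, depth_xy, W_h, b_h, W_o, b_o):
    """Pointwise 4-8-2 MLP fusion: one fused x-y pair per time instant."""
    u = np.concatenate([radar_xy, depth_xy])   # 4 inputs: radar x-y, depth x-y
    return W_o @ np.tanh(W_h @ u + b_h) + b_o  # W_h: (8, 4), W_o: (2, 8)
```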

2.3. Method Based on Kalman Filter

This method (called KF method hereinafter), described in [37], has been chosen as the reference method because it yielded very good results in an extensive experiment based on real-world data. Moreover, the KF method (like the NARX and MLP methods) is a method of data-point fusion, i.e., it is based on the assumption that the fusion of the data acquired by means of the impulse-radar sensors and the data acquired by means of the depth sensor is performed whenever new data points are available.
This method is derived from a linear transition equation modelling the movement of a monitored person walking with nearly constant velocity, viz., with a small random acceleration between two consecutive time instants [40]:

$$\mathbf{x}_n = \mathbf{F}_n \mathbf{x}_{n-1} + \mathbf{\Gamma}_n \boldsymbol{\alpha}_n \tag{1}$$

where:
  • $\mathbf{x}_n = [x_n \;\; x_n^{(1)} \;\; y_n \;\; y_n^{(1)}]^{\mathrm{T}}$ is a state vector representative of the two-dimensional coordinates $(x_n, y_n)$, corresponding to a time instant $t_n$, and the velocities along these dimensions $(x_n^{(1)}, y_n^{(1)})$;
  • $\mathbf{F}_n$ and $\mathbf{\Gamma}_n$ are the matrices of the form:

$$\mathbf{F}_n = \begin{bmatrix} 1 & \Delta_n & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta_n \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{\Gamma}_n = \begin{bmatrix} \Delta_n^2/2 & 0 \\ \Delta_n & 0 \\ 0 & \Delta_n^2/2 \\ 0 & \Delta_n \end{bmatrix} \tag{2}$$

    with $\Delta_n$ being the time interval between the time instants $t_{n-1}$ and $t_n$;
  • $\boldsymbol{\alpha}_n = [\alpha_{x,n} \;\; \alpha_{y,n}]^{\mathrm{T}}$ is a vector modelling the acceleration, whose elements are assumed to be realisations of a zero-mean bivariate normal distribution with a known covariance matrix $\mathbf{\Sigma}_\alpha = \operatorname{diag}\{\sigma_{\alpha,x}^2, \sigma_{\alpha,y}^2\}$.
The output equation, complementing the transition Equation (1), models the relationship between the vector of observations $\tilde{\mathbf{z}}_n$ and the state vector $\mathbf{x}_n$:

$$\tilde{\mathbf{z}}_n = \mathbf{H} \mathbf{x}_n + \boldsymbol{\eta}_n \tag{3}$$

where:
  • $\mathbf{H}$ is an observation matrix of the form:

$$\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{4}$$

  • $\boldsymbol{\eta}_n = [\eta_{x,n} \;\; \eta_{y,n}]^{\mathrm{T}}$ is a vector representative of the observation noise corrupting the data, whose elements are assumed to be realisations of a zero-mean bivariate normal distribution with a known covariance matrix $\mathbf{\Sigma}_{\eta,n}$.
Under the above-formulated assumptions, the radar data and the depth data can be fused by means of a Kalman filter (cf. the references [41,42,43]) performing, for each time instant, the following sequence of operations [40,44]:
1. Determination of the pre-estimate $\hat{\mathbf{x}}_n^{\mathrm{pre}}$ of the state vector and the pre-estimate $\hat{\mathbf{P}}_n^{\mathrm{pre}}$ of its covariance matrix:

$$\hat{\mathbf{x}}_n^{\mathrm{pre}} = \mathbf{F}_n \hat{\mathbf{x}}_{n-1} \tag{5}$$

$$\hat{\mathbf{P}}_n^{\mathrm{pre}} = \mathbf{F}_n \hat{\mathbf{P}}_{n-1} \mathbf{F}_n^{\mathrm{T}} + \mathbf{Q}_n \tag{6}$$

where:

$$\mathbf{Q}_n = \begin{bmatrix} \tfrac{1}{4}\Delta_n^4 \sigma_{\alpha,x}^2 & \tfrac{1}{2}\Delta_n^3 \sigma_{\alpha,x}^2 & 0 & 0 \\ \tfrac{1}{2}\Delta_n^3 \sigma_{\alpha,x}^2 & \Delta_n^2 \sigma_{\alpha,x}^2 & 0 & 0 \\ 0 & 0 & \tfrac{1}{4}\Delta_n^4 \sigma_{\alpha,y}^2 & \tfrac{1}{2}\Delta_n^3 \sigma_{\alpha,y}^2 \\ 0 & 0 & \tfrac{1}{2}\Delta_n^3 \sigma_{\alpha,y}^2 & \Delta_n^2 \sigma_{\alpha,y}^2 \end{bmatrix} \tag{7}$$

2. Calculation of the estimate $\hat{\mathbf{z}}_n$ of the observation vector:

$$\hat{\mathbf{z}}_n = \mathbf{H} \hat{\mathbf{x}}_n^{\mathrm{pre}} \tag{8}$$

3. Calculation of the so-called innovation vector:

$$\boldsymbol{\zeta}_n = \tilde{\mathbf{z}}_n - \hat{\mathbf{z}}_n \tag{9}$$

and the estimate of its covariance matrix:

$$\hat{\mathbf{\Sigma}}_{\zeta,n} = \mathbf{H} \hat{\mathbf{P}}_n^{\mathrm{pre}} \mathbf{H}^{\mathrm{T}} + \mathbf{\Sigma}_{\eta,n} \tag{10}$$

for each vector of data $\tilde{\mathbf{z}}_n$;
4. Calculation of the final estimate of the state vector:

$$\hat{\mathbf{x}}_n = \hat{\mathbf{x}}_n^{\mathrm{pre}} + \mathbf{G}_n \boldsymbol{\zeta}_n \tag{11}$$

and the final estimate of its covariance matrix:

$$\hat{\mathbf{P}}_n = (\mathbf{I} - \mathbf{G}_n \mathbf{H}) \hat{\mathbf{P}}_n^{\mathrm{pre}} \tag{12}$$

where $\mathbf{G}_n$ is the Kalman-gain matrix determined according to the formula:

$$\mathbf{G}_n = \hat{\mathbf{P}}_n^{\mathrm{pre}} \mathbf{H}^{\mathrm{T}} \hat{\mathbf{\Sigma}}_{\zeta,n}^{-1} \tag{13}$$
It is worth noting that, in this method, the radar data and depth data are acquired asynchronously; therefore, an observation vector $\tilde{\mathbf{z}}_n$ may be representative of either the radar data or the depth data, and is always associated with a corresponding known covariance matrix $\mathbf{\Sigma}_{\eta,n}$ of the noise corrupting those data. The final estimate $\hat{\mathbf{x}}_n$ of the state vector is the result of the data fusion at every time instant $t_n$.
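Equations (5)–(13) translate directly into code. The following Python/NumPy sketch of a single fusion step is an illustration under assumed acceleration variances (the `sa2` placeholders), not the implementation used in [37]:

```python
import numpy as np

def kf_fuse_step(x_prev, P_prev, z, Sigma_eta, dt, sa2=(0.5, 0.5)):
    """One step of the constant-velocity Kalman filter, Equations (5)-(13)."""
    F = np.array([[1.0, dt, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, dt],
                  [0.0, 0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    block = lambda s: s * np.array([[dt**4 / 4.0, dt**3 / 2.0],
                                    [dt**3 / 2.0, dt**2]])
    Q = np.zeros((4, 4))
    Q[:2, :2], Q[2:, 2:] = block(sa2[0]), block(sa2[1])   # Eq. (7)
    x_pre = F @ x_prev                                    # Eq. (5)
    P_pre = F @ P_prev @ F.T + Q                          # Eq. (6)
    zeta = z - H @ x_pre                                  # Eqs. (8)-(9)
    S = H @ P_pre @ H.T + Sigma_eta                       # Eq. (10)
    G = P_pre @ H.T @ np.linalg.inv(S)                    # Eq. (13)
    x_hat = x_pre + G @ zeta                              # Eq. (11)
    P_hat = (np.eye(4) - G @ H) @ P_pre                   # Eq. (12)
    return x_hat, P_hat
```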

3. Extraction of Healthcare-Related Parameters

3.1. Detection of Motion

Motion has been detected by comparing the distance travelled in a given period of time, $T_m$, with a given threshold, $D$. A binary sequence, indicating motion, has been determined according to the formula:

$$b_{m,n} \triangleq \begin{cases} 1 & \text{if } d_n > \dfrac{t_n - t_{n_0}}{T_m}\, D \\ 0 & \text{otherwise} \end{cases} \quad \text{for } n = 1, \ldots, N \tag{14}$$

where:

$$n_0 \triangleq \arg \inf_{\nu} \left\{ \nu \;\middle|\; t_{\nu} \geq t_n - T_m,\; \nu = 0, \ldots, n-1 \right\} \tag{15}$$

$$d_n \triangleq \sqrt{(x_n - x_{n_0})^2 + (y_n - y_{n_0})^2} \tag{16}$$

In the above equations, $\{x_n\}$ and $\{y_n\}$ are the sequences of the coordinates of the monitored person's position, either acquired by means of the sensors or fused by means of the methods described in Section 2. The values of $T_m$ and $D$ should prevent small deviations in the position from being considered as motion; $T_m = 0.45$ s and $D = 0.1$ m have been selected because these values yielded satisfactory results in the experiments.
To reduce the influence of the dispersion of the position estimates on the detection of motion, morphological closing [45] has been performed on the binary sequence $\{b_{m,n}\}$. Because the fused data sequences are acquired by means of sensors with different data acquisition rates, the length of the structuring element used for the morphological closing has been set equal to the number of data points within the time interval $T_m$. The binary sequence obtained by means of the morphological closing, $\{\bar{b}_{m,n}\}$, has been used for the detection of the moments when a monitored person started walking, and the moments when that person stopped walking.
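A minimal sketch of Equations (14)–(16) together with the morphological closing is shown below; the implementation details (SciPy's binary closing, a median-based sampling interval) are assumptions, while the thresholds are those given above.

```python
import numpy as np
from scipy.ndimage import binary_closing

def detect_motion(t, x, y, T_m=0.45, D=0.1):
    """Binary motion indicator of Eqs. (14)-(16), followed by closing."""
    b = np.zeros(len(t), dtype=bool)
    for n in range(1, len(t)):
        idx = np.where(t[:n] >= t[n] - T_m)[0]
        if idx.size == 0:
            continue
        n0 = idx[0]                                   # Eq. (15)
        d = np.hypot(x[n] - x[n0], y[n] - y[n0])      # Eq. (16)
        b[n] = d > (t[n] - t[n0]) / T_m * D           # Eq. (14)
    # structuring element spanning the samples within T_m
    se_len = max(1, int(round(T_m / np.median(np.diff(t)))))
    return binary_closing(b, structure=np.ones(se_len, dtype=bool))
```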

3.2. Estimation of Walking Direction and Moment of Turning

The walking direction has been estimated by computing the four-quadrant inverse tangent for pairs of consecutive positions $(x_n, y_n)$ and $(x_{n-1}, y_{n-1})$ of the walking person. It is assumed that the codomain of the function is $[-\pi, \pi]$, i.e., $[-180°, 180°]$.
A turn can be detected by comparing the absolute value of the sum of changes in the walking direction in a given period of time, $T_t$, with a given threshold, $\Phi$. A binary sequence, indicating turning, has been computed according to the formula:

$$b_{t,n} \triangleq \begin{cases} 1 & \text{if } \Phi < \left| \displaystyle\sum_{\nu = n_0 + 2}^{n} \left[ \arctan\!\left( \frac{x_{\nu} - x_{\nu-1}}{y_{\nu} - y_{\nu-1}} \right) - \arctan\!\left( \frac{x_{\nu-1} - x_{\nu-2}}{y_{\nu-1} - y_{\nu-2}} \right) \right] \right| < \pi \\ 0 & \text{otherwise} \end{cases} \quad \text{for } n = 1, \ldots, N \tag{17}$$

In the above formula, $\{x_n\}$ and $\{y_n\}$ are the sequences of the coordinates of the monitored person's position, and $n_0$ is defined as in Equation (15) with $T_m$ replaced by $T_t$. The values of $T_t$ and $\Phi$ should be optimised to prevent small deviations in the position from being considered as turns; $T_t = 1$ s and $\Phi \approx 1.05$ rad ($60°$) have been selected because these values yielded satisfactory results in the experiments.
To reduce the influence of the dispersion of the position estimates on the detection of turns, morphological closing has been performed on the binary sequence $\{b_{t,n}\}$. Because the fused data sequences are acquired by means of sensors with different data acquisition rates, the length of the structuring element used for the morphological closing has been set equal to the number of data points within the time interval $T_t$. The binary sequence obtained by means of the morphological closing, $\{\bar{b}_{t,n}\}$, has been used for the detection of the moments when a monitored person started turning, and the moments when that person stopped turning.
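A corresponding sketch of the turn indicator of Equation (17) is given below, using NumPy's four-quadrant inverse tangent; the wrapping of the direction changes to $[-\pi, \pi)$ is an implementation assumption, and the same morphological closing as in Section 3.1 can be applied to the result.

```python
def detect_turns(t, x, y, T_t=1.0, Phi=np.deg2rad(60.0)):
    """Binary turning indicator of Eq. (17)."""
    heading = np.arctan2(np.diff(y), np.diff(x))      # walking direction
    dphi = np.diff(heading)
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi     # wrap to [-pi, pi)
    b = np.zeros(len(t), dtype=bool)
    for n in range(2, len(t)):
        idx = np.where(t[:n] >= t[n] - T_t)[0]
        if idx.size == 0:
            continue
        total = abs(dphi[idx[0]:n - 1].sum())         # changes for nu = n0+2..n
        b[n] = Phi < total < np.pi
    return b
```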

3.3. Estimation of Travelled Distance

The travelled distance has been calculated by summing up the distances between the consecutive locations of the walking person:
$$s = \sum_{n=2}^{N} \sqrt{(x_n - x_{n-1})^2 + (y_n - y_{n-1})^2} \tag{18}$$

3.4. Estimation of Walking Speed

The estimate of the average walking speed has been obtained by dividing the estimate of the travelled distance, determined according to Equation (18), by the estimate of the time spent in motion, determined according to Equation (14).
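Both estimates reduce to a few lines of code; the sketch below assumes the coordinate sequences have already been fused and smoothed, and uses the motion indicator from Section 3.1 to measure the time spent in motion.

```python
def travelled_distance(x, y):
    """Eq. (18): sum of distances between consecutive positions."""
    return np.hypot(np.diff(x), np.diff(y)).sum()

def mean_walking_speed(t, x, y, b_motion):
    """Travelled distance divided by the time spent in motion."""
    dt = np.diff(t, prepend=t[0])          # per-sample time increments
    return travelled_distance(x, y) / dt[b_motion].sum()
```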

4. Methodology of Experimentation

4.1. Acquisition of Measurement Data

In the experiments, the raw measurement data were acquired by means of:
  • Two impulse-radar sensors based on the Novelda NVA6201 chip (Novelda, Oslo, Norway), working in the frequency range 6.0–8.5 GHz [46] and having a data acquisition rate of 10 Hz;
  • An infrared depth sensor, being part of the Microsoft Kinect V2 device (Microsoft, Redmond, WA, USA) [47], having a data acquisition rate of 30 Hz.
The experimentation programme comprised two parts (named EXP#1 and EXP#2) involving the monitoring of the movements of a person walking within the area presented in Figure 4a:
  • In experiment EXP#1, a person walked back and forth along a serpentine trajectory, among the obstacles occluding that person (see Figure 4b); 30 walks were performed, and the walking speed was kept constant at v = 0.5 m/s.
  • In experiment EXP#2, a person walked clockwise and counter-clockwise along a rectangle-shaped trajectory (see Figure 4c), at six different values of the walking speed v { 0.5 , 0.6 , 0.7 , 0.8 , 0.9 , 1.0 } m/s. For each value of the walking speed, 20 walks were performed.
Thus, the whole programme of experimentation comprised the acquisition of R = 150 sequences of x-y data. Throughout the experimentation, the constancy of the walking speed was assured by means of a metronome.

4.2. Criteria for Performance Evaluation

4.2.1. Indicators of Uncertainty of Estimation of Monitored Person’s Position

For each sequence of the data representative of the person's position on the x-y plane:

$$\{\hat{x}_{n,r} \mid n = 1, \ldots, N_r\} \quad \text{and} \quad \{\hat{y}_{n,r} \mid n = 1, \ldots, N_r\} \quad \text{for } r = 1, \ldots, R \tag{19}$$

the corresponding sequences of the absolute errors of the position estimation have been computed by subtraction of the reference values:

$$\{\Delta\hat{x}_{n,r} = \hat{x}_{n,r} - \dot{x}_{n,r} \mid n = 1, \ldots, N_r\} \quad \text{and} \quad \{\Delta\hat{y}_{n,r} = \hat{y}_{n,r} - \dot{y}_{n,r} \mid n = 1, \ldots, N_r\} \quad \text{for } r = 1, \ldots, R \tag{20}$$

Finally, the sequences of the absolute errors of the position estimation, viz. $\{\Delta\hat{x}_{n,r}\}$ and $\{\Delta\hat{y}_{n,r}\}$, have been used for determining the $R$ sequences of the position errors $\{\Delta\hat{d}_{n,r}\}$:

$$\left\{ \Delta\hat{d}_{n,r} = \sqrt{(\Delta\hat{x}_{n,r})^2 + (\Delta\hat{y}_{n,r})^2} \;\middle|\; n = 1, \ldots, N_r \right\} \quad \text{for } r = 1, \ldots, R \tag{21}$$

The assessment of the performance of the compared methods of data fusion has been based on the inspection of the empirical cumulative distribution functions $F(\xi)$ [48] characterising the position errors:

$$F(\xi) \triangleq \frac{1}{M} \sum_{m=1}^{M} I\left( \Delta\hat{d}_m \leq \xi \right) \tag{22}$$

where $\{\Delta\hat{d}_m \mid m = 1, \ldots, M\}$ is a sequence composed of all the position errors $\{\Delta\hat{d}_{n,r}\}$, and $I(\cdot)$ is the indicator function taking the value 1 if the condition inside the brackets is met, and 0 otherwise. The following indicators of uncertainty of estimation have been used:
  • The area under the empirical cumulative distribution function $F(\xi)$ in the interval $\xi \in [0, 1]$ m, denoted with $A_{\mathrm{ECDF}}$ and taking values from the interval $[0, 1]$;
  • The mean position error (MEAE);
  • The median position error (MEDE);
  • The maximum position error (MAXE);
  • The standard deviation of the position errors (STDE).
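As shown in the sketch below, the $A_{\mathrm{ECDF}}$ indicator can be approximated by evaluating the empirical CDF of Equation (22) on a uniform grid and integrating numerically; the grid resolution is an assumption.

```python
def a_ecdf(errors, xi_max=1.0, grid=1000):
    """Area under the empirical CDF of the position errors on [0, xi_max] m,
    normalised so that the result lies in [0, 1]."""
    errors = np.asarray(errors)
    xi = np.linspace(0.0, xi_max, grid)
    ecdf = (errors[None, :] <= xi[:, None]).mean(axis=1)   # Eq. (22) on a grid
    return np.trapz(ecdf, xi) / xi_max
```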

4.2.2. Indicators of Uncertainty of Estimation of Healthcare-Related Parameters

The errors corrupting the estimates of the x-y coordinates can significantly decrease the accuracy of the estimates of the healthcare-related parameters. To reduce the effect of the propagation of the errors of position estimation, the sequences of the x-y coordinates have been smoothed by means of a method based on a moving average filter.
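A minimal sketch of such a moving-average smoother (with an assumed window length) is:

```python
def smooth(seq, win=5):
    """Moving-average smoothing applied to a coordinate sequence."""
    kernel = np.ones(win) / win
    return np.convolve(seq, kernel, mode="same")
```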
The following indicators of uncertainty of estimation have been used:
  • The mean error determined with respect to the corresponding reference value (ME);
  • The standard deviation of that error (SE).

5. Results of Experimentation

5.1. Uncertainty of Estimation of Monitored Person’s Position

The estimates of the two-dimensional movement trajectories obtained in experiment EXP#1 are shown in Figure 5; the corresponding empirical cumulative distribution functions are provided in Figure 6. The values of the uncertainty indicators described in Section 4.2.1 are presented in Table 1. The analogous set of results, obtained for experiment EXP#2, is provided in Figure 7, Figure 8 and Table 2.
The analysis of the presented results leads to the following conclusions:
  • The fused estimates of the person’s position are characterised by lower bias and dispersion when compared with the radar-data-based and depth-data-based estimates; moreover, the fused estimates are only slightly affected by the obstacles occluding a person. It should be stressed that in the case of the methods based on the artificial neural networks, the significant increase in the accuracy of the position estimation has been achieved despite the fact that the artificial neural networks were trained only on the synthetic data.
  • In experiment EXP#1, where the monitored person was occluded by the obstacles, the more accurate estimates of the trajectories have been obtained by means of the NARX method than by means of the MLP method. This result can be explained by the fact that in the case of the recurrent neural network, the prediction of the person’s position at each time instant is based on the past position of that person. When the monitored person “disappears” behind the obstacle, larger measurement errors may corrupt the radar data and the depth data, but the recurrent neural network may react to such sudden changes and mitigate their influence on the result of the data fusion.
  • As regards the values of the uncertainty indicators calculated on the basis of the fused data, the best overall results have been obtained for the NARX method: this is reflected in the lowest values of the mean and median errors, as well as the highest values of the $A_{\mathrm{ECDF}}$ indicator. Even though the values of the maximum error and the standard deviation of the errors are slightly greater than for the other methods, this does not affect the overall performance of the NARX method.
While the differences among the values of the $A_{\mathrm{ECDF}}$ indicator may seem insignificant, it is worth noting that a decrease in its value by only a few hundredths may indicate quite significant distortion of the estimated trajectories. In Figure 9, the estimates of the movement trajectories from EXP#1, obtained on the basis of the data fused by means of the NARX method and by means of the WINFNS method (based on the minimisation of a weighted infinity norm [37]), are presented; for the NARX method (Figure 9a), the value of the $A_{\mathrm{ECDF}}$ indicator is 0.85, while for the WINFNS method (Figure 9b), it is 0.83. Moreover, even a small reduction of the uncertainty of the position estimates may have a significant impact when those estimates are used for determining quantities carrying information important for medical and healthcare services.
Finally, it has been observed during the experiments that varying the number of neurons in the hidden layer from 6 to 10 does not significantly influence the results of the data fusion; however, in the case of the nonlinear autoregressive network with exogenous inputs, increasing the number of hidden neurons may make the training more challenging because more complex networks are prone to overfitting.

5.2. Uncertainty of Estimation of Healthcare-Related Parameters

5.2.1. Experiment EXP#1

The values of the uncertainty indicators, obtained on the basis of the radar data, depth data, and fused data, acquired in EXP#1, are presented in Table 3.
If the EXP#1 results, obtained for the radar data and for the depth data, are considered, it can be noticed that:
  • In the case of the radar data, the number of turns is significantly underestimated. This may be explained by the smoothing of the radar data during their preprocessing: as a result, two consecutive turns are often treated as one.
  • In the case of the depth data, the travelled distance is significantly underestimated. This may be explained by the obstacles occluding the monitored person and making the tracking impossible.
  • The radar-data-based and depth-data-based estimates of the mean walking speed are similarly accurate.
If the EXP#1 results obtained for the fused data are considered, it can be noticed that:
  • The values of the uncertainty indicators are generally lower than those obtained for the radar data and for the depth data.
  • The estimates of the number of turns, obtained for the MLP method and the NARX method, are considerably more accurate than the estimates obtained for the KF method—in the case of the KF method, the number of turns for each realisation of the scenario has been overestimated by almost one. This may be explained by the corruption of the depth data related to the "disappearing" of the monitored person behind the obstacles and the inability of the KF method to mitigate this phenomenon.

5.2.2. Experiment EXP#2

The dependence of the values of the uncertainty indicators on the walking speed is presented in Figure 10, Figure 11 and Figure 12. The analysis of those results leads to the following conclusions:
  • In the case of the estimates of the number of turns, the values of the ME indicator, determined on the basis of the radar data, are generally lower than zero—meaning that, regardless of the walking speed, the number of turns is underestimated; on the other hand, the values of the ME indicator, determined on the basis of the data fused by the KF method, are greater than zero, which means that the number of turns is overestimated. The best results are obtained for the depth data and for the data fused by means of the MLP method and the NARX method—the values of the ME indicator, determined on the basis of those data, are not significantly affected by the walking speed of the person.
  • In the case of the estimates of the travelled distance and the estimates of the walking speed, the values of the ME indicator, determined on the basis of the data fused by means of the KF method, are slightly better than the values of this indicator, determined on the basis of the data fused by means of the other methods. Moreover, for all the methods of data fusion, the values of the ME indicator decrease with the increase in the walking speed of the monitored person. This phenomenon is caused by the deviations of the estimates of the walking trajectory around the corners of that trajectory—the greater the walking speed, the smoother the trajectory becomes and the smaller the estimates of the distance and walking speed.

6. Conclusions

In this paper, the usability of feedforward and recurrent neural networks for the fusion of data from impulse-radar sensors and depth sensors, in the context of healthcare-oriented monitoring of elderly persons, has been investigated. A new method for the fusion of data has been proposed, viz., a method based on a nonlinear autoregressive network with exogenous inputs (NARX method), and compared with a method based on a multilayer perceptron (MLP method) as well as with one reference method based on a Kalman filter (KF method). Both artificial neural networks have been trained on synthetic data. An extensive programme of experimentation, designed for the comparison of all the methods, has involved the localisation of a person within a monitored area, and the estimation of several parameters of gait considered important for medical experts, viz., the number of turns made during walking, the travelled distance, and the mean walking speed.
The results of the experimentation have confirmed that artificial neural networks trained on synthetic data may be effectively used for fusing the data from the impulse-radar sensors and from the depth sensors. The best overall results have been obtained for the NARX method: in the case of the recurrent neural network, the estimate of the position of a person is based on the estimates of that position at previous time instants; therefore, such a network may effectively react to sudden changes in the measurement data, caused by stops and occlusions, and mitigate their influence on the estimated movement trajectory. This cannot be achieved with a feedforward neural network, which, lacking a feedback loop, treats the measurement data independently, without taking into account the movement history.

Funding

This work has been accomplished partially within the project PL12-0001, financially supported by EEA Grants—Norway Grants, and partially within the project 2017/25/N/ST7/00411, financially supported by the National Science Centre, Poland.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. United Nations. World Population Prospects 2019: Highlights; ST/ESA/SER.A/423; United Nations, Department of Economic and Social Affairs, Population Division: New York, NY, USA, 2019. [Google Scholar]
  2. Fuller, G.F. Falls in the elderly. Am. Fam. Physician 2000, 61, 2159–2168. [Google Scholar] [PubMed]
  3. Hamm, J.; Money, A.G.; Atwal, A.; Paraskevopoulos, I. Fall prevention intervention technologies: A conceptual framework and survey of the state of the art. J. Biomed. Inform. 2016, 59, 319–345. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Chaccour, K.; Darazi, R.; Hassani, A.H.E.; Andrès, E. From Fall Detection to Fall Prevention: A Generic Classification of Fall-Related Systems. IEEE Sens. J. 2017, 17, 812–822. [Google Scholar] [CrossRef]
  5. Baldewijns, G.; Luca, S.; Vanrumste, B.; Croonenborghs, T. Developing a system that can automatically detect health changes using transfer times of older adults. BMC Med. Res. Methodol. 2016, 16, 23. [Google Scholar] [CrossRef] [Green Version]
  6. Lusardi, M. Is Walking Speed a Vital Sign? Top. Geriatr. Rehabil. 2012, 28, 67–76. [Google Scholar] [CrossRef]
  7. Stone, E.; Skubic, M.; Rantz, M.; Abbott, C.; Miller, S. Average in-home gait speed: Investigation of a new metric for mobility and fall risk assessment of elders. Gait Posture 2015, 41, 57–62. [Google Scholar] [CrossRef]
  8. Thingstad, P.; Egerton, T.; Ihlen, E.F.; Taraldsen, K.; Moe-Nilssen, R.; Helbostad, J.L. Identification of gait domains and key gait variables following hip fracture. BMC Geriatr. 2015, 15, 150. [Google Scholar] [CrossRef] [Green Version]
  9. Berge, M. Telecare acceptance as sticky entrapment: A realist review. Gerontechnology 2016, 15, 98–108. [Google Scholar] [CrossRef] [Green Version]
  10. Thilo, F.J.S.; Hahn, S.; Halfens, R.J.G.; Schols, J.M.G.A. Usability of a wearable fall detection prototype from the perspective of older people—A real field testing approach. J. Clin. Nurs. 2019, 28, 310–320. [Google Scholar] [CrossRef]
  11. Dubois, A.; Bihl, T.; Bresciani, J.-P. Identifying Fall Risk Predictors by Monitoring Daily Activities at Home Using a Depth Sensor Coupled to Machine Learning Algorithms. Sensors 2021, 21, 1957. [Google Scholar] [CrossRef]
  12. Liao, Y.Y.; Chen, I.H.; Wang, R.Y. Effects of Kinect-based exergaming on frailty status and physical performance in prefrail and frail elderly: A randomized controlled trial. Sci. Rep. 2019, 9, 9353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Clark, R.A.; Mentiplay, B.F.; Hough, E.; Pua, Y.H. Three-dimensional cameras and skeleton pose tracking for physical function assessment: A review of uses, validity, current developments and Kinect alternatives. Gait Posture 2019, 68, 193–200. [Google Scholar] [CrossRef] [PubMed]
  14. Hynes, A.; Czarnuch, S.; Kirkland, M.C.; Ploughman, M. Spatiotemporal Gait Measurement With a Side-View Depth Sensor Using Human Joint Proposals. IEEE J. Biomed. Health Inform. 2021, 25, 1758–1769. [Google Scholar] [CrossRef] [PubMed]
  15. Vilas-Boas, M.D.C.; Rocha, A.P.; Cardoso, M.N.; Fernandes, J.M.; Coelho, T.; Cunha, J.P.S. Supporting the Assessment of Hereditary Transthyretin Amyloidosis Patients Based On 3-D Gait Analysis and Machine Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 1350–1362. [Google Scholar] [CrossRef] [PubMed]
  16. Ferraris, C.; Cimolin, V.; Vismara, L.; Votta, V.; Amprimo, G.; Cremascoli, R.; Galli, M.; Nerino, R.; Mauro, A.; Priano, L. Monitoring of Gait Parameters in Post-Stroke Individuals: A Feasibility Study Using RGB-D Sensors. Sensors 2021, 21, 5945. [Google Scholar] [CrossRef] [PubMed]
  17. Castaño-Pino, Y.J.; González, M.C.; Quintana-Peña, V.; Valderrama, J.; Muñoz, B.; Orozco, J.; Navarro, A. Automatic Gait Phases Detection in Parkinson Disease: A Comparative Study. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Montreal, QC, Canada, 20–24 July 2020; pp. 798–802. [Google Scholar]
  18. Oh, J.; Eltoukhy, M.; Kuenze, C.; Andersen, M.S.; Signorile, J.F. Comparison of predicted kinetic variables between Parkinson’s disease patients and healthy age-matched control using a depth sensor-driven full-body musculoskeletal model. Gait Posture 2020, 76, 151–156. [Google Scholar] [CrossRef]
  19. Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study. Sensors 2020, 20, 5104. [Google Scholar] [CrossRef]
  20. Guffanti, D.; Brunete, A.; Hernando, M. Non-Invasive Multi-Camera Gait Analysis System and its Application to Gender Classification. IEEE Access 2020, 8, 95734–95746. [Google Scholar] [CrossRef]
  21. Li, H.; Mehul, A.; Le Kernec, J.; Gurbuz, S.Z.; Fioranelli, F. Sequential Human Gait Classification With Distributed Radar Sensor Fusion. IEEE Sens. J. 2021, 21, 7590–7603. [Google Scholar] [CrossRef]
  22. Li, X.; Li, Z.; Fioranelli, F.; Yang, S.; Romain, O.; Le Kernec, J. Hierarchical Radar Data Analysis for Activity and Personnel Recognition. Remote Sens. 2020, 12, 2237. [Google Scholar] [CrossRef]
  23. Niazi, U.; Hazra, S.; Santra, A.; Weigel, R. Radar-Based Efficient Gait Classification using Gaussian Prototypical Networks. In Proceedings of the IEEE Radar Conference, Online, 8–14 May 2021. [Google Scholar]
  24. Saho, K.; Sugano, K.; Uemura, K.; Matsumoto, M. Screening of apathetic elderly adults using kinematic information in gait and sit-to-stand/stand-to-sit movements measured with Doppler radar. Health Inform. J. 2021, 27. [Google Scholar] [CrossRef] [PubMed]
  25. Saho, K.; Uemura, K.; Matsumoto, M. Screening of mild cognitive impairment in elderly via Doppler radar gait measurement. IEICE Commun. Express 2020, 9, 19–24. [Google Scholar] [CrossRef] [Green Version]
  26. Seifert, A.-K.; Grimmer, M.; Zoubir, A.M. Doppler Radar for the Extraction Biomechanical Parameters in Gait Analysis. IEEE J. Biomed. Health Inform. 2021, 25, 547–558. [Google Scholar] [CrossRef] [PubMed]
  27. Fioranelli, F.; Kernec, J.L. Radar sensing for human healthcare: Challenges and results. In Proceedings of the IEEE Sensors, Sydney, Australia, 31 October–3 November 2021; pp. 1–4. [Google Scholar]
  28. Bhavanasi, G.; Werthen-Brabants, L.; Dhaene, T.; Couckuyt, I. Patient activity recognition using radar sensors and machine learning. Neural Comput. Appl. 2022, 34, 16033–16048. [Google Scholar] [CrossRef]
  29. Taylor, W.; Dashtipour, K.; Shah, S.A.; Hussain, A.; Abbasi, Q.H.; Imran, M.A. Radar Sensing for Activity Classification in Elderly People Exploiting Micro-Doppler Signatures Using Machine Learning. Sensors 2021, 21, 3881. [Google Scholar] [CrossRef]
  30. Ding, C.; Zhang, L.; Chen, H.; Hong, H.; Zhu, X.; Li, C. Human Motion Recognition With Spatial-Temporal-ConvLSTM Network Using Dynamic Range-Doppler Frames Based on Portable FMCW Radar. IEEE Trans. Microw. Theory Tech. 2022, 70, 5029–5038. [Google Scholar] [CrossRef]
  31. Chen, M.; Yang, Z.; Lai, J.; Chu, P.; Lin, J. A Three-Stage Low-Complexity Human Fall Detection Method Using IR-UWB Radar. IEEE Sens. J. 2022, 22, 15154–15168. [Google Scholar] [CrossRef]
  32. Han, T.; Kang, W.; Choi, G. IR-UWB Sensor Based Fall Detection Method Using CNN Algorithm. Sensors 2020, 20, 5948. [Google Scholar] [CrossRef]
  33. Miękina, A.; Wagner, J.; Mazurek, P.; Morawski, R.Z.; Sudmann, T.T.; Børsheim, I.T.; Øvsthus, K.; Jacobsen, F.F.; Ciamulski, T.; Winiecki, W. Development of software application dedicated to impulse-radar-based system for monitoring of human movements. J. Phys. Conf. Ser. 2016, 772, 012028. [Google Scholar] [CrossRef] [Green Version]
  34. Wagner, J.; Mazurek, P.; Miękina, A.; Morawski, R.Z.; Jacobsen, F.F.; Sudmann, T.T.; Børsheim, I.T.; Øvsthus, K.; Ciamulski, T. Comparison of two techniques for monitoring of human movements. Measurement 2017, 111, 420–431. [Google Scholar] [CrossRef]
  35. Galambos, C.; Skubic, M.; Wang, S.; Rantz, M. Management of dementia and depression utilizing in-home passive sensor data. Gerontechnology 2013, 11, 457–468. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Fritz, S.; Lusardi, M. Walking Speed: The Sixth Vital Sign. J. Geriatr. Phys. Ther. 2009, 32, 2–5. [Google Scholar] [CrossRef] [Green Version]
  37. Mazurek, P.; Wagner, J.; Miękina, A.; Morawski, R.Z. Comparison of sixteen methods for fusion of data from impulse-radar sensors and depth sensors applied for monitoring of elderly persons. Measurement 2020, 154, 107455. [Google Scholar] [CrossRef]
  38. Mazurek, P. Application of Artificial Neural Networks for Fusion of Data from Radar and Depth Sensors Applied for Persons’ Monitoring. J. Phys. Conf. Ser. 2019, 1379, 012051. [Google Scholar] [CrossRef]
  39. MathWorks. MATLAB Documentation: Deep Learning Toolbox. Available online: https://www.mathworks.com/help/deeplearning/index.html (accessed on 9 November 2022).
  40. Bar-Shalom, Y.; Li, X.-R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2001; p. 584. [Google Scholar]
  41. Durrant-Whyte, H. Multi-Sensor Data Fusion; Lecture notes; The University of Sydney: Sydney, Australia, 2001. [Google Scholar]
  42. Raol, J.R. Data Fusion Mathematics: Theory and Practice; CRC Press (Taylor & Francis Group): Boca Raton, FL, USA, 2016. [Google Scholar]
  43. Bar-Shalom, Y.; Li, X.R. Multitarget-Multisensor Tracking: Principles and Techniques; YBS Publishing: Storrs, CT, USA, 1995. [Google Scholar]
  44. Simon, D. Optimal State Estimation; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  45. Serra, J.; Vincent, L. An overview of morphological filtering. Circuits Syst. Signal Process. 1992, 11, 47–108. [Google Scholar] [CrossRef] [Green Version]
  46. Bajurko, P.; Yashchyshyn, Y. Study of Detection Capability of Novelda Impulse Transceiver with External RF Circuit. In Proceedings of the 8th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, Warsaw, Poland, 24–26 September 2015; pp. 1–4. [Google Scholar]
  47. Lachat, E.; Macher, H.; Mittet, M.A.; Landes, T.; Grussenmeyer, P. First experiences with Kinect v2 sensor for close range 3d modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 93–100. [Google Scholar] [CrossRef]
  48. van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
Figure 1. Artificial neural networks used for fusion of data: the nonlinear autoregressive network with exogenous inputs (a) and the multilayer perceptron (b); $x_r$ and $y_r$ denote the coordinates acquired by means of the impulse-radar sensor; $x_d$ and $y_d$ denote the coordinates acquired by means of the depth sensor; $x_f$ and $y_f$ denote the fused coordinates; $h_1, \ldots, h_8$ denote the neurons in the hidden layers; $o_1$ and $o_2$ denote the neurons in the output layers.
Figure 2. Reference trajectories used for generation of the synthetic data.
Figure 3. Synthetic data used for training the neural networks: the radar data (left column), the depth data without occlusion (middle column) and the depth data with occlusion (right column). Each graph for radar data depicts 40 superimposed sequences of synthetic data, while each graph for depth data depicts 20 superimposed sequences of synthetic data.
Figure 4. Experimental setup (a) and the movement scenarios considered in the experiments: the serpentine trajectory used in EXP#1 (b), and the rectangle-shaped trajectory used in EXP#2 (c). The reference points, i.e., the points where marks have been placed on the floor, are indicated with the crosses.
Figure 5. Estimates of the trajectories obtained in experiment EXP#1, on the basis of the radar data (a), the depth data (b), and the fused data (c–e); the black dashed lines denote the reference trajectories.
Figure 6. Zoom on the empirical cumulative distribution functions characterising the position errors, obtained in EXP#1.
Figure 7. Estimates of the trajectories obtained in experiment EXP#2, on the basis of the radar data (a), the depth data (b), and the fused data (c–e); the black dashed lines denote the reference trajectories.
Figure 8. Zoom on the empirical cumulative distribution functions characterising the position errors, obtained in EXP#2.
Figure 9. Estimates of the trajectories obtained in EXP#1, by means of the NARX method (a), and the WINFNS method described in [37] (b); for the NARX method, the value of the $A_{\mathrm{ECDF}}$ indicator is 0.85, while for the WINFNS method, it is 0.83.
Figure 10. Uncertainty indicators characterising the estimates of the numbers of turns, obtained in EXP#2.
Figure 11. Uncertainty indicators characterising the estimates of the travelled distance, obtained in EXP#2.
Figure 12. Uncertainty indicators characterising the estimates of the walking speed, obtained in EXP#2.
Table 1. Values of the uncertainty indicators, calculated on the basis of the position errors, obtained in EXP#1.

| Uncertainty Indicator | Radar Data | Depth Data | MLP  | NARX | KF   |
|-----------------------|------------|------------|------|------|------|
| MEAE [m]              | 0.22       | 0.17       | 0.16 | 0.15 | 0.16 |
| MEDE [m]              | 0.20       | 0.18       | 0.14 | 0.12 | 0.16 |
| MAXE [m]              | 0.63       | 0.41       | 0.63 | 0.67 | 0.70 |
| STDE [m]              | 0.12       | 0.07       | 0.11 | 0.10 | 0.08 |
| $A_{\mathrm{ECDF}}$   | 0.78       | 0.83       | 0.84 | 0.85 | 0.84 |
Table 2. Values of the uncertainty indicators, calculated on the basis of the position errors, obtained in EXP#2.

| Uncertainty Indicator | Radar Data | Depth Data | MLP  | NARX | KF   |
|-----------------------|------------|------------|------|------|------|
| MEAE [m]              | 0.14       | 0.12       | 0.07 | 0.07 | 0.09 |
| MEDE [m]              | 0.13       | 0.12       | 0.07 | 0.07 | 0.08 |
| MAXE [m]              | 0.60       | 0.33       | 0.32 | 0.34 | 0.43 |
| STDE [m]              | 0.08       | 0.05       | 0.04 | 0.05 | 0.06 |
| $A_{\mathrm{ECDF}}$   | 0.86       | 0.88       | 0.93 | 0.93 | 0.91 |
Table 3. Values of the uncertainty indicators obtained in EXP#1.

| Uncertainty Indicator       | Radar Data | Depth Data | MLP   | NARX  | KF    |
|-----------------------------|------------|------------|-------|-------|-------|
| Number of turns: ME         | −6.00      | −0.73      | −0.13 | 0.07  | 0.93  |
| Number of turns: SE         | 1.08       | 0.91       | 1.36  | 1.34  | 1.20  |
| Travelled distance [m]: ME  | −1.43      | −6.86      | −0.74 | −0.79 | −0.53 |
| Travelled distance [m]: SE  | 0.42       | 0.13       | 0.30  | 0.33  | 0.25  |
| Walking speed [m/s]: ME     | −0.03      | −0.04      | −0.02 | −0.02 | −0.01 |
| Walking speed [m/s]: SE     | 0.01       | 0.00       | 0.01  | 0.01  | 0.01  |
