Article

A New Approach to Fall Detection Based on Improved Dual Parallel Channels Convolutional Neural Network

1
College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
2
Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding 071002, China
*
Author to whom correspondence should be addressed.
Sensors 2019, 19(12), 2814; https://doi.org/10.3390/s19122814
Submission received: 5 May 2019 / Revised: 15 June 2019 / Accepted: 21 June 2019 / Published: 24 June 2019
(This article belongs to the Section Biosensors)

Abstract

Falls are the major cause of fatal and non-fatal injury among people aged over 65, and their grave consequences make thorough research on falls necessary. This paper presents a fall detection method using surface electromyography (sEMG) based on an improved dual parallel channels convolutional neural network (IDPC-CNN). The proposed IDPC-CNN model is designed to distinguish falls from daily activities using the spectral features of sEMG. Firstly, the classification accuracies of time-domain features and spectrograms are compared using linear discriminant analysis (LDA), k-nearest neighbor (KNN) and support vector machine (SVM). The results show that spectrograms capture richer pattern information and yield better classification performance. The spectrogram features of sEMG are therefore selected as the input of the IDPC-CNN to distinguish between daily activities and falls. Finally, the IDPC-CNN is compared with SVM and with three CNNs of different structures under the same conditions. Experimental results show that the proposed IDPC-CNN achieves 92.55% accuracy, 95.71% sensitivity and 91.7% specificity. Overall, the IDPC-CNN outperforms the comparison methods in accuracy, efficiency, training and generalization.

1. Introduction

A fall is an accidental injury that is very common in everyday life. According to the latest statistics of the China Office for Ageing, by the end of 2017 the number of people over the age of 60 had reached 240 million, accounting for 17.3% of the total population, which shows that the aging of China's population is becoming more serious. In daily life, owing to factors such as declining physical fitness and reduced balance, the elderly are extremely prone to falling; if a fall is not treated in time, it can cause serious harm and even be life-threatening [1]. According to the "China Injury Prevention Report," the incidence of falls among the elderly in China is 20.7%, and falls are the main cause of death, disability and loss of independence for people over 65 years old, affecting about one in every three people. Fall detection is therefore of great significance for improving the level of care for the elderly [2].
At present, research on human fall detection is divided into two main categories. The first is fall recognition based on video images. Sase, P.S. et al. [3] detected falls with a threshold by extracting valid frames from video, followed by filtering, binarization and connected-component analysis. Li, X. et al. [4] proposed a fall detection platform based on Kinect and a support vector machine (SVM); using logic rules and the D-S evidence fusion method, the data were merged to identify falls and the accuracy was significantly improved. Baldewijns, G. et al. [5] proposed a late-fusion technique to improve the accuracy of fall detection, combining the confidence scores of different video-based fall detection systems and comparing four aggregation methods based on the area under the precision-recall curve. Although this type of fall recognition achieves good accuracy, it places high demands on the surrounding environment and restricts the range of activities.
The other type is fall recognition based on various sensors, including triaxial accelerometers, acoustic sensors and new wearable sensors. Wang, F.T. et al. [6] proposed two new inertial parameters, the acceleration cube product and the angular velocity cube product, to distinguish between falls and other daily activities and improve the specificity of fall detection. Mao, A. et al. [7] proposed a fall detection algorithm based on acceleration signals and Euler angles; to achieve better accuracy and convenience, the optimal sensor placement was verified by comparing the detection performance at different positions on the human body. Er, P.V. et al. [8] used sound sensors to detect the sound pressure generated by falls and developed a fuzzy-logic fall detection algorithm that, in conjunction with an accelerometer, processes sound pressure and acceleration signals, effectively improving recognition accuracy. Hsieh, C.Y. et al. [9] proposed a new hierarchical fall detection method based on a multiphase fall model, combined with threshold-based feature extraction, to effectively identify fall events from continuous sensor data. Mezghani, N. et al. [10] proposed a fall detection system based on intelligent textiles that uses a nonlinear support vector machine to determine the fall direction, which is helpful for studying the impact of a fall according to its direction. Sensor-based fall recognition has a wider range of application, and its real-time performance and portability have steadily improved; however, because sensor signals carry relatively little information, recognition accuracy is comparatively lower.
Common methods for fall detection include SVM [11,12], artificial neural networks (ANN) [13,14], hidden Markov models [15] and decision trees [16]. The convolutional neural network (CNN) is a deep feedforward neural network developed in recent years that has attracted extensive attention; it is most commonly used in supervised learning problems in the image field, such as image recognition and computer vision. As early as 1989, LeCun et al. [17] proposed the initial convolutional neural network model and later improved it. After AlexNet [18] won the 2012 ImageNet competition, convolutional neural networks became synonymous with deep learning in image recognition and have found more and more applications in other fields. Recently, Hu, Y. et al. [19] proposed a Convolution-Long Short-Term Memory (Conv-LSTM) structure based on an attention mechanism to better capture the temporal characteristics of surface electromyography (sEMG) signals for gesture recognition.
The work presented in this paper addresses these limitations while still achieving state-of-the-art results. The approach employs an improved dual parallel channels convolutional neural network (IDPC-CNN) to classify spectrograms of sEMG for fall detection. The original sEMG data are collected from the rectus femoris, medial femoral muscle, tibialis anterior muscle and gastrocnemius muscle. Firstly, the sEMG is pre-processed to extract the effective information. A sliding Hamming window [20] is used to obtain the spectrogram features of the sEMG, and the spectrogram is then dimensionally reduced. The reduced spectrogram [21] is used as the input of the IDPC-CNN to recognize fall motions. Compared with other identification methods, the proposed method has the advantages of low equipment cost, high recognition accuracy, a large operating range and protection of privacy; overall, it offers higher accuracy, more efficient training and better generalization. The severity of a fall injury is closely related to whether the person receives timely treatment, and this method can detect a fall more quickly and then issue an alarm so that the elderly can get help faster.

2. SEMG Signal Acquisition and Preprocessing

2.1. SEMG Acquisition

There are many muscles that control the flexion and extension of the legs. In this paper, four main muscles of the rectus femoris, medial femoral muscle, tibialis anterior muscle and gastrocnemius muscle are selected as the collection objects. The corresponding electrode positions of each muscle are shown in Figure 1.
The sEMG data are collected with the data transmission system (DTS) series wireless telemetry surface electromyography acquisition system produced by NORAXON, USA, which supports up to 16 sEMG channels simultaneously at a sampling frequency of 1500 Hz per channel. Five healthy men and five healthy women, with an average age of 24, were selected as volunteers. The volunteers did not exercise vigorously for one week before the experiment to avoid the effect of muscle fatigue. Before placing the electrodes, body hair was removed from the measuring area and the skin was wiped with an alcohol cotton ball to remove dead skin; this preparation minimizes interference with the measured data. The subjects signed written informed consent and privacy agreements before participating in the experiment and agreed to the use of their personal data for medical, teaching and medical-research purposes; the dataset is freely distributed. Basic information and medical history of the 10 subjects are shown in Table 1.
Since the final purpose of this experiment is to distinguish falls from daily actions, four different movements are required: walking, squatting, sitting and falling. The walking movement is performed on a flat, 10 m road and takes about 12 s; data from the steady-walking phase are selected. Figure 2 shows the different gestures.
As stated previously, our experiment uses 4 sEMG channels, each sampled at 1500 Hz. During the data collection process, each movement was repeated 10 times. The subjects were required to rest for 10 s after completing one movement to avoid inaccurate data caused by muscle fatigue. A total of 400 sets of effective experimental data were obtained, including walking, squatting, sitting and falling, among which 100 sets were falling and 300 sets were the other activities. Each set of data contains 4 channel sEMG signals.

2.2. Signal Preprocessing

2.2.1. Signal Denoising

The sEMG signal is a weak signal with a low frequency range of 10–500 Hz [22], so the original signal needs to be pre-processed. Since the DTS system provides a voltage-interference shielding module, 50 Hz power-frequency notch filtering was unnecessary, preserving the integrity of the original signal [23]. A third-order Butterworth low-pass filter and a third-order Butterworth high-pass filter were cascaded into a band-pass filter [24], with cut-off frequencies of 500 Hz (low-pass) and 10 Hz (high-pass) and an attenuation rate of 18 dB per octave; noise outside the effective frequency range was removed. The comparison of sEMG signals before and after denoising is shown in Figure 3.
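A minimal sketch of this denoising stage in Python with SciPy, assuming a single band-pass design equivalent to the cascaded filters; the zero-phase `filtfilt` call and the function name are illustrative choices not specified in the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_semg(signal, fs=1500.0, low=10.0, high=500.0, order=3):
    """Remove noise outside the 10-500 Hz sEMG band.

    A third-order Butterworth band-pass (equivalent to cascading a
    high-pass at `low` with a low-pass at `high`). Zero-phase
    filtering via filtfilt is an implementation choice, not from the paper.
    """
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Example: a 100 Hz in-band tone plus 2 Hz drift at the DTS sampling rate
fs = 1500.0
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2 * t)
clean = bandpass_semg(raw, fs)
```

The in-band 100 Hz component passes almost unchanged while the 2 Hz drift is strongly attenuated by the 10 Hz high-pass edge.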

2.2.2. Extraction of Effective Signal Segment

There are some differences in the way muscles contract when different volunteers perform the same action [25]. Even the same volunteer cannot completely repeat the same action. This leads to a certain difference in the length and amplitude of the sEMG signal under the same action. In order to improve the motion recognition rate and reduce the amount of processed data, our system uses the sliding window energy threshold method to automatically extract each valid segment of the surface EMG signal [26].
Let $x_{ki}$ be the $i$-th sampling point of the $k$-th segment of the signal; the mean short-term energy of the $n$ sampling points following $x_{ki}$ is expressed as:

$$E_k = \frac{1}{n}\sum_{i=1}^{n} x_{ki}^{2}$$
In order to divide the effective signal segment more accurately, set n to 100. The result of the signal passing through the sliding window to calculate the mean short-term energy is shown in Figure 4.
Part of each subject's data was segmented manually into action intervals. The mean of the ratios between the mean short-term energy at each starting point ($E_k^{start}$) and the maximum mean short-term energy in the corresponding interval was computed, then multiplied by the maximum mean short-term energy ($E_k^{max}$) of the subject's remaining data, so as to obtain the subject-specific static threshold of the starting point ($STS$):

$$STS = \mathrm{Mean}\!\left(\frac{E_k^{start}}{\mathrm{Max}(E_k)}\right)\cdot E_k^{max}$$
Analogously, using the mean short-term energy at each ending point ($E_k^{end}$), the static threshold of the ending point ($STE$) was defined as:

$$STE = \mathrm{Mean}\!\left(\frac{E_k^{end}}{\mathrm{Max}(E_k)}\right)\cdot E_k^{max}$$
The threshold calculation of the starting point and the ending point is shown in Figure 5.
A starting point is detected when $E_k > STS$, the mean of the preceding $l$ mean short-term energy values is less than the mean short-term energy of the current window, where $l$ depends on the window size $n$ and the sampling frequency $f_s$:

$$l = \frac{f_s}{2n}$$

and the mean of the following $m$ mean short-term energy values is greater than the static threshold, where $m$ satisfies:

$$\frac{f_s}{2n} < m < \frac{f_s}{n}$$

Similarly, an ending point is detected when $E_k < STE$ and the mean of the following $l$ mean short-term energy values is less than the mean short-term energy of the current window.
As mentioned earlier, in order to ensure that the extracted effective active segment had practical significance, the static thresholds S T S and S T E were set due to the difference between the movement and the muscle [27]. The method for extracting the active segment of the signal helps to improve the accuracy of the subsequent classification by reducing the interference information due to factors such as motion differences, muscle jitter and environmental influences [28]. The extraction of valid signal segments is shown in Figure 6.
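The energy-threshold segmentation above can be sketched as follows. The default thresholds here are simple fractions of the peak energy rather than the subject-specific static thresholds $STS$/$STE$ derived in the text, so treat this as an illustrative approximation:

```python
import numpy as np

def mean_short_term_energy(x, n=100):
    """E_k = (1/n) * sum of squares over each non-overlapping window of n samples."""
    n_windows = len(x) // n
    windows = x[: n_windows * n].reshape(n_windows, n)
    return (windows ** 2).mean(axis=1)

def extract_active_segment(x, n=100, sts=None, ste=None):
    """Return (start, end) sample indices of the active segment.

    `sts`/`ste` default to a fraction of the peak energy; the paper
    instead derives subject-specific static thresholds (Section 2.2.2).
    """
    e = mean_short_term_energy(x, n)
    if sts is None:
        sts = 0.1 * e.max()
    if ste is None:
        ste = 0.1 * e.max()
    above = np.flatnonzero(e > sts)
    start_w, end_w = above[0], above[-1]
    return start_w * n, (end_w + 1) * n

# Synthetic example: quiet - active burst - quiet
rng = np.random.default_rng(0)
sig = np.concatenate([
    0.01 * rng.standard_normal(1000),   # rest
    rng.standard_normal(2000),          # muscle activity
    0.01 * rng.standard_normal(1000),   # rest
])
start, end = extract_active_segment(sig, n=100)
```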

3. SEMG Feature Selection and Fall Detection Method

3.1. SEMG Spectrograms Extraction

In order to avoid the frequency leakage caused by using a rectangular window to extract the sEMG spectrogram directly, the sliding Hamming window was selected to perform fast Fourier transform (FFT) on the sEMG. The window function of the Hamming window is:
$$W(n,a) = (1-a) - a\cos\!\left(\frac{2\pi n}{N-1}\right), \qquad 0 \le n \le N-1$$
In general, $a$ is 0.46. The amplitude-frequency characteristic of the Hamming window has large side-lobe attenuation: the attenuation from the main-lobe peak to the first side-lobe peak can reach 40 dB. After windowing, the information in the middle of the window is prominent while the data on both sides are attenuated. When the window is shifted by 1/3 or 1/2 of its length, the data attenuated at the edges of the previous window reappear in the next, so the sliding becomes relatively smooth and spectral leakage is reduced to a minimum.
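The window formula can be checked numerically: with $a = 0.46$ it coincides with NumPy's built-in Hamming window.

```python
import numpy as np

N = 64
a = 0.46
n = np.arange(N)
# W(n, a) = (1 - a) - a * cos(2*pi*n / (N - 1)),  0 <= n <= N-1
w = (1 - a) - a * np.cos(2 * np.pi * n / (N - 1))
```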
In order to further reduce the delay of fall recognition and improve the efficiency of the algorithm, FFT was adopted in this paper to extract the features of the spectrum.
Firstly, a sliding Hamming window was used to perform the FFT on the sEMG signals of each channel to extract the spectrograms. The horizontal axis of the spectrogram is time and the vertical axis is frequency. Since the effective information of the sEMG signal is mainly concentrated in the 10–500 Hz range, the spectrograms were processed to retain that effective frequency segment.
Since the sEMG channels differ in magnitude, the spectrograms were normalized to between 0 and 1 to improve the efficiency of the algorithm. Principal component analysis (PCA) was then used to reduce the dimensionality of the spectrogram data along the frequency direction [29]. As can be seen from Figure 7 and Table 2, the cumulative variance contribution of the first eight principal components reaches 95.3%, so the reduced-dimension spectrogram explains most of the effective information in the original high-dimensional spectrogram, adapting the data to the CNN classifier and reducing processing time.
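A hedged sketch of the spectrogram extraction and PCA reduction using SciPy and scikit-learn; treating time frames as samples and frequency bins as features is our assumption about the PCA layout, which the paper does not spell out:

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA

def reduced_spectrogram(x, fs=1500.0, nperseg=300, noverlap=150, n_components=8):
    """Hamming-window STFT, magnitude normalized to [0, 1], then PCA
    along the frequency axis. The 200 ms window / 100 ms step follow
    Section 3.2; the PCA layout here is an illustrative assumption.
    """
    f, t, Z = stft(x, fs=fs, window="hamming", nperseg=nperseg, noverlap=noverlap)
    mag = np.abs(Z)
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)  # normalize to [0, 1]
    # Rows = time frames (samples), columns = frequency bins (features)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(mag.T)  # shape: (time frames, n_components)

rng = np.random.default_rng(1)
spec = reduced_spectrogram(rng.standard_normal(3000))  # 2 s of dummy sEMG
```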
The spectrum extraction and data dimensionality reduction for the four actions are shown in Figure 8.
Using the above method, Figure 9 shows the spectrogram extraction process for the sEMG.

3.2. SEMG Feature Selection

The beginning, process and end phases of an action contain different information. In order to effectively utilize the information, the sliding window is selected to extract the feature of sEMG signal. The sliding window consists of two key variables: the window size and the sliding step size [30], as shown in Figure 10.
In practical applications, the performance of the classifier should take priority over speed, but to achieve acceptable continuous classification the latency should be less than 300 ms [31]. In our system, we opted for windows of 200 ms (300 points) and a sliding step of 100 ms (150 points).
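The windowing scheme above (200 ms windows, 100 ms step at 1500 Hz) can be sketched as:

```python
import numpy as np

FS = 1500    # sampling rate, Hz
WIN = 300    # 200 ms window
STEP = 150   # 100 ms sliding step (well under the 300 ms latency limit)

def sliding_windows(x, win=WIN, step=STEP):
    """Split a 1-D signal into overlapping analysis windows."""
    n = (len(x) - win) // step + 1
    return np.stack([x[i * step : i * step + win] for i in range(n)])

# One second of signal yields 9 overlapping 300-point windows
w = sliding_windows(np.arange(FS))
```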
sEMG features fall into time-domain, frequency-domain and time-frequency-domain categories. Linear discriminant analysis was used to compare the classification accuracy of time-domain and frequency-domain features, and the feature selection was further optimized. In the time domain, mean absolute value (MAV), variance (VAR), waveform length (WL), root mean square (RMS), zero crossing (ZC) and slope sign change (SSC) are selected. The formulas are as follows:
  • mean absolute value:
    $$\mathrm{MAV} = \frac{1}{n}\sum_{i=1}^{n} \lvert x_{ki}\rvert$$
  • variance:
    $$\mathrm{VAR} = \frac{1}{n}\sum_{i=1}^{n} \left( x_{ki} - \bar{x}_k \right)^{2}$$
  • waveform length:
    $$\mathrm{WL} = \sum_{i=1}^{n-1} \lvert x_{k(i+1)} - x_{ki}\rvert$$
  • root mean square:
    $$\mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_{ki}^{2}}$$
  • zero crossing (counted only when the amplitude change exceeds a noise threshold $\varepsilon$):
    $$\mathrm{ZC} = \sum_{i=1}^{n-1} \left[\, x_{ki}\, x_{k(i+1)} < 0 \;\wedge\; \lvert x_{ki} - x_{k(i+1)}\rvert \ge \varepsilon \,\right]$$
  • slope sign change:
    $$\mathrm{SSC} = \frac{1}{n-2}\sum_{i=2}^{n-1} f(i), \qquad f(i) = \begin{cases} 0, & \left( x_{ki} - x_{k(i-1)} \right)\left( x_{k(i+1)} - x_{ki} \right) > 0 \\ 1, & \left( x_{ki} - x_{k(i-1)} \right)\left( x_{k(i+1)} - x_{ki} \right) < 0 \end{cases}$$
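The six time-domain features can be computed as below; the ZC noise threshold `eps` is an illustrative value, since the paper does not state it:

```python
import numpy as np

def td_features(x, eps=0.01):
    """Time-domain sEMG features from Section 3.2.

    `eps` is the ZC noise threshold (illustrative; not given in the paper).
    """
    d = np.diff(x)
    mav = np.mean(np.abs(x))
    var = np.mean((x - x.mean()) ** 2)
    wl = np.sum(np.abs(d))
    rms = np.sqrt(np.mean(x ** 2))
    # Zero crossings: sign change with amplitude change above eps
    zc = int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(d) >= eps)))
    # Slope sign changes: product of consecutive slopes is negative
    ssc = np.sum(d[:-1] * d[1:] < 0) / (len(x) - 2)
    return {"MAV": mav, "VAR": var, "WL": wl, "RMS": rms, "ZC": zc, "SSC": ssc}

feats = td_features(np.array([1.0, -1.0, 1.0, -1.0]), eps=0.5)
```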
The frequency-domain feature selected is the spectrogram. The spectrogram contains both time and frequency information, reflecting information that cannot be observed in time-domain features; analyzing spectrograms can therefore improve the discriminability between different actions.
Three kinds of classifiers, LDA, KNN and SVM, were selected to compare the performance of different features under different classifiers. The classification accuracy is shown in Figure 11.
It can be seen from the figure that when the spectrogram was used as the feature, classification accuracy improved greatly compared with the time-domain features. Spectrograms do not perform as well under KNN, a classifier that relies on distance-based decisions, but still reach acceptable levels. SVM had the best overall performance among the three classifiers, and RMS had the best overall performance among the time-domain features. RMS and the spectrogram (SPM) were therefore selected as features to further compare accuracy under the IDPC-CNN.

3.3. Fall Detection Method Based on IDPC-CNN

The network structure of the IDPC-CNN is shown in Figure 12. The first two convolutional layers of parallel channel 1 use third-order convolution kernels of 2 × 4 × 1, with 10 kernels in the first layer and 20 in the second. A max-pooling layer follows each convolutional layer to improve the robustness of the algorithm and reduce the degradation of classification accuracy caused by local noise. The last convolutional layer uses 20 third-order convolution kernels of 2 × 4 × 2. Parallel channel 2 is similar: its first two convolutional layers use third-order kernels of 4 × 2 × 1 and its last convolutional layer uses kernels of 4 × 2 × 2, with the same numbers of kernels per layer as in channel 1. Rectangular convolution kernels are proposed so that parallel channel 1 focuses on the time direction and parallel channel 2 on the frequency direction, a specialization that the square kernels of a single-channel CNN cannot provide. Compared with square kernels, rectangular kernels extract the different features more effectively and improve network performance. The spectrograms of the different channels are mixed to detect the correlation between different muscles.
The focus of the two channels is different. Channel 1 used a 2 × 4 convolution kernel, which focused on analyzing horizontal continuous information in the spectrum, while channel 2 used a 4 × 2 convolution kernel, which focused on analyzing vertical continuous information in the spectrum. The two channels do not influence each other in the process of feature extraction. Finally, the outputs of the two channels were combined and passed to the fully connected channel.
The fully connected channel consists of three fully connected layers and one softmax layer. The first fully connected layer has 40 units, the middle layer has 10 units and the last has 4 units, corresponding to the number of action classes. To avoid gradient vanishing and overfitting, the rectified linear unit (ReLU) is used as the activation function in the fully connected layers, and dropout with a probability of 0.5 is applied between them to improve training speed. The softmax layer converts the output into a probability distribution over the actions:
$$P(y=i \mid x) = \frac{e^{h(x,\,y_i)}}{\sum_{j=1}^{n} e^{h(x,\,y_j)}}$$
where $h(x, y_i)$ is the raw score that input $x$ belongs to class $i$, and $P(y=i \mid x)$ is the probability that input $x$ belongs to class $i$.
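A numerically stable softmax sketch matching the formula above:

```python
import numpy as np

def softmax(h):
    """Convert raw class scores h(x, y_i) to probabilities.

    Subtracting the max before exponentiating is a standard
    numerical-stability trick; it does not change the result.
    """
    z = np.exp(h - np.max(h))
    return z / z.sum()

# Four raw scores, one per action class (illustrative values)
p = softmax(np.array([2.0, 1.0, 0.5, 0.1]))
```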
RMS and SPM were selected as features with the IDPC-CNN as the classifier. The comparison test was conducted in the same experimental environment on the same data set. The CPU was an Intel i5-8400 with 8 GB DDR4 RAM and the GPU was an NVIDIA GT710. All performance indicators were obtained through 10-fold cross-validation. The running time of each stage and the final accuracy are shown in Table 3.
As the table shows, the feature extraction time and classifier training time for the SPM were greater than for the RMS, while the classifier test times were similar. Since the aim of the method is to detect a fall quickly and issue an alarm, the SPM was selected as the input feature on the basis of classifier test time and accuracy.

4. Experiment

In order to verify the effectiveness of the proposed method, support vector machine (SVM) and CNN [32] with several different structures were selected for comparative experiments.
1. SVM
The radial basis kernel function has a simple structure and strong generalization ability and was therefore selected as the kernel function.
2. DPC1-CNN
The structure of two parallel channels 1 was adopted, with the fully connected channel consistent with that of the IDPC-CNN, as shown in Figure 13.
3. DPC2-CNN
The structure of two parallel channels 2 was adopted, with the fully connected channel consistent with that of the IDPC-CNN, as shown in Figure 14.
4. Single channel CNN
The single-channel structure was adopted and the convolutional layer adopted the third-order convolution kernels of 2 × 2 × 4. Other parameters were the same as the IDPC-CNN proposed in this paper. The structure is shown in Figure 15.
The data set from all 10 subjects was evaluated by 10-fold cross-validation. To prevent data from the same subject from appearing in both training and test sets, the data were divided into ten parts by subject: the data of 9 subjects were used as the training set, the data of the remaining subject as the test set, and the average over the 10 runs was taken as the estimate of algorithm performance. All reported statistics were obtained in this way. Feeding all training data into the CNN at once for iterative training makes the computational load too large and the gradient descent slow, so batch training was adopted: in each iteration a batch of data is randomly selected from the training set and fed to the CNN, which improves training speed, reduces the computation per iteration, finds the optimal gradient-descent direction faster and helps avoid overfitting and gradient vanishing.
The training and testing flow chart is shown in Figure 16.
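The subject-wise 10-fold scheme described above corresponds to leave-one-group-out cross-validation; a sketch with scikit-learn, using a hypothetical data layout of 40 trials per subject (10 repetitions of each of the 4 actions):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical layout: 400 trials from 10 subjects (40 trials each)
X = np.zeros((400, 8))            # placeholder feature matrix
y = np.tile([0, 1, 2, 3], 100)    # walking/squatting/sitting/falling (illustrative)
subjects = np.repeat(np.arange(10), 40)

logo = LeaveOneGroupOut()
folds = list(logo.split(X, y, groups=subjects))  # one fold per held-out subject
```

Each fold trains on 9 subjects and tests on the remaining one, so no subject's data leaks between training and test sets.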
In the same experimental environment and on the same data set, comparative tests were carried out to analyze the fall-recognition accuracy of the different methods. For quantitative analysis, recognition results were divided into four categories: TP, fall samples identified as falls; FP, daily-activity samples identified as falls; TN, daily-activity samples identified as daily activities; FN, fall samples identified as daily activities.
The following three performance indicators are used to judge the performance of the CNN and the SVM classifier:
  • Accuracy ($A_c$), the accuracy over all samples:
    $$A_c = \frac{TP + TN}{TP + FP + TN + FN} \times 100\%$$
  • Sensitivity ($S_e$), the detection rate of fall samples:
    $$S_e = \frac{TP}{TP + FN} \times 100\%$$
  • Specificity ($S_p$), the detection rate of daily-activity samples:
    $$S_p = \frac{TN}{FP + TN} \times 100\%$$
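The three indicators follow directly from the four counts; the confusion-matrix values below are illustrative, not the paper's results:

```python
def fall_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity (in %) from TP/FP/TN/FN counts."""
    ac = (tp + tn) / (tp + fp + tn + fn) * 100   # accuracy over all samples
    se = tp / (tp + fn) * 100                    # detection rate of falls
    sp = tn / (fp + tn) * 100                    # detection rate of daily activities
    return ac, se, sp

# Illustrative counts for 400 samples (100 falls, 300 daily activities)
ac, se, sp = fall_metrics(tp=95, fp=25, tn=275, fn=5)
```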
The experimental results are shown in Figure 17.
According to the analysis of experimental results, it can be verified that the proposed sEMG fall warning method based on IDPC-CNN has better classification performance.

5. Results

To address falls among the elderly, this paper proposes a fall detection method using sEMG based on the IDPC-CNN. The method identifies three daily actions and forward falls, achieving an accuracy of 92.55%, a sensitivity of 95.71% and a specificity of 91.7%. The experimental results show that the method can classify the four kinds of actions. In future work, the parameters and structure of the convolutional neural network will be further optimized, and self-correcting training will be considered to reduce the running time and improve recognition accuracy.

6. Discussion

The IDPC-CNN classifier is significantly superior to the other methods in accuracy, sensitivity and specificity. Because its small square convolution kernels cannot capture as much information from the input, the single-channel CNN has the lowest performance indicators. The performance of the DPC1-CNN and DPC2-CNN classifiers is slightly lower than that of the IDPC-CNN, showing that the dual channels improve classification accuracy by using different convolution kernels to combine the time and frequency directions of the spectrograms. Apart from the single-channel CNN, the other CNN methods are significantly better than SVM. Using the spectral information of sEMG, the improved dual parallel channels convolutional neural network updates its parameters through iterative training and back-propagation, which effectively reduces training time and improves the accuracy of the algorithm. The results show that the running time of online classification prediction is within the acceptable delay range; in the clinical field, this has practical significance for real-time fall warning for the elderly.

Author Contributions

Conceptualization, X.L. and H.L.; Data curation, T.L.; Formal analysis, X.L.; Investigation, C.L.; Methodology, T.L.; Software, C.L.; Validation, X.L.; Visualization, H.W.; Writing—original draft, H.L.; Writing—review & editing, X.L.

Funding

This research was funded by National Natural Science Foundation of China, grant number 61473112, 61673158, National Key R&D Program of China, grant number 2017YFB1401200 and Key Projects of Hebei Province, grant number F2017201222.

Acknowledgments

The authors would like to thank all the colleagues that have supported this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rubenstein, L.Z. Falls in older people: epidemiology, risk factors and strategies for prevention. Age Ageing 2006, 35, ii37–ii41. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Luo, D.; Luo, H.Y. Fall detection algorithm based on random forest. J. Comput. Appl. 2015, 35, 3157–3160. [Google Scholar]
  3. Sase, P.S.; Bhandari, S.H. Human Fall Detection using Depth Videos. In Proceedings of the 2018 5th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 22–23 February 2018; pp. 546–549. [Google Scholar]
  4. Li, X.; Nie, L.; Xu, H.; Wang, X. Collaborative Fall Detection Using Smart Phone and Kinect. Mob. Netw. Appl. 2018, 23, 775–788. [Google Scholar] [CrossRef]
  5. Baldewijns, G.; Debard, G.; Mertes, G.; Croonenborghs, T.; Vanrumste, B. Improving the accuracy of existing camera based fall detection algorithms through late fusion. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, Korea, 11–15 July 2017; pp. 2667–2671. [Google Scholar]
  6. Wang, F.T.; Chan, H.L.; Hsu, M.H.; Lin, C.K.; Chao, P.K.; Chang, Y.J. Threshold-based fall detection using a hybrid of tri-axial accelerometer and gyroscope. Physiol. Meas. 2018, 39, 105002. [Google Scholar] [CrossRef] [PubMed]
  7. Mao, A.; Ma, X.; He, Y.; Luo, J. Highly portable, sensor-based system for human fall monitoring. Sensors 2017, 17, 2096. [Google Scholar] [CrossRef] [PubMed]
  8. Er, P.V.; Tan, K.K. Non-intrusive fall detection monitoring for the elderly based on fuzzy logic. Measurement 2018, 124, 91–102. [Google Scholar] [CrossRef]
  9. Hsieh, C.Y.; Liu, K.C.; Huang, C.N.; Chu, W.C.; Chan, C.T. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model. Sensors 2017, 17, 307. [Google Scholar] [CrossRef]
  10. Mezghani, N.; Ouakrim, Y.; Islam, M.R.; Yared, R.; Abdulrazak, B. Context aware adaptable approach for fall detection bases on smart textile. In Proceedings of the 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Orlando, FL, USA, 16–19 February 2017; pp. 473–476. [Google Scholar]
  11. Nizam, Y.; Mohd, M.; Jamil, M. Development of a user-adaptable human fall detection based on fall risk levels using depth sensor. Sensors 2018, 18, 2260. [Google Scholar] [CrossRef] [PubMed]
  12. Iazzi, A.; Rziza, M.; Thami, R.O.H. Fall detection based on posture analysis and support vector machine. In Proceedings of the 2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, 21–24 March 2018; pp. 1–6. [Google Scholar]
  13. Yoo, S.G.; Oh, D. An artificial neural network–based fall detection. Int. J. Eng. Bus. Manag. 2018, 10, 1847979018787905. [Google Scholar] [CrossRef]
  14. Yang, X.; Xiong, F.; Shao, Y.; Niu, Q. WmFall: WiFi-based multistage fall detection with channel state information. Int. J. Distrib. Sens. Netw. 2018, 14, 1550147718805718. [Google Scholar] [CrossRef]
  15. Yu, S.; Chen, H.; Brown, R.A. Hidden Markov model-based fall detection with motion sensor orientation calibration: A case for real-life home monitoring. IEEE J. Biomed. Health Inform. 2018, 22, 1847–1853. [Google Scholar] [CrossRef] [PubMed]
  16. Junior, C.L.B.; Adami, A.G. SDQI-Fall Detection System for Elderly. IEEE Lat. Am. Trans. 2018, 16, 1084–1090. [Google Scholar] [CrossRef]
  17. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  18. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  19. Hu, Y.; Wong, Y.; Wei, W.; Du, Y.; Kankanhalli, M.; Geng, W. A novel attention-based hybrid CNN-RNN architecture for sEMG-based gesture recognition. PLoS ONE 2018, 13, e0206049. [Google Scholar] [CrossRef] [PubMed]
  20. Samuel, O.W.; Asogbon, M.G.; Geng, Y.; AI-Timemy, A.H.; Pirbhulal, S.; Ji, N.; Chen, S.; Fang, P.; Li, G. Intelligent EMG Pattern Recognition Control Method for Upper-Limb Multifunctional Prostheses: Advances, Current Challenges, and Future Prospects. IEEE Access 2019, 7, 10150–10165. [Google Scholar] [CrossRef]
  21. Feng, N.; Shi, Q.; Wang, H.; Gong, J.; Liu, C.; Lu, Z. A soft robotic hand: design, analysis, sEMG control and experiment. Int. J. Adv. Manuf. Technol. 2018, 97, 319–333. [Google Scholar] [CrossRef]
  22. Cheng, L.; Chen, M.; Li, Z. Design and Control of a Wearable Hand Rehabilitation Robot. IEEE Access 2018, 6, 74039–74050. [Google Scholar] [CrossRef]
  23. Sun, Y.; Li, C.; Li, G.; Jiang, G.; Jiang, D.; Liu, H.; Zheng, Z.; Shu, W. Gesture recognition based on kinect and sEMG signal fusion. Mob. Netw. Appl. 2018, 23, 797–805. [Google Scholar] [CrossRef]
  24. Cheng, J.; Wei, F.; Li, C.; Liu, Y.; Liu, A.; Chen, X. Position-independent gesture recognition using sEMG signals via canonical correlation analysis. Comput. Biol. Med. 2018, 103, 44–54. [Google Scholar] [CrossRef]
  25. Xiao, F.; Wang, Y.; He, L.; Wang, H.; Li, W.; Liu, Z. Motion Estimation from Surface Electromyogram Using Adaboost Regression and Average Feature Values. IEEE Access 2019, 7, 13121–13134. [Google Scholar] [CrossRef]
  26. Zhao, C.; Ma, S.; Liu, Y. Onset detection of surface diaphragmatic electromyography based on sample entropy and individualized threshold. J. Biomed. Eng. 2018, 35, 852–859. [Google Scholar]
  27. Allard, U.C.; Nougarou, F.; Fall, C.L.; Giguère, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2464–2470. [Google Scholar]
  28. Shi, W.T.; Lyu, Z.J.; Tang, S.T.; Chia, T.L.; Yang, C.Y. A bionic hand controlled by hand gesture recognition based on surface EMG signals: A preliminary study. Biocybern. Biomed. Eng. 2018, 38, 126–135. [Google Scholar] [CrossRef]
  29. Naik, G.R.; Selvan, S.E.; Gobbo, M.; Acharyya, A.; Nguyen, H.T. Principal component analysis applied to surface electromyography: A comprehensive review. IEEE Access 2016, 4, 4025–4037. [Google Scholar] [CrossRef]
  30. Zhai, X.; Jelfs, B.; Chan, R.H.M.; Tin, C. Self-recalibrating surface EMG pattern recognition for neuroprosthesis control based on convolutional neural network. Front. Neurosci. 2017, 11, 379. [Google Scholar] [CrossRef] [PubMed]
  31. Jiang, S.; Lv, B.; Guo, W.; Zhang, C.; Wang, H.; Sheng, X.; Shull, P.B. Feasibility of Wrist-Worn, Real-Time Hand and Surface Gesture Recognition via sEMG and IMU Sensing. IEEE Trans. Ind. Inf. 2018, 14, 3376–3385. [Google Scholar] [CrossRef]
  32. Ding, Z.; Yang, C.; Tian, Z.; Yi, C.; Fu, Y.; Jiang, F. sEMG-Based Gesture Recognition with Convolution Neural Networks. Sustainability 2018, 10, 1865. [Google Scholar] [CrossRef]
Figure 1. The position of surface electromyography (sEMG) electrode.
Figure 2. The 4 gestures considered in this work.
Figure 3. The comparison of sEMG signals before and after denoising.
Figure 4. Mean short-term energy value result.
Figure 5. Threshold segmentation diagram.
Figure 6. Effective signal segment extraction.
Figure 7. Variance contribution rate.
Figure 8. Spectrogram processing example.
Figure 9. Feature extraction flow chart.
Figure 10. Sliding window schematic.
Figure 11. Accuracy comparison diagram.
Figure 12. Improved dual parallel channels convolutional neural network structure.
Figure 13. Dual parallel channel 1 convolutional neural network structure.
Figure 14. Dual parallel channel 2 convolutional neural network structure.
Figure 15. Single-channel convolutional neural network (CNN) structure.
Figure 16. Train and test flow chart.
Figure 17. Performance index comparison.
Table 1. Basic information of different research subjects.

| Subject | Gender | Age | Height/cm | Weight/kg | Lower Limbs Diseases |
|---------|--------|-----|-----------|-----------|----------------------|
| 1       | Male   | 24  | 176       | 74        | No                   |
| 2       | Male   | 23  | 175       | 78        | No                   |
| 3       | Male   | 23  | 172       | 72        | No                   |
| 4       | Male   | 25  | 179       | 83        | No                   |
| 5       | Male   | 24  | 170       | 70        | No                   |
| 6       | Female | 27  | 168       | 51        | No                   |
| 7       | Female | 23  | 165       | 47        | No                   |
| 8       | Female | 24  | 162       | 45        | No                   |
| 9       | Female | 24  | 170       | 55        | No                   |
| 10      | Female | 23  | 160       | 44        | No                   |
Table 2. Variance and cumulative variance contribution rate.

| Principal Component | Variance Contribution Rate (%) | Accumulated Variance Contribution Rate (%) |
|---------------------|--------------------------------|--------------------------------------------|
| 1                   | 47.6                           | 47.6                                       |
| 2                   | 25.5                           | 73.1                                       |
| 3                   | 10.2                           | 83.3                                       |
| 4                   | 5.3                            | 88.6                                       |
| 5                   | 3.1                            | 91.7                                       |
| 6                   | 1.8                            | 93.5                                       |
| 7                   | 1.1                            | 94.6                                       |
| 8                   | 0.7                            | 95.3                                       |
| 9                   | 0.2                            | 95.5                                       |
| 10                  | 0.1                            | 95.6                                       |
| 20                  | 0.06                           | 96.38                                      |
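The accumulated column in Table 2 is simply a running sum of the per-component variance contribution rates. A minimal sketch in Python (rates copied from the table; components 11–19 are not listed in the paper and are omitted here):

```python
import numpy as np

# Per-component variance contribution rates (%) for principal
# components 1-10, copied from Table 2.
rates = np.array([47.6, 25.5, 10.2, 5.3, 3.1, 1.8, 1.1, 0.7, 0.2, 0.1])

# The accumulated column is the running (cumulative) sum of the rates.
accumulated = np.cumsum(rates)

print(accumulated)  # 47.6, 73.1, 83.3, ..., 95.6, matching the table
```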
Table 3. Performance indicators comparison.

| Performance Indicator             | RMS    | SPM    |
|-----------------------------------|--------|--------|
| Signal preprocessing time (ms)    | 25.09  | 25.09  |
| Feature extraction time (ms)      | 101.37 | 357.16 |
| Classifier training time (h)      | 7.5    | 10.6   |
| Classifier test result time (ms)  | 57.12  | 63.5   |
| Accuracy (%)                      | 84.21  | 92.55  |
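In Table 3, RMS denotes the time-domain (root-mean-square) feature pipeline and SPM the spectrogram pipeline. As an illustration only (the sampling rate, segment length, and STFT windowing below are assumptions, not the paper's reported settings), the two feature types can be sketched as:

```python
import numpy as np
from scipy import signal

# Assumed parameters for illustration; the study's actual sampling
# rate, segment length, and windowing may differ.
fs = 1000                        # sampling rate in Hz (assumption)
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * fs)  # stand-in for a 2 s sEMG segment

# Time-domain feature: root mean square of the segment.
rms = np.sqrt(np.mean(x ** 2))

# Time-frequency feature: spectrogram (power of the short-time
# Fourier transform), the kind of input fed to the IDPC-CNN.
f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=128, noverlap=64)

print(rms)        # one scalar per window
print(Sxx.shape)  # a (frequency bins x time frames) matrix
```

The RMS value compresses a window into a single number, whereas the spectrogram keeps a frequency-by-time matrix; that difference is consistent with the table's slower SPM feature extraction (357.16 ms vs. 101.37 ms) alongside its higher accuracy.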

Liu, X.; Li, H.; Lou, C.; Liang, T.; Liu, X.; Wang, H. A New Approach to Fall Detection Based on Improved Dual Parallel Channels Convolutional Neural Network. Sensors 2019, 19, 2814. https://doi.org/10.3390/s19122814
