Article

An Analysis on Sensor Locations of the Human Body for Wearable Fall Detection Devices: Principles and Practice

by
Ahmet Turan Özdemir
Department of Electrical and Electronics Engineering, Erciyes University, Kayseri 38039, Turkey
Sensors 2016, 16(8), 1161; https://doi.org/10.3390/s16081161
Submission received: 29 May 2016 / Revised: 3 July 2016 / Accepted: 20 July 2016 / Published: 25 July 2016
(This article belongs to the Special Issue Wearable Biomedical Sensors)

Abstract
Wearable devices for fall detection have received attention in academia and industry, because falls are very dangerous, especially for elderly people, and may result in death if immediate aid is not provided. However, some of these devices are not easily worn by elderly people. In this work, a large dataset comprising 2520 tests is employed to determine the best sensor placement location on the body and to reduce the number of sensor nodes for device ergonomics. During the tests, the volunteers’ movements are recorded with six sensor units, each containing a triaxial accelerometer, gyroscope and magnetometer, placed tightly on different parts of the body with special straps: head, chest, waist, right wrist, right thigh and right ankle. The accuracy of the individual sensor units at each location is investigated with six machine learning techniques, namely the k-nearest neighbor (k-NN) classifier, Bayesian decision making (BDM), support vector machines (SVM), the least squares method (LSM), dynamic time warping (DTW) and artificial neural networks (ANNs). Each technique is applied to single, double, triple, quadruple, quintuple and sextuple sensor configurations. These configurations create 63 different combinations, and for six machine learning techniques, a total of 63 × 6 = 378 combinations is investigated. As a result, the waist region is found to be the most suitable location for sensor placement on the body, with 99.96% fall detection sensitivity using the k-NN classifier, whereas the best sensitivity achieved by the wrist sensor is 97.37%, despite this location being highly preferred for today’s wearable applications.


1. Introduction

Wearable device applications have gained incredible popularity in many areas of daily life, such as health [1], entertainment [2], communication [3], rehabilitation [4] and education [5]. Moreover, thanks to the reduced service cost of communication networks, users can easily access the Internet at very reasonable prices. These advances in wearable devices and Internet technologies have resulted in today’s wearable and Internet of Things (IoT) patent wars. Wearable fall detection is one of the most popular fields in both academia and industry, because falls are a serious and common cause of morbidity and mortality among elderly people [6]. More than one third of elderly people, aged 65 or older, experience at least one fall each year [7]. Fall detection devices can raise fall alarms immediately and alert relevant persons for assistance. Immediate aid after a fall reduces the cost of treatment and the length of hospital stay. If a fallen person remains unattended for a long time, physical and psychological complications can follow. Physical complications depend on the severity of the injury and cover a broad spectrum, from simple scratches and contusions to fatal brain damage and hip fractures [8,9]. Psychologically, falls induce a fear of falling and of other physical activities, and this fear makes people more likely to fall again. The time spent on the floor after a fall strongly influences psychological complications; the longer this time, the deeper its effects on the subject, such as social isolation [7].
There are many techniques used to detect falls, such as camera systems and smart floors; however, wearable systems are known as the most preferred solution [1,4,10,11,12]. This is because the other systems raise privacy problems and/or force people to live in a restricted area. Wearable fall detection systems produce high accuracy and preserve mobility. These advantages make wearable device applications preferable to the others [11,13]. Wearable fall detection devices can be divided into two main groups: user-manipulated and automatic systems [1,4]. User-manipulated systems basically have a panic button: when the subject experiences a fall, he/she activates the button to create a fall alarm so that relevant persons can be informed. User-manipulated fall detection systems are easy to use and inexpensive, but they are non-functional if the user loses consciousness, which is exactly when first aid is needed most. Therefore, a wearable fall detection system should be automatic.
Different wearable sensor-device applications for fall detection exist in the literature and on the market. However, it is still impossible to compare the accuracies of these approaches and devices, for several reasons. The first is that researchers do not use common activity and fall sets when evaluating system performance; generally, different studies use different sets of activities and falls, performed by different subjects, to create their own datasets [14,15]. Thus, experimentation and evaluation standards are needed for better comparison. Another problem is the different types of sensors used during the evaluation, such as combinations of accelerometers, gyroscopes, magnetometers, barometers or microphones [13,16,17,18,19,20,21,22]. The use of different decision algorithms to detect falls is also an issue for comparing the accuracies of different devices [16]. Depending on device ergonomics, different body locations are chosen for sensor placement in wearable fall detection devices [14]. There are some works on standardizing activity and fall movement sets [23] and comparing sensor achievements [17]; however, the literature lacks a study that specifically focuses on determining the best sensor placement location on the body for single sensor-based solutions. If the most suitable body location for sensor placement is known, fall detection devices and test experiments can be designed to achieve better accuracy by taking this location into consideration.
Falls are associated with major risk factors, like chronic illness, disability and aging [24]. People in fall risk groups mostly have motion limitations. These risk factors and motion limitations require wearable fall detection devices that are easy to use and wear. Single-sensor solutions can make fall detection devices easy to wear and use. In the literature, different single-sensor positions, such as the waist, chest, head, wrist, leg, ankle and arm, have each been claimed to yield the best fall detection accuracy; however, these works use different movement datasets with different classification and decision algorithms [10]. Therefore, it is impossible to assess the best body locations for fall detection accuracy on the basis of these results.
Bao et al. used five biaxial accelerometers and asked 20 volunteers, 13 males and 7 females, aged from 17 to 48 with a mean of 21.8, to wear these sensors on different parts of their bodies: hip, wrist, ankle, arm and thigh. They distinguished 20 different everyday activities using a decision table, instance-based learning (nearest neighbor), naive Bayes and decision tree classification algorithms. The best accuracy was achieved with the decision tree algorithm, at 84% overall success. They reported that the classification performance was reduced by only 3.27% when just two biaxial accelerometers, attached to the thigh and wrist, were used [21]. This work shows that reducing the number of sensor nodes from five to two only slightly decreases the accuracy.
Kangas et al. defined nine sets of fall actions performed by three volunteers (a 38-year-old female and two males aged 42 and 48). Activities of daily living (ADLs) were performed by the female and the 42-year-old male. During the simulated falls and ADLs, the acceleration of the body was measured simultaneously from the waist, head and wrist with triaxial accelerometers. Waist sensor sensitivity varied from 76% to 97%; head sensor sensitivity varied from 47% to 98%; and wrist sensor sensitivity varied from 37% to 71%. In this study, the head was found to be the best sensor location for detecting falls [18]. This work gives some ideas about three different body locations, but the diversity of activities and the number of volunteers are not sufficient to support a final decision on the best sensor placement location.
Li et al. aimed to distinguish falls from ADLs and used a combination of accelerometers and gyroscopes instead of accelerometers alone. They created a dataset of 18 different trials, 5 falls and 13 ADLs including fall-like activities, performed by three men in their 20s. Using two sensor nodes, each consisting of a triaxial accelerometer and a triaxial gyroscope, on the chest and upper limb, falls were recognized from ADLs with 91% sensitivity [19]. However, this approach uses the chest and limb sensors together, and individual sensor performances were not calculated. Therefore, this work does not determine the best sensor placement location on the body. The number of volunteers and the diversity of trials are also insufficient.
Atallah et al. reviewed the literature on accelerometer-based wearable activity recognition and fall detection systems in terms of sensor placement. They argue that increasing the number of sensor units and sensor types, such as combinations of accelerometers, gyroscopes, magnetometers, microphones and pressure sensors, increases classification accuracy. Eleven subjects (nine males and two females) wore six triaxial accelerometers on different parts of their bodies (chest, arm, wrist, waist, knee and ankle) plus one ear-worn sensor, then performed 15 sets of ADLs. The movement sets used in this work do not contain fall actions. Activities were recorded with battery-powered, light-weight boards, and 13 features were extracted from these records. They found that the features are highly affected by changes in orientation. k-NN and Bayesian classifier algorithms were used to classify the activities; however, the optimal sensor location was not determined [20].
Shi et al. reported that the literature lacks a determination of the optimal sensor position for fall detection systems. They used 17 sensor nodes, each consisting of a triaxial accelerometer, gyroscope and magnetometer, as well as a contact pressure sensor. Thirteen young volunteers (12 males and one female, heights in the range of 160 cm to 185 cm, weights in the range of 55 kg to 85 kg) performed 12 sets of ADLs and 13 sets of falls. Volunteers performed each fall pattern 10 times and each ADL pattern 20 times, creating a dataset of 3232 records. During the tests, motions were recorded wirelessly with sensors attached to the thighs, shanks, feet, upper arms, forearms, hands, waist, neck, head and back. They found that using sensors on the waist and feet can detect falls with 98.9% sensitivity. The maximum single-sensor sensitivity achieved in this study was 95.5%, obtained at the waist and upper waist separately, using the decision tree classification algorithm [22]. The ratio of female participants is only 1/13, which makes the dataset unbalanced and dominated by men’s data. Anatomic differences between the genders are very important in terms of sensor orientations and the execution style of individual movements. Another issue with this work is the number of repetitions of each test, because 10 or 20 repetitions are extremely exhausting for a volunteer; such high repetition counts disrupt the naturalness of the activities.
In this work, an activity and fall dataset [13] that covers 2520 records of seven males and seven females was used. These 14 volunteers performed 36 sets of movements, comprising ADLs (16 sets) and falls (20 sets), with five repetitions each. During the tests, the volunteers’ movements were recorded with six sensor units, each containing a triaxial accelerometer, gyroscope and magnetometer, placed on different parts of the body: head, chest, waist, right wrist, right thigh and right ankle. The main motivation of this work is to determine the best sensor positioning for wearable fall detection devices out of these six locations on the human body. The classification performances of these sensors are investigated with six machine learning techniques, namely the k-nearest neighbor (k-NN) classifier, Bayesian decision making (BDM), support vector machines (SVM), the least squares method (LSM), dynamic time warping (DTW) and artificial neural networks (ANNs). Each technique is applied to single, double, triple, quadruple, quintuple and sextuple sensor configurations. These configurations create 63 different combinations, and for six machine learning techniques, a total of 63 × 6 = 378 combinations is investigated. Although the classification results of these machine learning algorithms are very satisfactory and stable, this work does not mainly aim to present a robust fall detection device or system, but rather to propose the best sensor location on the human body for wearable fall detection devices. This work focuses especially on the effects of sensor positioning on the fall detection performance of the algorithms, and it is the most comprehensive investigation in the literature in terms of the number of analyzed sensor combinations, the machine learning algorithms employed and the accuracy and sensitivity rates achieved for single-sensor options.
The rest of this work is organized as follows. Section 2, System Design, discusses the dataset and volunteers, the sensor specifications, the experimental setup, data formation, and feature extraction and reduction. Section 3, Materials and Methods, briefly describes the sensor placement combinations on the body, the performance metrics and the six machine learning algorithms used in this work. Section 4, Results and Discussion, presents the results and discusses the performances of the machine learning algorithms for a total of 378 different sensor configurations. The last section concludes the paper with a description of possible future work.

2. System Design

Four main tasks exist in this work, as shown in Figure 1. The first is data acquisition, in which a large dataset of 2520 records was created with 14 volunteers. The second is preprocessing, in which meaningful data segments are formed from the previously created dataset. The third is feature extraction, in which the raw dataset produced by the preprocessing step is reduced in size using feature extraction and dimension reduction techniques. The last step of the fall detection system is classification, which uses the dimensionally-reduced features and gives a binary decision about fall events using machine learning techniques.

2.1. Dataset, Volunteers and Tests

The first step in the design of a robust fall detection system is to use a comprehensive movement dataset, including the ADLs that are most often confused with falls, such as lying in bed, jogging, stumbling, etc. In general, researchers create their own datasets; because of this, the fall detection performance of different platforms cannot be compared. In this work, the ADL and fall movements are adopted from [25] as the trial, and the trial protocol was approved by the Erciyes University Ethics Committee (Approval Number 2011/319). Volunteers gave written informed consent and received oral information about the tests. The seven healthy men, aged 24 ± 3 years, weighing 67.5 ± 13.5 kg and with heights of 172 ± 12 cm, are named 101, 102, 103, 104, 106, 107 and 108. The seven healthy women, aged 21.5 ± 2.5 years, weighing 58.5 ± 11.5 kg and with heights of 169.5 ± 12.5 cm, are named 203, 204, 205, 206, 207, 208 and 209 (see Table 1).
The trial used in this work consists of 20 falls and 16 ADLs. These 36 movements were performed by the 14 young subjects with five repetitions each. The resulting dataset covers 2520 records (36 tests × 14 subjects × 5 repeats = 2520 records), each 15 s long on average. Table 2 summarizes the movements in terms of falls and ADLs. Each movement is labeled with a number: 8XX defines falls, and 9XX defines ADLs. A record label contains a volunteer definition, a movement definition and a repetition number. 102_903_3 is an example of a record label; it refers to male volunteer 102 performing the sit bed activity (903) in the third (3) round, as defined in Table 2.
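The record-naming scheme above is simple enough to decode programmatically. The following is a minimal sketch; the function name and the returned field names are illustrative, not part of the paper:

```python
# Hypothetical helper for decoding record labels such as "102_903_3":
# volunteer ID (1XX male, 2XX female), movement code (8XX fall, 9XX ADL),
# and repetition number (1..5).

def parse_record_label(label: str) -> dict:
    """Split a record label into its volunteer, movement and repetition fields."""
    volunteer, movement, repetition = label.split("_")
    return {
        "volunteer": volunteer,                                # e.g. "102"
        "sex": "male" if volunteer.startswith("1") else "female",
        "movement": movement,                                  # e.g. "903"
        "is_fall": movement.startswith("8"),                   # 8XX codes are falls
        "repetition": int(repetition),                         # 1..5
    }

info = parse_record_label("102_903_3")
# male volunteer 102, ADL movement 903, third repetition
```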
The movement database consists of 2520 uniquely-named records of complex inertial, magnetic, atmospheric pressure sensor and microphone data. The fall movements in the table are commonly observed in real life, and the ADLs contain movements that can easily be confused with falls. Using this standard trial set gives a classification system comparability and robustness: the fall and activity types are sufficient to simulate real-life conditions, and because the movements are standard, the results can be compared with other works.

2.2. Sensor Specifications

In this work, six three-DOF orientation tracker MTw units (see Figure 2), produced by Xsens Technologies, are used [26]. Each unit has a triaxial gyroscope that can sense angular velocities in the range of ±1200°/s, a triaxial accelerometer that can sense accelerations in the range of ±160 m/s², a triaxial magnetometer that can sense the Earth’s magnetic field in the range of ±1.5 Gauss and a pressure sensor that can sense atmospheric pressure in the range of 300 to 1100 mBar. Inertial and magnetic data in three axes (x, y, z) were acquired at a 25-Hz sampling rate, with each test lasting about 15 s on average.
Although inertial and magnetic sensor units are generally designed for ambulatory and motion tracking purposes in restricted laboratory environments, they have successfully been used for indoor as well as outdoor movement analysis and have started to gain popularity in the study of human motion [27,28].
Atmospheric pressure and microphone data were not used in the feature extraction and classification processes, because changes in the pressure sensor’s output during the movements did not provide meaningful information, and there was too much ambient noise: the microphone recorded unwanted sounds along with the voices and movements during the tests. Preventing noise in the restricted test area is not a solution, because noise will be present in real life as well. Therefore, in order to create a robust classifier, the acquired data must reflect a real environment. Works in the literature mostly suffer from laboratory-based setups [11,13]; this explains why many works in the literature report very good fall detection sensitivities, yet there is no off-the-shelf product on the market. In this work, the test scenarios need to be realistic and compatible with real-life conditions.
Inertial and magnetic sensors used in smartphones and other wearable devices may not be as sensitive or provide as broad a range of motion data as MTw units do. However, the performance of different sensor units placed at the body locations proposed in this work should be similar to that of the sensors employed in this study. This work aims to determine the optimum body locations for general wearable fall detection devices; therefore, it is believed that the results achieved in this work can be reproduced with different sensors in the same setup.

2.3. Experimental Setup

Six MTw sensor units are placed on different parts of the subject’s body with a special strap set. Each strap has a sensor housing; thus, the MTw sensor units are easily attached to and detached from these straps. A special strap with a sensor housing for the wrist is shown in Figure 3a. Extra foam covers are wrapped around all of the sensor units in order to protect them from direct impact with the ground. Because the sensor’s case is made of very tough plastic, a direct hit on this hard surface amplifies the shock; the human body is relatively soft, and such exaggerated acceleration values would not be observed on it.
Each strap is specially designed for an individual body part (see Figure 3b). In this work, the head, chest, waist, right wrist, right thigh and right ankle were chosen as the sensor locations (see Figure 3c). Each sensor unit is marked with a letter as follows: A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle. It is very important to attach the sensors to the subject’s body tightly in order to capture body movements correctly. The MTw sensor units connect to a computer over ZigBee; wireless data acquisition lets the user perform movements more naturally than cabled systems do. Elaborate precautions were taken to protect the volunteers from injury as part of the ethics committee approval. First of all, volunteers were asked to wear a helmet, elbow pads, and knee and wrist guards; this protective gear shields these body parts from dangerous shock. Lastly, the fall movements were performed on a soft crash mat to reduce the shock to the body.
Current microelectromechanical systems (MEMS)-based inertial and magnetic sensors are typically light and thin; thus, they can readily be worn as accessories or embedded in clothing. This work chose the sensor locations to be the same places where accessories or daily wearables can be worn: the head (A) with a hat, glasses or earrings; the chest (B) with underwear, a necklace or a brooch; the waist (C) with a belt or waistband; the wrist (D) with a watch, gloves or an armband; the thigh (E) with a pocket; and the ankle (F) with socks, shoes or boots [29]. The strap sets used in this work are suitable for these locations, and they attached the sensors to the body perfectly, as shown in Figure 3b. Furthermore, the thigh, waist, chest, wrist and head are important locations for collecting vital biomedical signals, such as EMG, EKG and EEG or heat and perspiration. Vital body parameters can be gathered with custom-designed fall detection devices and sent to remote monitoring services together with the motion data.
Movements on one side of the body have effects opposite to those of similar-pattern movements on the other side [30]. Because of this biomechanical symmetry, the ankle, thigh and wrist sensors were attached only on the right side of the body. Moreover, using only one side of the body decreases the number of sensors on the outer limbs and thereby reduces the computational complexity of the classification algorithms.

2.4. Data Formation

Xsens’ Awinda Station (Figure 2c) reads the six MTw units over an RF connection and transfers the data to a PC through a USB interface. The station is controlled by Xsens’ MT Manager software, which creates six comma-separated-values files simultaneously in a single test. Each data file comprises 10 columns and 25 lines per second; the average line count is 15 s × 25 Hz = 375 lines. The columns are Counter, A_x, A_y, A_z, G_x, G_y, G_z, M_x, M_y and M_z. Counter is a kind of time stamp that counts up with each sample; it is used to check synchronization and detect missed data. A, G and M abbreviate the accelerometer, gyroscope and magnetometer data, respectively, and x, y and z denote the perpendicular axes (see Figure 4). At the end of the complete trial, 15,120 files covering the 36 movements (Table 2) were created (14 volunteers × 36 tests × 5 repeats × 6 files = 15,120). In order to reduce the dimension of this huge dataset, the 15 s-long frames are cut into 4 s-long frames. A 4-s frame consists of two 2-s frames around the maximum total acceleration (TA) of the waist sensor. TA is given in Equation (1):
TA = \sqrt{A_x^2 + A_y^2 + A_z^2}     (1)
TA is a vector consisting of m total-acceleration values computed from the acceleration along the x, y and z axes, which are A_x, A_y and A_z, respectively. A test lasts around 15 s; therefore, m is approximately 375 (15 s × 25 Hz). The first 50 (2 s × 25 Hz) and the last 50 elements of a test record are not considered, and outliers are eliminated, because these sections cover the preparation and completion of an activity; high acceleration values in these phases are therefore not meaningful. The maximum acceleration value in the TA vector is searched for among the remaining 275 values, and a 4 s-long vector is constructed from the two 2 s-long segments around the maximum of the TA vector. A 2-s segment consists of 50 elements (2 s × 25 Hz); therefore, the two 2-s frames around the maximum TA value contain 101 elements (50 samples + max. TA value + 50 samples) (Figure 4). The same data reduction procedure is applied to the remaining five sensor units using the exact time index stamped in the Counter column of a record. A record contains acceleration, rate of turn and Earth magnetic field values in three axes.
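The windowing procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper’s implementation; the function name and the synthetic test record are assumptions:

```python
import numpy as np

FS = 25           # sampling rate in Hz
GUARD = 2 * FS    # 2-s guard bands excluded at both ends (50 samples)
HALF = 2 * FS     # 2-s half-window on each side of the peak (50 samples)

def peak_window(acc):
    """Return the 101-sample (4-s) segment centred on the peak total acceleration.

    acc: (m, 3) NumPy array of waist-sensor A_x, A_y, A_z samples at 25 Hz.
    """
    ta = np.sqrt((acc ** 2).sum(axis=1))        # total acceleration, Equation (1)
    # Search only the interior, skipping the 2-s preparation/completion phases.
    interior = ta[GUARD:len(ta) - GUARD]
    peak = GUARD + int(np.argmax(interior))     # index back into the full record
    return acc[peak - HALF: peak + HALF + 1]    # 50 samples + peak + 50 samples

# Demonstration on a synthetic 15-s record (375 samples) with an impact spike:
rng = np.random.default_rng(0)
acc = rng.normal(0.0, 1.0, size=(375, 3))
acc[200] = [30.0, 5.0, 5.0]                     # simulated impact
seg = peak_window(acc)
```

Because the peak is searched for only within the interior 275 samples, the 101-sample window always fits inside the record.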
In this manner, six 101 × 9 (101 lines × 9 columns) arrays of data are created from a single repetition of a test listed in Table 2, one per sensor unit. Each column defines the acceleration, angular velocity or Earth magnetic field in the x, y or z axis. A column is represented by an N × 1 vector d = [d_1, d_2, …, d_N]^T, where N = 101. The first three features extracted from the vector d are the minimum, maximum and mean values. The second three features are the variance, skewness and kurtosis. The next eleven features are the first 11 values of the autocorrelation sequence. The last ten features are the first five peaks of the discrete Fourier transform (DFT) of the column together with the corresponding frequencies. In this work, six types of features were utilized, five of them statistical (mean, variance, skewness, kurtosis and autocorrelation). The 4 s-long signal segments were assumed to be realizations of an ergodic process; therefore, time averages were used instead of ensemble averages. However, in order to prevent the features from depending on the spatial domain alone, DFT features were used as well. The discrete cosine transform and the total energy of the signal were also considered; however, since extra coefficients and features enlarge the feature vector, extra features such as the total energy, cosine transform or wavelet transform coefficients were not used. Had the classification results of this work been unsatisfactory, such extra features would have been used to improve the accuracies of the machine learning algorithms for fall detection.
\mathrm{mean}(d):\quad \mu = \frac{1}{N}\sum_{i=1}^{N} d_i
\mathrm{variance}(d):\quad \sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(d_i - \mu\right)^2
\mathrm{skewness}(d):\quad \frac{1}{N\sigma^3}\sum_{i=1}^{N}\left(d_i - \mu\right)^3
\mathrm{kurtosis}(d):\quad \frac{1}{N\sigma^4}\sum_{i=1}^{N}\left(d_i - \mu\right)^4
\mathrm{autocorrelation}(d):\quad R_{ss}(\Delta) = \frac{1}{N-\Delta}\sum_{i=0}^{N-\Delta-1}\left(d_i - \mu\right)\left(d_{i+\Delta} - \mu\right),\quad \Delta = 0, 1, \ldots, N-1
\mathrm{DFT}(k) = \sum_{i=0}^{N-1} d_i\, e^{-j 2\pi k i / N},\quad k = 0, 1, \ldots, N-1
Here, d_i represents the i-th element of the discrete vector d, μ is the mean and σ is the standard deviation of d. R_ss(Δ) is the unbiased discrete-time autocorrelation sequence of d, and DFT(k) is the k-th element of the N-point DFT. The maximum, minimum, mean, skewness and kurtosis features are all scalar values; the autocorrelation feature has eleven values, and the DFT feature has ten values. Each sensor unit (MTw) is triaxial (A_x, A_y, A_z, G_x, G_y, G_z, M_x, M_y, M_z); therefore, nine signals in total are recorded from a unit. Since there are six sensor units, 54 signals (9 signals × 6 sensors) exist, and a total of 270 features (54 signals × 5 features) is created for the five scalar features. Similarly, 270 features are created for the first five DFT peak values, and another 270 features for the five corresponding frequencies. Lastly, 594 features (11 × 9 × 6) are created for the 11-element autocorrelation sequences. The feature formation process results in a feature vector of dimension 1404 × 1 (270 + 270 + 270 + 594) for each 4-s signal segment of a test. Feature vector formation is shown in Figure 5. This feature vector is created for a single test; however, in this work, there are 2520 movements, including 1400 falls (14 volunteers × 20 falls × 5 repetitions) and 1120 ADLs (14 volunteers × 16 ADLs × 5 repetitions). Therefore, a feature set of dimension 1404 × 2520 is generated from the features extracted from the 2520 movements.
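As a rough sketch, the per-column feature computation described above can be written as follows. The per-signal total implied by the counts in the text is 26 features (5 scalar statistics, 11 autocorrelation values, and 5 DFT peaks with their 5 frequencies; 26 × 54 = 1404); the exact feature ordering and estimator conventions used here are assumptions, not taken from the paper:

```python
import numpy as np

FS = 25  # sampling rate (Hz)

def column_features(d):
    """Return the 26 features of one 101-sample signal column (a sketch)."""
    n = len(d)
    mu, sigma = d.mean(), d.std()
    scalars = [d.min(), d.max(), mu,
               ((d - mu) ** 3).mean() / sigma ** 3,   # skewness
               ((d - mu) ** 4).mean() / sigma ** 4]   # kurtosis
    # First 11 values of the unbiased autocorrelation sequence, R_ss(0..10).
    ac = [((d[:n - k] - mu) * (d[k:] - mu)).sum() / (n - k) for k in range(11)]
    # Five largest DFT magnitude peaks and their corresponding frequencies.
    spec = np.abs(np.fft.rfft(d))
    idx = np.argsort(spec)[-5:][::-1]          # indices of the 5 largest bins
    freqs = idx * FS / n                       # bin index -> frequency in Hz
    return np.concatenate([scalars, ac, spec[idx], freqs])
```

Applying this function to all 9 columns of all 6 sensor arrays and concatenating the results would yield the 1404-element vector described above.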

2.5. Feature Extraction and Dimensional Reduction

The resulting feature vector is quite large (1404 × 1), which increases the computational complexity of the classifiers in both the training and testing phases. Therefore, the size of the feature vector (see Figure 5) needs to be reduced in order to simplify the computational process. Not every element of the feature vector contributes equally to characterizing an individual movement, since each movement has a different signal pattern. Therefore, principal component analysis (PCA) is used as the dimension reduction method. PCA is the most traditional and efficient dimension reduction technique; it transforms the original variables f(1), f(2), …, f(m) into a smaller group of new variables x(1), x(2), …, x(p), where p ≪ m [31] and m is 1404. A single feature vector is given below:
f_k = \left[\, f(1)_k \;\; f(2)_k \;\cdots\; f(1404)_k \,\right]^T
Here, k = 1, 2, …, 2520, and the raw feature set F is defined by a matrix of size 1404 × 2520:
F = \left[\, f_1 \;\; f_2 \;\cdots\; f_{2520} \,\right]
\bar{f} defines the mean of an individual feature vector and is calculated for each feature vector as below, where n runs over the 1404 elements of the feature vector:
\bar{f} = \frac{1}{1404}\sum_{n=1}^{1404} f(n)
P_1, P_2, …, P_{30} are the principal components (PCs) and are calculated by the following formula, where C_f is the covariance matrix of a feature vector and the first 30 leading eigenvectors of the covariance matrix give the principal components. The first principal component has the largest possible variance, and each succeeding component has the next largest variance in turn.
C_f = \frac{1}{1403}\sum_{n=1}^{1404}\left(f(n) - \bar{f}\right)\left(f(n) - \bar{f}\right)^T
An orthogonal basis set is produced using the eigenvectors P_i and eigenvalues λ_i of the symmetric covariance matrix C_f; therefore, the PCs are orthogonal as well.
C_f P_i = \lambda_i P_i, \quad i = 1, 2, \ldots, 30
However, the eigenvalues λ_i need to be calculated first in order to find the eigenvectors P_i; here, I is the identity matrix:
\det\left(C_f - \lambda I\right) = 0
This process shows how the principal components (PCs) are calculated for a single raw feature vector; the final reduced feature set is obtained by repeating the process 2520 times. The initial raw feature vector was quite large (1404 × 1), which made distinguishing falls from ADLs computationally complex. Therefore, the raw feature vector, first normalized between zero and one, was reduced from 1404 to 30 elements using PCA. Thirty PCs are used because they constitute 72.38% of the total variance of the original data (the 1404-element feature vector), and this ratio represents much of the variability of the raw feature vector.
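The eigendecomposition-based reduction above can be sketched as follows. This is a generic NumPy PCA implementation, not the paper’s MATLAB code; the centering convention and the function name are assumptions:

```python
import numpy as np

def pca_reduce(F, n_components=30):
    """Project feature vectors onto the leading principal components.

    F: (n_samples, n_features) feature set, assumed already normalized to [0, 1].
    Returns the reduced data and the fraction of total variance retained.
    """
    Fc = F - F.mean(axis=0)                    # centre each feature
    cov = np.cov(Fc, rowvar=False)             # covariance matrix C_f
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]          # eigenvalues in descending order
    P = eigvecs[:, order[:n_components]]       # leading eigenvectors = PCs
    retained = eigvals[order[:n_components]].sum() / eigvals.sum()
    return Fc @ P, retained
```

With the full 1404 × 2520 feature set, `retained` would correspond to the 72.38% variance figure quoted above for 30 PCs.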

3. Materials and Methods

Six inertial and magnetic sensors create 2^6 (= 64) different sensor placement combinations on the body, as given in Table 3. However, one of these combinations, shown in the first line of the table, is not applicable, since it refers to the sensorless configuration. The remaining 63 possible configurations are evaluated with six different machine learning techniques, and the fall detection accuracy of each configuration is calculated. Sixty-three sensor placement combinations with six machine learning techniques create a total of 63 × 6 = 378 combinations. Each of these combinations is scored with its individual accuracy in the MATLAB environment. In this way, one can decide which body part is the most convenient for sensor placement in fall detection applications. The machine learning techniques used in this work are discussed briefly in this section.
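The 63 sensor subsets and 378 subset–classifier pairs can be enumerated in a few lines of Python (a sketch; the sensor and algorithm names follow the text):

```python
from itertools import combinations

SENSORS = ["head", "chest", "waist", "right-wrist", "right-thigh", "right-ankle"]
CLASSIFIERS = ["k-NN", "BDM", "SVM", "LSM", "DTW", "ANN"]

# Every non-empty subset of the six sensor units: C(6,1) + ... + C(6,6) = 63
subsets = [c for r in range(1, 7) for c in combinations(SENSORS, r)]

# Each subset is evaluated with each machine learning technique: 63 x 6 = 378
runs = [(s, clf) for s in subsets for clf in CLASSIFIERS]
print(len(subsets), len(runs))  # 63 378
```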
Since wearable devices are mobile and battery powered, the power consumption of the system must be considered [32]. Another important issue for wearable fall detection systems that needs to be considered is adoption by the elderly population [33]. Elderly people are the main fall risk group; most of them have motion limitations and are not familiar with current technology. Therefore, a fall detection system is required to be technologically simple to use. These requirements force the designer to find the optimum solution. In this work, the main motivation is to reduce the number of sensor nodes attached to the user's body. When the number of sensor nodes is reduced, an elderly person uses the system more easily, and the power consumption of the system decreases. The computational power required in both the feature extraction and decision steps also decreases with the reduction of sensor units. For example, in the feature extraction step, the feature vector size is 1404 × 1 with six sensors, but with a single sensor it is only 234 × 1. This reduction in the size of the feature vector also reduces the computational complexity of the feature extraction step. Another important advantage concerns the PCs, since the classification algorithms use the first 30 PCs of the raw feature vector, and these 30 PCs are much more descriptive of the reduced data. For example, while the largest 30 eigenvalues, in other words 30 PCs, constitute 72.38% of the total variance of a 1404 × 1 sized feature vector, the same number of PCs constitutes 96.63% of the total variance of a 234 × 1 sized feature vector and accounts for almost all of the variability of the data. This improves classification performance, because reducing the size of the feature vector with the PCA method makes the raw data much more descriptive.
The fall detection system designed in this work produces a binary decision about whether a fall event occurred or not. The performances of the machine learning techniques employed in this work are compared in terms of accuracy, sensitivity and specificity. In order to determine these performance parameters, the four possibilities that a binary classifier can encounter need to be considered. The first possibility is called a true positive (TP): a fall occurs and the algorithm detects it. In the second case, a fall does not occur and the algorithm does not produce a fall alert; this possibility is called a true negative (TN). TP and TN are correct decisions by the algorithm. Wrong decisions by the algorithm are annotated with false labels. The third case is a false positive (FP): a fall does not occur, but the algorithm creates a fall alert; this case is also called a false alarm. The most dangerous and unwanted case is a false negative (FN): a fall occurs, but the algorithm does not detect it. This case is also called a missed fall.
The binary classifier’s parameters can be formulated using the definitions given above. The ratio of truly classified falls to all falls is called the sensitivity (Se). In other words, Se is a parameter defining how successfully the algorithm senses falls. Se measures the proportion of positives that are correctly identified as falls and indicates how well the algorithm predicts falls.
$Se = \frac{\text{Truly Classified Falls}}{\text{All Falls (Positives)}} \times 100 = \frac{TP}{TP + FN} \times 100$
Specificity (Sp) measures the proportion of negatives that are correctly identified as ADLs. Sp refers to the ability of the algorithm to correctly identify ADLs and indicates how well the algorithm predicts the ADLs.
$Sp = \frac{\text{Truly Classified ADLs}}{\text{All ADLs (Negatives)}} \times 100 = \frac{TN}{TN + FP} \times 100$
Accuracy (Acc) measures how well the algorithm performs overall, combining both Se and Sp; it can be derived from Se and Sp as well.
$Acc = \frac{\text{Truly Given Decisions}}{\text{All Decisions}} \times 100 = \frac{TP + TN}{TP + TN + FP + FN} \times 100 = \frac{Se + Sp}{2}$
Hence, a good binary classifier is expected to have high scores for all three factors, Se, Sp and Acc. However, specifically for the fall detection problem, the success of the algorithm is mostly dependent on the frequency of FN decisions. False alarms, FP, can be ignored by the user, as this fault is not considered an important problem. However, a missed fall, FN, is a serious mistake for the algorithm, and for a reliable binary classifier, FN is expected to be 0. For any classifier, there is a tradeoff between Sp and Se, and this relationship can be formulated as below.
$FP\ \text{ratio} = 1 - Sp$
$FN\ \text{ratio} = 1 - Se$
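The four counts and the three derived metrics can be sketched as a small Python function. The example counts mirror the single-waist-sensor k-NN result reported later in the text (13,994 of 14,000 falls detected); the zero false-alarm count is purely illustrative:

```python
def binary_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy (in %) as defined above."""
    se = 100.0 * tp / (tp + fn)   # how well falls are detected
    sp = 100.0 * tn / (tn + fp)   # how well ADLs are recognized
    acc = (se + sp) / 2.0         # averaged form used in the text
    return se, sp, acc

# Illustrative counts: 13,994 detected falls out of 14,000, and (hypothetically)
# no false alarms among 11,200 ADL evaluations over 10 rounds.
se, sp, acc = binary_metrics(tp=13994, tn=11200, fp=0, fn=6)
print(round(se, 2))  # 99.96
```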
In this work, the aim is to improve the Acc parameter while maintaining a 100% Se success rate, because the reliability of the algorithm increases with increased sensitivity. In some cases, loss of consciousness and movement can follow a fall; on the other hand, loss of consciousness and movement can trigger a fall as well. Hence, in these scenarios, if the algorithm misses the fall, the person who experienced it will be left alone without help when he/she needs it the most. Therefore, fall detection systems must have very high Se rates, and there is no tolerance for missed falls. However, false alarms (FPs) produced by the algorithm confuse and occupy the system unnecessarily and need to be avoided as well.
Recent work by the author [13] reports successfully distinguishing fall actions from real-world ADLs, many of which are high-impact activities that may easily be confused with falls. In that work, six sensor units, each with a three-axis accelerometer, gyroscope and magnetometer, were fitted to the subjects' bodies at six different positions, and falls were distinguished with 100% Se and 99.8% Sp [13]. However, this work differs from the previous one in that it aims to make the system easier to use by reducing the number of sensor nodes while keeping the Se rate at an acceptable level. In both the training and testing phases, the dataset is randomly split into p = 10 equal partitions, and p-fold cross-validation is employed. Nine of the ten partitions (p − 1) are used for training, and the remaining subset is used for testing. So that each record in the dataset gets a chance for validation, the training and testing partitions are crossed over in p successive rounds. This process is applied not only so that each record contributes to both the training and validation stages, but also to avoid the approximation errors that may occur due to an unbalanced number of falls or ADLs among rounds.
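The splitting scheme can be sketched as follows (illustrative Python, not the MATLAB code used in the study):

```python
import random

def kfold_indices(n_samples, p=10, seed=42):
    """Randomly split sample indices into p equal partitions; in each of
    the p rounds one partition is held out for testing and the other
    p - 1 partitions are used for training."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)        # random split of the dataset
    folds = [idx[i::p] for i in range(p)]   # p equal partitions
    for i in range(p):
        test = folds[i]
        train = [j for k, f in enumerate(folds) if k != i for j in f]
        yield train, test

# 2520 movements split into 10 folds of 252 records each
rounds = list(kfold_indices(2520, p=10))
```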

3.1. The k-Nearest Neighbor Classifier

Basically, the k-NN method classifies a test object by finding the closest training object(s) to it [34]. The binary decision is made by majority voting among the k nearest neighbors, where k > 0 is a user-defined value. However, because the k-NN algorithm is sensitive to the local composition of the dataset, a proper k value should be defined specifically for each problem. There is a tradeoff between sensitivity and robustness, so determining the k value is critical. For example, a larger k reduces sensitivity by increasing the bias, whereas a smaller k produces less stable results by increasing the variance. This is why the correct k value depends on the local data. Values of k between 1 and 15 were tried, and the best result is obtained with k = 7.
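A minimal k-NN sketch with the chosen k = 7, run here on hypothetical toy clusters rather than the study's feature vectors:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=7):
    """Classify x by majority vote among its k nearest training vectors
    (Euclidean distance); k = 7 gave the best results in this work."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training objects
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
    return int(np.sum(nearest) > k / 2)       # 1 = fall, 0 = ADL

# Toy check: two well-separated clusters standing in for fall/ADL features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, np.full(3, 5.0), k=7)
```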

3.2. Bayesian Decision Making

BDM is a robust algorithm that is frequently preferred in statistical pattern classification problems. In this work, the likelihood in BDM is defined by the normal density discriminant function, and the class decision is estimated using the maximum likelihood index for a given test vector x. The function parameters are the mean µ of the training vectors and the covariance matrix C of the training vectors for each class [34]. Since the mean vector and covariance matrix are calculated using the training records of the two classes, these values are constant within each fold. The maximum likelihood is searched, and the class decision is given for each test vector as follows:
$g_i(x) = -\frac{1}{2} \left\{ (x - \mu_i)^T C_i^{-1} (x - \mu_i) + \log[\det(C_i)] \right\}, \quad i = 1, 2$
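A sketch of this discriminant in Python on toy Gaussian data; the per-class means and covariances are fitted as described above, though the data here are synthetic:

```python
import numpy as np

def bdm_fit(X, y):
    """Per-class mean vector and covariance matrix from the training records."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def bdm_predict(params, x):
    """Pick the class maximizing the normal-density discriminant g_i."""
    best, best_g = None, -np.inf
    for c, (mu, C) in params.items():
        d = x - mu
        g = -0.5 * (d @ np.linalg.inv(C) @ d + np.log(np.linalg.det(C)))
        if g > best_g:
            best, best_g = c, g
    return best

# Toy two-class problem standing in for falls (1) vs. ADLs (0)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = bdm_fit(X, y)
```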

3.3. Support Vector Machines

SVM is a very promising classification algorithm; however, it does not guarantee the best accuracy for all kinds of problems. The initial set of coefficients and the kernel model also affect the classification accuracy. (x_j, l_j), j = 1, …, J is the training data of length J, where x_j ∈ ℝ^N and l_j ∈ {1, −1} are the class labels; two classes exist: falls and ADLs. The LIBSVM toolbox in the MATLAB environment is employed with the radial basis kernel function K(x, x_j) = e^{−γ|x − x_j|²}, where γ = 0.2 [35].
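The radial basis kernel itself can be written out directly; this sketch only evaluates the kernel (Gram) matrix with the stated γ = 0.2, not a full LIBSVM training run:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.2):
    """K(x, x_j) = exp(-gamma * ||x - x_j||^2), the radial basis kernel
    used with LIBSVM in the text (gamma = 0.2)."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# Two toy points one unit apart: K off-diagonal = exp(-0.2)
X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X)
```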

3.4. Least Squares Method

The LSM algorithm needs two average reference vectors, calculated for falls and ADLs, in order to come up with a class decision [34]. For a given test vector x = [x_1, …, x_M]^T, the sum of squared errors ε_i² between x and the reference vectors r_i = [r_{i1}, …, r_{iM}]^T, i = 1, 2, is calculated. The class decision is given by minimizing ε_i²:
$\varepsilon_i^2 = \sum_{m=1}^{M} (x_m - r_{im})^2, \quad i = 1, 2$
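A minimal sketch of this decision rule; the reference vectors here are hypothetical toy values rather than the averaged fall/ADL feature vectors of the study:

```python
import numpy as np

def lsm_predict(x, references):
    """Assign x to the class whose average reference vector gives the
    smallest sum of squared errors."""
    errs = [((x - r) ** 2).sum() for r in references]
    return int(np.argmin(errs))

# references[0] = mean ADL vector, references[1] = mean fall vector (toy values)
refs = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
cls = lsm_predict(np.array([2.5, 2.9]), refs)
```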

3.5. Dynamic Time Warping

DTW finds the optimal alignment between two given time-dependent sequences under certain restrictions. In order to achieve a good match, the sequences are warped nonlinearly by stretching or compressing them in the time domain. Basically, the costs between elements of the test vector and the reference vectors are calculated using the Euclidean distance as the cost measure. DTW aims to find the least-cost warping path and allows similar shapes to match even if they are out of phase in the time axis. Because of its adaptive structure, DTW is widely used in many pattern recognition problems, such as speech recognition, signature and gait recognition, fingerprint pairing, face localization in color images and ECG signal classification [36].
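A textbook dynamic-programming implementation of the DTW distance (a sketch on scalar sequences; the study applies the idea to multi-channel sensor feature vectors):

```python
def dtw_distance(a, b):
    """Minimal cumulative alignment cost between sequences a and b,
    allowing stretching/compression along the time axis."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])   # local distance between elements
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Identical shapes that are out of phase still match with zero cost
d = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```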

3.6. Artificial Neural Networks

ANN is one of the most preferred classifier models for pattern recognition and classification problems. An ANN can be defined as a set of independent processing units that receive their inputs through weighted connections [31]. In this work, a multilayer ANN consisting of one input layer, two hidden layers and one output layer has been used. The input layer has 30 input neurons, and the output layer has one output neuron. In the hidden layers, the sigmoid activation function is used, while the output neuron uses the purelin linear activation function. The ANN is created with the Neural Networks Toolbox in the MATLAB environment and trained with a back-propagation algorithm, namely the Levenberg–Marquardt (LM) algorithm. With the data normalized between 0 and 1, the class decision is made by the following rule:
$\text{Class} = \begin{cases} \text{ADL}, & OUT \geq 0.5 \\ \text{Fall}, & OUT < 0.5 \end{cases}$
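The topology and decision rule can be sketched as a forward pass with hypothetical random weights (untrained and for illustration only; the study trains the network with Levenberg–Marquardt in MATLAB):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2, W3):
    """30 inputs -> two sigmoid hidden layers -> one purelin (linear)
    output neuron, mirroring the topology described above."""
    h1 = sigmoid(W1 @ x)
    h2 = sigmoid(W2 @ h1)
    return (W3 @ h2).item()   # linear output

def decide(out):
    """Class rule from the text: OUT >= 0.5 -> ADL, OUT < 0.5 -> Fall."""
    return "ADL" if out >= 0.5 else "Fall"

# Toy random weights; hidden-layer widths are illustrative assumptions
rng = np.random.default_rng(3)
W1 = rng.normal(size=(10, 30))
W2 = rng.normal(size=(10, 10))
W3 = rng.normal(size=(1, 10))
label = decide(forward(rng.normal(size=30), W1, W2, W3))
```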

4. Results and Discussion

Comparisons of the six machine learning algorithms' accuracy performances based on sensor combinations are shown in Table 4. When all six sensor nodes are used for classification, the k-NN algorithm gives 99.91% accuracy; however, the best accuracy (99.94%) is achieved using three different sensor combinations, ECA, FDBA and FECA, which are right-thigh_waist_head, right-ankle_right-wrist_chest_head and right-ankle_right-thigh_waist_head, respectively. Increasing the number of sensor nodes does not guarantee the best classification results. When the single-sensor results are examined, it is clear that the waist sensor, labeled C, alone gives the best accuracy (99.87%) with the k-NN algorithm.
The BDM algorithm's accuracy performance is promising and satisfactory. BDM produces 99.26% classification accuracy when six sensor units are used in the calculation; however, the best accuracy (99.90%) is achieved with the ECA sensor combination (right-thigh_waist_head). The waist sensor, labeled C, gives 99.24% accuracy by itself.
SVM is known as a very robust classifier, and its classification accuracy performance is also very good in this work. SVM classification accuracy is 99.48% when all six sensor nodes are used; however, the best accuracy (99.69%) is achieved with the FEA and FEBA sensor combinations, which are right-ankle_right-thigh_head and right-ankle_right-thigh_chest_head, respectively. When the single-sensor results are examined, it is clear that the right-thigh sensor, labeled E, alone gives the best accuracy (99.27%) with the SVM algorithm.
The LSM algorithm has a very simple structure in terms of computation, and this characteristic makes it advantageous in embedded hardware implementations. The LSM algorithm's accuracy performance is also good, but its sensitivity is lower than that of the k-NN, BDM and SVM algorithms. When all six sensor nodes are used, LSM gives 99.65% accuracy; however, the best accuracy (99.67%) is achieved with the FEBA and FEDA sensor combinations, right-ankle_right-thigh_chest_head and right-ankle_right-thigh_right-wrist_head, respectively. When the single-sensor behaviors are taken into consideration, it is clear that the waist sensor, labeled C, alone gives the best result, with 98.46% accuracy.
DTW produces 97.85% accuracy when all six sensor nodes are used in the classification process. However, the best accuracy (98.67%) is achieved with a quintuple sensor combination (EDCBA), which is right-thigh_right-wrist_waist_chest_head. The waist sensor, labeled C, gives 98.29% accuracy by itself, and this is the best single-sensor result for DTW.
When all six sensor nodes are used, ANN gives 95.68% accuracy; however, the best accuracy (96.27%) is achieved with the ECBA sensor combination, that is, right-thigh_waist_chest_head. When the single-sensor results are investigated from the table, the waist sensor, labeled C, is found to be the most accurate, with an accuracy of 95.68%.
The best results of the double, triple, quadruple and quintuple sensor configurations are given under the individual machine learning algorithm captions in Table 5. The accuracies of all of these sensor configurations are over 99% for the k-NN, BDM, SVM and LSM algorithms. It is clearly seen from the table that fall detection performance gradually decreases for the DTW and ANN algorithms. Perfect sensitivity (a 100% Se rate) is achieved both with the FDBA (right-ankle_right-wrist_chest_head) quadruple sensor configuration using the k-NN algorithm and with the FEDCA (right-ankle_right-thigh_right-wrist_waist_head) quintuple sensor configuration using the LSM algorithm. The best performance of this work is achieved with the FDBA configuration using the k-NN algorithm (100% Se, 99.89% Sp and 99.94% Acc); this result is even better than the FEDCBA configuration (100% Se, 99.84% Sp and 99.91% Acc), in which all sensor units are used.
This work proves that it is possible to achieve very high accuracies with single-sensor options in fall detection applications. There is no strong relation between the number of sensor units and classification performance, because increasing the number of sensor units improves the accuracy only slightly. Table 6 summarizes single-sensor accuracies for the six machine learning techniques, and almost all of them are higher than 95%. The best single-sensor performance (99.96% sensitivity, 99.87% accuracy and 99.76% specificity) is achieved with the waist sensor using the k-NN algorithm. A sensitivity of 99.96% is a very good performance, because there are only six false negatives; in other words, only six missed falls exist over 10 rounds of the whole dataset. The dataset consists of 2520 movements, and 1400 of them are falls. When the classifier runs in 10 rounds, 14,000 falls are evaluated by the classification algorithm. The k-NN algorithm detects 13,994 falls out of 14,000, which implies that the algorithm reaches 100% sensitivity in some rounds (see Table 7).
When the single-sensor solutions' results are investigated, it is clearly seen that the best performance (99.87% accuracy) is achieved with the waist sensor, labeled C, using the k-NN algorithm, and the average accuracy of the six machine learning techniques for this sensor (waist) is 98.42%. The waist sensor is the closest unit to the trunk; thus, it is not affected much by interpersonal differences in the body movements of subjects. This immunity to position-based interpersonal differences enables better performance than the outer limbs. The second best performance (97.89% accuracy) is achieved with the right-thigh sensor, labeled E, as an average over the six machine learning techniques. The right-ankle sensor, labeled F, is the third best sensor placement part of the body and produces 97% average accuracy. The head sensor, labeled A, gives 96.61% average accuracy, the fourth best performance. Despite the fact that the chest sensor, labeled B, is close to the trunk, similar to the waist and thigh sensors, its average accuracy is only 96.50%. The reason for this lack of performance is anatomical interpersonal differences: the position of the chest sensor varies depending on the subjects' gender, posture and physical characteristics, such as obesity, thinness, etc. This increases interpersonal differences and indirectly decreases fall detection accuracy. The worst performance is observed with the right-wrist sensor, labeled D, with 94.97% average accuracy, because this outer limb is the location where interpersonal differences (behavior and movement) become evident. Although the wrist is currently the most preferred body location for wearable devices, this location is not suitable for fall detection applications. The average sensor accuracies discussed in this section are given in Table 8.
Location-based average accuracies for single sensor units are visualized in Figure 6. The waist sensor unit gives the best accuracy performance with all machine learning algorithms used in this study except SVM (see Table 6). The average accuracy of the waist sensor over the six machine learning algorithms is 98.42%, the best average accuracy. The second best average accuracy (97.89%) is achieved with the thigh sensor unit. The third best average accuracy is 97.00%, achieved with the ankle sensor unit. The fourth is 96.61%, achieved with the head sensor unit. The fifth is 96.50%, achieved with the chest. The wrist sensor unit gives the worst accuracy performance with all six machine learning algorithms except DTW. The average accuracy of the wrist sensor unit is 94.92%.
The observed performance differences between the body parts give clues about determining the correct sensor placement location on the body. The sensor unit in the waist region is the closest to the center of gravity of the body; this may be the main reason why the best performance is observed at that location. However, it is understood from the results that using the waist sensor as a reference in calculating the maximum value of the TA vector also makes only a very small contribution to the fall detection performance of the waist sensor. Better accuracies were expected with the head sensor unit, because the head is the region that contains the human vestibular (balance) system; therefore, the head region anatomically has critical importance for sensing body movements. It is believed that the reason for the worse performance of the head sensor unit lies in the dataset used in this work, because the dataset was created using real-world ADLs but voluntary (simulated) falls. Hence, even if volunteers make an effort to perform falls naturally, their body reflexes, driven by the autonomic nervous system, keep their heads from heavy impacts on the ground.
The right-ankle sensor unit's fall detection performance was the third best, after the right-thigh sensor unit. Despite the fact that the ankle is an outer limb like the wrist, better accuracy was achieved with the right-ankle sensor unit than with the right-wrist sensor unit. The reason this sensor gives a better classification result can be explained by the fact that the feet have limited motion compared to the hands/arms, as the feet carry the body.
The results can also be analyzed in terms of algorithm performance; the best performance is achieved with the k-NN algorithm. The average accuracy of the six single sensors for k-NN is 99.92%. The BDM algorithm has 97.77% average accuracy, the second best classification performance. The SVM algorithm's average accuracy is 97.49%, slightly lower than BDM's and the third best classification performance. The LSM, DTW and ANN algorithms have average accuracies of 96.6%, 95.6% and 94.6%, respectively; there is only about a 1% performance difference between these three classifiers. The performance summary is given in Table 8, for both sensor location and algorithm, in reverse order. The efficiency of the classification algorithm is more decisive for fall detection performance than the sensor location. As a result, it is possible to create a robust automatic fall detection device using the k-NN algorithm with a single sensor unit placed at the waist region of the body. The last section of Table 8 gives the six best individual sensor performances. The first four are achieved with the k-NN algorithm but different sensor units. In addition, the waist sensor repeats its location-based performance advantage by appearing again in the table with the sixth best performance under another classifier, BDM.
This work is uniquely focused on determining the best sensor placement part on the human body for wearable fall detection systems. A dataset consisting of 2520 movements, including 16 types of ADLs and 20 types of falls, was created by seven men and seven women with ethics approval. In this work, six sensor units, each with triaxial accelerometer, gyroscope and magnetometer sensors, are used on different parts of the human body. Another important advantage of this work is the variety of machine learning techniques employed to distinguish falls from ADLs. To the best of our knowledge, this work is the most comprehensive investigation in the literature of the fall detection performance of sensor placement parts on the human body for wearable devices.

5. Conclusions

In this paper, a comprehensive analysis of sensor placement locations on the human body with respect to fall detection performance has been made. There are some works in the literature that give ideas about location-based performance [18,19,20]; however, the literature lacks work specifically focused on sensor location performance [22]. Table 9 summarizes related works in terms of the number of sensors (Sens.), technical specifications of the sensors used (Spec.), volunteers participating in the tests (Vol.), sensor locations on the human body (Locat.), the investigated sensor combinations (Comb.), the number of movement types included in the dataset (Tests), the employed classification algorithms (Algorithms) and the classification performance metrics (Performances). This work has many advantages over the other works; for example, the number of male and female volunteers is equal, whereas the datasets used in other works in the literature are male dominated, which makes them unbalanced. The dataset used in this work contains 36 types of movements, including 16 ADLs and 20 falls. In this work, 378 sensor combinations are investigated, and six machine learning techniques are employed. A 99.96% sensitivity is achieved with a single waist sensor using the k-NN algorithm. These advantages show why this work is the most comprehensive investigation in the literature of sensor placement performance for fall detection.
Accurate fall detection systems in the existing literature generally use multiple sensors. However, the increase in the number of sensor nodes has a negative impact on the adoption of the system by the user. A fall detection system needs to be simple so as not to affect users’ routine daily lives. As a result, the reduction of the number of sensor nodes brings many advantages, which are improved mobility, reduced computational load, decreased power consumption and increased ease of use.
Each sensor unit used in this work comprises triaxial accelerometer, gyroscope and magnetometer sensors. This means nine channels of sensor data stream simultaneously toward the feature extraction unit. This unit creates a feature vector from a 4 s-long frame, which is constructed at a 25-Hz sampling rate around the maximum TA value. The feature vector of an individual sensor unit has 234 elements; therefore, the feature vector belonging to the six sensor units has 1404 elements (234 × 6). While the feature vector decreases in size from 1404 to 234 as the number of sensor units is reduced, the proportion of the feature vectors' variance captured by the PCA algorithm with the same number of PCs (30 PCs) increases from 72.38% to 96.63%. This makes a great contribution toward increased accuracy in single-sensor combinations. Therefore, decreasing the number of sensor units should not be considered a disadvantage.
The world’s current aged population is not eager to use technology in daily life, because they do not feel comfortable with body-attached wearable devices. However, wearable technologies are still one of the most popular fields in today’s entrepreneurship trends. The current young and middle-aged generation is going to age, and in the near future we will have an elderly population that is much more aware of technology than today’s. This makes wearable technology a more attractive field for the future.
Although this work did not use elderly motion records, it is aimed to test the proposed fall detection system with elderly data in the future. For this purpose, another ethical approval that allows the collection of elderly ADL data was already obtained from the Erciyes University Ethics Committee (Approval Number 2015/411).
Lastly, to enable a comparison among the algorithms developed in different studies, it is intended to make this dataset publicly available at the University of California, Irvine (UCI) Machine Learning Repository [37].

Acknowledgments

This work was supported by the Erciyes University Scientific Research Project Coordination Department under Grant Number FBA-11-3579. The author would like to thank the volunteers who participated in the experiments for their efforts and time.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Custodio, V.; Herrera, F.J.; López, G.; Moreno, J.I. A review on architectures and communications technologies for wearable health-monitoring systems. Sensors 2012, 12, 13907–13946. [Google Scholar] [CrossRef] [PubMed]
  2. Chu, N.N.; Yang, C.M.; Wu, C.C. Game interface using digital textile sensors, accelerometer and gyroscope. IEEE Trans. Consum. Electron. 2012, 58, 184–189. [Google Scholar] [CrossRef]
  3. Moustafa, H.; Kenn, H.; Sayrafian, K.; Scanlon, W.; Zhang, Y. Mobile wearable communications. IEEE Wirel. Commun. 2015, 22, 10–11. [Google Scholar] [CrossRef]
  4. Patel, S.; Park, H.; Bonato, P.; Chan, L.; Rodgers, M. A review of wearable sensors and systems with application in rehabilitation. J. Neuroeng. Rehabil. 2012, 9, 1–17. [Google Scholar] [CrossRef] [PubMed]
  5. Borthwick, A.C.; Anderson, C.L.; Finsness, E.S.; Foulger, T.S. Special article personal wearable technologies in education: Value or villain? J. Digit. Learn. Teacher Educ. 2015, 31, 85–92. [Google Scholar] [CrossRef]
  6. Rubenstein, L.Z. Falls in older people: Epidemiology, risk factors and strategies for prevention. Age Ageing 2006, 35, ii37–ii41. [Google Scholar] [CrossRef] [PubMed]
  7. Lord, S.R.; Sherrington, C.; Menz, H.B.; Close, J.C. Falls in Older People: Risk Factors and Strategies for Prevention; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  8. Mahoney, J.E.; Eisner, J.; Havighurst, T.; Gray, S.; Palta, M. Problems of older adults living alone after hospitalization. J. Gen. Intern. Med. 2000, 15, 611–619. [Google Scholar] [CrossRef] [PubMed]
  9. Sattin, R.W.; Huber, D.A.L.; Devito, C.A.; Rodriguez, J.G.; Ros, A.; Bacchelli, S.; Stevens, J.A.; Waxweiler, R.J. The incidence of fall injury events among the elderly in a defined population. Am. J. Epidemiol. 1990, 131, 1028–1037. [Google Scholar] [PubMed]
  10. Mubashir, M.; Shao, L.; Seed, L. A survey on fall detection: Principles and approaches. Neurocomputing 2013, 100, 144–152. [Google Scholar] [CrossRef]
  11. Delahoz, Y.; Labrador, M. Survey on fall detection and fall prevention using wearable and external sensors. Sensors 2014, 14, 19806–19842. [Google Scholar] [CrossRef] [PubMed]
  12. Fortino, G.; Gravina, R. Fall-MobileGuard: A Smart Real-Time Fall Detection System. In Proceedings of the 10th EAI International Conference on Body Area Networks, Sydney, Australia, 28–30 December 2015; pp. 44–50.
  13. Özdemir, A.T.; Barshan, B. Detecting falls with wearable sensors using machine learning techniques. Sensors 2014, 14, 10691–10708. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Vavoulas, G.; Pediaditis, M.; Chatzaki, C.; Spanakis, E.G.; Tsiknakis, M. The mobifall dataset: Fall detection and classification with a smartphone. Int. J. Monit. Surveill. Technol. Res. 2014, 2, 44–56. [Google Scholar] [CrossRef]
  15. Noury, N.; Rumeau, P.; Bourke, A.; ÓLaighin, G.; Lundy, J. A proposal for the classification and evaluation of fall detectors. IRBM 2008, 29, 340–349. [Google Scholar] [CrossRef]
  16. Bagalà, F.; Becker, C.; Cappello, A.; Chiari, L.; Aminian, K.; Hausdorff, J.M.; Zijlstra, W.; Klenk, J. Evaluation of accelerometer-based fall detection algorithms on real-world falls. PLoS ONE 2012, 7, e37062. [Google Scholar] [CrossRef] [PubMed]
  17. Noury, N.; Fleury, A.; Rumeau, P.; Bourke, A.; Laighin, G.; Rialle, V.; Lundy, J. Fall detection-principles and methods. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 23–26 August 2007; pp. 1663–1666.
  18. Kangas, M.; Konttila, A.; Lindgren, P.; Winblad, I.; Jämsä, T. Comparison of low-complexity fall detection algorithms for body attached accelerometers. Gait Posture 2008, 28, 285–291. [Google Scholar] [CrossRef] [PubMed]
  19. Li, Q.; Stankovic, J.; Hanson, M.; Barth, A.T.; Lach, J.; Zhou, G. Accurate, fast fall detection using gyroscopes and accelerometer-derived posture information. In Proceedings of the IEEE 6th International Workshop on Wearable and Implantable Body Sensor Networks, Berkeley, CA, USA, 3–5 June 2009; pp. 138–143.
  20. Atallah, L.; Lo, B.; King, R.; Yang, G.-Z. Sensor placement for activity detection using wearable accelerometers. In Proceedings of the IEEE Body Sensor Networks (BSN), Singapore, 7–9 June 2010; pp. 24–29.
  21. Bao, L.; Intille, S.S. Activity recognition from user-annotated acceleration data. In Pervasive Computing; Springer: Vienna, Austria, 2004; pp. 1–17. [Google Scholar]
  22. Shi, G.; Zhang, J.; Dong, C.; Han, P.; Jin, Y.; Wang, J. Fall detection system based on inertial mems sensors: Analysis design and realization. In Proceedings of the IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, Shenyang, China, 8–12 June 2015; pp. 1834–1839.
  23. De Backere, F.; Ongenae, F.; Van den Abeele, F.; Nelis, J.; Bonte, P.; Clement, E.; Philpott, M.; Hoebeke, J.; Verstichel, S.; Ackaert, A. Towards a social and context-aware multi-sensor fall detection and risk assessment platform. Comput. Biol. Med. 2014, 64, 307–320. [Google Scholar] [CrossRef] [PubMed]
  24. Finlayson, M.L.; Peterson, E.W. Falls, aging, and disability. Phys. Med. Rehabil. Clin. N. Am. 2010, 21, 357–373. [Google Scholar] [CrossRef] [PubMed]
  25. Abbate, S.; Avvenuti, M.; Corsini, P.; Light, J.; Vecchio, A. Monitoring of human movements for fall detection and activities recognition in elderly care using wireless sensor network: A survey. In Application-Centric Design Book, 1st ed.; InTech: Rijeka, Croatia, 2010; Chapter 9; pp. 147–166. [Google Scholar]
  26. MTw Development Kit User Manual and Technical Documentation. Xsens Technologies B.V.: Enschede, The Netherlands, 2016. Available online: http://www.xsens.com (accessed on 20 June 2016).
  27. Barshan, B.; Yüksek, M.C. Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units. Comput. J. 2013, 57, 1649–1667. [Google Scholar] [CrossRef]
  28. Altun, K.; Barshan, B.; Tuncel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recogn. 2010, 43, 3605–3620. [Google Scholar] [CrossRef] [Green Version]
  29. Chan, M.; Campo, E.; Bourennane, W.; Estève, D. Connectivity for the indoor and outdoor elderly people safety management: An example from our current project. In Proceedings of the 7th European Symposium on Biomedical Engineering, Chalkidiki, Greece, 28–29 May 2010.
  30. Wu, G.; Cavanagh, P.R. ISB recommendations for standardization in the reporting of kinematic data. J. Biomech. 1995, 28, 1257–1261. [Google Scholar] [CrossRef]
  31. Ozdemir, A.T.; Danisman, K. A comparative study of two different FPGA-based arrhythmia classifier architectures. Turk. J. Electr. Eng. Comput. 2015, 23, 2089–2106. [Google Scholar] [CrossRef]
  32. Wang, C.; Lu, W.; Narayanan, M.R.; Redmond, S.J.; Lovell, N.H. Low-power technologies for wearable telecare and telehealth systems: A review. Biomed. Eng. Lett. 2015, 5, 1–9. [Google Scholar] [CrossRef]
  33. Chaudhuri, S.; Kneale, L.; Le, T.; Phelan, E.; Rosenberg, D.; Thompson, H.; Demiris, G. Older adults' perceptions of fall detection devices. J. Appl. Gerontol. 2016, 71. [Google Scholar] [CrossRef] [PubMed]
  34. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley: New York, NY, USA, 2001. [Google Scholar]
  35. Chang, C.C.; Lin, C.J. LibSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  36. Keogh, E.; Ratanamahatana, C.A. Exact indexing of dynamic time warping. Knowl. Inf. Syst. 2005, 7, 358–386. [Google Scholar] [CrossRef]
  37. UCI Machine Learning Repository, University of California, Irvine. Available online: http://archive.ics.uci.edu/ml/ (accessed on 20 June 2016).
Figure 1. Fall detection system design block diagram.
Figure 2. (a) MTw sensor unit; (b) Sensor unit with housing [26]; (c) Wireless data acquisition system.
Figure 3. (a) MTw unit housing on a strap; (b) Strap set on mannequin [26]; (c) Sensors placement on the subject’s body.
Figure 4. These six graphs belong to the waist sensor and show the first of five repetitions of the 901-Front Lying fall action performed by Volunteer 203 (203_901_1). The top three graphs (a–c) are saved with 430 samples (more than 17 s of raw data, sampled at 25 Hz), and the bottom three graphs (d–f) are reduced to a 101-sample record (nearly 4 s of shortened data).
Figure 5. Feature vector formation.
Figure 6. Location-based average accuracies.
Table 1. Age, sex and anthropometric information of volunteers. Volunteers 101–108 are men; volunteers 203–209 are women.

| Volunteer | 101 | 102 | 103 | 104 | 106 | 107 | 108 | 203 | 204 | 205 | 206 | 207 | 208 | 209 | Ave. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Age | 21 | 21 | 23 | 27 | 22 | 21 | 21 | 21 | 21 | 20 | 19 | 20 | 24 | 22 | 21.64 |
| Weight (kg) | 75 | 81 | 78 | 67 | 54 | 72 | 68 | 51 | 47 | 51 | 47 | 60 | 55 | 70 | 62.57 |
| Height (cm) | 170 | 174 | 180 | 176 | 160 | 175 | 184 | 170 | 157 | 169 | 166 | 165 | 163 | 182 | 170.79 |
Table 2. Falls and activities of daily living (ADLs) movement list.

Activities of daily living (ADLs):

| # | Label | Description |
|---|---|---|
| 801 | Walking fw | Walking forward |
| 802 | Walking bw | Walking backward |
| 803 | Jogging | Running |
| 804 | Squatting down | Going down, then up |
| 805 | Bending | Bending of about 90 degrees |
| 806 | Bending and pick up | Bending to pick up an object on the floor |
| 807 | Limp | Walking with a limp |
| 808 | Stumble | Stumbling with recovery |
| 809 | Trip over | Bending while walking and then continuing to walk |
| 810 | Coughing | Coughing or sneezing |
| 811 | Sit chair | From vertical, sitting with a certain acceleration on a chair (hard surface) |
| 812 | Sit sofa | From vertical, sitting with a certain acceleration on a sofa (soft surface) |
| 813 | Sit air | From vertical, sitting in the air exploiting the muscles of the legs |
| 814 | Sit bed | From vertical, sitting with a certain acceleration on a bed (soft surface) |
| 815 | Lying bed | From vertical, lying on the bed |
| 816 | Rising bed | From lying to sitting |

Voluntary falls:

| # | Label | Description |
|---|---|---|
| 901 | Front lying | From vertical, going forward to the floor |
| 902 | Front protected lying | From vertical, going forward to the floor with arm protection |
| 903 | Front knees | From vertical, going down on the knees |
| 904 | Front knees lying | From vertical, going down on the knees and then lying on the floor |
| 905 | Front quick recovery | From vertical, going on the floor and quick recovery |
| 906 | Front slow recovery | From vertical, going on the floor and slow recovery |
| 907 | Front right | From vertical, going down on the floor, ending in right lateral position |
| 908 | Front left | From vertical, going down on the floor, ending in left lateral position |
| 909 | Back sitting | From vertical, going on the floor, ending sitting |
| 910 | Back lying | From vertical, going on the floor, ending lying |
| 911 | Back right | From vertical, going on the floor, ending lying in right lateral position |
| 912 | Back left | From vertical, going on the floor, ending lying in left lateral position |
| 913 | Right sideway | From vertical, going on the floor, ending lying |
| 914 | Right recovery | From vertical, going on the floor with subsequent recovery |
| 915 | Left sideway | From vertical, going on the floor, ending lying |
| 916 | Left recovery | From vertical, going on the floor with subsequent recovery |
| 917 | Rolling out bed | From lying, rolling out of bed and going on the floor |
| 918 | Podium | From vertical standing on a podium, going on the floor |
| 919 | Syncope | From standing, going on the floor following a vertical trajectory |
| 920 | Syncope wall | From standing, going down slowly, slipping on a wall |
Table 3. Combinations of sensor units on the body. COMB is combinations. A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle. Each combination number is read as a 6-bit code in the order FEDCBA (ankle, thigh, wrist, waist, chest, head).

| No. | FEDCBA | COMB | No. | FEDCBA | COMB |
|---|---|---|---|---|---|
| 0 | 000000 | — | 32 | 100000 | F |
| 1 | 000001 | A | 33 | 100001 | AF |
| 2 | 000010 | B | 34 | 100010 | BF |
| 3 | 000011 | AB | 35 | 100011 | ABF |
| 4 | 000100 | C | 36 | 100100 | CF |
| 5 | 000101 | AC | 37 | 100101 | ACF |
| 6 | 000110 | BC | 38 | 100110 | BCF |
| 7 | 000111 | ABC | 39 | 100111 | ABCF |
| 8 | 001000 | D | 40 | 101000 | DF |
| 9 | 001001 | AD | 41 | 101001 | ADF |
| 10 | 001010 | BD | 42 | 101010 | BDF |
| 11 | 001011 | ABD | 43 | 101011 | ABDF |
| 12 | 001100 | CD | 44 | 101100 | CDF |
| 13 | 001101 | ACD | 45 | 101101 | ACDF |
| 14 | 001110 | BCD | 46 | 101110 | BCDF |
| 15 | 001111 | ABCD | 47 | 101111 | ABCDF |
| 16 | 010000 | E | 48 | 110000 | EF |
| 17 | 010001 | AE | 49 | 110001 | AEF |
| 18 | 010010 | BE | 50 | 110010 | BEF |
| 19 | 010011 | ABE | 51 | 110011 | ABEF |
| 20 | 010100 | CE | 52 | 110100 | CEF |
| 21 | 010101 | ACE | 53 | 110101 | ACEF |
| 22 | 010110 | BCE | 54 | 110110 | BCEF |
| 23 | 010111 | ABCE | 55 | 110111 | ABCEF |
| 24 | 011000 | DE | 56 | 111000 | DEF |
| 25 | 011001 | ADE | 57 | 111001 | ADEF |
| 26 | 011010 | BDE | 58 | 111010 | BDEF |
| 27 | 011011 | ABDE | 59 | 111011 | ABDEF |
| 28 | 011100 | CDE | 60 | 111100 | CDEF |
| 29 | 011101 | ACDE | 61 | 111101 | ACDEF |
| 30 | 011110 | BCDE | 62 | 111110 | BCDEF |
| 31 | 011111 | ABCDE | 63 | 111111 | ABCDEF |
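The numbering in Table 3 is simply a 6-bit encoding of the sensor subsets (bit 0 = A/head through bit 5 = F/ankle), giving 2^6 − 1 = 63 non-empty combinations. The minimal Python sketch below reproduces that encoding; the function name `combination_label` is illustrative and not from the paper.

```python
# Reproduce the sensor-combination encoding of Table 3: each combination
# number 1..63 is a 6-bit mask over the units A (head, bit 0) through
# F (right ankle, bit 5); e.g. 5 = 000101 -> AC, 33 = 100001 -> AF.
UNITS = "ABCDEF"  # A-head, B-chest, C-waist, D-wrist, E-thigh, F-ankle

def combination_label(n: int) -> str:
    """Return the sensor-unit label for combination number n (1..63)."""
    if not 1 <= n <= 63:
        raise ValueError("combination number must be in 1..63")
    return "".join(u for i, u in enumerate(UNITS) if n & (1 << i))

# All 63 non-empty subsets of the six units:
labels = [combination_label(n) for n in range(1, 64)]
```

Evaluating each of the 63 subsets with each of the six classifiers is what yields the 63 × 6 = 378 experiments reported in the abstract.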
Table 4. Machine learning algorithms' performances (accuracy, %) based on sensor combinations. COMB is combinations, A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle. k-NN, k-nearest neighbor classifier; BDM, Bayesian decision making; SVM, support vector machines; LSM, least squares method; DTW, dynamic time warping; ANN, artificial neural network.

| No. | COMB | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|---|
| 0 | NONE | — | — | — | — | — | — |
| 1 | A | 99.20 | 97.29 | 96.08 | 96.77 | 96.12 | 94.20 |
| 2 | B | 99.60 | 96.65 | 96.28 | 95.53 | 96.58 | 94.35 |
| 3 | BA | 99.69 | 98.74 | 98.19 | 98.10 | 97.12 | 95.29 |
| 4 | C | 99.87 | 99.24 | 98.99 | 98.46 | 98.29 | 95.69 |
| 5 | CA | 99.92 | 99.60 | 99.37 | 98.88 | 97.11 | 95.67 |
| 6 | CB | 99.77 | 99.23 | 99.00 | 98.13 | 97.35 | 95.63 |
| 7 | CBA | 99.76 | 99.54 | 99.37 | 99.17 | 96.84 | 95.82 |
| 8 | D | 97.49 | 96.08 | 95.27 | 94.63 | 93.62 | 92.40 |
| 9 | DA | 99.65 | 98.52 | 96.72 | 98.56 | 96.99 | 94.29 |
| 10 | DB | 99.54 | 98.24 | 97.63 | 96.38 | 94.42 | 94.23 |
| 11 | DBA | 99.76 | 98.57 | 98.04 | 98.73 | 96.75 | 95.42 |
| 12 | DC | 99.54 | 98.67 | 98.84 | 98.70 | 97.19 | 94.95 |
| 13 | DCA | 99.83 | 99.29 | 99.02 | 99.29 | 97.09 | 95.53 |
| 14 | DCB | 99.66 | 98.75 | 98.82 | 98.60 | 95.86 | 95.08 |
| 15 | DCBA | 99.80 | 99.06 | 99.23 | 99.13 | 97.35 | 95.39 |
| 16 | E | 99.61 | 99.12 | 99.27 | 98.09 | 95.69 | 95.53 |
| 17 | EA | 99.81 | 99.13 | 99.52 | 98.39 | 97.37 | 95.53 |
| 18 | EB | 99.82 | 99.19 | 99.55 | 98.74 | 96.58 | 95.71 |
| 19 | EBA | 99.79 | 99.64 | 99.58 | 99.21 | 97.88 | 96.02 |
| 20 | EC | 99.84 | 99.44 | 99.31 | 98.82 | 95.85 | 95.17 |
| 21 | ECA | 99.94 | 99.90 | 99.59 | 98.93 | 97.88 | 95.83 |
| 22 | ECB | 99.91 | 99.42 | 99.37 | 99.32 | 96.85 | 95.60 |
| 23 | ECBA | 99.88 | 99.67 | 99.62 | 99.31 | 98.11 | 96.27 |
| 24 | ED | 99.63 | 98.60 | 99.29 | 98.41 | 95.62 | 94.93 |
| 25 | EDA | 99.82 | 99.00 | 99.42 | 98.45 | 97.86 | 95.68 |
| 26 | EDB | 99.77 | 99.04 | 99.38 | 98.77 | 96.41 | 95.30 |
| 27 | EDBA | 99.87 | 99.25 | 99.31 | 99.27 | 97.53 | 95.58 |
| 28 | EDC | 99.69 | 99.09 | 99.35 | 99.00 | 97.49 | 95.30 |
| 29 | EDCA | 99.88 | 99.22 | 99.56 | 99.45 | 96.69 | 95.72 |
| 30 | EDCB | 99.84 | 99.05 | 99.40 | 99.13 | 96.72 | 95.51 |
| 31 | EDCBA | 99.86 | 99.21 | 99.39 | 99.33 | 98.67 | 95.75 |
| 32 | F | 99.50 | 98.24 | 99.06 | 96.36 | 93.51 | 95.30 |
| 33 | FA | 99.91 | 99.19 | 99.35 | 99.04 | 97.04 | 95.56 |
| 34 | FB | 99.55 | 98.85 | 99.22 | 98.27 | 96.31 | 95.13 |
| 35 | FBA | 99.70 | 99.43 | 99.32 | 99.32 | 97.04 | 95.61 |
| 36 | FC | 99.81 | 99.47 | 99.59 | 98.66 | 96.24 | 95.46 |
| 37 | FCA | 99.93 | 99.74 | 99.55 | 99.38 | 97.41 | 95.55 |
| 38 | FCB | 99.80 | 99.34 | 99.42 | 99.02 | 97.30 | 95.58 |
| 39 | FCBA | 99.83 | 99.53 | 99.62 | 99.59 | 97.52 | 95.70 |
| 40 | FD | 99.60 | 97.88 | 98.92 | 98.35 | 95.63 | 94.50 |
| 41 | FDA | 99.82 | 98.65 | 98.97 | 99.48 | 97.35 | 95.13 |
| 42 | FDB | 99.79 | 98.69 | 98.71 | 98.60 | 95.25 | 95.14 |
| 43 | FDBA | 99.94 | 99.29 | 99.07 | 99.35 | 97.17 | 95.77 |
| 44 | FDC | 99.77 | 98.91 | 99.38 | 99.12 | 96.60 | 95.16 |
| 45 | FDCA | 99.92 | 99.08 | 99.47 | 99.54 | 98.19 | 95.38 |
| 46 | FDCB | 99.85 | 98.85 | 99.16 | 99.00 | 97.41 | 95.31 |
| 47 | FDCBA | 99.85 | 99.16 | 99.37 | 99.48 | 97.40 | 95.92 |
| 48 | FE | 99.75 | 99.15 | 99.57 | 98.18 | 94.79 | 95.54 |
| 49 | FEA | 99.91 | 99.63 | 99.69 | 99.15 | 96.76 | 95.47 |
| 50 | FEB | 99.70 | 99.49 | 99.47 | 99.00 | 94.95 | 95.46 |
| 51 | FEBA | 99.78 | 99.79 | 99.69 | 99.67 | 97.92 | 95.77 |
| 52 | FEC | 99.88 | 99.67 | 99.64 | 99.07 | 96.65 | 95.58 |
| 53 | FECA | 99.94 | 99.73 | 99.66 | 99.52 | 97.66 | 95.59 |
| 54 | FECB | 99.86 | 99.51 | 99.53 | 99.44 | 97.53 | 95.59 |
| 55 | FECBA | 99.86 | 99.65 | 99.67 | 99.56 | 97.40 | 96.18 |
| 56 | FED | 99.70 | 98.97 | 99.55 | 99.08 | 96.69 | 95.18 |
| 57 | FEDA | 99.87 | 99.17 | 99.45 | 99.67 | 98.11 | 95.42 |
| 58 | FEDB | 99.83 | 99.25 | 99.21 | 99.25 | 97.27 | 95.36 |
| 59 | FEDBA | 99.90 | 99.30 | 99.38 | 99.48 | 98.02 | 95.63 |
| 60 | FEDC | 99.82 | 99.09 | 99.50 | 99.22 | 96.46 | 95.42 |
| 61 | FEDCA | 99.88 | 99.28 | 99.59 | 99.57 | 98.19 | 95.62 |
| 62 | FEDCB | 99.87 | 99.13 | 99.34 | 99.37 | 97.09 | 95.67 |
| 63 | FEDCBA | 99.91 | 99.26 | 99.48 | 99.65 | 97.85 | 95.68 |
Table 5. The best results of the respective sensor combinations for double, triple, quadruple and quintuple configurations, as confusion-matrix counts (TP: true positive; FN: false negative; FP: false positive; TN: true negative; each column sums to 1400 fall and 1120 ADL trials). A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle. Acc (%), accuracy.

| Double | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1398 / 2 | 1398.5 / 1.5 | 1395.2 / 4.8 | 1387.3 / 12.7 | 1372.5 / 27.5 | 1356.6 / 43.4 |
| FP / TN | 0 / 1120 | 8.6 / 1111.4 | 5.6 / 1114.4 | 11.4 / 1108.6 | 38.7 / 1081.3 | 64.6 / 1055.4 |
| Acc (%) | 99.92 | 99.60 | 99.59 | 99.04 | 97.37 | 95.71 |
| Combination | CA | CA | FC | FA | EA | EB |

| Triple | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1399 / 1 | 1399 / 1 | 1395.1 / 4.9 | 1397 / 3 | 1384 / 16 | 1366.1 / 33.9 |
| FP / TN | 0.4 / 1119.6 | 1.4 / 1118.6 | 3 / 1117 | 10.2 / 1109.8 | 37.4 / 1082.6 | 66.5 / 1053.5 |
| Acc (%) | 99.94 | 99.90 | 99.69 | 99.48 | 97.88 | 96.02 |
| Combination | ECA | ECA | FEA | FDA | EBA | EBA |

| Quadruple | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1400 / 0 | 1398.3 / 1.7 | 1395.4 / 4.6 | 1399.1 / 0.9 | 1381 / 19 | 1368.5 / 31.5 |
| FP / TN | 0.5 / 1118.5 | 3.6 / 1116.4 | 3.1 / 1116.9 | 7.3 / 1112.7 | 26.7 / 1093.3 | 62.4 / 1057.6 |
| Acc (%) | 99.94 | 99.79 | 99.69 | 99.67 | 98.19 | 96.27 |
| Combination | FDBA | FEBA | FEBA | FEDA | FDCA | ECBA |

| Quintuple | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1399.7 / 0.3 | 1398.2 / 1.8 | 1394.8 / 5.2 | 1400 / 0 | 1389.4 / 10.6 | 1367.6 / 32.4 |
| FP / TN | 2.2 / 1117.8 | 7 / 1113 | 3 / 1117 | 10.9 / 1109.1 | 22.9 / 1097.1 | 63.8 / 1056.2 |
| Acc (%) | 99.90 | 99.65 | 99.67 | 99.57 | 98.67 | 96.18 |
| Combination | FEDBA | FECBA | FECBA | FEDCA | EDCBA | FECBA |
Table 6. Comparison of the single sensor units' fall detection performances with different machine learning techniques, as confusion-matrix counts (TP: true positive; FN: false negative; FP: false positive; TN: true negative). A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle. Acc (%), accuracy.

| C (Waist) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1399.4 / 0.6 | 1396.2 / 3.8 | 1391.7 / 8.3 | 1395.2 / 4.8 | 1385.6 / 14.4 | 1359.1 / 40.9 |
| FP / TN | 2.7 / 1117.3 | 15.3 / 1104.7 | 17.1 / 1102.9 | 34 / 1086 | 28.6 / 1091.4 | 67.8 / 1052.2 |
| Acc (%) | 99.87 | 99.24 | 98.99 | 98.46 | 98.29 | 95.69 |

| E (Thigh) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1395.2 / 4.8 | 1390.7 / 9.3 | 1395 / 5 | 1371.5 / 28.5 | 1320.4 / 79.6 | 1354.2 / 45.8 |
| FP / TN | 5 / 1115 | 12.8 / 1107.2 | 13.4 / 1106.6 | 19.7 / 1100.3 | 28.9 / 1091.1 | 66.8 / 1053.2 |
| Acc (%) | 99.61 | 99.12 | 99.27 | 98.09 | 95.69 | 95.53 |

| F (Ankle) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1392.6 / 7.4 | 1390.6 / 9.4 | 1389.2 / 10.8 | 1326.6 / 73.4 | 1273.1 / 126.9 | 1358.8 / 41.2 |
| FP / TN | 5.2 / 1114.8 | 34.9 / 1085.1 | 12.8 / 1107.2 | 18.3 / 1101.7 | 36.6 / 1083.4 | 77.3 / 1042.7 |
| Acc (%) | 99.50 | 98.24 | 99.06 | 96.36 | 93.51 | 95.30 |

| A (Head) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1391 / 9 | 1384.6 / 15.4 | 1372.3 / 27.7 | 1376.5 / 23.5 | 1362.2 / 37.8 | 1354.4 / 45.6 |
| FP / TN | 11.1 / 1108.9 | 52.9 / 1067.1 | 71 / 1049 | 57.9 / 1062.1 | 60 / 1060 | 100.6 / 1019.4 |
| Acc (%) | 99.20 | 97.29 | 96.08 | 96.77 | 96.12 | 94.20 |

| B (Chest) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1398.1 / 1.9 | 1380.8 / 19.2 | 1363.9 / 36.1 | 1388.6 / 11.4 | 1381.4 / 18.6 | 1341.1 / 58.9 |
| FP / TN | 8.1 / 1111.9 | 65.3 / 1054.7 | 57.6 / 1062.4 | 101.3 / 1018.7 | 67.5 / 1052.5 | 83.5 / 1036.5 |
| Acc (%) | 99.60 | 96.65 | 96.28 | 95.53 | 96.58 | 94.35 |

| D (Wrist) | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| TP / FN | 1370.7 / 29.3 | 1371.9 / 28.1 | 1353.8 / 46.2 | 1302.7 / 97.3 | 1314.2 / 85.8 | 1343 / 57 |
| FP / TN | 33.9 / 1086.1 | 70.8 / 1049.2 | 73.1 / 1046.9 | 37.9 / 1082.1 | 75.1 / 1044.9 | 161.6 / 985.4 |
| Acc (%) | 97.49 | 96.08 | 95.27 | 94.63 | 93.62 | 92.40 |
Table 7. k-NN classifier results over 10 successive rounds with the waist (C) sensor unit. AVG: average; STD: standard deviation.

| Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG | STD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Se (%) | 99.93 | 99.93 | 100 | 99.93 | 99.93 | 100 | 99.93 | 100 | 100 | 99.93 | 99.96 | 0.0369 |
| Acc (%) | 99.88 | 99.88 | 99.92 | 99.76 | 99.88 | 99.92 | 99.88 | 99.84 | 99.84 | 99.88 | 99.87 | 0.0460 |
| Sp (%) | 99.82 | 99.82 | 99.82 | 99.55 | 99.82 | 99.82 | 99.82 | 99.64 | 99.64 | 99.82 | 99.76 | 0.1035 |
| TN | 1118 | 1118 | 1118 | 1115 | 1118 | 1118 | 1118 | 1116 | 1116 | 1118 | 1117.3 | 1.1595 |
| FP | 2 | 2 | 2 | 5 | 2 | 2 | 2 | 4 | 4 | 2 | 2.7 | 1.1595 |
| TP | 1399 | 1399 | 1400 | 1399 | 1399 | 1400 | 1399 | 1400 | 1400 | 1399 | 1399.4 | 0.5164 |
| FN | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0.6 | 0.5164 |
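The sensitivity, specificity and accuracy figures in Tables 5–7 follow directly from the confusion-matrix counts. As a sanity check, the short Python sketch below recomputes the Run 1 values of Table 7 (waist sensor, k-NN) from its TP, FN, FP and TN entries; the helper names are illustrative, not from the paper.

```python
# Sensitivity, specificity and accuracy as used in Tables 5-7, computed
# from confusion-matrix counts. The values below are Run 1 of Table 7
# (waist sensor, k-NN): TP = 1399, FN = 1, TN = 1118, FP = 2.
def sensitivity(tp, fn):
    return 100.0 * tp / (tp + fn)       # TP / (TP + FN)

def specificity(tn, fp):
    return 100.0 * tn / (tn + fp)       # TN / (TN + FP)

def accuracy(tp, tn, fp, fn):
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

tp, fn, tn, fp = 1399, 1, 1118, 2
se = sensitivity(tp, fn)        # -> 99.93 (to 2 d.p.), as in Table 7
sp = specificity(tn, fp)        # -> 99.82
acc = accuracy(tp, tn, fp, fn)  # -> 99.88
```

Note that 1400 falls and 1120 ADL trials per column are the totals implied by the TP + FN and TN + FP sums throughout Tables 5–7.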
Table 8. Summary of the location- and algorithm-based accuracy averages and the best individual sensor performances, calculated from Table 6. A-head, B-chest, C-waist, D-right wrist, E-right thigh and F-right ankle.

| Location of sensor unit | C (Waist) | E (Right thigh) | F (Right ankle) | A (Head) | B (Chest) | D (Right wrist) |
|---|---|---|---|---|---|---|
| Average accuracy (%) | 98.42 | 97.89 | 97.00 | 96.61 | 96.50 | 94.92 |

| Algorithm | k-NN | BDM | SVM | LSM | DTW | ANN |
|---|---|---|---|---|---|---|
| Average accuracy (%) | 99.21 | 97.77 | 97.49 | 96.64 | 95.64 | 94.58 |

| Individual sensor unit | C (Waist) | E (Right thigh) | B (Chest) | A (Head) | E (Right thigh) | C (Waist) |
|---|---|---|---|---|---|---|
| Algorithm | k-NN | k-NN | k-NN | k-NN | SVM | BDM |
| Accuracy (%) | 99.87 | 99.61 | 99.60 | 99.50 | 99.27 | 99.24 |
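The averages in Table 8 are plain means over the Table 6 accuracies. The brief Python check below recomputes two of them under that assumption: the waist-location average (mean over the six classifiers) and the k-NN algorithm average (mean over the six locations).

```python
# Cross-check two Table 8 averages against the Table 6 accuracies.
waist_acc = [99.87, 99.24, 98.99, 98.46, 98.29, 95.69]  # waist: k-NN..ANN
knn_acc = [99.87, 99.61, 99.50, 99.20, 99.60, 97.49]    # k-NN: C,E,F,A,B,D

waist_avg = round(sum(waist_acc) / len(waist_acc), 2)  # -> 98.42, as reported
knn_avg = round(sum(knn_acc) / len(knn_acc), 2)        # -> 99.21, as reported
```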
Table 9. Comparison with related works in the literature. Sens., sensors; Spec., specification; Vol., volunteers; Locat., locations; Comb., combinations.

| Work | Sens. | Spec. | Vol. | Locat. | Comb. | Tests | Algorithms | Performances |
|---|---|---|---|---|---|---|---|---|
| Bao [21] | 2X A | ±10 g | 20 P (13 M, 7 F) | ankle, arm, thigh, hip, wrist | 20 | 20 (20 ADL, 0 fall) | Decision Table, Instant Learning, Naïve Bayes, Decision Tree | all sensors: 84.5% Acc; thigh + wrist: 80.73% Acc |
| Kangas [18] | 3X A | ±12 g | 3 P (2 M, 1 F) | waist, head, wrist | 24 | 12 (9 ADL, 3 fall) | rule-based algorithm | 98% Se (head), 97% Se (waist), 71% Se (wrist) |
| Li [19] | 3X A, 3X G | ±10 g, ±500°/s | 3 P (3 M, 0 F) | chest, thigh | 1 | 14 (9 ADL, 5 fall) | rule-based algorithm | chest + thigh: 92% Acc, 91% Se |
| Atallah [20] | 3X A | ±3 g | 11 P (9 M, 2 F) | ankle, knee, waist, wrist, arm, chest, ear | 12 | 15 (15 ADL, 0 fall) | k-NN, Bayesian classifier | low-level activities: waist; medium-level: chest, wrist; high-level: arm, knee |
| Shi [22] | 21× (3X A, 3X G, 3X M) | ±8 g, ±2000°/s, N/A | 13 P (12 M, 1 F) | thighs, shanks, feet, upper arms, forearms, hands, waist, neck, head, back | 14 | 25 (12 ADL, 13 fall) | Decision Tree algorithm | waist: 97.79% Acc, 95.5% Se, 98.8% Sp |
| Özdemir [13] | 6× (3X A, 3X G, 3X M) | ±16 g, ±1200°/s, ±1.5 G | 14 P (7 M, 7 F) | head, chest, waist, wrist, thigh, ankle | 378 | 36 (16 ADL, 20 fall) | k-NN, BDM, SVM, LSM, DTW, ANN | waist: 99.87% Acc, 99.96% Se, 99.76% Sp |

In the sensors column, 6× and 21× give the number of sensor nodes; 2X A and 3X A mean 2- and 3-axis accelerometer, 3X G means 3-axis gyroscope and 3X M means 3-axis magnetometer. In the volunteer column, P, M and F mean people, male and female, respectively.

Özdemir, A.T. An Analysis on Sensor Locations of the Human Body for Wearable Fall Detection Devices: Principles and Practice. Sensors 2016, 16, 1161. https://doi.org/10.3390/s16081161
