Possible Life Saver: A Review on Human Fall Detection Technology

Among humans, falls are a serious health problem causing severe injuries and even death in the elderly population. Falls are also a major safety threat to bikers, skiers, construction workers, and others. Fortunately, with advances in technology, the number of proposed fall detection systems and devices has increased dramatically, and some of them are already on the market. Fall detection devices/systems can be categorized by their architectures as wearable devices, ambient systems, image processing-based systems, and hybrid systems, which employ a combination of two or more of these methodologies. In this review paper, a comparison is made among these major fall detection systems, devices, and algorithms in terms of their proposed approaches and measures of performance. Issues with current systems, such as lack of portability and reliability, are presented as well. Development trends such as the use of smartphones, machine learning, and EEG are recognized. Challenges with privacy issues, limited real fall data, and ergonomic design deficiency are also discussed.


Introduction
A fall is defined as an event which results in a person coming to rest inadvertently on the ground or floor or other lower level [1]. Fall-related injuries include wrist, arm, ankle, and hip fractures and traumatic brain injuries. People constantly face the risk of falls while performing daily activities, even when sleeping. The risk is even higher for those who engage in outdoor sports or construction work. Elderly people face the highest risk of falls and the most severe consequences. According to the CDC, in the United States more than one in four adults (about 30%) aged 65 and older report falling each year, which results in about 30 million falls annually [2]. Falls are the leading cause of injury-related death among adults aged 65 and older, about 62 deaths per 100,000 older adults, and this rate increased by more than 30% from 2007 to 2016 [3]. While not all falls are fatal, about 38% of those who fall reported an injury that required medical treatment or restricted their activity for at least one day [2]. Falls among adults aged 65 and older are also very costly, even with insurance, Medicare, or Medicaid: each year about $50 billion is spent on non-fatal fall injuries and $754 million is spent on fatal falls [4]. Beyond physical injuries, psychological trauma such as fear of falling leads to a reduction in daily activities that makes a person even weaker and more likely to fall again [5]. In fact, stride-to-stride temporal variations of gait are significantly larger in elderly fallers than in non-fallers [6]. A fall detection system or device can act as a remedy for the above risks: such a system identifies falls, ideally takes preventive measures against fall injuries, and alerts people to an emergency when a fall event has just occurred.

Systems Based on Wearable Devices
Wearable telemedicine technology provides an effective solution for the above-mentioned falling issues and has become a new research hotspot. Wearable devices are smart electronic devices with inertial and medical sensors embedded into watches or clothes to achieve non-intrusive, non-invasive diagnosis and monitoring [20] by collecting body signals. Current systems and research on wearable fall detection can be classified into (i) tri-axial accelerometer-based, (ii) gyroscope-based, (iii) inertial measurement unit and barometric altimeter-based (BIMU), and (iv) smartphone-based systems. In these systems, the sensor units are attached to various parts of the body such as the wrist, chest, thigh, and hip. A few studies focus on a single type of sensor: [21][22][23] discuss accelerometer-based systems, while [24,25] introduce gyroscope-based ones. Most recent systems combine two or more sensors, including an accelerometer and a gyroscope. In many cases, such systems also come with a protection apparatus, such as an embedded airbag ready to deploy in the event of a fall. The parameters monitored by these sensor systems include the root-mean-square (RMS) of acceleration measured by the accelerometer, angular velocity measured by the gyroscope, vertical velocity and height measured by the BIMU, and electrocardiogram (ECG) signals. Such systems can be broadly classified into threshold-based and machine learning-based systems. In [26][27][28], tri-axial accelerometer and gyroscope measurements are monitored to calculate the RMS values of acceleration and angular velocity on each axis, which are compared with a threshold value to detect fall events. In [29], the same methodology is applied using the sensors present in a smartphone.
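The threshold logic described above can be sketched in a few lines. The example below is a minimal illustration, not any cited system's implementation; the 3 m/s² and 0.52 rad/s values follow the thresholds reported for [27], and the function names are ours.

```python
import math

# Thresholds following [27]: during free fall the acceleration magnitude
# collapses below ~3 m/s^2, while the tumbling body rotates faster than
# ~0.52 rad/s.
ACC_THRESHOLD = 3.0    # m/s^2
GYRO_THRESHOLD = 0.52  # rad/s

def magnitude(x, y, z):
    """Root-sum-of-squares magnitude of a tri-axial sensor sample."""
    return math.sqrt(x * x + y * y + z * z)

def is_fall(acc_sample, gyro_sample):
    """Flag a fall when the acceleration magnitude collapses (free fall)
    AND the angular velocity is high at the same time."""
    return (magnitude(*acc_sample) < ACC_THRESHOLD and
            magnitude(*gyro_sample) > GYRO_THRESHOLD)
```

Real systems evaluate these conditions over a sliding window of samples rather than a single reading, which is one reason pure threshold detectors misfire on fall-like ADLs.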
Wearable devices readily available on the market, such as the Apple Watch Series 4 and 5 [30] and the Sense4Care Angel4 [31], detect falls using the same accelerometer and gyroscope combination. In the Apple Watch, the wrist trajectory and impact acceleration are measured to detect falls. Since these devices only compare the sensor values with a threshold, their accuracy and sensitivity are quite low and false positives increase accordingly. Employing machine learning algorithms to predict falls has become a trend, since it increases prediction accuracy. For example, angular rotation and acceleration measurements were framed as a binary classification problem in [32,33] and tested with a k-nearest neighbor (KNN) classifier (99.80% accuracy) and a random forest classifier (96.82% accuracy). In study [34], a hidden Markov model is combined with a sensor orientation calibration algorithm to resolve sensor misplacement issues on the human body, detecting falls with an experimental positive prediction rate of 98.1%. The above-mentioned studies on inertial sensor-based wearable fall detection systems are summarized in Table 1. Clearly, the accelerometer and gyroscope are the most popular among the sensors used in wearable fall detection systems, as they can sense and extract multiple significant body parameters while being feasible, discreet, and budget friendly.
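As a concrete sketch of the binary classification framing, the toy KNN classifier below labels a two-feature sample (peak acceleration magnitude, peak angular velocity) as fall or ADL. The training values are invented for illustration and are not taken from [32,33].

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, sample, k=3):
    """Classify a feature vector as fall (1) or ADL (0) by majority
    vote of the k nearest labelled training samples."""
    nearest = sorted(train, key=lambda row: euclidean(row[0], sample))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > k // 2 else 0

# Illustrative training set: (peak |a| in m/s^2, peak |w| in rad/s) -> label
train = [
    ((25.0, 3.1), 1), ((22.4, 2.8), 1), ((27.9, 3.5), 1),  # falls
    ((11.2, 0.7), 0), ((9.8, 0.3), 0),  ((12.5, 0.9), 0),  # ADLs
]
```

Production systems would use a library implementation and far richer feature sets, but the decision rule is the same.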
Most falls are accompanied by sudden and significant changes in body orientation or position, which gives inertial sensors a natural advantage, since such changes can be accurately sensed and measured by accelerometers, gyroscopes, barometric altimeters, and so on. Other advantages, such as being portable, discreet, and noninvasive, make inertial sensor-based fall detection one of the most popular topics in the field of medical devices. However, a major drawback explains why such an active research topic has produced so few actual products on the market: wearable inertial sensor-based fall detection systems are not robust enough. Most studies claim algorithms with high accuracy and performance but neglect the fact that their excellent results are only validated in a laboratory environment, where uncontrollable factors and noise are eliminated [13]. In fact, wearable systems are prone to missed detections and false triggers, because they are triggered directly by the sudden, significant changes in the user's acceleration and angle associated with falls, while real life contains a variety of activities of daily living (ADLs), such as standing up, sitting down, or going from a standing position to lying down, that strongly resemble falls [11].

Table 1. Summary of inertial sensor-based wearable fall detection systems (article; proposed approach; measure of performance; challenges).

[21] Approach: a tri-axial acceleration sensor attached to each shoe; a microcontroller with a Bluetooth module transmits data in real time. Challenges: the sensor sits on the shoe, so close to the ground that it experiences less acceleration than the rest of the body.

[22] Approach: joint sensing by several tri-axial acceleration sensors, aiming to provide sufficient data to judge a fall accident and differentiate behavioral events from falling accidents. Challenges: when subjects walk slowly, no obvious periodic acceleration is available for judgment, leading to failed identification.

[23] Approach: accelerometers placed on the pelvis and head detect body accelerations while subjects walk on a specially designed, unpredictably irregular walkway. Performance: N/A. Challenges: the system only estimates the subject's fall risk rather than deciding whether a fall occurred.

[24] Approach: falls are discriminated from ADLs using a bi-axial gyroscope mounted on the trunk, measuring pitch and roll angular velocities, with a threshold-based algorithm; gyroscope signals were acquired from simulated falls performed by healthy young subjects and from ADLs performed by elderly persons in their own homes. Challenges: the ADLs chosen are only sitting down and standing up, which do not generate as large an orientation disturbance in the gyroscope as other fall-like ADLs, such as bending over to tie a shoe.

[25] Approach: a piezoelectric gyroscope attached with a belt in front of the sternum; a standard motion analysis system served as reference; subjects performed postural transitions and dynamic activities using different types of chairs, with and without armrests. Challenges: only the tilt angle between the vertical axis and the subject's anterior chest wall was analyzed; angular velocity is also important, as it reflects how fast the angle changes.

[26] Approach: a three-axis accelerometer and gyroscope attached to the patient's chest; the X, Y, and Z axis values are observed, compared with accelerometer and gyroscope thresholds, and a decision is made. Challenges: cannot detect falls, or misclassifies events, when the person uses stairs or short corridors.

[27] Approach: a triaxial accelerometer and a triaxial gyroscope; a fall is declared when the acceleration drops below 3 m/s² and the angular velocity exceeds 0.52 rad/s. Challenges: threshold-triggered sensing is prone to missed detections and false triggers.

[28] Approach: a triaxial accelerometer measures the X, Y, and Z values, which are sent to a microcontroller and compared with a threshold value for fall detection. Performance: success rate of 75% for forward falls and 95% for backward falls. Challenges: limited accuracy and success rate, as only one sensor is used.

[29] Approach: fall detection by comparing the difference between the maximum and minimum values in a sample window with a specific threshold; the angle value is also considered to estimate posture. Performance: Accuracy = 93.33%. Challenges: some falling situations cannot be detected.

[30] Approach: fall detection using the device's accelerometer and gyroscope; wrist trajectory and impact acceleration are observed to detect fall incidents. Performance: N/A. Challenges: a watch is worn on the wrist, where many unrelated movements occur, making it prone to false negatives and false positives.

[31] Approach: fall detection by combining the device's triaxial accelerometer with a special algorithm; the device connects to smartphones for emergency services. Performance: N/A. Challenges: relies on a single sensor, which limits accuracy across different types of falling events.

[32] Approach: angular rotation from a gyroscope is considered alongside acceleration to minimize false positives; fall detection is treated as a binary classification problem and tested with KNN and random forest classifiers. Performance: Accuracy = 99.80% (KNN), 96.82% (random forest). Challenges: a few falling activities, such as a forward fall while walking caused by a trip, are hard to detect.

[33] Approach: KNN, naïve Bayes, SVM, ANN, and decision tree algorithms, with and without risk factorization, were implemented and their results compared. Performance: several accuracy values reported for all implemented algorithms. Challenges: uses many algorithms without indicating which solution to choose when they give different results.

[34] Approach: a new representation of acceleration signals in HMMs avoids feature engineering, and a sensor orientation calibration algorithm resolves sensor misplacement issues in real-world scenarios; HMM classifiers are trained to detect falls from acceleration data collected by motion sensors. Challenges: the data are a snapshot of one event rather than many events from one subject over time; including subject data over time and with changing health status might improve the system.

[35] Approach: a tri-axial accelerometer and a CDMA standalone modem detect and manage fall events. Performance: no error recognized in a laboratory environment. Challenges: works with accelerometers only; the device is too bulky to be ergonomic.

[36] Approach: integration of an inertial measurement unit with a barometric altimeter (BIMU); vertical velocity and height are measured, and their root sum of squares is compared with a threshold to decide on a fall event. Challenges: the reliance on measured height is problematic for users of varying heights.

[37] Approach: the person's velocity and acceleration are tracked by a MetaTracker fixed at the chest. Challenges: relies on a single sensor, which limits accuracy across different types of falling events.

[39] Approach: a belt-like wearable pre-fall detection system that uses linear and angular velocity information from a motion sensor to classify human falls. Performance: Accuracy = 96.63%, Sensitivity = 100%, Specificity = 95.45%. Challenges: the low-power, low-cost portable design limits the system to threshold-based algorithms.

Systems Based on Ambient Sensor Systems
Apart from wearable devices, there are ambient-based and vision-based systems that can monitor human posture to detect falls. Fall detection systems based on wearable sensors are insensitive to the changing ambient environment, since they do not consider the dynamic environmental factors that might affect detection. Ambient systems, as shown in Figure 1, provide a solution by collecting data from the user as well as examining the environment. These systems use external sensors installed around the user's daily activity area, such as a home or senior care facility, and monitor human posture during a fall event along with factors such as the time spent falling. Current research on ambient fall detection systems is based on (i) ultrasonic signals/radar, (ii) the Kinect sensor, (iii) microphones, (iv) pressure sensors, and (v) infrared/Wi-Fi signals. The parameters observed by these sensors are mostly (i) human motion, (ii) posture, (iii) pressure on the ground, (iv) acoustics, and (v) the time spent falling as well as the time spent lying on the ground after the fall. Since these parameters differ for each person and each fall event, these systems use pattern matching or machine learning algorithms to detect falls rather than a threshold value.
An ultrasound sensor/radar is a motion detector that recognizes and tracks moving subjects based on ultrasound waves. It is one of the most popular unobtrusive sensors used in ambient fall detection systems. The sensor itself is small and cheap and requires only minor installation. It functions well in low-light environments and is free of privacy concerns [40]. In [41], an array of six ultrasonic sensors is used to monitor the posture of the person during a fall event. The sensors radiate an eight-pulse signal waveform at 40 kHz. The distances between the subject and the sensors are calculated, and the gesture of the target is analyzed from the different distance readings across the array. A trained SVM model is used for pattern matching and achieved 98% accuracy in fall detection. In [40,42], specific radar technologies were used to monitor changes in frequency while a fall event is happening. Deep neural networks are implemented to predict falls in [40], yielding an accuracy of 95.64%, whereas a KNN algorithm is used in [42] with an accuracy of 95.5%. The disadvantage of radar is that obstacles and clutter in indoor environments may hide the person from the sensor. Moreover, radar is highly sensitive to motion yet cannot tell which subject it comes from, so only one subject may be present in the monitoring area.

The Kinect sensor sits somewhere between a motion sensor and a vision camera: it unobtrusively tracks movements using structured light or time of flight instead of ultrasound waves [43]. Like ultrasound sensors, it is easy to install and raises few privacy issues. Kinect sensors were adopted for fall detection in [44,45]. A total of 25 specific points/joints of a human body were tracked in real time by a Microsoft Kinect v2 sensor in [44]. The minimum and maximum heights of three specific points are measured dynamically and compared with a threshold to detect a fall.
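The ultrasonic ranging underlying array systems like [41] reduces to a time-of-flight calculation. The sketch below is illustrative only: the 343 m/s speed of sound is a standard room-temperature assumption, and the posture heuristic and its threshold are ours, not the cited SVM approach.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def echo_to_distance(echo_time_s):
    """The pulse travels to the subject and back, so halve the round trip."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def posture_from_array(echo_times, standing_max_s=0.004):
    """Crude posture cue from a vertical sensor array (first half of the
    list mounted high on the wall, second half mounted low): if the high
    sensors suddenly report long echoes (nothing in front of them) while
    a low sensor still sees the subject nearby, the body is likely on
    the ground. The threshold is illustrative only."""
    distances = [echo_to_distance(t) for t in echo_times]
    high = distances[: len(distances) // 2]
    low = distances[len(distances) // 2:]
    cutoff = echo_to_distance(standing_max_s)
    subject_absent_high = all(d > cutoff for d in high)
    subject_near_floor = any(d <= cutoff for d in low)
    return "lying" if subject_absent_high and subject_near_floor else "upright"
```

A real system like [41] feeds the full distance pattern to a trained classifier instead of a hand-written rule.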
Stone et al. [45] proposed a method for detecting falls using the Microsoft Kinect. It includes two stages: the first characterizes a person's 3D bounding box to determine the subject's vertical state and track the person over time, while the second applies an ensemble of decision trees to compute the confidence of a fall. For fall detection, the Kinect sensor shares similar issues with radar, such as susceptibility to obstacles and clutter, limited indoor coverage, and the single-target requirement. Moreover, since the Kinect tracks subjects by projecting their heights and widths using structured light, it can be disturbed by overpowering sunlight or even mistake a human-sized object for a subject [45].
As for the microphone, the basic idea is to capture and analyze acoustic information to identify a fall. Microphones are cheap, small, and easy to acquire and install. Li et al. [46] developed an acoustic fall detection system (acoustic-FADE) consisting of a circular microphone array that captures and analyzes sounds to automatically detect a fall. When a sound is detected, acoustic-FADE locates the source, enhances the signal, and decides whether a fall has occurred. Though 100% sensitivity and 97% specificity are claimed in [46], the microphone-type fall detection system is generally not as robust as the Kinect or radar types, yet it shares their disadvantages, such as limited indoor coverage and the single-target requirement. It is highly sensitive to environmental noise and interference, and it has a hard time detecting slow falls that generate minimal sound. The material of the floor and the limited detection range also affect the system, and it may raise privacy concerns.
The pressure sensor is another popular option in ambient systems. This type of sensor is usually installed beneath the floor to detect floor vibration and pressure in order to identify a fall. In [47], a device-free fall detection system based on a Raspberry Pi and three geophones is proposed. The falling mode is decomposed and characterized with time-dependent floor vibration features. By leveraging a hidden Markov model (HMM), the system achieves a precision of 95.74%. The disadvantage of pressure sensors is the high false positive rate they generate, since a large object dropping on the floor can be registered as a fall. Additionally, even though the sensor itself is cheap, it must be installed under the floor of the whole living area, which requires major home renovation and a complicated power supply for each sensor, ultimately increasing the cost.
Just as with wearable devices, sensor fusion has also been attempted in ambient systems. Infrared sensors, which detect falls through an infrared signature, are mostly used together with other types of ambient sensors to increase accuracy. In [48], Wi-Fi signals were used to detect falls via channel state information (CSI): human motion is recognized because it significantly affects the wireless signal transmission channel. An infrared sensor is used to help locate the subject when the Wi-Fi device suffers from a 'bad antennas' situation. A naïve Bayes classifier is used to predict falls, reaching 91% accuracy. In [49], an infrared sensor was combined with a pressure sensor, reaching a specificity of 96.7% and a sensitivity of 100%. The infrared image observes the whole environment while the pressure sensors analyze floor action. Such a combination can reduce the false alarm rate, since both a large item dropping on the ground and a slow fall can be identified by the infrared sensor. In [50], a Kinect simulator and a range-Doppler radar are used together. The Kinect can help generate a better repository for fall/non-fall classification, since orthogonality between the motion direction and the radar's line of sight can otherwise lead to missed detections. Together, the Kinect sensor and the Doppler radar are able to perform 3D position measurements, achieving fall detection accuracy up to 96%. In Table 2, a summary of ambient fall detection systems is provided.
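A fusion rule in the spirit of the infrared-plus-pressure combination in [49] can be sketched as follows; the confirmation logic and the 10-second dwell threshold are our illustrative simplification, not the published algorithm.

```python
def fused_fall_decision(pressure_spike, ir_low_posture_s):
    """Combine two ambient cues to cut false alarms.

    pressure_spike: True if the floor sensors registered an impact.
    ir_low_posture_s: seconds the infrared view has shown the person
    at floor level (0.0 if the person appears upright).
    """
    if pressure_spike:
        # An impact alone may be a dropped object; require the infrared
        # view to confirm a person is actually at floor level.
        return ir_low_posture_s > 0.0
    # No impact: a slow, quiet fall. Accept only after a sustained
    # low-posture reading (threshold is illustrative).
    return ir_low_posture_s >= 10.0
```

This mirrors the complementarity described above: pressure catches impacts the infrared might miss, while infrared rejects dropped objects and catches slow falls.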
Unlike the sensors in wearable systems, those used in ambient systems differ dramatically from one another. Overall, ambient systems have the advantage of operating in low-light areas and avoiding the privacy restrictions of vision-based systems. They provide a more comprehensive analysis of the user's posture than inertial sensor-based wearable systems by taking environmental factors into consideration. Nevertheless, the limitations of ambient systems are significant. To begin with, they are suitable only for indoor environments and cannot be installed or operated outdoors. Furthermore, the significant number of blind spots in a house or apartment, given that the sensors are always located at fixed positions, makes such systems difficult to implement. Moreover, most ambient systems can only serve one person in the monitored area, meaning no pets, no partner, and no friends. Most importantly, though the sensors themselves are cheap, installation requires major home renovation, as most of the sensors are embedded underneath the floor or in the wall, which can be an expensive setup.
Robotics 2020, 9, 55
Table 2. Summary of ambient fall detection systems (article; proposed approach; measure of performance; challenges).

[40] Approach: radar-based monitoring of frequency changes during a fall, classified with deep neural networks. Performance: Accuracy = 95.64%. Challenges: fine tuning with a greater number of convolutional layers results in overfitting.

[41] Approach: an algorithm for fall detection based on event pattern matching with ultrasonic array sensor signals. Performance: Accuracy = 98%. Challenges: false alarms for people with pets or walking sticks.

[42] Approach: a dynamic range-Doppler trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system; multi-domain features, including temporal changes of range, Doppler, radar cross-section, and dispersion, are extracted from echo signals for a subspace KNN classifier. Performance: average classification accuracy = 95.5%. Challenges: the subject must be in line of sight, a major drawback given the walls and furniture in users' living environments.

[44] Approach: a platform programmed in C# for movement monitoring and fall detection based on data acquired from a Microsoft Kinect v2 sensor. Performance: true positive rate = 82%, false alarm rate = 18%. Challenges: the high false alarm rate requires human review of the RGB image of the event.

[45] Approach: a real-time fall detection system based on the Kinect sensor; the system defines a 3D bounding box of human posture from measurements of the subject's width, height, and depth. Performance: Accuracy = 98.6%. Challenges: requires substantial computing resources.

[46] Approach: an acoustic fall detection system using signals recorded by microphone arrays sampled at 20 kHz. Challenges: false alarms when a large item dropped on the ground is treated as a fall.

[47] Approach: a device-free fall detection system based on geophones; the falling mode is decomposed and characterized with time-dependent floor vibration features, and a hidden Markov model enables precise, training-free recognition. Performance: Precision = 95.74%, false alarm rate = 5.30%. Challenges: the floor vibration profile induced by many other objects falling from a certain height is similar to a human fall.

[48] Approach: a robust and unobtrusive fall detection system using off-the-shelf Wi-Fi devices, which treats fluctuating wireless signals as indicators of human actions. Performance: true positive rate = 92%, false alarm rate = 6%, average accuracy = 91%. Challenges: accuracy decreases with raw CSI data from bad antennas.

[49] Approach: an ambient system combining a floor pressure sensor and infrared; it adjusts detection sensitivity case by case to reduce unnecessary alarms.

Systems Based on Image Processing
While staying inside a home, in addition to ambient systems, people can also be monitored for fall detection by vision-based systems that alert for an emergency or immediate assistance. Current studies on vision-based systems use suitable video cameras for real-time monitoring. Usually, these systems use a depth camera [51] or an RGB camera, such as a Raspberry Pi camera [52] or an indoor video surveillance camera [53], for image acquisition. Depth cameras can calculate 3D information using a single camera and perform better under low-light conditions [51]. An RGB camera, on the other hand, is an ordinary video camera, ranging from low-profile internet cameras to high-end surveillance cameras. It cannot acquire 3D information or work under low-light conditions; however, these limitations can be overcome by using multiple RGB cameras and adding infrared sensors, much as modern cell phone cameras do. Generally speaking, both types of cameras detect subjects well in vision-based fall detection systems.
Vision-based systems mostly follow the same sequence of steps: (i) image preprocessing, (ii) background subtraction or foreground segmentation, (iii) feature extraction, and (iv) event recognition [54]. The dominant difference among approaches lies in the third step: what kind of feature is determined and extracted. There are four major feature extraction methods: (i) shape change monitoring, (ii) posture figuring, (iii) key point tracking, and (iv) inactivity detecting.
The shape change method normally approximates the subject with an estimated shape such as an ellipse or rectangle [55]. It requires fewer computing resources and is simple to model, and is hence adopted by most real-time vision systems. For example, the human subject monitored by a Raspberry Pi [52] or a home surveillance camera [53] is approximated by an ellipse, with a minimal rectangle encompassing the ellipse. The aspect ratio of this rectangle is observed in each frame and compared with a threshold to detect a fall. In [55], C-motion, which describes the velocity of the subject, is combined with shape changes to detect falls: the system first applies motion quantification to detect large motions like falls using C-motion, then analyzes the orientation and proportion of the subject's shape to determine the subject's status, and finally checks for the lack of motion after the fall and counts the length of time the subject lies on the ground. Study [56] uses another technique, estimating the height-width ratio and the distance between the mid-center and top-center positions of the approximating rectangle to detect a fall with a threshold. In [57], a similar shape approximation is performed, while an SVM detects the ellipse shape, the position of the head, and the vertical and horizontal projection histograms in order to identify the subject's activity. Study [58] approximates the subject as a voxel shape using multiple video cameras; the subject's status is then classified as upright, on the ground, or in between, based on the height of the voxel shape, using fuzzy logic. The disadvantage of the shape change method is that, to keep calculations fast and small, it sacrifices accuracy by approximating the subject with simple geometric shapes.
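The aspect-ratio idea behind the shape change method can be sketched directly: fit a bounding rectangle to the foreground mask and compare its width-to-height ratio with a threshold. The mask representation and the threshold of 1.0 are illustrative choices, not taken from any cited system.

```python
def bounding_box(mask):
    """Smallest rectangle enclosing the foreground (1) pixels of a
    binary mask given as a list of rows; returns (top, left, bottom, right)."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return min(rows), min(cols), max(rows), max(cols)

def is_fall_frame(mask, ratio_threshold=1.0):
    """Shape-change cue: an upright person yields a tall, narrow box
    (width/height < 1); a person on the ground yields a wide, flat one."""
    top, left, bottom, right = bounding_box(mask)
    width = right - left + 1
    height = bottom - top + 1
    return width / height > ratio_threshold
```

A real pipeline would obtain the mask from background subtraction and smooth the decision over several frames before raising an alarm.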
By contrast, the posture figuring method is much more detailed, as it either tracks the subject's joints or draws the body contour to specify their posture, hence its higher accuracy. As shown in Figure 2, after background subtraction in [59], a Kalman filter with OpenCV is used to keep track of the person by identifying a set of points in the areas of interest. The system is even sensitive enough to notice small movements while the subject is standing still. A KNN algorithm predicts a fall with an accuracy of 96.9%. In [54], three depth cameras work independently: the subject's body contour is used as the foreground feature, which each camera source analyzes and labels as a fall or non-fall event, and a voting technique then makes the final decision by majority vote. This method reaches an accuracy of up to 96.5%, close to that of [59], but consumes even more computing resources due to the voting rule. To balance the trade-off between accuracy and efficiency, the method proposed in [60] adopts posture figuring by measuring the subject's body contour; then, instead of machine learning, the authors simply calculate a threshold line for each frame, and if the subject is positioned below the threshold, a fall is detected. The posture figuring method can surely guarantee high accuracy, yet it is computationally very expensive: it requires an enormous amount of training data and is possibly not fast enough for real-time implementation. Thus, the shape change method is currently more popular, at a small sacrifice in accuracy.
Key point tracking is another compromise made from posture figuring in order to save computing expenditure. This method normally projects the subject's posture but checks only a few key feature points instead of all of the pixels. In [51], a robust fall detection system based on human body part tracking using a depth camera is proposed. The 3D body joints are extracted first, and then the head and hip, as the most visible body parts, are tracked. This strategy proves worthwhile, as the frame rate of the camera is 30 fps and the joint extraction takes only a few milliseconds. Eventually, the head joint distance trajectory is used as the input feature vector to be analyzed by SVM, generating an accuracy of 97.6%. Poonsri et al. [61] proposed an improved fall detection algorithm using consecutive-frame voting. It first subtracts the background using a mixture of Gaussians (MoG) model to detect the human subject. The contour of the subject is identified as the feature, and only its centroid is tracked. Events are classified on each frame; consecutive-frame voting then raises the prediction accuracy to 91.38%, up from their original 86.1%. Similarly, a depth camera is adopted in [62] to detect falls by analyzing features such as the center of gravity. Neural networks are used to train classifiers, among which the MLP generates the highest accuracy of 98.15%. On top of saving computing cost, key point tracking can also help avoid the occlusion issue, as the key points can be chosen from body parts that are less likely to be blocked by furniture, such as the head. However, precisely because only key points are tracked, information may be lost, leading to false alarms such as labeling quickly sitting down as a fall, or to failures in detecting a slow fall.
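The consecutive-frame voting idea of Poonsri et al. [61] can be sketched as a sliding majority vote over the per-frame classifications; the window size here is an assumption for illustration, not the value used in [61]:

```python
from collections import Counter

def consecutive_frame_vote(frame_labels, window=5):
    """Smooth per-frame classifications by voting over a sliding window.

    The event reported for frame i is the majority label among the last
    `window` per-frame classifications, which suppresses isolated
    misclassifications on single frames.
    """
    smoothed = []
    for i in range(len(frame_labels)):
        votes = frame_labels[max(0, i - window + 1): i + 1]
        smoothed.append(Counter(votes).most_common(1)[0][0])
    return smoothed

labels = ["walk", "walk", "fall", "walk", "walk", "fall", "fall", "fall", "fall"]
# The isolated "fall" at frame 2 is voted away; the sustained run survives:
print(consecutive_frame_vote(labels))
# → ['walk', 'walk', 'walk', 'walk', 'walk', 'walk', 'fall', 'fall', 'fall']
```

This is why the voting variant trades a little latency (a fall is confirmed a few frames late) for the reported jump in accuracy from 86.1% to 91.38%.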
As for inactivity detection, it is the fastest method since it requires almost no computing resources. However, this method is seldom used alone because of its high false alarm rate. It also requires the subject to be lying on the ground for a while, with their life potentially at stake, before a fall is detected. Thus, this method is usually combined with the three methods mentioned above to serve as a "double insurance" [55,56]. A snapshot of recent research on fall detection using image processing systems is presented in Table 3.
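The "double insurance" inactivity check can be sketched as follows; the frame rate, motion threshold, and lying duration here are illustrative assumptions (for comparison, [52] uses a 5-second post-fall window):

```python
def confirm_fall_with_inactivity(motion_per_frame, fps=30, lying_seconds=5.0,
                                 motion_threshold=0.02):
    """Double-insurance check: confirm a suspected fall only if the subject
    then stays (nearly) motionless on the ground for `lying_seconds`.

    `motion_per_frame` is any per-frame motion magnitude recorded after
    the suspected fall; a long enough run of near-zero motion confirms it.
    """
    needed = int(fps * lying_seconds)
    still = 0
    for m in motion_per_frame:
        still = still + 1 if m < motion_threshold else 0
        if still >= needed:
            return True
    return False

# 150 motionless frames at 30 fps = 5 s of lying still → fall confirmed:
print(confirm_fall_with_inactivity([0.0] * 150))  # → True
```

Used alone this rule is both slow and false-alarm-prone (napping on the floor counts as a fall), which is exactly why the literature pairs it with a shape-change or posture-based trigger.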
To sum up, image processing based fall detection systems are somewhat similar to ambient systems. They share advantages in environmental factor analysis that wearable devices do not have, but they also share the issues of indoor restriction, blind spots, and considerable expense. At the same time, image processing systems have their own unique advantages. Thanks to pattern recognition, they can identify, track, and monitor the target user even when there are multiple people or pets in the monitored area. However, such detailed video recording systems are always subject to privacy concerns, as well as the need for calibration among multiple cameras and complex real-time image processing algorithms that consume a tremendous amount of computing space and power.
Table 3. Summary of image processing based systems.

Article [51]. Proposed approach: A depth camera measures the relationship between the body and the environment. A randomized decision tree (RDT) algorithm is proposed for key joint extraction; an SVM classifier then determines whether a fall motion occurs. Measure of performance: Sensitivity = 95.3%, Specificity = 100%, Accuracy = 97.6%, Error = 2.4%. Challenges: Does not detect a fall event if one of the body joints is hidden by an obstacle.
Article [52]. Proposed approach: Foreground segmentation, motion history image, calculation of C-motion and pace, calculation of the standard deviation of C-motion, and calculation of the orientation of an ellipse (locating the person's foreground) are performed for fall detection using a Pi Camera. A fall event is detected if the standard deviation of the ellipse orientation changes at a high rate. Measure of performance: N/A. Challenges: The proposed approach merely replaces expensive CCTV-camera-based fall detection. If some motion is observed 5 seconds after a fall, the system considers that the fall did not occur.
Article [53]. Proposed approach: Posture-based events captured with a camera resolution of 640 × 480 pixels at 30 fps. The fall detection pipeline includes video acquisition, background subtraction, object detection, and rule-based classification. Measure of performance: N/A. Challenges: Only works if the person lies on the ground for a while, which can cause serious injury.
Article [54]. Proposed approach: A fall detection system with a voting strategy over three depth cameras, each providing a depth image to the fall detector. Measure of performance: Accuracy = 96.5%. Challenges: The voting strategy relies on simple voting only, not weighted voting.
Article [55]. Proposed approach: An RGB camera system detects falls by analyzing the C-motion coefficient, which measures human motion with the help of motion history images showing the pace of the human body. Measure of performance: N/A. Challenges: C-motion works on the velocity of movement; it returns high values even when the subject is merely running.
Article [56]. Proposed approach: Background subtraction, contour-based human template matching, height-width ratio computation, and computation of the distance between the top center and mid center of the rectangle covering the human are performed. Measure of performance: Detection accuracy = 95.2%, False detection = 3.33%. Challenges: Does not detect a fall when the human is very close or parallel to the camera.
Article [57]. Proposed approach: An image processing based system that mainly tracks the head of the subject. It applies SVM to detect the ellipse shape, the position of the head, and the vertical and horizontal projection histograms. Measure of performance: N/A. Challenges: The head's movement is less intensive than the body's, especially during a slow fall.
Article [58]. Proposed approach: 3D representation of humans using multiple cameras, with two levels of fuzzy logic: (1) calibration between cameras to determine the posture of the subject; (2) decision making to identify the subject's activity. Measure of performance: N/A. Challenges: Demands expensive computing. Accuracy also relies on a huge database.
Article [59]. Proposed approach: Image acquisition, foreground segmentation, Kalman filter optical flow occlusion detection, and a kNN classifier. Occlusion detection helps the system detect falls when the person is hidden by an object after falling. Measure of performance: Sensitivity = 96%, Specificity = 97.6%, Precision = 96%, Accuracy = 96.9%. Challenges: Occlusion, light, and ambient conditions affect the fall detection significantly.
Article [60]. Proposed approach: A support system based on depth videos for elderly people living alone in their homes. A region of interest (ROI) is detected by subtracting the background from extracted frames, and a threshold separates the ROI of fall and non-fall. Measure of performance: UR Dataset: fall accuracy = 100%, non-fall accuracy = 82.5%. Challenges: Not applicable to a person already lying on the floor. It also spends most of its computing energy on specifying the posture of the subject; a threshold is then applied instead of machine learning.
Article [61]. Proposed approach: An improvement of fall detection using consecutive-frame voting to improve previous accuracy. The method consists of five stages: (1) human detection, (2) low-level feature extraction, (3) human centroid tracking, (4) event classification, and (5) consecutive-frame voting. Measure of performance: Accuracy = 91.38% (86.1% without voting).

Combined Systems Incorporating Two or More Technologies
As discussed above, each system has its distinct advantages as well as unique disadvantages. In order for them to complement each other, combinations of these various techniques are being studied. Combined systems for fall detection are essentially a network of sensor nodes working in correlation to detect a fall. In [63], acceleration data are observed from a sensor node attached to the body and forwarded to the base station, a computer, to detect falls, while RF signal strength is used to locate the user.
Their results indicate that with such a combination, normal activities do not produce false positives. In [64], a method for detecting falls using an indoor localization system combining ultra-wideband (UWB) and an accelerometer is presented. The accelerometer is placed in a near-head position as the tracker, since the head experiences the largest vertical displacement during a fall. The UWB units are installed in the living area as anchors to determine the user's location with an accuracy on the order of 10 cm. Ranging data are exchanged between the tracker and anchor nodes to compute the distance from the tracker to each anchor. Unlike acceleration based systems, such a combined system focuses on detecting the user's posture rather than sudden movements. Thus, it is capable of detecting slow falls, which are not likely to be detected by traditional wearable devices. Other studies performed with similar combinations all claim better accuracy, as listed in Table 4. In [65], a robotic platform is presented that not only combines all three types of systems but also utilizes telepresence technology to enable the caregiver to evaluate the collected data in real time, which provides an additional layer for detecting false positives.
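The UWB ranging step boils down to estimating the tracker's position from its measured distances to fixed anchors. A minimal 2D trilateration sketch follows; the anchor layout, number of anchors, and function name are our assumptions, not details of [64]:

```python
def trilaterate(anchors, distances):
    """Estimate the 2D tracker position from ranges to three fixed anchors.

    Subtracting pairs of circle equations (x - xi)^2 + (y - yi)^2 = ri^2
    yields two linear equations in x and y, solved here directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Anchors at three room corners; tracker actually at (1, 1):
pos = trilaterate([(0, 0), (4, 0), (0, 3)],
                  [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])
print(pos)  # → (1.0, 1.0)
```

In the combined system, a position estimate like this (extended to 3D) is what lets the posture, rather than a sudden acceleration spike, indicate a slow fall: a near-head tracker that settles close to floor height is the telling signal.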
Admittedly, combined systems neutralize some weak points of each single system and bring higher accuracy in detecting falls, yet their vulnerability is still obvious: such combined systems are unable to tell the difference between an accidental fall and a self-initiated activity. Besides, combined systems are likely to be more expensive and less ergonomic. Table 4. Summary of combined systems.

Article [63]. Proposed approach: Using a small device worn on the waist and a network of fixed motes in the home environment, the occurrence of a fall is detected along with the location of the victim. Low-cost, low-power 3D accelerometers are used to detect the fall while the RF signal is used to locate the person. Measure of performance: N/A. Challenges: Accuracy decreases due to raw CSI data from bad antennas, barriers, and long distance.
Article [64]. Proposed approach: A wearable device with multiple nodes is installed in a near-head position on the patient. The posture of the nodes is then tracked by ultra-wideband radar. Measure of performance: N/A. Challenges: Calibration for standing, sitting, and lying-on-the-ground postures is needed.
Article [65]. Proposed approach: A fall detection system based on a combination of sensor networks and home robots, comprising body-worn sensors and ambient sensors distributed in the environment. Measure of performance: N/A. Challenges: Packet transmission delay is relatively large. Power consumption also impacts battery life.
Article [66]. Proposed approach: A fall detection system with an improved framework obtained by fusing the Doppler radar sensor result with a motion sensor network. Measure of performance: N/A. Challenges: The experiment relies on only a portion of a large dataset; further testing is needed.

Discussion
We have reviewed and summarized the different types of fall detection systems that currently exist. Based on this review, we discuss their issues, trends, and challenges below.

Issues
In the previous sections, fall detection systems/devices were categorized as wearable, ambient, and image processing-based systems. Such diversified systems focus on different aspects of a fall, from acceleration to impact and the posture of the user. Each type of system obviously brings unique benefits, yet is accompanied by certain limitations that need to be discussed. A fair comparison among the different system types is difficult, as each study draws its own conclusions using different approaches applied to unique datasets acquired from specialized hardware in distinctive experiments. Generally speaking, wearable systems offer higher portability but less reliability, while ambient and vision systems are more robust but restricted in working area. The characteristics of the different systems are listed in Table 5. Acceptability of fall detection systems can be a problem: people have to weigh the cost of the system against the benefits it will bring. Ambient and vision-based fall detection systems usually consist of a group of sensors, detectors, and cameras. Such a package not only requires a large investment but also demands major renovation of the daily living environment. Additionally, the inherent lack of portability and the limited monitoring area prevent these systems from benefiting elderly people outdoors, as well as outdoor athletes such as runners, bikers, and skiers. In comparison, a wearable device imposes less of a financial burden and requires no expert installation. It is also less complicated and more portable, which makes it seem the more acceptable choice. However, such a system depends on the user not only always remembering to wear it, especially during nighttime, but also choosing to wear it despite the lack of ergonomic design and sustainable battery life.
False alarms are a major issue to consider with fall detection systems. Wearable devices sacrifice reliability and robustness for the advantages of portability and cost efficiency. A wearable device is triggered only by inertial sensors such as an accelerometer, gyroscope, or IMU. As a result, it has limited ability to distinguish real falls from ADLs that generate similar acceleration and orientation. Such limitations can be amplified outside of the laboratory environment due to the lack of ability to handle environmental factors. Nonwearable systems take environmental factors into consideration in order to decrease the false alarm rate. However, their performance is mainly ensured by focusing on a single target. Any other living creature, such as a pet, a partner, or a family member other than the subject her/himself, will bring massive disturbance and noise to these systems.
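A minimal sketch makes the false-alarm problem concrete: an inertial-only detector thresholds the acceleration magnitude, and a vigorous ADL produces the same spike as a genuine fall. The threshold value and sample data here are illustrative assumptions, not taken from any cited device.

```python
import math

def accel_magnitude(sample):
    """Magnitude of a 3-axis accelerometer sample (ax, ay, az), in g."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def threshold_fall_detector(samples, impact_threshold=2.5):
    """Flag any window whose acceleration magnitude exceeds the impact
    threshold in g. This is all that a purely inertial trigger sees."""
    return any(accel_magnitude(s) > impact_threshold for s in samples)

fall = [(0.0, 0.0, 1.0), (0.1, 0.2, 3.1), (0.0, 0.0, 0.2)]
hard_sit = [(0.0, 0.0, 1.0), (0.2, 0.1, 2.9), (0.0, 0.0, 1.0)]
print(threshold_fall_detector(fall))      # → True
print(threshold_fall_detector(hard_sit))  # → True (a false positive)
```

Both windows exceed the threshold, even though the second is just someone sitting down hard; distinguishing them requires extra context (posture, inactivity, environment) that a lone inertial sensor cannot supply.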
The lack of a public database surely acts as a major obstacle on the road to developing fall detection systems. Such a public database should include both real-life fall data and a standard evaluation framework across different systems. Most studies we reviewed in this paper collect data from falls simulated distinctly by subjects of various ages, sizes, and genders, with different types of detectors placed at different positions, which is extremely difficult to reproduce. Thus, a fair evaluation and comparison among different systems and algorithms seems tough. Moreover, even if we assume all these methods are valid within certain criteria, it is still unclear whether they would maintain excellent performance outside a laboratory environment, due to the insufficiency of real-life fall data.

Trends
In order to find solutions to these issues, studies are being conducted. In this section, current and future trends are described.
Sensor fusion is one of the most popular trends in developing fall detection systems; it combines multiple sensors, systems, and algorithms. As mentioned in the previous sections, combining different types of sensors, systems, and algorithms can surely improve fall detection performance, as these sensors and systems complement each other. Thinganos et al. [67] present a comparison between three proposed data fusion schemes and one study in which only one type of sensor and algorithm is used, providing useful insights into the problem of fall detection.
To decrease the price and increase the usage rate, studies integrating fall detection into smartphones have been proposed since 2009 [68]. As for hardware, smartphones are naturally made for data acquisition and wireless transmission, and nowadays they commonly integrate inertial sensors, so users do not need to bear extra expenses for additional devices. As for software, the open source environment enables a large number of developers to update apps and improve algorithms promptly, pervasively, and precisely, which promises users the latest protection.
To reduce the rate of false positive events, machine learning techniques have started being applied in fall detection devices. Although traditional threshold based methods are able to detect when a fall occurs, the rate of false positives is always a problem. Each user is unique in height and weight and behaves differently in a variety of living environments; a single threshold is neither sufficient nor accurate. A machine learning approach is more sophisticated and thus more adaptive, leading to better performance. Currently, multiple machine learning methods have been proposed, such as decision trees [69], nonlinear regression [70], dynamic Bayesian networks [71], and many more. Yet no single method is widely recognized as the most effective, and new approaches are still being introduced. Fortunately, a real-world fall repository is being developed [72], and soon there will be enough real-life fall data for training and testing all of these algorithms.
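The first step of any such machine learning pipeline is turning a raw sensor window into a feature vector that a decision tree or SVM can be trained on. The sketch below uses three features that are common choices in the literature (peak magnitude, range, post-peak mean), not the feature set of any specific cited study:

```python
def extract_features(window):
    """Turn a window of acceleration magnitudes (in g) into a small
    feature vector for a fall/ADL classifier.

    Features: the impact peak, the spread (peak minus minimum, capturing
    the free-fall dip followed by impact), and the mean after the peak
    (a low value suggests lying still rather than resuming activity).
    """
    peak = max(window)
    spread = peak - min(window)
    after = window[window.index(peak) + 1:] or [peak]
    post_peak_mean = sum(after) / len(after)
    return [peak, spread, post_peak_mean]

# A fall-like window: free-fall dip, sharp impact, then near stillness:
print(extract_features([1.0, 0.3, 3.0, 0.5, 0.5]))  # → [3.0, 2.7, 0.5]
```

A classifier trained on many labeled vectors of this kind can then learn per-user, per-environment decision boundaries instead of one global threshold, which is exactly the adaptivity the threshold methods lack.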
There is a new trend towards wearable EEG devices to ease the difficulty of distinguishing real falls from fall-like ADLs. The EEG (electroencephalogram) is a widely used noninvasive method for measuring brain dynamics and performance [73]. When brain cells called neurons are busy processing information, they emit electrical signals [74], which can be recorded by attaching small metal electrodes to the scalp according to the 10-20 location system [75]. The event-related potential (ERP) is one of the major brain responses measured by EEG; it is the electrical potential in the brain in response to specific events [76]. An ERP waveform consists of a number of ERP components, each of which is indexed by its polarity (positive- or negative-going voltage), timing, scalp distribution, and sensitivity to task manipulations [76]. For instance, a negative-going peak that is the first substantial peak in the waveform and often occurs about 100 milliseconds after a stimulus is presented is called the N100 or N1 [77]. An ERP can be evoked by external stimulus events, but not as strongly by spontaneous events. Adkin's study [78] on cortical responses suggested that EEG can be a useful tool in fall detection. A series of predictable or unpredictable whole-body perturbations requiring balance corrections to maintain upright stability was conducted on eight subjects wearing EEG. The results indicate that stronger ERP N1 components can be detected when subjects experience an unexpected loss of balance than an expected one. Future work is needed to find out whether the EEG signal, measured directly from the brain, responds differently to an external stimulus (a fall) than to a self-initiated event (an ADL). An experiment having subjects perform ADLs and simulated falls while wearing both a wearable fall detection system and EEG would be a good start. The high false alarm rate issue could potentially be eased if the ERP N1s show a clear distinction between falls and ADLs.
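The ERP analysis behind this idea can be sketched in two steps: averaging time-locked epochs to expose the ERP, then measuring the N1 amplitude in its latency window. The sampling rate, window bounds, and function names below are our assumptions for illustration:

```python
def erp_average(epochs):
    """Average time-locked EEG epochs to expose the event-related
    potential: random background activity cancels, the ERP remains."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]

def n1_amplitude(erp, fs=250, window_ms=(80, 140)):
    """Most negative ERP value in the N1 latency window (~100 ms
    post-stimulus). Per [78], a larger (more negative) N1 after an
    unexpected perturbation than after a self-initiated movement is
    what could separate falls from ADLs."""
    lo = int(window_ms[0] * fs / 1000)
    hi = int(window_ms[1] * fs / 1000)
    return min(erp[lo:hi + 1])
```

A fall-versus-ADL experiment would compute `n1_amplitude` on perturbation-locked and movement-locked averages and test whether the two distributions are separable; if they are, the EEG channel could veto inertial false alarms.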
Currently, there is no available fall detection device based on wearable EEG. This gives us the motivation to pursue the development of an ergonomic wearable fall detection system based on EEG and inertial sensors.

Challenges
Challenges arise as people discover new methods for improving fall detection devices.
To start with, fusion-based systems integrate multiple systems, devices, and algorithms. Such combinations raise computing complexity, require calibration between the different systems, and further increase system cost, which could overwhelm elderly users and cause them to reject such complexity.
As for smartphone-based systems, smartphones were not initially designed for fall detection. The installed accelerometer is only suitable for measuring mild activity, with a narrow range of up to 2 g and a low sampling rate. The restriction of carrying the smartphone in a standardized position to ensure high detection accuracy goes against the nature of smartphone use. Besides, real-time monitoring requires continuous data collection, which undermines performance and drains the smartphone's battery. Smartphones should be usable in a manner that places no restriction on how, where, and when people want to use them.
Privacy concerns become even more debatable with the building up of real-life fall repositories. On one hand, privacy concerns should not stand in the way of the benefits brought by technology; on the other hand, privacy should not be sacrificed for technological development. Admittedly, the level of privacy intrusion differs among the system types. A privacy protection mechanism, such as data encryption, is indispensable in any case.
Finally, there is still much work to do in applying EEG to fall detection, as EEG systems are normally bulky, mainly used in hospitals for seizure detection, and not portable at all. Although the idea of wearable EEG was already introduced in [79,80] and, as an example, ear EEG has been proposed [80,81] and proved feasible for epilepsy seizure detection in [82], there are as yet no solid data validating its applicability to fall detection.

Conclusions
We have discussed and analyzed the different fall detection systems that currently exist, identifying their benefits, issues, challenges, and trends. Fall detection systems are important and complex, yet still developing. They have great potential in broadly aiding and protecting against falls, fear of falling, and even the health consequences after a fall. However, as of now there is no satisfying solution because, costs aside, current systems lack the ability to tell real falls from fall-like ADLs. Future work is still needed to build a large, shared real-world (not lab-simulated) fall database for advanced machine learning algorithms. There is also a new trend towards wearable EEG devices: EEG directly measures signals from the brain, which respond differently to an external stimulus (a fall) than to a self-initiated event (an ADL). To our knowledge, there is no device that integrates wearable EEG into fall detection. Our next goal is to design and test an ergonomic wearable fall detection device that applies EEG.
Author Contributions: Z.W. and V.R. assembled and prepared the literature and did the writing. A.G. and U.G. helped with conceptualization, supervision, and contributed to the analysis. All authors have read and agreed to the published version of the manuscript.

Funding:
No funding was provided to any of the authors to perform this study.