Systematic Review

Fall Detection in Elderly People: A Systematic Review of Ambient Assisted Living and Smart Home-Related Technology Performance

by Philippe Gorce 1,2 and Julien Jacquier-Bret 1,2,*

1 University of Toulon, CS60584, Cedex 9, 83041 Toulon, France
2 International Institute of Biomechanics and Occupational Ergonomics, Avenue du Docteur Marcel Armanet, CS 10121, 83418 Hyères Cedex, France
* Author to whom correspondence should be addressed.
Sensors 2025, 25(21), 6540; https://doi.org/10.3390/s25216540
Submission received: 8 September 2025 / Revised: 20 October 2025 / Accepted: 20 October 2025 / Published: 23 October 2025
(This article belongs to the Special Issue Intelligent Sensors and Robots for Ambient Assisted Living)

Highlights

What are the main findings?
  • Non-wearable sensors and hybrid solutions (wearable + non-wearable sensors) achieved the highest fall detection performance.
  • Deep learning methods produced the best performance results.
What are the implications of the main findings?
  • This work provides a systematic review of the performance of fall detection systems.
  • Identifying the advantages of the different solutions in terms of performance helps researchers, practitioners, and policymakers design and implement more effective fall detection systems.

Abstract

Fall detection systems in ambient assisted living (AAL) and smart homes are essential for the comfort, safety, and autonomy of elderly people. The aim of this study was to investigate the performance of these systems considering categories of sensors and methods used. A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Seven open databases were screened without a date limit: PubMed/MedLine, Google Scholar, ScienceDirect, Science.gov, Academia, IEEE Xplore, and Mendeley. The article selection and data extraction were performed by two authors independently. Among the 473 unique records, 80 studies were selected. Five fall detection performance parameters (accuracy, precision, sensitivity, specificity, F1-score) and two computation speed parameters (training and testing time) were extracted and classified according to three sensor categories (wearable, non-wearable, and hybrid solutions), and four methods (deep learning, machine learning, threshold, and all others). The ANOVA results showed that wearable sensors performed the worst in fall detection. Deep learning methods produced the best results for the five parameters. Identifying the advantages of different solutions is a major challenge for researchers, practitioners, and policymakers in the design and implementation of more effective fall detection systems.

1. Introduction

Improvements in quality of life have led to increased life expectancy and, as a result, a rise in the proportion of elderly people worldwide. Currently, there are over 1 billion people aged over 60. According to the United Nations, this number is expected to double by 2050 (2.1 billion) and triple (3.1 billion) by 2100 [1]. Preserving their autonomy and quality of life is a major challenge.
Falls are a major cause of injuries that can have dramatic consequences. They directly impact comfort and independence and generate significant costs for healthcare systems. For example, in 2014, 29 million falls among older Americans caused 7 million injuries and $31 billion in medical expenses [2]. In the European Union, more than €25 billion is spent on fall-related healthcare costs each year, an expenditure that is expected to rise to approximately €45 billion by 2050 as populations age across the continent [3]. In China, the direct medical cost of falls among older adults exceeds 5 billion yuan per year [4]. Consequently, extensive research and development on fall detection systems are essential to ensure safety, particularly at home.
Ambient Assisted Living (AAL) is a concept that uses technology to improve the quality of life and well-being of elderly people. Its aim is to preserve their independence and autonomy while ensuring their safety. Activity recognition is the most widely used methodology in the field of assisted living technology. It involves automatically detecting and classifying activities performed by occupants, including falls, using sensors. Researchers have used various methodologies based on different sensor types. The most common are wearable sensors. Their small size allows them to be integrated into clothing, watches, or other devices. They collect physical and contextual information for local processing or direct transmission to the central unit of an AAL system [5]. These include accelerometers, gyroscopes, magnetometers, and orientation sensors. They are often integrated into inertial measurement units [6,7] or portable devices such as bracelets [8,9] or smartwatches [10,11]. They are also found in smartphones, which are used alone or in combination with biological and behavioral monitoring [12,13] due to their high number of integrated sensors and cost-effectiveness [14,15]. Other non-wearable sensors have been used to detect falls without being attached to the body. These solutions play a crucial role in the functionality and effectiveness of AAL systems. Operating passively, these sensors autonomously monitor the occupants of a room without manual intervention. They include all camera systems (RGB [16], Kinect [17], etc.), radio frequency sensors [18,19], radars [20], vibration sensors [21], thermal sensors [22], and infrared sensors [23]. Other authors have combined wearable and non-wearable sensors to offer hybrid solutions [24,25].
Regardless of the sensors used, the data were processed using algorithms of varying complexity. The simplest fall detection techniques rely on fixed [26] or adaptive [27] threshold detection. Thresholds are also used in hybrid versions to trigger a sensor (e.g., a video camera) when they are exceeded [25]. Over the last decade, Machine Learning (ML) methods have applied more advanced algorithms that exploit larger databases to train classifiers to identify a fall. Support Vector Machines (SVMs) [28,29], Artificial Neural Networks (ANNs) [30,31], K-Nearest Neighbors (KNNs) [32,33], and Decision Trees (DTs) [34,35] are among the most widely used. More recently, Deep Learning (DL) methods have emerged. They automatically identify the characteristics necessary for detecting or classifying an activity from raw data, without significant human intervention [36]. DL methods mainly rely on four categories of algorithms: Convolutional Neural Networks (CNNs) [37,38], Long Short-Term Memory (LSTM) networks [39,40], Recurrent Neural Networks (RNNs) [41,42], and auto-encoders [43].
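To make the contrast concrete, the following minimal Python sketch implements the fixed-threshold idea on accelerometer data. The 2.5 g threshold and the synthetic signal are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def detect_fall_fixed_threshold(acc_xyz, g=9.81, impact_threshold_g=2.5):
    """Return the sample indices where the acceleration magnitude,
    expressed in g, exceeds a fixed impact threshold (candidate falls).

    acc_xyz: (n_samples, 3) array of accelerometer readings in m/s^2.
    """
    magnitude = np.linalg.norm(acc_xyz, axis=1) / g  # convert to g units
    return np.where(magnitude > impact_threshold_g)[0]

# Example: a quiet signal with one synthetic impact spike
signal = np.tile([0.0, 0.0, 9.81], (100, 1))
signal[60] = [20.0, 5.0, 25.0]  # about 3.3 g impact
print(detect_fall_fixed_threshold(signal))  # -> [60]
```

An adaptive variant [27] would replace the constant `impact_threshold_g` with a value updated from the wearer's recent activity level.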
Fall detection technologies must be reliable and effective to ensure an appropriate response. Authors have evaluated the performance of their algorithms using quantitative parameters such as accuracy, precision, specificity, sensitivity, and the F1-score. These parameters are computed from the confusion matrix, which contains the numbers of true and false positives and negatives [38,44]. Some studies have used one [45,46] or several [47,48] of these parameters. Processing speed has also been evaluated using training and testing times [49].
Several studies have proposed a state-of-the-art overview of existing techniques and algorithms in this field. Two recent studies conducted bibliometric analyses. Li et al. [50] proposed a synthesis dedicated to fall detection using wearable sensors over the last 10 years. Sanchez et al. [51] investigated work focused on Human Activity Recognition (HAR) in the AAL and smart home fields based on the 100 most-cited studies. Other works have been presented as reviews focusing on a specific aspect of fall detection. Casilari-Pérez et al. [52] and Iadarola et al. [53] provided a detailed summary of techniques based on wearable sensors, either solely in the AAL context [53] or reporting performance information in a more general context [52]. Other authors have focused on methods. Amir et al. [54] summarized studies using ML (without specifically focusing on AAL and elderly people), while Islam et al. [55] and Gaya-Morey et al. [56] studied DL methods. Only the study by Islam et al. [55] reported performance parameters, but not in an AAL context. Guerra et al. [57] provided a state-of-the-art overview of the different types of HAR sensors in AAL, reporting the methods used but without focusing on performance. Finally, Ren et al. [58] produced a similar analysis, adding performance information but outside the AAL context.
To our knowledge, no study in the literature has investigated the performance of fall detection systems in AAL. Doing so requires consideration of different types of sensors (i.e., wearable sensors, non-wearable sensors, and hybrid solutions) and detection methods (i.e., threshold, ML, and DL). The objective of this study was to examine whether there is a relationship between the type of sensor or method used and performance parameters (detection, learning speed, and detection speed). The major contribution is to present the quantitative values of the different performance parameters grouped by sensor category, method, and evaluation dataset in order to identify the most effective solutions. Given the importance of AAL systems for the autonomy of elderly people, identifying the advantages of different solutions is a major challenge for researchers, practitioners, and policymakers in the design and implementation of more effective fall detection systems.

2. Related Works

In the context of fall detection, AAL system monitoring solutions can be based on a variety of technologies. There are three main categories of sensors: wearable sensors, non-wearable sensors, and hybrid solutions that combine the previous two categories [57]. Wearable devices generally consist of small sensors that can be integrated into clothing, accessories such as jewelry, and, more recently, electronic devices such as watches and smartphones. They collect information that is processed locally or transmitted directly via appropriate communication protocols to the central unit of an AAL system [5]. Accelerometers and gyroscopes are the most commonly used wearable sensors and are often combined in an inertial measurement unit [59]. They measure linear acceleration and angular velocity in their own local three-dimensional coordinate system. They are an attractive solution because they are inexpensive, compact, and can continuously measure motion-related parameters in real time [60] with high precision and accuracy [61]. These sensors have been used to study the effect of different sensor positions on fall detection [38] and to optimize energy expenditure [62]. Two approaches are commonly used to detect falls: threshold-based and artificial intelligence-based, the latter mainly relying on machine learning and deep learning methods. Pham et al. [63] used a fixed-threshold approach based on accelerometer data to detect falls recorded in young people. They achieved an accuracy of 92%, a sensitivity of 93.3%, and a specificity of 91.4%. Based on similar data, Wu et al. [64] used an adaptive threshold approach, with a sensitivity of 90% and a specificity of 92%. Yu et al. [65] used an ML method based on accelerometer data and obtained very high performance (100% sensitivity and 99.8% specificity). Santos et al. [38] used CNN-type DL algorithms that achieved between 99% and 100% for accuracy, specificity, sensitivity, and precision on the URFD dataset.
Non-wearable sensors play a fundamental role in the effectiveness of AAL systems. They include devices that enable passive recognition of human activities, i.e., without being directly worn by individuals. They are deployed in a room and autonomously monitor the occupants within it. Unlike wearable systems, these solutions require a more extensive infrastructure, including power and data cabling, wired or wireless connection to a data storage and processing server, and a setup and calibration phase prior to use, which often entails higher costs [66]. Vision-based fall detection, relying mainly on different types of cameras (RGB, depth, infrared, etc.), holds an important place among non-wearable sensors. Li et al. [67] used Kinects combined with CNN-type DL analysis and achieved detection performance close to 100% (99.98% accuracy, 100% sensitivity, and 99.98% specificity). Shu et al. [68] used a relevance vector machine algorithm derived from machine learning to detect falls using cameras, with a training accuracy of 94%. Other studies have used environmental sensors such as thermal, infrared, acoustic, or ultrasonic sensors. Bharathiraja et al. [69] deployed a system using an AMG8833 infrared thermal sensor. In this technique, the sensor provides a set of temperature data corresponding to thermal signatures, which are affected by changes in environmental conditions. Popescu et al. [70] applied a KNN (ML) approach to data recorded by three acoustic sensors, supplemented by sound-level information, and achieved 100% fall detection. Kittiyanpunya et al. [71] used a millimeter-wave frequency-modulated continuous-wave radar. They transformed the radar scattering signals into point clouds and Doppler velocity data. With an LSTM network, they achieved 99.5% accuracy in fall detection. Several studies have also explored floor-based fall detection. Clemente et al. [72] used seismic sensors placed on the floor of an apartment. Using SVM analysis, the authors detected falls with 73% accuracy and 86% sensitivity.
Finally, hybrid solutions coupling the two categories of wearable and non-wearable sensors have been implemented. They combine the advantages of each solution while limiting the disadvantages, which in principle increases the ability to detect falls. The most common combination associates inertial data with video analysis. Kwolek et al. [25] used information from a Kinect and an inertial measurement unit to detect falls. Detection is carried out in two stages: the inertial measurement unit detects a potential fall based on movement (threshold analysis), and the fall is then confirmed using an SVM classifier applied to the Kinect’s depth images. With this method, the authors achieved 98.33%, 96.77%, 100%, and 96.67% for accuracy, precision, sensitivity, and specificity, respectively. Li et al. [73] proposed a similar approach based on accelerometer data recorded by a smartphone and video data captured by a Kinect, reporting a best accuracy of 100%. Although this high level of performance is achieved in part through the redundancy and complementarity of the information, hybrid solutions nevertheless have certain limitations. The use of different types of sensors requires more sophisticated algorithms, which increases processing complexity (synchronization, multiple layers, etc.) and demands greater storage capacity. The installation and configuration of these solutions are also more complex (cables, synchronization, communication with the central server, etc.). All these factors increase the overall cost of the system.
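A two-stage pipeline of this kind can be sketched as follows. This is a simplified, hypothetical rendering of the strategy described in [25], not the authors' actual implementation; `extract_depth_features` is a placeholder for whatever descriptors are computed from the depth images.

```python
import numpy as np

def hybrid_fall_decision(acc_window, depth_frame, classifier, extract_depth_features,
                         g=9.81, trigger_threshold_g=2.5):
    """Stage 1: cheap inertial trigger; Stage 2: visual confirmation.

    acc_window: (n, 3) accelerometer samples around the current instant (m/s^2).
    classifier: a trained classifier (e.g., an sklearn.svm.SVC fitted on
                depth-image features labeled fall / non-fall).
    """
    magnitude = np.linalg.norm(acc_window, axis=1) / g
    if magnitude.max() <= trigger_threshold_g:
        return False  # no candidate impact, skip the costly visual stage
    features = np.asarray(extract_depth_features(depth_frame)).reshape(1, -1)
    return bool(classifier.predict(features)[0])  # 1 = fall confirmed
```

The design rationale is that the inexpensive inertial check runs continuously, while the heavier visual classification is invoked only on candidate impacts, which limits both computation and false alarms.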

3. Materials and Methods

The protocol was registered in PROSPERO (CRD420251136411), and the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines proposed by Harris et al. [74] and Moher et al. [75] were followed to report the results of the present systematic review.

3.1. Search Strategy

The aim of this research was to investigate the performance of fall detection systems for elderly people in an AAL context. All generations of AAL systems, i.e., wearable sensors, non-wearable sensors, and hybrid solutions, were included in the analysis. The performance of the different systems was evaluated using the following parameters: accuracy, precision, sensitivity, specificity, F1-score, and training and testing time. To achieve this objective, a detailed list of keywords linked by the logical operators AND and OR, parentheses, and the wildcard operator “*” was used in seven open databases: PubMed/MedLine, Google Scholar, ScienceDirect, Science.gov, Academia, IEEE Xplore, and Mendeley. Because the search engines of the databases differ, the list of keywords was slightly adapted to each one. Table 1 details the search strategy for each database explored. The search was conducted between June 22 and July 10, 2025, and all entries were considered without any date restrictions.
Data management was conducted as follows: (1) all items found in each database were merged into a single table (Microsoft Excel file); (2) the automatic detection function was used to remove multiple entries; (3) two independent reviewers (PG and JJB) separately evaluated the title and abstract of each unique entry to identify relevant works according to the inclusion/exclusion criteria; (4) the results were compared to establish the list of articles to be evaluated based on their full text; (5) the two reviewers evaluated the remaining full-text articles separately, and any article that did not meet the inclusion/exclusion criteria was excluded; (6) the lists of remaining articles from each reviewer were compared to determine the final list of articles to be included. All discrepancies were resolved by rereading the article and discussing until a consensus was reached.
The second stage of the selection process used the snowball method. During the full-text evaluation, the list of references of each study was analyzed. A list of potentially relevant works was compiled. Each study was then evaluated, and those that met the selection criteria were manually added to the list of included works.

3.2. Selection Criteria

Several inclusion criteria were applied to select studies that met the objective. First, the studies had to be conducted in the context of AAL or smart homes for the elderly. Only published, peer-reviewed original studies were considered. The reported method had to enable fall detection and had to include the following information: sensors, methods, and datasets used. The performance data had to be available through quantitative parameters: accuracy, precision, specificity, sensitivity, F1-score, or training and testing time.
The exclusion criteria were: (1) the study design was a conference paper, book or book chapter, review, report, case report, or case study; (2) the language of publication was not English; (3) the article had not been peer-reviewed; (4) the sample did not consist of older people; (5) insufficient methodological detail or no performance parameters were available.

3.3. Data Extraction and Classification

The data collected for each included article were as follows: first author name, year of publication, type of sensors used (accelerometer, camera, radar, acoustic sensor, etc.), their positions (on the body or external), their technical characteristics (range, sampling rate, etc.), the method, algorithms, and datasets used, the type of input data, detection performance parameters (accuracy, precision, sensitivity, specificity, F1-score), and computation speed (training and testing time) when available.
To make the large amount of information easier to read, the data were reported in summary form using abbreviations and classified. First, the data were presented by sensor category, i.e., wearable, non-wearable, and hybrid solutions, as proposed by Guerra et al. [57]. The different methods were presented according to the three families traditionally used in the literature: DL, ML, and Threshold [54,57]. Any study that did not correspond to one of these families was classified under the heading “Other”. Regarding algorithms, we considered all solutions within each method (CNN, SVM, KNN, LSTM, DT, etc.). For fall detection performance, the five most commonly used parameters in the literature were considered: accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), and F1-score (F1) [47,48]. They are obtained from the four items of information contained in the confusion matrix: true positives (TP, correctly predicted positives), true negatives (TN, correctly predicted negatives), false positives (FP, incorrectly predicted positives), and false negatives (FN, incorrectly predicted negatives). The formulas were: Accuracy = (TP + TN)/(TP + TN + FP + FN), Sensitivity = TP/(TP + FN), Specificity = TN/(FP + TN), Precision = TP/(TP + FP), F1-score = 2 × (Sensitivity × Precision)/(Sensitivity + Precision). For computing speed, training and testing time values were used [49,76].
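These formulas translate directly into code. The short Python sketch below computes the five parameters from the four confusion-matrix counts; the example counts are invented for illustration.

```python
def fall_detection_metrics(tp, tn, fp, fn):
    """Compute the five performance parameters from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (fp + tn)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    return {"Acc": accuracy, "Sens": sensitivity, "Spec": specificity,
            "Prec": precision, "F1": f1}

# Illustrative counts: 48 detected falls, 2 missed falls,
# 5 false alarms, 95 correctly ignored non-fall events
print(fall_detection_metrics(tp=48, tn=95, fp=5, fn=2))
```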

3.4. Data Analysis

To meet the objective of assessing the link between fall detection systems and their performance, we summarized the results as follows. First, the five performance parameters were presented by sensor category, i.e., wearable, non-wearable, and hybrid solutions. Second, these same five parameters were displayed by method, i.e., DL, ML, Threshold, and Other. For each method, the performance parameters were plotted according to the most commonly used algorithms across all included studies. A third analysis reported the results of the five performance parameters according to the datasets used to test the methods proposed in the articles. Finally, training and testing times were analyzed by method and then detailed according to the most commonly used algorithms. In each section, the data were averaged across all included studies for which data were available.

3.5. Statistical Analysis

The effect of sensor categories (wearable, non-wearable, and hybrid solutions) and methods (ML, DL, Threshold, and Other) on fall detection performance parameters (Acc, Spec, Sens, Prec, and F1) was investigated using statistical analyses. First, the normality of the data distribution was checked using the Shapiro–Wilk test and the homogeneity of variances using the Levene test. When both conditions were verified, a one-way analysis of variance (ANOVA) was applied, and Tukey’s post hoc test was used to identify the differences; the effect size was assessed using Cohen’s d. Otherwise, a non-parametric analysis was conducted using the Kruskal–Wallis test; differences were analyzed using Dunn’s post hoc test with Bonferroni-adjusted p-values, and the effect size was estimated with the ε² parameter. The significance level of all tests was set at 5%. All analyses were performed using JASP software (JASP Team, v0.19.3, Amsterdam, The Netherlands).
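As a minimal sketch of this decision procedure (the review itself used JASP; post hoc tests and effect sizes are omitted here), the omnibus test selection can be expressed with SciPy as follows.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Choose the parametric or non-parametric omnibus test as described above.

    groups: one sequence of performance values per sensor category or method.
    Returns (test_name, statistic, p_value).
    """
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    homoscedastic = stats.levene(*groups).pvalue > alpha
    if normal and homoscedastic:
        res = stats.f_oneway(*groups)   # one-way ANOVA (Tukey post hoc would follow)
        return "ANOVA", res.statistic, res.pvalue
    res = stats.kruskal(*groups)        # Kruskal-Wallis (Dunn post hoc would follow)
    return "Kruskal-Wallis", res.statistic, res.pvalue
```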

4. Results

4.1. Search Results

The search identified 484 items in the seven databases, including 11 duplicates. Of the 473 unique items found, 280 were excluded because the studies were conference papers, books, theses, reports, reviews, surveys, or bibliometric analyses, or did not deal with fall detection. Among the remaining 193 articles, 18 reports were not retrieved and 144 were excluded based on the selection criteria, i.e., no static and dynamic analysis, no performance values (accuracy, specificity, sensitivity, or precision), or no training or testing time. Thus, the selection procedure retained 31 articles.
A second search was conducted based on citation searching (snowballing). Eighty-seven articles were identified, of which 38 were excluded (6 reports not retrieved and 32 excluded based on the selection criteria). Thus, 49 articles were added to the 31 identified by the database search, for a total of 80 articles included in the present study. Figure 1 depicts the results of the selection process in detail.

4.2. Study Characteristics

Table 2 presents information on the sensors, methods, and algorithms used in each selected study. In the 43 studies using wearable sensors, nine different sensors were reported: accelerometer (the most commonly used), angular velocity sensor, electrocardiogram, electromyogram, gyroscope, magnetometer, magnetic sensor, orientation sensor, and pressure sensor. All parts of the body were used to position the various sensors, with the majority located on the wrist or waist. For non-wearable sensors, seven different devices were identified in the 30 studies: antennas and receivers, acoustic sensors, different types of cameras (the most commonly used), infrared sensors, radar, thermal sensors, and vibration sensors. The seven studies that used a hybrid approach chose a combination of accelerometers, gyroscopes, magnetometers, and cameras. In terms of methodology, ML and DL were mainly implemented, with a wide variety of algorithms. CNN and LSTM were the most commonly used in DL, while SVM and KNN were the most commonly used in ML. Two trends emerged in the evaluation of the methods: either the authors tested one or more open datasets (SisFall, MobiFall, PAMAP2, WISDM, USC-HAD, etc.), or they experimentally developed their own dataset (denoted “custom” in the table). Thirty-three different datasets were found in the included studies. The type of input data is labeled “Dynamic” for time series and “Static” in other cases [57]. In summary, the most used wearable sensors were accelerometers (86% of studies) and gyroscopes (34.9% of studies). For non-wearable sensors, the camera was used much more than all other sensors (53.3%; radar: 16.7%; acoustic sensor: 13.3%). For the hybrid solutions, the camera + accelerometer combination was used in all included studies. Regarding methods, DL was the most used for non-wearable sensors (47% of the included studies), while ML was the most used for wearable sensors (44.2% of the studies); however, for these two categories, many other methods were also represented (Other: 47% and 55.8% of the studies, respectively). The threshold method was rarely used (only 14% for wearable sensors and 28.6% for hybrid solutions). The few studies with a hybrid solution used all the methods. Regarding the datasets, a large proportion of the studies used their own data (60.47% for wearable sensors and 70% for non-wearable sensors). For the others, the most used datasets were SisFall, MobiFall, PAMAP2, WISDM, and USC-HAD.
Table 3 shows the performance parameters reported by each study, classified by sensor type. The X symbol indicates that the authors evaluated their method or algorithms using the parameter considered. In many studies, the authors tested several methods or algorithms on one or more datasets, so several values for the same parameter were reported in a single study. The data used in the present study to evaluate performance are detailed in the Supplementary File. They comprise 426 accuracy values across 59 studies, 363 sensitivity values from 50 studies, 351 specificity values from 52 studies, 229 precision values across 25 studies, 232 F1-score values from 19 studies, and 50 training time and 28 testing time values from 9 and 8 studies, respectively. Accuracy (wearable: 67.4%; non-wearable: 80.0%; hybrid: 100% of included studies) and specificity (wearable: 74.4%; non-wearable: 60.0%; hybrid: 28.6% of included studies) were thus the two most frequently evaluated parameters, and the F1-score the least measured. Training and testing time parameters were very rarely reported (less than 15% of studies).

4.3. AAL Fall Detection Performance per Sensor Category

Figure 2 presents the analysis of performance parameters by sensor category, i.e., wearable sensors, non-wearable sensors, and hybrid solutions. Parts A and B detail the values of the performance criteria, Table C indicates the number of available values, and Table D shows the number of studies that achieved 100% for each performance parameter. An effect of sensor category was observed for 4 of the 5 parameters. The accuracy of the hybrid solutions was higher than that of wearable sensors (94.6% vs. 91.1%, F = 6.314, p < 0.05, ε² = 0.015); non-wearable sensors scored an accuracy of 92.6%. The hybrid solutions also showed the best sensitivity (97.6%, F = 8.049, p < 0.05, ε² = 0.022), statistically higher than wearable (87.5%) and non-wearable sensors (89.2%). For precision (F = 12.324, p < 0.05, ε² = 0.054) and F1-score (F = 17.572, p < 0.05, ε² = 0.076), the non-wearable sensor category achieved significantly higher performance than the wearable sensor category (91.8% vs. 83.3% and 90.3% vs. 85.9%, respectively). For these two parameters, the values measured for the hybrid solutions were 90.0% and 98.2% (statistically higher than wearable, p < 0.05), respectively. Only specificity showed no effect (F = 3.537, p > 0.05; hybrid solutions: 92.8%; non-wearable sensors: 93.7%; wearable sensors: 90.2%). Table D illustrates the number of studies that achieved 100% performance: 9 for accuracy, 29 for sensitivity, 27 for specificity, 7 for precision, and 3 for F1-score. This number was higher for the wearable sensor category when considering accuracy, sensitivity, specificity, and precision. Sensitivity and specificity provided the best results, with rates of 46.0% (23/50) and 38.5% (20/52) of studies, respectively.

4.4. AAL Fall Detection Performance per Method

Figure 3 presents the fall detection performance results for the DL, ML, Threshold, and Other methods separately. As in Figure 2, parts A and B depict the values of the performance criteria, while Tables C and D detail the number of available values and the number of studies that achieved 100% performance for each method, respectively. An effect of method was observed on all performance parameters. The accuracy of the DL (93.7%), Threshold (94.5%), and Other (93.8%) methods was similar; all three performed significantly better than ML (88.2%, F = 45.897, p < 0.05, ε² = 0.108). The same pattern was observed for specificity: DL (93.7%), Threshold (93.9%), and Other (93.5%) showed similar results, statistically higher than ML (86.2%, F = 45.491, p < 0.05, ε² = 0.130). No effect was found for sensitivity. No data were available for the Threshold method in terms of precision and F1-score. For these two parameters, the DL (91.9% and 91.8%) and Other methods (86.0% and 89.3%) performed significantly better than ML (79.5% and 83.0%, F = 61.740 and F = 41.730, respectively, p < 0.05, ε² > 0.2). In summary, DL produced the best results for all five parameters. Table C shows the distribution of the available values by method across the 80 included studies. ML was the most represented method with 583 values, followed by the Other category (557 values) and DL (413 values). Table D details the distribution of studies that achieved 100% performance: 10 for accuracy, 39 for sensitivity, 27 for specificity, 7 for precision, and 3 for F1-score. All parameters combined, the DL, ML, and Threshold methods presented an equivalent number of algorithms that achieved 100% performance (between 12 and 19 studies). Sensitivity was the parameter with the best results, with 26% (13/50) and 20% (10/50) of studies for Other and ML, respectively.

4.5. AAL Fall Detection with 100% Performance

Table 4 presents all studies that reported at least one performance parameter at 100% by sensor category: fifteen studies for wearable sensors, eight for non-wearable sensors, and two for hybrid solutions. The two performance parameters for which the largest number of studies reported a value of 100% were sensitivity and specificity. Berlin et al. [48] were the only ones to propose a detection method with all five parameters at 100%. The authors used a camera (non-wearable sensors) and a 2D-CNN to detect falls. Three studies [81,96,107] achieved 100% accuracy, sensitivity, and specificity using wearable sensors (mainly an accelerometer).

4.6. AAL Fall Detection Performance per Algorithm

Figure 4 illustrates the performance of the most commonly used algorithms in each method. For DL, the CNN, LSTM, and Gated Recurrent Unit (GRU) algorithms and their combinations (CNN-GRU and CNN-LSTM) showed performance ranging from 90% to 95% for the five performance parameters. The GRU algorithm achieved a specificity of 97.13%, whereas LSTM produced an F1-score of 86.62%. For ML, the most commonly used algorithms were SVM, RF, NB, KNN, DT, ANN, and LR, with mixed results. For accuracy, SVM, KNN, and ANN produced the best results (≥90%), while NB and DT reported values around 80%. For sensitivity, ANN performed best (95.33%) and DT reported the lowest results (85.35%); the other algorithms scored around 90%. For specificity, KNN showed the best results (90.58%), while the others achieved scores close to 85%; the least effective were NB and DT, with 81% specificity. Three algorithms showed higher precision values: ANN (84.84%), RF (85.60%), and LR (83.32%); the others ranged from 80.34% (SVM) down to 66.17% (NB). Finally, for the F1-score, KNN (86.62%) and LR (86.36%) had the best scores, followed by SVM (84.79%); for the others, the F1-score was around 75%.

4.7. AAL Fall Detection Performance per Datasets

The performance of the algorithms was evaluated in two different ways: the authors either developed their own datasets based on experimentation (“custom” in Table 2) or used an existing database dedicated to fall detection. The most commonly used were WISDM (Acc = 19 values; Sens = 18 values; Prec = 18 values; F1-score = 19 values), PAMAP2 (Acc = 20 values; Sens = 17 values; Prec = 17 values; F1-score = 17 values), USC-HAD (Acc = 17 values; Sens = 17 values; Prec = 17 values; F1-score = 17 values), Cogent (Acc = 0; Sens = 16 values; Prec = 16 values; F1-score = 16 values), and SisFall (Acc = 15 values; Sens = 27 values; Spec = 11 values; Prec = 16 values; F1-score = 16 values). Figure 5 displays the performance results for the datasets used at least 4 times.
Two-thirds (273/426) of the accuracy values were obtained with a “Custom” dataset for an average of 90.33%. The others (153/426) were obtained by evaluating the methods using 29 different datasets. For the most frequently used, the accuracy ranged from 91.5% (USC-HAD) to 97.2% (WISDM). The best performance was obtained with the NTU120 dataset at 99.84%, but it was only used twice.
Sixty percent (223/363) of sensitivity values were found with a “Custom” dataset for an average of 85.84%. The others (140/363) were computed from 19 different datasets. For the most commonly used, sensitivity ranged from 87.5% (USC-HAD) to 95% (WISDM). The best sensitivity was reached with the URFD dataset at 99.10%, used four times.
Eighty-five percent (298/351) of the specificity values were achieved with a “Custom” dataset, with an average of 90.44%. The others (53/351) were obtained by evaluating the methods with 18 different datasets. Among the most commonly used, only SisFall stood out, with a value of 94.72% (11 values). The best specificity was found with the GMDCSA dataset (100%), though it was used only once.
Half (109/229) of the precision values were obtained with a “Custom” dataset, with an average of 86.98%. The remaining values (120/229) were computed from 18 different datasets. For the most commonly used, the precision ranged from 68.5% (SisFall) to 96% (WISDM). The best precision was found with the GMDCSA dataset (100%), also used only once.
Half (113/232) of the F1-score values were found with a “Custom” dataset, with an average of 84.88%. The others (119/232) were obtained by evaluating the methods using 22 different datasets. For the most commonly used, the F1-score ranged from 78% (SisFall) to 95.5% (WISDM). The best F1-score was obtained with the URFD dataset (98.21%), used three times.

4.8. Algorithm Training and Testing Time

The data showed strong heterogeneity. Three studies reported times in ms [22,49,76], five studies in s [22,35,76,78,91], and one study in hours [78]. Training times ranged from 1.4 ms (NB algorithm [49]) to 6.2 h (OP-Tanish algorithm [78]) from 9 studies whereas testing times ranged from 1.4 ms (NB algorithm [49]) to 17 s (KNN [22]) from 8 studies. Figure 6 separately presents the training and testing times in milliseconds, seconds, and hours available in the 80 included studies for the different fall detection algorithms.

5. Discussion

The objective of this systematic review was to provide an overview of the fall detection systems for elderly people in an AAL and smart home context and to evaluate their performance. Knowledge of this performance is essential in order to choose the solution best suited to the environment and user constraints. The analysis was conducted by considering wearable, non-wearable, and hybrid technologies, as well as different methods based on ML, DL, and Threshold for fall detection, based on 80 selected studies.

5.1. Performance by Sensor Category

Currently, studies using wearable sensors are the most numerous (43/80). This is consistent with the bibliometric results of Li et al. [50], which showed that the number of studies using this technology has increased steadily over the last decade. The most common sensors were accelerometers, gyroscopes, and magnetometers. They are found in inertial measurement units and in technologies increasingly present in everyday life, such as smartwatches [41,136] and smartphones [137,138]. This is due to technological advances that have enabled the miniaturization of equipment and improved the performance of this sensor category, offering a discreet monitoring solution. However, the effectiveness of these wearable devices depends heavily on their use. For older adults, this is not necessarily the most suitable solution, as they often forget to wear additional accessories or resist using them [139]. In addition, the issue of energy dependence is a major drawback. These constraints can be compensated for by the use of non-wearable sensors, which were found in 30 of the studies included in this analysis. The most common were cameras, of various types [126,127,140,141]. Radars and infrared, pressure, vibration, and acoustic sensors were also used (Table 2). They have the advantage of being placed in the environment and therefore do not encumber individuals. However, they are fixed devices that can only cover the environment for which they have been calibrated. In addition, these devices are sensitive to occlusions, which can lead to a loss of information and make setup difficult when multiple sensors are present [57]. In AAL or smart homes, increasing the number of sensors provides a good understanding of the environment but raises the cost of the fall detection system. Finally, more recently, hybrid solutions combining wearable and non-wearable sensors (mainly accelerometers and cameras) have appeared; seven such studies were included here. The fusion of sensors offers a robust approach, as data from different types of sensors can complement each other, producing a reliable system [142]. However, these solutions have several drawbacks. On a technical level, the fusion of data from heterogeneous sensors requires sophisticated algorithms to integrate and synchronize the different formats and poses a data storage problem [57]; indeed, the multiplication of sensors and video streams is resource-intensive. In addition, the presence of several types of devices requires more maintenance (managing the batteries of wearable sensors, calibrating fixed sensors, cleaning, replacing or recalibrating damaged or obsolete equipment, etc.). Finally, to operate at full capacity, all sensors must operate simultaneously: forgetting to wear a device, the failure of a wearable sensor, or the obstruction of a fixed sensor can increase the risk of false alarms or undetected falls. From an economic point of view, hybrid solutions require the purchase (or periodic replacement) and installation of several sensors together with a reliable network (cabling, wireless communication system, configuration, etc.), which represents a higher cost than wearable or non-wearable solutions alone.
In the present study, the results showed that the overall performance of the three categories of sensors was around 90%. Hybrid systems displayed all five performance parameters above this value (90.04% to 98.21%). Non-wearable sensors achieved a sensitivity of 89.19%, while the other parameters were between 90.35% and 93.68%. Finally, wearable sensors showed three parameters below 90%: sensitivity (87.49%), precision (83.27%), and F1-score (85.89%). This level of performance appears satisfactory, with an advantage for hybrid solutions. However, these results should be qualified for several reasons. Statistical analysis revealed that hybrid solutions had significantly higher accuracy only compared with wearable systems, with an overall score above 90% for all three categories. For precision and F1-score, the non-wearable sensor category achieved significantly higher performance than the wearable sensors, whereas no effect was found for specificity. Likewise, no effect was found for sensitivity despite a roughly 10% higher value for hybrid solutions. On the other hand, solutions with 100% performance for one or more parameters were found in each of the three categories.

5.2. Performance by Methods

Fall detection uses the three methods commonly used in the literature, i.e., threshold, ML, and DL [57,58]. The advantage of threshold-based algorithms is their low computational complexity. In the case of wearable sensors, they are generally associated with data acquisition and preprocessing [143], while for non-wearable sensors, they are directly integrated into the central host that receives and processes the data [14]. The main disadvantage lies in the definition of the threshold, as it is sensitive to inter- and intra-individual variability as well as to the sensor location. Incorrect threshold definition could lead to a decrease in the accuracy of fall detection [144]. The other two methods use intelligent approaches to detect falls. Supervised or unsupervised algorithms are applied to large databases to train a classification method. At the end of this learning stage, the classifiers become able to identify a fall [145]. The main difference is that ML requires a large amount of manually extracted information, whereas DL automatically extracts relevant features [41,146]. The need for large databases for learning, dependence on data quality (performance declines in the case of noisy or incomplete data), often significant hardware requirements (memory, processors, etc.), and the loss of ability to generalize on new data in the case of over-specialized learning are the main disadvantages of these methods [54,57].
In this study, performance results were compared. An overall performance of around 90% was observed for all methods. More specifically, the Threshold and Other methods achieved results above 90% for accuracy and specificity, while for DL, all five parameters were above this value. ML presented the lowest values, with all five parameters below 90%. This trend needs to be qualified. Statistical analysis showed that the DL, Threshold, and Other methods had significantly higher accuracy and specificity than the ML method, whereas no significant effect of method was found for sensitivity. For precision and F1-score, the DL (91.9% and 91.8%) and Other (86.0% and 89.3%) methods performed significantly better than ML. In summary, ML methods perform the least well, while the other methods appear to be equivalent, with DL performing slightly better. This result is consistent with several studies showing that DL methods perform better [40,147]. However, solutions with 100% performance have been identified in the literature for the different methods (Figure 3). In particular, across all performance parameters, ML methods account for the highest number of algorithms with 100% performance.

5.3. Algorithms

For ML, the most commonly used algorithms were ANN, KNN, DT, NB, and SVM. For DL, the most commonly used were CNN, LSTM, GRU, and their combinations. On average, DL algorithms produced the best results. This has been observed in several studies comparing different algorithms from ML and DL methods [87,148].

5.4. Datasets

A total of 63.5% of the methods and algorithms identified in this analysis were evaluated on the basis of a custom dataset, i.e., one obtained experimentally. The others were assessed using 33 open datasets available in the literature (WISDM, PAMAP2, USC-HAD, and SisFall being the most common). The various results reported showed very high average performance, above 90% for the large majority of studies across all parameters. This can be explained by the fact that many studies use part of a dataset to train the algorithms and the other part to perform tests [149,150]. As the data were collected under similar conditions, identification could be facilitated, thereby providing effective fall detection. Using different datasets for training and testing, as sketched below, would be a first step toward testing algorithms in situations occurring under different conditions. It should also be noted that both the available datasets and the custom datasets use fall simulations, which may partly explain false negatives or false positives.
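The difference between the two evaluation protocols can be illustrated with the following hypothetical scikit-learn sketch (a KNN classifier is chosen arbitrarily; any feature matrix X and label vector y would do).

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def within_dataset_eval(X, y):
    """Common protocol: train and test on splits of the same dataset."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier().fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

def cross_dataset_eval(X_train, y_train, X_test, y_test):
    """Stricter protocol suggested above: train on one dataset, test on
    another, so the reported accuracy reflects different recording conditions."""
    clf = KNeighborsClassifier().fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```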

5.5. Training and Testing Time

Ten studies reported in this analysis measured the training time of the algorithms (for ML and DL) and the fall detection time (testing time). The results showed a wide range of durations, from 1.4 ms to several hours. For training, a long duration may not be problematic as long as the detection is fast. On the other hand, a method with an excessively long detection time could have significant consequences for the safety of elderly people in their environment and make these methods unsuitable for real-time situations.

5.6. Limitations

The first limitation concerns the conditions for evaluating the performance of fall detection systems. This systematic review has shown that the datasets used to compute the various performance parameters were highly heterogeneous, whether available in the literature or constructed experimentally. This variability in the datasets has a major impact on performance, regardless of the system itself (sensors and methods/algorithms used). This point is reinforced for ML and DL methods, as they are often trained with data extracted from the evaluation datasets. Standard evaluation conditions are necessary to compare different systems accurately. Finally, even though the available datasets have improved, they still offer only stereotypical fall scenarios, often simulated, without considering the health profile of participants or changing environments.
The second limitation concerns the performance assessment by category of wearable, non-wearable, and hybrid sensors. The results highlighted their respective advantages. However, the analysis could be extended by dividing the sensor types into subgroups: accelerometers, gyroscopes, cameras, etc. The same recommendation applies to fall detection methods (mainly ML and DL), for which subgroups by algorithm (SVM, KNN, LSTM, RF, CNN, etc.) could be compared.
Another limitation concerns the analysis of training and testing times for fall detection systems. Few studies have reported this performance data, and often the test conditions (a single computational module vs. a complete method) in each study were very different or only partially described.
The final limitations are methodological and relate to the criteria for selecting articles. First, the research focused on fall detection systems for elderly people in the AAL context. In addition, the search was conducted using selected keywords, for which not all synonyms were necessarily included. Finally, only original peer-reviewed research written in English was included. These choices may have led to the omission of some relevant studies in which the fall detection system was effective.

5.7. General Outcomes and Future Research Directions

As this study has shown, fall detection systems are evolving with technological advances and often offer viable solutions with overall performance (across all performance parameters studied) of around 90%. Some solutions even achieve 100% for all parameters. Despite this, these detection systems remain imperfect and require further research and development. Future advances could focus on predicting falls before they occur [151] by exploiting behavioral and movement anomalies in older people. Another approach is not simply to study falling from a standing posture but to generalize to complex behaviors such as activities in a sitting position or during physical activities (jogging, cycling, etc.). Systems should therefore include more sophisticated predictive algorithms to help anticipate a fall and take preventive measures. Furthermore, with regard to wearable systems and hybrid solutions, acceptability should be considered by offering solutions designed to guarantee long-term comfort, ergonomics, and ease of use, achieved through intuitive interfaces and lightweight, miniaturized designs [152,153]. Improving the energy autonomy of wearable sensors is also essential to ensure continuous real-time monitoring without frequent recharging. For non-wearable sensors and hybrid solutions, privacy concerns will need to be addressed, requiring continued technical development and research on ethical and social acceptability issues. The Internet of Things (IoT) is a promising avenue for fall detection [154]. Smartwatches, smartphone apps, and, more recently, connected clothing are all solutions that make fall detection more discreet and, a priori, more acceptable. The integration of sensors into furniture also contributes to this invisible detection. Connections to various healthcare services allow fall situations to be communicated in real time and contribute to the safety and protection of the elderly. However, the cost of these new systems will need to be controlled and managed, as it could be a barrier in regions where access to healthcare technologies is limited. All these improvements will make fall detection systems more attractive and acceptable to elderly people.
Establishing standards for system reliability and security could also be considered. A first step could consist of identifying a list of universal performance parameters that would allow the proposed solutions to be compared within a common framework. Commonly used parameters such as accuracy, precision, specificity, sensitivity, and F1-score are a good starting point but need to be supplemented by others. Similarly, the situations used as a basis for evaluation should be standardized and rely on real fall data rather than simulations, as is the case in many available datasets. This would enable objective comparison of different methods and testing of their respective performance. Fall detection time is also a very relevant and important parameter, but it needs to be obtained within a well-defined context; this indicator is rarely tested in real-life situations, although it contributes to the safety and protection of people in AAL environments. Finally, data security is also crucial for the viability of these systems. The development of secure data storage and transmission methods is essential for protecting users’ privacy.
All these advances could lead to the development of high-performance, adaptive systems that could consider the user’s health profile, level of independence and activity, and environment.

6. Conclusions

This literature review contributes to the field by providing an overview of the performance of fall detection systems. Its originality lies in its presentation by sensor category and detection method. Identifying the advantages of different solutions provides a source of information for researchers, practitioners, and policymakers in the design and implementation of more effective fall detection systems that are accepted by users. By combining these performance-related aspects with the challenges of cost, ergonomics, energy management, and privacy, future solutions could help improve the independence, safety, protection, and care of older adults.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s25216540/s1, Table S1: Performance data of all methods presented in each included study used in the meta-analysis.

Author Contributions

Conceptualization, P.G. and J.J.-B.; Methodology, P.G. and J.J.-B.; Software, P.G. and J.J.-B.; Validation, P.G. and J.J.-B.; Formal Analysis, P.G. and J.J.-B.; Investigation, P.G. and J.J.-B.; Resources, P.G. and J.J.-B.; Data Curation, P.G. and J.J.-B.; Writing—Original Draft Preparation, P.G. and J.J.-B.; Writing—Review and Editing, P.G. and J.J.-B.; Visualization, P.G. and J.J.-B.; Supervision, P.G.; Project Administration, P.G.; Funding Acquisition, P.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A: Accelerometer
A + R: Antennas and Receiver
AAL: Ambient Assisted Living
Acc: Accuracy
AEC: Auto-encoder
ANN: Artificial Neural Network
AS: Acoustic sensor
AV: Angular Velocity
C: Camera
CNN: Convolutional Neural Network
DL: Deep Learning
DT: Decision Tree
Dyn: Dynamic
ECG: Electrocardiogram
EMG: Electromyography
Ext: Exterior sensor
F1: F1-score
G: Gyroscope
GRU: Gated Recurrent Unit
HML: Hybrid Machine Learning
HMM: Hidden Markov Model
IR: Infrared sensor
KNN: k-Nearest Neighbor
LR: Linear regression
LSTM: Long Short-Term Memory
M: Magnetometer
Mag: Magnetic sensor
ML: Machine Learning
MLP: Multi-layer Perceptron
NB: Naive Bayes
O: Orientation sensor
P: Pressure Sensor
Prec: Precision
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
R: Radar
RF: Random Forest
Sens: Sensitivity
Spec: Specificity
Stat: Static
SVM: Support Vector Machine
T: Thermal sensor
V: Vibration sensor

References

  1. United Nations. World Population Prospects: The 2017 Revision, Key Findings and Advance Tables; Working Paper ESA/P/WP/248; Department of Economic and Social Affairs, Population Division: New York, NY, USA, 2017.
  2. Centers for Disease Control and Prevention. Falls Are Leading Cause of Injury and Death in Older Americans. Available online: https://www.cdc.gov/media/releases/2016/p0922-older-adult-falls.html (accessed on 25 August 2025).
  3. Turner, S.; Kisser, R.; Rogmans, W. Falls Among Older Adults in the EU-28: Key Facts from the Available Statistics. Available online: https://eupha.org/repository/sections/ipsp/Factsheet_falls_in_older_adults_in_EU.pdf (accessed on 25 August 2025).
  4. Li, L.T.; Wang, S.Y. Disease burden and risk factors of falls in the elderly. Chin. J. Epidemiol. 2001, 22, 262–264.
  5. Kumari, P.; Mathew, L.; Syal, P. Increasing trend of wearables and multimodal interface for human activity monitoring: A review. Biosens. Bioelectron. 2017, 90, 298–307.
  6. Kumar, P.; Suresh, S. Deep-HAR: An ensemble deep learning model for recognizing the simple, complex, and heterogeneous human activities. Multimed. Tools Appl. 2023, 82, 30435–30462.
  7. Nouredanesh, M.; Gordt, K.; Schwenk, M.; Tung, J. Automated detection of multidirectional compensatory balance reactions: A step towards tracking naturally occurring near falls. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 478–487.
  8. De la Cal, E.A.; Fáñez, M.; Villar, M.; Villar, J.R.; González, V.M. A low-power HAR method for fall and high-intensity ADLs identification using wrist-worn accelerometer devices. Log. J. IGPL 2023, 31, 375–389.
  9. Garcia-Ceja, E.; Galván-Tejada, C.E.; Brena, R. Multi-view stacking for activity recognition with sound and accelerometer data. Inf. Fusion 2018, 40, 45–56.
  10. Xie, J.; Guo, K.; Zhou, Z.; Yan, Y.; Yang, P. ART: Adaptive and real-time fall detection using COTS smart watch. In Proceedings of the 2020 6th International Conference on Big Data Computing and Communications (BIGCOM), Deqing, China, 24–25 July 2020; pp. 33–40.
  11. Zhang, S.; Mc Cullagh, P. Situation Awareness Inferred from Posture Transition and Location. IEEE Trans. Hum. Mach. Syst. 2017, 47, 814–821.
  12. Badgujar, S.; Pillai, A.S. Fall detection for elderly people using machine learning. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–4.
  13. Haghi, M.; Geissler, A.; Fleischer, H.; Stoll, N.; Thurow, K. Ubiqsense: A personal wearable in ambient parameters monitoring based on IoT platform. In Proceedings of the 2019 International Conference on Sensing and Instrumentation in IoT Era (ISSI), Lisbon, Portugal, 29–30 August 2019; pp. 1–6.
  14. Andò, B.; Baglio, S.; Lombardo, C.O.; Marletta, V. A multisensor data-fusion approach for ADL and fall classification. IEEE Trans. Instrum. Meas. 2016, 65, 1960–1967.
  15. Jahanjoo, A.; Naderan, M.; Rashti, M.J. Detection and multi-class classification of falling in elderly people by deep belief network algorithms. J. Ambient Intell. Hum. Comput. 2020, 11, 4145–4165.
  16. Zerrouki, N.; Harrou, F.; Sun, Y.; Houacine, A. Vision-based human action classification using adaptive boosting algorithm. IEEE Sens. J. 2018, 18, 5115–5121.
  17. Guerra, B.M.V.; Ramat, S.; Beltrami, G.; Schmid, M. Recurrent network solutions for human posture recognition based on Kinect skeletal data. Sensors 2023, 23, 5260.
  18. Fan, L.; Li, T.; Yuan, Y.; Katabi, D. In-home daily-life captioning using radio signals. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 105–123.
  19. Wang, F.; Gong, W.; Liu, J.K.; Wu, K. Channel Selective Activity Recognition with WiFi: A Deep Learning Approach Exploring Wideband Information. IEEE Trans. Netw. Sci. Eng. 2020, 7, 181–192.
  20. Ding, C.; Zhang, L.; Chen, H.; Hong, H.; Zhu, X.; Fioranelli, F. Sparsity-Based Human Activity Recognition with Point Net Using a Portable FMCW Radar. IEEE Internet Things J. 2023, 10, 10024–10037.
  21. Chen, J.; Wang, C.; Liu, Y. Vibration Signal Based Abnormal Gait Detection and Recognition. IEEE Access 2024, 12, 89845–89855.
  22. Singh, S.; Aksanli, B. Non-Intrusive Presence Detection and Position Tracking for Multiple People Using Low-Resolution Thermal Sensors. J. Sens. Actuator Netw. 2019, 8, 40.
  23. Mashiyama, S.; Hong, J.; Ohtsuki, T. Activity recognition using low resolution infrared array sensor. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 495–500.
  24. Clapés, A.; Pardo, À.; Pujol Vila, O.; Escalera, S. Action detection fusing multiple Kinects and a WIMU: An application to in-home assistive technology for the elderly. Mach. Vis. Appl. 2018, 29, 765–788.
  25. Kwolek, B.; Kepski, M. Human fall detection on embedded platform using depth maps and wireless accelerometer. Comput. Methods Prog. Biomed. 2014, 117, 489–501.
  26. Fetzer, T.; Ebner, F.; Bullmann, M.; Deinzer, F.; Grzegorzek, M. Smartphone-based indoor localization within a 13th century historic building. Sensors 2018, 18, 4095.
  27. Madhu, B.; Mukherjee, A.; Islam, M.Z.; Mamun-Al-Imran, G.M.; Roy, R.; Ali, L.E. Depth motion map based human action recognition using adaptive threshold technique. In Proceedings of the 2021 5th International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 18–20 November 2021; pp. 1–6.
  27. Madhu, B.; Mukherjee, A.; Islam, M.Z.; Mamun-Al-Imran, G.M.; Roy, R.; Ali, L.E. Depth motion map based human action recognition using adaptive threshold technique. In Proceedings of the 2021 5th International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 18–20 November 2021; pp. 1–6. [Google Scholar]
  28. Liu, C.; Jiang, Z.; Su, X.; Benzoni, S.; Maxwell, A. Detection of Human Fall Using Floor Vibration and Multi-Features Semi-Supervised SVM. Sensors 2019, 19, 3720. [Google Scholar] [CrossRef]
  29. Vallabh, P.; Malekian, R.; Ye, N.; Bogatinoska, D.C. Fall detection using machine learning algorithms. In Proceedings of the 2016 24th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 22–24 September 2016; pp. 1–9. [Google Scholar] [CrossRef]
  30. Ando, B.; Baglio, S.; Marletta, V.; Crispino, R. A neurofuzzy approach for fall detection. In Proceedings of the 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), Madeira, Portugal, 27–29 June 2017; pp. 1312–1316. [Google Scholar] [CrossRef]
  31. Lee, S. Fall detection using wavelet transform and neural network. Int. J. Comput. Sci. Electron. Eng. 2014, 2, 113–116. [Google Scholar]
  32. Hussain, F.; Hussain, F.; Ehatisham-ul-Haq, M.; Azam, M.A. Activity-Aware Fall Detection and Recognition Based on Wearable Sensors. IEEE Sens. J. 2019, 19, 4528–4536. [Google Scholar] [CrossRef]
  33. Zurbuchen, N.; Bruegger, P.; Wilde, A.A. Comparison of Machine Learning Algorithms for Fall Detection using Wearable Sensors. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020. [Google Scholar]
  34. He, J.; Hu, C.; Wang, X. A Smart Device Enabled System for Autonomous Fall Detection and Alert. Int. J. Distrib. Sens. Netw. 2016, 20, 2308183. [Google Scholar] [CrossRef]
  35. Yacchirema, D.; de Puga, J.S.; Palau, C.; Esteve, M. Fall detection system for elderly people using IoT and big data. Procedia Comput. Sci. 2018, 130, 603–610. [Google Scholar] [CrossRef]
  36. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  37. Fakhrulddin, A.H.; Fei, X.; Li, H. Convolutional neural networks (CNN) based human fall detection on body sensor networks (BSN) sensor data. In Proceedings of the 2017 4th International Conference on Systems and Informatics (ICSAI), Hangzhou, China, 11–13 November 2017. [Google Scholar]
  38. Santos, G.L.; Endo, P.T.; Monteiro, K.H.d.C.; Rocha, E.D.S.; Silva, I.; Lynn, T. Accelerometer-Based Human Fall Detection Using Convolutional Neural Networks. Sensors 2019, 19, 1644. [Google Scholar] [CrossRef]
  39. Ajerla, D.; Mahfuz, S.; Zulkernine, F. A real-time patient monitoring framework for fall detection. Wirel. Commun. Mob. Comput. 2019, 2, 9507938. [Google Scholar] [CrossRef]
  40. Theodoridis, T.; Solachidis, V.; Vretos, N.; Daras, P. Human Fall Detection from Acceleration Measurements Using a Recurrent Neural Network. In Precision Medicine Powered by pHealth and Connected Health, Proceedings of the ICBHI 2017. IFMBE Proceedings, Thessaloniki, Greece, 16–17 November 2017; Maglaveras, N., Chouvarda, I., de Carvalho, P., Eds.; Springer: Singapore, 2018; Volume 66. [Google Scholar]
  41. Mauldin, T.R.; Canby, M.E.; Metsis, V.; Ngu, A.H.H.; Rivera, C.C. Smart-Fall: A smartwatch-based fall detection system using deep learning. Sensors 2018, 18, 3363. [Google Scholar] [CrossRef] [PubMed]
  42. Torti, E.; Fontanella, A.; Musci, M.; Blago, N.; Pau, D.; Leporati, F. Embedding recurrent neural networks in wearable systems for real-time fall detection. Microprocess. Microsyst. 2019, 71, 102895. [Google Scholar] [CrossRef]
  43. Al-Hassani, R.T.; Atilla, D.C. Human Activity Detection Using Smart Wearable Sensing Devices with Feed Forward Neural Networks and PSO. Appl. Sci. 2023, 13, 3716. [Google Scholar] [CrossRef]
  44. Zhang, X.; Xie, Q.; Sun, W.; Wang, T. Fall detection method based on spatio-temporal coordinate attention for high-resolution networks. Complex. Intell. Syst. 2025, 11, 1. [Google Scholar] [CrossRef]
  45. Benhaili, Z. Detecting human fall using IoT devices for healthcare applications. Int. J. Artif. Intell. 2025, 14, 561–569. [Google Scholar] [CrossRef]
  46. Droghini, D.; Ferretti, D.; Principi, E.; Squartini, S.; Piazza, F. A combined One-class SVM and Template-matching Approach for User-aided Human Fall Detection by Means of Floor Acoustic Features. Comput. Intell. Neurosci. 2017, 3, 1512670. [Google Scholar]
  47. Gibson, R.M.; Amira, A.; Ramzan, N.; Casaseca-De-La-Higuera, P.; Pervez, Z. Multiple comparator classifier framework for accelerometer-based fall detection and diagnostic. Appl. Soft Comput. 2016, 39, 94–103. [Google Scholar] [CrossRef]
  48. Berlin, S.J.; John, M. Vision based human fall detection with Siamese convolutional neural networks. J. Ambient. Intell. Hum. Comput. 2022, 13, 5751–5762. [Google Scholar] [CrossRef]
  49. Gulati, N.; Kaur, P.D. An argumentation enabled decision making approach for Fall Activity Recognition in Social IoT based Ambient Assisted Living systems. Future Gener. Comput. Syst. 2021, 122, 82–97. [Google Scholar] [CrossRef]
  50. Li, Y.; Liu, P.; Fang, Y.; Wu, X.; Xie, Y.; Xu, Z.; Ren, H.; Jing, F. A Decade of Progress in Wearable Sensors for Fall Detection (2015–2024): A Network-Based Visualization Review. Sensors 2025, 25, 2205. [Google Scholar] [CrossRef]
  51. Sanchez-Comas, A.; Synnes, K.; Hallberg, J. Hardware for Recognition of Human Activities: A Review of Smart Home and AAL Related Technologies. Sensors 2020, 20, 4227. [Google Scholar] [CrossRef]
  52. Casilari-Pérez, E.; García-Lago, F. A comprehensive study on the use of artificial neural networks in wearable fall detection systems. Expert. Syst. Appl. 2019, 138, 112811. [Google Scholar] [CrossRef]
  53. Iadarola, G.; Mengarelli, A.; Crippa, P.; Fioretti, S.; Spinsante, S. A Review on Assisted Living Using Wearable Devices. Sensors 2024, 24, 7439. [Google Scholar] [CrossRef]
  54. Amir, N.I.M.; Dziyauddin, R.A.; Mohamed, N.; Ismail, N.S.N.; Kaidi, H.M.; Ahmad, N. Fall Detection System using Wearable Sensor Devices and Machine Learning: A Review. TechRxiv 2024. [Google Scholar] [CrossRef]
  55. Islam, M.M.; Tayan, O.; Islam, M.I.; Islam, M.S.; Nooruddin, S.; Kabir, M.N. Deep Learning Based Systems Developed for Fall Detection: A Review. IEEE Access 2020, 8, 166117–166137. [Google Scholar] [CrossRef]
  56. Gaya-Morey, F.X.; Manresa-Yee, C.; Buades-Rubio, J.M. Deep learning for computer vision based activity recognition and fall detection of the elderly: A systematic review. Appl. Intell. 2024, 54, 8982–9007. [Google Scholar] [CrossRef]
  57. Guerra, B.M.V.; Torti, E.; Marenzi, E.; Schmid, M.; Ramat, S.; Leporati, F.; Danese, G. Ambient assisted living for frail people through human activity recognition: State-of-the-art, challenges and future directions. Front. Neurosci. 2023, 17, 1256682. [Google Scholar] [CrossRef] [PubMed]
  58. Ren, L.; Peng, Y. Research of Fall Detection and Fall Prevention Technologies: A Systematic Review. IEEE Access 2019, 7, 77702–77722. [Google Scholar] [CrossRef]
  59. Subramaniam, S.; Faisal, A.I.; Deen, M.J. Wearable Sensor Systems for Fall Risk Assessment: A Review. Front. Digit. Health 2022, 4, 921506. [Google Scholar] [CrossRef]
  60. Patel, S.; Park, H.; Bonato, P.; Chan, L.; Rodgers, M. A Review of Wearable Sensors and Systems with Application in Rehabilitation. J. Neuroeng. Rehabil. 2012, 9, 21. [Google Scholar] [CrossRef]
  61. Pathirana, P.N.; Karunarathne, M.S.; Williams, G.L.; Nam, P.T.; Durrant-Whyte, H. Robust and accurate capture of human joint pose using an inertial sensor. IEEE J. Transl. Eng. Health Med. 2018, 6, 2700913. [Google Scholar] [CrossRef]
  62. Cicirelli, G.; Marani, R.; Petitti, A.; Milella, A.; D’Orazio, T. Ambient assisted living: A review of technologies, methodologies and future perspectives for healthy aging of population. Sensors 2021, 21, 3549. [Google Scholar] [CrossRef]
  63. Pham, N.P.; Dao, H.V.; Phung, H.N.; Ta, H.V.; Nguyen, N.H.; Hoang, T.T. Classification different types of fall for reducing false alarm using single accelerometer. In Proceedings of the 2018 IEEE Seventh International Conference on Communications and Electronics (ICCE), Hue, Vietnam, 18–20 July 2018. [Google Scholar]
  64. Wu, Y.; Su, Y.; Hu, Y.; Yu, N.; Reng, R. A Multi-sensor Fall Detection System based on Multivariate Statistical Process Analysis. J. Med. Biol. Eng. 2019, 39, 336–351. [Google Scholar] [CrossRef]
  65. Yu, S.; Chen, H.; Brown, R.A. Hidden Markov model-based fall detection with motion sensor orientation calibration: A case for real-life home monitoring. IEEE J. Biomed. Health Inform. 2018, 22, 1847–1853. [Google Scholar] [CrossRef]
  66. Kaur, A.P.; Nsugbe, E.; Drahota, A.; Oldfield, M.; Mohagheghian, I.; Sporea, R.A. State-of-the-art fall detection techniques with emphasis on floor-based systems—A review. Biomed. Eng. Adv. 2025, 9, 100179. [Google Scholar] [CrossRef]
  67. Li, X.; Pang, T.; Liu, W.; Wang, T. Fall detection for elderly person care using convolutional neural networks. In Proceedings of the 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, Shanghai, China, 14–16 October 2017. [Google Scholar]
  68. Shu, F.; Shu, J. An eight-camera fall detection system using human fall pattern recognition via machine learning by a low-cost android box. Sci. Rep. 2021, 11, 2471. [Google Scholar] [CrossRef] [PubMed]
  69. Bharathiraja, N.; Indhuja, R.; Krishnan, P.; Anandhan, S.; Hariprasad, S. Real-time fall detection using ESP32 and AMG8833 thermal sensor: A non-wearable approach for enhanced safety. In Proceedings of the Second International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), Trichy, India, 23–25 August 2023. [Google Scholar] [CrossRef]
  70. Popescu, M.; Li, Y.; Skubic, M.; Rantz, M. An acoustic fall detector system that uses sound height information to reduce the false alarm rate. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–24 August 2008. [Google Scholar] [CrossRef]
  71. Kittiyanpunya, C.; Chomdee, P.; Boonpoonga, A.; Torrungrueng, D. Millimeter-Wave Radar-Based Elderly Fall Detection Fed by One-Dimensional Point Cloud and Doppler. IEEE Access 2023, 11, 76269–76283. [Google Scholar] [CrossRef]
  72. Clemente, J.; Li, F.; Valero, M.; Song, W. Smart seismic sensing for indoor fall detection, location, and notification. IEEE J. Biomed. Health Inform. 2020, 24, 524–532. [Google Scholar] [CrossRef]
  73. Li, X.; Nie, L.; Xu, H.; Wang, X. Collaborative Fall Detection Using Smart Phone and Kinect. Mob. Netw. Appl. 2018, 23, 775–788. [Google Scholar] [CrossRef]
  74. Harris, J.D.; Quatman, C.E.; Manring, M.M.; Siston, R.A.; Flanigan, D.C. How to Write a Systematic Review. Am. J. Sports Med. 2014, 42, 2761–2768. [Google Scholar] [CrossRef]
  75. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, 1006–1012. [Google Scholar] [CrossRef]
  76. Özdemir, A.T.; Barshan, B. Detecting falls with wearable sensors using machine learning techniques. Sensors 2014, 14, 10691–10708. [Google Scholar] [CrossRef]
  77. Agrawal, D.K.; Usaha, W.; Pojprapai, S.; Wattanapan, P. Fall Risk Prediction Using Wireless Sensor Insoles with Machine Learning. IEEE Access 2023, 11, 23119–23126. [Google Scholar] [CrossRef]
  78. Ankalaki, S.; Thippeswamy, M.N. A novel optimized parametric hyperbolic tangent swish activation function for 1D-CNN: Application of sensor-based human activity recognition and anomaly detection. Multimed. Tools Appl. 2024, 83, 61789–61819. [Google Scholar] [CrossRef]
  79. Bourke, A.K.; O’Brien, J.V.; Lyons, G.M. Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm. Gait Posture 2007, 26, 194–199. [Google Scholar] [CrossRef]
  80. Bourke, A.K.; Lyons, G.M. A Threshold-based Fall-detection Algorithm Using A Bi-axial Gyroscope Sensor. Med. Eng. Phys. 2008, 30, 84–90. [Google Scholar] [CrossRef]
  81. Butt, F.S.; La Blunda, L.; Wagner, M.F.; Schäfer, J.; Medina-Bulo, I.; Gómez-Ullate, D. Fall Detection from Electrocardiogram (ECG) Signals and Classification by Deep Transfer Learning. Information 2021, 12, 63. [Google Scholar] [CrossRef]
  82. Chandramouli, N.A.; Natarajan, S.; Alharbi, A.H. Enhanced human activity recognition in medical emergencies using a hybrid deep CNN and bi-directional LSTM model with wearable sensors. Sci. Rep. 2024, 14, 30979. [Google Scholar] [CrossRef]
  83. Chelli, A.; Patzold, M. A machine learning approach for fall detection and daily living activity recognition. IEEE Access 2019, 7, 38670–38687. [Google Scholar] [CrossRef]
  84. Chen, K.H.; Hsu, Y.W.; Yang, J.J.; Jaw, F.S. Evaluating the specifications of built-in accelerometers in smartphones on fall detection performance. Instrum. Sci. Technol. 2018, 46, 194–206. [Google Scholar] [CrossRef]
  85. He, J.; Bai, S.; Wang, X. An Unobtrusive Fall Detection and Alerting System Based on Kalman Filter and Bayes Network Classifier. Sensors 2017, 17, 1393. [Google Scholar] [CrossRef]
  86. Jantaraprim, P.; Phukpattaranont, P.; Limsakul, C.; Wongkittisuksa, B. A system for improving fall detection performance using critical phase fall signal and a neural network. Songklanakarin J. Sci. Technol. 2012, 34, 637–644. [Google Scholar]
  87. Kerdegari, H.; Mokaram, S.; Samsudin, K.; Ramli, A.R. A pervasive neural network based fall detection system on smart phone. J. Ambient. Intell. Smart Environ. 2015, 7, 221–230. [Google Scholar] [CrossRef]
  88. Khojasteh, S.B.; Villar, J.R.; Chira, C.; González, V.M.; de la Cal, E. Improving fall detection using an on-wrist wearable accelerometer. Sensors 2018, 18, 1350. [Google Scholar] [CrossRef] [PubMed]
  89. Kraft, D.; Srinivasan, K.; Bieber, G. Deep Learning Based Fall Detection Algorithms for Embedded Systems, Smartwatches, and IoT Devices Using Accelerometers. Technologies 2020, 8, 72. [Google Scholar] [CrossRef]
  90. Liaqat, S.; Dashtipour, K.; Shah, S.A.; Rizwan, A.; Alotaibi, A.A.; Althobaiti, T.; Arshad, K.; Assaleh, K.; Ramzan, N. Novel Ensemble Algorithm for Multiple Activity Recognition in Elderly People Exploiting Ubiquitous Sensing Devices. IEEE Sens. J. 2021, 21, 18214–18221. [Google Scholar] [CrossRef]
  91. Martins, L.M.; Ribeiro, N.F.; Soares, F.; Santos, C.P. Inertial Data-Based AI Approaches for ADL and Fall Recognition. Sensors 2022, 22, 4028. [Google Scholar] [CrossRef] [PubMed]
  92. Medrano, C.; Igual, R.; García-Magariño, I.; Plaza, I.; Azuara, G. Combining novelty detectors to improve accelerometer-based fall detection. Med. Biol. Eng. Comput. 2017, 55, 1849–1858. [Google Scholar] [CrossRef] [PubMed]
  93. Miah, A.S.M.; Hwang, Y.S.; Shin, J. Sensor-Based Human Activity Recognition Based on Multi-Stream Time-Varying Features With ECA-Net Dimensionality Reduction. IEEE Access 2024, 12, 151649–151668. [Google Scholar] [CrossRef]
  94. Nyan, M.N.; Tay, F.E.H.; Murugasu, E. A wearable system for pre-impact fall detection. J. Biomech. 2008, 41, 3475–3481. [Google Scholar] [CrossRef]
  95. Özdemir, A.T.; Turan, A. An analysis on sensor locations of the human body for wearable fall detection devices: Principles and practice. Sensors 2016, 16, 1161. [Google Scholar] [CrossRef]
  96. Pan, D.; Liu, H.; Qu, D. Heterogeneous Sensor Data Fusion for Human Falling Detection. IEEE Access 2021, 9, 17610–17619. [Google Scholar] [CrossRef]
  97. Putra, I.P.E.S.; Brusey, J.; Gaura, E.; Vesilo, R. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection. Sensors 2018, 18, 20. [Google Scholar] [CrossRef]
  98. Rashidpour, M.; Abdali-Mohammadi, F.; Fathi, A. Fall detection using adaptive neuro-fuzzy inference system. Int. J. Multimed. Ubiquitous Eng. 2016, 11, 91–106. [Google Scholar] [CrossRef]
  99. Ren, L.; Shi, W. Chameleon: Personalised and Adaptive Fall Detection of Elderly People in Home-based Environments. Int. J. Sens. Netw. 2016, 20, 163–176. [Google Scholar] [CrossRef]
  100. Rescio, G.; Leone, A.; Siciliano, P. Supervised Machine Learning Scheme for Electromyography-based Pre-fall Detection System. Expert. Syst. Appl. 2018, 100, 95–105. [Google Scholar] [CrossRef]
  101. Sabatini, A.M.; Ligorio, G.; Mannini, A.; Genovese, V.; Pinna, L. Prior-to- and Post-Impact Fall Detection Using Inertial and Barometric Altimeter Measurements. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 774–783. [Google Scholar] [CrossRef] [PubMed]
  102. Sarabia-Jácome, D.; Usach, R.; Palau, C.E.; Esteve, M. Highly-efficient fog-based deep learning AAL fall detection system. Internet Things 2020, 11, 100185. [Google Scholar] [CrossRef]
  103. Shahzad, A.; Kim, K. FallDroid: An automated smart-phone-based fall detection system using multiple kernel learning. IEEE Trans. Ind. Inf. 2018, 15, 35–44. [Google Scholar] [CrossRef]
  104. Suriani, N.S.; Rashid, F.N.; Yunos, N.Y. Optimal Accelerometer Placement for Fall Detection of Rehabilitation Patients. J. Telecommun. Electron. Comput. Eng. JTEC 2018, 10, 25–29. [Google Scholar]
  105. Tunca, C.; Salur, G.; Ersoy, C. Deep Learning for Fall Risk Assessment with Inertial Sensors: Utilizing Domain Knowledge in Spatio-Temporal Gait Parameters. IEEE J. Biomed. Health Inform. 2020, 24, 1994–2005. [Google Scholar] [CrossRef]
  106. Xi, X.; Tang, M.; Miran, S.M.; Luo, Z. Evaluation of Feature Extraction and Recognition for Activity Monitoring and Fall Detection Based on Wearable sEMG Sensors. Sensors 2017, 17, 1229. [Google Scholar] [CrossRef]
  107. Yoo, S.; Oh, D. An artificial neural network–based fall detection. Int. J. Eng. Bus. Manag. 2018, 10, 1847979018787905. [Google Scholar] [CrossRef]
  108. Yuwono, M.; Moulton, B.D.; Su, S.W.; Celler, B.G.; Nguyen, H.T. Unsupervised machine-learning method for improving the performance of ambulatory fall-detection systems. Biomed. Eng. Online 2012, 11, 9. [Google Scholar] [CrossRef]
  109. Adnan, S.M.; Irtaza, A.; Aziz, S.; Ullah, M.O.; Javed, A.; Mahmood, M.T. Fall Detection Through Acoustic Local Ternary Patterns. Appl. Acoust. 2018, 140, 296–300. [Google Scholar] [CrossRef]
  110. Alam, E.; Sufian, A.; Dutta, P.; Leo, M. Human Fall Detection Using Transfer Learning-Based 3D CNN. In Computational Technologies and Electronics, Proceedings of the ICCTE 2023. Communications in Computer and Information Science, Siliguri, India, 23–25 November 2023; Majumder, M., Zaman, J.K.M.S.U., Ghosh, M., Chakraborty, S., Eds.; Springer: Cham, Switzerland, 2023; Volume 2376. [Google Scholar] [CrossRef]
  111. de Miguel, K.; Brunete, A.; Hernando, M.; Gambao, E. Home Camera- Based Fall Detection System for the Elderly. Sensors 2017, 17, 2864. [Google Scholar] [CrossRef] [PubMed]
  112. Droghini, D.; Principi, E.; Squartini, S.; Olivetti, P.; Piazza, F. Human Fall Detection by Using an Innovative Floor Acoustic Sensor. Smart Innov. 2017, 69, 97–107. [Google Scholar] [CrossRef]
  113. Fan, K.; Wang, P.; Hu, Y.; Dou, B. Fall Detection via Human Posture Representation and Support Vector Machine. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147717707418. [Google Scholar] [CrossRef]
  114. Guerra, B.M.V.; Ramat, S.; Beltrami, G.; Schmid, M. Automatic pose Recognition for monitoring dangerous situations in ambient-assisted living. Front. Bioeng. Biotechnol. 2020, 8, 415. [Google Scholar] [CrossRef]
  115. Guerra, B.M.V.; Schmid, M.; Beltrami, G.; Ramat, S. Neural networks for automatic posture recognition in ambient-assisted living. Sensors 2022, 22, 2609. [Google Scholar] [CrossRef]
  116. Helen Victoria, A.; Maragatham, G. Activity recognition of FMCW radar human signatures using tower convolutional neural networks. Wirel. Netw. 2021. [Google Scholar] [CrossRef]
  117. Hu, X.; Qu, X. An Individual-specific Fall Detection Model based on the Statistical Process Control Chart. Saf. Sci. 2014, 64, 13–21. [Google Scholar] [CrossRef]
  118. Huu, P.N.; Thi, N.N.; Ngoc, T.P. Proposing Posture Recognition System Combining MobilenetV2 and LSTM for Medical Surveillance. IEEE Access 2022, 10, 1839–1849. [Google Scholar] [CrossRef]
  119. Karayaneva, Y.; Sharifzadeh, S.; Jing, Y.; Tan, B. Human activity recognition for AI-enabled healthcare using low-resolution infrared sensor data. Sensors 2023, 23, 478. [Google Scholar] [CrossRef]
  120. Li, Y.; Ho, M.; Popescu, A. Microphone Array System for Automatic Fall Detection. IEEE Trans. Biomed. Eng. 2012, 59, 1291–1301. [Google Scholar] [CrossRef] [PubMed]
  121. Li, X.; Xu, G.; He, B.; Ma, X.; Xie, J. Pre-impact Fall Detection based on A Modified Zero Moment Point Criterion using Data from Kinect Sensors. IEEE Sens. J. 2018, 18, 5522–5531. [Google Scholar] [CrossRef]
  122. Li, H.; Mehul, A.; Le Kernec, J.; Gurbuz, S.Z.; Fioranelli, F. Sequential Human Gait Classification with Distributed Radar Sensor Fusion. IEEE Sens. J. 2020, 21, 7590–7603. [Google Scholar] [CrossRef]
  123. Li, M.; Sun, Q. 3D skeletal human action recognition using a CNN fusion model. Math. Probl. Eng. 2021, 2021, 6650632. [Google Scholar] [CrossRef]
  124. Li, C.; Wang, X.; Shi, J.; Wang, H.; Wan, L. Residual Neural Network Driven Human Activity Recognition by Exploiting FMCW Radar. IEEE Access 2023, 11, 111875–111887. [Google Scholar] [CrossRef]
  125. Martelli, D.; Artoni, F.; Monaco, V.; Sabatini, A.M.; Micera, S. Pre-Impact Fall Detection: Optimal Sensor Positioning Based on a Machine Learning Paradigm. PLoS ONE 2014, 9, e92037. [Google Scholar] [CrossRef]
  126. Min, W.; Cui, H.; Rao, H.; Li, Z.; Yao, L. Detection of Human Falls on Furniture Using Scene Analysis Based on Deep Learning and Activity Characteristics. IEEE Access 2018, 6, 9324–9335. [Google Scholar] [CrossRef]
  127. Min, W.; Yao, L.; Lin, Z.; Liu, L. Support Vector Machine Approach to Fall Recognition based on Simplified Expression of Human Skeleton Action and Fast Detection of Start Key Frame using Torso Angle. IET Comput. Vis. 2018, 12, 1133–1140. [Google Scholar] [CrossRef]
  128. Natarajan, A.; Krishnasamy, V.; Singh, M. Device-Free Human Activity Recognition in Through-the-Wall Scenarios Using Single-Link Wi-Fi Channel Measurements. IEEE Sens. J. 2025, 25, 27556–27565. [Google Scholar] [CrossRef]
  129. Qiao, X.; Feng, Y.; Liu, S.; Shan, T.; Tao, R. Radar Point Clouds Processing for Human Activity Classification Using Convolutional Multilinear Subspace Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5121117. [Google Scholar] [CrossRef]
  130. Spasova, V.; Iliev, I.; Petrova, G. Privacy preserving fall detection based on simple human silhouette extraction and a linear support vector machine. Int. J. Bioautomat. 2016, 20, 237–252. [Google Scholar]
  131. Zahan, S.; Hassan, G.M.; Mian, A. Modeling Human Skeleton Joint Dynamics for Fall Detection. In Proceedings of the 2021 Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 29 November–1 December 2021; pp. 1–7. [Google Scholar] [CrossRef]
  132. Alabdulkreem, E.; Marzouk, R.; Alduhayyem, M.; Al-Hagery, M.A.; Motwakel, A.; Hamza, M.A. Chameleon Swarm Algorithm with Improved Fuzzy Deep Learning for Fall Detection Approach to Aid Elderly People. J. Disabil. Res. 2023, 2, 2–70. [Google Scholar] [CrossRef]
  133. Cao, X.; Wang, X.; Geng, X.; Wu, D.; An, H. An Approach for Human Posture Recognition Based on the Fusion PSE-CNN-BiGRU Model. CMES-Comp. Model. Eng. 2024, 140, 385–408. [Google Scholar] [CrossRef]
  134. Kepski, M.; Kwolek, B. Event-driven System for Fall Detection using Body-worn Accelerometer and Depth Sensor. IET Comput. Vis. 2018, 12, 48–58. [Google Scholar] [CrossRef]
  135. Sucerquia, A.; Lopez, J.D.; Vargas-Bonilla, J.F. Real-Life/Real-Time Elderly Fall Detection with a Triaxial Accelerometer. Sensors 2018, 18, 1101. [Google Scholar] [CrossRef]
  136. Habaebi, M.H.; Yusoff, S.H.; Ishak, A.N.; Islam, M.R.; Chebil, J.; Basahel, A. Wearable Smart Phone Sensor Fall Detection System. Int. J. Interact. Mob. Technol. 2022, 16, 72–93. [Google Scholar] [CrossRef]
  137. Agrawal, D.K.; Udgata, S.K.; Usaha, W. Leveraging Smartphone Sensor Data and Machine Learning Model for Human Activity Recognition and Fall Classification. Procedia Comput. Sci. 2024, 235, 1980–1989. [Google Scholar] [CrossRef]
  138. Stampfler, T.; Elgendi, M.; Fletcher, R.R.; Menon, C. Fall detection using accelerometer-based smartphones: Where do we go from here? Front. Public Health 2022, 17, 996021. [Google Scholar] [CrossRef]
  139. Vargas, V.; Ramos, P.; Orbe, E.A.; Zapata, M.; Valencia-Aragón, K. Low-Cost Non-Wearable Fall Detection System Implemented on a Single Board Computer for People in Need of Care. Sensors 2024, 24, 5592. [Google Scholar] [CrossRef]
  140. Auvinet, E.; Multon, F.; Saint-Arnaud, A.; Rousseau, J.; Meunier, J. Fall detection with multiple cameras: An occlusion-resistant method based on 3-D silhouette vertical distribution. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 290–300. [Google Scholar] [CrossRef]
  141. Anderson, D.; Luke, R.H.; Keller, J.M.; Skubic, M.; Rantz, M.; Aud, M. Linguistic Summarization of Video for Fall Detection Using Voxel Person and Fuzzy Logic. Comput. Vis. Image Underst. 2009, 113, 80–89. [Google Scholar] [CrossRef]
  142. Wang, X.; Ellul, J.; Azzopardi, G. Elderly fall detection systems: A literature survey. Front. Robot. AI 2020, 7, 71. [Google Scholar] [CrossRef] [PubMed]
  143. Jung, S.; Hong, S.; Kim, J.; Lee, S.; Hyeon, T.; Lee, M.; Kim, D.H. Wearable fall detector using integrated sensors and energy devices. Sci. Rep. 2015, 5, 17081. [Google Scholar] [CrossRef] [PubMed]
  144. Singh, K.; Rajput, A.; Sharma, S. Human fall detection using machine learning methods: A survey. Int. J. Math. Eng. Manag. Sci. 2020, 5, 161–180. [Google Scholar] [CrossRef]
  145. Saleh, M.; Jeannès, R.L.B. Elderly fall detection using wearable sensors: A low cost highly accurate algorithm. IEEE Sens. J. 2019, 19, 3156–3164. [Google Scholar] [CrossRef]
  146. Wang, G.; Li, Q.; Wang, L.; Zhang, Y.; Liu, Z. Elderly Fall Detection with an Accelerometer Using Lightweight Neural Networks. Electronics 2019, 8, 1354. [Google Scholar] [CrossRef]
  147. Ding, J.; Wang, Y. A WiFi-based smart home fall detection system using recurrent neural network. IEEE Trans. Consum. Electron. 2020, 66, 308–317. [Google Scholar] [CrossRef]
  148. Kim, T.H.; Choi, A.; Heo, H.M.; Kim, K.; Lee, K.; Mun, J.H. Machine Learning-Based Pre-Impact Fall Detection Model to Discriminate Various Types of Fall. J. Biomech. Eng. 2019, 141, 081010. [Google Scholar] [CrossRef]
  149. He, J.; Zhang, Z.; Wang, X.; Yang, S. A low power fall sensing technology based on FD-CNN. IEEE Sens. J. 2019, 19, 5110–5118. [Google Scholar] [CrossRef]
  150. Yhdego, H.; Li, J.; Morrison, S.; Audette, M.; Paolini, C.; Sarkar, M.; Okhravi, H. Towards musculoskeletal simulation-aware fall injury mitigation: Transfer learning with deep CNN for fall detection. In Proceedings of the Spring Simulation Conference (SpringSim), Tucson, AZ, USA, 29 April–2 May 2019; pp. 1–12. [Google Scholar]
  151. Mamdiwar, S.D.; Shakruwala, Z.; Chadha, U.; Srinivasan, K.; Chang, C.-Y. Recent Advances on IoT-Assisted Wearable Sensor Systems for Healthcare Monitoring. Biosensors 2021, 11, 372. [Google Scholar] [CrossRef]
  152. Verma, N.; Mundody, S.; Guddeti, R.M.R. An Efficient AI and IoT Enabled System for Human Activity Monitoring and Fall Detection. In Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Mandi, India, 24–28 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  153. Habib, M.; Mohktar, M.; Kamaruzzaman, S.; Lim, K.; Pin, T.; Ibrahim, F. Smartphone-Based Solutions for Fall Detection and Prevention: Challenges and Open Issues. Sensors 2014, 14, 7181–7208. [Google Scholar] [CrossRef]
  154. Karar, M.E.; Shehata, H.I.; Reyad, O. A Survey of IoT-Based Fall Detection for Aiding Elderly Care: Sensors, Methods, Challenges and Future Trends. Appl. Sci. 2022, 12, 3276. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram.
Figure 2. Fall detection performance analysis based on sensor category. (A): Histogram of performance parameter distribution by sensor category. (B): Radar chart comparing sensor categories for the five performance parameters. (C): Table listing the number of data entries available across all studies by parameter and sensor category. (D): Table displaying the distribution of studies that achieved 100% performance by parameter and sensor category. Nb = number; Ref = references. References: [15,19,22,35,38,41,48,61,65,79,80,83,88,94,96,97,98,101,107,110,112,120,121,128,134].
Figure 3. Fall detection performance analysis based on methods. (A): Histogram of performance parameter distribution by methods. (B): Radar chart comparing the four methods for the five performance parameters. (C): Table listing the number of data entries available across all studies by parameter and method. (D): Table displaying the distribution of studies that achieved 100% performance by parameter and method. Nb = number; Ref = references. References: [15,19,22,25,35,38,41,48,76,79,80,83,94,96,97,98,101,107,110,112,120,121,128,134].
Figure 4. Radar chart of most commonly used algorithm performance for DL, ML, Threshold, and other methods.
Figure 5. Radar chart of fall detection performance computed by dataset for the five parameters.
Figure 6. Radar chart of training and testing time by algorithm. The studies were divided into units of equal length. References: [22,35,49,76,78,91].
Table 1. Keyword combination for each database.
Database | Keyword Combination
PubMed/Medline | “wearable sensors”, (“Monitoring” OR “Ambient Assisted Living” OR “Assist* Living” OR “AAL” OR “Smart Home”), (“fall detection” OR “fall prevention” OR “fall risk assessment”), (“Static” OR “dynamic”), (“Elder*” OR “Senior” OR “old* people”), (“training time” OR “testing time”), (“specificity” OR “accuracy” OR “sensitivity” OR “precision”)
Google Scholar | “wearable sensors”, (“Monitoring” OR “Ambient Assisted Living” OR “Assist* Living” OR “AAL” OR “Smart Home”), (“fall detection” OR “fall prevention” OR “fall risk assessment”), (“Static” OR “dynamic”), (“Elder*” OR “Senior” OR “old* people”), (“training time” OR “testing time”), (“specificity” OR “accuracy” OR “sensitivity” OR “precision”)
ScienceDirect | “wearable sensors” AND “Ambient Assisted Living” AND “fall detection” AND “training time” AND “Elderly” AND accuracy AND specificity
Science.gov | Wearable sensors, fall detection, AAL, elder, accuracy
Academia | “wearable sensors”, “Ambient Assisted Living”, “fall detection”, accuracy, sensitivity, specificity
IEEE Xplore | sensor elderly AAL fall
Mendeley | wearable sensors AND Ambient Assisted Living AND fall detection AND training time AND Elderly AND accuracy AND sensitivity AND specificity
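Although not part of the review protocol itself, the boolean structure of these queries is straightforward to reproduce programmatically. The short Python sketch below is purely illustrative: it assembles the PubMed/Medline combination of Table 1 by joining the terms of each keyword group with OR and the groups with AND (the variable and list names are our own, not from the study).

```python
# Illustrative sketch: rebuilding the PubMed/Medline boolean query of Table 1.
# Terms within a group are alternatives (OR); groups are combined with AND.
groups = [
    ['"wearable sensors"'],
    ['"Monitoring"', '"Ambient Assisted Living"', '"Assist* Living"', '"AAL"', '"Smart Home"'],
    ['"fall detection"', '"fall prevention"', '"fall risk assessment"'],
    ['"Static"', '"dynamic"'],
    ['"Elder*"', '"Senior"', '"old* people"'],
    ['"training time"', '"testing time"'],
    ['"specificity"', '"accuracy"', '"sensitivity"', '"precision"'],
]

query = " AND ".join("(" + " OR ".join(group) + ")" for group in groups)
print(query)
```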
Table 2. Detailed presentation of sensors, methods, algorithms, and datasets for each of the 80 studies included. Data are classified by sensor category: wearable sensors, non-wearable sensors, and hybrid solutions.
Authors | Sensor Type | Sensor Position | Sensor Characteristics | Used Method | Algorithms | Used Dataset | Input Data Type
Wearable sensors
Agrawal et al., 2023 [77] | P | Foot | 20 Hz | ML | SVM, RF, LR, NB, DT, KNN | Custom | –
Al-Hassani et al., 2023 [43] | A, G, O | – | 100 Hz | DL | AEC | Custom | –
Ankalaki et al., 2024 [78] | A, G, M | Various | – | DL | CNN | UCI HAR, PAMAP2, Opportunity, Daphnet Gait HAR, UPFALL, SIMADL | –
Bourke et al., 2007 [79] | A | – | ±10 g | Threshold | Threshold | Custom | Dyn
Bourke et al., 2008 [80] | G | Chest | G: 1 kHz | Threshold | Threshold | Custom | –
Butt et al., 2021 [81] | ECG | Chest | – | DL | CNN | Custom | –
Chandramouli et al., 2024 [82] | – | – | – | DL | CNN, LSTM | Actitracker, MHEALTH | –
Chelli et al., 2019 [83] | A, G | Chest | A: ±8 g, 100 Hz, G: ±2000°/s, 100 Hz | ML, Other | KNN, SVM, ANN | Custom | –
Chen et al., 2018 [84] | A | – | ±2 g to ±4 g, 96.35 to 202.1 Hz | Threshold | Threshold | Custom | Stat
Gibson et al., 2016 [47] | A | Chest | 50 Hz | ML, Other | ANN, KNN | Custom | –
Gulati et al., 2021 [49] | A, G | Wrist | – | ML | RF, SVM, NB, DT, ANN | ADL, ARFall | Dyn
He et al., 2016 [34] | A, G | Neck | A: ±16 g, G: ±2000°/s | ML, Other | KNN, NB, ANN, DT | Custom | –
He et al., 2017 [85] | A, G | – | A: ±16 g, 100 Hz, G: 2000°/s, 100 Hz | ML, Other | KNN, NB, DT | Custom | –
Jahanjoo et al., 2020 [15] | A | Waist | – | Threshold | Threshold | tfall, MobiFall | –
Jantaraprim et al., 2012 [86] | A | Chest | 1 kHz | Other | – | Custom | –
Kerdegari et al., 2015 [87] | A | Waist | ±3 g, 100 Hz | Other | – | Custom | –
Khojasteh et al., 2018 [88] | A | Wrist, Waist | 16 to 204.8 Hz | ML, Other | DT, SVM | UMAFall | –
Kraft et al., 2020 [89] | A | Wrist, Waist | – | DL | CNN | Notch, MUMA, SimFall, Smartwatch, SmartFall, UPFall | –
Liaqat et al., 2021 [90] | A | In the pocket | – | ML, DL, Other | LR, RF, KNN, SVM, DT, MLP, CNN, LSTM | Custom (experimental, 30 subjects, 6 ADL) | Stat
Martins et al., 2022 [91] | A, G, M | Lower back, Thighs, Waist, Foot | – | ML, Other | KNN | SisFall, FallAIID, FARSEEING, UCI HAR, UMAFall, Custom | –
Mauldin et al., 2018 [41] | A | Wrist, Waist | ±8 g to ±16 g, 21.25 to 100 Hz | ML, Other | NB, SVM | Smartwatch, Notch, Farseeing | –
Medrano et al., 2017 [92] | A | – | 50 Hz | ML, Other | SVM | tfall | Stat
Miah et al., 2024 [93] | A, G, M | – | – | ML, DL, Other | SVM, HMM, GRU, CNN, LSTM | WISDM, PAMAP2, USCHAD, Opportunity, UCI HAR | –
Nyan et al., 2008 [94] | A, G | Waist, Thigh | A: ±4 g, G: 150°/s | Threshold | Threshold | Custom | –
Özdemir et al., 2014 [76] | A, G, M | Head, Chest, Waist, Wrist, Thigh, Ankle | A: ±13 g, 25 Hz, G: ±1200°/s, 25 Hz, M: ±1.5 Gauss, 25 Hz | ML, Other | KNN, SVM, ANN | Custom | –
Özdemir et al., 2016 [95] | A, G, M | Head, Chest, Waist, Wrist, Thigh, Ankle | A: ±13 g, 25 Hz, G: ±1200°/s, 25 Hz, M: ±1.5 Gauss, 25 Hz | ML, Other | KNN, SVM, ANN | Custom | –
Pan et al., 2021 [96] | A, AV, Mag | Shoulder, Waist, Foot | – | Other | – | Custom | –
Putra et al., 2018 [97] | A | Chest, Waist | 100, 200 Hz | DL | CNN | Cogent, SisFall | –
Rashidpour et al., 2016 [98] | A, G | Thigh | A: ±2 g, 87 Hz, G: ±2000°/s, 200 Hz | Other | – | MobiFall | –
Ren et al., 2016 [99] | A | Waist | 62.5 Hz | Other | – | Custom | –
Rescio et al., 2018 [100] | EMG | Leg | 1 kHz | Other | – | Custom | –
Sabatini et al., 2016 [101] | A, G, M | Waist | A: ±4 g, 50 Hz, G: 2000°/s | Threshold | Threshold | Custom | –
Santos et al., 2019 [38] | A | Wrist, Waist | ±8 g to ±16 g, 21.25 to 256 Hz | DL | CNN, LSTM | URFD, Notch, Smartwatch | –
Sarabia-Jácome et al., 2020 [102] | A | Waist | ±16 g, 100 Hz | ML, DL, Other | LSTM, GRU, SVM, KNN | SisFall | Dyn
Shahzad et al., 2018 [103] | A | Waist, Thigh | 64 Hz | ML | SVM, ANN, KNN, NB | Custom | –
Suriani et al., 2018 [104] | A | Hip, Thigh, Foot | ±3 g, 50 Hz | ML | KNN, SVM | Custom | –
Torti et al., 2019 [42] | A | – | – | DL | LSTM | SisFall | Dyn
Tunca et al., 2019 [105] | A, G, M | Foot | – | ML, DL | SVM, RF, MLP, HMM, LSTM | Custom | –
Wu et al., 2018 [64] | A, AV | – | A: ±16 g, 20 Hz, G: 2000°/s, 100 Hz | Other | – | Custom | –
Xi et al., 2017 [106] | EMG | Thigh, Leg | 1024 Hz | Other | – | Custom | –
Yacchirema et al., 2018 [35] | A | Waist | – | ML, Other | DT, SVM, MLP, KNN | SisFall | –
Yoo et al., 2018 [107] | A | Wrist | 50 Hz | Other | – | Custom | Dyn
Yuwono et al., 2012 [108] | A | Right pocket | ±6 g, 20 Hz | Other | – | Custom | –
Non-wearable sensors
Adnan et al., 2018 [109] | AS | Ext | 16 to 48 kHz | ML | SVM | Custom | –
Alam et al., 2023 [110] | C | Ext | – | DL | CNN | CAUCAFall, GMDCSA | –
Berlin et al., 2022 [48] | C | Ext | – | DL | CNN | URFD, FDD | –
de Miguel et al., 2017 [111] | C | Ext | – | ML | KNN | Custom | –
Ding et al., 2023 [20] | R | Ext | – | DL, ML, Other | CNN, KNN, LSTM | Custom | –
Droghini et al., 2017 [46] | AS | Ext | 44.1 kHz | ML | SVM | Custom | –
Droghini et al., 2017 [112] | AS | Ext | 44.1 kHz | ML | SVM | Custom | –
Fan et al., 2017 [113] | C | Ext | – | ML, Other | MLP, SVM | Custom | –
Guerra et al., 2020 [114] | C | Ext | – | Other | – | Fall detection, Fall detection testing | Stat and Dyn
Guerra et al., 2022 [115] | C | Ext | – | DL | GRU, LSTM | Custom | Dyn
Guerra et al., 2023 [17] | C | Ext | – | DL | LSTM | Custom | Dyn
Helen Victoria et al., 2021 [116] | R | Ext | 400 MHz, 5.8 GHz | DL | CNN | University of Glasgow | –
Hu et al., 2014 [117] | C | Ext | 100 Hz | Other | – | Custom | –
Huu et al., 2022 [118] | C | Ext | – | DL, ML, Other | SVM, CNN, LSTM | Human pose, Custom | –
Karayaneva et al., 2023 [119] | C | Ext | – | DL, Other | CNN, LSTM | Custom | Stat and Dyn
Li et al., 2012 [120] | AS | Ext | 20 kHz | Other | – | Custom | –
Li et al., 2018 [121] | C | Ext | – | Other | – | Custom | –
Li et al., 2020 [122] | R | Ext | 7.3 to 25 GHz | DL | LSTM | Custom | –
Li et al., 2021 [123] | C | Ext | – | Other | – | NTU RGB + D | Dyn
Li et al., 2023 [124] | R | Ext | – | DL, Other | CNN, LSTM | Custom | –
Liu et al., 2019 [28] | V | Ext | – | Other | – | Custom | –
Martelli et al., 2014 [125] | C | Ext | 100 Hz | DL | CNN | Custom | Dyn
Min et al., 2018 [126] | C | Ext | 256 Hz | DL | CNN | URFD, Custom | –
Min et al., 2018 [127] | C | Ext | – | ML | SVM | TST Fall | –
Natarajan et al., 2025 [128] | – | Ext | – | ML | SVM | Custom | –
Qiao et al., 2022 [129] | R | Ext | – | DL, Other | CNN | Custom | –
Singh et al., 2019 [22] | T | Ext | – | ML | LR, SVM, KNN, DT, RF | Custom | Stat
Spasova et al., 2016 [130] | IR | Ext | – | ML | SVM | Custom | Stat
Wang et al., 2020 [19] | A + R | Ext | – | ML, Other | HML | Custom | –
Zahan et al., 2021 [131] | C | Ext | – | DL, Other | CNN | UWA3D, NTU60, NTU120 | –
Hybrid solutions
Alabdulkreem et al., 2023 [132] | – | – | – | DL, Other | CNN | Custom | –
Benhaili et al., 2025 [45] | A, G, C | Waist | A: 50 Hz, G: 50 Hz, C: 200 Hz | DL, Other | CNN, LSTM, GRU | ICU HAR, MHEALTH, SisFall | –
Cao et al., 2024 [133] | – | – | – | DL | CNN, GRU | UCI HAR, HAR70PLUS, HABRD | Dyn
Kepski et al., 2018 [134] | A, G, M, C | – | 250 Hz | ML | SVM, KNN | URFD | –
Kwolek et al., 2014 [25] | A, C | Lower back, Ext | 256 Hz | ML | SVM | URFD | Dyn
Li et al., 2018 [73] | A, C | Waist, Ext | 50 Hz | ML, Threshold | SVM, Threshold | Custom | –
Sucerquia et al., 2018 [135] | A, C | Waist | ±16 g, 200 Hz | Threshold | Threshold | SisFall | –
Sensor type abbreviation—A: Accelerometer; A + R: Antennas and Receiver; AS: Acoustic sensor; AV: Angular Velocity; C: Camera; ECG: Electrocardiogram; EMG: Electromyography; G: Gyroscope; IR: Infrared sensor; M: Magnetometer; Mag: Magnetic sensor; O: Orientation sensor; P: Pressure Sensor; R: Radar; T: Thermal sensor; V: Vibration sensor. Sensor position abbreviation—Ext: exterior sensor. Method abbreviation—DL: Deep Learning; ML: Machine Learning. Algorithms abbreviation—ANN: Artificial Neural Network; AEC: auto-encoder; CNN: Convolutional Neural Network; DT: Decision Tree; GRU: Gated Recurrent Unit; HML: Hybrid Machine Learning; HMM: Hidden Markov Model; KNN: k-Nearest Neighbors; LR: Linear regression; LSTM: Long Short-Term Memory; MLP: Multi-layer Perceptron; NB: Naive Bayes; RF: Random Forest; SVM: Support Vector Machines. Input data type abbreviation—Dyn: Dynamic; Stat: Static.
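For readers less familiar with the "Threshold" method category in Table 2, the sketch below illustrates the general principle behind accelerometer thresholding (cf. Bourke et al. [79,80]): the acceleration magnitude is compared against fixed bounds, and a fall is flagged when a free-fall dip is followed by an impact peak. The function name, the 0.4 g/2.8 g bounds, and the example signal are hypothetical placeholders for illustration, not values reported by any included study.

```python
import numpy as np

G = 9.81                     # gravity (m/s^2)
LOWER_G, UPPER_G = 0.4, 2.8  # hypothetical free-fall / impact thresholds (in g)

def detect_fall(acc_xyz: np.ndarray) -> bool:
    """acc_xyz: (n_samples, 3) accelerometer signal in m/s^2."""
    magnitude_g = np.linalg.norm(acc_xyz, axis=1) / G
    dips = np.flatnonzero(magnitude_g < LOWER_G)   # candidate free-fall samples
    peaks = np.flatnonzero(magnitude_g > UPPER_G)  # candidate impact samples
    # Flag a fall only if an impact peak occurs after a free-fall dip.
    return dips.size > 0 and peaks.size > 0 and peaks.max() > dips.min()

# Quiet standing (about 1 g throughout) should not trigger a detection.
standing = np.tile([0.0, 0.0, G], (100, 1))
print(detect_fall(standing))  # False
```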
Table 3. Performance parameters present in each of the 80 included studies. Data are classified by sensor category: wearable sensors, non-wearable sensors, and hybrid solutions.
Authors | Accuracy | Specificity | Sensitivity | Precision | F1-Score | Training Time | Testing Time
Wearable sensors
Agrawal et al., 2023 [77]XXX
Al-Hassani et al., 2023 [43]XX XX
Ankalaki et al., 2024 [78]X X
Bourke et al., 2007 [79] X
Bourke et al., 2008 [80]XXX
Butt et al., 2021 [81]X
Chandramouli et al., 2024 [82]X
Chelli et al., 2019 [83]X X
Chen et al., 2018 [84] XX
Gibson et al., 2016 [47]XXXXX
Gulati et al., 2021 [49]XX XXXX
He et al., 2016 [34] XX
He et al., 2017 [85]XXX
Jahanjoo et al., 2020 [15] XX
Jantaraprim et al., 2012 [86] XX
Kerdegari et al., 2015 [87]XXX
Khojasteh et al., 2018 [88]XXXX
Kraft et al., 2020 [89]XX XX
Liaqat et al., 2021 [90]XX XX
Martins et al., 2022 [91]XXX XXX
Mauldin et al., 2018 [41]X XX
Medrano et al., 2017 [92] XX
Miah et al., 2024 [93]X XXX
Nyan et al., 2008 [94] XX
Özdemir et al., 2014 [76]XXX XX
Özdemir et al., 2016 [95]X X
Pan et al., 2021 [96]XXX
Putra et al., 2018 [97] XXX
Rashidpour et al., 2016 [98] XX
Ren et al., 2016 [99]XXX
Rescio et al., 2018 [100] XX
Sabatini et al., 2016 [101] XX
Santos et al., 2019 [38]XXXX
Sarabia-Jácome et al., 2020 [102]XXX
Shahzad et al., 2018 [103]XXX
Suriani et al., 2018 [104]X
Torti et al., 2019 [42]XXX
Tunca et al., 2019 [105]X
Wu et al., 2018 [64] XX
Xi et al., 2017 [106] XX X
Yacchirema et al., 2018 [35]XXX XX
Yoo et al., 2018 [107]XXX
Yuwono et al., 2012 [108] X
Non-wearable sensors
Adnan et al., 2018 [109]X XXX
Alam et al., 2023 [110]XXXXX
Berlin et al., 2022 [48]XXXXX
de Miguel et al., 2017 [111]XXXX
Ding et al., 2023 [20]XX XXXX
Droghini et al., 2017 [46] X
Droghini et al., 2017 [112] X
Fan et al., 2017 [113]X
Guerra et al., 2020 [114]XXXX
Guerra et al., 2022 [115] XXX
Guerra et al., 2023 [17]X
Helen Victoria et al., 2021 [116]X
Hu et al., 2014 [117] XX
Huu et al., 2022 [118]X XX
Karayaneva et al., 2023 [119]X
Li et al., 2012 [120]XXX
Li et al., 2018 [121]XXX
Li et al., 2020 [122]X
Li et al., 2021 [123]X
Li et al., 2023 [124]XX XX
Liu et al., 2019 [28] XX
Martelli et al., 2014 [125]XXX
Min et al., 2018 [126]X XX
Min et al., 2018 [127]X
Natarajan et al., 2025 [128] XXX
Qiao et al., 2022 [129]XXXX
Singh et al., 2019 [22]XX XXXX
Spasova et al., 2016 [130]XXX
Wang et al., 2020 [19]XX X
Zahan et al., 2021 [131]XXX XX
Hybrid solutions
Alabdulkreem et al., 2023 [132]X
Benhaili et al., 2025 [45]X
Cao et al., 2024 [133]X X
Kepski et al., 2018 [134]X X
Kwolek et al., 2014 [25]XXXX
Li et al., 2018 [73]X
Sucerquia et al., 2018 [135]XXX
All studies marked with the symbol X reported at least one value for the corresponding performance parameter. In many cases, more than one value was reported per parameter, owing to comparisons between several methods and algorithms. Details of the data are available in the Supplementary File.
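The five performance parameters tallied in Table 3 follow the standard confusion-matrix definitions. As a reminder, the minimal Python sketch below computes them from the true/false positive and negative counts of a binary fall/no-fall classifier; the function name and the example counts are illustrative, not data from the included studies.

```python
def performance(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard confusion-matrix metrics for a binary fall/no-fall classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall: proportion of falls detected
    specificity = tn / (tn + fp)   # proportion of non-falls correctly ignored
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"Acc": accuracy, "Prec": precision, "Sens": sensitivity,
            "Spec": specificity, "F1": f1}

# Hypothetical example: 48 detected falls, 1 missed fall, 2 false alarms, 149 true negatives.
print(performance(tp=48, tn=149, fp=2, fn=1))
```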
Table 4. Studies reporting one or more fall detection methods with a performance of at least one parameter equal to 100%. Data are classified by sensor category: wearable sensors, non-wearable sensors, and hybrid solutions.
Authors | Accuracy | Specificity | Sensitivity | Precision | F1-Score | Datasets
Wearable sensors
Bourke et al., 2007 [79] 100% (1) Custom
Bourke et al., 2008 [80]100% (1)100% (1)100% (1) Custom
Chelli et al., 2019 [83] 100% (1) Custom
Jahanjoo et al., 2020 [15] 100% (4)100% (3) tfall, MobiFall
Khojasteh et al., 2018 [88] 100% (1) UMAFall
Mauldin et al., 2018 [41] 100% (2) Smartwatch, Notch, Farseeing
Nyan et al., 2008 [94] 100% (1) Custom
Özdemir et al., 2014 [76] 100% (2) Custom
Pan et al., 2021 [96]100% (1)100% (1)100% (3) Custom
Putra et al., 2018 [97] 100% (4) Cogent, Sisfall
Rashidpour et al., 2016 [98] 100% (1)100% (1) MobiFall
Sabatini et al., 2016 [101] 100% (1) Custom
Santos et al., 2019 [38] 100% (3) 100% (3) URFD, Notch, Smartwatch
Yacchirema et al., 2018 [35] 100% (2)100% (2) SisFall
Yoo et al., 2018 [107]100% (4)100% (5)100% (4) Custom
Non-wearable sensors
Alam et al., 2023 [110] 100% (1) 100% (1) Custom
Berlin et al., 2022 [48]100% (1)100% (2)100% (1)100% (2)100% (1)URFD, FDD
Droghini et al., 2017 [112] 100% (1)Custom
Li et al., 2012 [120] 100% (1) Custom
Li et al., 2018 [121] 100% (1) Custom
Natarajan et al., 2025 [128] 100% (1) Custom
Singh et al., 2019 [22] 100% (3) Custom
Wang et al., 2020 [19]100% (2)100% (1) Custom
Hybrid solutions
Kepski et al., 2018 [134]100% (1) 100% (1)URFD
Kwolek et al., 2014 [25] 100% (2) URFD
The number in brackets indicates how many methods/algorithms reached 100% for the corresponding parameter in each study.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
