Systematic Review

A Systematic Review of Fall Detection and Prediction Technologies for Older Adults: An Analysis of Sensor Modalities and Computational Models

by Muhammad Ishaq *, Dario Calogero Guastella, Giuseppe Sutera and Giovanni Muscato
Department of Electrical Electronic and Information Engineering, University of Catania, 95123 Catania, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1929; https://doi.org/10.3390/app16041929
Submission received: 4 January 2026 / Revised: 30 January 2026 / Accepted: 7 February 2026 / Published: 14 February 2026

Abstract

Background: Falls are a leading cause of morbidity and mortality among older adults, creating a need for technologies that can automatically detect falls and summon timely assistance. The rapid evolution of sensor technologies and artificial intelligence has led to a proliferation of fall detection systems (FDS). This systematic review synthesizes the recent literature to provide a comprehensive overview of the current technological landscape. Objective: The objective of this review is to systematically analyze and synthesize the evidence from the academic literature on fall detection technologies. The review focuses on three primary areas: the sensor modalities used for data acquisition, the computational models employed for fall classification, and the emerging trend of shifting from reactive detection to proactive fall risk prediction. Methods: A systematic search of electronic databases was conducted for studies published between 2008 and 2025. Following the PRISMA guidelines, 130 studies met the inclusion criteria and were selected for analysis. Information regarding sensor technology, algorithm type, validation methods, and key performance outcomes was extracted and thematically synthesized. Results: The analysis identified three dominant categories of sensor technologies: wearable systems (primarily Inertial Measurement Units), ambient systems (including vision-based, radar, WiFi, and LiDAR), and hybrid systems that fuse multiple data sources. Computationally, the field has shown a progression from threshold-based algorithms to classical machine learning and is now dominated by deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers. Many studies report high performance, with accuracy, sensitivity, and specificity often exceeding 95%. 
An important trend is the expansion of research from post-fall detection to proactive fall risk assessment and pre-impact fall prediction, which aim to prevent falls before they cause injury. Conclusions: The technological capabilities for fall detection are well-developed, with deep learning models and a variety of sensor modalities demonstrating high accuracy in controlled settings. However, a critical gap remains; our analysis reveals that 98.5% of studies rely on simulated falls, with only two studies validating against real-world, unanticipated falls in the target demographic. Future research should prioritize real-world validation, address practical implementation challenges such as energy efficiency and user acceptance, and advance the development of integrated, multi-modal systems for effective fall risk management.

1. Introduction

Falls among older adults represent a growing public health concern worldwide. The World Health Organization (WHO) identifies falls as the second leading cause of accidental injury-related deaths, with adults over 65 years of age experiencing the highest number of fatal falls [1,2]. The consequences of a fall extend beyond immediate physical injury. An unassisted fall can lead to functional impairment, reduced mobility, and a decrease in independence and quality of life [3]. The period following a fall is critical, as a long lie on the floor can precipitate serious medical complications [4]. Furthermore, the fear of falling can result in psychological distress and a self-imposed restriction of daily activities [5].
In response to this challenge, the development of automated fall detection systems (FDS) has become a prominent area of research in health technology and gerontology. These systems are designed to monitor an individual, detect a fall event, and automatically summon assistance, thereby reducing the time between the fall and the arrival of medical help [6,7]. The evolution of these technologies has progressed from single-modality wearable devices to systems that incorporate ambient sensors, vision-based monitoring, and the integration of Internet of Things (IoT) frameworks [8,9]. This technological landscape is broadly categorized into wearable, ambient, and hybrid systems, each with distinct advantages and limitations regarding user adoption, installation complexity, and performance [10,11].
Recent advancements are increasingly characterized by the application of machine learning (ML), especially deep learning (DL) algorithms, to analyze sensor data with high accuracy [12]. Models such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers are now frequently employed to classify complex human movements and differentiate falls from other activities of daily living (ADLs) [13,14,15]. Concurrently, research has expanded beyond reactive fall detection to include proactive fall risk assessment and pre-impact fall prediction, which aim to identify individuals at high risk or to anticipate a fall before impact occurs, opening new possibilities for injury prevention [16,17,18].
While several reviews on fall detection technologies exist [19,20,21], the rapid pace of technological innovation, evidenced by the sharp increase in publications since 2017 (Figure 1), necessitates a continuous and systematic synthesis of the literature. Notably, the publication volume peaked in 2020, a trend that coincides with the COVID-19 pandemic and reflects an intensified focus on remote monitoring solutions for older adults during a period of social isolation and healthcare system strain, a context explicitly noted within the reviewed literature [22]. A second peak in 2024 demonstrates sustained momentum in the field.
This review provides a structured analysis of 130 recent academic papers to map the current state of sensor technologies and computational models. To synthesize and evaluate the evidence, this review addresses four specific research questions (RQs): RQ1 examines the prevailing sensor modalities and hardware configurations; RQ2 analyzes how computational models have evolved from threshold-based methods to Deep Learning; RQ3 evaluates the reported performance metrics (accuracy, sensitivity, specificity) and the impact of validation environments (lab vs. real-world); and RQ4 investigates the extent of the field’s shift from reactive post-fall detection to proactive pre-impact prediction.

2. Methodology: A PRISMA-Based Approach

This systematic review was conducted and reported in accordance with the principles of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement. The review protocol was not registered in PROSPERO. The methodology was designed to systematically identify, screen, and synthesize relevant literature on fall detection and prevention technologies for older adults.

2.1. Search Strategy

A comprehensive literature search was performed across several electronic databases, including IEEE Xplore, PubMed, and Scopus, to identify relevant studies. The search strategy combined keywords related to the population (“older adults”, “elderly”, “geriatric”), the intervention (“fall detection”, “fall prevention”, “activity monitoring”, “wearable sensor”, “ambient intelligence”), and the outcome (“accuracy”, “risk assessment”, “prediction”). The search was limited to articles published between January 2008 and May 2025.

2.2. Inclusion and Exclusion Criteria

Studies were included if they met the following criteria: (1) Original research articles or systematic reviews published in peer-reviewed journals or conference proceedings. For review articles, to avoid double-counting of evidence, data regarding system accuracy and performance metrics were extracted exclusively from original research articles. Included review papers were utilized solely for the thematic synthesis of technological trends and historical evolution. (2) Focused on the design, development, validation, or review of technological systems for fall detection, prevention, or risk assessment. (3) The target population was older adults. Additionally, to ensure the review captured cutting-edge developments up to 2025, preprints from sources such as arXiv were included, provided they met strict quality criteria regarding methodological description. However, editorials, commentaries, and abstracts without full text were excluded.

2.3. Study Selection

The study selection process followed a multi-stage screening protocol. Initially, all identified records were collated, and duplicate entries were removed. To ensure consistency (PRISMA Item 8), two reviewers independently screened the titles and abstracts. Disagreements were resolved through discussion or consultation with a third reviewer. Articles that clearly did not meet the inclusion criteria were excluded at this stage. Subsequently, the full text of the remaining articles was retrieved and assessed for final eligibility against the predefined criteria. The process is detailed in the PRISMA flow diagram in Figure 2, which illustrates the systematic identification, screening, and inclusion of studies for the review. The data items extracted (PRISMA Item 10a) included sensor modality, algorithm type, validation environment (simulated vs. real-world), and performance metrics. Risk of bias was assessed by examining the validation method, specifically the reliance on young volunteers simulating falls versus real-world data collection.
The initial database search yielded 450 records. After the removal of 65 duplicates, 385 records were screened, leading to the exclusion of 210 based on title and abstract. The full texts of the remaining 175 papers were sought for retrieval, with 170 assessed for eligibility. Following a detailed full-text review, 40 papers were excluded for not meeting the inclusion criteria. This process resulted in a final set of 130 studies included in the systematic review, comprising 108 original research articles and 22 systematic reviews/surveys.
To provide an overview of the synthesized literature, the key characteristics of all 130 included studies are detailed in Appendix A. This table summarizes each paper’s reference, the primary technology or modality investigated, the type of algorithm employed for classification, and the principal finding or reported accuracy.

2.4. Meta-Analysis Statement

Due to the significant heterogeneity in sensor modalities (wearable vs. ambient), experimental protocols (simulated vs. real-world), and performance metrics reported across the 130 studies, a quantitative meta-analysis was not feasible. Consequently, a narrative synthesis approach was adopted to categorize findings.

3. Thematic Synthesis

This section synthesizes the findings from the 130 included studies, organized by the primary technological and methodological themes. While the 22 review papers were utilized to map the historical evolution of the field, the performance analysis relies primarily on the 108 original research articles. The analysis is structured to first examine the sensor technologies, followed by the computational models, the methods for system validation, and the progression from fall detection to risk prediction. Key characteristics of all included studies are detailed in Appendix A.

3.1. Sensor Technologies: A Tripartite Classification

The literature on fall detection is predominantly categorized by the type of sensor technology employed. Figure 3 illustrates the distribution of these technologies across the reviewed studies. These systems can be classified into three main groups: wearable sensor-based systems, ambient sensor-based systems, and hybrid systems that integrate multiple modalities [10,11]. Wearable systems (36.2%) represent the most prevalent research modality, followed by vision-based systems (20.8%). The "Ambient (Non-Vision)" category, which includes technologies such as radar, WiFi, and LiDAR, accounts for a smaller but distinct share of the reviewed studies.
The remaining 26.1% (n = 34) comprises 22 systematic reviews (16.9%, included for qualitative synthesis only), alongside dataset publications and clinical/EHR risk assessment models. Because the inclusion of review papers risks redundancy and double-counting of evidence, quantitative performance metrics were extracted exclusively from the original research articles. A detailed taxonomy of these modalities, illustrating the hardware technologies identified within each category, is presented in Figure 4. Each approach presents a distinct set of capabilities and challenges related to accuracy, user acceptance, privacy, and implementation complexity.

3.1.1. Wearable Sensor-Based Systems

Wearable systems are the most frequently investigated modality in the reviewed papers. These systems involve one or more sensors worn on the user’s body to continuously monitor movement and posture. The most common sensor is the tri-axial accelerometer, which measures acceleration along three axes [23]. Many systems advance this by incorporating a gyroscope and sometimes a magnetometer, creating an Inertial Measurement Unit (IMU) that provides a complete picture of the body’s orientation and movement [24,25,26].
The placement of the wearable device is an important factor influencing detection accuracy. Several studies advocate for waist or trunk placement, arguing it is close to the body’s center of mass and thus provides a stable reference for movement analysis [22,27]. Other research explores wrist-worn devices, which may offer better user compliance and comfort, though they present greater challenges in distinguishing falls from rapid arm movements [28,29,30]. Thigh placement has also been investigated for its ability to capture lower-limb dynamics effectively [16]. To improve performance, some wearable systems incorporate additional sensors, such as barometers to detect rapid changes in altitude associated with a fall [31]. Figure 5 illustrates the common sensor placements discussed in the literature and summarizes their respective trade-offs regarding signal accuracy and user compliance.
The primary advantage of wearable devices is their ability to provide continuous, person-centric monitoring regardless of the user’s location within a monitored space. However, their effectiveness is contingent on user adherence; a device that is not worn cannot detect a fall. This limitation is a frequently cited rationale for the development of ambient systems [32]. Furthermore, power consumption is a practical constraint, prompting research into energy-efficient algorithms and hardware [6,33,34]. The development of public datasets from wearable sensors, such as the SisFall [35] and UP-Fall Detection Dataset [36], has been instrumental in allowing researchers to benchmark and compare algorithms. Table 1 provides a summary of the characteristics of these widely used public datasets, detailing the sensor modalities used and the nature of the activities recorded. These datasets are crucial for benchmarking algorithms, as noted in Section 3.3.

3.1.2. Ambient Sensor-Based Systems

Ambient systems utilize sensors integrated into the living environment, offering a non-intrusive approach that does not require the user to wear a device. This category encompasses various technologies, with vision-based and radio-frequency-based methods being the most prominent.
Vision-Based Systems: These systems employ cameras to monitor the environment. Approaches range from standard RGB cameras [40,41] to RGB-Depth (RGB-D) cameras like the Microsoft Kinect, which provide depth information to better distinguish a person from the background [42,43]. Low-resolution thermal sensors have also been proposed as a privacy-preserving alternative to traditional cameras [5]. Analysis techniques in these systems often involve tracking the human silhouette [37,44], identifying skeletal key points through pose estimation [45,46,47], or analyzing optical flow to quantify motion [48]. Vision-based approaches are the subject of several review papers, which show the increasing application of DL in this domain [21,49]. The principal challenge for vision systems is the concern for user privacy, alongside technical issues like occlusion and varying illumination conditions [50].
Radio Frequency (RF)-Based Systems: RF-based systems use signals that can operate without line-of-sight and in various lighting conditions, making them a privacy-respecting alternative. Radar is a key technology in this sub-category, with studies using Continuous-Wave (CW) Doppler radar [51,52], Frequency-Modulated Continuous-Wave (FMCW) radar [53], millimeter-wave (mmWave) radar [54,55,56], and Ultra-Wideband (UWB) radar [57]. These systems typically analyze the micro-Doppler signatures reflected from a moving person to classify activities [58,59]. Another RF-based approach uses commodity WiFi signals to passively detect falls by analyzing disturbances in the signal patterns caused by human motion [60]. A review by [61] discusses the complementary nature of radar and RGB-D sensors.
Other Ambient Modalities: Researchers have also explored other novel ambient sensors. These include systems based on floor vibrations and acoustic signals [62], smart flooring with integrated force sensors [63], pyroelectric infrared (PIR) sensors for motion detection [64], and smart carpets with RFID technology [65]. More recently, LiDAR has been investigated for its ability to create a 3D point cloud of the environment, enabling fall detection while preserving anonymity [66,67].

3.1.3. Hybrid Systems

Hybrid systems, also known as multi-sensor fusion systems, combine two or more sensor modalities to overcome the limitations of any single approach [11,68]. The rationale is to improve the reliability and accuracy of the FDS. For example, a system might use a wearable accelerometer for initial fall detection and a camera for confirmation, a “double-check” method that can reduce false alarms [69]. Other examples include the fusion of RGB images with accelerometer data [70], the combination of video and audio streams [71], and the integration of a wearable accelerometer with a time-of-flight camera and a microphone [72]. These multimodal approaches aim to create a context-aware system by employing complementary data sources.
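As an illustration of the "double-check" fusion logic described above, the following minimal Python sketch combines a wearable trigger with a camera confirmation. The function, score scales, and threshold values are hypothetical stand-ins, not taken from any reviewed system:

```python
def fused_alarm(wearable_score: float, camera_score: float,
                w_thresh: float = 0.8, c_thresh: float = 0.6) -> bool:
    """'Double-check' fusion: the wearable sensor raises a candidate fall
    event, and the camera-based classifier must confirm it before an alarm
    is sent, suppressing false alarms from either sensor alone.

    Scores are assumed to be fall probabilities in [0, 1]; the thresholds
    are illustrative values only.
    """
    candidate = wearable_score >= w_thresh   # e.g. accelerometer spike
    confirmed = camera_score >= c_thresh     # e.g. lying posture detected
    return candidate and confirmed

# A wearable spike without visual confirmation does not trigger an alarm:
print(fused_alarm(0.95, 0.2))   # → False
print(fused_alarm(0.95, 0.8))   # → True
```

The AND-style rule trades a small loss in sensitivity for a large gain in specificity, which matches the false-alarm-reduction rationale cited in [69].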

3.2. Computational Models: From Thresholds to Deep Learning

The classification of a fall event from sensor data is the central task of the computational model within an FDS. The sophistication of these models has evolved from simple, threshold-based algorithms to complex DL architectures. The choice of model is often linked to the sensor modality and the desired trade-off between accuracy, computational cost, and energy efficiency [12].

3.2.1. Threshold-Based and Rule-Based Algorithms

Early and some current FDS implementations rely on threshold-based algorithms. These methods typically extract simple features from sensor data, such as the peak resultant acceleration or changes in posture angle, and compare them against predefined thresholds to trigger a fall alarm [28]. While computationally inexpensive and easy to implement on low-power devices, these systems can be prone to false alarms as they may struggle to differentiate falls from vigorous ADLs that exhibit similar signal characteristics [73]. Some studies propose more complex rule-based systems or finite state machines (FSM) that combine multiple conditions to improve specificity [67].
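The threshold logic described above can be sketched in a few lines of Python. The impact and stillness thresholds below are illustrative values chosen for the example, not parameters reported in the reviewed studies:

```python
import numpy as np

def detect_fall(ax, ay, az, impact_g=2.5, still_g=0.3, window=50):
    """Flag a fall when a high-acceleration impact is followed by stillness.

    ax, ay, az : tri-axial accelerometer samples in units of g.
    impact_g   : peak resultant-acceleration threshold (illustrative).
    still_g    : allowed post-impact deviation from 1 g (lying still).
    window     : number of samples to inspect after a candidate impact.
    """
    # Resultant acceleration magnitude across the three axes.
    mag = np.sqrt(np.asarray(ax)**2 + np.asarray(ay)**2 + np.asarray(az)**2)
    for i in np.where(mag > impact_g)[0]:
        post = mag[i + 1 : i + 1 + window]
        # After a genuine fall the body is roughly motionless, so the
        # resultant magnitude hovers around 1 g (gravity only).
        if len(post) == window and np.all(np.abs(post - 1.0) < still_g):
            return True
    return False
```

The two-stage rule (impact peak plus post-impact stillness) is one common way such algorithms reduce false alarms from vigorous ADLs, though, as noted above, signal-similar activities can still defeat fixed thresholds.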

3.2.2. Classical Machine Learning

ML models represent an advancement over simple thresholding. These supervised learning approaches are trained on labeled datasets containing examples of both falls and ADLs. Common algorithms reported in the literature include Support Vector Machines (SVM) [4,37,45], K-Nearest Neighbors (KNN) [65], Decision Trees [74], and ensemble methods like Random Forest [39,64] and Boosted Decision Trees [8]. These models often achieve high accuracy by learning complex decision boundaries from engineered features extracted from the raw sensor signals [75,76]. For example, a Multiple Kernel Learning SVM in a smartphone-based system has been used in [77], and in [6] the authors developed a low-cost ML algorithm with novel feature extraction that achieved over 99.9% accuracy. The performance of these models is heavily dependent on the quality and representativeness of the feature engineering process.
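To illustrate the classical pipeline of feature engineering followed by supervised classification, the following sketch trains a Random Forest on synthetic accelerometer windows. The data, features, and hyperparameters are invented stand-ins for demonstration, not those used in the cited studies:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(window):
    """Hand-engineered features from one window of resultant acceleration (g)."""
    return [window.max(), window.min(), window.mean(), window.std(),
            window.max() - window.min()]          # peak-to-peak range

# Synthetic stand-ins for segmented sensor windows: ADLs hover near 1 g,
# while simulated falls contain a brief high-magnitude impact.
adl_windows = [1.0 + 0.1 * rng.standard_normal(100) for _ in range(100)]
fall_windows = []
for _ in range(100):
    w = 1.0 + 0.1 * rng.standard_normal(100)
    w[50] = 3.0 + rng.random()                    # impact spike
    fall_windows.append(w)

X = np.array([extract_features(w) for w in adl_windows + fall_windows])
y = np.array([0] * 100 + [1] * 100)               # 0 = ADL, 1 = fall

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The sketch makes the dependence on feature quality concrete: the classifier sees only the five engineered statistics, so any fall characteristic not captured by them is invisible to the model.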

3.2.3. Deep Learning Architectures

The most recent trend, and a dominant theme in the reviewed papers, is the application of DL models. Figure 6 illustrates this evolution over three distinct periods, identifying the rapid displacement of classical ML and threshold-based methods by DL architectures since 2021. The data reveals a marked shift over time. In the initial period (2008–2015), research relied on threshold-based and classical ML models. The middle period (2016–2020) shows a transition, with a near-even split between classical ML (n = 23) and the rapidly emerging DL approaches (n = 21). In the most recent period (2021–2025), DL has become the dominant paradigm (n = 41), outnumbering classical ML (n = 9) and nearly displacing threshold-based methods. This trend reflects the field’s progression from manually engineered features to end-to-end architectures that automatically learn hierarchical feature representations from raw or minimally processed sensor data. Several reviews are dedicated to this topic, underscoring the transformative impact of DL on fall detection [13,49,78].
Convolutional Neural Networks: CNNs are widely used, particularly for vision-based systems where they excel at processing image data [48,79,80]. They are also applied to time-series data from wearable sensors by treating the signal as a one-dimensional array [81,82,83]. Variants like 3D CNNs are used to capture spatiotemporal features from video sequences [84].
Recurrent Neural Networks: RNNs, particularly their advanced variants like Long Short-Term Memory (LSTM) [5,85] and Gated Recurrent Units (GRU) [86], are well-suited for modeling sequential data. They are commonly used with wearable sensor data to capture the temporal dynamics of movement leading up to, during, and after a fall [87].
Hybrid Models: Many of the highest-performing systems use hybrid DL architectures. CNN-LSTM models combine the feature extraction power of CNNs with the sequence modeling capabilities of LSTMs [17,18,33,88]. Transformer-based models, which use attention mechanisms to weigh the importance of different parts of the input sequence, are an emerging alternative, showing strong performance in both vision-based [15,89,90] and wearable systems [25].
Other DL Approaches: The literature also reports on other DL techniques, including Deep Belief Networks (DBN) [38], Autoencoders for anomaly detection [34,54], and Federated Learning to train models on decentralized data while preserving privacy [91]. To clarify the varied range of computational strategies employed, Figure 7 presents a taxonomy of the deep learning architectures identified in this review, categorized by their data processing focus.
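A minimal sketch of the hybrid CNN-LSTM pattern described above, assuming PyTorch and an invented layer configuration (not the architecture of any specific reviewed study):

```python
import torch
import torch.nn as nn

class ConvLSTMFallNet(nn.Module):
    """Illustrative CNN-LSTM for windows of tri-axial IMU data.

    Input:  (batch, 3 channels, T samples) raw accelerometer window.
    Output: (batch, 2) logits for {ADL, fall}.
    """
    def __init__(self, hidden=32):
        super().__init__()
        # 1D convolutions extract local motion features from the raw signal,
        # replacing manual feature engineering.
        self.conv = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the temporal ordering of the conv features,
        # capturing the dynamics before, during, and after a fall.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):
        f = self.conv(x)          # (batch, 32, T/4)
        f = f.transpose(1, 2)     # (batch, T/4, 32) for the LSTM
        _, (h, _) = self.lstm(f)
        return self.head(h[-1])   # classify from the final hidden state

model = ConvLSTMFallNet()
logits = model(torch.randn(8, 3, 128))   # 8 windows of 128 IMU samples
print(logits.shape)
```

The division of labor shown here (convolutions for local feature extraction, recurrence for sequence modeling) is the general rationale behind the CNN-LSTM hybrids cited above.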
The use of DL has consistently led to high reported accuracy rates, often exceeding 95%. However, these models require large amounts of labeled training data and are computationally more intensive, which can be a challenge for real-time implementation on low-power, edge-computing devices [92]. Table 2 synthesizes the findings from Section 3.2, summarizing the comparative strengths and limitations of the three dominant algorithmic paradigms and their trade-offs regarding feature extraction, accuracy, resource consumption, and hardware requirements.

3.3. System Validation and Performance Metrics

Validating the performance of FDS is an important step, yet it presents considerable challenges. The gold standard would be to test systems on real-world, unanticipated falls in older adults. However, such events are rare and unpredictable, making data collection difficult. As a result, the vast majority of studies in this review rely on simulated falls performed by healthy young or older adult volunteers [35,36]. While this provides a controlled environment for testing, it is acknowledged that simulated falls may not accurately replicate the biomechanics of genuine falls [4]. Only a few studies, such as those by [4,43], were able to include a small number of naturally occurring falls in their validation datasets.
A stratified analysis of the 130 reviewed studies reveals a sharp distinction in validation protocols between active detection and proactive risk assessment. While research focused on Fall Risk Assessment frequently utilized real-world clinical data or continuous monitoring of older adults [93,94,95], the validation of Fall Detection systems remains heavily reliant on laboratory simulation. Of the detection-focused studies reviewed, 128 (98.5%) validated their algorithms using simulated falls performed by volunteers or standardized public datasets. Only 2 studies (1.5%) [4,43] explicitly reported validating their detection performance against naturally occurring, unanticipated falls captured in the target demographic. This lack of ecological validity in detection algorithms remains the field’s most significant methodological hurdle.
The most common performance metrics used to evaluate FDS are accuracy, sensitivity (recall), and specificity.
  • Accuracy: The proportion of total classifications that are correct.
  • Sensitivity: The ability of the system to correctly identify true falls (true positives/(true positives + false negatives)). High sensitivity is important to ensure that actual falls are not missed.
  • Specificity: The ability of the system to correctly identify non-fall events (true negatives/(true negatives + false positives)). High specificity is essential for minimizing false alarms, which can lead to user frustration and alarm fatigue for caregivers.
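These definitions can be made concrete with a short computation from hypothetical confusion-matrix counts (the counts below are invented for illustration):

```python
def fall_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from fall/non-fall counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on true falls: misses are costly
    specificity = tn / (tn + fp)   # correct rejection of non-fall ADLs
    return accuracy, sensitivity, specificity

# e.g. 95 detected falls, 5 missed, 980 ADLs correctly ignored, 20 false alarms
acc, sens, spec = fall_metrics(tp=95, fn=5, tn=980, fp=20)
print(f"accuracy={acc:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```

Note how the heavy class imbalance typical of continuous monitoring (far more ADL windows than falls) lets accuracy remain high even when sensitivity drops, which is why sensitivity and specificity are reported separately.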
Many studies report high values for these metrics, frequently exceeding 95% or even 99% [6,79,96]. However, a quantitative meta-analysis, such as pooled sensitivity/specificity, was not performed in this review due to the high heterogeneity of the included studies. The wide variation in sensor modalities (wearable vs. ambient), experimental protocols (scripted ADLs vs. continuous monitoring), and study populations (young actors vs. older adults) precludes a valid statistical comparison of aggregate performance metrics.
Nevertheless, as noted by [81], performance can be highly dataset-dependent, and models that perform well on one dataset may not generalize to others. This dependency indicates a risk of overfitting, where deep learning models may memorize background noise or subject kinematics rather than learning generalized fall features. Public benchmarks like SisFall and UP-Fall are invaluable, yet few studies perform cross-dataset validation (training on one dataset and testing on another) to demonstrate model robustness. The use of publicly available datasets like SisFall, URFD, and MobiFall is a positive trend that facilitates more direct comparison between different algorithmic approaches [38,97,98]. Validation often involves cross-validation techniques to ensure that the model’s performance is not an artifact of a particular train/test split [99].
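The generalization gap that motivates cross-dataset validation can be demonstrated on synthetic data. The two "datasets" below are invented stand-ins whose fall dynamics differ, loosely mimicking protocol differences between benchmarks; no real benchmark data is used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_dataset(n, impact_mean):
    """Synthetic stand-in for a fall dataset with one 'peak magnitude'
    feature. Different impact_mean values mimic differing fall protocols."""
    adl = rng.normal(1.2, 0.1, (n, 1))            # quiet ADL windows
    fall = rng.normal(impact_mean, 0.3, (n, 1))   # impact-peak windows
    X = np.vstack([adl, fall])
    y = np.array([0] * n + [1] * n)
    return X, y

# Two datasets whose simulated falls have different impact dynamics.
X_a, y_a = make_dataset(200, impact_mean=3.0)
X_b, y_b = make_dataset(200, impact_mean=2.2)

clf = LogisticRegression().fit(X_a, y_a)
print(f"within-dataset accuracy: {clf.score(X_a, y_a):.2f}")
print(f"cross-dataset  accuracy: {clf.score(X_b, y_b):.2f}")
```

The model fit on dataset A places its decision boundary according to A's fall dynamics, so softer falls in dataset B slip below it: within-dataset accuracy overstates real-world robustness, which is exactly the effect cross-dataset validation is designed to expose.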

3.4. From Detection to Prediction: Assessing and Anticipating Falls

While the majority of the literature focuses on the reactive detection of fall events after they have occurred, a growing sub-field is dedicated to the proactive prediction and prevention of falls. This paradigm shift involves two primary approaches: long-term fall risk assessment and short-term, pre-impact fall prediction. This distinction is visualized in Figure 8, which maps the spectrum of fall management technologies against the timeline of a fall event, distinguishing between long-range risk assessment, immediate pre-impact prediction, and post-fall detection.

3.4.1. Fall Risk Assessment

Fall risk assessment aims to identify individuals who are at a high risk of falling in the future, allowing for timely clinical or lifestyle interventions. This approach moves away from event detection and towards continuous, passive monitoring of physiological and behavioral biomarkers. Several studies use wearable sensors to analyze gait parameters extracted from daily-life activities. Parameters such as gait speed, stride time, and trunk stability have been identified as important indicators of fall risk [100]. The authors in [93] employed DL models on trunk accelerometry data to assess fall risk, while Ref. [101] used fully convolutional neural networks with transfer learning on smartphone sensor data for the same purpose.
Other risk prediction models integrate a wider range of data sources. The authors in [95] combined geriatric assessments (Activities of Daily Living scores), gait measurements, and fall history to predict 6-month fall risk with good accuracy. Similarly, Ref. [102] demonstrated that ML algorithms applied to statewide electronic health records could effectively identify older adults at a higher risk of falling. In [103], the authors developed a simplified decision-tree algorithm using easily measurable clinical predictors. These models provide a holistic view of an individual’s risk profile. However, some reviews caution that while sensing technology shows promise, many clinical tests of balance, when used in isolation, demonstrate low diagnostic accuracy for predicting future falls [104,105]. The development of validated, multimodal prediction tools remains an active area of research [106,107]. The Oldfry device, for example, is a technological tool designed to detect frailty and assess fall risk [94].

3.4.2. Pre-Impact Fall Prediction

Pre-impact fall prediction represents an immediate form of prevention, aiming to detect a fall while it is in progress but before the body makes contact with the ground. A successful pre-impact detection could, in theory, trigger a protective device, such as a wearable airbag, to mitigate injury. This requires an algorithm with low latency and high accuracy to make a classification within a fraction of a second after the loss of balance.
This challenging task is addressed by several studies, almost exclusively using wearable inertial sensors and DL models. Researchers [17] developed a hybrid ConvLSTM model to classify the “pre-impact fall” stage, achieving high sensitivity and a latency of just over 1 millisecond. Similarly, Ref. [18] presented a system that could detect a fall within 0.5 s of its initiation, providing a sufficient lead time for a protective response. Studies [108,109] also proposed DL frameworks that classify movement into non-fall, pre-fall, and fall stages. The feasibility of implementing these computationally intensive models on low-power microcontrollers for real-time execution is a key research question, with studies like [92] analyzing the trade-offs between accuracy, inference speed, and energy consumption. Other work has focused on detecting “near-falls,” or stumbles, as key indicators of balance impairment and future fall risk [24]. The ability to accurately and rapidly predict an imminent fall remains one of the most advanced and impactful areas of fall detection research.
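The sliding-window, low-latency inference loop underlying pre-impact prediction can be sketched as follows. The scoring model, sampling rate, window sizes, and thresholds are all hypothetical, and a toy deviation-based score stands in for a trained classifier:

```python
import numpy as np

def stream_classify(signal, model, window=40, step=5, thresh=0.5):
    """Slide a short window over an accelerometer stream and raise a
    pre-impact alarm the first time the model's fall score crosses thresh.
    `model` is any callable mapping a window to a score in [0, 1]."""
    for start in range(0, len(signal) - window + 1, step):
        score = model(signal[start : start + window])
        if score >= thresh:
            return start + window   # sample index at which the alarm fires
    return None

# Toy stand-in for a trained predictor: the score grows with the window's
# peak deviation from 1 g, a crude loss-of-balance proxy.
toy_model = lambda w: min(1.0, abs(w - 1.0).max())

fs = 100                                      # 100 Hz sampling (illustrative)
sig = np.ones(300)
sig[200:] = 1.0 + np.linspace(0, 2.0, 100)    # accelerating descent phase
alarm_idx = stream_classify(sig, toy_model)
impact_idx = 299                              # assume impact at stream end
lead_time_ms = (impact_idx - alarm_idx) / fs * 1000
print(f"alarm at sample {alarm_idx}, lead time = {lead_time_ms:.0f} ms")
```

The key design constraint the sketch exposes is the interplay between window length, step size, and detection threshold: each trades classification evidence against the lead time available to trigger a protective device before impact.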

4. Discussion and Analysis

The thematic synthesis of the 130 reviewed studies reveals a dynamic and rapidly evolving field of research dedicated to fall detection and prevention in older adults, as evidenced by the accelerating publication rate over the last decade (Figure 1). The body of literature demonstrates a trajectory of increasing technological and algorithmic complexity. This progression moves from single-sensor, threshold-based wearable devices toward multi-modal, environmentally integrated systems powered by advanced DL architectures. This discussion critically analyzes the dominant trends, evaluates the strength of the reported performance claims, and identifies persistent challenges and research gaps.
A primary trend is the diversification of sensor modalities beyond an exclusive reliance on wearable devices. While wearables, particularly those incorporating IMUs, remain the most common platform [23,110], their fundamental limitation is user adherence. In response, a substantial portion of the literature explores ambient sensing technologies that offer passive, non-intrusive monitoring. Vision-based systems, which use RGB, RGB-D, and thermal cameras, have shown high performance by observing human posture and movement [5,43,49]. However, privacy concerns remain a considerable barrier to their widespread adoption. Consequently, radio-frequency technologies such as radar and WiFi have emerged as compelling alternatives, capable of detecting motion through walls and in varied lighting without collecting personally identifiable images [58,60,61].
However, the deployment of vision-based systems in private spaces, such as bathrooms where falls frequently occur, raises critical ethical and legal concerns. While technical solutions such as depth sensing and thermal imaging obscure facial features, adherence to regulations such as the General Data Protection Regulation (GDPR) requires more than anonymization: it demands explicit informed consent and transparent data-processing protocols. Future frameworks must ensure that video data are processed at the edge (on-device) so that sensitive footage never leaves the user’s local network. Novel approaches using LiDAR, smart flooring, and even floor vibrations further underscore the innovative drive toward creating effective yet acceptable monitoring solutions [62,63,67].
Concurrent with this hardware evolution is the shift in computational models. The literature traces a path from simple threshold-based algorithms to classical ML and, most recently, to the widespread adoption of DL [12]. DL models, including CNNs, LSTMs, and Transformers, now dominate the field, largely because of their ability to learn relevant features from raw sensor data, obviating the need for manual feature engineering [13]. This has enabled the development of end-to-end systems that report high performance metrics. Despite these high metrics, the ‘black-box’ nature of DL models poses a barrier to clinical adoption. Clinicians require interpretability to understand why a model classified a particular movement as a fall or a high-risk event. The current literature lacks sufficient focus on Explainable AI (XAI) approaches, such as SHAP (SHapley Additive exPlanations), which could visualize which gait parameters or image features triggered an alert, thereby fostering trust among medical practitioners and users.
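The kind of feature attribution SHAP provides can be approximated, for intuition, by permutation importance: shuffle one input feature and measure the resulting drop in accuracy. The sketch below is a generic illustration in that spirit, not the SHAP algorithm itself; the toy "model" and features are assumptions made for the example.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled.

    predict: callable mapping an (n, d) array to n predicted labels.
    Larger drops indicate features the model relies on more heavily,
    a coarse stand-in for SHAP-style attribution.
    """
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-label relationship
            drops[j] += base - np.mean(predict(Xp) == y)
    return drops / n_repeats

# Toy example: the label depends only on feature 0 (think: peak
# acceleration), while feature 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda A: (A[:, 0] > 0).astype(int)  # "model" using feature 0 only
imp = permutation_importance(predict, X, y)
# imp[0] is large; imp[1] is exactly 0 because the model ignores feature 1.
```

A clinician-facing report built on such attributions could state, for example, that an alert was driven chiefly by peak acceleration rather than by posture features, which is precisely the transparency the XAI literature calls for.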
However, the performance claims within the literature warrant critical examination. As summarized in Table 3, many studies report accuracy, sensitivity, and specificity values exceeding 99% [6,53,79]. While notable, these results are almost universally derived from experiments using simulated falls performed by healthy, and often young, volunteers in controlled laboratory settings. Furthermore, such near-perfect scores often suggest a lack of subject-independent cross-validation, meaning the models may be overfitting to the particular movement patterns of the test subjects rather than learning generalized fall features. An identified gap is the lack of validation with real-world, unanticipated falls from the target elderly population. The few studies that do include such data report more modest but realistic performance figures, highlighting the discrepancy between simulated and actual fall dynamics [4,43]. This suggests that the reported near-perfect accuracy may not be generalizable to real-world conditions, where the variability of human movement and environmental contexts is far greater. The dataset-dependent nature of model performance, as emphasized by [81], further complicates the comparison of results across studies.
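Subject-independent evaluation can be made concrete with leave-one-subject-out (LOSO) cross-validation, in which every window from the held-out subject is excluded from training. The sketch below uses a deliberately simple nearest-centroid classifier and synthetic features; both are illustrative assumptions, not the pipeline of any reviewed study.

```python
import numpy as np

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out accuracy with a nearest-centroid classifier.

    X: (n, d) feature windows; y: binary labels (0 = ADL, 1 = fall);
    subjects: subject id per window. Each fold trains on all subjects
    except one and tests on that subject, so the reported accuracy
    reflects generalization to unseen people rather than memorized
    per-subject movement patterns.
    """
    accs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        c0 = X[train & (y == 0)].mean(axis=0)  # centroid of ADL windows
        c1 = X[train & (y == 1)].mean(axis=0)  # centroid of fall windows
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        accs.append(np.mean((d1 < d0).astype(int) == y[test]))
    return float(np.mean(accs))

# Synthetic data: 4 subjects, well-separated fall vs. ADL feature clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 3)), rng.normal(3, 0.3, (40, 3))])
y = np.array([0] * 40 + [1] * 40)
subjects = np.tile(np.repeat(np.arange(4), 10), 2)
acc = loso_accuracy(X, y, subjects)
```

The gap between a random-split score and a LOSO score on the same data is a quick diagnostic for the subject-level overfitting discussed above.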
The movement from reactive detection to proactive prediction is another key finding. The development of fall risk assessment models using continuous monitoring data from wearables or periodic clinical assessments offers a pathway to prevention [95,102]. This approach integrates gerontological knowledge with technological monitoring. Even more ambitious are the pre-impact detection systems, which aim to anticipate a fall milliseconds before impact [17,18]. While technologically demanding, this research direction holds the promise of transforming fall detection from an alert system into an active injury prevention tool. The primary challenges in this domain are achieving the requisite low latency for real-time intervention while managing the high computational and energy costs of the underlying DL models on wearable devices [92].
Finally, the practical implementation of these systems at scale is a recurring theme. The integration of IoT architectures is frequently proposed to create connected health ecosystems that can monitor numerous individuals and seamlessly alert caregivers [3,8]. This necessitates a focus on device-type invariance [111], energy efficiency [33], and the use of edge or fog computing to process data locally, reducing latency and preserving privacy [25,112].
Furthermore, while multimodal fusion offers superior accuracy, it introduces practical limitations often overlooked in performance-centric studies. These include the high cost of instrumenting homes with ambient sensors, the technical difficulty of synchronizing time-series data from heterogeneous sources, such as aligning video frames with accelerometer timestamps, and the increased energy consumption required for continuous data transmission. To synthesize these implementation requirements, Figure 9 depicts a generalized architecture for a modern, scalable fall detection system.
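One common way to handle the alignment problem just described is nearest-timestamp joining within a tolerance. The following sketch uses pandas `merge_asof` under the assumption that both streams carry comparable millisecond timestamps; the 100 Hz/30 fps rates and the 5 ms tolerance are illustrative.

```python
import pandas as pd

# Accelerometer sampled at 100 Hz and video frames at 30 fps,
# both timestamped in milliseconds on a shared clock.
accel = pd.DataFrame({
    "t_ms": range(0, 1000, 10),          # 100 samples over one second
    "acc_mag": [1.0] * 100,
})
frames = pd.DataFrame({
    "t_ms": [round(i * 1000 / 30) for i in range(30)],  # 30 frames
    "frame_id": range(30),
})

# For each video frame, attach the nearest accelerometer sample,
# provided it lies within 5 ms; both inputs must be sorted on the key.
aligned = pd.merge_asof(
    frames.sort_values("t_ms"),
    accel.sort_values("t_ms"),
    on="t_ms",
    direction="nearest",
    tolerance=5,
)
```

In a real deployment the harder part is establishing the shared clock in the first place (for example via NTP or periodic synchronization events), after which a tolerance-bounded join like this keeps misaligned samples from silently corrupting the fused feature windows.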
Table 3. Comparison of Reported Accuracy by Sensor Modality.
Sensor Modality | Reported Accuracy Range (%) | Key Algorithms | Representative Studies
Wearable (IMU/Accelerometer) | 90.3–99.9 | ML (SVM, Random Forest), DL (CNN, LSTM, Transformers) | [6,22,99]
Vision-based (RGB/RGB-D) | 79.6–99.98 | DL (CNN, Transformers, YOLO), ML (SVM) | [79,113,114]
Ambient (Radar/RF) | 90 (Recall)–99.77 | DL (CNN, LSTM), Signal Processing | [52,53,55]
Ambient (Other) | 93 (Accuracy)–99 | ML (KNN, FSM), Pattern Recognition | [5,62,65]
Hybrid/Multi-modal | 100 (in lab) | DL (RNN, CNN), Sensor Fusion | [69,70,72]

5. Conclusions and Future Directions

This systematic review has synthesized and analyzed 130 academic papers on fall detection and prevention technologies for older adults. The findings demonstrate a field characterized by noteworthy innovation in both sensor technology and computational methods. The research landscape has matured from foundational wearable systems to an ecosystem of ambient, vision-based, and hybrid solutions. Concurrently, the analytical core of these systems has transitioned from simple threshold-based logic to advanced ML and DL architectures, which now represent the state of the art. The literature reports high levels of accuracy for these systems, often exceeding 95%, underscoring the technical feasibility of automated fall detection.
The analysis reveals three primary conclusions. First, there is no single superior sensor modality. Instead, a trade-off exists between the continuous, person-centric monitoring of wearables and the non-intrusive, privacy-preserving nature of ambient systems like radar and LiDAR. The choice of technology is context-dependent, balancing user acceptance, environmental constraints, and desired performance. Second, DL has become the dominant computational paradigm, with hybrid models like CNN-LSTMs and Transformers showing better performance in classifying complex human movements. Third, a promising evolution is underway from reactive, post-fall detection to proactive fall risk assessment and pre-impact prediction, signaling a shift toward preventative health technology. It should be noted that while this review covers literature up to early 2025, the rapid pace of AI development implies that recent preprints and emerging trials may already be addressing some identified gaps.
Despite these advancements, several gaps and areas for future research have been identified. The most pressing need is for the validation of FDS in real-world environments with the target elderly population. The near-universal reliance on simulated falls in laboratory settings limits the generalizability of the extraordinary accuracy claims found throughout the literature. Our analysis indicates that less than 2% of the reviewed studies validated their systems using naturally occurring falls in the target demographic, leaving a significant gap between laboratory success and real-world reliability. Future studies must prioritize the collection and analysis of data from naturally occurring falls to develop more robust and reliable systems.
Second, further research is required to address the practical challenges of deployment. This includes improving the energy efficiency of wearable devices to extend battery life, reducing the computational footprint of DL models for viable edge computing, and ensuring the seamless integration of these systems into existing healthcare and smart-home infrastructures via scalable IoT frameworks.
Third, the issue of user acceptance and privacy must be thoroughly addressed. While technologies like radar and thermal sensors offer privacy-preserving alternatives to cameras, the long-term adoption of any monitoring system hinges on user comfort, trust, and the perceived benefit. Co-design methodologies involving older adults, caregivers, and clinicians could lead to systems that are not only technologically sound but also user-centered.
Finally, the fusion of data from multiple sources presents another avenue for future work. This includes not only the combination of different environmental and wearable sensors but also the integration of clinical data from electronic health records. Such holistic models could provide a comprehensive, personalized assessment of an individual’s fall risk, enabling targeted interventions and moving the field closer to the ultimate goal of fall prevention. In conclusion, while the technological components of fall detection are well-established, the next phase of research must focus on real-world validation, practical implementation, and the integration of various data sources to create systems that can be effectively and acceptably deployed to improve the safety and independence of older adults.

Author Contributions

M.I.: Investigation, Methodology, Software, Visualization, Writing—Original Draft, Writing—Review and Editing, Conceptualization, Validation, Resources. D.C.G.: Writing—Review and Editing. G.S.: Writing—Review and Editing. G.M.: Conceptualization, Supervision, Funding Acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This study was carried out within the PRIN-PNRR 2022 project “EasyWalk: Intelligent Social Walker for active living” funded by the European Union NextGenerationEU (Progetti di Ricerca di Rilevante Interesse Nazionale—Piano Nazionale di Ripresa e Resilienza).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that the research was conducted without any commercial or financial relationships that could potentially create a conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FDS: Fall Detection System
CNN(s): Convolutional Neural Network(s)
RNN(s): Recurrent Neural Network(s)
IMU(s): Inertial Measurement Unit(s)
WHO: World Health Organization
ML: Machine Learning
DL: Deep Learning
ADL(s): Activities of Daily Living
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
CCA: Canonical Correlation Analysis
LSTM: Long Short-Term Memory
DBN: Deep Belief Network
HVRAE: Hybrid Variational RNN Autoencoder
YOLO: You Only Look Once
RGB(D): Red Green Blue (Depth)
RF: Radio Frequency
CW: Continuous Wave
FMCW: Frequency-Modulated Continuous Wave
LiDAR: Light Detection and Ranging
Radar: Radio Detection and Ranging
UWB: Ultra-Wideband
PIR: Pyroelectric Infrared
RFID: Radio Frequency Identification
FSM: Finite State Machine
SVM: Support Vector Machine
KNN: K-Nearest Neighbors
GRU(s): Gated Recurrent Unit(s)
FAR: False Alarm Rate

Appendix A

Table A1. Summary of Included Studies.
Ref. | Technology or Modality | Algorithm Type | Validation Setting | Key Findings or Reported Accuracy
[53] | Ambient (FMCW Radar) | CNN, Canonical Correlation Analysis (CCA) | Lab/Dataset | Radar micro-Doppler features with CNNs achieve 99.77% test accuracy.
[93] | Wearable (Trunk Accelerometer) | DL (CNN, LSTM, ConvLSTM) | Real-world (Daily Life) | DL, particularly with multi-task learning, effectively assesses fall risk (AUC = 0.75).
[14] | Vision-based | Transformer, CNN | Public Datasets | PCNN–Transformer model accurately recognizes human actions and falls.
[45] | Vision-based (Surveillance Cameras) | ML (SVM, Decision Tree, Random Forest, KNN) | Lab/Dataset | Spatiotemporal method using human skeleton key points achieves 98.5% accuracy.
[49] | Vision-based (Review) | DL (Review) | N/A (Review) | Review of DL-based vision techniques for fall detection.
[115] | IoT, Wearable (Accelerometer, Gyroscope) | TriNet (LSTM, optimized CNN, RNN), Blockchain | Lab (Simulation) | IoT-blockchain TriNet system outperforms existing models in accuracy and security.
[65] | Ambient (Smart Flooring with RFID) | ML (KNN, Random Forest, XGBoost) | Lab (Simulation) | IoT-enabled smart flooring with KNN achieves 99% accuracy in fall identification.
[58] | Ambient (Radar) | Signal Processing (Time-frequency analysis) | Lab (Real Fall Data) | Radar signal processing is effective for elderly fall detection in assisted living.
[51] | Ambient (CW Radar) | Signal Processing (Short-Time Fourier Transform) | Lab (Motion Capture Comparison) | Non-contact radar method using effective acceleration effectively detects falls.
[4] | Wearable (Accelerometer) | ML (SVM) | Real-world (Daily Living) | SVM-based system detected 8 of 10 real-world falls with a low false positive rate.
[27] | Wearable (Accelerometer, Gyroscope, Magnetometer) | Data Fusion | Lab (Simulation) | Waist-worn inertial unit with data fusion achieves excellent accuracy, sensitivity, and specificity.
[92] | Wearable (Accelerometer) | DL (CNN, LSTM, GRU variants) | Lab/Dataset | CNN DENSE model provides best pre-impact detection accuracy (94.70%) with a lead time of 176.91 ms.
[116] | Vision-based (RGB videos) | DL | Lab/Public Dataset | Deep-learning approach for automatic fall detection achieves a mean recall of 0.916.
[23] | Wearable (Review) | ML (Review) | N/A (Review) | Review finds accelerometers effective for fall detection but calls for more research.
[81] | Wearable (Accelerometer) | DL (CNN) | Public Datasets | CNNs can effectively detect falls, but performance is dependent on the training dataset.
[73] | IoT, Wearable (Accelerometer, Gyroscope) | Threshold-based Algorithm | Lab (Simulation) | Algorithm using accelerometer and gyroscope data improves accuracy and reduces false positives.
[75] | Wearable | ML (QSVM, EBT) | Public Datasets | QSVM and EBT algorithms achieve up to 100% fall detection accuracy.
[22] | Wearable (IMU, Barometer) | DL (DNN) | Lab (Simulation) | IMU–barometer design with DNN outperforms traditional ML for slow fall detection (90.33% accuracy).
[97] | Vision-based | DL (Improved YOLOv5s) | Public Dataset (URFD) | Improved YOLOv5s algorithm achieves 97.2% average accuracy for fall detection.
[98] | Wearable (Review) | Data Processing (Review) | N/A (Review) | Review on wearable sensor-based fall risk assessment for older adults.
[24] | Wearable (IMU) | DL (Modified DAG-CNN) | Lab (Simulation) | Modified DAG-CNN algorithm accurately predicts near-falls with over 98% accuracy.
[40] | Vision-based (Camera) | Image Processing | Lab/Prototype | Fall detection system developed using open-source hardware and image tracking.
[61] | Ambient (Review) | Signal/Image Processing (Review) | N/A (Review) | Review on contactless fall detection using radar and RGB-D sensors.
[63] | Ambient (Smart Tiles with Force Sensors, Accelerometers) | Sensor Fusion | Real-world (Living Lab) | Fusion of force sensors and accelerometers under smart tiles improves fall detection accuracy.
[117] | Wearable (BLE) | CNN | Public Dataset | BLE-based fall detection system for nursing homes shows excellent accuracy.
[113] | Vision-based | DL | Public Datasets | Dataset-independent DL model detects falls with 79.6% accuracy.
[60] | Ambient (WiFi signals) | RNN | Lab/Smart Home Env. | WiFi-based, device-free FDS uses RNN to classify human motions and detect falls.
[25] | Wearable (IMU) | Transformer, Threshold-based Algorithm | Lab/Dataset | Edge computing system with Transformer architecture achieves 95.29% accuracy.
[70] | Hybrid (RGB images, accelerometers) | DL (CNN) | Lab/Dataset | Multimodal CNN effectively detects falls using combined vision and wearable sensor data.
[85] | Wearable (Accelerometer) | DL (LSTM) | Lab/Dataset | LSTM model combined with data augmentation effectively detects elderly falls.
[21] | Vision-based (Review) | DL (Review) | N/A (Review) | Systematic review of DL for vision-based HAR and fall detection.
[118] | Wireless Sensor Network | Artificial Neural Network (ANN) | Lab/Controlled Env. | FDS improves fall detection (100% LOS accuracy) and localization accuracy.
[72] | Hybrid (ToF camera, accelerometer, microphone) | Multi-sensor Fusion | Lab/Dataset | Hardware–software framework for reliable fall detection using a multi-sensor approach.
[119] | IoT, Wearable (Accelerometer, Gyroscope) | Sensor Fusion | Lab/Prototype | Wearable IoT device with advanced sensors improves precision of fall detection.
[84] | Vision-based | DL (Mixture of Experts CNN3D) | Public Dataset (UP-Fall) | MoE with CNN3D models achieves 99.67% weighted average F1 score.
[52] | Ambient (CW Doppler Radar) | ML | Lab/Prototype | Radar system detects falls (90% recall) and monitors vital signs (respiration, heartbeat).
[37] | Vision-based | ML (MEWMA-based SVM) | Public Datasets (URFD, FDD) | MEWMA-based SVM effectively detects and classifies falls from human silhouette shape.
[120] | Smartphone (Accelerometer) | DL | Public Dataset (MobiAct) | Smartphone framework detects falls from streaming data and sends alerts.
[64] | Ambient (PIR sensors) | ML (Random Forest, AdaBoost) | Lab (Simulation) | Low-cost motion-based technique using PIR sensors achieves 99% accuracy with Random Forest and AdaBoost.
[106] | Wearable (Accelerometers, Insoles) | Neural Network, Naïve Bayesian, SVM | Clinical/Prospective Study | Neural network with dual-task gait data from multiple sensors best predicts fall risk.
[13] | Vision/Sensor-based (Review) | DL (Review) | N/A (Review) | Review concludes 3D CNN and LSTM with CNN perform best for fall detection.
[18] | Wearable | DL (CNN-LSTM Ensemble) | Public Datasets (SisFall, KFall) | Pre-impact FDS detects a fall within 0.5 s of initiation with 99.24% sensitivity.
[38] | Smartphone (Accelerometer) | Deep Belief Network (DBN) | Public Datasets (TFall, MobiFall) | Smartphone-based framework using DBN achieves 97.56% sensitivity and 97.03% specificity.
[54] | Ambient (mmWave Radar) | Hybrid Variational RNN Autoencoder (HVRAE) | Lab/Apartment Testbed | mmFall system using mmWave radar and HVRAE achieves a 98% detection rate.
[89] | Vision-based (Surveillance Cameras) | Transformer Network | Dataset/Synthetic | Transformer network with synthetic data improves fall detection, outperforming LSTM networks.
[108] | Wearable (Accelerometer, Gyroscope) | DL (CNN, LSTM) | Public Datasets (SisFall, UMAFall) | Class ensemble framework achieves better accuracy in classifying non-fall, pre-fall, and fall.
[50] | Vision-based | DL (CGNS-YOLO) | Public Datasets (Multicam, Le2i) | Lightweight CGNS-YOLO approach improves fall detection accuracy and reduces model size.
[99] | Wearable (Accelerometer) | ML (SVM, k-NN, Random Forest, ANN) | Public Dataset (SisFall) | Optimal window size for fall detection is 3 s; SVM and Random Forest achieve >99% accuracy.
[96] | Wearable (Shimmer devices) | Compressive Sensing | Lab (Simulation) | System using compressive sensing achieves up to 99.8% accuracy in detecting falls and ADLs.
[121] | Wearable | Compressive Sensing | Lab/Prototype | Proposes a hardware framework for fall detection integrating compressed sensing.
[28] | Wearable (Wrist Accelerometer) | Threshold-based, ML | Public Datasets (Simulated) | On-wrist accelerometer can improve fall detection, with rule-based systems being promising.
[42] | Vision-based (RGB-D Cameras) | DL (Multi-stream CNN) | Public Datasets | Weighted multi-stream CNN exploits RGB, depth, and motion data for accurate detection.
[55] | Ambient (mmWave Radar) | DL (LSTM) | Lab (Simulation) | Technique using 1D point clouds and Doppler velocity with LSTM achieves 99.50% accuracy.
[122] | Digital Tech (Review) | N/A | N/A (Review) | Review on emerging digital technologies for fall detection in aged care.
[123] | Technologies (Review) | N/A | N/A (Review) | Scoping review finds current fall detection technologies have low Technology Readiness Levels.
[69] | Wearable (IMU-L sensor), Vision (RGB Camera) | DL (RNN, CNN) | Lab/Prototype | Double-check method using IMU-L and RGB camera achieves 100% fall detection accuracy.
[79] | Vision-based (Video Surveillance) | DL (CNN) | Lab/Dataset | CNN applied to video frames achieves 99.98% average accuracy for fall detection.
[107] | Clinical Assessment/Questionnaire-based | Logistic Regression, Nomogram | Database/Longitudinal Study (CHARLS) | Validated fall risk prediction model for Chinese older individuals based on the CHARLS database.
[33] | Wearable | DL (FD-DNN: CNN-LSTM) | Lab/Dataset | Energy-efficient sensor with FD-DNN achieves 99.17% fall detection accuracy.
[44] | Vision-based | Neural Network (Multilayer Perceptron) | Public Dataset (URFD) | Visual-based approach analyzing motion and shape achieves 99.60% detection rate.
[87] | Wearable (Accelerometer) | RNN | Public Dataset (SisFall) | RNN models can be implemented in low-power microcontrollers for real-time fall detection.
[57] | Ambient (UWB Radar) | DL (ConvLSTM) | Lab/Testbed | UWB radar with ConvLSTM achieves good sensitivity for room-level fall detection.
[103] | Clinical Assessments/Interviews | Decision Tree | Community Cohort Study | Simplified decision-tree algorithm outperforms logistic regression in predicting falls.
[94] | Device-based Assessment | N/A | Real-world (Residential Care) | Oldfry device effectively detects frailty and fall risk in older adults.
[101] | Smartphone (Inertial Sensors) | DL (FCNN), Transfer Learning | Public Dataset | FCNNs with transfer learning effectively classify fall risk (AUC 93.3%).
[36] | Multimodal Dataset | N/A | Public Dataset Creation | Presents the UP-Fall Detection Dataset, a multimodal resource for comparing FDS.
[32] | Vision-based (Home Camera) | ML (Kalman filtering, Optical Flow) | Lab/Video Dataset | Low-cost vision-based detector for smart homes achieves a detection ratio >96%.
[95] | Clinical Assessment/Electronic Walkway Systems | ML | Real-world (Senior Care Facilities) | Geriatric assessments, GAITRite data, and fall history predict 6-month fall risk (AUC 0.80).
[109] | Wearable | DL (Ensemble CNN-RNN) | Public Dataset (SisFall) | Ensemble deep neural network effectively predicts and detects falls (98% accuracy for falls).
[26] | Wearable (IMU sensors) | Kinematic Model | Public Dataset | Mathematical model predicts falls using human body kinematics from three IMU sensors.
[20] | AI-IoT (Review) | AI, IoT (Review) | N/A (Review) | Review concludes AI-IoT technology provides the best solution for real-time monitoring.
[8] | IoT, Mobile | ML (Boosted Decision Trees) | Lab (Simulation) | Scalable architecture for monitoring older adults, using Boosted Decision Trees for classification.
[1] | Literature Review | N/A | N/A (Review) | Review stressing the need for low-cost, early fall detection mechanisms.
[76] | Wearable (Accelerometer) | ML | Public Datasets | Proposes a fall detection system using cross-disciplinary time-series features.
[111] | IoT, Wearable | N/A | Lab/Prototype | Device-type-invariant FDS achieves 99.7% accuracy, 96.3% sensitivity.
[124] | Sensor-based (Review) | N/A | N/A (Review) | Review of sensor-based systems, noting single-sensor accuracy and multi-sensor efficiency.
[48] | Vision-based | DL (CNN) | Public Datasets | Vision-based method using CNNs on optical flow images achieves state-of-the-art results.
[15] | Vision-based (Videos) | Transformer | Lab/Video Dataset | Transformer-based model effectively recognizes falls in videos.
[104] | Clinical Tests (Review) | N/A | N/A (Review) | Systematic review finds clinical tests alone (FRT, SLST, POMA) have low accuracy for fall prediction.
[88] | Wearable | DL (CNN-LSTM with attention) | Lab/Prototype | AI-based wearable device effectively detects and prevents falls in the elderly.
[125] | Vision-based | DL (Transfer Learning) | Lab/Dataset | Fall detection methodology using deep neural networks achieves 98.15% test accuracy.
[31] | Wearable (IMU, Barometer) | Data Fusion Algorithm | Lab (Simulation) | Waist-mounted device using four sensors reaches 100% sensitivity.
[67] | Ambient (LiDAR) | FSM | Lab/Prototype | Low-cost LiDAR system uses FSM for privacy-preserving, interpretable fall detection.
[9] | FDS (Review) | N/A | N/A (Review) | Systematic review on FDS, emphasizing potential of new technologies like DL and IoT.
[29] | Wearable (Wrist: Accelerometer, Gyroscope, Magnetometer) | ML | Lab (Simulation) | Wrist-worn device with movement decomposition and ML achieves 99.0% accuracy.
[68] | Data Fusion (Review) | N/A | N/A (Review) | Survey on data fusion approaches for fall detection.
[19] | FDS (Review) | N/A | N/A (Review) | Systematic review on fall detection and prevention technologies.
[100] | Wearable (Review) | N/A | N/A (Review) | Survey identifies key gait parameters from inertial sensors for frailty and fall risk detection.
[16] | Wearable (Thigh Accelerometer) | ML (NLSVM) | Lab/Public Dataset (MobiFall) | Patient-specific system predicts and detects falls with >97% sensitivity and >99% specificity.
[59] | Ambient (Radar) | ML (NLSVM) | Lab/Testbed | Radar-based method with time-frequency analysis and CNN achieves 98.37% precision.
[6] | Wearable | ML | Public Dataset | Low-cost ML-based algorithm for wearables achieves >99.9% accuracy.
[46] | Vision-based | DL (TD-CNN-LSTM, 1D-CNN) | Lab/Dataset | Pose-estimation-based solution achieves high accuracy (98% with 1D-CNN) for fall detection.
[90] | Vision-based | DL (YOLOv8, Time-Space Transformers) | Public Datasets | Hybrid approach using YOLOv8 and Transformers achieves high mAP on benchmark datasets.
[2] | Wearable | N/A | Lab/Prototype | Wearable device communicates with a cell phone to alert contacts after a fall.
[112] | Fog-based AAL | DL | Lab (Simulation) | Fog-based DL system provides timely and accurate fall detection (98.75% accuracy).
[126] | Ambient (Force Plate) | DL (One-One-One DNN) | Public Dataset | One-One-One DNN model predicts fall risk from force-plate data with 99.9% accuracy.
[77] | Smartphone (Accelerometer) | ML (Multiple Kernel Learning SVM) | Lab/Prototype | FallDroid system on smartphones detects falls with better accuracy (>97% at waist).
[41] | Vision-based (8-camera system) | ML | Lab/Prototype | House-wide FDS using ML can detect falls at 60 m and beyond.
[10] | Sensor Tech (Review) | N/A | N/A (Review) | Wide-ranging review of sensor technologies for fall detection systems.
[127] | Vision-based | DL (CABMNet) | Lab/Dataset | Adaptive two-stage DL network (CABMNet) optimizes spatial and temporal analysis.
[43] | Vision-based (Microsoft Kinect) | ML (Ensemble of Decision Trees) | Real-world (Home Deployment) | Two-stage fall detection system using Kinect significantly improves performance in real homes.
[35] | Dataset | N/A | Lab/Dataset Creation (SisFall) | Presents the SisFall dataset of falls and ADLs from wearable sensors.
[7] | Wearable (Accelerometer) | Non-linear Classification, Kalman Filter | Public Dataset (SisFall)/Lab | Fall detection methodology tested on the SisFall dataset achieves 99.4% accuracy.
[105] | Sensor Tech (Review) | N/A | N/A (Review) | Review on novel sensing technology for fall risk assessment.
[78] | FDS (Review) | DL (Review) | N/A (Review) | Comprehensive review on DL-based fall detection.
[110] | Wearable | Fractal Dynamics, Linear Discriminant Analysis | Lab/Hardware Validation | Wearable FDS using fractal dynamics achieves 99.38% fall detection accuracy.
[5] | Ambient (Low-resolution Thermal Sensors) | RNN (Bi-LSTM) | Lab/Test Subjects | Bi-LSTM approach with thermal sensors achieves 93% accuracy while preserving privacy.
[82] | Wearable | DL (CNN) | Lab/Hardware | Ultra-low-power wearable sensor uses CNN and FPGA for fall detection.
[114] | Vision-based | DL (Modified NASNet), Transfer Learning | Lab/Dataset | Modified NASNet model with LBP features and transfer learning achieves 99% accuracy.
[12] | FDS (Review) | ML (Review) | N/A (Review) | Systematic review of latest research trends in fall detection using ML.
[80] | IoT, Vision-based | DL (ODCNN) | Public Datasets | IoT-enabled model using an optimal DCNN achieves >99% accuracy on two datasets.
[66] | Ambient (LiDAR) | N/A | Lab/Prototype | LiDAR technology effectively detects falls while maintaining user privacy.
[39] | Vision-based | ML (MLP, Random Forest) | Public Datasets | Dual-Channel Feature Integration approach achieves >96% accuracy on two datasets.
[11] | Sensor Fusion (Review) | N/A | N/A (Review) | Literature survey on elderly fall detection using sensor fusion.
[56] | Ambient (mmWave Radar) | ML (LightGBM) | Lab/Prototype | Fusion of radar imaging and trajectory features predicts fall risk with 93.36% accuracy.
[128] | Literature Review | N/A | N/A (Review) | Review of fall detection technologies, noting challenges and trends.
[86] | Smartphone (Mobile Sensors) | DL (GRU) | Lab/Dataset | GRU architecture outperforms other models for fall detection.
[74] | IoT, Big Data | ML (Decision Trees) | Lab/Prototype | IoT system with a decision-tree-based Big Data model effectively detects falls.
[3] | IoT, Wearable (Accelerometer) | Ensemble ML | Lab/Prototype | IoT E-Fall system achieves over 94% accuracy, precision, sensitivity, and specificity.
[102] | Electronic Health Records (EHR) | ML | Database (EHR) | EHR-based fall risk predictive tool accurately identifies elders at higher risk of falls.
[34] | Wearable | DL (Denoising LSTM-based CVAE) | Lab/Dataset | Unsupervised CVAE model detects falls and is suitable for integration into wearable devices.
[30] | Wearable (Wrist Sensor) | DL (ANN) | Lab/Prototype | Artificial-neural-network-based method accurately detects falls from wrist sensors.
[17] | Wearable (Inertial Sensors) | DL (Hybrid ConvLSTM) | Public Dataset | Hybrid ConvLSTM model accurately predicts pre-impact falls in older people.
[91] | FDS | Federated Learning, Extreme Learning Machine (Fed-ELM) | Lab (Simulation) | Fed-ELM algorithm improves accuracy for both young and elderly individuals (>96% accuracy).
[83] | Wearable | DL (TinyCNN) | Lab/Prototype | TinyCNN with two-stage feature extraction offers real-time performance for fall detection.
[129] | Vision-based (Skeleton) | DL (TCN, Transformer Encoder) | Lab/Prototype | Real-time skeleton-based algorithm (TCNTE) achieves high accuracy for fall detection.
[130] | Vision-based | DL (YOLO, Pose Estimation) | Lab/Testbed | DL model detects and classifies elderly abnormal behaviors, including falls.
[71] | Hybrid (Video, Audio) | Masked Mamba, Cross-Attention | Public Datasets | Fall-Mamba model fuses video and audio data, achieving 99.63% accuracy.
[47] | Vision-based | DL (OpenPose) | Lab/Public Dataset | DL algorithm based on bone key points achieves 99.4% accuracy.
[62] | Ambient (Floor Vibration, Sound) | Pattern Recognition | Lab (Simulation) | System using floor vibrations and sound detects falls with 97.5% sensitivity.

References

1. Mushtaq, R.; Rafique, S.; Iqbal, M.W.; Ruk, S.A. Fall detection in elderly people. Bull. Bus. Econ. (BBE) 2024, 13, 228–236.
2. Santiago, J.; Cotto, E.; Jaimes, L.G.; Vergara-Laurens, I. Fall detection system for the elderly. In Proceedings of the 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC); IEEE: Piscataway, NJ, USA, 2017; pp. 1–4.
3. Yacchirema, D.; de Puga, J.S.; Palau, C.; Esteve, M. Fall detection system for elderly people using IoT and ensemble machine learning algorithm. Pers. Ubiquitous Comput. 2019, 23, 801–817.
4. Aziz, O.; Klenk, J.; Schwickert, L.; Chiari, L.; Becker, C.; Park, E.J.; Mori, G.; Robinovitch, S.N. Validation of accuracy of SVM-based fall detection system using real-world fall and non-fall datasets. PLoS ONE 2017, 12, e0180318.
5. Taramasco, C.; Rodenas, T.; Martinez, F.; Fuentes, P.; Munoz, R.; Olivares, R.; De Albuquerque, V.H.C.; Demongeot, J. A novel monitoring system for fall detection in older people. IEEE Access 2018, 6, 43563–43574.
6. Saleh, M.; Jeannès, R.L.B. Elderly fall detection using wearable sensors: A low cost highly accurate algorithm. IEEE Sens. J. 2019, 19, 3156–3164.
7. Sucerquia, A.; López, J.D.; Vargas-Bonilla, J.F. Real-life/real-time elderly fall detection with a triaxial accelerometer. Sensors 2018, 18, 1101.
8. Mrozek, D.; Koczur, A.; Małysiak-Mrozek, B. Fall detection in older adults with mobile IoT devices and machine learning in the cloud and on the edge. Inf. Sci. 2020, 537, 132–147.
9. Purwar, A.; Chawla, I. A systematic review on fall detection systems for elderly healthcare. Multimed. Tools Appl. 2024, 83, 43277–43302.
10. Singh, A.; Rehman, S.U.; Yongchareon, S.; Chong, P.H.J. Sensor technologies for fall detection systems: A review. IEEE Sens. J. 2020, 20, 6889–6919.
11. Wang, X.; Ellul, J.; Azzopardi, G. Elderly fall detection systems: A literature survey. Front. Robot. AI 2020, 7, 71.
12. Usmani, S.; Saboor, A.; Haris, M.; Khan, M.A.; Park, H. Latest research trends in fall detection and prevention using machine learning: A systematic review. Sensors 2021, 21, 5134.
  13. Islam, M.M.; Tayan, O.; Islam, M.R.; Islam, M.S.; Nooruddin, S.; Kabir, M.N.; Islam, M.R. Deep learning based systems developed for fall detection: A review. IEEE Access 2020, 8, 166117–166137. [Google Scholar] [CrossRef]
  14. Al-qaness, M.A.; Dahou, A.; Abd Elaziz, M.; Helmi, A.M. Human activity recognition and fall detection using convolutional neural network and transformer-based architecture. Biomed. Signal Process. Control 2024, 95, 106412. [Google Scholar] [CrossRef]
  15. Núñez-Marcos, A.; Arganda-Carreras, I. Transformer-based fall detection in videos. Eng. Appl. Artif. Intell. 2024, 132, 107937. [Google Scholar] [CrossRef]
  16. Saadeh, W.; Butt, S.A.; Altaf, M.A.B. A patient-specific single sensor IoT-based wearable fall prediction and detection system. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 995–1003. [Google Scholar] [CrossRef]
  17. Yu, X.; Qiu, H.; Xiong, S. A novel hybrid deep neural network to predict pre-impact fall for older people based on wearable inertial sensors. Front. Bioeng. Biotechnol. 2020, 8, 63. [Google Scholar] [CrossRef]
  18. Jain, R.; Semwal, V.B. A novel feature extraction method for preimpact fall detection system using deep learning and wearable sensors. IEEE Sens. J. 2022, 22, 22943–22951. [Google Scholar] [CrossRef]
  19. Ren, L.; Peng, Y. Research of fall detection and fall prevention technologies: A systematic review. IEEE Access 2019, 7, 77702–77722. [Google Scholar] [CrossRef]
  20. Mohan, D.; Al-Hamid, D.Z.; Chong, P.H.J.; Sudheera, K.L.K.; Gutierrez, J.; Chan, H.C.; Li, H. Artificial intelligence and IoT in elderly fall prevention: A review. IEEE Sens. J. 2024, 24, 4181–4198. [Google Scholar] [CrossRef]
  21. Gaya-Morey, F.X.; Manresa-Yee, C.; Buades-Rubio, J.M. Deep learning for computer vision based activity recognition and fall detection of the elderly: A systematic review. arXiv 2024, arXiv:2401.11790. [Google Scholar] [CrossRef]
  22. Chen, X.; Jiang, S.; Lo, B. Subject-independent slow fall detection with wearable sensors via deep learning. In Proceedings of the 2020 IEEE SENSORS; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar]
  23. Bet, P.; Castro, P.C.; Ponti, M.A. Fall detection and fall risk assessment in older person using wearable sensors: A systematic review. Int. J. Med. Inform. 2019, 130, 103946. [Google Scholar] [CrossRef]
  24. Choi, A.; Kim, T.H.; Yuhai, O.; Jeong, S.; Kim, K.; Kim, H.; Mun, J.H. Deep learning-based near-fall detection algorithm for fall risk monitoring system using a single inertial measurement unit. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2385–2394. [Google Scholar] [CrossRef]
  25. Fernandez-Bermejo, J.; Martinez-del Rincon, J.; Dorado, J.; Toro, X.D.; Santofimia, M.J.; Lopez, J.C. Edge computing transformers for fall detection in older adults. Int. J. Neural Syst. 2024, 34, 2450026. [Google Scholar] [CrossRef]
  26. Mohammed, S.H.; Fan, Y.; Lv, G.; Liu, S. A Mathematical model for fall detection predication in elderly people. IEEE Sens. J. 2023, 24, 32981–32990. [Google Scholar] [CrossRef]
  27. Pierleoni, P.; Belli, A.; Palma, L.; Pellegrini, M.; Pernini, L.; Valenti, S. A high reliability wearable device for elderly fall detection. IEEE Sens. J. 2015, 15, 4544–4553. [Google Scholar] [CrossRef]
  28. Khojasteh, S.B.; Villar, J.R.; Chira, C.; González, V.M.; De la Cal, E. Improving fall detection using an on-wrist wearable accelerometer. Sensors 2018, 18, 1350. [Google Scholar] [CrossRef] [PubMed]
  29. De Quadros, T.; Lazzaretti, A.E.; Schneider, F.K. A movement decomposition and machine learning-based fall detection system using wrist wearable device. IEEE Sens. J. 2018, 18, 5082–5089. [Google Scholar] [CrossRef]
  30. Yoo, S.; Oh, D. An artificial neural network–based fall detection. Int. J. Eng. Bus. Manag. 2018, 10, 1847979018787905. [Google Scholar] [CrossRef]
  31. Pierleoni, P.; Belli, A.; Maurizi, L.; Palma, L.; Pernini, L.; Paniccia, M.; Valenti, S. A wearable fall detector for elderly people based on AHRS and barometric sensor. IEEE Sens. J. 2016, 16, 6733–6744. [Google Scholar] [CrossRef]
  32. De Miguel, K.; Brunete, A.; Hernando, M.; Gambao, E. Home camera-based fall detection system for the elderly. Sensors 2017, 17, 2864. [Google Scholar] [CrossRef]
  33. Liu, L.; Hou, Y.; He, J.; Lungu, J.; Dong, R. An energy-efficient fall detection method based on FD-DNN for elderly people. Sensors 2020, 20, 4192. [Google Scholar] [CrossRef]
  34. Yi, M.K.; Han, K.; Hwang, S.O. Fall detection of the elderly using denoising LSTM-based convolutional variant autoencoder. IEEE Sens. J. 2024, 24, 18556–18567. [Google Scholar] [CrossRef]
  35. Sucerquia, A.; López, J.D.; Vargas-Bonilla, J.F. SisFall: A fall and movement dataset. Sensors 2017, 17, 198. [Google Scholar] [CrossRef]
  36. Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C. UP-fall detection dataset: A multimodal approach. Sensors 2019, 19, 1988. [Google Scholar] [CrossRef] [PubMed]
  37. Harrou, F.; Zerrouki, N.; Sun, Y.; Houacine, A. Vision-based fall detection system for improving safety of elderly people. IEEE Instrum. Meas. Mag. 2017, 20, 49–55. [Google Scholar] [CrossRef]
  38. Jahanjoo, A.; Naderan, M.; Rashti, M.J. Detection and multi-class classification of falling in elderly people by deep belief network algorithms. J. Ambient Intell. Humaniz. Comput. 2020, 11, 4145–4165. [Google Scholar] [CrossRef]
  39. Wang, B.H.; Yu, J.; Wang, K.; Bao, X.Y.; Mao, K.M. Fall detection based on dual-channel feature integration. IEEE Access 2020, 8, 103443–103453. [Google Scholar] [CrossRef]
  40. Choi, S.; Youm, S. A study on a fall detection monitoring system for falling elderly using open source hardware. Multimed. Tools Appl. 2019, 78, 28423–28434. [Google Scholar] [CrossRef]
  41. Shu, F.; Shu, J. An eight-camera fall detection system using human fall pattern recognition via machine learning by a low-cost android box. Sci. Rep. 2021, 11, 2471. [Google Scholar] [CrossRef]
  42. Khraief, C.; Benzarti, F.; Amiri, H. Elderly fall detection based on multi-stream deep convolutional networks. Multimed. Tools Appl. 2020, 79, 19537–19560. [Google Scholar] [CrossRef]
  43. Stone, E.E.; Skubic, M. Fall detection in homes of older adults using the Microsoft Kinect. IEEE J. Biomed. Health Inform. 2014, 19, 290–301. [Google Scholar] [CrossRef] [PubMed]
  44. Lotfi, A.; Albawendi, S.; Powell, H.; Appiah, K.; Langensiepen, C. Supporting independent living for older adults; employing a visual based fall detection through analysing the motion and shape of the human body. IEEE Access 2018, 6, 70272–70282. [Google Scholar] [CrossRef]
  45. Alaoui, A.Y.; El Fkihi, S.; Thami, R.O.H. Fall detection for elderly people using the variation of key points of human skeleton. IEEE Access 2019, 7, 154786–154795. [Google Scholar] [CrossRef]
  46. Salimi, M.; Machado, J.J.; Tavares, J.M.R. Using deep neural networks for human fall detection based on pose estimation. Sensors 2022, 22, 4544. [Google Scholar] [CrossRef]
  47. Zhu, N.; Zhao, G.; Zhang, X.; Jin, Z. Falling motion detection algorithm based on deep learning. IET Image Process. 2022, 16, 2845–2853. [Google Scholar] [CrossRef]
  48. Núñez-Marcos, A.; Azkune, G.; Arganda-Carreras, I. Vision-based fall detection with convolutional neural networks. Wirel. Commun. Mob. Comput. 2017, 2017, 9474806. [Google Scholar] [CrossRef]
  49. Alam, E.; Sufian, A.; Dutta, P.; Leo, M. Vision-based human fall detection systems using deep learning: A review. Comput. Biol. Med. 2022, 146, 105626. [Google Scholar] [CrossRef]
  50. Kan, X.; Zhu, S.; Zhang, Y.; Qian, C. A lightweight human fall detection network. Sensors 2023, 23, 9069. [Google Scholar] [CrossRef]
  51. Arnaoutoglou, D.G.; Dedemadis, D.; Kyriakou, A.A.; Katsimentes, S.; Grekidis, A.; Menychtas, D.; Aggelousis, N.; Sirakoulis, G.C.; Kyriacou, G.A. Acceleration-Based Low-Cost CW Radar System for real-time elderly fall detection. IEEE J. Electromagn. RF Microwaves Med. Biol. 2024, 8, 102–112. [Google Scholar] [CrossRef]
  52. Hanifi, K.; Karsligil, M.E. Elderly fall detection with vital signs monitoring using CW Doppler radar. IEEE Sens. J. 2021, 21, 16969–16978. [Google Scholar] [CrossRef]
  53. Abdu, F.J.; Zhang, Y.; Deng, Z. Activity classification based on feature fusion of FMCW radar human motion micro-Doppler signatures. IEEE Sens. J. 2022, 22, 8648–8662. [Google Scholar] [CrossRef]
  54. Jin, F.; Sengupta, A.; Cao, S. mmFall: Fall detection using 4-D mmWave radar and a hybrid variational RNN autoencoder. IEEE Trans. Autom. Sci. Eng. 2020, 19, 1245–1257. [Google Scholar] [CrossRef]
  55. Kittiyanpunya, C.; Chomdee, P.; Boonpoonga, A.; Torrungrueng, D. Millimeter-wave radar-based elderly fall detection fed by one-dimensional point cloud and Doppler. IEEE Access 2023, 11, 76269–76283. [Google Scholar] [CrossRef]
  56. Wang, W.; Gong, Y.; Zhang, H.; Yuan, X.; Zhang, Y. Quantitative assessment of fall risk in the elderly through fusion of millimeter-wave radar imaging and trajectory features. IEEE Access 2024, 12, 13370–13385. [Google Scholar] [CrossRef]
  57. Ma, L.; Liu, M.; Wang, N.; Wang, L.; Yang, Y.; Wang, H. Room-level fall detection based on ultra-wideband (UWB) monostatic radar and convolutional long short-term memory (LSTM). Sensors 2020, 20, 1105. [Google Scholar] [CrossRef]
  58. Amin, M.G.; Zhang, Y.D.; Ahmad, F.; Ho, K.D. Radar signal processing for elderly fall detection: The future for in-home monitoring. IEEE Signal Process. Mag. 2016, 33, 71–80. [Google Scholar] [CrossRef]
  59. Sadreazami, H.; Bolic, M.; Rajan, S. Contactless fall detection using time-frequency analysis and convolutional neural networks. IEEE Trans. Ind. Inform. 2021, 17, 6842–6851. [Google Scholar] [CrossRef]
  60. Ding, J.; Wang, Y. A WiFi-based smart home fall detection system using recurrent neural network. IEEE Trans. Consum. Electron. 2020, 66, 308–317. [Google Scholar] [CrossRef]
  61. Cippitelli, E.; Fioranelli, F.; Gambi, E.; Spinsante, S. Radar and RGB-depth sensors for fall detection: A review. IEEE Sens. J. 2017, 17, 3585–3604. [Google Scholar] [CrossRef]
  62. Zigel, Y.; Litvak, D.; Gannot, I. A method for automatic fall detection of elderly people using floor vibrations and sound—Proof of concept on human mimicking doll falls. IEEE Trans. Biomed. Eng. 2009, 56, 2858–2867. [Google Scholar] [CrossRef] [PubMed]
  63. Daher, M.; Diab, A.; El Najjar, M.E.B.; Khalil, M.A.; Charpillet, F. Elder tracking and fall detection system using smart tiles. IEEE Sens. J. 2016, 17, 469–479. [Google Scholar] [CrossRef]
  64. Hassan, C.A.U.; Karim, F.K.; Abbas, A.; Iqbal, J.; Elmannai, H.; Hussain, S.; Ullah, S.S.; Khan, M.S. A cost-effective fall-detection framework for the elderly using sensor-based technologies. Sustainability 2023, 15, 3982. [Google Scholar] [CrossRef]
  65. Alharbi, H.A.; Alharbi, K.K.; Hassan, C.A.U. Enhancing elderly fall detection through IoT-enabled smart flooring and AI for independent living sustainability. Sustainability 2023, 15, 15695. [Google Scholar] [CrossRef]
  66. Viswa, A.; Pravardhana, V.D.; Thangavel, S.K.; Jeyakumar, G. Fall Detection for Elderly People using LiDAR Sensor. In Proceedings of the 2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT); IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  67. Piñeiro, M.; Araya, D.; Ruete, D.; Taramasco, C. Low-cost LIDAR-based monitoring system for fall detection. IEEE Access 2024, 12, 72051–72061. [Google Scholar] [CrossRef]
  68. Rassekh, E.; Snidaro, L. Survey on data fusion approaches for fall-detection. Inf. Fusion 2025, 114, 102696. [Google Scholar] [CrossRef]
  69. Lee, D.W.; Jun, K.; Naheem, K.; Kim, M.S. Deep neural network–based double-check method for fall detection using IMU-L sensor and RGB camera data. IEEE Access 2021, 9, 48064–48079. [Google Scholar] [CrossRef]
  70. Galvão, Y.M.; Ferreira, J.; Albuquerque, V.A.; Barros, P.; Fernandes, B.J. A multimodal approach using deep learning for fall detection. Expert Syst. Appl. 2021, 168, 114226. [Google Scholar] [CrossRef]
  71. Zhang, X.; Xu, Q.; Feng, F.; Lu, X.; Xu, L. Fall-mamba: A multimodal fusion and masked mamba-based approach for fall detection. IEEE Internet Things J. 2025, 12, 10493–10505. [Google Scholar] [CrossRef]
  72. Grassi, M.; Lombardi, A.; Rescio, G.; Malcovati, P.; Malfatti, M.; Gonzo, L.; Leone, A.; Diraco, G.; Distante, C.; Siciliano, P.; et al. A hardware-software framework for high-reliability people fall detection. In Proceedings of the SENSORS; IEEE: Piscataway, NJ, USA, 2008; pp. 1328–1331. [Google Scholar]
  73. Chandra, I.; Sivakumar, N.; Gokulnath, C.B.; Parthasarathy, P. IoT based fall detection and ambient assisted system for the elderly. Clust. Comput. 2019, 22, 2517–2525. [Google Scholar] [CrossRef]
  74. Yacchirema, D.; De Puga, J.S.; Palau, C.; Esteve, M. Fall detection system for elderly people using IoT and big data. Procedia Comput. Sci. 2018, 130, 603–610. [Google Scholar] [CrossRef]
  75. Chelli, A.; Pätzold, M. A machine learning approach for fall detection and daily living activity recognition. IEEE Access 2019, 7, 38670–38687. [Google Scholar] [CrossRef]
  76. Al Nahian, M.J.; Ghosh, T.; Al Banna, M.H.; Aseeri, M.A.; Uddin, M.N.; Ahmed, M.R.; Mahmud, M.; Kaiser, M.S. Towards an accelerometer-based elderly fall detection system using cross-disciplinary time series features. IEEE Access 2021, 9, 39413–39431. [Google Scholar] [CrossRef]
  77. Shahzad, A.; Kim, K. FallDroid: An automated smart-phone-based fall detection system using multiple kernel learning. IEEE Trans. Ind. Inform. 2018, 15, 35–44. [Google Scholar] [CrossRef]
  78. Suryanarayana, S.V.; Lakshmi, H.; Swamy, P.T.C.; Mahesh, D.B. A Comprehensive Review on Deep Learning Based Fall Detection in Elderly People. In Proceedings of the 2025 International Conference on Machine Learning and Autonomous Systems (ICMLAS); IEEE: Piscataway, NJ, USA, 2025; pp. 396–402. [Google Scholar]
  79. Li, X.; Pang, T.; Liu, W.; Wang, T. Fall detection for elderly person care using convolutional neural networks. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, Biomedical Engineering and Informatics (CISP-BMEI); IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  80. Vaiyapuri, T.; Lydia, E.L.; Sikkandar, M.Y.; Díaz, V.G.; Pustokhina, I.V.; Pustokhin, D.A. Internet of things and deep learning enabled elderly fall detection model for smart homecare. IEEE Access 2021, 9, 113879–113888. [Google Scholar] [CrossRef]
  81. Casilari, E.; Lora-Rivera, R.; Garcia-Lagos, F. A study on the application of convolutional neural networks to fall detection evaluated with multiple public datasets. Sensors 2020, 20, 1466. [Google Scholar] [CrossRef]
  82. Tian, J.; Mercier, P.; Paolini, C. Ultra low-power, wearable, accelerated shallow-learning fall detection for elderly at-risk persons. Smart Health 2024, 33, 100498. [Google Scholar] [CrossRef] [PubMed]
  83. Yu, X.; Park, S.; Kim, D.; Kim, E.; Kim, J.; Kim, W.; An, Y.; Xiong, S. A practical wearable fall detection system based on tiny convolutional neural networks. Biomed. Signal Process. Control 2023, 86, 105325. [Google Scholar] [CrossRef]
  84. Ha, T.V.; Nguyen, H.M.; Thanh, S.H.; Nguyen, B.T. Fall detection using mixtures of convolutional neural networks. Multimed. Tools Appl. 2024, 83, 18091–18118. [Google Scholar] [CrossRef]
  85. García, E.; Villar, M.; Fáñez, M.; Villar, J.R.; de la Cal, E.; Cho, S.B. Towards effective detection of elderly falls with CNN-LSTM neural networks. Neurocomputing 2022, 500, 231–240. [Google Scholar] [CrossRef]
  86. Wu, X.; Zheng, Y.; Chu, C.H.; Cheng, L.; Kim, J. Applying deep learning technology for automatic fall detection using mobile sensors. Biomed. Signal Process. Control 2022, 72, 103355. [Google Scholar] [CrossRef]
  87. Luna-Perejón, F.; Domínguez-Morales, M.J.; Civit-Balcells, A. Wearable fall detector using recurrent neural networks. Sensors 2019, 19, 4885. [Google Scholar] [CrossRef]
  88. Paramasivam, A.; Jenath, M.; Sivakumaran, T.S.; Vijayalakshmi, S. Development of artificial intelligence edge computing based wearable device for fall detection and prevention of elderly people. Heliyon 2024, 10, e28688. [Google Scholar] [CrossRef]
  89. Juraev, S.; Ghimire, A.; Alikhanov, J.; Kakani, V.; Kim, H. Exploring human pose estimation and the usage of synthetic data for elderly fall detection in real-world surveillance. IEEE Access 2022, 10, 94249–94261. [Google Scholar] [CrossRef]
  90. Sanjalawe, Y.; Fraihat, S.; Abualhaj, M.; Al-E’Mari, S.R.; Alzubi, E. Hybrid deep learning for human fall detection: A synergistic approach using YOLOv8 and time-space transformers. IEEE Access 2025, 13, 41336–41366. [Google Scholar] [CrossRef]
  91. Yu, Z.; Liu, J.; Yang, M.; Cheng, Y.; Hu, J.; Li, X. An elderly fall detection method based on federated learning and extreme learning machine (Fed-ELM). IEEE Access 2022, 10, 130816–130824. [Google Scholar] [CrossRef]
  92. Benoit, A.; Escriba, C.; Gauchard, D.; Esteve, A.; Rossi, C. Analyzing and Comparing Deep Learning Models on an ARM 32 Bits Microcontroller for Pre-Impact Fall Detection. IEEE Sens. J. 2024, 24, 11829–11842. [Google Scholar] [CrossRef]
  93. Nait Aicha, A.; Englebienne, G.; Van Schooten, K.S.; Pijnappels, M.; Kröse, B. Deep learning to predict falls in older adults based on daily-life trunk accelerometry. Sensors 2018, 18, 1654. [Google Scholar] [CrossRef] [PubMed]
  94. Martí-Marco, E.; Vera-Remartínez, E.J.; Esteve-Clavero, A.; Carmona-Fortuño, I.; Flores-Saldaña, M.; Vila-Pascual, J.; Barba-Muñoz, M.; Molés-Julio, M.P. Detection of Falls and Frailty in Older Adults with Oldfry: Associated Risk Factors. Sensors 2025, 25, 2964. [Google Scholar] [CrossRef]
  95. Mishra, A.K.; Skubic, M.; Despins, L.A.; Popescu, M.; Keller, J.; Rantz, M.; Abbott, C.; Enayati, M.; Shalini, S.; Miller, S. Explainable fall risk prediction in older adults using gait and geriatric assessments. Front. Digit. Health 2022, 4, 869812. [Google Scholar] [CrossRef] [PubMed]
  96. Kerdjidj, O.; Ramzan, N.; Ghanem, K.; Amira, A.; Chouireb, F. Fall detection and human activity classification using wearable sensors and compressed sensing. J. Ambient Intell. Humaniz. Comput. 2020, 11, 349–361. [Google Scholar] [CrossRef]
  97. Chen, T.; Ding, Z.; Li, B. Elderly fall detection based on improved YOLOv5s network. IEEE Access 2022, 10, 91273–91282. [Google Scholar] [CrossRef]
  98. Chen, M.; Wang, H.; Yu, L.; Yeung, E.H.K.; Luo, J.; Tsui, K.L.; Zhao, Y. A systematic review of wearable sensor-based technologies for fall risk assessment in older adults. Sensors 2022, 22, 6752. [Google Scholar] [CrossRef] [PubMed]
  99. Kausar, F.; Mesbah, M.; Iqbal, W.; Ahmad, A.; Sayyed, I. Fall detection in the elderly using different machine learning algorithms with optimal window size. Mob. Netw. Appl. 2024, 29, 413–423. [Google Scholar] [CrossRef]
  100. Ruiz-Ruiz, L.; Jimenez, A.R.; Garcia-Villamil, G.; Seco, F. Detecting fall risk and frailty in elders with inertial motion sensors: A survey of significant gait parameters. Sensors 2021, 21, 6918. [Google Scholar] [CrossRef]
  101. Martinez, M.; De Leon, P.L. Falls risk classification of older adults using deep neural networks and transfer learning. IEEE J. Biomed. Health Inform. 2019, 24, 144–150. [Google Scholar] [CrossRef] [PubMed]
  102. Ye, C.; Li, J.; Hao, S.; Liu, M.; Jin, H.; Zheng, L.; Xia, M.; Jin, B.; Zhu, C.; Alfreds, S.T.; et al. Identification of elders at higher risk for fall with statewide electronic health records and a machine learning algorithm. Int. J. Med. Inform. 2020, 137, 104105. [Google Scholar] [CrossRef]
  103. Makino, K.; Lee, S.; Bae, S.; Chiba, I.; Harada, K.; Katayama, O.; Tomida, K.; Morikawa, M.; Shimada, H. Simplified decision-tree algorithm to predict falls for community-dwelling older adults. J. Clin. Med. 2021, 10, 5184. [Google Scholar] [CrossRef]
  104. Omaña, H.; Bezaire, K.; Brady, K.; Davies, J.; Louwagie, N.; Power, S.; Santin, S.; Hunter, S.W. Functional reach test, single-leg stance test, and tinetti performance-oriented mobility assessment for the prediction of falls in older adults: A systematic review. Phys. Ther. 2021, 101, pzab173. [Google Scholar] [CrossRef]
  105. Sun, R.; Sosnoff, J.J. Novel sensing technology in fall risk assessment in older adults: A systematic review. BMC Geriatr. 2018, 18, 14. [Google Scholar] [CrossRef]
  106. Howcroft, J.; Kofman, J.; Lemaire, E.D. Prospective fall-risk prediction models for older adults based on wearable sensors. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1812–1820. [Google Scholar] [CrossRef]
  107. Liang, X.Z.; Chai, J.L.; Li, G.Z.; Li, W.; Zhang, B.C.; Zhou, Z.Q.; Li, G. A fall risk prediction model based on the CHARLS database for older individuals in China. BMC Geriatr. 2025, 25, 170. [Google Scholar] [CrossRef]
  108. Kabir, M.M.; Shin, J.; Mridha, M.F. Secure your steps: A class-based ensemble framework for real-time fall detection using deep neural networks. IEEE Access 2023, 11, 64097–64113. [Google Scholar] [CrossRef]
  109. Mohammad, Z.; Anwary, A.R.; Mridha, M.F.; Shovon, M.S.H.; Vassallo, M. An enhanced ensemble deep neural network approach for elderly fall detection system based on wearable sensors. Sensors 2023, 23, 4774. [Google Scholar] [CrossRef]
  110. Tahir, A.; Morison, G.; Skelton, D.A.; Gibson, R.M. Hardware/software co-design of fractal features based fall detection system. Sensors 2020, 20, 2322. [Google Scholar] [CrossRef]
  111. Nooruddin, S.; Islam, M.M.; Sharna, F.A. An IoT based device-type invariant fall detection system. Internet Things 2020, 9, 100130. [Google Scholar] [CrossRef]
  112. Sarabia-Jácome, D.; Usach, R.; Palau, C.E.; Esteve, M. Highly-efficient fog-based deep learning AAL fall detection system. Internet Things 2020, 11, 100185. [Google Scholar] [CrossRef]
  113. Delgado-Escano, R.; Castro, F.M.; Cozar, J.R.; Marin-Jimenez, M.J.; Guil, N.; Casilari, E. A cross-dataset deep learning-based classifier for people fall detection and identification. Comput. Methods Programs Biomed. 2020, 184, 105265. [Google Scholar] [CrossRef]
  114. Umer, M.; Alarfaj, A.A.; Alabdulqader, E.A.; Alsubai, S.; Cascone, L.; Narducci, F. Enhancing fall prediction in the elderly people using LBP features and transfer learning model. Image Vis. Comput. 2024, 145, 104992. [Google Scholar] [CrossRef]
  115. Alfayez, F.; Bhatia Khan, S. IoT-blockchain empowered Trinet: Optimized fall detection system for elderly safety. Front. Bioeng. Biotechnol. 2023, 11, 1257676. [Google Scholar] [CrossRef]
  116. Berardini, D.; Moccia, S.; Migliorelli, L.; Pacifici, I.; Di Massimo, P.; Paolanti, M.; Frontoni, E.; Rivera, A.R. Fall detection for elderly-people monitoring using learned features and recurrent neural networks. Exp. Results 2020, 1, e7. [Google Scholar] [CrossRef]
  117. De Raeve, N.; Shahid, A.; De Schepper, M.; De Poorter, E.; Moerman, I.; Verhaevert, J.; Van Torre, P.; Rogier, H. Bluetooth-Low-Energy-Based Fall Detection and Warning System for Elderly People in Nursing Homes. J. Sens. 2022, 2022, 9930681. [Google Scholar] [CrossRef]
  118. Gharghan, S.K.; Mohammed, S.L.; Al-Naji, A.; Abu-AlShaeer, M.J.; Jawad, H.M.; Jawad, A.M.; Chahl, J. Accurate fall detection and localization for elderly people based on neural network and energy-efficient wireless sensor network. Energies 2018, 11, 2866. [Google Scholar] [CrossRef]
  119. Gupta, S.N.; Hussain, A.; Kumar, A.; Pramanik, I. IoT based Smart Fall Detection Device for Elderly People. In Proceedings of the 2024 International Conference on Electrical Electronics and Computing Technologies (ICEECT); IEEE: Piscataway, NJ, USA, 2024; Volume 1, pp. 1–5. [Google Scholar]
  120. Hassan, M.M.; Gumaei, A.; Aloi, G.; Fortino, G.; Zhou, M. A smartphone-enabled fall detection framework for elderly people in connected home healthcare. IEEE Netw. 2019, 33, 58–63. [Google Scholar] [CrossRef]
  121. Kerdjidj, O.; Boutellaa, E.; Amira, A.; Ghanem, K.; Chouireb, F. A hardware framework for fall detection using inertial sensors and compressed sensing. Microprocess. Microsyst. 2022, 91, 104514. [Google Scholar] [CrossRef]
  122. Mudiyanselage, S.P.K.; Yao, C.T.; Maithreepala, S.D.; Lee, B.O. Emerging Digital Technologies Used for Fall Detection in Older Adults in Aged Care: A Scoping Review. J. Am. Med. Dir. Assoc. 2025, 26, 105330. [Google Scholar] [CrossRef]
  123. Lapierre, N.; Neubauer, N.; Miguel-Cruz, A.; Rincon, A.R.; Liu, L.; Rousseau, J. The state of knowledge on technologies and their use for fall detection: A scoping review. Int. J. Med. Inform. 2018, 111, 58–71. [Google Scholar] [CrossRef]
  124. Nooruddin, S.; Islam, M.M.; Sharna, F.A.; Alhetari, H.; Kabir, M.N. Sensor-based fall detection systems: A review. J. Ambient Intell. Humaniz. Comput. 2022, 13, 2735–2751. [Google Scholar] [CrossRef]
  125. Patel, A.N.; Murugan, R.; Maddikunta, P.K.R.; Yenduri, G.; Jhaveri, R.H.; Zhu, Y.; Gadekallu, T.R. AI-powered trustable and explainable fall detection system using transfer learning. Image Vis. Comput. 2024, 149, 105164. [Google Scholar] [CrossRef]
  126. Savadkoohi, M.; Oladunni, T.; Thompson, L.A. Deep neural networks for human’s fall-risk prediction using force-plate time series signal. Expert Syst. Appl. 2021, 182, 115220. [Google Scholar] [CrossRef]
  127. Soni, V.; Yadav, H.; Bijrothiya, S.; Semwal, V.B. CABMNet: An adaptive two-stage deep learning network for optimized spatial and temporal analysis in fall detection. Biomed. Signal Process. Control 2024, 96, 106506. [Google Scholar] [CrossRef]
  128. Wang, Z.; Ramamoorthy, V.; Gal, U.; Guez, A. Possible life saver: A review on human fall detection technology. Robotics 2020, 9, 55. [Google Scholar] [CrossRef]
  129. Yu, X.; Wang, C.; Wu, W.; Xiong, S. A real-time skeleton-based fall detection algorithm based on temporal convolutional networks and transformer encoder. Pervasive Mob. Comput. 2025, 107, 102016. [Google Scholar] [CrossRef]
  130. Zhang, Y.; Liang, W.; Yuan, X.; Zhang, S.; Yang, G.; Zeng, Z. Deep learning-based abnormal behavior detection for elderly healthcare using consumer network cameras. IEEE Trans. Consum. Electron. 2023, 70, 2414–2422. [Google Scholar] [CrossRef]
Figure 1. Annual Publication Trend of Included Studies. The bar chart illustrates the distribution of the 130 studies included in this systematic review by their year of publication, spanning from 2008 to 2025.
Figure 2. PRISMA Flow Diagram of the Study Selection Process.
Figure 3. Distribution of Primary Technology and Methodology. The chart illustrates the classification of the 130 included studies based on their primary technological or methodological focus.
Figure 4. Taxonomy of Sensor Modalities for Fall Detection. Based on the thematic synthesis in Section 3.1, this figure provides a hierarchical classification of the various sensor technologies identified in the literature. It details the hardware used within the primary categories of wearable, ambient, and hybrid systems, highlighting the variety of approaches in the field.
Figure 5. Common Wearable Sensor Placements and Associated Trade-offs. This diagram maps the body locations most frequently cited in the literature. It shows the balance between optimal signal quality (waist/trunk) and user comfort/compliance (wrist), a critical factor in the real-world effectiveness of wearable FDS.
Figure 6. Evolution of Computational Models in Fall Detection Research. The stacked bar chart illustrates the distribution of primary computational model types across the included studies, grouped into three distinct publication periods.
Figure 7. A Taxonomy of Deep Learning Models in Fall Detection Systems. This hierarchical diagram classifies the various DL architectures found in the review. It categorizes models based on their primary function: spatial feature extraction (CNNs for vision), temporal sequence modeling (RNNs for time-series data), and hybrid approaches that integrate both capabilities.
Figure 8. The Spectrum of Fall Management Technologies. This figure illustrates the evolving technological landscape described in Section 3.4, moving from reactive post-fall detection to proactive approaches. It categorizes technologies based on their intervention timing relative to a fall event, from long-term risk assessment based on gait and clinical data to the immediate pre-impact prediction aimed at injury prevention.
Figure 9. A Generalized IoT-Enabled Fall Detection System Architecture. Synthesizing the implementation themes discussed in this review, this diagram illustrates the data flow from different sensor modalities through the Edge/Fog layer for local processing, to the Cloud for advanced analysis, and finally to the Application layer for alerting caregivers.
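To make the layered data flow in Figure 9 concrete, the following is a minimal sketch of the Edge-layer logic, assuming a hypothetical classifier and notification sink (the names `edge_pipeline`, `AlertEvent`, and the 0.9 confidence threshold are illustrative, not drawn from any reviewed system). The key design point it illustrates is that inference runs locally and only confirmed fall events are forwarded to the cloud/application layers, reducing bandwidth and latency.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AlertEvent:
    """Event forwarded from the Edge layer to the caregiver application."""
    device_id: str
    label: str
    confidence: float


def edge_pipeline(window,
                  classify: Callable,
                  notify: Callable[[AlertEvent], None],
                  device_id: str = "wearable-01",
                  threshold: float = 0.9) -> None:
    """Run local inference on a sensor window; forward only confirmed falls.

    In a real deployment `notify` might publish over MQTT to the Cloud layer;
    here it is any callable that accepts an AlertEvent.
    """
    label, confidence = classify(window)
    if label == "fall" and confidence >= threshold:
        notify(AlertEvent(device_id, label, confidence))


# Stub classifier and an in-memory stand-in for the caregiver application.
alerts: List[AlertEvent] = []
edge_pipeline(window=None,
              classify=lambda w: ("fall", 0.97),
              notify=alerts.append)
print(len(alerts))  # 1 alert forwarded
```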
Table 1. Key Public Datasets.
| Dataset Name | Reference | Modality | Subjects | Key Characteristics |
| --- | --- | --- | --- | --- |
| KFall | [18] | Wearable | Young Adults | Used specifically for pre-impact fall detection research; includes extensive motion data. |
| SisFall | [35] | Wearable (Accel/Gyro) | 38 (15 Elderly) | Contains 15 fall types and 19 ADLs. Includes a significant number of elderly participants performing ADLs. |
| UP-Fall | [36] | Multimodal (Wearable + Vision + Ambient) | 17 (Young) | A comprehensive multimodal dataset (850 GB) allowing for fair comparison between vision, wearable, and hybrid approaches. |
| URFD | [37] | Vision (RGB-D) | 30 (Young) | Uses Kinect cameras; contains depth and accelerometer data. Widely used for vision-based benchmarking. |
| MobiFall | [38] | Wearable (Smartphone) | 24 (Young) | Focuses on smartphone-based detection (accelerometer/orientation) with realistic simulated falls. |
| Le2i | [39] | Vision (Video) | Various | Challenges include varied lighting, occlusion, and textured backgrounds to simulate real-world difficulties. |
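A preprocessing step common to work on the wearable datasets above is sliding-window segmentation of the inertial time series before feature extraction or model training. The sketch below shows this step in Python; the 2 s window with 50% overlap is an illustrative choice, not a parameter prescribed by any of these datasets (SisFall, for instance, is commonly reported at a 200 Hz sampling rate, which the example assumes).

```python
import numpy as np


def sliding_windows(signal: np.ndarray, win_len: int, stride: int) -> np.ndarray:
    """Segment a (T, C) multi-channel time series into overlapping windows.

    Returns an array of shape (num_windows, win_len, C); trailing samples
    that do not fill a complete window are dropped.
    """
    n = (len(signal) - win_len) // stride + 1
    return np.stack([signal[i * stride: i * stride + win_len] for i in range(n)])


# Example: 10 s of 3-axis accelerometer data at an assumed 200 Hz,
# cut into 2 s windows with 50% overlap.
acc = np.zeros((2000, 3))
windows = sliding_windows(acc, win_len=400, stride=200)
print(windows.shape)  # (9, 400, 3)
```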
Table 2. Comparative Analysis of Computational Approaches.
| Approach | Representative Algorithms | Feature Extraction | Strengths | Limitations |
| --- | --- | --- | --- | --- |
| Threshold-Based | Fixed Thresholds, Finite State Machines (FSM) | Manual (peak acceleration, angles) | Low computational cost; high energy efficiency; easy to implement on basic MCUs. | High False Alarm Rate (FAR); struggles to distinguish vigorous ADLs from falls; not adaptive to different users. |
| Classical Machine Learning (ML) | SVM, Decision Trees, KNN, Random Forest | Manual (hand-crafted features) | Good balance of accuracy and efficiency; requires less training data than DL; interpretable models (Decision Trees). | Performance depends heavily on feature engineering; may struggle with complex, raw data streams; less effective at modeling temporal dependencies. |
| Deep Learning (DL) | CNN, LSTM, GRU, Transformers | Automatic (learned from raw data) | State-of-the-art accuracy (>99%); no manual feature engineering required; handles complex/noisy data well. | High computational and memory cost; requires large labeled datasets; "black box" nature (low interpretability); high energy consumption. |
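The threshold-based row of Table 2 can be sketched in a few lines, which also makes its main limitation visible: any vigorous ADL whose acceleration magnitude crosses the fixed threshold triggers a false alarm. The 2.5 g threshold below is an illustrative assumption; published systems tune this value empirically.

```python
import numpy as np

# Illustrative impact threshold in g (an assumption, not a recommended value).
IMPACT_THRESHOLD_G = 2.5


def detect_fall(acc_xyz: np.ndarray) -> bool:
    """Flag an (N, 3) accelerometer window as a fall if the sum-vector
    magnitude |a| ever exceeds the fixed impact threshold."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    return bool(np.max(magnitude) > IMPACT_THRESHOLD_G)


# Quiet standing (~1 g of gravity) vs. a window containing an impact spike.
still = np.tile([0.0, 0.0, 1.0], (100, 1))
impact = still.copy()
impact[50] = [2.0, 1.5, 2.0]   # |a| ~= 3.2 g

print(detect_fall(still), detect_fall(impact))  # False True
```

Note that jumping or sitting down hard can produce comparable spikes, which is why this approach exhibits the high FAR listed in the table.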
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Ishaq, M.; Guastella, D.C.; Sutera, G.; Muscato, G. A Systematic Review of Fall Detection and Prediction Technologies for Older Adults: An Analysis of Sensor Modalities and Computational Models. Appl. Sci. 2026, 16, 1929. https://doi.org/10.3390/app16041929

