Dual-Stream Transformer with Kalman-Based Sensor Fusion for Wearable Fall Detection
Abstract
1. Introduction
- 1. We empirically show that using gyroscope data as-is alongside linear accelerometer data hurts fall detection performance, because neural networks cannot learn the physics-based transformation from sensed angular velocity to orientation from limited training data. By applying Kalman filtering, we inject domain knowledge that converts gyroscope data from a potential source of noise into a complementary modality.
- 2. We show that the dual-stream architecture and Kalman fusion interact synergistically: dual-stream processing improves performance by +1.30% with Kalman inputs but degrades it by 1.38% with raw inputs. This asymmetry suggests that modality isolation amplifies the effect of input quality.
- 3. We provide a systematic component-wise analysis that isolates the effects of Kalman fusion, architectural decoupling, and attention mechanisms. Kalman fusion enables a +3.52% F1 score improvement in the dual-stream setting (87.58% → 91.10%), dual-stream processing contributes an additional +1.30% F1 score over the best single-stream Kalman baseline, and SE+TAP attention mechanisms yield a further +1.28% F1 score improvement in single-stream transformers.
- 4. To analyze the robustness and architectural sensitivity of the proposed method, we perform ablation studies, including: (a) evaluation on three datasets, demonstrating consistent performance gains; and (b) evaluation of different embedding allocations, revealing that a balanced, moderate dimensional split between the acceleration and orientation streams yields superior performance.
- 5. Finally, we validate the proposed method in a real-world smartwatch-based fall detection app, demonstrating that it maintains strong detection performance (83% F1 score and 90% accuracy) and practical viability under live operating conditions.
2. Related Work
- Deep Learning Approaches for IMU-Based Fall Detection: Early deep learning approaches for fall detection, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), focused on single-stream architectures processing raw accelerometer data. For example, our initial work [7] evaluated a Gated Recurrent Unit (GRU) model using wrist-worn accelerometer data from the SmartFall2018 dataset [8], achieving 87% F1 score offline, but only 73% in real-world deployment, which revealed a critical 14% performance gap. To address this degradation, subsequent work [9] explored ensemble learning with user feedback, but the computational overhead made it unsuitable for resource-constrained smartwatches.
- Dual-Stream Architectures and Multimodal Fusion: The recognition that different sensor modalities require separate processing pathways has motivated the development of dual-stream architectures. For example, a three-stream spatio-temporal graph convolutional network (GCN) for fall recognition [15] demonstrated the benefits of processing multiple feature representations through separate pathways. In multimodal sensing scenarios, decision-level fusion architectures consisting of distinct processing streams, such as vision-based streams exploiting skeletal landmarks and inertial-based streams using LSTM autoencoders, have shown that late fusion is more robust than early fusion by enabling better calibration, fault isolation, and resilience against modality-specific failures [16]. However, these approaches typically combine fundamentally different sensing modalities (vision and inertial) rather than addressing the specific challenge of fusing complementary measurements from the same IMU device. Moreover, a vision modality is not available on a wrist-worn watch. These dual-stream approaches also fail to address data quality at the signal level, i.e., they assume that architectural separation alone suffices to handle signal interference in multimodal sensing.
- Sensor Fusion and Kalman Filtering for Orientation Estimation: Kalman filtering [5] has been widely adopted in fall detection systems primarily for noise suppression and signal smoothing of accelerometer data. Liu and Lin [17] applied a first-order Kalman filter to extract slow-varying residual components from offline triaxial accelerometer signals, achieving 96.21% accuracy and 93.24% F1 score using a support vector machine (SVM) with handcrafted features. The model's accuracy was not evaluated in the real world. Similarly, in [18], complementary filtering techniques, such as the Madgwick algorithm, were employed to fuse accelerometer gravity references with gyroscope angular velocities for orientation estimation in motion-tracking applications. However, these approaches employed filtering solely for single-modality noise reduction or as preprocessing steps for threshold-based detection algorithms, with evaluations limited to offline settings. Critically, no prior work has applied Kalman filtering to noisy gyroscope data to obtain stable orientation angles and processed them through separate neural pathways to improve fall classification performance.
- Attention Mechanisms for Time-Series Classification: Squeeze-and-Excitation (SE) networks [6,19] and temporal attention mechanisms [20] have demonstrated effectiveness in dynamically weighting features in human activity recognition (HAR). SE mechanisms perform channel-wise recalibration to amplify discriminative channels while suppressing less informative ones in the spatial/channel dimension. Addressing the temporal dimension, Wang et al. [21] showed that temporal attention could capture long-term dependencies without RNNs by integrating dilated CNNs with modified temporal attention mechanisms. These networks further demonstrated that jointly modeling spatial and temporal dependencies improved HAR performance. However, the application of combined SE and temporal attention mechanisms to IMU-based fall detection remains underexplored. Critically, no prior work has examined whether attention mechanisms designed for single-stream architectures transfer effectively to dual-stream designs, where modality-specific fusion introduces different channel dynamics, particularly in real-world deployment scenarios.
3. Methodology
3.1. Kalman Fusion for Orientation Features
- Step 1: Sensor Acquisition: At discrete time step k, the inertial measurement unit (IMU) provides an accelerometer sample $\mathbf{a}_k = [a_{x,k}, a_{y,k}, a_{z,k}]^{\top}$, representing linear accelerations along the sensor axes, and a gyroscope sample $\boldsymbol{\omega}_k = [\omega_{x,k}, \omega_{y,k}, \omega_{z,k}]^{\top}$, representing angular velocities in rad/s.
- Step 2: State Definition: We apply Kalman fusion exclusively to orientation estimation, with the state vector defined as $\mathbf{x}_k = [\phi_k, \theta_k, \psi_k, \dot{\phi}_k, \dot{\theta}_k, \dot{\psi}_k]^{\top}$, where $\phi$, $\theta$, and $\psi$ denote roll, pitch, and yaw angles, respectively, and $\dot{\phi}$, $\dot{\theta}$, and $\dot{\psi}$ denote the corresponding angular rates. This state encapsulates the filter's internal belief about body orientation and angular motion at time k.
- Step 3: State Prediction: The prediction step uses gyroscope readings to estimate the system state at the next time step. Starting from the previous state $\hat{\mathbf{x}}_{k-1}$, the Kalman filter projects the orientation and angular motion forward in time according to $\hat{\mathbf{x}}_{k|k-1} = \mathbf{F}\,\hat{\mathbf{x}}_{k-1} + \mathbf{w}_k$, where $\mathbf{w}_k$ represents zero-mean Gaussian noise that accounts for modeling uncertainty and sensor imperfections. The state transition matrix $\mathbf{F}$ governs this temporal propagation and is defined as $\mathbf{F} = \begin{bmatrix} \mathbf{I}_3 & \Delta t\,\mathbf{I}_3 \\ \mathbf{0}_3 & \mathbf{I}_3 \end{bmatrix}$, where $\Delta t$ denotes the sampling interval.
- Step 4: Accelerometer-Based Orientation Observation: Roll and pitch observations are obtained from the accelerometer via a gravity-based mapping using the four-quadrant inverse tangent: $\phi_k^{\mathrm{acc}} = \operatorname{atan2}(a_{y,k}, a_{z,k})$ and $\theta_k^{\mathrm{acc}} = \operatorname{atan2}\big({-a_{x,k}},\, \sqrt{a_{y,k}^2 + a_{z,k}^2}\big)$.
- Step 5: Measurement Model Correction and Adaptive Noise Scaling: The measurement vector is defined as $\mathbf{z}_k = [\phi_k^{\mathrm{acc}}, \theta_k^{\mathrm{acc}}, \omega_{x,k}, \omega_{y,k}, \omega_{z,k}]^{\top}$ and is modeled as a noisy linear observation of the state through $\mathbf{z}_k = \mathbf{H}\,\mathbf{x}_k + \mathbf{v}_k$, where $\mathbf{v}_k$ is measurement noise and the observation matrix is defined as $\mathbf{H} = \begin{bmatrix} \mathbf{I}_2 & \mathbf{0}_{2 \times 4} \\ \mathbf{0}_{3 \times 3} & \mathbf{I}_3 \end{bmatrix}$, which selects roll and pitch (observed via gravity) and the three angular rates (observed via the gyroscope); yaw has no direct measurement. The state estimate is updated using the standard Kalman correction $\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k\big(\mathbf{z}_k - \mathbf{H}\,\hat{\mathbf{x}}_{k|k-1}\big)$, where $\hat{\mathbf{x}}_{k|k}$ denotes the updated state estimate at time step k, $\hat{\mathbf{x}}_{k|k-1}$ is the predicted (prior) state estimate, and the matrix $\mathbf{K}_k$ is the Kalman gain, which weights the influence of the innovation (the difference between observed and predicted measurements) on the state update. The gain is computed as $\mathbf{K}_k = \mathbf{P}_{k|k-1}\,\mathbf{H}^{\top}\big(\mathbf{H}\,\mathbf{P}_{k|k-1}\,\mathbf{H}^{\top} + \mathbf{R}_k\big)^{-1}$, where $\mathbf{P}_{k|k-1}$ is the predicted state covariance matrix and $\mathbf{H}^{\top}$ denotes the transpose of the observation matrix. $\mathbf{R}_k$ is the measurement noise covariance matrix, which encodes the uncertainty associated with sensor-derived measurements. In this work, the accelerometer-related noise component $\mathbf{R}_{\mathrm{acc}}$ is adaptively scaled based on the magnitude of the measured acceleration to reduce the influence of unreliable gravity estimates during high-dynamic events. Specifically, at each time step, $\mathbf{R}_{\mathrm{acc}}$ is adjusted as $\mathbf{R}_{\mathrm{acc},k} = s_k\,\mathbf{R}_{\mathrm{acc}}$ with $s_k = \min\!\big(s_{\max},\, \max\!\big(1,\, \big|\lVert\mathbf{a}_k\rVert - g\big| / \tau\big)\big)$, where $\lVert\mathbf{a}_k\rVert$ denotes the acceleration magnitude, g is the gravitational acceleration, $\tau$ is an activation threshold, and $s_{\max}$ limits the maximum scaling factor. The modified noise term is incorporated into $\mathbf{R}_k$ before computing the Kalman gain.
- Step 6: Final Output: The final output of the Kalman fusion is the orientation vector $\mathbf{o}_k = [\phi_k, \theta_k, \psi_k]^{\top}$, which is subsequently used as the orientation input to the learning model.
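To make Steps 1–6 concrete, the sketch below implements the gravity-based roll/pitch observation and one predict/correct cycle of a simplified filter in plain Python. It is an illustration only: the paper's filter tracks roll, pitch, and yaw jointly in a six-dimensional state, whereas this sketch runs a 2-state (angle, rate) filter on a single axis, and all numeric parameters (`q`, `r_gyro`, `r_acc`, `tau`, `s_max`) are assumed values, not the authors'.

```python
import math

def accel_to_roll_pitch(ax, ay, az):
    """Gravity-based orientation observation (Step 4): four-quadrant
    inverse tangent; yaw is not observable from gravity alone."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def kalman_angle_step(x, P, gyro_rate, acc_angle, acc_mag, dt,
                      q=1e-3, r_gyro=1e-2, r_acc=5e-2,
                      g=9.81, tau=2.0, s_max=10.0):
    """One predict/correct cycle for a single axis, state x = (angle, rate),
    with 2x2 covariance P given as nested lists."""
    angle, rate = x
    # --- Predict: constant-rate model, F = [[1, dt], [0, 1]], Q = q*I.
    angle_p = angle + dt * rate
    rate_p = rate
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # --- Adaptive noise scaling: distrust the gravity-based angle when
    # the acceleration magnitude deviates from g beyond the threshold tau.
    s = min(s_max, max(1.0, abs(acc_mag - g) / tau))
    r_acc_k = s * r_acc
    # --- Correct: sequential scalar updates (valid for diagonal R),
    # first the accelerometer angle, then the gyroscope rate.
    for z, h, r in ((acc_angle, (1.0, 0.0), r_acc_k),
                    (gyro_rate, (0.0, 1.0), r_gyro)):
        PHt = [P[0][0] * h[0] + P[0][1] * h[1],
               P[1][0] * h[0] + P[1][1] * h[1]]
        S = h[0] * PHt[0] + h[1] * PHt[1] + r        # innovation covariance
        K = [PHt[0] / S, PHt[1] / S]                 # Kalman gain
        innov = z - (h[0] * angle_p + h[1] * rate_p)
        angle_p += K[0] * innov
        rate_p += K[1] * innov
        IKH = [[1 - K[0] * h[0], -K[0] * h[1]],
               [-K[1] * h[0], 1 - K[1] * h[1]]]      # (I - K H)
        P = [[IKH[0][0] * P[0][0] + IKH[0][1] * P[1][0],
              IKH[0][0] * P[0][1] + IKH[0][1] * P[1][1]],
             [IKH[1][0] * P[0][0] + IKH[1][1] * P[1][0],
              IKH[1][0] * P[0][1] + IKH[1][1] * P[1][1]]]
    return (angle_p, rate_p), P
```

Running the step repeatedly with a constant accelerometer angle converges the estimate toward that angle, while a sample with a large magnitude deviation (a high-dynamic event) is weighted less in the update.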
3.2. Acceleration Features
3.3. Feature-Specific Normalization
3.4. Dual-Stream Network Architecture
- Step 1: Window-Level Input Definition: The proposed model performs window-by-window fall detection, where T denotes the window length in samples and $t \in \{1, \ldots, T\}$ indexes individual time steps within a window (details about window length and overlap are provided in Section 4.2). Stacking the accelerometer samples $\mathbf{a}_t$ and the Kalman-fused orientation vectors $\mathbf{o}_t$ over the window yields two sequences: $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_T]$ and $\mathbf{O} = [\mathbf{o}_1, \ldots, \mathbf{o}_T]$.
- Step 2: Dual-Stream Temporal Projections: The $\mathbf{A}$ and $\mathbf{O}$ sequences are processed by two parallel temporal projection streams with identical structure but independent parameters. Each stream applies a one-dimensional temporal convolution (Conv1D) with kernel size 8 and same padding to preserve temporal resolution, followed by batch normalization (BN), a Sigmoid Linear Unit (SiLU) activation function ($\mathrm{SiLU}(x) = x \cdot \sigma(x)$), and dropout (Drop).
- Step 3: Feature Fusion and Normalization: The projected features $\mathbf{H}_{\mathrm{acc}}$ and $\mathbf{H}_{\mathrm{ori}}$ are fused by concatenation, followed by layer normalization (LN) to stabilize the combined representation across modalities. The fusion operation is defined as $\mathbf{Z} = \mathrm{LN}\big([\mathbf{H}_{\mathrm{acc}} \,\|\, \mathbf{H}_{\mathrm{ori}}]\big)$, where $\|$ denotes concatenation along the feature dimension. At this point, the two streams are fully merged into a shared temporal representation.
- Step 4: Transformer Encoder for Temporal Modeling: The fused representation is further processed by a stack of transformer encoder layers to model long-range temporal dependencies across the window. The encoder applies pre-normalized multi-head self-attention (MSA) and a position-wise feed-forward network (FFN) at each layer. The encoder output is computed by $\mathbf{U} = \mathrm{Enc}_L(\mathbf{Z})$, where $\mathrm{Enc}_L$ denotes a stack of L transformer encoder layers and $\mathbf{U} = [\mathbf{u}_1, \ldots, \mathbf{u}_T]$ is the resulting sequence of contextualized representations.
- Step 5: Channel Attention via Squeeze–Excitation: The contextualized representation from the transformer encoder is recalibrated using a squeeze–excitation (SE) mechanism to emphasize informative feature channels while suppressing less relevant ones. The SE module first aggregates temporal information via global average pooling, $\mathbf{s} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{u}_t$, yielding a global channel descriptor. Channel importance weights are then computed through a two-layer bottleneck with reduction ratio r, yielding a bottleneck dimension of d/r: $\mathbf{e} = \sigma\big(\mathbf{W}_2\, \delta(\mathbf{W}_1 \mathbf{s})\big)$, where $\mathbf{W}_1 \in \mathbb{R}^{(d/r) \times d}$ reduces dimensionality, $\mathbf{W}_2 \in \mathbb{R}^{d \times (d/r)}$ restores it, $\delta$ denotes ReLU activation, and $\sigma$ denotes the sigmoid function. The recalibrated features are obtained by channel-wise scaling, $\tilde{\mathbf{u}}_t = \mathbf{e} \odot \mathbf{u}_t$, where ⊙ denotes element-wise multiplication, producing the sequence $\tilde{\mathbf{U}}$.
- Step 6: Temporal Attention Pooling: To aggregate the recalibrated sequence into a fixed-length window-level representation, Temporal Attention Pooling (TAP) is applied. Unlike global average pooling, TAP learns to selectively weight time steps based on their relevance to fall detection, focusing on transient impact events while down-weighting the surrounding background motion.
- Step 7: Window-Level Classification: The pooled representation $\mathbf{h}$ is passed through a dropout layer for regularization, then mapped to a scalar logit via a fully connected layer (FC): $z = \mathbf{w}^{\top}\mathbf{h} + b$, where $\mathbf{w}$ and b are learnable parameters of the FC layer. A sigmoid activation function produces the final window-level fall probability $\hat{p} = \sigma(z)$, where $\hat{p} > 0.5$ indicates a predicted fall and $\hat{p} \leq 0.5$ indicates a predicted ADL.
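Steps 5–7 can be sketched in plain Python as follows. The SE recalibration follows the squeeze/excite/scale equations of Step 5; the attention-pooling form (a learned scoring vector followed by a softmax over time) is one common TAP parameterization, assumed here since the paper does not spell out the exact form, and all weight values below are illustrative rather than the trained model's.

```python
import math

def squeeze_excite(U, W1, W2):
    """SE recalibration over a sequence U (T x d, nested lists).
    Squeeze: temporal average pooling -> channel descriptor s.
    Excite: ReLU bottleneck (W1) then sigmoid expansion (W2) -> weights e.
    Scale: each time step is multiplied channel-wise by e."""
    T, d = len(U), len(U[0])
    s = [sum(U[t][c] for t in range(T)) / T for c in range(d)]
    hidden = [max(0.0, sum(w * x for w, x in zip(row, s))) for row in W1]
    e = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
         for row in W2]
    return [[U[t][c] * e[c] for c in range(d)] for t in range(T)]

def temporal_attention_pool(U, w_att):
    """TAP: softmax-normalized scores over time steps, then a weighted sum,
    yielding one fixed-length vector per window."""
    scores = [sum(w * x for w, x in zip(w_att, u)) for u in U]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    Z = sum(exps)
    alphas = [e / Z for e in exps]
    d = len(U[0])
    return [sum(alphas[t] * U[t][c] for t in range(len(U))) for c in range(d)]

def classify_window(h, w_fc, b_fc, threshold=0.5):
    """Step 7: FC layer + sigmoid -> window-level fall probability."""
    logit = sum(w * x for w, x in zip(w_fc, h)) + b_fc
    p = 1.0 / (1.0 + math.exp(-logit))
    return p, p > threshold
```

With zero attention weights TAP reduces to global average pooling, which makes the contrast drawn in Step 6 easy to verify: non-zero scoring weights shift the pooled vector toward the highest-scoring time steps, e.g. a transient impact frame.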
4. Implementation Details
4.1. Datasets
4.2. Data Segmentation
4.3. Problem Formulation and Window Labeling
4.4. Training Configuration
4.5. Evaluation Protocol
5. Results
5.1. Performance Comparison with Baselines
5.2. Comparison Across Architectures
5.3. Computational Cost
5.4. Ablation Studies
- Consistency Across Multiple Datasets: To further analyze the robustness of the proposed method, we conducted an ablation study based on evaluation across multiple datasets. Unlike earlier experiments that focused on architectural and input-level variations within SmartFallMM, this study examined model behavior across datasets with substantially different sensor characteristics, sampling rates, and noise profiles.
- Effect of Yaw Drift on Classification Performance: To examine whether yaw drift affects classification performance, we conducted ablation experiments evaluating the contribution of the yaw channel to the orientation representation. Specifically, we tested three configurations: (1) the full Kalman-based orientation representation including yaw, (2) a drift-free alternative where yaw was replaced by the gyroscope magnitude , and (3) a configuration where yaw was completely excluded from the input. All experiments were performed on the SmartFallMM dataset using the same LOSO-CV protocol.
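The three yaw-ablation input configurations can be sketched as follows (plain Python; the channel ordering and function name are illustrative assumptions, not the authors' code):

```python
import math

def build_orientation_channels(roll, pitch, yaw, gyro, mode="full"):
    """Assemble the orientation input channels for the yaw-drift ablation.

    mode="full":     (roll, pitch, yaw) from the Kalman filter.
    mode="gyro_mag": yaw replaced by the gyroscope magnitude ||w||.
    mode="no_yaw":   yaw channel dropped entirely.
    """
    if mode == "full":
        return [roll, pitch, yaw]
    if mode == "gyro_mag":
        wx, wy, wz = gyro
        return [roll, pitch, math.sqrt(wx * wx + wy * wy + wz * wz)]
    if mode == "no_yaw":
        return [roll, pitch]
    raise ValueError(f"unknown mode: {mode}")
```

The gyroscope magnitude is drift-free by construction (it involves no integration), which is why it serves as the replacement channel in configuration (2).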
5.5. Real-Time Testing
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- WHO. Falls: Fact Sheet. 2021. Available online: https://www.who.int/news-room/fact-sheets/detail/falls (accessed on 1 December 2024).
- Yasmin, A.; Mahmud, T.; Haque, S.T.; Alamgeer, S.; Ngu, A.H.H. Enhancing Real-World Fall Detection Using Commodity Devices: A Systematic Study. Sensors 2025, 25, 5249. [Google Scholar] [CrossRef] [PubMed]
- SmartFall Group, Texas State University. SmartFallMM: A Multimodal Dataset Collected with Commodity Devices. 2025. Available online: https://github.com/txst-cs-smartfall/SmartFallMM-Dataset (accessed on 13 January 2026).
- Xuan, J.; Zhu, T.; Peng, G.; Sun, F.; Dong, D. A Review on the Inertial Measurement Unit Array of Microelectromechanical Systems. Sensors 2024, 24, 7140. [Google Scholar] [CrossRef] [PubMed]
- Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar] [CrossRef]
- Mauldin, T.R.; Canby, M.E.; Metsis, V.; Ngu, A.H.; Rivera, C.C. SmartFall: A Smartwatch-Based Fall Detection System Using Deep Learning. Sensors 2018, 18, 3363. [Google Scholar] [CrossRef]
- SmartFall Group, Texas State University. SmartFall Dataset, 2018. Available online: https://userweb.cs.txstate.edu/~hn12/data/SmartFallDataSet/ (accessed on 13 January 2026).
- Mauldin, T.R.; Ngu, A.H.; Metsis, V.; Canby, M.E. Ensemble Deep Learning on Wearables Using Small Datasets. ACM Trans. Comput. Healthcare 2021, 2, 5. [Google Scholar] [CrossRef]
- Haque, S.T.; Debnath, M.; Yasmin, A.; Mahmud, T.; Ngu, A.H.H. Experimental Study of Long Short-Term Memory and Transformer Models for Fall Detection on Smartwatches. Sensors 2024, 24, 6235. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Zafar, R.O.; Zafar, F. Real-time activity and fall detection using transformer-based deep learning models for elderly care applications. BMJ Health Care Informatics 2025, 32, e101439. [Google Scholar] [CrossRef]
- Vavoulas, G.; Chatzaki, C.; Malliotakis, T.; Pediaditis, M.; Tsiknakis, M. The MobiAct Dataset: Recognition of Activities of Daily Living using Smartphones. In Proceedings of the 2nd International Conference on Information and Communication Technologies for Ageing Well and e-Health, Rome, Italy, 21–22 April 2016. [Google Scholar]
- Yhdego, H.; Li, J.; Paolini, C.; Audette, M. Wearable Sensor Gait Analysis of Fall Detection using Attention Network. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Virtual, 9–12 December 2021; pp. 3137–3141. [Google Scholar] [CrossRef]
- Shin, J.; Miah, A.S.M.; Egawa, R.; Hirooka, K.; Hasan, M.A.M.; Tomioka, Y.; Hwang, Y.S. Fall recognition using a three stream spatio temporal GCN model with adaptive feature aggregation. Sci. Rep. 2025, 15, 10635. [Google Scholar] [CrossRef]
- Rehouma, H.; Boukadoum, M. Fall Detection by Deep Learning-Based Bimodal Movement and Pose Sensing with Late Fusion. Sensors 2025, 25, 6035. [Google Scholar] [CrossRef]
- Liu, K.C.; Lin, Y.D. Efficient fall detection using Kalman filter-enhanced triaxial accelerometer signals and machine learning. Biomed. Signal Process. Control 2026, 114, 109304. [Google Scholar] [CrossRef]
- Shi, Y.; Zhang, Y.; Li, Z.; Yuan, S.; Zhu, S. IMU/UWB Fusion Method Using a Complementary Filter and a Kalman Filter for Hybrid Upper Limb Motion Estimation. Sensors 2023, 23, 6700. [Google Scholar] [CrossRef]
- An, G.; Zhou, W.; Wu, Y.; Zheng, Z.; Liu, Y. Squeeze-and-Excitation on Spatial and Temporal Deep Feature Space for Action Recognition. In Proceedings of the 2018 14th IEEE International Conference on Signal Processing (ICSP), Beijing, China, 12–16 August 2018; pp. 648–653. [Google Scholar] [CrossRef]
- Essa, E.; Abdelmaksoud, I.R. Temporal-channel convolution with self-attention network for human activity recognition using wearable sensors. Knowl.-Based Syst. 2023, 278, 110867. [Google Scholar] [CrossRef]
- Wang, Z.; Kang, K. Adaptive temporal attention mechanism and hybrid deep CNN model for wearable sensor-based human activity recognition. Sci. Rep. 2025, 15, 33389. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
- Student. The Probable Error of a Mean. Biometrika 1908, 6, 1–25. [Google Scholar] [CrossRef]
- Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83. [Google Scholar] [CrossRef]
- Nadeau, C.; Bengio, Y. Inference for the Generalization Error. Mach. Learn. 2003, 52, 239–281. [Google Scholar] [CrossRef]
- Zhang, J.; Li, Z.; Liu, Y.; Li, J.; Qiu, H.; Li, M.; Hou, G.; Zhou, Z. An Effective Deep Learning Framework for Fall Detection: Model Development and Study Design. J. Med. Internet Res. 2024, 26, e56750. [Google Scholar] [CrossRef] [PubMed]
- Liu, C.P.; Li, J.H.; Chu, E.P.; Hsieh, C.Y.; Liu, K.C.; Chan, C.T.; Tsao, Y. Deep Learning-based Fall Detection Algorithm Using Ensemble Model of Coarse-fine CNN and GRU Networks. In Proceedings of the 2023 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Jeju, Republic of Korea, 14–16 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Wu, J.; Wang, J.; Zhan, A.; Wu, C. Fall Detection with CNN-Casual LSTM Network. Information 2021, 12, 403. [Google Scholar] [CrossRef]
- Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C. UP-fall detection dataset: A multimodal approach. Sensors 2019, 19, 1988. [Google Scholar] [CrossRef]
- Fula, V.; Moreno, P. Wrist-based fall detection: Towards generalization across datasets. Sensors 2024, 24, 1679. [Google Scholar] [CrossRef]
- Yasmin, A.; Mahmud, T.; Debnath, M.; Ngu, A.H. An empirical study on ai-powered edge computing architectures for real-time iot applications. In Proceedings of the 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), Osaka, Japan, 2–4 July 2024; pp. 1422–1431. [Google Scholar]
- SmartFall Group, Texas State University. Optimizing Real-Time Fall Detection: Integrating NATS.io for Low-Latency IoT Edge Applications. 2024. Available online: https://smartfall.github.io/assets/docs/SayaliNATS.pdf (accessed on 29 January 2026).
- Ngu, A.H.; Metsis, V.; Coyne, S.; Srinivas, P.; Salad, T.; Mahmud, U.; Chee, K.H. Personalized Watch-Based Fall Detection Using a Collaborative Edge-Cloud Framework. Int. J. Neural Syst. 2022, 32, 2250048. [Google Scholar] [CrossRef] [PubMed]





| Model | SE | TAP | F1 Score | Accuracy | Δ F1 |
|---|---|---|---|---|---|
| (no attention) | ✗ | ✗ | 88.52 ± 11.10 | 83.67 ± 14.21 | — |
| | ✓ | ✗ | 89.15 ± 5.77 | 84.23 ± 12.15 | +0.63 |
| | ✗ | ✓ | 88.34 ± 9.62 | 83.41 ± 9.34 | −0.18 |
| | ✓ | ✓ | 89.80 ± 8.99 | 84.96 ± 8.70 | +1.28 |
| Architecture | F1 Score | Accuracy | Δ F1 |
|---|---|---|---|
| Single-Stream | 89.80 ± 8.99 | 84.96 ± 8.70 | — |
| Dual-Stream (Proposed) | 91.10 ± 5.42 | 87.30 ± 7.16 | +1.30 |
| Metric | Δ (%) | 95% CI | Paired t-Test | Wilcoxon | Nadeau–Bengio |
|---|---|---|---|---|---|
| F1 Score | +1.30 | [+0.36, +6.50] | | | |
| Accuracy | +1.49 | [−0.18, +6.55] | | | |
| Architecture | F1 Score | Accuracy | Δ F1 |
|---|---|---|---|
| (Proposed) | 91.38 ± 5.42 | 88.44 ± 7.16 | — |
| Dual-Stream CNN-Mamba [27] | 88.34 ± 7.66 | 83.73 ± 9.89 | −3.04 |
| Dual-Stream LSTM [28] | 88.84 ± 3.84 | 85.28 ± 5.76 | −2.54 |
| Dual-Stream LSTM [28] + SE + TAP | 87.78 ± 3.29 | 84.12 ± 12.47 | −3.60 |
| DSCS [26] | 66.69 ± 0.19 | 79.61 ± 0.07 | −24.69 |
| Method | Params (K) | FLOPs (M) | Inference (ms/batch) | Preproc. Raw (ms) | Preproc. Kalman (ms) |
|---|---|---|---|---|---|
| (Proposed) | 42 | 2.8 | 3.0 | 0.01 | 14.26 |
| CNN-Mamba [27] | 54 | 7.0 | 3.1 | 0.01 | 13.97 |
| LSTM [28] + SE + TAP | 14 | 1.7 | 2.4 | 0.01 | 14.12 |
| DSCS [26] | 85 | 1.50 | 3.5 | 0.01 | 14.01 |
| Dataset | Proposed F1 Score | Proposed Accuracy | CNN-Mamba [27] F1 Score | CNN-Mamba [27] Accuracy | LSTM [28] + SE + TAP F1 Score | LSTM [28] + SE + TAP Accuracy |
|---|---|---|---|---|---|---|
| SmartFallMM | 91.38 ± 5.42 | 88.44 ± 7.16 | 88.34 ± 7.66 | 83.73 ± 9.89 | 88.84 ± 3.84 | 85.28 ± 5.76 |
| UP-FALL | 95.18 ± 3.03 | 96.53 ± 2.29 | 91.61 ± 8.21 | 94.47 ± 6.56 | 82.53 ± 18.02 | 87.92 ± 9.56 |
| WEDA-FALL | 95.41 ± 2.39 | 94.57 ± 2.95 | 91.09 ± 4.14 | 88.36 ± 5.94 | 90.22 ± 10.45 | 87.58 ± 10.72 |
| Method | Input | F1 Score | Accuracy | Δ F1 |
|---|---|---|---|---|
| Single-Stream | Raw IMU | 88.96 ± 7.66 | 84.55 ± 9.89 | −0.84 |
| | Kalman-fused | 89.80 ± 8.99 | 84.96 ± 8.70 | — |
| Dual-Stream | Raw IMU | 87.58 ± 7.27 | 83.12 ± 9.50 | −3.52 |
| | Kalman-fused | 91.10 ± 5.42 | 87.30 ± 7.16 | — |
| Input | Embed (acc:gyro) | Total Dim | F1 Score |
|---|---|---|---|
| Kalman-fused | 32:32 | 64 | 91.10 ± 4.77 |
| | 48:48 | 96 | 89.58 ± 9.03 |
| | 48:24 | 72 | 89.05 ± 7.34 |
| Configuration | Orientation Channels | F1 Score (%) |
|---|---|---|
| Full Kalman (with yaw) | roll, pitch, yaw | 91.65 ± 5.36 |
| Gyro magnitude (replacing yaw) | roll, pitch, ‖ω‖ | 90.76 ± 8.97 |
| No yaw (excluded) | roll, pitch | 90.15 ± 10.55 |
| Participant | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%) |
|---|---|---|---|---|
| Participant 1 | 80 | 80 | 80 | 89 |
| Participant 2 | 80 | 84 | 82 | 90 |
| Participant 3 | 87 | 84 | 85 | 92 |
| Participant 4 | 84 | 86 | 85 | 90 |
| Participant 5 | 82 | 88 | 84 | 88 |
| Average | 83 | 84 | 83 | 90 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Pradhan, A.; Alamgeer, S.; Suvvari, R.; Haque, S.T.; Ngu, A.H.H. Dual-Stream Transformer with Kalman-Based Sensor Fusion for Wearable Fall Detection. Big Data Cogn. Comput. 2026, 10, 90. https://doi.org/10.3390/bdcc10030090

