Article

Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning

by Negasa Berhanu Fite 1,2,*, Getachew Mamo Wegari 2 and Heidi Steendam 1

1 TELIN/IMEC, Ghent University, 9000 Gent, Belgium
2 Faculty of Computing and Informatics, Jimma Institute of Technology, Jimma University, Jimma 378, Ethiopia
* Author to whom correspondence should be addressed.
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211
Submission received: 5 January 2026 / Revised: 12 February 2026 / Accepted: 15 February 2026 / Published: 23 February 2026

Abstract

Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. 
Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems.
Keywords: visible light positioning (VLP); light-emitting diode (LED); recurrent neural network (RNN); denoising autoencoder (DAE); Kalman filtering
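The recursive Kalman refinement stage described in the abstract (smoothing the network's position predictions through a state-space model) can be sketched as below. This is a minimal illustration only: the constant-velocity motion model, time step, and noise covariances are assumptions for the sketch, not parameters reported in the paper, and the `kalman_refine` helper is hypothetical.

```python
import numpy as np

def kalman_refine(measurements, dt=0.1, q=1e-3, r=1e-2):
    """Refine a sequence of 2-D position estimates (e.g. GRU outputs)
    with a constant-velocity Kalman filter.

    measurements: (T, 2) array of noisy [x, y] predictions.
    Returns a (T, 2) array of filtered positions.
    """
    # State: [x, y, vx, vy]; constant-velocity transition model (assumed).
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0          # observe position only
    Q = q * np.eye(4)                # process noise (illustrative)
    R = r * np.eye(2)                # measurement noise (illustrative)

    x = np.array([measurements[0, 0], measurements[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        # Predict: propagate state and covariance through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: correct the prediction with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

On a smoothly moving receiver this kind of recursive filter suppresses frame-to-frame jitter in the raw network predictions, which is the role the abstract attributes to the KF/EKF stage; an EKF would replace `F` and `H` with Jacobians of a nonlinear model.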

Share and Cite

MDPI and ACS Style

Fite, N.B.; Wegari, G.M.; Steendam, H. Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning. Photonics 2026, 13, 211. https://doi.org/10.3390/photonics13020211

AMA Style

Fite NB, Wegari GM, Steendam H. Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning. Photonics. 2026; 13(2):211. https://doi.org/10.3390/photonics13020211

Chicago/Turabian Style

Fite, Negasa Berhanu, Getachew Mamo Wegari, and Heidi Steendam. 2026. "Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning." Photonics 13, no. 2: 211. https://doi.org/10.3390/photonics13020211

APA Style

Fite, N. B., Wegari, G. M., & Steendam, H. (2026). Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning. Photonics, 13(2), 211. https://doi.org/10.3390/photonics13020211

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.

