Open Access Article
LDFE-SLAM: Light-Aware Deep Front-End for Robust Visual SLAM Under Challenging Illumination
by Cong Liu, You Wang, Weichao Luo * and Yanhong Peng *
Peng Cheng Laboratory, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
Machines 2026, 14(1), 44; https://doi.org/10.3390/machines14010044
Submission received: 1 December 2025 / Revised: 23 December 2025 / Accepted: 24 December 2025 / Published: 29 December 2025
Abstract
Visual SLAM systems face significant performance degradation under dynamic lighting conditions, where traditional feature extraction methods suffer from reduced keypoint detection and unstable matching. This paper presents LDFE-SLAM, a novel visual SLAM framework that addresses illumination challenges through a Light-Aware Deep Front-End (LDFE) architecture. Our key insight is that low-light degradation in SLAM is fundamentally a geometric feature-distribution problem rather than merely a visibility issue. The proposed system integrates three synergistic components: (1) an illumination-adaptive enhancement module based on EnlightenGAN with a geometric consistency loss that restores gradient structures for downstream feature extraction, (2) SuperPoint-based deep feature detection that provides illumination-invariant keypoints, and (3) LightGlue attention-based matching that filters enhancement-induced noise while maintaining geometric consistency. Through a systematic evaluation of five method configurations (M1–M5), we demonstrate that enhancement, deep features, and learned matching must be co-designed rather than independently optimized. Experiments on EuRoC and TUM sequences under synthetic illumination degradation show that LDFE-SLAM maintains stable localization accuracy (∼1.2 m ATE) across all brightness levels, while baseline methods degrade significantly (up to 3.7 m). Our method remains operational down to severe lighting conditions (30% ambient brightness and 20–50 lux, equivalent to underground parking or night-time streetlight illumination), a 4–6× lower illumination threshold than ORB-SLAM3's 200–300 lux minimum. Under severe (25% brightness) conditions, our method achieves a 62% tracking success rate, compared to 12% for ORB-SLAM3, with keypoint detection remaining above the critical 100-point threshold even under extreme degradation.
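To make the pipeline described in the abstract concrete, the sketch below shows how the three front-end stages (illumination-adaptive enhancement, deep keypoint detection, and learned matching) could be chained for each frame pair. It is a minimal illustration based only on the abstract: the class name and the injected enhancer, detector, and matcher callables (an EnlightenGAN-style generator, a SuperPoint-style network, and a LightGlue-style matcher) are hypothetical placeholders, not the authors' released implementation.

```python
# Hypothetical sketch of an LDFE-style front end: enhance -> detect -> match.
# All module names are placeholders; any pretrained networks would be injected.
import torch


class LDFEFrontEnd:
    def __init__(self, enhancer, detector, matcher, min_keypoints=100):
        self.enhancer = enhancer            # e.g. an EnlightenGAN-style generator
        self.detector = detector            # e.g. a SuperPoint keypoint/descriptor net
        self.matcher = matcher              # e.g. a LightGlue attention matcher
        self.min_keypoints = min_keypoints  # critical threshold cited in the abstract

    @torch.no_grad()
    def process_pair(self, img_prev, img_curr):
        # 1) Illumination-adaptive enhancement: restore gradient structure
        #    in both frames before any feature extraction.
        img_prev = self.enhancer(img_prev)
        img_curr = self.enhancer(img_curr)

        # 2) Deep feature detection: illumination-invariant keypoints and
        #    descriptors on the enhanced frames.
        kpts0, desc0 = self.detector(img_prev)
        kpts1, desc1 = self.detector(img_curr)
        if kpts0.shape[0] < self.min_keypoints or kpts1.shape[0] < self.min_keypoints:
            return None  # signal the caller that tracking is at risk

        # 3) Learned matching: filter enhancement-induced noise while keeping
        #    geometrically consistent correspondences for the SLAM back end.
        return self.matcher(kpts0, desc0, kpts1, desc1)
```

The 100-keypoint check mirrors the critical threshold reported in the abstract; in a full system, falling below it would trigger tracking-loss handling or relocalization rather than silently passing sparse matches to the back end.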