Article

LDFE-SLAM: Light-Aware Deep Front-End for Robust Visual SLAM Under Challenging Illumination

Peng Cheng Laboratory, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
Machines 2026, 14(1), 44; https://doi.org/10.3390/machines14010044
Submission received: 1 December 2025 / Revised: 23 December 2025 / Accepted: 24 December 2025 / Published: 29 December 2025

Abstract

Visual SLAM systems face significant performance degradation under dynamic lighting conditions, where traditional feature extraction methods suffer from reduced keypoint detection and unstable matching. This paper presents LDFE-SLAM, a novel visual SLAM framework that addresses illumination challenges through a Light-Aware Deep Front-End (LDFE) architecture. Our key insight is that low-light degradation in SLAM is fundamentally a geometric feature distribution problem rather than merely a visibility issue. The proposed system integrates three synergistic components: (1) an illumination-adaptive enhancement module based on EnlightenGAN with geometric consistency loss that restores gradient structures for downstream feature extraction, (2) SuperPoint-based deep feature detection that provides illumination-invariant keypoints, and (3) LightGlue attention-based matching that filters enhancement-induced noise while maintaining geometric consistency. Through systematic evaluation of five method configurations (M1–M5), we demonstrate that enhancement, deep features, and learned matching must be co-designed rather than independently optimized. Experiments on EuRoC and TUM sequences under synthetic illumination degradation show that LDFE-SLAM maintains stable localization accuracy (∼1.2 m ATE) across all brightness levels, while baseline methods degrade significantly (up to 3.7 m). Our method operates normally down to severe lighting conditions (30% ambient brightness and 20–50 lux—equivalent to underground parking or night-time streetlight illumination), representing a 4–6× lower illumination threshold compared to ORB-SLAM3 (200–300 lux minimum). Under severe (25% brightness) conditions, our method achieves a 62% tracking success rate, compared to 12% for ORB-SLAM3, with keypoint detection remaining above the critical 100-point threshold, even under extreme degradation.
Keywords: visual SLAM; low-light environment; LightGlue; SuperPoint; illumination adaptive; point-line fusion; deep feature matching; robust localization
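
The abstract describes a three-stage front-end: illumination-adaptive enhancement, SuperPoint keypoint extraction, and LightGlue attention-based matching. As an illustration only, the sketch below wires these stages together for a single image pair using the public cvg/LightGlue package; the `enhance_low_light` step is a hypothetical placeholder standing in for the paper's EnlightenGAN-based module, and nothing here reflects the authors' actual implementation.

```python
# Illustrative sketch of a light-aware deep front-end for one image pair.
# Assumes the open-source cvg/LightGlue package (pip install lightglue).
# enhance_low_light() is a hypothetical stand-in for the paper's
# EnlightenGAN-based enhancement module, NOT the authors' code.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"

extractor = SuperPoint(max_num_keypoints=1024).eval().to(device)  # deep keypoint detector
matcher = LightGlue(features="superpoint").eval().to(device)      # learned matcher


def enhance_low_light(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder for illumination-adaptive enhancement.
    A real system would run a learned enhancer (e.g., EnlightenGAN) here;
    this stub simply gamma-corrects the normalized image to brighten it."""
    return image.clamp(min=1e-6) ** 0.5


# Load two consecutive frames (paths are placeholders).
img0 = enhance_low_light(load_image("frame_000.png").to(device))
img1 = enhance_low_light(load_image("frame_001.png").to(device))

with torch.no_grad():
    feats0 = extractor.extract(img0)
    feats1 = extractor.extract(img1)
    matches01 = matcher({"image0": feats0, "image1": feats1})

# Remove the batch dimension and gather matched keypoint coordinates.
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]             # (K, 2) indices into each keypoint set
pts0 = feats0["keypoints"][matches[:, 0]]  # matched 2D points in frame 0
pts1 = feats1["keypoints"][matches[:, 1]]  # matched 2D points in frame 1

# pts0/pts1 would feed pose estimation (e.g., essential-matrix RANSAC)
# in a full SLAM front-end.
```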

Share and Cite

MDPI and ACS Style

Liu, C.; Wang, Y.; Luo, W.; Peng, Y. LDFE-SLAM: Light-Aware Deep Front-End for Robust Visual SLAM Under Challenging Illumination. Machines 2026, 14, 44. https://doi.org/10.3390/machines14010044


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
