Search Results (363)

Search Parameters:
Keywords = night-time visibility

23 pages, 9975 KB  
Article
Leveraging LiDAR Data and Machine Learning to Predict Pavement Marking Retroreflectivity
by Hakam Bataineh, Dmitry Manasreh, Munir Nazzal and Ala Abbas
Vehicles 2026, 8(1), 23; https://doi.org/10.3390/vehicles8010023 - 20 Jan 2026
Viewed by 192
Abstract
This study focused on developing and validating machine learning models to predict pavement marking retroreflectivity using Light Detection and Ranging (LiDAR) intensity data. The retroreflectivity data were collected using a Mobile Retroreflectometer Unit (MRU) due to its increasing acceptance among states as a compliant measurement device. A comprehensive dataset was assembled spanning more than 1000 miles of roadways, capturing diverse marking materials, colors, installation methods, pavement types, and vehicle speeds. The final dataset used for model development focused on dry-condition measurements and roadway segments most relevant to state transportation agencies. A detailed synchronization process was implemented to ensure the accurate pairing of retroreflectivity and LiDAR intensity values. Using these data, several machine learning techniques were evaluated, and an ensemble of gradient boosting-based models emerged as the top performer, predicting pavement retroreflectivity with an R² of 0.94 on previously unseen data. The repeatability of the predicted retroreflectivity was tested and showed consistency similar to that of the MRU. The model's accuracy was confirmed against independent field segments, demonstrating the potential for LiDAR to serve as a practical, low-cost alternative to MRU measurements in routine roadway inspection and maintenance. The approach presented in this study enhances roadway safety by enabling more frequent, network-level assessments of pavement marking performance at lower cost, allowing agencies to detect and correct visibility problems sooner and helping to prevent nighttime and adverse-weather crashes.
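
The modeling step described above lends itself to a compact illustration. Below is a minimal sketch, in Python with scikit-learn, of training a gradient-boosting regressor on LiDAR-intensity features and scoring it with R² on held-out data; the feature set, synthetic data, and hyperparameters are illustrative assumptions, not the paper's actual pipeline or ensemble.

```python
# Minimal sketch: predicting retroreflectivity from LiDAR intensity features.
# Feature names and data are hypothetical; the paper's exact features,
# preprocessing, and ensemble composition are not reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-segment features: mean/std of LiDAR return intensity,
# vehicle speed, and a marking-color flag.
X = np.column_stack([
    rng.uniform(0, 255, n),      # mean intensity
    rng.uniform(0, 50, n),       # intensity std
    rng.uniform(40, 110, n),     # speed (km/h)
    rng.integers(0, 2, n),       # 0 = white, 1 = yellow
])
# Synthetic target loosely tied to intensity, for illustration only.
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] - 30 * X[:, 3] + rng.normal(0, 15, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```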

18 pages, 3017 KB  
Article
Study on Preparation of Long-Afterglow Luminescent Road-Marking Coatings and Simulation of Road Layout
by Xiaowei Feng, Bo Li, Yan Zhang and Yanrong Xu
Materials 2026, 19(2), 215; https://doi.org/10.3390/ma19020215 - 6 Jan 2026
Viewed by 259
Abstract
To improve the night-time visibility of pavement markings, a long-afterglow road-marking coating was developed using strontium aluminate as the phosphorescent component. The influences of particle size (100–400 mesh), dosage (15–35 wt%), filler type, and coating thickness (200–600 μm) on optical behavior were systematically evaluated. The optimal formulation (200-mesh strontium aluminate at 30 wt%, titanium dioxide combined with ultrafine glass powder, and a thickness of 500 μm) achieved an initial brightness of 3.08 cd/m² and maintained visible afterglow for more than 9 h. Durability tests confirmed satisfactory resistance to water, alkali, and abrasion, meeting the requirements of JTT 280-2022. Twinmotion simulations further demonstrated that when the coating brightness remains above 0.1 cd/m², it provides effective visual guidance on unlit road sections, thereby enhancing night-time driving safety. This study verifies the feasibility of using long-afterglow coatings to improve road visibility and reduce night-time accident risks.
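
A worked example of the afterglow-duration claim: assuming a hyperbolic decay law often used for persistent phosphors (an assumption, not the paper's fitted model), one can fit hypothetical luminance samples and solve for the time at which brightness crosses the 0.1 cd/m² guidance threshold.

```python
# Minimal sketch: estimating how long a long-afterglow coating stays above the
# 0.1 cd/m^2 threshold cited in the abstract. The hyperbolic decay model and
# the sample data below are assumptions for illustration, not the paper's
# measured curve.
import numpy as np
from scipy.optimize import curve_fit, brentq

def afterglow(t, i0, tau, n):
    """Hyperbolic decay commonly used for persistent phosphors."""
    return i0 * (1 + t / tau) ** (-n)

# Hypothetical luminance samples: (minutes after excitation, cd/m^2).
t_obs = np.array([1, 10, 30, 60, 120, 300, 600])
l_obs = np.array([3.08, 1.2, 0.7, 0.45, 0.28, 0.16, 0.11])

params, _ = curve_fit(afterglow, t_obs, l_obs, p0=(3.0, 5.0, 0.6))
# Solve afterglow(t) = 0.1 for t within a bracketing interval.
t_threshold = brentq(lambda t: afterglow(t, *params) - 0.1, 1, 50000)
print(f"Luminance falls below 0.1 cd/m^2 after ~{t_threshold / 60:.1f} h")
```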

16 pages, 4121 KB  
Article
Uncovering Fishing Area Patterns Using Convolutional Autoencoder and Gaussian Mixture Model on VIIRS Nighttime Imagery
by Jeong Chang Seong, Jina Jang, Jiwon Yang, Seung Hee Choi and Chul Sue Hwang
ISPRS Int. J. Geo-Inf. 2026, 15(1), 25; https://doi.org/10.3390/ijgi15010025 - 5 Jan 2026
Viewed by 332
Abstract
The availability of nighttime satellite imagery provides unique opportunities for monitoring fishing activity in data-sparse ocean regions. This study leverages Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band monthly composite imagery to identify and classify recurring spatial patterns of fishing activity in the Korean Exclusive Economic Zone from 2014 to 2024. While prior research has primarily produced static hotspot maps, our approach advances geospatial fishing activity identification by employing machine learning techniques to group similar spatiotemporal configurations, thereby capturing recurring fishing patterns and their temporal variability. A convolutional autoencoder and a Gaussian Mixture Model (GMM) were used to cluster the VIIRS imagery. The results revealed seven major nighttime light hotspots and identified four cluster patterns: Cluster 0 dominated in December, January, and February; Cluster 1 in March, April, and May; Cluster 2 in July, August, and September; and Cluster 3 in October and November. Interannual variability was also identified. In particular, Clusters 0 and 3 expanded into later months in recent years (2022–2024), whereas Cluster 1 contracted. These findings align with environmental changes in the region, including rising ocean temperatures and declining primary productivity. By integrating autoencoders with probabilistic clustering, this research demonstrates a framework for uncovering recurrent fishing activity patterns and highlights the utility of satellite imagery with GeoAI in advancing marine fisheries monitoring.
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)
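
A minimal sketch of the two-stage pipeline (a convolutional autoencoder, then GMM clustering of the latent codes) is given below in PyTorch and scikit-learn. Patch size, network width, training schedule, and the synthetic data are assumptions; only the number of clusters (four) comes from the abstract.

```python
# Minimal sketch: compress monthly VIIRS patches with a convolutional
# autoencoder, then cluster the latent codes with a Gaussian Mixture Model.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class ConvAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Hypothetical stand-in for normalized 64x64 monthly radiance patches.
patches = torch.rand(132, 1, 64, 64)  # e.g., 11 years x 12 months

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(20):                      # short demo training loop
    recon, _ = model(patches)
    loss = loss_fn(recon, patches)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, latents = model(patches)
gmm = GaussianMixture(n_components=4, random_state=0).fit(latents.numpy())
labels = gmm.predict(latents.numpy())        # one cluster label per month
print(labels[:12])
```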

32 pages, 10101 KB  
Article
BNE-DETR: Nighttime Pedestrian Detection with Visible Light Sensors via Feature Enhancement and Multi-Scale Fusion
by Fu Li, Yan Lu, Ming Zhao and Wangyu Wu
Sensors 2026, 26(1), 260; https://doi.org/10.3390/s26010260 - 31 Dec 2025
Viewed by 623
Abstract
Pedestrian detection faces significant performance degradation in nighttime visible-light environments due to degraded target features, background noise interference, and the coexistence of multi-scale targets. To address this issue, this paper proposes a BNE-DETR model based on an improved RT-DETR. First, we incorporate the lightweight backbone network CSPDarknet and design a Single-head Self-attention with EPGO and Convolutional Gated Linear Unit (SECG) module to replace the bottleneck layer in the original C2f component. By integrating single-head self-attention, the Efficient Prompt Guide Operator (EPGO) dynamic K-selection mechanism, and convolutional gated linear units, it effectively enhances the model's feature representation capability under low-light conditions. Second, the AIFI-SEFN module, which combines Attention-driven Intra-scale Feature Interaction (AIFI) with a Spatially Enhanced Feedforward Network (SEFN), is constructed to strengthen the extraction of weak details and the fusion of contextual information. Finally, the Mixed Aggregation Network with Star Blocks (MANStar) module utilizes large-kernel convolutions and multi-branch star structures to enhance the representation and fusion of multi-scale pedestrian features. Experiments on the LLVIP dataset demonstrate that our model achieves 1.9%, 2.5%, and 1.9% improvements in Precision, Recall, and mAP50, respectively, compared to RT-DETR-R18, while maintaining low computational complexity (48.7 GFLOPs) and reducing parameters by 20.2%. Cross-dataset experiments further validate the method's robust performance and generalization capabilities in nighttime pedestrian detection tasks.
(This article belongs to the Section Sensing and Imaging)
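
The SECG module pairs single-head self-attention with a convolutional gated linear unit. The sketch below shows only the conv-GLU gating idea in PyTorch; the EPGO dynamic K-selection and the attention wiring of the actual SECG design are not reproduced, and the channel sizes are hypothetical.

```python
# Minimal sketch of a convolutional gated linear unit of the kind the SECG
# module combines with single-head self-attention.
import torch
import torch.nn as nn

class ConvGLU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Project to 2x channels, then let one half gate the other.
        self.proj = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, groups=channels)  # depthwise
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        value, gate = self.proj(x).chunk(2, dim=1)
        # Spatial mixing on the value branch, sigmoid gate on the other.
        return self.out(self.dwconv(value) * torch.sigmoid(gate))

x = torch.randn(1, 64, 80, 80)     # hypothetical feature map
print(ConvGLU(64)(x).shape)        # torch.Size([1, 64, 80, 80])
```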

24 pages, 14385 KB  
Article
LDFE-SLAM: Light-Aware Deep Front-End for Robust Visual SLAM Under Challenging Illumination
by Cong Liu, You Wang, Weichao Luo and Yanhong Peng
Machines 2026, 14(1), 44; https://doi.org/10.3390/machines14010044 - 29 Dec 2025
Viewed by 312
Abstract
Visual SLAM systems face significant performance degradation under dynamic lighting conditions, where traditional feature extraction methods suffer from reduced keypoint detection and unstable matching. This paper presents LDFE-SLAM, a novel visual SLAM framework that addresses illumination challenges through a Light-Aware Deep Front-End (LDFE) architecture. Our key insight is that low-light degradation in SLAM is fundamentally a geometric feature distribution problem rather than merely a visibility issue. The proposed system integrates three synergistic components: (1) an illumination-adaptive enhancement module based on EnlightenGAN with a geometric consistency loss that restores gradient structures for downstream feature extraction, (2) SuperPoint-based deep feature detection that provides illumination-invariant keypoints, and (3) LightGlue attention-based matching that filters enhancement-induced noise while maintaining geometric consistency. Through systematic evaluation of five method configurations (M1–M5), we demonstrate that enhancement, deep features, and learned matching must be co-designed rather than independently optimized. Experiments on EuRoC and TUM sequences under synthetic illumination degradation show that LDFE-SLAM maintains stable localization accuracy (∼1.2 m ATE) across all brightness levels, while baseline methods degrade significantly (up to 3.7 m). Our method operates normally down to severe lighting conditions (30% ambient brightness and 20–50 lux, equivalent to underground parking or night-time streetlight illumination), representing a 4–6× lower illumination threshold compared to ORB-SLAM3 (200–300 lux minimum). Under severe (25% brightness) conditions, our method achieves a 62% tracking success rate, compared to 12% for ORB-SLAM3, with keypoint detection remaining above the critical 100-point threshold even under extreme degradation.
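
One interpretation of the geometric consistency loss in component (1) is a penalty on the difference between image gradients before and after enhancement, so that edge structure survives for the feature extractor. A minimal PyTorch sketch under that assumption (the paper's exact loss may differ):

```python
# Minimal sketch of a gradient-based geometric consistency loss: penalize
# differences between the spatial gradients of the enhanced frame and the
# original frame so that edge structure survives enhancement.
import torch
import torch.nn.functional as F

def geometric_consistency_loss(enhanced, original):
    """L1 distance between spatial gradients of two (B, 1, H, W) images."""
    def grads(img):
        gx = img[..., :, 1:] - img[..., :, :-1]   # horizontal differences
        gy = img[..., 1:, :] - img[..., :-1, :]   # vertical differences
        return gx, gy
    ex, ey = grads(enhanced)
    ox, oy = grads(original)
    return F.l1_loss(ex, ox) + F.l1_loss(ey, oy)

enhanced = torch.rand(2, 1, 120, 160, requires_grad=True)  # dummy frames
original = torch.rand(2, 1, 120, 160)
loss = geometric_consistency_loss(enhanced, original)
loss.backward()
print(float(loss))
```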

18 pages, 1457 KB  
Article
Research on Multi-Modal Fusion Detection Method for Low-Slow-Small UAVs Based on Deep Learning
by Zhengtang Liu, Yongjie Zou, Zhenzhen Hu, Han Xue, Meng Li and Bin Rao
Drones 2025, 9(12), 852; https://doi.org/10.3390/drones9120852 - 11 Dec 2025
Viewed by 676
Abstract
Addressing the technical challenges in detecting Low-Slow-Small Unmanned Aerial Vehicle (LSS-UAV) cluster targets, such as weak signals and strong coupling with complex environmental interference, this paper proposes a visible-infrared multi-modal fusion detection method based on deep learning. The method uses deep learning techniques to separately identify morphological features in visible light images and thermal radiation features in infrared images. A hierarchical multi-modal fusion framework integrating feature-level and decision-level fusion is designed, incorporating an Environment-Aware Dynamic Weighting (EADW) mechanism and Dempster-Shafer (D-S) evidence theory. This framework leverages the complementary advantages of feature-level and decision-level fusion, enhancing detection and recognition capability, as well as system robustness, for LSS-UAV cluster targets in complex environments. Experimental results demonstrate that the proposed method achieves a detection accuracy of 93.5% for LSS-UAV clusters in complex urban environments, an average improvement of 18.7% over single-modal methods, while the false alarm rate is reduced to 4.2%. Furthermore, the method demonstrates strong environmental adaptability, maintaining high performance under challenging conditions such as nighttime and haze. This method provides an efficient and reliable technical solution for LSS-UAV cluster target detection.
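
The decision-level fusion step rests on Dempster's rule of combination. The sketch below implements the rule for a two-hypothesis frame {uav, clutter}; the mass values for the visible and infrared branches are hypothetical, and the paper's EADW weighting is not included.

```python
# Minimal sketch of Dempster's rule of combination over a two-hypothesis
# frame {uav, clutter} plus the ignorance mass on the full frame.
def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    frame = frozenset({"uav", "clutter"})
    combined = {frozenset({"uav"}): 0.0, frozenset({"clutter"}): 0.0, frame: 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] += ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical evidence: the visible branch is fairly sure, infrared is surer.
m_visible = {frozenset({"uav"}): 0.6, frozenset({"clutter"}): 0.1,
             frozenset({"uav", "clutter"}): 0.3}
m_infrared = {frozenset({"uav"}): 0.8, frozenset({"clutter"}): 0.1,
              frozenset({"uav", "clutter"}): 0.1}
print(dempster_combine(m_visible, m_infrared))
```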

28 pages, 5608 KB  
Article
GIS-Based Framework for Integrating Urban Heritage and Lighting Planning
by Orhun Soydan and Mertkan Fahrettin Tekinalp
Buildings 2025, 15(24), 4435; https://doi.org/10.3390/buildings15244435 - 8 Dec 2025
Viewed by 518
Abstract
This study develops a GIS-based, heritage-sensitive urban lighting framework for Niğde, Türkiye, integrating Sentinel-2 MSI Level-2A imagery (10 m), ASTER DEM, and municipal cadastral data. Five spatial criteria—land cover, parks, protected heritage assets, population distribution, and government institutions—were classified through supervised mapping, visibility analysis, and architectural integrity assessment. All layers were standardized and combined using a weighted-overlay approach, supported by sensitivity testing across three weighting scenarios to ensure model robustness. Priority zones are concentrated in the historic core, where cultural landmarks, central parks, and high-density residential areas overlap. Peripheral agricultural and rural zones exhibited minimal lighting needs. Field verification and expert consultation demonstrated 82% correspondence between modeled and observed priority and visibility patterns, while a structured nighttime audit and ecological checklist provided additional empirical grounding for lighting sufficiency, glare risks, and biodiversity considerations. Results emphasize context-specific lighting that strengthens cultural identity, improves pedestrian comfort and nighttime legibility, and reduces unnecessary energy use and light pollution. This approach offers a replicable workflow aligned with CIE 150:2017 and IES RP-8-18 guidance. Future work may incorporate dynamic population mobility, AHP-based weighting, and adaptive smart-lighting systems to scale the methodology across similar medium-sized heritage cities seeking balanced aesthetic, cultural, and ecological nighttime environments.
(This article belongs to the Special Issue Natural-Based Solution for Sustainable Buildings)
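
The weighted-overlay step can be summarized in a few lines of NumPy: standardize each criterion raster to [0, 1], then sum with scenario weights. The layer contents and weight values below are illustrative assumptions, not the study's calibrated scenarios.

```python
# Minimal sketch of a weighted overlay of standardized criterion rasters.
import numpy as np

def minmax(layer):
    """Rescale a raster layer to the [0, 1] range."""
    return (layer - layer.min()) / (layer.max() - layer.min())

rng = np.random.default_rng(0)
shape = (200, 200)                     # hypothetical raster grid
layers = {
    "land_cover":   rng.random(shape),
    "parks":        rng.random(shape),
    "heritage":     rng.random(shape),
    "population":   rng.random(shape),
    "institutions": rng.random(shape),
}
weights = {"land_cover": 0.15, "parks": 0.15, "heritage": 0.30,
           "population": 0.25, "institutions": 0.15}   # must sum to 1

priority = sum(w * minmax(layers[name]) for name, w in weights.items())
print("priority score range:", priority.min(), priority.max())
```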

18 pages, 653 KB  
Article
Condition Assessment of Road Markings in Denmark, Norway and Sweden—A Comparison Between Retroreflectivity, Visibility and Preview Time
by Anna Vadeby and Carina Fors
Appl. Sci. 2025, 15(23), 12788; https://doi.org/10.3390/app152312788 - 3 Dec 2025
Viewed by 449
Abstract
Longitudinal road markings provide visual guidance for drivers and are essential for safe driving, particularly at night. The aim of this study is to investigate possible differences in road marking performance, with regard to retroreflectivity, visibility and preview time, between Denmark, Norway and Sweden. The results are compared to current recommendations and regulations regarding road marking performance in the three countries. This study is based on condition assessments of 30,000 km of edge road markings from 2017 to 2021. The results showed that fulfillment of the performance requirement for retroreflectivity of white road markings (150 mcd/m²/lx) is 38% in Denmark, 65% in Norway and 66% in Sweden. No large differences in dry road marking performance were found between the three countries. The performance regarding all variables was rather stable during the five years investigated. The mean preview time was 4.7 s in Sweden, 4.9 s in Norway and 5.6 s in Denmark. The observed preview times are higher than the recommended minimum preview times (ranging from 1.8 to 3.65 s) found in the literature. The results do not indicate any need for revision of the current regulations regarding road marking retroreflectivity and geometry in Denmark, Norway and Sweden.
(This article belongs to the Special Issue Road Markings: Technologies, Materials, and Traffic Safety)
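
Preview time here is simply the distance at which a marking is visible divided by vehicle speed. A small worked example, assuming a travel speed of 80 km/h (the study's actual measurement speeds are not given in the abstract):

```python
# Minimal sketch of the preview-time relation: time = distance / speed.
def preview_time(visibility_m, speed_kmh):
    """Seconds of preview given a marking visible `visibility_m` ahead."""
    return visibility_m / (speed_kmh / 3.6)

# The reported mean preview times correspond roughly to these visibility
# distances at an assumed 80 km/h travel speed:
for country, t in [("Sweden", 4.7), ("Norway", 4.9), ("Denmark", 5.6)]:
    print(f"{country}: {t} s ~ {t * 80 / 3.6:.0f} m ahead at 80 km/h")

# And inversely: a marking visible 104 m ahead at 80 km/h gives
print(f"{preview_time(104, 80):.1f} s of preview")
```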

18 pages, 13145 KB  
Article
CDFFusion: A Color-Deviation-Free Fusion Network for Nighttime Infrared and Visible Images
by Hao Chen, Tinghua Zhang, Shijie Zhai, Xiaoyun Tong and Rui Zhu
Sensors 2025, 25(23), 7337; https://doi.org/10.3390/s25237337 - 2 Dec 2025
Viewed by 358
Abstract
The purpose of infrared and visible image fusion is to integrate their complementary information into a single image, thereby increasing the amount of information expressed. However, previous methods often struggle to extract information hidden in darkness, and existing methods that integrate brightness enhancement and image fusion can cause overexposure, image blocking effects, and color deviation. Therefore, we propose CDFFusion, a visible and infrared image fusion method for low-light scenarios. The core idea is to use Retinex theory to decompose the illumination and reflection components of visible light images at the feature level before fusing and decoding the reflection features with infrared features to obtain the Y component of the fused image. Next, the proposed color mapping formula is used to adjust the Cb and Cr components of the original visible light image; finally, these are concatenated with the Y component of the fused image to obtain the final fused image. The SF, CC, Nabf, Qabf, SCD, MS-SSIM, and ΔE indicators of this method reached 17.6531, 0.6619, 0.1075, 0.4279, 1.2760, 0.8335, and 0.0706, respectively, on the LLVIP dataset. The experimental results show that this method can effectively alleviate visual overexposure and image blocking effects, and it has the smallest color deviation.
(This article belongs to the Section Sensing and Imaging)
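
The channel plumbing behind this color-deviation-free design, fusing only the luma (Y) channel and reattaching the visible image's chroma, can be sketched as below. The max-luma fusion stands in for the learned Retinex-based network, and the simple chroma reuse stands in for the proposed color mapping formula; both substitutions are assumptions for illustration.

```python
# Minimal sketch: fuse only the luma channel with the infrared image, then
# reattach chroma from the visible image to avoid color deviation.
import numpy as np
import cv2

visible = np.random.randint(0, 256, (256, 256, 3), np.uint8)  # dummy RGB frame
infrared = np.random.randint(0, 256, (256, 256), np.uint8)    # dummy IR frame

ycrcb = cv2.cvtColor(visible, cv2.COLOR_RGB2YCrCb)
y, cr, cb = cv2.split(ycrcb)

fused_y = np.maximum(y, infrared)      # naive stand-in for the learned fusion
fused = cv2.cvtColor(cv2.merge([fused_y, cr, cb]), cv2.COLOR_YCrCb2RGB)
print(fused.shape, fused.dtype)
```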

24 pages, 2374 KB  
Article
NightTrack: Joint Night-Time Image Enhancement and Object Tracking for UAVs
by Xiaomin Huang, Yunpeng Bai, Jiaman Ma, Ying Li, Changjing Shang and Qiang Shen
Drones 2025, 9(12), 824; https://doi.org/10.3390/drones9120824 - 27 Nov 2025
Viewed by 654
Abstract
UAV-based visual object tracking has recently become a prominent research focus in computer vision. However, most existing trackers are primarily benchmarked under well-illuminated conditions, largely overlooking the challenges that arise in night-time scenarios. Although attempts exist to restore image brightness via low-light image enhancement before feeding frames to a tracker, such two-stage pipelines often struggle to strike an effective balance between the competing objectives of enhancement and tracking. To address this limitation, this work proposes NightTrack, a unified framework that jointly optimizes low-light image enhancement and UAV object tracking. While boosting image visibility, NightTrack not only explicitly preserves but also reinforces the discriminative features required for robust tracking. To improve the discriminability of low-light representations, Pyramid Attention Modules (PAMs) are introduced to enhance multi-scale contextual cues. Moreover, by jointly estimating illumination and noise curves, NightTrack mitigates the potential adverse effects of low-light environments, leading to significant gains in precision and robustness. Experimental results on multiple night-time tracking benchmarks demonstrate that NightTrack outperforms state-of-the-art methods in night-time scenes, showing strong promise for further development.
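
Curve-based illumination estimation of the kind the abstract alludes to is often built on the iterative quadratic curve LE(x) = x + α·x·(1−x) popularized by Zero-DCE. Whether NightTrack uses this exact form is an assumption; the sketch below also fixes α as a scalar rather than predicting a per-pixel map.

```python
# Minimal sketch of curve-based low-light enhancement using the iterative
# quadratic curve LE(x) = x + alpha * x * (1 - x). With alpha in [0, 1] and
# inputs in [0, 1], the output stays within [0, 1].
import numpy as np

def apply_illumination_curve(img, alpha=0.6, iterations=4):
    """img: float array in [0, 1]; returns a brighter image, still in [0, 1]."""
    out = img.astype(np.float64)
    for _ in range(iterations):
        out = out + alpha * out * (1.0 - out)
    return out

dark = np.random.rand(64, 64, 3) * 0.2      # hypothetical under-exposed frame
bright = apply_illumination_curve(dark)
print(dark.mean(), "->", bright.mean())
```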

16 pages, 1813 KB  
Article
Visibility and Usability of Protective Motorcycle Clothing from the Perspective of Car Drivers
by Gihyun Lee, Taehoon Kim, Jungmin Yun, Dae Young Lim, Seungju Lim, Woosung Lee, Seongjin Jang, Jongseok Lee and Hongbum Kim
Appl. Sci. 2025, 15(23), 12375; https://doi.org/10.3390/app152312375 - 21 Nov 2025
Cited by 1 | Viewed by 638
Abstract
Aiming to improve nighttime safety for motorcycle riders, this study evaluates the visibility and usability of protective motorcycle clothing equipped with LEDs and retroreflective materials versus conventional retroreflective gear. Ten male participants with driving experience were selected based on specific criteria, including normal or corrected visual acuity. Using a simulated driving environment with a 75-inch screen and electric bicycles, the study employed an eye tracker to determine recognition distances. LED and retroreflective-equipped clothing significantly increased the recognition distance in various nighttime scenarios, with the experimental group's gear visible from substantially farther away than the control group's gear. Additionally, subjective assessments showed that the LED gear scored higher in visibility and overall satisfaction, though no significant differences in wearability and activity performance were noted between the two groups. These results indicate that LED clothing could enhance rider safety at night, emphasizing the importance of such innovations for safety gear. Despite its focus on SUV drivers and specific conditions, the study provides foundational data for the development of effective protective motorcycle clothing, suggesting that future research should include a broader array of vehicle types and environmental conditions.

25 pages, 19784 KB  
Article
Spatiotemporal Dynamics of Anthropogenic Night Light in China
by Christopher Small
Lights 2025, 1(1), 4; https://doi.org/10.3390/lights1010004 - 21 Nov 2025
Viewed by 402
Abstract
Anthropogenic night light (ANL) provides a unique observable for the spatially explicit mapping of human-modified landscapes in the form of lighted infrastructure. Since 2013, the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band (DNB) on the Suomi NPP satellite has provided more than a decade of near-daily observations of anthropogenic night light. The objective of this study is to quantify changes in ANL in developed eastern China post-2013 using VIIRS DNB monthly mean brightness composites: specifically, to constrain sub-annual and interannual changes in night light brightness in order to distinguish between apparent and actual changes in ANL sources, and then to conduct a spatiotemporal analysis of observed changes to identify areas of human activity, urban development and rural electrification. This analysis is based on a combination of time-sequential bitemporal brightness distributions and quantification of the spatiotemporal evolution of night light using Empirical Orthogonal Function (EOF) analysis. Bitemporal brightness distributions show that bright (>~1 nW/cm²/sr) ANL is heteroskedastic, with temporal variability diminishing with increasing brightness. Hence, brighter lights are more temporally stable. In contrast, dimmer (<~1 nW/cm²/sr) ANL is much more variable on monthly time scales. The same patterns of heteroskedasticity and variability of the lower tail of the brightness distribution are observed in year-to-year distributions. However, year-to-year brightness increases vary somewhat among different years. While bivariate distributions quantify aggregate changes on both subannual and interannual time scales, spatiotemporal analysis quantifies spatial variations in the year-to-year temporal evolution of ANL. The spatial distribution of brightening (and, much less commonly, dimming) revealed by the EOF analysis indicates that most of the brightening since 2013 has occurred at the peripheries of large cities and throughout the networks of smaller settlements on the North China Plain, the Yangtze River Valley, and the Sichuan Basin. A particularly unusual pattern of sequential brightening and dimming is observed on the Loess Plateau north of Xi'an, where extensive terrace construction has occurred. All aspects of this analysis highlight the difference between apparent and actual changes in night light sources. This is important because many users of VIIRS night light attribute all observed changes in imaged night light to actual changes in anthropogenic light sources, without considering low-luminance variability related to the imaging process itself.
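
The EOF analysis named above amounts to an SVD of the centered time-by-space brightness matrix: the right singular vectors are the spatial EOF patterns and the scaled left singular vectors are their temporal amplitudes. A minimal NumPy sketch on synthetic data:

```python
# Minimal sketch of Empirical Orthogonal Function (EOF) analysis via SVD.
import numpy as np

rng = np.random.default_rng(0)
n_months, n_pixels = 132, 5000          # hypothetical 11-year monthly stack
X = rng.random((n_months, n_pixels))

X = X - X.mean(axis=0)                  # remove the temporal mean per pixel
U, S, Vt = np.linalg.svd(X, full_matrices=False)

eofs = Vt                                # rows: spatial patterns (EOFs)
pcs = U * S                              # columns: temporal amplitudes
explained = S**2 / np.sum(S**2)          # variance fraction per mode
print("variance explained by first 3 modes:", explained[:3].round(3))
```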

22 pages, 33705 KB  
Article
Global and Local Context-Aware Detection for Infrared Small UAV Targets
by Liang Zhao, Yan Zhang, Yongchang Li and Han Zhong
Drones 2025, 9(11), 804; https://doi.org/10.3390/drones9110804 - 18 Nov 2025
Viewed by 533
Abstract
The widespread adoption of small unmanned aerial vehicles poses increasing challenges to public safety. Compared with visible-light sensors, infrared imaging offers excellent nighttime observation capabilities and strong robustness against interference, enabling all-weather UAV surveillance. However, detecting small UAVs in infrared imagery remains challenging due to low target contrast and weak texture features. To address these challenges, we propose IUAV-YOLO, a context-aware detection framework built upon YOLOv10. Specifically, inspired by the receptive field mechanism in human vision, the backbone network is redesigned with a multi-branch structure to improve sensitivity to small targets. Additionally, a Pyramid Global Attention Module is incorporated to strengthen target–background associations, while a Spatial Context-Aware Module is developed to integrate spatial contextual cues and enhance target–background discrimination. Extensive experiments demonstrate that, compared with the baseline model, IUAV-YOLO achieves performance gains of 4.3% in AP@0.5 and 2.6% in AP@0.5:0.95 on the self-built IRSUAV dataset, with a reduction of 0.7M parameters. On the public SIRST-UAVB dataset, IUAV-YOLO attains improvements of 29.7% in AP@0.5 and 16.3% in AP@0.5:0.95. Compared with other advanced object detection algorithms, IUAV-YOLO demonstrates a superior accuracy–efficiency trade-off, highlighting its potential for practical infrared UAV surveillance applications.
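
The multi-branch, receptive-field-inspired backbone redesign can be illustrated with parallel dilated convolutions that see the same feature map at several effective scales. The block below is a generic sketch of that idea, not the paper's exact module; the branch count and channel sizes are assumptions.

```python
# Minimal sketch of a multi-branch receptive-field block: parallel dilated
# convolutions emulate receptive fields of several sizes, then merge.
import torch
import torch.nn as nn

class MultiBranchRF(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)          # small, medium, large receptive field
        ])
        self.merge = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.merge(feats) + x    # residual keeps small-target detail

x = torch.randn(1, 32, 96, 96)          # hypothetical infrared feature map
print(MultiBranchRF(32)(x).shape)       # torch.Size([1, 32, 96, 96])
```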

19 pages, 3709 KB  
Article
Evaluating the Influence of Aerosol Optical Depth on Satellite-Derived Nighttime Light Radiance in Asian Megacities
by Hyeryeong Park, Jaemin Kim and Yun Gon Lee
Remote Sens. 2025, 17(20), 3492; https://doi.org/10.3390/rs17203492 - 21 Oct 2025
Cited by 1 | Viewed by 658
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) provides invaluable nighttime light (NTL) radiance data, widely employed for diverse applications including urban and socioeconomic studies. However, the inherent reliability of NTL data as a proxy for socioeconomic activities is significantly compromised by atmospheric conditions, particularly aerosols. This study analyzed the long-term spatiotemporal variations in NTL radiance with respect to atmospheric aerosol optical depth (AOD) in nine major Asian cities from January 2012 to May 2021. Our findings reveal a complex and heterogeneous interplay between NTL radiance and AOD, fundamentally influenced by a region's unique atmospheric characteristics and developmental stages. While major East Asian cities (e.g., Beijing, Tokyo, Seoul) exhibited a statistically significant inverse correlation, indicating aerosol-induced NTL suppression, other regions showed different patterns. For instance, the rapidly urbanizing city of Dhaka displayed a statistically significant positive correlation, suggesting a concurrent increase in NTL and AOD due to intensified urban activities. This highlights that the NTL-AOD relationship is not solely a physical phenomenon but is also shaped by independent socioeconomic processes. These results underscore the critical importance of comprehensively understanding these regional discrepancies for the reliable interpretation and effective reconstruction of NTL radiance data. By providing nuanced insights into how atmospheric aerosols influence NTL measurements in diverse urban settings, this research aims to enhance the utility and robustness of satellite-derived NTL data for effective socioeconomic analyses.
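
The core of the per-city analysis is a correlation between monthly NTL radiance and AOD series. A minimal sketch with synthetic stand-in series, constructed here so that aerosols suppress radiance, mimicking the inverse correlation reported for the East Asian cities:

```python
# Minimal sketch of the per-city NTL-AOD correlation analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
months = 113                             # Jan 2012 through May 2021
aod = rng.gamma(2.0, 0.3, months)        # hypothetical monthly AOD series
# A city where aerosols suppress observed radiance (inverse correlation):
ntl = 30 - 8 * aod + rng.normal(0, 2, months)

r, p = pearsonr(ntl, aod)
print(f"r = {r:.2f}, p = {p:.3g}")       # expect a significant negative r
```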

24 pages, 4921 KB  
Article
YOLOv11-DCFNet: A Robust Dual-Modal Fusion Method for Infrared and Visible Road Crack Detection in Weak- or No-Light Illumination Environments
by Xinbao Chen, Yaohui Zhang, Junqi Lei, Lelin Li, Lifang Liu and Dongshui Zhang
Remote Sens. 2025, 17(20), 3488; https://doi.org/10.3390/rs17203488 - 20 Oct 2025
Viewed by 1274
Abstract
Road cracks represent a significant challenge that impacts the long-term performance and safety of transportation infrastructure. Early identification of these cracks is crucial for effective road maintenance management. However, traditional crack recognition methods that rely on visible light images often experience substantial performance degradation in weak-light environments, such as at night or within tunnels. This degradation is characterized by blurred or deficient image textures, indistinct target edges, and reduced detection accuracy, which hinders the ability to achieve reliable all-weather target detection. To address these challenges, this study introduces a dual-modal crack detection method named YOLOv11-DCFNet. This method is based on an enhanced YOLOv11 architecture and incorporates a Cross-Modality Fusion Transformer (CFT) module. It establishes a dual-branch feature extraction structure that utilizes both infrared and visible light within the original YOLOv11 framework, effectively leveraging the high contrast capabilities of thermal infrared images to detect cracks under weak- or no-light conditions. The experimental results demonstrate that the proposed YOLOv11-DCFNet method significantly outperforms the single-modal model (YOLOv11-RGB) in both weak-light and no-light scenarios. Under weak-light conditions, the fusion model effectively utilizes the weak texture features of RGB images alongside the thermal radiation information from infrared (IR) images. This leads to an improvement in Precision from 83.8% to 95.3%, Recall from 81.5% to 90.5%, mAP@0.5 from 84.9% to 92.9%, and mAP@0.5:0.95 from 41.7% to 56.3%, thereby enhancing both detection accuracy and quality. In no-light conditions, the RGB single modality performs poorly due to the absence of visible light information, with an mAP@0.5 of only 67.5%. However, by incorporating IR thermal radiation features, the fusion model enhances Precision, Recall, and mAP@0.5 to 95.3%, 90.5%, and 92.9%, respectively, maintaining high detection accuracy and stability even in extreme no-light environments. The results of this study indicate that YOLOv11-DCFNet exhibits strong robustness and generalization ability across various low illumination conditions, providing effective technical support for night-time road maintenance and crack monitoring systems.
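
Cross-modality fusion of the kind the CFT module performs can be sketched as cross-attention between RGB and infrared feature tokens. The block below is a generic cross-attention layer, not the paper's CFT architecture; token counts and dimensions are hypothetical.

```python
# Minimal sketch of cross-modality attention fusion: RGB tokens query the
# infrared tokens for complementary evidence, with a residual connection.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, ir_tokens):
        # Query = RGB, key/value = infrared.
        fused, _ = self.attn(rgb_tokens, ir_tokens, ir_tokens)
        return self.norm(rgb_tokens + fused)

rgb = torch.randn(1, 400, 64)   # hypothetical 20x20 feature map as tokens
ir = torch.randn(1, 400, 64)
print(CrossModalFusion()(rgb, ir).shape)   # torch.Size([1, 400, 64])
```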
