Search Results (333)

Search Parameters:
Keywords = night light images

25 pages, 8224 KB  
Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Viewed by 182
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation.
(This article belongs to the Section Artificial Intelligence)
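For readers unfamiliar with the Retinex model the paper builds on, the sketch below shows the classical decomposition idea in its simplest form: illumination approximated by a wide Gaussian blur, reflectance recovered in the log domain. This is a minimal single-scale illustration, not the paper's quaternion-wavelet network; the sigma value and synthetic input are assumptions.

```python
# Minimal single-scale Retinex-style decomposition sketch (NOT QWR-Dec-Net):
# illumination is approximated by a wide Gaussian blur and reflectance is
# recovered in the log domain. Parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(image: np.ndarray, sigma: float = 30.0):
    """Split a grayscale image in [0, 1] into illumination and reflectance."""
    eps = 1e-6                                   # avoid log(0)
    log_img = np.log(image + eps)
    # Smooth illumination estimate: the low-frequency component of the scene.
    illumination = gaussian_filter(log_img, sigma=sigma)
    # Reflectance is the residual detail layer: log R = log I_img - log L.
    reflectance = log_img - illumination
    return np.exp(illumination), np.exp(reflectance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.uniform(0.0, 0.2, size=(64, 64))  # synthetic low-light patch
    L, R = retinex_decompose(dark)
    print(L.shape, R.shape, float(R.mean()))
```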

16 pages, 4121 KB  
Article
Uncovering Fishing Area Patterns Using Convolutional Autoencoder and Gaussian Mixture Model on VIIRS Nighttime Imagery
by Jeong Chang Seong, Jina Jang, Jiwon Yang, Seung Hee Choi and Chul Sue Hwang
ISPRS Int. J. Geo-Inf. 2026, 15(1), 25; https://doi.org/10.3390/ijgi15010025 - 5 Jan 2026
Viewed by 300
Abstract
The availability of nighttime satellite imagery provides unique opportunities for monitoring fishing activity in data-sparse ocean regions. This study leverages Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band monthly composite imagery to identify and classify recurring spatial patterns of fishing activity in the Korean Exclusive Economic Zone from 2014 to 2024. While prior research has primarily produced static hotspot maps, our approach advances geospatial fishing activity identification by employing machine learning techniques to group similar spatiotemporal configurations, thereby capturing recurring fishing patterns and their temporal variability. A convolutional autoencoder and a Gaussian Mixture Model (GMM) were used to cluster the VIIRS imagery. Results revealed seven major nighttime light hotspots. Results also identified four cluster patterns: Cluster 0 dominated in December, January, and February, Cluster 1 in March, April, and May, Cluster 2 in July, August, and September, and Cluster 3 in October and November. Interannual variability was also identified. In particular, Clusters 0 and 3 expanded into later months in recent years (2022–2024), whereas Cluster 1 contracted. These findings align with environmental changes in the region, including ocean temperature rise and declining primary productivity. By integrating autoencoders with probabilistic clustering, this research demonstrates a framework for uncovering recurrent fishing activity patterns and highlights the utility of satellite imagery with GeoAI in advancing marine fisheries monitoring.
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)
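A minimal sketch of the clustering stage described in the abstract: latent features are grouped with a Gaussian Mixture Model. For brevity, PCA stands in for the paper's convolutional autoencoder; the synthetic radiance stack, latent size, and four components are assumptions.

```python
# Latent features from an "encoder" are grouped with a Gaussian Mixture Model.
# PCA stands in for the paper's convolutional autoencoder; shapes are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Stand-in for monthly VIIRS composites: 120 months of 32x32 radiance patches.
images = rng.gamma(shape=2.0, scale=1.0, size=(120, 32, 32))
flat = images.reshape(len(images), -1)

latent = PCA(n_components=8, random_state=0).fit_transform(flat)  # "encoder"
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(latent)                 # one cluster id per month
print(np.bincount(labels))                       # months per recurring pattern
```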

21 pages, 7741 KB  
Article
Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network
by Renhao Rao, Changcai Cui, Liang Chen, Zhizhao Ouyang and Shuang Chen
Photonics 2025, 12(12), 1206; https://doi.org/10.3390/photonics12121206 - 8 Dec 2025
Viewed by 468
Abstract
The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception-enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, captured with a four-angle polarization camera under real highway tunnel conditions and covering the entire process of entering, passing through, and exiting tunnels, systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and its richer texture information, it adaptively performs dark-region enhancement and glare suppression according to scene brightness and dark-region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of the physical priors, coupled with a detection-driven perception-oriented loss and a beyond mechanism that explicitly fuses physics and deep semantics to surpass physical limitations. Experimental results show that, compared to the original images, Polar-PENet achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, surpassing PPEAM, which scores 18.89 and 0.5257. In downstream object detection, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, a 99.7% improvement over the original images and a 12.1% performance boost over PPEAM’s 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the paradigm of using polarimetric physics as a prior and surpassing its limits through learning.
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
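The Stokes quantities the abstract uses as physical priors are textbook polarimetry and can be computed directly from a four-angle capture (0°, 45°, 90°, 135°). The sketch below shows those standard formulas, not the PPEAM pipeline; the synthetic frames are assumptions.

```python
# Standard Stokes-parameter computation from a four-angle polarization camera.
# Textbook polarimetry, not the PPEAM pipeline; inputs below are synthetic.
import numpy as np

def stokes_dolp_aolp(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)           # total intensity
    s1 = i0 - i90                                # horizontal vs vertical
    s2 = i45 - i135                              # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)  # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization
    return s0, dolp, aolp

rng = np.random.default_rng(1)
frames = rng.uniform(0.0, 1.0, size=(4, 128, 128))   # toy 4-angle capture
s0, dolp, aolp = stokes_dolp_aolp(*frames)
print(float(dolp.mean()), float(aolp.std()))
```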

24 pages, 2374 KB  
Article
NightTrack: Joint Night-Time Image Enhancement and Object Tracking for UAVs
by Xiaomin Huang, Yunpeng Bai, Jiaman Ma, Ying Li, Changjing Shang and Qiang Shen
Drones 2025, 9(12), 824; https://doi.org/10.3390/drones9120824 - 27 Nov 2025
Viewed by 650
Abstract
UAV-based visual object tracking has recently become a prominent research focus in computer vision. However, most existing trackers are primarily benchmarked under well-illuminated conditions, largely overlooking the challenges that arise in night-time scenarios. Although attempts exist to restore image brightness via low-light image enhancement before feeding frames to a tracker, such two-stage pipelines often struggle to strike an effective balance between the competing objectives of enhancement and tracking. To address this limitation, this work proposes NightTrack, a unified framework that jointly optimizes low-light image enhancement and UAV object tracking. While boosting image visibility, NightTrack not only explicitly preserves but also reinforces the discriminative features required for robust tracking. To improve the discriminability of low-light representations, Pyramid Attention Modules (PAMs) are introduced to enhance multi-scale contextual cues. Moreover, by jointly estimating illumination and noise curves, NightTrack mitigates the adverse effects of low-light environments, leading to significant gains in precision and robustness. Experimental results on multiple night-time tracking benchmarks demonstrate that NightTrack outperforms state-of-the-art methods in night-time scenes, exhibiting strong promise for further development.
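As a rough illustration of illumination-curve enhancement, the sketch below applies a Zero-DCE-style quadratic curve iteratively to brighten a dark frame. This is not NightTrack's learned, jointly estimated curve; alpha and the iteration count are assumptions.

```python
# Zero-DCE-style illumination curve, applied iteratively (illustrative only;
# NightTrack learns its curves jointly with noise estimation).
import numpy as np

def apply_light_curve(x: np.ndarray, alpha: float = 0.6, iters: int = 4):
    """Iteratively brighten an image in [0, 1] with LE(x) = x + a*x*(1 - x)."""
    for _ in range(iters):
        x = x + alpha * x * (1.0 - x)   # monotone, fixed points at 0 and 1
    return np.clip(x, 0.0, 1.0)

night_frame = np.full((4, 4), 0.1)           # uniformly dark toy frame
print(apply_light_curve(night_frame)[0, 0])  # brightened pixel value
```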

25 pages, 19784 KB  
Article
Spatiotemporal Dynamics of Anthropogenic Night Light in China
by Christopher Small
Lights 2025, 1(1), 4; https://doi.org/10.3390/lights1010004 - 21 Nov 2025
Viewed by 396
Abstract
Anthropogenic night light (ANL) provides a unique observable for the spatially explicit mapping of human-modified landscapes in the form of lighted infrastructure. Since 2013, the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band (DNB) on the Suomi NPP satellite has provided more than a decade of near-daily observations of anthropogenic night light. The objective of this study is to quantify changes in ANL in developed eastern China post-2013 using VIIRS DNB monthly mean brightness composites: specifically, to constrain sub-annual and interannual changes in night light brightness in order to distinguish between apparent and actual changes in ANL sources, and then to conduct a spatiotemporal analysis of the observed changes to identify areas of human activity, urban development, and rural electrification. The analysis is based on a combination of time-sequential bitemporal brightness distributions and quantification of the spatiotemporal evolution of night light using Empirical Orthogonal Function (EOF) analysis. Bitemporal brightness distributions show that bright (>~1 nW/cm²/sr) ANL is heteroskedastic, with temporal variability diminishing with increasing brightness; brighter lights are therefore more temporally stable. In contrast, dimmer (<~1 nW/cm²/sr) ANL is much more variable on monthly time scales. The same patterns of heteroskedasticity and variability in the lower tail of the brightness distribution are observed in year-to-year distributions, although year-to-year brightness increases vary somewhat among different years. While bivariate distributions quantify aggregate changes on both subannual and interannual time scales, spatiotemporal analysis quantifies spatial variations in the year-to-year temporal evolution of ANL. The spatial distribution of brightening (and, much less commonly, dimming) revealed by the EOF analysis indicates that most of the brightening since 2013 has occurred at the peripheries of large cities and throughout the networks of smaller settlements on the North China Plain, the Yangtze River Valley, and the Sichuan Basin. A particularly unusual pattern of sequential brightening and dimming is observed on the Loess Plateau north of Xi’an, where extensive terrace construction has occurred. All aspects of this analysis highlight the difference between apparent and actual changes in night light sources. This is important because many users of VIIRS night light attribute all observed changes in imaged night light to actual changes in anthropogenic light sources, without considering the low-luminance variability introduced by the imaging process itself.
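EOF analysis, the core spatiotemporal tool here, reduces to a singular value decomposition of the centered space-time matrix. A minimal sketch on a synthetic brightness stack (all array sizes are assumptions, not VIIRS data):

```python
# Minimal EOF analysis: EOFs are the spatial singular vectors of the centered
# space-time matrix; the paired singular vectors give each mode's time series.
import numpy as np

rng = np.random.default_rng(7)
months, ny, nx = 96, 20, 20
stack = rng.gamma(2.0, 1.0, size=(months, ny * nx))   # time x space brightness

anomaly = stack - stack.mean(axis=0)                  # remove temporal mean
# SVD: rows of vt are EOF spatial patterns; u * s gives principal components.
u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
explained = s**2 / np.sum(s**2)
eof1 = vt[0].reshape(ny, nx)                          # leading spatial mode
pc1 = u[:, 0] * s[0]                                  # its temporal evolution
print(f"EOF-1 explains {explained[0]:.1%} of variance", eof1.shape, pc1.shape)
```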

28 pages, 13796 KB  
Article
Analyzing Nighttime Lights Using Multi-Temporal Imagery from Luojia-1 and the International Space Station with In Situ and Land Use Data
by Shengjie Kris Liu, Chu Wing So and Chun Shing Jason Pun
Remote Sens. 2025, 17(22), 3739; https://doi.org/10.3390/rs17223739 - 17 Nov 2025
Viewed by 992
Abstract
Remotely sensed nighttime lights (NTLs) have become essential in urban and environmental research but are typically captured at fixed local times by sun-synchronous satellites, limiting their ability to capture changes throughout the night. In contrast, in situ measurements of night sky brightness (NSB) can provide continuous records over time, but direct comparisons with NTLs have remained rare. This study first examines the relationship between in situ NSB and remotely sensed NTLs using multi-temporal imagery from Luojia-1 and the International Space Station (ISS), focusing on 10 sites in Hong Kong and Macau. We find moderate to strong correlations between NSB and Luojia-1 (R = 0.73) and between NSB and ISS imagery (R = 0.8–1.0), though notable spatial and temporal variations persist. Even images captured within seconds of each other differ in brightness across locations (R = 0.88–0.96), driven by factors such as changing viewing angles in dense urban areas, variations in light transmission paths, and atmospheric conditions, all influenced by satellite position. Further analysis reveals distinct temporal patterns across land use categories: port facilities and airports are brightest late at night, whereas commercial districts peak earlier and gradually dim throughout the night. Within individual ISS images, transportation-related lighting tends to be red, and commercial areas appear blue compared to other urban areas, which may be due to differences in lamp type (high-pressure sodium vs. LED). This study highlights the need to cross-examine in situ and remotely sensed data in NTL research, emphasizing that factors such as local pass time, viewing geometry, color sensitivity, and atmospheric conditions can influence observations and ultimately affect conclusions.
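The site-level comparison reduces to correlating two brightness series. A minimal sketch with synthetic placeholders for the 10 sites (the values below are not the study's measurements):

```python
# Pearson correlation between in situ night sky brightness and satellite NTL
# radiance at 10 sites; all numbers are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
ntl_radiance = rng.uniform(1.0, 60.0, size=10)               # satellite NTL per site
nsb = 2.0 * np.log10(ntl_radiance) + rng.normal(0, 0.1, 10)  # correlated toy NSB

r = np.corrcoef(ntl_radiance, nsb)[0, 1]
print(f"Pearson R = {r:.2f}")
```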

30 pages, 11589 KB  
Article
Quantification of Light, Photoperiod, Temperature, and Water Stress Symptoms Using Image Features for Smart Vegetable Seedling Production
by Samsuzzaman, Sumaiya Islam, Md Razob Ali, Pabel Kanti Dey, Emmanuel Bicamumakuba, Md Nasim Reza and Sun-Ok Chung
Horticulturae 2025, 11(11), 1340; https://doi.org/10.3390/horticulturae11111340 - 7 Nov 2025
Cited by 1 | Viewed by 917
Abstract
Environmental factors like light, photoperiod, temperature, and water are vital for crop growth, and even slight deviations from their optimal ranges can cause seedling stress and reduce yield. Therefore, this study aimed to quantify seedling stress symptoms using image feature analysis under varying light, photoperiod, temperature, and water conditions. Seedlings were grown under controlled low, normal, and high environmental conditions: light intensity at 50 µmol m⁻² s⁻¹ (low), 250 µmol m⁻² s⁻¹ (normal), and 450 µmol m⁻² s⁻¹ (high); photoperiod cycles of 8/16 h (low), 10/14 h (normal), and 16/8 h (high) day/night; temperature at 20 °C (low), 25 °C (normal), and 30 °C (high); and water availability at 1 L per day (optimal), 1 L every two days (moderate stress), and 1 L every three days (severe stress), applied for 15 days. Commercial low-cost RGB, thermal, and depth sensors were used to collect data every day. A total of 1080 RGB images were pre-processed with histogram equalization and median and Gaussian filters for noise reduction, to minimize illumination effects. Morphological, color, and texture features were then analyzed using ANOVA (p < 0.05) to assess treatment effects. The results show that the maximum canopy area for tomato was 115,226 pixels, while lettuce’s maximum plant height was 9.28 cm. However, 450 µmol m⁻² s⁻¹ light intensity caused increased surface roughness, indicating stress-induced morphological alteration. The analysis of Combined Stress Index (CSI) values indicated that the highest stress levels were 50% for pepper, 55% for tomato, 62% for cucumber, 55% for watermelon, 50% for lettuce, and 50% for pak choi. The findings showed that image-based stress detection enables precise environmental control and improves early-stage crop management.
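The feature-screening step named in the abstract is a one-way ANOVA across treatment levels. A minimal sketch with synthetic canopy-area values (the sample sizes and means are assumptions):

```python
# One-way ANOVA (p < 0.05) testing whether an image feature differs across
# treatment levels; feature values below are synthetic stand-ins.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
low = rng.normal(100_000, 5_000, size=12)     # canopy area (pixels), low light
normal = rng.normal(115_000, 5_000, size=12)  # normal light
high = rng.normal(108_000, 5_000, size=12)    # high light (stress response)

f_stat, p_value = f_oneway(low, normal, high)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}, significant: {p_value < 0.05}")
```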

20 pages, 29199 KB  
Article
The First Dark-Sky Map of Thailand: International Comparisons and Factors Affecting the Rate of Change
by Farung Surina, Thanayut Changruenngam, Jinda Waikeaw, Suruswadee Nanglae, Saran Poshyachinda, Boonrucksar Soonthornthum and Michael F. Bode
Sustainability 2025, 17(21), 9856; https://doi.org/10.3390/su17219856 - 5 Nov 2025
Viewed by 2283
Abstract
We present the first dark-sky map of Thailand, derived from calibrated Visible Infrared Imaging Radiometer Suite (VIIRS) satellite data spanning 2012–2023. Artificial night-sky brightness was classified into 14 levels, with Classes 1–9 defined as potential dark-sky areas where the Milky Way remains visible. International comparisons with the United Kingdom, Chile, and Botswana reveal that Thailand has undergone the steepest decline, losing 15.4% of pristine skies since 2012, while the UK remained stable (+0.8%), Botswana nearly unchanged (−0.7%), and Chile moderately degraded (−5.3%). A correlation analysis shows strong negative associations between potential dark-sky area and both GDP (r = −0.65) and population (r = −0.68), while inflation (r = 0.26) and unemployment (r = 0.24) exhibit weak influence. Five algorithms, including GLM and machine learning models, were tested; among them, the Decision Tree achieved the lowest relative error (0.4% ± 0.3%), with ensemble methods and GLM performing comparably and Deep Learning being less accurate. By 2023, over 60% of Thais lived under skies too bright to observe the Milky Way with the naked eye, and one-fifth were exposed to light intensities preventing dark adaptation. Thailand’s rapid transition to LED street lighting after 2015, while energy-efficient, has intensified skyglow. Protecting the remaining dark-sky areas requires urgent policies linking conservation to human health, biodiversity, cultural heritage, and sustainable development.
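The model comparison reported above can be illustrated with a small decision-tree regression on synthetic socioeconomic drivers; the data below are stand-ins, not the study's series, and the tree depth is an assumption.

```python
# Decision Tree regression of dark-sky area on socioeconomic drivers, with a
# relative-error score; all data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(11)
n = 48                                         # e.g., quarterly records
gdp = rng.uniform(300, 500, n)
population = rng.uniform(60, 70, n)
X = np.column_stack([gdp, population])
dark_sky_area = 900 - 0.8 * gdp - 5.0 * population + rng.normal(0, 2, n)

model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, dark_sky_area)
pred = model.predict(X)
rel_err = np.abs(pred - dark_sky_area) / np.abs(dark_sky_area)
print(f"mean relative error: {rel_err.mean():.1%}")
```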

33 pages, 4303 KB  
Article
Artificial Intelligence-Based Plant Disease Classification in Low-Light Environments
by Hafiz Ali Hamza Gondal, Seong In Jeong, Won Ho Jang, Jun Seo Kim, Rehan Akram, Muhammad Irfan, Muhammad Hamza Tariq and Kang Ryoung Park
Fractal Fract. 2025, 9(11), 691; https://doi.org/10.3390/fractalfract9110691 - 27 Oct 2025
Cited by 1 | Viewed by 1893
Abstract
The accurate classification of plant diseases is vital for global food security, as diseases can cause major yield losses and threaten sustainable and precision agriculture. Classifying plant diseases in low-light noisy environments is crucial because it allows crops to be monitored continuously, even at night. Important visual cues of disease symptoms can be lost in the degraded images captured under low illumination, resulting in poor performance of conventional plant disease classifiers. While researchers have proposed various techniques for classifying plant diseases in daylight, no studies have addressed low-light noisy environments. Therefore, we propose a novel model for classifying plant diseases from low-light noisy images called the dilated pixel attention network (DPA-Net). DPA-Net uses a pixel attention mechanism and multi-layer dilated convolution with a high receptive field, which extracts essential features while highlighting the most relevant information under this challenging condition, allowing more accurate classification. Additionally, we performed fractal dimension estimation on diseased and healthy leaves to analyze their structural irregularities and complexities. For performance evaluation, experiments were conducted on two public datasets: the PlantVillage and Potato Leaf Disease datasets. In both datasets, the image resolution is 256 × 256 pixels in Joint Photographic Experts Group (JPG) format. On the first dataset, DPA-Net achieved an average accuracy of 92.11% and a harmonic mean of precision and recall (F1-score) of 89.11%; on the second, it achieved an average accuracy of 88.92% and an F1-score of 88.60%. These results show that the proposed method outperforms state-of-the-art methods. On the first dataset, our method improved average accuracy by 2.27% and F1-score by 2.86% over the baseline; on the second, by 6.32% and 6.37%, respectively. In addition, we confirm that our method is effective on a real low-illumination dataset we constructed by capturing images at 0 lux with a smartphone at night. This approach provides farmers with an affordable, practical tool for early disease detection that can support crop protection worldwide.
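The two ingredients named in the abstract, multi-layer dilated convolution and pixel attention, compose naturally in a few lines of PyTorch. The module below is an illustrative sketch, not the authors' DPA-Net; all channel counts and dilation rates are assumptions.

```python
# Multi-layer dilated convolutions (large receptive field) gated by a
# per-pixel attention map; illustrative module, not the authors' DPA-Net.
import torch
import torch.nn as nn

class DilatedPixelAttention(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.dilated = nn.Sequential(                       # growing dilation
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.attention = nn.Sequential(                     # per-pixel gate
            nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.dilated(x)
        return features * self.attention(features)          # weight each pixel

x = torch.randn(1, 16, 64, 64)                              # toy feature map
print(DilatedPixelAttention()(x).shape)                     # -> (1, 16, 64, 64)
```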

14 pages, 13455 KB  
Article
Enhancing 3D Monocular Object Detection with Style Transfer for Nighttime Data Augmentation
by Alexandre Evain, Firas Jendoubi, Redouane Khemmar, Sofiane Ahmedali and Mathieu Orzalesi
Appl. Sci. 2025, 15(20), 11288; https://doi.org/10.3390/app152011288 - 21 Oct 2025
Viewed by 834
Abstract
Monocular 3D object detection (Mono3D) is essential for autonomous driving and augmented reality, yet its performance degrades significantly at night due to the scarcity of annotated nighttime data. In this paper, we investigate the use of style transfer for nighttime data augmentation and evaluate its effect on individual components of 3D detection. Using CycleGAN, we generated synthetic night images from daytime scenes in the nuScenes dataset and trained a modular Mono3D detector under different configurations. Our results show that training solely on style-transferred images improves certain metrics, such as AP@0.95 (from 0.0299 to 0.0778, a 160% increase) and depth error (11% reduction), compared to daytime-only baselines. However, performance on orientation and dimension estimation deteriorates. When real nighttime data is included, style transfer provides complementary benefits: for cars, depth error decreases from 0.0414 to 0.021, and AP@0.95 remains stable at 0.66; for pedestrians, AP@0.95 improves by 13% (0.297 to 0.336) with a 35% reduction in depth error. Cyclist detection remains unreliable due to limited samples. We conclude that style transfer cannot replace authentic nighttime data, but when combined with it, it reduces false positives and improves depth estimation, leading to more robust detection under low-light conditions. This study highlights both the potential and the limitations of style transfer for augmenting Mono3D training, and it points to future research on more advanced generative models and broader object categories.
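The augmentation recipe evaluated above amounts to pooling day images, their style-transferred night versions, and real night frames for training. In the sketch below a toy darken-and-tint function stands in for the trained CycleGAN; all shapes are assumptions.

```python
# Mixing day images, synthetic "night" versions, and real night frames into
# one training pool; the transfer function is a toy stand-in for CycleGAN.
import numpy as np

def fake_night(day_img: np.ndarray) -> np.ndarray:
    """Toy stand-in for CycleGAN day->night: darken and shift color balance."""
    night = day_img * 0.25
    night[..., 2] *= 1.3                      # bias toward the blue channel
    return np.clip(night, 0.0, 1.0)

rng = np.random.default_rng(9)
day = rng.uniform(0.3, 1.0, size=(8, 64, 64, 3))        # daytime batch
real_night = rng.uniform(0.0, 0.2, size=(4, 64, 64, 3))  # real night batch
train_pool = np.concatenate([day, fake_night(day), real_night])
print(train_pool.shape)                                  # combined training set
```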

19 pages, 3709 KB  
Article
Evaluating the Influence of Aerosol Optical Depth on Satellite-Derived Nighttime Light Radiance in Asian Megacities
by Hyeryeong Park, Jaemin Kim and Yun Gon Lee
Remote Sens. 2025, 17(20), 3492; https://doi.org/10.3390/rs17203492 - 21 Oct 2025
Cited by 1 | Viewed by 651
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) provides invaluable nighttime light (NTL) radiance data, widely employed for diverse applications including urban and socioeconomic studies. However, the inherent reliability of NTL data as a proxy for socioeconomic activities is significantly compromised by atmospheric conditions, particularly aerosols. This study analyzed the long-term spatiotemporal variations in NTL radiance with respect to atmospheric aerosol optical depth (AOD) in nine major Asian cities from January 2012 to May 2021. Our findings reveal a complex and heterogeneous interplay between NTL radiance and AOD, fundamentally influenced by a region’s unique atmospheric characteristics and developmental stages. While major East Asian cities (e.g., Beijing, Tokyo, Seoul) exhibited a statistically significant inverse correlation, indicating aerosol-induced NTL suppression, other regions showed different patterns. For instance, the rapidly urbanizing city of Dhaka displayed a statistically significant positive correlation, suggesting a concurrent increase in NTL and AOD due to intensified urban activities. This highlights that the NTL-AOD relationship is not solely a physical phenomenon but is also shaped by independent socioeconomic processes. These results underscore the critical importance of comprehensively understanding these regional discrepancies for the reliable interpretation and effective reconstruction of NTL radiance data. By providing nuanced insights into how atmospheric aerosols influence NTL measurements in diverse urban settings, this research aims to enhance the utility and robustness of satellite-derived NTL data for effective socioeconomic analyses.
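The city-level analysis reduces to correlating monthly NTL radiance with AOD over the record. The sketch below synthesizes an inverse (aerosol-suppression) case like the East Asian cities; the series length and coefficients are assumptions.

```python
# Correlating monthly NTL radiance with AOD; the synthetic series mimics
# aerosol-induced dimming (a Beer-Lambert-like attenuation), values assumed.
import numpy as np

rng = np.random.default_rng(13)
months = 113                                   # Jan 2012 - May 2021
aod = np.clip(rng.normal(0.5, 0.2, months), 0.05, None)
ntl = 30.0 * np.exp(-1.5 * aod) + rng.normal(0, 1.0, months)

r = np.corrcoef(aod, ntl)[0, 1]
print(f"NTL-AOD correlation: {r:.2f}")         # negative: aerosol dimming
```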

24 pages, 4921 KB  
Article
YOLOv11-DCFNet: A Robust Dual-Modal Fusion Method for Infrared and Visible Road Crack Detection in Weak- or No-Light Illumination Environments
by Xinbao Chen, Yaohui Zhang, Junqi Lei, Lelin Li, Lifang Liu and Dongshui Zhang
Remote Sens. 2025, 17(20), 3488; https://doi.org/10.3390/rs17203488 - 20 Oct 2025
Viewed by 1267
Abstract
Road cracks represent a significant challenge that impacts the long-term performance and safety of transportation infrastructure. Early identification of these cracks is crucial for effective road maintenance management. However, traditional crack recognition methods that rely on visible light images often experience substantial performance degradation in weak-light environments, such as at night or within tunnels. This degradation is characterized by blurred or deficient image textures, indistinct target edges, and reduced detection accuracy, which hinders the ability to achieve reliable all-weather target detection. To address these challenges, this study introduces a dual-modal crack detection method named YOLOv11-DCFNet. This method is based on an enhanced YOLOv11 architecture and incorporates a Cross-Modality Fusion Transformer (CFT) module. It establishes a dual-branch feature extraction structure that utilizes both infrared and visible light within the original YOLOv11 framework, effectively leveraging the high contrast capabilities of thermal infrared images to detect cracks under weak- or no-light conditions. The experimental results demonstrate that the proposed YOLOv11-DCFNet method significantly outperforms the single-modal model (YOLOv11-RGB) in both weak-light and no-light scenarios. Under weak-light conditions, the fusion model effectively utilizes the weak texture features of RGB images alongside the thermal radiation information from infrared (IR) images. This leads to an improvement in Precision from 83.8% to 95.3%, Recall from 81.5% to 90.5%, mAP@0.5 from 84.9% to 92.9%, and mAP@0.5:0.95 from 41.7% to 56.3%, thereby enhancing both detection accuracy and quality. In no-light conditions, the RGB single modality performs poorly due to the absence of visible light information, with an mAP@0.5 of only 67.5%. However, by incorporating IR thermal radiation features, the fusion model enhances Precision, Recall, and mAP@0.5 to 95.3%, 90.5%, and 92.9%, respectively, maintaining high detection accuracy and stability even in extreme no-light environments. The results of this study indicate that YOLOv11-DCFNet exhibits strong robustness and generalization ability across various low illumination conditions, providing effective technical support for night-time road maintenance and crack monitoring systems.
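A dual-branch RGB + infrared detector fuses per-modality features before detection. The sketch below shows the simplest concat-and-convolve form of such fusion, not the paper's Cross-Modality Fusion Transformer; channel counts are assumptions.

```python
# Minimal dual-branch RGB + infrared feature fusion (concat then 1x1 conv);
# illustrative only, not the CFT module.
import torch
import torch.nn as nn

class SimpleDualModalFusion(nn.Module):
    def __init__(self, channels: int = 8):
        super().__init__()
        self.rgb_branch = nn.Conv2d(3, channels, 3, padding=1)   # visible
        self.ir_branch = nn.Conv2d(1, channels, 3, padding=1)    # thermal
        self.fuse = nn.Conv2d(2 * channels, channels, 1)         # mix modalities

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.rgb_branch(rgb), self.ir_branch(ir)], dim=1)
        return torch.relu(self.fuse(f))

rgb = torch.randn(1, 3, 96, 96)      # weak-light visible frame
ir = torch.randn(1, 1, 96, 96)       # thermal infrared frame
print(SimpleDualModalFusion()(rgb, ir).shape)
```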

19 pages, 762 KB  
Article
TMRGBT-D2D: A Temporal Misaligned RGB-Thermal Dataset for Drone-to-Drone Target Detection
by Hexiang Hao, Yueping Peng, Zecong Ye, Baixuan Han, Wei Tang, Wenchao Kang, Xuekai Zhang, Qilong Li and Wenchao Liu
Drones 2025, 9(10), 694; https://doi.org/10.3390/drones9100694 - 10 Oct 2025
Viewed by 1913
Abstract
In the field of drone-to-drone detection, the fusion of temporal information with infrared and visible light data has rarely been studied. This paper presents the first temporally misaligned RGB-thermal dataset for drone-to-drone target detection, named TMRGBT-D2D. The dataset covers various lighting conditions (i.e., high-light scenes captured during the day, and medium-light and low-light scenes captured at night, with night scenes accounting for 38.8% of all data), different scenes (sky, forests, buildings, construction sites, playgrounds, roads, etc.), different seasons, and different locations, consisting of a total of 42,624 images organized into sequential frames extracted from 19 RGB-T video pairs. Each frame has been meticulously annotated, with a total of 94,323 annotations. Except for drones that cannot be identified under extreme conditions, infrared and visible light annotations are in one-to-one correspondence. The dataset presents various challenges, including small object detection (the average size of objects in visible light images is approximately 0.02% of the image area), motion blur caused by fast movement, and detection issues arising from imaging differences between modalities. To our knowledge, this is the first temporally misaligned RGB-thermal dataset for drone-to-drone target detection, facilitating research into RGB-thermal image fusion and the development of drone target detection.
(This article belongs to the Special Issue Detection, Identification and Tracking of UAVs and Drones)
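One practical issue a temporally misaligned dataset raises is pairing each RGB frame with the nearest-in-time thermal frame. A minimal sketch with synthetic timestamps (the frame rates and offset are assumptions, not the dataset's actual timing):

```python
# Pairing temporally misaligned RGB and thermal streams by nearest timestamp;
# the frame rates and start offset below are synthetic assumptions.
import numpy as np

rgb_times = np.arange(0.0, 5.0, 1 / 30)          # 30 fps visible stream
thermal_times = np.arange(0.07, 5.0, 1 / 25)     # 25 fps thermal, offset start

# For each RGB timestamp, index of the closest thermal frame.
idx = np.abs(thermal_times[None, :] - rgb_times[:, None]).argmin(axis=1)
worst_gap = np.abs(thermal_times[idx] - rgb_times).max()
print(f"{len(rgb_times)} pairs, worst time gap: {worst_gap * 1000:.1f} ms")
```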

16 pages, 5781 KB  
Article
Design of an Underwater Optical Communication System Based on RT-DETRv2
by Hexi Liang, Hang Li, Minqi Wu, Junchi Zhang, Wenzheng Ni, Baiyan Hu and Yong Ai
Photonics 2025, 12(10), 991; https://doi.org/10.3390/photonics12100991 - 8 Oct 2025
Cited by 1 | Viewed by 975
Abstract
Underwater wireless optical communication (UWOC) is a key technology in ocean resource development, but its link stability is often limited by the difficulty of optical alignment in complex underwater environments. To address this difficulty, this study focuses on improving the Real-Time Detection Transformer v2 (RT-DETRv2) model. We improve the underwater light source detection model by collaboratively designing a lightweight backbone network and deformable convolution, constructing a cross-stage local attention mechanism to reduce the number of network parameters, and introducing geometrically adaptive convolution kernels that dynamically adjust the distribution of sampling points, enhance the representation of spot-deformation features, and improve positioning accuracy under optical interference. To verify the effectiveness of the model, we constructed an underwater light-emitting diode (LED) light-spot detection dataset containing 11,390 images, covering transmission distances of 15–40 m, deflection angles of ±45°, and three light-intensity conditions (noon, evening, and late night). Experiments show that the improved model achieves an average precision at an intersection-over-union threshold of 0.50 (AP50) of 97.4% on the test set, 12.7% higher than the benchmark model. The UWOC system built on the improved model achieves zero-bit-error-rate communication within 30 m after assisted alignment (initial lateral offset angles of 0°–60°), and the bit-error rate remains stable in the 10⁻⁷–10⁻⁶ range at 40 m, three orders of magnitude lower than a traditional Remotely Operated Vehicle (ROV) underwater optical communication system (bit-error rates of 10⁻⁶–10⁻³), verifying the strong adaptability of the improved model to complex underwater environments.
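For context on the bit-error-rate figures, the sketch below runs a generic Monte Carlo estimate for on-off keying over a Gaussian-noise channel; this is not the authors' UWOC hardware model, and the SNR and threshold are assumptions.

```python
# Toy Monte Carlo bit-error-rate estimate for on-off keying with additive
# Gaussian noise; generic illustration, SNR and threshold are assumed.
import numpy as np

rng = np.random.default_rng(17)
n_bits = 1_000_000
bits = rng.integers(0, 2, n_bits)
snr = 6.0                                       # amplitude SNR, assumed
received = bits * snr + rng.normal(0.0, 1.0, n_bits)
decoded = (received > snr / 2).astype(int)      # mid-point threshold
ber = np.mean(decoded != bits)
print(f"BER ~ {ber:.2e}")
```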

33 pages, 14767 KB  
Article
Night-to-Day Image Translation with Road Light Attention Training for Traffic Information Detection
by Ye-Jin Lee, Young-Ho Go, Seung-Hwan Lee, Dong-Min Son and Sung-Hak Lee
Mathematics 2025, 13(18), 2998; https://doi.org/10.3390/math13182998 - 16 Sep 2025
Viewed by 1437
Abstract
Generative adversarial network (GAN)-based image deep learning methods are useful for improving object visibility in nighttime driving environments, but they often fail to preserve critical road information such as traffic light colors and vehicle lighting. This paper proposes a method to address this by utilizing both unpaired and four-channel paired training modules. The unpaired module performs the primary night-to-day conversion, while the paired module, enhanced with a fourth channel, focuses on preserving road details. Our key contribution is an inverse road light attention (RLA) map, which acts as this fourth channel to explicitly guide the network’s learning. This map also facilitates a final cross-blending process, synthesizing the results from both modules to maximize their respective advantages. Experimental results demonstrate that our approach more accurately preserves lane markings and traffic light colors. Furthermore, quantitative analysis confirms that our method achieves superior performance across eight no-reference image quality metrics compared to existing techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
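The final cross-blending step described above is a per-pixel convex combination steered by the attention map. A minimal sketch with synthetic images and a synthetic map (the paper derives its RLA map from road lighting):

```python
# Per-pixel cross-blending of two module outputs steered by an attention map;
# the map here is synthetic, whereas the paper derives it from road lighting.
import numpy as np

rng = np.random.default_rng(21)
unpaired_out = rng.uniform(0.0, 1.0, size=(64, 64, 3))   # global day-like tone
paired_out = rng.uniform(0.0, 1.0, size=(64, 64, 3))     # detail-preserving

rla = rng.uniform(0.0, 1.0, size=(64, 64, 1))            # road light attention
blended = rla * paired_out + (1.0 - rla) * unpaired_out  # per-pixel convex mix
print(blended.shape, float(blended.min()), float(blended.max()))
```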
