Search Results (690)

Search Parameters:
Keywords = nighttime image

22 pages, 2777 KB  
Article
Efficient Dual-Domain Collaborative Enhancement Method for Low-Light Images in Architectural Scenes
by Jing Pu, Wei Shi, Dong Luo, Guofei Zhang, Zhixun Xie, Wanying Liu and Bincan Liu
Infrastructures 2025, 10(11), 289; https://doi.org/10.3390/infrastructures10110289 - 31 Oct 2025
Abstract
Low-light image enhancement in architectural scenes presents a considerable challenge for computer vision applications in construction engineering. Images captured in architectural settings during nighttime or under inadequate illumination often suffer from noise interference, low-light blurring, and obscured structural features. Although low-light image enhancement and deblurring are intrinsically linked when emphasizing architectural defects, conventional image restoration methods generally treat these tasks as separate entities. This paper introduces an efficient and robust Frequency-Space Recovery Network (FSRNet) tailored to the unique characteristics of low-light architectural scenes. The encoder utilizes a Feature Refinement Feedforward Network (FRFN) to achieve precise enhancement of defect features while dynamically mitigating background redundancy. Coupled with a Frequency Response Module, it modifies the amplitude spectrum to amplify high-frequency components of defects and ensure balanced global illumination. The decoder employs InceptionDWConv2d modules to capture multi-directional and multi-scale features of cracks. When combined with a gating mechanism, it dynamically suppresses noise, restores the spatial continuity of defects, and eliminates blurring. This method also reduces computational costs in terms of parameters and MAC operations. To assess the effectiveness of the proposed approach in architectural contexts, this paper conducts a comprehensive study using low-light defect images from indoor concrete walls as a representative case. Experimental results indicate that FSRNet not only achieves state-of-the-art PSNR performance of 27.58 dB but also enhances the mAP of the downstream YOLOv8 detection model by 7.1%, while utilizing only 3.75 M parameters and 8.8 GMACs. These findings validate the superiority and practicality of the proposed method for low-light image enhancement tasks in architectural settings.
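
The frequency-domain idea named above (reshaping the amplitude spectrum to amplify high-frequency defect detail) can be illustrated with a minimal NumPy sketch; the fixed radial cutoff and gain are illustrative assumptions, not the paper's learned Frequency Response Module.

```python
# Minimal sketch: boost high-frequency amplitude components of a grayscale
# image while keeping phase intact. FSRNet learns this reshaping; the fixed
# cutoff and gain here are assumptions for illustration only.
import numpy as np

def boost_high_frequencies(img, cutoff=0.1, gain=1.5):
    f = np.fft.fftshift(np.fft.fft2(img))
    amp, phase = np.abs(f), np.angle(f)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized frequency radius
    amp = np.where(radius > cutoff, amp * gain, amp)       # amplify high frequencies only
    out = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
    return np.real(out)

enhanced = boost_high_frequencies(np.random.rand(64, 64))
```
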
23 pages, 10676 KB  
Article
Hourly and 0.5-Meter Green Space Exposure Mapping and Its Impacts on the Urban Built Environment
by Yan Wu, Weizhong Su, Yingbao Yang and Jia Hu
Remote Sens. 2025, 17(21), 3531; https://doi.org/10.3390/rs17213531 - 24 Oct 2025
Abstract
Accurately mapping urban residents’ exposure to green space at high spatiotemporal resolutions is essential for assessing disparities and equality across blocks and enhancing urban environment planning. In this study, we developed a framework to generate hourly green space exposure maps at 0.5 m resolution using multiple sources of remote sensing data and an Object-Based Image Classification with Graph Convolutional Network (OBIC-GCN) model. Taking the main urban area of Nanjing, China, as the study area, we proposed a Dynamic Residential Green Space Exposure (DRGE) metric to reveal disparities in green space access across four housing-price blocks. The Palma ratio was employed to characterize inequity in DRGE, while XGBoost (eXtreme Gradient Boosting) and SHAP (SHapley Additive exPlanations) methods were utilized to explore the impacts of built environment factors on DRGE. We found that daytime and nighttime DRGE values differed significantly, with higher values after 6:00 than at night. Mean DRGE on weekends was about 1.5 times higher than on workdays, and DRGE in high-priced blocks was about twice that in low-priced blocks. More than 68% of residents in high-priced blocks experienced over 8 h of green space exposure during weekend nighttime (especially around 19:00), much higher than in low-priced blocks. Moreover, spatial inequality in residents’ green space exposure was more pronounced on weekends than on workdays, with lower-priced blocks exhibiting greater inequality (Palma ratio: 0.445 vs. 0.385). Furthermore, green space morphology, quantity, and population density were identified as the critical factors affecting DRGE. The optimal threshold for Percent of Landscape (PLAND) was 25–70%, while building density, height, and Sky View Factor (SVF) were negatively correlated with DRGE. These findings address current research gaps by considering population mobility, capturing green space supply and demand inequities, and providing scientific decision-making support for future urban green space equality and planning.
(This article belongs to the Special Issue Remote Sensing Applications in Urban Environment and Climate)
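
For reference, the Palma ratio reported above (0.445 vs. 0.385) is the exposure share of the top 10% divided by that of the bottom 40%; a minimal sketch on synthetic DRGE values follows (variable names are illustrative, not the paper's code).

```python
# Sketch: Palma ratio of a per-resident exposure distribution, i.e. the share
# of total exposure held by the top 10% divided by the share held by the
# bottom 40%. The synthetic gamma distribution is a stand-in for DRGE values.
import numpy as np

def palma_ratio(values):
    v = np.sort(np.asarray(values, float))
    n, total = len(v), v.sum()
    bottom40 = v[: int(0.4 * n)].sum() / total
    top10 = v[int(0.9 * n):].sum() / total
    return top10 / bottom40

drge = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=10_000)
print(f"Palma ratio: {palma_ratio(drge):.3f}")
```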

21 pages, 9302 KB  
Article
Research on Small Object Detection in Degraded Visual Scenes: An Improved DRF-YOLO Algorithm Based on YOLOv11
by Yan Gu, Lingshan Chen and Tian Su
World Electr. Veh. J. 2025, 16(11), 591; https://doi.org/10.3390/wevj16110591 - 23 Oct 2025
Abstract
Object detection in degraded environments such as low-light and nighttime conditions remains a challenging task, as conventional computer vision techniques often fail to achieve high precision and robust performance. With the increasing adoption of deep learning, this paper aims to enhance object detection under such adverse conditions by proposing an improved version of YOLOv11, named DRF-YOLO (Degradation-Robust and Feature-enhanced YOLO). The proposed framework incorporates three innovative components: (1) a lightweight Cross Stage Partial Multi-Scale Edge Enhancement (CSP-MSEE) module that combines multi-scale feature extraction with edge enhancement to strengthen feature representation; (2) a Focal Modulation attention mechanism that improves the network’s responsiveness to target regions and contextual information; and (3) a self-developed Dynamic Interaction Head (DIH) that enhances detection accuracy and spatial adaptability for small objects. In addition, a lightweight unsupervised image enhancement algorithm, Zero-DCE (Zero-Reference Deep Curve Estimation), is applied prior to training to improve image contrast and detail, and Generalized Intersection over Union (GIoU) is employed as the bounding box regression loss. To evaluate the effectiveness of DRF-YOLO, experiments are conducted on two representative low-light datasets: ExDark and the nighttime subset of BDD100K, which include images of vehicles, pedestrians, and other road objects. Results show that DRF-YOLO achieves improvements of 3.4% and 2.3% in mAP@0.5 on the two datasets compared with the original YOLOv11, demonstrating enhanced robustness and accuracy in degraded environments while maintaining lightweight efficiency.
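
The GIoU loss adopted above penalizes non-overlapping boxes by the empty area of their smallest enclosing box; a self-contained sketch for axis-aligned (x1, y1, x2, y2) boxes follows (the training loss would be 1 - GIoU).

```python
# Sketch: Generalized IoU for two axis-aligned boxes.
def giou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection and union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # IoU = 1/7, GIoU = 1/7 - 2/9 ~ -0.079
```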

25 pages, 18442 KB  
Article
Exploring the Spatial Coupling Between Visual and Ecological Sensitivity: A Cross-Modal Approach Using Deep Learning in Tianjin’s Central Urban Area
by Zhihao Kang, Chenfeng Xu, Yang Gu, Lunsai Wu, Zhiqiu He, Xiaoxu Heng, Xiaofei Wang and Yike Hu
Land 2025, 14(11), 2104; https://doi.org/10.3390/land14112104 - 23 Oct 2025
Abstract
Amid rapid urbanization, Chinese cities face mounting ecological pressure, making it critical to balance environmental protection with public well-being. As visual perception accounts for over 80% of environmental information acquisition, it plays a key role in shaping experiences and evaluations of ecological space. However, current ecological planning often overlooks public perception, leading to increasing mismatches between ecological conditions and spatial experiences. While previous studies have attempted to introduce public perspectives, a systematic framework for analyzing the spatial relationship between ecological and visual sensitivity remains lacking. This study uses 56,210 street-level points in Tianjin’s central urban area to construct a coordinated analysis framework of ecological and perceptual sensitivity. Visual sensitivity is derived from social media sentiment analysis (via GPT-4o) and street-view image semantic features extracted using the ADE20K semantic segmentation model, and subsequently processed through a Multilayer Perceptron (MLP) model. Ecological sensitivity is calculated using an Analytic Hierarchy Process (AHP)-based model integrating elevation, slope, normalized difference vegetation index (NDVI), land use, and nighttime light data. A coupling coordination model and bivariate Moran’s I are employed to examine spatial synergy and mismatches between the two dimensions. Results indicate that while 72.82% of points show good coupling, spatial mismatches are widespread. The dominant types include “HL” (high visual–low ecological) areas (e.g., Wudadao) with high visual attention but low ecological resilience, and “LH” (low visual–high ecological) areas (e.g., Huaiyuanli) with strong ecological value but low public perception. This study provides a systematic path for analyzing the spatial divergence between ecological and perceptual sensitivity, offering insights into ecological landscape optimization and perception-driven street design.
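
A common form of the two-subsystem coupling coordination model is sketched below for the visual (U1) and ecological (U2) sensitivity scores; the paper's exact specification is not given in the abstract, so the formula variant and the equal weights are assumptions.

```python
# Sketch: a widely used two-subsystem coupling coordination degree.
# C is the coupling degree, T a weighted comprehensive index, D = sqrt(C*T).
import numpy as np

def coupling_coordination(u1, u2, alpha=0.5, beta=0.5):
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    c = 2.0 * np.sqrt(u1 * u2) / (u1 + u2)  # coupling degree in [0, 1]
    t = alpha * u1 + beta * u2              # comprehensive coordination index
    return np.sqrt(c * t)                   # coordination degree D

print(coupling_coordination(0.8, 0.3))  # e.g., a high visual / low ecological point
```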

14 pages, 13455 KB  
Article
Enhancing 3D Monocular Object Detection with Style Transfer for Nighttime Data Augmentation
by Alexandre Evain, Firas Jendoubi, Redouane Khemmar, Sofiane Ahmedali and Mathieu Orzalesi
Appl. Sci. 2025, 15(20), 11288; https://doi.org/10.3390/app152011288 - 21 Oct 2025
Abstract
Monocular 3D object detection (Mono3D) is essential for autonomous driving and augmented reality, yet its performance degrades significantly at night due to the scarcity of annotated nighttime data. In this paper, we investigate the use of style transfer for nighttime data augmentation and evaluate its effect on individual components of 3D detection. Using CycleGAN, we generated synthetic night images from daytime scenes in the nuScenes dataset and trained a modular Mono3D detector under different configurations. Our results show that training solely on style-transferred images improves certain metrics, such as AP@0.95 (from 0.0299 to 0.0778, a 160% increase) and depth error (11% reduction), compared to daytime-only baselines. However, performance on orientation and dimension estimation deteriorates. When real nighttime data is included, style transfer provides complementary benefits: for cars, depth error decreases from 0.0414 to 0.021, and AP@0.95 remains stable at 0.66; for pedestrians, AP@0.95 improves by 13% (0.297 to 0.336) with a 35% reduction in depth error. Cyclist detection remains unreliable due to limited samples. We conclude that style transfer cannot replace authentic nighttime data, but when combined with it, it reduces false positives and improves depth estimation, leading to more robust detection under low-light conditions. This study highlights both the potential and the limitations of style transfer for augmenting Mono3D training, and it points to future research on more advanced generative models and broader object categories.
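
The relative changes quoted above follow directly from the raw metric values; a quick check:

```python
# Verify the reported relative changes from the abstract's raw numbers.
def pct_change(before, after):
    return 100.0 * (after - before) / before

print(f"{pct_change(0.0299, 0.0778):.0f}%")  # ~160% increase in AP@0.95
print(f"{pct_change(0.297, 0.336):.0f}%")    # ~13% increase for pedestrians
```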

19 pages, 3709 KB  
Article
Evaluating the Influence of Aerosol Optical Depth on Satellite-Derived Nighttime Light Radiance in Asian Megacities
by Hyeryeong Park, Jaemin Kim and Yun Gon Lee
Remote Sens. 2025, 17(20), 3492; https://doi.org/10.3390/rs17203492 - 21 Oct 2025
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) provides invaluable nighttime light (NTL) radiance data, widely employed for diverse applications including urban and socioeconomic studies. However, the inherent reliability of NTL data as a proxy for socioeconomic activities is significantly compromised by atmospheric conditions, particularly aerosols. This study analyzed the long-term spatiotemporal variations in NTL radiance with respect to atmospheric aerosol optical depth (AOD) in nine major Asian cities from January 2012 to May 2021. Our findings reveal a complex and heterogeneous interplay between NTL radiance and AOD, fundamentally influenced by a region’s unique atmospheric characteristics and developmental stages. While major East Asian cities (e.g., Beijing, Tokyo, Seoul) exhibited a statistically significant inverse correlation, indicating aerosol-induced NTL suppression, other regions showed different patterns. For instance, the rapidly urbanizing city of Dhaka displayed a statistically significant positive correlation, suggesting a concurrent increase in NTL and AOD due to intensified urban activities. This highlights that the NTL-AOD relationship is not solely a physical phenomenon but is also shaped by independent socioeconomic processes. These results underscore the critical importance of comprehensively understanding these regional discrepancies for the reliable interpretation and effective reconstruction of NTL radiance data. By providing nuanced insights into how atmospheric aerosols influence NTL measurements in diverse urban settings, this research aims to enhance the utility and robustness of satellite-derived NTL data for effective socioeconomic analyses.
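
At its core, the reported city-level relationship is a correlation between monthly NTL radiance and AOD series; a minimal sketch on synthetic stand-in data (113 months spans Jan 2012 to May 2021; the study's actual series are VIIRS DNB radiance and AOD retrievals).

```python
# Sketch: per-city Pearson correlation between monthly AOD and NTL radiance.
# Synthetic series mimic the inverse relation reported for East Asian cities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
aod = rng.gamma(2.0, 0.3, size=113)          # monthly AOD, Jan 2012 to May 2021
ntl = 50 - 8 * aod + rng.normal(0, 2, 113)   # NTL suppressed by aerosols

r, p = stats.pearsonr(aod, ntl)
print(f"r = {r:.2f}, p = {p:.3g}")           # significant negative correlation
```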

24 pages, 4921 KB  
Article
YOLOv11-DCFNet: A Robust Dual-Modal Fusion Method for Infrared and Visible Road Crack Detection in Weak- or No-Light Illumination Environments
by Xinbao Chen, Yaohui Zhang, Junqi Lei, Lelin Li, Lifang Liu and Dongshui Zhang
Remote Sens. 2025, 17(20), 3488; https://doi.org/10.3390/rs17203488 - 20 Oct 2025
Abstract
Road cracks represent a significant challenge that impacts the long-term performance and safety of transportation infrastructure. Early identification of these cracks is crucial for effective road maintenance management. However, traditional crack recognition methods that rely on visible light images often experience substantial performance degradation in weak-light environments, such as at night or within tunnels. This degradation is characterized by blurred or deficient image textures, indistinct target edges, and reduced detection accuracy, which hinders the ability to achieve reliable all-weather target detection. To address these challenges, this study introduces a dual-modal crack detection method named YOLOv11-DCFNet. This method is based on an enhanced YOLOv11 architecture and incorporates a Cross-Modality Fusion Transformer (CFT) module. It establishes a dual-branch feature extraction structure that utilizes both infrared and visible light within the original YOLOv11 framework, effectively leveraging the high contrast capabilities of thermal infrared images to detect cracks under weak- or no-light conditions. The experimental results demonstrate that the proposed YOLOv11-DCFNet method significantly outperforms the single-modal model (YOLOv11-RGB) in both weak-light and no-light scenarios. Under weak-light conditions, the fusion model effectively utilizes the weak texture features of RGB images alongside the thermal radiation information from infrared (IR) images. This leads to an improvement in Precision from 83.8% to 95.3%, Recall from 81.5% to 90.5%, mAP@0.5 from 84.9% to 92.9%, and mAP@0.5:0.95 from 41.7% to 56.3%, thereby enhancing both detection accuracy and quality. In no-light conditions, the RGB single modality performs poorly due to the absence of visible light information, with an mAP@0.5 of only 67.5%. However, by incorporating IR thermal radiation features, the fusion model enhances Precision, Recall, and mAP@0.5 to 95.3%, 90.5%, and 92.9%, respectively, maintaining high detection accuracy and stability even in extreme no-light environments. The results of this study indicate that YOLOv11-DCFNet exhibits strong robustness and generalization ability across various low illumination conditions, providing effective technical support for night-time road maintenance and crack monitoring systems.
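
As a rough illustration of dual-branch infrared-visible fusion, the sketch below concatenates tokens from two convolutional stems and mixes them with a standard transformer layer; this is a generic stand-in, not the authors' CFT module, and every layer choice here is an assumption.

```python
# Sketch: generic token-level fusion of RGB and IR feature maps with a
# standard transformer encoder layer (a stand-in for the CFT idea).
import torch
import torch.nn as nn

class DualModalFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.rgb_stem = nn.Conv2d(3, channels, 3, padding=1)  # visible branch
        self.ir_stem = nn.Conv2d(1, channels, 3, padding=1)   # thermal branch
        self.mixer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, batch_first=True)

    def forward(self, rgb, ir):
        f_rgb = self.rgb_stem(rgb).flatten(2).transpose(1, 2)  # (B, HW, C)
        f_ir = self.ir_stem(ir).flatten(2).transpose(1, 2)
        tokens = torch.cat([f_rgb, f_ir], dim=1)               # joint token sequence
        return self.mixer(tokens).mean(dim=1)                  # pooled fused feature

feat = DualModalFusion()(torch.rand(2, 3, 32, 32), torch.rand(2, 1, 32, 32))
print(feat.shape)  # torch.Size([2, 64])
```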

12 pages, 556 KB  
Article
Difficulty in Attention Switching and Its Neural Basis in Problematic Smartphone Use
by Nanase Kobayashi, Daisuke Jitoku, Toshitaka Hamamura, Masaru Honjo, Yusei Yamaguchi, Masaaki Shimizu, Shunsuke Takagi, Junya Fujino, Genichi Sugihara and Hidehiko Takahashi
Brain Sci. 2025, 15(10), 1100; https://doi.org/10.3390/brainsci15101100 - 13 Oct 2025
Abstract
Background: Problematic smartphone use (PSU) involves excessive smartphone engagement that disrupts daily functioning and is linked to attentional control deficits and altered reward processing. The nucleus accumbens (NAcc), a key structure in the reward system, may contribute to difficulty disengaging from rewarding digital content. This study examined relationships between NAcc volume, attentional switching, and objectively measured nighttime screen time in individuals with PSU. Methods: Fifty-three participants (aged ≥ 13 years) from an outpatient internet dependency clinic completed psychological assessments, brain MRI, and smartphone logging. PSU was diagnosed by two psychiatrists. Attentional switching was measured via the Autism Spectrum Quotient subscale. Nighttime screen time (00:00–06:00) was recorded via smartphone. MRI-derived NAcc volumes were normalized to total gray matter volume. Correlations, multiple regression (controlling for ASD and ADHD), and mediation analyses were conducted. Results: Difficulty in attention switching correlated with larger right NAcc volume (r = 0.45, p = 0.012) and increased nighttime screen time (r = 0.44, p = 0.014). Right NAcc volume also correlated with nighttime screen time (r = 0.46, p = 0.012). Regression showed right NAcc volume predicted nighttime screen time (β = 0.33, p = 0.022), whereas attentional switching was not significant. Mediation was not supported. Sensitivity analyses confirmed the associations. Conclusions: Larger right NAcc volume independently predicts prolonged nighttime smartphone use and is associated with impaired attentional switching in PSU. Structural variations in reward-related regions may underlie difficulty disengaging from digital content. Integrating neurobiological, cognitive, and behavioral measures offers a framework for understanding PSU.
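
The mediation analysis mentioned above corresponds to a product-of-coefficients test; a sketch with a percentile bootstrap is given below for one plausible arrangement (X = attention-switching difficulty, M = right NAcc volume, Y = nighttime screen time). The data are synthetic and the model arrangement is an assumption, since the abstract does not spell out the specification.

```python
# Sketch: product-of-coefficients mediation (indirect effect a*b) with a
# percentile bootstrap, on synthetic data sized like the study's sample.
import numpy as np

rng = np.random.default_rng(1)
n = 53
x = rng.normal(size=n)                       # attention-switching difficulty
m = 0.45 * x + rng.normal(size=n)            # right NAcc volume (normalized)
y = 0.33 * m + 0.1 * x + rng.normal(size=n)  # nighttime screen time

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a = np.linalg.lstsq(np.column_stack([np.ones(n), x[i]]),
                        m[i], rcond=None)[0][1]          # path X -> M
    b = np.linalg.lstsq(np.column_stack([np.ones(n), m[i], x[i]]),
                        y[i], rcond=None)[0][1]          # path M -> Y given X
    boot.append(a * b)                                   # indirect effect

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")   # CI spanning 0 => unsupported
```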

24 pages, 16680 KB  
Article
Research on Axle Type Recognition Technology for Under-Vehicle Panorama Images Based on Enhanced ORB and YOLOv11
by Xiaofan Feng, Lu Peng, Yu Tang, Chang Liu and Huazhen An
Sensors 2025, 25(19), 6211; https://doi.org/10.3390/s25196211 - 7 Oct 2025
Abstract
With the strict requirements of national policies on truck dimensions, axle loads, and weight limits, along with the implementation of tolls based on vehicle types, rapid and accurate identification of vehicle axle types has become essential for toll station management. To address the limitations of existing methods in distinguishing between drive and driven axles, complex equipment setup, and image evidence retention, this article proposes a panoramic image detection technology for vehicle chassis based on enhanced ORB and YOLOv11. A portable vehicle chassis image acquisition system, based on area array cameras, was developed for rapid on-site deployment within 20 min, eliminating the requirement for embedded installation. The FeatureBooster (FB) module was employed to optimize the ORB algorithm’s feature matching, combined with keyframe technology to achieve high-quality panoramic image stitching. After fine-tuning the FB model on a domain-specific area scan dataset, the number of feature matches increased to 151 ± 18, substantially outperforming both the pre-trained FB model and the baseline ORB. Experimental results on axle type recognition using the YOLOv11 algorithm combined with ORB and FB features demonstrated that the integrated approach achieved superior performance. On the overall test set, the model attained an mAP@50 of 0.989 and an mAP@50:95 of 0.780, along with a precision (P) of 0.98 and a recall (R) of 0.99. In nighttime scenarios, it maintained an mAP@50 of 0.977 and an mAP@50:95 of 0.743, with precision and recall consistently at 0.98 and 0.99, respectively. Field verification shows that the system’s real-time performance and accuracy can support axle type recognition at toll stations.
(This article belongs to the Section Sensing and Imaging)
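
The ORB matching baseline that FeatureBooster improves on can be reproduced with OpenCV in a few lines; the frames below are random stand-ins, and FeatureBooster's learned descriptor re-encoding is not shown.

```python
# Sketch: baseline ORB feature matching between two adjacent chassis frames.
# Binary ORB descriptors are matched with Hamming distance and cross-checking.
import cv2
import numpy as np

img1 = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frames
img2 = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")
```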

34 pages, 13615 KB  
Article
Seamless Reconstruction of MODIS Land Surface Temperature via Multi-Source Data Fusion and Multi-Stage Optimization
by Yanjie Tang, Yanling Zhao, Yueming Sun, Shenshen Ren and Zhibin Li
Remote Sens. 2025, 17(19), 3374; https://doi.org/10.3390/rs17193374 - 7 Oct 2025
Abstract
Land Surface Temperature (LST) is a critical variable for understanding land–atmosphere interactions and is widely applied in urban heat monitoring, evapotranspiration estimation, near-surface air temperature modeling, soil moisture assessment, and climate studies. MODIS LST products, with their global coverage, long-term consistency, and radiometric calibration, are a major source of LST data. However, frequent data gaps caused by cloud contamination and atmospheric interference severely limit their applicability in analyses requiring high spatiotemporal continuity. This study presents a seamless MODIS LST reconstruction framework that integrates multi-source data fusion and a multi-stage optimization strategy. The method consists of three key components: (1) topography- and land cover-constrained spatial interpolation, which preliminarily fills orbit-induced gaps using elevation and land cover similarity criteria; (2) pixel-level LST reconstruction via random forest (RF) modeling with multi-source predictors (e.g., NDVI, NDWI, surface reflectance, DEM, land cover), coupled with HANTS-based temporal smoothing to enhance temporal consistency and seasonal fidelity; and (3) Poisson-based image fusion, which ensures spatial continuity and smooth transitions without compromising temperature gradients. Experiments conducted over two representative regions—Huainan and Jining—demonstrate the superior performance of the proposed method under both daytime and nighttime scenarios. The integrated approach (Step 3) achieves high accuracy, with correlation coefficients (CCs) exceeding 0.95 and root mean square errors (RMSEs) below 2K, outperforming conventional HANTS and standalone interpolation methods. Cross-validation with high-resolution Landsat LST further confirms the method’s ability to retain spatial detail and cross-scale consistency. Overall, this study offers a robust and generalizable solution for reconstructing MODIS LST with high spatial and temporal fidelity. The framework holds strong potential for broad applications in land surface process modeling, regional climate studies, and urban thermal environment analysis.
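
Component (2) above is, at its core, supervised gap-filling: fit a regressor on clear-sky pixels and predict the gaps. A minimal sketch with the abstract's predictor set and synthetic placeholder data (the relationships coded below are invented for illustration):

```python
# Sketch: pixel-level random forest gap-filling for LST.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),        # NDVI
    rng.uniform(-0.5, 0.5, n),   # NDWI
    rng.uniform(0, 0.4, n),      # surface reflectance
    rng.uniform(0, 500, n),      # DEM (m)
])
lst = 310 - 8 * X[:, 0] - 0.01 * X[:, 3] + rng.normal(0, 1, n)  # synthetic LST (K)

clear = rng.random(n) > 0.3      # observed (clear-sky) pixels
rf = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X[clear], lst[clear])
filled = rf.predict(X[~clear])   # reconstructed values for cloud-gap pixels
print(filled[:5])
```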

25 pages, 13151 KB  
Article
Adaptive Energy–Gradient–Contrast (EGC) Fusion with AIFI-YOLOv12 for Improving Nighttime Pedestrian Detection in Security
by Lijuan Wang, Zuchao Bao and Dongming Lu
Appl. Sci. 2025, 15(19), 10607; https://doi.org/10.3390/app151910607 - 30 Sep 2025
Abstract
In security applications, visible-light pedestrian detectors are highly sensitive to changes in illumination and fail under low-light or nighttime conditions, while infrared sensors, though resilient to lighting, often produce blurred object boundaries that hinder precise localization. To address these complementary limitations, we propose a practical multimodal pipeline—Adaptive Energy–Gradient–Contrast (EGC) Fusion with AIFI-YOLOv12—that first fuses infrared and low-light visible images using per-pixel weights derived from local energy, gradient magnitude and contrast measures, then detects pedestrians with an improved YOLOv12 backbone. The detector integrates an AIFI attention module at high semantic levels, replaces selected modules with A2C2f blocks to enhance cross-channel feature aggregation, and preserves P3–P5 outputs to improve small-object localization. We evaluate the complete pipeline on the LLVIP dataset and report Precision, Recall, mAP@50, mAP@50–95, GFLOPs, FPS and detection time, comparing against YOLOv8, YOLOv10–YOLOv12 baselines (n and s scales). Quantitative and qualitative results show that the proposed fusion restores complementary thermal and visible details and that the AIFI-enhanced detector yields more robust nighttime pedestrian detection while maintaining a competitive computational profile suitable for real-world security deployments.
(This article belongs to the Special Issue Advanced Image Analysis and Processing Technologies and Applications)
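
The per-pixel weighting idea can be sketched as follows: compute local energy, gradient, and contrast maps for each modality, and weight the fused pixel toward the more salient source. The specific measure definitions and their equal combination below are assumptions, not the paper's adaptive EGC rule.

```python
# Sketch: energy/gradient/contrast-driven per-pixel fusion of IR and visible.
import numpy as np
from scipy import ndimage

def local_cues(img, size=7):
    energy = ndimage.uniform_filter(img ** 2, size)               # local energy
    gy, gx = np.gradient(img)
    gradient = ndimage.uniform_filter(np.hypot(gx, gy), size)     # gradient magnitude
    mean = ndimage.uniform_filter(img, size)
    var = np.maximum(ndimage.uniform_filter(img ** 2, size) - mean ** 2, 0)
    contrast = np.sqrt(var)                                       # local std as contrast
    return energy + gradient + contrast                           # combined salience

def egc_fuse(ir, vis):
    s_ir, s_vis = local_cues(ir), local_cues(vis)
    w = s_ir / (s_ir + s_vis + 1e-8)     # per-pixel weight for the IR branch
    return w * ir + (1 - w) * vis

fused = egc_fuse(np.random.rand(128, 128), np.random.rand(128, 128))
```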

19 pages, 3619 KB  
Article
Surface Urban Heat Island Risk Index Computation Using Remote-Sensed Data and Meta Population Dataset on Naples Urban Area (Italy)
by Massimo Musacchio, Alessia Scalabrini, Malvina Silvestri, Federico Rabuffi and Antonio Costanzo
Remote Sens. 2025, 17(19), 3306; https://doi.org/10.3390/rs17193306 - 26 Sep 2025
Abstract
Extreme climate events such as heatwaves are becoming more frequent and pose serious challenges in cities. Urban areas are particularly vulnerable because built surfaces absorb and release heat, while human activities generate additional greenhouse gases. This increases health risks, making it crucial to study population exposure to heat stress. This research focuses on Naples, Italy’s most densely populated city, where intense human activity and unique geomorphological conditions influence local temperatures. The presence of a Surface Urban Heat Island (SUHI) is assessed by deriving high-resolution Land Surface Temperature (LST) in a time series ranging from 2013 to 2023, processed with the Statistical Mono Window (SMW) algorithm in the Google Earth Engine (GEE) environment. SMW needs brightness temperature (Tb) extracted from the Landsat 8 (L8) Thermal InfraRed Sensor (TIRS), emissivity from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Emissivity Dataset (ASTER GED), and atmospheric correction coefficients from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR). A total of 64 nighttime images were processed and analyzed to assess long-term trends and identify the main heat islands in Naples. The hottest image was compared with population data, including demographic categories such as children, elderly people, and pregnant women. A risk index was calculated by combining temperature values, exposure levels, and the vulnerability of each group. Results identified three major heat islands, showing that risk is strongly linked to both population density and heat island distribution. Incorporating Local Climate Zone (LCZ) classification further highlighted the urban areas most prone to extreme heat based on morphology.
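
The risk index described above combines a heat hazard with exposure and vulnerability; a minimal raster sketch with a multiplicative combination and min-max normalization, both of which are illustrative assumptions:

```python
# Sketch: hazard x exposure x vulnerability risk index on normalized rasters.
import numpy as np

def normalize(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

rng = np.random.default_rng(0)
lst = rng.uniform(290, 315, (100, 100))   # nighttime LST (K), the hazard layer
pop = rng.gamma(2.0, 500.0, (100, 100))   # population density, the exposure layer
vuln = rng.uniform(0, 1, (100, 100))      # share of vulnerable groups per cell

risk = normalize(lst) * normalize(pop) * normalize(vuln)
print(f"high-risk cells: {(risk > 0.5).sum()}")
```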

22 pages, 4173 KB  
Article
A Novel Nighttime Sea Fog Detection Method Based on Generative Adversarial Networks
by Wuyi Qiu, Xiaoqun Cao and Shuo Ma
Remote Sens. 2025, 17(19), 3285; https://doi.org/10.3390/rs17193285 - 24 Sep 2025
Abstract
Nighttime sea fog exhibits high frequency and prolonged duration, posing significant risks to maritime navigation safety. Current detection methods primarily rely on the dual-infrared channel brightness temperature difference technique, which faces challenges such as threshold selection difficulties and a tendency toward overestimation. In contrast, the VIIRS Day/Night Band (DNB) offers exceptional nighttime visible-like cloud imaging capabilities, offering a new solution to alleviate the overestimation issues inherent in infrared detection algorithms. Recent advances in artificial intelligence have further addressed the threshold selection problem in traditional detection methods. Leveraging these developments, this study proposes a novel generative adversarial network model incorporating attention mechanisms (SEGAN) to achieve accurate nighttime sea fog detection using DNB data. Experimental results demonstrate that SEGAN achieves satisfactory performance, with probability of detection, false alarm rate, and critical success index reaching 0.8708, 0.0266, and 0.7395, respectively. Compared with the operational infrared detection algorithm, these metrics show improvements of 0.0632, 0.0287, and 0.1587. Notably, SEGAN excels at detecting sea fog obscured by thin cloud cover, a scenario where conventional infrared detection algorithms typically fail. SEGAN emphasizes semantic consistency in its output, endowing it with enhanced robustness across varying sea fog concentrations. Full article
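
The three scores quoted for SEGAN come from a pixel-wise contingency table of hits, false alarms, and misses; a compact sketch follows, using one common convention for the false-alarm measure since the abstract does not spell out its exact definition.

```python
# Sketch: probability of detection (POD), false alarm ratio (FAR, one common
# convention), and critical success index (CSI) from boolean masks.
import numpy as np

def pod_far_csi(pred, truth):
    hits = np.sum(pred & truth)
    false_alarms = np.sum(pred & ~truth)
    misses = np.sum(~pred & truth)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

rng = np.random.default_rng(0)
truth = rng.random((256, 256)) < 0.2                 # synthetic fog mask
pred = truth ^ (rng.random((256, 256)) < 0.03)       # prediction with ~3% flips
print(pod_far_csi(pred, truth))
```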

33 pages, 14767 KB  
Article
Night-to-Day Image Translation with Road Light Attention Training for Traffic Information Detection
by Ye-Jin Lee, Young-Ho Go, Seung-Hwan Lee, Dong-Min Son and Sung-Hak Lee
Mathematics 2025, 13(18), 2998; https://doi.org/10.3390/math13182998 - 16 Sep 2025
Abstract
Deep learning methods based on generative adversarial networks (GANs) are useful for improving object visibility in nighttime driving environments, but they often fail to preserve critical road information such as traffic light colors and vehicle lighting. This paper proposes a method to address this by utilizing both unpaired and four-channel paired training modules. The unpaired module performs the primary night-to-day conversion, while the paired module, enhanced with a fourth channel, focuses on preserving road details. Our key contribution is an inverse road light attention (RLA) map, which acts as this fourth channel to explicitly guide the network’s learning. This map also facilitates a final cross-blending process, synthesizing the results from both modules to maximize their respective advantages. Experimental results demonstrate that our approach more accurately preserves lane markings and traffic light colors. Furthermore, quantitative analysis confirms that our method achieves superior performance across eight no-reference image quality metrics compared to existing techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
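
The final cross-blending step can be sketched as a per-pixel convex combination driven by an attention map; how the inverse RLA map is actually derived from light sources is not reproduced here, and this particular weighting rule is an assumption.

```python
# Sketch: cross-blending the two modules' day-like outputs with an attention
# map A in [0, 1] (playing the role of the RLA-derived weight).
import numpy as np

def cross_blend(day_unpaired, day_paired, attention):
    a = attention[..., None]  # broadcast the weight over RGB channels
    # High attention (lights, signals): trust the detail-preserving paired
    # output; elsewhere: trust the global unpaired night-to-day translation.
    return a * day_paired + (1.0 - a) * day_unpaired

h, w = 256, 256
out = cross_blend(np.random.rand(h, w, 3), np.random.rand(h, w, 3),
                  np.random.rand(h, w))
```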

24 pages, 11967 KB  
Article
Smartphone-Based Edge Intelligence for Nighttime Visibility Estimation in Smart Cities
by Chengyuan Duan and Shiqi Yao
Electronics 2025, 14(18), 3642; https://doi.org/10.3390/electronics14183642 - 15 Sep 2025
Abstract
Impaired visibility, a major global environmental threat, is a result of light scattering by atmospheric particulate matter. While digital photographs are increasingly used for daytime visibility estimation, such methods are largely ineffective at night owing to the different scattering effects. Here, we introduce an image-based algorithm for inferring nighttime visibility from a single photograph by analyzing the forward scattering index and optical thickness retrieved from glow effects around light sources. Using photographs crawled from social media platforms across mainland China, we estimated the nationwide visibility for one year using the proposed algorithm, achieving high goodness-of-fit values (R2 = 0.757; RMSE = 4.318 km), demonstrating robust performance under various nighttime scenarios. The model also captures both chronic and episodic visibility degradation, including localized pollution events. These results highlight the potential of using ubiquitous smartphone photography as a low-cost, scalable, and real-time sensing solution for nighttime atmospheric monitoring in urban areas.
(This article belongs to the Special Issue Advanced Edge Intelligence in Smart Environments)
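
The goodness-of-fit values quoted above (R2 = 0.757; RMSE = 4.318 km) are standard regression diagnostics; a sketch of their computation on synthetic predicted-versus-observed visibility:

```python
# Sketch: R^2 and RMSE for predicted vs. observed visibility (km).
import numpy as np

def r2_rmse(pred, obs):
    resid = obs - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(0)
obs = rng.uniform(1, 30, 500)          # station-observed visibility (synthetic)
pred = obs + rng.normal(0, 4.3, 500)   # model estimates (synthetic)
r2, rmse = r2_rmse(pred, obs)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.3f} km")
```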
