Search Results (46)

Search Parameters:
Keywords = snow architecture

21 pages, 3538 KB  
Article
Mobile AI-Powered Impurity Removal System for Decentralized Potato Harvesting
by Joonam Kim, Kenichi Tokuda, Yuichiro Miho, Giryeon Kim, Rena Yoshitoshi, Shinori Tsuchiya, Noriko Deguchi and Kunihiro Funabiki
Agronomy 2026, 16(3), 383; https://doi.org/10.3390/agronomy16030383 - 5 Feb 2026
Abstract
An advanced artificial intelligence (AI)-powered mobile automated impurity removal system was developed and integrated into potato harvesting machinery for decentralized agricultural environments in Japan. Unlike existing stationary AI systems in centralized processing facilities, this mobile prototype enables on-field impurity removal in real time through a systematic dual-evaluation methodology. The system integrates the YOLOX-small architecture with precision pneumatic actuators and achieves 40–50 FPS processing under dynamic field conditions. Algorithm validation across 10 morphologically diverse potato varieties (Danshaku, Harrow Moon, Hokkaikogane, Kitaakari, Kitahime, May Queen, Sayaka, Snowden, Snow March, and Toyoshiro) using count-based analysis showed exceptional recognition, with potato misclassification rates of 0.08 ± 0.03% (range: 0.01–0.32%) and impurity detection rates of 89.99 ± 1.25% (range: 80.00–93.30%). Cross-farm validation across seven commercial farms in Hokkaido confirmed robust algorithm consistency (PMR: 0.08 ± 0.03%, IDR: 90.56 ± 0.82%) without farm-specific calibration, establishing variety-independent and environment-independent operation. Field validation using weight-based analysis during actual harvesting at 1–4 km/h confirmed successful AI-to-field translation, with 0.22–0.42% potato misclassification and adaptive impurity removal of 71.43–85.29%. The system adapted intelligently, employing conservative sorting under high-impurity loads (71.43% removal, 0.33% misclassification) to prioritize potato preservation while maximizing efficiency under standard conditions (85.29% removal, 0.30% misclassification). The dual-evaluation framework successfully bridged the gap between AI accuracy in laboratory settings and effectiveness in agricultural operations. The proposed AI algorithm surpassed project targets for all tested conditions (>60% impurity removal, <1% potato misclassification). 
This successful integration demonstrates technical feasibility and commercial viability for widespread agricultural automation, with a validated 50% reduction in labor (four workers to two workers). This implementation provides a comprehensive validation methodology for next-generation autonomous harvesting systems. Full article
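The count-based PMR and IDR figures quoted above reduce to simple ratios over ejection counts. A minimal sketch of how such rates could be computed (function and variable names are ours, not from the paper):

```python
def potato_misclassification_rate(potatoes_ejected, potatoes_total):
    """PMR (%): share of good potatoes the sorter wrongly ejected."""
    return 100.0 * potatoes_ejected / potatoes_total

def impurity_detection_rate(impurities_ejected, impurities_total):
    """IDR (%): share of impurities (stones, clods) successfully ejected."""
    return 100.0 * impurities_ejected / impurities_total
```

A target of <1% PMR and >60% IDR, as stated in the abstract, is then a direct threshold check on these two values.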
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)

20 pages, 2333 KB  
Article
YOLOv11-TWCS: Enhancing Object Detection for Autonomous Vehicles in Adverse Weather Conditions Using YOLOv11 with TransWeather Attention
by Chris Michael and Hongjian Wang
Vehicles 2026, 8(1), 16; https://doi.org/10.3390/vehicles8010016 - 12 Jan 2026
Viewed by 317
Abstract
Object detection for autonomous vehicles under adverse weather conditions—such as rain, fog, snow, and low light—remains a significant challenge due to severe visual distortions that degrade image quality and obscure critical features. This paper presents YOLOv11-TWCS, an enhanced object detection model that integrates TransWeather, the Convolutional Block Attention Module (CBAM), and Spatial-Channel Decoupled Downsampling (SCDown) to improve feature extraction and emphasize critical features in weather-degraded scenes while maintaining real-time performance. Our approach addresses the dual challenges of weather-induced feature degradation and computational efficiency by combining adaptive attention mechanisms with optimized network architecture. Evaluations on DAWN, KITTI, and Udacity datasets show improved accuracy over baseline YOLOv11 and competitive performance against other state-of-the-art methods, achieving mAP@0.5 of 59.1%, 81.9%, and 88.5%, respectively. The model reduces parameters and GFLOPs by approximately 19–21% while sustaining high inference speed (105 FPS), making it suitable for real-time autonomous driving in challenging weather conditions. Full article
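The mAP@0.5 figures reported here count a detection as a true positive when its box overlaps ground truth with IoU ≥ 0.5. A self-contained sketch of that IoU test (boxes as `(x1, y1, x2, y2)`; helper names are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    """mAP@0.5 criterion: prediction matches ground truth at IoU >= 0.5."""
    return iou(pred, gt) >= thresh
```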

24 pages, 18607 KB  
Article
Robust Object Detection in Adverse Weather Conditions: ECL-YOLOv11 for Automotive Vision Systems
by Zhaohui Liu, Jiaxu Zhang, Xiaojun Zhang and Hongle Song
Sensors 2026, 26(1), 304; https://doi.org/10.3390/s26010304 - 2 Jan 2026
Viewed by 697
Abstract
The rapid development of intelligent transportation systems and autonomous driving technologies has made visual perception a key component in ensuring safety and improving efficiency in complex traffic environments. As a core task in visual perception, object detection directly affects the reliability of downstream modules such as path planning and decision control. However, adverse weather conditions (e.g., fog, rain, and snow) significantly degrade image quality—causing texture blurring, reduced contrast, and increased noise—which in turn weakens the robustness of traditional detection models and raises potential traffic safety risks. To address this challenge, this paper proposes an enhanced object detection framework, ECL-YOLOv11 (Edge-enhanced, Context-guided, and Lightweight YOLOv11), designed to improve detection accuracy and real-time performance under adverse weather conditions, thereby providing a reliable solution for in-vehicle perception systems. The ECL-YOLOv11 architecture integrates three key modules: (1) a Convolutional Edge-enhancement (CE) module that fuses edge features extracted by Sobel operators with convolutional features to explicitly retain boundary and contour information, thereby alleviating feature degradation and improving localization accuracy under low-visibility conditions; (2) a Context-guided Multi-scale Fusion Network (AENet) that enhances perception of small and distant objects through multi-scale feature integration and context modeling, improving semantic consistency and detection stability in complex scenes; and (3) a Lightweight Shared Convolutional Detection Head (LDHead) that adopts shared convolutions and GroupNorm normalization to optimize computational efficiency, reduce inference latency, and satisfy the real-time requirements of on-board systems. 
Experimental results show that ECL-YOLOv11 achieves mAP@50 and mAP@50–95 values of 62.7% and 40.5%, respectively, representing improvements of 1.3% and 0.8% over the baseline YOLOv11, while the Precision reaches 73.1%. The model achieves a balanced trade-off between accuracy and inference speed, operating at 237.8 FPS on standard hardware. Ablation studies confirm the independent effectiveness of each proposed module in feature enhancement, multi-scale fusion, and lightweight detection, while their integration further improves overall performance. Qualitative visualizations demonstrate that ECL-YOLOv11 maintains high-confidence detections across varying motion states and adverse weather conditions, avoiding category confusion and missed detections. These results indicate that the proposed framework provides a reliable and adaptable foundation for all-weather perception in autonomous driving systems, ensuring both operational safety and real-time responsiveness. Full article
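The CE module described above fuses Sobel edge responses with convolutional features. A toy pure-Python sketch of that idea on a 2D intensity grid (the fusion weight `alpha` and all names are our illustration, not the paper's implementation):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3x3(img, kernel):
    """Valid 3x3 convolution on the interior; borders stay zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

def edge_magnitude(img):
    """Sobel gradient magnitude sqrt(gx^2 + gy^2) per pixel."""
    gx, gy = conv3x3(img, SOBEL_X), conv3x3(img, SOBEL_Y)
    return [[(gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
             for x in range(len(img[0]))] for y in range(len(img))]

def edge_enhance(feat, img, alpha=0.1):
    """Add a weighted edge map onto a feature map (CE-style fusion sketch)."""
    mag = edge_magnitude(img)
    return [[feat[y][x] + alpha * mag[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]
```

A vertical step edge in the input then shows up as a strong band in the enhanced features, which is the boundary-retention effect the module targets.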
(This article belongs to the Section Sensing and Imaging)

21 pages, 3988 KB  
Article
Self-Supervised LiDAR Desnowing with 3D-KNN Blind-Spot Networks
by Junyi Li and Wangmeng Zuo
Remote Sens. 2026, 18(1), 17; https://doi.org/10.3390/rs18010017 - 20 Dec 2025
Viewed by 421
Abstract
Light Detection and Ranging (LiDAR) is fundamental to autonomous driving and robotics, as it provides reliable 3D geometric information. However, snowfall introduces numerous spurious reflections that corrupt range measurements and severely degrade downstream perception. Existing desnowing techniques either rely on handcrafted filtering rules that fail under varying snow densities, or require paired snowy–clean scans, which are nearly impossible to collect in real-world scenarios. Self-supervised LiDAR desnowing approaches address these challenges by projecting raw 3D point clouds into 2D range images and jointly training a point reconstruction network (PR-Net) and a reconstruction difficulty network (RD-Net). Nevertheless, these methods remain limited by their reliance on the outdated Noise2Void training paradigm, which restricts reconstruction quality. In this paper, we redesign PR-Net with a blind-spot architecture to overcome the limitation. Specifically, we introduce a 3D-KNN encoder that aggregates neighborhood features directly in Euclidean 3D space, ensuring geometrically consistent representations. Additionally, we integrate residual state-space blocks (RSSB) to capture long-range contextual dependencies with linear computational complexity. Extensive experiments on both synthetic and real-world datasets, including SnowyKITTI and WADS, demonstrate that our method outperforms state-of-the-art self-supervised desnowing approaches by up to 0.06 IoU while maintaining high computational efficiency. Full article
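The 3D-KNN encoder aggregates features from each point's nearest neighbors in Euclidean 3D space. The neighbor lookup itself can be sketched minimally as a brute-force search (a real pipeline would use a spatial index; names here are illustrative):

```python
def knn_3d(points, query, k):
    """Indices of the k points nearest to `query` by Euclidean distance."""
    qx, qy, qz = query
    dist2 = [((x - qx) ** 2 + (y - qy) ** 2 + (z - qz) ** 2, i)
             for i, (x, y, z) in enumerate(points)]
    dist2.sort()  # squared distance is order-preserving; no sqrt needed
    return [i for _, i in dist2[:k]]
```

Searching in 3D rather than on the 2D range image is what keeps snow points, which are spatially isolated, distinguishable from dense surface returns.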

17 pages, 1732 KB  
Article
Enhancing Endangered Feline Conservation in Asia via a Pose-Guided Deep Learning Framework for Individual Identification
by Weiwei Xiao, Wei Zhang and Haiyan Liu
Diversity 2025, 17(12), 853; https://doi.org/10.3390/d17120853 - 12 Dec 2025
Viewed by 454
Abstract
The re-identification of endangered felines is critical for species conservation and biodiversity assessment. This paper proposes the Pose-Guided Network with the Adaptive L2 Regularization (PGNet-AL2) framework to overcome key challenges in wild feline re-identification, such as extensive pose variations, small sample sizes, and inconsistent image quality. This framework employs a dual-branch architecture for multi-level feature extraction and incorporates an adaptive L2 regularization mechanism to optimize parameter learning, effectively mitigating overfitting in small-sample scenarios. Applying the proposed method to the Amur Tiger Re-identification in the Wild (ATRW) dataset, we achieve a mean Average Precision (mAP) of 91.3% in single-camera settings, outperforming the baseline PPbM-b (Pose Part-based Model) by 18.5 percentage points. To further evaluate its generalization, we apply it to a more challenging task, snow leopard re-identification, using a dataset of 388 infrared videos obtained from the Wildlife Conservation Society (WCS). Despite the poor quality of infrared videos, our method achieves a mAP of 94.5%. The consistent high performance on both the ATRW and snow leopard datasets collectively demonstrates the method’s strong generalization capability and practical utility. Full article
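The adaptive L2 regularization named above adds a weight-norm penalty to the training loss; in PGNet-AL2 the coefficient is adapted during training, whereas this sketch fixes it as a plain scalar purely for illustration:

```python
def l2_penalty(weights):
    """Sum of squared weights, the standard L2 term."""
    return sum(w * w for w in weights)

def regularized_loss(task_loss, weights, lam):
    """Total loss = task loss + lam * ||w||^2.
    In the paper lam is adaptive; here it is a constant for clarity."""
    return task_loss + lam * l2_penalty(weights)
```

Shrinking the weights this way is the standard mechanism for curbing overfitting in the small-sample regime the abstract describes.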

23 pages, 59318 KB  
Article
BAT-Net: Bidirectional Attention Transformer Network for Joint Single-Image Desnowing and Snow Mask Prediction
by Yongheng Zhang
Information 2025, 16(11), 966; https://doi.org/10.3390/info16110966 - 7 Nov 2025
Viewed by 458
Abstract
In the wild, snow is not merely additive noise; it is a non-stationary, semi-transparent veil whose spatial statistics vary with depth, illumination, and wind. Because conventional two-stage pipelines first detect a binary mask and then inpaint the occluded regions, any early mis-classification is irreversibly baked into the final result, leading to over-smoothed textures or ghosting artifacts. We propose BAT-Net, a Bidirectional Attention Transformer Network that frames desnowing as a coupled representation learning problem, jointly disentangling snow appearance and scene radiance in a single forward pass. Our core contributions are as follows: (1) A novel dual-decoder architecture where a background decoder and a snow decoder are coupled via a Bidirectional Attention Module (BAM). The BAM implements a continuous predict–verify–correct mechanism, allowing the background branch to dynamically accept, reject, or refine the snow branch’s occlusion hypotheses, dramatically reducing error accumulation. (2) A lightweight yet effective multi-scale feature fusion scheme comprising a Scale Conversion Module (SCM) and a Feature Aggregation Module (FAM), enabling the model to handle the large scale variance among snowflakes without a prohibitive computational cost. (3) The introduction of the FallingSnow dataset, curated to eliminate the label noise caused by irremovable ground snow in existing benchmarks, providing a cleaner benchmark for evaluating dynamic snow removal. Extensive experiments on synthetic and real-world datasets demonstrate that BAT-Net sets a new state of the art. It achieves a PSNR of 35.78 dB on the CSD dataset, outperforming the best prior model by 1.37 dB, and also achieves top results on SRRS (32.13 dB) and Snow100K (34.62 dB) datasets. The proposed method has significant practical applications in autonomous driving and surveillance systems, where accurate snow removal is crucial for maintaining visual clarity. Full article
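The PSNR numbers used to rank desnowing methods above follow the standard definition 10·log10(peak²/MSE). A minimal reference implementation over flat pixel sequences:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

The 1.37 dB gain over the prior best on CSD corresponds to a sizeable MSE reduction, since the scale is logarithmic.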
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)

23 pages, 4897 KB  
Article
Long Short-Term Memory (LSTM) Based Runoff Simulation and Short-Term Forecasting for Alpine Regions: A Case Study in the Upper Jinsha River Basin
by Feng Zhang, Jiajia Yue, Chun Zhou, Xuan Shi, Biqiong Wu and Tianqi Ao
Water 2025, 17(21), 3117; https://doi.org/10.3390/w17213117 - 30 Oct 2025
Cited by 1 | Viewed by 1728
Abstract
Runoff simulation and forecasting are of great significance for flood control, disaster mitigation, and water resource management. Alpine regions are characterized by complex terrain, diverse precipitation patterns, and strong snow-and-ice melt influences, making accurate runoff simulation particularly challenging yet crucial. To enhance predictive capability and model applicability, this study takes the Upper Jinsha River as a case study and comparatively evaluates the performance of the physics-based hydrological model BTOP and the data-driven deep learning models LSTM and BiLSTM in runoff simulation and short-term forecasting. The results indicate that for daily-scale runoff simulation, the LSTM and BiLSTM models demonstrated superior simulation capabilities, achieving Nash–Sutcliffe efficiency coefficients (NSE) of 0.82/0.81 (Zhimenda Station) and 0.87/0.86 (Gangtuo Station) during the test period. These values are significantly better than those of the BTOP model, which achieved a validation NSE of 0.57 at Zhimenda and 0.62 at Gangtuo. However, the hydrology-based structure of the BTOP model endowed it with greater stability in water balance and long-term simulation. In short-term forecasting (1–7 d), LSTM and BiLSTM performed comparably, with the bidirectional architecture of BiLSTM offering no significant advantage. For flood events, the data-driven models excelled at capturing peak timing and hydrograph shape, whereas the physical BTOP model demonstrated superior stability in flood peak magnitude. However, forecasts from the data-driven models also lacked hydrological consistency between upstream and downstream stations. In conclusion, the present study confirms that deep learning models achieve superior accuracy in runoff simulation compared to the physics-based BTOP model and effectively capture key flood characteristics, establishing their value as a powerful tool for hydrological applications in alpine regions. Full article
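The NSE scores used to compare the models above have a closed form: 1 minus the ratio of squared simulation error to the variance of the observations. A minimal sketch:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean; negative is worse."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```

Under this metric the reported LSTM/BiLSTM scores of 0.82–0.87 indicate the models explain most of the daily runoff variance, versus 0.57–0.62 for BTOP.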

22 pages, 2570 KB  
Article
CMAWRNet: Multiple Adverse Weather Removal via a Unified Quaternion Neural Architecture
by Vladimir Frants, Sos Agaian, Karen Panetta and Peter Huang
J. Imaging 2025, 11(11), 382; https://doi.org/10.3390/jimaging11110382 - 30 Oct 2025
Cited by 1 | Viewed by 664
Abstract
Images used in real-world applications such as image or video retrieval, outdoor surveillance, and autonomous driving suffer from poor weather conditions. When designing robust computer vision systems, removing adverse weather such as haze, rain, and snow is a significant problem. Recently, deep-learning methods have offered solutions for individual types of degradation, but current state-of-the-art universal methods struggle with combinations of degradations, such as haze and rain streaks. Few algorithms have been developed that perform well when presented with images containing multiple adverse weather conditions. This work focuses on developing an efficient solution for multiple adverse weather removal, using a unified quaternion neural architecture called CMAWRNet. It is based on a novel texture–structure decomposition block, a novel lightweight encoder–decoder quaternion transformer architecture, and an attentive fusion block with low-light correction. We also introduce a quaternion similarity loss function to better preserve color information. Quantitative and qualitative evaluation on current state-of-the-art benchmark datasets and real-world images shows the performance advantages of the proposed CMAWRNet compared to other state-of-the-art weather removal approaches dealing with multiple weather artifacts. Extensive computer simulations validate that CMAWRNet improves the performance of downstream applications, such as object detection. This is the first time the decomposition approach has been applied to the universal weather removal task. Full article
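Quaternion networks such as the one above treat a pixel's color channels as components of a single quaternion, so layer arithmetic is built on the Hamilton product rather than independent per-channel multiplies. The product itself (which is the only assumption here; this is not CMAWRNet's layer code) is:

```python
def hamilton(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples.
    Non-commutative: hamilton(p, q) != hamilton(q, p) in general."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)
```

Because the product mixes all four components, color channels are transformed jointly, which is the property a quaternion similarity loss exploits to preserve color relationships.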
(This article belongs to the Section Image and Video Processing)

33 pages, 10969 KB  
Article
Analysis of the Cultural Cognition of Manchurian Regime Architectural Heritage via Online Ethnography Data
by Shanshan Zhang, Liwei Zhang, Yile Chen, Junxin Song, Jiaji Chen, Liang Zheng and Bailang Jing
Buildings 2025, 15(21), 3912; https://doi.org/10.3390/buildings15213912 - 29 Oct 2025
Viewed by 909
Abstract
As tangible relics of modern colonial history, the Manchurian regime (Manchukuo) architecture of Changchun possesses both historical commemorative value and tourism and cultural functions. Public perception and sentiment regarding this heritage in the contemporary social media context are key dimensions for evaluating the effectiveness of cultural regeneration. Existing research on Manchurian regime architecture has focused primarily on historical research and architectural form analysis, with limited research examining the diverse public interpretations of its cultural value through multi-platform social media data. This study aims to systematically explore the public’s cognitive characteristics, sentimental attitudes, and themes of interest regarding Changchun’s Manchurian regime architecture using online ethnographic data, providing empirical support for optimizing cultural regeneration pathways for Manchurian regime architectural heritage. The study collected data from 1 January 2020 to 20 September 2025, using the keyword “Changchun Manchurian regime architecture”. Using Python crawlers, the study extracted 334 original videos and 18,156 related comments from Douyin, Ctrip, and Dianping. The analysis was conducted using word frequency statistics, SnowNLP sentiment analysis, LDA topic modeling, and multidimensional visualization. 
The study found that (1) word frequency statistics show that the public has multiple concerns about the historical symbols, geographical positioning, cultural and tourism functions, and national emotions of Manchurian regime architecture; (2) SnowNLP analysis shows that positive comments account for 71%, neutral comments account for 11%, and negative comments account for 18%; (3) the optimal number of topics was determined to be five through perplexity and consistency indicators, namely “historical narrative and imperial power symbols”, “emotional experience and historical reflection”, “visit experience and service facilities”, “site distribution and regional space”, and “explanation and tour evaluation”; (4) the corpus can be divided into five time period stages, namely S1 (2020)–S5 (2024–2025), reflecting the shift in public attention from “space-facilities” to in-depth reflection on “emotion-history”. Full article
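SnowNLP scores each comment's sentiment on a 0–1 scale, and the positive/neutral/negative shares reported in finding (2) come from bucketing those scores. A sketch of that bucketing step (the thresholds 0.6/0.4 are illustrative, not the study's):

```python
def label_sentiment(score, pos=0.6, neg=0.4):
    """Map a 0-1 sentiment score to a coarse label.
    Thresholds here are assumptions for illustration."""
    if score >= pos:
        return "positive"
    if score <= neg:
        return "negative"
    return "neutral"

def share(labels):
    """Percentage of each sentiment class, rounded to whole percent."""
    total = len(labels)
    return {c: round(100 * labels.count(c) / total)
            for c in ("positive", "neutral", "negative")}
```

Applied to the full comment corpus, this kind of tally yields the 71% / 11% / 18% split the study reports.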
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

18 pages, 2632 KB  
Article
Adverse-Weather Image Restoration Method Based on VMT-Net
by Zhongmin Liu, Xuewen Yu and Wenjin Hu
J. Imaging 2025, 11(11), 376; https://doi.org/10.3390/jimaging11110376 - 26 Oct 2025
Viewed by 1062
Abstract
To address global semantic loss, local detail blurring, and spatial–semantic conflict during image restoration under adverse weather conditions, we propose an image restoration network that integrates Mamba with Transformer architectures. We first design a Vision-Mamba–Transformer (VMT) module that combines the long-range dependency modeling of Vision Mamba with the global contextual reasoning of Transformers, facilitating the joint modeling of global structures and local details, thus mitigating information loss and detail blurring during restoration. Second, we introduce an Adaptive Content Guidance (ACG) module that employs dynamic gating and spatial–channel attention to enable effective inter-layer feature fusion, thereby enhancing cross-layer semantic consistency. Finally, we embed the VMT and ACG modules into a U-Net backbone, achieving efficient integration of multi-scale feature modeling and cross-layer fusion, significantly improving reconstruction quality under complex weather conditions. The experimental results show that on Snow100K-S/L, VMT-Net improves PSNR over the baseline by approximately 0.89 dB and 0.36 dB, with SSIM gains of about 0.91% and 0.11%, respectively. On Outdoor-Rain and Raindrop, it performs similarly to the baseline and exhibits superior detail recovery in real-world scenes. Overall, the method demonstrates robustness and strong detail restoration across diverse adverse-weather conditions. Full article
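The ACG module's dynamic gating blends two feature streams through a learned sigmoid gate. Stripped of the attention machinery, the core blend can be sketched element-wise (all names and the 1D layout are our simplification):

```python
import math

def gated_fuse(shallow, deep, gate_logits):
    """Per-element gated fusion: out = g*shallow + (1-g)*deep,
    where g = sigmoid(gate_logit). A sketch of the dynamic-gating idea."""
    out = []
    for s, d, z in zip(shallow, deep, gate_logits):
        g = 1.0 / (1.0 + math.exp(-z))
        out.append(g * s + (1.0 - g) * d)
    return out
```

A zero logit gives an even 50/50 mix, while a large positive logit lets the shallow (detail) stream dominate, which is how the gate trades local detail against deep semantics per position.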

15 pages, 5150 KB  
Article
Insulator Defect Detection Algorithm Based on Improved YOLO11s in Snowy Weather Environment
by Ziwei Ding, Song Deng and Qingsheng Liu
Symmetry 2025, 17(10), 1763; https://doi.org/10.3390/sym17101763 - 19 Oct 2025
Viewed by 613
Abstract
The intelligent transformation of power systems necessitates robust insulator condition detection to ensure grid safety. Existing methods, primarily reliant on manual inspection or conventional image processing, suffer significantly degraded target identification and detection efficiency under extreme weather conditions such as heavy snowfall. To address this challenge, this paper proposes an enhanced YOLO11s detection framework integrated with image restoration technology, specifically targeting insulator defect identification in snowy environments. First, data augmentation and a FocalNet-based snow removal algorithm effectively enhance image resolution under snow conditions, enabling the construction of a high-quality training dataset. Next, the model architecture incorporates a dynamic snake convolution module to strengthen the perception of tubular structural features, while the MPDIoU loss function optimizes bounding box localization accuracy and recall. Comparative experiments demonstrate that the optimized framework significantly improves overall detection performance under complex weather compared to the baseline model. Furthermore, it exhibits clear advantages over current mainstream detection models. This approach provides a novel technical solution for monitoring power equipment conditions in extreme weather, offering significant practical value for ensuring reliable grid operation. Full article
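The MPDIoU loss named above augments IoU with penalties on the distances between matching box corners, normalized by the image diagonal. A sketch following the published MPDIoU formulation (our reading of it; not this paper's code):

```python
def mpd_iou(pred, gt, img_w, img_h):
    """MPDIoU = IoU - d_tl^2/(w^2+h^2) - d_br^2/(w^2+h^2), where d_tl and
    d_br are distances between the boxes' top-left and bottom-right corners
    and (w, h) are the input image dimensions. Boxes are (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union else 0.0
    diag2 = img_w ** 2 + img_h ** 2
    d_tl2 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2
    d_br2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2
    return iou - d_tl2 / diag2 - d_br2 / diag2
```

Because misaligned corners are penalized even at equal overlap, the loss (1 − MPDIoU) gives a useful gradient for localization that plain IoU lacks.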
(This article belongs to the Special Issue Symmetry and Asymmetry in Data Analysis)

17 pages, 1344 KB  
Article
SolarFaultAttentionNet: Dual-Attention Framework for Enhanced Photovoltaic Fault Classification
by Mubarak Alanazi and Yassir A. Alamri
Inventions 2025, 10(5), 91; https://doi.org/10.3390/inventions10050091 - 9 Oct 2025
Cited by 1 | Viewed by 902
Abstract
Photovoltaic (PV) fault detection faces significant challenges in distinguishing subtle defects from complex backgrounds while maintaining reliability across diverse environmental conditions. Traditional approaches struggle with scalability and accuracy limitations, particularly when detecting electrical damage, physical defects, and environmental soiling in thermal imagery. This paper presents SolarFaultAttentionNet, a novel dual-attention deep learning framework that integrates channel-wise and spatial attention mechanisms within a multi-path CNN architecture for enhanced PV fault classification. The approach combines comprehensive data augmentation strategies with targeted attention modules to improve feature discrimination across six fault categories: Electrical-Damage, Physical-Damage, Snow-Covered, Dusty, Bird-Drop, and Clean. Experimental validation on a dataset of 885 images demonstrates that SolarFaultAttentionNet achieves 99.14% classification accuracy, outperforming state-of-the-art models by 5.14%. The framework exhibits perfect detection for dust accumulation (100% across all metrics) and robust electrical damage detection (99.12% F1 score) while maintaining an optimal sensitivity (98.24%) and specificity (99.91%) balance. The computational efficiency (0.0160 s inference time) and systematic performance improvements establish SolarFaultAttentionNet as a practical solution for automated PV monitoring systems, enabling reliable fault detection critical for maximizing energy production and minimizing maintenance costs in large-scale solar installations. Full article
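The sensitivity, specificity, and F1 figures above derive directly from confusion-matrix counts. A minimal sketch of those standard definitions:

```python
def sensitivity(tp, fn):
    """Recall / true-positive rate (%): faulty panels correctly flagged."""
    return 100.0 * tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate (%): clean panels correctly passed."""
    return 100.0 * tn / (tn + fp)

def f1(tp, fp, fn):
    """F1 score (%): harmonic mean of precision and recall."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 200.0 * p * r / (p + r)
```

The reported 98.24% sensitivity / 99.91% specificity pair means almost no faults are missed while false alarms stay rare, the balance that matters for automated monitoring.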

23 pages, 5501 KB  
Article
Development of a Road Surface Conditions Prediction Model for Snow Removal Decision-Making
by Gyeonghoon Ma, Min-Cheol Park, Junchul Kim, Han Jin Oh and Jin-Hoon Jeong
Sustainability 2025, 17(19), 8794; https://doi.org/10.3390/su17198794 - 30 Sep 2025
Viewed by 1030
Abstract
Snowfall and road surface freezing cause traffic disruptions and skidding accidents. When widespread extreme cold events or sudden heavy snowfalls occur, the continuous monitoring and management of extensive road networks until the restoration of traffic operations is constrained by the limited personnel and resources available to road authorities. Consequently, road surface condition prediction models have become increasingly necessary to enable timely and sustainable decision-making. This study proposes a road surface condition prediction model based on CCTV images collected from roadside cameras. Three databases were constructed based on different definitions of moisture-related surface classes, and models with the same architecture were trained and evaluated. The results showed that the best performance was achieved when ice and snow were combined into a single class rather than treated separately. The proposed model was designed with a simplified structure to ensure applicability in practical operations requiring computational efficiency. Compared with transfer learning using deeper and more complex pre-trained models, the proposed model achieved comparable prediction accuracy while requiring less training time and computational resources. These findings demonstrate the reliability and practical utility of the developed model, indicating that its application can support sustainable snow removal decision-making across extensive road networks. Full article
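The best-performing database above merged ice and snow into a single class before training. That relabeling step is a one-line mapping over the annotations (class names here are illustrative; the study's exact label set is not given):

```python
def merge_labels(labels, merge_map=None):
    """Collapse surface-condition labels per a merge map; by default,
    fold 'ice' and 'snow' into one 'ice_snow' class as the study did."""
    merge_map = merge_map or {"ice": "ice_snow", "snow": "ice_snow"}
    return [merge_map.get(lbl, lbl) for lbl in labels]
```

Merging visually confusable classes like this trades label granularity for higher per-class accuracy, which is the effect the evaluation observed.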
(This article belongs to the Special Issue Disaster Risk Reduction and Sustainability)
24 pages, 1826 KB  
Article
Cloud and Snow Segmentation via Transformer-Guided Multi-Stream Feature Integration
by Kaisheng Yu, Kai Chen, Liguo Weng, Min Xia and Shengyan Liu
Remote Sens. 2025, 17(19), 3329; https://doi.org/10.3390/rs17193329 - 29 Sep 2025
Abstract
Cloud and snow often share comparable visual and structural patterns in satellite observations, making their accurate discrimination and segmentation particularly challenging. To overcome this, we design an innovative Transformer-guided architecture with complementary feature-extraction capabilities. The encoder adopts a dual-path structure, integrating a Transformer Encoder Module (TEM) for capturing long-range semantic dependencies and a ResNet18-based convolutional branch for detailed spatial representation. A Feature-Enhancement Module (FEM) is introduced to promote bidirectional interaction and adaptive feature integration between the two pathways. To improve delineation of object boundaries, especially in visually complex areas, we embed a Deep Feature-Extraction Module (DFEM) at the deepest layer of the convolutional stream. This component refines channel-level information to highlight critical features and enhance edge clarity. Additionally, to address noise from intricate backgrounds and ambiguous cloud-snow transitions, we incorporate both a Transformer Fusion Module (TFM) and a Strip Pooling Auxiliary Module (SPAM) in the decoding phase. These modules collaboratively enhance structural recovery and improve robustness in segmentation. Extensive experiments on the CSWV and SPARCS datasets show that our method consistently outperforms state-of-the-art baselines, demonstrating its strong effectiveness and applicability in real-world cloud- and snow-detection scenarios. Full article
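The core idea of fusing a long-range Transformer stream with a convolutional stream can be illustrated with a per-channel gate. This is a hypothetical sketch, not the paper's FEM: the real module is learned end to end, and the fixed gate logits here are invented for illustration.

```python
import math

# Hypothetical sketch of dual-stream feature fusion: a sigmoid gate decides,
# per channel, how much of the Transformer feature versus the convolutional
# feature to keep. In the paper the gating is learned; the values used here
# are placeholders.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(transformer_feat, conv_feat, gate_logits):
    """Per-channel convex combination of the two feature streams."""
    assert len(transformer_feat) == len(conv_feat) == len(gate_logits)
    fused = []
    for t, c, g in zip(transformer_feat, conv_feat, gate_logits):
        a = sigmoid(g)  # gate value in (0, 1)
        fused.append(a * t + (1.0 - a) * c)
    return fused

# A gate logit of 0 weights both streams equally; large positive logits
# favour the long-range Transformer features.
print(gated_fuse([1.0, 2.0], [3.0, 4.0], [0.0, 0.0]))  # [2.0, 3.0]
```

A gate like this lets the network lean on global context where cloud and snow look locally identical, while keeping fine convolutional detail at object boundaries.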
20 pages, 2915 KB  
Article
From Lab to Launchpad: A Modular Transport Incubator for Controlled Thermal and Power Conditions of Spaceflight Payloads
by Sebastian Feles, Ilse Marie Holbeck and Jens Hauslage
Instruments 2025, 9(3), 21; https://doi.org/10.3390/instruments9030021 - 18 Sep 2025
Abstract
Maintaining physiologically controlled conditions during the transport of biological experiments remains a long-standing but under-addressed challenge in spaceflight operations. Pre-launch thermal or mechanical stress induces artefacts that compromise the interpretation of biological responses to space conditions. Existing transport systems are limited to basic heating of small sample containers and lack the capability to power and protect full experimental hardware during mission-critical phases. A modular transport incubator was therefore developed and validated that combines active thermal regulation, battery-buffered power management, and mechanical protection in a compact, field-deployable platform. It enables autonomous environmental conditioning of complex biological payloads and continuous operation of integrated scientific instruments during ground-based transport and recovery. Validation included controlled experiments under sub-zero ambient temperatures, demonstrating rapid warm-up, stable thermal regulation, and uninterrupted autonomous performance. A steady-state finite difference thermal model was experimentally validated across 21 boundary conditions, enabling predictive power requirement estimation for mission planning. Field deployments during multiple MAPHEUS® sounding rocket campaigns confirmed functional robustness under wind, snow, and airborne recovery scenarios. The system closes a critical infrastructure gap in spaceflight logistics. Its validated performance, modular architecture, and proven operational readiness establish it as an enabling platform for standardized, reproducible ground handling of biological payloads and experiment hardware. Full article
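The predictive power estimation the abstract mentions can be reduced to its simplest form: at steady state, heater power must equal the heat lost through the enclosure. This lumped single-node sketch is a simplification of the paper's finite difference model, and the U-value and wall area below are invented placeholders, not the incubator's data.

```python
# Hedged sketch of steady-state power estimation: at thermal equilibrium the
# heater supplies exactly the conductive loss through the walls,
# P = U * A * (T_in - T_amb). The real model in the paper is a finite
# difference model validated across 21 boundary conditions; the parameter
# values here are illustrative assumptions only.

def required_heater_power(u_value_w_per_m2k: float,
                          wall_area_m2: float,
                          t_inside_c: float,
                          t_ambient_c: float) -> float:
    """Steady-state heater power needed to hold the internal setpoint.

    Returns the conductive wall loss in watts, clamped at zero when the
    ambient is warmer than the setpoint (no heating needed).
    """
    loss_w = u_value_w_per_m2k * wall_area_m2 * (t_inside_c - t_ambient_c)
    return max(loss_w, 0.0)

# Example: a 0.8 m^2 enclosure with U = 1.5 W/(m^2 K), held at 22 degC in a
# -10 degC ambient, loses 1.5 * 0.8 * 32 = 38.4 W at steady state.
print(required_heater_power(1.5, 0.8, 22.0, -10.0))  # 38.4
```

Sweeping the ambient temperature through expected transport conditions turns this into a quick battery-sizing estimate for mission planning.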
