Search Results (301)

Search Parameters:
Keywords = weather recognition

21 pages, 2996 KB  
Article
Sustainable Energy Transitions in Smart Campuses: An AI-Driven Framework Integrating Microgrid Optimization, Disaster Resilience, and Educational Empowerment for Sustainable Development
by Zhanyi Li, Zhanhong Liu, Chengping Zhou, Qing Su and Guobo Xie
Sustainability 2026, 18(2), 627; https://doi.org/10.3390/su18020627 - 7 Jan 2026
Viewed by 190
Abstract
Amid global sustainability transitions, campus energy systems confront growing pressure to balance operational efficiency, resilience to extreme weather events, and sustainable development education. This study proposes an artificial intelligence-driven framework for smart campus microgrids that synergistically advances environmental sustainability and disaster resilience, while deepening students’ understanding of sustainable development. The framework integrates an enhanced multi-scale gated temporal attention network (MS-GTAN+) to realize end-to-end meteorological hazard-state recognition for adaptive dispatch mode selection. Compared with Transformer and Informer baselines, MS-GTAN+ reduces prediction RMSE by approximately 48.5% for wind speed and 46.0% for precipitation while maintaining a single-sample inference time of only 1.82 ms. For daily operations, a multi-intelligence co-optimization algorithm dynamically balances economic efficiency with carbon reduction objectives. During disaster scenarios, an improved PageRank algorithm incorporating functional necessity and temporal sensitivity enables precise identification of critical loads and adaptive power redistribution, achieving an average critical-load assurance rate of approximately 75%, nearly doubling the performance of the traditional topology-based method. Furthermore, the framework bridges the divide between theoretical knowledge and educational practice via an educational digital twin platform. Simulation results demonstrate that the framework substantially improves carbon footprint reduction, resilience to power disruptions, and student sustainability competency development. By unifying technical innovation with pedagogical advancement, this study offers a holistic model for educational institutions seeking to advance sustainability transitions while preparing the next generation of sustainability leaders.
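The improved PageRank used for critical-load identification is not spelled out in the abstract; the following is a minimal personalized-PageRank sketch in which a "functional necessity" weight biases the teleport vector. The toy feeder topology and weights are invented for illustration, not taken from the paper.

```python
# Minimal personalized-PageRank sketch for ranking microgrid loads.
# The "necessity" teleport weights and the toy grid below are assumptions.

def pagerank(adj, necessity, damping=0.85, iters=100):
    """adj: {node: [neighbours]}, necessity: {node: weight >= 0}."""
    nodes = list(adj)
    total = sum(necessity[n] for n in nodes)
    teleport = {n: necessity[n] / total for n in nodes}  # biased restart
    rank = dict(teleport)
    for _ in range(iters):
        new = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = adj[n]
            if not out:            # dangling node: redistribute by teleport
                for m in nodes:
                    new[m] += damping * rank[n] * teleport[m]
            else:
                share = damping * rank[n] / len(out)
                for m in out:
                    new[m] += share
        rank = new
    return rank

# Toy feeder: the hospital load is both well-connected and highly "necessary".
grid = {"hospital": ["dorm"], "dorm": ["hospital", "lab"],
        "lab": ["hospital"], "lighting": ["dorm"]}
need = {"hospital": 5.0, "dorm": 2.0, "lab": 2.0, "lighting": 1.0}
ranks = pagerank(grid, need)
critical = max(ranks, key=ranks.get)
```

With the teleport bias, highly necessary and well-connected loads float to the top of the ranking, which is the behaviour the abstract attributes to its weighted variant.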

27 pages, 5656 KB  
Article
Dynamic Visibility Recognition and Driving Risk Assessment Under Rain–Fog Conditions Using Monocular Surveillance Imagery
by Zilong Xie, Chi Zhang, Dibin Wei, Xiaomin Yan and Yijing Zhao
Sustainability 2026, 18(2), 625; https://doi.org/10.3390/su18020625 - 7 Jan 2026
Viewed by 176
Abstract
This study addresses the limitations of conventional highway visibility monitoring under rain–fog conditions, where fixed stations and visibility sensors provide limited spatial coverage and unstable accuracy. Considering that drivers’ visual fields are jointly affected by global fog and local spray-induced mist, a dynamic visibility recognition and risk assessment framework is proposed using roadside monocular CCTV (Closed-Circuit Television) imagery. The method integrates the Koschmieder scattering model with the dark channel prior to estimate atmospheric transmittance and derives visibility through lane-line calibration. A Monte Carlo-based coupling model simulates local visibility degradation caused by tire spray, while a safety potential field defines the low-visibility risk field force (LVRFF) combining dynamic visibility, relative speed, and collision distance. Results show that this approach achieves over 86% accuracy under heavy rain, effectively captures real-time visibility variations, and that LVRFF exhibits strong sensitivity to visibility degradation, outperforming traditional safety indicators in identifying high-risk zones. By enabling scalable, infrastructure-based visibility monitoring without additional sensing devices, the proposed framework reduces deployment cost and energy consumption while enhancing the long-term operational resilience of highway systems under adverse weather. From a sustainability perspective, the method supports safer, more reliable, and resource-efficient traffic management, contributing to the development of intelligent and sustainable transportation infrastructure.
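The visibility derivation in the abstract can be illustrated with the Koschmieder relation alone: given a transmittance estimate (e.g. from the dark-channel prior) over a lane-line-calibrated distance, the extinction coefficient and visibility follow directly. The sketch below assumes the CIE contrast threshold of 0.05; the paper's exact threshold and calibration details are not given here.

```python
import math

# Koschmieder relation: transmittance t(d) = exp(-beta * d).
# From t at a known (lane-line-calibrated) distance d, recover the
# extinction coefficient beta; visibility is the distance at which
# apparent contrast drops to eps (0.05 assumed, per the CIE convention).

def visibility_from_transmittance(t, d, eps=0.05):
    beta = -math.log(t) / d          # extinction coefficient [1/m]
    return -math.log(eps) / beta     # meteorological visibility [m]

# e.g. 70% estimated transmittance over a 30 m lane-marking baseline:
v = visibility_from_transmittance(0.7, 30.0)
```

Lower transmittance over the same baseline yields a larger beta and hence a shorter visibility, matching the qualitative behaviour the paper relies on.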
(This article belongs to the Special Issue Traffic Safety, Traffic Management, and Sustainable Mobility)

36 pages, 5941 KB  
Review
Physics-Driven SAR Target Detection: A Review and Perspective
by Xinyi Li, Lei Liu, Gang Wan, Fengjie Zheng, Shihao Guo, Guangde Sun, Ziyan Wang and Xiaoxuan Liu
Remote Sens. 2026, 18(2), 200; https://doi.org/10.3390/rs18020200 - 7 Jan 2026
Viewed by 291
Abstract
Synthetic Aperture Radar (SAR) is highly valuable for target detection due to its all-weather, day-night operational capability and certain ground penetration potential. However, traditional SAR target detection methods often directly adapt algorithms designed for optical imagery, simplistically treating SAR data as grayscale images. This approach overlooks SAR’s unique physical nature, failing to account for key factors such as backscatter variations from different polarizations, target representation changes across resolutions, and detection threshold shifts due to clutter background heterogeneity. Consequently, these limitations lead to insufficient cross-polarization adaptability, feature masking, and degraded recognition accuracy due to clutter interference. To address these challenges, this paper systematically reviews recent research advances in SAR target detection, focusing on physical constraints including polarization characteristics, scattering mechanisms, signal-domain properties, and resolution effects. Finally, it outlines promising research directions to guide future developments in physics-aware SAR target detection.

24 pages, 8314 KB  
Article
Performance of Oil Spill Identification in Multiple Scenarios Using Quad-, Compact-, and Dual-Polarization Modes
by Guannan Li, Gaohuan Lv, Bingnan Li, Xiang Wang and Fen Zhao
J. Mar. Sci. Eng. 2026, 14(2), 113; https://doi.org/10.3390/jmse14020113 - 6 Jan 2026
Viewed by 119
Abstract
Oil spills, whether in open water or near shorelines, cause serious environmental problems. Polarimetric synthetic-aperture radar provides abundant oil spill information with all-weather, day–night detection capability, but its use is limited by data volume and processing costs. Compact Polarimetric (CP) systems, an emerging alternative that balances data volume against system design requirements, are promising in this regard. Herein, we utilize multisource oil spill scenarios and datasets from multiple polarimetric modes (VV-HH, π/4, DCP, and CTLR) to assess the oil spill detection capability of each mode under varying incidence-angle conditions, spill causes, and oil types. Using qualitative and quantitative evaluation indicators, we compare the typical features of the multiple polarization modes as well as assess their consistency with Full Polarization (FP) information and their oil spill recognition performance across different incidence angles. In large-incidence-angle oil spill scenarios, the VV–HH mode exhibits the highest information consistency with the FP mode and the strongest oil spill recognition ability. At small incidence angles, the CP mode (i.e., CTLR mode) exhibits the best overall performance, benefiting from its effective self-calibration capability and low noise sensitivity. Furthermore, despite containing comprehensive information, the FP mode is not always superior to the dual-polarization and CP modes. Thus, in oil spill scenarios across different incidence angles, incorporating features from an appropriate polarization mode into oil spill information extraction and recognition can optimize the associated efficiency.
(This article belongs to the Section Marine Pollution)

20 pages, 1508 KB  
Article
Bidirectional Translation of ASL and English Using Machine Vision and CNN and Transformer Networks
by Stefanie Amiruzzaman, Md Amiruzzaman, Raga Mouni Batchu, James Dracup, Alexander Pham, Benjamin Crocker, Linh Ngo and M. Ali Akber Dewan
Computers 2026, 15(1), 20; https://doi.org/10.3390/computers15010020 - 4 Jan 2026
Viewed by 233
Abstract
This study presents a real-time, bidirectional system for translating American Sign Language (ASL) to and from English using computer vision and transformer-based models to enhance accessibility for deaf and hard of hearing users. Leveraging publicly available sign language and text–to-gloss datasets, the system integrates MediaPipe-based holistic landmark extraction with CNN- and transformer-based architectures to support translation across video, text, and speech modalities within a web-based interface. In the ASL-to-English direction, the sign-to-gloss model achieves a 25.17% word error rate (WER) on the RWTH-PHOENIX-Weather 2014T benchmark, which is competitive with recent continuous sign language recognition systems, and the gloss-level translation attains a ROUGE-L score of 79.89, indicating strong preservation of sign content and ordering. In the reverse English-to-ASL direction, the English-to-Gloss transformer trained on ASLG-PC12 achieves a ROUGE-L score of 96.00, demonstrating high-fidelity gloss sequence generation suitable for landmark-based ASL animation. These results highlight a favorable accuracy-efficiency trade-off achieved through compact model architectures and low-latency decoding, supporting practical real-time deployment.
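The 25.17% WER quoted above follows the standard word-level edit-distance definition; a minimal reference implementation (not the authors' evaluation script):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / len(ref),
    computed with word-level edit distance (standard definition)."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(r)][len(h)] / len(r)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is an error rate rather than an accuracy.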
(This article belongs to the Section AI-Driven Innovations)

19 pages, 2314 KB  
Article
Occlusion Avoidance for Harvesting Robots: A Lightweight Active Perception Model
by Tao Zhang, Jiaxi Huang, Jinxing Niu, Zhengyi Liu, Le Zhang and Huan Song
Sensors 2026, 26(1), 291; https://doi.org/10.3390/s26010291 - 2 Jan 2026
Viewed by 254
Abstract
Addressing the issue of fruit recognition and localization failures in harvesting robots due to severe occlusion by branches and leaves in complex orchard environments, this paper proposes an occlusion avoidance method that combines a lightweight YOLOv8n model, developed by Ultralytics in the United States, with active perception. Firstly, to meet the stringent real-time requirements of the active perception system, a lightweight YOLOv8n model was developed. This model reduces computational redundancy by incorporating the C2f-FasterBlock module and enhances key feature representation by integrating the SE attention mechanism, significantly improving inference speed while maintaining high detection accuracy. Secondly, an end-to-end active perception model based on ResNet50 and multi-modal fusion was designed. This model can intelligently predict the optimal movement direction for the robotic arm based on the current observation image, actively avoiding occlusions to obtain a more complete field of view. The model was trained using a matrix dataset constructed through the robot’s dynamic exploration in real-world scenarios, achieving a direct mapping from visual perception to motion planning. Experimental results demonstrate that the proposed lightweight YOLOv8n model achieves a mAP of 0.885 in apple detection tasks, a frame rate of 83 FPS, a parameter count reduced to 1,983,068, and a model weight file size reduced to 4.3 MB, significantly outperforming the baseline model. In active perception experiments, the proposed method effectively guided the robotic arm to quickly find observation positions with minimal occlusion, substantially improving the success rate of target recognition and the overall operational efficiency of the system. The current research outcomes provide preliminary technical validation and a feasible exploratory pathway for developing agricultural harvesting robot systems suitable for real-world complex environments.

It should be noted that the validation of this study was primarily conducted in controlled environments. Subsequent work still requires large-scale testing in diverse real-world orchard scenarios, as well as further system optimization and performance evaluation in more realistic application settings, which include natural lighting variations, complex weather conditions, and actual occlusion patterns.
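The mAP figures above rest on intersection-over-union matching between predicted and ground-truth boxes. A minimal IoU helper using the standard definition (not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    IoU >= 0.5 is the usual match criterion behind mAP@0.5 scores."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```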

27 pages, 26736 KB  
Article
A Lightweight Traffic Sign Small Target Detection Network Suitable for Complex Environments
by Zonghong Feng, Liangchang Li, Kai Xu and Yong Wang
Appl. Sci. 2026, 16(1), 326; https://doi.org/10.3390/app16010326 - 28 Dec 2025
Viewed by 267
Abstract
With the increasing frequency of traffic safety issues and the rapid development of autonomous driving technology, traffic sign detection is highly susceptible to adverse weather conditions such as changes in light intensity, fog, rain, snow, and partial occlusion, which places higher demands on the accurate recognition of traffic signs. This paper proposes an improved DAYOLO model based on YOLOv8n, aiming to balance detection accuracy and model complexity. First, the Bottleneck in the C2f module of the YOLOv8n backbone network is replaced with Bottleneck DAttention. Introducing DAttention allows for more effective feature extraction, thereby improving model performance. Second, an ultra-lightweight and efficient upsampler, Dysample, is introduced into the neck network to further improve performance and reduce computational overhead. Finally, a Task-Aligned Dynamic Detection Head (TADDH) is introduced. TADDH enhances task interaction through a dynamic mechanism and utilizes shared convolutional modules to reduce parameters and improve efficiency. Simultaneously, an additional Layer2 detection head is added to the model to strengthen the extraction and fusion of features at different scales, thereby improving the detection accuracy of small traffic signs. Furthermore, replacing SlideLoss with NWDLoss can better handle prediction results with more complex distributions and more accurately measure the distance between predicted and ground truth boxes in the feature space during object detection. Experimental results show that DAYOLO achieves 97.2% mAP on the SDCCVP dataset, 5.3 points higher than the baseline model YOLOv8n; the frame rate reaches 120 FPS, 37.8% higher than YOLOv8; and the number of parameters is reduced by 6.2%, outperforming models such as YOLOv3, YOLOv5, YOLOv6, and YOLOv7. In addition, DAYOLO achieves 80.8 mAP on the TT100K dataset, which is 9.2% higher than the baseline model YOLOv8n. The proposed method achieves a balance between model size and detection accuracy, meets the needs of traffic sign detection, and provides new ideas and methods for future research in the field of traffic sign detection.
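NWDLoss, mentioned above, compares boxes as 2-D Gaussians by normalized Wasserstein distance, which stays informative for tiny, barely overlapping boxes where IoU collapses to zero. A sketch of the published NWD formulation (Wang et al.); the normalizing constant C is dataset-dependent and assumed here.

```python
import math

# NWD: model a box (cx, cy, w, h) as a 2-D Gaussian with mean (cx, cy)
# and covariance diag(w^2/4, h^2/4). The squared 2-Wasserstein distance
# between two such Gaussians has a closed form; an exponential maps it
# to a similarity in (0, 1]. The constant C (12.8 here) is an assumption.

def nwd(box_a, box_b, C=12.8):
    (xa, ya, wa, ha), (xb, yb, wb, hb) = box_a, box_b
    w2_sq = ((xa - xb) ** 2 + (ya - yb) ** 2
             + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / C)

# Identical boxes give similarity 1; a 2-pixel shift of a 4x4 box still
# yields a smooth, non-zero score, unlike IoU for tiny boxes:
s = nwd((10, 10, 4, 4), (12, 10, 4, 4))
```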

25 pages, 7265 KB  
Article
Hazy Aware-YOLO: An Enhanced UAV Object Detection Model for Foggy Weather via Wavelet Convolution and Attention-Based Optimization
by Lin Wang, Binjie Zhang, Qinyan Tan, Dejun Duan and Yulei Wang
Automation 2026, 7(1), 3; https://doi.org/10.3390/automation7010003 - 24 Dec 2025
Viewed by 269
Abstract
Foggy weather critically undermines the autonomous perception capabilities of unmanned aerial vehicles (UAVs) by degrading image contrast, obscuring object structures, and impairing small target recognition, which often leads to significant performance deterioration in conventional detection models. To address these challenges in automated UAV operations, this study introduces Hazy Aware-YOLO (HA-YOLO), an enhanced detection framework based on YOLO11, specifically engineered for reliable object detection under low-visibility conditions. The proposed model incorporates wavelet convolution to suppress haze-induced noise and enhance multi-scale feature fusion. Furthermore, a novel Context-Enhanced Hybrid Self-Attention (CEHSA) module is developed, which sequentially combines channel attention aggregation (CAA) with multi-head self-attention (MHSA) to capture local contextual cues while mitigating global noise interference. Extensive evaluations demonstrate that HA-YOLO and its variants achieve superior detection precision and robustness compared to the baseline YOLO11, while maintaining model efficacy. In particular, when benchmarked against state-of-the-art detectors, HA-YOLO exhibits a better balance between detection accuracy and complexity, offering a practical and efficient solution for real-world autonomous UAV perception tasks in adverse weather.
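Wavelet convolution builds on a wavelet decomposition of the feature map; one level of a 2-D Haar transform shows the idea: the low-pass band keeps coarse structure while the detail bands isolate the high-frequency content where haze noise concentrates. This is a toy sketch of the underlying transform, not the paper's module.

```python
# One level of a 2-D Haar wavelet decomposition on a nested-list "image".
# LL = coarse average; LH/HL/HH = horizontal/vertical/diagonal detail.

def haar2d(img):
    """img: 2-D list with even dimensions; returns (LL, LH, HL, HH)."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)]
                      for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # average
            LH[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return LL, LH, HL, HH
```

A flat patch produces zero detail coefficients, so attenuating the detail bands suppresses pixel-level noise while leaving smooth structure untouched.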
(This article belongs to the Section Smart Transportation and Autonomous Vehicles)

17 pages, 5885 KB  
Article
Real-Time Detection of Dynamic Targets in Dynamic Scattering Media
by Ying Jin, Wenbo Zhao, Siyu Guo, Jiakuan Zhang, Lixun Ye, Chen Nie, Yiyang Zhu, Hongfei Yu, Cangtao Zhou and Wanjun Dai
Photonics 2025, 12(12), 1242; https://doi.org/10.3390/photonics12121242 - 18 Dec 2025
Viewed by 299
Abstract
In dynamic scattering media (such as rain, fog, biological tissues, etc.) environments, scattered light causes severe degradation of target images, directly leading to a sudden drop in the detection confidence of target detection models and a significant increase in the rate of missed detections. This is a key challenge in the intersection of optical imaging and computer vision. Aiming to address the problems of poor generalization and slow inference speed of existing schemes, we construct an end-to-end framework of multi-stage preprocessing, customized network reconstruction, and object detection based on the existing network framework. First, we optimize the original degraded image through preprocessing to suppress scattered noise from the source and retain the key features for detection. Relying on a lightweight and customized network (with only 8.20 M parameters), high-fidelity reconstruction is achieved to further reduce scattering interference and ultimately complete target detection. The inference speed of this framework is significantly better than that of the existing network: on an RTX 4060, it reaches 147.93 frames per second. After reconstruction, the average confidence level of dynamic object detection is 0.95 with a maximum of 0.99, effectively solving the problem of detection failure in dynamic scattering media. It can provide technical support for scenarios such as unmanned aerial vehicle (UAV) monitoring in foggy weather, biomedical target recognition, and low-altitude security.

15 pages, 2262 KB  
Article
An Intelligent Surveillance Framework for Pedestrian Safety Under Low-Illuminance Street Lighting Conditions
by Junhwa Jeong, Kisoo Park, Taekyoung Kim and Wonil Park
Appl. Sci. 2025, 15(24), 13201; https://doi.org/10.3390/app152413201 - 16 Dec 2025
Viewed by 400
Abstract
This study proposes an intelligent surveillance framework that integrates image preprocessing, illuminance-adaptive object detection, multi-object tracking, and pedestrian abnormal behavior recognition to address the rapid degradation of image recognition performance under low-illuminance street lighting conditions. In the preprocessing stage, image quality was enhanced by correcting color distortion and contour loss, while in the detection stage, illuminance-based loss weighting was applied to maintain high detection sensitivity even in dark environments. During the tracking process, a Kalman filter was employed to ensure inter-frame consistency of detected objects. In the abnormal behavior recognition stage, temporal motion patterns were analyzed to detect events such as falls and prolonged inactivity in real time. The experimental results indicate that the proposed method maintained an average detection accuracy of approximately 0.9 and adequate tracking performance in the 80% range under low-illuminance conditions, while also exhibiting stable recognition rates across various weather environments. Although slight performance degradation was observed under dense fog or highly crowded scenes, such limitations are expected to be mitigated through sensor fusion and enhanced processing efficiency. These findings experimentally demonstrate the technical feasibility of a real-time intelligent recognition system for nighttime street lighting environments.
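The Kalman filter used for inter-frame consistency can be sketched with a 1-D constant-velocity model (state = position and velocity). The process and measurement noise values below are illustrative, not the paper's tuning.

```python
# 1-D constant-velocity Kalman filter: predict with x' = x + dt*v, then
# correct with a scalar position measurement. q = process noise,
# r = measurement noise (both assumed values for illustration).

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    x, v = measurements[0], 0.0        # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    out = []
    for z in measurements:
        # predict: F = [[1, dt], [0, 1]], P <- F P F^T + Q
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update with scalar position measurement (H = [1, 0])
        S = P[0][0] + r                # innovation variance
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x                      # innovation
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out

smoothed = kalman_track([5.0, 5.2, 4.9, 5.1, 5.0])
```

In the surveillance setting one such filter (in 2-D, per axis) runs per tracked pedestrian, bridging frames where the detector briefly misses the target.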
(This article belongs to the Section Computing and Artificial Intelligence)

27 pages, 5763 KB  
Article
SatNet-B3: A Lightweight Deep Edge Intelligence Framework for Satellite Imagery Classification
by Tarbia Hasan, Jareen Anjom, Md. Ishan Arefin Hossain and Zia Ush Shamszaman
Future Internet 2025, 17(12), 579; https://doi.org/10.3390/fi17120579 - 16 Dec 2025
Viewed by 418
Abstract
Accurate weather classification plays a vital role in disaster management and minimizing economic losses. However, satellite-based weather classification remains challenging due to high inter-class similarity; the computational complexity of existing deep learning models, which limits real-time deployment on resource-constrained edge devices; and the limited interpretability of model decisions in practical environments. To address these challenges, this study proposes SatNet-B3, a quantized, lightweight deep learning framework that integrates an EfficientNetB3 backbone with custom classification layers to enable accurate and edge-deployable weather event recognition from satellite imagery. SatNet-B3 is evaluated on the LSCIDMR dataset and demonstrates high-precision performance, achieving 98.20% accuracy and surpassing existing benchmarks. Ten CNN models, including SatNet-B3, were evaluated for classifying eight weather conditions (Tropical Cyclone, Extratropical Cyclone, Snow, Low Water Cloud, High Ice Cloud, Vegetation, Desert, and Ocean), with SatNet-B3 yielding the best results. The model addresses class imbalance and inter-class similarity through extensive preprocessing and augmentation, and the pipeline supports the efficient handling of high-resolution geospatial imagery. Post-training quantization reduced the model size by 90.98% while retaining accuracy, and deployment on a Raspberry Pi 4 achieved a 0.3 s inference time. Integrating explainable AI tools such as LIME and CAM enhances interpretability for intelligent climate monitoring.
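Post-training quantization stores weights as 8-bit integers plus a per-tensor scale and zero point. A minimal affine-quantization sketch of that idea; the exact scheme behind the reported 90.98% size reduction (which also involves framework-level packing) is not detailed in the abstract.

```python
# Affine (asymmetric) uint8 quantization: map [min, max] of a tensor onto
# [0, 255] with a scale and zero point, then reconstruct approximately.

def quantize(values):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0            # step size per integer level
    zero_point = round(-lo / scale)           # integer representing 0.0
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.8, -0.1, 0.0, 0.4, 1.2]
q, s, z = quantize(weights)
restored = dequantize(q, s, z)
```

Each float32 weight shrinks to one byte, a 4x raw saving; the reconstruction error is bounded by half a quantization step per weight.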

28 pages, 2177 KB  
Article
Ports and Climate Change: Exploring Stakeholder Insights, Governance and Policy Gaps in Greek Ports
by Aikaterini Karditsa, Lykourgos Kourkouvelas, George Vaggelas, Michael Tsatsaronis, Konstantina Manifava and Maria Hatzaki
Sustainability 2025, 17(24), 11111; https://doi.org/10.3390/su172411111 - 11 Dec 2025
Viewed by 810
Abstract
Ports are crucial nodes in global supply chains, critical infrastructure for the execution of global trade, and key components of the Blue Economy; however, they are highly vulnerable to the implications of climate change, especially for insular countries like Greece. This study investigates port stakeholder perceptions, priorities, and preparedness for climate risks through field research with the use of questionnaires and in-depth interviews. The findings reveal a strong recognition of climate change threats—especially extreme weather conditions—but limited institutional capacity for adaptation. Key needs include targeted funding, regulatory clarity, and specialized environmental units. National coordination remains weak, with climate change often framed in economic rather than systemic terms. Smaller ports face greater exposure yet fewer resources. The results highlight governance gaps, emphasizing the need for integrated, stakeholder-informed strategies to enhance port resilience and ensure alignment with EU climate directives. This research provides evidence-based insights to guide policy development and foster adaptive capacity in the Greek port sector.
(This article belongs to the Special Issue Sustainable Management of Shipping, Ports and Logistics)

27 pages, 9422 KB  
Article
A 3D GeoHash-Based Geocoding Algorithm for Urban Three-Dimensional Objects
by Woochul Choi, Hongki Sung, Youngjae Jeon and Kyusoo Chong
Remote Sens. 2025, 17(24), 3964; https://doi.org/10.3390/rs17243964 - 8 Dec 2025
Viewed by 540
Abstract
The growing frequency of extreme weather, earthquakes, fires, and environmental hazards underscores the need for real-time monitoring and predictive management at the urban scale. Conventional three-dimensional spatial information systems, which rely on orthophotos and ground surveys, often suffer from computational inefficiency and data overload when processing large and heterogeneous datasets. To address these limitations, this study introduces a three-dimensional GeoHash-based geocoding algorithm designed for lightweight, real-time, and attribute-driven digital twin operations. The proposed method comprises five integrated steps: generation of 3D GeoHash grids using longitude, latitude, and altitude coordinates; integration with GIS-based urban 3D models; level optimization using the Shape Overlap Ratio (SOR) with a threshold of 0.90; representative object labeling through weighted volume ratios; and altitude correction using DEM interpolation. Validation using a testbed in Sillim-dong, Seoul (10.19 km2), demonstrated that the framework achieved approximately 9.8 times faster 3D modeling performance than conventional orthophoto-based methods, while maintaining complete object recognition accuracy. The results confirm that the 3D GeoHash framework provides a unified spatial key structure that enhances data interoperability across querying, visualization, and simulation. This approach offers a practical foundation for operational digital twins, supporting high-efficiency 3D mapping and predictive disaster management toward resilient and data-driven urban systems.
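A GeoHash-style code interleaves bits produced by successive binary subdivision of each axis; extending the classic 2-D scheme with altitude yields a 3-D cell key whose shared prefixes imply spatial proximity. The axis ranges and bit budget below are assumptions for illustration, not the paper's encoding.

```python
# 3-D GeoHash-style key: interleave one bit per axis (lon, lat, alt) per
# round. Each bit halves the remaining interval for that axis, so nearby
# points share a long common prefix. The altitude range is assumed.

def geohash3d(lon, lat, alt, bits_per_axis=10, alt_range=(-100.0, 1000.0)):
    ranges = [[-180.0, 180.0], [-90.0, 90.0], list(alt_range)]
    coords = [lon, lat, alt]
    code = 0
    for _ in range(bits_per_axis):
        for axis in range(3):                  # interleave lon/lat/alt bits
            lo, hi = ranges[axis]
            mid = (lo + hi) / 2
            if coords[axis] >= mid:
                code = (code << 1) | 1
                ranges[axis][0] = mid          # keep upper half
            else:
                code = code << 1
                ranges[axis][1] = mid          # keep lower half
    return code

# A 30-bit cell key for a point in Seoul at 50 m altitude:
key = geohash3d(126.97, 37.56, 50.0)
```

Because the key is a single integer, range queries and level-of-detail grouping reduce to prefix (high-bit) comparisons, which is what makes the scheme attractive as a unified spatial index.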
(This article belongs to the Special Issue Advances in Applications of Remote Sensing GIS and GNSS)

21 pages, 2733 KB  
Article
Construction of an Intelligent Risk Identification System for Highway Flood Damage Based on Multimodal Large Models
by Jinzi Zheng, Zhiyang Liu, Chenguang Li, Hanchu Zhou, Erlong Lou, Yaqi Li and Bingou Xu
Appl. Sci. 2025, 15(23), 12782; https://doi.org/10.3390/app152312782 - 3 Dec 2025
Cited by 1 | Viewed by 384
Abstract
Under the increasing threat of extreme weather events, road infrastructure faces significant risks of flood-induced damage. Traditional manual inspection methods are insufficient for modern highway emergency response, which requires higher efficiency and accuracy. To enhance the precision and accuracy of flood damage identification, this study proposes an intelligent recognition system that integrates a multimodal large language model with a structured knowledge base. The system constructs a professional repository covering eight typical categories of flood damage, including roadbed, pavement, and bridge components, with associated attributes, visual features, and mitigation strategies. A vectorized indexing mechanism enables fine-grained semantic retrieval, while task-specific templates and prompt engineering guide the multimodal model (such as Qwen-VL-Max), which extracts risk elements from image–text inputs and generates structured identification results with expert recommendations. The system is evaluated on a real-world highway flood damage dataset. The results show that the knowledge-enhanced model performs better than the baseline and prompt-optimized models, reaching 91.5% average accuracy, a semantic relevance score of 4.58 out of 5, and 85% robustness under difficult conditions. These results highlight the system's strong domain adaptability and practical value for real-time flood damage assessment and emergency response.
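The "vectorized indexing mechanism" for semantic retrieval amounts to nearest-neighbour search over embedding vectors. Below is a minimal cosine-similarity retriever over a toy knowledge base; the category names echo the abstract but the vectors are made-up stand-ins, not real model embeddings.

```python
import math

# Toy semantic retrieval: score each knowledge-base entry by cosine
# similarity to the query embedding and return the top matches.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, kb, top_k=1):
    scored = sorted(kb, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Invented 3-D "embeddings" for three damage categories:
kb = [("roadbed washout",   [0.9, 0.1, 0.0]),
      ("pavement cracking", [0.2, 0.9, 0.1]),
      ("bridge scour",      [0.1, 0.2, 0.9])]
best = retrieve([0.8, 0.2, 0.1], kb)
```

In the real system the retrieved entries (attributes, visual features, mitigation strategies) would be stuffed into the prompt of the multimodal model, which is the knowledge-enhancement step the evaluation credits.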
(This article belongs to the Special Issue Autonomous Vehicles and Robotics—2nd Edition)

17 pages, 1253 KB  
Article
Wavelet-Enhanced Transformer for Adaptive Multi-Period Time Series Forecasting
by Ping Yu, Hoiio Kong and Zijun Li
Appl. Sci. 2025, 15(23), 12698; https://doi.org/10.3390/app152312698 - 30 Nov 2025
Viewed by 946
Abstract
Time series analysis is of critical importance in a wide range of applications, including weather forecasting, anomaly detection, and action recognition. Accurate time series forecasting requires modeling complex temporal dependencies, particularly multi-scale periodic patterns. To address this challenge, we propose a novel Wavelet-Enhanced Transformer (Wave-Net). Wave-Net transforms 1D time series data into 2D matrices based on periodicity, enhancing the capture of temporal patterns through convolutional filters, and incorporates wavelet and Fourier transforms for feature extraction, along with an enhanced cycle offset and an optimized dynamic K for improved robustness. The Transformer layer is further refined to bolster long-term modeling capabilities. Evaluations on real-world benchmarks demonstrate that Wave-Net consistently achieves state-of-the-art performance across mainstream time series analysis tasks.
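The 1D-to-2D transformation can be sketched as folding the series into rows of one period each, so a 2-D convolution sees intra-period variation along rows and inter-period variation along columns. In this sketch the period is supplied by hand, whereas the model selects it automatically (e.g. from dominant FFT frequencies, an assumption about the mechanism).

```python
# Fold a 1-D series into a 2-D matrix whose rows are consecutive periods.
# Columns then line up samples at the same phase across periods.

def fold_by_period(series, period):
    """Truncate to a whole number of periods and stack them as rows."""
    rows = len(series) // period
    return [series[r * period:(r + 1) * period] for r in range(rows)]

# A perfectly periodic signal with period 4: every column of the folded
# matrix is constant, which a 2-D filter can exploit directly.
signal = [0, 1, 2, 1] * 3
matrix = fold_by_period(signal, 4)
```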
(This article belongs to the Special Issue AI-Based Supervised Prediction Models)
