Search Results (72)

Search Parameters:
Keywords = real-world thermal image

19 pages, 5378 KB  
Article
Deep Reinforcement Learning for Temperature Control of a Two-Way SMA-Actuated Tendon-Driven Gripper
by Phuoc Thien Do, Quang Ngoc Le, Hyeongmo Park, Hyunho Kim, Seungbo Shim, Kihan Park and Yeongjin Kim
Actuators 2026, 15(1), 37; https://doi.org/10.3390/act15010037 - 6 Jan 2026
Viewed by 400
Abstract
Shape Memory Alloy (SMA) actuators offer strong potential for compact, lightweight, silent, and compliant robotic grippers; however, their practical deployment is limited by the challenge of controlling nonlinear and hysteretic thermal dynamics. This paper presents a complete Sim-to-Real control framework for precise temperature regulation of a tendon-driven SMA gripper using Deep Reinforcement Learning (DRL). A novel 12-action discrete control space is introduced, comprising 11 heating levels (0–100% PWM) and one active cooling action, enabling effective management of thermal inertia and environmental disturbances. The DRL agent is trained entirely in a calibrated thermo-mechanical simulation and deployed directly on physical hardware without real-world fine-tuning. Experimental results demonstrate accurate temperature tracking over a wide operating range (35–70 °C), achieving a mean steady-state error of approximately 0.26 °C below 50 °C and 0.41 °C at higher temperatures. Non-contact thermal imaging further confirms spatial temperature uniformity and the reliability of thermistor-based feedback. Finally, grasping experiments validate the practical effectiveness of the proposed controller, enabling reliable manipulation of delicate objects without crushing or slippage. These results demonstrate that the proposed DRL-based Sim-to-Real framework provides a robust and practical solution for high-precision SMA temperature control in soft robotic systems. Full article
(This article belongs to the Special Issue Actuation and Sensing of Intelligent Soft Robots)
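The 12-action interface described in the abstract is concrete enough to sketch. Below is a minimal toy of the idea, assuming a hypothetical first-order thermal plant standing in for the SMA wire; the gains, time constant, and the bang-bang rule used in place of the learned DRL policy are illustrative, not taken from the paper:

```python
AMBIENT = 25.0      # ambient temperature (deg C), illustrative
HEAT_GAIN = 60.0    # steady-state rise at 100% PWM, illustrative
COOL_GAIN = -15.0   # effect of the active-cooling action, illustrative
TAU = 8.0           # thermal time constant (s), illustrative
DT = 0.1            # control period (s)

def action_to_input(action: int) -> float:
    """Map the 12-action discrete space to a thermal input:
    actions 0..10 -> heating at 0%, 10%, ..., 100% PWM,
    action 11     -> active cooling."""
    assert 0 <= action <= 11
    if action == 11:
        return COOL_GAIN
    return HEAT_GAIN * (action / 10.0)

def step(temp: float, action: int) -> float:
    """One Euler step of dT/dt = (T_target - T) / tau."""
    target = AMBIENT + action_to_input(action)
    return temp + DT * (target - temp) / TAU

# Bang-bang rollout toward a 50 deg C setpoint (a trained DRL agent
# would replace this rule with a state-dependent action choice).
temp, setpoint = AMBIENT, 50.0
for _ in range(2000):
    action = 10 if temp < setpoint else 11
    temp = step(temp, action)
print(round(temp, 1))
```

The point of the discrete cooling action is visible even in this toy: without action 11, the plant could only drift back to ambient passively, making tight regulation against thermal inertia much slower.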
Show Figures

Figure 1

36 pages, 2139 KB  
Systematic Review
A Systematic Review of the Practical Applications of Synthetic Aperture Radar (SAR) for Bridge Structural Monitoring
by Homer Armando Buelvas Moya, Minh Q. Tran, Sergio Pereira, José C. Matos and Son N. Dang
Sustainability 2026, 18(1), 514; https://doi.org/10.3390/su18010514 - 4 Jan 2026
Viewed by 413
Abstract
Within the field of the structural monitoring of bridges, numerous technologies and methodologies have been developed. Among these, methods based on synthetic aperture radar (SAR), which utilise satellite data from missions such as Sentinel-1 (European Space Agency, ESA) and COSMO-SkyMed (Agenzia Spaziale Italiana, ASI) to capture displacements, temperature-related changes, and other geophysical measurements, have gained increasing attention. However, SAR has yet to fully establish its value and potential; its broader adoption hinges on consistently demonstrating its robustness through recurrent applications, well-defined use cases, and effective strategies to address its inherent limitations. This study presents a systematic literature review (SLR) conducted in accordance with key stages of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 framework. An initial corpus of 1218 peer-reviewed articles was screened, and a final set of 25 studies from the last five years was selected for in-depth analysis based on citation impact, keyword recurrence, and thematic relevance. The review critically examines SAR-based techniques—including Differential Interferometric SAR (DInSAR), multi-temporal InSAR (MT-InSAR), and Persistent Scatterer Interferometry (PSI)—as well as approaches to integrating SAR data with ground-based measurements and complementary digital models. Emphasis is placed on real-world case studies and persistent technical challenges, such as atmospheric artefacts, Line-of-Sight (LOS) geometry constraints, phase noise, ambiguities in displacement interpretation, and the translation of radar-derived deformations into actionable structural insights. The findings underscore SAR’s significant contribution to the structural health monitoring (SHM) of bridges, consistently delivering millimetre-level displacement accuracy and enabling engineering-relevant interpretations. While standalone SAR-based techniques offer wide-area monitoring capabilities, their full potential is realised only when integrated with complementary procedures such as thermal modelling, multi-sensor validation, and structural knowledge. Finally, this document highlights the persistent technical constraints of InSAR in bridge monitoring—including measurement ambiguities, SAR image acquisition limitations, and a lack of standardised, automated workflows—that continue to impede operational adoption but also point toward opportunities for methodological improvement. Full article
(This article belongs to the Special Issue Sustainable Practices in Bridge Construction)
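The millimetre-level accuracy quoted above rests on the standard DInSAR relation between interferometric phase change and line-of-sight displacement, d_LOS = -(lambda / 4*pi) * delta_phi. A quick sketch using Sentinel-1's nominal C-band wavelength (the function name is ours, not from the paper):

```python
import math

WAVELENGTH_M = 0.05546  # Sentinel-1 C-band radar wavelength (~5.55 cm)

def phase_to_los_displacement_mm(delta_phi_rad: float) -> float:
    """Standard DInSAR relation: d_LOS = -(lambda / 4*pi) * delta_phi.
    One full 2*pi fringe corresponds to lambda/2 of LOS motion."""
    return -(WAVELENGTH_M / (4.0 * math.pi)) * delta_phi_rad * 1000.0

# One full interferometric fringe (2*pi radians) of phase change
fringe_mm = abs(phase_to_los_displacement_mm(2.0 * math.pi))
print(round(fringe_mm, 2))
```

Because one fringe spans only about 28 mm of motion, displacements larger than half a wavelength between acquisitions wrap the phase, which is exactly the "ambiguities in displacement interpretation" challenge the review flags.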

32 pages, 28708 KB  
Article
Adaptive Thermal Imaging Signal Analysis for Real-Time Non-Invasive Respiratory Rate Monitoring
by Riska Analia, Anne Forster, Sheng-Quan Xie and Zhiqiang Zhang
Sensors 2026, 26(1), 278; https://doi.org/10.3390/s26010278 - 1 Jan 2026
Viewed by 529
Abstract
(1) Background: This study presents an adaptive, contactless, and privacy-preserving respiratory-rate monitoring system based on thermal imaging, designed for real-time operation on embedded edge hardware. The system continuously processes temperature data from a compact thermal camera without external computation, enabling practical deployment for home or clinical vital-sign monitoring. (2) Methods: Thermal frames are captured using a 256×192 TOPDON TC001 camera and processed entirely on an NVIDIA Jetson Orin Nano. A YOLO-based detector localizes the nostril region in every even frame (stride = 2) to reduce the computation load, while a Kalman filter predicts the ROI position on skipped frames to maintain spatial continuity and suppress motion jitter. From the stabilized ROI, a temperature-based breathing signal is extracted and analyzed through an adaptive median–MAD hysteresis algorithm that dynamically adjusts to signal amplitude and noise variations for breathing-phase detection. Respiratory rate (RR) is computed from inter-breath intervals (IBI) validated within physiological constraints. (3) Results: Ten healthy subjects participated in six experimental conditions, including resting, paced breathing, speech, off-axis yaw, posture (supine), and distance variations up to 2.0 m. Across these conditions, the system attained an MAE of 0.57±0.36 BPM and an RMSE of 0.64±0.42 BPM, demonstrating stable accuracy under motion and thermal drift. Compared with peak-based and FFT spectral baselines, the proposed method reduced errors by a large margin across all conditions. (4) Conclusions: The findings confirm that accurate and robust respiratory-rate estimation can be achieved using a low-resolution thermal sensor running entirely on an embedded edge device. The combination of a YOLO-based nostril detector, Kalman ROI prediction, and an adaptive MAD–hysteresis phase detector that self-adjusts to signal variability provides a compact, efficient, and privacy-preserving solution for non-invasive vital-sign monitoring in real-world environments. Full article
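The median–MAD hysteresis idea can be sketched as follows. This is a simplified stand-in for the paper's adaptive algorithm: the hysteresis factor k, the synthetic sine-wave breathing signal, and the use of global rather than windowed statistics are all our assumptions for illustration:

```python
import math
import statistics

def mad(xs):
    """Median absolute deviation, a robust spread estimate."""
    m = statistics.median(xs)
    return statistics.median(abs(x - m) for x in xs)

def breaths_per_minute(signal, fs, k=0.5):
    """Hysteresis phase detection around median +/- k*MAD: a breath
    is counted on each low->high crossing (inhalation onset), and RR
    is 60 over the mean inter-breath interval."""
    med = statistics.median(signal)
    spread = mad(signal) or 1e-9
    hi, lo = med + k * spread, med - k * spread
    state, onsets = "low", []
    for i, x in enumerate(signal):
        if state == "low" and x > hi:
            state = "high"
            onsets.append(i / fs)
        elif state == "high" and x < lo:
            state = "low"
    if len(onsets) < 2:
        return 0.0
    ibis = [b - a for a, b in zip(onsets, onsets[1:])]
    return 60.0 / (sum(ibis) / len(ibis))

# Synthetic 0.25 Hz breathing signal (15 breaths/min) sampled at 25 fps
fs = 25.0
sig = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(int(fs * 60))]
rr = breaths_per_minute(sig, fs)
print(round(rr, 1))
```

Using two thresholds instead of one is what suppresses double counting: small noise wiggles around a single threshold would each register as a breath, while the hysteresis band requires a full excursion from below median - k*MAD to above median + k*MAD.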

26 pages, 5101 KB  
Article
Cross-Modal Adaptive Fusion and Multi-Scale Aggregation Network for RGB-T Crowd Density Estimation and Counting
by Jian Liu, Zuodong Niu, Yufan Zhang and Lin Tang
Appl. Sci. 2026, 16(1), 161; https://doi.org/10.3390/app16010161 - 23 Dec 2025
Viewed by 405
Abstract
Crowd counting is a significant task in computer vision. By combining the rich texture information from RGB images with the insensitivity to illumination changes offered by thermal imaging, the applicability of models in real-world complex scenarios can be enhanced. Current research on RGB-T crowd counting primarily focuses on feature fusion strategies, multi-scale structures, and the exploration of novel network architectures such as Vision Transformer and Mamba. However, existing approaches face two key challenges: limited robustness to illumination shifts and insufficient handling of scale discrepancies. To address these challenges, this study develops a robust RGB-T crowd counting framework that remains stable under illumination shifts, introducing two key innovations beyond existing fusion and multi-scale approaches: (1) a cross-modal adaptive fusion module (CMAFM) that actively evaluates and fuses reliable cross-modal features under varying scenarios by simulating a dynamic feature selection and trust allocation mechanism; and (2) a multi-scale aggregation module (MSAM) that unifies features with different receptive fields to an intermediate scale and performs weighted fusion to enhance modeling capability for cross-modal scale variations. The proposed method achieves relative improvements of 1.57% in GAME(0) and 0.78% in RMSE on the DroneRGBT dataset compared to existing methods, and improvements of 2.48% and 1.59% on the RGBT-CC dataset, respectively. It also demonstrates higher stability and robustness under varying lighting conditions. This research provides an effective solution for building stable and reliable all-weather crowd counting systems, with significant application prospects in smart city security and management. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
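GAME(0), reported above, is the plain absolute counting error; GAME(L) in general partitions the density map into 4^L non-overlapping cells and sums per-cell errors, so higher levels also penalize mislocalized density. A minimal sketch on toy 4×4 "density maps" (not the authors' evaluation code):

```python
import numpy as np

def game(pred, gt, level):
    """Grid Average Mean absolute Error: split the density maps into
    4**level non-overlapping cells and sum per-cell count errors.
    GAME(0) reduces to the absolute counting error."""
    n = 2 ** level
    h, w = pred.shape
    err = 0.0
    for i in range(n):
        for j in range(n):
            ys = slice(i * h // n, (i + 1) * h // n)
            xs = slice(j * w // n, (j + 1) * w // n)
            err += abs(pred[ys, xs].sum() - gt[ys, xs].sum())
    return err

# Toy case: total count is right (5 people) but density is misplaced
gt = np.zeros((4, 4)); gt[0, 0] = 3.0; gt[3, 3] = 2.0
pred = np.zeros((4, 4)); pred[0, 0] = 2.0; pred[3, 3] = 3.0
print(game(pred, gt, 0), game(pred, gt, 1))
```

The toy output illustrates the design choice: GAME(0) is 0 because the global counts match, while GAME(1) exposes the misplaced density between quadrants.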

21 pages, 7017 KB  
Article
Federated Transfer Learning for Tomato Leaf Disease Detection Using Neuro-Graph Hybrid Model
by Ana-Maria Cristea and Ciprian Dobre
AgriEngineering 2025, 7(12), 432; https://doi.org/10.3390/agriengineering7120432 - 15 Dec 2025
Viewed by 513
Abstract
Plant diseases are currently a major threat to agricultural economies and food availability, and they have a negative environmental impact. Despite being a promising line of research, current approaches struggle with poor cross-site generalization, limited labels, and dataset bias. Real-field complexities, such as environmental variability, heterogeneous varieties, or temporal dynamics, are often overlooked. Numerous studies have been conducted to address these challenges, proposing advanced learning strategies and improved evaluation protocols. Synthetic data generation and self-supervised learning reduce dataset bias, while domain adaptation, hyperspectral, and thermal signals improve robustness across sites. However, a large portion of current methods are developed and validated mainly on clean laboratory datasets, which do not capture the variability of real-field conditions. Existing AI models often produce imperfect detection results when dealing with field-image complexities such as dense vegetation, variable illumination, or changing symptom expression. Although augmentation techniques can approximate real-world conditions, incorporating field data represents a substantial enhancement in model reliability. Federated transfer learning offers a promising approach to enhance plant disease detection by enabling collaborative training of models across diverse agricultural environments, using in-field data but without disclosing participants' data to one another. In this study, we collaboratively trained a hybrid Graph–SNN model using federated learning (FL) to preserve data privacy, optimized for efficient use of participant resources. The model achieved an accuracy of 0.9445 on clean laboratory data and 0.6202 exclusively on field data, underscoring the considerable challenges posed by real-world conditions. Our findings demonstrate the potential of FL for privacy-preserving and reliable plant disease detection under real-field conditions. Full article
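The abstract does not specify the aggregation rule, so the sketch below uses FedAvg, the canonical federated-learning baseline, as an illustrative stand-in: each parameter tensor of the shared model is a sample-size-weighted average of the clients' local updates, so raw images never leave a site:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: each parameter tensor becomes the
    sample-size-weighted mean of the clients' local tensors.
    client_weights: list (per client) of lists of np arrays."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical sites holding 100 and 300 labelled leaf images;
# the single "parameter tensor" is a toy stand-in for a real model.
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 5.0])]
merged = fedavg([w_a, w_b], [100, 300])
print(merged[0])
```

Weighting by sample count means the larger field site pulls the global model toward its data distribution, which is also why severe cross-site heterogeneity (the lab-vs-field gap reported above) remains hard for plain averaging.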

30 pages, 34352 KB  
Review
Infrared and Visible Image Fusion Techniques for UAVs: A Comprehensive Review
by Junjie Li, Cunzheng Fan, Congyang Ou and Haokui Zhang
Drones 2025, 9(12), 811; https://doi.org/10.3390/drones9120811 - 21 Nov 2025
Cited by 3 | Viewed by 2005
Abstract
Infrared–visible (IR–VIS) image fusion is becoming central to unmanned aerial vehicle (UAV) perception, enabling robust operation across day–night cycles, backlighting, haze or smoke, and large viewpoint or scale changes. However, for practical applications some challenges still remain: visible images are illumination-sensitive; infrared imagery suffers thermal crossover and weak texture; motion and parallax cause cross-modal misalignment; UAV scenes contain many small or fast targets; and onboard platforms face strict latency, power, and bandwidth budgets. Given these UAV-specific challenges and constraints, we provide a UAV-centric synthesis of IR–VIS fusion. We: (i) propose a taxonomy linking data compatibility, fusion mechanisms, and task adaptivity; (ii) critically review learning-based methods—including autoencoders, CNNs, GANs, Transformers, and emerging paradigms; (iii) compare explicit/implicit registration strategies and general-purpose fusion frameworks; and (iv) consolidate datasets and evaluation metrics to reveal UAV-specific gaps. We further identify open challenges in benchmarking, metrics, lightweight design, and integration with downstream detection, segmentation, and tracking, offering guidance for real-world deployment. A continuously updated bibliography and resources are provided and discussed in the main text. Full article

19 pages, 13708 KB  
Article
A-BiYOLOv9: An Attention-Guided YOLOv9 Model for Infrared-Based Wind Turbine Inspection
by Sami Ekici, Murat Uyar and Tugce Nur Karadeniz
Appl. Sci. 2025, 15(21), 11840; https://doi.org/10.3390/app152111840 - 6 Nov 2025
Viewed by 726
Abstract
This work examines how thermal turbulence patterns can be identified on the blades of operating wind turbines—an issue that plays a key role in preventive maintenance and overall safety assurance. Using the publicly available KI-VISIR dataset, containing annotated infrared images collected under real-world operating conditions, four object detection architectures were evaluated: YOLOv8, the baseline YOLOv9, the transformer-based RT-DETR, and an enhanced variant introduced as A-BiYOLOv9. The proposed approach extends the YOLOv9 backbone with convolutional block attention modules (CBAM) and integrates a bidirectional feature pyramid network (BiFPN) in the neck to improve feature fusion. All models were trained for thirty epochs on single-class turbulence annotations. The experiments confirm that YOLOv8 provides fast and efficient detection, YOLOv9 delivers higher accuracy and more stable convergence, and RT-DETR exhibits strong precision and consistent localization performance. A-BiYOLOv9 maintains stable and reliable accuracy even when the thermal patterns vary significantly between scenes. These results confirm that attention-augmented and feature-fusion-centric architectures improve detection sensitivity and reliability in the thermal domain. Consequently, the proposed A-BiYOLOv9 represents a promising candidate for real-time, contactless thermographic monitoring of wind turbines, with the potential to extend turbine lifespan through predictive maintenance strategies. Full article
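The BiFPN neck mentioned above blends same-resolution feature maps with "fast normalized fusion" weights, per the original BiFPN formulation (EfficientDet); the sketch below shows that mechanism on toy arrays and is not code from this paper:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion: learnable scalar weights
    are passed through ReLU and normalized to (approximately) sum
    to 1, then used to blend same-resolution feature maps."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

# Toy same-shape feature maps, e.g. a top-down path and a lateral input
f_top_down = np.full((2, 2), 4.0)
f_lateral = np.full((2, 2), 8.0)
fused = fast_normalized_fusion([f_top_down, f_lateral], [1.0, 3.0])
print(fused)
```

The ReLU-plus-normalization form was chosen in BiFPN as a cheaper, more stable alternative to a softmax over the fusion weights, which matters for the real-time inspection setting targeted here.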

36 pages, 6413 KB  
Review
A Review of Crop Attribute Monitoring Technologies for General Agricultural Scenarios
by Zhuofan Li, Ruochen Wang and Renkai Ding
AgriEngineering 2025, 7(11), 365; https://doi.org/10.3390/agriengineering7110365 - 2 Nov 2025
Cited by 2 | Viewed by 2196
Abstract
As global agriculture shifts to intelligence and precision, crop attribute detection has become foundational for intelligent systems (harvesters, UAVs, sorters). It enables real-time monitoring of key indicators (maturity, moisture, disease) to optimize operations—reducing crop losses by 10–15% via precise cutting-height adjustment—and boosts resource-use efficiency. This review targets harvesting-stage and in-field monitoring for grains, fruits, and vegetables, highlighting practical technologies: near-infrared/Raman spectroscopy (non-destructive internal attribute detection), 3D vision/LiDAR (high-precision measurement of plant height, density, and fruit location), and deep learning (YOLO for counting, U-Net for disease segmentation). It addresses universal field challenges (lighting variation, target occlusion, real-time demands) and presents actionable fixes (illumination compensation, sensor fusion, lightweight AI) that enhance stability across scenarios. Future trends prioritize real-world deployment: multi-sensor fusion (e.g., RGB + thermal imaging) for comprehensive perception, edge computing (inference delay < 100 ms) to overcome rural network latency, and low-cost solutions (mobile/embedded device compatibility) to lower barriers for smallholders—directly supporting scalable precision agriculture and global sustainable food production. Full article
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)

24 pages, 2635 KB  
Review
Hailstorm Impact on Photovoltaic Modules: Damage Mechanisms, Testing Standards, and Diagnostic Techniques
by Marko Katinić and Mladen Bošnjaković
Technologies 2025, 13(10), 473; https://doi.org/10.3390/technologies13100473 - 18 Oct 2025
Viewed by 1948
Abstract
This study examines the effects of hailstorms on photovoltaic (PV) modules, focussing on damage mechanisms, testing standards, numerical simulations, damage detection techniques, and mitigation strategies. A comprehensive review of the recent literature (2017–2025), experimental results, and case studies is complemented by advanced simulation methods such as finite element analysis (FEA) and smoothed particle hydrodynamics (SPH). The research emphasises the crucial role of protective glass thickness, cell type, number of busbars, and quality of lamination in improving hail resistance. While international standards such as IEC 61215 specify test protocols, actual hail events often exceed these conditions, leading to glass breakage, micro-cracks, and electrical faults. Numerical simulations confirm that thicker glass and optimised module designs significantly reduce damage and power loss. Detection methods, including visual inspection, thermal imaging, electroluminescence, and AI-driven imaging, enable rapid identification of both visible and hidden damage. The study also addresses the financial risks associated with hail damage and emphasises the importance of insurance and preventative measures. Recommendations include the use of certified, robust modules, protective covers, optimised installation angles, and regular inspections to mitigate the effects of hail. Future research should develop lightweight, impact-resistant materials, improve simulation modelling to better reflect real-world hail conditions, and improve AI-based damage detection in conjunction with drone inspections. This integrated approach aims to improve the durability and reliability of PV modules in hail-prone regions and support the sustainable use of solar energy amidst increasing climatic challenges. Full article
(This article belongs to the Special Issue Innovative Power System Technologies)

21 pages, 14964 KB  
Article
An Automated Framework for Abnormal Target Segmentation in Levee Scenarios Using Fusion of UAV-Based Infrared and Visible Imagery
by Jiyuan Zhang, Zhonggen Wang, Jing Chen, Fei Wang and Lyuzhou Gao
Remote Sens. 2025, 17(20), 3398; https://doi.org/10.3390/rs17203398 - 10 Oct 2025
Cited by 2 | Viewed by 878
Abstract
Levees are critical for flood defence, but their integrity is threatened by hazards such as piping and seepage, especially during high-water-level periods. Traditional manual inspections for these hazards and associated emergency response elements, such as personnel and assets, are inefficient and often impractical. While UAV-based remote sensing offers a promising alternative, the effective fusion of multi-modal data and the scarcity of labelled data for supervised model training remain significant challenges. To overcome these limitations, this paper reframes levee monitoring as an unsupervised anomaly detection task. We propose a novel, fully automated framework that unifies geophysical hazards and emergency response elements into a single analytical category of “abnormal targets” for comprehensive situational awareness. The framework consists of three key modules: (1) a state-of-the-art registration algorithm to precisely align infrared and visible images; (2) a generative adversarial network to fuse the thermal information from IR images with the textural details from visible images; and (3) an adaptive, unsupervised segmentation module where a mean-shift clustering algorithm, with its hyperparameters automatically tuned by Bayesian optimization, delineates the targets. We validated our framework on a real-world dataset collected from a levee on the Pajiang River, China. The proposed method demonstrates superior performance over all baselines, achieving an Intersection over Union of 0.348 and a macro F1-Score of 0.479. This work provides a practical, training-free solution for comprehensive levee monitoring and demonstrates the synergistic potential of multi-modal fusion and automated machine learning for disaster management. Full article
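The reported Intersection over Union (0.348) and macro F1 (0.479) can be computed on binary "abnormal target" masks as follows; this is a small self-contained sketch of the metrics, not the authors' evaluation code:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def macro_f1(pred, gt):
    """Mean of the F1 scores of the 'abnormal' (True) and
    'background' (False) classes of a binary segmentation."""
    scores = []
    for cls in (True, False):
        p, g = (pred == cls), (gt == cls)
        tp = np.logical_and(p, g).sum()
        fp = np.logical_and(p, ~g).sum()
        fn = np.logical_and(~p, g).sum()
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 1.0)
    return sum(scores) / len(scores)

gt = np.zeros((4, 4), dtype=bool); gt[:2, :2] = True      # true target
pred = np.zeros((4, 4), dtype=bool); pred[:2, :3] = True  # over-segmented
print(round(iou(pred, gt), 3), round(macro_f1(pred, gt), 3))
```

Macro averaging weights the small "abnormal" class equally with the dominant background class, which is why it is a sensible companion to IoU when targets such as seepage zones cover only a tiny fraction of each frame.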

25 pages, 13151 KB  
Article
Adaptive Energy–Gradient–Contrast (EGC) Fusion with AIFI-YOLOv12 for Improving Nighttime Pedestrian Detection in Security
by Lijuan Wang, Zuchao Bao and Dongming Lu
Appl. Sci. 2025, 15(19), 10607; https://doi.org/10.3390/app151910607 - 30 Sep 2025
Viewed by 740
Abstract
In security applications, visible-light pedestrian detectors are highly sensitive to changes in illumination and fail under low-light or nighttime conditions, while infrared sensors, though resilient to lighting, often produce blurred object boundaries that hinder precise localization. To address these complementary limitations, we propose a practical multimodal pipeline—Adaptive Energy–Gradient–Contrast (EGC) Fusion with AIFI-YOLOv12—that first fuses infrared and low-light visible images using per-pixel weights derived from local energy, gradient magnitude and contrast measures, then detects pedestrians with an improved YOLOv12 backbone. The detector integrates an AIFI attention module at high semantic levels, replaces selected modules with A2C2f blocks to enhance cross-channel feature aggregation, and preserves P3–P5 outputs to improve small-object localization. We evaluate the complete pipeline on the LLVIP dataset and report Precision, Recall, mAP@50, mAP@50–95, GFLOPs, FPS and detection time, comparing against YOLOv8, YOLOv10–YOLOv12 baselines (n and s scales). Quantitative and qualitative results show that the proposed fusion restores complementary thermal and visible details and that the AIFI-enhanced detector yields more robust nighttime pedestrian detection while maintaining a competitive computational profile suitable for real-world security deployments. Full article
(This article belongs to the Special Issue Advanced Image Analysis and Processing Technologies and Applications)
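The per-pixel energy/gradient/contrast weighting can be illustrated with a toy fusion. The box-window measures, their equal-weight sum, and the synthetic test images below are our simplifications for illustration, not the paper's exact formulation:

```python
import numpy as np

def local_measures(img, k=3):
    """Per-pixel energy, gradient magnitude and contrast over a
    k x k neighbourhood (edge-padded box windows), summed with
    equal weights as a simple activity score."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    win = np.stack([
        p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(k) for dx in range(k)
    ])
    energy = (win ** 2).mean(axis=0)
    gy, gx = np.gradient(img.astype(float))
    gradient = np.hypot(gx, gy)
    contrast = win.max(axis=0) - win.min(axis=0)
    return energy + gradient + contrast

def egc_fuse(ir, vis):
    """Blend IR and visible images with per-pixel weights proportional
    to each modality's local activity score."""
    s_ir, s_vis = local_measures(ir), local_measures(vis)
    w_ir = s_ir / (s_ir + s_vis + 1e-9)
    return w_ir * ir + (1.0 - w_ir) * vis

ir = np.zeros((8, 8)); ir[2:6, 2:6] = 200.0   # hot pedestrian blob
vis = np.full((8, 8), 30.0)                   # flat low-light frame
fused = egc_fuse(ir, vis)
print(fused[4, 4] > vis[4, 4])
```

Even in this toy, the weighting behaves as the abstract describes: the fused image keeps the thermal signature where the IR channel is locally active, and falls back to the visible intensity where it is not.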

37 pages, 2297 KB  
Systematic Review
Search, Detect, Recover: A Systematic Review of UAV-Based Remote Sensing Approaches for the Location of Human Remains and Clandestine Graves
by Cherene de Bruyn, Komang Ralebitso-Senior, Kirstie Scott, Heather Panter and Frederic Bezombes
Drones 2025, 9(10), 674; https://doi.org/10.3390/drones9100674 - 26 Sep 2025
Cited by 1 | Viewed by 3461
Abstract
Several approaches are currently being used by law enforcement to locate the remains of victims, yet traditional methods are invasive and time-consuming. Unmanned Aerial Vehicle (UAV)-based remote sensing has emerged as a potential tool to support the location of human remains and clandestine graves. While offering a non-invasive and low-cost alternative, UAV-based remote sensing needs to be tested and validated for forensic casework. To assess current knowledge, a systematic review of 19 peer-reviewed articles from four databases was conducted, focusing specifically on UAV-based remote sensing for the location of human remains and clandestine graves. The findings indicate that different sensors (colour, thermal, and multispectral cameras) were tested across a range of burial conditions and models (human and mammalian). While UAVs with imaging sensors can locate graves and decomposition-related anomalies, the experimental designs of the reviewed studies lacked robustness in terms of replication and consistency across models. Trends also highlight the potential of automated anomaly detection over manual inspection, potentially leading to improved predictive modelling. Overall, UAV-based remote sensing shows considerable promise for enhancing the efficiency of human remains and clandestine grave location, but methodological limitations must be addressed to ensure findings are relevant to real-world forensic cases. Full article

16 pages, 2115 KB  
Article
Hygrothermal Aging and Thermomechanical Characterization of As-Manufactured Tidal Turbine Blade Composites
by Paul Murdy, Robynne E. Murray, David Barnes, Ariel F. Lusty, Erik G. Rognerud, Peter J. Creveling and Daniel Samborsky
J. Mar. Sci. Eng. 2025, 13(9), 1790; https://doi.org/10.3390/jmse13091790 - 16 Sep 2025
Cited by 2 | Viewed by 759
Abstract
This study investigates the hygrothermal aging behavior and thermomechanical properties of as-manufactured glass fiber-reinforced epoxy and thermoplastic composite tidal turbine blades. The blades were previously deployed in a marine environment and subsequently analyzed through a comprehensive suite of material characterization techniques, including hygrothermal aging, dynamic mechanical analysis (DMA), tensile testing, and X-ray computed tomography (XCT). Hygrothermal aging experiments revealed that while thermoplastic composites exhibited lower overall water absorption than epoxy (0.47% vs. 0.78%), they had significantly higher diffusion coefficients (12.1 vs. 2.1 × 10⁻¹³ m² s⁻¹), suggesting faster saturation in operational environments. DMA results demonstrated that water ingress caused plasticization in epoxy matrices, reducing the glass transition temperature (from 112 °C to 104 °C) and increasing damping, while thermoplastic composites showed more stable thermal behavior (87 °C glass transition temperature). Tensile testing revealed substantial reductions in ultimate strength (>40%) for both materials after prolonged water exposure, with minimal change in elastic modulus, highlighting the role of matrix degradation over fiber reinforcement. XCT image analysis showed that both composites were manufactured with high quality: no large voids or cracks were present, and the degree of fiber misalignment was low. These findings inform future marine renewable energy composite designs by emphasizing the critical influence of moisture on long-term structural integrity and the need for optimized material systems in harsh marine environments. This work provides a rare real-world comparison of epoxy and recyclable thermoplastic tidal turbine blades, showing how laboratory aging tests and advanced imaging reveal the influence of material and manufacturing choices on long-term marine durability. Full article
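The link drawn between diffusion coefficient and saturation speed follows from classical one-dimensional Fickian absorption, M_t/M_inf = (4/h) * sqrt(D*t/pi) for a plate wetted on both faces, valid early in the uptake (roughly M_t/M_inf below 0.6). A quick comparison using the quoted coefficients and an assumed 5 mm laminate thickness, which is not stated in the abstract:

```python
import math

def time_to_fraction_days(f, thickness_m, d_m2_s):
    """Early-time Fickian uptake for a plate exposed on both faces:
    M_t / M_inf = (4 / h) * sqrt(D * t / pi)  (valid for f <~ 0.6),
    inverted for the time at which saturation fraction f is reached."""
    t_s = math.pi * (f * thickness_m / 4.0) ** 2 / d_m2_s
    return t_s / 86400.0

h = 5e-3  # assumed laminate thickness (m), illustrative only
days = {
    label: time_to_fraction_days(0.5, h, d)
    for label, d in [("epoxy", 2.1e-13), ("thermoplastic", 12.1e-13)]
}
print({k: round(v, 1) for k, v in days.items()})
```

Because time to a given uptake fraction scales as 1/D, the roughly six-fold higher diffusion coefficient translates directly into saturation on the order of weeks rather than months in this toy geometry, consistent with the "faster saturation" reading above.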

20 pages, 8235 KB  
Article
Enhancing Search and Rescue Missions with UAV Thermal Video Tracking
by Piero Fraternali, Luca Morandini and Riccardo Motta
Remote Sens. 2025, 17(17), 3032; https://doi.org/10.3390/rs17173032 - 1 Sep 2025
Cited by 3 | Viewed by 2761
Abstract
Wilderness Search and Rescue (WSAR) missions are time-critical emergency response operations that require locating a lost person within a short timeframe. Large forested terrains must be explored in challenging environments and adverse conditions. Unmanned Aerial Vehicles (UAVs) equipped with thermal cameras enable the efficient exploration of vast areas. However, manual analysis of the large volume of collected data is difficult, time-consuming, and prone to errors, increasing the risk of missing a person. This work proposes an object detection and tracking pipeline that automatically analyzes UAV thermal videos in real time to identify lost people in forest environments. The tracking module combines information from multiple viewpoints to suppress false alarms and focus responders' efforts. In this moving-camera scenario, tracking performance is enhanced by introducing a motion compensation module based on known camera poses. Experimental results on the collected thermal video dataset demonstrate the effectiveness of the proposed tracking-based approach, achieving a Precision of 90.3% and a Recall of 73.4%. On a dataset of UAV thermal images, the introduced camera alignment technique increases the Recall by 6.1% with negligible computational overhead, reaching 35.2 FPS. The proposed approach, optimized for real-time video processing, has direct application in real-world WSAR missions to improve operational efficiency. Full article
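Detection Precision and Recall figures of the kind reported above are typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold. A minimal sketch, where the boxes and the 0.5 threshold are illustrative assumptions rather than the article's actual evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best >= thr:
            tp += 1
            matched.add(best_i)
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall

# Hypothetical frame: three detections, two ground-truth persons
p, r = precision_recall([(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)],
                        [(0, 0, 10, 10), (21, 21, 31, 31)])
print(round(p, 3), round(r, 3))
```

In a tracking pipeline, the same matching is usually applied per frame and aggregated over the whole video before computing the final metrics.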
(This article belongs to the Section Earth Observation for Emergency Management)

23 pages, 1657 KB  
Article
High-Precision Pest Management Based on Multimodal Fusion and Attention-Guided Lightweight Networks
by Ziye Liu, Siqi Li, Yingqiu Yang, Xinlu Jiang, Mingtian Wang, Dongjiao Chen, Tianming Jiang and Min Dong
Insects 2025, 16(8), 850; https://doi.org/10.3390/insects16080850 - 16 Aug 2025
Viewed by 1806
Abstract
In the context of global food security and sustainable agricultural development, the efficient recognition and precise management of agricultural insect pests and their predators have become critical challenges in the domain of smart agriculture. To address the limitations of traditional models that rely heavily on single-modal inputs and suffer from poor recognition stability under complex field conditions, a multimodal recognition framework has been proposed. This framework integrates RGB imagery, thermal infrared imaging, and environmental sensor data. A cross-modal attention mechanism, an environment-guided modality weighting strategy, and decoupled recognition heads are incorporated to enhance the model's robustness against small targets, intermodal variations, and environmental disturbances. Evaluated on a high-complexity multimodal field dataset, the proposed model significantly outperforms mainstream methods across four key metrics (precision, recall, F1-score, and mAP@50), achieving 91.5% precision, 89.2% recall, 90.3% F1-score, and 88.0% mAP@50. These results represent an improvement of over 6% compared to representative models such as YOLOv8 and DETR. Additional ablation studies confirm the critical contributions of the key modules, particularly under challenging scenarios such as low light, strong reflections, and sensor noise. Moreover, deployment tests conducted on the Jetson Xavier edge device demonstrate the feasibility of real-world application, with the model achieving a 25.7 FPS inference speed and a compact model size of 48.3 MB, balancing accuracy with a lightweight design. This study provides an efficient, intelligent, and scalable AI solution for pest surveillance and biological control, contributing to precision pest management in agricultural ecosystems. Full article
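As a quick sanity check, the reported F1-score is the harmonic mean of the reported precision and recall, which the abstract's numbers satisfy:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall (fractions in [0, 1]).
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Values reported in the abstract: precision 91.5%, recall 89.2%
print(round(f1_score(0.915, 0.892), 3))  # matches the reported 90.3% F1-score
```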