Search Results (53)

Search Parameters:
Keywords = UAV-TIR

22 pages, 6134 KiB  
Article
The Evaluation of Small-Scale Field Maize Transpiration Rate from UAV Thermal Infrared Images Using Improved Three-Temperature Model
by Xiaofei Yang, Zhitao Zhang, Qi Xu, Ning Dong, Xuqian Bai and Yanfu Liu
Plants 2025, 14(14), 2209; https://doi.org/10.3390/plants14142209 - 17 Jul 2025
Viewed by 302
Abstract
Transpiration is the dominant process driving water loss in crops, significantly influencing their growth, development, and yield. Efficient monitoring of the transpiration rate (Tr) is crucial for evaluating crop physiological status and optimizing water management strategies. The three-temperature (3T) model has potential for rapid estimation of transpiration rates, but its application to low-altitude remote sensing has not yet been thoroughly investigated. To evaluate the performance of the 3T model based on land surface temperature (LST) and canopy temperature (TC) in estimating the transpiration rate, this study utilized an unmanned aerial vehicle (UAV) equipped with a thermal infrared (TIR) camera to capture TIR images of summer maize during the nodulation-irrigation stage under four different moisture treatments, from which LST was extracted. The Gaussian Hidden Markov Random Field (GHMRF) model was applied to segment the TIR images, facilitating the extraction of TC. Finally, an improved 3T model incorporating fractional vegetation coverage (FVC) was proposed. The findings demonstrate that: (1) The GHMRF model offers an effective approach for TIR image segmentation, and the mechanism of TIR segmentation implemented by the GHMRF model is explored; segmentation performance is optimal when the potential energy function parameter β is set to 0.1. (2) The feasibility of utilizing UAV-based TIR remote sensing in conjunction with the 3T model for estimating Tr has been demonstrated, showing a significant correlation between the measured transpiration rate and the estimated transpiration rate (Tr-3TC) derived from TC data obtained through the segmentation and processing of TIR imagery; the correlation coefficients (r) were 0.946 in 2022 and 0.872 in 2023. (3) The improved 3T model enhances the estimation accuracy of crop Tr rapidly and effectively, exhibiting a robust correlation with Tr-3TC; the correlation coefficients for the two observed years are 0.991 and 0.989, respectively, while the model maintains low RMSEs of 0.756 mmol H2O m−2 s−1 and 0.555 mmol H2O m−2 s−1, indicating strong interannual stability.
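The improved, FVC-weighted formulation is specific to the paper, but the canopy-transpiration component of the classic 3T model can be sketched from the quantities the abstract names (canopy temperature, air temperature, and a non-transpiring reference temperature). The function and variable names below are illustrative assumptions, not the authors' code, and the reference net radiation is simply assumed equal to the canopy net radiation.

```python
import numpy as np

LAMBDA = 2.45e6   # latent heat of vaporization, J kg^-1 (approximate)
M_WATER = 0.018   # molar mass of water, kg mol^-1

def transpiration_3t(rn_c, t_canopy, t_air, t_ref, rn_ref=None):
    """Canopy transpiration from a commonly cited form of the 3T model.

    rn_c     : canopy net radiation (W m^-2)
    t_canopy : canopy temperature from segmented TIR imagery
    t_air    : air temperature (same unit as t_canopy)
    t_ref    : temperature of a non-transpiring reference canopy/leaf
    rn_ref   : net radiation of the reference; assumed equal to rn_c if None
    Returns latent heat flux (W m^-2) and transpiration rate (mmol H2O m^-2 s^-1).
    """
    rn_ref = rn_c if rn_ref is None else rn_ref
    le = rn_c - rn_ref * (t_canopy - t_air) / (t_ref - t_air)
    tr_mmol = le / (LAMBDA * M_WATER) * 1000.0  # W m^-2 -> mmol H2O m^-2 s^-1
    return le, tr_mmol

# Example with illustrative values (not from the paper):
le, tr = transpiration_3t(rn_c=450.0, t_canopy=30.2, t_air=28.5, t_ref=34.0)
print(f"LE = {le:.1f} W m^-2, Tr = {tr:.2f} mmol m^-2 s^-1")
```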

27 pages, 4651 KiB  
Article
Thermal Infrared UAV Applications for Spatially Explicit Wildlife Occupancy Modeling
by Eve Bohnett, Babu Ram Lamichanne, Surendra Chaudhary, Kapil Pokhrel, Giavanna Dorman, Axel Flores, Rebecca Lewison, Fang Qiu, Doug Stow and Li An
Land 2025, 14(7), 1461; https://doi.org/10.3390/land14071461 - 14 Jul 2025
Viewed by 457
Abstract
Assessing the impact of community-based conservation programs on wildlife biodiversity remains a significant challenge. This pilot study was designed to develop and demonstrate a scalable, spatially explicit workflow using thermal infrared (TIR) imagery and unmanned aerial vehicles (UAVs) for non-invasive biodiversity monitoring. Conducted in a 2-hectare grassland area in Chitwan, Nepal, the study applied TIR-based grid sampling and multi-species occupancy models with thin-plate splines to evaluate how species detection and richness might vary between (1) morning and evening UAV flights and (2) Chitwan National Park and the Kumroj Community Forest. While the small sample area inherently limits ecological inference, the aim was to test and demonstrate data collection and modeling protocols that could be scaled to larger landscapes with sufficient replication, not to produce generalizable ecological findings from a small dataset. The pilot results revealed higher species detection during morning flights, which allowed us to refine our data collection. Additionally, models accounting for spatial autocorrelation using thin-plate splines suggested that community-based conservation programs effectively balanced ecosystem service extraction with biodiversity conservation, maintaining richness levels comparable to the national park, whereas models without splines indicated significantly higher species richness within the national park. This study demonstrates the potential of spatially explicit methods for monitoring grassland mammals with TIR UAVs as indicators of anthropogenic impacts and conservation effectiveness. Further data collection over larger spatial and temporal scales is essential to capture occupancy more generally for species with larger home ranges, as well as any effects of rainfall, flooding, and seasonal variability on biodiversity in alluvial grasslands.
(This article belongs to the Section Land, Biodiversity, and Human Wellbeing)
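The paper fits multi-species occupancy models with thin-plate splines; as a much simpler stand-in, the sketch below fits a single-season, single-species occupancy model to toy detection histories from repeated UAV flights. All names and data are hypothetical, and the spline and multi-species components are deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

def occupancy_nll(params, detections):
    """Negative log-likelihood of a single-season occupancy model.

    detections : (n_sites, n_visits) array of 0/1 detection histories
    params     : logit-scale occupancy (psi) and detection (p) probabilities
    """
    psi, p = expit(params)
    detected = detections.sum(axis=1)
    k = detections.shape[1]
    # Sites with at least one detection: occupied, detected on some visits.
    ll_det = np.log(psi) + detected * np.log(p) + (k - detected) * np.log(1 - p)
    # Sites never detected: either occupied-but-missed or genuinely unoccupied.
    ll_none = np.log(psi * (1 - p) ** k + (1 - psi))
    ll = np.where(detected > 0, ll_det, ll_none)
    return -ll.sum()

# Toy detection histories for 5 grid cells surveyed on 3 flights (illustrative only).
y = np.array([[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 0]])
fit = minimize(occupancy_nll, x0=np.zeros(2), args=(y,), method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"occupancy = {psi_hat:.2f}, detection probability = {p_hat:.2f}")
```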

28 pages, 12669 KiB  
Article
Paddy Field Scale Evapotranspiration Estimation Based on Two-Source Energy Balance Model with Energy Flux Constraints and UAV Multimodal Data
by Tian’ao Wu, Kaihua Liu, Minghan Cheng, Zhe Gu, Weihua Guo and Xiyun Jiao
Remote Sens. 2025, 17(10), 1662; https://doi.org/10.3390/rs17101662 - 8 May 2025
Cited by 5 | Viewed by 673
Abstract
Accurate evapotranspiration (ET) monitoring is important for making scientific irrigation decisions. Unmanned aerial vehicle (UAV) remote sensing platforms allow for the flexible and efficient acquisition of field data, providing a valuable approach for large-scale ET monitoring. This study aims to enhance the accuracy and reliability of ET estimation in rice paddies through two synergistic approaches: (1) integrating diurnal variations in energy flux into the Two-Source Energy Balance (TSEB) model, which considers the canopy and soil temperature components separately, for physical estimation and (2) optimizing the flight altitudes and observation times for thermal infrared (TIR) data acquisition to enhance data quality. The results indicated that the energy flux in rice paddies followed a single-peak diurnal pattern dominated by net radiation (Rn). The diurnal variation in the ratio of soil heat flux (G) to Rn could be well fitted by a cosine function parameterized by a maximum value and a peak time (R2 > 0.90). The optimal flight altitude and time (50 m and 11:00 a.m.) for improved identification of temperature differentiation between treatments were further obtained through cross-comparison. These adaptations enabled the TSEB model to achieve satisfactory accuracy in estimating energy flux compared to the single-source SEBAL model, with R2 values of 0.8501 for Rn − G and 0.7503 for latent heat (LE), as well as reduced rRMSE values. In conclusion, this study presents a reliable method for paddy-field-scale ET estimation based on a calibrated TSEB model. Moreover, the integration of ground and UAV multimodal data highlights its potential for precise irrigation practices and sustainable water resource management.
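The abstract describes fitting the diurnal G/Rn ratio with a cosine defined by a maximum value and a peak time; one plausible parameterization of that idea (not necessarily the paper's exact functional form) can be fitted with SciPy as follows, using synthetic observations.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_over_rn(t_hours, ratio_max, t_peak):
    """Diurnal G/Rn ratio modeled as a cosine with a maximum value and a peak time."""
    return ratio_max * np.cos(2.0 * np.pi * (t_hours - t_peak) / 24.0)

# Illustrative half-hourly observations (hour of day, measured G/Rn); not real data.
t_obs = np.arange(8.0, 17.5, 0.5)
ratio_obs = 0.32 * np.cos(2 * np.pi * (t_obs - 13.0) / 24.0) + np.random.normal(0, 0.02, t_obs.size)

(ratio_max, t_peak), _ = curve_fit(g_over_rn, t_obs, ratio_obs, p0=[0.3, 12.0])
print(f"fitted max G/Rn = {ratio_max:.2f} at {t_peak:.1f} h")
```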

21 pages, 11630 KiB  
Article
Assessment of the Maize Crop Water Stress Index (CWSI) Using Drone-Acquired Data Across Different Phenological Stages
by Mpho Kapari, Mbulisi Sibanda, James Magidi, Tafadzwanashe Mabhaudhi, Sylvester Mpandeli and Luxon Nhamo
Drones 2025, 9(3), 192; https://doi.org/10.3390/drones9030192 - 6 Mar 2025
Cited by 1 | Viewed by 2072
Abstract
The temperature-based crop water stress index (CWSI) is the most robust metric among precise techniques that assess the severity of crop water stress, particularly in susceptible crops like maize. This study used an unmanned aerial vehicle (UAV) to remotely collect data for use in combination with the random forest regression algorithm to detect the maize CWSI in smallholder croplands. Specifically, it sought to predict a foliar-temperature-derived maize CWSI as a proxy for crop water stress using UAV-acquired spectral variables together with random forest regression throughout the vegetative and reproductive growth stages. The CWSI was derived after computing the non-water-stressed baseline (NWSB) and non-transpiration baseline (NTB) using the field-measured canopy temperature, air temperature, and humidity data during the vegetative growth stages (V5, V10, and V14) and the reproductive growth stage (R1). The results showed that the CWSI (CWSI < 0.3) could be estimated to an R2 of 0.86, RMSE of 0.12, and MAE of 0.10 for the 5th vegetative stage; an R2 of 0.85, RMSE of 0.03, and MAE of 0.02 for the 10th vegetative stage; an R2 of 0.85, RMSE of 0.05, and MAE of 0.04 for the 14th vegetative stage; and an R2 of 0.82, RMSE of 0.09, and MAE of 0.08 for the 1st reproductive stage. The Red, RedEdge, NIR, and TIR UAV bands and their associated indices (CCCI, MTCI, GNDVI, NDRE, Red, TIR) were the most influential variables across all the growth stages. The vegetative V10 stage exhibited the best prediction accuracies (RMSE = 0.03, MAE = 0.02), with the Red band being the most influential predictor variable. Unmanned aerial vehicles are essential for collecting data on the small and fragmented croplands predominant in southern Africa. The procedure facilitates determining crop water stress at different phenological stages to develop timeous response interventions, acting as an early warning system for crops.
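For readers unfamiliar with the CWSI, the sketch below shows the standard baseline-based computation implied by the abstract: the canopy-air temperature difference scaled between a non-water-stressed baseline (a linear function of VPD) and a non-transpiring baseline. The coefficients are hypothetical placeholders for the field-calibrated values described in the paper.

```python
import numpy as np

def cwsi(t_canopy, t_air, vpd, nwsb_intercept, nwsb_slope, ntb):
    """Crop water stress index from the canopy-air temperature difference.

    The lower limit is the non-water-stressed baseline (NWSB), modeled here as a
    linear function of vapour pressure deficit (VPD); the upper limit is the
    non-transpiring baseline (NTB), treated as a constant dT. The coefficients
    are hypothetical and would come from field calibration.
    """
    dt = t_canopy - t_air
    dt_lower = nwsb_intercept + nwsb_slope * vpd   # well-watered canopy
    dt_upper = ntb                                 # non-transpiring canopy
    return np.clip((dt - dt_lower) / (dt_upper - dt_lower), 0.0, 1.0)

# Illustrative values only.
index = cwsi(t_canopy=31.5, t_air=29.0, vpd=2.1,
             nwsb_intercept=1.8, nwsb_slope=-1.9, ntb=5.0)
print(f"CWSI = {index:.2f}")
```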

19 pages, 14386 KiB  
Article
Deep Learning Method for Wetland Segmentation in Unmanned Aerial Vehicle Multispectral Imagery
by Pakezhamu Nuradili, Ji Zhou, Guiyun Zhou and Farid Melgani
Remote Sens. 2024, 16(24), 4777; https://doi.org/10.3390/rs16244777 - 21 Dec 2024
Cited by 1 | Viewed by 1502
Abstract
This study highlights the importance of unmanned aerial vehicle (UAV) multispectral (MS) imagery for the accurate delineation and analysis of wetland ecosystems, which is crucial for their conservation and management. We present an enhanced semantic segmentation algorithm designed for UAV MS imagery that incorporates thermal infrared (TIR) data to improve segmentation outcomes. Our approach, involving meticulous image preprocessing, a customized network architecture, and iterative training procedures, aims to refine wetland boundary delineation. The algorithm demonstrates strong segmentation results, including a mean pixel accuracy (MPA) of 90.35% and a mean intersection over union (MIoU) of 73.87% across different classes, with a pixel accuracy (PA) of 95.42% and an intersection over union (IoU) of 90.46% for the wetland class. The integration of TIR data with MS imagery not only enriches the feature set for segmentation but also, to some extent, helps address data imbalance issues, contributing to a more refined ecological analysis. This approach, along with the development of a comprehensive dataset that reflects the diversity of wetland environments, advances the utility of remote sensing technologies in ecological monitoring. This research lays the groundwork for more detailed and informative UAV-based evaluations of wetland health and integrity.
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)
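The reported MPA, MIoU, PA, and IoU values can be reproduced from a confusion matrix; the following small NumPy sketch (with synthetic label maps, not the paper's data) shows how these segmentation metrics are typically computed.

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    """Per-class pixel accuracy and IoU, plus their means (MPA, MIoU)."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)
    tp = np.diag(cm).astype(float)
    pa = tp / np.maximum(cm.sum(axis=1), 1)                         # per-class pixel accuracy
    iou = tp / np.maximum(cm.sum(axis=1) + cm.sum(axis=0) - tp, 1)  # per-class IoU
    return pa, iou, pa.mean(), iou.mean()

# Tiny synthetic label maps (0 = background, 1 = wetland); illustrative only.
truth = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1]])
pred  = np.array([[1, 1, 0], [0, 0, 0], [0, 1, 1]])
pa, iou, mpa, miou = segmentation_metrics(truth, pred, num_classes=2)
print(f"MPA = {mpa:.2%}, MIoU = {miou:.2%}, wetland IoU = {iou[1]:.2%}")
```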

28 pages, 14547 KiB  
Article
A Contrastive-Augmented Memory Network for Anti-UAV Tracking in TIR Videos
by Ziming Wang, Yuxin Hu, Jianwei Yang, Guangyao Zhou, Fangjian Liu and Yuhan Liu
Remote Sens. 2024, 16(24), 4775; https://doi.org/10.3390/rs16244775 - 21 Dec 2024
Cited by 2 | Viewed by 1094
Abstract
With the development of unmanned aerial vehicle (UAV) technology, the threat of UAV intrusion is no longer negligible. Therefore, drone perception, especially anti-UAV tracking technology, has gathered considerable attention. However, both traditional Siamese and transformer-based trackers struggle in anti-UAV tasks due to the small target size, cluttered backgrounds, and model degradation. To alleviate these challenges, a novel contrastive-augmented memory network (CAMTracker) is proposed for anti-UAV tracking tasks in thermal infrared (TIR) videos. The proposed CAMTracker conducts tracking through a two-stage scheme, searching for possible candidates in the first stage and matching the candidates with the template for final prediction in the second. In the first stage, an instance-guided region proposal network (IG-RPN) is employed to calculate correlation features between the templates and the search images and to generate candidate proposals. In the second stage, a contrastive-augmented matching module (CAM), along with a refined contrastive loss function, is designed to enhance the discrimination ability of the tracker under the guidance of a contrastive learning strategy. Moreover, to avoid model degradation, an adaptive dynamic memory module (ADM) is proposed to maintain a dynamic template that copes with feature variation of the target in long sequences. Comprehensive experiments were conducted on the Anti-UAV410 dataset, where the proposed CAMTracker achieves the best performance compared to advanced tracking algorithms, with significant advantages on all evaluation metrics, including gains of at least 2.40%, 4.12%, 5.43%, and 5.48% in precision, success rate, success AUC, and state accuracy, respectively.
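The paper's refined contrastive loss is not reproduced here, but a standard InfoNCE-style contrastive loss over candidate proposals, the kind of objective a contrastive matching module builds upon, can be sketched in PyTorch as follows; tensor shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(candidate_feats, template_feat, positive_idx, temperature=0.1):
    """Standard InfoNCE contrastive loss over candidate proposals.

    candidate_feats : (N, D) embeddings of candidate proposals
    template_feat   : (D,) embedding of the target template
    positive_idx    : index of the candidate that matches the template
    """
    cand = F.normalize(candidate_feats, dim=1)
    temp = F.normalize(template_feat, dim=0)
    logits = cand @ temp / temperature            # scaled cosine similarities
    target = torch.tensor([positive_idx])
    return F.cross_entropy(logits.unsqueeze(0), target)

# Illustrative usage with random embeddings.
feats = torch.randn(8, 128)
loss = info_nce_loss(feats, torch.randn(128), positive_idx=3)
print(loss.item())
```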

19 pages, 14249 KiB  
Article
Combining UAV Multispectral and Thermal Infrared Data for Maize Growth Parameter Estimation
by Xingjiao Yu, Xuefei Huo, Long Qian, Yiying Du, Dukun Liu, Qi Cao, Wen’e Wang, Xiaotao Hu, Xiaofei Yang and Shaoshuai Fan
Agriculture 2024, 14(11), 2004; https://doi.org/10.3390/agriculture14112004 - 7 Nov 2024
Cited by 4 | Viewed by 1469
Abstract
The leaf area index (LAI) and leaf chlorophyll content (LCC) are key indicators of crop photosynthetic efficiency and nitrogen status. This study explores the integration of UAV-based multispectral (MS) and thermal infrared (TIR) data to improve the estimation of maize LAI and LCC across different growth stages, aiming to enhance nitrogen (N) management. In field trials from 2022 to 2023, UAVs captured canopy images of maize under varied water and nitrogen treatments, while the LAI and LCC were measured in the field. Estimation models, including partial least squares regression (PLS), convolutional neural networks (CNNs), and random forest (RF), were developed using spectral, thermal, and textural data. The results showed that the MS data (spectral and textural features) had strong correlations with the LAI and LCC, and the CNN models yielded accurate estimates (LAI: R2 = 0.61–0.79, RMSE = 0.02–0.38; LCC: R2 = 0.63–0.78, RMSE = 2.24–0.39 μg/cm2). Thermal data reflected maize growth but had limitations in estimating the LAI and LCC. Combining MS and TIR data significantly improved the estimation accuracy, increasing the R2 values for the LAI and LCC by up to 23.06% and 19.01%, respectively. Nitrogen dilution curves based on the estimated LAI effectively diagnosed crop N status. Deficit irrigation reduced N uptake and intensified N deficiency, while proper water and N management enhanced the LAI and LCC.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
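As a generic illustration of the MS + TIR fusion-by-feature-concatenation idea (not the paper's PLS/CNN/RF pipeline), the sketch below trains a random forest regressor on a synthetic table of spectral, texture, and canopy-temperature features to estimate LAI.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical feature table: spectral indices, texture metrics, and canopy temperature
# per plot, paired with field-measured LAI. Replace with real UAV-derived features.
rng = np.random.default_rng(0)
X_ms = rng.normal(size=(120, 8))      # multispectral bands / vegetation indices
X_tex = rng.normal(size=(120, 4))     # texture features
X_tir = rng.normal(size=(120, 1))     # canopy temperature from TIR
X = np.hstack([X_ms, X_tex, X_tir])   # MS + TIR fusion by feature concatenation
y = rng.uniform(0.5, 5.0, size=120)   # measured LAI (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f}")
```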

21 pages, 5375 KiB  
Article
PII-GCNet: Lightweight Multi-Modal CNN Network for Efficient Crowd Counting and Localization in UAV RGB-T Images
by Zuodong Niu, Huilong Pi, Donglin Jing and Dazheng Liu
Electronics 2024, 13(21), 4298; https://doi.org/10.3390/electronics13214298 - 31 Oct 2024
Viewed by 1155
Abstract
With the increasing need for real-time crowd evaluation in military surveillance, public safety, and event crowd management, crowd counting using unmanned aerial vehicle (UAV)-captured images has emerged as an essential research topic. While conventional RGB-based methods have achieved significant success, their performance is severely hampered in low-light environments due to poor visibility. Integrating thermal infrared (TIR) images can address this issue, but existing RGB-T crowd counting networks, which employ multi-stream architectures, tend to introduce computational redundancy and excessive parameters, rendering them impractical for UAV applications constrained by limited onboard resources. To overcome these challenges, this research introduces an innovative, compact RGB-T framework designed to minimize redundant feature processing and improve multi-modal representation. The proposed approach introduces a Partial Information Interaction Convolution (PIIConv) module to selectively minimize redundant feature computations and a Global Collaborative Fusion (GCFusion) module to improve multi-modal feature representation through spatial attention mechanisms. Empirical findings indicate that the introduced network attains competitive results on the DroneRGBT dataset while significantly reducing floating-point operations (FLOPs) and improving inference speed across various computing platforms. The significance of this study lies in providing a computationally efficient framework for RGB-T crowd counting that balances accuracy and resource efficiency, making it ideal for real-time UAV deployment.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network)
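Purely as an illustration of the underlying idea of skipping redundant feature computation, the following PyTorch sketch convolves only a fraction of the channels and passes the rest through unchanged. It is not the authors' PIIConv implementation, which is more elaborate.

```python
import torch
import torch.nn as nn

class PartialChannelConv(nn.Module):
    """Convolve only a fraction of the input channels; pass the rest through.

    A generic sketch of cutting redundant feature computation, not the paper's
    PIIConv module.
    """
    def __init__(self, channels, ratio=0.25, kernel_size=3):
        super().__init__()
        self.n_conv = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.n_conv, self.n_conv, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x_conv, x_pass = torch.split(x, [self.n_conv, x.size(1) - self.n_conv], dim=1)
        return torch.cat([self.conv(x_conv), x_pass], dim=1)

# Example: a fused RGB-T feature map with 64 channels.
feat = torch.randn(2, 64, 32, 32)
print(PartialChannelConv(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```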

24 pages, 20197 KiB  
Article
Thermal Infrared Orthophoto Geometry Correction Using RGB Orthophoto for Unmanned Aerial Vehicle
by Kirim Lee and Wonhee Lee
Aerospace 2024, 11(10), 817; https://doi.org/10.3390/aerospace11100817 - 6 Oct 2024
Cited by 1 | Viewed by 1486
Abstract
The geometric correction of thermal infrared (TIR) orthophotos generated by unmanned aerial vehicles (UAVs) presents significant challenges due to low resolution and the difficulty of identifying ground control points (GCPs). This study addresses the limitations of real-time kinematic (RTK) UAV data acquisition, such as network instability and the inability to detect GCPs in TIR images, by proposing a method that uses RGB orthophotos as a reference for geometric correction. The accelerated-KAZE (AKAZE) method was applied to extract feature points between the RGB and TIR orthophotos, integrating binary descriptors and absolute coordinate-based matching techniques. The geometric correction results demonstrated a significant improvement in regions with both stable and changing environmental conditions: invariant regions exhibited an accuracy of 0.7~2 px (0.01~0.04 m), while areas with temporal and spatial changes were corrected to within 5~7 px (0.10~0.14 m). This method reduces reliance on GCP measurements and provides an effective supplementary technique for cases where GCP detection is limited or unavailable. Additionally, the approach enhances time and economic efficiency, offering a reliable alternative for precise orthophoto generation across various sensor data.
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)
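A minimal OpenCV sketch of the AKAZE-based matching step described in the abstract is shown below: binary descriptors are matched with Hamming distance and a homography warps the TIR orthophoto into the RGB orthophoto geometry. File names are hypothetical, and the paper's absolute coordinate-based matching refinement is not reproduced.

```python
import cv2
import numpy as np

# Hypothetical file names; in practice these are the RGB and TIR orthophotos.
rgb = cv2.imread("rgb_ortho.png", cv2.IMREAD_GRAYSCALE)
tir = cv2.imread("tir_ortho.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp_rgb, des_rgb = akaze.detectAndCompute(rgb, None)
kp_tir, des_tir = akaze.detectAndCompute(tir, None)

# AKAZE produces binary descriptors, so match with Hamming distance + ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_tir, des_rgb, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp_tir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_rgb[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the TIR orthophoto into the RGB orthophoto geometry.
tir_corrected = cv2.warpPerspective(tir, H, (rgb.shape[1], rgb.shape[0]))
cv2.imwrite("tir_ortho_corrected.png", tir_corrected)
```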

24 pages, 5084 KiB  
Article
Comparative Analysis of TLS and UAV Sensors for Estimation of Grapevine Geometric Parameters
by Leilson Ferreira, Joaquim J. Sousa, José M. Lourenço, Emanuel Peres, Raul Morais and Luís Pádua
Sensors 2024, 24(16), 5183; https://doi.org/10.3390/s24165183 - 11 Aug 2024
Cited by 3 | Viewed by 2082
Abstract
Understanding geometric and biophysical characteristics is essential for determining grapevine vigor and improving input management and automation in viticulture. This study compares point cloud data obtained from a Terrestrial Laser Scanner (TLS) and various UAV sensors, including multispectral, panchromatic, thermal infrared (TIR), RGB, and LiDAR data, to estimate geometric parameters of grapevines. Descriptive statistics, linear correlations, significance testing using the F-test of overall significance, and box plots were used for the analysis. The results indicate that 3D point clouds from these sensors can accurately estimate maximum grapevine height, projected area, and volume, though with varying degrees of accuracy. The TLS data showed the highest correlation with grapevine height (r = 0.95, p < 0.001; R2 = 0.90; RMSE = 0.027 m), while point cloud data from the panchromatic, RGB, and multispectral sensors also performed well, closely matching TLS and measured values (r > 0.83, p < 0.001; R2 > 0.70; RMSE < 0.084 m). In contrast, TIR point cloud data performed poorly in estimating grapevine height (r = 0.76, p < 0.001; R2 = 0.58; RMSE = 0.147 m) and projected area (r = 0.82, p < 0.001; R2 = 0.66; RMSE = 0.165 m). The greater variability observed in projected area and volume from the UAV sensors is related to the low point density associated with spatial resolution. These findings are valuable for both researchers and winegrowers, as they support the optimization of TLS and UAV sensors for precision viticulture, providing a basis for further research and helping farmers select appropriate technologies for crop monitoring.
(This article belongs to the Section Smart Agriculture)
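As a simplified, convex-hull-based stand-in for the geometric parameters compared in the paper, the sketch below derives height, projected area, and volume from a synthetic canopy point cloud; the authors' actual estimators may differ.

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_geometry(points):
    """Simple geometric descriptors from a grapevine point cloud (N x 3, metres).

    Height is the vertical extent, projected area the 2D convex-hull area of the
    XY footprint, and volume the 3D convex-hull volume. A simplified stand-in
    for the estimators compared in the paper.
    """
    height = points[:, 2].max() - points[:, 2].min()
    area = ConvexHull(points[:, :2]).volume    # for 2D hulls, .volume is the area
    volume = ConvexHull(points).volume
    return height, area, volume

# Synthetic point cloud standing in for a TLS/UAV-derived canopy segment.
rng = np.random.default_rng(1)
cloud = rng.uniform([0, 0, 0], [1.2, 0.6, 1.8], size=(5000, 3))
h, a, v = canopy_geometry(cloud)
print(f"height = {h:.2f} m, projected area = {a:.2f} m2, volume = {v:.2f} m3")
```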

19 pages, 5751 KiB  
Article
Combining UAV-Based Multispectral and Thermal Infrared Data with Regression Modeling and SHAP Analysis for Predicting Stomatal Conductance in Almond Orchards
by Nathalie Guimarães, Joaquim J. Sousa, Pedro Couto, Albino Bento and Luís Pádua
Remote Sens. 2024, 16(13), 2467; https://doi.org/10.3390/rs16132467 - 5 Jul 2024
Cited by 6 | Viewed by 2025
Abstract
Understanding and accurately predicting stomatal conductance in almond orchards is critical for effective water-management strategies, especially under challenging climatic conditions. In this study, machine-learning (ML) regression models trained on multispectral (MSP) and thermal infrared (TIR) data acquired from unmanned aerial vehicles (UAVs) are used to address this challenge. Through an analysis of spectral indices calculated from UAV-based data and feature-selection methods, this study investigates the predictive performance of three ML models (extra trees, ET; stochastic gradient descent, SGD; and extreme gradient boosting, XGBoost) in predicting stomatal conductance. The results show that the XGBoost model trained with both MSP and TIR data had the best performance (R2 = 0.87) and highlight the importance of integrating surface-temperature information alongside other spectral indices, improving prediction accuracy by up to 11% compared to using MSP data alone. Key features, such as the green–red vegetation index, chlorophyll red-edge index, and the ratio between canopy temperature and air temperature (Tc-Ta), prove to be relevant for model performance and highlight their importance for assessing water stress dynamics. Furthermore, the implementation of Shapley additive explanations (SHAP) values facilitates the interpretation of model decisions and provides valuable insights into the contributions of individual features. This study contributes to the advancement of precision agriculture by providing a novel approach for stomatal conductance prediction in almond orchards, supporting efforts towards sustainable water management in changing environmental conditions.
(This article belongs to the Special Issue Remote Sensing for Crop Nutrients and Related Traits)
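A minimal sketch of the XGBoost-plus-SHAP workflow the abstract describes is given below, using synthetic features in place of the UAV-derived indices; feature names such as Tc_minus_Ta are illustrative assumptions.

```python
import numpy as np
import shap
from xgboost import XGBRegressor

# Hypothetical predictor matrix: spectral indices plus a canopy-air temperature
# term, paired with measured stomatal conductance; replace with real features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
feature_names = ["GRVI", "CIrededge", "NDVI", "NDRE", "Tc_minus_Ta", "canopy_temp"]
y = 0.4 * X[:, 4] - 0.3 * X[:, 0] + rng.normal(0, 0.1, 200)  # synthetic response

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X, y)

# SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```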

18 pages, 3298 KiB  
Article
Wheat Yield Prediction Using Machine Learning Method Based on UAV Remote Sensing Data
by Shurong Yang, Lei Li, Shuaipeng Fei, Mengjiao Yang, Zhiqiang Tao, Yaxiong Meng and Yonggui Xiao
Drones 2024, 8(7), 284; https://doi.org/10.3390/drones8070284 - 24 Jun 2024
Cited by 24 | Viewed by 3855
Abstract
Accurate forecasting of crop yields holds paramount importance in guiding decision-making processes related to breeding efforts. Despite significant advancements in crop yield forecasting, existing methods often struggle with integrating diverse sensor data and achieving high prediction accuracy under varying environmental conditions. This study focused on the application of multi-sensor data fusion and machine learning algorithms based on unmanned aerial vehicles (UAVs) to wheat yield prediction. Five machine learning (ML) algorithms, namely random forest (RF), partial least squares (PLS), ridge regression (RR), k-nearest neighbor (KNN), and extreme gradient boosting decision tree (XGBoost), were utilized for multi-sensor data fusion, together with three ensemble methods comprising two second-level ensemble methods (stacking and feature-weighted) and a third-level ensemble method (simple average), for wheat yield prediction. A total of 270 wheat hybrids were used as planting materials under full and limited irrigation treatments. A cost-effective multi-sensor UAV platform, equipped with red–green–blue (RGB), multispectral (MS), and thermal infrared (TIR) sensors, was utilized to gather remote sensing data. The results revealed that the XGBoost algorithm exhibited outstanding performance in multi-sensor data fusion, with the RGB + MS + Texture + TIR combination demonstrating the highest fusion performance (R2 = 0.660, RMSE = 0.754). Compared with the single ML models, the employment of the three ensemble methods significantly enhanced the accuracy of wheat yield prediction. Notably, the third-level simple average ensemble method demonstrated superior performance (R2 = 0.733, RMSE = 0.668 t ha−1), significantly outperforming both the second-level ensemble methods of stacking (R2 = 0.668, RMSE = 0.673 t ha−1) and feature-weighting (R2 = 0.667, RMSE = 0.674 t ha−1). This finding highlights the ability of the third-level simple average ensemble to enhance predictive capability and refine the accuracy of wheat yield prediction, offering a novel perspective for crop yield prediction and breeding selection.
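The third-level "simple average" ensemble is straightforward to sketch: average the test-set predictions of the base regressors. The snippet below uses four scikit-learn base models on synthetic data (XGBoost omitted for brevity) and is an illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for fused RGB + MS + texture + TIR features and plot yields.
X, y = make_regression(n_samples=270, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    PLSRegression(n_components=5),
    Ridge(alpha=1.0),
    KNeighborsRegressor(n_neighbors=5),
]
predictions = []
for model in base_models:
    model.fit(X_tr, y_tr)
    predictions.append(np.asarray(model.predict(X_te)).ravel())

# Third-level "simple average" ensemble: mean of the base-model predictions.
ensemble_pred = np.mean(predictions, axis=0)
print(f"R2 = {r2_score(y_te, ensemble_pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, ensemble_pred) ** 0.5:.3f}")
```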

21 pages, 22127 KiB  
Article
A Cross-View Geo-Localization Algorithm Using UAV Image and Satellite Image
by Jiqi Fan, Enhui Zheng, Yufei He and Jianxing Yang
Sensors 2024, 24(12), 3719; https://doi.org/10.3390/s24123719 - 7 Jun 2024
Cited by 5 | Viewed by 3536
Abstract
Within research on the cross-view geolocation of UAVs, differences in image sources and interference from similar scenes pose major challenges. Inspired by multimodal machine learning, in this paper we design a single-stream pyramid transformer network (SSPT). The backbone of the model uses the self-attention mechanism to enrich its own internal features in the early stage and uses the cross-attention mechanism in the later stage to refine and interact with different features to eliminate irrelevant interference. In addition, in the post-processing part of the model, a header module is designed for upsampling to generate heat maps, and a Gaussian weight window is designed to assign label weights so that the model converges better. Together, these methods improve the positioning accuracy of UAV images within satellite images. Finally, we also use style transfer technology to simulate various environmental changes in order to expand the experimental data, further proving the environmental adaptability and robustness of the method. The final experimental results show that our method yields significant performance improvements: the relative distance score (RDS) of the SSPT-384 model on the benchmark UL14 dataset is improved from 76.25% to 84.40%, while the meter-level accuracy (MA) at 3 m, 5 m, and 20 m is increased by 12%, 12%, and 10%, respectively. For the SSPT-256 model, the RDS is increased to 82.21%, and the MA at 3 m, 5 m, and 20 m is increased by 5%, 5%, and 7%, respectively. The method still shows strong robustness on the extended thermal infrared (TIR), nighttime, and rainy-day datasets.
(This article belongs to the Section Navigation and Positioning)
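The paper's relative distance score is not reproduced here, but the meter-level accuracy (MA) metric it reports alongside can be sketched as the fraction of localizations within a distance threshold, as below with synthetic coordinates.

```python
import numpy as np

def meter_level_accuracy(pred_xy, true_xy, thresholds=(3.0, 5.0, 20.0)):
    """Fraction of localizations whose error is within each distance threshold (metres).

    pred_xy, true_xy : (N, 2) arrays of predicted and true ground coordinates.
    A simple metric sketch; the paper's relative distance score (RDS) is not shown.
    """
    errors = np.linalg.norm(pred_xy - true_xy, axis=1)
    return {f"MA@{int(t)}m": float((errors <= t).mean()) for t in thresholds}

# Illustrative example with synthetic localization errors.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=(500, 2))
preds = truth + rng.normal(0, 4.0, size=(500, 2))
print(meter_level_accuracy(preds, truth))
```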

13 pages, 5867 KiB  
Article
Determining Riverine Surface Roughness at Fluvial Mesohabitat Level and Its Influence on UAV-Based Thermal Imaging Accuracy
by Johannes Kuhn, Joachim Pander, Luis Habersetzer, Roser Casas-Mulet and Juergen Geist
Remote Sens. 2024, 16(10), 1674; https://doi.org/10.3390/rs16101674 - 9 May 2024
Viewed by 1989
Abstract
Water surface roughness (SR) is a highly relevant parameter governing data reliability in remote sensing applications, yet appropriate methodology for quantifying it in riverine habitats is lacking. In order to assess the thermal accuracy of thermal imaging derived from an unmanned aerial vehicle (UAV) in relation to SR, we developed the SR Measurement Device (SRMD). The SRMD is based on the in situ quantification of wave frequency and wave amplitude. Data from nine SRMDs installed in four different fluvial mesohabitat classes showed a range of 0 to 47 waves per 30 s and an amplitude range of 0 to 6 cm. Even subtle differences between the mesohabitat classes run, riffle, and no-/low-flow still and pool areas could be detected with the SRMD. However, SR revealed no significant influence on the accuracy of thermal infrared (TIR) imagery data in our study case. Overall, the presented device expands existing methods of riverine habitat assessment and has the potential to produce highly relevant SR data for various ecological and technical applications, ranging from remote sensing of surface water and habitat quality characterization to bank stability and erosion risk assessment.
(This article belongs to the Special Issue Remote Sensing and GIS in Freshwater Environments)
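In the spirit of the SRMD's wave-frequency and wave-amplitude quantification, the sketch below counts wave crests and estimates amplitude from a synthetic 30 s water-level record; the exact definitions used by the device may differ.

```python
import numpy as np
from scipy.signal import find_peaks

def surface_roughness_stats(level_cm, fs_hz, window_s=30.0, min_prominence_cm=0.2):
    """Wave count per window and wave amplitude from a water-level time series.

    level_cm : water-surface elevation samples (cm), e.g. from an SRMD-style probe
    fs_hz    : sampling rate. Amplitude is taken as half the crest-to-trough range,
               which is one simple convention; the SRMD's definition may differ.
    """
    n = int(window_s * fs_hz)
    window = level_cm[:n] - np.mean(level_cm[:n])
    peaks, _ = find_peaks(window, prominence=min_prominence_cm)
    amplitude = (window.max() - window.min()) / 2.0
    return len(peaks), amplitude

# Synthetic 30 s record at 10 Hz mimicking a riffle-like surface; illustrative only.
t = np.arange(0, 30, 0.1)
signal = 1.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
waves, amp = surface_roughness_stats(signal, fs_hz=10.0)
print(f"{waves} waves per 30 s, amplitude ≈ {amp:.1f} cm")
```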

20 pages, 10565 KiB  
Article
Detection of Leak Areas in Vineyard Irrigation Systems Using UAV-Based Data
by Luís Pádua, Pedro Marques, Lia-Tânia Dinis, José Moutinho-Pereira, Joaquim J. Sousa, Raul Morais and Emanuel Peres
Drones 2024, 8(5), 187; https://doi.org/10.3390/drones8050187 - 8 May 2024
Cited by 3 | Viewed by 3397
Abstract
Water is essential for maintaining plant health and optimal growth in agriculture. While some crops depend on irrigation, others can rely on rainfed water, depending on regional climatic conditions. This is exemplified by grapevines, which have specific water requirements and for which irrigation systems are needed. However, these systems can be susceptible to damage or leaks, which are not always easy to detect and require meticulous, time-consuming inspection. This study presents a methodology for identifying potential damage or leaks in vineyard irrigation systems using RGB and thermal infrared (TIR) imagery acquired by unmanned aerial vehicles (UAVs). The RGB imagery was used to distinguish between grapevine and non-grapevine pixels, enabling the division of the TIR data into three raster products: temperature from grapevines, from non-grapevine areas, and from the entire evaluated vineyard plot. By analyzing the mean temperature values of equally spaced row sections, different threshold values were calculated to estimate and map potential leaks. These thresholds included the lower-quintile value, the mean temperature minus one standard deviation (Tmean − σ), and the mean temperature minus two standard deviations (Tmean − 2σ). The lower-quintile threshold showed the best performance in identifying known leak areas and highlighting the closest rows needing inspection in the field. This approach presents a promising solution for inspecting vineyard irrigation systems. By using UAVs, larger areas can be covered on demand, improving the efficiency and scope of the inspection process. This not only reduces water wastage in viticulture and eases grapevine water stress but also optimizes viticulture practices.
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
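The three thresholds named in the abstract are easy to compute from the row-section mean temperatures; the sketch below does so with NumPy and flags sections colder than each threshold as potential leak areas (assuming leaks show up as cooler, wetter ground). Values are illustrative.

```python
import numpy as np

def leak_thresholds(row_section_temps):
    """Candidate leak thresholds from mean temperatures of equally spaced row sections.

    Returns the three thresholds described in the abstract: the lower-quintile value,
    mean - 1*std, and mean - 2*std. Sections colder than a threshold are flagged as
    potential leak areas.
    """
    temps = np.asarray(row_section_temps, dtype=float)
    mean, std = temps.mean(), temps.std()
    return {
        "lower_quintile": np.quantile(temps, 0.20),
        "mean_minus_sigma": mean - std,
        "mean_minus_2sigma": mean - 2.0 * std,
    }

# Illustrative section means (degC); two unusually cool sections mimic a leak.
sections = np.array([34.1, 33.8, 34.5, 29.7, 34.0, 33.6, 30.2, 34.3])
thresholds = leak_thresholds(sections)
flagged = {name: np.where(sections <= thr)[0].tolist() for name, thr in thresholds.items()}
print(thresholds)
print(flagged)
```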
