Search Results (208)

Search Parameters:
Keywords = MS (multispectral)

32 pages, 6589 KiB  
Article
Machine Learning (AutoML)-Driven Wheat Yield Prediction for European Varieties: Enhanced Accuracy Using Multispectral UAV Data
by Krstan Kešelj, Zoran Stamenković, Marko Kostić, Vladimir Aćin, Dragana Tekić, Tihomir Novaković, Mladen Ivanišević, Aleksandar Ivezić and Nenad Magazin
Agriculture 2025, 15(14), 1534; https://doi.org/10.3390/agriculture15141534 - 16 Jul 2025
Viewed by 520
Abstract
Accurate and timely wheat yield prediction is valuable globally for enhancing agricultural planning, optimizing resource use, and supporting trade strategies. This study addresses the need for precision in yield estimation by applying machine-learning (ML) regression models to high-resolution Unmanned Aerial Vehicle (UAV) multispectral (MS) and Red-Green-Blue (RGB) imagery. The research analyzes five European wheat cultivars across 400 experimental plots created by combining 20 nitrogen, phosphorus, and potassium (NPK) fertilizer treatments. Yield variations from 1.41 to 6.42 t/ha strengthen model robustness with diverse data. The ML approach is automated using PyCaret, which optimized and evaluated 25 regression models based on 65 vegetation indices and yield data, resulting in 66 feature variables across 400 observations. The dataset, split into training (70%) and testing (30%) sets, was used to predict yields at three growth stages: 9 May, 20 May, and 6 June 2022. Key models achieved high accuracy, with the Support Vector Regression (SVR) model reaching R2 = 0.95 on 9 May and R2 = 0.91 on 6 June, and the Multi-Layer Perceptron (MLP) Regressor attaining R2 = 0.94 on 20 May. The findings underscore the effectiveness of precisely measured MS indices and a rigorous experimental approach in achieving high-accuracy yield predictions. This study demonstrates how a precise experimental setup, large-scale field data, and AutoML can harness the potential of UAVs and machine learning to enhance wheat yield predictions. The main limitations of this study lie in its focus on experimental fields under specific conditions; future research could explore adaptability to diverse environments and wheat varieties for broader applicability.
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
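
To make the AutoML step concrete, here is a minimal sketch of a PyCaret regression workflow of the kind the abstract describes (70/30 split, model library ranked by R2). The file and column names are illustrative placeholders, not the authors' data.

```python
import pandas as pd
from pycaret.regression import setup, compare_models, predict_model

# Hypothetical table: one row per experimental plot, feature columns
# (vegetation indices) plus the measured yield.
df = pd.read_csv("plot_features.csv")

# PyCaret handles the 70/30 train/test split internally, as in the study.
exp = setup(data=df, target="yield_t_ha", train_size=0.7, session_id=42)

# Train and rank PyCaret's regressor library (includes SVR and MLP) by R2.
best = compare_models(sort="R2")

# Hold-out predictions for the top-ranked model.
holdout = predict_model(best)
print(holdout.head())
```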

21 pages, 4147 KiB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Viewed by 489
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems.
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
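
As a rough illustration of the multimodal design the abstract outlines, the sketch below fuses an EfficientNetV2 image branch with a small MLP over environmental sensor readings. torchvision's efficientnet_v2_s stands in for the paper's EfficientNetV2-B4 backbone, and all layer sizes and the three-sensor input are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

class MultimodalDiseaseNet(nn.Module):
    def __init__(self, num_classes: int, num_sensors: int = 3):
        super().__init__()
        backbone = efficientnet_v2_s(weights=None)
        self.image_encoder = backbone.features            # conv trunk only
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.sensor_encoder = nn.Sequential(              # temp, humidity, soil moisture
            nn.Linear(num_sensors, 64), nn.SiLU(),        # SiLU == Swish
            nn.Linear(64, 64), nn.SiLU(),
        )
        self.head = nn.Linear(1280 + 64, num_classes)     # 1280 = v2_s feature width

    def forward(self, image: torch.Tensor, sensors: torch.Tensor) -> torch.Tensor:
        f_img = self.pool(self.image_encoder(image)).flatten(1)
        f_env = self.sensor_encoder(sensors)
        return self.head(torch.cat([f_img, f_env], dim=1))

model = MultimodalDiseaseNet(num_classes=10)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3))
print(logits.shape)  # torch.Size([2, 10])
```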

23 pages, 3492 KiB  
Article
A Multimodal Deep Learning Framework for Accurate Biomass and Carbon Sequestration Estimation from UAV Imagery
by Furkat Safarov, Ugiloy Khojamuratova, Misirov Komoliddin, Xusinov Ibragim Ismailovich and Young Im Cho
Drones 2025, 9(7), 496; https://doi.org/10.3390/drones9070496 - 14 Jul 2025
Viewed by 344
Abstract
Accurate quantification of above-ground biomass (AGB) and carbon sequestration is vital for monitoring terrestrial ecosystem dynamics, informing climate policy, and supporting carbon neutrality initiatives. However, conventional methods—ranging from manual field surveys to remote sensing techniques based solely on 2D vegetation indices—often fail to capture the intricate spectral and structural heterogeneity of forest canopies, particularly at fine spatial resolutions. To address these limitations, we introduce ForestIQNet, a novel end-to-end multimodal deep learning framework designed to estimate AGB and associated carbon stocks from UAV-acquired imagery with high spatial fidelity. ForestIQNet combines dual-stream encoders for processing multispectral UAV imagery and a voxelized Canopy Height Model (CHM), fused via a Cross-Attentional Feature Fusion (CAFF) module, enabling fine-grained interaction between spectral reflectance and 3D structure. A lightweight Transformer-based regression head then performs multitask prediction of AGB and CO2e, capturing long-range spatial dependencies and enhancing generalization. The proposed method achieves an R2 of 0.93 and RMSE of 6.1 kg for AGB prediction, compared to 0.78 R2 and 11.7 kg RMSE for XGBoost and 0.73 R2 and 13.2 kg RMSE for Random Forest. Despite its architectural complexity, ForestIQNet maintains a low inference cost (27 ms per patch) and generalizes well across species, terrain, and canopy structures. These results establish a new benchmark for UAV-enabled biomass estimation and provide scalable, interpretable tools for climate monitoring and forest management.
(This article belongs to the Special Issue UAVs for Nature Conservation Tasks in Complex Environments)
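
The CAFF idea, spectral tokens attending to 3D-structure tokens, can be sketched with standard cross-attention; the dimensions and token layout below are illustrative assumptions, not the published module.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ms_tokens: torch.Tensor, chm_tokens: torch.Tensor) -> torch.Tensor:
        # Query: spectral features; key/value: canopy-height (3D structure) features.
        fused, _ = self.attn(query=ms_tokens, key=chm_tokens, value=chm_tokens)
        return self.norm(ms_tokens + fused)  # residual connection

fusion = CrossAttentionFusion()
ms = torch.randn(2, 256, 128)    # (batch, tokens, dim) from the MS encoder
chm = torch.randn(2, 256, 128)   # tokens from the voxelized CHM encoder
print(fusion(ms, chm).shape)     # torch.Size([2, 256, 128])
```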

25 pages, 16927 KiB  
Article
Improving Individual Tree Crown Detection and Species Classification in a Complex Mixed Conifer–Broadleaf Forest Using Two Machine Learning Models with Different Combinations of Metrics Derived from UAV Imagery
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Geomatics 2025, 5(3), 32; https://doi.org/10.3390/geomatics5030032 - 13 Jul 2025
Viewed by 667
Abstract
Individual tree crown detection (ITCD) and tree species classification are critical for forest inventory, species-specific monitoring, and ecological studies. However, accurately detecting tree crowns and identifying species in structurally complex forests with overlapping canopies remains challenging. This study was conducted in a complex mixed conifer–broadleaf forest in northern Japan, aiming to improve ITCD and species classification by employing two machine learning models and different combinations of metrics derived from very high-resolution (2.5 cm) UAV red–green–blue (RGB) and multispectral (MS) imagery. We first enhanced ITCD by integrating different combinations of metrics into multiresolution segmentation (MRS) and DeepForest (DF) models. ITCD accuracy was evaluated across dominant forest types and tree density classes. Next, nine tree species were classified using the ITCD outputs from both MRS and DF approaches, applying Random Forest and DF models, respectively. Incorporating structural, textural, and spectral metrics improved MRS-based ITCD, achieving F-scores of 0.44–0.58. The DF model, which used only structural and spectral metrics, achieved higher F-scores of 0.62–0.79. For species classification, the Random Forest model achieved a Kappa value of 0.81, while the DF model attained a higher Kappa value of 0.91. These findings demonstrate the effectiveness of integrating UAV-derived metrics and advanced modeling approaches for accurate ITCD and species classification in heterogeneous forest environments. The proposed methodology offers a scalable and cost-efficient solution for detailed forest monitoring and species-level assessment.
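
For readers wanting to try the DF side, a brief sketch of crown detection with the open-source DeepForest package follows. The file name and tile parameters are placeholders, and the calls (use_release, predict_tile) follow deepforest 1.x documentation and may differ in newer releases.

```python
from deepforest import main

model = main.deepforest()
model.use_release()  # load the pretrained tree-crown release weights

# Tile-wise prediction keeps memory bounded on large UAV orthomosaics.
boxes = model.predict_tile("uav_orthomosaic.tif", patch_size=400, patch_overlap=0.25)
print(boxes[["xmin", "ymin", "xmax", "ymax", "score"]].head())
```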

26 pages, 7645 KiB  
Article
Prediction of Rice Chlorophyll Index (CHI) Using Nighttime Multi-Source Spectral Data
by Cong Liu, Lin Wang, Xuetong Fu, Junzhe Zhang, Ran Wang, Xiaofeng Wang, Nan Chai, Longfeng Guan, Qingshan Chen and Zhongchen Zhang
Agriculture 2025, 15(13), 1425; https://doi.org/10.3390/agriculture15131425 - 1 Jul 2025
Viewed by 460
Abstract
The chlorophyll index (CHI) is a crucial indicator for assessing the photosynthetic capacity and nutritional status of crops. However, traditional methods for measuring CHI, such as chemical extraction and handheld instruments, fall short in meeting the requirements for efficient, non-destructive, and continuous monitoring at the canopy level. This study aimed to explore the feasibility of predicting rice canopy CHI using nighttime multi-source spectral data combined with machine learning models. In this study, ground truth CHI values were obtained using a SPAD-502 chlorophyll meter. Canopy spectral data were acquired under nighttime conditions using a high-throughput phenotyping platform (HTTP) equipped with active light sources in a greenhouse environment. Three types of sensors—multispectral (MS), visible light (RGB), and chlorophyll fluorescence (ChlF)—were employed to collect data across different growth stages of rice, ranging from tillering to maturity. PCA and LASSO regression were applied for dimensionality reduction and feature selection of multi-source spectral variables. Subsequently, CHI prediction models were developed using four machine learning algorithms: support vector regression (SVR), random forest (RF), back-propagation neural network (BPNN), and k-nearest neighbors (KNNs). The predictive performance of individual sensors (MS, RGB, and ChlF) and sensor fusion strategies was evaluated across multiple growth stages. The results demonstrated that sensor fusion models consistently outperformed single-sensor approaches. Notably, during tillering (TI), maturity (MT), and the full growth period (GP), fused models achieved high accuracy (R2 > 0.90, RMSE < 2.0). The fusion strategy also showed substantial advantages over single-sensor models during the jointing–heading (JH) and grain-filling (GF) stages. Among the individual sensor types, MS data achieved relatively high accuracy at certain stages, while models based on RGB and ChlF features exhibited weaker performance and lower prediction stability. Overall, the highest prediction accuracy was achieved during the full growth period (GP) using fused spectral data, with an R2 of 0.96 and an RMSE of 1.99. This study provides a valuable reference for developing CHI prediction models based on nighttime multi-source spectral data.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
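
The fusion-then-selection pipeline the abstract describes can be sketched compactly: concatenate per-plant features from the MS, RGB, and ChlF sensors, select features with LASSO, and fit an SVR. The shapes, alpha value, and toy CHI signal below are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_ms = rng.normal(size=(120, 20))     # multispectral features
X_rgb = rng.normal(size=(120, 10))    # visible-light features
X_chlf = rng.normal(size=(120, 8))    # chlorophyll-fluorescence features
y = 40 + 2.0 * X_ms[:, 0] + 1.5 * X_ms[:, 1] + rng.normal(0, 0.5, 120)  # toy CHI

X = np.hstack([X_ms, X_rgb, X_chlf])  # sensor-fusion feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),  # LASSO-based feature selection
    SVR(kernel="rbf", C=10.0),
)
pipe.fit(X_tr, y_tr)
print("held-out R2:", round(pipe.score(X_te, y_te), 3))
```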

20 pages, 2848 KiB  
Article
A Dual-Branch Network for Intra-Class Diversity Extraction in Panchromatic and Multispectral Classification
by Zihan Huang, Pengyu Tian, Hao Zhu, Pute Guo and Xiaotong Li
Remote Sens. 2025, 17(12), 1998; https://doi.org/10.3390/rs17121998 - 10 Jun 2025
Viewed by 362
Abstract
With the rapid development of remote sensing technology, satellites can now capture multispectral (MS) and panchromatic (PAN) images simultaneously. MS images offer rich spectral details, while PAN images provide high spatial resolutions. Effectively leveraging their complementary strengths and addressing modality gaps are key challenges in improving the classification performance. From the perspective of deep learning, this paper proposes a novel dual-source remote sensing classification framework named the Diversity Extraction and Fusion Classifier (DEFC-Net). A central innovation of our method lies in introducing a modality-specific intra-class diversity modeling mechanism for the first time in dual-source classification. Specifically, the intra-class diversity identification and splitting (IDIS) module independently analyzes the intra-class variance within each modality to identify semantically broad classes, and it applies an optimized K-means method to split such classes into fine-grained sub-classes. In particular, due to the inherent representation differences between the MS and PAN modalities, the same class may be split differently in each modality, allowing modality-aware class refinement that better captures fine-grained discriminative features in dual perspectives. To handle the class imbalance introduced by both natural long-tailed distributions and class splitting, we design a long-tailed ensemble learning module (LELM) based on a multi-expert structure to reduce bias toward head classes. Furthermore, a dual-modal knowledge distillation (DKD) module is developed to align cross-modal feature spaces and reconcile the label inconsistency arising from modality-specific class splitting, thereby facilitating effective information fusion across modalities. Extensive experiments on datasets show that our method significantly improves the classification performance. The code was accessed on 11 April 2025.
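
The IDIS idea, splitting high-variance ("semantically broad") classes into sub-classes with K-means, can be sketched as follows; the variance threshold and k are illustrative, and the paper's optimized K-means criterion is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_broad_classes(features, labels, var_threshold=1.0, k=2):
    """Relabel high-variance ("semantically broad") classes into k sub-classes."""
    new_labels, next_id = labels.copy(), labels.max() + 1
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if features[idx].var(axis=0).mean() > var_threshold:
            sub = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features[idx])
            for s in range(1, k):                      # keep sub-class 0 as label c
                new_labels[idx[sub == s]] = next_id
                next_id += 1
    return new_labels

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.3, (50, 16)), rng.normal(0, 2.0, (50, 16))])
labs = np.array([0] * 50 + [1] * 50)
print(np.unique(split_broad_classes(feats, labs)))   # class 1 is split; class 0 is not
```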

25 pages, 17332 KiB  
Article
Aerial Remote Sensing and Urban Planning Study of Ancient Hippodamian System
by Dimitris Kaimaris and Despina Kalyva
Urban Sci. 2025, 9(6), 183; https://doi.org/10.3390/urbansci9060183 - 22 May 2025
Viewed by 529
Abstract
In ancient Olynthus (Greece), an Unmanned Aircraft System (UAS) was utilized to collect both RGB and multispectral (MS) images of the archaeological site. Ground Control Points (GCPs) were used to solve the blocks of images and to produce Digital Surface Models (DSMs) and orthophotomosaics. Check Points (CPs) were employed to verify the spatial accuracy of the products. The innovative image fusion process carried out in this paper, which combined the RGB and MS orthophotomosaics from the UAS sensors, led to the creation of a fused image with the best possible spatial resolution (five times better than that of the MS orthophotomosaic). This improvement facilitates the optimal visual and digital (e.g., classification) analysis of the archaeological site. Utilizing the fused image and reviewing the literature, the paper compiles and briefly presents information on the Hippodamian system of the excavated part of the ancient city of Olynthus (regularity, main and secondary streets, organization of building blocks, public and private buildings, types and sizes of dwellings, and internal organization of buildings), as well as information on its socio-economic organization (different social groups based on the characteristics of the buildings, commercial markets, etc.).
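
The abstract does not specify the fusion algorithm; as a hedged illustration, a common Brovey-style ratio fusion between an upsampled MS orthophotomosaic and an RGB-derived intensity image looks like this, with synthetic arrays standing in for co-registered products.

```python
import numpy as np

def brovey_fuse(ms_up: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """ms_up: (H, W, bands) MS resampled to the RGB grid; intensity: (H, W)."""
    ratio = intensity / (ms_up.mean(axis=2) + 1e-6)   # per-pixel sharpening ratio
    return ms_up * ratio[..., None]

rng = np.random.default_rng(2)
ms_up = rng.uniform(0.1, 0.9, (200, 200, 5))          # 5-band MS, upsampled 5x
rgb = rng.uniform(0.1, 0.9, (200, 200, 3))
fused = brovey_fuse(ms_up, rgb.mean(axis=2))          # RGB intensity as the "pan"
print(fused.shape)                                    # (200, 200, 5)
```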

8 pages, 3697 KiB  
Proceeding Paper
Pansharpening Remote Sensing Images Using Generative Adversarial Networks
by Bo-Hsien Chung, Jui-Hsiang Jung, Yih-Shyh Chiou, Mu-Jan Shih and Fuan Tsai
Eng. Proc. 2025, 92(1), 32; https://doi.org/10.3390/engproc2025092032 - 28 Apr 2025
Viewed by 307
Abstract
Pansharpening is a remote sensing image fusion technique that combines a high-resolution (HR) panchromatic (PAN) image with a low-resolution (LR) multispectral (MS) image to produce an HR MS image. The primary challenge in pansharpening lies in preserving the spatial details of the PAN image while maintaining the spectral integrity of the MS image. To address this, this article presents a generative adversarial network (GAN)-based approach to pansharpening. The GAN discriminator helped match the generated image's intensity to that of the HR PAN image while preserving the spectral characteristics of the LR MS image. Image-generation performance was evaluated using the peak signal-to-noise ratio (PSNR). For the experiment, original LR MS and HR PAN satellite images were partitioned into smaller patches, and the GAN model was validated using an 80:20 training-to-testing data ratio. The results illustrated that the super-resolution images generated by the SRGAN model achieved a PSNR of 31 dB. These results demonstrated the developed model's ability to reconstruct the geometric, textural, and spectral information from the images.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
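
Fusion quality here is scored with PSNR (about 31 dB reported). Below is a self-contained PSNR helper over synthetic patches, assuming 8-bit imagery.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit data."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64, 4)).astype(np.float64)   # HR MS patch
est = ref + rng.normal(0, 7, ref.shape)                      # pansharpened estimate
print(f"PSNR: {psnr(ref, est):.1f} dB")                      # ~31 dB at this noise level
```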

18 pages, 3766 KiB  
Article
Self-Supervised Multiscale Contrastive and Attention-Guided Gradient Projection Network for Pansharpening
by Qingping Li, Xiaomin Yang, Bingru Li and Jin Wang
Sensors 2025, 25(8), 2560; https://doi.org/10.3390/s25082560 - 18 Apr 2025
Cited by 2 | Viewed by 593
Abstract
Pansharpening techniques are crucial in remote sensing image processing, with deep learning emerging as the mainstream solution. In this paper, the pansharpening problem is formulated as two optimization subproblems with a solution proposed based on multiscale contrastive learning combined with attention-guided gradient projection networks. First, an efficient and generalized Spectral–Spatial Universal Module (SSUM) is designed and applied to spectral and spatial enhancement modules (SpeEB and SpaEB). Then, the multiscale high-frequency features of PAN and MS images are extracted using discrete wavelet transform (DWT). These features are combined with contrastive learning and residual connection to progressively balance spectral and spatial information. Finally, high-resolution multispectral images are generated through multiple iterations. Experimental results verify that the proposed method outperforms existing approaches in both visual quality and quantitative evaluation metrics.
(This article belongs to the Section Sensor Networks)
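
The DWT step, extracting multiscale high-frequency detail, can be sketched with PyWavelets; the Haar wavelet and two-level decomposition below are illustrative choices, not necessarily the paper's.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
pan = rng.uniform(0, 1, (256, 256))             # stand-in PAN patch

coeffs = pan
for level in range(2):                          # two scales of detail
    approx, (ch, cv, cd) = pywt.dwt2(coeffs, "haar")
    high_freq = np.stack([ch, cv, cd])          # horizontal/vertical/diagonal detail
    print(f"level {level + 1}: detail bands {high_freq.shape}")
    coeffs = approx                             # recurse on the approximation band
```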

25 pages, 9142 KiB  
Article
Restricted Label-Based Self-Supervised Learning Using SAR and Multispectral Imagery for Local Climate Zone Classification
by Amjad Nawaz, Wei Yang, Hongcheng Zeng, Yamin Wang and Jie Chen
Remote Sens. 2025, 17(8), 1335; https://doi.org/10.3390/rs17081335 - 8 Apr 2025
Viewed by 627
Abstract
Deep learning techniques have garnered significant attention in remote sensing scene classification. However, obtaining a large volume of labeled data for supervised learning (SL) remains challenging. Additionally, SL methods frequently struggle with limited generalization ability. To address these limitations, self-supervised multi-mode representation learning (SSMMRL) is introduced for local climate zone classification (LCZC). Unlike conventional supervised learning methods, SSMMRL utilizes a novel encoder architecture that exclusively processes augmented positive samples (PSs), eliminating the need for negative samples. An attention-guided fusion mechanism is integrated, using positive samples as a form of regularization. The novel encoder captures informative representations from the unannotated So2Sat-LCZ42 dataset, which are then leveraged to enhance performance in a challenging few-shot classification task with limited labeled samples. Co-registered Synthetic Aperture Radar (SAR) and Multispectral (MS) images are used for evaluation and training. This approach enables the model to exploit extensive unlabeled data, enhancing performance on downstream tasks. Experimental evaluations on the So2Sat-LCZ42 benchmark dataset show the efficacy of the SSMMRL method. Our method for LCZC outperforms state-of-the-art (SOTA) approaches.
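
A negative-free objective of the kind the abstract describes (positive pairs only) resembles BYOL/SimSiam-style losses. Below is a toy cosine-similarity version with a stand-in encoder; real methods add a predictor head and stop-gradient to prevent representation collapse.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder over stacked SAR + MS patches (2 channels, 32x32).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 32 * 32, 128))

def positive_pair_loss(view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
    za, zb = encoder(view_a), encoder(view_b)
    # Pull the two views' embeddings together; no negative samples involved.
    return 1.0 - F.cosine_similarity(za, zb, dim=1).mean()

x = torch.randn(8, 2, 32, 32)             # co-registered SAR + MS patches
view_a = x + 0.1 * torch.randn_like(x)    # augmentation 1: additive noise
view_b = torch.flip(x, dims=[3])          # augmentation 2: horizontal flip
print(positive_pair_loss(view_a, view_b).item())
```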

25 pages, 10869 KiB  
Article
Pansharpening Applications in Ecological and Environmental Monitoring Using an Attention Mechanism-Based Dual-Stream Cross-Modality Fusion Network
by Bingru Li, Qingping Li, Haoran Yang and Xiaomin Yang
Appl. Sci. 2025, 15(8), 4095; https://doi.org/10.3390/app15084095 - 8 Apr 2025
Viewed by 502
Abstract
Pansharpening is a critical technique in remote sensing, particularly in ecological and environmental monitoring, where it is used to integrate panchromatic (PAN) and multispectral (MS) images. This technique plays a vital role in assessing environmental changes, monitoring biodiversity, and supporting conservation efforts. While many current pansharpening methods primarily rely on PAN images, they often overlook the distinct characteristics of MS images and the cross-modal relationships between them. To address this limitation, the paper presents a Dual-Stream Cross-modality Fusion Network (DCMFN), designed to offer reliable data support for environmental impact assessment, ecological monitoring, and material optimization in nanotechnology. The proposed network utilizes an attention mechanism to extract features from both PAN and MS images individually. Additionally, a Cross-Modality Feature Fusion Module (CMFFM) is introduced to capture the complex interrelationships between PAN and MS images, enhancing the reconstruction quality of pansharpened images. This method not only boosts the spatial resolution but also maintains the richness of multispectral information. Through extensive experiments, the DCMFN demonstrates superior performance over existing methods on three remote sensing datasets, excelling in both objective evaluation metrics and visual quality.
(This article belongs to the Special Issue Applications of Big Data and Artificial Intelligence in Geoscience)
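
As a loose sketch of attention-weighted dual-stream fusion in the CMFFM spirit, squeeze-and-excitation channel attention over concatenated PAN and MS features might look like this; the channel counts are illustrative and the published module is richer.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, pan_ch: int = 32, ms_ch: int = 32, reduction: int = 8):
        super().__init__()
        ch = pan_ch + ms_ch
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )
        self.mix = nn.Conv2d(ch, ms_ch, kernel_size=3, padding=1)

    def forward(self, f_pan: torch.Tensor, f_ms: torch.Tensor) -> torch.Tensor:
        f = torch.cat([f_pan, f_ms], dim=1)
        w = self.gate(f)[:, :, None, None]   # per-channel weights in [0, 1]
        return self.mix(f * w)               # reweighted cross-modal mix

fuse = ChannelAttentionFusion()
out = fuse(torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64))
print(out.shape)                             # torch.Size([2, 32, 64, 64])
```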

24 pages, 19515 KiB  
Article
Extensive Feature-Inferring Deep Network for Hyperspectral and Multispectral Image Fusion
by Abdolraheem Khader, Jingxiang Yang, Sara Abdelwahab Ghorashi, Ali Ahmed, Zeinab Dehghan and Liang Xiao
Remote Sens. 2025, 17(7), 1308; https://doi.org/10.3390/rs17071308 - 5 Apr 2025
Viewed by 626
Abstract
Hyperspectral (HS) and multispectral (MS) image fusion is the most favorable way to obtain a hyperspectral image that has high resolution in terms of spatial and spectral information. This fusion problem can be accomplished by formulating a mathematical model and solving it either analytically or iteratively. This class of mathematical solutions faces serious challenges, e.g., computational cost, manual parameter tuning, and the absence of accurate imaging models, all of which hamper the fusion process. With the revolution of deep learning, recent HS-MS image fusion techniques have gained good outcomes by utilizing the power of the convolutional neural network (CNN) for feature extraction. Moreover, extracting intrinsic information, e.g., non-local spatial and global spectral features, is the most critical issue faced by deep learning methods. Therefore, this paper proposes an Extensive Feature-Inferring Deep Network (EFINet) with extensive-scale feature-interacting and global correlation refinement modules to improve the effectiveness of HS-MS image fusion. The proposed network retains the most vital information through the extensive-scale feature-interacting module in various feature scales. Moreover, the global semantic information is achieved by utilizing the global correlation refinement module. The proposed network is validated through rich experiments conducted on two popular datasets, the Houston and Chikusei datasets, and it attains good performance compared to the state-of-the-art HS-MS image fusion techniques.

18 pages, 8005 KiB  
Article
Durum Wheat (Triticum durum Desf.) Grain Yield and Protein Estimation by Multispectral UAV Monitoring and Machine Learning Under Mediterranean Conditions
by Giuseppe Badagliacca, Gaetano Messina, Emilio Lo Presti, Giovanni Preiti, Salvatore Di Fazio, Michele Monti, Giuseppe Modica and Salvatore Praticò
AgriEngineering 2025, 7(4), 99; https://doi.org/10.3390/agriengineering7040099 - 1 Apr 2025
Viewed by 974
Abstract
Durum wheat (Triticum durum Desf.), among the herbaceous crops, is one of the most extensively grown in the Mediterranean area due to its fundamental role in supporting typical food productions like bread, pasta, and couscous. Among the environmental and technical aspects, nitrogen (N) fertilization is crucial in shaping the development of plants and kernels, also affecting their protein concentration. Today, new techniques for monitoring fields using uncrewed aerial vehicles (UAVs) can detect crop multispectral (MS) responses, while advanced machine learning (ML) models can enable accurate predictions. However, to date, there is still little research related to the prediction of the N nutritional status and its effects on the productivity of durum wheat grown in the Mediterranean environment through the application of these techniques. The present research aimed to monitor the MS responses of two different wheat varieties, one ancient (Timilia) and one modern (Ciclope), grown under three different N fertilization regimens (0, 60, and 120 kg N ha−1), and to estimate their quantitative and qualitative production (i.e., grain yield and protein concentration) through Pearson's correlations and five different ML approaches. The results showed the difficulty of obtaining good predictive results with Pearson's correlation both when the two varieties' data were merged and for the Timilia variety. In contrast, for Ciclope, several vegetation indices (VIs) (i.e., CVI, GNDRE, and SRRE) performed well (r-value > 0.7) in estimating both productive parameters. The implementation of ML approaches, particularly random forest (RF) regression, neural network (NN), and support vector machine (SVM), overcame the limitations of correlation in estimating the grain yield (R2 > 0.6, RMSE = 0.56 t ha−1, MAE = 0.43 t ha−1) and protein (R2 > 0.7, RMSE = 1.2%, MAE = 0.47%) in Timilia, whereas for Ciclope, the RF approach outperformed the other predictive methods (R2 = 0.79, RMSE = 0.56 t ha−1, MAE = 0.44 t ha−1).
(This article belongs to the Section Sensors Technology and Precision Agriculture)
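
The VI-to-yield correlation step generalizes readily: compute vegetation indices from MS band reflectances and check Pearson's r against yield. The NDVI and CVI formulas below are standard; the band arrays and yields are synthetic stand-ins (GNDRE and SRRE are omitted since their exact band definitions vary).

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
green = rng.uniform(0.05, 0.5, 60)
red = rng.uniform(0.05, 0.5, 60)
nir = rng.uniform(0.2, 0.8, 60)
yield_t_ha = 2.0 + 6.0 * nir + rng.normal(0, 0.3, 60)   # toy yield signal

ndvi = (nir - red) / (nir + red)
cvi = nir * red / green**2                               # Chlorophyll Vegetation Index

for name, vi in [("NDVI", ndvi), ("CVI", cvi)]:
    r, p = pearsonr(vi, yield_t_ha)
    print(f"{name}: r = {r:.2f} (p = {p:.3g})")
```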

26 pages, 48126 KiB  
Article
Multi-Source Attention U-Net: A Novel Deep Learning Framework for the Land Use and Soil Salinization Classification of Keriya Oasis in China with RADARSAT-2 and Landsat-8 Data
by Yang Xiang, Ilyas Nurmemet, Xiaobo Lv, Xinru Yu, Aoxiang Gu, Aihepa Aihaiti and Shiqin Li
Land 2025, 14(3), 649; https://doi.org/10.3390/land14030649 - 19 Mar 2025
Cited by 2 | Viewed by 878
Abstract
Soil salinization significantly impacts global agricultural productivity, contributing to desertification and land degradation; thus, rapid regional monitoring of soil salinization is crucial for agricultural production and sustainable management. With advancements in artificial intelligence, the efficiency and precision of deep learning classification models applied to remote sensing imagery have been demonstrated. Given the limited feature learning capability of traditional machine learning, this study introduces an innovative deep fusion U-Net model called MSA-U-Net (Multi-Source Attention U-Net) incorporating a Convolutional Block Attention Module (CBAM) within the skip connections to improve feature extraction and fusion. A salinized soil classification dataset was developed by combining spectral indices obtained from Landsat-8 Operational Land Imager (OLI) data and polarimetric scattering features extracted from RADARSAT-2 data using polarization target decomposition. To select optimal features, the Boruta algorithm was employed to rank features, selecting the top eight features to construct a multispectral (MS) dataset, a synthetic aperture radar (SAR) dataset, and an MS + SAR dataset. Furthermore, Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbor (KNN), and deep learning methods including U-Net and MSA-U-Net were employed to identify the different degrees of salinized soil. The results indicated that the MS + SAR dataset outperformed the MS dataset, with the inclusion of the SAR band resulting in an Overall Accuracy (OA) increase of 1.94–7.77%. Moreover, the MS + SAR MSA-U-Net, in comparison to traditional machine learning methods and the baseline model, improved the OA and Kappa coefficient by 8.24% to 12.55% and 0.08 to 0.15, respectively. The results demonstrate that the MSA-U-Net outperformed traditional models, indicating the potential of integrating multi-source data with deep learning techniques for monitoring soil salinity.
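
Boruta-based feature ranking, as used here before assembling the MS and SAR datasets, can be sketched with the boruta Python package (BorutaPy); the synthetic matrix stands in for spectral indices and polarimetric features, and the package's exact behavior varies across versions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 12))                       # 12 candidate features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)

rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
boruta = BorutaPy(rf, n_estimators="auto", random_state=1)
boruta.fit(X, y)                                     # iterative shadow-feature test

print("confirmed features:", np.where(boruta.support_)[0])
print("ranking:", boruta.ranking_)
```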

23 pages, 26510 KiB  
Article
Improving the Individual Tree Parameters Estimation of a Complex Mixed Conifer—Broadleaf Forest Using a Combination of Structural, Textural, and Spectral Metrics Derived from Unmanned Aerial Vehicle RGB and Multispectral Imagery
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Geomatics 2025, 5(1), 12; https://doi.org/10.3390/geomatics5010012 - 10 Mar 2025
Cited by 1 | Viewed by 2029
Abstract
Individual tree parameters are essential for forestry decision-making, supporting economic valuation, harvesting, and silvicultural operations. While extensive research exists on uniform and simply structured forests, studies addressing complex, dense, and mixed forests with highly overlapping, clustered, and multiple tree crowns remain limited. This study bridges this gap by combining structural, textural, and spectral metrics derived from unmanned aerial vehicle (UAV) Red–Green–Blue (RGB) and multispectral (MS) imagery to estimate individual tree parameters using a random forest regression model in a complex mixed conifer–broadleaf forest. Data from 255 individual trees (115 conifers, 67 Japanese oak, and 73 other broadleaf species (OBL)) were analyzed. High-resolution UAV orthomosaics enabled effective tree crown delineation and canopy height models. Combining structural, textural, and spectral metrics improved the accuracy of tree height, diameter at breast height, stem volume, basal area, and carbon stock estimates. Conifers showed high accuracy (R2 = 0.70–0.89) for all individual parameters, with a high estimate of tree height (R2 = 0.89, RMSE = 0.85 m). The accuracy for oak (R2 = 0.11–0.49) and OBL (R2 = 0.38–0.57) was lower but improved by combining metrics, with OBL species achieving relatively high accuracy for basal area (R2 = 0.57, RMSE = 0.08 m2 tree−1) and volume (R2 = 0.51, RMSE = 0.27 m3 tree−1). These findings highlight the potential of UAV metrics in accurately estimating individual tree parameters in a complex mixed conifer–broadleaf forest.
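
One way to obtain the "textural metrics" ingredient is GLCM statistics per delineated crown, e.g., with scikit-image, feeding the random forest alongside structural and spectral metrics; the crown patch and the distance/angle choices below are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(7)
crown_patch = rng.integers(0, 64, (40, 40)).astype(np.uint8)  # quantized band values

# Gray-level co-occurrence matrix at distance 1, two directions.
glcm = graycomatrix(crown_patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # per-crown texture features for the regression model
```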
