Search Results (549)

Search Parameters:
Keywords = multispectral fusion

19 pages, 5340 KiB  
Article
Potential of Multi-Source Multispectral vs. Hyperspectral Remote Sensing for Winter Wheat Nitrogen Monitoring
by Xiaokai Chen, Yuxin Miao, Krzysztof Kusnierek, Fenling Li, Chao Wang, Botai Shi, Fei Wu, Qingrui Chang and Kang Yu
Remote Sens. 2025, 17(15), 2666; https://doi.org/10.3390/rs17152666 - 1 Aug 2025
Abstract
Timely and accurate monitoring of crop nitrogen (N) status is essential for precision agriculture. UAV-based hyperspectral remote sensing offers high-resolution data for estimating plant nitrogen concentration (PNC), but its cost and complexity limit large-scale application. This study compares the performance of UAV hyperspectral data (S185 sensor) with simulated multispectral data from DJI Phantom 4 Multispectral (P4M), PlanetScope (PS), and Sentinel-2A (S2) in estimating winter wheat PNC. Spectral data were collected across six growth stages over two seasons and resampled to match the spectral characteristics of the three multispectral sensors. Three variable selection strategies (one-dimensional (1D) spectral reflectance, optimized two-dimensional (2D), and three-dimensional (3D) spectral indices) were combined with Random Forest Regression (RFR), Support Vector Machine Regression (SVMR), and Partial Least Squares Regression (PLSR) to build PNC prediction models. Results showed that, while hyperspectral data yielded slightly higher accuracy, optimized multispectral indices, particularly from PS and S2, achieved comparable performance. Among models, SVMR and RFR showed consistent effectiveness across strategies. These findings highlight the potential of low-cost multispectral platforms for practical crop N monitoring. Future work should validate these models using real satellite imagery and explore multi-source data fusion with advanced learning algorithms. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
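An "optimized 2D spectral index" of the kind this abstract describes is typically found by an exhaustive search over band pairs for the normalized-difference combination most correlated with the target variable. A minimal numpy sketch of that search, under that assumption (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def best_2d_index(reflectance, target):
    """Exhaustive search over band pairs (i, j) for the normalized-difference
    index (R_i - R_j) / (R_i + R_j) with the highest absolute Pearson
    correlation to the target (e.g., PNC).
    reflectance: (n_samples, n_bands); target: (n_samples,)."""
    n_bands = reflectance.shape[1]
    best_r, best_pair = 0.0, None
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            ndsi = (reflectance[:, i] - reflectance[:, j]) / (
                reflectance[:, i] + reflectance[:, j] + 1e-9)
            r = abs(np.corrcoef(ndsi, target)[0, 1])
            if r > best_r:
                best_r, best_pair = r, (i, j)
    return best_r, best_pair
```

With reflectance resampled to a handful of multispectral bands, the returned pair defines the optimized 2D index fed to the regression models; the 3D case extends the same search to band triplets.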
20 pages, 2108 KiB  
Review
Underwater Polarized Light Navigation: Current Progress, Key Challenges, and Future Perspectives
by Mingzhi Chen, Yuan Liu, Daqi Zhu, Wen Pang and Jianmin Zhu
Robotics 2025, 14(8), 104; https://doi.org/10.3390/robotics14080104 - 29 Jul 2025
Abstract
Underwater navigation remains constrained by technological limitations, driving the exploration of alternative approaches such as polarized light-based systems. This review systematically examines advances in polarized navigation from three perspectives. First, the principles of atmospheric polarization navigation are analyzed, with their operational mechanisms, advantages, and inherent constraints dissected. Second, innovations in bionic polarization multi-sensor fusion positioning are consolidated, highlighting progress beyond conventional heading-direction extraction. Third, emerging underwater polarization navigation techniques are critically evaluated, revealing that current methods predominantly adapt atmospheric frameworks enhanced by advanced filtering to mitigate underwater interference. A comprehensive synthesis of underwater polarization modeling methodologies is provided, categorizing physical, data-driven, and hybrid approaches. Through rigorous analysis of studies, three persistent barriers are identified: (1) inadequate polarization pattern modeling under dynamic cross-media conditions; (2) insufficient robustness against turbidity-induced noise; (3) immature integration of polarization vision with sonar/IMU (Inertial Measurement Unit) sensing. Targeted research directions are proposed, including adaptive deep learning models, multi-spectral polarization sensing, and bio-inspired sensor fusion architectures. These insights establish a roadmap for developing reliable underwater navigation systems that transcend current technological boundaries. Full article
(This article belongs to the Section Sensors and Control in Robotics)
27 pages, 2978 KiB  
Article
Dynamic Monitoring and Precision Fertilization Decision System for Agricultural Soil Nutrients Using UAV Remote Sensing and GIS
by Xiaolong Chen, Hongfeng Zhang and Cora Un In Wong
Agriculture 2025, 15(15), 1627; https://doi.org/10.3390/agriculture15151627 - 27 Jul 2025
Abstract
We propose a dynamic monitoring and precision fertilization decision system for agricultural soil nutrients, integrating UAV remote sensing and GIS technologies to address the limitations of traditional soil nutrient assessment methods. The proposed method combines multi-source data fusion, including hyperspectral and multispectral UAV imagery with ground sensor data, to achieve high-resolution spatial and spectral analysis of soil nutrients. Real-time data processing algorithms enable rapid updates of soil nutrient status, while a time-series dynamic model captures seasonal variations and crop growth stage influences, improving prediction accuracy (RMSE reductions of 43–70% for nitrogen, phosphorus, and potassium compared to conventional laboratory-based methods and satellite NDVI approaches). The experimental validation compared the proposed system against two conventional approaches: (1) laboratory soil testing with standardized fertilization recommendations and (2) satellite NDVI-based fertilization. Field trials across three distinct agroecological zones demonstrated that the proposed system reduced fertilizer inputs by 18–27% while increasing crop yields by 4–11%, outperforming both conventional methods. Furthermore, an intelligent fertilization decision model generates tailored fertilization plans by analyzing real-time soil conditions, crop demands, and climate factors, with continuous learning enhancing its precision over time. The system also incorporates GIS-based visualization tools, providing intuitive spatial representations of nutrient distributions and interactive functionalities for detailed insights. Our approach significantly advances precision agriculture by automating the entire workflow from data collection to decision-making, reducing resource waste and optimizing crop yields. The integration of UAV remote sensing, dynamic modeling, and machine learning distinguishes this work from conventional static systems, offering a scalable and adaptive framework for sustainable farming practices. Full article
(This article belongs to the Section Agricultural Soils)
19 pages, 5166 KiB  
Article
Estimating Wheat Chlorophyll Content Using a Multi-Source Deep Feature Neural Network
by Jun Li, Yali Sheng, Weiqiang Wang, Jikai Liu and Xinwei Li
Agriculture 2025, 15(15), 1624; https://doi.org/10.3390/agriculture15151624 - 26 Jul 2025
Abstract
Chlorophyll plays a vital role in wheat growth and fertilization management. Accurate and efficient estimation of chlorophyll content is crucial for providing a scientific foundation for precision agricultural management. Unmanned aerial vehicles (UAVs), characterized by high flexibility, spatial resolution, and operational efficiency, have emerged as effective tools for estimating chlorophyll content in wheat. Although multi-source data derived from UAV-based multispectral imagery have shown potential for wheat chlorophyll estimation, the importance of multi-source deep feature fusion has not been adequately addressed. Therefore, this study aims to estimate wheat chlorophyll content by integrating spectral and textural features extracted from UAV multispectral imagery, in conjunction with partial least squares regression (PLSR), random forest regression (RFR), deep neural network (DNN), and a novel multi-source deep feature neural network (MDFNN) proposed in this research. The results demonstrate the following: (1) Except for the RFR model, models based on texture features exhibit superior accuracy compared to those based on spectral features. Furthermore, the estimation accuracy achieved by fusing spectral and texture features is significantly greater than that obtained using a single type of data. (2) The MDFNN proposed in this study outperformed other models in chlorophyll content estimation, with an R2 of 0.850, an RMSE of 5.602, and an RRMSE of 15.76%. Compared to the second-best model, the DNN (R2 = 0.799, RMSE = 6.479, RRMSE = 18.23%), the MDFNN achieved a 6.4% increase in R2, and 13.5% reductions in both RMSE and RRMSE. (3) The MDFNN exhibited strong robustness and adaptability across varying years, wheat varieties, and nitrogen application levels. The findings of this study offer important insights into UAV-based remote sensing applications for estimating wheat chlorophyll under field conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
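The accuracy figures quoted above (R2, RMSE, RRMSE) are standard agreement metrics; RRMSE simply expresses RMSE as a percentage of the mean observed value, which is how a 13.5% reduction can hold for both at once. A small numpy sketch of the three metrics (an assumption about their exact definitions, which the abstract does not spell out):

```python
import numpy as np

def regression_report(y_true, y_pred):
    """Compute R2, RMSE, and relative RMSE (RRMSE = 100 * RMSE / mean(y_true)),
    the three agreement metrics quoted in the abstract."""
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - float(np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    rrmse = 100.0 * rmse / float(y_true.mean())
    return r2, rmse, rrmse
```

Plugging in the reported MDFNN and DNN values reproduces the stated relative gains (0.850/0.799 ≈ +6.4% in R2; (6.479 − 5.602)/6.479 ≈ 13.5% in RMSE).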
18 pages, 2644 KiB  
Article
Multispectral and Chlorophyll Fluorescence Imaging Fusion Using 2D-CNN and Transfer Learning for Cross-Cultivar Early Detection of Verticillium Wilt in Eggplants
by Dongfang Zhang, Shuangxia Luo, Jun Zhang, Mingxuan Li, Xiaofei Fan, Xueping Chen and Shuxing Shen
Agronomy 2025, 15(8), 1799; https://doi.org/10.3390/agronomy15081799 - 25 Jul 2025
Abstract
Verticillium wilt is characterized by chlorosis in leaves and is a devastating disease in eggplant. Early diagnosis, prior to the manifestation of symptoms, enables targeted management of the disease. In this study, we aim to detect early leaf wilt in eggplant leaves caused by Verticillium dahliae by integrating multispectral imaging with machine learning and deep learning techniques. Multispectral and chlorophyll fluorescence images were collected from leaves of the inbred eggplant line 11-435, including data on image texture, spectral reflectance, and chlorophyll fluorescence. Subsequently, we established a multispectral data model, fusion information model, and multispectral image–information fusion model. The multispectral image–information fusion model, integrated with a two-dimensional convolutional neural network (2D-CNN), demonstrated optimal performance in classifying early-stage Verticillium wilt infection, achieving a test accuracy of 99.37%. Additionally, transfer learning enabled us to diagnose early leaf wilt in another eggplant variety, the inbred line 14-345, with an accuracy of 84.54 ± 1.82%. Compared to traditional methods that rely on visible symptom observation and typically require about 10 days to confirm infection, this study achieved early detection of Verticillium wilt as soon as the third day post-inoculation. These findings underscore the potential of the fusion model as a valuable tool for the early detection of pre-symptomatic states in infected plants, thereby offering theoretical support for in-field detection of eggplant health. Full article
28 pages, 7545 KiB  
Article
Estimation of Rice Leaf Nitrogen Content Using UAV-Based Spectral–Texture Fusion Indices (STFIs) and Two-Stage Feature Selection
by Xiaopeng Zhang, Yating Hu, Xiaofeng Li, Ping Wang, Sike Guo, Lu Wang, Cuiyu Zhang and Xue Ge
Remote Sens. 2025, 17(14), 2499; https://doi.org/10.3390/rs17142499 - 18 Jul 2025
Abstract
Accurate estimation of rice leaf nitrogen content (LNC) is essential for optimizing nitrogen management in precision agriculture. However, challenges such as spectral saturation and canopy structural variations across different growth stages complicate this task. This study proposes a robust framework for LNC estimation that integrates both spectral and texture features extracted from UAV-based multispectral imagery through the development of novel Spectral–Texture Fusion Indices (STFIs). Field data were collected under nitrogen gradient treatments across three critical growth stages: heading, early filling, and late filling. A total of 18 vegetation indices (VIs), 40 texture features (TFs), and 27 STFIs were derived from UAV images. To optimize the feature set, a two-stage feature selection strategy was employed, combining Pearson correlation analysis with model-specific embedded selection methods: Recursive Feature Elimination with Cross-Validation (RFECV) for Random Forest (RF) and Extreme Gradient Boosting (XGBoost), and Sequential Forward Selection (SFS) for Support Vector Regression (SVR) and Deep Neural Networks (DNNs). The models—RFECV-RF, RFECV-XGBoost, SFS-SVR, and SFS-DNN—were evaluated using four feature configurations. The SFS-DNN model with STFIs achieved the highest prediction accuracy (R2 = 0.874, RMSE = 2.621 mg/g). SHAP analysis revealed the significant contribution of STFIs to model predictions, underscoring the effectiveness of integrating spectral and texture information. The proposed STFI-based framework demonstrates strong generalization across phenological stages and offers a scalable, interpretable approach for UAV-based nitrogen monitoring in rice production systems. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
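The first stage of the two-stage feature selection described above is a Pearson correlation pre-filter over the 85 candidate features (18 VIs, 40 TFs, 27 STFIs) before the embedded methods (RFECV, SFS) run. A minimal numpy sketch of that pre-filter, assuming a simple top-k ranking by absolute correlation (the paper's exact threshold rule may differ):

```python
import numpy as np

def pearson_prefilter(X, y, top_k=10):
    """Stage one of a two-stage selection: rank candidate features by
    absolute Pearson correlation with the target and keep the top_k
    column indices for the embedded second stage.
    X: (n_samples, n_features); y: (n_samples,)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:top_k]
```

The surviving columns would then be passed to scikit-learn's `RFECV` (for RF/XGBoost) or a sequential forward selector (for SVR/DNN), as the abstract outlines.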
21 pages, 4147 KiB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
23 pages, 3492 KiB  
Article
A Multimodal Deep Learning Framework for Accurate Biomass and Carbon Sequestration Estimation from UAV Imagery
by Furkat Safarov, Ugiloy Khojamuratova, Misirov Komoliddin, Xusinov Ibragim Ismailovich and Young Im Cho
Drones 2025, 9(7), 496; https://doi.org/10.3390/drones9070496 - 14 Jul 2025
Abstract
Accurate quantification of above-ground biomass (AGB) and carbon sequestration is vital for monitoring terrestrial ecosystem dynamics, informing climate policy, and supporting carbon neutrality initiatives. However, conventional methods—ranging from manual field surveys to remote sensing techniques based solely on 2D vegetation indices—often fail to capture the intricate spectral and structural heterogeneity of forest canopies, particularly at fine spatial resolutions. To address these limitations, we introduce ForestIQNet, a novel end-to-end multimodal deep learning framework designed to estimate AGB and associated carbon stocks from UAV-acquired imagery with high spatial fidelity. ForestIQNet combines dual-stream encoders for processing multispectral UAV imagery and a voxelized Canopy Height Model (CHM), fused via a Cross-Attentional Feature Fusion (CAFF) module, enabling fine-grained interaction between spectral reflectance and 3D structure. A lightweight Transformer-based regression head then performs multitask prediction of AGB and CO2e, capturing long-range spatial dependencies and enhancing generalization. The proposed method achieves an R2 of 0.93 and RMSE of 6.1 kg for AGB prediction, compared to 0.78 R2 and 11.7 kg RMSE for XGBoost and 0.73 R2 and 13.2 kg RMSE for Random Forest. Despite its architectural complexity, ForestIQNet maintains a low inference cost (27 ms per patch) and generalizes well across species, terrain, and canopy structures. These results establish a new benchmark for UAV-enabled biomass estimation and provide scalable, interpretable tools for climate monitoring and forest management. Full article
(This article belongs to the Special Issue UAVs for Nature Conservation Tasks in Complex Environments)
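Cross-attentional fusion of the kind the CAFF module performs lets tokens from one modality (spectral) attend over tokens from the other (voxelized CHM). A minimal single-head numpy sketch under that assumption, with the learned query/key/value projections omitted for brevity (so this is the attention mechanism in spirit, not the paper's module):

```python
import numpy as np

def cross_attention(spec_tokens, chm_tokens):
    """Single-head cross-attention: spectral tokens act as queries over
    canopy-height (CHM) tokens; returns one fused vector per spectral token.
    spec_tokens: (n, d); chm_tokens: (m, d); returns (n, d)."""
    d = spec_tokens.shape[1]
    scores = spec_tokens @ chm_tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ chm_tokens
```

Each output row is a convex combination of CHM tokens, which is what allows spectral reflectance and 3D structure to interact per location before the regression head.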
20 pages, 10558 KiB  
Article
Spatial–Spectral Feature Fusion and Spectral Reconstruction of Multispectral LiDAR Point Clouds by Attention Mechanism
by Guoqing Zhou, Haoxin Qi, Shuo Shi, Sifu Bi, Xingtao Tang and Wei Gong
Remote Sens. 2025, 17(14), 2411; https://doi.org/10.3390/rs17142411 - 12 Jul 2025
Abstract
High-quality multispectral LiDAR (MSL) data are crucial for land cover (LC) classification. However, the Titan MSL system encounters challenges of inconsistent spatial–spectral information due to its unique scanning and data saving method, restricting subsequent classification accuracy. Existing spectral reconstruction methods often require empirical parameter settings and involve high computational costs, limiting automation and complicating application. To address this problem, we introduce the dual attention spectral optimization reconstruction network (DossaNet), leveraging an attention mechanism and spatial–spectral information. DossaNet can adaptively adjust weight parameters, streamline the multispectral point cloud acquisition process, and integrate it into classification models end-to-end. The experimental results show the following: (1) DossaNet exhibits excellent generalizability, effectively recovering accurate LC spectra and improving classification accuracy. Metrics across the six classification models show some improvements. (2) Compared with the method lacking spectral reconstruction, DossaNet can improve the overall accuracy (OA) and average accuracy (AA) of PointNet++ and RandLA-Net by a maximum of 4.8%, 4.47%, 5.93%, and 2.32%. Compared with the inverse distance weighted (IDW) and k-nearest neighbor (KNN) approach, DossaNet can improve the OA and AA of PointNet++ and DGCNN by a maximum of 1.33%, 2.32%, 0.86%, and 2.08% (IDW) and 1.73%, 3.58%, 0.28%, and 2.93% (KNN). The findings further validate the effectiveness of our proposed method. This method provides a more efficient and simplified approach to enhancing the quality of multispectral point cloud data. Full article
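The inverse distance weighted (IDW) approach that DossaNet is benchmarked against assigns a point's missing spectral channel from nearby points that carry it, weighted by inverse distance. A minimal numpy sketch of that classical baseline (names and the power-2 default are illustrative assumptions):

```python
import numpy as np

def idw_channel(query_xyz, neighbor_xyz, neighbor_intensity, power=2.0):
    """Inverse-distance-weighted estimate of a missing spectral channel
    at one point, from neighboring points that carry that channel.
    query_xyz: (3,); neighbor_xyz: (k, 3); neighbor_intensity: (k,)."""
    d = np.linalg.norm(neighbor_xyz - query_xyz, axis=1)
    if d.min() < 1e-9:                      # coincident point: copy its value
        return float(neighbor_intensity[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * neighbor_intensity) / np.sum(w))
```

The `power` exponent is the empirical parameter such methods require; DossaNet's stated contribution is replacing this hand-tuned interpolation with adaptively learned weights.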
28 pages, 14588 KiB  
Article
CAU2DNet: A Dual-Branch Deep Learning Network and a Dataset for Slum Recognition with Multi-Source Remote Sensing Data
by Xi Lyu, Chenyu Zhang, Lizhi Miao, Xiying Sun, Xinxin Zhou, Xinyi Yue, Zhongchang Sun and Yueyong Pang
Remote Sens. 2025, 17(14), 2359; https://doi.org/10.3390/rs17142359 - 9 Jul 2025
Abstract
The efficient and precise identification of urban slums is a significant challenge for urban planning and sustainable development, as their morphological diversity and complex spatial distribution make it difficult to use traditional remote sensing inversion methods. Current deep learning (DL) methods mainly face challenges such as limited receptive fields and insufficient sensitivity to spatial locations when integrating multi-source remote sensing data, and high-quality datasets that integrate multi-spectral and geoscientific indicators to support them are scarce. In response to these issues, this study proposes a DL model (coordinate-attentive U2-DeepLab network [CAU2DNet]) that integrates multi-source remote sensing data. The model integrates the multi-scale feature extraction capability of U2-Net with the global receptive field advantage of DeepLabV3+ through a dual-branch architecture. Thereafter, the spatial semantic perception capability is enhanced by introducing the CoordAttention mechanism, and ConvNextV2 is adopted to optimize the backbone network of the DeepLabV3+ branch, thereby improving the modeling capability of low-resolution geoscientific features. The two branches adopt a decision-level fusion mechanism for feature fusion, which means that the results of each are weighted and summed using learnable weights to obtain the final output feature map. Furthermore, this study constructs the São Paulo slums dataset for model training due to the lack of a multi-spectral slum dataset. This dataset covers 7978 samples of 512 × 512 pixels, integrating high-resolution RGB images, Normalized Difference Vegetation Index (NDVI)/Modified Normalized Difference Water Index (MNDWI) geoscientific indicators, and POI infrastructure data, which can significantly enrich multi-source slum remote sensing data. Experiments have shown that CAU2DNet achieves an intersection over union (IoU) of 0.6372 and an F1 score of 77.97% on the São Paulo slums dataset, indicating a significant improvement in accuracy over the baseline model. The ablation experiments verify that the improvements made in this study have resulted in a 16.12% increase in precision. Moreover, CAU2DNet also achieved the best results in all metrics during the cross-domain testing on the WHU building dataset, further confirming the model’s generalizability. Full article
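The decision-level fusion the abstract describes, a weighted sum of the two branch outputs with learnable weights, is simple enough to sketch directly. A numpy version, assuming a softmax keeps the two scalars positive and summing to one (a common convention the paper may or may not use):

```python
import numpy as np

def decision_level_fusion(out_a, out_b, w_a, w_b):
    """Fuse two branch output maps by a learnable weighted sum.
    out_a, out_b: same-shape arrays (branch outputs);
    w_a, w_b: learnable scalars, normalized here via softmax."""
    e = np.exp(np.array([w_a, w_b]) - max(w_a, w_b))  # stable softmax
    alpha = e / e.sum()
    return alpha[0] * out_a + alpha[1] * out_b
```

During training, gradients flow through `alpha` to `w_a` and `w_b`, so the network itself learns how much to trust the U2-Net branch versus the DeepLabV3+ branch.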
39 pages, 18642 KiB  
Article
SDRFPT-Net: A Spectral Dual-Stream Recursive Fusion Network for Multispectral Object Detection
by Peida Zhou, Xiaoyong Sun, Bei Sun, Runze Guo, Zhaoyang Dang and Shaojing Su
Remote Sens. 2025, 17(13), 2312; https://doi.org/10.3390/rs17132312 - 5 Jul 2025
Abstract
Multispectral object detection faces challenges in effectively integrating complementary information from different modalities in complex environmental conditions. This paper proposes SDRFPT-Net (Spectral Dual-stream Recursive Fusion Perception Target Network), a novel architecture that integrates three key innovative modules: (1) the Spectral Hierarchical Perception Architecture (SHPA), which adopts a dual-stream separated structure with independently parameterized feature extraction paths for visible and infrared modalities; (2) the Spectral Recursive Fusion Module (SRFM), which combines hybrid attention mechanisms with recursive progressive fusion strategies to achieve deep feature interaction through parameter-sharing multi-round recursive processing; and (3) the Spectral Target Perception Enhancement Module (STPEM), which adaptively enhances target region representation and suppresses background interference. Extensive experiments on the VEDAI, FLIR-aligned, and LLVIP datasets demonstrate that SDRFPT-Net significantly outperforms state-of-the-art methods, with improvements of 2.5% in mAP50 and 5.4% in mAP50:95 on VEDAI, 11.5% in mAP50 on FLIR-aligned, and 9.5% in mAP50:95 on LLVIP. Ablation studies further validate the effectiveness of each proposed module. The proposed method provides an efficient and robust solution for multispectral object detection in remote sensing image interpretation, making it particularly suitable for all-weather monitoring applications from aerial and satellite platforms, as well as in intelligent surveillance and autonomous driving domains. Full article
26 pages, 7645 KiB  
Article
Prediction of Rice Chlorophyll Index (CHI) Using Nighttime Multi-Source Spectral Data
by Cong Liu, Lin Wang, Xuetong Fu, Junzhe Zhang, Ran Wang, Xiaofeng Wang, Nan Chai, Longfeng Guan, Qingshan Chen and Zhongchen Zhang
Agriculture 2025, 15(13), 1425; https://doi.org/10.3390/agriculture15131425 - 1 Jul 2025
Abstract
The chlorophyll index (CHI) is a crucial indicator for assessing the photosynthetic capacity and nutritional status of crops. However, traditional methods for measuring CHI, such as chemical extraction and handheld instruments, fall short in meeting the requirements for efficient, non-destructive, and continuous monitoring at the canopy level. This study aimed to explore the feasibility of predicting rice canopy CHI using nighttime multi-source spectral data combined with machine learning models. In this study, ground truth CHI values were obtained using a SPAD-502 chlorophyll meter. Canopy spectral data were acquired under nighttime conditions using a high-throughput phenotyping platform (HTTP) equipped with active light sources in a greenhouse environment. Three types of sensors—multispectral (MS), visible light (RGB), and chlorophyll fluorescence (ChlF)—were employed to collect data across different growth stages of rice, ranging from tillering to maturity. PCA and LASSO regression were applied for dimensionality reduction and feature selection of multi-source spectral variables. Subsequently, CHI prediction models were developed using four machine learning algorithms: support vector regression (SVR), random forest (RF), back-propagation neural network (BPNN), and k-nearest neighbors (KNNs). The predictive performance of individual sensors (MS, RGB, and ChlF) and sensor fusion strategies was evaluated across multiple growth stages. The results demonstrated that sensor fusion models consistently outperformed single-sensor approaches. Notably, during tillering (TI), maturity (MT), and the full growth period (GP), fused models achieved high accuracy (R2 > 0.90, RMSE < 2.0). The fusion strategy also showed substantial advantages over single-sensor models during the jointing–heading (JH) and grain-filling (GF) stages. Among the individual sensor types, MS data achieved relatively high accuracy at certain stages, while models based on RGB and ChlF features exhibited weaker performance and lower prediction stability. Overall, the highest prediction accuracy was achieved during the full growth period (GP) using fused spectral data, with an R2 of 0.96 and an RMSE of 1.99. This study provides a valuable reference for developing CHI prediction models based on nighttime multi-source spectral data. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
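Of the four model families this study evaluates, k-nearest-neighbors regression is the simplest to show concretely: a query's CHI is the mean of the CHI values of its k closest training samples in (possibly fused) feature space. A minimal numpy sketch under that standard definition (not the study's code):

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=3):
    """k-nearest-neighbors regression on spectral feature vectors:
    predict each query as the mean target of its k closest training rows
    (Euclidean distance)."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = np.argsort(d)[:k]
        preds.append(float(y_train[nearest].mean()))
    return np.array(preds)
```

Sensor fusion in this setting is just column-wise concatenation of the MS, RGB, and ChlF feature matrices before the distance computation, which is one plausible reading of why fused inputs help every model family.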
25 pages, 2723 KiB  
Article
A Human-Centric, Uncertainty-Aware Event-Fused AI Network for Robust Face Recognition in Adverse Conditions
by Akmalbek Abdusalomov, Sabina Umirzakova, Elbek Boymatov, Dilnoza Zaripova, Shukhrat Kamalov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Appl. Sci. 2025, 15(13), 7381; https://doi.org/10.3390/app15137381 - 30 Jun 2025
Cited by 1
Abstract
Face recognition systems often falter when deployed in uncontrolled settings, grappling with low light, unexpected occlusions, motion blur, and the degradation of sensor signals. Most contemporary algorithms chase raw accuracy yet overlook the pragmatic need for uncertainty estimation and multispectral reasoning rolled into a single framework. This study introduces HUE-Net (a Human-centric, Uncertainty-aware, Event-fused Network) designed specifically to thrive under severe environmental stress. HUE-Net marries the visible RGB band with near-infrared (NIR) imagery and high-temporal-resolution event data through an early-fusion pipeline that proved more responsive than serial approaches. A custom hybrid backbone that couples convolutional networks with transformers keeps the model nimble enough for edge devices. Central to the architecture is the perturbed multi-branch variational module, which distills probabilistic identity embeddings while delivering calibrated confidence scores. Complementing this, an Adaptive Spectral Attention mechanism dynamically reweights each stream to amplify the most reliable facial features in real time. Unlike previous efforts that compartmentalize uncertainty handling, spectral blending, or computational thrift, HUE-Net unites all three in a lightweight package. Benchmarks on the IJB-C and N-SpectralFace datasets illustrate that the system not only secures state-of-the-art accuracy but also exhibits unmatched spectral robustness and reliable probability calibration. The results indicate that HUE-Net is well-positioned for forensic missions and humanitarian scenarios where trustworthy identification cannot be deferred. Full article
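The core idea behind the Adaptive Spectral Attention mechanism, reweighting each sensor stream by an estimated reliability, can be sketched as a simple softmax weighting. This is a minimal illustration only: the stream embeddings, reliability scores, and temperature parameter below are assumptions, not taken from HUE-Net.

```python
import numpy as np

def adaptive_spectral_fusion(streams, reliability, temperature=1.0):
    """Fuse per-stream feature vectors with softmax weights derived
    from a scalar reliability score per stream.

    streams:     (n_streams, feat_dim) per-modality feature vectors
    reliability: (n_streams,) reliability estimate per stream
    """
    logits = np.asarray(reliability, dtype=float) / temperature
    w = np.exp(logits - logits.max())
    w /= w.sum()                                # softmax attention weights
    fused = (w[:, None] * streams).sum(axis=0)  # convex combination of streams
    return fused, w

# Toy embeddings of one face crop from three streams: RGB, NIR, event.
rgb   = np.array([0.2, 0.8, 0.1, 0.4])
nir   = np.array([0.3, 0.7, 0.2, 0.5])
event = np.array([0.9, 0.1, 0.8, 0.2])
streams = np.stack([rgb, nir, event])

# Under low light, NIR is judged most reliable and dominates the fusion.
fused, w = adaptive_spectral_fusion(streams, reliability=[0.1, 2.0, 0.5])
print("weights:", np.round(w, 3))
```

In the full network the reliability scores would themselves be predicted from the data rather than supplied by hand.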

22 pages, 2999 KiB  
Article
MSFNet: A Multi-Source Fusion-Based Method with Enhanced Hierarchical Spectral Semantic Perception for Wheat Disease Region Classification
by Wenxu Jia, Ziyang Guo, Wenjing Zhang, Haixi Zhang and Bin Liu
Appl. Sci. 2025, 15(13), 7317; https://doi.org/10.3390/app15137317 - 29 Jun 2025
Abstract
Wheat diseases threaten yield and food security, highlighting the need for rapid, accurate diagnosis in precision agriculture. However, current remote sensing methods often lack hierarchical spectral semantic perception or rely on single-source data and simple fusion, limiting diagnostic performance. To address these challenges, this study proposed MSFNet, a novel multi-source fusion network with enhanced hierarchical spectral semantic perception, to achieve the precise regional classification of wheat diseases. Specifically, a multi-source fusion module (MSFM) was developed, employing a dual-branch architecture to simultaneously enhance spatial–spectral semantics and comprehensively explore complementary cross-modal features, thereby enabling the effective integration of critical information from both modalities. Furthermore, a hierarchical spectral semantic fusion module (HSSFM) was developed, which employs a pyramid architecture integrated with attention mechanisms to fuse hierarchical spectral semantics, thereby significantly enhancing the model’s hierarchical feature representation capacity. To support this research, we constructed a new multispectral remote sensing dataset, MSWDD2024, tailored for wheat disease region diagnosis. Experimental evaluations on MSWDD2024 demonstrated that MSFNet achieved 95.4% accuracy, 95.6% precision, and 95.6% recall, surpassing ResNet18 by 6.0%, 6.0%, and 5.8%, respectively, and outperforming RGB-only models by over 12% across all metrics. Moreover, MSFNet consistently exceeded the performance of existing state-of-the-art methods. These results confirm the superior effectiveness of MSFNet in remote sensing-based wheat disease diagnosis, offering a promising solution for robust and accurate monitoring in precision agriculture. Full article
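The accuracy, precision, and recall figures quoted above are standard confusion-matrix statistics. The sketch below shows how they are computed with macro averaging; the three-class matrix is hypothetical, not the MSWDD2024 result.

```python
import numpy as np

def macro_metrics(conf):
    """Accuracy and macro-averaged precision/recall from a confusion
    matrix whose rows are true classes and columns are predictions."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    accuracy = tp.sum() / conf.sum()
    precision = np.mean(tp / conf.sum(axis=0))  # averaged over predicted classes
    recall = np.mean(tp / conf.sum(axis=1))     # averaged over true classes
    return accuracy, precision, recall

# Hypothetical 3-class region matrix (e.g. healthy / mild / severe disease)
conf = [[48, 1, 1],
        [2, 46, 2],
        [1, 2, 47]]
acc, prec, rec = macro_metrics(conf)
print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f}")
# -> acc=0.940 prec=0.940 rec=0.940
```

Macro averaging weights every class equally, which matters when disease regions are rare relative to healthy ones.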

31 pages, 6788 KiB  
Article
A Novel Dual-Modal Deep Learning Network for Soil Salinization Mapping in the Keriya Oasis Using GF-3 and Sentinel-2 Imagery
by Ilyas Nurmemet, Yang Xiang, Aihepa Aihaiti, Yu Qin, Yilizhati Aili, Hengrui Tang and Ling Li
Agriculture 2025, 15(13), 1376; https://doi.org/10.3390/agriculture15131376 - 27 Jun 2025
Abstract
Soil salinization poses a significant threat to agricultural productivity, food security, and ecological sustainability in arid and semi-arid regions. Effective and timely mapping of different degrees of salinized soils is essential for sustainable land management and ecological restoration. Although deep learning (DL) methods have been widely employed for soil salinization extraction from remote sensing (RS) data, the integration of multi-source RS data with DL methods remains challenging due to issues such as limited data availability, speckle noise, geometric distortions, and suboptimal data fusion strategies. This study focuses on the Keriya Oasis, Xinjiang, China, utilizing RS data, including Sentinel-2 multispectral and GF-3 full-polarimetric SAR (PolSAR) images, to conduct soil salinization classification. We propose a Dual-Modal deep learning network for Soil Salinization, named DMSSNet, which aims to improve the mapping accuracy of salinized soils by effectively fusing spectral and polarimetric features. DMSSNet incorporates self-attention mechanisms and a Convolutional Block Attention Module (CBAM) within a hierarchical fusion framework, enabling the model to capture both intra-modal and cross-modal dependencies and to improve spatial feature representation. Polarimetric decomposition features and spectral indices are jointly exploited to characterize diverse land surface conditions. Comprehensive field surveys and expert interpretation were employed to construct a high-quality training and validation dataset. Experimental results indicate that DMSSNet achieves an overall accuracy of 92.94%, a Kappa coefficient of 79.12%, and a macro F1-score of 86.52%, outperforming conventional DL models (ResUNet, SegNet, DeepLabv3+). The results confirm the superiority of attention-guided dual-branch fusion networks for distinguishing varying degrees of soil salinization across heterogeneous landscapes and highlight the value of integrating Sentinel-2 optical and GF-3 PolSAR data for complex land surface classification tasks. Full article
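The overall accuracy, Kappa coefficient, and macro F1-score reported above are likewise derived from a confusion matrix. A minimal NumPy sketch of Cohen's kappa and macro F1 follows; the four-class matrix (non-saline through heavily saline) is hypothetical, not the Keriya Oasis result.

```python
import numpy as np

def kappa_macro_f1(conf):
    """Cohen's kappa and macro F1 from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                                   # observed agreement (OA)
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    prec = np.diag(conf) / conf.sum(axis=0)
    rec = np.diag(conf) / conf.sum(axis=1)
    f1 = np.mean(2 * prec * rec / (prec + rec))               # macro F1
    return kappa, f1

# Hypothetical counts: non-saline / slightly / moderately / heavily saline
conf = [[90, 5, 3, 2],
        [6, 80, 8, 6],
        [2, 7, 85, 6],
        [1, 4, 5, 90]]
kappa, f1 = kappa_macro_f1(conf)
print(f"kappa={kappa:.3f} macroF1={f1:.3f}")
```

Kappa discounts the agreement expected by chance, which is why it is routinely reported alongside overall accuracy in land-cover mapping studies such as this one.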
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
