Search Results (473)

Search Parameters:
Keywords = hyperspectral image fusion

21 pages, 7677 KiB  
Article
Hyperspectral Imaging Combined with a Dual-Channel Feature Fusion Model for Hierarchical Detection of Rice Blast
by Yuan Qi, Tan Liu, Songlin Guo, Peiyan Wu, Jun Ma, Qingyun Yuan, Weixiang Yao and Tongyu Xu
Agriculture 2025, 15(15), 1673; https://doi.org/10.3390/agriculture15151673 - 2 Aug 2025
Abstract
Rice blast, caused by Magnaporthe oryzae, is a major cause of yield reduction and quality deterioration in rice, so early detection of the disease is necessary to control its spread. This study proposed a dual-channel feature fusion model (DCFM) for effective identification of rice blast. The DCFM model extracted spectral features using the successive projections algorithm (SPA), random frog (RFrog), and competitive adaptive reweighted sampling (CARS), and extracted spatial features from spectral images using MobileNetV2 combined with the convolutional block attention module (CBAM). These features were then fused by the feature fusion adaptive conditioning module in DCFM and fed into the fully connected layer for disease identification. The results show that the model combining spectral and spatial features outperformed classification models based on single features for rice blast detection, with OA and Kappa above 90% and 88%, respectively. The DCFM model based on SPA screening obtained the best results, with an OA of 96.72% and a Kappa of 95.97%. Overall, this study enables early and accurate identification of rice blast, providing a rapid and reliable method for rice disease monitoring and management, and a valuable reference for detecting other crop diseases.
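The spectral branch above screens informative bands before fusion; a minimal numpy sketch of the successive projections algorithm (SPA) idea follows. The random demo spectra and all variable names are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

def spa_select(X, n_bands, start=0):
    """Simplified SPA sketch: greedily pick the band whose column has the
    largest norm after projecting out the already-selected bands.

    X: (samples, bands) spectra; returns a list of selected band indices.
    """
    X = X.astype(float)
    selected = [start]
    for _ in range(n_bands - 1):
        S = X[:, selected]                          # columns chosen so far
        # projector onto the orthogonal complement of span(S)
        P = np.eye(X.shape[0]) - S @ np.linalg.pinv(S)
        residual = P @ X
        residual[:, selected] = 0.0                 # never re-select a band
        selected.append(int(np.argmax(np.linalg.norm(residual, axis=0))))
    return selected

rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 20))                 # toy: 60 samples, 20 bands
picked = spa_select(spectra, 5)
```

Each new band is maximally non-collinear with the ones already chosen, which is why SPA is popular for reducing redundancy among highly correlated hyperspectral bands.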

25 pages, 26404 KiB  
Review
Review of Deep Learning Applications for Detecting Special Components in Agricultural Products
by Yifeng Zhao and Qingqing Xie
Computers 2025, 14(8), 309; https://doi.org/10.3390/computers14080309 - 30 Jul 2025
Viewed by 228
Abstract
The rapid evolution of deep learning (DL) has fundamentally transformed the paradigm for detecting special components in agricultural products, addressing critical challenges in food safety, quality control, and precision agriculture. This comprehensive review systematically analyzes seminal studies to evaluate cutting-edge DL applications across three core domains: contaminant surveillance (heavy metals, pesticides, and mycotoxins), nutritional component quantification (soluble solids, polyphenols, and pigments), and structural/biomarker assessment (disease symptoms, gel properties, and physiological traits). Emerging hybrid architectures—including attention-enhanced convolutional neural networks (CNNs) for lesion localization, wavelet-coupled autoencoders for spectral denoising, and multi-task learning frameworks for joint parameter prediction—demonstrate unprecedented accuracy in decoding complex agricultural matrices. Particularly noteworthy are sensor fusion strategies integrating hyperspectral imaging (HSI), Raman spectroscopy, and microwave detection with deep feature extraction, achieving industrial-grade performance (RPD > 3.0) while reducing detection time by 30–100× versus conventional methods. Nevertheless, persistent barriers, including the "black-box" nature of complex models, a severe lack of standardized data and protocols, computational inefficiency, and poor field robustness, hinder the reliable deployment and adoption of DL for detecting special components in agricultural products. This review provides an essential foundation and roadmap for future research to bridge the gap between laboratory DL models and their effective, trusted application in real-world agricultural settings.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)
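The RPD > 3.0 threshold cited in the abstract above is the ratio of the reference values' standard deviation to the prediction RMSE; a small sketch of how it is commonly computed (the demo numbers below are invented):

```python
import numpy as np

def rpd(y_true, y_pred):
    """Residual Predictive Deviation: SD of the reference values over the
    root-mean-square error of prediction. RPD > 3.0 is commonly read as
    quantitatively reliable for chemometric calibration models."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return np.std(y_true, ddof=1) / rmsep

# toy example: tight predictions against a spread-out reference range
score = rpd([1, 2, 3, 4, 5], [1.1, 1.9, 3.0, 4.2, 4.8])
```

Because RPD scales error by the natural spread of the property, it is more comparable across datasets than raw RMSE.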

22 pages, 3506 KiB  
Review
Spectroscopic and Imaging Technologies Combined with Machine Learning for Intelligent Perception of Pesticide Residues in Fruits and Vegetables
by Haiyan He, Zhoutao Li, Qian Qin, Yue Yu, Yuanxin Guo, Sheng Cai and Zhanming Li
Foods 2025, 14(15), 2679; https://doi.org/10.3390/foods14152679 - 30 Jul 2025
Viewed by 213
Abstract
Pesticide residues in fruits and vegetables pose a serious threat to food safety. Traditional detection methods suffer from complex operation, high cost, and long detection times, so developing rapid, non-destructive, and efficient detection technologies and equipment is of great significance. In recent years, the combination of spectroscopic and imaging technologies with machine learning algorithms has developed rapidly, offering a new way to address this problem. This review focuses on research progress in combining spectroscopic techniques (near-infrared spectroscopy (NIRS), hyperspectral imaging (HSI), surface-enhanced Raman scattering (SERS), and laser-induced breakdown spectroscopy (LIBS)) and imaging techniques (visible-light (VIS) imaging, NIRS imaging, HSI, and terahertz imaging) with machine learning algorithms for detecting pesticide residues in fruits and vegetables. It also examines the major challenges these combined approaches face in the intelligent perception of pesticide residues: the performance of machine learning models requires further enhancement, the fusion of imaging and spectral data presents technical difficulties, and the commercialization of hardware devices remains underdeveloped. Finally, this review proposes a method that integrates spectral and image data, enhancing the accuracy of pesticide residue detection through interpretable machine learning algorithms and supporting the intelligent sensing and analysis of agricultural and food products.

24 pages, 2508 KiB  
Article
Class-Discrepancy Dynamic Weighting for Cross-Domain Few-Shot Hyperspectral Image Classification
by Chen Ding, Jiahao Yue, Sirui Zheng, Yizhuo Dong, Wenqiang Hua, Xueling Chen, Yu Xie, Song Yan, Wei Wei and Lei Zhang
Remote Sens. 2025, 17(15), 2605; https://doi.org/10.3390/rs17152605 - 27 Jul 2025
Viewed by 297
Abstract
In recent years, cross-domain few-shot learning (CDFSL) has demonstrated remarkable performance in hyperspectral image classification (HSIC), partially alleviating the distribution shift problem. However, most domain adaptation methods rely on similarity metrics to establish cross-domain class matching, making it difficult to account simultaneously for intra-class sample size variations and inherent inter-class differences. To address this problem, existing studies have introduced a class weighting mechanism within the prototype network framework, determining class weights by calculating inter-sample similarity through distance metrics. However, this method suffers from a dual limitation: susceptibility to noise interference and insufficient capacity to capture global class variations, which may lead to distorted weight allocation and, consequently, alignment bias. To solve these issues, we propose a novel class-discrepancy dynamic weighting-based cross-domain FSL (CDDW-CFSL) framework. It integrates three key components: (1) the class-weighted domain adaptation (CWDA) method, which dynamically measures cross-domain distribution shifts using global class mean discrepancies and employs discrepancy-sensitive weighting to strengthen the alignment of critical categories, enabling accurate domain adaptation while maintaining feature topology; (2) the class mean refinement (CMR) method, which incorporates class covariance distance to compute distribution discrepancies between support set samples and class prototypes, enabling precise capture of cross-domain feature internal structures; and (3) a novel multi-dimensional feature extractor that captures both local spatial details and continuous spectral characteristics, facilitating deep cross-dimensional feature fusion. Results on three publicly available HSIC datasets show the effectiveness of CDDW-CFSL.
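The CWDA idea of weighting classes by their global mean discrepancy can be illustrated with a toy numpy sketch; this is a loose illustration of discrepancy-sensitive weighting, not the paper's exact loss, and the function name and demo data are invented.

```python
import numpy as np

def class_weighted_discrepancy(src, src_y, tgt, tgt_y, n_classes):
    """Compute per-class mean discrepancies between source and target
    features, then weight each class by its relative shift, so classes
    that drift most across domains dominate the alignment term."""
    gaps = []
    for c in range(n_classes):
        mu_s = src[src_y == c].mean(axis=0)   # source class mean
        mu_t = tgt[tgt_y == c].mean(axis=0)   # target class mean
        gaps.append(np.linalg.norm(mu_s - mu_t))
    gaps = np.array(gaps)
    weights = gaps / gaps.sum()               # larger shift -> larger weight
    return float((weights * gaps).sum()), weights
```

Minimizing such a weighted sum pushes the network to align the class means that are furthest apart first, rather than treating every class equally.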

24 pages, 8553 KiB  
Article
DO-MDS&DSCA: A New Method for Seed Vigor Detection in Hyperspectral Images Targeting Significant Information Loss and High Feature Similarity
by Liangquan Jia, Jianhao He, Jinsheng Wang, Miao Huan, Guangzeng Du, Lu Gao and Yang Wang
Agriculture 2025, 15(15), 1625; https://doi.org/10.3390/agriculture15151625 - 26 Jul 2025
Viewed by 335
Abstract
Hyperspectral imaging for seed vigor detection faces the challenges of high-dimensional spectral data, information loss after dimensionality reduction, and low feature differentiation between vigor levels. To address these issues, this study proposes an improved, dynamically optimized MDS (DO-MDS) dimensionality reduction algorithm based on multidimensional scaling that better preserves key inter-sample features during dimensionality reduction. Secondly, a dual-stream spectral collaborative attention (DSCA) module is proposed; it adopts a dual-modal fusion approach combining global feature capture with local feature enhancement, deepening the characterization of spectral features. This study selected rice seed varieties commonly used in Zhejiang Province and constructed three individual spectral datasets and a mixed dataset through aging, spectral acquisition, and germination experiments. Using the DO-MDS-processed datasets with a convolutional neural network embedding the DSCA attention module, vigor discrimination accuracies reached 93.85%, 93.4%, and 96.23% for the Chunyou 83, Zhongzao 39, and Zhongzu 53 datasets, respectively, and 94.8% for the mixed dataset. This study provides effective strategies for spectral dimensionality reduction in hyperspectral seed vigor detection and enhances the differentiation of spectral information for seeds with similar vigor levels.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
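DO-MDS builds on multidimensional scaling; for orientation, classical MDS (the baseline that such variants refine) can be sketched in a few lines of numpy. This is the textbook double-centering construction, not the paper's DO-MDS.

```python
import numpy as np

def classical_mds(X, k=2):
    """Classical multidimensional scaling: embed samples into k dimensions
    while preserving pairwise Euclidean distances. Steps: squared distance
    matrix -> double-centered Gram matrix -> top-k eigenpairs."""
    D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

When the input distances are exactly Euclidean and the data truly live in k dimensions, the embedding reproduces all pairwise distances; DO-MDS-style refinements target the lossy case where high-dimensional spectra must be compressed further.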

25 pages, 5445 KiB  
Article
HyperspectralMamba: A Novel State Space Model Architecture for Hyperspectral Image Classification
by Jianshang Liao and Liguo Wang
Remote Sens. 2025, 17(15), 2577; https://doi.org/10.3390/rs17152577 - 24 Jul 2025
Viewed by 268
Abstract
Hyperspectral image classification faces challenges with high-dimensional spectral data and complex dependencies between bands. This paper proposes HyperspectralMamba, a novel architecture for hyperspectral image classification that integrates state space modeling with adaptive recalibration mechanisms. The method addresses limitations in existing techniques through three key innovations: (1) a novel dual-stream architecture that combines SSM global modeling with parallel convolutional local feature extraction, distinguishing our approach from existing single-stream SSM methods; (2) a band-adaptive feature recalibration mechanism, specifically designed for hyperspectral data, that adaptively adjusts the importance of different spectral band features; and (3) an effective feature fusion strategy that integrates global and local features through residual connections. Experimental results on three benchmark datasets—Indian Pines, Pavia University, and Salinas Valley—demonstrate that the proposed method achieves overall accuracies of 95.31%, 98.60%, and 96.40%, respectively, significantly outperforming existing convolutional neural networks, attention-enhanced networks, and Transformer methods. HyperspectralMamba demonstrates exceptional performance in small-sample class recognition and in distinguishing spectrally similar terrain, while maintaining lower computational complexity, providing a new technical approach for high-precision hyperspectral image classification.
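The band-adaptive recalibration mechanism described above is reminiscent of squeeze-and-excitation gating; here is a hedged numpy sketch of that general pattern. The random weights, toy cube, and bottleneck size are illustrative; the paper's actual module may differ.

```python
import numpy as np

def recalibrate_bands(cube, w1, w2):
    """Band recalibration sketch: squeeze each band to a scalar by global
    pooling, pass it through a small ReLU bottleneck, and rescale every
    band by a per-band sigmoid gate."""
    squeezed = cube.mean(axis=(0, 1))                 # (bands,) global pooling
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-band sigmoid gates
    return cube * gates, gates

rng = np.random.default_rng(1)
bands, r = 8, 4                                       # toy sizes
cube = rng.random((5, 5, bands))                      # (H, W, bands) patch
w1 = rng.normal(size=(r, bands))                      # squeeze weights
w2 = rng.normal(size=(bands, r))                      # excite weights
out, gates = recalibrate_bands(cube, w1, w2)
```

In a trained network the two weight matrices are learned, so uninformative bands receive gates near zero and discriminative bands pass through nearly unchanged.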

14 pages, 4699 KiB  
Article
Parallel Dictionary Reconstruction and Fusion for Spectral Recovery in Computational Imaging Spectrometers
by Hongzhen Song, Qifeng Hou, Kaipeng Sun, Guixiang Zhang, Tuoqi Xu, Benjin Sun and Liu Zhang
Sensors 2025, 25(15), 4556; https://doi.org/10.3390/s25154556 - 23 Jul 2025
Viewed by 197
Abstract
Computational imaging spectrometers using broad-bandpass filter arrays with distinct transmission functions are a promising route to miniaturization. The number of filters is limited by practical factors, so compressed sensing is used to model the system as a set of linear underdetermined equations for hyperspectral imaging. This paper proposes parallel dictionary reconstruction and fusion for spectral recovery in computational imaging spectrometers. Orthogonal systems serve as the dictionary candidates for reconstruction; based on observations of ground objects, the dictionaries are selected from the candidates using an incoherence criterion. Parallel computations are performed with the selected dictionaries, and spectral recovery is achieved by fusing the computational results. The method is verified by simulating visible-NIR spectral recovery of typical ground objects. The proposed method achieves a mean square recovery error of ≤1.73 × 10⁻⁴ and a recovery accuracy of ≥0.98, and is both more universal and more stable than traditional sparse representation methods.
(This article belongs to the Section Optical Sensors)
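The underdetermined recovery problem described above can be sketched with a DCT dictionary and orthogonal matching pursuit. This is a generic compressed sensing illustration, not the paper's parallel dictionary fusion method; the filter matrix and sparsity level are toy assumptions.

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal-style DCT dictionary: smooth spectra are sparse here."""
    k = np.arange(n)
    D = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    return D / np.linalg.norm(D, axis=0)

def omp(M, y, n_nonzero):
    """Orthogonal matching pursuit: greedily solve y ≈ M c with a sparse c,
    refitting the coefficients on the selected support each iteration."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(M.T @ residual))))
        coef, *_ = np.linalg.lstsq(M[:, support], y, rcond=None)
        residual = y - M[:, support] @ coef
    c = np.zeros(M.shape[1])
    c[support] = coef
    return c

n, m = 64, 16                                  # spectral bands vs. filters
rng = np.random.default_rng(2)
A = rng.random((m, n))                         # toy filter transmission matrix
D = dct_dictionary(n)
c_true = np.zeros(n)
c_true[[0, 3, 7]] = [2.0, -1.0, 0.5]           # sparse DCT coefficients
spectrum = D @ c_true                          # smooth ground-truth spectrum
y = A @ spectrum                               # m << n measurements
c_hat = omp(A @ D, y, 3)
recovered = D @ c_hat
```

With 16 measurements of a 64-band spectrum the system is underdetermined, but the sparsity prior in the dictionary domain makes recovery tractable, which is the premise of the paper's approach.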

35 pages, 58241 KiB  
Article
DGMNet: Hyperspectral Unmixing Dual-Branch Network Integrating Adaptive Hop-Aware GCN and Neighborhood Offset Mamba
by Kewen Qu, Huiyang Wang, Mingming Ding, Xiaojuan Luo and Wenxing Bao
Remote Sens. 2025, 17(14), 2517; https://doi.org/10.3390/rs17142517 - 19 Jul 2025
Viewed by 250
Abstract
Hyperspectral sparse unmixing (SU) networks have recently received considerable attention for their ability to model hyperspectral images (HSIs) with a priori spectral libraries and to capture nonlinear features through deep networks. This approach avoids the errors associated with endmember extraction and enhances unmixing performance via nonlinear modeling. However, two major challenges remain: large spectral libraries with high coherence lead to computational redundancy and performance degradation, and some feature extraction models, such as the Transformer, offer strong representational capability but suffer from high computational complexity. To address these limitations, this paper proposes DGMNet, a hyperspectral unmixing dual-branch network integrating an adaptive hop-aware GCN and a neighborhood offset Mamba. Specifically, DGMNet consists of two parallel branches: the first employs the adaptive hop-neighborhood-aware GCN (AHNAGC) module to model global spatial features, while the second uses the neighborhood spatial offset Mamba (NSOM) module to capture fine-grained local spatial structures. The Mamba-enhanced dual-stream feature fusion (MEDFF) module then fuses the global and local spatial features from the two branches and performs spectral feature learning through a spectral attention mechanism. Moreover, DGMNet innovatively incorporates a spectral-library-pruning mechanism into the SU network, with a new pruning strategy that accounts for the contribution of small-target endmembers, enabling dynamic selection of valid endmembers and reducing computational redundancy. Finally, an improved ESS-Loss is proposed, combining an enhanced total variation (ETV) with an ℓ1/2 sparsity constraint to refine model performance. Experimental results on two synthetic and five real datasets demonstrate the effectiveness and superiority of the proposed method over state-of-the-art methods. Notably, experiments on the Shahu dataset from the Gaofen-5 satellite further demonstrate DGMNet's robustness and generalization.
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

19 pages, 4026 KiB  
Article
The Fusion of Focused Spectral and Image Texture Features: A New Exploration of the Nondestructive Detection of Degeneration Degree in Pleurotus geesteranus
by Yifan Jiang, Jin Shang, Yueyue Cai, Shiyang Liu, Ziqin Liao, Jie Pang, Yong He and Xuan Wei
Agriculture 2025, 15(14), 1546; https://doi.org/10.3390/agriculture15141546 - 18 Jul 2025
Viewed by 269
Abstract
The degradation of edible fungi can reduce cultivation yield and cause economic losses. In this study, a nondestructive detection method for strain degradation based on the fusion of hyperspectral technology and image texture features is presented. Hyperspectral and microscopic image data were acquired from Pleurotus geesteranus strains exhibiting varying degrees of degradation, followed by preprocessing using Savitzky–Golay smoothing (SG), multivariate scattering correction (MSC), and standard normal variate transformation (SNV). Spectral features were extracted by the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA), while texture features were derived using the gray-level co-occurrence matrix (GLCM) and local binary pattern (LBP) models. The spectral and texture features were then fused and used to construct a classification model based on convolutional neural networks (CNNs). The results showed that combining hyperspectral and image texture features significantly improved the classification accuracy. Among the tested models, the CARS + LBP-CNN configuration achieved the best performance, with an overall accuracy of 95.6% and a kappa coefficient of 0.96. This approach provides a new technical solution for the nondestructive detection of strain degradation in Pleurotus geesteranus.
(This article belongs to the Section Agricultural Product Quality and Safety)
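The GLCM texture branch mentioned above can be sketched in plain numpy for a single horizontal offset; the quantization level, offset choice, and toy image are illustrative (the paper's pipeline also includes LBP and spectral preprocessing, omitted here).

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal neighbor offset,
    plus two classic Haralick statistics (contrast and energy)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                       # count neighbor pairs
    glcm /= glcm.sum()                        # normalize to probabilities
    idx = np.arange(levels)
    contrast = float((glcm * (idx[:, None] - idx[None, :]) ** 2).sum())
    energy = float((glcm ** 2).sum())
    return glcm, contrast, energy

img = np.array([[0, 0, 255, 255],
                [0, 0, 255, 255],
                [0, 0, 255, 255]])            # toy two-tone image
glcm, contrast, energy = glcm_features(img, levels=2)
```

Real GLCM pipelines average several offsets and angles and add more statistics (homogeneity, correlation), but the co-occurrence counting shown here is the core of the feature.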

35 pages, 7685 KiB  
Article
Spatial and Spectral Structure-Aware Mamba Network for Hyperspectral Image Classification
by Jie Zhang, Ming Sun and Sheng Chang
Remote Sens. 2025, 17(14), 2489; https://doi.org/10.3390/rs17142489 - 17 Jul 2025
Viewed by 261
Abstract
Recently, Mamba, a network based on selective state space models (SSMs), has emerged as a research focus in hyperspectral image (HSI) classification due to its linear computational complexity and strong long-range dependency modeling capability. Originally designed for 1D causal sequence modeling, however, Mamba struggles with HSI tasks that require simultaneous awareness of spatial and spectral structures. Current Mamba-based HSI classification methods typically convert spatial structures into 1D sequences and employ various scanning patterns to capture spatial dependencies. However, these approaches inevitably disrupt spatial structures, leading to ineffective modeling of complex spatial relationships and increased computational costs due to elongated scanning paths. Moreover, the lack of neighborhood spectral information fails to mitigate the impact of spatial variability on classification performance. To address these limitations, we propose a novel model, Dual-Aware Discriminative Fusion Mamba (DADFMamba), which is simultaneously aware of spatial-spectral structures and adaptively integrates discriminative features. Specifically, we design a Spatial-Structure-Aware Fusion Module (SSAFM) to directly establish spatial neighborhood connectivity in the state space, preserving structural integrity. We then introduce a Spectral-Neighbor-Group Fusion Module (SNGFM), which enhances target spectral features by leveraging neighborhood spectral information before partitioning them into multiple spectral groups to explore relations across these groups. Finally, we introduce a Feature Fusion Discriminator (FFD) to weigh the importance of spatial and spectral features, enabling adaptive feature fusion. Extensive experiments on four benchmark HSI datasets demonstrate that DADFMamba outperforms state-of-the-art deep learning models in classification accuracy while maintaining low computational costs and parameter efficiency. Notably, it achieves superior performance with only 30 training samples per class, highlighting its data efficiency. Our study reveals the great potential of Mamba in HSI classification and provides valuable insights for future research.
(This article belongs to the Section Remote Sensing Image Processing)

26 pages, 6371 KiB  
Article
Growth Stages Discrimination of Multi-Cultivar Navel Oranges Using the Fusion of Near-Infrared Hyperspectral Imaging and Machine Vision with Deep Learning
by Chunyan Zhao, Zhong Ren, Yue Li, Jia Zhang and Weinan Shi
Agriculture 2025, 15(14), 1530; https://doi.org/10.3390/agriculture15141530 - 15 Jul 2025
Viewed by 253
Abstract
To noninvasively and precisely discriminate among the growth stages of multiple navel orange cultivars simultaneously, near-infrared (NIR) hyperspectral imaging (HSI) is fused with machine vision (MV) and deep learning. NIR reflectance spectra and hyperspectral and RGB images were collected for 740 Gannan navel oranges of five cultivars. Based on preprocessed spectra, optimally selected hyperspectral images, and registered RGB images, a dual-branch multi-modal feature fusion convolutional neural network (CNN) model was established. In this model, a spectral branch extracts spectral features reflecting internal compositional variations, while the image branch extracts external color and texture features from the integrated hyperspectral and RGB images. Growth stages are then determined via feature fusion. To validate the proposed method, various machine-learning and deep-learning models were compared on single-modal and multi-modal data. The results demonstrate that multi-modal feature fusion of HSI and MV combined with the dual-branch CNN model yields excellent growth-stage discrimination in navel oranges, achieving an accuracy, recall, precision, F1 score, and kappa coefficient on the testing set of 95.95%, 96.66%, 96.76%, 96.69%, and 0.9481, respectively, providing a prominent way to precisely monitor fruit growth stages.
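Several abstracts above report a kappa coefficient alongside accuracy; kappa corrects raw agreement for the agreement expected by chance. A compact numpy sketch of Cohen's kappa:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    computed from a normalized confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    cm = np.zeros((classes.size, classes.size))
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(classes, t), np.searchsorted(classes, p)] += 1
    cm /= cm.sum()
    po = np.trace(cm)                          # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum()  # chance agreement
    return (po - pe) / (1 - pe)
```

Unlike plain accuracy, kappa stays near zero for a classifier that merely reproduces the class prior, which is why it is the standard companion metric for imbalanced land-cover and crop datasets.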

24 pages, 3937 KiB  
Article
HyperTransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Hyperspectral Image Classification
by Xin Dai, Zexi Li, Lin Li, Shuihua Xue, Xiaohui Huang and Xiaofei Yang
Remote Sens. 2025, 17(14), 2361; https://doi.org/10.3390/rs17142361 - 9 Jul 2025
Viewed by 352
Abstract
Recent advances in hyperspectral image (HSI) classification have demonstrated the effectiveness of hybrid architectures that integrate convolutional neural networks (CNNs) and Transformers, leveraging CNNs for local feature extraction and Transformers for global dependency modeling. However, existing fusion approaches face three critical challenges: (1) insufficient synergy between spectral and spatial feature learning due to rigid coupling mechanisms; (2) high computational complexity resulting from redundant attention calculations; and (3) limited adaptability to spectral redundancy and noise in small-sample scenarios. To address these limitations, we propose HyperTransXNet, a novel CNN-Transformer hybrid architecture that incorporates adaptive spectral-spatial fusion. Specifically, the proposed HyperTransXNet comprises three key modules: (1) a Hybrid Spatial-Spectral Module (HSSM) that captures the refined local spectral-spatial features and models global spectral correlations by combining depth-wise dynamic convolution with frequency-domain attention; (2) a Mixture-of-Experts Routing (MoE-R) module that adaptively fuses multi-scale features by dynamically selecting optimal experts via Top-K sparse weights; and (3) a Spatial-Spectral Tokens Enhancer (SSTE) module that ensures causality-preserving interactions between spectral bands and spatial contexts. Extensive experiments on the Indian Pines, Houston 2013, and WHU-Hi-LongKou datasets demonstrate the superiority of HyperTransXNet.
(This article belongs to the Special Issue AI-Driven Hyperspectral Remote Sensing of Atmosphere and Land)
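The Top-K sparse expert routing in the MoE-R module described above follows a standard gating pattern; a toy numpy sketch (the linear gate, toy experts, and all weights below are invented for illustration, not the paper's module):

```python
import numpy as np

def topk_moe(x, gate_w, experts, k=2):
    """Top-K sparse Mixture-of-Experts routing sketch: score experts with a
    linear gate, keep only the K highest-scoring experts, renormalize their
    gate values with a softmax, and blend the surviving expert outputs."""
    logits = gate_w @ x
    top = np.argsort(logits)[::-1][:k]            # indices of the K best experts
    w = np.exp(logits[top] - logits[top].max())   # stable softmax over survivors
    w /= w.sum()
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]  # toy experts
gate_w = np.array([[10., 0.], [0., 0.], [-10., 0.]])        # gate favors expert 0
out = topk_moe(np.array([1., 0.]), gate_w, experts, k=2)
```

Because only K experts run per input, the layer's capacity grows with the number of experts while the per-sample compute stays roughly constant, which is the usual motivation for sparse routing.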

36 pages, 1925 KiB  
Review
Deep Learning-Enhanced Spectroscopic Technologies for Food Quality Assessment: Convergence and Emerging Frontiers
by Zhichen Lun, Xiaohong Wu, Jiajun Dong and Bin Wu
Foods 2025, 14(13), 2350; https://doi.org/10.3390/foods14132350 - 2 Jul 2025
Viewed by 1243
Abstract
Nowadays, the development of the food industry and economic recovery have driven escalating consumer demand for high-quality, nutritious, and safe food products, and spectroscopic technologies are increasingly prominent as essential tools for food quality inspection. Concurrently, the rapid rise of artificial intelligence (AI) has created new opportunities for food quality detection. As a critical branch of AI, deep learning synergizes with spectroscopic technologies to enhance spectral data processing accuracy, enable real-time decision making, and address challenges from complex matrices and spectral noise. This review summarizes six cutting-edge nondestructive spectroscopic and imaging technologies: near-infrared/mid-infrared spectroscopy, Raman spectroscopy, fluorescence spectroscopy, hyperspectral imaging (spanning the UV, visible, and NIR regions to simultaneously capture both the spatial distribution and spectral signatures of sample constituents), terahertz spectroscopy, and nuclear magnetic resonance (NMR), along with their transformative applications. We systematically elucidate the fundamental principles and distinctive merits of each technological approach, with a particular focus on their deep learning-based integration with spectral fusion techniques and hybrid spectral-heterogeneous fusion methodologies. Our analysis reveals that the synergy between spectroscopic technologies and deep learning demonstrates unparalleled superiority in speed, precision, and non-invasiveness. Future research should prioritize three directions: multimodal integration of spectroscopic technologies, edge computing in portable devices, and AI-driven applications, ultimately establishing a high-precision and sustainable food quality inspection system spanning production to consumption.
(This article belongs to the Section Food Quality and Safety)

24 pages, 7335 KiB  
Article
Soil Organic Matter Content Prediction Using Multi-Input Convolutional Neural Network Based on Multi-Source Information Fusion
by Li Guo, Qin Gao, Mengyi Zhang, Panting Cheng, Peng He, Lujun Li, Dong Ding, Changcheng Liu, Francis Collins Muga, Masroor Kamal and Jiangtao Qi
Agriculture 2025, 15(12), 1313; https://doi.org/10.3390/agriculture15121313 - 19 Jun 2025
Abstract
Soil organic matter (SOM) content is a key indicator for assessing soil health, carbon cycling, and soil degradation. Traditional SOM detection methods are complex and time-consuming and do not meet the modern agricultural demand for rapid, non-destructive analysis. While significant progress has been [...] Read more.
Soil organic matter (SOM) content is a key indicator for assessing soil health, carbon cycling, and soil degradation. Traditional SOM detection methods are complex and time-consuming and do not meet modern agriculture's demand for rapid, non-destructive analysis. While significant progress has been made in spectral inversion for SOM prediction, its accuracy still lags behind that of traditional chemical methods. This study proposes a novel approach to predict SOM content by integrating spectral, texture, and color features using a three-branch convolutional neural network (3B-CNN). Spectral reflectance data (400–1000 nm) were collected using a portable hyperspectral imaging device. The 15 spectral bands with the highest correlation were selected from 260 spectral bands using the Correlation Coefficient Method (CCM), the Boruta algorithm, and the Successive Projections Algorithm (SPA). Compared with the other methods, CCM demonstrated superior dimensionality-reduction performance, retaining the bands most highly correlated with SOM and laying a solid foundation for multi-source data fusion. Additionally, six soil texture features were extracted from smartphone images of the soil using the gray-level co-occurrence matrix (GLCM), and twelve color features were obtained from the color histogram. These multi-source features were fused via trilinear pooling. The results showed that the 3B-CNN model, integrating multi-source data, performed exceptionally well in SOM prediction, with an R2 of 0.87 and an RMSE of 1.68, a 23% improvement in R2 over the 1D-CNN model using only spectral data. Incorporating multi-source data into traditional machine learning models (SVM, RF, and PLS) also improved prediction accuracy, with R2 gains ranging from 4% to 11%. This study demonstrates the potential of multi-source data fusion for accurately predicting SOM content, enabling rapid assessment at the field scale and providing a scientific basis for precision fertilization and agricultural management. Full article
(This article belongs to the Section Agricultural Soils)
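The trilinear pooling step described in the abstract above can be sketched as a triple outer product of the three feature vectors. The feature dimensions (15 CCM-selected spectral bands, 6 GLCM texture features, 12 color-histogram features) follow the abstract; the function name and the flattening of the interaction tensor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trilinear_pool(spectral, texture, color):
    """Fuse three feature vectors via a trilinear (triple outer) product.

    Returns the flattened third-order interaction tensor, so every
    (spectral, texture, color) feature combination gets its own entry.
    """
    fused = np.einsum('i,j,k->ijk', spectral, texture, color)
    return fused.ravel()

# Feature sizes from the abstract: 15 spectral bands (CCM-selected),
# 6 GLCM texture features, 12 color-histogram features.
rng = np.random.default_rng(0)
spec = rng.random(15)
tex = rng.random(6)
col = rng.random(12)

fused = trilinear_pool(spec, tex, col)
print(fused.shape)  # (1080,) = 15 * 6 * 12
```

In practice the fused vector would then feed a fully connected regression head; a learned low-rank projection is often used instead of the full product to keep the dimensionality manageable.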
28 pages, 4356 KiB  
Article
Hyperspectral Image Classification Based on Fractional Fourier Transform
by Jing Liu, Lina Lian, Yuanyuan Li and Yi Liu
Remote Sens. 2025, 17(12), 2065; https://doi.org/10.3390/rs17122065 - 15 Jun 2025
Abstract
To effectively utilize the rich spectral information of hyperspectral remote sensing images (HRSIs), the fractional Fourier transform (FRFT) feature of HRSIs is proposed to reflect the time-domain and frequency-domain characteristics of a spectral pixel simultaneously, and an FRFT order selection criterion is also [...] Read more.
To effectively utilize the rich spectral information of hyperspectral remote sensing images (HRSIs), the fractional Fourier transform (FRFT) feature of HRSIs is proposed to reflect the time-domain and frequency-domain characteristics of a spectral pixel simultaneously, and an FRFT order selection criterion based on maximizing separability is also proposed. Firstly, the FRFT is applied to the spectral pixels, and the amplitude spectrum is taken as the FRFT feature of HRSIs. The FRFT feature is combined with the pixel spectrum to form the proposed spectral and fractional Fourier transform mixed feature (SF2MF), which contains both the time-frequency information and the spectral information of the pixels. K-nearest neighbor, logistic regression, and random forest classifiers are used to verify the superiority of the proposed feature. A 1-dimensional convolutional neural network (1D-CNN) and a two-branch CNN (Two-CNNSF2MF-Spa) are designed to extract the deep SF2MF feature and the joint SF2MF-spatial feature, respectively. Moreover, to compensate for the CNN's limited ability to capture long-range features of spectral pixels, a long short-term memory (LSTM) network is combined with the CNN to form a two-branch network, C-CLSTMSF2MF, that extracts deeper and more efficient fusion features. A 3D-CNNSF2MF model is also designed, which first applies principal component analysis to the spa-SF2MF cube containing spatial information and then feeds the result into a 3-dimensional convolutional neural network (3D-CNNSF2MF) to effectively extract the joint SF2MF-spatial feature. Experimental results on three real HRSIs show that the proposed mixed feature SF2MF effectively improves classification accuracy. Full article
(This article belongs to the Section Remote Sensing Image Processing)
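The FRFT amplitude-spectrum feature above can be sketched with one common discrete FRFT construction: the a-th matrix power of the unitary DFT matrix, computed by eigendecomposition. The abstract does not specify which discrete FRFT definition the authors use, so both the construction and the function names here are illustrative assumptions.

```python
import numpy as np

def dfrft_matrix(n, a):
    """Discrete FRFT of order `a` as the a-th matrix power of the
    unitary DFT matrix (eigendecomposition method).
    a = 0 gives the identity; a = 1 gives the ordinary unitary DFT."""
    k = np.arange(n)
    # Unitary DFT matrix, same sign convention as np.fft.fft
    F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
    # Fractional power: F^a = V diag(w^a) V^{-1}
    w, V = np.linalg.eig(F)
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

def frft_feature(x, a):
    """Amplitude spectrum of the order-`a` FRFT of a spectral pixel,
    used as its time-frequency feature."""
    return np.abs(dfrft_matrix(len(x), a) @ x)

# Sanity checks on a toy "pixel": order 0 returns the signal's own
# magnitude, order 1 matches the unitary DFT magnitude.
x = np.cos(2 * np.pi * np.arange(16) / 16)
f0 = frft_feature(x, 0.0)
f1 = frft_feature(x, 1.0)
print(np.allclose(f0, np.abs(x), atol=1e-6))
print(np.allclose(f1, np.abs(np.fft.fft(x)) / 4, atol=1e-6))
```

An intermediate order such as a = 0.5 then yields a feature between the time and frequency domains; in the paper's pipeline this amplitude vector would be concatenated with the raw spectrum to form SF2MF before classification.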