Article

The Fusion of Focused Spectral and Image Texture Features: A New Exploration of the Nondestructive Detection of Degeneration Degree in Pleurotus geesteranus

by Yifan Jiang, Jin Shang, Yueyue Cai, Shiyang Liu, Ziqin Liao, Jie Pang, Yong He and Xuan Wei *

1 College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350108, China
2 College of Food Science, Fujian Agriculture and Forestry University, Fuzhou 350002, China
3 College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(14), 1546; https://doi.org/10.3390/agriculture15141546
Submission received: 9 June 2025 / Revised: 9 July 2025 / Accepted: 10 July 2025 / Published: 18 July 2025
(This article belongs to the Section Agricultural Product Quality and Safety)

Abstract

The degeneration of edible fungus strains reduces cultivation yield and causes economic losses. In this study, a nondestructive method for detecting strain degradation, based on the fusion of hyperspectral technology and image texture features, is presented. Hyperspectral and microscopic image data were acquired from Pleurotus geesteranus strains exhibiting varying degrees of degradation, followed by preprocessing using Savitzky–Golay (SG) smoothing, multivariate scattering correction (MSC), and standard normal variate (SNV) transformation. Spectral features were extracted by the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA), while texture features were derived using gray-level co-occurrence matrix (GLCM) and local binary pattern (LBP) models. The spectral and texture features were then fused and used to construct a classification model based on a convolutional neural network (CNN). The results showed that combining hyperspectral and image texture features significantly improved classification accuracy. Among the tested models, the CARS + LBP-CNN configuration achieved the best performance, with an overall accuracy of 95.6% and a kappa coefficient of 0.96. This approach provides a new technical solution for the nondestructive detection of strain degradation in P. geesteranus.

1. Introduction

Edible mushroom strains are cultivated food sources whose quality directly affects yield. During growth, strain degradation frequently occurs due to genetic variation and environmental changes. In artificial cultivation, degradation is almost inevitable: factors such as suboptimal growth conditions, the number of passages, strain aging, and poor operational practices may lead to contamination or strain decline. Typical signs of degradation include sparse and slow mycelial growth, extended growth cycles, reduced resistance to contaminants, low yield, inferior quality, and delayed fruiting, all of which adversely impact production and result in economic losses. Using degraded strains for propagation further compromises mushroom quality and exacerbates economic risk for producers [1,2]. Accurate assessment of the degradation level of P. geesteranus is therefore of considerable importance. Traditionally, such assessment has relied on morphological observation, molecular biology techniques, enzyme activity assays, and metabolite analysis [3]. However, these methods are often labor-intensive, technically demanding, and time-consuming, making them unsuitable for rapid, large-scale, and nondestructive screening. There is therefore a critical need for a rapid, large-scale, and nondestructive method to detect strain degradation in edible mushrooms.
The agri-food industry primarily employs three nondestructive testing approaches: machine vision, spectroscopic techniques, and electronic noses. Machine vision uses image analysis to detect surface defects and automate fruit grading [4,5,6]. Spectroscopic techniques, such as near-infrared spectroscopy, enable rapid compositional analysis, including grain moisture detection and protein content assessment in milk powder [7,8,9]. Electronic noses evaluate product freshness, such as that of coffee and seafood, by analyzing odor profiles [10]. These technologies are complementary, collectively forming an integrated inspection system that encompasses both surface attributes and internal quality and provides efficient, reliable technical support for ensuring agri-food quality and safety. Growing demand for novel detection methods is expected to open new technical pathways toward more intelligent and accurate quality assessment in the agri-food sector.
Hyperspectral imaging technology enables non-contact, nondestructive, real-time monitoring of the physical and chemical properties of samples by integrating spectral and spatial information. It has been widely applied in the nondestructive evaluation of crops and various food products, including meat, fruits, vegetables, and grains [11,12,13,14]. In studies of edible fungi, hyperspectral imaging has also been used to analyze the microstructure and moisture content of mushrooms, demonstrating promising performance [15,16,17]. For instance, Pu et al. [18] monitored Agaricus bisporus during vacuum cooling using visible–near-infrared (400–1000 nm) hyperspectral imaging to assess quality attributes such as moisture content, brightness, and firmness, and to visualize their spatial distribution throughout the process. Liu et al. [15] achieved rapid and nondestructive prediction of polysaccharide content during the growth of Ganoderma lucidum, offering valuable guidance for its cultivation. However, most of these studies focus on the fruiting body rather than the fungal strain itself, even though strains play a vital role in edible mushroom production. An effective method for assessing the health and degradation status of mycelia is therefore needed.
Microscopic hyperspectral imaging combines hyperspectral and microscopic technologies, capturing the spectral information and micro-morphological structure of samples rapidly, non-invasively, and without contact. It provides not only spectral and spatial information but also chemical information at the molecular or cellular level [19]. Park et al. [20] used hyperspectral microscopy to study the spatial and spectral characteristics of the blueberry rind microstructure and classified blueberries of different firmness levels with a test set accuracy of 85%. Wu et al. [21] used microscopic hyperspectral imaging to investigate the dynamic changes of peroxidase activity in tomato leaf cells under different concentrations of NaCl stress, with the prediction set showing a coefficient of determination of 0.66 and a root mean square error of 18.94 U/(g·min). These studies demonstrate the feasibility of microscopic hyperspectral imaging for classification and recognition tasks. By combining spectral information (reflecting internal sample properties) with image texture data (capturing external texture features), researchers can develop improved recognition models. It is therefore worth exploring the combination of spectral and image texture features to assess the degradation degree of P. geesteranus.
In our previous study [22], we employed basic spectral and image data to construct a support vector machine (SVM) classification model, which demonstrated the viability of micro-hyperspectral imaging for detecting the degradation of P. geesteranus. Building on these preliminary results, we increased the sample size and developed a feature fusion method that integrates spectral features corresponding to degradation parameters with image texture features, enabling a more comprehensive analysis of P. geesteranus spectral and morphological data. By exploring this combined approach, we aim to establish an optimized method for assessing the extent of degradation. The primary objective of this study is to develop a novel micro-hyperspectral imaging-based methodology for identifying the deterioration of edible mushroom strains.

2. Materials and Methods

2.1. Sample Preparation

In this study, the Xiu-57 strain of P. geesteranus, provided by the Fungus Research Center of Fujian Agriculture and Forestry University, was selected. The non-degraded strain was labeled CLASS 0 and its first subculture CLASS 1. CLASS 1 was then subcultured further, and the intermediate 8th generation and final 15th generation were labeled CLASS 2 and CLASS 3, respectively. These four classes served as samples for the subsequent experiments.
Strains CLASS 0, CLASS 1, CLASS 2, and CLASS 3 were used as test samples. The selected strains were inoculated onto culture dishes containing PDA medium for activation culture. An 8-mm-diameter block was then cut from the mycelial edge of each strain and inoculated into the center of a new culture dish, which was inverted and incubated at 25 °C for 6 days before data collection. The final dataset comprised 120 samples of each class (CLASS 0 through CLASS 3), for a total of 480 samples. Figure 1 shows representative images of the different degradation classes.

2.2. Spectral Acquisition and Pretreatment

The microscopic hyperspectral imaging system consists of a hyperspectral imaging system and a microscopic imaging device, as shown in Figure 2. The setup comprises an ordinary optical microscope, a halogen light source, an adjustable DC regulated power supply, and a push-broom portable hyperspectral imager (GaiaField Pro-V10E, Sichuan Dualix Spectral Imaging Technology Co., Ltd., Chengdu, China). The hyperspectral instrument integrates the imaging lens, CCD camera, and computer modules, and handles imaging, drive power, and motion control. The imaging lens (model HSIAOL23) has a focal length of 23 mm, and the area array detector is a progressive-scan CCD camera. The image resolution is 960 × 861 pixels, and the spectral range covers 400–1000 nm with a spectral resolution of 2.8 nm. The bottom halogen light source has a power of 20 W and is driven at a constant current of 1.5 A by the adjustable power supply. To obtain clear hyperspectral images, the P. geesteranus strains were photographed at 100× magnification (10× eyepiece, 10× objective), which provides the best balance between spatial resolution and field of view for the P. geesteranus structure. The camera exposure time was 15 ms and the gain was 3; the acquisition speed was adjusted according to the sample height and camera parameters, resulting in a scan speed of 0.06 cm/s.
A black-and-white correction was performed prior to acquisition to eliminate instrumental baseline drift and background interference, ensuring the quality of the acquired data. The correction formula is as follows:
$$R = \frac{I_{\mathrm{raw}} - I_{\mathrm{dark}}}{I_{\mathrm{ref}} - I_{\mathrm{dark}}}$$
where $R$ is the corrected sample image; $I_{\mathrm{raw}}$ is the raw sample image; $I_{\mathrm{dark}}$ is the dark-background correction image; and $I_{\mathrm{ref}}$ is the whiteboard correction image.
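For illustration, the pixel-wise correction can be sketched in a few lines of NumPy (the study's processing was done in ENVI and MATLAB; the function name and the division-by-zero guard here are our assumptions):

```python
import numpy as np

def black_white_correct(raw, dark, white):
    """Pixel-wise black-and-white correction of a hyperspectral cube.

    raw, dark, white: (rows, cols, bands) arrays; dark is captured with the
    light path blocked, white from the reference whiteboard.
    """
    raw, dark, white = (a.astype(np.float64) for a in (raw, dark, white))
    denom = white - dark
    denom[denom == 0] = np.finfo(np.float64).eps  # avoid division by zero
    return (raw - dark) / denom
```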
The region of interest (ROI) was created using the New Region of Interest tool of ENVI 5.3 software (Exelis Visual Information Solutions, Boulder, CO, USA), and the spectral transmittance of the pixels in each region was averaged to obtain the mean transmittance of the ROI. Three ROIs were selected per sample, and their average was used as the final spectrum of the sample, yielding 480 valid mean spectra in total. Before processing, the dataset was divided into subsets. To avoid bias caused by manual partitioning, this study used the Kennard–Stone (KS) algorithm, a widely established method for representative sample selection based on Euclidean or Mahalanobis distance metrics. The algorithm begins by identifying the two most spectrally distant samples as initial training points; subsequent samples are then iteratively selected by the maximal minimal distance to the existing training samples, ensuring good coverage of the spectral variability. In this study, KS partitioning yielded a 3:1 split, effectively capturing the dataset's variance structure while maintaining independence between the sets. As a result, 360 samples of micro-hyperspectral data were used for constructing the classification model, and the remaining 120 samples were reserved for final model evaluation.
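A minimal Python sketch of the KS split described above (the function name is ours; the original implementation was in MATLAB):

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone_split(X, n_train):
    """Kennard-Stone selection on Euclidean distances.

    X: (n_samples, n_features) spectra, e.g. the 480 mean ROI spectra;
    n_train=360 reproduces the paper's 3:1 split. Returns (train, test) indices.
    """
    dist = cdist(X, X)                                    # pairwise distances
    i, j = np.unravel_index(np.argmax(dist), dist.shape)  # two most distant samples
    train = [int(i), int(j)]
    remaining = set(range(len(X))) - set(train)
    while len(train) < n_train:
        rem = np.array(sorted(remaining))
        # distance of each candidate to its nearest already-selected sample
        min_d = dist[np.ix_(rem, train)].min(axis=1)
        nxt = int(rem[np.argmax(min_d)])                  # maximal minimal distance
        train.append(nxt)
        remaining.remove(nxt)
    return np.array(train), np.array(sorted(remaining))
```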
Spectral data are often affected by external factors such as temperature, humidity, and random noise from the camera or instrument, leading to spectral variations. Using unprocessed spectral data may compromise subsequent analyses [23]. To improve the signal-to-noise ratio and robustness of models built on hyperspectral data, preprocessing is essential to eliminate irrelevant information [24]. In this study, three preprocessing techniques were applied: Savitzky–Golay (SG) smoothing, multivariate scattering correction (MSC), and standard normal variate (SNV) transformation. SG smoothing enhances the signal-to-noise ratio by fitting a polynomial to the data within a moving window and performing weighted averaging, preserving effective signals while reducing noise [15,25]. MSC corrects spectral baseline shifts and drift by aligning each spectrum to the average reference spectrum [26,27]. SNV minimizes scattering effects caused by sample surface irregularities or particle size variations through normalization, making the spectra more representative of the sample's intrinsic characteristics [28,29,30].
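The three preprocessing steps can be sketched in Python as follows (the SG window length and polynomial order are illustrative assumptions, as the paper does not report them):

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_smooth(X, window=11, polyorder=2):
    """Savitzky-Golay smoothing along the wavelength axis (rows = spectra)."""
    return savgol_filter(X, window_length=window, polyorder=polyorder, axis=1)

def snv(X):
    """Standard normal variate: center and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def msc(X, reference=None):
    """Scatter correction: regress each spectrum on a reference and invert the fit."""
    ref = X.mean(axis=0) if reference is None else reference
    out = np.empty_like(X, dtype=np.float64)
    for k, spec in enumerate(X):
        slope, intercept = np.polyfit(ref, spec, deg=1)  # spec ~ slope*ref + intercept
        out[k] = (spec - intercept) / slope
    return out
```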

2.3. Chemometrics and Data Analysis

In this study, multivariate analysis was performed using The Unscrambler X 10.1 (CAMO Software, Oslo, Norway), while algorithm development was implemented in MATLAB R2020b (MathWorks, Natick, MA, USA).
The feature fusion method was used for the non-destructive detection of the degradation degree of Pleurotus geesteranus. First, spectral data analysis was carried out; after pre-processing the spectral data, three methods—the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA)—were used for feature wavelength screening. For microscopic image analysis, we used the gray level co-occurrence matrix (GLCM) and local binary pattern (LBP) for feature extraction. In the final stage of this study, the practical usefulness of spectral and image fusion for nondestructive detection of Pleurotus geesteranus degradation was demonstrated by building a classification model based on feature fusion. Figure 3 illustrates the main steps of the process.

2.4. Feature Selection

SPA is a forward feature selection method. Starting from a single wavelength, it performs multiple iterations to calculate the projection of selected wavelengths onto unselected wavelengths in each round. The algorithm automatically includes wavelengths with the largest projection vectors into the feature wavelength set, effectively reducing inter-wavelength covariance [31,32,33,34].
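A minimal sketch of the SPA selection loop (a full implementation would also scan over starting wavelengths and choose the subset size by validation RMSE, as done in Section 3.3; the function name is ours):

```python
import numpy as np

def spa_select(X, n_select, start=0):
    """Successive projections over the columns (wavelengths) of X.

    X: (n_samples, n_wavelengths). At each step, all columns are projected
    onto the orthogonal complement of the last selected wavelength, and the
    column with the largest remaining norm is added to the set.
    """
    Xp = X.astype(np.float64).copy()
    selected = [start]
    for _ in range(n_select - 1):
        ref = Xp[:, selected[-1]].copy()
        coef = (ref @ Xp) / (ref @ ref)        # projection coefficients
        Xp = Xp - np.outer(ref, coef)          # orthogonalize every column
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0                 # never re-select a wavelength
        selected.append(int(np.argmax(norms)))
    return np.array(sorted(selected))
```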
CARS is a feature wavelength extraction method based on Monte Carlo sampling and partial least squares (PLS) regression coefficients. Through adaptive reweighted sampling with an exponential decay function, wavelengths with large absolute values of PLS regression coefficients are selected, while variables with small weights are excluded. After several iterations of eliminating irrelevant variables, the wavelength points corresponding to the model with the smallest cross-validated root-mean-square error (RMSECV) are selected to form the optimal variable combination [35,36,37].
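A simplified CARS sketch under stated assumptions (the exponential decay schedule and ranking by absolute PLS coefficients follow the description above; the adaptive reweighted sampling of wavelengths is replaced by deterministic top-k selection, and class labels are treated as a numeric response for the PLS fit):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def cars_select(X, y, n_runs=50, n_components=5, seed=0):
    """Simplified CARS: shrink the band set along an exponential decay
    schedule, keeping bands with the largest |PLS regression coefficients|,
    and return the subset with the lowest RMSECV.

    X: (n_samples, n_bands); y: degradation grade encoded as a number.
    """
    rng = np.random.default_rng(seed)
    n_bands = X.shape[1]
    keep = np.arange(n_bands)
    ratios = np.exp(np.linspace(np.log(n_bands), np.log(2), n_runs)) / n_bands
    best_idx, best_rmsecv = keep.copy(), np.inf
    for r in ratios:
        rows = rng.choice(len(X), size=int(0.8 * len(X)), replace=False)  # Monte Carlo subset
        pls = PLSRegression(n_components=min(n_components, len(keep)))
        pls.fit(X[np.ix_(rows, keep)], y[rows])
        w = np.abs(pls.coef_).ravel()                    # |regression coefficients|
        keep = keep[np.argsort(w)[::-1][: max(2, int(r * n_bands))]]
        rmsecv = -cross_val_score(
            PLSRegression(n_components=min(n_components, len(keep))),
            X[:, keep], y, cv=5, scoring="neg_root_mean_squared_error",
        ).mean()
        if rmsecv < best_rmsecv:
            best_rmsecv, best_idx = rmsecv, keep.copy()
    return np.sort(best_idx)
```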
Hyperspectral images are characterized by three key challenges: large data volume, high dimensionality, and band redundancy, making dimensionality reduction essential. PCA addresses this by projecting the data onto a set of orthogonal principal components, preserving the most significant information while reducing dimensionality. The algorithm first calculates the covariance matrix to determine the weights and then generates the principal component images through linear combination [38]. This process effectively reduces variable correlations, extracts key features, and enables dimensionality reduction, anomaly detection, and classification, thereby simplifying the model and improving efficiency [39,40,41].
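With scikit-learn, selecting principal components by cumulative explained variance is straightforward (X_snv below is a random placeholder standing in for the 480 × 336 SNV-preprocessed spectral matrix; on the actual data this corresponded to 11 components, see Section 3.3):

```python
import numpy as np
from sklearn.decomposition import PCA

X_snv = np.random.rand(480, 336)  # placeholder for the SNV-preprocessed spectra
pca = PCA(n_components=0.99)      # keep components explaining 99% of variance
scores = pca.fit_transform(X_snv)
print(pca.n_components_, pca.explained_variance_ratio_.cumsum()[-1])
```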

2.5. Image Texture Feature Extraction

Texture analysis characterizes the intensity relationship between adjacent pixels, which is a fundamental aspect of image processing.
GLCM quantifies image texture features by statistically analyzing the frequency of different gray-level value pairs. As a second-order statistical method, GLCM examines spatial pixel relationships and is commonly employed to extract grayscale texture information [42,43,44]. In this study, GLCM was applied with a fixed pixel distance of 1 to compute four key texture parameters (contrast, correlation, energy, and entropy) across four directions (0°, 45°, 90°, and 135°). The average values of these parameters were then used as comprehensive texture descriptors for each image.
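A sketch of this per-image GLCM descriptor using scikit-image (entropy is not built into graycoprops, so it is computed from the normalized matrix; the quantization to 32 gray levels is our assumption):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, levels=32):
    """Mean contrast, correlation, energy, and entropy over four GLCM
    directions at pixel distance 1, as described above.

    gray: a single band image quantized to integers in [0, levels).
    """
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    glcm = graycomatrix(gray, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean() for p in ("contrast", "correlation", "energy")]
    p = glcm[:, :, 0, :]  # (levels, levels, n_angles) normalized co-occurrence
    entropy = -(p * np.log2(p, where=p > 0, out=np.zeros_like(p))).sum(axis=(0, 1)).mean()
    return np.array(feats + [entropy])  # 4 averaged values per band image
```

With 4 averaged parameters per band image and 20 band images per sample, this yields the 80 GLCM feature variables per sample used in Section 3.4.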
The LBP method effectively measures local image contrast. Owing to its computational efficiency and straightforward principle, LBP has been widely adopted for image retrieval and scene classification. The algorithm characterizes local image microstructure by comparing gray-level variations between each central pixel and its neighboring pixels within a given window, and it performs particularly well in identifying fine-scale local structures [45,46].
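A corresponding LBP sketch with scikit-image (binning the codes into a 64-bin histogram matches the 64-dimensional LBP vectors used in Section 3.4; the neighborhood parameters and LBP variant are assumptions, as the paper does not specify them):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1.0, n_bins=64):
    """Normalized LBP code histogram for one grayscale band image."""
    codes = local_binary_pattern(gray, n_points, radius, method="default")
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, 2 ** n_points))
    return hist / hist.sum()
```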

2.6. Graph Fusion Classification Model

Feature fusion combines different kinds of feature information into a single vector to obtain enhanced features and improve feature interpretability. Microscopic hyperspectral images have an inherent "spectral integration" property: each sample contains both spectral and texture features. Fusing them improves sample identification and classification accuracy, enabling effective evaluation of P. geesteranus degradation levels. However, fusion can introduce information redundancy, inflating dimensionality and increasing computational time [47]. To address this issue, we performed canonical correlation analysis on the fused feature data.
Canonical correlation analysis (CCA) associates multi-view data by learning projection matrices for each feature space using paired data samples. The method identifies shared latent subspaces from multi-view feature spaces to facilitate subsequent clustering and classification tasks [48,49]. This approach can reveal relationships between different modal variable sets while simultaneously evaluating two distinct variable sets, requiring no specific assumptions. Consequently, CCA is not only suitable for information fusion but also effectively eliminates feature redundancy [50]. Applying CCA-based feature fusion, we combine spectral and spatial image features to generate new composite vectors, yielding more comprehensive feature representations. The CCA-derived canonical vectors are then fused via sequential strategy to form final feature vectors. The detailed workflow is presented in Figure 4.
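A minimal sketch of this fusion with scikit-learn's CCA (the number of canonical components is an illustrative assumption; Section 3.5 additionally scales the texture features by 30× before fusion):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(spectral, texture, n_components=10):
    """Project paired spectral and texture features into a shared CCA
    subspace and concatenate the canonical variates sequentially."""
    cca = CCA(n_components=n_components)
    u, v = cca.fit_transform(spectral, texture)  # canonical variates per view
    return np.hstack([u, v])                     # sequential ("Concat") fusion

# e.g. fused = cca_fuse(X_cars, 30 * X_lbp)  # hypothetical arrays: (480, 20), (480, 64)
```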

2.7. Classification Model

Support vector machines (SVM) are a machine learning method based on statistical learning theory and the structural risk minimization principle [51]. They achieve classification and maximize the margin by finding the optimal hyperplane in the feature space to construct a robust classifier. The method uses kernel functions to map nonlinear problems to high-dimensional space, making it suitable for small sample problems.
K-Nearest neighbors (KNN) classifies data points by calculating distances or similarities between a point and other points in an n-dimensional space, then assigning the dominant class among its neighbors. The model training process is simple and efficient, operating on the principle of identifying the K most similar data points to the test sample in the training set. For classification tasks, it uses their majority vote as the prediction, while for regression tasks it uses their average value [52,53,54].
Convolutional neural networks (CNN) are widely used deep learning algorithms that play an important role in image recognition, target detection, and other fields [55,56,57]. CNN possess powerful representation learning capabilities, automatically tuning their parameters to extract useful features. By autonomously learning image features, they can mine latent features from the data, with performance scaling positively with training dataset size. CNN achieve high recognition accuracy and robustness, making them the mainstream framework for crop image recognition [58,59]. Their architecture is characterized by local receptive fields and shared-weight convolutional kernels, each feature layer being excited by local regions of the previous layer, which makes CNN better suited to learning and representing image features than other neural networks [60].
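The paper does not report its CNN architecture, so the following PyTorch sketch shows only one plausible minimal 1-D CNN for grading fused feature vectors into the four degradation classes (all layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class DegradationCNN(nn.Module):
    """Minimal 1-D CNN for 4-class degradation grading of feature vectors."""

    def __init__(self, n_features, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.fc = nn.Linear(32 * (n_features // 4), n_classes)

    def forward(self, x):              # x: (batch, n_features)
        x = self.conv(x.unsqueeze(1))  # -> (batch, 32, n_features // 4)
        return self.fc(x.flatten(1))   # class logits

model = DegradationCNN(n_features=40)  # e.g. 2 x 20 canonical variates after fusion
logits = model(torch.randn(8, 40))     # -> (8, 4)
```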

2.8. Evaluation Indicators

After completing the modeling process, evaluating the model’s classification predictions is essential to assess its performance. To evaluate the model’s classification effectiveness on experimental samples, this study selected accuracy and the Kappa coefficient as evaluation metrics to qualitatively assess the model.
Accuracy is the proportion of correctly predicted instances relative to the total number of samples. Model reliability increases with higher accuracy. The formula is as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP is the number of positive samples correctly classified as positive; TN is the number of negative samples correctly classified as negative; FP is the number of negative samples incorrectly classified as positive; and FN is the number of positive samples incorrectly classified as negative.
The Kappa coefficient is a statistical measure of inter-rater consistency, also used to evaluate multi-class classification accuracy. Its value ranges from −1 to 1, though practical applications typically use the 0 to 1 range. Higher values indicate better model classification accuracy [61]. The formula is as follows:
$$\mathrm{Kappa} = \frac{p_0 - p_e}{1 - p_e}$$
$$p_e = \frac{a_1 b_1 + a_2 b_2 + \cdots + a_C b_C}{n^2}$$
where $p_0$ is the overall classification accuracy; $a_i$ is the number of true samples in class $i$; $b_i$ is the number of samples predicted as class $i$; $C$ is the number of classes; and $n$ is the total number of samples.
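Both metrics are available in scikit-learn; the toy labels below are for illustration only:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # toy 4-class labels
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])

print(accuracy_score(y_true, y_pred))     # 0.75, i.e. 6 of 8 correct
print(cohen_kappa_score(y_true, y_pred))  # 0.667, per the Kappa formula above
```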

3. Results

3.1. Determination of Degradation Degree of P. geesteranus

During the hyphal growth and fruiting body development of edible fungi, extracellular enzymes play a key role in supporting physiological processes. Previous studies have shown a correlation between extracellular enzyme activity and strain degradation, suggesting that changes in enzyme activity serve as key indicators of fungal strain decline [3,62]. Based on these findings, we used laccase, carboxymethyl cellulase, and xylanase assay kits (Keming Biotechnology Co., Ltd., Suzhou, China) to measure the corresponding enzyme activities. By analyzing the physicochemical properties of the strains before and after degradation, we established an evaluation standard for grading the degradation level of P. geesteranus strains, providing a reliable basis for the accurate identification and classification of degraded strains in subsequent analyses.
The assay results are presented in Table 1, showing that the enzyme activities of P. geesteranus samples varied significantly (p < 0.05). As the number of subcultures increased, all three enzyme activities showed an overall downward trend. Xylanase activity decreased most markedly, followed by carboxymethyl cellulase, while laccase activity changed least. Moreover, enzyme activities decreased rapidly during the early subculture stages but then tended to stabilize in later stages. These trends suggest that changes in enzyme activity can serve as an effective indicator of P. geesteranus strain degradation, enabling differentiation among degradation levels.

3.2. Spectra of P. geesteranus Samples

After black-and-white correction of the acquired hyperspectral images of P. geesteranus strains, the average transmittance of pixels within the ROI was extracted as the original average spectrum of the sample. Figure 5 shows the raw spectral curves of four P. geesteranus samples with varying degrees of degradation. The spectral curves of strains with different degradation degrees are similar in shape across the visible/near-infrared bands. Between 400 and 500 nm, the spectral transmittance gradually increases, likely because intracellular pigments absorb light only weakly in this range, allowing more light to pass through the tissue. In the red region (650–740 nm), the transmittance peak near 700 nm is known as the "red peak"; this can be explained by the combined effects of limited pigment absorption in mushroom tissue and the lower energy and deeper penetration of red light. Beyond 970 nm, a marked decrease in spectral transmittance is observed, primarily due to strong absorption by O-H bond overtone vibrations of water molecules in the near-infrared region, which significantly reduces the transmitted light intensity.
Furthermore, significant differences in spectral transmittance are observed among P. geesteranus samples across different degradation degrees. As degradation progresses, transmittance generally shows a relative increase, with non-degraded strains exhibiting systematically lower transmittance curves compared to degraded P. geesteranus strains. This phenomenon may be attributed to structural changes in the mycelial network: during degradation, the mycelium becomes increasingly porous, reducing light scattering and thus enhancing transmittance. In contrast, non-degraded strains maintain an intact and dense mycelial structure, which enhances light scattering and leads to lower transmittance.
In the raw spectral data, the hyperspectral camera exhibited instrument noise in the 1000–1046 nm range; therefore, the 336 spectral bands within 400–1000 nm were retained for subsequent analysis. To reduce the effects of light scattering and baseline drift, SG, MSC, and SNV preprocessing were applied. The resulting spectral curves are shown in Figure 6. After SG processing, the curves became smoother, with an improved signal-to-noise ratio, particularly in the 700–1000 nm range, although dispersion persisted in the 400–700 nm range. MSC treatment effectively corrected baseline shifts and concentrated the spectral profiles within the 500–900 nm range while enhancing subtle spectral variations. SNV preprocessing normalized each spectrum by subtracting its mean and dividing by its standard deviation, resulting in more pronounced spectral features and improved peak-to-valley contrast.
The spectral data processed using different preprocessing methods (SG, MSC, and SNV) were input into SVM, KNN, and CNN classifiers for comparative evaluation. As shown in Table 2, all preprocessing methods improved classification performance compared to the original unprocessed data. Among them, SNV achieved the best results by effectively eliminating scattering effects and baseline drift while significantly enhancing signal-to-noise ratio and model accuracy. Among the full-spectrum classification models, CNN consistently outperformed SVM and KNN in detecting the degradation levels of P. geesteranus. The combination of SNV preprocessing with the CNN model achieved the highest classification accuracy at 88.5%. Based on these findings, the SNV-CNN approach was selected for use in subsequent studies of P. geesteranus strain degradation.

3.3. Feature Wavelength Extraction

After applying SNV preprocessing to the original spectral data, the SPA algorithm was used to extract key spectral features. At each iteration, the root mean square error (RMSE) of the model was calculated from the currently selected feature bands. During SPA wavelength selection, the RMSE was negatively correlated with the number of selected bands: as the number of bands increased, the RMSE generally decreased. Optimal performance was achieved with 15 feature bands, corresponding to the lowest RMSE value. These bands were located at 402.7, 411.1, 412.8, 419.5, 429.6, 441.5, 478.9, 660.2, 695.9, 704.9, 713.9, 726.5, 777.3, 817.5, and 970.5 nm.
The CARS algorithm was used to select characteristic wavelengths from the SNV-preprocessed spectral data, with the number of Monte Carlo sampling iterations (N) set to 50. The selection process is shown in Figure 7, which consists of three parts. The plot of the number of retained variables across iterations shows that all input bands were initially selected as characteristic variables; as sampling proceeded, the number of characteristic variables dropped sharply before stabilizing. Simultaneously, the Root Mean Square Error of Cross-Validation (RMSECV) decreased to a minimum before rising rapidly with further iterations. This indicates that during the initial reduction in feature variables, the algorithm primarily eliminated irrelevant variables, lowering the RMSECV; at this stage, the deviation between predicted and actual values in cross-validation was minimal, indicating high prediction accuracy. As iterations continued, however, relevant variables began to be removed and the RMSECV increased, reducing model accuracy. The minimum RMSECV was obtained after 28 sampling iterations on the SNV-preprocessed spectra. After CARS feature selection, the optimal feature subset contained 20 bands: 451.6, 487.4, 489.2, 518.4, 535.6, 537.4, 649.6, 651.3, 690.6, 737.3, 739.2, 771.8, 773.6, 775.5, 777.3, 782.8, 815.7, 817.5, 845.2, and 972.4 nm. Compared to the SPA algorithm, CARS extracted a larger number of informative feature bands.
The PCA algorithm was applied to extract spectral band variables from SNV-pretreated data. Analysis of the cumulative contribution rate revealed that the first 11 principal components accounted for over 99% of the total variance. The corresponding eigenvalues for these principal components were 207.67, 66.08, 23.77, 12.37, 7.38, 6.84, 3.77, 2.22, 1.45, 0.93, and 0.65.
The SNV-preprocessed data were modeled using four approaches: no feature extraction, and feature band extraction via the SPA, CARS, and PCA algorithms. As shown in Figure 8, the modeling accuracy of the feature-extracted SNV data consistently exceeded that of the full-band model. This demonstrates that although full-band spectra contain complete sample information, appropriate feature wavelength extraction can remove redundant data, retain informative features, reduce interference, and ultimately improve model performance. Notably, the SNV-CARS-CNN model delivered the best classification results, achieving a weighted overall accuracy of 92.9% when accounting for the training and test set proportions. These findings confirm the feasibility of spectral-based detection for assessing different degradation levels of P. geesteranus.

3.4. Image Feature Extraction at Characteristic Wavelengths

Since previous results demonstrated that the CARS algorithm provided optimal feature extraction following SNV pretreatment, the SNV-CARS method was employed to select gray images corresponding to 20 feature bands from 480 microscopic hyperspectral image samples for texture feature extraction and classification model development.
During GLCM processing of P. geesteranus sample images, feature variables from all 20 grayscale images per sample were consolidated, resulting in 80 representative feature variables for each sample. These 480 samples’ feature variables were then used as input for a CNN classifier to develop a P. geesteranus degradation detection model.
Furthermore, texture features were extracted from the grayscale images corresponding to the 20 SNV-CARS-selected feature bands using the local binary pattern (LBP) algorithm. Each sample was characterized by a 64-dimensional LBP feature vector. The feature vectors of all 480 samples were then used as input to a CNN classifier to construct the P. geesteranus degradation detection model.
Figure 9 presents the classification results of CNN models using different texture feature extraction methods. Under SNV preprocessing, the GLCM-based model achieved an accuracy of 89.4% with 51 misclassified samples. In comparison, the LBP-based model performed better, with 90.6% accuracy and 45 misclassified samples, a 1.2% improvement over the GLCM approach. These results indicate that LBP-extracted texture features represent the image information more effectively than GLCM features. This advantage likely stems from LBP's ability to analyze multiple neighboring pixels simultaneously rather than individual pixel pairs, maintaining effectiveness even with lower-quality images [63]. The LBP-based model therefore performs slightly better than the GLCM-based approach.
For each detection model, classification performance improved after applying texture feature extraction compared to using the original gray images. This shows that texture features effectively capture surface details, enhance the discriminative representation of the object, and help the model better understand the image content. Comparing the two texture extraction algorithms, the classification model built on LBP-extracted texture features achieved higher accuracy and better detection results. Among all models evaluated, the LBP-CNN model delivered the best classification performance, with an overall accuracy of 90.6%.

3.5. Graph Fusion Classification Model Building

Since spectral feature extraction had been shown to simplify the recognition model, and the LBP algorithm had proven effective for extracting texture features from P. geesteranus microscopic images, a feature-level fusion method was adopted. This method combined the spectral information of the 20 feature bands extracted by the SNV-CARS algorithm with the texture features obtained through LBP processing. The fused feature data were then used for modeling and classification.
The preceding analyses showed that the spectral feature values extracted by SNV-CARS ranged from −2 to 1, while the texture feature vectors from LBP ranged from 0 to 0.16. Direct fusion using CCA would therefore combine features with very different value ranges, degrading the fused feature curves and making it difficult to distinguish between degradation levels. To better integrate the spectral and texture features, this study applied a 30× scaling factor to the texture feature vectors before fusion. As shown in Figure 10, after this adjustment, the CCA-fused features combined via the "Concat" method showed clear differentiation, particularly in the first half of the feature curve, enabling effective discrimination of P. geesteranus degradation levels.
The classification results of models using different features are shown in Table 3. In single-spectral feature classification, the CARS-CNN model performed best, achieving an accuracy rate of 92.9%. When different texture feature extraction algorithms were applied to microscopic images in the feature wavelength bands prior to classification, the resulting texture-based classification accuracy exceeded that obtained using original microscopic images. For P. geesteranus microscopic images, the LBP algorithm demonstrated superior texture extraction performance compared to GLCM, achieving 90.6% accuracy, though this remained 2.3% lower than the spectrum-based model. Using feature fusion based on CCA, we combined spectral features extracted by CARS with texture features extracted by LBP, and input the fused features into CNN. The resulting detection model achieved high classification accuracy. Compared to models using either spectral or texture features alone, the fusion-based approach showed better classification performance, demonstrating that fused features possess stronger representational capacity than either feature type individually. The optimal classification model was the CARS + LBP-CNN using fused features, achieving an accuracy rate of 95.6% and a Kappa coefficient of 0.96. Relative to spectral features alone, the model achieved a 2.7% accuracy improvement, while showing a 5% increase compared to using only texture features.
The above results demonstrate that combining different feature types through spectral-texture fusion fully utilizes micro-hyperspectral imaging characteristics, achieving “Spectrum-Image Unification”. Incorporating texture features compensates for spectral-only identification limitations, reduces model dependence on single features, enhances feature interpretability, and improves the robustness of the model by providing more comprehensive feature information. In this study, we employed a CCA-based feature fusion method to combine spectral and texture features of different dimensions. This approach enhanced their mutual correlation while minimizing potential information loss during joint processing of spectral and texture data in unified dimensions.

4. Conclusions

In this study, microscopic hyperspectral imaging was combined with data fusion analysis to investigate the characteristics of P. geesteranus strains at different degradation levels and to establish a rapid detection model. The results show that the fusion-based model combining spectral and image texture features achieves superior classification accuracy compared to single-feature inputs: the optimal CARS + LBP-CNN model attained an overall accuracy of 95.6% (91.7% on the test set). These findings preliminarily reveal the spectral-image feature identification mechanism of P. geesteranus strain degradation. The proposed approach significantly reduces detection time and processing steps compared to conventional methods, while providing a new solution for the rapid diagnosis and mechanistic study of P. geesteranus strain degradation.
Regarding practical implementation, our analysis suggests that the technique shows promise for real-time applications in edible fungi cultivation. Although the current laboratory setup requires benchtop instrumentation, the computational workflow could potentially be adapted for deployment on edge devices with appropriate optimization. Key considerations for edge deployment include model simplification to maintain accuracy while reducing computational load, development of compact hyperspectral sensors, and optimization of preprocessing steps for resource-limited devices. Future work will focus on adapting the algorithm for embedded systems and evaluating performance with portable hyperspectral imagers, aiming to enable field applications in commercial cultivation settings.

Author Contributions

Conceptualization, Y.J. and X.W.; Data curation, J.P.; Formal analysis, Y.C. and J.P.; Funding acquisition, X.W.; Investigation, J.S. and Z.L.; Methodology, Y.J. and S.L.; Project administration, Y.H. and X.W.; Software, S.L.; Supervision, Y.H. and X.W.; Validation, J.S. and Y.C.; Visualization, Y.J. and Z.L.; Writing—original draft, Y.J.; Writing—review and editing, Y.J., J.S. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Outstanding Youth Program of Fujian Agriculture and Forestry University (grant number Kxjq21016), the Natural Science Foundation of Fujian Province (grant number 2021J01131), and the National Natural Science Foundation of China (grant number 61705037). The APC was funded by Zhejiang University.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors gratefully acknowledge the editors and reviewers for their constructive comments on our manuscript.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Farimani, F.G.; Farsi, M. Strain Degeneration in White Button Mushroom (Agaricus bisporus) Using Amplified Fragment Length Polymorphism. Jentashapir J. Cell. Mol. Biol. 2021, 12, e121638. [Google Scholar] [CrossRef]
  2. Wang, Q.; Wang, W.; Wang, Y.; Yun, J.; Zhang, Y.; Zhao, F. Exogenous MnSO4 Improves Productivity of Degenerated Volvariella volvacea by Regulating Antioxidant Activity. J. Fungi 2024, 10, 825. [Google Scholar] [CrossRef] [PubMed]
  3. Gan, Y.; Chen, J.; Lin, J.; Yan, J.; Yang, S. Research Progress on Degeneration Mechanisms and Conservation Methods of Edible Fungus Strains. J. Fungal Res. 2024, 22, 1–9. [Google Scholar] [CrossRef]
  4. Ismail, N.; Malik, O.A. Real-time visual inspection system for grading fruits using computer vision and deep learning techniques. Inf. Process. Agric. 2022, 9, 24–37. [Google Scholar] [CrossRef]
  5. Kondoyanni, M.; Loukatos, D.; Templalexis, C.; Lentzou, D.; Xanthopoulos, G.; Arvanitis, K.G. Computer Vision in Monitoring Fruit Browning: Neural Networks vs. Stochastic Modelling. Sensors 2025, 25, 2482. [Google Scholar] [CrossRef] [PubMed]
  6. Palumbo, M.; Cefola, M.; Pace, B.; Attolico, G.; Colelli, G. Computer vision system based on conventional imaging for non-destructively evaluating quality attributes in fresh and packaged fruit and vegetables. Postharvest Biol. Technol. 2023, 200, 112332. [Google Scholar] [CrossRef]
  7. Jiang, Y.; Zhang, D.; Yang, L.; Cui, T.; He, X.; Wu, D.; Dong, J.; Li, C.; Xing, S. Design and experiment of non-destructive testing system for moisture content of in-situ maize ear kernels based on VIS-NIR. J. Food Compos. Anal. 2024, 133, 106369. [Google Scholar] [CrossRef]
  8. Yang, M.-D.; Hsu, Y.-C.; Tseng, W.-C.; Tseng, H.-H.; Lai, M.-H. Precision assessment of rice grain moisture content using UAV multispectral imagery and machine learning. Comput. Electron. Agric. 2025, 230, 109813. [Google Scholar] [CrossRef]
  9. Tan, A.; Su, H.; Zhao, Y.; Zhao, J.; Mu, H.; Wang, H. Development of Raman and NIR spectral feature fusion based on MDS-CNN for detection on adulteration of camel milk powder. J. Food Compos. Anal. 2025, 146, 107959. [Google Scholar] [CrossRef]
  10. Girmatsion, M.; Tang, X.; Zhang, Q.; Li, P. Progress in machine learning-supported electronic nose and hyperspectral imaging technologies for food safety assessment: A review. Food Res. Int. 2025, 209, 116285. [Google Scholar] [CrossRef] [PubMed]
  11. Mustafa, G.; Zheng, H.; Liu, Y.; Yang, S.; Khan, I.H.; Hussain, S.; Liu, J.; Weize, W.; Chen, M.; Cheng, T.; et al. Leveraging machine learning to discriminate wheat scab infection levels through hyperspectral reflectance and feature selection methods. Eur. J. Agron. 2024, 161, 127372. [Google Scholar] [CrossRef]
  12. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  13. Saha, D.; Manickavasagan, A. Machine learning techniques for analysis of hyperspectral images to determine quality of food products: A review. Curr. Res. Food Sci. 2021, 4, 28–44. [Google Scholar] [CrossRef] [PubMed]
  14. Wieme, J.; Mollazade, K.; Malounas, I.; Zude-Sasse, M.; Zhao, M.; Gowen, A.; Argyropoulos, D.; Fountas, S.; Van Beek, J. Application of hyperspectral imaging systems and artificial intelligence for quality assessment of fruit, vegetables and mushrooms: A review. Biosyst. Eng. 2022, 222, 156–176. [Google Scholar] [CrossRef]
  15. Liu, Y.; Long, Y.; Liu, H.; Lan, Y.; Long, T.; Kuang, R.; Wang, Y.; Zhao, J. Polysaccharide prediction in Ganoderma lucidum fruiting body by hyperspectral imaging. Food Chem. X 2022, 13, 100199. [Google Scholar] [CrossRef]
  16. Xiao, K.; Liu, Q.; Wang, L.; Zhang, B.; Zhang, W.; Yang, W.; Hu, Q.; Pei, F. Prediction of soluble solid content of Agaricus bisporus during ultrasound-assisted osmotic dehydration based on hyperspectral imaging. LWT 2020, 122, 109030. [Google Scholar] [CrossRef]
  17. Wei, Z.; Liu, H.; Xu, J.; Li, Y.; Hu, J.; Tian, S. Quality grading method for Pleurotus eryngii during postharvest storage based on hyperspectral imaging and multiple quality indicators. Food Control 2024, 166, 110763. [Google Scholar] [CrossRef]
  18. Pu, H.; Yu, J.; Liu, Z.; Paliwal, J.; Sun, D.-W. Evaluation of the effects of vacuum cooling on moisture contents, colour and texture of mushroom (Agaricus bisporus) using hyperspectral imaging method. Microchem. J. 2023, 190, 108653. [Google Scholar] [CrossRef]
  19. Banu, K.S.; Lerma, M.; Ahmed, S.U.; Gardea-Torresdey, J.L. Hyperspectral microscopy-applications of hyperspectral imaging techniques in different fields of science: A review of recent advances. Appl. Spectrosc. Rev. 2024, 59, 935–958. [Google Scholar] [CrossRef]
  20. Park, B.; Shin, T.-S.; Cho, J.-S.; Lim, J.-H.; Park, K.-J. Characterizing hyperspectral microscope imagery for classification of blueberry firmness with deep learning methods. Agronomy 2021, 12, 85. [Google Scholar] [CrossRef]
  21. Wu, L.; Jiang, Q.; Zhang, Y.; Du, M.; Ma, L.; Ma, Y. Peroxidase activity in tomato leaf cells under salt stress based on micro-hyperspectral imaging technique. Horticulturae 2022, 8, 813. [Google Scholar] [CrossRef]
  22. Wei, X.; Liu, S.; Xie, C.; Fang, W.; Deng, C.; Wen, Z.; Ye, D.; Jie, D. Nondestructive detection of Pleurotus geesteranus strain degradation based on micro-hyperspectral imaging and machine learning. Front. Plant Sci. 2023, 14, 1260625. [Google Scholar] [CrossRef] [PubMed]
  23. Li, X.; Li, Z.; Yang, X.; He, Y. Boosting the generalization ability of Vis-NIR-spectroscopy-based regression models through dimension reduction and transfer learning. Comput. Electron. Agric. 2021, 186, 106157. [Google Scholar] [CrossRef]
  24. Barra, I.; Briak, H.; Kebede, F. The application of statistical preprocessing on spectral data does not always guarantee the improvement of the predictive quality of multivariate models: Case of soil spectroscopy applied to Moroccan soils. Vib. Spectrosc. 2022, 121, 103409. [Google Scholar] [CrossRef]
  25. Schafer, R.W. What Is a Savitzky-Golay Filter? [Lecture Notes]. IEEE Signal Process. Mag. 2011, 28, 111–117. [Google Scholar] [CrossRef]
  26. Mishra, P.; Verkleij, T.; Klont, R. Improved prediction of minced pork meat chemical properties with near-infrared spectroscopy by a fusion of scatter-correction techniques. Infrared Phys. Technol. 2021, 113, 103643. [Google Scholar] [CrossRef]
  27. Karstang, T.V.; Manne, R. Optimized scaling: A novel approach to linear calibration with closed data sets. Chemom. Intell. Lab. Syst. 1992, 14, 165–173. [Google Scholar] [CrossRef]
  28. Kim, E.; Park, J.-J.; Lee, G.; Cho, J.-S.; Park, S.-K.; Yun, D.-Y.; Park, K.-J.; Lim, J.-H. Innovative strategies for protein content determination in dried laver (Porphyra spp.): Evaluation of preprocessing methods and machine learning algorithms through short-wave infrared imaging. Food Chem. X 2024, 23, 101763. [Google Scholar] [CrossRef] [PubMed]
  29. Barnes, R.J.; Dhanoa, M.S.; Lister, S.J. Standard Normal Variate Transformation and De-Trending of Near-Infrared Diffuse Reflectance Spectra. Appl. Spectrosc. 1989, 43, 772–777. [Google Scholar] [CrossRef]
  30. Yan, C. A review on spectral data preprocessing techniques for machine learning and quantitative analysis. iScience 2025, 28, 112759. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, J.; Cheng, T.; Guo, W.; Xu, X.; Qiao, H.; Xie, Y.; Ma, X. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef] [PubMed]
  32. Kamruzzaman, M.; Kalita, D.; Ahmed, M.T.; ElMasry, G.; Makino, Y. Effect of variable selection algorithms on model performance for predicting moisture content in biological materials using spectral data. Anal. Chim. Acta 2022, 1202, 339390. [Google Scholar] [CrossRef] [PubMed]
  33. Milanez, K.D.T.M.; Araújo Nóbrega, T.C.; Silva Nascimento, D.; Galvão, R.K.H.; Pontes, M.J.C. Selection of robust variables for transfer of classification models employing the successive projections algorithm. Anal. Chim. Acta 2017, 984, 76–85. [Google Scholar] [CrossRef] [PubMed]
  34. Araújo, M.C.U.; Saldanha, T.C.B.; Galvão, R.K.H.; Yoneyama, T.; Chame, H.C.; Visani, V. The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. Chemom. Intell. Lab. Syst. 2001, 57, 65–73. [Google Scholar] [CrossRef]
  35. Ye, R.; Chen, Y.; Guo, Y.; Duan, Q.; Li, D.; Liu, C. NIR hyperspectral imaging technology combined with multivariate methods to identify shrimp freshness. Appl. Sci. 2020, 10, 5498. [Google Scholar] [CrossRef]
  36. Li, H.; Liang, Y.; Xu, Q.; Cao, D. Key wavelengths screening using competitive adaptive reweighted sampling method for multivariate calibration. Anal. Chim. Acta 2009, 648, 77–84. [Google Scholar] [CrossRef] [PubMed]
  37. Li, Y.; Yang, X. Quantitative analysis of near infrared spectroscopic data based on dual-band transformation and competitive adaptive reweighted sampling. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2023, 285, 121924. [Google Scholar] [CrossRef] [PubMed]
  38. Hu, Y.; Xu, L.; Huang, P.; Luo, X.; Wang, P.; Kang, Z. Reliable identification of oolong tea species: Nondestructive testing classification based on fluorescence hyperspectral technology and machine learning. Agriculture 2021, 11, 1106. [Google Scholar] [CrossRef]
  39. Chen, X.; Zhou, K.; Liu, Y.; Du, H.; Wang, D.; Liu, S.; Liu, S.; Li, J.; Zhao, L. A simplified hyperspectral identification system based on mathematical Transformation: An example of Cordyceps sinensis geographical origins. Microchem. J. 2024, 205, 111191. [Google Scholar] [CrossRef]
  40. Choi, J.-Y.; Lee, M.; Lee, D.U.; Choi, J.H.; Lee, M.-A.; Min, S.G.; Park, S.H. Non-destructive monitoring of qualitative properties of salted cabbage using hyperspectral image analysis. LWT 2024, 203, 116329. [Google Scholar] [CrossRef]
  41. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  42. Nyasulu, C.; Diattara, A.; Traore, A.; Ba, C.; Diedhiou, P.M.; Sy, Y.; Raki, H.; Peluffo-Ordóñez, D.H. A comparative study of Machine Learning-based classification of Tomato fungal diseases: Application of GLCM texture features. Heliyon 2023, 9, e21697. [Google Scholar] [CrossRef] [PubMed]
  43. Iqbal, N.; Mumtaz, R.; Shafi, U.; Zaidi, S.M.H. Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms. PeerJ Comput. Sci. 2021, 7, e536. [Google Scholar] [CrossRef] [PubMed]
  44. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  45. Tang, X.; Yang, Y.; Huang, L.; Qiu, L. The application of texture feature analysis of rectus femoris based on local binary pattern (LBP) combined with gray-level co-occurrence matrix (GLCM) in sarcopenia. J. Ultrasound Med. 2022, 41, 2169–2179. [Google Scholar] [CrossRef] [PubMed]
  46. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  47. Adeel, A.; Khan, M.A.; Sharif, M.; Azam, F.; Shah, J.H.; Umer, T.; Wan, S. Diagnosis and recognition of grape leaf diseases: An automated system based on a novel saliency approach and canonical correlation analysis based multiple features fusion. Sustain. Comput. Inform. Syst. 2019, 24, 100349. [Google Scholar] [CrossRef]
  48. Zhuang, X.; Yang, Z.; Cordes, D. A technical review of canonical correlation analysis for neuroscience applications. Hum. Brain Mapp. 2020, 41, 3807–3833. [Google Scholar] [CrossRef] [PubMed]
  49. Wang, H.-T.; Smallwood, J.; Mourao-Miranda, J.; Xia, C.H.; Satterthwaite, T.D.; Bassett, D.S.; Bzdok, D. Finding the needle in a high-dimensional haystack: Canonical correlation analysis for neuroscientists. NeuroImage 2020, 216, 116745. [Google Scholar] [CrossRef] [PubMed]
  50. Wang, Z.; Wang, L.; Huang, H. Sparse additive discriminant canonical correlation analysis for multiple features fusion. Neurocomputing 2021, 463, 185–197. [Google Scholar] [CrossRef]
  51. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  52. Halder, R.K.; Uddin, M.N.; Uddin, M.A.; Aryal, S.; Khraisat, A. Enhancing K-nearest neighbor algorithm: A comprehensive review and performance analysis of modifications. J. Big Data 2024, 11, 113. [Google Scholar] [CrossRef]
  53. Boateng, E.Y.; Otoo, J.; Abaye, D.A. Basic tenets of classification algorithms K-nearest-neighbor, support vector machine, random forest and neural network: A review. J. Data Anal. Inf. Process. 2020, 8, 341–357. [Google Scholar] [CrossRef]
  54. Derrac, J.; Chiclana, F.; García, S.; Herrera, F. An interval valued k-nearest neighbors classifier. In Proceedings of the 2015 Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT-15), Gijón, Spain, 30 June–3 July 2015; pp. 378–384. [Google Scholar] [CrossRef]
  55. Ameslek, O.; Zahir, H.; Latifi, H.; Bachaoui, E.M. Combining OBIA, CNN, and UAV imagery for automated detection and mapping of individual olive trees. Smart Agric. Technol. 2024, 9, 100546. [Google Scholar] [CrossRef]
  56. Yang, J.; Xu, J.; Zhang, X.; Wu, C.; Lin, T.; Ying, Y. Deep learning for vibrational spectral analysis: Recent progress and a practical guide. Anal. Chim. Acta 2019, 1081, 6–17. [Google Scholar] [CrossRef] [PubMed]
  57. Blanco, M.; Coello, J.; Iturriaga, H.; Maspoch, S.; Pages, J. NIR calibration in non-linear systems: Different PLS approaches and artificial neural networks. Chemom. Intell. Lab. Syst. 2000, 50, 75–82. [Google Scholar] [CrossRef]
  58. Lu, J.; Tan, L.; Jiang, H. Review on convolutional neural network (CNN) applied to plant leaf disease classification. Agriculture 2021, 11, 707. [Google Scholar] [CrossRef]
  59. Trivedi, N.K.; Gautam, V.; Anand, A.; Aljahdali, H.M.; Villar, S.G.; Anand, D.; Goyal, N.; Kadry, S. Early detection and classification of tomato leaf disease using high-performance deep neural network. Sensors 2021, 21, 7987. [Google Scholar] [CrossRef] [PubMed]
  60. Li, Y.; Hao, Z.; Lei, H. Survey of convolutional neural network. J. Comput. Appl. 2016, 36, 6999–7019. [Google Scholar] [CrossRef]
  61. Li, M.; Wang, Z.; Cheng, T.; Zhu, Y.; Li, J.; Quan, L.; Song, Y. Phenotyping terminal heat stress tolerance in wheat using UAV multispectral imagery to monitor leaf stay-greenness. Smart Agric. Technol. 2025, 11, 100996. [Google Scholar] [CrossRef]
  62. Tian, T.; Yao, L.; Fan, D.; Li, K.; Li, C. Physiological and biochemical characteristics of degenerate mycelium of Flammulina filiformis. Microbiol. China 2021, 10, 3603–3611. [Google Scholar] [CrossRef]
  63. Peuna, A.; Thevenot, J.; Saarakkala, S.; Nieminen, M.; Lammentausta, E. Machine learning classification on texture analyzed T2 maps of osteoarthritic cartilage: Oulu knee osteoarthritis study. Osteoarthr. Cartil. 2021, 29, 859–869. [Google Scholar] [CrossRef] [PubMed]
Figure 1. RGB images of all degradation classes: (a) CLASS 0, (b) CLASS 1, (c) CLASS 2, (d) CLASS 3.
Figure 2. Microscopic hyperspectral imaging system: 1. hyperspectral imaging camera; 2. eyepiece; 3. objective lens; 4. sample; 5. stage; 6. light source; 7. microscope; 8. computer; 9. adjustable power supply.
Figure 3. Flowchart of micro-hyperspectral data analysis for the detection of degradation of P. geesteranus. The upper right corner shows micro-hyperspectral images obtained for each degradation class: (a) CLASS 0, (b) CLASS 1, (c) CLASS 2, (d) CLASS 3.
Figure 4. Flowchart of feature fusion.
Figure 5. Original average spectrum of P. geesteranus.
Figure 6. Spectral curves of strain samples with different degrees of degradation: (a) original spectra, (b) SG treatment, (c) MSC treatment, (d) SNV treatment.
Figure 7. Characteristic band selection of standard normal variate transformation (SNV) preprocessed spectrum by competitive adaptive reweighted sampling (CARS): (a) number of sampled variables, (b) Root Mean Square Error of Cross-Validation (RMSECV) variation, (c) regression coefficient path.
Figure 8. Convolutional neural network (CNN) classification results of different spectral feature extraction methods based on SNV preprocessing.
Figure 9. CNN model classification results under different texture feature extraction methods: (a) classification results based on the gray-level co-occurrence matrix (GLCM), (b) classification results based on the local binary pattern (LBP).
Figure 10. Feature fusion vectors of strains at different degradation levels: (a) direct feature fusion, (b) feature fusion after 30× scaling of the texture features.
Table 1. Extracellular enzyme activity of P. geesteranus.

| Strain Name | Laccase (U/L) | Carboxymethyl Cellulase (U/L) | Xylanase (U/L) |
|---|---|---|---|
| CLASS 0 | 116.74 a | 947.75 a | 1278.90 a |
| CLASS 1 | 120.34 b | 686.60 b | 803.48 b |
| CLASS 2 | 93.98 c | 479.90 c | 564.05 c |
| CLASS 3 | 78.68 d | 350.18 d | 458.20 d |

Different letters (a–d) within a column indicate significant differences at the 0.05 level.
Table 2. Classification results of different models based on full-band spectra.

| Classification Model | Preprocessing Method | Training Set Accuracy (%) | Test Set Accuracy (%) | Overall Accuracy (%) | Kappa |
|---|---|---|---|---|---|
| SVM | NONE | 84.2 | 79.2 | 82.9 | 0.79 |
| SVM | SG | 84.7 | 80.0 | 83.5 | 0.79 |
| SVM | MSC | 86.7 | 80.0 | 85.0 | 0.80 |
| SVM | SNV | 89.2 | 78.3 | 86.5 | 0.82 |
| KNN | NONE | 75.0 | 70.0 | 73.8 | 0.65 |
| KNN | SG | 75.3 | 71.7 | 74.4 | 0.66 |
| KNN | MSC | 75.6 | 72.5 | 74.8 | 0.66 |
| KNN | SNV | 78.3 | 73.3 | 77.1 | 0.69 |
| CNN | NONE | 87.2 | 80.0 | 85.4 | 0.81 |
| CNN | SG | 87.8 | 84.2 | 86.9 | 0.82 |
| CNN | MSC | 88.1 | 85.8 | 87.5 | 0.83 |
| **CNN** | **SNV** | **89.7** | **85.0** | **88.5** | **0.85** |

The most effective combination is in bold.
Table 3. Classification results of different dimensional data.

| Data Type | Classification Model | Training Set Accuracy (%) | Test Set Accuracy (%) | Overall Accuracy (%) | Kappa |
|---|---|---|---|---|---|
| Spectrum | NONE-CNN | 89.7 | 85.0 | 88.5 | 0.85 |
| Spectrum | SPA-CNN | 90.1 | 87.5 | 89.6 | 0.86 |
| Spectrum | CARS-CNN | 93.1 | 92.5 | 92.9 | 0.91 |
| Spectrum | PCA-CNN | 93.3 | 89.2 | 92.3 | 0.90 |
| Image | NONE-CNN | 88.9 | 85.0 | 87.9 | 0.84 |
| Image | GLCM-CNN | 90.6 | 85.8 | 89.4 | 0.86 |
| Image | LBP-CNN | 91.1 | 89.2 | 90.6 | 0.87 |
| **Spectrum + image** | **CARS + LBP-CNN** | **96.9** | **91.7** | **95.6** | **0.96** |

The most effective combination is in bold.
