Article

A Novel Approach to Pine Nut Classification: Combining Near-Infrared Spectroscopy and Image Shape Features with Soft Voting-Based Ensemble Learning

by Yueyun Yu 1,2, Xin Huang 2, Danjv Lv 2, Benjamin K. Ng 1,* and Chan-Tong Lam 1

1 Faculty of Applied Sciences, Macao Polytechnic University, Macao, China
2 College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 2009; https://doi.org/10.3390/math13122009
Submission received: 14 May 2025 / Revised: 28 May 2025 / Accepted: 17 June 2025 / Published: 18 June 2025
(This article belongs to the Special Issue Mathematical Modelling in Agriculture)

Abstract

Pine nuts hold significant economic value due to their rich plant protein and healthy fats, yet precise variety classification has long been hindered by limitations of traditional techniques such as chemical analysis and machine vision. This study proposes a novel near-infrared (NIR) spectral feature selection algorithm, termed the improved binary equilibrium optimizer with selection probability (IBiEO-SP), which incorporates a dynamic probability adjustment mechanism to achieve efficient feature dimensionality reduction. Experimental validation on a dataset comprising seven pine nut varieties demonstrated that, compared to particle swarm optimization (PSO) and the genetic algorithm (GA), the IBiEO-SP algorithm improved average classification accuracy by 5.7% (p < 0.01, Student’s t-test) under four spectral preprocessing methods (MSC, SNV, SG1, and SG2). Remarkably, only 2–3 features were required to achieve optimal performance (MSC + random forest: 99.05% accuracy, 100% F1/precision; SNV + KNN: 97.14% accuracy, 100% F1/precision). Furthermore, a multimodal data synergy strategy integrating NIR spectroscopy with morphological features was proposed, and a classification model was constructed using a soft voting ensemble. The final classification accuracy reached 99.95%, representing a 2.9% improvement over single-spectral-mode analysis. The results indicate that the IBiEO-SP algorithm effectively balances feature discriminative power and model generalization needs, overcoming the contradiction between high-dimensional data redundancy and low-dimensional information loss. This work provides a high-precision, low-complexity solution for rapid quality detection of pine nuts, with broad implications for agricultural product inspection and food safety.

1. Introduction

Pine nuts, valued for their nutritional and economic significance, exhibit substantial variations in composition and physical characteristics due to differences in region, species, and growing conditions [1]. These variations necessitate precise and robust classification methods to ensure quality consistency and meet market demands. However, traditional classification approaches, such as chemical analysis and manual feature extraction, are often time-consuming, costly, and impractical for large-scale industrial application [2,3]. Moreover, visual-based traditional image processing methods, including shape, color histogram, and texture analysis, although computationally efficient, suffer from several limitations in complex agricultural scenarios.
By integrating spectral and visual modalities, the proposed approach aims to overcome core technical challenges in pine nut classification—such as minimal inter-species spectral variation, morphological similarity, and noise from variable imaging conditions. This work not only advances classification techniques for pine nuts but also contributes broadly to the application of swarm optimization and multimodal fusion in precision agriculture and food quality monitoring.
Near-infrared (NIR) spectroscopy, a non-destructive analytical method, has emerged as a powerful tool for food classification due to its ability to rapidly and accurately analyze chemical components [4,5]. However, the effectiveness of NIR spectroscopy relies heavily on feature selection, which is critical for distinguishing between different nut types and improving classification model performance [6]. Recent studies have demonstrated the potential of swarm intelligence optimization algorithms in addressing the challenges of high-dimensional data and enhancing feature selection efficiency [4]. Kong et al. proposed a particle swarm optimization (PSO)-based framework to automate and optimize near-infrared spectroscopy (NIRS) modeling; by effectively tuning preprocessing and variable selection parameters, it significantly improves the predictive performance of partial least squares (PLS) [7].
In addition to spectral data, visual features play a crucial role in food classification, particularly for products like pine nuts, where appearance significantly influences consumer preference [8]. Traditional image processing methods, while computationally efficient, often fail to capture complex patterns, whereas deep learning approaches require extensive annotated datasets and computational resources [9]. To bridge this gap, researchers have explored hybrid methods that combine the strengths of traditional and deep learning techniques, leveraging multimodal data fusion to improve classification accuracy [10,11].
This research presents a pine nut classification method combining NIR spectroscopy and visual features, optimized via the IBiEO-SP algorithm to enhance discriminative performance. The contributions of this research are threefold: (1) the development of IBiEO-SP, a swarm intelligence-based optimization algorithm, for efficient feature selection in NIR spectroscopy data; (2) the extraction of seven traditional image features to complement spectral data, addressing the limitations of single-modality classification; and (3) the construction of an ensemble learning model using decision tree, random forest, support vector machine, and k-nearest neighbors classifiers, with a soft voting mechanism for final classification. By combining spectral and visual features, this approach aims to overcome the challenges of traditional machine vision systems, reducing illumination sensitivity through NIR-assisted color normalization, resolving shape ambiguities via spectral-confirmed chemical markers, and suppressing background noise using optimized feature subspaces, thereby improving classification accuracy and reliability.

2. Related Work

(1)
Application of Near-Infrared Spectroscopy in Agricultural Classification
Near-infrared spectroscopy (NIRS) has become a widely adopted non-destructive technique for agricultural product classification due to its sensitivity to chemical composition. In pine nut research, Loewe et al. employed a Foss NIRSystems 6500 SYII spectrometer to collect raw spectral data of Mediterranean pine nuts grown in Chile and built a discriminant partial least squares (DPLS) model to determine their geographical origin, with prediction errors ranging from 9.2% to 12.2% [12]. Moscetti et al. applied partial least squares discriminant analysis (PLS-DA) on physical properties derived from both NIR and image analysis to differentiate pine nuts based on growing conditions and provenance [11]. Ríos-Reina et al. analyzed 63 pine nut samples from different commercial sources using pixel-based and nut-based classification, achieving 89–98% and 84–100% accuracy, respectively [13]. Huang et al. used machine learning and deep learning models to classify seven pine nut varieties, validating the feasibility of NIRS-based classification with high efficiency and precision [10].
(2)
Image Feature Analysis and Multimodal Data Fusion
Although both near-infrared spectroscopy and machine vision have shown strong potential in classifying pine nut quality, current studies tend to treat the two modalities in isolation. Few efforts have integrated spectral and morphological features for joint classification. This separation limits the full exploitation of complementary information embedded in different data sources. Fusion-based approaches can potentially enhance classification robustness and interpretability by capturing both chemical and structural characteristics. However, effective fusion strategies—especially those that combine multi-scale spectral features with image-based texture and shape descriptors—remain underexplored in pine nut classification tasks.
(3)
Progress in Optimization Algorithms for Spectral Feature Selection
Feature selection plays a crucial role in boosting classification performance and reducing computational burden when dealing with high-dimensional NIR data. Numerous intelligent optimization algorithms have been proposed to identify informative spectral regions. Allegrini applied the ant colony optimization (ACO) algorithm combined with partial least squares (PLS) regression to improve wavelength selection and concentration prediction accuracy [14]. Goodarzi integrated the firefly algorithm with PLS and showed that fewer wavelengths were needed without compromising model performance [15]. Yun introduced a hybrid variable selection strategy based on variable combination population analysis (VCPA), incorporating genetic algorithms (GA) and iterative retention of informative variables (IRIV), demonstrating superior performance on high-dimensional datasets [16]. Ren tested GA, SPA, CARS, and the shuffled frog-leaping algorithm (SFLA) to select spectral features for Qimen black tea, pairing them with classifiers such as LSSVM, BPNN, and RF [17]. Bian explored a discrete whale optimization algorithm (WOA), which effectively filtered redundant variables in mixed vegetable oils and improved prediction accuracy when combined with continuous wavelet transform (CWT) and PLS [18].
Despite these advances, critical research gaps remain: (1) the equilibrium optimizer (EO), a recent metaheuristic, has not yet been tested for multimodal agricultural data, particularly in scenarios combining NIR and image features; (2) current ensemble learning approaches lack effective mechanisms to jointly model morphological and spectral variables; and (3) high-accuracy, fully automated pine nut classification frameworks remain absent in the existing literature.
To highlight the distinction between prior work and our method, Table 1 summarizes representative algorithms and their accuracy/computational performance. Compared to traditional pipelines (e.g., NIR + CNN or SVM with hand-crafted features), our approach, which combines IBiEO-SP-based multimodal feature selection with ensemble learning, demonstrates marked improvements in both classification accuracy and processing efficiency.

3. Materials and Methods

3.1. Pine Nut Samples

Pine nut samples include seven species: Pinus bungeana, Pinus armandii, Pinus yunnanensis, Pinus thunbergii, Pinus massoniana, Pinus elliottii, and Pinus taiwanensis. All samples were obtained from the Kunming Institute of Botany, Chinese Academy of Sciences, and the Forest Nursery Station of Yunnan Province in China.

3.2. Near-Infrared Spectral Data Acquisition

The NIR spectra were obtained using an Antaris Fourier Transform NIR spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). The spectrometer is equipped with an InGaAs detector with a diffuse integrating sphere, a 7.78 cm quartz sample cup, and a sample turntable. The spectral range spans 12,800 cm⁻¹ to 3800 cm⁻¹, with a resolution of 8 cm⁻¹. Each sample was scanned 48 times, yielding 210 spectra with a dimensionality of 2235. Figure 1 shows the original near-infrared spectral data of the seven types of pine nut samples, with a sampling interval of 8 nm and a wavelength range of approximately 781 nm to 2632 nm.
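For reference, wavenumber and wavelength are related by

$$\lambda\,(\mathrm{nm}) = \frac{10^{7}}{\tilde{\nu}\,(\mathrm{cm}^{-1})}, \qquad \frac{10^{7}}{12{,}800} \approx 781\ \mathrm{nm}, \qquad \frac{10^{7}}{3800} \approx 2632\ \mathrm{nm}$$

so the stated wavenumber range is consistent with the wavelength range quoted for Figure 1.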
It is difficult to differentiate pine nut samples of different quality grades based on the raw spectral curves. Additionally, due to random noise and influences from the measurement environment during the collection process, preprocessing of the raw spectra is necessary to improve the accuracy of data analysis.

3.3. Near-Infrared Spectral Information Analysis

Figure 2 shows one randomly selected spectral curve from the original near-infrared spectra of each type of pine nut sample, serving as a comparison. It can be observed that the spectral trends of the seven types of pine nuts are generally similar, and the positions of the absorption peaks and troughs largely coincide.
Table 1 provides the assignments of various hydrogen-containing functional groups in the near-infrared spectral region [19]. Therefore, it can be inferred that the peak around 830 nm in Figure 2 corresponds to the third overtone absorption of C-H stretching vibration. The peak near 1200 nm corresponds to the second overtone absorption of C-H stretching vibration. The weak peak around 1780 nm corresponds to the first overtone absorption of C-H stretching vibration. The prominent peak near 1450 nm corresponds to the combination overtone absorption of C-H stretching and bending vibrations. The combination overtone feature absorption band of N-H stretching vibration is observed around 1670 nm as a weak peak. The second overtone feature absorption of O-H stretching vibration in water molecules is near 1000 nm, while the combination overtone feature absorption of O-H stretching vibration in water molecules is observed near 1450 nm.

3.4. Preprocessing of Near-Infrared Spectral Data

In this study, various preprocessing methods were applied to the raw spectral data of the pine nut samples, including multiplicative scatter correction (MSC), standard normal variate (SNV) correction, first derivative (1-Der), and second derivative (2-Der). Classification models were then established using the original data and each preprocessing method. For the derivative preprocessing methods, an optimal derivative window width must be selected; after multiple experiments, a window width of 30 was found to give the best results. The preprocessed spectra, shown in Figure 3, demonstrate that all four methods effectively reduced noise interference in the spectral curves.
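For concreteness, the following is a minimal Python sketch of the four preprocessing steps under stated assumptions: array shapes and names are illustrative, and the Savitzky–Golay window is set to 31 because SciPy requires an odd window length, whereas the paper reports a width of 30.

```python
import numpy as np
from scipy.signal import savgol_filter

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean
    spectrum, then remove the fitted offset and slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, deg=1)
        corrected[i] = (s - intercept) / slope
    return corrected

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def sg_derivative(spectra, deriv, window=31, polyorder=2):
    """Savitzky-Golay first/second derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window,
                         polyorder=polyorder, deriv=deriv, axis=1)

# spectra: (n_samples, n_wavelengths) array of raw absorbances
# X_msc, X_snv = msc(spectra), snv(spectra)
# X_sg1, X_sg2 = sg_derivative(spectra, 1), sg_derivative(spectra, 2)
```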
In the preprocessed spectra of MSC and SNV, it can be observed that in the range of 820 nm–1100 nm, Pinus yunnanensis has the highest absorbance, while Pinus bungeana has the lowest absorbance. In the ranges of 1030 nm–1880 nm and 2000 nm–2400 nm, Pinus bungeana has the highest absorbance, while in the vicinity of 1400 nm–1840 nm, Pinus yunnanensis has the lowest absorbance. Around 1400 nm and 1900 nm, the absorbance of the seven types of pine nuts is similar. In the preprocessed spectra of the first derivative (1-Der) and second derivative (2-Der), there are significant differences in absorbance among the seven types of pine nuts in the range of 2500 nm–2631 nm.

3.5. Near-Infrared Spectral Feature Selection

3.5.1. Improved Binary Equilibrium Optimizer with Selection Probability (IBiEO-SP)

The equilibrium optimizer (EO) [20] is a swarm intelligence optimization algorithm proposed by Faramarzi et al. in 2020. It is primarily inspired by the control-volume dynamic mass balance in physics and serves as a heuristic optimization method, featuring strong optimization capability and fast convergence.
This article proposes an improved binary equilibrium optimizer with selection probability (IBiEO-SP) for feature selection in near-infrared spectroscopy, combined with multiple classifiers. Because the original equilibrium optimizer randomly selects concentrations to determine the search direction, it can become trapped in local optima. A new equilibrium pool selection probability strategy is therefore proposed to address this shortcoming, as shown in Figure 4.
The process of the IBiEO-SP algorithm is as follows:
(1) Unlike the continuous variables in standard EO, IBiEO-SP starts with a binary initialization suitable for discrete problems. Each individual in the population is represented as a binary string of length N:

$$C = \{c_1, c_2, \ldots, c_N\}, \quad i = 1, 2, \ldots, N \tag{1}$$

$$c_i \in \{0, 1\} \tag{2}$$

In these equations, N denotes the length of the binary string, i.e., the total number of candidate spectral features, and each bit c_i indicates whether the i-th feature is selected (c_i = 1) or not (c_i = 0). Since near-infrared spectral feature selection is a discrete problem and the original equilibrium optimizer was designed for continuous optimization, the algorithm must be discretized and improved to avoid potential information loss and increased complexity.
(2) Equilibrium pool selection probability strategy: To improve the algorithm’s global search ability and prevent convergence to poor local optima, the equilibrium state (best individual) in the original equilibrium optimizer is chosen from a pool containing the four best candidate solutions found so far together with their average. The equilibrium pool is constructed as shown in Equation (3):

$$C_{eq,pool} = \{C_{eq1}, C_{eq2}, C_{eq3}, C_{eq4}, C_{eq,ave}\} \tag{3}$$

Here, C_eq1, C_eq2, C_eq3, and C_eq4 are the four best solutions found up to the current iteration, and C_eq,ave is their average state. Each of the five candidates is selected with equal probability 0.2. Because the original EO draws its equilibrium candidate from this small set of elite individuals with uniform probability, exploration is limited and the search can stagnate in later iterations.
IBiEO-SP introduces a selection probability control mechanism to address this. To make the algorithm explore more in early iterations and gradually reduce exploratory selection as iterations proceed, IBiEO-SP introduces a flag variable rr that controls the probability of concentration selection, as shown in Equation (4):

$$rr = rand(0, 1) \tag{4}$$

When the iteration-dependent time parameter t (which, as in standard EO, decays from 1 toward 0 over the run) exceeds rr, or when rr > 0.2, the concentration is randomly selected from the equilibrium pool of Equation (3), each candidate with probability 0.2. Otherwise, the concentration with the best fitness value in the pool is chosen for the update, as shown in Equation (5):

$$C_{eq} = \max(C_{eq,pool}) \tag{5}$$
This mechanism makes the algorithm favor exploratory selection in the early iterations, guiding the population toward promising regions; as t decays over the run, the selection becomes increasingly exploitative. This trade-off lets the algorithm first search the solution space broadly and then concentrate on refining high-quality solutions in later iterations, improving convergence toward high-quality optima. The pseudocode for the IBiEO-SP algorithm is shown in Algorithm 1.
Algorithm 1: IBiEO-SP

```
Initialize population C as binary vectors
Set parameters: a1 = 2, a2 = 1, GP = 0.5
Iter = 0
While Iter < Max_iter do
    Evaluate fitness for each particle in the population: fit(C)
    // Step 1: Calculate the average particle concentration
    C_ave = (C_eq1 + C_eq2 + C_eq3 + C_eq4) / 4
    // Step 2: Establish the equilibrium pool
    C_eq_pool = {C_eq1, C_eq2, C_eq3, C_eq4, C_ave}
    For i = 1 to n (number of particles) do
        Generate random number rr in [0, 1]
        If t > rr or rr > 0.2 then
            // With higher randomness, select from the equilibrium pool
            G_eq = randomly selected vector from C_eq_pool
        Else
            // Otherwise choose the best (elitist) equilibrium candidate
            G_eq = best(C_eq_pool)   // best fitness
        End If
        // Step 3: Update the binary concentration using the position update equation
        C[i] = G_eq + (C[i] - G_eq) * F + G / (λ * V) * (1 - F)
    End For
    Iter = Iter + 1
End While
```
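To make the selection mechanism concrete, the following is a minimal Python sketch of one IBiEO-SP update sweep under stated assumptions: the exponential term F follows the standard EO formulation, fitness is treated as a cost to be minimized, and the sigmoid re-binarization and simplified generation-rate term are illustrative choices, since the paper does not spell out its discretization rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_equilibrium(pool, pool_cost, t):
    """Equations (4)-(5): draw from the equilibrium pool while the time
    parameter t is large (exploration), otherwise take the best candidate."""
    rr = rng.random()                           # Equation (4)
    if t > rr or rr > 0.2:
        return pool[rng.integers(len(pool))]    # uniform draw, p = 0.2 each
    return pool[int(np.argmin(pool_cost))]      # best-in-pool, Equation (5)

def ibieo_sp_step(C, pool, pool_cost, t, a1=2.0, GP=0.5, V=1.0):
    """One position-update sweep over a binary population C (n x d, 0/1)."""
    n, d = C.shape
    for i in range(n):
        c_eq = select_equilibrium(pool, pool_cost, t)
        lam = rng.random(d) + 1e-9              # avoid divide-by-zero below
        r = rng.random(d)
        F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)  # EO exponential term
        r1, r2 = rng.random(), rng.random()
        GCP = 0.5 * r1 if r2 >= GP else 0.0                 # generation-rate control
        G = GCP * (c_eq - lam * C[i]) * F
        cont = c_eq + (C[i] - c_eq) * F + G / (lam * V) * (1 - F)
        prob = 1.0 / (1.0 + np.exp(-cont))                  # sigmoid transfer function
        C[i] = (rng.random(d) < prob).astype(int)           # re-binarize each bit
    return C
```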

3.5.2. Objective Function

The objective function, Best cost, is shown in Equations (6) and (7):

$$Best\ cost = \omega \cdot error + (1 - \omega) \cdot \frac{SF}{TF} \tag{6}$$

$$error = 1 - Accuracy \tag{7}$$

where SF is the number of selected features, TF is the total number of features, error is the classifier’s error rate, and ω ∈ [0, 1] is the weight balancing the two objectives. It can be adjusted depending on whether higher classification precision or a smaller feature subset is desired.
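A direct translation of Equations (6) and (7) into Python; the weight w = 0.9 is an illustrative assumption, since the paper only states ω ∈ [0, 1].

```python
def best_cost(accuracy, n_selected, n_total, w=0.9):
    """Wrapper objective of Equation (6): trade off the classifier's error
    rate against the fraction of retained features."""
    error = 1.0 - accuracy                       # Equation (7)
    return w * error + (1.0 - w) * n_selected / n_total

# e.g., 99.05% accuracy with 2 of 2235 wavelengths:
# best_cost(0.9905, 2, 2235)  ->  approx. 0.0086
```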

3.6. Image Data Acquisition and Preprocessing

The pine nut images were captured through a LEICA EZ4 microscope (Leica Microsystems, Wetzlar, Germany) at eight-fold magnification against a white background, using a Huawei Mate 30 smartphone (Huawei Technologies Co., Ltd., Shenzhen, China) as the imaging device. The smartphone is equipped with a 40 MP ultra-sensitive camera (wide-angle, f/1.8) supporting both autofocus and manual focus. The shooting angle was set at 90°, at a height of 50 cm, and 52 images were captured for each type of pine nut.
Due to the diverse shapes, sizes, and proportions of pine nuts, there can be significant variation among different batches or pictures. In pine nut image feature extraction, image quality therefore directly affects the design of recognition algorithms and the accuracy of their results, so the images must be preprocessed before feature extraction. The main purposes of image preprocessing are to eliminate irrelevant information, restore useful real information, enhance the detectability of relevant information, simplify the data as much as possible, and improve the reliability of feature extraction. In this study, the Segment Anything Model (SAM) proposed by Meta AI is employed for image preprocessing. SAM was trained on more than 11 million images and 1.1 billion masks, giving it powerful zero-shot generalization capability [21].
The steps for image preprocessing are as follows:
Step 1: Use SAM to extract a mask from the original pine nut image, as shown in Figure 5b.
Step 2: Based on the extracted mask, crop out the foreground region of the original image, as shown in Figure 5c.
Step 3: Fill the image and adjust its size so that all images have the same dimensions, as shown in Figure 5d.
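A minimal OpenCV sketch of Steps 2 and 3, assuming Step 1 has already produced a binary foreground mask with SAM; the file names and the 256 × 256 target size are illustrative assumptions.

```python
import cv2
import numpy as np

image = cv2.imread("pine_nut.jpg")                        # original image
mask = cv2.imread("sam_mask.png", cv2.IMREAD_GRAYSCALE)   # SAM foreground mask

# Step 2: keep only the masked foreground and crop to its bounding box
# (assumes the mask contains at least one non-zero pixel).
foreground = cv2.bitwise_and(image, image, mask=mask)
x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
cropped = foreground[y:y + h, x:x + w]

# Step 3: pad to a square canvas, then resize so all images share one size.
side = max(w, h)
canvas = np.zeros((side, side, 3), dtype=np.uint8)
y0, x0 = (side - h) // 2, (side - w) // 2
canvas[y0:y0 + h, x0:x0 + w] = cropped
uniform = cv2.resize(canvas, (256, 256))
```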
Figure 5. Preprocessing of nut images.

3.7. Pine Nut Image Feature Extraction

Because pine nuts of different varieties differ in size, the segmented nuts in the images also vary in scale, so morphological, texture, and color features must be computed. The chosen features include area, perimeter, circularity, centroid distance, the gray-level co-occurrence matrix, local binary patterns, and color features, supplemented by the scale-invariant feature transform and histogram of oriented gradients. As shown in Table 2, the seven main feature categories have a combined dimensionality of 12.
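A hedged Python sketch of the main Table 2 feature families on one preprocessed, background-free nut image; the GLCM distances/angles and the LBP neighborhood (P = 8, R = 1) are illustrative parameter assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

img = cv2.imread("nut_preprocessed.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
binary = (gray > 0).astype(np.uint8)

# Geometric features from the largest foreground contour.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)
area = cv2.contourArea(cnt)
perimeter = cv2.arcLength(cnt, closed=True)
circularity = 4 * np.pi * area / perimeter ** 2          # Table 2, row 3

# GLCM texture statistics (contrast shown; other properties are analogous).
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True)
contrast = graycoprops(glcm, "contrast")[0, 0]

# LBP histogram with an 8-neighbor, radius-1 uniform pattern.
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

# First-order color moments (per-channel means, BGR order).
color_means = img.reshape(-1, 3).mean(axis=0)
```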

3.8. Feature Fusion and Ensemble Learning

This study integrates near-infrared (NIRS) spectral data and image data to comprehensively characterize the quality attributes of pine nut samples. As shown in Figure 6, the original NIRS spectra contain 2235 wavelength features. To enhance the signal-to-noise ratio and discriminative power, four spectral preprocessing methods were applied: multiplicative scatter correction (MSC), standard normal variate (SNV), first-order derivative (SG1), and second-order derivative (SG2), generating four preprocessed spectral datasets. Subsequently, the proposed IBiEO-SP algorithm was employed in conjunction with four classifiers, namely k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), and random forest (RF), to evaluate each preprocessed spectral dataset. This process dynamically selected the most discriminative wavelengths to form optimal spectral feature subsets with adaptive dimensionality.
For image features, 12 visual characteristics were extracted through an image analysis module, encompassing geometric, textural, and color attributes: area, perimeter, circularity, centroid distance, gray-level co-occurrence matrix (GLCM) texture features, local binary pattern (LBP) features, and color moments.
To enable effective multimodal feature fusion, both image features and selected spectral features underwent min–max normalization to unify their value ranges to [0,1], mitigating scale-induced bias. Horizontal concatenation was then implemented to combine the 12-dimensional image features with N-dimensional optimal spectral features, forming a joint feature vector (12 + N dimensions). In the main experiments, no explicit weighting was applied during feature concatenation, allowing for the classification models to autonomously learn feature importance through training, thereby assessing the intrinsic impact of fusion strategy on performance.
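A minimal sketch of this normalization-and-concatenation step; X_img (n_samples × 12) and X_nir (n_samples × N) are assumed names for the image features and the IBiEO-SP-selected spectral bands.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_img_scaled = MinMaxScaler().fit_transform(X_img)   # fit on training data only in practice
X_nir_scaled = MinMaxScaler().fit_transform(X_nir)
X_fused = np.hstack([X_img_scaled, X_nir_scaled])    # joint (12 + N)-dim vector per sample
```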
The final ensemble classifier system feeds the combined features into four models (KNN, SVM, DT, and RF) and employs a probability-based soft voting strategy for decision fusion. This strategy aggregates the class probability predictions of all classifiers for each sample, computes the average probabilities, and selects the class with the highest average probability as the final prediction. Equal weights were assigned to all classifiers during soft voting, intentionally avoiding validation-based weight adjustment to prevent human bias and to enhance model robustness and generalization.
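An equal-weight soft-voting ensemble over the four base classifiers, expressed as a scikit-learn sketch (the paper's models were run in MATLAB, so this is an equivalent reconstruction, not the original code); probability=True is needed so SVC can contribute class probabilities to the average.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=1)),
        ("svm", SVC(kernel="linear", C=1, probability=True)),
        ("dt", DecisionTreeClassifier()),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",   # average the predicted class probabilities
    weights=None,    # equal weights, matching the paper's choice
)
# ensemble.fit(X_fused_train, y_train); y_pred = ensemble.predict(X_fused_test)
```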

3.9. Classification Evaluation Metrics

This paper uses Accuracy (Acc), Precision (Pre), Recall (also known as the true positive rate, TPR), F1-score (F1), and the selected number of features (SNF) as comprehensive evaluation metrics. Together, these metrics reflect the performance of the model in classification tasks. Accuracy measures the proportion of correctly classified samples, as shown in Formula (8):
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \tag{8}$$
where TP is the number of correctly recognized positive samples (predicted and actually positive); FP is the number of falsely reported positive samples, where the classifier predicts positive but the sample is actually negative; TN is the number of correctly recognized negative samples; and FN is the number of missed positive samples, where the classifier predicts negative but the sample is actually positive.
Precision refers to the proportion of samples predicted as positive by the model that are actually positive. Its calculation method is shown in Formula (9):
$$Precision = \frac{TP}{TP + FP} \tag{9}$$
The F1-score is the harmonic mean of Precision and Recall, and its calculation formula is shown in Formula (10). Recall refers to the proportion of actual positive samples that are correctly predicted as positive by the model, as shown in Formula (11):
$$F1\text{-}score = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \tag{10}$$
$$Recall = \frac{TP}{TP + FN} \tag{11}$$
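As a compact worked example of Formulas (8)–(11), the sketch below computes the four metrics from per-class confusion counts; the counts are hypothetical but use the dataset's scale of 210 samples (30 per class).

```python
def classification_metrics(tp, fp, tn, fn):
    """Formulas (8)-(11) for one class in one-vs-rest form."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# 29 of 30 positives recovered, one false alarm among 180 negatives:
# classification_metrics(29, 1, 179, 1) -> (0.9905, 0.9667, 0.9667, 0.9667)
```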
The selected number of features (SNF) refers to the number of features ultimately selected during the process of feature selection or dimensionality reduction. It is an important metric for evaluating the simplicity and effectiveness of a model. By selecting fewer but more discriminative features, a model can improve training efficiency, reduce the risk of overfitting, and enhance generalization ability. SNF reflects the model’s capacity to retain high classification performance while effectively filtering features, which is particularly important in high-dimensional data scenarios. Generally, the fewer the selected features while maintaining or improving performance, the stronger the model’s capability in feature compression and extraction. Therefore, SNF can also serve as an auxiliary evaluation metric for model lightweighting and interpretability.
Accuracy, precision, recall, F1 score, and SNF are used together to measure classification performance comprehensively. Accuracy reflects the overall correctness of the classification, while precision focuses on the accuracy of the model in predicting positive cases. The F1 score balances the model’s ability to identify positive cases against the accuracy of positive-case classification, thereby providing a more comprehensive performance evaluation.

4. Results

4.1. Classification of Pine Nut Image Features

In this study, seven types of features were extracted from pine nut images, including area, perimeter, circularity, centroid distance, gray-level co-occurrence matrix, local binary patterns, color features, as well as scale-invariant feature transform and histogram of oriented gradients. Four different classifiers (decision tree—DT; random forest—RF; support vector machine—SVM; k-nearest neighbors—KNN) were used in 10-fold cross-validation classification experiments. Table 3 shows the classification performance of each classifier on different pine nut varieties (yunnanensis, armandii, taiwanensis, elliottii, bungeana, massoniana, thunbergii), including precision (Pre) and F1 score (F1) evaluation metrics.
From the experimental results, it can be observed that classifier performance varies across pine nut varieties. Random forest (RF) achieved higher precision and F1 scores on the yunnanensis and taiwanensis varieties, while the support vector machine (SVM) performed better on thunbergii. However, the overall average performance was mediocre, especially on varieties such as armandii and elliottii, where all classifiers were suboptimal. It is noteworthy that the average precision is relatively high while the accuracy is low, indicating that the models suffer from substantial misclassification on certain categories and that this misclassification is unevenly distributed, lowering overall accuracy. This is likely due to the similarity between varieties, which makes these categories harder for the models to differentiate.
Although a multi-class image classification task was performed with a rich set of image features and four common classifiers, the overall classification performance remained low. This leaves substantial room for improvement and motivates the subsequent multimodal fusion.

4.2. Pine Nut Near-Infrared Spectral Feature Selection

In this section, the performance of two swarm intelligence optimization methods, namely, IBiEO-SP and EO, combined with four classifiers (KNN, DT, RF, and SVM) in the wrapper feature selection task on four types of preprocessed spectral data is meticulously compared. The comparison is conducted using 10-fold cross-validation, and the parameters of the algorithms and classifiers are presented in Table 4.
The performance of the wrapper feature selection method using the IBiEO-SP and EO intelligent optimization methods and four classifiers was compared on four preprocessed near-infrared spectra, with the objective function of minimizing the number of feature selections and classification error rates (error). The experimental results are shown in Table 5.
For classifier configuration, this study employs the default classification models from MATLAB’s Statistics and Machine Learning Toolbox, including k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), and random forest (RF), all implemented with their standard parameter settings. Specifically, KNN uses the default configuration with one neighbor (k = 1) and a Euclidean distance metric without distance weighting; SVM adopts a linear kernel function with the default box constraint parameter (C = 1); DT applies binary splitting with default values for maximum branch depth and minimum leaf node samples; and RF utilizes 100 decision trees by default with Gini impurity as the splitting criterion. These standardized default parameter settings establish a consistent and impartial benchmarking framework across all models, effectively eliminating potential biases from parameter tuning and ensuring an objective assessment of the feature fusion strategy’s fundamental performance.
Experimental results demonstrate that the proposed IBiEO-SP algorithm exhibits significant advantages in both feature selection efficiency and classification performance across multiple spectral preprocessing methods. When using MSC preprocessing, IBiEO-SP achieved 97.62% accuracy with only three features through KNN classification, while attaining near-perfect performance (99.05% accuracy, 100% F1 score and precision) with just two features using the RF classifier. The algorithm’s efficiency became particularly evident under SNV preprocessing, where it maintained 97.14% accuracy with 100% F1 score/precision using two features via KNN, a notable improvement over conventional EO methods, which required three features to achieve 99.05% accuracy. In the more complex SG1/SG2 preprocessing environments, IBiEO-SP demonstrated remarkable adaptability, delivering 98.10–99.05% accuracy with 3–4 features through DT/RF classifiers while consistently maintaining F1 scores and precision above 95%.

4.3. Fusion Classification of Pine Nut Near-Infrared Spectral Features and Image Features

The IBiEO-SP algorithm was used to perform feature selection on the four preprocessed near-infrared spectral datasets. The selected feature subsets were fused with the image features, and a comprehensive evaluation was conducted using four base classifiers (KNN, decision tree, random forest, and SVM) and an ensemble method with soft voting. The experimental results are shown in Table 6.
In terms of accuracy, all classifiers performed well with accuracy rates above 90%, demonstrating effective classification capabilities for the samples. The ensemble method (soft voting) showed relative stability in accuracy, achieving an average level across the base classifiers. Different classifiers exhibited some differences in precision and recall. SVM performed better in precision, reducing false positive rates, while random forest performed better in recall, reducing false negative rates. The F1 score, which considers both precision and recall, showed a good compromise for the ensemble method, indicating a relatively balanced performance. In comparison to different classifiers, decision trees and SVM performed well in most evaluation metrics, effectively handling data complexity and non-linear relationships. Random forest improved model robustness through its ensemble nature. KNN performed slightly worse in certain metrics, possibly influenced by sensitivity to the data and computational costs. Ultimately, the ensemble method of soft voting successfully improved the overall performance and reduced the risk of overfitting by combining predictions from multiple classifiers. These results emphasize the effectiveness of integrating image features with near-infrared spectral feature selection and fusion, providing valuable insights for further research on the effects of different preprocessing methods and the impact of feature selection and fusion on performance.

5. Discussion

5.1. Low Image Classification Accuracy

In terms of classification performance of image features, experimental results demonstrate that traditional machine learning models (e.g., random forest) achieved only approximately 50% classification accuracy when using image features alone, significantly lower than the performance of fused-feature models. The primary reasons for this outcome can be analyzed from the following perspectives:
Firstly, the inherent low dimensionality of image features (12 dimensions total), although covering geometric shape characteristics (area, perimeter, and circularity), texture features (GLCM and LBP), and color attributes, results in a sparse overall feature space. This insufficiently captures fine-grained differences in pine nut micro-morphology and surface textures, limiting the model’s ability to establish discriminative boundaries between categories. Secondly, the image dataset exhibits certain class imbalance issues, compounded by subtle visual distinctions between some categories, which increases the difficulty of discrimination. Although training set balance was controlled through repeated sampling in experimental design, the limited original image quantity impedes the model from learning stable and representative discriminative patterns.
Our related work [10] previously attempted to employ convolutional neural networks (CNNs) such as EfficientNet for end-to-end image modeling. However, without data augmentation or sample size expansion, the accuracy improvements remained constrained. This further confirms that image feature representation reaches inherent limitations under the current experimental conditions and is inadequate to independently support high-precision classification. Consequently, this study adopts feature-level fusion of image features with spectral characteristics to leverage the complementary advantages of multi-source information and enhance overall discrimination. Subsequent work will explore strategies including transfer learning, data augmentation, and synthetic image generation (e.g., CutMix, GAN-based synthesis) to enrich image sample diversity. Concurrently, deeper semantic feature extraction with advanced vision backbones (e.g., Vision Transformers/ViT or lightweight CNNs) will be investigated to improve image-side discriminative capability and fusion contribution.

5.2. Robustness and Classification Performance of IBiEO-SP Under Noise Interference

To validate the robustness and classification performance of the proposed IBiEO-SP method under noise interference, we added Gaussian noise with a 1% amplitude to the original data and evaluated the performance using four classifiers—KNN, DT, RF, and SVM—with consistent parameters. The experimental results are presented in Table 7 and Figure 7.
Overall, IBiEO-SP demonstrated the most robust classification performance among all feature selection methods. It maintained high accuracy (Acc) and F1 scores even with noise-sensitive classifiers like KNN and SVM. For instance, under the SG1 preprocessing method, IBiEO-SP combined with KNN achieved an accuracy of 0.9571 and an F1 score of 0.9577, outperforming EO (0.9476) and GA (0.9524). Similarly, under the SG2 preprocessing method, it maintained leading performance with an accuracy of 0.9571 and an F1 score of 0.9588. In contrast, EO and GA showed greater performance fluctuation with SVM; in SG1, for example, EO combined with SVM achieved only 0.8190 accuracy, while IBiEO-SP reached only 0.5143. Although this value was low, the corresponding F1 score was slightly higher at 0.5210, suggesting that IBiEO-SP retains some classification capability under high-noise conditions.
When using more robust classifiers such as decision trees (DTs) and random forests (RFs), IBiEO-SP remained competitive, suggesting that its selected feature subsets are not only stable but also highly adaptable to various classification models. For example, in the MSC subset, IBiEO-SP combined with DT achieved a high accuracy of 0.9857, and it also performed well with RF (0.9190). In the SG2 subset, IBiEO-SP with RF reached 0.9857 accuracy, nearly matching or slightly outperforming GA and PSO. Notably, under Gaussian noise, the number of selected features (SNF) by IBiEO-SP was generally lower than those selected by EO and GA. For instance, under the SG1 preprocessing method, IBiEO-SP with DT selected 27 features, compared to 928 by EO and DT and 1220 by GA and DT. This result indicates that IBiEO-SP can significantly reduce the number of features while maintaining classification performance, effectively lowering computational cost and enhancing model simplicity and interpretability.
Furthermore, IBiEO-SP exhibited good generalization performance across different sub-datasets, with minimal variation between SG1 and SG2, whereas EO and GA showed significant fluctuations across subsets. This highlights the superior robustness and stability of the proposed method.
Despite its strong performance with KNN, DT, and RF, IBiEO-SP showed relatively unstable results with the SVM classifier. Specifically, on the SG1 dataset, IBiEO-SP combined with SVM yielded an accuracy of only 0.5143 and an F1 score of 0.5210, far below EO (Acc = 0.8190) and GA (Acc = 0.8190) in the same group. This suggests that the feature subset selected by IBiEO-SP may be linearly inseparable or have blurred class boundaries for SVM, hurting overall performance. On the SG2 dataset with RF, GA (Acc = 0.9857, F1 = 0.9908) and PSO (Acc = 0.9857, F1 = 0.9954) performed comparably or slightly better, indicating that under specific problem scales or feature space structures, IBiEO-SP might not fully capture the optimal subset and still has room for improvement. For example, in the MSC subset, IBiEO-SP combined with SVM achieved an accuracy of 0.8095, below GA (0.9048), and its overall F1 score was likewise inferior to other mainstream methods.
In summary, IBiEO-SP can effectively identify stable and discriminative feature subsets under slight Gaussian noise interference. It balances accuracy, precision, recall, and feature compression rate, demonstrating superior performance across various classifiers. This makes it particularly suitable for noise-sensitive application scenarios.

5.3. Misclassification Analysis

Comparative analysis reveals critical limitations of alternative approaches (as shown in Figure 8). While EO methods occasionally matched IBiEO-SP’s accuracy (e.g., 99.05% with SNV + KNN), they consistently required more features. The genetic algorithm (GA) exhibited marginal accuracy gains (99.95% with SG1 + RF) only when employing over 1000 features—a 500-fold increase compared to IBiEO-SP’s requirements. Particle swarm optimization (PSO) proved less effective, achieving ≤98.10% accuracy even with 300–400 features. This performance disparity stems from IBiEO-SP’s novel capability to dynamically balance feature importance with classifier demands, effectively resolving the dimensionality paradox that plagues traditional methods. By simultaneously eliminating redundant high-dimensional data while preserving discriminative low-dimensional features, the algorithm establishes a new benchmark for precision–efficiency trade-offs in feature selection tasks.
To further evaluate the classification performance of the IBiEO-SP feature selection method combined with the decision tree (DT) classifier on the SNV dataset, the corresponding confusion matrix was plotted, as shown in Figure 9. In the confusion matrix, the diagonal elements represent correctly classified samples, while the off-diagonal elements indicate misclassifications. The dataset includes seven Pinus species, with 30 samples per class. The results demonstrate that the model achieved satisfactory classification performance across most categories. Specifically, P. elliottii and P. taiwanensis were classified with 100% accuracy, the best performance. P. massoniana and P. thunbergii each had one misclassified sample, giving an accuracy of 96.7%. The classification accuracies for P. armandii and P. bungeana were 93.3% and 90%, respectively, with most misclassifications occurring between these two species, indicating a certain degree of similarity in the feature space. P. yunnanensis was misclassified once each as P. elliottii and P. thunbergii, also yielding an accuracy of 93.3%. Overall, although IBiEO-SP selected only two optimal features for modeling, it achieved a high average classification accuracy of 97.14%, demonstrating strong discriminative power. The decision tree classifier maintained robust performance despite the low feature dimensionality. Misclassifications mainly occurred among closely related species with similar morphological or spectral characteristics. Future improvements could incorporate additional data types, such as texture features or multimodal information, to further enhance classification accuracy.

6. Conclusions

This study proposes an innovative algorithm named IBiEO-SP for feature selection in the near-infrared (NIR) spectra of pine nuts across four preprocessed spectral datasets. The algorithm significantly improves classification performance by preprocessing the spectral data and performing multimodal fusion of the selected feature subsets with image features. Experimental results demonstrate outstanding performance in key metrics such as accuracy and F1 score, confirming the substantial potential of multimodal fusion for enhancing classification performance.
In terms of methodology, this study employs four representative traditional classifiers (KNN, SVM, DT, and RF) as base classifiers and adopts a soft voting approach for multimodal ensemble learning. This selection is primarily based on the following considerations: First, the research data exhibits the characteristics of “small sample size and high dimensionality,” making traditional classifiers more effective than deep learning models in avoiding overfitting risks. Second, these models offer strong interpretability, computational efficiency, and parameter stability, facilitating wrapper-based feature selection and performance comparison. Additionally, the IBiEO-SP algorithm itself relies on classifier performance feedback, necessitating models with high computational efficiency and clear classification boundaries to ensure iterative optimization efficiency.
Future research directions include (1) exploring the selection of different feature subsets with similar performance in NIR spectra; (2) investigating how to leverage multimodal feature subsets to improve model stability; and (3) introducing more complex models such as XGBoost, LightGBM, and lightweight convolutional networks to further validate the universality and scalability of fused features. These studies will help better adapt to practical application needs and advance the development of multimodal feature selection methods.

Author Contributions

Y.Y. developed the methodology, performed formal analysis and validation, implemented the software, created visualizations, and wrote the original draft. X.H. contributed to data curation and visualization. D.L. provided critical resources and participated in visualization. C.-T.L. and B.K.N. jointly secured funding, administered the project, provided supervision, and participated in writing review & editing. Additionally, B.K.N. was responsible for conceptualizing the study and overall supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Yunnan Provincial Agricultural Joint Special Project (202301BD070001-086), Yunnan Provincial Department of Education Fund (2022J0495), National Natural Science Foundation of China (31860332), and National Natural Science Foundation of China (32360388).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, J.; Qiao, P.; Yang, L.; Lv, H.; Shi, H.; He, Y.; Liu, Y. Research on nondestructive detection of pine nut quality based on terahertz imaging. Infrared Phys. Technol. 2023, 134, 104798. [Google Scholar] [CrossRef]
  2. Destaillats, F.; Cruz-Hernandez, C.; Giuffrida, F.; Dionisi, F. Identification of the botanical origin of pine nuts found in food products by gas−liquid chromatography analysis of fatty acid profile. J. Agric. Food Chem. 2010, 58, 2082–2087. [Google Scholar] [CrossRef] [PubMed]
  3. Lanner, R.M. The Piñon Pine: A Natural and Cultural History; University of Nevada Press: Reno, NV, USA, 1981. [Google Scholar]
  4. Zeng, J.; Guo, Y.; Han, Y.; Li, Z.; Yang, Z.; Chai, Q.; Wang, W.; Zhang, Y.; Fu, C. A review of the discriminant analysis methods for food quality based on near-infrared spectroscopy and pattern recognition. Molecules 2021, 26, 749. [Google Scholar] [CrossRef] [PubMed]
  5. Kharbach, M.; Alaoui Mansouri, M.; Taabouz, M.; Yu, H. Current application of advancing spectroscopy techniques in food analysis: Data handling with chemometric approaches. Foods 2023, 12, 2753. [Google Scholar] [CrossRef] [PubMed]
  6. Liang, L.; Wei, L.; Fang, G.; Xu, F.; Deng, Y.; Shen, K.; Tian, Q.; Wu, T.; Zhu, B. Prediction of holocellulose and lignin content of pulp wood feedstock using near infrared spectroscopy and variable selection. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2020, 225, 117515. [Google Scholar] [CrossRef] [PubMed]
  7. Kong, B.; Cai, J.; Tuo, S.; Wen, L.; Jiang, H.; He, L.; Luo, L.; Zhang, Y.; Chen, A.; Tang, J.; et al. Rapid construction of an optimal model for near-infrared spectroscopy (NIRS) by particle swarm optimization (PSO). Anal. Lett. 2022, 55, 1685–1700. [Google Scholar] [CrossRef]
  8. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the art in defect detection based on machine vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  9. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef] [PubMed]
  10. Huang, B.; Liu, J.; Jiao, J.; Lu, J.; Lv, D.; Mao, J.; Zhao, Y.; Zhang, Y. Applications of machine learning in pine nuts classification. Sci. Rep. 2022, 12, 8799. [Google Scholar] [CrossRef] [PubMed]
  11. Moscetti, R.; Berhe, D.H.; Agrimi, M.; Haff, R.P.; Liang, P.; Ferri, S.; Monarca, D.; Massantini, R. Pine nut species recognition using NIR spectroscopy and image analysis. J. Food Eng. 2021, 292, 110357. [Google Scholar] [CrossRef]
  12. Loewe, V.; Navarro-Cerrillo, R.M.; García-Olmo, J.; Riccioli, C.; Sánchez-Cuesta, R. Discriminant analysis of Mediterranean pine nuts (Pinus pinea L.) from Chilean plantations by near infrared spectroscopy (NIRS). Food Control 2017, 73, 634–643. [Google Scholar] [CrossRef]
  13. Ríos-Reina, R.; Callejón, R.M.; Amigo, J.M. Feasibility of a rapid and non-destructive methodology for the study and discrimination of pine nuts using near-infrared hyperspectral analysis and chemometrics. Food Control 2021, 130, 108365. [Google Scholar] [CrossRef]
  14. Allegrini, F.; Olivieri, A.C. A new and efficient variable selection algorithm based on ant colony optimization. Applications to near infrared spectroscopy/partial least-squares analysis. Anal. Chim. Acta 2011, 699, 18–25. [Google Scholar] [CrossRef] [PubMed]
  15. Goodarzi, M.; dos Santos Coelho, L. Firefly as a novel swarm intelligence variable selection method in spectroscopy. Anal. Chim. Acta 2014, 852, 20–27. [Google Scholar] [CrossRef] [PubMed]
  16. Yun, Y.-H.; Bin, J.; Liu, D.-L.; Xu, L.; Yan, T.-L.; Cao, D.-S.; Xu, Q.-S. A hybrid variable selection strategy based on continuous shrinkage of variable space in multivariate calibration. Anal. Chim. Acta 2019, 1058, 58–69. [Google Scholar] [CrossRef] [PubMed]
  17. Ren, G.; Wang, Y.; Ning, J.; Zhang, Z. Highly identification of keemun black tea rank based on cognitive spectroscopy: Near infrared spectroscopy combined with feature variable selection. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2020, 230, 118079. [Google Scholar] [CrossRef] [PubMed]
  18. Bian, X.; Zhang, R.; Liu, P.; Xiang, Y.; Wang, S.; Tan, X. Near infrared spectroscopic variable selection by a novel swarm intelligence algorithm for rapid quantification of high order edible blend oil. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2023, 284, 121788. [Google Scholar] [CrossRef] [PubMed]
  19. Qiu, X. Research on Machine Vision and Near Infrared Spectroscopy for Quality Detection of Korean Pine Seeds. Ph.D. Thesis, Northeast Forestry University, Harbin, China, 2017. (In Chinese). [Google Scholar]
  20. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  21. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y. Segment anything. arXiv 2023, arXiv:2304.02643. [Google Scholar]
Figure 1. The original spectra of seven samples of pine nuts.
Figure 2. Randomly selected spectra of seven samples of pine nuts.
Figure 3. Preprocessed spectrum.
Figure 4. Improved binary equilibrium optimizer with probability selection.
Figure 6. Fusion and integrated learning workflow of near-infrared spectra and image features.
Figure 7. Adding noise to near-infrared spectroscopy.
Figure 8. Feature subset selection for different preprocessed spectra.
Figure 9. Optimal model confusion matrix diagram.
Table 1. Spectral assignments of hydrogen-containing functional groups in the NIR spectrum.

| Type | Aromatic Hydrocarbon (nm) | Methyl (nm) | Methylene (nm) | N-H (nm) | O-H (nm) |
|---|---|---|---|---|---|
| First Overtone Absorption | 1680 | 1700 | 1745 | 1540 | 1450 |
| Sum Frequency Absorption | 1435 | 1397 | 1435 | - | - |
| Second Overtone Absorption | 1145 | 1190 | 1210 | 1040 | 986 |
| Sum Frequency Absorption | - | 1015 | 1035 | - | - |
| Third Overtone Absorption | 875 | 913 | 934 | 785 | 730 |
| Fourth Overtone Absorption | 714 | 746 | 762 | - | - |
Table 2. Feature extraction table for pine nut images.

| Num | Feature Name | Feature Extraction Method |
|---|---|---|
| 1 | Area (S) | $S = n_I + n_B/2 - 1$ |
| 2 | Perimeter (L) | $L$ = pixel count of the contour of the target region in the image |
| 3 | Circularity | $e = 4\pi S / L^2$ |
| 4 | Centroid Distance | $mu_{ji} = \sum_{x,y} \mathrm{array}(x,y)\,(x - \bar{x})^{j} (y - \bar{y})^{i}$ |
| 5 | GLCM | $P(i,j \mid d,\theta) = \{(x,y) \mid f(x,y) = i,\ f(x + d_x, y + d_y) = j;\ x, y = 0, 1, \ldots, N-1\}$ |
| 6 | LBP | $LBP(x_c, y_c) = \sum_{p=0}^{P-1} 2^{p}\, s(i_p - i_c)$ |
| 7 | Color | $F = r[R] + g[G] + b[B]$ |
Table 3. Image feature classification results.

| Type | DT Pre | DT F1 | RF Pre | RF F1 | SVM Pre | SVM F1 | KNN Pre | KNN F1 |
|---|---|---|---|---|---|---|---|---|
| yunnanensis | 0.5143 | 0.4800 | 0.7368 | 0.4746 | 0.0737 | 0.1037 | 0.4000 | 0.3733 |
| armandii | 0.3421 | 0.3467 | 0.3636 | 0.1667 | 0.0426 | 0.0476 | 0.1667 | 0.0465 |
| taiwanensis | 0.5000 | 0.4634 | 0.7778 | 0.5915 | 0.1111 | 0.0377 | 0.2414 | 0.1918 |
| elliottii | 0.3636 | 0.3951 | 0.7273 | 0.3333 | 0.0909 | 0.0417 | 0.0000 | 0.0000 |
| bungeana | 0.5714 | 0.5063 | 0.6452 | 0.5333 | 0.0909 | 0.0364 | 0.4412 | 0.3846 |
| massoniana | 0.4848 | 0.4103 | 0.6207 | 0.4865 | 0.1667 | 0.0392 | 0.4667 | 0.2333 |
| thunbergii | 0.7419 | 0.6389 | 0.8519 | 0.6765 | 0.0459 | 0.0667 | 0.5714 | 0.4638 |
| Average | 0.5026 | 0.4629 | 0.6747 | 0.4661 | 0.0888 | 0.0533 | 0.3268 | 0.2419 |
| Accuracy | 0.5104 | | 0.5868 | | 0.0868 | | 0.3333 | |
Table 4. Experimental parameter settings.

| Method | Parameter |
|---|---|
| Number of Iterations | 100 |
| IBiEO-SP | a1 = 2, a2 = 1, GP = 0.5 |
| EO | a1 = 2, a2 = 1, GP = 0.5 |
| PSO | ω = 0.9 |
| GA | Crossover probability = 0.8; Mutation probability = 0.1 |
| KNN | k = 1 |
| SVM | C = 1, Kernel = Linear |
| DT | Default settings (MATLAB 2024b) |
| RF | Number of Trees = 100 |
Table 5. Results of near-infrared spectral feature selection.

| Method | Classifier | MSC Acc | MSC F1 | MSC Pre | MSC TPR | MSC SNF | SNV Acc | SNV F1 | SNV Pre | SNV TPR | SNV SNF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IBiEO-SP | KNN | 0.9762 | 0.9800 | 1.0000 | 0.9608 | 3 | 0.9714 | 1.0000 | 1.0000 | 1.0000 | 2 |
| IBiEO-SP | DT | 0.9952 | 0.9857 | 0.9750 | 0.9961 | 3 | 0.9714 | 0.9857 | 0.9750 | 0.9961 | 2 |
| IBiEO-SP | RF | 0.9905 | 1.0000 | 1.0000 | 1.0000 | 2 | 0.9619 | 0.9314 | 0.9500 | 0.9134 | 3 |
| IBiEO-SP | SVM | 0.9667 | 0.9535 | 0.9555 | 0.9528 | 3 | 0.9714 | 0.9657 | 0.9756 | 0.9561 | 2 |
| EO | KNN | 0.9905 | 0.9657 | 0.9750 | 0.9567 | 3 | 0.9905 | 0.9857 | 0.9750 | 0.9961 | 3 |
| EO | DT | 0.9952 | 1.0000 | 1.0000 | 1.0000 | 3 | 0.9714 | 0.9657 | 0.9750 | 0.9567 | 4 |
| EO | RF | 0.9238 | 0.8405 | 0.8583 | 0.8233 | 2 | 0.9714 | 0.9857 | 0.9750 | 0.9961 | 3 |
| EO | SVM | 0.9667 | 0.9707 | 0.9725 | 0.9689 | 3 | 0.9762 | 0.9730 | 0.9757 | 0.9704 | 2 |
| GA | KNN | 0.9738 | 0.9769 | 0.9933 | 0.9616 | 1070 | 0.9714 | 0.9777 | 0.9952 | 0.9608 | 1092 |
| GA | DT | 0.9243 | 0.9244 | 0.9262 | 0.9228 | 1104 | 0.9452 | 0.9452 | 0.9463 | 0.9440 | 1092 |
| GA | RF | 0.9848 | 0.9835 | 0.9917 | 0.9760 | 1098 | 0.9810 | 0.9784 | 0.9862 | 0.9707 | 1067 |
| GA | SVM | 0.9024 | 0.9015 | 0.9129 | 0.8901 | 1104 | 0.9905 | 0.9878 | 1.0000 | 0.9757 | 1098 |
| PSO | KNN | 0.9714 | 0.9766 | 0.9929 | 0.9611 | 344 | 0.9762 | 0.9708 | 0.9819 | 0.9604 | 334 |
| PSO | DT | 0.9524 | 0.9524 | 0.9537 | 0.9513 | 330 | 0.9648 | 0.9648 | 0.9656 | 0.9647 | 331 |
| PSO | RF | 0.9852 | 0.9828 | 0.9918 | 0.9749 | 342 | 0.9810 | 0.9830 | 0.9951 | 0.9711 | 387 |
| PSO | SVM | 0.8438 | 0.8427 | 0.8657 | 0.8207 | 330 | 0.9810 | 0.9778 | 0.9954 | 0.9608 | 414 |

| Method | Classifier | SG1 Acc | SG1 F1 | SG1 Pre | SG1 TPR | SG1 SNF | SG2 Acc | SG2 F1 | SG2 Pre | SG2 TPR | SG2 SNF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IBiEO-SP | KNN | 0.9619 | 1.0000 | 1.0000 | 1.0000 | 2 | 0.9810 | 0.9657 | 0.9750 | 0.9565 | 3 |
| IBiEO-SP | DT | 0.9810 | 0.9857 | 0.9750 | 0.9961 | 3 | 0.9857 | 0.9800 | 1.0000 | 0.9608 | 4 |
| IBiEO-SP | RF | 0.9857 | 1.0000 | 1.0000 | 1.0000 | 3 | 0.9571 | 0.9750 | 0.9600 | 0.9906 | 3 |
| IBiEO-SP | SVM | 0.9810 | 0.9566 | 0.9590 | 0.9532 | 3 | 0.9905 | 0.9902 | 0.9905 | 0.9899 | 3 |
| EO | KNN | 0.9619 | 0.9800 | 1.0000 | 0.9608 | 2 | 0.9619 | 0.9267 | 0.9667 | 0.8886 | 2 |
| EO | DT | 0.9762 | 0.9857 | 0.9750 | 0.9961 | 3 | 0.9714 | 0.9457 | 0.9750 | 0.9173 | 4 |
| EO | RF | 0.9762 | 1.0000 | 1.0000 | 1.0000 | 3 | 0.9619 | 0.9800 | 1.0000 | 0.9608 | 3 |
| EO | SVM | 0.9905 | 0.9902 | 0.9917 | 0.9886 | 3 | 0.9810 | 0.9685 | 0.9710 | 0.9660 | 3 |
| GA | KNN | 0.9900 | 0.9912 | 0.9971 | 0.9854 | 1086 | 0.9862 | 0.9847 | 0.9902 | 0.9792 | 1102 |
| GA | DT | 0.9676 | 0.9677 | 0.9685 | 0.9681 | 1087 | 0.9357 | 0.9354 | 0.9368 | 0.9340 | 1099 |
| GA | RF | 0.9995 | 0.9958 | 0.9995 | 0.9911 | 1088 | 0.9862 | 0.9894 | 0.9940 | 0.9849 | 1100 |
| GA | SVM | 0.9995 | 0.9968 | 1.0000 | 0.9936 | 1091 | 0.7195 | 0.8308 | 0.8493 | 0.8129 | 1115 |
| PSO | KNN | 0.9905 | 0.9910 | 0.9966 | 0.9855 | 338 | 0.9829 | 0.9841 | 0.9912 | 0.9722 | 354 |
| PSO | DT | 0.9633 | 0.9632 | 0.9637 | 0.9629 | 311 | 0.9548 | 0.9546 | 0.9559 | 0.9533 | 359 |
| PSO | RF | 0.9967 | 0.9966 | 1.0000 | 0.9932 | 308 | 0.9876 | 0.9886 | 0.9931 | 0.9842 | 367 |
| PSO | SVM | 0.9971 | 0.9959 | 0.9991 | 0.9927 | 297 | 0.7286 | 0.7457 | 0.6950 | 0.8089 | 338 |
Table 6. Classification results of the fused features.

| Classifier | MSC Acc | MSC Pre | MSC F1 | MSC TPR | SNV Acc | SNV Pre | SNV F1 | SNV TPR | SG1 Acc | SG1 Pre | SG1 F1 | SG1 TPR | SG2 Acc | SG2 Pre | SG2 F1 | SG2 TPR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| KNN | 0.9095 | 0.9667 | 0.8867 | 0.8223 | 0.9048 | 0.9667 | 0.8667 | 0.7889 | 0.9714 | 1.0000 | 0.9800 | 0.9608 | 0.9714 | 0.9750 | 0.9657 | 0.9566 |
| DT | 0.9667 | 0.9500 | 0.9514 | 0.9529 | 0.9714 | 0.9750 | 0.9357 | 0.8995 | 0.9762 | 0.9750 | 0.9657 | 0.9566 | 0.9857 | 0.9667 | 0.9667 | 0.9667 |
| RF | 0.9571 | 0.9750 | 0.9257 | 0.8813 | 0.9619 | 0.9500 | 0.9014 | 0.8563 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9905 | 0.9750 | 0.9857 | 0.9966 |
| SVM | 0.9667 | 1.0000 | 0.9800 | 0.9600 | 0.9619 | 1.0000 | 0.9800 | 0.9600 | 0.9857 | 1.0000 | 0.9800 | 0.9600 | 0.9857 | 1.0000 | 0.9800 | 0.9600 |
| Soft Voting | 0.9500 | 0.9729 | 0.9360 | 0.9016 | 0.9500 | 0.9729 | 0.9210 | 0.8723 | 0.9833 | 0.9938 | 0.9814 | 0.9693 | 0.9833 | 0.9792 | 0.9745 | 0.9700 |
Table 7. Results of near-infrared spectral feature selection on noisy data.

| Method | Classifier | MSC_noisy Acc | F1 | Pre | TPR | SNF | SNV_noisy Acc | F1 | Pre | TPR | SNF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IBiEO-SP | KNN | 0.9667 | 0.9673 | 0.9667 | 0.9668 | 135 | 0.9333 | 0.9337 | 0.9333 | 0.9329 | 5 |
| IBiEO-SP | DT | 0.9857 | 0.9857 | 0.9857 | 0.9856 | 52 | 0.9905 | 0.9908 | 0.9905 | 0.9905 | 6 |
| IBiEO-SP | RF | 0.9190 | 0.9219 | 0.9190 | 0.9191 | 147 | 0.9762 | 0.9767 | 0.9762 | 0.9762 | 2 |
| IBiEO-SP | SVM | 0.8095 | 0.8420 | 0.8095 | 0.8081 | 199 | 0.7429 | 0.7625 | 0.7429 | 0.7313 | 2 |
| EO | KNN | 0.9524 | 0.9536 | 0.9524 | 0.9522 | 261 | 0.9762 | 0.9778 | 0.9762 | 0.9759 | 90 |
| EO | DT | 0.9524 | 0.9534 | 0.9524 | 0.9518 | 453 | 0.9762 | 0.9774 | 0.9762 | 0.9763 | 132 |
| EO | RF | 0.9857 | 1.0000 | 0.9762 | 0.9876 | 305 | 0.9905 | 0.9954 | 0.9857 | 0.9903 | 4 |
| EO | SVM | 0.8238 | 0.8556 | 0.8238 | 0.8239 | 220 | 0.8810 | 0.8974 | 0.8810 | 0.8765 | 10 |
| GA | KNN | 0.9143 | 0.9148 | 0.9143 | 0.9142 | 1105 | 0.9429 | 0.9433 | 0.9429 | 0.9429 | 1087 |
| GA | DT | 0.9476 | 0.9489 | 0.9476 | 0.9478 | 1140 | 0.9524 | 0.9534 | 0.9524 | 0.9525 | 1085 |
| GA | RF | 0.9810 | 0.9856 | 0.9762 | 0.9808 | 1094 | 0.9810 | 1.0000 | 0.9667 | 0.9827 | 1051 |
| GA | SVM | 0.9048 | 0.9147 | 0.9048 | 0.9043 | 1089 | 0.9952 | 0.9954 | 0.9952 | 0.9952 | 1040 |
| PSO | KNN | 0.9286 | 0.9292 | 0.9286 | 0.9283 | 664 | 0.9667 | 0.9673 | 0.9667 | 0.9667 | 642 |
| PSO | DT | 0.9286 | 0.9299 | 0.9286 | 0.9282 | 770 | 0.9524 | 0.9527 | 0.9524 | 0.9524 | 639 |
| PSO | RF | 0.9857 | 0.9901 | 0.9762 | 0.9831 | 724 | 0.9857 | 0.9908 | 0.9714 | 0.9807 | 297 |
| PSO | SVM | 0.9000 | 0.9104 | 0.9000 | 0.8993 | 702 | 0.9905 | 0.9911 | 0.9905 | 0.9905 | 271 |

| Method | Classifier | SG1_noisy Acc | F1 | Pre | TPR | SNF | SG2_noisy Acc | F1 | Pre | TPR | SNF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| IBiEO-SP | KNN | 0.9571 | 0.9577 | 0.9571 | 0.9565 | 219 | 0.9571 | 0.9588 | 0.9571 | 0.9571 | 258 |
| IBiEO-SP | DT | 0.9619 | 0.9622 | 0.9619 | 0.9619 | 27 | 0.9571 | 0.9582 | 0.9571 | 0.9573 | 128 |
| IBiEO-SP | RF | 0.9714 | 0.9718 | 0.9714 | 0.9713 | 197 | 0.9857 | 0.9908 | 0.9857 | 0.9880 | 958 |
| IBiEO-SP | SVM | 0.5143 | 0.5210 | 0.5143 | 0.4133 | 42 | 0.5190 | 0.5858 | 0.5190 | 0.5000 | 20 |
| EO | KNN | 0.9476 | 0.9481 | 0.9476 | 0.9473 | 535 | 0.9667 | 0.9671 | 0.9667 | 0.9665 | 510 |
| EO | DT | 0.9619 | 0.9622 | 0.9619 | 0.9617 | 928 | 0.9381 | 0.9411 | 0.9381 | 0.9385 | 344 |
| EO | RF | 1.0000 | 1.0000 | 0.9952 | 0.9976 | 755 | 0.9143 | 0.9156 | 0.9143 | 0.9144 | 323 |
| EO | SVM | 0.8190 | 0.8948 | 0.8190 | 0.7831 | 864 | 0.6952 | 0.6759 | 0.6952 | 0.6269 | 749 |
| GA | KNN | 0.9524 | 0.9530 | 0.9524 | 0.9523 | 1179 | 0.9619 | 0.9622 | 0.9619 | 0.9617 | 1136 |
| GA | DT | 0.9429 | 0.9467 | 0.9429 | 0.9426 | 1220 | 0.9333 | 0.9349 | 0.9333 | 0.9332 | 1168 |
| GA | RF | 1.0000 | 1.0000 | 0.9905 | 0.9951 | 1136 | 0.9857 | 0.9908 | 0.9857 | 0.9880 | 1156 |
| GA | SVM | 0.8190 | 0.9004 | 0.8190 | 0.7845 | 1156 | 0.7286 | 0.8199 | 0.7286 | 0.6708 | 1154 |
| PSO | KNN | 0.9571 | 0.9578 | 0.9571 | 0.9569 | 778 | 0.9429 | 0.9432 | 0.9429 | 0.9425 | 628 |
| PSO | DT | 0.9714 | 0.9716 | 0.9714 | 0.9714 | 943 | 0.9381 | 0.9392 | 0.9381 | 0.9382 | 1095 |
| PSO | RF | 0.9952 | 1.0000 | 0.9905 | 0.9951 | 1424 | 0.9857 | 0.9954 | 0.9810 | 0.9878 | 1020 |
| PSO | SVM | 0.8238 | 0.8919 | 0.8238 | 0.7852 | 1356 | 0.7286 | 0.8308 | 0.7286 | 0.6680 | 1085 |