Article

Research on the Classification Method of Tea Tree Seeds Quality Based on Mid-Infrared Spectroscopy and Improved DenseNet

Di Deng, Hao Li, Jiawei Luo, Jiachen Jiang and Hongbo Mu
College of Science, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7336; https://doi.org/10.3390/app15137336
Submission received: 29 May 2025 / Revised: 17 June 2025 / Accepted: 27 June 2025 / Published: 30 June 2025

Abstract

Precise quality screening of tea tree seeds is crucial for the development of the tea industry. This study proposes a high-precision quality classification method for tea tree seeds that integrates mid-infrared (MIR) spectroscopy with an improved deep learning model. Four types of tea tree seeds in different states were prepared, and their spectral data were collected and preprocessed using Savitzky–Golay (SG) filtering and wavelet transform. To address the deficiencies of DenseNet121 in one-dimensional spectral processing, such as insufficient generalization ability and weak feature extraction, the ECA-DenseNet model is proposed. Building on DenseNet121, the Batch Channel Normalization (BCN) module was introduced to reduce dimensionality via 1 × 1 convolution while preserving feature extraction capability, the Attention–Convolution Mix (ACMix) module was integrated to combine convolution and self-attention, and the Efficient Channel Attention (ECA) mechanism was utilized to enhance feature discriminability. Experiments show that ECA-DenseNet achieves 99% accuracy, recall, and F1-score in classifying the four seed quality types, outperforming the original DenseNet121 as well as comparative machine learning and deep learning models. This study provides an efficient solution for tea tree seed detection and screening, and its modular design can serve as a reference for the spectral classification of other crops.

1. Introduction

The tea tree serves as the cornerstone of the tea industry, playing a pivotal role in economic and cultural domains [1]. The quality of its seeds is critical to industrial development, as moldy seeds exhibit low germination rates and vulnerable seedlings prone to diseases, while improperly soaked seeds suffer from difficult sprouting [2]. Therefore, the precise elimination of low-quality seeds is key to efficient tea garden cultivation and the high-quality development of the tea industry.
In recent years, spectral technologies have been widely applied to seed detection and screening. Yang et al. [3] proposed the MIVopt-SPA algorithm for feature wavelength extraction from near-infrared (NIR) spectra of seeds, achieving multi-level, accurate, non-destructive detection of seed viability. Kusumaningrum et al. [4] developed a non-destructive method using Fourier transform near-infrared (FT-NIR) spectroscopy to assess soybean seed viability, accurately distinguishing viable from non-viable seeds through spectral analysis combined with chemometrics. Cui et al. [5] studied single corn seed maturity detection based on hyperspectral imaging and transfer learning, establishing a model that achieves rapid, non-destructive maturity assessment. In practical applications, however, NIR spectroscopy is affected by factors such as temperature, light, moisture, and sample physical morphology (e.g., curvature and particle size), which easily bias the detection results [6]. Hyperspectral image classification faces challenges including high data dimensionality, high sample annotation costs, significant intra-class variation, and difficulties in spectral–spatial feature modeling, leading to insufficient model generalization [7].
The wavelength range of MIR spectroscopy typically spans 2500 to 25,000 nm. This region corresponds to the characteristic spectral zone of molecular functional groups: different chemical bonds or functional groups exhibit distinctive absorption frequencies in the MIR band, so spectral analysis can identify chemical species and functional groups [8]. Eevera et al. [9] utilized attenuated total reflection Fourier transform infrared (ATR–FTIR) spectroscopy to acquire spectral data of peanut seeds; by analyzing the correlation between specific wavelength bands and quality indicators such as germination rate and viability, they demonstrated its potential for rapid, non-destructive detection of peanut seed quality. Naibo et al. [10] employed mid-infrared diffuse reflection spectroscopy combined with stochastic gradient descent (SGD) preprocessing and support vector machine (SVM) models to achieve high-precision prediction of 11 nutrient concentrations in Ilex paraguariensis leaves. Andrade et al. [11] used ATR–FTIR coupled with chemometric tools to model the viability of artificially accelerated-aged corn seeds, achieving classification and prediction of seed viability grades through the correlation between spectral data and viability indicators.
While traditional MIR spectroscopy has achieved fruitful results in seed quality detection and other fields, deep learning has brought new research perspectives and technological breakthroughs to spectral analysis in recent years, thanks to its powerful feature learning capabilities. For example, Ma [12] employed near-infrared hyperspectral imaging (NIR–HSI) combined with deep learning methods such as convolutional neural networks (CNNs) to achieve rapid non-destructive prediction of seed viability, demonstrating high accuracy in classifying naturally aged seeds of Brassica juncea (Japanese mustard). Li et al. [13] used hyperspectral imaging and deep learning techniques to conduct classification studies on multi-year and multi-variety pumpkin seeds, achieving precise differentiation of seed types. Jin et al. [14] utilized NIR–HSI combined with deep learning models to identify five common rice seed varieties, with most models achieving classification accuracies exceeding 95%.
DenseNet offers denser connectivity than ResNet, clear classification accuracy advantages over MobileNet, and better parameter efficiency than traditional CNNs, particularly for spectral data classification. To achieve accurate screening of tea tree seeds using MIR spectroscopy, this study proposes an improved DenseNet121-based model (ECA-DenseNet) for classifying seeds of different quality levels. Specifically, the following tasks were carried out:
  • Spectral data acquisition and preprocessing:
An FTIR spectrometer was employed to gather spectral data of tea tree seeds across different states. The data were preprocessed via Savitzky–Golay (SG) filtering and wavelet transform to ensure data quality.
  • Model improvement targeting DenseNet121 limitations:
The model was enhanced by addressing DenseNet121’s shortcomings, including simplifying the architecture, replacing the convolutional kernels, adopting a new normalization method, introducing a novel module, optimizing the transition layers, and adjusting the classification strategy to improve the model performance.
  • Performance comparison and validation:
The improved model was compared with DenseNet121 and other relevant models. Its advantages were evaluated using metrics such as accuracy, Kappa coefficient, Matthews correlation coefficient (MCC), and confusion matrices, verifying the effectiveness of the improvement strategies.

2. Materials and Methods

2.1. Experimental Materials

The tea tree seeds used in this study were all sourced from Xinhe Town, Shuyang County, Suqian City, Jiangsu Province. Four types of typical samples were prepared through artificial treatments: (1) Dry and healthy seeds: fresh, healthy seeds were naturally ventilated and dried until the moisture content reached ≤12%, then stored at 15–20 °C and ≤50% relative humidity; (2) Soaked healthy seeds: fresh, healthy seeds were immersed in distilled water at 25 ± 1 °C for 72 h; (3) Soaked moldy seeds: seeds were first soaked in distilled water at 25 ± 1 °C for 72 h, then cultured at 25 ± 1 °C and 90 ± 5% relative humidity for 3–5 days until visible mold spots emerged; (4) Dry and moldy seeds: seeds were placed directly in an environment at 25 ± 1 °C and 90 ± 5% relative humidity for 5–7 days to induce mold growth.
The prepared samples are shown in Figure 1. A stratified random sampling method was employed to select 90 seed samples from each category, ensuring representativeness across the quality types (dry/soaked, healthy/moldy), yielding a total of 360 samples.

2.2. Spectral Data Acquisition

In this study, a Spectrum 400 Fourier transform infrared (FTIR) spectrometer (PerkinElmer, Waltham, MA, USA), controlled by Spectrum software, was used to collect spectral data. The spectral range was set to 4000–400 cm−1 with a scanning resolution of 1 cm−1. All measurements were conducted at room temperature (23 ± 2 °C) and a relative humidity below 50% to minimize environmental interference. The acquisition system is shown in Figure 2.
The sample powders were first dried under an infrared drying lamp. High-purity potassium bromide (KBr) powder and the dried sample powders were then weighed separately: approximately 100–200 mg of KBr and 1–2 mg of sample. Both were transferred to a mortar and gently ground to ensure thorough mixing. The mixed powder was poured into a tablet press mold and pressed in a hydraulic press at 15 tons for 30 s to form a thin pellet. A blank KBr pellet without sample was scanned before each batch to correct for background noise, and the sample pellet was then placed on the spectrometer's sample stage for scanning. Each sample underwent three scans under these conditions, and the average of the three measurements was taken as the spectrum for that sample, giving a total of 360 spectral datasets.

2.3. Data Preprocessing

The original spectra of the samples are shown in Figure 3. Spectral data may exhibit noise and baseline drift caused by instrument limitations, component aging, environmental electromagnetic interference, temperature, and humidity [15], all of which degrade analytical accuracy, so targeted preprocessing was performed. Different SG filtering [16] parameter combinations (window length w = 1, 3, 5; polynomial order p = 0, 1, 2) were compared on the spectral data, as shown in Table 1. A window of w = 1 gave insufficient noise reduction, while w = 5 tended to deform characteristic peaks; p = 0 fit the signal poorly, and p = 2 was prone to overfitting. Overall, with w = 3 and p = 1 the noise standard deviation decreased by 57% relative to w = 1, p = 1, while the characteristic peak retention rate remained at 95.3%. The smoothed spectra are shown in Figure 3. Even with the optimal parameters (w = 3, p = 1), SG filtering left residual noise and handled characteristic peaks imperfectly. Wavelet transform, by contrast, offers multi-resolution noise reduction, characteristic peak enhancement, and complex signal analysis [17]; this study therefore applied the db8 wavelet basis with a 3-level decomposition, and the reconstructed spectra highlight the key features, as shown in Figure 3. The processed spectral data were then partitioned into a training set (70%) and a test set (30%) by stratified random sampling on the class labels, so that the class distribution in both sets matches that of the original data and class balance is maintained.
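The preprocessing chain can be reproduced with standard scientific Python tools. The sketch below is a minimal illustration, not the authors' code: it assumes SciPy's savgol_filter for SG smoothing with w = 3 and p = 1, PyWavelets for the db8 three-level decomposition, soft thresholding with the universal threshold (an assumption; the paper does not state its thresholding rule), and scikit-learn for the stratified 70/30 split. Array shapes and variable names are placeholders.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter
from sklearn.model_selection import train_test_split

def preprocess_spectrum(spectrum: np.ndarray) -> np.ndarray:
    """SG smoothing (w=3, p=1) followed by 3-level db8 wavelet denoising."""
    smoothed = savgol_filter(spectrum, window_length=3, polyorder=1)
    coeffs = pywt.wavedec(smoothed, "db8", level=3)
    # Soft-threshold the detail coefficients to suppress residual noise
    # (universal threshold estimated from the finest detail level).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(smoothed)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, "db8")[: len(spectrum)]

# X: (360, n_points) spectra; y: four quality labels, 90 samples each.
X = np.random.rand(360, 3601)   # placeholder for the measured spectra
y = np.repeat(np.arange(4), 90)
X = np.apply_along_axis(preprocess_spectrum, 1, X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
```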

2.4. ECA-DenseNet

The core innovation of DenseNet lies in its dense connection mechanism, which promotes feature reuse, mitigates gradient vanishing, and reduces the number of parameters. It is composed of alternating Dense Blocks and transition layers [18]. However, DenseNet is primarily designed for two-dimensional (2D) data such as images, which limits it when processing the one-dimensional (1D) spectral data of tea tree seeds: 2D convolutions may fail to capture key features in 1D data, and 2D pooling can disrupt the wavelength order and spectral feature information. Additionally, its complex structure and large parameter count may lead to overfitting.
To address these challenges, this study improves DenseNet121. The resulting ECA-DenseNet, illustrated in Figure 4, consists of an input layer, a 1D convolutional layer, three Batch Attention–Dense Blocks (BA–Dense Blocks), transition layers, an ECA attention layer, an upsampling layer, and a classification layer, with dense connections implemented in a feedforward manner between layers. The input spectral data are first normalized by the input layer, then processed by Conv1 with 1D convolution and Batch Channel Normalization (BCN) to enhance robustness and generalization, followed by Rectified Linear Unit (ReLU) activation and max pooling for initial feature extraction. The features then enter the BA–Dense Blocks, where multiple convolutional operations and Mixed Self-Attention and Convolution (ACMix) modules capture local and global feature dependencies and strengthen feature correlations. After dimensionality reduction via the transition layers, the features are processed sequentially by ECA attention (filtering key features and suppressing redundancy), global pooling (aggregating information), and the classification layer (outputting results); a dropout rate of 0.3 in the fully connected layers disrupts feature co-adaptation and prevents overfitting on the limited sample size.
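As a structural illustration, the sketch below assembles a simplified 1D dense pipeline in PyTorch (the framework listed in Section 3.1). It is an interpretation of Figure 4 under stated simplifications, not the published implementation: the dense blocks are reduced to plain 1D convolutions with concatenative (dense) connections, a standard BatchNorm1d stands in for BCN, and the ACMix and ECA components described in the following subsections are omitted here; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MiniDenseBlock1D(nn.Module):
    """Toy dense block: each layer sees the concatenation of all earlier outputs."""
    def __init__(self, in_ch: int, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_ch + i * growth, growth, 3, padding=1)
            for i in range(layers))
        self.out_channels = in_ch + layers * growth

    def forward(self, x):
        for conv in self.convs:
            x = torch.cat([x, torch.relu(conv(x))], dim=1)  # dense connection
        return x

class SketchNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(1, 64, 7, stride=2, padding=3),
            nn.BatchNorm1d(64),            # BCN would replace this (Section 2.4.1)
            nn.ReLU(),
            nn.MaxPool1d(3, stride=2, padding=1))
        self.block = MiniDenseBlock1D(64)
        ch = self.block.out_channels
        self.transition = nn.Sequential(nn.Conv1d(ch, ch // 2, 1), nn.AvgPool1d(2))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Dropout(0.3),               # dropout rate from the text
            nn.Linear(ch // 2, num_classes))

    def forward(self, x):
        return self.head(self.transition(self.block(self.stem(x))))

logits = SketchNet()(torch.randn(8, 1, 3601))   # -> shape (8, 4)
```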

2.4.1. Batch Channel Normalization

When processing spectral data, traditional Batch Normalization (BN) [19] is limited by the batch size, restricting the exploitation of model capacity and hurting accuracy when the training and test distributions differ. In contrast, BCN [20] adopts a channel-independent normalization mechanism, which is better suited to the characteristics of spectral data.
In each training batch, BCN calculates an independent mean for each channel $j$:

$$\mu_j = \frac{1}{N \times H \times W} \sum_{i=1}^{N} \sum_{h=1}^{H} \sum_{w=1}^{W} x_{i,j,h,w}$$

and variance:

$$\sigma_j^2 = \frac{1}{N \times H \times W} \sum_{i=1}^{N} \sum_{h=1}^{H} \sum_{w=1}^{W} \left( x_{i,j,h,w} - \mu_j \right)^2$$

Here, $N$ is the batch size, i.e., the number of samples in a batch; $H$ and $W$ are the height and width of the feature map, respectively. For one-dimensional spectral data, $H$ can be regarded as 1 and $W$ corresponds to the feature dimension of the spectral data, namely the number of wavelengths; $x_{i,j,h,w}$ is the feature value at position $(h, w)$ in the $j$-th channel of the $i$-th sample. In this way, BCN accurately captures the distribution characteristics of each channel.
Next, BCN normalizes each channel:

$$\hat{x}_{n,c,l} = \frac{x_{n,c,l} - \mu_c}{\sqrt{\sigma_c^2 + \varepsilon}}$$

In this study, $\varepsilon$ is set to $10^{-5}$ to avoid division by zero. This standardizes the data within each channel to zero mean and unit variance, reducing internal covariate shift.
Finally, the normalized features are adjusted through a learnable scaling parameter $\gamma_c$ and offset parameter $\beta_c$:

$$y_{n,c,l} = \gamma_c \hat{x}_{n,c,l} + \beta_c$$

This channel-wise normalization accelerates convergence, acts as an implicit regularizer, and improves robustness to spectral variation.
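A minimal PyTorch sketch of the channel-wise normalization defined above, written for 1D spectral input of shape (N, C, L). It implements the printed equations directly rather than the full BCN of [20], and the module name is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ChannelNorm1d(nn.Module):
    """Per-channel normalization over batch and length, as in the equations above."""
    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1))   # scale gamma_c
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1))   # offset beta_c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Mean and (biased) variance per channel over batch N and length L.
        mu = x.mean(dim=(0, 2), keepdim=True)
        var = x.var(dim=(0, 2), unbiased=False, keepdim=True)
        x_hat = (x - mu) / torch.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

y = ChannelNorm1d(64)(torch.randn(32, 64, 901))  # (batch, channels, length)
```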

2.4.2. Mixed Self-Attention and Convolution

During spectral feature extraction, traditional convolutional operations capture only local features and struggle to handle long-range dependencies between different wavelength channels. The ACMix [21] module addresses this issue by combining local convolution with a global attention mechanism.
ACMix applies convolutional kernels of different scales, such as 3 × 1 and 5 × 1, in parallel to extract multi-scale local features of the spectral data:

$$C_i = \sum_{k \in K} \omega_k * x_i$$
where $\omega_k$ is a convolutional kernel and $K$ is the set of kernels. In parallel, a multi-head self-attention mechanism computes

$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left( \frac{Q K^{T}}{\sqrt{d_k}} \right) V$$

where $Q$, $K$, and $V$ are the query, key, and value matrices, respectively, and $d_k$ is the dimension of the key vectors. This mechanism captures global dependencies across channels.
To fuse local and global features effectively, ACMix employs a gating mechanism:

$$y = g(C) \odot C + \left( 1 - g(C) \right) \odot A$$

where $g(C)$ is the gating weight, $A$ is the self-attention output, and $\odot$ denotes element-wise multiplication. The gate adaptively balances local–global feature fusion in the spectral data, preserving fine details while capturing long-range dependencies.
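The gated fusion can be illustrated with a hedged 1D sketch: parallel 3 × 1 and 5 × 1 convolutions form the local branch C, standard multi-head self-attention forms the global branch A, and a sigmoid-gated 1 × 1 convolution produces g(C). The real ACMix [21] additionally shares 1 × 1 projections between the two branches, which this simplification omits.

```python
import torch
import torch.nn as nn

class GatedConvAttention1D(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.conv3 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv5 = nn.Conv1d(channels, channels, 5, padding=2)
        self.attn = nn.MultiheadAttention(channels, heads)
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):                      # x: (N, C, L)
        c = self.conv3(x) + self.conv5(x)      # multi-scale local features C
        seq = x.permute(2, 0, 1)               # (L, N, C) for the attention API
        a, _ = self.attn(seq, seq, seq)        # global dependencies A
        a = a.permute(1, 2, 0)                 # back to (N, C, L)
        g = self.gate(c)                       # gating weight g(C)
        return g * c + (1 - g) * a             # gated local-global fusion

y = GatedConvAttention1D(64)(torch.randn(8, 64, 225))
```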

2.4.3. ECA Attention Mechanism

When modeling spectral features, conventional convolutional neural networks tend to overlook nonlinear dependencies between channels, leading to redundant and inefficient features. In contrast, Efficient Channel Attention (ECA) [22] captures channel dependencies without dimensionality reduction, owing to its lightweight cross-channel interaction mechanism.
The operation begins by applying a 1D convolution along the channel dimension of the input feature map:

$$f = \sigma\left( F_{\mathrm{conv1d}}(X) \right)$$

where $X \in \mathbb{R}^{C \times H \times W}$ is the input feature, $C$ is the number of channels, and $H \times W$ denotes the spatial dimensions. $F_{\mathrm{conv1d}}$ is a 1D convolution with kernel size $k$ (in this study, $k = 3$), and $\sigma$ is an activation function such as the Sigmoid. This step captures local dependencies between adjacent channels, with the kernel size $k$ controlling the scope of cross-channel interaction.
The 1D convolution output then directly generates the channel attention weights $A \in \mathbb{R}^{C \times 1 \times 1}$, where each element $A_c$ is the importance score of the $c$-th channel:

$$A_c = \frac{1}{k} \sum_{i=-\lfloor k/2 \rfloor}^{\lfloor k/2 \rfloor} X_{c+i} \, \omega_i$$

where $\omega_i$ are the weights of the 1D convolution, which adaptively adjust the interaction strength between adjacent channels during training. Finally, the attention weights are multiplied with the original features, $Y = X \times A$, enabling the model to focus on wavelength channels with strong discriminative power, suppress noisy and redundant channels, and adaptively enhance or suppress channel-wise features. The module thus models nonlinear inter-channel dependencies and strengthens key feature responses with only a lightweight 1D convolution.
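A compact sketch of the ECA operation for 1D features: global average pooling, a k = 3 convolution across the channel dimension, a Sigmoid, and channel-wise reweighting (Y = X × A). This follows the standard formulation of [22]; its exact placement after the transition layers is as described in Section 2.4.

```python
import torch
import torch.nn as nn

class ECA1D(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        # A single shared 1D convolution slides across channels (no reduction).
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                    # x: (N, C, L)
        s = x.mean(dim=2)                    # global average pooling -> (N, C)
        a = self.conv(s.unsqueeze(1))        # treat C as a sequence -> (N, 1, C)
        a = self.sigmoid(a).transpose(1, 2)  # attention weights A -> (N, C, 1)
        return x * a                         # reweight channels: Y = X * A

y = ECA1D()(torch.randn(8, 64, 225))
```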

3. Results and Discussion

3.1. Experimental Environment

The experiments were conducted on a Windows 11 operating system developed by Microsoft (Redmond, WA, USA). On the hardware side, the device is equipped with 128 GB of RAM, an Intel(R) Xeon(R) Bronze 3204 CPU (Intel, Santa Clara, CA, USA), and an NVIDIA GeForce RTX 3090 graphics card with 24 GB of video memory (NVIDIA, Santa Clara, CA, USA). All Python code was run using PyCharm Version 2024.3.1.1 (JetBrains, Prague, Czech Republic), with the Python environment based on PyTorch-GPU 1.8.0 (Meta, Menlo Park, CA, USA) and Python 3.8.8 programming language (Python Software Foundation, USA). The CUDA computing platform version 11.1 (NVIDIA, Santa Clara, CA, USA) was utilized for GPU acceleration.

3.2. Hyperparameter Settings

The reasonable setting of hyperparameters plays a crucial role in the model’s performance and training effect [23]. We adjusted and optimized several key hyperparameters, with specific settings shown in Table 2:
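For concreteness, a training loop configured with the Table 2 values might look as follows. This is a hedged sketch, not the authors' script: the stand-in linear model and synthetic tensors are placeholders, and reading the regularization parameter 0.1 as Adam weight decay is an assumption.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(3601, 4)     # placeholder for the full ECA-DenseNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, weight_decay=0.1)
criterion = nn.CrossEntropyLoss()           # loss function from Table 2

X = torch.randn(252, 3601)                  # 70% of 360 samples (placeholder)
y = torch.randint(0, 4, (252,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

for epoch in range(200):                    # Epoch = 200
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```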

3.3. Evaluation Metrics

This study uses precision (P), recall (R), F1-score (F1), and accuracy [24] as evaluation metrics, defined as follows:

$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$F1 = \frac{2 \times P \times R}{P + R}$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
Kappa [25] measures the consistency between model predictions and actual results, ranging over [−1, 1]: values closer to 1 indicate better consistency, 0 corresponds to random guessing, and −1 to complete disagreement. It complements accuracy on imbalanced datasets.

$$\mathrm{Kappa} = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the proportion of correctly predicted samples (the accuracy) and $p_e$ is the proportion of agreement expected by random chance.
MCC [26] evaluates models using TP, TN, FP, and FN, and remains informative on imbalanced datasets. It ranges over [−1, 1]: 1 is optimal, 0 corresponds to random guessing, and −1 to complete misclassification.

$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$

where TP, FP, TN, and FN denote the numbers of true positives, false positives, true negatives, and false negatives, respectively.
The macro average [27] computes each metric (e.g., precision, recall, F1) independently for every class and averages them without weighting by sample size, reflecting the model's average performance across classes. For precision:

$$\mathrm{macro\ avg}_P = \frac{1}{n} \sum_{i=1}^{n} P_i$$

where $n$ is the number of classes and $P_i$ is the precision of the $i$-th class.
The weighted average [27] weights each class's metric by its sample proportion, so larger classes have greater influence, which accounts for class imbalance. For precision:

$$\mathrm{weighted\ avg}_P = \frac{\sum_{i=1}^{n} \left( P_i \times \mathrm{support}_i \right)}{\sum_{i=1}^{n} \mathrm{support}_i}$$

where $\mathrm{support}_i$ is the number of samples in the $i$-th class.
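All of the metrics above have standard scikit-learn implementations; the example below is a hedged illustration with placeholder labels rather than the study's actual predictions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             matthews_corrcoef, precision_recall_fscore_support)

y_true = np.repeat(np.arange(4), 27)   # 108 test samples, 4 balanced classes
y_pred = y_true.copy()
y_pred[0] = 1                          # one illustrative misclassification

acc = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
mcc = matthews_corrcoef(y_true, y_pred)
p_macro, r_macro, f1_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")
p_wtd, r_wtd, f1_wtd, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted")
```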

3.4. Chemical Analysis

In this study, mid-infrared chemical analysis was performed on the four types of samples. As shown in Figure 5, the chemical characteristics of the four types of tea tree seeds differ significantly and point to distinct physicochemical changes:
Dry and healthy tea tree seeds (HDCS): The O-H stretching vibration peak at 3379.8 cm−1 indicates the presence of hydroxyl-related substances in the seeds. The C-H asymmetric/symmetric stretching vibration peaks at 2856.2 cm−1 and 2926.1 cm−1 suggest the presence of lipids and long-chain alkanes. Additionally, the C-H bending vibration peak at 1380.6 cm−1, the potential ester C-O or amide III band characteristic peak at 1246.2 cm−1, and the polysaccharide C-O-C skeleton vibration peak at 1042.8 cm−1 collectively demonstrate the stable chemical composition and good structural properties of the seed components.
Soaked healthy tea tree seeds (SHCS): Compared to HDSC, the O-H stretching vibration peak at 3452.3 cm−1 exhibits a red shift, implying that the soaking environment promotes the synthesis of hydrophilic substances. The enhanced C-O-C asymmetric stretching vibration peak at 1162.8 cm−1 (originating from hemicellulose synthesis) and the sharp peak of long-chain lipid ordered structures at 722.1 cm−1 indicate that the soaking treatment facilitates hemicellulose synthesis while maintaining lipid structural stability.
Dry and moldy tea tree seeds (DMCS): In addition to the C-H stretching vibration peaks shared with HDCS, an ester carbonyl C=O peak appears at 1744.8 cm−1, signaling lipid peroxidation. The CH2/CH3 bending vibration peak at 1461.1 cm−1, the ester C-O stretching vibration peak at 1242.5 cm−1, the residual C-O-C glycosidic bond breakage peak at 1160.8 cm−1, and the -(CH2)n- out-of-plane rocking vibration peak at 722.1 cm−1 reveal degradation of lipid side chains and disruption of polysaccharide structures.
Soaked moldy tea tree seeds (SMCS): The O-H stretching vibration peak at 3385.8 cm−1 indicates H-bonded hydroxyl groups from polysaccharide hydrolysis products and fungal metabolites. The shift of the amide I band peak at 1655.2 cm−1 suggests changes in the secondary structure of proteins. The characteristic peaks at 862.9 cm−1 and 719.5 cm−1, potentially associated with carbohydrate structures (e.g., cellulose) and lipid crystallization, respectively, reflect the damage to the structure and properties of multiple seed components caused by soaking and mildew.
Differences in the characteristic peak position, intensity, and shape (e.g., amide I band shift, lipid peak sharpness) serve as qualitative indicators, providing a basis for constructing deep learning-based seed quality classification models.

3.5. Comparative Experiments and Analysis

3.5.1. Comparison with the Original Model DenseNet121

A comparison was conducted between ECA-DenseNet and DenseNet121, with the results shown in Table 3: the four metrics of ECA-DenseNet consistently exceed 99%, while those of DenseNet121 are consistently lower. Table 4 shows that both the macro-average and weighted-average metrics of ECA-DenseNet are around 99%, again higher than those of DenseNet121.
In summary, compared with DenseNet121, ECA-DenseNet achieves a relative improvement of roughly 3.6% in metrics such as P, R, and accuracy.
Confusion matrices are critical tools for evaluating classification models, intuitively displaying a model's prediction outcomes for each class [28]. The test-set confusion matrices of DenseNet121 and the improved ECA-DenseNet are compared in Figure 6. ECA-DenseNet correctly predicts all the samples in each class, whereas DenseNet121 exhibits a misclassification rate of up to 3.7% across the classes.

3.5.2. Comparison of ECA-DenseNet with Machine Learning Models

This section compares the performance of the ECA-DenseNet model with machine learning models including eXtreme Gradient Boosting (XGBoost) [29], Support Vector Machine (SVM) [30], Backpropagation (BP) [31], Random Forest (RF) [32], Gradient Boosting Machine (GBM) [33], and PLS-DA [34] on the test set. Evaluation metrics including accuracy (overall correctness rate of predictions), Kappa (agreement accounting for random chance), and MCC (a balanced measure robust to class imbalance) are employed, with results shown in Table 5 and Figure 7. In key classification metrics, ECA-DenseNet outperforms other machine learning models, with its accuracy, Kappa value, and MCC all exceeding 99%. The best-performing machine learning model, SVM, achieves corresponding metrics of 91.42%, 90.89%, and 91.12%, respectively. ECA-DenseNet outperforms SVM by approximately 8% across all the metrics.

3.5.3. Comparison with Deep Learning Models

Table 6 presents a comparison of ECA-DenseNet with deep learning models such as ResNet50 [35], MobileNetV3 [36], InceptionV3 [37], GhostNet [38], and GoogLeNet [39] using classification accuracy-related metrics. The accuracy of ECA-DenseNet is 99.42%, its Kappa value is 99.01%, and its MCC value is 99.11%. It outperforms other deep-learning models, being 3.32% higher than InceptionV3 and 1.7% higher than ResNet50, which are the top-performing models among them.
Training time is a critical indicator for measuring model training efficiency, which increases with the number of model parameters [40]. A comparison of training efficiency among deep learning models in this study is shown in Table 7. ECA-DenseNet has a parameter scale of 28,774 KB, which is approximately 69.1% smaller than the largest parameter scale of InceptionV3 (92,968 KB). Additionally, its training time is 310 s, approximately 66.3% shorter than InceptionV3’s 921 s. Even when compared to GhostNet, the model with the smallest parameter scale, ECA-DenseNet only experiences a training time increase of about 187.0% despite a parameter scale increase of approximately 309.2%. This demonstrates that ECA-DenseNet not only achieves superior classification performance but also maintains high training efficiency through lightweight design, striking an optimal balance between the model complexity and computational cost.
To visualize the comparative data more intuitively, a radar chart was plotted across five comprehensive dimensions. Among them, accuracy, MCC, and Kappa were normalized in the positive direction (higher values indicate better performance), while Training Time and Parameters were reverse-normalized (lower values indicate stronger capabilities), as shown in Figure 8. This chart vividly demonstrates the performance differences among ECA-DenseNet and five other deep learning models, highlighting ECA-DenseNet’s balanced superiority in accuracy, generalization ability (MCC/Kappa), and computational efficiency (training time/parameters). Specifically, ECA-DenseNet exhibits the largest radial coverage in the radar chart, indicating its dominance across all evaluated metrics and confirming its effectiveness as a lightweight and high-performance model for spectral data classification.
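The normalization behind the radar chart can be made explicit. The sketch below assumes simple min-max scaling, with reversal for the two cost-type axes; the exact scaling used for Figure 8 is not stated, and the input values are taken from Tables 6 and 7.

```python
import numpy as np

def minmax(values, reverse=False):
    """Scale to [0, 1]; reverse=True maps the smallest (best) value to 1."""
    v = np.asarray(values, dtype=float)
    scaled = (v - v.min()) / (v.max() - v.min())
    return 1.0 - scaled if reverse else scaled

# Order: ECA-DenseNet, ResNet50, MobileNetV3, InceptionV3, GhostNet, GoogLeNet
acc    = minmax([99.42, 97.72, 94.11, 96.10, 95.12, 89.72])
kappa  = minmax([99.01, 96.88, 95.03, 96.43, 94.93, 89.88])
mcc    = minmax([99.11, 97.18, 95.01, 96.32, 95.42, 89.88])
time_s = minmax([310, 911, 256, 921, 108, 301], reverse=True)
params = minmax([28774, 92148, 16381, 92968, 7031, 26562], reverse=True)
```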

3.6. Ablation Experiments

To comprehensively evaluate the effectiveness of the constructed ECA-DenseNet model and analyze the role of each module in the overall performance, this study conducted ablation experiments [41] on the three key modules: BCN, ACMix, and ECA. Model performance was examined in terms of Acc, Kappa, MCC, Parameters, Size [42], and FLOPs [42]. The experiments adopted both forward and reverse ablation to quantitatively analyze the contribution of each module, with the results shown in Table 8.
In terms of accuracy, enabling all three modules (BCN, ACMix, ECA) reached the highest level of 99.42%, surpassing all other combinations; with only single modules or partial combinations enabled, the accuracy was lower (for example, 94.85% with no modules enabled). Similar trends hold for the consistency coefficient (Kappa) and the Matthews correlation coefficient (MCC): with all three modules enabled, Kappa reached 99.01% and MCC 99.11%, both optimal among all combinations.
Regarding resource consumption, Parameters, Size, and FLOPs all increased as modules were enabled. The parameter count rose from 26.1 M with no modules to 28.1 M with all three, the model size from 3.35 M to 3.92 M, and the floating-point operations (FLOPs) from 5.5 M to 6.5 M. Thus, while the introduced modules significantly improve performance, they inevitably increase model complexity and resource requirements.
The ablation experiments validate the rationality and effectiveness of the ECA-DenseNet design, which substantially enhances performance while only slightly increasing size and computational cost. By integrating lightweight modules and attention mechanisms into DenseNet, the model achieves efficient and accurate identification of tea tree seed quality classes.

4. Conclusions

4.1. Comparative Analysis of the Models

Manual screening of tea tree seeds (e.g., visual inspection and dissection) suffers from low efficiency, reliance on experience, and difficulty in identifying early-stage mildew and internal damage. The combination of MIR spectroscopy and deep learning provides an efficient alternative for tea tree seed identification.
The ECA-DenseNet model has shown significant performance improvements. Specifically, its accuracy has been enhanced from 95.20% to 99.42%, representing a notable increase of 4.43%. Meanwhile, the F1-score, precision (P), and recall (R) have, respectively, increased by 3.09%, 3.60%, and 3.16%. Notably, the misjudgment rate has been reduced from a maximum of 3.7% to 0%. When compared with machine learning algorithms, ECA-DenseNet outperforms the best-performing SVM (91.42%) by 8% in accuracy. Its Kappa value (99.01%) and Matthews Correlation Coefficient (MCC, 99.11%) are also 8.12% and 7.99% higher than those of SVM, respectively. In the comparison with deep learning models, ECA-DenseNet achieves a 3.32% improvement in accuracy over InceptionV3 (96.10%), while only requiring 31% of its parameters and reducing the training time by 66.3%. Ablation experiments further indicate that the synergistic effect of the three modules increases the accuracy by 4.57% compared to the model without these modules. These results collectively demonstrate ECA-DenseNet’s remarkable advantages in classification accuracy, generalization ability, and computational efficiency, highlighting that each module plays a critical complementary role in enhancing the model’s performance.
Therefore, the constructed ECA-DenseNet model has advanced the technological progress in tea tree seed identification. Its core advantage lies in the integration of the molecular-level detection capability of MIR spectroscopy and the automated feature learning capability of deep learning, providing an accurate screening technical pathway for agricultural production practices.

4.2. Future Work

This study has certain limitations: the tea tree seeds were sourced from a single region, and the FTIR workflow used here requires sample pretreatment such as grinding and pelleting, which limits rapid on-site detection. In future research, we will validate and optimize the model with seeds from multiple geographical areas. In addition, integrating portable spectral devices will facilitate the practical application of this technology in field detection and in the intelligent management of germplasm resources. These efforts will not only enhance product quality and screening efficiency in the tea industry but also provide a reference for the quality inspection of other crops.

Author Contributions

Conceptualization, H.L. and D.D.; methodology, H.L.; software, D.D.; validation, D.D.; formal analysis, D.D.; investigation, D.D. and H.L.; resources, H.M.; data acquisition, D.D., H.L., J.L. and J.J.; preparation of the original draft, D.D. and J.L.; review and editing, H.L. and H.M.; visualization, D.D.; supervision, H.M. and H.L.; project administration, H.M.; funding acquisition, H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Fundamental Research Funds for the Central Universities (No. 2572023DJ02) and by the Northeast Forestry University College Student Innovation and Entrepreneurship Training Program Project: DCLXY-2025024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data from this study are available from the corresponding author upon reasonable request; because of privacy restrictions, they are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xia, E.-H.; Zhang, H.-B.; Sheng, J.; Li, K.; Zhang, Q.-J.; Kim, C.; Zhang, Y.; Liu, Y.; Zhu, T.; Li, W.; et al. The Tea Tree Genome Provides Insights into Tea Flavor and Independent Evolution of Caffeine Biosynthesis. Mol. Plant 2017, 10, 866–877. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, H.-J.; Sung, Y. Biocontrol of Mildew with Bacillus Subtilis in Bitter Gourd (Momordica charantia L.) Seeds during Germination. Sci. Hortic. 2011, 130, 38–42. [Google Scholar] [CrossRef]
  3. Yang, D.; Li, A.; Liu, J.; Chen, Z.; Shi, C.; Hu, J. Optimization of Seed Vigor Near-Infrared Detection by Coupling Mean Impact Value with Successive Projection Algorithm. Spectrosc. Spectr. Anal. 2022, 42, 3135–3142. [Google Scholar] [CrossRef]
  4. Kusumaningrum, D.; Lee, H.; Lohumi, S.; Mo, C.; Kim, M.S.; Cho, B.-K. Non-Destructive Technique for Determining the Viability of Soybean (Glycine max) Seeds Using FT-NIR Spectroscopy. J. Sci. Food Agric. 2018, 98, 1734–1742. [Google Scholar] [CrossRef]
  5. Cui, C.; Wu, J.; Zhang, Q.; Yu, L.; Sun, X.; Liu, C.; Yang, Y. Maturity Detection of Single Maize Seeds Based on Hyperspectral Imaging and Transfer Learning. Infrared Phys. Technol. 2024, 138, 105242. [Google Scholar] [CrossRef]
  6. Research Progress on Influencing Factors and Correction Methods of Near Infrared Spectroscopy and Imaging-All Databases. Available online: https://webofscience.clarivate.cn/wos/alldb/full-record/CSCD:7695089 (accessed on 26 May 2025).
  7. Hyperspectral Image Classification: Potentials, Challenges, and Future Directions—Datta—2022—Computational Intelligence and Neuroscience—Wiley Online Library. Available online: https://onlinelibrary.wiley.com/doi/10.1155/2022/3854635 (accessed on 26 May 2025).
  8. Zhao, Y.; Kusama, S.; Furutani, Y.; Huang, W.-H.; Luo, C.-W.; Fuji, T. High-Speed Scanless Entire Bandwidth Mid-Infrared Chemical Imaging. Nat. Commun. 2023, 14, 3929. [Google Scholar] [CrossRef]
  9. Eevera, T.; Chinnasamy, G.P.; Venkatesan, S.; Navamaniraj, K.N.; Albert, V.A.; Anandhan, J. Attenuated Total Reflectance—Fourier Transform Infrared (ATR-FTIR) Spectroscopy: A Tool to Determine Groundnut Seed Quality. Legume Res. 2024, 47, 1165–1171. [Google Scholar] [CrossRef]
  10. Naibo, G.; de São José, J.F.B.; Pesini, G.; Chemin, C.; Lisboa, B.; Kayser, L.; Abichequer, A.D.; Moura-Bueno, J.M.; Ramon, R.; Tiecher, T. Combining Mid-Infrared Spectroscopy and Machine Learning to Estimate Nutrient Content in Plant Tissues of Yerba Mate (Ilex paraguariensis A. St. Hil.). J. Food Compos. Anal. 2024, 128, 106008. [Google Scholar] [CrossRef]
  11. Andrade, G.C.; Medeiros Coelho, C.M.; Uarrota, V.G. Modelling the Vigour of Maize Seeds Submitted to Artificial Accelerated Ageing Based on ATR-FTIR Data and Chemometric Tools (PCA, HCA and PLS-DA). Heliyon 2020, 6, e03477. [Google Scholar] [CrossRef]
  12. Ma, T.; Tsuchikawa, S.; Inagaki, T. Rapid and Non-Destructive Seed Viability Prediction Using near-Infrared Hyperspectral Imaging Coupled with a Deep Learning Approach. Comput. Electron. Agric. 2020, 177, 105683. [Google Scholar] [CrossRef]
  13. Li, X.; Feng, X.; Fang, H.; Yang, N.; Yang, G.; Yu, Z.; Shen, J.; Geng, W.; He, Y. Classification of Multi-Year and Multi-Variety Pumpkin Seeds Using Hyperspectral Imaging Technology and Three-Dimensional Convolutional Neural Network. Plant Methods 2023, 19, 82. [Google Scholar] [CrossRef] [PubMed]
  14. Jin, B.; Zhang, C.; Jia, L.; Tang, Q.; Gao, L.; Zhao, G.; Qi, H. Identification of Rice Seed Varieties Based on Near-Infrared Hyperspectral Imaging Technology Combined with Deep Learning. ACS Omega 2022, 7, 4735–4749. [Google Scholar] [CrossRef]
  15. Hong, Y.; Liu, Y.; Chen, Y.; Liu, Y.; Yu, L.; Liu, Y.; Cheng, H. Application of Fractional-Order Derivative in the Quantitative Estimation of Soil Organic Matter Content through Visible and near-Infrared Spectroscopy. Geoderma 2019, 337, 758–769. [Google Scholar] [CrossRef]
  16. Rahman, M.A.; Rashid, M.A.; Ahmad, M. Selecting the Optimal Conditions of Savitzky-Golay Filter for fNIRS Signal. Biocybern. Biomed. Eng. 2019, 39, 624–637. [Google Scholar] [CrossRef]
  17. Ehrentreich, F.; Nikolov, S.G.; Wolkenstein, M.; Hutter, H. The Wavelet Transform: A New Preprocessing Method for Peak Recognition of Infrared Spectra. Mikrochim. Acta 1998, 128, 241–250. [Google Scholar] [CrossRef]
  18. Densely Connected Convolutional Networks|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/8099726 (accessed on 26 May 2025).
  19. Group Normalization|SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-3-030-01261-8_1 (accessed on 26 May 2025).
  20. Khaled, A.; Li, C.; Ning, J.; He, K. BCN: Batch Channel Normalization for Image Classification. arXiv 2023, arXiv:2312.00596. [Google Scholar]
  21. Pan, X.; Ge, C.; Lu, R.; Song, S.; Chen, G.; Huang, Z.; Huang, G. On the Integration of Self-Attention and Convolution. arXiv 2022, arXiv:2111.14556. [Google Scholar]
  22. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9156697 (accessed on 26 May 2025).
  23. Yang, L.; Shami, A. On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
  24. Pattern Recognition and Machine Learning (Information Science and Statistics): |Guide Books|ACM Digital Library. Available online: https://dl.acm.org/doi/10.5555/1162264 (accessed on 26 May 2025).
  25. Di Eugenio, B.; Glass, M. The Kappa Statistic: A Second Look. Comput. Linguist. 2004, 30, 95–101. Available online: https://dl.acm.org/doi/10.1162/089120104773633402 (accessed on 26 May 2025). [CrossRef]
  26. The Matthews Correlation Coefficient (MCC) Should Replace the ROC AUC as the Standard Metric for Assessing Binary Classification—PubMed. Available online: https://pubmed.ncbi.nlm.nih.gov/36800973/ (accessed on 26 May 2025).
  27. Alswaidan, N.; Menai, M.E.B. Hybrid Feature Model for Emotion Recognition in Arabic Text. IEEE Access 2020, 8, 37843–37854. [Google Scholar] [CrossRef]
  28. Duntsch, I.; Gediga, G. Indices for Rough Set Approximation and the Application to Confusion Matrices. Int. J. Approx. Reasoning 2020, 118, 155–172. [Google Scholar] [CrossRef]
  29. XGBoost|Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Available online: https://dl.acm.org/doi/10.1145/2939672.2939785 (accessed on 26 May 2025).
  30. Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  31. Zheng, L. Research on Application of Improved Genetic Algorithm and Bp Neural Network in Air Quality Evaluation. Fresenius Environ. Bull. 2022, 31, 6043–6052. [Google Scholar]
  32. Speiser, J.L.; Miller, M.E.; Tooze, J.; Ip, E. A Comparison of Random Forest Variable Selection Methods for Classification Prediction Modeling. Expert. Syst. Appl. 2019, 134, 93–101. [Google Scholar] [CrossRef] [PubMed]
  33. Zhou, J.; Li, E.; Yang, S.; Wang, M.; Shi, X.; Yao, S.; Mitri, H.S. Slope Stability Prediction for Circular Mode Failure Using Gradient Boosting Machine Approach Based on an Updated Database of Case Histories. Saf. Sci. 2019, 118, 505–518. [Google Scholar] [CrossRef]
  34. Lee, L.C.; Liong, C.-Y.; Jemain, A.A. Partial Least Squares-Discriminant Analysis (PLS-DA) for Classification of High-Dimensional (HD) Data: A Review of Contemporary Practice Strategies and Knowledge Gaps. Analyst 2018, 143, 3526–3539. [Google Scholar] [CrossRef]
  35. Feng, X.; Gao, X.; Luo, L. A ResNet50-Based Method for Classifying Surface Defects in Hot-Rolled Strip Steel. Mathematics 2021, 9, 2359. [Google Scholar] [CrossRef]
  36. Searching for MobileNetV3|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9008835 (accessed on 26 May 2025).
  37. Rethinking the Inception Architecture for Computer Vision|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7780677 (accessed on 26 May 2025).
  38. GhostNet: More Features From Cheap Operations|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/9157333 (accessed on 26 May 2025).
  39. Going Deeper with Convolutions|IEEE Conference Publication|IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/7298594 (accessed on 26 May 2025).
  40. Gupta, S.; Agrawal, A.; Gopalakrishnan, K.; Narayanan, P. Deep Learning with Limited Numerical Precision. arXiv 2015, arXiv:1502.02551. [Google Scholar]
  41. Gong, H.; Xing, H.; Yu, Y.; Liang, Y. A Combined Model Using Secondary Decomposition for Crude Oil Futures Price and Volatility Forecasting: Analysis Based on Comparison and Ablation Experiments. Expert Syst. Appl. Int. J. 2024, 252, 124196. Available online: https://dl.acm.org/doi/10.1016/j.eswa.2024.124196 (accessed on 26 May 2025). [CrossRef]
  42. Guan, H.; Deng, H.; Ma, X.; Zhang, T.; Zhang, Y.; Zhu, T.; Zhou, H.; Gu, Z.; Lu, Y. A Corn Canopy Organs Detection Method Based on Improved DBi-YOLOv8 Network. Eur. J. Agron. 2024, 154, 127076. [Google Scholar] [CrossRef]
Figure 1. Four tea tree seed samples in different states: (a) dry and healthy tea tree seeds; (b) soaked healthy tea tree seeds; (c) dry and moldy tea tree seeds; (d) soaked moldy tea tree seeds.
Figure 2. MIR data sampling system.
Figure 3. (a) Raw acquired spectrogram; (b) spectrogram smoothed by SG filtering; (c) final processed spectrogram.
Figure 4. Diagram of the ECA-DenseNet model.
Figure 5. Infrared chemical analysis spectrograms of the four types of tea tree seed samples.
Figure 6. (a) Confusion matrix of ECA-DenseNet; (b) confusion matrix of DenseNet121.
Figure 7. Scatter plot for multi-performance comparison of ECA-DenseNet and other machine learning models.
Figure 8. Radar chart for multi-performance comparison of ECA-DenseNet and other deep learning models.
Table 1. Comparison of the influence of SG filter window length and polynomial order parameter combinations on spectral data processing indicators.

| Parameter Combination | Noise Standard Deviation (σ) | Feature Peak Retention Rate (%) | Peak Position Shift (nm) | Baseline Fluctuation Amplitude (ΔR) |
| --- | --- | --- | --- | --- |
| w = 1, p = 1 | 0.042 | 98.7 ± 1.2 | ±0.5 | 0.025 |
| w = 3, p = 1 | 0.018 | 95.3 ± 2.1 | ±1.0 | 0.008 |
| w = 5, p = 1 | 0.009 | 82.6 ± 3.8 | ±2.3 | 0.005 |
| w = 3, p = 0 | 0.021 | 93.1 ± 2.8 | ±1.2 | 0.032 |
| w = 3, p = 2 | 0.017 | 92.4 ± 3.5 | ±1.1 | 0.019 |
Table 2. Training hyperparameters.

| Parameter | Value |
| --- | --- |
| Epoch | 200 |
| Batch size | 32 |
| Learning rate | 1 × 10−5 |
| Regularization parameter | 0.1 |
| Optimizer | Adam |
| Activation function | ReLU |
| Loss function | Cross-entropy |
Table 3. Data table of evaluation metrics for ECA-DenseNet and DenseNet121 in multi-class tasks.

| Category | Model | P (%) | R (%) | F1 (%) | Accuracy (%) |
| --- | --- | --- | --- | --- | --- |
| HDCS | ECA-DenseNet | 99.18 | 99.18 | 99.30 | 99.41 |
| HDCS | DenseNet121 | 95.12 | 95.36 | 96.27 | 94.96 |
| SHCS | ECA-DenseNet | 99.11 | 99.18 | 99.42 | 99.52 |
| SHCS | DenseNet121 | 96.63 | 96.34 | 96.36 | 95.15 |
| SMCS | ECA-DenseNet | 99.84 | 99.00 | 99.13 | 99.33 |
| SMCS | DenseNet121 | 95.56 | 96.52 | 96.24 | 95.36 |
| DMCS | ECA-DenseNet | 99.35 | 99.10 | 99.24 | 99.43 |
| DMCS | DenseNet121 | 96.34 | 96.12 | 96.33 | 95.34 |
Table 4. Macro and weighted average evaluation metrics for ECA-DenseNet and DenseNet121.

| Evaluation Indicator | Model | P (%) | R (%) | F1 (%) | Accuracy (%) |
| --- | --- | --- | --- | --- | --- |
| macro avg | ECA-DenseNet | 99.37 | 99.12 | 99.27 | 99.42 |
| macro avg | DenseNet121 | 95.91 | 96.09 | 96.30 | 95.20 |
| weighted avg | ECA-DenseNet | 99.36 | 99.12 | 99.28 | 99.42 |
| weighted avg | DenseNet121 | 95.90 | 96.07 | 96.29 | 95.19 |
Table 5. Performance comparison of ECA-DenseNet and other machine learning models.

| Model | Accuracy (%) | Kappa (%) | MCC (%) |
| --- | --- | --- | --- |
| ECA-DenseNet | 99.42 | 99.01 | 99.11 |
| XGBoost | 80.71 | 75.28 | 76.79 |
| SVM | 91.42 | 90.89 | 91.12 |
| BP | 72.74 | 70.44 | 70.44 |
| RF | 84.92 | 83.17 | 83.77 |
| GBM | 87.22 | 86.42 | 86.42 |
| PLS-DA | 82.22 | 83.41 | 87.14 |
Table 6. Performance comparison of ECA-DenseNet and other deep learning models.

| Model | Accuracy (%) | Kappa (%) | MCC (%) |
| --- | --- | --- | --- |
| ECA-DenseNet | 99.42 | 99.01 | 99.11 |
| ResNet50 | 97.72 | 96.88 | 97.18 |
| MobileNetV3 | 94.11 | 95.03 | 95.01 |
| InceptionV3 | 96.10 | 96.43 | 96.32 |
| GhostNet | 95.12 | 94.93 | 95.42 |
| GoogLeNet | 89.72 | 89.88 | 89.88 |
Table 7. Comparison of training time and parameter scale between ECA-DenseNet and other deep learning models.

| Model | Training Time (Seconds) | Parameters (KB) |
| --- | --- | --- |
| ECA-DenseNet | 310 | 28,774 |
| ResNet50 | 911 | 92,148 |
| MobileNetV3 | 256 | 16,381 |
| InceptionV3 | 921 | 92,968 |
| GhostNet | 108 | 7031 |
| GoogLeNet | 301 | 26,562 |
Table 8. Performance parameter comparison of BCN/ACMix/ECA ablation experiments.

| BCN | ACMix | ECA | Accuracy (%) | Kappa (%) | MCC (%) | Parameters (M) | Size (M) | FLOPs (M) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✗ | ✗ | ✗ | 94.85 | 94.83 | 94.92 | 26.1 | 3.35 | 5.5 |
| ✓ | ✗ | ✗ | 96.82 | 96.62 | 96.36 | 26.3 | 3.52 | 5.6 |
| ✗ | ✓ | ✗ | 97.12 | 96.88 | 96.76 | 27.6 | 3.71 | 6.2 |
| ✗ | ✗ | ✓ | 96.15 | 95.89 | 95.98 | 26.4 | 3.47 | 5.7 |
| ✓ | ✓ | ✓ | 99.42 | 99.01 | 99.11 | 28.1 | 3.92 | 6.5 |
| ✗ | ✓ | ✓ | 98.64 | 98.72 | 98.34 | 27.9 | 3.86 | 6.4 |
| ✓ | ✗ | ✓ | 98.52 | 98.09 | 98.66 | 26.6 | 3.52 | 5.8 |
| ✓ | ✓ | ✗ | 98.67 | 98.57 | 98.46 | 27.8 | 3.80 | 6.3 |
