Article

PlantClassiNet: A Dual-Modal Fine-Tuning Framework for CNN-Based Plant Disease Classification

School of Management Science and Engineering, Anhui University of Finance and Economics, Hongye Road, Bengbu 233000, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 170; https://doi.org/10.3390/app16010170
Submission received: 21 October 2025 / Revised: 16 December 2025 / Accepted: 17 December 2025 / Published: 23 December 2025
(This article belongs to the Special Issue Advanced Image Analysis and Processing Technologies and Applications)

Abstract

Although Convolutional Neural Networks (CNNs) have delivered state-of-the-art accuracy in plant disease classification, their deployment is still hindered by data scarcity, computational cost, and architectural heterogeneity. Transfer learning from models pre-trained on large-scale datasets alleviates these issues, yet generic feature extraction suffers from domain shift, while indiscriminate fine-tuning risks overfitting and elevated training budgets. To address these limitations, PlantClassiNet is implemented as a unified framework. This framework enables systematic comparative analysis of six CNN architectures, AlexNet, ResNet50, InceptionV3, MobileNetV3Small, DenseNet121 and EfficientNetB0, across three publicly available datasets: PlantVillage, PlantLeaves and Eggplant. Two alternative fine-tuning approaches are proposed: Adaptive Fine-tuning (AdapFitu), which adaptively determines the depth of unfrozen layers and the learning rate and reinitializes selected layers, and a fixed-parameter baseline, which first trains only the newly added classifier with the convolutional backbone frozen and then unfreezes a fixed number of layers for retraining. Extensive experiments demonstrate that the large models AlexNet, ResNet50, and InceptionV3 achieve test accuracy exceeding 98.74% on the sizable PlantVillage dataset, whereas the lightweight counterparts MobileNetV3Small, DenseNet121, and EfficientNetB0 reach 99.79% ± 0.21% accuracy on the smaller Eggplant collection after fine-tuning.

1. Introduction

Plant diseases are caused by various pathogens, including fungi, bacteria, viruses, and nematodes, leading to significant yield losses and economic impacts. Traditional methods for plant disease detection, such as visual inspection and laboratory testing, are time-consuming and labor-intensive. With the advent of deep learning, computer vision techniques have shown great potential in automating and enhancing plant disease detection.
CNNs have been widely used to classify plant diseases based on leaf images [1,2,3,4,5,6,7,8,9,10]. However, training deep CNNs from scratch requires a large amount of labeled data and computational resources.
Transfer learning, which leverages pre-trained models on large-scale datasets, has emerged as an effective solution to address these challenges [11,12,13,14]. Data scarcity in real-world settings hinders direct large-scale model training. Meanwhile, domain shifts from lab environments like PlantVillage’s clean images to field conditions drastically reduce model robustness. Moreover, existing approaches frequently suffer from limitations such as regional bias in training data, insufficient generalization across diverse environmental conditions, and computational inefficiency for real-time deployment.
To resolve these challenges, fine-tuning pre-trained models has become a promising paradigm. By leveraging knowledge transferred from large-scale generic datasets, models can achieve high accuracy with limited labeled data while improving generalization [15,16,17,18]. However, existing studies fall short in providing a systematic analysis of critical factors, such as the selection between pre-trained architectures ranging from lightweight to large-scale, adaptation strategies applied layer by layer, and the effects of domain-specific augmentation techniques.
To address these issues, we designed a unified framework, PlantClassiNet, that systematically compares six pre-trained models. The fine-tuning configuration either adopts the novel AdapFitu or uses fixed empirical parameters. The presented approach leverages the rich feature representations learned by pre-trained models while adapting them to the specific characteristics of plant disease datasets, enhancing feature extraction and classification accuracy while maintaining computational efficiency.
The main contributions are summarized as follows:
(1)
Comprehensive Performance Benchmarking: An extensive comparative analysis of six CNN models demonstrates that transfer learning with fine-tuning, where the last ten layers are unfrozen and retrained for 100 epochs, achieves exceptional mean classification accuracies of 99.35%, 99.19%, and 88.48% on the three datasets, highlighting the impact of dataset characteristics and model architecture on feature extraction and classification precision.
(2)
Lightweight Model Efficiency: MobileNetV3Small, DenseNet121, and EfficientNetB0 achieve competitive accuracy with reduced computational complexity, significantly accelerating inference time and making them ideal for real-time deployment on edge devices in resource-constrained agricultural environments.
(3)
Practical Implications: This study provides actionable guidance for selecting optimal deep learning models based on dataset size, disease complexity, and hardware limitations, advancing automated plant disease diagnostics and demonstrating the transformative potential of deep learning in enhancing plant health monitoring and fostering sustainable agricultural practices through rapid, scalable, and precise disease detection.
The remainder of this paper is organized as follows. Section 2 describes related work. Section 3 describes the datasets and the proposed methods used in this work. Section 4 explains and discusses the experimental setup and results. Section 5 summarizes the conclusions and future work.

2. Related Work

In recent years, deep learning techniques have gained prominence in plant disease detection and classification. CNNs have emerged as a powerful tool for vision-based tasks, demonstrating the ability to discern crucial features from images efficiently. Several studies have proposed CNN-based architectures for plant disease detection, achieving high accuracy and generalization capabilities.
For example, Shewale and Daruwala designed a CNN architecture for classifying nine tomato plant diseases [4]. The model incorporated advanced pre-processing techniques and hyperparameter tuning, outperforming existing models in computational efficiency and classification accuracy with a maximum accuracy of 99.81%. Khaki et al. proposed a hybrid CNN-RNN model for crop yield prediction, achieving a validation RMSE of 11.48 bushels per acre for corn and 3.85 for soybean, outperforming traditional models such as LASSO and random forest [2]. Argüeso et al. investigated few-shot learning (FSL) for plant leaf disease classification using a Triplet loss with a CNN on the PlantVillage dataset [3]. The Triplet loss achieved 94.0% accuracy with the full dataset and 90.0% with 80 samples per class, outperforming the Contrastive loss; most classes had F-scores above 85%, with four exceeding 90%, highlighting FSL’s effectiveness in reducing data requirements for plant disease detection. Bedi et al. developed the PDSE-Lite framework using CNNs for detecting plant diseases and estimating severity from digital leaf images [19]. Experimental results showed 98.35% accuracy in disease detection and strong performance in severity estimation, with a 99% confidence interval from t-test analysis.
Despite progress, early CNN models suffered from data hunger, high computational costs, and architectural design challenges, limiting their field applicability.
To mitigate resource limitations, Mohanty et al. demonstrated that transfer learning from ImageNet to PlantVillage on AlexNet achieved an overall accuracy of 99.35% [1]. However, field testing revealed a significant performance drop to 31.4% accuracy, highlighting critical challenges in deploying deep learning models for practical plant disease diagnosis applications. Bedi and Gole applied five widely used predefined CNN architectures, AlexNet, LeNet5, GoogLeNet, VGG16, and ResNet50, to assess their performance in identifying bacterial spot disease in peach plants [20]. As per their paper, the AlexNet CNN model outperformed other models with 98.5% testing accuracy. In another work, Atila et al. employed EfficientNet for plant leaf disease classification, eliminating manual feature extraction while achieving high accuracy [12]. On PlantVillage, when comparing EfficientNet-B4/B5 with state-of-the-art models, their transfer learning approach utilizing fully trainable layers achieved 99.91% accuracy on the original dataset and 99.97% accuracy on the augmented dataset, showcasing superior performance. This work highlights scalable architectures’ potential in agricultural diagnostics, establishing a key benchmark.
However, feature extraction using pre-trained models avoids overfitting on small datasets by freezing convolutional weights but introduces new limitations, including poor domain adaptability and performance ceilings. Fine-tuning pre-trained models significantly improves domain adaptation through layered parameter optimization.
Iftikhar et al. introduced an enhanced CNN model for plant disease detection, achieving 98.17% accuracy in identifying fungal diseases [17]. The model is fine-tuned through hyperparameter optimization, adjusting learning rates, dropout rates, and batch sizes to maximize performance. It evaluated ResNet50, DenseNet121, AlexNet, VGG16, and MobileNetV2 pre-trained models, ensuring effective integration into a user-friendly mobile application for real-time disease management. Too et al. fine-tuned DenseNet, Inception-v4, VGG16, and ResNet for plant disease classification by replacing their final layers and training with SGD [15]. Results show that ResNet and DenseNet outperform others, achieving over 90% classification accuracy within 10 training epochs, demonstrating transfer learning’s effectiveness for agricultural applications. Prashanthi et al. employed ETPLDNet and LEViT models integrating CNN-ViT hybrid architectures, achieving 99.02% accuracy on 38 leaf disease classes through fine-tuned hyperparameter optimization and Grad-CAM interpretability [18]. The proposed approach surpasses VGG16 with 87.99% accuracy and InceptionV3 with 91.84% accuracy, delivering both high precision and explainability for agricultural diagnostics. Nag et al. proposed a tomato leaf disease detection system fine-tuning five CNN architectures: AlexNet, ResNet50, SqueezeNet, VGG19, and DenseNet121 [16]. Leveraging transfer learning and data augmentation on a dataset of 10 disease classes, their framework achieved over 99% classification accuracy using ResNet50, VGG19, and DenseNet121. The study also introduced a real-time mobile application with multilingual support, demonstrating its viability for field deployment in precision agriculture.
Fine-tuning pre-trained models significantly improves domain adaptation through layered parameter optimization; however, it triggers challenges like catastrophic forgetting and overfitting on limited data, prompting innovations such as progressive unfreezing and tiered learning rates. Current research in this field exhibits notable limitations, as most studies either focus on evaluating a limited number of models or emphasize computational efficiency at the expense of accuracy. More importantly, existing approaches fail to provide a comprehensive assessment, particularly in terms of systematically comparing more than five distinct architectures, including both large-scale and lightweight models with fine-tuning strategies across multiple crop varieties.
In this study, the proposed PlantClassiNet fine-tuning framework conducts plant leaf disease classification experiments on six pre-trained models of varying scales using three datasets of different sizes, thereby facilitating a comprehensive evaluation of the experimental results.

3. Materials and Methods

This section introduces the data sources and methods of the study. It covers three core datasets, their preprocessing, and two fine-tuning techniques, followed by the proposed PlantClassiNet architecture.

3.1. Dataset Description

The datasets used for the experiments in this study are PlantVillage [21], PlantLeaves [22] and Eggplant [23], which are widely selected as benchmark datasets available for academic research in the field of plant disease detection.
PlantVillage is a popular public dataset for plant disease classification research. It contains 54,305 images of plant leaves spanning 38 distinct classes, where each class label denotes a crop–disease pair, as shown in Table 1. Throughout this paper, the symbol # denotes the number of samples in all tables.
The PlantLeaves dataset consists of 2277 images of healthy leaves and 2225 images of diseased leaves, totaling 4502 images across 22 classes. Among them, eight images are used for prediction, while the remaining images are divided into training, testing, and validation sets, as seen in Table 2.
The Eggplant dataset initially comprised 3116 original high-resolution images; a further 392 original images were collected and augmented through shifting, flipping, zooming, shearing, brightness enhancement, and rotation, resulting in a set of 3159 augmented images. As shown in Table 3, the dataset covers seven distinct classes: Healthy Leaf, Insect Pest Disease, Leaf Spot Disease, Mosaic Virus Disease, Small Leaf Disease, White Mold Disease, and Wilt Disease.
Table 4 presents a comprehensive overview of key dataset accessibility details, including valid download links, recommended citation formats, and applicable license terms. This information is critical for researchers seeking to replicate or extend the study, particularly when working with images that underwent localized curation procedures. Additionally, the table outlines the specific additional curation measures applied to the dataset, such as normalization, rotation, shear, and other relevant preprocessing steps. These processes are essential for maintaining data integrity and ensuring reliability in computer vision applications.

3.2. Data Partitioning and Pre-Processing

The original datasets in Table 1, Table 2 and Table 3 were divided into training, validation, and test sets. The training set was used to train the models; the validation set was used exclusively to select the best-performing model across epochs; and the selected model was then evaluated on the test set. Since the PlantLeaves dataset is already pre-partitioned into training, validation, and test subsets, its original division was retained without further partitioning. For the PlantVillage and Eggplant datasets, a fixed partitioning strategy was employed, with 70% allocated to training, 15% to validation, and 15% to testing, rather than stratified sampling. Extensive data augmentation (rotation, translation, cropping, scaling, and horizontal flipping) was applied to the training set, while only normalization was performed on the validation and test sets. To ensure reproducibility, a fixed random seed was used with shuffle = False in the data loader. Class-imbalanced data distributions leave minority categories under-represented during training, impairing the model’s generalization performance. To address this imbalance, class weights assign higher penalties to under-represented categories, amplifying their influence on the training objective. The weights were computed solely from the class distribution of the training set, with the validation and test sets excluded from these calculations to prevent data leakage. The weight of class i was computed according to Equation (1) as the total number of samples divided by the product of the number of classes and the number of samples in class i.
weight_i = #samples / (#classes × #samples_i)
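The class-weight rule in Equation (1) can be sketched in a few lines of Python; this is a minimal illustration, and the class names below are hypothetical:

```python
from collections import Counter

def class_weights(train_labels):
    """weight_i = #samples / (#classes * #samples_i), per Equation (1).
    Computed from the training split only, to avoid data leakage."""
    counts = Counter(train_labels)
    n_samples, n_classes = len(train_labels), len(counts)
    return {c: n_samples / (n_classes * n) for c, n in counts.items()}

# Hypothetical imbalanced 3-class training split: the rare class
# receives the largest weight.
weights = class_weights(["healthy"] * 60 + ["leaf_spot"] * 30 + ["wilt"] * 10)
```

The resulting dictionary can be passed to a loss function that supports per-class weighting, so that errors on minority classes are penalized more heavily.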
To achieve optimal performance, the training data are preprocessed so that their distribution aligns with that seen during pre-training. Three normalization schemes were implemented across the models, as shown in Table 5. The first linearly scales pixel values to the [0, 1] range, maintaining relative intensity relationships while simplifying computation for DenseNet121. The second redistributes values symmetrically into the [−1, 1] interval, which particularly benefits InceptionV3, EfficientNetB0 and MobileNetV3Small. Lastly, zero-centering was applied to AlexNet and ResNet50 by subtracting the ImageNet channel means (123.68, 116.779, and 103.939), which helps convolutional networks by removing inherent bias from the input distribution. These techniques enhance training stability, accelerate convergence, and ensure consistent feature scaling regardless of the model architecture.
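As a rough sketch, the three schemes can be expressed as follows; in practice each Keras application ships its own preprocess_input, so these hand-rolled scalar versions are illustrative only:

```python
def scale_0_1(px):
    """Map a [0, 255] pixel value to [0, 1] (scheme used for DenseNet121)."""
    return px / 255.0

def scale_sym(px):
    """Map a [0, 255] pixel value to [-1, 1] (InceptionV3, EfficientNetB0,
    MobileNetV3Small)."""
    return px / 127.5 - 1.0

# Per-channel ImageNet means (R, G, B), as quoted in the text.
IMAGENET_MEAN = (123.68, 116.779, 103.939)

def zero_center(rgb):
    """Subtract the ImageNet channel means (AlexNet, ResNet50)."""
    return tuple(p - m for p, m in zip(rgb, IMAGENET_MEAN))
```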
To further improve model generalization, data augmentation techniques were applied. The approaches included random rotations within ±20°, width and height shifts of up to 20%, shear transformations of 20%, zoom modifications of 20%, and horizontal flips. These augmentations vary image size and orientation, improving the model’s generalization ability for future agricultural applications.
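Collected as keyword arguments, these settings might look as follows; mapping them onto Keras’s ImageDataGenerator (or equivalent torchvision/tf.image transforms) is our assumption, since the paper does not name the framework call:

```python
# Augmentation settings from the text, as a reusable keyword dictionary.
AUGMENTATION_PARAMS = {
    "rotation_range": 20,       # random rotations within ±20 degrees
    "width_shift_range": 0.2,   # horizontal translation up to 20%
    "height_shift_range": 0.2,  # vertical translation up to 20%
    "shear_range": 0.2,         # shear transformations up to 20%
    "zoom_range": 0.2,          # zoom in/out up to 20%
    "horizontal_flip": True,    # random horizontal flips
}
# e.g. (assumed usage): train_gen = ImageDataGenerator(**AUGMENTATION_PARAMS)
```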

3.3. Research Approach

The PlantClassiNet methodology comprises two sequential phases:
(1)
Initial Feature Adaptation: The convolutional backbone of the pre-trained model remains frozen, while its original classification layer is substituted with a task-specific fully connected layer aligned with the new category dimensionality. Training exclusively targets the top two dense layers during this stage.
(2)
Dual-Modal Fine-tuning: The process bifurcates into two configurations: one uses the novel AdapFitu operator for parameter optimization, while the other employs predefined training hyperparameters. The architectural framework incorporates essential components: ReLU activation functions for non-linear transformations, Batch Normalization to enhance training stability and accelerate convergence, and pooling layers for feature dimensionality reduction, as illustrated in Figure 1.
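The two phases can be illustrated with a framework-agnostic sketch in which a model is simply a list of layers carrying a trainable flag (in Keras this corresponds to setting layer.trainable); the layer names and counts below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    trainable: bool = True

def build_transfer_model(backbone, n_classes):
    """Phase 1: freeze the pre-trained backbone and append a new
    task-specific classifier head, the only trainable part."""
    for layer in backbone:
        layer.trainable = False
    head = [Layer(f"dense_{n_classes}"), Layer("softmax")]
    return backbone + head

def unfreeze_top(model, n):
    """Phase 2: unfreeze the top n backbone layers for fine-tuning;
    the two head layers stay trainable."""
    for layer in model[:-2][-n:]:
        layer.trainable = True
    return model

# Hypothetical 10-layer backbone, 38-class task (as in PlantVillage).
model = build_transfer_model([Layer(f"conv_{i}") for i in range(10)], n_classes=38)
model = unfreeze_top(model, 3)
```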

3.3.1. The Adaptive Finetune Optimizer

The proposed AdapFitu optimizer computes the number of layers to unfreeze and the corresponding learning rate (Algorithm 1).
Algorithm 1 The proposed AdapFitu unfreeze strategy
 1: Input: pre-trained model
 2: total_layers = len(base_model.layers)
 3: total_params = base_model.count_params() × 10⁻⁶   ▷ parameter count in millions
 4: unfreeze_ratio = min(0.15 + (total_params / 1000) × 0.1, self.max_unfreeze_ratio)
 5: unfreeze_count = max(self.min_unfreeze, total_layers × unfreeze_ratio)
 6: if total_params > 100:
 7:     lr = 1 × 10⁻⁴
 8: elif total_params > 50:
 9:     lr = 3 × 10⁻⁴
10: else:
11:     lr = 1 × 10⁻³
12: Output: unfreeze layers and learning rate
The AdapFitu algorithm establishes a quantifiable relationship between model scale (parameter count and depth) and fine-tuning parameters (number of unfrozen layers and learning rate) through mathematical modeling. Its design integrates theoretical analysis with engineering practice. Based on the sensitivity of gradients to model complexity, it dynamically controls the number of unfrozen layers according to parameter count and depth, preventing the training instability that excessive unfreezing causes in large models while ensuring sufficient knowledge transfer in smaller ones. Grounded in the gradient-scaling effects described by optimization theory, it adapts the learning rate to parameter magnitude: smaller learning rates stabilize optimization in larger models, while higher learning rates accelerate convergence in smaller ones. The constants (e.g., 0.15) and thresholds (e.g., 100 M) in the formulation were calibrated through theoretical derivation and experimental validation. The result is scale-driven adaptive fine-tuning that balances computational resources against model performance.
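A runnable rendering of Algorithm 1 might look as follows; the default values for max_unfreeze_ratio and min_unfreeze are our assumptions, since the paper leaves them as unspecified object attributes:

```python
def adapfitu(total_layers, total_params_m, max_unfreeze_ratio=0.5, min_unfreeze=5):
    """Sketch of the AdapFitu strategy (Algorithm 1).

    total_params_m is the parameter count in millions; the defaults for
    max_unfreeze_ratio and min_unfreeze are assumed, not from the paper.
    """
    # Larger models unfreeze a slightly larger fraction, capped by the ratio.
    unfreeze_ratio = min(0.15 + (total_params_m / 1000) * 0.1, max_unfreeze_ratio)
    unfreeze_count = max(min_unfreeze, int(total_layers * unfreeze_ratio))
    # Bigger models get smaller learning rates to keep optimization stable.
    if total_params_m > 100:
        lr = 1e-4
    elif total_params_m > 50:
        lr = 3e-4
    else:
        lr = 1e-3
    return unfreeze_count, lr

# Hypothetical ResNet50-like model: ~175 layers, ~25.6 M parameters.
count, lr = adapfitu(175, 25.6)
```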
The number of unfrozen layers and the learning rate computed by the AdapFitu optimizer for each model are shown in Table 6. In the fine-tuning stage, all of the computed unfrozen layers and the dense layers were retrained for the downstream plant disease classification task.

3.3.2. Adaptive and Fixed Training Strategies

The proposed PlantClassiNet framework incorporates two distinct training modalities: adaptive-parameter training and fixed-parameter training. Experimental implementation follows a two-phase procedure. During the initial phase, the convolutional base of a pre-trained model remains frozen while its original classification layer is replaced with a fully connected layer matching the dimensionality of the new task categories; training is applied exclusively to the top two fully connected layers. The subsequent phase diverges into the two modalities: parameter configuration either adopts the novel AdapFitu operator or uses the predefined fixed training parameters enumerated in Table 8. For detailed experimental configurations, refer to Section 4.1.

3.3.3. Evaluation Metrics

In plant disease classification where the class distribution is imbalanced (e.g., rare disease detection scenarios), conventional accuracy metrics may create misleading performance impressions. The confusion matrix and its derived metrics (Precision, Recall, and F1) enable precise identification of performance disparities across classes, thereby serving as a critical foundation for targeted model optimization strategies.
The confusion matrix cross-references predicted outcomes with true labels to yield four key counts: True Positives (TP), positive instances correctly identified; True Negatives (TN), negative instances correctly identified; False Positives (FP), negative instances misclassified as positive (Type I error); and False Negatives (FN), positive instances misclassified as negative (Type II error). This visualization is particularly valuable for identifying classification biases in imbalanced datasets.
The derivation process for these metrics has been intentionally omitted from this section, as it has been comprehensively addressed in the experimental section (Equations (2)–(4)), maintaining methodological consistency while avoiding redundancy.

4. Results and Discussion

4.1. Experimental Section

The experimental configurations, including software and hardware settings, training parameters, and evaluation metrics, are based on the general methods described in the Materials and Methods, with specific parameters detailed below.

4.1.1. Software and Hardware Specifications

The experimental environment used in this study is outlined below, with detailed specifications provided in Table 7. The hardware includes a server equipped with an AMD EPYC processor, sufficient RAM, a high-performance GPU, and ample SSD storage. The software environment consists of an Ubuntu operating system, Python, and key libraries for data analysis and deep learning.

4.1.2. Training Hyperparameter Configuration

In our experiments, different hyperparameter values were selected for the different training stages and strategies, as shown in Table 8. In the first training stage, the pre-trained convolutional base was kept frozen and the two dense layers were retrained. In the fine-tuning stage, the unfrozen layers and learning rate were either fixed at the top ten layers with a learning rate of 0.0001 or computed by the developed AdapFitu optimizer (Table 6). For general methodological details, see Section 3.3 in Materials and Methods.

4.1.3. Evaluation Metrics

Performance metrics commonly used in plant disease detection studies include accuracy, precision, recall, and F1-score. Accuracy measures the proportion of correctly classified samples, while precision and recall provide insights into the model’s ability to correctly identify positive samples. The F1-score combines precision and recall into a single metric, offering a balanced measure of performance.
Precision represents the accuracy of positive predictions, which is calculated as shown in Equation (2).
Precision = TP / (TP + FP)
where TP is true positives and FP is false positives.
Recall explains the ability of a model to find all relevant cases and is calculated as shown in Equation (3), where FN is false negatives.
Recall = TP / (TP + FN)
The F1-score is the harmonic mean of the precision and recall, where an F1-score reaches its best value at 1 and worst score at 0. The relative contributions of precision and recall to the F1-score are equal. The formula for the F1-score is shown in Equation (4).
F1 = (2 × TP) / (2 × TP + FP + FN)
The reported averages are the macro average (the unweighted mean over labels) and the weighted average (the support-weighted mean over labels). The sample average applies only to multilabel classification, and the micro average (computed from the total true positives, false negatives, and false positives) is shown only for multilabel settings or multiclass settings restricted to a subset of classes.
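Equations (2)–(4) and the macro/weighted averages can be computed directly from confusion-matrix counts; the following pure-Python sketch uses hypothetical counts:

```python
def prf1(tp, fp, fn):
    """Per-class precision, recall and F1 from confusion-matrix counts,
    per Equations (2)-(4); the guards avoid division by zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1

def macro_and_weighted_f1(per_class):
    """per_class: list of (support, tp, fp, fn) tuples.
    Macro = unweighted mean of per-class F1;
    weighted = support-weighted mean of per-class F1."""
    f1s = [prf1(tp, fp, fn)[2] for _, tp, fp, fn in per_class]
    supports = [s for s, _, _, _ in per_class]
    macro = sum(f1s) / len(f1s)
    weighted = sum(f * s for f, s in zip(f1s, supports)) / sum(supports)
    return macro, weighted

# Hypothetical two-class example: a majority class and a rare class.
macro, weighted = macro_and_weighted_f1([(100, 90, 5, 10), (10, 5, 2, 5)])
```

Note that with imbalanced supports the weighted average is pulled toward the majority class, which is why both averages are reported.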

4.2. Result

The proposed PlantClassiNet architecture was fine-tuned and trained on six pre-trained models and evaluated in terms of precision, recall, and F1-score. Each model was trained using the training and validation datasets and then used to make predictions on the test data, which was held out from training and validation. The weighted-average values of the six models on the three datasets are presented in Table 9, Table 16 and Table 23. Per-class results are presented in Table 10 through Table 29, excluding Table 16 and Table 23. The confusion matrices of each model on each dataset are illustrated in the even-numbered figures from Figure 2 through Figure 36, and the training accuracy and loss curves for the six pre-trained models on the three datasets in the odd-numbered figures from Figure 3 through Figure 37. Overall, the fine-tuned DenseNet121 outperformed the other baseline models on all datasets for downstream plant disease classification.

4.2.1. PlantVillage Dataset

The experimental results of all models on PlantVillage Dataset are presented. Table 9 reports the weighted average performance of all pre-trained models under the proposed PlantClassiNet across all categories of PlantVillage. Among these, the DenseNet121 base model achieves the highest performance, with Precision: 0.9968, Recall: 0.9967, and F1-score: 0.9967.
For each model, the results include two figures, a confusion matrix and the training accuracy/loss curves, together with one table detailing Precision, Recall, and F1-score per class. These are organized in Figure 2 through Figure 13 and Table 10 through Table 15, covering the six pre-trained models on PlantVillage.

4.2.2. PlantLeaves Dataset

The experimental results of all models on PlantLeaves Dataset are presented. Table 16 reports the weighted average performance of all pre-trained models under the proposed PlantClassiNet across all categories of PlantLeaves. Among these, the ResNet50 base model achieves the highest performance, with Precision: 0.9513, Recall: 0.9364, and F1-score: 0.9346.
For each model, the results include two figures, a confusion matrix and the training accuracy/loss curves, together with one table detailing Precision, Recall, and F1-score per class. These are organized in Figure 14 through Figure 25 and Table 17 through Table 22, covering the six pre-trained models on PlantLeaves.

4.2.3. Eggplant Dataset

The experimental results of all models on Eggplant dataset are presented. Table 23 reports the weighted average performance of all pre-trained models under the proposed PlantClassiNet across all categories of Eggplant. Among these, the DenseNet121 base model achieves the highest performance, with Precision: 1.0000, Recall: 1.0000, and F1-score: 1.0000.
For each model, the results include two figures, a confusion matrix and the training accuracy/loss curves, together with one table detailing Precision, Recall, and F1-score per class. These are organized in Figure 26 through Figure 37 and Table 24 through Table 29, covering the six pre-trained models on Eggplant.

4.3. Discussion

The proposed AdapFitu adaptive strategy enables rapid training because only a small subset of layers are updated and the number of epochs is limited. Nevertheless, preliminary experiments on three benchmark datasets exhibit high variance and sub-optimal classification accuracy. It is hypothesized that, although the strategy determines fine-tuning parameters by jointly considering model size, the number of frozen pre-trained layers, network capacity, and the scale of the downstream dataset, the individual or combined influence of these factors on fine-tuning effectiveness has not been rigorously validated. Therefore, a fixed-parameter protocol was adopted, in which multiple models are fine-tuned with deeper layers, a larger number of epochs, and consequently longer training times. This protocol yields markedly improved and stable results, achieving classification accuracies for the testing dataset that surpass those reported in most existing studies, as shown in Table 30 and Figure 38.
Our study explicitly compares results with prior work, demonstrating superior classification accuracy over existing studies. Most prior work reports isolated results for single architectures and uses standard fine-tuning, while our approach integrates empirical validation for stability.
As illustrated in Figure 38, both ResNet50 and DenseNet121 consistently surpass 90% accuracy across all three benchmark datasets. More importantly, fine-tuning with frozen backbone parameters proves highly effective for large-scale architectures coupled with extensive data: AlexNet, ResNet50, and InceptionV3 all exceed 98% testing accuracy on the PlantVillage dataset. In contrast, lightweight models fine-tuned on smaller datasets exhibit comparable efficacy: MobileNetV3, DenseNet121, and EfficientNetB0 attain near-perfect classification performance (approximately 100%) on the Eggplant dataset.
Additionally, some extreme per-class values of 0 on the PlantLeaves dataset arise primarily from the limited number of test samples in the corresponding categories, which may undermine the robustness of those metrics. These observations support the hypothesis that large models are best suited to large datasets, whereas smaller datasets may benefit from fine-tuning compact architectures. Preliminary experiments with plant-domain adaptive optimization yielded inconsistent results due to data variability, so we adopted a fixed-parameter fine-tuning approach, which proved more robust and reproducible. Fixed parameters, however, may not generalize to all plant species or to datasets with extreme class imbalance.
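The small-sample effect noted above can be illustrated with a toy example: when a class contributes only a couple of test images, a single pair of misclassifications drives its recall to 0 even while overall accuracy stays high. The labels and counts below are invented for illustration and are not taken from the PlantLeaves data.

```python
# Toy demonstration of per-class metric collapse on tiny test sets.
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Recall per class: correct predictions / total true samples of that class."""
    hits, totals = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

y_true = ["A"] * 50 + ["B"] * 2   # class B has only 2 test samples
y_pred = ["A"] * 50 + ["A", "A"]  # both B samples misclassified as A
r = per_class_recall(y_true, y_pred)
# r["B"] is 0.0 even though 50 of 52 predictions are correct overall
```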
In future work, we will systematically investigate hybrid optimization strategies (e.g., combining adaptive techniques with domain-specific constraints), explore multi-modal data integration approaches, and analyze the intricate interplay between dataset scale, model capacity, and their respective computational and memory resource demands.
Furthermore, the influence of key image attributes on model accuracy will be quantitatively assessed through analysis of noise levels, contrast variations, and object scale disparities. Subsequently, this evaluation will serve as the foundation for designing specialized architectures and customized training protocols, specifically adapted to address variations in image characteristics. This comprehensive analysis aims to identify optimal deployment strategies for practical applications.

5. Conclusions

This study leverages the convolutional bases of pre-trained computer vision models and computes fine-tuning parameters through the proposed AdapFitu strategy. The performance of AdapFitu across all evaluated models deviated significantly from theoretical expectations, highlighting the gap between conceptual design and practical implementation. While its dynamic adjustment mechanism is theoretically advantageous, empirical training revealed susceptibility to unaccounted variables, particularly gradient propagation characteristics and structural coupling effects between model components. These findings underscore the need for future research to develop a more comprehensive evaluation framework that integrates advanced metrics, such as gradient sensitivity analysis and parameter coupling assessment, into theoretical modeling.
Owing to the unsatisfactory results of these exploratory experiments, the adaptive parameter strategy was replaced by fixed-parameter fine-tuning in subsequent research. The proposed PlantClassiNet with fixed parameters was applied to PlantVillage, PlantLeaves, and Eggplant using the AlexNet, ResNet50, InceptionV3, DenseNet121, EfficientNetB0, and MobileNetV3Small pre-trained models. During the initial training phase, the pre-trained convolutional base remained frozen while only the two dense layers were retrained. For the fine-tuning stage, the top ten layers were unfrozen and a learning rate of 0.0001 was applied. After transfer learning and fine-tuning, the best model was used for inference on the test data. The experiments achieved promising outcomes, effectively extracting leaf disease features and enabling multi-class classification of plant leaf diseases. Although PlantClassiNet with fixed training parameters performs well, further research is needed to improve the proposed AdapFitu adaptive strategy.
Furthermore, we plan to conduct dedicated experiments on domain adaptation and cross-dataset generalization in our future work to more comprehensively assess the method’s practical application potential. Future studies will systematically examine the relationship between dataset scale, model architecture, and their corresponding computational or memory costs to identify optimal deployment approaches in real-world applications.

Author Contributions

All authors contributed to the study conception and design. X.Z. wrote the first draft of the manuscript and carried out experiments. X.X. performed material preparation, data collection and analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the program of the School Scientific Research of Anhui University of Finance and Economics (Grant No. ACKYC20049) and the program of the School Teaching Research of Anhui University of Finance and Economics (Grant No. acszjyyb2021047).

Institutional Review Board Statement

Not applicable. This study did not involve human participants, animals, or any data requiring ethical approval, as it focused solely on computational modeling and analysis of publicly available datasets.

Informed Consent Statement

Not applicable. The research did not involve human participants or animals, and all data used were publicly available.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mohanty, S.P.; Hughes, D.P.; Salathe, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef]
  2. Khaki, S.; Wang, L.; Archontoulis, S.V. A CNN-RNN Framework for Crop Yield Prediction. Front. Plant Sci. 2020, 10, 1750. [Google Scholar] [CrossRef] [PubMed]
  3. Argüeso, D.; Picon, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Alvarez-Gila, A. Few-Shot Learning approach for plant disease classification using images taken in the field. Comput. Electron. Agric. 2020, 175, 105542. [Google Scholar] [CrossRef]
  4. Shewale, M.V.; Daruwala, R.D. High performance deep learning architecture for early detection and classification of plant leaf disease. J. Agric. Food Res. 2023, 14, 100675. [Google Scholar] [CrossRef]
  5. Ouamane, A.; Chouchane, A.; Himeur, Y.; Debilou, A. Enhancing plant disease detection: A novel CNN-based approach with tensor subspace learning and HOWSVD-MDA. Neural Comput. Appl. 2024, 36, 22957–22981. [Google Scholar] [CrossRef]
  6. Perumal, V.K.; Supriyaa, T.; Santhosh, P.R.; Dhanasekaran, S. CNN based plant disease identification using PYNQ FPGA. Microelectron. J. 2024, 6, 200088. [Google Scholar] [CrossRef]
  7. González-Briones, A.; Florez, S.L.; Chamoso, P.; Castillo-Ossa, L.F.; Corchado, E.S. Enhancing Plant Disease Detection: Incorporating Advanced CNN Architectures for Better Accuracy and Interpretability. Int. J. Comput. Intell. Syst. 2025, 18, 120. [Google Scholar] [CrossRef]
  8. Bhakta, I.; Phadikar, S.; Mukherjee, H.; Sau, A. A novel plant disease prediction model based on thermal images using modified deep convolutional neural network. Precis. Agric. 2023, 24, 23–39. [Google Scholar] [CrossRef]
  9. Yang, B.; Li, M.; Li, F.; Wang, Y.; Liang, Q.; Zhao, R.; Li, C.; Wang, J. A novel plant type, leaf disease and severity identification framework using CNN and transformer with multi-label method. Sci. Rep. 2024, 14, 11664. [Google Scholar] [CrossRef]
  10. Sinamenye, J.H.; Chatterjee, A.; Shrestha, R. Potato plant disease detection: Leveraging hybrid deep learning models. BMC Plant Biol. 2025, 25, 647. [Google Scholar] [CrossRef] [PubMed]
  11. Amanda, R.; Kelsee, B.; Peter, M.C.; Babuali, A.; James, L.; Hughes, D.P. Deep Learning for Image-Based Cassava Disease Detection. Front. Plant Sci. 2017, 8, 1852. [Google Scholar] [CrossRef]
  12. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
  13. Sinancterzi, D. Effect of different weight initialization strategies on transfer learning for plant disease detection. Plant Pathol. 2024, 73, 2325–2343. [Google Scholar] [CrossRef]
  14. Sambana, B.; Nnadi, H.S.; Wajid, M.A.; Fidelia, N.O.; Camacho-Zuiga, C.; Ajuzie, H.D.; Onyema, E.M. An efficient plant disease detection using transfer learning approach. Sci. Rep. 2025, 15, 19082. [Google Scholar] [CrossRef] [PubMed]
  15. Too, E.C.; Li, Y.; Njuki, S.; Liu, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2018, 161, 272–279. [Google Scholar] [CrossRef]
  16. Nag, A.; Chanda, P.R.; Nandi, S. Mobile app-based tomato disease identification with fine-tuned convolutional neural networks. Comput. Electr. Eng. 2023, 112, 14. [Google Scholar] [CrossRef]
  17. Iftikhar, M.; Kandhro, I.A.; Kausar, N.; Kehar, A.; Uddin, M.; Dandoush, A. Plant disease management: A fine-tuned enhanced CNN approach with mobile app integration for early detection and classification. Artif. Intell. Rev. 2024, 57, 167. [Google Scholar] [CrossRef]
  18. Prashanthi, B.; Praveen Krishna, A.V.; Rao, C.M. A Comparative Study of Fine-Tuning Deep Learning Models for Leaf Disease Identification and Classification. Eng. Technol. Appl. Sci. Res. 2025, 15, 19661–19669. [Google Scholar] [CrossRef]
  19. Bedi, P.; Gole, P.; Marwaha, S. PDSE-Lite: Lightweight framework for plant disease severity estimation based on Convolutional Autoencoder and Few-Shot Learning. Front. Plant Sci. 2023, 14, 20. [Google Scholar] [CrossRef]
  20. Bedi, P.; Gole, P. PlantGhostNet: An Efficient Novel Convolutional Neural Network Model to Identify Plant Diseases Automatically. In Proceedings of the 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 3–4 September 2021; pp. 1–6. [Google Scholar]
  21. Hughes, D.; Salathe, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics through machine learning and crowdsourcing. arXiv 2015, arXiv:1511.08060. [Google Scholar]
  22. Chouhan, S.S.; Koul, A.; Singh, U.P.; Jain, S. A Data Repository of Leaf Images: Practice towards Plant Conservation with Plant Pathology. In Proceedings of the 2019 4th International Conference on Information Systems and Computer Networks (ISCON), Mathura, India, 21–22 November 2019. [Google Scholar]
  23. Mafi, M.H.M.; Ava, A.A. Eggplant Disease Recognition Dataset. Mendeley Data V1, 2023. Available online: https://data.mendeley.com/datasets/r3tb5mzn4d/1 (accessed on 5 July 2025).
  24. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2012, 60, 84–90. [Google Scholar] [CrossRef]
  25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  26. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
  27. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
  28. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946. [Google Scholar]
  29. Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
  30. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  31. Alaeddine, H.; Jihene, M. Deep Batch-normalized eLU AlexNet For Plant Diseases Classification. In Proceedings of the International Multi-Conference on Systems, Signals and Devices, Monastir, Tunisia, 22–25 March 2021. [Google Scholar]
  32. Khan, A.T.; Jensen, S.M.; Khan, A.R.; Li, S. Plant disease detection model for edge computing devices. Front. Plant Sci. 2023, 14, 10. [Google Scholar] [CrossRef]
  33. Ndovie, L.K.; Masabo, E. Leveraging MobileNetV3 for In-Field Tomato Disease Detection in Malawi via CNN. SAIEE Afr. Res. J. 2024, 115, 74–85. [Google Scholar] [CrossRef]
  34. Shiyan, A.S.; Kozlov, I.D.; Baimuratov, I.R.; Zhukova, N.A. Recognizing Plants and Their Diseases: Benchmarks for Multiclass and Multilabel Classification. Pattern Recognit. Image Anal. 2025, 35, 159–168. [Google Scholar] [CrossRef]
  35. Yao, J.; Tran, S.N.; Garg, S.; Sawyer, S. Deep Learning for Plant Identification and Disease Classification from Leaf Images: Multi-prediction Approaches. ACM Comput. Surv. 2024, 56, 153. [Google Scholar] [CrossRef]
  36. Uskaner Hepsa, P. Efficient plant disease identification using few-shot learning: A transfer learning approach. Multimed. Tools Appl. 2024, 83, 58293–58308. [Google Scholar] [CrossRef]
  37. Sagar, A.; Dheeba, J. On Using Transfer Learning For Plant Disease Detection. bioRxiv 2020. [Google Scholar] [CrossRef]
  38. Mohameth, F.; Bingcai, C.; Sada, K.A. Plant Disease Detection with Deep Learning and Feature Extraction Using Plant Village. J. Comput. Commun. 2020, 8, 10–22. [Google Scholar] [CrossRef]
  39. Vimal Adit, V.; Rubesh, C.V.; Sanjay Bharathi, S.; Santhiya, G.; Anuradha, R. A Comparison of Deep Learning Algorithms for Plant Disease Classification. In Advances in Cybernetics, Cognition, and Machine Learning for Communication Technologies; Gunjan, V.K., Senatore, S., Kumar, A., Gao, X.Z., Merugu, S., Eds.; Springer: Singapore, 2020; pp. 153–161. [Google Scholar] [CrossRef]
Figure 1. The proposed PlantClassiNet architecture.
Figure 2. Confusion Matrix for AlexNet based on Plantvillage.
Figure 3. Training Accuracy and Loss for Fine-Tuning AlexNet on Plantvillage.
Figure 4. Confusion Matrix for ResNet50 based on Plantvillage.
Figure 5. Training Accuracy and Loss for Fine-Tuning ResNet50 on Plantvillage.
Figure 6. Confusion Matrix for InceptionV3 based on Plantvillage.
Figure 7. Training Accuracy and Loss for Fine-Tuning InceptionV3 on Plantvillage.
Figure 8. Confusion Matrix for DenseNet121 on Plantvillage.
Figure 9. Training Accuracy and Loss for Fine-Tuning DenseNet121 on Plantvillage.
Figure 10. Confusion Matrix for EfficientNetB0 on Plantvillage.
Figure 11. Training Accuracy and Loss for Fine-Tuning EfficientNetB0 on Plantvillage.
Figure 12. Confusion Matrix for MobileNetV3Small on Plantvillage.
Figure 13. Training Accuracy and Loss for Fine-Tuning MobileNetV3Small on Plantvillage.
Figure 14. Confusion Matrix for AlexNet on PlantLeaves.
Figure 15. Training Accuracy and Loss for Fine-Tuning AlexNet on PlantLeaves.
Figure 16. Confusion Matrix for ResNet50 on PlantLeaves.
Figure 17. Training Accuracy and Loss for Fine-Tuning ResNet50 on PlantLeaves.
Figure 18. Confusion Matrix for Fine-Tuning InceptionV3 on PlantLeaves.
Figure 19. Training Accuracy and Loss for Fine-Tuning InceptionV3 on PlantLeaves.
Figure 20. Confusion Matrix for Fine-Tuning DenseNet121 on PlantLeaves.
Figure 21. Training Accuracy and Loss for Fine-Tuning DenseNet121 on PlantLeaves.
Figure 22. Confusion Matrix for Fine-Tuning EfficientNetB0 on PlantLeaves.
Figure 23. Training Accuracy and Loss for Fine-Tuning EfficientNetB0 on PlantLeaves.
Figure 24. Confusion Matrix for Fine-Tuning MobileNetV3Small on PlantLeaves.
Figure 25. Training Accuracy and Loss for Fine-Tuning MobileNetV3Small on PlantLeaves.
Figure 26. Confusion Matrix for Fine-Tuning AlexNet on Eggplant.
Figure 27. Training Accuracy and Loss for Fine-Tuning AlexNet on Eggplant.
Figure 28. Confusion Matrix for Fine-Tuning ResNet50 on Eggplant.
Figure 29. Training Accuracy and Loss for Fine-Tuning ResNet50 on Eggplant.
Figure 30. Confusion Matrix for Fine-Tuning InceptionV3 on Eggplant dataset.
Figure 31. Training Accuracy and Loss for Fine-Tuning InceptionV3 on Eggplant.
Figure 32. Confusion Matrix for Fine-Tuning DenseNet121 on Eggplant.
Figure 33. Training Accuracy and Loss for Fine-Tuning DenseNet121 on Eggplant.
Figure 34. Confusion Matrix for Fine-Tuning EfficientNetB0 on Eggplant.
Figure 35. Training Accuracy and Loss for Fine-Tuning EfficientNetB0 on Eggplant.
Figure 36. Confusion Matrix for Fine-Tuning MobileNetV3Small on Eggplant.
Figure 37. Training Accuracy and Loss for Fine-Tuning MobileNetV3Small on Eggplant.
Figure 38. Test accuracy of six pre-trained models on three plant leaf datasets.
Table 1. Plantvillage dataset description.
Category | Train | Validation | Test | # Sample
Apple Apple scab | 441 | 94 | 95 | 630
Apple Black rot | 434 | 94 | 93 | 621
Apple Cedar apple rust | 192 | 41 | 42 | 275
Apple Healthy | 1151 | 246 | 248 | 1645
Blueberry Healthy | 1051 | 225 | 226 | 1502
Cherry Healthy | 597 | 128 | 129 | 854
Cherry Powdery mildew | 736 | 157 | 159 | 1052
Corn Gray leaf spot | 359 | 76 | 78 | 513
Corn Common rust | 834 | 178 | 180 | 1192
Corn Healthy | 813 | 174 | 175 | 1162
Corn Northern leaf blight | 689 | 147 | 149 | 985
Grape Black rot | 826 | 177 | 177 | 1180
Grape Black measles | 968 | 207 | 208 | 1383
Grape Isariopsis leaf spot | 753 | 161 | 162 | 1076
Grape Healthy | 296 | 63 | 64 | 423
Orange Citrus greening | 3854 | 826 | 827 | 5507
Peach Bacterial spot | 1607 | 344 | 346 | 2297
Peach Healthy | 251 | 54 | 55 | 360
Bell pepper Bacterial spot | 697 | 149 | 151 | 997
Bell pepper Healthy | 1034 | 221 | 223 | 1478
Potato Healthy | 106 | 22 | 24 | 152
Potato Early Blight | 700 | 150 | 150 | 1000
Potato Late Blight | 700 | 150 | 150 | 1000
Raspberry Healthy | 259 | 55 | 57 | 371
Soybean Healthy | 3563 | 763 | 764 | 5090
Squash Powdery mildew | 1284 | 275 | 276 | 1835
Strawberry Healthy | 319 | 68 | 69 | 456
Strawberry Leaf scorch | 776 | 166 | 167 | 1109
Tomato Bacterial spot | 1488 | 319 | 320 | 2127
Tomato Early blight | 700 | 150 | 150 | 1000
Tomato Healthy | 1113 | 238 | 240 | 1591
Tomato Late blight | 1336 | 286 | 287 | 1909
Tomato Leaf mold | 666 | 142 | 144 | 952
Tomato Septorial leaf spot | 1239 | 265 | 267 | 1771
Tomato Two spotted spider mite | 1173 | 251 | 252 | 1676
Tomato Target spot | 982 | 210 | 212 | 1404
Tomato Mosaic Virus | 261 | 55 | 57 | 373
Tomato Yellow leaf curl virus | 3749 | 803 | 805 | 5357
Total | | | | 54,305
Table 2. PlantLeaves dataset description.
Category | Train | Validation | Test | # Sample
Alstonia Scholaris diseased (P2a) | 244 | 5 | 5 | 254
Alstonia Scholaris healthy (P2b) | 168 | 5 | 5 | 178
Arjun diseased (P1a) | 222 | 5 | 5 | 232
Arjun healthy (P1b) | 210 | 5 | 5 | 220
Bael diseased (P4b) | 107 | 5 | 5 | 117
Basil healthy (P8) | 137 | 5 | 5 | 147
Chinar diseased (P11b) | 110 | 5 | 5 | 120
Chinar healthy (P11a) | 93 | 5 | 5 | 103
Gauva diseased (P3b) | 131 | 5 | 5 | 141
Gauva healthy (P3a) | 267 | 5 | 5 | 277
Jamun diseased (P5b) | 335 | 5 | 5 | 345
Jamun healthy (P5a) | 268 | 5 | 5 | 278
Jatropha diseased (P6b) | 114 | 5 | 5 | 124
Jatropha healthy (P6a) | 123 | 5 | 5 | 133
Lemon diseased (P10b) | 67 | 5 | 5 | 77
Lemon healthy (P10a) | 149 | 5 | 5 | 159
Mango diseased (P0b) | 255 | 5 | 5 | 265
Mango healthy (P0a) | 159 | 5 | 5 | 169
Pomegranate diseased (P9b) | 261 | 5 | 5 | 271
Pomegranate healthy (P9a) | 277 | 5 | 5 | 287
Pongamia Pinnata diseased (P7b) | 265 | 5 | 5 | 275
Pongamia Pinnata healthy (P7a) | 312 | 5 | 5 | 322
Total | | | | 4494
Table 3. Eggplant dataset description.
Category | Train | Validation | Test | # Sample
Healthy Leaf | 375 | 80 | 81 | 536
Insect Pest Disease | 536 | 115 | 116 | 767
Leaf Spot Disease | 627 | 134 | 135 | 896
Mosaic Virus Disease | 201 | 43 | 44 | 288
Small Leaf Disease | 78 | 16 | 18 | 112
White Mold Disease | 44 | 9 | 11 | 64
Wilt Disease | 347 | 74 | 75 | 496
Total | | | | 3159
Table 4. Dataset Accession Description.
PlantVillage
Download URL: https://www.kaggle.com/datasets/abdallahalidev/plantvillage-dataset (accessed on 5 July 2025)
Citation Format: [21]
License Constraints: CC BY-NC-SA 4.0
Local Curation: These details are discussed in Section 3.2.
PlantLeaves
Download URL: https://www.kaggle.com/datasets/csafrit2/plant-leaves-for-image-classification (accessed on 5 July 2025)
Citation Format: [22]
License Constraints: Community Data License Agreement-Sharing-Version 1.0
Local Curation: These details are discussed in Section 3.2.
Eggplant
Download URL: https://www.kaggle.com/datasets/kamalmoha/eggplant-disease-recognition-dataset/data (accessed on 5 July 2025)
Citation Format: [23]
License Constraints: CC BY 4.0
Local Curation: These details are discussed in Section 3.2.
Table 5. Preprocessing Description.
Parameter | AlexNet [24] | ResNet50 [25] | InceptionV3 [26] | DenseNet121 [27] | EfficientNetB0 [28] | MobileNetV3Small [29]
Input size | 227 × 227 | 224 × 224 | 299 × 299 | 224 × 224 | 224 × 224 | 224 × 224
Normalization | x − mean | x − mean | (x/127.5) − 1.0 | x/255.0 | (x/127.5) − 1.0 | (x/127.5) − 1.0
Scaled | no | no | [−1, 1] | [0, 1] | [−1, 1] | [−1, 1]
Rotation | 20° | 20° | 20° | 20° | 20° | 20°
Width shift | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
Height shift | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
Shear | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
Zoom | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
Horizontal flip | Yes | Yes | Yes | Yes | Yes | Yes
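The per-model input normalizations listed in Table 5 can be written out directly. This is a minimal sketch assuming pixel values in [0, 255]; the dictionary and its keys are illustrative, not a library API, and the mean-subtraction variants used by AlexNet and ResNet50 are omitted because they require the dataset mean.

```python
# Input normalizations from Table 5, expressed as plain functions.
# Pixel values are assumed to lie in [0, 255].
normalize = {
    "InceptionV3":      lambda x: x / 127.5 - 1.0,  # scaled to [-1, 1]
    "DenseNet121":      lambda x: x / 255.0,        # scaled to [0, 1]
    "EfficientNetB0":   lambda x: x / 127.5 - 1.0,  # scaled to [-1, 1]
    "MobileNetV3Small": lambda x: x / 127.5 - 1.0,  # scaled to [-1, 1]
}
```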
Table 6. The computed fine-tune parameter by AdapFitu.
Models | #Layer | #Para | #Conv_Layer | #Unfreeze | lr
AlexNet | 8 | 60 M | 5 | 5 | 0.001
ResNet50 | 176 | 25.6 M | 53 | 18 | 0.0001
InceptionV3 | 312 | 23.6 M | 94 | 14 | 0.001
DenseNet121 | 428 | 7.04 M | 120 | 22 | 0.001
EfficientNetB0 | 239 | 4.05 M | 65 | 18 | 0.0001
MobileNetV3Small | 230 | 0.94 M | 41 | 27 | 0.0001
Note: #layer: total layers; #para: parameters; #conv_layer: convolutional layers; #unfreeze: unfreeze layers; lr: learning rate.
Table 7. Experimental Environment Specification.
Software and Hardware Configuration
GPU: NVIDIA GeForce RTX 3080
GPU Memory Driver: 550.163.01
CPU: AMD EPYC 7601
CPU Configuration: 32 cores
Video Memory: 20G
RAM: 63G
Hard disk: 70G
Programming language: Python 3.10
Deep Learning Framework: Tensorflow 2.13.0
Table 8. Two-stage training: fixed parameters.
First Stage: Freeze Convolutional Base | Second Stage: Unfreeze Top 10 Layers
epochs: 10 | epochs: 100
batch size: 32 | batch size: 32
learning rate: 0.001 | learning rate: 0.0001
retraining layers: Dense | retraining layers: Dense + BN + top
Table 9. Weighted average for six pre-trained models on PlantVillage dataset.
Model | Layers | Params | Precision | Recall | F1-Score
AlexNet | 8 | 60 M | 0.9938 | 0.9938 | 0.9938
ResNet50 | 50 | 25.5 M | 0.9946 | 0.9945 | 0.9945
InceptionV3 | 46 | 23.6 M | 0.9876 | 0.9875 | 0.9874
DenseNet121 | 121 | 7.03 M | 0.9968 | 0.9967 | 0.9967
EfficientNetB0 | 237 | 4.05 M | 0.9950 | 0.9950 | 0.9950
MobileNetV3Small | 54 | 2.5 M | 0.9937 | 0.9936 | 0.9936
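The weighted averages reported in Table 9 follow the standard support-weighted definition: each per-class metric is weighted by the number of test samples in that class. A minimal sketch with invented class sizes (the values below are illustrative, not the paper's per-class data):

```python
# Support-weighted average, as used for the per-model summaries in Table 9.
def weighted_average(values, supports):
    """Weight each per-class value by its class support (test sample count)."""
    total = sum(supports)
    return sum(v * s for v, s in zip(values, supports)) / total

precisions = [0.98, 1.00, 0.95]  # hypothetical per-class precisions
supports   = [100, 300, 50]      # hypothetical class sizes
wp = weighted_average(precisions, supports)  # dominated by the largest class
```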
Table 10. The proposed PlantClassiNet for AlexNet on the Plantvillage dataset.
PlantVillage Category | Precision | Recall | F1-Score | # Test Sample
Apple Apple scab | 0.9892 | 0.9684 | 0.9787 | 95
Apple Black rot | 0.9895 | 1.0000 | 0.9947 | 94
Apple Cedar apple rust | 1.0000 | 1.0000 | 1.0000 | 42
Apple healthy | 0.9919 | 0.9919 | 0.9919 | 247
Blueberry healthy | 0.9956 | 1.0000 | 0.9978 | 226
Cherry Powdery mildew | 1.0000 | 0.9937 | 0.9968 | 158
Cherry healthy | 0.9847 | 1.0000 | 0.9923 | 129
Corn Gray leaf spot | 0.9359 | 0.9481 | 0.9419 | 77
Corn Common rust | 1.0000 | 1.0000 | 1.0000 | 179
Corn Northern Leaf Blight | 0.9730 | 0.9730 | 0.9730 | 148
Corn healthy | 1.0000 | 1.0000 | 1.0000 | 175
Grape Black rot | 1.0000 | 0.9887 | 0.9943 | 177
Grape Black Measles | 0.9905 | 1.0000 | 0.9952 | 208
Grape Isariopsis Leaf Spot | 1.0000 | 1.0000 | 1.0000 | 162
Grape healthy | 1.0000 | 1.0000 | 1.0000 | 64
Orange Citrus greening | 1.0000 | 0.9988 | 0.9994 | 827
Peach Bacterial spot | 1.0000 | 1.0000 | 1.0000 | 345
Peach healthy | 1.0000 | 1.0000 | 1.0000 | 54
Pepper bell Bacterial spot | 1.0000 | 0.9867 | 0.9933 | 150
Pepper bell healthy | 1.0000 | 0.9955 | 0.9977 | 222
Potato Early blight | 1.0000 | 1.0000 | 1.0000 | 150
Potato Late blight | 0.9804 | 1.0000 | 0.9901 | 150
Potato healthy | 0.9200 | 1.0000 | 0.9583 | 23
Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56
Soybean healthy | 1.0000 | 0.9961 | 0.9980 | 764
Squash Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 276
Strawberry Leaf scorch | 1.0000 | 0.9940 | 0.9970 | 167
Strawberry healthy | 1.0000 | 1.0000 | 1.0000 | 69
Tomato Bacterial spot | 0.9969 | 0.9906 | 0.9937 | 320
Tomato Early blight | 0.9548 | 0.9867 | 0.9705 | 150
Tomato Late blight | 0.9965 | 0.9791 | 0.9877 | 287
Tomato Leaf Mold | 0.9929 | 0.9720 | 0.9823 | 143
Tomato Septoria leaf spot | 0.9925 | 1.0000 | 0.9963 | 266
Tomato Two spotted spider mite | 0.9729 | 0.9960 | 0.9843 | 252
Tomato Target Spot | 0.9902 | 0.9621 | 0.9760 | 211
Tomato Yellow Leaf Curl Virus | 0.9988 | 0.9963 | 0.9975 | 804
Tomato mosaic virus | 0.9825 | 1.0000 | 0.9912 | 56
Tomato healthy | 0.9795 | 1.0000 | 0.9896 | 239
Macro Avg | 0.9897 | 0.9926 | 0.9910
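The Macro Avg row in tables like Table 10 is the unweighted mean of the per-class metrics (in contrast to the support-weighted averages of Table 9). As a small worked example, using only the first four per-class F1 values of Table 10:

```python
# Macro average = unweighted mean over classes, regardless of class size.
per_class_f1 = [0.9787, 0.9947, 1.0000, 0.9919]  # first four classes of Table 10
macro_f1 = sum(per_class_f1) / len(per_class_f1)  # 0.991325 for these four values
```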
Table 11. The proposed PlantClassiNet for ResNet50 on the Plantvillage dataset.
PlantVillage Category | Precision | Recall | F1-Score | # Test Sample
Apple Apple scab | 0.9894 | 0.9789 | 0.9841 | 95
Apple Black rot | 1.0000 | 1.0000 | 1.0000 | 94
Apple Cedar apple rust | 1.0000 | 1.0000 | 1.0000 | 42
Apple healthy | 0.9919 | 0.9960 | 0.9939 | 247
Blueberry healthy | 1.0000 | 1.0000 | 1.0000 | 226
Cherry Powdery mildew | 1.0000 | 0.9873 | 0.9936 | 158
Cherry healthy | 1.0000 | 1.0000 | 1.0000 | 129
Corn Gray leaf spot | 0.9710 | 0.8701 | 0.9178 | 77
Corn Common rust | 1.0000 | 0.9944 | 0.9972 | 179
Corn Northern Leaf Blight | 0.9355 | 0.9797 | 0.9571 | 148
Corn healthy | 1.0000 | 1.0000 | 1.0000 | 175
Grape Black rot | 1.0000 | 1.0000 | 1.0000 | 177
Grape Black Measles | 1.0000 | 1.0000 | 1.0000 | 208
Grape Isariopsis Leaf Spot | 1.0000 | 1.0000 | 1.0000 | 162
Grape healthy | 1.0000 | 1.0000 | 1.0000 | 64
Orange Citrus greening | 1.0000 | 1.0000 | 1.0000 | 827
Peach Bacterial spot | 1.0000 | 0.9971 | 0.9985 | 345
Peach healthy | 0.9815 | 0.9815 | 0.9815 | 54
Pepper bell Bacterial spot | 1.0000 | 0.9933 | 0.9967 | 150
Pepper bell healthy | 0.9955 | 1.0000 | 0.9978 | 222
Potato Early blight | 0.9934 | 1.0000 | 0.9967 | 150
Potato Late blight | 1.0000 | 0.9933 | 0.9967 | 150
Potato healthy | 0.9565 | 0.9565 | 0.9565 | 23
Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56
Soybean healthy | 0.9987 | 1.0000 | 0.9993 | 764
Squash Powdery mildew | 0.9964 | 1.0000 | 0.9982 | 276
Strawberry Leaf scorch | 1.0000 | 1.0000 | 1.0000 | 167
Strawberry healthy | 1.0000 | 1.0000 | 1.0000 | 69
Tomato Bacterial spot | 1.0000 | 0.9938 | 0.9969 | 320
Tomato Early blight | 0.9930 | 0.9467 | 0.9693 | 150
Tomato Late blight | 0.9862 | 0.9930 | 0.9896 | 287
Tomato Leaf Mold | 1.0000 | 0.9790 | 0.9894 | 143
Tomato Septoria leaf spot | 0.9888 | 1.0000 | 0.9944 | 266
Tomato Two spotted spider mite | 0.9763 | 0.9802 | 0.9782 | 252
Tomato Target Spot | 0.9635 | 1.0000 | 0.9814 | 211
Tomato Yellow Leaf Curl Virus | 1.0000 | 0.9988 | 0.9994 | 804
Tomato mosaic virus | 1.0000 | 1.0000 | 1.0000 | 56
Tomato healthy | 0.9917 | 1.0000 | 0.9958 | 239
Macro Avg | 0.9923 | 0.9900 | 0.9911
Table 12. The proposed PlantClassiNet for InceptionV3 on the Plantvillage dataset.
PlantVillage Category | Precision | Recall | F1-Score | # Test Sample
Apple Apple scab | 0.9891 | 0.9579 | 0.9733 | 95
Apple Black rot | 0.9895 | 1.0000 | 0.9947 | 94
Apple Cedar apple rust | 1.0000 | 0.9762 | 0.9880 | 42
Apple healthy | 0.9960 | 0.9960 | 0.9960 | 247
Blueberry healthy | 0.9956 | 1.0000 | 0.9978 | 226
Cherry Powdery mildew | 1.0000 | 0.9937 | 0.9968 | 158
Cherry healthy | 1.0000 | 1.0000 | 1.0000 | 129
Corn Gray leaf spot | 0.8795 | 0.9481 | 0.9125 | 77
Corn Common rust | 1.0000 | 1.0000 | 1.0000 | 179
Corn Northern Leaf Blight | 0.9718 | 0.9324 | 0.9517 | 148
Corn healthy | 1.0000 | 1.0000 | 1.0000 | 175
Grape Black rot | 0.9944 | 1.0000 | 0.9972 | 177
Grape Black Measles | 1.0000 | 0.9952 | 0.9976 | 208
Grape Isariopsis Leaf Spot | 1.0000 | 1.0000 | 1.0000 | 162
Grape healthy | 0.9844 | 0.9844 | 0.9844 | 64
Orange Citrus greening | 1.0000 | 1.0000 | 1.0000 | 827
Peach Bacterial spot | 0.9914 | 0.9971 | 0.9942 | 345
Peach healthy | 0.9815 | 0.9815 | 0.9815 | 54
Pepper bell Bacterial spot | 0.9933 | 0.9933 | 0.9933 | 150
Pepper bell healthy | 0.9867 | 1.0000 | 0.9933 | 222
Potato Early blight | 0.9934 | 1.0000 | 0.9967 | 150
Potato Late blight | 0.9551 | 0.9933 | 0.9739 | 150
Potato healthy | 1.0000 | 0.7391 | 0.8500 | 23
Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56
Soybean healthy | 0.9948 | 0.9987 | 0.9967 | 764
Squash Powdery mildew | 1.0000 | 0.9964 | 0.9982 | 276
Strawberry Leaf scorch | 1.0000 | 0.9880 | 0.9940 | 167
Strawberry healthy | 1.0000 | 1.0000 | 1.0000 | 69
Tomato Bacterial spot | 0.9812 | 0.9781 | 0.9797 | 320
Tomato Early blight | 0.9371 | 0.8933 | 0.9147 | 150
Tomato Late blight | 0.9682 | 0.9547 | 0.9614 | 287
Tomato Leaf Mold | 0.9720 | 0.9720 | 0.9720 | 143
Tomato Septoria leaf spot | 0.9562 | 0.9850 | 0.9704 | 266
Tomato Two spotted spider mite | 0.9760 | 0.9683 | 0.9721 | 252
Tomato Target Spot | 0.9543 | 0.9905 | 0.9721 | 211
Tomato Yellow Leaf Curl Virus | 0.9975 | 0.9876 | 0.9925 | 804
Tomato mosaic virus | 0.9492 | 1.0000 | 0.9739 | 56
Tomato healthy | 0.9958 | 0.9958 | 0.9958 | 239
Macro Avg | 0.9838 | 0.9789 | 0.9807
Table 13. The proposed PlantClassiNet for DenseNet121 on the Plantvillage dataset.
PlantVillage Category | Precision | Recall | F1-Score | # Test Sample
Apple Apple scab | 1.0000 | 0.9895 | 0.9947 | 95
Apple Black rot | 0.9895 | 1.0000 | 0.9947 | 94
Apple Cedar apple rust | 1.0000 | 1.0000 | 1.0000 | 42
Apple healthy | 1.0000 | 1.0000 | 1.0000 | 247
Blueberry healthy | 1.0000 | 1.0000 | 1.0000 | 226
Cherry Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 158
Cherry healthy | 1.0000 | 1.0000 | 1.0000 | 129
Corn Gray leaf spot | 0.8837 | 0.9870 | 0.9325 | 77
Corn Common rust | 1.0000 | 1.0000 | 1.0000 | 179
Corn Northern Leaf Blight | 0.9929 | 0.9392 | 0.9653 | 148
Corn healthy | 1.0000 | 1.0000 | 1.0000 | 175
Grape Black rot | 0.9944 | 1.0000 | 0.9972 | 177
Grape Black Measles | 1.0000 | 1.0000 | 1.0000 | 208
Grape Isariopsis Leaf Spot | 1.0000 | 0.9938 | 0.9969 | 162
Grape healthy | 1.0000 | 1.0000 | 1.0000 | 64
Orange Citrus greening | 1.0000 | 0.9988 | 0.9994 | 827
Peach Bacterial spot | 0.9942 | 1.0000 | 0.9971 | 345
Peach healthy | 1.0000 | 1.0000 | 1.0000 | 54
Pepper bell Bacterial spot | 1.0000 | 0.9867 | 0.9933 | 150
Pepper bell healthy | 0.9955 | 1.0000 | 0.9978 | 222
Potato Early blight | 0.9934 | 1.0000 | 0.9967 | 150
Potato Late blight | 1.0000 | 0.9933 | 0.9967 | 150
Potato healthy | 1.0000 | 0.9565 | 0.9778 | 23
Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56
Soybean healthy | 0.9987 | 1.0000 | 0.9993 | 764
Squash Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 276
Strawberry Leaf scorch | 1.0000 | 0.9940 | 0.9970 | 167
Strawberry healthy | 1.0000 | 1.0000 | 1.0000 | 69
Tomato Bacterial spot | 0.9938 | 1.0000 | 0.9969 | 320
Tomato Early blight | 0.9868 | 0.9933 | 0.9900 | 150
Tomato Late blight | 0.9965 | 0.9930 | 0.9948 | 287
Tomato Leaf Mold | 1.0000 | 0.9860 | 0.9930 | 143
Tomato Septoria leaf spot | 0.9963 | 1.0000 | 0.9981 | 266
Tomato Two spotted spider mite | 1.0000 | 0.9960 | 0.9980 | 252
Tomato Target Spot | 0.9906 | 0.9953 | 0.9929 | 211
Tomato Yellow Leaf Curl Virus | 1.0000 | 0.9975 | 0.9988 | 804
Tomato mosaic virus | 1.0000 | 1.0000 | 1.0000 | 56
Tomato healthy | 0.9958 | 1.0000 | 0.9979 | 239
Macro Avg | 0.9948 | 0.9947 | 0.9946
Table 14. The proposed PlantClassiNet for EfficientNetB0 on the PlantVillage dataset.

| PlantVillage Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Apple Apple scab | 1.0000 | 0.9789 | 0.9894 | 95 |
| Apple Black rot | 1.0000 | 1.0000 | 1.0000 | 94 |
| Apple Cedar apple rust | 1.0000 | 1.0000 | 1.0000 | 42 |
| Apple healthy | 1.0000 | 0.9960 | 0.9980 | 247 |
| Blueberry healthy | 0.9956 | 1.0000 | 0.9978 | 226 |
| Cherry Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 158 |
| Cherry healthy | 1.0000 | 1.0000 | 1.0000 | 129 |
| Corn Gray leaf spot | 0.9012 | 0.9481 | 0.9241 | 77 |
| Corn Common rust | 1.0000 | 1.0000 | 1.0000 | 179 |
| Corn Northern Leaf Blight | 0.9720 | 0.9392 | 0.9553 | 148 |
| Corn healthy | 1.0000 | 1.0000 | 1.0000 | 175 |
| Grape Black rot | 1.0000 | 1.0000 | 1.0000 | 177 |
| Grape Black Measles | 1.0000 | 1.0000 | 1.0000 | 208 |
| Grape Isariopsis Leaf Spot | 1.0000 | 1.0000 | 1.0000 | 162 |
| Grape healthy | 1.0000 | 1.0000 | 1.0000 | 64 |
| Orange Citrus greening | 1.0000 | 1.0000 | 1.0000 | 827 |
| Peach Bacterial spot | 0.9914 | 0.9971 | 0.9942 | 345 |
| Peach healthy | 0.9643 | 1.0000 | 0.9818 | 54 |
| Pepper bell Bacterial spot | 1.0000 | 1.0000 | 1.0000 | 150 |
| Pepper bell healthy | 0.9911 | 1.0000 | 0.9955 | 222 |
| Potato Early blight | 0.9934 | 1.0000 | 0.9967 | 150 |
| Potato Late blight | 0.9868 | 1.0000 | 0.9934 | 150 |
| Potato healthy | 1.0000 | 0.8261 | 0.9048 | 23 |
| Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56 |
| Soybean healthy | 1.0000 | 0.9987 | 0.9993 | 764 |
| Squash Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 276 |
| Strawberry Leaf scorch | 1.0000 | 0.9940 | 0.9970 | 167 |
| Strawberry healthy | 1.0000 | 1.0000 | 1.0000 | 69 |
| Tomato Bacterial spot | 1.0000 | 0.9969 | 0.9984 | 320 |
| Tomato Early blight | 1.0000 | 0.9667 | 0.9831 | 150 |
| Tomato Late blight | 0.9828 | 0.9930 | 0.9879 | 287 |
| Tomato Leaf Mold | 0.9861 | 0.9930 | 0.9895 | 143 |
| Tomato Septoria leaf spot | 0.9925 | 0.9925 | 0.9925 | 266 |
| Tomato Two spotted spider mite | 0.9920 | 0.9802 | 0.9860 | 252 |
| Tomato Target Spot | 0.9769 | 1.0000 | 0.9883 | 211 |
| Tomato Yellow Leaf Curl Virus | 1.0000 | 0.9975 | 0.9988 | 804 |
| Tomato mosaic virus | 1.0000 | 1.0000 | 1.0000 | 56 |
| Tomato healthy | 0.9917 | 1.0000 | 0.9958 | 239 |
| Macro Avg | 0.9926 | 0.9894 | 0.9907 | |
Table 15. The proposed PlantClassiNet for MobileNetV3Small on the PlantVillage dataset.

| PlantVillage Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Apple Apple scab | 0.9895 | 0.9895 | 0.9895 | 95 |
| Apple Black rot | 0.9895 | 1.0000 | 0.9947 | 94 |
| Apple Cedar apple rust | 1.0000 | 1.0000 | 1.0000 | 42 |
| Apple healthy | 1.0000 | 0.9919 | 0.9959 | 247 |
| Blueberry healthy | 1.0000 | 1.0000 | 1.0000 | 226 |
| Cherry Powdery mildew | 1.0000 | 0.9937 | 0.9968 | 158 |
| Cherry healthy | 1.0000 | 1.0000 | 1.0000 | 129 |
| Corn Gray leaf spot | 0.9241 | 0.9481 | 0.9359 | 77 |
| Corn Common rust | 1.0000 | 1.0000 | 1.0000 | 179 |
| Corn Northern Leaf Blight | 0.9724 | 0.9527 | 0.9625 | 148 |
| Corn healthy | 1.0000 | 0.9943 | 0.9971 | 175 |
| Grape Black rot | 1.0000 | 0.9944 | 0.9972 | 177 |
| Grape Black Measles | 0.9905 | 1.0000 | 0.9952 | 208 |
| Grape Isariopsis Leaf Spot | 1.0000 | 1.0000 | 1.0000 | 162 |
| Grape healthy | 1.0000 | 1.0000 | 1.0000 | 64 |
| Orange Citrus greening | 0.9988 | 1.0000 | 0.9994 | 827 |
| Peach Bacterial spot | 0.9971 | 1.0000 | 0.9986 | 345 |
| Peach healthy | 1.0000 | 0.9815 | 0.9907 | 54 |
| Pepper bell Bacterial spot | 1.0000 | 1.0000 | 1.0000 | 150 |
| Pepper bell healthy | 0.9955 | 1.0000 | 0.9978 | 222 |
| Potato Early blight | 1.0000 | 1.0000 | 1.0000 | 150 |
| Potato Late blight | 1.0000 | 0.9933 | 0.9967 | 150 |
| Potato healthy | 1.0000 | 0.9565 | 0.9778 | 23 |
| Raspberry healthy | 1.0000 | 1.0000 | 1.0000 | 56 |
| Soybean healthy | 0.9987 | 1.0000 | 0.9993 | 764 |
| Squash Powdery mildew | 1.0000 | 1.0000 | 1.0000 | 276 |
| Strawberry Leaf scorch | 1.0000 | 0.9940 | 0.9970 | 167 |
| Strawberry healthy | 1.0000 | 0.9855 | 0.9927 | 69 |
| Tomato Bacterial spot | 0.9968 | 0.9875 | 0.9922 | 320 |
| Tomato Early blight | 0.9796 | 0.9600 | 0.9697 | 150 |
| Tomato Late blight | 0.9792 | 0.9861 | 0.9826 | 287 |
| Tomato Leaf Mold | 0.9929 | 0.9720 | 0.9823 | 143 |
| Tomato Septoria leaf spot | 0.9638 | 1.0000 | 0.9815 | 266 |
| Tomato Two spotted spider mite | 0.9842 | 0.9881 | 0.9861 | 252 |
| Tomato Target Spot | 0.9631 | 0.9905 | 0.9766 | 211 |
| Tomato Yellow Leaf Curl Virus | 1.0000 | 0.9925 | 0.9963 | 804 |
| Tomato mosaic virus | 1.0000 | 0.9821 | 0.9910 | 56 |
| Tomato healthy | 0.9958 | 1.0000 | 0.9979 | 239 |
| Macro Avg | 0.9924 | 0.9904 | 0.9913 | |
Table 16. Weighted average for six pre-trained models on the PlantLeaves dataset.

| Model | Layers | Params | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- | --- |
| AlexNet | 8 | 60 M | 0.8715 | 0.8545 | 0.8277 |
| ResNet50 | 50 | 25.6 M | 0.9513 | 0.9364 | 0.9346 |
| InceptionV3 | 46 | 23.6 M | 0.9133 | 0.8727 | 0.8654 |
| DenseNet121 | 121 | 7.03 M | 0.9165 | 0.9000 | 0.8964 |
| EfficientNetB0 | 237 | 4.05 M | 0.8715 | 0.8727 | 0.8568 |
| MobileNetV3Small | 54 | 2.5 M | 0.8932 | 0.8727 | 0.8632 |
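Table 16 reports support-weighted averages, while the per-class tables report unweighted macro averages; the distinction matters because PlantLeaves classes are balanced but PlantVillage supports range from 23 to 827. A minimal sketch of the two aggregations (the class values and supports below are illustrative, not the paper's data):

```python
# Sketch: support-weighted average (as in Table 16) vs. unweighted macro
# average (as in the per-class tables). Illustrative values only.

def macro_avg(values):
    """Unweighted mean over classes: every class counts equally."""
    return sum(values) / len(values)

def weighted_avg(values, supports):
    """Mean over classes weighted by the number of test samples per class."""
    total = sum(supports)
    return sum(v * n for v, n in zip(values, supports)) / total

per_class_f1 = [0.99, 0.85, 1.00]   # hypothetical per-class F1 scores
supports = [800, 20, 180]           # hypothetical test-sample counts

print(round(macro_avg(per_class_f1), 4))               # 0.9467
print(round(weighted_avg(per_class_f1, supports), 4))  # 0.989
```

With a large, well-classified class dominating the supports, the weighted average (0.989) sits well above the macro average (0.9467), which is pulled down by the small weak class.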
Table 17. The proposed PlantClassiNet for AlexNet on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Alstonia Scholaris healthy (P2b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Arjun diseased (P1a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Arjun healthy (P1b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Bael diseased (P4b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar healthy (P11a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva diseased (P3b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Gauva healthy (P3a) | 0.7143 | 1.0000 | 0.8333 | 5 |
| Jamun diseased (P5b) | 0.5000 | 1.0000 | 0.6667 | 5 |
| Jamun healthy (P5a) | 1.0000 | 0.4000 | 0.5714 | 5 |
| Jatropha diseased (P6b) | 1.0000 | 0.2000 | 0.3333 | 5 |
| Jatropha healthy (P6a) | 0.5000 | 1.0000 | 0.6667 | 5 |
| Lemon diseased (P10b) | 1.0000 | 0.4000 | 0.5714 | 5 |
| Lemon healthy (P10a) | 0.6250 | 1.0000 | 0.7692 | 5 |
| Mango diseased (P0b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Mango healthy (P0a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata diseased (P7b) | 0.0000 | 0.0000 | 0.0000 | 5 |
| Pongamia Pinnata healthy (P7a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Macro Avg | 0.8715 | 0.8545 | 0.8277 | |
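The all-zero Pongamia Pinnata diseased row above shows how, with only five test images per class, one fully misclassified class zeroes out its precision, recall, and F1 and sharply drags down the macro average. A minimal sketch of the per-class computation (the two-class labels are hypothetical, not the paper's predictions):

```python
# Sketch: per-class precision/recall/F1 when one small class is entirely
# absorbed by another. Hypothetical labels, illustrative only.

def per_class_prf(y_true, y_pred, cls):
    """Compute precision, recall, and F1 for a single class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Two classes, 5 samples each; every "B" image is predicted as "A".
y_true = ["A"] * 5 + ["B"] * 5
y_pred = ["A"] * 10

print(per_class_prf(y_true, y_pred, "A"))  # precision 0.5, recall 1.0
print(per_class_prf(y_true, y_pred, "B"))  # (0.0, 0.0, 0.0)
```

Averaged over the two classes, the macro F1 drops to roughly 0.33, even though 50% of the samples are classified correctly, which mirrors how the single zero row in Table 17 pulls the macro F1 down to 0.8277.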
Table 18. The proposed PlantClassiNet for ResNet50 on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Alstonia Scholaris healthy (P2b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Arjun diseased (P1a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Arjun healthy (P1b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Bael diseased (P4b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar healthy (P11a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva diseased (P3b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva healthy (P3a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Jamun diseased (P5b) | 0.7143 | 1.0000 | 0.8333 | 5 |
| Jamun healthy (P5a) | 1.0000 | 0.6000 | 0.7500 | 5 |
| Jatropha diseased (P6b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Jatropha healthy (P6a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Lemon diseased (P10b) | 1.0000 | 0.6000 | 0.7500 | 5 |
| Lemon healthy (P10a) | 0.7143 | 1.0000 | 0.8333 | 5 |
| Mango diseased (P0b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Mango healthy (P0a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata diseased (P7b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata healthy (P7a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Macro Avg | 0.9513 | 0.9364 | 0.9346 | |
Table 19. The proposed PlantClassiNet for InceptionV3 on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Alstonia Scholaris healthy (P2b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Arjun diseased (P1a) | 0.8000 | 0.8000 | 0.8000 | 5 |
| Arjun healthy (P1b) | 0.6667 | 0.8000 | 0.7273 | 5 |
| Bael diseased (P4b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar healthy (P11a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva diseased (P3b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva healthy (P3a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Jamun diseased (P5b) | 0.5000 | 1.0000 | 0.6667 | 5 |
| Jamun healthy (P5a) | 1.0000 | 0.2000 | 0.3333 | 5 |
| Jatropha diseased (P6b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Jatropha healthy (P6a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Lemon diseased (P10b) | 1.0000 | 0.4000 | 0.5714 | 5 |
| Lemon healthy (P10a) | 0.6250 | 1.0000 | 0.7692 | 5 |
| Mango diseased (P0b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Mango healthy (P0a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata diseased (P7b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Pongamia Pinnata healthy (P7a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Macro Avg | 0.9133 | 0.8727 | 0.8654 | |
Table 20. The proposed PlantClassiNet for DenseNet121 on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Alstonia Scholaris healthy (P2b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Arjun diseased (P1a) | 0.6667 | 0.8000 | 0.7273 | 5 |
| Arjun healthy (P1b) | 0.8000 | 0.8000 | 0.8000 | 5 |
| Bael diseased (P4b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar healthy (P11a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva diseased (P3b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Gauva healthy (P3a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Jamun diseased (P5b) | 0.5714 | 0.8000 | 0.6667 | 5 |
| Jamun healthy (P5a) | 0.6667 | 0.4000 | 0.5000 | 5 |
| Jatropha diseased (P6b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Jatropha healthy (P6a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Lemon diseased (P10b) | 1.0000 | 0.4000 | 0.5714 | 5 |
| Lemon healthy (P10a) | 0.6250 | 1.0000 | 0.7692 | 5 |
| Mango diseased (P0b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Mango healthy (P0a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata diseased (P7b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata healthy (P7a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Macro Avg | 0.9165 | 0.9000 | 0.8964 | |
Table 21. The proposed PlantClassiNet for EfficientNetB0 on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Alstonia Scholaris healthy (P2b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Arjun diseased (P1a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Arjun healthy (P1b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Bael diseased (P4b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Chinar healthy (P11a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Gauva diseased (P3b) | 1.0000 | 0.6000 | 0.7500 | 5 |
| Gauva healthy (P3a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Jamun diseased (P5b) | 0.6250 | 1.0000 | 0.7692 | 5 |
| Jamun healthy (P5a) | 1.0000 | 0.6000 | 0.7500 | 5 |
| Jatropha diseased (P6b) | 0.7143 | 1.0000 | 0.8333 | 5 |
| Jatropha healthy (P6a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Lemon diseased (P10b) | 0.0000 | 0.0000 | 0.0000 | 5 |
| Lemon healthy (P10a) | 0.5000 | 1.0000 | 0.6667 | 5 |
| Mango diseased (P0b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Mango healthy (P0a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Pongamia Pinnata diseased (P7b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata healthy (P7a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Macro Avg | 0.8715 | 0.8727 | 0.8568 | |
Table 22. The proposed PlantClassiNet for MobileNetV3Small on the PlantLeaves dataset.

| PlantLeaves Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Alstonia Scholaris diseased (P2a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Alstonia Scholaris healthy (P2b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Arjun diseased (P1a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Arjun healthy (P1b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Bael diseased (P4b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Basil healthy (P8) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar diseased (P11b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Chinar healthy (P11a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Gauva diseased (P3b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Gauva healthy (P3a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Jamun diseased (P5b) | 0.4286 | 0.6000 | 0.5000 | 5 |
| Jamun healthy (P5a) | 0.3333 | 0.2000 | 0.2500 | 5 |
| Jatropha diseased (P6b) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Jatropha healthy (P6a) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Lemon diseased (P10b) | 1.0000 | 0.2000 | 0.3333 | 5 |
| Lemon healthy (P10a) | 0.5556 | 1.0000 | 0.7143 | 5 |
| Mango diseased (P0b) | 0.8333 | 1.0000 | 0.9091 | 5 |
| Mango healthy (P0a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate diseased (P9b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pomegranate healthy (P9a) | 1.0000 | 0.8000 | 0.8889 | 5 |
| Pongamia Pinnata diseased (P7b) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Pongamia Pinnata healthy (P7a) | 1.0000 | 1.0000 | 1.0000 | 5 |
| Macro Avg | 0.8932 | 0.8727 | 0.8632 | |
Table 23. Weighted average for six pre-trained models on the Eggplant dataset.

| Model | Layers | Params | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- | --- |
| AlexNet | 8 | 60 M | 0.9878 | 0.9874 | 0.9874 |
| ResNet50 | 50 | 25.6 M | 0.9939 | 0.9937 | 0.9937 |
| InceptionV3 | 46 | 23.6 M | 0.9779 | 0.9770 | 0.9771 |
| DenseNet121 | 121 | 7.03 M | 1.0000 | 1.0000 | 1.0000 |
| EfficientNetB0 | 237 | 4.05 M | 0.9979 | 0.9979 | 0.9979 |
| MobileNetV3Small | 54 | 2.5 M | 0.9959 | 0.9958 | 0.9958 |
Table 24. The proposed PlantClassiNet for AlexNet on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 0.9759 | 1.0000 | 0.9878 | 81 |
| Insect Pest Disease | 0.9667 | 1.0000 | 0.9831 | 116 |
| Leaf Spot Disease | 1.0000 | 0.9556 | 0.9773 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 1.0000 | 1.0000 | 10 |
| Wilt Disease | 1.0000 | 1.0000 | 1.0000 | 75 |
| Macro Avg | 0.9918 | 0.9937 | 0.9926 | |
Table 25. The proposed PlantClassiNet for ResNet50 on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 1.0000 | 1.0000 | 1.0000 | 81 |
| Insect Pest Disease | 0.9748 | 1.0000 | 0.9872 | 116 |
| Leaf Spot Disease | 1.0000 | 0.9852 | 0.9925 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 1.0000 | 1.0000 | 10 |
| Wilt Disease | 1.0000 | 0.9867 | 0.9933 | 75 |
| Macro Avg | 0.9964 | 0.9960 | 0.9962 | |
Table 26. The proposed PlantClassiNet for InceptionV3 on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 0.9878 | 1.0000 | 0.9939 | 81 |
| Insect Pest Disease | 1.0000 | 1.0000 | 1.0000 | 116 |
| Leaf Spot Disease | 1.0000 | 0.9926 | 0.9963 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 1.0000 | 1.0000 | 10 |
| Wilt Disease | 1.0000 | 1.0000 | 1.0000 | 75 |
| Macro Avg | 0.9983 | 0.9989 | 0.9986 | |
Table 27. The proposed PlantClassiNet for DenseNet121 on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 1.0000 | 1.0000 | 1.0000 | 81 |
| Insect Pest Disease | 1.0000 | 1.0000 | 1.0000 | 116 |
| Leaf Spot Disease | 1.0000 | 1.0000 | 1.0000 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 1.0000 | 1.0000 | 10 |
| Wilt Disease | 1.0000 | 1.0000 | 1.0000 | 75 |
| Macro Avg | 1.0000 | 1.0000 | 1.0000 | |
Table 28. The proposed PlantClassiNet for EfficientNetB0 on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 0.9195 | 0.9877 | 0.9524 | 81 |
| Insect Pest Disease | 0.9826 | 0.9741 | 0.9784 | 116 |
| Leaf Spot Disease | 0.9923 | 0.9556 | 0.9736 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 0.9000 | 0.9474 | 10 |
| Wilt Disease | 0.9868 | 1.0000 | 0.9934 | 75 |
| Macro Avg | 0.9830 | 0.9739 | 0.9779 | |
Table 29. The proposed PlantClassiNet for MobileNetV3Small on the Eggplant dataset.

| Eggplant Category | Precision | Recall | F1-Score | # Test Samples |
| --- | --- | --- | --- | --- |
| Healthy Leaf | 0.9759 | 1.0000 | 0.9878 | 81 |
| Insect Pest Disease | 1.0000 | 1.0000 | 1.0000 | 116 |
| Leaf Spot Disease | 1.0000 | 0.9852 | 0.9925 | 135 |
| Mosaic Virus Disease | 1.0000 | 1.0000 | 1.0000 | 44 |
| Small Leaf Disease | 1.0000 | 1.0000 | 1.0000 | 17 |
| White Mold Disease | 1.0000 | 1.0000 | 1.0000 | 10 |
| Wilt Disease | 1.0000 | 1.0000 | 1.0000 | 75 |
| Macro Avg | 0.9966 | 0.9979 | 0.9972 | |
Table 30. Comparison of previously reported plant leaf classification methods on the PlantVillage dataset.

| Model | Testing Accuracy | Proposed PlantClassiNet |
| --- | --- | --- |
| AlexNet [30] | 0.9906 | 0.9938 |
| AlexNet [31] | 0.9948 | |
| MobileNetV3 [32] | 0.9950 | 0.9937 |
| MobileNetV3 [33] | 0.9259 | |
| MobileNetV3 [34] | 0.946 | |
| MobileNetV3 [35] | 0.9927 | |
| DenseNet121 [36] | 0.8620 | 0.9968 |
| DenseNet121 [15] | 0.9975 | |
| DenseNet121 [34] | 0.791 | |
| ResNet50 [15] | 0.9959 | 0.9946 |
| ResNet50 [37] | 0.982 | |
| ResNet50 [38] | 0.9538 | |
| InceptionV3 [39] | 0.9800 | 0.9876 |
| InceptionV3 [35] | 0.9803 | |
| EfficientNetB0 [34] | 0.947 | 0.9950 |
| EfficientNetB0 [35] | 0.9803 | |

Share and Cite

Zhang, X.; Xu, X. PlantClassiNet: A Dual-Modal Fine-Tuning Framework for CNN-Based Plant Disease Classification. Appl. Sci. 2026, 16, 170. https://doi.org/10.3390/app16010170