Boosting Rice Disease Diagnosis: A Systematic Benchmark of Five Deep Convolutional Neural Network Models in Precision Agriculture
Abstract
1. Introduction
- It provides a rigorous and reproducible performance baseline under a unified framework, including 5-fold cross-validation and a challenging Out-of-Distribution (OOD) generalization test.
- It delivers a clear performance hierarchy and architectural analysis, identifying DenseNet121 as the superior model in terms of both accuracy and parameter efficiency, making it ideally suited for edge deployment.
- It establishes a crucial foundational benchmark for the community, serving as a reference point for future studies exploring more complex models, including transformers, in this specific domain.
2. Literature Review
2.1. Rice Leaf Diseases Dataset
- Brown spot: It is characterized by elliptical lesions that align with the leaf veins. The center of the lesion is dark brown, surrounded by a red-brown or yellow-brown margin, often appearing in patchy distributions (Figure 1a).
- Hispa: It manifests as long, narrow, whitish or silvery streaks on leaves caused by insect scraping. Severe infections lead to leaf drying, curling, and browning (Figure 1b).
- Leaf Blast: This typically produces spindle-shaped lesions on leaves with whitish or gray centers and dark brown to reddish margins. It can also infect the panicle neck, causing rot and yield loss (Figure 1c).
2.2. Deep Learning
2.2.1. VGG
2.2.2. ResNet
2.2.3. Xception
2.2.4. DenseNet
2.2.5. Loss Function and Optimizer
3. System Architecture and Experimental Setup
3.1. System Overview
3.2. Experimental Scenarios for Robustness Evaluation
- Original Training Set: 320 images per class (1280 total).
- Original Validation Set: 80 images per class (320 total).
- Original Test Set: 123 images per class (492 total).
3.3. Data Preprocessing and Augmentation Pipeline
3.3.1. Preprocessing
- Resizing: Images were resized to 224 × 224 pixels, the default input size for the selected CNN architectures.
- Normalization: Pixel values were normalized to the range [0, 1] by dividing by 255.
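These two steps can be sketched as follows; the nearest-neighbour resize below is a minimal stand-in for the interpolation a deep-learning library would normally perform, and the function name and dummy image are illustrative only.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to (size, size) and scale pixels to [0, 1]."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# A random 300x400 RGB array stands in for a rice-leaf photograph.
img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape)  # (224, 224, 3)
```

In practice a library resize (e.g., bilinear) would be preferred; the index-map version above only illustrates the shape and range contract the models expect.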
3.3.2. Augmentation Strategies
- Mixed Augmentation (for Cases A, C, and D): This strategy applied, in a single pass to each original training image, a unified transformation that randomly combined all techniques listed in Table 3, generating 160 additional samples per class.
- Individual + Mixed Augmentation (for Case B): To investigate the impact of extensive augmentation diversity, this strategy first applied the mixed augmentation. Furthermore, each of the seven augmentation techniques was also applied individually to generate distinct variants, yielding 608 additional samples per class.
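The mixed strategy can be sketched minimally as below, assuming only two of the Table 3 transforms (horizontal flip and brightness) for brevity; the helper name and dummy images are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixed_augment(image: np.ndarray) -> np.ndarray:
    """One randomized pass combining simple transforms (flip and brightness
    shown here; rotation, shifts, shear, and zoom follow the same pattern)."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:                 # horizontal flip with 50% probability
        out = out[:, ::-1]
    out *= rng.uniform(0.8, 1.2)           # brightness factor in [0.8, 1.2]
    return np.clip(out, 0, 255).astype(np.uint8)

# Four dummy originals; one augmented variant generated per original.
originals = [rng.integers(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(4)]
augmented = [mixed_augment(im) for im in originals]
print(len(augmented), augmented[0].shape)
```

The individual strategy of Case B would instead apply each transform on its own to every original, which is what raises the per-class count from 160 to 608 extra samples.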
3.4. Model Training and Fine-Tuning Strategy
- Phase 1: Feature Extraction: The convolutional base of each model was frozen, and only the newly initialized top classification layers were trained for 20 epochs. This allows the model to adapt its new head to the features extracted by the frozen base.
- Phase 2: Fine-Tuning: Subsequently, the top 20% of layers from the convolutional base were unfrozen. The entire model was then trained for an additional 80 epochs with a significantly reduced learning rate (10 times lower than the initial rate). This approach helps prevent catastrophic forgetting and allows for domain-specific adaptation of higher-level features [31].
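The two-phase schedule can be sketched abstractly as follows; the layer names and counts are hypothetical, and in a real Keras model the same logic would toggle each layer's trainable flag before recompiling.

```python
# Hypothetical layer list standing in for a pretrained convolutional base
# plus a newly initialised classification head.
base_layers = [f"conv_{i}" for i in range(100)]
head_layers = ["gap", "dense_512", "dropout", "softmax_4"]
initial_lr = 1e-4

# Phase 1 (20 epochs): freeze the entire base, train only the new head.
trainable_phase1 = list(head_layers)

# Phase 2 (80 epochs): unfreeze the top 20% of the base and cut the
# learning rate by 10x to limit catastrophic forgetting.
n_unfrozen = int(len(base_layers) * 0.20)
trainable_phase2 = base_layers[-n_unfrozen:] + head_layers
fine_tune_lr = initial_lr / 10

print(len(trainable_phase1), len(trainable_phase2), fine_tune_lr)
```

Keeping the lower base layers frozen in Phase 2 preserves the generic ImageNet features, while the unfrozen top 20% adapts the higher-level features to leaf-lesion textures.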
3.5. Model Evaluation Metrics
- True Positive (TP): The number of samples correctly predicted as the positive class (e.g., a ‘Hispa’ image predicted as ‘Hispa’).
- True Negative (TN): The number of samples correctly predicted as not being the positive class (e.g., with ‘Hispa’ as the positive class, a ‘Brown Spot’ image predicted as ‘Brown Spot’, ‘Leaf Blast’, or ‘Healthy’).
- False Positive (FP): The number of samples incorrectly predicted as the positive class (e.g., a ‘Healthy’ image predicted as ‘Hispa’).
- False Negative (FN): The number of samples incorrectly predicted as not being the positive class (e.g., a ‘Hispa’ image predicted as ‘Brown Spot’).
- Accuracy: The proportion of total correct predictions (both positive and negative) among the total number of cases examined. It provides an overall measure of correctness.
- Precision: For each class, it measures the proportion of correctly identified positive instances among all instances predicted as positive. A high precision indicates a low false positive rate for that class.
- Recall: For each class, it measures the proportion of actual positive instances that were correctly identified. A high recall indicates a low false negative rate for that class.
- F1-Score: The harmonic mean of precision and recall, providing a single metric that balances both concerns, especially useful when class distribution is uneven.
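Given a confusion matrix, all four metrics follow directly from the TP/FP/FN counts above. The sketch below uses a toy 4-class matrix (not results from this study) and a hypothetical helper name.

```python
import numpy as np

def classification_metrics(cm: np.ndarray):
    """Per-class precision/recall/F1 and overall accuracy from a confusion
    matrix with rows = true class and columns = predicted class."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as the class but wrong
    fn = cm.sum(axis=1) - tp          # belong to the class but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Toy matrix over (Brown Spot, Healthy, Hispa, Leaf Blast), with the kind of
# Healthy/Hispa confusion the error analysis in Section 4.2.3 discusses.
cm = np.array([[50,  2,  1,  3],
               [ 1, 40, 10,  2],
               [ 2, 12, 38,  1],
               [ 4,  1,  2, 46]])
acc, prec, rec, f1 = classification_metrics(cm)
print(round(acc, 3))
```

The macro-averaged scores reported in Section 4 would then simply be the means of the per-class vectors.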
4. Results and Analysis
4.1. Comprehensive Performance Benchmark and Architectural Analysis
4.2. Model Diagnosis and Reliability Assessment
4.2.1. Diagnostic Capability with ROC and Precision–Recall Analysis
4.2.2. Prediction Reliability via Calibration Analysis
4.2.3. Decision Rationale and Error Analysis Through Interpretability
4.3. Robustness and Generalization Under Challenging Conditions
4.3.1. Multi-Scenario Performance and Robustness
- Impact of Augmentation Volume: Comparing Case B (Extended Augmentation) to Case A (Baseline Augmentation), most models showed a slight improvement (e.g., DenseNet121: 71% → 73%; Xception: 64% → 68%). The consistent but modest gains confirm that greater augmentation diversity and volume aid generalization, while also showing that simply adding more augmented data from the same source distribution yields diminishing returns.
- Superior OOD Generalization: The Case C (OOD) results remain the most telling indicator of true generalization ability. Here, DenseNet121 demonstrated exceptional capability (85%), significantly outperforming all other models on completely unseen data from an external source.
- Robustness Under Stress: Case D (Stress Test) reveals model vulnerability to synthetic perturbations within the test set itself. All models dropped relative to their Case A baseline. DenseNet121, despite a notable fall from 71% to 64%, maintained the highest absolute accuracy, demonstrating relative resilience.
4.3.2. Consistency and Architectural Implications
4.4. Limitations and Future Work
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| ANN | Artificial Neural Network |
| AUC | Area Under the Curve |
| AUC-PR | Area Under the Precision–Recall Curve |
| AUC-ROC | Area Under the Receiver Operating Characteristic Curve |
| CNN | Convolutional Neural Network |
| DenseNet | Densely Connected Convolutional Network |
| ECE | Expected Calibration Error |
| FN | False Negative |
| FP | False Positive |
| GAP | Global Average Pooling |
| Grad-CAM | Gradient-weighted Class Activation Mapping |
| OOD | Out-of-Distribution |
| ReLU | Rectified Linear Unit |
| ResNet | Residual Network |
| RMSprop | Root Mean Square Propagation |
| SGD | Stochastic Gradient Descent |
| Tanh | Hyperbolic Tangent |
| TN | True Negative |
| TP | True Positive |
| UCI | University of California, Irvine |
| VGG | Visual Geometry Group |
| ViT | Vision Transformer |
References
- Fukagawa, N.K.; Ziska, L.H. Rice: Importance for Global Nutrition. J. Nutr. Sci. Vitaminol. 2019, 65, S2–S3. [Google Scholar] [CrossRef] [PubMed]
- Kumar, K.S.A.; Karthika, K.S. Abiotic and Biotic Factors Influencing Soil Health and/or Soil Degradation. In Soil Health; Springer: Cham, Switzerland, 2020; pp. 145–161. [Google Scholar]
- Phadikar, S.; Sil, J.; Das, A.K. Rice Diseases Classification Using Feature Selection and Rule Generation Techniques. Comput. Electron. Agric. 2013, 90, 76–85. [Google Scholar] [CrossRef]
- Zhang, Y.Z. Ecology and Control Measures for Major Rice Diseases in Taiwan. In Proceedings of the Symposium on Rice Health Management, Taipei, Taiwan, 15–17 April 2004. [Google Scholar]
- Latif, G.; Abdelhamid, S.E.; Mallouhy, R.E.; Alghazo, J.; Kazimi, Z.A. Deep Learning Utilization in Agriculture: Detection of Rice Plant Diseases Using an Improved CNN Model. Plants 2022, 11, 2230. [Google Scholar] [CrossRef] [PubMed]
- Laborte, A.G.; Gutierrez, M.A.; Balanza, J.G.; Saito, K.; Zwart, S.J.; Boschetti, M.; Murty, M.V.R.; Villano, L.; Aunario, J.K.; Reinke, R.; et al. RiceAtlas, a Spatial Database of Global Rice Calendars and Production. Sci. Data 2017, 4, 170074. [Google Scholar] [CrossRef] [PubMed]
- Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
- Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional Neural Networks: An Overview and Application in Radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed]
- Burhan, S.A.; Minhas, S.; Tariq, A.; Hassan, M.N. Comparative Study of Deep Learning Algorithms for Disease and Pest Detection in Rice Crops. In Proceedings of the 2020 International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Bucharest, Romania, 25–27 June 2020; pp. 1–5. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Khan, A.; Rauf, Z.; Sohail, A.; Aslam, M.S.; Baber, J.; Ullah, H.; Saeed, M.; Alomari, A.A.; Alraddadi, M.O. A Survey of the Vision Transformers and Their CNN-Transformer Based Variants. Artif. Intell. Rev. 2023, 56, 2917–2970. [Google Scholar] [CrossRef]
- Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. [Google Scholar] [CrossRef] [PubMed]
- Canziani, A.; Paszke, A.; Culurciello, E. An Analysis of Deep Neural Network Models for Practical Applications. arXiv 2016, arXiv:1605.07678. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
- Pereira, J.R.J. Rice Disease. Kaggle. 2023. Available online: https://www.kaggle.com/datasets/jonathanrjpereira/rice-disease (accessed on 16 November 2025).
- UCI Machine Learning Repository. Rice Leaf Diseases Dataset. Available online: https://archive.ics.uci.edu/ml/datasets/Rice+Leaf+Diseases (accessed on 24 July 2025).
- Chen, Y.-C. Applications of Convolution Neural Networks to Predict Clinical Pregnancy from Embryo Microscope Images in In Vitro Fertilization. Master’s Thesis, Taipei Medical University, Taipei, Taiwan, 2021. [Google Scholar]
- Glorot, X.; Bordes, A.; Bengio, Y. Deep Sparse Rectifier Neural Networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323. [Google Scholar]
- Huang, T.-Y. Application of Deep Learning Combined with Balanced Experimental Design Method for Pneumonia Classification in Chest X-ray Images. Master’s Thesis, National Pingtung University, Pingtung, Taiwan, 2020. [Google Scholar]
- Huang, T.-H. Research on Machine Learning for Indoor Positioning Using Multi-Channel Information. Master’s Thesis, National Formosa University, Yunlin, Taiwan, 2020. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
- Lin, C.-T. Matching the Method of Neural Network CNN, LSTM and DNN on the High Confused Mandarin Vowel Recognition. Master’s Thesis, National Chung Hsing University, Taichung, Taiwan, 2019. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Ruder, S. An Overview of Gradient Descent Optimization Algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar]
- Jade, T. Rice Disease Image Dataset. Kaggle. 2023. Available online: https://www.kaggle.com/datasets/tiffanyjade/rice-disease-image-dataset (accessed on 16 November 2025).
- University of Jaffna. Hispa. Roboflow Universe. 2023. Available online: https://universe.roboflow.com/university-of-jaffna-cfjf1/hispa-9f7nz (accessed on 16 November 2025).
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Deng, Y.-H. Fine-Tuning Deep Learning Image Classification Parameter Based on Transfer Learning. Master’s Thesis, National Taiwan University of Science and Technology, Taipei, Taiwan, 2018. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Berrar, D. Cross-Validation. In Encyclopedia of Bioinformatics and Computational Biology; Elsevier: Oxford, UK, 2019; pp. 542–545. [Google Scholar]
- Valverde-Albacete, F.J.; Peláez-Moreno, C. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox. PLoS ONE 2014, 9, e84217. [Google Scholar] [CrossRef] [PubMed]
- Guo, C.; Pleiss, G.; Sun, Y.; Weinberger, K.Q. On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1321–1330. [Google Scholar]
- Arora, A. DenseNet Architecture. Available online: https://amaarora.github.io/posts/2020-08-02-densenets.html (accessed on 15 November 2025).
- Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
| Disease | Shape | Location | Color |
|---|---|---|---|
| Brown spot | Elliptical spots | Leaf surface | Dark brown center, red-brown or yellow-brown surround |
| Hispa | Linear streaks | Leaf surface | White, silvery |
| Leaf blast | Spindle-shaped spots | Leaf surface | Gray center, brown margins |
| Scenario | Training Set (Per Class) | Augmentation Strategy | Test Set (Per Class) | Primary Objective |
|---|---|---|---|---|
| Case A (Baseline) | 320 (orig.) + 160 (aug) = 480 | Mixed Augmentation | 123 (original) | Establish a strong baseline with standard augmentation. |
| Case B (Extended Aug) | 320 (orig.) + 608 (aug) = 928 | Mixed + Individual Augmentation (Section 3.3) | 123 (original) | Probe the effect of significantly increased augmentation diversity and volume. |
| Case C (OOD) | Identical to Case A | Identical to Case A | 123 (external, from [28,29]) | Evaluate generalization to a completely unseen data domain (most rigorous test). |
| Case D (Stress Test) | Identical to Case A | Identical to Case A | 98 (original) + 25 (augmented) = 123 | Assess robustness against synthetic variations and potential overfitting to augmentation artifacts. |
| Augmentation Technique | Parameters |
|---|---|
| Rotation | ±30 degrees |
| Width Shift | ±20% of total width |
| Height Shift | ±20% of total height |
| Shear Transformation | Intensity of 0.2 radians |
| Zoom | Range of ±20% |
| Brightness Adjustment | [0.8, 1.2] range |
| Horizontal Flip | Randomly applied with 50% probability |
| Parameter | VGG16 | VGG19 | DenseNet121 | ResNet101V2 | Xception |
|---|---|---|---|---|---|
| Input Size | (224,224,3) | (224,224,3) | (224,224,3) | (224,224,3) | (224,224,3) |
| Top Layers | Flatten, Dense (4096, ReLU) × 2, Dropout (0.5) × 2 | Flatten, Dense (4096, ReLU) × 2, Dropout (0.5) × 2 | GAP, Dense (512, ReLU), Dropout (0.5) | Flatten, Dense (2048, ReLU) | GAP, Dense (2048, ReLU) |
| Output Layer | Dense (4, Softmax) | Dense (4, Softmax) | Dense (4, Softmax) | Dense (4, Softmax) | Dense (4, Softmax) |
| Optimizer | Adam | Adam | Adam | Adam | Adam |
| Learning Rate | a, b | a, b | a, b | a, b | a, b |
| Loss Function | categorical_crossentropy | categorical_crossentropy | categorical_crossentropy | categorical_crossentropy | categorical_crossentropy |
| Epochs | 100 | 100 | 100 | 100 | 100 |
| Batch Size | 32 | 32 | 32 | 32 | 32 |
| EarlyStopping | Patience = 20 | Patience = 20 | Patience = 20 | Patience = 20 | Patience = 20 |
| ReduceLROnPlateau | Factor = 0.5, patience = 5 | Factor = 0.5, patience = 5 | Factor = 0.5, patience = 5 | Factor = 0.5, patience = 5 | Factor = 0.5, patience = 5 |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| VGG16 | 68.12 ± 2.65 | 69.20 ± 2.57 | 71.11 ± 2.29 | 68.84 ± 2.51 |
| VGG19 | 64.80 ± 1.94 | 67.06 ± 1.49 | 67.75 ± 1.41 | 66.36 ± 1.70 |
| DenseNet121 | 85.08 ± 1.07 | 87.22 ± 1.38 | 83.75 ± 1.05 | 85.08 ± 1.15 |
| ResNet101V2 | 73.98 ± 1.70 | 75.17 ± 1.62 | 75.31 ± 1.37 | 74.89 ± 1.59 |
| Xception | 64.30 ± 2.67 | 65.98 ± 2.75 | 68.04 ± 2.31 | 65.66 ± 2.65 |
| Model | Parameters (Millions) | Theoretical GFLOPs (Giga) |
|---|---|---|
| VGG16 | 138.4 | 15.5 |
| VGG19 | 143.7 | 19.6 |
| DenseNet121 | 8.1 | 2.9 |
| ResNet101V2 | 44.6 | 7.8 |
| Xception | 22.9 | 4.3 |
| Rice Disease | Metric | VGG16 | VGG19 | DenseNet121 | ResNet101V2 | Xception |
|---|---|---|---|---|---|---|
| Brown Spot | AUC-ROC | 0.89 | 0.89 | 0.93 | 0.90 | 0.90 |
| Brown Spot | AUC-PR | 0.81 | 0.79 | 0.88 | 0.84 | 0.82 |
| Healthy | AUC-ROC | 0.87 | 0.86 | 0.90 | 0.84 | 0.85 |
| Healthy | AUC-PR | 0.62 | 0.65 | 0.69 | 0.61 | 0.60 |
| Hispa | AUC-ROC | 0.83 | 0.80 | 0.89 | 0.77 | 0.81 |
| Hispa | AUC-PR | 0.60 | 0.56 | 0.73 | 0.48 | 0.58 |
| Leaf Blast | AUC-ROC | 0.86 | 0.85 | 0.90 | 0.85 | 0.84 |
| Leaf Blast | AUC-PR | 0.74 | 0.70 | 0.84 | 0.70 | 0.70 |
| Model | Brown Spot | Healthy | Hispa | Leaf Blast | Average ECE |
|---|---|---|---|---|---|
| VGG16 | 0.086 | 0.081 | 0.103 | 0.054 | 0.081 |
| VGG19 | 0.081 | 0.067 | 0.102 | 0.068 | 0.080 |
| ResNet101V2 | 0.077 | 0.082 | 0.132 | 0.097 | 0.097 |
| DenseNet121 | 0.073 | 0.112 | 0.158 | 0.089 | 0.108 |
| Xception | 0.054 | 0.067 | 0.078 | 0.059 | 0.065 |
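The ECE values above can be reproduced in principle by binning predictions by confidence and taking the sample-weighted average of the confidence/accuracy gap per bin. The sketch below assumes 10 equal-width bins and uses toy predictions, not the study's data.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the sample-weighted
    average of |mean confidence - empirical accuracy| over the bins."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

# Well calibrated: 95% confidence, 95% empirically correct -> ECE near 0.
good = expected_calibration_error([0.95] * 100, [1] * 95 + [0] * 5)
# Overconfident: 90% confidence but only 50% correct -> ECE near 0.4.
bad = expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5)
print(round(good, 3), round(bad, 3))
```

Lower ECE means the model's softmax confidences track its actual accuracy, which is why Xception's 0.065 indicates the best-calibrated predictions in this comparison despite its lower accuracy.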
| Model | Primary Error Pattern | Error Count | Class-Wise Accuracy (Healthy/Hispa) |
|---|---|---|---|
| DenseNet121 | Healthy → Hispa | 41 | 78%/97% |
| Xception | Hispa → Healthy | 37 | 82%/71% |
| ResNet101V2 | Hispa → Healthy | 44 | 84%/65% |
| VGG16 | Healthy → Hispa | 32 | 71%/80% |
| VGG19 | Healthy → Hispa | 27 | 75%/77% |
| Model | Case A (Baseline Aug) | Case B (Extended Aug) | Case C (OOD) | Case D (Stress Test) |
|---|---|---|---|---|
| VGG16 | 0.66 | 0.67 | 0.68 | 0.63 |
| VGG19 | 0.62 | 0.65 | 0.65 | 0.59 |
| ResNet101V2 | 0.63 | 0.65 | 0.74 | 0.58 |
| DenseNet121 | 0.71 | 0.73 | 0.85 | 0.64 |
| Xception | 0.64 | 0.68 | 0.64 | 0.60 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, S.-H.; Jiang, Q.-W.; Cheng, C.-H.; Tsai, Y.-S.; Huang, Y.-F. Boosting Rice Disease Diagnosis: A Systematic Benchmark of Five Deep Convolutional Neural Network Models in Precision Agriculture. Agriculture 2025, 15, 2494. https://doi.org/10.3390/agriculture15232494