Accurate and Efficient Recognition of Mixed Diseases in Apple Leaves Using a Multi-Task Learning Approach
Abstract
1. Introduction
- (1) We propose the ALMDR model for simultaneous multi-disease classification and segmentation, bridging the gap between single-disease detection and the more realistic multi-disease scenarios found in agricultural fields.
- (2) We introduce a multi-task learning architecture that integrates GFPN for multi-scale feature extraction, MLCH for disease type prediction, and segmentation heads (LSH and LeSH) for precise leaf and lesion delineation, significantly enhancing disease identification accuracy and lesion localization.
- (3) We demonstrate the superior performance and computational efficiency of ALMDR through extensive experiments on the Plant Pathology 2021-FGVC8 dataset (apple leaf diseases) and a self-collected cucumber leaf disease dataset.
2. Related Work
2.1. Classification-Based Methods
2.2. Detection-Based Methods
2.3. Segmentation-Based Methods
3. Materials
3.1. Dataset Construction
3.2. Data Augmentation
3.3. Data Annotation
3.4. Methodology
3.4.1. ALMDR
3.4.2. Group Feature Pyramid Network
3.4.3. Multi-Label Classification Head
3.4.4. Lesion Segmentation Head
3.4.5. Leaf Segmentation Head
4. Results
4.1. Implementation Details
4.2. Evaluation Metrics
- Classification task metrics: Disease classification in instance segmentation models typically relies on aggregating confidence scores from detected instances. The ALMDR model enhances this approach by effectively integrating outputs from three key modules for disease classification: LSH for precise leaf area delineation, LeSH for instance-level disease predictions, and MLCH for image-level multi-disease detection. Figure 6 provides a detailed comparison of confidence score computation methods between ALMDR and other models.
- Detection and segmentation task metrics: For detection and segmentation tasks, we adopt evaluation metrics consistent with the COCO dataset [65], including threshold-based and object-size-based metrics, which facilitates comparison with other research in the computer vision field. We assess the model's performance using average precision (AP) at intersection over union (IoU) thresholds ranging from 0.50 to 0.95 with a step size of 0.05. For example, AP50 considers a prediction correct when the predicted bounding box has an IoU of at least 0.50 with the ground truth and the predicted category matches. We also compute the mean average precision (mAP) and mean average recall (mAR) across these thresholds to reflect the model's overall performance. Moreover, we use AP_S, AP_M, and AP_L to evaluate the model's performance on small (area < 32² pixels), medium (32² to 96² pixels), and large (area > 96² pixels) objects, respectively. These metrics help assess the model's ability to detect and segment objects across different scales.
- Disease severity estimation task metrics: The segmentation results obtained from our model enable us to assess the severity of plant diseases. This assessment follows the National Standard of the People's Republic of China GB/T 17980.124-2004 for apple leaf diseases [66] and GB/T 17980.30-2000 for cucumber leaf diseases [67]. Although primarily designed for apple leaf spot diseases (Alternaria mali and Marssonina coronaria), the apple standard explicitly states its applicability to other apple leaf lesions, making it suitable for our diverse set of leaf diseases. Table 3 presents the standard's severity level criteria, which quantify disease severity based on the proportion of lesion area to total leaf area. The table also includes the distribution of samples across these severity levels in our test set. We evaluate disease severity using two approaches: (1) linear regression to fit predicted disease proportions against ground truth, measured by the coefficient of determination (R²) [14], and (2) classification of severity levels, evaluated using the F1-score. These methods assess our model's performance in both continuous proportion prediction and discrete severity level classification.
- Model efficiency metrics: To assess the ALMDR model's efficiency, we use three metrics: FPS (frames per second), which measures real-time inference speed; FLOPs (G) (giga floating-point operations), which indicates computational complexity; and the number of parameters (in millions), which reflects storage requirements. Together, these metrics evaluate the model's practicality for deployment and optimization.
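The IoU matching rule behind the detection metrics above can be sketched in a few lines. The `[x1, y1, x2, y2]` box format and the helper names below are illustrative assumptions, not the paper's implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, pred_cls, gt_box, gt_cls, threshold=0.50):
    """COCO-style match: class agrees and IoU meets the threshold (e.g. AP50)."""
    return pred_cls == gt_cls and iou(pred_box, gt_box) >= threshold
```

At threshold 0.50 this reproduces the AP50 criterion described above; sweeping the threshold over the COCO range and averaging yields the mAP-style summary.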
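The severity estimation described above reduces to a lesion-to-leaf area ratio plus a goodness-of-fit measure. The sketch below assumes boolean NumPy masks and a simple `np.polyfit` linear fit, which may differ from the authors' exact pipeline:

```python
import numpy as np

def severity_ratio(lesion_mask, leaf_mask):
    """Proportion of leaf pixels covered by lesions (both masks boolean)."""
    leaf_area = leaf_mask.sum()
    if leaf_area == 0:
        return 0.0
    return float((lesion_mask & leaf_mask).sum()) / float(leaf_area)

def r_squared(pred, truth):
    """R^2 of a linear fit of predicted ratios against ground-truth ratios."""
    slope, intercept = np.polyfit(truth, pred, 1)
    fitted = slope * np.asarray(truth) + intercept
    resid = np.asarray(pred) - fitted
    ss_res = float((resid ** 2).sum())
    ss_tot = float(((np.asarray(pred) - np.mean(pred)) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```

The per-image ratio would then be bucketed into the GB/T severity levels of Table 3 for the F1-score evaluation.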
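As a rough illustration of the FPS metric in the last bullet, inference speed can be estimated by timing repeated forward passes. The stand-in `infer` callable below is a placeholder assumption, not the ALMDR network:

```python
import time

def measure_fps(infer, n_runs=50):
    """Return frames per second averaged over n_runs calls of infer()."""
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Example with a trivial stand-in for a model forward pass:
fps = measure_fps(lambda: sum(range(1000)))
```

In practice one would warm up the model first and time it on real input batches so the figure reflects end-to-end inference.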
4.3. Comparative Analysis of Multi-Label Disease Classification
4.4. Evaluation of Instance Segmentation of Leaves and Lesion Regions
4.5. Evaluation of Disease Severity Estimation Performance
4.6. Ablation Study
5. Discussion
5.1. Cross-Model Applicability of GFPN and MLCH Modules
5.2. Visualization of Leaf and Lesion Segmentation Results Across Different Models
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Boyer, J.; Liu, R.H. Apple phytochemicals and their health benefits. Nutr. J. 2004, 3, 5. [Google Scholar] [CrossRef] [PubMed]
- Yang, Q.; Duan, S.; Wang, L. Efficient identification of apple leaf diseases in the wild using convolutional neural networks. Agronomy 2022, 12, 2784. [Google Scholar] [CrossRef]
- Lv, X.; Zhang, X.; Gao, H.; He, T.; Lv, Z.; Zhangzhong, L. When crops meet machine vision: A review and development framework for a low-cost nondestructive online monitoring technology in agricultural production. Agric. Commun. 2024, 2, 100029. [Google Scholar] [CrossRef]
- Feng, W.; Song, Q.; Sun, G.; Zhang, X. Lightweight Isotropic Convolutional Neural Network for Plant Disease Identification. Agronomy 2023, 13, 1849. [Google Scholar] [CrossRef]
- Nikith, B.; Keerthan, N.; Praneeth, M.; Amrita, T. Leaf disease detection and classification. Procedia Comput. Sci. 2023, 218, 291–300. [Google Scholar] [CrossRef]
- Khan, R.U.; Khan, K.; Albattah, W.; Qamar, A.M. Image-based detection of plant diseases: From classical machine learning to deep learning journey. Wirel. Commun. Mob. Comput. 2021, 2021, 5541859. [Google Scholar] [CrossRef]
- Hardham, A.R. Confocal microscopy in plant–pathogen interactions. In Plant Fungal Pathogens: Methods and Protocols; Humana Press: Totowa, NJ, USA, 2012; pp. 295–309. [Google Scholar]
- Buja, I.; Sabella, E.; Monteduro, A.G.; Chiriacò, M.S.; De Bellis, L.; Luvisi, A.; Maruccio, G. Advances in plant disease detection and monitoring: From traditional assays to in-field diagnostics. Sensors 2021, 21, 2129. [Google Scholar] [CrossRef]
- Schena, L.; Duncan, J.; Cooke, D. Development and application of a PCR-based ‘molecular tool box’ for the identification of Phytophthora species damaging forests and natural ecosystems. Plant Pathol. 2008, 57, 64–75. [Google Scholar] [CrossRef]
- Mahlein, A.K. Plant disease detection by imaging sensors–parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 2016, 100, 241–251. [Google Scholar] [CrossRef] [PubMed]
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
- Suseno, J.R.K.; Azhar, Y.; Minarno, A.E. The Implementation of Pretrained VGG16 Model for Rice Leaf Disease Classification using Image Segmentation. Kinet. Game Technol. Inf. Syst. Comput. Netw. Comput. Electron. Control 2023, 8, 499–506. [Google Scholar] [CrossRef]
- Zhao, Y.; Lin, C.; Wu, N.; Xu, X. APEIOU Integration for Enhanced YOLOV7: Achieving Efficient Plant Disease Detection. Agriculture 2024, 14, 820. [Google Scholar] [CrossRef]
- Liu, W.; Chen, Y.; Lu, Z.; Lu, X.; Wu, Z.; Zheng, Z.; Suo, Y.; Lan, C.; Yuan, X. StripeRust-Pocket: A Mobile-Based Deep Learning Application for Efficient Disease Severity Assessment of Wheat Stripe Rust. Plant Phenomics 2024, 2024, 0201. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 22–25 July 2017; pp. 2117–2125. [Google Scholar]
- Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; Li, L. Solo: Segmenting objects by locations. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XVIII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 649–665. [Google Scholar]
- Zhang, X.; Li, H.; Sun, S.; Zhang, W.; Shi, F.; Zhang, R.; Liu, Q. Classification and Identification of Apple Leaf Diseases and Insect Pests Based on Improved ResNet-50 Model. Horticulturae 2023, 9, 1046. [Google Scholar] [CrossRef]
- Lin, J.; Zhang, X.; Qin, Y.; Yang, S.; Wen, X.; Cernava, T.; Migheli, Q.; Chen, X. Local and global feature-aware dual-branch networks for plant disease recognition. Plant Phenomics 2024, 6, 0208. [Google Scholar] [CrossRef] [PubMed]
- Prashanthi, B.; Krishna, A.P.; Rao, C.M. LEViT-Leaf Disease identification and classification using an enhanced Vision transformers (ViT) model. Multimed. Tools Appl. 2025, 84, 23313–23344. [Google Scholar] [CrossRef]
- Zeng, W.; Li, H.; Hu, G.; Liang, D. Lightweight dense-scale network (LDSNet) for corn leaf disease identification. Comput. Electron. Agric. 2022, 197, 106943. [Google Scholar] [CrossRef]
- Cheng, H.; Li, H. Identification of apple leaf disease via novel attention mechanism based convolutional neural network. Front. Plant Sci. 2023, 14, 1274231. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; Bai, H.; Li, F.; Wang, D.; Zheng, Y.; Jiang, Q.; Sun, F. An apple leaf disease identification model for safeguarding apple food safety. Food Sci. Technol. 2023, 43, e104322. [Google Scholar] [CrossRef]
- Zhang, S.; Wang, D.; Yu, C. Apple leaf disease recognition method based on Siamese dilated Inception network with less training samples. Comput. Electron. Agric. 2023, 213, 108188. [Google Scholar] [CrossRef]
- Karthik, R.; Alfred, J.J.; Kennedy, J.J. Inception-based global context attention network for the classification of coffee leaf diseases. Ecol. Inform. 2023, 77, 102213. [Google Scholar] [CrossRef]
- Huang, X.; Xu, D.; Chen, Y.; Zhang, Q.; Feng, P.; Ma, Y.; Dong, Q.; Yu, F. EConv-ViT: A strongly generalized apple leaf disease classification model based on the fusion of ConvNeXt and Transformer. Inf. Process. Agric. 2025, 12, 466–477. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Li, Y.; Wang, J.; Wu, H.; Yu, Y.; Sun, H.; Zhang, H. Detection of powdery mildew on strawberry leaves based on DAC-YOLOv4 model. Comput. Electron. Agric. 2022, 202, 107418. [Google Scholar] [CrossRef]
- Rajamohanan, R.; Latha, B.C. An Optimized YOLO v5 Model for Tomato Leaf Disease Classification with Field Dataset. Eng. Technol. Appl. Sci. Res. 2023, 13, 12033–12038. [Google Scholar] [CrossRef]
- Iren, E. Comparison of yolov5 and yolov6 models for plant leaf disease detection. Eng. Technol. Appl. Sci. Res. 2024, 14, 13714–13719. [Google Scholar] [CrossRef]
- Sun, H.; Nicholaus, I.T.; Fu, R.; Kang, D.K. YOLO-FMDI: A Lightweight YOLOv8 Focusing on a Multi-Scale Feature Diffusion Interaction Neck for Tomato Pest and Disease Detection. Electronics 2024, 13, 2974. [Google Scholar] [CrossRef]
- Rehana, H.; Ibrahim, M.; Ali, M.H. Plant disease detection using region-based convolutional neural network. arXiv 2023, arXiv:2303.09063. [Google Scholar] [CrossRef]
- Kang, R.; Huang, J.; Zhou, X.; Ren, N.; Sun, S. Toward Real Scenery: A Lightweight Tomato Growth Inspection Algorithm for Leaf Disease Detection and Fruit Counting. Plant Phenomics 2024, 6, 0174. [Google Scholar] [CrossRef]
- Wang, X.; Liu, J. Detection of small targets in cucumber disease images through global information perception and feature fusion. Front. Sustain. Food Syst. 2024, 8, 1366387. [Google Scholar] [CrossRef]
- Lee, Y.S.; Patil, M.P.; Kim, J.G.; Seo, Y.B.; Ahn, D.H.; Kim, G.D. Hyperparameter Optimization for Tomato Leaf Disease Recognition Based on YOLOv11m. Plants 2025, 14, 653. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhou, G.; Chen, A.; He, M.; Li, J.; Hu, Y. A precise apple leaf diseases detection using BCTNet under unconstrained environments. Comput. Electron. Agric. 2023, 212, 108132. [Google Scholar] [CrossRef]
- Wang, S.; Xu, D.; Liang, H.; Bai, Y.; Li, X.; Zhou, J.; Su, C.; Wei, W. Advances in deep learning applications for plant disease and pest detection: A review. Remote Sens. 2025, 17, 698. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef]
- Abinaya, S.; Kumar, K.U.; Alphonse, A.S. Cascading Autoencoder with Attention Residual U-Net for Multi-Class Plant Leaf Disease Segmentation and Classification. IEEE Access 2023, 11, 98153–98170. [Google Scholar] [CrossRef]
- Deng, Y.; Xi, H.; Zhou, G.; Chen, A.; Wang, Y.; Li, L.; Hu, Y. An effective image-based tomato leaf disease segmentation method using MC-UNet. Plant Phenomics 2023, 5, 0049. [Google Scholar] [CrossRef] [PubMed]
- Yang, Y.; Wang, C.; Zhao, Q.; Li, G.; Zang, H. Se-swin unet for image segmentation of major maize foliar diseases. Eng. Agríc. 2024, 44, e20230097. [Google Scholar] [CrossRef]
- Zhang, X.; Li, D.; Liu, X.; Sun, T.; Lin, X.; Ren, Z. Research of segmentation recognition of small disease spots on apple leaves based on hybrid loss function and cbam. Front. Plant Sci. 2023, 14, 1175027. [Google Scholar] [CrossRef]
- Zhou, H.; Peng, Y.; Zhang, R.; He, Y.; Li, L.; Xiao, W. GS-DeepLabV3+: A Mountain Tea Disease Segmentation Network Based on Improved Shuffle Attention and Gated Multidimensional Feature Extraction. Crop Prot. 2024, 106762. [Google Scholar] [CrossRef]
- Zhu, S.; Ma, W.; Lu, J.; Ren, B.; Wang, C.; Wang, J. A novel approach for apple leaf disease image segmentation in complex scenes based on two-stage DeepLabv3+ with adaptive loss. Comput. Electron. Agric. 2023, 204, 107539. [Google Scholar] [CrossRef]
- Yang, R.; Guo, Y.; Hu, Z.; Gao, R.; Yang, H. Semantic segmentation of cucumber leaf disease spots based on ECA-SegFormer. Agriculture 2023, 13, 1513. [Google Scholar] [CrossRef]
- Wang, H.; Ding, J.; He, S.; Feng, C.; Zhang, C.; Fan, G.; Wu, Y.; Zhang, Y. MFBP-UNet: A network for pear leaf disease segmentation in natural agricultural environments. Plants 2023, 12, 3209. [Google Scholar] [CrossRef]
- Zhang, W.; Wang, Y.; Shen, G.; Li, C.; Li, M.; Guo, Y. Tobacco leaf segmentation based on improved mask RCNN algorithm and SAM model. IEEE Access 2023, 11, 103102–103114. [Google Scholar] [CrossRef]
- Afzaal, U.; Bhattarai, B.; Pandeya, Y.R.; Lee, J. An instance segmentation model for strawberry diseases based on mask R-CNN. Sensors 2021, 21, 6565. [Google Scholar] [CrossRef] [PubMed]
- Johnson, J.; Sharma, G.; Srinivasan, S.; Masakapalli, S.K.; Sharma, S.; Sharma, J.; Dua, V.K. Enhanced field-based detection of potato blight in complex backgrounds using deep learning. Plant Phenomics 2021, 2021, 9835724. [Google Scholar] [CrossRef] [PubMed]
- Vora, K.; Padalia, D. An ensemble of convolutional neural networks to detect foliar diseases in apple plants. arXiv 2022, arXiv:2210.00298. [Google Scholar] [CrossRef]
- Yadav, A.; Thakur, U.; Saxena, R.; Pal, V.; Bhateja, V.; Lin, J.C.W. AFD-Net: Apple Foliar Disease multi classification using deep learning on plant pathology dataset. Plant Soil 2022, 477, 595–611. [Google Scholar] [CrossRef]
- Zuo, X.; Chu, J.; Shen, J.; Sun, J. Multi-granularity feature aggregation with self-attention and spatial reasoning for fine-grained crop disease classification. Agriculture 2022, 12, 1499. [Google Scholar] [CrossRef]
- Chen, Z.; Peng, Y.; Jiao, J.; Wang, A.; Wang, L.; Lin, W.; Guo, Y. MD-Unet for tobacco leaf disease spot segmentation based on multi-scale residual dilated convolutions. Sci. Rep. 2025, 15, 2759. [Google Scholar] [CrossRef]
- Thapa, R.; Zhang, K.; Snavely, N.; Belongie, S.; Khan, A. The Plant Pathology Challenge 2020 data set to classify foliar disease of apples. Appl. Plant Sci. 2020, 8, e11390. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: A simple and strong anchor-free object detector. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1922–1933. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Neubeck, A.; Van Gool, L. Efficient non-maximum suppression. In Proceedings of the IEEE 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, 20–24 August 2006; Volume 3, pp. 850–855. [Google Scholar]
- Rosner, B.; Glynn, R.J.; Lee, M.L.T. The Wilcoxon signed rank test for paired comparisons of clustered data. Biometrics 2006, 62, 185–192. [Google Scholar] [CrossRef]
- Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common Objects in Context. arXiv 2015, arXiv:1405.0312. [Google Scholar] [CrossRef]
- GB/T 17980.124-2004; The National Standard of the People’s Republic of China: Pesticide–Guidelines for the Field Efficacy Trials–Fungicides Against Apple Leaf Spot Diseases. State Administration for Market Regulation: Beijing, China, 2004.
- GB/T 17980.30-2000; The National Standard of the People’s Republic of China: Pesticide–Guidelines for the Field Efficacy Trials (I)–Fungicides Against Cucumber Powdery Mildew. State Administration for Market Regulation: Beijing, China, 2000.
- Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9157–9166. [Google Scholar]
- Wang, X.; Zhang, R.; Kong, T.; Li, L.; Shen, C. Solov2: Dynamic and fast instance segmentation. Adv. Neural Inf. Process. Syst. 2020, 33, 17721–17732. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 7 February 2025).
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. Yolov9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616. [Google Scholar] [CrossRef]
- Wang, D.; Wang, J.; Li, W.; Guan, P. T-CNN: Trilinear convolutional neural networks model for visual detection of plant diseases. Comput. Electron. Agric. 2021, 190, 106468. [Google Scholar] [CrossRef]
| Split | Health (Images) | PM (Images) | PD (Images) | PW&PD (Images) | Total (Images) | PM (Instances) | PD (Instances) | Total (Instances) |
|---|---|---|---|---|---|---|---|---|
| Train | 200 | 200 | 200 | 200 | 800 | 8540 | 884 | 9424 |
| Val | 25 | 25 | 25 | 25 | 100 | 1098 | 81 | 1179 |
| Test | 25 | 25 | 25 | 25 | 100 | 1109 | 78 | 1187 |
| Split | Stage | Health (Img.) | Rust (Img.) | Scab (Img.) | FELS (Img.) | PM (Img.) | R&S (Img.) | R&FELS (Img.) | S&FELS (Img.) | Total (Img.) | Rust (Inst.) | Scab (Inst.) | FELS (Inst.) | PM (Inst.) | Total (Inst.) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Train | Initial | 3647 | 1460 | 3801 | 2497 | 845 | 540 | 84 | 124 | 12,998 | 16,030 | 37,915 | 17,525 | 902 | 71,470 |
| Train | Augmented | 0 | 0 | 0 | 0 | 0 | 824 | 1396 | 1344 | 3564 | 1020 | 6392 | 2467 | 0 | 9879 |
| Train | Total | 3647 | 1460 | 3801 | 2497 | 845 | 1364 | 1480 | 1468 | 16,562 | 17,050 | 44,307 | 19,992 | 902 | 81,349 |
| Val | | 456 | 183 | 475 | 312 | 106 | 68 | 10 | 16 | 1626 | 1626 | 2804 | 799 | 88 | 5229 |
| Test | | 456 | 183 | 475 | 312 | 106 | 68 | 10 | 16 | 1626 | 1626 | 2789 | 748 | 92 | 5163 |
| Severity Level | Ratio (%) (Apple) | Sample Count (Apple) | Ratio (%) (Cucumber) | Sample Count (Cucumber) |
|---|---|---|---|---|
| Level 0 | | 456 | | 25 |
| Level 1 | | 284 | | 7 |
| Level 3 | | 319 | | 6 |
| Level 5 | | 201 | | 22 |
| Level 7 | | 178 | | 19 |
| Level 9 | | 188 | | 21 |
| Model | Hamming Loss (%) ↓ | One-Error (%) ↓ | Zero-One Loss (%) ↓ | Example-Based Precision (%) ↑ | Example-Based Recall (%) ↑ | Example-Based F1 (%) ↑ | p-Value | Macro Precision (%) ↑ | Macro Recall (%) ↑ | Macro F1 (%) ↑ | Micro Precision (%) ↑ | Micro Recall (%) ↑ | Micro F1 (%) ↑ | p-Value |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLACT | 8.70 | 9.60 | 9.73 | 88.70 | 88.32 | 89.83 | | 86.79 | 86.16 | 87.36 | 87.76 | 87.85 | 87.80 | |
| FCOS | 8.41 | 8.24 | 9.29 | 89.93 | 89.94 | 90.05 | | 88.96 | 88.90 | 89.32 | 89.57 | 88.73 | 89.15 | |
| SOLOv1 | 10.65 | 11.47 | 11.78 | 85.82 | 85.91 | 87.48 | | 83.78 | 82.95 | 82.60 | 84.18 | 84.93 | 84.55 | |
| SOLOv2 | 9.11 | 8.46 | 10.49 | 86.38 | 87.61 | 88.07 | | 84.55 | 84.47 | 84.04 | 86.74 | 86.83 | 86.78 | |
| YOLOv7 | 8.39 | 8.54 | 9.02 | 90.02 | 91.04 | 91.03 | | 90.33 | 89.43 | 90.17 | 90.52 | 90.05 | 90.28 | |
| YOLOv8 | 8.30 | 8.49 | 8.98 | 91.31 | 92.88 | 92.29 | | 91.29 | 90.87 | 91.64 | 91.18 | 91.15 | 91.16 | |
| YOLOv9 | 8.27 | 8.47 | 8.89 | 92.41 | 92.96 | 92.91 | | 92.54 | 91.12 | 93.09 | 92.44 | 91.73 | 92.08 | |
| ALMDR | 7.94 | 8.46 | 8.75 | 93.19 | 94.34 | 93.74 | - | 94.40 | 92.49 | 93.66 | 93.72 | 92.35 | 93.03 | - |
| Model | Hamming Loss (%) ↓ | One-Error (%) ↓ | Zero-One Loss (%) ↓ | Example-Based Precision (%) ↑ | Example-Based Recall (%) ↑ | Example-Based F1 (%) ↑ | p-Value | Macro Precision (%) ↑ | Macro Recall (%) ↑ | Macro F1 (%) ↑ | Micro Precision (%) ↑ | Micro Recall (%) ↑ | Micro F1 (%) ↑ | p-Value |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLACT | 8.05 | 8.07 | 8.63 | 89.28 | 88.64 | 90.35 | | 87.31 | 86.51 | 88.14 | 89.39 | 88.52 | 88.95 | |
| FCOS | 8.06 | 7.94 | 8.49 | 90.18 | 90.59 | 90.98 | | 89.25 | 89.61 | 89.65 | 90.15 | 89.18 | 89.66 | |
| SOLOv1 | 9.60 | 9.32 | 9.21 | 86.62 | 86.13 | 87.86 | | 84.58 | 83.20 | 83.32 | 87.06 | 85.54 | 86.29 | |
| SOLOv2 | 8.26 | 8.96 | 9.54 | 87.08 | 87.89 | 88.59 | | 85.10 | 85.26 | 84.36 | 88.52 | 87.27 | 87.89 | |
| YOLOv7 | 8.00 | 7.77 | 8.07 | 90.42 | 91.67 | 91.34 | | 90.55 | 90.26 | 90.54 | 91.00 | 90.74 | 90.87 | |
| YOLOv8 | 7.94 | 7.69 | 7.78 | 92.09 | 93.15 | 92.84 | | 91.60 | 91.61 | 91.90 | 91.68 | 91.56 | 91.62 | |
| YOLOv9 | 7.87 | 7.44 | 7.99 | 92.77 | 93.43 | 93.60 | | 92.81 | 91.97 | 93.48 | 92.90 | 92.26 | 92.58 | |
| ALMDR | 7.59 | 7.31 | 7.90 | 93.40 | 95.16 | 94.08 | - | 94.63 | 93.37 | 94.00 | 94.29 | 92.77 | 93.52 | - |
| Model | Apple Det. mAP (%) ↑ | Apple Det. mAR (%) ↑ | Apple Seg. mAP (%) ↑ | Apple Seg. mAR (%) ↑ | Cucumber Det. mAP (%) ↑ | Cucumber Det. mAR (%) ↑ | Cucumber Seg. mAP (%) ↑ | Cucumber Seg. mAR (%) ↑ | FPS ↑ | FLOPs (G) ↓ | Parameters (M) ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLACT | 44.88 | 52.22 | 40.94 | 44.94 | 48.43 | 55.78 | 45.29 | 47.84 | 6.20 | 67.05 | 49.61 |
| FCOS | 46.19 | 52.08 | - | - | 50.18 | 55.80 | - | - | 6.14 | 70.81 | 50.96 |
| SOLOv1 | - | - | 30.91 | 40.18 | - | - | 35.70 | 42.90 | 5.86 | 125.80 | 55.07 |
| SOLOv2 | - | - | 35.05 | 41.57 | - | - | 39.73 | 45.03 | 5.83 | 125.91 | 65.36 |
| YOLOv7 | 46.83 | 54.67 | 41.66 | 44.28 | 51.95 | 58.02 | 46.45 | 47.51 | 6.02 | 230.91 | 73.36 |
| YOLOv8 | 48.50 | 55.87 | 42.46 | 45.07 | 53.49 | 59.44 | 48.03 | 48.96 | 6.08 | 200.06 | 47.75 |
| YOLOv9 | 50.11 | 56.78 | 44.31 | 46.71 | 54.46 | 60.57 | 49.99 | 49.83 | 6.23 | 145.50 | 27.40 |
| ALMDR | 51.32 | 58.71 | 45.50 | 48.10 | 55.55 | 62.61 | 50.96 | 51.89 | 6.25 | 74.78 | 45.27 |
| Model | Without MLCH | With MLCH | Det. mAP (%) ↑ | Det. mAR (%) ↑ | Seg. mAP (%) ↑ | Seg. mAR (%) ↑ | FPS ↑ | FLOPs (G) ↓ | Parameters (M) ↓ |
|---|---|---|---|---|---|---|---|---|---|
| FPN | ✓ | | 46.99 | 53.59 | 44.37 | 44.14 | 6.24 | 77.67 | 46.32 |
| PANet | ✓ | | 47.08 | 55.36 | 45.09 | 45.36 | 6.17 | 78.38 | 46.46 |
| Bi-FPN | ✓ | | 49.53 | 57.43 | 45.06 | 46.37 | 5.96 | 80.80 | 46.57 |
| GFPN | ✓ | | 50.89 | 58.05 | 45.11 | 47.55 | 6.26 | 73.78 | 45.25 |
| GFPN | | ✓ | 51.32 | 58.71 | 45.50 | 48.10 | 6.25 | 74.78 | 45.27 |
| Model | GFPN | MLCH | Det. mAP (%) ↑ | Det. mAR (%) ↑ | Seg. mAP (%) ↑ | Seg. mAR (%) ↑ | FPS ↑ | FLOPs (G) ↓ | Parameters (M) ↓ |
|---|---|---|---|---|---|---|---|---|---|
| YOLOv7 | ✗ | ✗ | 46.83 | 54.67 | 41.66 | 44.28 | 6.02 | 230.91 | 73.36 |
| YOLOv7 | ✓ | ✗ | 47.35 (↑ 0.52) | 55.26 (↑ 0.59) | 41.95 (↑ 0.29) | 44.66 (↑ 0.38) | 6.03 (↑ 0.01) | 229.35 (↓ 1.56) | 73.25 (↓ 0.11) |
| YOLOv7 | ✗ | ✓ | 47.00 (↑ 0.17) | 54.88 (↑ 0.21) | 41.77 (↑ 0.11) | 44.60 (↑ 0.32) | 6.02 (0.00) | 231.59 (↑ 0.68) | 73.41 (↑ 0.05) |
| YOLOv7 | ✓ | ✓ | 47.70 (↑ 0.87) | 55.64 (↑ 0.97) | 42.36 (↑ 0.70) | 45.18 (↑ 0.90) | 6.02 (0.00) | 230.03 (↓ 0.88) | 73.30 (↓ 0.06) |
| YOLOv8 | ✗ | ✗ | 48.50 | 55.87 | 42.46 | 45.07 | 6.08 | 200.06 | 47.75 |
| YOLOv8 | ✓ | ✗ | 48.85 (↑ 0.35) | 56.24 (↑ 0.37) | 42.93 (↑ 0.47) | 45.26 (↑ 0.19) | 6.09 (↑ 0.01) | 198.51 (↓ 1.55) | 47.64 (↓ 0.11) |
| YOLOv8 | ✗ | ✓ | 48.78 (↑ 0.28) | 56.02 (↑ 0.15) | 42.86 (↑ 0.40) | 45.17 (↑ 0.10) | 6.08 (0.00) | 200.74 (↑ 0.68) | 47.80 (↑ 0.05) |
| YOLOv8 | ✓ | ✓ | 49.33 (↑ 0.83) | 56.58 (↑ 0.71) | 43.07 (↑ 0.61) | 45.47 (↑ 0.40) | 6.08 (0.00) | 199.19 (↓ 0.87) | 47.69 (↓ 0.06) |
| YOLOv9 | ✗ | ✗ | 50.11 | 56.78 | 44.31 | 46.71 | 6.23 | 145.50 | 27.40 |
| YOLOv9 | ✓ | ✗ | 50.44 (↑ 0.33) | 57.20 (↑ 0.42) | 44.46 (↑ 0.15) | 46.93 (↑ 0.22) | 6.24 (↑ 0.01) | 143.97 (↓ 1.53) | 27.29 (↓ 0.11) |
| YOLOv9 | ✗ | ✓ | 50.35 (↑ 0.24) | 56.98 (↑ 0.20) | 44.39 (↑ 0.08) | 46.91 (↑ 0.20) | 6.23 (0.00) | 146.18 (↑ 0.68) | 27.45 (↑ 0.05) |
| YOLOv9 | ✓ | ✓ | 50.89 (↑ 0.78) | 57.50 (↑ 0.72) | 45.00 (↑ 0.69) | 47.29 (↑ 0.58) | 6.23 (0.00) | 144.64 (↓ 0.86) | 27.34 (↓ 0.06) |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Luan, P.; Guo, N.; Li, L.; Li, B.; Zhao, Z.; Ma, L.; Liu, B. Accurate and Efficient Recognition of Mixed Diseases in Apple Leaves Using a Multi-Task Learning Approach. Agriculture 2026, 16, 71. https://doi.org/10.3390/agriculture16010071