Disease-Seg: A Lightweight and Real-Time Segmentation Framework for Fruit Leaf Diseases
Abstract
1. Introduction
- We propose a lightweight, real-time leaf disease segmentation framework that fuses Transformer and CNN branches in parallel, capturing both local texture and global context and markedly reducing missed lesions.
- We design an Extended Feature Module (EFM), a Deep Multi-Scale Attention mechanism (DM-Attention), and a Feature-Weighted Fusion Module (FWFM) to expand the receptive field, enhance multi-scale feature representation, and optimize the fusion of heterogeneous features.
- The model shows strong segmentation accuracy and cross-dataset generalization on several fruit disease datasets and public benchmarks.
- The framework runs at 69 FPS on a high-performance device and averages 49 ms per image on an edge device, demonstrating practical applicability in smart-agriculture scenarios.
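The parallel CNN-Transformer fusion named in the highlights can be sketched as a toy feature-weighted combination: the outputs of two branches are blended with softmax-normalized scalar weights. This is an illustrative assumption only, not the paper's actual FWFM; the function name, feature shapes, and weighting scheme are all hypothetical.

```python
import math

def feature_weighted_fusion(cnn_feat, trans_feat, logits):
    """Blend two same-shaped feature maps (nested lists) using
    softmax-normalized scalar branch weights. A toy stand-in for a
    learned feature-weighted fusion of heterogeneous branches."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    weights = [e / sum(exps) for e in exps]   # softmax over the two branches
    return [[weights[0] * c + weights[1] * t for c, t in zip(crow, trow)]
            for crow, trow in zip(cnn_feat, trans_feat)]

cnn_feat = [[1.0, 1.0], [1.0, 1.0]]    # stand-in CNN branch output (local texture)
trans_feat = [[0.0, 0.0], [0.0, 0.0]]  # stand-in Transformer branch output (global context)
fused = feature_weighted_fusion(cnn_feat, trans_feat, [0.0, 0.0])
# equal logits -> equal 0.5/0.5 weights, so every fused value is 0.5
```

In the paper the fusion weights would be learned end-to-end and likely applied per channel or per location; scalar weights are used here only to keep the sketch minimal.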
2. Materials and Methods
2.1. Data Collection
2.2. Data Preprocessing
2.3. Disease-Seg
2.4. Extended Feature Module
2.5. Deep Multi-Scale Attention Mechanism
2.6. Feature-Weighted Fusion Module
2.7. Experimental Platform and Evaluation Metrics
3. Results
3.1. Comparative Experiment
3.2. Generalization Experiment
3.3. Attention Comparison Experiment
3.4. Ablation Experiment
3.5. Deployment Experiment
3.6. Heterogeneous Comparison Experiment
3.7. Experiments for Disease Severity Assessment
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
(Sample images omitted.)

| Crop | Disease |
|---|---|
| Pomegranate | Cercospora Spot |
| Apple | Spotted Leaf Drop Disease |
| Mango | Brown Spot |
| Grape | White Rot |
| Apple | Rust |
| Plum | Red Spot Disease |
| Pear | Black Star Disease |
| Scene | Outdoor | Outdoor | Outdoor | Outdoor | Outdoor | Indoor | Indoor |
|---|---|---|---|---|---|---|---|
| Class | Spotted Leaf Drop | Rust | White Rot | Black Star | Red Spot | Cercospora Spot | Brown Spot |
| Noon | 185 | 163 | 223 | 192 | 222 | 261 | 248 |
| Evening | 33 | 41 | 28 | 31 | 36 | — | — |
| Total | 218 | 204 | 251 | 223 | 258 | 261 | 248 |
| Category (Disease) | Species | Origin | Images | Leaf Pixels | Spot Pixels | Resolution |
|---|---|---|---|---|---|---|
| Spotted Leaf Drop | Apple | JLAU (Field) | 218 | 16.14 M | 1.3 M | 512 × 512 |
| Rust | Apple | JLAU (Field) | 204 | 16.14 M | 0.83 M | 512 × 512 |
| White Rot | Grape | JLAU (Field) | 251 | 28.61 M | 0.60 M | 512 × 512 |
| Black Star | Pear | JLAU (Field) | 223 | 11.84 M | 0.42 M | 512 × 512 |
| Red Spot | Plum | JLAU (Field) | 258 | 18.68 M | 0.54 M | 512 × 512 |
| Cercospora Spot | Pomegranate | Plant Village | 261 | 5.07 M | 0.28 M | 512 × 512 |
| Brown Spot | Mango | Plant Village | 248 | 11.09 M | 0.42 M | 512 × 512 |
| Method | Backbone | Apple Leaf IoU | Apple SLD IoU | Apple Rust IoU | Grape Leaf IoU | Grape Disease IoU | Plum Leaf IoU | Plum Disease IoU |
|---|---|---|---|---|---|---|---|---|
| DeepLabV3+ | MobileNet | 95% | 53% | 76% | 94% | 56% | 95% | 67% |
| U-Net | VGG | 96% | 62% | 81% | 96% | 62% | 95% | 70% |
| HRNetV2 | W18 | 95% | 52% | 77% | 95% | 50% | 95% | 61% |
| SegNeXt | MSCAN-T | 96% | 64% | 82% | 97% | 61% | 97% | 69% |
| DANet | ResNet-101 | 95% | 47% | 77% | 94% | 48% | 95% | 60% |
| OCRNet | HRNet-W18 | 93% | 45% | 75% | 95% | 39% | 93% | 57% |
| UPerNet | ResNet-18 | 85% | 34% | 60% | 91% | 32% | 82% | 53% |
| PSPNet | MobileNet | 95% | 40% | 71% | 94% | 36% | 94% | 34% |
| SegFormer | MiT-B3 | 96% | 58% | 82% | 96% | 59% | 96% | 73% |
| Ours | MiT-B0 | 98% | 78% | 93% | 98% | 77% | 98% | 83% |

SLD = Spotted Leaf Drop.
| Method | Backbone | Mango Leaf IoU | Mango Disease IoU | Pomegranate Leaf IoU | Pomegranate Disease IoU | Pear Leaf IoU | Pear Disease IoU |
|---|---|---|---|---|---|---|---|
| DeepLabV3+ | MobileNet | 98% | 54% | 96% | 74% | 96% | 61% |
| U-Net | VGG | 98% | 55% | 97% | 79% | 96% | 70% |
| HRNetV2 | W18 | 97% | 34% | 95% | 67% | 96% | 60% |
| SegNeXt | MSCAN-T | 98% | 53% | 96% | 78% | 97% | 71% |
| DANet | ResNet-101 | 97% | 42% | 95% | 72% | 96% | 55% |
| OCRNet | HRNet-W18 | 97% | 39% | 95% | 73% | 95% | 31% |
| UPerNet | ResNet-18 | 96% | 35% | 92% | 71% | 82% | 30% |
| PSPNet | MobileNet | 96% | 27% | 93% | 57% | 96% | 48% |
| SegFormer | MiT-B3 | 98% | 54% | 97% | 80% | 97% | 70% |
| Ours | MiT-B0 | 99% | 70% | 98% | 86% | 98% | 81% |
| Architecture | Method | Backbone | mIoU | Acc | Params (M) | FLOPs (G) | FPS |
|---|---|---|---|---|---|---|---|
| CNN-based | DeepLabV3+ | MobileNet | 78.45% | 98.64% | 5.82 | 52.96 | 72 |
| CNN-based | U-Net | VGG | 82.61% | 98.97% | 24.89 | 452.07 | 28 |
| CNN-based | HRNetV2 | W32 | 83.45% | 99.28% | 29.54 | 91.11 | 38 |
| CNN-based | PSPNet | MobileNet | 68.72% | 73.61% | 2.37 | 6.03 | 102 |
| CNN-based | DANet | ResNet-101 | 77.02% | 98.88% | 66.47 | 289.05 | 18 |
| Transformer-based | SegNeXt | MSCAN-T | 82.87% | 99.19% | 4.23 | 6.30 | 88 |
| Transformer-based | SegFormer | MiT-B3 | 82.53% | 99.14% | 44.64 | 42.53 | 38 |
| CNN-Transformer | OCRNet | HRNet-W18 | 71.42% | 98.34% | 12.08 | 53.42 | 26 |
| CNN-Transformer | UPerNet | Swin | 64.65% | 96.46% | 40.81 | 220.00 | 35 |
| CNN-Transformer | Ours | MiT-B0 | 90.32% | 99.52% | 4.78 | 16.25 | 69 |
| Architecture | Method | Backbone | mIoU | Acc | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|
| CNN-based | DeepLabV3+ | MobileNet | 87.71% | 97.49% | 5.82 | 13.23 |
| CNN-based | U-Net | VGG | 86.54% | 97.44% | 24.89 | 112.96 |
| CNN-based | HRNetV2 | W32 | 89.23% | 98.18% | 29.54 | 22.75 |
| CNN-based | PSPNet | MobileNet | 74.81% | 96.18% | 2.37 | 2.15 |
| CNN-based | DANet | ResNet-101 | 28.80% | 70.68% | 66.47 | 72.26 |
| Transformer-based | SegNeXt | MSCAN-T | 62.35% | 97.14% | 4.23 | 1.56 |
| Transformer-based | SegFormer | MiT-B3 | 76.51% | 96.77% | 44.64 | 10.63 |
| CNN-Transformer | OCRNet | HRNet-W18 | 38.28% | 87.11% | 12.08 | 13.38 |
| CNN-Transformer | UPerNet | Swin | 60.45% | 93.70% | 40.81 | 55.02 |
| CNN-Transformer | Ours | MiT-B0 | 91.19% | 98.30% | 4.78 | 4.05 |
| Method | mIoU | mPA | Acc |
|---|---|---|---|
| SE-Attention | 82.57% | 87.57% | 99.04% |
| CBAM-Attention | 84.52% | 88.83% | 99.28% |
| COT-Attention | 71.28% | 78.89% | 96.20% |
| SK-Attention | 72.84% | 79.61% | 96.98% |
| Triplet-Attention | 69.39% | 76.87% | 95.85% |
| Global Context | 67.51% | 75.46% | 94.94% |
| Base | 81.70% | 87.90% | 99.09% |
| DM-Attention | 85.98% | 91.31% | 99.11% |
| Test | Base | DM | EFM | Fusion | mIoU (%) | mPA (%) | Acc (%) |
|---|---|---|---|---|---|---|---|
| Test1 | ✓ | - | - | - | 81.70 | 87.90 | 99.09 |
| Test2 | ✓ | ✓ | - | - | 85.98 | 91.31 | 99.11 |
| Test3 | ✓ | - | ✓ | - | 86.27 | 90.28 | 99.28 |
| Test4 | ✓ | ✓ | ✓ | - | 87.93 | 91.87 | 99.44 |
| Test5 | ✓ | - | ✓ | ✓ | 87.21 | 91.55 | 99.38 |
| Test6 | ✓ | ✓ | ✓ | ✓ | 90.32 | 93.65 | 99.52 |
| Method | Backbone | Inference Time (ms) |
|---|---|---|
| DeepLabV3+ | MobileNet | 52 |
| U-Net | VGG | 129 |
| HRNetV2 | W32 | 54 |
| DANet | ResNet-101 | 150 |
| OCRNet | HRNet-W18 | 70 |
| UPerNet | Swin | 108 |
| PSPNet | MobileNet | 22 |
| SegFormer | MiT-B3 | 105 |
| Ours | MiT-B0 | 49 |
| Stage | mIoU | mPA | Acc | Params | FLOPs |
|---|---|---|---|---|---|
| Stage1 | 90.32% | 93.65% | 99.52% | 4.78 M | 16.25 G |
| Stage2 | 87.50% | 91.12% | 99.21% | 4.87 M | 67.71 G |
| Stage3 | 88.16% | 91.70% | 99.31% | 5.49 M | 73.27 G |
| Stage4 | 88.66% | 92.10% | 99.33% | 7.08 M | 77.05 G |
(Original and visualized images omitted.)

| Sample | Background (px, %) | Leaf (px, %) | Disease Spots (px, %) | Disease Proportion | Level |
|---|---|---|---|---|---|
| 1 | 165,758 (63.23%) | 90,249 (34.43%) | 6,137 (2.34%) | 6.36% | Level 2 |
| 2 | 160,851 (61.36%) | 95,017 (36.25%) | 6,276 (2.39%) | 6.19% | Level 2 |
| 3 | 176,920 (67.49%) | 78,818 (30.07%) | 6,406 (2.44%) | 7.51% | Level 2 |
| 4 | 132,128 (50.40%) | 129,531 (49.41%) | 485 (0.19%) | 0.37% | Level 1 |
| 5 | 145,951 (55.68%) | 115,628 (44.11%) | 565 (0.22%) | 0.48% | Level 1 |
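The table's "Disease Proportion" column is consistent with the lesion share of the segmented leaf area, i.e. spot pixels divided by (leaf + spot) pixels. A minimal sketch of that computation follows; the grading bands are illustrative assumptions (the paper grades severity against a field-trial standard whose exact thresholds are not reproduced here), and the function names are hypothetical.

```python
# Hypothetical severity bands: (upper bound of lesion proportion in %, level).
# These thresholds are illustrative, not the standard's actual values.
LEVELS = [(5.0, 1), (10.0, 2), (25.0, 3), (50.0, 4), (100.0, 5)]

def disease_proportion(leaf_px, spot_px):
    """Percentage of the segmented leaf area covered by lesions."""
    return 100.0 * spot_px / (leaf_px + spot_px)

def severity_level(proportion):
    """Map a lesion proportion (%) to a severity level via the bands above."""
    for upper, level in LEVELS:
        if proportion <= upper:
            return level
    return LEVELS[-1][1]

# First sample from the table: 90,249 leaf px, 6,137 spot px.
p = disease_proportion(90_249, 6_137)
# p is about 6.4%, which falls in the Level 2 band above,
# matching the table's Level 2 assignment.
```

Note that with this definition the background pixel count does not affect the proportion; only the leaf and lesion masks produced by the segmentation model matter.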
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Cao, L.; Jiang, D.; Wang, Y.; Cao, J.; Liu, Z.; Li, J.; Si, X.; Du, W. Disease-Seg: A Lightweight and Real-Time Segmentation Framework for Fruit Leaf Diseases. Agronomy 2026, 16, 311. https://doi.org/10.3390/agronomy16030311