SLD-YOLO11: A Topology-Reconstructed Lightweight Detector for Fine-Grained Maize–Weed Discrimination in Complex Field Environments
Abstract
1. Introduction
- (1) Information-preserving lightweight perception for tiny and occluded seedlings. We redesign the downsampling topology using Space-to-Depth Convolution (SPD-Conv) to reduce feature truncation during encoding, improving sensitivity to minute weed seedlings and preserving discriminative cues under severe leaf occlusion.
- (2) Long-range contextual modeling. We introduce an efficient Decomposed Large-Kernel Attention (D-LKA) mechanism to enhance global contextual perception, enabling more reliable separation of crops and weeds when local appearance is ambiguous, without imposing heavy computational costs.
- (3) Perception-to-decision integration via an interpretable weed-pressure index. Beyond bounding-box detection, we propose the Visual Weed–Crop Competition Index (VWCI) to convert detection outputs into a low-cost, quantitative indicator of weed pressure, supporting variable-rate spraying decisions without relying on computationally expensive pixel-level segmentation.
- (4) Comprehensive validation from accuracy to agronomic applicability. We evaluate the proposed framework through extensive comparisons and ablations, demonstrating its practical potential for decision-making.
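The exact VWCI formulation is defined in Section 2.4; as an illustrative sketch only, the snippet below assumes a simple area-ratio form in which the morphological fill factor α corrects raw bounding-box areas toward the effective biological coverage S_eff, and the index is the weed share of total corrected plant coverage. The function names, the α value, and the ratio form itself are hypothetical stand-ins, not the paper's formula.

```python
# Illustrative weed-pressure sketch from detector output (ASSUMED form,
# not the VWCI definition from Section 2.4). alpha and the ratio shape
# are hypothetical; boxes are (x1, y1, x2, y2) in pixels.

def box_area(box):
    """Area of an axis-aligned bounding box, clamped to be non-negative."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def vwci(weed_boxes, maize_boxes, alpha=0.6):
    """Weed share of morphologically corrected coverage, in [0, 1]."""
    s_weed = alpha * sum(box_area(b) for b in weed_boxes)    # ~ S_eff for weeds
    s_maize = alpha * sum(box_area(b) for b in maize_boxes)  # ~ S_eff for maize
    total = s_weed + s_maize
    return s_weed / total if total > 0 else 0.0

# Example: two small weed seedlings next to one maize plant.
weeds = [(10, 10, 30, 30), (50, 50, 70, 70)]   # 400 px^2 each
maize = [(100, 100, 180, 180)]                  # 6400 px^2
print(round(vwci(weeds, maize), 3))             # → 0.111
```

A ratio of this kind is what lets detection output drive a thresholded variable-rate spraying decision without pixel-level segmentation.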
2. Materials and Methods
2.1. Dataset Preparation and Characteristics
2.2. Data Collection and Processing
- (1) Ground Truth Annotation: This study employed the open-source image annotation tool LabelImg to manually annotate the collected raw images. Following the Pascal VOC dataset format standard, objects in the images were classified into two categories: “Maize” and “Weed”. During annotation, bounding boxes were tightly fitted around the visible portions of target plants to minimize background noise interference. Upon completion, the tool generated a corresponding XML file for each image, recording the image filename, dimensional parameters (height, width, channel count), and the category label and positional coordinates of each target object, providing precise ground truth for subsequent supervised learning.
- (2) Dataset Partitioning: To objectively evaluate the model’s generalization performance on unseen data, this study employed stratified random sampling to partition the dataset into training and test sets. Specifically, 70% of the data was used for model training and parameter fine-tuning, while the remaining 30% formed an independent test set used solely for final performance assessment. Stratification keeps the class distribution consistent between the training and test sets.
- (3) Data Augmentation Strategy: Deep learning models rely heavily on large training datasets, and field weed samples vary widely in morphology and growth stage, so training on the raw data alone can easily lead to overfitting. Therefore, this study employed data augmentation techniques to expand the training set [51]. The dataset was augmented through methods including flipping, adding Gaussian noise, and contrast enhancement, as illustrated in Figure 2.
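The VOC-style XML files described in item (1) can be read with the standard library alone. The snippet below parses a hand-made example annotation (the filename and coordinates are illustrative, not from the actual dataset) into a list of labeled boxes:

```python
# Minimal sketch of reading a Pascal VOC-style XML annotation such as
# LabelImg writes: filename, image size, and one <object> per bounding box.
# The XML string is a hand-made example, not a file from the dataset.
import xml.etree.ElementTree as ET

VOC_XML = """<annotation>
  <filename>field_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>Maize</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>260</xmax><ymax>300</ymax></bndbox>
  </object>
  <object>
    <name>Weed</name>
    <bndbox><xmin>400</xmin><ymin>350</ymin><xmax>430</xmax><ymax>390</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return (filename, [(label, x1, y1, x2, y2), ...])."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(b.findtext("xmin")), int(b.findtext("ymin")),
            int(b.findtext("xmax")), int(b.findtext("ymax")),
        ))
    return root.findtext("filename"), boxes

name, boxes = parse_voc(VOC_XML)
print(name, [b[0] for b in boxes])  # → field_0001.jpg ['Maize', 'Weed']
```

For training a YOLO-family model these corner coordinates would then be converted to normalized center/width/height form.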
2.3. Methodology
2.3.1. SLD-YOLO11 Network Architecture Overview
- (1)
- (2) Global Structural Enhancement: At the deep feature stage, the large-kernel attention operator is introduced to weight features, as shown in Equation (2):
- (3) Content-Aware Reconstruction: During the upsampling phase in the neck, the dynamic operator replaces static interpolation, as shown in Equation (3):

2.3.2. Lossless Downsampling Topology Reconstruction
2.3.3. Decomposed Large Kernel Attention, D-LKA
2.3.4. Dynamic Flow-Field Upsampling
2.4. Visual Weed-Crop Competition Index (VWCI) Modeling
2.5. Training Model
2.6. Evaluation Indicators and Statistical Analysis
3. Results and Discussion
3.1. Model Training
3.2. Comparative Performance Evaluation
3.3. Ablation Study
3.4. Robustness and Generalization Analysis
3.4.1. Qualitative Analysis in Complex Field Scenarios
3.4.2. Generalization Capability Analysis
3.5. Agronomic Application Evaluation
3.5.1. Reliability Verification of VWCI
3.5.2. Variable Rate Spraying Decision Framework
3.6. Discussion
3.6.1. Mechanism of Performance Improvement
- (1) SPD-Conv: it mitigates the disappearance of features for tiny seedlings via “information-preserving downsampling”. A longstanding core bottleneck in small-object detection is that repeated stride-based downsampling causes instances with an extremely low target pixel proportion to be irreversibly weakened or even lost in deep features. This issue has been systematically discussed in reviews on small-object detection, and it is also a major motivation for multi-scale architectures, which alleviate the loss of small-object information through semantic representations at different resolutions [62]. In our scenario, early-stage weed seedlings often occupy only a few pixels, and the information truncation introduced by striding becomes more pronounced under leaf occlusion and background texture interference. To address this problem, SPD-Conv rearranges local spatial neighborhoods into the channel dimension, reducing spatial resolution while avoiding, as far as possible, the loss induced by stride sampling. This is consistent with the commonly used Focus strategy—YOLOv5, for example, downsamples via slice-and-concatenate rather than discarding pixels—and with the idea that pixel rearrangement enables lossless information transfer between the spatial and channel domains [63,64]. The improvement brought by SPD-Conv in Table 2 therefore stems from its ability to preserve and strengthen the microscopic cues that distinguish weeds, such as thin-leaf edges, local gaps, and the shapes of partially missing leaves after occlusion, improving the detectability of tiny weeds without significantly increasing backbone complexity.
- (2) D-LKA: it enhances long-range contextual reasoning to handle “green-on-green” similarity and occlusion ambiguity. When crops and weeds are highly similar in local appearance or only partially visible, a local receptive field is often insufficient to support stable discrimination; the model must leverage broader context to determine whether a fragment belongs to a maize plant with row–column structure or to a weed cluster with a more irregular spatial distribution. This demand is consistent with findings that modeling long-range dependencies improves the robustness of visual recognition under complex visibility conditions [62]. Large-kernel attention was proposed to capture long-range correlations with controllable computational overhead and has been used to build effective detection methods [54]. Meanwhile, work on large-kernel convolutional network design has shown that enlarging convolution kernels substantially expands the effective receptive field, biasing representations toward shape and structural cues, which is particularly beneficial when texture cues are unstable [65]. In this work, D-LKA achieves a “large-kernel” effect while keeping the cost under control, more effectively resolving ambiguities caused by insufficient local evidence, thereby improving recall under occlusion and boosting the overall mAP.
- (3) DySample: it reduces boundary jaggedness and spatial drift during feature fusion via content-adaptive upsampling. Beyond the encoding stage, accurate detection under occlusion and at small scales also depends on whether fine structures can be reliably reconstructed during multi-scale fusion in the neck. Conventional nearest-neighbor or bilinear interpolation is a geometrically fixed upsampling scheme; under leaf overlap or highly fragmented boundaries, it readily introduces jagged edges, blurred details, or feature misalignment, undermining localization stability. Recent studies on content-aware upsampling have shown that adaptively modeling sampling locations or reassembly kernels conditioned on content can improve dense prediction and structural restoration quality at relatively low cost [66]. DySample formulates upsampling as efficient point sampling and learns offsets, avoiding the heavier computation of dynamic convolution while still enabling content-guided reconstruction. The consistent gains brought by DySample indicate that it achieves better feature alignment and contour recovery during pyramid fusion, which is particularly critical for slender, fragmented, or partially occluded weed instances, thereby improving the localization accuracy and robustness of the detection head.
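The space-to-depth rearrangement at the heart of SPD-Conv can be illustrated concretely. The toy below operates on a single-channel grid of nested lists purely for clarity; the real layer acts on (N, C, H, W) tensors and is followed by a stride-1 convolution that mixes the new channels:

```python
# Space-to-depth sketch: a stride-2 "downsampling" that moves each 2x2
# spatial neighborhood into the channel dimension instead of discarding
# pixels, so no information is lost by the resolution reduction itself.

def space_to_depth(x, s=2):
    """(H, W) grid -> (H/s, W/s) grid of s*s-element channel tuples."""
    h, w = len(x), len(x[0])
    assert h % s == 0 and w % s == 0, "spatial dims must be divisible by s"
    return [
        [tuple(x[i + di][j + dj] for di in range(s) for dj in range(s))
         for j in range(0, w, s)]
        for i in range(0, h, s)
    ]

grid = [[ 1,  2,  3,  4],
        [ 5,  6,  7,  8],
        [ 9, 10, 11, 12],
        [13, 14, 15, 16]]
out = space_to_depth(grid)
print(out[0][0])  # → (1, 2, 5, 6): the top-left 2x2 block, now one "pixel"
```

Every input value survives in the output, which is the contrast with a stride-2 convolution or pooling step that keeps at most one value per 2x2 neighborhood.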
3.6.2. Limitations and Error Analysis
3.6.3. Future Work
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Nomenclature
| Symbol/Abbreviation | Definition |
|---|---|
| SPD-Conv | Space-to-Depth Convolution |
| D-LKA | Decomposed Large Kernel Attention |
| DySample | Dynamic sampling-based upsampling operator |
| VWCI | Visual Weed–Crop Competition Index |
| SSWM | Site-Specific Weed Management |
| mAP@0.5 | Mean Average Precision at IoU threshold 0.5 |
| mAP@0.5:0.95 | Mean Average Precision averaged over IoU thresholds from 0.5 to 0.95 |
| P | Precision |
| R | Recall |
| TP | True Positives (correctly detected targets) |
| FP | False Positives (predicted positive but actually negative) |
| FN | False Negatives (missed positives) |
| FLOPs | Floating Point Operations (measure of computational cost) |
| FPS | Frames Per Second (measure of inference speed) |
| α | Morphological fill factor used in VWCI |
| S_eff | Effective biological coverage area after morphological correction |
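The metrics listed above follow their standard definitions from the TP/FP/FN counts. A quick sketch (the counts are made-up illustrative numbers, not results from this study):

```python
# Standard detection metrics from confusion counts; the counts below are
# illustrative only, not results from the paper.
def precision(tp, fp):
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Fraction of actual positives that were detected."""
    return tp / (tp + fn) if tp + fn else 0.0

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if p + r else 0.0

tp, fp, fn = 90, 10, 6
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 3), round(r, 3), round(f1(p, r), 3))  # → 0.9 0.938 0.918
```

mAP@0.5 extends this by averaging precision over the recall curve per class at an IoU threshold of 0.5, then averaging over classes; mAP@0.5:0.95 additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.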
References
- Fathi, A.; Zeidali, E. Conservation Tillage and Nitrogen Fertilizer: A Review of Corn Growth and Yield and Weed Management. Cent. Asian J. Plant Sci. Innov. 2021, 1, 121–142. [Google Scholar] [CrossRef]
- Gao, W.-T.; Su, W.-H. Weed Management Methods for Herbaceous Field Crops: A Review. Agronomy 2024, 14, 486. [Google Scholar] [CrossRef]
- Cui, J.; Tan, F.; Bai, N.; Fu, Y. Improving U-Net Network for Semantic Segmentation of Corns and Weeds during Corn Seedling Stage in Field. Front. Plant Sci. 2024, 15, 1344958. [Google Scholar] [CrossRef] [PubMed]
- Venkataraju, A.; Arumugam, D.; Stepan, C.; Kiran, R.; Peters, T. A Review of Machine Learning Techniques for Identifying Weeds in Corn. Smart Agric. Technol. 2023, 3, 100102. [Google Scholar] [CrossRef]
- Tao, T.; Wei, X. A Hybrid CNN–SVM Classifier for Weed Recognition in Winter Rape Field. Plant Methods 2022, 18, 29. [Google Scholar] [CrossRef]
- De Villiers, C.; Munghemezulu, C.; Chirima, G.J.; Tesfamichael, S.G.; Mashaba-Munghemezulu, Z. Crop Classification and Weed Detection in Rainfed Maize Crops Using UAV and PlanetScope Imagery. Abstr. ICA 2023, 6, 50. [Google Scholar] [CrossRef]
- Horvath, D.P.; Clay, S.A.; Swanton, C.J.; Anderson, J.V.; Chao, W.S. Weed-Induced Crop Yield Loss: A New Paradigm and New Challenges. Trends Plant Sci. 2023, 28, 567–582. [Google Scholar] [CrossRef]
- Idziak, R.; Waligóra, H.; Szuba, V. The Influence of Agronomical and Chemical Weed Control on Weeds of Corn. J. Plant Prot. Res. 2023, 62, 215–222. [Google Scholar] [CrossRef]
- Casimero, M.; Abit, M.J.; Ramirez, A.H.; Dimaano, N.G.; Mendoza, J. Herbicide Use History and Weed Management in Southeast Asia. Adv. Weed Sci. 2022, 40, e020220054. [Google Scholar] [CrossRef] [PubMed]
- Lu, C.; Yu, Z.; Hennessy, D.A.; Feng, H.; Tian, H.; Hui, D. Emerging Weed Resistance Increases Tillage Intensity and Greenhouse Gas Emissions in the US Corn–Soybean Cropping System. Nat. Food 2022, 3, 266–274. [Google Scholar] [CrossRef]
- Gerhards, R.; Andújar Sanchez, D.; Hamouz, P.; Peteinatos, G.G.; Christensen, S.; Fernandez-Quintanilla, C. Advances in Site-specific Weed Management in Agriculture—A Review. Weed Res. 2022, 62, 123–133. [Google Scholar] [CrossRef]
- Qu, H.-R.; Su, W.-H. Deep Learning-Based Weed–Crop Recognition for Smart Agricultural Equipment: A Review. Agronomy 2024, 14, 363. [Google Scholar] [CrossRef]
- Mkhize, Y.; Madonsela, S.; Cho, M.; Nondlazi, B.; Main, R.; Ramoelo, A. Mapping Weed Infestation in Maize Fields Using Sentinel-2 Data. Phys. Chem. Earth Parts A/B/C 2024, 134, 103571. [Google Scholar] [CrossRef]
- Ahmad, A.; Saraswat, D.; Aggarwal, V.; Etienne, A.; Hancock, B. Performance of Deep Learning Models for Classifying and Detecting Common Weeds in Corn and Soybean Production Systems. Comput. Electron. Agric. 2021, 184, 106081. [Google Scholar] [CrossRef]
- Dastres, E.; Esmaeili, H.; Edalat, M. Species Distribution Modeling of Malva Neglecta Wallr. Weed Using Ten Different Machine Learning Algorithms: An Approach to Site-Specific Weed Management (SSWM). Eur. J. Agron. 2025, 167, 127579. [Google Scholar] [CrossRef]
- Hasan, A.S.M.M.; Diepeveen, D.; Laga, H.; Jones, M.G.K.; Sohel, F. Object-Level Benchmark for Deep Learning-Based Detection and Classification of Weed Species. Crop Prot. 2024, 177, 106561. [Google Scholar] [CrossRef]
- Wessner, R.N.; Frozza, R.; Duarte Da Silva Bagatini, D.; Molz, R.F. Recognition of Weeds in Corn Crops: System with Convolutional Neural Networks. J. Agric. Food Res. 2023, 14, 100669. [Google Scholar] [CrossRef]
- Picon, A.; San-Emeterio, M.G.; Bereciartua-Perez, A.; Klukas, C.; Eggers, T.; Navarra-Mestre, R. Deep Learning-Based Segmentation of Multiple Species of Weeds and Corn Crop Using Synthetic and Real Image Datasets. Comput. Electron. Agric. 2022, 194, 106719. [Google Scholar] [CrossRef]
- Andrea, C.-C.; Mauricio Daniel, B.B.; Jose Misael, J.B. Precise Weed and Maize Classification through Convolutional Neuronal Networks. In Proceedings of the 2017 IEEE Second Ecuador Technical Chapters Meeting (ETCM), Salinas, Ecuador, 16–20 October 2017; pp. 1–6. [Google Scholar]
- Peteinatos, G.G.; Reichel, P.; Karouta, J.; Andújar, D.; Gerhards, R. Weed Identification in Maize, Sunflower, and Potatoes with the Aid of Convolutional Neural Networks. Remote Sens. 2020, 12, 4185. [Google Scholar] [CrossRef]
- García-Navarrete, O.L.; Santamaria, O.; Martín-Ramos, P.; Valenzuela-Mahecha, M.Á.; Navas-Gracia, L.M. Development of a Detection System for Types of Weeds in Maize (Zea mays L.) under Greenhouse Conditions Using the YOLOv5 v7.0 Model. Agriculture 2024, 14, 286. [Google Scholar] [CrossRef]
- Wang, B.; Yan, Y.; Lan, Y.; Wang, M.; Bian, Z. Accurate Detection and Precision Spraying of Corn and Weeds Using the Improved YOLOv5 Model. IEEE Access 2023, 11, 29868–29882. [Google Scholar] [CrossRef]
- Jia, Z.; Zhang, M.; Yuan, C.; Liu, Q.; Liu, H.; Qiu, X.; Zhao, W.; Shi, J. ADL-YOLOv8: A Field Crop Weed Detection Model Based on Improved YOLOv8. Agronomy 2024, 14, 2355. [Google Scholar] [CrossRef]
- Liu, H.; Hou, Y.; Zhang, J.; Zheng, P.; Hou, S. Research on Weed Reverse Detection Methods Based on Improved You Only Look Once (YOLO) v8: Preliminary Results. Agronomy 2024, 14, 1667. [Google Scholar] [CrossRef]
- Kharismawati, D.E.; Kazic, T. Maize Seedling Detection Dataset (MSDD): A Curated High-Resolution RGB Dataset for Seedling Maize Detection and Benchmarking with YOLOv9, YOLO11, YOLOv12 and Faster-RCNN. arXiv 2025, arXiv:2509.15181. [Google Scholar]
- Sharma, A.; Kumar, V.; Longchamps, L. Comparative Performance of YOLOv8, YOLOv9, YOLOv10, YOLOv11 and Faster R-CNN Models for Detection of Multiple Weed Species. Smart Agric. Technol. 2024, 9, 100648. [Google Scholar] [CrossRef]
- Zheng, L.; Yi, J.; He, P.; Tie, J.; Zhang, Y.; Wu, W.; Long, L. Improvement of the YOLOv8 Model in the Optimization of the Weed Recognition Algorithm in Cotton Field. Plants 2024, 13, 1843. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Sun, R.; Xu, Y. Design and Optimization of a New Corn–Weed Detection Model with YOLOv8–GAS Based on Artificial Intelligence. J. Real Time Image Proc. 2025, 22, 167. [Google Scholar] [CrossRef]
- Saltık, A.O.; Allmendinger, A.; Stein, A. Comparative Analysis of YOLOv9, YOLOv10 and RT-DETR for Real-Time Weed Detection. In Computer Vision—ECCV 2024 Workshops; Del Bue, A., Canton, C., Pont-Tuset, J., Tommasi, T., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2025; Volume 15625, pp. 177–193. ISBN 978-3-031-91834-6. [Google Scholar]
- Alkhammash, E.H. Multi-Classification Using YOLOv11 and Hybrid YOLO11n-MobileNet Models: A Fire Classes Case Study. Fire 2025, 8, 17. [Google Scholar] [CrossRef]
- Liao, Y.; Li, L.; Xiao, H.; Xu, F.; Shan, B.; Yin, H. YOLO-MECD: Citrus Detection Algorithm Based on YOLOv11. Agronomy 2025, 15, 687. [Google Scholar] [CrossRef]
- Ma, X.; Hao, Z.; Liu, S.; Li, J. Walnut Surface Defect Classification and Detection Model Based on Enhanced YOLO11n. Agriculture 2025, 15, 1707. [Google Scholar] [CrossRef]
- Upadhyay, A.; Sunil, G.C.; Das, S.; Mettler, J.; Howatt, K.; Sun, X. Multiclass Weed and Crop Detection Using Optimized YOLO Models on Edge Devices. J. Agric. Food Res. 2025, 22, 102144. [Google Scholar] [CrossRef]
- Wang, J.; Li, W. YOLO-Weed Nano: A Lightweight Weed Detection Algorithm Based on Improved YOLOv8n for Cotton Field Applications. Sci. Rep. 2025, 14, 84748. [Google Scholar] [CrossRef]
- Li, Y.; Guo, Z.; Sun, Y.; Chen, X.; Cao, Y. Weed Detection Algorithms in Rice Fields Based on Improved YOLOv10n. Agriculture 2024, 14, 2066. [Google Scholar] [CrossRef]
- Allmendinger, A.; Spaeth, M.; Saile, M.; Peteinatos, G.G.; Gerhards, R. Precision Chemical Weed Management Strategies: A Review and a Design of a New CNN-Based Modular Spot Sprayer. Agronomy 2022, 12, 1620. [Google Scholar] [CrossRef]
- Hasan, A.S.M.M.; Diepeveen, D.; Laga, H.; Jones, M.G.K.; Muzahid, A.A.M.; Sohel, F. Morphology-Based Weed Type Recognition Using Siamese Network. Eur. J. Agron. 2025, 163, 127439. [Google Scholar] [CrossRef]
- Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A Survey and Performance Evaluation of Deep Learning Methods for Small Object Detection. Expert Syst. Appl. 2021, 172, 114602. [Google Scholar] [CrossRef]
- Yang, S.; Lin, J.; Cernava, T.; Chen, X.; Zhang, X. WeedDETR: An Efficient and Accurate Detection Method for Detecting Small-Target Weeds in UAV Images. Weed Sci. 2025, 73, e84. [Google Scholar] [CrossRef]
- Jia, F. Occlusion Target Recognition Algorithm Based on Improved YOLOv4. J. Comput. Methods Sci. Eng. 2024, 24, 3799–3811. [Google Scholar] [CrossRef]
- Shang, Q.; Zhang, J.; Yan, G.; Hong, L.; Zhang, R.; Li, W.; Xia, H. Target Tracking Algorithm Based on Occlusion Prediction. Displays 2023, 79, 102481. [Google Scholar] [CrossRef]
- Dheeraj, A.; Chand, S. Deep Learning Based Weed Classification in Corn Using Improved Attention Mechanism Empowered by Explainable AI Techniques. Crop Prot. 2025, 190, 107058. [Google Scholar] [CrossRef]
- Veeragandham, S.; Santhi, H. Effectiveness of Convolutional Layers in Pre-Trained Models for Classifying Common Weeds in Groundnut and Corn Crops. Comput. Electr. Eng. 2022, 103, 108315. [Google Scholar] [CrossRef]
- Mesías-Ruiz, G.A.; Borra-Serrano, I.; Peña, J.M.; De Castro, A.I.; Fernández-Quintanilla, C.; Dorado, J. Weed Species Classification with UAV Imagery and Standard CNN Models: Assessing the Frontiers of Training and Inference Phases. Crop Prot. 2024, 182, 106721. [Google Scholar] [CrossRef]
- Peng, G.; Wang, K.; Ma, J.; Cui, B.; Wang, D. AGRI-YOLO: A Lightweight Model for Corn Weed Detection with Enhanced YOLO V11n. Agriculture 2025, 15, 1971. [Google Scholar] [CrossRef]
- Li, X.; Qin, Y.; Wang, F.; Guo, F.; Yeow, J.T.W. Pitaya Detection in Orchards Using the MobileNet-YOLO Model. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 6274–6278. [Google Scholar]
- Yu, H.; Che, M.; Yu, H.; Zhang, J. Development of Weed Detection Method in Soybean Fields Utilizing Improved DeepLabv3+ Platform. Agronomy 2022, 12, 2889. [Google Scholar] [CrossRef]
- Zou, K.; Chen, X.; Zhang, F.; Zhou, H.; Zhang, C. A Field Weed Density Evaluation Method Based on UAV Imaging and Modified U-Net. Remote Sens. 2021, 13, 310. [Google Scholar] [CrossRef]
- Gegen. YOLO Dataset for Corn Weed Recognition. 2026. Available online: https://zenodo.org/records/18285884 (accessed on 20 January 2026).
- Xiao, L.; Wang, X. Interseedling Weed Detection Model of Maize Based on Improved YOLO Algorithm. J. Agric. Mech. Res. 2025, 47, 10–16. [Google Scholar] [CrossRef]
- Zhou, H.; Su, Y.; Chen, J.; Li, J.; Ma, L.; Liu, X.; Lu, S.; Wu, Q. Maize Leaf Disease Recognition Based on Improved Convolutional Neural Network ShuffleNetV2. Plants 2024, 13, 1621. [Google Scholar] [CrossRef]
- Zhou, Q.; Chai, B.; Tang, C.; Guo, Y.; Wang, K.; Nie, X.; Ye, Y. Enhanced YOLOv8 with DWR-DRB and SPD-Conv for Mechanical Wear Fault Diagnosis in Aero-Engines. Sensors 2025, 25, 5294. [Google Scholar] [CrossRef]
- Gu, Z.; Zhu, K.; You, S. YOLO-SSFS: A Method Combining SPD-Conv/STDL/IM-FPN/SIoU for Outdoor Small Target Vehicle Detection. Electronics 2023, 12, 3744. [Google Scholar] [CrossRef]
- Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; Hu, S.-M. Visual Attention Network. Comput. Vis. Media 2023, 9, 733–752. [Google Scholar] [CrossRef]
- Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to Upsample by Learning to Sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 6027–6037. [Google Scholar]
- Dai, X.; Chen, Y.; Yang, J.; Zhang, P.; Yuan, L.; Zhang, L. Dynamic DETR: End-to-End Object Detection with Dynamic Attention. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 2968–2977. [Google Scholar]
- Mi, B.; Cheng, W. Intelligent Aerodynamic Modelling Method for Steady/Unsteady Flow Fields of Airfoils Driven by Flow Field Images Based on Modified U-Net Neural Network. Eng. Appl. Comput. Fluid Mech. 2025, 19, 2440075. [Google Scholar] [CrossRef]
- Shafiee Sarvestani, G.; Edalat, M.; Shirzadifar, A.; Pourghasemi, H.R. Early Season Dominant Weed Mapping in Maize Field Using Unmanned Aerial Vehicle (Uav) Imagery: Towards Developing Prescription Map. Smart Agric. Technol. 2025, 11, 100956. [Google Scholar] [CrossRef]
- Chen, P.; Xu, W.; Zhan, Y.; Yang, W.; Wang, J.; Lan, Y. Evaluation of Cotton Defoliation Rate and Establishment of Spray Prescription Map Using Remote Sensing Imagery. Remote Sens. 2022, 14, 4206. [Google Scholar] [CrossRef]
- Rovira-Más, F.; Saiz-Rubio, V.; Cuenca, A.; Ortiz, C.; Teruel, M.P.; Ortí, E. Open-Format Prescription Maps for Variable Rate Spraying in Orchard Farming. J. ASABE 2024, 67, 243–257. [Google Scholar] [CrossRef]
- Rainio, O.; Teuho, J.; Klén, R. Evaluation Metrics and Statistical Tests for Machine Learning. Sci. Rep. 2024, 14, 6086. [Google Scholar] [CrossRef]
- Lin, T.-Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 22–25 July 2017; pp. 936–944. [Google Scholar]
- Sun, B.; Zhang, Y.; Jiang, S.; Fu, Y. Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution. Proc. AAAI Conf. Artif. Intell. 2023, 37, 2375–2383. [Google Scholar] [CrossRef]
- Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Ding, X.; Zhang, X.; Han, J.; Ding, G. Scaling Up Your Kernels to 31 × 31: Revisiting Large Kernel Design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11963–11975. [Google Scholar]
- Wang, J.; Chen, K.; Xu, R.; Liu, Z.; Loy, C.C.; Lin, D. CARAFE: Content-Aware ReAssembly of FEatures. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
- Gao, S.; Zhong, R.; Yan, K.; Ma, X.; Chen, X.; Pu, J.; Gao, S.; Qi, J.; Yin, G.; Myneni, R.B. Evaluating the Saturation Effect of Vegetation Indices in Forests Using 3D Radiative Transfer Simulations and Satellite Observations. Remote Sens. Environ. 2023, 295, 113665. [Google Scholar] [CrossRef]
- Yan, K.; Gao, S.; Yan, G.; Ma, X.; Chen, X.; Zhu, P.; Li, J.; Gao, S.; Gastellu-Etchegorry, J.-P.; Myneni, R.B.; et al. A Global Systematic Review of the Remote Sensing Vegetation Indices. Int. J. Appl. Earth Obs. Geoinf. 2025, 139, 104560. [Google Scholar] [CrossRef]
- Sweet, D.D.; Tirado, S.B.; Springer, N.M.; Hirsch, C.N.; Hirsch, C.D. Opportunities and Challenges in Phenotyping Row Crops Using Drone-based RGB Imaging. Plant Phenome J. 2022, 5, e20044. [Google Scholar] [CrossRef]
- Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
- Wu, Z.; Chen, Y.; Zhao, B.; Kang, X.; Ding, Y. Review of Weed Detection Methods Based on Computer Vision. Sensors 2021, 21, 3647. [Google Scholar] [CrossRef] [PubMed]

Table 1. Comparative performance evaluation of SLD-YOLO11 and baseline detectors.
| Model | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 | F1-Score | FLOPs (G) |
|---|---|---|---|---|---|---|
| YOLO11-MobileNetV3 | 0.857 | 0.899 | 0.923 | 0.479 | 0.878 | 3.4 |
| YOLO11-ShuffleNetV2 | 0.911 | 0.900 | 0.948 | 0.485 | 0.905 | 2.8 |
| YOLOv8n | 0.919 | 0.906 | 0.956 | 0.523 | 0.912 | 8.7 |
| YOLOv10n | 0.887 | 0.875 | 0.925 | 0.560 | 0.881 | 7.7 |
| YOLOv11n | 0.924 | 0.938 | 0.970 | 0.575 | 0.931 | 6.1 |
| SLD-YOLO11 | 0.935 | 0.948 | 0.974 | 0.655 | 0.941 | 6.3 |
Table 2. Ablation study of the proposed modules on the YOLO11n baseline.

| Model | SPD-Conv | D-LKA | DySample | Precision (%) | Recall (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|---|---|---|
| YOLO11n | — | — | — | 92.4 | 93.8 | 97.0 | 57.5 | 2.6 | 6.1 |
| S-YOLO11n | √ | — | — | 92.9 | 94.3 | 97.2 | 59.8 | 2.6 | 6.0 |
| L-YOLO11n | — | √ | — | 93.2 | 94.1 | 97.3 | 61.5 | 2.8 | 6.2 |
| D-YOLO11n | — | — | √ | 92.6 | 93.9 | 97.1 | 58.9 | 2.6 | 6.1 |
| SLD-YOLO11 | √ | √ | √ | 93.5 | 94.8 | 97.4 | 65.5 | 2.8 | 6.3 |
Liu, M.; Gao, J. SLD-YOLO11: A Topology-Reconstructed Lightweight Detector for Fine-Grained Maize–Weed Discrimination in Complex Field Environments. Agronomy 2026, 16, 328. https://doi.org/10.3390/agronomy16030328
