Open Access Article
YOLOv8-SDC: An Improved YOLOv8n-Seg-Based Method for Grafting Feature Detection and Segmentation in Melon Rootstock Seedlings
by Lixia Li 1, Kejian Gong 1, Zhihao Wang 2, Tingna Pan 3 and Kai Jiang 2,*
1 Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming 650500, China
2 Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
3 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(10), 1087; https://doi.org/10.3390/agriculture15101087
Submission received: 1 April 2025 / Revised: 9 May 2025 / Accepted: 16 May 2025 / Published: 17 May 2025
Abstract
To address the multi-target detection problem in the automatic seedling-feeding procedure of vegetable-grafting robots from dual perspectives (top view and side view), this paper proposes YOLOv8-SDC, an improved detection and segmentation model based on YOLOv8n-seg. The model improves the detection and segmentation accuracy of rootstock seedlings by replacing the original Conv module with SAConv, replacing the C2f module with C2f_DWRSeg, and adding the CA attention mechanism. Specifically, the SAConv module dynamically adjusts the receptive field of the convolutional kernels to strengthen the extraction of seedling shape features. The DWR module enables the network to adapt more flexibly to the edges and contours of different cotyledons, growing points, and stems. The incorporated CA mechanism helps the model suppress background interference for better localization and identification of seedling grafting features. The improved model was trained and validated on preprocessed data. The experimental results show that YOLOv8-SDC achieves significant accuracy improvements over the original YOLOv8n-seg, YOLACT, Mask R-CNN, YOLOv5, and YOLOv11 in both object detection and instance segmentation under top-view and side-view conditions. The Box and Mask mAP for the cotyledon (leaf1, leaf2, leaf), growing point (pot), and seedling stem (stem) classes reached 98.6% and 99.1%, respectively, at a processing speed of 200 FPS. The feasibility of the proposed method was further validated by extracting grafting features such as cotyledon deflection angles and stem–cotyledon separation points. These findings provide robust technical support for developing an automatic seedling-feeding mechanism for grafting robots.
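The abstract names a CA attention mechanism added to the YOLOv8n-seg backbone; this is commonly the Coordinate Attention block of Hou et al. (2021). Below is a minimal PyTorch sketch of such a block, offered only as an illustration under that assumption: the class name CoordAtt, the reduction ratio, and the activation choice are illustrative and do not come from the authors' code.

```python
# Minimal Coordinate Attention sketch (assumption: "CA" = Coordinate Attention).
# Not the authors' implementation; layer names and the reduction ratio are illustrative.
import torch
import torch.nn as nn


class CoordAtt(nn.Module):
    """Factorizes spatial attention into height-wise and width-wise components,
    which can help localize thin structures such as seedling stems."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool along width  -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool along height -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = self.conv_h(y_h).sigmoid()                      # (N, C, H, 1)
        a_w = self.conv_w(y_w.permute(0, 1, 3, 2)).sigmoid()  # (N, C, 1, W)
        return x * a_h * a_w                      # re-weight features by positional attention


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)   # dummy backbone feature map
    print(CoordAtt(64)(feat).shape)     # torch.Size([1, 64, 80, 80])
```

In a YOLOv8-style network, a block of this kind would typically be inserted after selected backbone or neck stages so that the attention weights emphasize seedling regions before the detection and segmentation heads.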