Article

Applying YOLOv8 and X-ray Morphology Analysis to Assess the Vigor of Brachiaria brizantha cv. Xaraés Seeds

by Daniel de Amaral da Silva 1, Emannuel Diego Gonçalves de Freitas 1, Haynna Fernandes Abud 2 and Danielo G. Gomes 1,*
1 GREat Lab, PPGETI—Centro de Tecnologia, Universidade Federal do Ceará (UFC), Fortaleza 60455-970, CE, Brazil
2 Image Pesquisas Sementes e Plantas—Parque de Desenvolvimento Tecnológico (PADETEC), Universidade Federal do Ceará (UFC), Fortaleza 60714-903, CE, Brazil
* Author to whom correspondence should be addressed.
AgriEngineering 2024, 6(2), 869-880; https://doi.org/10.3390/agriengineering6020050
Submission received: 3 January 2024 / Revised: 21 February 2024 / Accepted: 28 February 2024 / Published: 22 March 2024

Abstract: Seed quality significantly affects how well crops grow. Traditional methods for checking seed quality, such as germination counts or the tetrazolium chemical test, depend on close visual inspection by trained analysts, which demands considerable time and effort. Computer vision, a technology that helps computers see and understand images, is increasingly used in agriculture. Here, we combine computer vision with X-ray imaging to help experts assess seed quality quickly and accurately. We radiographed three seed lots and analyzed the X-ray images with YOLOv8, which measures attributes such as seed size and the area occupied by the internal reserve tissue, the endosperm. Based on this information, we sorted the seeds into four classes according to how much endosperm they contain. Our results show that YOLOv8 identifies and segments the endosperm well even with a small amount of training data, reaching an accuracy of about 95.6%. This suggests that our approach can help estimate how well seed lots will perform when planted.

1. Introduction

The global demand for food and agricultural products continues to surge, propelled by the expanding human population, posing significant challenges to soil health, atmospheric balance, and water resources [1]. Meeting this demand sustainably necessitates solutions that not only bolster agricultural productivity in terms of land use but also optimize resource allocation. One effective strategy to enhance productivity involves the cultivation of high-vigor seeds [2,3,4]. The quality of these seeds, influenced by genetic, physical, sanitary, and physiological factors, profoundly impacts crop development, thereby directly affecting yield potential across plant species [5,6].
Despite the reliability of traditional germination and vigor tests in meeting seed production standards, they often fall short in predicting seedling emergence under less favorable environmental conditions [7]. Moreover, these tests can be time-consuming, costly, and offer limited insights into the internal state of the seed [8]. Addressing this challenge, the assessment of internal seed morphology has emerged as a crucial approach for identifying issues related to seed physiological potential [6,9].
Recent research in the literature has explored various computational methods that correlate image-derived parameters with seed quality standards [5,10,11]. Image analysis techniques provide non-destructive means to comprehend various aspects of seed development, establishing a connection between internal morphology and structural integrity [8,12,13,14]. This approach enables the determination of the physiological potential of seed lots. A well-established method for image acquisition relies on X-ray differential absorption by seed tissues, which varies with tissue thickness, density, composition, and radiation wavelength [15]. This process involves exposing seeds to X-rays, creating a latent image on a photosensitive film [6,11]. However, despite the assistance of computational tools, the manual measurement of parameters contributing to seed vigor remains a challenge in seed image analysis.
In the pursuit of automation solutions, the recent literature has explored various computational methods for seed quality assessment. Cheng et al. [7] combined low-field nuclear magnetic resonance (LF-NMR) spectral data and machine learning algorithms like Fisher’s linear discriminant (FLD) to accurately distinguish high- and low-vigor rice seeds. Cioccia et al. [12] used laser-induced breakdown spectroscopy (LIBS) and machine learning algorithms like linear discriminant analysis (LDA) to evaluate the vigor of Brachiaria brizantha seeds, achieving 100% accuracy in distinguishing high- and low-vigor seeds.
Moreover, recent advancements in computer vision, machine learning, and deep learning have gained considerable attention for seed quality evaluation [4,14,16]. Convolutional neural networks (CNNs) have shown particular promise for image-based seed assessment by learning complex patterns from seed images which are difficult to discern by traditional methods [11,14,16]. Wu et al. [16] used a CNN model with a weighted loss function to detect rice seed vigor from hyperspectral images, achieving an impressive accuracy of 97.69%. Xu et al. [14] introduced two CNN models (CNN-FES and CNN-ATM) for maize seed defect detection using hyperspectral imaging, outperforming traditional methods with over 90% accuracy and reaching 98.21% accuracy with feature wavelength modeling.
Here, we propose a method to automate the analysis of X-ray images of Brachiaria seeds (Urochloa brizantha cv. Xaraés) using YOLOv8 [17], which employs CSPDarknet53 as its CNN backbone feature extractor. On top of the backbone-extracted features, we add a post-processing step that makes the pipeline robust to variations in rotation and scale and lets the model analyze X-ray images containing multiple seed instances directly. A post-segmentation module then segments the seed images and derives quality descriptors based on internal morphology, providing a quick and non-destructive estimate of seed germination potential. We compare the method to traditional seed quality assessment techniques as well as to recent computer vision and deep learning approaches for a comprehensive evaluation.

2. Materials and Methods

This section provides a comprehensive overview of the procedures undertaken to establish the seed image database and train the computer vision model employed for segmentation and classification.

2.1. Seeds

Brachiaria (Urochloa brizantha cv. Xaraés) stands out as an extensively cultivated forage grass globally [11], underscoring the substantial interest in automated seed quality assessment within the agricultural sector. In this study, random samples of Brachiaria seeds were utilized to capture radiographic images. Figure 1 shows the modest size of these seeds during the inspection, emphasizing the necessity for an automated evaluation approach.

2.2. Dataset

For each seed lot, we used ten replicates of 20 seeds. These seeds were arranged on transparent plastic sheets and secured in place with double-sided adhesive tape. This setup ensured clear visualization of seed parts, including the embryo, endosperm, seed coat, and empty spaces. Subsequently, 30 images (each containing 20 seeds, as shown in Figure 2) were saved on an external storage device for batch processing, giving a total of 600 radiographed seeds. We used Roboflow (https://roboflow.com, accessed on 1 March 2024), a computer vision platform for dataset management, to prepare the training data. Additionally, traditional data augmentation techniques such as rotation and blur were applied to the bounding boxes of each seed.
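The exact Roboflow augmentation pipeline is not specified in the text; purely as an illustration, a roughly equivalent local pipeline for rotation and blur over YOLO-format bounding boxes could be written with the albumentations library (an assumption, not a tool used in this paper):

```python
import albumentations as A

# Illustrative stand-in for the Roboflow augmentations described in the text:
# random rotation and blur applied to images annotated with YOLO-format boxes.
augment = A.Compose(
    [
        A.Rotate(limit=15, p=0.5),    # small random rotations
        A.Blur(blur_limit=3, p=0.3),  # mild blurring
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Usage (image, boxes, and labels are placeholders):
# out = augment(image=image, bboxes=boxes, class_labels=labels)
```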
In Figure 2, radiographic images of Urochloa brizantha cv. Xaraés seeds reveal crucial details of internal morphology. These images allow for the assessment of characteristics such as low quality (#5, #15, and #19), poor formation (#4 and #11), and embryo damage (#17). Such factors significantly impact seed lot quality, directly influencing germination.
For seed segmentation, a supervised training approach was employed. Using the Labelme (https://github.com/labelmeai/labelme, accessed on 1 March 2024) tool, we labeled the 20 seeds in each image into two categories: seed and endosperm.
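Labelme stores each annotation as a JSON polygon, while YOLOv8 segmentation training expects one text file per image containing normalized polygon coordinates. A minimal conversion sketch is shown below; the class index order and file paths are assumptions, and Roboflow can perform an equivalent export automatically.

```python
import json
from pathlib import Path

# Map the two annotation categories to YOLO class indices (assumed order).
CLASS_IDS = {"seed": 0, "endosperm": 1}

def labelme_to_yolo_seg(json_path: str, out_dir: str) -> None:
    """Convert one Labelme JSON file into a YOLOv8 segmentation label file."""
    data = json.loads(Path(json_path).read_text())
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data["shapes"]:
        cls = CLASS_IDS[shape["label"]]
        # Normalize polygon vertices to [0, 1], as required by YOLO labels.
        coords = [f"{x / w:.6f} {y / h:.6f}" for x, y in shape["points"]]
        lines.append(f"{cls} " + " ".join(coords))
    out_file = Path(out_dir) / (Path(json_path).stem + ".txt")
    out_file.write_text("\n".join(lines))

# Example (hypothetical paths):
# labelme_to_yolo_seg("radiograph_01.json", "labels/train")
```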

2.3. YOLOv8

To address the segmentation of regions of interest, we opted to employ the YOLOv8 model, renowned for its practical training approach and high accuracy. Developed by Ultralytics in late 2022, YOLOv8 is a member of the YOLO (You Only Look Once) series of object detection algorithms [17], representing an improvement over its predecessor, YOLOv5. The architectural enhancements introduced in YOLOv8 focus on boosting both speed and accuracy, featuring a novel C2F module that replaces YOLOv5’s C3 module. This alteration results in the generation of more precise feature maps crucial for object detection.
The YOLOv8 architecture comprises three main components: the backbone network, the neck network, and the detection head. The backbone, based on CSPDarknet53, is responsible for extracting feature maps from the input image. It down-samples the image five times to obtain five levels of feature maps (P1, P2, P3, P4, and P5), with Pi denoting a resolution of 1/2^i of the original image. These feature maps capture hierarchical representations of the input image at different scales, contributing to the model's ability to detect objects of varying sizes. The neck network and detection head then leverage these feature maps to infer bounding boxes and object labels.
YOLOv8 offers various model sizes: yolov8n (nano), yolov8s (small), yolov8m (medium), yolov8l (large), and yolov8x (extra large). Larger models achieve higher mAP at the cost of longer inference times. Additionally, YOLOv8 provides variants tailored for different tasks, including classification (cls), segmentation (seg), pose estimation (pose), and oriented object detection (obb). In our study, we utilized the yolov8n-seg version, the nano-sized variant designed for segmentation.
During the training phase, our model utilized an augmented dataset, randomly split into 70% for training and 30% for testing. K-fold cross-validation was employed, dividing the image dataset into five folds for robust evaluation. The Ultralytics implementation (version X) of the model incorporated measures to prevent overfitting, including dropout regularization, early stopping with a patience of 50 epochs, and weight transfer from pre-training on the COCO image dataset (https://cocodataset.org, accessed on 1 March 2024). Training was run for 500, 1500, and 3000 epochs using the SGD optimizer, a learning rate of 0.01, and dynamic weight decay.
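For reference, this configuration maps onto the Ultralytics Python API roughly as in the sketch below. The dataset YAML, image size, and run names are assumptions, the early-stopping value is read here as a 50-epoch patience, and cross-validation folds and dynamic weight decay are omitted for brevity.

```python
from ultralytics import YOLO

# One run per epoch budget reported in the text (500, 1500, 3000 epochs).
for epochs in (500, 1500, 3000):
    # Start each run from the nano segmentation model pre-trained on COCO.
    model = YOLO("yolov8n-seg.pt")
    model.train(
        data="brachiaria_xray.yaml",   # hypothetical dataset config (paths + the 2 classes)
        epochs=epochs,
        optimizer="SGD",
        lr0=0.01,                      # initial learning rate reported in the text
        patience=50,                   # early stopping, interpreted as 50-epoch patience
        imgsz=640,                     # assumed input resolution (not stated in the paper)
        project="seed_vigor",
        name=f"yolov8n_seg_{epochs}ep",
    )
```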

2.4. Human Analysis of Seed Vigor

To validate the data obtained from our proposed method, images from the dataset (Figure 3A) underwent human visual analysis. This analysis focused on studying the internal morphology of the seeds, identifying areas occupied by the seed (Figure 3B) and endosperm (Figure 3C), the internal area/total area ratio (mm2), and the length and width of the seeds (Figure 3D). Additionally, it included the detection of deteriorated tissues and malformations, characteristics that can lead to reduced germination.
Following the visual analysis, the germination test was conducted using the same seeds utilized in the X-ray test, maintaining the seed distribution order. Each batch underwent ten repetitions of 20 seeds. The germination test employed paper sheets premoistened with distilled water, equivalent to 2.5 times the mass of the dry substrate. Seed-containing paper rolls were placed in a BOD-type germinator with alternating temperatures of 15–35 °C and a photoperiod of 8 h of light and 16 h of darkness. Evaluations were conducted seven days after sowing, which is the established date for the first germination count [18].
Seven days after sowing, we transferred normal and abnormal seedlings, along with dead seeds, onto black A3 paper. We captured images using an HP Scanjet 2004 scanner, which was adapted in an inverted position within an aluminum box (refer to Figure 4). The images were scanned at 300 dpi and stored for subsequent analysis.
We individually measured the length of each seedling in millimeters (mm) using the ImageJ 1.8.0 software tool. This tool allowed us to outline specific parts of the analyzed material, highlighted in yellow at the center of Figure 5.
Based on the frequency distribution of seed internal area, we established four classes, as follows (a minimal classification helper is sketched after the list):
  • Class I: Seeds with an internal area ranging from 0.391 to 1.667 mm2;
  • Class II: Seeds with an internal area ranging from 1.668 to 2.944 mm2;
  • Class III: Seeds with an internal area ranging from 2.945 to 4.221 mm2;
  • Class IV: Seeds with an internal area ranging from 4.222 to 5.497 mm2.
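Assigning a seed to one of these classes from its measured internal area is a simple lookup over the intervals listed above. In the sketch below, areas falling outside the observed range are flagged rather than forced into a class; that fallback is an assumption, not part of the paper.

```python
# Class boundaries (mm^2) taken from the list above.
CLASS_BOUNDS = [
    ("Class I", 0.391, 1.667),
    ("Class II", 1.668, 2.944),
    ("Class III", 2.945, 4.221),
    ("Class IV", 4.222, 5.497),
]

def classify_internal_area(area_mm2: float) -> str:
    """Return the vigor class for a seed given its internal (endosperm) area in mm^2."""
    for name, low, high in CLASS_BOUNDS:
        if low <= area_mm2 <= high:
            return name
    return "Unclassified"  # outside the observed frequency distribution

# Example: a seed with 3.1 mm^2 of internal area falls in Class III.
print(classify_internal_area(3.1))
```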
The experiment followed an unbalanced completely randomized design (UCRD). For the seedling length variable y, the data were transformed by applying √(y + 1). This transformation was deemed necessary to facilitate parametric analysis of variance when the homoscedasticity assumption was not met [19]. The variables were subsequently analyzed using analysis of variance (ANOVA) at a 5% significance level. Variables exhibiting significant differences underwent Tukey's test for mean comparison.
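This analysis can be reproduced with standard Python statistics tooling. The sketch below is a minimal illustration only: it assumes the square-root form of the transformation, uses SciPy's one-way ANOVA and statsmodels' Tukey HSD in place of whatever statistical package the authors used, and runs on hypothetical seedling length data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: seedling length (mm) and vigor class for each seed.
lengths = np.array([2.1, 1.9, 8.8, 9.3, 9.0, 9.4, 9.1, 9.5])
classes = np.array(["I", "I", "II", "II", "III", "III", "IV", "IV"])

# Square-root transformation assumed from the text: sqrt(y + 1).
transformed = np.sqrt(lengths + 1)

# One-way ANOVA at the 5% significance level.
groups = [transformed[classes == c] for c in np.unique(classes)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# Tukey's test for pairwise mean comparison when ANOVA is significant.
if p_value < 0.05:
    print(pairwise_tukeyhsd(transformed, classes, alpha=0.05))
```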
Table 1 presents the morphological characteristics of Brachiaria seeds, where means followed by the same letter (a, b, c, or d) in the same column are statistically similar to each other. The total seed area indicates that Class IV significantly differs from the other three classes in the Tukey test at a 5% significance level, with a value of 8.50 mm2. Larger seeds can accommodate a greater endosperm area due to their size. A larger endosperm area provides more reserves for the germination process, thereby increasing the likelihood of germination.
The ratio of internal area to total area exhibited an increasing trend based on seed class. Class I had the smallest ratio, while Class IV had the largest. Seed length reached its peak in Class I (Table 1), while the other classes had statistically similar seed lengths.
Examining seedling length revealed that Class I produced smaller seedlings, as the endosperm is the primary supplier of the reserves needed for germination in Brachiaria seeds. The other classes did not differ significantly from each other. In summary, vigor increased with the endosperm area classes: larger endosperm areas indicate greater vigor, as reflected in seedling length. Analyzing seed images through X-ray morphological studies and classifying seeds into different filling levels allows morphological characteristics to be correlated with physiological potential. This highlights the importance of methods that automate this process, providing significant contributions to the seed industry.

2.5. Proposed Method

To evaluate the robustness of Brachiaria brizantha cv. Xaraés seeds using X-ray images, we introduced a post-segmentation module to the YOLO model. This module, known as the morphological analysis module (MAM), employs a digital image processing algorithm to compute various variables: total area (mm2), embryo and endosperm area (mm2), internal area/total area ratio (mm2), length (mm), and width of the seeds (mm).
The area is determined using Green’s theorem, implemented numerically in the OpenCV library (https://opencv.org, accessed on 1 March 2024). Length and width are derived from the seed’s angle and the extreme limits of the rotated bounding box, obtained through OpenCV methods based on algorithms proposed by O’Rourke et al. [20] and Klee and Laskowski [21].
Since the measurements are initially in pixels or pixels2, the MAM converts these values to mm and mm2, respectively, by calibrating the measurement system against a reference measurement. In this calibration process, a known length of 3.5 mm is marked on the X-ray images using a razor blade, serving as a reference object. By counting the pixels along this reference length, we establish the mm/pixel ratio for the images.
To obtain the mm/pixel ratio, the following formula is applied:
mm/pixel ratio = (known measurement in mm) / (pixels counted along the known measurement).    (1)
Subsequently, the ratio defined in Equation (1) serves as a conversion factor, allowing us to determine the corresponding physical dimensions by simply multiplying the pixel count of a given measurement by the established mm/pixels ratio.
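For illustration, a minimal sketch of these measurements with OpenCV is shown below. The binary mask input and the pixel count used in the calibration constant are placeholders; cv2.contourArea performs the Green's theorem area computation and cv2.minAreaRect supplies the rotated bounding box from which length and width are taken.

```python
import cv2
import numpy as np

# Calibration from Equation (1): 3.5 mm reference marked on the film.
# The pixel count below is a placeholder for the value measured on the radiograph.
MM_PER_PIXEL = 3.5 / 70  # e.g., 70 pixels counted along the 3.5 mm reference

def measure_seed(mask: np.ndarray) -> dict:
    """Compute area (mm^2), length and width (mm) from a binary segmentation mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)  # largest region = the seed

    # Area via Green's theorem (cv2.contourArea), converted from pixels^2 to mm^2.
    area_mm2 = cv2.contourArea(contour) * MM_PER_PIXEL ** 2

    # The rotated bounding box gives the seed's angle and extreme limits;
    # its longer side is taken as length, the shorter as width.
    (_, _), (w_px, h_px), angle = cv2.minAreaRect(contour)
    length_mm = max(w_px, h_px) * MM_PER_PIXEL
    width_mm = min(w_px, h_px) * MM_PER_PIXEL

    return {"area_mm2": area_mm2, "length_mm": length_mm,
            "width_mm": width_mm, "angle_deg": angle}
```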

2.6. Experiment Analysis

For the experiment analysis, we adopted a randomized block design based on the divisions within the X-ray database. This approach aims to reduce bias or confounding factors that might influence our findings. Initially, we assessed the overall model performance by blocking based on the fold factor. However, for a more detailed evaluation at the class level, we maintained the randomized block design, still considering the fold, but this time stratifying by class.
To check the normality of the residuals, we conducted the Shapiro–Wilk test [22]. For assessing the homogeneity of variances, we used the O’Neill–Mathews test [23]. While the majority of the analyses in Section 3 met the assumptions of normality and homogeneity of variances at the 5% significance level, a few did not comply with one or both assumptions. Therefore, for these cases, a non-parametric approach, specifically the Wilcoxon test [24], was employed.
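As an illustration of this decision procedure, the sketch below uses SciPy's Shapiro–Wilk, Levene (as a readily available stand-in for the O'Neill–Mathews test), ANOVA, and Wilcoxon routines on hypothetical fold-wise AP values.

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold AP values for two epoch settings being compared.
ap_1500 = np.array([0.963, 0.969, 0.967, 0.971, 0.965])
ap_3000 = np.array([0.972, 0.973, 0.966, 0.974, 0.970])

residuals = np.concatenate([ap_1500 - ap_1500.mean(), ap_3000 - ap_3000.mean()])

# Shapiro-Wilk test for normality of the residuals.
_, p_norm = stats.shapiro(residuals)

# Homogeneity of variances (Levene's test as a stand-in for O'Neill-Mathews).
_, p_var = stats.levene(ap_1500, ap_3000)

if p_norm >= 0.05 and p_var >= 0.05:
    _, p = stats.f_oneway(ap_1500, ap_3000)   # parametric ANOVA
else:
    _, p = stats.wilcoxon(ap_1500, ap_3000)   # non-parametric fallback
print(f"p-value for the epoch comparison: {p:.4f}")
```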

3. Results

We selected the YOLOv8 architecture due to its balanced performance in segmentation accuracy and efficient training times. The model was trained on the seed X-ray dataset for every combination of training epochs (500, 1500, and 3000) and dataset sizes (5, 15, and 30 images). This training was conducted on an NVIDIA Tesla T4 GPU with the PyTorch framework.

3.1. Segmentation Performance

The YOLOv8 model achieved an impressive overall Average Precision (AP) of 97.3% in accurately segmenting seeds and endosperm in the X-ray images (refer to Table 2). This signifies the correct delineation and classification of seed and endosperm regions in 97.3% of instances. For comprehensive insights into segmentation performance across training epochs and dataset sizes, please refer to Table 2 and Table 3.
On a per class basis, seeds exhibited a higher Average Precision (AP) compared to endosperm, achieving 98.9% versus 95.5%, respectively, after 3000 epochs (refer to Table 3). Despite this slight discrepancy, the inference performance for endosperm remained notably high, with a minimum AP of 91.7% even after just 500 training epochs. This underscores the model’s adaptability in handling variability and visual ambiguities inherent in seed X-rays, particularly within the endosperm area, which tends to exhibit greater variability compared to the seed area (see Figure 6).
Furthermore, the Tukey multiple comparison test revealed no statistically significant differences in AP metrics across varying epoch levels ( p > 0.05 ), while maintaining a fixed dataset size. This observation indicates a stable performance and suggests appropriate convergence without overfitting, even when dealing with smaller training set sizes.

3.2. Seed Classification and Vigor Prediction

The segmented seed and endosperm areas obtained through the YOLOv8 model lay the foundation for an automated classification system for seed vigor levels, relying on internal morphology. The size of the endosperm, a crucial nutrient reserve for the embryo, serves as an indicator of germination potential.
Figure 6 shows the relationship between segmented seed and endosperm areas, with contour levels representing endosperm/seed area ratios—a potential indicator of seed quality. The majority of points fall within the 50–60% ratio range. Notably, no points surpass 70%, establishing an initial benchmark for high vigor classification.
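Given the per-seed masks predicted by the segmentation model, the endosperm/seed area ratio plotted in Figure 6 can be computed directly from mask pixel counts. The sketch below uses the Ultralytics results API and assumes class index 0 is "seed" and 1 is "endosperm" and a crop containing a single seed; matching endosperm masks to their parent seeds in multi-seed radiographs would need an additional overlap step.

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # hypothetical path to the trained weights
result = model("seed_crop.png")[0]                  # hypothetical crop containing one seed

seed_px, endosperm_px = 0.0, 0.0
if result.masks is not None:
    for cls, mask in zip(result.boxes.cls, result.masks.data):
        area = float(mask.sum())      # pixel count of this instance mask
        if int(cls) == 0:             # class 0: seed (assumed index)
            seed_px += area
        elif int(cls) == 1:           # class 1: endosperm (assumed index)
            endosperm_px += area

ratio = endosperm_px / seed_px if seed_px else 0.0
print(f"endosperm/seed area ratio: {ratio:.1%}")
```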
Upon qualitative visual inspection (refer to Figure 7), a clear and accurate alignment is evident between true seed/endosperm boundaries in X-ray images and segmentation masks predicted by YOLOv8. The model demonstrates proficiency in handling high appearance variation arising from internal structural ambiguities, rotations, and varying endosperm densities.

4. Discussion

The YOLOv8 model demonstrates impressive segmentation and classification performance even with limited dataset sizes. Given the challenges of compiling specialized agricultural data, the statistical similarity between 500, 1500, and 3000 training epochs suggests robust models that avoid overfitting. Notably, segmentation still reached 97.2% Average Precision (AP) when trained on just five seed X-ray images (refer to Table 2).
This finding is comparable to a computer vision approach used to assess the viability of guavira seeds treated with tetrazolium salt [25], which achieved slightly higher precision (97.90% correct recognition for mucilage and 96.71% for lime) but relied on destructive tests. In a vigor test for rice seeds using computer vision techniques [26], a new prediction model was introduced for non-destructive germination forecasting, achieving a high accuracy of 94.17%; our results outperform this deep learning method.
The 50–60% endosperm/seed area ratio range encompassing over 50% of samples (see Figure 6) offers a clear indication for assessing the viability of seed lots. Although further physiochemical testing can refine categorical boundaries, this morphological indicator facilitates quick sorting and selective harvesting.
Deep learning, by automatically extracting spatial features predictive of germination, circumvents the need for extensive manual measurement while enhancing consistency compared to subjective human visual assessments. Table 3 highlights the accuracy of endosperm identification and segmentation, reaching a maximum of 95.6% in 1500 epochs with 15 training images. The automatic extraction of interpretable morphological features from X-ray scans through deep learning enables swift and reproducible seed sorting without requiring specialized image analysis expertise. This method generalizes well across varying appearances, orientations, and shapes compared to template-matching approaches. Additionally, easy retraining allows for updates in biological classifications as expert knowledge evolves.
Our proposed approach offers several advancements over existing methods for seed quality assessment. Firstly, we utilize a deep learning-based model, YOLOv8, for automated seed segmentation and classification. As shown in Table 4, most prior works rely on traditional machine learning techniques [3,13,25,27] without leveraging the representation learning capabilities of deep neural networks. By using the Darknet53 CNN backbone, our method can extract robust spatial features predictive of seed vigor levels.
Additionally, our solution provides a low-cost alternative suitable for batch analysis of seed lots, addressing limitations in techniques requiring expensive hyperspectral cameras [3,13,16,25,28] or destructive biochemical testing [27]. The use of widely accessible X-ray RGB imagery, correctly segmented over 95% of the time by YOLOv8, offers an affordable option for seed producers compared to hyperspectral imaging utilized in several related papers.
Finally, the non-destructive real-time assessment facilitated by our approach enables the rapid sorting of seed batches into categorical vigor levels with morphological indicators like endosperm size. This allows for selective harvesting and quality control prior to sowing. The automated measurement of morphological features through post-segmentation analysis demonstrates comparable or higher accuracy than existing methods, underlining the viability of our proposed computer vision pipeline for practical applications in the seed industry.

5. Conclusions

We recommend employing YOLOv8 for the study of the internal structure of Brachiaria brizantha cv. Xaraés seeds through X-ray images to evaluate seed vigor. YOLOv8, with an added post-segmentation module, facilitates obtaining quality descriptors for seed batches based on their internal morphology. This process automates the analysis of segmented images, replicating human visual analysis.
Our findings suggest that the proposed model performs well in segmenting and classifying despite having a relatively small dataset. It achieved up to 95.6% accuracy in identifying and segmenting the endosperm over 1500 epochs with just 15 training images. The endosperm/seed area ratio, specifically in the 50–60% range, which covers over 50% of the samples, offers a meaningful measure for assessing the viability of seed batches.
As a future step, we plan to develop a user-friendly web application. This application aims to be a valuable tool in agricultural engineering post-harvest processes. It will assist seed production companies by automatically categorizing seed batches, providing the market with options based on cost-effectiveness.

Author Contributions

Conceptualization, D.d.A.d.S., H.F.A. and D.G.G.; Methodology, D.d.A.d.S., E.D.G.d.F. and H.F.A.; Software, D.d.A.d.S. and E.D.G.d.F.; Validation, D.d.A.d.S., E.D.G.d.F., H.F.A. and D.G.G.; Formal analysis, D.d.A.d.S., E.D.G.d.F. and H.F.A.; Investigation, D.d.A.d.S., E.D.G.d.F., H.F.A. and D.G.G.; Data curation, D.d.A.d.S., E.D.G.d.F. and H.F.A.; Writing—original draft, D.d.A.d.S., E.D.G.d.F. and H.F.A.; Writing—review and editing, D.d.A.d.S., E.D.G.d.F. and D.G.G.; Visualization, D.d.A.d.S., E.D.G.d.F. and D.G.G.; Supervision, D.G.G.; Project administration, D.G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES).

Data Availability Statement

The data presented in this paper can be accessed upon request to the corresponding author.

Acknowledgments

Danielo G. Gomes thanks the support of the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) (grant number #311845/2022-3). We would like to thank Arley Daniel Peter for proofreading this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kopittke, P.M.; Menzies, N.W.; Wang, P.; McKenna, B.A.; Lombi, E. Soil and the intensification of agriculture for global food security. Environ. Int. 2019, 132, 105078. [Google Scholar] [CrossRef] [PubMed]
  2. Medeiros, A.D.d.; Silva, L.J.d.; Ribeiro, J.P.O.; Ferreira, K.C.; Rosas, J.T.F.; Santos, A.A.; Silva, C.B.d. Machine Learning for Seed Quality Classification: An Advanced Approach Using Merger Data from FT-NIR Spectroscopy and X-ray Imaging. Sensors 2020, 20, 4319. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, T.; Lu, L.; Yang, N.; Fisk, I.D.; Wei, W.; Wang, L.M.; Li, J.J.; Sun, Q. Integration of Hyperspectral Imaging, Non-targeted Metabolomics and Machine Learning for Vigour Prediction of Naturally and Accelerated Aged Sweetcorn Seeds. Food Control 2023, 153, 109930. [Google Scholar] [CrossRef]
  4. Zou, Z.; Chen, J.; Wu, W.; Luo, J.; Long, T.; Wu, Q.; Wang, Q.; Zhen, J.; Zhao, Y.; Wang, Y. Detection of Peanut Seed Vigor Based on Hyperspectral Imaging and Chemometrics. Front. Plant Sci. 2023, 14, 1127108. [Google Scholar] [CrossRef] [PubMed]
  5. de Oliveira, G.R.F.; Mastrangelo, C.B.; Hirai, W.Y.; Batista, T.B.; Sudki, J.M.; Petronilio, A.C.P.; Crusciol, C.A.C.; da Silva, E.A.A. An Approach Using Emerging Optical Technologies and Artificial Intelligence Brings New Markers to Evaluate Peanut Seed Quality. Front. Plant Sci. 2022, 13, 849986. [Google Scholar] [CrossRef]
  6. Campos, L.V.; Rodrigues, A.A.; Sales, J.d.F.; Rodrigues, D.A.; Filho, S.C.V.; Rodrigues, C.L.; Vieira, D.A.; de Castro, S.T.; Neto, A.R. Radiographic Imaging as a Quality Index Proxy for Brachiaria brizantha Seeds. Plants 2022, 11, 1014. [Google Scholar] [CrossRef] [PubMed]
  7. Cheng, E.; Song, P.; Wang, B.; Hou, T.; Wu, L.; Zhang, W. Determination of Rice Seed Vigor by Low-field Nuclear Magnetic Resonance Coupled with Machine Learning. INMATEH-Agric. Eng. 2022, 67, 533–542. [Google Scholar] [CrossRef]
  8. Zhang, S.; Zeng, H.; Ji, W.; Yi, K.; Yang, S.; Mao, P.; Wang, Z.; Yu, H.; Li, M. Non-destructive Testing of Alfalfa Seed Vigor Based on Multispectral Imaging Technology. Sensors 2022, 22, 2760. [Google Scholar] [CrossRef]
  9. Javorski, M.; Carrara Castan, D.O.; da Silva, S.S.; Gomes-Junior, F.G.; Cicero, S.M. Image Analysis to Evaluate the Physiological Potential and Morphology of Pearl Millet Seeds. J. Seed Sci. 2018, 40, 127–134. [Google Scholar] [CrossRef]
  10. de Freitas, M.N.; Dias, M.A.N.; Gomes-Junior, F.G.; Abud, H.F.; de Araújo, L.B.; de Moraes, T.F. Discrimination of Urochloa seed genotypes through image analysis: Morphological features. Agron. J. 2021, 113, 4930–4944. [Google Scholar] [CrossRef]
  11. Domingues, R.C.; Fruet, G.; Abud, H.F.; Gomes, D.G. Imagens de Raios X e YOLOv8 para Avaliação Automatizada, Precisa e Não Destrutiva da Qualidade de Sementes Braquiária (Urochloa brizantha). In Congresso Brasileiro de Agroinformática (SBIAGRO); Sociedade Brasileira de Computação: Natal, Brazil, 2023; pp. 167–174. ISSN 2177-9724. [Google Scholar] [CrossRef]
  12. Cioccia, G.; de Morais, C.P.; Babos, D.V.; Pereira Milori, D.M.B.; Alves, C.Z.; Cena, C.; Nicolodelli, G.; Marangoni, B.S. Laser-induced Breakdown Spectroscopy Associated with the Design of Experiments and Machine Learning for Discrimination of Brachiaria brizantha Seed Vigor. Sensors 2022, 22, 5067. [Google Scholar] [CrossRef] [PubMed]
  13. Cui, H.; Bing, Y.; Zhang, X.; Wang, Z.; Li, L.; Miao, A. Prediction of Maize Seed Vigor Based on First-order Difference Characteristics of Hyperspectral Data. Agronomy 2022, 12, 1899. [Google Scholar] [CrossRef]
  14. Xu, P.; Sun, W.; Xu, K.; Zhang, Y.; Tan, Q.; Qing, Y.; Yang, R. Identification of Defective Maize Seeds Using Hyperspectral Imaging Combined with Deep Learning. Foods 2022, 12, 144. [Google Scholar] [CrossRef] [PubMed]
  15. Simak, M. Testing of forest tree and shrub seeds by X-radiography. In Tree and Shrub Seed Handbook; Gordon, A.G., Gosling, P., Wang, B.S.P., Eds.; ISTA: Zurich, Switzerland, 1991; pp. 1–28. [Google Scholar]
  16. Wu, N.; Weng, S.; Chen, J.; Xiao, Q.; Zhang, C.; He, Y. Deep Convolution Neural Network with Weighted Loss to Detect Rice Seeds Vigor Based on Hyperspectral Imaging Under the Sample-imbalanced Condition. Comput. Electron. Agric. 2022, 196, 106850. [Google Scholar] [CrossRef]
  17. Glenn, J. Ultralytics YOLOv8. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 1 March 2024).
  18. Brasil Ministério da Agricultura, Pecuária e Abastecimento. Regras para Analise Sementes, 1st ed.; Mapa: Brasilia, Brazil, 2009. Available online: https://www.gov.br/agricultura/pt-br/assuntos/insumos-agropecuarios/arquivos-publicacoes-insumos/2946_regras_analise__sementes.pdf (accessed on 1 March 2024).
  19. Montgomery, D.C.; Peck, E.A.; Vining, G.G. Introduction to Linear Regression Analysis. In Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2013; pp. 172–175. ISBN 0470542810. [Google Scholar]
  20. O’Rourke, J.; Aggarwal, A.; Maddila, S.; Baldwin, M. An optimal algorithm for finding minimal enclosing triangles. J. Algorithms 1986, 7, 258–269. [Google Scholar] [CrossRef]
  21. Klee, V.; Laskowski, M.C. Finding the smallest triangles containing a given convex polygon. J. Algorithms 1985, 6, 359–375. [Google Scholar] [CrossRef]
  22. Shapiro, S.S.; Wilk, M.B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 1965, 52, 591–611. [Google Scholar] [CrossRef]
  23. O’Neill, M.E.; Mathews, K. Theory & Methods: A Weighted Least Squares Approach to Levene’s Test of Homogeneity of Variance. Aust. N. Z. J. Stat. 2000, 42, 81–100. [Google Scholar]
  24. Conover, W.J. Practical Nonparametric Statistics. Int. Stat. Rev. 1972, 40, 393. [Google Scholar] [CrossRef]
  25. Nucci, H.H.P.; de Azevedo, R.G.; Nogueira, M.C.; Costa, C.S.; de Oliveira Guilherme, D.; Hirokawa Higa, G.T.; Pistori, H. Use of computer vision to verify the viability of guavira seeds treated with tetrazolium salt. Smart Agric. Technol. 2023, 5, 100239. [Google Scholar] [CrossRef]
  26. Qiao, J.; Liao, Y.; Yin, C.; Yang, X.; Tú, H.M.; Wang, W.; Liu, Y. Vigour testing for the rice seed with computer vision-based techniques. Front. Plant Sci. 2023, 14, 1194701. [Google Scholar] [CrossRef]
  27. de Oliveira, E.R.; Bugatti, P.H.; Saito, P.T.M. Assessment of clustering techniques to support the analyses of soybean seed vigor. PLoS ONE 2023, 18, e0285566. [Google Scholar] [CrossRef]
  28. Jin, B.; Qi, H.; Jia, L.; Tang, Q.; Gao, L.; Li, Z.; Zhao, G. Determination of viability and vigor of naturally-aged rice seeds using hyperspectral imaging with machine learning. Infrared Phys. Technol. 2022, 122, 104097. [Google Scholar] [CrossRef]
Figure 1. Actual sample of Brachiaria seeds (Urochloa brizantha) with a ruler for size comparison [11]. The right side of the ruler features a scale in millimeters (mm).
Figure 2. Radiographic images of Urochloa brizantha cv. Xaraés seeds [11].
Figure 3. Measurement of Brachiaria seed structures using the ImageJ software version 1.8.0: (A) images from the dataset; (B) areas occupied by seeds; (C) areas occupied by endosperms; (D) length and width of the seeds.
Figure 4. (A) Seedlings image capture system; (B) Detail of the scanner adapted in an aluminum box.
Figure 5. Computerized images of Brachiaria seedlings.
Figure 6. Comparison of segmented seed and endosperm areas by the YOLOv8 model, with contour levels showing endosperm/seed area ratio (higher ratio indicates better quality).
Figure 7. Visual comparison between real X-ray seed images and endosperm segmentation masks from YOLOv8 model for seeds with varying quality: (A) high-vigor seed (endosperm/seed area ratio ≈ 52%); (B) medium-vigor seed (endosperm/seed area ratio ≈ 12%); (C) low-vigor seed (no endosperm area, endosperm/seed area ratio = 0%).
Table 1. Morphological characteristics of Brachiaria seeds obtained through visual analysis.
Class | Endosperm Area Interval (mm2) | Seed Width (mm) | Seed Length (mm) | Endosperm Area (mm2) | Total Area (mm2) | Endosperm/Total Area Ratio (%) | Seedling Length 1 (mm) | NSG 2 (%)
I | 0.391–1.667 | 2.15 d | 5.23 a | 1.03 d | 8.03 b | 12.86 d | 2.08 b | 0.174
II | 1.668–2.944 | 2.20 c | 4.97 b | 2.31 c | 8.06 b | 28.82 c | 9.02 a | 0.523
III | 2.945–4.221 | 2.29 b | 4.90 b | 3.82 b | 8.04 b | 47.81 b | 9.11 a | 15.679
IV | 4.222–5.497 | 2.38 a | 5.03 b | 4.55 a | 8.50 a | 53.78 a | 9.22 a | 43.728
1 Means transformed by √(y + 1), where y is the seedling length variable. 2 NSG—Percentage of normal seedlings on the seventh day after the start of the germination test.
Table 2. YOLOv8 overall segmentation performance (AP, AP50, AP75) by training epochs (500, 1500, 3000) and dataset size.
Dataset Size | AP (500) | AP (1500) | AP (3000) | AP50 (500) | AP50 (1500) | AP50 (3000) | AP75 (500) | AP75 (1500 1) | AP75 (3000 1)
5 | 92.8% | 96.3% | 97.2% | 99.1% | 99.2% | 99.3% | 98.5% | 98.6% | 99.2%
15 | 93.7% | 96.9% | 97.3% | 98.8% | 99.5% | 99.4% | 98.0% | 99.1% | 99.0%
30 | 93.2% | 96.7% | 96.6% | 98.9% | 99.1% | 99.2% | 98.4% | 98.7% | 98.8%
1 The combination did not meet the assumptions of normality of the residuals and/or variance homogeneity; therefore, the non-parametric Wilcoxon test was used as a substitute for ANOVA.
Table 3. YOLOv8 class-wise (seed, endosperm) segmentation performance (AP) by training epochs (500, 1500, 3000) and dataset size.
Dataset Size | AP(seed) (500) | AP(seed) (1500) | AP(seed) (3000) | AP(endosperm) (500) | AP(endosperm) (1500 1) | AP(endosperm) (3000)
5 | 93.8% | 98.3% | 98.9% | 91.8% | 94.4% | 95.5%
15 | 95.1% | 98.4% | 99.0% | 92.2% | 95.3% | 95.6%
30 | 94.7% | 98.6% | 98.9% | 91.7% | 94.9% | 94.2%
1 The combination did not meet the assumptions of normality of the residuals and/or variance homogeneity; therefore, the non-parametric Wilcoxon test was used as a substitute for ANOVA.
Table 4. Related works.
Reference | Machine Learning Methodology | Imaging Technology | Non-Destructive Testing | Low-Cost Solution | Use of Deep Learning
[25] | MNB, RFC, ADA, MLP, KNN, SVM | Hyperspectral Imaging | No | No | No
[3] | PLS-R, SVM-R | Vis-NIR HSI | Yes | No | No
[13] | PCR, PLS, SVR | Hyperspectral Imaging | Yes | No | No
[16] | DCNN | Hyperspectral Imaging | Yes | No | Yes
[28] | CNN, SVM, LR, PCA | Near-Infrared HSI | Yes | No | Yes
[27] | Clustering methods | Visible Spectrum RGB Images | No | Yes | No
This paper | YOLOv8 | X-ray | Yes | Yes | Yes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
