Article

Artificial Intelligence in the Identification of Germinated Soybean Seeds

by Hiago H. R. Zanetoni, Lucas G. Araujo, Reynaldo P. Almeida and Carlos E. A. Cabral *
Institute of Agrarian and Technological Sciences, Federal University of Rondonópolis (UFR), Rondonópolis 78736-900, Brazil
* Author to whom correspondence should be addressed.
AgriEngineering 2025, 7(6), 169; https://doi.org/10.3390/agriengineering7060169
Submission received: 12 April 2025 / Revised: 8 May 2025 / Accepted: 19 May 2025 / Published: 2 June 2025

Abstract

This study arose from the demand for seeds of high physiological quality and from the need to improve germination tests used in seed evaluation, with a view to productive and homogeneous harvests. The objective of this study was to improve the classification of seeds in germination tests by introducing YOLO as a tool for classifying germinated and nongerminated seeds, making the results more specific and shortening the analysis period. Germination tests were performed with Glycine max (soybean) seeds; images of the tests were captured, and conventional categorization was performed by independent evaluators before the images were processed and submitted to YOLO. Graphical analyses of the YOLO results and comparison metrics against the conventional categorization were then used to determine the accuracy of YOLO as a seed categorization tool. The analysis of the graphs and the comparison with the conventional seed classification methodology demonstrated the effectiveness of YOLO for classifying seeds as germinated or nongerminated, reaching 95% accuracy in seed classification, with prediction errors in the range of 0–0.110 as determined by the mean squared error methodology, highlighting the efficiency of YOLO.

1. Introduction

In agricultural production systems, seeds with good physiological quality are determinants of high productivity and, consequently, of the profitability of the producer [1]. In this context, the germination test is of paramount importance for determining the capacity and quality of seed development. However, this test requires time and, consequently, investment, in addition to subjecting the analyzed seeds to destructive conditions. For this reason, it is necessary to develop new technologies that reduce the costs and analysis time [2].
Given the need to integrate new technologies into the seed sector, artificial intelligence is a viable alternative because of its variety of applications, which allows greater control over the agricultural and environmental variables that impact the production system. Building on its capacity to simulate human reasoning and decision making, artificial intelligence can interpret and learn from the data it processes, generating a bank of information that can assist in future decision making.
Artificial intelligence applied to agricultural and livestock activities promotes the development and use of technologies that assist farmers at every stage of their production. When integrated with computerized systems, artificial intelligence performs the reading and processing of data collected by sensors or satellites, thus generating a robust database [3]. In this way, it enhances the accuracy of the decision-making process, making it more efficient and ultimately benefiting agricultural and livestock activities.
Currently, there are multiple areas within agricultural production that can be impacted by the use of artificial intelligence, making operations more effective and sustainable [4]. Recent studies demonstrate the integration of conventional rural production technologies with artificial intelligence tools, such as in the qualitative classification of corn kernels based on physical attributes [5]; in the identification of distinct pathogens in bean crops, offering greater efficiency compared to conventional detection methods, as well as faster results than laboratory-based detection [6]; in the detection of ripe apples along with their branches and trunks, aiming to improve the harvesting process performed by harvesters [7]; and in the detection of tomato seeds with vigor, suitable for cultivation [8].
Among the alternatives provided by artificial intelligence, digital cameras, satellites, drones, and other image-capture devices have become fundamental pieces in the construction of databases that support new technological tools. These devices enable the analysis of images, the extraction and interpretation of data and, in some cases, the autonomous application of these results.
Studies have shown that digital image processing involves capturing visual information through the recognition of objects of interest, extracting features for the formation of patterns, and subsequent classification on the basis of these patterns [9]. In this sense, the use of digital image processing also allows the evaluation of the physiological attributes of seeds with greater reliability and effectiveness, especially compared with analyses based exclusively on human supervision [10]. Among the available tools, YOLO (you only look once) is a real-time object detection algorithm that is capable of analyzing an image in just a single reading [11]. Like other current technologies, YOLO has been incorporated into studies in agricultural areas, opening new research perspectives, such as more accurate identification of insects [12] and pests at different scales in light traps [13].
In addition, the eighth version of YOLO has shown the potential to reduce the high error rate in manual evaluations of tomato seed vigor, indicating that it is an effective and nondestructive method for determining this attribute [8]. Thus, the present study aimed to use YOLO, an artificial intelligence tool, to identify and classify germinated and nongerminated soybean seeds and to compare its accuracy with that of the conventional method, establishing the degree of reliability and efficiency of this technology.

2. Materials and Methods

2.1. Preparation of Germination Tests and Image Capture

The entire stage of development of the germination tests, including preparation, quantification and calculations, was carried out according to the guidelines established by the Rules for Seed Analysis [14]. The tests were conducted at the Seed Laboratory of the Institute of Agrarian and Technological Sciences (ICAT) of the Federal University of Rondonópolis (UFR). The images were captured in a standardized way via a camera with a resolution of 12 megapixels (12,000,000 pixels), an angular aperture of ƒ/1.5 (0.66), and optical stabilization functionality. The capture height was set at 0.30 m.
Before the tests were performed, the bench and trays were sanitized with 70% alcohol. Then, with the aid of a precision scale, the trays were tared, and the germination paper (Germitest® CEL-060, Netlab, São Paulo, Brazil) was weighed. The quantification of distilled water followed the proportion of 2 to 3 times the weight of the substrate; in this study, a factor of 2.5 times the weight of the germination paper was used, ensuring that the volume of water was sufficient to cover the entire substrate area.
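As a simple illustration of the water calculation described above, the sketch below computes the distilled-water volume from the weight of the germination paper; the paper weight shown is hypothetical, since the actual weights were read from the precision scale.

```python
# Minimal sketch of the water quantification rule described above:
# distilled water = 2.5 x weight of the germination paper.
# The paper weight below is a hypothetical example, not a measured value.
def water_volume_ml(paper_weight_g: float, factor: float = 2.5) -> float:
    """Water volume in mL, assuming 1 g of distilled water is approximately 1 mL."""
    return paper_weight_g * factor

print(water_volume_ml(18.4))  # -> 46.0 mL for an 18.4 g sheet
```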
The tests were carried out in 30 sections, with 8-day intervals between each one, representing different germination stages of the Glycine max (soybean) crop. Each sample was composed of 25 Glycine max seeds distributed uniformly, totaling 75 seeds per section (3 samples per section). The first image was captured on the day the tests were assembled (Figure 1A), the second image was captured 5 days after assembly (Figure 1B), and the third image was recorded 8 days after assembly, that is, 3 days after the second capture (Figure 1C).
After performing the germination tests and capturing the images, an image bank containing 90 samples was prepared, counting 2250 seeds, including germinated and nongerminated seeds.

2.2. Conventional Classification

The classification of seeds as germinated or not germinated followed the criteria established by the Rules for Seed Analysis [14]. This stage was conducted by two independent evaluators. On the basis of the interpretation and extraction of data from the images, a comparative table was constructed containing two columns: the first column indicated the total number of seeds in the test, which was previously standardized at 25 units, and the second column corresponded to the number of germinated seeds in each image.

2.3. Digital Processing of Images and Application of Samples to YOLO

At this stage, the images were sent to a cloud server, where the seeds were manually demarcated and categorized as germinated seeds (SGs) or nongerminated seeds (SNGs), with the help of the Label Studio 1.15 software. After each seed was delimited, the files were exported in a format compatible with the YOLO software.
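For illustration, the sketch below assumes the standard YOLO annotation format produced by a Label Studio export (one text file per image, one line per bounding box with a class index and normalized coordinates) and counts the seeds of each class in a label file; the class indices (0 for SG, 1 for SNG) and the file name are assumptions, not values reported in the study.

```python
# Sketch of reading a YOLO-format label file exported from Label Studio.
# Each line: <class_id> <x_center> <y_center> <width> <height>, coordinates normalized to 0-1.
# Assumed mapping: 0 = germinated seed (SG), 1 = nongerminated seed (SNG).
from pathlib import Path

def count_seeds(label_file: str) -> dict:
    counts = {"SG": 0, "SNG": 0}
    for line in Path(label_file).read_text().splitlines():
        if not line.strip():
            continue
        class_id = int(line.split()[0])
        counts["SG" if class_id == 0 else "SNG"] += 1
    return counts

# Hypothetical file name:
# print(count_seeds("labels/train/sample_001.txt"))
```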
Once the digital processing was completed, the extracted files were organized according to their purpose in the study: images intended for training, used to train the YOLO model, and images intended for testing, used to evaluate the performance of the model. In both directories, the images and label folders were created. For training, 70 images were used with their respective labels; for testing, 20 images were allocated, accompanied by their labels.
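A minimal sketch of the dataset configuration expected by the Ultralytics tools is shown below; the directory names, the use of the test images as the validation split, and the class names are assumptions based on the split described above rather than details reported by the authors.

```python
# Writes an assumed data.yaml describing the train/test split above (70/20 images).
from pathlib import Path

DATA_YAML = """\
path: dataset            # assumed root folder with images/ and labels/ subfolders
train: images/train      # 70 training images (labels in labels/train)
val: images/test         # 20 test images, assumed here to serve as the validation set
names:
  0: SG                  # germinated seed
  1: SNG                 # nongerminated seed
"""

Path("data.yaml").write_text(DATA_YAML)
```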
The implementation was carried out via the PyCharm IDE integrated with the Ultralytics 8.3.24 library, which includes the primary and updated versions of YOLO. The entire application stage of the YOLO model was performed on a notebook equipped with a GeForce RTX 2050 graphics card and 4 GB of random access memory (RAM).
Initially, the YOLOv8n (nano) version was used to evaluate the processing capacity of the equipment. The YOLOv8m (medium) version was subsequently adopted to ensure greater accuracy in the analyses. The model was trained using code developed in the Python 3.12.2 programming language.

2.4. Categorization via YOLO and Data Analysis

To improve image quality and optimize the training and testing process, adjustments were made using Python programming, enabling the use of higher-resolution images and configuring the number of repetitions (epochs) during training.
For the detection of the object of interest, the structural logic presented in Figure 2 was followed. The image dataset was divided as follows: 70 images were allocated for the training and validation stages, and 20 images were reserved for the testing stage.
In the training phase, the YOLOv8n version was initially used, with 50 epochs and images at a resolution of 400 × 400 pixels, aiming to optimize its efficiency [15]. The subsequent training employed the YOLOv8m version, maintaining 50 epochs but using images at a resolution of 320 × 320 pixels. During this phase, the data was continuously validated, enabling the evaluation of the model’s performance and the assessment of YOLO’s learning process—verifying whether the algorithm had truly learned the patterns of interest or merely memorized the information. The validation phase occurred cyclically, being conducted at the end of each epoch and before the beginning of the next.
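A sketch of this training step with the Ultralytics Python API (version 8.3.24, as cited above) is given below, mirroring the reported settings: YOLOv8n at 400 × 400 pixels and YOLOv8m at 320 × 320 pixels, both for 50 epochs, with validation run automatically at the end of each epoch. The data.yaml path is an assumption.

```python
# Sketch of the two training runs described above, using the Ultralytics API.
from ultralytics import YOLO

# Exploratory run with the nano model to gauge the hardware's processing capacity.
YOLO("yolov8n.pt").train(data="data.yaml", epochs=50, imgsz=400)

# Run with the medium model, used for the reported analyses; validation is
# performed automatically at the end of every epoch.
model = YOLO("yolov8m.pt")
results = model.train(data="data.yaml", epochs=50, imgsz=320)
```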
After the training and validation phases, the generated data was presented through graphs and a confusion matrix. Finally, the results were tested using an image from the test set to verify the validity of the obtained outcomes.
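The final check on a test image can be sketched as follows; the weight path and image name are assumptions (the weight path shown is simply the default Ultralytics output location), and the confidence threshold is illustrative.

```python
# Sketch of testing the trained model on a single image from the test set
# and counting the predicted classes.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")          # default Ultralytics output path
result = model.predict("dataset/images/test/sample_090.jpg", conf=0.25)[0]

names = result.names                                       # e.g., {0: 'SG', 1: 'SNG'}
classes = [int(c) for c in result.boxes.cls.tolist()]      # predicted class per detected seed
germinated = sum(1 for c in classes if names[c] == "SG")
print(f"Germinated seeds detected: {germinated} of {len(classes)}")
```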
At the end of the categorization process, the extracted data were submitted to different analysis metrics. The data obtained from the graphs generated during the training of the YOLO model were interpreted, considering aspects such as the identification and classification of the seeds, as well as the accuracy as a function of the training epoch. Next, the confusion matrix was analyzed with the objective of evaluating the performance of YOLO in classifying the samples, verifying the accuracy of the model in distinguishing between germinated and nongerminated seeds.

2.5. Mean Squared Error

In order to validate the efficiency of automatic classification in comparison to conventional classification, the Mean Squared Error (MSE) method—a precise comparison metric—was used, as represented by Equation (1), to compare the manually obtained results with those generated by YOLO.
MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2        (1)
where MSE is the mean squared error, n is the number of samples, y_i is the real value (conventional classification), and ŷ_i is the value predicted by YOLO.
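As a worked illustration of Equation (1), the sketch below computes the MSE between the germinated-seed counts from the conventional classification and the counts predicted by YOLO for the same samples; the counts shown are hypothetical, not the study data.

```python
# Minimal sketch of Equation (1): mean squared error between conventional and YOLO counts.
def mse(y, y_hat):
    n = len(y)
    return sum((yi - yhi) ** 2 for yi, yhi in zip(y, y_hat)) / n

y     = [23, 21, 25, 19]   # conventional (human) counts, hypothetical
y_hat = [23, 22, 25, 19]   # YOLO-predicted counts, hypothetical
print(mse(y, y_hat))       # -> 0.25
```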

3. Results

3.1. YOLO’s Performance in the Training and Testing Phases

The real error in the training stage decreased as the number of epochs increased, and this decrease tended to be linear (Figure 3). The highest real error, 3.168, occurred in the first epoch, and the lowest, 1.526, in the fiftieth epoch; thus, the average real error was 1.789.
Figure 4 illustrates the need to balance both classes, due to the predominance of germinated seed samples compared to nongerminated seeds. The values represented on the y-axis highlight a decrease in the need to equalize both classes, with the highest recorded value being 1800 in the first epoch and the lowest being 1174. In other words, as the model was trained over more epochs, YOLO reduced generalizations in its classifications of nongerminated seeds, thus becoming more efficient.
With respect to the real error that YOLO presented in classifying the seeds as germinated (SGs) or nongerminated (SNGs) as a function of the number of epochs, the maximum error, 4.351, occurred in the first epoch (Figure 5). The number of errors tended to stabilize from the fifth epoch onward, remaining below 1 and indicating constant behavior. The average real error in the categorization of the seeds was 0.879, with the lowest value, 0.667, recorded in the fiftieth epoch.
With respect to the evaluation of its own learning, YOLO delimited all germinated seeds and presented a high rate of reliability in its choices, with confidence in its classifications ranging from 74% to 86% (Figure 6).

3.2. Effectiveness in Seed Classification

YOLO showed high efficiency in seed classification, maintaining regularity after the twentieth epoch, with an average accuracy of 86.54% between the twentieth and fiftieth epochs (Figure 7). In addition, the peak accuracy, 94.58%, occurred in the thirty-second epoch (Figure 7).
The confusion matrix revealed 95% efficacy for the classification of germinated seeds and 93% efficacy for the classification of nongerminated seeds (Figure 8). In addition, 1% error in the classification of germinated seeds as nongerminated and 4% error in the classification of nongerminated seeds as germinated were observed.
When the YOLO classification results were compared with those of the conventional method of seed germination analysis, the mean squared error (MSE) was 0.015, and 35.5% of the samples had an MSE value equal to zero (Figure 9).

4. Discussion

During the YOLO training stage for identifying the seeds as objects present in the image and classifying them as germinated or nongerminated seeds, the backpropagation methodology was used. Backpropagation is a training algorithm for neural networks that defines the real error as the difference between the output expected for a given input and the output the network actually produces [16]; therefore, a real error close to zero indicates that YOLO's predictions approached the expected values during the image input phase.
The results obtained in the stage of identifying the seeds as the objects under study in the images (Figure 3) were precise but not highly accurate: although YOLO consistently identified the seeds, the real error kept alternating between 1.5 and 2. These results corroborate those of [17], who applied YOLO to detect objects on railway tracks and reported a decrease in errors over time, verifying YOLO's ability to learn and estimate the real locations of objects in images.
The values presented in Figure 4, particularly their decrease, highlight the relationship between the class balancing performed by YOLO and the number of training epochs. These results are consistent with the findings of [18], who, in their study using an enhanced version of YOLOv8n to accurately determine ripeness in pepper crops, observed a linear decrease over the epochs, ultimately recording a value of 0.2 at the end of the training phase.
With respect to the seed classification stage, YOLO was not only precise, as already demonstrated in the seed identification stage, but also accurate. The average real error in the classification of seeds was 0.879, and the lowest value recorded was 0.667, in the fiftieth epoch. In view of the linear trend (Figure 5), the feasibility of the YOLO training process for classifying the seeds in this study is evident. These values are consistent with the study by [19], who determined the accuracy of YOLO classification for different fruit species, with errors ranging between 0.001 and 0.005 over one hundred analyses.
YOLO delimited all the germinated seeds and presented a high rate of reliability in its choices, with confidence between 74% and 86% in its classifications (Figure 6), thus validating the previous training stages. These results are similar to those of [20], who demonstrated the efficiency and reliability of YOLO, with confidence values between 49% and 88% in the identification and classification of peppers in images.
The efficient performance of YOLO resulted from its training phases, whose effectiveness is evident in Figure 7, where the accuracy of YOLO reached 94.58% in the classification of seeds as germinated or nongerminated. These values corroborate the studies by [21], which reported more than 90% accuracy in identifying signs of ripeness in fruit species. This result also supports the findings of [7] in their studies aimed at optimizing the harvest of fruit species—specifically apple trees—by providing visual guidance to pickers and assisting in distinguishing between fruits, branches, and trunks. Their approach integrated YOLOv8s with an additional algorithm and compared the outcomes with other YOLO versions, including YOLOv8n, which achieved an accuracy of 97.3%.
However, the results also attest to the need for a minimum number of training epochs, given the regularity observed after the twentieth epoch (Figure 7). This minimum number of epochs is required for greater precision and efficiency in seed classification, in line with [22], who identified the minimum number of training epochs as a critical factor for YOLO.
The results represented in the confusion matrix (Figure 8) demonstrate the accuracy of YOLO in categorizing soybean seeds with respect to the predicted and actual output values. The efficacy was higher for classifying germinated seeds (95%); for nongerminated seeds, the value remained very close (93%), which indicates the efficiency of YOLO in classifying seeds. In addition, 1% of germinated seeds were misclassified as nongerminated, and 4% of nongerminated seeds were misclassified as germinated. These results are consistent with those of [23], who used YOLO as a tool for classifying rice, pea, soybean, and wheat seeds.
These results are also similar to the values obtained in a recent study by [24], who applied YOLOv8n to identify corn seeds and categorize them as germinated, abnormal, or nongerminated, achieving 95% accuracy for germinated seeds, 95% for nongerminated seeds, and 73% for abnormal seeds. These data corroborate the results of the present study, validating them against recent work.
The main source of error presented by YOLO was the assignment of 5% of germinated seeds and 3% of nongerminated seeds to the background of the image, similar to the study by [25], in which YOLO, implemented for the detection of citrus fruits in orchards, classified 24% of the fruits as image background. This assignment of seeds to the background may have occurred in another phase of YOLO's processing, considering that the predetermined labels also include the image background, causing YOLO to count pixels belonging to the background (germination paper) within the labels of germinated and nongerminated seeds. This behavior is also noted by [26], who, when applying YOLO for the detection of license plates, pointed out the possibility of the software confusing the plates with other objects present in the image because the labels are small.
The proximity of the mean squared error values to zero (Figure 9) supports the assertion that the YOLO algorithm is precise, considering that it presented a mean MSE of 0.015, with 35.5% of the samples presenting a value equal to zero. According to [27], values close to zero imply that YOLO works with precision, whereas values far from zero imply the opposite. Similarly, [28], who applied the root mean squared error to compare the performance of different versions of YOLO in detecting impurities in cotton samples, confirmed the tool's accuracy by finding values close to zero.
Other studies have integrated the mean squared error metric to make it possible to compare a custom YOLO model with the basic version in terms of flower and flower bud counts [29]. Thus, the accuracy and precision of YOLO make it an auxiliary tool in the conventional classification process, which in turn can be subjective and time-consuming.
Thus, the practical application of YOLO in laboratory environments, genetic improvement programs, and seed processing units shows promise, especially for its automation capacity, speed in analysis, and reduction in human subjectivity. The implementation of this technology can represent a significant advance in the standardization and efficiency of germination tests, contributing to more accurate decision making in the agricultural sector.
For future studies, expanding the database to include different soybean cultivars and germination conditions and evaluating the robustness of the model in uncontrolled environments and with visual noise are recommended. In addition, investigations that integrate YOLO with other computer vision and deep-learning approaches can further increase the accuracy and adaptability of the system in diverse operational contexts.

5. Conclusions

The application of YOLO as a method for identifying germinated and nongerminated seeds was effective, with a peak of 94.58% accuracy in seed categorization. In addition, the learning curve stabilized after 20 training epochs.
The comparison, via the mean squared error methodology, of the YOLO results with those obtained by the conventional, human-performed method of seed classification showed a low margin of prediction error. Thus, the validity of YOLO as a tool for seed classification is evident. However, there is room for improvement, especially in image capture and in the process of identifying the object of study.
Overall, the use of YOLO for classifying soybean seeds as germinated or nongerminated is highly effective, with performance that is consistent and comparable to, or even superior to, that of traditional methods, making it a promising tool for automating analyses in the agricultural sector.

Author Contributions

Conceptualization, H.H.R.Z. and L.G.A.; methodology, H.H.R.Z.; software, H.H.R.Z., L.G.A. and R.P.A.; validation, L.G.A. and R.P.A.; formal analysis, H.H.R.Z., L.G.A. and R.P.A.; investigation, L.G.A. and R.P.A.; data curation, H.H.R.Z.; writing—original draft preparation, H.H.R.Z. and L.G.A.; writing—review and editing, C.E.A.C.; visualization, C.E.A.C.; supervision, H.H.R.Z.; project administration, H.H.R.Z.; funding acquisition, H.H.R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chagas, J.T.; Farias, J.; Souza, R.; Júnior, S.F.; Costa, M.G. Germinação e vigor de sementes crioulas de feijão-caupi. Agrar. Acad. 2018, 5, 488. [Google Scholar] [CrossRef]
  2. Noronha, B.G.; Medeiros, A.D.; Pereira, M.D. Avaliação da qualidade fisiológica de sementes de Moringa oleífera Lam. Ciência Florest. 2018, 28, 393–402. [Google Scholar] [CrossRef]
  3. Sharma, S.; Verma, K.; Hardaha, P. Implementation of artificial intelligence in agriculture. J. Comput. Cogn. Eng. 2023, 2, 155–162. [Google Scholar] [CrossRef]
  4. Silva, A.; Santos, F.; Machado, P.; Berghahn, L.; Campos, G.; Araújo, C.; Araújo, S.; Menezes, F. Uso de Inteligência Artificial na Pecuária: Revisão de literatura. Res. Soc. Dev. 2023, 12, 2–3. [Google Scholar] [CrossRef]
  5. Suárez, P.; Velesaca, H.; Carpio, D.; Sappa, A. Corn kernel classification from few training samples. Artif. Intell. Agric. 2023, 9, 89–99. [Google Scholar] [CrossRef]
  6. Gomez, D.; Selvaraj, M.; Casas, J.; Mathiyazhagan, K.; Rodriguez, M.; Assefa, T.; Mlaki, A.; Nyakunga, G.; Kato, F.; Mukankusi, C.; et al. Advancing common bean (Phaseolus vulgaris L.) disease detection with YOLO driven deep learning to enhance agricultural AI. Sci. Rep. 2024, 14, 15596. [Google Scholar] [CrossRef]
  7. Yan, B.; Liu, Y.; Yan, W. A novel fusion perception algorithm of tree branch/trunk and apple for harvesting robot based on improved yolov8s. Agronomy 2024, 14, 1895. [Google Scholar] [CrossRef]
  8. Tian, L.; Fang, Z.; Jiang, H.; Liu, S.; Zhang, H.; Fu, X. Evaluation of tomato seed full-time sequence germination vigor based on improved YOLOv8s. Comput. Electron. Agric. 2025, 230, 109871. [Google Scholar] [CrossRef]
  9. Franco, J.R.; Calça, M.V.C.; Junior, G.N.; Gamabarato, R.T.; Padovani, C.R.P. Sistema computacional de análise e processamento digital de imagem do exame de brucelose. Tekhne Logos 2019, 10, 49. [Google Scholar]
  10. Brunes, A.P.; Araujo, Á.S.; Dias, L.W.; Villela, F.A.; Aumonde, T.Z. Seedling length in wheat determined by image processing using mathematical tools. Rev. Ciência Agronômica 2016, 47, 374–379. [Google Scholar]
  11. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  12. Wang, N.; Fu, S.; Rao, Q.; Zhang, G.; Ding, M. Insect-YOLO: A new method of crop insect detection. Comput. Electron. Agric. 2025, 232, 110085. [Google Scholar] [CrossRef]
  13. Zhang, W.; Huang, H.; Sun, Y.; Wu, X. AgriPest-YOLO: A rapid light-trap agricultural pest detection method based on deep learning. Front. Plant Sci. 2022, 13, 1079384. [Google Scholar] [CrossRef]
  14. Brasil Ministério da Agricultura, Pecuária e Abastecimento. Regra para Análise de Sementes, 1st ed.; Assessoria de Comunicação Social: Brasília, Brazil, 2009; pp. 147–224. [Google Scholar]
  15. Ajayi, O.G.; Ashi, J.; Guda, B. Performance evaluation of YOLO v5 model for automatic crop and weed classification on UAV images. Smart Agric. Technol. 2023, 5, 4. [Google Scholar] [CrossRef]
  16. Lillicrap, T.P.; Santoro, A. Backpropagation through time and the brain. Curr. Opin. Neurobiol. 2019, 55, 82–89. [Google Scholar] [CrossRef]
  17. Liu, C.; Yang, Y. YOLO-Based Obstacle Detection Under Few-Shot Learning. In Proceedings of the 2024 IEEE 4th International Conference on Data Science and Computer Application (ICDSCA), Dalian, China, 22–24 November 2024; pp. 867–872. [Google Scholar]
  18. Wang, Y.; Ouyang, C.; Peng, H.; Deng, J.; Yang, L.; Chen, H.; Luo, Y.; Jiang, P. YOLO-ALW: An enhanced High-Precision Model for Chili Maturity Detection. Sensors 2025, 25, 1405. [Google Scholar] [CrossRef]
  19. Zhang, W. Automated Fruit Grading in Precise Agriculture using You Only Look Once Algorithm. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1136–1144. [Google Scholar]
  20. Turnip, A.; Marselina, L.; Joelianto, E.; Sihombing, P.; Sumari, P. Real-Time Chili Harvest Estimation with Object Detection Technology in Enhancing Agricultural Efficiency. Internetwork. Indones. J. 2024, 16, 3–9. [Google Scholar]
  21. Xiao, B.; Nguyen, M.; Yan, W.Q. Fruit ripeness identification using YOLOv8 model. Multimed. Tools Appl. 2024, 83, 28039–28056. [Google Scholar] [CrossRef]
  22. Paul, A.; Machavaram, R.; Kumar, D.; Nagar, H. Smart solutions for capsicum Harvesting: Unleashing the power of YOLO for Detection, Segmentation, growth stage Classification, Counting, and real-time mobile identification. Comput. Electron. Agric. 2024, 219, 108832. [Google Scholar] [CrossRef]
  23. Pativada, P.K. Real-Time Detection and Classification of Plant Seeds Using YOLOv8 Object Detection Model. Ph.D. Thesis, Kansas State University, Manhattan, KS, USA, 2024. [Google Scholar]
  24. Sun, W.; Xu, M.; Xu, K.; Chen, D.; Wang, J.; Yang, R.; Chen, Q.; Yang, S. CSGD-YOLO: A Corn Seed Germination Status Detection Model Based on YOLOv8n. Agronomy 2025, 15, 128. [Google Scholar] [CrossRef]
  25. Lin, Y.; Huang, Z.; Liang, Y.; Liu, Y.; Jiang, W. G-YOLO: A Rapid Citrus Fruit Detection Algorithm with Global Context Fusion. Agriculture 2024, 14, 114. [Google Scholar] [CrossRef]
  26. Laroca, R.; Zanlorensi, L.A.; Gonçalves, G.R.; Todt, E.; Schwartz, W.R.; Menotti, D. An efficient and layout-independent automatic license plate recognition system based on the YOLO detector. IET Intell. Transp. Syst. 2021, 15, 483–503. [Google Scholar] [CrossRef]
  27. Dewi, C.; Chen, R.C.; Liu, Y.T.; Jiang, X.; Hartomo, K.D. Yolo V4 for advanced traffic sign recognition with synthetic training data generated by various GAN. IEEE Access 2021, 9, 97230. [Google Scholar] [CrossRef]
  28. Jiang, L.; Chen, W.; Shi, H.; Zhang, H.; Wang, L. Cotton-YOLO-Seg: An enhanced YOLOV8 model for impurity rate detection in machine-picked seed cotton. Agriculture 2024, 14, 1499. [Google Scholar] [CrossRef]
  29. Wang, N.; Cao, H.; Huang, X.; Ding, M. Rapeseed flower counting method based on GhP2-YOLO and StrongSORT algorithm. Plants 2024, 13, 2388. [Google Scholar] [CrossRef]
Figure 1. Periodic captures of germination tests for soybean crops according to their phenological stages: (A) first day of assembly of the germination test; (B) five days after the assembly of the tests; (C) eight days after the assembly of the tests. Source: Authors (2025).
Figure 2. Logical structure for detecting the object of interest. Source: Authors (2025).
Figure 3. Errors in the identification of seeds as objects under study. Source: Authors (2025).
Figure 4. Distribution focal loss in the training stage. Source: Authors (2025).
Figure 5. Errors in the classification of seeds as germinated or not germinated. Source: Authors (2025).
Figure 6. YOLO learning test. Source: Authors (2025).
Figure 7. Accuracy in the classification of germinated seeds. Source: Authors (2025).
Figure 8. Confusion matrix for identification of germinated soybean seeds. SG: germinated seeds; SNG: nongerminated seeds. Source: Authors (2025).
Figure 9. Mean square error applied to compare the conventional classification to the results obtained via YOLO. Source: Authors (2025).