Article

Three-Dimensional Convolutional Neural Networks (3D-CNN) in the Classification of Varieties and Quality Assessment of Soybean Seeds (Glycine max L. Merrill)

1 Department of Agronomy, Poznań University of Life Sciences, Dojazd 11, 60-632 Poznań, Poland
2 Agricultural College of Coimbra (ESAC/IPC), Research Centre for Natural Resources, Environment and Society (CERNAS), Bencanta, 3045-601 Coimbra, Portugal
3 Department of Agronomy, University of Florida, 1676 McCarty Drive, 3105 McCarty Hall B, Gainesville, FL 32611-0500, USA
4 Department of Biosystems Engineering, Poznań University of Life Sciences, Wojska Polskiego 50, 60-637 Poznań, Poland
5 Department of Genetics and Plant Breeding, Poznań University of Life Sciences, Dojazd 11, 60-632 Poznań, Poland
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(9), 2074; https://doi.org/10.3390/agronomy15092074
Submission received: 29 July 2025 / Revised: 19 August 2025 / Accepted: 25 August 2025 / Published: 28 August 2025

Abstract

The precise identification, classification, sorting, and rapid, accurate quality assessment of soybean seeds are extremely important for the continuity of agricultural production, varietal purity, seed processing, protein extraction, and food safety. Currently, commonly used methods for the identification and quality assessment of soybean seeds include morphological analysis, chemical analysis, protein electrophoresis, liquid chromatography, spectral analysis, and image analysis. The presented research combines image analysis and artificial intelligence, proposing a method for the automatic classification of soybean varieties, the assessment of the degree of damage, and the identification of geometric features of soybean seeds based on numerical models obtained using a 3D scanner. Unlike traditional two-dimensional images, which represent only height and width, 3D imaging adds a third dimension, allowing for a more realistic representation of the shape of the seeds. The research was conducted on soybean seeds with a moisture content of 13%, stored in a room at a temperature of 20–23 °C and an air humidity of 60%. Individual soybean seeds were scanned to create 3D models, allowing for the measurement of their geometric parameters, assessment of texture, evaluation of damage, and identification of characteristic varietal features. The aim of the research was to design a new three-dimensional 3D-CNN architecture, designated SB3D-NET, whose main task is the classification of soybean seeds; the developed model comprises an input layer, three hidden layers, and one output layer with a single neuron. For network analysis and testing, 22 input criteria were defined, together with a hierarchy of their importance. The training, testing, and validation database of the SB3D-NET network consisted of 3D models obtained by scanning individual soybean seeds, 100 for each variety. The accuracy of the training process of the proposed SB3D-NET model for the qualitative classification of 3D models of soybean seeds, based on the adopted criteria, was 95.54%, and the accuracy of its validation was 90.74%. The relative loss value was 18.53% during training and 37.76% during validation. Across all twenty-two criteria, the proposed SB3D-NET neural network model achieves a global error (GE) of seed prediction and classification of 0.0992.

1. Introduction

Soybean (Glycine max (Linn.) Merr.) is native to Southeast Asia and is one of the principal leguminous crops cultivated worldwide. The countries with the largest soybean cultivation areas include Brazil (ca. 39 million ha), the United States (ca. 35 million ha), Argentina (ca. 16 million ha), India (ca. 12 million ha), and China (ca. 8 million ha) [1]. Soybean is a short-day plant that requires high air temperatures during the growing season and soils of quality class IV or better; it is considered part of the whole-grain complex and is also rich in calcium. Soybean seeds contain from 33% to as much as 45% protein [2] and are rich in compounds such as sucrose, stachyose, raffinose, phospholipids, and isoflavones [3]. The fat content in soybean seeds depends primarily on the genetic characteristics of the variety and the weather conditions during its growth period [4]. Soybean can be consumed by humans in various forms, i.e., as whole seeds or in soybean products such as milk, sauce, or soybean oil. It is also a valuable source of animal feed. As a cultivated plant, soybean also exhibits significant potential for enriching the soil through symbiotic N2 fixation [5].
Precise identification, classification, sorting, and rapid and accurate quality assessment of soybean seeds are of great importance for the continuity of agricultural production, varietal purity, seed processing, protein extraction, and food safety. A reduction in nutritional value poses a threat to both humans and animals. Therefore, there is a need to remove low-quality soybean seeds, with the main criteria for their classification being size, shape, texture, color, surface quality, any mechanical or insect damage, fungal infections, and mold.
Currently, traditional organoleptic methods of quality assessment prevail, which mainly involve visual inspection of soybean seeds to identify visible signs of damage or discoloration. The sorting of soybean seeds, on the other hand, is carried out by sieving, primarily based on their size and shape, since damaged seeds often exhibit reduced length or diameter. The disadvantages of these methods are their labor-intensiveness, high cost, and the significant subjectivity of assessment, resulting in inconsistency and the incorrect identification of damaged soybean seeds. An important aspect of soybean seed classification is also ensuring varietal purity, particularly in breeding work. Commonly used methods for the identification of different soybean seed varieties to date include morphological analysis [6,7], chemical analysis and molecular markers (random amplified polymorphic DNA, RAPD, and simple sequence repeats, SSR) [8,9], protein electrophoresis [10], liquid chromatography [11], and spectral analysis [12,13].
Modern and rapidly developing methods include deep neural networks (DNNs) and convolutional neural networks (CNNs). A CNN is a specific type of deep learning model that enables the recognition of spatial and textural patterns within a given sample image. CNNs use convolution and pooling layers to extract significant information from complex images and learn to predict a specific target variable. These capabilities can be utilized for automatic feature extraction, so that manual feature engineering can be avoided during model training. CNNs are excellent tools for solving multi-level complex tasks, e.g., image analysis and object recognition [14,15,16], face and person recognition [17,18], human speech recognition [19], text translation [20], sign language conversion [21], and the generation and detection of sound waves [22]. DNN models play an important role in understanding the genetic backgrounds of diseases such as autism [23] and muscular dystrophy [24]. They are also used for the detection of skin cancer [25], breast cancer [26], and brain cancer [27]. Deep neural networks are also used for monitoring road traffic and drivers [28], robot motion control [29], visual navigation [30], and the supervision and control of aircraft [31].
Many researchers are investigating the potential use of CNNs and machine vision for monitoring field crops [32], evaluating climate change in the context of agricultural production [33], assessing fruit maturity [34], the geometric classification of carrots [35], and the identification and classification of weeds [36,37,38], diseases [39,40,41], or pests of cultivated plants [42,43,44]. This technology has now become one of the main methods used for assessing seeds and grain in terms of quality loss, quantifying the degree of mechanical damage, maturity stage, infection by diseases, or contamination with other plant species. Tu et al. [45] proposed a method for the selection and classification of pepper seeds and Kong et al. [46] for the automatic assessment of rice seed thickness, while Li et al. [47] proposed a method based on deep neural networks for estimating the number of seeds in a soybean pod. Rybacki et al. [48], on the other hand, applied machine learning algorithms, computer image analysis, and CNNs for the qualitative classification and assessment of maturity level and damage in rapeseed seeds.
Most CNN models developed so far are based on standard RGB images, presenting the analyzed objects on a two-dimensional (2D) plane. The two-dimensional nature of images results in certain limitations related to their low accuracy and the need to build a two-dimensional neural network architecture (2D-CNN) [49,50,51,52,53,54,55,56,57,58].
A much more accurate approach is the analysis of seeds using 3D scanning, which enables the construction of three-dimensional models. The 3D scanning technology itself has been known for years, but it has only recently gained significant importance due to the increased computing power of processors and the accuracy of point cloud generation by scanners, which has led to an improvement in the quality of developed models characterized by higher resolution and accuracy [59]. It is now used in various branches of the economy and in many disciplines and fields of science. Such 3D models are created for the needs of games and films, mechanical engineering, archaeology, the automotive industry, medicine, and of course, agriculture [60]. The 3D scanning method is the process of creating a digital model of a physical object, consisting of collecting data on its shape and size in three dimensions. The result of this process is the so-called point cloud, which is a set of points in space described by Cartesian coordinates (x, y, z).
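To make this data structure concrete, the short sketch below (Python/NumPy; the file name and its whitespace-separated "x y z" format are assumptions for illustration) loads such a point cloud and reads off its axis-aligned extents as a first approximation of seed length, width, and thickness; this is only a sketch, since a real seed would first have to be aligned with the coordinate axes.
```python
# Minimal sketch: a scanned point cloud as an (N, 3) array of Cartesian
# coordinates. The file name and format are hypothetical examples.
import numpy as np

points = np.loadtxt("seed_MAVKA_001.xyz")        # one "x y z" triple per line

mins, maxs = points.min(axis=0), points.max(axis=0)
extents = np.sort(maxs - mins)[::-1]             # extents sorted descending
length, width, thickness = extents               # valid only if seed is axis-aligned
print(f"L={length:.2f} mm, W={width:.2f} mm, T={thickness:.2f} mm")
```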
During the acquisition of 3D models, disturbances (noise) are generated, resulting from the scanner quality and environmental conditions, which negatively affect the merging and analysis of the point cloud. Such noise should be eliminated during analysis, and this can be achieved using various methods. Liu et al. [61] used statistical filtering to remove noise and reconstruct a 3D model of peanut plants. Wang et al. [62] obtained a point cloud and 3D model of a potato by applying filtering and k-means clustering using a stereo scanner. Bao et al. [63] used, among other methods, the RANSAC method and statistical filtering techniques to eliminate disturbances and obtain models of sorghum plants. Similar methods were used by references [64,65,66] in their research on creating 3D models.
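A minimal sketch of the statistical-filtering idea referenced above is given below, assuming SciPy's k-d tree; the neighborhood size k and the threshold multiplier alpha are illustrative defaults, not values taken from the cited studies.
```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points, k=20, alpha=2.0):
    """Drop points whose mean distance to their k nearest neighbors exceeds
    the global mean of that statistic by more than alpha standard deviations."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # k+1: each point's nearest neighbor is itself
    mean_d = dists[:, 1:].mean(axis=1)       # mean neighbor distance per point
    keep = mean_d < mean_d.mean() + alpha * mean_d.std()
    return points[keep]
```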
An essential step in processing and extracting features from point clouds after scanning objects is the precise segmentation of the data (coordinates), especially when dealing with large volumes. Segmentation is the process of classifying the point cloud based on local features, grouping them into regions according to similar attributes, and dividing the point cloud into blocks for further processing [67]. The quality and effectiveness of the segmentation process is largely determined by the number of points and their structure. Several methods of point cloud segmentation are used, e.g., region growing [68,69], edge extraction [70,71], model building [72,73], clustering [74,75], and deep learning algorithms [76,77,78].
The hypothesis posed in the presented study assumes that the application of three-dimensional scanning, the construction of 3D models, and the proposed three-dimensional convolutional neural network (3D-CNN) will enable precise qualitative assessment of soybean seeds and their classification in terms of damage.
The main research objective presented in this paper is to develop algorithms and the 3D-CNN architecture for recognizing defects of varying scales in soybean seeds using digital 3D models. In addition, the developed model will enable the automatic classification of the tested soybean seed varieties, as well as the assessment of seed maturity and damage based on their color or geometric shape.

2. Materials and Methods

2.1. Data Set Preparation

The study utilized seeds from five soybean varieties, namely, Aligator, Fiskeby, Mavka, Merlin, and Petrina, obtained from the collection of the Department of Genetics and Plant Breeding, Poznań University of Life Sciences (Figure 1). The varieties were selected based on the most important traits for soybean cultivation under Polish conditions. The selected varieties differed in morphological characteristics, duration of the vegetation period, and protein content in the seeds. The seed samples were cleaned using sieves to remove all foreign matter, such as pods, parts of stems and leaves, weed seeds, soil residues, dust, and stones, from the samples. The soybean seeds displayed a moisture content of 13%, and the scanned samples were stored in paper bags at a temperature of 20.0–23.0 °C.
The seeds of the scanned varieties were characterized by a spherical shape (Mavka, Petrina), ovoid shape (Fiskeby, Merlin), or laterally flattened ovoid shape (Aligator). The hilum, depending on the variety, was either of uniform width (Aligator, Mavka, Petrina) or wedge-shaped (Fiskeby, Merlin), approximately 1/6 of the seed circumference in length, dark or light, uniform or with a differently colored stripe along the center. The color of the scanned seeds is a varietal characteristic and ranged from light cream through cream to dark cream, with a uniform or mottled coloration. The dimensions of the seeds are determined by the variety and weather conditions during growth. The seeds in the analyzed samples were characterized by varying lengths of 6.2–13.8 mm, widths of 5.0–10.0 mm, and thicknesses of 3.0–8.1 mm. The average thousand seed weight (TSW) ranged from 155.4 to 182.3 g. The protein content in the analyzed varieties ranged from 25.9% to 41.0% (Table 1).
Each scanned soybean seed was assigned a code (Figure 2), which included the variety designation and a serial number, after which the seeds were measured and visually assessed (Table 2). The scanned seeds were also weighed using an electronic balance with an accuracy of 0.0001 g. Measurements of the geometric parameters were obtained using an electronic caliper with an accuracy of 0.01 mm.

2.2. 3D Model Preprocessing

The spatial imaging method applied in the study enabled the creation of three-dimensional models of soybean seeds. Unlike traditional two-dimensional images, which only represent height and width, 3D imaging adds a third dimension, allowing for a more realistic representation of the shape of the seeds. The study used the Revopoint Range 3D scanner (Figure 3), equipped with projectors and dual infrared cameras with aspheric lenses, enabling a capture range of 360 × 650 mm, a working distance of 100–800 mm, a scanning speed of 18 frames per second, and a single frame repeatability accuracy of 0.1 mm. The scanning density was 278 points per mm2.
The 3D models obtained in the study, 100 for each variety, were analyzed using algorithms developed to determine the geometric parameters of the seeds (length, width, thickness), the degree of damage to the seed coat, as well as its texture, color, and discoloration. The 3D scanning of the seeds also enabled the identification and assessment of the shape and color of the hilum of soybean seeds, which is one of the basic criteria for differentiating varieties. Figure 4 presents 3D models of soybean seeds with defined point clouds and a finite element mesh (FEM), forming a database for further analysis, classification, and qualitative assessment.
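As a hedged illustration of this kind of mesh-level analysis (the trimesh library and the file name are assumptions, not the study's toolchain, whose code is in the linked repository), a watertight triangular mesh directly yields the surface area, volume, and extents used below as classification criteria.
```python
import trimesh

# Hypothetical file name; any triangulated seed model in STL/PLY/OBJ works.
mesh = trimesh.load("seed_ALIGATOR_001.stl")

print("surface area [mm^2]:", mesh.area)               # sum of triangle areas
print("volume [mm^3]:", mesh.volume)                   # valid for watertight meshes
print("extents L >= W >= T [mm]:", sorted(mesh.extents, reverse=True))
```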
The algorithms for varietal and qualitative classification of soybean seeds were developed using two programming tools, namely MATLAB and the high-level programming language Python 3.9, along with libraries (programming environments) for scientific computations: scikit-shape, NumPy, SciPy, Keras, and TensorFlow 2.0. The codes for these algorithms have been made available on an open-access platform at https://github.com/piotrrybacki/soybean-SB3D-NET.git (accessed on 29 July 2025) in the Supplementary Materials.

2.3. Defining Soybean Seed Classification Criteria

The definition of criteria for the quality assessment and classification of soybean seeds using 3D models and three-dimensional convolutional neural networks (3D-CNN) was based on assumptions derived from the quality parameters of seeds within the framework of the European Union requirements covered by the common market organization. Table 3 lists twenty-two fundamental criteria for the prediction of variety and qualitative assessment of soybean seeds. The developed soybean seed models are classified using eight of these criteria.
The developed algorithms, based on the three-dimensional models of soybean seeds, enabled the measurement of geometric parameters, which will serve as the basis for variety classification. Knowing the seed dimensions from the analysis of the 3D models, the physical quantities were calculated using the following equations:
$$A_g = \pi \cdot D_g^2 \ \ [\mathrm{mm^2}], \tag{1}$$
$$A = \frac{\pi}{2} \cdot L \cdot L_m \cdot \left( \frac{L_m}{L} + \frac{\arcsin U}{U} \right) \ \ [\mathrm{mm^2}], \tag{2}$$
$$L_m = \frac{W + T}{2} \ \ [\mathrm{mm}], \tag{3}$$
$$U = \frac{\left( L^2 - L_m^2 \right)^{\frac{1}{2}}}{L}, \tag{4}$$
$$V_g = \frac{\pi}{6} \cdot L \cdot W \cdot T \ \ [\mathrm{mm^3}]. \tag{5}$$
The equivalent diameter Dg, sphericity coefficient φ, and shape coefficient Ra were calculated using Equations (6)–(8):
$$D_g = \left( L \cdot W \cdot T \right)^{\frac{1}{3}} \ \ [\mathrm{mm}], \tag{6}$$
$$\varphi = \frac{\left( L \cdot W \cdot T \right)^{\frac{1}{3}}}{L}, \tag{7}$$
$$R_a = \frac{W}{L}, \tag{8}$$
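A minimal NumPy transcription of Equations (1)–(8) is shown below, assuming caliper-style inputs L, W, and T in millimetres with L greater than the mean cross dimension; it illustrates the formulas, not the study's own code.
```python
import numpy as np

def seed_geometry(L, W, T):
    """Geometric parameters of a seed from length L, width W, thickness T (mm),
    following Equations (1)-(8); requires L > (W + T) / 2."""
    Lm = (W + T) / 2                      # mean cross dimension, Eq. (3)
    U  = np.sqrt(L**2 - Lm**2) / L        # eccentricity, Eq. (4)
    Dg = (L * W * T) ** (1 / 3)           # equivalent diameter, Eq. (6)
    return {
        "Ag":  np.pi * Dg**2,                                       # Eq. (1)
        "A":   (np.pi / 2) * L * Lm * (Lm / L + np.arcsin(U) / U),  # Eq. (2)
        "Vg":  (np.pi / 6) * L * W * T,                             # Eq. (5)
        "Dg":  Dg,
        "phi": Dg / L,                                              # sphericity, Eq. (7)
        "Ra":  W / L,                                               # shape coefficient, Eq. (8)
    }

print(seed_geometry(L=9.1, W=7.2, T=6.0))  # dimensions within the ranges of Table 1
```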
The 3D models of soybean seeds are characterized by a large number of features and classification criteria, which in turn generate an enormous volume of data. These include the geometric parameters of the seeds, primarily resulting from their varietal characteristics, as well as criteria defining the method of classifying the damage or disease infection of the seeds. In order to filter the classification criteria and identify the most significant ones, a method based on the maximum relevance and minimum redundancy (MRMR) algorithm [79] was applied. This is a filtering method aimed at minimizing redundancy among the simultaneously selected features of soybean seed classification while selecting the most significant features. Each feature is treated by the algorithm as a discrete random variable. The idea of MRMR is to use the mutual information between two features, I(X,Y), as a measure of the level of similarity between X and Y, according to Equation (9):
$$I(X, Y) = \sum_{y \in Y} \sum_{x \in X} p(x, y) \log \frac{p(x, y)}{p_1(x)\, p_2(y)}, \tag{9}$$
In Equation (9), p(x,y) is the joint probability distribution function of X and Y, while p1(x) and p2(y) represent the marginal probability distribution functions of the random variables X and Y. If Fi is a discrete random variable, e.g., the mass of a soybean seed, then the mutual information between feature i and j, e.g., the diameter of a soybean seed, is expressed as I(Fi, Fj). The parameter d, in turn, reflects the number of features in the dataset, i.e., i, j = 1, 2, …, d.
If I(H, Fi) is the measure of similarity between any feature i and the class vector h ([h = h1, h2,…, hN]), and at the same time, S is the set of features to be selected, then |S| indicates the number of elements in this set, through the minimum redundancy determined according to Equation (10):
$$W = \frac{1}{|S|^2} \sum_{F_i, F_j \in S} I(F_i, F_j), \tag{10}$$
Meanwhile, the maximum relevance of feature selection is determined by Equation (11):
$$V = \frac{1}{|S|} \sum_{F_i \in S} I(F_i, H), \tag{11}$$
Combinations linking conditions (10) and (11) can be expressed as max(V − W) and max(V/W). Additionally, according to Equations (10) and (11), the best feature set can be determined as a result of a search of complexity O(N|S|), whereby the MRMR algorithm first selects the initial feature according to the above equations. At each subsequent stage, feature i is selected so as to satisfy conditions (12) and (13). The selected feature is added to set S. All features except the already selected ones form the set $\Omega_S = \Omega \setminus S$.
$$\max_{F_i \in \Omega_S} I(H, F_i), \tag{12}$$
$$\min_{F_i \in \Omega_S} \frac{1}{|S|} \sum_{F_j \in S} I(F_i, F_j), \tag{13}$$
Combining Equations (12) and (13) according to the mathematical relationships max(V − W) and max(V/W), two selection criteria, (14) and (15), are obtained for the MRMR algorithm.
$$MID = \max_{F_i \in \Omega_S} \left[ I(F_i, H) - \frac{1}{|S|} \sum_{F_j \in S} I(F_i, F_j) \right], \tag{14}$$
$$MIQ = \max_{F_i \in \Omega_S} \frac{I(F_i, H)}{\frac{1}{|S|} \sum_{F_j \in S} I(F_i, F_j)}, \tag{15}$$
The MID and MIQ indices allow for the determination of selection criteria for the appropriate geometric features of soybean seeds. The complexity of these indices is described as O(|S|⋅N) [79].
Figure 5 illustrates the operation of the MRMR algorithm, which minimizes redundancy among the simultaneously selected features while selecting those most strongly associated with class significance.
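A sketch of a greedy MRMR selection using the MID criterion of Equation (14) is given below; scikit-learn's mutual information estimate and the equal-width discretization are implementation assumptions used for illustration, not the configuration of the study.
```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_mid(X, y, n_select, bins=10):
    """Greedily select n_select columns of X (continuous criteria) against
    class labels y, using the MID criterion of Eq. (14)."""
    X = np.asarray(X, float)
    # discretize each criterion so mutual information is well defined
    Xd = np.column_stack(
        [np.digitize(c, np.linspace(c.min(), c.max(), bins)) for c in X.T])
    d = Xd.shape[1]
    relevance = np.array([mutual_info_score(y, Xd[:, i]) for i in range(d)])
    S = [int(np.argmax(relevance))]          # start with the most relevant feature
    candidates = set(range(d)) - set(S)
    while len(S) < n_select and candidates:
        def mid(i):                          # relevance minus mean redundancy with S
            red = np.mean([mutual_info_score(Xd[:, i], Xd[:, j]) for j in S])
            return relevance[i] - red
        best = max(candidates, key=mid)
        S.append(best)
        candidates.discard(best)
    return S
```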

2.4. Architecture of the Multilayer 3D-CNN Network

The widely available literature proposes ready-made 3D-CNN architectures. However, each of these is most effective under specific conditions, with a precisely defined dataset and strictly defined object classification criteria. The aim of the conducted research is to design a new three-dimensional 3D-CNN architecture whose main task is the classification of soybean seeds. The main advantage of the proposed 3D-CNN solution is that no pre-processing of the 3D model data is required to obtain the features used at the classification stage.
The 3D-CNN architecture proposed in this study for the automatic classification of soybean seeds, designated as SB3D-NET, has 3D convolution filters and pooling layers of size Tn × Hn × Wn, where Tn, Hn, and Wn are the thickness, height, and width, respectively, analogous to the case of 2D-CNN networks. The additional dimension d defines the depth of the network, indicating the number of frames or images. The result of a 3D convolution is a three-dimensional cuboid (Figure 6).
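For concreteness, a hedged Keras/TensorFlow sketch of a network of this general form is shown below; the voxel-grid input shape, the filter counts, and the kernel sizes are illustrative assumptions and do not reproduce the exact SB3D-NET configuration.
```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 64, 1)),             # assumed voxelized 3D seed model
    layers.Conv3D(12, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),                # each block shrinks the feature maps
    layers.Conv3D(12, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(12, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),           # single output neuron
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```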
The dataset, in the form of 3D models of soybean seeds, was randomly divided into a training set, a validation set (being a subset of the training set), and a test set. In the network training process, the 3D models from the validation set are used to control the course of the training process, the purpose of which is to monitor the SB3D-NET network in terms of the degree of training of its neurons. The training process itself consists of two stages, namely the selection of weights for the training set and their testing on models from the validation set. The weights determine the significance of the dataset and criteria. The selection of weights and their correct testing allows for the avoidance of the so-called generalization error.
The correction of weight values, both at the training and validation stages, continues until the approximation error in the training set is minimized, or until the error value in the validation set does not increase. The error is most often assumed to be the sum of squares (SS) of deviations between the assumed value and the output from the network. After the selection of the significance indicator weights has been completed, the SB3D-NET network model enters playback mode, and the input data are then 3D models of soybean seeds from the test set, which did not participate in the network training process.
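A minimal sketch of this stopping rule, using the standard Keras early-stopping callback on the model sketched above (the arrays x_train, y_train, x_val, and y_val are hypothetical placeholders for the training and validation subsets of 3D models):
```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",         # error measured on the validation subset
    patience=5,                 # tolerate a few non-improving epochs
    restore_best_weights=True,  # keep the weights from the best epoch
)
history = model.fit(
    x_train, y_train,                  # hypothetical training arrays
    validation_data=(x_val, y_val),    # hypothetical validation subset
    epochs=200,
    callbacks=[early_stop],
)
```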
The assessment of the SB3D-NET network model in terms of its ability to classify soybean varieties and the qualitative quantification of seeds was carried out based on the value of the global error (GE) of the model, determined for the test set from Equation (16):
$$GE = \sqrt{\frac{\sum_{i=1}^{n} \left( z_i - y_i \right)^2}{\sum_{i=1}^{n} z_i^2}}, \tag{16}$$
where n—number of cases, z—set value (benchmark), and y—network response.
In addition, error values were analyzed at each stage, i.e., training, validation, and testing, serving as the criterion for assessing the overall model accuracy. The most commonly used criteria are
- Standard deviation:
$$RMS = \sqrt{\frac{\sum_{i=1}^{n} \left( z_i - y_i \right)^2}{n - 1}}, \tag{17}$$
- Mean error:
$$ME = \frac{1}{n} \sum_{i=1}^{n} \left( z_i - y_i \right), \tag{18}$$
- Absolute mean error:
$$MAE = \frac{1}{n} \sum_{i=1}^{n} \left| z_i - y_i \right|, \tag{19}$$
- Normalized standard deviation:
$$nRMS = \frac{RMS}{y_{max} - y_{min}}, \tag{20}$$
- Error variance:
$$MSE = \frac{1}{n - 1} \sum_{i=1}^{n} \left( z_i - y_i \right)^2, \tag{21}$$
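The sketch below is a direct NumPy transcription of Equations (16)–(21) for reference values z and network responses y; it illustrates the definitions rather than the study's own code.
```python
import numpy as np

def error_measures(z, y):
    """Error statistics of Equations (16)-(21); z are set (benchmark) values,
    y the network responses, both 1-D arrays of equal length n."""
    z, y = np.asarray(z, float), np.asarray(y, float)
    n, e = len(z), z - y
    rms = np.sqrt(np.sum(e**2) / (n - 1))
    return {
        "GE":   np.sqrt(np.sum(e**2) / np.sum(z**2)),  # global error, Eq. (16)
        "RMS":  rms,                                   # Eq. (17)
        "ME":   e.mean(),                              # Eq. (18)
        "MAE":  np.abs(e).mean(),                      # Eq. (19)
        "nRMS": rms / (y.max() - y.min()),             # Eq. (20)
        "MSE":  np.sum(e**2) / (n - 1),                # Eq. (21)
    }
```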
For a more precise assessment of the SB3D-NET network model proposed in this study, an additional quality indicator for the quantification of soybean seeds was used in the form of the standard deviation ratio (SDR), determined as the ratio of the standard deviation of quantification errors to the standard deviation of the output variable, and the Pearson linear correlation coefficient (R). This indicator can be calculated in total or for specific types of 3D model sets of soybean seeds at the result stage, as well as for the set values. The soybean seed classification model was also assessed in terms of its performance. For this purpose, measures quantifying the operating speed and classification accuracy of the 3D models were applied. The operating speed indicator was the classification rate, expressing the number of assigned 3D models of soybean seeds per second, and the average classification time of a single soybean seed model. The accuracy of the proposed SB3D-NET model was estimated using the following indicators: classification accuracy (PPV) and true positive rate (TPR), as well as the result correction coefficient (f) and its accuracy (ACC). These indicators were determined using Equations (22)–(25).
$$PPV_X = \frac{TP_X}{TP_X + FP_X}, \tag{22}$$
$$TPR_X = \frac{TP_X}{TP_X + FN_X}, \tag{23}$$
$$f_{score}(X) = \frac{1}{\frac{\alpha}{PPV_X} + \frac{1 - \alpha}{TPR_X}}, \tag{24}$$
where α = 0.5 gives equal weight to TPR and PPV:
$$ACC = \frac{1}{n} \sum_{i=1}^{n} \frac{TP_i}{I_i}, \tag{25}$$
where n is the number of classes and I_i is the number of images in class i.
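A minimal NumPy sketch of Equations (22)–(25) is given below, assuming per-class counts of true positives, false positives, and false negatives extracted from a confusion matrix; it illustrates the indicator definitions under that assumption.
```python
import numpy as np

def classification_indicators(tp, fp, fn, images_per_class, alpha=0.5):
    """PPV, TPR, f-score and ACC of Equations (22)-(25) from per-class counts."""
    tp, fp, fn = (np.asarray(a, float) for a in (tp, fp, fn))
    ppv = tp / (tp + fp)                            # Eq. (22)
    tpr = tp / (tp + fn)                            # Eq. (23)
    f   = 1.0 / (alpha / ppv + (1 - alpha) / tpr)   # Eq. (24), alpha = 0.5
    acc = np.mean(tp / np.asarray(images_per_class, float))  # Eq. (25)
    return ppv, tpr, f, acc
```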

3. Results

The final outcome of the conducted research and analyses is the proposal of a three-dimensional convolutional neural network architecture, designated as SB3D-NET, along with a set of codes developed in MATLAB R2024a and Python 3.9 (https://github.com/piotrrybacki/soybean-SB3D-NET.git; accessed on 29 July 2025; see the Supplementary Materials). These codes enable the automatic classification and qualitative assessment of soybean seeds based on the adopted criteria derived from the geometric parameters of their 3D models. The basis of the 3D model analysis is their conversion from a solid model (Figure 7a), through the point cloud forming it (Figure 7b), to a model in the form of a triangular FEM mesh (Figure 7c).
The detection of the contours of the 3D soybean seed models, the surface areas of discolorations on the seed coat, and the size of the hilum enable their varietal and qualitative classification. The model contours constitute the basis for calculating the geometric parameters of the seeds, i.e., their size, thickness, and width. The areas of discoloration marked on the models can also be treated as varietal features, if such a criterion for a given variety is adopted, but they may also be indicative of damage to the seed coat or infection by disease.
As shown in Figure 8 for randomly selected 3D models of soybean seeds, using differing colors, the proposed algorithms and codes indicate the areas of the seed coat and the hilum of the seed, while simultaneously estimating its dimensions.
Table 4 presents a summary of the results of the statistical calculations of the geometric features of the analyzed 3D models of soybean seeds, which form the database for the proposed algorithm.
The dataset used to build the soybean seed classification model contained 500 3D models and 22 criteria. The models were divided into a training set containing 300 (60%) randomly selected models and test and validation sets, each containing 100 (20%) of the remaining models. The SB3D-NET neural network model with the structure of 12:12:3 × 12:12:1 achieved the approximation criterion in the 53rd cycle, which means that the weight adaptation was performed by the learning algorithm 53 times (Figure 9).
The constructed SB3D-NET network architecture and the codes developed for it enable the automatic and, importantly, random sorting of 3D models of soybean seeds, creating training, validation, and test databases. Table 5 presents a summary of the changes in map sizes depending on the layer number of the developed SB3D-NET model. As can be seen from the data, each hidden layer of the 3D-CNN network model causes a reduction in map sizes, resulting in 90,561,020 parameters at the output.
In the next stage, the algorithms performed the training, validation, and testing of the SB3D-NET model, the results of which are presented in Figure 10.
The accuracy of the training process of the proposed SB3D-NET model for the qualitative classification of 3D models of soybean seeds, based on the adopted criteria, was 95.54%, and the accuracy of its validation was 90.74% (Figure 10a). Figure 10b shows that the relative loss value during the training process of the SB3D-NET model was at the level of 18.53%, and during its validation process, it was 37.76%. The developed algorithm searched for 3D models of soybean seeds with the highest probability of belonging to and complying with the adopted classification criteria and assigned the appropriate informational label. The label is a set of information calculated based on the geometric shape of the soybean seed model and the number of pixels defining the size of its hilum and the surface area of the seed coat discoloration. Figure 11 presents eight sample analyzed 3D models of soybean seeds, along with the assigned informational label and varietal classification. The label “True” was added when the variety classification was correct, and “False” when the label was assigned incorrectly.
Table 6 presents a summary of the data generated by the proposed SB3D-NET model and the constructed 3D-CNN architecture. The numerical values were compared with empirical measurements of the geometric parameters of the soybean seeds, showing that the accuracies of the readings for the physical dimensions of the 3D models of the soybean seeds, based on the number of pixels and considering only their length, width, and thickness, were 85.37%, 88.20%, and 88.11%, respectively. The detection of the hilum of the soybean seed, which co-determines varietal classification, was performed with an accuracy of 82.47% for its length and 87.93% for its width.
The fitting of the SB3D-NET model was performed based on the determined value of the global error (GE). This indicator shows that the proposed network reflects an error on the order of 0.0992 in the classification of 3D models of soybean seeds (Table 7). The value of this error indicates the correct selection of the network architecture, mainly the number of hidden layers with 12 neurons each, which ensures the model’s generalization capabilities.
From the classification point of view, the processing time is of great importance. Three variants were analyzed using a GPU (Graphics Processing Unit): classification based on the geometric parameters of the soybean seed models, classification based on the color or discolorations of the seed coat, and a variant combining the two. As shown in Table 8, the fastest classification, at 5.54 ms/model, occurred when analyzing the seed coat color, while classification based on reading the geometric parameters took 7.32 ms/model. The longest, at 8.78 ms/model, was the classification of models when combining the two groups of criteria, i.e., geometric parameters and color. However, the combination of classification criteria significantly increased the process accuracy, which reached 95.54% for the proposed SB3D-NET model and architecture.

4. Discussion

Knowledge of the geometric and physical properties of soybean seeds allows for the design of devices that accelerate various technological processes regarding their handling, primarily cleaning, separation, drying, and processing. The rapid development of digitization and AI has led to proposals in the literature for models aimed at the qualitative assessment of agricultural produce, including seeds and grains (soybean, maize, rice, wheat) [39,40,41,42,43,44]. However, these are predominantly based on two-dimensional images, mainly acquired using RGB technology or spectral cameras. Currently, there are a few emerging proposals for the use of three-dimensional scanners to develop 3D models of whole plants or their parts, which form the basis for developing qualitative assessment and classification methods [76,77,78,80,81,82,83].
The method proposed in this paper, along with the developed SB3D-NET three-dimensional neural network model, enables the automation of this classification process. The accuracy of the training process for the proposed SB3D-NET model in the qualitative classification of 3D soybean seed models, based on the adopted criteria, was 95.54%, with a validation accuracy of 90.74% and a relative loss value of 18.53% during training and 37.76% during validation. As shown in Figure 9, the developed algorithm achieved its highest accuracy after 53 epochs. Further training of the network would certainly lead to overtraining and overfitting, as indicated by the training loss remaining markedly lower than the validation loss. The size of the validation sample may also contribute to the higher loss in the validation process. A similar level of accuracy, 90.67%, was achieved by Fan et al. [84], who used a large-scale 3D-CNN network to analyze individual maize seeds with respect to their germination strength. A model similar to the one presented in our study, referred to as SSDINet and designed for quantifying defects and classifying soybean seeds, was introduced by Sable et al. [85]. The authors reported that their experiments demonstrated that SSDINet achieved the highest accuracy of 98.64%, with 1.15 million parameters processed in 4.70 ms, outperforming existing state-of-the-art models. Saito et al. [86] applied CNN models for the classification of soybean seeds, which were developed using three pre-trained network architectures: AlexNet, ResNet-18, and EfficientNet. The highest classification accuracy reported by the authors was 93.90% for CNN models using ResNet-18. Convolutional neural networks for soybean seed classification were also applied by Lin et al. [87], achieving fscore values for normal, damaged, and out-of-class soybean seeds of 95.97%, 97.41%, and 96.14%, respectively. Ultimately, using the NVIDIA Jetson TX2 platform, they achieved an accuracy of 95.63%. In the SB3D-NET model proposed in this study, the fscore values were 91.87, 92.54, and 94.78, respectively, for analyses based on the geometric parameters of the seeds, on seed coat color and hilum identification, and on both criteria combined (Table 9).
Another important parameter for agricultural practice is the seed classification time, which mainly depends on the computing power. In this study, the shortest time, i.e., 5.54 ms/model, was obtained when classifying seeds based on seed coat color and 7.32 ms/model when classifying geometric parameters. The longest classification time, i.e., 8.78 ms/model, was achieved when all criteria were taken into account. However, these values may change when using graphics cards with higher computing power. All statistics used to characterize the neural network model (i.e., SS, MAE, MSE, RMS, R2, SDR, and GE) show its high effectiveness and accuracy, especially at the testing and validation stage.
Important parameters characterizing the proposed SB3D-NET model also include the average precision values compared to actual measurements. The average measurement precision for soybean seed length was 85.37%, for width, it was 88.20%, and for thickness, it was 88.11%. Identification of the soybean seed hilum was performed with an accuracy of 82.47% for length and 87.35% for width. A similar study was presented by Baek et al. [88], who also determined the geometric parameters of soybean seeds by measuring their length, width, and thickness, with an accuracy of approximately 90.00%.

5. Conclusions

This study proposed a three-dimensional artificial neural network model, designated SB3D-NET, aimed at multi-criteria varietal and qualitative classification of seeds from five soybean varieties. The database used for analysis comprised 3D models of soybean seeds, obtained through scanning. Seed model classification was performed based on 22 criteria, relying on geometric parameters of the seed models and seed coat color. The color of the soybean seed coat indicated the degree of their maturity, as well as the presence of mechanical or pathological damage.
The accuracy of the training process of the proposed SB3D-NET model for the qualitative classification of 3D models of soybean seeds, based on the adopted criteria, was 95.54%, and the accuracy of its validation was 90.74%. The relative loss value during the training process of the SB3D-NET model was 18.53%, while during its validation, it amounted to 37.76%. The classification speed ranged from 5.54 to 8.78 ms per model, largely depending on the computing power of the graphics card. Meanwhile, the fscore values achieved were 91.87, 92.54, and 94.78, respectively, for analyses of geometric seed parameters, seed coat color and hilum identification, and both criteria combined.
Based on the conducted analyses, it can be concluded that the research hypothesis—that the application of three-dimensional scanning, the construction of 3D models, and the proposed three-dimensional convolutional neural network (3D-CNN) would enable precise qualitative assessment and classification of soybean seeds with respect to damage—has been fully confirmed.
The most important advantage of 3D scanning is the acquisition of three-dimensional models of soybeans, which contain more information, as the model can be analyzed in a Cartesian coordinate system (x, y, z). The model can be viewed from any angle, over a full 360 degrees, which gives it an advantage over RGB images. However, the practical limitation of this technology may lie in the difficulty of scanning individual seeds, especially those that are small or have varied shapes.

Supplementary Materials

The following supporting information can be downloaded at: https://github.com/piotrrybacki/soybean-SB3D-NET (accessed on 29 July 2025).

Author Contributions

Conceptualization, P.R. and J.N.; methodology, P.R. and J.N.; software, P.R.; validation, P.R. and D.J.; formal analysis, P.R., J.N., K.B., and D.J.; resources, P.R., J.N., K.B., D.J., I.K., A.O., and E.O.; data curation, P.R. and A.O.; writing—original draft preparation, P.R., J.N., and I.K.; writing—review and editing, P.R.; J.N.; K.B., D.J., and E.O.; visualization, P.R., J.N., K.B., and D.J.; supervision, P.R. and J.N.; project administration, P.R. and J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are available from the Department of Agronomy and the Department of Genetics and Plant Breeding, Poznań University of Life Sciences, Dojazd 11, 60-632 Poznań, Poland.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Sun, H.; Hua, Z.; Yin, C.; Li, F.; Shi, Y. Geographical traceability of soybean: An electronic nose coupled with an effective deep learning method. Food Chem. 2024, 440, 138207. [Google Scholar] [CrossRef] [PubMed]
  2. Barboza Martignone, G.M.; Papadas, D.; Behrendt, K.; Ghosh, B. The rise of Soybean in international commodity markets: A quantile investigation. Heliyon 2024, 10, e34669. [Google Scholar] [CrossRef] [PubMed]
  3. Messina, M.; Duncan, A.; Messina, V.; Lynch, H.; Kilonia, J.; Erdman, J.W., Jr. The health effects of soy: A reference guide for health professionals. Front. Nutr. 2022, 9, 970364. [Google Scholar] [CrossRef] [PubMed]
  4. Abdelghany, A.M.; Zhang, S.; Azam, M.; Shaibu, A.S.; Feng, Y. Profiling of seed fatty acid composition in 1025 Chinese soybean accessions from diverse ecoregions. Crop J. 2020, 8, 635–644. [Google Scholar] [CrossRef]
  5. Fabrizzi, K.P.; Fernández, F.G.; Venterea, R.T.; Naeve, S.L. Nitrous oxide emissions from soybean in response to drained and undrained soils and previous corn nitrogen management. J. Environ. Qual. 2024, 53, 407–417. [Google Scholar] [CrossRef]
  6. Zhu, D.Z.; Li, Y.F.; Wang, D.C.; Wu, Q.; Zhang, D.Y.; Wang, C. The Identification of Single Soybean Seed Variety by Laser Light Backscattering Imaging. Sens. Lett. 2012, 10, 399–404. [Google Scholar] [CrossRef]
  7. Sharma, N.; Bajwa, J.S.; Gautam, N.; Attri, S. Preparation of health beneficial probiotic soya ice-cream and evaluation of quality attributes. J. Microbiol. Biotechnol. Food Sci. 2024, 13, e5309. [Google Scholar] [CrossRef]
  8. Serson, W.R.; Gishini, M.F.S.; Stupar, R.M.; Stec, A.O.; Armstrong, P.R.; Hildebrand, D. Identification and Candidate Gene Evaluation of a Large Fast Neutron-Induced Deletion Associated with a High-Oil Phenotype in Soybean Seeds. Genes 2024, 15, 892. [Google Scholar] [CrossRef]
  9. Carpentieri-Pipolo, V.; Barreto, T.P.; Silva, D.A.; da Abdelnoor, R.V.; Marin, S.R.; Degrassi, G. Soybean Improvement for Lipoxygenase-free by Simple Sequence Repeat (SSR) Markers Selection. J. Bot. Res. 2021, 3, 23–31. [Google Scholar] [CrossRef]
  10. Bayramli, O. Delineating genetic diversity in soybean (Glycine max L.) genotypes: Insights from a-page analysis of globulin reserve proteins. Adv. Biol. Earth Sci. 2024, 9, 259–266. [Google Scholar] [CrossRef]
  11. Megha, P.S.; Ramnath, V.; Karthiayini, K.; Beena, V.; Vishnudas, K.V.; Sapna, P.P. Free Isoflavone (Daidzein and Genistein) Content in Soybeans, Soybean Meal and Dried Soy Hypocotyl Sprout Using High Performance Liquid Chromatography (HPLC). J. Sci. Res. Rep. 2024, 30, 803–812. [Google Scholar] [CrossRef]
  12. Santana, D.C.; de Oliveira, I.C.; Cavalheiro, S.B.; das Chagas, P.H.M.; Teixeira Filho, M.C.M.; Della-Silva, J.L.; Teodoro, L.P.R.; Campos, C.N.S.; Baio, F.H.R.; da Silva Junior, C.A.; et al. Classification of Soybean Genotypes as to Calcium, Magnesium, and Sulfur Content Using Machine Learning Models and UAV–Multispectral Sensor. AgriEngineering 2024, 6, 1581–1593. [Google Scholar] [CrossRef]
  13. Sarkar, S.; Sagan, V.; Bhadra, S.; Fritschi, F.B. Spectral enhancement of PlanetScope using Sentinel-2 images to estimate soybean yield and seed composition. Sci. Rep. 2024, 14, 15063. [Google Scholar] [CrossRef] [PubMed]
  14. de Oliveira, S.S.C.; Ribeiro, L.K.M.; Cruz, S.J.S.; Zuchi, J.; Ponciano, V.d.F.G.; Maia, A.J.; Valichesky, R.R.; Grande, G.G. Analysis of seedling images to evaluate the physiological potential of soybean seeds. Obs. Econ. Latinoam. 2024, 22, e3633. [Google Scholar] [CrossRef]
  15. Duc, N.T.; Ramlal, A.; Rajendran, A.; Raju, D.; Lal, S.K.; Kumar, S.; Sahoo, R.N.; Chinnusamy, V. Image-based phenotyping of seed architectural traits and prediction of seed weight using machine learning models in soybean. Front. Plant Sci. 2023, 14, 1206357. [Google Scholar] [CrossRef]
  16. Kim, J.; Lee, C.; Park, J.; Kim, N.; Kim, S.-L.; Baek, J.; Chung, Y.-S.; Kim, K. Comparison of Various Drought Resistance Traits in Soybean (Glycine max L.) Based on Image Analysis for Precision Agriculture. Plants 2023, 12, 2331. [Google Scholar] [CrossRef]
  17. Singh, P.; Kansal, M.; Singh, R.; Kumar, S.; Sen, C. A Hybrid Approach based on Haar Cascade, Softmax, and CNN for Human Face Recognition. J. Sci. Ind. Res. 2024, 83, 414–423. [Google Scholar] [CrossRef]
  18. Yo, M.; Chong, S.; Chong, L. Sparse CNN: Leveraging deep learning and sparse representation for masked face recognition. Int. J. Inf. Technol. 2025, 22. [Google Scholar] [CrossRef]
  19. Bhanbhro, J.; Memon, A.A.; Lal, B.; Talpur, S.; Memon, M. Speech Emotion Recognition: Comparative Analysis of CNN-LSTM and Attention-Enhanced CNN-LSTM Models. Signals 2025, 6, 22. [Google Scholar] [CrossRef]
  20. Raja’a, M.M.; Suhad, M.K. Automatic Translation from Iraqi Sign Language to Arabic Text or Speech Using CNN. Iraqi J. Comput. Commun. Control Syst. Eng. 2023, 23, 112–124. [Google Scholar] [CrossRef]
  21. Al Ahmadi, S.; Muhammad, F.; Al Dawsari, H. Enhancing Arabic Sign Language Interpretation: Leveraging Convolutional Neural Networks and Transfer Learning. Mathematics 2024, 12, 823. [Google Scholar] [CrossRef]
  22. Mishra, J.; Sharma, R.K. Optimized FPGA Architecture for CNN-Driven Voice Disorder Detection. Circuits Syst. Signal Process. 2025, 44, 4455–4467. [Google Scholar] [CrossRef]
  23. Li, H.; Gu, Y.; Han, J.; Sun, Y.; Lei, H.; Li, C.; Xu, N. Faster R-CNN-MobileNetV3 Based Micro Expression Detection for Autism Spectrum Disorder. AI Med. 2025, 2, 2. [Google Scholar] [CrossRef]
  24. Billa, G.B.; Rao, V. An Automated Identification of Muscular Atrophy and Muscular Dystrophy Disease in Fetus using Deep Learning Approach. J. Inf. Syst. Eng. Manag. 2025, 10, 551–559. [Google Scholar] [CrossRef]
  25. Çetiner, İ. SkinCNN: Classification of Skin Cancer Lesions with A Novel CNN Model. Bitlis Eren Üniv. Fen Bilim. Derg. 2023, 12, 1105–1116. [Google Scholar] [CrossRef]
  26. Britto, J.G.M.; Mulugu, N.; Bharathi, K.S. A hybrid deep learning approach for breast cancer detection using cnn and rnn. Bioscan 2024, 19, 272–286. [Google Scholar] [CrossRef]
  27. Afify, H.; Mohammed, K.; Hassanien, A. Leveraging hybrid 1D-CNN and RNN approach for classification of brain cancer gene expression. Complex Intell. Syst. 2024, 10, 7605–7617. [Google Scholar] [CrossRef]
  28. Lei, J.; Ni, Z.; Peng, Z.; Hu, H.; Hong, J.; Fang, X.; Yi, C.; Ren, C.; Wasaye, M.A. An intelligent network framework for driver distraction monitoring based on RES-SE-CNN. Sci. Rep. 2025, 15, 6916. [Google Scholar] [CrossRef]
  29. Peng, Y.; Cai, Z.; Zhang, L.; Wang, X. BCAMP: A Behavior-Controllable Motion Control Method Based on Adversarial Motion Priors for Quadruped Robot. Appl. Sci. 2025, 15, 3356. [Google Scholar] [CrossRef]
  30. Yiping, Z.; Wilker, K. Visual-and-Language Multimodal Fusion for Sweeping Robot Navigation Based on CNN and GRU. J. Organ. End User Comput. 2024, 36, 21. [Google Scholar] [CrossRef]
  31. Alraba’nah, Y.; Hiari, M. Improved convolutional neural networks for aircraft type classification in remote sensing images. IAES Int. J. Artif. Intell. 2025, 14, 1540–1547. [Google Scholar] [CrossRef]
  32. Nawaz, S.M.; Maharajan, K.; Jose, N.N.; Praveen, R.V.S. GreenGuard CNN-Enhanced Paddy Leaf Detection for Crop Health Monitoring. Int. J. Comput. Exp. Sci. Eng. 2025, 11. [Google Scholar] [CrossRef]
  33. Chinchorkar, S. Utilizing satellite data and machine learning to monitor agricultural vulnerabilities to climate change. Int. J. Geogr. Geol. Environ. 2025, 7, 1–9. [Google Scholar] [CrossRef]
  34. Rendón-Vargas, A.; Luna-Álvarez, A.; Mújica-Vargas, D.; Castro-Bello, M.; Marianito-Cuahuitic, I. Application of Convolutional Neural Networks for the Classification and Evaluation of Fruit Ripeness. Commun. Comput. Inf. Sci. 2024, 2249, 150–163. [Google Scholar] [CrossRef]
  35. Rybacki, P.; Przygodziński, P.; Osuch, A.; Osuch, E.; Kowalik, I. Artificial Neural Network Model for Predicting Carrot Root Yield Loss in Relation to Mechanical Heading. Agriculture 2024, 14, 1755. [Google Scholar] [CrossRef]
  36. Garibaldi-Márquez, F.; Flores, G.; Valentín-Coronado, L.M. Leveraging deep semantic segmentation for assisted weed detection. J. Agric. Eng. 2025, 36, 1741. [Google Scholar] [CrossRef]
  37. Gómez, A.; Moreno, H.; Andújar, D. Intelligent Inter- and Intra-Row Early Weed Detection in Commercial Maize Crops. Plants 2025, 14, 881. [Google Scholar] [CrossRef]
  38. Mishra, A.M.; Singh, M.P.; Singh, P.; Djwakar, M.; Gupta, I.; Bijalwan, A. Hybrid deep learning model for density and growth rate estimation on weed image dataset. Sci. Rep. 2025, 15, 11330. [Google Scholar] [CrossRef] [PubMed]
  39. Anggraini, N.; Kusuma, B.; Subarkah, P.; Utomo, F.; Hermanto, N. Classification of Rice Plant Disease Image Using Convolutional Neural Network (CNN) Algorithm based on Amazon Web Service (AWS). Build. Inform. Technol. Sci. 2024, 6, 1293–1300. [Google Scholar] [CrossRef]
  40. Gangadevi, E.; Soufiane, B.O.; Balusamy, B.; Khan, F.; Getahun, M. A novel hybrid fruit fly and simulated annealing optimized faster R-CNN for detection and classification of tomato plant leaf diseases. Sci. Rep. 2025, 15, 16571. [Google Scholar] [CrossRef]
  41. Ray, S.K.; Hossain, A.; Islam, N.; Hasan, M.R. Enhanced plant health monitoring with dual head CNN for leaf classification and disease identification. J. Agric. Food Res. 2025, 21, 101930. [Google Scholar] [CrossRef]
  42. Hadianti, S.; Aziz, F.; Nur Sulistyowati, D.; Riana, D.; Saputra, R.; Kurniawantoro. Identification of Potato Plant Pests Using the Convolutional Neural Network VGG16 Method. J. Med Inform. Technol. 2024, 2, 39–44. [Google Scholar] [CrossRef]
  43. Meshram, A.; Meshram, K.; Vanalkar, A.; Badar, A.; Mehta, G.; Kaushik, V. Deep Learning for Cotton Pest Detection: Comparative Analysis of CNN Architectures. Indian J. Entomol. 2025, 1–4. [Google Scholar] [CrossRef]
  44. Soekarno, G.W.; Suhendar, A. Implementation of the Convolutional Neural Network (CNN) Algorithm for Pest Detection in Green Mustard Plants. G-Tech J. Teknol. Terap. 2025, 9, 202–210. [Google Scholar] [CrossRef]
  45. Tu, K.-L.; Li, L.-J.; Yang, L.-M.; Wang, J.-H.; Sun, Q. Selection for high quality pepper seeds by machine vision and classifiers. J. Integr. Agric. 2018, 17, 1999–2006. [Google Scholar] [CrossRef]
  46. Kong, Y.; Fang, S.; Wu, X.; Gong, Y.; Zhu, R.; Liu, J.; Peng, Y. Novel and Automatic Rice Thickness Extraction Based on Photogrammetry Using Rice Edge Features. Sensors 2019, 19, 5561. [Google Scholar] [CrossRef]
  47. Li, Y.; Jia, J.; Zhang, L.; Khattak, A.; Mateen, S.; Shi, G.; Wanlin, W. Soybean seed counting based on pod image using two-column convolution neural network. IEEE Access 2019, 7, 64177–64185. [Google Scholar] [CrossRef]
  48. Rybacki, P.; Niemann, J.; Bahcevandziev, K.; Durczak, K. Convolutional Neural Network Model for Variety Classification and Seed Quality Assessment of Winter Rapeseed. Sensors 2023, 23, 2486. [Google Scholar] [CrossRef] [PubMed]
  49. Kurtulmus, F.; Ünal, H. Discriminating rapeseed varieties using computer vision and machine learning. Expert Syst. Appl. 2015, 42, 1880–1891. [Google Scholar] [CrossRef]
  50. Oussama, A.; Kherfi, M.L. A new method for automatic date fruit classification. Int. J. Comput. Vis. Robot. 2017, 7, 692–711. [Google Scholar] [CrossRef]
  51. Hossain, M.S.; Muhammad, G.; Amin, S.U. Improving consumer satisfaction in smart cities using edge computing and caching: A case study of date fruits classification. Future Gener. Comput. Syst. 2018, 88, 333–341. [Google Scholar] [CrossRef]
  52. Lin, P.; Li, X.L.; Chen, Y.M.; He, Y. A deep convolutional neural network architecture for boosting image discrimination accuracy of rice species. Food Bioprocess Technol. 2018, 11, 765–773. [Google Scholar] [CrossRef]
  53. Ni, C.; Wang, D.; Vinson, R.; Holmes, M.; Tao, Y. Automatic inspection machine for maize kernels based on deep convolutional neural networks. Biosyst. Eng. 2019, 178, 131–144, ISSN 1537-5110. [Google Scholar] [CrossRef]
  54. Jung, M.; Song, J.S.; Hong, S.; Kim, S.; Go, S.; Lim, Y.P.; Park, J.; Park, S.G.; Kim, Y.M. Deep Learning Algorithms Correctly Classify Brassica rapa Varieties Using Digital Images. Front. Plant Sci. 2021, 12, 738685. [Google Scholar] [CrossRef]
  55. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification. Sustainability 2022, 14, 6339. [Google Scholar] [CrossRef]
  56. Hamid, Y.; Wani, S.; Soomro, A.B.; Alwan, A.A.; Gulzar, Y. Smart Seed Classification System based on MobileNetV2 Architecture. In Proceedings of the 2nd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia, 25–27 January 2022; pp. 217–222. [Google Scholar]
  57. Jintasuttisak, T.; Edirisinghe, E.; Elbattay, A. Deep neural network based date palm tree detection in drone imagery. Comput. Electron. Agric. 2022, 192, 106560. [Google Scholar] [CrossRef]
  58. Sun, Z.; Guo, X.; Xu, Y.; Zhang, S.; Cheng, X.; Hu, Q.; Wang, W.; Xue, X. Image Recognition of Male Oilseed Rape (Brassica napus) Plants Based on Convolutional Neural Network for UAAS Navigation Applications on Supplementary Pollination and Aerial Spraying. Agriculture 2022, 12, 62. [Google Scholar] [CrossRef]
  59. Zhang, L.; Shi, S.; Zain, M.; Sun, B.; Han, D.; Sun, C. Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds. Agronomy 2025, 15, 245. [Google Scholar] [CrossRef]
  60. Yin, Y.; Guo, L.; Chen, K.; Guo, Z.; Chao, H.; Wang, B.; Li, M. 3D Reconstruction of Lipid Droplets in the Seed of Brassica napus. Sci. Rep. 2018, 8, 6560. [Google Scholar] [CrossRef]
  61. Liu, Y.; Yuan, H.; Zhao, X.; Fan, C.; Cheng, M. Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant. Plant Methods 2023, 19, 17. [Google Scholar] [CrossRef]
  62. Wang, D.; Song, Z.; Miao, T.; Zhu, C.; Yang, X.; Yang, T.; Zhou, Y.; Den, H.; Xu, T. DFSP: A fast and automatic distance field-based stem-leaf segmentation pipeline for point cloud of maize shoot. Front. Plant Sci. 2023, 14, 1109314. [Google Scholar] [CrossRef]
  63. Bao, Y.; Tang, L.; Breitzman, M.W.; Salas Fernandez, M.G.; Schnable, P.S. Field-based robotic phenotyping of sorghum plant architecture using stereo vision. J. Field Robot. 2019, 36, 397–415. [Google Scholar] [CrossRef]
  64. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. Calculation Method for Phenotypic Traits Based on the 3D Reconstruction of Maize Canopies. Sensors 2019, 19, 1201. [Google Scholar] [CrossRef] [PubMed]
  65. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef] [PubMed]
  66. Wei, B.; Ma, X.; Guan, H.; Yu, M.; Yang, C.; He, H.; Wang, F.; Shen, P. Dynamic simulation of leaf area index for the soybean canopy based on 3D reconstruction. Ecol. Inform. 2023, 75, 102070. [Google Scholar] [CrossRef]
  67. Nguyen, A.; Le, B. 3D point cloud segmentation: A survey. In Proceedings of the 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013; pp. 225–230. [Google Scholar]
  68. Wang, W.; Zhang, Y.; Ge, G.; Jiang, Q.; Wang, Y.; Hu, L. Indoor Point Cloud Segmentation Using a Modified Region Growing Algorithm and Accurate Normal Estimation. IEEE Access 2023, 11, 42510–42520. [Google Scholar] [CrossRef]
  69. Fu, Y.; Niu, Y.; Wang, L.; Li, W. Individual-Tree Segmentation from UAV–LiDAR Data Using a Region-Growing Segmentation and Supervoxel-Weighted Fuzzy Clustering Approach. Remote Sens. 2024, 16, 608. [Google Scholar] [CrossRef]
  70. Yang, X.; Huang, Y.; Zhang, Q. Automatic Stockpile Extraction and Measurement Using 3D Point Cloud and Multi-Scale Directional Curvature. Remote Sens. 2020, 12, 960. [Google Scholar] [CrossRef]
  71. Zhu, B.; Zhang, Y.; Sun, Y.; Shi, Y.; Ma, Y.; Guo, Y. Quantitative estimation of organ-scale phenotypic parameters of field crops through 3D modeling using extremely low altitude UAV images. Comput. Electron. Agric. 2023, 210, 107910. [Google Scholar] [CrossRef]
  72. Ghahremani, M.; Williams, K.; Corke, F.; Tiddeman, B.; Liu, Y.; Wang, X.; Doonan, J.H. Direct and accurate feature extraction from 3D point clouds of plants using RANSAC. Comput. Electron. Agric. 2021, 87, 106240. [Google Scholar] [CrossRef]
  73. Fugacci, U.; Romanengo, C.; Falcidieno, B.; Biasotti, S. Reconstruction and Preservation of Feature Curves in 3D Point Cloud Processing. Comput. Aided Des. 2024, 167, 103649. [Google Scholar] [CrossRef]
  74. Miao, Y.; Li, S.; Wang, L.; Li, H.; Qiu, R.; Zhang, M. A single plant segmentation method of maize point cloud based on Euclidean clustering and K-means clustering. Comput. Electron. Agric. 2023, 210, 107951. [Google Scholar] [CrossRef]
  75. Zou, R.; Zhang, Y.; Chen, J.; Li, J.; Dai, W.; Mu, S. Density estimation method of mature wheat based on point cloud segmentation and clustering. Comput. Electron. Agric. 2023, 205, 107626. [Google Scholar] [CrossRef]
  76. Du, R.; Ma, Z.; Xie, P.; He, Y.; Cen, H. PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage. ISPRS J. Photogramm. Remote Sens. 2023, 195, 380–392. [Google Scholar] [CrossRef]
  77. Yan, J.; Tan, F.; Li, C.; Jin, S.; Zhang, C.; Gao, P.; Xu, W. Stem–Leaf segmentation and phenotypic trait extraction of individual plant using a precise and efficient point cloud segmentation network. Comput. Electron. Agric. 2024, 220, 108839. [Google Scholar] [CrossRef]
  78. Zhang, W.; Wu, S.; Wen, W.; Lu, X.; Wang, C.; Gou, W.; Li, Y.; Guo, X.; Zhao, C. Three-dimensional branch segmentation and phenotype extraction of maize tassel based on deep learning. Plant Methods 2023, 19, 76. [Google Scholar] [CrossRef]
  79. Dönmez, E. Enhancing classification capacity of CNN models with deep feature selection and fusion: A case study on maize seed classification. Data Knowl. Eng. 2022, 141, 102075. [Google Scholar] [CrossRef]
  80. Hu, X.; Xia, T.; Yang, L.; Wu, K.; Ying, W.; Tian, Y. 3D modeling and volume measurement of bulk grains stored in large warehouses using bi-temporal multi-site terrestrial laser scanning data. J. Agric. Eng. 2024, 55, 1055. [Google Scholar] [CrossRef]
  81. Huang, Z.; Wang, R.; Cao, Y.; Zheng, S.; Teng, Y.; Wang, F.; Wang, L.; Du, J. Deep learning based soybean seed classification. Comput. Electron. Agric. 2022, 202, 107393. [Google Scholar] [CrossRef]
  82. Yang, S.; Zheng, L.; Yang, H.; Zhang, M.; Wu, T.; Sun, S.; Tomasetto, F.; Wang, M. A synthetic datasets based instance segmentation network for High-throughput soybean pods phenotype investigation. Expert Syst. Appl. 2022, 192, 116403. [Google Scholar] [CrossRef]
  83. Brandani, E.B.; Souza, N.O.S.; Mattioni, N.M.; de Jesus Souza, F.F.; Vilela, M.S.; Marques, É.A.; de Souza Ferreira, W.F. Image analysis for the evaluation of soybean seeds vigor. Acta Agronómica 2022, 70, 1–14. [Google Scholar] [CrossRef]
  84. Fan, Y.; An, T.; Wang, Q.; Yang, G.; Huang, W.; Wang, Z.; Zhao, C.; Tian, X. Non-destructive detection of single-seed viability in maize using hyperspectral imaging technology and multi-scale 3D convolutional neural network. Front. Plant Sci. 2023, 14, 1248598. [Google Scholar] [CrossRef] [PubMed]
  85. Sable, A.; Singh, P.; Kaur, A.; Driss, M.; Boulila, W. Quantifying Soybean Defects: A Computational Approach to Seed Classification Using Deep Learning Techniques. Agronomy 2024, 14, 1098. [Google Scholar] [CrossRef]
  86. Saito, Y.; Miyakawa, R.; Murai, T.; Itakura, K. Classification of external defects on soybean seeds using multi-input convolutional neural networks with color and UV-induced fluorescence images input. Intell. Inform. Infrastruct. 2024, 5, 135–140. [Google Scholar] [CrossRef]
  87. Lin, W.; Shu, L.; Zhong, W.; Lu, W.; Ma, D.; Meng, Y. Online classification of soybean seeds based on deep learning. Eng. Appl. Artif. Intell. 2023, 123, 106434. [Google Scholar] [CrossRef]
  88. Baek, J.; Lee, E.; Kim, N.; Kim, S.L.; Choi, I.; Ji, H.; Chung, Y.S.; Choi, M.-S.; Moon, J.-K.; Kim, K.-H. High Throughput Phenotyping for Various Traits on Soybean Seeds Using Image Analysis. Sensors 2020, 20, 248. [Google Scholar] [CrossRef]
  89. Vollmann, J.; Walter, H.; Sato, T.; Schweiger, P. Digital image analysis and chlorophyll metering for phenotyping the effects of nodulation in soybean. Comput. Electron. Agric. 2011, 75, 190–195. [Google Scholar] [CrossRef]
Figure 1. Scanned seeds of soybean varieties: (a) Aligator, (b) Fiskeby, (c) Mavka, (d) Merlin, and (e) Petrina.
Figure 2. Individual seeds of the analyzed soybean varieties: (a) Aligator, (b) Fiskeby, (c) Mavka, (d) Merlin, and (e) Petrina.
Figure 3. Seed scanning station (Revopoint Range 3D scanner).
Figure 4. The 3D soybean seed model with an extracted point cloud and FEM (finite element method) mesh per variety: (a) Aligator, (b) Fiskeby, (c) Mavka, (d) Merlin, and (e) Petrina.
Figure 5. Stability of MID vs. MIQ.
Figure 6. Architecture of the SB3D-NET multilayer neural network.
Figure 7. Conversion of 3D soybean seed models using AR001 as an example: (a) solid model; (b) point cloud model; (c) FEM mesh model (http://github.com/piotrrybacki/soybean-SB3D-NET.git; accessed on 29 July 2025; see the Supplementary Materials).
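To make the conversion in Figure 7 concrete, the sketch below shows one way to derive a point cloud from a scanned solid model using the open-source Open3D library. The file names and sampling density are hypothetical placeholders; the authors' own conversion scripts are available in the linked repository.

```python
# Hedged sketch of the solid model -> point cloud step from Figure 7.
# "AR001.stl" and the point count are illustrative assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("AR001.stl")   # scanned solid model of one seed
mesh.compute_vertex_normals()                   # normals help downstream meshing/FEM
pcd = mesh.sample_points_uniformly(number_of_points=50_000)  # extract point cloud
o3d.io.write_point_cloud("AR001_cloud.ply", pcd)
```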
Figure 8. Detection of geometric parameters of 3D soybean seed models and the sizes of their markers: (a) PA003—Petrina, (b) MA063—Mavka, (c) MN012—Merlin, and (d) AR077—Aligator (http://github.com/piotrrybacki/soybean-SB3D-NET.git; accessed on 29 July 2025; see the Supplementary Materials).
Figure 9. Diagram of the training, testing, and validation process of the SB3D-NET model for soybean seed model classification.
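The split sketched in Figure 9 can be reproduced on the 500 scanned seed models (100 per variety) roughly as follows; the 70/15/15 proportions and the placeholder feature array are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of a stratified training/testing/validation split for the
# 500 seed models (100 per variety); ratios and features are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 22))            # placeholder: 22 input criteria per seed
y = np.repeat(np.arange(5), 100)     # five varieties, 100 seeds each

X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)
print(len(X_train), len(X_test), len(X_val))  # 350 75 75
```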
Figure 10. Visualization of training and validation accuracy and loss curves for the SB3D-NET model in the classification of 3D soybean seed models: (a) training and validation accuracy; (b) training and validation loss.
Figure 11. Sample output data of the analyzed 3D soybean seed models, along with their predicted labels.
Table 1. The main characteristics of soybean varieties used in the experiment.

| No. | Variety | Seed Maturity (days) | Plant Height (cm) | Height of the First Pod (cm) | Color of Seed Coat | Type of Seed Coat Coloration | Marker Color/Shape | Total Protein (% d.m.) | TSW (g) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Aligator | 130–140 | 60.0–81.7 | 10.7–12.3 | dark cream | uniform | brown/oblong | 33.8 | 180.0 |
| 2 | Fiskeby | 121–137 | 33.5–37.7 | 9.3–10.6 | dark cream | spotted | brown/irregular | 41.0 | 171.0 |
| 3 | Mavka | 120–132 | 80.0–110.0 | 15.2–21.2 | light cream | spotted | light yellow/narrow regular | 32.9 | 182.3 |
| 4 | Merlin | 130–137 | 80.0–95.0 | 9.0–11.4 | dark cream | spotted | brown/irregular | 32.2 | 165.0 |
| 5 | Petrina | 265–280 | 110.5–126.0 | 14.3–18.0 | cream | spotted | brown/oblong | 25.9 | 155.4 |
Table 2. Codes, actual geometric parameters, and organoleptic evaluation of soybean seeds.

| Code | Variety | Length (mm) | Width (mm) | Thickness (mm) | Length of Marker (mm) | Width of Marker (mm) | Seed Mass (g) | Discoloration of Seeds |
|---|---|---|---|---|---|---|---|---|
| AR001 | Aligator | 7.25 | 5.38 | 3.59 | 2.03 | 1.52 | 0.1570 | none |
| AR002 | | 7.07 | 5.58 | 2.74 | 2.55 | 1.02 | 0.1898 | none |
| AR099 | | 5.99 | 3.87 | 3.03 | 1.18 | 1.79 | 0.2366 | none |
| AR100 | | 8.01 | 5.60 | 3.45 | 1.61 | 1.51 | 0.1504 | none |
| FY001 | Fiskeby | 6.94 | 5.39 | 3.92 | 3.91 | 1.27 | 0.2197 | none |
| FY002 | | 8.32 | 4.03 | 4.04 | 1.59 | 2.21 | 0.2236 | none |
| FY099 | | 7.97 | 5.82 | 4.02 | 1.55 | 1.66 | 0.1576 | none |
| FY100 | | 8.43 | 6.05 | 3.47 | 4.27 | 2.38 | 0.1860 | none |
| MA001 | Mavka | 7.09 | 4.21 | 3.69 | 1.97 | 1.15 | 0.1734 | light brown |
| MA002 | | 5.74 | 3.71 | 3.10 | 1.36 | 2.07 | 0.2220 | light brown |
| MA099 | | 7.23 | 3.82 | 4.10 | 1.18 | 1.79 | 0.2320 | light brown |
| MA100 | | 5.35 | 5.41 | 2.65 | 1.61 | 1.51 | 0.1664 | light brown |
| MN001 | Merlin | 6.32 | 3.91 | 3.45 | 3.91 | 1.27 | 0.1570 | dark brown |
| MN002 | | 6.15 | 4.48 | 2.79 | 1.59 | 2.21 | 0.1898 | dark brown |
| MN099 | | 7.30 | 4.40 | 2.73 | 1.55 | 1.66 | 0.2366 | dark brown |
| MN100 | | 7.39 | 4.77 | 3.24 | 4.27 | 2.38 | 0.1504 | dark brown |
| PA001 | Petrina | 8.07 | 4.98 | 3.35 | 1.97 | 1.15 | 0.2197 | dark brown |
| PA002 | | 6.11 | 4.05 | 3.41 | 1.36 | 2.07 | 0.2236 | none |
| PA099 | | 6.88 | 4.52 | 3.05 | 1.18 | 1.79 | 0.1576 | dark brown |
| PA100 | | 5.74 | 4.88 | 2.70 | 1.61 | 1.51 | 0.1860 | light brown |
Table 3. Criteria for varietal and quality classification of soybean seeds.

| No. | Symbol | Description | Unit |
|---|---|---|---|
| 1 | A* | seed surface area determined using the 3D scanner | mm² |
| 2 | Ag | seed surface area calculated based on Equation (1) | mm² |
| 3 | A | seed surface area calculated using Equation (2) | mm² |
| 4 | Dg* | equivalent diameter calculated based on measurements of the 3D model | mm |
| 5 | Dg | equivalent diameter | mm |
| 6 | L | seed length | mm |
| 7 | L* | seed length determined based on the 3D model | mm |
| 8 | Lm | half the sum of the width and length of the seed | mm |
| 9 | m | seed mass | g |
| 10 | m* | mass of the 3D seed model | g |
| 11 | N | sample size | No. |
| 12 | Ra* | shape coefficient calculated based on measurements of the 3D model | % |
| 13 | Ra | shape coefficient | % |
| 14 | T | seed thickness | mm |
| 15 | T* | seed thickness determined based on the 3D model | mm |
| 16 | U | coefficient dependent on soybean seed length | – |
| 17 | W | seed width | mm |
| 18 | W* | seed width determined based on the 3D model | mm |
| 19 | V* | seed volume determined using the 3D scanner | mm³ |
| 20 | Vg | seed volume calculated using the formula | mm³ |
| 21 | φ | seed sphericity coefficient | % |
| 22 | φ* | seed sphericity coefficient calculated based on 3D model measurements | % |

* 3D model parameters.
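As a worked example of the derived criteria in Table 3, the sketch below assumes the conventional seed-geometry forms for the equivalent diameter, sphericity, shape coefficient, surface area (Equation (1)), and volume; these assumed forms reproduce the per-variety averages reported in Table 4.

```python
import numpy as np

def seed_geometry(L, W, T):
    """Derived criteria from Table 3 for a seed of length L, width W and
    thickness T (mm). Conventional seed-geometry forms are assumed for
    Equations (1) and (2)."""
    Dg = (L * W * T) ** (1 / 3)      # equivalent (geometric mean) diameter, mm
    phi = Dg / L                     # sphericity coefficient (fraction)
    Ra = W / L                       # shape (aspect) coefficient (fraction)
    Ag = np.pi * Dg ** 2             # sphere-equivalent surface area, mm^2 (assumed Eq. (1))
    Vg = (np.pi / 6) * L * W * T     # ellipsoid-approximated volume, mm^3
    return {"Dg": Dg, "phi": phi, "Ra": Ra, "Ag": Ag, "Vg": Vg}

# Aligator averages from Table 4: L = 6.47, W = 4.62, T = 3.33 mm
print(seed_geometry(6.47, 4.62, 3.33))
# -> Dg ~ 4.63, phi ~ 0.72, Ra ~ 0.71, Ag ~ 67.5, Vg ~ 52.1
```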
Table 4. Summary of statistical calculations of geometric features for 3D soybean seed models (http://github.com/piotrrybacki/soybean-SB3D-NET.git; accessed on 29 July 2025; see the Supplementary Materials).

| Variable | L (mm) | L* (mm) | W (mm) | W* (mm) | T (mm) | T* (mm) | Dg (mm) | Dg* (mm) | Ra (–) | Ra* (–) | φ (–) | φ* (–) | A (mm²) | Ag (mm²) | A* (mm²) | Vg (mm³) | V* (mm³) | m (g) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Aligator** | | | | | | | | | | | | | | | | | | |
| Min. | 5.26 | 5.14 | 3.60 | 3.71 | 2.59 | 2.67 | 3.66 | 3.71 | 0.68 | 0.66 | 0.70 | 0.69 | 42.4 | 42.1 | 43.1 | 25.7 | 26.6 | 0.1493 |
| Max. | 8.04 | 8.27 | 5.79 | 5.96 | 4.17 | 4.30 | 5.79 | 5.96 | 0.72 | 0.73 | 0.72 | 0.72 | 106.1 | 105.3 | 111.6 | 101.7 | 110.9 | 0.2685 |
| Average | 6.47 | 6.60 | 4.62 | 4.76 | 3.33 | 3.43 | 4.64 | 4.76 | 0.71 | 0.70 | 0.71 | 0.70 | 68.4 | 67.5 | 71.1 | 52.1 | 56.4 | 0.2207 |
| Standard deviation | 0.91 | 1.04 | 0.73 | 0.75 | 0.53 | 0.54 | 0.74 | 0.73 | 0.72 | 0.71 | 0.72 | 0.71 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.0325 |
| Variation coefficient (%) | 14.09 | 15.78 | 15.66 | 15.82 | 15.78 | 14.44 | 15.66 | 15.71 | 15.33 | 15.26 | 15.13 | 15.21 | 15.23 | 15.11 | 15.22 | 15.11 | 15.22 | 14.720 |
| **Fiskeby** | | | | | | | | | | | | | | | | | | |
| Min. | 5.77 | 5.65 | 3.96 | 4.07 | 2.85 | 2.94 | 4.02 | 4.07 | 0.67 | 0.68 | 0.71 | 0.69 | 50.1 | 50.8 | 52.1 | 34.1 | 35.4 | 0.2186 |
| Max. | 8.55 | 8.78 | 6.15 | 6.33 | 4.43 | 4.56 | 6.15 | 6.33 | 0.71 | 0.73 | 0.73 | 0.73 | 111.4 | 118.8 | 125.8 | 121.9 | 132.8 | 0.2950 |
| Average | 6.98 | 7.11 | 4.98 | 5.13 | 3.59 | 3.70 | 5.00 | 5.13 | 0.70 | 0.71 | 0.71 | 0.71 | 77.2 | 78.4 | 82.5 | 65.3 | 70.5 | 0.2602 |
| Standard deviation | 0.91 | 1.05 | 0.73 | 0.75 | 0.52 | 0.55 | 0.71 | 0.72 | 0.70 | 0.72 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.0216 |
| Variation coefficient (%) | 13.06 | 14.45 | 14.71 | 14.44 | 14.66 | 14.87 | 14.72 | 14.52 | 9.21 | 9.21 | 15.32 | 15.21 | 10.26 | 10.11 | 8.21 | 9.11 | 8.89 | 8.2980 |
| **Mavka** | | | | | | | | | | | | | | | | | | |
| Min. | 5.27 | 5.15 | 3.61 | 3.71 | 2.60 | 2.68 | 3.67 | 3.71 | 0.69 | 0.68 | 0.69 | 0.70 | 43.1 | 42.3 | 43.3 | 25.8 | 26.8 | 0.1900 |
| Max. | 8.05 | 8.28 | 5.80 | 5.97 | 4.18 | 4.30 | 5.80 | 5.97 | 0.75 | 0.76 | 0.72 | 0.74 | 110.5 | 105.6 | 111.9 | 102.0 | 111.3 | 0.2663 |
| Average | 6.48 | 6.61 | 4.63 | 4.77 | 3.34 | 3.44 | 4.64 | 4.77 | 0.73 | 0.73 | 0.71 | 0.72 | 69.3 | 67.7 | 71.3 | 52.4 | 56.6 | 0.2284 |
| Standard deviation | 0.91 | 1.06 | 0.74 | 0.75 | 0.54 | 0.53 | 0.75 | 0.76 | 0.72 | 0.71 | 0.72 | 0.71 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.0244 |
| Variation coefficient (%) | 14.06 | 15.76 | 14.56 | 15.99 | 15.76 | 15.54 | 14.57 | 14.55 | 15.33 | 15.26 | 15.11 | 14.99 | 15.21 | 15.44 | 15.24 | 10.14 | 10.21 | 10.679 |
| **Merlin** | | | | | | | | | | | | | | | | | | |
| Min. | 5.01 | 5.10 | 3.57 | 3.68 | 2.57 | 2.65 | 3.58 | 3.68 | 0.65 | 0.66 | 0.68 | 0.70 | 41.2 | 40.3 | 42.5 | 24.1 | 26.0 | 0.1454 |
| Max. | 8.01 | 7.77 | 5.44 | 5.60 | 3.92 | 3.93 | 5.55 | 5.60 | 0.70 | 0.73 | 0.72 | 0.73 | 96.1 | 96.7 | 98.5 | 89.4 | 92.0 | 0.2184 |
| Average | 6.55 | 6.50 | 4.55 | 4.69 | 3.28 | 3.38 | 4.61 | 4.69 | 0.69 | 0.70 | 0.70 | 0.71 | 67.2 | 66.7 | 69.0 | 51.3 | 54.0 | 0.1850 |
| Standard deviation | 0.99 | 0.85 | 0.60 | 0.62 | 0.43 | 0.44 | 0.61 | 0.63 | 0.70 | 0.70 | 0.72 | 0.71 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.0224 |
| Variation coefficient (%) | 12.18 | 13.12 | 13.13 | 13.22 | 13.17 | 13.24 | 13.15 | 13.34 | 13.11 | 14.21 | 12.12 | 13.21 | 12.09 | 12.05 | 12.25 | 12.15 | 12.15 | 12.095 |
| **Petrina** | | | | | | | | | | | | | | | | | | |
| Min. | 5.04 | 5.11 | 3.58 | 3.38 | 2.58 | 2.66 | 3.59 | 3.67 | 0.68 | 0.67 | 0.68 | 0.67 | 42.1 | 40.5 | 42.6 | 24.2 | 26.2 | 0.1219 |
| Max. | 8.20 | 7.78 | 5.45 | 5.61 | 3.93 | 4.04 | 5.56 | 5.61 | 0.74 | 0.73 | 0.72 | 0.70 | 97.2 | 96.9 | 98.8 | 89.8 | 92.4 | 0.2326 |
| Average | 6.56 | 6.51 | 4.56 | 4.70 | 3.29 | 3.39 | 4.62 | 4.70 | 0.71 | 0.71 | 0.71 | 0.69 | 67.2 | 66.9 | 69.3 | 51.5 | 54.2 | 0.1810 |
| Standard deviation | 0.99 | 0.86 | 0.60 | 0.61 | 0.43 | 0.44 | 0.60 | 0.61 | 0.69 | 0.71 | 0.72 | 0.70 | 0.71 | 0.72 | 0.71 | 0.72 | 0.71 | 0.0322 |
| Variation coefficient (%) | 15.16 | 15.11 | 17.11 | 17.63 | 13.21 | 13.12 | 16.12 | 13.15 | 16.21 | 16.11 | 16.03 | 16.01 | 16.24 | 16.14 | 15.89 | 16.24 | 15.29 | 17.772 |

* 3D model parameters.
Table 5. Changes in map size depending on the layer number of the developed SB3D-NET model.

| Layer (Type) | Output Shape | Param. |
|---|---|---|
| conv3d (Conv3D) | (None, 1, 1, 198, 198, 32) | 8900 |
| max_pooling3d (MaxPooling3D) | (None, 1, 1, 179, 179, 32) | 0 |
| dropout (Dropout) | (None, 1, 1, 179, 179, 32) | 0 |
| conv3d_1 (Conv3D) | (None, 1, 1, 117, 117, 32) | 184,060 |
| max_pooling3d_1 (MaxPooling3D) | (None, 1, 1, 98, 98, 32) | 0 |
| dropout_1 (Dropout) | (None, 1, 1, 98, 98, 32) | 0 |
| conv3d_2 (Conv3D) | (None, 1, 1, 66, 66, 64) | 1,030,560 |
| max_pooling3d_2 (MaxPooling3D) | (None, 1, 1, 43, 43, 64) | 0 |
| dropout_2 (Dropout) | (None, 1, 1, 43, 43, 64) | 0 |
| conv3d_3 (Conv3D) | (None, 1, 1, 41, 41, 64) | 6,075,040 |
| max_pooling3d_3 (MaxPooling3D) | (None, 1, 1, 30, 30, 64) | 0 |
| dropout_3 (Dropout) | (None, 1, 1, 30, 30, 64) | 0 |
| conv3d_4 (Conv3D) | (None, 1, 1, 24, 24, 128) | 10,102,620 |
| max_pooling3d_4 (MaxPooling3D) | (None, 1, 1, 17, 17, 128) | 0 |
| dropout_4 (Dropout) | (None, 1, 1, 17, 17, 128) | 0 |
| conv3d_5 (Conv3D) | (None, 1, 1, 21, 21, 128) | 60,068,620 |
| max_pooling3d_5 (MaxPooling3D) | (None, 1, 1, 10, 10, 128) | 0 |
| dropout_5 (Dropout) | (None, 1, 1, 10, 10, 128) | 0 |
| conv3d_6 (Conv3D) | (None, 1, 1, 19, 19, 128) | 80,840,010 |
| max_pooling3d_6 (MaxPooling3D) | (None, 1, 1, 7, 7, 128) | 0 |
| dropout_6 (Dropout) | (None, 1, 1, 7, 7, 128) | 0 |
| flatten (Flatten) | (None, 1, 1, 12,800) | 0 |
| dense (Dense) | (None, 1, 1, 512) | 90,561,020 |

Total params: 90,561,020; trainable params: 90,561,020; non-trainable params: 0.
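For orientation, a minimal Keras sketch of the layer sequence in Table 5 follows. The filter counts (32, 32, 64, 64, 128, 128, 128) and the 512-unit dense layer are taken from the table; the input resolution, kernel and pool sizes, dropout rate, and single-neuron sigmoid output are illustrative assumptions rather than the authors' exact configuration.

```python
# A minimal sketch of the SB3D-NET layer sequence from Table 5, assuming
# voxelized single-channel 200x200x200 inputs; not the published model.
from tensorflow.keras import layers, models

def build_sb3d_net(input_shape=(200, 200, 200, 1)):
    model = models.Sequential()
    model.add(layers.Conv3D(32, 3, activation="relu", padding="same",
                            input_shape=input_shape))
    model.add(layers.MaxPooling3D(2))
    model.add(layers.Dropout(0.25))
    for filters in (32, 64, 64, 128, 128, 128):  # six further Conv3D blocks
        model.add(layers.Conv3D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling3D(2))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))  # single-neuron output layer
    return model

model = build_sb3d_net()
model.summary()
```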
Table 6. Data from labels generated by the SB3D-NET network based on the analysis of 3D soybean seed models.

| Code | Length (mm) | Length Precision (%) | Width (mm) | Width Precision (%) | Thickness (mm) | Thickness Precision (%) | Length of Marker (mm) | Precision of Marker Length (%) | Width of Marker (mm) | Precision of Marker Width (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| AR001 | 7.53 | 96.28 | 5.23 | 97.21 | 3.73 | 96.25 | 2.80 | 72.50 | 1.60 | 95.02 |
| AR002 | 7.78 | 90.87 | 5.41 | 96.95 | 2.94 | 93.20 | 3.09 | 82.52 | 1.32 | 77.27 |
| AR099 | 5.61 | 93.66 | 5.38 | 71.93 | 3.82 | 79.32 | 1.50 | 78.67 | 2.03 | 88.18 |
| AR100 | 7.70 | 96.13 | 4.59 | 81.96 | 2.72 | 78.84 | 2.01 | 80.50 | 1.72 | 87.79 |
| FY001 | 7.53 | 92.16 | 4.7 | 87.20 | 4.06 | 96.55 | 4.58 | 85.37 | 1.72 | 73.84 |
| FY002 | 6.04 | 72.60 | 4.17 | 96.64 | 3.45 | 85.40 | 2.82 | 56.38 | 2.66 | 83.08 |
| FY099 | 6.54 | 82.06 | 4.29 | 73.71 | 4.47 | 89.93 | 2.34 | 66.24 | 1.81 | 91.71 |
| FY100 | 7.29 | 86.48 | 5.95 | 98.35 | 2.99 | 86.17 | 4.38 | 97.49 | 2.45 | 97.14 |
| MA001 | 7.93 | 89.41 | 4.70 | 89.57 | 2.98 | 80.76 | 2.31 | 85.28 | 1.32 | 87.12 |
| MA002 | 7.95 | 72.20 | 4.57 | 81.18 | 3.47 | 89.34 | 1.42 | 95.77 | 2.10 | 98.57 |
| MA099 | 5.71 | 78.98 | 5.44 | 70.22 | 3.40 | 82.93 | 1.23 | 95.93 | 1.86 | 96.24 |
| MA100 | 7.97 | 67.13 | 5.50 | 98.36 | 3.72 | 71.24 | 1.79 | 89.94 | 1.66 | 90.96 |
| MN001 | 7.88 | 80.20 | 5.04 | 77.58 | 3.56 | 96.91 | 4.21 | 92.87 | 1.54 | 82.47 |
| MN002 | 6.47 | 95.05 | 5.12 | 87.50 | 2.88 | 96.88 | 1.99 | 79.90 | 2.66 | 83.08 |
| MN099 | 7.41 | 98.52 | 4.21 | 95.68 | 2.82 | 96.81 | 1.80 | 86.11 | 1.68 | 98.81 |
| MN100 | 5.44 | 73.61 | 4.98 | 95.78 | 3.34 | 97.01 | 4.30 | 99.30 | 3.38 | 70.41 |
| PA001 | 6.37 | 78.93 | 5.52 | 90.22 | 3.69 | 90.79 | 2.62 | 75.19 | 1.48 | 77.70 |
| PA002 | 7.93 | 77.05 | 4.13 | 98.06 | 3.01 | 88.27 | 1.85 | 73.51 | 2.30 | 90.00 |
| PA099 | 7.54 | 91.25 | 4.68 | 96.58 | 3.35 | 91.04 | 1.55 | 76.13 | 1.97 | 90.86 |
| PA100 | 6.05 | 94.88 | 3.87 | 79.30 | 3.62 | 74.59 | 2.02 | 79.70 | 1.60 | 94.38 |
| Average | | 85.37 | | 88.20 | | 88.11 | | 82.47 | | 87.73 |
Table 7. Numerical metrics of the SB3D-NET model performance for qualitative classification of soybean seeds.

| Metrics | Training Set | Test Set | Validation Set |
|---|---|---|---|
| SS | 0.1332 | 1.4355 | 1.3778 |
| MAE | 0.0022 | 0.0185 | 0.0399 |
| MSE | 0.0017 | 0.0181 | 0.0365 |
| RMS | 0.0677 | 0.1421 | 0.1466 |
| R² | 0.9410 | 0.9599 | 0.9997 |
| SDR | 0.0053 | 0.0277 | 0.0497 |

GE (global error), reported as a single value across all sets: 0.0992.
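The conventional metrics in Table 7 can be computed as sketched below; SS, SDR, and GE follow the authors' own definitions, which are not reproduced in this section, so only the standard ones are shown.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMS and R^2 in their conventional forms, as used in
    Table 7 (SS, SDR and GE are defined elsewhere in the paper)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                      # mean absolute error
    mse = np.mean(err ** 2)                         # mean squared error
    rms = np.sqrt(mse)                              # root mean square error
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "MSE": mse, "RMS": rms, "R2": r2}

# Illustrative call on hypothetical target/prediction values:
print(regression_metrics([0.70, 0.72, 0.69], [0.71, 0.71, 0.70]))
```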
Table 8. Performance of the SB3D-NET model for qualitative and varietal classification of 3D soybean seed models.

| Classification Type | ACC (%) | PPV (%) | TPR (%) | F-score (%) | Average Classification Time, GPU * (ms/model) |
|---|---|---|---|---|---|
| Geometric parameters of seeds | 89.33 | 89.79 | 88.74 | 91.87 | 7.32 |
| Discoloration of seeds | 91.31 | 90.87 | 90.76 | 92.54 | 5.54 |
| Discoloration and geometric parameters of seeds | 91.78 | 92.67 | 93.87 | 94.78 | 8.78 |

* GPU: NVIDIA GeForce RTX Studio 2060, 32 GB.
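The Table 8 metrics follow the standard confusion-matrix definitions, sketched below for a binary (one-vs-rest) case; the counts in the example are hypothetical.

```python
def classification_metrics(tp, fp, fn, tn):
    """ACC, PPV (precision), TPR (recall) and F-score from a binary
    confusion matrix -- the standard definitions behind Table 8."""
    acc = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy
    ppv = tp / (tp + fp)                    # positive predictive value
    tpr = tp / (tp + fn)                    # true positive rate
    f_score = 2 * ppv * tpr / (ppv + tpr)   # harmonic mean of PPV and TPR
    return acc, ppv, tpr, f_score

# Hypothetical counts for one variety evaluated one-vs-rest:
print(classification_metrics(tp=90, fp=10, fn=11, tn=389))
```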
Table 9. Comparison with existing major studies.

| Studies | Utilized Method | Utilized Equipment | Dataset | Overall Accuracy |
|---|---|---|---|---|
| [53] | point cloud method | UAV-based RGB imaging system | 70,922 plant images | 71.20–96.00% |
| [54] | AI-based classification models | Tesla V100 GPU with 32 GB video random access memory (VRAM) | 2,138 images | 58.34% |
| [56] | deep learning convolutional neural networks (DCNNs) | RGB image | 14 classes of seeds | 95.00% |
| [57] | YOLO-V5 | UAV-based RGB imaging system | 125 images | 92.34% |
| [58] | least-squares method (LSM) and Hough transform | digital imaging technology (DJI Phantom 4) | 180 RGB plant images | 93.54% |
| [88] | high-throughput analysis method | RGB image | 39,065 seed images | 97.00% |
| [89] | regression analysis | RGB image | 1000 seed images | 93.70% |