Article

Rapid Seed Viability Detection Using Laser Speckle Weighted Generalized Difference with Improved Residual Networks

1
Robotics College, Beijing Union University, Beijing 100020, China
2
Artificial Intelligence College, Beijing Union University, Beijing 100020, China
*
Author to whom correspondence should be addressed.
Agronomy 2026, 16(1), 81; https://doi.org/10.3390/agronomy16010081
Submission received: 5 November 2025 / Revised: 24 December 2025 / Accepted: 25 December 2025 / Published: 27 December 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Conventional seed viability assessment methods are often destructive, time-consuming, and highly sensitive to environmental conditions, resulting in estimated annual global agricultural losses exceeding 12 billion USD, as reported by the Food and Agriculture Organization (FAO) of the United Nations. To overcome these limitations, this study proposes a non-destructive framework for evaluating the viability of multiple pea seed varieties—including Gancui-2, Jinwan No.6, Hongyun 211, Mawan No.1, and Wuxuwan No.2—using laser speckle imaging (LSI). A He–Ne laser combined with a CCD camera was employed to capture 512-frame dynamic speckle sequences from 3000 seeds. A weighted generalized difference (WGD) algorithm was developed to enhance feature extraction by emphasizing physiologically relevant temporal variations through frame weighting based on the global mean and standard deviation of inter-frame differences. The extracted features were classified using an improved Weighted Generalized Residual Network (ResNet-W), which integrates weighted average pooling and 1 × 1 convolution to enhance feature aggregation and classification efficiency. Experimental results demonstrated strong performance, achieving 91.32% accuracy, 90.78% precision, 92.04% recall, and a 91.38% F1-score. The proposed framework offers a cost-effective, high-accuracy, and fully non-destructive solution for seed viability assessment, with significant potential for real-time agricultural quality monitoring and intelligent seed sorting applications.

1. Introduction

Seed viability assessment is a fundamental aspect of quality control in modern agriculture, directly affecting sowing performance, storage management, and breeding efficiency. Ensuring the use of high-viability seeds is essential for sustaining global food security. However, conventional testing methods are often time-consuming, destructive, and highly sensitive to environmental variations [1,2,3]. Non-destructive techniques, such as near-infrared (NIR) spectroscopy, are limited by high instrumentation costs and reduced generalizability across different seed varieties [4,5]. According to the Food and Agriculture Organization (FAO) of the United Nations, global agricultural losses resulting from insufficient seed viability exceed USD 12 billion annually, highlighting the urgent need for efficient, accurate, and scalable detection technologies [6,7].
Laser speckle imaging (LSI), owing to its non-contact nature, high sensitivity, and ability to capture dynamic biological processes in real time, has emerged as a promising optical technique for agricultural and biological inspection [8]. In recent years, LSI has been increasingly applied to non-destructive seed viability assessment, with the goal of correlating speckle pattern dynamics with physiological activity during germination. A key challenge in this field is the effective extraction of spatiotemporal features from complex biospeckle signals to achieve rapid and accurate classification of seed vitality.
Early studies laid the theoretical foundation for biospeckle-based evaluation. Thakur et al. [9] developed a spatiotemporal biospeckle modeling framework for seed viability analysis, demonstrating that temporal fluctuations in speckle patterns are closely linked to germination performance. Building upon this work, they employed a three-dimensional convolutional neural network (3D-CNN) to capture the temporal–spatial evolution of biospeckle patterns, significantly improving classification robustness compared to traditional statistical methods [10]. Later, they incorporated deep residual learning into biospeckle image processing, enhancing the extraction of discriminative features under varying illumination and noise conditions [11]. In a subsequent review, the authors summarized the progress of deep learning-based biospeckle techniques, highlighting their promising potential for real-time, field-deployable agricultural applications [12].
Complementary optical approaches have also been developed to enhance biospeckle sensitivity. Researchers at the Federal University of Lavras [13] proposed a dual-illumination dynamic light scattering (DLS) system operating under both visible and infrared light. Their findings indicated that near-infrared (NIR) illumination reduces variability and increases penetration depth, thus improving the correlation between optical fluctuations and the internal physiological states of seeds. These results are consistent with other studies that have confirmed that multispectral and NIR illumination can enhance biospeckle contrast and measurement reliability [14,15].
Quantitative methods for estimating biological activity have also been refined. Singh et al. [3] introduced the Full-Field Temporal History of the Speckle Pattern (FTHSP) approach, demonstrating that biospeckle activity (BA) varies significantly with temperature and initial moisture content. Similarly, Braga et al. [5] proposed a temporal entropy metric for biospeckle analysis, enabling early prediction of seed viability prior to visible germination.
Recent efforts have increasingly focused on integrating LSI with deep learning frameworks to achieve automated feature extraction and robust classification. Singh et al. [16] demonstrated that BA correlates with seed germination and viability, reflecting the effects of hydropriming and chemical priming, while Genze et al. [17] applied convolutional neural networks (CNNs) and transfer learning to accurately detect germination and assess quality in multiple grain crops. Hu et al. [18] combined multispectral imaging with multivariate analysis to distinguish hard and soft seeds in legumes, enabling non-destructive classification of dormancy status. Bouzaouia et al. [19] applied dynamic LSI to monitor plant water stress and physiological activity during breeding trials, showing that temporal speckle fluctuations can serve as sensitive indicators of biological responses. Ansari et al. [20] explored biospeckle imaging for evaluating biological activity in plant tissues, detailing advanced methods for capturing spatiotemporal dynamics and linking them to physiological processes. Finally, Zhang et al. [21] proposed a method to suppress noise and enhance dynamic speckle contrast, demonstrating superior sensitivity to early enzymatic activity during seed germination.
Collectively, these studies highlight the evolution of LSI from traditional statistical descriptors to data-driven, deep spatiotemporal models. The integration of convolutional, residual, and transformer-based architectures, along with multispectral and multimodal optical sensing, has significantly advanced the precision, stability, and interpretability of LSI in seed viability detection. These developments lay a solid foundation for the next generation of rapid, intelligent, and fully automated agricultural quality monitoring systems.
Building upon these findings, the present study focuses on five pea seed varieties, with Gancui-2 as a representative example. Speckle images were captured using laser speckle imaging (LSI), and key discriminative features were extracted using an improved weighted generalized difference (WGD) algorithm. Based on these features, an optimized deep learning architecture, termed the ResNet-W model, was developed by refining the classical residual network structure to enhance feature learning and classification performance. The objective is to overcome the limitations of traditional methods and establish a low-cost, high-efficiency intelligent detection system for agricultural applications.
To address the limitations of existing laser speckle feature extraction algorithms and the performance challenges of seed vitality detection models in current research, this study aims to design an enhanced speckle feature extraction algorithm and develop an improved pea seed vitality classification model. The goal is to improve detection accuracy and robustness, ensuring better performance in real-world agricultural applications.

2. Materials and Methods

2.1. Experiment Settings

In this study, five pea seed varieties—Gancui-2 (Shandong Shouhe Seed Industry Co., Ltd., Weifang, China), Jinwan No.6 (Youyu Agricultural Experimental Station, Shanxi Academy of Agricultural Sciences, Jinzhong, China), Hongyun 211 (Hebei Fengtian Agriculture Co., Ltd., Shijiazhuang, China), Mawan No.1 (Shandong Shouhe Seed Industry Co., Ltd., Weifang, China), and Wuxuwan No.2 (Sichuan Shuxin Seed Industry Co., Ltd., Chengdu, China)—were selected as experimental materials. A total of 3000 imbibed seeds (600 seeds per variety, each imbibed in water at 25 °C for 6 h to achieve uniform moisture content) were subjected to laser speckle imaging (LSI) and subsequently incubated under controlled conditions for five days. For the non-viable seed group, 1500 seeds were used: half were inactivated through desiccation in a DHG-9070A electric thermostatic drying oven, while the remaining half underwent microbial inactivation by incubation in an oxygen-free environment at 30 °C and 80% relative humidity for 3 days. The viable seed group consisted of the remaining 1500 seeds, which were incubated in a constant-temperature and humidity incubator (Shenghui Intelligent Technology Co., Ltd., Yantai, China) under optimal germination conditions (22 °C and 70% relative humidity).
For both viable and non-viable seed groups, one seed was randomly selected from every ten seeds and soaked in clean water at room temperature for 6 h prior to germination assays, as shown in Figure 1. Germinating seeds were maintained under a 12 h light/12 h dark photoperiod in a sterile mixed substrate of vermiculite and perlite (3:1 volume, Stanley Agricultural Group Co., Ltd., Linyi, China) to ensure adequate aeration and stable moisture retention. As shown in Figure 1, germination tests confirmed that viable seed samples exhibited high germination activity, whereas no germination was observed in seeds subjected to either inactivation treatment, indicating complete loss of viability. The laser speckle images, processed using the WGD algorithm, were then used to construct a comprehensive dataset for further analysis.
After speckle image acquisition, 50 seeds were randomly selected from both the viable and non-viable groups of each seed variety, yielding a total of 500 seeds. These samples were treated as independent newly collected specimens and were used for external detection and validation of the proposed method, rather than being included in the model training or testing datasets. The experimental setup comprised a 25-LHR-911-230 He–Ne laser (Melles Griot, Rochester, NY, USA), an MV-EM120C CCD camera (Shaanxi Vish Vision Intelligence Manufacturing Technology Co., Ltd., Xi’an, China), and an M3Z1228C-MP zoom industrial lens (Shaanxi Vish Vision Intelligence Manufacturing Technology Co., Ltd., Xi’an, China). Additional components included an air-cushion vibration-isolated optical platform (Changfu Technology (Beijing) Co., Ltd., Beijing, China), a spatial filter, and mirrors (Guangzhou Hengyang Electronics Technology Co., Ltd., Guangzhou, China). The arrangement of the optical elements is illustrated in Figure 2, and the detailed instrument parameters are summarized in Table 1.

2.1.1. Laser Speckle Imaging of Seeds

Laser speckle is an optical phenomenon that occurs when a laser beam illuminates a rough surface or passes through a transparent scattering medium [22,23,24,25]. When projected onto the seed surface, dynamic changes in cellular activity induce temporal variations in the speckle pattern, and the intensity of these variations is positively correlated with seed viability—more pronounced fluctuations indicate higher viability [26,27,28]. In this study, imbibed seeds were grouped prior to imaging, and speckle sequences were captured using a CCD camera at 30 frames per second, with 512 frames collected for each group. Figure 3 illustrates the sample acquisition process, including seed grouping and image capture procedures.

2.1.2. Preprocessing of Seed Speckle Images

During the image data processing stage, the region extraction and segmentation step was primarily performed to obtain time-series images of individual seeds. The implementation procedure is detailed as follows.
First, the region extraction and segmentation process involved several operations, including edge detection, contour tracking and filtering, construction of minimum bounding rectangles, and cropping. For edge detection, Gaussian filtering was initially applied to suppress noise (Equation (1)), followed by computation of the gradient magnitude and direction. Finally, a double-threshold method was employed to select the edges.
$$A(x, y) = \frac{1}{2\pi\gamma^2}\, e^{-\frac{x^2 + y^2}{2\gamma^2}} \ast B(x, y)$$
where $A(x, y)$ denotes the filtered image, $\gamma$ is the standard deviation of the Gaussian kernel, $B(x, y)$ represents the input grayscale image, and $\ast$ denotes convolution.
$$L(x, y) = \sqrt{\left(\frac{\partial A}{\partial x}\right)^2 + \left(\frac{\partial A}{\partial y}\right)^2}$$
$$\rho(x, y) = \arctan\!\left(\frac{\partial A / \partial y}{\partial A / \partial x}\right)$$
where $L(x, y)$ represents the gradient magnitude, $\rho(x, y)$ denotes the gradient direction, $\partial A/\partial x$ and $\partial A/\partial y$ are the partial derivatives of the filtered image along the $x$ and $y$ directions, respectively, and $\arctan$ is the arctangent function used to calculate the gradient orientation.
Next, in the region extraction and segmentation step, the binarized edge image was processed by traversing pixel connectivity to identify continuous edge points and form contours. The area of each contour (in pixels) was then calculated, and the contour with the maximum area was selected, representing the bright region of the seed.
For the minimum bounding rectangle, either the rotating calipers method or the convex hull algorithm was applied to the set of contour points $\{(x_i, y_i)\}_{i=1}^{N}$ to determine the rectangle of minimal area that encloses all points. Finally, all frame images corresponding to the same individual seed were grouped to form a set. The overall preprocessing workflow for seed speckle images is illustrated in Figure 4.
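For illustration only, the Gaussian filtering and gradient computation described above can be sketched in NumPy. This is not the authors' implementation: the kernel size, the reflect padding, and the use of `np.gradient` for the partial derivatives are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, gamma=1.0):
    """Normalized 2-D Gaussian kernel with standard deviation gamma."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * gamma**2))
    return k / k.sum()

def filter_and_gradients(img, gamma=1.0, size=5):
    """Gaussian-filter the image, then return the filtered image A,
    the gradient magnitude L, and the gradient direction rho."""
    k = gaussian_kernel(size, gamma)
    pad = size // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    h, w = img.shape
    A = np.empty((h, w))
    for y in range(h):                  # direct convolution, clarity over speed
        for x in range(w):
            A[y, x] = np.sum(padded[y:y + size, x:x + size] * k)
    gy, gx = np.gradient(A)             # partial derivatives along y and x
    L = np.hypot(gx, gy)                # gradient magnitude
    rho = np.arctan2(gy, gx)            # gradient direction
    return A, L, rho
```

The double-threshold edge selection and contour tracking that follow would operate on `L` and `rho`.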

2.2. Speckle Feature Extraction Using the WGD Algorithm

The WGD algorithm, a key component for enhancing the discriminability of dynamic speckle features in seed viability detection, primarily consists of two sequentially coupled stages: dynamic weighting coefficient design and weighted cumulative feature generation. These two stages are logically connected, with the former providing a quantitative foundation for distinguishing the significance of inter-frame variations, and the latter leveraging the derived weights to aggregate effective feature information. Both stages are detailed in the following sections.

2.2.1. Dynamic Weighting Coefficient Design and Weighted Cumulative Feature Generation

The core function of the weighting coefficient is to quantify the importance of each inter-frame difference map $D_i$ (the difference between consecutive speckle frames); that is, the more pronounced the variation in $D_i$, the higher the weight it should receive. The weighting scheme is based on the global statistical characteristics of inter-frame differences, and the specific process is illustrated in Figure 5.
First, the mean value $\mathrm{mean}(D_i)$ of each difference map is calculated (i.e., the average pixel value of the $i$-th difference map), reflecting the overall intensity of inter-frame variations in the seed speckle images. The mean $\mu_D$ and standard deviation $\sigma_D$ of all $\mathrm{mean}(D_i)$ values are then computed to establish a baseline for evaluating the significance of these variations:
$$\mu_D = \frac{1}{n-1}\sum_{i=1}^{n-1} \mathrm{mean}(D_i)$$
$$\sigma_D = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n-1}\left(\mathrm{mean}(D_i) - \mu_D\right)^2}$$
Next, a Gaussian-inspired weighting function is designed. The larger the deviation of $\mathrm{mean}(D_i)$ from $\mu_D$, the more significant the variation, and thus the higher the intermediate variable $H_i$. Since $w_i$ is obtained via max-normalization of $H_i$, a larger weight $w_i$ is assigned:
$$H_i = 1 - \exp\!\left(-\frac{\left(\mathrm{mean}(D_i) - \mu_D\right)^2}{2\sigma_D^2}\right)$$
$$w_i = \frac{H_i}{\max\left(H_1, H_2, \ldots, H_{n-1}\right)}$$
The dynamic weights are then multiplied with the corresponding inter-frame difference maps and accumulated to generate the final speckle dynamic feature map G :
$$G = \sum_{i=1}^{n-1} w_i D_i$$
In this feature map, regions of significant speckle variation in seeds are characterized by a large deviation of mean ( D i ) from μ D , resulting in higher weights and greater pixel intensities. Conversely, regions corresponding to noise or minor fluctuations exhibit smaller deviations, leading to lower weights and effective suppression. This process significantly enhances feature discriminability.
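The two WGD stages above can be expressed compactly in NumPy. This is a minimal sketch, not the authors' code: the use of absolute inter-frame differences and the small stabilizing constant `eps` are assumptions.

```python
import numpy as np

def wgd_feature_map(frames, eps=1e-12):
    """Weighted generalized difference over an (n, H, W) stack of speckle
    frames, following the mu_D / sigma_D / H_i / w_i / G formulas above."""
    frames = np.asarray(frames, dtype=float)
    D = np.abs(np.diff(frames, axis=0))        # n-1 inter-frame difference maps D_i
    m = D.mean(axis=(1, 2))                    # mean(D_i) per difference map
    mu_D, sigma_D = m.mean(), m.std()          # global baseline statistics
    H = 1.0 - np.exp(-(m - mu_D) ** 2 / (2 * sigma_D**2 + eps))
    w = H / (H.max() + eps)                    # max-normalized weights w_i
    return np.tensordot(w, D, axes=1)          # G = sum_i w_i * D_i
```

Difference maps whose mean deviates strongly from the global baseline receive weights close to 1, while near-baseline (noise-like) frames are suppressed, matching the behavior described above.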

2.2.2. Construction of the Classification Model

In agricultural inspection, deep learning and laser speckle imaging (LSI) form a closed-loop framework, where LSI captures dynamic speckle information from seeds, which is then mapped by deep learning models to determine seed viability classes [29,30,31]. This study, built on the PyTorch framework (https://pytorch.org, accessed on 15 March 2023), introduces the ResNet-W model by optimizing ResNet-D. To construct a binary classification network for seed viability detection, the downsampling module at the start of each of the four residual stages in ResNet-D was improved. Specifically, the original downsampling structure, consisting of average pooling followed by a 1 × 1 convolution (the standard ResNet-D design), was replaced with a “learnable weighted average pooling and 1 × 1 convolution” approach.
The improved downsampling module is centered on a learnable weighted average pooling component (Figure 6), and its architectural details are presented below.
The proposed learnable weighted average pooling module employs learnable spatial weights with a fixed dimensionality of $1 \times 1 \times k \times k$, which corresponds to the pooling kernel size and is independent of the number of input feature channels. These weights model the relative importance of different spatial locations within each pooling window and are adaptively optimized through backpropagation, enabling the network to selectively emphasize spatial regions that are more informative for speckle dynamics (e.g., high-frequency spatial patterns associated with radicle activity during seed germination).
A single set of spatial weights is shared across all input channels. Specifically, the learned weight tensor is replicated along the channel dimension and applied uniformly to each feature map, thereby enforcing a consistent spatial weighting strategy across channels while avoiding channel-specific parameter inflation. The pooling kernel size is fixed at 2 × 2 , and the stride is set to 2, which is consistent with the original downsampling configuration and results in a twofold reduction in spatial resolution. At the current stage, the downsampling module is deliberately kept lightweight and consists solely of the learnable weighted average pooling operation. No additional components—such as 1 × 1 convolutions, BatchNorm2d normalization layers, or channel dimension alignment—are included. Consequently, the module focuses exclusively on spatially weighted downsampling without altering the feature dimensionality.
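As an illustrative sketch of the forward pass only (in the actual network the spatial weights would be learned via backpropagation), the channel-shared weighted pooling can be written in NumPy. Normalizing the weights so they sum to 1 is an assumption; with uniform weights the operation reduces to ordinary average pooling.

```python
import numpy as np

def weighted_avg_pool2d(x, w, k=2, stride=2):
    """Spatially weighted average pooling.
    x: (C, H, W) feature maps; w: (k, k) spatial weights shared across
    all channels, broadcast along the channel dimension."""
    wn = w / w.sum()                           # assumed normalization
    C, H, W = x.shape
    Ho, Wo = (H - k) // stride + 1, (W - k) // stride + 1
    out = np.zeros((C, Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            patch = x[:, i*stride:i*stride + k, j*stride:j*stride + k]
            out[:, i, j] = (patch * wn).sum(axis=(1, 2))  # weighted mean per window
    return out
```

With `k=2` and `stride=2`, the spatial resolution is halved, consistent with the original downsampling configuration.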
To optimize feature extraction and mitigate overfitting on small seed datasets, transfer learning was employed. The initial convolutional layer and the first three residual stages, comprising three residual blocks in Stage 2 and four residual blocks in Stage 3, together form 25 layers and were entirely frozen to leverage generic representations pre-trained on large-scale datasets. This frozen portion covers the shallow and mid-level feature extraction modules, in which each residual block is composed of convolutional, batch normalization, and activation layers that collectively establish a hierarchical feature representation. In contrast, only the final two residual stages were fine-tuned to adapt the network to the specific task of seed viability detection. These stages consist of six residual blocks in Stage 4 and three residual blocks in Stage 5, corresponding to a total of 20 layers and forming the deep feature learning module. Table 2 summarizes the variations in model parameters under different freezing strategies, illustrating how freezing different numbers of layers influences the network configuration and the scale of trainable parameters.
To clearly present the structural design of ResNet-W, this section breaks it down by modules and details their composition and modification points in Table 3. For image preprocessing, operations such as median filtering were applied, from which 2500 speckle feature maps were generated using the WGD algorithm. The processed dataset was then divided into training, validation, and testing subsets with a ratio of 7:2:1.
In this study, an improved ResNet-W model, derived from ResNet-D [32], was employed to train the dataset. The model consists of 50 residual layers, enabling automatic extraction of multi-level features from seed images. To adapt the architecture for the binary classification task of seed viability, most of the convolutional layers within the first 50 layers were frozen, while only the last 20 layers were fine-tuned to mitigate overfitting. Additionally, a Dropout layer was incorporated before the fully connected layer in the binary classification module, forming a hybrid architecture. The overall workflow of the ResNet-W model is illustrated in Figure 7.
In this study, a differentiated data processing strategy was adopted. The training set was augmented using a combination of random resized cropping, random horizontal flipping, mild color jittering (with small perturbations in brightness, contrast, saturation, and hue), and random erasing applied with a low probability and a small erasure area. In contrast, the validation and test sets were only subjected to resizing, center cropping, and normalization to prevent information leakage.
To enhance the reliability, stability, and generalizability of the proposed model, a stratified 10-fold cross-validation strategy was employed. Specifically, the entire dataset was first randomly partitioned into 10 mutually exclusive folds with balanced class distributions. Following the standard 10-fold cross-validation protocol, in each iteration, one fold was designated as the test set, two folds were assigned as the validation set, and the remaining seven folds were combined to form the training set. This procedure was repeated across iterations such that each fold was used for testing, validation, and training in turn. Consequently, every sample in the dataset participated in all three roles throughout the cross-validation process, effectively reducing the bias associated with a single fixed data split and enabling a more robust and comprehensive evaluation of model performance.
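The partitioning scheme above can be sketched as follows; this is a hypothetical illustration, and in particular the choice of which two folds serve as validation in each iteration (the two folds following the test fold, cyclically) is an assumption.

```python
import numpy as np

def stratified_folds(labels, n_folds=10, seed=0):
    """Assign each sample to one of n_folds folds with balanced class counts."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    fold_of = np.empty(len(labels), dtype=int)
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        fold_of[idx] = np.arange(len(idx)) % n_folds   # round-robin per class
    return fold_of

def split_iteration(fold_of, t, n_folds=10):
    """Iteration t: fold t is the test set, the next two folds (mod n_folds)
    form the validation set, and the remaining seven folds form the training set."""
    test = fold_of == t
    val = np.isin(fold_of, [(t + 1) % n_folds, (t + 2) % n_folds])
    train = ~(test | val)
    return train, val, test
```

Iterating `t` over all ten folds rotates every sample through the training, validation, and test roles.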
For model training, the LabelSmoothFocalLoss function was used, combining label smoothing to reduce overconfidence and a focal mechanism to address class imbalance, with the Adam optimizer and an initial learning rate of 0.001. The ReduceLROnPlateau scheduler adjusted the learning rate based on validation F1 scores. Training was conducted for a maximum of 30 epochs per fold, with early stopping (patience of 15 epochs) to prevent overfitting. The optimal model weights for each fold were saved based on the highest validation F1 score, and the final model performance was summarized by averaging the metrics across all 10 folds. A structured evaluation protocol was followed, incorporating comprehensive metrics and visualizations to assess model performance.
To ensure consistency with the experimental design and training strategy described above, the loss formulation and optimization procedure are detailed as follows.
Assume that the dataset contains C classes. For the i -th sample, the model predicts the probability of belonging to class j as P i j , while the corresponding ground-truth label is represented by a one-hot encoded vector Y i j , where Y i j = 1 if sample i belongs to class j , and Y i j = 0 otherwise.
In this study, the LabelSmoothFocalLoss is employed as the training objective. This loss function is derived from the standard cross-entropy loss by incorporating label smoothing to alleviate overconfidence and a focal mechanism to address class imbalance. As the foundational form, the standard cross-entropy loss is expressed as:
$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C} Y_{ij} \log P_{ij}$$
where N denotes the total number of samples. As the predicted probability P i j approaches the corresponding ground-truth label Y i j , the loss value decreases accordingly.
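The text specifies the cross-entropy base but not the exact smoothing/focal combination, so the following NumPy sketch shows one common formulation as an assumption: smoothed targets ($1-\varepsilon$ on the true class, $\varepsilon/(C-1)$ elsewhere) multiplied by a focal factor $(1 - P_{ij})^{\gamma_f}$ that down-weights easy, confidently classified samples.

```python
import numpy as np

def label_smooth_focal_loss(P, y, eps=0.1, gamma_f=2.0):
    """P: (N, C) predicted class probabilities; y: (N,) integer labels.
    Combines label smoothing and a focal modulation on cross-entropy.
    The values of eps and gamma_f are illustrative assumptions."""
    N, C = P.shape
    Y = np.full((N, C), eps / (C - 1))         # smoothed off-target mass
    Y[np.arange(N), y] = 1.0 - eps             # smoothed on-target mass
    P = np.clip(P, 1e-12, 1.0)                 # numerical stability
    return -np.mean(np.sum(Y * (1.0 - P) ** gamma_f * np.log(P), axis=1))
```

As with plain cross-entropy, the loss decreases as the predicted probabilities approach the (smoothed) targets, while the focal factor keeps hard examples dominant in the gradient.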
Model parameters are optimized using the Adam optimizer with an initial learning rate of 0.001, consistent with the training configuration described earlier. Adam maintains exponential moving averages of both the first-order and second-order moments of the gradients. At time step t , the first-order moment estimate n t and the second-order moment estimate c t are computed as:
$$n_t = \upsilon_1 n_{t-1} + (1 - \upsilon_1) k_t$$
$$c_t = \upsilon_2 c_{t-1} + (1 - \upsilon_2) k_t^2$$
where k t denotes the gradient of the loss with respect to the model parameters at time step t , and υ 1 and υ 2 are decay coefficients for the first-order and second-order moments, respectively, typically set to 0.9 and 0.999.
To correct the bias introduced by zero initialization of the moment estimates, bias-corrected moments are calculated as:
$$\hat{n}_t = \frac{n_t}{1 - \upsilon_1^t}, \qquad \hat{c}_t = \frac{c_t}{1 - \upsilon_2^t}$$
Based on these corrected estimates, the model parameters θ t are updated iteratively according to:
$$\theta_{t+1} = \theta_t - r\,\frac{\hat{n}_t}{\sqrt{\hat{c}_t} + \chi}$$
where $r$ denotes the current learning rate, which is dynamically adjusted using the ReduceLROnPlateau scheduler based on the validation F1 score, and $\chi$ is a small constant (typically $10^{-8}$) introduced to ensure numerical stability.
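For concreteness, the moment updates, bias correction, and parameter step above can be combined into a single NumPy routine; this is a didactic sketch (argument names `u1`, `u2`, `k_t` stand in for $\upsilon_1$, $\upsilon_2$, and the gradient), not a replacement for `torch.optim.Adam`.

```python
import numpy as np

def adam_step(theta, k_t, state, r=0.001, u1=0.9, u2=0.999, chi=1e-8):
    """One Adam update mirroring the formulas above.
    state holds the step counter t and the running moments n and c."""
    state["t"] += 1
    t = state["t"]
    state["n"] = u1 * state["n"] + (1 - u1) * k_t         # first-moment estimate
    state["c"] = u2 * state["c"] + (1 - u2) * k_t ** 2    # second-moment estimate
    n_hat = state["n"] / (1 - u1 ** t)                    # bias correction
    c_hat = state["c"] / (1 - u2 ** t)
    return theta - r * n_hat / (np.sqrt(c_hat) + chi)     # parameter step
```

On the very first step the bias-corrected moments recover the raw gradient, so the update magnitude is approximately the learning rate $r$ regardless of gradient scale.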
By integrating the loss function design, the Adam optimization strategy with adaptive learning rate scheduling, and the 10-fold stratified cross-validation framework described previously, a complete and coherent training workflow for the proposed model is established. The overall process is illustrated in Figure 8, which summarizes the key components of the training pipeline, including data augmentation, cross-validation partitioning, loss computation, parameter optimization, and learning rate adjustment, as well as their interactions within the system.

3. Results

3.1. Experiment Results and Analysis

The resulting feature map can then be visualized alongside the original speckle image of the seed, and pseudocolor mapping intuitively highlights the characteristics of the speckle map. Figure 9 illustrates the speckle feature extraction results obtained using different algorithms.
Next, using the same training set, four different classification models were trained, and their results were compared. Figure 10A shows that ResNet-W not only rapidly converges to a high range of 0.88–0.90 but also demonstrates significantly superior stability compared to the other models after epoch 10. In contrast, ResNet-50 exhibits a notable performance drop around epoch 15, VGG-16 fluctuates markedly during the early stages of training, and ResNet-D experiences multiple minor oscillations in later epochs. The accuracy curve of ResNet-W, however, remains consistently stable at a high level. Figure 10B shows that the precision of ResNet-W stabilizes above 0.88 after epoch 10, without the large fluctuations observed in the other models, while its overall precision level is comparable to the best performance achieved by the comparative models.
In Figure 11A, the F1 score of ResNet-W rapidly converges to a high range around 0.9 during the early stages of training (before epoch 5) and remains stable throughout subsequent training with minimal fluctuations. In contrast, ResNet-50 and ResNet-D exhibit multiple performance oscillations, while the F1 score of VGG-16 fluctuates considerably during the early training stages and only gradually approaches the level of ResNet-W in later epochs. In Figure 11B, the loss of ResNet-W quickly drops to an extremely low level near zero within the first five epochs and remains stable without fluctuations for the remainder of the training. By comparison, the loss of VGG-16 remains above 0.3, and ResNet-D stabilizes around 0.2, while only ResNet-50 reaches levels close to ResNet-W, albeit with slightly lower stability. In Figure 11C, the recall of ResNet-W rapidly rises above 0.9 after epoch 5 and remains consistently high and stable throughout training. In contrast, ResNet-50 shows a notable drop in recall around epoch 15, VGG-16 initially has a recall below 0.7 with significant fluctuations, and ResNet-D exhibits multiple minor oscillations in recall.
Subsequently, we trained the ResNet-W model on speckle feature map datasets processed using different feature extraction algorithms, and the results are shown in Figure 12. In Figure 12A, the accuracy of WGD rapidly increases to above 0.85 before epoch 5 and remains consistently close to 0.9 throughout subsequent training. In contrast, LSTCA hovers around 0.6, while FUJII and GD stay around 0.7, all exhibiting noticeable performance fluctuations. In Figure 12B, the F1 score of WGD quickly converges to a high value of 0.9 and remains stable, whereas FUJII, LSTCA, and GD consistently stay below 0.7 with frequent fluctuations, indicating substantially lower overall performance compared to WGD. In Figure 12C, the loss of WGD rapidly drops to an extremely low level of approximately 0.02 within the first five epochs and remains stable without significant fluctuations. In contrast, the loss of the other models stays above 0.03, reflecting notably weaker fitting performance. In Figure 12D, the precision of WGD quickly rises above 0.85 after epoch 5 and remains stably high, while the precision of the other models generally stays below 0.7, with LSTCA persistently around 0.6, demonstrating much lower accuracy and stability compared to WGD.
Table 4 summarizes the model performance under stratified 10-fold cross-validation, which ensures balanced class distributions and mitigates evaluation bias. The model achieved an average accuracy of 91.32% and an F1 score of 91.38%, indicating a favorable balance between precision (90.78%) and recall (92.04%). The low mean loss value (14.743 × 10⁻³) further suggests effective and stable model training. The standard deviations of accuracy, F1 score, precision, recall, and loss were 0.65, 0.67, 1.59, 1.95, and 1.203 × 10⁻³, respectively. Overall, these small variations indicate consistent performance across folds.
Although recall exhibited a relatively higher standard deviation, attributable to inter-fold variability, this fluctuation did not adversely affect overall performance, as both accuracy and F1 score remained consistently high. Across all ten folds, the model maintained a balanced classification behavior, with only minor differences between precision and recall in individual folds. Overall, these results demonstrate that the proposed model exhibits strong classification capability, stable cross-fold performance, and good generalization, highlighting its suitability for practical application scenarios.
The violin plot in Figure 13A illustrates the distributional characteristics of the training and validation losses. The training loss exhibits a mean value of 0.0185 (standard deviation = 0.0072), with a relatively concentrated yet moderately dispersed kernel density profile. In contrast, the validation loss shows a slightly lower mean (0.0174) and a notably smaller standard deviation (0.0040), indicating stable generalization behavior. The limited fluctuation range of the validation loss suggests the absence of significant overfitting and confirms that the model maintains consistent performance across training iterations.
Figure 13B shows the ROC curves of the proposed model under 10-fold cross-validation. All curves cluster near the upper-left corner, with AUC values ranging from 0.9691 to 0.9872, indicating strong and consistent discriminative capability across different data splits. Together with the stable training trends and cross-validation metrics, these results confirm the robustness and reliable generalization of the proposed model.
Figure 14 shows the epoch-wise performance of the proposed model under 10-fold cross-validation, presenting the evolution of training and validation accuracy, precision, recall, and F1-score over 50 epochs, together with inter-fold variability. In Figure 14A, training and validation accuracy increase consistently without evident overfitting or underfitting, while epoch-wise fluctuations decrease and gradually stabilize, indicating stable convergence. In Figure 14B, validation precision exhibits larger fluctuations in the early stage, whereas training precision increases smoothly. After approximately epoch 25, both curves stabilize around 90% with reduced variability. As shown in Figure 14C, recall remains stable and well aligned between training and validation, maintaining values of approximately 87% throughout most of the training process. In Figure 14D, the training and validation F1-scores show a steady upward trend with minimal epoch-wise variation, converging to around 88%. Overall, all metrics demonstrate clear convergence and strong agreement between training and validation under 10-fold cross-validation, confirming the robustness and generalization capability of the proposed model.

3.2. Model Prediction Results

The seed viability classification model in this study was developed by optimizing the ResNet-D backbone into the ResNet-W architecture for the binary dataset, achieving high precision (90.78%), recall (92.04%), and F1 score (91.38%). These results underscore the model’s stability and reliability in distinguishing seed viability categories. The detection results are presented in Figure 15.
Table 5 summarizes the performance of the proposed classification model, with all metrics computed from 500 independent samples, providing an external evaluation of the model’s generalization capability. The model achieves consistently high performance, with a mean accuracy of 92.24% and a mean macro-F1 score of 92.18%, demonstrating strong robustness beyond the training data. The mean class-specific accuracies reach 91.08% for the Live class and 92.94% for the Dead class, while the average confidence of correct predictions is 83.86%.
Stability analysis further supports these results, as the coefficients of variation for overall accuracy and macro-F1 remain low, indicating limited performance variability. The slightly higher variability observed in Live-class accuracy reflects modest challenges in viable-sample recognition but remains interpretable and well controlled. Overall, these findings confirm that the proposed model maintains reliable and stable performance when applied to independent samples, underscoring its suitability for practical deployment.
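The descriptive statistics behind this stability analysis follow the standard definitions; a minimal sketch is given below (the use of the sample standard deviation, ddof = 1, is an assumption about how the table values were computed):

```python
import numpy as np

def summarize_metric(values):
    """Descriptive statistics of a metric over repeated evaluation runs:
    mean, standard deviation, and coefficient of variation
    (CV = std / mean * 100, expressed in percent)."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    std = v.std(ddof=1)  # sample standard deviation
    return {"mean": mean, "std": std, "cv_percent": 100.0 * std / mean}
```

A low CV (e.g., the 1.68% reported for overall accuracy) indicates that fluctuations are small relative to the mean, which is what "limited performance variability" refers to here.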

4. Discussion

This study presents a non-destructive seed viability detection framework that integrates LSI, the WGD algorithm, and an optimized ResNet-W model, achieving a binary classification accuracy of 91.32% across various pea seed varieties, including Gancui-2, Jinwan No.6, Hongyun 211, Mawan No.1, and Wuxuwan No.2. This section discusses the key findings, underlying mechanisms, comparisons with state-of-the-art (SOTA) approaches, existing limitations, and directions for future research.

4.1. Core Findings and Mechanistic Insights

The superior performance of the proposed framework arises from two synergistic innovations: the WGD algorithm’s enhanced feature extraction and the ResNet-W model’s efficient feature learning, which together overcome key bottlenecks in traditional LSI-based seed viability detection.

4.1.1. WGD Algorithm: Targeted Enhancement of Dynamic Speckle Features

Traditional speckle feature extraction methods struggle to capture biologically meaningful dynamic changes. The WGD algorithm addresses this challenge by introducing a globally guided dynamic weighting mechanism (Equations (4)–(7)):
  • By computing the global mean (μ_D) and standard deviation (σ_D) of the inter-frame difference map means mean(D_i), the algorithm quantifies the “baseline” speckle variation intensity across all frames. Frames whose mean(D_i) deviates significantly from μ_D (i.e., those capturing critical physiological activity) are assigned higher weights, while noise-dominated frames are suppressed.
  • As summarized in Table 5, the proposed method demonstrates strong and stable classification performance on the 500 independent test samples. The model achieves a mean accuracy of 92.24% and a mean macro-F1 score of 92.18%, indicating robust generalization. The Dead class attains slightly higher accuracy and lower variability than the Live class, while the modest variability observed in Live-class recognition suggests potential for further refinement. Overall, these results confirm the effectiveness and stability of the proposed method in capturing class-discriminative features.
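The weighting mechanism described above can be illustrated with a short numerical sketch. This is not the paper's exact formulation from Equations (4)–(7): the absolute inter-frame difference and the deviation-based weight below are simplifying assumptions made for illustration.

```python
import numpy as np

def wgd_feature_map(frames):
    """Sketch of the weighted generalized difference (WGD) idea:
    frames whose inter-frame change deviates from the global baseline
    receive higher weight. `frames` is a (T, H, W) float array of a
    dynamic speckle sequence."""
    diffs = np.abs(np.diff(frames, axis=0))    # inter-frame difference maps D_i
    means = diffs.mean(axis=(1, 2))            # mean(D_i) for each frame pair
    mu_d = means.mean()                        # global mean of the means
    sigma_d = means.std()                      # global standard deviation
    # emphasize frames deviating from the baseline mu_d (assumed weight form)
    weights = 1.0 + np.abs(means - mu_d) / (sigma_d + 1e-8)
    weights /= weights.sum()
    return np.tensordot(weights, diffs, axes=1)  # weighted sum -> feature map
```

In this form, frame pairs whose mean difference departs from μ_D contribute more to the final feature map, mirroring the emphasis on physiologically active frames over noise-dominated ones.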

4.1.2. Optimized Deep Learning for Speckle Feature Classification

The ResNet-W model, derived from ResNet-D, addresses two practical challenges in agricultural AI, namely computational efficiency and overfitting on small datasets:
  • Replacing ResNet-D’s average-pooling-plus-1 × 1-convolution downsampling with learnable weighted average pooling followed by 1 × 1 convolution reduces redundant parameters while preserving fine-grained speckle features. Freezing 25 layers leverages pre-trained generic features to prevent overfitting, while DropBlock regularization further stabilizes training by mitigating over-reliance on local noise.
  • The proposed model demonstrates robust convergence, balanced precision–recall performance, and strong discriminative capability. The training and validation losses exhibit closely aligned mean values of 0.0185 and 0.0174 with small variances, indicating stable convergence without noticeable overfitting. Consistent with the 10-fold cross-validation results, which yield a mean accuracy of 91.32% and a mean F1-score of 91.38%, the ROC analysis further confirms reliable discrimination across different data splits, with all curves concentrated near the upper-left corner and AUC values ranging from 0.9691 to 0.9872. Overall, the combination of high classification performance, stable training behavior, and low inter-fold variability highlights the effectiveness and generalization robustness of the proposed ResNet-W model, supporting its suitability for practical seed viability detection under resource-constrained conditions.
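The "weighted average pooling + 1 × 1 convolution" downsampling can be sketched in PyTorch as follows. The shared learnable 2 × 2 pooling kernel is our assumption about how the weighting is parameterized; the paper's exact module may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedAvgDownsample(nn.Module):
    """Sketch of the ResNet-W downsampling path: a learnable weighted
    2x2 average pooling followed by a 1x1 convolution, replacing
    ResNet-D's plain avg-pool + 1x1 conv shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # one learnable weight per position of the 2x2 pooling window,
        # initialized to plain averaging (0.25 each)
        self.pool_w = nn.Parameter(torch.full((1, 1, 2, 2), 0.25))
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        w = self.pool_w / self.pool_w.sum()      # normalize to preserve scale
        # depthwise weighted pooling: same 2x2 kernel applied per channel
        x = F.conv2d(x, w.repeat(x.shape[1], 1, 1, 1),
                     stride=2, groups=x.shape[1])
        return self.bn(self.proj(x))
```

Because the pooling weights are learned jointly with the 1 × 1 projection, the shortcut can adapt which spatial positions dominate during downsampling instead of averaging them uniformly.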

4.2. Comparison with State-of-the-Art Methods

To contextualize the framework’s performance, we compare it with representative studies in LSI-based seed viability detection. The study in [10] proposes a 3D-CNN + LSI model for pea seed detection, achieving an accuracy of 92.3%; while non-destructive, it incurs high computational costs due to its reliance on 3D feature maps and is limited by slow inference and the need for large datasets. The work in [14] presents a multispectral + LSTCA method for wheat seed detection, achieving an accuracy of 94.1%, but it generalizes poorly across wheat varieties. The scheme in [3] combines FTHSP with an SVM for soybean seed detection, reaching an accuracy of 89.7%, but fails to capture the complex temporal dynamics of seed speckles. In contrast, the proposed WGD + ResNet-W framework, also targeting pea seeds, achieves a detection accuracy of 91.32%, although it is currently limited to pea seeds and remains sensitive to seed coat damage.
The proposed framework compares favorably with these methods in three key aspects:
  • Competitive accuracy, achieved through WGD’s targeted feature enhancement;
  • Lower computational cost, enabling real-time detection;
  • Stronger noise resistance, demonstrated by stable performance under varying illumination conditions.
These advantages directly address the FAO’s call for “efficient, scalable non-destructive seed testing,” with the potential to reduce global agricultural losses due to poor seed viability.

4.3. Limitations and Practical Considerations

Despite its strengths, the framework has limitations that need to be addressed for broader application:
  • Variety-Specific Generalization: The model was validated only on pea seeds. Seeds with different coat thicknesses (e.g., maize, cotton) or dormancy characteristics may exhibit distinct speckle patterns. For instance, thick-coated seeds may scatter laser light more uniformly, reducing WGD’s ability to detect internal activity.
  • Sensitivity to Seed Coat Damage: Seeds with damaged coats but intact viability may occasionally be misclassified. Physical damage alters surface scattering properties, generating artificial speckle variations that can mimic or obscure physiological signals.
  • Imaging Environment Constraints: While the He–Ne laser and vibration-isolated platform ensure high-quality speckle images in controlled conditions, field applications may encounter challenges that degrade feature quality.

4.4. Future Research Directions

To address these limitations and expand the framework’s utility, future work will focus on three key areas:
  • Multi-Crop and Multi-Modal Optimization: Construct a dataset of five or more crops (e.g., wheat, maize, and soybean) to fine-tune the fully connected layer of the ResNet-W model. Integrate NIR imaging with LSI, as NIR can penetrate seed coats more deeply, potentially resolving the seed coat damage issue by directly capturing internal physiological signals.
  • Embedded System Development: Port the WGD algorithm and ResNet-W model to edge devices using model quantization. This will reduce latency to <1 s per seed, enabling real-time screening on smallholder farms or in seed processing facilities.
  • Physics-Informed Deep Learning: Incorporate the physical model of laser speckle formation into the ResNet-W model’s loss function. This will enhance the model’s interpretability—linking activation heatmaps to specific physiological processes—and improve robustness to environmental noise.
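As a first step toward the embedded target, per-seed inference latency can be profiled before and after quantization (e.g., with torch.quantization.quantize_dynamic). Below is a minimal CPU latency probe; the default input shape is an assumption for illustration, not the system's actual input size.

```python
import time
import torch
import torch.nn as nn

def mean_latency_ms(model, input_shape=(1, 3, 224, 224), runs=20):
    """Average CPU inference time per forward pass, in milliseconds.
    One warm-up pass is run first so lazy initialization does not
    distort the timing."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        model(x)                      # warm-up pass
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - t0) * 1000.0 / runs
```

Comparing this figure for the FP32 and quantized models gives a direct measure of progress toward the sub-second per-seed target.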
In summary, the LSI-WGD-ResNet-W model framework advances non-destructive seed viability detection by bridging optical physics and intelligent data analysis. Its high accuracy, efficiency, and non-destructiveness make it a promising tool for precision agriculture. Future refinements will further extend its practical and scientific impact.

5. Conclusions

This study presents a high-precision, non-destructive framework for pea seed viability detection by integrating laser speckle imaging (LSI), the WGD algorithm, and the enhanced ResNet-W model. The proposed system establishes a seamless pipeline from optical sensing to intelligent classification, achieving 91.32% accuracy, 90.78% precision, and 92.04% recall in binary seed viability classification tasks.
The main contributions of this paper are summarized as follows:
  • Enhanced Speckle Feature Extraction: The weighted generalized difference (WGD) algorithm is proposed to improve the accuracy and robustness of feature extraction.
  • Development of an Enhanced ResNet-W Model: A ResNet-W model specifically tailored for seed viability detection is developed, incorporating structural optimizations for more accurate identification of seed health status from biospeckle data.
  • Superior Performance: Experimental results demonstrate that the proposed system significantly improves detection efficiency and classification accuracy, outperforming existing methods.
From a scientific perspective, the WGD algorithm introduces a globally guided weighting strategy that enhances biologically meaningful dynamic speckle features while suppressing noise. The ResNet-W model, in turn, improves residual learning through weighted average pooling and transfer learning, mitigating overfitting on small agricultural datasets and enhancing computational efficiency. Together, these components form a robust, interpretable, and efficient solution for seed viability analysis.
From a practical standpoint, the proposed framework preserves seed integrity, relies on cost-effective optical hardware, and produces reliable results consistent with cultivation outcomes. Its compact architecture and rapid inference capability make it well-suited for real-time quality control and intelligent seed sorting in agricultural production systems.
However, the current study is limited to pea seeds and operates under controlled imaging conditions. Future work will focus on extending the framework to a broader range of crop species, incorporating multimodal imaging techniques such as near-infrared and LSI fusion, and developing lightweight, edge-based implementations for field applications.
Overall, the proposed LSI-WGD-ResNet-W framework demonstrates a robust integration of optical sensing and deep learning for agricultural applications, providing a foundation for next-generation intelligent seed quality monitoring systems. This study illustrates that combining advanced speckle feature analysis with an enhanced residual network offers a reliable, non-destructive, and high-precision approach for rapid seed viability assessment.

Author Contributions

Methodology, S.M. and W.L.; Software, J.Z.; Data curation, X.L. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “the National Natural Science Foundation of China” grant number 31770769, and by “the Scientific Research Plan Project of the Beijing Municipal Education Commission” grant number KM201911417008.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.18041269.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Reed, R.C.; Bradford, K.J.; Khanday, I. Seed Germination and Viability: Ensuring Crop Sustainability in a Changing Climate. Heredity 2022, 128, 450–459.
  2. Zhou, X.H.; He, W.M. Climate Warming Facilitates Seed Germination in Native but Not Invasive Solidago Canadensis Populations. Front. Ecol. Evol. 2020, 8, 595214.
  3. Singh, P.; Chatterjee, A.; Rajput, L.S.; Rana, S.; Kumar, S.; Nataraj, V.; Bhatia, V.; Prakash, S. Development of an Intelligent Laser Biospeckle System for Early Detection and Classification of Soybean Seeds Infected with Seed-Borne Fungal Pathogen (Colletotrichum Truncatum). Biosyst. Eng. 2021, 212, 442–457.
  4. Qiao, J.; Liao, Y.; Yin, C.; Yang, X.; Tú, H.M.; Wang, W.; Liu, Y. Vigour Testing for the Rice Seed with Computer Vision-Based Techniques. Front. Plant Sci. 2023, 14, 1194701.
  5. Braga, R.A., Jr.; Contado, J.L.; Ducatti, K.R.; da Silva, E.A.A. Analysis of Seed Vigor Using the Biospeckle Laser Technique. AgriEngineering 2025, 7, 3.
  6. Liang, Y.; Li, Z.; Shi, J.; Zhang, N.; Qin, Z.; Du, L.; Zhai, X.; Shen, T.; Zhang, R.; Zou, X.; et al. Advances in Hyperspectral Imaging Technology for Grain Quality and Safety Detection: A Review. Foods 2025, 14, 2977.
  7. Wonggasem, K.; Wongchaisuwat, P.; Chakranon, P.; Onwimol, D. Utilization of Machine Learning and Hyperspectral Imaging Technologies for Classifying Coated Maize Seed Viability: A Case Study on the Assessment of Seed DNA Repair Capability. Agronomy 2024, 14, 1991.
  8. Balmages, I.; Smite, K.; Bļizņuks, D.; Reinis, A.; Lihachev, A.; Lihacova, I. Adapted Correlation Methods for Laser Speckle Imaging of Microbial Activity: Evaluation and Rationale. Sensors 2025, 25, 5772.
  9. Thakur, P.S.; Bhatia, V.; Rajput, L.S.; Rana, S.; Prakash, S. Laser Biospeckle Technique for Evaluating Biotic Stress on Seed Germination. In Proceedings of the 2022 Workshop on Recent Advances in Photonics (WRAP 2022), Jaipur, India, 12–14 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 78–83.
  10. Thakur, P.S.; Kumar, A.; Tiwari, B.; Gedam, B.; Bhatia, V.; Rana, S.; Prakash, S. Machine Learning Based Biospeckle Technique for Identification of Seed Viability Using Spatio-Temporal Analysis. In Proceedings of the 2022 Workshop on Recent Advances in Photonics (WRAP 2022), Jaipur, India, 12–14 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 102–107.
  11. Thakur, P.S.; Krejcar, O.; Bhatia, V.; Prakash, S. Deep Learning Based Processing Framework for Spatio-Temporal Analysis and Classification of Laser Biospeckle Data. Opt. Laser Technol. 2024, 169, 110138.
  12. Thakur, P.S.; Renju, P.B.; Pal, P.; Paswan, M. A Smartphone-Based Platform to Grade Seeds Based on Biological Activity. In Proceedings of the 2024 SPIE Photonics India, New Delhi, India, 18–20 January 2024; SPIE: Bellingham, WA, USA, 2024; PC12879, pp. 1–6.
  13. Contado, E.W.N.; Pasqual, M.; Dória, J.; Gonzalez-Peña, R.J.; Dupuy, L.X.; Braga, R.A. Assessment of the Use of Infrared Laser for Dynamic Laser Speckle (DLS) Technique. Agriculture 2023, 13, 546.
  14. Renju, P.B.; Thakur, P.S.; Rai, B.; Pal, P. AgriSPEC: A Smartphone-Based, Compact Biospeckle Imager for Assessing Seed Viability. npj Sustain. Agric. 2025, 3, 54.
  15. Félix-Quintero, H.; Avila-Gaxiola, J.C.; Millan-Almaraz, J.R.; Yee-Rendón, C.M. Feature Comparison from Laser Speckle Imaging as a Novel Tool for Identifying Infections in Tomato Leaves. Smart Agric. Technol. 2024, 9, 100603.
  16. Singh, P.; Chatterjee, A.; Bhatia, V.; Prakash, S. Application of Laser Biospeckle Analysis for Assessment of Seed Priming Treatments. Comput. Electron. Agric. 2020, 169, 105212.
  17. Genze, N.; Bharti, R.; Grieb, M.; Schultheiss, S.J.; Grimm, D.G. Accurate Machine Learning-Based Germination Detection, Prediction and Quality Assessment of Three Grain Crops. Plant Methods 2020, 16, 99.
  18. Hu, X.; Yang, L.; Zhang, Z. Non-Destructive Identification of Single Hard Seed via Multispectral Imaging Analysis in Six Legume Species. Plant Methods 2020, 16, 59.
  19. Bouzaouia, S.; Ryckewaert, M.; Héran, D.; Ducanchez, A.; Bendoula, R. Using Dynamic Laser Speckle Imaging for Plant Breeding: A Case Study of Water Stress in Sunflowers. Sensors 2024, 24, 5260.
  20. Ansari, M.Z.; Ansari, M.Z. Evaluation of Biological Activity via Biospeckle Laser Imaging. Biophys. Rep. 2025, 11, 1.
  21. Zhang, Q.; Pandit, A.; Liu, Z.; Guo, Z.; Muddu, S.; Wei, Y.; Pereg, D.; Nazemifard, N.; Papageorgiou, C.; Yang, Y.; et al. Non-Invasive Estimation of the Powder Size Distribution from a Single Speckle Image. Light. Sci. Appl. 2024, 13, 15.
  22. Surkov, Y.; Timoshina, P.; Serebryakova, I.; Stavtcev, D.; Kozlov, I.; Piavchenko, G.; Meglinski, I.; Konovalov, A.; Telyshev, D.; Kuznetcov, S.; et al. Laser Speckle Contrast Imaging with Principal Component and Entropy Analysis: A Novel Approach for Depth-Independent Blood Flow Assessment. Front. Optoelectron. 2025, 18, 143.
  23. Miao, G.; Ren, X.; Guo, R.; Peng, Z. Application of an Improved Oriented Object Detection Algorithm in Remote Sensing Images. In Proceedings of the 2021 International Conference on Wireless Communications and Smart Grid (ICWCSG 2021), Xi’an, China, 26–28 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 34–37.
  24. Kalibhat, R.; Kulkarni, R.; Mondal, P.K. Laser Speckle Contrast Imaging for Plant and Seed Characterization. In Proceedings of the 2024 IEEE Applied Sensing Conference (APSCON 2024), Bangalore, India, 10–12 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 201–206.
  25. Morales-Vargas, E.; Peregrina-Barreto, H.; Fuentes-Aguilar, R.Q.; Padilla-Martinez, J.P.; Garcia-Suastegui, W.A.; Ramirez-San-Juan, J.C. Improving Blood Vessel Segmentation and Depth Estimation in Laser Speckle Images Using Deep Learning. Information 2024, 15, 185.
  26. Kaler, N.; Bhatia, V.; Mishra, A.K. Deep Learning-Based Robust Analysis of Laser Bio-Speckle Data for Detection of Fungal-Infected Soybean Seeds. IEEE Access 2023, 11, 89331–89348.
  27. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 6105–6114. Available online: https://proceedings.mlr.press/v97/tan19a.html (accessed on 4 November 2025).
  28. Bello, I.; Zoph, B.; Fedus, W.; Shlens, J.; Le, Q.V. Revisiting ResNets: Improved Training and Scaling Strategies. arXiv 2022, arXiv:2203.07285.
  29. Wei, Z.; Masouros, C.; Liu, F. Secure Directional Modulation with Few-Bit Phase Shifters: Optimal and Iterative-Closed-Form Designs. IEEE Trans. Commun. 2021, 69, 486–500.
  30. Chen, X.; He, K. Exploring Simple Siamese Representation Learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1575–1585.
  31. Li, J.; Xu, F.; Song, S.; Qi, J. A Maize Seed Variety Identification Method Based on Improving Deep Residual Convolutional Network. Front. Plant Sci. 2024, 15, 1382715.
  32. Yu, H.; Chen, Z.; Song, S.; Chen, M.; Yang, C. Classification of Rice Seeds Grown in Different Geographical Environments: An Approach Based on Improved Residual Networks. Agronomy 2024, 14, 1244.
Figure 1. Germination experiments for different seed groups: (A) Viable seeds: Seeds that were incubated under optimal conditions and exhibited high germination activity, (B) Desiccation-inactivated seeds: Seeds subjected to desiccation in a thermostatic drying oven, (C) Microbial-inactivated seeds: Seeds inactivated through microbial exposure.
Figure 2. Schematic diagram of the laser speckle imaging (LSI) system for seeds.
Figure 3. Flow diagram of the seed grouping and image capture process.
Figure 4. Schematic of the seed speckle image preprocessing procedure.
Figure 5. Schematic diagram of dynamic weighting coefficient design.
Figure 6. Flowchart of the learnable weighted downsampling module.
Figure 7. Schematic diagram of the ResNet-W model architecture.
Figure 8. Schematic of the overall workflow of the ResNet-W model.
Figure 9. Comparison of speckle feature maps generated using four algorithms.
Figure 10. Accuracy results for various speckle feature extraction methods paired with two training models: (A) classification accuracy, (B) classification precision.
Figure 11. Training results for various speckle feature extraction methods paired with two training models: (A) classification F1-score, (B) training loss, (C) recall.
Figure 12. Comparison of different algorithms using the same model: (A) classification accuracy, (B) F1-score, (C) training loss, (D) classification precision.
Figure 13. (A) Training vs. validation loss distribution of the proposed model; (B) 10-fold cross-validation ROC curves.
Figure 14. Trends of model evaluation metrics under 10-fold cross-validation: (A) classification accuracy for training and validation sets across epochs, (B) precision trends for training and validation, (C) recall trends for training and validation, (D) F1-score trends for training and validation.
Figure 15. Partial detection results of the model. Original Image: WGD-processed seed speckle feature images; Heatmap: Seed image feature heatmaps and images; Superimposed: Overlay visualization of seed heatmaps.
Table 1. Key Parameters of the Laser Speckle Imaging System.
Equipment Type | Model Number | Key Parameter | Specification Value
He–Ne Laser | 25-LHR-911-230 | Wavelength | 632.8 nm
 | | Output Power | 10 mW
CCD Camera | MV-EM120C | Resolution | 1280 × 960 pixels
 | | Frame Rate | 30 fps
 | | Exposure Time | 10 ms
Industrial Lens | MVL-KF1624M-25MP | Focal Length | 16 mm
 | | Numerical Aperture (NA) Range | 0.03125–0.208
Table 2. Model evaluation metrics under different numbers of frozen layers.
Number of Frozen Layers | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
5 | 59 | 55.2 | 94 | 69.6
10 | 76 | 76 | 75 | 76
15 | 82 | 74.2 | 98 | 84.4
20 | 89 | 85.4 | 94 | 89.5
25 | 91 | 93.4 | 90 | 91.6
30 | 88 | 84.3 | 94 | 89.4
Table 3. Detailed Disassembly of ResNet-W Network Structure.
Network Module | Sub-Module Composition | Improvement Position | Frozen/Fine-tune
Initial Convolutional Layer | 7 × 7 convolution, followed by Batch Normalization (BN), ReLU activation, and 3 × 3 max pooling | Original structure retained | Frozen
Stage 1 (conv2_x) | 3 ordinary, non-downsampling residual blocks; each block contains 1 × 1 convolution, BN, ReLU, 3 × 3 convolution, BN, ReLU, 1 × 1 convolution, BN, shortcut connection, and ReLU in sequence | No improvement | Frozen
Stage 2 (conv3_x) | 4 residual blocks; 1st block: learnable weighted average pooling, followed by 1 × 1 convolution, BN, ReLU, 3 × 3 convolution, BN, ReLU, 1 × 1 convolution, BN, shortcut connection, and ReLU; last 3 blocks same as the ordinary blocks in Stage 1 | 1st residual block: replaces ResNet-D’s “1 × 1 convolution combined with average pooling” with “learnable weighted average pooling combined with 1 × 1 convolution” to retain fine-grained speckle features | Frozen
Stage 3 (conv4_x) | 6 residual blocks; 1st block structured as in Stage 2; last 5 blocks same as the ordinary blocks in Stage 1 | 1st residual block: enhances retention of medium-level speckle dynamic features | Frozen
Stage 4 (conv5_x) | 3 residual blocks; 1st block structured as in Stage 2; last 2 blocks same as the ordinary blocks in Stage 1 | 1st residual block: preserves high-level discriminative features | Fine-tune
Global Average Pooling | 7 × 7 average pooling | No improvement | Fine-tune
Fully Connected Layer | Dropout layer, followed by 2 output neurons and Sigmoid activation | Newly added Dropout layer | Fine-tune
Table 4. Performance metrics of the model under 10-fold stratified cross-validation.
Fold | Best Epoch | Loss (×10⁻³) | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%)
1 | 37 | 14.326 | 91.00 | 89.575 | 92.80 | 91.159
2 | 44 | 16.714 | 90.40 | 92.083 | 88.40 | 90.204
3 | 45 | 13.798 | 91.80 | 91.968 | 91.60 | 91.784
4 | 47 | 15.236 | 91.20 | 89.922 | 92.80 | 91.339
5 | 49 | 16.181 | 91.00 | 88.679 | 94.00 | 91.262
6 | 28 | 13.144 | 92.60 | 93.117 | 92.00 | 92.555
7 | 50 | 15.158 | 91.40 | 92.946 | 89.60 | 91.242
8 | 45 | 13.112 | 92.00 | 89.474 | 95.20 | 92.248
9 | 45 | 14.464 | 90.80 | 89.844 | 92.00 | 90.909
10 | 42 | 15.293 | 91.00 | 90.196 | 92.00 | 91.089
Mean | 43.2 | 14.743 | 91.32 | 90.780 | 92.04 | 91.380
Standard Deviation | 6.5 | 1.203 | 0.65 | 1.590 | 1.95 | 0.670
Table 5. Core and Class-Specific Performance Metrics of the Classification Model with Descriptive Statistics.
Performance Metric | Accuracy (%) | F1 Score (%) | Live Class Accuracy (%) | Dead Class Accuracy (%) | Confidence of Correct Predictions (%)
Mean | 92.24 | 92.18 | 91.08 | 92.94 | 83.86
Standard Deviation | 1.55 | 1.63 | 2.75 | 1.27 | 1.36
Coefficient of Variation (%) | 1.68 | 1.77 | 3.01 | 2.22 | 2.65

Share and Cite

MDPI and ACS Style

Men, S.; Zhang, J.; Liu, X.; Sun, T.; Liu, W. Rapid Seed Viability Detection Using Laser Speckle Weighted Generalized Difference with Improved Residual Networks. Agronomy 2026, 16, 81. https://doi.org/10.3390/agronomy16010081

