Article

A Rapid Sand Gradation Detection Method Based on Dual-Camera Fusion

School of Computer and Information Science, Chongqing Normal University, No. 37, University City Middle Road, Chongqing 401331, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Buildings 2025, 15(14), 2404; https://doi.org/10.3390/buildings15142404
Submission received: 9 June 2025 / Revised: 27 June 2025 / Accepted: 30 June 2025 / Published: 9 July 2025
(This article belongs to the Special Issue AI in Construction: Automation, Optimization, and Safety)

Abstract

Precise grading of manufactured sand is vital to concrete performance, yet standard sieve tests, though accurate, are too slow for online quality control. Thus, we devised an image-based inspection method combining a dual-camera module with a Temporal Interval Sampling Strategy (TISS) to enhance throughput while maintaining precision. In this design, a global wide-angle camera captures the entire particle field, whereas a local high-magnification camera focuses on fine fractions. TISS selects only statistically representative frames, effectively eliminating redundant data. A lightweight segmentation algorithm based on geometric rules cleanly separates overlapping particles and assigns size classes using a normal-distribution classifier. In tests on ten 500 g batches of manufactured sand spanning fine, medium, and coarse gradations, the system processed each batch in an average of 7.8 min using only 34 image groups. It kept the total gradation error within 12% and the fineness-modulus deviation within ±0.06 compared to reference sieving. These results demonstrate that the combination of complementary optics and targeted sampling can provide a scalable, real-time solution.

1. Introduction

The particle-size gradation of manufactured sand (fine aggregate) is a key indicator of aggregate quality. It governs the packing structure and stress-transfer pathways in concrete, thereby affecting workability, mechanical performance, and durability [1,2]. As construction standards rise and schedules tighten, the industry urgently requires rapid, fully automated gradation measurement to replace labour-intensive manual methods [3].
Two mainstream approaches are currently used. Traditional sieve analysis is standardised and accurate, but it relies on manual work, lacks repeatability, and cannot support continuous online monitoring [4,5]. Image-based techniques offer non-contact operation and high levels of automation, making them an attractive alternative [6]. However, existing systems rarely meet industrial efficiency demands. A 500 g sample of manufactured sand contains millions of particles between 0.075 mm and 4.75 mm, the size range defined for construction sand in Chinese national standards [7,8]. This classification, though different from some international standards [9,10], reflects common engineering practice in China.
When pixel density is increased to detect particles smaller than 0.3 mm, the field of view narrows so much that thousands of images are needed to cover one sample. Enlarging the field of view retains coverage but sacrifices resolution, making fine particles hard to detect. This hardware trade-off leads to image acquisition and processing times that often exceed production control requirements [11].
The present study tackles this bottleneck by reducing both the number of images captured and the processing burden, without compromising accuracy. The proposed solution introduces three principal innovations:
  • Dual-camera architecture with temporal-interval sampling: This design pairs a wide-angle global camera with a high-magnification local camera, selecting only statistically representative frames. The approach significantly enhances acquisition efficiency without compromising coverage or detail.
  • Adaptive segmentation with statistical classification: An adaptive segmentation algorithm for bonded particles is integrated with a statistical size-classification model. This ensures stable detection accuracy, even under sparse sampling conditions.
  • Comprehensive performance evaluation: Experiments on ten 500 g batches covering fine, medium, and coarse gradations showed an average detection cycle of 7.8 min per batch. The total gradation error remained below 12%, and fineness-modulus deviation remained within ±0.06, confirming the method’s suitability for scalable, real-time industrial deployment.

2. Related Work

Given the broad size range and high packing density of fine aggregates, recent research has aimed to improve all stages of image-based gradation detection, including camera design, particle segmentation, and size analysis. The accuracy and adaptability of image capture ultimately constrain the performance of downstream processing. To address fluctuating illumination and complex particle piles under field conditions, many tailored imaging schemes have been developed.
Li et al. [12] created a digital acquisition system with adjustable lighting and resolution that permits real-time monitoring on construction sites, thereby boosting both speed and robustness. Huang et al. [13] devised a falling-particle platform that enhances image quality through grayscale conversion and filtering, yet its camera cannot resolve particles finer than 0.15 mm. Lin et al. [14] addressed this gap with a dual-camera, multiscale setup designed to recover missing fine fractions, whereas Zhao et al. [15] took a different approach by using three cameras to view rotating aggregates from multiple angles and building a 2D image library to mitigate dynamic imaging errors. However, even with such setups, dynamic acquisition demands high-speed shutters and precise lighting synchronisation, and ultra-fine grains tend to disperse during motion. These factors impair data stability. To avoid the challenges of dynamic imaging, Zhang et al. [16] adopted a static local sampling approach. Multiple close-up shots provided reliable fine-particle estimates, but repeated sampling greatly slowed the workflow and failed to meet real-time, high-throughput demands.
Because sand particles naturally clump and overlap, inadequate segmentation directly degrades size measurements. Classical segmentation methods—for example, the threshold-plus-watershed routine proposed by Leonardo et al. [17] and the wavelet-based variant introduced by Zhang et al. [18]—can split some adhered grains in static images. However, these approaches rely on manually tuned thresholds and are prone to both over-segmentation and under-segmentation. In recent years, deep learning has been explored as an alternative solution. Zhu et al. [19] built a two-stage fully convolutional network (FCN) that first detects targets, then separates contacts; Cao et al. [20] proposed Multi-ResUNet, which inserts Inception blocks and residual links to handle severe occlusion; and Yang et al. [21] applied Mask R-CNN augmented with generative adversarial network (GAN)-generated data to improve generalisation. Despite their higher accuracy, these deep models still struggle with densely adhered particles and are sensitive to scene changes. They also require large labeled datasets and long training cycles, making industrial deployment difficult.
Another challenge is converting two-dimensional image data into accurate gradation estimates. Many studies have tried to translate 2D particle contours into gradation proxies. For instance, Kumara et al. [22] recommended using the Feret diameter for irregular particle shapes, Liu et al. [23] added an aspect-ratio metric to reflect particle flatness, Zhou et al. [24] paired the projected area with the Feret diameter to estimate particle volume, and Xu et al. [25] introduced a correction factor to reduce bias in volume estimates from 2D images. However, any purely 2D projection misses the particle thickness, so errors persist. Therefore, researchers have explored 3D approaches: Liang et al. [26] used laser scanning to capture particle shapes, and Su et al. [27] employed micro-focus X-ray computed tomography (CT) to reconstruct high-fidelity 3D models. However, both techniques require expensive equipment and intensive computation, which limits their on-site practicality.
Deep learning has also been applied to particle classification and overall gradation prediction. Siyao et al. [28] enhanced the U-Net architecture with multi-scale fusion and a custom loss function for this task. Li et al. [29] used transfer learning to train a ResNet50 classifier on sand images. Chen et al. [30] fitted a nonlinear mapping between image-derived gradations and true gradations for dynamic calibration. Buscombe’s SediNet [31] performs end-to-end gradation prediction, while Kim et al. [32] used synthetic data with transfer learning to compensate for limited training samples. Li and Iskander [33] achieved accurate particle sorting through conventional supervised learning on hand-crafted image descriptors. Although these data-driven models excel in accuracy, they depend on large, well-annotated datasets and long training cycles, which complicates their deployment. As a result, they are not yet well suited for real-time industrial monitoring.
In short, image-based gradation analysis has made substantial progress, yet the full pipeline still struggles to balance speed and precision. This study addresses this challenge by integrating dual-view collaborative imaging, multiscale region fusion, and a weighted error-correction framework. This combination stabilises fine-fraction detection without the need for extreme image resolution, achieving high accuracy while significantly improving efficiency and facilitating on-site deployment.

3. Methodology

This study presents a high-throughput image analysis pipeline for manufactured sand gradation that addresses the twin challenges of fine-particle resolution and data capture speed. The hardware integrates quantitative feeding, vibration-assisted dispersion, synchronised dual-camera imaging, and pneumatic sand cleaning within a closed-loop automated platform. A Temporal Interval Sampling Strategy (TISS) accelerates acquisition without compromising representativeness. On the software side, multi-frame fusion background subtraction and block-wise adaptive thresholding feed into a Recursive Concavity-Guided Segmentation (RCGS) algorithm, which effectively separates adhered grains and generates reliable particle contours. Gradation is computed by combining a normal-distribution size classifier with a volume estimator based on Feret diameter and projected area. Finally, a dual-view fusion module reconciles global and local data through normalisation and weighted error correction, delivering accurate overall gradation. Together, these innovations enhance both precision and throughput, offering a practical solution for real-time fine-aggregate quality monitoring in industrial settings.

3.1. Image Acquisition Optimization

Traditional image-based detection systems often suffer from low cleaning efficiency, uneven sample coverage, limited fine-particle visibility, and long processing times. To address these challenges, we implemented systematic optimisations in both hardware and sampling strategy. The closed-loop detection platform now integrates quantitative feeding, vibration-assisted dispersion, synchronised dual-camera imaging, and pneumatic sand cleaning, greatly enhancing acquisition stability and fine-particle visibility. However, the system’s performance may be limited when processing sands with high cohesion or excessive moisture, as these conditions reduce dispersion effectiveness and impact detection accuracy.
On the sampling side, we adopt TISS under the assumption of uniform particle distribution. This strategy reduces detection time by skipping selected acquisition rounds. Its effectiveness was verified against a continuous-shooting approach, which served as the benchmark for traditional multi-group sampling.
Within the acquisition workflow, key hardware enhancements improve sampling accuracy and measurement stability. A high-precision load cell beneath the feed bin enables quantitative feeding by real-time weighing, automatically stopping once the target mass (1.5 g) is reached. This reduces sample mass fluctuations to within ±1% and ensures consistent dosing. After feeding, sand is discharged onto a flexible vibration tray driven by a multi-axis voice-coil motor array, which generates controlled vertical and horizontal micro-vibrations. This effectively breaks up clusters and redistributes sand, addressing stacking and adhesion issues that could impair image analysis. Finally, the original vibration-and-flip cleaning mechanism is replaced with directed air blowing, which shortens cleaning time and minimises residual sand interference, improving platform cleanliness and image consistency.
The dual-camera module is also optimised. A wide-angle global camera and high-magnification local camera, synchronised by a PLC, capture complementary views: the global camera records overall distribution, while the local camera resolves micro-scale detail. Compared with frame-by-frame stitching, this configuration achieves full coverage with fewer images and significantly less operating time. Figure 1 shows the overall schematic diagram of the sampling device components.
Under current settings, each sample cycle requires about 2 s for weighing and feeding, 4 s for release, 3 s for vibration dispersion, 1 s for imaging, and 5 s for pneumatic cleaning, totalling roughly 15 s per sample. Detailed camera parameters are listed in Table 1.
Although a single acquisition now completes in 15 s, exhaustively scanning all 334 groups in a 500 g batch would still take over 1.3 h. TISS removes this bottleneck by sampling at a fixed interval: assuming an approximately uniform tray distribution, the system acquires images only once every interval groups, where “interval” is a predefined sampling parameter. Skipping the intervening feed–imaging cycles reduces both the image count and the runtime. For example, at interval = 10, only 34 groups are captured and the batch time drops to 8.5 min, a roughly 16-fold speed-up compared with the 2.13 h required by the original, unoptimised workflow.
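As a back-of-the-envelope illustration, the interval arithmetic above can be sketched in a few lines of Python. The constants mirror the figures quoted in the text (334 groups per 500 g batch, roughly 15 s per group); the function name is ours, not from the authors’ implementation.

```python
import math

SECONDS_PER_GROUP = 15   # weigh + feed + disperse + image + clean, per the text
TOTAL_GROUPS = 334       # ~500 g batch at ~1.5 g per tray load

def tiss_batch_time(interval: int):
    """Return (groups sampled, batch time in minutes) for a given TISS interval."""
    sampled = math.ceil(TOTAL_GROUPS / interval)   # one group kept per interval
    return sampled, sampled * SECONDS_PER_GROUP / 60
```

With interval = 10 this gives 34 sampled groups and about 8.5 min per batch, matching the figures in the text.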
With the interval set in this range, combining TISS with single-frame sampling reduced the batch time to 8 min while keeping the total gradation error under 12%, meeting the requirements for rapid on-site grading in industrial settings.

3.2. Image Processing

3.2.1. Block-Wise Adaptive Threshold Segmentation

Complex backgrounds and uneven lighting can complicate the extraction of sand-particle contours from images. Texture artefacts, brightness gradients, and sensor noise are common in our setup, and the edges of particles smaller than 0.15 mm are easily lost in the background. Conventional single-frame global thresholding fails under these conditions because it cannot adjust to local brightness variations or preserve the boundaries of very small particles.
To overcome these limitations, we introduce a two-stage background-suppression scheme that combines multi-frame background modelling with block-wise adaptive thresholding. First, consecutive frames are fused to isolate the stable background component, thereby suppressing fixed-pattern noise and repetitive texture. The resulting averaged image serves as an accurate background estimate. Next, the image is partitioned into local blocks, and each block is thresholded based on its luminance statistics, enabling localised binarisation. In essence, multi-frame fusion yields a clean background, while block-wise adaptation compensates for spatial illumination differences. Together, these steps greatly improve the retention and boundary integrity of fine particles in complex scenes. Pseudocode for the full segmentation procedure is given in Algorithm 1.
Algorithm 1: Multi-frame Adaptive Sand Particle Segmentation
(Pseudocode for Algorithm 1 appears as an image in the published article.)
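Since the pseudocode itself is published as an image, the following is a minimal Python sketch of the two-stage idea described above, assuming grayscale frames supplied as NumPy arrays and particles darker than the background. The block size and the mean-minus-k-sigma threshold are illustrative choices, not the authors’ tuned parameters.

```python
import numpy as np

def segment_sand(frames, block=32, k=0.5):
    """Binarise a sand image: fuse consecutive frames to suppress sensor
    noise, then threshold each block from its own luminance statistics."""
    fused = np.stack(frames).astype(np.float32).mean(axis=0)  # multi-frame fusion
    mask = np.zeros(fused.shape, dtype=bool)
    h, w = fused.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = fused[y:y + block, x:x + block]
            t = tile.mean() - k * tile.std()   # local, illumination-adaptive threshold
            mask[y:y + block, x:x + block] = tile < t   # dark particles -> foreground
    return mask
```

Averaging frames stabilises the background estimate before thresholding, and letting each block derive its own threshold is what preserves faint sub-0.15 mm particle edges under uneven lighting.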

3.2.2. Recursive Concavity-Guided Segmentation (RCGS)

Following background suppression and thresholding, adhered or touching particles must be separated to isolate individual grains. To accomplish this, the RCGS algorithm is employed, utilizing efficient, rule-based techniques to accurately identify and segment adhered particles. The overall process is outlined in Figure 2.
(1)
Adhesion Determination.
Before segmenting a particle cluster, RCGS first assesses whether a given contour corresponds to a single grain or an adhered aggregate. This decision uses a hybrid approach combining geometric descriptors—such as aspect ratio, solidity, and convexity—with an analysis of concave regions along the contour. If the metrics indicate a single particle, the contour remains unchanged; otherwise, segmentation proceeds.
(2)
Iterative Concavity-Based Splitting.
For identified adhered clusters, RCGS iteratively splits the contour along lines connecting pairs of significant concave points. At each iteration, the closest pair of concave points (based on Euclidean distance) is selected, and the particle is bisected accordingly. Each resulting sub-contour is then re-evaluated using the adhesion criteria from step (1).
(3)
Recursion and Termination.
This split-and-check cycle recurses until all contours satisfy the single-particle criteria or can no longer be divided by concavity rules. By combining precise adhesion detection with recursive splitting, RCGS effectively separates complex particle clusters without relying on training data. Compared to fixed-threshold or morphology-based methods, RCGS offers enhanced adaptability and produces physically meaningful segmentation boundaries.
Overall, the RCGS pipeline integrates concave-defect detection, nearest-point splitting, and iterative refinement, enabling reliable decomposition of adhered particles with complex shapes. As shown in Figure 3, the method successfully separates merged contours into individual, geometrically consistent grains.
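A simplified sketch of the RCGS split-and-check cycle is given below. It uses solidity (contour area over convex-hull area) as the single-grain criterion and splits along the closest pair of reflex (concave) vertices; the 0.9 solidity threshold and the reflex test are our illustrative stand-ins for the paper’s fuller set of geometric descriptors (aspect ratio, convexity, etc.).

```python
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(pts):
    """Shoelace area of a closed polygon given as an (n, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def concave_indices(pts):
    """Reflex vertices (inward turns) of a counter-clockwise contour."""
    n = len(pts)
    out = []
    for i in range(n):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
        cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
        if cross < 0:                      # clockwise turn on a CCW contour
            out.append(i)
    return out

def rcgs(contour, min_solidity=0.9, parts=None):
    """Recursively split a contour along its closest pair of concave
    points until every piece satisfies the single-grain criterion."""
    if parts is None:
        parts = []
    pts = np.asarray(contour, dtype=float)
    if len(pts) < 3:
        return parts
    solidity = polygon_area(pts) / ConvexHull(pts).volume  # .volume is area in 2-D
    cc = concave_indices(pts)
    n = len(pts)
    pairs = [(i, j) for i in cc for j in cc
             if j - i > 1 and (i, j) != (0, n - 1)]        # skip neighbouring vertices
    if solidity >= min_solidity or not pairs:
        parts.append(pts)                                   # accept as a single grain
        return parts
    i, j = min(pairs, key=lambda p: np.linalg.norm(pts[p[0]] - pts[p[1]]))
    rcgs(pts[i:j + 1], min_solidity, parts)                 # one side of the cut
    rcgs(np.vstack([pts[j:], pts[:i + 1]]), min_solidity, parts)  # other side
    return parts
```

On a dumbbell-shaped test contour (two rectangles joined by a thin neck), the recursion terminates with the two grain bodies and the neck as separate convex-ish pieces, with total area preserved.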

3.3. Gradation Calculation

3.3.1. Particle Size Classification

To enhance the accuracy and automation of particle size classification in image-based gradation analysis, we propose a statistically robust and computationally efficient interval-division method. Assuming particle sizes approximately follow a normal distribution, this approach combines single-size sample experiments with probabilistic modelling to determine classification boundaries. This data-driven method avoids the subjectivity and misclassification risks inherent in empirical size thresholds.
Specifically, sand samples were first separated by standard sieve analysis into well-defined size intervals. High-resolution images of each single-size sample were then acquired, extracting the Feret diameter of individual, well-formed particles while excluding agglomerates and incomplete edge particles. Each size group’s diameter distribution was found to be unimodal and approximately normal. Adjacent size groups showed minimal overlap; thus, classification boundaries were set at the intersections of their fitted probability density functions, minimising misclassification risk between neighbouring classes.
Practically, for each size interval (e.g., 0.075–0.15 mm and 0.15–0.3 mm), multiple images were collected, and Feret diameter histograms were plotted. The mean (μ) and standard deviation (σ) of each group defined its normal distribution. Overlaying the probability density curves of adjacent intervals, the intersection points were computed as optimal classification thresholds.
By numerically determining these intersection points for all adjacent pairs, six size thresholds were established to segment the full particle size range. As shown in Figure 4, these thresholds align closely with conventional sieve grading standards and effectively reduce misclassification. The complete classification procedure is detailed in Algorithm 2.
Figure 4 illustrates the empirical data points (solid markers) alongside their fitted normal-distribution curves (dashed lines) for each size interval, with a detailed explanation of the fitting process provided below.
Explanation of Plotting Procedure: First, all detected sand particles were classified into six predefined size intervals based on their short Feret diameter (e.g., 0.075–0.15 mm, 0.15–0.30 mm, …, 2.36–4.75 mm). Within each interval, the range of short Feret diameters was divided into 20 equal-width bins. We then counted the number of particles in each bin and normalized these counts according to the total particle count in the interval to obtain relative frequencies. These frequencies, plotted as solid markers at bin centres, represent the empirical particle size distribution for each interval. Finally, we calculated the sample mean and standard deviation of the short Feret diameters in each interval and fitted a continuous normal (Gaussian) curve using these parameters (depicted as dashed lines). This process was repeated for all six intervals, resulting in the overlaid empirical distributions and fitted curves shown in Figure 4.
Algorithm 2: Feret Diameter-Based Particle Size Classification
(Pseudocode for Algorithm 2 appears as an image in the published article.)
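The threshold step in this procedure reduces to finding where two fitted Gaussians intersect: equating the log-densities gives a quadratic in x. A sketch of that computation is below; the μ and σ values in the test are illustrative, not fitted from the paper’s data.

```python
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    """Classification boundary between two adjacent size classes: the x
    (between the means) where the two fitted normal PDFs intersect."""
    a = 1.0 / (2 * s1**2) - 1.0 / (2 * s2**2)
    b = mu2 / s2**2 - mu1 / s1**2
    c = mu1**2 / (2 * s1**2) - mu2**2 / (2 * s2**2) + np.log(s1 / s2)
    if abs(a) < 1e-12:                  # equal spreads: midpoint of the means
        return (mu1 + mu2) / 2.0
    roots = np.roots([a, b, c])
    roots = roots[np.isreal(roots)].real
    return float(roots[(roots > mu1) & (roots < mu2)][0])  # root between the means
```

Repeating this for each adjacent pair of fitted intervals yields the six thresholds used to segment the full 0.075–4.75 mm range.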
Unlike fixed-ratio or empirically set thresholds, this statistically driven method builds classification boundaries solely from actual image data. It eliminates the need for manual tuning or subjective parameters, enhancing adaptability and robustness across different sand batches. This advantage is especially significant for fine aggregates with continuous size distributions and overlapping ranges. By minimizing misclassification risk, the approach provides a reliable foundation for accurate gradation calculation.

3.3.2. Dual-View Fusion and Correction

Even after size classification, combining the data from the two different camera views requires careful calibration to avoid bias. We developed a three-stage compensation and correction strategy for dual-view fusion, consisting of scale normalization, fine-particle replacement, and weighted error correction.
Scale Normalization: Camera calibration established the physical length per pixel in the horizontal and vertical directions for both the global and local cameras, producing scaling factors for each view. Since particle orientations vary randomly, simple one-dimensional scaling risks directional bias. To mitigate this, we apply an orientation-independent mapping that statistically aligns the Feret diameters measured by the local camera with the global scale via weighted averaging. For particle projection areas, a two-dimensional scaling factor maps local areas to the global scale. Given the global and local cameras’ differing fields of view, the local fine-particle volume is amplified in proportion to the area ratio, ensuring proper scaling of the local camera’s contribution.
Fine-Particle Replacement: Assuming uniform particle dispersion due to vibration-assisted sample spreading, the local camera’s field of view is considered representative for particles smaller than 0.3 mm. Therefore, we replace all <0.3 mm size data in the global image with measurements from the local camera, compensating for the global camera’s limited resolution in this range. After substitution, the combined particle size distribution is normalized so that the total volume fraction sums to 100%, forming the initial fused gradation profile.
Weighted Error Correction: To correct residual systematic deviations between cameras, we introduce a weight vector W = (w_1, w_2, …) applied to the normalized fused gradation vector X = (X_1, X_2, …), aiming to minimize the squared error relative to the ground-truth sieve analysis Y = (Y_1, Y_2, …). The objective function is
SSE(W) = Σ_j (w_j X_j − Y_j)².
We solve this using quasi-Newton BFGS optimization to find the optimal weights W*, which are then applied to X to yield the final corrected gradation.
With this three-stage fusion strategy (scale normalization, fine-particle replacement, and weighted error correction), we fully exploit the complementary strengths of the dual-camera system. The global view ensures comprehensive coverage of particle sizes, while the local view provides high-resolution detail for the fine-particle range. The final fused result achieves high accuracy for fine particles without sacrificing consistency in the overall distribution. In our implementation, this approach significantly reduced the number of images required and shortened the detection time, yet it still enabled high-precision reconstruction of the full-range particle gradation, providing an efficient and reliable framework for rapid fine-aggregate gradation analysis in engineering applications. (The detailed pseudocode for the fusion and correction procedure is presented in Algorithm 3).
Algorithm 3: Weighted Correction of Fine-Scale Particle Volume Distribution
(Pseudocode for Algorithm 3 appears as an image in the published article.)
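The three fusion stages can be sketched as follows. The function names, bin layout, and the single field-of-view scale factor are our simplifications of the procedure described above; in practice the weights would be calibrated once against reference sieving and then reused on new batches rather than re-fitted each time.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_weights(X, Y):
    """Fit W minimising SSE(W) = sum_j (w_j*X_j - Y_j)^2 with quasi-Newton
    BFGS, against a reference sieve gradation Y."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    res = minimize(lambda w: np.sum((w * X - Y) ** 2),
                   x0=np.ones_like(X), method='BFGS')
    return res.x

def fuse_and_correct(global_frac, local_frac, fov_ratio, n_fine, W=None):
    """Replace the fine (<0.3 mm) bins of the global-view distribution with
    local-view data scaled by the field-of-view area ratio, re-normalise to
    100 %, then apply calibrated per-bin weights W if available."""
    fused = np.asarray(global_frac, float).copy()
    fused[:n_fine] = np.asarray(local_frac, float)[:n_fine] * fov_ratio
    fused = fused / fused.sum() * 100.0          # volume fractions sum to 100 %
    if W is not None:
        fused = fused * W
        fused = fused / fused.sum() * 100.0      # re-normalise after weighting
    return fused
```

Because the objective is a separable quadratic, BFGS converges to weights that reproduce the reference curve on the calibration batch; their value lies in correcting systematic camera bias on subsequent batches.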

4. Results and Analysis

This study focuses on manufactured-sand fine aggregates with particle sizes ranging from 0.075 mm to 4.75 mm. A total of ten 500 g sand samples were collected for gradation experiments. In addition, single-size samples were systematically captured: for the 0.075–0.15 mm and 0.15–0.3 mm intervals, 50 sample groups were collected, each comprising 10 images; for the 0.3–4.75 mm range, 20 sample groups per size interval were collected, each also imaged 10 times.
To comprehensively evaluate the effectiveness and practical applicability of the proposed method, two key metrics were adopted: grading error, which quantifies deviations in overall particle size distribution fitting, and fineness-modulus error, reflecting shifts in the distribution trend. Based on these metrics, three sets of experiments were designed to assess the performance of the image-based detection method under various strategies and parameter settings.
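For reference, both metrics are straightforward to compute from cumulative gradation curves. The fineness-modulus formula below is the common Chinese-standard form; treat the exact sieve set and formula as an assumption on our part, since the paper does not restate them.

```python
import numpy as np

def fineness_modulus(cum_retained):
    """MX = (A2+A3+A4+A5+A6 - 5*A1) / (100 - A1), where A1..A6 are the
    cumulative percentages retained on the 4.75, 2.36, 1.18, 0.60, 0.30
    and 0.15 mm sieves (common Chinese-standard form)."""
    a1, a2, a3, a4, a5, a6 = cum_retained
    return (a2 + a3 + a4 + a5 + a6 - 5 * a1) / (100 - a1)

def grading_error(measured_cum, reference_cum):
    """Mean absolute deviation (%) between measured and reference
    cumulative gradation curves."""
    m = np.asarray(measured_cum, float)
    r = np.asarray(reference_cum, float)
    return float(np.mean(np.abs(m - r)))
```

For a medium sand with cumulative retained percentages of 0, 10, 25, 45, 70, and 90, this gives MX = 2.4, within the conventional medium-sand band.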
First, sampling efficiency optimization experiments were conducted using 500 g batches of mixed manufactured sand. These experiments evaluated the impact of the TISS technique and a multi-frame burst imaging scheme with varying sampling intervals and frame counts on detection time and grading error, thereby validating the feasibility of reducing image quantity while maintaining sample representativeness.
Second, a dual-camera fusion comparison experiment was carried out, wherein gradation results were calculated from global images, local images, and their fusion. This was intended to investigate the effectiveness of multi-scale collaborative imaging in compensating for fine particles and enhancing overall accuracy.
Finally, segmentation strategy comparison experiments were performed, contrasting four approaches: no segmentation, direct elimination, fixed thresholding, and dynamic judgment. These experiments demonstrated the robustness and accuracy of the proposed recursive segmentation algorithm under complex particle adhesion scenarios.

4.1. Efficiency Optimization Comparison Experiment

To assess the trade-off between efficiency and accuracy achieved by the proposed acquisition optimizations, we conducted a sampling efficiency experiment using 500 g batches of standard manufactured sand. Since the vibration dispersion plate could only accommodate approximately 1.5 g of sand per operation, multiple feeding and imaging cycles were required to cover the entire 500 g sample. In these tests, the dual-camera system was employed to ensure image quality while exploring methods to reduce the total number of images and, thus, improve detection efficiency. In particular, a single-frame sampling approach was adopted, where only the first image captured in each feeding cycle was retained to represent that group. This formed the basis for analysing different sampling intervals.
Using this single-frame-per-group baseline, we further investigated various frame intervals. The interval parameter (interval = 1, 2, …, 20) indicated that one image was retained out of every specified number of consecutive frames in the original sequence. Thus, larger interval values simulated more aggressive data reduction. Although our final optimized strategy retained only one image per group, in this experiment, the retained images were partitioned into multiple equal-sized groups (based on the interval) for comparative analysis.
This approach allowed us to assess the internal consistency and stability of grading results at different sampling densities. For each interval, the minimum, maximum, and average gradation errors were calculated to evaluate how image sampling density affected detection accuracy and robustness (see Table 2). In the table, “Time” refers to the duration required to analyse the sample, “Error” denotes the mean absolute cumulative gradation error, and “Std Dev” represents the standard deviation of these gradation errors computed from multiple independent sampling runs within the same interval, reflecting measurement consistency and repeatability under that sampling configuration.
Figure 5 illustrates the trade-off between total processing time and grading error as the sampling interval increases. As expected, a larger interval (i.e., fewer images analysed) substantially reduced total detection time but caused a gradual increase in average grading error. Specifically, at an interval of 11, the average grading error remained around 12%, which was acceptable for practical applications, while total processing time was reduced to approximately 7–8 min. This represents more than a 15-fold speed-up compared to using the full image set. However, beyond interval = 11, the error increased sharply (exceeding 18% at interval = 15), compromising reliability. Therefore, an interval of 11 images is recommended as the optimal balance between accuracy and efficiency for our system.
Table 3, Table 4 and Table 5 present the detection time, sampled mass, grading error, and standard deviation for sampling schemes using 1, 2, or 3 images per group across various sampling intervals. Detection time (in minutes) denotes the total duration required to analyse each sampling group. Sampled mass (in grams) represents the approximate weight of sand corresponding to each group, which varies depending on the number of images and the sampling interval. Grading error (percentage) indicates the mean absolute cumulative gradation error, reflecting the deviation from the reference gradation. The standard deviation captures the variability of grading errors computed from multiple independent sampling runs under the same conditions, thereby representing the internal consistency and stability of the results.
Experimental results show that increasing the sampling interval reduced the actual sand mass represented per group, which led to greater random fluctuations and larger grading errors. This effect was especially pronounced when the sampled mass fell below 50 g, thereby compromising the reliability of the results.
Further analysis showed that when each batch maintained a sampled mass of at least 50 g (with interval ≈ 10–11), the randomness in particle distribution was effectively suppressed. Under these conditions, both the grading error and its standard deviation remained well controlled: fluctuations in grading accuracy stayed within ±2%, and the standard deviation of error typically remained below 0.02, indicating stable and representative results. Within these constraints, comparative tests confirmed that—for the same total sampled mass—the single-image-per-group method with TISS required the fewest images and the shortest detection time (approximately 7–8 min per batch). Using a multi-frame burst per group (e.g., capturing 2–3 images per group) yielded only marginal accuracy improvements (reducing error to below 12% and slightly lowering the standard deviation) but incurred significantly higher image counts and longer processing times. Considering the trade-offs among accuracy, stability (in both error and variability), and efficiency, we recommend the single-image-per-group strategy with a moderate interval of 11. This configuration maintained grading error near 12%, kept a low standard deviation of error, and achieved the fastest processing speed, demonstrating strong practical utility for real-time grading in industrial environments.

4.2. Dual-Camera Comparison Experiment

Next, we evaluated the dual-camera fusion strategy’s ability to improve detection accuracy while reducing image acquisition load. Comparative tests were conducted on 500 g standard sand samples, examining three detection schemes across various particle-size intervals. Conventionally, a high-magnification local camera and a wide-angle global camera each capture multiple images (e.g., five consecutive frames) per sample group to average out random particle distribution variations. In contrast, our proposed fusion strategy compensates for missing fine-particle information (<0.3 mm) in the global images by incorporating data from the local images, combined with field-of-view area ratio scaling and volume normalization. Consequently, the fusion approach requires only one image per group to accurately account for the fine fraction and produce a stable overall gradation.
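The compensation step can be sketched as follows. This is a hedged illustration of the field-of-view area-ratio scaling and volume normalization described above, not the paper's exact algorithm: per-interval particle volumes from the local camera are scaled by the ratio of the two fields of view (values from Table 1), substituted into the fine bins missing from the global view, and the fused result is normalized to percentages.

```python
# Hedged sketch of the dual-view fusion step. `global_vol` holds per-interval
# particle volumes measured by the wide-angle camera (fines < 0.3 mm missing);
# `local_vol` holds volumes from the high-magnification camera.
GLOBAL_FOV_CM2 = 10.9 * 7.5   # global camera field of view (Table 1)
LOCAL_FOV_CM2 = 3.8 * 2.6     # local camera field of view (Table 1)
FINE_BINS = ("0.075-0.15", "0.15-0.3")  # fractions below 0.3 mm

def fuse(global_vol: dict, local_vol: dict) -> dict:
    """Fuse global and local volume measurements into one gradation (%)."""
    scale = GLOBAL_FOV_CM2 / LOCAL_FOV_CM2  # area-ratio scaling of local view
    fused = dict(global_vol)
    for bin_ in FINE_BINS:
        # compensate the fine fractions missing from the global image
        fused[bin_] = local_vol[bin_] * scale
    total = sum(fused.values())
    # volume normalization: express every interval as a percentage
    return {k: 100.0 * v / total for k, v in fused.items()}
```

The weighted-correction component of the published method is omitted here; the sketch only shows why a single local image suffices to restore the fine fraction once both views are mapped to a common reference area.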
To further validate the robustness of the fusion strategy, we tested one representative sand sample for each of three gradation types: fine-graded, medium-graded, and coarse-graded. Table 6, Table 7 and Table 8 present the gradation and fineness-modulus errors obtained with the three methods: local only, global only, and fusion. In each case, the local-only method uses only the high-resolution camera data, the global-only method uses only the wide-angle camera data, and the fusion method combines both via our dual-view algorithm.
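The two accuracy metrics reported in Tables 6–11 can be reconstructed from the published numbers. The sketch below is an inference from those values (rounding may differ slightly from the paper): the fineness modulus follows the standard sieve formula on cumulative retained percentages, and the gradation error behaves as the summed absolute per-interval deviation from the reference gradation.

```python
def fineness_modulus(fractions, coarse_pct=0.0):
    """FM = (A2+A3+A4+A5+A6 - 5*A1) / (100 - A1), with A_i the cumulative
    % retained on the 4.75, 2.36, 1.18, 0.6, 0.3 and 0.15 mm sieves.
    `fractions` lists retained % per interval, fine to coarse:
    0.075-0.15, 0.15-0.3, 0.3-0.6, 0.6-1.18, 1.18-2.36, 2.36-4.75 mm."""
    f_0075, f_015, f_03, f_06, f_118, f_236 = fractions
    a1 = coarse_pct          # cumulative % retained on the 4.75 mm sieve
    a2 = a1 + f_236          # ... on 2.36 mm
    a3 = a2 + f_118          # ... on 1.18 mm
    a4 = a3 + f_06           # ... on 0.6 mm
    a5 = a4 + f_03           # ... on 0.3 mm
    a6 = a5 + f_015          # ... on 0.15 mm
    return (a2 + a3 + a4 + a5 + a6 - 5 * a1) / (100 - a1)

def gradation_error(calc, ref):
    """Summed absolute deviation of retained % over the six intervals."""
    return sum(abs(c - r) for c, r in zip(calc, ref))

# Fusion row of Table 8 (fine sand): reproduces the reported FM of 2.05;
# the gradation error evaluates to 9.15, matching the reported 9.14%
# up to rounding.
fusion = [18.12, 12.51, 29.09, 28.31, 10.12, 1.86]
reference = [20, 10, 29, 31, 10, 0]
print(round(fineness_modulus(fusion), 2), round(gradation_error(fusion, reference), 2))
```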
For the coarse sand sample (FM = 3.52), the average detection error based on five local camera images was 13.20%. The global camera, after excluding particles smaller than 0.3 mm, achieved a reduced error of 11.34%. In comparison, the fusion strategy—which uses only a single local image to compensate for the fine particle range—further decreased the overall error to 7.77%, outperforming both the local-only and global-only methods. This demonstrates that the fusion approach not only reduces the number of images required but also improves the accuracy of gradation estimation.
For the medium sand sample (FM = 2.51), the local-only scheme exhibited an error of 15.29%, while the global-only approach showed an even higher error of 16.04%, reflecting size deviations under moderate fineness conditions. In contrast, the fusion strategy lowered the error to 10.24%, confirming its accuracy advantage, even with a more uniform particle size distribution.
In the fine sand sample (FM = 2.01), errors from the local-only and global-only methods were 10.00% and 13.44%, respectively, highlighting inconsistencies in fine particle measurements when using a single imaging path. The fusion strategy further reduced the error to 9.14%, the lowest among the three methods, validating the importance of high-resolution imaging in compensating for the finest particle fractions.
Overall, comparisons across the three sample types demonstrate that the fusion strategy, while maintaining high acquisition efficiency by relying on just a single local image, achieves significant error reductions—up to 42.9% compared to the local-only method. This highlights its superior adaptability and robustness, confirming its accuracy advantage and practical feasibility as an industrial online detection solution.
Figure 6 illustrates the fineness-modulus error and gradation error across ten sample groups for the three methods. The local camera method (local only) generally exhibited higher errors, with significant fluctuations between groups, indicating insufficient statistical stability. While the global camera method (global only), which excludes fine particles, slightly reduced the average error, it still showed noticeable variation and failed to address the omission of fine particles. In contrast, the fusion strategy achieved the lowest errors in the vast majority of samples, with deviations controlled within ±2%, demonstrating superior consistency and robustness. These results indicate that the fusion approach significantly improves detection accuracy while reducing the number of required images.
Table 9 presents detailed gradation error and fineness-modulus (FM) error values for each sample group, along with the overall standard deviations computed from the ten samples per group. The gradation error reflects the cumulative deviation in particle size distribution, while the FM error measures deviations in the fineness modulus. The standard deviations indicate the variability of errors across the ten samples, highlighting the fusion strategy’s advantage in achieving both lower average errors and improved consistency compared to the other methods.

4.3. Segmentation Algorithm Comparison Experiment

The performance of image-based particle segmentation algorithms directly impacts the accuracy of gradation analysis. To evaluate the effectiveness of the proposed dynamic thresholding combined with the RCGS segmentation strategy, four comparison schemes were designed to analyse differences in particle identification and gradation outcomes:
(1) Selectively removing large contours identified as aggregates while retaining small clusters containing only a few particles;
(2) Retaining strictly single-particle contours and deleting all regions classified as aggregates;
(3) Applying a fixed global shape-factor threshold, accepting contours above the threshold as single particles and forcibly splitting those below it via convex-hull or defect analysis;
(4) Employing size-adaptive thresholds that tighten for large particles and relax for small ones, thereby mitigating projection- or noise-induced misclassification and reducing over-segmentation.
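The size-adaptive rule in scheme (4) can be sketched as below. The breakpoints and threshold values are illustrative assumptions, not the paper's calibrated figures; the shape factor is the common circularity measure 4πA/P², which equals 1 for a perfect circle.

```python
import math

def shape_factor(area: float, perimeter: float) -> float:
    """Circularity: 4*pi*area / perimeter^2 (1.0 for a perfect circle)."""
    return 4.0 * math.pi * area / perimeter ** 2

def adaptive_threshold(equiv_diameter_mm: float) -> float:
    """Size-adaptive shape-factor threshold: tighter for large particles,
    relaxed for small ones (breakpoint and threshold values illustrative)."""
    if equiv_diameter_mm >= 1.18:
        return 0.80
    if equiv_diameter_mm >= 0.30:
        return 0.70
    return 0.60

def is_single_particle(area, perimeter, equiv_diameter_mm) -> bool:
    """A contour counts as a single particle if it is circular enough
    for its size class; otherwise it is treated as an aggregate."""
    return shape_factor(area, perimeter) >= adaptive_threshold(equiv_diameter_mm)
```

A circular contour passes at any size class, while an elongated contour of the same area fails and would be routed to convex-hull splitting; small noisy outlines benefit from the relaxed 0.60 threshold, which is the mechanism by which over-segmentation of fines is avoided.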
In Table 10, the gradation errors corresponding to the four segmentation strategies are denoted as m1, m2, m3, and RCGS. Specifically, m1 corresponds to selective removal of only large aggregate contours while retaining small clusters; m2 refers to strict single-particle retention, deleting every contour identified as an aggregate; m3 applies a fixed global shape-factor threshold and forcibly splits contours falling below it via convex analysis; and RCGS uses a size-adaptive shape-factor threshold that tightens for large particles and relaxes for small ones, thereby mitigating projection- or noise-induced misclassification.
Furthermore, Δi (i = 1, …, 4) denotes the retained-percentage error in each particle-size interval under the i-th segmentation strategy; a “+” sign indicates underestimation (calculated < actual), while a “−” sign indicates overestimation (calculated > actual). In particular, Δ1 corresponds to strategy m1, Δ2 to m2, Δ3 to m3, and Δ4 to RCGS.
Table 11 presents the gradation error and fineness-modulus (FM) error for four segmentation methods (m1–m3 and RCGS) across ten samples. The Error columns indicate the mean absolute cumulative gradation error (expressed as a percentage), while the FM Error columns show deviations in fineness modulus. The final row lists the standard deviations calculated from the ten samples per method, reflecting the variability and stability of each approach.
Figure 7 and Table 11, together, demonstrate that RCGS outperforms the other three methods in both gradation and fineness-modulus accuracy. In Figure 7a, the FM error bars for RCGS cluster tightly around zero, indicating minimal deviation, whereas m1, m2, and m3 exhibit larger fluctuations. Table 11 quantitatively confirms this pattern, with RCGS showing a substantially lower maximum absolute FM error compared to m1, m2, and m3. A similar trend is observed in Figure 7b for gradation error, where RCGS consistently achieves the smallest bias.
In practical terms, m1’s selective-removal scheme, which deletes large aggregate contours while keeping small clusters, discards genuine coarse particles along with the aggregates, underestimating the coarse fractions and inflating the mid-to-fine range (Δ1 in Table 10). Method m2’s complete-deletion scheme shows the same bias in milder form. Method m3 mitigates these extremes by forcibly splitting contours under a uniform shape-factor threshold but still misclassifies some aggregates. In contrast, RCGS employs a size-adaptive threshold that effectively preserves small particles and accurately separates larger aggregates. This balanced strategy leads to the lowest and most consistent errors without requiring training data, making RCGS the preferred segmentation method.

5. Conclusions

This study addressed the efficiency bottlenecks of manufactured sand gradation detection in industrial applications and proposed an image-based gradation analysis strategy focused on optimizing detection timeliness. Systematic innovations and experimental validations were conducted across three key components: sampling strategy, data fusion, and particle segmentation.
First, a TISS-based image acquisition scheme was introduced. Under a fixed total sample mass of 500 g, the original approach requiring 334 groups of continuous imaging was optimized into an interval-based sampling scheme with a tunable parameter. Experimental results showed that when the interval was set to 11 (sampling approximately 45 g of sand), grading-error fluctuations remained within ±2%, while the imaging time fell by roughly 90%, from 83.5 min to under 8 min per batch. This strategy effectively balanced image quantity and detection stability, providing a tunable trade-off between accuracy and speed for online detection systems.
Second, a dual-camera collaborative fusion strategy was developed to integrate global and local perspectives: the global camera captured the overall particle size range, while the local camera focused on high-precision acquisition of particles smaller than 0.3 mm. Through field-of-view area mapping, scale alignment, volume normalization, and weighted correction, the local fine-particle volume compensated for missing fractions in the global images. Comparative experiments on three representative samples demonstrated that the fusion strategy reduced the average error of single-camera methods from up to 15.29% to 9.52%, significantly narrowing error fluctuations across sand types and confirming its robust adaptability.
Finally, at the image processing level, an adaptive recognition and recursive segmentation algorithm was proposed to separate bonded particles based on equivalent Feret diameter. By combining dual-threshold screening using shape factor and solidity with convex-hull indentation analysis and constraint-driven recursive cuts, the method efficiently separated complex aggregates. Compared to fixed-threshold methods, this approach improved both the identification rate and segmentation integrity of bonded particles without relying on training data, providing high-quality contour data for gradation statistics.
In summary, the proposed multi-level optimization strategy significantly improved detection time, error control, and system adaptability. Without compromising accuracy or stability, it raised the efficiency of image-based manufactured sand gradation detection to a level suitable for industrial online use. This study primarily focused on manufactured sands conforming to Chinese national standards for construction sand, characterized by particle sizes ranging from 0.075 mm to 4.75 mm and low clay impurity levels. The applicability of the proposed method to other sand types, such as recycled concrete aggregates or sands with higher clay and cohesive contents, requires further investigation. Future work will aim to extend and validate the method for these materials to enhance its practical versatility. The approach offers practical value for rapid deployment in engineering applications, laying a solid foundation for future advancements in intelligent aggregate detection systems.

Author Contributions

Conceptualization, S.Z. and Y.Z.; Data curation, S.Z., X.Y., H.S. and H.W.; Formal analysis, Y.Z., X.Y., H.S. and H.W.; Funding acquisition, C.X.; Investigation, X.Y., H.S., H.W., Y.Y. and D.L.; Methodology, S.Z. and Y.Z.; Project administration, C.X.; Resources, S.S.; Software, S.Z. and S.S.; Supervision, Y.Z. and C.X.; Validation, Y.Z. and S.S.; Visualization, Y.Y. and D.L.; Writing—original draft, S.Z.; Writing—review and editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by Chongqing Municipal Education Commission (grant number KJZD-M202500505), the China Chongqing Municipal Science and Technology Bureau (grant number CSTB2024TIAD-CYKJCXX0009), the Chongqing Municipal Commission of Housing and Urban-Rural Development (grant number CKZ2024-87), and the National College Student Innovation and Entrepreneurship Training Program.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Schematic diagram of the sampling device components.
Figure 2. Segmentation processing flowchart.
Figure 3. Example workflow for clumped-particle segmentation.
Figure 4. Fitted normal distribution curves for six particle-size intervals.
Figure 5. Effect of the sampling interval on total processing time and mean grading error for the single–frame strategy. The processing time drops almost exponentially as interval increases, whereas the grading error remains below 12% until interval ≈ 11, after which it rises sharply.
Figure 6. Comparison of fineness-modulus error and grading error across ten manufactured sand samples using different computation strategies. (a) The fineness-modulus error is presented as the absolute value. (b) The grading error is shown for comparison.
Figure 7. Fineness modulus error and overall grading error of four image processing strategies across ten coarse manufactured sand samples. (a) The fineness-modulus error is presented as the absolute value. (b) The grading error is shown for comparison.
Table 1. Imaging module parameter settings.
| Parameter | Global Camera (MV-CS060-10GC) | Local Camera (MV-CS200-10GC) |
| --- | --- | --- |
| Focal Length (mm) | 12 | 50 |
| Resolution (pixels) | 3072 × 2048 | 5472 × 3648 |
| Field of View (cm) | 10.9 × 7.5 | 3.8 × 2.6 |
| Frame Rate (fps) | 15 | 8 |
| Pixel Size (µm) | 35.2 | 6.8 |
| Detection Objective | Overall Distribution Statistics | Fine-grained Morphology Analysis |
Table 2. Raw metrics for the single-frame strategy at different sampling intervals.
| Interval | Max Error (%) | Min Error (%) | Average Error (%) | Std Dev | Time (min) |
| --- | --- | --- | --- | --- | --- |
| 1 | 9.62 | 9.62 | 9.62 | 0 | 83.50 |
| 2 | 9.94 | 9.82 | 9.88 | 0.0008 | 41.75 |
| 3 | 10.85 | 9.56 | 10.09 | 0.0068 | 28.08 |
| 4 | 10.64 | 9.31 | 10.11 | 0.0062 | 21.13 |
| 5 | 10.67 | 10.32 | 10.48 | 0.0016 | 16.95 |
| 6 | 11.44 | 10.18 | 10.68 | 0.0043 | 14.17 |
| 7 | 11.85 | 9.83 | 11.04 | 0.0081 | 12.18 |
| 8 | 11.67 | 9.53 | 10.89 | 0.0077 | 10.69 |
| 9 | 12.41 | 10.34 | 11.29 | 0.0059 | 9.53 |
| 10 | 12.46 | 10.61 | 11.35 | 0.0052 | 8.60 |
| 11 | 13.29 | 10.11 | 12.02 | 0.0089 | 7.84 |
| 12 | 12.53 | 10.39 | 11.68 | 0.0076 | 7.21 |
| 13 | 13.27 | 10.99 | 12.01 | 0.0074 | 6.67 |
| 14 | 13.82 | 10.86 | 12.26 | 0.0091 | 6.21 |
| 15 | 14.64 | 11.16 | 12.60 | 0.0111 | 5.82 |
| 16 | 13.69 | 10.72 | 12.22 | 0.0077 | 5.47 |
| 17 | 14.24 | 11.56 | 12.83 | 0.0072 | 5.16 |
| 18 | 14.63 | 10.68 | 12.77 | 0.0131 | 4.89 |
| 19 | 14.54 | 11.33 | 13.02 | 0.0099 | 4.64 |
| 20 | 15.58 | 10.71 | 13.07 | 0.0130 | 4.43 |
Table 3. Detection time, mass, error, and standard deviation for sampling scheme with 1 image per group.
| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| --- | --- | --- | --- | --- |
| 1 | 83.50 | 500.00 | 9.62 | 0 |
| 2 | 41.75 | 250.00 | 9.88 | 0.0008 |
| 3 | 28.08 | 166.67 | 10.09 | 0.0068 |
| 4 | 21.13 | 125.00 | 10.11 | 0.0062 |
| 5 | 16.95 | 100.00 | 10.48 | 0.0016 |
| 6 | 14.17 | 83.33 | 10.68 | 0.0043 |
| 7 | 12.18 | 71.43 | 11.04 | 0.0081 |
| 8 | 10.69 | 62.50 | 10.89 | 0.0077 |
| 9 | 9.53 | 55.56 | 11.29 | 0.0059 |
| 10 | 8.60 | 50.00 | 11.35 | 0.0052 |
| 11 | 7.84 | 45.45 | 12.02 | 0.0089 |
| 12 | 7.21 | 41.67 | 11.68 | 0.0076 |
| 13 | 6.67 | 38.46 | 12.01 | 0.0074 |
| 14 | 6.21 | 35.71 | 12.26 | 0.0091 |
| 15 | 5.82 | 33.33 | 12.60 | 0.0111 |
| 16 | 5.47 | 31.25 | 12.22 | 0.0077 |
| 17 | 5.16 | 29.41 | 12.83 | 0.0072 |
| 18 | 4.89 | 27.78 | 12.77 | 0.0131 |
| 19 | 4.64 | 26.32 | 13.02 | 0.0099 |
| 20 | 4.43 | 25.00 | 13.07 | 0.0130 |
Table 4. Detection time, mass, error, and standard deviation for sampling scheme with 2 images per group.
| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| --- | --- | --- | --- | --- |
| 1 | 52.88 | 250.00 | 9.54 | 0 |
| 2 | 26.60 | 125.00 | 9.93 | 0.0050 |
| 3 | 18.05 | 83.33 | 10.17 | 0.0033 |
| 4 | 13.62 | 62.50 | 10.72 | 0.0034 |
| 5 | 10.77 | 50.00 | 10.84 | 0.0081 |
| 6 | 9.18 | 41.67 | 11.10 | 0.0073 |
| 7 | 7.92 | 35.71 | 11.56 | 0.0078 |
| 8 | 6.97 | 31.25 | 11.76 | 0.0098 |
| 9 | 6.33 | 27.78 | 11.68 | 0.0106 |
| 10 | 5.70 | 25.00 | 12.69 | 0.0161 |
| 11 | 5.07 | 22.73 | 12.30 | 0.0097 |
| 12 | 4.75 | 20.83 | 12.80 | 0.0118 |
| 13 | 4.43 | 19.23 | 13.09 | 0.0152 |
| 14 | 4.12 | 17.86 | 13.52 | 0.0146 |
| 15 | 3.80 | 16.67 | 13.05 | 0.0121 |
| 16 | 3.48 | 15.63 | 13.66 | 0.0136 |
| 17 | 3.48 | 14.71 | 13.74 | 0.0107 |
| 18 | 3.17 | 13.89 | 13.22 | 0.0161 |
| 19 | 3.17 | 13.16 | 13.99 | 0.0167 |
| 20 | 2.85 | 12.50 | 14.93 | 0.0158 |
Table 5. Detection time, mass, error, and standard deviation for sampling scheme with 3 images per group.
| Interval | Time (min) | Mass (g) | Error (%) | Std Dev |
| --- | --- | --- | --- | --- |
| 1 | 40.13 | 166.67 | 9.83 | 0 |
| 2 | 20.07 | 83.33 | 10.60 | 0.0026 |
| 3 | 13.62 | 55.56 | 10.84 | 0.0081 |
| 4 | 10.39 | 41.67 | 11.21 | 0.0090 |
| 5 | 8.24 | 33.33 | 12.13 | 0.0075 |
| 6 | 6.81 | 27.78 | 12.31 | 0.0155 |
| 7 | 6.09 | 23.81 | 12.43 | 0.0117 |
| 8 | 5.38 | 20.83 | 13.00 | 0.0115 |
| 9 | 4.66 | 18.52 | 12.77 | 0.0116 |
| 10 | 4.30 | 16.67 | 13.26 | 0.0084 |
| 11 | 3.94 | 15.15 | 13.74 | 0.0195 |
| 12 | 3.58 | 13.89 | 13.77 | 0.0131 |
| 13 | 3.23 | 12.82 | 14.17 | 0.0100 |
| 14 | 3.23 | 11.90 | 14.51 | 0.0216 |
| 15 | 2.87 | 11.11 | 14.92 | 0.0150 |
| 16 | 2.87 | 10.42 | 15.62 | 0.0193 |
| 17 | 2.51 | 9.80 | 15.47 | 0.0141 |
| 18 | 2.51 | 9.26 | 15.87 | 0.0168 |
| 19 | 2.51 | 8.77 | 16.27 | 0.0197 |
| 20 | 2.15 | 8.33 | 15.94 | 0.0165 |
Table 6. Comparison of gradation results for a coarse manufactured sand sample (FM = 3.52) using three computation strategies. (Error denotes mean absolute cumulative gradation error, %).
| Particle Size (mm) | 0.075–0.15 | 0.15–0.3 | 0.3–0.6 | 0.6–1.18 | 1.18–2.36 | 2.36–4.75 | >4.75 | FM | Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reference | 4% | 2% | 16% | 22% | 28% | 28% | 0% | 3.52 | — |
| Global only | — | — | 15.73% | 21.21% | 27.60% | 35.46% | 0% | — | 11.34% |
| Local only | 3.23% | 2.11% | 21.36% | 23.12% | 24.39% | 25.78% | 0% | 3.41 | 13.20% |
| Fusion | 3.22% | 2.34% | 18.56% | 21.72% | 25.17% | 28.99% | 0% | 3.50 | 7.77% |
Table 7. Comparison of gradation results for a medium manufactured sand sample (FM = 2.51) using three computation strategies. (Error denotes mean absolute cumulative gradation error, %).
| Particle Size (mm) | 0.075–0.15 | 0.15–0.3 | 0.3–0.6 | 0.6–1.18 | 1.18–2.36 | 2.36–4.75 | >4.75 | FM | Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reference | 13% | 9% | 27% | 26% | 15% | 10% | 0% | 2.51 | — |
| Global only | — | — | 29.96% | 29.97% | 23.57% | 16.50% | 0% | — | 16.04% |
| Local only | 14.05% | 9.77% | 32.83% | 21.78% | 14.18% | 7.39% | 0% | 2.34 | 15.29% |
| Fusion | 15.18% | 11.94% | 25.70% | 22.42% | 14.95% | 9.81% | 0% | 2.39 | 10.24% |
Table 8. Comparison of gradation results for a fine manufactured sand sample (FM = 2.01) using three computation strategies. (Error denotes mean absolute cumulative gradation error, %).
| Particle Size (mm) | 0.075–0.15 | 0.15–0.3 | 0.3–0.6 | 0.6–1.18 | 1.18–2.36 | 2.36–4.75 | >4.75 | FM | Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reference | 20% | 10% | 29% | 31% | 10% | 0% | 0% | 2.01 | — |
| Global only | — | — | 37.26% | 41.73% | 17.61% | 3.4% | 0% | — | 13.44% |
| Local only | 17.85% | 10.87% | 31.44% | 28.83% | 9.31% | 1.69% | 0% | 2.06 | 10.00% |
| Fusion | 18.12% | 12.51% | 29.09% | 28.31% | 10.12% | 1.86% | 0% | 2.05 | 9.14% |
Table 9. Gradation error and FM error for three methods (global only, local only, and fusion) by sample.
| Sample | Global Only Error | Global Only FM Error | Local Only Error | Local Only FM Error | Fusion Error | Fusion FM Error |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 11.34% | 0.1045 | 13.20% | 0.1133 | 7.77% | 0.0175 |
| 2 | 7.92% | 0.7270 | 10.18% | 0.1321 | 7.65% | 0.0629 |
| 3 | 13.86% | 0.1458 | 12.06% | 0.0263 | 5.76% | 0.0637 |
| 4 | 15.98% | 0.2040 | 19.50% | 0.0620 | 10.15% | 0.1574 |
| 5 | 18.67% | 0.2220 | 10.38% | 0.2142 | 6.50% | 0.1056 |
| 6 | 13.33% | 0.1047 | 26.22% | 0.1121 | 15.37% | 0.0127 |
| 7 | 16.04% | 0.1635 | 15.29% | 0.1655 | 10.24% | 0.1156 |
| 8 | 19.65% | 0.2015 | 9.63% | 0.1009 | 10.71% | 0.0977 |
| 9 | 22.65% | 0.2641 | 8.57% | 0.0411 | 12.87% | 0.0713 |
| 10 | 13.44% | 0.1429 | 10.00% | 0.0495 | 9.14% | 0.0435 |
| Std Dev | 0.0428 | 0.0601 | 0.0553 | 0.1224 | 0.0294 | 0.0451 |
Table 10. Comparison of four image processing strategies for gradation computation on coarse manufactured sand (FM = 3.52). (Error denotes mean absolute cumulative gradation error, %).
| Method | 0.075–0.15 | 0.15–0.3 | 0.3–0.6 | 0.6–1.18 | 1.18–2.36 | 2.36–4.75 | >4.75 | FM | Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Reference | 4% | 2% | 16% | 22% | 28% | 28% | 0% | 3.52 | — |
| m1 | 4.09% | 4.56% | 21.75% | 24.79% | 21.14% | 23.67% | 0% | 3.25 | — |
| Δ1 | −0.09% | −2.56% | −5.75% | −2.79% | 6.86% | 4.33% | 0% | — | 22.38% |
| m2 | 3.90% | 3.37% | 19.18% | 22.09% | 23.43% | 28.03% | 0% | 3.42 | — |
| Δ2 | 0.10% | −1.37% | −3.18% | −0.09% | 4.57% | −0.03% | 0% | — | 9.34% |
| m3 | 3.83% | 3.41% | 19.02% | 21.82% | 23.57% | 28.35% | 0% | 3.43 | — |
| Δ3 | 0.17% | −1.41% | −3.02% | 0.18% | 4.43% | −0.35% | 0% | — | 9.56% |
| RCGS | 3.22% | 2.34% | 18.56% | 21.72% | 25.17% | 28.99% | 0% | 3.50 | — |
| Δ4 | 0.78% | −0.34% | −2.56% | 0.28% | 2.83% | −0.99% | 0% | — | 7.77% |
Table 11. Gradation error and FM error for four methods (m1–m3 and RCGS) by sample.
| Sample | m1 Error | m1 FM Error | m2 Error | m2 FM Error | m3 Error | m3 FM Error | RCGS Error | RCGS FM Error |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 22.38% | 0.2667 | 9.34% | 0.1012 | 9.56% | 0.0907 | 7.77% | 0.0175 |
| 2 | 21.01% | 0.1732 | 9.55% | 0.0462 | 9.43% | 0.0390 | 7.65% | 0.0629 |
| 3 | 13.73% | 0.1700 | 10.83% | 0.1758 | 10.16% | 0.1608 | 5.76% | 0.0637 |
| 4 | 19.53% | 0.1691 | 9.52% | 0.0445 | 9.85% | 0.0632 | 10.15% | 0.1574 |
| 5 | 15.15% | 0.1682 | 8.86% | 0.1061 | 9.28% | 0.1134 | 6.50% | 0.1056 |
| 6 | 33.28% | 0.3502 | 16.96% | 0.2007 | 16.71% | 0.1927 | 15.37% | 0.0127 |
| 7 | 26.68% | 0.2926 | 20.16% | 0.2632 | 20.20% | 0.2575 | 10.24% | 0.1156 |
| 8 | 22.45% | 0.2340 | 23.64% | 0.2032 | 23.21% | 0.1981 | 10.71% | 0.0977 |
| 9 | 25.03% | 0.1300 | 21.03% | 0.0592 | 22.23% | 0.0498 | 12.87% | 0.0713 |
| 10 | 27.51% | 0.0208 | 25.05% | 0.0586 | 25.62% | 0.0563 | 9.14% | 0.0435 |
| Std | 0.0584 | 0.0927 | 0.0655 | 0.0788 | 0.0668 | 0.0757 | 0.0294 | 0.0451 |

Zhang, S.; Zhang, Y.; Sun, S.; Yuan, X.; Sun, H.; Wang, H.; Yuan, Y.; Luo, D.; Xu, C. A Rapid Sand Gradation Detection Method Based on Dual-Camera Fusion. Buildings 2025, 15, 2404. https://doi.org/10.3390/buildings15142404

