Point Cloud Measurement of Rubber Tread Dimension Based on RGB-Depth Camera
Abstract
1. Introduction
2. Related Work
3. Methodology and Design
3.1. Reprojection Calibration Based on NSGA-II Multi-Objective Optimization
3.2. Number of Corner Points and Error Modeling
3.3. NSGA-II Genetic Algorithm
3.4. Optimization Results
3.5. Three-Dimensional Inspection of Tread Based on Point Cloud Data
3.6. Point Cloud Data Preprocessing
3.6.1. Hood-Layer Screening of the Point Cloud
3.6.2. RANSAC to Remove Outliers
3.6.3. Statistical Filtering Denoising (SFD)
3.6.4. Point Cloud Alignment
- (1) In the initial iteration, for each point in the point cloud to be aligned, take the point in the reference point cloud with the smallest Euclidean distance as its corresponding point, forming the corresponding point set;
- (2) Compute the rotation matrix R and translation vector t from the corresponding point set, and evaluate the current objective loss function;
- (3) Using the rigid-body transformation obtained in step (2), transform the point cloud to be aligned, then, following the idea of step (1), update the corresponding point set from the Euclidean distances between the new points and the reference cloud;
- (4) If the value of the objective error function is still larger than the set threshold, repeat from step (2); stop iterating once the error falls below the threshold or the number of iterations reaches the upper limit.
4. Results and Analysis
4.1. Evaluation Index of Image Denoising Accuracy
4.2. Analysis of Image Denoising Results
4.3. Analysis of Rubber Tread Size Measurement Results
5. Conclusions
6. Discussion
- (1) In the tread dimension inspection process, equipment hardware constraints lead to lengthy data processing times, making it difficult to process the tread images and point cloud data collected by the RGB-D camera’s RGB and IR lens modules in real time. Consequently, all measurements are processed offline, preventing online debugging. The next step involves upgrading the acquisition equipment, further improving algorithm efficiency, reducing detection complexity, and developing stable preview software that provides real-time feedback for monitoring tread quality detection results.
- (2) This study initially verifies the quality inspection of rigid rubber treads. The next step is to study point cloud alignment and 3D reconstruction methods for non-rigid treads, designing a more robust algorithm to inspect the quality of irregular and curved-surface rubber products and to archive the 3D data of retained products.
- (3) This study only conducts experimental verification, demonstrating the feasibility, reliability, and accuracy of detecting tread quality based on point cloud data. The next step involves converting the inspection programs written in Python and MATLAB into an industrial control program and integrating this inspection method into the production line after the treads are cut to fixed lengths. Subsequently, extensive on-site experiments and verifications will be conducted.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Baseline (mm) | 75 | Data transmission | USB 2.0 |
| Depth distance (m) | 0.6–8.0 | Video interface | UVC |
| Maximum power consumption (W) | 2.50 | Operating system | Windows 10 |
| Accuracy (mm) | ±3 | Working temperature (°C) | 10–40 |
| Depth FOV (°) | H58.4, V45.5 | Depth map resolution | 1280 × 1024 @ 7 FPS |
| Color FOV (°) | H66.1, V40.2 | Color map resolution | 1280 × 720 @ 7 FPS |
| Delay (ms) | 30–45 | Dimensions (mm) | 164.85 × 30.00 × 48.25 |
| Camera | Sum of Squares for Error (SSE) | Root-Mean-Squared Error (RMSE) | Coefficient of Determination (R²) |
|---|---|---|---|
| RGB camera | 0.0002764 | 0.002478 | 0.9994 |
| IR camera | 0.01242 | 0.01661 | 0.994 |
| Binocular camera | 0.08985 | 0.048 | 0.5691 |
| Parameter | Mutation Probability (Pm) | Crossover Probability (Xovr) | Population Size (Nind) | Number of Generations (Gen) |
|---|---|---|---|---|
| Value | 0.2 | 0.9 | 100 | 200 |
|  | Corner Point Retention (%) | Reprojection Error (RGB Camera) | Reprojection Error (IR Camera) | Reprojection Error (Binocular Camera) |
|---|---|---|---|---|
| Pre-optimization | 100 | 0.213 | 1.372 | 0.964 |
| Post-optimization | 78 | 0.101 | 0.737 | 0.379 |
| Range of PSNR Values (dB) | Image Quality |
|---|---|
| ≥ 40 | Excellent |
| 30–40 | Good |
| 20–30 | Poor |
| < 20 | Extremely poor |
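PSNR, the denoising evaluation index used here, is defined as 10·log10(MAX² / MSE), where MAX is the peak pixel value and MSE the mean squared error between the reference and test images. A minimal NumPy sketch (assuming 8-bit images, so `peak` defaults to 255) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the denoised image is closer to the reference; identical images give infinite PSNR.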
| PSNR (dB) | Gaussian (σ = 25) | Gaussian (σ = 50) | Binomial (0.5) | Impulse (0.5) | Line (25) |
|---|---|---|---|---|---|
| BM3D | 35.77 | 35.86 | 35.78 | 35.61 | 35.90 |
| DBSN | 43.99 | 40.99 | 46.01 | 41.47 | 34.83 |
| SFD | 44.05 | 41.21 | 49.57 | 48.03 | 37.35 |
| Dimension | Measured Value (mm) | Actual Value (mm) | Error (%) | ST1/SW1 (%) | ST2/SW2 (%) | ST3/SW3 (%) |
|---|---|---|---|---|---|---|
| Thickness | 26.57 | 25.98 | 2.27 | 3.5 | 5.0 | 7.0 |
| Width | 152.90 | 150.17 | 1.82 | 2.0 | 2.5 | 3.0 |
| Length | 198.30 | 200.28 | 0.99 | 2.0 | 2.5 | 3.0 |
| Dimension | Group | Measured Value (mm) | True Value (mm) | Absolute Error (mm) | Relative Error (%) |
|---|---|---|---|---|---|
| Thickness | Group 1 | 25.54 | 25.98 | 0.44 | 1.69 |
| Thickness | Group 2 | 25.75 | 26.11 | 0.36 | 1.38 |
| Thickness | Group 3 | 25.44 | 25.98 | 0.54 | 2.08 |
| Thickness | Group 4 | 25.19 | 25.88 | 0.69 | 2.67 |
| Thickness | Group 5 | 25.36 | 25.95 | 0.59 | 2.27 |
| Width | Group 1 | 36.80 | 37.00 | 0.20 | 0.54 |
| Width | Group 2 | 36.80 | 37.00 | 0.20 | 0.54 |
| Width | Group 3 | 36.95 | 37.00 | 0.05 | 0.14 |
| Width | Group 4 | 36.90 | 37.00 | 0.10 | 0.27 |
| Width | Group 5 | 36.90 | 37.00 | 0.10 | 0.27 |
| Length | Group 1 | 148.46 | 150.63 | 2.17 | 1.44 |
| Length | Group 2 | 148.32 | 149.76 | 1.44 | 0.96 |
| Length | Group 3 | 147.34 | 148.82 | 1.48 | 0.99 |
| Length | Group 4 | 148.42 | 150.31 | 1.89 | 1.26 |
| Length | Group 5 | 147.65 | 149.38 | 1.73 | 1.16 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Huang, L.; Chen, M.; Peng, Z. Point Cloud Measurement of Rubber Tread Dimension Based on RGB-Depth Camera. Appl. Sci. 2024, 14, 6625. https://doi.org/10.3390/app14156625