Binocular Visual Measurement Method Based on Feature Matching
Abstract
1. Introduction
2. Algorithm Flow Design
- (1) Obtain left and right images from the stereo camera.
- (2) Extract feature points using the FAST feature point detection algorithm.
- (3) Treat the feature points as seed points and perform region growing.
- (4) Match feature regions between the left and right images based on the region information.
- (5) Within each pair of matched feature regions, match the seed points.
- (6) If the majority of feature points in a region fail to match, the region match is judged erroneous and the region matching is recomputed. If the majority of feature points within a region match correctly, the region match is accepted and the remaining incorrect feature point matches are discarded.
- (7) Use the triangulation formula to calculate the three-dimensional information of the object under test.
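As a hedged illustration of step (7), the sketch below recovers 3D coordinates from a matched point pair in a rectified stereo setup using the standard triangulation relation Z = f·b/(x_l − x_r). The focal length, baseline, and principal point used here are placeholder values, not the paper's calibration parameters.

```python
# Triangulation for a rectified stereo pair (illustrative sketch).
# Assumed placeholders: focal length f (px), baseline b (m), principal point (cx, cy).

def triangulate(xl, yl, xr, f=700.0, b=0.12, cx=320.0, cy=240.0):
    """Recover (X, Y, Z) in metres from a matched pixel pair.

    xl, yl: pixel coordinates of the point in the left image
    xr:     column of the same point in the right image (same row after rectification)
    """
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f * b / d                    # depth from similar triangles
    X = (xl - cx) * Z / f            # back-project to camera coordinates
    Y = (yl - cy) * Z / f
    return X, Y, Z
```

With these placeholder values, a 50-pixel disparity maps to a depth of f·b/d = 700·0.12/50 = 1.68 m.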
3. Three-Dimensional Measurement Method Based on Improved Feature Matching Algorithm
3.1. FAST Feature Point Detection
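As a minimal sketch of the FAST segment test (the FAST-9 variant: a pixel is a corner when at least 9 contiguous pixels on a 16-pixel Bresenham circle of radius 3 are all brighter or all darker than the centre by more than a threshold). The threshold and image representation are illustrative; a practical detector would add the high-speed pre-test and non-maximum suppression described by Rosten and Drummond.

```python
# FAST-9 segment test on a grayscale image stored as a list of lists.
# The 16 circle offsets (dr, dc) are listed in traversal order around the circle.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, r, c, t):
    """True if >= 9 contiguous circle pixels are all brighter or all darker
    than img[r][c] by more than t."""
    p = img[r][c]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    flags = []
    for dr, dc in CIRCLE:
        q = img[r + dr][c + dc]
        flags.append(1 if q > p + t else (-1 if q < p - t else 0))
    # Check circular contiguity by scanning the doubled flag sequence.
    for target in (1, -1):
        run = 0
        for f in flags + flags:
            run = run + 1 if f == target else 0
            if run >= 9:
                return True
    return False

def detect_fast(img, t=50):
    """Return all (row, col) passing the segment test (no non-max suppression)."""
    h, w = len(img), len(img[0])
    return [(r, c) for r in range(3, h - 3) for c in range(3, w - 3)
            if is_corner(img, r, c, t)]
```

On a bright square against a dark background, this test fires at the square's corners (roughly three quarters of the circle lies outside the square there) but not along its straight edges, where only about half the circle differs from the centre.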
3.2. Region Growth Based on Seed Points
3.2.1. Constraints of Binocular Cameras
3.2.2. Improved Particle Swarm Optimization
- (1) Suppose that in a D-dimensional target search space there is a population of M particles, each representing a potential solution to the problem. Every particle has a position and a velocity: the position of the i-th particle is a D-dimensional vector X_i = (x_i1, x_i2, …, x_iD), and its velocity is a D-dimensional vector V_i = (v_i1, v_i2, …, v_iD).
- (2) Randomly initialize the M particles, then search iteratively for the optimal solution. In each iteration, every particle updates its velocity and position under the guidance of two extremes. One is the particle's local optimum, i.e. the best value that particle has found up to the current iteration, denoted p_best; the other is the global optimum, i.e. the best value the whole swarm has found up to the current iteration, denoted g_best.
- (3) After the local and global optima are updated, each particle updates its velocity and position according to the following Equations (2) and (3).
- (4) For each particle trapped in a poorer search region, define the stretching operation:
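Equations (2) and (3) are not reproduced above; the sketch below uses the canonical PSO updates they are based on, v ← w·v + c1·r1·(p_best − x) + c2·r2·(g_best − x) and x ← x + v. The inertia weight, acceleration coefficients, and the sphere test function are illustrative choices rather than the paper's settings, and the stretching operation of step (4) is omitted.

```python
import random

def pso(f, dim=2, m=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with standard PSO (illustrative sketch)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    v = [[0.0] * dim for _ in range(m)]
    pbest = [xi[:] for xi in x]                 # each particle's best position
    pval = [f(xi) for xi in x]
    g = min(range(m), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # swarm's best position so far
    for _ in range(iters):
        for i in range(m):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Position update, clamped to the search space.
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fi = f(x[i])
            if fi < pval[i]:                    # update local optimum p_best
                pbest[i], pval[i] = x[i][:], fi
                if fi < gval:                   # update global optimum g_best
                    gbest, gval = x[i][:], fi
    return gbest, gval

# Usage: minimize the sphere function f(x) = sum(x_d^2); optimum at the origin.
best, val = pso(lambda p: sum(t * t for t in p))
```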
3.2.3. Similarity Measurement Function
3.2.4. Region Growing
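The region-growing step can be summarized as the following minimal sketch: breadth-first growth from a seed pixel, absorbing 4-connected neighbours whose intensity differs from the seed by at most a threshold t. The plain intensity-difference criterion here stands in for the paper's similarity measurement function of Section 3.2.3, which is not reproduced.

```python
from collections import deque

def grow_region(img, seed, t=10):
    """Return the set of (row, col) pixels grown from `seed` by BFS,
    keeping 4-connected neighbours within intensity distance t of the seed."""
    h, w = len(img), len(img[0])
    sr, sc = seed
    ref = img[sr][sc]                       # compare against the seed intensity
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - ref) <= t):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

On an image split into two flat intensity halves, a seed in one half grows to exactly that half and stops at the boundary.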
3.3. Triangulation Principle
4. Experimental Results and Data Analysis
4.1. Experimental Setup
4.2. Experimental Results
4.3. Data Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Li, D.; Xu, L.; Tang, X.S.; Sun, S.; Cai, X.; Zhang, P. 3D imaging of greenhouse plants with an inexpensive binocular stereo vision system. Remote Sens. 2017, 9, 508. [Google Scholar] [CrossRef]
- Sun, J.; Zhang, G.; Wei, Z.; Zhou, F. Large 3D free surface measurement using a mobile coded light-based stereo vision system. Sens. Actuators A Phys. 2006, 132, 460–471. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Proceedings of the European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Alahi, A.; Ortiz, R.; Vandergheynst, P. Freak: Fast retina keypoint. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 510–517. [Google Scholar]
- Zitnick, C.L.; Ramnath, R. Edge foci interest points. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Barcelona, Spain, 6–13 November 2011; pp. 3225–3232. [Google Scholar]
- Balntas, V.; Lenc, K.; Vedaldi, A.; Mikolajczyk, K. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5173–5182. [Google Scholar]
- Yi, K.M.; Trulls, E.; Ono, Y.; Mordohai, P.; Fua, P. Learning to assign orientations to feature points. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4823–4831. [Google Scholar]
- Auvolat, A.; Lepetit, V. Towards real-time photorealistic 3D reconstruction of opaque objects. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 274–290. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature verification using a “Siamese” time delay neural network. Int. J. Pattern Recognit. Artif. Intell. 1993, 7, 669–688. [Google Scholar] [CrossRef]
- Pilzer, A.; Xu, D.; Puscas, M.; Ricci, E.; Sebe, N. Unsupervised adversarial depth estimation using cycled generative networks. In Proceedings of the International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; IEEE Press: New York, NY, USA, 2018; pp. 587–595. [Google Scholar]
- Wu, W.; Wang, D.; Xing, Y.; Gong, X.; Liu, J. Binocular Visual Odometry Algorithm and Experimental Research for Lunar Rover Exploration. Sci. China Inf. Sci. 2011, 41, 1415–1422. [Google Scholar]
- Wang, Z.F.; Zheng, Z.G. A region based stereo matching algorithm using cooperative optimization. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
- Zhang, J.Y. Research on Extraction, Matching, and Application of Local Invariant Features in Images. Ph.D. Thesis, School of Automation, Nanjing University of Science and Technology, Nanjing, China, 2010; pp. 25–70. [Google Scholar]
- Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Proceedings of the Computer Vision—ECCV 2006: 9th European Conference on Computer Vision, Part I, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443. [Google Scholar]
- Lin, H.Y.; Tsai, C.L. Depth measurement based on stereo vision with integrated camera rotation. IEEE Trans. Instrum. Meas. 2021, 70, 5009210. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
- Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 120–127. [Google Scholar]
- Bellman, R. The theory of dynamic programming. Bull. Am. Math. Soc. 1954, 60, 503–515. [Google Scholar] [CrossRef]
| Algorithm Name | False Match Rate | PSNR |
|---|---|---|
| Siamese Network for Feature Matching | 14.75% | 30 dB |
| GAN-Based Feature Matching | 13.47% | 30 dB |
| Region Matching Element-Based Feature Matching | 7.49% | 33 dB |
| Dynamic Programming | 8.59% | 27 dB |
| SIFT-Based Feature Matching | 5.42% | 32 dB |
| Proposed Algorithm | 2.02% | 34 dB |
| Objective Index | 10 cd/m² | 20 cd/m² | 30 cd/m² | 40 cd/m² | 50 cd/m² |
|---|---|---|---|---|---|
| Correct rate | 97.61% | 97.63% | 97.57% | 97.59% | 97.62% |
| Group Number | Measured Length/mm | True Length/mm | Relative Error/% | Cost Time/s |
|---|---|---|---|---|
| 1 | 53.21 | 54.26 | 1.93 | 0.74 |
| 2 | 57.89 | 58.16 | 0.46 | 0.68 |
| 3 | 73.38 | 73.29 | 0.12 | 0.59 |
| 4 | 76.75 | 78.01 | 1.61 | 1.12 |
| 5 | 81.39 | 82.67 | 1.54 | 0.88 |
| 6 | 84.57 | 84.69 | 0.14 | 0.79 |
| 7 | 92.59 | 92.12 | 0.51 | 0.76 |
| 8 | 94.57 | 95.31 | 0.77 | 0.97 |
| 9 | 105.34 | 104.99 | 0.33 | 0.89 |
| 10 | 107.89 | 108.01 | 0.11 | 0.89 |
Xie, Z.; Yang, C. Binocular Visual Measurement Method Based on Feature Matching. Sensors 2024, 24, 1807. https://doi.org/10.3390/s24061807