Image Preprocessing with Enhanced Feature Matching for Map Merging in the Presence of Sensing Error
Abstract
1. Introduction
2. Literature Review
3. Basic Definitions on Map Merging
- Create occupancy grid maps: In the context of map merging, we define $M$ as a 2D occupancy grid map of size $n \times m$, as shown in Equation (1):

$$M = \left\{ m_{i,j} \;\middle|\; 1 \le i \le n,\; 1 \le j \le m \right\} \tag{1}$$

Each cell contains a binary value that indicates whether the cell is occupied ($m_{i,j} = 1$) or free ($m_{i,j} = 0$). Let $p(m_{i,j})$ be the probability that cell $m_{i,j}$ is occupied. A grid cell can then be classified into one of three conditions:

$$m_{i,j} = \begin{cases} \text{occupied}, & p(m_{i,j}) > 0.5 \\ \text{unknown}, & p(m_{i,j}) = 0.5 \\ \text{free}, & p(m_{i,j}) < 0.5 \end{cases} \tag{2}$$

In addition to a binary value, if $M$ represents an image, each cell element in Equation (1) carries a pixel value that can also be used to represent the cell condition.
- Determine acceptance of the merged result: In Equation (3), the acceptance index [11], denoted as $\omega$, serves as a means to evaluate the merging result when the transformed map $M_2'$ is overlaid onto map $M_1$:

$$\omega(M_1, M_2') = \begin{cases} 0, & \text{if } \mathrm{agr}(M_1, M_2') = 0 \\ \dfrac{\mathrm{agr}(M_1, M_2')}{\mathrm{agr}(M_1, M_2') + \mathrm{dis}(M_1, M_2')}, & \text{otherwise} \end{cases} \tag{3}$$

The acceptance index considers two types of grids: agreement and disagreement. Agreement, $\mathrm{agr}(M_1, M_2')$, counts correctly paired grid cells between $M_1$ and $M_2'$, i.e., cells where both maps have the same type (occupied-occupied or free-free) in the overlapping areas. Disagreement, $\mathrm{dis}(M_1, M_2')$, counts incorrect pairings, where the two maps have different types (occupied-free) in those regions. Due to its higher uncertainty, the unknown type is excluded here. When the transformation accurately aligns the grids of the transformed map with those of map $M_1$ in the overlapping regions, the number of agreement grids increases while the number of disagreement grids decreases, so the acceptance index approaches 1. Conversely, if the transformation is incorrect, the overlapping areas contain numerous disagreement grids, leading to a small or even zero acceptance index. Thus, the acceptance index effectively determines the correctness of the map transformation and reflects the quality of the merging result. In our study, we define an index threshold $\omega_{th}$, as shown in Equation (4), to judge the accuracy of the map transformation:

$$\text{transformation accepted} \iff \omega(M_1, M_2') \ge \omega_{th} \tag{4}$$

A short sketch of the cell classification and the acceptance index follows this list.
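As a concrete illustration of Equations (2)-(4), the following NumPy sketch classifies a probability grid into occupied/free/unknown cells and computes the acceptance index. It is a minimal sketch rather than the implementation used in this work; the label encoding and the threshold value `tau = 0.5` are assumptions.

```python
import numpy as np

# Cell labels used throughout this sketch: 1 = occupied, 0 = free, -1 = unknown.
OCCUPIED, FREE, UNKNOWN = 1, 0, -1

def classify_cells(p, tau=0.5):
    """Classify each cell of a probability grid p (values in [0, 1])
    into occupied / free / unknown, using threshold tau (Equation (2))."""
    labels = np.full(p.shape, UNKNOWN, dtype=int)
    labels[p > tau] = OCCUPIED
    labels[p < tau] = FREE
    return labels

def acceptance_index(m1, m2):
    """Acceptance index of Equation (3) [11]: agreement over
    (agreement + disagreement), ignoring unknown cells."""
    known = (m1 != UNKNOWN) & (m2 != UNKNOWN)
    agr = np.count_nonzero(known & (m1 == m2))
    dis = np.count_nonzero(known & (m1 != m2))
    if agr == 0:
        return 0.0
    return agr / (agr + dis)

# Example: two 3x3 label grids with one disagreement among seven known pairs.
m1 = np.array([[1, 0, -1], [0, 0, 1], [1, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1], [-1, 1, 0]])
print(acceptance_index(m1, m2))  # 6 agreements, 1 disagreement -> ~0.857
```

The merged result would then be accepted whenever this value is at least the chosen threshold $\omega_{th}$ of Equation (4).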
4. Map Merging Method
4.1. Existing Method in Image Stitching
- Step 1
- Define a ‘Guess’ model. Randomly sample two matched pairs of SIFT features from the input data, then calculate the four variables of the model, scale $s$, rotation $\theta$, and translation $(t_x, t_y)$, from the similarity transform

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix},$$

which the two sampled correspondences determine exactly.
- Step 2
- If a transformed pair satisfies the error tolerance, i.e., the ‘Guess’ model maps a feature point in the first map to within a distance threshold of its matched feature in the second map, count that pair as an inlier of the current model.
- Step 3
- Repeat Steps 1 and 2 until the maximum number of iterations is reached, and output the model with the most inliers as the best transformation. A sketch of this RANSAC procedure [20] is given after the steps.
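The sketch below illustrates Steps 1-3 on matched feature coordinates. It is a minimal version under stated assumptions: the complex-number closed-form solve for the four variables, the iteration count `iters`, and the inlier tolerance `eps` are illustrative choices, not the paper's exact equations or settings.

```python
import numpy as np

def fit_similarity(p, q):
    """Solve the 4-parameter similarity transform (scale, rotation,
    translation) mapping two source points p onto two targets q.
    Points are treated as complex numbers: q = a*p + t, a = s*exp(i*theta)."""
    p = p[:, 0] + 1j * p[:, 1]
    q = q[:, 0] + 1j * q[:, 1]
    a = (q[1] - q[0]) / (p[1] - p[0])   # encodes scale and rotation
    t = q[0] - a * p[0]                  # encodes translation
    return a, t

def ransac_similarity(src, dst, iters=1000, eps=3.0, seed=None):
    """RANSAC over matched feature coordinates src -> dst (N x 2 arrays).
    Returns the model with the most inliers and its inlier count."""
    rng = np.random.default_rng(seed)
    src_c = src[:, 0] + 1j * src[:, 1]
    dst_c = dst[:, 0] + 1j * dst[:, 1]
    best, best_inliers = None, -1
    for _ in range(iters):
        # Step 1: sample two matched pairs and fit a 'Guess' model.
        i, j = rng.choice(len(src), size=2, replace=False)
        a, t = fit_similarity(src[[i, j]], dst[[i, j]])
        # Step 2: count pairs whose reprojection error is within tolerance.
        residual = np.abs(a * src_c + t - dst_c)
        inliers = np.count_nonzero(residual < eps)
        # Step 3: keep the model with the most inliers.
        if inliers > best_inliers:
            best, best_inliers = (a, t), inliers
    return best, best_inliers
```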
4.2. Proposed Image Pre-Processing Operation
4.2.1. Image Correction Process
Algorithm 1 Extraction of interest points of map images.
Input: M: map image; S: structuring element. Output: $M_I$: map image with interest points.
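The body of Algorithm 1 is not reproduced above, so the sketch below shows one plausible morphological reading based on the cited mathematical-morphology operations [23]: occupied cells that are removed by opening with the structuring element S (thin protrusions, corners, line endpoints) stand out as interest points. The use of opening, the nonzero-means-occupied convention, and the 3×3 cross element are assumptions for illustration only.

```python
import cv2
import numpy as np

def extract_interest_points(m, s):
    """One plausible morphological reading of Algorithm 1: occupied cells
    that disappear under opening with structuring element s are returned
    as interest point coordinates."""
    occ = (m > 0).astype(np.uint8)                 # binary occupied mask (assumed encoding)
    opened = cv2.morphologyEx(occ, cv2.MORPH_OPEN, s)
    interest = cv2.subtract(occ, opened)           # cells removed by the opening
    ys, xs = np.nonzero(interest)
    return np.column_stack([xs, ys])               # (x, y) interest points

# Example structuring element (assumed): a 3x3 cross.
s = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
```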
- Step 1
- Find the reference point $p_r$ closest to the origin using the Manhattan distance, as in Equation (18):

$$p_r = \underset{p \in P}{\arg\min} \; \left( |x_p| + |y_p| \right), \tag{18}$$

where $P$ is the set of interest points.
- Step 2
- Find the target point $p_t$ closest to the reference point, as presented in Equation (19):

$$p_t = \underset{p \in P \setminus \{p_r\}}{\arg\min} \; d(p, p_r). \tag{19}$$
- Step 3
- Connect $p_r$ and $p_t$ with a line, and record all grids $g_1, \ldots, g_n$ that the line intersects. The positions of these $n$ grids are then used to calculate the ratio of occupied areas on the original image M, as shown in Equation (20):

$$r = \frac{1}{n} \sum_{k=1}^{n} \mathbb{1}\left[ M(g_k) = \text{occupied} \right]. \tag{20}$$

The line connecting $p_r$ and $p_t$ is taken to be present in the original image, and can be corrected, if the calculated ratio exceeds the defined threshold, $r > r_{th}$. If not, the algorithm picks a new target point and repeats Step 2.
- Step 4
- If the line between $p_r$ and $p_t$ can be corrected, calculate a vector as in Equation (21):

$$\vec{v} = p_t - p_r = (\Delta x, \Delta y). \tag{21}$$

According to $\vec{v}$, we then take $p_r$ as a reference and translate $p_t$. The translation follows the rules below.
- (1) If $|\Delta x| < |\Delta y|$, translate the grid in the x-direction; for example, $p_t = (x_t, y_t)$ moves to $(x_r, y_t)$;
- (2) If $|\Delta x| > |\Delta y|$, translate the grid in the y-direction; for example, $p_t = (x_t, y_t)$ moves to $(x_t, y_r)$;
- (3) If $|\Delta x| = |\Delta y|$, add an additional grid as a target point according to the previous translation direction and perform the translation again. For example, if the previous direction is:
  - the x-direction, add a new grid adjacent to $p_t$ along the x-direction as the new target point;
  - the y-direction, add a new grid adjacent to $p_t$ along the y-direction as the new target point;
  - none (no previous direction), abandon the translation and select another pair of reference and target points.
In the aforementioned translation, shifts or additions of grids are performed. To ensure that the correction does not differ excessively from the original image, we again connect the reference grid $p_r$ and the shifted (or added) grid $p_t'$ with a line, record all $n$ grids that the line passes through, and calculate the ratio of occupied areas at the same positions on the original image M; this calculation is the same as Equation (20) (see the sketch after Step 5). If the ratio is greater than $r_{th}$, the correction is accepted and the occupied grids on the connecting line are stored in the corrected image $M_C$. If not, the correction is rejected and the original information from image M is stored in $M_C$.
- Step 5
- According to the result of Step 4, the corrected point $p_t'$ can be used as a new reference point to find a new adjacent target point. Steps 3 to 5 are repeated until the correction is completed, and the corrected image $M_C$ is output as the result.
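As noted in Step 4, Equation (20)'s ratio test traces the grids crossed by a candidate line and measures how many of them are occupied in the original map M. The sketch below illustrates this check with Bresenham line traversal; the function names and the assumption that occupied cells are stored as 1 are ours.

```python
import numpy as np

def bresenham(p0, p1):
    """Integer grid cells crossed by the line from p0 to p1 (Bresenham)."""
    (x0, y0), (x1, y1) = p0, p1
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def occupied_ratio(m, p_ref, p_tgt):
    """Ratio of Equation (20): fraction of cells on the line p_ref -> p_tgt
    that are occupied (assumed stored as 1) in the original map image m."""
    cells = bresenham(p_ref, p_tgt)
    occ = sum(1 for (x, y) in cells if m[y, x] == 1)
    return occ / len(cells)

# A correction in Step 4 is accepted only if
# occupied_ratio(m, p_ref, p_tgt_corrected) > r_th.
```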
Algorithm 2 Corrected occupancy grid maps with interest points.
Input: M: map image; $M_I$: map image with interest points; L: distance parameter. Output: $M_C$: map image after correction.
4.2.2. Image Stitching with ICP
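This subsection registers the corrected maps with the iterative closest point (ICP) algorithm [24]. The sketch below is a minimal 2D point-to-point ICP over occupied-cell coordinates, solving each alignment step in closed form via SVD in the spirit of the least-squares estimation of [25]; the k-d tree matching, iteration limit, and convergence tolerance are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=50, tol=1e-6):
    """Minimal point-to-point ICP [24] for 2D occupied-cell coordinates
    (N x 2 arrays). Returns rotation R and translation t mapping src to dst."""
    tree = cKDTree(dst)
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    cur = src.copy()
    for _ in range(iters):
        dists, idx = tree.query(cur)            # nearest-neighbor matches
        matched = dst[idx]
        # Closed-form least-squares rigid registration via SVD [25].
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:           # guard against reflection
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate the transform
        err = dists.mean()
        if abs(prev_err - err) < tol:           # stop when error plateaus
            break
        prev_err = err
    return R, t
```

ICP converges only locally, so a coarse initial alignment (for instance, the feature-based estimate of Section 4.1) is typically applied before this refinement.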
5. Results and Discussion
5.1. Scenario 1: Performance of Our Proposed Method
5.1.1. Results without Image Pre-Processing—Mills’ Method [16]
5.1.2. Results with Image Pre-Processing
5.2. Scenario 2: Merging Multiple Maps
Results with the Proposed Method
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
- Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef]
- Dissanayake, M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef]
- Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, AB, Canada, 28 July–1 August 2002; pp. 593–598. [Google Scholar]
- Grisetti, G.; Kümmerle, R.; Stachniss, C.; Burgard, W. A tutorial on graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
- Yu, S.; Fu, C.; Gostar, A.K.; Hu, M. A Review on Map-Merging Methods for Typical Map Types in Multiple-Ground-Robot SLAM Solutions. Sensors 2020, 20, 6988. [Google Scholar] [CrossRef] [PubMed]
- Jiang, Z.; Zhu, J.; Jin, C.; Xu, S.; Zhou, Y.; Pang, S. Simultaneously merging multi-robot grid maps at different resolutions. Multimed. Tools Appl. 2020, 79, 14553–14572. [Google Scholar] [CrossRef]
- Lee, H.C.; Lee, S.H.; Lee, T.S.; Kim, D.J.; Lee, B.H. A survey of map merging techniques for cooperative-SLAM. In Proceedings of the 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Daejeon, Republic of Korea, 26–29 November 2012; pp. 285–287. [Google Scholar]
- Konolige, K.; Fox, D.; Limketkai, B.; Ko, J.; Stewart, B. Map merging for distributed robot navigation. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 1, pp. 212–217. [Google Scholar]
- Lee, H.S.; Lee, K.M. Multi-robot SLAM using ceiling vision. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 912–917. [Google Scholar]
- Carpin, S. Fast and accurate map merging for multi-robot systems. Auton. Robot. 2008, 25, 305–316. [Google Scholar] [CrossRef]
- Ferrão, V.T.; Vinhal, C.D.N.; da Cruz, G. An occupancy grid map merging algorithm invariant to scale, rotation and translation. In Proceedings of the 2017 Brazilian Conference on Intelligent Systems (BRACIS), Uberlândia, Brazil, 2–5 October 2017; pp. 246–251. [Google Scholar]
- Lindeberg, T. Scale invariant feature transform. Scholarpedia 2012, 7, 10491. [Google Scholar] [CrossRef]
- Arya, S. A review on image stitching and its different methods. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2015, 5, 299–303. [Google Scholar]
- Jiang, Z.; Zhu, J.; Li, Y.; Wang, J.; Li, Z.; Lu, H. Simultaneous merging multiple grid maps using the robust motion averaging. J. Intell. Robot. Syst. 2019, 94, 655–668. [Google Scholar] [CrossRef]
- Mills, A.; Dudek, G. Image stitching with dynamic elements. Image Vis. Comput. 2009, 27, 1593–1602. [Google Scholar] [CrossRef]
- Fox, D.; Burgard, W.; Thrun, S.; Cremers, A.B. Position estimation for mobile robots in dynamic environments. In Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference, AAAI 98, IAAI 98, Madison, WI, USA, 26–30 July 1998; pp. 983–988. [Google Scholar]
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Thrun, S. Learning occupancy grid maps with forward sensor models. Auton. Robot. 2003, 15, 111–127. [Google Scholar] [CrossRef]
- Helgason, S. The Radon Transform; Springer: Berlin/Heidelberg, Germany, 1980; Volume 2. [Google Scholar]
- Haralick, R.M.; Sternberg, S.R.; Zhuang, X. Image analysis using mathematical morphology. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 532–550. [Google Scholar] [CrossRef] [PubMed]
- Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; International Society for Optics and Photonics: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
- Huang, T.S.; Blostein, S.D.; Margerum, E. Least-squares estimation of motion parameters from 3-D point correspondences. In Proceedings of the IEEE Conference Computer Vision and Pattern Recognition, Miami Beach, FL, USA, 22–26 June 1986; pp. 112–115. [Google Scholar]
- Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3, p. 5. [Google Scholar]
- Koenig, N.; Howard, A. Design and use paradigms for gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar]
- Bradski, G. The OpenCV library. Dr. Dobb’s J. Softw. Tools Prof. Program. 2000, 25, 120–123. [Google Scholar]
| Local Maps | Resolution (Number of Grids/Meter) | Lidar Sensing Error (Standard Deviation, Meter) |
|---|---|---|
| Map 1 (Figure 12a) | 20 | 0.005 |
| Map 2 (Figure 12b) | 20 | 0.005 |
| Map 3 (Figure 12c) | 10 | 0.005 |
| Map 4 (Figure 12d) | 20 | 0.05 |
| | Input Map 1 | Input Map 2 | Relation of Variables |
|---|---|---|---|
| Set 1 | Local map 1 (Figure 12a) | Local map 2 (Figure 12b) | Rotation, Translation |
| Set 2 | Local map 1 (Figure 12a) | Local map 3 (Figure 12c) | Rotation, Translation, Scale |
| Set 3 | Local map 1 (Figure 12a) | Local map 4 (Figure 12d) | Rotation, Translation, Sensing Error |
| | Set 1 | Set 2 | Set 3 |
|---|---|---|---|
| Acceptance Index | | | |
| Human judgment | Success | Failure | Failure |
| | Set 1 | Set 2 | Set 3 |
|---|---|---|---|
| Acceptance Index | | | |
| Human judgment | Success | Success | Success |
| | Existing Method | Our Method | Increased Performance |
|---|---|---|---|
| Set 1 | | | |
| Set 2 | | | |
| Set 3 | | | |
| Local Maps | Resolution (Number of Grids/Meter) | Lidar Sensing Error (Standard Deviation, Meter) |
|---|---|---|
| Map 1 (Figure 19a) | 20 | 0.005 |
| Map 2 (Figure 19b) | 20 | 0.005 |
| Map 3 (Figure 19c) | 20 | 0.005 |
| | Input Map 1 | Input Map 2 | Relation of Variables |
|---|---|---|---|
| Set 1 | Local map 1 (Figure 19a) | Local map 2 (Figure 19b) | Rotation, Translation |
| Set 2 | Local map 1 (Figure 19a) | Local map 3 (Figure 19c) | Rotation, Translation |
| | Set 1 | Set 2 |
|---|---|---|
| Acceptance Index | | |
| Human judgment | Success | Success |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).