An AR Map Virtual–Real Fusion Method Based on Element Recognition
Abstract
1. Introduction
- (1)
- This study demonstrates the feasibility of implementing augmented reality map applications through automatic identification and extraction of unmarked map elements. Most previous AR studies relied on markers that were replaced by virtual models rather than on unmarked map elements, and therefore lacked flexibility and automation.
- (2)
- A new step-by-step identification and extraction method for unmarked map elements is designed and proposed. The method combines the spatial and attribute characteristics of point-like and line elements, extracts the color, geometric features, and spatial distribution of map elements with computer vision techniques, and completes their identification and automatic extraction. It replaces the marker mode of traditional AR for point-like and line elements in planar maps. Flexible, accurate, and general, it can automatically identify and extract multiple types of elements and multiple targets from a map.
- (3)
- A virtual–real fusion algorithm for AR maps and virtual models is proposed. The algorithm, which is broadly applicable, performs step-by-step identification and extraction of unmarked map elements, 3D tracking and registration, and seamless integration of the real scene with the virtual model.
- (4)
- Experiments on the identification of unmarked map elements and on virtual–real fusion, together with an analysis of the results, show that the proposed step-by-step identification and extraction method is effective. The AR map virtual–real fusion algorithm extracts the virtual models corresponding to map elements and integrates them seamlessly with the real scene; the models are displayed accurately and stably, achieving the intended enhancement of map expression. An analysis of the efficiency and recognition rate of the step-by-step identification further shows that the method meets real-time requirements with high identification efficiency and achieves a high accuracy rate. Compared with classical methods, it improves both the recognition efficiency and the recognition rate for point-like and line elements in the map.
2. Related Research
3. Identification and Extraction of Unmarked Map Elements
3.1. Characterization of Planar Map Elements
3.2. Step-by-Step Identification and Extraction Method for Unmarked Map Elements
3.2.1. Multi-Target Point-like Element Recognition and Extraction Based on Template and Contour Matching
3.2.2. Line Element Identification and Extraction Based on Color Space and Region Growth
- (1)
- In accordance with the characteristics of the human visual system, the image is converted from the RGB color space to the HSV color space to ensure effective target extraction [37]; the conversion is given in Equation (10):
- (2)
- Based on the color characteristics of the line elements (roads), the specified color ranges in the planar map are determined and the associated road target areas are extracted. The color ranges set corresponding data intervals for hue, saturation, and value, as determined by the line elements in the planar map.
- (3)
- The image is converted to grayscale, and its grayscale values are used as the basis for binarization segmentation, yielding the target binary image.
- (4)
- Because the converted image still has some rough edges, it is smoothed with bilateral filtering, a non-linear method that preserves image edges better than alternative filters.
- (5)
- The road target area is thinned using the Rosenfeld algorithm [38], completing the preparation for generating the road target skeleton line.
- (6)
- The road skeleton line is extracted using the region-growing method. An initial seed point is set by selecting any pixel of the road target area as the seed.
- (7)
- Starting from the initial seed point, each non-seed pixel in the eight-neighborhood of a seed point is tested against the growth rule; pixels that satisfy it are inserted into the seed point set, and pixels that do not are not grown. When growth stops, a new initial seed point is sought.
- (8)
- While initial seed points remain available in the seed point set, step (7) is repeated. Once no initial seed point remains, seed diffusion ends. The extracted road line is then checked against a minimum length threshold: if it falls short, it is discarded; otherwise, it is retained.
- (9)
- Target extraction ends.
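The color space conversion in step (1) follows the standard RGB-to-HSV formulation; since Equation (10) is not reproduced above, the sketch below uses the conventional max/min form. The function name `rgb_to_hsv` and the [0, 1] channel scaling are illustrative assumptions, not taken from the paper:

```python
def rgb_to_hsv(r, g, b):
    """Standard per-pixel RGB -> HSV conversion with r, g, b in [0, 1].

    Returns h in degrees [0, 360) and s, v in [0, 1]. This is the
    conventional max/min formulation that conversions such as the
    paper's Equation (10) follow.
    """
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    v = mx                                   # value: the brightest channel
    s = 0.0 if mx == 0 else delta / mx       # saturation: relative spread
    if delta == 0:
        h = 0.0                              # achromatic: hue undefined, use 0
    elif mx == r:
        h = (60.0 * (g - b) / delta) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / delta + 120.0
    else:
        h = 60.0 * (r - g) / delta + 240.0
    return h, s, v
```

The color range test of step (2) then reduces to a per-pixel check such as `h_low <= h <= h_high and s >= s_min and v >= v_min`, with the bounds chosen from the road color in the planar map.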
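Steps (6) to (9) describe an eight-neighborhood region-growing pass. The following is a minimal sketch under the assumption that the growth rule is simply membership in the extracted road target mask; the names `region_grow` and `mask` are illustrative, not from the paper:

```python
from collections import deque

def region_grow(mask, seed):
    """Collect the 8-connected region of True pixels containing `seed`.

    `mask` is a 2D list of booleans (the thinned road target area);
    `seed` is a (row, col) tuple. Returns the set of (row, col) pixels
    reached from the seed.
    """
    rows, cols = len(mask), len(mask[0])
    region, frontier = set(), deque([seed])
    while frontier:
        y, x = frontier.popleft()
        if (y, x) in region or not (0 <= y < rows and 0 <= x < cols):
            continue
        if not mask[y][x]:
            continue                     # growth rule: pixel must lie in the road mask
        region.add((y, x))
        for dy in (-1, 0, 1):            # enqueue the eight-neighborhood
            for dx in (-1, 0, 1):
                if dy or dx:
                    frontier.append((y + dy, x + dx))
    return region
```

Per step (8), a caller would repeat this from fresh seed points until none remain and discard any extracted region shorter than the minimum road length threshold.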
4. Three-Dimensional Tracking and Registration of Unmarked Map Elements
5. AR Map Virtual–Real Fusion Algorithm
6. AR Representation Experiments and Analysis of Maps
6.1. Experimental Methods
6.2. Step-by-Step Identification and Extraction of Unmarked Map Elements
6.2.1. Point-like Element Identification and Extraction
6.2.2. Line Element Identification and Extraction
6.3. Virtual–Real Fusion in AR Maps
6.4. Experimental Results and Analysis
6.4.1. Validation Analysis of Identification of Unmarked Map Elements and Virtual–Real Fusion
6.4.2. Efficiency and Recognition Rate Analysis of Step-by-Step Identification of Unmarked Map Elements
6.4.3. Experiment Summary
7. Conclusions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Peitso, L.E.; Michael, J.B. The promise of interactive shared augmented reality. Computer 2020, 53, 45–52. [Google Scholar] [CrossRef]
- Cools, R.; Han, J.; Simeone, A.L. SelectVisAR: Selective Visualisation of Virtual Environments in Augmented Reality. In Designing Interactive Systems Conference 2021 (DIS ’21); Association for Computing Machinery: New York, NY, USA, 2021; pp. 275–282. [Google Scholar]
- De Lucia, A.; Francese, R.; Passero, I.; Tortora, G. Augmented Reality Mobile Applications: Challenges and Solutions. Recent Pat. Comput. Sci. 2011, 4, 80–90. [Google Scholar] [CrossRef]
- Chatain, J.; Demangeat, M.; Brock, A.M.; Laval, D.; Hachet, M. Exploring input modalities for interacting with augmented paper maps. In Proceedings of the 27th Conference on l’Interaction Homme-Machine, Toulouse, France, 27–30 October 2015; pp. 1–6. [Google Scholar]
- Morrison, A.; Mulloni, A.; Lemmelä, S.; Oulasvirta, A.; Jacucci, G.; Peltonen, P.; Schmalstieg, D.; Regenbrecht, H. Collaborative use of mobile augmented reality with paper maps. Comput. Graph. 2011, 35, 789–799. [Google Scholar] [CrossRef] [Green Version]
- Sun, M.; Chen, X.; Zhang, F.; Zheng, H. Augmented Reality Geographic Information System. Acta Sci. Nat. Univ. Pekin. 2004, 6, 906–913. [Google Scholar]
- Paelke, V.; Sester, M. Augmented paper maps: Exploring the design space of a mixed reality system. ISPRS J. Photogramm. Remote Sens. 2010, 65, 256–265. [Google Scholar]
- Bobrich, J.; Otto, S. Augmented Maps. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 502–505. [Google Scholar]
- Schmalstieg, D.; Reitmayr, G. Augmented Reality as a Medium for Cartography. In Multimedia Cartography; Springer: New York, NY, USA, 2006; pp. 267–282. [Google Scholar]
- Uchiyama, H.; Saito, H.; Servières, M.; Moreau, G. AR city representation system based on map recognition using topological information. In Virtual and Mixed Reality, LNCS 5622; Springer: Berlin/Heidelberg, Germany, 2009; pp. 128–135. [Google Scholar]
- Zhang, A.; Zhuang, J.; Qi, Q. Visualization and Interaction of Augmented Paper Maps Based on Augmented Reality. Trop. Geogr. 2012, 32, 476–480. [Google Scholar]
- Fang, X.; Qu, Q. Paper map expression and its application based on mobile augmented reality. Microcomput. Appl. 2014, 7, 41–43, 47. [Google Scholar]
- An, Z.; Xu, X.; Yang, J.; Liu, Y.; Yan, Y. Research of the three-dimensional tracking and registration method based on multiobjective constraints in an AR system. Appl. Opt. 2018, 57, 9625–9634. [Google Scholar] [CrossRef]
- Santos, C.; Araújo, T.; Morais, J.; Meiguins, B. Hybrid approach using sensors, GPS and vision based tracking to improve the registration in mobile augmented reality applications. Int. J. Multimed. Ubiquitous Eng. 2017, 12, 117–130. [Google Scholar] [CrossRef]
- Wu, Y.; Che, W.; Huang, B. An Improved 3D Registration Method of Mobile Augmented Reality for Urban Built Environment. Int. J. Comput. Games Technol. 2021, 2021, 8810991. [Google Scholar] [CrossRef]
- Pauwels, K.; Rubio, L.; Diaz, J.; Ros, E. Real-time model-based rigid object pose estimation and tracking combining dense and sparse visual cues. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2347–2354. [Google Scholar]
- Payet, N.; Todorovic, S. From contours to 3d object detection and pose estimation. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 983–990. [Google Scholar]
- Wang, Y.; Zhang, S.; He, W.; Bai, X. Model-based marker-less 3D tracking approach for augmented reality. J. Shanghai Jiaotong Univ. 2018, 52, 83. [Google Scholar]
- Chari, V.; Singh, J.M.; Narayanan, P.J. Augmented Reality Using Over-Segmentation; Center for Visual Information Technology, International Institute of Information Technology: Hyderabad, India, 2008. [Google Scholar]
- Pambudi, E.A.; Fauzan, A.; Sugiyanto, S. Logarithmic transformation for enhancing keypoint matching of SIFT in augmented reality. AIP Conf. Proc. 2022, 2578, 060010. [Google Scholar]
- Li, X.; Wang, X.; Cheng, C. Application of scene recognition technology based on fast ER and surf algorithm in augmented reality. In Proceedings of the 4th International Conference on Smart and Sustainable City (ICSSC 2017), Shanghai, China, 5–6 June 2017; pp. 1–5. [Google Scholar]
- Tian, Y.; Zhou, X.; Wang, X.; Wang, Z.; Yao, H. Registration and occlusion handling based on the FAST ICP-ORB method for augmented reality systems. Multimed. Tools Appl. 2021, 80, 21041–21058. [Google Scholar] [CrossRef]
- Tan, S.Y.; Arshad, H.; Abdullah, A. An improved colour binary descriptor algorithm for mobile augmented reality. Virtual Real. 2021, 25, 1193–1219. [Google Scholar] [CrossRef]
- Malek, K.; Mohammadkhorasani, A.; Moreu, F. Methodology to integrate augmented reality and pattern recognition for crack detection. In Computer-Aided Civil and Infrastructure Engineering; Wiley: Hoboken, NJ, USA, 2022. [Google Scholar]
- Araghi, L.F.; Arvan, M.R. An implementation image edge and feature detection using neural network. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 18–20 March 2009; pp. 835–837. [Google Scholar]
- Maini, R.; Aggarwal, H. Study and comparison of various image edge detection techniques. Int. J. Image Process. (IJIP) 2009, 3, 1–11. [Google Scholar]
- Ren, F.; Wu, X. Outdoor Augmented Reality Spatial Information Representation. Appl. Math. 2013, 7, 505–509. [Google Scholar] [CrossRef] [Green Version]
- Pan, C.; Chen, Y.; Wang, G. Virtual-Real Fusion with Dynamic Scene from Videos. In Proceedings of the 2016 International Conference on Cyberworlds (CW), Chongqing, China, 28–30 September 2016. [Google Scholar]
- Brejcha, J.; Lukáč, M.; Hold-Geoffroy, Y.; Wang, O.; Čadík, M. Landscapear: Large scale outdoor augmented reality by matching photographs with terrain models using learned descriptors. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 295–312. [Google Scholar]
- Stylianidis, E.; Valari, E.; Pagani, A.; Carrillo, I.; Kounoudes, A.; Michail, K.; Smagas, K. Augmented Reality Geovisualisation for Underground Utilities. J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 173–185. [Google Scholar] [CrossRef]
- Garbin, E.P.; Santil, F.L.D.P.; Bravo, J.V.M. Semiotics and the Cartographic Visualization theory: Considerations on the analysis of map design. Bol. Ciênc. Geod. 2012, 18, 624–642. [Google Scholar] [CrossRef] [Green Version]
- Chang, K.T. Introduction to Geographic Information Systems; McGraw-Hill Education: New York, NY, USA, 2018. [Google Scholar]
- Eba, S.; Nakabayashi, N.; Hashimoto, M. Single-scan multiple object detection based on template matching using only effective pixels. In Proceedings of the International Workshop on Advanced Imaging Technology (IWAIT), Hong Kong, China, 4–6 January 2022; Volume 12177, pp. 55–60. [Google Scholar]
- Liu, Z.; Guo, Y.; Feng, Z.; Zhang, S. Improved Rectangle Template Matching Based Feature Point Matching Algorithm. In Proceedings of the Chinese Control and Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 2275–2280. [Google Scholar]
- Ingle, M.A.; Talmale, G.R. Respiratory mask selection and leakage detection system based on canny edge detection operator. Procedia Comput. Sci. 2016, 78, 323–329. [Google Scholar] [CrossRef] [Green Version]
- Khotanzad, A.; Zink, E. Contour line and geographic feature extraction from USGS color topographical paper maps. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 18–31. [Google Scholar] [CrossRef]
- Zhou, W.; Xu, J.; Jiang, Q.; Chen, Z. No-reference quality assessment for 360-degree images by analysis of multifrequency information and local-global naturalness. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1778–1791. [Google Scholar] [CrossRef]
- Xu, S.; Xun, Y.; Jia, T.; Yang, Q. Detection method for the buds on winter vines based on computer vision. In Proceedings of the 2014 Seventh International Symposium on Computational Intelligence and Design, Hangzhou, China, 13–14 December 2014; Volume 2, pp. 44–48. [Google Scholar]
- Yalcinkaya, B.; Aguizo, J.; Couceiro, M.; Figueiredo, A. A Multimodal Tracking Approach For Augmented Reality Applications. In Proceedings of the 12th Augmented Human International Conference (AH2021), Geneva, Switzerland, 27–28 May 2021; pp. 1–8. [Google Scholar]
- Shi, Q.; Wang, Y.T.; Cheng, J. Vision-Based Algorithm for Augmented Reality Registration. J. Image Graph. 2002, 7, 679–683. [Google Scholar]
- Burkard, S.; Fuchs-Kittowski, F. User-aided global registration method using geospatial 3D data for large-scale mobile outdoor augmented reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Recife, Brazil, 9–13 November 2020; pp. 104–109. [Google Scholar]
- Koenderink, J.J.; Van Doorn, A.J. Affine structure from motion. JOSA A 1991, 8, 377–385. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wu, X.; Ren, F. Spatial information augmented representation using affine transformations registration. Comput. Eng. Appl. 2010, 3, 16–19, 29. [Google Scholar]
- Ma, W.; Xiong, H.; Dai, X.; Zheng, X.; Zhou, Y. An indoor scene recognition-based 3D registration mechanism for real-time AR-GIS visualization in mobile applications. ISPRS Int. J. Geo-Inf. 2018, 7, 112. [Google Scholar] [CrossRef] [Green Version]
- Laganière, R. OpenCV Computer Vision Application Programming Cookbook, 2nd ed.; Packt Publishing: Birmingham, UK, 2014. [Google Scholar]
- Sellers, G.; Wright, R.S., Jr.; Haemel, N. OpenGL superBible: Comprehensive Tutorial and Reference; Addison-Wesley: Boston, MA, USA, 2013. [Google Scholar]
- Zhang, T.Y.; Suen, C.Y. A Fast Parallel Algorithm for Thinning Digital Patterns. Commun. Acm 1984, 27, 236–239. [Google Scholar] [CrossRef]
Experiment content: point-like element and line element identification. t1 = single mean time for point-like element identification (ms); t2 = single mean time for line element identification (ms).

Identification Times (N) | t1, The Proposed Method | t1, SIFT | t2, The Proposed Method | t2, Region Extraction + Fast Parallel
---|---|---|---|---
10 | 115.11 | 501.82 | 21.21 | 31.96
20 | 114.93 | 498.98 | 19.97 | 31.23
30 | 114.86 | 498.97 | 19.88 | 33.58
40 | 114.76 | 504.70 | 19.85 | 28.71
50 | 114.73 | 502.60 | 20.95 | 30.87
60 | 114.66 | 502.85 | 19.79 | 33.42
70 | 114.57 | 506.25 | 19.73 | 33.37
80 | 114.43 | 508.08 | 19.81 | 30.82
90 | 114.27 | 503.37 | 20.97 | 30.80
100 | 114.29 | 502.62 | 19.87 | 30.76
Wang, Z. An AR Map Virtual–Real Fusion Method Based on Element Recognition. ISPRS Int. J. Geo-Inf. 2023, 12, 126. https://doi.org/10.3390/ijgi12030126