Tape-Shaped, Multiscale, and Continuous-Readable Fiducial Marker for Indoor Navigation and Localization Systems
Abstract
1. Introduction
2. Related Works
Markers | Shape | Color | Encoding | Algorithm |
---|---|---|---|---|
AprilTag [16] | Square | Monochrome | Code | Geometric calculations |
AprilTag2 [29] | Square | Monochrome | Code | Geometric calculations |
AprilTag3 [30] | Square | Monochrome | Code | Geometric calculations |
AprilTags 3D [31] | Square | Monochrome | Code | Geometric calculations |
ArUco [17] | Square | Monochrome | Code | Geometric calculations |
BlurTags [32] | Square | Monochrome | Code | Geometric calculations |
BullsEye [33] | Circle | Monochrome | Code | Geometric calculations |
Cantag [34] | Circle | Monochrome | Code | Geometric calculations |
CCTag [35] | Circle | Monochrome | Code | Geometric calculations |
Chilitags [36] | Square | Monochrome | Code | Geometric calculations |
ChromaTag [37] | Square | Multicolor | Code | Geometric calculations |
Claus and Fitzgibbon [38] | Square | Monochrome | Glyph | Trained |
Color marker-based [39] | Triangulated | Multicolor | Code | Geometric calculations |
Concentric contrasting circle [40] | Circle | Monochrome | Code | Geometric calculations |
Concentric ring fiducial [41] | Circle | Monochrome | Code | Geometric calculations |
CoP-Tag [42] | Square | Monochrome | Code | Geometric calculations |
CyberCode [43] | Square | Monochrome | Code | Geometric calculations |
DeepTag [12] | Square | Multicolor | Glyph | Trained |
E2ETag [44] | Square | Monochrome | Code | Trained |
Farkas et al. [45] | Square | Multicolor | Code | Geometric calculations |
FourierTag [27] | Circle | Monochrome | Code | Geometric calculations |
Fractal Marker [46] | Square | Monochrome | Code | Geometric calculations |
HArCo marker [25] | Square | Monochrome | Code | Geometric calculations |
ICL [47] | Square | Monochrome | Code | Region adjacency |
Jumarker [21] | Cube | Multicolor | Code | Geometric calculations |
LFTag [48] | Square | Multicolor | Code | Region adjacency |
Markers with alphabet [49] | Cube | Monochrome | Glyph | Trained |
Monospectrum marker [22] | Square | Multicolor | Code | Geometric calculations |
Order Type Tags [50] | Square | Monochrome | Code | Geometric calculations |
Pi-Tag [51] | Square | Monochrome | Code | Geometric calculations |
PRASAD et al. [52] | Square | Monochrome | Code | Geometric calculations |
ReacTIVision [23] | Undefined | Monochrome | Code | Region adjacency |
RuneTag [53] | Circle | Monochrome | Code | Geometric calculations |
Seedmarkers [54] | Undefined | Monochrome | Code | Region adjacency |
SIFT [28] | Square | Monochrome | Code | Geometric calculations |
sSLAM [20] | Square | Monochrome | Code | Geometric calculations |
STag [18] | Square | Monochrome | Code | Geometric calculations |
Standard Pattern [55] | Rectangle | Monochrome | Code | Geometric calculations |
SURF [28] | Square | Monochrome | Code | Geometric calculations |
SVMS [56] | Square | Monochrome | Code | Geometric calculations |
Tcross [57] | Square | Multicolor | Code | Trained |
Topotag [58] | Square | Monochrome | Code | Region adjacency |
TRIP [59] | Circle | Monochrome | Code | Geometric calculations |
WhyCode [60] | Circle | Monochrome | Code | Geometric calculations |
X-tag [61] | Square | Monochrome | Code | Geometric calculations |
3. System Overview
3.1. Marker Design
- Finder Patterns enable detection and positioning of the marker. They are located at the ends (left and right) of each symbol and consist of a 7 × 7 dark square, a 5 × 5 light square inside it, and a 3 × 3 dark square at the center.
- Quiet Zone is an area of white blocks surrounding the Finder Patterns, Alignment Patterns, and encoding region; it contains no data and ensures that surrounding content does not interfere with reading the marker's coded data.
- Alignment Patterns are the black squares that form an 'L' shape in a CM, with a black square in the upper-right corner of the region, providing marker orientation in the scene.
- The encoding region contains black and white blocks corresponding to the coded information embedded in the marker.
- Checksum contains eight black or white blocks corresponding to the validation bits of the information embedded in the marker.
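The eight-bit checksum described above can be sketched in code. The XOR-folding scheme below is an assumption for illustration — the paper does not reproduce its exact validation function — but it shows how eight parity blocks can validate the encoded bits:

```python
def checksum_bits(data_bits, n_check=8):
    """Illustrative checksum: XOR-fold the data bits into n_check parity bits.
    The marker's actual validation scheme may differ from this sketch."""
    check = [0] * n_check
    for i, bit in enumerate(data_bits):
        check[i % n_check] ^= bit
    return check

def validate(data_bits, stored_check):
    """Recompute the checksum and compare it with the stored validation bits."""
    return checksum_bits(data_bits, len(stored_check)) == stored_check
```

Any single-bit corruption in the coding region flips exactly one parity bit, so it is always caught by a scheme of this kind.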
3.2. Marker Encoding
Algorithm 1: Increment Code (Ensure: the input incremented).
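The full listing of Algorithm 1 is not reproduced above; a minimal sketch of a bit-vector increment with carry propagation — which is what "the input incremented" suggests — might look like this (the paper's exact procedure may differ):

```python
def increment_code(bits):
    """Increment a big-endian bit vector by one.
    The carry propagates from the least significant (rightmost) bit;
    on overflow the code wraps around to all zeros."""
    out = list(bits)
    for i in range(len(out) - 1, -1, -1):
        if out[i] == 0:
            out[i] = 1   # absorb the carry and stop
            return out
        out[i] = 0       # 1 + 1: write 0 and carry onward
    return out           # all bits were 1: wrapped to all zeros
```

For example, `increment_code([0, 0, 1, 1])` yields `[0, 1, 0, 0]`, matching ordinary binary counting.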
3.2.1. Tape Generator
3.2.2. Multiscale Functionality
- Detect the horizontal lines in the tape, extracting only the most representative line composing the tape.
- Rotate the image so that the line detected in the previous step is parallel to the abscissa axis, applying the corresponding angular adjustment, and crop the image longitudinally.
- Apply grayscale, contrast, smoothing, and threshold filters.
- For the horizontal line of each image, verify the presence of the patterns that mark the beginning and end of the TM. When both patterns are detected, convert the line into a bit string.
- Simplify the bit string into unit values.
- Store the simplified bits in an array for voting.
- Apply voting to the detected bit sequences, selecting the sequence with the highest frequency. If its confidence is greater than or equal to 70%, return that string; otherwise, return null.
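The voting step above can be sketched as follows; `vote` is a hypothetical helper name, and the 70% confidence threshold is the one stated in the list:

```python
from collections import Counter

def vote(bit_strings, threshold=0.70):
    """Majority vote over the candidate bit strings decoded from successive
    scan lines. Returns the most frequent sequence only when its share of
    the votes reaches the confidence threshold; otherwise returns None."""
    if not bit_strings:
        return None
    counts = Counter(bit_strings)
    winner, n = counts.most_common(1)[0]
    if n / len(bit_strings) >= threshold:
        return winner
    return None
```

With eight votes for `"0110"` and two for `"0111"`, the confidence is 0.8 and `"0110"` is returned; with a three-way tie, no sequence reaches 70% and the result is `None`.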
- Apply preprocessing filters: grayscale conversion, contrast enhancement, and thresholding.
- Extract the square contours of the Finder Patterns that delimit the region of each marker on the tape, along with the points corresponding to each marker region. If none are found, the process flow returns to the beginning.
- Correct the image perspective to recover the actual aspect ratio of the marker.
- Read the grid pixels of the marker's coding region, dividing the image into cells. Each cell whose pixels exceed 127 is assigned the value 1 in the bit matrix; smaller values are assigned 0.
- Validate the pixels that represent the checksum bits.
- Convert the resulting bits into a bit vector, keeping only the bits in the coding region. The process repeats until the algorithm extracts the last pair of Finder Patterns.
- Return the vector of detected bits.
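The grid-reading step (value 1 for pixels above 127, 0 otherwise) can be sketched with NumPy; `grid_to_bits` is a hypothetical helper name, and the paper performs this sampling on the perspective-corrected image via OpenCV:

```python
import numpy as np

def grid_to_bits(gray, rows, cols):
    """Sample the perspective-corrected coding region into a bit matrix:
    split the grayscale image into rows x cols cells and assign 1 when a
    cell's mean intensity exceeds 127, else 0 (a sketch of the reading
    step described above)."""
    h, w = gray.shape
    bits = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            bits[r, c] = 1 if cell.mean() > 127 else 0
    return bits
```

Averaging over each cell rather than reading a single pixel makes the sampling more tolerant of noise and slight grid misalignment.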
3.2.3. Mobile Application
4. Experiments and Results
4.1. Simulation in a 3D Environment
4.2. Mobile Device Testing
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kunhoth, J.; Karkar, A.; Al-Maadeed, S.; Al-Ali, A. Indoor positioning and wayfinding systems: A survey. Hum.-Centric Comput. Inf. Sci. 2020, 10, 1–41. [Google Scholar] [CrossRef]
- Zhuang, Y.; Zhang, C.; Huai, J.; Li, Y.; Chen, L.; Chen, R. Bluetooth localization technology: Principles, applications, and future trends. IEEE Internet Things J. 2022, 9, 23506–23524. [Google Scholar] [CrossRef]
- Hu, X.; Cheng, L.; Zhang, G. A Zigbee-based localization algorithm for indoor environments. In Proceedings of the 2011 International Conference on Computer Science and Network Technology, Harbin, China, 24–26 December 2011; Volume 3, pp. 1776–1781. [Google Scholar]
- Simões, W.C.; Machado, G.S.; Sales, A.M.; de Lucena, M.M.; Jazdi, N.; de Lucena, V.F., Jr. A review of technologies and techniques for indoor navigation systems for the visually impaired. Sensors 2020, 20, 3935. [Google Scholar] [CrossRef]
- Yang, M.; Sun, X.; Jia, F.; Rushworth, A.; Dong, X.; Zhang, S.; Fang, Z.; Yang, G.; Liu, B. Sensors and sensor fusion methodologies for indoor odometry: A review. Polymers 2022, 14, 2019. [Google Scholar] [CrossRef]
- Forghani, M.; Karimipour, F.; Claramunt, C. From cellular positioning data to trajectories: Steps towards a more accurate mobility exploration. Transp. Res. Part C Emerg. Technol. 2020, 117, 102666. [Google Scholar] [CrossRef]
- Mustafa, T.; Varol, A. Review of the internet of things for healthcare monitoring. In Proceedings of the 2020 8th International Symposium on Digital Forensics and Security (ISDFS), Beirut, Lebanon, 1–2 June 2020; pp. 1–6. [Google Scholar]
- Leo, M.; Carcagnì, P.; Mazzeo, P.L.; Spagnolo, P.; Cazzato, D.; Distante, C. Analysis of facial information for healthcare applications: A survey on computer vision-based approaches. Information 2020, 11, 128. [Google Scholar] [CrossRef]
- Yang, S.; Ma, L.; Jia, S.; Qin, D. An improved vision-based indoor positioning method. IEEE Access 2020, 8, 26941–26949. [Google Scholar] [CrossRef]
- Li, Q.; Zhu, J.; Liu, T.; Garibaldi, J.; Li, Q.; Qiu, G. Visual landmark sequence-based indoor localization. In Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery, Los Angeles, CA, USA, 7–10 November 2017; pp. 14–23. [Google Scholar]
- Fiala, M. ARTag, a fiducial marker system using digital techniques. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 590–596. [Google Scholar]
- Zhang, Z.; Hu, Y.; Yu, G.; Dai, J. DeepTag: A general framework for fiducial marker design and detection. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2931–2944. [Google Scholar] [CrossRef]
- Martins, M.S.; Neto, B.S.; Serejo, G.L.; Santos, C.G. Tape-Shaped Multiscale Fiducial Marker: A Design Prototype for Indoor Localization. Int. J. Electron. Commun. Eng. 2024, 18, 69–76. [Google Scholar]
- Muñoz-Salinas, R.; Marín-Jimenez, M.J.; Yeguas-Bolivar, E.; Medina-Carnicer, R. Mapping and localization from planar markers. Pattern Recognit. 2018, 73, 158–171. [Google Scholar] [CrossRef]
- Kalaitzakis, M.; Cain, B.; Carroll, S.; Ambrosi, A.; Whitehead, C.; Vitzilaios, N. Fiducial markers for pose estimation. J. Intell. Robot. Syst. 2021, 101, 71. [Google Scholar] [CrossRef]
- Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar]
- Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar] [CrossRef]
- Benligiray, B.; Topal, C.; Akinlar, C. STag: A stable fiducial marker system. Image Vis. Comput. 2019, 89, 158–169. [Google Scholar] [CrossRef]
- Wu, Y.; Tang, F.; Li, H. Image-based camera localization: An overview. Vis. Comput. Ind. Biomed. Art 2018, 1, 8. [Google Scholar] [CrossRef]
- Romero-Ramirez, F.J.; Muñoz-Salinas, R.; Marín-Jiménez, M.J.; Cazorla, M.; Medina-Carnicer, R. sSLAM: Speeded-Up Visual SLAM Mixing Artificial Markers and Temporary Keypoints. Sensors 2023, 23, 2210. [Google Scholar] [CrossRef]
- Jurado-Rodríguez, D.; Muñoz-Salinas, R.; Garrido-Jurado, S.; Medina-Carnicer, R. Design, Detection, and Tracking of Customized Fiducial Markers. IEEE Access 2021, 9, 140066–140078. [Google Scholar] [CrossRef]
- Toyoura, M.; Aruga, H.; Turk, M.; Mao, X. Detecting markers in blurred and defocused images. In Proceedings of the 2013 International Conference on Cyberworlds, Yokohama, Japan, 21–23 October 2013; pp. 183–190. [Google Scholar]
- Bencina, R.; Kaltenbrunner, M.; Jorda, S. Improved topological fiducial tracking in the reactivision system. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)-Workshops, San Diego, CA, USA, 21–23 September 2005; p. 99. [Google Scholar]
- Costanza, E.; Robinson, J. A Region Adjacency Tree Approach to the Detection and Design of Fiducials. 2003. Available online: http://eprints.soton.ac.uk/id/eprint/270958 (accessed on 18 April 2024).
- Wang, H.; Shi, Z.; Lu, G.; Zhong, Y. Hierarchical fiducial marker design for pose estimation in large-scale scenarios. J. Field Robot. 2018, 35, 835–849. [Google Scholar] [CrossRef]
- Fiala, M. Comparing ARTag and ARToolkit Plus fiducial marker systems. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, ON, Canada, 1 October 2005; p. 6. [Google Scholar]
- Sattar, J.; Bourque, E.; Giguere, P.; Dudek, G. Fourier tags: Smoothly degradable fiducial markers for use in human-robot interaction. In Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (CRV’07), Montreal, QC, Canada, 28–30 May 2007; pp. 165–174. [Google Scholar]
- Schweiger, F.; Zeisl, B.; Georgel, P.F.; Schroth, G.; Steinbach, E.G.; Navab, N. Maximum Detector Response Markers for SIFT and SURF. In Proceedings of the International Symposium on Vision, Modeling, and Visualization, Braunschweig, Germany, 16–18 November 2009; Volume 10, pp. 145–154. [Google Scholar]
- Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 4193–4198. [Google Scholar]
- Krogius, M.; Haggenmiller, A.; Olson, E. Flexible layouts for fiducial tags. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1898–1903. [Google Scholar]
- Mateos, L.A. Apriltags 3d: Dynamic fiducial markers for robust pose estimation in highly reflective environments and indirect communication in swarm robotics. arXiv 2020, arXiv:2001.08622. [Google Scholar]
- Reuter, A.; Seidel, H.P.; Ihrke, I. BlurTags: Spatially varying PSF estimation with out-of-focus patterns. In Proceedings of the 20th International Conference on Computer Graphics, Visualization and Computer Vision 2012, WSCG’2012, Plzen, Czech Republic, 25–28 June 2012; pp. 239–247. [Google Scholar]
- Klokmose, C.N.; Kristensen, J.B.; Bagge, R.; Halskov, K. BullsEye: High-precision fiducial tracking for table-based tangible interaction. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, Dresden, Germany, 16–19 November 2014; pp. 269–278. [Google Scholar]
- Rice, A.C.; Beresford, A.R.; Harle, R.K. Cantag: An open source software toolkit for designing and deploying marker-based vision systems. In Proceedings of the Fourth Annual IEEE International Conference on Pervasive Computing and Communications (PERCOM’06), Pisa, Italy, 13–17 March 2006; p. 10. [Google Scholar]
- Calvet, L.; Gurdjos, P.; Griwodz, C.; Gasparini, S. Detection and accurate localization of circular fiducials under highly challenging conditions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 562–570. [Google Scholar]
- Romero-Ramirez, F.J.; Muñoz-Salinas, R.; Medina-Carnicer, R. Speeded up detection of squared fiducial markers. Image Vis. Comput. 2018, 76, 38–47. [Google Scholar] [CrossRef]
- DeGol, J.; Bretl, T.; Hoiem, D. ChromaTag: A colored marker and fast detection algorithm. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1472–1481. [Google Scholar]
- Claus, D.; Fitzgibbon, A.W. Reliable fiducial detection in natural scenes. In Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; pp. 469–480. [Google Scholar]
- Liu, J.; Chen, S.; Sun, H.; Qin, Y.; Wang, X. Real time tracking method by using color markers. In Proceedings of the 2013 International Conference on Virtual Reality and Visualization, Xi’an, China, 14–15 September 2013; pp. 106–111. [Google Scholar]
- Gatrell, L.B.; Hoff, W.A.; Sklair, C.W. Robust image features: Concentric contrasting circles and their image extraction. In Proceedings of the Cooperative Intelligent Robotics in Space II, Boston, MA, USA, 1 March 1992; Volume 1612, pp. 235–244. [Google Scholar]
- O’Gorman, L.; Bruckstein, A.M.; Bose, C.B.; Amir, I. Subpixel registration using a concentric ring fiducial. In Proceedings of the 10th International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990; Volume 2, pp. 249–253. [Google Scholar]
- Li, Y.; Chen, Y.; Lu, R.; Ma, D.; Li, Q. A novel marker system in augmented reality. In Proceedings of the 2012 2nd International Conference on Computer Science and Network Technology, Changchun, China, 29–31 December 2012; pp. 1413–1417. [Google Scholar]
- Rekimoto, J.; Ayatsuka, Y. CyberCode: Designing augmented reality environments with visual tags. In Proceedings of the DARE 2000 on Designing Augmented Reality Environments, Elsinore, Denmark, 12–14 April 2000; pp. 1–10. [Google Scholar]
- Peace, J.B.; Psota, E.; Liu, Y.; Pérez, L.C. E2etag: An end-to-end trainable method for generating and detecting fiducial markers. arXiv 2021, arXiv:2105.14184. [Google Scholar]
- Farkas, Z.V.; Korondi, P.; Illy, D.; Fodor, L. Aesthetic marker design for home robot localization. In Proceedings of the IECON 2012—38th Annual Conference on IEEE Industrial Electronics Society, Montreal, QC, Canada, 25–28 October 2012; pp. 5510–5515. [Google Scholar]
- Romero-Ramirez, F.J.; Munoz-Salinas, R.; Medina-Carnicer, R. Fractal Markers: A new approach for long-range marker pose estimation under occlusion. IEEE Access 2019, 7, 169908–169919. [Google Scholar] [CrossRef]
- Elbrechter, C.; Haschke, R.; Ritter, H. Bi-manual robotic paper manipulation based on real-time marker tracking and physical modelling. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1427–1432. [Google Scholar]
- Wang, B. LFTag: A scalable visual fiducial system with low spatial frequency. In Proceedings of the 2020 2nd International Conference on Advances in Computer Technology, Information Science and Communications (CTISC), Suzhou, China, 20–22 March 2020; pp. 140–147. [Google Scholar]
- Kim, G.; Petriu, E.M. Fiducial marker indoor localization with artificial neural network. In Proceedings of the 2010 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Montreal, QC, Canada, 6–9 July 2010; pp. 961–966. [Google Scholar]
- Cruz-Hernández, H.; de la Fraga, L.G. A fiducial tag invariant to rotation, translation, and perspective transformations. Pattern Recognit. 2018, 81, 213–223. [Google Scholar] [CrossRef]
- Bergamasco, F.; Albarelli, A.; Torsello, A. Pi-tag: A fast image-space marker design based on projective invariants. Mach. Vis. Appl. 2013, 24, 1295–1310. [Google Scholar] [CrossRef]
- Prasad, M.G.; Chandran, S.; Brown, M.S. A motion blur resilient fiducial for quadcopter imaging. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 254–261. [Google Scholar]
- Bergamasco, F.; Albarelli, A.; Rodola, E.; Torsello, A. Rune-tag: A high accuracy fiducial marker with strong occlusion resilience. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 113–120. [Google Scholar]
- Getschmann, C.; Echtler, F. Seedmarkers: Embeddable Markers for Physical Objects. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, Salzburg, Austria, 14–17 February 2021; pp. 1–11. [Google Scholar]
- Kabuka, M.; Arenas, A. Position verification of a mobile robot using standard pattern. IEEE J. Robot. Autom. 1987, 3, 505–516. [Google Scholar] [CrossRef]
- Bondy, M.; Krishnasamy, R.; Crymble, D.; Jasiobedzki, P. Space vision marker system (SVMS). In Proceedings of the AIAA SPACE 2007 Conference & Exposition, Long Beach, CA, USA, 18–20 September 2007; p. 6185. [Google Scholar]
- Košťák, M.; Slabý, A. Designing a Simple Fiducial Marker for Localization in Spatial Scenes Using Neural Networks. Sensors 2021, 21, 5407. [Google Scholar] [CrossRef]
- Yu, G.; Hu, Y.; Dai, J. Topotag: A robust and scalable topological fiducial marker system. IEEE Trans. Vis. Comput. Graph. 2020, 27, 3769–3780. [Google Scholar] [CrossRef]
- López de Ipiña, D.; Mendonça, P.R.S.; Hopper, A. TRIP: A low-cost vision-based location system for ubiquitous computing. Pers. Ubiquitous Comput. 2002, 6, 206–219. [Google Scholar] [CrossRef]
- Lightbody, P.; Krajník, T.; Hanheide, M. An efficient visual fiducial localisation system. ACM SIGAPP Appl. Comput. Rev. 2017, 17, 28–37. [Google Scholar] [CrossRef]
- Birdal, T.; Dobryden, I.; Ilic, S. X-tag: A fiducial tag for flexible and accurate bundle adjustment. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 556–564. [Google Scholar]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bergamasco, F.; Albarelli, A.; Cosmo, L.; Rodola, E.; Torsello, A. An accurate and robust artificial marker based on cyclic codes. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2359–2373. [Google Scholar] [CrossRef] [PubMed]
- ISO/IEC 18004:2006 Information Technology—Automatic Identification and Data Capture Techniques—QR Code 2005 Bar Code Symbology Specification. 2006. Available online: https://www.sis.se/api/document/preview/911067 (accessed on 8 April 2024).
- Gollapudi, S. OpenCV with Python. In Learn Computer Vision Using OpenCV: With Deep Learning CNNs and RNNs; Apress: Berkeley, CA, USA, 2019; pp. 31–50. [Google Scholar]
- Leavers, V.F. Shape Detection in Computer Vision Using the Hough Transform; Springer: London, UK, 1992; Volume 1. [Google Scholar]
- Byrne, D. Full Stack Python Security: Cryptography, TLS, and Attack Resistance; Manning: Shelter Island, NY, USA, 2021. [Google Scholar]
- Barua, T.; Doshi, R.; Hiran, K.K. Mobile Applications Development: With Python in Kivy Framework; Walter de Gruyter GmbH & Co. KG: Berlin, Germany, 2020. [Google Scholar]
- OpenCV.org. Open Source Computer Vision Library. 2024. Available online: https://opencv.org/ (accessed on 5 May 2024).
Decimal | WBC | Proposed Code | Ink Level
---|---|---|---
0 | 0000 | 0000 | less
1 | 0001 | 0001 | less
2 | 0010 | 0010 | less
3 | 0011 | 0100 | less
4 | 0100 | 1000 | less
5 | 0101 | 0011 | cannot be used
6 | 0110 | 0101 | cannot be used
7 | 0111 | 1001 | cannot be used
8 | 1000 | 0110 | cannot be used
9 | 1001 | 1010 | cannot be used
10 | 1010 | 1100 | cannot be used
11 | 1011 | 0111 | more
12 | 1100 | 1011 | more
13 | 1101 | 1101 | more
14 | 1110 | 1110 | more
15 | 1111 | 1111 | more
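Reading the table above, the proposed codes appear to be grouped by the number of dark (1) blocks, i.e., by ink usage. A small sketch of that grouping, with the thresholds inferred from the table rows (the labels are the table's own):

```python
def ink_level(code):
    """Classify a 4-bit proposed code by ink usage (number of dark blocks),
    as inferred from the encoding table: at most one dark block -> 'less',
    exactly two -> 'cannot be used', three or more -> 'more'."""
    ones = code.count("1")
    if ones <= 1:
        return "less"
    if ones == 2:
        return "cannot be used"
    return "more"
```

For instance, `ink_level("1000")` gives `"less"` and `ink_level("0111")` gives `"more"`, matching the rows for decimals 4 and 11.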
Marker | Condition A AVG | Condition A MED | Condition B AVG | Condition B MED | Condition C AVG | Condition C MED
---|---|---|---|---|---|---
Tape-shaped | 1458.07 | 375.00 | 785.00 | 406.25 | 8242.53 | 3484.38
ArUco | 43.88 | 15.63 | 91.57 | 46.88 | 91.57 | 46.88
QRCode | 2562.5 | 2562.5 | 91.57 | 46.88 | - | -
STag | 91.57 | 46.88 | 67.97 | 31.25 | 110.58 | 31.25
Neto, B.S.R.; Araújo, T.D.O.; Meiguins, B.S.; Santos, C.G.R. Tape-Shaped, Multiscale, and Continuous-Readable Fiducial Marker for Indoor Navigation and Localization Systems. Sensors 2024, 24, 4605. https://doi.org/10.3390/s24144605