Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Abstract
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications, because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it is pointed at an object? In this paper, a visual positioning solution is developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the camera-based solution across the five smartphones is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm.
Wu, D.; Chen, R.; Chen, L. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras. Sensors 2017, 17, 2645.