Special Issue "Indoor LiDAR/Vision Systems"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors, Control, and Telemetry".

Deadline for manuscript submissions: closed (31 December 2017)

Special Issue Editors

Guest Editor
Prof. Dr. Zhizhong Kang

Department of Remote Sensing and Geo-Information Engineering, School of Land Science and Technology, China University of Geosciences in Beijing, Xueyuan Road 29, Haidian District, Beijing 100083, China
Interests: digital photogrammetry and computer vision; processing of indoor, terrestrial and airborne LiDAR data; indoor 3D modeling
Guest Editor
Prof. Dr. Jonathan Li

Department of Geography and Environmental Management and the Department of Systems Design Engineering (cross-appointed), University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
Phone: 6479686898
Interests: mobile laser scanners; multispectral LiDAR; LiDAR data processing; LiDAR backpack; LiDAR modeling; indoor mapping
Guest Editor
Prof. Dr. Cheng Wang

Department of Computer Science, School of Informatics, Xiamen University, 422 Siming Road South, Xiamen, Fujian 361005, China
Interests: mobile mapping systems; remote sensing; digital photogrammetry; LiDAR; artificial intelligence

Special Issue Information

Dear Colleagues,

With the rising urban population and the increasing complexity of cities as conglomerates of enclosed spaces, there is a growing demand for indoor navigation, positioning, mapping and modeling. However, the systems in common outdoor use, e.g., GNSS and airborne, vehicle-based and terrestrial LiDAR, become unavailable or expensive in indoor environments. Therefore, compact and low-cost sensors such as 2D LiDAR, RGB-D cameras and vision systems are playing important roles in 3D indoor spatial applications. The evolution of geo-spatial sensors from outdoor environments to indoor spaces requires new sensor models, methods, algorithms and techniques for multi-sensor integration and data fusion.

The aim of this Special Issue is to present current and state-of-the-art research in the development of indoor LiDAR/vision systems related to multi-sensor integration and data fusion. This Special Issue invites contributions on the following topics, but is not limited to them:

  • LiDAR/vision sensor calibration
  • Multi-sensor fusion for indoor mapping
  • Indoor sensing solutions with low-cost sensors in mobile devices
  • Low-cost sensor integration and fusion for indoor positioning and navigation
  • Quality control and evaluation of indoor LiDAR/vision systems
  • SLAM methods for indoor LiDAR/vision systems

Prof. Dr. Zhizhong Kang
Prof. Dr. Jonathan Li
Prof. Dr. Cheng Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR/vision system
  • RGB-D camera
  • multi-sensor
  • sensor calibration
  • multi-sensor fusion
  • data quality control/evaluation
  • indoor positioning and navigation
  • indoor mapping

Published Papers (11 papers)


Research

Open Access Article
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis
Sensors 2018, 18(6), 1838; https://doi.org/10.3390/s18061838
Received: 27 April 2018 / Revised: 30 May 2018 / Accepted: 1 June 2018 / Published: 5 June 2018
Cited by 3
Abstract
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyzes the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels, which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
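
The scanline analysis itself is the paper's contribution and is not reproduced here; the sketch below only illustrates the underlying idea of flagging candidate openings as range discontinuities along a single scanline. The function name, thresholds and toy data are hypothetical.

    import numpy as np

    def candidate_openings(scan_ranges, jump_thresh=0.5, min_width=3):
        """Flag index spans in one scanline where the range jumps sharply and
        stays far, as a crude proxy for openings such as doors and windows."""
        jumps = np.abs(np.diff(scan_ranges)) > jump_thresh
        edges = np.flatnonzero(jumps)              # indices where a discontinuity occurs
        spans = []
        for start, end in zip(edges[:-1], edges[1:]):
            span = scan_ranges[start + 1:end + 1]
            if end - start >= min_width and span.mean() > scan_ranges[start] + jump_thresh:
                spans.append((start + 1, end))     # far points between two discontinuities
        return spans

    # toy scanline: a wall at ~2 m with an opening seen at ~5 m
    scan = np.r_[np.full(20, 2.0), np.full(8, 5.0), np.full(20, 2.0)]
    print(candidate_openings(scan))                # -> [(20, 27)]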

Open Access Article
Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM
Sensors 2018, 18(3), 766; https://doi.org/10.3390/s18030766
Received: 3 February 2018 / Revised: 27 February 2018 / Accepted: 1 March 2018 / Published: 2 March 2018
Cited by 5
Abstract
This paper presents a wearable prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR, acquiring points with 16 rays for a simple, low-density 3D representation of reality. With this, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the presented system is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, Faro Focus3D X330. The first is taken as a relative reference with respect to other mobile systems and is chosen because it uses the same mapping principle: SLAM techniques based on the Robot Operating System (ROS). The second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little additional error introduced by the mapping algorithm.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)

Open Access Article
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Sensors 2018, 18(2), 506; https://doi.org/10.3390/s18020506
Received: 30 November 2017 / Revised: 31 January 2018 / Accepted: 3 February 2018 / Published: 8 February 2018
Cited by 4
Abstract
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability of optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, performance relies heavily on the accuracy of the initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper proposes a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, which is difficult to distinguish from gravity under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear, tightly coupled visual–inertial SLAM system. We have tested our approaches on the public EuRoC dataset. Experimental results show that the proposed methods achieve good initial state estimation, the gravity refinement approach efficiently speeds up the convergence of the estimated gravity vector, and the termination criterion performs well.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
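
The two-degree-of-freedom gravity refinement described above can be illustrated with a minimal sketch: the gravity vector is constrained to a sphere of known magnitude, and only a 2D correction in its tangent plane is applied before re-projecting onto the sphere. The basis construction and the numbers below are illustrative assumptions, and the least-squares solve that would produce the correction is omitted.

    import numpy as np

    G_MAG = 9.81  # known gravity magnitude (m/s^2)

    def tangent_basis(g_hat):
        """Two unit vectors spanning the plane orthogonal to the unit vector g_hat."""
        helper = np.array([1.0, 0.0, 0.0])
        if abs(g_hat @ helper) > 0.9:          # avoid a nearly parallel helper axis
            helper = np.array([0.0, 1.0, 0.0])
        b1 = np.cross(g_hat, helper)
        b1 /= np.linalg.norm(b1)
        b2 = np.cross(g_hat, b1)
        return b1, b2

    def refine_gravity(g_est, w):
        """Apply a 2D tangent-plane correction w = (w1, w2) to the current gravity
        estimate and re-project the result onto the sphere of radius G_MAG."""
        g_hat = g_est / np.linalg.norm(g_est)
        b1, b2 = tangent_basis(g_hat)
        g_new = G_MAG * g_hat + w[0] * b1 + w[1] * b2
        return G_MAG * g_new / np.linalg.norm(g_new)

    # toy usage: a rough initial estimate nudged by a small tangent-plane correction
    g0 = np.array([0.3, -0.2, -9.7])
    print(refine_gravity(g0, np.array([0.05, -0.02])))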

Open Access Article
Study of the Integration of the CNU-TS-1 Mobile Tunnel Monitoring System
Sensors 2018, 18(2), 420; https://doi.org/10.3390/s18020420
Received: 30 December 2017 / Revised: 26 January 2018 / Accepted: 28 January 2018 / Published: 1 February 2018
Abstract
A rapid, precise and automated means for the regular inspection and maintenance of a large number of tunnels is needed. Based on an in-depth study of tunnel monitoring methods, the CNU-TS-1 mobile tunnel monitoring system (TS1) is developed and presented. It can efficiently obtain cross-sections that are orthogonal to the tunnel in a dynamic way, and it eliminates the control measurements that depend on design data. By using odometers to locate the cross-sections and correcting the data based on the longitudinal joints of the tunnel segment lining, the cost of the system has been significantly reduced, and the interval between adjacent cross-sections can reach 1–2 cm when the system is pushed at a normal walking speed. Meanwhile, the relative deformation of the tunnel can be analyzed by selecting cross-sections from the original data. Through measurements of an actual tunnel, the applicability of the system for tunnel deformation detection is verified, and the system is shown to be 15 times more efficient than the total station. A simulated tunnel deformation experiment indicates that the measurement accuracy of TS1 for cross-sections is 1.1 mm. Compared with the traditional method, TS1 improves efficiency and increases the density of the obtained points.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)

Open Access Article
A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion
Sensors 2018, 18(1), 171; https://doi.org/10.3390/s18010171
Received: 6 December 2017 / Revised: 5 January 2018 / Accepted: 5 January 2018 / Published: 10 January 2018
Cited by 2
Abstract
In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low-luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization; instead, the interior space structure from an image was used to assess the robot orientation. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted under a low illumination condition of 0.1 lx and in a carpeted environment. The robot moved 20 times along a 1.5 × 2.0 m square trajectory. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
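
The image pre-processing step mentioned in the abstract (histogram equalization followed by a rolling guidance filter to strengthen structural boundaries in very dark frames) can be sketched with OpenCV, assuming the contrib ximgproc module is available; the filter parameters and the synthetic frame are illustrative only, not the authors' settings.

    import cv2
    import numpy as np

    def enhance_dark_frame(gray):
        """Boost the contrast of a low-luminance grayscale frame, then smooth fine
        texture while preserving large structural edges (walls, door frames)."""
        equalized = cv2.equalizeHist(gray)
        # rolling guidance filter, provided by opencv-contrib (cv2.ximgproc)
        return cv2.ximgproc.rollingGuidanceFilter(
            equalized, d=9, sigmaColor=25, sigmaSpace=9, numOfIter=4)

    # toy usage on a synthetic very dark frame
    frame = (np.random.rand(120, 160) * 25).astype(np.uint8)
    enhanced = enhance_dark_frame(frame)
    print(enhanced.shape, enhanced.dtype)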

Open Access Article
Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras
Sensors 2017, 17(11), 2641; https://doi.org/10.3390/s17112641
Received: 2 October 2017 / Revised: 9 November 2017 / Accepted: 9 November 2017 / Published: 16 November 2017
Cited by 2
Abstract
This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images (grabbed by the user by means of a chest-mounted camera) and provide as output a list of objects that likely exist in the user's context across the indoor scene. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments point out that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, with respect to reference works, our method (i) yields higher classification accuracies and (ii) runs at least four times faster, which enables a potential full real-time application.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
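
As a rough illustration of the fusion strategy (per-feature autoencoders whose learned codes are concatenated and passed to a multilabel classifier), here is a minimal PyTorch/scikit-learn sketch. The layer sizes, training loop, toy data and the choice of a logistic-regression multilabel classifier are assumptions, not the paper's configuration.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    class TinyAE(nn.Module):
        """One-hidden-layer autoencoder; the encoder output is the learned code."""
        def __init__(self, dim_in, dim_code):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim_in, dim_code), nn.ReLU())
            self.dec = nn.Linear(dim_code, dim_in)

        def forward(self, x):
            code = self.enc(x)
            return code, self.dec(code)

    def learn_codes(block, dim_code=16, epochs=50):
        """Train an autoencoder on one feature block and return its codes."""
        x = torch.tensor(block, dtype=torch.float32)
        ae = TinyAE(x.shape[1], dim_code)
        opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            code, recon = ae(x)
            nn.functional.mse_loss(recon, x).backward()
            opt.step()
        return ae(x)[0].detach().numpy()

    # toy colour, texture and shape feature blocks plus multilabel object targets
    rng = np.random.default_rng(0)
    colour, texture, shape = rng.random((60, 48)), rng.random((60, 96)), rng.random((60, 32))
    labels = rng.integers(0, 2, size=(60, 5))      # presence/absence of 5 object classes

    fused = np.hstack([learn_codes(b) for b in (colour, texture, shape)])
    clf = MultiOutputClassifier(LogisticRegression(max_iter=500)).fit(fused, labels)
    print(clf.predict(fused[:3]))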

Open Access Article
Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices
Sensors 2017, 17(11), 2567; https://doi.org/10.3390/s17112567
Received: 23 August 2017 / Revised: 26 October 2017 / Accepted: 3 November 2017 / Published: 7 November 2017
Cited by 3
Abstract
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and as a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on a visual–inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when the different levels of the adaptive policy are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved by the proposed method.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
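
The adaptive execution module is described only at a high level above; a caricature of such a selector is sketched below, choosing the lighter optical-flow odometry while tracking is healthy and falling back to full visual–inertial odometry otherwise. The quality measures and thresholds are entirely assumed and are not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class FrameStats:
        tracked_ratio: float   # fraction of optical-flow tracks surviving this frame
        mean_parallax: float   # average pixel motion since the last keyframe

    def choose_tracker(stats: FrameStats,
                       min_tracked: float = 0.6,
                       max_parallax: float = 20.0) -> str:
        """Use the cheap optical-flow visual odometry while tracking is healthy and
        motion is gentle; otherwise fall back to full visual-inertial odometry."""
        if stats.tracked_ratio >= min_tracked and stats.mean_parallax <= max_parallax:
            return "fast_optical_flow_vo"
        return "full_visual_inertial_odometry"

    print(choose_tracker(FrameStats(tracked_ratio=0.8, mean_parallax=6.0)))
    print(choose_tracker(FrameStats(tracked_ratio=0.3, mean_parallax=35.0)))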

Open Access Article
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching
Sensors 2017, 17(9), 1939; https://doi.org/10.3390/s17091939
Received: 8 July 2017 / Revised: 12 August 2017 / Accepted: 14 August 2017 / Published: 23 August 2017
Abstract
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera from the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
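
The relation the abstract leans on, that for a camera translating along a line the disparity grows linearly with both inverse depth and the baseline to the reference view (d = f·b/Z), already suggests how relative camera offsets can be read off from dense disparities. The synthetic least-squares sketch below only illustrates that relation, not the authors' estimator.

    import numpy as np

    rng = np.random.default_rng(1)
    f = 700.0                                  # focal length in pixels (assumed)
    true_offsets = np.array([0.1, 0.2, 0.3])   # camera offsets along the line (m)
    inv_depth = 1.0 / rng.uniform(2.0, 8.0, size=5000)   # dense 1/Z samples

    # disparity of each pixel in view i w.r.t. the reference view:  d = f * b_i / Z
    disparities = f * np.outer(true_offsets, inv_depth)
    disparities += rng.normal(scale=0.2, size=disparities.shape)   # matching noise

    # each view's offset, relative to the first view, is the least-squares slope of
    # its disparities against that view's disparities
    ref = disparities[0]
    ratios = disparities @ ref / (ref @ ref)
    print(ratios * true_offsets[0])            # known first baseline fixes the scale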

Open Access Article
A Visual-Based Approach for Indoor Radio Map Construction Using Smartphones
Sensors 2017, 17(8), 1790; https://doi.org/10.3390/s17081790
Received: 1 June 2017 / Revised: 27 July 2017 / Accepted: 2 August 2017 / Published: 4 August 2017
Cited by 5
Abstract
Localization of users in indoor spaces is a common issue in many applications. Among the various technologies, Wi-Fi fingerprinting-based localization has attracted much attention, since it can be easily deployed using existing off-the-shelf mobile devices and wireless networks. However, the collection of the Wi-Fi radio map is quite labor-intensive, which limits its potential for large-scale application. In this paper, a visual-based approach is proposed for the construction of a radio map in anonymous indoor environments. This approach collects multi-sensor data, e.g., Wi-Fi signals, video frames and inertial readings, while people walk through indoor environments with smartphones in their hands. It then spatially recovers the trajectories of the people by using both visual and inertial information. Finally, it estimates the locations of fingerprints from the trajectories and constructs a Wi-Fi radio map. Experimental results show that the average location error of the fingerprints is about 0.53 m. A weighted k-nearest neighbor method is also used to evaluate the constructed radio map. The average localization error is about 3.2 m, indicating that the quality of the constructed radio map is at the same level as that of maps constructed by site surveying. However, this approach greatly reduces the human labor cost, which increases its potential for application to large indoor environments.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
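
The weighted k-nearest-neighbour evaluation mentioned above is the standard fingerprinting estimator, so it is easy to sketch; the RSSI values, coordinates and the inverse-distance weighting below are illustrative toy data and choices, not the paper's.

    import numpy as np

    def wknn_locate(query_rssi, map_rssi, map_xy, k=3, eps=1e-6):
        """Estimate a 2D position as the weighted mean of the k fingerprints whose
        RSSI vectors are closest (Euclidean) to the query, weighted by 1/distance."""
        d = np.linalg.norm(map_rssi - query_rssi, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + eps)
        return (map_xy[nearest] * w[:, None]).sum(axis=0) / w.sum()

    # toy radio map: four fingerprints, RSSI from three access points, 2D coordinates
    rssi_map = np.array([[-40.0, -70.0, -60.0],
                         [-55.0, -50.0, -65.0],
                         [-70.0, -45.0, -55.0],
                         [-60.0, -60.0, -40.0]])
    xy_map = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0], [4.0, 6.0]])

    print(wknn_locate(np.array([-50.0, -55.0, -62.0]), rssi_map, xy_map))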

Open Access Article
Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition
Sensors 2017, 17(7), 1569; https://doi.org/10.3390/s17071569
Received: 21 May 2017 / Revised: 20 June 2017 / Accepted: 26 June 2017 / Published: 4 July 2017
Abstract
In order to recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing histogram of oriented gradient (HOG) features, we find that the world through the eyes of a computer is indeed different from that seen by human eyes, which helps researchers see the reasons why a computer makes errors. Additionally, according to the visualization, we notice that HOG features capture rich texture information; however, a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing this background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, named HOG–PCA (HOGP), is built by deeply fusing these two features. Finally, the HOGP is compared with the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualized images of the HOGP feature are closer to what human eyes observe, and that HOGP is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison with the classic HOG feature.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
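
The HOG–PCA descriptor itself is the paper's contribution and is not reproduced here; the sketch below only shows the two ingredients being combined, HOG gradient/texture features from the grayscale image and principal components of the flattened colour information, using scikit-image and scikit-learn, with all dimensions chosen arbitrarily.

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog
    from sklearn.decomposition import PCA

    def hog_plus_colour_pca(images, n_colour_components=16):
        """Concatenate HOG features (gradients/texture) with PCA-compressed colour
        information for a batch of equally sized RGB images."""
        hog_feats = np.array([
            hog(rgb2gray(img), orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2)) for img in images])
        colour = np.array([img.reshape(-1) for img in images])   # flattened RGB values
        colour_pcs = PCA(n_components=n_colour_components).fit_transform(colour)
        return np.hstack([hog_feats, colour_pcs])

    # toy batch of 20 random 64x64 RGB images
    imgs = np.random.default_rng(2).random((20, 64, 64, 3))
    print(hog_plus_colour_pca(imgs).shape)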

Open Access Article
A New Calibration Method for Commercial RGB-D Sensors
Sensors 2017, 17(6), 1204; https://doi.org/10.3390/s17061204
Received: 1 March 2017 / Revised: 9 May 2017 / Accepted: 20 May 2017 / Published: 24 May 2017
Cited by 11
Abstract
Commercial RGB-D sensors such as Kinect and Structure Sensor have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. Compared with traditional calibration methods, the new model shows a significant improvement in depth precision for both near and far ranges.
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
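
One common way to realize a depth error model like the one this calibration targets is to fit a low-order polynomial correction of measured depth against reference depth; the sketch below does exactly that with synthetic data and only illustrates the idea, not the paper's structured-light-based model.

    import numpy as np

    rng = np.random.default_rng(3)

    # synthetic calibration data: reference depths (e.g., from a surveyed target)
    # and sensor depths with a distance-dependent bias plus measurement noise
    z_ref = rng.uniform(0.5, 4.0, size=400)                              # metres
    z_meas = z_ref + 0.01 * z_ref**2 + rng.normal(scale=0.005, size=z_ref.size)

    # fit a quadratic correction  z_ref ~ c2*z_meas^2 + c1*z_meas + c0
    correct = np.poly1d(np.polyfit(z_meas, z_ref, deg=2))

    before = np.abs(z_meas - z_ref).mean()
    after = np.abs(correct(z_meas) - z_ref).mean()
    print(f"mean |depth error| before: {before:.4f} m, after: {after:.4f} m")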
