Review

A Review of Mobile Mapping Systems: From Sensors to Applications

by Mostafa Elhashash 1,2, Hessah Albanwan 1,3 and Rongjun Qin 1,2,3,4,*
1 Geospatial Data Analytics Lab, The Ohio State University, Columbus, OH 43210, USA
2 Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
3 Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, OH 43210, USA
4 Translational Data Analytics Institute, The Ohio State University, Columbus, OH 43210, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(11), 4262; https://doi.org/10.3390/s22114262
Submission received: 28 April 2022 / Revised: 28 May 2022 / Accepted: 31 May 2022 / Published: 2 June 2022
(This article belongs to the Special Issue Feature Papers in the Remote Sensors Section 2022)

Abstract

Mobile mapping systems (MMSs) have gained increasing attention in the past few decades. MMSs have been widely used to provide valuable assets in different applications. This has been facilitated by the wide availability of low-cost sensors, advances in computational resources, the maturity of mapping algorithms, and the need for accurate and on-demand geographic information system (GIS) data and digital maps. Many MMSs combine hybrid sensors that complement each other to provide a more informative, robust, and stable solution. In this paper, we presented a comprehensive review of modern MMSs by focusing on: (1) describing the types of sensors and platforms, discussing their capabilities and limitations, and providing a comprehensive overview of recent MMS technologies available on the market; (2) highlighting the general workflow to process MMS data; (3) identifying different use cases of mobile mapping technology by reviewing some of the common applications; and (4) presenting a discussion on the benefits and challenges and sharing our views on potential research directions.

1. Introduction

The need for regularly updated and accurate geospatial data has grown exponentially in recent decades. Geospatial data serve as an important source for various applications, including, but not limited to: indoor and outdoor 3D modeling, generation of geographic information system (GIS) data, disaster response, high-definition (HD) maps, and autonomous vehicles. This data collection has been made possible through continuous advances in mobile mapping systems (MMSs). MMS refers to an integrated system of mapping sensors mounted on a moving platform to provide the positioning of the platform while collecting geospatial data [1]. A typical MMS platform uses light detection and ranging (LiDAR) and/or high-resolution cameras as its primary sensors to acquire data for objects/areas of interest, integrated with sensor suites for positioning and georeferencing, such as the global navigation satellite system (GNSS) and inertial measurement unit (IMU). To perform accurate georeferencing, traditional mobile mapping approaches require extensive post-processing, such as strip adjustment of point cloud scans or bundle adjustment (BA) of images using ground control points (GCPs), during which manual operations may be required to clean noisy data and unsynchronized observations. Recent trends of MMSs have aimed to perform direct georeferencing and to leverage the capabilities of a multi-sensor platform [2,3] in order to minimize human interventions during data collection and processing. Automation has been further strengthened by the use of machine learning/artificial intelligence to perform online/offline object extraction and mapping, such as traffic light and road sign extraction [4,5,6].
Mobile mapping technology has undergone significant development in the past few decades, with algorithmic advances in photogrammetry, computer vision, and robotics [7]. In addition, increased processing power and storage capacity have further increased collection speed and data volume [8]. The applications and systems have been further strengthened by the availability of a diverse set of low-cost survey sensors with various specifications, making mobile mapping more flexible and able to acquire data in complex environments (e.g., tunnels, caves, and enclosed spaces) with lower cost and labor expenditures [9]. Typically, commercial MMSs can be classified (based on their hosting platforms) into handheld, backpack, trolley, and vehicle-based. Some platforms are designed to work indoors without relying on GNSS, while others can work indoors and outdoors. Mobile mapping technology gained more attention when it was adopted by companies such as Google and Apple [10,11] for various applications, including navigation and virtual/augmented reality [12].
An example of a vehicle-based MMS (Leica Pegasus: Two Ultimate [13]) and its typical sensor suites is shown in Figure 1. It consists of both data acquisition sensors and positioning sensors. The data acquisition sensors primarily consist of a calibrated LiDAR and digital camera suite. The LiDAR sensor is capable of producing 1,000,000 points/second and the camera suite captures a 360° horizontal field of view (FoV) to provide both texture/color information and stereo measurements, if needed. The positioning sensors include a GNSS receiver that provides the global positional information, with an additional IMU and distance measuring instrument (DMI) that obtains odometry information for integrated position correction. These positioning systems are required to be calibrated in their relative positions and play a vital role in the generation of globally-consistent point clouds.
Although several mobile mapping technologies exist on the market, the technology landscape of MMS is highly disparate. There is no single, standard MMS that is widely used in the mapping community. Most of the existing MMSs are customized using different sensor suites at different grades of integration. As such, each has its pros and cons. Previous studies have largely focused on comparative studies among certain devices [1,14,15,16,17,18,19] or targeted systems for specific application scenarios (e.g., indoors or outdoors) [20,21,22]. Due to the rapid development of imaging, LiDAR, positioning sensors, and onboard computers, the updated capabilities of these essential components of an MMS may not be fully reflected through a few integrated systems, rendering such studies less informative. There is a general dearth of studies covering the comprehensive landscape of sensor suites and respective MMSs. In this paper, we focused on providing a meta-review of sensors and platforms tasked with ground-level 3D mapping, as well as the techniques needed to integrate these sensors as suites for MMSs in different application scenarios. This review was intended to provide an update on sensors, MMSs with different hosting platforms, and the extended applications of MMSs. The aim was to serve not only researchers in the field of mobile mapping, by providing updated background information, but also practitioners, by highlighting the critical factors of concern when customizing an MMS for specific applications. We highlighted the main steps from data acquisition to refinement and discussed some of the most common challenges and considerations of MMS.

2. Paper Scope and Organization

This paper was intended to provide a comprehensive review of MMS technology, including a thorough discussion, covering mobile mapping from sensors and software to their applications. We thoroughly discussed the different types of sensors, their practical capabilities, and their limitations, as well as methods of fusing sensory data. We then described the main platforms that are currently used for mapping tasks in different application scenarios (e.g., indoor and outdoor applications). In addition, we discussed the main stages of processing MMS data, including preprocessing, calibration, and refinement. In order to assess the benefits of an MMS in practice, we examined a few of the most important applications that widely use mobile mapping technology. Finally, for the benefit of future work, we highlighted the main considerations and challenges in an open discussion.
The rest of this paper has been organized as follows: Section 3 provides a detailed review of essential positioning and data collection sensors in MMSs; Section 4 presents the different MMS hosting platforms, based on their application scenarios (i.e., vehicle-mounted systems, handheld, wearable, and trolley-based); Section 5 presents the workflow to process MMS data, from acquisition to algorithms for fusing their observations and refinement; Section 6 introduces the enabled applications using MMS for mapping and beyond. Finally, Section 7 concludes this review and discusses future trends.

3. An Overview of Sensors in Mobile Mapping Systems

Positioning and data collection sensors are two classes of essential components used in a typical MMS, as depicted in Figure 1. Positioning sensors are used to obtain the geographical positions and motion of the sensors, which are used to georeference the collected 3D data. Examples of these sensors include GNSS, IMU, DMI (i.e., odometers), etc. To achieve more statistically accurate positioning, the measurements from these sensors are usually jointly used through fusion. Additional fusion can also be performed between the positioning sensors and navigation cameras. A sensor fusion solution for positioning is now standard, as neither the GNSS receiver nor the IMU/DMI alone can provide sufficiently accurate and reliable measurements for navigating mobile platforms. GNSS measurements are usually subject to signal strength variation in different environments; for example, one could obtain a strong signal in open spaces and a weak signal or signal loss in tunnels or indoors, leading to a loss of information. On the other hand, the IMU and DMI are subject to a significant accumulation of errors and are often used as supplemental observations for navigation when GPS data are available.
Data collection sensors mostly consist of LiDAR and digital cameras, providing raw 3D/2D measurements of the surrounding environments. The 3D measurements of an MMS rely on LiDAR sensors, while the images are primarily used to provide colorimetric/spectral information [20]. With the development of advanced dense image matching methods [23,24,25,26,27], these images are also collected stereoscopically to provide additional dense measurements for 3D data fusion. In the following subsections, we provided an overview of positioning and data collection sensors, as well as the respective sensor fusion approaches.

3.1. Positioning Sensors

As mentioned above, typical positioning sensors include the GNSS receiver, IMU, and DMI. Their error patterns are complementary; thus, in modern MMSs, they are often combined through a sensor fusion solution to provide positioning information accurate to the centimeter level [28]. Nevertheless, their individual measurement accuracies are still critical to the accuracy of the resulting 3D maps. An overview of the positioning sensors is shown in Table 1. In the following subsections, we discussed the three main positioning sensors: GNSS, IMU, and DMI.

3.1.1. Global Navigation Satellite System Receiver

The GNSS receiver is a primary source used to estimate the absolute position, velocity, and elevation in open areas, referenced to a global coordinate system (e.g., WGS84). It passively receives signals from a minimum of four navigation satellites and performs trilateration to calculate its real-time positions. Since it depends on an external source of signal, the GNSS often exhibits little to no error accumulation. The satellite systems mainly refer to the GPS developed by the United States, the GLONASS (Globalnaya Navigatsionnaya Sputnikovaya Sistema) developed by Russia, the Galileo system built by the European Union, and the BeiDou system developed by China [29]. The raw observations (pseudo-range, carrier phase, Doppler shifts, etc.) from the chipset of the receiver with its solver often give a positional error at the meter level, depending on the chipset and antenna (e.g., single/dual frequencies) [30]. High-tier MMSs often use augmented GPS solutions, such as Differential GPS (DGPS) or Real-Time Kinematic GPS (RTK-GPS), to improve the positioning accuracy to decimeters and centimeters (and can achieve an accuracy of 1 cm [31]). DGPS uses code-based measurements and can operate with single-frequency receivers without initialization time, while RTK-GPS uses carrier-phase measurements and requires dual-frequency receivers. The latter takes about one minute to initialize (for fixing the integer carrier-phase ambiguities) [19]. Both DGPS and RTK-GPS rely on a network of reference stations, each linked to a surveyed point in its vicinity, to apply corrections and eliminate various errors such as ionospheric delays and other unmodeled errors. The traditional DGPS method achieves submeter accuracy in the horizontal position, while, with much more advanced techniques and solvers, RTK-GPS, as a type of DGPS, can achieve centimeter-level accuracy in three dimensions. However, these achievable accuracies are conditioned on open areas; when collecting 3D data in dense urban areas with tall buildings or indoor environments, the GNSS signal can be heavily impacted by occlusions and the resulting measurements can be inaccurate [32]. As such, other complementary sensors are required when operating under such conditions. In general, the positioning platform of an MMS is expected to achieve an accuracy of 5–50 mm at speeds that can reach the maximum speed of highways (120–130 km/h) when considering the integration of complementary sensors.
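To make the trilateration step concrete, the sketch below estimates a receiver position and clock bias from four or more pseudoranges via Gauss–Newton least squares. It is an illustrative simplification under assumed inputs (known satellite positions, pre-corrected pseudoranges) and omits the atmospheric and multipath corrections that real receivers apply.

```python
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver position and clock bias from >= 4 pseudoranges
    using Gauss-Newton least squares (illustrative sketch only).
    sat_positions: (N, 3) satellite ECEF coordinates [m]
    pseudoranges:  (N,) measured pseudoranges [m]"""
    x = np.zeros(4)  # [X, Y, Z, clock-bias expressed in meters]
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
        predicted = ranges + x[3]
        residuals = pseudoranges - predicted
        # Jacobian: negated unit line-of-sight vectors, plus 1 for the clock term
        J = np.hstack([-(sat_positions - x[:3]) / ranges[:, None],
                       np.ones((len(pseudoranges), 1))])
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += dx
    return x[:3], x[3]
```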

3.1.2. Inertial Measurement Unit

An IMU is an egocentric sensor that records the orientation and directional acceleration of the host platform. Its positional information can be calculated through dead reckoning approaches [33,34]. Unlike GNSS, it does not require links to external signal sources, and it records relative positions with respect to a reference at its starting point (which can usually be dynamically provided by GNSS in open fields). Like many other egocentric navigation methods, it suffers from accumulated errors, often leading to significant drift from its true position. To be more specific, an IMU consists of an accelerometer and a gyroscope, which it uses to sense acceleration and angular velocity. These raw measurements are fed into an onboard computing unit that applies a dead reckoning algorithm to provide real-time positioning. Thus, the IMU and computing unit, together with the algorithm as a whole, are also called an inertial navigation system (INS). The grade/quality of IMU sensors can be differentiated by the type of gyroscope: a majority of lightweight, consumer-grade IMUs use microelectromechanical systems (MEMS), which are affordable but suffer from poor precision and large drift errors (often 10–100°/h [35]) [3,36]. Higher-grade systems for precise navigation use a larger but more accurate gyroscope, e.g., a ring laser or fiber optic gyroscope, which can reach a drift error of less than 1°/h [35]. An IMU can work in GPS-denied environments, indoors, outdoors, and in tunnels. However, given its use of dead reckoning navigation, its measurements will only be accurate for a relatively short period in reference to the starting point. Since GNSS provides reasonable accuracy in open areas and its measurements do not accumulate errors as the platform moves, it is often integrated with the IMU for additional observations. This, as a standard approach, provides more accurate positional information in complex environments mixed with both open and occluded surroundings [37].
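As an illustration of why dead reckoning drifts, the sketch below naively integrates gyroscope and accelerometer samples into attitude, velocity, and position. The variable names and the simple small-angle attitude update are our own simplifications; any bias in the raw measurements is integrated twice and therefore grows without bound.

```python
import numpy as np

def dead_reckon(accel, gyro, dt, p0, v0, R0):
    """Naive strapdown dead reckoning (illustrative only).
    accel: (N, 3) specific force in the body frame [m/s^2]
    gyro:  (N, 3) angular rate in the body frame [rad/s]
    p0, v0, R0: initial position, velocity, and body-to-navigation rotation."""
    g = np.array([0.0, 0.0, -9.81])        # gravity in the navigation frame
    p, v, R = p0.copy(), v0.copy(), R0.copy()
    for a_b, w_b in zip(accel, gyro):
        wx, wy, wz = w_b * dt
        # small-angle rotation update of the attitude
        dR = np.array([[1.0, -wz,  wy],
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])
        R = R @ dR
        a_n = R @ a_b + g                  # rotate specific force, add gravity
        v = v + a_n * dt                   # integrate acceleration -> velocity
        p = p + v * dt                     # integrate velocity -> position
    return p, v, R
```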

3.1.3. Distance Measuring Instrument

The DMI generally refers to instruments that measure the traveled distance of the platform. In many cases, the DMI is alternatively referred to as the odometer or wheel sensor for MMSs based on vehicles or bikes. It computes the distance based on the number of cycles the wheel rotates. Since the DMI only measures distance, it is often used as supplementary information to GNSS/IMU and is an effective means of reducing accumulated errors and constraining IMU drift in GPS-denied environments such as tunnels [38]. It requires calibration before use, and velocity and acceleration can be derived from its distance measurements.
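A minimal sketch of the wheel-odometry computation follows; the pulse count, wheel diameter, and calibration scale factor are hypothetical values used only to illustrate why the DMI must be calibrated before use.

```python
import math

def dmi_distance(pulse_count, pulses_per_rev, wheel_diameter_m, scale=1.0):
    """Distance traveled from wheel-encoder pulses (illustrative sketch).
    `scale` is a correction factor estimated during calibration, since the
    effective wheel diameter changes with tire pressure, load, and wear."""
    revolutions = pulse_count / pulses_per_rev
    return scale * revolutions * math.pi * wheel_diameter_m

# e.g., 25,000 pulses at 1024 pulses/rev with a 0.65 m wheel -> ~49.9 m
print(dmi_distance(25_000, 1024, 0.65))
```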

3.2. Sensors for Data Collection

Data collection sensors are another major component in an MMS, used to collect 3D data. They typically refer to sensors such as LiDAR and high-resolution cameras that provide both geometry and texture information. They require constant georeferencing using the position and orientation information provided by the positioning sensors to link the 3D data to the world coordinate system. In this section, we introduced LiDAR and imaging systems (i.e., cameras), described their functions, types, benefits, challenges, and limitations, and provided an example of a system representing the status quo.

3.2.1. Light Detection and Ranging (LiDAR)

LiDAR, or light detection and ranging, is an optical instrument that uses directional laser beams to measure the distances and locations of objects. It provides individual and accurate point measurements on a 3D object; thus, many of these measurements together constitute information about the shape and surface characteristics of objects in the scene. It has many desirable features for 3D modeling, as it is highly accurate, can acquire dense 3D information in a short time, exhibits invariance to illumination, and can partially penetrate sparse objects such as canopies. LiDAR itself is still an instrument that measures relative locations. It requires a suite of highly accurate and well-calibrated navigation systems to retrieve global 3D points, the installation and cost of which, in addition to the already expensive LiDAR sensor, make it a high-cost means of collection.
The concept of using light beams for distance measurements has existed since 1930 [39]. Since the invention of the laser in 1960, LiDAR technology has experienced rapid development [40] and has been very popular for accurate mapping and autonomous driving applications [41]. Nowadays, there are many commercially available LiDAR sensors for surveying or automotive applications. Typically, survey-grade LiDAR achieves a range accuracy at the millimeter level (usually 10–80 mm); examples include the RIEGL VQ-250, VQ-450 [42], and Trimble MX9 and MX50 [43]. Relatively lower-grade LiDAR sensors (which are also lower in cost) achieve a range accuracy at the centimeter level (usually 1–8 cm), generally satisfying applications for obstacle avoidance and object detection. These are often used in autonomous driving platforms, given their good tradeoff between cost and performance. Examples of such LiDAR sensors include those from Velodyne [44], Ouster [45], Luminar Technology [46], and Innoviz Technologies [47]. In fact, the level of accuracy of different grades of LiDAR sensors and their costs are the main deciding factors when choosing a LiDAR sensor. There is a large cost gap between grades, with survey-grade LiDARs often costing hundreds of thousands of USD (at the time of this publication) and relatively lower-grade ones costing approximately ten times less.
LiDAR sensors can be categorized, based on their collecting principles, into three main categories: rotating, solid-state, and flash. Rotating LiDAR uses a mirror spinning through 360 degrees to redirect laser beams. It usually has multiple beams, and each beam illuminates one point at a time. Rotating LiDAR is the most commonly used in MMSs; owing to its rotating nature, it provides a large FoV, a high signal-to-noise ratio, and dense point clouds [48]. Solid-state LiDAR usually steers its beams either with MEMS mirrors embedded in a chip [49], where the mirror can be controlled to follow a specific trajectory, or with optical phased arrays [50]. It is considered solid because it does not possess any moving parts in the sensor. Flash LiDAR [51] usually illuminates the entire FoV with a wide beam in a single pulse. Analogous to a camera with a flash, a flash LiDAR uses a 2D array of photodiodes to capture the laser returns, which are finally processed to form 3D point clouds [48,52]. Typically, a flash LiDAR has a limited range (less than 100 m) as well as a limited FoV, constrained by the sensor size. Although LiDAR is primarily used to generate point clouds, it can also be used for localization purposes through different techniques such as scan matching [53,54,55]. The extractable information can be further enhanced by deep neural networks for semantic segmentation and localization [56,57]. However, while LiDAR sensors can provide relatively accurate range measurements, their performance deteriorates significantly in hazardous weather conditions such as heavy rain, snow, and fog.
Table 2 shows an example of several existing LiDAR sensors along with their technical specifications in terms of range, accuracy, number of beams, FoV, resolution, points per second, and refresh rate. Generally, the choice of sensor depends on the application and the characteristics of the moving platform (e.g., the speed, payload, etc.). As mentioned above, most MMSs rely on rotating LiDAR sensors, but these often come at a high cost compared to other categories. Therefore, using solid-state LiDAR in an MMS is a promising direction, since its cost is lower than that of rotating LiDAR. When the vehicle speed is high, there is less time to acquire data, and more beams are needed to ensure that the object of interest is measured by sufficient points [44,58]. For instance, a 32-beam LiDAR could be sufficient for a vehicle moving at a speed of 50–60 km/h, but a LiDAR with 128 beams is recommended for higher speeds, up to 100–110 km/h, so that the acquired data have adequate resolution. The operating range of a LiDAR can also be important and should be considered on an application basis (e.g., long-range LiDAR may be unnecessary for indoor applications). In general, the cost usually increases by a factor of 1.5–2 when the number of beams is doubled; the cost is also positively correlated with the operating range.
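The speed-versus-beam-count recommendation can be illustrated with a back-of-the-envelope calculation: assuming a rotating LiDAR spinning at a nominal 10 Hz (an assumed value for illustration), the along-track gap between successive revolutions grows linearly with platform speed, so higher speeds either require more beams or leave the scene more sparsely sampled.

```python
def along_track_spacing(speed_kmh, scan_rate_hz):
    """Distance the platform moves between two successive LiDAR revolutions.
    Each revolution contributes one 'ring' of points per beam, so this gap is
    only filled by more beams or a higher scan rate (illustrative only)."""
    speed_ms = speed_kmh / 3.6
    return speed_ms / scan_rate_hz

# e.g., at 60 km/h and a 10 Hz scan rate, successive scans are ~1.7 m apart;
# at 110 km/h the gap grows to ~3.1 m, which motivates denser (more-beam) sensors.
print(along_track_spacing(60, 10), along_track_spacing(110, 10))
```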

3.2.2. Imaging Systems and Cameras

Imaging systems like cameras are among the most popular sensors used for data collection due to their low cost and ability to provide high-resolution texture information. Cameras are usually mounted on the top or front of the moving platform to capture information about the surrounding environment. They are intended to acquire many images at a high frame rate, e.g., 30–60 frames per second. Cameras serve a few main purposes. First, they are used to recover the geometry of the scene, usually through stereoscopic/binocular cameras that process a pair of overlapping images and recover depth information using stereo-dense image matching approaches [24,25,26]. Second, they are capable of obtaining the textures of objects in the scene (a camera records photons from the object at different spectral frequencies, providing rich and critical information about the object’s natural appearance) and can be used to build panoramic and geotagged images, as well as photorealistic models. Third, the texture information gathered by cameras encodes critical semantics of the object and can be used to detect static objects such as traffic lights, stop signs, markings, and road lanes. They can also detect moving objects such as pedestrians and cars, which is gradually becoming more applicable as modern deep learning methods are developed to tackle such problems [4,59,60].
There are many types of camera sensors and configurations used in MMSs, depending on their intended use, as described earlier. Examples include monocular cameras, binocular cameras, RGB-D cameras, multi-camera systems (e.g., Ladybug), fisheye cameras, etc. A summary of different camera types is shown in Table 3. Monocular cameras (low-cost cameras) provide a series of single RGB images without any additional depth information and are often used to collect high-resolution and geotagged images or panoramas [11]. However, they cannot be used to recover the 3D scale or generate highly accurate 3D points. Binocular cameras, on the other hand, consist of two cameras capturing synchronized stereo images to recover depth and scale, with an additional computational cost incurred through stereo-dense image matching techniques [24,25,26]. The performance and accuracy of the 3D information depend on the selection of the stereo-dense image matching method. In many cases, mapping solutions may rely on RGB-D cameras (e.g., Kinect [61], Intel RealSense D435 [62]), which can provide both RGB images and depth images (through structured light) simultaneously. They are primarily used in indoor settings due to their limited range. Integrating LiDAR with RGB-D images can yield highly accurate 3D information; however, this may require compensating beforehand for the uncertainties in the RGB-D images caused by random noise or occlusions. Due to compact/cluttered environments, an MMS often includes a wide-FoV or even 360° panoramic camera, which is usually achieved via a multi-camera system that uses a group of synchronized cameras sharing the same optical center (e.g., FLIR Ladybug5+ [63]). Panoramic images additionally facilitate the integration with LiDAR scanners, providing 100% overlap between the image and LiDAR point clouds (e.g., those from rotating LiDAR). As a result, panoramic images are suitable for street mapping applications. As a lower-cost alternative, a fisheye camera aims to provide an image with an extended FoV from a single camera. It has a spherical lens that can provide more than a 180° FoV. Although this is cheaper, the savings may come at the cost of image distortions in scale, geometry, shape, and illumination, requiring additional and slightly more complex calibrations.
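As a reminder of how binocular cameras recover depth and scale, the sketch below applies the standard pinhole stereo relation Z = fB/d; the focal length, baseline, and disparity values in the usage comment are hypothetical and ignore matching noise and lens distortion.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d (illustrative).
    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers [m]
    disparity_px: horizontal pixel offset of a matched point"""
    return focal_px * baseline_m / disparity_px

# e.g., f = 1400 px, B = 0.3 m, d = 10 px -> Z = 42 m; note that a one-pixel
# matching error matters far more at small disparities (large depths).
print(depth_from_disparity(1400, 0.3, 10))
```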
While a modern MMS benefits from various cameras providing additional information, it comes with a few added complexities. First, a camera captures images using light reflected off objects, which makes it sensitive to the illumination of the environment, such as the high dynamic range of the scene (between sky and ground) and hazy weather conditions [64,65]. Second, cameras and multi-camera systems require calibration to reduce different types of distortion [66]. Third, moving platforms require high-framerate cameras, which involves a trade-off among speed, image quality, and resolution [67].

4. Mobile Mapping Systems and Platforms

There are a few factors that determine the type of sensor and platform to be used for MMS tasks. These factors include the available sensors, project budget, technical solutions, processing strategies, and scene content (i.e., indoor or outdoor). Together, they help to determine the suitable sensors (e.g., with/without GPS) and the accessible platforms (e.g., vehicle-mounted, backpack, etc.). For example, in indoor environments, there is no access to GPS signals or vehicles; thus, alternative solutions must be adopted.
In general, we broadly categorized the MMS platforms into traditional vehicles and nontraditional lightweight/portable mapping devices. Traditional vehicle-based MMSs primarily operate on main roads, collecting city- or block-level 3D data. The nontraditional portable devices, such as backpack/wearable systems, handheld systems, or trolley-based systems, depending on their application and task, can be used both outdoors and indoors in GPS-denied environments. For outdoor applications, these means of mapping are primarily used to complement vehicle-based systems, mapping narrow streets and areas that cannot be accessed by larger vehicles [68,69]. For indoor or GPS-denied environments, the sensor suites may be significantly different from those used outdoors; for example, they may primarily rely on INS or visual odometry for positioning [70,71]. To be more specific, in this section, we introduced four typical MMS platforms that offer mapping solutions, namely a traditional vehicle platform and three portable platforms (handheld, wearable, and trolley-based systems). Further details of these systems are provided in Table 4 and the following subsections.

4.1. Vehicle-Mounted Systems

This setting refers to mounting the sensor suites on top of a vehicle to capture dense point clouds. These systems enable a high rate of data acquisition at the vehicle travel speed (20–70 mph). The sensor platform can be mounted on cars, trains, or boats, depending on the mapping application. Generally, vehicle-mounted systems achieve the highest accuracy compared to other mobile mapping platforms, primarily because of their size and payload, which allow them to host high-grade sensors [72]. A vehicle-mounted MMS is usually equipped with a survey-grade LiDAR that provides dense and accurate measurements, as well as a deeply integrated 360° FoV camera providing textural information. Regarding the positioning sensors, a vehicle-mounted system usually fuses measurements from GNSS receivers with an IMU and DMI. An example of these systems is introduced in [73], where one Velodyne HDL-32E and five Velodyne VLP-16 LiDAR sensors were combined with multiple GPS receivers and IMUs. Other examples include the Leica Pegasus: Two Ultimate [13], Teledyne Optech Lynx HS600-D [74], Topcon IP-S3 HD1 [75], Hi-Target HiScan-C [76], Trimble MX50, MX9, MX7 [43], and Viametris vMS3D [77]. Figure 2 shows a sample of such vehicle-mounted systems.
Vehicle-mounted systems are used for various applications, such as urban 3D modeling, road asset management, and condition assessment [78,79]. Moreover, these systems can be used for automated change detection in the mapped regions [80,81], creating up-to-date HD maps as an asset for autonomous driving [73] and railway monitoring applications [82].
Although vehicle-mounted systems play a major role in mobile mapping, their relatively large size hinders their accessibility to many sites, such as narrow alleys and indoor environments. Additionally, some studies [83] have demonstrated that the speed of the vehicle may affect the quality of the 3D data, creating Doppler effects over successive scans [84]. Therefore, the speed and route have to be planned ahead of the mapping mission.

4.2. Handheld and Wearable Systems

The handheld and wearable systems follow lightweight and compact designs using small-sized sensors. An operator can hold or wear the platform and walk through the area of interest. Wearable systems are often designed as a backpack system to allow the operator to collect data while walking. Both handheld and wearable systems are distinguished by their portability, which enables mapping GPS-denied environments such as enclosed spaces, complex terrains, or narrow spaces that vehicles cannot access [20,85]. Due to the nature of these environments, handheld and wearable systems may not rely on GNSS receivers for positioning, but could instead depend on an IMU or use a LiDAR and camera for both data collection and localization (using simultaneous localization and mapping (SLAM) approaches) [86]. A sample of several handheld and wearable systems is shown in Figure 3. Examples of these devices include HERON LITE Color [87], GeoSLAM Zeb Revo Go, Zeb RT, Zeb Horizon [88], Leica BLK2GO [13], Leica Pegasus: Backpack [13], HERON MS Twin [87], NavVis VLX [89], and Viametris BMS3D-HD [77]. Some other examples of these devices are introduced in [70,85,90], where they showed the benefit of using LiDAR with IMU to generate 2D and 3D maps and evaluated the performance of these mapping devices in indoor environments.
As mentioned above, handheld and wearable systems are effective in mapping enclosed spaces; for instance, these devices can be used to map caves where GNSS signals and lighting are not available [91]. In addition, they are used to map cultural heritage sites that may be complex and require data to be efficiently collected from different viewing points [92,93,94]. Furthermore, these systems are efficient in mapping areas that are not machine-accessible, such as forest surveying [68,95,96], safety and security maps, and building information modeling (BIM) [97]. However, working in GPS-denied regions requires compensating for the lost signal, which demands setting up GCPs in these regions or utilizing GPS information captured before entering such environments [98]; inside tunnels, navigation depends entirely on the IMU/DMI or the scanning sensors (LiDAR or cameras).

4.3. Trolley-Based Systems

This type of system is similar in nature to the backpack system, while being slightly larger and able to carry a heavier payload. It is suitable for indoor and outdoor mapping where the ground is flat [99]. A sample of trolley-based systems is shown in Figure 4. Examples of these systems include the NavVis M6 [89], Leica ProScan [13], Trimble indoor [43], and FARO Focus Swift [100]. Trolley-based systems are also suitable for a variety of applications, such as tunnel inspection, measuring asphalt roughness, creating floorplans, and BIM [22]. In addition, they are used for creating 3D indoor geospatial views of all kinds of infrastructure, such as plant and factory facilities, residential and commercial buildings, airports, and train stations.

5. MMS Workflow and Processing Pipeline

There are a few processing steps required to turn raw sensory data from an MMS into the final 3D product. These generally include data acquisition, sensor calibration and fusion, georeferencing, and data processing in preparation for scene understanding (shown in Figure 5). In the following subsections, we provided an overview of these typical processing steps.

5.1. Data Acquisition

The planned route must be analyzed to determine the platform and sensor configuration to be deployed; for example, the operators should be aware of the GNSS-accessible regions in order to plan the primary sensors to use. For MMS positioning, the GNSS, IMU, and DMI continuously measure the position and motion of the platform. In most outdoor applications, the main navigation and positioning data are provided by the GNSS satellites to the receiver, and the IMU and DMI supplement the measurements where GNSS signals are insufficient or lost. In some specific cases where GNSS is completely inaccessible, such as cave mapping, GCPs are used to reference the data to the geodetic coordinate system. Additionally, 3D data are mainly collected by an integrated LiDAR and camera system, where the LiDAR produces accurate 3D point clouds colorized by the images from the associated camera.

5.2. Sensors Calibration and Fusion

Sensor calibration and fusion are often performed throughout the data collection cycle. The goal is to calibrate the relative positions between multiple sensors: between cameras, between a camera and LiDAR, or among the LiDAR, cameras, and navigation sensors. Additionally, their outputs must be fused as a postprocessing step to achieve more accurate positional measurements. These serve multiple purposes: more accurate localization, more accurate geometric reconstruction, and data alignment for fusion [101,102]. In the following subsections, we introduced a few typical calibration procedures in MMSs.

5.2.1. Positioning Sensors Calibration and Fusion

The integration of GNSS, IMU, and DMI is split into several steps. The first is lab/factory precalibration, which estimates the relative offsets among these sensors and their relative positions to the data collection sensors (e.g., LiDAR and cameras) [103]. The second step involves the fusion of sensor information to output the estimated positions through optimal statistical/stochastic estimators [29,104]. A typical algorithm used for this purpose is the Kalman filter (KF) [105,106,107,108], which uses continuous measurements over time, with their uncertainties and a stochastic model for each sensor, to estimate the unknown variables in a recursive scheme. The KF is the simplest dynamic estimator, assuming linear models and Gaussian random noise in the observations. As such, it is often replaced by the Extended KF (EKF), which linearizes nonlinear models [109,110]. However, convergence is not guaranteed for the EKF, especially when the random noise does not follow a Gaussian distribution. Thus, a particle filter [111,112] is usually adopted as a good alternative, as it can sample the noise to deal with potentially non-Gaussian noise distributions.
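For illustration, the toy one-dimensional filter below shows the predict/update cycle that underlies GNSS/IMU fusion: the IMU acceleration drives the prediction, and an occasional GNSS position fix corrects it. The state layout, noise values, and scalar setting are our own simplifications; production MMS filters track attitude, sensor biases, and lever-arms and typically use the EKF variants discussed above.

```python
import numpy as np

def kf_step(x, P, accel, z_gnss, dt, q=0.5, r=2.0):
    """One predict/update cycle of a toy 1D Kalman filter (illustrative only).
    State x = [position, velocity]; accel is the IMU reading driving the
    prediction, z_gnss is an optional GNSS position fix with variance r."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity model
    B = np.array([0.5 * dt**2, dt])                    # control input: acceleration
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])             # process noise
    H = np.array([[1.0, 0.0]])                         # GNSS observes position only

    # predict with the IMU measurement
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # update with the GNSS fix, if one arrived this epoch
    if z_gnss is not None:
        y = z_gnss - H @ x                             # innovation
        S = H @ P @ H.T + r                            # innovation covariance
        K = P @ H.T / S                                # Kalman gain (2x1)
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```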

5.2.2. Camera Calibration

Camera calibration refers to the process of rigorously determining the camera intrinsic parameters (i.e., focal length, principal point), various lens distortions (e.g., radial distortion), and other intrinsic distortions, such as affinity or decentering errors [113,114]. The camera calibration parameters often follow the standard Brown model [115] or the extended parameters [116] that additionally model in-plane errors due to film/chip displacements. The traditional and most rigorous camera calibration approach uses a 3D control field consisting of highly accurate 3D physical point arrays. Converging images of these point arrays are captured at various angles and positions. These well-distributed 3D points, together with their corresponding 2D observations on the image, go through a rigorous BA with additional parameters (i.e., calibration parameters). However, 3D control fields are demanding and costly. This approach is mostly used for calibrating survey-grade or aerial cameras at the factory level. A popular and less demanding calibration approach is called cloud-based camera calibration [116]. Instead of using the very expensive 3D control fields, this approach uses coded targets, which can be arbitrarily placed (but well-distributed with certain depth variations) in a scene as a target cloud. Converging images of these targets can yield very accurate 2D multi-ray measurements, which are fed into a free-network BA for calibrating the camera parameters. Thanks to its simplicity, this is often used as a good alternative for calibrating cameras for close-range applications, including MMS applications. A less rigorous (but often used) calibration method uses a chessboard as a target for calibration [117], which extracts regularly distributed 2D measurements from images of the chessboard and performs self-calibrating BA. However, due to its limited scene coverage, the nature of the board being planar (thus lacking depth variation), and the limited flexibility in capturing well-converged images filled with features, this method may not be able to decorrelate, in the BA, the camera parameters from the exterior orientation parameters, leading to potential errors in calibration. Since it is very commonly used in the computer vision community and well supported by available open-source tools, it is one of the most popular approaches to obtain quick calibrations and can be used to calibrate cameras that do not demand high surveying accuracy, such as navigation cameras.
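As a minimal sketch of the chessboard method, the snippet below uses OpenCV's corner detection and self-calibrating bundle adjustment; the board dimensions and image folder are placeholders, and, as noted above, this route is suited to navigation-grade rather than survey-grade calibration.

```python
import glob
import cv2
import numpy as np

# Inner-corner grid of the printed chessboard (placeholder dimensions).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board units

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):   # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Self-calibrating bundle adjustment over the chessboard views: returns the
# intrinsic matrix K and the radial/tangential distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
```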
When calibrating a multi-camera system (e.g., a stereo rig), the camera calibration is extended to additionally estimate the accurate relative orientation among these cameras. The calibration still goes through a BA, but requires at least knowledge of the scale of the targets (either the target clouds or chessboard) in order to metrically estimate the baselines between all these cameras.

5.2.3. LiDAR and Camera Calibration

The calibration between LiDAR and camera covers a few aspects: first, the images must be time-synchronized with the LiDAR scans; second, the relative orientation between the LiDAR and camera rays must be computed; third, they must share (approximately) the same viewpoint to avoid parallaxes. Time-synchronization is one of the most crucial calibration steps to correct time offsets between sensors. It refers to matching the recorded and measured data from different sensors with separate clocks to create well-aligned data in terms of time and position. Time-synchronization errors (or time offsets) are due to (1) clock offset, which refers to the time difference between the internal clocks of the sensors, or (2) clock drift, which refers to the sensors’ clocks operating at different frequencies [118]. A time offset greater than a few milliseconds between the LiDAR and camera can cause significant positioning errors when recording an object. Additionally, the impact can be more significant and noticeable if the platform is operating at a high speed. To address this problem, the sensors must have a common time reference, often based on GNSS time because of its high precision and nanosecond-level timestamps [119]. The time offset between GNSS and IMU is either neglected because of its insignificance or slightly corrected using the KF as a fusion method. The LiDAR and camera timestamps are corrected in real time using the onboard computer system. The computer system updates the data based on GNSS time; examples of such time services include the GPS service daemon (GPSD), the IEEE 1588 Precision Time Protocol, and Chrony.
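The offset-plus-drift clock model can be written as a one-line correction, sketched below with hypothetical parameter names; the comment also illustrates why millisecond-level offsets matter at driving speeds.

```python
def correct_timestamp(t_sensor, clock_offset, clock_drift, t_ref_start):
    """Map a sensor timestamp onto the common (GNSS-based) time base using an
    offset plus a linear drift term (illustrative model only)."""
    return t_sensor + clock_offset + clock_drift * (t_sensor - t_ref_start)

# e.g., at 72 km/h (20 m/s), a residual 10 ms offset already displaces a
# scanned object by 20 * 0.01 = 0.2 m along the trajectory.
```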
Relative orientation refers to estimating the translation and orientation parameters between sensors. This type of calibration needs to be carried out periodically due to mechanical deterioration within and among the sensors after operating in different environments. A quick calibration can be performed using a single image in which four corresponding points are selected between the image and the 3D scan, using either well-identified natural corner points or highly reflective coded targets.
Ideally, the LiDAR and camera data must be well aligned. However, because of the wide baseline between the two sensors, they often view objects from different angles, leading to large parallaxes between them. A large parallax causes crucial distortions such as flipped or flying points, occlusions, etc. To resolve this issue, the relative orientation parameters are used to project the LiDAR data into the camera coordinate system [120].
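A minimal sketch of this projection step is shown below, assuming the calibrated rotation R, translation t, and intrinsic matrix K are already available; it uses a plain pinhole model and ignores lens distortion and timing effects.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project LiDAR points into the image using the calibrated relative
    orientation (R, t) and camera intrinsics K (illustrative pinhole model).
    points_lidar: (N, 3) array in the LiDAR frame.
    Returns pixel coordinates of the points in front of the camera and the
    boolean mask (over the original points) identifying them."""
    pts_cam = points_lidar @ R.T + t        # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0            # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                      # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division -> pixels
    return uv, in_front
```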

5.3. Georeferencing LiDAR Scans and Camera Images Using Navigation Data

Both the LiDAR scans and camera images are collected in a local coordinate system. Georeferencing them refers to determining their global/geodetic coordinates, mostly based on the fused GNSS/IMU/DMI positioning data. This process follows the calibration among the data collection sensors (as described in Section 5.2). Georeferencing includes the estimation of the orientation (boresight) and position (lever-arm) parameters/offsets with respect to the GNSS and IMU [19]. The boresight and lever-arm parameters define the geometric relationship between the positioning and data collection sensors. There are two approaches to performing georeferencing: (1) the direct approach, which uses only GNSS/IMU data, or (2) the indirect approach, which uses GNSS/IMU data in addition to GCPs and BA for refinement [68]. The direct approaches are less demanding, since they do not require GCPs, and they can achieve accuracy at the decimeter to centimeter level. Indirect approaches can provide more accurate (centimeter-level) and precise results, where typical surveying methods such as GCPs and BA are adopted. However, they are very expensive, and their accuracy may vary based on the GCP setup (i.e., the position and number of GCPs).
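In compact form, direct georeferencing chains two rigid transformations: the boresight/lever-arm calibration maps a point from the sensor frame to the platform body frame, and the fused navigation solution maps it to the world frame. The sketch below illustrates this with hypothetical variable names.

```python
import numpy as np

def georeference_point(p_sensor, R_boresight, lever_arm, R_nav, t_nav):
    """Direct georeferencing of a single LiDAR/camera point (illustrative).
    p_sensor:     3D point in the sensor frame
    R_boresight:  sensor-to-body rotation from boresight calibration
    lever_arm:    sensor origin offset in the body frame
    R_nav, t_nav: body-to-world attitude and position from GNSS/IMU/DMI fusion"""
    p_body = R_boresight @ p_sensor + lever_arm
    return R_nav @ p_body + t_nav
```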

5.4. Data Processing in Preparation for Scene Understanding

Mobile mapping is highly relevant to autonomous vehicles, where scene understanding is crucial to not only automate the mapping process but also provide critical scene information in real-time to support platform mobilizations. Scene understanding is the process of identifying the semantics and geometry of objects [121,122]. With the enhanced processing capability of mobile computing units, advanced machine learning models, and the ever-increasing datasets, there is a growing trend toward performing on-board data processing and scene understanding using the collected measurements from the mobile system [123,124]. These include real-time detection, tracking, and semantic segmentation of both dynamic (e.g., pedestrians) and static (e.g., road markings or signs) objects in a scene [122,125]. This has driven the need to develop representative benchmark datasets, better-generalized training, domain adaptation approaches [126], and lighter machine learning models or network structures that support real-time result inferences [127]. Examples of these efforts include MobileNet [128], BlitzNet [127], MGNet [129], and MVLidarNet [130]. Challenges exist when addressing these needs, as the mobile platforms may collect data under extremely different illuminations (e.g., daylight and night), weather conditions (e.g., rainy, snowy, and sunny), and may utilize drastically different sensor suites with different qualities of raw data.

6. Applications

Mobile mapping provides valuable assets for different applications, driven by not only the broad availability of easy-to-use and portable MMS platforms but also their readiness under different operating environments. This is particularly useful as most of these applications rely on regularly acquired data for detection and monitoring purposes, such as railway-based powerline detection/monitoring [131,132]. In this section, we reviewed some of the main applications of mobile mapping technology, including road asset management, conditions assessment, BIM creation, disaster response, and heritage conservation. Documented examples of these applications in publications are shown in Table 5 and are detailed in the following subsections.
Road Asset Management and Condition Assessment: MMSs operating on roads can regularly collect accurate 3D data of the road and its surroundings, which facilitates road asset management, mapping, and monitoring (e.g., road signs, traffic signals, pavement dimensions) [79]. Creating road asset inventories is of great importance, given the large volume of road assets. Furthermore, since the condition of the roads deteriorates over time, automatic means of regular transportation maintenance, such as pavement crack and distress detection, are critically needed [133,135]. Therefore, a key benefit of generating an updated and accurately georeferenced road asset inventory is to allow automatic and efficient change detection in place of traditionally laborious manual inspections [134].
Typically, road condition monitoring processes consist of four steps [148]: (1) data collection using MMS, (2) defect detection, which can be performed automatically using deep learning-based approaches, (3) defect assessment, and (4) road condition index calculation to classify road segments based on the type and severity of the defect. Therefore, MMS data could further assist in increasing road safety, for example by detecting road potholes [78,149], evaluating the location of speed signs before horizontal curves on roadways [150], or assessing the passing sight distance on highways [151].
Building Information Modeling: BIM is one of the most well-established technologies in the architecture, engineering, and construction industry. It provides an integrated digital database about an asset (e.g., a building, tunnel, bridge, or 3D model of a city) during the project’s life cycle. Typical BIM stages include (1) rigorous data collection, covering information such as the architectural design (i.e., materials and dimensions), structural design (e.g., beams, columns, etc.), electrical and mechanical designs, sewage systems, etc.; (2) preparation of the 2D plans; and (3) manual upload and update of the plans using specialized software programs that convert them into a digital format. This can facilitate the design, maintenance, and renovation processes of engineering buildings and infrastructure. However, this can be a challenging task because of the amount of data that needs to be collected and the lack of automated processes, which can increase the time and cost.
Nowadays, MMSs have been widely adopted for BIM projects due to their high accuracy, time efficiency, and lower cost in collecting 3D data. The collected point clouds and images are used to produce the 3D reconstructed model of an asset, which is then processed with semantic segmentation or classification to extract detailed information on all elements of the asset. The final product is then transferred to the BIM software to extract and simulate important information related to the life cycle of the project. In general, an MMS can provide sufficiently accurate results for the derived BIM products [20]. These derived products can be either 2D floor plans or 3D mesh or polyhedral models representing the structure of the architecture or the life cycle of the construction process [145]. A popular example of MMS use in BIM is 3D city modeling, where an MMS can be used to collect information on roadside buildings [143,144,145] and their structural information (e.g., window layouts and doors) [98,146]. Additionally, MMSs can also be used to maintain plans and record indoor 3D assets and building layouts, which can be generated using handheld, backpack, or trolley MMSs.
Emergency and Disaster Response: The geospatial data provided by MMSs are critical to improving emergency and disaster responses and post-disaster recovery projects. MMSs provide cost- and time-efficient solutions to collect and produce 3D reconstructed models with detailed information about the semantics and geometry needed to navigate through an emergency or a disaster. MMSs have been reported to provide building-level information (e.g., floors, walls, and doors) at a resolution/accuracy at the centimeter level. In many cases, a building’s plans are not kept up-to-date after construction, which may hinder a rescue mission in case of a fire emergency [139,152]. On the other hand, an MMS can provide an efficient alternative to produce an accurate and updated 3D model of a facility or a building at a minimal cost, which facilitates emergency responses. Another example is collecting 3D data on roadside assets and feeding them into GIS systems, which can serve as pre-event analysis tools to identify the potential impacts of natural disasters through simulation (e.g., flood or earthquake simulation), aiding preventative planning. Future directions of MMSs involve more efficient data collection methods (as simple as a person imaging the surroundings with a cellphone), and although these might involve lower accuracy [3], they could supply critical georeferenced information in a disaster or emergency for situational awareness and remediation.
Vegetation Mapping and Detection: Mobile mapping has shown great success in collecting high-resolution and detailed vegetation plots, used to create up-to-date digital tree inventories for urban green planning and management. This has greatly accelerated traditionally laborious visual inspections [142]. In addition, vegetation monitoring is important to limit declines in biodiversity and identify hazardous trees [153,154]. These requirements have driven the development of up-to-date digital databases for vegetation data. The collected 3D data could be used to model 3D trees for visualization purposes in urban 3D models. Typically, the workflow might consist of three main steps [155]: (1) tree detection by segmenting the generated point clouds, (2) simplifying the detected structure of the point clouds, and (3) deriving the geometry parameters, such as canopy height and crown width and diameter [155,156]. The collected point cloud could then be used to detect trees and low vegetation at the roadside [141,143] to account for occlusions on building facades and supply information for city modeling. Moreover, the data collected by MMSs could also be used to calculate urban biomass, combat the urban heat island effect, and help analyze the influence of the ecosystem on climate change [157].
Digital Heritage Conservation: There is growing recognition of the importance of digitally documenting archaeological sites and preserving cultural heritage [146,158,159]. Many of these sites are in danger of deterioration and collapse, which may be accelerated by extreme weather and natural disasters, such as the collapse of many cultural sites in Nepal and Iran due to earthquakes [160]. Therefore, there is a critical need to proactively document these sites while they are still intact [92,161,162]. Moreover, a well-documented heritage site may enable other means of tourism, such as virtual tours, to offload site visitation and reduce the human factors that contribute to the deterioration of these sites. As a means of collecting highly accurate 3D data, MMSs have been used as one of the primary sources to create 3D models of complex and large archaeological heritage sites. The data collection process for these sites often requires multiple scans of both the interior and exterior from different angles to generate occlusion-free, realistic 3D models. For example, a vehicle-mounted system could be used to drive around the sites to collect exterior information, and wearable/handheld devices used to scan their interiors [162].

Summary

We discussed selected applications of mobile mapping that demonstrate the importance and necessity of utilizing MMSs in different scenarios. The adoption of mobile mapping technology in various applications has been proven not only to increase productivity but also to reduce the cost of operation. For instance, using digital assets for construction has boosted productivity in the global construction sector by 14–15% [163]. In addition, digitizing historic structures by creating BIMs from mobile mapping data will enable preventive conservation for heritage buildings, saving 40–70% on maintenance costs [164]. Aside from the productivity and cost aspects, mobile mapping data pave the way for new road monitoring studies and methods that will increase road safety and dramatically reduce the probability of accidents [165].

7. Conclusions

7.1. Summary

In this paper, we provided a thorough review of state-of-the-art mobile mapping systems, their sensors, and relevant applications. We reviewed the sensors and sensor suites typically used in modern MMSs and discussed, in detail, their types, benefits, and limitations (Section 3). Then, we reviewed mobile platforms, including vehicle-mounted, handheld, wearable, etc., and described, in detail, their collection logistics, giving examples of modern systems of different types (Section 4). We also specifically highlighted their supported use-case scenarios. We further reviewed the critical processing steps that turn raw data into the final mapping products (Section 5), including sensor calibration, fusion, and georeferencing. Finally, we summarized the most common applications (Section 6) that currently utilize the capabilities of modern MMSs.

7.2. Future Trends

Despite the many variations of MMSs and their sensor suites, the main goal of an MMS is to provide a means of collecting 3D data at close range, with maximal flexibility and minimal cost. Given the complexity of terrains and environments, a single MMS, or even a few MMSs, can hardly be sufficient for all levels of mobile mapping applications. Thus, while off-the-shelf solutions are partially available, developing or adapting MMSs to designated applications is an ongoing effort. To date, MMS is still regarded as an expensive means of collection, as the equipment, sensors, and manpower required to handle the logistics and processing are still considerable. Therefore, as far as we can conclude, ongoing and future trends continue to be:
(1)
Reduced sensor cost for high-resolution sensors, primarily LiDAR systems with equivalent accuracy/resolution as those currently in use, but at a much lower cost.
(2)
Crowdsourced and collaborative MMS using smartphone data; for example, the new iPhone has been equipped with a low-cost LiDAR sensor.
(3)
Incorporation of new sensors, such as ultra-wide-band tracking systems, as well as WiFi-based localization for use in MMS.
(4)
Enhanced (more robust) use of cameras as visual sensors for navigation.
(5)
Higher flexibility in sensor integration and customization as well as more mature software ecosystems (e.g., self-calibration algorithms among multiple sensors) to allow users to easily plug and play different sensors to match the demand for mapping different environments.
(6)
Advanced post-processing algorithms for pose estimation, data registration in close-range scenarios, dynamic object removal for data cleaning, and refinement of collections in cluttered environments.
(7)
The integration of novel deep learning solutions at all levels of processing, from navigation and device calibration to 3D scene reconstruction and interpretation.
Given the complexity of MMSs and their application scenarios, a one-stop-shop solution arguably does not exist. However, it may become possible to streamline and optimize the customization of a system if the above-mentioned challenges are addressed consistently. Our future work will include component-level surveys that give the community a comprehensive view of each of these efforts and help accelerate the convergence of solutions.

Author Contributions

Conceptualization, M.E., H.A. and R.Q.; formal analysis, M.E. and H.A.; investigation, M.E. and H.A.; resources, M.E. and H.A.; writing—original draft preparation, M.E., H.A. and R.Q.; writing—review and editing, M.E., H.A. and R.Q.; visualization, M.E. and H.A.; supervision, R.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research reflects efforts partially supported by ONR N00014-20-1-2141. Hessah Albanwan is sponsored by Kuwait University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. Mention of brand names in this paper does not constitute an endorsement by the authors.

References

  1. Al-Bayari, O. Mobile mapping systems in civil engineering projects (case studies). Appl. Geomat. 2019, 11, 1–13. [Google Scholar] [CrossRef]
  2. Schwarz, K.P.; El-Sheimy, N. Mobile mapping systems—State of the art and future trends. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 10. [Google Scholar]
  3. Chiang, K.W.; Tsai, G.-J.; Zeng, J.C. Mobile Mapping Technologies. In Urban Informatics; Shi, W., Goodchild, M.F., Batty, M., Kwan, M.-P., Zhang, A., Eds.; The Urban Book Series; Springer: Singapore, 2021; pp. 439–465. [Google Scholar]
  4. Balado, J.; González, E.; Arias, P.; Castro, D. Novel Approach to Automatic Traffic Sign Inventory Based on Mobile Mapping System Data and Deep Learning. Remote Sens. 2020, 12, 442. [Google Scholar] [CrossRef] [Green Version]
  5. Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P. Automatic road sign inventory using mobile mapping systems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 717–723. [Google Scholar] [CrossRef] [Green Version]
  6. Hirabayashi, M.; Sujiwo, A.; Monrroy, A.; Kato, S.; Edahiro, M. Traffic light recognition using high-definition map features. Robot. Auton. Syst. 2019, 111, 62–72. [Google Scholar] [CrossRef]
  7. Wang, Y.; Chen, Q.; Zhu, Q.; Liu, L.; Li, C.; Zheng, D. A survey of mobile laser scanning applications and key techniques over urban areas. Remote Sens. 2019, 11, 1540. [Google Scholar] [CrossRef] [Green Version]
  8. Vallet, B.; Mallet, C. Urban Scene Analysis with Mobile Mapping Technology. In Land Surface Remote Sensing in Urban and Coastal Areas; Baghdadi, N., Zribi, M., Eds.; Elsevier: Amsterdam, The Netherlands, 2016; pp. 63–100. [Google Scholar]
  9. Raper, J. GIS, Mobile and Locational Based Services. In International Encyclopedia of Human Geography; Kitchin, R., Thrift, N., Eds.; Elsevier: Oxford, UK, 2009; pp. 513–519. [Google Scholar]
  10. Mahabir, R.; Schuchard, R.; Crooks, A.; Croitoru, A.; Stefanidis, A. Crowdsourcing Street View Imagery: A Comparison of Mapillary and OpenStreetCam. ISPRS Int. J. Geo-Inf. 2020, 9, 341. [Google Scholar] [CrossRef]
  11. Anguelov, D.; Dulong, C.; Filip, D.; Frueh, C.; Lafon, S.; Lyon, R.; Ogale, A.; Vincent, L.; Weaver, J. Google street view: Capturing the world at street level. Computer 2010, 43, 32–38. [Google Scholar] [CrossRef]
  12. Werner, P.A. Review of Implementation of Augmented Reality into the Georeferenced Analogue and Digital Maps and Images. Information 2019, 10, 12. [Google Scholar] [CrossRef] [Green Version]
  13. Leica Geosystems. Available online: http://www.leica-geosystems.com/ (accessed on 15 February 2022).
  14. Laguela, S.; Dorado, I.; Gesto, M.; Arias, P.; Gonzalez-Aguilera, D.; Lorenzo, H. Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM. Sensors 2018, 18, 766. [Google Scholar] [CrossRef] [Green Version]
  15. Nocerino, E.; Menna, F.; Remondino, F.; Toschi, I.; Rodríguez-Gonzálvez, P. Investigation of indoor and outdoor performance of two portable mobile mapping systems. In Proceedings of the Videometrics, Range Imaging, and Applications, Munich, Germany, 26 June 2017. [Google Scholar]
  16. Tucci, G.; Visintini, D.; Bonora, V.; Parisi, E.I. Examination of Indoor Mobile Mapping Systems in a Diversified Internal/External Test Field. Appl. Sci. 2018, 8, 401. [Google Scholar] [CrossRef] [Green Version]
  17. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26. [Google Scholar] [CrossRef] [Green Version]
  18. Maboudi, M.; Bánhidi, D.; Gerke, M. Evaluation of indoor mobile mapping systems. In Proceedings of the GFaI Workshop 3D North East, Berlin, Germany, 1–2 December 2017; pp. 125–134. [Google Scholar]
  19. Puente, I.; Gonzalez-Jorge, H.; Martinez-Sanchez, J.; Arias, P. Review of mobile mapping and surveying technologies. Measurement 2013, 46, 2127–2145. [Google Scholar] [CrossRef]
  20. Otero, R.; Laguela, S.; Garrido, I.; Arias, P. Mobile indoor mapping technologies: A review. Autom. Constr. 2020, 120, 103399. [Google Scholar] [CrossRef]
  21. Karimi, H.A.; Khattak, A.J.; Hummer, J.E. Evaluation of mobile mapping systems for roadway data collection. J. Comput. Civ. Eng. 2000, 14, 168–173. [Google Scholar] [CrossRef]
  22. Lovas, T.; Hadzijanisz, K.; Papp, V.; Somogyi, A. Indoor Building Survey Assessment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 251–257. [Google Scholar] [CrossRef]
  23. Han, Y.; Liu, W.; Huang, X.; Wang, S.; Qin, R. Stereo Dense Image Matching by Adaptive Fusion of Multiple-Window Matching Results. Remote Sens. 2020, 12, 3138. [Google Scholar] [CrossRef]
  24. Hirschmüller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341. [Google Scholar] [CrossRef]
  25. Li, Y.; Hu, Y.; Song, R.; Rao, P.; Wang, Y. Coarse-to-Fine PatchMatch for Dense Correspondence. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2233–2245. [Google Scholar] [CrossRef]
  26. Shen, S. Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes. IEEE Trans. Image Process. 2013, 22, 1901–1914. [Google Scholar] [CrossRef]
  27. Barnes, C.; Shechtman, E.; Goldman, D.; Finkelstein, A. The Generalized PatchMatch Correspondence Algorithm. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; pp. 29–43. [Google Scholar]
  28. Li, T.; Zhang, H.; Gao, Z.; Niu, X.; El-sheimy, N. Tight Fusion of a Monocular Camera, MEMS-IMU, and Single-Frequency Multi-GNSS RTK for Precise Navigation in GNSS-Challenged Environments. Remote Sens. 2019, 11, 610. [Google Scholar] [CrossRef] [Green Version]
  29. Grewal, M.S.; Andrews, A.P.; Bartone, C.G. Global Navigation Satellite Systems, Inertial Navigation, and Integration; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  30. Hofmann-Wellenhof, B.; Lichtenegger, H.; Wasle, E. GNSS—Global Navigation Satellite Systems: GPS, GLONASS, Galileo, and More; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  31. Gan-Mor, S.; Clark, R.L.; Upchurch, B.L. Implement lateral position accuracy under RTK-GPS tractor guidance. Comput. Electron. Agric. 2007, 59, 31–38. [Google Scholar] [CrossRef]
  32. Shi, B.; Wang, M.; Wang, Y.; Bai, Y.; Lin, K.; Yang, F. Effect Analysis of GNSS/INS Processing Strategy for Sufficient Utilization of Urban Environment Observations. Sensors 2021, 21, 620. [Google Scholar] [CrossRef] [PubMed]
  33. Wei-Wen, K. Integration of GPS and dead-reckoning navigation systems. In Proceedings of the Vehicle Navigation and Information Systems Conference, Troy, MI, USA, 20–23 October 1991; pp. 635–643. [Google Scholar]
  34. Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-Based Positioning and Their Integration; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  35. Ahmed, H.; Tahir, M. Accurate attitude estimation of a moving land vehicle using low-cost MEMS IMU sensors. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1723–1739. [Google Scholar] [CrossRef]
  36. Petrie, G. An introduction to the technology: Mobile mapping systems. Geoinformatics 2010, 13, 32. [Google Scholar]
  37. Falco, G.; Pini, M.; Marucco, G. Loose and Tight GNSS/INS Integrations: Comparison of Performance Assessed in Real Urban Scenarios. Sensors 2017, 17, 255. [Google Scholar] [CrossRef]
  38. Tao, V.; Li, J. Advances in Mobile Mapping Technology; ISPRS Series; Taylor & Francis, Inc.: London, UK, 2007; Volume 4. [Google Scholar]
  39. Mehendale, N.; Neoge, S. Review on Lidar Technology. SSRN Electron. J. 2020. [Google Scholar] [CrossRef]
  40. Wandinger, U. Introduction to lidar. In Lidar; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–18. [Google Scholar]
  41. Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef] [Green Version]
  42. RIEGL. Available online: http://www.riegl.com/ (accessed on 11 February 2022).
  43. Trimble. Available online: https://www.trimble.com/ (accessed on 11 February 2022).
  44. Velodyne. Available online: https://velodynelidar.com/ (accessed on 8 February 2022).
  45. Ouster. Available online: http://ouster.com/ (accessed on 8 February 2022).
  46. Luminar Technologies. Available online: https://www.luminartech.com/ (accessed on 8 February 2022).
  47. Innoviz Technologies. Available online: http://www.innoviz.tech/ (accessed on 8 February 2022).
  48. Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  49. Yoo, H.W.; Druml, N.; Brunner, D.; Schwarzl, C.; Thurner, T.; Hennecke, M.; Schitter, G. MEMS-based lidar for autonomous driving. Elektrotechnik Und Inf. 2018, 135, 408–415. [Google Scholar] [CrossRef] [Green Version]
  50. Poulton, C.V.; Byrd, M.J.; Timurdogan, E.; Russo, P.; Vermeulen, D.; Watts, M.R. Optical Phased Arrays for Integrated Beam Steering. In Proceedings of the IEEE International Conference on Group IV Photonics, Cancun, Mexico, 29–31 August 2018; pp. 1–2. [Google Scholar]
  51. Amzajerdian, F.; Roback, V.E.; Bulyshev, A.; Brewster, P.F.; Hines, G.D. Imaging flash lidar for autonomous safe landing and spacecraft proximity operation. In Proceedings of the AIAA SPACE, Long Beach, CA, USA, 13–16 September 2016; p. 5591. [Google Scholar]
  52. Zhou, G.; Zhou, X.; Yang, J.; Tao, Y.; Nong, X.; Baysal, O. Flash Lidar Sensor Using Fiber-Coupled APDs. IEEE Sens. J. 2015, 15, 4758–4768. [Google Scholar] [CrossRef]
  53. Yokozuka, M.; Koide, K.; Oishi, S.; Banno, A. LiTAMIN: LiDAR-based Tracking and MappINg by Stabilized ICP for Geometry Approximation with Normal Distributions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5143–5150. [Google Scholar]
  54. Yokozuka, M.; Koide, K.; Oishi, S.; Banno, A. LiTAMIN2: Ultra Light LiDAR-based SLAM using Geometric Approximation applied with KL-Divergence. In Proceedings of the IEEE International Conference on Robotics and Automation, Philadelphia, PA, USA, 30 May–5 June 2021; pp. 11619–11625. [Google Scholar]
  55. Droeschel, D.; Behnke, S. Efficient continuous-time SLAM for 3D lidar-based online mapping. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–26 May 2018; pp. 5000–5007. [Google Scholar]
  56. Milioto, A.; Vizzo, I.; Behley, J.; Stachniss, C. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, The Venetian Macao, Macau, 3–8 November 2019; pp. 4213–4220. [Google Scholar]
  57. Chen, X.; Milioto, A.; Palazzolo, E.; Giguère, P.; Behley, J.; Stachniss, C. SuMa++: Efficient LiDAR-based Semantic SLAM. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, The Venetian Macao, Macau, 3–8 November 2019; pp. 4530–4537. [Google Scholar]
  58. Zhang, J.; Xiao, W.; Coifman, B.; Mills, J.P. Vehicle Tracking and Speed Estimation from Roadside Lidar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5597–5608. [Google Scholar] [CrossRef]
  59. Arcos-García, Á.; Soilán, M.; Álvarez-García, J.A.; Riveiro, B. Exploiting synergies of mobile mapping sensors and deep learning for traffic sign recognition systems. Expert Syst. Appl. 2017, 89, 286–295. [Google Scholar] [CrossRef]
  60. Wan, R.; Huang, Y.; Xie, R.; Ma, P. Combined Lane Mapping Using a Mobile Mapping System. Remote Sens. 2019, 11, 305. [Google Scholar] [CrossRef] [Green Version]
  61. Microsoft Azure. Available online: https://azure.microsoft.com/en-us/services/kinect-dk/ (accessed on 11 February 2022).
  62. Intel RealSense. Available online: https://www.intelrealsense.com/ (accessed on 11 February 2022).
  63. Teledyne FLIR LLC. Available online: http://www.flir.com/ (accessed on 11 February 2022).
  64. El-Hashash, M.M.; Aly, H.A. High-speed video haze removal algorithm for embedded systems. J. Real-Time Image Process. 2019, 16, 1117–1128. [Google Scholar] [CrossRef]
  65. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2019, 28, 492–505. [Google Scholar] [CrossRef] [Green Version]
  66. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272. [Google Scholar]
  67. Lee, H.S.; Lee, K.M. Dense 3D Reconstruction from Severely Blurred Images Using a Single Moving Camera. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 273–280. [Google Scholar]
  68. Blaser, S.; Meyer, J.; Nebiker, S.; Fricker, L.; Weber, D. Centimetre-accuracy in forests and urban canyons—Combining a high-performance image-based mobile mapping backpack with new georeferencing methods. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-1-2020, 333–341. [Google Scholar] [CrossRef]
  69. Kukko, A.; Kaartinen, H.; Hyyppä, J.; Chen, Y. Multiplatform Mobile Laser Scanning: Usability and Performance. Sensors 2012, 12, 11712–11733. [Google Scholar] [CrossRef] [Green Version]
  70. Karam, S.; Vosselman, G.; Peter, M.; Hosseinyalamdary, S.; Lehtola, V. Design, Calibration, and Evaluation of a Backpack Indoor Mobile Mapping System. Remote Sens. 2019, 11, 905. [Google Scholar] [CrossRef] [Green Version]
  71. Wen, C.; Dai, Y.; Xia, Y.; Lian, Y.; Tan, J.; Wang, C.; Li, J. Toward Efficient 3-D Colored Mapping in GPS-/GNSS-Denied Environments. IEEE Geosci. Remote Sens. Lett. 2020, 17, 147–151. [Google Scholar] [CrossRef]
  72. Fassi, F.; Perfetti, L. Backpack mobile mapping solution for dtm extraction of large inaccessible spaces. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W15, 473–480. [Google Scholar] [CrossRef] [Green Version]
  73. Ilci, V.; Toth, C. High Definition 3D Map Creation Using GNSS/IMU/LiDAR Sensor Integration to Support Autonomous Vehicle Navigation. Sensors 2020, 20, 899. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Optech. Available online: http://www.teledyneoptech.com/ (accessed on 15 February 2022).
  75. Topcon Positioning Systems, Inc. Available online: http://topconpositioning.com/ (accessed on 15 February 2022).
  76. Hi-Target Navigation Technology Corporation. Available online: https://en.hi-target.com.cn/ (accessed on 15 February 2022).
  77. VIAMETRIS. Available online: https://viametris.com/ (accessed on 15 February 2022).
  78. Wu, H.; Yao, L.; Xu, Z.; Li, Y.; Ao, X.; Chen, Q.; Li, Z.; Meng, B. Road pothole extraction and safety evaluation by integration of point cloud and images derived from mobile mapping sensors. Adv. Eng. Inform. 2019, 42, 100936. [Google Scholar] [CrossRef]
  79. Sairam, N.; Nagarajan, S.; Ornitz, S. Development of Mobile Mapping System for 3D Road Asset Inventory. Sensors 2016, 16, 367. [Google Scholar] [CrossRef] [Green Version]
  80. Voelsen, M.; Schachtschneider, J.; Brenner, C. Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. J. Photogramm. Remote Sens. Geoinf. Sci. 2021, 89, 195–207. [Google Scholar] [CrossRef]
  81. Qin, R.; Tian, J.; Reinartz, P. 3D change detection—Approaches and applications. ISPRS J. Photogramm. Remote Sens. 2016, 122, 41–56. [Google Scholar] [CrossRef] [Green Version]
  82. Kremer, J.; Grimm, A. The Railmapper—A Dedicated Mobile Lidar Mapping System for Railway Networks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39-B5, 477–482. [Google Scholar] [CrossRef] [Green Version]
  83. Cahalane, C.; McElhinney, C.P.; McCarthy, T. Mobile mapping system performance-an analysis of the effect of laser scanner configuration and vehicle velocity on scan profiles. In Proceedings of the European Laser Mapping Forum, The Hague, The Netherlands, 10 December 2010. [Google Scholar]
  84. Scotti, F.; Onori, D.; Scaffardi, M.; Lazzeri, E.; Bogoni, A.; Laghezza, F. Multi-Frequency Lidar/Radar Integrated System for Robust and Flexible Doppler Measurements. IEEE Photonics Technol. Lett. 2015, 27, 2268–2271. [Google Scholar] [CrossRef]
  85. Lauterbach, H.A.; Borrmann, D.; Hess, R.; Eck, D.; Schilling, K.; Nuchter, A. Evaluation of a Backpack-Mounted 3D Mobile Scanning System. Remote Sens. 2015, 7, 13753–13781. [Google Scholar] [CrossRef] [Green Version]
  86. Debeunne, C.; Vivet, D. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping. Sensors 2020, 20, 2068. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. GEXCEL. Available online: https://gexcel.it/ (accessed on 15 February 2022).
  88. GeoSLAM Limited. Available online: http://www.geoslam.com/ (accessed on 15 February 2022).
  89. NavVis. Available online: https://www.navvis.com/ (accessed on 15 February 2022).
  90. Wen, C.L.; Pan, S.Y.; Wang, C.; Li, J. An Indoor Backpack System for 2-D and 3-D Mapping of Building Interiors. IEEE Geosci. Remote Sens. Lett. 2016, 13, 992–996. [Google Scholar] [CrossRef]
  91. Raval, S.; Banerjee, B.P.; Singh, S.K.; Canbulat, I. A Preliminary Investigation of Mobile Mapping Technology for Underground Mining. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 6071–6074. [Google Scholar]
  92. Zlot, R.; Bosse, M.; Greenop, K.; Jarzab, Z.; Juckes, E.; Roberts, J. Efficiently capturing large, complex cultural heritage sites with a handheld mobile 3D laser mapping system. J. Cult. Herit. 2014, 15, 670–678. [Google Scholar] [CrossRef]
  93. Puche, J.M.; Macias Solé, J.; Sola-Morales, P.; Toldrà, J.; Fernandez, I. Mobile mapping and laser scanner to interrelate the city and its heritage of Roman Circus of Tarragona. In Proceedings of the 3rd International Conference on Preservation, Maintenance and Rehabilitation of Historical Buildings and Structures, Braga, Portugal, 14–16 June 2017; pp. 21–28. [Google Scholar]
  94. Nespeca, R. Towards a 3D digital model for management and fruition of Ducal Palace at Urbino. An integrated survey with mobile mapping. SCIRES-IT-SCIentific RESearch Inf. Technol. 2019, 8, 1–14. [Google Scholar] [CrossRef]
  95. Vatandaşlar, C.; Zeybek, M. Extraction of forest inventory parameters using handheld mobile laser scanning: A case study from Trabzon, Turkey. Measurement 2021, 177, 109328. [Google Scholar] [CrossRef]
  96. Ryding, J.; Williams, E.; Smith, M.J.; Eichhorn, M.P. Assessing Handheld Mobile Laser Scanners for Forest Surveys. Remote Sens. 2015, 7, 1095–1111. [Google Scholar] [CrossRef] [Green Version]
  97. Previtali, M.; Banfi, F.; Brumana, R. Handheld 3D Mobile Scanner (SLAM): Data Simulation and Acquisition for BIM Modelling. In R3 in Geomatics: Research, Results and Review; Springer: Cham, Switzerland, 2020; pp. 256–266. [Google Scholar]
  98. Maset, E.; Cucchiaro, S.; Cazorzi, F.; Crosilla, F.; Fusiello, A.; Beinat, A. Investigating the performance of a handheld mobile mapping system in different outdoor scenarios. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 103–109. [Google Scholar] [CrossRef]
  99. Karam, S.; Peter, M.; Hosseinyalamdary, S.; Vosselman, G. An evaluation pipeline for indoor laser scanning point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-1, 85–92. [Google Scholar] [CrossRef] [Green Version]
  100. Faro Technologies. Available online: https://www.faro.com/ (accessed on 15 February 2022).
  101. Kubelka, V.; Oswald, L.; Pomerleau, F.; Colas, F.; Svoboda, T.; Reinstein, M. Robust Data Fusion of Multimodal Sensory Information for Mobile Robots. J. Field Robot. 2015, 32, 447–473. [Google Scholar] [CrossRef] [Green Version]
  102. Simanek, J.; Kubelka, V.; Reinstein, M. Improving multi-modal data fusion by anomaly detection. Auton. Robot. 2015, 39, 139–154. [Google Scholar] [CrossRef]
  103. Ravi, R.; Lin, Y.-J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Simultaneous system calibration of a multi-lidar multicamera mobile mapping platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1694–1714. [Google Scholar] [CrossRef]
  104. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  105. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  106. Farrell, J.; Barth, M. The Global Positioning System and Inertial Navigation; Mcgraw-Hill: New York, NY, USA, 1999; Volume 61. [Google Scholar]
  107. Vanicek, P.; Omerbašic, M. Does a navigation algorithm have to use a Kalman filter? Can. Aeronaut. Space J. 1999, 45, 292–296. [Google Scholar]
  108. Zarchan, P.; Musoff, H. Fundamentals of Kalman Filtering—A Practical Approach, 4th ed.; ARC: Reston, VA, USA, 2015; Volume 246. [Google Scholar]
  109. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter: Particle Filters for Tracking Applications; Artech House: London, UK, 2003. [Google Scholar]
  110. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–108. [Google Scholar] [CrossRef] [Green Version]
  111. Del Moral, P.; Doucet, A. Particle methods: An introduction with applications. ESAIM Proc. 2014, 44, 1–46. [Google Scholar] [CrossRef]
  112. Georgy, J.; Karamat, T.; Iqbal, U.; Noureldin, A. Enhanced MEMS-IMU/odometer/GPS integration using mixture particle filter. GPS Solut. 2011, 15, 239–252. [Google Scholar] [CrossRef]
  113. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. Remote Sens. 1966, 32, 444–462. [Google Scholar]
  114. Gruen, A.; Beyer, H.A. System calibration through self-calibration. In Calibration and Orientation of Cameras in Computer Vision; Armin Gruen, T.S.H., Ed.; Springer: Berlin/Heidelberg, Germany, 2001; Volume 34, pp. 163–193. [Google Scholar]
  115. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  116. Fraser, C.S. Automatic Camera Calibration in Close Range Photogrammetry. Photogramm. Eng. Remote Sens. 2013, 79, 381–388. [Google Scholar] [CrossRef] [Green Version]
  117. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  118. Tungadi, F.; Kleeman, L. Time synchronisation and calibration of odometry and range sensors for high-speed mobile robot mapping. In Proceedings of the Australasian Conference on Robotics and Automation, Canberra, Australia, 3–5 December 2008. [Google Scholar]
  119. Madeira, S.; Gonçalves, J.A.; Bastos, L. Sensor integration in a low cost land mobile mapping system. Sensors 2012, 12, 2935–2953. [Google Scholar] [CrossRef]
  120. Shim, I.; Shin, S.; Bok, Y.; Joo, K.; Choi, D.-G.; Lee, J.-Y.; Park, J.; Oh, J.-H.; Kweon, I.S. Vision system and depth processing for DRC-HUBO+. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2456–2463. [Google Scholar]
  121. Liu, X.; Neuyen, M.; Yan, W.Q. Vehicle-Related Scene Understanding Using Deep Learning. In Pattern Recognition; Springer: Singapore, 2020; pp. 61–73. [Google Scholar]
  122. Hofmarcher, M.; Unterthiner, T.; Arjona-Medina, J.; Klambauer, G.; Hochreiter, S.; Nessler, B. Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 285–296. [Google Scholar]
  123. Pintore, G.; Ganovelli, F.; Gobbetti, E.; Scopigno, R. Mobile Mapping and Visualization of Indoor Structures to Simplify Scene Understanding and Location Awareness. In Proceedings of the Computer Vision—ECCV 2016 Workshops, Amsterdam, The Netherlands, 8–10, 15–16 October 2016; pp. 130–145. [Google Scholar]
  124. Wald, J.; Tateno, K.; Sturm, J.; Navab, N.; Tombari, F. Real-Time Fully Incremental Scene Understanding on Mobile Platforms. IEEE Robot. Autom. Lett. 2018, 3, 3402–3409. [Google Scholar] [CrossRef]
  125. Wu, Z.; Deng, X.; Li, S.; Li, Y. OC-SLAM: Steadily Tracking and Mapping in Dynamic Environments. Front. Energy Res. 2021, 9, 803631. [Google Scholar] [CrossRef]
  126. Csurka, G. A Comprehensive Survey on Domain Adaptation for Visual Applications. In Domain Adaptation in Computer Vision Applications; Csurka, G., Ed.; Springer International Publishing: Cham, Switzerland, 2017; pp. 1–35. [Google Scholar]
  127. Dvornik, N.; Shmelkov, K.; Mairal, J.; Schmid, C. BlitzNet: A Real-Time Deep Network for Scene Understanding. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4174–4182. [Google Scholar]
  128. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  129. Schön, M.; Buchholz, M.; Dietmayer, K. MGNet: Monocular Geometric Scene Understanding for Autonomous Driving. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 15784–15795. [Google Scholar]
  130. Chen, K.; Oldja, R.; Smolyanskiy, N.; Birchfield, S.; Popov, A.; Wehr, D.; Eden, I.; Pehserl, J. MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 2288–2294. [Google Scholar]
  131. Sánchez-Rodríguez, A.; Soilán, M.; Cabaleiro, M.; Arias, P. Automated Inspection of Railway Tunnels’ Power Line Using LiDAR Point Clouds. Remote Sens. 2019, 11, 2567. [Google Scholar] [CrossRef] [Green Version]
  132. Zhang, S.; Wang, C.; Yang, Z.; Chen, Y.; Li, J. Automatic railway power line extraction using mobile laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 615–619. [Google Scholar] [CrossRef] [Green Version]
  133. Stricker, R.; Eisenbach, M.; Sesselmann, M.; Debes, K.; Gross, H.M. Improving Visual Road Condition Assessment by Extensive Experiments on the Extended GAPs Dataset. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  134. Eisenbach, M.; Stricker, R.; Sesselmann, M.; Seichter, D.; Gross, H. Enhancing the quality of visual road condition assessment by deep learning. In Proceedings of the World Road Congress, Abu Dhabi, United Arab Emirates, 6–10 October 2019. [Google Scholar]
  135. Aoki, K.; Yamamoto, K.; Shimamura, H. Evaluation model for pavement surface distress on 3D point clouds from mobile mapping system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B3, 87–90. [Google Scholar] [CrossRef] [Green Version]
  136. Ortiz-Coder, P.; Sánchez-Ríos, A. A Self-Assembly Portable Mobile Mapping System for Archeological Reconstruction Based on VSLAM-Photogrammetric Algorithm. Sensors 2019, 19, 3952. [Google Scholar] [CrossRef] [Green Version]
  137. Costin, A.; Adibfar, A.; Hu, H.; Chen, S.S. Building Information Modeling (BIM) for transportation infrastructure—Literature review, applications, challenges, and recommendations. Autom. Constr. 2018, 94, 257–281. [Google Scholar] [CrossRef]
  138. Laituri, M.; Kodrich, K. On Line Disaster Response Community: People as Sensors of High Magnitude Disasters Using Internet GIS. Sensors 2008, 8, 3037–3055. [Google Scholar] [CrossRef]
  139. Gusella, L.; Adams, B.; Bitelli, G. Use of mobile mapping technology for post-disaster damage information collection and integration with remote sensing imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 34, 1–8. [Google Scholar]
  140. Saarinen, N.; Vastaranta, M.; Vaaja, M.; Lotsari, E.; Jaakkola, A.; Kukko, A.; Kaartinen, H.; Holopainen, M.; Hyyppä, H.; Alho, P. Area-Based Approach for Mapping and Monitoring Riverine Vegetation Using Mobile Laser Scanning. Remote Sens. 2013, 5, 5285–5303. [Google Scholar] [CrossRef] [Green Version]
  141. Monnier, F.; Vallet, B.; Soheilian, B. Trees detection from laser point clouds acquired in dense urban areas by a mobile mapping system. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 245–250. [Google Scholar] [CrossRef] [Green Version]
  142. Holopainen, M.; Vastaranta, M.; Kankare, V.; Hyyppä, H.; Vaaja, M.; Hyyppä, J.; Liang, X.; Litkey, P.; Yu, X.; Kaartinen, H.; et al. The use of ALS, TLS and VLS measurements in mapping and monitoring urban trees. In Proceedings of the Joint Urban Remote Sensing Event, Munich, Germany, 11–13 April 2011; pp. 29–32. [Google Scholar]
  143. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens. 2010, 65, 514–522. [Google Scholar] [CrossRef]
  144. Sánchez-Aparicio, L.J.; Mora, R.; Conde, B.; Maté-González, M.Á.; Sánchez-Aparicio, M.; González-Aguilera, D. Integration of a Wearable Mobile Mapping Solution and Advance Numerical Simulations for the Structural Analysis of Historical Constructions: A Case of Study in San Pedro Church (Palencia, Spain). Remote Sens. 2021, 13, 1252. [Google Scholar] [CrossRef]
  145. Barba, S.; Ferreyra, C.; Cotella, V.A.; Filippo, A.d.; Amalfitano, S. A SLAM Integrated Approach for Digital Heritage Documentation. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 24–29 July 2021; pp. 27–39. [Google Scholar]
  146. Malinverni, E.S.; Pierdicca, R.; Bozzi, C.A.; Bartolucci, D. Evaluating a SLAM-Based Mobile Mapping System: A Methodological Comparison for 3D Heritage Scene Real-Time Reconstruction. In Proceedings of the Metrology for Archaeology and Cultural Heritage, Cassino, Italy, 22–24 October 2018; pp. 265–270. [Google Scholar]
  147. Jan, J.-F. Digital heritage inventory using open source geospatial software. In Proceedings of the 22nd International Conference on Virtual System & Multimedia, Kuala Lumpur, Malaysia, 17–21 October 2016; pp. 1–8. [Google Scholar]
  148. Radopoulou, S.C.; Brilakis, I. Improving Road Asset Condition Monitoring. Transp. Res. Procedia 2016, 14, 3004–3012. [Google Scholar] [CrossRef] [Green Version]
  149. Douangphachanh, V.; Oneyama, H. Using smartphones to estimate road pavement condition. In Proceedings of the International Symposium for Next Generation Infrastructure, Wollongong, Australia, 1–4 October 2013. [Google Scholar]
  150. Koloushani, M.; Ozguven, E.E.; Fatemi, A.; Tabibi, M. Mobile Mapping System-based Methodology to Perform Automated Road Safety Audits to Improve Horizontal Curve Safety on Rural Roadways. Comput. Res. Prog. Appl. Sci. Eng. (CRPASE) 2020, 6, 263–275. [Google Scholar]
  151. Agina, S.; Shalkamy, A.; Gouda, M.; El-Basyouny, K. Automated Assessment of Passing Sight Distance on Rural Highways using Mobile LiDAR Data. Transp. Res. Rec. 2021, 2675, 676–688. [Google Scholar] [CrossRef]
  152. Nikoohemat, S.; Diakité, A.A.; Zlatanova, S.; Vosselman, G. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Autom. Constr. 2020, 113, 103109. [Google Scholar] [CrossRef]
  153. Bienert, A.; Georgi, L.; Kunz, M.; Von Oheimb, G.; Maas, H.-G. Automatic extraction and measurement of individual trees from mobile laser scanning point clouds of forests. Ann. Bot. 2021, 128, 787–804. [Google Scholar] [CrossRef]
  154. Holopainen, M.; Kankare, V.; Vastaranta, M.; Liang, X.; Lin, Y.; Vaaja, M.; Yu, X.; Hyyppä, J.; Hyyppä, H.; Kaartinen, H.; et al. Tree mapping using airborne, terrestrial and mobile laser scanning—A case study in a heterogeneous urban forest. Urban For. Urban Green. 2013, 12, 546–553. [Google Scholar] [CrossRef]
  155. Rutzinger, M.; Pratihast, A.K.; Oude Elberink, S.; Vosselman, G. Detection and modelling of 3D trees from mobile laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2010, 38, 520–525. [Google Scholar]
  156. Pratihast, A.K.; Thakur, J.K. Urban Tree Detection Using Mobile Laser Scanning Data. In Geospatial Techniques for Managing Environmental Resources; Thakur, J.K., Singh, S.K., Ramanathan, A.L., Prasad, M.B.K., Gossel, W., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 188–200. [Google Scholar]
  157. Herrero-Huerta, M.; Lindenbergh, R.; Rodríguez-Gonzálvez, P. Automatic tree parameter extraction by a Mobile LiDAR System in an urban context. PLoS ONE 2018, 13, e0196004. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  158. Hassani, F. Documentation of cultural heritage; techniques, potentials, and constraints. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 207–214. [Google Scholar] [CrossRef] [Green Version]
  159. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomat. Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  160. Parsizadeh, F.; Ibrion, M.; Mokhtari, M.; Lein, H.; Nadim, F. Bam 2003 earthquake disaster: On the earthquake risk perception, resilience and earthquake culture—Cultural beliefs and cultural landscape of Qanats, gardens of Khorma trees and Argh-e Bam. Int. J. Disaster Risk Reduct. 2015, 14, 457–469. [Google Scholar] [CrossRef]
  161. Remondino, F. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sens. 2011, 3, 1104–1138. [Google Scholar] [CrossRef] [Green Version]
  162. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.L.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile LiDAR System: New Possibilities for the Documentation and Dissemination of Large Cultural Heritage Sites. Remote Sens. 2017, 9, 189. [Google Scholar] [CrossRef] [Green Version]
  163. Moretti, N.; Ellul, C.; Re Cecconi, F.; Papapesios, N.; Dejaco, M.C. GeoBIM for built environment condition assessment supporting asset management decision making. Autom. Constr. 2021, 130, 103859. [Google Scholar] [CrossRef]
  164. Mora, R.; Sánchez-Aparicio, L.J.; Maté-González, M.Á.; García-Álvarez, J.; Sánchez-Aparicio, M.; González-Aguilera, D. An historical building information modelling approach for the preventive conservation of historical constructions: Application to the Historical Library of Salamanca. Autom. Constr. 2021, 121, 103449. [Google Scholar] [CrossRef]
  165. Chen, Y.; Chen, Y. Reliability Evaluation of Sight Distance on Mountainous Expressway Using 3D Mobile Mapping. In Proceedings of the 2019 5th International Conference on Transportation Information and Safety (ICTIS), Liverpool, UK, 14–17 July 2019; pp. 1–7. [Google Scholar]
Figure 1. An example of an MMS: a vehicle-mounted mobile mapping platform consisting of different positioning and data collection sensors to generate an accurate georeferenced 3D map of the environment. Shown here are the main sensors of the Leica Pegasus: Two Ultimate as an example. Photo courtesy of Leica Geosystems [13].
Figure 2. Leica Pegasus: Two Ultimate vehicle-mounted system. Photo courtesy of Leica Geosystems [13].
Figure 3. Handheld and wearable systems: (a) HERON LITE Color, (b) HERON MS Twin. Photos courtesy of Gexcel srl [87].
Figure 4. Leica ProScan trolley-based MMSs. Photo courtesy of Leica Geosystems [13].
Figure 5. The standard processing pipeline for MMS data.
Table 1. Positioning sensors overview.
Sensor: GNSS receiver
Description: The GNSS receiver uses signals from orbiting satellites to compute position, velocity, and elevation. Examples include GPS, GLONASS, Galileo, and BeiDou.
Benefits:
  • Little/no accumulation of errors, owing to its dependence on external signals.
  • Data are collected in a global reference coordinate system (e.g., WGS84).
Limitations:
  • Signals are inaccessible in complex urban regions (e.g., tall buildings, trees, tunnels) and indoor environments.
  • Requires post-processing using DGPS and RTK-GPS to minimize errors from receiver noise, pseudo-range, carrier phase, Doppler shifts, atmospheric delays, etc.
Sensor: IMU
Description: An egocentric sensor that records the relative orientation and directional acceleration of the host platform.
Benefits:
  • Capable of navigating in all environments, such as indoors, outdoors, tunnels, and caves.
  • A necessary supplemental data source in urban environments where GPS is unstable.
Limitations:
  • Requires consistent calibration and an external reference to avoid drift from the true position.
  • Limited to short-range navigation.
Sensor: DMI
Description: A supplementary positioning sensor that measures the distance traveled by the platform, i.e., information derived from a speedometer.
Benefits:
  • Provides additional data points to alleviate the accumulated errors of IMU sensors.
Limitations:
  • Requires calibration and provides only distance information (1 degree of freedom).
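The complementarity summarized in Table 1 is typically exploited by a filter that blends drift-free but intermittent GNSS fixes with high-rate but drifting inertial dead reckoning. The following one-dimensional Kalman-filter sketch is an illustrative simplification of such loosely coupled GNSS/IMU fusion; the noise values and rates are our assumptions, and a real navigation filter estimates the full 3D pose together with sensor biases.

```python
import numpy as np

# Minimal 1-D loosely coupled GNSS/IMU fusion sketch (illustrative only).
# State x = [position, velocity]; IMU acceleration drives the prediction,
# and intermittent GNSS position fixes correct the accumulated drift.
dt = 0.01                               # IMU rate: 100 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition model
B = np.array([[0.5 * dt**2], [dt]])     # how acceleration enters the state
H = np.array([[1.0, 0.0]])              # GNSS observes position only
Q = np.diag([1e-4, 1e-3])               # process noise (assumed IMU quality)
R = np.array([[0.5**2]])                # GNSS noise: 0.5 m std (assumed)

def predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gnss_pos):
    """Correct the state with a GNSS position fix."""
    y = np.array([[gnss_pos]]) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Example run: 1 s of IMU samples with a GNSS fix every 0.1 s.
rng = np.random.default_rng(0)
x, P, a_true = np.zeros((2, 1)), np.eye(2), 0.2
for k in range(1, 101):
    x, P = predict(x, P, a_true + rng.normal(0.0, 0.05))      # noisy IMU sample
    if k % 10 == 0:                                           # 10 Hz GNSS fixes
        true_pos = 0.5 * a_true * (k * dt) ** 2
        x, P = update(x, P, true_pos + rng.normal(0.0, 0.5))  # noisy GNSS fix
print("fused position/velocity:", x.ravel())
```

Between fixes, the state is propagated by the IMU alone, so the uncertainty grows until the next GNSS update pulls the estimate back, which is exactly the complementary behavior described in the GNSS and IMU rows above.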
Table 2. Specifications of different LiDAR sensors.
Company | Model | Range (m) | Range Accuracy (cm) | Number of Beams | Horizontal FoV (°) | Vertical FoV (°) | Horizontal Resolution (°) | Vertical Resolution (°) | Points Per Second | Refresh Rate (Hz)
Rotating:
RIEGL | VQ-250 | 1.5–500 | 0.1 | — | 360 | — | — | — | 300,000 | —
RIEGL | VQ-450 | 1.5–800 | 0.8 | — | 360 | — | — | — | 550,000 | —
Trimble | MX50 laser scanner | 0.6–80 | 0.2 | — | 360 | — | — | — | 960,000 | —
Trimble | MX9 laser scanner | 1.2–420 | 0.5 | — | 360 | — | — | — | 1,000,000 | —
Velodyne | HDL-64E | 120 | ±2 | 64 | 360 | 26.9 | 0.08 to 0.35 | 0.4 | 1,300,000 | 5 to 20
Velodyne | HDL-32E | 100 | ±2 | 32 | 360 | 41.33 | 0.08 to 0.33 | 1.33 | 695,000 | 5 to 20
Velodyne | Puck | 100 | ±3 | 16 | 360 | 30 | 0.1 to 0.4 | 2.0 | 300,000 | 5 to 20
Velodyne | Puck LITE | 100 | ±3 | 16 | 360 | 30 | 0.1 to 0.4 | 2.0 | 300,000 | 5 to 20
Velodyne | Puck Hi-Res | 100 | ±3 | 16 | 360 | 20 | 0.1 to 0.4 | 1.33 | 300,000 | 5 to 20
Velodyne | Puck 32MR | 120 | ±3 | 32 | 360 | 40 | 0.1 to 0.4 | 0.33 (min) | 600,000 | 5 to 20
Velodyne | Ultra Puck | 200 | ±3 | 32 | 360 | 40 | 0.1 to 0.4 | 0.33 (min) | 600,000 | 5 to 20
Velodyne | Alpha Prime | 245 | ±3 | 128 | 360 | 40 | 0.1 to 0.4 | 0.11 (min) | 2,400,000 | 5 to 20
Ouster | OS2-32 | 1 to 240 | ±2.5 to ±8 | 32 | 360 | 22.5 | 0.18 | 0.7 | 655,000 | 10, 20
Ouster | OS2-64 | 1 to 240 | ±2.5 to ±8 | 64 | 360 | 22.5 | 0.18 | 0.36 | 1,311,000 | 10, 20
Ouster | OS2-128 | 1 to 240 | ±2.5 to ±8 | 128 | 360 | 22.5 | 0.18 | 0.18 | 2,621,000 | 10–20
Hesai | PandarQT | 0.1 to 60 | ±3 | 64 | 360 | 104.2 | 0.6 | 1.45 | 384,000 | 10
Hesai | PandarXT | 0.05 to 120 | ±1 | 32 | 360 | 31 | 0.09, 0.18, 0.36 | 1 | 640,000 | 5, 10, 20
Hesai | Pandar40M | 0.3 to 120 | ±5 to ±2 | 40 | 360 | 40 | 0.2, 0.4 | 1, 2, 3, 4, 5, 6 | 720,000 | 10, 20
Hesai | Pandar64 | 0.3 to 200 | ±5 to ±2 | 64 | 360 | 40 | 0.2, 0.4 | 1, 2, 3, 4, 5, 6 | 1,152,000 | 10, 20
Hesai | Pandar128E3X | 0.3 to 200 | ±8 to ±2 | 128 | 360 | 40 | 0.1, 0.2, 0.4 | 0.125, 0.5, 1 | 3,456,000 | 10, 20
Solid-state:
Luminar | IRIS | Up to 600 | — | 640 lines/s | 120 | 0–26 | 0.05 | 0.05 | 300 points/square degree | 1 to 30
Innoviz | InnovizOne | 250 | — | — | 115 | 25 | 0.1 | 0.1 | — | 5 to 20
Innoviz | InnovizTwo | 300 | — | 8000 lines/s | 125 | 40 | 0.07 | 0.05 | — | 10 to 20
Flash:
LeddarTech | Pixell | Up to 56 | ±3 | — | 117.5 ± 2.5 | 16.0 ± 0.5 | — | — | — | 20
Continental | HFL110 | 50 | — | — | 120 | 30 | — | — | — | 25
“—” indicates that the specifications were not mentioned in the product datasheet.
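For the rotating scanners in Table 2, the listed figures are tied together by a simple relationship: the point rate is approximately the number of beams, times the number of firings per revolution (horizontal FoV divided by horizontal resolution), times the refresh rate. The snippet below performs this back-of-the-envelope check with values from the Velodyne HDL-64E row; because the horizontal resolution scales with the spin rate (about 0.08° at 5 Hz up to 0.35° at 20 Hz), the product stays roughly constant and close to the advertised 1,300,000 points per second.

```python
# Back-of-the-envelope check of the "Points Per Second" column for a rotating
# LiDAR, using values from the Velodyne HDL-64E row of Table 2.
beams = 64                # number of laser channels
horizontal_fov = 360.0    # degrees per revolution
horizontal_res = 0.17     # degrees between firings (mid-range of 0.08-0.35)
refresh_rate = 10.0       # revolutions per second (within the 5-20 Hz range)

firings_per_rev = horizontal_fov / horizontal_res
points_per_second = beams * firings_per_rev * refresh_rate
print(f"{points_per_second:,.0f} points/s")   # ~1,355,000, close to 1,300,000
```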
Table 3. Camera sensor overview.
Type: Monocular
Description: Single-lens camera.
Benefits:
  • Low cost.
  • Provides a series of single RGB images to collect high-resolution, geotagged images or panoramas.
Limitations:
  • Cannot recover 3D scale without additional sensors.
  • Camera networks are suboptimal for generating highly accurate 3D points.
Type: Binocular
Description: Two collocated cameras with known relative orientation capturing overlapping, synchronized images.
Benefits:
  • Can provide the depth and scale of objects in the scene.
  • Provides better accuracy when integrated with a LiDAR sensor.
Limitations:
  • Performance and accuracy depend on the algorithm used to compute the 3D information.
Type: RGB-D
Description: Cameras that capture RGB and depth images at the same time.
Benefits:
  • Simultaneous data acquisition.
  • Provides high accuracy when integrated with LiDAR.
Limitations:
  • The depth image is sensitive to occlusions.
  • Short range.
  • The depth image may include uncertainties and errors.
Type: Multi-camera system
Description: A spherical camera system with multiple cameras that provides a 360° field of view.
Benefits:
  • Panoramic view showing the entire scene.
  • Suitable for street mapping applications.
Limitations:
  • Requires large storage to save images in real time.
  • Must be properly calibrated to ensure image alignment and minimal distortion.
Type: Fisheye
Description: Spherical-lens camera with a field of view greater than 180°.
Benefits:
  • Provides wide coverage of the scene, allowing it to be captured with fewer images.
Limitations:
  • Lens distortions.
  • Non-projective transformation.
  • Requires rigorous calibration.
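The depth capability of the binocular (stereo) configuration in Table 3 comes from triangulation: for a rectified pair, depth equals focal length times baseline divided by disparity (Z = f·B/d), which is also why stereo accuracy degrades with distance and depends on how precisely the matching algorithm estimates disparity. The numbers in the sketch below are illustrative and not taken from any specific product.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# All numbers below are illustrative, not tied to any particular camera.
focal_px = 1400.0       # focal length in pixels
baseline_m = 0.12       # distance between the two cameras in metres
disparity_px = 8.0      # measured disparity for a matched pixel

depth_m = focal_px * baseline_m / disparity_px
print(f"depth = {depth_m:.2f} m")              # 21.00 m

# For a fixed disparity error, depth error grows quadratically with depth:
# dZ ~= Z**2 / (f * B) * dd
disparity_err_px = 0.25
depth_err_m = depth_m**2 / (focal_px * baseline_m) * disparity_err_px
print(f"depth error ~= {depth_err_m:.2f} m")   # about 0.66 m at 21 m
```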
Table 4. Specifications of different MMSs.
System | Release Year | Indoor | Outdoor | Camera | LiDAR/Max. Range | IMU | GPS | Accuracy *
Vehicle-mounted systems:
Leica Pegasus: Two Ultimate | 2018 | 🗶 | 🗸 | 360° FoV | ZF9012 profiler, 360° × 41.33°/100 m | 🗸 | 🗸 | 2 cm horizontal accuracy; 1.5 cm vertical accuracy
Teledyne Optech Lynx HS600-D | 2017 | 🗶 | 🗸 | 360° FoV | 2 Optech sensors/130 m | 🗸 | 🗸 | ±5 cm absolute accuracy
Topcon IP-S3 HD1 | 2015 | 🗶 | 🗸 | 360° FoV | Velodyne HDL-32E LiDAR/100 m | 🗸 | 🗸 | 0.1 cm road surface accuracy (1 sigma)
Hi-Target HiScan-C | 2017 | 🗶 | 🗸 | 360° FoV | 650 m | 🗸 | 🗸 | 5 cm at 40 m range
Trimble MX7 | — | 🗶 | 🗸 | 360° FoV | 🗶 | 🗸 | 🗸 | —
Trimble MX50 | 2021 | 🗶 | 🗸 | 90% of a full sphere | 2 MX50 laser scanners/80 m | 🗸 | 🗸 | 0.2 cm (laser scanner)
Trimble MX9 | 2018 | 🗶 | 🗸 | 1 spherical + 2 side-looking + 1 backward/downward camera | MX9 laser scanner/up to 420 m | 🗸 | 🗸 | 0.5 cm (laser scanner)
Viametris vMS3D | 2016 | 🗶 | 🗸 | FLIR Ladybug5+ | Velodyne VLP-16 + Velodyne HDL-32E | 🗸 | 🗸 | 2–3 cm relative accuracy
Applications (vehicle-mounted):
  • Urban 3D modeling.
  • Road asset management.
  • Change detection analysis.
  • Creating HD maps.
  • Generating geolocated panoramic images.
Handheld systems:
HERON LITE Color | 2018 | 🗸 | 🗸 | 360° × 360° FoV | 1 Velodyne Puck/100 m | 🗸 | 🗶 | 3 cm relative accuracy
GeoSLAM Zeb Go | 2020 | 🗸 | 🗶 | Can be added (accessory) | Hokuyo UTM-30LX laser scanner/30 m | 🗶 | 🗶 | 1 to 3 cm relative accuracy
GeoSLAM Zeb Revo RT | 2015 | 🗸 | 🗶 | Can be added (accessory) | Hokuyo UTM-30LX laser scanner/30 m | 🗶 | 🗶 | 0.6 cm relative accuracy
GeoSLAM Zeb Horizon | 2018 | 🗸 | 🗸 | Can be added (accessory) | Velodyne Puck VLP-16/100 m | 🗶 | 🗶 | 0.6 cm relative accuracy
Leica BLK2GO | 2018 | 🗸 | 🗸 | 3-camera system, 300° × 150° FoV | Up to 25 m, 360° × 270° | 🗶 | 🗶 | ±1 cm in an indoor environment with a scan duration of 2 min
Applications (handheld):
  • Mapping enclosed and complex spaces and cultural heritage.
  • Forest surveying.
  • Building Information Modeling.
Wearable systems:
Leica Pegasus: Backpack | 2017 | 🗸 | 🗸 | 360° × 200° FoV | Dual Velodyne VLP-16/100 m | 🗸 | 🗸 | 2 to 3 cm relative accuracy; 5 cm absolute accuracy
HERON MS Twin | 2020 | 🗸 | 🗸 | 360° × 360° FoV | Dual Velodyne Puck/100 m | 🗸 | 🗶 | 3 cm relative accuracy
NavVis VLX | 2021 | 🗸 | 🗸 | 360° FoV | Dual Velodyne Puck LITE/100 m | 🗸 | 🗶 | 0.6 cm absolute accuracy at 68% confidence; 1.5 cm absolute accuracy at 95% confidence
Viametris BMS3D-HD | 2019 | 🗸 | 🗸 | FLIR Ladybug5+ | 16-beam LiDAR + 32-beam LiDAR | 🗸 | 🗸 | 2 cm relative accuracy
Trolley systems:
NavVis M6 | 2018 | 🗸 | 🗶 | 360° FoV | 6 Velodyne Puck LITE/100 m | 🗸 | 🗶 | 0.57 cm absolute accuracy at 68% confidence; 1.38 cm absolute accuracy at 95% confidence
Leica ProScan | 2017 | 🗸 | 🗸 | 🗶 | Leica ScanStation P40, P30, or P16 | 🗸 | 🗸 | 0.12 cm (range accuracy for Leica ScanStation P40)
Trimble Indoor | 2015 | 🗸 | 🗶 | 360° FoV | Trimble TX-5; FARO Focus X-130, X-330, S-70-A, S-150-A, S-350-A | 🗸 | 🗶 | 1 cm relative accuracy when combined with FARO Focus X-130
FARO Focus Swift | 2020 | 🗸 | 🗶 | HDR camera | FARO Focus Laser Scanner with a FARO ScanPlan 2D mapper | 🗸 | 🗶 | 0.2 cm relative accuracy at 10 m range; 0.1 cm absolute accuracy
Applications (trolley):
  • Indoor mapping for government buildings, airports, and train stations.
  • Tunnel inspection.
  • Measuring asphalt roughness.
  • Building Information Modeling.
* The accuracy measurement reported by the manufacturers. The measure of the accuracy is unknown if not stated as relative or absolute. The “—” symbol indicates that the specifications were not mentioned in the product datasheet.
Table 5. An overview of the selected mobile mapping applications.
Application: Road asset management and condition assessment
Highlights: Extraction of road assets [79]; road condition assessment [133]; detection of pavement distress using deep learning [134]; evaluation of pavement surface distress for maintenance planning [135].
  • Vehicle-mounted systems regularly operating on the road.
  • More efficient than manual inspection.
  • Leverages deep learning to facilitate the inspection process.
Application: BIM
Highlights: Low-cost MMS for BIM of archeological reconstruction [136]; analysis of BIM for transportation infrastructure [137].
  • Data are collected with portable systems.
  • Useful for maintenance and renovation planning.
  • Rich database for better information management.
Application: Emergency and disaster response
Highlights: Network-based GIS for disaster response [138]; analyzing post-disaster damage [139].
  • Timely and accurate disaster response.
  • Facilitates the decision-making process.
  • Effective training and simulations.
Application: Vegetation mapping and detection
Highlights: Mapping and monitoring riverine vegetation [140]; tree detection and measurement [141,142,143].
  • Accurate and automatic measurements.
  • Reduces occlusions in 3D urban models.
Application: Digital heritage conservation
Highlights: Mapping a complex heritage site using a handheld MMS [92]; mapping a museum in a complex building [94]; numerical simulations for structural analysis of historical constructions [144]; digital heritage documentation [145]; mapping archaeological sites [146]; development of a digital heritage inventory system [147].
  • Utilizes the flexibility of portable platforms.
  • Enables virtual tourism.
  • Digital recording of cultural sites.