Article

Automated Accuracy Assessment of a Mobile Mapping System with Lightweight Laser Scanning and MEMS Sensors

1 Department of Geomatics Engineering, The University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
2 NovAtel, Hexagon Calgary Campus, 10921 14th Street NE, Calgary, AB T3K 2L5, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(3), 1007; https://doi.org/10.3390/app11031007
Submission received: 18 December 2020 / Revised: 14 January 2021 / Accepted: 19 January 2021 / Published: 23 January 2021
(This article belongs to the Special Issue Performance Evaluation of Lightweight LiDAR for UAV Applications)

Abstract

The accuracy assessment of mobile mapping system (MMS) outputs is usually reliant on manual labor to inspect the quality of a vast amount of collected geospatial data. This paper presents an automated framework for the accuracy assessment and quality inspection of point cloud data collected by MMSs operating with lightweight laser scanners and consumer-grade microelectromechanical systems (MEMS) sensors. A new, large-scale test facility has been established in a challenging navigation environment (downtown area) to support the analyses conducted in this research work. MMS point cloud data are divided into short time slices for comparison with the higher-accuracy, terrestrial laser scanner (TLS) point cloud of the test facility. MMS data quality is quantified by the results of registering the point cloud of each slice with the TLS datasets. Experiments on multiple land vehicle MMS point cloud datasets using a lightweight laser scanner and three different MEMS devices are presented to demonstrate the effectiveness of the proposed method. The mean accuracy of a consumer-grade MEMS (<$100) was found to be 1.13 ± 0.47 m. The mean accuracy of two commercial MEMS (>$100) was in the range of 0.48 ± 0.23 m to 0.85 ± 0.52 m. The method presented herein can be straightforwardly implemented and adopted for the accuracy assessment of other MMS types such as unmanned aerial vehicles (UAVs).

1. Introduction

A mobile mapping system (MMS) can be defined as an integrated sensing system for the collection of geospatial data from a moving platform. At its core, an MMS comprises an integrated GNSS/IMU system for determination of the platform state and one or more imaging sensors. It may include additional navigation sensors such as a wheel revolution counter. The trajectory estimates from the navigation system allow direct georeferencing of acquired image data. The two main types of imaging sensors found on board MMSs are passive cameras and laser scanners. Since the focus of this work is the latter, an MMS is henceforth understood to mean a laser scanning based system. Early MMSs were developed for land-vehicle platforms but have since evolved into backpack, hand cart, water vehicle and unmanned aerial vehicle (UAV) embodiments in support of a vast range of applications. The work described herein focuses on land-vehicle MMS accuracy testing, though the methods are expected to be broadly applicable to other platforms such as UAVs with suitable adaptation.
According to [1], MMSs may be classified according to the accuracy they can achieve: either mapping or survey grade. Achieving the higher, survey-grade accuracy relies on accurate trajectory determination, which is particularly challenging in GNSS-denied and multipath environments such as urban canyons. Kersting and Friess [1] report a post-mission quality assurance procedure for survey-grade MMSs. They perform a fully automated self-calibration to estimate corrections to system parameters (boresight and lever arm) and corrections to the trajectory. Planar features are used for the self-calibration. The plane-based system calibration method has been used for airborne [2], UAV [3] and mobile [4,5] laser scanner systems, and the modelling has been extended to include catenary curves [6] and poles [7]. The trajectory model reported by [1] includes second-order corrections as a function of time. The trajectory is segmented according to position accuracy to allow piecewise modelling with continuity and smoothness constraints.
Methods to assess and improve the accuracy of MMSs have been investigated by many authors. Signalized control points [8] and signalized tie points [9] observed in multiple MMS passes have been reported to correct navigation system errors while check points [10] can be employed to independently quantify the errors. The use of total stations and GNSS equipment results in accurately determined point coordinates [11]. However, point-based assessment suffers from the great disparity in data density between the MMS and the independently surveyed points.
In recognition of this shortcoming, a number of researchers have developed outdoor test fields for MMS accuracy assessment. Generally speaking, they comprise a dense reference point cloud of a large urban area through which the MMS can be driven. The reference point cloud is collected by static terrestrial laser scanning (TLS) that is georeferenced with static or RTK GNSS methods. The accuracy assessment of MMS data can be performed by comparing the two point clouds, though the features and methods used for comparison vary.
Kaartinen et al. [12] detail the testing of several MMSs over a 1.7 km loop. They quantify system accuracy by comparing the coordinates of independently surveyed point features. The reference targets are poles, building corners and curb corners extracted from the point cloud. Xu et al. [13] report a mobile laser scanner test field of similar size (1000 m × 550 m area). Their assessment is conducted with 45 control points. Fryskowska and Wróblewski [14] also use TLS data as a reference to assess the accuracy of an MMS. Their assessment is performed using a large number of check points and the lengths of building features. In their manual for laser-based mobile mapping, Johnson et al. [15] report two test sites (urban and rural) established with control points for performance testing of MMSs. Both sites are about 3 km long. Accuracy is assessed with signalized control points and TLS reference data. The latter source is used at a specific site to estimate bridge clearance. Toschi et al. [16] use TLS reference data collected of two buildings in a city center to evaluate MMS accuracy. Registration by the iterative closest point (ICP) method [17] is performed on segmented building data and various statistical measures, including higher-order moments, are computed for the assessment. Rodríguez-Gonzálvez et al. [18] constructed a 2.5 km long TLS point cloud of a medieval wall for the accuracy assessment of their MMS. However, the assessment was performed over only a small section of the wall.
The accuracy assessment of indoor MMSs (IMMS) is described by [19]. Three IMMSs were tested using a pre-surveyed cultural heritage site as a reference test field. TLS data of the test field serves as ground truth. IMMS data were collected on two pre-defined paths: one closed and one open. Subsets of the data were registered to reference data by the ICP method. A number of different data comparisons are reported to quantify the accuracy of each system.
The assessment of MMS accuracy is not, however, confined to the built environment. Rönnholm et al. [20] evaluate the quality of a backpack laser MMS in a forest relative to reference data collected with UAV and TLS. The evaluation is performed by comparing data patches comprising 10 s of data with ICP. The time series duration is a tradeoff between capturing enough topographic variation in the data and limiting changes in system orientation. Performing independent ICPs creates discontinuities between patches, so the transformation parameters of adjacent patches are modelled as (linearly) time dependent.
The past decade has seen tremendous growth in the utilization of unmanned aerial vehicles for airborne mapping. Though cameras are ubiquitous sensors found onboard mapping UAVs, the deployment of light-weight laser scanner systems is becoming more commonplace. Accordingly, the assessment of the accuracy of point clouds derived from such systems is of great importance. Perhaps not surprisingly, the use of an independently observed reference point cloud has been employed by a number of researchers. Several different laser scanner systems appear in the literature including the ibeo LUX LiDAR sensor [21], the SICK LMS511 PRO [22], the RIEGL miniVUX lidar [23] as well as the HDL-32E [24], the VLP-16 Puck HI-RES [3] and the VLP-16 [25] models from Velodyne.
Bakuła et al. [21] report on the assessment of UAV point cloud accuracy in support of levee monitoring using signalized points and cross sections. This group utilizes airborne laser scanner and ground survey data for a later assessment [25]. A multi-faceted assessment of UAV laser scanner data quality over a landslide area is described by [23]. Assessments of strip adjustment quality, overall digital elevation model accuracy, point density and change detection were conducted by comparing the UAV data with TLS, total station and GNSS data. The use of TLS point clouds of single buildings and their surroundings as the reference data are reported by [22,24].
Clearly, recent activities found in the literature indicate the importance of evaluating the metric performance of lightweight laser scanning systems utilized in MMSs regardless of whether they are deployed on a land vehicle or a UAV platform. This need is further emphasized with the increased use of low-cost microelectromechanical systems (MEMS) sensors in the navigation system. This work presents the design of a completely automated framework for land-vehicle MMS accuracy assessment by comparing a mobile laser scanner point cloud with a reference point cloud of a dedicated testing facility. The establishment of the testing facility comprising several downtown city blocks—a challenging GNSS environment—is also detailed. Multiple datasets with an MMS fitted with a Velodyne VLP-16 and three different MEMS sensors were captured to demonstrate the effectiveness of the proposed method. Detailed evaluation of these sensors is presented following the experiment description.

2. Methodology

Before describing the proposed methodology for lightweight laser scanner performance assessment, it is important to review the kinematic MMS positioning equation. The mapping frame coordinates (m) of a point p, $\mathbf{r}_p^m(t)$, observed by an MMS, can be expressed as follows:

$$\mathbf{r}_p^m(t) = \mathbf{r}_b^m(t) + \mathbf{R}_b^m(t)\left\{\mathbf{a}_{\mathrm{IMU}/s}^{b} + \mathbf{R}_s^b\,\mathbf{r}_p^s(t)\right\} \quad (1)$$

where $\mathbf{r}_b^m(t)$ is the position of the body frame (b) origin; $\mathbf{R}_b^m(t)$ is the body frame attitude matrix, which is parameterized in terms of the time-varying roll, pitch and yaw angles and represents the rotational transformation from the body frame to the mapping frame; $\mathbf{a}_{\mathrm{IMU}/s}^{b}$ is the lever arm offset vector between the sensor (laser scanner) frame (s) and the body frame (the IMU axes); $\mathbf{R}_s^b$ is the boresight matrix, parameterized in terms of three Cardan angles, which represents the rotation from the sensor frame to the body frame; and $\mathbf{r}_p^s(t)$ is the position vector of point p in the sensor frame. For a Velodyne laser scanner, the vector $\mathbf{r}_p^s(t)$ is a function of the observed range, rotation angle and vertical angle of each of its 16 lasers, as well as the scanner's interior calibration parameters. Additional details about the laser scanner are given later.
All quantities are considered to be time-varying except for the elements of the boresight matrix and the lever arm vector, which are pre-calibrated and treated as constants. The mapping frame in this project was the 3TM projection with central meridian 114° west. The point coordinates given by Equation (1) are prone to many error sources. If the mounting parameters (boresight and lever arm) have been accurately calibrated, then the errors can be attributed to the navigation and laser scanner systems. MMS position and orientation can be severely biased in GNSS-denied and high-multipath environments like urban canyons. They can rapidly vary with time if a low-cost IMU is used.
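As an illustrative sketch (not the authors' implementation), Equation (1) can be evaluated point by point with NumPy. The roll-pitch-yaw rotation order used to build the attitude matrix is an assumption, as are the function names:

```python
import numpy as np

def rot_xyz(roll, pitch, yaw):
    """Body-to-mapping-frame attitude matrix from roll, pitch, yaw (radians).
    Assumed convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference(r_b_m, R_b_m, lever_arm_b, R_s_b, r_p_s):
    """Equation (1): map a sensor-frame point into the mapping frame.

    r_b_m      -- body frame origin position in the mapping frame
    R_b_m      -- body frame attitude matrix
    lever_arm_b -- pre-calibrated lever arm offset in the body frame
    R_s_b      -- pre-calibrated boresight matrix
    r_p_s      -- observed point in the laser scanner frame
    """
    return r_b_m + R_b_m @ (lever_arm_b + R_s_b @ r_p_s)
```

With zero attitude, identity boresight and zero lever arm, the mapped point reduces to the body position plus the sensor-frame observation, which is a quick sanity check on the sign conventions.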
The accuracy assessment methodology is based on spatial comparison of an MMS (kinematic) point cloud and a GNSS/IMU-independent (static) point cloud. The kinematic point cloud possesses variable spatial accuracy at different temporal points due to the nature of the MMS data collection. For instance, areas with sufficient GNSS coverage feature high spatial quality, while areas with a lack of GNSS coverage may feature lower spatial quality, depending on the aiding provided by the IMU for the navigation solution. On the other hand, the static point cloud possesses almost constant accuracy since it is based on a precise multi-station terrestrial laser scanning survey that does not rely on kinematic GNSS/IMU position and orientation estimates. A spatial comparison between the kinematic point cloud and the static one will indicate any discrepancies between the two datasets.
One well-established way to derive spatial discrepancies between point clouds is 3D registration [26]. The well-known iterative closest point method [17] can handle registration between point clouds from different sources and with variable point density. The ICP registration yields six Helmert transformation parameters (i.e., three translations and three rotation angles), which are used as the accuracy measure for the spatial quality of the kinematic point cloud. The scale parameter is not solved for in the ICP since laser scanners preserve a unit scale factor. If the kinematic point cloud has high spatial quality, it will match the static point cloud and the values of the six transformation parameters will be close to zero following the ICP registration. Any discrepancies between the two point clouds will be indicated by large, non-zero values for the transformation parameters.
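A minimal sketch of the alignment step at the heart of the ICP: given a set of tentative point correspondences, the least-squares rotation and translation follow in closed form from an SVD. The function name and structure are illustrative assumptions, not taken from the paper's software:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares estimate of rotation R and translation t
    such that dst ~ R @ src + t, for matched (N, 3) point arrays.
    This is the alignment step performed inside each ICP iteration."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

For a well-registered slice, R is close to the identity and the translation is close to zero; in this work the resulting transformation parameters serve as the per-slice accuracy measure.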
For a rigorous accuracy assessment, the kinematic point cloud is divided into temporal slices prior to the registration. Each temporal slice contains points from a short time interval of data collection. The individual slices are sequentially registered to the static point cloud using the ICP (Figure 1). The functional relationship between a point in the cloud captured by the MMS in slice k, $\mathbf{r}_{MMS}$, and the closest point in the reference TLS point cloud, $\mathbf{r}_{TLS}$, is thus given by:

$$\mathbf{r}_{TLS} = \mathbf{R}_k\,\mathbf{r}_{MMS} + \mathbf{t}_k \quad (2)$$

where $\mathbf{t}_k$ is the translation vector and $\mathbf{R}_k$ is the rotation matrix between the two point clouds. Note that the mapping frame superscript has been omitted for clarity. The objective of the ICP is to minimize the sum of squares of differences between the two point clouds, $\varepsilon^2$, i.e.,

$$\varepsilon^2 = \left\|\mathbf{R}_k\,\mathbf{r}_{MMS} + \mathbf{t}_k - \mathbf{r}_{TLS}\right\|^2 \quad (3)$$
It is assumed that the initial position and orientation of the kinematic point cloud provided by direct georeferencing are sufficiently accurate to ensure convergence of the ICP. The result of applying this procedure to all slices is a time series of transformation parameter sets that provides an accuracy measure of the kinematic data at different temporal points and spatial locations.
Note that navigation quality changes rapidly in urban areas and, accordingly, the spatial quality of the kinematic point cloud may vary at different temporal points. Therefore, it is hypothesized that a shorter temporal slice will result in a more realistic accuracy assessment. Point cloud data collected within a short time interval from a moving mobile mapping system will have homogeneous spatial accuracy. Nevertheless, one should bear in mind that a very short time interval may not contain a sufficient number of points or enough objects/features to perform the ICP registration. On the other hand, a very long time interval will produce a large point cloud with inhomogeneous spatial accuracy and may lead to a false accuracy assessment. Spatial errors in a few locations may not be detected through the registration of large point clouds, as the errors may be distributed throughout the larger dataset and will not contribute significantly to the estimation of the transformation parameters. Since the transformation parameters are used to quantify data quality, a long time interval must be avoided. A few factors should be taken into account when determining the time interval length, for instance: (1) the mobile mapping system speed, (2) the laser scanner range, (3) the scanning frequency, (4) the quality of the GNSS/IMU system, and (5) the structure of the scanning area.
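The temporal slicing described above can be sketched as follows, assuming each laser point carries a GNSS time tag. The function and array layout are illustrative, not the authors' implementation:

```python
import numpy as np

def slice_by_time(points, times, interval=5.0):
    """Split a time-stamped point cloud into consecutive temporal slices.

    points   -- (N, 3) array of mapping-frame coordinates
    times    -- (N,) array of per-point acquisition times (seconds)
    interval -- slice duration in seconds (e.g., 5 s or 10 s as tested here)

    Returns a list of (M_k, 3) arrays, one per temporal slice.
    """
    t0 = times.min()
    idx = ((times - t0) // interval).astype(int)   # slice index of each point
    return [points[idx == k] for k in range(idx.max() + 1)]
```

Each returned slice would then be registered to the static TLS cloud by the ICP, and empty or feature-poor slices discarded.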

3. Experiment Description

3.1. Test Facility Establishment

Static and kinematic point cloud data captured over a newly established testing facility were used in this experiment. Pictured in Figure 2, the testing facility is located in the downtown area (central business district, CBD) of Calgary, Canada. It comprises two loops with a combined length of approximately 2.4 km. Importantly, it passes through a variety of urban environments, including both open sky conditions and narrow urban canyons defined by high-rise buildings, between 9 Ave SW, 2 St SW, 3 Ave SW and Barclay Parade SW.
The test facility was scanned by a professional surveying company using a RIEGL VZ400 terrestrial laser scanner (RIEGL Laser Measurement Systems GmbH, Horn, Austria). The static cloud comprised 184 scans that were registered into a common external coordinate system by means of a network of 38 control points surveyed by total station. The network was georeferenced by real-time kinematic GNSS observations of six of the control points located in open sky conditions. The georeferenced scanner data were validated at 173 independently determined points and cross sections in terms of the absolute value of coordinate differences, Δ. As can be seen from the statistical summary in Table 1, the point cloud quality is very high and more than sufficient for the assessment of MEMS-based, lightweight laser scanner MMS accuracy.

3.2. Experiment Details

The goal of this experiment was to evaluate the accuracy of the kinematic point-cloud collected from different sensors using the proposed ICP-based accuracy assessment method. Six kinematic point cloud datasets of the test facility acquired with different navigation sensors were evaluated against the static point cloud. The full area static and kinematic point clouds are shown in Figure 3.
The MMS comprises a Velodyne VLP-16 (Velodyne Lidar, San Jose, CA, USA) deployed in tandem with a NovAtel SPAN system (NovAtel Inc., Calgary, AB, Canada), a tightly coupled GNSS/IMU system that includes a multi-frequency, multi-constellation receiver and an IMU. The Velodyne VLP-16 sensor comprises a vertical array of 16 lasers with an angular spacing such that the field-of-view is approximately 30°. Continuous rotation of the laser array about the sensor's vertical axis allows collection of a panoramic point cloud. The system was fitted with three different IMUs for testing: an InvenSense IAM20680 (InvenSense Inc., San Jose, CA, USA); an Epson G320N (Seiko Epson Corporation, Nagano, Japan); and a Honeywell HG4930 AN01 (SPAN-CPT7; Honeywell International Inc., Charlotte, NC, USA). The InvenSense falls into the consumer MEMS (<$100) category while the other IMUs are commercial MEMS (>$100).
The datasets were captured with three different test setups on 21 March, 29 October, and 31 October 2019. Figure 4 shows an example of the instrumentation setup. For the first data collection, the InvenSense was mounted on the same base plate as the VLP-16 and the Epson was on a different plate but along the same beam. The second and third data collections included both the Honeywell and Epson on the same base plate as the VLP-16. Weather conditions were similar for all three data collections; no rain or snow was present. For each dataset, a reference trajectory was determined using a Honeywell LASEREF V Micro Inertial Reference System (μIRS; Honeywell International Inc., Charlotte, NC, USA). This navigation-grade IMU provides high-accuracy trajectory data to allow quantification of MMS georeferencing accuracy independent of the Velodyne.
The proposed methodology was applied to the six kinematic point clouds one at a time. Different time intervals were tested to quantify the impact of temporal slice length on the accuracy assessment results. Each cloud was sliced based on 5 s and 10 s time interval lengths. An example of kinematic point cloud slicing with a 5 s time interval is shown in Figure 5. Areas with no overlap with the static cloud were excluded from testing. In some locations, the ICP did not converge. These slices were excluded from the analysis as they did not have strong point geometry, due to excessive clutter or insufficient geometric features to allow registration of the point clouds. For this project, the registration results were inspected visually to ensure proper alignment between the kinematic and the static data. The length of the translation vector, $\mathbf{t}_k$, determined for each slice from the registration was used to create color maps at the slice center locations.
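The color-map preparation can be sketched as follows: the error magnitude of each slice is the length of its ICP translation vector, plotted at the slice centroid. The function below is an illustrative assumption, not the authors' code; the returned arrays could be fed to, e.g., Matplotlib's scatter plotting for maps like those in Figures 6 and 7:

```python
import numpy as np

def error_map(slices, translations):
    """Per-slice error magnitude ||t_k|| and planar slice centroid.

    slices       -- list of (M_k, 3) point arrays, one per temporal slice
    translations -- list/array of the (3,) ICP translation vectors t_k

    Returns (centroids, magnitudes): (K, 2) slice centroids in x-y and
    (K,) translation vector lengths used as the per-slice error measure.
    """
    mags = np.linalg.norm(np.asarray(translations, dtype=float), axis=1)
    centroids = np.array([s[:, :2].mean(axis=0) for s in slices])
    return centroids, mags
```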

4. Experiment Results and Discussion

The assessment commences with analysis of the trajectory accuracy as determined with respect to the μIRS reference. Statistics of the 3D position differences for each dataset were computed over short time intervals. Typical values from each dataset are presented in Table 2. Sub-meter accuracy (in terms of mean value) was achieved with the Honeywell IMU, while the Epson provided roughly meter-level accuracy. The mean value from the InvenSense was above 2 m.
The accuracy assessment over the six kinematic point clouds resulted in a wide range of values for t k depending on the MMS’s location within the testing facility. The results are statistically summarized in Table 3 and Table 4 and depicted graphically in Figure 6 and Figure 7. For the six datasets, high spatial accuracy values were found in areas with good navigation conditions (open sky, no sharp turns). These outcomes were found at the northern extent of the test field in the vicinity of Barclay Parade. The land uses in this region comprise a mixture of low-rise commercial and residential buildings (two to three stories). On the contrary, but not surprisingly, lower spatial accuracy values mainly occur in the southern part of the test field where the land use is predominantly high-rise buildings that occlude the GNSS signals and degrade the navigation solution.
As can be seen in Table 3, the six point clouds provided a wide range of mean accuracy values depending mainly on the sensor type and the scanning experiment. The Epson and CPT sensors showed error magnitudes in the range of 0.48 m to 0.85 m. The InvenSense MEMS sensor produced a mean error of 1.13 m.
It is evident that the slice size did not significantly impact the accuracy assessment mean values. A comparison of the results in Table 3 and Table 4 reveals that the 5 and 10 s slice lengths provide very similar mean error values. The differences are on the order of a few centimeters, which are not significant in relation to the overall magnitude of the mean errors themselves. The differences in mean error values can be attributed to differences in the point cloud shapes for different slice sizes, which slightly impacts the quality of registration for each slice.
For better visualization of the accuracy assessment results, Figure 6 shows color maps of the error values for the six point clouds with the 5 s slice length. Both the color and the size of the data markers indicate the error magnitude for each slice. The points are plotted at the centroid of each individual slice (c.f. Figure 5b). In general, the navigation accuracy of a mobile mapping system changes slowly at different points in time. Except for the InvenSense sensor, which is a lower grade of MEMS IMU, a smooth transition of error values from one slice to the next is generally evident, as can be seen in Figure 6. Similar observations are valid for the experiments with a slice size of 10 s (Figure 7). However, it was observed during testing that the smaller slice size produced better alignment following the registration. Accordingly, the accuracy assessment results from the 5 s slices are taken as the more representative of the accuracy of the three sensors.
Multiple datasets were captured on separate dates with both the Epson and Honeywell sensors. Inspection of the results in common areas of the test field (Figure 7) reveals overall consistency of the results for both. Some local differences in accuracy do exist, in the vicinity of x = 100 m and y = 0 m, for example. These can be attributed to different satellite visibility conditions, as can other localized accuracy differences that can be seen between different sensors. Comparison of Table 2 with the results in Table 3 and Table 4 reveals that the MMS errors as determined by the proposed method are largely due to the low-cost IMUs rather than the Velodyne data. In fact, the new accuracy assessment methodology provides slightly more optimistic results than what is indicated by the trajectory difference statistics.
Further demonstration of the performance of the proposed method is given in Figure 8. This figure shows an example of point cloud cross-sections before and after registration. As can be seen, the ICP method provided precise alignment between the kinematic and static point clouds following the registration, even for point clouds that were significantly (~1 m) separated in three dimensions. This provides high confidence in the results of the proposed accuracy assessment methodology.

5. Conclusions

The evaluation of the metric performance of lightweight laser scanning systems utilized in MMSs, as well as on other platforms like UAVs, is of great importance given the growth of lightweight scanner and low-cost MEMS sensor usage. A methodology to perform this evaluation has been proposed. At its core is a large-scale and rigorously surveyed test field situated in an urban environment. MMS point cloud data are decomposed into slices of short temporal duration and compared with the test field reference point cloud via ICP registration to quantify overall accuracy. The effectiveness of the method has been demonstrated on several Velodyne VLP-16 MMS datasets having different low-cost IMUs. The accuracy assessment is straightforward yet rigorous and gives a clear indication—both graphically and statistically—of overall point cloud quality along the MMS trajectory. Though not demonstrated here, it is expected that this method can be adapted for the assessment of lightweight laser scanners deployed on other platforms such as UAVs. Future work will focus on the investigation of real-time accuracy assessment of MMSs. Opportunity also exists to extend the testing to other types of navigation sensors and laser scanners.

Author Contributions

Conceptualization, K.A.-D., D.D.L., E.K. and R.D.; methodology, K.A.-D., D.D.L. and E.K.; software, K.A.-D.; validation, K.A.-D.; formal analysis, K.A.-D. and D.D.L.; investigation, K.A.-D. and D.D.L.; resources, R.D.; data curation, K.A.-D.; writing—original draft preparation, K.A.-D. and D.D.L.; writing—review and editing, K.A.-D., D.D.L., E.K. and R.D.; visualization, K.A.-D.; supervision, D.D.L. and R.D.; project administration, D.D.L.; funding acquisition, D.D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NovAtel.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to proprietary reasons.

Acknowledgments

The Point Cloud Library (PCL), CloudCompare (ccViewer), and Matplotlib were used in support of the software implementation and visualization in this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kersting, A.P.; Friess, P. Post-Mission Quality Assurance Procedure for Survey-Grade Mobile Mapping Systems. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 647–652. [Google Scholar] [CrossRef]
  2. Skaloud, J.; Lichti, D.D. Rigorous Approach to Bore-Sight Self-Calibration in Airborne Laser Scanning. ISPRS J. Photogramm. Remote Sens. 2006, 61, 47–59. [Google Scholar] [CrossRef]
  3. De Oliveira Junior, E.M.; dos Santos, D.R. Rigorous Calibration of UAV-Based LiDAR Systems with Refinement of the Boresight Angles Using a Point-to-Plane Approach. Sensors 2019, 19, 5224. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Glennie, C. Calibration and Kinematic Analysis of the Velodyne HDL-64E S2 Lidar Sensor. Photogramm. Eng. Remote Sens. 2012, 78, 339–347. [Google Scholar] [CrossRef]
  5. Hillemann, M.; Weinmann, M.; Mueller, M.S.; Jutzi, B. Automatic Extrinsic Self-Calibration of Mobile Mapping Systems Based on Geometric 3D Features. Remote Sens. 2019, 11, 1955. [Google Scholar] [CrossRef] [Green Version]
  6. Chan, T.O.; Lichti, D.D.; Glennie, C.L. Multi-Feature Based Boresight Self-Calibration of a Terrestrial Mobile Mapping System. ISPRS J. Photogramm. Remote Sens. 2013, 82, 112–124. [Google Scholar] [CrossRef]
7. Chan, T.O.; Lichti, D.D.; Belton, D.; Nguyen, H.L. Automatic Point Cloud Registration Using a Single Octagonal Lamp Pole. Photogramm. Eng. Remote Sens. 2016, 82, 257–269.
8. Schaer, P.; Vallet, J. Trajectory Adjustment of Mobile Laser Scan Data in GPS Denied Environments. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 40, 61–64.
9. Nolan, J.; Eckels, R.; Evers, M.; Singh, R.; Olsen, M.J. Multi-Pass Approach for Mobile Terrestrial Laser Scanning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 105–112.
10. Sairam, N.; Nagarajan, S.; Ornitz, S. Development of Mobile Mapping System for 3D Road Asset Inventory. Sensors 2016, 16, 367.
11. Hofmann, S.; Brenner, C. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 601–608.
12. Kaartinen, H.; Hyyppä, J.; Kukko, A.; Jaakkola, A.; Hyyppä, H. Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field. Sensors 2012, 12, 12814–12835.
13. Xu, S.; Cheng, P.; Zhang, Y.; Ding, P. Error Analysis and Accuracy Assessment of Mobile Laser Scanning System. Open Autom. Control Syst. J. 2015, 7, 485–495.
14. Fryskowska, A.; Wróblewski, P. Mobile Laser Scanning Accuracy Assessment for the Purpose of Base-Map Updating. Geod. Cartogr. 2018, 67, 35–55.
15. Johnson, S.; Bethel, J.; Supunyachotsakul, C.; Peterson, S. Laser Mobile Mapping Standards and Applications in Transportation; Purdue University: West Lafayette, IN, USA, 2016.
16. Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 245–253.
17. Besl, P.; McKay, N. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
18. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile LiDAR System: New Possibilities for the Documentation and Dissemination of Large Cultural Heritage Sites. Remote Sens. 2017, 9, 189.
19. Tucci, G.; Visintini, D.; Bonora, V.; Parisi, E. Examination of Indoor Mobile Mapping Systems in a Diversified Internal/External Test Field. Appl. Sci. 2018, 8, 401.
20. Rönnholm, P.; Liang, X.; Kukko, A.; Jaakkola, A.; Hyyppä, J. Quality Analysis and Correction of Mobile Backpack Laser Scanning Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 41–47.
21. Bakuła, K.; Salach, A.; Wziątek, D.Z.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Evaluation of the Accuracy of Lidar Data Acquired Using a UAS for Levee Monitoring: Preliminary Results. Int. J. Remote Sens. 2017, 38, 18.
22. Conte, G. Evaluation of a Light-Weight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications. Photogramm. Fernerkund. Geoinf. 2014, 4, 287–298.
23. Babbel, B.J.; Olsen, M.J.; Che, E.; Leshchinsky, B.A.; Simpson, C.; Dafni, J. Evaluation of Uncrewed Aircraft Systems’ Lidar Data Quality. ISPRS Int. J. Geo-Inf. 2019, 8, 532.
24. Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A. Performance Evaluation of SUAS Equipped with Velodyne HDL-32E Lidar Sensor. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 171–177.
25. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy Assessment of Point Clouds from LiDAR and Dense Image Matching Acquired Using the UAV Platform for DTM Creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342.
26. Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change Detection on Points Cloud Data Acquired with a Ground Laser Scanner. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, 30–345.
Figure 1. ICP-based accuracy assessment (point cloud color represents height).
Figure 2. Top view of the study area (image from Google Maps). The dotted red line indicates the extents of the static laser scanner data.
Figure 3. Static and kinematic point clouds of the entire study area: (a) Static point cloud (the test facility); (b) Sample kinematic point cloud—loop A; (c) Sample kinematic point cloud—loop B.
Figure 4. Mobile mapping system setup for the experimental testing.
Figure 5. Example of kinematic point cloud slicing (5 s interval): (a) Top view of the kinematic point cloud with colors indicating the point clouds for each temporal slice; (b) Time slice trajectory centroids used for subsequent plotting.
Figure 6. Color maps of accuracy assessment for 5 s slice length (marker size and color indicate error magnitude): (a) InvenSense; (b) Epson_1; (c) Honeywell_1; (d) Epson_2; (e) Honeywell_2; (f) Epson_3. Note that the coordinates have been reduced to a local system for display clarity.
Figure 7. Color maps of accuracy assessment for 10 s slice length (marker size and color indicate error magnitude): (a) InvenSense; (b) Epson_1; (c) Honeywell_1; (d) Epson_2; (e) Honeywell_2; (f) Epson_3. Note that the coordinates have been reduced to a local system for display clarity.
Figure 8. Example of point clouds before and after registration (Epson): (a) Top-view of the point clouds before ICP registration (static points in red and kinematic points in green); the translation vector length between the clouds is 1 m; (b) Top-view of point clouds after ICP registration (static points in red and transformed kinematic points in yellow); the RMSE from the registration was 0.02 m.
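The Figure 8 caption summarizes each slice registration by two numbers: the length of the translation recovered by ICP and the post-fit RMSE. As a much-simplified, hypothetical illustration (translation-only alignment of pre-matched synthetic 2-D points, not the full 6-DOF ICP with nearest-neighbour matching used in the paper), these two quantities can be computed as follows:

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def align_by_translation(static_pts, kinematic_pts):
    """Recover the translation between two matched point sets and the
    RMSE after applying it -- a stand-in for one slice's ICP outcome."""
    cs, ck = centroid(static_pts), centroid(kinematic_pts)
    t = (cs[0] - ck[0], cs[1] - ck[1])            # kinematic -> static shift
    moved = [(x + t[0], y + t[1]) for x, y in kinematic_pts]
    rmse = math.sqrt(sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                         for a, b in zip(static_pts, moved)) / len(moved))
    return t, math.hypot(*t), rmse

static = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
kinematic = [(x + 1.0, y) for x, y in static]      # shifted 1 m, as in Fig. 8a
t, length, rmse = align_by_translation(static, kinematic)
print(length, rmse)  # translation length approximately 1.0 m, RMSE approximately 0
```

A real slice would report a non-zero RMSE (0.02 m in Figure 8b) because the kinematic points do not coincide with the static points even after alignment.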
Table 1. Accuracy and precision analysis of the static, reference point cloud. Percentages indicate the proportion of coordinate differences lying within each indicated range.

|            | Δ ≤ 0.015 m | 0.015 m ≤ Δ ≤ 0.025 m | Δ ≥ 0.025 m |
|------------|-------------|-----------------------|-------------|
| Accuracy   |             |                       |             |
| Horizontal | 87.50%      | 10.60%                | 1.90%       |
| Vertical   | 100.00%     | 0.00%                 | 0.00%       |
| Precision  |             |                       |             |
| Horizontal | 78.30%      | 17.40%                | 4.30%       |
| Vertical   | 98.10%      | 1.90%                 | 0.00%       |
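The binning behind Table 1 is straightforward to reproduce. The sketch below uses the table's 0.015 m and 0.025 m thresholds but invented sample differences; note the published band limits overlap at the boundaries, so the bins here are made disjoint by assigning boundary values to the lower band:

```python
# Classify coordinate differences (metres) into the three tolerance
# bands of Table 1 and return the percentage falling in each band.
def band_percentages(diffs, low=0.015, high=0.025):
    n = len(diffs)
    within = sum(1 for d in diffs if abs(d) <= low)
    mid = sum(1 for d in diffs if low < abs(d) <= high)
    beyond = n - within - mid
    return tuple(round(100.0 * c / n, 2) for c in (within, mid, beyond))

# Illustrative differences, not the study's measurements.
sample = [0.004, 0.010, 0.012, 0.018, 0.022, 0.030, 0.008, 0.014]
print(band_percentages(sample))  # (62.5, 25.0, 12.5)
```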
Table 2. Three-dimensional trajectory difference statistics computed in a local level frame with respect to the Honeywell LASEREF V Micro Inertial Reference System (μIRS) reference trajectory.

| MEMS Sensor | # of Epochs | Mean Error (m) | Min Error (m) | Max Error (m) | Standard Deviation (m) |
|-------------|-------------|----------------|---------------|---------------|------------------------|
| InvenSense  | 7260        | 2.67           | 0.57          | 5.58          | 1.35                   |
| Epson_1     | 7260        | 1.15           | 0.01          | 3.54          | 0.88                   |
| Epson_2     | 8250        | 1.09           | 0.00          | 2.86          | 0.77                   |
| Epson_3     | 7550        | 1.42           | 0.01          | 4.38          | 0.96                   |
| Honeywell_1 | 8250        | 0.39           | 0.00          | 1.22          | 0.31                   |
| Honeywell_2 | 7550        | 0.85           | 0.01          | 4.14          | 0.62                   |
Table 3. Accuracy statistics—5 s slice length.

| MEMS Sensor | Slice Count | Mean Error (m) | Min Error (m) | Max Error (m) | Standard Deviation (m) |
|-------------|-------------|----------------|---------------|---------------|------------------------|
| InvenSense  | 93          | 1.13           | 0.08          | 1.89          | 0.47                   |
| Epson_1     | 108         | 0.83           | 0.17          | 1.80          | 0.37                   |
| Epson_2     | 107         | 0.85           | 0.05          | 1.84          | 0.52                   |
| Epson_3     | 109         | 0.75           | 0.09          | 1.70          | 0.45                   |
| Honeywell_1 | 98          | 0.48           | 0.08          | 1.14          | 0.23                   |
| Honeywell_2 | 106         | 0.85           | 0.07          | 1.78          | 0.42                   |
Table 4. Accuracy statistics—10 s slice length.

| MEMS Sensor | Slice Count | Mean Error (m) | Min Error (m) | Max Error (m) | Standard Deviation (m) |
|-------------|-------------|----------------|---------------|---------------|------------------------|
| InvenSense  | 55          | 1.12           | 0.12          | 1.90          | 0.48                   |
| Epson_1     | 57          | 0.83           | 0.17          | 1.62          | 0.37                   |
| Epson_2     | 62          | 0.87           | 0.05          | 1.79          | 0.53                   |
| Epson_3     | 62          | 0.76           | 0.13          | 1.49          | 0.43                   |
| Honeywell_1 | 57          | 0.54           | 0.10          | 1.13          | 0.31                   |
| Honeywell_2 | 59          | 0.84           | 0.07          | 1.71          | 0.42                   |
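The per-sensor summaries in Tables 3 and 4 follow directly from the list of per-slice registration errors. A minimal sketch with invented error values (the paper does not state whether a sample or population standard deviation was used; the sample form is assumed here):

```python
import statistics

def slice_error_stats(errors):
    """Summarize per-slice registration errors (metres) in the form
    reported in Tables 3 and 4."""
    return {
        "count": len(errors),
        "mean": round(statistics.mean(errors), 2),
        "min": round(min(errors), 2),
        "max": round(max(errors), 2),
        "std": round(statistics.stdev(errors), 2),  # sample standard deviation
    }

# Invented slice errors for illustration, not measured values.
errors = [0.42, 0.55, 0.61, 0.38, 0.50]
print(slice_error_stats(errors))
# {'count': 5, 'mean': 0.49, 'min': 0.38, 'max': 0.61, 'std': 0.09}
```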
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Al-Durgham, K.; Lichti, D.D.; Kwak, E.; Dixon, R. Automated Accuracy Assessment of a Mobile Mapping System with Lightweight Laser Scanning and MEMS Sensors. Appl. Sci. 2021, 11, 1007. https://doi.org/10.3390/app11031007

