Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms that require direct georeferencing. This is achieved through integration with GNSS measurements, in order to obtain a continuous positioning solution together with orientation angles. This paper presents the results of an accuracy assessment of a system that integrates GNSS and a MEMS IMU on a terrestrial platform. We describe the methodology used and the tests carried out, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique based on the cameras that are part of the mobile mapping system developed by the authors. The results for the accuracy of the attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in the angles, can be achieved even considering that the terrestrial platform operates in less than favorable environments.


Introduction
A Direct Georeferencing System (DGS) can be defined as a set of sensors onboard a platform, whose goal is to obtain positions (as three coordinates) and attitudes (as three angles) of the origin of a reference system, defined in the platform, without external control points. One main application of a DGS is as a major component of a Mobile Mapping System (MMS), which contains, besides the DGS, a set of remote sensors to acquire information from the surrounding objects. The linkage of absolute positions and orientation parameters to the remote sensors allows the determination of positional and geometrical information of the objects observed [1]. Detailed descriptions of this technology can be found for example in [2,3].
In terms of applications, terrestrial MMS can be used to acquire data of urban or road infrastructures and can be optimized for specific applications such as traffic sign inventory and railway or road inspection as described in [4][5][6]. More specific applications of terrestrial MMS are the automatic DTM (Digital Terrain Model) generation in sandy beaches [7], river shorelines change detection [8] or pavement surface analysis [9].
Nowadays, two lines of development can be identified in MMS. The first is associated with high to moderate cost navigation or tactical grade Inertial Measurement Units (IMUs) and laser/camera sensors, which, when integrated with satellite positioning receivers (Global Navigation Satellite Systems, GNSS), can deliver high accuracy surveys and modeling. The level of accuracy achieved by GNSS/IMU systems used in DGS can vary significantly, depending on the grade of the IMUs and on the GNSS data used. For a tactical grade IMU, integrated with geodetic grade GNSS receivers (multi-frequency code and phase receivers), accuracies can be of the order of one decimeter for positions, and better than 0.03° for attitude [10]. Navigation grade IMUs can provide accuracies one order of magnitude better. Some of these systems are commercially available and are mainly directed to road, urban or railway environment surveys. The second line relies on low cost Micro Electromechanical System (MEMS) IMUs integrated with geodetic grade GNSS receivers, which can provide results at the several decimeter level for positions and 0.2° for attitude, as shown for example in [11,12]. A very robust way of developing a DGS is by integrating GNSS and IMU measurements; an insight into this approach can be found in [13,14] or [15]. Manufacturers of GNSS receivers and IMUs provide data sheets with information on the expected accuracy that these individual systems can achieve, which usually relates to optimal operating conditions. However, in terrestrial platforms the working environment is more aggressive and consequently a decrease in accuracy can be expected. In a MMS, the accuracy performance of the DGS is critical to the final positional accuracy of the objects acquired by the data sensors.
The subject of accuracy evaluation or calibration of MEMS units using known orientations and positions was presented by Artese et al. [16]. El-Sheimy presented a standard technique to obtain the relative orientation between pairs of imaging sensors and a DGS by using external control points in a bundle adjustment, with the known parameters as constraints [17].
The main goal of the work presented here was to assess the linear and angular accuracy of a DGS mounted on a terrestrial platform previously developed by the authors and described in [7], considering good conditions for GNSS observation. The tested DGS is composed of a dual frequency GNSS receiver and a MEMS IMU. The methodology used for the accuracy assessment is based on the comparison of the parameters given by the DGS with those obtained independently by photogrammetric techniques, using control points.
In Section 2, we briefly describe the DGS used and address the problem of converting data between the different reference systems (sensors and platform), including the conventions used. In Section 3 we describe the surveying test carried out, as well as the methodology implemented to assess the accuracy of the DGS. Finally, in Section 4, a brief discussion of the results obtained is presented.

Equipment and Methodology
The implemented DGS relies on the integration of measurements from a GNSS phase receiver with measurements from an IMU through a Kalman filter, a classic configuration that can provide very high accuracy due to the complementarity of the two systems. A description of the GNSS and IMU systems is outside the scope of this work, since there is ample accessible literature on these subjects (for GNSS see for example [18]; for IMUs, [19]).

Equipment Used
The accuracy of a GNSS/IMU system depends on the equipment used and on the built-in processing and integration software. The DGS equipment used in our case was a NovAtel dual frequency GNSS receiver (NovAtel Inc., Calgary, Canada) and an AHRS440 MEMS IMU from Crossbow (Crossbow, San Jose, CA, USA). Tables 1 and 2 present their accuracies, taken from the manufacturers' specifications. The remote sensors used were two digital CCD cameras whose characteristics are presented in Table 3. The cameras were calibrated independently with a methodology developed by the authors [20]. The resulting five parameters (principal point coordinates, focal distance and two radial distortion coefficients) allow for a mean linear re-projection error of 0.35 pixels. The calibrated focal distances, always fixed at the maximum value, were approximately 3.6 mm for both cameras. Images of the vehicle and the equipment mounted on it can be viewed in Figure 1.
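The five-parameter camera model described above can be sketched as follows. This is a minimal illustration, not the authors' calibration code: the `project` function, the pixel pitch and all parameter values are assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of a 5-parameter camera model: principal point
# (cx, cy), focal distance f (in pixels) and two radial distortion
# coefficients (k1, k2). All values below are illustrative.
def project(point_cam, f, cx, cy, k1, k2):
    """Project a 3D point (camera frame, metres) to pixel coordinates,
    applying a two-term radial distortion model."""
    x, y, z = point_cam
    xn, yn = x / z, y / z                 # normalized image coordinates
    r2 = xn**2 + yn**2
    d = 1.0 + k1 * r2 + k2 * r2**2        # radial distortion factor
    return (f * d * xn + cx, f * d * yn + cy)

# Example: a point 5 m in front of the camera, slightly off-axis;
# f = 3.6 mm focal distance over an assumed 4.65 um pixel pitch.
u, v = project((0.5, 0.2, 5.0), f=3.6e-3 / 4.65e-6, cx=640, cy=480,
               k1=-0.1, k2=0.01)
```

The re-projection error quoted in the text is the mean pixel distance between such projected positions and the measured image coordinates of the calibration points.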

GNSS/INS Integration by Kalman Filter
The Least Squares method, which is purely based on measurements, is the most common way to estimate a state vector in geomatics [21]. For navigation applications it is more efficient to use a sequential approach, and here the Kalman Filter, which is based on knowledge of both the measurements and the state vector dynamics, plays an important role in the integration of GNSS and IMU observations [22][23][24][25].
In the Kalman Filter mechanization equations, the position, velocity and attitude computed in the navigation frame, combined with other information, such as the sensor errors and the local gravity disturbance, are taken as the states of the filter. Here we have used a 34-state vector for the Extended Kalman Filter (EKF). The dynamic error equation is:

$$\dot{x} = F_{34\times34}\,x + G_{34\times7}\,w_{7\times1}$$

where $F$ is the system dynamics matrix, $G$ is the disturbance input matrix and $w$ is the system white noise vector. A typical Kalman Filter prediction step in our application can be written as:

$$\hat{x}_k^- = \Phi_{k-1}\,\hat{x}_{k-1}^+,\qquad P_k^- = \Phi_{k-1}P_{k-1}^+\Phi_{k-1}^T + Q_{k-1}$$

where the superscript "−" means a predicted estimate, the subscript $k$ is the current index of the discrete samplings, $P_{34\times34}$ is the covariance matrix of the state vector and $\Phi_{k-1}$ is the state transition matrix, describing how the system changes over time, which can be approximated by:

$$\Phi_{k-1} \approx I + F\,\Delta t$$

The noise matrix $Q_{k-1}$ can be approximated as:

$$Q_{k-1} \approx \tfrac{1}{2}\left(\Phi_{k-1}\,G\,Q\,G^T\,\Phi_{k-1}^T + G\,Q\,G^T\right)\Delta t$$

where $Q$ is the so-called spectral density matrix, which is a function of the white noise $w$ [26]:

$$E\left[w(t)\,w(\tau)^T\right] = Q\,\delta(t-\tau)$$

In this equation $t$ is the current observation time, $\tau$ is the time of another observation and $\delta$ is the Dirac delta function, with dimension $s^{-1}$. In our application $Q$ has the form:

$$Q = \mathrm{diag}\left(\sigma_a^2,\ \sigma_g^2,\ \sigma_{\delta g}^2\right)$$

where $\sigma_a$ and $\sigma_g$ are the standard deviations of the accelerometers and gyros and $\sigma_{\delta g}$ is the standard deviation of the gravity disturbance. In high precision applications $Q$ must be specified carefully, particularly for a low cost IMU: if $Q$ is set too small, the Kalman Filter will trust the dynamic equation more than the measurements. The system observation equation can be written as:

$$z_k = H_k\,x_k + v_k$$

where $z_k$ is the $6\times1$ observation vector, $H_k$ is the measurement matrix and $v_k$ is a zero mean Gaussian noise. When GNSS updates are available, the update step uses the following equations:

$$K_k = P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1}$$

$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(z_k - H_k\,\hat{x}_k^-\right)$$

$$P_k^+ = \left(I - K_k H_k\right)P_k^-$$

where the superscript "+" means a corrected estimate, $K_k$ is the so-called Kalman gain and $R_k$ is the measurement noise covariance matrix.
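The prediction/update cycle above can be sketched in a few lines. This toy example uses a 2-state constant-velocity model instead of the paper's 34-state vector, and all matrix values are illustrative; only the structure of the equations follows the text.

```python
import numpy as np

# Minimal Kalman filter sketch: predict with a first-order transition
# matrix, then update with a single position measurement.
def predict(x, P, F, Q, dt):
    Phi = np.eye(len(x)) + F * dt          # first-order transition matrix
    x_pred = Phi @ x                       # predicted state
    P_pred = Phi @ P @ Phi.T + Q           # predicted covariance
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_upd = x_pred + K @ (z - H @ x_pred)  # corrected state
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd

# Toy example: constant-velocity dynamics updated with a position fix
F = np.array([[0.0, 1.0], [0.0, 0.0]])     # d(pos)/dt = vel
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = predict(x, P, F, Q=0.01 * np.eye(2), dt=1.0)
x, P = update(x, P, z=np.array([1.2]), H=np.array([[1.0, 0.0]]),
              R=np.array([[0.1]]))
```

Making `Q` larger relative to `R` shifts trust toward the measurements, which illustrates the tuning remark above.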
Coordinate Transformation between Reference Systems

Figure 2 shows a plan of the platform used to carry out the surveys. Complementary information about the plan is that the cameras are leveled over the platform, as can be viewed in Figure 1b, and their centers are two centimeters above the center of the DGS. The distances between components were measured rigorously and the values are shown. There are four reference systems (r.s.) present: the Cam left, the Cam right, the DGS (whose origin is considered at the center of the platform) and the cartographic r.s. in which the platform moves. In order to convert coordinates of points between reference frames, three linear and three angular components are needed. Considering any two of these r.s., say 1 and 2, and assuming that scale is maintained, the following equation can be established for a point in space (P) [27]:

$$[X]_1 = [X_0]_1 + M_{21}^T\,[X]_2 \qquad (15)$$

where $[X]_1$ contains the coordinates of P in the 1st r.s., $[X]_2$ the coordinates of P in the 2nd r.s., $[X_0]_1$ the coordinates of the origin of the 2nd r.s. in the 1st r.s. and $M_{21}$ is the matrix describing the rotation from the 1st to the 2nd r.s. Considering the observation of the rotation angles and the positions of the cameras and the DGS in the cartographic r.s. (denoted by the subscript m), the following equations can be established using Equation (15):

$$[X]_m = [X_0^{CamL}]_m + M_{CamL}^T\,[X]_{CamL} \qquad (16)$$

$$[X]_m = [X_0^{CamR}]_m + M_{CamR}^T\,[X]_{CamR} \qquad (17)$$

$$[X]_{CamL} = [X_0^{CamR}]_{CamL} + M_{CamR,CamL}^T\,[X]_{CamR} \qquad (18)$$

$$[X]_m = [X_0^{DGS}]_m + M_{DGS}^T\,[X]_{DGS} \qquad (19)$$

$$[X]_{DGS} = [X_0^{CamL}]_{DGS} + M_{CamL,DGS}^T\,[X]_{CamL} \qquad (20)$$

where the M matrices contain the rotation angles in the indicated directions and the $X_0$ vectors contain the cartographic coordinates (X, Y, H) of the origins of the r.s. Using Equations (16)-(18), the relative orientation parameters of the right camera relative to the left camera can be deduced:

$$M_{CamR,CamL} = M_{CamR}\,M_{CamL}^T \qquad (21)$$

$$[X_0^{CamR}]_{CamL} = M_{CamL}\left([X_0^{CamR}]_m - [X_0^{CamL}]_m\right) \qquad (22)$$
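The frame conversion of Equation (15) and the relative-orientation deduction of Equations (21)-(22) can be sketched numerically. This is an illustrative example, not the paper's computation: `M_L` and `M_R` are assumed to rotate from the cartographic frame to each camera frame, and the rotation angles and camera positions are invented (only the 1.044 m separation echoes Figure 2).

```python
import numpy as np

def to_frame1(X0_1, M21, X_2):
    """Equation (15): coordinates in frame 1 of a point given in frame 2."""
    return X0_1 + M21.T @ X_2

def rot_z(a):
    """Frame rotation about the vertical axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

M_L, M_R = rot_z(0.10), rot_z(0.12)        # cartographic -> camera rotations
X0_L = np.array([0.0, 0.0, 0.0])           # camera centres in cartographic r.s.
X0_R = np.array([1.044, 0.0, 0.0])

M_RL = M_R @ M_L.T              # Eq. (21): relative rotation, right wrt left
T_RL = M_L @ (X0_R - X0_L)      # Eq. (22): right camera centre in left r.s.

# Consistency check: the right camera origin (zero in its own frame),
# converted with Eq. (15) into the left camera r.s., is exactly T_RL.
P_L = to_frame1(T_RL, M_RL, np.zeros(3))
```

Note that the rotations cancel out of the vector length: `np.linalg.norm(T_RL)` equals the 1.044 m separation regardless of the camera orientations, which is what makes the baseline length a useful accuracy check later in the paper.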
Using Equations (16), (19) and (20), the offset parameters relating the DGS to the left camera are:

$$M_{CamL,DGS} = M_{CamL}\,M_{DGS}^T \qquad (23)$$

$$[X_0^{CamL}]_{DGS} = M_{DGS}\left([X_0^{CamL}]_m - [X_0^{DGS}]_m\right) \qquad (24)$$
These results will be used as needed in the rest of the document.

Rotation Angles (ω, φ, κ versus Roll, Pitch, Heading)
Traditionally, in photogrammetry, the orientation angles were associated with aerial photography and used to calculate Earth-referenced coordinates from the photographic coordinates. They are the well-known rotations ω, φ, κ of the photographic camera axes [27]. With the development of navigation technology applied to mobile surveying platforms, orientation angles different from those usual in photogrammetry were adopted, namely the roll, pitch and heading angles (see Table 4).

Table 4. Rotations and rotation matrix conventions used for a terrestrial camera and a moving body (measuring element, rotation order and rotation matrices).

In the case of the terrestrial image, the authors used an axis convention different from the usual one in photogrammetry, namely by considering the photo coordinates as (x, z) and the y axis pointing forward from the photo, thus achieving a more intuitive relation between the κ and heading angles. Despite the axes considered in each case being different and rotated in a different order, the resulting rotation matrices are the same [27]. The conventions used in this work are presented in Table 4. In each case the rotation angles can be obtained from the rotation matrices as follows:

ω, φ, κ case:

$$\omega = \arctan\left(-\frac{r_{32}}{r_{33}}\right),\quad \varphi = \arcsin\left(r_{31}\right),\quad \kappa = \arctan\left(-\frac{r_{21}}{r_{11}}\right) \qquad (25)$$

roll, pitch, heading case:

$$roll = \arctan\left(-\frac{r_{32}}{r_{33}}\right),\quad pitch = \arcsin\left(r_{31}\right),\quad heading = \arctan\left(-\frac{r_{21}}{r_{11}}\right) \qquad (26)$$

where rij represents the (i,j) matrix element.
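Angle extraction from a rotation matrix can be sketched as below. This assumes the rotation sequence M = R3(κ)·R2(φ)·R1(ω) with the elementary matrices defined as in the example; the actual convention is the one in Table 4, so the element indices here are an assumption consistent with Formulas (25)-(26) (with r11 = M[0, 0]).

```python
import numpy as np

# Elementary frame rotations about the x, y and z axes.
def r1(w):
    c, s = np.cos(w), np.sin(w)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def r2(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def r3(k):
    c, s = np.cos(k), np.sin(k)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def angles_from_matrix(M):
    """Recover (omega, phi, kappa) from M = R3(kappa) R2(phi) R1(omega)."""
    phi = np.arcsin(M[2, 0])                  # r31 = sin(phi)
    omega = np.arctan2(-M[2, 1], M[2, 2])     # -r32/r33 = tan(omega)
    kappa = np.arctan2(-M[1, 0], M[0, 0])     # -r21/r11 = tan(kappa)
    return omega, phi, kappa

# Round-trip check with arbitrary angles
M = r3(1.2) @ r2(0.05) @ r1(0.1)
omega, phi, kappa = angles_from_matrix(M)
```

Using `arctan2` instead of `arctan` keeps the recovered angles in the correct quadrant, which matters for headings near ±180°.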

Evaluation of the DGS Accuracy
In order to evaluate the quality of the obtained navigation solution, the authors compared the exterior orientation parameters obtained independently from the cameras on the platform with the parameters derived from the DGS itself. The offsets between the DGS and the cameras must be taken into account; however, only the linear offsets are known by means of rigorous measurements over the platform (Figure 2). In fact, the offset determination using the independent observations from the cameras and the DGS was used in this work to estimate the accuracy of the latter. The observations, the strategy adopted and the results obtained are described below.

Observations
The data acquisition was conducted in an urban environment with the terrestrial video-based MMS described in [7]. The test zone was selected on pavement with painted crosswalks, because these provide well-defined points with enough variability both on the ground and in the photos, and because such observations can easily be repeated at other locations in order to apply this methodology in future work.
The control points were observed with centimetric accuracy by means of a static relative GNSS survey (referring to Table 1, the acquisition mode was RT-2 (Real Time Kinematic), with the reference station at a distance of 6 km). Figure 3 shows a general view of the test area (a) and a scheme with the observed control points (b). Several passes were made through the test zone, from different directions, either following a straight line or a curve, collecting images at a rate of four per second (the cameras collect images at the same instant since they share the same trigger). Of these, nine pairs in which a sufficient number of control points could be observed were selected, in order to compare the position and orientation parameters with those given by the DGS. This means that the tests were carried out using eighteen images in which the control points could be observed. Of interest is the fact that the 2nd and 3rd pairs were selected because the vehicle was moving in a curve, while all the others were along a straight trajectory.

Position and Orientation Parameters by Space Resection and Respective Accuracy Estimation
The position and orientation parameters of the cameras in the cartographic space were obtained by space resection from the control points. This is a photogrammetric process by which the six exterior orientation parameters of an image are computed using at least three control points identified on the photo [27]. The image coordinates of the control points were identified manually and independently in each photo. The results of this phase are summarized in Table 5, which contains the images used to perform the exterior orientation, the control points used, the exterior orientation parameters obtained and, in the right column, the calculated relative orientation parameters. The relative orientation matrices and the linear relative parameters were obtained with Formulas (21) and (22), and the relative orientation angles were extracted from the rotation matrices as in Formula (25).
The relative orientation between the cameras is constant for all pairs because they are fixed on the platform and the images of each pair were obtained at the same instant. Moreover, the exterior orientation of the images, as referred previously, was obtained independently for each photo. Therefore the calculated relative orientation parameters, regarded as observations, are expected to follow a normal law, so their standard deviations are a good estimate of the accuracy of this methodology for obtaining the exterior orientation of an image. Table 6 presents the accuracy estimates for the orientation angles, the 3D positions and the length of vector T. The separation between the two camera centers is known to be 1.044 m (as shown in Figure 2), so it is possible to present the root mean square error and the mean error of the length of vector T, which is also done in Table 6. This is a good estimate of the accuracy achieved by space resection for the positional parameters.
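The statistics reported in Table 6 for the length of vector T can be sketched as follows. The nine sample lengths below are illustrative values invented for the example (only the 1.044 m reference separation comes from the text); the point is the distinction between the mean error, which reveals bias, and the RMSE, which also absorbs dispersion.

```python
import numpy as np

# Known camera-to-camera separation (Figure 2)
known = 1.044

# Nine illustrative observed baseline lengths, one per image pair
lengths = np.array([1.050, 1.039, 1.047, 1.041, 1.046, 1.040,
                    1.048, 1.043, 1.045])

errors = lengths - known
mean_error = errors.mean()                 # bias of the length estimate
rmse = np.sqrt((errors**2).mean())         # overall accuracy of the length
```

A mean error near zero with a larger RMSE, as here, indicates that the space resection lengths scatter around the true separation without a systematic offset.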

Position and Orientation Parameters Given by the GNSS/IMU System and Respective Accuracy Estimation
After processing the IMU and GNSS observations, as described in Section 2.2, a navigation solution was obtained for the surveyed path. Of interest are the instants that correspond to the analyzed photogrammetric pairs; the results obtained are presented in Table 7. The estimation of the accuracy of the IMU, either in coordinates or in orientations, cannot rely directly on the differences between the IMU and camera observations, as these depend on the direction of travel of the vehicle. The strategy followed was to calculate the linear and angular offsets of the left camera in the IMU r.s. for each observation; in the absence of observation errors, these should be constant.
The offset matrices were calculated using Formula (23), where the left camera rotation matrix is calculated with the rotation angles in Table 5 and the DGS rotation matrix with the rotation angles in Table 7. The offset angles were extracted using Formula (26). The linear offsets were calculated using Formula (24), where the difference is taken between the cartographic coordinates of the left camera and those of the DGS, given respectively in Tables 5 and 7. The results are presented in Table 8.
In this case, as the DGS r.s. is leveled over the platform and its orientation matches the orientation of the platform, the linear offsets are known rigorously by means of precise measurements (see Figure 2). In this way it is possible to assess the mean error and the root mean square error of the three linear offset components, which are also given in Table 8. The linear accuracy of this GNSS/IMU integration can thus be assessed by means of the root mean square error of the vector T, and the angular accuracy by means of the standard deviation of the obtained offset angles.
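The per-epoch offset computation described above can be sketched as follows, assuming the angular offset matrix has the form of Formula (23). The vehicle headings and the true camera-to-DGS offset below are invented for the example; the point is that with error-free observations the extracted offset angle is identical at every epoch, so its spread over real epochs estimates the DGS angular accuracy.

```python
import numpy as np

def rot_z(a):
    """Frame rotation about the vertical axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

true_offset = 0.02                 # rad, assumed camera-to-DGS heading offset
offsets = []
for heading in (0.3, 1.1, 2.4, -0.7):        # simulated vehicle headings
    M_dgs = rot_z(heading)                   # DGS rotation matrix
    M_cam = rot_z(heading + true_offset)     # left camera rotation matrix
    M_off = M_cam @ M_dgs.T                  # angular offset matrix (Eq. 23)
    offsets.append(np.arctan2(M_off[0, 1], M_off[0, 0]))

std = float(np.std(offsets))       # zero for noise-free data
```

With real observations the extracted offsets scatter around a constant value, and their standard deviation is the angular accuracy figure reported in Table 8.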

Discussion
The evaluation of the DGS accuracy relied on the comparison between its parameters, computed using the measurements acquired in good GNSS observation conditions, and the ones obtained from the exterior orientation of the left camera at the same instants, by space resection, using rigorous control points on the pavement. It was necessary first to estimate the accuracy of this exterior orientation process, which is summarized in Table 6, showing an accuracy better than 1.5 cm in 3D positions and better than 0.1 degrees in orientation angles. This result agrees with what was expected given the geometric characteristics of the camera/lens systems and previous experiments made during the calibration tests [18]. The obtained mean linear re-projection error of 0.35 pixels represents 0.6 cm to 1.5 cm at, respectively, 4 to 10 meters distance, which is the range of distances at which the control points were observed.
The accuracy estimation of the DGS itself relied on the fact that the offset parameters between the DGS and the cameras must be constant. Using this condition, as shown by the results summarized in Table 8, the obtained accuracy was better than 6 cm in 3D positions, around 0.15 degrees in roll and pitch and 0.75 degrees in heading.
A major issue relates to the fact that the accuracy of the DGS is assessed through an independent process of obtaining position and orientation. It can be noted, however, that the accuracy estimated for the space resection process is better than the accuracy estimated for the DGS: about 4 times better in 3D positions, 2.5 times in attitude (ω and φ angles) and 20 times in heading.
As a final remark, and regarding Table 8, if the GNSS/IMU observations made in curves are discarded (the 2nd and 3rd pairs in the table), the heading accuracy achieved is higher, remaining under 0.2°. As expected, the heading accuracy given by the GNSS/IMU system loses quality when the vehicle is changing direction.

Conclusions
In this work the accuracy of the navigation solution provided by a DGS implemented with a particular GNSS/IMU integration was evaluated for a terrestrial platform moving in an urban environment. The sensors used were a high quality dual frequency GNSS receiver and a medium quality MEMS IMU, integrated by means of an Extended Kalman Filter. The results show that, for specific environments, this type of sensor combination can deliver good results and replace the more expensive tactical grade IMUs.
The methodology used was to compare the DGS-derived parameters with those obtained directly from the images, applying space resection at the same instants. The accuracy obtained for the DGS, presented in Table 8, was 6 cm in the position parameters, 0.2° in attitude and 0.8° in heading.