Cooperative GNSS-RTK Ambiguity Resolution with GNSS, INS, and LiDAR Data for Connected Vehicles

Abstract: Intelligent vehicles and connected vehicles have attracted increasing attention in recent years. Both require accurate vehicle positions during operation, which rely on navigation sensors such as the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), and Light Detection And Ranging (LiDAR). GNSS is the key sensor for obtaining high-accuracy positions in the navigation system, because GNSS Real Time Kinematic (RTK) with correct ambiguity resolution (AR) can provide centimeter-level absolute positions. However, AR may fail in urban occlusion environments because of the limited satellite visibility for a single vehicle. Navigation data from multiple connected vehicles can significantly improve the satellite geometry and thus the AR, especially in occlusion environments. In this work, the GNSS, INS, and LiDAR data from multiple connected vehicles are jointly processed to improve GNSS RTK AR and to obtain high-accuracy positioning results, using (1) a scan-to-map matching algorithm based on an occupancy likelihood map (OLM) for the relative position between the connected vehicles, (2) a Damped Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method for the relative AR between the connected vehicles, and (3) a joint RTK algorithm that solves the absolute positions of the vehicles by introducing the relative position and relative ambiguity constraints. The experimental results show that, compared with traditional GNSS RTK methods, the proposed approach improves the AR for the connected vehicles with higher ratio values, success rates, and fixed rates, and achieves high-precision cooperative absolute positions, especially in occlusion environments such as below a viaduct.


Introduction
With the advance of computer science and 5G technology, intelligent transportation systems, autonomous vehicles, and connected vehicles are developing rapidly. An essential function of these systems is accurate positioning: their effectiveness heavily depends on high-accuracy navigation results, and the vehicle positions must be highly reliable to prevent failures during operation. Integrated navigation technology is commonly used for these systems, as it can fully exploit the complementary characteristics of different navigation subsystems and greatly improve the accuracy and reliability of the overall navigation solution. The integration of the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) is a traditional integrated navigation system, which can be further aided by Light Detection And Ranging (LiDAR).
Dedicated Short Range Communication (DSRC) devices are a required addition to connected vehicles and can measure inter-vehicle distance without incurring extra expense. Moreover, DSRC devices are able to identify vehicles, so it is easy to associate their received information and range measurements [28]. With these benefits, many researchers have explored DSRC-based cooperative positioning using Received Signal Strength (RSS) ranging, time-based ranging, and a range-rate method based on Carrier Frequency Offset (CFO). Alam et al. used CFO-based range-rate measurements, derived from the Doppler shift of the DSRC carrier, to improve GPS estimates [31]. Liu et al. further integrated CFO-based range-rate measurements of DSRC and GNSS pseudorange observations with a cubature Kalman filter for positioning improvements [27]. In addition to absolute positioning, Alam et al. also used DSRC distance measurements and GNSS pseudorange observations in a Kalman filter to improve relative positioning between two vehicles [32].
Wang proposed a tight integration of DGPS, Ultra-Wide Band (UWB) ranging radio observations and simulated bearing observations in a small network of vehicles or infrastructure points to improve relative positioning accuracy [33].
However, few researchers have investigated the performance of RTK AR by fusing sensor data from connected vehicles [34]. GNSS RTK is the most cost-effective way to obtain a high-precision position if the observation conditions are good. For connected vehicles, even if one vehicle's GNSS signal is partially blocked and its AR fails, GNSS observations from multiple vehicles can be complementary. Fusing these GNSS observations can greatly improve the geometry of the GNSS observation equations, the fixed rate of the ambiguities, and the positioning accuracy of all the connected vehicles. For example, in Figure 1 the red and green vehicles are moving below a viaduct: the red one can only receive satellite signals from the left half of the sky and the green one only from the right half, so the two vehicles share complementary GNSS observations.
In this work, we focus on the cooperative positioning with GNSS, INS and LiDAR data from multiple vehicles to improve the GNSS-RTK AR and achieve precise positions for all connected vehicles. The main contributions of our work include the following: (1) proposing a high-precision relative positioning method between vehicles based on a window Occupancy Likelihood Map (OLM) and a scan-to-map matching method by LiDAR; (2) proposing a method for fixing the relative ambiguity between vehicles based on their relative positions; (3) proposing a joint RTK ambiguity fixing method for connected vehicles to improve their absolute positioning based on (1) and (2).
In the following sections, we elaborate on our methods and conduct field experiments to evaluate our system.

Methods
In this research, GNSS RTK is treated as the main method for high-precision positioning in our navigation system, and correct RTK AR is an important requirement. Compared with traditional RTK AR, which uses GNSS observations from one rover receiver, we propose a joint RTK method that enhances AR performance by fusing the observations from connected vehicles. First, the relative positions between the vehicles, calculated by the LiDAR OLM method together with attitude information from the INS, build position constraints between the vehicles. Taking advantage of this precise baseline, the double differential (DD) carrier phase measurements from the common satellites of the connected vehicles are used to solve their relative DD ambiguities by a Damped LAMBDA method, which provides the ambiguity constraints. Finally, a joint RTK is applied for the final position estimation of the connected vehicles with the relative position and ambiguity constraints. Figure 2 illustrates the main procedures of our system. More details about the INS mechanization, relative positioning by LiDAR based on a window OLM, relative ambiguity resolution, and joint RTK are introduced in the following subsections. Table 1 outlines the main processing steps of our methods in pseudocode, where M, F, K, and H denote the procedures of generating the OLM, calculating the relative position, calculating the relative DD ambiguities, and calculating the absolute positions, respectively.

Coordinate Systems
(1) Navigation Coordinate System (n-frame) The n-frame is a local geodetic frame with the platform mass center as the coordinate origin. The X axis points East, the Y axis points North, and the Z axis completes a right-handed coordinate system; it is also called an east-north-up (ENU) system.
(2) Earth-Centered Earth-Fixed Coordinate System (e-frame) The e-frame rotates with the Earth. Its coordinate origin is the Earth's centroid, the X axis points to the intersection of the equator and the prime meridian. The Z axis is the Earth's rotation axis and points to the North Pole. Then, the Y axis is perpendicular to the X-Z plane, forming a right-handed coordinate system.
(3) World Coordinate System (w-frame) The w-frame is used to express the GNSS positioning results. The initial GNSS position is taken as the origin. The X-Y plane is the local horizontal plane, with the X axis pointing East and the Y axis pointing North. The Z axis is perpendicular to the X-Y plane, forming a right-handed coordinate system. The w-frame is parallel to the n-frame.
(4) IMU Coordinate System (b-frame) The b-frame is dynamic and moves with the vehicle, and the coordinate origin is the IMU barycenter. The X axis points to the right along the IMU horizontal axis, the Y axis points forward along the IMU longitudinal axis, and the Z axis is perpendicular to the X-Y plane, forming a right-handed coordinate system.
(5) LiDAR Coordinate System (l-frame) The l-frame moves with the vehicle, with the LiDAR measurement center as the coordinate origin, the X axis pointing to the right along the LiDAR horizontal axis, the Y axis pointing forward along the LiDAR longitudinal axis, and the Z axis being perpendicular to the X-Y plane, forming a right-handed coordinate system.
(6) Map Coordinate System (m-frame) The m-frame is used by the LiDAR OLM. It coincides with the l-frame at the point where the map starts.
More details about the reference systems can be seen in [35,36]. The b-frame and l-frame are precalibrated to achieve spatial synchronization. All the coordinate transformations in this paper are rigid body transformations. The rigid body transformation $R_i^j$ from the i-frame to the j-frame involves a translation $p_i^j$ and a quaternion $q_i^j$, and is defined as [35]:

$v^j = C_i^j v^i + p_i^j$

where $C_i^j$ is the direction cosine matrix of $q_i^j$ and $v^i$ is a vector in the i-frame.
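As a concrete illustration of this operation, the sketch below applies $v^j = C_i^j v^i + p_i^j$ with numpy. The function names and the scalar-first unit-quaternion convention are our assumptions, not taken from the paper.

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix C from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rigid_transform(v_i, q_ij, p_ij):
    """v^j = C_i^j v^i + p_i^j: rotate a vector from the i-frame to the
    j-frame and translate it."""
    return quat_to_dcm(q_ij) @ np.asarray(v_i) + np.asarray(p_ij)
```

For example, a 90-degree rotation about the z axis maps the x axis onto the y axis before the translation is added.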

INS Modelling
Firstly, the inertial system motion model is introduced based on an INS error model [5], where the superscript n denotes the n-frame and T represents the transpose. An error propagation model works alongside the system motion model to correct the IMU errors. A first-order Taylor series expansion is applied, and the error state vector $\delta\mathbf{x}$ and state-space model are defined as

$\dot{\delta\mathbf{x}}(t) = \mathbf{F}(t)\,\delta\mathbf{x}(t) + \mathbf{G}(t)\,\mathbf{u}(t)$

where t is the time step, F is the dynamic matrix, G is a design matrix, u is the forcing vector of white noise, and w is the driven response of the input white noise. After the error state vector is solved, the predicted position and attitude in the n-frame at time t are obtained.
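A discrete, first-order propagation step of such an error-state model can be sketched as follows. The transition matrix $\Phi \approx I + F\,\Delta t$ and the generic state dimensions are illustrative assumptions; the paper's actual mechanization and noise handling are more detailed.

```python
import numpy as np

def propagate_error_state(dx, P, F, Q, dt):
    """One discrete propagation step of an INS error-state model:
    dx is the error state, P its covariance, F the dynamic matrix and
    Q the (continuous) process-noise strength.  Uses the first-order
    transition matrix Phi = I + F*dt; a generic sketch only."""
    n = len(dx)
    Phi = np.eye(n) + F * dt
    dx_pred = Phi @ dx                 # propagate the error state
    P_pred = Phi @ P @ Phi.T + Q * dt  # propagate its covariance
    return dx_pred, P_pred
```

With a zero dynamic matrix and zero process noise, the state and covariance are unchanged, which is a quick sanity check of the step.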

Relative Positioning by INS aiding LiDAR Based on a Window Occupancy Likelihood Map
To achieve the multi-vehicle cooperative positioning solution, the relative position of the vehicles needs to be estimated. Simultaneous localization and mapping (SLAM) based on environmental perception is a widely used method for accurate three-dimensional relative position information. Existing research on SLAM mostly focuses on the relative position between epochs. Here we draw on the idea of SLAM and propose a method for relative positioning between vehicles based on scan-to-map matching with a window OLM.
Scan-to-map is a front-end matching method in SLAM. It stores historical observation information in the form of grid maps and achieves relative positioning by matching the current scan to the map. Compared with scan-to-scan methods, such as Iterative Closest Point [37], Iterative Closest Line [38] and Polar Scan Matching [39], the scan-to-map method has better robustness. Each vehicle can store a short-term window OLM, which has a small cumulative error, and then use a scan-to-map method between vehicles to obtain the relative position.
Our laser scan-to-map matching method with the OLM is based on Maximum Likelihood Estimation (MLE) and aims to find the optimal rigid-body transformation in the m-frame. More details can be seen in our previous work [40]. The MLE matches the current scan $S_t$ to a grid-based OLM $M_{t-1}$, which is generated from all the previous scans from time step 1 to t-1 and stores the likelihood value of each grid cell in the 3D space region. According to Bayes' rule, assuming that the scan points in $S_t$ are independent, the sum likelihood value of $S_t$ is calculated as

$P(S_t \mid M_{t-1}, T) = \sum_{s \in T \oplus S_t} M_{t-1}(s)$

where $T \oplus S_t$ is the set of $S_t$ transformed by the rigid-body transformation T.
To find the optimal transformation $T^*$, we apply a Gauss-Newton approach to Equation (11). Assume the rigid transformation is $T = (p_x, p_y, p_z, \phi, \theta, \psi)$, where $(p_x, p_y, p_z)$ is the translation and $(\phi, \theta, \psi)$ are the attitude angles. Given a starting estimate of T, we want to estimate $T^*$, which optimizes the error measurement

$T^* = \arg\min_T \sum_{i=1}^{n} \left[1 - M_{t-1}\big(S_i(T)\big)\right]^2$

where i denotes the ith of the n scan points and $S_i(T)$ are the coordinates of the ith scan point after transformation T, with $(s_x, s_y, s_z)$ the coordinates of the scan point in the l-frame. Solving for $T^*$ yields the Gauss-Newton equations of the minimization problem; the Jacobians depend on the LiDAR model, and more details can be seen in [41].
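The Gauss-Newton iteration can be illustrated with a deliberately simplified translation-only 2D version, in which `likelihood` and `grad` stand in for the interpolated OLM value and its spatial gradient; the full method also estimates the attitude angles.

```python
import numpy as np

def gauss_newton_translation(scan, likelihood, grad, t0, iters=20):
    """Gauss-Newton refinement of a 2D translation t minimizing
    sum_i [1 - M(s_i + t)]^2 over the scan points s_i, where M is the
    map likelihood.  Rotation is omitted for brevity."""
    t = np.asarray(t0, float)
    for _ in range(iters):
        H = np.zeros((2, 2))   # approximate Hessian: sum of J J^T
        b = np.zeros(2)        # right-hand side: sum of J * r
        for s in scan:
            p = s + t
            J = grad(p)                 # map gradient dM/dp at the point
            r = 1.0 - likelihood(p)     # residual of this scan point
            H += np.outer(J, J)
            b += J * r
        if abs(np.linalg.det(H)) < 1e-12:
            break                       # (near-)singular: stop iterating
        t = t + np.linalg.solve(H, b)   # Gauss-Newton step
    return t
```

With a synthetic likelihood built from two perpendicular "wall" features, the iteration recovers the true translation from a nearby starting estimate.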
In the next step, the distance from a scan point to the center $(c_x, c_y, c_z)$ of each grid cell around it is calculated as

$d = \sqrt{(s_x - c_x)^2 + (s_y - c_y)^2 + (s_z - c_z)^2}$

and, with $\sigma$ the standard deviation of the measurement noise of the LiDAR sensor, the corresponding likelihood value of each grid cell is updated as

$P = \max\!\left(P',\ \exp\!\left(-\frac{d^2}{2\sigma^2}\right)\right)$

where $P'$ is the grid cell's prior likelihood value in $M_{t-1}$. After all the scan points update the grid cells around them according to Equations (13) and (14), the new OLM $M_t$ is generated.
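The per-point grid update can be sketched as below for a dict-based 3D OLM; the cell size, the update radius, and the max-merge of new and prior likelihoods are our assumptions for illustration.

```python
import numpy as np

def update_olm(olm, point, cell_size=0.2, sigma=0.1, radius=1):
    """Update the grid cells around one scan point in a dict-based OLM:
    each nearby cell gets a Gaussian likelihood of the distance d from its
    center to the point, merged with the prior value P' by max (the merge
    rule is our assumption, not necessarily the paper's)."""
    point = np.asarray(point, float)
    i0 = np.floor(point / cell_size).astype(int)  # cell containing the point
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            for dk in range(-radius, radius + 1):
                idx = (int(i0[0] + di), int(i0[1] + dj), int(i0[2] + dk))
                center = (np.array(idx) + 0.5) * cell_size
                d = np.linalg.norm(center - point)
                p_new = np.exp(-d * d / (2 * sigma * sigma))
                olm[idx] = max(olm.get(idx, 0.0), p_new)
    return olm
```

A point at a cell center yields likelihood 1 for that cell, while neighbouring cells get smaller values decaying with distance.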
In our relative positioning, we generate a window OLM which stores the LiDAR information within a time window for each vehicle at each epoch; for example, the window OLM includes the historical scans of the last second, so our m-frame is dynamic, which reduces the accumulated error and computation. By matching the current scan of vehicle 2 to the window OLM of vehicle 1, the relative position $(DE, DN, DU)$ is obtained and is taken as the final relative position of vehicle 1 and vehicle 2 in the n-frame; the superscript n of $(DE, DN, DU)$ is omitted for simplicity in the following subsections. The relative position in the w-frame is equal to that in the n-frame, because their axis directions are almost the same in a small region.
The cross matching of LiDAR data from the vehicles can be used to calculate the relative position between the vehicles. In our work, the scan-to-map matching method is superior to the scan-to-scan method for this cross matching: scan-to-scan requires that the vehicles observe the same features simultaneously, which is difficult outdoors, whereas matching one vehicle's laser scan to another vehicle's window OLM makes feature matching more robust. If the OLM includes the same features as the laser scan within the time window, the scan-to-map method can work successfully. Here we use a window OLM to ensure the efficiency and accuracy of the scan-to-map method: it is possible to store all the historical scan information in the OLM, but updating a large map and matching the laser scan against it takes time, and the OLM accumulates errors over the distance travelled, so a window OLM with a short time window reduces the cumulative error and improves the accuracy of relative positioning.
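A minimal sketch of maintaining such a sliding-window map is shown below; the class and its timestamp-based eviction policy are illustrative, not the paper's implementation.

```python
from collections import deque

class WindowOLM:
    """Sliding-window OLM: keeps only the scans from the last `window_s`
    seconds and rebuilds the map from them, so that the cumulative error
    and the map size stay bounded.  `update_fn(olm, scan)` stands in for
    the per-scan OLM update procedure."""
    def __init__(self, window_s=1.0, update_fn=None):
        self.window_s = window_s
        self.update_fn = update_fn
        self.scans = deque()          # (timestamp, scan) pairs

    def add_scan(self, t, scan):
        self.scans.append((t, scan))
        # evict scans older than the time window
        while self.scans and t - self.scans[0][0] > self.window_s:
            self.scans.popleft()

    def build_map(self):
        olm = {}
        for _, scan in self.scans:
            olm = self.update_fn(olm, scan)
        return olm
```

At a 10 Hz LiDAR rate and a one-second window, the map is always built from roughly the last ten scans, matching the windowing described above.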

Relative Ambiguity Resolution with Relative Position Constraint
After the relative position between vehicle 1 and vehicle 2 is determined, we integrate this constraint with the carrier phase measurements of GNSS by the RTK method. Note that in this subsection either vehicle 1 or vehicle 2 acts as a moving reference receiver and the other as a rover receiver in our RTK method. An accurate absolute position of the reference receiver is unnecessary; it is the relative position that is of interest. The effect of the reference receiver's absolute positioning error on the baseline solution is analyzed in [33]: if the reference receiver has a five-kilometer positioning error using GPS standalone positioning, the maximum resulting relative positioning error on a 500 m baseline is about $10^{-8}$ m, which is negligible. The carrier phase measurement is derived from counting the carrier phase cycles of the incoming carrier. In a general form, the DD pseudorange and carrier phase measurement model of RTK can be written as

$\nabla\Delta p_{rs}^{ij} = \nabla\Delta\rho_{rs}^{ij} + \nabla\Delta\delta^{ij} + \nabla\Delta T_{rs}^{ij} + \nabla\Delta I_{rs}^{ij} + \varepsilon_{\nabla\Delta p}$

$\lambda\,\nabla\Delta\phi_{rs}^{ij} = \nabla\Delta\rho_{rs}^{ij} + \nabla\Delta\delta^{ij} + \nabla\Delta T_{rs}^{ij} - \nabla\Delta I_{rs}^{ij} + \lambda\,\nabla\Delta N_{rs}^{ij} + \varepsilon_{\nabla\Delta\phi}$

where the $\nabla\Delta$ notation refers to double differencing, p and $\phi$ are the pseudorange and carrier phase measurements respectively, the subscripts r and s denote the rover and reference receivers, the superscripts i and j denote the ith and jth satellites, $\lambda$ is the carrier wavelength, $\rho$ is the geometrical range between satellite and receiver, $\delta$ is the orbital error of the satellite, T is the tropospheric error, I is the ionospheric error, N is the integer carrier ambiguity in cycles, and $\varepsilon$ represents the measurement error. DD can mitigate the spatially correlated errors T and I. $\nabla\Delta\delta$, $\nabla\Delta T$ and $\nabla\Delta I$ amount to only a few millimeters over a one-kilometer baseline, so they are neglected in our short-range positioning application; $\varepsilon$ includes the residuals of multipath and observation errors, which are assumed to be white noise and are handled in the stochastic model.
Equation (19) is simplified to the carrier phase DD model with only the geometric range and ambiguity terms, which can be linearized around the initial positions; the superscript i then denotes the reference satellite in the DD, and either vehicle 1 or vehicle 2 serves as the reference station in our method. According to the initial position $(x_0, y_0, z_0)$ of the reference station in the e-frame and its latitude and longitude $(b_0, l_0)$, Equation (21) can be transformed to the w-frame, where $(de, dn, du)$ is the baseline correction in the w-frame. Assuming that the receivers of vehicle 1 and vehicle 2 have n common satellites, we can construct n-1 equations like Equation (23). Through the LiDAR scan-to-map matching described in the above subsection, the accurate relative position between the connected vehicles in the w-frame is calculated, and we introduce this information into our least-squares adjustment by a Damped LAMBDA method: a prior weight matrix $P_x$ for $(DE, DN, DU)$ is used in the normal equation of the least-squares adjustment, where X contains the baseline components, Y contains the DD ambiguities of dimension n-1, and A and B are the coefficient matrices for X and Y according to Equation (23).
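How a prior weight on the baseline enters the normal equations can be sketched as a damped least-squares solve. The block layout follows the text (X baseline, Y ambiguities), while the function itself is a generic illustration and omits the integer search of LAMBDA.

```python
import numpy as np

def float_solution_with_prior(A, B, P, l, x_prior, Px):
    """Float least-squares solution of  l = A x + B y  with a prior
    weight Px on the baseline x (e.g. from the LiDAR relative position).
    Builds the damped normal equations and solves for x and y jointly;
    a generic sketch, not the authors' exact code."""
    n_x, n_y = A.shape[1], B.shape[1]
    N = np.zeros((n_x + n_y, n_x + n_y))
    u = np.zeros(n_x + n_y)
    N[:n_x, :n_x] = A.T @ P @ A + Px       # baseline block, damped by Px
    N[:n_x, n_x:] = A.T @ P @ B
    N[n_x:, :n_x] = B.T @ P @ A
    N[n_x:, n_x:] = B.T @ P @ B
    u[:n_x] = A.T @ P @ l + Px @ x_prior   # prior pulls x toward x_prior
    u[n_x:] = B.T @ P @ l
    sol = np.linalg.solve(N, u)
    return sol[:n_x], sol[n_x:]
```

With noise-free observations and a consistent prior, the float solution reproduces the true baseline and ambiguity exactly, which serves as a sanity check.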

Joint RTK Algorithm for Connected Vehicles
In this part, both vehicle 1 and vehicle 2 are rover receivers and one RTK reference station is chosen. DD observation equations are formed for all visible satellites between vehicle 1 and the reference station, and between vehicle 2 and the reference station, to jointly solve their absolute position estimates. The relative position and relative DD ambiguities described in the above subsections are considered as constraints in our joint RTK. Figure 3 illustrates the GNSS satellites shown in Figure 2, the relative position, and the relative ambiguity between the connected vehicles. Note that the numbers and distribution of satellites in Figure 2 and Figure 3 are for illustration; in real-world practice they can be different. Supposing that satellite 4 is the reference satellite, the DD pseudorange and carrier phase observation equations in the e-frame are formed and linearized around the initial positions $(e_{1,0}, n_{1,0}, u_{1,0})$ and $(e_{2,0}, n_{2,0}, u_{2,0})$ of vehicle 1 and vehicle 2 in the w-frame, respectively. Then, we apply constrained indirect least squares to Equation (28), where $\hat{X}$ and $\hat{Y}$ denote the fixed solutions of the absolute positions and ambiguities.
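Constrained least squares of this kind can be sketched via the KKT (Lagrange multiplier) system below; how the constraint rows C, c encode the relative position and relative ambiguities is simplified here to a generic equality constraint.

```python
import numpy as np

def constrained_lsq(A, l, C, c):
    """Equality-constrained least squares:  min ||A z - l||^2  s.t.  C z = c,
    solved via the KKT system.  A generic sketch of how the relative-position
    and relative-ambiguity constraints can be imposed in a joint solve."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([A.T @ l, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                          # discard the multipliers
```

For instance, minimizing the norm of two unknowns subject to their difference being 1 yields the symmetric solution (-0.5, 0.5).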
To validate the fixed ambiguities, the ratio test is used, and the success rate, an output of the LAMBDA method, is used to assess the probability of a correct position solution. The ratio test decides whether the best integer ambiguity solution is sufficiently more likely than any other integer candidate: with $\check{Y}_1$ the integer minimizer and $\check{Y}_2$ the second-best candidate, the ratio is computed as

$\text{ratio} = \frac{\|\hat{Y} - \check{Y}_2\|^2_{Q_{\hat{Y}}}}{\|\hat{Y} - \check{Y}_1\|^2_{Q_{\hat{Y}}}}$

and only if the ratio is sufficiently large are the fixed solutions accepted; otherwise, the fixed solutions are rejected in favor of the float solutions. The success rate of AR is determined by the strength of the underlying GNSS model; the stronger the model, the higher the success rate [42].
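The ratio test can be sketched as follows; the candidate list would come from the LAMBDA integer search, and the covariance metric and the threshold of 3 follow the text.

```python
import numpy as np

def ratio_test(y_float, Qy, candidates, threshold=3.0):
    """Accept the best integer candidate only if the second-best candidate
    is sufficiently worse (ratio >= threshold).  Distances are measured in
    the metric of the float ambiguity covariance Qy.  Returns (fixed, ratio),
    with fixed=None when the test fails and the float solution is kept."""
    Qinv = np.linalg.inv(Qy)
    def dist(y_int):
        d = y_float - y_int
        return float(d @ Qinv @ d)
    ordered = sorted(candidates, key=dist)
    best, second = ordered[0], ordered[1]
    ratio = dist(second) / max(dist(best), 1e-12)  # guard against zero
    return (best if ratio >= threshold else None), ratio
```

When the float solution sits very close to one integer vector and far from all others, the ratio is large and the fix is accepted.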

Experimental Results
To evaluate our methods, we conducted field experiments. Our vehicles are shown in red frames in Figure 4; GNSS, IMU and LiDAR devices are mounted on each vehicle, and the equipment types are listed in Figure 4. Table 2 shows the parameters of the two types of IMU. For both vehicles, GNSS runs at 1 Hz and LiDAR runs at 10 Hz. The IMU of the Honeywell device in the white vehicle (denoted as "vehicle 1" below) runs at 600 Hz, and the IMU of the Novatel SPAN device in the blue vehicle (denoted as "vehicle 2" below) runs at 100 Hz. The frequency of our ambiguity resolutions and position solutions follows that of GNSS at 1 Hz. Experiments were done in an open area (Figure 4(a)) and below a viaduct (Figure 4(b)). The vehicles below the viaduct share complementary GNSS observations, which is suitable for evaluating our method. It is hard to obtain continuously accurate positions below the viaduct, which makes it difficult to evaluate the performance of our integrated system there, so we analyze the geometrical distribution of satellites above the vehicles' horizon in the experiments below the viaduct and then choose measurements from the same satellites collected in the open area to simulate the observation conditions below the viaduct; the RTK results in the open area are taken as the ground truth of the absolute positions of the two vehicles. Figure 5 shows the distributions of satellites for the two vehicles in the open area and below the viaduct. In our work, G19 and C22 are the reference satellites for GPS and BDS respectively. The viaduct in Figure 4 is oriented directly south to north, and the two vehicles move on the two sides of the road below the viaduct.
Because the viaduct blocks satellite signals from some angles of arrival, vehicle 1 in Figure 5(b) cannot receive satellite signals when the satellite elevation angle is less than 60 degrees and the azimuth is from 15 to 165 degrees, and vehicle 2 in Figure 5(d) cannot receive satellite signals when the elevation angle is less than 60 degrees and the azimuth is between 195 and 345 degrees. From Figure 5(b) and 5(d) we can see that they have seven common satellites, including the two reference satellites, at this epoch. During the whole duration of our experiments below the viaduct, the number of common non-reference satellites of vehicle 1 and vehicle 2 varies from five to eight; the lower this number, the more difficult it generally is to solve the relative position and ambiguities between the vehicles. Figure 5(f) shows all the visible satellites for vehicle 1 and vehicle 2, combining Figure 5(b) and Figure 5(d), which makes sufficient satellites available in our joint RTK solution. The serial numbers of GPS satellites start with the letter "G" and those of BDS with the letter "C".
We first evaluate the performance of the LiDAR scan-to-map matching method by comparing the relative position between the vehicles calculated by this method with the ground truth. In our work, we did not use the scan-to-scan matching method, because it requires that the two vehicles observe the same features simultaneously, which is difficult in practice. To improve the availability and reliability of the feature matching, the environment information observed in the last second was stored in the OLM; if the two vehicles observed the same features within the last second, they could be matched successfully. It was possible to store all the historical environment information in the OLM, but for efficiency of feature matching a window OLM was generated using the LiDAR data of the previous 1 s, which included ten laser scans as the LiDAR frequency was 10 Hz. Moreover, the window OLM helped keep the precision of relative positioning, because the OLM accumulates errors over distance and a short time window reduces the cumulative error. Figure 6 shows the LiDAR point clouds of the window OLM of vehicle 1 and the current laser scan of vehicle 2 at one epoch; as it is hard to visualize the three-dimensional OLM, the LiDAR point clouds of the window OLM are shown instead. We can see that vehicle 1 travelled on the right side of the viaduct piers and vehicle 2 on the left side. The current laser scan of vehicle 2 could be matched successfully to the window OLM of vehicle 1, as they had common features of the viaduct piers. From Figure 7 we can see that the relative positioning is highly accurate, with biases of a few centimeters in the east, north and up directions, which benefits the high-precision absolute positioning of the two connected vehicles.
It is noted that in our work the precision of the yaw angles of the vehicles is about 0.1 degree and our baseline length is about 20 m, so the yaw angle error may cause a relative positioning error of about 2 cm. Regarding the relative ambiguity resolution in subsection C above, we compare the results of using the relative position constraint from LiDAR scan-to-map matching (denoted as "with LiDAR" in Figure 8) and not using this constraint (denoted as "without LiDAR" in Figure 8). Figure 8 shows that in the open area the success rates of both cases are high and close to 1. But the ratio values in the open area without the LiDAR constraint are much smaller than those with the LiDAR constraint, which means that the LiDAR constraint helps fix the ambiguities and makes the AR more reliable, and that even in the open area with sufficient available satellites this constraint can largely increase the ratio values. This is reasonable: if the relative position constraint is not considered, the vehicles' predicted positions are calculated from GNSS pseudorange measurements, which include large errors, so the float solutions of the relative ambiguities may not be reliable and the ratio values are small. When the vehicles move below the viaduct, the number of their visible common satellites drops by about half. From the blue and red lines in Figure 8 we can see that if the relative position constraint is considered, both the success rates and the ratio values increase significantly. The mean ratio value below the viaduct without the LiDAR constraint is only 0.23; since we set the threshold of the ratio test to 3, the relative ambiguities of this case are not fixed correctly at most epochs. For the case with the LiDAR constraint, the mean ratio value is 14 and the ratio values at all epochs are larger than 3.
In the first few epochs, because only five common non-reference satellites are available, the success rate is about 0.8 for the case with the LiDAR constraint and about 0.02 for the case without it. These results prove that our relative positioning method can largely improve the AR even when there are only a few common satellites for the connected vehicles. Figure 9 shows the ratio values and success rates of AR for absolute positioning by our joint RTK algorithm. Six cases are tested and analyzed in this part: joint RTK without the relative position constraint ($d_{RP}$) or the relative ambiguity constraint ($N_{RA}$), joint RTK with only the $d_{RP}$ constraint, and joint RTK with both constraints, each in the open area and below the viaduct. Once reliably fixed, the integer ambiguities turn the carrier phase observables into precise pseudoranges, and the fixed positioning solution is then achieved. The biases between the fixed positioning solutions and the ground truth in the east, north and height directions are plotted in Figure 10 for vehicle 1 and in Figure 11 for vehicle 2. In these figures there are spikes with large biases; these spikes mean that the ambiguities are not fixed (the ratio is less than 3) at those epochs and the incorrect float solutions have large biases. In the open area, the case without $d_{RP}$ or $N_{RA}$ gets the worst positioning results, while the case with the $d_{RP}$ constraint and the case with both constraints have nearly equal biases in the three directions. This is because in Figure 9 both cases achieve a high success rate and all their ratio values are larger than 3, which means the fixed ambiguities are reliable and can produce high-precision positioning solutions. At most epochs, these cases achieve centimeter-level precision. Below the viaduct, the case without $d_{RP}$ or $N_{RA}$ still gets the worst positioning results.
At certain epochs the case considering only $d_{RP}$ gives worse position results than the case considering both constraints, but at most epochs these two cases achieve centimeter-level precision in east and north, while the case with no constraint gets very large biases. To compare the performance of our joint RTK method with single positioning by RTK, we conducted a single-positioning experiment for vehicle 1; the results are shown in Figure 12. The results of single positioning by RTK are quite similar to those of the case with no constraint in Figure 10, as expected. Table 3 shows the statistical positioning precisions and fixed rates of these six cases and of single positioning for the two vehicles. The fixed rates in Table 3 prove that our approach can largely increase the fixed rates below the viaduct, where the satellite observation conditions are worse.

Discussion
Connected vehicles build a network in which vehicles can share information and sensor data. Compared with a single vehicle, the observation information from the network is greatly increased, and the positioning accuracy of the connected vehicles can be significantly improved. An important prerequisite of our approach is a precise relative position between the connected vehicles. Even though our OLM-based matching method does not require the vehicles to observe the same environmental features at the same time, the vehicles are still required to observe the same features within a time window. In future work, the geometric shapes of features in the environment, such as cylindrical and rectangular buildings, will be considered, so that even if different parts of the features are observed by the vehicles, the scan-to-map matching method in our approach can still work via geometric constraints.
The relative position between the vehicles has to be transformed from the m-frame to the w-frame, which requires the attitude of the LiDAR sensor calculated by the INS. In our experiments, IMUs with high-grade gyroscopes are used; therefore, the attitude error accumulates slowly and is neglected. In our work, the precision of the yaw angles of the vehicles is about 0.1 degree and our baseline length is about 20 m, so the yaw angle error may cause a relative positioning error of about 2 cm. In future work, we will test our method further with lower-grade IMUs.
Our cooperative GNSS-RTK ambiguity resolution model for two connected vehicles is relatively simple, and a more complete model will be investigated in the future to address the multi-vehicle networking environment. Relative positions will be calculated by our INS-aided LiDAR OLM method for multiple vehicles in the network, which will increase the computation cost. Our relative ambiguity resolution method with the relative position constraint and our joint RTK method are easy to expand to a multi-vehicle networking environment, but the number of parameters will increase with the number of vehicles. Therefore, if there is a large number of vehicles in the network, solving Equation (23) and Equation (28) will become difficult. We will also focus on this issue in the future.

Conclusion
In this study, we proposed a cooperative GNSS-RTK AR method based on sharing sensor data between connected vehicles. A LiDAR scan-to-map matching method based on a window OLM was applied to the cross matching of LiDAR data from the vehicles, from which the precise relative position between the vehicles can be calculated. With this relative position constraint, a relative ambiguity resolution is proposed for the common satellites of the vehicles. Finally, a joint RTK method is applied to fuse the GNSS measurements from the connected vehicles and their relative ambiguities. Compared to the standard RTK AR method of a single vehicle, our approach improves the performance of RTK AR in terms of ratio values, success rates and fixed rates, and improves the positioning accuracy of the vehicles. In particular, even in highly occluded environments, our method can still obtain satisfactory results when the vehicles share complementary GNSS observations. We investigated the performance of our approach in an open area and below a viaduct, a typical urban occlusion environment. The following conclusions can be drawn from the experimental results.
(1) The proposed cross-matching method based on the window OLM can calculate a high-precision relative position between the connected vehicles, with biases of a few centimeters in the east, north and up directions; it does not require the vehicles to observe the same features simultaneously, which makes it more reliable and robust.
(2) The baseline constraint helps to fix the relative ambiguities by improving the success rates and ratio values. This approach works well even when the number of common satellites of the connected vehicles is small.
(3) In the absolute positioning, by taking advantage of the inter-vehicle position and ambiguity constraints, our joint RTK algorithm improves the geometric condition of the GNSS RTK observation equations in terms of ratio values, success rates and fixed rates, so the absolute positioning precision is improved for both vehicles. With our method, both vehicles achieved centimeter-level position accuracy with a 98.61% fixed rate in the open area, while the standard GPS+BDS RTK obtained decimeter-level accuracy with a 95.83% fixed rate. In the highly occluded environment below the viaduct, the vehicles still achieved centimeter-level accuracy with a fixed rate of 97.22%, compared with decimeter-level accuracy and a fixed rate of 35.42% for the standard GPS+BDS RTK method.