Angle-Awareness Based Joint Cooperative Positioning and Warning for Intelligent Transportation Systems

In future intelligent vehicle-infrastructure cooperation frameworks, accurate self-positioning is an important prerequisite for better driving environment evaluation (e.g., traffic safety and traffic efficiency). We herein describe a joint cooperative positioning and warning (JCPW) system based on angle information. In this system, we first design the sequential task allocation for cooperative positioning (CP) and warning and the related frame format of the positioning packet. With the cooperation of road-side units (RSUs), multiple groups of two-dimensional angle-of-departure (AOD) estimates are obtained and then transformed into the vehicle's position. Considering the system's computational efficiency, a novel AOD estimation algorithm based on a truncated signal subspace is proposed, which avoids eigendecomposition and exhaustive spectrum searching; a distance based weighting strategy is also utilized to fuse multiple independent estimates. Numerical simulations show that, considering both accuracy and computational complexity, the proposed method is a competitive alternative for achieving sub-lane-level positioning.


Introduction
Advanced vehicular assistance systems have attracted great attention in both academic research and industrial application [1,2]. During the evolution from traditional human driving to smart assisted driving, and even to autonomous driving, signal processing techniques for diverse perception data will play a critical role. In particular, strengthening safety [3,4], whatever the automation level of on-road vehicles, will be a principal task in the future. One of the vital issues is how to guarantee accurate localization for each on-road vehicle with at least a sub-meter level requirement [5,6].
As we know, global navigation satellite systems (GNSS), e.g., the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), are widely used for current vehicular localization and navigation, but they cannot always satisfy the rigorous requirements of some location based services (LBS) [7][8][9]. The reasons include blockage and multipath propagation of the navigation signals in urban environments, vehicle motion, satellite outages, etc. More seriously, the navigation signals can be maliciously recorded, tampered with, and retransmitted to vehicular terminals. Besides augmenting GPS and BDS with inertial navigation (utilizing on-board kinematic sensors such as odometers, accelerometers, gyroscopes, etc.), the laser imaging detection and ranging (LIDAR) technique can achieve simultaneous localization and mapping for intelligent vehicles and is now accepted by more and more industries [10][11][12][13]. However, some problems cannot be neglected if it is to be commercialized [14]. The first is cost. The second, and most important, is that the quality of LIDAR images deteriorates because of the weak reflectivity of a wet road surface, which makes some detected regions disappear from the LIDAR images (such a difference between LIDAR images and map images will affect the subsequent similarity calculation); in addition, irregular snow lines inside the lane and near the roadsides can also confuse lane identification. Both situations bring great hidden danger to intelligent vehicles.
Wireless communication techniques provide a new alternative solution, which can become a powerful supplement to localization. For example, vehicular ad-hoc networks (VANETs) have been designed with dedicated protocols and differentiated quality-of-service to achieve a connected road environment [15,16], including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. In that sense, cooperative positioning (CP) [17,18] becomes an effective measure to improve the localization accuracy by jointly fusing multiple location-related parameters exchanged among a series of VANET nodes. Generally, these location based parameters include the maximum movable distance [19], channel state information (CSI) [20], the received signal strength indicator (RSSI) [21,22], the time-of-arrival (TOA), and the time-difference-of-arrival (TDOA) [23,24].
In [25], two road-side units (RSUs) are utilized to broadcast their position and road geometry information, and the vehicle combines them with odometer data and the Doppler shift of the received signals to achieve lane-level localization accuracy. References [26,27] investigated a vehicle-to-infrastructure based CP in which each vehicle measures its position through the direction-of-arrival (DOA, acquired by a uniform linear array (ULA) on the vehicle) and the known RSU position (acquired from each beacon packet). Such CP gains better performance than GPS based localization. However, there exist four potential shortcomings. The first is that the vehicle motion and road unevenness generate mechanical vibration, so that the ULA becomes unstable, which consequently degrades the DOA estimation. The second is that the accuracy of DOA estimation depends highly on the array aperture, so the vehicle should be equipped with a large-scale antenna array, which increases the cost; although the coprime array [28,29] can achieve increased degrees of freedom compared with the traditional uniform array, it still requires complicated operations. The third is that one-dimensional angle estimation alone cannot distinguish a vehicle in the adjacent lane, which induces ambiguity in the spatial position. The last concerns the angle estimation algorithm, because it must simultaneously consider estimation accuracy and computational complexity rather than focus on only one of them [30], which is a basic requirement for timeliness. In addition, among the above CP strategies, an important function, i.e., cooperative warning with respect to traffic safety, is neglected. In a highly connected road environment, each intelligent vehicle has the responsibility of guaranteeing driving safety and road efficiency, the so-called cooperative safety.
Such cooperation is embodied at least in an active reporting of the traffic accident or vehicle fault by V2I links to RSUs; consequently, each individual road, even the whole road networks, can gain global control.
In this paper, aiming at solving the aforementioned problems, we introduce a novel joint cooperative positioning and warning system based on spatial angle information. The positioning mainly depends on the wireless V2I links and a kinematic model, without relying on GNSS. The warning is characterized by the safety distance and the vehicle's deceleration. Besides the detailed design of the sequential allocation of CP and warning tasks and the data format of localization packets, we also propose a computationally efficient AOD estimation algorithm and a multiple-detection fusing strategy to achieve sub-lane-level positioning. To summarize, the main contributions of this paper are three-fold:
1. An angle-awareness based framework of the joint cooperative positioning and warning system is first discussed, which includes the positioning model based on a state representation, the warning mechanism based on the safety distance, and the CP warning task allocation and related data formats for periodic interaction.
2. To decrease the computational complexity, a truncated signal subspace based algorithm for angle estimation is proposed, which avoids matrix eigendecomposition and spectrum searching.
3. To decrease the adverse influence of the near-far effect on angle estimation and improve the positioning accuracy, a distance based weighting strategy is also designed, which only utilizes the estimated positions without extra calculations.
The rest of this paper is organized as follows: In Section 2, the overview of the joint cooperative positioning and warning (JCPW) system is introduced. More details on the proposed algorithm are described in Section 3. Section 4 discusses the distance based weighting strategy for the initial position estimation. The numerical results are shown in Section 5, and Section 6 gives the conclusions.

Joint Cooperative Positioning and Warning Overview
As a reference scenario, we assume in Figure 1 a fully connected intelligent vehicle network deployed along a given road segment of double-lane width W meters in an urban canyon environment. Without loss of generality, two kinds of nodes are present: L RSUs, {R_1, R_2, ..., R_L}, placed on the roadsides, and multiple intelligent vehicles, randomly located on the lanes and traveling along arbitrary trajectories. The JCPW system mainly depends on the on-board processor and RF wireless module to achieve data transceiving and positioning. Figure 1. A graphical illustration of joint cooperative positioning and warning (JCPW).

Cooperative Positioning
For an intelligent vehicle V requiring sub-lane-level localization accuracy, a more appropriate way is to perform self-positioning with the aid of multiple RSUs' cooperation. There are two main reasons. One is that centralized positioning schemes give rise to a heavy computational load on the RSUs, and they usually require complex algorithms or multi-user schemes to discern vehicles. The other is that, although V2V communications can provide relative position information, they are usually unreliable because the communication links vary dynamically and are vulnerable to blockage or severe distortion [31]. Besides, GNSS is also a common choice; however, taking GPS as an example, it suffers from signal blockage and multipath, as well as inadequate accuracy (∼10 m) [32]. Therefore, we only consider the V2I communications, in which the line-of-sight (LOS) path generally dominates under such conditions [33]. In addition, cooperative positioning in our system refers to multiple RSUs' cooperation for achieving decentralized positioning.
The CP stage includes the localization data transmitting and receiving, the AOD estimation, and the position calculation. The first event occupies the time span T_TR, and the last two occupy the time span T_P; that is to say, a CP interval is T_CP = T_TR + T_P. For a better understanding, let the RSU position p_R = [x_R, y_R, z_R]^T be fixed and exactly known, while the vehicle's position p_V(t) = [x_V(t), y_V(t), z_V(t)]^T is unknown and time-varying. It is reasonable to consider that the velocity of the vehicle remains nearly invariant during T_TR and that the acceleration contributes very little, due to the sufficiently high packet rate. The real-time velocity reading v(t) = [v_x(t), v_y(t)]^T and acceleration reading a_V(t) = [a_x(t), a_y(t)]^T can be acquired from the vehicular sensors. Besides, the two-dimensional AOD information θ and φ denotes the elevation (i.e., the angle between the z-axis and the LOS signal) and azimuth (i.e., the angle between the x-axis and the projection of the LOS signal) of a vehicle, respectively, which is defined in the Cartesian coordinates of the RSU; see Figure 1.
We define the vehicle's state vector as c(t) = [x_V(t), y_V(t), v_x(t), v_y(t)]^T; then in the k-th CP interval, k = 1, 2, ..., the kinematics used in the positioning stage is a constant model with invariant velocity and acceleration. It is given by the following iterative formulas:

c(T_{k-1}) = [x_V(T_{k-1}), y_V(T_{k-1}), v_x(T_{k-1}), v_y(T_{k-1})]^T,
c(t_s) = Γ c(t_{s-1}) + Ξ a_V(t_{s-1}) + w(t_s),   (2)

where we use the subscript "s" for discrete time indexes, w(t_s) represents the noise term, Γ denotes the state transition matrix that applies the effect of the vehicle's state at time t_{s-1} on the one at time t_s, and Ξ is the control matrix that applies the effect of the acceleration a_V(t_{s-1}) on the current vehicle's state vector.
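The state update above can be sketched in a few lines. The explicit forms of Γ and Ξ are not printed in the text, so the standard discrete-time constant-acceleration matrices are assumed here:

```python
import numpy as np

def state_update(c_prev, a_prev, dt):
    """One step of the kinematic model c(t_s) = Gamma c(t_{s-1}) + Xi a(t_{s-1}).

    State c = [x, y, vx, vy]^T. Gamma and Xi take the standard
    discrete-time constant-acceleration forms (an assumption, since the
    paper does not print the matrices explicitly).
    """
    Gamma = np.array([[1.0, 0.0, dt, 0.0],
                      [0.0, 1.0, 0.0, dt],
                      [0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    Xi = np.array([[0.5 * dt**2, 0.0],
                   [0.0, 0.5 * dt**2],
                   [dt, 0.0],
                   [0.0, dt]])
    return Gamma @ c_prev + Xi @ a_prev

# One step with 20 m/s along x and 1 m/s^2 longitudinal acceleration.
c0 = np.array([0.0, 0.0, 20.0, 0.0])
a0 = np.array([1.0, 0.0])
c1 = state_update(c0, a0, dt=0.1)
```

The noise term w(t_s) is omitted for clarity; in a filtering implementation it would be added per step.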
In (2), the first formula gives the initial position states, and the second one gives the estimations of subsequent positions for the left time span T − T TR with the help of velocity and acceleration readings.
The elements x_V(T_{k-1}) and y_V(T_{k-1}) of c(T_{k-1}) are the only unknown parameters. Once the elevation θ and azimuth φ are estimated, which will be introduced in Section 3, we can further estimate the vehicle's initial position at time T_{k-1} according to basic geometric relations, i.e.,

x_V(T_{k-1}) = Σ_{i=1}^{K} ω_i (x_{R_i} + z̃_i tan θ_i cos φ_i),   (3)
y_V(T_{k-1}) = Σ_{i=1}^{K} ω_i (y_{R_i} + z̃_i tan θ_i sin φ_i),   (4)

where z̃_i = z_{R_i} − z_V. Herein, we consider K RSUs for cooperation. The weighting coefficients ω_i need to be designed to improve the localization accuracy, which is reserved for Section 4. Based on the above kinematic model and the estimated position information, the whole trajectory can be retrieved. It is worth mentioning that, based on (2), the positioning at time T_k can be given by c(T_k) = h(c(T_{k-1})) + v(T_k), where h(·) is a nonlinear mapping function with respect to the AOD estimation and v(T_k) is noise. To reduce the influence of noise, the extended Kalman filter (EKF) [34] would be a better choice, since it provides optimal position estimates in the mean-squared sense for a Gaussian noise distribution. However, two basic restrictions should be considered beforehand, i.e., the identification of the probability distribution and the determination of the covariance of the measurement error, which are not easy tasks due to the limited number of samples in actual scenarios. Therefore, we leave them as future work.
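The single-RSU geometry can be checked numerically. The sketch below inverts the stated angle definitions (elevation measured from the z-axis, azimuth from the x-axis); it is our reading of the geometric relations, not the paper's printed formula:

```python
import numpy as np

def position_from_aod(p_rsu, theta, phi, z_v=0.0):
    """Map one RSU's two-dimensional AOD back to a lane-plane position.

    p_rsu = (x_R, y_R, z_R); theta is the elevation from the z-axis,
    phi the azimuth from the x-axis, z_v the vehicle antenna height.
    """
    z_tilde = p_rsu[2] - z_v             # height difference z~ = z_R - z_V
    r = z_tilde * np.tan(theta)          # horizontal range of the LOS path
    return p_rsu[0] + r * np.cos(phi), p_rsu[1] + r * np.sin(phi)
```

With K cooperating RSUs, the K single-RSU results are then combined with the weights ω_i of Section 4.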

Warning
In our considered system, traffic safety is guaranteed in an active manner, i.e., each intelligent vehicle periodically monitors the surrounding traffic status broadcast by the RSUs; meanwhile, its own states, such as traffic accidents, component faults, velocity, acceleration/deceleration rate, remaining energy, etc., are reported in a timely manner. In this way, the RSUs can infer the global traffic status, and each vehicle can also acquire safety-related traffic parameters, for example the deceleration and safety distance.
We now consider the basic scenario depicted in Figure 2a, in which vehicle V_1 with length L_1, velocity v_1, and acceleration a_1 and vehicle V_2 with length L_2, velocity v_2, and acceleration a_2 are driving along a road. According to the vehicle kinematics, we can calculate the safety distance D_s to characterize the warning strategy. The total safety distance includes three components. The first is the distance D_w, which represents the moving distance of vehicle V_2 after it receives the warning message from the RSUs and begins to decelerate. The second is the distance D_r, which represents the relative moving distance while V_2 slows down with deceleration a_2d until the relative speed between the two vehicles becomes zero. The last is the minimum headway distance D_h, which must be guaranteed when the relative speed reaches zero. Therefore, the safety distance is represented by

D_s = D_w + D_r + D_h,   (5)

where D_w = d_2 − d_1, with d_1 and d_2 denoting the moving distances of V_1 and V_2 in the time span T_R2v, respectively; D_r = Δv²/(2Δa), where Δv = v_2 − v_1 and vehicle V_2 begins to slow down with a desired deceleration a_2d, which generates the relative deceleration Δa = a_2d − a_1; and D_h = d_min, where d_min is the minimum distance between the two vehicles. It is worth mentioning that the desired deceleration a_2d in Equation (5) is the only unknown parameter, which has a direct relationship with the warning strategy. By a simple derivation, the estimated desired deceleration is

â_2d = a_1 + Δv² / (2(D̂ − D_w − D_h)),   (6)

where D̂ is the estimate of the inter-vehicle distance D based on the vehicles' positions. The qualitative deceleration-distance curve is shown in Figure 2b. For a given distance D_temp, the corresponding a_temp is the least desired deceleration. Vehicle V_2 performs reasonable braking according to the real-time distance information to avoid accidents.
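A minimal numeric sketch of this relation, under our reading that the relative braking distance satisfies D_r = Δv²/(2Δa) with Δa = a_2d − a_1:

```python
def desired_deceleration(D_hat, v1, v2, a1, Dw, Dh):
    """Desired deceleration of the following vehicle V2 (a sketch of our
    reading of the safety-distance relation):

        D_hat = Dw + Dr + Dh,   Dr = dv^2 / (2 * (a_2d - a1)),

    solved for a_2d. D_hat is the estimated inter-vehicle distance."""
    dv = v2 - v1
    return a1 + dv**2 / (2.0 * (D_hat - Dw - Dh))
```

For example, with Δv = 10 m/s, a_1 = 0, D_w = 10 m, D_h = 20 m, and D̂ = 55 m, the remaining braking margin is 25 m and the required relative deceleration is 2 m/s².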

Remark 1.
The above qualitative analysis actually relies on some assumptions, i.e., vehicle V_1 keeps a constant deceleration before stopping, and vehicle V_2 keeps its acceleration unchanged during T_R2v and then slows down with an assigned constant deceleration no less than that of vehicle V_1. More complicated scenarios and the evaluation of the related parameters are left for future work. Given that the measured distance D̂ depends on the positions of the two vehicles, the following work focuses on the problem of the vehicle's positioning.

Task Sequence and Data Frame Format
The JCPW system comprises two important sub-functions, positioning and warning; therefore, an efficient task sequence should be designed. We first give a detailed explanation and analysis of the designed task sequences. For convenience, the basic structure is shown in Figure 3. Figure 3. A graphical representation of the task sequences. CP, cooperative positioning.
For one time interval T, there are three parts. The first, with time interval T_CP, is used for positioning. Without loss of generality, we assume K RSUs participate in the current CP; that is to say, a span of K T_L is required, where T_L denotes the time span of localization data allocated to one RSU. In each T_L, a series of orthogonal code sequences c(t) of length T_O is transmitted to assist the vehicle in AOD detection. The data format is shown in Figure 4, where we depict a case with K = 3 RSUs, and the orthogonal sequence c(t) is repeated G + 1 times with a total length of (G + 1)T_O.
As shown in the aforementioned kinematic model (see Equation (2)), during the AOD detections, all the data packets transmitted from the RSUs and received by the vehicle are completed within a time span of T_TR. Because the speed of light is much greater than the vehicle's speed and the data rate of the orthogonal sequences is high, it is reasonable to neglect the time consumption of the transmitting and receiving stage; consequently, a computationally efficient algorithm for AOD detection is necessary for accurate localization. That is to say, the time span of data processing, i.e., T_P = T_CP − T_TR, becomes a dominant factor in the subsequent localization. Within T_TR, the vehicle's position is roughly considered unchanged; after that, it is calculated by the inertial equations. Obviously, a small T_P is beneficial for reducing the position error. In addition, during the data processing, the multiple two-dimensional AODs are estimated separately because the RSUs cooperate in a successive manner. Strictly speaking, this AOD information cannot be transformed into the same position because the vehicle moves during T_TR; however, the time differences between the successively transmitted localization data from different RSUs are negligible; that is to say, we can consider all K estimated AODs as effective measurements of the initial position x_V(T_{k-1}) and y_V(T_{k-1}). For example, consider a scenario in which all K = 5 RSUs are located approximately 100 m from the vehicle, the system bandwidth is 10 MHz, the localization data codes are selected from a 64 × 64 Hadamard matrix and each is repeated 20 times, and the vehicle moves at 20 m/s along the longitudinal direction. Considering only the line-of-sight communication link, during one localization data time, the vehicle moves forward 0.256 cm, and all five RSUs together contribute a movement of 1.28 cm.
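The arithmetic of this example can be checked in a few lines, using the quoted values (64-chip codes at a 10 MHz chip rate, 20 repeats, 20 m/s, K = 5):

```python
chip_rate = 10e6      # system bandwidth used as the chip rate, Hz
code_len = 64         # length of one Hadamard code sequence
repeats = 20          # each code is repeated 20 times
speed = 20.0          # vehicle speed, m/s
n_rsu = 5             # cooperating RSUs

t_one = code_len / chip_rate * repeats   # one RSU's localization data time, s
move_one = speed * t_one                 # displacement during one RSU slot, m
move_all = n_rsu * move_one              # displacement over all five slots, m
# move_one = 0.00256 m (0.256 cm); move_all = 0.0128 m (1.28 cm)
```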

The second part, with time span T_W = T_v2R + T_R2v, is used for warning. In this stage, there are two basic tasks, i.e., the active state information reporting from the vehicle to the RSUs with time span T_v2R and the periodic traffic status received from the RSUs by the vehicle with time span T_R2v. In the time sequence, the first event has the highest priority because all other vehicles depend on the current vehicle's status to make a reasonable strategy for further driving. Different from the common state information, emergent events, for example traffic accidents, losing control, malfunction, etc., are much more important for traffic safety. In particular, given the randomness of traffic accidents, conflicts between the tasks in the time sequence are inevitable; see Figure 3, in which the accident labeled by a red star occurs in the CP stage. In such scenarios, the vehicle's on-board central processing unit should execute an interruption and switch to the warning stage. If the active reporting module still works, the last positioning results and other related vehicle state information are automatically reported to the RSUs; if it unfortunately breaks down, the subsequent vehicles take responsibility for uploading the accident information. By doing so, the system can guarantee timely traffic status reporting.
Besides the above two important parts, the third part is reserved for further function expansion, for example vehicle platoon control, high-definition map matching, and so on. The reserved time span is T − T_CP − T_W.

Computationally Efficient Two-Dimensional AOD Estimation
Different from the research works [27,35], we focus on computationally efficient AOD estimation rather than simply assuming the angles are obtained beforehand or utilizing directional antennas. In the following, we take one RSU as an example, because the others share the same data model and processing procedure.

Data Model
In the designed CP system, the RSU is equipped with a uniform rectangular array (URA) of M × N omnidirectional antenna elements, and the vehicle is equipped with a single omnidirectional antenna. The Cartesian coordinate system is shown in Figure 1. We assume that the time and frequency synchronization between the RSU and the vehicle has been calibrated. The V2I communication is in the line-of-sight (LOS) condition, which is depicted by the Rician channel model with parameter κ [36]. In addition, the system works in narrow-band. Based on the designed localization data frames, all MN antennas simultaneously transmit the orthogonal code sequences c_q(t), q = 1, ..., MN. The signals propagate from the RSU to the vehicle over different paths, resulting in a superposition of multipath components (MPCs); therefore, besides the LOS signal component, it is reasonable to assume that there are K MPCs, each of which has its own AOD pair θ_p and φ_p, p = 1, 2, ..., K. Let the propagation delay τ_p, attenuation coefficient A_p, and AOD pair {θ_p, φ_p} denote the p-th parameterized path [33,37]. Without loss of generality, given that the wireless environment undergoes changes due to the vehicle's movement, we assume the attenuation coefficient A_p is block-variant, i.e., it remains unchanged within T_O but varies across different T_O; the LOS signal component is treated as the first path in the following derivation.
Therefore, the complex baseband signal received by the vehicle during the g-th transmission can be expressed as [37,38]

y_g(t) = Σ_{p=1}^{K} A_p[g] Σ_{q=1}^{MN} e^{jϕ_q(θ_p, φ_p)} c_q(t − τ_p) + n(t),

where g indexes the transmission of c_q(t), and n(t) is zero-mean Gaussian white noise with covariance σ_n². For the p-th signal component, ϕ_q(θ_p, φ_p) represents the phase difference between the q-th antenna and the reference one; obviously, ϕ_1(θ_p, φ_p) = 0. After matched filtering with the orthogonal codes, the MN × 1 snapshot vector y[g] used below is obtained.

Truncated Signal Subspace for AOD Estimation
Many research works such as [26,27] directly utilize the multiple signal classification (MUSIC) algorithm for angle estimation. The basic spectrum function is given by

P(θ, φ) = 1 / (a^H(θ, φ) U_0 U_0^H a(θ, φ)),

where the noise subspace U_0 is acquired by the eigenvalue decomposition of R = E{y[g] y^H[g]}, i.e.,

R = U_S Σ_S U_S^H + U_0 Σ_0 U_0^H,

where the signal subspace U_S ∈ C^{MN×K} consists of the eigenvectors with respect to the K largest eigenvalues (which construct Σ_S = diag{η_1, η_2, ..., η_K}), and the remaining eigenvectors form the noise subspace U_0. However, there are some restrictions on P(θ, φ) in actual scenarios. First, it is difficult to determine the real number of MPCs, which results in an erroneous partition of the signal and noise subspaces. Moreover, if the MPCs are coherent with each other, the MUSIC algorithm has to solve the rank-deficiency problem at the cost of decreasing the array aperture. Second, the global search over the two-dimensional angle domain is exhaustive and incurs a huge time cost. In addition, the identification of the LOS component also requires extra calculations.
As shown in the aforementioned task sequences, the time complexity of the parameter estimation during CP processing has a direct influence on the positioning error. That is to say, the two-dimensional AOD estimation algorithm for the LOS signal component should simultaneously satisfy the requirements for lower computational complexity and higher estimation accuracy. To achieve such an objective, we mainly adopt two techniques.
On the one hand, in order to effectively utilize the observed data Y = [y[0], y[1], ..., y[G]], we can double the length of the data by taking advantage of its complex conjugate version, i.e., constructing the so-called forward-backward observing matrix [39]

Ỹ = [Y, J Y*],

where J is the matrix with ones on its anti-diagonal and zeros elsewhere and (·)* denotes the complex conjugate. Such an operation improves the accuracy of the AOD estimation.
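A sketch of this construction, assuming the common forward-backward form Ỹ = [Y, J Y*] (the exact stacking is not printed in the text):

```python
import numpy as np

def forward_backward(Y):
    """Double the snapshot count by appending the flipped complex
    conjugate of the data: Y_fb = [Y, J conj(Y)], where J is the
    exchange matrix (ones on the anti-diagonal, zeros elsewhere)."""
    J = np.fliplr(np.eye(Y.shape[0]))
    return np.hstack([Y, J @ Y.conj()])
```

Applying J to the conjugated snapshots is equivalent to reversing the element order of each array snapshot, which is why the sample support is doubled at no extra measurement cost.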
On the other hand, we try to directly extract the signal subspace with respect to the LOS signal without performing a matrix decomposition. According to the subspace theory, U_S and Ã span the same signal subspace, namely U_S = ÃT, where T is a nonsingular matrix. In our scenario, the LOS signal component plays a dominant role in the propagation environment, i.e., it usually has the strongest power; therefore, the truncated signal subspace u_1 ∈ C^{MN×1}, defined as the eigenvector corresponding to the largest eigenvalue, is actually a low-dimensional approximation of the signal subspace U_S. Strictly speaking, the subspace dimension reduction indeed results in a partial loss of MPC information; however, we are only concerned with the LOS signal component because it is directly related to the vehicle's position. As we know, the depiction of the dominant LOS signal is the Rician κ factor, which is defined by the power ratio between the LOS component and the other MPCs. A larger κ means the array steering vector of the LOS component contributes more to the signal subspace; hence, we can use the truncated signal subspace u_1 as an estimate of the array manifold vector ã_LOS, i.e., u_1 ≈ τ_1 ã_LOS, where τ_1 denotes the value in the first row and first column of T.
To this end, we introduce an iterative method, which is based on the power iteration scheme [40].
The basic principle is as follows. Consider an initial non-zero vector u_t^(0) ∈ C^{MN×1}; it can be expressed as a linear combination of all the eigenvectors of R, i.e.,

u_t^(0) = Σ_{i=1}^{MN} α_i u_i.   (18)

Left-multiplying both sides of (18) by the covariance matrix R, we have

R u_t^(0) = Σ_{i=1}^{MN} α_i η_i u_i.   (19)

If we repeat the above multiplication l times, i.e.,

R^l u_t^(0) = Σ_{i=1}^{MN} α_i η_i^l u_i = η_1^l (α_1 u_1 + Σ_{i=2}^{MN} α_i (η_i/η_1)^l u_i),   (20)

then, because the LOS signal is dominant compared with the other MPCs in our considered scenario, i.e., η_1 > η_2, η_3, ..., η_K, the following conclusion holds when l → ∞:

R^l u_t^(0) / η_1^l → α_1 u_1.   (21)

The conclusion in Equation (21) illustrates that u_t, the normalized result of the iteration, is a reasonable approximation of the truncated signal subspace u_1. Based on that, we summarize the above procedures in Algorithm 1.
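A minimal sketch of the power iteration underlying this argument (normalization per step replaces the division by η_1^l, and the stopping rule of Algorithm 1 is replaced here by a fixed iteration count):

```python
import numpy as np

def dominant_eigvec(R, n_iter=50):
    """Power iteration: repeatedly multiply a random start vector by R
    and renormalize. When the largest eigenvalue eta_1 is well separated,
    the result converges to the corresponding eigenvector u_1 (up to a
    complex phase), without any eigendecomposition of R."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(R.shape[0]) + 0j
    for _ in range(n_iter):
        u = R @ u
        u /= np.linalg.norm(u)
    return u
```

The per-iteration cost is one matrix-vector product, i.e., O(M²N²) flops for an MN × MN covariance matrix, which is the δ_4 M²N² term in the complexity analysis below.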
Once we obtain u_t by Algorithm 1, according to the approximate relation u_t ≈ γ ã_LOS, where γ is an unknown complex constant, we can directly calculate the two-dimensional AOD of the LOS signal component in closed form: the spatial frequency estimates μ̂ and υ̂ are obtained by averaging the phase differences ∠[ã_LOS(i, k+1) ã*_LOS(i, k)] and ∠[ã_LOS(i+1, k) ã*_LOS(i, k)] of adjacent elements along the two axes of the array, where ∠[·] extracts the phase and ã_LOS(i, k) is the [(i − 1)M + k]-th element of ã_LOS; the unknown constant γ cancels in these differences, and the elevation θ̂ and azimuth φ̂ then follow from the one-to-one mapping between (μ, υ) and (θ, φ).
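The closed-form recovery can be sketched as follows, assuming a half-wavelength URA with spatial frequencies μ = 2π(d/λ) sinθ cosφ along one axis and υ = 2π(d/λ) sinθ sinφ along the other (this parameterization is our assumption; the paper's exact phase model is not printed here):

```python
import numpy as np

def aod_from_subspace(u, M, N, d_over_lambda=0.5):
    """Recover (theta, phi) from the truncated signal subspace
    u ~ gamma * a_LOS. The unknown constant gamma cancels in the phase
    differences of adjacent array elements, so no search is needed."""
    U = u.reshape(N, M)   # element (i, k) sits at flat index (i-1)*M + k
    # Average phase step between adjacent columns (x-axis) and rows (y-axis).
    mu = np.angle(np.mean(U[:, 1:] * U[:, :-1].conj()))
    nu = np.angle(np.mean(U[1:, :] * U[:-1, :].conj()))
    c = 2.0 * np.pi * d_over_lambda
    theta = np.arcsin(np.sqrt(mu**2 + nu**2) / c)   # elevation from z-axis
    phi = np.arctan2(nu, mu)                        # azimuth from x-axis
    return theta, phi
```

Averaging over all adjacent element pairs, rather than using a single pair, reduces the variance of the phase estimates in noise.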

Computational Complexity Analysis
In order to estimate the two-dimensional AOD information, if the forward-backward observing matrix Ỹ is also utilized, the computational cost of the typical MUSIC algorithm attains the order of O(2GM²N² + M³N³ + δ_1δ_2M²N²) flops, where one flop means one complex multiplication and δ_1 and δ_2 are the numbers of spectrum searching steps within the elevation and azimuth ranges, respectively. Besides, the reduced-dimensional MUSIC algorithm [41], an improved version in terms of computational cost, attains the order of O(2GM²N²) flops plus the cost of a one-dimensional search of δ_3 steps (based on the condition that the elevation θ and azimuth φ are transformed into two angles measured between the LOS signal and the x-axis and y-axis, respectively). However, the proposed algorithm only attains the order of O(2GM²N² + δ_4M²N²) flops, where δ_4 is the total number of iterations in Algorithm 1. For an easy comparison, we ignore the higher order small quantities and compare the computational costs of the algorithms by their ratios. Numerically, let the antenna array scale be M = N = 10, the number of repeated codes G = 20, the total number of MPCs K = 20, and the searching step of the MUSIC algorithm be 0.1°, which implies δ_1 = 900 within the elevation range and δ_2 = 1800 within the azimuth range. For reduced-dimensional MUSIC, δ_3 = 1800. The iteration number of the proposed algorithm is smaller than 20. Therefore, the computational cost of the proposed algorithm is approximately 1/27,000 of that of the MUSIC algorithm and approximately 1/270 of that of the reduced-dimensional MUSIC algorithm.
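The first of these ratios can be reproduced numerically. The cost model below is a sketch under our assumptions (covariance build 2GM²N², eigendecomposition M³N³, search δ_1δ_2 M²N² for MUSIC; covariance build plus δ_4 iterations for the proposed scheme):

```python
M = N = 10
G = 20
d1, d2 = 900, 1800   # 2-D spectrum grid sizes for a 0.1-degree step
d4 = 20              # power-iteration count (upper bound from the text)

# Assumed flop counts, in units of complex multiplications.
music = 2*G*M**2*N**2 + M**3*N**3 + d1*d2*M**2*N**2   # decompose + 2-D search
proposed = 2*G*M**2*N**2 + d4*M**2*N**2               # covariance + iterations
ratio = proposed / music                              # about 1/27,000
```

The 2-D search term δ_1δ_2 M²N² dominates the MUSIC cost, which is why removing both the eigendecomposition and the search yields the roughly four-orders-of-magnitude saving.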

Distance-Weighted Positioning
We can directly utilize the multiple estimated two-dimensional AODs to achieve the vehicle's positioning; however, the accuracy of the angle detection is vulnerable to the influence of the vehicle's position. We herein call this problem the near-far effect, which denotes the fact that the farther the vehicle is from the RSU, the less reliable the corresponding positioning result is. For example, if a vehicle is located at a large distance, even a small additional movement gives rise to a great positioning error, even when all other conditions are the same. Furthermore, such a phenomenon gets worse in a noisy environment.
The fundamental reason behind the above phenomenon is that, when the elevation θ is large enough to approach 90° and/or the azimuth φ approaches 0° or 180°, the antenna array gradually functions in endfire mode. In this mode, the effective aperture of the antenna array is greatly reduced; that is to say, there is no sufficient array aperture to guarantee satisfactory angle estimation accuracy.
As we know, the elevation and azimuth establish a one-to-one mapping with a point on the lane plane, which means the position has no ambiguity. Therefore, for a vehicle in a dense RSU deployment scenario, if it is located near a position that forces one RSU to work in endfire mode, it correspondingly falls into the non-endfire mode of the other RSUs. That inspires a method for reducing the adverse influence of the near-far effect, i.e., a distance based weighting strategy. It trusts the "near" positioning results more than the "far" ones. To be specific, according to the estimated two-dimensional angles, we can retrieve the distance D_i between the vehicle and the i-th RSU, i.e.,

D_i = z̃_i / cos θ_i.   (27)

Without loss of generality, for K RSUs participating in cooperative positioning, we have the distances D_1, D_2, ..., D_K, so the weighting coefficients can be determined by

ω_i = D_i^{-1} / Σ_{j=1}^{K} D_j^{-1}.   (28)

Taking two RSUs for example, if D_1 < D_2, then according to Equation (28), the weighting coefficients are ω_1 = D_2/(D_1 + D_2) and ω_2 = D_1/(D_1 + D_2), so that ω_1 > ω_2. Obviously, the positioning result with the smaller distance plays a more important role in the final position determination; see Equations (3) and (4).
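A sketch of this weighting, assuming the normalized inverse-distance form (nearer RSUs, whose angle estimates are more reliable, receive larger weights):

```python
import numpy as np

def distance_weights(D):
    """Normalized inverse-distance weights (our reading of the weighting
    rule): omega_i = (1/D_i) / sum_j (1/D_j). Weights sum to one, and a
    smaller distance yields a larger weight."""
    w = 1.0 / np.asarray(D, dtype=float)
    return w / w.sum()
```

For two RSUs at distances D_1 = 1 and D_2 = 3, this gives ω = (0.75, 0.25), so the nearer RSU dominates the fused position.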
There exist several other weighting strategies, for example weighting based on the received signal strength (RSS) (or the estimated signal-to-noise ratio (SNR)), i.e., ω ∝ RSS, and weighting based on the Cramér-Rao lower bound (CRB) of the angle estimation, i.e., ω ∝ 1/CRB. Among them, the RSS falls off inversely proportionally to the square of the distance between the vehicle and the RSU in free space. However, on the one hand, it is just a qualitative factor for evaluating the angle estimation and is unable to reflect the directional effect; on the other hand, although a statistical distance can be estimated from the propagation model, it is unreliable due to the random fluctuation of the multipath signals. The CRB provides a bound on the covariance matrix of any unbiased angle estimate, which involves many factors such as the SNR, the snapshot length G, and the array manifolds a(μ_k) and b(υ_k) [39,42]. However, there is no simple and practical method to obtain the noise variance, and the SNR also exhibits random fluctuation [27]. In addition, the calculation of the CRB is computationally expensive and needs to be redone whenever the angle information changes.
To sum up, we summarize the whole CP procedure in Table 1. Besides the basic data, i = 1, 2, . . . , K, shown in Section 3.1, the kernel of this procedure lies in Step 1, Step 2, and Step 5. Through the data extension in Step 1, the accuracy of angle estimation can be improved because the Cramér-Rao lower bound is inversely proportional to the length of the observed data. In Step 2, Algorithm 1 extracts the signal subspace associated with the LOS component in an iterative manner rather than by matrix decomposition; meanwhile, the subsequent AOD estimation in Step 3 is given by a closed-form expression rather than by searching, which is computationally efficient. The final estimate of the vehicle's position in Step 7 fuses all K detected AODs through the distance based weighting method of Step 5.
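The details of Algorithm 1 are not reproduced in this excerpt; as a generic illustration of the idea of extracting a dominant (LOS) signal subspace iteratively instead of performing a full eigendecomposition, a plain power-iteration sketch can be written as follows. This is a stand-in under the strong-LOS assumption, not the paper's truncated signal-subspace algorithm itself.

```python
import numpy as np

def dominant_subspace(R, tol=1e-3, max_iter=100):
    # Power iteration on the sample covariance matrix R: converges to the
    # principal eigenvector (the one-dimensional signal subspace of the
    # dominant component) without a full eigendecomposition. The stopping
    # threshold tol mirrors the 1e-3 parameter used in the simulations.
    n = R.shape[0]
    v = np.ones(n, dtype=complex) / np.sqrt(n)
    for _ in range(max_iter):
        w = R @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v
```

When the LOS component dominates the covariance matrix, the returned vector aligns with the true principal eigenvector at a cost of O(n^2) per iteration, versus O(n^3) for a full eigendecomposition.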

Remark 2.
Looking back at the aforementioned safety distance based warning in Section 2.2, once we acquire the vehicles' positions, the desired deceleration of vehicle V_2 can be calculated via (6) with the aid of the sequential task allocation designed in Figure 3. Since different decelerations result in different driving experiences, the warning strategies can be expressed as a series of levels: for example, if a_{ℓ−1} < a_2d ≤ a_ℓ, ℓ = 1, 2, . . . , L, then the warning belongs to level ℓ. A large deceleration means a high warning level and thus a big emergency. The thresholds will be determined and evaluated by actual driving tests in future works. Besides, the vehicle's positioning error inevitably produces an erroneous inter-vehicle distance D and further affects the desired deceleration a_2d of vehicle V_2. In order to understand this relationship, we analyze the first-order perturbation of the desired deceleration. According to (6), let the estimated distance be D̂ = D + δ_D, where D̂ = D_w + D_h and δ, δ_D, δ_∆v, and δ_a1 denote the perturbations of the desired deceleration a_2d, the inter-vehicle distance D, the velocity difference ∆v, and the acceleration a_1, respectively. After a lengthy but straightforward derivation and ignoring the second-order terms, we find that, theoretically, if the other factors are correctly measured, the bias of the desired deceleration a_2d is proportional to that of the inter-vehicle distance D. In particular, based on the first-order perturbation analysis, if the positioning errors in the x and y directions are independent, identically distributed, and zero-mean, then E{δ} = ρ_1 E{δ_D} = 0. Further, the mean-squared error is E{δ²} = ρ_1² E{δ_D²}.
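Since Equation (6) is not reproduced in this excerpt, the first-order relation of the Remark can only be sketched generically; the sensitivity coefficient ρ_1 is left abstract (it would equal the partial derivative of the deceleration law (6) with respect to D):

```latex
% First-order perturbation sketch for Remark 2; the exact form of
% Equation (6) is not reproduced, so \rho_1 is left abstract.
\hat{a}_{2d} = a_{2d} + \delta, \qquad \hat{D} = D + \delta_D, \qquad
\delta \approx \rho_1\,\delta_D, \qquad
\rho_1 \triangleq \frac{\partial a_{2d}}{\partial D},
\]
\[
\mathbb{E}\{\delta\} = \rho_1\,\mathbb{E}\{\delta_D\} = 0, \qquad
\mathbb{E}\{\delta^2\} = \rho_1^2\,\mathbb{E}\{\delta_D^2\}.
```

In words: an unbiased, zero-mean distance error induces a zero-mean deceleration error whose variance is scaled by ρ_1².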

Numerical Examples
In order to demonstrate the effectiveness and advantages of the proposed AOD estimation algorithm and the localization scheme, a series of Monte Carlo simulations is presented in this section. We assume that the RSU R is deployed on top of a traffic light at a height of 6 m. The vehicle proceeds along the lane, whose width is set as 3.5 m. The onboard unit (OBU) antenna is deployed on the vehicle's rooftop at a total height of 1.8 m.
In the following simulations, the system works at carrier frequency f_c = 5.9 GHz with bandwidth B = 10 MHz, and the data length for positioning is fixed as G = 20. For the LOS component, we adopt the dual-slope model [43] to describe the path loss, i.e., PL(D) = PL(D_0) + 10γ_1 log_10(D/D_0) for D_0 < D ≤ D_C, and PL(D) = PL(D_0) + 10γ_2 log_10(D/D_C) + 10γ_1 log_10(D_C/D_0) for D > D_C, where PL(D_0) is the free-space signal attenuation at distance D_0. According to [43], we chose D_0 = 10 m, D_C = 80 m, γ_1 = 1.9, and γ_2 = 3.8. Besides the LOS component, the other 20 MPCs come uniformly from any direction in the angular domain θ ∈ [0°, 90°] and φ ∈ [0°, 180°); the phase and magnitude of the attenuation coefficient of each signal component are modeled as random variables uniformly distributed in (0, 2π) and (0, 1), respectively. We use the Ricean κ factor to indicate the power proportion, defined as the ratio of the power in the LOS component to the total power in the diffused non-LOS components. Throughout the simulations, the threshold parameter in Algorithm 1 is set as 10^−3.
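The dual-slope model above is fully specified by the quoted parameters and can be sketched directly; the only assumption is that PL(D_0) is taken as the free-space attenuation at the reference distance D_0, computed from the carrier frequency.

```python
import math

def path_loss_db(D, D0=10.0, DC=80.0, gamma1=1.9, gamma2=3.8, fc=5.9e9):
    # Dual-slope path loss of [43] with the parameters of this section.
    # PL(D0) is modeled as the free-space attenuation at the reference
    # distance D0 (an assumption; any measured reference would also do).
    c = 3.0e8  # speed of light, m/s
    pl_d0 = 20.0 * math.log10(4.0 * math.pi * D0 * fc / c)
    if D <= DC:
        return pl_d0 + 10.0 * gamma1 * math.log10(D / D0)
    return (pl_d0 + 10.0 * gamma2 * math.log10(D / DC)
                  + 10.0 * gamma1 * math.log10(DC / D0))
```

Note that the two branches meet continuously at D = D_C, and the decay per distance doubling steepens from 10γ_1 log_10 2 ≈ 5.7 dB to 10γ_2 log_10 2 ≈ 11.4 dB beyond the critical distance.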
Simulation 1: We first consider the root mean-squared error (RMSE) performance. For comparison, we consider just one RSU with coordinates {0, 0, 6} participating in positioning under two cases. One is the "far" case, in which the vehicle is located at {θ, φ} = {74.5°, 6.7°}, D = 15.67 m; the other is the "near" case, in which the vehicle is located at {θ, φ} = {39.6°, 30.3°}, D = 5.45 m. The antenna array size is M = N = 10. The noise power is given by P_n = −174 + 10 log_10 B = −104 dBm at temperature T = 300 K; considering other unavoidable link losses, we let P_n = −74 dBm. Different received SNRs are then obtained by setting different transmitting powers, with the antenna gains absorbed into the SNR. Let κ = 5, and the total number of Monte Carlo runs is set as 1000. Figure 5a reports the RMSE performance of the estimated two-dimensional LOS AOD. The MUSIC algorithm serves as a benchmark because it is a typical high-resolution algorithm. Correspondingly, Figure 5b compares the average running time of a single execution of each algorithm. For the MUSIC algorithm, we set the searching step as 0.1° and the searching range from ψ − 5° to ψ + 5°, ψ ∈ {θ, φ}. From the simulation results, we can draw two conclusions. First, for both cases, the RMSE performance of the proposed algorithm is slightly inferior to that of the MUSIC algorithm; however, its much lower computational complexity makes it a better alternative to exhaustive searching. Second, at the same SNR level, the "far" position shows inferior error performance to the "near" one, which confirms the near-far effect in angle based positioning.
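The noise-power budget of Simulation 1 follows the standard thermal-noise formula and can be reproduced as a one-liner; the 30 dB `extra_loss_db` term simply lumps together the "other unavoidable link loss" mentioned above.

```python
import math

def noise_power_dbm(bandwidth_hz, extra_loss_db=0.0):
    # Thermal noise floor: approximately -174 dBm/Hz near room
    # temperature, plus 10*log10(B); extra_loss_db lumps in the other
    # unavoidable link losses of Simulation 1 (30 dB there).
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + extra_loss_db
```

For B = 10 MHz this yields −104 dBm, and −74 dBm once the 30 dB of extra loss is included, matching the values used in the simulation.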
Simulation 2: We then consider the cumulative distribution of the average absolute error of the LOS AOD estimation provided by the proposed algorithm. This evaluation criterion is defined as P_ζ(θ_1, φ_1) = P{(|θ̂_1 − θ_1| + |φ̂_1 − φ_1|)/2 ≤ ζ}, where ζ denotes a series of allowed angle scales, chosen from 0° to 2.5° with step 0.1°. The purpose of this simulation is to examine the error level under different conditions. The simulation results are shown in Figure 6. We can conclude that the AOD estimation accuracy improves with the increase of the κ factor, the scale of the antenna array, and the SNR. For example, at κ = 3, M = N = 6, and SNR = 10 dB, the error cumulative distribution shows P_ζ(θ_1, φ_1)|_{ζ=1.3°} = 1, and it becomes P_ζ(θ_1, φ_1)|_{ζ=0.5°} = 1 at κ = 8, M = N = 10, and SNR = 10 dB, which illustrates that the error values are strongly concentrated.

Simulation 3: We now evaluate the positioning performance. For convenience, we assume that there are two RSUs located at {0, 0, 6} m and {12, 0, 6} m, respectively. One vehicle travels along the middle line of the lane and passes five points where the CP is launched. The coordinates of these points and the corresponding AOD and distance information are listed in Table 2. The transmitting power is set as 10 dBm, the Ricean factor κ = 3, and the path loss model is the same as in the previous simulations.
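The error-CDF criterion P_ζ used in Simulation 2 is straightforward to evaluate empirically from Monte Carlo error samples; a minimal sketch (function and argument names are our own) is:

```python
import numpy as np

def error_cdf(theta_err_deg, phi_err_deg, zetas_deg):
    # Empirical version of the Simulation 2 criterion
    # P_zeta = P{(|theta_hat - theta| + |phi_hat - phi|)/2 <= zeta},
    # evaluated at each allowed angle scale zeta over the error samples.
    avg = (np.abs(theta_err_deg) + np.abs(phi_err_deg)) / 2.0
    return np.array([np.mean(avg <= z) for z in zetas_deg])
```

Sweeping `zetas_deg` from 0° to 2.5° in 0.1° steps over the Monte Carlo errors reproduces curves of the kind shown in Figure 6.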
Besides the proposed distance based weighting, we also compare two other strategies, i.e., uniform weighting and CRB based weighting. Figure 7 gives the positioning RMSE at all five points. As we can see, the CP with distance based weighting performs better than uniform weighting and is only slightly inferior to CRB based weighting. It is worth mentioning that the CRB based weighting stems from the complicated CRB expression [39]; although it gives the smallest positioning error, its fast and accurate calculation is impractical because, on the one hand, the array manifolds of all MPCs and the noise variance must be known and, on the other hand, the computational burden is heavy. By contrast, the proposed strategy makes a better trade-off between positioning accuracy and computational complexity.

Conclusions
We designed a basic framework for a joint cooperative positioning and warning system from the perspective of angle-awareness. Within this framework, we discussed the cooperative positioning model based on state representation, the warning mechanism based on the safety distance, and the sequential task allocation. Besides, in order to reduce the computational complexity of angle-awareness and improve the cooperative positioning accuracy, we proposed a truncated signal-subspace based algorithm for AOD estimation and a distance based weighting strategy for position estimation, respectively. Compared with exhaustive searching based algorithms such as two-dimensional MUSIC, the proposed algorithm maintains acceptable performance while decreasing the computational complexity. In addition, the proposed distance based weighting method achieves a positioning accuracy similar to that of the theoretical CRB based weighting while being far more practical. Therefore, both proposed methods can serve as better alternatives in a practical positioning and warning system. Some important issues still remain open; future works will therefore focus on the optimization of RSU deployment, real-time high-accuracy trajectory tracking based on Kalman filtering, and the fusion of supplementary information such as cameras or LIDAR.