1. Introduction
Advanced vehicular assistance systems have become a rising trend, attracting great attention in both academic research and industrial application [1,2]. During the evolution from traditional human driving to smart assisted driving, and even to autonomous driving, signal processing techniques for diverse perception data will play a critical role. In particular, strengthening safety [3,4], regardless of the automation level of on-road vehicles, will be a principal task in the future. One vital issue is how to guarantee accurate localization for each on-road vehicle, with at least a sub-meter-level requirement [5,6].
As we know, global navigation satellite systems (GNSS), e.g., the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), are widely used for current vehicular localization and navigation; however, they cannot always satisfy the rigorous requirements of some location-based services (LBS) [7,8,9]. The reasons include the navigating signals being inevitably blocked or subject to multipath transmission in urban environments, vehicle motion, the absence of satellites, etc. More seriously, the navigating signals can be recorded, maliciously tampered with, and retransmitted to the vehicular terminals. Besides the improvement of GPS and BDS with inertial navigation (utilizing on-board kinematic sensors such as odometers, accelerometers, gyroscopes, etc.), the laser imaging detection and ranging (LIDAR) technique can achieve simultaneous localization and mapping for intelligent vehicles, which is now accepted by more and more industries [10,11,12,13]. However, there still exist some problems that cannot be neglected if it is to be commercialized [14]. The first is the cost. The second, and the most important, is that the quality of LIDAR images deteriorates because of the weak reflectivity of wet road surfaces, which causes some detected regions to disappear from the LIDAR images (such a difference between LIDAR images and map images will affect the further similarity calculation); in addition, irregular snow lines inside the lane and near the roadsides can also confuse lane identifiability. Both situations bring great hidden danger to intelligent vehicles.
The wireless communication techniques provide a new alternative solution, which can become a powerful supplement to localization. For example, vehicular ad-hoc networks (VANETs) have been designed with dedicated protocols and differentiated quality-of-service to achieve a connected road environment [15,16], including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. In that sense, cooperative positioning (CP) [17,18] becomes an effective measure to improve localization accuracy by jointly fusing multiple location-related parameters exchanged among a series of VANET nodes. Generally, these location-based parameters include maximum movable distance [19], channel state information (CSI) [20], received signal strength indicator (RSSI) [21,22], time-of-arrival (TOA), and time-difference-of-arrival (TDOA) [23,24].
In [25], two road-side units (RSUs) are utilized to broadcast their position and road geometry information; the vehicle combines them with odometer data and the Doppler shift of the received signals to achieve lane-level localization accuracy. References [26,27] investigated a vehicle-to-infrastructure based CP in which each vehicle measures its position through the direction-of-arrival (DOA, acquired by a uniform linear array (ULA) on the vehicle) and the known RSU position (acquired in each beacon packet). Such CP gains better performance than GPS-based localization. However, there exist four potential shortcomings. The first is that vehicle motion and road unevenness generate mechanical vibration so that the ULA becomes unstable, which consequently influences DOA estimation. The second is that the accuracy of DOA estimation highly depends on the array aperture, so the vehicle should be equipped with a larger-scale antenna array, which increases the cost; although the coprime array [28,29] can achieve increased degrees of freedom compared with the traditional uniform array, it still requires complicated operations. The third is that one-dimensional angle estimation alone cannot distinguish a vehicle in the adjacent lane, which induces ambiguity in the spatial position. The last concerns the angle estimation algorithm, because it must simultaneously consider estimation accuracy and computational complexity rather than focusing on only one of them [30], which is a basic requirement for timeliness. In addition, among the above CP strategies, an important function, i.e., cooperative warning with respect to traffic safety, is neglected. In a highly connected road environment, each intelligent vehicle has the responsibility of guaranteeing driving safety and road efficiency, the so-called cooperative safety. Such cooperation is embodied at least in the active reporting of traffic accidents or vehicle faults via V2I links to RSUs; consequently, each individual road, and even the whole road network, can gain global control.
In this paper, aiming to resolve the aforementioned impasses, we introduce a novel joint cooperative positioning and warning system on the basis of spatial angle information. The positioning mainly depends on the wireless V2I links and a kinematic model, without relying on GNSS. The warning is depicted by the safety distance and the vehicle's deceleration. Besides the detailed design of the sequential allocation of CP warning tasks and the data format of localization packets, we also propose a computationally efficient angle-of-departure (AOD) estimation algorithm and a multiple-detection fusing strategy to achieve sub-lane-level positioning. To summarize, the main contributions of this paper are three-fold:
An angle-awareness based framework of the joint cooperative positioning and warning system is first discussed, which includes the positioning model based on state representation, the warning mechanism based on safety distance, and the CP warning task allocation and related data formats for periodic interaction.
To decrease the computational complexity, a truncated signal subspace based algorithm for angle estimation is proposed, which avoids matrix eigen decomposition and spectra searching.
To decrease the adverse influence of the near-far effect caused by angle estimation and improve the positioning accuracy, a distance based weighting strategy is also designed, which only utilizes the estimated positions without extra calculations.
The rest of this paper is organized as follows: In Section 2, an overview of the joint cooperative positioning and warning (JCPW) system is introduced. More details on the proposed algorithm are described in Section 3. Section 4 discusses the distance-based weighting strategy for the initial position estimation. The numerical results are shown in Section 5, and Section 6 gives the conclusions.
Notation: (·)*, (·)^T, and (·)^H denote the complex conjugate, transpose, and Hermitian transpose, respectively. Symbol “⊗” denotes the Kronecker product.
2. Joint Cooperative Positioning and Warning Overview
We assume as a reference scenario in Figure 1 a fully connected intelligent vehicle network deployed along a given road segment of double-lane width equal to W meters, belonging to an urban canyon environment. Without loss of generality, two different kinds of nodes are present: RSUs, placed on the roadsides; and multiple intelligent vehicles, randomly located on the lanes and traveling along arbitrary trajectories. The JCPW system mainly depends on the on-board processor and RF wireless module to achieve data transceiving and positioning.
2.1. Cooperative Positioning
For an intelligent vehicle V, if sub-lane-level localization accuracy is required, a more appropriate way is to perform self-positioning with the aid of multiple RSUs' cooperation. There are two main reasons. One is that centralized positioning schemes give rise to a heavy computational load for the RSUs, and they usually require complex algorithms or multi-user schemes to discern vehicles. The other is that, although V2V communications can provide relative position information, they are usually unreliable because the communication links vary dynamically and are vulnerable to blockage or severe distortion [31]. Besides, GNSS is also a common choice; however, taking GPS for example, it suffers from signal blockage and multipath, as well as inadequate accuracy (∼10 m) [32]. Therefore, we only consider V2I communications, in which the line-of-sight (LOS) path generally dominates in such a condition [33]. In addition, cooperative positioning in our system refers to multiple RSUs' cooperation for achieving decentralized positioning.
The CP stage includes localization data transmitting and receiving, AOD estimating, and position calculating. The first event occupies one time span and the last two events another; together they form one CP interval. For a better understanding, let the RSU position be fixed and exactly known, while the vehicle's position is unknown. It is reasonable to consider that the velocity of the vehicle remains nearly invariant during one CP interval, and that the acceleration contributes very little due to the sufficiently high packet rate. The real-time velocity and acceleration readings can be acquired from the vehicular sensors. Besides, the two-dimensional AOD information consists of the elevation (i.e., the angle between the z-axis and the LOS signal) and the azimuth (i.e., the angle between the x-axis and the projection of the LOS signal) of a vehicle, defined in the Cartesian coordinates of the RSU; see Figure 1.
We define the vehicle’s state vector as:
then in the
k-th CP interval,
, the kinematics used in the positioning stage is a constant model with invariant velocity and acceleration. It is given by the following iterative formulas,
where we use subscript “
s” for discrete time indexes,
represents the noise item, and:
where
denotes the state transition matrix that applies the effect of the vehicle’s state at time
on the one at time
; and
is the control matrix that applies the effect of acceleration
on the current vehicle’s state vector.
In (2), the first formula gives the initial position states, and the second one gives the estimations of the subsequent positions for the remaining time span with the help of the velocity and acceleration readings. The initial position coordinates are the only unknown parameters. Once the elevation and azimuth are estimated, which will be introduced in Section 3, we can further estimate the vehicle's initial positions according to basic geometric relations. Herein, we consider multiple RSUs for cooperation. The weighting coefficients need to be designed to improve the localization accuracy, which is reserved for Section 4. Based on the above kinematic model and the estimated position information, the whole trajectory can be retrieved.
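As a minimal sketch of the constant-velocity/acceleration update described above: the exact state vector and matrices of Equation (2) are not reproduced in this extraction, so the planar state layout [x, y, vx, vy], the names `F` and `B`, and the sampling period `dt` are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def propagate_state(state, accel, dt):
    """One discrete kinematic step of the form s_{k+1} = F s_k + B a_k.

    state: hypothetical planar state [x, y, vx, vy]; accel: [ax, ay].
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition matrix
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)        # control (acceleration) matrix
    return F @ state + B @ accel

# Example: start at the origin, moving 20 m/s along x, no acceleration.
s = np.array([0.0, 0.0, 20.0, 0.0])
s = propagate_state(s, np.array([0.0, 0.0]), dt=0.1)
```

Over one CP interval the same step is simply iterated with the latest sensor readings, which matches the paper's "iterative formulas" description.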
It is worth mentioning that, based on (2), the positioning at each time step can be described by a state vector and a measurement vector, where the measurement is a nonlinear mapping of the AOD estimation plus noise. To reduce the influence of noise, the extended Kalman filter (EKF) [34] would be a better choice, since it provides optimal position estimations in the mean-squared sense for a Gaussian noise distribution. However, two basic restrictions should be considered beforehand, i.e., the identification of the probability distribution and the determination of the covariance of the measurement error, which are not easy tasks due to the limited number of samples in actual scenarios. Therefore, we leave them as future work.
2.2. Warning
In our considered system, traffic safety is guaranteed in an active manner, i.e., each intelligent vehicle periodically monitors the surrounding traffic status broadcast by the RSUs; meanwhile, its own states, such as traffic accidents, component faults, velocity, acceleration/deceleration rate, remaining energy, etc., should be reported in a timely manner. In this way, the RSUs can infer the global traffic status, and each vehicle can also acquire safety-related traffic parameters, for example the deceleration and the safety distance.
We now consider a basic scenario depicted in Figure 2a, in which a leading vehicle and a following vehicle, each with its own length, velocity, and acceleration, are driving along a road. According to the vehicle kinematics, we can calculate the safety distance to characterize the warning strategy. The total safety distance includes three components. The first is the distance the following vehicle moves after it receives the warning message from the RSUs and before it begins to decelerate. The second is the relative moving distance while the following vehicle slows down with its deceleration until the relative speed between the two vehicles becomes zero. The last is the minimum headway distance, which must be guaranteed when the relative speed reaches zero. The safety distance is therefore represented by the sum of these three terms: the first term is the difference of the distances moved by the two vehicles during the reaction time; the second follows from the velocity difference at the warning time and the relative deceleration generated when the following vehicle slows down with a desired deceleration; and the minimum headway distance is set by the minimum allowed distance between two vehicles.
It is worth mentioning that the desired deceleration in Equation (5) is the only unknown parameter, and it has a direct relationship with the warning strategy. By simple derivation, the estimated desired deceleration follows from the estimated inter-vehicle distance D based on the vehicles' positions. The qualitative deceleration-distance curve is shown in Figure 2b. For a given distance, the corresponding value is the least desired deceleration. The following vehicle performs reasonable braking according to the real-time distance information to avoid accidents.
Remark 1. The above qualitative analysis relies on some assumptions, i.e., the leading vehicle keeps a constant deceleration before stopping, while the following vehicle keeps its acceleration unchanged during the reaction time and then slows down with an assigned constant deceleration no less than that of the leading vehicle. More complicated scenarios and the evaluation of the related parameters are left for future work. Given that the measured distance depends on the positions of the two vehicles, the following work focuses on the problem of the vehicle's positioning.
2.3. Task Sequence and Data Frame Format
The JCPW system refers to two important sub-functions, positioning and warning; therefore, an efficient task sequence should be designed. We first give a detailed explanation and analysis of the designed task sequences. For convenience, the basic structure is shown in Figure 3.
For one time interval T, there are three parts. The first is used for positioning. Without loss of generality, we assume several RSUs participate in the current CP, and the positioning interval is divided accordingly, with each RSU allocated one time span for its localization data. In each such span, a series of orthogonal code sequences of a given length is transmitted to assist the vehicle in AOD detection. The data format is shown in Figure 4, where we depict a multi-RSU case in which each orthogonal sequence is repeated a number of times.
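One common family of orthogonal localization codes, and the one named in the numerical example later in this section, is the Walsh-Hadamard family. A sketch of generating such codes via the Sylvester construction follows; the actual code length and the per-antenna assignment used in the paper are not specified here.

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix of order n (n must be a power of two).

    Rows are mutually orthogonal +/-1 sequences, suitable as
    illustrative localization codes.
    """
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])   # Sylvester doubling step
    return H

H = hadamard(8)
# Orthogonality check: H @ H.T = n * I, so cross-correlations between
# distinct rows vanish, which is what enables per-antenna separation
# by correlation at the receiver.
gram = H @ H.T
```

Because distinct rows have zero cross-correlation, a receiver that correlates against its assigned row suppresses the sequences transmitted by the other antennas.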
As shown in the aforementioned kinematic models (see Equation (2)), during the AOD detections, all the data packets transmitted from the RSUs and received by the vehicle are completed within one positioning time span. Since the speed of light is much greater than the vehicle's speed and the data rate of the orthogonal sequences is high, it is reasonable to neglect the time consumed in the transmitting and receiving stage; consequently, a computationally efficient algorithm for AOD detection is very necessary for accurate localization. That is to say, the time span of data processing becomes the dominant factor in the subsequent localization. During data processing, the vehicle's position is roughly regarded as unchanged; after that, it is calculated by the inertial equations. Obviously, a short processing time helps reduce the position error. In addition, during the data processing, multiple two-dimensional AODs are estimated separately because the RSUs cooperate in a successive manner. Strictly speaking, this AOD information cannot be mapped to the same position because the vehicle is moving; however, the time differences between the successively transmitted localization data from different RSUs are negligible, so we can consider all estimated AODs as effective measurements of the initial position. For example, consider a scenario in which all five RSUs are located approximately 100 m around the vehicle; the system bandwidth is 10 MHz; the localization data codes are selected from a Hadamard matrix, each repeated 20 times; and the vehicle moves at 20 m/s along the longitudinal direction. Considering only the line-of-sight communication link, during one localization data time, the vehicle moves forward 0.256 cm; all five RSUs together correspond to a movement of 1.28 cm.
The second part is used for warning. In this stage, there are two basic contents, i.e., the active state information reporting from the vehicle to the RSUs and the periodic traffic status received by the vehicle from the RSUs. In the time sequence, the first event has the highest priority, because all other vehicles depend on the current vehicle's status to make reasonable strategies for further driving. Different from common state information, emergent events, for example traffic accidents, loss of control, malfunction, etc., are much more important for traffic safety. In particular, given the randomness of traffic accidents, conflicts between tasks in the time sequence are inevitable; see Figure 3, in which an accident labeled by a red star occurs in the CP stage. In such scenarios, the vehicle's on-board central processing unit should execute an interruption and switch to the warning stage. If the active reporting module still works, the last positioning results and other related vehicle state information are automatically reported to the RSUs; if it unfortunately breaks down, the following vehicles take responsibility for uploading the accident information. In this way, timely traffic status reporting can be guaranteed.
Besides the above two important parts, the third one, with its own reserved time span, is kept for functions to be exploited further, for example vehicle platoon control, high-definition map matching, and so on.
3. Computationally Efficient Two-Dimensional AOD Estimation
Different from the research works [27,35], we focus on computationally efficient AOD estimation rather than simply assuming the angles are obtained beforehand or utilizing directional antennas. In the following, we take one RSU as an example, because the others share the same data model and processing procedures.
3.1. Data Model
In the designed CP system, the RSU is equipped with a uniform rectangular array (URA) of omnidirectional antenna elements, and the vehicle is equipped with a single omnidirectional antenna. The Cartesian coordinate system is shown in Figure 1. Let the antenna element at the origin o be the reference point. The antenna elements are located in one coordinate plane, with positions determined by the antenna element spacing, commonly half the wavelength of the carrier frequency.
We assume that the time and frequency synchronization between the RSU and the vehicle have been calibrated. The V2I communication is in the line-of-sight (LOS) condition, which is depicted by the Rician channel model [36]. In addition, the system works in narrow-band. Based on the designed localization data frames, all antennas simultaneously transmit orthogonal code sequences. The signals propagate from the RSU to the vehicle over different paths, resulting in the superposition of multipath components (MPCs); therefore, besides the LOS signal component, it is reasonable to assume that there are K MPCs, each with its own AOD pair. Each parameterized path is denoted by its propagation delay, attenuation coefficient, and AOD pair [33,37]. Without loss of generality, given that the wireless environment changes due to the vehicle's movement, we assume the attenuation coefficient is block-variant, i.e., it remains unchanged within one CP interval but varies across intervals; the LOS signal component is regarded as the first path in the following derivation.
Therefore, the complex baseband signal received by the vehicle can be expressed as in [37,38], where g indexes the transmission of the code sequence and the additive term is zero-mean Gaussian white noise. For the p-th signal component, the steering phase represents the phase difference between the q-th antenna and the reference one. We herein neglect the delay difference between the MPCs, and the delay induced by the LOS component can be effectively eliminated by correlation detection. By performing match-filtering, we obtain the match-filtered output with respect to each antenna in the URA, together with the noise after match-filtering.
Let the x-axis and y-axis steering vectors be defined accordingly; the match-filtered data can then be stacked into vector form, where the noise vector has the same structure as the signal vector. Furthermore, defining the joint array manifold through the Kronecker product of the two steering vectors, Equation (13) can be expressed compactly in matrix form.
3.2. Truncated Signal Subspace for AOD Estimation
Many research works such as [26,27] directly utilize the multiple signal classification (MUSIC) algorithm to achieve angle estimation. The basic spectrum function is built on the noise subspace, which is acquired by the eigenvalue decomposition of the sample covariance matrix: the signal subspace consists of the eigenvectors associated with the K largest eigenvalues, and the remaining eigenvectors form the noise subspace.
However, there are some restrictions in actual scenarios. First, it is difficult to determine the real number of MPCs, which results in an erroneous partition of the signal and noise subspaces. Specifically, if the MPCs are coherent with each other, the MUSIC algorithm has to solve the rank-deficiency problem at the cost of decreasing the array aperture. Second, the global search over the two-dimensional angle domain is exhaustive and incurs a huge time complexity. In addition, the identification of the LOS component also requires extra calculations.
As shown in the aforementioned task sequences, the time complexity of the parameter estimation during CP processing has a direct influence on the positioning error. That is to say, the two-dimensional AOD estimation algorithm for the LOS signal component should simultaneously satisfy the requirements for lower computational complexity and higher estimation accuracy. To achieve such an objective, we mainly adopt two techniques.
On the one hand, in order to effectively utilize the observed data, we can double the length of the data by taking advantage of its complex conjugate version, i.e., constructing the so-called forward-backward observing matrix [39], where the exchange matrix has ones on its anti-diagonal and zeros elsewhere. Such an operation improves the accuracy of AOD estimation.
On the other hand, we try to directly extract the signal subspace with respect to the LOS signal without performing matrix decomposition. According to subspace theory, the array manifold and the signal subspace span the same space up to a nonsingular transformation. In our scenario, the LOS signal component plays a dominant role in the propagation environment, i.e., it usually has the strongest power; therefore, the truncated signal subspace, defined by the eigenvector corresponding to the largest eigenvalue, is actually a low-dimensional approximation to the full signal subspace. Strictly speaking, reducing the subspace dimension indeed results in a partial loss of MPC information; however, we are only concerned with the LOS signal component because it is directly related to the vehicle's position. As we know, the dominance of the LOS signal is depicted by the Rician factor, defined as the power ratio between the LOS component and the other MPCs. A larger Rician factor means the array steering vector of the LOS component contributes more to the signal subspace; hence, we can use the truncated signal subspace as an estimate of the array manifold vector up to a scalar normalization.
To this end, we introduce an iterative method based on the power iteration scheme [40]. The basic principle is as follows. Consider an initial non-zero vector, which can be expressed as a linear combination of all the eigenvectors of the covariance matrix. Left-multiplying both sides of (18) by the covariance matrix and repeating the multiplication l times, the component along the dominant eigenvector grows fastest. Since the LOS signal is dominant compared with the other MPCs in our considered scenario, the iterate converges to the dominant eigenvector as l grows. The conclusion in Equation (21) illustrates that the normalized iterate is a reasonable approximation of the truncated signal subspace. Based on that, we summarize the above procedures in Algorithm 1.
Algorithm 1: Truncated signal subspace extraction.
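The power-iteration principle behind Algorithm 1 can be sketched as below. This is not the paper's exact pseudocode (its stopping rule and normalization are not reproduced here); it only illustrates how repeated multiplication by the covariance matrix isolates the dominant (LOS) eigenvector without an eigendecomposition.

```python
import numpy as np

def dominant_eigvec(R, iters=20):
    """Power iteration: approximate the eigenvector of the largest
    eigenvalue of Hermitian covariance matrix R."""
    M = R.shape[0]
    u = np.ones(M, dtype=complex)      # any non-zero start vector
    for _ in range(iters):
        u = R @ u                      # amplify the dominant component
        u = u / np.linalg.norm(u)      # re-normalize to avoid overflow
    return u

# Toy check: rank-one "LOS" covariance plus weak noise floor.
a = np.exp(1j * np.pi * np.arange(4) * 0.3)        # toy steering vector
R = 10.0 * np.outer(a, a.conj()) + 0.1 * np.eye(4)
u = dominant_eigvec(R)
# u should align with a up to a complex scalar.
```

Convergence is geometric in the ratio of the second to the first eigenvalue, which is why a large Rician factor (a strongly dominant LOS component) keeps the iteration count small, consistent with the complexity claims in Section 3.3.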
Once we obtain the truncated signal subspace by Algorithm 1, the approximate relation to the array manifold vector holds up to an unknown complex constant. We can then directly calculate the two-dimensional AOD of the LOS signal component in closed form by extracting the phases of the elements of the estimated manifold vector.
3.3. Computational Complexity Analysis
In order to estimate the two-dimensional AOD information, if the forward-backward observing matrix
is also utilized, the computational cost of the typical MUSIC algorithm attains in the order of
flops, where one flop means once complex multiplications, and
and
are the total spectrum searching times within the search range, respectively. Besides, the reduced-dimensional MUSIC algorithm [
41], which is an improved version in computational cost, attains in the order of
flops (based on a condition that the elevation
and azimuth
are transformed into two angles measured respectively by the LOS signal and the
x-axis and
y-axis). However, the proposed algorithm only attains in the order of
flops, where
is the total number of iterations in Algorithm 1. For an easy comparison, we ignore the small quantity of higher order and then give the ratio for the computational costs of both algorithms,
Numerically, for a typical antenna array scale, number of repeated codes, number of MPCs, and MUSIC search step over the elevation and azimuth ranges, the iteration number of the proposed algorithm is smaller than 20. The computational cost of the proposed algorithm is then approximately 1/27,000 of that of the MUSIC algorithm and a small fraction of that of the reduced-dimensional MUSIC algorithm.
4. Distance-Weighted Positioning
We can directly utilize the multiple estimated two-dimensional AODs to achieve the vehicle's positioning; however, the accuracy of angle detection is vulnerable to the influence of the vehicle's position. We herein call this problem the near-far effect, which indicates that the farther the vehicle is from the RSU, the less reliable the corresponding positioning result is. For example, if a vehicle is located at a large distance and travels only a little farther, the positioning can still incur a large error even if all other conditions are the same. Furthermore, such a phenomenon worsens in a noisy environment.
The fundamental reason behind the above phenomenon is that, when the elevation and/or azimuth approach the limits of their ranges, the antenna array gradually functions in endfire mode. In such a mode, the effective aperture of the antenna array is greatly reduced; that is to say, there is no longer a sufficient array aperture to guarantee satisfactory angle estimation accuracy.
As we know, elevation and azimuth establish a one-to-one mapping with a point on the lane plane, which means that the position has no ambiguity. Therefore, for a vehicle in a dense RSU deployment scenario, if it is located near a position that forces one RSU to work in endfire mode, it correspondingly falls into the non-endfire mode of the other RSUs. This inspires a method for reducing the adverse influence of the near-far effect, i.e., a distance-based weighting strategy: it trusts the "near" positioning results more than the "far" ones. To be specific, according to the estimated two-dimensional angles, we can retrieve the distance between the vehicle and the i-th RSU; without loss of generality, for all RSUs participating in the cooperative positioning, the retrieved distances determine the weighting coefficients.
Taking two RSUs for example, according to Equation (28), the positioning result associated with the smaller distance receives the larger weighting coefficient and thus plays the more important role in the final position determination; see Equations (3) and (4).
There exist several alternative weighting strategies, for example weighting based on the received signal strength (RSS) (or the estimated signal-to-noise ratio (SNR)), and weighting based on the Cramér-Rao lower bound (CRB) of the angle estimation. Among them, the RSS falls off inversely proportional to the square of the vehicle-RSU distance in free space. However, on the one hand, it is just a qualitative factor for evaluating the angle estimation and is unable to reflect the directional effect; on the other hand, although the statistical distance can be estimated from the propagation model, it is unreliable due to the random fluctuation of multipath signals. The CRB provides a bound on the covariance matrix of any unbiased angle estimate, which involves many factors such as the SNR, the snapshot length G, and the array manifold [39,42]. However, there is no simple and practical method to obtain the noise variance, and the SNR also exhibits random fluctuation [27]. In addition, the calculation of the CRB is computationally expensive and must be re-calculated whenever the angle information changes.
To sum up, we summarize the whole CP procedure in Table 1. Besides the basic data described in Section 3.1, the kernel of this procedure lies in Step 1, Step 2, and Step 5. Through the data extension in Step 1, the accuracy of angle estimation can be improved, because the Cramér-Rao lower bound is inversely proportional to the length of the observed data. In Step 2, Algorithm 1 uses an iterative manner to extract the signal subspace with respect to the LOS signal component rather than adopting a matrix decomposition; simultaneously, the subsequent AOD estimation in Step 3 is in closed form rather than based on searching, which is computationally efficient. The final estimates of the vehicle's positions in Step 7 fuse all the detected AOD information through the distance-based weighting method in Step 5.
Remark 2. Looking back at the safety distance based warning in Section 2.2, once we acquire the vehicles' positions, the desired deceleration of the following vehicle can be calculated via (6) with the aid of the designed sequential task allocation in Figure 3. Since different decelerations result in different driving experiences, the warning strategies can be expressed by a series of levels; a large value of deceleration means a high warning level and a big emergency. The thresholds will be determined and evaluated by actual driving tests in future work. Besides, the vehicle's positioning error inevitably produces an erroneous inter-vehicle distance D and further affects the desired deceleration. In order to understand this relationship, we analyze the first-order perturbation of the desired deceleration: according to (6), introducing perturbations of the desired deceleration, the inter-vehicle distance D, the velocity difference, and the acceleration, then, after a lengthy but straightforward derivation and ignoring the second-order terms, the bias of the desired deceleration is proportional to that of the inter-vehicle distance D, provided the other factors are correctly measured. In particular, based on the first-order perturbation analysis, if the positioning errors in the x and y directions are independent, identically distributed, and zero-mean, the corresponding mean-squared error of the desired deceleration follows directly.
5. Numerical Examples
In order to demonstrate the effectiveness and advantages of the proposed AOD estimation algorithm and the localization scheme, this section presents a series of Monte Carlo numerical simulations. We assume that the RSU R is deployed on top of a traffic light at a height of 6 m. The vehicle proceeds along the lane, whose width is set as 3.5 m. The onboard unit (OBU) antenna is mounted on the vehicle’s roof at a total height of 1.8 m.
In the following simulations, the system works at a fixed carrier frequency and bandwidth, and the data length for positioning is fixed. For the LOS component, we adopt the dual slope model [43] to describe the path loss, i.e., the path loss follows one slope up to the breakpoint distance and a steeper slope beyond it, with the reference term being the signal attenuation in free-space at the reference distance; the model parameters are chosen according to [43]. Besides the LOS component, the other 20 MPCs come uniformly from any direction in the angular domain; the phase and magnitude of the attenuation coefficient for each signal component are modeled as random variables with uniform distributions. We use the Ricean K factor to indicate the power proportion, which is defined as the ratio of the power in the LOS component to the total power in the diffused non-LOS components. Throughout the simulations, the truncation parameter in Algorithm 1 is kept fixed.
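A generic dual slope path loss model can be sketched as follows; the concrete parameters from [43] (free-space term, reference distance, breakpoint distance, and the two slopes) are placeholders here, not the paper’s values:

```python
import math

def dual_slope_path_loss(d, pl0=40.0, d0=1.0, db=100.0, n1=2.0, n2=4.0):
    """Dual slope path loss in dB at distance d (m).

    pl0: free-space attenuation at the reference distance d0 (placeholder),
    db:  breakpoint distance separating the two slopes (placeholder),
    n1/n2: path loss exponents before/after the breakpoint (placeholders)."""
    if d <= db:
        return pl0 + 10.0 * n1 * math.log10(d / d0)
    return (pl0 + 10.0 * n1 * math.log10(db / d0)
                + 10.0 * n2 * math.log10(d / db))
```

The model is continuous at the breakpoint and decays faster beyond it, which is the defining property of the dual slope description.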
Simulation 1: We first consider the root mean-squared error (RMSE) performance. For comparison, only one RSU participates in positioning under two cases: the “far” case, in which the vehicle is located far from the RSU, and the “near” case, in which the vehicle is close by. The noise power is determined by the bandwidth and the ambient temperature, and an extra margin accounts for other unavoidable link losses. According to different transmitting powers, different received SNRs can be set; the antenna gains are absorbed into the SNR. The total number of Monte Carlo simulations is set as 1000.
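The RMSE criterion evaluated below can be sketched as follows, with synthetic estimates standing in for the actual estimator output:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(true_pos, trials):
    """Root mean-squared positioning error over Monte Carlo trials.

    trials: (N, 2) array of estimated (x, y) positions."""
    err = np.asarray(trials) - np.asarray(true_pos)
    return np.sqrt(np.mean(np.sum(err**2, axis=1)))

# Toy illustration: 1000 estimates scattered around the true position
# with sigma = 0.1 m per axis; the expected RMSE is 0.1 * sqrt(2) m.
true_pos = np.array([10.0, 3.5])
trials = true_pos + 0.1 * rng.standard_normal((1000, 2))
val = rmse(true_pos, trials)
```

With independent per-axis errors of standard deviation σ, the two-dimensional RMSE converges to σ√2, which is a convenient sanity check for the simulation harness.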
Figure 5a reports the RMSE performance of the estimated LOS two-dimensional AOD. The MUSIC algorithm serves as a benchmark because it is a typical high-resolution algorithm. Correspondingly, Figure 5b compares the average running time of a single execution of each algorithm. For the MUSIC algorithm, the searching step and searching range cover the whole angular domain of interest. From the simulation results, we can draw two conclusions. First, for both cases, the RMSE performance of the proposed algorithm is slightly inferior to that of the MUSIC algorithm; however, its much lower computational complexity makes it a better alternative to the exhaustive searching algorithm. Second, at the same SNR level, the “far” position exhibits worse error performance than the “near” one, which confirms the near-far effect in angle based positioning.
Simulation 2: We then consider the cumulative distribution of the average absolute error of the LOS AOD estimation provided by the proposed algorithm. This evaluation criterion is defined as the probability that the absolute estimation error does not exceed a given angle scale, evaluated over a series of allowed angle scales. The purpose of this simulation is to examine the error level under different conditions. The simulation results are shown in Figure 6. We can conclude that the AOD estimation accuracy improves with the increase of the Ricean K factor, the scale of the antenna array, and the SNR; with a large K factor, a large array, and a high SNR, the error cumulative distribution concentrates sharply at small angle scales, which illustrates that the error values are strongly converging.
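The cumulative error distribution criterion can be sketched as an empirical CDF; the error values below are synthetic and only illustrate the computation:

```python
import numpy as np

def error_cdf(abs_errors, scales):
    """Empirical cumulative distribution of the absolute AOD error:
    for each allowed angle scale, the fraction of trials whose error
    does not exceed that scale."""
    e = np.sort(np.asarray(abs_errors, dtype=float))
    return np.array([np.searchsorted(e, s, side="right") / e.size
                     for s in scales])

# Toy check: ten errors of 0.1..1.0 degrees against three angle scales.
errs = [0.1 * k for k in range(1, 11)]
cdf = error_cdf(errs, [0.25, 0.5, 1.0])
```

A curve that reaches values close to 1 at small angle scales corresponds to the strongly converging error behavior reported in Figure 6.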
Simulation 3: We now evaluate the positioning performance. For convenience, we assume that two RSUs are deployed along the road. One vehicle travels along the middle line of the lane and passes five points where the CP is launched. The coordinates of these points and the corresponding AOD and distance information are listed in Table 2. The transmitting power is set as 10 dBm, the Ricean K factor is fixed, and the path loss model is the same as in the previous simulations.
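The AOD and distance entries of Table 2 can, in principle, be derived from the geometry alone; the following sketch assumes a particular axis convention (azimuth from the x-axis, elevation measured downward from the horizontal) and reuses the antenna heights given earlier in this section, so the conventions may differ from the paper’s:

```python
import math

def aod_and_distance(rsu, obu):
    """Two-dimensional AOD (azimuth, elevation; degrees) and range from
    an RSU at rsu = (x, y, z) to an OBU at obu = (x, y, z).

    Axis convention is an assumption: azimuth is measured from the
    x-axis, elevation is positive when looking down at the vehicle."""
    dx, dy, dz = (o - r for o, r in zip(obu, rsu))
    ground = math.hypot(dx, dy)                       # horizontal range
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.atan2(-dz, ground)) # RSU looks downward
    distance = math.sqrt(dx**2 + dy**2 + dz**2)
    return azimuth, elevation, distance

# RSU atop a 6 m traffic light; OBU antenna at 1.8 m (Section 5 heights).
az, el, d = aod_and_distance((0.0, 0.0, 6.0), (20.0, 1.75, 1.8))
```

Each row of a table like Table 2 can be generated this way from the CP launch coordinates, which is also a useful consistency check on the simulation geometry.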
Besides the proposed distance based weighting, we also compare two different strategies, i.e., uniform weighting and CRB based weighting. Figure 7 gives the positioning RMSE at all five points. As we can see, the CP with distance based weighting performs better than uniform weighting and is only slightly inferior to CRB based weighting. It is worth mentioning that the CRB based weighting stems from the complicated CRB expression [39]; although it gives the smallest positioning error, its fast and accurate calculation is impractical because, on the one hand, the array manifold of all MPCs and the noise variance must be known and, on the other hand, the computational burden is heavy. In contrast, the proposed method strikes a better trade-off between positioning accuracy and computational complexity.
6. Conclusions
We designed a basic framework for a joint cooperative positioning and warning system from the perspective of angle-awareness. Within this framework, the cooperative positioning model based on state representation, the warning mechanism based on safety distance, and the sequential task allocation were discussed. In order to reduce the computational complexity of angle-awareness and improve the cooperative positioning accuracy, we proposed a truncated signal-subspace based algorithm for AOD estimation and a distance based weighting strategy for position estimation, respectively. Compared with exhaustive searching based algorithms such as two-dimensional MUSIC, the proposed algorithm maintains acceptable performance while decreasing the computational complexity. Moreover, the proposed distance based weighting method achieves a level of positioning accuracy similar to that of the theoretical CRB based weighting while being far more practical. Therefore, both proposed methods can serve as better alternatives in a practical positioning and warning system. Some important issues still remain; future work will therefore focus on the optimization of RSU deployment, real-time high-accuracy trajectory tracking based on Kalman filtering, and the fusion of supplementary information such as camera or LIDAR data.