Angle-Awareness Based Joint Cooperative Positioning and Warning for Intelligent Transportation Systems

1 College of Transportation Engineering, Chang’an University, Xi’an 710064, China
2 College of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(20), 5818; https://doi.org/10.3390/s20205818
Submission received: 6 September 2020 / Revised: 10 October 2020 / Accepted: 11 October 2020 / Published: 15 October 2020

Abstract: In future intelligent vehicle-infrastructure cooperation frameworks, accurate self-positioning is an important prerequisite for better driving environment evaluation (e.g., traffic safety and traffic efficiency). We herein describe a joint cooperative positioning and warning (JCPW) system based on angle information. In this system, we first design the sequential task allocation of cooperative positioning (CP) and warning and the related frame format of the positioning packet. With the cooperation of road-side units (RSUs), multiple groups of two-dimensional angle-of-departure (AOD) estimates are obtained and then transformed into the vehicle’s position. Considering the system’s computational efficiency, a novel AOD estimation algorithm based on a truncated signal subspace is proposed, which avoids eigendecomposition and exhaustive spectrum searching; a distance based weighting strategy is also utilized to fuse multiple independent estimations. Numerical simulations prove that the proposed method is a better alternative for achieving sub-lane-level positioning when both accuracy and computational complexity are considered.

1. Introduction

Advanced vehicular assistance systems have attracted great attention in both academic research and industrial application [1,2]. During the evolution from traditional human based driving to smart assisted driving, and even to autonomous driving, signal processing techniques aimed at diverse perception data will play a critical role. In particular, strengthening safety [3,4], regardless of the automation level of on-road vehicles, will become a principal task in the future. One of the vital issues is how to guarantee accurate localization for each on-road vehicle with at least sub-meter-level accuracy [5,6].
As we know, global navigation satellite systems (GNSS), e.g., the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), are widely used for current vehicular localization and navigation, but they cannot always satisfy the rigorous requirements of some location based services (LBS) [7,8,9]. The reasons include the navigating signals being inevitably blocked or undergoing multipath transmission in urban environments, the vehicle’s motion, the satellites’ absence, etc. More seriously, the navigating signals can be recorded, maliciously tampered with, and retransmitted to the vehicular terminals. Besides the improvement of GPS and BDS with inertial navigation (utilizing on-board kinematic sensors such as odometers, accelerometers, gyroscopes, etc.), the laser imaging detection and ranging (LIDAR) technique can achieve simultaneous localization and mapping for intelligent vehicles, which is now accepted by more and more industries [10,11,12,13]. However, there still exist some problems that cannot be neglected if it is to be commercialized [14]. The first one is the cost. The second one, also the most important one, is that the quality of LIDAR images deteriorates because of the weak reflectivity of a wet road surface, which results in some detected regions disappearing from the LIDAR images (such a difference between LIDAR images and map images will affect the further similarity calculation); in addition, irregular snow lines inside the lane and near the roadsides can also confuse lane identification. Both situations bring great hidden dangers to intelligent vehicles.
Wireless communication techniques provide a new alternative solution, which can become a powerful supplement to localization. For example, vehicular ad-hoc networks (VANETs) have been designed with dedicated protocols and differentiated quality-of-service to achieve a connected road environment [15,16], including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. In that sense, cooperative positioning (CP) [17,18] becomes an effective measure to improve the localization accuracy by jointly fusing multiple location-related parameters exchanged among a series of VANET nodes. Generally, these location based parameters include the maximum movable distance [19], channel state information (CSI) [20], the received signal strength indicator (RSSI) [21,22], the time-of-arrival (TOA), and the time-difference-of-arrival (TDOA) [23,24].
In [25], two road-side units (RSUs) are utilized to broadcast their position and road geometry information, and the vehicle combines them with odometer data and the Doppler shift of the received signals to achieve lane-level localization accuracy. References [26,27] investigated a vehicle-to-infrastructure based CP in which each vehicle measures its position through the direction-of-arrival (DOA) (acquired by a uniform linear array (ULA) on the vehicle) and the known RSU position (acquired in each beacon packet). Such CP gains a better performance than GPS based localization. However, there exist four potential shortcomings. The first one is that the vehicle’s motion and road unevenness generate mechanical vibration, so that the ULA becomes unstable, which consequently influences the DOA estimation. The second one is that the accuracy of DOA estimation highly depends on the array aperture, so the vehicle should be equipped with a larger scale antenna array, which will increase the cost; although the coprime array [28,29] can achieve increased degrees of freedom compared with the traditional uniform array, it still requires complicated operations. The third one is that one-dimensional angle estimation alone cannot distinguish the vehicle in the adjacent lane, which induces ambiguity in the spatial position. The last one is the angle estimation algorithm, because it must simultaneously consider the estimation accuracy and the computational complexity rather than focus on only one of them [30], which is a basic requirement for timeliness. In addition, among the above CP strategies, an important function, i.e., cooperative warning with respect to traffic safety, is neglected. In a highly connected road environment, each intelligent vehicle has the responsibility of guaranteeing driving safety and road efficiency, that is, the so-called cooperative safety.
Such cooperation is embodied at least in an active reporting of the traffic accident or vehicle fault by V2I links to RSUs; consequently, each individual road, even the whole road networks, can gain global control.
In this paper, aiming at solving the aforementioned impasses, we introduce a novel joint cooperative positioning and warning system on the basis of spatial angle information. The positioning mainly depends on the wireless V2I links and kinematic model without considering the GNSS. The warning is depicted by the safety distance and the vehicle’s deceleration. Besides the detailed design of sequential allocation for CP warning tasks and the data format of localization packets, we also propose the computationally efficient AOD estimation algorithm and multiple detection fusing strategy to achieve sub-lane level positioning. To summarize, the main contributions of this paper are three-fold:
  • An angle-awareness based framework of the joint cooperative positioning and warning system is first discussed, which includes the positioning model based on state representation, the warning mechanism based on safety distance, and the CP warning task allocation and related data formats for periodic interaction.
  • To decrease the computational complexity, a truncated signal subspace based algorithm for angle estimation is proposed, which avoids matrix eigen decomposition and spectra searching.
  • To decrease the adverse influence of the near-far effect caused by angle estimation and improve the positioning accuracy, a distance based weighting strategy is also designed, which only utilizes the estimated positions without extra calculations.
The rest of this paper is organized as follows: In Section 2, the overview of the joint cooperative positioning and warning (JCPW) system is introduced. More details on the proposed algorithm are described in Section 3. Section 4 discusses the distance based weighting strategy for the initial position estimation. The numerical results are shown in Section 5, and Section 6 gives the conclusions.
Notation: ( · ) * , ( · ) T , and ( · ) H denote the complex conjugate, transpose, and Hermitian transpose, respectively. Symbol “⊗” denotes the Kronecker product.

2. Joint Cooperative Positioning and Warning Overview

We assume as a reference scenario in Figure 1 a fully connected intelligent vehicle network deployed along a given road segment with a double-lane width equal to W meters, belonging to an urban canyon environment. Without loss of generality, two different kinds of nodes are present: $L$ RSUs, $\{R_1, R_2, \ldots, R_L\}$, placed on the roadsides; and multiple intelligent vehicles, randomly located on the lanes, traveling along arbitrary trajectories. The JCPW system mainly depends on the on-board processor and RF wireless module to achieve data transceiving and positioning.

2.1. Cooperative Positioning

For intelligent vehicle V, if requiring sub-lane-level localization accuracy, a more appropriate way is to perform self-positioning with the aid of multiple RSUs’ cooperation. There are two main reasons. One is that the centralized positioning schemes will give rise to a heavy computational load for RSUs, and they usually require complex algorithms or multi-user schemes to discern vehicles. The other is that, although the V2V communications can provide relative position information, it is usually unreliable because the communication links vary dynamically and are vulnerable to being blocked or having severe distortion [31]. Besides, GNSS is also a common choice. However, taking the GPS for example, it suffers from GPS signal blockage and multipath, as well as inadequate accuracy (∼10 m) [32]. Therefore, we only consider the V2I communications, in which the line-of-sight (LOS) path generally dominates in such a condition [33]. In addition, the cooperative positioning in our system refers to multiple RSUs’ cooperation for achieving decentralized positioning.
The CP stage includes the localization data transmitting and receiving, AOD estimating, and position calculating. The first event occupies time span $T_{TR}$, and the last two events occupy time span $T_P$; that is to say, a CP interval $T_{CP} = T_{TR} + T_P$. For a better understanding, let the RSU position $\mathbf{p}_R = [x_R, y_R, z_R]^T$ be fixed and exactly known, while the vehicle’s position $\mathbf{p}_V(t) = [x_V(t), y_V(t), z_V(t)]^T$ is unknown. It is reasonable to consider that the velocity of the vehicle remains nearly invariant during $T_{TR}$, and the acceleration contributes very little due to the sufficiently high packet rate. The real-time velocity reading $\mathbf{v}(t) = [v_x(t), v_y(t)]^T$ and acceleration reading $\mathbf{a}_V(t) = [a_x(t), a_y(t)]^T$ can be acquired from the vehicular sensors. Besides, the two-dimensional AOD information $\theta$ and $\phi$ denotes the elevation (i.e., the angle between the z-axis and the LOS signal) and azimuth (i.e., the angle between the x-axis and the projection of the LOS signal) of a vehicle, respectively, which is defined in the Cartesian coordinates of the RSU; see Figure 1.
We define the vehicle’s state vector as:
$$\mathbf{c}(t) = [\mathbf{p}_V^T(t) \ \ \mathbf{v}^T(t)]^T. \tag{1}$$
Then, in the $k$-th CP interval, $k = 1, 2, \ldots$, the kinematics used in the positioning stage is a constant model with invariant velocity and acceleration. It is given by the following iterative formulas,
$$\mathbf{c}(t_s) = \begin{cases} \mathbf{c}(T_{k-1}) + \mathbf{w}(t_s), & T_{k-1} \le t_s < T_{k-1} + T_{TR} \\ \boldsymbol{\Gamma}\mathbf{c}(t_{s-1}) + \boldsymbol{\Xi}\mathbf{a}_V(t_{s-1}), & T_{k-1} + T_{TR} \le t_s < T_k \end{cases} \tag{2}$$
where we use subscript “s” for discrete time indexes, w ( t s ) represents the noise item, and:
$$\boldsymbol{\Gamma} = \begin{bmatrix} \mathbf{I}_2 & \mathbf{0}_{2\times 1} & (t_s - t_{s-1})\mathbf{I}_2 \\ \mathbf{0}_{1\times 2} & 1 & \mathbf{0}_{1\times 2} \\ \mathbf{0}_{2\times 2} & \mathbf{0}_{2\times 1} & \mathbf{I}_2 \end{bmatrix}, \qquad \boldsymbol{\Xi} = \begin{bmatrix} \frac{1}{2}(t_s - t_{s-1})^2\,\mathbf{I}_2 \\ \mathbf{0}_{1\times 2} \\ (t_s - t_{s-1})\,\mathbf{I}_2 \end{bmatrix}$$
where $\boldsymbol{\Gamma}$ denotes the state transition matrix that propagates the vehicle’s state at time $t_{s-1}$ to the one at time $t_s$, and $\boldsymbol{\Xi}$ is the control matrix that applies the effect of the acceleration $\mathbf{a}_V(t_{s-1})$ to the current vehicle’s state vector.
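As a concrete illustration, the state propagation of Equation (2) can be sketched in a few lines of NumPy; the time step, initial state, and acceleration reading below are hypothetical values, not taken from the paper.

```python
import numpy as np

def transition_matrices(dt):
    """Build the state transition matrix Gamma and the control matrix Xi
    for the 5-dimensional state c = [x, y, z, vx, vy]^T."""
    I2 = np.eye(2)
    Gamma = np.zeros((5, 5))
    Gamma[0:2, 0:2] = I2           # planar position keeps itself
    Gamma[0:2, 3:5] = dt * I2      # position advances by velocity
    Gamma[2, 2] = 1.0              # height z stays constant
    Gamma[3:5, 3:5] = I2           # velocity keeps itself
    Xi = np.zeros((5, 2))
    Xi[0:2, :] = 0.5 * dt**2 * I2  # acceleration term on position
    Xi[3:5, :] = dt * I2           # acceleration term on velocity
    return Gamma, Xi

# one propagation step: c(t_s) = Gamma c(t_{s-1}) + Xi a_V(t_{s-1})
c = np.array([0.0, 0.0, 1.5, 20.0, 0.0])  # hypothetical state [x, y, z, vx, vy]
a_V = np.array([0.5, 0.0])                # hypothetical acceleration reading
Gamma, Xi = transition_matrices(dt=0.1)
c_next = Gamma @ c + Xi @ a_V
```

With a 0.1 s step, the x position advances by the velocity term plus the small acceleration contribution, while z remains fixed, matching the block structure of $\boldsymbol{\Gamma}$ and $\boldsymbol{\Xi}$.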
In (2), the first formula gives the initial position states, and the second one gives the estimates of the subsequent positions over the remaining time span with the help of the velocity and acceleration readings. The elements $x_V(T_{k-1})$ and $y_V(T_{k-1})$ of $\mathbf{c}(T_{k-1})$ are the only unknown parameters. Once the elevation $\theta$ and azimuth $\phi$ are estimated, which will be introduced in Section 3, we can further estimate the vehicle’s initial position at time $T_{k-1}$ according to basic geometric relations, i.e.,
$$\hat{x}_V(T_{k-1}) = \frac{1}{K}\sum_{i=1}^{K} \omega_i \left[ \bar{z}_i \tan\hat{\theta}_i \cos\hat{\phi}_i + x_{R_i} \right] \tag{3}$$
$$\hat{y}_V(T_{k-1}) = \frac{1}{K}\sum_{i=1}^{K} \omega_i \left[ \bar{z}_i \tan\hat{\theta}_i \sin\hat{\phi}_i + y_{R_i} \right] \tag{4}$$
where $\bar{z}_i = z_{R_i} - z_V$. Herein, we consider K RSUs for cooperation. The weighting coefficients $\{\omega_i\}_{i=1}^{K}$ need to be designed to improve the localization accuracy, which is reserved for Section 4. Based on the above kinematic model and the estimated position information, the whole trajectory can be retrieved.
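A minimal sketch of the fusion rule in Equations (3) and (4), assuming the weights are normalized to sum to one (equivalent to the $\frac{1}{K}\sum_i \omega_i[\cdot]$ form when the $\omega_i$ average to one); the helper name and the numbers are hypothetical:

```python
import numpy as np

def fuse_positions(theta, phi, rsu_pos, z_V, w):
    """Map each RSU's AOD estimate (elevation theta, azimuth phi) to a
    position candidate via Eqs. (3)-(4), then take a weighted average."""
    theta = np.asarray(theta, dtype=float)
    phi = np.asarray(phi, dtype=float)
    rsu_pos = np.asarray(rsu_pos, dtype=float)  # K x 3 array of RSU positions
    z_bar = rsu_pos[:, 2] - z_V                 # height differences z_bar_i
    x = z_bar * np.tan(theta) * np.cos(phi) + rsu_pos[:, 0]
    y = z_bar * np.tan(theta) * np.sin(phi) + rsu_pos[:, 1]
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                             # normalized weights
    return float(w @ x), float(w @ y)

# single RSU mounted 4.5 m above the vehicle's antenna, LOS along the x-axis
x_hat, y_hat = fuse_positions([np.arctan(2.0)], [0.0], [[0.0, 0.0, 6.0]], 1.5, [1.0])
```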
It is worth mentioning that, based on (2), the positioning at time T k can be given by state vector c ( T k ) = Γ c ( T k 1 ) + Ξ a V ( T k 1 ) and measurement vector γ ( T k ) = H ( c ( T k ) ) + v ( T k ) , where  H ( c ( T k ) ) is a nonlinear mapping function with respect to the AOD estimation and v ( T k ) is noise. To reduce the influence of noise, the extended Kalman filter (EKF) [34] will be a better choice due to it being able to provide optimal position estimations in the mean-squared sense for a Gaussian noise distribution. However, two basic restrictions should be considered beforehand, i.e., the identification of the probability distribution and the determination of the covariance information of the measurement error, which are not easy tasks due to the limited number of sampling in actual scenarios. Therefore, we leave them as future works.

2.2. Warning

In our considered system, the traffic safety is guaranteed in an active manner, i.e., each intelligent vehicle periodically monitors the surrounding traffic status that is broadcast by RSUs; meanwhile, its own states, such as traffic accident, component fault, velocity, acceleration/deceleration rate, remaining energy, etc., should be reported in a timely manner. To do so, the RSUs can infer the global traffic status, and each vehicle can also acquire safety-related traffic parameters, for example deceleration and safety distance.
We now consider a basic scenario that is depicted in Figure 2a, in which vehicle $V_1$ with length $L_1$, velocity $v_1$, and acceleration $a_1$ and vehicle $V_2$ with length $L_2$, velocity $v_2$, and acceleration $a_2$ are driving along a road. According to the vehicle kinematics, we can calculate the safety distance $D_s$ to characterize the warning strategy. The total safety distance includes three components. The first one is distance $D_w$, which represents the relative moving distance of vehicle $V_2$ after it receives the warning message from the RSUs and begins to decelerate. The second one is distance $D_r$, which represents the relative moving distance when $V_2$ slows down with deceleration $a_{2d}$ until the relative speed between the two vehicles becomes zero. The last one is the minimum headway distance $D_h$, which needs to be guaranteed when the relative speed reaches zero. Therefore, the safety distance is represented by:
$$D_s = D_w + D_r + D_h \tag{5}$$
where $D_w = d_2 - d_1$, and $d_1$ and $d_2$ denote the moving distances of $V_1$ and $V_2$ in time span $T_{R2v}$, respectively, so that $D_w = (v_2 - v_1)T_{R2v} + \frac{1}{2}(a_2 - a_1)T_{R2v}^2$. As we know, the velocity difference of the two vehicles at time $(t_0 + T_{R2v})$ is $\Delta v = v_{2n} - v_{1n} = (v_2 - v_1) + (a_2 - a_1)T_{R2v}$, and vehicle $V_2$ begins to slow down with a desired deceleration $a_{2d}$, which generates the relative deceleration $\Delta a = a_{2d} - a_1$; therefore, we can get $D_r = d_4 - d_3 = \int_0^{\Delta v / \Delta a} (\Delta v - \Delta a \cdot t)\, dt = \frac{(\Delta v)^2}{2\Delta a}$. The minimum headway distance $D_h = \frac{1}{2}(L_1 + L_2) + d_{min}$, where $d_{min}$ is the minimum distance between the two vehicles.
It is worth mentioning that the desired deceleration $a_{2d}$ in Equation (5) is the only unknown parameter, which has a direct relationship with the warning strategy. By simple derivation, the estimated desired deceleration is:
$$\hat{a}_{2d} = \frac{(\Delta v)^2}{2(\hat{D} - D_w - D_h)} + a_1 \tag{6}$$
where $\hat{D}$ is the estimated inter-vehicle distance $D$ based on the vehicles’ positions. The qualitative deceleration-distance curve is shown in Figure 2b. For a given distance $D_{temp}$, the corresponding $a_{temp}$ is the least desired deceleration. Vehicle $V_2$ performs reasonable braking according to the real-time distance information to avoid accidents.
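The warning computation above can be condensed into a short function following Equations (5) and (6); the function name and the numeric example are hypothetical (SI units):

```python
def desired_deceleration(v1, v2, a1, a2, T_R2v, L1, L2, d_min, D_hat):
    """Least desired deceleration of the following vehicle V2 per Eqs. (5)-(6).
    D_hat is the estimated inter-vehicle distance."""
    D_w = (v2 - v1) * T_R2v + 0.5 * (a2 - a1) * T_R2v**2  # relative travel before braking
    D_h = 0.5 * (L1 + L2) + d_min                          # minimum headway distance
    dv = (v2 - v1) + (a2 - a1) * T_R2v                     # relative speed at t0 + T_R2v
    # the remaining gap D_hat - D_w - D_h must absorb D_r = dv^2 / (2 * Delta_a)
    return dv**2 / (2.0 * (D_hat - D_w - D_h)) + a1

# V2 closes at 5 m/s on V1; both currently at constant speed
a2d_hat = desired_deceleration(v1=15.0, v2=20.0, a1=0.0, a2=0.0, T_R2v=0.5,
                               L1=4.0, L2=4.0, d_min=2.0, D_hat=30.0)
```

A larger estimated distance $\hat{D}$ yields a smaller required deceleration, matching the qualitative curve in Figure 2b.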
Remark 1. 
The above qualitative analysis actually relies on some assumptions, i.e., vehicle $V_1$ keeps a constant deceleration before stopping, and vehicle $V_2$ keeps its acceleration unchanged during $T_{R2v}$ and then slows down with an assigned constant deceleration no less than that of vehicle $V_1$. More complicated scenarios and the evaluation of the related parameters are left for future work. Given that the measured distance $\hat{D}$ depends on the positions of the two vehicles, the following work will focus on the problem of the vehicle’s positioning.

2.3. Task Sequence and Data Frame Format

The JCPW system refers to two important sub-functions: positioning and warning; therefore, an efficient task sequence should be designed. We first give a detailed explanation and analysis about the designed task sequences. For convenience, the basic structure is shown in Figure 3.
For one time interval T, there are three parts. The first one, with time interval $T_{CP}$, is used for positioning. Without loss of generality, we assume K RSUs participate in the current CP; that is to say, a total of $K \cdot T_L$ is required, where $T_L$ denotes the time span of localization data allocated to one RSU. In each $T_L$, a series of orthogonal code sequences $c(t)$ with length $T_O$ is transmitted to assist the vehicle in AOD detection. The data format is shown in Figure 4, where we depict a case with K = 3 RSUs, and the orthogonal sequence $c(t)$ is repeated $G+1$ times with a total length $T_L = (G+1)T_O$.
As shown in the aforementioned kinematic model (see Equation (2)), during the AOD detections, all the data packets transmitted from the RSUs and received by the vehicle are completed within a time span of $T_{TR}$. Since the speed of light is much greater than the vehicle’s speed and the data rate of the orthogonal sequences is high, it is reasonable to neglect the time consumption of the transmitting and receiving stage; consequently, a computationally efficient algorithm for AOD detection is very necessary for accurate localization. That is to say, the time span of data processing, i.e., $T_P = T_{CP} - T_{TR}$, will become a dominant factor in the subsequent localization. In $T_{TR}$, the vehicle’s position is roughly considered unchanged; after that, it is calculated by the inertial equations. Obviously, a small $T_P$ is beneficial for reducing the position error. In addition, during the data processing, the multiple two-dimensional AODs are estimated separately because the RSUs cooperate in a successive manner. Strictly speaking, this AOD information cannot be transformed into the same position because the vehicle is moving during $T_{TR}$; however, the time differences between the successively transmitted localization data from different RSUs are also negligible; that is to say, we can treat all K estimated AODs as effective measurements of the initial position $x_V(T_{k-1})$ and $y_V(T_{k-1})$. For example, consider a scenario in which all K = 5 RSUs are located approximately 100 m around the vehicle; the system bandwidth is 10 MHz; the localization data codes are selected from a 64 × 64 Hadamard matrix, each repeated 20 times; and the vehicle moves at 20 m/s along the longitudinal direction. Considering only the line-of-sight communication link, during one localization data time, the vehicle moves forward 0.256 cm; over all five RSUs, it accumulates a movement of 1.28 cm.
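The movement figures in this example can be checked with a few lines of arithmetic (64-chip codes at a 10 MHz chip rate, each repeated 20 times, vehicle speed 20 m/s):

```python
chip_rate = 10e6   # system bandwidth / chip rate, Hz
code_len = 64      # chips per Hadamard code
repeats = 20       # repetitions of each code
speed = 20.0       # vehicle speed, m/s
n_rsu = 5          # cooperating RSUs

t_one_rsu = code_len * repeats / chip_rate  # localization data time per RSU, s
move_one = speed * t_one_rsu                # vehicle movement in one RSU slot, m
move_all = n_rsu * move_one                 # movement over all five RSU slots, m
print(move_one * 100, "cm,", move_all * 100, "cm")
```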
The second part, with time span $T_W = T_{v2R} + T_{R2v}$, is used for warning. In this stage, there are two basic contents, i.e., the active state information reported from the vehicle (to the RSUs) within time span $T_{v2R}$ and the periodic traffic status received from the RSUs (by the vehicle) within time span $T_{R2v}$. In the time sequence, the first event has the highest priority because all other vehicles depend on the current vehicle’s status to make a reasonable strategy for further driving. Different from the common state information, emergent events, for example traffic accidents, losing control, malfunctions, etc., are much more important for traffic safety. In particular, given the randomness of traffic accidents, conflicts among the tasks in the time sequence are inevitable; see Figure 3, in which the accident labeled by a red star occurs in the CP stage. In such scenarios, the vehicle’s on-board central processing unit should execute an interruption and switch to the warning stage. If the active reporting module still works, the last positioning results and other related vehicle state information will automatically be reported to the RSUs; if it unfortunately breaks down, the subsequent vehicles will take responsibility for uploading the accident information. By doing so, timely traffic status reporting can be guaranteed.
Besides the above two important parts, the third one is reserved for functions to be further exploited, for example, vehicle platoon control, high-definition map matching, and so on. The reserved time span is $T - T_{CP} - T_W$.

3. Computationally Efficient Two-Dimensional AOD Estimation

Different from the research works [27,35], we focus on computationally efficient AOD estimation rather than simply assuming the angles are obtained beforehand or utilizing directional antennas. We herein take one RSU as an example in the following content, because the others share the same data model and processing procedures.

3.1. Data Model

In the designed CP system, the RSU is equipped with a uniform rectangular array (URA) of $M \times N$ omnidirectional antenna elements, and the vehicle is equipped with a single omnidirectional antenna. The Cartesian coordinate system is shown in Figure 1. Let the antenna element at the origin $o$ be the reference point. The $(m,n)$-th antenna is located in the $xoy$ plane with coordinate $(\frac{\lambda}{2}m, \frac{\lambda}{2}n, 0)$, where $\lambda$ is the wavelength of the carrier frequency, the antenna element spacing is $d = \lambda/2$, and $m = 0, 1, \ldots, M-1$; $n = 0, 1, \ldots, N-1$.
We assume that the time synchronization and frequency synchronization between the RSU and the vehicle have been calibrated. The V2I communication is in the line-of-sight (LOS) condition, which is depicted by the Rician channel model with parameter $\kappa$ [36]. In addition, the system works in narrow-band. Based on the designed localization data frames, all $MN$ antennas simultaneously transmit orthogonal code sequences $c_q(t)$, $q = 1, \ldots, MN$. The signals propagate from the RSU to the vehicle over different paths, resulting in the superposition of multipath components (MPCs); therefore, besides the LOS signal component, it is reasonable to assume that there are K MPCs, each of which has its own AOD pair $\theta_p$ and $\phi_p$, $p = 1, 2, \ldots, K$. Let the propagation delay $\tau_p$, attenuation coefficient $A_p$, and AOD pair $\{\theta_p, \phi_p\}$ denote the p-th parameterized path [33,37]. Without loss of generality, given that the wireless environment undergoes changes due to the vehicle’s movement, we assume the attenuation coefficient $A_p$ is block-variant, i.e., it remains unchanged within $T_O$ but varies across different $T_O$; the LOS signal component is deemed the first path in the following derivation.
Therefore, the complex baseband signal received by the vehicle can be expressed as [37,38]:
$$r(t) = \sum_{p=1}^{K}\sum_{q=1}^{MN}\sum_{g=0}^{G} A_p[g]\, e^{j\varphi_q(\theta_p,\phi_p)}\, c_q(t - gT_O - \tau_p) + n(t) \tag{7}$$
where $g$ indexes the transmissions of $c_q(t)$, and $n(t)$ is zero-mean Gaussian white noise with covariance $\sigma_n^2$. For the p-th signal component, $\varphi_q(\theta_p, \phi_p)$ represents the phase difference between the q-th antenna and the reference one; obviously, $\varphi_1(\theta_p, \phi_p) = 0$.
We herein neglect the delay differences between MPCs, i.e., $\tau_1 \approx \tau_p$, $p = 2, \ldots, K$; the delay $\tau_1$ induced by the LOS component can be effectively eliminated by correlation detection. By performing match-filtering, we can get:
$$y_{m,n}[g] \approx \sum_{k=1}^{K} A_k[g]\, e^{j(m-1)\mu_k}\, e^{j(n-1)\nu_k} + w_{m,n}[g] \tag{8}$$
where:
$$\mu_k = \frac{2\pi d \sin\theta_k \cos\phi_k}{\lambda}, \qquad \nu_k = \frac{2\pi d \sin\theta_k \sin\phi_k}{\lambda} \tag{9}$$
$y_{m,n}[g]$ is the match-filtered result with respect to the $(m,n)$-th antenna of the URA, and $w_{m,n}[g]$ is the noise after match-filtering.
Let $\mathbf{a}(\mu_k)$ and $\mathbf{b}(\nu_k)$ be the x-axis and y-axis steering vectors,
$$\mathbf{a}(\mu_k) = [1, e^{j\mu_k}, e^{j2\mu_k}, \ldots, e^{j(M-1)\mu_k}]^T \tag{10}$$
$$\mathbf{b}(\nu_k) = [1, e^{j\nu_k}, e^{j2\nu_k}, \ldots, e^{j(N-1)\nu_k}]^T \tag{11}$$
and let:
$$\mathbf{y}[g] = [y_{1,1}[g], y_{2,1}[g], \ldots, y_{M,1}[g], \ldots, y_{M,N}[g]]^T \tag{12}$$
then it holds that:
$$\mathbf{y}[g] = [\mathbf{b}(\nu_1)\otimes\mathbf{a}(\mu_1), \ldots, \mathbf{b}(\nu_K)\otimes\mathbf{a}(\mu_K)]\, \mathbf{s}[g] + \mathbf{w}[g] \tag{13}$$
where the vector $\mathbf{s}[g] = [A_1[g], A_2[g], \ldots, A_K[g]]^T \in \mathbb{C}^{K\times 1}$, and the noise vector $\mathbf{w}[g]$ has the same form as $\mathbf{y}[g]$. Furthermore, defining $\mathbf{A} = [\mathbf{a}(\mu_1), \mathbf{a}(\mu_2), \ldots, \mathbf{a}(\mu_K)] \in \mathbb{C}^{M\times K}$ and $\mathbf{B} = [\mathbf{b}(\nu_1), \mathbf{b}(\nu_2), \ldots, \mathbf{b}(\nu_K)] \in \mathbb{C}^{N\times K}$, the joint array manifold $\tilde{\mathbf{A}}$ collects the columns $\mathbf{b}(\nu_k)\otimes\mathbf{a}(\mu_k)$ (the column-wise Kronecker product of $\mathbf{B}$ and $\mathbf{A}$), so Equation (13) can be expressed as:
$$\mathbf{y}[g] = \tilde{\mathbf{A}}\mathbf{s}[g] + \mathbf{w}[g] \tag{14}$$
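The construction of the joint manifold in Equations (9)–(14) can be sketched as follows (a NumPy sketch; `steering`, `spatial_freqs`, and `manifold` are hypothetical helper names):

```python
import numpy as np

def steering(omega, num):
    """Uniform steering vector [1, e^{j*omega}, ..., e^{j*(num-1)*omega}]^T."""
    return np.exp(1j * omega * np.arange(num))

def spatial_freqs(theta, phi, d_over_lambda=0.5):
    """Spatial frequencies (mu, nu) of Eq. (9) for one AOD pair."""
    mu = 2.0 * np.pi * d_over_lambda * np.sin(theta) * np.cos(phi)
    nu = 2.0 * np.pi * d_over_lambda * np.sin(theta) * np.sin(phi)
    return mu, nu

def manifold(mus, nus, M, N):
    """Joint URA manifold: column k is b(nu_k) kron a(mu_k), as in Eq. (13)."""
    cols = [np.kron(steering(nu, N), steering(mu, M)) for mu, nu in zip(mus, nus)]
    return np.column_stack(cols)

# one LOS path seen by a 4 x 3 URA
mu, nu = spatial_freqs(theta=0.5, phi=0.8)
A_tilde = manifold([mu], [nu], M=4, N=3)
```

Note the stacking of Equation (12): the x-axis index varies fastest, which is exactly the ordering produced by the Kronecker product above.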

3.2. Truncated Signal Subspace for AOD Estimation

Many research works such as [26,27] directly utilize the multiple signal classification (MUSIC) algorithm to achieve angle estimation. The basic spectrum function is given by:
$$P(\theta,\phi)\Big|_{(\theta,\phi)\to(\mu,\nu)} = \frac{1}{[\mathbf{b}(\nu)\otimes\mathbf{a}(\mu)]^H \mathbf{U}_0 \mathbf{U}_0^H [\mathbf{b}(\nu)\otimes\mathbf{a}(\mu)]} \tag{15}$$
where the noise subspace $\mathbf{U}_0$ is acquired by the eigenvalue decomposition of $\mathbf{R} = E\{\mathbf{y}[g]\mathbf{y}^H[g]\}$, i.e.,
$$\mathbf{R} = [\mathbf{U}_S\ \ \mathbf{U}_0] \begin{bmatrix} \boldsymbol{\Sigma}_S & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Sigma}_0 \end{bmatrix} \begin{bmatrix} \mathbf{U}_S^H \\ \mathbf{U}_0^H \end{bmatrix} \tag{16}$$
where the signal subspace $\mathbf{U}_S \in \mathbb{C}^{MN\times K}$ consists of the eigenvectors with respect to the K largest eigenvalues (which construct $\boldsymbol{\Sigma}_S = \mathrm{diag}\{\eta_1, \eta_2, \ldots, \eta_K\}$), and the remaining eigenvectors form the noise subspace $\mathbf{U}_0$.
However, there are some restrictions on $P(\theta, \phi)$ in actual scenarios. First, it is difficult to determine the real number of MPCs, which will result in an erroneous partition of the signal subspace and noise subspace. Moreover, if the MPCs are coherent with each other, the MUSIC algorithm has to solve the rank-deficiency problem at the cost of decreasing the array aperture. Second, the global search over the two-dimensional angle domain is exhaustive and consumes huge computation time. In addition, the identification of the LOS component also requires extra calculations.
As shown in the aforementioned task sequences, the time complexity of the parameter estimation during CP processing has a direct influence on the positioning error. That is to say, the two-dimensional AOD estimation algorithm for the LOS signal component should simultaneously satisfy the requirements for lower computational complexity and higher estimation accuracy. To achieve such an objective, we mainly adopt two techniques.
On the one hand, in order to effectively utilize the observed data Y = [ y [ 0 ] , y [ 1 ] , , y [ G ] ] , we can double the length of the data by taking advantage of the complex conjugate version, i.e., constructing the so-called forward-backward observing matrix [39],
$$\tilde{\mathbf{Y}} = [\mathbf{Y}\ \ \mathbf{J}\mathbf{Y}^*] \tag{17}$$
where $\mathbf{J}$ is the exchange matrix with ones on its anti-diagonal and zeros elsewhere. Such an operation can improve the accuracy of AOD estimation.
On the other hand, we try to directly extract the signal subspace with respect to the LOS signal without performing matrix decomposition. According to subspace theory, $\mathbf{U}_S$ and $\tilde{\mathbf{A}}$ span the same signal subspace, namely $\mathbf{U}_S = \tilde{\mathbf{A}}\mathbf{T}$, where $\mathbf{T}$ is a nonsingular matrix. In our scenario, the LOS signal component plays a dominant role in the propagation environment, i.e., it usually has the strongest power; therefore, the truncated signal subspace $\mathbf{u}_1 \in \mathbb{C}^{MN\times 1}$, defined as the eigenvector corresponding to the largest eigenvalue, is actually a low-dimensional approximation of the signal subspace $\mathbf{U}_S$. Strictly speaking, the dimension reduction of the subspace indeed results in a partial loss of MPC information; however, we are only concerned with the LOS signal component because it is directly related to the vehicle’s position. As we know, the depiction of the dominant LOS signal is the Rician $\kappa$ factor, which is defined as the power ratio between the LOS component and the other MPCs. A larger $\kappa$ means the array steering vector of the LOS component contributes more to the signal subspace; hence, we can use the truncated signal subspace $\mathbf{u}_1$ as an estimate of the array manifold vector $\tilde{\mathbf{a}}_{LOS}$, i.e., $\mathbf{u}_1 \approx \tau_1 \tilde{\mathbf{a}}_{LOS}$, where $\tau_1$ denotes the value in the first row and first column of $\mathbf{T}$.
To this end, we introduce an iterative method based on the power iteration scheme [40]. The basic principle is as follows. Consider an initial non-zero vector $\mathbf{u}_t^{(0)} \in \mathbb{C}^{MN\times 1}$; it can be expressed as a linear combination of the eigenvectors of $\mathbf{R}$, i.e.,
$$\mathbf{u}_t^{(0)} = \alpha_1 \mathbf{u}_1 + \sum_{k=2}^{K} \alpha_k \mathbf{u}_k \tag{18}$$
Left-multiplying both sides of (18) by the covariance matrix R , we have:
$$\mathbf{R}\mathbf{u}_t^{(0)} = \alpha_1 \mathbf{R}\mathbf{u}_1 + \sum_{k=2}^{K} \alpha_k \mathbf{R}\mathbf{u}_k = \alpha_1 \eta_1 \mathbf{u}_1 + \sum_{k=2}^{K} \alpha_k \eta_k \mathbf{u}_k. \tag{19}$$
If we repeat the above multiplication $l$ times, i.e.,
$$\mathbf{R}^l \mathbf{u}_t^{(0)} = \alpha_1 \eta_1^l \mathbf{u}_1 + \sum_{k=2}^{K} \alpha_k \eta_k^l \mathbf{u}_k \ \Rightarrow\ \frac{\mathbf{R}^l \mathbf{u}_t^{(0)}}{\eta_1^l} = \alpha_1 \mathbf{u}_1 + \sum_{k=2}^{K} \alpha_k \left(\frac{\eta_k}{\eta_1}\right)^l \mathbf{u}_k. \tag{20}$$
As we can see, since the LOS signal is dominant compared with the other MPCs in our considered scenario, i.e., $\eta_1 > \eta_2, \eta_3, \ldots, \eta_K$, the following conclusion holds when $l \to \infty$:
$$\mathbf{u}_t \triangleq \frac{\mathbf{R}^l \mathbf{u}_t^{(0)}}{\eta_1^l} \to \alpha_1 \mathbf{u}_1 \tag{21}$$
The conclusion in Equation (21) illustrates that u t is a reasonable approximation of the truncated signal subspace u 1 . Based on that, we summarize the above procedures in Algorithm 1.
Algorithm 1 Truncated signal subspace extraction.
  • Input: the estimated auto-correlation matrix $\hat{\mathbf{R}} = \tilde{\mathbf{Y}}\tilde{\mathbf{Y}}^H \in \mathbb{C}^{MN\times MN}$;
  • Output: the estimated $\tilde{\mathbf{a}}_{LOS}$;
    1: Initialize $l = 1$; $\epsilon = 10^{-3}$;
    2: Auxiliary vectors $\mathbf{u}_t^{(0)} = [1, 0, \ldots, 0]^T$ and $\mathbf{u}_t^{(1)} = [1, 1, \ldots, 1]^T$;
    3: while $\|\mathbf{u}_t^{(l)} - \mathbf{u}_t^{(l-1)}\| \ge \epsilon$ do
    4:   $l = l + 1$;
    5:   $\mathbf{u}_t^{(l)} = \hat{\mathbf{R}}\mathbf{u}_t^{(l-1)} / \|\hat{\mathbf{R}}\mathbf{u}_t^{(l-1)}\|$; (each iterate is normalized so that the stopping criterion is well defined)
    6: end while
    7: return $\mathbf{u}_t = \mathbf{u}_t^{(l)}$.
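Algorithm 1 amounts to a power iteration. The sketch below (hypothetical function names) normalizes each iterate and aligns its global phase so that the stopping test on successive iterates behaves; these details are our assumptions about a practical realization, not steps stated in the paper:

```python
import numpy as np

def truncated_signal_subspace(R, eps=1e-3, max_iter=200):
    """Extract the eigenvector of R associated with its largest eigenvalue
    (the truncated signal subspace u_1) by power iteration."""
    u = np.ones(R.shape[0], dtype=complex)
    u /= np.linalg.norm(u)
    for _ in range(max_iter):
        u_new = R @ u
        u_new /= np.linalg.norm(u_new)             # keep the iterate on the unit sphere
        u_new *= np.exp(-1j * np.angle(u_new[0]))  # fix the arbitrary global phase
        if np.linalg.norm(u_new - u) < eps:        # stopping rule of Algorithm 1
            return u_new
        u = u_new
    return u

# rank-one "LOS-dominant" covariance plus a small noise floor
a = np.exp(1j * 0.7 * np.arange(8))
R = np.outer(a, a.conj()) + 0.01 * np.eye(8)
u_t = truncated_signal_subspace(R)
```

Because the largest eigenvalue dominates, the iterate converges after only a few multiplications, which is the source of the low complexity claimed in Section 3.3.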
Once we get u t by Algorithm 1, according to the following approximate relation,
a ˜ L O S γ u t
where γ is an unknown complex constant. We can directly calculate the two-dimensional AOD of the LOS signal component, i.e.,
$$\hat{\mu}_1 = \frac{1}{M(N-1)} \sum_{i=1}^{M} \sum_{k=2}^{N} \angle\!\left[\frac{\tilde{a}_{LOS}(i,k)}{\tilde{a}_{LOS}(i,k-1)}\right]$$
$$\hat{\upsilon}_1 = \frac{1}{N(M-1)} \sum_{k=1}^{N} \sum_{i=2}^{M} \angle\!\left[\frac{\tilde{a}_{LOS}(i,k)}{\tilde{a}_{LOS}(i-1,k)}\right]$$
where $\angle[\cdot]$ extracts the phase, and $\tilde{a}_{LOS}(i,k)$ is the $[(i-1)M + k]$-th element of $\tilde{a}_{LOS}$.
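The closed-form estimators above are simply averages of phase differences between adjacent elements of the array grid. A small sketch (assuming, for illustration, that the $MN$-element vector reshapes row-major into an $M \times N$ grid; the exact element ordering depends on how the manifold is vectorized). Note that the unknown scale $\gamma$ cancels in each element-wise ratio:

```python
import numpy as np

def aod_phases(a_los, M, N):
    """Average phase differences along the two array axes, in the style of the
    closed-form AOD estimators above. Assumes row-major element ordering."""
    A = np.asarray(a_los).reshape(M, N)
    mu_hat = np.mean(np.angle(A[:, 1:] / A[:, :-1]))   # differences along k
    nu_hat = np.mean(np.angle(A[1:, :] / A[:-1, :]))   # differences along i
    return mu_hat, nu_hat

# synthetic steering vector with known spatial frequencies and arbitrary scale
M = N = 8
mu, nu = 0.7, -0.4
rows, cols = np.arange(M).reshape(-1, 1), np.arange(N).reshape(1, -1)
gamma = 0.5 * np.exp(1j * 1.2)
a = (gamma * np.exp(1j * (nu * rows + mu * cols))).ravel()
print(aod_phases(a, M, N))   # recovers (0.7, -0.4)
```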

3.3. Computational Complexity Analysis

In order to estimate the two-dimensional AOD information, if the forward-backward observation matrix $\tilde{Y}$ is also utilized, the computational cost of the typical MUSIC algorithm is on the order of $O\{2GM^2N^2 + M^3N^3 + \delta_1\delta_2[MN(MN-K) + MNK]\}$ flops, where one flop denotes one complex multiplication, and $\delta_1$ and $\delta_2$ are the numbers of spectrum-search grid points within the elevation and azimuth ranges, respectively. The reduced-dimensional MUSIC algorithm [41], an improved version in terms of computational cost, requires on the order of $O\{2GM^2N^2 + M^3N^3 + \delta_3[(M^2N + M^2)(MN-K) + M^2]\}$ flops (under the condition that the elevation $\theta$ and azimuth $\phi$ are transformed into the two angles measured between the LOS signal and the x-axis and y-axis, respectively). In contrast, the proposed algorithm requires only on the order of $O\{2GM^2N^2 + \delta_4 M^2N^2\}$ flops, where $\delta_4$ is the total number of iterations in Algorithm 1. For an easy comparison, ignoring the higher-order small quantities, the ratios of the computational costs are:
$$\frac{O\{2GM^2N^2 + M^3N^3 + \delta_1\delta_2[MN(MN-K) + MNK]\}}{O\{2GM^2N^2 + \delta_4 M^2N^2\}} \approx \frac{O\{2G + MN + \delta_1\delta_2\}}{O\{2G + \delta_4\}}$$
$$\frac{O\{2GM^2N^2 + M^3N^3 + \delta_3[(M^2N + M^2)(MN-K) + M^2]\}}{O\{2GM^2N^2 + \delta_4 M^2N^2\}} \approx \frac{O\{2G + MN + \delta_3(M + (M-K)/N)\}}{O\{2G + \delta_4\}}$$
Numerically, if we let the antenna array scale $M = N = 10$, the number of repeated codes $G = 20$, the total number of MPCs $K = 20$, and the search step of the MUSIC algorithm $0.1°$, then $\delta_1 = 900$ within the elevation range and $\delta_2 = 1800$ within the azimuth range; for reduced-dimensional MUSIC, $\delta_3 = 1800$. The iteration number of the proposed algorithm is smaller than 20. Therefore, the computational cost of the proposed algorithm is approximately 1/27,000 that of the MUSIC algorithm and approximately 1/270 that of the reduced-dimensional MUSIC algorithm.
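These ratios are easy to verify numerically by plugging the example values into the leading-order flop counts quoted above (variable names here are ad hoc):

```python
# Flop-count expressions from the complexity analysis, evaluated at the
# example parameters M = N = 10, G = 20, K = 20.
M = N = 10
G, K = 20, 20
d1, d2, d3, d4 = 900, 1800, 1800, 20   # search grid sizes / iteration count
MN = M * N

music    = 2*G*MN**2 + MN**3 + d1*d2*(MN*(MN - K) + MN*K)
rd_music = 2*G*MN**2 + MN**3 + d3*((M*M*N + M*M)*(MN - K) + M*M)
proposed = 2*G*MN**2 + d4*MN**2

print(round(music / proposed))      # ~27,000
print(round(rd_music / proposed))   # ~270
```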

4. Distance-Weighted Positioning

We can directly utilize the multiple estimated two-dimensional AODs to achieve the vehicle's positioning; however, the accuracy of angle detection is vulnerable to the influence of the vehicle's position. We herein call this problem the near-far effect: the farther the vehicle is from the RSU, the less reliable the corresponding positioning result is. For example, if a vehicle is located far from the RSU, even a small additional displacement can give rise to a large positioning error under otherwise identical conditions. Furthermore, this phenomenon worsens in a noisy environment.
The fundamental reason behind the above phenomenon is that, when the elevation $\theta$ approaches $90°$ and/or the azimuth $\phi$ approaches $0°$ or $180°$, the antenna array gradually operates in endfire mode. In this mode, the effective aperture of the antenna array is greatly reduced; that is, there is insufficient array aperture to guarantee satisfactory angle-estimation accuracy.
As we know, the elevation and azimuth establish a one-to-one mapping with a point on the lane plane, which means the position has no ambiguity. Therefore, for a vehicle in a dense RSU deployment scenario, if it is located near a position that drives one RSU into endfire mode, it generally falls into the non-endfire region of other RSUs. This inspires a method for reducing the adverse influence of the near-far effect: a distance-based weighting strategy, which trusts the "near" positioning results more than the "far" ones. Specifically, from the estimated two-dimensional angles, we can retrieve the distance information $D_i$ between the vehicle and the $i$-th RSU, i.e.,
$$\{\hat{\theta}_i, \hat{\phi}_i\} \rightarrow \{\hat{x}_{V,i}(T_{k-1}), \hat{y}_{V,i}(T_{k-1})\} \xrightarrow{\{P_{R_i},\, z_V\}} D_i$$
where $D_i = \| p_{R_i} - \hat{p}_{V,i}(T_{k-1}) \|_2$. Without loss of generality, for $K$ RSUs participating in cooperative positioning, we have the distance information $D_1, D_2, \ldots, D_K$, so the weighting coefficients can be determined by:
$$\omega_i = \frac{1/D_i}{\sum_{i=1}^{K} 1/D_i}.$$
Taking two RSUs as an example, if $D_1 < D_2$, then according to Equation (28), the weighting coefficients $\omega_1$ and $\omega_2$ are:
$$\omega_1 = D_2/(D_1 + D_2)$$
$$\omega_2 = D_1/(D_1 + D_2)$$
Obviously, the positioning result with the smaller distance plays the more important role in the final position determination; see Equations (3) and (4).
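Equation (28) amounts to normalized inverse-distance weighting, which can be sketched in a few lines:

```python
def distance_weights(distances):
    """Inverse-distance weights normalized to sum to one (Eq. (28) style),
    so positioning results from nearer RSUs dominate the fusion."""
    inv = [1.0 / d for d in distances]
    total = sum(inv)
    return [w / total for w in inv]

# two-RSU case: D1 < D2 gives w1 = D2/(D1+D2), w2 = D1/(D1+D2)
print(distance_weights([5.0, 10.0]))   # [2/3, 1/3]
```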
Several other weighting strategies exist, for example, weighting based on the received signal strength (RSS) (or the estimated signal-to-noise ratio (SNR)), i.e., $\omega \propto \text{RSS}$, and weighting based on the Cramér–Rao lower bound (CRB) of angle estimation, i.e., $\omega \propto 1/\text{CRB}$. In free space, the RSS falls off inversely proportional to the square of the distance between the vehicle and the RSU. However, on the one hand, it is only a qualitative factor for evaluating angle estimation and cannot reflect the directional effect; on the other hand, although a statistical distance can be estimated from the propagation model, it is unreliable due to the random fluctuation of multipath signals. The CRB provides a bound on the covariance matrix of any unbiased angle estimate and involves many factors, such as the SNR, the snapshot length $G$, and the array manifolds $a(\mu_k)$ and $b(\upsilon_k)$ [39,42]. However, there is no simple and practical method to obtain the noise variance, and the SNR itself exhibits random fluctuation [27]. In addition, the CRB is computationally expensive and must be re-calculated whenever the angle information changes.
To sum up, we summarize the whole CP procedure in Table 1. Besides the basic data $\{y^{(i)}[g]\}_{g=1}^{G}$, $i = 1, 2, \ldots, K$, as introduced in Section 3.1, the kernel of this procedure lies in Steps 1, 2, and 5. Through the data extension in Step 1, the accuracy of angle estimation can be improved because the Cramér–Rao lower bound is inversely proportional to the length of the observed data. In Step 2, Algorithm 1 extracts the signal subspace of the LOS component iteratively rather than by matrix decomposition, and the subsequent AOD estimation in Step 3 is a closed-form expression rather than a search, which is computationally efficient. The final estimates of the vehicle's positions in Step 7 fuse all $K$ detected AODs through the distance-based weighting of Step 5.
Remark 2. 
Looking back at the safety-distance based warning in Section 2.2, once we acquire the vehicles' positions, the desired deceleration of vehicle $V_2$ can be calculated via (6) with the aid of the sequential task allocation designed in Figure 3. Since different decelerations result in different driving experiences, the warning strategy can be expressed as a series of levels: for example, if $a_{L-1} < a_2^d \leq a_L$, $L = 1, 2, \ldots$, then the warning belongs to level $L$. A larger deceleration means a higher warning level and a greater emergency; the thresholds will be determined and evaluated by actual driving tests in future work. Besides, the vehicle's positioning error inevitably produces an erroneous inter-vehicle distance $D$ and further affects the desired deceleration $a_2^d$ of vehicle $V_2$. In order to understand this relationship, we analyze the first-order perturbation of the desired deceleration. According to (6), let the estimated distance $\hat{D} = D + \delta_D$; then we have:
$$(a_2^d + \delta)(D + \delta_D - \bar{D}) = \frac{1}{2}(\Delta v + \delta_{\Delta v})^2 + (a_1 + \delta_{a_1})(D + \delta_D - \bar{D})$$
where $\bar{D} = D_w + D_h$, and $\delta$, $\delta_D$, $\delta_{\Delta v}$, and $\delta_{a_1}$ denote the perturbations of the desired deceleration $a_2^d$, the inter-vehicle distance $D$, the velocity difference $\Delta v$, and the acceleration $a_1$, respectively. After a lengthy but straightforward derivation, ignoring the second-order terms, we get:
$$\delta = \frac{a_1 - a_2^d}{D - \bar{D}}\,\delta_D + \frac{\Delta v}{D - \bar{D}}\,\delta_{\Delta v} + \delta_{a_1} = \rho_1 \delta_D + \rho_2 \delta_{\Delta v} + \delta_{a_1}$$
This manifests that, theoretically, if the other factors are correctly measured, the bias of the desired deceleration $a_2^d$ is proportional to that of the inter-vehicle distance $D$. In particular, based on the first-order perturbation analysis, if the positioning errors in the x and y directions are independent, identically distributed, and zero-mean, then $E\{\delta\} = \rho_1 E\{\delta_D\} = 0$; further, the mean-squared error is $E\{\delta^2\} = \rho_1^2 E\{\delta_D^2\}$.
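The first-order sensitivity is easy to check numerically. Since Equation (6) is not reproduced in this excerpt, the sketch below assumes the form $a_2^d (D - \bar{D}) = \frac{1}{2}\Delta v^2 + a_1 (D - \bar{D})$ implied by the perturbation expansion above, and all numeric values are illustrative only:

```python
def desired_decel(D, Dbar, dv, a1):
    """Assumed form of Eq. (6): a2d * (D - Dbar) = 0.5*dv**2 + a1*(D - Dbar)."""
    return 0.5 * dv**2 / (D - Dbar) + a1

# illustrative operating point (assumed values)
D, Dbar, dv, a1 = 30.0, 8.0, 5.0, -1.0
a2d = desired_decel(D, Dbar, dv, a1)

# first-order coefficient rho1 and a small distance error delta_D
rho1 = (a1 - a2d) / (D - Dbar)
dD = 0.1
exact = desired_decel(D + dD, Dbar, dv, a1) - a2d
print(exact, rho1 * dD)   # the two agree to first order in dD
```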

5. Numerical Examples

In order to demonstrate the effectiveness and advantages of the proposed AOD estimation algorithm and the localization scheme, a series of Monte Carlo numerical simulations are presented in this section. We assume that the RSU R is deployed on top of a traffic light at a height of 6 m. The vehicle travels along the lane, whose width is set to 3.5 m. The onboard unit (OBU) antenna is mounted on the vehicle's roof at a total height of 1.8 m.
In the following simulations, the system works at carrier frequency $f_c = 5.9$ GHz with bandwidth $B = 10$ MHz, and the data length for positioning is fixed as $G = 20$. For the LOS component, we adopt the dual-slope model [43] to describe the path loss, i.e., $PL(D) = PL(D_0) + 10\gamma_1 \log_{10}(D/D_0)$ for $D_0 < D \leq D_C$, and $PL(D) = PL(D_0) + 10\gamma_2 \log_{10}(D/D_C) + 10\gamma_1 \log_{10}(D_C/D_0)$ for $D > D_C$, where $PL(D_0)$ is the free-space signal attenuation at distance $D_0$. According to [43], we chose $D_0 = 10$ m, $D_C = 80$ m, $\gamma_1 = 1.9$, and $\gamma_2 = 3.8$. Besides the LOS component, the other 20 MPCs come uniformly from any direction in the angular domain $\theta \in [0°, 90°]$ and $\phi \in [0°, 180°)$; the phase and magnitude of the attenuation coefficient of each signal component are modeled as random variables uniformly distributed in $(0, 2\pi)$ and $(0, 1)$, respectively. We use the Rician $\kappa$ factor to indicate the power proportion, defined as the ratio of the power in the LOS component to the total power in the diffuse non-LOS components. Throughout the simulations, the parameter $\epsilon$ in Algorithm 1 is set to $10^{-3}$.
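The dual-slope path-loss model is a simple piecewise function, continuous at the breakpoint. A sketch (the reference attenuation $PL(D_0)$ is left as a parameter, since its value depends on the link budget):

```python
import math

def dual_slope_pl(D, PL0, D0=10.0, Dc=80.0, g1=1.9, g2=3.8):
    """Dual-slope path loss [43]: slope g1 up to the breakpoint Dc,
    slope g2 beyond it, continuous at Dc. PL0 = PL(D0) in dB."""
    if D <= Dc:
        return PL0 + 10 * g1 * math.log10(D / D0)
    return PL0 + 10 * g2 * math.log10(D / Dc) + 10 * g1 * math.log10(Dc / D0)
```

By construction the two branches meet at $D = D_C$, so the received power varies smoothly as the vehicle crosses the breakpoint distance.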
Simulation 1: We first consider the root mean-squared error (RMSE) performance. For comparison, we consider one RSU with coordinates $\{0, 0, 6\}$ m participating in positioning under two cases. One is the "far" case, where the vehicle is located at $\{\theta, \phi\} = \{74.5°, 6.7°\}$, $D = 15.67$ m; the other is the "near" case, where the vehicle is located at $\{\theta, \phi\} = \{39.6°, 30.3°\}$, $D = 5.45$ m. The antenna array is $M = N = 10$. The noise power is given by $P_n = -174 + 10\log_{10} B = -104$ dBm at temperature $T = 300$ K; considering other unavoidable link losses, we let $P_n = -74$ dBm. Different received SNRs are then set via the transmitting power, with the antenna gains absorbed into the SNR. We let $\kappa = 5$, and the total number of Monte Carlo runs is set to 1000.
Figure 5a reports the RMSE performance of the estimated LOS two-dimensional AOD. The MUSIC algorithm serves as a benchmark because it is a typical high-resolution algorithm. Correspondingly, Figure 5b compares the average running time of a single execution of each algorithm. For the MUSIC algorithm, we set the search step to $0.1°$ and the search range from $\psi - 5°$ to $\psi + 5°$, $\psi \in \{\theta, \phi\}$. From the simulation results, we can draw two conclusions. First, in both cases, the RMSE performance of the proposed algorithm is slightly inferior to that of the MUSIC algorithm; however, its much lower computational complexity makes it a better alternative to the exhaustive-search algorithm. Second, at the same SNR level, the "far" position shows inferior error performance to the "near" one, which confirms the near-far effect in angle-based positioning.
Simulation 2: We then consider the cumulative distribution of the average absolute error of the LOS AOD estimation provided by the proposed algorithm. This evaluation criterion is defined by $P_\zeta(\theta_1, \phi_1) = P\{[\,|\hat{\theta}_1 - \theta_1| + |\hat{\phi}_1 - \phi_1|\,]/2 \leq \zeta\}$, where $\zeta$ denotes a series of allowed angle scales; we choose $\zeta$ from $0°$ to $2.5°$ with step $0.1°$. The purpose of this simulation is to examine the error level under different conditions. The simulation results are shown in Figure 6. We can conclude that the AOD estimation accuracy improves with the increase of the $\kappa$ factor, the scale of the antenna array, and the SNR. For example, at $\kappa = 3$, $M = N = 6$, and SNR = 10 dB, the error cumulative distribution reaches $P_\zeta(\theta_1, \phi_1)|_{\zeta=1.3°} = 1$, and it becomes $P_\zeta(\theta_1, \phi_1)|_{\zeta=0.5°} = 1$ at $\kappa = 8$, $M = N = 10$, and SNR = 10 dB, which illustrates that the error values are strongly concentrated.
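Computing this empirical error CDF from Monte Carlo samples is straightforward; a sketch with ad hoc names:

```python
import numpy as np

def error_cdf(theta_err, phi_err, zetas):
    """Empirical P_zeta: fraction of Monte Carlo runs whose average absolute
    AOD error (over elevation and azimuth) is within each threshold zeta."""
    avg = (np.abs(theta_err) + np.abs(phi_err)) / 2.0
    return [float(np.mean(avg <= z)) for z in zetas]

# toy error samples, in degrees
print(error_cdf(np.array([0.1, 0.5]), np.array([0.3, 0.7]), [0.2, 0.6]))
# -> [0.5, 1.0]
```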
Simulation 3: We now evaluate the positioning performance. For convenience, we assume that two RSUs are located at $\{0, 0, 6\}$ m and $\{12, 0, 6\}$ m, respectively. One vehicle travels along the middle line of the lane and passes five points where the CP is launched. The coordinates of these points and the corresponding AOD and distance information are listed in Table 2. The transmitting power is set to 10 dBm, and the Rician $\kappa = 3$. The path-loss model is the same as in the previous simulations.
Besides the proposed distance-based weighting, we also compare two other strategies, i.e., uniform weighting and CRB-based weighting. Figure 7 gives the positioning RMSE at all five points. As we can see, CP with distance-based weighting performs better than uniform weighting and is only slightly inferior to CRB-based weighting. It is worth mentioning that the CRB-based weighting stems from the complicated CRB expression [39]; although it yields the smallest positioning error, its fast and accurate calculation is impractical because, on the one hand, the array manifolds of all MPCs and the noise variance must be known and, on the other hand, the computational burden is heavy. Conversely, the proposed strategy makes a better trade-off between positioning accuracy and computational complexity.

6. Conclusions

We designed a basic framework of a joint cooperative positioning and warning system from the perspective of angle-awareness. In this framework, the cooperative positioning model based on state representation, the warning mechanism based on safety distance, and the sequential task allocation were discussed. Besides, in order to reduce the computational complexity of angle-awareness and improve the cooperative positioning accuracy, we proposed a truncated signal-subspace based algorithm for AOD estimation and a distance-based weighting strategy for position estimation, respectively. Compared with exhaustive-search algorithms such as two-dimensional MUSIC, the proposed algorithm maintains acceptable performance while greatly decreasing the computational complexity. The proposed distance-based weighting method also achieves a level of positioning accuracy similar to that of the theoretical CRB-based weighting while being more practical. Therefore, both proposed methods can serve as better alternatives in a practical positioning and warning system. Some important issues remain to be addressed; future work will focus on the optimization of RSU deployment, real-time high-accuracy trajectory tracking based on Kalman filtering, and the fusion of supplementary information such as cameras or LIDAR.

Author Contributions

Conceptualization, methodology, and validation, Z.D.; software and formal analysis, B.Y.; writing, original draft preparation, Z.D.; writing, review and editing and supervision, B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (No. 19YJCZH024), the National Key Research and Development Program of China (No. 2018YFB1601200), the Fundamental Research Funds for the Central Universities, CHD (Nos. 300102210663, 300102219646), and the National Natural Science Foundation of China (No. 61601058).

Acknowledgments

The authors would like to thank the anonymous reviewers and the Editors for their valuable comments and suggestions, which have greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bengler, K.; Dietmayer, K.; Farber, B.; Maurer, M.; Stiller, C.; Winner, H. Three decades of driver assistance systems: Review and future perspectives. IEEE Intell. Transp. Syst. Mag. 2014, 6, 6–22.
2. de Gelder, E.; Paardekooper, J.-P. Assessment of automated driving systems using real-life scenarios. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 589–594.
3. Arbabzadeh, N.; Jafari, M. A data-driven approach for driving safety risk prediction using driver behavior and roadway information data. IEEE Trans. Intell. Transp. Syst. 2017, 19, 446–460.
4. Philip, K.; Wagner, M. Autonomous vehicle safety: An interdisciplinary challenge. IEEE Intell. Transp. Syst. Mag. 2017, 9, 90–96.
5. Karlsson, R.; Gustafsson, F. The future of automotive localization algorithms: Available, reliable, and scalable localization: Anywhere and anytime. IEEE Signal Process. Mag. 2017, 34, 60–69.
6. Piao, J.; Beecroft, M.; McDonald, M. Vehicle positioning for improving road safety. Transp. Rev. 2010, 30, 701–715.
7. Albelaihy, A.; Thayananthan, V. BL0K: A new stage of privacy-preserving scope for location-based services. Sensors 2019, 19, 696.
8. Alam, N.; Dempster, A.G. Cooperative positioning for vehicular networks: Facts and future. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1708–1717.
9. Shladover, S.E.; Tan, S.-K. Analysis of vehicle positioning accuracy requirements for communication based cooperative collision warning. J. Intell. Transp. Syst. 2006, 10, 131–140.
10. Hata, A.Y.; Wolf, D.F. Feature detection for vehicle localization in urban environments using a multilayer LIDAR. IEEE Trans. Intell. Transp. Syst. 2015, 17, 420–429.
11. Agrawal, P.; Iqbal, A.; Russell, B.; Hazrati, M.K.; Kashyap, V.; Akhbari, F. PCE-SLAM: A real-time simultaneous localization and mapping using LiDAR data. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1752–1757.
12. de Miguel, M.A.; Garcia, F.; Armingol, J.M. Improved LiDAR probabilistic localization for autonomous vehicles using GNSS. Sensors 2020, 20, 3145.
13. Matthaei, R.; Bagschik, G.; Maurer, M. Map-relative localization in lane-level maps for ADAS and autonomous driving. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 49–55.
14. Aldibaja, M.; Suganuma, N.; Yoneda, K. Robust intensity based localization method for autonomous driving on snow-wet road surface. IEEE Trans. Ind. Inform. 2017, 13, 2369–2378.
15. Belanovic, P.; Valerio, D.; Paier, A.; Zemen, T.; Ricciato, F.; Mecklenbrauker, C.F. On wireless links for vehicle-to-infrastructure communications. IEEE Trans. Veh. Technol. 2010, 59, 269–282.
16. Fogue, M.; Martinez, F.J.; Garrido, P.; Fiore, M.; Chiasserini, C.-F.; Casetti, C.; Cano, J.-C.; Calafate, C.T.; Manzoni, P. Securing warning message dissemination in VANETs using cooperative neighbor position verification. IEEE Trans. Veh. Technol. 2015, 64, 2538–2550.
17. Dammann, A.; Sand, S.; Raulefs, R. Signals of opportunity in mobile radio positioning. In Proceedings of the 2012 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 27–31 August 2012; pp. 549–553.
18. Ramos, H.S.; Boukerche, A.; Pazzi, R.W.; Frery, A.C.; Loureiro, A.A.F. Cooperative target tracking in vehicular sensor networks. IEEE Wirel. Commun. 2012, 19, 66–73.
19. Gui, L.; He, B.; Xiao, F.; Shu, F. Resolution limit of positioning error for range-free localization schemes. IEEE Syst. J. 2020, 14, 2980–2989.
20. Gui, L.; Yang, M.; Yu, H.; Li, J.; Shu, F.; Xiao, F. A Cramer-Rao lower bound of CSI based indoor localization. IEEE Trans. Veh. Technol. 2018, 67, 2814–2818.
21. Watanabe, Y.; Shoji, Y. An RSSI based low-power vehicle-approach detection technique to alert a pedestrian. Sensors 2020, 20, 118.
22. Parker, R.; Valaee, S. Vehicular node localization using received-signal-strength indicator. IEEE Trans. Veh. Technol. 2007, 56, 3371–3380.
23. Mohammadabadi, P.H.; Valaee, S. Cooperative node positioning in vehicular networks using inter-node distance measurements. In Proceedings of the 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC), Washington, DC, USA, 2–5 September 2014; pp. 1448–1452.
24. Diez-Gonzalez, J.; Alvarez, R.; Sanchez-Gonzalez, L.; Fernandez-Robles, L.; Perez, H.; Castejon-Limas, M. 3D TDOA problem solution with four receiving nodes. Sensors 2019, 19, 2892.
25. Alam, N.; Balaei, A.T.; Dempster, A.G. An instantaneous lane-level positioning using DSRC carrier frequency offset. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1566–1575.
26. Fascista, A.; Ciccarese, G.; Coluccia, A.; Ricci, G. Angle of arrival based cooperative positioning for smart vehicles. IEEE Trans. Intell. Transp. Syst. 2017, 99, 1–13.
27. Fascista, A.; Ciccarese, G.; Coluccia, A.; Ricci, G. A localization algorithm based on V2I communications and AOA estimation. IEEE Signal Process. Lett. 2017, 24, 126–130.
28. Zhou, C.; Gu, Y.; Fan, X.; Shi, Z.; Mao, G.; Zhang, Y.D. Direction-of-arrival estimation for coprime array via virtual array interpolation. IEEE Trans. Signal Process. 2018, 66, 5956–5971.
29. Shi, Z.; Zhou, C.; Gu, Y.; Goodman, N.A.; Qu, F. Source estimation using coprime array: A sparse reconstruction perspective. IEEE Sens. J. 2017, 17, 755–765.
30. Wu, X.; Zhu, W.-P.; Yan, J.; Zhang, Z. Two sparse based methods for off-grid direction-of-arrival estimation. Signal Process. 2018, 142, 87–95.
31. Ansari, K. Cooperative position prediction: Beyond vehicle-to-vehicle relative positioning. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1121–1130.
32. Kuutti, S.; Fallah, S.; Katsaros, K.; Dianati, M.; Mccullough, F.; Mouzakitis, A. A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications. IEEE Internet Things J. 2018, 5, 829–846.
33. Gong, Z.; Jiang, F.; Li, C. Angle domain channel tracking with large antenna array for high mobility V2I millimeter wave communications. IEEE J. Sel. Top. Signal Process. 2019, 13, 1077–1089.
34. Kilberg, B.G.; Campos, F.M.R.; Schindler, C.B.; Pister, K.S.J. Quadrotor based lighthouse localization with time-synchronized wireless sensor nodes and bearing-only measurements. Sensors 2020, 20, 3888.
35. Ou, C.H.; Wu, B.Y.; Cai, L. GPS-free vehicular localization system using roadside units with directional antennas. J. Commun. Netw. 2019, 21, 12–24.
36. Zhang, H.; He, R.; Ai, B.; Cui, S.; Zhang, H. Measuring sparsity of wireless channels. IEEE Trans. Cogn. Commun. Netw. 2020.
37. Zhang, R.; Zhong, Z.; Zhao, J.; Li, B.; Wang, K. Channel measurement and packet-level modeling for V2I spatial multiplexing uplinks using massive MIMO. IEEE Trans. Veh. Technol. 2016, 65, 7831–7843.
38. Kim, M. Analysis of multipath component parameter estimation accuracy in directional scanning measurement. IEEE Trans. Antennas Propag. 2017, 17, 12–16.
39. Yao, B.; Zhang, W.; Wu, Q. Weighted subspace fitting for two-dimension DOA estimation in massive MIMO systems. IEEE Access 2017, 5, 14020–14027.
40. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; The Johns Hopkins Univ. Press: Baltimore, MD, USA, 1996.
41. Zhang, X.; Xu, L.; Xu, L.; Xu, D. Direction of departure (DOD) and direction of arrival (DOA) estimation in MIMO radar with reduced-dimension MUSIC. IEEE Commun. Lett. 2010, 14, 1161–1163.
42. Hua, Y.; Sarkar, T.K. A note on the Cramer-Rao bound for 2-D direction finding based on 2-D array. IEEE Trans. Signal Process. 1991, 39, 1215–1218.
43. ETSI TR 102 861 V1.1.1. Intelligent Transport Systems (ITS)—STDMA Recommended Parameters and Settings for Cooperative ITS—Access Layer Part. 2012. Available online: https://www.etsi.org/deliver/etsi_tr/102800_102899/102861/01.01.01_60/tr_102861v010101p.pdf (accessed on 14 October 2020).
Figure 1. A graphical illustration of joint cooperative positioning and warning (JCPW).
Figure 2. The warning systems: (a) two vehicle scenario; (b) a qualitative deceleration-distance curve.
Figure 3. A graphical representation of the task sequences. CP, cooperative positioning.
Figure 4. A graphical illustration of the localization data format.
Figure 5. Performance evaluation for LOS angle-of-departure (AOD) estimation with different algorithms. (a) RMSE performance comparison; (b) average single running time comparison.
Figure 6. The cumulative distribution of the average absolute angle estimation error under different conditions.
Figure 7. The positioning RMSE comparison for different weighting strategies.
Table 1. The vehicle's position estimation procedures.
Given: a series of received data from $K$ RSUs, $\{y^{(i)}[g]\}_{g=1}^{G}$, $i = 1, 2, \ldots, K$.
Step 1. (Data Extension) Construct $\tilde{Y}^{(i)}$ via (17).
Step 2. (Subspace Extraction) Run Algorithm 1 to obtain $\tilde{a}_{LOS}^{(i)}$.
Step 3. (Parameter Estimation) Estimate $\mu_1^{(i)}$ and $\upsilon_1^{(i)}$ according to (23) and (24).
Step 4. (Parameter Conversion) Convert $\hat{\mu}_1^{(i)}$ and $\hat{\upsilon}_1^{(i)}$ into $\theta_1^{(i)}$ and $\phi_1^{(i)}$ by (9).
Step 5. (Weighting Determination) Determine the weighting coefficients by (28).
Step 6. (Data Traversal) Repeat Step 2–Step 5 until the data of all $K$ RSUs are processed.
Step 7. (Position Calculation) Calculate the vehicle's positions during $T_{TR}$ by (3) and (4) and calculate the other positions by (2).
Table 2. Parameter arrangement for positions to be located ($x_V$, $y_V$; $D$, $\theta$, $\phi$).
RSU-1 @ {0, 0, 6} m:
  $x_V = -2$ m, $y_V = 1.75$ m: $D = 5.0$ m, $\theta = 32.3°$, $\phi = 131.2°$
  $x_V = 1.5$ m, $y_V = 1.75$ m: $D = 4.79$ m, $\theta = 28.7°$, $\phi = 49.4°$
  $x_V = 5$ m, $y_V = 1.75$ m: $D = 6.76$ m, $\theta = 51.6°$, $\phi = 19.3°$
  $x_V = 8.5$ m, $y_V = 1.75$ m: $D = 9.65$ m, $\theta = 64.2°$, $\phi = 11.6°$
  $x_V = 12$ m, $y_V = 1.75$ m: $D = 12.8$ m, $\theta = 70.9°$, $\phi = 8.3°$
RSU-2 @ {12, 0, 6} m:
  $x_V = -2$ m, $y_V = 1.75$ m: $D = 14.62$ m, $\theta = 73.4°$, $\phi = 172.9°$
  $x_V = 1.5$ m, $y_V = 1.75$ m: $D = 11.43$ m, $\theta = 68.5°$, $\phi = 170.5°$
  $x_V = 5$ m, $y_V = 1.75$ m: $D = 8.35$ m, $\theta = 59.8°$, $\phi = 166°$
  $x_V = 8.5$ m, $y_V = 1.75$ m: $D = 5.74$ m, $\theta = 43°$, $\phi = 153.4°$
  $x_V = 12$ m, $y_V = 1.75$ m: $D = 4.55$ m, $\theta = 22.6°$, $\phi = 90°$
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Dong, Z.; Yao, B. Angle-Awareness Based Joint Cooperative Positioning and Warning for Intelligent Transportation Systems. Sensors 2020, 20, 5818. https://doi.org/10.3390/s20205818
