This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Wireless location is the function used to determine the mobile station (MS) location in a wireless cellular communications system. When it is very hard for the surrounding base stations (BSs) to detect an MS, or when the measurements contain large errors in non-line-of-sight (NLOS) environments, all available heterogeneous measurements should be integrated to increase the location accuracy. In this paper we propose a novel algorithm that combines time of arrival (TOA) and angle of arrival (AOA) measurements to estimate the MS location in NLOS environments. The proposed algorithm uses the intersections of two circles and two lines, together with the resilient back-propagation (Rprop) neural network learning technique, to estimate the MS location. The traditional Taylor series algorithm (TSA) and the hybrid lines of position algorithm (HLOP) have convergence problems, and even when the measurements are fairly accurate, their performance depends strongly on the relative positions of the MS and the BSs. Different NLOS models were used to evaluate the proposed method. Numerical results demonstrate that the proposed algorithm not only always converges but also obtains precise location estimates, even in severe NLOS conditions, and particularly when the geometry of the BSs relative to the MS is poor.

The problem of position determination of a mobile user in a wireless network has been studied extensively in recent years. It has received significant attention, and various location identification technologies have been proposed. A report and order issued by the U.S. Federal Communications Commission (FCC) in July 1996 requires that all wireless service providers provide location information to emergency 911 (E-911) public safety services. Separate accuracy requirements of the E-911 mandate were set for network-based technologies: within 125 meters for 67 percent of calls, and within 300 meters for 95 percent of calls. To date, satisfying the FCC accuracy requirement remains very difficult, and most published algorithms fail to achieve this goal.

The various techniques proposed include signal strength (SS), angle of arrival (AOA), time of arrival (TOA) and time difference of arrival (TDOA). The signal strength scheme uses a known mathematical model to describe the path loss attenuation with distance. A fuzzy logic technique with a geometrical solution was applied to calculate range estimates through signal strength measurements [

One critical problem in wireless location systems is the non-line-of-sight (NLOS) propagation effect. A common requirement for high location accuracy is the presence of a line-of-sight (LOS) path between the MS and each participating BS. Due to signal reflection or diffraction between the MS and the BSs, NLOS errors can significantly degrade wireless location performance. Extensive research on NLOS effect mitigation for location estimation has been carried out in the past few years. Since the delay under NLOS conditions has a higher variance than under LOS conditions, [

Another major concern that affects the choice of location scheme to deploy in cellular communication systems is

Due to poor

An artificial neural network (ANN) is an information processing method inspired by the biological nervous system that can approximate nonlinear functions from data sets. The system employs a set of activation functions and input-output sample patterns that do not require

Resilient back-propagation (Rprop) is the best algorithm in terms of convergence speed, accuracy as well as robustness with respect to the training parameters [

The remainder of this paper is organized as follows. Section 2 describes the MS positioning methods using TSA and HLOP. The geometrical positioning methods are reviewed in Section 3. Section 4 briefly describes the BPNN and Rprop methods. In Section 5, we propose the Rprop-based algorithm to determine the position of the MS. Section 6 compares the performance of the proposed algorithm with the other methods through simulation results. Finally, Section 7 draws conclusions.

If both the TOA and AOA measurements are accurate, then only one BS is required to locate the MS [. For a BS located at (X_i, Y_i), a TOA measurement constrains the MS location (x, y) to a circle of radius r_i = √((x − X_i)² + (y − Y_i)²) centered at the BS, while an AOA measurement constrains it to the line defined by θ_i = tan⁻¹((y − Y_i)/(x − X_i)).

TSA [

TOA and AOA measurements are used as inputs to the Taylor series position estimator. Let (x_v, y_v) denote an initial guess of the MS location; the nonlinear TOA and AOA measurement equations are expanded in a first-order Taylor series about (x_v, y_v) and solved for a correction term, and the estimate is updated iteratively until the correction becomes negligible.

The least-squares (LS) solution to the estimation problem is given by:
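The equation itself takes the standard normal-equation form. Writing the linearized measurement system as G z = h, with z the MS position correction vector (these symbols are generic placeholders, since the original matrices are not shown here):

```latex
\hat{\mathbf{z}} = \left(\mathbf{G}^{T}\mathbf{G}\right)^{-1}\mathbf{G}^{T}\mathbf{h}
```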

It requires a proper initial position guess close to the true solution and can then achieve high accuracy. The method is recursive, and the computational overhead of the iterations is intensive. If the initial guess of the MS location is not accurate enough, the convergence of the iterative process is not assured [

This scheme makes use of the original nonlinear range equations to produce linear lines of position (LOP), rather than circular LOP, to locate the MS. The method has the advantage of simpler computation of the MS location. The details of the linear LOP approach can be derived from the TOA measurements as in [

Again, the LS solution to

From a geometric viewpoint, the TOA value measured at any BS defines a circle centered at that BS. The MS position is then given by the intersection of the circles from multiple TOA measurements. Similarly, a single AOA measurement constrains the MS to a line. Each of the following equations describes a circle for TOA and a line for AOA, as shown in

If there were no error and no noise at all, the circles and lines would intersect at exactly one point. However, this is usually not the case in practice, where the NLOS effect exists. NLOS propagation is quite common, and it seriously degrades location accuracy. The intersections of the two TOA circles and two AOA lines will be spread over a region offset from the true MS location. Because the NLOS effect always increases the propagation delay, the measured TOA estimates are always greater than the true values due to the excess path length. The true MS location must therefore lie in the region of overlap of the two circles. As mentioned earlier, the intersecting points that fall within this region are defined as feasible intersections. Hence, the feasible intersections must satisfy the following inequalities simultaneously:
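Concretely, a feasible intersection must lie inside (or on) both TOA circles. The circle-circle intersection and the feasibility test can be sketched as follows (a minimal sketch; the coordinates and function names are illustrative, not from the original):

```python
import math

def circle_circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (standard planar geometry)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # concentric, separate, or one contained in the other
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

def is_feasible(p, bs_list, radii):
    """NLOS bias only lengthens paths, so the true MS must lie inside
    (or on) every TOA circle -- the feasibility inequalities."""
    return all(math.hypot(p[0] - bx, p[1] - by) <= r + 1e-9
               for (bx, by), r in zip(bs_list, radii))
```

For example, circles of radius 5 around (0, 0) and (8, 0) intersect at (4, ±3), and both intersections pass the feasibility test against those two circles.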

The most direct method is to utilize these feasible intersections of the circles and lines to estimate the MS location. To achieve high accuracy of MS location with less complexity, we have proposed a class of geometrical positioning methods in [

By using the two AOA measurements, the least likely intersection is first eliminated. The MS location is then obtained by averaging all the remaining feasible intersections.

Step 1. Find all the feasible intersections of the two circles and two lines.

Step 2. Assume the two AOA angles satisfy the appropriate range conditions (one pair below 180°, the other below 360°). Delete the least likely intersection from the set of feasible intersections; N remaining feasible intersections will be left.

Step 3. The MS location (x_N, y_N) is estimated as the average of the N remaining feasible intersections.
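The averaging method above amounts to a centroid computation over the remaining feasible intersections; a minimal sketch (the function name is ours):

```python
def averaging_estimate(points):
    """Averaging method: the MS estimate is the centroid of the
    N remaining feasible intersections."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)
```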

However, not all of the remaining feasible intersections provide equally valuable information for location estimation. In this method, the weights are inversely proportional to the squared distance between each remaining feasible intersection and the average MS location.

Steps 1–3 are the same as those of the averaging method.

Step 4. Calculate the distance d_i between each remaining feasible intersection (x_i, y_i) and the average MS location (x_N, y_N).

Step 5. Set the weight of the i-th remaining feasible intersection to be inversely proportional to d_i², and estimate the MS location as the resulting weighted average.
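The distance-weighted method can be sketched as follows (a minimal sketch; the small epsilon guard is our addition to avoid division by zero when a point coincides with the centroid):

```python
def distance_weighted_estimate(points):
    """Distance-weighted method: weights are inversely proportional to the
    squared distance from each intersection to the average MS location."""
    n = len(points)
    cx = sum(p[0] for p in points) / n   # centroid (average MS location)
    cy = sum(p[1] for p in points) / n
    eps = 1e-12                          # assumed guard, not in the paper
    w = [1.0 / ((px - cx) ** 2 + (py - cy) ** 2 + eps) for px, py in points]
    s = sum(w)
    return (sum(wi * px for wi, (px, py) in zip(w, points)) / s,
            sum(wi * py for wi, (px, py) in zip(w, points)) / s)
```

Outliers far from the centroid receive small weights, so the weighted estimate is pulled toward the cluster of nearby intersections.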

As can be seen, in the averaging method and the distance-weighted method all of the remaining feasible intersections affect the MS location estimate. In the following, we also propose two methods, sort averaging and sort-weighted, which exclude the influence of feasible intersections that lie too far away from the average MS location.

Steps 1–4 are the same as those of the distance-weighted method.

Step 5. Rank the distances d_i in ascending order.

Step 6. The MS location (x_M, y_M) is estimated as the average of the M remaining feasible intersections with the smallest distances.
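The sort averaging method can be sketched as follows (the function name and the choice of m as a parameter are ours):

```python
def sort_averaging_estimate(points, m):
    """Sort-averaging method: rank intersections by distance to the
    centroid and average only the m closest ones."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # ascending order of (squared) distance to the average MS location
    ranked = sorted(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    kept = ranked[:m]
    return (sum(p[0] for p in kept) / m, sum(p[1] for p in kept) / m)
```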

Steps 1–5 are the same as those of the sort averaging method.

Step 6. The MS location is estimated by a weighted average of the first M remaining feasible intersections.

The weight in this method is based on how close the remaining feasible intersections are to one another: intersections that lie in close proximity to each other are assigned greater weights.

Steps 1 and 2 are the same as those of the averaging method.

Step 3. Calculate the distance d_mn between every pair of remaining feasible intersections m and n.

Step 4. Select a threshold value d_thr for the pairwise distances d_mn.

Step 5. Set the initial weight w_k for each remaining feasible intersection k.

If d_mn ≤ d_thr, then w_m = w_m + 1 and w_n = w_n + 1, for 1 ≤ m < n ≤ N.

Step 6. The MS location (x_t, y_t) is estimated as the weighted average of the remaining feasible intersections with the weights w_k.
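The proximity-weighted steps above can be sketched as follows (a minimal sketch; the initial weight of 1 is our assumption, since the original initial value is not shown):

```python
import math

def proximity_weighted_estimate(points, d_thr):
    """Intersections with more close neighbours (pairwise distance
    <= d_thr) receive larger weights."""
    n = len(points)
    w = [1.0] * n                    # assumed initial weight for each point
    for m in range(n):
        for k in range(m + 1, n):
            if math.dist(points[m], points[k]) <= d_thr:
                w[m] += 1            # both members of a close pair
                w[k] += 1            # get their weights incremented
    s = sum(w)
    return (sum(wi * p[0] for wi, p in zip(w, points)) / s,
            sum(wi * p[1] for wi, p in zip(w, points)) / s)
```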

In this section, we describe the methodology based on artificial neural networks (ANNs). An ANN is a technique that models the learning procedure of the human brain; it employs a set of activation functions, either nonlinear or linear, and thus one does not require

Generally speaking, the BPNN architecture comprises one input layer, one output layer, and one or more hidden layers in between. Although a network with multiple hidden layers is possible, a single hidden layer is sufficient to model arbitrarily complex nonlinear functions. With a proper selection of architecture, it is capable of approximating most problems with high accuracy and generalization ability. The input layer receives information from external sources and passes it to the network for processing. The mapping relationships between neurons determined by the hidden layer are stored as the weights of the connecting links. When the input and output variables are related nonlinearly, the hidden layer can extract higher-level features and facilitate generalization. The output from the output layer is the prediction of the net for the corresponding input. The structure of the BPNN chosen for the present problem is shown in

The BPNN estimates the relation between the input and output of sample patterns by iteratively updating the weights in the network so as to minimize the difference between the actual output vectors and the desired output vectors. The back-propagation learning algorithm is composed of an initialization, a forward pass, and a backward pass. The weights and biases in the network are initialized to small random numbers. Once these parameters have been initialized, the network is ready for training. A training pattern consists of a set of input vectors and the corresponding output vectors. In the beginning, a set of training patterns is fed to the input layer of the network. The forward pass starts from the input layer: the net inputs of the neurons are multiplied by the corresponding weights, summed, and transferred to the hidden layer. The activated signals output by the hidden layer are passed forward to the output layer. Finally, the output of the BPNN is generated. Subsequently, in the backward pass, the error between the actual output and the desired output is calculated. The error function Ψ is defined as the mean squared sum of differences between the actual output vector y_k and the desired output vector d_k:
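With d_k the desired output and y_k the actual output of output neuron k, the error function takes the usual squared-error form (the normalization constant is assumed, as conventions vary):

```latex
\Psi = \frac{1}{2}\sum_{k}\left(d_{k} - y_{k}\right)^{2}
```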

The error signal at the output layer is propagated backward through the hidden layer to the input layer of the network. Back-propagation is so named because the error derivatives are calculated in the direction opposite to signal propagation. In the training process, the gradient descent method calculates and adjusts the weights of the network to minimize the error. In the weight-updating algorithm, the derivative of the error with respect to the weight is first negated and then multiplied by a small constant β known as the learning rate, as expressed in the following equation:

The negative sign indicates that the new weight vector moves in the direction opposite to the gradient. In the learning process of a neural network, the learning rate affects the speed of convergence. The training process may enter an oscillatory state if the learning rate is too large; on the other hand, the convergence speed suffers if the learning rate is too small. In either case, the training process may fail to converge.
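The learning-rate trade-off can be illustrated on a toy one-dimensional error surface ψ(w) = w², whose gradient is 2w (the function, rates, and step counts are illustrative only, not from the paper):

```python
def gradient_descent_step(weights, grads, beta):
    """One back-propagation update: each weight moves opposite to its
    error gradient, scaled by the learning rate beta."""
    return [w - beta * g for w, g in zip(weights, grads)]

# toy error surface psi(w) = w**2, gradient 2*w
w_small, w_large = 1.0, 1.0
for _ in range(50):
    w_small = gradient_descent_step([w_small], [2 * w_small], beta=0.1)[0]
    w_large = gradient_descent_step([w_large], [2 * w_large], beta=1.1)[0]
# w_small decays toward the minimum at 0; w_large oscillates with
# growing amplitude and diverges
```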

In

Set the number of layers and the number of neurons in each layer:

Set

Give the input and output vectors.

Compute the output values of each layer and unit in a feed-forward direction.

Calculate the output for the hidden layer.

Calculate the output for the output layer.

Calculate the error function at the output neuron.

Compute the deltas for each of the preceding layers by back propagating the errors.

Calculate the error for the output layer.

Calculate the error for the hidden layer.

Update all weights and biases

Repeat steps 3–7 until the iteration limit is reached or the algorithm converges.

Compared to the traditional BPNN algorithm, the Rprop algorithm provides faster training and a higher rate of convergence, and has the capability to escape from local minima. Rprop is known to be very robust with respect to its internal parameters and is therefore regarded as one of the best first-order learning methods among ANN algorithms. Rprop is a first-order algorithm, and its time and memory requirements scale linearly with the number of parameters to optimize. The Rprop algorithm is probably the most easily adjusted learning rule, since slight variations of its parameter values do not noticeably affect the convergence time. The activation function of the hidden and output layers is treated as a linear transfer function. Rprop is easy to implement, and a hardware implementation is described in [. Each weight w_ij has an individual update-value Δ_ij, which is adapted during training by factors 0 < η⁻ < 1 < η⁺. We can describe the adaptation rule as follows: whenever the partial derivative of the error function ψ with respect to the corresponding weight w_ij changes its sign, which indicates that the last update was too large and the algorithm has jumped over a local minimum, the update-value Δ_ij is decreased by the factor η⁻. If the derivative retains its sign, the update-value is slightly increased by the factor η⁺ in order to accelerate convergence in shallow regions.

Once the update-value for each weight is adapted, the weight-update itself follows a very simple rule: if the derivative is positive (increasing error), the weight is decreased by its update-value, if the derivative is negative, the update-value is added to the weight:

There is one exception to the rule above. If the partial derivative changes sign, the previous weight-update is reverted, a so-called 'backtracking' weight step.

Due to that 'backtracking' weight step, the derivative is likely to change its sign once again in the following step. In order to avoid a double punishment of the update-value, there should be no adaptation of the update-value in the succeeding step. In practice, this can be done by setting ∂ψ^{(t−1)}/∂w_{ij} = 0 in the Δ_{ij} adaptation rule.
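The adaptation rule described above, including the backtracking step and the suppressed adaptation after a sign change, can be sketched for a single weight as follows (a minimal sketch; η⁻ = 0.5, η⁺ = 1.2 and the Δ bounds are the commonly used defaults from the Rprop literature, not values taken from this paper):

```python
import math

def rprop_step(w, delta, step_prev, grad, grad_prev,
               eta_minus=0.5, eta_plus=1.2,
               delta_min=1e-6, delta_max=50.0):
    """One Rprop update for a single weight w with update-value delta."""
    if grad * grad_prev > 0:        # derivative kept its sign: accelerate
        delta = min(delta * eta_plus, delta_max)
    elif grad * grad_prev < 0:      # sign change: the last step was too large
        delta = max(delta * eta_minus, delta_min)
        w -= step_prev              # revert the previous weight step (backtracking)
        return w, delta, 0.0, 0.0   # grad := 0 suppresses adaptation next step
    # positive derivative decreases the weight, negative increases it
    step = -math.copysign(delta, grad) if grad != 0 else 0.0
    return w + step, delta, step, grad

# demonstration: minimize psi(w) = (w - 3)**2, derivative 2*(w - 3)
w, delta, step_prev, grad_prev = 0.0, 0.5, 0.0, 0.0
for _ in range(200):
    g = 2.0 * (w - 3.0)
    w, delta, step_prev, grad_prev = rprop_step(w, delta, step_prev, g, grad_prev)
```

Note that only the sign of the derivative is used; the magnitude of the step is governed entirely by the adapted update-value, which is what makes the rule insensitive to gradient scaling.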

To improve the accuracy of MS location, we propose employing Rprop, a supervised learning neural network, to obtain an approximation of the MS location. The remaining feasible intersections are fed to the input layer, and the MS location is the only variable in the output layer. Given a number of known input-output training patterns, the Rprop model is trained continuously to adjust the weights of a network with one hidden layer. The trained Rprop minimizes the difference between the actual MS location and the desired MS location. The network has the following input-output mapping:

Input: V remaining feasible intersections (V = 1, 2,…, 6)

Output: desired MS location

The number of remaining feasible intersections depends on the geometric relationship of the two TOA circles and two AOA lines; in this case it is between 1 and 6. Each measurement results in one input data record.

According to the number of remaining feasible intersections, the first type establishes different input data subsets. For each measurement, we collect the V remaining feasible intersections and put them into the V-th input data subset. There are thus six data subsets in the input layer for training purposes, and the number of measurements in each subset will not be identical. In our simulations, measurements with 4 remaining feasible intersections were the most common, while measurements with 1 remaining feasible intersection were the least common.

The detailed steps are as follows:

Collect the V remaining feasible intersections of two TOA circles and two AOA lines. (V = 1, 2,…, 6).

If the number of remaining feasible intersections is V, place these V points in the V-th subset; the V remaining feasible intersections of each measurement thus belong to the corresponding input data subset.

The 6 input data subsets with various measurement numbers are trained according to Rprop.

The second type collects the V remaining feasible intersections of each measurement in order. Regardless of the number of remaining feasible intersections in each measurement, only one input data set is established. The sum of the measurement counts of the six subsets above equals the total number of measurements. The detailed steps are as follows:

Collect the V remaining feasible intersections for each measurement and expand them to six points in the data set.

The method to expand the remaining feasible intersections to six points for each measurement is as follows.

If the number of remaining feasible intersections is V, replicate them (6/V) times. (V = 1, 2, 3)

If the number of remaining feasible intersections is 4, take the average value of these 4 points and treat it as the fifth point. The sixth point is then the average of the five previous points.

If 5 remaining feasible intersections are collected, take an average of these 5 points as the 6th one.

After expansion, place the six remaining feasible intersections in the input data set for training purposes.
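The expansion procedure in the steps above can be sketched as follows (the function name is ours):

```python
def expand_to_six(points):
    """Expand V remaining feasible intersections (1 <= V <= 6) to a
    fixed-length list of six points, as described in the steps above."""
    pts = list(points)
    v = len(pts)
    if v in (1, 2, 3):
        pts = pts * (6 // v)          # replicate (6/V) times to reach six
    while len(pts) < 6:               # V == 4 or 5: append running averages
        n = len(pts)
        pts.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n))
    return pts
```

Fixing the input dimension at six lets a single network handle every measurement, whatever its intersection count.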

The training data are different from the data used to estimate the MS location; that is, the training input-output patterns are no longer used after training is done. In a real application, we collect the remaining feasible intersections and the desired MS locations to train the neural network prior to practical use. After training, the remaining feasible intersections used as input data (with the MS locations unknown) not only pass through the trained Rprop quickly but also yield an appropriate estimate of the MS location. Whenever we start to find positions, the remaining feasible intersections can be fed to the trained network, which is expected to estimate MS locations quickly and precisely. In addition, we have found that even with only 200 input-output patterns as training data, the proposed algorithm still works better than the other methods. We therefore conclude that the proposed algorithm can be applied in practical situations.

In this section, for a fair comparison with the various methods, we use computer simulations to demonstrate the performance of the proposed algorithm. In the simulations, the BSs are located at (0, 0) and (2,000 m, 0), respectively. Each simulation is performed with 10,000 independent runs, and the MS location is chosen randomly according to a uniform distribution within the rectangular area formed by the points I, J, K and L, as shown in

The first NLOS propagation model is based on the uniformly distributed noise model [, in which the NLOS errors of the TOA and AOA measurements at each BS are modeled as uniformly distributed random variables with given upper bounds.

A single hidden layer is the most widely used architecture among the various learning methods for neural networks. It is sufficient to model arbitrarily complex nonlinear functions [

If BS1 is the serving BS of the MS, its TOA and AOA measurements should be more accurate. The variables of this model are chosen as follows: the TOA error upper bounds are 200 m for BS1 and 400 m for BS2, and the AOA error upper bounds are 5° and 10°, respectively. The proposed algorithm based on Rprop produces more accurate estimates of the MS location than the one based on BPNN with a learning rate of 0.01, as shown in

Based on the proposed neural network structure stated above, the Rprop can be applied to estimate the location of MS for every input data.

The second NLOS propagation model is based on the distance-dependent NLOS error model [, in which the NLOS error at each BS is proportional to the true MS-BS distance. The proportionality factors for the TOA errors are chosen as 0.13 for BS1 and 0.2 for BS2, and the AOA error bounds are 2.5° and 5°, respectively.

The third NLOS propagation model is based on a biased uniform random variable [, in which each TOA and AOA error consists of a fixed bias plus a uniformly distributed component. The parameters are chosen as follows: the TOA biases are 50 m for BS1 and 150 m for BS2, with a uniform spread of 200 m for both; the AOA biases are 2.5° and 3°, with a uniform spread of 5° for both. The resulting CDF curves of the location error are shown in

When the MS is close to being aligned with the two BSs, TSA may not converge, and HLOP can produce large location errors when the measured angle approaches 90° or 270°. We define a divergence point as a location estimate whose RMS error exceeds 3,000 m. The distributions of the divergence points of TSA and HLOP are shown in

This paper proposes a novel Rprop-based algorithm to obtain an approximate MS location. We combine TOA and AOA measurements to estimate the MS location under the condition that the MS is heard by only two BSs. The key idea is to apply Rprop to model the relationship between the remaining feasible intersections and the MS location. After training, the proposed algorithm can reduce the effects of NLOS errors and improve MS location performance. On the other hand, the traditional TSA and HLOP methods may not converge when the MS and BSs have an undesirable geometric layout. The positioning accuracy of the proposed algorithm is hardly affected by the relative position of the MS and BSs. Simulation results show that the convergence performance of the proposed algorithm is very good and that it provides the capability to explicitly reduce the effects of NLOS errors. In summary, the proposed algorithm consistently yields better performance than TSA, HLOP and the geometrical positioning methods for different levels of NLOS errors.

Geometric layout of the two circles and two lines.

A fully connected multilayer feed-forward network with one hidden layer.

The flow chart of the calculation procedure for BPNN.

Structure of the prediction models for

RMS errors reduction according to the number of epochs.

The RMS errors with various neurons numbers of hidden layer.

Comparison of average MS location based on BPNN and Rprop.

Comparison of error CDFs when NLOS errors are modeled as the upper bound.

Performance comparison of the location estimation methods when the upper bound is used to model the NLOS.

CDFs of the location error with distance-dependent NLOS error.

Comparison of location error CDFs with biased uniform random error.

CDFs of the location error of the other different methods and the proposed algorithm with 2,000 and 200 epochs.

Distribution of the divergence points for TSA.

Distribution of the divergence points for HLOP.