This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANNs) are widely used in various areas to model complex, nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of the three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm obtains more precise location estimates under different NLOS environments.

The purpose of a wireless location identification algorithm is to estimate the position of a mobile station (MS) in a wireless communication network. The need to determine the location of an MS has become increasingly important in the past few years. A variety of wireless location techniques are known, including signal strength [

The accuracy of MS location estimation depends heavily on the propagation conditions of the wireless channels. The non-line-of-sight (NLOS) problem is the dominant factor that degrades the precision of MS location estimation: accuracy can be seriously degraded in the absence of a line-of-sight (LOS) signal component. Good positioning accuracy can be achieved if LOS propagation exists between the MS and each participating base station (BS). However, LOS paths are usually unavailable, especially in urban or suburban areas. Due to the reflection and diffraction of the signals propagating between the MS and the BSs, NLOS propagation introduces biases into both time and angle measurements. It is therefore necessary to remove NLOS errors before the time and angle measurements are applied to MS location estimation. In the past few years, much of the literature has discussed mitigating NLOS effects for location estimation. Because the NLOS delay has higher variance than the LOS delay, reference [

Artificial neural networks (ANNs) have been widely applied in various fields to model complex, nonlinear relationships. Recently, different kinds of neural networks have been applied to localization. Three networks are used, utilizing distance measurements [

The back-propagation neural network (BPNN) is the most representative training model for the ANN [

In most rural areas, it is difficult for an MS to detect more than three BSs for location purposes. We previously proposed a positioning algorithm, based on Rprop, to estimate the MS location when both TOA and AOA measurements are simultaneously available from two BSs [

The remainder of this paper is organized as follows: in Section 2, we review existing MS positioning methods. BPNN and other training algorithms are described in Section 3. In Section 4, we propose the algorithm based on various neural network training methods to estimate the position of an MS. Next, Section 5 discusses the simulations performed to compare the proposed algorithm with the other methods. Finally, the conclusions are given in Section 6.

Without loss of generality, let the coordinates of BS1, BS2, and BS3 be (x_1, y_1) = (0, 0), (x_2, y_2) = (x_2, 0), and (x_3, y_3), respectively. The measured distance r_i between BS_i and the MS at (x, y) satisfies r_i^2 = (x − x_i)^2 + (y − y_i)^2 in the absence of error; measurement noise and the NLOS bias enter as additive terms on r_i.

The least-squares (LS) estimation can be solved by:

The recursive process starts with an initial guess for the MS location and then repeats the computations at each iteration. Depending on the initial estimate of the MS location, convergence is not guaranteed [
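The recursive refinement described above can be sketched as a Gauss-Newton iteration. This is an illustrative implementation, assuming measured ranges (speed of light times TOA) and known BS coordinates; it is not the paper's exact formulation, and the function name and tolerances are our own choices.

```python
import numpy as np

def gauss_newton_toa(bs, r, x0, iters=20):
    """Iteratively refine an MS position estimate from TOA ranges.

    bs : (N, 2) array of base-station coordinates
    r  : (N,) array of measured ranges (c * TOA)
    x0 : initial guess for the MS location
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(bs - x, axis=1)        # predicted ranges
        J = (x - bs) / d[:, None]                 # Jacobian of d w.r.t. x
        dx, *_ = np.linalg.lstsq(J, r - d, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-6:             # stop once the step is tiny
            break
    return x
```

With noiseless ranges and a reasonable initial guess the iteration converges to the true MS position; with NLOS-biased ranges it converges to a biased estimate, which is exactly the sensitivity to the initial estimate and to measurement error noted above.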

This scheme utilizes reduced linear equations derived from the original nonlinear range equations. Rather than circular lines of position (LOP), the linear LOP (LLOP) equation passes through the intersections of the two circles defined by the TOA measurements. The linear equations can be found by squaring and subtracting the distances obtained by

Again, the LS solution to
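The squaring-and-subtracting step can be sketched as follows; this is a minimal illustration of the LLOP idea (subtracting the first circle equation from the others cancels the quadratic terms), with our own function name, not the paper's notation.

```python
import numpy as np

def llop_ls(bs, r):
    """Closed-form LS fix from linear lines of position (LLOP).

    Subtracting the first TOA circle equation from the others removes
    the quadratic terms and leaves equations linear in (x, y).
    """
    bs = np.asarray(bs, float)
    r = np.asarray(r, float)
    A = 2.0 * (bs[1:] - bs[0])                    # coefficients of (x, y)
    k = np.sum(bs**2, axis=1)                     # x_i^2 + y_i^2
    b = r[0]**2 - r[1:]**2 + k[1:] - k[0]         # constant terms
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

Because the system is linear, no initial guess or iteration is needed, which is the computational advantage of LLOP over the recursive LS approach.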

The range scaling algorithm (RSA) was proposed, based on a nonlinear objective function, to solve an optimization problem with three TOA measurements [

Denoting

The observed timing and angular measurements generate a set of nonlinear equations. The TSA process starts with an initial location guess and can achieve high positioning accuracy. However, the method is recursive and its computational overhead is very intensive [

This scheme applies the original nonlinear range equations to produce a linear LOP, rather than a circular LOP, to locate the MS. The method has the advantage of simpler computation of the MS location. Combining the linear LOPs and the AOA line, the MS location is determined by [

When AOA information is available, RSA can be extended to the hybrid TOA/AOA algorithm (HTA) [

The ANN is an information processing system inspired by the human brain's ability to learn from observation and to generalize by abstraction [

BPNN consists of an input layer, an output layer, and usually one or more hidden layer(s). It is well known that a single hidden layer is sufficient to approximate a continuous function with arbitrary precision. To compute the net input to a neuron, each input connected to the neuron is multiplied by its corresponding weight to form a weighted sum, which is added to the bias associated with neuron j: net_j = Σ_i w_ij x_i + b_j, where w_ij is the weight from input i to neuron j, x_i is the i-th input, and b_j is the bias.
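The net-input computation for one layer is a single matrix operation; a minimal sketch (the sigmoid is one common activation choice, shown here only for illustration):

```python
import numpy as np

def net_input(x, W, b):
    """Weighted sum plus bias for one layer: net_j = sum_i w_ij * x_i + b_j.

    x : (n_in,) input vector
    W : (n_in, n_out) weight matrix, W[i, j] = w_ij
    b : (n_out,) bias vector
    """
    return W.T @ x + b

def sigmoid(net):
    """Logistic activation, a common choice for BPNN hidden neurons."""
    return 1.0 / (1.0 + np.exp(-net))
```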

The training procedure of BPNN is composed of initialization, a forward pass, and a backward pass. The network is trained with a set of training patterns, each consisting of an input vector and a corresponding output vector. At the beginning of training, the set of training patterns is presented to the input layer of the network. In the forward pass, the training pattern is applied to the input layer and its effect propagates through the network; during this pass, the synaptic weights of the network are all fixed. During the backward pass, the weights are adjusted in accordance with an error-correction rule: the actual output of the network is subtracted from the desired output, which is part of the training pattern, to produce an error signal. This error signal is then propagated backward through the network, against the direction of the synaptic connections, and the weights are adjusted so as to move the actual output of the network closer to the desired output. The error function E is the sum of squared differences between the desired outputs d_l and the actual outputs o_l, and the weights are updated by gradient descent, w_{k+1} = w_k − η g_k, where w_{k+1} is the next weight vector, w_k is the current one, η is the learning rate, and g_k is the gradient of E at w_k.
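The forward/backward procedure above can be sketched with a one-hidden-layer network. This is a generic textbook implementation, not the paper's network (the hidden size, tanh activation, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bpnn(X, Y, hidden=8, lr=0.1, epochs=5000):
    """Minimal one-hidden-layer back-propagation sketch.

    The forward pass computes the outputs with the weights held fixed;
    the backward pass propagates the output error back through the
    connections and adjusts the weights by gradient descent.
    """
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        # forward pass
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2                      # linear output layer
        err = out - Y                          # error signal
        # backward pass: propagate the error against the connections
        dW2 = h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)         # tanh derivative
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        # error-correction update: move outputs toward desired values
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```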

Several faster training algorithms have been used in MS location estimation, such as conjugate gradient, Rprop, and LM. Here these algorithms are analyzed to determine which provides the better MS location estimation.

The basic BPNN adjusts the weights in the steepest-descent direction, along which the error function decreases most rapidly. However, this does not necessarily produce the fastest convergence, and performance is sensitive to the learning rate chosen by the user. Conjugate gradient algorithms update the weights along conjugate directions and generally converge faster than steepest descent. In the conjugate gradient algorithms, the step size is adjusted at each iteration. In the first iteration, the algorithms initialize the search in the steepest-descent direction (the negative of the gradient):
where p_0 = −g_0 is the initial search direction and g_0 is the initial gradient. Then, the optimal distance to move along the current search direction is found by a line search:
w_{k+1} = w_k + α_k p_k, where w_k is the current weight vector, w_{k+1} is the next weight vector, and α_k is the step size. The next search direction combines the new steepest-descent direction with the previous one, p_k = −g_k + β_k p_{k−1}, where p_{k−1} is the previous search direction and the weighting value β_k distinguishes the conjugate gradient variants.
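The update sequence above can be demonstrated on a quadratic error surface, where the line search for α_k has a closed form. A sketch using the Fletcher-Reeves weighting (the quadratic test function is our own illustrative choice):

```python
import numpy as np

def conjugate_gradient(A, b, w0, iters=10):
    """Conjugate-gradient weight update on a quadratic error surface
    f(w) = 0.5 w^T A w - b^T w, whose gradient is g = A w - b.

    The first step follows the steepest-descent direction p0 = -g0;
    later directions are p_k = -g_k + beta_k * p_{k-1}.
    """
    w = np.asarray(w0, float)
    g = A @ w - b
    p = -g                                     # p0 = -g0
    for _ in range(iters):
        alpha = -(g @ p) / (p @ A @ p)         # exact line search (quadratic)
        w = w + alpha * p
        g_new = A @ w - b
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves weighting
        p = -g_new + beta * p
        g = g_new
        if np.linalg.norm(g) < 1e-10:          # gradient vanished: done
            break
    return w
```

On an n-dimensional quadratic, exact-line-search conjugate gradient reaches the minimum in at most n iterations, which is the source of its speed advantage over steepest descent.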

Most conjugate gradient algorithms perform a line search at each iteration along the conjugate directions, which requires a great deal of computational effort. By using a step-size scaling mechanism, SCG avoids this time-consuming line search at each learning iteration, which makes the algorithm faster than other second-order conjugate gradient algorithms. The SCG, developed by [

The Fletcher-Reeves version of conjugate gradient uses the squared norms of both the previous and the current gradients to update the weights and biases. For the Fletcher-Reeves version of conjugate gradient [_{k}

This version of the conjugate gradient was proposed by Polak and Ribiere [
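The two weighting-value choices differ only in how β_k is computed from the current and previous gradients; a short sketch of the standard formulas (the non-negative clamp on the Polak-Ribiere value is a common restart convention, shown here as an assumption):

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    """beta_k = ||g_k||^2 / ||g_{k-1}||^2 (squared gradient norms)."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    """beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2; negative values
    are often clamped to zero, which restarts along steepest descent."""
    return max(0.0, (g_new @ (g_new - g_old)) / (g_old @ g_old))
```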

The Rprop algorithm provides faster training time and convergence rate and has the capability to escape from local minima. Rprop is a first-order algorithm, and its time and memory requirements are only linearly proportional to the number of parameters to optimize [

Although BPNN uses steepest descent, it often fails to converge. The LM algorithm not only has the fastest convergence but can also train a neural network 10–100 times faster than the BPNN algorithm. Another advantage is that it is especially useful when very accurate training is required. It is an approximation to Newton's method [

From the geometric point of view, the distance measured from each BS forms a circle centered at that BS. The MS position is then estimated from the intersections of the circles obtained from multiple TOA measurements. Each of the following three equations describes a TOA circle, as shown in

If there is no NLOS error and no measurement error, the three circles intersect at a single point, which is the true MS location. However, NLOS propagation occurs in most environments and causes the three circles to intersect at three points. Because the NLOS error is always positive, due to the excess path length, the TOA measurements always carry a positive bias and are greater than the true values.
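The feasible intersections can be computed with elementary circle geometry. This is an illustrative sketch (function names and the feasibility tolerance are ours): because the NLOS bias is positive, every measured circle over-covers the true MS, so an intersection is kept only if it lies on or inside all three circles.

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (empty list if none)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                               # disjoint or nested
    a = (r1**2 - r2**2 + d**2) / (2 * d)        # along the center line
    h = np.sqrt(max(r1**2 - a**2, 0.0))         # perpendicular offset
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return [mid + h * perp, mid - h * perp]

def feasible_intersections(bs, r):
    """Keep pairwise intersections lying on or inside every TOA circle."""
    pts = []
    for i in range(len(bs)):
        for j in range(i + 1, len(bs)):
            for p in circle_intersections(bs[i], r[i], bs[j], r[j]):
                if all(np.linalg.norm(p - bs[k]) <= r[k] + 1e-6
                       for k in range(len(bs))):
                    pts.append(p)
    return pts
```

With three positively biased circles, the feasible region is a curvilinear triangle and this procedure returns its three vertices, which surround the true MS location.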

Utilize the three feasible intersections to establish an input data set for training purposes.

The network is trained with a training set composed of input patterns together with the required output patterns.

The network has the following input-output mapping:

Input: three feasible intersections (

Output: desired MS location.

The feasible intersections and the true MS location are used to train the network until it establishes the desired relationship.

During training, the neural network repeatedly adjusts the weights of the connections in the network; the objective is to minimize the difference between the actual MS location and the desired MS location.

After training, the feasible intersections are passed as input through the trained neural network to predict the MS location.
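The mapping trained above can be illustrated end to end on toy data. As a stand-in for the paper's neural network and geometry, this sketch generates "intersections" as points offset from the true MS by positive NLOS-like biases and fits an affine least-squares regressor from the flattened intersections to the MS location; all data ranges and the regressor choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_set(n):
    """Toy stand-in for the training data: each sample is three
    'intersection' points offset from the true MS by positive biases
    (mimicking NLOS-shifted TOA circles), flattened to a 6-vector."""
    ms = rng.uniform(200, 1400, (n, 2))
    offs = rng.uniform(20, 150, (n, 3, 2))      # positive NLOS-like bias
    pts = ms[:, None, :] + offs
    return pts.reshape(n, 6), ms

# fit an affine map from intersections to MS location (training phase)
X, Y = make_set(2000)
A = np.hstack([X, np.ones((len(X), 1))])        # affine features
W, *_ = np.linalg.lstsq(A, Y, rcond=None)

# prediction phase: new intersections pass through the trained map
Xt, Yt = make_set(500)
pred = np.hstack([Xt, np.ones((len(Xt), 1))]) @ W
centroid = Xt.reshape(-1, 3, 2).mean(axis=1)    # naive baseline
err_net = np.linalg.norm(pred - Yt, axis=1).mean()
err_cent = np.linalg.norm(centroid - Yt, axis=1).mean()
```

The trained map learns to subtract the systematic positive bias that the naive centroid of the intersections retains, which is the same effect the paper's trained networks achieve for NLOS errors.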

It is well known that a single AOA measurement constrains the MS along a line. Denote by

Due to NLOS errors, the three TOA circles and the AOA line intersect at various feasible intersections. The number of feasible intersections depends on the geometric relationship between the three circles and the line; from simulation results, it is 3, 4, or 5. According to the number of feasible intersections, we establish different input data subsets. Hence, the training set consists of three data subsets. For each measurement, we collect the feasible intersections and place them into the corresponding input data subset. There are three data subsets in this input layer for the purpose of training, and the number of measurements in each subset will not be identical. The detailed steps of the proposed algorithm based on neural networks are as follows:

Collect the measurements of the K feasible intersections of three TOA circles and one AOA line. (

If the number of feasible intersections in one measurement is K, place that measurement in the corresponding subset. For example, all measurements with three intersections are put in one subset. Thus, there are three subsets for the three different numbers of feasible intersections.

The three input data subsets with various measurement numbers are separately trained in the neural networks.

The training set was composed of the following mapping relationship:

Input:

Output: desired MS location.
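The grouping step above can be sketched as a simple partition keyed by the intersection count; the data layout (a list of intersection-list/true-location pairs) is an illustrative assumption.

```python
from collections import defaultdict

def group_by_intersection_count(measurements):
    """Partition measurements into subsets keyed by the number of
    feasible intersections (3, 4 or 5), so each subset can train its
    own network with a fixed input size.

    measurements : iterable of (intersections, true_ms) pairs
    """
    subsets = defaultdict(list)
    for pts, ms in measurements:
        subsets[len(pts)].append((pts, ms))
    return dict(subsets)
```

Grouping by K keeps the input dimensionality fixed within each subset, which is why a separate network is trained per subset rather than one network over variable-length inputs.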

We performed computer simulations to examine the performance of the proposed location algorithm. The coordinates of the BSs are respectively set to BS1: (0, 0), BS2: (1,732 m, 0), and BS3: (866 m, 1,500 m) [

The first NLOS propagation model is called the uniformly distributed noise model [_{i}_{i}
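Sampling from the uniformly distributed noise model can be sketched as follows; the per-BS upper bounds are parameters of the model, and the function name is our own.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_ranges(bs, ms, upper):
    """TOA ranges corrupted by the uniformly distributed noise model:
    the NLOS error of BS_i is drawn from U(0, upper_i), so every
    measured range over-estimates the true distance."""
    bs = np.asarray(bs, float)
    true = np.linalg.norm(bs - np.asarray(ms, float), axis=1)
    return true + rng.uniform(0.0, np.asarray(upper, float))
```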

The major problem during the training process is the possibility of overtraining. Generally, an over-trained neural network outputs highly accurate values for the training-set input patterns, but may generalize poorly to new data outside the training set [_{i}

The number of hidden neurons is determined through experimentation. Too few hidden neurons cause a larger error. Increasing the number of hidden neurons can alleviate this, but it also slows convergence, and beyond a certain number of neurons the added computation provides almost no further help in reducing NLOS errors. The general rules for choosing the number of neurons in the hidden layer are: (i) 0.5(·

The second NLOS propagation model is based on CDSM [

Under severe NLOS conditions, the average location errors of TSA and LLOP are at least twice as large as that of the proposed algorithm. The proposed algorithm is less sensitive to increases in NLOS magnitude than TSA, LLOP, and RSA. The proposed algorithm provides more accurate MS location estimation and reduces the errors caused by NLOS propagation. As shown in

When three TOA measurements and one AOA measurement are available simultaneously, the final NLOS propagation model, based on a biased uniform random variable, is employed [. The NLOS error of BS_i is drawn from a biased uniform distribution; two settings are considered: U_1 = 50 m, U_2 = U_3 = 150 m, and U_1 = U_2 = U_3 = 200 m,

Overtraining the neural network can seriously deteriorate the forecasting results. A series of experiments were performed to determine the appropriate number of epochs.

For various numbers of neurons in the hidden layer, every training method provides essentially identical MS location estimates. In order to minimize the computational load, the proposed algorithm with 0.5 · (

This paper presents a novel positioning algorithm based on neural networks to determine the MS location in NLOS environments. We develop an algorithm that makes use of the feasible intersections of three TOA circles (and one AOA line) to provide improved MS location accuracy in the presence of NLOS errors. During the training period, various neural network algorithms are trained to establish the nonlinear relationship between these feasible intersections and the MS location. After training, the proposed algorithm can reduce NLOS errors and obtain a more accurate MS location estimate. To evaluate the performance of the proposed algorithm, different NLOS models were employed. Simulation results show that the proposed algorithm provides enhanced precision in the location estimation of an MS for different levels of NLOS errors.

Geometry layout of the three circles.

Geometry layout of the three circles and a line.

Cell layout showing the relationship between the true ranges and inter-BS distances.

Variation RMS error of convergence

RMS error

Average location error

Average location error

Comparison of location error CDFs when NLOS errors are modeled as CDSM.

RMS errors reduction

RMS errors with different number of neurons in the hidden layer.

The CDF of location error of various methods for the biased uniform random variable model.