# An Improved Neural Network Training Algorithm for Wi-Fi Fingerprinting Positioning


## Abstract


## 1. Introduction

For example, if RSS_2 received from AP_2 is the strongest among the RSS_i received from AP_i, for i = 1, 2, …, m, then the known position (X_2, Y_2) of AP_2 will be used to approximate the mobile device's position. With this approach, the accuracy depends on the effective signal propagation distance, as well as on the density and distribution of the installed APs. This approach was further improved by, for example, the weighted centroid localization (WCL) proposed in [3]. For the trilateration approach, the mobile device's position, normally in two dimensions, is determined from a set of measured distances to nearby APs with known positions. A least squares solution is normally applied when more than two distances are observed. It should be noted that terrestrial land surveying techniques adopt measured distances as raw observations, whereas for Wi-Fi based techniques the raw data are RSSs; therefore, an RSS-to-distance conversion method must be applied, and the known APs' positions are treated as control points. The general RSS-to-distance conversion approach is curve fitting with, for example, parabolic or logarithmic regression based on the free space propagation model [4]. By further considering complex real site conditions, such as path loss due to attenuation, reflection and refraction, as well as the geometrical effects on length resection, different RSS-to-distance conversion algorithms such as Gaussian process regression [5] and statistical path loss parameter estimation [6] have been proposed. The fingerprinting approach, which is more suitable for indoor environments, has the advantage that the APs' coordinates are not required in the position determination process. However, it requires the preliminary effort of database development. The database, also called a radio map (Figure 1), contains a collection of calibration points at different locations in the area where Wi-Fi positioning is to be performed.
The database development process is normally carried out when there are no significant factors that would seriously affect the RSS patterns, due to, for example, relocation of large objects or removal or addition of fixed structures in the Wi-Fi positioning area.
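As a concrete illustration of the centroid family of methods mentioned above, a position can be estimated as the RSS-weighted mean of the AP coordinates. The sketch below shows the general WCL idea only, not the specific algorithm of [3]; the dBm-to-linear-power conversion and the weighting exponent `g` are illustrative assumptions.

```python
# Weighted centroid localization (WCL): minimal illustrative sketch.
# The dBm-to-linear conversion and exponent g are assumptions, not
# values taken from the paper or from [3].

def wcl(ap_positions, rss_dbm, g=1.0):
    """Estimate (x, y) as the weighted mean of the AP positions.

    Stronger (less negative) RSS values in dBm yield larger weights,
    pulling the estimate toward the nearest APs.
    """
    # Convert dBm to linear power so that all weights are positive.
    weights = [(10.0 ** (rss / 10.0)) ** g for rss in rss_dbm]
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, ap_positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, ap_positions)) / total
    return x, y
```

With two APs at (0, 0) and (10, 0) and RSS readings of −40 dBm and −60 dBm, the estimate falls much closer to the first (stronger) AP.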

The measured RSS vector is compared with each radio map point through the Euclidean norm

Norm(j) = √( Σ_{i=1…m} [SS_RM(i, j) − SS_MEAS(i)]² )  (1)

where SS_RM(i, j) is the RSS value of the signal transmitted from access point i stored at radio map point j, and SS_MEAS(i) is the measured RSS of the signal transmitted from access point i. The radio map point j having the minimum norm is considered to be the most probable position. Since in the real-time positioning process the Wi-Fi sensor can face any direction, a practical approach in the database development process is, at each sampling point, to collect the RSS data first in a reference 0° direction, then in the 90°, 180° and 270° directions, and to use the mean RSS value of the data collected in these four directions in the computation. It is obvious from Equation (1) that the positioning accuracy depends on the resolution of the calibration points, and the positioning results are always snapped to the discrete points' positions. Hence, the higher the resolution of the calibration points, the more accurate the result. However, as shown in Figure 1, the signal propagation from each available AP forms a continuous non-linear surface. Therefore, a model that can best describe the surfaces of all APs' signal propagation will help to improve the positioning accuracy. Due to the reflection of waves by obstacles and other interferences, the structure of these surfaces can be rather complex. Traditional statistical methods based on smoothing approximations may fail to capture the widely fluctuating characteristics of these wave patterns.
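The minimum-norm matching of Equation (1), together with the four-direction averaging used during calibration, can be sketched as follows; the radio map contents used in any example are made up for illustration.

```python
import math

# Minimum-norm fingerprint matching against a radio map, as in
# Equation (1), plus the four-direction RSS averaging used during
# radio-map calibration.

def match_fingerprint(radio_map, measured):
    """Return the radio-map point whose stored RSS vector has the
    smallest Euclidean norm (distance) to the measured RSS vector."""
    best_point, best_norm = None, float("inf")
    for point, stored_rss in radio_map.items():
        norm = math.sqrt(sum((s - m) ** 2
                             for s, m in zip(stored_rss, measured)))
        if norm < best_norm:
            best_point, best_norm = point, norm
    return best_point

def mean_rss(samples_by_direction):
    """Average the per-AP RSS collected facing 0°, 90°, 180° and 270°."""
    return [sum(col) / len(col) for col in zip(*samples_by_direction)]
```

Note that the result of `match_fingerprint` is always one of the discrete radio-map points, which is exactly the snapping behavior discussed above.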

## 2. Neural Network Modeling

Consider a region covered by m access points {AP_i}, i = 1, 2, …, m. The position (x, y) of a point in this region can be regarded as a function of the signal strengths s_i received from the access points, where x = f(s_1, s_2, …, s_m) and y = g(s_1, s_2, …, s_m). If a sample of uniformly (or randomly) distributed points with known positions is available, and the signal strengths from those access points can be measured accurately, the minimum norm of Equation (1) or some well-known statistical methods can sometimes give a fairly good approximation of the position of any other point in this region based on the signal strengths measured at that position. As explained in the previous section, however, traditional statistical methods based on smoothing approximations may fail to capture the widely fluctuating characteristics of the wave patterns produced by those access points. This explains the high errors in Wi-Fi positioning inside certain buildings [1].

Let s_i be the input of the average measured signal strength from access point i at the position P, where i = 1, 2, …, m. Each neuron j in the hidden layer produces the output

h_j = σ( Σ_{i=1…m} ω_ij s_i − θ_j ),  j = 1, 2, …, n,

where σ(·) denotes the activation function, ω_ij are the input-to-hidden weights and θ_j is the threshold parameter, and the output coordinates (z_1, z_2) are given by

z_k = σ( Σ_{j=1…n} η_jk h_j − φ_k ),  k = 1, 2,

where η_jk are the hidden-to-output weights and φ_k is the threshold parameter. Thus, given the input signal strengths s_i and the weights ω_ij, the coordinates (z_1, z_2) correspond to the output from the network. Consider

s_i = Σ_{l=1…p} β_{i,l} r_{i,l},

where r_{i,l} are the signal levels measured at point P in p appropriately chosen directions and β_{i,l} are parameters to be included in our learning process. The signal strength s_i at a point P from access point i is initially taken to be the arithmetic mean of the p (p = 3 or 4) signal levels measured in the p appropriately chosen directions (i.e., β_{i,l} = 1/p initially). Our learning process involves the determination of the parameters {η_jk}, {ω_ij}, {θ_j}, {φ_k} and {β_{i,l}} so that the discrepancy between the output coordinates and the actual coordinates over a chosen set of points is minimal. More precisely, let (ẑ_1, ẑ_2) be the actual coordinates of a given point in our training set corresponding to the output (z_1, z_2); the above parameters should then be determined such that the sum of squares of their differences is minimized, that is, to minimize the expression

E = (z_1 − ẑ_1)² + (z_2 − ẑ_2)²

summed over all points in the training set.
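A minimal sketch of the forward pass with the parameter names used above (ω_ij, θ_j, η_jk, φ_k) and of the squared-error objective is given below. The logistic sigmoid activation is an assumption: the text does not preserve the exact activation function, so this is an illustrative three-layer network of the same shape, not the authors' exact implementation.

```python
import math

# Forward pass of a three-layer feed-forward network using the paper's
# parameter names (omega, theta, eta, phi). The logistic sigmoid is an
# assumption; the exact activation is not preserved in the text.

def sigmoid(t):
    # Numerically stable logistic function.
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def forward(s, omega, theta, eta, phi):
    """s: m input signal strengths; omega: m x n input-to-hidden
    weights; theta: n hidden thresholds; eta: n x 2 hidden-to-output
    weights; phi: 2 output thresholds. Returns (z1, z2)."""
    m, n = len(s), len(theta)
    hidden = [sigmoid(sum(omega[i][j] * s[i] for i in range(m)) - theta[j])
              for j in range(n)]
    return [sigmoid(sum(eta[j][k] * hidden[j] for j in range(n)) - phi[k])
            for k in range(2)]

def squared_error(z, z_hat):
    """The training objective: (z1 - z^1)^2 + (z2 - z^2)^2."""
    return (z[0] - z_hat[0]) ** 2 + (z[1] - z_hat[1]) ** 2
```

With the initialization used later in the paper (ω_ij = 0.25, θ_j = 300, η_jk = 0.5, φ_k = 1) and typical negative RSS inputs, the hidden activations start near zero and the outputs near σ(−1), which the training process then adjusts.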

The minimization is carried out by a direct search method consisting of three types of moves:

- (i) full local exploration;
- (ii) partial local movement; and
- (iii) exploratory movements.

(i) **Full local exploration.** Let **x**^(k) be the k-th approximation to the point where the minimum occurs, and let h be the step-length. The objective function is evaluated at the two sets of points about **x**^(k) defined in Equations (8) and (9):

S_1: **x**^(k+1) = **x**^(k) ± h **e**_i  (8)

S_2: **x**^(k+1) = **x**^(k) ± h **e**_i ± h **e**_j  (9)

for i = 1, 2, …, n and j = 1, 2, …, n with j ≠ i, where **e**_j = (0, …, 1, 0, …, 0) is the unit vector whose j-th coordinate is one while the remaining coordinates are zero. The points of S_1 and S_2 lie on spheres centred at **x**^(k), whose radii are the respective lengths of the displacements defined in Equations (8) and (9). This means that the total number of function evaluations is 2n(n − 1) + 2n = 2n². It can be shown that the global minimum, if it exists, would most likely be entrapped inside this neighborhood.
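The two point sets S_1 and S_2, and the resulting 2n² function-evaluation count, can be sketched as follows; the enumeration order is an implementation choice, not something specified by the paper.

```python
# Generate the 2n^2 trial points of one full local exploration:
# S1 = x ± h·e_i and S2 = x ± h·e_i ± h·e_j (j != i).

def full_exploration_points(x, h):
    n = len(x)
    points = []
    # S1: 2n axis points.
    for i in range(n):
        for s in (+1, -1):
            p = list(x)
            p[i] += s * h
            points.append(tuple(p))
    # S2: 2n(n - 1) diagonal points. Each unordered pair (i, j) with
    # all four sign combinations gives 4 * n(n-1)/2 = 2n(n-1) points.
    for i in range(n):
        for j in range(i + 1, n):
            for si in (+1, -1):
                for sj in (+1, -1):
                    p = list(x)
                    p[i] += si * h
                    p[j] += sj * h
                    points.append(tuple(p))
    return points
```

For n = 3 this produces 6 axis points plus 12 diagonal points, i.e. 2n² = 18 evaluations, matching the count above.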

If f(**x**^(k+1)) ≤ f(**x**^(k)) for some choices of i and j, then the function values at an additional set of 2(n − 1) points about **x**^(k+1) will be evaluated before performing the partial local movement, in order to finely adjust the search direction, that is, to set:

**x**^(k+1) = **x**^(k+1) + h **e**_t (or − h **e**_t)

(ii) **Partial local movement.** Let **b** = **x**^(k+1) − **x**^(k). We evaluate f at the following sets of points about **x**^(k+1):

**x**_s = **x**^(k+1) + **b**

**x**_s = **x**^(k+1) + **b** − ε_i h **e**_i

where ε_i = 1 or −1 according to the sign of the coordinates of **b**. Otherwise, set

**x**_s = **x**^(k+1) ± **b** − h **e**_i

where the **e**_i along the direction of **b** is excluded.

If f(**x**_s) ≤ f(**x**^(k+1)) for some choice of i, then **x**_s − **x**^(k+1) definitely gives a better direction of descent, and we can make exploratory moves, as described in (iii) below, along this direction. Otherwise, we have to reduce the step-length by a factor D and start the full exploration again at **x**^(k+1). It should be noted that the order of iteration is linear with respect to n.

(iii) **Exploratory movements.** Let **m** = **x**_s − **x**^(k), and evaluate f at the following points:

**x**^(k+2) = **x**_s + **m**

**x**^(k+2) = **x**_s + **m** + h **e**_i

where the **e**_i along the direction of **b** is excluded.

When the full exploration is restarted at **x**^(k) with the contracted step-length, one can be assured that the actual minimum point lies within these contracted spheres. Now, if the minimum value is at one of the points on the contracted sphere S'_1 (i.e., **x**^(k+1) = **x**^(k) ± (h/4) **e**_i for some i), or at one of the points on the contracted sphere S'_2 (i.e., **x**^(k+1) = **x**^(k) ± (h/4) **e**_i ± (h/4) **e**_j for some i and j), again only one full exploration around this point with step-length h/4 will be needed to determine that the actual minimum probably lies within the contracted spheres; otherwise, one additional partial movement will lead the search outside this region. On the other hand, if the actual minimum point lies between the contracted outer sphere S'_2 and the original outer sphere S_2, it is easy to see that no more than four combined local or exploratory moves are required to reach it. In all cases, the order of complexity of the search is n², similar to that obtained with lower contraction ratios. It was found in [14,15] that, for most of the benchmark test examples, the best contraction ratio was D = 4, followed by D = 5; there was no gain in further increasing the contraction ratio, except in some rare cases.
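The core loop can be sketched compactly as below, showing the step-length contraction by the ratio D = 4. This is a deliberate simplification: it performs full local exploration along the coordinate axes only, omitting the diagonal, partial and exploratory moves of the full algorithm described above.

```python
# Simplified direct search with step-length contraction by ratio D.
# Only axis-aligned local exploration is shown; the paper's algorithm
# additionally uses diagonal, partial and exploratory moves.

def direct_search(f, x0, h=1.0, D=4.0, tol=1e-7, max_iter=10000):
    x, fx = list(x0), f(x0)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):
                trial = list(x)
                trial[i] += s * h  # trial point x ± h·e_i
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            if h < tol:   # step-length below tolerance: stop
                break
            h /= D        # contract the step-length by the ratio D
    return x, fx

# Example: minimize a convex quadratic with minimum at (1, -2).
x, fx = direct_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                      [0.0, 0.0])
```

Each round of failed exploration shrinks h by D, so the search localizes the minimum geometrically fast once it is bracketed.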

The tolerance for terminating the search was set to 10^−7.

## 3. Algorithm Validation

**Figure 5.** Points with known positions were used to verify the accuracy achievement of the neural network results.

| 4-AP Combination | 0–1 m | 1.1–2 m | 2.1–3 m | 3.1–4 m | >4 m | 0–4 m | Mean Square Error | Total No. of Points |
|---|---|---|---|---|---|---|---|---|
| G_D_E_F | 3.5% | 14.1% | 28.2% | 15.3% | 38.8% | 61.2% | 5.5 | 85 |
| H_I_J_K | 0.9% | 7.8% | 12.1% | 15.5% | 63.8% | 36.2% | 26.7 | 116 |
| L_K_J_M | 7.7% | 24.0% | 32.7% | 14.4% | 21.2% | 78.8% | 10.8 | 104 |
| L_N_A_B | 9.8% | 24.5% | 26.5% | 13.7% | 25.5% | 74.5% | 2.6 | 102 |
| L_C_J_B | 1.8% | 8.3% | 14.7% | 20.2% | 55.0% | 45.0% | 23.2 | 109 |
| N_L_B_M | 18.1% | 28.7% | 20.2% | 9.6% | 23.4% | 76.6% | 2.7 | 94 |
| C_G_D_H | 14.0% | 23.3% | 22.1% | 10.5% | 30.2% | 69.8% | 8.7 | 86 |
| C_J_H_L | 1.3% | 8.8% | 8.8% | 15.0% | 66.3% | 33.8% | 5.1 | 80 |
| D_F_G_E | 1.1% | 7.7% | 24.2% | 22.0% | 45.1% | 54.9% | 13.6 | 91 |
| H_I_G_D | 2.9% | 10.8% | 13.7% | 19.6% | 52.9% | 47.1% | 12.3 | 102 |

Initial parameter settings: β_{i,l} = 0.25, ω_ij = 0.25, η_jk = 0.5, θ_j = 300, φ_k = 1.

| 3-AP Combination | 0–1 m | 1.1–2 m | 2.1–3 m | 3.1–4 m | >4 m | 0–4 m | Mean Square Error | Total No. of Points |
|---|---|---|---|---|---|---|---|---|
| G_E_F | 17.6% | 24.7% | 11.8% | 20.0% | 25.9% | 74.1% | 4.0 | 85 |
| H_I_K | 9.5% | 40.5% | 28.4% | 19.8% | 1.7% | 98.3% | 4.8 | 116 |
| K_J_M | 6.7% | 20.2% | 26.0% | 24.0% | 23.1% | 76.9% | 3.7 | 104 |
| L_N_B | 5.9% | 12.7% | 20.6% | 17.6% | 43.1% | 56.9% | 7.7 | 102 |
| C_J_B | 15.6% | 40.4% | 26.6% | 5.5% | 11.9% | 88.1% | 5.3 | 109 |
| N_L_M | 2.1% | 14.9% | 17.0% | 19.1% | 46.8% | 53.2% | 8.2 | 94 |
| G_D_H | 22.1% | 37.2% | 24.4% | 3.5% | 12.8% | 87.2% | 4.3 | 86 |
| C_J_H | 8.8% | 11.3% | 30.0% | 22.5% | 27.5% | 72.5% | 9.8 | 80 |
| D_G_E | 5.5% | 17.6% | 28.6% | 17.6% | 30.8% | 69.2% | 12.7 | 91 |
| H_I_G | 2.0% | 6.9% | 7.8% | 14.7% | 68.6% | 31.4% | 14.3 | 102 |
| H_I_J | 1.7% | 9.5% | 18.1% | 21.6% | 49.1% | 50.9% | 13.4 | 116 |
| I_J_K | 5.2% | 14.7% | 26.7% | 18.1% | 35.3% | 64.7% | 8.9 | 116 |
| H_J_K | 10.3% | 19.0% | 25.0% | 19.0% | 26.7% | 73.3% | 6.1 | 116 |

Initial parameter settings: β_{i,l} = 0.25, ω_ij = 0.25, η_jk = 0.5, θ_j = 300, φ_k = 1.

**Table 3.** Comparison of accuracy achievement between the neural network and the minimum point matching methods.

| 3-AP Combination | 0–2 m (Neural Network) | 2.1–4 m (Neural Network) | Mean Square Error | 0–2 m (Point Matching) | 2.1–4 m (Point Matching) | Total No. of Points |
|---|---|---|---|---|---|---|
| G_E_F | 42.3% | 31.8% | 4.0 | 47.1% | 31.8% | 85 |
| H_I_K | 50.0% | 48.2% | 4.8 | 59.5% | 29.3% | 116 |
| K_J_M | 26.9% | 50.0% | 3.7 | 63.4% | 25.9% | 104 |
| G_D_H | 59.3% | 27.9% | 4.3 | 58.1% | 30.2% | 86 |

Since β_{i,l} = 0.25, θ_j = 300 and φ_k = 1 were considered to be acceptable settings, they were fixed in our investigation in order to improve the training efficiency. Parameters ω_ij and η_jk were varied over the following ranges and increments:

- ω_ij = 0.10 to 0.50, step 0.05
- η_jk = 0.1 to 0.9, step 0.1
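The grid above can be swept programmatically. In the sketch below, `train_and_evaluate` is a hypothetical placeholder standing in for training the network with the given initial ω_ij and η_jk and returning its mean square error; it is not a function from the paper.

```python
# Sweep the initialization grid described above and keep the setting
# with the lowest mean square error. `train_and_evaluate(omega, eta)`
# is a hypothetical placeholder for the actual training run.

def grid_search(train_and_evaluate):
    omegas = [round(0.10 + 0.05 * k, 2) for k in range(9)]  # 0.10 ... 0.50
    etas = [round(0.1 * k, 1) for k in range(1, 10)]        # 0.1 ... 0.9
    best = None
    for w in omegas:
        for e in etas:
            mse = train_and_evaluate(w, e)
            if best is None or mse < best[0]:
                best = (mse, w, e)
    return best  # (lowest MSE, omega_ij, eta_jk)
```

This is the same exhaustive sweep implied by the ranges and increments listed above: 9 × 9 = 81 training runs.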

| Initialization Parameters | 0–1 m | 1.1–2 m | 2.1–3 m | 3.1–4 m | >4 m | 0–4 m | Mean Square Error | Total No. of Points |
|---|---|---|---|---|---|---|---|---|
| Set 1 | 8.6% | 19.8% | 26.7% | 20.7% | 24.1% | 75.9% | 7.9 | 116 |
| Set 2 | 12.9% | 34.5% | 37.1% | 6.9% | 8.6% | 91.4% | 4.2 | 116 |
| Set 3 | 2.6% | 8.6% | 7.8% | 12.1% | 69.0% | 31.0% | 22.7 | 116 |

- Set 1: β_{i,l} = 0.25, ω_ij = 0.1, η_jk = 0.9, θ_j = 300, φ_k = 1;
- Set 2: β_{i,l} = 0.25, ω_ij = 0.5, η_jk = 0.1, θ_j = 300, φ_k = 1;
- Set 3: β_{i,l} = 0.25, ω_ij = 0.5, η_jk = 0.9, θ_j = 300, φ_k = 1.

**Table 5.** Comparison of the neural network and the point matching method using the lowest MSE of the I_J_K combination.

| Accuracy | Point Matching | Neural Network |
|---|---|---|
| 0–2 m | 60.4% | 44.8% |
| 2.1–4 m | 26.7% | 46.5% |
| 0–4 m | 87.1% | 91.3% |

| Sequence of Combination | 0–1 m | 1.1–2 m | 2.1–3 m | 3.1–4 m | >4 m | 0–4 m | Mean Square Error |
|---|---|---|---|---|---|---|---|
| I_J_K | 8.5% | 19.1% | 24.0% | 18.1% | 30.4% | 69.6% | 8.7 |
| I_K_J | 8.1% | 20.2% | 24.4% | 17.1% | 30.2% | 69.8% | 8.2 |
| J_I_K | 11.4% | 24.6% | 26.6% | 15.8% | 21.6% | 78.4% | 6.4 |
| J_K_I | 11.3% | 24.7% | 25.8% | 16.7% | 21.4% | 78.6% | 6.4 |
| K_I_J | 9.0% | 20.9% | 24.6% | 19.2% | 26.3% | 73.7% | 8.1 |
| K_J_I | 9.9% | 22.0% | 24.6% | 17.4% | 26.1% | 73.9% | 7.8 |

The best trained surface was obtained by varying ω_ij (0.1 to 0.5) and η_jk (0.1 to 0.9), and by changing the input order of the AP combination.

## 4. Conclusions

- This algorithm is based on the Wi-Fi fingerprinting approach, in which the Wi-Fi APs' coordinates are not required in the position determination process. It is suitable for establishing a Wi-Fi based positioning system in areas, such as inside shopping malls, where the APs' positions are difficult or impossible to determine precisely.
- This approach is entirely general and flexible. Whenever there are changes in the existing Wi-Fi network (e.g., addition, deletion or repositioning of access points), all we have to do is retrain our neural network properly.
- There is no limit to how closely our neural network can approximate the actual radio data pattern (or hyper-surface), as long as there is a sufficiently large number of neurons in the hidden layer of our simple three-layer feed-forward neural network. However, it should be noted that too large a number of neurons in the middle layer may render the learning process less tractable and increase the truncation error, because the more complex structure of the function to be minimized may offset some of the improved accuracy.
- Since the percentage of results with better than four-meter accuracy was found graphically to be inversely proportional to the mean square error in our training process, one can improve it to any desirable proportion by further minimizing the error. It has been illustrated that one can get closer to the actual minimum square error by retraining the neural network with different initial parameter settings. Our optimization algorithm is simple and effective, and can be further improved, or replaced by an even more powerful one, with basically no change in our model structure.


## References

1. Mok, E.; Yuen, K.Y. A study on the use of Wi-Fi positioning technology for wayfinding in large shopping centers. *Asian Geogr.* **2013**, 30, 55–64.
2. Mok, E.; Retscher, G. Location determination using WiFi fingerprinting versus WiFi trilateration. *J. Location Based Serv.* **2007**, 1, 145–159.
3. Wang, J.; Urriza, P.; Han, Y.; Cabric, D. Weighted centroid algorithm for estimating primary user location: Theoretical analysis and distributed implementation. *IEEE Trans. Wirel. Commun.* **2011**, 10, 3403–3413.
4. Rappaport, T.S. *Wireless Communications: Principles and Practice*, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002; p. 693.
5. Cho, Y.; Ji, M.; Lee, Y.; Kim, J.; Park, S. Improved Wi-Fi AP Positioning Estimation Using Regression Based Approach. In Proceedings of the 3rd International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012.
6. Nurminen, H.; Talvitie, J.; Ali-Löytty, S.; Müller, P.; Lohan, E.; Piché, R.; Renfors, M. Statistical Path Loss Parameter Estimation and Positioning Using RSS Measurements in Indoor Wireless Networks. In Proceedings of the 3rd International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012.
7. Elnahrawy, E.; Li, X.; Martin, R.P. The Limits of Localization Using Signal Strength: A Comparative Study. In Proceedings of the First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (IEEE SECON 2004), Santa Clara, CA, USA, 4–7 October 2004; pp. 406–414.
8. Addesso, P.; Bruno, L.; Restaino, R. Adaptive Localization Techniques in WiFi Environments. In Proceedings of the 5th IEEE International Symposium on Wireless Pervasive Computing, Modena, Italy, 5–7 May 2010; pp. 289–294.
9. Koski, L.; Perälä, T.; Piché, R. Indoor Positioning Using WLAN Coverage Area Estimates. In Proceedings of the 1st International Conference on Indoor Positioning and Indoor Navigation (IPIN), Zurich, Switzerland, 15–17 September 2010.
10. Liu, H.H.; Yang, Y.N. WiFi-Based Indoor Positioning for Multi-Floor Environment. In Proceedings of IEEE TENCON 2011, Bali, Indonesia, 21–24 November 2011; pp. 597–601.
11. Shi, J.; Shin, Y. A Low-Complexity Floor Determination Method Based on WiFi for Multi-Floor Buildings. In Proceedings of the 9th Advanced International Conference on Telecommunications, Rome, Italy, 23–28 June 2013.
12. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. *Neural Netw.* **1989**, 2, 359–366.
13. Li, L.K.; Cheung, B.K.-S. *Learning and Forecasting Foreign Exchange Rates Using Recurrent Neural Network Dynamics*; Les Cahiers du GERAD G-2000-02; GERAD: Montreal, QC, Canada.
14. Cheung, B.K.-S.; Ng, A.C.L. An efficient and reliable algorithm for nonsmooth nonlinear optimization. *Neural Parallel Sci. Comput.* **1995**, 3, 115–128.
15. Cheung, B.K.-S.; Ng, A.C.L. An efficient search method for nonsmooth nonlinear optimization problems with mostly simple constraints. *Neural Parallel Sci. Comput.* **1997**, 5, 335–346.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Mok, E.; Cheung, B.K.S. An Improved Neural Network Training Algorithm for Wi-Fi Fingerprinting Positioning. *ISPRS Int. J. Geo-Inf.* **2013**, *2*, 854-868.
https://doi.org/10.3390/ijgi2030854
