A Sensor Fusion Framework for Indoor Localization Using Smartphone Sensors and Wi-Fi RSSI Measurements

Abstract: Sensor fusion frameworks for indoor localization are developed with the specific goal of reducing positioning errors. Although many conventional localization frameworks without fusion have been improved to reduce positioning error, sensor fusion frameworks generally provide a further improvement in positioning accuracy. In this paper, we propose a sensor fusion framework for indoor localization using smartphone inertial measurement unit (IMU) sensor data and Wi-Fi received signal strength indication (RSSI) measurements. The proposed sensor fusion framework uses location fingerprinting and trilateration for Wi-Fi positioning. Additionally, a pedestrian dead reckoning (PDR) algorithm is used for position estimation in indoor scenarios. The proposed framework achieves a maximum localization error of 1.17 m for the rectangular motion of a pedestrian and a maximum localization error of 0.44 m for linear motion.


Introduction
Accurate positioning for indoor or outdoor scenarios requires that a positioning system's displacement error be minimized. For locating the user position in outdoor environments, localization systems such as the global positioning system (GPS) [1] and base transceiver station (BTS)-based approaches [2] exist. Both GPS and BTS face significant challenges when used for indoor localization, including signal interference and spatial coverage limitations. BTS cell phone technology does not achieve accurate results for indoor localization due to the constrained signal coverage of the target area and dense urban environments characterized by high-rise buildings. To overcome these challenges, it is necessary to develop alternative localization systems for indoor localization. Existing indoor localization systems are based on ultra-wideband (UWB) technology [3], pedestrian dead reckoning (PDR) [4], radio frequency identification (RFID) [5], Bluetooth [6], visible light communication (VLC) [7], Zigbee [8] and Wi-Fi systems [9]. Each of these techniques achieves high position accuracy for indoor localization; however, combined systems give better performance than the individual systems. PDR systems use data from smartphone accelerometers, magnetometers, and gyroscopes for localization.
It is possible to use received signal strength indication (RSSI) signals for indoor localization with Wi-Fi access points (APs) in indoor environments. Many studies [10][11][12] have used smartphones equipped with a Wi-Fi receiving module for RSSI data collection. The fusion of smartphone sensor data and Wi-Fi-generated data has become prominent in indoor localization studies [13][14][15]. This paper proposes a sensor data fusion framework for both PDR and Wi-Fi systems. The proposed sensor fusion framework combines the location fingerprinting and trilateration algorithms for Wi-Fi indoor positioning and fuses the result with the PDR position results. The experiment results demonstrate that the proposed fusion framework reduces localization errors when compared with conventional localization approaches.
The conventional approaches used for sensor fusion are the PDR+Wi-Fi trilateration and PDR+Wi-Fi fingerprint algorithms. The most popular and easiest conventional approach used for indoor localization is the PDR+Wi-Fi trilateration algorithm. However, this algorithm does not provide very good accuracy (average error of 1.5-2 m) [16]. Multipath effects and non-line-of-sight (NLOS) conditions in the experiment areas reduce the trilateration algorithm's performance. In the trilateration algorithm, we use a free-space propagation model for estimating the user distance from the Wi-Fi AP. The estimated user distance depends on the RSSI measurements; however, the measured RSSI signals tend to fluctuate according to the indoor environment's physical characteristics and contained objects. Wi-Fi fingerprint algorithms can mitigate the multipath fading effect by creating a fingerprint map, but changes in the indoor environment affect the fingerprint maps and require updates to the fingerprint database. To overcome the challenges faced by conventional approaches, we present a sensor fusion framework for indoor localization that utilizes the PDR and Wi-Fi positioning results.
A recent trend in indoor localization is to use a hybrid localization system for locating the user position with minimum errors. Hybrid localization systems give better performance than individual localization systems; they combine multiple localization technologies with the help of sensor fusion frameworks. The most common hybrid localization system is the PDR with Wi-Fi localization system. In this paper, we propose a PDR with Wi-Fi localization system with high position accuracy for indoor localization. The proposed system uses a sensor fusion framework for combining the PDR results with the Wi-Fi localization system results. For the PDR approach, we use our previous model proposed in [17], which reduces the smartphone sensor errors and provides accurate localization results. For Wi-Fi localization, we propose a fusion algorithm that uses the results from the Wi-Fi trilateration and Wi-Fi fingerprint algorithms to enhance the position accuracy. Finally, we propose a sensor fusion framework that combines the Wi-Fi fusion algorithm and PDR for indoor localization using a Kalman filter. Through extensive experiments, we demonstrate that higher indoor positioning accuracy is achievable with this framework.
The rest of the paper is organized as follows: the related work on sensor fusion for PDR and Wi-Fi localization systems is discussed in Section 2. The proposed sensor fusion framework model is presented in Section 3. In Section 4, the experiment results and analysis are discussed. Finally, the conclusions and future work are presented in Section 5.

Related Works
Sensor data fusion frameworks have been studied in the past for PDR and Wi-Fi localization systems. In this section, we discuss the related work on sensor fusion frameworks. The proposed sensor fusion framework model depends on three research areas: PDR, Wi-Fi localization systems, and sensor data fusion frameworks for combined PDR and Wi-Fi localization systems. The performance of a PDR system depends on the smartphone's sensor data errors. As the pedestrian walking distance increases, a PDR system will drift, and this error reduces the location accuracy. PDR algorithms depend on step detection [18], heading [19] and position estimation [20]. The localization accuracy of a Wi-Fi system depends on RSSI fluctuation. In Wi-Fi localization systems, the distance between a user and the APs is estimated from a free-space path loss model. Wi-Fi signal fluctuation affects the distance estimation and degrades system performance. To overcome the problems related to PDR and Wi-Fi localization systems, a more accurate sensor data fusion framework for those systems is proposed in this paper.
The first sensor data fusion framework for Wi-Fi and PDR was introduced by Evennou and Marx [21], who proposed a sensor fusion framework which uses a Kalman filter and a particle filter. The benefits of the Evennou and Marx architecture were evaluated and compared with pure Wi-Fi localization systems and inertial navigation system (INS) positioning systems. Sensor fusion frameworks using a particle filter are explained in [22][23][24], and the real-world experiment results from these studies indicate remarkable performance improvements for indoor localization. Effective implementations of the extended Kalman filter (EKF) for sensor fusion are explained in [25][26][27]; the experiment results show that the authors' proposed sensor fusion frameworks achieve high localization accuracy when compared to individual localization systems. A complementary extended Kalman filter for the sensor fusion framework is explained by Leppäkoski et al. [28]. The results from [28] show that both map information and wireless local area network (WLAN) signals can be used to improve PDR system accuracy. The idea of using an unscented Kalman filter (UKF) algorithm for a sensor fusion framework is introduced by Chen et al. [29], who propose an integrated technique for merging a Wi-Fi localization system, PDR and smartphone sensor data using a UKF algorithm for 3D indoor localization. In [30,31], detailed analyses of sensor fusion frameworks for Wi-Fi fingerprinting with PDR systems are discussed, and the results indicate that a hybrid localization system's performance is better than that of individual localization systems. An adaptive and robust filter for indoor localization using Wi-Fi and PDR is proposed by Li et al. [32]; the results show that the sensor fusion framework reduced the gross errors in the Wi-Fi localization system and gives accurate position results for indoor localization.
So far, we have discussed different types of sensor fusion frameworks used for fusing Wi-Fi and PDR localization systems. These sensor fusion frameworks reduced localization errors; however, further position accuracy improvement is required for indoor localization. The proposed sensor fusion framework of this paper fuses the position results from the Wi-Fi localization and PDR systems with the help of a linear Kalman filter (LKF). For Wi-Fi localization, the proposed model uses a fusion algorithm to combine the trilateration and fingerprint algorithms. The fusion algorithm compensates for the line-of-sight (LOS) problem by taking advantage of fingerprinting to enhance the Wi-Fi system's performance. The trilateration algorithm achieves good results when applied in different environments, and it is free from a calibration phase. Wi-Fi fingerprint technology is useful for non-line-of-sight (NLOS) conditions, when the APs and the user face interference from other objects. Combining these two technologies offers better performance than the individual systems. The PDR algorithm in the proposed model uses two sensor fusion techniques to reduce the smartphone sensor errors. The accelerometer and gyroscope sensors in the smartphone are used for pitch and roll estimation; fusing them reduces the accumulated and drift errors from the sensors and gives better results for pitch and roll estimation. For heading estimation in the PDR algorithm, the proposed model uses another sensor fusion step, which combines the magnetometer and gyroscope headings for better heading estimation. The step length from the pitch-based step detector and the heading from the heading estimator are used for position estimation; the resulting PDR position is largely free from smartphone sensor errors and gives better results than conventional localization systems.

Proposed Sensor Fusion Framework Model Using PDR and Wi-Fi Localization Systems
The proposed sensor fusion framework model uses the Wi-Fi fusion algorithm results together with PDR to enhance the position accuracy for indoor localization. The Wi-Fi fusion model in the proposed algorithm utilizes the effective features of two classical localization algorithms, trilateration and fingerprinting, for user position estimation. The Wi-Fi fusion results are combined with the PDR results, and the output of the proposed model is largely free from smartphone sensor errors and Wi-Fi RSSI fluctuation problems. Figure 1 shows the proposed sensor fusion framework model. The PDR algorithm in the proposed model utilizes different smartphone sensors, namely the accelerometer, gyroscope and magnetometer, for position estimation. The data from the accelerometer and gyroscope are used for pitch and roll estimation, and the step detector uses the pitch values for step detection. The pitch and roll values are used together with the magnetometer data for heading estimation. The gyroscope data are also used to estimate the user heading, and an LKF combines the gyroscope and magnetometer headings for better performance. The step length is estimated from the step detector, and the position estimation algorithm uses the step length values and heading together to identify the current user position. The results from the PDR algorithm are used in the sensor fusion framework for further position improvement. The Wi-Fi fusion algorithm in the proposed model uses the results from the trilateration and fingerprint algorithms together for user position estimation, and its results are combined with the PDR results in the sensor fusion framework. The linear Kalman filter explained in [33] is used for the sensor fusion framework implementation.

PDR Positioning
The PDR positioning in the proposed model uses our previous work [17] for user position estimation. In [17], we proposed a sensor fusion technique for pitch and roll estimation, a pitch-based step detector algorithm for step detection, step length estimation from the step detector, a sensor fusion technique for heading estimation, and a position estimator. The pitch values from the proposed sensor fusion technique are used for user step detection: the pitch-based step detector identifies the user steps from the pitch amplitude, and a step is detected when the pitch amplitude crosses the threshold level. The step length estimator uses the results from the step detector and follows the model presented in [34]. The sensor fusion technique for heading estimation uses the heading results from the magnetometer and gyroscope sensors. The individual heading results of the magnetometer and gyroscope are not free from smartphone sensor errors and are not sufficient for position estimation. To compensate for the magnetometer and gyroscope sensor errors, the PDR model combines the gyroscope heading with the magnetometer heading; the combined results are more accurate than the individual sensor heading results. The LKF used in [33] is used for the heading sensor fusion. The position estimation algorithm proposed in [35] is used for current user position estimation.
The position estimation algorithm takes the user step length and heading values and estimates the current user position. For more details of PDR implementation, refer to our previous work in [17].
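As a concrete illustration, the pitch-threshold step detection and the dead-reckoning position update described above can be sketched as follows. This is a minimal sketch, not the exact model of [17]: the threshold value, the heading convention (radians, measured from the +y axis), and all function names are illustrative assumptions.

```python
import numpy as np

def detect_steps(pitch, threshold=0.12):
    """Return the sample indices where the pitch signal crosses the
    threshold on a rising edge -- one crossing per detected step.
    The threshold value here is an assumed placeholder."""
    steps = []
    for k in range(1, len(pitch)):
        if pitch[k - 1] < threshold <= pitch[k]:
            steps.append(k)
    return steps

def pdr_track(x0, y0, step_lengths, headings):
    """Dead-reckon a 2-D track: advance the position by one step
    length along the fused heading for each detected step."""
    xs, ys = [x0], [y0]
    for length, theta in zip(step_lengths, headings):
        xs.append(xs[-1] + length * np.sin(theta))
        ys.append(ys[-1] + length * np.cos(theta))
    return xs, ys
```

In a full PDR pipeline, `step_lengths` would come from the step length model of [34] and `headings` from the LKF-fused magnetometer/gyroscope heading.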

Wi-Fi Positioning
The classical localization approaches for Wi-Fi positioning are Wi-Fi trilateration and Wi-Fi fingerprinting. Of these approaches, Wi-Fi fingerprinting is the most popular, and it does not require the coordinates of the Wi-Fi APs in the experiment area. However, the fingerprint approach has two drawbacks when used in practical applications. First, the fingerprint approach needs a priori knowledge of the local area and thus requires a lot of time for the location survey and manpower to generate the fingerprints. The second drawback is the position accuracy. As opposed to other localization approaches, the position accuracy of the fingerprint approach depends on the amount of fingerprint data that is generated for a specific coverage area. If the amount of data used in the fingerprint maps is not sufficient, it is difficult to estimate the user position with high accuracy. Furthermore, the computation time for estimating the user position with the fingerprint algorithm is very high compared to other Wi-Fi localization approaches. In the case of the Wi-Fi trilateration algorithm, the distance from each Wi-Fi AP to the user is estimated using a free-space path loss model [36]. The localization algorithm uses the distance values from the path loss model to estimate the current user position. The performance of the trilateration algorithm depends on the channel conditions between the APs and the user: for position estimation using trilateration, the user should be within a limited Wi-Fi signal coverage area with LOS conditions. To improve the localization accuracy of Wi-Fi systems and to compensate for the position errors of the classical Wi-Fi localization approaches, we propose a Wi-Fi fusion algorithm for indoor localization. The proposed Wi-Fi fusion algorithm improves the indoor position accuracy by utilizing the advantages of the fingerprint and trilateration algorithms.

Wi-Fi Trilateration Algorithm
The model presented in our previous work [16] is used for trilateration algorithm implementation. For accurate position estimation using trilateration algorithm, the experiment area should contain at least three APs. In our experiment we use four APs and these APs are placed at four corners of the experiment area. A model for Wi-Fi localization using trilateration is shown in Figure 2.
A free-space path loss $F$ is used to estimate the user distance from the APs and is expressed as [37]:

$$F = 20\log_{10}(d) + 20\log_{10}(f) - 27.55,$$

where $d$ is the user distance from the AP in meters and $f$ is the signal frequency in megahertz. The trilateration algorithm follows the models explained in [38,39]. The distance $d_i$ between the smartphone at $(x_p, y_p)$ and AP $A_i$ at $(x_i, y_i)$ is expressed as:

$$d_i = \sqrt{(x_i - x_p)^2 + (y_i - y_p)^2}.$$

Squaring and expanding gives

$$d_i^2 = x_i^2 - 2x_i x_p + x_p^2 + y_i^2 - 2y_i y_p + y_p^2.$$

Writing the same equation for AP $k$ and subtracting it from the equation for AP $i$ eliminates the quadratic terms $x_p^2 + y_p^2$:

$$d_i^2 - d_k^2 = x_i^2 - x_k^2 + y_i^2 - y_k^2 - 2(x_i - x_k)x_p - 2(y_i - y_k)y_p.$$

With $i = 1$ and varying the index $k = 2, 3, 4$, we obtain a linear system in the two unknowns $\mathbf{x} = (x_p, y_p)^T$:

$$A\mathbf{x} = \mathbf{b}, \quad A = 2\begin{bmatrix} x_1 - x_2 & y_1 - y_2 \\ x_1 - x_3 & y_1 - y_3 \\ x_1 - x_4 & y_1 - y_4 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} d_2^2 - d_1^2 + x_1^2 - x_2^2 + y_1^2 - y_2^2 \\ d_3^2 - d_1^2 + x_1^2 - x_3^2 + y_1^2 - y_3^2 \\ d_4^2 - d_1^2 + x_1^2 - x_4^2 + y_1^2 - y_4^2 \end{bmatrix}.$$

The solution $(x_p^*, y_p^*)$ of this system is the point that minimizes the residual

$$\delta = \|A\mathbf{x} - \mathbf{b}\|^2.$$

Applying the minimum mean square error (MMSE) method, we express the user position $\mathbf{x}^*$ in the following form:

$$\mathbf{x}^* = (A^T A)^{-1} A^T \mathbf{b}.$$
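The distance inversion and the MMSE solution above can be sketched in Python. This is a hedged sketch: the transmit power and frequency values, and the function names, are illustrative assumptions, and `numpy.linalg.lstsq` is used to compute the least-squares solution.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm, f_mhz):
    """Invert the free-space path loss model
    F = 20*log10(d) + 20*log10(f) - 27.55 (d in m, f in MHz),
    taking the path loss as transmit power minus received RSSI."""
    path_loss_db = tx_power_dbm - rssi_dbm
    return 10 ** ((path_loss_db - 20 * np.log10(f_mhz) + 27.55) / 20)

def trilaterate(ap_xy, d):
    """Least-squares (MMSE) position from AP coordinates and ranges,
    using the linearized system obtained by subtracting the first
    range equation from the others (i = 1, k = 2, 3, 4)."""
    ap_xy = np.asarray(ap_xy, dtype=float)
    d = np.asarray(d, dtype=float)
    x1, y1 = ap_xy[0]
    A = 2 * (ap_xy[0] - ap_xy[1:])          # rows: [2(x1-xk), 2(y1-yk)]
    b = (d[1:] ** 2 - d[0] ** 2
         + x1 ** 2 - ap_xy[1:, 0] ** 2
         + y1 ** 2 - ap_xy[1:, 1] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]   # (x_p*, y_p*)
```

For example, with APs at the corners of a 10 m × 10 m area and exact ranges to the point (3, 4), `trilaterate` recovers that point.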

Wi-Fi Fingerprint Algorithm
The basic idea of creating fingerprint maps for Wi-Fi positioning is explained in [40,41]. In this localization approach, we use a grid-based representation of the indoor environment. The fingerprint algorithm consists of two stages. The first stage is the creation of fingerprint maps of the localization area from the sampled RSSI values. To accomplish this, we divide the location area into different zones and collect RSSI samples at all the grid points. In the second stage, we estimate the position of the receiving module from the fingerprint maps using the nearest neighbor method. Other methods used in position estimation are the support vector machine (SVM) and the hidden Markov model. Figure 3 shows the Wi-Fi fingerprint flow chart. The first stage of Wi-Fi fingerprint positioning is explained in Figure 3. The localization area is divided into grid points and RSSI samples are collected from each grid point. The fingerprint gathering utility is used for data collection at each access point. The fingerprint parsing utility takes the RSSI samples as input and generates the data necessary to build the RSSI probability distribution for each reference point. The location estimation procedure for the Wi-Fi fingerprint algorithm is shown in Figure 4. The location estimation procedure uses the localization algorithm on the fingerprint maps of the four APs and the RSSI samples received on-the-fly. The nearest neighbor method is used to determine the user location based on the fingerprint data: it calculates the Euclidean distance between the live RSSI sample and each reference point fingerprint, and the reference point with the minimum Euclidean distance is the approximated user location. Let the fingerprint of grid point $i$ ($i = 1, 2, \ldots, G$) stored in the fingerprint database be the vector of RSSI samples from the $N$ Wi-Fi APs:

$$\boldsymbol{\rho}_i = (\rho_{i,1}, \rho_{i,2}, \ldots, \rho_{i,N}) \in \mathbb{R}^N.$$

The measured RSSI of a user from the $N$ Wi-Fi APs at a particular location during the experiment is expressed as the vector $\boldsymbol{\delta} = (\delta_1, \delta_2, \ldots, \delta_N) \in \mathbb{R}^N$. The Euclidean distance $D_i$ between the fingerprint of grid point $i$ and the corresponding measured values can be expressed as in [42]:

$$D_i = \left\| \boldsymbol{\rho}_i - \boldsymbol{\delta} \right\| = \sqrt{\sum_{k=1}^{N} \left( \rho_{i,k} - \delta_k \right)^2}.$$

The minimum value of $D_i$ identifies the approximated user location:

$$i^* = \arg\min_{i} D_i.$$
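The nearest-neighbor search over the fingerprint map can be sketched as follows; the array shapes and the function name are assumptions for illustration.

```python
import numpy as np

def nearest_fingerprint(fingerprints, measured):
    """fingerprints: (G, N) array of RSSI vectors, one row per grid
    point, N APs per row; measured: (N,) live RSSI sample.
    Returns the grid point index i* minimizing the Euclidean
    distance D_i = ||rho_i - delta||."""
    D = np.linalg.norm(np.asarray(fingerprints, dtype=float)
                       - np.asarray(measured, dtype=float), axis=1)
    return int(np.argmin(D))
```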

Proposed Wi-Fi Fusion Algorithm
The idea of combining the Wi-Fi fingerprint algorithm with the trilateration algorithm is explained in [43,44]. In the proposed Wi-Fi fusion algorithm, an LKF combines the position results of the trilateration and fingerprint algorithms. The characteristics of the LKF give Wi-Fi fusion results with reduced localization errors and improved user position accuracy. The variables used for the LKF implementation are shown in Table 1.

Proposed Sensor Fusion Framework Algorithm using Wi-Fi Fusion and PDR Position Results
The proposed sensor fusion framework likewise uses the advantages of the LKF for combining the Wi-Fi fusion and PDR position results. Compared to other sensor fusion approaches such as the EKF, complementary EKF, UKF and Markov model [45], the LKF is simple and easy to use for the fusion process. In our experiment scenarios, we formulated the problem from a linear perspective, and the LKF is the best option for combining the user position results. The proposed sensor fusion framework uses the same LKF implementation, explained in Figure 5, for the Wi-Fi and PDR systems fusion. The LKF is a recursive filter that estimates the state of a linear system. The estimation in the LKF is repeated to minimize errors from inaccurate measurements and observations, and optimum performance is achieved when the noise is Gaussian. The LKF implementation consists of two phases, prediction and correction, as illustrated in Figure 5.
The position results from the Wi-Fi and PDR systems are fused by the LKF, and the results from the LKF show better performance than the individual systems. In the LKF prediction phase, the state variable $\hat{x}_k$ and error covariance matrix $P_k$ are predicted using the state transition matrix $A$ from the corrected state $\hat{x}_{k-1}$ and error covariance $P_{k-1}$ of the previous time step; the process noise covariance $Q$ is added in the error covariance prediction. In the correction phase, the Kalman gain represents the relative reliability of the predicted value and the measured value: the larger the Kalman gain, the more reliable the measured value, and the smaller the Kalman gain, the more reliable the predicted value. The final output $\hat{x}_k$ of the LKF is determined according to the Kalman gain.
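The predict/correct cycle described above can be sketched as a generic LKF. The paper's exact state, transition and noise matrices are not reproduced here, so the matrices passed to the constructor are illustrative assumptions; conceptually, the PDR position drives the prediction and the Wi-Fi fusion position serves as the measurement z.

```python
import numpy as np

class LinearKalmanFilter:
    """Minimal linear Kalman filter with the standard two phases."""

    def __init__(self, A, H, Q, R, x0, P0):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # x_k^- = A x_{k-1};  P_k^- = A P_{k-1} A^T + Q
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q

    def correct(self, z):
        # Kalman gain K = P^- H^T (H P^- H^T + R)^{-1}
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        # a large K trusts the measurement z, a small K the prediction
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x
```

With identity state and measurement models, one predict/correct cycle moves the estimate part-way from the predicted position toward the measured one, with the split governed by Q and R.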

Experiment and Result Analysis
The performance of the proposed sensor fusion framework is evaluated in two experiment scenarios based on the user motion. We used two user motions, rectangular and linear, with predefined paths. For collecting data, a user of age 27 and height 172 cm walked on the reference path in the experiment areas with a smartphone in his hand. Figure 6 shows the experiment scenarios; the red line indicates the reference paths used for the experiment. For collecting the smartphone sensor and RSSI data, we used a Samsung Galaxy Note 8 smartphone with a Snapdragon 835 processor and 6 GB RAM. The experiment area uses LG U+ Wi-Fi APs to provide RSSI signals. In the first experiment, the user took a rectangular path in a test area of 45 m × 37 m. The test area was divided into 4 × 20 grid points and the Wi-Fi RSSI for each of the points was recorded. To collect the Wi-Fi RSSI, we used a fingerprint Android application which stores the RSSI measurements in the form of comma-separated values (CSV). This application provides an option for selecting the required APs for the experiment; four APs were chosen for localization based on their signal strength in the experiment area. For the second experiment, a linear motion of the user was considered in a 75 m × 3 m corridor, as shown in Figure 6. The corridor was divided into 2 × 60 grid points and the user walked on the reference paths. For PDR data collection, a sensor-streaming inertial measurement unit and global positioning system Android application was used. The Android application can access the accelerometer, gyroscope, and magnetometer sensors in the smartphone; the user can select any of the sensors and analyze their current values. The application has an option for adjusting the sensor frequency based on the experiment requirements.
The performance of the proposed sensor fusion framework was analyzed using the data from paths 1 and 2. Figures 7 and 8 show the results from the proposed sensor fusion framework and the conventional sensor fusion frameworks for paths 1 and 2. From Figures 7 and 8, it can be seen that the proposed sensor fusion framework minimizes localization errors when compared to the conventional sensor fusion approaches. In these experiments, the starting position was the origin and the experiments were carried out strictly along the predefined paths. The red line in Figure 7 shows the reference path for the rectangular pedestrian motion and the dashed red line in Figure 8 shows the reference path for the linear user motion. The accuracy of the proposed sensor fusion framework was evaluated by computing the average localization error and the probability distribution of the localization errors. The average localization errors for the proposed sensor fusion framework and the conventional sensor fusion frameworks for paths 1 and 2 are shown in Figures 9 and 10. From Figures 9 and 10, the proposed sensor fusion framework has fewer position errors and gives better performance than the conventional sensor fusion approaches, and its average localization error is lower than that of the conventional sensor fusion frameworks. The PDR+Wi-Fi fingerprint approach shows a nearly constant average localization error for most of the experiment time; this is due to the fingerprint map approximation between the online data and the offline data. The PDR+Wi-Fi trilateration approach shows the worst performance of the compared sensor fusion approaches. The localization accuracy of the PDR+Wi-Fi trilateration approach suffers from multipath effects; the NLOS conditions and the free-space path loss model in this approach yield larger position errors than the other sensor fusion approaches.
The probability distribution of localization errors for the sensor fusion frameworks in both experiment scenarios is shown in Figures 11 and 12. From the probability distribution of the localization error plots, it is clear that the proposed sensor fusion framework is better than the other methods and gives high position accuracy for indoor localization. When the experiment starts, the proposed sensor fusion framework and the PDR+Wi-Fi fingerprint approach show the same localization errors. As the experiment time increases, the proposed sensor fusion framework shows the highest position accuracy of the compared sensor fusion frameworks. At some points in the experiment, the PDR+Wi-Fi fingerprint and PDR+Wi-Fi trilateration approaches show the same localization errors. Tables 2 and 3 summarize the performance of the proposed sensor fusion framework versus the conventional sensor fusion frameworks in terms of mean error, maximum error, minimum error, and standard deviation of error (STD). From Tables 2 and 3, the proposed sensor fusion framework has a lower mean error and maximum error than the conventional sensor fusion approaches. The mean error results show that the PDR+Wi-Fi trilateration and PDR+Wi-Fi fingerprint approaches have almost similar error results compared to the proposed sensor fusion approach. However, the maximum error from PDR+Wi-Fi trilateration is very high compared to the other two sensor fusion approaches. The minimum error results indicate that the PDR+Wi-Fi trilateration approach has the least minimum error of the three approaches. The standard deviation of error results validates the significance of the proposed sensor fusion framework with respect to the conventional sensor fusion frameworks. From all the experiment results and analysis, we conclude that the proposed sensor fusion approach significantly outperforms the conventional sensor fusion approaches and achieves high position accuracy for indoor localization.
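The per-path summary statistics reported in the tables (mean, maximum, minimum and STD of the localization error) can be computed from an estimated track and its reference path as follows; the function name and array layout are illustrative.

```python
import numpy as np

def error_statistics(estimated, reference):
    """Per-epoch Euclidean localization error between an estimated
    track and the reference path, each given as (T, 2) position
    arrays, summarized as mean/max/min/STD."""
    e = np.linalg.norm(np.asarray(estimated, dtype=float)
                       - np.asarray(reference, dtype=float), axis=1)
    return {"mean": float(e.mean()), "max": float(e.max()),
            "min": float(e.min()), "std": float(e.std())}
```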

Conclusions and Future Work
This paper proposed a sensor fusion framework for indoor localization using Wi-Fi RSSI signals and smartphone sensors. In the proposed sensor fusion framework model, we used a PDR algorithm which reduces the smartphone sensor errors and gives accurate position results for indoor localization. For Wi-Fi localization, we proposed a Wi-Fi fusion algorithm that achieves better performance than conventional Wi-Fi localization systems. To combine the position results from the PDR and Wi-Fi systems, we proposed a sensor fusion framework model that shows better indoor localization accuracy than conventional Wi-Fi localization systems. The results from the rectangular motion experiment show that the proposed sensor fusion framework has a maximum error of 1.17 m relative to the predefined path. In the case of linear motion, the results show that the proposed sensor fusion framework has a maximum error of 0.44 m relative to the predefined path. From the experiments and results, the proposed sensor fusion framework showed reasonable localization accuracy for indoor localization with fewer IMU sensor errors and mitigated the Wi-Fi RSSI signal fluctuation problems. In the future, we will mainly focus on multiple-pedestrian localization and will carry out experiments with multiple users in indoor scenarios.