Article

Map-Assisted 3D Indoor Localization Using Crowd-Sensing-Based Trajectory Data and Error Ellipse-Enhanced Fusion

1 College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
2 Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
3 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(18), 4636; https://doi.org/10.3390/rs14184636
Submission received: 1 August 2022 / Revised: 10 September 2022 / Accepted: 13 September 2022 / Published: 16 September 2022

Abstract

Crowd-sensing-based localization is regarded as an effective method for providing indoor location-based services in large-scale urban areas. The performance of the crowd-sensing approach is limited by the poor accuracy of collected daily-life trajectories and by how efficiently different location sources and indoor maps are combined. This paper proposes a robust map-assisted 3D indoor localization framework using crowd-sensing-based trajectory data and error ellipse-enhanced fusion (ML-CTEF). In the off-line phase, a novel inertial odometry, which combines 1D convolutional neural networks (1D-CNN) with a Bi-directional Long Short-Term Memory (Bi-LSTM)-based walking speed estimator, is proposed for the accurate pre-processing of crowd-sensed trajectory data under different handheld modes. The Bi-LSTM network is further applied for floor identification, and an indoor network matching algorithm is adopted for the generation of the fingerprinting database with little human intervention. In the online phase, an error ellipse-assisted particle filter is proposed for the intelligent integration of inertial odometry, crowdsourced Wi-Fi fingerprinting, and indoor map information. The experimental results prove that the proposed ML-CTEF realizes autonomous and precise 3D indoor localization in complex, large-scale indoor environments; the estimated average positioning error is within 1.01 m in a multi-floor indoor building.


1. Introduction

Indoor positioning capability is regarded as an important part of smart city infrastructure. Given complex and diversified urban indoor environments, providing autonomous and low-cost indoor location-based services has become an urgent task. Existing indoor positioning systems (IPS) such as Wi-Fi [1], BLE [2], UWB [3], acoustic sensors [4], and inertial sensors [5] can provide indoor positioning with different levels of precision. Among these systems, the Wi-Fi positioning system (WPS) has proven to be an efficient approach for realizing universal localization without installing additional facilities in large-scale indoor spaces, typically by mining crowdsourced spatiotemporal data collected from mobile sensors during people's daily lives [6].
At this stage, the performance of crowd-sensing-based WPS is limited by the poor accuracy of collected daily-life trajectory data, mainly due to the diversified handheld modes of smartphones [7], the cumulative error of built-in sensors [8], the difficulty of efficiently generating and updating crowdsourced navigation databases [9], and the lack of efficient combinations of different location sources with existing indoor maps or floorplans [10].
To solve the above problems, previous researchers have made many meaningful contributions. Yan et al. [11] proposed RIDI, which aims at providing robust smartphone-based walking speed estimation and localization under changeable handheld modes and achieves results comparable to traditional visual-inertial odometry (VIO). They further enhanced the accuracy and stability of inertial odometry by developing the RoNIN network, which realizes a significant improvement using a new dataset containing over 40 h of built-in sensor data contributed by different users in their daily lives [12]. Klein et al. [13] used a machine learning (ML) algorithm to detect the daily-life handheld modes of the pedestrian and proposed an adaptive gain-value selection method to improve the accuracy of step-length estimation; to enhance algorithm efficiency, only a single tester is needed for model training. Guo et al. [14] proposed a handheld-mode-awareness strategy for walking speed estimation under pedestrians' complex motion modes using a machine learning-based classification approach. Comprehensive mobile sensor data were collected by different users for model training and for accuracy evaluation of the proposed speed estimator, which effectively decreases the cumulative error originating from low-cost micro-electromechanical systems (MEMS) sensors.
For efficient crowdsourced navigation database generation, Yang et al. [9] used a transfer learning approach to improve the efficiency and accuracy of wireless database updates, which can autonomously recognize outlier features and search for a suitable mapping space between the generated database and the collected crowd-sensing data. Li et al. [15] proposed the RITA localization system, which models the rotation and translation of crowdsourced trajectories as optimization problems under the distance constraints of Wi-Fi APs; a particle filter (PF) is further applied for robust multi-source fusion. Zhang et al. [16] proposed novel quality evaluation criteria for autonomously selecting eligible crowdsourced trajectories for construction of the final navigation database, in which the motion modes, sensor biases, and time duration of each trajectory are adopted as the essential features; the final experimental results are comparable to those of map-aided approaches.
Indoor map information is also essential for generating and updating crowdsourced databases with high accuracy. Wu et al. [17] proposed the HTrack system for more accurate map matching, which takes the pedestrian heading and geospatial data into consideration and can effectively reduce calculation complexity. Xia et al. [18] combined pedestrian dead reckoning (PDR), BLE-based received signal strength indicator (RSSI) ranging, and map constraints using a unified PF; in addition, accessible and inaccessible spaces are defined to further enhance positioning continuity and accuracy, and an RMSE of 1.48 m is finally achieved. Li et al. [19] proposed fingerprinting accuracy indicators for autonomously predicting the accuracy of Wi-Fi and magnetic fingerprinting results from signal, indoor map, and database-based features, which effectively improves the performance of the final integrated localization using crowd-sensing data.
In addition, indoor floor detection plays an important part in enhancing the efficiency and accuracy of crowdsourced trajectory classification and database generation. Zhao et al. [20] developed the HYFI system, in which the distribution of local Wi-Fi APs is adopted to provide an initial floor estimate that is further combined with pressure information to decrease environmental effects; an overall accuracy of more than 96.1% is realized compared with a single source. Shao et al. [21] proposed an adaptive wireless floor detection algorithm for large-scale, multi-floor indoor areas by extracting Wi-Fi RSSI and spatial similarity features and dividing the local environment using a block model, which achieved an average accuracy of 97.24%.
To further enhance the performance of crowd-sensing-based database construction and multi-source fusion-based 3D indoor localization, this paper proposes the ML-CTEF structure. It uses a deep-learning framework for crowdsourced trajectory data modeling, accurate walking speed estimation, and floor detection; applies indoor network information for trajectory matching and calibration during crowdsourced database generation; and finally adopts the error ellipse to enhance the performance of the traditional PF in the multi-source fusion phase. With the proposed ML-CTEF framework, sub-meter-level optimized indoor trajectories can be acquired to enhance crowdsourced navigation database construction, and meter-level indoor positioning precision can be realized. The main contributions of this work are summarized as follows:
(1)
This paper proposes a novel inertial odometry that combines a deep-learning-based walking speed estimator (DLSE) with non-holonomic constraints; it takes the handheld modes, lateral error, and step-length constraint into consideration and updates the location based on a period of observations instead of only the last moment.
(2)
A novel Bi-LSTM-based floor detection algorithm is applied to provide a floor index reference for crowdsourced trajectories by extracting hybrid wireless and sensor-related features, which enhances recognition precision and further improves the efficiency of crowdsourced database generation.
(3)
This paper simplifies the indoor network, represents it in the form of a matrix, and proposes a grid search approach for crowdsourced trajectory matching and calibration. The calibrated trajectories effectively improve the precision of crowdsourced database generation.
(4)
Based on the results of walking speed estimation, floor detection, and crowdsourced database generation, an error ellipse-assisted particle filter (EE-PF) is proposed for the robust integration of Wi-Fi fingerprinting, inertial sensor data, and indoor map information, realizing meter-level positioning accuracy.
The remainder of this article is organized as follows. Section 2 presents the related work. Section 3 introduces the deep-learning-based speed estimator and inertial odometry. Section 4 presents Bi-LSTM-based floor detection, crowdsourced trajectory matching and calibration, and error ellipse-assisted particle filter-based multi-source fusion. Section 5 describes the experimental results of the proposed ML-CTEF. Finally, Section 6 concludes this article.

2. Related Work

Crowdsourced wireless positioning technology has proven to be an effective and labor-saving approach for providing indoor location-based services in large-scale indoor spaces, as it can autonomously generate an indoor navigation database using the massive daily-life trajectories provided by the public. Crowdsourced wireless positioning systems generally follow two main approaches: map-assisted positioning algorithms and non-map-assisted positioning algorithms.
Zee [22] and UnLoc [23] are two early crowdsourced indoor localization systems, which adopt indoor map information and deployed landmark points to provide absolute locations for pedestrians indoors. LIFS [24] constructs the indoor navigation database from modeled PDR-originated trajectories and indoor floor plans with little human intervention. Li et al. [19] use geospatial big data technology to collect massive indoor pedestrian trajectories with an enhanced PDR algorithm and propose precision indicators for different location sources, which effectively improves indoor positioning performance during the multi-source fusion phase. FineLoc [25] further applies BLE nodes in the procedure of crowdsourced trajectory generation and merging; its disadvantages are that prior location information of the BLE nodes is required to obtain the pedestrian's absolute position, and the accuracy of BLE landmark detection decreases in open indoor areas.
Non-map-assisted indoor positioning algorithms aim at generating a navigation database without the help of indoor map information or deployed local facilities. Walkie-Markie [26] is a classical crowdsourced indoor mapping and positioning system, which models the RSSI vectors acquired from local Wi-Fi access points (APs) as signal-marks, with the AP locations derived from the nearest RSSI-ranging results. PiLoc [27] clusters crowdsourced indoor trajectories using a combination of Wi-Fi RSSI similarity detection results and the shape of each collected trajectory. The disadvantage of the Walkie-Markie and PiLoc systems is that highly precise heading information is required from the collected crowdsourced trajectories, which is not always available in real-world indoor scenes with complex interference. Li et al. [8] realize crowdsourced trajectory merging using a loop closure detection algorithm based on acquired Wi-Fi RSSI similarity information. To improve algorithm efficiency, they further propose the Wi-Fi-RITA system [15], which applies rotation and scale parameters in the optimization model for trace merging and is more efficient when merging a large number of crowdsourced trajectories.
In addition, multi-source fusion-based indoor positioning solutions also provide an effective way to enhance the accuracy and adaptability of large-scale indoor location-based services (iLBS), in which the MEMS sensor-based localization approach is usually applied as the basic positioning model [28], and different technologies are used for indoor navigation, including Wi-Fi [1], Bluetooth [2], cameras [29,30], and so on. The different indoor location sources are integrated by classical Kalman filters (KF), including the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), or by the particle filter (PF); however, the traditional KF or PF only provides fixed weights for the corresponding location sources, which can degrade localization performance in complex indoor environments.
Lee and Kim [31] propose a hybrid marker-based indoor positioning system (HMIPS), which uses quick response markers and augmented reality to enhance the localization ability in large-scale indoor areas; a Viterbi tracking algorithm combines image information and inertial sensor data to correct the cumulative error originating from the single-sensor-based approach. Wang et al. [32] propose a tightly coupled multi-source fusion structure combining UWB, floorplans, and MEMS sensors: inertial navigation system (INS) mechanization is applied to eliminate the non-line-of-sight (NLOS) effect of UWB-ranging measurements, and a map line-segment matching algorithm further eliminates abnormal observations. Wang et al. [33] develop a novel heading estimation algorithm that can be applied under complex smartphone handheld modes and propose an online trajectory calibration method using magnetic fingerprinting-based matching, which realizes meter-level accuracy in different tested indoor environments.
Different from existing crowdsourced indoor positioning and multi-source fusion algorithms, our proposed ML-CTEF uses a deep-learning framework for crowdsourced trajectory data modeling, accurate walking speed estimation, and floor detection; it needs only indoor network information for trajectory matching and calibration to generate the final navigation database, and the error ellipse is applied within the traditional PF algorithm to increase the multi-source fusion accuracy.

3. Deep-Learning Based Speed Estimator and Inertial Odometry

This work proposes the ML-CTEF framework, which combines inertial odometry with multi-level observations and can autonomously generate a crowdsourced navigation database and realize accurate multi-source fusion. First, the sensor data acquired from the tri-axis gyroscope, tri-axis accelerometer, tri-axis magnetometer, and barometer are integrated by the proposed inertial odometry to obtain the raw 3D position, velocity, and attitude information. Next, the deep-learning-based speed estimator (DLSE), the Bi-LSTM-based floor detection, and the non-holonomic constraints are modeled as multi-level observations and integrated with the inertial odometry using the EE-PF. In addition, map-matching and network fusion algorithms are proposed to optimize and calibrate the crowdsourced trajectories and generate an accurate navigation database, which is further applied in the online multi-source fusion phase. The overall structure of the proposed ML-CTEF framework is shown in Figure 1. This section focuses on a novel deep-learning-based speed estimator for pedestrian walking speed estimation under complex handheld modes; the estimated walking speed is further combined with pedestrian dead reckoning (PDR) mechanization and magnetic observations to form the novel inertial odometry for the robust reconstruction of crowdsourced indoor trajectories.

3.1. Deep-Learning Based Speed Estimator

For pedestrian indoor localization on mobile terminals, INS and PDR mechanizations are regarded as two effective approaches for realizing inertial odometry. However, the accuracy of traditional INS/PDR mechanizations is limited by the diversified handheld modes of the terminals and the cumulative error of inertial sensors. In addition, the location updates provided by INS/PDR mechanizations are always based only on the previous moment, which may miss useful motion information during the selected walking period.
To solve these problems of traditional dead reckoning, a novel deep-learning-based speed estimator (DLSE) is proposed in this work to provide an accurate reference for PDR mechanization. To recognize the changeable handheld modes of mobile terminals, a 1D-CNN model is applied to detect the different handheld modes from a period of acceleration and angular velocity data and the corresponding extracted features. Afterward, a Bi-LSTM network is applied to predict the pedestrian's continuous walking speed from the same inputs. The overall structure of the proposed deep-learning-based speed estimator is shown in Figure 2:
Figure 2 describes the main structure of the proposed deep-learning-based speed estimator. In the proposed model, the 1D-CNN is applied as the first part for handheld-mode detection; the recognized handheld modes and extracted features are then applied in the Bi-LSTM network for accurate estimation of the pedestrian's walking speed-related features; and a fully connected layer is finally adopted as the output layer, which yields the real-time estimated walking speed.
To realize handheld-mode detection and the corresponding walking speed estimation, the input features of the 1D-CNN are first the smoothed data collected from the accelerometer and gyroscope for handheld-mode detection; the detected results are further applied as an enhanced input feature for training the corresponding Bi-LSTM network to estimate the final speed under different handheld modes. To improve the final performance of speed estimation, a window of 3 s of data is applied as the input vector, with a sampling rate of 50 Hz.
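As a concrete illustration, a minimal PyTorch sketch of this two-stage pipeline is given below. The channel widths, pooling layers, and the four-class mode head are illustrative assumptions; the dimensions stated in Section 5 (10 input channels, a kernel size of 5, a Bi-LSTM hidden size of 30, and a Bi-LSTM input dimension of 11) are used directly.

```python
import torch
import torch.nn as nn

class DLSE(nn.Module):
    """Sketch of the 1D-CNN + Bi-LSTM speed estimator; layer widths are assumptions."""
    def __init__(self, n_channels=10, n_modes=4, hidden=30):
        super().__init__()
        # 1D-CNN branch: handheld-mode detection over a 3 s window (150 samples @ 50 Hz)
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mode_head = nn.Linear(64, n_modes)
        # Bi-LSTM branch: sensor features plus the detected mode -> walking speed
        self.lstm = nn.LSTM(input_size=n_channels + 1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.speed_head = nn.Linear(2 * hidden, 1)  # fully connected output layer

    def forward(self, x):                            # x: (batch, 150, n_channels)
        mode_logits = self.mode_head(self.cnn(x.transpose(1, 2)))
        mode = mode_logits.argmax(dim=1, keepdim=True).float()
        mode_feat = mode.unsqueeze(1).expand(-1, x.size(1), 1)
        h, _ = self.lstm(torch.cat([x, mode_feat], dim=2))
        return mode_logits, self.speed_head(h[:, -1, :])  # mode logits, speed (m/s)
```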
In the convolution layer, the relationship between the input value and the output value is described as:
$$y_j = \zeta\left(\sum_{i=1}^{M} x_i * k_{ij} + b_j\right)$$
where $x_i$ indicates the input vector, $k_{ij}$ indicates the kernel weights, $b_j$ indicates the biases, $\zeta(\cdot)$ represents the activation function, and $y_j$ is the output vector of the convolution layer.
In the Bi-LSTM layer, the update model of Bi-LSTM parameters is described as [34]:
$$\begin{aligned} f_t &= \sigma\left(W_f\left[h_{t-1}, X_t\right] + b_f\right) \\ i_t &= \sigma\left(W_i\left[h_{t-1}, X_t\right] + b_i\right) \\ \tilde{C}_t &= \tanh\left(W_C\left[h_{t-1}, X_t\right] + b_C\right) \\ C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\ o_t &= \sigma\left(W_o\left[h_{t-1}, X_t\right] + b_o\right) \\ h_t &= o_t \odot \tanh\left(C_t\right) \end{aligned}$$
where $i_t$, $f_t$, and $o_t$ represent the input, forget, and output gates; $X_t$ indicates the input vector of the Bi-LSTM model at timestamp $t$; and $h_t$ represents the hidden state vector, which is regarded as the output of the Bi-LSTM model at that moment. $\sigma$ indicates the sigmoid function, $\tilde{C}_t$ is the candidate vector, and $C_t$ is the memorized cell state at timestamp $t$.
Finally, the output of the Bi-LSTM units is fed into a fully connected network $\mathrm{MLP}(\cdot)$, and the predicted walking speed is presented as:
$$v_i = \mathrm{MLP}(y_i)$$
We adopt the walking speed as the expected output value during the training phase of the deep-learning-based speed estimator framework. The initial predicted walking speed does not contain the absolute coordinate reference, and we can just regard it as the forward speed, which is described as:
$$\mathbf{v}^b = \begin{bmatrix} v_{forward}^b & 0 & 0 \end{bmatrix}^T$$
where $v_{forward}^b$ is calculated by a step-length-based method.
To express the pedestrian's speed in the navigation frame, the estimated walking speed $\mathbf{v}^b$ should be transformed based on the results of the handheld-mode recognition. The forward speed in the navigation coordinate system is calculated as [7]:
$$\mathbf{v}^n = C_e^n C_{e_1}^e C_b^{e_1} \mathbf{v}^b$$
where $\mathbf{v}^n$ is the converted NHC-based speed; $C_b^{e_1}$ represents the calculated attitude matrix from the carrier coordinate system to the ENU coordinate system; $C_{e_1}^e$ indicates the handheld-mode-related translation matrix, which converts the heading-related axis into the reading-mode-based heading-related axis based on the results of handheld-mode recognition; and $C_e^n$ indicates the translation matrix from the ENU coordinate system to the NED coordinate system.
The final estimated location of the pedestrian is the combination of both the heading and walking speed information:
$$P_i = P_{i-1} + \sum_{t=i-1}^{t=i} v_t^n$$
where $v_t^n$ indicates the converted walking speed estimated during one detected step period, which has been transferred into the navigation coordinate system, and the real-time location $P_i$ is updated based on the previous result.
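To make the chain of frame rotations concrete, a minimal sketch follows; the three rotation matrices are assumed to be supplied by the attitude estimator and the handheld-mode recognizer, and the summation mirrors the per-step position update above.

```python
import numpy as np

def forward_speed_to_nav(v_forward, C_b_e1, C_e1_e, C_e_n):
    """Rotate the DLSE forward speed into the navigation frame: v^n = C_e^n C_e1^e C_b^e1 v^b."""
    v_b = np.array([v_forward, 0.0, 0.0])   # forward-only body-frame speed (NHC)
    return C_e_n @ C_e1_e @ C_b_e1 @ v_b

def update_position(P_prev, v_n_samples, dt):
    """Accumulate the navigation-frame speeds sampled every dt seconds within one step."""
    return P_prev + sum(v_n_samples) * dt
```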

3.2. Integrated Model of Inertial Odometry

In a complex indoor environment, the initial location of pedestrians is usually unknown due to the lack of absolute reference. To model the real-time three-dimensional (3D) trajectory of pedestrians indoors, the state vector of the trajectory estimator is constructed as follows:
$$X_k = \left[ r_{x_k},\; r_{y_k},\; r_{z_k},\; \xi_k,\; \varepsilon_k \right]^T$$
where $r_{x_k}$, $r_{y_k}$, and $r_{z_k}$ indicate the real-time estimated 3D position, $\xi_k$ is the corrected heading information, and $\varepsilon_k$ represents the heading bias caused by the random walk error of the gyroscope. The state update equation is described as:
$$f(X_k) = \begin{bmatrix} r_{x_{k-1}} + L_k \cos\xi_k \\ r_{y_{k-1}} + L_k \sin\xi_k \\ r_{z_{k-1}} + \Delta h_k \\ \xi_{k-1} + \Delta T\,\varepsilon_k \\ e^{-\Delta T / T_c}\,\varepsilon_{k-1} \end{bmatrix}$$
where the heading bias $\varepsilon_k$ is modeled as a first-order Markov process, $\Delta T$ indicates the estimated time period of each gait, $T_c$ indicates the correlation time, $\Delta h_k$ is the height increment, and $L_k$ is the estimated gait-length information provided in [7].
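A minimal sketch of this per-step propagation, assuming the gait length, height increment, and gait period have already been extracted for each detected step:

```python
import numpy as np

def propagate_state(X, L_k, dh_k, dT, Tc):
    """One step of the state update: X = [rx, ry, rz, xi (heading), eps (heading bias)]."""
    rx, ry, rz, xi, eps = X
    return np.array([
        rx + L_k * np.cos(xi),     # planar position advanced along the heading
        ry + L_k * np.sin(xi),
        rz + dh_k,                 # height updated by the per-step increment
        xi + dT * eps,             # heading corrected by the drift bias
        np.exp(-dT / Tc) * eps,    # first-order Markov model of the heading bias
    ])
```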
Considering the interference caused by the indoor artificial magnetic field, the heading difference during the straightforward motion mode under quasi-static magnetic field (QSMF) periods [35] is extracted as the pseudo-observation to constrain the heading divergence error:
$$\tilde{\psi}_k - \hat{\psi}_0 = \delta\psi_k + n_\psi$$
where $\hat{\psi}_0$ and $\tilde{\psi}_k$ represent the reference heading acquired from the first epoch of the recognized QSMF period under straightforward walking modes and the heading at other epochs, respectively, and $n_\psi$ indicates the Gaussian noise.
The deep-learning-estimated velocity in the navigation frame is modeled as an observation value:
$$\delta z_v^n = v_{Step}^n - v_{DL}^n$$
where $v_{Step}^n$ indicates the walking speed calculated in (11) and $v_{DL}^n$ is the deep-learning-based speed estimation result. The observation equation for the location increment in the n-frame can also be given by:
$$\delta z_p^n = p_{Step}^n - p_{DL}^n$$
where $p_{Step}^n$ indicates the location updated by the state value, and $p_{DL}^n$ indicates the deep-learning model-based location update result.

4. Deep-Learning Based Database Generation and Intelligent Integration

To improve the efficiency and accuracy of crowdsourced Wi-Fi fingerprinting database generation and multi-source fusion, a deep-learning framework is proposed for autonomous floor detection and crowdsourced trajectory matching and calibration, followed by error ellipse-assisted integration of Wi-Fi and MEMS sensors.

4.1. Bi-LSTM Based Floor Detection

To make full use of all observations acquired from the crowdsourced navigation data and exploit the complementary effects of the observation sources, a time-continuous floor detection method based on the Bi-LSTM network is applied in this work, taking a period of crowdsourced data into consideration to improve the final accuracy of floor detection. The extracted features combine Wi-Fi, barometer, and magnetic sources and are constructed as the input vector of the Bi-LSTM:
(1)
The most representative collected RSSI values: To cover the wireless characteristics of a specific indoor floor, the RSSI vector collected from several of the most characteristic local Wi-Fi APs is constructed as an input during the training phase of the Bi-LSTM:
$$\left[ \varsigma_{RSSI}^1,\; \varsigma_{RSSI}^2,\; \cdots,\; \varsigma_{RSSI}^k \right]^T, \quad \varsigma_{RSSI}^i \geq Th_{RSSI}$$
where $\varsigma_{RSSI}^k$ is the scanned RSSI value of a specific Wi-Fi AP, and $Th_{RSSI}$ is the threshold of the RSSI filter (only APs whose RSSI exceeds the threshold are retained).
(2)
Average RSSI index of the representative collected RSSI vector: To describe the universal feature of the RSSI vector, the average signal strength is also calculated as one of the input values of the proposed Bi-LSTM:
$$\varsigma_{RSSI}^{Ave} = \frac{1}{k} \sum_{i=1}^{k} \varsigma_{RSSI}^{i}$$
where $\varsigma_{RSSI}^{Ave}$ represents the estimated average RSSI index.
(3)
Differences of the representative collected RSSI vector: The real-time difference of the scanned RSSI vector can effectively present the changing characteristics of the environment:
$$\varsigma_{RSSI}^{Diff} = \varsigma_{RSSI}^{k} - \varsigma_{RSSI}^{k-1}$$
where $\varsigma_{RSSI}^{Diff}$ indicates the RSSI difference index.
(4)
Norm of the collected local magnetic vector:
$$M_{Norm} = \sqrt{m_x^2 + m_y^2 + m_z^2}$$
where $m_x$, $m_y$, and $m_z$ indicate the real-time collected local magnetic field data.
(5)
Barometric pressure at specific floors during different time periods of the same day:
$$P_{Baro} = \begin{cases} P_{Baro}^1, & 6 < T < 12 \\ P_{Baro}^2, & 12 < T < 18 \\ P_{Baro}^3, & 18 < T < 24 \end{cases}$$
where $P_{Baro}^1$, $P_{Baro}^2$, and $P_{Baro}^3$ indicate the real-time collected barometer data during different time periods of the same day. Because of the effects of wind, humidity, and fine dust indoors, the measured pressure may vary even within a one-day period; thus, pressures for three different time periods are collected to mitigate this time deviation.
(6)
Barometric pressure difference index:
$$P_{Baro}^{Diff} = P_{Baro}^{k} - P_{Baro}^{k-1}$$
where $P_{Baro}^k$ and $P_{Baro}^{k-1}$ are the acquired pressure data at two adjacent timestamps. The barometric pressure difference index can effectively indicate floor changes during the pedestrian's walking procedure.
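The following sketch assembles feature groups (1)–(6) into a single input vector; the RSSI threshold of −85 dBm and the number of representative APs are illustrative assumptions, and fixed-length padding of the RSSI block is omitted.

```python
import numpy as np

def floor_features(rssi_k, rssi_km1, mag, baro_k, baro_km1, hour,
                   th_rssi=-85.0, n_ap=10):
    """Build the hybrid Bi-LSTM floor-detection features from one crowdsourced epoch.

    rssi_k / rssi_km1: RSSI vectors (dBm, aligned by AP) from two adjacent scans;
    mag: (mx, my, mz); baro_k / baro_km1: pressures at adjacent timestamps;
    hour: local hour of day used for the time-slot feature.
    """
    strongest = np.sort(rssi_k[rssi_k > th_rssi])[::-1][:n_ap]   # (1) representative RSSI values
    rssi_avg = strongest.mean()                                  # (2) average RSSI index
    rssi_diff = rssi_k - rssi_km1                                # (3) RSSI difference index
    mag_norm = np.linalg.norm(mag)                               # (4) magnetic field norm
    # (5) time-of-day slot per the 6-12 / 12-18 / 18-24 split; other hours are
    # folded into the last slot as a simplification
    slot = 1 if 6 < hour <= 12 else (2 if 12 < hour <= 18 else 3)
    baro_diff = baro_k - baro_km1                                # (6) pressure difference index
    return np.concatenate([strongest, [rssi_avg], rssi_diff,
                           [mag_norm, slot, baro_k, baro_diff]])
```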

4.2. Crowdsourced Trajectory Matching and Calibration

In this work, the indoor pedestrian network extracted from each floor is represented in the form of a matrix; each intersection point corresponds to an element of the matrix, which contains the heading and length information between each pair of intersection points. The extracted indoor network and the corresponding network matrix are described in Figure 3:
In Figure 3, the overall indoor pedestrian network is divided into a combination of straight lines and intersection points; each straight line has two features, heading and length, which are applied as the feature-matching parameters for comparison with the real-time collected trajectories. In this case, two connected adjacent intersection points are marked as 1, and disconnected intersection points are marked as 0 in the generated network matrix:
$$\varpi = \begin{bmatrix} 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \end{bmatrix}$$
with rows and columns both indexed by the intersection points 1–9.
In the constructed network matrix, the dimension of the matrix equals the number of intersection points in the extracted indoor network. Between each pair of adjacent points, the detailed features contain the heading and length information of the pedestrian's straight forward walking period, which can be described as:
$$\varpi(i, j) = \left[ 1,\; \theta_{ij},\; \xi_{ij} \right]$$
where $\theta_{ij}$ and $\xi_{ij}$ indicate the calculated heading and overall length between two adjacent intersection points when the pedestrian walks straight forward from one point to the other.
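For illustration, the sketch below builds the network matrix of the 3 × 3 example in Figure 3; the per-edge headings (degrees) and lengths (metres) are hypothetical placeholders, not surveyed values.

```python
import numpy as np

N = 9
adj = np.zeros((N, N), dtype=int)
edges = {  # (i, j) 0-indexed: (theta_ij in degrees, xi_ij in metres) -- placeholders
    (0, 1): (90.0, 12.0), (1, 2): (90.0, 12.0),
    (3, 4): (90.0, 12.0), (4, 5): (90.0, 12.0),
    (6, 7): (90.0, 12.0), (7, 8): (90.0, 12.0),
    (0, 3): (0.0, 10.0), (1, 4): (0.0, 10.0), (2, 5): (0.0, 10.0),
    (3, 6): (0.0, 10.0), (4, 7): (0.0, 10.0), (5, 8): (0.0, 10.0),
}
attrs = {}
for (i, j), (theta, xi) in edges.items():
    adj[i, j] = adj[j, i] = 1                     # connectivity entries of the matrix
    attrs[(i, j)] = attrs[(j, i)] = (theta, xi)   # the [1, theta_ij, xi_ij] edge features
```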
In this work, the grid search approach is proposed for realizing trajectory matching and calibration between collected crowdsourced trajectories and the extracted indoor pedestrian network using the constructed network matrix information, which is described as follows:
(1)
Turning points detection: In the case of complex pedestrian handheld modes, the proposed hybrid deep-learning model is applied to classify the four different handheld modes and find the forward axis during the pedestrian’s walking procedure. The turning points are further extracted using a peak detection algorithm based on the smoothed angular velocity data, similar to the step detection approach [7];
(2)
Selecting the crowdsourced trajectories with more than three turning points for comparison purposes, which can effectively decrease the false matching rate;
(3)
To realize trajectory matching with the existing indoor network, the correlation coefficient index and the dynamic time warping (DTW) index are applied to find similar trajectories using the information of the detected turning points in each trajectory:
$$DTW(\beta_{\tau-1}, \beta_\tau) = Dist(p_j, s_k) + \min\left[ D(s_{j-1}, p_k),\; D(s_j, p_{k-1}),\; D(s_{j-1}, p_{k-1}) \right]$$
where $DTW(\beta_{\tau-1}, \beta_\tau)$ presents the cumulative distance between two turning-point distributions, and $Dist(p_j, s_k)$ indicates the Euclidean distance between two points of the distributions (a minimal sketch of this computation is given after this list).
$$\rho_{cor}(x, y) = \rho_{cor}\left(x_{\tau-1}, x_\tau\right) + \rho_{cor}\left(y_{\tau-1}, y_\tau\right) = \frac{\sum_{i=1}^{M}\left(x_{\tau-1}^{i} - \overline{x_{\tau-1}}\right)\left(x_{\tau}^{i} - \overline{x_{\tau}}\right)}{\sqrt{\sum_{i=1}^{M}\left(x_{\tau-1}^{i} - \overline{x_{\tau-1}}\right)^2} \sqrt{\sum_{i=1}^{M}\left(x_{\tau}^{i} - \overline{x_{\tau}}\right)^2}} + \frac{\sum_{i=1}^{M}\left(y_{\tau-1}^{i} - \overline{y_{\tau-1}}\right)\left(y_{\tau}^{i} - \overline{y_{\tau}}\right)}{\sqrt{\sum_{i=1}^{M}\left(y_{\tau-1}^{i} - \overline{y_{\tau-1}}\right)^2} \sqrt{\sum_{i=1}^{M}\left(y_{\tau}^{i} - \overline{y_{\tau}}\right)^2}}$$
where $\rho_{cor}(x_{\tau-1}, x_\tau)$ and $\rho_{cor}(y_{\tau-1}, y_\tau)$ indicate the correlation coefficients on the x- and y-axis, respectively.
(4)
Crowdsourced trajectory calibration: After the map-matching phase, the matched turning points on the existing indoor network are applied as the absolute reference for the smoothing procedure of the trajectory segments:
$$\hat{x}_{k-1|k} = \hat{x}_{k-1} + P_{k-1} \phi_k^T \left(P_k^-\right)^{-1} \left(\hat{x}_k - \hat{x}_k^-\right)$$
$$P_{k-1|k} = P_{k-1} - \left(P_{k-1} \phi_k^T \left(P_k^-\right)^{-1}\right) \left(P_k^- - P_k\right) \left(P_{k-1} \phi_k^T \left(P_k^-\right)^{-1}\right)^T$$
where $\hat{x}_k$ and $P_k$ represent the state estimate and its covariance, with the state vector defined in Equation (8), and the superscript "$-$" denotes the predicted (a priori) quantity.
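As referenced in step (3), a minimal sketch of the DTW index between two turning-point sequences (each an array of (x, y) rows) could look as follows; windowing and normalization are omitted.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Cumulative DTW distance between two turning-point sequences."""
    n, m = len(traj_a), len(traj_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for j in range(1, n + 1):
        for k in range(1, m + 1):
            dist = np.linalg.norm(traj_a[j - 1] - traj_b[k - 1])  # Euclidean Dist term
            D[j, k] = dist + min(D[j - 1, k], D[j, k - 1], D[j - 1, k - 1])
    return D[n, m]
```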

4.3. Error Ellipse Assisted Particle Filter for Multi-Source Integration

In this work, the error ellipse-assisted particle filter (EE-PF) is applied to integrate all of the results provided by the crowdsourced Wi-Fi fingerprinting, built-in sensors, indoor network observations, and map constraints to realize meter-level indoor positioning performance.
The state model is the same as Equation (7), and the observation model of crowdsourced Wi-Fi RSSI fingerprinting can be described as:
$$\delta z_p^n = p_{rssi}^n - p_{MEMS}^n, \qquad \delta z_v^n = v_{rssi}^n - v_{MEMS}^n$$
where $p_{rssi}^n$ and $v_{rssi}^n$ represent the Wi-Fi RSSI fingerprinting-based position and speed results, and $p_{MEMS}^n$ and $v_{MEMS}^n$ represent the MEMS sensor-based navigation results.
In addition, the indoor network and map constraints are also applied for localization performance improvement. The first step is to search for the two nearest adjacent reference turning points in the extracted indoor pedestrian network according to the current location provided by the integration of crowdsourced Wi-Fi fingerprinting and built-in sensors. The modeled indoor network segment is described as:
$$ax + by + c = 0$$
The nearest observation point on the indoor network is then calculated from the current location $(x_1, y_1)$ by projecting it onto this segment (a minimal sketch of the projection is given below). The final observation model of the indoor network can thus be described as:
$$\delta z_p^n = p_{map}^n - p_{MEMS/WiFi}^n, \qquad \delta z_v^n = v_{map}^n - v_{MEMS/WiFi}^n$$
where $p_{map}^n$ and $v_{map}^n$ represent the position and speed of the matched point on the indoor network, and $p_{MEMS/WiFi}^n$ and $v_{MEMS/WiFi}^n$ represent the Wi-Fi and MEMS sensor-integrated localization results.
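A minimal sketch of this nearest-point computation (clamping to the segment endpoints is omitted):

```python
def nearest_point_on_segment(a, b, c, x1, y1):
    """Foot of the perpendicular from the current fix (x1, y1) onto ax + by + c = 0."""
    d = (a * x1 + b * y1 + c) / (a ** 2 + b ** 2)
    return x1 - a * d, y1 - b * d
```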
Finally, the indoor map information is applied to provide outlier constraints for the PF-based weight update phase. In this case, an error ellipse is built to constrain the particles during the overall PF update procedure, which is summarized as follows.
Following the definition of the confidence ellipse in the engineering field [36], the central point of the error ellipse is set as the current estimated location, and the major semi-axis of the ellipse is presented as:
$$a = s_e \sqrt{0.5\left(\sigma_N^2 + \sigma_E^2\right) + \sqrt{0.25\left(\sigma_E^2 - \sigma_N^2\right)^2 + \sigma_{NE}^2}}$$
The minor semi-axis can be described as:
$$b = s_e \sqrt{0.5\left(\sigma_N^2 + \sigma_E^2\right) - \sqrt{0.25\left(\sigma_E^2 - \sigma_N^2\right)^2 + \sigma_{NE}^2}}$$
The azimuth of the major semi-axis can be calculated by:
$$\theta = 0.5 \tan^{-1}\left(2\sigma_{NE} / \left(\sigma_E^2 - \sigma_N^2\right)\right)$$
The generated error ellipse is first applied for error constraint during the particle state update phase of the whole PF procedure: any newly generated particle that falls outside the current error ellipse is autonomously discarded. Second, the matched points on the existing indoor network and the matched reference points within the range of the error ellipse are further fused with the other location sources to realize meter-level indoor positioning accuracy in complex, multi-floor indoor environments. Thus, the updated weights of the particles are described as:
$$w_k^{(i)} = \begin{cases} \dfrac{p\left(Z_k \mid \hat{X}_k^{(i)}\right)\, p\left(\hat{X}_k^{(i)} \mid X_{k-1}^{(i)}\right)}{q\left(\hat{X}_k^{(i)} \mid X_{0:k-1}^{(i)},\, Z_{1:k}\right)}, & \text{case 1} \\ 0, & \text{case 2} \end{cases} \qquad \text{case 1: } \frac{x_k^2}{a^2} + \frac{y_k^2}{b^2} < 1, \quad \text{case 2: otherwise}$$
where case 1 indicates that the particle lies within the boundary of the error ellipse (expressed in the ellipse's own axes), and case 2 indicates that it lies outside. After this error ellipse-assisted map constraint check, the M eligible particles remain for the next PF update procedure.
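A minimal sketch of the error ellipse computation and the case 1/case 2 weight mask is given below; the confidence scale factor $s_e$ is not stated in the text, so the approximately 95% two-dimensional value is used here as an assumption.

```python
import numpy as np

def error_ellipse(P, se=2.447):
    """Semi-axes and azimuth of the error ellipse from a 2x2 position covariance.

    P = [[var_N, cov_NE], [cov_NE, var_E]]; se ~ 2.447 corresponds to a ~95%
    confidence region in 2D (an assumption; the paper does not state its value).
    """
    vN, vE, vNE = P[0, 0], P[1, 1], P[0, 1]
    root = np.sqrt(0.25 * (vE - vN) ** 2 + vNE ** 2)
    a = se * np.sqrt(0.5 * (vN + vE) + root)              # major semi-axis
    b = se * np.sqrt(max(0.5 * (vN + vE) - root, 1e-12))  # minor semi-axis (kept > 0)
    theta = 0.5 * np.arctan2(2 * vNE, vE - vN)            # azimuth of the major axis
    return a, b, theta

def ellipse_weight_mask(particles, center, P):
    """True for particles inside the ellipse (case 1); outside ones get weight 0 (case 2)."""
    a, b, theta = error_ellipse(P)
    d = particles - center                       # offsets from the ellipse centre
    c, s = np.cos(theta), np.sin(theta)
    x = c * d[:, 0] + s * d[:, 1]                # rotate offsets into the ellipse axes
    y = -s * d[:, 0] + c * d[:, 1]
    return (x / a) ** 2 + (y / b) ** 2 < 1.0
```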

5. Experimental Results of ML-CTEF

In this section, comprehensive experiments are designed to evaluate the performance of the proposed ML-CTEF. A multi-floor 3D indoor environment is selected as the experimental site, in which the proposed inertial odometry, Bi-LSTM-based floor detection, map matching and crowdsourced database generation, and error ellipse-assisted particle filter are each evaluated and compared with existing algorithms. For the model settings, Adam is applied as the optimizer because of its efficiency on large amounts of training data, with a learning rate of 0.002. For the deep-learning-based speed estimator module, the input dimension of the 1D-CNN is set to 10, the same as the dimension of the sensor data; the kernel size of the 1D-CNN is set to 5; and the dimension of the output hidden state of the LSTM unit is set to 30, with an input vector dimension of 11.

5.1. Performance Evaluation of Inertial Odometry

To evaluate the performance of the proposed inertial odometry, a long-term experiment is designed for accuracy estimation under different handheld modes. A comprehensive experimental site containing both indoor and outdoor scenes is selected for evaluation, as shown in Figure 4. The tester started at point A, passed points B through K, and returned to point A; this walking route was repeated continuously 10 times to test long-term performance.
The long-term performance of the inertial odometry is first compared with the single PDR mechanization [13] using the same walking route and smartphone data. Absolute control points are deployed at each turning point to calculate the positioning error, and the comparison of the estimated trajectories provided by the two algorithms is shown in Figure 5:
It can be found in Figure 5 that the proposed inertial odometry realizes much more stable and precise long-term localization performance compared to single-PDR mechanization. To further estimate the positioning accuracy of the two algorithms, ten users repeated the same walking route, and the estimated positioning errors of the two algorithms are described in Figure 6:
Figure 6 shows that the long-term error of the proposed inertial odometry is within 3.62 m at the 75th percentile, compared with 5.76 m for the single-PDR approach, owing to the contributions of the hybrid observations and constraints.
To further evaluate the performance of the proposed DLSE-assisted inertial odometry under different handheld modes, a state-of-the-art pose awareness estimator (PAE) [14] is adopted for comparison. We compared the average positioning errors under four different handheld modes using the same walking route as above, and the final estimated average positioning errors are compared in Figure 7:
It can be found from Figure 7 that the proposed DLSE-assisted inertial odometry realizes much higher positioning accuracy under four different handheld modes; the realized average positioning errors are within 2.91 m (reading mode), 3.56 m (calling mode), 5.79 m (swaying mode), and 3.89 m (pocket mode) under the long-term test route, compared with the PAE algorithm with 3.47 m (reading mode), 4.13 m (calling mode), 7.22 m (swaying mode), and 4.34 m (pocket mode).

5.2. Performance Evaluation of Floor Detection

In this paper, the initial floor information is missing from the collected crowdsourced raw trajectories. To mark the floor index for crowdsourced trajectories before the map matching phase, the Bi-LSTM network is adopted to integrate the information collected from local wireless signals and sensor-related features, enhancing floor detection performance in multi-floor indoor environments. To cover the required indoor scenes, a 2.5 h trajectory dataset collected from four different floors is used for training, and a 0.5 h trajectory dataset is applied for accuracy evaluation. The proposed Bi-LSTM-based floor detection algorithm is compared with a previous LSTM method [37] and the classical K-Nearest Neighbor (KNN) method [38], and the accuracy comparison is shown in Figure 8:
Figure 8 shows that the proposed Bi-LSTM-based floor detection algorithm achieves improved accuracy compared with the LSTM and KNN algorithms, reaching an average accuracy of 98.93% on the test dataset, compared with 97.28% for the LSTM-based approach and 93.7% for the KNN-based approach.

5.3. Performance Evaluation of Map Matching and Crowdsourced Database Generation

To evaluate the performance of map matching and crowdsourced database generation, a 3D indoor environment covering four different floors is selected, where the daily-life trajectories are collected, as shown in Figure 9. The crowdsourced smartphone data collected from the different floors are first modeled by the proposed multi-level observations and constraints-assisted inertial odometry for the initial estimation of the trajectories; these modeled trajectories have only relative location information and share the same origin point, as shown in Figure 10:
It can be found from Figure 10 that the raw modeled crowdsourced trajectories are irregular, and the true walking routes cannot be recovered directly due to the lack of an absolute reference. Thus, this work proposes the indoor network matching algorithm to provide an absolute turning-point reference for the optimization and calibration of the raw modeled crowdsourced trajectories, and the floor detection algorithm is applied to provide floor indexes for the raw trajectories. The final generated crowdsourced indoor network is shown in Figure 11:
It can be found from Figure 11 that the matched and calibrated crowdsourced trajectories effectively reconstruct the indoor pedestrian network, and sub-meter-level accuracy can be realized with the assistance of an indoor map. To further evaluate the performance of the proposed map-assisted crowdsourced trajectory matching and calibration (M-CTMC), the state-of-the-art Walkie-Markie [26] algorithm is compared with our proposed approach. As shown in Figure 12, the proposed M-CTMC algorithm realizes a much higher crowdsourced trajectory reconstruction accuracy, with an overall trajectory error within 0.51 m at the 75th percentile, compared with 1.2 m for the Walkie-Markie algorithm.

5.4. Performance Evaluation of Error Ellipse Enhanced Fusion Approach

To estimate the performance of the final ML-CTEF approach, a 3D indoor environment containing four adjacent floors was selected; the tester walked continuously from the sixth floor to the ninth floor, and the detailed walking route is described in Figure 9. The EE-PF-based fusion algorithm is proposed to intelligently integrate the different location sources, including the crowdsourced Wi-Fi fingerprinting, MEMS sensors, and indoor network information, in which the error ellipse is applied to constrain the gross error of the different location sources and to match the useful indoor network information to improve the integrated results of Wi-Fi fingerprinting and inertial odometry. The estimated trajectories of inertial odometry (IO), Wi-Fi fingerprinting/inertial odometry (W-IO) integration, and indoor map/Wi-Fi fingerprinting/inertial odometry (MW-IO) integration are compared in Figure 13:
It can be found from Figure 13 that the inertial odometry is still affected by cumulative errors even with multi-level constraints and observations. The integration of crowdsourced Wi-Fi fingerprinting and inertial odometry effectively enhances the performance of single IO, and the assistance of indoor map information further reduces the positioning error of W-IO, bringing the trajectory closer to the ground truth. The estimated positioning errors of the three different combinations of location sources are compared in Figure 14:
Figure 14 shows that the combination with indoor maps realizes meter-level indoor positioning accuracy, with a positioning error within 1.22 m at the 75th percentile, compared with 2.2 m for the combination of Wi-Fi fingerprinting and inertial odometry.
Finally, the proposed EE-PF-based multi-source fusion algorithm is compared with the state-of-the-art map-aided particle filter (MA-PF) approach [39], in which the same walking route and generated crowdsourced Wi-Fi fingerprinting database are applied to make the comparison fairer. The 3D trajectories comparison and positioning errors comparison between the two algorithms are described in Figure 15 and Table 1:
It can be found from Table 1 that the proposed EE-PF achieves better multi-source integration performance compared to the MA-PF approach because of the assistance of the error ellipse-based particle update strategy, and the realized average positioning error is within 1.01 m, improved by 21.7% compared to the MA-PF approach (average within 1.29 m).

6. Conclusions

To enhance localization ability in large-scale indoor areas, this paper proposes a robust map-assisted 3D indoor localization framework using crowd-sensing-based trajectory data and error ellipse-enhanced fusion (ML-CTEF). A novel deep-learning-based walking speed estimator is applied to enhance the performance of inertial odometry, combined with the assistance of multi-level constraints and observations. A Bi-LSTM network is applied for the robust detection of floor indexes, and the grid search approach is proposed for map matching and final crowdsourced database generation. Finally, the EE-PF is developed for the intelligent integration of different location sources and indoor map information to realize meter-level positioning accuracy. The average positioning accuracy of the proposed ML-CTEF reaches 1.01 m in comprehensive, large-scale 3D indoor environments under different handheld modes.

Author Contributions

This paper is a collaborative work by all of the authors. Y.Y. proposed the idea and implemented the system. Q.W. performed the experiments, analyzed the data, and wrote the manuscript. R.C. and L.C. aided in proposing the idea, gave suggestions, and revised the rough draft. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by The Hong Kong Polytechnic University (1-ZVN6, 4-BCF7); The State Bureau of Surveying and Mapping, P.R. China (1-ZVE8), and the Hong Kong Research Grants Council (T22-505/19-N).

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest to disclose.

References

1. Yu, Y.; Chen, R.; Chen, L.; Li, W.; Wu, Y.; Zhou, H. H-WPS: Hybrid wireless positioning system using an enhanced Wi-Fi FTM/RSSI/MEMS sensors integration approach. IEEE Internet Things J. 2022, 9, 11827–11842.
2. Luo, R.C.; Hsiao, T. Indoor localization system based on hybrid Wi-Fi/BLE and hierarchical topological fingerprinting approach. IEEE Trans. Veh. Technol. 2019, 68, 10791–10806.
3. Yin, Z.; Jiang, X.; Yang, Z.; Zhao, N.; Chen, Y. WUB-IP: A high-precision UWB positioning scheme for indoor multiuser applications. IEEE Syst. J. 2017, 13, 279–288.
4. Chen, R.; Li, Z.; Ye, F.; Guo, G.; Xu, S.; Qian, L.; Liu, Z.; Huang, L. Precise indoor positioning based on acoustic ranging in smartphone. IEEE Trans. Instrum. Meas. 2021, 70, 9509512.
5. Niu, X.; Liu, T.; Kuang, J.; Zhang, Q.; Guo, C. Pedestrian trajectory estimation based on foot-mounted inertial navigation system for multistory buildings in postprocessing mode. IEEE Internet Things J. 2021, 9, 6879–6892.
6. Zhang, Z.; He, S.; Shu, Y.; Shi, Z. A self-evolving WiFi-based indoor navigation system using smartphones. IEEE Trans. Mob. Comput. 2019, 19, 1760–1774.
7. Yu, Y.; Chen, R.; Shi, W.; Chen, L. Precise 3D indoor localization and trajectory optimization based on sparse Wi-Fi FTM anchors and built-in sensors. IEEE Trans. Veh. Technol. 2022, 71, 4042–4056.
8. Li, Z.; Zhao, X.; Hu, F.; Zhao, Z.; Villacrés, J.L.C.; Braun, T. SoiCP: A seamless outdoor–indoor crowdsensing positioning system. IEEE Internet Things J. 2019, 6, 8626–8644.
9. Yang, J.; Zhao, X.; Li, Z. Updating radio maps without pain: An enhanced transfer learning approach. IEEE Internet Things J. 2020, 8, 10693–10705.
10. Du, X.; Yang, K.; Zhou, D. MapSense: Mitigating inconsistent WiFi signals using signal patterns and pathway map for indoor positioning. IEEE Internet Things J. 2018, 5, 4652–4662.
11. Yan, H.; Shan, Q.; Furukawa, Y. RIDI: Robust IMU double integration. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 621–636.
12. Herath, S.; Yan, H.; Furukawa, Y. RoNIN: Robust neural inertial navigation in the wild: Benchmark, evaluations, & new methods. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3146–3152.
13. Klein, D.I.; Solaz, Y.; Ohayon, G. Pedestrian dead reckoning with smartphone mode recognition. IEEE Sens. J. 2018, 18, 7577–7584.
14. Guo, G.; Chen, R.; Ye, F.; Chen, L.; Pan, Y.; Liu, M.; Cao, Z. A pose awareness solution for estimating pedestrian walking speed. Remote Sens. 2018, 11, 55.
15. Li, Z.; Zhao, X.; Zhao, Z.; Braun, T. WiFi-RITA positioning: Enhanced crowdsourcing positioning based on massive noisy user traces. IEEE Trans. Wirel. Commun. 2021, 20, 3785–3799.
16. Zhang, P.; Chen, R.; Li, Y.; Niu, X.; Wang, L.; Li, M.; Pan, Y. A localization database establishment method based on crowdsourcing inertial sensor data and quality assessment criteria. IEEE Internet Things J. 2018, 5, 4764–4777.
17. Wu, Y.; Chen, P.; Gu, F.; Zheng, X.; Shang, J. HTrack: An efficient heading-aided map matching for indoor localization and tracking. IEEE Sens. J. 2019, 19, 3100–3110.
18. Xia, H.; Zuo, J.; Liu, S.; Qiao, Y. Indoor localization on smartphones using built-in sensors and map constraints. IEEE Trans. Instrum. Meas. 2018, 68, 1189–1198.
19. Li, Y.; He, Z.; Gao, Z.; Zhuang, Y.; Shi, C.; El-Sheimy, N. Toward robust crowdsourcing-based localization: A fingerprinting accuracy indicator enhanced wireless/magnetic/inertial integration approach. IEEE Internet Things J. 2018, 6, 3585–3600.
20. Zhao, F.; Luo, H.; Zhao, X.; Pang, Z.; Park, H. HYFI: Hybrid floor identification based on wireless fingerprinting and barometric pressure. IEEE Trans. Ind. Inform. 2015, 13, 330–341.
21. Shao, W.; Luo, H.; Zhao, F.; Tian, H.; Huang, J.; Crivello, A. Floor identification in large-scale environments with Wi-Fi autonomous block models. IEEE Trans. Ind. Inform. 2021, 18, 847–858.
22. Rai, A.; Chintalapudi, K.K.; Padmanabhan, V.N.; Sen, R. Zee: Zero-effort crowdsourcing for indoor localization. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22–26 August 2012; pp. 293–304.
23. Wang, H.; Sen, S.; Elgohary, A.; Farid, M.; Youssef, M.; Choudhury, R.R. No need to war-drive: Unsupervised indoor localization. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, Ambleside, UK, 25–29 June 2012; pp. 197–210.
24. Yang, Z.; Wu, C.; Liu, Y. Locating in fingerprint space: Wireless indoor localization with little human intervention. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22–26 August 2012; pp. 269–280.
25. Tong, X.; Liu, K.; Tian, X.; Fu, L.; Wang, X. FineLoc: A fine-grained self-calibrating wireless indoor localization system. IEEE Trans. Mob. Comput. 2018, 18, 2077–2090.
26. Shen, G.; Chen, Z.; Zhang, P.; Moscibroda, T.; Zhang, Y. Walkie-Markie: Indoor pathway mapping made easy. In Proceedings of the 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI 13), Lombard, IL, USA, 2–5 April 2013; pp. 85–98.
27. Luo, C.; Hong, H.; Chan, M.C. PiLoc: A self-calibrating participatory indoor localization system. In Proceedings of IPSN-14, the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, 15–17 April 2014; pp. 143–153.
28. Tang, H.; Zhang, T.; Niu, X.; Fan, J.; Liu, J. Impact of the earth rotation compensation on MEMS-IMU preintegration of factor graph optimization. IEEE Sens. J. 2022, 22, 17194–17204.
29. Khan, D.; Cheng, Z.; Uchiyama, H.; Ali, S.; Asshad, M.; Kiyokawa, K. Recent advances in vision-based indoor navigation: A systematic literature review. Comput. Graph. 2022, 104, 24–45.
30. Nishiguchi, K.; Bousselham, W.; Uchiyama, H.; Thomas, D.; Shimada, A.; Taniguchi, R.-I. Generating a consistent global map under intermittent mapping conditions for large-scale vision-based navigation. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; pp. 783–793.
31. Lee, G.; Kim, H. A hybrid marker-based indoor positioning system for pedestrian tracking in subway stations. Appl. Sci. 2020, 10, 7421.
32. Wang, C.; Xu, A.; Kuang, J.; Sui, X.; Hao, Y.; Niu, X. A high-accuracy indoor localization system and applications based on tightly coupled UWB/INS/floor map integration. IEEE Sens. J. 2021, 21, 18166–18177.
33. Wang, Q.; Luo, H.; Xiong, H.; Men, A.; Zhao, F.; Xia, M.; Ou, C. Pedestrian dead reckoning based on walking pattern recognition and online magnetic fingerprint trajectory calibration. IEEE Internet Things J. 2020, 8, 2011–2026.
34. Shahid, F.; Zameer, A.; Muneeb, M. Predictions for COVID-19 with deep learning models of LSTM, GRU and Bi-LSTM. Chaos Solitons Fractals 2020, 140, 110212.
35. Kuang, J.; Niu, X.; Chen, X. Robust pedestrian dead reckoning based on MEMS-IMU for smartphones. Sensors 2018, 18, 1391.
36. Li, Y.; Zhuang, Y.; Zhang, P.; Lan, H.; Niu, X.; El-Sheimy, N. An improved inertial/WiFi/magnetic fusion structure for indoor navigation. Inf. Fusion 2017, 34, 101–119.
37. Yu, Y.; Shi, W.; Chen, R.; Chen, L. AP Detector: Crowdsourcing-based approach for self-localization of Wi-Fi FTM stations. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 249–254.
38. Yu, Y.; Chen, R.; Chen, L.; Li, W.; Wu, Y.; Zhou, H. Autonomous 3D indoor localization based on crowdsourced Wi-Fi fingerprinting and MEMS sensors. IEEE Sens. J. 2021, 22, 5248–5259.
39. Du, X.; Liao, X.; Gao, Z.; Fan, Y. An enhanced particle filter algorithm with map information for indoor positioning system. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Big Island, HI, USA, 9–13 December 2019; pp. 1–6.
Figure 1. The Framework of Proposed ML-CTEF.
Figure 2. Deep-learning Framework of Speed Estimator.
Figure 3. Schematic Diagram of Indoor Network.
Figure 4. Test Walking Route of Inertial Odometry.
Figure 5. Estimated Trajectory of Inertial Odometry.
Figure 6. Error Comparison of Inertial Odometry and PDR.
Figure 7. Error Comparison Under Different Handheld Modes.
Figure 8. Precision Comparison of Floor Detection.
Figure 9. Multi-floor Contained Indoor Environment and Daily-life Data Collection. (a) Sixth Floor; (b) Seventh Floor; (c) Eighth Floor; (d) Ninth Floor; (e) Daily-life Data Collection.
Figure 10. Raw Trajectories by Inertial Odometry.
Figure 11. Reconstructed 3D Indoor Network.
Figure 12. Precision Comparison of M-CTMC and Walkie-Markie.
Figure 13. Trajectories Comparison of Different Models.
Figure 14. Positioning Errors Comparison of Different Models.
Figure 15. Trajectories Comparison of MA-PF and EE-PF.
Table 1. Positioning Errors Comparison of MA-PF and EE-PF.

Indexes   Maximum Error   75th Percentile   Average Error
MA-PF     2.58 m          1.62 m            1.29 m
EE-PF     1.85 m          1.22 m            1.01 m
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wan, Q.; Yu, Y.; Chen, R.; Chen, L. Map-Assisted 3D Indoor Localization Using Crowd-Sensing-Based Trajectory Data and Error Ellipse-Enhanced Fusion. Remote Sens. 2022, 14, 4636. https://doi.org/10.3390/rs14184636

