Article

A Positioning and Navigation Method Combining Multimotion Features Dead Reckoning with Acoustic Localization

1 Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
3 Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
4 Department of Science and Engineering, Guilin University, Guilin 541006, China
5 National & Local Joint Engineering Research Center of Satellite Navigation Localization and Location Service, Guilin 541004, China
6 GUET-Nanning E-Tech Research Institute Co., Ltd., Nanning 530031, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(24), 9849; https://doi.org/10.3390/s23249849
Submission received: 4 October 2023 / Revised: 23 November 2023 / Accepted: 13 December 2023 / Published: 15 December 2023
(This article belongs to the Section Navigation and Positioning)

Abstract:
Accurate location information can offer huge commercial and social value and has become a key research topic. Acoustic-based positioning achieves high accuracy, although anomalies that degrade positioning performance can arise. Inertia-assisted positioning has excellent autonomous characteristics, but its localization errors accumulate over time. To address these issues, we propose a novel positioning and navigation system that integrates acoustic estimation and dead reckoning with a novel step-length model. First, features including the acceleration peak-to-valley amplitude difference, walking frequency, acceleration variance, mean acceleration, peak median, and valley median are extracted from the collected motion data. The previous three steps and the maximum and minimum acceleration measurements at the current step are used to predict the step length. Then, a LASSO regularization spatial constraint over the extracted features optimizes and solves for an accurate step length. The acoustic estimate is determined by a hybrid CHAN–Taylor algorithm. Finally, the location is determined using an extended Kalman filter (EKF) that merges the improved pedestrian dead reckoning (PDR) estimate and the acoustic estimate. We conducted comparative experiments in two different scenarios using two heterogeneous devices. The experimental results show that the proposed fusion positioning and navigation method achieves 8~56.28 cm localization accuracy. The proposed method can significantly mitigate the cumulative error of PDR and maintain highly robust localization under different experimental conditions.

1. Introduction

Location-based services (LBS) are involved in every aspect of people's lives, such as obtaining location information about products in a shopping mall, searching for a vehicle in an underground parking lot, and monitoring the location of a patient in a hospital. At present, the global navigation satellite system (GNSS) can meet all LBS requirements in outdoor environments across all weather conditions and times [1,2]. However, in indoor environments, many interferences and obstacles prevent satellite signals from reaching the receiver, so GNSS cannot support indoor location estimation [3]. Moreover, related studies have revealed that people currently spend more than 80% of their time indoors. Overall, the study of LBS for indoor navigation is of great research significance [4].
There are many common indoor localization methods, such as vision-, Bluetooth-, Wi-Fi-, and acoustic-based positioning. Vision-based positioning technology has good adaptability, but it does not protect user privacy well [5]. Bluetooth positioning is low-cost and simple to implement, but it is only suitable for positioning over a small range [6]. Wi-Fi positioning has a large coverage area and wide deployment capacity, but it is susceptible to environmental interference [7]. Currently, mobile smartphones are equipped with speakers and microphones, so they can send and receive acoustic signals without any additional infrastructure [8]. In addition, acoustic positioning has good security and does not leak personal privacy information. Furthermore, acoustic signal positioning has the advantages of high accuracy and good compatibility [9,10]. However, acoustic signals are susceptible to environmental disturbances. These include inherent noise, the thermal noise of electronic components, acoustic signal reflection, interference, diffraction, and pedestrian movement and conversation in the indoor environment.
Acoustic-based localization and navigation have become hot topics in current LBS research. Lopes, S.I., et al. [8] designed a passive time difference of arrival (TDOA) positioning system that is compatible with smartphones, which yielded a protocol for synchronizing acoustic beacons. Murakami, H., et al. [11] described a method for three-dimensional positioning using a smartphone with only an external speaker. Zhou, R.H., et al. [12] proposed a hybrid CHAN–Taylor algorithm. In this method, CHAN localization estimation is used as the initial value for the iteration of the Taylor algorithm. The Taylor iteration is interrupted when the error is below a preset threshold. The simulation results demonstrated that the hybrid CHAN–Taylor algorithm has better localization accuracy, convergence speed, and self-adaptability than the CHAN algorithm. Wang, X., et al. [13] used the combined CHAN–Taylor algorithm to effectively suppress non-line-of-sight (NLOS) errors for target localization in 3D indoor scenes. Yang, H.B., et al. [14] employed the hybrid CHAN–Taylor algorithm for underwater localization with a high accuracy of time-delay estimation. It was verified that the hybrid CHAN–Taylor algorithm can suppress the error of the CHAN algorithm.
Inertial measurement unit (IMU) navigation estimation includes the inertial navigation system (INS) and pedestrian dead reckoning (PDR) [15]. The INS estimates the current position using the angular velocity observed by gyroscope sensors and the specific force observed by accelerometer sensors. The INS is not limited by the application scenario and is, in principle, an ideal navigation method. However, with the low-performance micro-electromechanical system (MEMS) devices used in consumer hardware, the INS cannot provide reliable navigation results. PDR can provide long-term, stable relative positioning results by accurately detecting steps and estimating the step length and walking direction from the movement characteristics of a walking pedestrian [16,17,18]. It is simple to implement. Compared with the INS, PDR demands less accuracy from the sensors and enables better localization at limited cost. The method has therefore been widely used in the field of pedestrian navigation.
Many methods have been proposed to reduce the cumulative error of PDR. Yotsuya, K., et al. [19] presented an improvement to trajectory accuracy using a large amount of pedestrian trajectory data. Guo, S.L., et al. [20] presented a gait-detection method based on dual-frequency Butterworth filtering and a linear combination of multiple features: the step frequency, the amplitude of acceleration, the mean of acceleration, and the variance of acceleration. Im, C., et al. [21] presented a multi-modal PDR system based on recurrent neural networks with a long short-term memory (LSTM) algorithm to extract latent features from sensor data. Yao, Y.B., et al. [22] proposed a method of identifying the step length based on features extracted at each step, with a step-length error of approximately 3%. Zhang, M., et al. [23] used adaptive step-length estimation based on time windows and dynamic thresholds. Vathsangam, H., et al. [24] used Gaussian process-based regression (GPR) to estimate walking speed and compared the performance of Bayesian linear regression (BLR) and least squares regression. Zihajehzadeh, S., et al. [25] applied a linear model to estimate walking speed. Yan et al. [26] proposed an improved PDR method, which adds the previous three steps to predict the current step value. The experimental results indicate that this method can obtain a more accurate step estimate.
Scholars have made notable achievements in acoustic-based localization research. However, reflections and diffraction from walls and noise in indoor environments can degrade the accuracy of acoustic-based localization, and outliers can even occur. PDR not only has low computational complexity but can also output accurate and reliable location information over short periods without relying on any external infrastructure [27]. Nevertheless, cumulative errors in PDR grow over time, which has an extremely detrimental effect on the localization results [28,29].
To address the problems mentioned above, we propose a positioning and navigation system. To effectively mitigate the cumulative error of PDR, a novel step-length model with constraint LASSO regression [30,31] is proposed. This improved step-length model considers more relevant information to predict the current value than the state-of-the-art methods. The EKF is adopted to determine the target location by integrating acoustic-based localization with improved PDR. The main contributions of this paper are summarized below:
  • A novel weighted step-length model: To improve the accuracy of step-length estimation, we propose a novel weighted step-length model with a constrained LASSO regression. First, the coarse current step length is predicted by combining the previous three steps, inspired by the Weinberg model. Then, LASSO regression is used to correct the step estimate by combining the acceleration peak-to-valley amplitude difference, the walking frequency, the variance of acceleration, the mean acceleration, the peak median, and the valley median. The experimental results demonstrate that the proposed step-length model outperforms state-of-the-art methods.
  • A fusion positioning and navigation framework: An EKF-based fusion positioning and navigation framework is presented. In this framework, the hybrid CHAN–Taylor method estimates the location in the acoustic-based positioning, and the improved PDR adopts the weighted LASSO-based step-length model. The improved PDR serves as the state model and the acoustic estimate as the measurement model. The experiments show that the proposed framework achieves better localization performance than existing methods for different users, devices, and scenarios, and is highly robust.
The rest of this paper is structured as follows. Section 2 provides related works about the current research. Section 3 introduces the positioning system methodology. The experimental results are depicted in Section 4. Finally, Section 5 summarizes the research in this paper.

2. Related Works

Fusion positioning technology has become a research hotspot in the field of indoor positioning. Song, X.Y., et al. [32] presented a method to validate the plausibility of PDR results using acoustic constraints between the acoustic source and the image source. Wang, M., et al. [33] proposed a method that combines Hamming distance-based acoustic estimation with PDR. Yan proposed a CHAN–IPDR–ILS method in reference [34], which combines the CHAN algorithm and the PDR algorithm. Al Mamun et al. [35] presented a lightweight fusion technique combining the PDR algorithm with the RSSI fingerprinting method; to decrease the cumulative error, landmarks are adopted to aid localization, and the experiment showed that the median positioning error can reach 0.73 m. Poulose, A., et al. [36] proposed a fusion framework based on Wi-Fi and the PDR algorithm; the average localization accuracy of the combined position estimation algorithm was improved by 1.6 m compared with the separate algorithms. Lee, G.T., et al. [37] proposed a Kalman filter (KF)-based fusion algorithm for UWB localization and UWB-assisted PDR (U-PDR), which outperformed UWB localization and the PDR algorithm alone in the experimental results. Wu, J., et al. [38] proposed a text map-based indoor localization method that integrated RFID and the PDR method in a narrow corridor.
The EKF is a recursive algorithm that can be used for nonlinear systems and has a wide range of applications in the fields of navigation, positioning, and information fusion. Tian, X., et al. [39] used a two-step EKF iterative process to perform a state estimation of all the anchors in indoor environments. Yang, C.Y., et al. [40] constructed a 5G/geomagnetic/visual inertial odometry (VIO) positioning system based on an error-state EKF. Liu, W., et al. [41] proposed an autonomous navigation method combining EKF and a rapid exploration random tree (RRT) for four-wheel-steering vehicles to improve the accuracy of autonomous vehicle navigation in indoor environments. Mendoza, L.R., et al. [42] proposed a wearable ultrawideband indoor positioning system based on periodic EKF. Pak, J.M., et al. [43] proposed a switched extended Kalman filter bank (SEKFB) algorithm to overcome the problem of unstable noise covariance generated by isokinetic motion models for indoor localization.
Inspired by the existing positioning algorithms, we propose an indoor positioning method based on EKF fusion integrated with improved PDR and acoustic-based positioning. Specifically, in acoustic-based localization, a hybrid CHAN–Taylor algorithm is utilized to obtain the localization position. In PDR estimation, we propose a weighted fusion step improvement model based on LASSO. The step length estimation is obtained by the previous three steps and the Weinberg model. LASSO is used to modify the predicted step estimation, which makes the prediction value optimally close to the real value.

3. Methodology

In this section, we describe the EKF-based fusion localization architecture integrated into the acoustic-based and improved PDR positioning estimation. An overview of the proposed method is introduced in Section 3.1. The acoustic-based positioning method is described in Section 3.2. Step-count detection is presented in Section 3.3. The improved step model based on LASSO is proposed in Section 3.4. Section 3.5 depicts the heading direction calculation, and Section 3.6 analyzes the fusion method based on the EKF.

3.1. Overview

The methodological framework of the proposed positioning and navigation system is presented in Figure 1. The framework is divided into four parts: data collection, acoustic-based estimation, PDR-based estimation, and EKF-based fusion positioning.
In data collection, acceleration, gyroscope, magnetometer, and ultrasonic signals are sampled and saved in *.txt format on a smartphone. The collected data are intermittently uploaded to the server terminal. After preprocessing the collected data, the acoustic-based estimate is solved by the hybrid CHAN–Taylor algorithm. Then, the peaks and valleys of the acceleration are detected and the step frequency is determined. The coarse step-length estimate is obtained from the previous three steps and the maximum and minimum acceleration values at the current step; we then combine a LASSO regularization spatial constraint with the acceleration peak-to-valley amplitude difference, walking frequency, acceleration variance, mean acceleration, peak median, and valley median to achieve a fine step-length estimate. The heading direction is obtained by the quaternion method. In the target location estimation, outliers in the acoustic-based estimates are detected. Then, the EKF is used to fuse the target localization: the dead reckoning estimate is taken as the state vector, and the acoustic-based estimate is taken as the observation vector. Finally, the target location is obtained by the EKF.

3.2. Acoustic-Based Estimation

Linear frequency modulation signals increase the transmission bandwidth of the signal through the carrier frequency and undergo pulse compression during reception. Additionally, linear frequency modulation signals have high resolution, can distinguish interference and targets at a distance, and can greatly simplify the signal-processing system. A chirp is a typical nonstationary signal with wide applications in sonar, radar, and other fields. In this paper, we use a chirp signal to transmit the acoustic signal. To validate the characteristics of the acoustic signals, we collected the acoustic signal using a Vivo X30 (Guangdong, China) smartphone. Collected signals are filtered and preprocessed through a finite impulse response (FIR) bandpass filter, which largely removes interference such as inherent indoor noise and noise from electronic components. In the final filtering stage, the adaptive minimum mean square error method is used to fuse the nonlinear approximation linearization, which further alleviates the impact of noise. Figure 2 shows the strength fluctuation of the acoustic signal after filtering. From the figure, the acoustic signal is stable at 8–14 kHz and 17.5–19.5 kHz. Considering the interference of speech signals on positioning, pseudo-ultrasound ranging from 17.5 to 19.5 kHz is selected as the acoustic localization source because the human ear is not sensitive to it. The acoustic location estimate is then solved using a cross-correlation function. These data come from the same sending and receiving devices each time, so device heterogeneity has little effect on acoustic localization performance.
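As an illustration of the chirp-based ranging described above, the sketch below generates a linear chirp in the 17.5–19.5 kHz band and recovers its time of flight from the cross-correlation peak. The sampling rate, chirp duration, noise level, and propagation model are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000              # assumed smartphone sampling rate (Hz)
F0, F1 = 17_500, 19_500  # pseudo-ultrasonic band used for localization
DUR = 0.05               # assumed chirp duration (s)
C_SOUND = 343.0          # speed of sound (m/s)

t = np.arange(int(FS * DUR)) / FS
probe = chirp(t, f0=F0, f1=F1, t1=DUR, method="linear")

def estimate_range(received, fs=FS):
    """Estimate the anchor-to-phone range from the cross-correlation peak."""
    corr = correlate(received, probe, mode="full")
    delay = (np.argmax(np.abs(corr)) - (len(probe) - 1)) / fs
    return delay * C_SOUND

# Synthetic check: embed the chirp 0.1 s (4800 samples) into a noisy recording.
np.random.seed(0)
rx = np.concatenate([np.zeros(4800), probe])
rx = rx + 0.05 * np.random.randn(len(rx))
print(estimate_range(rx))  # ≈ 34.3 m, i.e. a 0.1 s time of flight
```

The matched-filter peak is sharp because the chirp's bandwidth gives it a narrow autocorrelation, which is why pulse compression tolerates the indoor noise sources listed above.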
The CHAN algorithm is a non-iterative method with an analytic solution. The advantages of this algorithm are a high localization accuracy and low computation, but the localization accuracy is easily affected by complex indoor obstacles. The Taylor algorithm is a recursive algorithm that requires an initial position estimate. This algorithm solves the local least squares solution of the measurement error value at each recursion, continuously updating the estimate. The Taylor algorithm is robust and suitable for complex environments, but it is too dependent on initial values. This hybrid algorithm combines the advantages of the CHAN algorithm’s low computation and the Taylor algorithm’s good robustness. Therefore, for acoustic-based estimation, we chose the CHAN–Taylor hybrid algorithm for this paper.
The spatial geometric distribution of the three anchors and the target location is shown in Figure 3. Assuming the target location $M$ is $(x, y)$, the three anchors $A_i$ are located at $(x_i, y_i)$, $i = 1, 2, 3$.
The distance between the target $M$ and the anchor $A_i$ is
$\sqrt{(x_i - x)^2 + (y_i - y)^2} = d_i \qquad (1)$
where $d_i$ denotes the distance from the $i$-th anchor to the target $M$.
Expanding Equation (1), we can obtain
$u_i^2 + x^2 + y^2 - 2 x_i x - 2 y_i y = d_i^2 \qquad (2)$
where $u_i^2 = x_i^2 + y_i^2$.
Using anchor $A_1$ as the reference anchor, the difference $d_{i,1}$ between the $i$-th anchor and anchor $A_1$ can be derived:
$d_{i,1} = d_i - d_1 = \sqrt{(x_i - x)^2 + (y_i - y)^2} - \sqrt{(x_1 - x)^2 + (y_1 - y)^2} \qquad (3)$
where $d_1$ denotes the distance from the first anchor to the target $M$.
Then,
$u_i^2 + x^2 + y^2 - 2 x_i x - 2 y_i y = d_{i,1}^2 + 2 d_{i,1} d_1 + d_1^2 \qquad (4)$
Letting $x_{i,1} = x_i - x_1$ and $y_{i,1} = y_i - y_1$,
$\frac{1}{2}\left[ d_{i,1}^2 - u_i^2 + u_1^2 \right] = -x_{i,1} x - y_{i,1} y - d_{i,1} d_1 \qquad (5)$
Suppose
$H_{chan} = \frac{1}{2}\begin{bmatrix} d_{2,1}^2 - u_2^2 + u_1^2 \\ d_{3,1}^2 - u_3^2 + u_1^2 \\ \vdots \end{bmatrix}, \quad G_{chan} = -\begin{bmatrix} x_{2,1} & y_{2,1} & d_{2,1} \\ x_{3,1} & y_{3,1} & d_{3,1} \\ \vdots & \vdots & \vdots \end{bmatrix}, \quad Z_{chan} = \begin{bmatrix} x \\ y \\ d_1 \end{bmatrix}$
Equation (5) can then be expressed in matrix form as
$H_{chan} = G_{chan} Z_{chan} \qquad (6)$
Considering the measurement error, the error vector is depicted as
$e_1 = H_{chan} - G_{chan} Z_{chan}^o \qquad (7)$
where $Z_{chan}^o$ is the value without Gaussian noise.
Its covariance matrix is
$\sigma = E\left[ e_1 e_1^T \right] = c^2 B Q B \qquad (8)$
where $B = \mathrm{diag}\{d_2, d_3, d_4, \ldots, d_N\}$, $c$ is the signal propagation speed, and $Q$ is the covariance matrix of the measurement errors.
The weighted least squares estimate of $Z_{chan}$ can be derived:
$Z_{chan} = \left( G_{chan}^T \sigma^{-1} G_{chan} \right)^{-1} G_{chan}^T \sigma^{-1} H_{chan} \qquad (9)$
After obtaining the first estimate, the weighted least squares method is applied again to refine it. The error vector of the second estimation can be expressed as
$e_2 = H'_{chan} - G'_{chan} Z'_{chan} \qquad (10)$
with the constraint
$Z_{chan} = \begin{bmatrix} Z_1 \\ Z_2 \\ Z_3 \end{bmatrix} = \begin{bmatrix} x^o + \varepsilon_1 \\ y^o + \varepsilon_2 \\ d_1^0 + \varepsilon_3 \end{bmatrix} \qquad (11)$
where $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are the estimation errors, and
$Z'_{chan} = \begin{bmatrix} (x - x_1)^2 \\ (y - y_1)^2 \end{bmatrix}, \quad H'_{chan} = \begin{bmatrix} (Z_1 - x_1)^2 \\ (Z_2 - y_1)^2 \\ (Z_3)^2 \end{bmatrix}, \quad G'_{chan} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix}$
Then, the $Z'_{chan}$ estimate is
$Z'_{chan} = \left( (G'_{chan})^T \sigma'^{-1} G'_{chan} \right)^{-1} (G'_{chan})^T \sigma'^{-1} H'_{chan} \qquad (12)$
where
$\sigma' = 4 B' \, \mathrm{Cov}(Z_{chan}) \, B' \qquad (13)$
$B' = \mathrm{diag}\{ x^o - x_1, \; y^o - y_1, \; d_1^0 \} \qquad (14)$
The location of the target is
$\begin{bmatrix} x_{chan} \\ y_{chan} \end{bmatrix} = \pm\sqrt{Z'_{chan}} + \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} \qquad (15)$
Then, this estimate is used as the initial solution of the Taylor iteration. Specifically, the function $f(x_i, y_i, x, y)$ is assumed to represent the constraint relationship between the anchors and the target position. $f(x_i, y_i, x, y)$ is expanded in a Taylor series at $(x_{chan}, y_{chan})$, ignoring second-order and higher terms, to obtain the following equation:
$f(x_i, y_i, x, y) = f(x_i, y_i, x_{chan}, y_{chan}) + (x - x_{chan}) \frac{\partial f}{\partial x}(x_i, y_i, x_{chan}, y_{chan}) + (y - y_{chan}) \frac{\partial f}{\partial y}(x_i, y_i, x_{chan}, y_{chan}) \qquad (16)$
By defining $x_{ct} = x - x_{chan}$ and $y_{ct} = y - y_{chan}$, the following can be obtained:
$f(x_i, y_i, x, y) - f(x_i, y_i, x_{chan}, y_{chan}) = x_{ct} \frac{\partial f}{\partial x}(x_i, y_i, x_{chan}, y_{chan}) + y_{ct} \frac{\partial f}{\partial y}(x_i, y_i, x_{chan}, y_{chan}) \qquad (17)$
According to Equation (3), $f(x_i, y_i, x_{chan}, y_{chan})$ can be represented as
$f(x_i, y_i, x_{chan}, y_{chan}) = \sqrt{(x_i - x_{chan})^2 + (y_i - y_{chan})^2} - \sqrt{(x_1 - x_{chan})^2 + (y_1 - y_{chan})^2} = d_i^{chan} - d_1^{chan} \qquad (18)$
where $d_i^{chan}$ is the distance between the coordinate $(x_{chan}, y_{chan})$ and the anchor $A_i$.
Converting Equation (17) into matrix form gives
$\varphi = H_{ct} - G_{ct} \sigma_{ct} \qquad (19)$
where $\varphi$ is the error vector, $H_{ct}$ is the difference between the measured and predicted range differences, and $\sigma_{ct}$ is the position correction to be estimated:
$H_{ct} = \begin{bmatrix} d_{2,1} - (d_2^{chan} - d_1^{chan}) \\ d_{3,1} - (d_3^{chan} - d_1^{chan}) \\ \vdots \\ d_{N,1} - (d_N^{chan} - d_1^{chan}) \end{bmatrix} \qquad (20)$
$G_{ct} = \begin{bmatrix} \frac{x_1 - x_{chan}}{d_1^{chan}} - \frac{x_2 - x_{chan}}{d_2^{chan}} & \frac{y_1 - y_{chan}}{d_1^{chan}} - \frac{y_2 - y_{chan}}{d_2^{chan}} \\ \frac{x_1 - x_{chan}}{d_1^{chan}} - \frac{x_3 - x_{chan}}{d_3^{chan}} & \frac{y_1 - y_{chan}}{d_1^{chan}} - \frac{y_3 - y_{chan}}{d_3^{chan}} \\ \vdots & \vdots \\ \frac{x_1 - x_{chan}}{d_1^{chan}} - \frac{x_N - x_{chan}}{d_N^{chan}} & \frac{y_1 - y_{chan}}{d_1^{chan}} - \frac{y_N - y_{chan}}{d_N^{chan}} \end{bmatrix} \qquad (21)$
$\sigma_{ct} = \begin{bmatrix} x_{ct} \\ y_{ct} \end{bmatrix} \qquad (22)$
The weighted least squares solution is computed as
$\sigma_{ct} = \left( G_{ct}^T Q^{-1} G_{ct} \right)^{-1} G_{ct}^T Q^{-1} H_{ct} \qquad (23)$
In the next recursive operation, the iterative computation is performed after updating the coordinate values of the target estimate:
$x_{chan}^{update} = x_{ct} + x_{chan}, \quad y_{chan}^{update} = y_{ct} + y_{chan} \qquad (24)$
$x_{ct}^{update} = x_{chan}^{update} - x_{chan}, \quad y_{ct}^{update} = y_{chan}^{update} - y_{chan} \qquad (25)$
where $(x_{chan}^{update}, y_{chan}^{update})$ is the updated estimate calculated at each iteration; $x_{ct}$ and $y_{ct}$ are likewise continually updated. This process repeats until the error satisfies the stopping condition:
$|x_{ct}^{update}| + |y_{ct}^{update}| < \eta \qquad (26)$
where $\eta$ is the error threshold.
Finally, the localization of the target $M$ is determined as
$x_{chan,t} = x_{ct}^{update} + x_{chan}, \quad y_{chan,t} = y_{ct}^{update} + y_{chan} \qquad (27)$
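The Taylor refinement loop of Equations (19)–(26) can be sketched as follows. The CHAN stage is assumed to have produced the seed (x0, y0); the anchor layout, threshold, and unweighted least squares step (i.e., Q taken as the identity) are illustrative simplifications.

```python
import numpy as np

def taylor_refine(anchors, d_meas, x0, y0, eta=1e-4, max_iter=20):
    """Taylor-series refinement of a TDOA fix, seeded by the CHAN estimate.

    anchors : (N, 2) anchor coordinates; anchor 0 is the reference A_1
    d_meas  : (N-1,) measured range differences d_{i,1}, i = 2..N
    (x0,y0) : initial estimate, e.g. from the CHAN algorithm
    """
    x, y = x0, y0
    for _ in range(max_iter):
        d = np.hypot(anchors[:, 0] - x, anchors[:, 1] - y)  # distances d_i^chan
        # Residuals: measured minus predicted range differences (Eq. 20).
        h = d_meas - (d[1:] - d[0])
        # Jacobian rows, the partial derivatives in Eq. 21.
        g = np.column_stack([
            (x - anchors[1:, 0]) / d[1:] - (x - anchors[0, 0]) / d[0],
            (y - anchors[1:, 1]) / d[1:] - (y - anchors[0, 1]) / d[0],
        ])
        # Least squares correction (Eq. 23 with Q = I), then update (Eq. 24).
        dx, dy = np.linalg.lstsq(g, h, rcond=None)[0]
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < eta:  # stopping rule, Eq. 26
            break
    return x, y
```

Seeding with the CHAN solution matters: the loop is a local Gauss–Newton iteration, so a poor starting point can diverge, which is exactly the weakness of the Taylor algorithm that the hybrid scheme addresses.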

3.3. Step Count Detection

During data collection, the collected data always include noise. If the raw data are used, inaccurate step counting, pseudo-peaks and pseudo-valleys, or missed detections will occur in peak and valley detection. Therefore, noise cancellation is required for the collected data.
Sliding-window filtering, low-pass filtering, median filtering, and Hampel filtering are common methods. To compare their performance, we recruited one volunteer to sample acceleration data along the experimental path at a stable speed. Figure 4 shows the acceleration results after filtering. The experiments demonstrate that sliding-window filtering retains the best smoothness for the collected acceleration data among the four methods.
Therefore, we adopt sliding-window filtering to preprocess the original data. The width of the window size is chosen as 10 samples. In Figure 5, the original acceleration data are denoted by the blue dashed line, and the acceleration data after filtering are denoted by the red solid line. Compared with the original data, the filtered acceleration values have less fluctuation, which is favorable for step detection.
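A moving-average implementation of the sliding-window filter with the 10-sample window mentioned above; this is a minimal sketch, since the paper does not specify how the window is handled at the signal edges (zero-padding is assumed here).

```python
import numpy as np

def sliding_window_filter(acc, width=10):
    """Moving-average smoothing of the acceleration sequence.

    width=10 matches the window size chosen in the paper; 'same'-mode
    convolution (zero-padded at the edges) keeps the output aligned
    with the input samples.
    """
    kernel = np.ones(width) / width
    return np.convolve(acc, kernel, mode="same")
```

Away from the edges each output sample is the mean of the surrounding `width` samples, which suppresses the high-frequency jitter that would otherwise produce pseudo-peaks in step detection.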
The peaks and valleys of the acceleration values are used to determine the step count. This mainly includes the following steps:
(1) Setting the acceleration threshold. Different pedestrians have different motion patterns, so the acceleration threshold is set according to the motion pattern. When the acceleration value exceeds the preset threshold, it is determined to be a candidate peak or candidate valley.
(2) Setting the recognition sequence. Acceleration exhibits a distinct regularity of successive peak–valley pairs. When a peak is recognized in the acceleration data, a valley is expected in the next interval of data.
(3) Setting the time interval threshold. The current candidate peak or candidate valley is valid only if the time interval between two neighboring peaks or valleys exceeds the preset time interval threshold.
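The three rules above can be combined into a simple detector. The thresholds below are illustrative placeholders, and the input is assumed to be the filtered acceleration magnitude with gravity removed, so peaks are positive and valleys negative.

```python
import numpy as np

def count_steps(acc, fs, acc_thresh=1.0, min_interval=0.3):
    """Peak-valley step counting implementing the three rules.

    acc          : filtered, gravity-removed acceleration magnitude (m/s^2)
    fs           : sampling rate (Hz)
    acc_thresh   : amplitude threshold (rule 1); illustrative value
    min_interval : minimum time between neighboring peaks/valleys (rule 3), s
    """
    min_gap = int(min_interval * fs)
    peaks, valleys = [], []
    expect_peak = True  # rule 2: peaks and valleys must alternate
    for i in range(1, len(acc) - 1):
        if expect_peak and acc[i] > acc_thresh \
                and acc[i] >= acc[i - 1] and acc[i] >= acc[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:  # rule 3
                peaks.append(i)
                expect_peak = False
        elif not expect_peak and acc[i] < -acc_thresh \
                and acc[i] <= acc[i - 1] and acc[i] <= acc[i + 1]:
            if not valleys or i - valleys[-1] >= min_gap:
                valleys.append(i)
                expect_peak = True
    return len(peaks)
```

The alternation flag is what rejects pseudo-peaks: a second local maximum arriving before any valley is simply ignored rather than counted as a step.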
To validate the above step detection method, a volunteer holding a Vivo X30 phone collected acceleration data on a 42 m experimental path. Figure 6 shows the maximum and minimum pedestrian accelerations for each step on this path; the maxima are marked with red stars and the minima with gray stars. The step counts are accurately detected because the above rules effectively reject pseudo-peaks and pseudo-valleys.

3.4. Step Length Prediction

Step-length prediction plays an important role in PDR localization. Step-length models are either linear or nonlinear. A linear model considers only the relationship between step length and step frequency, which is not very accurate. A nonlinear model, which describes the correlation between step size and motion parameters more accurately, is therefore often used. The Scarlet model [44], Kim model [45], and Weinberg model [46] are typical nonlinear models, all built on the relationship between the peaks and valleys of pedestrian acceleration and the step length. However, pedestrian step length is related not only to the peak-to-valley amplitude difference of acceleration but also to multiple other latent characteristics. Therefore, better performance can be achieved when multiple characteristics are used to estimate the step length. However, data overfitting and increased model complexity occur if too many characteristics are used.
Considering the continuity of adjacent steps and inspired by reference [26], the current step length is estimated by a weighted fusion of the previous three step lengths. In addition, to avoid overfitting, a regularization term constraining multiple characteristics is adopted to modify the step length. LASSO regression and ridge regression are commonly used regression methods with regularization terms. Ridge regression incorporates an L2 regularization term, while LASSO regression incorporates an L1 regularization term and, compared with the former, additionally performs variable selection [47]. LASSO can thus not only prevent data overfitting but also reduce model complexity. Therefore, LASSO regression is chosen to handle the feature variables related to step length in this paper.
To address the above problems, we propose a novel step-length model; the coarse predicted value of the current step length is obtained using the weighted previous three steps based on the Weinberg model. The coarse step length S L i at time i can be obtained by the previous three steps and the acceleration maximum and minimum.
The coarse predicted step length $SL_i$ is described below:
$SL_i = k_1 SL_{i-1} + k_2 SL_{i-2} + k_3 SL_{i-3} + k_4 K \sqrt[4]{a_i^{max} - a_i^{min}} \qquad (28)$
with the LASSO constraint:
$\min \left( \frac{1}{2} \sum_{i=1}^{M} \left( SL_i - \sum_{j=1}^{N} ACCF_j^i \beta_j \right)^2 + \lambda \sum_{j=1}^{N} |\beta_j| \right) \qquad (29)$
where $SL_{i-1}$, $SL_{i-2}$, and $SL_{i-3}$ are the lengths of the previous three steps; $k_1$, $k_2$, $k_3$, and $k_4$ are the weight factors; $K$ is an empirical constant; and $a_i^{max}$, $a_i^{min}$ are the maximum and minimum pedestrian accelerations for step $i$. $M$ denotes the number of steps and $N$ the number of features. $ACCF$ holds the six acceleration features, $\beta = [\beta_1, \ldots, \beta_N]$ denotes the regression coefficients, and $\lambda$ is the penalty coefficient, chosen by 10-fold cross-validation.
First, we obtain the coarse step length $SL_i$ from Equation (28); $SL_i$ is used as the dependent variable of the model. The peak-to-valley amplitude difference, walking frequency, acceleration variance, acceleration mean, peak median, and valley median are extracted from the collected acceleration data, and these six motion features are used as the independent variables $ACCF$ of the model. Then, we find the optimal value of Equation (29).
$Loss = \frac{1}{2} \sum_{i=1}^{M} \left( SL_i - \sum_{j=1}^{N} ACCF_j^i \beta_j \right)^2 + \lambda \sum_{j=1}^{N} |\beta_j| \qquad (30)$
Equation (30) restates the loss function of Equation (29) to be minimized. The first part is the squared loss, and the second part is the L1 regularization term; $\lambda$ adjusts the size of the regression coefficients $\beta_j$.
Expanding Equation (29), we can obtain the following:
$Loss = \frac{1}{2} \sum_{i=1}^{M} \left( SL_i^2 - 2 SL_i \sum_{j=1}^{N} ACCF_j^i \beta_j + \left( \sum_{j=1}^{N} ACCF_j^i \beta_j \right)^2 \right) + \lambda \sum_{j=1}^{N} |\beta_j| \qquad (31)$
where $ACCF_j^i$ denotes the $i$-th sample value of the $j$-th feature variable.
To minimize the loss function, we take its first derivative. The derivative of the regularization term is expressed in Equation (32) as follows:
$\frac{\partial \left( \lambda \sum_{j=1}^{N} |\beta_j| \right)}{\partial \beta_j} = \begin{cases} \lambda, & \beta_j > 0 \\ 0, & \beta_j = 0 \\ -\lambda, & \beta_j < 0 \end{cases} = \lambda \, \mathrm{sign}(\beta_j) \qquad (32)$
Then, the first derivative of the loss function is obtained:
$\frac{\partial Loss}{\partial \beta_j} = \frac{1}{2} \sum_{i=1}^{M} \left( -2 SL_i ACCF_j^i + 2 ACCF_j^i \sum_{j=1}^{N} ACCF_j^i \beta_j \right) + \lambda \, \mathrm{sign}(\beta_j) = \sum_{i=1}^{M} ACCF_j^i \left( -SL_i + \sum_{j=1}^{N} ACCF_j^i \beta_j \right) + \lambda \, \mathrm{sign}(\beta_j) \qquad (33)$
Taking the derivative with respect to a single coefficient $\beta_w$ while the other coefficients are held fixed gives
$\frac{\partial Loss}{\partial \beta_w} = \sum_{i=1}^{M} ACCF_w^i \left( -SL_i + \sum_{j \neq w}^{N} ACCF_j^i \beta_j + ACCF_w^i \beta_w \right) + \lambda \, \mathrm{sign}(\beta_w) = \sum_{i=1}^{M} ACCF_w^i \left( -SL_i + \sum_{j \neq w}^{N} ACCF_j^i \beta_j \right) + \sum_{i=1}^{M} (ACCF_w^i)^2 \beta_w + \lambda \, \mathrm{sign}(\beta_w) \qquad (34)$
Assuming that
$A_j = \sum_{i=1}^{M} ACCF_w^i \left( -SL_i + \sum_{j \neq w}^{N} ACCF_j^i \beta_j \right) \qquad (35)$
$B_j = \sum_{i=1}^{M} (ACCF_w^i)^2 \qquad (36)$
Equation (34) can be simplified as follows:
$A_j + B_j \beta_w + \lambda \, \mathrm{sign}(\beta_w) = 0 \qquad (37)$
Then, $\beta_w$ is
$\beta_w = \begin{cases} \dfrac{-A_j - \lambda}{B_j}, & \beta_w > 0 \\ 0, & \beta_w = 0 \\ \dfrac{-A_j + \lambda}{B_j}, & \beta_w < 0 \end{cases} \qquad (38)$
Finally, all regression coefficients are calculated, and the final estimate of the step length is obtained:
$SL = ACCF \cdot \beta + C \qquad (39)$
where $C$ denotes the vector of constants corresponding to the regression coefficients.
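The coarse model of Equation (28) and the coordinate-descent solution of Equations (31)–(38) can be sketched as below. The weights `k`, constant `K`, and penalty `lam` are illustrative placeholders, not the trained values from the paper.

```python
import numpy as np

def coarse_step_length(prev3, a_max, a_min, k=(0.4, 0.3, 0.2, 1.0), K=0.5):
    """Coarse step length, Eq. (28): weighted previous three steps plus a
    Weinberg-style fourth-root term. prev3 = [SL_{i-3}, SL_{i-2}, SL_{i-1}]."""
    k1, k2, k3, k4 = k
    return (k1 * prev3[-1] + k2 * prev3[-2] + k3 * prev3[-3]
            + k4 * K * (a_max - a_min) ** 0.25)

def lasso_cd(ACCF, SL, lam, n_iter=200):
    """LASSO by coordinate descent with soft-thresholding (Eqs. 31-38).

    ACCF : (M, N) matrix of the motion features per step
    SL   : (M,) coarse step lengths (the dependent variable)
    lam  : L1 penalty coefficient lambda
    """
    M, N = ACCF.shape
    beta = np.zeros(N)
    for _ in range(n_iter):
        for w in range(N):
            # Residual with feature w left out (the -SL_i + sum_{j!=w} term).
            resid = SL - ACCF @ beta + ACCF[:, w] * beta[w]
            rho = ACCF[:, w] @ resid        # corresponds to -A_j in Eq. 35
            z = (ACCF[:, w] ** 2).sum()     # B_j in Eq. 36
            # Soft-threshold update solving Eq. 37 for beta_w (Eq. 38).
            beta[w] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta
```

The soft-threshold update is what drives weakly informative feature coefficients exactly to zero, which is the variable-selection property that motivated choosing LASSO over ridge regression.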
To validate the LASSO-based weighted fusion step model, a volunteer holding a Vivo X30 smartphone collected acceleration data along a 42 m experimental path. Figure 7 shows the step errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models. From the results, the proposed step model has the smallest average step-length error. Therefore, the proposed step improvement method is effective and yields a more accurate step length.

3.5. Heading Direction Calculation

Heading direction estimation is also an important factor in PDR and determines the direction of the entire track deflection [48]. The angular velocity $\omega_{ib}^b$ measured by the gyroscope, the angular velocity $\omega_{ie}^b$ of the earth frame relative to the inertial frame, the angular velocity $\omega_{en}^b$ of the navigation frame relative to the earth frame, and the angular velocity $\omega_{nb}^b$ of the body frame relative to the navigation frame satisfy the following relation:
$$\omega_{ib}^b = \omega_{ie}^b + \omega_{en}^b + \omega_{nb}^b$$
$$\omega_{ie}^b = C_n^b\,\omega_{ie}^n = C_n^b C_e^n\,\omega_{ie}^e,\qquad \omega_{en}^b = C_n^b\,\omega_{en}^n$$
where $C_e^n$ is the transfer matrix from the earth frame to the navigation frame, $\omega_{ie}^e$ is the angular velocity of the earth frame, and $\omega_{en}^n$ is the angular velocity of the navigation frame relative to the earth frame.
The attitude angular velocity equation can be expressed in matrix form as
$$\omega_{nb}^b = \begin{bmatrix} \omega_{nbx}^b \\ \omega_{nby}^b \\ \omega_{nbz}^b \end{bmatrix} = \begin{bmatrix} \omega_{ibx}^b \\ \omega_{iby}^b \\ \omega_{ibz}^b \end{bmatrix} - C_n^b \begin{bmatrix} C_{13}\,\omega_{ie}^e + \omega_{enx}^n \\ C_{23}\,\omega_{ie}^e + \omega_{eny}^n \\ C_{33}\,\omega_{ie}^e + \omega_{enz}^n \end{bmatrix}$$
where $C_{13}$, $C_{23}$, and $C_{33}$ are entries of the transfer matrix from the earth frame to the navigation frame, and $C_n^b$ is the transfer matrix from the navigation frame to the body frame.
From Equation (42), we can obtain the angular velocity $\omega_{nb}^b$; the quaternion $Q$ is then propagated through the differential equation below:
$$\dot{Q} = \frac{1}{2}\,Q\otimes\omega_{nb}^b$$
where $Q = q_0 + q_1 i + q_2 j + q_3 k$; $q_0, q_1, q_2, q_3$ are real numbers, and $i, j, k$ are mutually orthogonal unit vectors. A quaternion with $\|Q\| = 1$ is called a normalized quaternion.
Expanding Equation (43) into matrix form gives
$$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 0 & -\omega_{nbx}^b & -\omega_{nby}^b & -\omega_{nbz}^b \\ \omega_{nbx}^b & 0 & \omega_{nbz}^b & -\omega_{nby}^b \\ \omega_{nby}^b & -\omega_{nbz}^b & 0 & \omega_{nbx}^b \\ \omega_{nbz}^b & \omega_{nby}^b & -\omega_{nbx}^b & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$$
Once the vector $(q_0, q_1, q_2, q_3)$ is determined, the attitude matrix can be depicted as follows:
$$C_b^n = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$
To simplify Equation (45), $C_b^n$ can be expressed as:
$$C_b^n = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}$$
The attitude directions are
$$\theta_{pitch\_b} = \arcsin C_{32}$$
$$\gamma_{roll\_b} = \begin{cases} \arctan\dfrac{C_{31}}{C_{33}}, & C_{33} > 0 \\ \arctan\dfrac{C_{31}}{C_{33}} + \pi, & C_{33} < 0,\ C_{31} < 0 \\ \arctan\dfrac{C_{31}}{C_{33}} - \pi, & C_{33} < 0,\ C_{31} > 0 \end{cases}$$
$$\psi_{head\_b} = \begin{cases} \arctan\dfrac{C_{12}}{C_{22}}, & C_{22} > 0,\ C_{12} > 0 \\ \arctan\dfrac{C_{12}}{C_{22}} + 2\pi, & C_{22} > 0,\ C_{12} < 0 \\ \arctan\dfrac{C_{12}}{C_{22}} + \pi, & C_{22} < 0 \end{cases}$$
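The quaternion propagation of Equation (44) and the heading extraction from the attitude-matrix entries can be sketched as follows. This is a simplified illustration, not the authors' code: the first-order Euler integration step is an assumption, and `arctan2` replaces the explicit quadrant case analysis of the heading equation:

```python
import numpy as np

def quat_update(q, omega_b, dt):
    """One first-order integration step of q_dot = 0.5 * Omega(omega) * q."""
    wx, wy, wz = omega_b
    # Skew matrix from the quaternion kinematic equation
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [ wx, 0.0,  wz, -wy],
                      [ wy, -wz, 0.0,  wx],
                      [ wz,  wy, -wx, 0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)  # keep ||Q|| = 1 (normalized quaternion)

def heading_from_quat(q):
    """Heading angle from the C12 and C22 entries of the attitude matrix."""
    q0, q1, q2, q3 = q
    C12 = 2.0 * (q1 * q2 - q0 * q3)
    C22 = q0**2 - q1**2 + q2**2 - q3**2
    return np.arctan2(C12, C22)  # arctan2 resolves the quadrant automatically
```

Renormalizing after each step keeps the quaternion on the unit sphere despite the truncation error of the Euler integration.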

3.6. EKF-Based Fusion Positioning

In fusion positioning, the acoustic-based estimation is set as the initial location of the target. To avoid outliers, we set a threshold $D_{th}$ to detect anomalies in the estimation. At time $i-1$, the acoustic-based estimation is $Loc_{i-1}^{chant}(x_{i-1}^{chant}, y_{i-1}^{chant})$, and the estimation of the proposed dead reckoning method is $Loc_{i-1}^{p}(x_{i-1}^{p}, y_{i-1}^{p})$.
Case 1: If the distance between the acoustic-based estimation and the dead reckoning estimation is greater than the preset threshold $D_{th}$, the acoustic-based estimation is discarded as an outlier. Then, the estimation at time $i-1$ is used for localization, where $(x_{i-1}, y_{i-1}) = Loc_{i-1}$.
Case 2: When the distance between the acoustic-based estimation and the dead reckoning estimation is less than the preset threshold $D_{th}$, $(x_i, y_i)$ is determined by EKF-based fusion positioning.
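The two gating cases can be sketched as follows (a hypothetical helper: the function name, the threshold value `D_TH`, and the use of the dead-reckoning estimate as the reference point are assumptions for illustration):

```python
import math

D_TH = 1.0  # gating threshold in metres (assumed value; tuned per scenario)

def gate_acoustic(loc_prev, loc_acoustic, loc_pdr, d_th=D_TH):
    """Return the location passed on for fusion at the current epoch.

    Case 1: the acoustic fix is too far from the dead-reckoning estimate,
            so it is discarded and the previous fused location is reused.
    Case 2: the acoustic fix is accepted and handed to the EKF fusion step.
    """
    if math.dist(loc_acoustic, loc_pdr) > d_th:
        return loc_prev       # Case 1: outlier
    return loc_acoustic       # Case 2: accept
```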
In our localization scheme, the PDR estimation is set as the state variable and the acoustic estimation is set as the observation variable. The state and observation vectors are expressed as follows:
$$X = \left[ x_{pdr}, y_{pdr}, SL, \psi_{target} \right]^T$$
$$Z = \left[ x_{chant}, y_{chant}, SL, \psi_{target} \right]^T$$
where $SL$ is the pedestrian step length and $\psi_{target}$ is the heading direction of the target. $(x_{pdr}, y_{pdr})$ is the PDR estimation, and $(x_{chant}, y_{chant})$ is the acoustic-based estimation.
In fusion localization, the observation equation and state equation of the EKF algorithm are described as follows:
$$X_i = F_{i-1}(X_{i-1}) + \omega_i,\qquad Z_i = \Psi_i(X_i) + \nu_i$$
where $i \in N = \{0, 1, 2, \ldots\}$, and $X_i \in R^4$ is the pedestrian target position to be estimated, which is the state vector of the Kalman filter. $Z_i \in R^4$ is the measurement vector, representing the acoustic estimate. $\omega_i \in R^4$ is the process noise. $\nu_i$ is the measurement noise, which satisfies a Gaussian distribution. $F_{i-1}(X_{i-1})$ and $\Psi_i(X_i)$ are the nonlinear state and observation functions, respectively.
The state vectors $X_i$, measurement vectors $Z_i$, and noise signals $\omega_i$, $\nu_i$ satisfy the following statistical properties:
$$\mathrm{E}[\omega_i] = 0,\qquad \mathrm{E}[\nu_i] = 0$$
$$\mathrm{E}\left[\omega_i\,\omega_n^T\right] = Q_i\,\delta_{i,n},\quad i, n \in N$$
$$\mathrm{E}\left[\nu_i\,\nu_n^T\right] = R_i\,\delta_{i,n},\quad i, n \in N$$
$$\mathrm{E}\left[\omega_i\,\nu_n^T\right] = 0,\quad i, n \in N$$
$$\mathrm{E}\left[X_i\,\nu_n^T\right] = \mathrm{E}\left[X_i\,\omega_n^T\right] = \mathrm{E}\left[Z_i\,\nu_n^T\right] = \mathrm{E}\left[Z_i\,\omega_n^T\right] = 0,\quad i, n \in N$$
where Q i and R i are
$$Q = \begin{bmatrix} \delta_x^2 & 0 & 0 & 0 \\ 0 & \delta_y^2 & 0 & 0 \\ 0 & 0 & \delta_{SL}^2 & 0 \\ 0 & 0 & 0 & \delta_{\psi}^2 \end{bmatrix},\qquad R = \begin{bmatrix} \delta_x^2 & 0 & 0 & 0 \\ 0 & \delta_y^2 & 0 & 0 \\ 0 & 0 & \delta_{SL}^2 & 0 \\ 0 & 0 & 0 & \delta_{\psi}^2 \end{bmatrix}$$
where $\delta_x^2$, $\delta_y^2$ are the position error variances of PDR positioning in $Q$ and of acoustic positioning in $R$, and $\delta_{SL}^2$, $\delta_{\psi}^2$ are the error variances of the step length and heading angle of the PDR, respectively.
To estimate accurate pedestrian target location information, the nonlinear functions need to be linearized. The local linearizations $\hat{F}_{i-1}$ and $\hat{\Psi}_i$ of the nonlinear functions $F_{i-1}$ and $\Psi_i$ are expressed as follows:
$$\hat{F}_{i-1} = \left.\frac{\partial F_{i-1}(X_{i-1})}{\partial X_{i-1}}\right|_{X_{i-1} = \hat{X}_{i-1|i-1}},\qquad \hat{\Psi}_i = \left.\frac{\partial \Psi_i(X_i)}{\partial X_i}\right|_{X_i = \hat{X}_{i|i-1}}$$
where
$$X_{i-1} = \left[ X_{i-1}(1), X_{i-1}(2), \ldots, X_{i-1}(N) \right]$$
The linearization of Equation (50) is described as follows:
$$X_i = \hat{F}_{i-1}\,X_{i-1} + \omega_i,\qquad Z_i = \hat{\Psi}_i\,X_i + \nu_i$$
Equation (60) can then be used to achieve fused localization using Kalman filtering. Thus, the fusion localization objective in this paper becomes the design of a suitable optimized filter for the system.
Design the Kalman recursive filter in the following form:
$$\hat{X}_{i|i-1} = \hat{F}_{i-1}\,\hat{X}_{i-1},\qquad \hat{X}_i = \hat{X}_{i|i-1} + K_i\left( Z_i - \hat{\Psi}_i\,\hat{X}_{i|i-1} \right)$$
where $K_i$ is the filtering gain at moment $i$, $\hat{X}_i$ is the state estimate at moment $i$ with initial value $\hat{X}_0 = X(0)$, and $\hat{X}_{i|i-1}$ is the one-step state vector prediction at moment $i$.
In fusion localization, calculating the gain of the Kalman filter often requires calculating the inverse of a high-dimensional matrix, which increases the computational complexity. Therefore, it is necessary to consider suboptimal filters. To facilitate the analytical derivation of the suboptimization problem, the following two theorems are introduced.
Theorem 1. For matrices A and B of appropriate dimensions, the following matrix-trace derivative identities hold:
$$\frac{\partial\,\mathrm{tr}(AB)}{\partial A} = B^T,\qquad \frac{\partial\,\mathrm{tr}(ABA^T)}{\partial A} = 2AB$$
Theorem 2. 
The filter in Equation (61) is an unbiased estimator; that is, for all  $i \in N = \{0, 1, 2, \ldots\}$,  $\mathrm{E}\{\hat{X}_i\} = \mathrm{E}\{X_i\} = 0$.
Proof. Combining Equations (60) and (61), the estimated value of the state vector $\hat{X}_i$ at time $i$ is
$$\hat{X}_i = \hat{X}_{i|i-1} + K_i\left( Z_i - \hat{\Psi}_i\,\hat{X}_{i|i-1} \right) = \hat{X}_{i|i-1} + K_i\left( \hat{\Psi}_i X_i + \nu_i - \hat{\Psi}_i\,\hat{X}_{i|i-1} \right) = \left( I - K_i\hat{\Psi}_i \right)\hat{X}_{i|i-1} + K_i\hat{\Psi}_i X_i + K_i\nu_i$$
Then the expectation of the state vector X ^ i is expressed as
$$\mathrm{E}\big[\hat{X}_i\big] = \mathrm{E}\left[ \left( I - K_i\hat{\Psi}_i \right)\hat{X}_{i|i-1} + K_i\hat{\Psi}_i X_i + K_i\nu_i \right] = \left( I - K_i\hat{\Psi}_i \right)\mathrm{E}\big(\hat{X}_{i|i-1}\big) + K_i\hat{\Psi}_i\,\mathrm{E}(X_i) + K_i\,\mathrm{E}(\nu_i)$$
In the fusion localization process, the mean value at time $i = 0$ is used as the estimated mean value: $\hat{X}(0) = \mathrm{E}\{X(0)\}$, $\mathrm{E}\{X(0)\} = 0$, and hence $\mathrm{E}\big(\hat{X}_{i|i-1}\big) = 0$. According to Equation (51), $\mathrm{E}[\nu_i] = 0$.
Thus, for all $i \in N = \{0, 1, 2, \ldots\}$, $\mathrm{E}\{\hat{X}_i\} = 0$, which proves that the filter in Equation (61) is an unbiased estimator. □
For the fusion localization in this paper, we need to solve the recursive filtering suboptimization problem.
According to Equation (61), the estimation error is
$$e_i = X_i - \hat{X}_i = X_i - \hat{X}_{i|i-1} - K_i\left( Z_i - \hat{\Psi}_i\,\hat{X}_{i|i-1} \right) = X_i - \hat{X}_{i|i-1} - K_i\left( \hat{\Psi}_i X_i + \nu_i - \hat{\Psi}_i\,\hat{X}_{i|i-1} \right) = \left( I - K_i\hat{\Psi}_i \right)\left( X_i - \hat{X}_{i|i-1} \right) - K_i\nu_i$$
The mean square error of the estimation is
$$P_i = \mathrm{E}\left( e_i e_i^T \right) = \mathrm{E}\left( \left[ \left( I - K_i\hat{\Psi}_i \right)\hat{e}_{i|i-1} - K_i\nu_i \right]\left[ \left( I - K_i\hat{\Psi}_i \right)\hat{e}_{i|i-1} - K_i\nu_i \right]^T \right) = \mathrm{E}\left( \left( I - K_i\hat{\Psi}_i \right)\hat{e}_{i|i-1}\hat{e}_{i|i-1}^T\left( I - \hat{\Psi}_i^T K_i^T \right) - \left( I - K_i\hat{\Psi}_i \right)\hat{e}_{i|i-1}\nu_i^T K_i^T - K_i\nu_i\hat{e}_{i|i-1}^T\left( I - \hat{\Psi}_i^T K_i^T \right) + K_i\nu_i\nu_i^T K_i^T \right)$$
where $\hat{e}_{i|i-1} = X_i - \hat{X}_{i|i-1}$ is the one-step prediction error.
The measurement noise $\nu_i$ is uncorrelated with the one-step prediction error $\hat{e}_{i|i-1}$, resulting in
$$\mathrm{E}\left( \hat{e}_{i|i-1}\,\nu_i^T \right) = \mathrm{E}\{\hat{e}_{i|i-1}\}\,\mathrm{E}\{\nu_i^T\} = 0$$
$$\mathrm{E}\left( \nu_i\,\hat{e}_{i|i-1}^T \right) = \mathrm{E}(\nu_i)\,\mathrm{E}\left( \hat{e}_{i|i-1}^T \right) = 0$$
Equation (66) can be expressed as
$$P_i = \mathrm{E}\left[ \left( I - K_i\hat{\Psi}_i \right)\hat{e}_{i|i-1}\hat{e}_{i|i-1}^T\left( I - \hat{\Psi}_i^T K_i^T \right) + K_i\nu_i\nu_i^T K_i^T \right] = \left( I - K_i\hat{\Psi}_i \right)P_{i|i-1}\left( I - \hat{\Psi}_i^T K_i^T \right) + K_i R_i K_i^T = P_{i|i-1} - P_{i|i-1}\hat{\Psi}_i^T K_i^T - K_i\hat{\Psi}_i P_{i|i-1} + K_i\hat{\Psi}_i P_{i|i-1}\hat{\Psi}_i^T K_i^T + K_i R_i K_i^T = \left( I - K_i\hat{\Psi}_i \right)P_{i|i-1}\left( I - K_i\hat{\Psi}_i \right)^T + K_i R_i K_i^T$$
Thus, the suboptimal problem for Equation (61) becomes minimizing the mean square error in Equation (69), which is equivalent to taking the derivative of the matrix trace of Equation (69).
According to Theorem 1, the derivative of the matrix trace of Equation (69) is
$$\frac{\partial\,\mathrm{tr}(P_i)}{\partial K_i} = -2\left( \hat{\Psi}_i P_{i|i-1} \right)^T + 2K_i\hat{\Psi}_i P_{i|i-1}\hat{\Psi}_i^T + 2K_i R_i$$
Setting this derivative to zero to minimize $\mathrm{tr}(P_i)$ gives
$$K_i\left( \hat{\Psi}_i P_{i|i-1}\hat{\Psi}_i^T + R_i \right) = P_{i|i-1}\hat{\Psi}_i^T$$
The filter gain is
$$K_i = P_{i|i-1}\hat{\Psi}_i^T\left( \hat{\Psi}_i P_{i|i-1}\hat{\Psi}_i^T + R_i \right)^{-1}$$
Lemma 1. 
$X_i$  is the position of the target to be estimated, which is the state vector of the extended Kalman filter.  $\hat{X}_{i|i-1}$  is the one-step predicted value of the target, and  $\omega_i$  is the process noise obeying a Gaussian distribution.  $\hat{F}_{i-1}$  is the approximate linear state function. The mean square error of the one-step prediction satisfies a linear relation with the mean square error of the previous moment.
Proof. According to Equation (60), the mean square error of the one-step prediction estimate is
$$P_{i|i-1} = \mathrm{E}\left\{ \hat{X}_{i|i-1}\hat{X}_{i|i-1}^T \right\} = \mathrm{E}\left\{ \left( \hat{F}_{i-1}\hat{X}_{i-1} + \omega_i \right)\left( \hat{F}_{i-1}\hat{X}_{i-1} + \omega_i \right)^T \right\} = \mathrm{E}\left( \hat{F}_{i-1}\hat{X}_{i-1}\hat{X}_{i-1}^T\hat{F}_{i-1}^T \right) + \mathrm{E}\left( \hat{F}_{i-1}\hat{X}_{i-1}\omega_i^T \right) + \mathrm{E}\left( \omega_i\hat{X}_{i-1}^T\hat{F}_{i-1}^T \right) + \mathrm{E}\left( \omega_i\omega_i^T \right)$$
According to Equation (65), we obtain
$$\mathrm{E}\left( \hat{F}_{i-1}\hat{X}_{i-1}\hat{X}_{i-1}^T\hat{F}_{i-1}^T \right) = \hat{F}_{i-1}\,\mathrm{E}\left( \hat{X}_{i-1}\hat{X}_{i-1}^T \right)\hat{F}_{i-1}^T = \hat{F}_{i-1} P_{i-1}\hat{F}_{i-1}^T$$
Compute the second and third terms of Equation (73), respectively.
$$\mathrm{E}\left( \hat{F}_{i-1}\hat{X}_{i-1}\omega_i^T \right) = \hat{F}_{i-1}\,\mathrm{E}\left( \hat{X}_{i-1}\omega_i^T \right) = \hat{F}_{i-1}\,\mathrm{E}\left( \hat{X}_{i-1} \right)\mathrm{E}\left( \omega_i^T \right)$$
$$\mathrm{E}\left( \omega_i\hat{X}_{i-1}^T\hat{F}_{i-1}^T \right) = \mathrm{E}\left( \omega_i \right)\mathrm{E}\left( \hat{X}_{i-1}^T \right)\hat{F}_{i-1}^T$$
Based on Equations (52) and (54) of the previous fusion localization model, the following can be obtained:
$$\mathrm{E}\left( \omega_i\hat{X}_{i-1}^T\hat{F}_{i-1}^T \right) = \mathrm{E}\left( \hat{F}_{i-1}\hat{X}_{i-1}\omega_i^T \right) = 0$$
$$\mathrm{E}\left( \omega_i\omega_i^T \right) = Q_i$$
The one-step prediction mean square error is
$$P_{i|i-1} = \hat{F}_{i-1} P_{i-1}\hat{F}_{i-1}^T + Q_i$$
Thus, the mean square error one-step prediction value is proved. □
Substituting Equation (79) into Equation (72), the filter gain $K_i$ at moment $i$ can be derived based on the minimum mean square error. The minimum mean square error under suboptimal filtering is then obtained by substituting the resulting $K_i$ into Equation (69). Thus, the suboptimal estimation problem of fusion localization is solved.
The filter gain design in Equation (72) does not require the inversion of a very high-dimensional matrix. A fusion localization scheme is established based on Theorems 1 and 2 and Lemma 1. In this paper, the focus is on the transient characteristics, where the filtered mean square error is obtained at each sampling instant $i$. An appropriate gain is designed to make the fusion localization suboptimal.
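The resulting suboptimal filter, consisting of the one-step prediction of Equation (79), the gain of Equation (72), and the Joseph-form covariance update of Equation (69), can be sketched as a generic EKF cycle. This is an illustrative sketch, not the authors' code; the Jacobians and noise matrices are supplied by the caller:

```python
import numpy as np

def ekf_fuse(x_prev, P_prev, z, F, H, Q, R):
    """One EKF cycle for the 4-state fusion vector [x, y, SL, psi].

    F and H play the roles of the linearized Jacobians F_hat and Psi_hat;
    z is the acoustic observation vector.
    """
    # One-step prediction: x = F x, P = F P F^T + Q
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    # Filter gain: K = P H^T (H P H^T + R)^-1
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Measurement update with the Joseph-form covariance (numerically robust)
    x_new = x_pred + K @ (z - H @ x_pred)
    IKH = np.eye(len(x_prev)) - K @ H
    P_new = IKH @ P_pred @ IKH.T + K @ R @ K.T
    return x_new, P_new
```

In practice, Q and R would be the diagonal matrices defined above, and the outlier gate of Case 1/Case 2 decides whether an acoustic fix z enters this update at all.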

4. Results

In this section, the experimental setup is described in Section 4.1. The localization accuracy of the LASSO-based weighted fusion step improvement model is then analyzed in Section 4.2. In Section 4.3, the CDF positioning performance of the EKF-based PDR combined with acoustic estimation is reported. Finally, the mean and RMS error performance of this method is discussed in Section 4.4.

4.1. Experimental Setup

In this paper, we conducted experiments in two indoor environments. Scenario 1, with dimensions of 27 × 16 × 3 m³, is a reading room similar to a seminar room. It contains many tables, some air conditioners, and potted plants. Two sides of the reading room are glass windows and the other sides are walls, which may affect signal reflection and absorption. The second scene, at 34 × 17.3 × 3 m³, is a large, closed corridor that follows an indoor loop and is more open than the first scene. The anchor distribution is shown in Figure 8: twenty-five beacons are deployed in the first experimental scenario and thirty-six in the second. The red solid line denotes the pedestrian movement trajectory, and the black solid arrows indicate the direction of movement. In Figure 8a, the experimental path runs along the desks in the reading room, and the start and end points are different locations. In Figure 8b, the experimental path is a closed rectangle with the same start and end points.
In this experiment, we invited a female volunteer of 160 cm height (#1) and a male volunteer of 181 cm height (#2) to collect acoustic signals and IMU data. The two volunteers, holding Vivo X30 and OPPO K5 smartphones, respectively, walked along the test path several times with a step length of approximately 0.6 m.

4.2. Improved Step Length Performance

To assess the performance of the weighted fusion step estimation model based on LASSO, we conducted experiments on the Scarlet model, Kim model, Weinberg model, multifeatured model, Yan+ 2022 [26] model, and our model.
Figure 9 shows the step-length results for the two volunteers using a Vivo X30 and an OPPO K5 smartphone (Guangdong, China) in scene 1. The proposed step estimation model is more accurate and produces results closer to the real step length than the state-of-the-art step-length models for different volunteers and mobile devices. This is because the step estimation model proposed in this paper considers not only the first three steps but also the acceleration peak-to-valley amplitude difference, walk frequency, variance of acceleration, mean acceleration, peak median, and valley median. The method can supply more features to predict step length and can effectively mitigate the errors in the approximate symmetry.
Figure 10 presents step-length results with two volunteers holding the Vivo X30 and OPPO K5 smartphones in scene 2. The experimental results show that the proposed step-length improvement model also performs better than the state-of-the-art step-length models. This is because the proposed step-length model has better robustness and can avoid the effects of different pedestrians and devices.
In addition, we compared the step errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models in scene 1. In Figure 11, it can be observed that the step errors of the step-length model proposed in this paper are smaller than those of the other models. The reason is that the proposed step-length estimation model combines various influencing features to estimate the step length in a more comprehensive way.
Figure 12 illustrates the step-length errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models in scene 2. On longer paths, the step improvement model proposed in this paper has a smaller average step error and can support more accurate target localization. This is because the LASSO-constrained model estimation can exploit more features for a fine estimation.
Table 1 and Table 2 show the average step-length results among the Scarlet model, Kim model, Weinberg model, Multi-feature model, Yan+ 2022 [26] model, and proposed model for Volunteer #1 holding the OPPO K5 and Vivo X30 smartphones in the two scenes. The step estimation results of the proposed model have a higher step-length estimation performance for different scenarios and devices.
Table 3 and Table 4 show the average step-length estimation results of the Scarlet model, Kim model, Weinberg model, Multi-feature model, Yan+ 2022 [26] model, and the proposed model by Volunteer #2 holding OPPO K5 and Vivo X30 smartphones in the two scenes. The step estimation results demonstrate that the proposed model is more robust at different heights and has better universality.

4.3. CDF Positioning Performance

To verify the positioning performance, we carried out multiple experiments on the PDR algorithm, CHAN–Taylor hybrid algorithm, CHAN–IPDR–ILS [34], improved PDR algorithm, and proposed algorithm using different phones in different scenarios. Figure 13 presents the localization performance of the above-mentioned algorithms for two volunteers using the Vivo X30 and OPPO K5 mobile phones in scene 1. The experiments show that the proposed algorithm has smaller positioning errors than the state-of-the-art algorithms. This method not only uses the weighted fusion step estimation model based on LASSO to improve the step accuracy of PDR but also combines it with acoustic estimation to reduce the cumulative error of PDR.
Figure 14 presents the positioning performance of the PDR algorithm, CHAN–Taylor hybrid algorithm, CHAN–IPDR–ILS, improved PDR algorithm, and our algorithm by two volunteers using Vivo X30 and OPPO K5 smartphones in the second scene. The proposed algorithm has a smaller localization error over long movement times in similar scenes. The experiments demonstrate that the PDR algorithm in this paper significantly improves its positioning performance in similar scenarios, and the EKF fusion of the proposed positioning algorithm has the best positioning performance among these algorithms and solves the contradiction between high positioning accuracy and low cost. The main reason is that this method can extract accurate features for step-length prediction in the dead reckoning. The outlier schemes are determined during the fusion positioning process, and the EKF can achieve good nonlinear filtering.
The mean localization errors of different step numbers for the PDR algorithm, CHAN–Taylor algorithm, CHAN–IPDR–ILS algorithm, improved PDR algorithm, and our algorithm at the first scene are shown in Figure 15. The proposed algorithm resulted in the least positioning errors for different length paths. This is because the method attenuates the cumulative error in the PDR algorithm over long movement times and the occasional error of the acoustic-based estimation.
Figure 16 shows the mean localization errors of different length paths for the different algorithms with different smartphones and pedestrians in the second scene. The results reveal that the positioning errors of the proposed algorithm increase slightly as the number of steps increases. However, the overall positioning performance remains basically stable, and the accumulated errors are effectively reduced. The improved PDR has better performance regarding cumulative errors. The proposed system exhibits good positioning performance on paths of different lengths, together with good robustness and universality. This is because device heterogeneity and pedestrian step differences during step-length prediction are effectively eliminated, and pedestrian motion features are accurately extracted. In addition, the impact of the environment on acoustic signal localization is addressed.

4.4. Mean and RMS Error Performance

Table 5 and Table 6 present the mean and RMS errors among the different algorithms in scene 1. The PDR algorithm in this paper has good robustness for different pedestrians. Due to equipment heterogeneity, the proposed algorithm has slightly different errors, of less than 10 cm. In this open symmetric scene, the proposed positioning system has better performance than the other algorithms. This is because the improved PDR algorithm effectively addresses the accumulated errors, and EKF fusion reduces the nonlinearity effect.
Table 7 and Table 8 describe the mean and RMS errors of the different algorithms in scene 2. The proposed algorithm has better performance than the others. Moreover, the results reveal that the proposed EKF-based PDR method combined with acoustic estimation has better localization accuracy and more robust localization performance under many different experimental conditions. Its greatest highlight is the LASSO-based step-length estimation model, through which more accurate localization can be achieved.

5. Conclusions

In this paper, we present a localization method that utilizes the EKF to fuse acoustic estimation with an improved PDR. The acoustic estimation is implemented using a hybrid CHAN–Taylor algorithm. In the dead reckoning, we propose a novel weighted step-length model with a LASSO constraint. In this model, the peaks and valleys of the current step and the previous three steps are used to obtain a coarse step estimation; then, the acceleration peak-to-valley amplitude difference, walk frequency, variance of acceleration, mean acceleration, peak median, and valley median are extracted from the collected data. Finally, we combine the extracted motion features and LASSO regularization spatial constraints to obtain an accurate step length. The model utilizes LASSO regression to combine multiple features to predict the step length and improve the step estimation accuracy of PDR. The improved PDR is then used as the state model and the acoustic estimation as the observation model, and the target location is determined by EKF fusion.
To demonstrate the localization accuracy of the proposed method, we conducted extensive experiments on different experimental paths in two scenes. Scene 1 is a reading room with an area of approximately 432 m², and scene 2 is a corridor with an area of approximately 584.8 m². Two volunteers of different heights holding Vivo X30 and OPPO K5 smartphones were recruited to collect the data. The experimental results demonstrate that the proposed step-length method is more accurate than the Weinberg, Scarlet, Kim, Multi-feature, and Yan+ 2022 [26] models, which validates that our method can extract more accurate information to achieve high performance. Finally, we fuse the acoustic positioning with dead reckoning to obtain high positioning performance at low cost. Experiments with different pedestrians and devices were carried out in the two scenes. The results show that the proposed positioning system achieves more accurate localization for different users and different devices. The localization method can effectively mitigate the cumulative error of PDR and improve the accuracy and stability of indoor positioning. Although we conducted experiments in different scenes, neither scene contained many obstacles; therefore, an underground parking lot is an interesting test scenario for future work.

Author Contributions

Conceptualization, S.Y., X.L. and R.W.; methodology, S.Y., X.X. and R.W.; software, X.L., Y.J. and R.W.; validation, S.Y., X.X. and J.X.; formal analysis, X.L., Y.J. and R.W.; resources, J.X. and Y.J.; data curation, S.Y. and X.X.; writing—original draft preparation, S.Y. and X.X.; writing—review and editing, S.Y., X.X. and J.X.; funding acquisition, X.L., J.X. and Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Guangxi Science and Technology Project: Grant AB21196041, Grant AB22035074, Grant AD22080061, Grant AB23026120, and Grant AA20302022; National Natural Science Foundation of China: Grant 62061010, Grant 62161007, Grant U23A20280, Grant 61936002, Grant 62033001, and Grant 6202780103; National Key Research and Development Program (2018AA100305); Guangxi Bagui Scholar Project; Guilin Science and Technology Project: Grant 20210222-1; Guangxi Key Laboratory of Precision Navigation Technology and Application; Innovation Project of Guang Xi Graduate Education: YCSW2022291; Innovation Project of GUET Graduate Education: 2023YCXS024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All test data mentioned in this paper will be made available on request to the corresponding author’s email with appropriate justification.

Conflicts of Interest

Author Y.J. is an expert at GUET-Nanning E-Tech Research Institute Co., Ltd. The remaining authors declare that the research was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Ge, H.B.; Li, B.F.; Jia, S.; Nie, L.W.; Wu, T.H.; Yang, Z.; Shang, J.Z.; Zheng, Y.N.; Ge, M.R. LEO Enhanced Global Navigation Satellite System (LeGNSS): Progress, opportunities, and challenges. Geo-Spat. Inf. Sci. 2022, 25, 1–13. [Google Scholar] [CrossRef]
  2. Morales-Ferre, R.; Richter, P.; Falletti, E.; de la Fuente, A.; Lohan, E.S. A Survey on Coping With Intentional Interference in Satellite Navigation for Manned and Unmanned Aircraft. IEEE Commun. Surv. Tutor. 2020, 22, 249–291. [Google Scholar] [CrossRef]
  3. Shu, Y.M.; Xu, P.L.; Niu, X.J.; Chen, Q.J.; Qiao, L.L.; Liu, J.N. High-Rate Attitude Determination of Moving Vehicles With GNSS: GPS, BDS, GLONASS, and Galileo. IEEE Trans. Instrum. Meas. 2022, 71. [Google Scholar] [CrossRef]
  4. Bi, J.X.; Zhao, M.Q.; Yao, G.B.; Cao, H.J.; Feng, Y.G.; Jiang, H.; Chai, D.S. PSOSVRPos: WiFi indoor positioning using SVR optimized by PSO. Expert Syst. Appl. 2023, 222, 119778. [Google Scholar] [CrossRef]
  5. Mandia, S.; Kumar, A.; Verma, K.; Deegwal, J.K. Vision-Based Assistive Systems for Visually Impaired People: A Review. In Proceedings of the 5th International Conference on Optical and Wireless Technologies (OWT), Electr Network, Jaipur, India, 9–10 October 2021; pp. 163–172. [Google Scholar]
  6. Zhuang, Y.; Zhang, C.Y.; Huai, J.Z.; Li, Y.; Chen, L.; Chen, R.Z. Bluetooth Localization Technology: Principles, Applications, and Future Trends. IEEE Internet Things J. 2022, 9, 23506–23524. [Google Scholar] [CrossRef]
  7. Jia, M.; Khattak, S.B.; Guo, Q.; Gu, X.M.; Lin, Y. Access Point Optimization for Reliable Indoor Localization Systems. IEEE Trans. Reliab. 2020, 69, 1424–1436. [Google Scholar] [CrossRef]
  8. Lopes, S.I.; Vieira, J.M.N.; Reis, J.; Albuquerque, D.; Carvalho, N.B. Accurate smartphone indoor positioning using a WSN infrastructure and non-invasive audio for TDoA estimation. Pervasive Mob. Comput. 2015, 20, 29–46. [Google Scholar] [CrossRef]
  9. Chen, X.; Chen, Y.H.; Cao, S.; Zhang, L.; Zhang, X.; Chen, X. Acoustic Indoor Localization System Integrating TDMA plus FDMA Transmission Scheme and Positioning Correction Technique. Sensors 2019, 19, 2353. [Google Scholar] [CrossRef]
  10. Filonenko, V.; Cullen, C.; Carswell, J.D. Indoor Positioning for Smartphones Using Asynchronous Ultrasound Trilateration. ISPRS Int. Geo-Inf. 2013, 2, 598–620. [Google Scholar] [CrossRef]
  11. Murakami, H.; Nakamura, M.; Hashizume, H.; Sugimoto, M. 3-D Localization for Smartphones using a Single Speaker. In Proceedings of the 10th International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, 30 September–3 October 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  12. Zhou, R.H.; Sun, H.M.; Li, H.; Luo, W.L. Time-difference-of-arrival Location Method of UAV Swarms Based on Chan-Taylor. In Proceedings of the 3rd International Conference on Unmanned Systems (ICUS), Harbin, China, 27–28 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1161–1166. [Google Scholar]
  13. Wang, X.; Huang, Z.H.; Zheng, F.Q.; Tian, X.C. The Research of Indoor Three-Dimensional Positioning Algorithm Based on Ultra-Wideband Technology. In Proceedings of the 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 5144–5149. [Google Scholar]
  14. Yang, H.B.; Gao, X.J.; Huang, H.W.; Li, B.S.; Xiao, B. An LBL positioning algorithm based on an EMD-ML hybrid method. EURASIP J. Adv. Signal Process. 2022, 2022, 38. [Google Scholar] [CrossRef]
  15. Feng, T.Y.; Zhang, Z.X.; Wong, W.C.; Sun, S.M.; Sikdar, B. A Privacy-Preserving Pedestrian Dead Reckoning Framework Based on Differential Privacy. In Proceedings of the 32nd IEEE Annual International Symposium on Personal, Indoor and Mobile Radio Communications (IEEE PIMRC), Electr Network, Helsinki, Finland, 13–16 September 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  16. Zhang, R.; Bannoura, A.; Hoflinger, F.; Reindl, L.M.; Schindelhauer, C. Indoor Localization Using A Smart PhoneAC. In Proceedings of the 8th IEEE Sensors Applications Symposium (SAS), Galveston, TX, USA, 19–21 February 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 38–42. [Google Scholar]
  17. Ehrlich, C.R.; Blankenbach, J. Indoor localization for pedestrians with real-time capability using multi-sensor smartphones. Geo-Spat. Inf. Sci. 2019, 22, 73–88. [Google Scholar] [CrossRef]
  18. Diez, L.E.; Bahillo, A.; Otegui, J.; Otim, T. Step Length Estimation Methods Based on Inertial Sensors: A Review. IEEE Sens. J. 2018, 18, 6908–6926. [Google Scholar] [CrossRef]
  19. Yotsuya, K.; Ito, N.; Naito, K.; Chujo, N.; Mizuno, T.; Kaji, K. Method to Improve Accuracy of Indoor PDR Trajectories Using a Large Amount of Trajectories. In Proceedings of the 11th International Conference on Mobile Computing and Ubiquitous Network (ICMU), Auckland, New Zealand, 5–8 October 2018; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar]
  20. Guo, S.L.; Zhang, Y.T.; Gui, X.Z.; Han, L.N. An Improved PDR/UWB Integrated System for Indoor Navigation Applications. IEEE Sens. J. 2020, 20, 8046–8061. [Google Scholar] [CrossRef]
  21. Im, C.; Eom, C.; Lee, H.; Jang, S.; Lee, C. Deep LSTM-Based Multimode Pedestrian Dead Reckoning System for Indoor Localization. In Proceedings of the International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 6–9 February 2022; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar]
  22. Yao, Y.B.; Pan, L.; Fen, W.; Xu, X.R.; Liang, X.S.; Xu, X. A Robust Step Detection and Stride Length Estimation for Pedestrian Dead Reckoning Using a Smartphone. IEEE Sens. J. 2020, 20, 9685–9697. [Google Scholar] [CrossRef]
  23. Zhang, M.; Shen, W.B.; Yao, Z.; Zhu, J.H. Multiple information fusion indoor location algorithm based on WIFI and improved PDR. In Proceedings of the 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 5086–5092. [Google Scholar]
  24. Vathsangam, H.; Emken, A.; Spruijt-Metz, D.; Sukhatme, G.S. Toward free-living walking speed estimation using Gaussian Process-based Regression with on-body accelerometers and gyroscopes. In Proceedings of the 2010 4th International Conference on Pervasive Computing Technologies for Healthcare, Munich, Germany, 22–25 March 2010; pp. 1–8. [Google Scholar]
  25. Zihajehzadeh, S.; Park, E.J. Experimental Evaluation of Regression Model-Based Walking Speed Estimation Using Lower Body-Mounted IMU. In Proceedings of the 38th Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 243–246. [Google Scholar]
  26. Yan, S.Q.; Wu, C.P.; Deng, H.G.; Luo, X.N.; Ji, Y.F.; Xiao, J.M. A Low-Cost and Efficient Indoor Fusion Localization Method. Sensors 2022, 22, 5505. [Google Scholar] [CrossRef]
  27. Naser, R.S.; Lam, M.C.; Qamar, F.; Zaidan, B.B. Smartphone-Based Indoor Localization Systems: A Systematic Literature Review. Electronics 2023, 12, 1814. [Google Scholar] [CrossRef]
  28. Lu, Y.L.; Luo, S.Q.; Yao, Z.X.; Zhou, J.F.; Lu, S.C.A.; Li, J.W. Optimization of Kalman Filter Indoor Positioning Method Fusing WiFi and PDR. In Proceedings of the 7th International Conference on Human Centered Computing (HCC), Electr Network, Virtual, 9–11 December 2021; pp. 196–207. [Google Scholar]
  29. Hou, X.Y.; Bergmann, J. Pedestrian Dead Reckoning With Wearable Sensors: A Systematic Review. IEEE Sens. J. 2021, 21, 143–152. [Google Scholar] [CrossRef]
  30. Duan, J.B.; Soussen, C.; Brie, D.; Idier, J.; Wan, M.X.; Wang, Y.P. Generalized LASSO with under-determined regularization matrices. Signal Process. 2016, 127, 239–246. [Google Scholar] [CrossRef]
  31. Arbet, J.; McGue, M.; Chatterjee, S.; Basu, S. Resampling-based tests for Lasso in genome-wide association studies. BMC Genet. 2017, 18, 15. [Google Scholar] [CrossRef]
  32. Song, X.Y.; Wang, M.; Qiu, H.B.; Luo, L.Y. Indoor Pedestrian Self-Positioning Based on Image Acoustic Source Impulse Using a Sensor-Rich Smartphone. Sensors 2018, 18, 4143. [Google Scholar] [CrossRef]
  33. Wang, M.; Duan, N.; Zhou, Z.; Zheng, F.; Qiu, H.B.; Li, X.P.; Zhang, G.L. Indoor PDR Positioning Assisted by Acoustic Source Localization, and Pedestrian Movement Behavior Recognition, Using a Dual-Microphone Smartphone. Wirel. Commun. Mob. Comput. 2021, 2021, 9981802. [Google Scholar] [CrossRef]
  34. Yan, S.Q.; Wu, C.P.; Luo, X.A.; Ji, Y.F.; Xiao, J.M. Multi-Information Fusion Indoor Localization Using Smartphones. Appl. Sci. 2023, 13, 3270. [Google Scholar] [CrossRef]
  35. Al Mamun, M.A.; Yuce, M.R. Map-Aided Fusion of IMU PDR and RSSI Fingerprinting for Improved Indoor Positioning. In Proceedings of the 20th IEEE Sensors Conference, Virtual, 31 October–4 November 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  36. Poulose, A.; Eyobu, O.S.; Han, D.S. A Combined PDR and Wi-Fi Trilateration Algorithm for Indoor Localization. In Proceedings of the 1st International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 11–13 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 72–77. [Google Scholar]
  37. Lee, G.T.; Seo, S.B.; Jeon, W.S. Indoor Localization by Kalman Filter based Combining of UWB-Positioning and PDR. In Proceedings of the IEEE 18th Annual Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, USA, 9–13 January 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  38. Wu, J.; Zhu, M.H.; Xiao, B.; Qiu, Y.Z. Graph-Based Indoor Localization with the Fusion of PDR and RFID Technologies. In Proceedings of the 18th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP), Guangzhou, China, 15–17 November 2018; pp. 630–639. [Google Scholar]
  39. Tian, X.; Wei, G.L.; Zhou, J. Calibration method of anchor position in indoor environment based on two-step extended Kalman filter. Multidimens. Syst. Signal Process. 2021, 32, 1141–1158. [Google Scholar] [CrossRef]
  40. Yang, C.Y.; Cheng, Z.H.; Jia, X.X.; Zhang, L.T.; Li, L.Y.; Zhao, D.Q. A Novel Deep Learning Approach to 5G CSI/Geomagnetism/VIO Fused Indoor Localization. Sensors 2023, 23, 1311. [Google Scholar] [CrossRef]
  41. Liu, W.; Jing, C.; Wan, P.; Ma, Y.H.; Cheng, J. Combining extended Kalman filtering and rapidly-exploring random tree: An improved autonomous navigation strategy for four-wheel steering vehicle in narrow indoor environments. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2022, 236, 883–896. [Google Scholar] [CrossRef]
  42. Mendoza, L.R.; O’Keefe, K. Periodic Extended Kalman Filter to Estimate Rowing Motion Indoors Using a Wearable Ultra-Wideband Ranging Positioning System. In Proceedings of the 11th International Conference on Indoor Positioning and Indoor Navigation (IPIN), Lloret de Mar, Spain, 29 November–2 December 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  43. Pak, J.M. Switching Extended Kalman Filter Bank for Indoor Localization Using Wireless Sensor Networks. Electronics 2021, 10, 718. [Google Scholar] [CrossRef]
  44. Scarlett, J. Enhancing the Performance of Pedometers Using a Single Accelerometer; Analog Devices: Wilmington, MA, USA, 2009. [Google Scholar]
  45. Kim, J.W.; Jang, H.J.; Hwang, D.-H. A Step, Stride and Heading Determination for the Pedestrian Navigation System. J. Glob. Position. Syst. 2004, 3, 273–279. [Google Scholar] [CrossRef]
  46. Weinberg, H. Using the ADXL202 in Pedometer and Personal Navigation Applications; Analog Devices: Wilmington, MA, USA, 2009. [Google Scholar]
  47. Reddy, M.S.K.; Sumathi, R.; Reddy, N.V.K.; Revanth, N.; Bhavani, S. Analysis of Various Regressions for Stock Data Prediction. In Proceedings of the 2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS), Tashkent, Uzbekistan, 10–12 October 2022; pp. 538–542. [Google Scholar]
  48. Chen, D.Z.; Zhang, W.B.; Zhang, Z.Z. Indoor Positioning with Sensors in a Smartphone and a Fabricated High-Precision Gyroscope. In Proceedings of the 7th International Conference on Communications, Signal Processing, and Systems (CSPS), Dalian, China, 14–16 July 2018; pp. 1126–1134. [Google Scholar]
Figure 1. The methodological framework of the proposed positioning and navigation system.
Figure 2. Spectrogram of the acoustic signal using a Vivo X30 smartphone.
Figure 3. Spatial geometry distribution with three anchors (A1, A2, A3) and the target M.
Figure 4. Comparison results among sliding-window filtering, low-pass filtering, median filtering, and Hampel filtering on acceleration processing.
Figure 5. Acceleration data preprocessed by sliding-window filtering.
Figure 6. Peak and valley detection results on a 42 m experimental path.
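Figure 6 illustrates the peak-and-valley detection used for step counting. As a rough illustration of the idea (not the paper's exact detector; the threshold value here is an arbitrary choice of ours), a local-maximum step detector can be sketched as:

```python
def detect_step_peaks(accel_mag, threshold=10.5):
    """Return indices of local maxima of the filtered acceleration
    magnitude that exceed a fixed threshold (illustrative value only)."""
    return [i for i in range(1, len(accel_mag) - 1)
            if accel_mag[i - 1] < accel_mag[i] >= accel_mag[i + 1]
            and accel_mag[i] > threshold]
```

Valleys are found symmetrically as thresholded local minima; practical detectors additionally enforce a minimum spacing between consecutive peaks to reject sensor bounce within a single step.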
Figure 7. Step-length errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models.
Figure 8. Floor plan of the experimental site: (a) scene 1, (b) scene 2.
Figure 9. Step-length estimation comparison among different methods in scene 1. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 10. Step-length comparison among different methods in scene 2. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 11. Step-length errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models in scene 1. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 12. Step-length errors of the Weinberg, Scarlet, Kim, Multi-feature, Yan+ 2022 [26], and proposed step models in scene 2. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 13. CDFs of the positioning errors for the different algorithms in scene 1: (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 14. CDFs of the positioning errors for the different algorithms in scene 2: (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 15. Mean localization errors versus step number for the different algorithms in scene 1. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Figure 16. Mean localization errors versus step number for the different algorithms in scene 2. (a) Volunteer #1 using an OPPO K5 smartphone. (b) Volunteer #1 using a Vivo X30 smartphone. (c) Volunteer #2 using an OPPO K5 smartphone. (d) Volunteer #2 using a Vivo X30 smartphone.
Table 1. Estimation of the average step length in scene 1 by Volunteer #1 using the OPPO K5 and Vivo X30 smartphone.

| Device   | Method               | Average Step Length (m) |
|----------|----------------------|-------------------------|
| OPPO K5  | Scarlet model        | 0.6265 |
| OPPO K5  | Kim model            | 0.5576 |
| OPPO K5  | Weinberg model       | 0.5688 |
| OPPO K5  | Multi-feature model  | 0.5756 |
| OPPO K5  | Yan+ 2022 [26] model | 0.5913 |
| OPPO K5  | Proposed model       | 0.6013 |
| Vivo X30 | Scarlet model        | 0.6201 |
| Vivo X30 | Kim model            | 0.5527 |
| Vivo X30 | Weinberg model       | 0.5635 |
| Vivo X30 | Multi-feature model  | 0.5838 |
| Vivo X30 | Yan+ 2022 [26] model | 0.5877 |
| Vivo X30 | Proposed model       | 0.5991 |
Table 2. Estimation of the average step length in scene 2 by Volunteer #1 using the OPPO K5 and Vivo X30 smartphone.

| Device   | Method               | Average Step Length (m) |
|----------|----------------------|-------------------------|
| OPPO K5  | Scarlet model        | 0.6306 |
| OPPO K5  | Kim model            | 0.5606 |
| OPPO K5  | Weinberg model       | 0.5814 |
| OPPO K5  | Multi-feature model  | 0.5875 |
| OPPO K5  | Yan+ 2022 [26] model | 0.5916 |
| OPPO K5  | Proposed model       | 0.6010 |
| Vivo X30 | Scarlet model        | 0.6263 |
| Vivo X30 | Kim model            | 0.5564 |
| Vivo X30 | Weinberg model       | 0.5797 |
| Vivo X30 | Multi-feature model  | 0.5859 |
| Vivo X30 | Yan+ 2022 [26] model | 0.5907 |
| Vivo X30 | Proposed model       | 0.5992 |
Table 3. Estimation of the average step length in scene 1 by Volunteer #2 using the OPPO K5 and Vivo X30 smartphone.

| Device   | Method               | Average Step Length (m) |
|----------|----------------------|-------------------------|
| OPPO K5  | Scarlet model        | 0.6337 |
| OPPO K5  | Kim model            | 0.5631 |
| OPPO K5  | Weinberg model       | 0.5765 |
| OPPO K5  | Multi-feature model  | 0.5775 |
| OPPO K5  | Yan+ 2022 [26] model | 0.6114 |
| OPPO K5  | Proposed model       | 0.6056 |
| Vivo X30 | Scarlet model        | 0.6298 |
| Vivo X30 | Kim model            | 0.5658 |
| Vivo X30 | Weinberg model       | 0.5649 |
| Vivo X30 | Multi-feature model  | 0.5755 |
| Vivo X30 | Yan+ 2022 [26] model | 0.5900 |
| Vivo X30 | Proposed model       | 0.6005 |
Table 4. Estimation of the average step length in scene 2 by Volunteer #2 using the OPPO K5 and Vivo X30 smartphone.

| Device   | Method               | Average Step Length (m) |
|----------|----------------------|-------------------------|
| OPPO K5  | Scarlet model        | 0.6297 |
| OPPO K5  | Kim model            | 0.5630 |
| OPPO K5  | Weinberg model       | 0.5813 |
| OPPO K5  | Multi-feature model  | 0.5824 |
| OPPO K5  | Yan+ 2022 [26] model | 0.6095 |
| OPPO K5  | Proposed model       | 0.5996 |
| Vivo X30 | Scarlet model        | 0.6243 |
| Vivo X30 | Kim model            | 0.5462 |
| Vivo X30 | Weinberg model       | 0.5880 |
| Vivo X30 | Multi-feature model  | 0.5878 |
| Vivo X30 | Yan+ 2022 [26] model | 0.5894 |
| Vivo X30 | Proposed model       | 0.5995 |
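Tables 1–4 compare several classical accelerometer-based step-length models. For reference, the Weinberg model [46] and the Kim model [45] have simple closed forms; the coefficient values below are illustrative placeholders, since the per-user calibration constants are not given here:

```python
def weinberg_step_length(a_max, a_min, k=0.5):
    # Weinberg model: L = K * (a_max - a_min)^(1/4), where a_max/a_min are
    # the acceleration extrema within one step and K is a per-user constant.
    return k * (a_max - a_min) ** 0.25

def kim_step_length(accel_samples, k=0.5):
    # Kim model: L = K * (mean |a| over one step)^(1/3).
    mean_a = sum(abs(a) for a in accel_samples) / len(accel_samples)
    return k * mean_a ** (1.0 / 3.0)
```

The proposed model instead regresses step length on a richer feature set (peak-to-valley amplitude difference, walk frequency, acceleration variance and mean, peak/valley medians) with a LASSO constraint, as described in the abstract.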
Table 5. Mean and RMS error comparison among different algorithms for Volunteer #1 holding the OPPO K5 and Vivo X30 in scene 1 (m).

| Device   | Method                  | Mean Error | RMS Error |
|----------|-------------------------|------------|-----------|
| OPPO K5  | PDR algorithm           | 1.3795 | 1.4714 |
| OPPO K5  | CHAN–Taylor algorithm   | 0.2339 | 0.2706 |
| OPPO K5  | CHAN–IPDR–ILS algorithm | 0.2219 | 0.3126 |
| OPPO K5  | Improved PDR algorithm  | 0.3249 | 0.3780 |
| OPPO K5  | Proposed algorithm      | 0.1640 | 0.1881 |
| Vivo X30 | PDR algorithm           | 1.8636 | 2.3054 |
| Vivo X30 | CHAN–Taylor algorithm   | 0.1101 | 0.2127 |
| Vivo X30 | CHAN–IPDR–ILS algorithm | 0.1229 | 0.1905 |
| Vivo X30 | Improved PDR algorithm  | 0.2009 | 0.2578 |
| Vivo X30 | Proposed algorithm      | 0.0905 | 0.1237 |
Table 6. Mean and RMS error comparison among different algorithms for Volunteer #2 holding the OPPO K5 and Vivo X30 in scene 1 (m).

| Device   | Method                  | Mean Error | RMS Error |
|----------|-------------------------|------------|-----------|
| OPPO K5  | PDR algorithm           | 1.6189 | 1.8466 |
| OPPO K5  | CHAN–Taylor algorithm   | 0.2217 | 0.3332 |
| OPPO K5  | CHAN–IPDR–ILS algorithm | 0.2034 | 0.3004 |
| OPPO K5  | Improved PDR algorithm  | 0.3688 | 0.4056 |
| OPPO K5  | Proposed algorithm      | 0.1804 | 0.3004 |
| Vivo X30 | PDR algorithm           | 1.8835 | 2.1757 |
| Vivo X30 | CHAN–Taylor algorithm   | 0.2224 | 0.3285 |
| Vivo X30 | CHAN–IPDR–ILS algorithm | 0.1955 | 0.2877 |
| Vivo X30 | Improved PDR algorithm  | 0.3078 | 0.3490 |
| Vivo X30 | Proposed algorithm      | 0.1674 | 0.2072 |
Table 7. Mean and RMS error comparison among different algorithms for Volunteer #1 holding the OPPO K5 and Vivo X30 in scene 2 (m).

| Device   | Method                  | Mean Error | RMS Error |
|----------|-------------------------|------------|-----------|
| OPPO K5  | PDR algorithm           | 0.9076 | 1.0232 |
| OPPO K5  | CHAN–Taylor algorithm   | 0.1583 | 0.3184 |
| OPPO K5  | CHAN–IPDR–ILS algorithm | 0.1983 | 0.2536 |
| OPPO K5  | Improved PDR algorithm  | 0.2123 | 0.3111 |
| OPPO K5  | Proposed algorithm      | 0.1395 | 0.1981 |
| Vivo X30 | PDR algorithm           | 1.4852 | 1.6446 |
| Vivo X30 | CHAN–Taylor algorithm   | 0.1637 | 0.2819 |
| Vivo X30 | CHAN–IPDR–ILS algorithm | 0.1205 | 0.2010 |
| Vivo X30 | Improved PDR algorithm  | 0.1474 | 0.2003 |
| Vivo X30 | Proposed algorithm      | 0.1188 | 0.1200 |
Table 8. Mean and RMS error comparison among different algorithms for Volunteer #2 holding the OPPO K5 and Vivo X30 in scene 2 (m).

| Device   | Method                  | Mean Error | RMS Error |
|----------|-------------------------|------------|-----------|
| OPPO K5  | PDR algorithm           | 3.8788 | 4.0600 |
| OPPO K5  | CHAN–Taylor algorithm   | 0.2356 | 0.3685 |
| OPPO K5  | CHAN–IPDR–ILS algorithm | 0.2528 | 0.3782 |
| OPPO K5  | Improved PDR algorithm  | 0.2461 | 0.2754 |
| OPPO K5  | Proposed algorithm      | 0.1623 | 0.1900 |
| Vivo X30 | PDR algorithm           | 2.4835 | 2.7227 |
| Vivo X30 | CHAN–Taylor algorithm   | 0.2146 | 0.3798 |
| Vivo X30 | CHAN–IPDR–ILS algorithm | 0.1718 | 0.2706 |
| Vivo X30 | Improved PDR algorithm  | 0.2063 | 0.2746 |
| Vivo X30 | Proposed algorithm      | 0.1489 | 0.1879 |
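The mean and RMS errors reported in Tables 5–8 are the usual aggregates of per-point Euclidean position errors; a minimal sketch (function and variable names are ours):

```python
import math

def position_errors(estimates, ground_truth):
    # Euclidean distance between each estimated and true 2-D position.
    return [math.hypot(ex - tx, ey - ty)
            for (ex, ey), (tx, ty) in zip(estimates, ground_truth)]

def mean_error(errors):
    return sum(errors) / len(errors)

def rms_error(errors):
    # RMS weights large deviations more heavily than the mean does.
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

An RMS error noticeably larger than the mean error (e.g., the PDR rows above) indicates occasional large deviations rather than a uniform offset.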
Yan, S.; Xu, X.; Luo, X.; Xiao, J.; Ji, Y.; Wang, R. A Positioning and Navigation Method Combining Multimotion Features Dead Reckoning with Acoustic Localization. Sensors 2023, 23, 9849. https://doi.org/10.3390/s23249849