Article

A Novel Algorithm for Scenario Recognition Based on MEMS Sensors of Smartphone

Aerospace Information Research (AIR) Institute, Chinese Academy of Sciences (CAS), Beijing 100091, China
*
Author to whom correspondence should be addressed.
Micromachines 2022, 13(11), 1865; https://doi.org/10.3390/mi13111865
Submission received: 13 October 2022 / Revised: 27 October 2022 / Accepted: 28 October 2022 / Published: 30 October 2022
(This article belongs to the Special Issue MEMS Accelerometers: Design, Applications and Characterization)

Abstract

Scenario information is very important to smartphone-based pedestrian positioning services. Smartphones are equipped with MEMS (Micro-Electro-Mechanical System) sensors, which have low accuracy. At present, the methods for scenario recognition are mainly machine-learning methods, and the recognition rate of a single method is not high. Multi-model fusion can improve recognition accuracy, but it needs many samples, its computational cost is high, and it depends heavily on feature selection. Therefore, we designed the DT-BP (decision tree-Bayesian probability) scenario recognition algorithm by introducing an empirically designed Bayesian state-transition model into a decision tree. Decision-tree rules and state-transition probability assignment methods were designed separately for smartphone mode and motion mode. We carried out experiments for each scenario and compared the results with the methods in the references. The results show that the proposed method has a high recognition accuracy, equivalent to that of multi-model machine learning, while being simpler, easier to implement, and requiring less computation and fewer samples.

1. Introduction

With the rapid development of Micro-Electro-Mechanical System (MEMS) sensors, smartphones are equipped with an inertial measurement unit (IMU), a barometer, and a magnetometer, which provide new and inexpensive approaches to smartphone-based pedestrian positioning services. Such services have now penetrated all aspects of people's lives. However, because of the complex scenarios a smartphone faces, obtaining a high-precision position from a smartphone is still a challenge. These complex scenarios include the diversity of smartphone carrying modes and pedestrian movement modes, as well as the accuracy limitations of the smartphone's built-in sensors, all of which affect the accuracy of pedestrian positioning. Contextual information is very important to a positioning system: it affects the types of available signals, provides additional information for positioning, and forms an important basis for choosing positioning methods and fusion algorithms and for failure detection. Therefore, it is necessary to identify different scenarios and choose a suitable coping strategy for each in order to obtain high-precision pedestrian positioning results.
At present, pedestrian motion mode recognition is mainly divided into two research directions. One is based on image processing technology, which converts an input image or video into feature vectors and then recognizes the motion mode [1,2]. However, it easily infringes on personal privacy and relies heavily on lighting conditions [3]. The other is based on various sensors, such as accelerometers, gyroscopes, gravimeters, and barometers: sensor data are collected, various features are extracted and classified, and then various methods are used to recognize the movement pattern. Machine-learning methods are mostly used for sensor-based motion pattern recognition, such as the support vector machine (SVM) [4], the k-nearest neighbor algorithm (KNN) [3], Gaussian naive Bayes (GNB), and artificial neural networks (ANN). The average recognition success rate can reach more than 80%. For example, Sun Bingyi et al. [5] proposed a behavior recognition method based on the SC-HMM algorithm, which can classify going up and down stairs and elevators with a classification accuracy of more than 80%. Jin-Shyan Lee et al. [6] proposed a threshold-based classification algorithm for the phone carrying mode using the acceleration value, which is very simple and easy to implement. Qinglin Tian et al. [7] proposed a finite state machine (FSM) to classify the smartphone mode with a classification accuracy of more than 89%.
Other scholars have tried to combine different models to improve recognition performance. Liu Bin et al. [8] combined four typical methods (the k-nearest neighbor algorithm, support vector machine, naive Bayesian network, and the AdaBoost algorithm based on a naive Bayesian network) to create a human activity recognition model. The optimal model was obtained through model decision-making, and the accuracy reached 92%. Using a support vector machine (SVM) and decision tree (DT), any combination of motion state and mobile phone posture could be successfully identified [9] with an average success rate of 92.4%. Other scholars combined a convolutional neural network (CNN) and a long short-term memory (LSTM) network to recognize walking, sitting, and lying behaviors for wearable (tied-to-the-waist) devices, with a success rate of over 96% [10,11]. In addition, some scholars used other methods to realize motion pattern recognition, such as the hidden Markov model [12,13], a sensor-data interception method based on last-bit matching [14], and a voting method [15]. Some scholars have also studied the influence of window length on human motion pattern recognition in order to choose the optimal window length [16,17].
Ichikawa et al. [18] studied the ways people like to carry mobile phones; the common locations are trouser pockets, clothing pockets, hand-held, and so on. Scholars have explored various methods to identify these common locations, that is, to identify the location of the mobile phone. Yang et al. [19] proposed PACP (Parameters Adjustment Corresponding to smartphone Position), a method that is independent of the smartphone mode; it uses an SVM (support vector machine) model to identify the smartphone mode with an accuracy rate of 91%. Deng et al. [20] proposed recognizing the location of mobile phones based on accelerometer features and tested the recognition results with SVM, a Bayesian network, and random forest. Noy et al. [21] tested and compared KNN, decision tree, and XGBoost, and showed that XGBoost has the best recognition success rate. Wang [22] proposed a stacked-model recognition method that combines six models (AdaBoost, DT, KNN, LightGBM, SVM, and XGBoost) to realize smartphone location recognition, with a recognition accuracy of up to 98.37%.
In general, the methods for scenario recognition mainly rely on machine learning, such as SVM, CNN, and KNN. These methods have a low recognition accuracy when working on raw data, and when working on sensor features they depend strongly on feature selection. The fusion of multiple models can improve recognition accuracy, but it increases computational complexity and cost, requires a large number of samples, and remains heavily dependent on the choice of features.
To solve this problem, we designed a DT-BP (decision tree-Bayesian probability) scenario recognition algorithm that combines a single-model decision tree with a Bayesian state-transition model, targeting motion mode and smartphone mode. This method is simpler and less computationally expensive, yet obtains the same recognition accuracy as multi-model machine-learning methods. The contributions of this study are as follows:
  • We designed a decoupling analysis method to analyze the relationship between different kinds of scenario and determine the identification order. When different scenario categories interact, adverse effects on scenario recognition occur. Therefore, the decoupling relationship analysis method decouples the scenario categories and determines the sequence of scenario-type identification;
  • We designed a DT-BP (decision tree-Bayesian probability) scenario recognition algorithm that combines a single-model decision tree with a Bayesian state-transition model, targeting motion mode and smartphone mode. This method is simpler and less computationally expensive, yet obtains the same recognition accuracy as multi-model machine-learning methods;
  • We designed the corresponding decision tree criteria and probability allocation method for smartphone mode and motion mode. We carried out experiments for each scenario and compared them with the methods in the references.
The rest of this paper is organized as follows: Section 2 introduces the proposed algorithm, including the decoupling analysis, feature extraction, and the scenario recognition algorithm. Section 3 presents the experimental setup, results, and discussion. Finally, Section 4 concludes the paper.

2. Methodology

2.1. Decoupling Analysis of Scenario Category

It is necessary to analyze the decoupling relationship of different scenario categories to determine their independence and correlation. For example, if there are $n_k$ motion modes and $m_k$ smartphone modes, there are $n_k \times m_k$ situations arising from combinations of the two kinds of context. It is too complicated and redundant to identify all the combined scenarios. Moreover, when different scenario categories interact, adverse effects on scenario recognition occur. Therefore, a decoupling relationship analysis method was designed to decouple different scenario categories and determine the sequence of scenario-type recognition.
The decoupling of smartphone modes and motion modes needs to be analyzed in three parts:
  • The correlation coefficient of the same motion mode in different smartphone modes;
  • The correlation coefficient of different motion modes in the same smartphone mode;
  • The correlation coefficient between different smartphone modes and different motion modes.
We used Pearson’s correlation coefficient to analyze the decoupling of the data. The calculation formula of the correlation coefficient is as follows:
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_i, \quad \bar{y} = \frac{1}{n}\sum_{i=1}^{n}y_i$$
where $r$ is the correlation coefficient, $n$ is the length of the data, and $x_i$, $y_i$ are two different time series.
As the data sampling length differs between situations and the data contain the periodic behavior of pedestrian movement, it was necessary to establish a time-lag series [23]. The sequence $(X, Y) = \{(x_i, y_i), i = 1, 2, \ldots, n\}$, after shifting forward or backward by $m$ sampling points, becomes:

$$(X_t, Y_{t+m}) = \{(x_i, y_{i+m}), i = 1, 2, \ldots, n-m\}, \quad 1 \le m < n, \; m \in \mathbb{N}^+$$
$$(X_t, Y_{t-m}) = \{(x_i, y_{i-m}), i = m+1, m+2, \ldots, n\}, \quad 1 \le m < n, \; m \in \mathbb{N}^+$$

If the time-shifted sequences are correlated, there must exist an $m_0$ ($1 \le |m_0| \le n$, $|m_0| \in \mathbb{N}^+$) that maximizes the correlation coefficient of $(X_t, Y_{t+m_0})$.
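As an illustration, the search for the lag $m_0$ that maximizes the time-shifted correlation can be sketched in Python as follows (a minimal sketch; the function name and the brute-force search range are ours, not from the paper):

```python
import numpy as np

def lagged_pearson(x, y, max_lag):
    """Search lags m in [-max_lag, max_lag] and return the lag m0 that
    maximizes |r| for the shifted pair (X_t, Y_{t+m}), plus that r."""
    best_lag, best_r = 0, 0.0
    for m in range(-max_lag, max_lag + 1):
        if m >= 0:                      # pair (x_i, y_{i+m}), i = 1..n-m
            xs, ys = x[:len(x) - m], y[m:]
        else:                           # pair (x_i, y_{i+m}) with m < 0
            xs, ys = x[-m:], y[:len(y) + m]
        r = np.corrcoef(xs, ys)[0, 1]   # Pearson r of the aligned pieces
        if abs(r) > abs(best_r):
            best_lag, best_r = m, r
    return best_lag, best_r
```

For a signal that is an exact delayed copy of another, the search recovers the delay as $m_0$ with $r$ close to 1.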
To decouple different scenario categories, the following analysis method was used:
(1)
To avoid dependence on feature selection, raw data were selected for data analysis;
To ensure a full analysis of different scenario categories, it was necessary to exclude other factors as far as possible, such as pedestrian differences and smartphone brand differences. Therefore, the window length $n$ must cover the pedestrian movement cycle, generally 0.5–1.2 s. The window length $n$ is calculated as follows:

$$n > 2 \times \left[\frac{T_n}{\Delta t}\right]$$

where $T_n$ is the window period, generally greater than 2 s; $\Delta t$ is the sampling period, which depends on the smartphone's brand and model; and $[\cdot]$ denotes rounding to an integer.
(2)
To ensure the integrity of pedestrian motion cycle, the forward and backward sampling points m should be selected as:
$$\left[\frac{0.5 \times T_n}{\Delta t}\right] + 1 \le m < n$$
(3)
To ensure the analysis is not disturbed by abnormal data, a sliding window is used to calculate the correlation coefficients $r_i$ ($i = 1, 2, \ldots, k$), where $k = [N/n]$ and $N$ is the sampling length of the data. The decoupling-analysis correlation coefficient is

$$\bar{r} = \frac{1}{k}\sum_{i=1}^{k} r_i$$
(4)
To analyze the decoupling correlation of different scenario categories, we set $r(x_{i,j}, y_{u,v}) = \bar{r}$, where $i$ and $u$ represent the smartphone mode, $j$ and $v$ represent the motion mode, and $x_{i,j}$ and $y_{u,v}$ are two kinds of scenario. To obtain the analysis result, we needed to analyze the following three situations:

$$\begin{cases} i = u, & j \ne v \\ i \ne u, & j = v \\ i \ne u, & j \ne v \end{cases}$$
The test results are shown in Table 1, which gives the correlation calculation results of a total of nine scenarios composed of three motion modes and three smartphone modes. The raw data include GNSS sensor, accelerometer, gyroscope, magnetometer, barometer, and Bluetooth.
According to the test results in Table 1, the decoupling correlation can be summarized as follows:
$$\begin{cases} 0.3 < r(x_{i,j}, y_{u,v}) < 0.4, & \text{when } i = u, \; j \ne v \\ r(x_{i,j}, y_{u,v}) > 0.5, & \text{when } i \ne u, \; j = v \\ r(x_{i,j}, y_{u,v}) < 0.3, & \text{when } i \ne u, \; j \ne v \end{cases}$$
From Formula (7) we see that, under the same smartphone mode, different motion modes show a moderate correlation (between 0.3 and 0.4). Under different smartphone modes, the correlation of the same motion mode is greater than 0.5. The correlation between different smartphone modes with different motion modes is very low, less than 0.3. Thus, smartphone modes have little influence on motion-mode recognition; on the contrary, motion modes have a great influence on smartphone-mode recognition.
According to the above analysis, during scenario recognition we can recognize the motion mode first and, once it is determined, recognize the smartphone mode second.
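For reference, the sliding-window averaging of step (3) can be sketched as follows (a minimal sketch with non-overlapping windows; the function name is ours):

```python
import numpy as np

def windowed_mean_corr(x, y, n):
    """Mean Pearson r over k = floor(N/n) non-overlapping windows of
    length n, i.e. the decoupling-analysis coefficient r-bar of Eq. (5)."""
    k = len(x) // n
    rs = [np.corrcoef(x[i*n:(i+1)*n], y[i*n:(i+1)*n])[0, 1]
          for i in range(k)]
    return float(np.mean(rs))
```

Averaging per-window coefficients rather than computing one global coefficient keeps a single stretch of abnormal data from dominating the result.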

2.2. Feature Extraction

In this paper, we extracted features from different sensor data in both the time domain and the frequency domain. Time-domain features are mathematical-statistical characteristics of the sensor measurements within a certain window length, such as the variance, mean, and amplitude. Frequency-domain features are obtained by computing the Fourier transform and frequency-domain entropy of the sensor measurements within a certain window length; features such as the dominant frequency, energy, and frequency difference are then extracted.
Table 2 lists the time-domain features extracted from the active sensors used in this paper, where $n_1, n_2, n_3, n_4, n_5, n_6$ are data-window lengths; $x_i, y_i, z_i$ are the sampled data; $\bar{x}$ is the mean value; $i, j, k$ index different sampling points; and $s_k$ counts the samples within window length $n_4$ that are greater than the threshold $thre_{zero}$ or smaller than $-thre_{zero}$.
The height gradient value [25,26] is calculated by the raw data of barometer as:
$$dh = h - h_0 = 18400 \cdot \left(1 + \frac{t}{273.15}\right) \cdot \lg\frac{p_0}{p}$$
where $t$ is the temperature in °C, $p_0$ is the reference air pressure, and $p$ is the output of the barometer.
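The barometric height-gradient formula above translates directly into code; a minimal sketch (the function name is ours):

```python
import math

def height_gradient(p, p0, t_celsius):
    """Height change dh (in meters) between pressure readings:
    dh = 18400 * (1 + t/273.15) * log10(p0 / p)."""
    return 18400.0 * (1.0 + t_celsius / 273.15) * math.log10(p0 / p)
```

A pressure below the reference $p_0$ yields a positive $dh$ (ascent), and vice versa.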

2.3. Scenario Recognition Algorithm

2.3.1. Design of Scenario Recognition Algorithm Based on DT-BP

The decision tree (DT) establishes nodes by exploring high-value data features in the overall data and constructs the branches of the tree according to the research content. By repeatedly establishing branch nodes, the classification results and decision set are displayed as a tree structure [27,28]. The decision tree has the advantages of low computational complexity and insensitivity to missing values, and it can handle irrelevant feature data [29]. Decision trees also have shortcomings, such as low detection accuracy and the preprocessing work needed for time-sequential data. In a real environment, due to the complexity of the surroundings, varying interference, differences in device performance, and the error accumulation of the sensors themselves, the error in the identification process is so large that usability suffers. To deal with this problem, we designed a scenario recognition method based on the decision tree and Bayesian state-transition probability (decision tree-Bayesian probability, DT-BP).
Bayes' theory is a common method in model decision-making. The basic idea is to start from a known conditional probability density and a prior probability, convert them into a posterior probability, and finally use the posterior probability for decision classification.
Let $P(A)$ be the prior or marginal probability of A. The conditional probability of A given B, $P(A|B)$, is called the posterior probability of A; $P(B|A)$ is the conditional probability of B given A, called the posterior probability of B; and $P(B)$ is the prior probability of B [30]. Then

$$P(B|A) = \frac{P(A|B)P(B)}{P(A)}$$
For the scenario recognition in this paper, let $T_i$ be the scenario category, with $T_i \in \Gamma(b_1, b_2, \ldots, b_U)$, where $U$ is the number of scenarios. Let $S_f = \{a_1, a_2, \ldots, a_V\}$ be the feature set, where $V$ is the number of features. If the features of $S_f$ all belong to $T_i$, the probability is $P(T_i|S_f)$. When $P(T_k|S_f) = \max_{i=1,\ldots,U} P(T_i|S_f)$ is satisfied, we consider $S_f \in T_k$, which means the recognition is successful. Therefore, we only need to calculate $P(T_i|S_f)$ to recognize the scenario. Formula (9) becomes:

$$P(T_i|S_f) = \frac{P(T_i)P(S_f|T_i)}{P(S_f)}$$

To obtain the conditional probability of the scenario $T_i$, we need to calculate $P(S_f)$, $P(T_i)$, and $P(S_f|T_i)$.
The probability-allocation principle in this paper is as follows: the number of features is $V$ and each feature has the same probability, so $P(S_f)$ is a constant. Hence $P(T_i|S_f)$ is largest when $P(T_i)P(S_f|T_i)$ is largest. That is,

$$P(T_i|S_f) \propto P(T_i)P(S_f|T_i)$$

where $P(T_i)$ is the state probability. Its value at the current moment is related to the number of scenarios to be detected and to the probability at the previous moment; it is designed independently according to the scenario categories and the number of scenarios $U$. $P(S_f|T_i)$ is the conditional probability of the feature vector $S_f$, which is obtained from the DT rules. The algorithm designed in this paper computes it as follows:

$$P(S_f|T_i) = \frac{\sum_{j=1}^{K_i} c_j}{K_i}$$

where $K_i$ represents the number of features related to category $T_i$, and $c_j$ is the judgment value of each feature: $c_j = 1$ if the judgment condition is met, and $c_j = 0$ otherwise.
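Formulas (11) and (12) amount to scoring each category by its prior times the fraction of its decision-tree rules that fire; a minimal sketch (function names and the example categories are ours):

```python
def conditional_prob(checks):
    """P(S_f | T_i) from Formula (12): the fraction of the K_i
    decision-tree judgments c_j for category T_i that are satisfied."""
    return sum(checks) / len(checks)

def dtbp_decide(prior, checks_per_category):
    """Pick the category maximizing P(T_i) * P(S_f | T_i) (Formula (11)).
    prior: {category: P(T_i)}; checks_per_category: {category: [c_j bools]}."""
    scores = {c: prior[c] * conditional_prob(checks_per_category[c])
              for c in prior}
    return max(scores, key=scores.get)
```

With equal priors, the decision reduces to picking the category whose rules are best satisfied; unequal priors from the state-transition tables bias the decision toward likely successors of the previous state.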

2.3.2. Recognition of Smartphone Mode Based on DT-BP

(1)
Algorithm design based on decision tree
To recognize different smartphone modes, it is necessary to detect the transformation process between them, that is, to determine whether the smartphone mode is changing or fixed. When the phone is in a fixed position, the specific smartphone mode is then determined. Mobile phone location recognition therefore has two parts: transformation recognition and current-location recognition. In this paper, we took six common smartphone modes [18] as examples to design the decision tree: texting, calling, pants front pocket, clothes pocket, pants back pocket, and hand swing, as shown in Figure 1.
According to the data and feature analysis, the variance of the acceleration's first-order norm was used as the criterion for judging a position change of the mobile phone. The first-level decision criterion of the decision tree is:
$$\begin{cases} \text{if } \operatorname{var}(a_x) > Thre \;\&\; \operatorname{var}(a_y) > Thre \;\&\; \operatorname{var}(a_z) > Thre, & changing = 1 \\ \text{otherwise}, & changing = 0 \end{cases}$$
where $Thre$ is the threshold. When $changing = 1$, the location is changing; otherwise it is not.
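The first-level criterion can be sketched as follows (the function name and the threshold value in the example are illustrative, not the paper's tuned values from Table 8):

```python
import numpy as np

def is_changing(ax, ay, az, thre):
    """First-level rule: the phone position is judged to be changing
    when the variance of every acceleration axis exceeds the threshold."""
    return int(np.var(ax) > thre and np.var(ay) > thre and np.var(az) > thre)
```

Requiring all three axes to exceed the threshold filters out single-axis disturbances such as a brief tilt.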
When the position is fixed, it is necessary to determine whether there is periodic oscillation, i.e., whether there are periodic motions other than walking and swinging with the pedestrian. We use the amplitude of the second dominant frequency of the acceleration magnitude to determine this. The second-level decision criterion of the decision tree is:
$$\begin{cases} \text{if } A_{mf2} > Thre_{mf2}, & per = 1 \\ \text{otherwise}, & per = 0 \end{cases}$$
where $Thre_{mf2}$ is the threshold. When $per = 1$, the position of the mobile phone has periodic movement; otherwise it does not. If the smartphone location has periodic movement, the judgment criterion is as follows:
$$\begin{cases} \text{if } F_{mf} > Thre_{mf}, & swpp = 1 \\ \text{otherwise}, & swpp = 0 \end{cases}$$
where $Thre_{mf}$ is the threshold. When $swpp = 1$, the position is a pants pocket (pp); otherwise it is swinging. As the features of the front pants pocket (fpp) and back pants pocket (bpp) are similar, a further branch distinguishes them, with the decision criterion:
$$\begin{cases} \text{if } \frac{1}{M}\sum_{i=1}^{M}(a_x - a_z) > Thre_{qh}, & fpp = 1 \\ \text{otherwise}, & bpp = 1 \end{cases}$$
where $M$ is the window length and $Thre_{qh}$ is the threshold.
To recognize a fixed smartphone mode without periodic motion, we used the first-order norm of the acceleration, its peaks, and its troughs as features. The decision rule for the fixed smartphone modes is:
$$\begin{cases} text: & |a_x| < Thre_1, \; |a_y| < Thre_1, \; a_{z\min} > Thre_2 \\ call: & a_{x\max} < Thre_1, \; a_{y\min} > Thre_2, \; |a_z| < Thre_1 \\ pocket: & a_{x\min} > Thre_2, \; a_{y\max} < Thre_1, \; |a_z| < Thre_1 \end{cases}$$
where $Thre_1$ and $Thre_2$ are thresholds.
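The fixed-mode rules exploit which axis carries gravity; a minimal sketch of the criterion above (function name and default threshold values are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

def fixed_mode(ax, ay, az, thre1=3.0, thre2=7.0):
    """Fixed smartphone-mode rules: each posture leaves gravity (~9.8 m/s^2)
    mostly on one axis while the other two stay near zero."""
    if (np.max(np.abs(ax)) < thre1 and np.max(np.abs(ay)) < thre1
            and np.min(az) > thre2):
        return "texting"      # screen up: gravity on z
    if (np.max(ax) < thre1 and np.min(ay) > thre2
            and np.max(np.abs(az)) < thre1):
        return "calling"      # phone upright at the ear: gravity on y
    if (np.min(ax) > thre2 and np.max(ay) < thre1
            and np.max(np.abs(az)) < thre1):
        return "pocket"       # phone on its side: gravity on x
    return "unknown"
```

Using window minima and maxima rather than single samples makes the rules robust to momentary jitter.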
(2)
Probabilistic design based on DT-BP method
According to the design of the DT-BP method, we needed to design $P(T_i)$ and $P(S_f|T_i)$ following the probability-allocation principle. The number of scenarios $U$ is 7, comprising 6 smartphone positions and the changing process. $P(T_i)$ is designed as shown in Table 3. According to the decision-tree design, the number of features $V$ is 10; the design of $K_i$ in Formula (12) is shown in Table 4.

2.3.3. Recognition of Motion Mode Based on DT-BP

(1)
Design of motion mode recognition algorithm based on decision tree
In this paper, the recognition algorithm is designed by taking the motion modes of static, walking, turning, going upstairs and downstairs, escalator, and elevator as examples. According to the analysis of the extracted features, dynamic and static states are distinguished by the acceleration variance. We used the positive and negative zero-crossing rates of the acceleration and of the air-pressure gradient as features for recognizing static, elevator, and escalator states. Dynamic motion includes walking on the ground, going upstairs, going downstairs, and turning. Walking, turning, and going up and down stairs are coupled movements, that is, turning and going up and down stairs also contain walking movements. Therefore, the main work was to distinguish turning and going up and down stairs from walking. The angular-velocity magnitude is used to recognize turning, and the auto-correlation coefficient, Fourier dominant frequency, and elevation gradient are used to distinguish going up and down stairs from walking, as shown in Figure 2.
The process of motion mode recognition based on the decision tree method is as follows:
(a)
If $\operatorname{var}(a)$ is greater than the threshold $Thre_a$, the state is considered dynamic; otherwise, it is static.
(b)
When the state is static, it is necessary to distinguish between standing still, the elevator, and the escalator. We use the zero-crossing rate of the acceleration magnitude as the judgment, and the criterion is:
$$\begin{cases} flag_{el} = 1, & \text{if } p^+(a_k) > Thre^{el}_+ \;\&\; p^-(a_k) < Thre^{el}_- \\ flag_{el} = -1, & \text{if } p^-(a_k) > Thre^{el}_+ \;\&\; p^+(a_k) < Thre^{el}_- \\ flag_{el} = 0, & \text{otherwise} \end{cases}$$
where $a_k = svm_a - g$, $svm_a$ is the acceleration magnitude, and $g$ is the acceleration of gravity. $p^+(a_k)$ and $p^-(a_k)$ are the positive and negative zero-crossing rates, and $Thre^{el}_+$ and $Thre^{el}_-$ are the zero-crossing judgment thresholds. If $flag_{el} \ne 0$, the pedestrian is going up or down in an elevator.
(c)
When the motion is in an elevator, we judge whether it is going up or down. An upward ride starts with upward acceleration and ends with deceleration, and vice versa for a downward ride. Therefore, if $flag_{el} = 1$ and the previous state was not the elevator, the elevator is going up; if $flag_{el} = 1$ and the previous state was the elevator, it is going down. If $flag_{el} = -1$ and the previous state was not the elevator, it is going down; if $flag_{el} = -1$ and the previous state was the elevator, it is going up.
(d)
If $flag_{el} = 0$, we further check whether the pedestrian is on an escalator. The recognition criteria are as follows:
$$\begin{cases} flag_{es} = 1, & \text{if } p^+(d_{bro}) > Thre^{es}_+ \;\&\; p^-(d_{bro}) < Thre^{es}_- \\ flag_{es} = -1, & \text{if } p^-(d_{bro}) > Thre^{es}_+ \;\&\; p^+(d_{bro}) < Thre^{es}_- \\ flag_{es} = 0, & \text{otherwise} \end{cases}$$
where $d_{bro}$ is the pressure gradient, $p^+(d_{bro})$ and $p^-(d_{bro})$ are its positive and negative zero-crossing rates, and $Thre^{es}_+$ and $Thre^{es}_-$ are the thresholds. When $flag_{es} = 0$, the state is static. If $flag_{es} = 1$, the escalator is going up; if $flag_{es} = -1$, the escalator is going down.
(e)
When pedestrians are in a dynamic state, we mainly distinguish turning, stairs, and walking. The angular-velocity magnitude is used to recognize turning: when $svm_w > Thre_w$, the pedestrian is turning.
(f)
We use the auto-correlation coefficient and the Fourier transform to distinguish stairs from walking. The auto-correlation coefficients at offsets $k = 2$ and $k = 4$ serve as judgment values, and the dominant frequency of the Fourier transform serves as the judgment condition. The criterion is as follows:
$$\begin{cases} flag_{st} = 1, & \text{if } |R_a(2)| > |R_a(4)| \;\&\; F_a > Thre_{fa} \\ flag_{st} = 0, & \text{otherwise} \end{cases}$$
where $R_a(2)$ is the auto-correlation coefficient of the acceleration magnitude at offset $k = 2$, $F_a$ represents the dominant frequency of the Fourier transform, and $Thre_{fa}$ is the dominant-frequency judgment threshold. If $flag_{st} = 1$, the pedestrian is going up or down stairs; otherwise, they are walking on level ground.
(g)
When pedestrians are going up or down stairs, we use the height-gradient value to judge the direction. The criterion is as follows:
$$\begin{cases} flag_h = 1, & \text{if } dh > Thre_h \\ flag_h = -1, & \text{if } dh < Thre_{-h} \end{cases}$$

where $Thre_h$ and $Thre_{-h}$ are the judgment thresholds of the height gradient. If $flag_h = 1$, the pedestrian is going up the stairs; if $flag_h = -1$, going down the stairs.
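Steps (f) and (g) above can be sketched as follows (a minimal sketch; function names, the sampling rate, and threshold values are our assumptions, and the height-gradient thresholds are taken as symmetric):

```python
import numpy as np

def autocorr(x, k):
    """Normalized auto-correlation of x at offset k."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

def dominant_freq(x, fs):
    """Main frequency of the Fourier spectrum (DC bin excluded)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(freqs[1 + np.argmax(spec[1:])])

def is_stairs(acc_mag, fs, thre_fa):
    """Step (f): stairs when |R_a(2)| > |R_a(4)| and F_a > Thre_fa."""
    return int(abs(autocorr(acc_mag, 2)) > abs(autocorr(acc_mag, 4))
               and dominant_freq(acc_mag, fs) > thre_fa)

def stairs_direction(dh, thre_h):
    """Step (g): +1 going up, -1 going down, judged by the height gradient."""
    return 1 if dh > thre_h else (-1 if dh < -thre_h else 0)
```

A strongly periodic acceleration magnitude with a high step frequency triggers the stairs branch, after which the barometric height gradient resolves the direction.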
(2)
Probabilistic design based on DT-BP method
For the nine motion modes in the above example, the probability $P(T_i)$ of each state in the DT-BP method is determined from the previous state. The transition probabilities between the motion modes are shown in Table 5. According to Figure 2 and the decision-tree rules, the number of features $V$ is 12; the design of $K_i$ in Formula (12) is shown in Table 6.
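Returning to step (b), the zero-crossing test of the elevator criterion can be sketched as below. The paper does not spell out how the positive and negative zero-crossing rates are computed, so here (as an assumption) they are taken as the fractions of samples of $a_k = svm_a - g$ above and below zero:

```python
import numpy as np

def elevator_flag(svm_a, thre_pos, thre_neg, g=9.8):
    """Step (b) sketch: +1 when upward acceleration dominates the window,
    -1 when downward dominates, 0 otherwise (standing still)."""
    a_k = np.asarray(svm_a, dtype=float) - g
    p_pos = np.mean(a_k > 0)            # assumed "positive zero-crossing rate"
    p_neg = np.mean(a_k < 0)            # assumed "negative zero-crossing rate"
    if p_pos > thre_pos and p_neg < thre_neg:
        return 1
    if p_neg > thre_pos and p_pos < thre_neg:
        return -1
    return 0
```

When standing still, $a_k$ fluctuates symmetrically around zero, so neither rate dominates and the flag stays 0.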

3. Experimental

3.1. Experimental Setup

To understand the effectiveness and limitations of our proposed scenario recognition algorithm, we implemented it on Android to collect data. During the experiment, we collected data with an Android smartphone (Huawei Mate 8, whose parameters are shown in Table 7), which is equipped with a three-axis accelerometer and a three-axis gyroscope. We evaluated the proposed method in six common smartphone modes (texting, calling, pants front pocket, clothes pocket, pants back pocket, and hand swing) and nine natural motion modes (static, walking, turning, upstairs, downstairs, up escalator, down escalator, up elevator, and down elevator). The thresholds for smartphone mode and motion mode used in the experiments are shown in Table 8. We compared the proposed algorithm with state-of-the-art algorithms.

3.2. Experimental Results of Smartphone Mode Recognition

To test the smartphone mode recognition algorithm, we collected texting, calling, front-pants-pocket, back-pants-pocket, clothes-pocket, and hand-held data, respectively. There were a total of 9901 epochs of data, including 568 epochs of changing, 2691 epochs of texting, 1236 epochs of calling, 2095 epochs in the front pants pocket, 1512 epochs in the back pants pocket, 1561 epochs in the clothes pocket, and 238 epochs of swinging.
The accuracy of smartphone mode recognition is shown in Table 9. The recognition accuracy was calculated with the following formula:
$$acc = \frac{Num_{reco}}{Num_{true}} \times 100\%$$

where $Num_{reco}$ is the number of epochs in which the recognition result matches the state to be recognized, and $Num_{true}$ is the actual number of epochs in that state. The results were similar to Table 9 across many tests. The average recognition accuracy was 99.06%, the lowest accuracy was 96.48%, and the recognition accuracy of all positions was greater than 96%. The main reason for the lowest value is that a change of smartphone mode is followed by one of the various fixed smartphone modes, so errors in the various fixed modes are reflected in the changing state.
Table 10 compares the recognition accuracy of the proposed algorithm with other algorithms [20,31]. From the table, random forest has the highest accuracy for texting and calling, while DT-BP has the highest success rate for hand-held. The average accuracy of our method is slightly higher, by 0.5%. Compared with the machine-learning-based methods, the proposed DT-BP has slightly higher recognition accuracy and takes 0.51 s in total, while random forest takes 8.34 s. The algorithm thus greatly reduces the operation time while maintaining the recognition success rate.

3.3. Experimental Results of Motion Mode Recognition

We collected static, walking, up- and down-stairs, elevator up and down, and escalator up and down data, respectively, to test DT-BP. There were a total of 6789 epochs of data, including 3239 epochs of static, 2005 epochs of walking, 173 epochs going up stairs, 165 epochs going down stairs, 317 epochs going up in the elevator, 319 epochs going down in the elevator, 145 epochs going up the escalator, and 426 epochs going down the escalator. The recognition accuracy was calculated with Equation (22).
The accuracy of motion recognition is shown in Table 11, and results were similar across many tests. The average recognition accuracy of DT-BP was 97.3%, and the lowest was 88.73%, for the down escalator. The main reason is that the escalator moves at a uniform speed between getting on and preparing to get off; during this period the constant velocity is submerged in the acceleration noise and is difficult to recognize. The accuracy for the elevator going up and down and for turning is the highest, mainly because the acceleration changes in these states are distinctive. Since the intermediate transition state of each motion process is static, errors in the other eight states tend to be detected as static; therefore, more detection errors accumulate in the static state.
To further analyze the proposed DT-BP, we compared it with other algorithms [9,10,32,33,34], as shown in Figure 3. The recognition accuracy of a single machine-learning model is relatively low; SVM and KNN, for example, are both only slightly above 80%. Using multiple models significantly improves the recognition success rate, but with increased algorithm complexity and calculation cost. The accuracy of DT-BP is the same as that of multi-model machine-learning methods, while its computational complexity and cost are significantly lower.

4. Conclusions

At present, scenario recognition mainly relies on machine-learning methods. The recognition accuracy of a single model is not high, and although multi-model fusion improves accuracy, its computational cost is high and it depends heavily on feature selection. We focused on two types of context, the motion mode and the smartphone carrying mode, and designed the DT-BP recognition algorithm by introducing an experience-based Bayesian state transition model into the decision tree. DT-BP is simpler and easier to implement, with less computation and lower computational complexity, while achieving the same recognition accuracy as multi-model machine-learning methods.

Author Contributions

Conceptualization, X.L. and H.Y.; methodology, X.L.; software, Y.G.; validation, X.L., Y.G. and G.Y.; formal analysis, X.L.; investigation, H.Y.; resources, H.Y.; data curation, Y.G.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and H.Y.; visualization, X.L.; supervision, Y.G. and J.X.; project administration, G.Y. and J.X.; funding acquisition, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Space Information Technology Innovation Workstation under Grant CX2022-04-03-02.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Simonyan, K.; Zisserman, A. Two-Stream Convolutional Networks for Action Recognition in Videos. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 568–576. [Google Scholar]
  2. Luo, P. Research on Pedestrian Detection and Action Recognition Based on Deep Learning. Master’s Thesis, Control Science and Engineering, University of Science and Technology of China, Hefei, China, 2020. [Google Scholar]
  3. Zhou, X.; Ran, F.; Huang, Y.; Kong, X. Improved PDR indoor location based on KNN motion pattern recognition. Geospat. Inf. 2019, 17, 25–28. [Google Scholar]
  4. Guo, J.; Wang, W.; Zhang, S. Pedestrian Motion Modes Recognition of Smart Phone Based on Support Vector Machine. Bull. Surv. Mapp. 2018, 2, 1–5. [Google Scholar]
  5. Sun, B.; Lü, W.; Li, W.Y. Activity Recognition Based on Smartphone Sensors and SC-HMM Algorithm. J. Jilin Univ. (Sci. Ed.) 2013, 51, 1128–1132. [Google Scholar]
  6. Lee, J.; Huang, S. An experimental heuristic approach to multi-pose pedestrian dead reckoning without using magnetometers for indoor localization. IEEE Sens. J. 2019, 19, 9532–9542. [Google Scholar] [CrossRef]
  7. Tian, Q.; Salcic, Z.; Kevin, I.; Wang, K.; Pan, Y. A multi-mode dead reckoning system for pedestrian tracking using smartphones. IEEE Sens. J. 2016, 16, 2079–2093. [Google Scholar] [CrossRef]
  8. Liu, B.; Liu, H.; Jin, X.; Guo, D. Human activity recognition based on sensors of smart phone. Comput. Eng. Appl. 2016, 52, 188–193. [Google Scholar]
  9. Wang, B.; Liu, X.; Yu, B.; Jia, R.; Gan, X. Pedestrian Dead Reckoning Based on Motion Mode Recognition Using a Smartphone. Sensors 2018, 18, 1811. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Chen, X.; Zhang, T.; Zhu, X.; Mo, L. Human behavior recognition method based on fusion model. Transducer Microsyst. Technol. 2021, 40, 142–145, 149. [Google Scholar]
  11. Chen, F.; Chen, H.B.; Wang, W.G. Human Motion Recognition Method Based on CNN-LSTMs Hybrid Model. Inf. Technol. Informatiz. 2019, 4, 32–34. [Google Scholar]
  12. Li, F.; Pan, J.K. Human Motion Recognition Based on Triaxial Accelerometer. J. Comput. Res. Dev. 2016, 53, 621–631. [Google Scholar]
  13. Xiong, H.; Guo, S.; Zheng, X.; Zhou, Y. Indoor Pedestrian Mobile Activity Recognition and Trajectory Tracking. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 1696–1703. [Google Scholar]
  14. Su, B.; Zheng, D.; Tang, Q.; Sheng, M. Human daily short-time activity recognition method driven by single sensor data. Infrared Laser Eng. 2019, 48, 282–290. [Google Scholar]
  15. Qian, L.P.; Li, P.H.; Huang, L. An Adaptive Algorithm for Activity Recognition with Evolving Data Streams. Chin. J. Sens. Actuators 2017, 37, 909–915. [Google Scholar]
  16. Wang, G.; Li, Q.; Wang, L.; Wang, W.; Wu, M.; Liu, T. Impact of Sliding Window Length in Indoor Human Motion Modes and Pose Pattern Recognition Based on Smartphone Sensors. Sensors 2018, 18, 1965. [Google Scholar] [CrossRef] [Green Version]
  17. Banos, O.; Galvez, J.-M.; Damas, M.; Pomares, H.; Rojas, I. Window size impact in human activity recognition. Sensors 2014, 14, 6474–6499. [Google Scholar] [CrossRef] [Green Version]
  18. Ichikawa, F.; Chipchase, J.; Grignani, R. Where’s the phone? A study of mobile phone location in public spaces. In Proceedings of the International Conference on Mobile Technology, Nice, France, 2–4 September 2009; pp. 1–8. [Google Scholar]
  19. Yang, R.; Wang, B.W. PACP: A Position-Independent Activity Recognition Method Using Smartphone Sensors. Information 2016, 7, 72. [Google Scholar] [CrossRef] [Green Version]
  20. Deng, Z.-A.; Wang, G.; Hu, Y.; Cui, Y. Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones. Sensors 2016, 16, 677. [Google Scholar] [CrossRef] [Green Version]
  21. Noy, L.; Bernard, N.; Klein, I. Smartphone Mode Recognition During Stairs Motion. Proceedings 2019, 42, 65. [Google Scholar]
  22. Wang, Q.; Ye, L.; Luo, H.; Men, A.; Zhao, F.; Ou, C. Pedestrian Walking Distance Estimation Based on Smartphone Mode Recognition. Remote Sens. 2019, 11, 1140. [Google Scholar] [CrossRef] [Green Version]
  23. Jiang, G.X.; Wang, W.J. Correlation Analysis in Curve Registration of Time Series. J. Softw. 2014, 25, 2002–2017. [Google Scholar]
  24. Elhoushi, M.; Georgy, J.; Noureldin, A.; Korenberg, M.J. Motion Mode Recognition for Indoor Pedestrian Navigation Using Portable Devices. IEEE Trans. Instrum. Meas. 2016, 65, 208–221. [Google Scholar] [CrossRef]
  25. Liu, K.; Wang, Y.; Wang, J. Differential Barometric Altimetry Assists Floor Identification in WLAN Location Fingerprinting Study. In Principle and Application Progress in Location-Based Services; Springer International Publishing: Cham, Switzerland, 2014; pp. 21–29. [Google Scholar]
  26. Zheng, L.; Zhou, W.; Tang, W.; Zheng, X.; Peng, A.; Zheng, H. A 3D indoor positioning system based on low-cost MEMS sensors. Simul. Model. Pract. Theory 2016, 19, 45–56. [Google Scholar] [CrossRef]
  27. Yu, D.B. Pedestrian Behavior Pattern Recognition and Analysis of Indoor Location Data. Ph.D. Dissertation, Photogrammetry and Remote Sensing, Wuhan University, Wuhan, China, 2019. [Google Scholar]
  28. Schafer, P. The BOSS is Concerned with Time Series Classification in the Presence of Noise. Data Min. Knowl. Discov. 2014, 29, 1505–1530. [Google Scholar] [CrossRef]
  29. Harrington, P. Machine Learning in Action; Manning Publications: New York, NY, USA, 2012; p. 33. [Google Scholar]
  30. Jing, X.N.; Li, X.J. Application of Naïve Bayesian Method in Girl’s Figure Discrimination. J. Text. Res. 2017, 38, 124–128. [Google Scholar]
  31. Chen, G.L.; Cao, X.X. Method of Pedestrian’s Behavior Recognition Based on Built-in Sensor of Smartphone in Compartment Fires. J. Tongji Univ. (Nat. Sci.) 2019, 47, 414–420. [Google Scholar]
  32. Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. Training computationally efficient smartphone-based human activity recognition models. In Proceedings of the 2013 Artificial Neural Networks and Machine Learning, Sofia, Bulgaria, 10–13 September 2013; pp. 426–433. [Google Scholar]
  33. Reyes-Ortiz, J.L.; Ghio, A.; Parra, X.; Anguita, D.; Cabestany, J.; Catala, A. Human activity and motion disorder recognition: Towards smarter interactive cognitive environments. In Proceedings of the 21th International European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Espoo, Finland, 24–26 April 2013; pp. 403–412. [Google Scholar]
  34. Wu, Y.; Tian, X.S.; Zhang, Z.X. An Improved GRU-InFCN for Human Behavior Recognition Model. Comput. Appl. Softw. 2020, 37, 205–210. [Google Scholar]
Figure 1. The example of decision tree for smartphone modes recognition.
Figure 2. The example of decision tree for motion modes recognition.
Figure 3. Comparison of motion mode recognition accuracy for different methods.
Table 1. Pearson correlation coefficients of different motion modes and different smartphone modes.

|    |               | Wa      |         |               | US      |         |               | UE      |         |               |
|----|---------------|---------|---------|---------------|---------|---------|---------------|---------|---------|---------------|
|    |               | Calling | Texting | Cloths Pocket | Calling | Texting | Cloths Pocket | Calling | Texting | Cloths Pocket |
| Wa | Calling       | 0.77    | -       | -             | -       | -       | -             | -       | -       | -             |
|    | Texting       | 0.56    | 0.64    | -             | -       | -       | -             | -       | -       | -             |
|    | Cloths pocket | 0.55    | 0.53    | 0.77          | -       | -       | -             | -       | -       | -             |
| US | Calling       | 0.35    | 0.18    | 0.21          | 0.68    | -       | -             | -       | -       | -             |
|    | Texting       | 0.27    | 0.34    | 0.25          | 0.55    | 0.80    | -             | -       | -       | -             |
|    | Cloths pocket | 0.27    | 0.24    | 0.33          | 0.51    | 0.52    | 0.67          | -       | -       | -             |
| UE | Calling       | 0.32    | 0.21    | 0.23          | 0.26    | 0.26    | 0.21          | 0.71    | -       | -             |
|    | Texting       | 0.21    | 0.33    | 0.28          | 0.18    | 0.33    | 0.25          | 0.52    | 0.86    | -             |
|    | Cloths pocket | 0.21    | 0.25    | 0.35          | 0.22    | 0.15    | 0.31          | 0.65    | 0.55    | 0.72          |

Wa is walking, US is up stairs, UE is up elevator.
Table 2. Feature extraction in time domain and frequency domain [24].

| Feature | Definition |
|---------|------------|
| First-order norm | $\|x\|_1 = \sum_i \lvert x_i \rvert$ |
| Peak | $a_{\max} = \max(x_i),\ i = 1, \dots, n$ |
| Valley | $a_{\min} = \min(x_i),\ i = 1, \dots, n$ |
| Peak-to-valley difference | $a_{mm} = a_{\max} - a_{\min}$ |
| Mean | $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ |
| Variance | $\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$ |
| Amplitude | $\mathrm{svm} = \sqrt{x_i^2 + y_i^2 + z_i^2}$ |
| Zero-crossing rate | $p_0 = s_k / n$, where $s_k$ is incremented each time $x_i$ crosses the threshold $thre_{zero}$ |
| Gradient | $dx = x_{k+i} - x_k$ |
| Autocorrelation coefficient | $r_k = \dfrac{\sum_{i=1}^{n-k} (x_i - \bar{x})(x_{i+k} - \bar{x})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$ |
| Fourier transform | $X(k) = \sum_{i=0}^{n-1} x(i) W_n^{ki},\ k = 0, 1, \dots, n-1,\ W_n = e^{-j 2\pi / n}$ |
Table 3. Smartphone mode transition probability allocation.

|          | Changing | Text | Call | Fpp | Pocket | Bpp | Swing |
|----------|----------|------|------|-----|--------|-----|-------|
| Changing | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 | 1/7 |
| Text     | 1/2 | 1/2 | -   | -   | -   | -   | -   |
| Call     | 1/2 | -   | 1/2 | -   | -   | -   | -   |
| Fpp      | 1/2 | -   | -   | 1/2 | -   | -   | -   |
| Pocket   | 1/2 | -   | -   | -   | 1/2 | -   | -   |
| Bpp      | 1/2 | -   | -   | -   | -   | 1/2 | -   |
| Swing    | 1/2 | -   | -   | -   | -   | -   | 1/2 |
Table 4. Smartphone mode related feature quantity allocation.

|         | Changing | Text | Call | Fpp | Pocket | Bpp | Swing |
|---------|----------|------|------|-----|--------|-----|-------|
| $K_i$   | 3        | 7    | 7    | 6   | 7      | 6   | 5     |
Table 5. Motion mode transition probability allocation.

|         | Static | Wa  | US  | DS  | UE  | DE  | UC  | DC  | Turning |
|---------|--------|-----|-----|-----|-----|-----|-----|-----|---------|
| Static  | 1/8 | 1/8 | 1/8 | 1/8 | 1/8 | 1/8 | 1/8 | 1/8 | -   |
| Wa      | 1/9 | 1/9 | 1/9 | 1/9 | 1/9 | 1/9 | 1/9 | 1/9 | 1/9 |
| US      | 1/4 | 1/4 | 1/4 | -   | -   | -   | -   | -   | 1/4 |
| DS      | 1/4 | 1/4 | -   | 1/4 | -   | -   | -   | -   | 1/4 |
| UE      | 1/3 | 1/3 | -   | -   | 1/3 | -   | -   | -   | -   |
| DE      | 1/3 | 1/3 | -   | -   | -   | 1/3 | -   | -   | -   |
| UC      | 1/3 | 1/3 | -   | -   | -   | -   | 1/3 | -   | -   |
| DC      | 1/3 | 1/3 | -   | -   | -   | -   | -   | 1/3 | -   |
| Turning | 1/5 | 1/5 | 1/5 | 1/5 | -   | -   | -   | -   | 1/5 |

Wa is walking, US is up stairs, DS is down stairs, UE is up elevator, DE is down elevator, UC is up escalator, DC is down escalator.
Table 6. Motion mode related feature quantity allocation.

|         | Static | W | US | DS | UE | DE | UC | DC | Turning |
|---------|--------|---|----|----|----|----|----|----|---------|
| $K_i$   | 5      | 5 | 6  | 6  | 4  | 4  | 6  | 6  | 2       |
Table 7. Huawei Mate 8 smartphone sensors and related parameters.

| Sensor | Type | Parameters |
|--------|------|------------|
| GNSS sensor | - | Supports GPS, A-GPS, GLONASS and BDS |
| Accelerometer | LSM330 | Sensitivity: 0.0095768068 m/s²; measurement range: 78.4532012939 m/s² |
| Gyroscope | LSM330 | Sensitivity: 0.0012217305 rad/s; measurement range: 34.9065856934 rad/s |
| Magnetometer | AK09911 | Sensitivity: 0.0625 μT; measurement range: 2000 μT |
| Barometer | Rohm BM1383 air-pressure sensor | Sensitivity: 0.0099999998 hPa; measurement range: 1100 hPa |
| Bluetooth | - | 4.2 + BLE |
Table 8. The thresholds of smartphone mode and motion mode.

| Smartphone Mode |       | Motion Mode    |       |
|-----------------|-------|----------------|-------|
| Threshold       | Value | Threshold      | Value |
| $Thre$          | 2     | $Thre_{a}$     | 1.5   |
| $Thre_{mf2}$    | 1.5   | $Thre_{+el}$   | 0.9   |
| $Thre_{mf}$     | 10    | $Thre_{-el}$   | 0.05  |
| $Thre_{qh}$     | 1     | $Thre_{+es}$   | 0.9   |
| $Thre1$         | 6     | $Thre_{-es}$   | 0.05  |
| $Thre2$         | 6     | $Thre_{fa}$    | 14    |
| $Thre3$         | 8     | $Thre_{+h}$    | 0.1   |
|                 |       | $Thre_{-h}$    | −0.1  |
Table 9. Smartphone mode recognition results.

|          | Changing | Text   | Call   | Fpp    | Pocket | Bpp    | Swing  |
|----------|----------|--------|--------|--------|--------|--------|--------|
| Changing | 96.48%   | 0.18%  | 0      | 0      | 2.64%  | 0      | 0.70%  |
| Text     | 2.04%    | 97.96% | 0      | 0      | 0      | 0      | 0      |
| Call     | 1.13%    | 0      | 98.87% | 0      | 0      | 0      | 0      |
| Fpp      | 0.29%    | 0      | 0      | 99.71% | 0      | 0      | 0      |
| Pocket   | 0        | 0      | 0      | 0      | 100%   | 0      | 0      |
| Bpp      | 0.40%    | 0      | 0      | 0      | 0      | 99.60% | 0      |
| Swing    | 1.68%    | 0      | 0      | 0      | 0      | 0      | 98.32% |
Table 10. Comparison of smartphone mode recognition accuracy for different methods.

|                  | Text   | Call  | Pocket | Swing  | Average |
|------------------|--------|-------|--------|--------|---------|
| DT-BP            | 97.2%  | 98.0% | 99.77% | 98.32% | 98.32%  |
| SVM              | 97.96% | 95.8% | 95.2%  | 98.1%  | 96.765% |
| Bayesian Network | 94.5%  | 97.2% | 92.2%  | 93.6%  | 94.375% |
| Random Forest    | 99.0%  | 98.2% | 97.1%  | 96.9%  | 97.8%   |
Table 11. Motion mode recognition results.

|         | Static | W      | US    | DS    | UE     | DE     | UC     | DC     | Turning |
|---------|--------|--------|-------|-------|--------|--------|--------|--------|---------|
| Static  | 96.10% | 2.07%  | 0.27% | 0.24% | 0      | 0.03%  | 0.03%  | 1.25%  | 0       |
| W       | 0.17%  | 99.83% | 0     | 0     | 0      | 0      | 0      | 0      | 0       |
| US      | 0      | 0      | 100%  | 0     | 0      | 0      | 0      | 0      | 0       |
| DS      | 0      | 0      | 0     | 100%  | 0      | 0      | 0      | 0      | 0       |
| UE      | 0.95%  | 0      | 0     | 0     | 99.05% | 0      | 0      | 0      | 0       |
| DE      | 1.88%  | 0      | 0     | 0     | 0      | 98.12% | 0      | 0      | 0       |
| UC      | 6.46%  | 0      | 0     | 0     | 0      | 0      | 93.54% | 0      | 0       |
| DC      | 11.27% | 0      | 0     | 0     | 0      | 0      | 0      | 88.73% | 0       |
| Turning | 0      | 0      | 0     | 0     | 0      | 0      | 0      | 0      | 100%    |
Li, X.; Yuan, H.; Yang, G.; Gong, Y.; Xu, J. A Novel Algorithm for Scenario Recognition Based on MEMS Sensors of Smartphone. Micromachines 2022, 13, 1865. https://doi.org/10.3390/mi13111865