Article

Reciprocal Estimation of Pedestrian Location and Motion State toward a Smartphone Geo-Context Computing Solution

1 Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, National Land Survey of Finland, Masala 02431, Finland
2 Centre of Excellence in Laser Scanning Research, Academy of Finland, Finland
3 Conrad Blucher Institute of Surveying & Science, Texas A & M University Corpus Christi, Corpus Christi, TX 78412-5868, USA
* Author to whom correspondence should be addressed.
Micromachines 2015, 6(6), 699-717; https://doi.org/10.3390/mi6060699
Submission received: 26 February 2015 / Accepted: 10 June 2015 / Published: 15 June 2015
(This article belongs to the Special Issue Next Generation MEMS-Based Navigation—Systems and Applications)

Abstract

The rapid advance in mobile communications has made information and services ubiquitously accessible. Location and context information have become essential for the effectiveness of services in the era of mobility. This paper proposes the concept of geo-context, defined as an integral synthesis of geographical location, human motion state and mobility context. A geo-context computing solution consists of a positioning engine, a motion state recognition engine, and a context inference component. In the geo-context concept, the human motion states and mobility context are associated with the geographical location where they occur. A hybrid geo-context computing solution is implemented that runs on a smartphone, and it utilizes measurements of multiple sensors and signals of opportunity that are available within a smartphone. Pedestrian location and motion states are estimated jointly under the framework of hidden Markov models, and they are used reciprocally to improve the estimation performance of one another. It is demonstrated that pedestrian location estimation has better accuracy when the motion state is known, and in turn, the performance of motion state recognition can be improved with increasing reliability when the location is given. The geo-context inference is implemented with a simple expert system approach; more sophisticated approaches will be developed in future work.


1. Introduction

Rapid advances in mobile communications technology have made information and services ubiquitously accessible. The challenges of future mobility ecosystems in many aspects such as transportation, environmental sustainability and human well-being call for novel approaches to universal positioning and contextual thinking [1,2].
Smartphones have an ever-increasing number of advanced sensors and powerful computational resources, which make the smartphone the first truly ubiquitous mobile computing device that has the capability to host various intelligent applications. Maps and navigation have been two of the most widely used applications in modern smartphones, and they are able to answer questions such as the following:
  • Where are you (location)?
  • How can you travel from point A to B (route navigation)?
Computing intelligence is becoming ubiquitous and pervasive and will drive another epochal shift in the evolution of technology toward the era of cognitive computing. Emerging technologies will continue to extend the boundaries of human limitations to enhance and augment our sensing capability [3,4,5,6,7]. It can be expected that more emerging signals and sensors will be integrated with mobile phones in the future, and they can be used for mobility sensing. For example, computer sensors combined with analytics engines will vastly extend our ability to gather and process sense-based information. In particular, spectral sensors and laser scanners have great potential to become standard features of future mobile platforms because of their ever-smaller sizes, such that the smartphone will become an affordable personal mobile mapping platform [8,9]. Consequently, cognitive computing systems will be able to sense, learn and even predict human mobility. Many studies have developed cognitive systems of social network structures based on human location and context knowledge. References [10,11] survey existing mobile phone sensing algorithms, applications, and systems. For instance, reference [12] introduces a system for sensing complex social systems using Bluetooth-enabled phones. Reference [13] presents online algorithms to extract social contexts using global positioning system (GPS) traces. Reference [14] develops methods to automatically and unobtrusively learn the social network structures that arise within human groups based on wearable sensors. References [15,16] present smartphone navigation solutions based on personal motion knowledge. Campbell and Choudhury first introduced the concept of a cognitive phone and enumerated its potential applications in [17]; it is argued that the cognitive phone will be the next step in the evolution of the mobile phone beyond the smartphone.
Beyond its current form, a prospective cognitive phone is expected to understand a user’s mobility style, sense his or her health and well-being, and even make necessary interventions for the user’s benefit [2,17]. A cognitive phone should answer more contextual questions such as the following:
  • What are you doing?
  • What is the environment around you?
  • What is your current situation?
  • What can be done for your benefit?
Integrating the capability of contextual thinking with communication functionalities, a cognition-capable phone can, for instance, reject an incoming call when the user is in a meeting and send a notification message back to the caller. A cognition-capable phone can also issue a warning and even request an external intervention when a user is driving drunk or when an elderly user has fallen [18].
In this paper, the geo-context is defined as an integral synthesis of the geographical location, human motion states and mobility context. A geo-context computing solution consists of a positioning engine, a human motion state recognition engine, and a computing component of inferring the mobility context knowledge. In the geo-context concept, human motion states and geospatial context information are associated with the geographical location where they have occurred.
Many past studies have presented methods and results for estimating pedestrian location and motion states, which have usually been treated separately in their respective frameworks [19,20,21,22,23,24,25,26,27,28,29,30,31,32]. This paper contributes in two respects. First, pedestrian location and motion state are estimated jointly, and information regarding both physical variables is utilized reciprocally to improve the estimation performance of one another. In other words, pedestrian location estimation has better accuracy when the motion state is known, and in turn, the performance of motion state recognition can be improved with increasing reliability when the location is given. This paper presents experimental results that can be achieved by using hidden Markov model (HMM) methods to estimate the pedestrian location and motion state simultaneously. HMMs are preferred for modeling pedestrian movement because Markov models do not restrict the physical process to any specific functional form [33]. The mobility situation of pedestrian users may vary frequently and sharply in terms of speed, direction and motion state and cannot be modeled accurately by a closed functional form [34,35,36]. In the proposed HMM approach, location is utilized to determine the current area type and consequently improve the calculation of the state transition probability (STP) in the course of motion state estimation. In turn, the motion state improves the precision of the STP in the positioning course. The more precise the state transition probabilities are, the more reliable the estimation that can be achieved with the HMM method. Section 3 presents the methods of calculating the state transition probabilities used to estimate the location and the motion state, respectively.
This paper is organized as follows: It first reviews existing methods of smartphone mobility sensing, including location estimation and motion state recognition. It then presents the proposed geo-context computing solution, including the applied method and experimental results of office daily mobility. The paper then concludes with a summary and a proposal of further work.

2. Background of Smartphone Mobility Sensing

This section presents a review of positioning and motion state recognition techniques within a smartphone. A smartphone provides multiple types of measurements of built-in sensors and signals of opportunity (SoOP) that can be used for positioning and motion state recognition [10,37,38], as illustrated in Figure 1. Sensors include an accelerometer, gyroscope, compass, camera, barometer, acoustic sensor, proximity sensor, and even an ambient light sensor [4,5,6,7]. Signals of opportunity are defined in this paper as signals that are not originally intended for positioning and navigation purposes, and they include radio frequency (RF) signals, e.g., cellular networks, digital television (DTV), frequency modulation broadcasting (FM), wireless local area networks (WLAN) and Bluetooth [24,36,39], as well as naturally occurring signals such as Earth’s magnetic field, ambient light, and polarized sunlight [6,7].
Figure 1. Location and motion state sensing using measurements of versatile signals and sensors available from a smartphone.
Each of the positioning methods has limitations in terms of positioning accuracy and availability. For example, global navigation satellite system (GNSS) positioning is the most common technology embedded in navigators and smartphones. GNSS positioning derives the location of the user’s receiver from radio frequency signals transmitted by the satellite systems. It provides an accurate outdoor positioning solution, but its degraded accuracy and availability in urban and indoor environments must be complemented by alternative indoor positioning solutions [35]. FM, DTV and cellular signals can also be used for positioning, but they offer only limited accuracy. Dead reckoning (DR) systems estimate relative locations using inertial sensors but suffer from accumulated positioning errors over time. A hybrid positioning solution is necessary to achieve seamless indoor/outdoor positioning, and it commonly integrates different relative and absolute positioning technologies to provide an enhanced solution in terms of accuracy, reliability and availability.
It is preferable to use signals of opportunity for positioning in urban and indoor environments because of their cost efficiency, operational practicability, and ubiquitous availability. The fingerprinting approach of SoOP positioning resolves the most likely position estimate by correlating observed SoOP measurements, e.g., the received signal strength indicator (RSSI) of WiFi, Bluetooth and radio-frequency identification (RFID), with an established fingerprinting database. Classic fingerprinting algorithms include K-nearest neighbors [40,41], maximum likelihood estimation (MLE), probabilistic inference [30], and pattern recognition techniques [24,42]. These algorithms commonly consider moving positions as a series of isolated points and are hence related to the single-point positioning approach [19,20]. These methods are easy to operate, but they may suffer from noisy position jumps due to the high variation of RSSI observables.
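To make the fingerprinting idea above concrete, the following is a minimal sketch of K-nearest-neighbors fingerprinting; the database layout, field names and the default RSSI for missing access points are illustrative assumptions, not details taken from the cited systems.

```python
import numpy as np

def knn_fingerprint_position(observed_rssi, fingerprint_db, k=3):
    """Estimate a position by K-nearest neighbors in RSSI signal space.

    observed_rssi:  dict mapping access-point ID -> RSSI (dBm) observed now.
    fingerprint_db: list of (position, rssi_dict) pairs collected offline,
                    where position is an (x, y) tuple in metres.
    Access points missing from a reference record are penalized with a weak
    default RSSI (-100 dBm), an illustrative assumption.
    """
    default_rssi = -100.0
    ap_ids = sorted(observed_rssi)
    obs = np.array([observed_rssi[ap] for ap in ap_ids])

    distances = []
    for position, ref_rssi in fingerprint_db:
        ref = np.array([ref_rssi.get(ap, default_rssi) for ap in ap_ids])
        distances.append((np.linalg.norm(obs - ref), position))

    # Average the positions of the k reference points closest in signal space.
    nearest = sorted(distances, key=lambda d: d[0])[:k]
    return tuple(np.mean([pos for _, pos in nearest], axis=0))
```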
States of motion dynamics describe the correlation of a user’s locations over time. The knowledge of motion dynamics has been used through three approaches to improve positioning accuracy in past studies [22,23,25,34,43,44,45]. The first approach uses a set of predefined movement models to represent a user’s movement for the entire duration [23,24], whereas the second approach utilizes the topology of spatial objects to restrict potential moving directions and routes in indoor environments [43,44,45,46,47]. Both approaches calculate motion dynamics information (MDI) based on prior knowledge, such as an existing building layout, instead of the physically occurring motion. Because pedestrian motion is complex and speed and direction can change freely, it is not adequate to represent MDI using any pre-defined models. The third approach measures motion states online; the corresponding result is more accurate and more widely usable because the real MDI is measured at run time using physical sensors.
The smartphone is a preferable platform for sensing human motion states because there are a number of built-in motion sensors such as accelerometers, gyroscopes and magnetometers [17,21]. This study concentrates on the recognition of human motion states such as sitting, standing and walking, which represent certain types of human activities that may result in location changes. Figure 2 illustrates the process of motion state recognition using smartphone sensors.
Figure 2. The process of motion state recognition from sensor measurements to recognized activities.
The measurements of these sensors are first used to calculate informative signals and then feature values, which are used as the input of an activity classifier to resolve human motion states. In general, motion state recognition is a classification problem that can be addressed with a number of algorithms, such as logic-based methods, K-nearest neighbors, support vector machines, artificial neural networks, decision trees and Bayesian techniques [37,38,47,48]. Most classification algorithms assume a memoryless process that does not consider motion transitions. Motion transition means that a person’s current activity influences the subsequent activity. For example, if a person is currently lying down, the most probable activity he or she will perform immediately afterwards is either to get up or to remain lying down, but usually not to fall and certainly not to run. It is commonly acknowledged that the knowledge of motion transitions is useful because the most likely sequence of motions can be estimated by using all historical measurements. The study in [27] utilizes the HMM method to recognize human motion states such as sitting, standing, walking, running, jumping, falling, and lying. The HMM method considers the motion states of a user as a correlated process in the spatial and temporal domains, instead of a series of isolated events.
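As an illustration of the first stage of this pipeline, the sketch below computes a few features over one window of 3-axis accelerometer samples; the specific feature set (mean, variance and energy of the acceleration magnitude) is an assumption chosen for illustration, not the exact feature list of the cited studies.

```python
import numpy as np

def accelerometer_features(window):
    """Compute simple features over one window of 3-axis accelerometer samples.

    window: (N, 3) array of accelerations in m/s^2 sampled at a fixed rate.
    The acceleration magnitude serves as the informative signal; the features
    below are common choices in the literature, used here only for illustration.
    """
    magnitude = np.linalg.norm(window, axis=1)
    return {
        "mean": float(np.mean(magnitude)),
        "variance": float(np.var(magnitude)),
        "energy": float(np.sum(magnitude ** 2) / len(magnitude)),
    }
```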

3. Geo-Context Computing Based on Hidden Markov Models

This section presents the proposed smartphone geo-context computing solution, which includes a positioning engine, a motion state recognition engine, and a computing component of mobility context inference. The positioning engine and the motion state recognition engine both build on the HMM methodology, which is introduced briefly in Subsection 3.1. The positioning and motion state recognition engines are then presented in Subsections 3.2 and 3.3, where the HMM methodology is instantiated in the specific applications. The developed positioning and motion state recognition engines are compared with existing solutions through field experiments. Finally, a mobility context inference computing solution is demonstrated.

3.1. Problem Formulation and Solutions of Hidden Markov Models

In probability theory, a stochastic process is called a Markov process if it assumes the Markov property. The Markov property states that the conditional probability distribution of future states only depends upon the present state. In a discrete-time finite state space, a Markov process is represented with a Markov chain, as the discrete-time states transit from one to another in a chain-like manner.
The concept of HMM arises from the well-known Markov model in which each state corresponds to a physically observable symbol. Observable Markov models are not applicable to many applications in which states cannot be directly observed. Subsequently, the concept of Markov models has been extended to include the case of hidden Markov models, in which the states are not directly observable (hidden), and an observation is a probabilistic function of the hidden states. In the HMM, the underlying stochastic process (state evolution) is not directly observable but can be observed in the Bayesian sense through another set of stochastic processes, which produce the sequence of observables. Hidden Markov models are significantly more usable in the real world than observable Markov models when physical states of interest are largely unobservable. The basic theory and selected applications of HMM have been presented with details in [33,49].
At regularly spaced discrete time points, the Markov process undergoes a random change of state, also called a state transition. The state transitions are defined by a set of probability coefficients associated with each of the states, known as state transition probabilities. All of these states constitute a state space. The state space and the associated transition probabilities completely characterize a Markov process.
In general, a hidden Markov model characterizes a physical system with a state-space model, as shown in Figure 3. Formally, an HMM includes five elements as follows [49]:
(1) The state space $S$ that consists of $N$ hidden states, $S = \{S_1, S_2, \ldots, S_N\}$.
(2) The set of observables at each epoch $t$, $O(t) = \{o_1, o_2, \ldots, o_M\}$, where $M$ is the number of observable symbols.
(3) The matrix of state transition probabilities $A = \{a_{ij}\}$. Each element $a_{ij}$ defines the probability that the state transits from a value $S_i$ at the immediately prior epoch to another value $S_j$ at the current epoch, i.e., $a_{ij} = P(X_{t+1} = S_j \mid X_t = S_i)$, $1 \le i, j \le N$.
(4) The vector of emission probabilities $B = \{b_j(t)\}$, where $b_j(t) = P(O(t) \mid X(t) = S_j)$, $1 \le j \le N$.
(5) An initial state probability distribution $\Pi = \{\pi_i\}$, where $\pi_i$ defines the probability that the state has a value $S_i$ at the first epoch, i.e., $\pi_i = P(X_1 = S_i)$, $1 \le i \le N$.
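For concreteness, the sketch below wraps the five elements in a small Python container with toy values; the example states, transition matrix and emission model are placeholders, not values used in this study.

```python
import numpy as np

class HiddenMarkovModel:
    """Minimal container for the five HMM elements listed above (a sketch)."""
    def __init__(self, states, A, emission_fn, pi):
        self.states = states            # (1) state space S with N hidden states
        self.A = np.asarray(A)          # (3) N x N transition matrix {a_ij}
        self.emission_fn = emission_fn  # (4) b_j(t) = P(O(t) | X(t) = S_j)
        self.pi = np.asarray(pi)        # (5) initial state distribution {pi_i}
        # (2) the observables O(t) are supplied at run time to emission_fn

# Example with three toy motion states and a trivial uniform emission model.
states = ["sitting", "standing", "walking"]
A = [[0.9, 0.1, 0.0],
     [0.1, 0.6, 0.3],
     [0.0, 0.2, 0.8]]
pi = [0.5, 0.3, 0.2]
hmm = HiddenMarkovModel(states, A, lambda obs, j: 1.0 / len(states), pi)
```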
Figure 3. Structure and temporal evolution process of a hidden Markov model system.
Applications of HMM emerge in the form of three basic problems that are summarized in [33]. Positioning and motion state recognition are both related to the same basic HMM problem, which is to determine the best sequence of model states given a specific HMM and an observable sequence. In this study, the grid-based filter algorithm is utilized to resolve the estimation problem. The grid-based filter consists of prediction and update steps, outlined as follows:
Prediction step:

$$P(X_t = S_j \mid o_1, \ldots, o_{t-1}, \Lambda) = \sum_{i=1}^{N} \omega_{t|t-1}^{i} \, \delta(S_j - S_i), \quad j = 1, \ldots, N \qquad (1)$$

Update step:

$$P(X_t = S_j \mid o_1, \ldots, o_t, \Lambda) = \sum_{i=1}^{N} \omega_{t|t}^{i} \, \delta(S_j - S_i), \quad j = 1, \ldots, N \qquad (2)$$

where $\delta(\cdot)$ is the Dirac delta function and $\omega_{t|t}^{i}$ is the conditional probability of state $S_i$ given the measurements up to epoch $t$:

$$\omega_{t|t-1}^{i} = \sum_{j=1}^{N} \omega_{t-1|t-1}^{j} \, P(S_t^i \mid S_{t-1}^j) \qquad (3)$$

$$\omega_{t|t}^{i} = P(X_t = S_i \mid o_1, \ldots, o_t) = \frac{\omega_{t|t-1}^{i} \, P(o_t \mid S_t^i)}{\sum_{j=1}^{N} \omega_{t|t-1}^{j} \, P(o_t \mid S_t^j)} \qquad (4)$$
Once the posterior probabilities $P(X_t = S_j \mid o_1, \ldots, o_t, \Lambda)$ of all states are estimated, the filter solution is given by the state with the maximum probability. The grid-based filter produces an optimal estimation when the state space is discrete and consists of a finite number of states [50].
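The grid-based filter of Equations (1)–(4) can be sketched in a few lines; the code below assumes the HiddenMarkovModel container sketched earlier and is a minimal illustration rather than the implementation used in the experiments.

```python
import numpy as np

def grid_based_filter(hmm, observations):
    """Grid-based filter over a discrete state space (Equations (1)-(4)).

    hmm:          a HiddenMarkovModel as sketched above (assumed interface).
    observations: iterable of observation symbols o_1, ..., o_T.
    Returns the index of the most probable state after the last update.
    """
    w = hmm.pi.copy()                                    # initial weights
    for o_t in observations:
        # Prediction: propagate the weights through the transition matrix, Eq. (3).
        w_pred = hmm.A.T @ w
        # Update: weight by the emission probabilities and normalize, Eq. (4).
        likelihood = np.array([hmm.emission_fn(o_t, j)
                               for j in range(len(hmm.states))])
        w = w_pred * likelihood
        w /= w.sum()
    return int(np.argmax(w))                             # state with maximum probability
```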
As stated previously, it is preferable to apply HMM for representing the pedestrian movement process because the movement status may vary frequently and sharply in terms of speed, direction and motion state during this process. It is favorable to measure these movement elements online and to apply these measurements in the location and motion state estimation for better performance. In the HMM approach, state transition probability is a critical factor for estimation performance. The following subsections will instantiate the elements of HMM described above in the specific positioning and motion state recognition applications, and they will concentrate on how to compute more precise state transition probabilities using the online estimates of movement situations.

3.2. Radio Signals and MEMS Sensors Integration for Smartphone Positioning

Based on the HMM method, an accurate smartphone indoor positioning solution called the Hybrid Indoor Positioning Engine (HIPE) has been developed by integrating measurements of multiple radio networks and multiple sensors of a modern smartphone. As shown in Figure 4, the HIPE solution is designed to support various location-based services (LBSs) [42]. The HMM methodology is used to fuse different types of measurements, including RSSI observables and the motion dynamics information that is measured by smartphone sensors. In the positioning application, the target spatial area is discretized into a set of grid reference points that represents the state space of the HMM. RSSI observables of WLAN represent the observable symbols. At each reference point, the emission probabilities of RSSI observables are learnt and recorded in the knowledge database, as illustrated in Figure 4. An initial position is determined using the maximum likelihood fingerprinting method [21], and the initial state probability distribution is then established according to the fingerprinting result. Motion dynamics information, including the traveled distance and direction during a measurement interval, is used to improve the precision of the state transition probabilities, as described below. After these elements are determined, the grid-based filter can be applied through Equations (1)–(4) for location estimation. The more precise the state transition probabilities are, the more reliable the estimate that can be achieved.
Figure 4. Architecture of a smartphone hybrid indoor positioning solution that combines measurements of multiple sensors and wireless networks and radio signals mapping knowledge database. Five elements of HMM are determined using different types of information inputs and are applied with the grid-based filter to output location results.
In the general pedestrian dead reckoning (PDR) technique, accelerometers are used for step detection and step length estimation, while gyroscope and magnetometer sensors are combined to resolve the movement heading [21,32]. The movement heading is fused with the step length to calculate the accumulated position change relative to a previous position [21,43,51]. The PDR technique has high precision within a short period, but it suffers from accumulated absolute positioning errors over time [2,32,43,51]. In contrast, fingerprinting positioning does not suffer from error accumulation, but due to multipath and non-line-of-sight propagation of WLAN signals, non-stationary RSSI observables might generate noisy and jumpy positioning results [22,23,24,25]. Therefore, an integrated positioning solution is expected to mitigate the weaknesses of the respective methods and to yield a synergetic effect resulting in higher robustness and accuracy.
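The following sketch shows one PDR position update as described above; the coordinate convention (x east, y north) and heading measured clockwise from north are assumptions made for illustration.

```python
import math

def pdr_update(position, step_length, heading_rad):
    """One pedestrian dead-reckoning update: advance the previous position
    by one detected step along the measured heading.

    position:    (x, y) in metres, previous position estimate (x east, y north).
    step_length: metres for the detected step (e.g., a constant step-length model).
    heading_rad: heading in radians, clockwise from north (assumed convention).
    """
    x, y = position
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))
```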
Different smartphone platforms may pose different situations of sensor availability. To be widely applicable, the HIPE solution must be flexible enough to handle these various situations. Table 1 lists the four situations that have different MDI data available. When an accelerometer is available, the moved distance is the accumulated step length that is obtained by multiplying the number of detected steps with the corresponding step lengths. This study simply adopts a constant step length model of 0.7 m per step. Alternatively, step lengths can be estimated with sophisticated algorithms, such as linear and nonlinear models, and even artificial intelligence techniques [52]. If an accelerometer is unavailable, the moved distance of a pedestrian user can be estimated from general pedestrian dynamics with an empirical constant speed model, e.g., 1 m/s in this work, for the scenarios “Measured heading & assumed speed” and “Assumed speed”. When a compass is usable, the heading is measured directly; otherwise, the heading remains unknown, e.g., in the scenarios “Measured distance” and “Assumed speed”, and all directions are considered possible headings because a user may freely change direction. The four cases in Table 1 cover all situations regardless of the sensor types used.
Table 1. Definitions of situations that use different combinations of motion dynamics information (MDI).

No. | Combination of MDI               | Distance (sensor / method)                 | Heading (sensor / method)
1   | Measured distance & heading      | accelerometers / accumulated step lengths  | compass / directly measured
2   | Measured distance                | accelerometers / accumulated step lengths  | none / unknown
3   | Measured heading & assumed speed | none / constant speed model of 1 m/s       | compass / directly measured
4   | Assumed speed                    | none / constant speed model of 1 m/s       | none / unknown
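A minimal sketch of how the MDI combinations of Table 1 could be turned into one row of the state transition matrix over grid reference points is given below; the Gaussian weighting, the sigma values and the angular tolerance are illustrative assumptions, not the exact formulation used in HIPE.

```python
import numpy as np

def mdi_transition_row(grid_points, current_idx, moved_distance=None,
                       heading_rad=None, speed=1.0, dt=1.0,
                       distance_sigma=1.0, heading_sigma=np.pi / 4):
    """One row of the state transition matrix built from MDI (cases of Table 1).

    grid_points:    (N, 2) array of grid reference points, x east / y north (assumed).
    moved_distance: accumulated step lengths if an accelerometer is available,
                    otherwise None and the constant speed model (speed * dt) is used.
    heading_rad:    measured heading (clockwise from north) if a compass is available,
                    otherwise None and all directions are treated as possible.
    The Gaussian weights and sigma values are illustrative assumptions.
    """
    offsets = grid_points - grid_points[current_idx]
    distances = np.linalg.norm(offsets, axis=1)

    expected = moved_distance if moved_distance is not None else speed * dt
    probs = np.exp(-0.5 * ((distances - expected) / distance_sigma) ** 2)

    if heading_rad is not None:
        # Down-weight grid points whose bearing disagrees with the measured heading.
        bearings = np.arctan2(offsets[:, 0], offsets[:, 1])
        diff = np.angle(np.exp(1j * (bearings - heading_rad)))  # wrap to [-pi, pi]
        probs *= np.exp(-0.5 * (diff / heading_sigma) ** 2)

    return probs / probs.sum()
```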
The positioning engine (HIPE) and the motion state recognition engine presented in the next subsection are evaluated through field experiments at the three-story office premises of the Finnish Geospatial Research Institute (FGI). Figure 5 shows the three-dimensional (3D) layout represented by the 3D point cloud and a snapshot of the interior structure.
Figure 5. Experimental environment of the positioning and motion state estimation of this study, which is represented by the 3D point cloud and a snapshot of the interior structure of the FGI building.
The testing person holds a Nokia smartphone (Nokia Ltd., Espoo, Finland) and moves around within the building to evaluate the positioning accuracy. The experiment takes approximately one and a half hours and obtains the positioning solutions of 528 RSSI observation epochs. Figure 6 compares the root-mean-square errors (RMSE) and maximum errors of these positioning results when different MDI combinations are applied. Given the different combinations of MDI defined in Table 1, the grid-based filter achieves an accuracy improvement in terms of reduced RMSE by 1.34 m (30.3%), 1.26 m (28.4%), 0.95 m (21.4%), and 0.56 m (12.6%) over the MLE method, respectively. More noticeably, the maximum errors were reduced by up to 60%, from 15 m to 6 m, when adequate MDI was available. This improvement is significant for the user experience because positioning jumps become less noisy. Because the HIPE is designed as a universal engine for different smartphone and LBS platforms, which probably have different types of sensors available to measure MDI, it is important for the HIPE to have adequate flexibility to yield sufficient accuracy using different MDI combinations.
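For reference, the error metrics reported here can be computed as in the short sketch below; it assumes sequences of estimated and true horizontal coordinates and is not part of the HIPE implementation.

```python
import numpy as np

def positioning_errors(estimates, references):
    """Horizontal positioning errors over a sequence of epochs.

    estimates, references: (T, 2) arrays of estimated and true (x, y) positions.
    Returns (RMSE, maximum error) in the same unit as the inputs (metres here).
    """
    errors = np.linalg.norm(np.asarray(estimates) - np.asarray(references), axis=1)
    return float(np.sqrt(np.mean(errors ** 2))), float(np.max(errors))
```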
Figure 6. Positioning errors in terms of RMSE (a) and maximum errors (b) when the different cases of MDI combinations are applied. The yellow bars (marked as “5” on the x-axis) represent the results of the baseline MLE method.

3.3. Human Motion State Recognition

The HMM method is also developed to estimate human motion states such as sitting, standing and walking, which constitute a set of human activities that may result in geographical location change. The process of motion state recognition utilizes the measurements of a smartphone’s built-in motion sensors such as an accelerometer, gyroscope and magnetometer [27]. In this application, the target motion states to be recognized represent the state space of the HMM. Sensor measurements represent the observable symbols, and the corresponding emission probabilities are learnt and recorded as a knowledge database. An initial state probability distribution of motion states is given empirically according to the specific application context, which is an office environment. In this study, these HMM elements are kept the same as in [27], and area type-specific state transition probabilities are highlighted by comparing the performance of motion state recognition when different state transition probabilities are applied.
The HMM method considers the motion states of a pedestrian as a spatiotemporally correlated process, instead of a series of isolated events, and thus differs from most traditional classification algorithms that use a memoryless process and do not consider motion transition [47,48]. In this study, the proposed HMM motion state recognition solution improves the performance of motion state estimation by applying location-based motion transitions. The basic idea is that motion transitions are related to geographical locations, and specific transition probabilities of human motion states are defined and calculated for different types of geographical areas. For example, a person is more likely to be walking and sitting (instead of running) in a coffee room of an office building, while he or she is more likely to be walking or running (instead of constantly sitting) in a corridor.
State transition probability plays a critical role in the HMM solution. In this study, we determine the matrix of state transition probabilities dynamically based on the type of area where a pedestrian currently stays. Given a user’s current location, which is determined by the above-described positioning solution, the corresponding area type can be obtained from the spatial attribute database of a geographic information system (GIS). Notably, GIS is only used to specify area types that are related to state transition probabilities in this study; it is not used to restrict the motion to specific models. Thus, the matrix of state transition probabilities can be refined using the online estimated location information and the motion state of the previous epoch. The rest of this subsection demonstrates the experiment and the improvement of motion state recognition when an area type-specific motion transition probability matrix is applied, compared with the result derived with a general state transition probability matrix for the entire area.
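The dynamic selection of the transition matrix can be sketched as a simple lookup; the GIS interface and dictionary layout below are assumed placeholders for the spatial attribute database and the matrices of Table 2 and Table 3.

```python
def select_transition_matrix(location, gis_lookup, stp_by_area, default_stp):
    """Return the STP matrix for the area type containing the given location.

    location:     (x, y) estimate from the positioning engine (HIPE).
    gis_lookup:   callable mapping a location to an area type string,
                  e.g., "working room" or "staircase" (assumed interface).
    stp_by_area:  dict mapping area type -> transition matrix (as in Table 2).
    default_stp:  overall matrix used when the area type is unknown (Table 3).
    """
    area_type = gis_lookup(location)
    return stp_by_area.get(area_type, default_stp)
```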
In our experiment, the three-story institute office building, as shown in Figure 5, contains five types of areas: working room, lunch/coffee area, intersection area, staircase area and generic area, which are common to office environments. Intuitively, a person has a dominant motion state in each of these area types. For example, a person is most likely to be sitting in a working room or lunch/coffee area, and he or she is more likely to be turning at an intersection area than at other areas, while the most probable state in a corridor is walking. Matching a user’s current location, which is estimated by the HIPE, with the GIS database makes it easy to determine the corresponding area type.
This study collects experimental data to learn the state transition probabilities. A smartphone application that runs on a Nokia N8 device is developed to record input motion states at a 1 Hz rate. The records are used to count the area type-specific state transition probabilities according to Equation (5), and the results are listed in Table 2. The minimum value of aij is given empirically. The learning experiment is performed for approximately 3 h per weekday (Monday to Friday) for a full week. The general state transition probability matrix of the whole area is generated using the whole dataset and is listed in Table 3.
$$a_{ij} = \frac{N_{a_{ij}}}{N_{a_i}}, \quad a_{ij} \ge 0.000002 \qquad (5)$$
where $a_{ij}$ is the state transition probability from the i-th state to the j-th state in a specific area type $a$, $N_{a_{ij}}$ is the number of occurrences of transitions from the i-th state to the j-th state in area type $a$, and $N_{a_i}$ is the total number of occurrences of transitions from the i-th state to any state at the next epoch.
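The counting of Equation (5) can be sketched as follows for one area type; the handling of states with no observed transitions is an assumption made for completeness.

```python
from collections import defaultdict

def learn_transition_matrix(labelled_records, states, floor=0.000002):
    """Estimate a_ij = N_aij / N_ai from 1 Hz motion-state records (Equation (5)).

    labelled_records: list of motion-state labels recorded in one area type.
    states:           ordered list of motion states, e.g., ["sitting", "standing", ...].
    floor:            minimum probability assigned empirically to unseen transitions.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for prev, curr in zip(labelled_records, labelled_records[1:]):
        counts[prev][curr] += 1

    matrix = []
    for s_i in states:
        total = sum(counts[s_i].values())
        # If a state was never observed, all its outgoing probabilities fall to the floor.
        row = [max(counts[s_i][s_j] / total if total else 0.0, floor)
               for s_j in states]
        matrix.append(row)  # the tiny floor leaves each row approximately normalized
    return matrix
```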
Notably, this proposed solution requires the use of the GIS database but does not necessarily cause extra complexity with the entire system because the GIS has been involved in the LBS system as an essential part, for example, to present visualization or map interface; the proposed solution has just re-used the attribute database. Again, different from previously reported solutions of map-aided positioning, the proposed solution of this study does not restrict the movement of users with any GIS-based motion models. GIS is only used to recognize area types.
Table 2. Area type-specific motion transition probability matrix in the office environment. Rows give the motion state at epoch t; columns give the motion state at epoch t + 1 in the order Sitting, Standing, Walking, Running, Falling, Turning.

Working room
Sitting  | 0.9913   | 0.0087   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.3103   | 0.1524   | 0.4202   | 0.000002 | 0.000002 | 0.1172
Walking  | 0.0008   | 0.0615   | 0.8451   | 0.000017 | 0.000007 | 0.0925
Running  | 0.000002 | 0.0256   | 0.1039   | 0.8063   | 0.000007 | 0.0642
Falling  | 0.9925   | 0.0075   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.000002 | 0.2140   | 0.7705   | 0.000004 | 0.000006 | 0.0155

Coffee room
Sitting  | 0.8664   | 0.1336   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.3258   | 0.2631   | 0.3938   | 0.000002 | 0.000002 | 0.0173
Walking  | 0.0008   | 0.0821   | 0.8522   | 0.000022 | 0.000008 | 0.0648
Running  | 0.000002 | 0.0369   | 0.1497   | 0.791438 | 0.000009 | 0.0219
Falling  | 0.9942   | 0.0058   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.000002 | 0.1180   | 0.8750   | 0.000004 | 0.000006 | 0.0069

Intersection
Sitting  | 0.6419   | 0.3581   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.000002 | 0.1246   | 0.4010   | 0.0048   | 0.000002 | 0.4696
Walking  | 0.000002 | 0.1741   | 0.3454   | 0.0117   | 0.000002 | 0.4687
Running  | 0.000002 | 0.0677   | 0.2287   | 0.3070   | 0.000002 | 0.3966
Falling  | 0.9900   | 0.0100   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.000002 | 0.0033   | 0.8736   | 0.1195   | 0.000002 | 0.0035

Staircase
Sitting  | 0.6154   | 0.3846   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.000002 | 0.0812   | 0.5412   | 0.2916   | 0.000002 | 0.0859
Walking  | 0.000002 | 0.0846   | 0.6725   | 0.0725   | 0.0862   | 0.0843
Running  | 0.000002 | 0.0405   | 0.2657   | 0.4470   | 0.1681   | 0.0785
Falling  | 0.9900   | 0.0100   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.000002 | 0.0073   | 0.6433   | 0.1345   | 0.1772   | 0.0377

Generic area
Sitting  | 0.7712   | 0.2288   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.1079   | 0.3682   | 0.3194   | 0.1136   | 0.000002 | 0.0909
Walking  | 0.0682   | 0.1775   | 0.5934   | 0.1359   | 0.000002 | 0.0250
Running  | 0.000002 | 0.1863   | 0.3590   | 0.4489   | 0.000002 | 0.0058
Falling  | 0.9900   | 0.0100   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.0888   | 0.2166   | 0.6289   | 0.0657   | 0.000002 | 0.000002
Table 3. Overall motion transition probability matrix in the office environment. Rows give the motion state at epoch t; columns give the motion state at epoch t + 1 in the order Sitting, Standing, Walking, Running, Falling, Turning.

Sitting  | 0.8664   | 0.1336   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Standing | 0.3259   | 0.2637   | 0.3932   | 0.000002 | 0.000002 | 0.0172
Walking  | 0.0008   | 0.0821   | 0.8522   | 0.000022 | 0.000008 | 0.0648
Running  | 0.000002 | 0.0369   | 0.1497   | 0.7914   | 0.000009 | 0.0219
Falling  | 0.9942   | 0.0058   | 0.000002 | 0.000002 | 0.000002 | 0.000002
Turning  | 0.000002 | 0.1180   | 0.8750   | 0.000004 | 0.000006 | 0.0070
After the motion state transition probability matrix is established, pedestrian motion states can be recognized using the grid-based filter under the HMM framework. Using the area type-specific motion state transition probability matrix and the general transition probability matrix separately, the performance of motion state recognition was evaluated through two experiments. The two evaluation experiments are performed with a Nokia N8 device by two test subjects on two separate weekdays that differ from the days on which the state transition probabilities were learnt. The test subjects behave naturally, as is usual in routine office life. The experiment of each day lasts for more than three hours. The performance of motion state recognition is measured with a single figure of merit named the F-measure, which is the harmonic mean of two separate measures of system performance, namely precision and recall [53]. In statistics, precision is the percentage of slots in the hypothesis that are correct, while recall is the percentage of reference slots for which the hypothesis is correct [53]. The F-measure reaches its best score at 1 and its worst score at 0. The real motion states are manually recorded by the test subjects using a smartphone application and are used as the ground-truth reference in the performance evaluation. Represented by F-measure values, the performances of motion state recognition are compared in Figure 7 when the area type-specific state transition probability matrix in Table 2 and the overall state transition probability matrix in Table 3 are utilized separately.
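The per-state F-measure used here can be computed as in the sketch below, which assumes one recognized and one reference label per 1 s epoch.

```python
def f_measure(hypothesis, reference):
    """Per-state F-measure: harmonic mean of precision and recall [53].

    hypothesis, reference: equal-length sequences of recognized and true
    motion-state labels (one label per 1 s epoch in this study's setting).
    Returns a dict mapping each state in the reference to its F-measure.
    """
    scores = {}
    for state in set(reference):
        tp = sum(h == state and r == state for h, r in zip(hypothesis, reference))
        hyp = sum(h == state for h in hypothesis)   # slots claimed by the hypothesis
        ref = sum(r == state for r in reference)    # slots in the reference
        precision = tp / hyp if hyp else 0.0
        recall = tp / ref if ref else 0.0
        scores[state] = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return scores
```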
Figure 7. Motion state recognition performance indicated by the F-measure when area type-specific state transition probability matrix and overall STP matrix are utilized separately.
It is demonstrated that, for some motion states, e.g., falling down and turning, the performance of motion state estimation is improved by using the area type-specific STP matrix, with the F-measure value increasing from 0.7 to 0.9. These two motion states are critical for particular applications such as indoor navigation and homecare, because turning is related to a large heading change and effective recognition of falling-down events is vital for the timely provision of first-aid homecare. For dynamic motions such as walking, running and standing, the F-measure value is improved by 5 to 10 percent. For the sitting state, there is no significant difference. The two experiments conducted by different persons at different times produce consistent results.

3.4. Geo-Context Inference and Interpretation

Motion information is combined with location and attribute information to infer geo-context knowledge expressed in natural language. The attribute information includes geospatial attributes associated with maps and social attributes, e.g., the affiliated department and research group. This information can usually be retrieved from a GIS system or a specific knowledge database. Furthermore, multiple users, i.e., a group of users, should be involved in the inference because many social events, e.g., coffee breaks and meetings, occur with a group of people. Figure 8 shows an example of geo-context inference based on an expert system that combines the location, motion states and map database.
Figure 8. Process of the forward chaining method for geo-context inference.
An expert system is a form of artificial intelligence designed to solve complex problems by reasoning about knowledge like a human expert; thus, it is one approach to geo-context inference. This study simply uses the forward chaining method of expert systems to infer the context knowledge. With a knowledge database, the forward chaining method starts with the location and motion state of a user to infer the corresponding context. In this process, the locations and motion states of a group of people may be queried for use as observables. The queried people are grouped according to their social and geospatial attributes. As shown in Figure 8, the inference process of the expert system translates the parametric observables, i.e., location and motion states, into context information expressed in natural language. Figure 9 shows the flowchart of geo-context computing from the location, motion state and attribute database to context knowledge expressed in natural language and potential applications.
Figure 9 illustrates the process of geo-context computing with our office case. First, the user’s location and motion state are estimated with smartphone radio signals and built-in sensors. Second, the area type of the current location is recognized by looking up the spatial attribute database. In our case, possible area types include working room, coffee room, meeting room, and laboratory. Third, geospatial information such as the location, area type, motion state and movement trajectory of a group of users is combined to perform the geo-context inference. The result of geo-context inference may vary, and Figure 9 shows three examples, A, B and C. Based on the estimated location and context information, high-level applications can be further developed by integrating smartphone telecommunication functions with the geo-context status. For instance, in example case C, when the geo-context computing system finds that an elderly user has fallen on the staircase, it can send an emergency aid request to the emergency department or to relevant persons.
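A toy forward-chaining pass of the kind described above is sketched below; the rules, fact encoding and context strings are illustrative assumptions, not the knowledge base used in this study.

```python
def forward_chain(facts, rules):
    """Repeatedly fire if-then rules whose conditions are satisfied by known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Illustrative rules combining area type and (group) motion state into a context.
rules = [
    (("area:coffee room", "group motion:sitting"), "context: coffee break"),
    (("area:staircase", "motion:falling"), "context: fall detected, request aid"),
]
facts = {"area:staircase", "motion:falling"}
print(forward_chain(facts, rules))  # includes "context: fall detected, request aid"
```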
Figure 9. Flowchart of the geo-context computing solution and related applications.

4. Conclusions and Outlook

This paper proposes the concept of a geo-context that integrates geospatial location, human motion state and mobility context and presents a smartphone geo-context computing solution for daily mobility environments. In the proposed HMM-based solution, the pedestrian location and motion state are estimated in an integrated framework, and each can reciprocally improve the estimation accuracy and reliability of the other. Furthermore, it is shown that pedestrian location and motion information can be combined to infer the mobility context knowledge of an individual or group of users and eventually achieve the concept of geo-context computing.
The smartphone is an efficient platform because it is capable of providing adequate measurements of multiple sensors and multiple networks that are used to estimate pedestrian location and motion states. Furthermore, related applications can be developed by integrating the estimated geo-context knowledge with the telecommunication function that is the primary function of a smartphone.
A simultaneous positioning and motion state estimation solution is implemented that runs on a smartphone. Compared with the widely used MLE method, the experimental results obtained in our office environment demonstrate up to 30% lower overall positioning errors, indicated by RMSE, and up to 60% smaller positioning jumps, indicated by maximum errors, when motion information is applied in the location estimation. Meanwhile, it is shown that motion state estimation achieves 20%–30% higher precision and reliability, represented by the F-measure values. The main cause of the performance improvement is that pedestrian location and motion states are considered as correlated processes in the temporal and spatial domains, instead of a series of isolated epochs, and the information regarding location and motion states is used to refine the parameters of the hidden Markov models to improve the estimation performance of one another. These improvements in accuracy and robustness will significantly enhance the user experience of LBS and future geo-context-based applications.
Future research will include the development of more sophisticated algorithms, such as machine learning and Bayesian inference, to perform geo-context computing, as well as the development of novel geo-context knowledge-enabled cognitive applications that have the potential to become a prospective business sector beyond the current LBS.

Acknowledgments

This work was supported in part by the Finnish Centre of Excellence in Laser Scanning Research (CoE-LaSR), funded by the Academy of Finland (No. 272195).

Author Contributions

Jingbin Liu, Juha Hyyppä and Ruizhi Chen initiated the idea and conception; Jingbin Liu implemented the proposed algorithms and the benchmark and conducted the experiments; Lingli Zhu provided the indoor building model and database and calculated the area type-specific probability matrix; Yunsheng Wang participated in the calculation of the area type-specific probability matrix, conducted the experiment, and analyzed the data; Xinlian Liang conducted the training experiment and analyzed the data; Tianxing Chu and Keqiang Liu participated in the development of the expert system method and validated the proposed solution. All authors helped with the paper writing and reviewed the text.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Calderoni, L.; Ferrara, M.; Franco, A.; Maio, D. Indoor localization in a hospital environment using Random Forest classifiers. Expert Syst. Appl. 2015, 42, 125–134. [Google Scholar] [CrossRef]
  2. Chen, R.; Chu, T.; Xu, W.; Li, X.; Liu, J.; Chen, Y.; Chen, L.; Hyyppa, J.; Tang, J. Development of a contextual thinking engine in mobile devices. In Proceedings of IEEE UPINLBS 2014, Corpus Christ, TX, USA, 2–5 November 2014.
  3. Conte, G.; Marchi, M.; Nacci, A.; Rana, V.; Sciuto, D. BlueSentinel: A first approach using iBeacon for an energy efficient occupancy detection system. In Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings (BuildSys’14), New York, NY, USA, 5–6 November 2014; pp. 11–19. [CrossRef]
  4. Pei, L.; Chen, L.; Guinness, R.; Liu, J.; Kuusiniemi, H.; Chen, Y.; Chen, R. Sound positioning using a small scale linear microphone array. In Proceedings of the IPIN 2013 Conference, Montbeliard, France, 28–31 October 2013.
  5. Li, X.; Wang, J.; Li, T. Seamless positioning and navigation by using geo-referenced images and multi-sensor data. Sensors 2013, 13, 9047–9069. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, J.; Chen, R.; Chen, Y.; Tang, J.; Hyyppä, J. A bright idea: Testing the feasibility of positioning using ambient light. Available online: http://gpsworld.com/innovation-a-bright-idea (accessed on 11 February 2015).
  7. Liu, J.; Chen, Y.; Tang, J.; Jaakkola, A.; Hyyppä, J. The uses of ambient light for ubiquitous positioning. In Proceedings of IEEE/ION PLANS 2014, Monterey, CA, USA, 5–8 May 2014.
  8. Liang, X.; Jaakkola, A.; Wang, Y.; Hyyppä, J.; Honkavaara, E.; Liu, J.; Kaartinen, H. The use of a hand-held camera for individual tree 3D mapping in forest sample plots. Remote Sens. 2014, 6, 6587–6603. [Google Scholar] [CrossRef]
  9. Al-Hamad, A.; El-Sheimy, N. Smartphones based mobile mapping systems. In Proceedings of ISPRS Technical Commission V Symposium, Riva del Garda, Italy, 23–25 June 2014; Volume XL-5, pp. 29–34.
  10. Lane, N.D.; Miluzzo, E.; Lu, H.; Peebles, D.; Choudhury, T.; Campbell, A.T. A survey of mobile phone sensing. IEEE Commun. Mag. 2010, 48, 140–150. [Google Scholar] [CrossRef]
  11. Taniuchi, D.; Maekawa, T. Automatic update of indoor location fingerprints with pedestrian dead reckoning. ACM Trans. Embed. Comput. Syst. 2015, 14. [Google Scholar] [CrossRef]
  12. Eagle, N.; Pentland, A. Reality mining: Sensing complex social systems. Pers. Ubiquitous Comput. 2006, 10, 255–268. [Google Scholar] [CrossRef]
  13. Adams, B.; Phung, D.; Venkatesh, S. Sensing and using social context. ACM Trans. Multimed. Comput. Commun. Appl. 2008, 5, 11. [Google Scholar] [CrossRef]
  14. Choudhury, T.; Pentland, A. Sensing and modeling human networks using the sociometer. In Proceedings of the 7th IEEE International Symposium on Wearable Computers (ISWC2003), White Plains, NY, USA, 21–23 October 2003; pp. 216–222.
  15. Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F. ISVD-based Euclidian structure from motion for smartphones. In Proceedings of ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., Riva del Garda, Italy, 23–25 June 2014; pp. 401–406.
  16. Saeedi, S.; Moussa, A.; El-Sheimy, N. Context-Aware Personal Navigation Using Embedded Sensor Fusion in Smartphones. Sensors 2014, 14, 5742–5767. [Google Scholar] [CrossRef] [PubMed]
  17. Campbell, A.; Choudhury, T. From smart to cognitive phones. IEEE Pervasive Comput. 2012, 11, 7–11. [Google Scholar] [CrossRef]
  18. Wang, D.; Subagdja, B.; Kang, Y.; Tan, A.; Zhang, D. Towards intelligent caring agents for aging-in-place: Issues and challenges. In Proceedings of 2014 IEEE Symposium on Computational Intelligence for Human-Like Intelligence (CIHLI), Orlando, FL, USA, 9–12 December 2014; pp. 1–8.
  19. Bahl, P.; Padmanabhan, V. Radar: An in-building RF based user location and tracking system. In Proceedings of IEEE INFOCOM, Tel-Aviv, Israel, 26–30 March 2000; pp. 775–784.
  20. Youssef, M.; Agrawala, A. The Horus WLAN location determination system. In Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, New York, NY, USA, 5 June 2005; pp. 205–218.
  21. Liu, J.; Chen, R.; Pei, L.; Chen, W.; Tenhunen, T.; Kuusniemi, H.; Kröger, T.; Chen, Y. Accelerometer assisted robust wireless signal positioning based on a hidden Markov model. In Proceedings of IEEE/ION PLANS 2010, Indian Wells, CA, USA, 4–6 May 2010; pp. 488–497.
  22. Liu, J.; Chen, R.; Pei, L.; Guinness, R.; Kuusniemi, H. A hybrid smartphone indoor positioning solution for mobile LBS. Sensors 2012, 12, 17208–17233. [Google Scholar] [CrossRef] [PubMed]
  23. Au, A.; Feng, C.; Valaee, S.; Reyes, S.; Sorour, S.; Markowitz, S.N.; Gordon, K.; Eizenman, M.; Gold, D. Indoor tracking and navigation using received signal strength and compressive sensing on a mobile device. IEEE Trans. Mob. Comput. 2013, 12, 2050–2062. [Google Scholar] [CrossRef]
  24. Jie, Y.; Qiang, Y.; Lionel, N. Learning adaptive temporal radio maps for signal-strength-based location estimation. IEEE Trans. Mob. Comput. 2008, 7, 869–883. [Google Scholar]
  25. Kushki, A.; Plataniotis, K.N.; Venetsanopoulos, A.N. Intelligent dynamic radio tracking in indoor wireless local area networks. IEEE Trans. Mob. Comput. 2010, 9, 405–419. [Google Scholar] [CrossRef]
  26. Pei, L.; Chen, R.; Liu, J.; Chen, W.; Kuusniemi, H.; Tenhunen, T.; Kröger, T.; Leppäkoski, H.; Chen, Y.; Takala, J. Motion recognition assisted indoor wireless navigation on a mobile phone. In Proceedings of the ION GNSS 2010 conference, Portland, OR, USA, 21–24 September 2010; pp. 3366–3375.
  27. Frank, K.; Vera-Nadales, M.J.; Robertson, P.; Angermann, M. Reliable real-time recognition of motion related human activities using MEMS inertial sensors. In Proceedings of the ION GNSS 2010, Portland, OR, USA, 21–24 September 2010; pp. 2919–2932.
  28. Shin, B.; Kim, C.; Kim, J.; Lee, S.; Kee, C.; Lee, T. Hybrid model-based motion recognition for smartphone users. ETRI J. 2014, 36, 1016–1022. [Google Scholar] [CrossRef]
  29. Parviainen, J.; Bojja, J.; Collin, J.; Leppänen, J.; Eronen, A. Adaptive activity and environment recognition for mobile phones. Sensors 2014, 14, 20753–20778. [Google Scholar] [CrossRef] [PubMed]
  30. Chen, Z.; Zou, H.; Jiang, H.; Zhu, Q.; Soh, Y.; Xie, L. Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization. Sensors 2015, 15, 715–732. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, H.; Sen, S.; Elgohary, A.; Farid, M.; Youssef, M.; Choudhury, R. Unsupervised indoor localization. In Proceedings of MobiSys, Low Wood Bay, Lake District, UK, 25–29 June 2012.
  32. Lukianto, C.; Sternberg, H.; Gacic, A. STEPPING—Phone-based portable pedestrian indoor navigation. Arch. Photogramm. Cartogr. Remote Sens. 2011, 22, 311–323. [Google Scholar]
  33. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. IEEE Proc. 1989, 77, 257–286. [Google Scholar] [CrossRef]
  34. Pei, L.; Chen, R.; Liu, J.; Kuusniemi, H.; Chen, Y.; Tenhunen, T. Using motion-awareness for the 3D indoor personal navigation on a Smartphone. In Proceedings of the 24th International Technical Meeting of The Satellite Division of the Institute of Navigation, Portland, OR, USA, 19–23 September 2011; pp. 2906–2912.
  35. Godha, S.; Lachapelle, G.; Cannon, M.E. Integrated GPS/INS system for pedestrian navigation in a signal degraded environment. In Proceedings of the ION GNSS 2006 Conference, Fort Worth, TX, USA, 26–29 September 2006.
  36. Kuusniemi, H.; Liu, J.; Pei, L.; Chen, Y.; Chen, L.; Chen, R. Reliability considerations of multi-sensor multi-network pedestrian navigation. IET Radar Sonar Navig. 2012, 6, 157–164. [Google Scholar] [CrossRef]
  37. Pei, L.; Guinness, R.; Chen, R.; Liu, J.; Kuusniemi, H.; Kaistinen, J. Human behavior cognition using smartphone sensors. Sensors 2013, 13, 1402–1424. [Google Scholar] [CrossRef] [PubMed]
  38. Pei, L.; Liu, J.; Guinness, R.; Chen, Y.; Kuusniemi, H.; Chen, R. Using LS-SVM based motion recognition for smartphone indoor wireless positioning. Sensors 2012, 12, 6155–6175. [Google Scholar] [CrossRef] [PubMed]
  39. Pei, L.; Chen, R.; Liu, J.; Tenhunen, T.; Kuusniemi, H.; Chen, Y. An inquiry-based Bluetooth indoor positioning approach for the Finnish pavilion at Shanghai World Expo2010. In Proceedings of the Position Location and Navigation Symposium (PLANS), 2010 IEEE/ION, Indian Wells, CA, USA, 3–6 May 2010; pp. 1002–1009.
  40. King, T.; Kopf, S.; Haenselmann, T.; Lubberger, C.; Effelsberg, W. COMPASS: A probabilistic indoor positioning system based on 802.11 and digital compasses. In Proceedings of the International Workshop on Wireless Network Testbeds, Experimental Evaluation and Characterization (WiNTECH’06), Los Angeles, CA, USA, 29 September 2006; pp. 34–40.
  41. Besada, J.A.; Bernardos, A.M.; Tarrio, P.; Casar, J.R. Analysis of tracking methods for wireless indoor localization. In Proceedings of the 2nd International Symposium on Wireless Pervasive Computing 2007 (ISWPC ’07), San Juan, Puerto Rico, 5–7 February 2007; pp. 493–497.
  42. Liu, J.; Chen, R.; Chen, Y.; Pei, L.; Chen, L. iParking: An intelligent indoor location-based smartphone parking service. Sensors 2012, 12, 14612–14629. [Google Scholar] [CrossRef] [PubMed]
  43. Masiero, A.; Guarnieri, A.; Pirotti, F.; Vettore, A. A Particle Filter for Smartphone-Based Indoor Pedestrian Navigation. Micromachines 2014, 5, 1012–1033. [Google Scholar] [CrossRef]
  44. Widyawan; Pirkl, G.; Munaretto, D.; Fischer, C.; An, C.; Lukowicz, P.; Klepal, M.; Timm-Giel, A.; Widmer, J.; Pesch, D.; et al. Virtual lifeline: Multimodal sensor data fusion for robust navigation in unknown environments. Pervasive Mob. Comput. 2012, 8, 388–401. [Google Scholar] [CrossRef]
  45. Tian, Z.; Fang, X.; Zhou, M.; Li, L. Smartphone-Based Indoor Integrated WiFi/MEMS Positioning Algorithm in a Multi-Floor Environment. Micromachines 2015, 6, 347–363. [Google Scholar] [CrossRef]
  46. Evennou, F.; Marx, F.; Novakov, E. Map-aided Indoor Mobile Positioning System Using Particle Filter. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC’ 05), New Orleans, LA, USA, 13–17 March 2005; Volume 4, pp. 2490–2494.
  47. Kotsiantis, S.B. Supervised machine learning: A review of classification techniques. Informatica 2007, 31, 249–268. [Google Scholar]
  48. Pearl, J. Causality: Models, Reasoning, and Inference; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  49. Liu, J. Hybrid positioning with smartphones. In Ubiquitous Positioning and Mobile Location-Based Services in Smart Phones; Chen, R., Ed.; IGI Global: Hershey, PA, USA, 2012; pp. 159–194. [Google Scholar]
  50. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter: Particle Filters for Tracking Applications; Artech House Publishers: Norwood, MA, USA, 2004. [Google Scholar]
  51. Hautefeuille, M.; O’Flynn, B.; Peters, F.H.; O’Mahony, C. Development of a Microelectromechanical System (MEMS)-Based Multisensor Platform for Environmental Monitoring. Micromachines 2011, 2, 410–430. [Google Scholar] [CrossRef]
  52. Aggarwal, P.; Syed, Z.; El-Sheimy, N. MEMS-Based Integrated Navigation; Artech House Publishers: Norwood, MA, USA, 2010. [Google Scholar]
  53. Francis, J.M.; Kubala, F.; Schwartz, R.; Weischedel, R. Performance measures for information extraction. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, USA, 28 February–3 March 1999; pp. 249–252.
