Article

An Integrated Wireless Wearable Sensor System for Posture Recognition and Indoor Localization

Jian Huang, Xiaoqiang Yu, Yuan Wang and Xiling Xiao
1 Key Laboratory of Image Processing and Intelligent Control, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
2 Department of Rehabilitation, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, 1277 Jiefang Avenue, Wuhan 430022, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(11), 1825; https://doi.org/10.3390/s16111825
Submission received: 30 June 2016 / Revised: 22 October 2016 / Accepted: 24 October 2016 / Published: 31 October 2016

Abstract:
In order to provide better monitoring for the elderly or patients, we developed an integrated wireless wearable sensor system that can realize posture recognition and indoor localization in real time. Five custom sensor nodes, fixed respectively on the waist and the lower limbs, together with a standard Kalman filter, are used to acquire basic attitude data. After the attitude angles of the five body segments (two thighs, two shanks and the waist) are obtained, the pitch angles of the left thigh and waist are used to realize posture recognition. Based on these attitude angles, we can also calculate the coordinates of the six lower limb joints (two hip joints, two knee joints and two ankle joints). A novel relative localization algorithm based on step length is then proposed to realize the indoor localization of the user. Several sparsely distributed active Radio Frequency Identification (RFID) tags are used to correct the accumulative error of the relative localization algorithm, and a set-membership filter is applied to realize the data fusion. The experimental results verify the effectiveness of the proposed algorithms.

1. Introduction

Many problems have arisen due to the fast-aging population all over the world. Among them, health care for and monitoring of the elderly is one of the most important issues to be addressed. Since more and more elderly people live alone, a sensor system that can monitor both the posture and the location of an elderly person is urgently needed. When an emergency happens, family members can gain timely access to the physical condition and location of the elderly person with the help of such a sensor system. For instance, if a monitored person is found to be lying down but not located on the bed, an alarm should be sent out. To effectively detect this situation, the sensor system has to possess both posture recognition and indoor localization abilities. So far, however, most research has addressed posture recognition or indoor localization alone.
Localization problems exist widely in both micro and macro applications [1,2]. It is well known that the Global Positioning System (GPS) is one of the most successful localization systems; however, the performance of GPS degrades drastically in indoor environments [3]. To obtain a robust and accurate indoor localization method, many effective methods have been put forward by researchers. The first class of methods can be categorized as wireless communication-based technologies. So far, several wireless technologies have been used for indoor localization, such as WiFi [4,5], Bluetooth [6,7], ZigBee [8,9] and RFID (Radio Frequency Identification) [10]. The main purpose of a wireless sensor network (WSN) is to determine the position of a moving target relative to anchor nodes distributed over a geographic area, based on the signal strength or transmission [11]. In these methods, plenty of anchor nodes are needed to achieve relatively high accuracy, which increases the total cost of the whole system, and the complexity of the system grows drastically with the number of nodes. When the energy of the anchor nodes is insufficient, the positioning error of the algorithms based on signal strength increases quickly. Considering these defects, some algorithms based on Inertial Measurement Units (IMUs) have been proposed. Li et al. proposed an indoor localization method using phone inertial sensors [12]. Gusenbauer et al. also used a mobile phone to realize indoor positioning and developed algorithms for reliable detection of steps and heading directions [13]. The main disadvantage of this method is that the phone must be held in the hand and pointed in the direction of the user's movement. Jimenez et al. used an INS (Inertial Navigation System)/EKF (Extended Kalman Filter) framework and a foot-mounted IMU to realize indoor localization [14]. Hoflinger et al. presented a wireless Micro-Inertial Measurement Unit to realize localization in indoor areas with sensor data fusion based on a Kalman filter and ZUPT (Zero Velocity Update) [15]. Zhang et al. presented a novel indoor localization and monitoring system based on inertial sensors for emergency responders [16]; they also used the ZUPT method for localization, with IMUs attached to different segments to monitor the orientation of each human body segment. However, they did not give a clear gait or posture recognition method, and the localization error is relatively large. In order to overcome the accumulative error, more sensors have been added to indoor localization systems: an ultrasonic rangefinder is used to detect the still phase of the ZUPT method in [17], a three-axis magnetometer is used for heading estimation in [18], and Ruiz et al. used a foot-mounted IMU and RFID tags to accurately locate persons indoors [19]. Most of these localization algorithms are based on the ZUPT method, whose positioning accuracy strongly relies on the double integral of the acceleration measured by the inertial sensor. Unfortunately, the accumulative error increases drastically in the repetitive double integration process [20]. Zero-velocity detection is another key technique of the ZUPT method: the heel strike and heel off must be accurately detected during the user's walking movement, since a wrong zero-velocity detection results in wrong starting and ending times of the double integral.
Two kinds of approaches are often used in posture recognition: vision-based approaches and approaches based on inertial sensors. Boulay et al. proposed an approach to recognize human postures (sitting, standing, bending and lying) in video sequences, which combines a 2D approach with a 3D human model [21]. Le et al. proposed a method for human posture recognition using a skeleton provided by a Kinect device [22]. Yang et al. proposed a portable single-camera gait kinematics analysis system with autonomous knee angle measurement and gait event detection functionality [23]. Diraco et al. presented an active vision system for the automatic detection of falls and the recognition of several postures for elderly homecare applications [24]. The main advantage of vision-based approaches is that they are less intrusive, because the cameras are installed in the building rather than worn by users. The disadvantage is that multiple cameras have to be installed in each room; therefore, the cost is high and users may worry about their privacy. In contrast to the vision-based methods, methods based on inertial sensors offer robustness to ambient light, high precision, easy use and low cost. Their disadvantage is also obvious: the inertial sensors have to be worn by the user. Gallagher et al. presented a technique that computes accurate posture estimates in real time from inertial and magnetic sensors [25]. In [26], a mobile three-dimensional posture measurement system was developed based on inertial sensors and smart shoes. Harms et al. [27] analyzed the influence of a smart garment on the posture recognition performance for shoulder and elbow rehabilitation, using garment-attached and skin-attached acceleration sensors. Zhang et al. [28] investigated optimal model selection for posture recognition in home-based healthcare, using the tri-axial acceleration signal obtained by a smart phone. Gjoreski et al. [29] investigated the impact of accelerometer number and placement on the accuracy of posture recognition. Considering that each sensor modality has its own limitations, some researchers have tried to fuse vision and inertial sensor data to improve the recognition accuracy [30].
It should be noted that most existing work focuses on sensor systems with only a posture recognition function or only an indoor localization function; there are currently few studies on integrated sensor systems that combine both. Redondi et al. proposed an integrated system based on wireless sensor networks for patient monitoring, localization and tracking [31], in which an RF (radio frequency)-based localization algorithm was used to realize indoor localization and a bi-axial accelerometer was used to classify four human movements (prone, supine, standing and walking). Lee et al. used wearable sensors to determine a user's location and recognize sitting, standing and walking behaviors [32].
In this paper, we designed a wireless sensor node which collects the acceleration, angular velocity and magnetic field strength of a human body segment. Five wireless sensor nodes are respectively fixed on the two thighs, two shanks and the waist of the user to obtain the posture information, and a standard Kalman filter is used to obtain more precise posture data. The pitch angles of the thigh and waist are used to realize posture recognition based on a commonly used minimum distance classifier. The coordinates of the lower limb joints are calculated online, and the result is used to compute a one-step vector. We propose a novel algorithm based on human kinematics to realize the relative indoor localization, which is different from conventional ZUPT methods. Sparsely distributed active RFID tags are used to correct the positioning error, realizing absolute localization, and an ellipsoidal set-membership filter with incomplete observation is applied to fuse the data and enhance the localization accuracy. The main contribution of this study is to develop a novel wearable sensor system which combines the functions of posture recognition and indoor localization. A new indoor localization method based on RFID tags and IMUs is proposed, which outperforms conventional methods based only on IMU sensors. Compared with indoor localization methods based on wireless technology (e.g., [31]), our proposed system can achieve more accurate localization with fewer anchor nodes. Compared with the dead-reckoning method in [32], our system can recognize more postures in addition to providing high-precision indoor localization. Moreover, the zero-velocity detection of the conventional ZUPT method is no longer needed in our proposed system.
The rest of this paper is organized as follows. Section 2 describes the structure and working principle of our integrated wearable sensor system and gives the design detail of our sensor nodes. Section 3 presents the proposed posture recognition algorithm and indoor localization algorithm with a set-membership filter. Section 4 evaluates the proposed algorithms by various experiments. Finally, Section 5 draws the conclusions.

2. System Design

2.1. System Structure and Working Principle

The structure of the integrated wireless wearable sensor system is shown in Figure 1. The whole system consists of five sensor nodes, a central node for data collection, a data processing unit based on a Samsung Cortex-A8 S5PV210 platform, several active RFID tags and an RFID reader. The sensor nodes are respectively fixed at the waist, both thighs and both shanks of the user (see Figure 2a) and are used to collect the tri-axial acceleration, angular velocity and magnetic field strength of the corresponding body parts. A wireless sensor network is formed by the central node and the five sensor nodes. The data of each sensor node are periodically sent to the central node via the ZigBee wireless network protocol, with a sampling frequency of 20 Hz. After collecting the data of all sensor nodes for one cycle, the central node sends them to the data processing unit via a USB interface. The posture information of each body part is then extracted from the sensory data by the data processing unit, which is used to calculate the attitude angles and the human joint coordinates (see the details in Section 3).
The active RFID tags are deployed at some vital positions of the indoor environment and are used to correct the localization error of the wearable sensors. The RFID reader is connected to the data processing unit through a USB cable and carried by the user. When the user steps into the read range of an active RFID tag, the tag's unique ID is recognized by the RFID reader and sent to the data processing unit. The current position of the user can then be calibrated using the preset position of the corresponding RFID tag. The whole process is shown in Figure 2b.
The whole system can be divided into two parts: the posture recognition subsystem and the localization subsystem. When the user is in a static state, the posture recognition algorithm is used to recognize the user's posture; when the user is in motion, the indoor localization algorithm is used to determine the user's location. The RMS (root mean square) of the angular velocity measured by the tri-axial gyroscope at the waist is used to judge whether the user is static. The RMS is calculated by:
$$\mathrm{RMS} = \sqrt{\omega_x^2 + \omega_y^2 + \omega_z^2}$$
where $\omega_x$, $\omega_y$ and $\omega_z$ are the outputs of the tri-axial gyroscope at the waist. When the user is static, the RMS is close to zero. Given a small threshold value $\tau$, the user is considered to be in a static state whenever $\mathrm{RMS} < \tau$. The flow chart of the whole system is shown in Figure 3.
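As an illustration, the static test amounts to a few lines of code. The following is a minimal sketch assuming gyroscope samples arrive as arrays in rad/s; the threshold value is hypothetical and would be tuned experimentally:

```python
import numpy as np

def is_static(omega, tau=0.1):
    """Decide whether the user is static from the waist gyroscope output.

    omega: (wx, wy, wz) angular velocity sample in rad/s.
    tau:   small threshold; the 0.1 rad/s default is a hypothetical value.
    """
    rms = np.sqrt(omega[0]**2 + omega[1]**2 + omega[2]**2)
    return rms < tau

# Route each 20 Hz sample to the proper subsystem.
sample = np.array([0.02, -0.01, 0.03])
mode = "posture recognition" if is_static(sample) else "indoor localization"
```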

2.2. Sensor Node Design

In this study, we designed a small-sized and light-weight sensor node which consists of three parts: the control module, the power module and the sensor module (see Figure 4). The Texas Instruments CC2530 chip was chosen as the control module, which communicates with the sensor module through an I2C bus to obtain the posture data. The CC2530 enables robust network nodes to be built at a very low total bill-of-material cost, and combined with the ZigBee protocol stack from Texas Instruments, the CC2530F256 provides a complete ZigBee solution. Therefore, we can set up a simple and reliable five-to-one wireless data transmission network based on the CC2530.
The Micro-Electromechanical System (MEMS) sensor GY-83 was chosen as the posture sensor module, which consists of a tri-axial accelerometer, a gyroscope and a magnetometer. The full ranges of the acceleration, the angular velocity and the magnetic field intensity are ±4 g, ±500°/s and ±1.3 gauss, respectively. As for the power module, we use a rechargeable lithium battery and a low-dropout regulator (LDO) TPS7333Q to provide a stable voltage of 3.3 V. The whole sensor node is 4.8 cm long and 4.3 cm wide, as shown in Figure 5.

3. Related Algorithm Description

3.1. The Calculation of Attitude Angle for Single Sensor Node

The yaw angle ψ, roll angle ϕ and pitch angle θ are commonly used in inertial navigation to represent the carrier attitude. These angles are referred to as the attitude angles. To calculate the attitude angles, coordinate systems are established first. System {Fb} is defined as the base coordinate system, with the x-axis pointing to magnetic north and the z-axis pointing to the ground. The y-axis of system {Fb} is determined by the right-hand rule. We also define a sensor coordinate system {Fs} fixed on the sensor itself (see Figure 6).
Kalman estimation performs well in data fusion, and it is widely used in various applications including low-cost inertial navigation systems [33,34,35,36]. Most inertial navigation systems use quaternions as the state variables of the Kalman filter. For small wearable sensor systems, considering the computational complexity of quaternion Kalman estimation, Zhu proposed estimation algorithms that use the acceleration and magnetic field strength as state variables to simplify the calculation [37,38].
For a general rotation, in which the coordinate system rotates by an angle ϑ around a unit vector n, the transformation can be described by the following rotation matrix:
$$\mathrm{Rot}(\mathbf{n},\vartheta) = \begin{bmatrix} n_x^2\,\mathrm{Vers}\,\vartheta + C\vartheta & n_x n_y\,\mathrm{Vers}\,\vartheta - n_z S\vartheta & n_x n_z\,\mathrm{Vers}\,\vartheta + n_y S\vartheta \\ n_x n_y\,\mathrm{Vers}\,\vartheta + n_z S\vartheta & n_y^2\,\mathrm{Vers}\,\vartheta + C\vartheta & n_y n_z\,\mathrm{Vers}\,\vartheta - n_x S\vartheta \\ n_x n_z\,\mathrm{Vers}\,\vartheta - n_y S\vartheta & n_y n_z\,\mathrm{Vers}\,\vartheta + n_x S\vartheta & n_z^2\,\mathrm{Vers}\,\vartheta + C\vartheta \end{bmatrix} \tag{1}$$
where $\mathrm{Vers}\,\vartheta = 1 - \cos\vartheta$, $S\vartheta = \sin\vartheta$ and $C\vartheta = \cos\vartheta$. $\mathbf{n} = [n_x\ n_y\ n_z]^T$ denotes the unit vector along the rotation axis.
Considering the dynamic process of a posture sensor, let us use $t$ and $t + \Delta t$ respectively to denote the start moment and the end moment of a process. Assuming that the period $\Delta t$ is very small, we have $\cos\vartheta \approx 1$ and $\sin\vartheta \approx \vartheta$ at time $t$. Thus Equation (1) can be written as:
$$\mathrm{Rot}(\mathbf{n}(t),\vartheta) \approx \begin{bmatrix} 1 & -n_z(t)\vartheta & n_y(t)\vartheta \\ n_z(t)\vartheta & 1 & -n_x(t)\vartheta \\ -n_y(t)\vartheta & n_x(t)\vartheta & 1 \end{bmatrix} = \begin{bmatrix} 1 & -\omega_z(t)\Delta t & \omega_y(t)\Delta t \\ \omega_z(t)\Delta t & 1 & -\omega_x(t)\Delta t \\ -\omega_y(t)\Delta t & \omega_x(t)\Delta t & 1 \end{bmatrix} \tag{2}$$
where $\omega_x(t)$, $\omega_y(t)$ and $\omega_z(t)$ are the outputs of the tri-axial gyroscope, which satisfy the following equations:
$$\omega_x(t) = n_x(t)\,\frac{\vartheta}{\Delta t}\bigg|_{\Delta t \to 0}, \quad \omega_y(t) = n_y(t)\,\frac{\vartheta}{\Delta t}\bigg|_{\Delta t \to 0}, \quad \omega_z(t) = n_z(t)\,\frac{\vartheta}{\Delta t}\bigg|_{\Delta t \to 0} \tag{3}$$
The rotation transformation of the posture sensor from time $t$ to $t + \Delta t$ can be expressed by:
$$\begin{aligned} [g_x(t+\Delta t)\ \ g_y(t+\Delta t)\ \ g_z(t+\Delta t)]^T &= \mathrm{Rot}(\mathbf{n}(t),\vartheta)\,[g_x(t)\ \ g_y(t)\ \ g_z(t)]^T \\ [H_x(t+\Delta t)\ \ H_y(t+\Delta t)\ \ H_z(t+\Delta t)]^T &= \mathrm{Rot}(\mathbf{n}(t),\vartheta)\,[H_x(t)\ \ H_y(t)\ \ H_z(t)]^T \end{aligned} \tag{4}$$
where $[g_x(t)\ g_y(t)\ g_z(t)]^T$ is the gravity acceleration vector and $[H_x(t)\ H_y(t)\ H_z(t)]^T$ is the magnetic field intensity vector in the sensor system {Fs}.
For the calculation in a digital processor, the dynamic equations should be discretized. With the sampling period denoted by $\Delta t$, the dynamic discrete model of the Kalman filter is given by:
$$\begin{aligned} S(k) &= A(k)\,S(k-1) + W(k) \\ Z(k) &= S(k) + V(k) \end{aligned} \tag{5}$$
where W(k) and V(k) respectively denote the process noise and the observation noise.
The state vector is defined by
$$S = [g_x\ \ g_y\ \ g_z\ \ H_x\ \ H_y\ \ H_z]^T \tag{6}$$
And the observation vector satisfies:
$$Z = [a_x\ \ a_y\ \ a_z\ \ h_x\ \ h_y\ \ h_z]^T \tag{7}$$
where $a_x$, $a_y$ and $a_z$ are the outputs of the tri-axial accelerometer, and $h_x$, $h_y$ and $h_z$ are the outputs of the tri-axial magnetometer.
From Equations (2) and (4), the process matrix $A(k)$ at time $k$ can be obtained by:
$$A(k) = \begin{bmatrix} 1 & -\omega_z(k)\Delta t & \omega_y(k)\Delta t & 0 & 0 & 0 \\ \omega_z(k)\Delta t & 1 & -\omega_x(k)\Delta t & 0 & 0 & 0 \\ -\omega_y(k)\Delta t & \omega_x(k)\Delta t & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & -\omega_z(k)\Delta t & \omega_y(k)\Delta t \\ 0 & 0 & 0 & \omega_z(k)\Delta t & 1 & -\omega_x(k)\Delta t \\ 0 & 0 & 0 & -\omega_y(k)\Delta t & \omega_x(k)\Delta t & 1 \end{bmatrix} \tag{8}$$
At each time $k$, the optimal estimate of the state vector is computed by the standard Kalman filter procedure and denoted by:
$$\hat S(k) = [\hat g_x\ \ \hat g_y\ \ \hat g_z\ \ \hat H_x\ \ \hat H_y\ \ \hat H_z]^T \tag{9}$$
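To make the filtering procedure concrete, the sketch below assembles the process matrix of Equation (8) and runs one predict/update cycle of the standard Kalman filter on the model of Equation (5). This is an illustrative implementation only: the observation matrix is the identity (the full state vector is measured directly), and the noise covariances Q and R are hypothetical tuning values.

```python
import numpy as np

def process_matrix(omega, dt):
    """Block-diagonal process matrix A(k) built from the gyroscope outputs
    (Equation (8)); the same 3x3 block acts on the gravity and magnetic parts."""
    wx, wy, wz = omega
    B = np.array([[1.0,    -wz*dt,  wy*dt],
                  [wz*dt,   1.0,   -wx*dt],
                  [-wy*dt,  wx*dt,  1.0]])
    A = np.zeros((6, 6))
    A[:3, :3] = B
    A[3:, 3:] = B
    return A

def kalman_step(S, P, omega, z, dt, Q, R):
    """One predict/update cycle of the standard Kalman filter on the model of
    Equation (5); z stacks the accelerometer and magnetometer outputs."""
    A = process_matrix(omega, dt)
    S_pred = A @ S                              # time update
    P_pred = A @ P @ A.T + Q
    K = P_pred @ np.linalg.inv(P_pred + R)      # gain with H = I
    S_new = S_pred + K @ (z - S_pred)           # measurement update
    P_new = (np.eye(6) - K) @ P_pred
    return S_new, P_new
```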
The rotational transformation matrix from the base coordinate system to the sensor coordinate system is defined as $C_b^s$. The rotation is realized by the following procedure. First, rotate system {Fs} around the positive direction of the y-axis by angle θ (range −180° to 180°). Then rotate it around the positive direction of the x-axis by angle ϕ (range −90° to 90°). Finally, rotate it around the positive direction of the z-axis by angle ψ (range −180° to 180°). The whole procedure is shown in Figure 6.
Thus, we have:
$$C_b^s = \mathrm{Rot}(y,\theta)\,\mathrm{Rot}(x,\phi)\,\mathrm{Rot}(z,\psi) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} C\theta C\psi + S\theta S\phi S\psi & C\theta S\psi - S\theta S\phi C\psi & S\theta C\phi \\ -C\phi S\psi & C\phi C\psi & S\phi \\ -S\theta C\psi + C\theta S\phi S\psi & -S\theta S\psi - C\theta S\phi C\psi & C\theta C\phi \end{bmatrix} \tag{10}$$
where $CX$ and $SX$ represent $\cos X$ and $\sin X$, respectively.
The optimal estimate of the gravity acceleration vector in the sensor coordinate system is denoted by $[\hat g_x\ \hat g_y\ \hat g_z]^T$, and the optimal estimate of the magnetic field intensity vector in the sensor system by $[\hat H_x\ \hat H_y\ \hat H_z]^T$. The representations of the gravity and geomagnetic intensity in the different coordinate systems are related as follows:
$$[\hat g_x\ \ \hat g_y\ \ \hat g_z]^T = C_b^s\, g^{earth} = C_b^s\,[0\ \ 0\ \ 1]^T \tag{11}$$
$$[H_{xb}\ \ 0\ \ H_{zb}]^T = \mathrm{Rot}(z,\psi)\,[H_{xh}\ \ H_{yh}\ \ H_{zh}]^T, \qquad [H_{xh}\ \ H_{yh}\ \ H_{zh}]^T = \mathrm{Rot}(x,\phi)\,\mathrm{Rot}(y,\theta)\,[\hat H_x\ \ \hat H_y\ \ \hat H_z]^T \tag{12}$$
where $[H_{xh}\ H_{yh}\ H_{zh}]^T$ is the magnetic field vector in the horizontal-plane coordinate system and $[H_{xb}\ 0\ H_{zb}]^T$ is the magnetic field vector in the base coordinate system {Fb}.
From Equations (10)–(12), the yaw angle ψ, roll angle ϕ and pitch angle θ can be calculated as follows:
$$\psi = \begin{cases} \arctan(H_{yh}/H_{xh}) & H_{xh} > 0 \\ \pi + \arctan(H_{yh}/H_{xh}) & H_{yh} > 0 \ \text{and}\ H_{xh} < 0 \\ -\pi + \arctan(H_{yh}/H_{xh}) & H_{yh} < 0 \ \text{and}\ H_{xh} < 0 \\ \pi/2 & H_{yh} > 0 \ \text{and}\ H_{xh} = 0 \\ -\pi/2 & H_{yh} < 0 \ \text{and}\ H_{xh} = 0 \end{cases} \tag{13}$$
$$\phi = \arctan\!\left( \hat g_y \Big/ \sqrt{\hat g_x^2 + \hat g_z^2} \right) \tag{14}$$
$$\theta = \begin{cases} \arctan(\hat g_x/\hat g_z) & \hat g_z > 0 \\ \pi + \arctan(\hat g_x/\hat g_z) & \hat g_x > 0 \ \text{and}\ \hat g_z < 0 \\ -\pi + \arctan(\hat g_x/\hat g_z) & \hat g_x < 0 \ \text{and}\ \hat g_z < 0 \\ \pi/2 & \hat g_x > 0 \ \text{and}\ \hat g_z = 0 \\ -\pi/2 & \hat g_x < 0 \ \text{and}\ \hat g_z = 0 \end{cases} \tag{15}$$
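As an illustration, the quadrant case analysis of Equations (13) and (15) collapses into atan2 calls. The following sketch assumes the filtered vectors of Equation (9) as inputs and returns the attitude angles in radians:

```python
import numpy as np

def attitude_angles(g_hat, H_hat):
    """Yaw, roll and pitch (radians) from the filtered gravity and magnetic
    vectors of Equation (9); atan2 absorbs the quadrant case analysis."""
    gx, gy, gz = g_hat
    phi = np.arctan2(gy, np.hypot(gx, gz))      # roll, Equation (14)
    theta = np.arctan2(gx, gz)                  # pitch, Equation (15)
    # Rotate the magnetic vector into the horizontal plane (Equation (12)).
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  np.cos(phi), np.sin(phi)],
                   [0.0, -np.sin(phi), np.cos(phi)]])
    Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
    Hh = Rx @ Ry @ np.asarray(H_hat)
    psi = np.arctan2(Hh[1], Hh[0])              # yaw, Equation (13)
    return psi, phi, theta
```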

3.2. Posture Recognition

The attitude angles of the thighs, shanks and waist are represented by the yaw angle ψ, roll angle ϕ and pitch angle θ of the sensors on the corresponding body segments, calculated by Equations (13)–(15). To carry out efficient, real-time posture recognition, we first need to extract the most important features. As shown in Figure 7a, the pitch angle θ represents the tilt angle between the body segment and the flat ground. For simplicity of computation, here we assume that the range of the pitch angle is within [0°, 360°]. The pitch angles of the left thigh and waist are chosen as the features to distinguish five postures in daily life (sitting, standing, squatting, supine and prone). We collected 30 sampling points for each posture, and Figure 7b gives the pitch angle scatter diagram. As Figure 7b shows, the difference in pitch angles between any two postures is very obvious, so the five postures can be distinguished easily using these two features.
A commonly used minimum distance classifier is applied to recognize the five postures. Let $m_k = [\theta_{tk}, \theta_{wk}]$ $(k = 1, 2, \ldots, 5)$ denote the mean vector of the k-th posture, where k = 1, 2, 3, 4, 5 respectively denote the standing, sitting, squatting, supine and prone postures. $\theta_{tk}$ is the pitch angle of the left thigh in the k-th posture, and $\theta_{wk}$ is the pitch angle of the waist in the k-th posture. To realize posture recognition, K training samples of each posture are first used to estimate the five mean vectors $m_k$ by the following equation:
$$m_k = \frac{1}{K} \sum_{i=1}^{K} y_{ik}, \quad (k = 1, 2, \ldots, 5) \tag{16}$$
where K = 50 and $y_{ik}$ is the i-th training sample of the k-th posture.
Then a new sampling point $y_j = [\theta_{tj}, \theta_{wj}]$ is assigned to class $m_k$ if its Euclidean distance to $m_k$ is smaller than its distance to all other class means:
$$y_j \in m_k \quad \text{if} \quad d(y_j, m_k) = \min_i\, d(y_j, m_i), \quad i = 1, 2, \ldots, 5 \tag{17}$$
where $d(y_j, m_i) = \sqrt{(\theta_{tj} - \theta_{ti})^2 + (\theta_{wj} - \theta_{wi})^2}$, $(i = 1, 2, \ldots, 5)$.
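A compact sketch of this classifier is given below; the posture labels and the array layout are illustrative assumptions, not the exact implementation:

```python
import numpy as np

POSTURES = ["standing", "sitting", "squatting", "supine", "prone"]

def train_means(samples):
    """samples[k]: K x 2 array of (left-thigh pitch, waist pitch) features
    collected for posture k; returns the five mean vectors (Equation (16))."""
    return np.array([s.mean(axis=0) for s in samples])

def classify(y, means):
    """Assign the feature vector y = (theta_t, theta_w) to the posture whose
    mean vector is nearest in Euclidean distance (Equation (17))."""
    d = np.linalg.norm(means - np.asarray(y), axis=1)
    return POSTURES[int(np.argmin(d))]
```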

3.3. Localization Algorithm Based on Inertial Navigation

3.3.1. The Calculation of Joints Coordinates

To calculate the coordinates of the human body joints, we first define several important coordinate systems (see Figure 8). System {b} is the base reference coordinate system, whose origin is fixed at the midpoint between the user's two hip joints and whose orientation is the same as {Fb} given in Section 3.1. System {$s_i$} is the sensor coordinate system fixed on sensor i $(i = 1, 2, \ldots, 5)$, with its z-axis perpendicular to the sensor surface and its y-axis parallel to the sensor surface. For each hip or knee joint, there is one coordinate system fixed on it with the same orientation as system {b} and another with the same orientation as the corresponding sensor coordinate system {$s_i$}. For example, system {$b_{L2}$} is fixed on the left knee joint with the z-axis pointing downwards and the x-axis pointing to magnetic north, while system {$L_2 s_3$} has its origin at the left knee joint and the same orientation as the sensor coordinate system {$s_3$}.
From Equations (13)–(15), we can get each sensor's yaw angle $\psi_i$, roll angle $\phi_i$ and pitch angle $\theta_i$ in the sensor coordinate system {$s_i$}. $C_{s_i}^b$ is the rotation transformation matrix from the sensor coordinate system {$s_i$} to the base coordinate system {b}, given by:
$$C_{s_i}^b = \mathrm{Rot}(z,\psi_i)\,\mathrm{Rot}(x,\phi_i)\,\mathrm{Rot}(y,\theta_i) = (C_b^{s_i})^{-1}, \quad (i = 1, 2, \ldots, 5) \tag{18}$$
Then we can compute the coordinate values of all joint points by:
$$\begin{aligned} X_{L1}^{b} &= C_{s_1}^{b}\, X_{L1}^{s_1}, && X_{L1}^{s_1} = [0\ \ l_{waist}/2\ \ 0]^T \\ X_{L2}^{b} &= C_{L_1 s_2}^{b_{L1}}\, X_{L2}^{L_1 s_2} + X_{L1}^{b}, && X_{L2}^{L_1 s_2} = [l_{thigh}\ \ 0\ \ 0]^T, \quad C_{L_1 s_2}^{b_{L1}} = C_{s_2}^{b} \\ X_{L3}^{b} &= C_{L_2 s_3}^{b_{L2}}\, X_{L3}^{L_2 s_3} + X_{L2}^{b}, && X_{L3}^{L_2 s_3} = [l_{shank}\ \ 0\ \ 0]^T, \quad C_{L_2 s_3}^{b_{L2}} = C_{s_3}^{b} \\ X_{R1}^{b} &= C_{s_1}^{b}\, X_{R1}^{s_1}, && X_{R1}^{s_1} = [0\ \ {-l_{waist}/2}\ \ 0]^T \\ X_{R2}^{b} &= C_{R_1 s_4}^{b_{R1}}\, X_{R2}^{R_1 s_4} + X_{R1}^{b}, && X_{R2}^{R_1 s_4} = [l_{thigh}\ \ 0\ \ 0]^T, \quad C_{R_1 s_4}^{b_{R1}} = C_{s_4}^{b} \\ X_{R3}^{b} &= C_{R_2 s_5}^{b_{R2}}\, X_{R3}^{R_2 s_5} + X_{R2}^{b}, && X_{R3}^{R_2 s_5} = [l_{shank}\ \ 0\ \ 0]^T, \quad C_{R_2 s_5}^{b_{R2}} = C_{s_5}^{b} \end{aligned} \tag{19}$$
where $X_{L1}^b$, $X_{L2}^b$ and $X_{L3}^b$ are the coordinates of the left hip, knee and ankle joints in the base coordinate system {b}, and $X_{R1}^b$, $X_{R2}^b$ and $X_{R3}^b$ are the coordinates of the right hip, knee and ankle joints in {b}. $l_{waist}$ is the distance between the user's two hip joints, $l_{thigh}$ is the length of the user's thigh and $l_{shank}$ is the length of the user's shank. $X_{L1}^{s_1}$ is the coordinate of the left hip joint in system {$s_1$}, $X_{L2}^{L_1 s_2}$ is the coordinate of the left knee joint in system {$L_1 s_2$}, and $X_{L3}^{L_2 s_3}$, $X_{R1}^{s_1}$, $X_{R2}^{R_1 s_4}$ and $X_{R3}^{R_2 s_5}$ are similarly defined. Thus, the user's gait is well described by all the joint points in the base reference coordinate system.
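The chain computation of Equation (19) for the left leg can be sketched as follows (the right leg is symmetric); the rotation matrices are assumed to come from Equation (18) and the segment lengths from Table 1:

```python
import numpy as np

def left_leg_joints(C_s1, C_s2, C_s3, l_waist, l_thigh, l_shank):
    """Left hip, knee and ankle coordinates in the base frame {b}.

    C_s1, C_s2, C_s3: 3x3 rotation matrices of the waist, left-thigh and
    left-shank sensors (sensor frame -> base frame, Equation (18)).
    Segment lengths are the subject parameters listed in Table 1.
    """
    hip = C_s1 @ np.array([0.0, l_waist / 2.0, 0.0])       # X_L1^b
    knee = C_s2 @ np.array([l_thigh, 0.0, 0.0]) + hip      # X_L2^b
    ankle = C_s3 @ np.array([l_shank, 0.0, 0.0]) + knee    # X_L3^b
    return hip, knee, ankle
```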

3.3.2. Relative Localization Algorithm Based on Step Length

In this subsection, we propose a relative localization algorithm based on the step length. From Equation (19) in Section 3.3.1, the coordinates of the right and left ankle joints in the base coordinate system {b} can be calculated, denoted respectively by $X_{R3}^b$ and $X_{L3}^b$. The length of one step is the distance between the right ankle and the left ankle, which can be calculated as $\| X_{R3}^b - X_{L3}^b \|$.
The recognition of one step is the most important problem in the relative localization. Let β denote the angle between the two thighs; then we have:
$$\cos\beta = \frac{\left\langle C_{s_2}^{b_{L1}} X_{L2}^{s_2},\ C_{s_4}^{b_{R1}} X_{R2}^{s_4} \right\rangle}{\left\| C_{s_2}^{b_{L1}} X_{L2}^{s_2} \right\| \left\| C_{s_4}^{b_{R1}} X_{R2}^{s_4} \right\|} \tag{20}$$
From the research in [39], we know that in a gait cycle the angle β increases to a maximum value and then decreases (see Figure 9). One step is completed when the front heel touches the ground while the rear heel is about to lift, and the angle β reaches its maximum value at this moment. If the sensory data at this moment are recorded, the one-step vector can then be obtained.
Since only the 2D ground coordinates are needed in the indoor localization, we focus on the x-axis and y-axis coordinates. Let $X_{front}^b = [x_{front}\ \ y_{front}]^T$ denote the coordinate of the front ankle joint and $X_{rear}^b = [x_{rear}\ \ y_{rear}]^T$ the coordinate of the rear ankle joint. Then the one-step vector can be represented by:
$$L^b = X_{front}^b - X_{rear}^b = [x_{front} - x_{rear}\ \ \ y_{front} - y_{rear}]^T \tag{21}$$
After walking for n steps, the displacement of the user is calculated by $D_n = \sum_{i=1}^{n} L_i^b$, where $L_i^b$ represents the i-th one-step vector. It is worth noting that this displacement is calculated in the base coordinate system {b}, while an indoor coordinate system is established for the indoor localization. The x-axis of the base coordinate system points to magnetic north, but the x-axis of the indoor coordinate system is determined by the building. There is an angle between the base coordinate system and the indoor coordinate system (see Figure 10a), which is denoted by α. The rotational transformation from the base coordinate system to the indoor coordinate system is calculated by:
$$C_b^{indoor} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} \tag{22}$$
$$L_i^{indoor} = C_b^{indoor} L_i^b \tag{23}$$
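The resulting dead-reckoning loop can be sketched as follows: detect a local maximum of the inter-thigh angle β, record the one-step vector at that instant, rotate it into the indoor frame and accumulate. The peak detector used here is a simplification of the actual step recognition:

```python
import numpy as np

def to_indoor(step_vec_b, alpha):
    """Rotate a one-step vector from the base frame {b} into the indoor
    frame (Equations (22) and (23))."""
    C = np.array([[ np.cos(alpha), np.sin(alpha)],
                  [-np.sin(alpha), np.cos(alpha)]])
    return C @ step_vec_b

def dead_reckoning(betas, step_vectors_b, alpha, start=(0.0, 0.0)):
    """Accumulate the user displacement over detected steps.

    betas:          inter-thigh angle per sample (Equation (20)).
    step_vectors_b: candidate one-step vectors per sample (Equation (21)).
    """
    pos = np.asarray(start, dtype=float).copy()
    track = [pos.copy()]
    for i in range(1, len(betas) - 1):
        # A local maximum of beta marks heel strike: one completed step.
        if betas[i] > betas[i - 1] and betas[i] >= betas[i + 1]:
            pos += to_indoor(step_vectors_b[i], alpha)
            track.append(pos.copy())
    return np.array(track)
```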

3.3.3. Set-Membership Filter Algorithm with Incomplete Observation

Note that there is a small error in the measurement of each one-step vector by the proposed wearable sensor system. As the number of steps increases, the cumulative error grows quickly, resulting in a larger and larger positioning error, which is unacceptable for localization.
To solve this problem, we use fixed-position tags for error correction. Some active RFID tags are installed at vital positions. When the user walks into the area of an RFID tag, the RFID reader carried by the user recognizes the ID of that tag. Once the reader finds a tag, a data fusion algorithm is needed to fuse the position estimate from the relative localization algorithm with the fixed position of the RFID tag. A sub-optimal Kalman filter was used in [40], which has the limitation that the noise must be white Gaussian noise; the Kalman filter performs poorly for non-Gaussian noises [41]. Assuming instead that the process and measurement noises are unknown-but-bounded (UBB), an ellipsoidal set-membership filter is applied as follows.
First, the system model of our localization is established as:
$$\begin{aligned} X_k &= F(X_{k-1}, L_{k-1}, \varphi_{k-1}) + w_{k-1} \\ Z_k &= \Gamma_k\,(H X_k + v_k) \end{aligned} \tag{24}$$
where $X_k = [x_k\ y_k]^T$ is the position state vector; $x_k$ and $y_k$ are the x-axis and y-axis coordinates of the user in the indoor coordinate system, and $X_k$ and $X_{k-1}$ respectively denote the k-th and (k−1)-th position states. $Z_k$ is the observation vector. $w_k$ and $v_k$ respectively denote the process noise and the observation noise, and H is the observation matrix. $\Gamma_k$ is an unknown binary sequence composed of 0's and 1's: $Z_k$ is available if $\Gamma_k = 1$ and missing if $\Gamma_k = 0$. F is the nonlinear relation between $X_k$ and $X_{k-1}$, which satisfies:
$$F(X_{k-1}, L_{k-1}, \varphi_{k-1}) = \begin{bmatrix} x_{k-1} + L_{k-1} \cos\varphi_{k-1} \\ y_{k-1} + L_{k-1} \sin\varphi_{k-1} \end{bmatrix} \tag{25}$$
where $L_{k-1}$ denotes the (k−1)-th step length and $\varphi_{k-1}$ is the direction angle between the (k−1)-th step vector and the x-axis of the indoor coordinate system (see Figure 10b).
The description of an ellipsoid is given by a set:
$$\Omega = \left\{ x : (x - a)^T P^{-1} (x - a) \le \sigma^2 \right\} \tag{26}$$
where a is the center of the ellipsoid, x is an arbitrary possible value within the ellipsoid, and P is a positive definite matrix that determines the shape of the ellipsoid. Although σ is not a physically interpretable measure of the size of the ellipsoid, it has been noted in [42] that σ is usually considered a measure of optimality in ellipsoidal set-membership filtering. In the following, we write the ellipsoid as $\Omega(a, P, \sigma)$.
In the set-membership framework, the process noise can be summarized as the unknown-but-bounded (UBB) noises which belong to the given set:
$$W_k = \left\{ w_k : w_k^T Q_k^{-1} w_k \le \sigma_w^2 \right\} \tag{27}$$
where $Q_k$ is a known positive definite matrix and $\sigma_w$ is a known positive scalar which represents the upper bound of the process noise.
The observation noise v k belongs to:
$$V_k = \left\{ v_k : v_k^T v_k \le \gamma^2 \right\} \tag{28}$$
where γ is also a known positive scalar which represents the upper bound of the observation noise.
The initial state $X_0$ belongs to a given set $\Omega(\hat X_0, P_0, \sigma_0)$.
The first step of the set-membership filter is the time update, which yields a predicted value. Assuming that the state vector satisfies $X \in \Omega(\hat X_k, P_k, \sigma_k)$ and defining the prediction ellipsoid containing the state at time k+1 as $X \in \Omega(X_{k+1|k}, P_{k+1|k}, \sigma_{k+1|k})$, we have:
$$X_{k+1|k} = F_k \hat X_k \tag{29}$$
$$P_{k+1|k} = (1 + p_k)\, F_k P_k F_k^T + (1 + p_k^{-1})\, \frac{\sigma_w^2}{\sigma_k^2}\, Q_k \tag{30}$$
$$\sigma_{k+1|k} = \sigma_k \tag{31}$$
where
$$F_k = \frac{\partial F(X, L_k, \varphi_k)}{\partial X}\bigg|_{X = \hat X_k} \tag{32}$$
Equations (29)–(31) are similar to those presented in [43], and the method of selecting $p_k$ can be found in [44], which can be summarized as follows: if $p_k$ satisfies
$$p_k = \frac{\sigma_w}{\sigma_k} \sqrt{\frac{\mathrm{tr}(Q_k)}{\mathrm{tr}(F_k P_k F_k^T)}} \tag{33}$$
then the trace of the matrix $P_{k+1|k}$ achieves its minimum.
The whole process of time updating is shown in Figure 11a.
The second step of our filter is the observation updating. Considering the possible loss of measurements, the observation can be categorized into two cases: the observation is available and the observation is missing.
$\Omega(\hat X_{k+1}, P_{k+1}, \sigma_{k+1})$ is defined as the final estimated ellipsoid of our set-membership filter. If an observation is available, then we have:
$$P_{k+1} = (I - K_{k+1} H)\, P_{k+1|k} \tag{34}$$
$$\hat X_{k+1} = X_{k+1|k} + K_{k+1} e_{k+1} \tag{35}$$
$$\sigma_{k+1}^2 = \sigma_{k+1|k}^2 + q_{k+1} \gamma^2 - q_{k+1}\, e_{k+1}^T S_{k+1}^{-1} e_{k+1} \tag{36}$$
where
$$S_{k+1} = I + q_{k+1}\, H P_{k+1|k} H^T \tag{37}$$
$$e_{k+1} = Z_{k+1} - H X_{k+1|k} \tag{38}$$
$$K_{k+1} = P_{k+1|k} H^T \left( \frac{1}{q_{k+1}} I + H P_{k+1|k} H^T \right)^{-1} \tag{39}$$
$q_{k+1}$ is a parameter that determines the property of the outer bounding ellipsoid $\Omega(\hat X_{k+1}, P_{k+1}, \sigma_{k+1})$. The method of selecting $q_{k+1}$ has been discussed in [45]. If $q_{k+1}$ satisfies:
$$q_{k+1} = \begin{cases} 0 & \|e_{k+1}\| \le \gamma \\ \dfrac{1}{g_{k+1}} \left( \dfrac{\|e_{k+1}\|}{\gamma} - 1 \right) & \|e_{k+1}\| > \gamma \end{cases} \tag{40}$$
where $g_{k+1}$ is the maximum singular value of $P_{k+1|k}$, then $\sigma_{k+1}^2$ achieves its minimum.
The whole process of observation updating without missing observations is summarized in Figure 11b.
When the observation is missing, we directly take the prediction ellipsoid $\Omega(X_{k+1|k}, P_{k+1|k}, \sigma_{k+1|k})$ as $\Omega(\hat X_{k+1}, P_{k+1}, \sigma_{k+1})$, that is:
$$P_{k+1} = P_{k+1|k} \tag{41}$$
$$\hat X_{k+1} = X_{k+1|k} \tag{42}$$
$$\sigma_{k+1} = \sigma_{k+1|k} \tag{43}$$
The whole set-membership filter with incomplete observation is summarized as Algorithm 1.
Algorithm 1: Set-membership filter with incomplete observation
Require: $\hat X_k$, $F_k$, $\sigma_k$, $\sigma_w$, $P_k$, $\Gamma_{k+1}$, γ
   1: Calculate $X_{k+1|k}$ from Equation (29)
   2: Select the parameter $p_k$ from Equation (33)
   3: Calculate $\sigma_{k+1|k}$ from Equation (31) and $P_{k+1|k}$ from Equation (30)
   4: if $\Gamma_{k+1} = 1$ then
   5:  Select the parameter $q_{k+1}$ from Equation (40)
   6:  Calculate $\hat X_{k+1}$ from Equation (35), $P_{k+1}$ from Equation (34) and $\sigma_{k+1}$ from Equation (36)
   7: else
   8:  Calculate $\hat X_{k+1}$ from Equation (42), $P_{k+1}$ from Equation (41) and $\sigma_{k+1}$ from Equation (43)
   9: end if
  10: return $\hat X_{k+1}$, $P_{k+1}$, $\sigma_{k+1}$
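For illustration, Algorithm 1 can be transcribed into a short routine. This sketch follows Equations (24)–(43) under the assumptions that H is the identity (an RFID tag observes the full 2D position) and that all tuning constants are supplied by the caller:

```python
import numpy as np

def smf_step(x_hat, P, sigma, L, phi, Q, sigma_w, gamma, z=None):
    """One cycle of the set-membership filter with incomplete observation.

    z is the 2D RFID tag position, or None when no tag is read
    (Gamma_{k+1} = 0). H is taken as the identity matrix.
    """
    # Time update (Equations (29)-(33)); F_k = I for the model of Eq. (25).
    Fk = np.eye(2)
    x_pred = x_hat + L * np.array([np.cos(phi), np.sin(phi)])
    p = (sigma_w / sigma) * np.sqrt(np.trace(Q) / np.trace(Fk @ P @ Fk.T))
    P_pred = (1 + p) * Fk @ P @ Fk.T + (1 + 1 / p) * (sigma_w**2 / sigma**2) * Q
    sigma_pred = sigma
    if z is None:                       # observation missing: Eqs. (41)-(43)
        return x_pred, P_pred, sigma_pred
    # Observation update (Equations (34)-(40)).
    e = z - x_pred
    if np.linalg.norm(e) <= gamma:      # q = 0: prediction already consistent
        return x_pred, P_pred, sigma_pred
    g = np.linalg.svd(P_pred, compute_uv=False)[0]   # max singular value
    q = (np.linalg.norm(e) / gamma - 1.0) / g
    S = np.eye(2) + q * P_pred
    K = P_pred @ np.linalg.inv(np.eye(2) / q + P_pred)
    x_new = x_pred + K @ e
    P_new = (np.eye(2) - K) @ P_pred
    sigma2 = sigma_pred**2 + q * gamma**2 - q * e @ np.linalg.inv(S) @ e
    return x_new, P_new, np.sqrt(max(sigma2, 0.0))
```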

4. Experimental Results and Discussion

4.1. Posture Recognition Using the Wireless Wearable Sensor System

Four male subjects took part in the experiments voluntarily. The physical parameters of subject A (167 cm, 60 kg), subject B (178 cm, 65 kg), subject C (168 cm, 62 kg) and subject D (168 cm, 75 kg) are shown in Table 1.
Each subject was asked to wear the sensor system and then hold the standing, sitting, squatting, supine and prone postures for a given period of time, respectively. The five postures are shown in Figure 12. Using the method described in Section 3.2, we first established the mean vector of each posture from a set of training data. Each subject then walked freely and suddenly held a certain posture for a short time, so that the posture recognition algorithm could identify the static posture online. Each subject repeated 50 experimental trials for each posture; the success rates of posture recognition are shown in Table 2. Compared with the posture recognition method in [31], in which only an accelerometer was used, more postures are recognized because we combine the posture information of two body segments (the left thigh and the waist).

4.2. One-Step Vector Measurement Experiments

The one-step length calculation is the basis of the indoor localization algorithm. Before the localization experiments, one-step length experiments were conducted to evaluate the performance of the proposed sensor system. Each subject was asked to take single steps of different lengths and in different directions. The measurements of a ruler were used as the reference values. The experimental setup is shown in Figure 13. Footprints were marked on the floor, and the subject was asked to stand exactly on the footprints. The step length and the angle φ were recorded at the same time. For each step length and angle, 20 repeated experiments were carried out by each subject. Figure 14 gives the measurement error bar graph.
As shown in Figure 14, the mean measurement error of the one-step length is smaller than 5 cm, and the maximum measurement error is smaller than 6 cm. The mean measurement error of the one-step angle is smaller than 4°, and the maximum measurement error is smaller than 6°. These errors are acceptable considering the interference of the ambient magnetic field and the measurement error of the ruler. The experimental results confirm the feasibility of the proposed indoor localization algorithm based on the one-step vector.

4.3. Indoor Localization Experiments

4.3.1. Description of Experiments

The same four subjects wearing the sensor system took part in the experiments. The appearance of a subject wearing the sensor system is shown in Figure 15, and the ichnography of the experimental environment is shown in Figure 16. The subjects were asked to walk along the red dashed line marked in the ichnography. During walking, the posture data were measured online and the data processing unit calculated the coordinates of every step at the same time, so that the indoor localization was realized simultaneously based on these data.
Considering the characteristics of the planned trajectory, we placed four RFID tags at the four corners and one RFID tag near the elevator (see Figure 16). The coordinates of each RFID tag were saved in the program running on the data processing unit. When the user walked into the read range of an RFID tag, its coordinates were used to correct the localization error. It is worth noting that the read range of the RFID tags affects the accuracy of localization: if the range is too large, the correction error will be relatively large, while the tag may not be detected if the read range is set too small. Thus, the read range should be set to a moderate value; in our experiments, it was empirically set to 1 m.

4.3.2. Experiments on Different Subjects

In order to evaluate the applicability of our method to different people, repeated experiments were carried out by the four subjects. Each subject was asked to walk along the planned trajectory 10 times. In order to compare the performance of the sole relative localization algorithm with that of the localization algorithm with the set-membership filter, we plotted the trajectories of both algorithms in one figure. Figure 17 shows the average trajectory of subject A obtained using our localization approach over 10 repetitions of the experiment, and Figure 18 gives the mean error curves and the standard deviation. An error bar graph is also presented to compare the indoor localization results of the four subjects (see Figure 19).
From Figure 19, we can conclude that the performance of the localization algorithm with the set-membership filter is excellent and stable. The mean error of the sole relative localization is relatively large and varies between subjects. Compared with the sole relative localization algorithm, the mean error of the localization with the set-membership filter is much smaller. Note that the mean error is less than 50 cm, which is smaller than the results (approximately 1.5 m) reported in [12,19], and much better than the result (about 2–3 m) in [31].

4.3.3. Experiments Regarding Different Ways of Walking

In order to evaluate the applicability of our method to different ways of walking, subject A was asked to walk in four different styles: brisk walking with small steps, brisk walking with big steps, backward walking and quick walking. Subject A walked along the planned trajectory 10 times for each style, and the localization results were recorded. Figure 20 shows the average localization trajectory of subject A walking with small steps over 10 repetitions of the experiment, and Figure 21 gives the mean error curves and the standard deviation. An error bar graph comparing the indoor localization results for the four walking styles is shown in Figure 22.
As shown in Figure 22, different ways of walking affect the experimental results differently. The localization error is largest when the subject walks with big steps: this type of walking makes the human body shake, which increases the measurement error of the wearable sensors. Due to the limited wireless communication rate, the localization result is also not satisfactory when the user moves very quickly. In contrast, when the steps are small, the human body is stable, and the smallest localization error of the four walking styles is obtained. In general, our method can be applied to different types of walking, and the performance of the proposed algorithm is satisfactory compared with the existing methods proposed in [12,19].

5. Conclusions

This paper proposed an integrated wireless wearable sensor system that combines the functions of posture recognition and indoor localization. The developed low-cost sensor system has many advantages, such as a simple structure, light weight, small size, convenient maintenance and ease of use. The pitch angles of the left thigh and waist are used to recognize five common human postures. By calculating the coordinates of the two hip joints, two knee joints and two ankle joints, the one-step vector can be obtained. Based on the one-step vector and the human body attitude information, relative indoor localization is realized. The localization accuracy is further improved by fusing the relative localization result with the preset positions of the RFID tags using the set-membership filter with incomplete observation. Experiments were conducted to verify the effectiveness of the proposed sensor system and the corresponding algorithms.
It has to be pointed out that there are also some limitations in our sensor system. We can achieve very high positioning accuracy, but many sensors are needed, which may bring some inconvenience to users' daily life. It should also be noted that the coordinates of the six lower limb joints (two hip joints, two knee joints and two ankle joints) can be calculated by our system. These data are very useful for gait recognition and analysis in the field of rehabilitation, and we would like to apply the proposed system to lower limb rehabilitation for the elderly in the future.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61473130, the Science Fund for Distinguished Young Scholars of Hubei Province (2015CFA047), the Fundamental Research Funds for the Central Universities (HUST: 2015TS028) and the Program for New Century Excellent Talents in University (NCET-12-0214).

Author Contributions

Jian Huang initiated the research and wrote the paper. Xiaoqiang Yu designed the sensor system and performed the localization experiments. Yuan Wang designed the filter algorithm. Xiling Xiao designed the posture recognition algorithm.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shi, C.; Luu, D.K.; Yang, Q.; Liu, J.; Sun, Y. Recent advances in nanorobotic manipulation inside scanning electron microscopes. Microsyst. Nanoeng. 2016, 2, 16024.
2. Yang, Z.; Wang, Y.; Yang, B.; Li, G.; Chen, T.; Nakajima, M.; Sun, L.; Fukuda, T. Mechatronic development and vision feedback control of a nanorobotics manipulation system inside SEM for nanodevice assembly. Sensors 2016, 16, 1479.
3. Heidari, M.; Alsindi, N.A.; Pahlavan, K. UDP identification and error mitigation in ToA-based indoor localization systems using neural network architecture. IEEE Trans. Wirel. Commun. 2009, 8, 3597–3607.
4. Xu, Y.; Zhou, M.; Ma, L. WiFi indoor location determination via ANFIS with PCA methods. In Proceedings of the 2009 IEEE International Conference on Network Infrastructure and Digital Content, Beijing, China, 6–8 November 2009; pp. 647–651.
5. Figuera, C.; Rojo-Álvarez, J.L.; Mora-Jiménez, I.; Guerrero-Curieses, A.; Wilby, M.; Ramos-López, J. Time-space sampling and mobile device calibration for WiFi indoor location systems. IEEE Trans. Mob. Comput. 2011, 10, 913–926.
6. Aparicio, S.; Perez, J.; Tarrío, P.; Bernardos, A.M.; Casar, J.R. An indoor location method based on a fusion map using Bluetooth and WLAN technologies. In International Symposium on Distributed Computing and Artificial Intelligence 2008; Corchado, J.M., Rodríguez, S., Llinas, J., Molina, J.M., Eds.; Springer: Berlin, Germany, 2009; pp. 702–710.
7. Zhuang, Y.; Yang, J.; Li, Y.; Qi, L.; El-Sheimy, N. Smartphone-based indoor localization with Bluetooth low energy beacons. Sensors 2016, 16, 596.
8. Cheng, Y.M. Using ZigBee and room-based location technology to constructing an indoor location-based service platform. In Proceedings of the IEEE 5th International Conference on Intelligent Information Hiding & Multimedia Signal Processing, Kyoto, Japan, 12–14 September 2009; pp. 803–806.
9. Huang, C.N.; Chan, C.T. ZigBee-based indoor location system by k-nearest neighbor algorithm with weighted RSSI. Procedia Comput. Sci. 2011, 5, 58–65.
10. Bao, X.; Wang, G. Random sampling algorithm in RFID indoor location system. In Proceedings of the 3rd IEEE International Workshop on Electronic Design, Test and Applications, Kuala Lumpur, Malaysia, 17–19 January 2006; pp. 168–176.
11. Zou, T.; Lin, S.; Li, S. Blind RSSD-based indoor localization with confidence calibration and energy control. Sensors 2016, 16, 788.
12. Li, F.; Zhao, C.; Ding, G.; Gong, J.; Liu, C.; Zhao, F. A reliable and accurate indoor localization method using phone inertial sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 421–430.
13. Gusenbauer, D.; Isert, C.; Krösche, J. Self-contained indoor positioning on off-the-shelf mobile devices. In Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation, Zürich, Switzerland, 15–17 September 2010; pp. 1–9.
14. Jimenez, A.R.; Seco, F.; Prieto, J.C.; Guevara, J. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Proceedings of the 2010 7th Workshop on Positioning, Navigation and Communication (WPNC), Dresden, Germany, 11–12 March 2010; pp. 135–143.
15. Höflinger, F.; Zhang, R.; Reindl, L.M. Indoor-localization system using a micro-inertial measurement unit (IMU). In Proceedings of the European Frequency and Time Forum (EFTF), Gothenburg, Sweden, 23–27 April 2012; pp. 443–447.
16. Zhang, R.; Höflinger, F.; Reindl, L. Inertial sensor based indoor localization and monitoring system for emergency responders. IEEE Sens. J. 2013, 13, 838–848.
17. Zhang, R.; Höflinger, F.; Gorgis, O.; Reindl, L.M. Indoor localization using inertial sensors and ultrasonic rangefinder. In Proceedings of the 2011 IEEE International Conference on Wireless Communications and Signal Processing (WCSP 2011), Nanjing, China, 9–11 November 2011; pp. 1–5.
18. Yuan, X.; Yu, S.; Zhang, S.; Wang, S.; Liu, S. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system. Sensors 2015, 15, 10872–10890.
19. Ruiz, A.R.J.; Granja, F.S.; Honorato, J.C.P.; Rosas, J.I.G. Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements. IEEE Trans. Instrum. Meas. 2012, 61, 178–189.
20. Woodman, O.J. An Introduction to Inertial Navigation; Technical Report UCAM-CL-TR-696; University of Cambridge, Computer Laboratory: Cambridge, UK, 2007.
21. Boulay, B.; Brémond, F.; Thonnat, M. Applying 3D human model in a posture recognition system. Pattern Recognit. Lett. 2006, 27, 1788–1796.
22. Le, T.L.; Nguyen, M.Q.; Nguyen, T.T.M. Human posture recognition using human skeleton provided by Kinect. In Proceedings of the 2013 IEEE International Conference on Computing, Management and Telecommunications, Ho Chi Minh City, Vietnam, 21–24 January 2013; pp. 340–345.
23. Yang, C.; Ugbolue, U.C.; Kerr, A.; Stankovic, V.; Stankovic, L.; Carse, B.; Kaliarntas, K.T.; Rowe, P.J. Autonomous gait event detection with portable single-camera gait kinematics analysis system. J. Sens. 2016, 2016, 5036857.
24. Diraco, G.; Leone, A.; Siciliano, P. An active vision system for fall detection and posture recognition in elderly healthcare. In Proceedings of the Design, Automation & Test in Europe Conference (DATE 2010), Dresden, Germany, 8–12 March 2010; pp. 1536–1541.
25. Gallagher, A.; Matsuoka, Y.; Ang, W.T. An efficient real-time human posture tracking algorithm using low-cost inertial and magnetic sensors. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Sendai, Japan, 28 September–2 October 2004; pp. 2967–2972.
26. Jung, P.G.; Lim, G.; Kong, K. Human posture measurement in a three-dimensional space based on inertial sensors. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, 17–21 October 2012; pp. 1013–1016.
27. Harms, H.; Amft, O.; Tröster, G. Influence of a loose-fitting sensing garment on posture recognition in rehabilitation. In Proceedings of the 2008 IEEE Biomedical Circuits and Systems Conference, Baltimore, MD, USA, 20–22 November 2008; pp. 353–356.
28. Zhang, S.; McCullagh, P.; Nugent, C.; Zheng, H.; Baumgarten, M. Optimal model selection for posture recognition in home-based healthcare. Int. J. Mach. Learn. Cybern. 2011, 2, 1–14.
29. Gjoreski, H.; Luštrek, M.; Gams, M. Accelerometer placement for posture recognition and fall detection. In Proceedings of the 2011 Seventh International Conference on Intelligent Environments, Nottingham, UK, 25–28 July 2011; pp. 47–54.
30. Chen, C.; Jafari, R.; Kehtarnavaz, N. A survey of depth and inertial sensor fusion for human action recognition. Multimed. Tools Appl. 2016, 74.
31. Redondi, A.; Chirico, M.; Borsani, L.; Cesana, M.; Tagliasacchi, M. An integrated system based on wireless sensor networks for patient monitoring, localization and tracking. Ad Hoc Netw. 2013, 11, 39–53.
32. Lee, S.W.; Mase, K. Activity and location recognition using wearable sensors. IEEE Pervasive Comput. 2002, 1, 24–32.
33. Sabatini, A.M. Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing. Sensors 2011, 11, 1489–1525.
34. Tayebi, A.; McGilvray, S.; Roberts, A.; Moallem, M. Attitude estimation and stabilization of a rigid body using low-cost sensors. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 6424–6429.
35. Lee, J.K.; Park, E.J. Minimum-order Kalman filter with vector selector for accurate estimation of human body orientation. IEEE Trans. Robot. 2009, 25, 1196–1201.
36. Huang, J.; Huo, W.; Xu, W.; Mohammed, S.; Amirat, Y. Control of upper-limb power-assist exoskeleton using a human-robot interface based on motion intention recognition. IEEE Trans. Autom. Sci. Eng. 2015, 12, 1257–1270.
37. Zhu, R.; Zhou, Z. A real-time articulated human motion tracking using tri-axis inertial/magnetic sensors package. IEEE Trans. Neural Syst. Rehabil. Eng. 2004, 12, 295–302.
38. Liu, T.; Inoue, Y.; Shibata, K. Simplified Kalman filter for a wireless inertial-magnetic motion sensor. In Proceedings of the 2011 IEEE Sensors, Limerick, Ireland, 28–31 October 2011; pp. 569–572.
39. Mathie, M. Monitoring and Interpreting Human Movement Patterns Using a Triaxial Accelerometer. Ph.D. Thesis, The University of New South Wales, Sydney, Australia, 2003; pp. 56–57.
40. Wang, Y.; Huang, J.; Wang, Y. Wearable sensor-based indoor localisation system considering incomplete observations. Int. J. Model. Identif. Control 2015, 24.
41. Yang, F.; Wang, Z.; Hung, Y.S. Robust Kalman filtering for discrete time-varying uncertain systems with multiplicative noises. IEEE Trans. Autom. Control 2002, 47, 1179–1183.
42. Deller, J.R.; Nayeri, M.; Liu, M.S. Unifying the landmark developments in optimal bounding ellipsoid identification. Int. J. Adapt. Control Signal Process. 1994, 8, 43–60.
43. Schweppe, F.C. Recursive state estimation: Unknown but bounded errors and system inputs. IEEE Trans. Autom. Control 1968, 13, 22–28.
44. Chernousko, F.L. Optimal guaranteed estimates of indeterminacies with the aid of ellipsoids. I. Eng. Cybern. 1980, 18, 729–796.
45. Nagaraj, S.; Gollamudi, S.; Kapoor, S.; Huang, Y.F. BEACON: An adaptive set-membership filtering technique with sparse updates. IEEE Trans. Signal Process. 1999, 47, 2928–2941.
Figure 1. The structure of the integrated wireless wearable sensor system.
Figure 2. The structure of the whole system. (a) The picture of the proposed wearable sensor system; (b) Working principle of indoor localization corrected by Radio Frequency Identification (RFID) tags.
Figure 3. The flow chart of the whole system.
Figure 4. The structure of the sensor node.
Figure 5. The designed sensor node.
Figure 6. Rotation transformation.
Figure 7. The feature selection of posture recognition. (a) The illustration of the pitch angle; (b) The pitch angles of the left thigh and waist in each posture.
Figure 8. Rotation transformation.
Figure 9. Typical normal gait cycle.
Figure 10. The coordinate definition of the indoor localization subsystem. (a) Indoor coordinate system and base coordinate system; (b) Updating of the localization algorithm.
Figure 11. The process of time updating and observation updating without observation missing. (a) The process of time updating; (b) Observation updating without observation missing.
Figure 12. The five postures of the proposed posture recognition algorithm. (a) Standing posture; (b) Sitting posture; (c) Squatting posture; (d) Supine posture; (e) Prone posture.
Figure 13. The setup of the one-step experiments.
Figure 14. The mean (error bar) and standard deviation (black lines on the error bar) of the measurement error per subject according to the step length and step angle. (a) The measurement error per subject according to the step length; (b) The measurement error per subject according to the step angle.
Figure 15. Wearable sensor system for posture recognition and indoor localization.
Figure 16. The ichnography of the indoor localization environment.
Figure 17. The average trajectory curves of subject A walking with normal steps.
Figure 18. The mean error curves of subject A walking with normal steps.
Figure 19. The mean (error bar) and standard deviation (black lines on the error bar) of the localization error per subject using the relative localization algorithm and the proposed algorithm.
Figure 20. The average trajectory curves of subject A walking with small steps.
Figure 21. The mean error curves of subject A walking with small steps.
Figure 22. The mean (error bar) and standard deviation (black lines on the error bar) of the localization error of subject A in different walking styles.
Table 1. The parameters of subjects.

Parameter     Subject A   Subject B   Subject C   Subject D   Description
$l_h$         73 cm       76 cm       75 cm       70 cm       Length of the HAT (Head-Arm-Trunk)
$l_{waist}$   28 cm       30 cm       30 cm       32 cm       Distance between the two hip joints
$l_{thigh}$   48 cm       52 cm       45 cm       50 cm       Length of the thigh
$l_{shank}$   46 cm       50 cm       48 cm       48 cm       Length of the shank
Table 2. The results of posture recognition.

Posture     Standing   Sitting   Squatting   Supine   Prone
Subject A   100%       100%      100%        100%     100%
Subject B   100%       100%      100%        100%     100%
Subject C   100%       100%      100%        100%     100%
Subject D   100%       100%      100%        100%     100%
