Article

An Extensible Positioning System for Locating Mobile Robots in Unfamiliar Environments

1 Key Laboratory of Micro-Inertial Instrument and Advanced Navigation Technology, Ministry of Education, Southeast University, Nanjing 210096, China
2 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(18), 4025; https://doi.org/10.3390/s19184025
Submission received: 19 August 2019 / Revised: 13 September 2019 / Accepted: 16 September 2019 / Published: 18 September 2019
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, an extensible positioning system for mobile robots is proposed. The system includes a stereo camera module, an inertial measurement unit (IMU) and an ultra-wideband (UWB) network of five anchors, one of which has an unknown position. The anchors require neither communication among themselves nor clock synchronization. By first locating the mobile robot with the original system and then estimating the position of a new anchor from the ranging between the mobile robot and that anchor, the system can be extended to include the new anchor. In an unfamiliar environment (such as a fire or other rescue site), the system can locate the mobile robot after extending itself. To add the new anchor to the positioning system, a recursive least squares (RLS) approach is used to estimate its position. A maximum correntropy Kalman filter (MCKF), based on the maximum correntropy criterion (MCC), is used to fuse data from the UWB network and the IMU. The initial attitude of the mobile robot relative to the navigation frame is calculated by comparing position vectors given by a visual simultaneous localization and mapping (SLAM) system and by the UWB system respectively. As shown in the experiment section, the root mean square error (RMSE) of the positioning result given by the proposed system with all anchors is 0.130 m. In the unfamiliar environment, the RMSE is 0.131 m, which is close to the RMSE (0.137 m) given by the original system, a difference of 0.006 m. Besides, the RMSE based on the Euclidean distance of the new anchor is 0.061 m.

1. Introduction

The indoor positioning system (IPS) has been widely applied in many fields. Mobile robots, mine workers, indoor parking lots, factory cargo automation management, fire rescue, location detection of soldiers and so on rely on the support of indoor positioning technology. Existing indoor positioning technologies include Wi-Fi, Bluetooth, ZigBee, RFID, ultra-wideband (UWB) and so on [1]. UWB has become a good solution for high-precision indoor positioning due to its higher positioning accuracy, better penetration, and better confidentiality than other methods [2]. UWB is a radio-based positioning technology whose positioning principles can be divided into time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and received signal strength indication (RSSI) [3,4,5,6]. Because a single positioning scheme cannot solve all the problems of indoor positioning, indoor integrated navigation technologies have become a key research area. Typical indoor positioning schemes include UWB/SINS (strap-down inertial navigation system) integrated navigation [7,8,9], UWB and visual simultaneous localization and mapping (SLAM) integrated navigation [10], and so on. In order to improve the robustness of the positioning system, some scholars have proposed UWB, INS (inertial navigation system) and visual SLAM integrated navigation [11].
In the age of artificial intelligence, mobile robots are ubiquitous. Various kinds of mobile robots have been invented [12], and even quantum drones have been built in an attempt to construct quantum communication networks [13]. An important research topic for mobile robots is localization, especially indoors. Outdoors, a global navigation satellite system (GNSS) can be used to locate mobile robots [14,15]; indoors, however, it is hard to receive GNSS signals, so GNSS cannot be used to locate the mobile robot. To locate indoor mobile robots, scholars have proposed many methods, including UWB positioning, inertial measurement unit (IMU) based inertial navigation, vision-based SLAM positioning and multi-sensor fusion based combined positioning [16,17,18,19,20,21]. In [19], a Sage-Husa fuzzy adaptive filter is used to solve the time-varying noise problem. For mobile robots, accurate positioning is one of the most basic and important requirements, and how to make mobile robots know their position accurately in an unfamiliar environment is an important topic for researchers. To achieve this goal, a robust positioning system needs to be studied.
In this paper, we present an extensible positioning system to locate mobile robots in unfamiliar environments. The positioning system fuses multiple sensors, including UWB, an IMU and a camera. The system can be expanded by adding new UWB anchors whose positions are unknown, so that mobile robots can be located in an unfamiliar environment. With this positioning system, when mobile robots move into an unfamiliar environment, they can be located through the placement of new anchors.
Compared with the existing literature, the main contributions of this paper are summarized as follows:
(1)
An extensible positioning system is proposed, which adds UWB anchors with unknown positions to the existing positioning system. The UWB anchors do not need clock synchronization, and there is no need for communication between anchors.
(2)
A positioning method for mobile robots is proposed. The proposed method, based on multi-sensor fusion, works in complex environments in which the information available to the positioning system is incomplete.
(3)
A real-time positioning system for collecting camera data, IMU data and UWB ranging data is built, and the maximum correntropy Kalman filter (MCKF) is used to fuse the sensors' data to locate mobile robots.
(4)
Using the extended positioning system, the localization of the mobile robot in an unfamiliar environment is realized, and the feasibility and extensibility of the positioning system are verified.
After reviewing related work in Section 2, this article is structured as follows. In Section 3, we present the extensible positioning system: Section 3.1 introduces the platform; Section 3.2 introduces the positioning process and method of the whole system; Section 3.3 introduces the positioning methods of the INS and UWB subsystems; Section 3.4 establishes the state-space model of the whole positioning system, uses the MCKF method for data fusion, and presents the positioning method of the mobile robot; Section 3.5 adds unknown anchors to the existing system by the recursive least squares (RLS) method to expand the positioning system; and Section 3.6 uses the extended positioning system to locate the mobile robot according to the method introduced in Section 3.4. In Section 4, we give the experimental verification of the method proposed in this paper. To provide a reference path while positioning the mobile robot with the extensible positioning system, we simultaneously locate it with a SLAM system based on key-frames, introduced in [21]. Instead of a monocular camera, a stereo camera and an IMU integrated into one module are used to realize the SLAM system. In Section 5, we summarize the work of this paper.

2. Related Work

The indoor positioning technologies of mobile robots include UWB technology, UWB/SINS integration technology, visual SLAM technology and so on. At present, indoor positioning technologies based on multi-sensor fusion are the trend of positioning technology. A simplified positioning model is shown in [22], in which Dobrev realizes two-dimensional (2-D) position estimation and 1-D attitude estimation of the robot with UWB technology. Strohmeier estimates the attitude of the mobile robot based on UWB technology [16]; moreover, his method fuses data from a variety of sensors. Fan realizes two-dimensional indoor positioning of mobile robots by tightly coupling UWB and IMU measurements [23]. Guo combines UWB and IMU technologies and implements cooperative relative localization (RL) of UAVs using a 2-D strategy [24]. However, differing from Fan, Guo presents a method to locate multiple mobile robots at the same time; when the robots move in 3-D space, their relative positions can be estimated in real-time. A 3-D positioning model is used to estimate the positions of mobile robots by Wang [25], who uses a self-made structured-light scanner to improve the positioning accuracy of the mobile robot. Mur-Artal and Tardos study simultaneous localization and mapping (SLAM) and share their code online [20]. Building on their research, scholars have successfully applied ORB-SLAM2 to real-time positioning systems of mobile robots [26,27,28]. Since then, SLAM technologies have become a hot topic in the positioning of mobile robots; they rely deeply on the computing power available. In recent years, the advantages of visual-inertial systems (VINS), a class of multi-sensor systems, have been recognized by Qin and other scholars. Qin et al. applied VINS to the localization of mobile robots and realized precise localization through visual relocalization and loop detection [21,29,30].
Multi-sensor data fusion is an important part of multi-sensor integrated positioning technology. An effective data-fusion technique is Kalman filtering. Since the Kalman filter algorithm was proposed in 1960 [31], various improved algorithms based on it have been proposed. Building on the Kalman filter, some scholars proposed the extended Kalman filter (EKF); a typical description is given in [32]. The EKF is a nonlinear filter widely used in many fields, and in recent years nonlinear filters have been widely studied. Julier and Uhlmann proposed the unscented Kalman filter (UKF) in [33], and Arulampalam proposed the particle filter (PF) in [34]. After that, the cubature Kalman filter (CKF) was proposed by Arasaratnam and Haykin in 2009 [35,36]. The CKF calculates multi-dimensional weighted integrals with a spherical-radial rule and works well for high-order nonlinear systems. However, if the system model is inaccurate or the state of the system changes abruptly, the CKF does not perform as well as expected. The KF, UKF, CKF, etc. are all algorithms based on the minimum mean square error (MMSE) criterion; if the signal model is non-Gaussian, they are no longer applicable or become ineffective. To solve this problem, Chen and Liu proposed a filtering algorithm based on the maximum correntropy criterion (MCC), called the maximum correntropy Kalman filter (MCKF) [37,38,39]. When the signal model is non-Gaussian, the MCKF still works well, and it can also effectively filter coarse errors in the signal.
Absolute positioning technologies using UWB measurements need to know the locations of the anchors before positioning. However, measuring a large number of anchors is resource-intensive. Sometimes the positioning environment is very specialized: for example, situations where UWB ranging information is needed only temporarily (such as unloading vehicles that are temporarily docked), situations in which there is not enough time to measure the positions of all anchors in the UWB positioning system (such as fire and other rescue sites), and environments in which it is simply not possible to measure the locations of all anchors (such as a cavern). So there is a need for a technique to locate the anchors of the UWB positioning system themselves. In [40], Mekonnen shows that estimating the positions of anchors is a non-convex nonlinear estimation problem. A few scholars have studied the self-localization problem of anchors [41,42]. They used the multidimensional scaling (MDS) approach and convex optimization methods to build an initial positioning system: the maximum likelihood (ML) estimator of the anchor positions is relaxed to a semi-definite programming problem, which makes it possible to calculate the positions of the anchors. However, the existing self-localization methods for UWB anchors require communication between anchors and synchronized anchor clocks. When the anchors cannot communicate with each other, the positioning system cannot be established, and additional UWB anchors cannot be added to it. Therefore, expanding the positioning system when the clocks of independent anchors are not synchronized is still an unresolved and challenging problem, and the problem of locating mobile robots in an unfamiliar environment remains unsolved. In this paper, we propose an extensible positioning system aimed at the absolute positioning of mobile robots in unfamiliar environments.

3. Extensible Positioning System

In this section, an extensible positioning system is proposed. Using an MCKF, the system fuses data from sensors such as UWB, IMU and camera. In environments unfamiliar to the mobile robot, the positioning system is expanded by adding new anchors through an RLS technique.

3.1. System Platform

As shown in Figure 1, the platform of the proposed positioning system includes a mobile robot and a location network which includes UWB, IMU, camera and so on. The mobile robot is an unmanned vehicle with a radius of 25 cm and a height of 25 cm. It is equipped with an IMU, a UWB tag, a stereo camera and a WiFi router. The onboard IMU includes a three-axis gyroscope and a three-axis accelerometer. The UWB tag is part of the UWB network and receives the ranging messages of the UWB system. The other important part of the UWB network is the anchors; the anchors and the tag together make up the entire UWB system, which is part of the location network. A microcomputer with a six-core CPU is used to control the mobile robot. In addition, the WiFi router on the mobile robot transmits data between the robot and the server in real-time. The details of the platform are as follows:
(1)
IMU with a three-axis gyroscope and a three-axis accelerometer (BDStar Navigation: KY-INS110, Beijing, China)
(2)
UWB transceiver with five anchors and a tag (DecaWave: DWM1000, Dublin, Ireland)
(3)
Stereo camera (MYNTAI: MYNT EYE D1000-IR-120/Color, Jiangsu, China)
(4)
WiFi router (gee routing: HC5661A)
(5)
Onboard microcomputer (Rockchip: Firefly-RK3399, Fuzhou, China)

3.2. Positioning Process and Architecture of the System

The positioning process of the system is shown in Figure 2 and includes four steps. In the first step, we use the existing positioning system, which includes the anchors $Anchor_j$ and the tag, to locate the mobile robot. In the second step, to extend the positioning system in unfamiliar environments, we calculate the positions of the unknown anchors ($Anchor_i$) and add those new anchors to the positioning system. In the third step, we use the extended positioning system to locate the mobile robot carrying the tag. In the last step, we describe how to use the positioning system to locate the mobile robot when it moves into an unfamiliar environment. The second step is the most important of the four. In it, the first thing to do is to put the tag on the mobile robot and then move the robot along a certain trajectory. The location $(x_{t,k}, y_{t,k}, z_{t,k})$ of the tag is determined from the ranging measurements $d_{mj,k}$ between the tag and the anchors with known locations. During the movement of the mobile robot, the tag continuously communicates with the anchors $Anchor_i$ with unknown locations to obtain the ranging measurements $d_{mi,k}$. Then, based on $d_{mi,k}$ and the estimated tag position $(x_{t,k}, y_{t,k}, z_{t,k})$, the position $(x_{a,i}, y_{a,i}, z_{a,i})$ of each unknown anchor can be estimated.
In this positioning system, we propose an approach to estimate the position of the mobile robot using anchors with known positions. We also describe how new anchors with unknown positions are added to the existing positioning system to form an extended system, and how a mobile robot is located using the extended positioning system.
The method of the positioning system is summarized in Figure 3. Based on the UWB measurements $d_{mj,k}$, the IMU measurements $(\omega^b, f^b)$ and the camera measurements $(R, t)$, the attitude matrix $C_b^u$ between the mobile robot frame (also called the body frame) and the UWB system frame can be estimated using an attitude heading reference system (AHRS). Then, based on the IMU measurements, the velocity $v_{imu}$ and position $P_{imu}$ of the mobile robot relative to the body frame (b-frame) at the initial position can be estimated. At the same time, the velocity $v_{uwb}$ and position $P_{uwb}$ of the mobile robot in the UWB frame (u-frame, also the navigation frame in this paper) are calculated based on the measurements of the UWB system. Finally, an MCKF is used to fuse the two position and velocity estimates to obtain the final position $P_u$ and velocity $v_u$ of the mobile robot.

3.3. SINS and UWB System

3.3.1. Strapdown Inertial Navigation System

In the model of SINS, the acceleration and the angular velocity of the mobile robot in 3-D space are measured by the IMU. The accelerometer bias and gyroscope bias are modeled as first-order Gauss–Markov processes:

$$\dot{\nabla}_r^b = -\frac{1}{\tau_A}\nabla_r^b + w_{rA}$$

$$\dot{\varepsilon}_r^b = -\frac{1}{\tau_G}\varepsilon_r^b + w_{rG}$$

where $\nabla_r^b$ is the bias error vector of the accelerometers, $\varepsilon_r^b$ is the bias error vector of the gyroscopes, and $w_{rA}$ and $w_{rG}$ are white Gaussian noise (WGN) processes of $\nabla_r^b$ and $\varepsilon_r^b$ respectively. Thus, the IMU measurements can be modeled as

$$\tilde{f}^b = f^b + \nabla_r^b + w_A$$

$$\tilde{\omega}^b = \omega^b + \varepsilon_r^b + w_G$$

where $\tilde{f}^b$ is the measurement vector of the accelerometers, $\tilde{\omega}^b$ is the output vector of the gyroscopes, $f^b$ is the true specific force, $\omega^b$ is the true angular velocity of the mobile robot, and $w_A$ and $w_G$ are WGN processes of $f^b$ and $\omega^b$ respectively.
In a strap-down inertial navigation system, the kinematic equations of the mobile robot can be expressed as

$$\dot{C}_b^u = C_b^u(\omega_{ub}^b \times), \qquad \omega_{ub}^b = \omega_{ib}^b - C_u^b(\omega_{ie}^u + \omega_{eu}^u)$$

$$\dot{v}^u = C_b^u f^b - (2\omega_{ie}^u + \omega_{eu}^u) \times v^u + g^u$$

where $C_b^u$ is the attitude matrix, $(\omega_{ub}^b \times)$ is the skew-symmetric matrix constructed from the elements of the angular velocity vector relative to the UWB frame, $\omega_{ib}^b$ is the angular velocity of the mobile robot relative to the inertial frame, $\omega_{ie}^u$ is the Earth rotation rate expressed in the u-frame, $\omega_{eu}^u$ is the angular velocity of the u-frame with respect to the Earth frame (e-frame) expressed in the u-frame, which is a zero vector in this model, and $g^u$ is the gravity vector. Since $(2\omega_{ie}^u + \omega_{eu}^u) \times v^u$ and $C_u^b(\omega_{ie}^u + \omega_{eu}^u)$ are very small, they are treated as part of the noise in our system. Therefore, Equations (5) and (6) can be simplified to

$$\begin{cases} \dot{C}_b^u = C_b^u(\omega_{ub}^b \times), \qquad \omega_{ub}^b = \omega_{ib}^b = \omega^b \\ \dot{v}^u = C_b^u f^b + g^u \end{cases}$$

Then, using the simplified model, the velocity $\hat{v}_{imu}$ and position $\hat{P}_{imu}$ of the mobile robot can be obtained.
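The simplified strapdown model above can be integrated numerically. A minimal sketch with first-order Euler integration follows; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix (w x) built from a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def sins_step(C_bu, v_u, p_u, omega_b, f_b, g_u, dt):
    """One Euler step of the simplified strapdown model:
    C_dot = C (omega x),  v_dot = C f + g,  p_dot = v."""
    C_new = C_bu @ (np.eye(3) + skew(omega_b) * dt)
    # re-orthonormalize so C stays a rotation matrix
    u, _, vt = np.linalg.svd(C_new)
    C_new = u @ vt
    v_new = v_u + (C_bu @ f_b + g_u) * dt
    p_new = p_u + v_u * dt
    return C_new, v_new, p_new
```

For a stationary robot the measured specific force cancels gravity, so the integrated velocity and position stay at zero, which is a quick sanity check of the sign conventions.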

3.3.2. UWB Positioning System Model

In dynamic positioning, the TOA method is more advantageous than the TDOA method [43], so our system uses TOA to estimate the position of the mobile robot. For simplicity, we use a least-squares (LS) method to estimate the position $\hat{P}_{uwb}$ of the mobile robot and then differentiate it to get the velocity $\hat{v}_{uwb}$. We assume that the UWB positioning system can be modeled as

$$\begin{cases} \hat{P}_{uwb} = P_{uwb} - \delta P_{uwb} \\ \hat{v}_{uwb} = v_{uwb} - \delta v_{uwb} \end{cases}$$

where $\delta v_{uwb}$ and $\delta P_{uwb}$ are the error vectors of the velocity and position respectively. Since the height of the mobile robot hardly changes, we consider it to be constant (the height error is recorded as $\delta z$). The position estimated by the UWB system contains a large error, which would introduce a large error when differentiating to obtain the velocity. We therefore assume that the robot follows a uniformly accelerated motion model and use multiple position estimates to calculate the velocity.
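The LS position fix from the TOA ranges can be sketched by linearizing the range equations against the first anchor, a standard trick; the paper does not specify which LS variant it uses, so this is one plausible reading:

```python
import numpy as np

def toa_least_squares(anchors, ranges):
    """Linearized TOA multilateration: subtracting the first anchor's
    range equation removes the quadratic term, leaving A p = b."""
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With five anchors this gives four linear equations in three unknowns, so the tag position is over-determined and the LS solution averages out some ranging noise.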

3.4. Data Fusion Based on MCKF

Data fusion is one of the most important parts of the positioning system. Measurement errors of the IMU and UWB are always present and cause positioning errors, especially coarse errors. To mitigate their effects on the positioning system, we use an MCKF to fuse data from the SINS and UWB systems. The model of the MCKF is described as follows.

3.4.1. State Space Model

Estimates of the position and velocity of the mobile robot are given by the UWB system and the SINS respectively. According to the models of the UWB positioning system and the SINS, the state-space model of the positioning system can be described as

$$\dot{X} = FX + GW$$

$$Z = \begin{bmatrix} \hat{v}_{imu} - \hat{v}_{uwb} \\ \hat{P}_{imu} - \hat{P}_{uwb} \end{bmatrix} = HX + V$$

where $X = [(\delta\phi)^T\ (\delta v)^T\ (\delta P)^T\ (\varepsilon_r^b)^T\ (\nabla_r^b)^T]^T$ is a 15-dimensional state vector, $\delta\phi$ is the Euler-angle error vector, the velocity error $\delta v$ is the difference between $\hat{v}_{imu}$ and $\hat{v}_{uwb}$, the position error $\delta P$ is the difference between $\hat{P}_{imu}$ and $\hat{P}_{uwb}$, $V$ is the observation noise vector of the system, and $W$ is the system noise, which is a WGN. The state-space model is similar to the model of SINS/GPS integrated navigation [44], but not the same, especially in the meaning of the measurements.

3.4.2. Time Update

Discretizing the state-space model above, the time update equation at instant $k$ is

$$\hat{X}(k|k-1) = F(k-1)\hat{X}(k-1|k-1) + G(k-1)W(k-1)$$

where $F(k-1)$ is the discretized transition matrix of the system, $G(k-1)$ is the corresponding system noise matrix, and $W(k-1)$ is the system noise with covariance matrix $Q(k-1)$. Let $C_A = \mathrm{diag}(-\tau_A^{-1})$, $C_G = \mathrm{diag}(-\tau_G^{-1})$, and let $T_s$ be the sampling time; then

$$F(k-1) \approx I_{15\times15} + \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^u & 0_{3\times3} \\ f^u\times & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_b^u \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_G & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & C_A \end{bmatrix} T_s$$

$$G(k-1) \approx \begin{bmatrix} C_b^u & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & C_b^u & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} \end{bmatrix}$$

$$W(k-1) = \begin{bmatrix} w_G^T & w_A^T & w_{rG}^T & w_{rA}^T \end{bmatrix}^T$$

and the covariance propagation equation is

$$P(k|k-1) = F(k-1)P(k-1|k-1)F(k-1)^T + G(k-1)Q(k-1)G(k-1)^T$$

where $P(k|k-1)$ is the predicted covariance matrix of the prior state estimate and $Q(k-1)$ is the covariance matrix of the system noise $W(k-1)$.
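In code, the time update is a direct transcription of the two equations above; since the driving noise is zero-mean, only its covariance enters the propagation. A minimal sketch with illustrative names:

```python
import numpy as np

def time_update(x, P, F, G, Q):
    """Kalman time update: propagate the state estimate and covariance.
    The system noise W is zero-mean, so only its covariance Q appears."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T
    return x_pred, P_pred
```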

3.4.3. Measurement Update

The discretized measurement equation at instant $k$ is

$$Z(k) = \begin{bmatrix} \hat{v}_{imu,k} - \hat{v}_{uwb,k} \\ \hat{P}_{imu,k} - \hat{P}_{uwb,k} \end{bmatrix} = H(k)X(k|k) + V(k)$$

where $V(k)$ is the measurement error vector of the discretized system with covariance matrix $R(k)$. The matrix $H(k)$ is given by

$$H(k) = \begin{bmatrix} 0_{6\times3} & I_{6\times6} & 0_{6\times6} \end{bmatrix}$$
When UWB and IMU data are received, the measurement update is launched. In the measurement update process, the MCKF computes the posterior estimate by adopting the maximum correntropy (MC) criterion as the optimality criterion. The difference between the MC criterion and the minimum mean square error (MMSE) criterion is that higher-order statistical information of the error vector is considered in the MC criterion. One cost function based on the MC criterion is

$$J_{MC}(X(k|k)) = \frac{1}{L}\sum_{i=1}^{L} G_\sigma(e_i(k)) = \frac{1}{L}\sum_{i=1}^{L} G_\sigma\big(d_i(k) - w_i(k)X(k|k)\big)$$

where $G_\sigma(\cdot)$ is a shift-invariant Mercer kernel, for example a Gaussian kernel. $e_i(k)$ is the difference between $d_i(k)$ and $w_i(k)X(k|k)$, i.e., the residual of the extended state-space model of the original system. $d_i(k)$ and $w_i(k)$ are both variables related to the system state and observation vectors and are defined as in [39]. $L$ is the dimension of the extended state-space model. The optimal estimate of the state variables is obtained by optimizing the cost function, that is, solving

$$\frac{\partial J_{MC}(X(k|k))}{\partial X(k|k)} = \sum_{i=1}^{L}\Big[G_\sigma(e_i(k))\, w_i(k)^T \big(d_i(k) - w_i(k)X(k|k)\big)\Big] = 0$$

which is equivalent to solving the fixed-point equation (in $X(k|k)$)

$$X(k|k) = \Bigg(\sum_{i=1}^{L} G_\sigma(e_i(k))\, w_i(k)^T w_i(k)\Bigg)^{-1}\Bigg(\sum_{i=1}^{L} G_\sigma(e_i(k))\, w_i(k)^T d_i(k)\Bigg)$$
An iterative algorithm for the fixed-point equation is as follows. First, rewrite the equation as

$$\hat{X}(k|k)_{t+1} = f\big(\hat{X}(k|k)_t\big)$$

where

$$\hat{X}(k|k)_{t+1} = \hat{X}(k|k-1) + \tilde{K}(k)\big(Z(k) - H(k)\hat{X}(k|k-1)\big)$$

denotes the estimate of the state vector after $t+1$ iterations, and $\tilde{K}(k)$ is the gain matrix after $t$ iterations, given by

$$\tilde{K}(k) = \tilde{P}(k|k-1)H(k)^T\big(H(k)\tilde{P}(k|k-1)H(k)^T + \tilde{R}(k)\big)^{-1}$$

where $\tilde{P}(k|k-1)$ is the corresponding covariance matrix of the state vector and $\tilde{R}(k)$ is the covariance matrix of the measurement vector obtained based on the MC criterion. The iteration stops as soon as the condition

$$\frac{\big\|\hat{X}(k|k)_{t+1} - \hat{X}(k|k)_t\big\|}{\big\|\hat{X}(k|k)_t\big\|} \le \tau$$

holds, where $\tau$ is a small positive number, and the final posterior estimate of the state vector at instant $k$ is obtained by setting $\hat{X}(k|k) = \hat{X}(k|k)_{t+1}$. Correspondingly, the posterior covariance matrix $P(k|k)$ is given by

$$P(k|k) = \big(I - \tilde{K}(k)H(k)\big)\tilde{P}(k|k-1)\big(I - \tilde{K}(k)H(k)\big)^T + \tilde{K}(k)\tilde{R}(k)\tilde{K}(k)^T$$
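A sketch of this fixed-point measurement update, following the general MCKF construction in [39]; the exact factorization of $d_i$ and $w_i$ is our reading of that reference, and all names are illustrative. As the kernel bandwidth $\sigma \to \infty$ the weights approach one and the update reduces to the standard Kalman update:

```python
import numpy as np

def gauss_kernel(e, sigma):
    """Gaussian kernel weight for each residual component."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def mckf_update(x_pred, P_pred, z, H, R, sigma=2.0, tau=1e-6, max_iter=20):
    """MCKF measurement update by fixed-point iteration. Residuals are
    reweighted by a Gaussian kernel, which suppresses outliers."""
    n = x_pred.size
    Bp = np.linalg.cholesky(P_pred)      # square roots of the covariances
    Br = np.linalg.cholesky(R)
    D = np.concatenate([np.linalg.solve(Bp, x_pred), np.linalg.solve(Br, z)])
    W = np.vstack([np.linalg.solve(Bp, np.eye(n)), np.linalg.solve(Br, H)])
    x = x_pred.copy()
    for _ in range(max_iter):
        e = D - W @ x                    # whitened residuals
        Cx = np.diag(gauss_kernel(e[:n], sigma))
        Cy = np.diag(gauss_kernel(e[n:], sigma))
        P_t = Bp @ np.linalg.inv(Cx) @ Bp.T   # reweighted covariances
        R_t = Br @ np.linalg.inv(Cy) @ Br.T
        K = P_t @ H.T @ np.linalg.inv(H @ P_t @ H.T + R_t)
        x_new = x_pred + K @ (z - H @ x_pred)
        if np.linalg.norm(x_new - x) <= tau * np.linalg.norm(x):
            x = x_new
            break
        x = x_new
    P = (np.eye(n) - K @ H) @ P_t @ (np.eye(n) - K @ H).T + K @ R_t @ K.T
    return x, P
```

Because a small kernel weight inflates the effective covariance of the corresponding residual, a single grossly wrong UWB fix has little influence on the posterior state, which is the coarse-error robustness claimed in Section 2.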

3.4.4. Initialization

In order to fuse data from the SINS and the UWB system, it is necessary to calculate the initial attitude matrix $C_b^u(0)$, or equivalently the initial Euler angles $\phi(0)$, of the SINS relative to the UWB system. We calculate the initial angles using the outputs of the camera, UWB and IMU. First, the IMU data is used to align the z-axis of the body frame with the Earth's gravity vector, which yields the attitude matrix $C_b^h(0)$ between the body frame and a certain horizontal frame (h-frame). Since the UWB frame is also a horizontal frame, only the heading angle among the three Euler angles between the h-frame and the UWB frame may be nonzero. Using the displacement vector $r_{visual}$ in the h-frame, calculated by a visual SLAM system, and $r_{uwb}$ in the UWB frame, calculated by the UWB system, the attitude matrix $C_h^u(0)$ between the h-frame and the UWB frame can be estimated. Finally, the initial attitude matrix $C_b^u(0)$ between the body frame and the UWB frame is calculated by

$$C_b^u(0) = C_h^u(0)\, C_b^h(0)$$

According to (25), the Euler angles between the body frame and the UWB frame can also be calculated.
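Since only the heading can differ between the h-frame and the u-frame, $C_h^u(0)$ reduces to a rotation about the z-axis, and the heading offset can be read off by comparing the horizontal projections of the same displacement. A sketch with illustrative names:

```python
import numpy as np

def initial_heading(r_visual, r_uwb):
    """Heading offset between the h-frame and the u-frame, from the
    horizontal components of the same displacement in both frames."""
    ang_h = np.arctan2(r_visual[1], r_visual[0])
    ang_u = np.arctan2(r_uwb[1], r_uwb[0])
    return ang_u - ang_h

def C_hu(psi):
    """Rotation about z by heading psi (h-frame -> u-frame)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```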

3.5. Extension of the Positioning System

When the mobile robot moves to a new environment, it fails to receive measurements from anchors that are too far away, and may not obtain enough positioning information, which leads to positioning failure. Extending the positioning system means adding new anchors to it; using the ranging information of the new anchors, the position of the mobile robot can be estimated by the extended system. The architecture of the extension is shown in Figure 4. First, move the mobile robot along a planned trajectory and estimate its positions: the position $(x_{t,lk}, y_{t,lk}, z_{t,lk})$ of the mobile robot is estimated by an MCKF using the ranging measurements $d_{mj,lk}$ and the IMU measurements $\omega^b, f^b$. The subscript $lk$ denotes the $lk$-th measurement of the variable, and the subscripts $mj$ and $mi$ denote the $j$-th old anchor and the $i$-th new anchor. Meanwhile, measure the distances $d_{mi,lk}$ from the mobile robot to the anchors that are to be added to the positioning system. Third, estimate the positions of the new anchors using the distances $d_{mi,k}$; the anchor positions $\hat{p}_{ai,k}$ are optimized by an RLS technique. $l$ measurements are needed for one estimate of a new anchor's position, so a total of $lk$ measurements are needed for $k$ estimates. Finally, add the new anchors to the original positioning system to extend it.
To estimate the positions of new anchors, we assume that the position variables of the new anchors obey Gaussian distributions, described as

$$\begin{cases} \hat{x}_{ai} = x_{ai} + \delta x_{ai} \\ \hat{y}_{ai} = y_{ai} + \delta y_{ai} \\ \hat{z}_{ai} = z_{ai} + \delta z_{ai} \end{cases}$$

where $z_{ai}$ is the height of the $i$-th new anchor, which is known but carries an error $\delta z_{ai}$; $x_{ai}$, $y_{ai}$ and $z_{ai}$ are the true coordinates of the new anchor, and $(\hat{x}_{ai}, \hat{y}_{ai}, \hat{z}_{ai})$ is the estimate of its position. As the position of the new anchor is fixed, the estimation errors $\delta x_{ai}$, $\delta y_{ai}$ and $\delta z_{ai}$ are all zero-mean Gaussian white noise sequences. Equation (26) can be rewritten in the compact form

$$\hat{p}_{ai} = H_{ai} p_{ai} + \delta p_{ai}$$

where $H_{ai}$ is a matrix with unit diagonal and $\delta p_{ai}$ is the estimation error with covariance matrix $R_{ai}$. To optimize the estimate, we minimize the cost function below
$$\min \|\delta p_{ai}\| = \min \|\hat{p}_{ai} - p_{ai}\|$$

which can be expanded as

$$J = (\tilde{p}_{ai} - H_{ai}\hat{p}_{ai})^T(\tilde{p}_{ai} - H_{ai}\hat{p}_{ai}) = \tilde{p}_{ai}^T\tilde{p}_{ai} - \hat{p}_{ai}^T H_{ai}^T \tilde{p}_{ai} - \tilde{p}_{ai}^T H_{ai}\hat{p}_{ai} + \hat{p}_{ai}^T H_{ai}^T H_{ai}\hat{p}_{ai}$$

where $\tilde{p}_{ai}$ is an estimate of the position using $l$ measurements of the IMU and UWB respectively. We optimize the estimate through a recursive least squares method, described as follows:

$$K_{ai}(k) = P_{ai}(k-1)H_{ai}^T(k)\big(H_{ai}(k)P_{ai}(k-1)H_{ai}^T(k) + R_{ai}(k)\big)^{-1}$$

$$\hat{p}_{ai}(k) = \hat{p}_{ai}(k-1) + K_{ai}(k)\big(\tilde{p}_{ai}(k) - H_{ai}(k)\hat{p}_{ai}(k-1)\big)$$

$$P_{ai}(k) = \big(I - K_{ai}(k)H_{ai}(k)\big)P_{ai}(k-1)\big(I - K_{ai}(k)H_{ai}(k)\big)^T + K_{ai}(k)R_{ai}(k)K_{ai}^T(k)$$

where $P_{ai}(k)$ is the error covariance matrix of $\hat{p}_{ai}(k)$, $K_{ai}(k)$ is the gain matrix at instant $k$, $R_{ai}(k)$ is the covariance matrix of the measurements as described in (27), and $\hat{p}_{ai}(k)$ is the optimized position estimate.
It should be noted that the trajectory of the mobile robot is designed rather than random. When the route is circular, the positions of the anchors can be estimated more quickly and accurately [45]. Besides, according to [46], the range of movement of the mobile robot needs to be large enough, and the distances between the $l$ robot positions used to estimate the position of a new anchor need to be sufficiently large.
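The RLS recursion above can be sketched as follows; each batch estimate $\tilde{p}_{ai}$ enters as a direct noisy observation of the anchor position. The class and variable names are ours, not from the paper:

```python
import numpy as np

class RLSAnchorEstimator:
    """Recursive least squares refinement of a new anchor's position.
    Each batch estimate p_tilde (from l tag positions and ranges) is
    treated as a noisy direct observation: p_tilde = H p + noise."""
    def __init__(self, p0, P0):
        self.p = np.asarray(p0, dtype=float)   # current position estimate
        self.P = np.asarray(P0, dtype=float)   # its error covariance

    def update(self, p_tilde, R):
        H = np.eye(3)                          # unit-diagonal observation matrix
        K = self.P @ H.T @ np.linalg.inv(H @ self.P @ H.T + R)
        self.p = self.p + K @ (p_tilde - H @ self.p)
        I_KH = np.eye(3) - K @ H
        self.P = I_KH @ self.P @ I_KH.T + K @ R @ K.T  # Joseph form
        return self.p
```

Starting from a large initial covariance, the gain is close to identity on the first batch and then shrinks, so later batches refine rather than overwrite the estimate.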

3.6. Positioning Using the Extended Positioning System

The extended positioning system contains anchors whose positions are not completely determined. Locating mobile robots using anchors with uncertain positions is a complex problem; fortunately, some scholars have studied how to locate tags using anchors with uncertain locations [47]. Following the maximum likelihood estimation method, we formulate a likelihood function that accounts for the uncertainty of the anchors' positions
$$\ln p(d_{m_i}, \hat{p}_{a_i,k}; p_t, p_{a_i}) = -\frac{1}{2} \left( d_{m_i} - f(p_t, p_{a_i}) \right)^T C^{-1} \left( d_{m_i} - f(p_t, p_{a_i}) \right) - \sum_{i=1}^{N} \frac{ \left( \hat{p}_{a_i,k,i} - p_{a_i,i} \right)^2 }{2 \sigma_\alpha^2}$$
where $C$ is the covariance matrix of the vector $[f(p_t, p_{a_0}), f(p_t, p_{a_1}), \ldots, f(p_t, p_{a_i})]$, $f(p_t, p_{a_i})$ is the range, including ranging errors, between the tag's position $p_t$ and the $i$-th anchor's position $p_{a_i}$, $d_{m_i}$ is the vector of UWB range measurements, $p_{a_i,i}$ is the mean of the Gaussian-distributed anchor position estimates $\hat{p}_{a_i,k,i}$, and $\sigma_\alpha^2$ is the corresponding variance.
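Under the Gaussian assumptions above, the negative log-likelihood (dropping constant terms) can be evaluated as follows. This is a sketch under stated assumptions; `neg_log_likelihood` is an illustrative helper, not the paper's code.

```python
import numpy as np

def neg_log_likelihood(d_m, p_t, p_a, p_a_hat, C, sigma_a):
    """Negative log-likelihood of a tag position given UWB ranges,
    with Gaussian uncertainty on the anchor positions.

    d_m     -- measured ranges to the N anchors (N,)
    p_t     -- candidate tag position (3,)
    p_a     -- candidate anchor positions (N, 3)
    p_a_hat -- estimated (mean) anchor positions (N, 3)
    C       -- covariance matrix of the range measurements (N, N)
    sigma_a -- std of the anchor-position uncertainty
    """
    f = np.linalg.norm(p_a - p_t, axis=1)           # predicted ranges f(p_t, p_a)
    r = d_m - f                                     # range residuals
    range_term = 0.5 * r @ np.linalg.solve(C, r)    # Mahalanobis range cost
    anchor_term = np.sum((p_a_hat - p_a) ** 2) / (2.0 * sigma_a ** 2)
    return range_term + anchor_term
```

Minimizing this cost jointly over $p_t$ and the anchor positions yields the maximum likelihood estimate; the cost is zero only when both the ranges and the anchor positions match exactly.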
Since we have enough measurements of the anchors' positions, the final estimate of each anchor's position is sufficiently accurate. Therefore, we treat all anchor locations in the extended positioning system as exactly known, which simplifies the positioning problem. Another reason is that, in our system, the output of the IMU on the mobile robot also assists in estimating the robot's position. The experiment shows that the positioning result obtained with this method is acceptable. Based on these considerations, we use the method presented in Section 3.4 to estimate the position of the mobile robot after extending the positioning system.
To mimic positioning in unfamiliar environments, we assume that some anchors become unavailable when the robot moves into such an environment. In the extended positioning system, the newly added anchors can be detected by the mobile robot, but some older anchors, or anchors farther away from the robot, may not. If some anchors are lost, the system uses the remaining anchors that the tag can still find. If all anchors are lost, the system falls back to SINS (failure mode). However, we assume that at least four anchors can be found by the tag, so the extended positioning system is able to provide adequate ranging information.
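The fallback logic described above can be summarized in a few lines. This is a hypothetical sketch of the mode selection, with the four-anchor threshold taken from the text; the mode labels are illustrative.

```python
def select_mode(detected_anchors, min_anchors=4):
    """Pick the positioning mode from the set of anchors the tag can find.

    detected_anchors -- ids of anchors currently answering range requests
    min_anchors      -- anchors needed for full UWB/IMU positioning
    """
    if len(detected_anchors) >= min_anchors:
        return detected_anchors, "UWB/IMU"            # normal operation
    if detected_anchors:
        return detected_anchors, "UWB/IMU (degraded)" # use the remaining anchors
    return [], "SINS"                                 # all anchors lost: failure mode
```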

4. Experiment and Discussion

The plan of the experimental site and the locations of the anchors are shown in Figure 5. The experimental site is a school laboratory (3.5 m × 9 m) containing various obstacles. In the experiment, the four black anchors in the figure, Anchor j (j = 0, 1, 2, 3), are known and are part of the original positioning system; their positions are accurately calibrated with a laser rangefinder. The red anchor, Anchor i (i = 4), is the new anchor used to extend the positioning system, and its position is unknown. The coordinate frame shown in Figure 5 is the UWB frame introduced in Section 3. The coordinates of the anchors, measured with the laser rangefinder, are listed in Table 1. The measurement parameters of the laser rangefinder and the UWB transceivers are given in Table 2. The UWB ranging error shown in Table 2 is estimated under favorable line-of-sight (LOS) conditions; when there are many obstacles in the environment, as in our experiment, the ranging error changes.
We use the result of a SLAM positioning system as the reference path of the mobile robot. The key-frame-based SLAM system uses the stereo camera introduced earlier to locate the robot; the stereo camera, shown in Figure 1, integrates a low-cost IMU. To evaluate the performance of our approach, we compare the estimated path with the reference path.
The whole experiment includes four parts. In the first part, we use the positioning system to locate the mobile robot with four anchors whose positions are known. In the second part, we estimate the position of a new anchor whose position is considered unknown. In the third part, the extended positioning system, which includes the original system, is used to locate the mobile robot. In the last part, we assume that one anchor of the positioning system is lost, to simulate the situation where the mobile robot moves into an unfamiliar environment.

4.1. Positioning for the Mobile Robot

The trajectory of the mobile robot is shown in Figure 6. The mobile robot moves along an o-shaped trajectory. The red trajectory is the reference path, and the blue trajectory is the estimated path given by the positioning system. In this experiment, only four anchors are used; they are represented by the red circles in Figure 6.
The positioning error of the mobile robot is shown in Figure 7. At first, the positioning error decreases rapidly; then, as time goes on, it converges gradually. The root mean square error (RMSE) of the positioning result is 0.137 m.

4.2. Adding a New Anchor

While positioning the mobile robot, a new anchor is added to extend the system. Using the ranges between the mobile robot and the new anchor, the position of the new anchor is estimated by the RLS approach. The positions of the mobile robot are estimated as described above, and the ranges are measured while the robot moves along the trajectory. The position estimation of the new anchor is shown in Figure 8.
We give the coordinate estimates of the new anchor in Figure 9. As shown in the figure, the coordinates of the new anchor are close to the reference coordinates measured by the laser rangefinder. The reference coordinates (x, y) of the new anchor are given in Table 3; the height (z) of the new anchor is known as a true value. The final position estimate of the new anchor is also shown in Table 3. Note that the errors differ between the two axes, because the trajectory of the mobile robot is not a circle and the spread of the old anchors along the x-axis differs from that along the y-axis. As shown in Table 3, the RMSE of the Euclidean distance error is 0.061 m. In addition, we calculate the coordinates of the new anchor with four methods; the results are also shown in Table 3. The label "MCKF-RLS" means that we estimate the positions of the mobile robot using the MCKF and then estimate the position of the new anchor with the RLS. Table 3 indicates that all of the methods are able to locate the new anchor using the proposed scheme, which supports its soundness. Moreover, the Euclidean-distance RMSE of MCKF-RLS is better than those of the other methods, while the remaining metrics are comparable. This illustrates the effectiveness of our strategy.
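The per-axis RMSE and the Euclidean-distance RMSE reported for the anchor estimate can be computed from an estimate series as sketched below; `anchor_error_stats` is a hypothetical helper, not code from the paper.

```python
import numpy as np

def anchor_error_stats(estimates, reference):
    """Per-axis RMSE and Euclidean-distance RMSE of an anchor estimate series.

    estimates -- (k, 2) series of (x, y) estimates over time
    reference -- (2,) surveyed (x, y) coordinates of the anchor
    """
    err = estimates - reference
    rmse_axis = np.sqrt(np.mean(err ** 2, axis=0))           # RMSE per axis
    rmse_dist = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))   # RMSE of distance
    return rmse_axis, rmse_dist
```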

4.3. Positioning Using the Extended Positioning System

After extending the system by adding the new anchor, we use the extended positioning system to locate the mobile robot as it moves in the school laboratory. The trajectory estimated by the extended positioning system is shown in Figure 10, and the positioning error is given in Table 4 and Figure 11. As a comparison, we use a reference positioning system that includes all anchors, whose positions are measured by the laser rangefinder. The positioning error of the mobile robot and the difference in RMSE between the two systems are also shown in Table 4 and Figure 11.
As shown in Table 4 and Figure 11, the positioning performance of the extended positioning system is close to that of the reference system, which uses the reference positions of all anchors. This means that the estimation of the extended system is credible and that the extended positioning system can reliably replace the reference system.

4.4. Positioning in an Unfamiliar Environment

To simulate an unfamiliar environment in which a UWB anchor is lost, we assume that Anchor0 loses contact with the mobile robot. In this case, the number of available anchors in the extended positioning system is four, and the positions of the remaining anchors are considered known. A trajectory of the mobile robot is shown in Figure 12.
The positioning error of the mobile robot is shown in Figure 13. The RMSE of the position estimation is 0.131 m. The positioning performance of the extended positioning system is close to that in Figure 7, which uses the reference system with four anchors (the difference in RMSE between the two systems is 0.006 m). Although the performance is slightly worse than that of the reference system and of the extended system with all anchors (the difference between the two results is 0.008 m), it is acceptable.
To show the performance of the extended system clearly, we compare the results obtained by three different methods, given in Table 5. The RMSE of the extended system using four anchors (the extended system used in an unfamiliar environment, ES-UE) is close to that of the original system, and the performance of the extended system using all anchors is close to that of the reference system. Thus, the extended system is able to replace the original system or the reference system. In addition, the RMSE of the MCKF fusion is smaller than those of the other methods in most environments, which suggests the effectiveness of the MCKF fusion and of our scheme. The mean and standard deviation (Std), computed from the absolute values of the errors, are also given in Table 5.
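The intuition behind the MCKF's advantage over the plain KF can be illustrated with a heavily simplified, one-step approximation of correntropy weighting: the innovation is passed through a Gaussian kernel, so outlier-corrupted range measurements receive a smaller gain. This is an assumption-laden sketch, not the paper's algorithm; the MCKF of [39] uses a fixed-point iteration, and `mc_kalman_update`, the kernel bandwidth `sigma`, and the clamping constant are illustrative choices.

```python
import numpy as np

def mc_kalman_update(x, P, z, H, R, sigma=2.0):
    """Simplified correntropy-weighted Kalman measurement update.

    A Gaussian kernel of the normalized innovation inflates the effective
    measurement covariance, shrinking the gain for outlier measurements.
    """
    e = z - H @ x                          # innovation
    S0 = H @ P @ H.T + R
    d2 = e @ np.linalg.solve(S0, e)        # squared normalized innovation
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # correntropy kernel weight in (0, 1]
    R_eff = R / max(w, 1e-6)               # outliers -> tiny w -> huge R_eff
    S = H @ P @ H.T + R_eff
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ e
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With a consistent measurement the update behaves almost like a standard Kalman step, while a gross outlier barely moves the state.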

5. Conclusions

This paper presents an extensible positioning system to locate a mobile robot (radius 25 cm, height 25 cm) equipped with an IMU, a UWB tag, a stereo camera module and a WiFi router. The positioning system estimates the positions of the mobile robot using the UWB transceiver ranges and the angular and linear velocities measured by the IMU, fusing the data with an MCKF approach based on the maximum correntropy criterion. By locating the mobile robot with the original system, and then estimating the position of a new anchor from the ranges between the robot and the new anchor, the positioning system is extended after the new anchor is added. In this scheme, the positioning system requires neither communication between anchors nor clock synchronization of the anchors. It estimates the initial attitude of the mobile robot relative to the navigation frame by comparing estimates of the same displacement vectors given by the UWB system and a SLAM system, respectively. In an unfamiliar environment, the extended positioning system is able to estimate the position of the mobile robot after adding new anchors. The RMSE of the positioning result is 0.131 m (the experimental site is a 3.5 m × 9 m laboratory) when using the extended positioning system in the unfamiliar environment, which is close to the result given by the original system; the results of the extended system are likewise close to those of the reference system (all anchors). Besides, the RMSE of the new anchor's position, in Euclidean distance, is 0.061 m. However, further experiments may be needed to confirm that the system is suitable for larger environments.
The system may be further improved by using an MDS approach to reduce the position uncertainty of the anchors and by accounting for the transmission-error characteristics of the UWB signal using Equation (34). In future work, we plan to integrate the visual SLAM system into the whole system to improve its robustness and performance.

Author Contributions

Conceptualization, X.L. and B.Y.; Data curation, X.L.; Investigation, X.L.; Methodology, X.L.; Software, X.L. and B.Z.; Writing—original draft, X.L.; Writing—review & editing, X.X.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 51775110).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dardari, D.; Closas, P.; Djuric, P.M. Indoor Tracking: Theory, Methods, and Technologies. IEEE Trans. Veh. Technol. 2015, 64, 1263–1278. [Google Scholar] [CrossRef]
  2. Jimenez Ruiz, A.R.; Seco Granja, F. Comparing Ubisense, BeSpoon, and DecaWave UWB Location Systems: Indoor Performance Analysis. IEEE Trans. Instrum. Meas. 2017, 66, 2106–2117. [Google Scholar] [CrossRef]
  3. Lymberopoulos, D.; Liu, J. The Microsoft Indoor Localization Competition: Experiences and Lessons Learned. IEEE Signal Process. Mag. 2017, 34, 125–140. [Google Scholar] [CrossRef]
  4. Yassin, A.; Nasser, Y.; Awad, M.; Al-Dubai, A.; Liu, R.; Yuen, C.; Raulefs, R.; Aboutanios, E. Recent Advances in Indoor Localization: A Survey on Theoretical Approaches and Applications. IEEE Commun. Surv. Tutor. 2017, 19, 1327–1346. [Google Scholar] [CrossRef]
  5. Yu, H.; Huang, G.; Gao, J.; Liu, B. An Efficient Constrained Weighted Least Squares Algorithm for Moving Source Location Using TDOA and FDOA Measurements. IEEE Trans. Wirel. Commun. 2012, 11, 44–47. [Google Scholar] [CrossRef]
  6. De Angelis, G.; Moschitta, A.; Carbone, P. Positioning Techniques in Indoor Environments Based on Stochastic Modeling of UWB Round-Trip-Time Measurements. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2272–2281. [Google Scholar] [CrossRef]
  7. Wang, Y.; Li, X. The IMU/UWB Fusion Positioning Algorithm Based on a Particle Filter. ISPRS Int. J. Geo-Inf. 2017, 6, 235. [Google Scholar] [CrossRef]
  8. Li, X.; Wang, Y.; Khoshelham, K. A Robust and Adaptive Complementary Kalman Filter Based on Mahalanobis Distance for Ultra Wideband/Inertial Measurement Unit Fusion Positioning. Sensors 2018, 18, 3435. [Google Scholar] [CrossRef] [PubMed]
  9. Xu, Y.; Ahn, C.K.; Shmaliy, Y.S.; Chen, X.; Li, Y. Adaptive robust INS/UWB-integrated human tracking using UFIR filter bank. Measurement 2018, 123, 1–7. [Google Scholar] [CrossRef]
  10. Wang, C.; Zhang, H.; Nguyen, T.; Xie, L. Ultra-Wideband Aided Fast Localization and Mapping System. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1602–1609. [Google Scholar]
  11. Benini, A.; Mancini, A.; Longhi, S. An IMU/UWB/Vision-based Extended Kalman Filter for Mini-UAV Localization in Indoor Environment using 802.15.4a Wireless Sensor Network. J. Intell. Robot. Syst. 2013, 70, 461–476. [Google Scholar] [CrossRef]
  12. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, H.; Tian, X.; Gu, C.; Fan, P.; Ni, X.; Yang, R.; Zhang, J.; Hu, M.; Niu, Y.; Cao, X.; et al. Drone-based all-weather entanglement distribution. arXiv 2019, arXiv:1905.0952. [Google Scholar]
  14. Suzuki, T. Mobile robot localization with GNSS multipath detection using pseudorange residuals. Adv. Robot. 2019, 33, 602–613. [Google Scholar] [CrossRef]
  15. Jurišica, L.; Duchoň, F.; Kaštan, D.; Babinec, A. High Precision GNSS Guidance for Field Mobile Robots. Int. J. Adv. Robot. Syst. 2012, 9, 169. [Google Scholar] [CrossRef]
  16. Strohmeier, M.; Walter, T.; Rothe, J.; Montenegro, S. Ultra-Wideband Based Pose Estimation for Small Unmanned Aerial Vehicles. IEEE Access 2018, 6, 57526–57535. [Google Scholar] [CrossRef]
  17. Akeila, E.; Salcic, Z.; Swain, A. Reducing Low-Cost INS Error Accumulation in Distance Estimation Using Self-Resetting. IEEE Trans. Instrum. Meas. 2014, 63, 177–184. [Google Scholar] [CrossRef]
  18. Patonis, P.; Patias, P.; Tziavos, I.N.; Rossikopoulos, D.; Margaritis, K.G. A Fusion Method for Combining Low-Cost IMU/Magnetometer Outputs for Use in Applications on Mobile Devices. Sensors 2018, 18, 2616. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, J.; Pu, J.; Sun, L.; He, Z. An Approach to Robust INS/UWB Integrated Positioning for Autonomous Indoor Mobile Robots. Sensors 2019, 19, 950. [Google Scholar] [CrossRef]
  20. Mur-Artal, R.; Tardos, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  21. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef] [Green Version]
  22. Dobrev, Y.; Flores, S.; Vossiek, M. Multi-Modal Sensor Fusion for Indoor Mobile Robot Pose Estimation. In IEEE-ION Position Location and Navigation Symposium; IEEE: New York, NY, USA, 2016; pp. 553–556. [Google Scholar]
  23. Fan, Q.; Sun, B.; Sun, Y.; Wu, Y.; Zhuang, X. Data Fusion for Indoor Mobile Robot Positioning Based on Tightly Coupled INS/UWB. J. Navig. 2017, 70, 1079–1097. [Google Scholar] [CrossRef]
  24. Guo, K.; Li, X.; Xie, L. Ultra-wideband and Odometry-Based Cooperative Relative Localization with Application to Multi-UAV Formation Control. IEEE Trans. Cybern. 2019, 1–14. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, C.; Li, K.; Liang, G.; Chen, H.; Huang, S.; Wu, X. A Heterogeneous Sensing System-Based Method for Unmanned Aerial Vehicle Indoor Positioning. Sensors 2017, 17, 1842. [Google Scholar] [CrossRef] [PubMed]
  26. Bescos, B.; Facil, J.M.; Civera, J.; Neira, J. DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes. IEEE Robot. Autom. Lett. 2018, 3, 4076–4083. [Google Scholar] [CrossRef] [Green Version]
  27. Cvišić, I.; Ćesić, J.; Marković, I.; Petrović, I. SOFT-SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles. J. Field Robot. 2018, 35, 578–595. [Google Scholar] [CrossRef]
  28. Yang, G.; Chen, Z.; Li, Y.; Su, Z. Rapid Relocation Method for Mobile Robot Based on Improved ORB-SLAM2 Algorithm. Remote Sens. 2019, 11, 149. [Google Scholar] [CrossRef]
  29. Delmerico, J.; Scaramuzza, D. A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018. [Google Scholar] [CrossRef]
  30. Qin, T.; Li, P.; Shen, S. Relocalization, Global Optimization and Map Merging for Monocular Visual-Inertial SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation ICRA, Brisbane, QLD, Australia, 21–25 May 2018. [Google Scholar] [CrossRef]
  31. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. Trans. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  32. Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711. [Google Scholar] [CrossRef]
  33. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482. [Google Scholar] [CrossRef]
  34. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef] [Green Version]
  35. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269. [Google Scholar] [CrossRef]
  36. Arasaratnam, I.; Haykin, S.; Hurd, T.R. Cubature Kalman Filtering for Continuous-Discrete Systems: Theory and Simulations. IEEE Trans. Signal Process. 2010, 58, 4977–4993. [Google Scholar] [CrossRef]
  37. Chen, B.; Principe, J.C. Maximum Correntropy Estimation Is a Smoothed MAP Estimation. IEEE Signal Process. Lett. 2012, 19, 491–494. [Google Scholar] [CrossRef]
  38. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Príncipe, J.C. Steady-State Mean-Square Error Analysis for Adaptive Filtering under the Maximum Correntropy Criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  39. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77. [Google Scholar] [CrossRef] [Green Version]
  40. Mekonnen, Z.W.; Wittneben, A. Self-calibration method for TOA based localization systems with generic synchronization requirement. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015. [Google Scholar]
  41. Naddafzadeh-Shirazi, G.; Shenouda, M.B.; Lampe, L. Second Order Cone Programming for Sensor Network Localization with Anchor Position Uncertainty. IEEE Trans. Wirel. Commun. 2014, 13, 749–763. [Google Scholar] [CrossRef]
  42. Hamer, M.; D’Andrea, R. Self-Calibrating Ultra-Wideband Network Supporting Multi-Robot Localization. IEEE Access 2018, 6, 22292–22304. [Google Scholar] [CrossRef]
  43. Ledergerber, A.; Hamer, M.; D’Andrea, R. A robot self-localization system using one-way ultra-wideband communication. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015. [Google Scholar]
  44. Cui, X.; Mei, C.; Qin, Y.; Yan, G.; Fu, Q. In-motion Alignment for Low-cost SINS/GPS under Random Misalignment Angles. J. Navig. 2017, 70, 1224–1240. [Google Scholar] [CrossRef]
  45. Rezazadeh, J.; Moradi, M.; Ismail, A.S.; Dutkiewicz, E. Impact of static trajectories on localization in wireless sensor networks. Wirel. Netw. 2015, 21, 809–827. [Google Scholar] [CrossRef]
  46. Suwatthikul, C.; Chantaweesomboon, W.; Manatrinon, S.; Athikulwongse, K.; Kaemarungsi, K. Implication of anchor placement on performance of UWB real-time locating system. In Proceedings of the 2017 8th International Conference of Information and Communication Technology for Embedded Systems (IC-ICTES), Chonburi Beach, Thailand, 7–9 May 2017. [Google Scholar]
  47. Mridula, K.M.; Ameer, P.M. Localization under anchor node uncertainty for underwater acoustic sensor networks. Int. J. Commun. Syst. 2018, 31, e3445. [Google Scholar] [CrossRef]
Figure 1. The platform of the positioning system.
Figure 2. Positioning Process.
Figure 3. The Architecture of the positioning system.
Figure 4. The architecture of the extension system.
Figure 5. Experimental site.
Figure 6. Trajectory of the mobile robot.
Figure 7. Positioning error using 4 anchors.
Figure 8. Positioning of the new anchor.
Figure 9. Coordinate estimation of the new anchor.
Figure 10. Positioning using the extended positioning system.
Figure 11. Positioning effect of the positioning system. (A) Using the estimation position of the new anchor. (B) Using the reference position of the new anchor.
Figure 12. Positioning in an unfamiliar environment.
Figure 13. Positioning effect in an unfamiliar environment.
Table 1. Coordinates of anchors.

Anchor    Coordinates (x, y, z) (m)
Anchor0   (−0.197, 0.671, 1.8)
Anchor1   (1.836, 0.375, 2.0)
Anchor2   (−0.262, 3.440, 1.6)
Anchor3   (2.330, 5.995, 2.0)
Anchor4   (−0.174, 6.242, 2.3)

Table 2. Measurement parameters of modules.

Module             Measurement Error (m)
Laser rangefinder  <0.005
UWB transceivers   <0.15

Table 3. Coordinates of the new anchor. Values of (x, y) in m.

Type                 MCKF-RLS          KF-RLS            UWB-RLS           UWB-LS
Reference            (−0.197, 6.242)   (−0.197, 6.242)   (−0.197, 6.242)   (−0.197, 6.242)
Final estimation     (−0.218, 6.296)   (−0.225, 6.304)   (−0.181, 6.245)   (−0.399, 6.166)
RMSE                 (0.043, 0.043)    (0.041, 0.053)    (0.064, 0.024)    (0.131, 0.101)
RMSE (distance)      0.061             0.067             0.068             0.166
Mean                 (−0.160, 6.272)   (−0.167, 6.283)   (−0.125, 6.232)   (−0.180, 6.246)
Standard deviation   (0.041, 0.031)    (0.041, 0.033)    (0.039, 0.022)    (0.132, 0.101)

Table 4. Positioning error using the extended system.

System                               RMSE (m)   Mean (m)   Standard Deviation (Std) (m)
Original positioning system (OPS)    0.137      0.119      0.067
Reference positioning system (RPS)   0.122      0.116      0.058
Extended positioning system (EPS)    0.130      0.108      0.057
Difference between RPS and EPS       0.008      0.008      0.001

Table 5. Positioning error using different methods. All values in m.

                   MCKF Fusion            KF Fusion              UWB Only
System             RMSE   Mean   Std      RMSE   Mean   Std      RMSE   Mean   Std
Original system    0.137  0.119  0.067    0.144  0.121  0.078    0.160  0.149  0.058
Extended system    0.130  0.116  0.058    0.136  0.122  0.061    0.154  0.141  0.061
Reference system   0.122  0.108  0.057    0.126  0.113  0.055    0.142  0.130  0.057
ES-UE              0.131  0.111  0.069    0.140  0.121  0.072    0.171  0.155  0.072

Share and Cite

Xu, X.; Liu, X.; Zhao, B.; Yang, B. An Extensible Positioning System for Locating Mobile Robots in Unfamiliar Environments. Sensors 2019, 19, 4025. https://doi.org/10.3390/s19184025
