Article

Crossing-Point Estimation in Human–Robot Navigation—Statistical Linearization versus Sigma-Point Transformation

by Rainer Palm 1,* and Achim J. Lilienthal 2
1 Center for Applied Autonomous Sensor Systems (AASS), Department of Technology, Örebro University, SE-701 82 Örebro, Sweden
2 Technical University Munich (TUM), 80333 Munich, Germany
* Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3303; https://doi.org/10.3390/s24113303
Submission received: 20 March 2024 / Revised: 16 May 2024 / Accepted: 17 May 2024 / Published: 22 May 2024
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)

Abstract

Interactions between mobile robots and human operators in common areas require a high level of safety, especially in terms of trajectory planning, obstacle avoidance and mutual cooperation. In this connection, the crossings of planned trajectories and their uncertainty due to model fluctuations, system noise and sensor noise play a prominent role. This paper discusses the calculation of the expected areas of interaction during human–robot navigation with respect to fuzzy and noisy information. The expected crossing points of the possible trajectories depend nonlinearly on the positions and orientations of the robots and humans. The nonlinear transformation of a noisy system input, such as the directions of motion of humans and robots, to a system output, the expected area of intersection of their trajectories, is performed by two methods: statistical linearization and the sigma-point transformation. For both approaches, fuzzy approximations are presented, and the inverse problem is discussed, in which the input distribution parameters are computed from given output distribution parameters.

1. Introduction

The planning and performance of mobile robot tasks in the presence of human operators sharing the same workspace require a high level of stability and safety. Research activities regarding navigation, obstacle avoidance, adaptation and collaboration between robots and human agents have been widely reported [1,2]. Multiple target tracking for robots using higher levels in a control hierarchy is discussed in [3,4]. A human-friendly interaction between robots and humans can be obtained by human-like sensor systems [5]. The trajectory-crossing problem of robots and humans [6,7] and corresponding fuzzy solutions [8] play a prominent role in robot navigation. The motivations for a fuzzy solution of the intersection problem are manifold. One is the uncertain measurement of the position and orientation of the human agent, for which the use of a fuzzy signal and adequate fuzzy processing seems natural [9,10]. Another is the need to reduce the computational effort when complex calculations must be completed within a very small time interval. System uncertainties and observation noise lead to uncertainties in the intersection estimates.
The objective of this work is the formulation of the crossing/intersection problem taking into account the uncertainties in human–robot systems, including sensor and motor characteristics. An important aspect is to define the permissible uncertainties in a human–robot system for a given uncertainty at a possible intersection of their trajectories. Taking the nonlinearities into account, this is performed by the differential approach and a subsequent analysis of the corresponding Gaussian distributions. This approach is compared with the sigma-point transformation, which simplifies the computation and qualitatively extends the analysis of the statistics of the random variables. For broader areas of possible intersections, both methods are extended to fuzzy regions with different numbers and shapes of fuzzy sets. The most important contributions are as follows:
  • An investigation of uncertainties of possible intersection areas originating from sensor noise or system uncertainties.
  • A direct and inverse transformation of the error variables at the intersection areas for two input variables (orientation angles) and two output variables (intersection coordinates).
  • An extension of the method from two to six input variables (two orientation angles and four position coordinates).
  • An exploration of the formulations of fuzzy versions.
  • A formulation of the problem by the sigma-point transformation and corresponding comparison of the two methods.
This paper deals with the one-robot one-human trajectory-crossing problem, where small uncertainties in position and orientation may lead to large uncertainties at the intersection points. The positions and orientations of the human and robot are nonlinearly coupled but can be linearized. In the following analysis, the linear part of the nonlinear system is considered for small variations in the input [11]. Then, the “direct task” is described, in which the parameters of the input distribution are transformed into the output distribution parameters. The “inverse task” is also solved, in which the input parameters are calculated for defined output distribution parameters. In this paper, two methods are outlined:
  • The statistical linearization, which linearizes the nonlinearity around the operating area at the intersection. The means and standard deviations of the input parameters (positions, orientations) are transformed through the linearized system to obtain the means and standard deviations of the output parameters (the position of the intersection).
  • The sigma-point transformation, which calculates the so-called sigma points of the input distribution, including the mean and covariance of the input. The sigma points are propagated directly through the nonlinear system [12,13,14] to obtain the mean and covariance of the output and, with this, the standard deviations of the output (the position of the intersection). The advantage of the sigma-point transformation is that it captures the first- and second-order statistics of a random variable, whereas the statistical linearization approximates a random variable only to first order. However, the computational complexities of the extended Kalman filter (EKF, differential approach) and the unscented Kalman filter (UKF, sigma-point approach) are of the same order [13].
This paper is organized as follows. Section 2 describes related work on unscented Kalman filters in mobile robot applications. In Section 3, the general intersection problem and its analytical treatment are described. Section 4 deals with the transformation/conversion of Gaussian distributions for a two-input–two-output system and for a six-input–two-output system, plus the corresponding inverse and fuzzy solutions. In Section 5, the sigma-point approach plus its inverse and fuzzy solutions are addressed. Section 6 presents simulations of the statistical linearization and the sigma-point transformation to show the quality of the input–output conversion of the distributions and the impact of different resolutions of the fuzzy approximations on the accuracy of the random intersection variable. Finally, Section 7 concludes this paper with a discussion and comparison of the two approaches.

2. Related Work

The crossing problem for mobile robots has been dealt with especially by [6,7]. Both publications address the so-called rendezvous problem, whose key point is trajectory planning under time constraints, taking into account the dynamics of the contributing robots. Uncertainties of possible intersection areas that come from sensor noise or system uncertainties are not discussed in depth. A fuzzy-adaptive extended Kalman filter (FAEKF) for the real-time attitude estimation of a mobile robot is proposed in [15], where adaptation laws based on fuzzy IF–THEN rules modify the noise covariance matrices of the filter. However, the use of unscented Kalman filters or the sigma-point transformation is not discussed there. For the estimation of landmarks, a simultaneous localization and mapping (SLAM) method is presented by [16], where an iterated sigma-point FastSLAM (ISP-FastSLAM) algorithm is proposed to minimize statistical linearization errors through Gauss–Newton iteration. A further application is presented by [17], where a walking robot uses the sigma-point transformation for state estimation to guarantee stability in the system’s hybrid dynamics, which contains continuous and switching parts during movement. In [18], a vision-based SLAM system uses both extended Kalman filter (EKF) and sigma-point Kalman filter (SPKF) algorithms and shows the superiority of the latter over the EKF. The pose estimation of mobile robots is discussed in [19], where several filter techniques such as the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and several variants of the particle filter (PF) are compared. It turns out that the UKF (that is, the sigma-point approach) exhibits almost the same computational cost as the EKF. In [20], inter-robot and robot–target correlations are discussed, and an unscented transformation-based collaborative self-localization and target tracking algorithm between robots is proposed. A tutorial on different approaches to exploit the structure of a system’s state and measurement models to reduce the computational demand of the algorithms is presented by [21]. In this publication, the computational complexity of different state estimation algorithms is presented, showing the superiority of the sigma-point transformation algorithms.
None of these publications take the obstacle avoidance and/or crossing problem in the presence of human actors into account; the present paper is therefore a further contribution to the robot–human interaction problem.

3. Computation of Intersections

The problem can be stated as follows:
A robot and a human agent move in a common area according to their tasks or intentions. To avoid collisions, possible intersections of the agents’ paths should be predicted for both trajectory planning and on-line interaction. To accomplish this, the positions, orientations and intended movements of the robot and human should be estimated as accurately as needed.
In this connection, uncertainties and noise on the random variables $\mathbf{x}_R$, $\mathbf{x}_H$, $\phi_R$ and $\phi_H$ (positions and orientations of the robot and human) have a great impact on the calculation of the expected intersection position $\mathbf{x}_c$. The random variable $\mathbf{x}_c$ is calculated as the crossing point of the extensions of the orientation or velocity vectors of the robot and human, which may change during motion depending on the task and current interaction. The task is to calculate the intersection and its uncertainty in the presence of the known uncertainties of the acting agents, robot and human.
System noise $\mathbf{w}_R$ and $\mathbf{w}_H$ for the robot and human can be obtained from experiments. The noise $\mathbf{w}_c$ of the “virtual” intersection is composed of the nonlinearly transformed noise $\mathbf{w}_R$ and $\mathbf{w}_H$ and some additional noise $\mathbf{v}_c$ that may come from uncertainties in the nonlinear computation of the intersection position $\mathbf{x}_c$ (see Figure 1). In the following, the geometrical relations are described, as well as the fuzzy approximations and nonlinear transformations of the random variables $\mathbf{x}_R$, $\mathbf{x}_H$, $\phi_R$ and $\phi_H$.

3.1. Geometrical Relations

Let the y-axes of the mobile coordinate frames of the robot and human be aligned with their directions of motion. Furthermore, let the orientation angles $\phi_R$ and $\phi_H$ of the robot and human be measured counterclockwise from the x-axis of the base frame. Let the intersection $(x_c, y_c)$ of the two linear trajectories $\mathbf{x}_R(t)$ and $\mathbf{x}_H(t)$ in a plane be described by the following relations (see Figure 2):
$$x_H = x_R + d_{RH}\cos(\phi_R + \delta_R), \qquad y_H = y_R + d_{RH}\sin(\phi_R + \delta_R)$$
$$x_R = x_H + d_{RH}\cos(\phi_H + \delta_H), \qquad y_R = y_H + d_{RH}\sin(\phi_H + \delta_H)$$
where $\mathbf{x}_H = (x_H, y_H)$ and $\mathbf{x}_R = (x_R, y_R)$ are the positions of the human and robot, $\phi_H$ and $\phi_R$ are their orientation angles, and $\delta_H$ and $\delta_R$ are the positive angles measured counterclockwise from the y-coordinates. The angle at the intersection is $\tilde{\beta} = \pi - \delta_R - \delta_H$. The variables $\mathbf{x}_H$, $\mathbf{x}_R$, $\phi_R$, $\phi_H - \delta_H$ and $\phi_H - \delta_R$; the distance $d_{RH}$; and the angle $\gamma$ are assumed to be measurable. The angle $\gamma$ is the bearing angle of the robot-to-human direction measured in base coordinates. If $\phi_H$ is not directly measurable, it can be computed by
$$\phi_H = \arcsin\left(\frac{y_H - y_R}{d_{RH}}\right) - \delta_H + \pi$$
The coordinates x c and y c of the intersection are computed straightforwardly by [8]
$$x_c = \frac{A - B}{\tan\phi_R - \tan\phi_H}; \qquad y_c = \frac{A\tan\phi_H - B\tan\phi_R}{\tan\phi_R - \tan\phi_H}; \qquad A = x_R\tan\phi_R - y_R; \qquad B = x_H\tan\phi_H - y_H$$
Rewriting (3) leads to
$$x_c = x_R\frac{\tan\phi_R}{G} - y_R\frac{1}{G} - x_H\frac{\tan\phi_H}{G} + y_H\frac{1}{G}$$
$$y_c = x_R\frac{\tan\phi_R\tan\phi_H}{G} - y_R\frac{\tan\phi_H}{G} - x_H\frac{\tan\phi_H\tan\phi_R}{G} + y_H\frac{\tan\phi_R}{G}; \qquad G = \tan\phi_R - \tan\phi_H$$
After rearranging (4), we observe that $\mathbf{x}_c = (x_c, y_c)^T$ is linear in $\mathbf{x}_{RH} = (x_R, y_R, x_H, y_H)^T$:
$$\mathbf{x}_c = A_{RH}\cdot\mathbf{x}_{RH}$$
where
$$A_{RH} = f(\phi_R, \phi_H) = \frac{1}{G}\begin{pmatrix} \tan\phi_R & -1 & -\tan\phi_H & 1\\ \tan\phi_R\tan\phi_H & -\tan\phi_H & -\tan\phi_H\tan\phi_R & \tan\phi_R \end{pmatrix}$$
This notation is advantageous for further computations, such as the fuzzification of the intersection problem and the transformation of the error distributions.
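For illustration, a minimal Python/NumPy sketch of Eqs. (5) and (6) follows; the function names and the NumPy formulation are ours and not part of the original paper.

```python
import numpy as np

def A_RH(phi_R, phi_H):
    """Matrix A_RH of Eq. (6): x_c = A_RH @ (x_R, y_R, x_H, y_H)^T.
    Singular when tan(phi_R) == tan(phi_H), i.e., for parallel directions."""
    tR, tH = np.tan(phi_R), np.tan(phi_H)
    G = tR - tH
    return np.array([[tR,      -1.0, -tH,      1.0],
                     [tR * tH, -tH,  -tH * tR, tR ]]) / G

def intersection(x_RH, phi_R, phi_H):
    """Crossing point (x_c, y_c) of the two linear trajectories, Eq. (5)."""
    return A_RH(phi_R, phi_H) @ x_RH

# Scenario of Section 6 (positions in m, angles in rad):
x_RH = np.array([2.0, 0.0, 4.0, 10.0])     # (x_R, y_R, x_H, y_H)
x_c, y_c = intersection(x_RH, 1.78, 3.69)
```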

3.2. Computation of Intersections—Fuzzy Approach

The fuzzy solution presented in the following is a combination of classical analytical (crisp) methods and rule-based methods in the sense of a Takagi–Sugeno (TS) fuzzy rule base. An appropriate choice of the number of fuzzy sets and corresponding fuzzy rules depends strongly on the specific application. In the present case, fuzzy sets are used for the approximation of nonlinear functions. In the following, we introduce a fuzzy rule-based approximation of (5) with $n \times n$ fuzzy rules $R_{i,j}$:
$$R_{i,j}: \quad \text{IF } \phi_R = \Phi_R^i \text{ AND } \phi_H = \Phi_H^j \text{ THEN } \mathbf{x}_c = A_{RH}^{i,j}\cdot\mathbf{x}_{RH}$$
where $n$ is the number of fuzzy terms $\Phi_R^i$ and $\Phi_H^j$ for $\phi_R$ and $\phi_H$, with the result
$$\mathbf{x}_c = \sum_{i,j} w_i(\phi_R)\, w_j(\phi_H)\, A_{RH}^{i,j}\cdot\mathbf{x}_{RH}$$
where $i,j = 1 \ldots n$, and $w_i(\phi_R), w_j(\phi_H) \in [0,1]$ are normalized membership functions with $\sum_i w_i(\phi_R) = 1$ and $\sum_j w_j(\phi_H) = 1$.
Let the universes of discourse for $\phi_R$ and $\phi_H$ be $\phi_R, \phi_H \in [0^\circ, 360^\circ]$. Furthermore, let these universes of discourse be divided into $n$ partitions (for example, $n = 6$) of $60^\circ$ each, which leads to $6 \times 6$ fuzzy rules. The corresponding membership functions are shown in Figure 3. It turns out that this resolution leads to a poor fuzzy approximation. The approximation quality can be improved by increasing the number of fuzzy sets, which however results in a quadratic increase in the number of fuzzy rules. To avoid an “explosion” of the number of fuzzy rules computed in one time step, a set of sub-areas, each covering a small number of rules, is defined. Based on the measurements of $\phi_R$ and $\phi_H$, the appropriate sub-area is selected together with the corresponding set of rules (see Figure 4, sub-area $A_R$, $A_H$). With this, the number of rules activated at one time step is low, although the total number of rules can be high. At the borderlines between the sub-areas, abrupt changes may occur; these can be avoided by overlapping the sub-areas.
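A sketch of the TS approximation (7) with triangular membership functions follows, reusing `A_RH` from the sketch above. The guard against singular rule consequents (parallel directions on the rule grid) and the skipping of inactive rules are our simplifications of the sub-area mechanism of Figure 4.

```python
import numpy as np

def tri_weights(phi, centers):
    """Normalized triangular memberships over equally spaced angle centers."""
    h = centers[1] - centers[0]
    w = np.clip(1.0 - np.abs(phi - centers) / h, 0.0, None)
    return w / w.sum()

def intersection_fuzzy(x_RH, phi_R, phi_H, centers):
    """TS interpolation of Eq. (7): blend the rule matrices A_RH^{i,j}.
    The matrices A_RH(c_i, c_j) could be precomputed off-line."""
    wR, wH = tri_weights(phi_R, centers), tri_weights(phi_H, centers)
    x_c = np.zeros(2)
    for i, ci in enumerate(centers):
        for j, cj in enumerate(centers):
            w = wR[i] * wH[j]
            # Skip inactive rules and singular consequents (parallel rays).
            if w < 1e-12 or abs(np.tan(ci) - np.tan(cj)) < 1e-9:
                continue
            x_c += w * (A_RH(ci, cj) @ x_RH)
    return x_c

centers = np.deg2rad(np.arange(0.0, 360.1, 7.5))   # 7.5-degree sectors
```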

3.3. Differential Approach

The positions and orientations of robots and humans are usually corrupted with noise originating from system uncertainties, sensor errors and motor characteristics. These uncertainties propagate into uncertainties of the crossing/intersection areas of the trajectories of the robot and human. The analysis of uncertainty and noise at $\mathbf{x}_c$ generated by the noise at $\phi_R$, $\phi_H$ and $\mathbf{x}_{RH} = (x_R, y_R, x_H, y_H)^T$ requires a linearization of (4) around the operating points and, with this, a differential strategy. For simplification, let only the orientation angles $\phi_R$ and $\phi_H$ be corrupted with noise. In Section 4.3, the positions $\mathbf{x}_{RH}$ are taken into account, too.
Differentiating (4) with $\mathbf{x}_{RH} = \mathrm{const.}$ yields
$$d\mathbf{x}_c = \tilde{J}\cdot d\boldsymbol{\phi}; \qquad d\boldsymbol{\phi} = (d\phi_R,\, d\phi_H)^T; \qquad \tilde{J} = \begin{pmatrix} \tilde{J}_{11} & \tilde{J}_{12}\\ \tilde{J}_{21} & \tilde{J}_{22} \end{pmatrix}$$
where
$$\tilde{J}_{11} = \frac{(-\tan\phi_H,\ 1,\ \tan\phi_H,\ -1)\cdot\mathbf{x}_{RH}}{G^2\cos^2\phi_R}; \qquad \tilde{J}_{12} = \frac{(\tan\phi_R,\ -1,\ -\tan\phi_R,\ 1)\cdot\mathbf{x}_{RH}}{G^2\cos^2\phi_H}$$
$$\tilde{J}_{21} = \tilde{J}_{11}\cdot\tan\phi_H; \qquad \tilde{J}_{22} = \tilde{J}_{12}\cdot\tan\phi_R$$
The following sections deal with the accuracy of the computed intersection in the case of noisy orientation information (see Figure 5).
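In code, the Jacobian (8) reads as follows — a sketch whose signs were re-derived from Eq. (4), since the extraction does not preserve the paper's minus signs (the resulting standard deviations reproduce the values reported in Table 2):

```python
import numpy as np

def jacobian_phi(x_RH, phi_R, phi_H):
    """J~ of Eq. (8): sensitivity of (x_c, y_c) to (phi_R, phi_H)
    at fixed positions x_RH = (x_R, y_R, x_H, y_H)."""
    tR, tH = np.tan(phi_R), np.tan(phi_H)
    G = tR - tH
    J11 = np.array([-tH, 1.0,  tH, -1.0]) @ x_RH / (G**2 * np.cos(phi_R)**2)
    J12 = np.array([ tR, -1.0, -tR,  1.0]) @ x_RH / (G**2 * np.cos(phi_H)**2)
    return np.array([[J11,      J12],
                     [J11 * tH, J12 * tR]])
```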

4. Transformation of Gaussian Distributions

4.1. General Assumptions

Consider a nonlinear system
$$\mathbf{z} = F(\mathbf{x})$$
where the random variables $\mathbf{x} = (x_1, x_2)^T$ denote the input, $\mathbf{z} = (z_1, z_2)^T$ denotes the output and $F$ denotes a nonlinear transformation. The distribution of the uncorrelated, Gaussian-distributed components $x_1$ and $x_2$ is described by
$$f_{x_1,x_2} = \frac{1}{2\pi\sigma_{x_1}\sigma_{x_2}}\exp\left(-\frac{1}{2}\left(\frac{e_{x_1}^2}{\sigma_{x_1}^2} + \frac{e_{x_2}^2}{\sigma_{x_2}^2}\right)\right)$$
where $e_{x_1} = x_1 - \bar{x}_1$ and $e_{x_2} = x_2 - \bar{x}_2$, with means $\bar{x}_1$, $\bar{x}_2$ and standard deviations $\sigma_{x_1}$, $\sigma_{x_2}$.
The goal is as follows: given the nonlinear transformation (9) and the distribution (10), compute the output signals $z_1$ and $z_2$ and their distributions, together with their standard deviations and the correlation coefficient. Linear systems transform Gaussian distributions linearly, such that the output signals are also Gaussian-distributed. This does not apply to nonlinear systems, but if the input standard deviations are small enough, a local linear transfer function can be built for which the outputs are Gaussian-distributed. Provided that the input standard deviations are small relative to the scale on which the nonlinearity varies, the output distribution can be written as follows:
$$f_{z_1,z_2} = \frac{1}{2\pi\sigma_{z_1}\sigma_{z_2}\sqrt{1-\rho_{z12}^2}}\exp\left(-\frac{1}{2(1-\rho_{z12}^2)}\left(\frac{e_{z_1}^2}{\sigma_{z_1}^2} + \frac{e_{z_2}^2}{\sigma_{z_2}^2} - \frac{2\rho_{z12}\, e_{z_1} e_{z_2}}{\sigma_{z_1}\sigma_{z_2}}\right)\right)$$
where $\rho_{z12}$ is the correlation coefficient.

4.2. Statistical Linearization, Two Inputs–Two Outputs

Let the nonlinear transformation $F$ be described by two smooth transfer functions (see the block scheme in Figure 6):
$$z_1 = f_1(x_1, x_2); \qquad z_2 = f_2(x_1, x_2)$$
where $(x_1, x_2) = (\phi_R, \phi_H)$ and $(z_1, z_2) = (x_c, y_c)$.
The linearization of (12) yields
$$d\mathbf{z} = \tilde{J}\cdot d\mathbf{x} \quad\text{or}\quad \mathbf{e}_z = \tilde{J}\cdot\mathbf{e}_x$$
with
$$\mathbf{e}_z = (e_{z_1}, e_{z_2})^T, \quad \mathbf{e}_x = (e_{x_1}, e_{x_2})^T; \qquad d\mathbf{z} = (dz_1, dz_2)^T, \quad d\mathbf{x} = (dx_1, dx_2)^T$$
$$\tilde{J} = \begin{pmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2\\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 \end{pmatrix}$$

4.2.1. Output Distribution

To obtain the density $f_{z_1,z_2}$ (11) of the output signal, we invert (15) and substitute the components of $\mathbf{e}_x$ into (10). $\tilde{J}$ is invertible if $|\tilde{J}| \neq 0$. Otherwise, there exist singularities due to particular constellations of the position vector $\mathbf{x}_{RH}$ and/or the orientations $\phi_R$ and $\phi_H$. Finding all the singularities requires a further analysis, which is beyond the scope of this paper. However, a simple heuristic reveals some obvious situations: if $\phi_R = \phi_H$ or $\phi_R = \phi_H + \pi$, the human and robot move in parallel, either in the same or in opposite directions. On the other hand, one may also obtain diverging trajectories with no crossing.
$$\mathbf{e}_x = J\cdot\mathbf{e}_z$$
with $J = \tilde{J}^{-1}$ and
$$J = \begin{pmatrix} J_{11} & J_{12}\\ J_{21} & J_{22} \end{pmatrix} = \begin{pmatrix} \mathbf{j}_{xz}\\ \mathbf{j}_{yz} \end{pmatrix}$$
where $\mathbf{j}_{xz} = (J_{11}, J_{12})$ and $\mathbf{j}_{yz} = (J_{21}, J_{22})$; the entries $J_{ij}$ result from the inversion of $\tilde{J}$. From this substitution, we obtain
$$f_{x_1,x_2} = K_{x_1,x_2}\cdot\exp\left(-\frac{1}{2}\,\mathbf{e}_z^T\, J^T S_x^{-1} J\,\mathbf{e}_z\right)$$
where $K_{x_1,x_2} = \frac{1}{2\pi\sigma_{x_1}\sigma_{x_2}}$ and
$$S_x^{-1} = \begin{pmatrix} 1/\sigma_{x_1}^2 & 0\\ 0 & 1/\sigma_{x_2}^2 \end{pmatrix}$$
The exponent of (18) is rewritten as
$$xpo = -\frac{1}{2}\left(\frac{1}{\sigma_{x_1}^2}\left(e_{z_1}J_{11} + e_{z_2}J_{12}\right)^2 + \frac{1}{\sigma_{x_2}^2}\left(e_{z_1}J_{21} + e_{z_2}J_{22}\right)^2\right)$$
and furthermore
$$xpo = -\frac{1}{2}\left[e_{z_1}^2\left(\frac{J_{11}^2}{\sigma_{x_1}^2} + \frac{J_{21}^2}{\sigma_{x_2}^2}\right) + e_{z_2}^2\left(\frac{J_{12}^2}{\sigma_{x_1}^2} + \frac{J_{22}^2}{\sigma_{x_2}^2}\right) + 2 e_{z_1} e_{z_2}\left(\frac{J_{11}J_{12}}{\sigma_{x_1}^2} + \frac{J_{21}J_{22}}{\sigma_{x_2}^2}\right)\right]$$
Let
$$A = \frac{J_{11}^2}{\sigma_{x_1}^2} + \frac{J_{21}^2}{\sigma_{x_2}^2}; \qquad B = \frac{J_{12}^2}{\sigma_{x_1}^2} + \frac{J_{22}^2}{\sigma_{x_2}^2}; \qquad C = \frac{J_{11}J_{12}}{\sigma_{x_1}^2} + \frac{J_{21}J_{22}}{\sigma_{x_2}^2}$$
then a comparison of $xpo$ in (21) with the exponent in (11) yields
$$\frac{1}{1-\rho_{z12}^2}\cdot\frac{1}{\sigma_{z_1}^2} = A; \qquad \frac{1}{1-\rho_{z12}^2}\cdot\frac{1}{\sigma_{z_2}^2} = B; \qquad \frac{2\rho_{z12}}{1-\rho_{z12}^2}\cdot\frac{1}{\sigma_{z_1}\sigma_{z_2}} = -2C$$
The standard deviations $\sigma_{z_1}$ and $\sigma_{z_2}$ and the correlation coefficient $\rho_{z12}$ follow as
$$\rho_{z12} = \frac{-C}{\sqrt{AB}}; \qquad \frac{1}{\sigma_{z_1}^2} = A - \frac{C^2}{B}; \qquad \frac{1}{\sigma_{z_2}^2} = B - \frac{C^2}{A}$$
The result is as follows: if the parameters of the input distribution and the transfer function $F$ are known, then the output distribution parameters can be computed straightforwardly.
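As a sketch, the direct task of Eqs. (22)–(24) in code ($J$ is computed as the inverse of the linearization $\tilde{J}$; the sign of the correlation follows the derivation above):

```python
import numpy as np

def output_distribution(J_tilde, sigma_x1, sigma_x2):
    """Eqs. (22)-(24): output std devs and correlation coefficient from
    uncorrelated input std devs, via J = inv(J_tilde) with e_x = J e_z."""
    J = np.linalg.inv(J_tilde)
    A = (J[0, 0] / sigma_x1)**2 + (J[1, 0] / sigma_x2)**2
    B = (J[0, 1] / sigma_x1)**2 + (J[1, 1] / sigma_x2)**2
    C = J[0, 0] * J[0, 1] / sigma_x1**2 + J[1, 0] * J[1, 1] / sigma_x2**2
    rho_z12 = -C / np.sqrt(A * B)
    sigma_z1 = 1.0 / np.sqrt(A - C**2 / B)
    sigma_z2 = 1.0 / np.sqrt(B - C**2 / A)
    return sigma_z1, sigma_z2, rho_z12
```

For the intersection problem, `J_tilde` would be the Jacobian of Section 3.3 evaluated at the current operating point.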

4.2.2. Fuzzy Solution

To save computing costs in real time, we create a TS fuzzy model represented by the rules $R_{ij}$:
$$R_{ij}: \quad \text{IF } x_1 = X_1^i \text{ AND } x_2 = X_2^j \text{ THEN } \rho_{z12} = \frac{-C_{ij}}{\sqrt{A_{ij}B_{ij}}} \text{ AND } \frac{1}{\sigma_{z_1}^2} = A_{ij} - \frac{C_{ij}^2}{B_{ij}} \text{ AND } \frac{1}{\sigma_{z_2}^2} = B_{ij} - \frac{C_{ij}^2}{A_{ij}}$$
where $X_1^i$, $X_2^j$ are fuzzy terms for $x_1$, $x_2$, and $A_{ij}$, $B_{ij}$, $C_{ij}$ are the values of $A$, $B$, $C$ at the predefined points $x_1 = x_1^i$ and $x_2 = x_2^j$.
From (25), we derive
$$\rho_{z12} = \sum_{ij} w_i(x_1) w_j(x_2)\,\frac{-C_{ij}}{\sqrt{A_{ij}B_{ij}}}; \quad \frac{1}{\sigma_{z_1}^2} = \sum_{ij} w_i(x_1) w_j(x_2)\left(A_{ij} - \frac{C_{ij}^2}{B_{ij}}\right); \quad \frac{1}{\sigma_{z_2}^2} = \sum_{ij} w_i(x_1) w_j(x_2)\left(B_{ij} - \frac{C_{ij}^2}{A_{ij}}\right)$$
$w_i(x_1) \in [0,1]$ and $w_j(x_2) \in [0,1]$ are the weighting functions, with $\sum_i w_i(x_1) = 1$ and $\sum_j w_j(x_2) = 1$.

4.2.3. Inverse Solution

The previous paragraphs discussed the direct transformation task: the distribution parameters of the input variables are given, and the corresponding output parameters are sought. However, it may also be useful to solve the inverse task: given the output parameters (standard deviations and correlation coefficient), find the corresponding input parameters. The solution of the inverse task is similar to that in Section 4.2. The starting points are Equations (10) and (11), which describe the distributions of the inputs and outputs, respectively. We substitute (13) into (11), rename the resulting exponent $xpo_z$ into $xpo_x$ and discuss the exponent
$$xpo_x = -\frac{1}{2(1-\rho_{z12}^2)}\left(\mathbf{e}_x^T\, \tilde{J}^T S_z^{-1} \tilde{J}\,\mathbf{e}_x - \frac{2\rho_{z12}\, e_{z_1} e_{z_2}}{\sigma_{z_1}\sigma_{z_2}}\right)$$
with
$$S_z^{-1} = \begin{pmatrix} 1/\sigma_{z_1}^2 & 0\\ 0 & 1/\sigma_{z_2}^2 \end{pmatrix}$$
Now, comparing (27) with the exponent of the input density (10), we find that the mixed term in (27) must vanish, from which we obtain the correlation coefficient $\rho_{z12}$ and, with this, the standard deviations of the inputs:
$$\rho_{z12} = \frac{\left(\dfrac{\tilde{J}_{11}\tilde{J}_{12}}{\sigma_{z_1}^2} + \dfrac{\tilde{J}_{21}\tilde{J}_{22}}{\sigma_{z_2}^2}\right)\sigma_{z_1}\sigma_{z_2}}{\tilde{J}_{11}\tilde{J}_{22} + \tilde{J}_{12}\tilde{J}_{21}}$$
$$\frac{1}{\sigma_{x_1}^2} = \left(\frac{\tilde{J}_{11}^2}{\sigma_{z_1}^2} + \frac{\tilde{J}_{21}^2}{\sigma_{z_2}^2} - \frac{2\rho_{z12}}{\sigma_{z_1}\sigma_{z_2}}\tilde{J}_{11}\tilde{J}_{21}\right)\Big/\left(1-\rho_{z12}^2\right)$$
$$\frac{1}{\sigma_{x_2}^2} = \left(\frac{\tilde{J}_{12}^2}{\sigma_{z_1}^2} + \frac{\tilde{J}_{22}^2}{\sigma_{z_2}^2} - \frac{2\rho_{z12}}{\sigma_{z_1}\sigma_{z_2}}\tilde{J}_{12}\tilde{J}_{22}\right)\Big/\left(1-\rho_{z12}^2\right)$$
The detailed development can be found in [22].
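A sketch of the inverse task (29): given the required output standard deviations, the admissible input standard deviations follow directly from the forward linearization $\tilde{J}$ (no matrix inversion needed):

```python
import numpy as np

def inverse_distribution(J_tilde, sigma_z1, sigma_z2):
    """Eq. (29): input std devs producing the required output std devs."""
    (J11, J12), (J21, J22) = np.asarray(J_tilde)
    rho = ((J11 * J12 / sigma_z1**2 + J21 * J22 / sigma_z2**2)
           * sigma_z1 * sigma_z2 / (J11 * J22 + J12 * J21))
    q = 1.0 - rho**2
    inv_var_x1 = (J11**2 / sigma_z1**2 + J21**2 / sigma_z2**2
                  - 2.0 * rho * J11 * J21 / (sigma_z1 * sigma_z2)) / q
    inv_var_x2 = (J12**2 / sigma_z1**2 + J22**2 / sigma_z2**2
                  - 2.0 * rho * J12 * J22 / (sigma_z1 * sigma_z2)) / q
    return 1.0 / np.sqrt(inv_var_x1), 1.0 / np.sqrt(inv_var_x2), rho
```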

4.3. Six Inputs–Two Outputs

Consider again the nonlinear system
$$\mathbf{x}_c = F(\mathbf{x})$$
In the previous subsections, we assumed the positions $\mathbf{x}_R$ and $\mathbf{x}_H$ not to be corrupted with noise. If the positions are also taken to be random variables, the number of inputs is six, with the input vector $\mathbf{x} = (x_1, \ldots, x_6)^T = (\phi_R, \phi_H, x_R, y_R, x_H, y_H)^T$ and the output vector $\mathbf{x}_c = (x_c, y_c)^T$.
Furthermore, let the uncorrelated, Gaussian-distributed inputs $x_1 \ldots x_6$ be described by the six-dimensional density
$$f_{\mathbf{x}} = \frac{1}{(2\pi)^{6/2}\,|S_x|^{1/2}}\exp\left(-\frac{1}{2}\,\mathbf{e}_x^T S_x^{-1}\mathbf{e}_x\right)$$
where $\mathbf{e}_x = (e_{x_1}, e_{x_2}, \ldots, e_{x_6})^T = \mathbf{x} - \bar{\mathbf{x}}$, with $\bar{\mathbf{x}}$ the mean of $\mathbf{x}$ and $S_x$ the covariance matrix
$$S_x = \mathrm{diag}\left(\sigma_{x_1}^2, \sigma_{x_2}^2, \ldots, \sigma_{x_6}^2\right)$$
According to (11), the output density is described by
$$f_{x_c,y_c} = \frac{1}{2\pi\sigma_{x_c}\sigma_{y_c}\sqrt{1-\rho^2}}\exp\left(-\frac{1}{2(1-\rho^2)}\left(\frac{e_{x_c}^2}{\sigma_{x_c}^2} + \frac{e_{y_c}^2}{\sigma_{y_c}^2} - \frac{2\rho\, e_{x_c} e_{y_c}}{\sigma_{x_c}\sigma_{y_c}}\right)\right)$$
where $\rho$ is the correlation coefficient and $\mathbf{e}_{x_c} = (e_{x_c}, e_{y_c})^T$.
After some calculations [23], we find for $\rho$, $1/\sigma_{x_c}^2$ and $1/\sigma_{y_c}^2$
$$\rho = \frac{-C}{\sqrt{AD}}; \qquad \frac{1}{\sigma_{x_c}^2} = A - \frac{C^2}{D}; \qquad \frac{1}{\sigma_{y_c}^2} = D - \frac{C^2}{A}$$
with
$$A = \sum_{i=1}^{6}\frac{J_{i1}^2}{\sigma_{x_i}^2}; \qquad B = C = \sum_{i=1}^{6}\frac{J_{i1}J_{i2}}{\sigma_{x_i}^2}; \qquad D = \sum_{i=1}^{6}\frac{J_{i2}^2}{\sigma_{x_i}^2}$$
This is the counterpart to the 2-dim input case (24).
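A sketch of the six-input case (32)–(33); here `J` is taken as the 6×2 array of the inverse-sensitivity entries $J_{i1}$, $J_{i2}$ used in (33):

```python
import numpy as np

def output_distribution_6(J, sigmas):
    """Eqs. (32)-(33): six uncorrelated inputs, two outputs.
    J[i, 0] = J_i1, J[i, 1] = J_i2; sigmas holds the six input std devs."""
    w = 1.0 / np.asarray(sigmas)**2
    A = np.sum(w * J[:, 0]**2)
    C = np.sum(w * J[:, 0] * J[:, 1])   # equals B in the paper's notation
    D = np.sum(w * J[:, 1]**2)
    rho = -C / np.sqrt(A * D)
    return 1.0 / np.sqrt(A - C**2 / D), 1.0 / np.sqrt(D - C**2 / A), rho
```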

4.3.1. Inverse Solution

An inverse solution cannot be computed uniquely due to the underdetermined character of the 6-input–2-output system. Therefore, the corresponding variances of the positions and orientations of the robot–human or robot–robot pair (inputs) cannot be inferred from required variances at the intersection position (output).

4.3.2. Fuzzy Approach

The steps of the fuzzy approach are very similar to those of the 2-input case:
- Define the operating points $\mathbf{x}^i = (x_1, x_2, x_3, x_4, x_5, x_6)_i^T$;
- Compute $A_i$, $B_i$ and $C_i$ at $\mathbf{x}^i$ from (33);
- Formulate the fuzzy rules $R_i$ according to (25) and (26), $i = 1 \ldots n$.
The number $n$ of rules is computed as follows: with $l = 6$ fuzzy terms per input and $k = 6$ inputs, we obtain $n = l^k = 6^6$ rules.
This number of rules is unacceptably high. To limit $n$ to an adequate number, one has to limit the number of inputs and/or fuzzy terms and look for the most influential variables, either in a heuristic or in a systematic way [24]. This, however, is beyond the scope of this paper.

5. Sigma-Point Transformation

In the following, the estimation/identification of the standard deviations of possible intersection coordinates of trajectories for both robot–robot and human–robot combinations by means of the sigma-point technique is discussed. The method is based on the unscented Kalman filter technique, where the intersections cannot be measured directly but only predicted/computed. Nevertheless, it is possible to compute the variance of predicted events, such as possible collisions or planned rendezvous situations, by a direct propagation of statistical parameters—the sigma points—through the nonlinear geometrical relation resulting from the crossing of two trajectories. Let $\mathbf{x} = (x_1, x_2)^T$ be the input vector and $\mathbf{x}_c = (x_{c1}, x_{c2})^T$ the output vector, where in the special case considered here $(x_1, x_2)^T = (\phi_R, \phi_H)^T$ and $(x_{c1}, x_{c2})^T = (x_c, y_c)^T$. The nonlinear relation between $\mathbf{x}$ and $\mathbf{x}_c$ is given by (34):
$$\mathbf{x}_c = F(\mathbf{x})$$
For the discrete case, we obtain for the state $\mathbf{x}_c$
$$\mathbf{x}_c(k) = F\left(\mathbf{x}(k-1) + \mathbf{w}(k-1)\right)$$
and for the measured output $\mathbf{z}_c(k)$
$$\mathbf{z}_c(k) = h\left(\mathbf{x}_c(k)\right) + \mathbf{v}(k)$$
where $\mathbf{w}$ and $\mathbf{v}$ are the system noise and measurement noise, respectively, and $h(\mathbf{x}_c)$ is the output nonlinearity. Furthermore, let there be the following:
  • $\bar{\mathbf{x}}(k)$—the mean at time $t_k$;
  • $P(k)$—the covariance matrix;
  • $\mathbf{x}_0$—the initial state with the known mean $\mu_0 = E(\mathbf{x}_0)$;
  • $P_0 = E\left[(\mathbf{x}_0 - \mu_0)(\mathbf{x}_0 - \mu_0)^T\right]$.

5.1. Selection of Sigma Points

Sigma points are selected parameters of the given error distribution of a random variable; they lie along the major eigen-axes of the covariance matrix of the random variable. The height of each sigma point (see Figure 7) represents its relative weight $W^j$, which is used in the following selection procedure.
Let $X(k-1)$ be a set of $2n+1$ sigma points, where $n$ is the dimension of the state space (in our example, $n = 2$):
$$X(k-1) = \left\{\left(\mathbf{x}^j(k-1),\, W^j\right)\ \middle|\ j = 0 \ldots 2n\right\}$$
Consider the following selection of sigma points
$$\mathbf{x}^0(k-1) = \bar{\mathbf{x}}(k-1); \qquad -1 < W^0 < 1; \qquad W^0 = \frac{\lambda}{n+\lambda}, \quad \lambda = \alpha^2(n+\kappa) - n$$
$$\mathbf{x}^i(k-1) = \bar{\mathbf{x}}(k-1) + \left(\sqrt{\frac{n}{1-W^0}P(k-1)}\right)_i, \qquad i = 1 \ldots n$$
$$\mathbf{x}^i(k-1) = \bar{\mathbf{x}}(k-1) - \left(\sqrt{\frac{n}{1-W^0}P(k-1)}\right)_{i-n}, \qquad i = (n+1) \ldots 2n$$
$$W^j = \frac{1-W^0}{2n}$$
under the condition
$$\sum_{j=0}^{2n} W^j = 1$$
$\alpha$ and $\kappa$ are scaling factors; a usual choice is $\alpha = 10^{-2}$ and $\kappa = 0$. $\left(\sqrt{\frac{n}{1-W^0}P(k-1)}\right)_i$ is the $i$-th row/column of the matrix square root of $\frac{n}{1-W^0}P$. The square root of a matrix $P$ is the solution $S$ of $P = S\cdot S^T$, which is obtained by Cholesky factorization.
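The selection procedure as a sketch (the columns of the Cholesky factor serve as the matrix square root; the parameter names follow the text, everything else is ours):

```python
import numpy as np

def sigma_points(x_mean, P, alpha=1e-2, kappa=0.0):
    """Eqs. (37)-(40): 2n+1 sigma points and weights for mean x_mean and
    covariance P; note that n / (1 - W0) equals n + lambda."""
    n = x_mean.size
    lam = alpha**2 * (n + kappa) - n
    W0 = lam / (n + lam)
    S = np.linalg.cholesky((n / (1.0 - W0)) * P)        # scaled sqrt: S @ S.T
    X = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])  # one point per row
    W = np.full(2 * n + 1, (1.0 - W0) / (2 * n))
    W[0] = W0
    return X, W
```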

5.2. Model Forecast Step

The next step of the UKF is the model forecast: the sigma points $\mathbf{x}^j(k-1)$ are propagated through the nonlinear process model
$$\mathbf{x}_c^{f,j}(k) = F\left(\mathbf{x}^j(k-1)\right)$$
where the superscript $f$ means “forecast”. From these transformed sigma points, the mean and covariance of the forecast value of $\mathbf{x}_c(k)$ are
$$\mathbf{x}_c^f(k) = \sum_{j=0}^{2n} W^j\,\mathbf{x}_c^{f,j}(k); \qquad P^f(k) = \sum_{j=0}^{2n} W^j\left(\mathbf{x}_c^{f,j}(k) - \mathbf{x}_c^f(k)\right)\left(\mathbf{x}_c^{f,j}(k) - \mathbf{x}_c^f(k)\right)^T$$
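In code, the propagation and moment recovery look as follows (a sketch of (41) and (42)); for the static intersection problem, this is also where the computation can stop, as noted in Section 5.4:

```python
import numpy as np

def unscented_moments(F, X, W):
    """Propagate the sigma points X (one per row) through F and recover
    the forecast mean and covariance of the transformed variable."""
    Z = np.array([F(x) for x in X])     # transformed sigma points
    z_mean = W @ Z                      # weighted mean
    dZ = Z - z_mean
    P_f = (W[:, None] * dZ).T @ dZ      # weighted sum of outer products
    return z_mean, P_f
```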

5.3. Measurement Update Step

In this step, the sigma points are propagated through the nonlinear observation model
$$\mathbf{z}_c^{f,j}(k) = h\left(\mathbf{x}_c^j(k-1)\right)$$
from which we obtain the mean and covariance (innovation covariance)
$$\mathbf{z}_c^f(k-1) = \sum_{j=0}^{2n} W^j\,\mathbf{z}_c^{f,j}(k-1)$$
$$\mathrm{Cov}\left(\tilde{\mathbf{z}}_c^f(k-1)\right) = \sum_{j=0}^{2n} W^j\left(\mathbf{z}_c^{f,j}(k-1) - \mathbf{z}_c^f(k-1)\right)\left(\mathbf{z}_c^{f,j}(k-1) - \mathbf{z}_c^f(k-1)\right)^T + R(k)$$
and the cross-covariance
$$\mathrm{Cov}\left(\tilde{\mathbf{x}}_c^f(k),\, \tilde{\mathbf{z}}_c^f(k-1)\right) = \sum_{j=0}^{2n} W^j\left(\mathbf{x}_c^{f,j}(k) - \mathbf{x}_c^f(k)\right)\left(\mathbf{z}_c^{f,j}(k-1) - \mathbf{z}_c^f(k-1)\right)^T$$

5.4. Data Assimilation Step

In this step, the forecast information is combined with the new information from the output $\mathbf{z}_c(k)$, from which we obtain, with the Kalman filter gain $K$,
$$\hat{\mathbf{x}}_c(k) = \mathbf{x}_c^f(k) + K(k)\left(\mathbf{z}_c(k) - \mathbf{z}_c^f(k-1)\right)$$
The gain $K$ is given by
$$K(k) = \mathrm{Cov}\left(\tilde{\mathbf{x}}_c^f(k),\, \tilde{\mathbf{z}}_c^f(k-1)\right)\cdot \mathrm{Cov}^{-1}\left(\tilde{\mathbf{z}}_c^f(k-1)\right)$$
and the posterior covariance is updated by
$$P(k) = P^f(k) - K(k)\cdot \mathrm{Cov}\left(\tilde{\mathbf{z}}_c^f(k-1)\right) K^T(k)$$
Usually, it is sufficient to compute the mean and variance of the output/state $\mathbf{x}_c$ of the nonlinear static system $F(\mathbf{x})$. In this case, it is possible to stop the computation at Equation (42), i.e., to calculate only the transformed sigma points $\mathbf{x}_c^{f,j}$ and to develop the specific output means and variances from (41) and (42). In this connection, it is enough to substitute the covariance matrix $Q$ into (38) instead of $P$. One advantage of the sigma-point approach over statistical linearization is its easy scalability to multi-dimensional random variables.
For the intersection problem, there are two cases:
  • Two inputs, two outputs (two orientation angles; two crossing coordinates);
  • Six inputs, two outputs (two orientation angles and four position coordinates; two crossing coordinates).
For the statistical linearization (method 1), the step from the 2-input–2-output case to the (6,2)-case is computationally more costly than for the sigma-point approach (method 2) (see Equations (20)–(24) versus Equations (37) and (40)–(42)).

5.5. Sigma Points—Fuzzy Solutions

In order to lower the computing effort, the TS fuzzy interpolation can be applied, as shown in the following. In the two-dimensional problem, the input sigma points are propagated through the nonlinear function $F$. Let $\mathbf{x}^j$ be the two-dimensional “input” sigma points
$$\mathbf{x}^j = (x_1^j, x_2^j)^T$$
or for the special case “intersection”
$$\mathbf{x}^j = (\phi_R^j, \phi_H^j)^T$$
The propagation through $F$ leads to the “output” sigma points
$$\mathbf{x}_c^{f,j}(k) = F\left(\mathbf{x}^j(k-1)\right)$$
or for the special case
$$\mathbf{x}_c^{f,j}(k) = F\left(x_1^j(k-1),\, x_2^j(k-1)\right) = F\left(\phi_R^j(k-1),\, \phi_H^j(k-1)\right)$$
The special nonlinear function F is described by (see (5))
$$\mathbf{x}_c = A_{RH}(\phi_R, \phi_H)\cdot\mathbf{x}_{RH}$$
where $A_{RH}$ is the nonlinear matrix (6), combined linearly with the position vector $\mathbf{x}_{RH} = (x_R, y_R, x_H, y_H)^T$.
The fuzzification aims at $A_{RH}$:
$$F_{fuzz}(\phi_R, \phi_H) = A_{RH}^{fuzz}\cdot\mathbf{x}_{RH} = \sum_{l_1,l_2}^{m} w_{l_1}(\phi_R)\, w_{l_2}(\phi_H)\, A_{RH}(\phi_R^{l_1}, \phi_H^{l_2})\cdot\mathbf{x}_{RH}$$
Applied to the sigma points $(\phi_R^j, \phi_H^j)$, we obtain a TS fuzzy model described by the following rules $R^{l_1,l_2}$:
$$R^{l_1,l_2}: \quad \text{IF } \phi_R^j = \Phi_R^{l_1} \text{ AND } \phi_H^j = \Phi_H^{l_2} \text{ THEN } \mathbf{x}_c^{f,j} = A_{RH}(\phi_R^{l_1,j}, \phi_H^{l_2,j})\cdot\mathbf{x}_{RH}$$
where $\Phi_R^{l_1}$, $\Phi_H^{l_2}$ are fuzzy terms for $\phi_R^j$, $\phi_H^j$, and the matrices $A_{RH}$ are functions of the predefined variables $\phi_R^{l_1,j}$ and $\phi_H^{l_2,j}$. This set of rules leads to the result
$$\mathbf{x}_c^{f,j} = F_{fuzz}(\phi_R^j, \phi_H^j) = \sum_{l_1,l_2}^{m} w_{l_1}(\phi_R^j)\, w_{l_2}(\phi_H^j)\, A_{RH}(\phi_R^{l_1,j}, \phi_H^{l_2,j})\cdot\mathbf{x}_{RH}$$
$w_{l_1}(\phi_R^j) \in [0,1]$ and $w_{l_2}(\phi_H^j) \in [0,1]$ are weighting functions with $\sum_{l_1} w_{l_1} = 1$ and $\sum_{l_2} w_{l_2} = 1$. The advantage of this approach is that the $l_1 \times l_2$ matrices $A_{RH}^{l_1,l_2,j} = A_{RH}(\phi_R^{l_1,j}, \phi_H^{l_2,j})$ can be computed off-line. The mean and covariance matrix are then calculated as
$$\mathbf{x}_c^f(k) = \sum_{j=0}^{2n} W^j\,\mathbf{x}_c^{f,j}(k); \qquad P^f(k) = \sum_{j=0}^{2n} W^j\,\tilde{\mathbf{x}}_c^{f,j}(k)\left(\tilde{\mathbf{x}}_c^{f,j}(k)\right)^T; \qquad \tilde{\mathbf{x}}_c^{f,j} = \mathbf{x}_c^{f,j} - \mathbf{x}_c^f$$
From the covariance $P^f$, the variances $\sigma_{cxx}$, $\sigma_{cyy}$, $\sigma_{cxy}$ can be obtained:
$$\sigma_{cxx} = E\left((x_c^f - \bar{x}_c^f)^2\right); \qquad \sigma_{cyy} = E\left((y_c^f - \bar{y}_c^f)^2\right); \qquad \sigma_{cxy} = \sigma_{cyx} = E\left((x_c^f - \bar{x}_c^f)(y_c^f - \bar{y}_c^f)\right)$$
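Putting the pieces together, a sketch of the fuzzy sigma-point propagation of this subsection, reusing `sigma_points`, `unscented_moments`, `intersection_fuzzy`, `x_RH` and `centers` from the earlier sketches (the input statistics are those of Section 6):

```python
import numpy as np

# Orientation means and covariance (std 0.02 rad on each angle).
x_mean = np.array([1.78, 3.69])
P_in = np.diag([0.02**2, 0.02**2])

X, W = sigma_points(x_mean, P_in)
F_fuzz = lambda phi: intersection_fuzzy(x_RH, phi[0], phi[1], centers)
xc_mean, P_f = unscented_moments(F_fuzz, X, W)

# Variances at the intersection: sigma_cxx, sigma_cyy, sigma_cxy.
sigma_cxx, sigma_cyy, sigma_cxy = P_f[0, 0], P_f[1, 1], P_f[0, 1]
```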

5.6. Inverse Solution

The inverse solution for the sigma-point approach is much easier to obtain than that for the statistical linearization method. Starting from Equation (34), we build the inverse function
$$\mathbf{x} = F^{-1}(\mathbf{x}_c)$$
on the condition that $F^{-1}$ exists. Then, the covariance matrix $P$ is defined in correspondence to the required variances $\sigma_{cxx}$, $\sigma_{cyy}$ and $\sigma_{cxy}$. The subsequent steps correspond to Equations (34)–(42). The position vector $\mathbf{x}_{RH}$ is assumed to be known. The inversion of $F$ requires a linearization around $\mathbf{x}_{RH}$ and a starting point to obtain stable convergence to the inverse $F^{-1}$. The result is the mean $\mathbf{x}$ and the covariance $Q$ at the input. A reliable inversion is only possible for the 2-input–2-output case.

5.7. Six-Inputs–Two-Outputs

This case works exactly like the 2-input–2-output case along Equations (34)–(42), since the computation of the sigma points (38)–(40) and the propagation through the nonlinearity $F$ automatically account for the input and output dimensions.

6. Simulation Results

The following simulations show the uncertainties of the predicted intersections based on statistical linearization and the sigma-point transformation. For comparability, identical parameters are employed for both methods (see Figure 2). The position/orientation of the robot and human are given by the following (a script reproducing this setup with the sketches from the previous sections follows the list):
  • $\mathbf{x}_R = (x_R, y_R)^T = (2, 0)^T$ m;
  • $\mathbf{x}_H = (x_H, y_H)^T = (4, 10)^T$ m;
  • $\phi_R = 1.78$ rad $= 102^\circ$;
  • $\phi_H = 3.69$ rad $= 212^\circ$;
  • $\phi_R$ and $\phi_H$ are corrupted by Gaussian noise with standard deviations (std) $\sigma_{\phi_R} = \sigma_{x_1} = 0.02$ rad ($= 1.1^\circ$) and $\sigma_{\phi_H} = \sigma_{x_2} = 0.02$ rad ($= 1.1^\circ$).
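Using the sketches from the previous sections, this scenario can be run through both methods; the printed values can then be set against Tables 1 and 2 (again a hedged sketch, not the authors' Matlab code):

```python
import numpy as np

x_RH = np.array([2.0, 0.0, 4.0, 10.0])    # (x_R, y_R, x_H, y_H) in m
phi_R, phi_H, std = 1.78, 3.69, 0.02      # orientations in rad, noise std

# Method 1: statistical linearization (Sections 3.3 and 4.2).
J_t = jacobian_phi(x_RH, phi_R, phi_H)
s1, s2, rho = output_distribution(J_t, std, std)
print("stat. lin.:  sigma_xc=%.3f  sigma_yc=%.3f  rho=%.2f" % (s1, s2, rho))

# Method 2: sigma-point transformation (Section 5).
X, W = sigma_points(np.array([phi_R, phi_H]), np.diag([std**2, std**2]))
m, P_f = unscented_moments(lambda p: intersection(x_RH, p[0], p[1]), X, W)
print("sigma-point: sigma_xc=%.3f  sigma_yc=%.3f" %
      (np.sqrt(P_f[0, 0]), np.sqrt(P_f[1, 1])))
```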

6.1. Statistical Linearization

Table 1 shows a comparison of the non-fuzzy method with the fuzzy approach using sectors of $60^\circ$, $30^\circ$, $15^\circ$ and $7.5^\circ$ of the unit circle for the orientations of the robot and human. The notation in the tables is as follows: $\sigma_{x_c}$—computed std, $\sigma_{x_m}$—measured std, etc. As expected, higher resolutions lead to a better match between the fuzzy and the analytical approach. Furthermore, the match between the measured and calculated values depends on the form of the membership functions (MFs). For example, low input standard deviations (0.02 rad) show a better match for Gaussian membership functions, whereas higher input standard deviations (0.05 rad $= 2.9^\circ$) require Gaussian bell-shaped membership functions, which comes from different smoothing effects (see the $7.5^\circ$ columns in Table 1).
A comparison of the control surfaces and the corresponding measurements $x_{cm}$, $y_{cm}$ (black and red dots) is depicted in Figure 8, Figure 9 and Figure 10. Figure 8 shows the control surfaces of $x_c$ and $y_c$ for the non-fuzzy case (4). The control surfaces of the fuzzy approximations (7) for the $30^\circ$ and $7.5^\circ$ sectors are shown in Figure 9 and Figure 10. The $30^\circ$ resolution (Figure 9) shows a very high deviation from the non-fuzzy approach (Figure 8), which decreases as the resolution is refined down to $7.5^\circ$ (Figure 10). This explains the large differences between the measured and computed standard deviations and correlation coefficients, in particular for sector sizes of $30^\circ$ and above.

6.2. Sigma-Point Method

Two-inputs–two-outputs:
The simulation of the sigma-point method is based on a Matlab implementation of an unscented Kalman filter by [25]. The first example deals with the 2-input–2-output case, in which only the orientations are taken into account; disturbances of the positions of the robot and human are not part of the sigma-point calculation. A comparison between the computed and measured covariances shows a very good match. The same holds for the standard deviations $\sigma_{x_c}$, $\sigma_{y_c}$. A comparison with the statistical linearization shows a good match as well (see Table 2, rows 1 and 2).
A look at the sigma points yields the following results: Figure 11 shows the two-dimensional distribution of the orientation angles $(\phi_R, \phi_H)$ and the corresponding sigma points $s_1, \ldots, s_5$, where $s_1$ denotes the mean value. Figure 12 shows the two-dimensional distribution of the intersection coordinates $(x_c, y_c)$ with the sigma points $S_1, \ldots, S_5$. $S_1$ denotes the mean value, and $S_1, \ldots, S_5$ are distributed in such a way that the $s_i$ are transformed into the $S_i$, $i = 1 \ldots 5$. Both figures show a suitable selection of $s_1, \ldots, s_5$ and $S_1, \ldots, S_5$, which results in a good match of the computed and measured standard deviations.
Six-inputs–two-outputs:
The 6-input–2-output example shows that the additional consideration of the four input position coordinates with $\sigma_{x_R} = 0.02$ leads to similar results, both between the computed and measured covariances and between the sigma-point method and statistical linearization (see the entries $P(7,7) = \sigma_{x_c}^2$, $P(8,8) = \sigma_{y_c}^2$ of the computed covariance and $covar(7,7) = \sigma_{x_m}^2$, $covar(8,8) = \sigma_{y_m}^2$ of the measured covariance, where $\sigma_{x_c}^2$ is the computed and $\sigma_{x_m}^2$ the measured variance). Table 2 shows the covariance submatrix of the output positions only.
Computed covariance:
$$P = 10^{-1}\times\begin{pmatrix}
0.004 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.030 & 0.018\\
0.000 & 0.004 & 0.000 & 0.000 & 0.000 & 0.000 & 0.003 & 0.017\\
0.000 & 0.000 & 0.004 & 0.000 & 0.000 & 0.000 & 0.004 & 0.002\\
0.000 & 0.000 & 0.000 & 0.004 & 0.000 & 0.000 & 0.001 & 0.000\\
0.000 & 0.000 & 0.000 & 0.000 & 0.004 & 0.000 & 0.000 & 0.002\\
0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.004 & 0.001 & 0.004\\
0.030 & 0.003 & 0.004 & 0.001 & 0.000 & 0.001 & \mathbf{0.235} & \mathbf{0.127}\\
0.018 & 0.017 & 0.002 & 0.000 & 0.002 & 0.004 & \mathbf{0.127} & \mathbf{0.165}
\end{pmatrix}; \qquad \sigma_{x_c} = 0.153,\ \sigma_{y_c} = 0.122$$
Measured covariance:
$$covar = 10^{-1}\times\begin{pmatrix}
0.004 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.028 & 0.020\\
0.000 & 0.004 & 0.000 & 0.001 & 0.000 & 0.000 & 0.000 & 0.020\\
0.000 & 0.000 & 0.004 & 0.000 & 0.001 & 0.001 & 0.003 & 0.001\\
0.000 & 0.001 & 0.000 & 0.004 & 0.000 & 0.000 & 0.000 & 0.003\\
0.000 & 0.000 & 0.001 & 0.000 & 0.005 & 0.000 & 0.001 & 0.006\\
0.000 & 0.000 & 0.001 & 0.000 & 0.000 & 0.005 & 0.000 & 0.005\\
0.028 & 0.000 & 0.003 & 0.000 & 0.001 & 0.000 & \mathbf{0.213} & \mathbf{0.131}\\
0.020 & 0.020 & 0.001 & 0.003 & 0.006 & 0.005 & \mathbf{0.131} & \mathbf{0.182}
\end{pmatrix}; \qquad \sigma_{x_c} = 0.145,\ \sigma_{y_c} = 0.134$$
Two-inputs–two-outputs, direct and inverse solution
The next example shows the computation of the direct and the inverse case. In the direct case, we again obtain similar values for the computed and measured covariances and, with this, for the standard deviations. The inverse solution leads to values similar to the original inputs (orientations $x_1 = \phi_R$, $x_2 = \phi_H$) (see Table 2). The simulations of the fuzzy versions show the same behavior and are therefore omitted here.
Two-inputs–two-outputs, moving robot–human
The next example deals with the robot and human in motion. Figure 13 shows the positions and orientations of the robot and human at selected time steps $t_1 \ldots t_5$ and the development of the corresponding intersections $\mathbf{x}_c$.
Figure 14 shows the corresponding time plot. The time steps $t_1 \ldots t_5$ are taken at 0.58 s, …, 4.58 s with a spacing of 1 s, which corresponds to 25 time steps of 0.04 s each. The robot and human start at
  • $\mathbf{x}_R = (x_R, y_R)^T = (2, 0)^T$ m;
  • $\mathbf{x}_H = (x_H, y_H)^T = (4, 10)^T$ m;
with the velocities
  • $\dot{x}_R(k) = -0.21$ m/s;
  • $\dot{y}_R(1) = +0.24$ m/s;
  • $\dot{x}_H(k) = -0.26$ m/s;
  • $\dot{y}_H(1) = -0.24$ m/s,
where $k$ is the time step.
Figure 14. Time plot, robot and human.
The x components of the velocities x ˙ R ( k ) and x ˙ H ( k ) stay constant during the whole simulation.
The y components change their velocities with constant factors
$$\dot{y}_R(k+1) = K_R\cdot\dot{y}_R(k); \qquad \dot{y}_H(k+1) = K_H\cdot\dot{y}_H(k)$$
where K R = 1.2 and K H = 0.9 . The orientation angles start with the following:
  • ϕ R = 1.78 rad;
  • ϕ H = 3.69 rad.
They change their values every second according to the direction of motion.
From both plots, one observes the expected decrease in the output standard deviations as the agents' distances to the intersection decrease, and a good match between the computed and measured values $\mathbf{x}_c$ (see Table 3). With information about the robot's distance from the expected intersection and the standard deviation at the intersection, it becomes possible to plan either an avoidance strategy or mutual cooperation between the robot and human.

7. Summary and Conclusions

The content of this work is the prediction of encounter situations of mobile robots and human agents in shared areas by analyzing planned/intended trajectories in the presence of uncertainties and system and observation noise. In this context, the problem of trajectory intersections with respect to system uncertainties and Gaussian noise of the positions and orientations of the agents involved is discussed. The problem is addressed by two methods: the statistical linearization of distributions and the sigma-point transformation of the distribution parameters. The positions and orientations of the robot and human are corrupted with Gaussian noise represented by the parameters mean and standard deviation. The goal is to calculate the mean and standard deviation at the intersection via the nonlinear relation between the positions/orientations of the robot and human, on the one hand, and the position of the intersection of their intended trajectories, on the other hand.
This analysis is realized by the statistical linearization of the nonlinear relation between the statistics of the robot and human (input) and the statistics of the intersection (output). The output results are the mean and standard deviation of the intersection as functions of the input parameters, the means and standard deviations of the positions and orientations of the robot and human. This is first carried out for two-input–two-output relations (two orientations of the robot–human pair and two intersection coordinates) and then for six inputs and two outputs (two orientations and four position coordinates of the robot–human pair and two intersection coordinates). These cases were extended to their fuzzy versions by different Takagi–Sugeno (TS) fuzzy approximations and compared with the non-fuzzy case. From a certain resolution on, the approximation works as accurately as the original non-fuzzy version. An inverse solution is derived for the two-input–two-output case, but not for the six-input–two-output case, because of the underdetermined nature of the differential input–output relation.
The sigma-point transformation aims at transforming/propagating distribution parameters—the sigma points—directly through nonlinearities. The transformed sigma points are then converted into the distribution parameters mean and covariance matrix. The sigma-point transformation is closely connected to the unscented Kalman filter, which is used in the example of the robot and human in motion. The specialty of the example is a computed virtual system output (“observation”)—the intersection of two intended trajectories—where the corresponding output uncertainty is the sum of the transformed position/orientation noise and the computational uncertainty from the fuzzy approximation. In total, the comparison between the computed and measured covariances shows a very good match, and the comparison with the statistical linearization shows good coincidence as well. Both the sigma-point transformation and the differential statistical linearization scale linearly for more than two variables, and their computational complexities are of the same order [13]. However, for a nonlinear model, the differential linearization (EKF) serves only as a first-order or second-order approximating estimator. If the system is highly nonlinear, the EKF may diverge, whereas the sigma-point approach typically produces better results. In summary, predicting the accuracy of human–robot trajectories using the methods presented in this work increases the performance of human–robot collaboration and human safety. In future work, this method can be used for robot–human scenarios in factory workshops and for robots working in complicated environments, such as rescue operations in cooperation with human operators.

Author Contributions

R.P. developed the methods and implemented the simulation programs. A.J.L. conceived the project and gave advice and support. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101017274 (DARKO).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; pp. 500–505.
  2. Firl, J. Probabilistic Maneuver Recognition in Traffic Scenarios. Ph.D. Thesis, KIT Karlsruhe Institute of Technology, Karlsruhe, Germany, 2014.
  3. Luo, W.; Xing, J.; Milan, A.; Zhang, X.; Liu, W.; Zhao, X.; Kim, T. Multiple object tracking: A literature review. Artif. Intell. 2021, 293, 103448.
  4. Chen, J.; Wang, C.; Chou, C. Multiple target tracking in occlusion area with interacting object models in urban environments. Robot. Auton. Syst. 2018, 103, 68–82.
  5. Kassner, M.; Patera, W.; Bulling, A. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 1151–1160.
  6. Bruce, J.; Wawer, J.; Vaughan, R. Human-robot rendezvous by co-operative trajectory signals. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction Workshop on Human-Robot Teaming, Portland, OR, USA, 2–5 March 2015; pp. 1–2.
  7. Kunwar, F.; Benhabib, B. Advanced predictive guidance navigation for mobile robots: A novel strategy for rendezvous in dynamic settings. Int. J. Smart Sens. Intell. Syst. 2008, 1, 858–890.
  8. Palm, R.; Lilienthal, A. Fuzzy logic and control in human-robot systems: Geometrical and kinematic considerations. In Proceedings of the 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 827–834.
  9. Palm, R.; Driankov, D. Fuzzy inputs. In Fuzzy Sets and Systems—Special Issue on Modern Fuzzy Control; 1994; pp. 315–335.
  10. Foulloy, L.; Galichet, S. Fuzzy control with fuzzy inputs. IEEE Trans. Fuzzy Syst. 2003, 11, 437–449.
  11. Banelli, P. Non-linear transformations of Gaussians and Gaussian-mixtures with implications on estimation and information theory. IEEE Trans. Inf. Theory 2013.
  12. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
  13. Van der Merwe, R.; Wan, E.; Julier, S.J. Sigma-point Kalman filters for nonlinear estimation and sensor-fusion: Applications to integrated navigation. In Proceedings of the AIAA Guidance, Navigation and Control Conference, Portland, OR, USA, 28 June–1 July 2004.
  14. Terejanu, G.A. Unscented Kalman Filter Tutorial; Department of Computer Science and Engineering, University at Buffalo: Buffalo, NY, USA, 2011.
  15. Odry, Á.; Kecskes, I.; Sarcevic, P.; Vizvari, Z.; Toth, A.; Odry, P. A novel fuzzy-adaptive extended Kalman filter for real-time attitude estimation of mobile robots. Sensors 2020, 20, 803.
  16. Song, Y.; Song, Y.; Li, Q. Robust iterated sigma point FastSLAM algorithm for mobile robot simultaneous localization and mapping. Chin. J. Mech. Eng. 2011, 24, 693.
  17. Bittler, J.; Bhounsule, P.A. Hybrid unscented Kalman filter: Application to the simplest walker. In Proceedings of the 3rd Modeling, Estimation and Control Conference (MECC), Lake Tahoe, NV, USA, 2–5 October 2023.
  18. Alireza, S.D.; Shahri, M. Vision-based simultaneous localization and mapping using sigma point Kalman filter. In Proceedings of the 2011 IEEE International Symposium on Robotic and Sensors Environments (ROSE), Montreal, QC, Canada, 17–18 September 2011.
  19. Xue, Z.; Schwartz, H. A comparison of several nonlinear filters for mobile robot pose estimation. In Proceedings of the 2013 IEEE International Conference on Mechatronics and Automation (ICMA), Kagawa, Japan, 4–7 August 2013.
  20. Lyu, Y.; Pan, Q.; Lv, J. Unscented transformation-based multi-robot collaborative self-localization and distributed target tracking. Appl. Sci. 2019, 9, 903.
  21. Raitoharju, M.; Piché, R. On computational complexity reduction methods for Kalman filter extensions. IEEE Aerosp. Electron. Syst. Mag. 2019, 34, 2–19.
  22. Palm, R.; Lilienthal, A. Uncertainty and fuzzy modeling in human-robot navigation. In Proceedings of the 11th International Joint Conference on Computational Intelligence (IJCCI 2019), Vienna, Austria, 17–19 September 2019; pp. 296–305.
  23. Palm, R.; Lilienthal, A. Fuzzy geometric approach to collision estimation under Gaussian noise in human-robot interaction. In Computational Intelligence: 11th International Joint Conference, IJCCI 2019, Vienna, Austria, 17–19 September 2019; Springer: Cham, Switzerland; pp. 191–221.
  24. Schaefer, J.; Strimmer, K. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat. Appl. Genet. Mol. Biol. 2005, 4, 32.
  25. Cao, Y. Learning the Unscented Kalman Filter. 2021. Available online: https://www.mathworks.com/matlabcentral/fileexchange/18217-learning-the-unscented-kalman-filter (accessed on 14 May 2024).
Figure 1. Intersection principle.
Figure 2. Human–robot scenario: geometry.
Figure 3. Membership functions for Δϕ_R, Δϕ_H = 0…360°.
Figure 4. Fuzzy sectors.
Figure 5. Intersection with noisy orientations.
Figure 6. Differential transformation.
Figure 7. Sigma points for a 2-dim Gaussian random variable.
Figure 8. Control surface, non-fuzzy; units of ϕ_R and ϕ_H in rad.
Figure 9. Control surface, fuzzy, 30°; units of ϕ_R and ϕ_H in rad.
Figure 10. Control surface, fuzzy, 7.5°; units of ϕ_R and ϕ_H in rad.
Figure 11. Sigma points, input; units of ϕ_R and ϕ_H in rad.
Figure 12. Sigma points, output.
Figure 13. Moving robot and human.
Table 1. Standard deviations, fuzzy and non-fuzzy results. Columns 1–4: input std 0.02 rad, Gauss bell-shaped (GB) MFs; column 5: input std 0.02 rad, Gaussian MFs; column 6: input std 0.05 rad, GB MFs.

| sector size | 60° | 30° | 15° | 7.5° | 7.5° (Gauss) | 7.5° (0.05 GB) |
|---|---|---|---|---|---|---|
| non-fuzz σxc | 0.143 | 0.140 | 0.138 | 0.125 | 0.144 | 0.366 |
| fuzz σxc | 0.220 | 0.184 | 0.140 | 0.126 | 0.144 | 0.367 |
| non-fuzz σxm | 0.160 | 0.144 | 0.138 | 0.126 | 0.142 | 0.368 |
| fuzz σxm | 0.555 | 0.224 | 0.061 | 0.225 | 0.164 | 0.381 |
| non-fuzz σyc | 0.128 | 0.132 | 0.123 | 0.114 | 0.124 | 0.303 |
| fuzz σyc | 0.092 | 0.087 | 0.120 | 0.112 | 0.122 | 0.299 |
| non-fuzz σym | 0.134 | 0.120 | 0.123 | 0.113 | 0.129 | 0.310 |
| fuzz σym | 0.599 | 0.171 | 0.034 | 0.154 | 0.139 | 0.325 |
| non-fuzz ρxyc | 0.576 | 0.541 | 0.588 | 0.561 | 0.623 | 0.669 |
| fuzz ρxyc | −0.263 | 0.272 | 0.478 | 0.506 | 0.592 | 0.592 |
| non-fuzz ρxym | 0.572 | 0.459 | 0.586 | 0.549 | 0.660 | 0.667 |
| fuzz ρxym | 0.380 | 0.575 | 0.990 | 0.711 | 0.635 | 0.592 |
Table 2. Covariances and standard deviations, computed and measured (2×2 matrices written as [row 1; row 2]).

| Outputs | Covariance, computed | Covariance, measured | σxc, comp/meas | σyc, comp/meas |
|---|---|---|---|---|
| 2 inputs | P = [0.0213 0.0114; 0.0114 0.0159] | covar = [0.0264 0.0146; 0.0146 0.0166] | 0.145/0.144 | 0.126/0.134 |
| 2 inputs, stat. lin. | – | – | 0.144/0.142 | 0.124/0.129 |
| 6 inputs | P = [0.0235 0.0127; 0.0127 0.0165] | covar = [0.0213 0.0131; 0.0131 0.0182] | 0.135/0.145 | 0.122/0.134 |
| direct solution | P = [0.0234 0.0133; 0.0133 0.0151] | covar = [0.0264 0.0146; 0.0146 0.0166] | 0.152/0.162 | 0.128/0.128 |
| inverse solution | P = 10⁻³ × [0.4666 0.0522; 0.0522 0.4744] | covar = 10⁻³ × [0.4841 0.0190; 0.0190 0.396] | 0.0215/0.0220 | 0.0217/0.0190 |
Table 3. Covariances and standard deviations, computed and measured, moving robot–human (2×2 matrices written as [row 1; row 2]).

| Outputs | Covariance, computed | Covariance, measured | σxc, comp/meas | σyc, comp/meas |
|---|---|---|---|---|
| t1 | P = [0.0220 0.0017; 0.0017 0.0163] | covar = [0.0246 0.0002; 0.0002 0.0202] | 0.148/0.156 | 0.127/0.142 |
| t2 | P = [0.0198 0.0023; 0.0023 0.0138] | covar = [0.0222 0.0018; 0.0018 0.0153] | 0.140/0.148 | 0.117/0.123 |
| t3 | P = [0.0168 0.0030; 0.0030 0.0107] | covar = [0.0140 0.0040; 0.0040 0.0088] | 0.129/0.118 | 0.103/0.093 |
| t4 | P = [0.0151 0.0029; 0.0029 0.0083] | covar = [0.0127 0.0014; 0.0014 0.0073] | 0.122/0.112 | 0.091/0.085 |
| t5 | P = [0.0125 0.0023; 0.0023 0.0061] | covar = [0.0102 0.0030; 0.0030 0.0056] | 0.112/0.101 | 0.078/0.074 |

