Article

A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing

Department of Computer Science and Media, Beuth University of Applied Sciences, Luxemburger Str. 10, D-13353 Berlin, Germany
Sensors 2018, 18(4), 1236; https://doi.org/10.3390/s18041236
Submission received: 12 March 2018 / Revised: 12 April 2018 / Accepted: 13 April 2018 / Published: 17 April 2018
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract

Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix, which is required for stochastic modeling, is considered. The main contribution is a new error model of straight lines in closed form for quickly and reliably calculating the covariance matrix dependent on just a few comprehensible and easily-obtainable parameters. The model can be applied whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches is shown.

1. Introduction

Contour points acquired by active sensors using sonar, radar or LiDAR [1], or extracted from image data [2,3], are a key source of information for mobile robots in order to detect obstacles or to localize themselves in known or unknown environments [4,5]. For this purpose, geometric features are often extracted from raw data since, in contrast to detailed contours, features are uniquely described by just a limited set of parameters, and their extraction works as additional filtering that improves reliability when dealing with sensor noise and masking [6]. However, the performance of feature-based localization or SLAM strongly depends on the exact determination of a feature vector $y$ from measured raw data. Moreover, especially for data association, as well as for sensor fusing, not only the feature parameters are needed, but also a reliable estimation of their uncertainty is required. Generally, in the case of non-linear multi-sensor fusing, likelihood-based models can be applied (see [7]), which employ Bayesian filtering [8] or the calculation of entropies [9] to quantify uncertain information. Alternatively, especially for localization and map building, sensor fusing is often achieved with an extended Kalman filter (EKF). For this purpose, the covariance matrix $R$ is required, which encapsulates the variances of the single elements in $y$ and their dependencies.
This becomes obvious when looking at the standard algorithm for updating an estimated system state $\hat{x}$ by means of an EKF; compare [10,11,12]: new measurements $y$ are plausible if their deviations from the expected measurements $\hat{y} = h(\hat{x})$, given by the in general non-linear measurement model $h(\hat{x})$, are within a limited range. For exact calculation of this limit, usually the Mahalanobis metric is applied (see [11,13]), which considers the covariance matrix $S$ of the innovation $\nu = y - \hat{y}$ with $S = R + H \cdot \hat{P} \cdot H^T$, dependent on $R$, on the covariance matrix $\hat{P}$ of the system state and on the Jacobian $H = \nabla h(\hat{x})$. A new measurement $y$ will be considered to relate to an already known feature vector $\hat{y}$ if its distance is below a given threshold $r_{th}$, i.e., if $\nu^T S^{-1} \nu < r_{th}^2$. Only in this case, the system state vector $\hat{x}$ is updated by means of $\Delta\hat{x} = K \cdot \nu$ using the Kalman gain $K = \hat{P} \cdot H^T \cdot S^{-1}$, again depending on the covariance matrix $R$ of the measurements; otherwise, $\hat{x}$ and $\hat{P}$ are expanded by the new feature.
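The gating step just described can be condensed into a short sketch; the function and variable names below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of Mahalanobis gating for data association (hypothetical names):
# a measurement y relates to a predicted feature y_hat only if the squared
# Mahalanobis distance nu^T S^-1 nu stays below r_th^2.
def gate(y, y_hat, H, P_hat, R, r_th):
    nu = y - y_hat                      # innovation
    S = R + H @ P_hat @ H.T             # innovation covariance
    d2 = nu @ np.linalg.solve(S, nu)    # squared Mahalanobis distance
    return d2 < r_th**2, nu, S
```

If the gate is passed, the state update would proceed with the Kalman gain $K = \hat{P} \cdot H^T \cdot S^{-1}$; otherwise, the new feature is appended to the state.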
Thus, for reliable map building, errors in the step of data association should be strictly avoided by means of exact knowledge of the covariance matrix at each measurement, since otherwise, multiple versions of certain features would be included in the map, while other features erroneously are ignored.
Particularly in artificial environments, straight lines in a plane are frequently used as features, since these are defined by just two parameters and can be clearly and uniquely determined. In contrast to point features, lines in images are almost independent of illumination and perspective, and a number of measurements can be taken along their length to localize them accurately and to distinguish them from artifacts [14]. Moreover, even a single line enables a robot to determine its orientation and perpendicular distance, which clearly improves localization accuracy. Thus, many tracking systems have been proposed based on line features, either using range-bearing scans [15,16] or applying visual servoing (see [17,18]), and this approach has also recently been implemented successfully [19,20,21]. However, due to missing knowledge of the covariance matrix, suboptimal solutions like the Euclidean distance in Hough space [15] or other heuristics are often used for data association [22].
Obviously, fitting data to a straight line is a well-known technique, addressed in a large number of papers [23,24,25] and textbooks [26,27,28]; a recent overview of algorithms in this field is given in [29]. As shown in [30,31], if linear regression is applied to data with uncertainties in the x- and y-direction, both coordinates must always be considered as random variables. In [32], Arras and Siegwart suggest an error model for range-bearing sensors including a covariance matrix affected exclusively by noise in the radial direction. Pfister et al. introduce weights into the regression algorithm in order to determine the planar displacement of a robot from range-bearing scans [33]. In [34], a maximum likelihood approach is used to formulate a general strategy for estimating the best fitted line from a set of non-uniformly-weighted range measurements; furthermore, merging of lines and approximating the covariance matrix by an iterative approach is considered. In [30], Krystek and Anton point out that the weighting factors of the single measurements depend on the orientation of a line, which therefore can only be determined numerically. This concept was later extended to the general case with covariances existing between the coordinates of each data point [35].
Since linear regression is sensitive with respect to outliers, split-and-merge algorithms must be applied in advance, if a contour consists of several parts; see [36,37]. In cases of strong interference, straight lines can still be identified by Hough-transformation (compare [38,39,40]), or alternatively, RANSAC algorithms can be applied; see [41,42]. Although these algorithms work reliably, exact determination of line parameters and estimating their uncertainties still requires linear regression [43].
In spite of a variety of contributions in this field, a straightforward, yet accurate algorithm for determining the covariance matrix of lines reliably, quickly and independently of the a priori mostly unknown measurement noise is missing. In Section 4, such a model in closed form is proposed, depending on just a few clearly-interpretable and easily-obtainable parameters. Besides its low complexity and great clarity, the main advantage of the covariance matrix in closed form results from the fact that it can be calculated from the same data points as used for line fitting, without the need to provide additional reliability information of the measurements, which in many cases is not available.
Beforehand, in the next two sections, existing methods for linear regression and for the calculation of the covariance matrix are reviewed, with certain extensions focusing on the usage of range-bearing sensors, which cause strong covariances between the x- and y-coordinates. Based on these theoretical foundations, Section 5 presents detailed simulation results in order to compare the precision and robustness of the presented algorithms.

2. Determination of the Accurate Line Parameters

In 2D-space, each straight line is uniquely described by its perpendicular distance $d$ from the origin and by the angle $\phi$ between the positive x-axis and this normal line; see Figure 1. In order to determine these two parameters, the mean squared error $MSE$ considering the perpendicular distances of $N$ measurement points from the fitted line needs to be minimized. For this purpose, each perpendicular distance $\rho_i$ of point $i$ is calculated either in polar coordinates or, with $x_i = r_i \cos\theta_i$ and $y_i = r_i \sin\theta_i$, alternatively in Cartesian coordinates as:
$$\rho_i = d_i - d = r_i \cos(\theta_i - \phi) - d = x_i \cos\phi + y_i \sin\phi - d \qquad (1)$$
Then, $MSE$ is defined as follows, dependent on $\phi$ and $d$:
$$MSE(\phi, d) = \sum_{i=1}^{N} (s_i \rho_i)^2 \qquad (2)$$
In (2), optional scaling values $s_i$ are included in order to consider the individual reliability of each measurement point. By calculating the derivatives of (2) with respect to $\phi$ and $d$ and setting both to zero, the optimum values of these parameters can be derived analytically, assuming all $s_i$ to be constant, i.e., independent of $\phi$ and $d$. The solution has been published elsewhere (compare [32]), and in the Appendix of this paper, a straightforward derivation is sketched, yielding for $\phi$ and $d$:
$$\phi = \frac{1}{2} \cdot \mathrm{atan2}\left(-2\sigma_{xy},\; \sigma_y^2 - \sigma_x^2\right) \qquad (3)$$
$$d = \bar{x} \cos\phi + \bar{y} \sin\phi \qquad (4)$$
The function $\mathrm{atan2}()$ denotes the four-quadrant arc tangent, which always calculates $\phi$ in the correct range. If $d$ becomes negative, its modulus must be taken, and the corresponding $\phi$ has to be altered by plus or minus $\pi$. In these equations, $\bar{x}$ and $\bar{y}$ denote the mean values of all $N$ measurements $x_i$ and $y_i$, while $\sigma_x^2$, $\sigma_y^2$ and $\sigma_{xy}$ denote the variances and the covariance:
$$\sigma_x^2 = \frac{1}{N} \sum_{i=1}^{N} w_i \left(x_i - \bar{x}\right)^2 \qquad (5)$$
$$\sigma_y^2 = \frac{1}{N} \sum_{i=1}^{N} w_i \left(y_i - \bar{y}\right)^2 \qquad (6)$$
$$\sigma_{xy} = \frac{1}{N} \sum_{i=1}^{N} w_i \left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) \qquad (7)$$
$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} w_i x_i \qquad (8)$$
$$\bar{y} = \frac{1}{N} \sum_{i=1}^{N} w_i y_i \qquad (9)$$
In (5)–(9), normalized weighting factors $w_i$ are used with $\frac{1}{N} \sum_{i=1}^{N} w_i = 1$ and $w_i \geq 0$, calculated dependent on the chosen scaling values $s_i$:
$$w_i = \frac{s_i^2}{\frac{1}{N} \sum_{j=1}^{N} s_j^2} \qquad (10)$$
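For illustration, Equations (3), (4) and (5)–(10) can be implemented directly; the following sketch (our own naming, not code from the paper) computes $\phi$ and $d$ for given scaling values:

```python
import numpy as np

# Closed-form line fit per Eqs. (3)-(10); illustrative implementation.
# s holds the scaling values s_i; the weights are normalized to mean 1.
def fit_line(x, y, s=None):
    N = len(x)
    s2 = np.ones(N) if s is None else np.asarray(s)**2
    w = s2 / s2.mean()                       # Eq. (10)
    xm, ym = np.mean(w * x), np.mean(w * y)  # Eqs. (8), (9)
    sx2 = np.mean(w * (x - xm)**2)           # Eq. (5)
    sy2 = np.mean(w * (y - ym)**2)           # Eq. (6)
    sxy = np.mean(w * (x - xm) * (y - ym))   # Eq. (7)
    phi = 0.5 * np.arctan2(-2.0 * sxy, sy2 - sx2)   # Eq. (3)
    d = xm * np.cos(phi) + ym * np.sin(phi)         # Eq. (4)
    if d < 0:                                # take modulus, shift phi by pi
        d = -d
        phi = phi - np.pi if phi > 0 else phi + np.pi
    return phi, d
```

For three points on the line $x + y = 1$, the sketch returns $\phi = \pi/4$ and $d = 1/\sqrt{2}$, as expected.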
As pointed out in [35], for accurate line matching, the scaling values $s_i$ must not be assumed to be constant, since in general, they depend on $\phi$. This can be understood from Figure 2, which shows for one measurement point $i$ the error ellipse spanned by the standard deviations $\sigma_{x,i}$ and $\sigma_{y,i}$, while the rotation of the ellipse is caused by the covariance $\sigma_{xy,i}$.
Apparently, as a measure of confidence, only the deviation $\sigma_{\rho,i}$ perpendicular to the line is relevant, while the variance of any data point parallel to the fitted line does not influence its reliability. Thus, the angle $\phi$ given in (3) will only be exact if the error ellipse equals a circle, which means that all measurements exhibit the same standard deviations in the x- as in the y-direction, and no covariance exists. Generally, in order to determine optimum line parameters with arbitrary variances and covariance of each measurement $i$, in Equation (2) the inverse of $\sigma_{\rho,i}$ dependent on $\phi$ has to be used as scaling factor $s_i$, yielding:
$$MSE(\phi) = \sum_{i=1}^{N} \frac{\rho_i^2(\phi)}{\sigma_{\rho,i}^2(\phi)} \qquad (11)$$
In this formula, which can only be solved numerically, the variance $\sigma_{\rho,i}^2$ needs to be calculated dependent on the covariance matrix of each measurement point $i$. In the case of line fitting from range-bearing scans, the covariance matrix $\underline{R}_{r\theta,i}$ can be modeled as a diagonal matrix, since both parameters $r_i$ and $\theta_i$ are measured independently, and thus, their covariance $\sigma_{r\theta,i}$ equals zero:
$$\underline{R}_{r\theta,i} = \begin{pmatrix} \sigma_{r,i}^2 & 0 \\ 0 & \sigma_{\theta,i}^2 \end{pmatrix} \qquad (12)$$
Typically, this matrix may also be considered as constant, thus independent of the index $i$, assuming that all measured radii and angles are affected by the same noise, i.e., $\underline{R}_{r\theta,i} \equiv \underline{R}_{r\theta}$.
With known variances $\sigma_{r,i}^2$ and $\sigma_{\theta,i}^2$ and for a certain $\phi$, now $\sigma_{\rho,i}^2$ is determined by evaluating the relation between $\rho_i$ and the distances $d_i$ of each data point with $1 \leq i \leq N$. According to (1) and with the distance $d$ written as the mean of all $d_i$, it follows:
$$\rho_i = d_i - \frac{1}{N} \sum_{j=1}^{N} d_j = \frac{N-1}{N} d_i - \frac{1}{N} \sum_{j=1,\, j \neq i}^{N} d_j \qquad (13)$$
Since noise-induced variations of all distances $d_i$ are uncorrelated with each other, the variance $\sigma_{\rho,i}^2$ is now calculated by summing over all variances $\sigma_{d,i}^2$:
$$\sigma_{\rho,i}^2 = \left(\frac{N-1}{N}\right)^2 \sigma_{d,i}^2 + \frac{1}{N^2} \sum_{j=1,\, j \neq i}^{N} \sigma_{d,j}^2 \qquad (14)$$
In order to derive $\sigma_{d,i}^2$, changes of $d_i$ with respect to small deviations of $r_i$ and $\theta_i$ from their expected values $\bar{r}_i$ and $\bar{\theta}_i$ are considered, with $d_i = \bar{d}_i + \Delta d_i$, $r_i = \bar{r}_i + \Delta r_i$ and $\theta_i = \bar{\theta}_i + \Delta\theta_i$:
$$\Delta d_i = \Delta d_i^r + \Delta d_i^\theta \qquad (15)$$
The terms on the right side of (15) can be determined independently of each other, since $\Delta r_i$ and $\Delta\theta_i$ are assumed to be uncorrelated. With $d_i = r_i \cdot \cos(\theta_i - \phi)$, it follows:
$$\Delta d_i^r = \Delta r_i \cdot \cos(\bar{\theta}_i - \phi) \qquad (16)$$
and:
$$\Delta d_i^\theta = \bar{r}_i \left[\cos(\bar{\theta}_i - \phi + \Delta\theta_i) - \cos(\bar{\theta}_i - \phi)\right] \approx -\bar{r}_i \left[\frac{\Delta\theta_i^2}{2} \cos(\bar{\theta}_i - \phi) + \Delta\theta_i \sin(\bar{\theta}_i - \phi)\right] \qquad (17)$$
In the last line, the addition theorem was applied for $\cos(\bar{\theta}_i - \phi + \Delta\theta_i)$, and for small variations, the approximations $\cos(\Delta\theta_i) \approx 1 - \frac{\Delta\theta_i^2}{2}$ and $\sin(\Delta\theta_i) \approx \Delta\theta_i$ are valid.
The random variables $\Delta r_i$ and $\Delta\theta_i$ are assumed to be normally distributed with variances $\sigma_{r,i}^2$ and $\sigma_{\theta,i}^2$. Thus, the random variable $\Delta\theta_i^2$ exhibits a $\chi^2$-distribution with variance $2(\sigma_{\theta,i}^2)^2$ (see [44]), and the variance of $d_i$ is calculated from (15)–(17) as the weighted sum, with $\bar{r}_i$ and $\bar{\theta}_i$ approximately replaced by $r_i$ and $\theta_i$, respectively:
$$\sigma_{d,i}^2 = \left[\sigma_{r,i}^2 + \frac{r_i^2 \left(\sigma_{\theta,i}^2\right)^2}{2}\right] \cos^2(\theta_i - \phi) + r_i^2 \sigma_{\theta,i}^2 \sin^2(\theta_i - \phi) \qquad (18)$$
When applying this algorithm, a one-dimensional minimum search of $MSE$ according to (11) needs to be executed, yielding the optimum $\phi$ of the straight line. For this purpose, $\sigma_{\rho,i}^2$ is inserted from (14) considering (18), and $\rho_i$ is determined according to (1) by calculating $d$ from (4) and (8)–(10) with $s_i = 1/\sigma_{\rho,i}$.
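One possible realization of this one-dimensional search is a plain grid scan over $\phi$; the sketch below (our naming and grid resolution, assuming identical noise parameters for all points) illustrates the procedure:

```python
import numpy as np

# Numerical minimization of Eq. (11) for range-bearing data: for each
# candidate phi, the weights follow from Eqs. (18) and (14). A coarse grid
# search is used here for clarity; any 1-D minimizer would do.
def fit_line_numeric(r, theta, sr2, st2, n_grid=2000):
    N = len(r)
    best = (np.inf, 0.0, 0.0)
    for phi in np.linspace(-np.pi / 2, np.pi / 2, n_grid):
        c = np.cos(theta - phi)
        s = np.sin(theta - phi)
        sd2 = (sr2 + r**2 * st2**2 / 2) * c**2 + r**2 * st2 * s**2   # Eq. (18)
        srho2 = ((N - 1) / N)**2 * sd2 + (sd2.sum() - sd2) / N**2    # Eq. (14)
        w = 1.0 / srho2
        w /= w.mean()                  # normalized weights, Eq. (10)
        d = np.mean(w * r * c)         # Eq. (4): x*cos(phi)+y*sin(phi) = r*cos(theta-phi)
        mse = np.sum((r * c - d)**2 / srho2)   # Eqs. (1), (11)
        if mse < best[0]:
            best = (mse, phi, d)
    return best[1], best[2]
```

The achievable accuracy of $\phi$ is limited by the grid spacing; in practice, a refinement step around the coarse minimum would be added.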
Obviously, numerical line fitting can also be accomplished if measurements are available in Cartesian coordinates $x_i$ and $y_i$. In this case, the covariance matrix $\underline{R}_{xy,i}$ of each measurement point must be known, defined as:
$$\underline{R}_{xy,i} = \begin{pmatrix} \sigma_{x,i}^2 & \sigma_{xy,i} \\ \sigma_{xy,i} & \sigma_{y,i}^2 \end{pmatrix} \qquad (19)$$
Furthermore, the partial derivatives of $d_i$ according to (1) with respect to $x_i$ and $y_i$ need to be calculated:
$$\underline{J}_{d,i} = \begin{pmatrix} \dfrac{\partial d_i}{\partial x_i} & \dfrac{\partial d_i}{\partial y_i} \end{pmatrix} = \begin{pmatrix} \cos\phi & \sin\phi \end{pmatrix} \qquad (20)$$
Then, $\sigma_{d,i}^2$ follows dependent on $\underline{R}_{xy,i}$ and $\underline{J}_{d,i}$:
$$\sigma_{d,i}^2 = \underline{J}_{d,i} \cdot \underline{R}_{xy,i} \cdot \underline{J}_{d,i}^T = \sigma_{x,i}^2 \cos^2\phi + 2\sigma_{xy,i} \sin\phi \cos\phi + \sigma_{y,i}^2 \sin^2\phi \qquad (21)$$
If raw data stem from a range-bearing scan, $\underline{R}_{xy,i}$ can be calculated from $\underline{R}_{r\theta,i}$ by exploiting the known dependencies between the polar and Cartesian plane. For this purpose, the Jacobian matrix $\underline{J}_{xy,i}$ is determined:
$$\underline{J}_{xy,i} = \begin{pmatrix} \dfrac{\partial x_i}{\partial r_i} & \dfrac{\partial x_i}{\partial \theta_i} \\[4pt] \dfrac{\partial y_i}{\partial r_i} & \dfrac{\partial y_i}{\partial \theta_i} \end{pmatrix} = \begin{pmatrix} \cos\theta_i & -r_i \sin\theta_i \\ \sin\theta_i & r_i \cos\theta_i \end{pmatrix} \qquad (22)$$
Then, the covariance matrix $\underline{R}_{xy,i}$ will depend on $\underline{R}_{r\theta,i}$, if small deviations from the mean values of the random variables $r_i$ and $\theta_i$ and a linear model are assumed:
$$\underline{R}_{xy,i} = \underline{J}_{xy,i} \cdot \underline{R}_{r\theta,i} \cdot \underline{J}_{xy,i}^T \qquad (23)$$
According to (23), generally a strong covariance $\sigma_{xy,i}$ in $\underline{R}_{xy,i}$ must be considered if measurements are taken by range-bearing sensors.
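As a sketch, the propagation (22)–(23) from polar to Cartesian coordinates reads:

```python
import numpy as np

# Eq. (23): first-order propagation of range-bearing noise into the
# Cartesian covariance of one measurement point.
def polar_cov_to_xy(r, theta, sr2, st2):
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])   # Eq. (22)
    return J @ np.diag([sr2, st2]) @ J.T                  # Eqs. (12), (23)
```

At $\theta = 45°$, the off-diagonal entry $\sigma_{xy,i} = \sin\theta\cos\theta\,(\sigma_{r}^2 - r^2\sigma_{\theta}^2)$ reaches its largest magnitude, illustrating why the covariance must not be neglected for range-bearing sensors.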
By applying (21)–(23) instead of (18) for searching the minimum of $MSE$ dependent on $\phi$, the second-order effect regarding $\Delta\theta_i$ is neglected. This yields almost the same formula as given in [35], though the derivation differs; moreover, in [35], the variance of $d$ is additionally ignored by assuming $\sigma_{\rho,i}^2 = \sigma_{d,i}^2$, which according to (14) is only asymptotically correct for large $N$.
Finally, it should be noted that the numerical determination of $\phi$ according to (11) entails clearly more computational effort than the straightforward solution according to Equation (3). Later, in Section 5, it will be analyzed under which conditions this additional effort actually is required.

3. Analytic Error Models of Straight Lines

In the literature, several methods are described to estimate the errors of $\phi$ and $d$ and their mutual dependency. For this, the covariance matrix $\underline{R}_{d\phi}$ must be known, defined as:
$$\underline{R}_{d\phi} = \begin{pmatrix} \sigma_d^2 & \sigma_{d\phi} \\ \sigma_{d\phi} & \sigma_\phi^2 \end{pmatrix} \qquad (24)$$
A general method in nonlinear parameter estimation for this purpose is the calculation of the inverse Hessian matrix at the minimum of $MSE$. Details can be found in [30,35], while in [45], it is shown that this procedure may exhibit numerical instability. In Section 5, results using this method are compared with other approaches.
Alternatively, in [32,46], an analytic error model is proposed based on a fault analysis of the line parameters. In this approach, the effect of variations of each single measurement point, defined by $\underline{R}_{xy,i}$, on the covariance matrix of the line parameters $\underline{R}_{d\phi}$ is considered, based on (3) and (4). Thereto, the Jacobian matrix $\underline{J}_{d\phi,i}$ with respect to $x_i$ and $y_i$ is determined, defined as:
$$\underline{J}_{d\phi,i} = \begin{pmatrix} \dfrac{\partial d}{\partial x_i} & \dfrac{\partial d}{\partial y_i} \\[4pt] \dfrac{\partial \phi}{\partial x_i} & \dfrac{\partial \phi}{\partial y_i} \end{pmatrix} \qquad (25)$$
With this matrix, the contribution of a single data point $i$ to the covariance matrix between $d$ and $\phi$ can be written as:
$$\underline{R}_{d\phi,i} = \underline{J}_{d\phi,i} \cdot \underline{R}_{xy,i} \cdot \underline{J}_{d\phi,i}^T \qquad (26)$$
For determining the partial derivatives of $d$ in (25), Equation (4) is differentiated after expanding it by (8) and (9), yielding:
$$\frac{\partial d}{\partial x_i} = \frac{w_i \cos\phi}{N} + \left(\bar{y}\cos\phi - \bar{x}\sin\phi\right) \frac{\partial \phi}{\partial x_i} \qquad (27)$$
$$\frac{\partial d}{\partial y_i} = \frac{w_i \sin\phi}{N} + \left(\bar{y}\cos\phi - \bar{x}\sin\phi\right) \frac{\partial \phi}{\partial y_i} \qquad (28)$$
Differentiating $\phi$ according to (3) with respect to $x_i$ gives the following expression with $u = -2\sigma_{xy}$ and $v = \sigma_y^2 - \sigma_x^2$:
$$\frac{\partial \phi}{\partial x_i} = \frac{1}{2\left(u^2 + v^2\right)} \left(\frac{\partial u}{\partial x_i} v - \frac{\partial v}{\partial x_i} u\right) \qquad (29)$$
The partial derivative of $u$ in (29) is calculated after expanding it with (7) and (8) as:
$$\frac{\partial u}{\partial x_i} = -\frac{2}{N} \cdot \frac{\partial}{\partial x_i} \left[\sum_{j=1}^{N} w_j x_j y_j - \bar{y} \sum_{j=1}^{N} w_j x_j\right] = -\frac{2 w_i}{N} \left(y_i - \bar{y}\right) \qquad (30)$$
while the partial derivative of $v$ with (5), (6) and (8) yields:
$$\frac{\partial v}{\partial x_i} = -\frac{1}{N} \cdot \frac{\partial}{\partial x_i} \left[\sum_{j=1}^{N} w_j x_j^2 - \frac{1}{N} \left(\sum_{j=1}^{N} w_j x_j\right)^2\right] = -\frac{2 w_i}{N} \left(x_i - \bar{x}\right) \qquad (31)$$
Finally, after substituting all terms with $u$ and $v$ in (29), it follows:
$$\frac{\partial \phi}{\partial x_i} = \frac{w_i \left[\left(\sigma_x^2 - \sigma_y^2\right)\left(y_i - \bar{y}\right) - 2\sigma_{xy}\left(x_i - \bar{x}\right)\right]}{N \left[\left(\sigma_x^2 - \sigma_y^2\right)^2 + 4\sigma_{xy}^2\right]} \qquad (32)$$
Correspondingly, for the partial derivative of $\phi$ with respect to $y_i$, the following result is obtained:
$$\frac{\partial \phi}{\partial y_i} = \frac{w_i \left[\left(\sigma_x^2 - \sigma_y^2\right)\left(x_i - \bar{x}\right) + 2\sigma_{xy}\left(y_i - \bar{y}\right)\right]}{N \left[\left(\sigma_x^2 - \sigma_y^2\right)^2 + 4\sigma_{xy}^2\right]} \qquad (33)$$
Now, after inserting (27), (28), (32) and (33) into (25), the covariance matrix of $d$ and $\phi$ (24) is calculated by summing over all $N$ data points, since the noise contributions of the single measurements can be assumed to be stochastically independent of each other:
$$\underline{R}_{d\phi} = \sum_{i=1}^{N} \underline{R}_{d\phi,i} = \sum_{i=1}^{N} \underline{J}_{d\phi,i} \cdot \underline{R}_{xy,i} \cdot \underline{J}_{d\phi,i}^T \qquad (34)$$
Equation (34) enables an exact calculation of the variances $\sigma_d^2$, $\sigma_\phi^2$ and of the covariance $\sigma_{d\phi}$ as long as the deviations of the measurements stay within the range of a linear approach, and as long as Equations (3) and (4) are valid. In contrast to the method proposed in [35], no second derivatives and no inversion of the Hessian matrix are needed, and thus, more stable results can be expected.
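The summation (34) can be sketched compactly; the following is our own implementation for the special case of unit weights $w_i = 1$:

```python
import numpy as np

# Analytic error model of Eqs. (27)-(34) for unit weights w_i = 1:
# per-point Jacobians are accumulated into the 2x2 covariance R_dphi,
# ordered here as [[var(d), cov(d,phi)], [cov(d,phi), var(phi)]].
def line_cov_analytic(x, y, R_xy):       # R_xy: (N, 2, 2) per-point covariances
    N = len(x)
    xm, ym = x.mean(), y.mean()
    sx2 = ((x - xm)**2).mean()
    sy2 = ((y - ym)**2).mean()
    sxy = ((x - xm) * (y - ym)).mean()
    phi = 0.5 * np.arctan2(-2 * sxy, sy2 - sx2)                     # Eq. (3)
    den = N * ((sx2 - sy2)**2 + 4 * sxy**2)
    dphi_dx = ((sx2 - sy2) * (y - ym) - 2 * sxy * (x - xm)) / den   # Eq. (32)
    dphi_dy = ((sx2 - sy2) * (x - xm) + 2 * sxy * (y - ym)) / den   # Eq. (33)
    g = ym * np.cos(phi) - xm * np.sin(phi)
    dd_dx = np.cos(phi) / N + g * dphi_dx                           # Eq. (27)
    dd_dy = np.sin(phi) / N + g * dphi_dy                           # Eq. (28)
    R = np.zeros((2, 2))
    for i in range(N):                                              # Eq. (34)
        J = np.array([[dd_dx[i], dd_dy[i]],
                      [dphi_dx[i], dphi_dy[i]]])
        R += J @ R_xy[i] @ J.T
    return R
```

For a vertical line with noise only in the x-direction, this reproduces the classic regression result $\sigma_\phi^2 = \sigma_x^2 / \sum_i (\tilde{x}_i)^2$, with $\tilde{x}_i$ the coordinates along the line.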
However, both algorithms require some computational effort, especially for a large number of measurement points. Moreover, they do not allow one to understand the effect of changing parameters on $\underline{R}_{d\phi}$, and these models can only be applied if, for each data point, the covariance matrix $\underline{R}_{xy,i}$ is available. Unfortunately, for lines extracted from images, this information is unknown, and also, in the case of using range-bearing sensors, only a worst-case estimate of $\sigma_r$ is given in the data sheet, while $\sigma_\theta$ is ignored.

4. Closed-Form Error Model of a Straight Line

In this section, a simplified error model in closed form is deduced, which enables a fast, clear and yet, for most applications, sufficiently accurate calculation of the covariance matrix $\underline{R}_{d\phi}$ in any case when the line parameters $d$ and $\phi$ have been determined from a number of discrete data points.
Thereto, first, the expected values of the line parameters $d$ and $\phi$, denoted as $\bar{d}$ and $\bar{\phi}$, are assumed to be known according to the methods proposed in Section 2 with $\bar{d} \approx d$ and $\bar{\phi} \approx \phi$. Besides, for modeling the small deviations of $d$ and $\phi$, the random variables $\Delta d$ and $\Delta\phi$ are introduced. Thus, with $d = \bar{d} + \Delta d$ and $\phi = \bar{\phi} + \Delta\phi$, it follows for the variances and the covariance:
$$\sigma_d^2 = \sigma_{\Delta d}^2, \qquad \sigma_\phi^2 = \sigma_{\Delta\phi}^2, \qquad \sigma_{d\phi} = \sigma_{\Delta d \Delta\phi} \qquad (35)$$
Next, $\Delta d$ and $\Delta\phi$ shall be determined dependent on a random variation of any of the $N$ measured data points. For this purpose, Figure 3 is considered, which shows the expected line parameters and the random variables $\Delta d$ and $\Delta\phi$.
In order to derive expressions for $\Delta d$ and $\Delta\phi$ depending on the random variables $\rho_i$, Figure 4 shows an enlargement of the rectangular box depicted in Figure 3 along the direction of the line $\tilde{x}$.
First, the effect of variations of any $\rho_i$ on $\Delta\phi$ is considered. Since $\Delta\phi$ is very small, this angle may be replaced by its tangent, which defines the slope $\Delta m$ of the line with respect to the direction $\tilde{x}$. Here, only $\rho_i$ is considered as a random variable, but not $\tilde{x}_i$. Thus, the standard formula for the slope of a regression line can be applied (see, e.g., [26] Chapter 2), which will minimize the mean squared distance in the direction of $\rho$ if all $\tilde{x}_i$ are assumed to be exactly known:
$$\Delta\phi \approx \tan\Delta\phi = \Delta m = \frac{\sigma_{\rho\tilde{x}}}{\sigma_{\tilde{x}}^2} = \frac{\sum_i \rho_i \cdot \tilde{x}_i}{\sum_i \tilde{x}_i^2} \qquad (36)$$
Now, in order to calculate the variance of $\Delta\phi$, a linear relation between $\Delta\phi$ and each $\rho_i$ is required, which is provided by the first derivative of (36) with respect to $\rho_i$:
$$\frac{\partial \Delta\phi}{\partial \rho_i} = \frac{\tilde{x}_i}{\sum_j \tilde{x}_j^2} \qquad (37)$$
Then, the variance of $\Delta\phi$ dependent on the variance of $\rho_i$ can be specified. From (37), it follows:
$$\sigma_{\Delta\phi,i}^2 = \sigma_{\rho,i}^2 \cdot \left(\frac{\partial \Delta\phi}{\partial \rho_i}\right)^2 = \sigma_{\rho,i}^2 \cdot \frac{\tilde{x}_i^2}{\left(\sum_j \tilde{x}_j^2\right)^2} \qquad (38)$$
If $\sigma_{\rho,i}^2$ is assumed to be approximately independent of $i$, it may be replaced by $\sigma_\rho^2$ and can be estimated from (2) with $\rho_i$ taken from (1) and all $s_i$ set to $1/\sqrt{N}$:
$$\sigma_{\rho,i}^2 \approx \sigma_\rho^2 = \frac{1}{N} \sum_{i=1}^{N} \rho_i(\phi, d)^2 \qquad (39)$$
It should be noted that for a bias-free estimation of $\sigma_\rho^2$ with (39), the exact line parameters $\phi$ and $d$ must be used in (1), which obviously are not available. If instead, estimated line parameters according to Section 2 are taken, e.g., by applying (3) and (4), calculated from the same data as used in (39), an underestimation of $\sigma_\rho^2$ especially for small $N$ can be expected, since $\phi$ and $d$ are determined by minimizing the variance of $\rho$ of these $N$ data points. This point is taken up again in Section 5.
Next, from (38), the variance of $\Delta\phi$ results as the sum over all $N$ data points, since all $\rho_i$ are independent of each other:
$$\sigma_{\Delta\phi}^2 = \sum_i \sigma_{\Delta\phi,i}^2 \approx \sigma_\rho^2 \cdot \frac{\sum_i \tilde{x}_i^2}{\left(\sum_i \tilde{x}_i^2\right)^2} = \sigma_\rho^2 \cdot \frac{1}{\sum_i \tilde{x}_i^2} \qquad (40)$$
Equation (40), together with (35) and (39), enables an exact calculation of $\sigma_\phi^2$ dependent on the $N$ data points of the line.
However, from (40), a straightforward expression can be derived, which is sufficiently accurate in most cases and enables a clear understanding of the parameters influencing $\sigma_\phi^2$; compare Section 5. For this purpose, according to Figure 3, the length $L$ of a line segment is determined from the perpendicular distance $d$ and from the angles $\theta_1$ and $\theta_N$ of the first and $N$-th data point, respectively:
$$L = d \cdot \left[\tan(\phi - \theta_N) - \tan(\phi - \theta_1)\right] \qquad (41)$$
Furthermore, a constant spacing $\Delta\tilde{x}$ between adjacent data points is assumed:
$$\Delta\tilde{x} \approx \frac{L}{N-1} \qquad (42)$$
Applying this approximation, the sum over all squared $\tilde{x}_i$ can be rewritten, yielding for even $N$ as depicted in Figure 4:
$$\sum_i \tilde{x}_i^2 \approx 2 \cdot \sum_{i=1}^{N/2} \left[\Delta\tilde{x} \cdot \frac{2i-1}{2}\right]^2 = \Delta\tilde{x}^2 \cdot 2\sum_{i=1}^{N/2} \left(\frac{2i-1}{2}\right)^2 \qquad (43)$$
The last sum can be transformed into closed form as:
$$2\sum_{i=1}^{N/2} \left(\frac{2i-1}{2}\right)^2 = \frac{N}{2} \cdot \frac{4\left(\frac{N}{2}\right)^2 - 1}{6} = \frac{N\left(N^2-1\right)}{12} \qquad (44)$$
With $N$ odd, the sum must be taken twice from $1$ to $\frac{N-1}{2}$, since in this case, the central measurement point has no effect on $\sigma_{\Delta\phi,i}^2$, yielding:
$$\sum_i \tilde{x}_i^2 \approx 2 \cdot \sum_{i=1}^{(N-1)/2} \left[\Delta\tilde{x} \cdot i\right]^2 = \Delta\tilde{x}^2 \cdot 2\sum_{i=1}^{(N-1)/2} i^2 \qquad (45)$$
Again, the last sum can be written in closed form, which gives the same result as in (44):
$$2\sum_{i=1}^{(N-1)/2} i^2 = \frac{N-1}{2} \cdot \left(\frac{N-1}{2} + 1\right) \cdot \frac{2 \cdot \frac{N-1}{2} + 1}{3} = \frac{N\left(N^2-1\right)}{12} \qquad (46)$$
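Both closed-form sums can be verified with a short numerical check:

```python
# Sanity check of Eqs. (44) and (46): both sums equal N(N^2 - 1)/12.
for N in (4, 6, 8, 40):              # even N, Eq. (44)
    assert 2 * sum(((2 * i - 1) / 2)**2 for i in range(1, N // 2 + 1)) == N * (N**2 - 1) / 12
for N in (5, 7, 9, 41):              # odd N, Eq. (46)
    assert 2 * sum(i**2 for i in range(1, (N - 1) // 2 + 1)) == N * (N**2 - 1) / 12
```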
Finally, by substituting (43) with (44) or (45) with (46) into (40) and regarding (35), as well as (42), a simple analytic formula for calculating the variance of $\phi$ is obtained, just depending on $L$, $N$ and the variance of $\rho$:
$$\sigma_\phi^2 \approx \sigma_\rho^2 \cdot \frac{12}{L^2 \cdot N} \cdot \frac{N-1}{N+1} \approx \sigma_\rho^2 \cdot \frac{12}{L^2 \cdot N} \qquad (47)$$
The last simplification in (47) overestimates $\sigma_\phi^2$ a little for small $N$. Interestingly, this error compensates quite well for a certain underestimation of $\sigma_\rho^2$ according to (39), assuming that the line parameters $\phi$ and $d$ are determined from the same data as $\sigma_\rho^2$; see Section 5.
Next, in order to deduce the variance $\sigma_d^2$, again, Figure 3 is considered. Apparently, the first part of the random variable $\Delta d$ is strongly correlated with $\Delta\phi$, since any mismatch in $\phi$ is transformed into a deviation $\Delta d$ by means of the geometric offset $x_{off}$ with:
$$\Delta d_\phi = -x_{off} \cdot \Delta\phi \qquad (48)$$
Actually, with a positive value for $x_{off}$ as depicted in Figure 3, the correlation between $\Delta d$ and $\Delta\phi$ becomes negative, since positive values of $\Delta\phi$ correspond to negative values of $\Delta d$. According to Figure 3, $x_{off}$ is determined from $\phi$ and $d$, as well as from $\theta_1$ and $\theta_N$:
$$x_{off} = \frac{d}{2} \cdot \left[\tan(\phi - \theta_N) + \tan(\phi - \theta_1)\right] \qquad (49)$$
Alternatively, $x_{off}$ can be taken as the mean value over all $N$ data points of the line segment:
$$x_{off} = \frac{d}{N} \cdot \sum_{i=1}^{N} \tan(\phi - \theta_i) \qquad (50)$$
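For points spaced equally along the segment, (49) and (50) coincide, since the projections $d\tan(\phi - \theta_i)$ then form an arithmetic sequence whose mean equals the midpoint of the segment. A quick check with synthetic values (our naming):

```python
import numpy as np

# Compare Eq. (49) (first and last bearing only) with Eq. (50) (mean over
# all bearings) for points spaced equally along the line.
phi, d = 0.3, 2.0
xt = np.linspace(-1.0, 3.0, 7)          # positions along the line
theta = phi - np.arctan(xt / d)         # bearings, so that d*tan(phi - theta) = xt
x_off_49 = d / 2 * (np.tan(phi - theta[-1]) + np.tan(phi - theta[0]))
x_off_50 = d / len(xt) * np.tan(phi - theta).sum()
```

Both expressions evaluate to the midpoint coordinate of the segment, here $1.0$.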
Nevertheless, it should be noted that $\Delta d$ is not completely correlated with $\Delta\phi$, since even in the case $x_{off} = 0$, the error $\Delta d$ will not be zero.
Indeed, as a second effect, each single $\rho_i$ has a direct linear impact on the variable $\Delta d$. For this purpose, in Figure 4, the random variable $\Delta d_\rho$ is depicted, which describes a parallel shift of the regression line due to variations in $\rho_i$, calculated as the mean value over all $\rho_i$:
$$\Delta d_\rho = \frac{1}{N} \cdot \sum_i \rho_i \qquad (51)$$
Combining both effects, variations in $d$ can be described as the sum of two uncorrelated terms, $\Delta d_\phi$ and $\Delta d_\rho$:
$$\Delta d = \Delta d_\phi + \Delta d_\rho = -x_{off} \cdot \Delta\phi + \frac{1}{N} \cdot \sum_i \rho_i \qquad (52)$$
This missing correlation between $\Delta\phi$ and the sum over all $\rho_i$ is also intuitively accessible: if the latter takes a positive value, it is not possible to deduce the sign or the magnitude of $\Delta\phi$ from it. From (52) and with $E(\Delta d_\phi \cdot \Delta d_\rho) = 0$, $E(\Delta d_\phi) = 0$ and $E(\Delta d_\rho) = 0$, the variance $\sigma_d^2$ can be calculated as:
$$\sigma_d^2 = E\left([\Delta d]^2\right) = E\left([\Delta d_\phi]^2\right) + E\left([\Delta d_\rho]^2\right) = x_{off}^2 \cdot E\left([\Delta\phi]^2\right) + \frac{1}{N^2} \cdot E\left(\left[\sum_i \rho_i\right]^2\right) \qquad (53)$$
$$\approx x_{off}^2 \cdot \sigma_\phi^2 + \frac{1}{N} \cdot \sigma_\rho^2 \qquad (54)$$
In the last step from (53) to (54), again, the independence of the single measurements from each other is used; thus, the variance of the sum over the $N$ data points approximates $N$-times the variance $\sigma_\rho^2$.
Finally, the covariance between $\phi$ and $d$ needs to be determined. Based on the definition, it follows with $\sigma_{d\phi} = \sigma_{\Delta d \Delta\phi}$:
$$\sigma_{d\phi} = E(\Delta d \cdot \Delta\phi) = E(\Delta d_\phi \cdot \Delta\phi) + E(\Delta d_\rho \cdot \Delta\phi) = -x_{off} \cdot E\left([\Delta\phi]^2\right) = -x_{off} \cdot \sigma_\phi^2 \qquad (55)$$
By means of (47), (54) and (55), now, the complete error model in closed form is known, represented by the covariance matrix $\underline{R}_{d\phi}$ given as:
$$\underline{R}_{d\phi} \approx \sigma_\rho^2 \cdot \begin{pmatrix} \dfrac{12 \cdot x_{off}^2}{L^2 \cdot N} + \dfrac{1}{N} & -\dfrac{12 \cdot x_{off}}{L^2 \cdot N} \\[6pt] -\dfrac{12 \cdot x_{off}}{L^2 \cdot N} & \dfrac{12}{L^2 \cdot N} \end{pmatrix} \qquad (56)$$
Applying this error model is easy, since no knowledge of the variances and covariance of each single measurement is needed, which in practice is difficult to acquire. Instead, just the number $N$ of preferably equally-spaced points used for line fitting, the variance $\sigma_\rho^2$ according to (39), the length $L$ of the line segment calculated with (41) and its offset $x_{off}$ according to (49) or (50) must be inserted.
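Putting the pieces together, the closed-form model can be sketched as follows; this is our own implementation, with the bias-correction factor $c = \frac{N+1}{N-1}$ from Section 5 already included, and with $L$ and $x_{off}$ taken directly from the point coordinates projected onto the line direction rather than from the bearings:

```python
import numpy as np

# Closed-form covariance model, Eq. (56): only N, the segment length L,
# the offset x_off and the residual variance sigma_rho^2 are required.
# Returns [[var(d), cov(d,phi)], [cov(d,phi), var(phi)]].
def line_cov_closed_form(x, y, phi, d):
    N = len(x)
    rho = x * np.cos(phi) + y * np.sin(phi) - d        # Eq. (1)
    srho2 = np.mean(rho**2) * (N + 1) / (N - 1)        # Eq. (39) with bias factor c
    xt = x * np.sin(phi) - y * np.cos(phi)             # positions along the line
    L = xt.max() - xt.min()                            # segment length, cf. Eq. (41)
    x_off = xt.mean()                                  # offset, cf. Eq. (50)
    a = 12.0 / (L**2 * N)
    return srho2 * np.array([[a * x_off**2 + 1.0 / N, -a * x_off],
                             [-a * x_off,              a         ]])
```

For a point on the line, $x\sin\phi - y\cos\phi = d\tan(\phi - \theta)$, so this projection matches the convention of Equations (49) and (50).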

5. Simulation Results

The scope of this section is to compare the presented algorithms for linear regression and error modeling based on a statistical evaluation of the results. Segmentation of raw data is not considered; if necessary, this must be performed beforehand by means of well-known methods like the Hough transformation or RANSAC; compare Section 1. Thus, for studying the performance reliably and repeatably, a large number of computer simulations was performed, applying a systematic variation of parameters within a wide range, which would not be feasible if real measurements were used.
For this purpose, straight lines with a certain perpendicular distance $d$ from the origin and within a varying range of normal angles $\phi$ have been specified. Each of these lines is numerically described by a number of $N$ points, either given in Cartesian $(x_i, y_i)$ or in polar $(r_i, \theta_i)$ coordinates. In order to simulate the outcome of a real range-bearing sensor as closely as possible, the angular coordinate was varied between $\theta_1$ and $\theta_N$. To each measurement, a certain amount of normally-distributed noise with $\sigma_x$, $\sigma_y$ and $\sigma_{xy}$, or alternatively with $\sigma_r$ and $\sigma_\theta$, was added. Further, for each $\phi$, a number of $N_s = 1000$ sets of samples was generated in order to allow a statistical evaluation of the results.

The first simulation was performed with $N = 40$ equally-spaced points, each affected by uncorrelated noise in the x- and y-direction with standard deviations $\sigma_x = \sigma_y = 5$ cm. This is a typical situation when a line is calculated from binary pixels, and in Figure 5a, a bundle of the simulated line segments is shown. The deviations $\Delta\phi$ and $\Delta d$ of the estimated $\phi$ and $d$ from their true values, taken as the mean over all $N_s$ samples, are depicted in Figure 5b,c, comparing four algorithms as presented in Section 2: The triangles mark the outcome of Equations (3) and (4) with all weights set to one, whereas the squares are calculated according to the same analytic formulas, but using individual weighting factors applying (10) with $s_i = 1/\sigma_{\rho,i}$. The perpendicular deviations $\sigma_{\rho,i}$ are determined according to (14) and (21) with $\phi$ taken from (3) without weights. Obviously, in this example, all triangles coincide with the squares, since each measurement $i$ is affected by the same noise, and thus, for any $\phi$, all weighting factors are always identical. The blue lines in Figure 5b,c show the results when applying the iterative method according to (11) with the minimum of $MSE$ found numerically.
For this purpose, $\sigma_{\rho,i}^2$ is inserted from (14) considering (21), $\rho_i$ is taken from (1), and $d$ is calculated from (4), (8)–(10) with $s_i = 1/\sigma_{\rho,i}$. The black lines (KA) depict the deviations of $d$ and $\phi$ obtained according to Krystek and Anton in [35]. Both numerical algorithms yield the same results, which is not surprising, since the variances $\sigma_{\rho,i}^2$ used as weighting factors are all identical. Further, here, the analytical algorithms provide exactly the same performance as the numerical ones, since for $\sigma_x = \sigma_y$, the weighting factors show no dependency on $\phi$, and for that case, the analytical formulas are optimal.
The lower subfigures depict the parameters of the covariance matrix $\underline{R}_{d\phi}$, again as a function of $\phi$, comparing different methods. Here, the circles represent numerical results obtained from the definitions of variance and covariance by summing over all $N_s$ passes with $1 \leq k \leq N_s$, yielding $d_k$ and $\phi_k$, respectively:
$$\sigma_d^2 = \frac{1}{N_s} \sum_{k=1}^{N_s} \left(d_k - d\right)^2 \qquad (57)$$
$$\sigma_\phi^2 = \frac{1}{N_s} \sum_{k=1}^{N_s} \left(\phi_k - \phi\right)^2 \qquad (58)$$
$$\sigma_{d\phi} = \frac{1}{N_s} \sum_{k=1}^{N_s} \left(d_k - d\right)\left(\phi_k - \phi\right) \qquad (59)$$
Since these numerical results serve just as a reference for judging the accuracy of the error models, in the formulas above, the true values for $d$ and $\phi$ have been used. The required line parameters $d_k$ and $\phi_k$ in (57)–(59) can be estimated with any of the four described methods, since minor differences in $d_k$ and $\phi_k$ have almost no effect on the resulting variances and the covariance. The blue lines in Figure 5d–f show the results of the analytic error model as described in Section 3, and the black lines represent the outcomes of the algorithm from Krystek and Anton [35], while the red lines correspond to the model in closed form according to (56) in Section 4 with $L$ and $x_{off}$ taken from (41) and (49), respectively. Interestingly, although the theoretical derivations differ substantially, the results match very well, which especially proves the correctness of the simplified model in closed form. Since this model explicitly considers the effect of the line length $L$ and of the geometric offset $x_{off}$, the behavior of the curves can be clearly understood: The minimum of $L$ will occur if $\phi$ equals the mean value of $\theta_{min}$ and $\theta_{max}$, i.e., at $\phi = 55°$, and exactly at this angle, the maximum standard deviation $\sigma_\phi$ occurs. Further, since $L$ depends linearly on $\phi$, a quadratic dependence of $\sigma_\phi$ on $\phi$ according to (47) can be observed. With respect to Figure 5e, the minimum of $\sigma_d$ also appears at $\phi = 55°$, corresponding to $x_{off} = 0$. At this angle, according to (54), the standard deviation of $d$ is given as $\sigma_d \approx \sigma_\rho/\sqrt{N} = 5/\sqrt{40} \approx 0.79$ cm, while the covariance $\sigma_{d\phi}$ calculated according to (55), and with it the correlation coefficient shown in Figure 5f, vanish.
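The reference statistics (57)–(59) can be reproduced in a few lines; the sketch below uses our own parameter choices (isotropic noise, a symmetric segment with $x_{off} = 0$) and the unweighted fit of Equations (3) and (4), so the closed-form predictions $\sigma_\phi^2 \approx 12\sigma_\rho^2/(L^2 N)$ and $\sigma_d^2 \approx \sigma_\rho^2/N$ should be recovered:

```python
import numpy as np

# Monte-Carlo reference per Eqs. (57)-(59): simulate N_s noisy copies of
# one line segment, fit each with Eqs. (3)-(4), compare with true values.
rng = np.random.default_rng(0)
phi0, d0, N, N_s, sig = np.pi / 4, 1.0, 40, 1000, 0.05
t = np.linspace(-1.0, 1.0, N)                       # positions along the line
x0 = d0 * np.cos(phi0) - t * np.sin(phi0)
y0 = d0 * np.sin(phi0) + t * np.cos(phi0)
phis = np.empty(N_s)
ds = np.empty(N_s)
for k in range(N_s):
    x = x0 + rng.normal(0.0, sig, N)                # isotropic noise
    y = y0 + rng.normal(0.0, sig, N)
    xm, ym = x.mean(), y.mean()
    sxy = ((x - xm) * (y - ym)).mean()
    phis[k] = 0.5 * np.arctan2(-2 * sxy,
                               ((y - ym)**2).mean() - ((x - xm)**2).mean())
    ds[k] = xm * np.cos(phis[k]) + ym * np.sin(phis[k])
s_phi2 = np.mean((phis - phi0)**2)                  # Eq. (58)
s_d2 = np.mean((ds - d0)**2)                        # Eq. (57)
s_dphi = np.mean((ds - d0) * (phis - phi0))         # Eq. (59)
```

With $L = 2$, $N = 40$ and $\sigma_\rho^2 = 0.0025$, the model predicts $\sigma_\phi^2 \approx 1.9 \cdot 10^{-4}$ and $\sigma_d^2 \approx 6.3 \cdot 10^{-5}$, which the empirical values reproduce within the Monte-Carlo scatter, while $\sigma_{d\phi}$ is close to zero, as expected for $x_{off} = 0$.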
When comparing the results, one should be aware that in the simulations of the analytic error models, the exact variances σ x i 2 , σ y i 2 and σ x y i are used; thus, in practice, the achievable accuracies will be worse. On the other hand, when applying the new error model in closed-form, the variance σ ρ 2 is calculated as the mean value of all ρ i 2 from the actual set of N data points according to (39), and hence, is always available.
However, if the estimated line parameters ϕ and d are used in this equation, calculated, e.g., according to (3) and (4) from the same measurements as in (39), no unbiased σ_ρ² can be expected. The reason is that for each set of N data points, the mean quadratic distance over all ρ_i² is minimized in order to estimate ϕ and d. Thus, the numeric value of σ_ρ² will always be smaller than the correct value calculated with the exact line parameters. This effect can be clearly observed in Figure 6, which shows, for the same simulation parameters as in Figure 5a, the dependency of σ_ρ² on the number of points on the line N, averaged over N_s sets of samples: only when using the exact line parameters in (39), which obviously are available only in a simulation, is the correct σ_ρ² = 25 cm² obtained, as shown by the triangles. If, however, in each run, σ_ρ² is calculated with the estimated ϕ and d, as indicated by the squares, a clear deviation occurs, especially at low N. Only asymptotically for large N, when ϕ converges to its exact value, is the correct σ_ρ² reached. Fortunately, this error can be compensated quite well by multiplying σ_ρ² with a correction factor c = (N + 1)/(N − 1), as shown by the dashed line in Figure 6. Due to the strongly non-linear relation between ϕ and any ρ_i, this correction works much better than simply replacing the divisor N in (39) by N − 1, as is common in statistics. Since c is the inverse of the term neglected in the approximation of σ_ϕ² in (47), the closed form of the covariance matrix R_dϕ according to (56) yields almost unbiased results also for small N if σ_ρ² is calculated according to (39) with the estimated line parameters ϕ and d. Although not shown here, the proposed bias compensation works well for a large range of measurement parameters. For a reliable determination of σ_ρ² from the N data points of a line segment, N should be at least on the order of 10.
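The bias and its compensation can be illustrated with a small simulation. This Python sketch is hypothetical (the simple total least-squares fit and the chosen test line are assumptions, not the paper's exact setup): it estimates σ_ρ² from only N = 8 points per run, once without and once with the correction factor c = (N + 1)/(N − 1).

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_line(x, y):
    """Unweighted total least-squares fit in normal form (illustrative)."""
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    phi = 0.5 * np.arctan2(-2.0 * sxy, syy - sxx)
    d = x.mean() * np.cos(phi) + y.mean() * np.sin(phi)
    return phi, d

def sigma_rho_sq(x, y, phi, d, corrected=True):
    """Mean squared perpendicular distance, cf. (39); optionally multiplied
    by the bias-correction factor c = (N + 1)/(N - 1)."""
    rho = x * np.cos(phi) + y * np.sin(phi) - d   # signed distances to line
    s2 = np.mean(rho ** 2)
    if corrected:
        s2 *= (len(x) + 1) / (len(x) - 1)
    return s2

sigma = 0.05                       # true sigma_rho^2 = 2.5e-3
t = np.linspace(0.0, 1.0, 8)       # only N = 8 points -> visible bias
raw, corr = [], []
for _ in range(5000):
    x = t + rng.normal(0.0, sigma, t.size)           # test line y = 0.5x + 1
    y = 0.5 * t + 1.0 + rng.normal(0.0, sigma, t.size)
    phi, d = fit_line(x, y)
    raw.append(sigma_rho_sq(x, y, phi, d, corrected=False))
    corr.append(sigma_rho_sq(x, y, phi, d))
```

Averaged over the runs, the uncorrected estimate comes out clearly below the true σ_ρ², while the corrected estimate lands close to it, mirroring the squares versus the dashed line in Figure 6.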
Figure 7 shows the results when simulating a range-bearing scan with a constant angular offset Δθ = (θ_max − θ_min)/(N − 1) between adjacent measurements. Each measurement is distorted by adding normally-distributed noise with standard deviations σ_r = 5 cm and σ_θ = 0.1°. This is a more challenging situation: since the measurement points are no longer equispaced, each data point exhibits individual variances σ_x,i, σ_y,i dependent on ϕ, and moreover, a covariance σ_xy,i exists. As can be seen, the errors of the estimated ϕ and d depicted in Figure 7b,c exhibit the same order of magnitude as before; yet, both analytic results differ slightly from each other and are less accurate than the numerical solutions. Both numerical methods yield quasi-identical results, since for the chosen small noise amplitudes, the differences between the two algorithms have no impact on the resulting accuracy.
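The individual Cartesian variances and the covariance of each scan point follow from first-order propagation of the polar noise through the polar-to-Cartesian mapping. A minimal sketch (hypothetical Python; standard Jacobian propagation, not code from the paper):

```python
import numpy as np

def scan_point_covariance(r, theta, sigma_r, sigma_theta):
    """First-order propagation of independent polar noise into the
    Cartesian covariance of x = r*cos(theta), y = r*sin(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    J = np.array([[c, -r * s],
                  [s,  r * c]])                     # Jacobian d(x,y)/d(r,theta)
    P = np.diag([sigma_r ** 2, sigma_theta ** 2])   # sigma_theta in radians
    return J @ P @ J.T  # [[sigma_x^2, sigma_xy], [sigma_xy, sigma_y^2]]
```

The off-diagonal term evaluates to (σ_r² − r²σ_θ²) sin θ cos θ, which vanishes only at multiples of 90°; this is why the covariance σ_xy,i generally must not be neglected for range-bearing sensors.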
Regarding the error models, Figure 7d–f reveal that in spite of the unequal distances between the measurement points and the varying σ_ρ,i, the results of the closed-form model match well with the analytic and numeric results. Only σ_d shows a certain deviation for steep and flat lines with ϕ below 30° or above 80°. This is related to errors in x_off, since in this range of ϕ, the points on lines measured with constant Δθ have clearly varying distances, and thus, (49) yields just an approximation of the effective offset of the straight line.
Figure 8 shows the results with the models applied to short lines measured in the angular range 30° ≤ θ ≤ 40° with N = 20, while all other parameters are identical to those in Figure 7a. As can be seen from Figure 8b,c, the analytical algorithms based on (3) and (4) are no longer adequate, since, regardless of whether weights are applied, they yield much higher errors than the numerical approaches. All error models, however, still provide accurate results. Actually, the closed-form model even yields better accuracy than before, since the distances between adjacent data points on the line and also σ_ρ,i are more uniform compared to the simulations with long lines.
In order to check the limits of the models, Figure 9 depicts the results when applying large angular noise with σ_θ = 2°. In this extreme case, the numerical algorithms also show systematic errors dependent on ϕ, since the noise of ρ_i can no longer be assumed to be normally distributed. However, according to Figure 9b,c, the iterative method as presented in Section 2 shows clear benefits in comparison to the KA algorithm proposed in [35], caused by the more accurate modeling of σ_ρ,i.
With respect to the outcome of the noise models in Figure 9d–f, only the analytic algorithm as presented in Section 3 still yields reliable results, while the KA method based on matrix inversion reveals numerical instability. Due to the clearly uneven distribution of measurements along the line, the simplified error model also shows clear deviations in this case, although at least the order of magnitude is still correct.
Finally, Figure 10 shows typical results if the sensor noise is not exactly known. In this example, the radial standard deviation was assumed to be 10 cm, whereas the exact value applied when generating the measurements was only 5 cm. The simulation parameters correspond to those in Figure 7; only the number of data points has been reduced to N = 10. According to Figure 10b,c, for calculating ϕ and d, the numerical methods now yield no benefit over the analytical formulas with or without weights. Due to the only approximately known variance, the analytic error model as well as the KA method in Figure 10d–f reveal clear deviations from the reference results. Only the closed-form model is still accurate, since it does not require any a priori information regarding the sensor noise. In addition, these results confirm the bias-free estimation of σ_ρ² with (39) even at low N, as depicted in Figure 6.

6. Conclusions

In this study, the performance of linear regression is evaluated, treating both coordinates as random variables. It is shown that, especially with the range-bearing sensors frequently used in mobile robotics, a distinct covariance of the noise in the x- and y-direction exists at each measurement point. In this case, analytical formulas assuming identical and uncorrelated noise will only provide accurate line parameters ϕ and d if the detected line segments are sufficiently long and the noise level stays below a certain limit. If these prerequisites are not fulfilled and if the sensor noise is known, numerical algorithms should be applied, which consider the reliability of each measurement point as a function of ϕ. Here, the performance of prior work can be improved by modeling the independence of the single data points exactly and by paying attention also to second-order effects of the angular noise.
The main focus of this paper is the derivation of the covariance matrix R_dϕ of straight lines. This information has a crucial impact on the performance of SLAM with line features, since for both data association and sensor fusing, R_dϕ must be estimated precisely. For this purpose, first, analytical error models are reviewed, which however need exact knowledge of the measurement noise, although in many applications this is not available. In addition, these approaches require high computational effort and do not allow one to comprehend the effect of measurement parameters on the resulting accuracy of an estimated straight line. Thus, a new error model in closed form is proposed, depending only on two geometric parameters as well as on the number of points of a line segment. Besides, a single variance must be known, which is determined easily and reliably from the same measurements as used for line fitting. By means of this model, the covariance matrix can be estimated quickly and accurately. Moreover, it allows one to adapt the measurement conditions in order to achieve the maximum accuracy of detected line features.

Acknowledgments

The author greatly appreciates the editor’s encouragement and the valuable comments of the reviewers. This work has been partially supported by the German Federal Ministry of Education and Research (Grant No. BMBF 17N2008) and funded by the Department of Computer Science and Media at Beuth University.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Analytic Derivation of Straight Line Parameters With Errors in Both Coordinates

For the derivation of the perpendicular distance d, the partial derivative of Equation (2) with respect to d is taken and set to zero, which directly gives Equation (4) using (8)–(10). In order to calculate ϕ, first the partial derivative of (2) with respect to ϕ must be calculated and set to zero, yielding:
$$\frac{1}{N}\sum_{i=1}^{N} s_i \left[ x_i y_i \left( \cos^2\phi - \sin^2\phi \right) + \left( y_i^2 - x_i^2 \right) \sin\phi \cos\phi \right] + \frac{1}{N}\sum_{i=1}^{N} s_i\, d \left( x_i \sin\phi - y_i \cos\phi \right) = 0 \tag{A1}$$
Now, the distance d can be replaced by (4), and after inserting the definitions of x ¯ , y ¯ , σ x 2 , σ y 2 and σ x y according to (5)–(9) considering (10), it follows from (A1) after reordering:
$$\sigma_{xy} \left( \cos^2\phi - \sin^2\phi \right) + \sin\phi \cos\phi \left( \sigma_y^2 - \sigma_x^2 \right) = 0 \tag{A2}$$
Applying the Pythagorean identity and the angle-addition theorems, the terms with the sine and cosine can be rewritten:
$$\cos^2\phi - \sin^2\phi = 2\cos^2\phi - 1 = \cos 2\phi \tag{A3}$$

$$\sin\phi \cos\phi = \tfrac{1}{2} \sin 2\phi \tag{A4}$$
Inserting these formulas into (A2) finally yields for ϕ :
$$\phi = \frac{1}{2} \arctan \frac{2\,\sigma_{xy}}{\sigma_x^2 - \sigma_y^2} \tag{A5}$$
Equation (A5) always yields ϕ in the range −π/4 < ϕ < π/4, although according to Figure 1, this is only correct if σ_y² > σ_x²; in the case σ_y² < σ_x², an angle of π/2 must be added to ϕ. Thus, (3) should be taken as the general solution, which also avoids a special treatment of the case where σ_y² equals σ_x².
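A quick numeric check of (A5) together with the π/2 correction can be sketched as follows (hypothetical Python; the test line is arbitrary, and the degenerate case σ_x² = σ_y² is deliberately not handled, as discussed above):

```python
import numpy as np

def phi_from_a5(x, y):
    """Normal angle per (A5), with pi/2 added when sigma_y^2 < sigma_x^2,
    as discussed above (the case sigma_x^2 = sigma_y^2 is not handled)."""
    sxx, syy = np.var(x), np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    phi = 0.5 * np.arctan(2.0 * sxy / (sxx - syy))
    if syy < sxx:                 # flat line: shift the normal angle
        phi += 0.5 * np.pi
    return phi

# noise-free points on the flat line y = 0.5*x + 1 (so sigma_y^2 < sigma_x^2)
x = np.arange(5.0)
y = 0.5 * x + 1.0
phi = phi_from_a5(x, y)
d = x.mean() * np.cos(phi) + y.mean() * np.sin(phi)   # cf. (4)
```

For this line, the corrected angle comes out as ϕ = arctan(4/3)/2 + π/2 ≈ 116.57°, and the perpendicular distance as d = 1/√1.25, i.e., the points satisfy x cos ϕ + y sin ϕ = d exactly.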

References

  1. Everett, H.R. Sensors for Mobile Robots, 1st ed.; A. K. Peters Ltd.: New York, NY, USA, 1995. [Google Scholar]
  2. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef] [PubMed]
  3. Guse, W.; Sommer, V. A New Method for Edge Oriented Image Segmentation. In Proceedings of the Picture Coding Symposium, Tokyo, Japan, 2–4 September 1991. [Google Scholar]
  4. Gao, Y.; Liu, S.; Atia, M.M.; Noureldin, A. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm. Sensors 2015, 15, 23286–23302. [Google Scholar] [CrossRef] [PubMed]
  5. Lu, F.; Milios, E. Robot pose estimation in unknown environments by matching 2D range scans. J. Intell. Robot. Syst. 1997, 18, 249–275. [Google Scholar] [CrossRef]
  6. Arras, K.O.; Castellanos, J.A.; Schilt, M.; Siegwart, R. Feature-based multi-hypothesis localization and tracking using geometric constraints. Robot. Auton. Syst. 2003, 44, 41–53. [Google Scholar] [CrossRef]
  7. Rodriguez, S.; Paz, J.F.D.; Villarrubia, G.; Zato, C.; Bajo, J.; Corchado, J.M. Multi-Agent Information Fusion System to manage data from a WSN in a residential home. Inf. Fusion 2016, 23, 43–57. [Google Scholar] [CrossRef]
  8. Li, T.; Corchado, J.M.; Bajo, J.; Sun, S.; Paz, J.F.D. Effectiveness of Bayesian filters: An information fusion perspective. Inf. Sci. 2016, 329, 670–689. [Google Scholar] [CrossRef]
  9. Tang, Y.; Zhou, D.; Xu, S.; He, Z. A Weighted Belief Entropy-Based Uncertainty Measure for Multi-Sensor Data Fusion. Sensors 2017, 17, 928. [Google Scholar] [CrossRef] [PubMed]
  10. Borenstein, J.; Everett, H.R.; Feng, L. Navigating Mobile Robots, Systems and Techniques, 1st ed.; A. K. Peters: Natick, MA, USA, 1996. [Google Scholar]
  11. Neira, J.; Tardos, J.D. Data association in stochastic mapping using the joint compatibility test. IEEE Trans. Robot. Autom. 2001, 17, 890–897. [Google Scholar] [CrossRef]
  12. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
  13. Blanco, J.L.; Gonzalez-Jimenez, J.; Fernandez-Madrigal, J.A. An Alternative to the Mahalanobis Distance for Determining Optimal Correspondences in Data Association. Trans. Robot. 2012, 28, 980–986. [Google Scholar] [CrossRef]
  14. Wang, H.; Liu, Y.H.; Zhou, D. Adaptive Visual Servoing Using Point and Line Features With an Uncalibrated Eye-in-Hand Camera. IEEE Trans. Robot. 2008, 24, 843–857. [Google Scholar] [CrossRef]
  15. Choi, Y.H.; Lee, T.K.; Oh, S.Y. A line feature based SLAM with low grade range sensors using geometric constraints and active exploration for mobile robot. Auton. Robot. 2008, 24, 13–27. [Google Scholar] [CrossRef]
  16. Yin, J.; Carlone, L.; Rosa, S.; Anjum, M.L.; Bona, B. Scan Matching for Graph SLAM in Indoor Dynamic Scenarios. In Proceedings of the Twenty-Seventh International Florida Artificial Intelligence Research Society Conference, Pensacola Beach, FL, USA, 21–23 May 2014; pp. 418–423. [Google Scholar]
  17. Pasteau, F.; Narayanan, V.K.; Babel, M.; Chaumette, F. A visual servoing approach for autonomous corridor following and doorway passing in a wheelchair. Robot. Auton. Syst. 2016, 75, 28–40. [Google Scholar] [CrossRef] [Green Version]
  18. David, P.; DeMenthon, D.; Duraiswami, R.; Samet, H. Simultaneous pose and correspondence determination using line features. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 2, pp. 424–431. [Google Scholar]
  19. Marchand, É.; Fasquelle, B. Visual Servoing from lines using a planar catadioptric system. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2935–2940. [Google Scholar]
  20. Bista, S.R.; Giordano, P.R.; Chaumette, F. Combining Line Segments and Points for Appearance- based Indoor Navigation by Image Based Visual Servoing. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2960–2967. [Google Scholar]
  21. Xu, D.; Lu, J.; Wang, P.; Zhang, Z.; Liang, Z. Partially Decoupled Image-Based Visual Servoing Using Different Sensitive Features. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2233–2243. [Google Scholar] [CrossRef]
  22. Jeong, W.Y.; Lee, K.M. Visual SLAM with Line and Corner Features. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and System (IROS), Beijing, China, 9–15 October 2006. [Google Scholar]
  23. York, D. Least-squares fitting of a straight line. Can. J. Phys. 1966, 44, 1079–1086. [Google Scholar] [CrossRef]
  24. Krane, K.S.; Schecter, L. Regression line analysis. Am. J. Phys. 1982, 50, 82–84. [Google Scholar] [CrossRef]
  25. Golub, G.; van Loan, C. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893. [Google Scholar] [CrossRef]
  26. Weisberg, S. Applied Linear Regression, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2005; ISBN 0-471-66379-4. [Google Scholar]
  27. Draper, N.R.; Smith, H. Applied Regression Analysis, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1988. [Google Scholar]
  28. Seber, G.A.F.; Lee, A.J. Linear Regression Analysis, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  29. Amiri-Simkooeiab, A.R.; Zangeneh-Nejadac, F.; Asgaria, J.; Jazaeri, S. Estimation of straight line parameters with fully correlated coordinates. J. Int. Meas. Confed. 2014, 48, 378–386. [Google Scholar] [CrossRef]
  30. Krystek, M.; Anton, M. A weighted total least-squares algorithm for fitting a straight line. Meas. Sci. Technol. 2007, 18, 3438–3442. [Google Scholar] [CrossRef]
  31. Cecchi, G.C. Error analysis of the parameters of a least-squares determined curve when both variables have uncertainties. Meas. Sci. Technol. 1991, 2, 1127–1129. [Google Scholar] [CrossRef]
  32. Arras, K.O.; Siegwart, R.Y. Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building. In Proceedings of SPIE: Mobile Robotics XII; SPIE: Pittsburgh, PA, USA, 1997; pp. 42–53. [Google Scholar]
  33. Pfister, S.T.; Kriechbaum, K.L.; Roumeliotis, S.I.; Burdick, J.W. A Weighted range sensor matching algorithms for mobile robot displacement estimation. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002; Volume 4. [Google Scholar]
  34. Pfister, S.T.; Roumeliotis, S.I.; Burdick, J.W. Weighted line fitting algorithms for mobile robot map building and efficient data representation. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; pp. 1304–1311. [Google Scholar]
  35. Krystek, M.; Anton, M. A least-squares algorithm for fitting data points with mutually correlated coordinates to a straight line. Meas. Sci. Technol. 2011, 22, 035101. [Google Scholar] [CrossRef]
  36. Borges, G.A.; Aldon, M.J. A Split-and-Merge Segmentation Algorithm for Line Extraction in 2-D Range Images. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, 3–7 September 2000; Volume 4. [Google Scholar]
  37. Jian, M.; Zhang, C.F.; Yan, F.; Tang, M.Z. A global line extraction algorithm for indoor robot mapping based on noise eliminating via similar triangles rule. In Proceedings of the 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 6133–6138. [Google Scholar]
  38. Illingworth, J.; Kittler, J. A survey of the hough transform. Comput. Vis. Graph. Image Process. 1988, 44, 87–116. [Google Scholar] [CrossRef]
  39. Kim, J.; Krishnapuram, R. A Robust Hough Transform Based on Validity. In Proceedings of the International Conference on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; Volume 2, pp. 1530–1535. [Google Scholar]
  40. Banjanovic-Mehmedovic, L.; Petrovic, I.; Ivanjko, E. Hough Transform based Correction of Mobile Robot Orientation. In Proceedings of the International Conference on Industrial Technology, Hammamet, Tunisia, 8–10 December 2004; Volume 3, pp. 1573–1578. [Google Scholar]
  41. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 25, 381–395. [Google Scholar] [CrossRef]
  42. Liu, Y.; Gu, Y.; Li, J.; Zhang, X. Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization. Sensors 2017, 17, 2339. [Google Scholar] [CrossRef] [PubMed]
  43. Nguyen, V.; Martinelli, A.; Tomatis, N.; Siegwart, R. A Comparison of Line Extraction Algorithms using 2D Laser Rangefinder for Indoor Mobile Robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and System (IROS), Edmonton, AB, Canada, 2–6 August 2005. [Google Scholar]
  44. Westfall, P.H. Understanding Advanced Statistical Methods; CRC Press: Boca Raton, FL, USA, 2013; Chapter 16. [Google Scholar]
  45. Dovì, V.G.; Paladino, O.; Reverberi, A.P. Some remarks on the use of the inverse hessian matrix of the likelihood function in the estimation of statistical properties of parameters. Appl. Math. Lett. 1991, 4, 87–90. [Google Scholar] [CrossRef]
  46. Garulli, A.; Giannitrapani, A.; Rossi, A.; Vicino, A. Mobile robot SLAM for line-based environment representation. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 15 December 2005; pp. 2041–2046. [Google Scholar]
Figure 1. Parameters of measured raw data and a fitted straight line.
Figure 2. Optimum setting of weighting parameter for each data point.
Figure 3. Dependency between Δd, Δϕ and geometric parameters.
Figure 4. Details of Figure 3 with the deviation of data points along the axis x̃.
Figure 5. Simulation results for equidistant measurement points superimposing normally-distributed and uncorrelated noise in the x- and y-direction.
Figure 6. Variance of ρ dependent on the number N of measured data points, using the same simulation parameters as indicated in Figure 5a.
Figure 7. Results from simulated range-bearing scans superimposing low noise in the r- and θ-direction.
Figure 8. Results from simulated range-bearing scans of short lines superimposing low noise in the r- and θ-direction.
Figure 9. Results from simulated range-bearing scans superimposing high noise only in the θ-direction.
Figure 10. Results from simulated range-bearing scans with a low number of data points and only an approximately known noise level of the sensor.
