Article

Track-Before-Detect Framework-Based Vehicle Monocular Vision Sensors

Laboratory SATIE (Systèmes et Applications des Technologies de l’Information et de l’Energie), CNRS (UMR 8029), Université Paris Sud, 91405 Orsay, France
*
Author to whom correspondence should be addressed.
Sensors 2019, 19(3), 560; https://doi.org/10.3390/s19030560
Submission received: 14 December 2018 / Revised: 19 January 2019 / Accepted: 25 January 2019 / Published: 29 January 2019
(This article belongs to the Special Issue Sensors Applications in Intelligent Vehicle)

Abstract

This paper proposes a Track-before-Detect framework for multibody motion segmentation (named TbD-SfM). Our contribution relies on a tightly coupled tracking-before-detection strategy intended to reduce the complexity of existing Multibody Structure from Motion approaches. Efforts were made towards an algorithm variant better suited to a future embedded implementation for dynamic scene analysis while enhancing processing-time performance. This generic motion segmentation approach can be transposed to several transportation sensor systems since no constraints are imposed on the segmented motions (6-DOF model). The tracking scheme is analyzed and its performance is evaluated under thorough experimental conditions, including full-scale driving scenarios from known and available datasets. Results on challenging scenarios, including the presence of multiple simultaneous moving objects observed from a moving camera, are reported and discussed.

1. Introduction

The increasing introduction of Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) into the marketplace plays an essential role in the design of Intelligent Transportation Systems (ITS). Recently, these areas have shown active development towards unmanned transportation solutions (car autonomy SAE Level 4). In this context, perception is a critical task since it provides meaningful, complete and reliable information about the vehicle surroundings [1,2]. Several studies have demonstrated that vision perception is an essential sensing method for scene analysis [3,4,5]. Vision-based techniques such as Visual Simultaneous Localization And Mapping (VSLAM) are well suited for inferring ego-localization while simultaneously reconstructing the environment structure [6]. Another well-known technique considered for monocular vision applications is Structure-from-Motion (SfM). This method estimates the camera pose from the image motion and the 3D structure of the scene, up to a scale factor. In this paper, a Track-before-Detect framework coupled to a multibody SfM (TbD-SfM) methodology is deployed to detect and segment multiple motions in dynamic scenes. In the first stage, our algorithm is initialized using the motion segmentation approach described in [7]. The initialization procedure provides a rough feature segmentation into static feature points (ego-motion) and dynamic feature points (exo-motions). The exo-motions are then tracked by a bank of Bayesian filters so as to observe and predict the image position of these objects in the next frames. Then, the feature points inside the tracked areas are refined to precisely estimate the exo-motions. The remaining feature points are used to compute the ego-motion. A robust formulation based on RANSAC is proposed for finding the motion hypotheses in each tracked area. Finally, the motions are computed using the SfM formulation [8].

1.1. Related Works

Image motion segmentation has been widely studied using different approaches, as surveyed in [9]. Tomasi and Kanade [10] presented a well-known factorization approach that became very popular due to its simplicity for recovering scene geometry and camera motion. Later, in [8], a factorization framework for multibody SfM was proposed. This approach considers a static camera that observes a scene with moving objects. A common drawback of all these approaches is their sensitivity to noise.
Vidal et al. [11] proposed the use of an algebraic and geometric method for estimating 3D motion and segmenting multiple rigid-body motions from two perspective views. The method relies on the multibody epipolar constraint and its corresponding multibody fundamental matrix. The complexity of such an approach is unbounded since the number of required image pairs grows quartically in the presence of more than two simultaneous motions. Goh and Vidal [12] proposed Locally Linear Manifold Clustering (LLMC). It consists of a nonlinear dimensionality reduction which finds different clusters into which feature points are segmented. This unsupervised method does not require any prior knowledge, but the clustering results are not consistent. Alternatively, Vidal and Hartley [13] addressed multiple rigid-body motion segmentation using a three-view geometry model. In detail, a multibody trifocal tensor encodes the parameters of all rigid motions and transfers epipolar points and lines between pairs of views. This information is used to obtain an initial clustering. Trifocal tensors and motion segmentation are then refined.
Li et al. [14] proposed an extension of the iterative Sturm/Triggs (ST) algorithm to alternate between depth estimation and trajectory segmentation. Then, Generalized Principal Component Analysis (GPCA) or Local Subspace Affinity (LSA) is performed to cluster the data into multiple linear subspaces. The method reduces the processing time; however, it does not improve the motion segmentation error.
Ozden et al. [15] applied the multibody SfM formulation to compute the 3D structure of objects and the camera motion via geometry decomposition using the five-point algorithm. The approach uses three non-consecutive frames of the sequence for segmenting (the first, middle and last frames of the sequence) in order to obtain stable results. Rao et al. [16] suggested a subspace separation method based on expectation-maximization and spectral clustering named Agglomerative Lossy Compression (ALC). This non-iterative algorithm applies the principles of data compression and sparse representation to motion segmentation. Zappella et al. [17] proposed a solution based on a bilinear optimization procedure to refine an initial segmentation following metric constraints and the sparsity matrix of the 3D shape of moving objects.
Dragon et al. [18] suggested multi-scale clustering (MSCM). This method performs a top-down split-and-merge procedure for segmenting motion between two consecutive frames. Image segments are split until they are consistent and finally merged with neighboring segments until convergence. MSCM combines frame-to-frame motion segmentation in a time-consistent manner. In [19], the Discrete Cosine Transform (DCT) was used to segment motion. To this end, a non-linear optimization scheme decomposes the input trajectories into a set of DCT vectors. Then, a spectral clustering technique is used to separate the foreground trajectories from the background trajectory. Jung et al. [20] studied a randomized voting (RV) method. The algorithm is based on epipolar constraints and Sampson distances between feature points and their epipolar lines. Motions that are correctly estimated get high scores and invalid motions get low scores. The score is used to separate the motions into clusters. Li et al. [21] presented a subspace clustering approach called Mixture of Gaussian Regression (MoG Regression), which employs the MoG model to characterize noise with a complex distribution. A clustering method based on spectral clustering theory is then applied. Tourani et al. [22] carried out the hypothesis generation using a RANSAC procedure. An over-segmentation is implemented through long-term gestalt-inspired motion similarity constraints, formulated as a multi-label Markov Random Field (MRF). Segmented motions are merged into clusters based on a new motion coherence constraint named in-frame shear. Sako et al. [23] proposed to segment motions by hierarchically separating trajectories into 2D and 3D affine spaces. The affine space is determined by the rank of the trajectory matrix and computed using the Minimum Description Length (MDL). Then, the average likelihood of the identified trajectories is computed and those associated with a large likelihood are segmented again. Zhu et al. [24] suggested a general multilayer framework to detect dynamic objects based on motion, appearance and probability. The motion is estimated with Gaussian Belief Propagation and employed for propagating the appearance models and the prior probability. Kernel Density Estimation is applied to obtain the probability map as output. Recently, [7] introduced an iterative approach for robust estimation of multiple structures and motions from perspective views. This work was then extended in [25] by introducing kinematic constraints of ground vehicles in order to reduce the mathematical complexity of the motion-estimation procedure.

1.2. Contributions

The main contributions of our work are summarized below:
  • A novel tracking framework for general 6-DOF simultaneous motion segmentation based on temporal filtering and a RANSAC formulation. This monocular vision sensor approach minimizes the number of hypotheses needed to achieve a good motion segmentation without any prior knowledge about the observed motions.
  • A thorough experimental procedure is reported on full-scale dynamic scenarios. Based on the obtained results, our method improves motion detection on challenging dynamic scenes without the need for a fine-tuning procedure.
  • A comparison with other state-of-the-art techniques is provided in terms of segmentation and reprojection errors as proposed in [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. The outliers ratio is also provided as an indicator of the number of segmented feature points.

1.3. Paper Outline

This paper is structured as follows: Section 2 introduces the theoretical concepts of the single and multiple motion formulations of the SfM factorization approach. Section 3 explains the methodology fundamentals for multibody motion segmentation using SfM. In Section 4, the proposed framework is detailed with a particular focus on the strategy for reducing the number of hypotheses required for multibody motion segmentation. Finally, Section 5 presents the experimental protocol and the evaluation of the obtained results on full-scale dynamic scenes.

2. Structure from Motion Factorization

2.1. Single Motion Formulation

Let us consider an object as a rigid body and its motion to be represented and sampled by image feature points. From the viewpoint of a moving camera, the feature points observed on a scene can lie on static and dynamic objects. Under these assumptions, the factorization approach in [10] considers a group of 2D feature points to be tracked and matched over f consecutive frames in a sequence of images. The cardinality of this set of points is denoted p. Based on these observations two problems are addressed: (i) recovering the unknown 3D scene structure up to a scale factor and (ii) estimating ego-camera motion.
A static scenario observed from a moving camera constitutes the simplest use-case. Let us consider $W \in \mathbb{R}^{3f \times p}$ as the measurement matrix composed of the image coordinates of the feature points along the sequence. Each column vector of this matrix represents the feature point position per frame as $w_p = [w_{1p}, w_{2p}, \ldots, w_{fp}]^T \in \mathbb{R}^{3f \times 1}$. The camera motion between frames is modeled by a rigid transformation, $M = [R \,|\, t]$, where $M \in \mathbb{R}^{3f \times 4}$, and $R \in \mathbb{R}^{3 \times 3}$ and $t \in \mathbb{R}^{3 \times 1}$ stand for the rotation and translation, respectively. Finally, $S \in \mathbb{R}^{4 \times p}$ is the structure composed of the 3D homogeneous coordinates of the feature points $s_p = [s_x, s_y, s_z, 1]^T$, as stated in Equation (1):
$$W = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1p} \\ w_{21} & w_{22} & \cdots & w_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ w_{f1} & w_{f2} & \cdots & w_{fp} \end{bmatrix}, \quad M = \begin{bmatrix} M_1 \\ M_2 \\ \vdots \\ M_f \end{bmatrix}, \quad S = \begin{bmatrix} s_1 & s_2 & \cdots & s_p \end{bmatrix} \quad (1)$$
Thus, the single-motion general formulation of SfM is as follows:
$$W_{3f \times p} = M_{3f \times 4} \cdot S_{4 \times p} \quad (2)$$
The bilinear factors M and S are computed by factorizing W. The solution to Equation (2), namely $\tilde{W}$, stands for the best rank-4 approximation of the matrix W, given by the rank-4 estimates of motion ($\tilde{M}$) and structure ($\tilde{S}$) as:
$$\tilde{W}_{3f \times p} \approx \tilde{M}_{3f \times 4} \, \tilde{S}_{4 \times p} \quad (3)$$
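As an informal illustration of Equation (3), the sketch below computes a rank-4 approximation of a measurement matrix with a truncated SVD. It is not the authors' implementation: the function name, the even split of the singular values between the two factors and the omission of any metric-upgrade step are assumptions made for brevity.

```python
import numpy as np

def rank4_factorization(W):
    """Best rank-4 approximation of the measurement matrix W (3f x p),
    returning motion and structure factors as in Equation (3).
    Illustrative sketch only."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U4, s4, Vt4 = U[:, :4], s[:4], Vt[:4, :]
    M_hat = U4 * np.sqrt(s4)              # 3f x 4 motion factor
    S_hat = np.sqrt(s4)[:, None] * Vt4    # 4 x p structure factor
    W_hat = M_hat @ S_hat                 # rank-4 approximation of W
    return M_hat, S_hat, W_hat
```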

2.2. Multiple Motions Formulation

In a scene composed of multiple motions [8], multibody motion segmentation facilitates the computation of the camera motion and the structure of all the rigid bodies in the scene using the general formulation (see Equation (2)). The multibody trajectory matrix W consists of the trajectory matrices of the n independent motions, each of them represented by $W_n \in \mathbb{R}^{3f \times p}$. The multibody camera motion $M \in \mathbb{R}^{3f \times 4n}$ is computed with respect to each of the n independent body motions and denoted as $M_n \in \mathbb{R}^{3f \times 4}$. Finally, the multibody 3D structure, $S \in \mathbb{R}^{4n \times p}$, is built in a sparse shape enclosing the structure of each body, $S_n \in \mathbb{R}^{4 \times p}$, in a block-diagonal matrix. The general multibody SfM formulation is:
$$\left[ W_1 \,|\, \cdots \,|\, W_n \right] = \left[ M_1 \,|\, \cdots \,|\, M_n \right] \cdot \begin{bmatrix} S_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & S_n \end{bmatrix} \quad (4)$$
Equation (4) is solved by factorizing each motion individually.
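To make the block structure of Equation (4) concrete, the following sketch assembles a multibody trajectory matrix from already-segmented per-body factors. The function name and the use of scipy.linalg.block_diag are illustrative choices, not part of the original method.

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_multibody(motions, structures):
    """Stack per-body motions M_n (3f x 4) side by side and place the
    structures S_n (4 x p_n) on a block diagonal, as in Equation (4).
    Illustrative only: in practice each body is factorized individually."""
    M = np.hstack(motions)        # 3f x 4n
    S = block_diag(*structures)   # 4n x sum(p_n)
    W = M @ S                     # equals [M_1 S_1 | ... | M_n S_n]
    return W, M, S
```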

3. Scene Motion Segmentation Methodology

The SfM procedure stated in [7] is considered in this study to detect motion and to recover trajectories from multiple views. Let us refer to this method as the baseline method. This methodology is applied to scenes composed of static and dynamic objects. Hereafter, we consider monocular image sequences captured on board a moving vehicle. Images are analyzed and processed through a temporal sliding window, and feature points are extracted.
The detection process starts by randomly sampling a set of feature points from two consecutive frames of the trajectory matrix. These points are employed to recover the relative motion between the frames (M) and the structure (S). This stage is carried out on the same set of feature points along a temporal sliding window of size Γ, so as to retrieve a trajectory which minimizes the reprojection error.
A new motion hypothesis, $W_n^{hyp}$, is instantiated from any set of features achieving a reprojection error lower than a threshold. A motion hypothesis is defined as a possible trajectory matrix that satisfies the reprojection error criterion and represents the nth motion of the observed scene. Since the number of observed motions is unknown, new trajectories are built until all feature points are assigned. At the end of this procedure, the scene segmentation is composed of n motions. As a result, the best scene segmentation in terms of reprojection error is selected.
In the remainder of this section, we detail how to determine the number of sampling trials required to instantiate a new motion hypothesis. Next, the hypothesis evaluation method is introduced and the criterion for associating a feature point with a motion hypothesis is detailed.

3.1. Recover Motion and Structure

The trajectory matrix W is normalized using the 8-point algorithm and denoted by $\bar{W}$. A set of k points in two consecutive frames is sampled from the matrix $\bar{W}$ and defined by $\bar{w}_f = [p_1, p_2, \ldots, p_k]^T$ in one frame and $\bar{w}_{f'} = [p'_1, p'_2, \ldots, p'_k]^T$ in the consecutive frame. A feature point $\bar{p}_i$ is selected randomly [33] and the $\bar{p}'_i$ features are associated following a nearest-neighbor criterion with the probability distribution modeled by Equation (5). The values of ζ and ρ are selected heuristically as a function of the probability scale.
$$P(\bar{p}'_i \mid \bar{p}_i) = \begin{cases} \dfrac{1}{\zeta} \exp\left( -\dfrac{\left\| \bar{p}'_i - \bar{p}_i \right\|^2}{\rho^2} \right) & \text{if } \bar{p}'_i \neq \bar{p}_i \\ 0 & \text{if } \bar{p}'_i = \bar{p}_i \end{cases} \quad (5)$$
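The sampling step of Equation (5) can be sketched as follows. The function name, the data layout (a (p, 2) array of normalized image coordinates) and the default ρ = 0.07 (the heuristic value reported in Section 5) are assumptions for illustration; the constant ζ is absorbed when the weights are normalized to sum to one.

```python
import numpy as np

def sample_neighbors(points, seed_idx, k=8, rho=0.07):
    """Draw k-1 companions for a randomly chosen seed point using the
    distance-based probability of Equation (5). Illustrative sketch."""
    d2 = np.sum((points - points[seed_idx]) ** 2, axis=1)
    w = np.exp(-d2 / rho ** 2)
    w[seed_idx] = 0.0              # P = 0 for the seed point itself
    w /= w.sum()                   # normalization absorbs 1/zeta
    others = np.random.choice(len(points), size=k - 1, replace=False, p=w)
    return np.concatenate(([seed_idx], others))
```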
These vectors are used to enforce the epipolar constraint through the matrix E, as written in Equation (6). E is computed in least-squares form, $Ax = 0$, where A contains the coefficients built from $\bar{w}_f$ and $\bar{w}_{f'}$, and x stacks the entries of the essential matrix E.
$$\bar{w}_{f'}^{\,T} \cdot E \cdot \bar{w}_f = 0 \quad (6)$$
The motion is defined as $\tilde{M} = [R \,|\, t]$, where the rotation and translation are recovered by means of a singular value decomposition (SVD) of the essential matrix E as:
$$U D V^T = \mathrm{SVD}(E) \quad (7)$$
The four possible solutions, $(UQV^T, +u_3)$, $(UQV^T, -u_3)$, $(UQ^TV^T, +u_3)$ and $(UQ^TV^T, -u_3)$, where $u_3$ denotes the third column of U, are evaluated in order to select the only valid combination. Finally, the structure $\tilde{S}_k \in \mathbb{R}^{4 \times k}$ is estimated with an SVD from the camera projection matrices of two consecutive images.
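The decomposition of the essential matrix into its four candidate poses can be sketched as below. The matrix Q follows the standard convention found in [34]; the cheirality (triangulation) test that selects the single valid pair is left out, and the code is an illustration under these assumptions rather than the authors' implementation.

```python
import numpy as np

def pose_candidates_from_E(E):
    """Enumerate the four (R, t) candidates from the SVD of the essential
    matrix (Equation (7)). The valid pair is the one placing triangulated
    points in front of both cameras; that check is omitted here."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U @ Vt) < 0:     # enforce a proper rotation (det = +1)
        Vt = -Vt
    Q = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    u3 = U[:, 2]                      # translation is defined up to scale
    return [(U @ Q @ Vt,  u3), (U @ Q @ Vt, -u3),
            (U @ Q.T @ Vt,  u3), (U @ Q.T @ Vt, -u3)]
```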

3.2. Generation of Motion Hypotheses

A motion hypothesis is estimated from the motion $\tilde{M}$ and the structure $\tilde{S}_k$ recovered using the vectors $\bar{w}_f$ and $\bar{w}_{f'}$ (see Section 3.1) in each consecutive pair of frames along the sliding window. The matrix $\tilde{\bar{W}}_k$ is determined by Equation (2) for each sampling trial. The reprojection error is evaluated for each pair of frames and accumulated over the sliding window. A hypothesis is accepted if the reprojection error over the sliding window is less than a threshold $\epsilon_{hyp}$, such that:
$$\sum_{f=1}^{\Gamma} \left\| W_k - \tilde{M} \cdot \tilde{S}_k \right\| \leq \epsilon_{hyp} \quad (8)$$
If the hypothesis is validated, the trajectory matrix, the motion and the structure are kept in $\tilde{W}_k^h$, $\tilde{M}^h$ and $\tilde{S}_k^h$, respectively. If the hypothesis is discarded, a new set of k feature points is sampled until the number of sampling trials, ψ, is reached.
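A minimal sketch of the acceptance test of Equation (8) is given below, assuming the measurements are stacked three rows per frame; reading the accumulated error as a per-frame Frobenius norm and the function name itself are assumptions made for illustration.

```python
import numpy as np

def accept_hypothesis(W_k, M_hat, S_hat, eps_hyp):
    """Accept a motion hypothesis if the reprojection error accumulated
    over the sliding window stays below eps_hyp (Equation (8)).
    W_k is (3*Gamma x k); M_hat, S_hat are the recovered factors."""
    W_pred = M_hat @ S_hat
    n_frames = W_k.shape[0] // 3
    err = 0.0
    for f in range(n_frames):
        rows = slice(3 * f, 3 * f + 3)
        err += np.linalg.norm(W_k[rows] - W_pred[rows])  # per-frame error
    return err <= eps_hyp, err
```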

Association Criterion of a Feature Point and a Motion Hypothesis

Given the motion $\tilde{M}^h$ and the feature point matrix $\bar{W}$, the structure ($\tilde{S}^h$) is calculated using the linear triangulation method [34]. The motion $\tilde{M}^h$ is applied to the structure $\tilde{S}^h$ in Equation (2) to obtain $\tilde{\bar{W}}$. The reprojection error is computed for each point in the sliding window as in Equation (9). Feature points achieving a reprojection error lower than a threshold $\epsilon_{pto}$ are kept in the group $W_n$ and removed from W. The threshold $\epsilon_{pto}$ is defined as the maximum reprojection error allowed per feature point.
$$\left\| W - \tilde{W} \right\| \leq \epsilon_{pto} \quad (9)$$
Finally, the structure $S_n$ is updated using the feature points satisfying the reprojection error criterion ($W_n$) and the motion $\tilde{M}^h$. Motion hypotheses are created from the remaining points ($W - W_n$) until all the trajectory points in W are assigned or rejected as outliers.
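The point-to-hypothesis association can be sketched as follows, assuming the reprojected trajectories $\tilde{W}$ have already been obtained by triangulating with the hypothesis motion; the per-point error accumulation over the window and the function name are illustrative assumptions.

```python
import numpy as np

def associate_points(W, W_tilde, eps_pto):
    """Assign feature points to a motion hypothesis: a point is kept in W_n
    if its reprojection error accumulated over the sliding window satisfies
    Equation (9); the rest remain available for the next hypothesis.
    Both matrices are (3*Gamma x p). Illustrative sketch."""
    gamma = W.shape[0] // 3
    per_point_err = np.zeros(W.shape[1])
    for f in range(gamma):
        rows = slice(3 * f, 3 * f + 3)
        per_point_err += np.linalg.norm(W[rows] - W_tilde[rows], axis=0)
    inliers = per_point_err <= eps_pto
    return W[:, inliers], W[:, ~inliers]   # (W_n, remaining points)
```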

3.3. Sampling Trials for Motion Segmentation

The motion segmentation addressed in this paper is a probabilistic procedure. This procedure is carried out iteratively on the set of features until all observed motions are detected. It is then necessary to determine the number of sampling trials (ψ) required to achieve good results with a probability $p_r$. ψ is estimated relying on the RANSAC formulation, where ε stands for the probability that any selected data point is an outlier, such that:
$$\psi = \frac{\log\left(1 - p_r\right)}{\log\left(1 - (1 - \epsilon)^k\right)} \quad (10)$$
It is worth mentioning that this formulation leads to detecting the dominant motion of the scene first. This motion corresponds to that of the camera (i.e., the ego-motion). In the subsequent iterations, the motions of features lying on dynamic objects are detected.
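For reference, Equation (10) can be evaluated with a few lines of code; the defaults shown ($p_r$ = 0.99, an outlier ratio of 30% and k = 8) match values used later in the paper, and rounding up with ceil is an assumption.

```python
import math

def num_trials(p_r=0.99, outlier_ratio=0.30, k=8):
    """Number of sampling trials psi from Equation (10): enough samples so
    that an all-inlier draw of size k occurs with probability p_r."""
    return math.ceil(math.log(1.0 - p_r) /
                     math.log(1.0 - (1.0 - outlier_ratio) ** k))

# Example: num_trials() == 78 and num_trials(outlier_ratio=0.35) == 143
```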

3.4. Evaluation of a Motion Hypothesis

After ψ trials, multiple solutions for an observed motion can satisfy the condition stated in Equation (9). The solution with the smallest Euclidean distance between the trajectory matrix W and the hypothesis estimate ($\tilde{W}_n$) is selected as the best motion hypothesis. The first motion ($n = 1$) is considered the dominant motion since it gathers the highest consensus among the feature point set.
The outline of the motion segmentation process is summarized in Algorithm 1.
Algorithm 1 Motion Segmentation Algorithm
1: procedure Segmentation(W)
2:   k = 8 ▹ minimum number of points
3:   ψ ▹ number of hypotheses
4:   n = 0 ▹ counter of motions
5:   hyp = 0 ▹ hypotheses counter
6:   while hyp ≤ ψ do
7:    while number of feature points in W ≥ k do
8:     while reprojection error > ϵ_hyp do
9:       W̄ = Normalize(W)
10:      Sample k points from W̄
11:      Compute M̃ and S̃_k (Section 3.1)
12:      Compute the estimate W̃_k with M̃ and S̃_k
13:      Compute the reprojection error of the hypothesis
14:     end while
15:     n = n + 1
16:     M̃^h = M̃, S̃_k^h = S̃_k
17:     Apply M̃^h to the remaining feature points
18:     Compute W̃ with M̃^h and S̃^h
19:     Compute the reprojection error per point
20:     if reprojection error of point w_p ≤ ϵ_pto then
21:      Add the points to W_n
22:      Remove the points from W
23:     end if
24:    end while
25:    hyp = hyp + 1
26:   end while
27:   return W_n, M_n, S_n ▹ Segmented trajectory matrix, motion and structure
28: end procedure

4. Track-Before-Detect Framework

The multibody SfM-based approach introduced by Sabzevari et al. [7] has proved to be suitable for achieving scene motion segmentation following a closed-form formalism. However, the computational complexity of this strategy is high and increases with the number of observed motions. To alleviate this limitation, the authors recently proposed in [25] a sped-up variant of the procedure taking advantage of motion model priors in the context of a ground vehicle application. At the cost of generality, the reformulated problem was limited to 2-DOF instead of 6-DOF, drastically reducing the complexity.
A Track-before-Detect SfM (TbD-SfM) framework is proposed for improving scene motion segmentation by simultaneously detecting and tracking multiple dynamic image regions. This method is intended to efficiently limit the computational complexity without prior constraints on the scene dynamics. As a result, this method improves the inference of the number of observed motions, deals with more complex scenarios including partial occlusions, and preserves a high feature point density on tracked dynamic regions.
A drastic decrease in the sampling and evaluation of scene motion hypotheses is achieved since dynamic regions are tracked and efficiently exploited to limit the solution exploration space.
The TbD-SfM framework needs to be initialized with a set of rough motion segments. To this end, the factorization-based scene motion segmentation presented in Section 3 is employed. Alternatively, multiple-view motion detection can also be performed [35]. Based on the rough scene segmentation, a multi-target tracking (MTT) scheme is started to manage dynamic regions. Such regions enclose sets of feature points randomly sampled so as to retrieve motion and structure. Along the processing sliding window, tracked regions are propagated until dynamic scene motion segmentation convergence is reached.
Figure 1 illustrates the outline of the proposed approach, referred to as TbD-SfM. In the following, the sequential process is detailed.

4.1. Representation of Dynamic Regions

A dynamic region is represented by a horizontally oriented box with centroid coordinates (u, v), width, w, and height, h, in pixels. In this context, dynamic regions enclose object entities and associate their feature points along the sliding window. It is worth noting that ego-motion features cannot be correctly enclosed by a unique dynamic region. For this reason, this set of features is kept apart from the tracking scheme. Only the remaining dynamic regions are then considered as potential dynamic objects.

4.2. Initialization

The TbD-SfM is initialized with rough motion segments (see Section 3 or, alternatively, [35]). In this stage, feature points are assigned to the input dynamic regions. Ego-motion is inferred as the dynamic region composed of the largest set of feature points (dominant motion assumption). At this stage, a first estimation of the size and location of the dynamic regions is carried out.

4.3. Scene Analysis

Scene analysis starts by identifying features belonging to the dominant motion set, denoted as $W_1$. To this end, feature points enclosed in the dynamic regions ($W_{2p}, \ldots, W_{np}$) are removed from the trajectory matrix. The remaining features form the dominant trajectory matrix:
$$W_{1p} = W \setminus \left[ W_{2p} \,|\, W_{3p} \,|\, \cdots \,|\, W_{np} \right] \quad (11)$$
where $W_{np}$ represents the trajectory matrix of the nth motion. It is important to note that the set of features $W_{1p}$ can include misclassified features. A robust RANSAC-based motion estimation is carried out on the set $W_{1p}$ following the steps described in Section 3.2. The estimation of the dominant motion must fulfill a consensus set composed of at least m features. The consensus value, m, is determined by the minimum number of feature points (k) required to instantiate a motion estimate and by the number of columns of $W_{np}$, as follows:
$$m = \mathrm{col}(W_{np}) - k \quad (12)$$
The solution with the largest consensus among the set of features is selected. If there are multiple motion solutions with the same consensus, the one with the smallest mean reprojection error is kept. In the presence of multiple observed motions within the set of features $W_{1p}$, the motion estimates might not achieve the minimum required consensus. This situation occurs when the number of outliers is greater than k feature points or when there is at least one new moving object in the scene. Motion factorization is applied to the set of unsegmented features in order to find new moving objects or to discard such features as outliers. The results of this stage are $W_1$, with the structure $\tilde{S}_1$ and motion $\tilde{M}_1$ of the dominant motion, and $W_n$, with the structure $\tilde{S}_n$ and motion $\tilde{M}_n$ of any new objects that entered the scene.

4.4. Motion Factorization on Dynamic Regions

The motions are factorized relying on the segmented feature points inside each dynamic region $W_{2p} \,|\, W_{3p} \,|\, \cdots \,|\, W_{np}$. Each matrix is assumed to contain feature points following the nth moving object as well as outliers. Features classified as outliers by the motion factorization are associated with other dynamic regions or finally discarded according to their reprojection error. At this stage, feature points are classified in the trajectory matrices $W_2 \,|\, W_3 \,|\, \cdots \,|\, W_n$ and their structures and motions are recovered.

4.5. Number of Hypotheses

The number of motion hypotheses during RANSAC can be fixed by assuming a known proportion of outliers in the dynamic region that should not be exceeded. The outlier proportion can also be made adaptive, as presented in [34]. The number of motion hypotheses is computed with a probability of $p_r = 99\%$ and $k = 8$, as stated in Equation (10).
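As a worked illustration, plugging $p_r = 0.99$ and $k = 8$ into Equation (10) with an assumed outlier proportion of 30% (a value used in several of the experiments of Section 5) gives:

$$\psi = \frac{\log(1 - 0.99)}{\log\left(1 - (1 - 0.30)^{8}\right)} = \frac{\log(0.01)}{\log(0.9424)} \approx 78 \ \text{trials}$$

With the 35% outlier ratio assumed for the KITTI experiment of Section 5.2, the same formula yields roughly 143 trials.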

4.6. Filtering

A bank of Kalman filters (KF) is implemented to manage and infer the most probable states of the dynamic regions. Assuming that the observed moving objects in the sequence are subject to physical dynamics, they are expected to exhibit smooth changes along the image sequence. The state of a dynamic region in the image plane is tracked by an 8D vector. The track state is denoted by $x_f$ (see Equation (13)) and consists of the image centroid coordinates, $(x_c, y_c)$, in pixels, the width, w, and the height, h:
$$x_{f|f} = \left[ x_c, y_c, w, h, v_x, v_y, \delta_w, \delta_h \right]^T \quad (13)$$
The state vector also includes the first derivatives of the region attributes ($v_x, v_y, \delta_w, \delta_h$). Since an inter-frame linear and uniform motion is assumed, a linear Gaussian model is well suited for the tracking purpose, as stated in Equation (14):
$$x_f = A \cdot x_{f-1} + \alpha_f, \qquad \alpha_f \sim \mathcal{N}(\alpha_f; 0, \Lambda_f)$$
$$y_f = C \cdot x_f + \beta_f, \qquad \beta_f \sim \mathcal{N}(\beta_f; 0, \Gamma_f) \quad (14)$$
where A and C represent the transition and observation models, respectively. $x_{f-1}$ stands for the state vector at the previous sample frame and $y_f$ for the multivariate observations. $\alpha_f$ and $\beta_f$ are the state and observation noises following zero-centered normal distributions with known covariances.
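A compact sketch of one filter in the bank, using the 8D state of Equation (13) and a constant-velocity transition, is shown below. The noise covariances (q, r) and the initial uncertainty are placeholders, since the paper does not report these values.

```python
import numpy as np

class BoxKalmanFilter:
    """Constant-velocity Kalman filter for one dynamic region, with the 8D
    state of Equation (13): [x_c, y_c, w, h, v_x, v_y, dw, dh]."""
    def __init__(self, x0, q=1.0, r=4.0):
        self.x = np.asarray(x0, dtype=float)               # state estimate
        self.P = np.eye(8) * 10.0                           # state covariance (assumed)
        self.A = np.eye(8)
        self.A[:4, 4:] = np.eye(4)                          # position += velocity
        self.C = np.hstack([np.eye(4), np.zeros((4, 4))])   # only the box is observed
        self.Q = np.eye(8) * q                              # process noise (assumed)
        self.R = np.eye(4) * r                              # measurement noise (assumed)

    def predict(self):
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[:4]                                   # predicted box [x_c, y_c, w, h]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.C @ self.x    # innovation
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.C) @ self.P
        return self.x[:4]

# Usage sketch: kf = BoxKalmanFilter([u, v, w, h, 0, 0, 0, 0]); kf.predict(); kf.update(box)
```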

4.6.1. Track-to-Motion Association

The states of the tracked regions are predicted by means of their associated Kalman filters. State predictions enclose the set of points employed for motion factorization, as illustrated in Figure 1. The features following the factorized motion update the tracked region if a geometric distance criterion is satisfied. The criterion correlates the tracked dynamic region and the region enclosing the detected factorized motion with respect to their appearance and their uncertainty-weighted state, given by the inverse of the mean point reprojection error.

4.6.2. Track Creation and Deletion

A dynamic region has to be detected in at least 60% of the frames of the sliding window so as to provide enough evidence to initialize a filter to track it. Non-updated tracks are destroyed if their predictions are not reliable enough to be associated with newly detected motions (i.e., below the 60% detection rate). A new moving object is detected using the points classified as outliers. The factorization method is applied to these feature points in order to find a new group that satisfies the reprojection error criterion $\epsilon_{hyp}$ (see Section 3).
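The 60% rule can be sketched with simple bookkeeping over the sliding window; the data layout (a per-track list of detection flags) and the function name are assumptions, and the same threshold is reused for deletion as described above.

```python
def manage_tracks(tracks, window_size, min_rate=0.6):
    """Confirm or drop tracks according to the 60% detection rule of
    Section 4.6.2. 'tracks' maps a track id to a list of booleans
    (detected or not) over the last frames. Illustrative sketch."""
    confirmed, deleted = [], []
    for track_id, hits in tracks.items():
        recent = hits[-window_size:]
        rate = sum(recent) / float(window_size)
        (confirmed if rate >= min_rate else deleted).append(track_id)
    return confirmed, deleted
```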
Hereafter, the outline of TbD-SfM is presented in Algorithm 2:
Algorithm 2 Proposed Algorithm Framework
1: procedure Framework(W)
2:  for frame = 1 to last frame do
3:   if frame == 1 then
4:    Motion segmentation with the baseline method (Algorithm 1)
5:    Get the dynamic object positions
6:   else
7:    if frame == F then
8:     Remove from W the feature points belonging to dynamic objects
9:     Find the ego-motion feature points
10:    Find the dynamic feature points
11:    Search for new motions among the outlier feature points
12:    Feed the KF with the positions of the dynamic objects
13:   else
14:    Predict the positions and sizes of the objects
15:    Remove the points inside the moving object areas
16:    Find the ego-motion feature points
17:    Find the dynamic feature points
18:    Search for new motions among the outlier feature points
19:    Update the positions of the dynamic objects
20:    Feed the KF with the positions of the dynamic objects
21:   end if
22:  end if
23: end for
24: return W_n, M_n, S_n ▹ Segmented trajectory matrix, motion and structure
25: end procedure

5. Results

The baseline algorithm (Section 3) and the TbD-SfM algorithm (Section 4) are evaluated in different urban scenarios using the Hopkins 155 (http://www.vision.jhu.edu/data/hopkins155/) and KITTI (http://www.cvlibs.net/datasets/kitti/) datasets. The Hopkins 155 dataset provides sequences of images with small inter-frame motions. The images were recorded with a hand-held camera. The dataset provides the optical flow without tracking errors in the different sequences. The 2D feature points are tracked along sequences composed of 640 × 480 images acquired at a rate of 15 frames per second. The KITTI dataset [36] has scenarios with greater dynamic complexity in comparison with the Hopkins dataset. KITTI has 1392 × 512 images sampled in uncontrolled illumination conditions from a camera embedded on a moving car. The speed of the camera can reach 60 km/h in some scenes. The dataset does not furnish the feature points in the scenes, so tracking errors may be present in the optical flow. Feature points are acquired by means of the Libviso2 extractor [37]. The scenes are processed in a temporal sliding window of size Γ = 5 frames. The results obtained per sliding window are processed and the mean value is reported as the frame result. At least 8 feature points are required for motion detection. The initialization of the TbD-SfM method is done with the baseline algorithm and its result is reported for the first frame. The values of ζ and ρ are selected heuristically and were set to ζ = 1 and ρ = 0.07 in the experiments. The threshold values $\epsilon_{hyp} = \epsilon_k \cdot k \cdot \Gamma$ and $\epsilon_{pto} = \epsilon_p \cdot \Gamma$ are selected based on the performance of the method estimated with the confusion matrix (Table 1). The methods are evaluated using the reprojection error, the segmentation error and the outliers ratio.
The reprojection error stands for the average difference between the trajectory matrix, W, and its corresponding estimate, $\tilde{W}$, as follows:
$$\mathrm{Rep.\ Error} = \frac{\sum \left( W - \tilde{W} \right)}{\text{Total \# of points}} \quad (16)$$
The segmentation error is defined in [11] as the misclassification of a point among the objects observed in the scene. It is computed with Equation (17) as:
$$\mathrm{Seg.\ Error} = 100 \cdot \frac{\text{\# of misclassified points}}{\text{Total \# of points}} \quad (17)$$
Outliers are defined as points that do not meet the reprojection error criterion established by the threshold $\epsilon_p$ included in the RANSAC scheme. The outliers ratio is then computed as:
$$\mathrm{Outliers\ Ratio} = 100 \cdot \frac{\text{\# of unclassified points}}{\text{Total \# of points}} \quad (18)$$
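The three metrics can be computed as sketched below; the absolute difference inside the reprojection error and the exact bookkeeping of classified versus rejected points are assumptions, since only the formulas above are given in the paper.

```python
import numpy as np

def evaluate(W, W_tilde, labels, gt_labels, n_outliers):
    """Evaluation metrics of Equations (16)-(18). labels/gt_labels hold the
    per-point motion assignments of the tracked points; n_outliers counts the
    points rejected by the eps_p threshold. Illustrative sketch."""
    labels, gt_labels = np.asarray(labels), np.asarray(gt_labels)
    total = gt_labels.size
    rep_error = np.sum(np.abs(W - W_tilde)) / total                        # Eq. (16)
    seg_error = 100.0 * np.count_nonzero(labels != gt_labels) / total      # Eq. (17)
    outliers_ratio = 100.0 * n_outliers / total                            # Eq. (18)
    return rep_error, seg_error, outliers_ratio
```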

5.1. Experimental Evaluation of Baseline Method

The baseline algorithm was tested on the KITTI scenes road-2011_10_03_drive_0042 (Scene 1) and residential-2011_09_30_drive_0034 (Scene 2), and the results were compared with [7]. Scene 1 involves two cars at high speed (around 55 km/h): the moving camera and a car passing from back to front. Five frames were processed, each composed of 218 feature points. Figure 2a illustrates the feature point trajectories of the dominant motion in red and the moving object in green. Figure 2b shows an example of over-segmented motion obtained with $\epsilon_k = 0.25$ pixels and $\epsilon_p = 3$ pixels. Table 2 reports precision and recall results. For these tests, 200 scene motion segmentation hypotheses were generated with the values $\epsilon_k = 0.3$ pixels and $\epsilon_p = 4$ pixels. In the obtained results, the moving object was correctly segmented; however, the dominant motion was divided into two motions. It is worth noting that even if there are no classification errors on moving objects, the set of dominant motion features might be, in some cases, over-segmented.
Figure 3 displays the highest reprojection errors of the baseline method in Scene 1 for the dominant motion and the moving object, with values of 2.8 pixels and 1.8 pixels, respectively.
In Scene 2, a vehicle is observed moving in reverse and turning. The parameter values employed in this sequence were $\epsilon_k = 0.25$ pixels and $\epsilon_p = 3$ pixels. In Figure 4, three motion groups were detected: the dominant motion, the moving object and a group of 11 feature points. The observed over-segmentation can be handled by fine tuning the $\epsilon_p$ threshold.
Table 3 summarizes the results obtained for Scenes 1 and 2 and includes the performances reported in the state of the art [7]. In Scene 1, the baseline method achieves a motion segmentation error of 0% with mean and median reprojection errors lower than the ones reported in [7]. In Scene 2, the segmentation error was higher, at 3.3%, while the mean and median reprojection errors were again lower than those of [7]. These results let us assume that our implementation is reliable enough for a fair comparison.

5.2. Experimental Evaluation of TbD-SfM

The baseline and TbD-SfM methods were tested and compared. Hereafter, a first set of experiments using the Hopkins 155 traffic dataset is reported. It is recalled that TbD-SfM uses the results provided by the baseline method in the first frame as initial knowledge of the scene (rough segmentation). TbD-SfM is able to detect and segment the moving objects present in the scene as well as new objects that may enter or leave it.
Figure 5 presents a scene composed of two simultaneous motions, the Car2 sequence (named Scene 3). The baseline method was parametrized considering 200 scene motion segmentation hypotheses per frame along the sequence. Thirty frames were processed using 26 sliding windows; each frame includes 490 feature points. The best precision and recall values were obtained with $\epsilon_k = 0.5$ pixels and $\epsilon_p = 1$ pixel, as reported in Table 4.
Figure 5 shows the segmentation and the reprojection error obtained with the baseline method in the first frame of the scene. The moving object was segmented correctly; however, the dominant motion was over-segmented: a third group, in blue, was created with few feature points. The right image shows the reprojection error in the first frame.
Figure 6 shows the number of segmented motions reported by the baseline method along the sequence. Since the scene is only composed of two independent motions, results with more than two motions are over-segmented and results with fewer than two are under-segmented. The low recall value of 0.66 (see Table 4) is caused by the incorrect segmentation in frames 10, 11, 20 and 23. This is probably due to the fact that the observed vehicle slows down. Decreasing the value of $\epsilon_p$ may help to segment small inter-frame motions but can also lead to over-segmented scenes.
Figure 7 plots the mean reprojection error of the motions detected by the baseline method in Scene 3: the moving object motion is shown as a green dotted line and the dominant (ego) motion as a red dotted line. Since the moving object was missed and its feature points were assigned to the dominant motion set in frames 10, 11, 20 and 23, no reprojection error was computed for these frames. The highest reprojection error was 1.9 pixels in frame 21 for the dominant motion and 1.6 pixels in frame 4 for the moving object motion. Despite these reprojection errors, the motions were segmented correctly.
The TbD-SfM was parametrized assuming an outlier ratio of 30% and threshold values of $\epsilon_k = 0.5$ pixels and $\epsilon_p = 3$ pixels. The threshold values were selected following the precision and recall scores computed for the first frame of Scene 3 and reported in Table 5. A good feature point classification was obtained and no over-segmented areas were observed in the frame.
For the complete Scene 3 sequence, the two motions were segmented correctly using TbD-SfM. The highest mean reprojection error was 1.35 pixels for the dominant motion and 0.8 pixels for the moving object, as shown in Figure 8.
The ratio of outliers per frame is illustrated in Figure 9. The highest value corresponds to the first frame estimation. In the following frames, the ratio of outliers with the TbD-SfM approach was less than 1%.
A Monte Carlo experiment was carried out in order to evaluate the repeatability and stability of the TbD-SfM results. To this end, scene segmentation was performed over 100 repetitions. The highest reprojection error was limited by the threshold $\epsilon_p = 3$ pixels. The boxplot in Figure 10 shows that frames 13, 14, 15, 18, 21 and 22 reached the range allowed by $\epsilon_p$. For the other frames, the maximum boxplot value of the mean reprojection error remained below the $\epsilon_p$ threshold.
The highest percentage of outliers observed along Scene 3 is less than 2%, as shown in Figure 11. At least 98% of the feature points per frame were correctly classified and not rejected as outliers.
Figure 12 shows the motion segmentation for the first frame of the Hopkins 155 Car9 sequence, named Scene 4. The scene is composed of three simultaneous independent motions: the dominant motion (static objects in red) and two moving objects (green and blue). This sequence is a challenging use case since the observed objects move at slow speed. Twenty-four frames were processed with 220 feature points per frame. The baseline method was set to consider 300 scene motion segmentation hypotheses per frame. The results for the sequence are quantified in Table 6; they were obtained with threshold values $\epsilon_k = 0.25$ pixels and $\epsilon_p = 2.5$ pixels.
Despite the fact that the precision and recall scores in Table 6 are high, motion segmentation errors are still present along the sequence. This is the case for frames 4, 12 and 14 to 20, where the baseline method over-segments motions, and for frame 8, where one motion is missed (Figure 13). Figure 13b illustrates, as an example, the segmentation result for frame 12.
Figure 14 illustrates the evolution of the mean reprojection error in Scene 4; the highest value, 1.2 pixels, was obtained in the 13th frame for the 2nd observed motion. The 8th frame shows that the 1st observed motion was not detected.
The TbD-SfM was tested with the same values, $\epsilon_k = 0.25$ pixels and $\epsilon_p = 2.5$ pixels, and a RANSAC outlier ratio of 30%. The three motions were segmented correctly. Figure 15b shows the mean reprojection error, with a highest error of 1.45 pixels for the dominant motion. The highest reprojection errors for the moving objects were less than 0.55 pixels.
The highest percentage of outliers was obtained in frame 15, as illustrated in Figure 16. The highest reprojection error of the dominant motion was also obtained in this frame. In this case, the selected hypothesis increases the reprojection error of the feature points and some of them were rejected. A high percentage of outliers comes from the dominant motion even when its reprojection error is less than 1.5 pixels. The opposite situation occurred in frames 2, 3, 4 and 5, where all the feature points were segmented correctly.
The results of the Monte Carlo experiment with TbD-SfM in Scene 4 are shown in Figure 17. The highest reprojection error was limited by the threshold $\epsilon_p = 2.5$ pixels. In this scene composed of three observed motions, frames 3 to 8 show a maximum boxplot value of the mean reprojection error below 1 pixel. After frame 10, the upper whisker is larger because the moving objects are getting closer to the camera.
Figure 18 illustrates the boxplot results of the Monte Carlo experiment in Scene 4 in terms of outliers. Up to frame 14, the maximum percentage of outliers obtained was 3.1%. Frame 19 shows a maximum boxplot value of 5.5% and the highest percentage of outliers, 12.2%. Except for this frame, the maximum boxplot value of the percentage of outliers is less than 4%.
Table 7 summarizes the evaluation results of the Monte Carlo experiments in Scene 3 and Scene 4 using the TbD-SfM method. In Scene 3, TbD-SfM achieved a mean reprojection error of 1.25 pixels, a segmentation error of 0.01% and a mean outlier percentage of 0.8%. In Scene 4, it obtained a mean reprojection error of 0.84 pixels, a segmentation error of 0.19% and a mean outlier percentage of 3.1%.
Scene 1 (Figure 2) from the KITTI dataset was processed with the baseline algorithm and TbD-SfM. A sequence of 20 frames with an average of 185 feature points per frame was processed. The baseline method was used to create 200 scene motion segmentation hypotheses per frame with the values $\epsilon_k = 0.875$ pixels and $\epsilon_p = 3$ pixels. Figure 19 illustrates the mean reprojection error for the two segmented motions; the highest value was 3.6 pixels for the moving object in the first frame.
TbD-SfM was set to assume an outlier ratio of 35%. The highest mean reprojection error was 4 pixels, obtained in the 4th frame for the dominant motion, as shown in Figure 20b. One can notice that the reprojection errors on the KITTI dataset are higher than the ones achieved on Hopkins. Since Hopkins provides error-free feature tracking, reprojection errors are greatly improved. The KITTI experiments show the robustness of the proposed method to feature tracking errors and their impact in terms of reprojection error.
The segmentation results along the sequence are presented in Figure 20a. The feature points located on the side-view mirror of the vehicle were not segmented correctly. Since these points are observed in some frames outside of the predicted area, they were segmented into another group or classified as outliers. A segmentation error of 1.4% was obtained along the sequence. The results are detailed in Table 8.
TbD-SfM efficiently addresses the scalability problem exhibited by the baseline method when the number of simultaneous motions increases. In Scene 5, the scalability of TbD-SfM was tested in a scenario with four simultaneous motions, as shown in Figure 21. There are two moving objects approaching the camera with different speeds and a third one moving along with the moving camera. Eight frames were processed with 4 sliding windows; an average of 1450 feature points is observed per frame. The first frame segmentation was obtained with the baseline method considering 400 scene motion segmentation hypotheses with the parameters $\epsilon_k = 0.25$ pixels and $\epsilon_p = 3$ pixels. In the first frame, some segmentation errors were observed: some feature points of moving object 1 (green) were assigned to moving object 2 (blue). However, the TbD-SfM procedure allowed these errors to be corrected and enhanced the segmentation, as shown in frame 4. The outlier feature points are shown in cyan.
In Scene 6, TbD-SfM was evaluated on a sequence with particular characteristics: the moving camera is turning right, objects enter or leave the scene and some of them are partially occluded. This scene allows testing the detection and segmentation of new moving objects. Six frames were processed with an average of 670 feature points per frame. The moving objects are represented by the green and blue feature points. The parameter values employed in this sequence were $\epsilon_k = 0.75$ pixels and $\epsilon_p = 3.5$ pixels, and an outlier ratio of 45% was assumed. Figure 22 illustrates the results obtained per frame with the TbD-SfM approach. In the first frame, three simultaneous motions were detected. The green moving object is partially occluded by the ego-motion feature points located on the traffic light post. In the third frame, a small group of feature points on the traffic light post was segmented as another moving object; however, this group is not detected in the following frames. The outlier feature points are represented in cyan; these points on the gray car were not associated because they do not meet the reprojection error criterion ($\epsilon_{hyp}$). Some feature points of a new object (white car) were segmented with the dominant motion. In the 5th frame, the white car was detected as a new moving object for the first time, with some segmentation errors. In the 6th frame, the new moving object is detected with a better segmentation. The evaluated results are reported in Table 8.
It is worth mentioning that the performance and execution time of the TbD-SfM were also evaluated in a long sequence context. In Scene 7, 130 frames were processed involving a moving ego-camera, two cars passing from back to front and a third car approaching. As an example, Figure 23a shows the 6th frame, where the first moving object was segmented. A second car was then detected and segmented, as shown in Figure 23b. Figure 23c shows the second car, marked in blue, occluding the first detected moving object; on the left side, a van approaching the ego-camera was segmented and marked in yellow. Finally, Figure 23d illustrates the 120th frame, where the object was segmented while moving away. Performance results are reported in Table 8.
Figure 24 details the execution time per frame along the sequence of Scene 7. The results show that before the 50th frame, the run time is higher due to a greater amount of dynamic feature points; afterwards, the run time decreases. It is worth noting that the detection of new motions requires more processing time due to the feature resampling task. This can be observed as run-time peaks in frames 14, 22, 34, 44, 57 and 74. This processing time can be greatly reduced by parallelizing or pipelining the feature resampling and motion tracking threads.
TbD-SfM was tested on the car sequences of the Hopkins dataset to allow comparison with other methods. The dataset has 8 scenes with two simultaneous motions and 3 scenes with three simultaneous motions. The algorithm was run once per sequence and the results are reported in Table 9. The highest mean reprojection error was 1.25 pixels for the Car2 sequence, and the highest segmentation error and outlier percentage were 0.2% and 6.1%, respectively, for the Truck2 sequence.
Table 10 shows a benchmark comparison of the car sequence results using TbD-SfM (Table 9) and other state-of-the-art methods [38,39]. The results presented in Table 10 show that TbD-SfM achieves a lower segmentation error in scenes with two and three simultaneous motions in comparison to the methods presented in [18,20,21,22,24,26,27,28,29,30,31,32]. TbD-SfM obtains a segmentation error of 0.07% for sequences involving three simultaneous motions. This error is higher in comparison to HSIT [23], which reaches a perfect segmentation. In contrast, for sequences with two simultaneous motions, the segmentation error of TbD-SfM is 0.02% compared to 1.65% for HSIT, i.e., about 82 times lower. TbD-SfM has a performance similar to that of the DCT [19]. The DCT segmentation error was 0.05% considering all the sequences of the dataset, while the TbD-SfM segmentation error was lower by 0.03% on the two-motion sequences and higher by 0.02% on the three-motion sequences. Comparing TbD-SfM to the baseline method, the segmentation error is higher by 0.02% in sequences with two simultaneous motions and lower by 0.04% in sequences with three simultaneous motions. In particular, TbD-SfM obtains a greater number of correctly segmented feature points than the baseline method, as shown by the outlier percentages in Table 9. It is worth noting that TbD-SfM achieves a denser feature segmentation than the baseline approach. This is because the baseline approach performs an optimization step intended to enhance motion segmentation by rejecting feature points with a high reprojection error. This procedure can certainly improve the motion estimates, but it also reduces the number of feature points that represent a motion. Objects with few features may easily be lost or missed.
The results show that our algorithm outperforms the RANSAC formulation proposed in [31]. The reprojection error obtained with the TbD-SfM algorithm could be further reduced by applying an optimization method on top of the RANSAC formulation, as described in [40,41].
The reported experiments were obtained with Matlab implementations on a laptop with an i7 2.6 GHz processor and 16 GB of RAM. The average running times of the baseline method for two, three and four simultaneous motions were 85.2 s, 259 s and 6360 s, respectively. For the TbD-SfM method, the execution time decreases on average to 3.5 s, 3.9 s and 78.3 s for two, three and four simultaneous motions, respectively.

6. Conclusions

This paper proposed an efficient TbD-SfM framework able to infer independent motions (exo-motions) and the ego-camera trajectory under a 6-DOF motion model. Compared to complex existing motion segmentation approaches, the proposed methodology represents a reliable vision-only alternative for sensor-based dynamic scene analysis and VSLAM applications. The implementation of the TbD strategy within SfM allows us to drastically decrease the number of trial hypotheses required for scene motion segmentation without the use of kinematic constraints. Thanks to this, our method is scalable and its advantages were thoroughly demonstrated in scenes with more than two simultaneous motions. TbD-SfM constitutes a feasible motion segmentation algorithm for monocular vision systems with a bounded complexity, intended for an embedded system implementation. A Hardware-Software co-design approach remains an issue to be addressed and constitutes a perspective of this work in order to achieve real-time performance. To this end, a further study of an embedded HPS (Hardware Processing System) based on a GPU or an FPGA architecture will be carried out in order to design a sensor implementing high-level on-chip pre-processing.

Author Contributions

All the authors contributed equally to this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elfring, J.; Appeldoorn, R.; van den Dries, S.; Kwakkernaat, M. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving. Sensors 2016, 16, 1668. [Google Scholar] [CrossRef]
  2. Florez, S.A.R.; Frémont, V.; Bonnifait, P.; Cherfaoui, V. Multi-modal object detection and localization for high integrity driving assistance. Mach. Vis. Appl. 2014, 25, 583–598. [Google Scholar] [CrossRef]
  3. Li, Q.; Zhou, J.; Li, B.; Guo, Y.; Xiao, J. Robust Lane-Detection Method for Low-Speed Environments. Sensors 2018, 18, 4274. [Google Scholar] [CrossRef]
  4. Ibarra-Arenado, M.; Tjahjadi, T.; Pérez-Oria, J.; Robla-Gómez, S.; Jiménez-Avello, A. Shadow-Based Vehicle Detection in Urban Traffic. Sensors 2017, 17, 975. [Google Scholar] [CrossRef] [PubMed]
  5. Zhao, D.; Fu, H.; Xiao, L.; Wu, T.; Dai, B. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle. Sensors 2018, 18, 2004. [Google Scholar] [CrossRef] [PubMed]
  6. Aladem, M.; Rawashdeh, S.A. Lightweight Visual Odometry for Autonomous Mobile Robots. Sensors 2018, 18, 2837. [Google Scholar] [CrossRef]
  7. Sabzevari, R.; Scaramuzza, D. Monocular simultaneous multi-body motion segmentation and reconstruction from perspective views. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 23–30. [Google Scholar]
  8. Costeira, J.P.; Kanade, T. A Multibody Factorization Method for Independently Moving Objects. Int. J. Comput. Vis. 1998, 29, 159–179. [Google Scholar] [CrossRef]
  9. Zappella, L.; Lladó, X.; Salvi, J. Motion Segmentation: A Review. In Proceedings of the 11th International Conference of the Catalan Association for Artificial Intelligence, Sant Martí d’Empúries, Spain, 22–24 October 2008; pp. 398–407. [Google Scholar]
  10. Tomasi, C.; Kanade, T. Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vis. 1992, 9, 137–154. [Google Scholar] [CrossRef]
  11. Vidal, R.; Ma, Y.; Soatto, S.; Sastry, S. Two-View Multibody Structure from Motion. Int. J. Comput. Vis. 2006, 68, 7–25. [Google Scholar] [CrossRef]
  12. Goh, A.; Vidal, R. Segmenting Motions of Different Types by Unsupervised Manifold Clustering. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–6. [Google Scholar]
  13. Vidal, R.; Hartley, R. Three-View Multibody Structure from Motion. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 214–227. [Google Scholar] [CrossRef]
  14. Li, T.; Kallem, V.; Singaraju, D.; Vidal, R. Projective Factorization of Multiple Rigid-Body Motions. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–6. [Google Scholar]
  15. Ozden, K.E.; Schindler, K.; Gool, L.J.V. Multibody Structure-from-Motion in Practice. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1134–1141. [Google Scholar] [CrossRef] [PubMed]
  16. Rao, S.R.; Tron, R.; Vidal, R.; Ma, Y. Motion Segmentation in the Presence of Outlying, Incomplete, or Corrupted Trajectories. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1832–1845. [Google Scholar] [CrossRef] [PubMed]
  17. Zappella, L.; Del Bue, A.; Lladó, X.; Salvi, J. Simultaneous Motion Segmentation and Structure from Motion. In Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV), Kona, HI, USA, 5–7 January 2011; pp. 679–684. [Google Scholar]
  18. Dragon, R.; Rosenhahn, B.; Ostermann, J. Multi-Scale Clustering of Frame-to-Frame Correspondences for Motion Segmentation. In Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 445–458. [Google Scholar]
  19. Shi, F.; Zhou, Z.; Xiao, J.; Wu, W. Robust Trajectory Clustering for Motion Segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 3088–3095. [Google Scholar]
  20. Jung, H.; Ju, J.; Kim, J. Rigid Motion Segmentation Using Randomized Voting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1210–1217. [Google Scholar]
  21. Li, B.; Zhang, Y.; Lin, Z.; Lu, H. Subspace clustering by Mixture of Gaussian Regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 2094–2102. [Google Scholar]
  22. Tourani, S.; Krishna, K.M. Using In-frame Shear Constraints for Monocular Motion Segmentation of Rigid Bodies. J. Intell. Robot. Syst. 2016, 82, 237–255. [Google Scholar] [CrossRef]
  23. Sako, Y.; Sugaya, Y. Multibody motion segmentation for an arbitrary number of independent motions. IPSJ Trans. Comput. Vis. Appl. 2016, 8, 1. [Google Scholar] [CrossRef]
  24. Zhu, Y.; Elgammal, A. A Multilayer-Based Framework for Online Background Subtraction with Freely Moving Cameras. arXiv, 2017; arXiv:1709.01140. [Google Scholar]
  25. Sabzevari, R.; Scaramuzza, D. Multi-body Motion Estimation from Monocular Vehicle-Mounted Cameras. IEEE Trans. Robot. 2016, 32, 638–651. [Google Scholar] [CrossRef]
  26. Zhang, T.; Szlam, A.; Wang, Y.; Lerman, G. Hybrid Linear Modeling via Local Best-Fit Flats. Int. J. Comput. Vis. 2012, 100, 217–240. [Google Scholar] [CrossRef]
  27. Elhamifar, E.; Vidal, R. Clustering disjoint subspaces via sparse representation. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 1926–1929. [Google Scholar]
28. Ma, Y.; Yang, A.Y.; Derksen, H.; Fossum, R. Estimation of Subspace Arrangements with Applications in Modeling and Segmenting Mixed Data. SIAM Rev. 2008, 50, 413–458. [Google Scholar] [CrossRef]
  29. Ma, Y.; Derksen, H.; Hong, W.; Wright, J. Segmentation of Multivariate Mixed Data via Lossy Data Coding and Compression. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1546–1562. [Google Scholar] [CrossRef]
  30. Yan, J.; Pollefeys, M. A General Framework for Motion Segmentation: Independent, Articulated, Rigid, Non-rigid, Degenerate and Non-degenerate. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 94–106. [Google Scholar]
  31. Yang, A.Y.; Rao, S.R.; Ma, Y. Robust Statistical Estimation and Segmentation of Multiple Subspaces. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop, New York, NY, USA, 17–22 June 2006; p. 99. [Google Scholar]
  32. Sugaya, Y.; Kanatani, K. Geometric Structure of Degeneracy for Multi-body Motion Segmentation. In Proceedings of the International Workshop on Statistical Methods in Video Processing, Prague, Czech Republic, 16 May 2004; pp. 13–25. [Google Scholar]
  33. Zuliani, M.; Kenney, C.S.; Manjunath, B.S. The multiransac algorithm and its application to detect planar homographies. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 14 September 2005. [Google Scholar]
  34. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  35. Fremont, V.; Rodriguez Florez, S.A.; Wang, B. Mono-Vision based Moving Object Detection in Complex Traffic Scenes. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017. [Google Scholar]
  36. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets Robotics: The KITTI Dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  37. Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3D Reconstruction in Real-time. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011. [Google Scholar]
38. Tron, R.; Vidal, R. A Benchmark for the Comparison of 3-D Motion Segmentation Algorithms. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  39. Vidal, R. Subspace Clustering. IEEE Signal Process. Mag. 2011, 28, 52–68. [Google Scholar] [CrossRef]
  40. Lebeda, K.; Matas, J.; Chum, O. Fixing the Locally Optimized RANSAC—Full experimental evaluation. In Proceedings of the British Machine Vision Conference, Guildford, UK, 3–7 September 2012; pp. 1–11. [Google Scholar]
  41. Chum, O.; Matas, J.; Kittler, J. Locally Optimized RANSAC. In Proceedings of the Joint Pattern Recognition Symposium, Magdeburg, Germany, 10–12 September 2003; Volume 2781, pp. 236–243. [Google Scholar]
Figure 1. Motion segmentation with tracking objects.
Figure 2. Results of Scene 1: (a) Motion trajectories; (b) Oversegmentation example.
Figure 3. Mean reprojection error by frame for Scene 1.
Figure 4. Results of Scene 2: (a) Trajectories of the vehicle; (b) Reprojection error.
Figure 5. Baseline method results for Scene 3: (a) First frame segmentation; (b) Reprojection error.
Figure 6. Number of motions by frame with baseline method.
Figure 7. Mean reprojection error of detected motions for Scene 3 with baseline method.
Figure 8. TbD-SfM results for Scene 3: (a) Motions segmented; (b) Mean reprojection error evolution.
Figure 9. Ratio of outliers using TbD-SfM method for Scene 3.
Figure 10. Mean reprojection error for Scene 3: (a) Dominant motion; (b) Dynamic object.
Figure 11. Outliers percentage in Monte-Carlo experiment for Scene 3 using TbD-SfM.
Figure 12. First frame segmentation for Scene 4.
Figure 13. Baseline method results for Scene 4: (a) Number of motions by frame; (b) Motion segmentation.
Figure 14. Mean reprojection error for Scene 4 with baseline method.
Figure 15. TbD-SfM results for Scene 4: (a) Motions segmented; (b) Mean reprojection error.
Figure 16. Ratio of outliers with TbD-SfM method for Scene 4.
Figure 17. Mean reprojection error results of Monte-Carlo test: (a) Dominant motion; (b) Motion 1; (c) Motion 2.
Figure 18. Outliers percentage in Monte-Carlo experiment for Scene 4 using TbD-SfM.
Figure 19. Mean reprojection error in Scene 1 with baseline method.
Figure 20. TbD-SfM results for Scene 1: (a) Motion segmentation; (b) Mean reprojection error.
Figure 21. TbD-SfM results for Scene 5: (a) 1st frame segmentation; (b) 4th frame segmentation.
Figure 22. TbD-SfM results for Scene 6: (a) Frame 1; (b) Frame 3; (c) Frame 5; (d) Frame 6.
Figure 23. TbD-SfM results for Scene 7. The segmented motions are indicated by colors and markers: red points represent the ego-motion, while the 1st, 2nd and 3rd motions are represented by green plus signs, blue asterisks and yellow crosses, respectively.
Figure 24. Execution time along the sequence for Scene 7.
Table 1. Confusion matrix.

Predicted classification \ Actual classification | Yes | No
Yes | True Positives (TP) | False Positives (FP)
No | False Negatives (FN) | True Negatives (TN)
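The precision and recall scores reported in Tables 2 and 4–6 follow directly from the counts of Table 1. As a minimal illustration only (not the authors' evaluation code), the sketch below computes both scores from hypothetical TP/FP/FN counts; the numbers in the example are placeholders chosen for readability.

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

# Illustrative counts (not from the paper): 83 dynamic points correctly
# segmented, 17 static points wrongly labelled dynamic, no missed points.
p, r = precision_recall(tp=83, fp=17, fn=0)
print(f"P = {p:.2f}, R = {r:.2f}")   # P = 0.83, R = 1.00
```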
Table 2. Precision (P) vs. Recall (R) of the baseline method for Scene 1.

ϵ_p \ ϵ_k | 0.3 (P, R) | 0.4 (P, R) | 0.5 (P, R)
1 | 0.83, 1 | 1, 1 | 1, 0.46
2 | 1, 1 | 1, 1 | 1, 1
3 | 1, 1 | 1, 1 | 1, 1
4 | 1, 1 | 1, 1 | 1, 1
5 | 0.94, 1 | 1, 1 | 1, 1
Table 3. Reprojection and segmentation errors obtained for Scene 1 and Scene 2.

Method | Sequence | Number of Frames | Number of Points | Mean Reprojection Error (pixels) | Median Reprojection Error (pixels) | Segmentation Error (%)
Reported in [7] | Scene 1 | 5 | 193 | 1.63 | 1.43 | 0
Baseline | Scene 1 | 5 | 218 | 1.54 | 1.18 | 0
Reported in [7] | Scene 2 | 5 | 573 | 2.14 | 1.67 | 1.57
Baseline | Scene 2 | 5 | 477 | 1.8 | 1.28 | 3.35
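Tables 3 and 7–9 report mean and median reprojection errors in pixels. As a rough sketch of how such a metric is commonly computed (the paper's own implementation is not reproduced here, and all variable names below are assumptions), the function projects the estimated 3D points through a pinhole camera with intrinsics K under an estimated motion (R, t) and measures the pixel distance to the tracked feature positions.

```python
import numpy as np

def reprojection_errors(K, R, t, points_3d, observed_px):
    """Per-point pixel distance between tracked features and the
    reprojection of the estimated 3D points under the motion (R, t).

    K           : (3, 3) camera intrinsic matrix
    R, t        : (3, 3) rotation and (3,) translation of the estimated motion
    points_3d   : (N, 3) triangulated points (up to scale)
    observed_px : (N, 2) tracked feature positions in the image
    """
    cam = (R @ points_3d.T).T + t        # points expressed in the camera frame
    proj = (K @ cam.T).T                 # homogeneous image coordinates
    proj = proj[:, :2] / proj[:, 2:3]    # perspective division
    return np.linalg.norm(proj - observed_px, axis=1)

# Hypothetical usage: mean / median error in pixels over one frame.
# errors = reprojection_errors(K, R, t, X, x)
# print(errors.mean(), np.median(errors))
```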
Table 4. Precision and Recall values for the sequence of Scene 3.

ϵ_p \ ϵ_k | 0.25 (P, R) | 0.5 (P, R) | 0.75 (P, R) | 1 (P, R)
0.75 | 0.88, 0.58 | 0.74, 0.63 | 0.79, 0.69 | 0.94, 0.58
1 | 0.80, 0.72 | 0.94, 0.66 | 0.51, 0.68 | 0.7, 0.78
1.5 | 0.37, 0.77 | 0.4, 0.8 | 0.53, 0.7 | 0.4, 0.7
2 | 0.35, 0.78 | 0.27, 0.82 | 0.34, 0.75 | 0.43, 0.67
3 | 0.17, 0.90 | 0.16, 0.85 | 0.18, 0.86 | 0.18, 0.82
Table 5. Precision and Recall scores for threshold selection in the 1st frame of Scene 3.

ϵ_p \ ϵ_k | 0.25 (P, R) | 0.5 (P, R) | 0.75 (P, R) | 1 (P, R) | 1.25 (P, R)
2 | 1, 0.93 | 1, 0.93 | 1, 0.5 | 1, 0.93 | 1, 0.75
3 | 1, 0.93 | 1, 0.93 | 1, 0.88 | 1, 0.83 | 1, 0.92
4 | 1, 0.77 | 1, 0.89 | 1, 0.93 | 0.98, 0.89 | 1, 0.93
Table 6. Precision and Recall scores in Scene 4 using the baseline method.

ϵ_p \ ϵ_k | 0.125 (P, R) | 0.25 (P, R) | 0.375 (P, R) | 0.5 (P, R)
0.5 | 0.99, 0.56 | 0.99, 0.65 | 0.95, 0.73 | 0.94, 0.8
1 | 0.99, 0.88 | 0.99, 0.83 | 0.89, 0.79 | 0.99, 0.81
1.5 | 0.98, 0.91 | 0.94, 0.87 | 0.91, 0.82 | 0.93, 0.92
2 | 0.86, 0.93 | 0.99, 0.95 | 1, 0.93 | 0.91, 0.87
2.5 | 1, 0.92 | 1, 0.96 | 1, 0.94 | 1, 0.88
3 | 1, 0.82 | 0.74, 0.83 | 0.8, 0.79 | 0.68, 0.88
Table 7. Results with the TbD-SfM method in the Monte-Carlo experiment for Scene 3 and Scene 4.

Sequence | Number of Frames | Number of Points | Mean Reprojection Error (pixels) | Median Reprojection Error (pixels) | Segmentation Error (%) | Mean Outliers Percentage (%)
Car2 | 26 | 490 | 1.25 | 0.94 | 0.015 | 0.8
Car9 | 20 | 220 | 0.84 | 0.58 | 0.19 | 3.1
Table 8. Results reported for the KITTI datasets.

Sequence | Method | Number of Motions | Number of Frames | Number of Points | Mean Reprojection Error (pixels) | Median Reprojection Error (pixels) | Segmentation Error (%) | Mean Outliers Percentage (%)
Scene 1 | Baseline | 2 | 18 | 185 | 1.98 | 2.04 | 2.16 | 7.2
Scene 1 | TbD-SfM | 2 | 26 | 185 | 1.53 | 1.7 | 1.45 | 5.71
Scene 5 | TbD-SfM | 4 | 4 | 1450 | 1.22 | 1.15 | 0.24 | 3.22
Scene 6 | TbD-SfM | 2 and 3 | 6 | 670 | 1.37 | 1.24 | 1.5 | 1.5
Scene 7 | TbD-SfM | 3 and 4 | 130 | 1410 | 5.32 | 5.76 | 1.45 | 13.3
Table 9. TbD-SfM results for Hopkins dataset car sequences.

Sequence | Number of Motions | Number of Frames | Number of Points Per Frame | Mean Reprojection Error (pixels) | Median Reprojection Error (pixels) | Segmentation Error (%) | Mean Outliers Percentage (%)
Car1 | 2 | 16 | 307 | 1.10 | 0.96 | 0 | 1.09
Car2 | 2 | 26 | 490 | 1.25 | 0.93 | 0 | 0.73
Car3 | 3 | 13 | 548 | 0.97 | 0.79 | 0.07 | 3.85
Car4 | 2 | 50 | 147 | 0.78 | 0.52 | 0 | 2.3
Car5 | 3 | 30 | 391 | 0.47 | 0.29 | 0 | 0.1
Car6 | 2 | 27 | 464 | 0.44 | 0.35 | 0.03 | 0.1
Car7 | 2 | 21 | 502 | 0.88 | 0.75 | 0 | 0.1
Car8 | 2 | 21 | 192 | 0.74 | 0.58 | 0 | 0.37
Car9 | 3 | 20 | 220 | 0.65 | 0.47 | 0.15 | 1.75
Truck1 | 2 | 26 | 188 | 1 | 0.82 | 0 | 0.16
Truck2 | 2 | 18 | 331 | 1.07 | 0.94 | 0.2 | 6.1
Table 10. TbD-SfM results compared with other methods for Hopkins dataset car sequences.

Method | Reprojection Error (pixels) | Mean Segmentation Error for 2 Motions (%) | Median Segmentation Error for 2 Motions (%) | Mean Segmentation Error for 3 Motions (%) | Median Segmentation Error for 3 Motions (%)
Our TbD-SfM | 0.85 | 0.02 | 0 | 0.07 | 0.07
Baseline [25] | 0.091 | 0 | 0 | 0.11 | 0.24
MLBS [24] | - | 8.86 | - | 25.1 | -
HSIT [23] | - | 1.65 | - | 0 | -
IfSC [22] | - | 1.25 | - | 3.97 | -
MoGR [21] | - | 1.24 | - | 4.97 | -
RV [20] | - | 0.44 | - | 1.88 | -
DCT [19] | - | 0.05 | 0 | 0.05 | 0
MSMC [18] | - | 0.66 | - | 0.17 | -
SLBF [26] | - | 0.2 | 0 | 0.38 | 0
SSC [27] | - | 1.2 | 0.32 | 0.52 | 0.28
GPCA [28] | - | 1.41 | 0 | 19.83 | 19.55
ALC [29] | - | 2.83 | 0.3 | 4.01 | 1.35
LLMC [12] | - | 2.13 | 0 | 5.62 | 0
LSA [30] | - | 5.43 | 1.48 | 25.07 | 23.79
RANSAC [31] | - | 2.55 | 0.21 | 12.83 | 11.45
MSL [32] | - | 2.23 | 0 | 1.8 | 0
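The segmentation error and outlier percentages in Tables 7–10 can be read as ratios over the tracked feature set. The following is a minimal sketch of how such ratios are typically obtained, assuming ground-truth motion labels are available and already aligned with the predicted labels; the helper names are hypothetical and do not reproduce the authors' evaluation code.

```python
import numpy as np

def segmentation_error(predicted_labels, ground_truth_labels):
    """Percentage of feature points assigned to the wrong motion,
    assuming predicted labels are already aligned with the ground truth."""
    predicted = np.asarray(predicted_labels)
    truth = np.asarray(ground_truth_labels)
    return 100.0 * np.mean(predicted != truth)

def outlier_percentage(num_rejected, num_tracked):
    """Share of tracked points discarded as outliers (e.g., by RANSAC)."""
    return 100.0 * num_rejected / num_tracked

# Illustrative example (placeholder data): 2 of 200 points mislabelled -> 1.0 %
labels_pred = np.zeros(200)
labels_pred[:2] = 1
labels_gt = np.zeros(200)
print(segmentation_error(labels_pred, labels_gt))   # 1.0
```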
