Localization of Stereovision for Measuring In-Crash Toeboard Deformation

This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement. The proposed technique uses a sensor suite designed to house not only the stereo camera but also inertial measurement units (IMUs) and a camera dedicated to localization. The pose of the stereo camera is recursively estimated from the measurements of the IMUs and the localization camera through an extended Kalman filter. The performance of the proposed approach was first investigated in a stepwise manner and then tested in controlled environments, including an actual vehicle crash test, in which the toeboard deformation during a crash was successfully measured. With the oscillatory motion during the crash captured, the toeboard deformation measured by the stereo camera can be described in a fixed coordinate system.


Introduction
The incidence of vehicle crashes keeps increasing with vehicle production, and more than six million crashes are reported every year in the United States, about thirty percent of which involve fatalities or injuries [1]. A vehicle crash is a severe and complicated dynamic phenomenon because of the complex physical interaction between the structures involved in the impact. It is indispensable that vehicles be crash-tested in indoor facilities and that the test results be used to design vehicles with improved crashworthiness [2][3][4][5][6]. In crash tests, one focus is the toeboard deformation [7,8], owing to the strong correlation between toeboard intrusion and lower extremity injuries, as seen both in collision statistics [9,10] and in crash simulation [11]. A detailed analysis of toeboard deformation during a crash test can disclose useful intricacies about the otherwise unknown effects of the crash phenomenon.
In general, past work on vehicle deformation measurement can be classified into two approaches. In the first approach, finite element analysis (FEA) and other computational mechanics analyses have been used to predict, instead of measure, the deformation. The advantage of numerical analysis is its ability to simulate all types of crash tests [12][13][14][15]. Hickey and Xiao [16] used the three-dimensional (3D) model of a commercial vehicle and performed FEA to examine the effects of the deformation during a car crash test. Employing the full-scale 3D model directly for crash analysis and subsequent computational design requires large computational loads. Yang et al. [17] utilized the response surface method for a complex and dynamic large-deformation problem to accelerate the analysis and design process. Pre- and post-crash measurements from actual vehicle crash tests were used by Cheng et al. [18] and McClenathan et al. [19] to provide boundary conditions for FEA. Zhang et al. [20] reported the reconstruction of the deformation of a vehicle in a crash accident using high-performance parallel computing to incorporate an advanced elastic-plastic model. While FEA has grown ever more important, its ability to estimate detailed deformations such as toeboard deformation is significantly limited because of various computational errors and the lack of actual measurements.
The second approach is based on direct measurement. In this approach, vision-based measurement has become most popular because of the ability of the camera to capture information over a continuous field [21][22][23][24][25]. Digital image correlation (DIC), originally proposed by Sutton et al. [26], projects an electronic speckle pattern onto the specimen and derives the deformation of the specimen from the displacement of the imaged speckle pattern; it has been widely employed to measure small deformations such as structural strain [27], crack identification [28,29], or vibration of structures [30,31] in static environments. Schmidt et al. [32] employed a stereo vision-based technique to measure the time-varying deformation of specimens in gas gun impact tests. As an alternative to DIC, Iliopoulos and Michopoulos [33] developed the mesh-free random grid technique to measure deformation using marked dots, which was later referred to as Dot Centroid Tracking (DCT) in comparison with DIC [34]. These techniques observe the deformation of a surface fixed to a static base from a static viewpoint. In general, while deformation measurement is well studied, the existing techniques cannot be applied to toeboard deformation measurement directly because of the oscillatory motion of the cameras. Since the line of sight to the toeboard is significantly limited, the cameras must be fixed near the toeboard to some part of the vehicle body, which deforms and oscillates during the crash. The measurement of the toeboard through a stereo camera is therefore subject to the relative motion of the camera itself.
This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement and a design to implement the technique in vehicle crash tests using recursive state estimation [35][36][37]. This state estimation technique further completes our previous work, which localized the sensor suite using only the camera pose obtained through a computer vision technique [7], as the excessive oscillation of the localization camera due to the non-rigidity of the sensor suite introduced extra errors into the measured toeboard deformation. In addition to the cameras for deformation measurement and localization, the proposed technique uses inertial measurement units (IMUs) for the localization. With all the sensors on a rigid structure, the proposed technique implements an extended Kalman filter (EKF) and estimates the pose of the stereo camera in the global coordinate frame using the observations of the downward camera and the IMUs. In-crash toeboard deformation can thus be measured by subtracting the estimated stereo camera pose from the observed toeboard deformation. A sensor suite comprising the stereo camera, the downward camera, and the IMUs is designed and placed on the seat mount such that the rigid body assumption is valid. The downward camera looks underneath the vehicle through a hole opened in the vehicle floor.
This paper is organized as follows. The next section defines the state estimation problem of concern. Section 3 presents the proposed EKF-based technique to localize the stereo camera for the in-crash toeboard deformation measurement. Section 4 introduces the hardware design for our approach. The experimental results are then presented in Section 5. Conclusions and ongoing work are summarized in Section 6.

Problem Formulation
Figure 1 illustrates the fundamental design of the sensor suite adopted in this paper and the major components that address the problem of localizing the stereo camera. The stereo camera is fixed to the camera fixture, which is assumed to be rigid, and located such that it sees the toeboard. Additionally fixed to the camera fixture are the downward camera and the IMUs adjacent to the cameras, and a checkerboard is taped to the floor to localize the downward camera. The IMUs each consist of a triaxial gyroscope and an accelerometer. While the stereo camera measures the toeboard deformation in its camera coordinate frame, the downward camera and the IMUs are the sensors available to localize the pose of the stereo camera. The stereovision localization problem is thus defined as identifying the stereo camera pose given the images of the downward camera and the readings of the IMUs.

Linear Acceleration of Sensor Suite
In accordance with rigid body dynamics, the acceleration a at the position of an IMU is related to that of the center of the sensor suite by

a = a_c + ω̇ × r + ω × (ω × r), (1)

where a_c is the acceleration of the center of the sensor suite body, ω is the angular velocity of the sensor suite body frame, and r is the relative position of the IMU with respect to the center. Note that it is more convenient to describe the acceleration relation (1) in the body frame, and that the cross-product identity U(v_1 × v_2) = (Uv_1) × (Uv_2) holds for any unitary matrix U ∈ R^{3×3}. By left-multiplying by the transformation matrix from the inertial frame to the body frame, the acceleration relation (1) can be rewritten as

a^B = a_c^B + ω̇^B × r^B + ω^B × (ω^B × r^B), (2)

where a^B, ω̇^B, ω^B, and r^B are the linear acceleration, angular acceleration, angular velocity, and position in the body frame (B).
Recovering the linear accelerations at different positions of a rigid body requires the angular acceleration as an input. This input can be derived from the measurements of the triaxial accelerometers and gyroscopes in the IMUs. Because the derivation of the angular acceleration at a point requires two IMU measurements, consider two IMUs in addition to the reference point, as indicated in Figure 2. From Equation (1), the difference of the acceleration of IMU 1 from that of IMU 2 is given by

a_1 − a_2 = ω̇ × (r_1 − r_2) + ω × (ω × (r_1 − r_2)), (3)

where a_1 and a_2 are the accelerations at IMUs 1 and 2, ω_1 and ω_2 are the corresponding angular velocity measurements (equal to the body angular velocity ω for a rigid body), and r_1 and r_2 are the vectors from the reference point to each IMU. The position of the reference point for each pair of IMUs is chosen to be the midpoint of the IMU positions: −r_1 = r_2 = ½ r_12 ≡ ½ r. This choice minimizes the complexity and the error of the transformation. The substitution of the midpoint into Equation (3) yields

C(r) ω̇ = a_1 − a_2 + ω × (ω × r), (4)

where the skew-symmetric cross-product matrix C(r) : R^3 → R^{3×3} is defined as

C(r) = [[0, −r_3, r_2], [r_3, 0, −r_1], [−r_2, r_1, 0]], (5)

with r = (r_1, r_2, r_3)^T. The angular acceleration ω̇ can thus be computed from the IMU measurements of a_1, a_2, ω_1, and ω_2 by solving the linear problem (4).
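As an illustration of solving the linear problem (4), the following sketch (hypothetical helper names, NumPy assumed) recovers the angular acceleration from one pair of IMU readings. Since C(r) annihilates r, a single pair only constrains the component of ω̇ orthogonal to r; a least-squares solve returns the minimum-norm solution.

```python
import numpy as np

def cross_matrix(r):
    """Skew-symmetric cross-product matrix C(r) such that C(r) @ v = r x v."""
    r1, r2, r3 = r
    return np.array([[0.0, -r3, r2],
                     [r3, 0.0, -r1],
                     [-r2, r1, 0.0]])

def angular_acceleration(a1, a2, w1, w2, r):
    """Recover the angular acceleration from a pair of IMUs separated by r.

    Solves C(r) @ wdot = (a1 - a2) + w x (w x r), using the mean gyroscope
    reading for w.  C(r) is rank 2, so lstsq returns the minimum-norm
    solution; multiple IMU pairs would be needed for the full vector.
    """
    w = 0.5 * (np.asarray(w1) + np.asarray(w2))
    b = (np.asarray(a1) - np.asarray(a2)) + np.cross(w, np.cross(w, r))
    wdot, *_ = np.linalg.lstsq(cross_matrix(r), b, rcond=None)
    return wdot
```

With ω̇ orthogonal to r, the minimum-norm solution coincides with the true angular acceleration, which makes the sketch easy to verify on synthetic data.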

Localization of Downward Camera
The downward camera and the checkerboard on the ground are adopted because the ground is the closest static surrounding structure to the sensor suite and thus can be measured most accurately. As illustrated in Figure 3, the global frame can be set at one end of the checkerboard and used to localize the downward camera. The downward camera observes the checkerboard pattern and any other extractable features. By associating the pattern and features between the global frame and the pixel frame, the pose of the downward camera with respect to the global frame can be identified after calibrating the camera parameters following the procedures proposed by Zhang [38] or Heikkilä and Silvén [39]. In particular, for the checkerboard attached to the ground, the checkerboard corner points are readily identified [40] and serve as the planar pattern for the camera pose measurement, as illustrated in Figure 3.
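To illustrate how a planar checkerboard yields a camera pose, here is a minimal sketch in the spirit of Zhang's method [38]: a direct linear transform (DLT) homography from world-plane to pixel correspondences, decomposed into rotation and translation with a known intrinsic matrix K. This is a simplified stand-in for the full calibration pipeline the paper cites (in practice a library routine such as OpenCV's solvePnP would be used); all names are illustrative.

```python
import numpy as np

def homography_dlt(world_xy, pixels):
    """Homography H mapping planar world points (z = 0) to pixels via the
    direct linear transform; needs at least 4 correspondences."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)          # null vector of the DLT system
    return H / H[2, 2]

def planar_pose(H, K):
    """Decompose H = s * K [r1 r2 t] into rotation R and translation t."""
    M = np.linalg.solve(K, H)
    s = 1.0 / np.linalg.norm(M[:, 0])
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
    if t[2] < 0:                      # checkerboard must lie in front of the camera
        r1, r2, t = -r1, -r2, -t
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    u, _, vt = np.linalg.svd(R)       # project onto the nearest proper rotation
    return u @ vt, t
```

With noise-free synthetic correspondences the decomposition recovers the exact pose; with real corner detections the SVD projection absorbs small departures from orthonormality.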
For the global localization of the downward camera, let the pixel coordinates of the corner point (i, j) at the kth time step be p_{k,ij}. The pixel coordinates are related to the position of the corner point in the global frame through the camera projection, where w is the width of the tiles of the checkerboard and N_k is the offset, counted in tiles, of the origin of the detected checkerboard at the current step relative to the origin at the first step. The offset N_k is accumulated from ∆_k, the motion between two images in pixel coordinates, and p_{k,0}, the origin of the detected checkerboard in the pixel coordinates of image k, starting from the initial N_0 = (0, 0)^T.
The design and sensor settings adopted in this paper allow the motion of the sensor suite to be predicted with respect to the body frame and the pose of the downward camera to be measured in the global frame. While the downward camera is fixed to the sensor suite and thus can measure the sensor suite pose through the kinematic transformation, the pose predicted by the motion model and the angular acceleration measurement will differ from the pose measured by the downward camera and the kinematic transformation. The primary reason is the dead-reckoning errors stemming from the motion prediction with angular acceleration measurements. In addition, the pose of the downward camera alone contains excessive oscillations because of the non-rigidity of the sensor frame in the crash test. The acceleration measurement is in general noisy, and its integration creates notable integration errors, which accumulate over time as dead-reckoning errors. Since the pose of the downward camera can be accurately measured because of its close distance to the ground, it is essential to estimate the pose of the sensor suite, or the stereo camera, by integrating the motion prediction and the observation. The next section presents the proposed localization of the stereo camera formulated in the framework of the EKF.

Overview
Figure 4 shows the EKF-based localization of the stereo camera proposed in this paper for toeboard deformation measurement. Since the camera fixture is assumed to be rigid and provides a fixed transformation from the body frame to the stereo camera, the quantities to be identified are the position and orientation of the body frame, {p, θ}, where p ∈ R^3 is the position of the origin of the body frame and θ ∈ R^3 is its Euler angle. To recursively estimate the motion of the sensor suite in the EKF framework, the state x ≡ {p, ṗ, p̈, θ, ω^B, ω̇^B} includes not only the pose but also the velocity ṗ, the acceleration p̈, the angular velocity ω^B, and the angular acceleration ω̇^B in the body frame.

In accordance with the EKF, the primary processes of estimation are the prediction and the correction. The prediction of the state at discrete time k, i.e., the derivation of the mean x_{k|k−1} and the covariance P_{k|k−1}, is performed using the motion model of the sensor suite and its state estimated at k − 1, i.e., x_{k−1|k−1} and P_{k−1|k−1}. In the correction step, the predicted sensor suite state is corrected to x_{k|k} and P_{k|k} by fusing the measurements of the IMUs (a_k and ω_k), the downward camera pose (p^C_k and θ^C_k), and the angular acceleration (ω̇_k) described in Sections 2.2 and 2.3. While different sensors are used for the same state, the Kalman gain is adjusted through the statistical properties of the motion and sensor noise via their covariances P_{k|k−1} and Σ_{k,v}, providing the optimal estimate for each state in the proposed approach. Once the pose of the sensor suite is globally estimated, the deformation of the toeboard can be measured with respect to the global and any other coordinate frames.
The operation of the EKF needs motion and sensor models. The discrete motion model of the sensor suite and the observation model subjected to uncertainty are generically given by

x_k = f(x_{k−1}) + w_k,   z_k = h(x_k) + v_k,

where f(·) and h(·) are the respective motion and sensor models, and w_k and v_k represent the motion model noise and the sensor model noise, respectively. The subsequent two sections present the motion and sensor models developed in the proposed approach.

Motion Model of Sensor Suite
The motion of the sensor suite pose is determined by the motion and the deformation of the entire vehicle, which cannot be modeled and identified easily. Meanwhile, the sensor suite motion is constrained by the motion and the deformation of the vehicle, meaning that the range of the sensor suite motion is bounded. With a short time step ∆t, the proposed approach accordingly predicts the pose of the sensor suite by the random walk motion model

p_k = p_{k−1} + ṗ_{k−1}∆t + ½ p̈_{k−1}∆t²,  ṗ_k = ṗ_{k−1} + p̈_{k−1}∆t,  p̈_k = p̈_{k−1} + w_{k,p},
θ_k = θ_{k−1} + E(θ_{k−1}) ω^B_{k−1}∆t,  ω^B_k = ω^B_{k−1} + ω̇^B_{k−1}∆t,  ω̇^B_k = ω̇^B_{k−1} + w_{k,ω},

where w_{k,p} and w_{k,ω} are motion noises due to unresolved linear and angular acceleration, and θ = [φ, θ, ϕ] are the Euler angles corresponding to the roll, pitch, and yaw motion of the sensor suite. In the equations, E(θ) is the Euler angle rates matrix [41], whose multiplication with the body-fixed angular velocity yields the Euler angle rates. Through linearization, the motion model can be encapsulated into the canonical form

x_k = F_k x_{k−1} + V_k w_k,

where F_k and V_k are the Jacobian matrices of the motion model with respect to the state and the motion noise term, respectively, and w_k ∼ N(0, Σ_w) is a motion noise, which is validly assumed to be Gaussian since ∆t is small.
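The prediction step of this random walk model can be sketched as follows. This is a minimal illustration, assuming a ZYX (roll-pitch-yaw) Euler convention for E(θ) and an 18-dimensional state ordered as {p, ṗ, p̈, θ, ω^B, ω̇^B}; the names are illustrative.

```python
import numpy as np

def euler_rates_matrix(phi, theta):
    """E(theta): maps body angular velocity to roll-pitch-yaw Euler angle
    rates (ZYX convention assumed; the paper cites [41] for the form)."""
    s, c, t = np.sin, np.cos, np.tan
    return np.array([[1, s(phi) * t(theta), c(phi) * t(theta)],
                     [0, c(phi), -s(phi)],
                     [0, s(phi) / c(theta), c(phi) / c(theta)]])

def predict(x, dt):
    """Random-walk motion model: propagate {p, pdot, pddot, theta, w, wdot}.
    The highest derivatives (pddot, wdot) are held constant and perturbed
    only by the (zero-mean) motion noise, which is omitted in the mean."""
    p, pd, pdd, th, w, wd = (x[i:i + 3] for i in range(0, 18, 3))
    p_n = p + pd * dt + 0.5 * pdd * dt ** 2
    pd_n = pd + pdd * dt
    th_n = th + euler_rates_matrix(th[0], th[1]) @ w * dt
    w_n = w + wd * dt
    return np.concatenate([p_n, pd_n, pdd, th_n, w_n, wd])
```

At zero roll and pitch, E(θ) reduces to the identity, so body rates and Euler rates coincide, which gives a quick sanity check of the implementation.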

Sensor Models of Accelerometer
According to Equation (2), the sensor model outputting the linear acceleration with respect to the body frame can be approximated with Gaussian noise, where z_{k,a} is the measurement of linear acceleration, r^B is the position of the accelerometer in the body frame, and v_a ∼ N(0, Σ_a) is the measurement noise of the accelerometer. R(θ) is the rotation matrix [41] that transforms the body frame to the inertial frame. Because of the association through integration, the measurement z_{k,a} can also be represented with the velocity and the position, which introduces two additional sensor models. Let the measurements and the sensor models be described collectively. The corresponding Jacobian matrix is written with C(·), the skew-symmetric matrix in Equation (5), and with δ representing the finite difference operation, δ²p_k = p_k − 2p_{k−1} + p_{k−2} and δṗ_k = ṗ_k − ṗ_{k−1}.
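The accelerometer model of Equation (2) can be sketched as a measurement function of the state: the center acceleration rotated into the body frame plus the rigid-body offset terms. The ZYX rotation convention and the function names are assumptions for illustration.

```python
import numpy as np

def rot_zyx(phi, th, psi):
    """Body-to-inertial rotation for roll-pitch-yaw (ZYX convention assumed)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(th), np.sin(th)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])
    Rx = np.array([[1, 0, 0], [0, cf, -sf], [0, sf, cf]])
    return Rz @ Ry @ Rx

def accel_model(pddot, theta, w, wdot, rB):
    """Predicted accelerometer reading at body-frame offset rB, per
    Equation (2): a^B = a_c^B + wdot x rB + w x (w x rB)."""
    R = rot_zyx(*theta)            # body -> inertial
    aBc = R.T @ pddot              # center acceleration in the body frame
    return aBc + np.cross(wdot, rB) + np.cross(w, np.cross(w, rB))
```

At zero orientation and zero angular terms the model returns the center acceleration unchanged, and a pure angular acceleration produces the familiar tangential term ω̇ × r.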

Sensor Models of Gyroscope
Because the gyroscope measures the angular velocity, the proposed approach constructs two sensor models, one outputting the angular velocity and the other the sensor suite orientation, where v_g is the gyroscope measurement noise and E^{−1}(θ) is the inverse of the Euler angle rates matrix. Let the measurements and the sensor models again be described collectively; the corresponding Jacobian matrix follows.

Sensor Models for Angular Acceleration
The derivation of the angular acceleration ω̇^B_k in Equation (13) from the accelerometers in Equations (3)-(5) additionally introduces sensor models for angular acceleration. They are associated with the angular acceleration, the angular velocity, and the orientation, where v_{k,ω̇} ∼ N(0, Σ_ω̇) is the measurement noise for angular acceleration. Let the measurements and the sensor models be described collectively; the Jacobian matrix is obtained with E(·) defined in Equation (20). While the sensor models have been constructed, the remaining modeling process for angular acceleration is the derivation of the covariance Σ_ω̇ of the angular acceleration obtained from a pair of IMUs. We expand the right-hand side of Equation (4) by considering noise on top of the measurements of acceleration and angular velocity, where v_b is the noise on the vector b, v_{a_1} and v_{a_2} are the measurement noises of the accelerometers, and v_{ω_1} and v_{ω_2} are those of the gyroscopes. By neglecting the smaller quadratic noise terms, the covariance can be computed with C(·), the cross-product matrix defined in Equation (5).
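A first-order propagation of the IMU noise through the linear problem (4) can be sketched as follows. This is one plausible reading of the linearization described above, under the assumptions that ω is taken as the mean of the two gyroscopes and that the rank-deficient C(r) is handled by a pseudo-inverse; the names are illustrative.

```python
import numpy as np

def cross_matrix(r):
    """Skew-symmetric matrix C(r) with C(r) @ v = r x v."""
    r1, r2, r3 = r
    return np.array([[0.0, -r3, r2], [r3, 0.0, -r1], [-r2, r1, 0.0]])

def angular_accel_covariance(r, w, Sa1, Sa2, Sg1, Sg2):
    """First-order covariance of wdot solved from C(r) wdot = (a1 - a2) + w x (w x r).

    Accelerometer noise enters b directly (with Jacobians +I and -I);
    gyroscope noise enters through the Jacobian of w x (w x r) with
    w = (w1 + w2) / 2, hence the factor 1/4 per gyroscope.
    """
    Jw = -cross_matrix(np.cross(w, r)) - cross_matrix(w) @ cross_matrix(r)
    Sb = Sa1 + Sa2 + 0.25 * Jw @ (Sg1 + Sg2) @ Jw.T
    Cp = np.linalg.pinv(cross_matrix(r))     # C(r) is rank 2
    return Cp @ Sb @ Cp.T
```

The Jacobian follows from d/dω [ω × (ω × r)] · v = v × (ω × r) + ω × (v × r) = (−C(ω × r) − C(ω)C(r)) v.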

Sensor Model of Downward Camera
Since the downward camera ultimately derives its pose with respect to the global frame, the sensor model associates the observations with the pose of the sensor suite, where r^B_c is the coordinate of the downward camera in the body frame. Let the measurements and the sensor models be described collectively; the Jacobian of the downward camera model follows.

EKF
The implementation of the EKF for the localization of the sensor suite is relatively straightforward once the motion model and all the sensor models have been developed.
Given the motion model (11), the proposed approach predicts the mean and the covariance, x_{k|k−1} and P_{k|k−1}, from the prior belief x_{k−1|k−1} and P_{k−1|k−1} as

x_{k|k−1} = f(x_{k−1|k−1}),   P_{k|k−1} = F_k P_{k−1|k−1} F_k^T + V_k Σ_w V_k^T.

The sensor models are each given by a function of some state variables. Accordingly, the proposed approach corrects the estimate through the parallel KF, in which the correction of each state from each sensor is fused through summation [42]:

x_{k|k} = x_{k|k−1} + Σ_{s∈S} K_{k,s} (z_{k,s} − h_{k,s}(x_{k|k−1})),

where s ∈ S is each sensor, and Σ_{k,s} and H_{k,s} are the covariance of the measurement noise of sensor s and the Jacobian matrix of the sensor model h_{k,s}(x), respectively. The Kalman gain for sensor s is given by

K_{k,s} = P_{k|k−1} H_{k,s}^T (H_{k,s} P_{k|k−1} H_{k,s}^T + Σ_{k,s})^{−1}.

The great advantage of the proposed formulation is its simplicity. While the dimension of each correction is different, the Kalman gain acts not only as a scaling factor but also as a transformation matrix. All the correction terms can thus be summed without loss of generality.
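The predict/correct cycle with multiple sensors can be sketched on a linear toy model. The paper fuses per-sensor corrections by summation following [42]; the sketch below applies the standard sequential per-sensor update instead, which serves the same purpose of weighting each sensor by its noise covariance. All names are illustrative.

```python
import numpy as np

def kf_step(x, P, F, Q, sensors):
    """One predict/correct cycle of a linear Kalman filter with several sensors.

    sensors: list of (z, H, R) tuples, one per sensor. Corrections are
    applied sequentially, each with its own Kalman gain, so sensors with
    smaller noise covariance R pull the estimate more strongly.
    """
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with each sensor in turn.
    for z, H, R in sensors:
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Because each gain is a matrix, corrections of different dimensions (a 3D camera position, a scalar range, and so on) all map back into the full state, mirroring the transformation role of the Kalman gain noted above.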

Sensor Suite Design and Installation
Figure 5a shows the design of the sensor suite with all sensors installed. The stereo cameras are installed on the sensor suite using an orientation-adjustable base to provide a better view of the toeboard. The downward camera is installed on a leg welded to the sensor suite. A three-axis gyroscope and a three-axis accelerometer are mounted with each camera to estimate its position and orientation. The frame size is 560 mm × 450 mm × 130 mm, which enables the sensor suite to be installed on the mounting bolt holes of the front seat without modifications to the test vehicle structure, as shown in Figure 5b. The sensor specifications are shown in Table 1. Since the crash in a vehicle test typically lasts 100∼200 ms, high-speed sensors with a wide measurement range were selected. The sensor suite was designed and tested in a laboratory environment. It was later tested in a full-size passenger car at the crash test center of Honda Research & Development Americas, Inc. (Ohio, USA). Figure 6a,b show the side and top views of the sensor suite installed on the test vehicle. The stereo camera was placed 400 mm away from the toeboard so both cameras could have a full view of it. A hole of size 150 mm × 150 mm was opened in the vehicle floor to install the downward camera, as shown in Figure 6b,c. The downward camera was about 95 mm above the ground and had a clear view of the checkerboard pattern. Additional installations of the IMUs are shown in Figure 6c,d.

Experimental Results
This section first demonstrates the localization of the sensor suite using the proposed EKF framework in a simulated environment. Results of a real car crash test then follow.

Sensor Suite Localization in Simulated Environment
Consider the following case where the center of the sensor suite undergoes simple linear and angular motion in the global frame,

a = [−(2π f)² sin(2π f t), 0, 0]^T,   ω = [−2π f_w (cos(2π f_w t) − 1), 0, 0]^T,

with angular acceleration

ω̇ = [(2π f_w)² sin(2π f_w t), 0, 0]^T.
The measurements of the accelerometer and gyroscope in the body frame are derived from the motion at the sensor suite center, with a^B_c = −a and ω̇^B = −ω̇, where v_g ∼ N(0, σ_g) and v_a ∼ N(0, σ_a) model the sensor noise of the gyroscope and accelerometer. The pose of the downward camera is provided from the position and orientation of the sensor suite, where R(θ) is the rotation matrix given the Euler angle θ and r^B_{C_D} is the position of the downward camera in the body frame. The sensor noise of the camera, v_c ∼ N(0, σ_c), is Gaussian. IMU data are available between −0.1 s and 0.3 s, while camera pose information is available between −0.01 s and 0.2 s, to be consistent with the real car crash test.
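The prescribed signals can be generated and checked for internal consistency, namely that ω̇ is indeed the time derivative of ω. A minimal sketch, using the section's values f = f_w = 10 Hz:

```python
import numpy as np

# Prescribed simulation motion (x-components; the other axes are zero):
#   w_x(t)    = -2*pi*f_w * (cos(2*pi*f_w*t) - 1)
#   wdot_x(t) =  (2*pi*f_w)**2 * sin(2*pi*f_w*t)
f = fw = 10.0
t = np.linspace(-0.1, 0.3, 4001)            # the IMU availability window
w = -2 * np.pi * fw * (np.cos(2 * np.pi * fw * t) - 1)
wdot = (2 * np.pi * fw) ** 2 * np.sin(2 * np.pi * fw * t)

# Numerical derivative of w for the consistency check.
num = np.gradient(w, t)
```

The numerical derivative agrees with the closed-form ω̇ to well under one percent of its amplitude (2π f_w)², matching the amplitude noted in the results below.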
By letting f = f_w = 10, the angular acceleration can be recovered from the measurements of the accelerometers and gyroscopes, as illustrated in Figure 7a. This angular acceleration follows the prescribed ω̇^B. The amplitude in the x-direction is close to the given amplitude (2π f_w)². The other two directions (ω̇_y and ω̇_z) are zero-mean. The estimated velocity is shown in Figure 7b. The linear and angular velocities in the x-direction follow the cosine wave, while the motion in the other directions is trivial. The displacement and roll angles are shown in Figure 7c,d. Figure 7e illustrates the motion of the sensor suite in space, where rectangular panels illustrate the position with a normal vector for the orientation.
The differential entropy of the estimation is plotted in Figure 8; for the multivariate Gaussian distribution considered here, it is given by h = ½ ln((2πe)^n |Σ|), where n is the state dimension and Σ is the estimation covariance. The entropy shows that the downward camera pose significantly improves the certainty of the estimation.
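The Gaussian differential entropy used above is a one-liner; a sketch with a numerically stable log-determinant:

```python
import numpy as np

def gaussian_entropy(Sigma):
    """Differential entropy of N(mu, Sigma): 0.5 * ln((2*pi*e)^n * det(Sigma)).
    Uses slogdet to avoid overflow/underflow in the determinant."""
    n = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (n * np.log(2 * np.pi * np.e) + logdet)
```

Shrinking the covariance lowers the entropy, which is exactly the drop visible in Figure 8 when the camera corrections become available.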
To test the limits of the proposed EKF framework, we vary the noise level through σ_a, σ_g, or σ_c individually while keeping the other two at a low level, specifically 0.1% of the maximum value. For each combination, 100 tests were performed, and the statistics of the maximum position error, max ||p_{k|k} − p_k||, are plotted in Figure 9, where p_{k|k} is the estimate and p_k is the ground truth at step k. The figure shows that the proposed EKF-based estimation is accurate when the noise is small, say, less than 1%. The error can grow significantly when the noise of the accelerometers and gyroscopes further increases. Note that the dead-reckoning error comes from the measurements of the accelerometer and gyroscope; therefore, noise on the downward camera pose may not significantly influence the estimation if the IMU measurements are not significantly noisy.

Figure 10 shows the results of pose detection of the downward camera using the computer vision technique. Figure 10a shows the checkerboard pattern detected, while Figure 10b shows the features matched between two consecutive images for the purpose of localizing the downward camera. With the pixel coordinates in Figure 10a and their global frame coordinates through Equation (7), the camera pose is computed and presented in Figure 10c (position) and Figure 10d (orientation).

Localization of Sensor Suite
For the localization purpose, the angular acceleration is first computed using the IMU measurements and is plotted in Figure 11a. The results show that the pitch motion (rotation about the y-axis), related to ω̇_y, is significantly larger than the other two directions, which is consistent with the onsite observation. The state of the sensor suite is then estimated through the proposed EKF framework using the computed pose of the downward camera, the angular acceleration, and the IMU measurements. The results are shown in the rest of Figure 11.
Figure 11b shows the estimated velocity. The frontal crash test has an initial speed of 15.6 m/s (56 km/h). The results show that, after the crash, the vehicle bounced back and coasted without power at a speed of around −3.8 m/s. Motion in the y- and z-directions is much smaller than in the x-direction, which is consistent with the onsite measurement.
Figure 11c shows the position estimated for the sensor suite. Instead of the center, the position of the downward camera is plotted so that the results from Section 5.2.1 can be compared. The structural compression, i.e., the distance moved in the x-direction after the crash, is about 700 mm in the figure, which conforms to the onsite measurement. The figure also shows that the vehicle bounced up about 50 mm after the crash (from 33 ms to 77 ms), and the motion in the z-direction was relatively small compared to the other two directions.
Figure 11d shows the pitch motion during the crash. The estimation shows that the maximum pitch angle is around 8° during the crash. For comparison, we also include the pitch angles measured by the downward camera and by the gyroscope installed at the center of the designed sensor suite. The camera measurement is subject to the oscillation of the frame and shows strong oscillations throughout the in-crash measurement. Meanwhile, the integration of the gyroscope measurement is subject to dead-reckoning errors, making it hard to measure the precise pitch angle. Therefore, measurement based on a single sensor may be less accurate than the state estimation technique that fuses multiple sensor measurements. The motion of the sensor suite during the crash is illustrated in Figure 11e, where rectangular panels represent its position and orientation every 5 ms. On each panel, the normal vector is plotted and scaled by the velocity v_x to illustrate the orientation and velocity of the sensor suite during the crash test.

Conclusions and Ongoing Work
This paper presented a technique to localize a stereo camera for in-crash toeboard deformation measurement. For the localization, we designed a sensor suite with IMU sensors and cameras. The technique recursively estimates the pose of the sensor suite to which the stereo camera is attached. In the EKF-based framework, the state, comprising the state mean and covariance, is predicted using a motion model and the current state; the prediction is then corrected through the proposed sensor models and the measurements of the IMUs and the camera. The state estimation technique can remove the dead-reckoning errors of the IMU sensors. Compared to a classical single-sensor measurement, the state estimation can remove the measurement error incurred by the excessive oscillation of the sensor suite during a crash test. The proposed technique was first verified in a simulated environment, in which the estimation matched the ground truth well and remained reliable under moderate sensor noise. The real crash data were then analyzed, providing localization results for the sensor suite as well as for the stereo camera.
This paper has focused on the localization of a sensor suite for toeboard deformation measurement; the deformation measurement itself is not touched on here and will be reported in future work. We noticed that the cameras underwent relative motion with respect to the sensor suite in the real crash test. This may be due to structural flexibility, and further investigation of its influence may provide more accurate localization results.

Figure 1.
Figure 1. Sensor suite and localization problem formulation.

Figure 2.
Figure 2. Illustration of the angular acceleration derivation using two IMUs.


Figure 6.
Figure 6. Sensor suite for the real car test: (a) side view; (b) top view; (c) bottom view; (d) close-up view of camera and IMU sensors.

Figure 7. Figure 8.
Figure 7. Results of sensor suite localization using the EKF framework in a simulated environment. (a) Angular acceleration; (b) linear and angular velocity of the camera fixture; (c) estimated displacement of the camera fixture; (d) estimated roll angle of the camera fixture; (e) trajectory of the camera fixture in space.

Figure 9.
Figure 9. Error of EKF estimation under different noise levels on the accelerometer, gyroscope, and camera measurements.

Figure 10.
Figure 10. Results of camera pose estimation using the computer vision technique.

Figure 11.
Figure 11. Frontal barrier localization results. (a) Displacement of the camera fixture; (b) pitch angle of the camera fixture; (c) localization of the camera fixture in the global frame.

Table 1.
Parameters of sensors.