A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the needs of vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main modules, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control. First, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle while it is reversing. Second, an information fusion algorithm using an adaptive Kalman filter processes the data obtained by the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter with low-rank representation is used to track the main obstacles; the low-rank representation selects an optimal objective particle template with the smallest L-1 norm. Finally, the proposed vehicle reversing control strategy governs the electronic throttle opening and automatic braking before any potential collision, making reversing control safer and more reliable. System simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.


Introduction
Vehicles have long been widely utilized around the world, and reducing the probability and severity of traffic accidents has always been a focus of research [1][2][3][4]. The growing number of reversing traffic accidents, and of reversing collisions in particular, has become a serious social safety problem in recent years. In [38], the authors presented a stereo vision-based vehicle detection system for road scenes using a disparity histogram. Their system can be viewed as three main parts: obstacle detection, obstacle segmentation, and vehicle detection, achieving a 95.5% average detection rate.
Building on sparse representation, researchers have developed an adaptive sparse representation framework [39], which improves the robustness of image recognition and has driven innovation in target tracking. This framework has wide application potential. Several robust visual tracking and vehicle classification methods have been proposed in [40][41][42][43].

Reversing Speed Control for Vehicle Safety
In [44], model predictive control (MPC) was used to compute spacing-control laws for transitional maneuvers of vehicles. Drivers may adapt to the automatic braking feature of adaptive cruise control (ACC) in ways unintended by its designers [45]. In [46], an autonomous reverse parking system was presented based on robust path generation and improved sliding mode control for vehicle reversing safety. That system consists of four key parts: a novel path-planning module; a modified sliding mode controller for the steering wheel; image processing with real-time estimation of the vehicle's position; and a robust overall control scheme. The authors of [14] presented a novel vehicle speed control method based on driver vigilance detection using EEG and sparse representation; their scheme has been implemented and successfully used while reversing a vehicle. In [47], a Bayesian network is used to detect human action in order to reduce reversing traffic accidents; the authors used Lidar and wheel speed sensors to sense the environment. In [48], robust trajectory tracking for a reversing tractor-trailer system was proposed. Vehicle reversing speed control has also been addressed using neural networks [49], fuzzy control [50], and human-automation interaction [18,51].
Despite the successful use of existing approaches and systems, a variety of factors in vehicle reversing safety systems still challenge researchers. Many studies have been conducted on reversing safety systems, focusing on three main problems: (1) how to find smaller, more reliable, and lower-cost multi-sensors suitable for environmental perception; (2) how to realize target recognition and tracking based on information fusion of the different physical quantities measured by the multi-sensors; (3) how to realize a vehicle reversing speed control strategy, based on multi-sensor environmental perception and object tracking, that prevents collisions under realistic conditions.
In this paper, we introduce a multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. A multi-sensor environmental perception module based on a binocular-camera system and ultrasonic range finders is used to acquire the distance of an obstacle behind the vehicle. An information fusion algorithm using an adaptive Kalman filter is employed to process the data obtained by the binocular vision and ultrasonic sensors. The framework of a particle filter with low-rank representation is used to track the main obstacles. After obstacle detection and tracking, the vehicle reversing control strategy takes steps to avoid reversing collisions.
The rest of the paper is organized as follows. In Section 2, we present the general system architecture of our proposed system. Section 3 focuses on multi-sensor environmental perception for vehicle reversing. Target recognition and tracking are developed in Section 4, and vehicle reversing speed control strategies are described in Section 5. Section 6 is devoted to the system simulation and validation. Finally, some conclusions are provided in Section 7.

System Architecture
The general architecture of our system, as shown in Figure 1, is made up of multi-sensor environmental perception, target recognition and object tracking, and the vehicle reversing speed control strategy. In the first step, when a vehicle is reversing, the Electronic Control Unit (ECU) receives the reversing information and automatically activates the multi-sensor environmental perception module to acquire rear-view information. The binocular cameras and two ultrasonic range finders then capture information about the complex reversing environment. After obtaining the images, we can extract the obstacle's distance information using disparity computation and triangulation. At the same time, the two ultrasonic sensors are applied for rear collision intervention, providing drivers with distance feedback for obstacles behind the vehicle. The left block of Figure 1 shows the binocular cameras and two ultrasonic sensors.
The second step includes the information fusion algorithm based on the multi-sensors for obstacle detection, and target tracking using a particle filter and low-rank representation. As shown in the middle blocks of Figures 1 and 2, an adaptive Kalman filter is used to process the data obtained by the binocular vision and ultrasonic sensors. The result of multi-sensor information fusion is very important for a vehicle to keep a safe distance from obstacles. A novel framework of a particle filter based on low-rank representation is used to track the main obstacles during reversing, as shown in the middle block of Figure 1. In this paper, we introduce a low-rank matrix into the particle filter to choose an optimal objective particle template with the smallest L-1 norm. As shown in Figure 3, combining low-rank representation with the target particles, we eventually choose the particle that has the smallest difference from the target templates in the candidate set. The proposed obstacle tracking algorithm can successfully track the obstacle when the vehicle is reversing.
The final step is the vehicle reversing speed control strategy, shown in the right block of Figure 1. After target recognition and tracking, the ECU of the vehicle controls the reversing speed for vehicle safety. Based on the information fusion of multi-sensor environmental perception and obstacle tracking, the ECU judges the safe distance for reversing. If a danger is detected, the ECU controls the speed of the vehicle to avoid a reversing collision.

Multi-Sensor Environmental Perception
In this section, two types of sensors, ultrasonic range finders and binocular cameras, are used to perceive the environment around the vehicle. As shown in Figure 4, the proposed multiple sensors are used in our research. Two ultrasonic range finders measure the distance to obstacles, and the binocular cameras capture visual and distance information about the obstacles. We integrate the two kinds of sensors on one board as a vehicle reversing multi-sensor in Figure 4. The images from the binocular cameras are shown in Figure 5.

Obstacle Detection Based on Binocular Cameras
Stereo vision technology is increasingly used in machine vision. From a pair of left and right images, one can approximately recover an object's 3D geometry, distance, and position. In this paper, the videos captured by the binocular cameras in Figure 4 are used not only to provide visual information to the driver, but also as input to the ECU for vehicle reversing safety.

Binocular Stereo Calibration
In order to obtain the depth of an obstacle, key parameters are introduced for the 3D depth computation of a target and for binocular stereo rectification [52] of the binocular cameras. To get these parameters, the images are calibrated using the methods given in the literature [5]. For binocular stereo rectification, the parameters are defined as follows:
M: intrinsic matrix, a 3×3 matrix containing the camera normalized focal length and optical center.
f_x, f_y: camera normalized focal lengths.
c_x, c_y: camera normalized optical center.
d: distortion vector, a 5×1 vector.
k_1, k_2, k_3: radial distortion parameters.
p_1, p_2: tangential distortion parameters.
R: rotation matrix, a 3×3 matrix consisting of three 3×1 vectors.
r_1, r_2, r_3: rotation matrix vectors.
t: translation vector, a 3×1 vector of three translation parameters.
T_x, T_y, T_z: translation parameters.
We can relate the real-world plane coordinates (X, Y) to the camera coordinates (x, y) using the above parameters:

s [x, y, 1]^T = M [r_1, r_2, t] [X, Y, 1]^T (1)

where s is a scale ratio and r_3 can be removed because the depth Z = 0. The coordinates are then modified by a Taylor series expansion at r = 0:

x_corrected = x_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_d y_d + p_2 (r^2 + 2 x_d^2)
y_corrected = y_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y_d^2) + 2 p_2 x_d y_d (2)
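As a sketch of the distortion correction step, the following function evaluates the radial and tangential terms of Equation (2) for one normalized point; the function name and one-shot (non-iterative) evaluation are our own simplification, not the calibration toolchain itself.

```python
def undistort_point(xd, yd, k1, k2, k3, p1, p2):
    """Apply the radial/tangential model of Equation (2) to a normalized
    image point (xd, yd). A single evaluation of the Taylor expansion,
    not the iterative inverse used by full calibration pipelines."""
    r2 = xd * xd + yd * yd                       # r^2 about the optical center
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_corr = xd * radial + 2.0 * p1 * xd * yd + p2 * (r2 + 2.0 * xd * xd)
    y_corr = yd * radial + p1 * (r2 + 2.0 * yd * yd) + 2.0 * p2 * xd * yd
    return x_corr, y_corr
```

With all five distortion coefficients zero the point is returned unchanged, which is a quick sanity check on the model.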
where (x_d, y_d) is the coordinate before correction and (x_corrected, y_corrected) is the corrected coordinate for stereo rectification. A single camera can be calibrated using Equations (1) and (2). At the same time, two further parameters, the rotation matrix R_s = (r_s1, r_s2, r_s3) and the translation vector T_s = (T_sx, T_sy, T_sz), which align the two cameras, are computed as in Equation (3):

P_r = R_s P_l + T_s (3)

where P_r and P_l are the right and left camera coordinates, respectively. A 3D point P can be projected into the left and right cameras as in Equations (4) and (5), according to the above single-camera calibration:

P_l = R_l P + T_l (4)
P_r = R_r P + T_r (5)

where R_l and R_r are the rotation matrices of the left and right cameras, and T_l and T_r are their translation vectors. R_s and T_s can be calculated from Equations (3)-(5).
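Combining Equations (3)-(5) gives R_s = R_r R_l^T and T_s = T_r − R_s T_l. A minimal sketch of that computation (the function name is ours):

```python
import numpy as np

def stereo_extrinsics(R_l, T_l, R_r, T_r):
    """From P_l = R_l P + T_l and P_r = R_r P + T_r, derive the
    inter-camera transform P_r = R_s P_l + T_s."""
    R_s = R_r @ R_l.T          # rotation aligning the left frame to the right
    T_s = T_r - R_s @ T_l      # translation between the two camera frames
    return R_s, T_s
```

Substituting P_l back in confirms R_s P_l + T_s = R_r P + T_r for any 3D point P.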

Binocular Stereo Rectification and Stereo Correspondence
When the binocular camera images are obtained, the next steps are binocular stereo rectification and stereo correspondence. Stereo rectification removes the distortions and turns the binocular images into a standard, row-aligned form using the calibration results. This is an important step for calculating the disparity of the binocular images for vehicle reversing. The Open Source Computer Vision Library (OpenCV) [53] is used to rectify the binocular images.
For binocular stereo rectification and stereo correspondence, the parameters are defined as follows:
R_s: the rotation used to minimize the reprojection distortion, split between the two cameras.
R_l, R_r: the rotation matrices of the left and right cameras.
M_l, M_r: the intrinsic matrices of the left and right cameras.
After finding the correspondence points, the disparity can easily be calculated. A disparity optimization algorithm is also used to remove badly matched points.
After stereo rectification and correspondence, we compute the disparity of the binocular images. The disparities of targets at different distances from the binocular cameras usually contain the targets' edge information, which can be used to separate one target from another. This important characteristic can be exploited for target recognition and tracking.
The disparity is usually affected by illumination noise and occlusion, so we must take measures to reduce the noise in order to obtain the edge information reliably. In our paper, operations such as dilation, erosion, and image binarization are used to address this problem. After that, small noisy regions still remain, so we choose the largest region as the target to track. To obtain the location of the target, we compute its external (bounding) rectangle; the four corners of this rectangle give the initial location for the particle filter, and the target region selected by binocular stereo vision serves as the particle template.
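The largest-region selection above can be sketched as follows. This is a NumPy/stdlib stand-in for illustration only (a real pipeline would use OpenCV's morphology and connected-component routines, and the morphological cleanup is assumed to have been applied already); the function name and threshold parameter are ours.

```python
import numpy as np
from collections import deque

def largest_region_bbox(disparity, thresh):
    """Binarize a disparity map, find 4-connected regions by BFS, and
    return the bounding rectangle (x_min, y_min, x_max, y_max) of the
    largest region -- the initial window for the particle filter."""
    mask = disparity > thresh
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        next_label += 1
        size = 0
        q = deque([(sy, sx)])
        labels[sy, sx] = next_label
        while q:                      # flood-fill one connected region
            y, x = q.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
        if size > best_size:          # keep only the largest region
            best_size, best_label = size, next_label
    ys, xs = np.nonzero(labels == best_label)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```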

Binocular Triangulation
After binocular stereo rectification and correspondence, binocular triangulation is used to compute the position of a target in 3D space. A binocular stereo model is shown in Figure 6. The theorems of binocular triangulation are given in Equations (6)-(8):

d = x_l − x_r (6)
Z = f T / d (7)
X = x_l Z / f, Y = y_l Z / f (8)

where (x_l, y_l), (x_r, y_r), and (c_x, c_y) are corrected through the above steps. In Equation (7), T is the distance between the binocular cameras' centers. At the same time, the obstacle's depth, width, and height can also be obtained by binocular vision for target detection.
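Assuming the standard rectified-stereo relations (depth from disparity, then similar triangles), the triangulation step can be sketched as below; the function name and example numbers are illustrative.

```python
def triangulate(x_l, x_r, y_l, f, T):
    """Recover (X, Y, Z) for a rectified stereo pair: disparity
    d = x_l - x_r, depth Z = f*T/d, then X and Y by similar triangles.
    f is the focal length in pixels, T the baseline in meters."""
    d = x_l - x_r              # disparity, Equation (6)
    Z = f * T / d              # depth, Equation (7)
    X = x_l * Z / f            # lateral position, Equation (8)
    Y = y_l * Z / f
    return X, Y, Z
```

For example, with f = 700 px, baseline T = 0.12 m, and a 10 px disparity, the target lies at Z = 8.4 m.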


Obstacle Detection Based on Ultrasonic Range Finders
An ultrasonic sensor determines the range of a target by capturing the reflected ultrasonic wave. An ultrasonic wave is a mechanical vibration at a frequency higher than that of audible sound. It is widely used because of its high frequency, short wavelength, low diffraction, and, especially, good directivity. However, the range of this sensor is limited to 0-10 m. In our proposed system, two ultrasonic range finders are installed on the rear board to measure the distance to an obstacle in real time. The ultrasonic range finder, a KS109, is shown in Figure 4.
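The range computation behind such a sensor is a simple time-of-flight calculation: the wave travels to the obstacle and back, so the distance is half the round-trip path. A sketch (the temperature-dependent speed-of-sound approximation c ≈ 331.3 + 0.606·T m/s is a standard physical formula; the function name is ours):

```python
def ultrasonic_distance(echo_time_s, temp_c=20.0):
    """Distance to an obstacle from the echo round-trip time.
    The division by two accounts for the out-and-back path."""
    c = 331.3 + 0.606 * temp_c   # speed of sound in air, m/s
    return c * echo_time_s / 2.0
```

At 20 °C, a 10 ms echo corresponds to roughly 1.72 m, comfortably inside the sensor's 0-10 m range.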

Target Recognition and Tracking Based on Information Fusion and the Improved Particle Filter
The information fusion algorithm using an adaptive Kalman filter is employed to process the data obtained from the binocular vision and ultrasonic sensors for the same obstacle at the same time, which improves the robustness of the sensors. Then the improved particle filter based on low-rank representation tracks the main obstacles. The low-rank representation is used to select an optimal objective particle template with the smallest L-1 norm, which improves the tracking performance. The structure of the proposed algorithm for information fusion and tracking is shown in Figure 7.


In the model of a Kalman filter, we use one stochastic difference equation:

X(k) = A X(k−1) + B U(k) + W(k) (9)

where X(k) represents the state of the system at moment k and U(k) is the control quantity of the current state. A and B are the parameters of the system, which vary between systems. W(k) is the process noise.
Therefore, we can describe the measurement by:

Z(k) = H X(k) + V(k) (10)

where Z(k) represents the measurement, H is the parameter of the measuring system, and V(k) is the measurement noise. Based on the model of the Kalman filter, the current state can be estimated from the previous state:

X(k|k−1) = A X(k−1|k−1) + B U(k) (11)

where X(k|k−1) is the estimate based on the previous state and X(k−1|k−1) is the optimal result of the previous state. After updating the system, the covariance can be updated using Equation (12). We use P to represent the covariance:

P(k|k−1) = A P(k−1|k−1) A′ + Q (12)

where the covariance P(k|k−1) corresponds to X(k|k−1), P(k−1|k−1) corresponds to X(k−1|k−1), A′ is the transpose of A, and Q represents the variance of the system. We can obtain the optimal value X(k|k) using the measurement of the system and the predicted value:

X(k|k) = X(k|k−1) + Kg(k) (Z(k) − H X(k|k−1)) (13)

where Kg is the Kalman gain obtained by

Kg(k) = P(k|k−1) H′ / (H P(k|k−1) H′ + R) (14)

with R the variance of the measurement noise. In order to update the system constantly, the covariance is updated through

P(k|k) = (I − Kg(k) H) P(k|k−1) (15)

where I is the identity matrix. Equations (11)-(15) are the basic framework of the Kalman filter.
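One predict/update cycle of Equations (9)-(15) can be sketched in matrix form as follows; the function name and the toy constant-signal usage are illustrative, not the paper's tuned filter.

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, A, B, u, H, Q, R):
    """One Kalman filter cycle: predict with Equations (11)-(12),
    compute the gain with Equation (14), update with (13) and (15)."""
    # Predict
    x_pred = A @ x_prev + B @ u
    P_pred = A @ P_prev @ A.T + Q
    # Kalman gain
    Kg = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Update with the measurement z
    x_new = x_pred + Kg @ (z - H @ x_pred)
    P_new = (np.eye(len(x_prev)) - Kg @ H) @ P_pred
    return x_new, P_new
```

Running this repeatedly on noisy scalar measurements of a constant distance drives the estimate toward the true value while the covariance shrinks.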

Information Fusion Based on Federal Kalman Filter
In this paper, a federated Kalman filter is used for multi-sensor information fusion, as in Figure 8. The adaptive federated filter is a decentralized scheme for distributing dynamic information. The dynamic information has two parts: information from the state equation and information from the observation equation.

As shown in Figure 8, the outputs x_i(τ) and covariances P_i(τ) of the sub-filters in federated filter fusion are local estimates based on the measurements of the subsystems. The outputs X(τ) and P(τ), determined by all the subsystems of the filter, form the optimal estimate. Their relationship can be described as follows:

P(τ) = (Σ_i P_i(τ)^(−1))^(−1), X(τ) = P(τ) Σ_i P_i(τ)^(−1) x_i(τ)

After fusing, the main filter distributes the information back to each sub-filter. The principle of allocation can be described as:

x_i(τ) = X(τ), P_i(τ) = β_i^(−1) P(τ)

The parameter β_i shown above should meet the following requirement:

Σ_i β_i = 1, β_i > 0

In practice, we used four real sensors in our proposed multi-sensor system: the two cameras of the binocular system and the two ultrasonic range finders. The above equations should be adapted according to the state equation and measuring equation of the system.
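The information-fusion step of the federated filter can be sketched directly from the relationship above: invert each local covariance, sum, and re-weight the local estimates (the function name is ours; β allocation back to the sub-filters is omitted).

```python
import numpy as np

def federated_fuse(estimates, covariances):
    """Global estimate from sub-filter outputs x_i, P_i:
    P = (sum_i P_i^-1)^-1 and X = P * sum_i P_i^-1 x_i."""
    P_inv = sum(np.linalg.inv(P) for P in covariances)
    P = np.linalg.inv(P_inv)
    X = P @ sum(np.linalg.inv(Pi) @ xi
                for xi, Pi in zip(estimates, covariances))
    return X, P
```

With two equally confident scalar estimates of 1.0 and 3.0, the fused estimate is their mean 2.0 with halved variance, which matches the intuition that fusion never degrades the best sub-filter.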

Introduction of Particle Filter
Framework: A particle filter based on the Monte Carlo method is widely used in many fields. We can obtain the state probability using the following steps:

p(x_k | y_1:k) = p(y_k | x_k) p(x_k | y_1:k−1) / p(y_k | y_1:k−1) (25)

where x_k is the state variable at time k, y_k is an observation of x_k, and p(y_k | y_1:k−1) is a normalizing constant in Equation (25). By spreading N samples {x_k^i, i = 1, ..., N} at time k from an importance distribution q(x) with the weights {w_k^i, i = 1, ..., N}, the weights are approximated as follows:

w_k^i ∝ w_k−1^i p(y_k | x_k^i) p(x_k^i | x_k−1^i) / q(x_k^i | x_k−1^i, y_k)

A maximal approximate optimal state is given by:

x_k* = arg max_{x_k^i} w_k^i

Model: x_k contains the affine transformation parameters that convert the obstacle region to a fixed-size rectangle. The parameters in x_k are independently drawn from a Gaussian distribution around x_k−1. y_k is the transformed image, stretched into columns of the obstacle region with a constant size using x_k. For details about the particle filter, please refer to the literature [5].
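A minimal bootstrap particle filter for a 1-D random-walk state illustrates the propagate/weight/resample loop described above (all names, noise levels, and the 1-D state are our simplifications of the affine tracking model):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, q_std=0.5, r_std=0.5):
    """Bootstrap particle filter: spread particles with Gaussian process
    noise, weight by the Gaussian likelihood p(y_k | x_k), resample to
    avoid degeneracy, and report the weighted mean as the estimate."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Importance sampling: propagate with the motion model
        particles = particles + rng.normal(0.0, q_std, n_particles)
        # Weight by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / r_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample proportionally to the weights
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return estimates
```

Fed a constant observation, the particle cloud migrates from its prior toward the observed state within a few iterations.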

Introduction of Low-Rank Representation and Principal Component Analysis
As in many practical applications, the given data matrix D is low rank or approximately low rank. In order to restore the low-rank structure of matrix D, matrix D is decomposed into two unknown matrices, X and E, as in D = X + E. X is low rank, as shown in Figure 9.
The recovery problem can be written as:

min_{X,E} rank(X) + λ||E||_0  s.t. D = X + E

where ||E||_0 is the number of nonzero elements. Because this problem is NP-hard, it is relaxed to a convex optimization:

min_{X,E} ||X||_* + λ||E||_1  s.t. D = X + E

where ||X||_* is the trace norm, the sum of the singular values of the matrix; ||E||_1 is the L1 norm, the sum of the absolute values of the elements; and λ is the weight of the formulation. In this way, we convert the problem of searching for the rank of the matrix into searching for the trace norm of the matrix.

In mathematical optimization, the method of Lagrange multipliers helps find the extrema of a multivariate function subject to one or more constraints. The Augmented Lagrange Multiplier (ALM) algorithm also makes use of the multiplier method. The target function is:

L(X, E, Y, μ) = ||X||_* + λ||E||_1 + <Y, D − X − E> + (μ/2)||D − X − E||_F^2

where ||·||_F is the Frobenius norm. The iterative process of the algorithm is shown in Algorithm 1.
Algorithm 1 Algorithm of low-rank representation
Input: observation matrix D, weight λ
Initialization: Y_0 = 0; E_0 = 0; μ_0 > 0; ρ > 1; k = 0
While not converged do
  X_{k+1} = arg min_X L(X, E_k, Y_k, μ_k)
  E_{k+1} = arg min_E L(X_{k+1}, E, Y_k, μ_k)
  Y_{k+1} = Y_k + μ_k (D − X_{k+1} − E_{k+1})
  μ_{k+1} = ρ μ_k; k = k + 1
End while
Output: X, E
In the process of iteration, the main computation is the singular value decomposition (SVD). In order to improve the speed of the system, we use the inexact ALM (IALM) in our paper.
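An IALM-style decomposition can be sketched as below: each iteration performs one singular-value shrinkage for X, one elementwise shrinkage for E, then updates the multiplier Y and penalty μ. The function names, stopping rule, and parameter choices (μ_0, ρ, default λ = 1/√max(m, n)) are standard conventions we assume, not values from the paper.

```python
import numpy as np

def svd_shrink(M, tau):
    """Singular value soft-thresholding: proximal operator of the
    trace (nuclear) norm, computed with one SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM for D = X + E with X low rank and E sparse."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)   # initial penalty
    rho = 1.5                          # penalty growth factor
    for _ in range(max_iter):
        X = svd_shrink(D - E + Y / mu, 1.0 / mu)      # low-rank step
        R = D - X + Y / mu
        E = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)  # sparse step
        Z = D - X - E
        Y = Y + mu * Z                 # multiplier update
        mu *= rho
        if np.linalg.norm(Z, 'fro') < tol * np.linalg.norm(D, 'fro'):
            break
    return X, E
```

On a synthetic low-rank matrix corrupted by a few large sparse entries, the iteration separates the two components to high accuracy.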

Low Rank Representation for Obstacle Recognition and Tracking
As shown in Figure 7, we introduce low-rank representation into the particle filter to improve the tracking performance.
The target template set D = [d_1, d_2, ..., d_n], D ∈ R^{m×n}, is defined to contain n target templates of dimension m. There are k candidate targets generated based on the framework of the particle filter, defined as X = {x_1, x_2, ..., x_k}, x_i ∈ R^m. If we combine each candidate target x_i with the target template set D one by one, we form a new matrix Y_i = [D, x_i] ∈ R^{m×(n+1)}. We can then make use of robust principal component analysis to obtain a low-rank matrix, which can be considered a matrix consisting of the unchanging appearance data of the target.
Here Y_i = [D, x_i] is an observation that can be decomposed into a low-rank matrix Z_i ∈ R^{m×(n+1)} and a noise matrix E_i ∈ R^{m×(n+1)}, i.e., Y_i = Z_i + E_i. Based on the theory described above, Equation (34) becomes

min_{Z_i, E_i} ||Z_i||_* + λ||E_i||_1   s.t.  Y_i = Z_i + E_i.

In this case, Z_i is a low-rank matrix. However, we mainly concentrate on the noise matrix E_i, because the last column of E_i becomes smaller as the candidate gets closer to the target template set D. We define the last column of E_i as e_i, which represents the difference between the target in the current frame and the sample set. The optimization objective is to find the candidate with the smallest L1 norm ||e_i||_1. In practice, e_i captures the differences that arise during tracking from occlusion and changes in lighting. We choose the particle with the smallest difference from the target templates among the candidate set, as shown in Figure 10. The chosen particle has the smallest ||e_i||_1 and is therefore the most desirable particle. This shows that the algorithm can successfully track the target while the vehicle is reversing.
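The candidate-selection step above can be sketched as follows. The helper name `best_candidate` and the decomposition routine passed in as `rpca` are hypothetical; any low-rank/sparse decomposition (such as the IALM described earlier) could be plugged in:

```python
import numpy as np

def best_candidate(D, candidates, rpca):
    """Pick the candidate x_i whose residual column has the smallest
    L1 norm when Y_i = [D, x_i] is decomposed as Z_i + E_i."""
    best_i, best_score = -1, np.inf
    for i, x in enumerate(candidates):
        Y = np.column_stack([D, x])      # Y_i = [D, x_i]
        Z, E = rpca(Y)                   # low-rank part + error part
        score = np.abs(E[:, -1]).sum()   # ||e_i||_1, last column of E_i
        if score < best_score:
            best_i, best_score = i, score
    return best_i, best_score
```

A candidate resembling the templates yields a small ||e_i||_1 and wins; a dissimilar candidate leaves a large residual in the last column.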


Vehicle Speed Control Strategy
In this section, we use the vehicle reversing speed control algorithm from [5] to keep the vehicle at a safe speed, which makes reversing control safer and more reliable. Table 1 shows the strategies of the ECU under different conditions for vehicle reversing safety. The fuzzy rules in [5] and Table 1 demonstrate the feasibility of the approach. The system is able to take control of the electronic throttle opening and automatic braking to avoid collisions. The prototype shown in Figure 11 makes reversing control more reliable [5].


Simulation and Validation
In this section, experiments were conducted to confirm the effectiveness of the system. As shown in Figure 12, a real vehicle reversing experimental environment is used. The ultrasonic sensors work under the control of Arduino chips, and the binocular cameras are operated by a laptop in the Microsoft Visual Studio environment.
Our experiment has three parts, as shown in Figure 11a. The first is multi-sensor perception of the environment behind the vehicle: vision information is obtained from the binocular cameras and distance information from the ultrasonic sensors. The second is target recognition and tracking using information fusion and low-rank representation with a particle filter. The third is simulation and testing of the reversing speed control.
Figure 11. Vehicle reversing speed control prototype: (a) position of multi-sensors; (b) prototype of the vehicle reversing speed control system.

Figure 12 shows the real experimental environment for testing the system. The algorithm was tested on video sequences captured by the binocular cameras, as shown in Figure 12b; the binocular cameras were manufactured in our lab. The system has been field-tested on the test vehicle in Figure 12c. Figure 12d-g illustrate the experiments on binocular camera images and obstacle detection using the ultrasonic range finders. As shown in Figure 12h,i, a homemade experimental vehicle was designed to test the system without traffic risk. The ultrasonic sensors and binocular cameras work simultaneously. The ultrasonic sensors first detect whether there is an obstacle within 0-10 m, while the binocular cameras capture the visual information behind the vehicle at the same time. If an obstacle exists, the system targets the obstacle and computes the distance to it based on binocular stereo vision. The initial location of the target is then sent to the particle filter. Finally, the system controls the reversing speed based on target tracking and information fusion.

Experimental Results of Binocular Vision
The distance to an obstacle is measured using the 3D reconstruction of binocular vision, mainly the depth of the obstacle. First, we use a MATLAB toolbox to obtain the parameters of the binocular cameras, which are then imported into Microsoft Visual Studio for the calibration of images captured by the left and right cameras. The results of the calibration are shown in Figure 13, from which we can see that the images are calibrated properly.


Figure 13. Binocular calibration of our system.

After the calibration process, stereo rectification and stereo correspondence are performed to acquire the disparity of the images. As shown in Figure 14a, the distance to obstacles is obtained from the disparity. In addition, the disparity is used to detect the obstacle: the external rectangle corresponds to the target, as shown in Figure 14b, and the objects are successfully detected. From the disparity of the target, we can calculate the distance to an obstacle based on Equation (6); the distance obtained with binocular vision is shown in Figure 15. The obstacle detection results at different distances are shown in Figure 16a-d. In Figure 16a the obstacle is about 3.5 m away, and the detection results show that it is well detected. Figure 16b,c show the detection results when the obstacle is about 4.0 m and 5.0 m away, respectively; the obstacles are again well detected. We choose the largest region as the target to track. In Figure 16d the obstacle is about 6.0 m away and is also well detected. These results show the validity of the proposed binocular vision obstacle detection module.
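For reference, the depth computation behind Equation (6) is presumably the standard pinhole-stereo relation Z = f·B/d. A minimal sketch, with assumed parameter names (the focal length and baseline values in the usage note are illustrative, not the calibrated values of our rig):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo depth: Z = f * B / d.

    disparity_px: horizontal pixel shift between matched points,
    focal_px: focal length in pixels, baseline_m: camera separation.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g., f = 700 px, B = 0.12 m, d = 24 px -> Z = 3.5 m
```

Larger disparities correspond to nearer obstacles, which is why the closest objects dominate the disparity map in Figure 14.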

Information Fusion Based on an Adaptive Kalman Filter
In the previous step, we obtain different measurements of the same obstacle from the two ultrasonic sensors and the binocular cameras. Because the obstacle data comes from four sensors, it must be processed properly. The results of information fusion based on an adaptive Kalman filter using the binocular cameras and two ultrasonic sensors are shown in Figures 17-19, at distances of 1.0, 1.5, and 2.0 m. The proposed information fusion algorithm performs better when the system is strongly influenced by the environment or other factors.
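A minimal sketch of fusing several distance readings with a scalar Kalman filter is given below. It illustrates only the noise-weighted fusion principle; the adaptive filter used in our system additionally adjusts the noise covariances online, which is not reproduced here, and the variance values in the test are assumptions:

```python
def fuse_step(x, P, z_list, R_list, Q=0.01):
    """One 1-D Kalman step fusing several distance readings.

    x, P: prior distance estimate and its variance;
    z_list: sensor readings; R_list: per-sensor measurement
    variances (larger R = less trusted sensor); Q: process noise.
    """
    x_est, P_est = x, P + Q            # predict (constant-position model)
    for z, R in zip(z_list, R_list):   # sequential scalar updates
        K = P_est / (P_est + R)        # Kalman gain
        x_est = x_est + K * (z - x_est)
        P_est = (1.0 - K) * P_est
    return x_est, P_est
```

An outlier reading with a large variance barely moves the estimate, which is the behavior the fusion module relies on when one sensor is disturbed.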


Target Recognition and Tracking Based on the Modified Particle Filter
After information fusion and target detection, the particle filter framework using low-rank representation tracks and identifies obstacles such as humans, animals, and vehicles, as shown in Figure 20. The proposed algorithm can track an object under various light intensities as well as in the shadows of other objects. In Figure 20 we track and recognize a vehicle, marked with the green rectangle; even when the vehicle is in shadow, the target is well tracked and recognized. Figure 21 shows the tracking results for a human based on the particle filter using low-rank representation, where the human is well targeted. Figure 22 shows a contrast experiment in which we track and recognize the man on the left within an object's shadow. From the results, we can see that the selected target is tracked and recognized successfully.


Figure 22. Tracking results based on a particle filter using low-rank representation under a sheltered object.
In order to prove the effectiveness of the system, a frames-per-second (fps) experiment was performed on a computer equipped with an Intel i3-4150 processor at 3.5 GHz and 4 GB of memory in the MATLAB 2012a environment. Table 2 compares the fps of our proposed algorithm with that of the algorithm in [41], using the same video captured by other researchers but different algorithms. When the number of frames is set to 30, our proposed algorithm reaches 148.1246 fps, about 7 to 8 times that of the other algorithm, and the same trend holds as the number of frames increases. These results indicate that the proposed method performs better in obstacle tracking.

On the other hand, we measured the tracking rate over 200 frames each of four videos in which the lighting changes considerably, some provided by other researchers and some captured by ourselves. Table 3 shows the results of the tracking.
As shown in Table 3, the proposed algorithm also reaches good performance. Figure 23 shows the simulation results of vehicle reversing control according to the rules in Table 1. When the vehicle is reversing, the multi-sensor environmental perception module detects and tracks rear obstacles to obtain distance information in real time. In the simulation, a man suddenly appears 15 m behind the vehicle, but the driver keeps reversing at 18 km/h without noticing him. In order to avoid a collision, the speed of the vehicle must be restricted according to the obstacle distance, as Figure 23 shows. As shown in Figure 23a,b, when the distance between the vehicle and the man is about 10 m, obstacle detection and tracking cause the vehicle to slow down from 18 km/h to 10 km/h. When the distance decreases to 5 m, the vehicle automatically slows to 6 km/h. When the driver continues to reverse to 2.5 m from the pedestrian, the speed drops to 2 km/h. Finally, when the distance is less than 0.4 m, the vehicle automatically brakes to a stop to prevent a reversing accident. Thus, with the assistance of multi-sensor environmental perception using low-rank representation and a particle filter, the driver can operate the vehicle more safely and stably and prevent reversing accidents.
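The distance-speed thresholds of this scenario can be expressed as a simple piecewise rule. The function below is illustrative only: it encodes the numbers quoted above rather than the fuzzy controller of [5]:

```python
def speed_limit_kmh(distance_m):
    """Reversing speed cap vs. obstacle distance, following the
    thresholds of the simulation scenario (illustrative only)."""
    if distance_m <= 0.4:
        return 0.0     # automatic braking to a stop
    if distance_m <= 2.5:
        return 2.0
    if distance_m <= 5.0:
        return 6.0
    if distance_m <= 10.0:
        return 10.0
    return 18.0        # no additional restriction in this scenario
```

The ECU compares the fused obstacle distance against these bands and actuates the electronic throttle and brake accordingly.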

Conclusions
In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main modules: multi-sensor environmental perception, target recognition, target tracking, and vehicle reversing speed control. The system simulation and practical testing results demonstrate the validity of the proposed method. The system has been tested on a DODGE SUV and a homemade vehicle. Theoretical analysis and practical experiments show that the proposed system not only performs well in obstacle tracking and recognition, but also enhances the vehicle's reversing control. The information fusion and particle filter tracking improve the accuracy of the measurements and of rear object tracking. The effectiveness of the system is a key factor for active safety: it can help reduce drivers' workload and fatigue and, accordingly, decrease traffic accidents.
