# Moving Object Detection Using an Object Motion Reflection Model of Motion Vectors


## Abstract


## 1. Introduction

## 2. The Proposed Method

#### 2.1. Road Estimation

#### 2.2. Depth Map Calculation

where x_l and x_r represent the corresponding x-coordinates in the left and right images, respectively, f represents the focal length of the camera, b is the baseline distance between the two cameras, and Z is the distance between the camera and the object.
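These quantities combine into the standard stereo depth relation Z = f·b/(x_l − x_r), where the denominator is the disparity. A minimal sketch of the depth-map computation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depth_from_disparity(disparity, focal_length, baseline):
    """Convert a disparity map to a depth map: Z = f * b / (x_l - x_r).

    Pixels with zero disparity (no stereo match, or infinitely far)
    are assigned infinite depth.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length * baseline / disparity[valid]
    return depth

# Example with f = 700 px and b = 0.54 m (a KITTI-like rig):
# a 35-px disparity maps to 700 * 0.54 / 35 = 10.8 m.
d = np.array([[35.0, 0.0], [70.0, 7.0]])
depth = depth_from_disparity(d, 700.0, 0.54)
```

Larger disparities map to nearer objects, which is why the disparity image in Figure 2c and the depth image in Figure 2d look like inverses of each other.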

#### 2.3. System Motion Estimation

where T_x, T_y, and T_z are the translational (linear) motion components of the system, and W_x, W_y, and W_z represent the rotational motion components, called pitch, yaw, and roll, respectively. f is the focal length of the camera, and Z is the actual distance between the camera and the corresponding object.
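For reference, the motion-field model these terms belong to is, in its common form (sign conventions vary across references; this is a standard reconstruction, not a verbatim copy of the paper's Equation (7)):

```latex
\begin{aligned}
u &= \frac{-f\,T_x + x\,T_z}{Z} + \frac{xy}{f}\,W_x - \left(f + \frac{x^2}{f}\right) W_y + y\,W_z \\
v &= \frac{-f\,T_y + y\,T_z}{Z} + \left(f + \frac{y^2}{f}\right) W_x - \frac{xy}{f}\,W_y - x\,W_z
\end{aligned}
```

Here (u, v) is the motion vector at pixel (x, y) measured from the principal point; the translational terms scale with 1/Z while the rotational terms are depth-independent.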

The system motion is simplified by setting T_x, T_y, and W_z to zero in Equation (7). The remaining system motions T_z, W_x, and W_y are then obtained through inverse matrix computation.

T_z, W_x, and W_y are calculated by substituting the vectors into Equation (8). Since the vectors used for this estimate may be composite motion vectors that also reflect object motion, error verification is necessary. The estimated system motion is substituted into Equation (8) to obtain the motion vectors u′ and v′ at each pixel, and the sum of absolute differences (SAD) between these vectors and the motion vectors obtained through the optical flow is calculated as shown in Equation (9).

This process is repeated until the error falls below a threshold e_min. We set e_min to 5, which showed the best performance in several experiments. In general, the system motion can be obtained by repeating the process of Figure 7 about 1000 times. Figure 8a shows the motion vectors extracted using the LK optical flow, while Figure 8b shows the motion vectors calculated from the system motion estimated by the proposed method. Because the system motion is estimated correctly, the motion vectors in the two images differ only slightly.
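The estimation loop described above can be sketched as follows, assuming the reduced model with T_x = T_y = W_z = 0. The sample size, helper names, and the least-squares solve standing in for the paper's inverse-matrix step are assumptions, not the authors' exact implementation:

```python
import numpy as np

def predict_flows(points, depths, f, motion):
    """Motion vectors implied by the system motion (T_z, W_x, W_y),
    using the reduced model with T_x = T_y = W_z = 0."""
    x, y = points[:, 0], points[:, 1]
    T_z, W_x, W_y = motion
    u = x * T_z / depths + x * y / f * W_x - (f + x**2 / f) * W_y
    v = y * T_z / depths + (f + y**2 / f) * W_x - x * y / f * W_y
    return np.stack([u, v], axis=1)

def solve_motion(points, flows, depths, f):
    """Solve for (T_z, W_x, W_y) from a handful of motion vectors;
    least squares stands in for the paper's inverse-matrix computation."""
    x, y = points[:, 0], points[:, 1]
    A = np.vstack([
        np.stack([x / depths, x * y / f, -(f + x**2 / f)], axis=1),
        np.stack([y / depths, f + y**2 / f, -(x * y / f)], axis=1),
    ])
    b = np.concatenate([flows[:, 0], flows[:, 1]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def estimate_system_motion(points, flows, depths, f,
                           e_min=5.0, max_iter=1000, seed=0):
    """Repeat: sample a few vectors, solve for a candidate system motion,
    and score it by the SAD between predicted and observed flows
    (Equation (9)); stop once the best error drops below e_min."""
    rng = np.random.default_rng(seed)
    best_motion, best_err = None, np.inf
    for _ in range(max_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        motion = solve_motion(points[idx], flows[idx], depths[idx], f)
        err = np.abs(predict_flows(points, depths, f, motion) - flows).sum() / len(points)
        if err < best_err:
            best_motion, best_err = motion, err
        if best_err < e_min:
            break
    return best_motion, best_err
```

Sampling small subsets and keeping the lowest-error candidate makes the estimate robust to composite vectors on moving objects, in the spirit of RANSAC-based outlier rejection [9].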

#### 2.4. Moving Object Detection

where σ_s and σ_r denote the standard deviations of the angle difference and the magnitude difference, respectively. Each deviation represents the acceptable tolerance for the error; if a difference exceeds its deviation, the error probability increases. If the error probability is greater than 0.5, the vector extracted by the optical flow is judged to have been affected by the movement of an object. This test is performed on all vectors within the object, and if 70% or more of the vectors satisfy the condition, the object is judged to be moving. Additionally, for objects moving at the same speed as the camera, the motion vector extracted by the optical flow is small or zero. On the other hand, the vectors calculated from the system motion at the same position are independent of the motion of the object because the motion of the camera is compensated. Thus, the proposed method can also detect objects moving at the same speed as the system.
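The decision rule can be sketched as follows. The Gaussian form of the error model, the default σ_s and σ_r values, and all function names are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def error_probability(flow_vec, sys_vec, sigma_s, sigma_r):
    """Probability that an optical-flow vector deviates from the vector
    implied by system motion, combining angle and magnitude differences
    under Gaussian tolerances sigma_s (angle) and sigma_r (magnitude)."""
    ang_f = np.arctan2(flow_vec[1], flow_vec[0])
    ang_s = np.arctan2(sys_vec[1], sys_vec[0])
    # Wrapped angle difference in [0, pi].
    d_ang = np.abs(np.arctan2(np.sin(ang_f - ang_s), np.cos(ang_f - ang_s)))
    d_mag = np.abs(np.linalg.norm(flow_vec) - np.linalg.norm(sys_vec))
    # Small deviations keep the probability near 0; large ones push it to 1.
    return 1.0 - np.exp(-d_ang**2 / (2 * sigma_s**2)) * np.exp(-d_mag**2 / (2 * sigma_r**2))

def is_moving_object(flows, sys_flows, sigma_s=0.3, sigma_r=2.0,
                     p_thresh=0.5, ratio_thresh=0.7):
    """An object is judged moving when at least 70% of its vectors exceed
    the 0.5 error-probability threshold."""
    probs = np.array([error_probability(f, s, sigma_s, sigma_r)
                      for f, s in zip(flows, sys_flows)])
    return np.mean(probs > p_thresh) >= ratio_thresh
```

A static object yields flow vectors that match the system-motion prediction, so nearly all probabilities stay below 0.5; a moving object disagrees in angle, magnitude, or both for most of its vectors.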

## 3. Experimental Results

## 4. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpukar, P.; Migimatsu, T.; Cheng-Yue, R.; et al. An empirical evaluation of deep learning on highway driving. arXiv 2015, arXiv:1504.01716v3.
2. Gavrila, D.M. Sensor-based pedestrian protection. IEEE Intell. Syst. 2001, 16, 77–81.
3. Gopalakrishnan, S. A public health perspective of road traffic accidents. J. Fam. Med. Prim. Care 2012, 1, 144–150.
4. Piccardi, M. Background subtraction techniques: A review. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 10–13 October 2004.
5. Postica, G.; Romanoni, A.; Matteucci, M. Robust moving objects detection in lidar data exploiting visual cues. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016.
6. Hariyono, J.; Hoang, V.D.; Jo, K.H. Moving object localization using optical flow for pedestrian detection from a moving vehicle. Sci. World J. 2014, 2014, 1–8.
7. Bouguet, J.Y. Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm. Intel Corp. 2001, 5, 1–10.
8. Chen, L.; Fan, L.; Xie, G.; Huang, K.; Nuchter, A. Moving-object detection from consecutive stereo pairs using slanted plane smoothing. IEEE Trans. Intell. Transp. Syst. 2017, 18, 3093–3102.
9. Kitt, B.; Geiger, A.; Lategahn, H. Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010.
10. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Susstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
11. Seo, S.W.; Lee, G.C.; Yoo, J.S. Motion field estimation using U-disparity map in vehicle environment. J. Electr. Eng. Technol. 2017, 12, 428–435.
12. Labayrade, R.; Aubert, D.; Tarel, J.P. Real time obstacle detection in stereovision on non flat road geometry through v-disparity representation. In Proceedings of the IEEE Intelligent Vehicle Symposium, Versailles, France, 17–21 June 2002.
13. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds. Remote Sens. 2012, 4, 1392–1410.
14. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV, USA, 27–30 June 2016.
15. Geiger, A.; Roser, M.; Urtasun, R. Efficient large-scale stereo matching. In Proceedings of the Asian Conference on Computer Vision (ACCV 2010), Queenstown, New Zealand, 8–12 November 2010.
16. Scharstein, D.; Pal, C. Learning conditional random fields for stereo. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), Minneapolis, MN, USA, 17–22 June 2007.
17. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122.
18. Giachetti, A.; Campani, M.; Torre, V. The use of optical flow for road navigation. IEEE Trans. Robot. Autom. 1998, 14, 34–48.
19. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Lawrence Zitnick, C. Microsoft COCO: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, 6–12 September 2014.
20. Keller, C.; Enzweiler, M.; Gavrila, D.M. A new benchmark for stereo-based pedestrian detection. In Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011.
21. Zhou, D.; Frémont, V.; Quost, B.; Dai, Y.; Li, H. Moving object detection and segmentation in urban environments from a moving platform. Image Vis. Comput. 2017, 68, 76–87.

**Figure 2.** Disparity map extraction using Efficient Large-Scale Stereo Matching (ELAS): (**a**) left image, (**b**) right image, (**c**) disparity image, and (**d**) depth image.

**Figure 8.** Motion vector comparison: (**a**) motion vectors acquired using optical flow, and (**b**) motion vectors acquired using system motion.

**Figure 9.** The motion vectors of a moving object: (**a**) motion vectors extracted by the Lucas-Kanade (LK) optical flow, and (**b**) motion vectors calculated using system motion.

**Figure 13.** The result of applying the proposed method to the Daimler dataset: (**a**) result image 1, (**b**) result image 2, (**c**) result image 3, and (**d**) result image 4.

| Algorithm | Processing Time (ms) |
|---|---|
| Stereo matching (ELAS [15]) | 73 |
| Road estimation | 1 |
| System motion estimation | 97 |
| Object detection (YOLO [14]) | 17 |
| Moving object detection | 2 |
| Total | 190 |

| Method | Yaw Error (degree/s) |
|---|---|
| The proposed method | 0.00471 |
| Hariyono's method [6] | 0.00781 |
| Seo's method [11] | 0.00779 |

| Dataset | Frames | Moving Objects | Static Objects |
|---|---|---|---|
| Daimler dataset | 22,500 | 437 | 794 |
| Our dataset | 12,000 | 240 | 545 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yoo, J.; Lee, G.-c.
Moving Object Detection Using an Object Motion Reflection Model of Motion Vectors. *Symmetry* **2019**, *11*, 34.
https://doi.org/10.3390/sym11010034
