ISPRS International Journal of Geo-Information
  • Editor’s Choice
  • Article
  • Open Access

31 May 2019

Speed Estimation of Multiple Moving Objects from a Moving UAV Platform

1 Department of Civil, Environmental and Geomatics Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA
2 Lab of Remote Sensing Image Processing, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Deep Learning and Computer Vision for GeoInformation Sciences

Abstract

Speed detection of moving objects using an optical camera has long been an important subject of study in computer vision. It is a key component in many application areas, such as transportation systems, military and naval applications, and robotics. In this study, we implemented a speed detection system for multiple moving objects on the ground from a moving platform in the air. A detect-and-track approach is used for primary tracking of the objects. Faster R-CNN (region-based convolutional neural network) is applied to detect the objects, and a discriminative correlation filter with channel and spatial reliability tracking (CSRT) is used for tracking. Feature-based image alignment (FBIA) is performed for each frame to get the proper object location. In addition, the structural similarity index measure (SSIM) is computed to check how similar the current frame is to the object detection frame. This measurement is necessary because the platform is moving, and new objects may be captured in a new frame. We achieved a speed accuracy of 96.80% with our framework with respect to the real speed of the objects.

1. Introduction

Real-time traffic monitoring is a challenging task. Even with the best technologies available, we still struggle with insufficient information from the road to solve traffic problems. Traffic monitoring systems have become more intelligent than ever before. Traffic signaling systems are adaptive, car counting is automated [,], and car density estimation on the road is also becoming automated [,]. Every day, new technologies are introduced into this field to make intelligent traffic monitoring systems (ITMS) better. One of the latest technological additions to this field is the use of UAVs (unmanned aerial vehicles). UAVs can be very effective for monitoring traffic where traffic cameras are not available, especially on long driveways, forest, mountain, and desert highways, and at public events []. In addition, UAVs can be very effective for tracking and monitoring a vehicle for law enforcement and crime prevention. Static traffic cameras have a limited field of view, whereas drone cameras can overcome this limitation. High efficiency in collecting data in remote areas, as well as for small and narrow lanes, can be achieved by using a UAV platform. With this research, we are now more capable of collecting detailed vehicle-level data instead of only road- and lane-level data. Vehicle-level data give us driving trajectories, which are beneficial for monitoring driving behavior and patterns [].
UAVs can provide a large field of view with more mobility and lower cost compared to traditional ground-based transportation sensors or low-angle cameras. UAVs can record and transmit data for investigation directly from the field but are rarely dedicated solely to monitoring transportation and roadways. The short battery life of UAVs and the lack of efficient algorithms to detect and track moving vehicles [] are the main factors preventing this technology from being widely used. It is extremely difficult to predict accurate motions when both the platform and the target are moving at variable speeds. Since a UAV platform has a total of six degrees of freedom, compensating for these movements is challenging. Moreover, vibration and weather can negatively affect the detection and tracking processes.
In this study, we estimated the speed of multiple vehicles from a moving UAV platform using video streams of optical cameras. Detecting speeding vehicles or stopped vehicles in unsafe positions in real time will be a lifesaving solution. Locating trapped vehicles after natural disasters, such as snow, rain, flooding, and hurricanes, will be a potential application of this research. This study will be beneficial not only for civilian purposes but also for military purposes. In particular, tracking moving vehicles from a UAV platform will be extremely effective if the speed of the moving vehicles can be accurately estimated.
In this paper, the related work is introduced in Section 2. We describe the methodology in detail in Section 3 and explain how the framework is built. Section 4 documents the dataset information used for the experiments. The results of the framework and accuracy are discussed in Section 5. The conclusion and future scope of the study are discussed in Section 6.

3. Methodology

Vehicle detection and tracking from a UAV platform are attracting numerous researchers, and this type of research can open many new avenues. Still, some issues need to be addressed more carefully. The first is (a) increasing the detection rate when the vehicle size is very small. Faster region-based convolutional neural network (R-CNN) [] addresses this problem, which is why we adopted this CNN algorithm in our study to solve the detection problem. Another existing issue in this research area is (b) maintaining tracking over a longer time while both the platform and the object are moving. The final problem is (c) extracting the speed information of the vehicle. Solving the last problem is challenging because reference objects to measure the exact distance traveled by the vehicles are not always available. Furthermore, a UAV platform does not maintain the same elevation or direction at all times. We handled this problem pragmatically in our study: we created a database of typical sizes for different vehicle categories, and when a vehicle of a particular category was detected, we used it as a reference object to measure the distance traveled. We then converted the distance information into speed with respect to time. In this section, we discuss Faster R-CNN as the vehicle detection algorithm, channel and spatial reliability tracking (CSRT) as the tracking algorithm, feature-based image alignment (FBIA) as the image alignment algorithm, and the structural similarity index measure (SSIM) as the similarity measure.

3.1. Faster R-CNN

Faster R-CNN is used for vehicle detection in this research. The R-CNN family was originally introduced by Girshick et al. in 2013. After several iterations and improvements, Girshick published a second paper, Fast R-CNN, in 2015. A few months later, Ren et al. published the third paper in this series, Faster R-CNN []. The Faster R-CNN algorithm operates in three steps. Figure 1 explains the basic functionality of Faster R-CNN; it takes a raw image as input.
Figure 1. Faster region-based convolutional neural network (R-CNN) architecture.
Step 1 performs feature extraction using a pre-trained CNN. In Step 2, the extracted features are passed in parallel to two different components of Faster R-CNN: (a) a region proposal network (RPN), which determines the potential objectness, i.e., whether any object is present at a certain location, and (b) a region of interest (ROI) pooling layer, which pools and scales down the proposed regions and passes the ROI bounding boxes to two different CNN networks. Step 3 uses these two networks to determine the object type and the location of the object.
Implementation of Faster R-CNN: Vehicles in the training images are labeled using the ‘LabelImg’ [] software. An annotation file is created for every labeled image. The TensorFlow Object Detection (TFOD) API is used as the training platform. Three different kinds of record files are created that store the location and class information about the datasets.
The record directory stores these three files:
  • Training.record: The serialized images, bounding boxes, and labels used for training.
  • Testing.record: The images, bounding boxes, and labels used for testing.
  • Classes.pbtxt: A plain-text file containing the names of the class labels and their unique integer IDs (a minimal example follows this list).
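For illustration, a minimal Classes.pbtxt for the two classes used in this study (“SmallVehicle” and “LargeVehicle”, introduced in Section 4) might look like the following; the exact file contents and the id assignment are assumptions based on the standard TFOD label-map format.

```
item {
  id: 1
  name: 'SmallVehicle'
}
item {
  id: 2
  name: 'LargeVehicle'
}
```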
A special configuration file instructs the TFOD API how to train the model. This file contains the location of the training data, the size of the images, and hyper-parameters such as the learning rate and the number of iterations. After training is done, a frozen model (a model file with weights) is created; this model is later used for vehicle detection. For this experiment, 50 k iterations and a learning rate of 0.0001 are used.
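As a sketch of how such a frozen model is consumed downstream, the following assumes a TensorFlow 1.x frozen inference graph exported by the TFOD API; the file name, tensor names, and the 0.5 score threshold follow common TFOD conventions and are not taken from the authors' code.

```python
# Minimal sketch: run a TFOD frozen graph on one frame (assumed TF 1.x workflow).
import numpy as np
import tensorflow as tf
import cv2

GRAPH_PATH = "frozen_inference_graph.pb"   # assumed export name from the TFOD API

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def detect_vehicles(frame, sess, score_thresh=0.5):
    """Return (boxes, classes, scores) above score_thresh for one BGR frame."""
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)[np.newaxis, ...]
    tensors = {name: graph.get_tensor_by_name(name + ":0")
               for name in ("detection_boxes", "detection_scores", "detection_classes")}
    out = sess.run(tensors,
                   feed_dict={graph.get_tensor_by_name("image_tensor:0"): image})
    keep = out["detection_scores"][0] >= score_thresh
    return (out["detection_boxes"][0][keep],
            out["detection_classes"][0][keep].astype(int),
            out["detection_scores"][0][keep])

with tf.Session(graph=graph) as sess:
    frame = cv2.imread("sample_frame.jpg")      # illustrative input
    boxes, classes, scores = detect_vehicles(frame, sess)
```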

3.2. CSRT

Object detection does not preserve information about detections in previous frames. In object tracking, the algorithm gathers a lot of information from each object’s previous locations, such as its direction and motion. Moreover, detection is mandatory before tracking; from the detection, it is possible to know the appearance of the objects. A good tracking algorithm will use all the information it has to predict the next location of the objects. Several common tracking algorithms are available: (a) the BOOSTING tracker is slow and does not work well; (b) the MIL tracker gives better accuracy than the BOOSTING tracker but does a poor job of reporting failure; (c) the KCF tracker is faster than the BOOSTING and MIL trackers but is not able to handle occlusion; (d) the MedianFlow tracker does a nice job of reporting failure but is unable to handle fast-moving objects; (e) the TLD tracker does best under multiple occlusions but has a higher false-positive rate; and (f) the MOSSE tracker is very fast but not very accurate. We chose the CSRT tracker [] for our experiment, as this algorithm gives good accuracy, and we were not concerned about occlusions.
CSRT uses a spatial reliability map to adjust the filter support to the part of the selected region that is suitable for tracking. It uses only two standard features, HOG and color names, and weights their per-channel correlation responses by channel reliability.
Pseudocode for CSRT
Inputs:
Image, object position on the previous frame, filter, scale, color histogram, channel reliability.
Localization and scale estimation:
  • Create a new target position from the position of the maximum of the correlation between the filter and the image patch features extracted at the previous position, weighted by the channel reliability scores.
  • Estimate detection reliability using per-channel responses.
  • Use the new location to get a new scale.
Update:
  • Extract foreground and background histograms.
  • Update foreground and background histograms.
  • Estimate reliability map.
  • Estimate new filter.
  • Estimate learning channel reliability.
  • Calculate channel reliability.
  • Update filter.
  • Update channel reliability.
Repeat:
Perform (i) Localization and scale estimation and do (ii) Update for every new detection.
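In practice, CSRT is available off the shelf. The sketch below assumes OpenCV's implementation is used (the paper does not name a library); the video file name and the bounding-box values are illustrative, and in the full framework the boxes would come from the Faster R-CNN detector.

```python
# Minimal sketch: one CSRT tracker per detected vehicle, updated frame by frame.
import cv2

cap = cv2.VideoCapture("uav_video.mp4")          # illustrative file name
ok, frame = cap.read()

detections = [(100, 150, 60, 40), (300, 220, 58, 42)]   # (x, y, w, h), assumed values

trackers = []
for box in detections:
    tracker = cv2.TrackerCSRT_create()           # requires opencv-contrib-python
    tracker.init(frame, box)
    trackers.append(tracker)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    for tracker in trackers:
        success, box = tracker.update(frame)     # new (x, y, w, h) if tracking succeeded
        if success:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```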

3.3. FBIA

Image alignment is a necessary step for speed estimation. It is performed for all consecutive frames with respect to the detection frame to get the exact distance a car moves in a particular time. Image alignment, or image registration, is a technique that finds common features between two images and lines one image up with respect to the other.
The core of the image alignment technique is a 3 × 3 matrix called the homography. If $(x_1, y_1)$ is a point in the first frame and $(x_2, y_2)$ is the same point in the second frame, then the homography relates them in the following way:
$$\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} \quad (1)$$
We need four or more point correspondences to find the homography (H). The ORB feature detector is applied to find the same points in two comparable frames. ORB stands for “oriented FAST and rotated BRIEF”. The FAST algorithm locates keypoints that are stable under image transformations such as translation, scaling, and rotation. The BRIEF algorithm works as a descriptor that encodes the appearance of the points so that features can be matched; the same descriptor is produced for the same physical point in the two images. The Hamming distance is used to measure the similarity between two features, but it is not uncommon for 20% to 30% of the matches to be incorrect. A robust estimation technique called random sample consensus (RANSAC), which produces the right result even in the presence of a large number of bad matches, is applied to remove the outliers. After the homography is calculated, the transformation is applied to all the pixels in one frame to map it to the other frame.
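The alignment just described maps naturally onto standard library calls. The sketch below assumes OpenCV; the feature count and the fraction of matches kept are illustrative choices, not values from the paper.

```python
# Minimal sketch: ORB features + Hamming matching + RANSAC homography + warp.
import cv2
import numpy as np

def align_frame(frame, reference, max_features=500, keep_fraction=0.15):
    """Warp `frame` into the coordinate system of `reference` via a homography."""
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_frm = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(max_features)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_frm, des_frm = orb.detectAndCompute(gray_frm, None)

    # Hamming distance for binary BRIEF descriptors; keep only the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep_fraction)]

    pts_frm = np.float32([kp_frm[m.queryIdx].pt for m in matches])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # RANSAC discards the remaining bad matches while estimating H.
    H, _ = cv2.findHomography(pts_frm, pts_ref, cv2.RANSAC)
    h, w = reference.shape[:2]
    aligned = cv2.warpPerspective(frame, H, (w, h))
    return aligned, H
```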
Figure 2 shows the result of a frame alignment. Figure 2a,b show a reference frame and a frame to align. A match result is shown using Hamming distance in Figure 2c, and finally, the aligned image is displayed in Figure 2d.
Figure 2. Feature-based image alignment result.

3.4. SSIM

The structural similarity index measure (SSIM) is used in this experiment to make sure that contiguous frames are sufficiently similar. The SSIM threshold for this study is set to 0.50, meaning the detection frame and the current frame should have an SSIM of at least 0.50. This index is a very important part of the experiment because its value determines when the detection algorithm (Faster R-CNN) needs to run. The tracking algorithm (CSRT) runs when SSIM is greater than 0.50. In Figure 3, the SSIM is 0.21, which means it is time to run the detection algorithm. The SSIM threshold of 0.50 was chosen after several experiments to make sure there is enough overlap between the frames while avoiding duplicated detections. The detection algorithm is much slower than the tracking algorithm, which turned out to be the bottleneck limiting real-time performance.
Figure 3. Structural similarity index measure (SSIM) result with a value of 0.21.
SSIM attempts to model the perceived change in the structural information of the image. Rather than comparing entire images, Equation (2) compares two windows (small sub-samples). In addition to perceived changes, this approach accounts for changes in the structure of the images. The value of SSIM can vary between −1 and 1, where 1 indicates perfect similarity.
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \quad (2)$$
In Equation (2), $x$ and $y$ are corresponding $N \times N$ windows in the two images, $\mu_x$ and $\mu_y$ are the mean pixel values of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are their variances, and $\sigma_{xy}$ is their covariance. $c_1$ and $c_2$ are two stabilizing constants that act on a weak denominator.
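A minimal sketch of this check, assuming scikit-image's implementation of Equation (2) on grayscale frames (the paper does not specify which implementation was used); 0.50 is the threshold stated above.

```python
# Minimal sketch: decide whether the current frame has drifted enough to rerun detection.
import cv2
from skimage.metrics import structural_similarity

def need_redetection(detection_frame, current_frame, threshold=0.5):
    """Return True when SSIM between the frames falls below the threshold."""
    gray_det = cv2.cvtColor(detection_frame, cv2.COLOR_BGR2GRAY)
    gray_cur = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
    score = structural_similarity(gray_det, gray_cur)   # windowed comparison, range [-1, 1]
    return score < threshold
```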

3.5. Speed Estimation

Two categories of vehicles are detected for this study: small vehicles and large vehicles. Sedan cars from the small-vehicle category and eighteen-wheel trucks from the large-vehicle category are used as reference objects to measure vehicle speed. The average length of a sedan car is 186 inches [], and the average length of a multi-axle eighteen-wheel truck is 840 inches []. When a sedan car or an eighteen-wheel truck is detected in a frame, the number of pixels spanning the length of the vehicle is calculated. The traveled distance of a vehicle is then calculated by measuring the centroid displacement of the vehicle from the reference frame to the current frame. Finally, the speed of the vehicle is obtained from the traveled distance with respect to time (time is calculated from the frame rate).
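The computation above can be summarized in a short sketch. The reference lengths come from the text, while the box length, centroids, and frame counts in the example are illustrative assumptions.

```python
# Minimal sketch: convert pixel displacement into miles per hour via a reference length.
SEDAN_LENGTH_IN = 186.0          # average sedan length (inches), from the text
TRUCK_LENGTH_IN = 840.0          # average eighteen-wheel truck length (inches), from the text

def speed_mph(ref_box_length_px, ref_length_in, centroid_start, centroid_end,
              frames_elapsed, fps):
    """Estimate speed from centroid displacement between two frames."""
    inches_per_pixel = ref_length_in / ref_box_length_px
    dx = centroid_end[0] - centroid_start[0]
    dy = centroid_end[1] - centroid_start[1]
    distance_in = (dx ** 2 + dy ** 2) ** 0.5 * inches_per_pixel
    seconds = frames_elapsed / fps
    return (distance_in / 63360.0) / (seconds / 3600.0)   # 63,360 inches per mile

# Example: a sedan 62 px long whose centroid moves 80 px over 30 frames at 30 fps (~13.6 mph).
print(round(speed_mph(62, SEDAN_LENGTH_IN, (100, 200), (180, 200), 30, 30), 1))
```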
Figure 4 explains the complete framework of the study. The object detection algorithm (Faster R-CNN) is applied under three conditions: (a) the first frame, (b) a frame number that is a multiple of 100, or (c) an SSIM index of less than 0.5. Otherwise, when the SSIM index is greater than 0.5, FBIA is applied for image alignment and CSRT is applied for object tracking. The speeds of the vehicles are calculated from the statistics gathered during the tracking process. The image alignment (FBIA) process can distort the video frames; hence, applying the inverse homography is important to obtain undistorted video frames with the tracking information.
Figure 4. Complete flow chart of the framework.
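A high-level sketch of the control flow in Figure 4, reusing the hypothetical helpers sketched in the preceding subsections (detect_vehicles, align_frame, need_redetection); init_csrt is a further hypothetical helper that converts detector boxes into CSRT trackers, and the every-100th-frame and 0.5 SSIM rules come from the text.

```python
# Minimal sketch of the detect-or-track decision loop (not the authors' exact code).
import cv2

def process_video(path, sess):
    cap = cv2.VideoCapture(path)
    trackers, detection_frame, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        redetect = (index == 0 or index % 100 == 0 or
                    need_redetection(detection_frame, frame, threshold=0.5))
        if redetect:
            detection_frame = frame
            boxes, classes, scores = detect_vehicles(frame, sess)   # Faster R-CNN
            trackers = [init_csrt(frame, box) for box in boxes]     # hypothetical helper
        else:
            aligned, H = align_frame(frame, detection_frame)        # FBIA
            for tracker in trackers:
                tracker.update(aligned)                             # CSRT
            # Speeds come from centroid displacements (Section 3.5); boxes are mapped
            # back with the inverse homography (H) for display on the original frame.
        index += 1
```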

4. Datasets

An Inspire 1 V2.0 UAV with a ZENMUSE X3 optical camera is used for this study. The camera offers 4K video, a 16 MP sensor, and a 3-axis gimbal. Two types of videos are captured for the experiment: while the drone is static and while the drone is moving. The videos are collected from three locations: (a) the intersection of Glades Road and State Road 441 in Boca Raton, FL (static), (b) Burt Aaronson South County Regional Park, Boca Raton, FL (static and moving), and (c) Bluegrass Yeehaw Junction, Okeechobee, FL (moving). Figure 5 shows their locations in Google Maps.
Figure 5. Google Maps data collection points.
From these videos, two classes are prepared for this study, “SmallVehicle” and “LargeVehicle”. All the small cars, sedans, SUVs, and small trucks are labeled as “SmallVehicle”. The big vans, buses, and eighteen-wheel trucks are labeled as “LargeVehicle”. A total of 17,304 small vehicles and 784 large vehicles are used for training and validation of Faster R-CNN. Among these data, 70% is used for training, 26% for validation, and 4% for manual testing. Open-source software called “LabelImg” [] is used for image labeling. The images are collected under different light conditions.

5. Results and Discussion

The results of this study are presented in two tables below. Table 1 shows the detection results, and Table 2 presents the speed estimation results. Table 1 compares manual detection and Faster R-CNN detection. We collected sample frames from all three video capture locations. Glades Road is a city road, so we obtained fewer sample data for large vehicles at that location. Burt Aaronson is a public park, so big trucks are not allowed there. We obtained good sample data for large vehicles on the turnpike at Bluegrass Yeehaw Junction. We achieved 90.77% average accuracy for small-vehicle detection and 96.15% average accuracy for large-vehicle detection. The F score is also calculated as $\left(\frac{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}}{2}\right)^{-1}$. The F score for large vehicles is 98.04% and for small vehicles it is 95.16%; there is no misclassification between the two classes, although we found a number of missed detections.
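As a consistency check (a worked instance under the assumption that the absence of misclassifications implies a precision of 1.0, with the 96.15% detection rate treated as recall), the large-vehicle F score follows directly:

$$F = \left(\frac{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}}{2}\right)^{-1} = \frac{2 \times 0.9615 \times 1.0}{0.9615 + 1.0} \approx 0.9804$$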
Table 1. Vehicle detection results.
Table 2. Speed detection results.
Table 2 reports the speed accuracy of the framework. In the static video of Glades Road, we observed 43 vehicles with an average speed of 0 mph, 10 vehicles with an average speed of 30 mph, and 15 vehicles with an average speed of 25 mph. Our framework gives 100% accuracy for 0 mph vehicles, 98.33% for 30 mph, and 96% for 25 mph. The average RMSE is 0.564 for our case studies. Figure 6a shows speed estimation results for the Glades Road video. The Bluegrass video was recorded from a moving platform; we obtained a 100% accuracy for stopped vehicles and 95.83% accuracy for moving vehicles. Figure 6b shows speed estimation results for the Bluegrass video. At Burt Aaronson, we achieved 100% accuracy for stopped vehicles from a moving UAV and 96% accuracy for moving vehicles.
Figure 6. Speed estimation results.
To measure the accuracy at the Glades Road and Bluegrass locations, we picked two spots on the map and measured the distance between them using Google Maps. We then timed how long each car took to cover that distance and calculated its speed. At Burt Aaronson Park, we drove two cars and recorded the speedometer readings. Later, in the lab, we synchronized the speedometer recordings with the UAV recording and compared the speedometer readings to our framework’s outputs. In Figure 6c, the right-side speedometer reads 21 mph, and the framework also shows 21 mph. The left-side speedometer displays 23 mph, while the framework shows 24 mph.

6. Conclusions and Future Work

This study is a stepping stone toward solving a long-standing problem. We achieved 100% speed accuracy for non-moving vehicles. In addition, we achieved over 96% speed accuracy from a static platform (Table 2). The most challenging problem was estimating the speed when both the platform and the target were moving; we accomplished a speed accuracy of over 95% in this scenario. We observed that the processing frame rate (fps) is higher when fewer vehicles are present and drops as the number of vehicles increases, because the tracking algorithm requires more resources when more vehicles are in the frame. We did not perform a comparative study of how fps scales with the number of vehicles.
Since UAV technologies, computer vision, and photogrammetry algorithms are under active development, the image alignment step (FBIA) could be skipped if the geolocation of frames from the UAV videos can be determined accurately. Furthermore, georeferenced videos would let us handle UAV elevation and angle fluctuations better, as image alignment alone cannot handle them well. Implementing the framework under different weather and light conditions is another future direction to pursue.

Author Contributions

Debojit Biswas and Hongbo Su conceived and designed the experiments; Debojit Biswas performed the experiments and analyzed the data; Debojit Biswas and Hongbo Su wrote the paper; Aleksandar Stevanovic and Chengyi Wang revised the paper.

Funding

This research was funded by Guangxi innovation driven development special projects on “Product development and applications of nearshore dual frequency LiDAR detector” and a Florida Space Research Grant (No. NNX15_002). The APC was funded by Guangxi innovation driven development special projects on “Product development and applications of nearshore dual frequency LiDAR detector”.

Acknowledgments

The authors are thankful to FDOT staff for their help with acquiring the data under the project BDV27 TWO 977-12—”Development of a Traffic Map Evaluation Tool for Traffic Management Center Applications”. It should be noted that the opinions, findings, and conclusions expressed in this publication are those of the authors and not necessarily those of the Florida Department of Transportation or the U.S. Department of Transportation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Biswas, D.; Su, H.; Wang, C.; Blankenship, J.; Stevanovic, A. An automatic car counting system using OverFeat framework. Sensors 2017, 17, 1535. [Google Scholar] [CrossRef] [PubMed]
  2. Alpatov, B.A.; Babayan, P.V.; Ershov, M.D. Vehicle detection and counting system for real-time traffic surveillance. In Proceedings of the 7th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 10–14 June 2018. [Google Scholar] [CrossRef]
  3. Biswas, D.; Su, H.; Wang, C.; Stevanovic, A.; Wang, W. An automatic traffic density estimation using Single Shot Detection (SSD) and MobileNet-SSD. Phys. Chem. Earth Parts A/B/C 2019, 110, 176–184. [Google Scholar] [CrossRef]
  4. Eamthanakul, B.; Ketcham, M.; Chumuang, N. The Traffic Congestion Investigating System by Image Processing from CCTV Camera. In Proceedings of the International Conference on Digital Arts, Media and Technology (ICDAMT), Chiangmai, Thailand, 1–4 May 2017. [Google Scholar] [CrossRef]
  5. Wang, L.; Chen, F.; Yin, H. Detecting and tracking vehicles in traffic by unmanned aerial vehicles. Autom. Constr. 2016, 72, 294–308. [Google Scholar] [CrossRef]
  6. Toledo, T. Driving behavior: Models and challenges. Transp. Rev. 2007, 27, 65–84. [Google Scholar] [CrossRef]
  7. Kanistras, K.; Martins, G.; Rutherford, M.J.; Valavanis, K.P. A survey of unmanned aerial vehicles (UAVs) for traffic monitoring. In Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 28–31 May 2013; pp. 221–234. [Google Scholar]
  8. Rodriguez-Canosa, G.R.; Thomas, S.; del Cerro, J.; Barrientos, A.; MacDonald, B. A realtime method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera. Remote Sens. 2012, 4, 1090–1111. [Google Scholar] [CrossRef]
  9. Seenouvong, N.; Watchareeruetai, U.; Nuthong, C.; Khongsomboon, K.; Ohnishi, N. A computer vision based vehicle detection and counting system. In Proceedings of the 8th International Conference on Knowledge and Smart Technology (KST), Chiangmai, Thailand, 3–6 February 2016. [Google Scholar] [CrossRef]
  10. Qin, H.; Zhen, Z.; Ma, K. Moving object detection based on optical flow and neural network fusion. Int. J. Intell. Comput. Cybern. 2016, 9, 325–335. [Google Scholar] [CrossRef]
  11. Xu, J.; Wang, G.; Sun, F. A novel method for detecting and tracking vehicles in traffic image sequence. In Proceedings of the Volume SPIE 8878, Fifth International Conference on Digital Image Processing, Beijing, China, 19 July 2013; p. 88782P. [Google Scholar]
  12. Elloumi, M.; Dhaou, R.; Escrig, B.; Idoudi, H.; Saidane, L.A. Monitoring Road Traffic with a UAV-based System. In Proceedings of the IEEE Wireless Communications and Networking Conference, Barcelona, Spain, 15–18 April 2018. [Google Scholar]
  13. Gleason, J.; Nefian, A.V.; Bouyssounousse, X.; Fong, T.; Bebis, G. Vehicle Detection from Aerial Imagery. In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2065–2070. [Google Scholar]
  14. Leitloff, J.; Hinz, S.; Stilla, U. Vehicle detection in very high-resolution satellite images of city areas. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2795–2806. [Google Scholar] [CrossRef]
  15. Tuermer, S.; Kurz, F.; Reinartz, P.; Stilla, U. Airborne vehicle detection in dense urban areas using HoG features and disparity maps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2327–2337. [Google Scholar] [CrossRef]
  16. Leitloff, J.; Rosenbaum, D.; Kurz, F.; Meynberg, O.; Reinartz, P. An operational system for estimating road traffic information from aerial images. Remote Sens. 2014, 6, 11315–11341. [Google Scholar] [CrossRef]
  17. Hardjono, B.; Tjahyadi, H.; Widjaja, E.A.; Rhizma, M.G.A. Vehicle travel distance and time prediction using virtual detection zone and CCTV data. In Proceedings of the IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, China, 27–30 October 2017. [Google Scholar] [CrossRef]
  18. Cao, X.; Lan, J.; Yan, P.; Li, X. Vehicle detection and tracking in airborne videos by multi-motion layer analysis. Mach. Vis. Appl. 2011, 23, 921–935. [Google Scholar] [CrossRef]
  19. Lingua, A.; Marenchino, D.; Nex, F. Performance analysis of the SIFT operator for automatic feature extraction and matching in photogrammetric applications. Sensors 2009, 9, 3745–3766. [Google Scholar] [CrossRef] [PubMed]
  20. Ibrahim, O.; ElGendy, H.; ElShafee, A.M. Speed Detection Camera System using Image Processing Techniques on Video Streams. Int. J. Comput. Electr. Eng. 2011, 3, 6. [Google Scholar] [CrossRef]
  21. Wu, J.; Liu, Z.; Li, J.; Gu, C.; Si, M.; Tan, F. An algorithm for automatic vehicle speed detection using video camera. In Proceedings of the International Conference on Computer Science & Education, Nanning, China, 25–28 July 2009. [Google Scholar]
  22. Rad, A.G.; Dehghani, A.; Karim, M.R. Vehicle speed detection in video image sequences using CVS method. Int. J. Phys. Sci. 2010, 5, 2555–2563. [Google Scholar]
  23. Ranjit, S.S.S.; Anas, S.A.; Subramaniam, S.K.; Lim, K.C.; Fayeez, A.F.I.; Amirah, A.R. Real-Time Vehicle Speed Detection Algorithm using Motion Vector Technique. In Proceedings of the International Conference on Advances in Electrical & Electronics, NCR, India, 28–29 December 2012. [Google Scholar]
  24. Wang, J.X. Research of vehicle speed detection algorithm in video surveillance. In Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 11–12 July 2016; pp. 349–352. [Google Scholar]
  25. Hua, S.; Kapoor, M.; Anastasiu, D.C. Vehicle Tracking and Speed Estimation from Traffic Videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  26. Zitová, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  27. Aicardi, I.; Nex, F.; Gerke, M.; Lingua, A.M. An Image-Based Approach for the Co-Registration of Multi-Temporal UAV Image Datasets. Remote Sens. 2016, 8, 779. [Google Scholar] [CrossRef]
  28. Sheng, Y.; Shah, C.A.; Smith, L.C. Automated image registration for hydrologic change detection in the lake-rich arctic. IEEE Geosci. Remote Sens. Lett. 2008, 5, 414–418. [Google Scholar] [CrossRef]
  29. Behling, R.; Roessner, S.; Segl, K.; Kleinschmit, B.; Kaufmann, H. Robust automated image co-registration of optical multi-sensor time series data: Database generation for multi-temporal landslide detection. Remote Sens. 2014, 6, 2572–2600. [Google Scholar] [CrossRef]
  30. Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A.M.; Noardo, F.; Spanò, A. UAV Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B1, 2016 XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016. [Google Scholar]
  31. Jiang, S.; Jiang, W.; Huang, W.; Yang, L. UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line. Remote Sens. 2017, 9, 278. [Google Scholar] [CrossRef]
  32. Vacca, G.; Dessì, A.; Sacco, A. The Use of Nadir and Oblique UAV Images for Building Knowledge. ISPRS Int. J. Geo-Inf. 2017, 6, 393. [Google Scholar] [CrossRef]
  33. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. NIPS 2015, arXiv:1506.01497. [Google Scholar] [CrossRef]
  34. LabelImg. Available online: https://github.com/tzutalin/labelImg (accessed on 7 January 2019).
  35. Lukezic, A.; Vojir, T.; Zajc, L.C.; Matas, J.; Kristan, M. Discriminative Correlation Filter Tracker with Channel and Spatial Reliability. Int. J. Comput. Vis. 2018, 126, 671–688. [Google Scholar] [CrossRef]
  36. Creditdonkey. Available online: https://www.creditdonkey.com/average-weight-car.html (accessed on 7 January 2019).
  37. Truckersreport. Available online: https://truckersreport.wordpress.com/2013/09/09/20-insane-but-true-things-about-18-wheelers/ (accessed on 7 January 2019).
