Estimating Vehicle and Pedestrian Activity from Town and City Traffic Cameras
Abstract
1. Introduction
- The data are timely. Cameras provide either continuous video or stills updated many times per hour, and these images can be accessed almost immediately after capture.
- A wide range of moving objects can be detected, including cars, buses, motorcycles, vans, cyclists, and pedestrians, each providing useful information on everything from changes in individual behavior to impacts on services and local economies.
- The source is open. Data are widely accessible to groups with differing interests, and reusing a public resource adds value to public investment without incurring additional collection costs.
- Cameras provide coverage over a range of geographic settings: the center of towns and cities, as well as many areas that show either retail or commuting traffic and pedestrian flows.
- There is a large network of traffic cameras capturing data simultaneously, allowing the selection of individual cameras for different purposes whilst still retaining acceptable coverage and precision.
- In almost all cases, individuals and vehicles are not personally identifiable: identifying features such as number plates and faces cannot be resolved at the low resolution of the images.
Background
2. Data
2.1. Camera Selection
2.2. Annotation of Images
3. Processing Pipeline: Architecture
4. Deep Learning Model
4.1. Data Cleaning
4.2. Object Detection
4.2.1. Deep Learning Model Comparison
4.2.2. Investigation of Ensemble of Models
4.3. Static Mask
5. Statistical Processing
5.1. Imputation
5.2. Seasonal Adjustment
6. Validation of Time Series
6.1. Time Series Validation against ANPR
6.2. Comparison with Department for Transport Data
7. Discussion
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
| Image Source | Date First Ingested |
|---|---|
| Durham | 7 May 2020 |
| Transport for London (TfL) | 11 March 2020 |
| Transport for Greater Manchester (TfGM) | 17 April 2020 |
| North East England (NE Travel Data) | 1 March 2020 |
| Northern Ireland | 15 May 2020 |
| Southend | 7 May 2020 |
| Reading | 7 May 2020 |
| Object | Score |
|---|---|
| Shops | +5 |
| Residential | +5 |
| Pavement | +3 |
| Cycle lane | +3 |
| Bus lane | +1 |
| Traffic lights | +1 |
| Roundabout | −3 |
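The scoring table above can be read as a simple weighted checklist for camera selection: each scene feature visible in a camera's view contributes its score, and candidate cameras are ranked by the total. The sketch below illustrates that idea; the feature spellings, the `score` helper, and the example camera names are illustrative assumptions, not code from the paper.

```python
# Weights from the camera-selection scoring table (illustrative encoding).
SCENE_SCORES = {
    "shops": +5,
    "residential": +5,
    "pavement": +3,
    "cycle lane": +3,
    "bus lane": +1,
    "traffic lights": +1,
    "roundabout": -3,
}

def score(features):
    """Sum the weights of the scene features annotated for one camera."""
    return sum(SCENE_SCORES.get(f, 0) for f in features)

# Rank hypothetical cameras by their total score, highest first.
cameras = {
    "cam_a": ["shops", "pavement"],
    "cam_b": ["roundabout", "bus lane"],
}
ranked = sorted(cameras.items(), key=lambda kv: score(kv[1]), reverse=True)
```

Under this reading, a camera viewing shops and a pavement scores 8, while one viewing a roundabout with a bus lane scores −2, so the former would be preferred.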
| | Images | Car | Van | Truck | Bus | Pedestrian | Cyclist | Motorcyclist |
|---|---|---|---|---|---|---|---|---|
| London | 1950 | 9278 | 1814 | 1124 | 1098 | 3083 | 311 | 243 |
| North East England | 2104 | 9968 | 1235 | 186 | 1699 | 2667 | 57 | 43 |
| Total | 4054 | 19,246 | 3049 | 1310 | 2797 | 5750 | 368 | 286 |
| | Google Vision API | Faster-RCNN | YOLOv5s | YOLOv5x |
|---|---|---|---|---|
| Speed (ms/image) | 109 | 35 | 14 | 31 |
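Per-image speed figures like those in the comparison table above can be obtained by timing repeated detector calls and averaging. The harness below is a minimal sketch of that measurement; `ms_per_image` and `dummy_detect` are illustrative names, and the stand-in detector would be replaced by a real model such as YOLOv5 or Faster-RCNN.

```python
import time

def ms_per_image(detect, images, warmup=2):
    """Average wall-clock milliseconds per image for a detector callable.

    A few warm-up calls are discarded so one-off setup cost (model load,
    GPU kernel compilation) does not skew the average.
    """
    for img in images[:warmup]:
        detect(img)
    start = time.perf_counter()
    for img in images:
        detect(img)
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / len(images)

# Illustrative stand-in for a real detector: returns one fixed bounding box.
def dummy_detect(img):
    return [(0, 0, 10, 10, "car")]

speed = ms_per_image(dummy_detect, [object()] * 50)
```

Averaging over many images, with warm-up runs excluded, matters because the first call to a deep learning model is typically dominated by initialization rather than inference.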
| Object Class | Count of Objects |
|---|---|
| Car | 8570 |
| Bus | 958 |
| Truck | 1044 |
| Van | 1593 |
| Person | 2879 |
| Cyclist | 281 |
| Motorcyclist | 219 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Chen, L.; Grimstead, I.; Bell, D.; Karanka, J.; Dimond, L.; James, P.; Smith, L.; Edwardes, A. Estimating Vehicle and Pedestrian Activity from Town and City Traffic Cameras. Sensors 2021, 21, 4564. https://doi.org/10.3390/s21134564