Real-Time Traffic Risk Detection Model Using Smart Mobile Device
Abstract
1. Introduction
2. Related Work
3. Proposed Real-Time Traffic Risk Detection System
3.1. System Configuration and Operation
3.2. Driving Data Management
3.2.1. The Front Vehicle Detection and the Inter-Vehicle Distance Estimation
Algorithm 1: IPM Transformation

```
// src: road area in the input image
// dst: IPM-applied output image
function IPM(src, dst)
    T = getPerspectiveTransform(src.coordinate, dst.coordinate)
    for x = 0 to dst.width, y = 0 to dst.height
        u = T(0, 0) * x + T(0, 1) * y + T(0, 2)
        v = T(1, 0) * x + T(1, 1) * y + T(1, 2)
        s = T(2, 0) * x + T(2, 1) * y + T(2, 2)
        Map_X(x, y) = u / s
        Map_Y(x, y) = v / s
    return remap(src, dst, Map_X, Map_Y)
```
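For illustration, the remap tables of Algorithm 1 can be sketched in Python with NumPy alone. This is a hedged sketch: the 3×3 homography `T` is assumed to come from a routine such as OpenCV's `getPerspectiveTransform`; here the identity transform is used so the mapping can be checked without image data.

```python
import numpy as np

def build_ipm_maps(T, width, height):
    """Build the remap tables Map_X/Map_Y of Algorithm 1 for a 3x3 homography T.

    For every destination pixel (x, y), the source coordinate is
    (u/s, v/s) with (u, v, s) = T @ (x, y, 1).
    """
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    ones = np.ones_like(xs)
    # Stack homogeneous destination coordinates and apply the transform at once.
    coords = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    uvs = T @ coords
    map_x = (uvs[0] / uvs[2]).reshape(height, width)
    map_y = (uvs[1] / uvs[2]).reshape(height, width)
    return map_x, map_y

# With the identity homography, every destination pixel maps to itself.
map_x, map_y = build_ipm_maps(np.eye(3), 4, 3)
```

In an OpenCV pipeline these two arrays would be passed to `cv2.remap` to produce the bird's-eye-view image.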
Algorithm 2: ROI Configuration

```
// img: a source image
// WIDTH: the width of the image, 960 px
// HEIGHT: the height of the image, 540 px
// Road_Top: the predefined top position of the road area in the image, HEIGHT*2.7/10
// Road_Bottom_Width: the predefined width of the bottom side of the road area, WIDTH/3*2
// Road_Height: the predefined height of the road area, HEIGHT*4.3/10
function getROI(img)
    black_area = getBlackArea(img)
    if black_area != False
        S = black_area.right - black_area.left
        x_coord = (WIDTH - 1.5*S) / 2
        y_coord = black_area.top - 0.5*S
        ROI(x, y, w, h) = (x_coord, y_coord, 1.5*S, 0.5*S)
    else
        S = Road_Bottom_Width
        x_coord = (WIDTH - 1.5*S) / 2
        y_coord = Road_Top + Road_Height - 0.5*S
        ROI(x, y, w, h) = (x_coord, y_coord, 1.5*S, 0.5*S)
    return ROI
```
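The ROI selection of Algorithm 2 translates to Python as follows. This is a sketch under stated assumptions: the shadow box `black_area` is represented as a dict with `left`/`right`/`top` keys (or `None` when no shadow is found), and `getBlackArea` itself is assumed to exist elsewhere.

```python
WIDTH, HEIGHT = 960, 540                  # image size from Algorithm 2
ROAD_TOP = HEIGHT * 2.7 / 10              # predefined top of the road area
ROAD_BOTTOM_WIDTH = WIDTH / 3 * 2         # predefined bottom width of the road area
ROAD_HEIGHT = HEIGHT * 4.3 / 10           # predefined height of the road area

def get_roi(black_area):
    """Return (x, y, w, h) of the taillight ROI.

    black_area: dict with 'left', 'right', 'top' keys (the shadow box under
    the front vehicle), or None when no shadow area was detected.
    """
    if black_area is not None:
        # ROI sized relative to the detected shadow width S.
        s = black_area["right"] - black_area["left"]
        x = (WIDTH - 1.5 * s) / 2
        y = black_area["top"] - 0.5 * s
    else:
        # Fall back to the predefined road-area geometry.
        s = ROAD_BOTTOM_WIDTH
        x = (WIDTH - 1.5 * s) / 2
        y = ROAD_TOP + ROAD_HEIGHT - 0.5 * s
    return (x, y, 1.5 * s, 0.5 * s)
```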
- (a) For the HSV color space, the hue value (hue: 0°–180°) is less than or equal to 10° or greater than or equal to 170°.
- (b) They are located symmetrically in the left and right areas from the center of the ROI, respectively.
- (c) The size of the red area is larger than the predefined threshold value α.
- (d) For the two symmetric red areas, the height difference is less than δ, and the slope difference is less than ε.
- (e) The shapes of the two selected areas are almost the same and left–right symmetric.
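Condition (a) can be sketched as a hue mask in NumPy. OpenCV stores hue in the range 0–180, so "red" wraps around both ends of the range; the mask below reproduces exactly the thresholds stated in (a).

```python
import numpy as np

def red_hue_mask(hue):
    """Boolean mask of pixels whose OpenCV hue (0-180) falls in the red band.

    Red wraps around the hue circle, so both ends of the range qualify:
    hue <= 10 or hue >= 170 (condition (a) above).
    """
    hue = np.asarray(hue)
    return (hue <= 10) | (hue >= 170)

mask = red_hue_mask([0, 5, 10, 11, 90, 169, 170, 179])
```

In a full pipeline this mask would be combined with connected-component analysis to obtain the candidate taillight areas tested by conditions (b)–(e).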
3.2.2. Driving Data Collection
3.2.3. Deceleration Segment Extraction and Feature Vector Generation
3.3. Traffic Risk Detection
3.3.1. Learning Module
- L = the subset of data whose values are lower than vi;
- U = the subset of data whose values are equal to or greater than vi;
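Given the split of the data into L and U at a candidate value vi, a typical way to score the split for a tree-based learner is weighted Gini impurity. The exact criterion used here is an assumption on our part, but Gini impurity is the standard default for random forests.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset: 1 - sum over classes of p_c^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def split_score(L, U):
    """Size-weighted Gini impurity of the (L, U) split; lower is better."""
    n = len(L) + len(U)
    return len(L) / n * gini(L) + len(U) / n * gini(U)

# A perfectly separating split has score 0.
score = split_score(["danger", "danger"], ["normal", "normal", "normal"])
```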
3.3.2. Decision Module
- DMall: if all models decide “danger”, the final decision is “danger”;
- if all models decide “normal”, the final decision is “normal”;
- otherwise, the final decision is “suspicious”.
- DM2: if two or more models decide “danger”, the final decision is “danger”;
- if all models decide “normal”, the final decision is “normal”;
- otherwise, the final decision is “suspicious”.
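The two decision rules above amount to votes over the three model outputs. A minimal sketch, assuming each model outputs the string "danger" or "normal" (the labels DMall and DM2 follow the evaluation tables):

```python
def dm_all(votes):
    """DMall: unanimity required for either class, otherwise 'suspicious'."""
    if all(v == "danger" for v in votes):
        return "danger"
    if all(v == "normal" for v in votes):
        return "normal"
    return "suspicious"

def dm_2(votes):
    """DM2: two or more 'danger' votes suffice; unanimity needed for 'normal'."""
    if sum(v == "danger" for v in votes) >= 2:
        return "danger"
    if all(v == "normal" for v in votes):
        return "normal"
    return "suspicious"
```

Note the design difference: DM2 is more sensitive to risk (a 2-of-3 majority already triggers "danger"), while DMall routes every disagreement to "suspicious".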
- EHall uses only data for which all of the learning models produced the same output. That is, data that all of the models classified as “dangerous” are used as “dangerous” training data; likewise, data that all of the models classified as “normal” are used as “normal” training data.
- EH2 follows the final decision of the decision module DM2 above. That is, data whose final decision was “dangerous” are used as “dangerous” training data, and data whose final decision was “normal” are used as “normal” training data.
- EHw assigns different weights according to the decisions of the three models. It is the same as EH2, except that data that all of the models classified as “dangerous” are duplicated in the training data. When all of the models agree that some data are “dangerous”, the risk dataset is thus slightly enhanced, because the amount of risk training data increases.
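The three enhancement strategies differ only in which decided samples are fed back as training data. A hedged sketch, in which the sample and vote representations are our own assumptions (each vote is the list of the three model outputs for one sample):

```python
def enhance(samples, votes, strategy):
    """Build (sample, label) training pairs from per-sample model votes.

    samples: list of feature vectors; votes: list of 3-model output lists.
    strategy: 'EHall' (unanimous decisions only), 'EH2' (follow DM2), or
    'EHw' (like EH2, but unanimous 'danger' samples are duplicated).
    """
    training = []
    for sample, vote in zip(samples, votes):
        n_danger = sum(v == "danger" for v in vote)
        unanimous_danger = n_danger == len(vote)
        unanimous_normal = n_danger == 0
        if strategy == "EHall":
            if unanimous_danger:
                training.append((sample, "danger"))
            elif unanimous_normal:
                training.append((sample, "normal"))
        else:  # EH2 and EHw both follow the DM2 decision
            if n_danger >= 2:
                training.append((sample, "danger"))
                if strategy == "EHw" and unanimous_danger:
                    # Duplicate unanimous danger samples to up-weight risk data.
                    training.append((sample, "danger"))
            elif unanimous_normal:
                training.append((sample, "normal"))
    return training
```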
4. Simulated Performance
4.1. The Performance of the Headway Distance Estimation
4.2. The Performance of the Traffic Risk Detection
4.2.1. Experimental Data
4.2.2. The Accuracy of the Traffic Risk Detection Model
5. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Kaur, R.; Sobti, R. Current Challenges in Modelling Advanced Driver Assistance Systems: Future Trends and Advancements. In Proceedings of the IEEE International Conference on Intelligent Transportation Engineering, Singapore, 1–3 September 2017.
- iOnRoad. iOnRoad Augmented Driving Pro. Available online: https://ionroad-pro.en.aptoide.com/ (accessed on 29 September 2018).
- Movon Corporation. Movon FCW. Available online: https://apkpure.com/movon-fcw/com.movon.fcw (accessed on 29 September 2018).
- Danescu, R.; Itu, R.; Petrovai, A. Generic Dynamic Environment Perception Using Smart Mobile Devices. Sensors 2016, 16, 1721.
- Mohan, P.; Padmanabhan, V.M.; Ramjee, R. Nericell: Rich Monitoring of Road and Traffic Conditions Using Mobile Smartphones. In Proceedings of the ACM Conference on Embedded Network Sensor Systems, Raleigh, NC, USA, 5–7 November 2008.
- Kalra, N.; Chugh, G.; Bansal, D. Analyzing Driving and Road Events via Smartphone. Int. J. Comput. Appl. 2014, 98, 5–9.
- Gozick, M.B.; Dantu, R.; Bhukhiya, M.; Gonzalez, M.C. Safe Driving Using Mobile Phones. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1462–1468.
- Fischer, R.W.; Fischer, J.R.; Ricaurte, W.A. Safe Driving Monitoring System. U.S. Patent 2018/013958A1, 17 May 2018.
- Meiring, G.A.M.; Myburgh, H.C. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms. Sensors 2015, 15, 30653–30682.
- Ly, M.V.; Martin, S.; Trivedi, M.M. Driver Classification and Driving Style Recognition Using Inertial Sensors. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gold Coast, Australia, 23–26 June 2013.
- Lavie, S.; Jacobs, F.; Fuchs, G.; Bergh, J.V.D. System and Method for Use of Pattern Recognition in Assessing or Monitoring Vehicle Status or Operator Driving Behavior. U.S. Patent 2018/0089142A1, 29 March 2018.
- Deng, C.; Wu, C.; Lyu, M.; Huang, Z. Driving Style Recognition Method Using Braking Characteristics Based on Hidden Markov Model. PLoS ONE 2017, 12, e0182419.
- Xiao, J.; Liu, J. Early Forecast and Recognition of the Driver Emergency Braking Behavior. In Proceedings of the International Conference on Computational Intelligence and Security, Beijing, China, 11–14 December 2009.
- Shin, M.J.; Oh, K.J.; Park, H.M. Accident Alert System for Preventing Secondary Collision. U.S. Patent US9142128 B2, 22 September 2015.
- Lee, J.; Park, S.; Yoo, J. A Location-Based Highway Safety System Using Smart Mobile Devices. J. KIISE 2016, 43, 389–397.
- WHO. Global Status Report on Road Safety. Available online: http://www.who.int/violence_injury_prevention/road_safety_status/2015/en/ (accessed on 29 September 2018).
- Tuohy, S.; O’Cualain, D.; Jones, E.; Glavin, M. Distance Determination for an Automobile Environment Using Inverse Perspective Mapping in OpenCV. In Proceedings of the 2010 IET Signal and Systems Conference (ISSC), Cork, Ireland, 23–24 June 2010; pp. 100–105.
- Sathyanarayana, A.; Sadjadi, S.O.; Hansen, J.H.L. Leveraging Sensor Information from Portable Devices towards Automatic Driving Maneuver Recognition. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, Anchorage, AK, USA, 16–19 September 2012.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
| Items | Samsung Galaxy J7 | Samsung Galaxy A8 | Samsung Galaxy S7 |
|---|---|---|---|
| IPM image creation time | 7.633 ms | 7.278 ms | 6.862 ms |
| Shadow area detection time | <0.1 ms | <0.1 ms | <0.1 ms |
| Taillight detection time | 17.895 ms | 17.261 ms | 16.344 ms |
| Image frames processed per second | 37 frames | 38 frames | 41 frames |
Items | Daytime Urban Roads | Daytime Expressway | Night |
---|---|---|---|
The number of total frames | 1440 | 1440 | 1440 |
The number of target frames | 1145 | 1068 | 1180 |
The number of detected frames | 1060 | 963 | 588 |
Detection rate (%) | 92.58 | 90.17 | 49.83 |
| Real Distance | Sedan (Avante) | SUV (Tucson) |
|---|---|---|
| 10 m | 10.23 m | 10.00 m |
| 15 m | 15.51 m | 14.82 m |
| 20 m | 21.47 m | 19.16 m |
| Average error rate | 4.35% | 1.80% |
| | Initial Training Data | Test Data Type_1 | Test Data Type_2 |
|---|---|---|---|
| Danger | 26 | 29 | 74 |
| Normal | 445 | 443 | 796 |
| Total | 471 | 472 | 870 |
| | EHall Danger | EHall Normal | EHall Total | EH2 Danger | EH2 Normal | EH2 Total | EHw Danger | EHw Normal | EHw Total |
|---|---|---|---|---|---|---|---|---|---|
| Step 1 | 41 | 937 | 978 | 54 | 935 | 989 | 51 | 952 | 1003 |
| Step 2 | 63 | 1507 | 1570 | 75 | 1519 | 1594 | 110 | 1514 | 1624 |
| Step 3 | 77 | 2085 | 2162 | 99 | 2101 | 2200 | 160 | 2086 | 2246 |
| Model | RScore | TScore |
|---|---|---|
| kNN | 0.8923 | 0.9940 |
| Random Forest | 0.9846 | 0.9991 |
| Neural Network | 0.9231 | 0.9957 |
| DMall | 0.8923 | 0.9940 |
| DM2 | 0.9231 | 0.9957 |
| Model | Type_1 RScore | Type_1 TScore | Type_2 RScore | Type_2 TScore |
|---|---|---|---|---|
| kNN | 0.9247 | 0.9953 | 0.7184 | 0.9765 |
| Random Forest | 0.7708 | 0.9847 | 0.5662 | 0.9639 |
| Neural Network | 0.9522 | 0.9970 | 0.7321 | 0.9777 |
| DMall | 0.7229 | 0.9830 | 0.5218 | 0.9601 |
| DM2 | 0.9593 | 0.9974 | 0.7074 | 0.9756 |
| | kNN | Random Forest | Neural Network | DMall | DM2 |
|---|---|---|---|---|---|
| EHall | 0.986832612 | 0.974025974 | 0.950108225 | 0.93953824 | 0.974025974 |
| EH2 | 0.956668 | 0.974108 | 0.910759 | 0.892948 | 0.952627 |
| EHw | 0.968773006 | 0.99125 | 0.934026074 | 0.914026074 | 0.982523006 |
Park, S.; Han, H.; Kim, B.-S.; Noh, J.-H.; Chi, J.; Choi, M.-J. Real-Time Traffic Risk Detection Model Using Smart Mobile Device. Sensors 2018, 18, 3686. https://doi.org/10.3390/s18113686