Open Access Article

Exploring a Multimodal Mixture-Of-YOLOs Framework for Advanced Real-Time Object Detection

Department of Electrical Engineering, Soonchunhyang University, Asan 31538, Korea
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 612;
Received: 24 November 2019 / Revised: 13 December 2019 / Accepted: 11 January 2020 / Published: 15 January 2020
To construct a safe and sound autonomous driving system, object detection is essential, and research on sensor fusion is being actively conducted to increase the detection rate of objects in dynamic environments where safety must be secured. Recently, considerable performance improvements in object detection have been achieved with the advent of the convolutional neural network (CNN) structure. In particular, the YOLO (You Only Look Once) architecture, which is suitable for real-time object detection because it simultaneously predicts and classifies the bounding boxes of objects, is receiving great attention. However, securing the robustness of object detection systems across various environments remains a challenge. In this paper, we propose a weighted mean-based adaptive object detection strategy that enhances detection performance by fusing the individual detection results of an RGB camera and a LiDAR (Light Detection and Ranging) sensor for autonomous driving. The proposed system uses the YOLO framework to perform object detection independently on image data and on point cloud data (PCD). The individual detection results are then combined at the decision level with a weighted mean scheme, reducing the number of objects missed by either sensor. To evaluate the performance of the proposed object detection system, tests on vehicles and pedestrians were carried out using the KITTI Benchmark Suite. The test results demonstrate that the proposed strategy achieves a higher mean average precision (mAP) for the targeted objects than an RGB camera alone and is also robust against external environmental changes.
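The decision-level fusion described in the abstract can be sketched in a few lines: detections from each modality are matched by bounding-box overlap, overlapping pairs are merged with a weighted-mean confidence score, and unmatched detections from either sensor are retained. This is a minimal illustrative sketch only; the function names, the fixed weights, and the IoU threshold are assumptions, not values from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def fuse(rgb_dets, lidar_dets, w_rgb=0.6, w_lidar=0.4, iou_thr=0.5):
    """Decision-level weighted-mean fusion of two detector outputs.
    Each detection is (box, score). Boxes that overlap above iou_thr
    are merged with a weighted-mean score; unmatched detections from
    either sensor are kept, so objects missed by one sensor survive."""
    fused, used = [], set()
    for box_r, s_r in rgb_dets:
        best_j, best_iou = -1, iou_thr
        for j, (box_l, _) in enumerate(lidar_dets):
            if j not in used and iou(box_r, box_l) >= best_iou:
                best_j, best_iou = j, iou(box_r, box_l)
        if best_j >= 0:
            used.add(best_j)
            _, s_l = lidar_dets[best_j]
            fused.append((box_r, w_rgb * s_r + w_lidar * s_l))
        else:
            fused.append((box_r, s_r))  # seen only by the camera
    # keep LiDAR-only detections as well
    fused += [d for j, d in enumerate(lidar_dets) if j not in used]
    return fused
```

With this scheme an object detected by both sensors receives a blended confidence, while an object visible to only one sensor is still reported, which is how the decision-level merge reduces missed detections.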
Keywords: autonomous driving; RGB camera; LiDAR; CNN; object detection
MDPI and ACS Style

Kim, J.; Cho, J. Exploring a Multimodal Mixture-Of-YOLOs Framework for Advanced Real-Time Object Detection. Appl. Sci. 2020, 10, 612.

