Article

Multi-Layer Fusion 3D Object Detection via Lidar Point Cloud and Camera Image

College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(4), 1348; https://doi.org/10.3390/app14041348
Submission received: 26 December 2023 / Revised: 24 January 2024 / Accepted: 31 January 2024 / Published: 6 February 2024
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Abstract

Object detection is a key task in autonomous driving, and the poor performance of small object detection remains a challenge to be overcome. Existing object detection networks can detect large objects well under ideal conditions, but detecting small objects is still very difficult. To address this problem, we propose a multi-layer fusion 3D object detection network. First, a dense fusion (D-fusion) method is proposed that differs from traditional fusion methods: by fusing the feature maps of every layer, more semantic information is preserved in the fusion network. Secondly, to preserve small objects at the feature map level, we designed a feature extractor with an adaptive fusion module (AFM), which reduces the impact of the background on small objects by weighting and fusing different feature layers. Finally, an attention mechanism was added to the feature extractor to improve training efficiency and accelerate convergence by suppressing information that is irrelevant to the task. The experimental results show that our proposed approach greatly improves on the baseline and outperforms most state-of-the-art methods on the KITTI object detection benchmark.

1. Introduction

Object detection is a fundamental and crucial task in computer vision that aims to identify and locate specific classes of objects within the environment (e.g., cars, people). It has wide-ranging applications, particularly in the field of autonomous driving. Autonomous driving imposes requirements in many respects, including 3D detection, fast detection speed and low cost. To meet the demand for 3D detection, cameras and Lidar have been installed on vehicles. With the development of deep learning, detecting objects with neural networks has become the current trend. This paper focuses on object detection technology: high-performance object detection is of great significance for improving the safety of autonomous driving. Achieving greater autonomous driving safety requires many tasks and sensors to work together, and the technology in this paper is one of many key technologies in an autonomous driving system.
At present, 3D object detection methods fall into three categories: image-based, point cloud-based and fusion-based. Images provide vivid color and scale information and are the most commonly used data form in traditional object detection networks. They have many advantages, such as containing continuous information, occupying little memory, having a low cost and being easy to process. However, images are two-dimensional and cannot provide complete three-dimensional information. Lidar provides accurate three-dimensional point cloud data; it can directly obtain the spatial information of objects, capture their details and shape, and remove the need for depth estimation. It is relatively insensitive to occlusion, viewing-angle changes and lighting conditions and is suitable for accurate object detection in complex environments. However, the amount of Lidar data is large, the demand for computing resources is high, and a large amount of point cloud data needs to be processed and stored. To make comprehensive use of the advantages of both kinds of data, fusion-based object detection has become a research hotspot in recent years. This paper therefore focuses on an object detection network based on multi-sensor fusion, which we regard as a natural direction for object detection in the field of autonomous driving.
Most fusion networks fuse after extracting features from the image and point cloud data separately, which places higher requirements on the network; that is, it needs to retain more object information. As the backbone network deepens, the pixels belonging to small objects in the feature map dwindle or even disappear. Feature pyramid networks (FPN) [1] preserve small objects by fusing multi-scale features, but even a simple element-wise addition can cause small-object features to be overwhelmed by those of large objects. Therefore, we believe that the fusion process must be guided in a principled way to preserve small-object information. In this article, 'small objects' refers specifically to objects in autonomous driving scenes that occupy few pixels in the image or few points in the point cloud because of their distance from the sensor. Although these objects are small in the data, they are of great importance for road safety.
To address the poor detection performance of small objects in autonomous driving scenes, which degrades overall 3D detection accuracy, this paper proposes a multi-layer fusion 3D object detection network that combines Lidar point cloud data and RGB images. The system is developed for autonomous vehicles to detect cars, pedestrians and cyclists. Compared with methods that use only RGB images, it can exploit point cloud information to estimate the position and orientation of objects more accurately, especially under poor lighting conditions. Compared with methods that use only Lidar, it can exploit the texture and color information of RGB images to achieve more accurate object recognition. In addition, to prevent the loss of small-object information, both adding an adaptive fusion module (AFM) to the feature extraction network and adopting a multi-layer fusion method such as D-fusion in the fusion network proved effective. The main contributions of this paper are as follows:
(1)
A novel fusion method, D-fusion, is proposed, which can preserve the information of each layer of the fusion network to solve the problem of semantic loss and improve fusion performance.
(2)
We designed an adaptive fusion module (AFM) and applied it at the output of the feature extraction network, which effectively alleviates the loss of small-scale objects in detection tasks.
(3)
An attention mechanism was introduced to optimize the efficiency of the feature extraction network.
(4)
We conducted comparative experiments on the challenging KITTI data set, and the results show that our network achieves satisfactory performance.
In this paper, Section 2 introduces related 3D object detection work, Section 3 introduces the overall framework of the network, Section 4 describes the experimental setup, ablation experiments and analysis of the results, and Section 5 summarizes the work and outlines future directions.

2. Related Works

2.1. Image-Based 3D Object Detection

Image-based detection methods can be divided into monocular and binocular (stereo) vision methods. Three-dimensional object detection networks based on monocular images mainly adopt the ideas of depth estimation [2,3], key-point detection [4] and the use of CAD prior information [5]. A monocular image is a two-dimensional projection of the viewing frustum in the three-dimensional world, so information such as depth is missing. To obtain more accurate 3D information, such as depth features, some researchers also study 3D object detection networks based on binocular vision images. Chen et al. proposed 3DOP [6] to estimate point clouds from binocular images. Xu and Chen proposed MLF [7] to estimate disparity maps and back-project them into depth maps and point clouds. Li et al. proposed CGStereo [8], which uses additional semantic segmentation supervision to significantly improve the accuracy of foreground depth estimation. Chen et al. proposed Pseudo-Stereo [9] to estimate depth maps from binocular images. Peng et al. proposed SIDE [10], which uses two branch networks to generate pseudo-Lidar data and object-level depth estimates, respectively. It is difficult for image-based methods to obtain accurate 3D information, so their detection accuracy is hard to improve.

2.2. Point Cloud-Based 3D Object Detection

Point cloud-based object detection networks can be divided into three types. (1) Methods based on the raw point cloud, which retain the position information of objects in three-dimensional space to the maximum extent, such as 3DSSD [11]. (2) Methods based on point cloud projection, which project the point cloud into two-dimensional views from different angles and then apply a mature two-dimensional object detection network to achieve 3D object detection, such as RangeDet [12]. (3) Methods based on point cloud voxelization, in which the disordered point cloud is organized into an ordered voxel representation and a 3D convolutional network extracts voxel features to achieve 3D object detection, such as SE-SSD [13]. Although such networks can accurately obtain object location information, the point cloud itself is relatively sparse, which results in heavy computation and the loss of distant objects.

2.3. Fusion-Based 3D Object Detection

Methods based on visual images provide texture information but lack depth information, while methods based on point clouds provide spatial geometric information but lack texture. Texture information helps with object detection and classification, whereas depth information helps with estimating an object's spatial location. Improving overall performance by using image and point cloud data simultaneously has therefore become an active research direction in 3D object detection. Fusion-based methods can be classified into three types: early fusion, medium fusion and late fusion networks.
Early fusion refers to fusing information at the raw pixel level. PointPainting [14] and PI-RCNN [15] color the point cloud by concatenating the color information of each image pixel with the features of the corresponding point and then use an existing detection network (such as PointRCNN [16] or PointPillars [17]) to detect objects in the colored point cloud. MVX-Net [18] uses two simple and effective early fusion methods, PointFusion and VoxelFusion, to integrate visual texture information with the spatial geometric information of the point cloud and achieve high-precision object detection. This kind of network improves detection performance but increases the amount of computation and suffers from the difficulty of aligning points with pixels. F-PointNet [19], proposed by Qi et al., Faraway-Frustum [20], proposed by Zhang et al., and F-ConvNet [21], proposed by Wang et al., generate high-quality two-dimensional candidate boxes from image data, map them into the three-dimensional space of the original point cloud, and then generate three-dimensional candidate boxes by extracting regional point cloud features. Unfortunately, such approaches are largely limited by the 2D detection results.
Medium fusion refers to fusing information at the feature map level or RoI (Region of Interest) level. The MV3D [22] method proposed by Chen et al. uses the point cloud to generate the corresponding front view (FV) and bird's eye view (BEV), which, together with the RGB image, serve as the inputs of three feature extraction networks, and realizes 3D object classification and bounding box regression through deep fusion. Different from MV3D, AVOD [23], proposed by Ku et al., uses only the RGB image and the BEV encoded from the point cloud as network inputs and realizes 3D object detection, classification and bounding box regression through early fusion. SCANet [24], proposed by Lu et al., aims to effectively integrate multi-scale and global context information while generating spatial and channel attention to select discriminative features. ContFuse [25] projects the image into the BEV to supplement the sparse BEV information. CrossFusion [26] realizes cross-projection and fusion between the image and the BEV on the basis of ContFuse. The cross-modality 3D object detection model [27] proposed by Zhu et al. not only realizes interactive fusion at the feature level but also combines 2D and 3D candidate boxes to optimize the results. RoIFusion [28], proposed by Chen et al., saves a large amount of computation by integrating 3D RoIs with 2D RoIs. Medium fusion is not particularly computation-intensive; however, information degradation still occurs during fusion because the features are insufficiently integrated.
Late fusion refers to fusing and optimizing the detection results. A typical example is CLOCs [29], proposed by Pang et al. The network first obtains 2D and 3D detection results from the image and the point cloud, respectively, and then filters the final 3D boxes and adjusts their scale according to the geometric and semantic consistency of the 2D and 3D boxes. This kind of network is difficult to apply because of training difficulties and real-time constraints, so there are few follow-up studies.
Motivated by the observation that existing medium fusion networks are insufficient at the fusion stage, this paper adopts a medium fusion structure that fuses the point cloud and the image at the RoI level and designs the D-fusion method to reduce the degradation of semantic information in the fusion network.

2.4. Detection of Small Objects

Small objects occupy very few pixels in the original image and become even smaller in the feature maps after convolution, which places higher performance requirements on the network. Current research basically adopts feature fusion, that is, fusing shallow and deep feature maps together. Feature pyramid networks for object detection [1] are a typical example of this approach, using a pyramid structure to integrate the features of different layers. DSSD [30] deconvolves the deep feature map and fuses it with shallower features via an element-wise product. Small object detection using context and attention [31] combines context information with an attention mechanism to jointly determine an object's category and location by understanding its background and attending to useful information. The main idea of augmentation for small object detection [32] is to over-sample small object instances so as to improve small object detection performance. In this paper, we use an attention mechanism to learn foreground and background information and weight feature layers of different scales to reduce the loss of small-object information.

3. The Proposed Approach

In this section, we describe the structure and implementation of the multi-layer fusion 3D object detection network. The proposed network architecture is shown in Figure 1. It consists of three main parts: the feature extraction network, the region proposal network (RPN) and the fusion network. Using RGB images and the bird's eye view (BEV) as inputs, the feature extraction network processes them to obtain the corresponding feature maps. In this paper, VGG [33] is used as the backbone network. In addition, 100 K anchors are preset in 3D space as the initial input to the RPN, which filters these anchor boxes to obtain the RoIs. The feature maps are then combined with the RoIs generated by the RPN, and the corresponding feature regions are cropped out and sent to the fusion network for the final parameter regression. In the fusion network, we adopt a novel multi-layer fusion method, D-fusion, which effectively combines features from different views and retains the semantic information of each layer of the network to achieve 3D bounding box regression. Finally, the fusion network outputs the classification and regression results. Both the feature extraction and the fusion stages adopt a multi-layer fusion approach to achieve richer feature representations and more efficient data processing.

3.1. Inputs

The multi-layer fusion 3D object detection network has two types of input data: the RGB image and the Lidar point cloud. The camera is a typical passive sensor. Images have rich color and texture information, which helps us intuitively understand the traffic scene and identify objects, and they occupy very little memory; therefore, we use the image as one input. However, images lack depth information, which is essential for accurate position estimation in the real 3D world. Using images as a standalone visual system is far from sufficient, as low light at night or rainy weather can easily degrade the camera's accuracy. Lidar is a representative active sensor. It can acquire depth information and, because it emits its own light pulses, is less affected by external lighting conditions (e.g., at night). Therefore, a Lidar system has higher accuracy and reliability than a camera system. We use the Lidar point cloud and the RGB image as inputs at the same time, so that their advantages complement each other and the applicability of detection is greatly improved.
The point cloud data of the KITTI [34] dataset were collected with a Velodyne HDL-64E Lidar. Each point cloud file contains hundreds of thousands of points. Each point is irregularly distributed in three-dimensional space and is usually stored as x, y, z and intensity, the four values representing the 3D coordinates and reflectance of the point. Because of the uneven distribution and the large number of points, directly processing the point cloud would consume a large amount of computation. Therefore, we use a BEV representation of the point cloud while preserving its information. Since objects on the road do not overlap in the vertical direction, their size and shape information is retained in this view. The BEV consists of height maps and a density map obtained by encoding the height and density information of the point cloud. For the height maps, the point cloud is discretized at a certain resolution, the voxels are projected onto the ground plane to generate the BEV, and the highest point in each voxel is taken as the height feature. The point cloud is evenly divided into slices over a certain height range so that the BEV contains more height features, and a height feature map is calculated within each slice. The density map represents the number of points in each voxel. Considering the camera's viewing range, we selected a point cloud range of [−40, 40] × [0, 70] m.
We discretized the projected point cloud into a two-dimensional grid with a resolution of 0.1 m. For each grid cell, the height feature is calculated as the maximum height of the points in the cell. To encode more detailed height information, the point cloud is evenly divided into five slices. By calculating a height map for each slice, we obtain five distinct height features. The point cloud density represents the number of points in each cell. To normalize the feature, it is calculated as $\min\!\left(1.0, \frac{\log(N + 1)}{\log(64)}\right)$, where $N$ is the number of points in the cell. Note that the density feature is calculated for the entire point cloud, while the height features are calculated for the five slices, so the BEV is encoded as a six-channel feature map.
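For concreteness, the BEV encoding described above can be sketched as follows. This is a minimal numpy sketch, not the exact implementation used in this work; in particular, the vertical crop z_range and the cell-indexing conventions are assumptions, since the paper does not specify them.

```python
import numpy as np

def encode_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
               z_range=(-2.5, 1.0), resolution=0.1, num_slices=5):
    """Encode a Lidar scan (N x 4 array: x, y, z, intensity) into a
    six-channel BEV map: five height slices plus one density channel."""
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[mask]

    H = int(round((x_range[1] - x_range[0]) / resolution))   # 700 cells
    W = int(round((y_range[1] - y_range[0]) / resolution))   # 800 cells
    bev = np.zeros((num_slices + 1, H, W), dtype=np.float32)

    # Grid and slice indices for every point.
    xi = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int64)
    yi = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int64)
    slice_h = (z_range[1] - z_range[0]) / num_slices
    zi = np.clip(((pts[:, 2] - z_range[0]) / slice_h).astype(np.int64),
                 0, num_slices - 1)

    # Height channels: maximum height (above the bottom of the crop) per cell and slice.
    np.maximum.at(bev, (zi, xi, yi), (pts[:, 2] - z_range[0]).astype(np.float32))

    # Density channel: min(1, log(N + 1) / log(64)) over the whole point cloud.
    counts = np.zeros((H, W), dtype=np.float32)
    np.add.at(counts, (xi, yi), 1.0)
    bev[num_slices] = np.minimum(1.0, np.log(counts + 1.0) / np.log(64.0))

    return bev   # shape: (6, 700, 800) for the default ranges
```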

3.2. Feature Extractor

The multi-layer fusion 3D object detection network comprises two feature extraction networks, each dedicated to processing either the image or BEV input data. The structure of both feature extraction networks is the same. We designed the adaptive fusion module (AFM) and combined it with the attention mechanism to design the overall structure of the feature extractor. The network structure is shown in Figure 2.
Adaptive Fusion Module. The features of each decoder layer are fused in the spatial dimension. Different from previous methods that integrate multi-layer features with an element-wise sum or concatenation, the key idea of this module is to adaptively learn the spatial weights for fusing the feature maps at each scale. It consists of two steps: feature map uniformity and adaptive fusion.
Feature map uniformity: As shown in Figure 2, the feature maps with three different resolutions are denoted $F_1$, $F_2$ and $F_3$. We need to pre-process the three feature maps so that they share the same dimensions. Since the dimension of $F_3$ is used directly in the final prediction, we adopt different upsampling strategies for $F_1$ and $F_2$ so that their dimensions become consistent with those of $F_3$. For $F_1$, we first use a 1 × 1 convolution layer to compress its number of channels to that of $F_3$ and then raise the resolution to match $F_3$ by nearest-neighbor interpolation.
Adaptive fusion: we define $R_{13}$ and $R_{23}$ as the feature maps obtained from $F_1$ and $F_2$ after feature map uniformity, and $R_{33}$ is identical to $F_3$. $\alpha_{i,j}$, $\beta_{i,j}$ and $\gamma_{i,j}$ denote the weights of the $(i, j)$ vectors of $R_{13}$, $R_{23}$ and $R_{33}$, respectively; note that they are shared across all channels. The fused feature can then be expressed as:

$Y_{i,j} = \alpha_{i,j} \times R_{13} + \beta_{i,j} \times R_{23} + \gamma_{i,j} \times R_{33}$
The raw weight maps $\hat{\alpha}_{i,j}$, $\hat{\beta}_{i,j}$ and $\hat{\gamma}_{i,j}$ are obtained by applying a 1 × 1 convolution to each of the three unified feature maps. They are then normalized with the softmax function so that each weight lies in $[0, 1]$ and the three weights sum to 1. Therefore, $\alpha_{i,j}$, $\beta_{i,j}$ and $\gamma_{i,j}$ are obtained by the following calculation:

$\alpha_{i,j} = \frac{e^{\hat{\alpha}_{i,j}}}{e^{\hat{\alpha}_{i,j}} + e^{\hat{\beta}_{i,j}} + e^{\hat{\gamma}_{i,j}}}, \quad \beta_{i,j} = \frac{e^{\hat{\beta}_{i,j}}}{e^{\hat{\alpha}_{i,j}} + e^{\hat{\beta}_{i,j}} + e^{\hat{\gamma}_{i,j}}}, \quad \gamma_{i,j} = \frac{e^{\hat{\gamma}_{i,j}}}{e^{\hat{\alpha}_{i,j}} + e^{\hat{\beta}_{i,j}} + e^{\hat{\gamma}_{i,j}}}$
In this way, we obtain $Y_{i,j}$ for both the image branch and the point cloud branch; these fused features are combined with the RoIs from the RPN in the fusion network to perform classification and regression.
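To make the two steps concrete, the following is a minimal PyTorch sketch of an AFM as described above. It is not the exact implementation used in this work: the channel counts are placeholders, and applying the same compress-and-interpolate strategy to both $F_1$ and $F_2$ is an assumption, since only the strategy for $F_1$ is detailed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusionModule(nn.Module):
    """Sketch of the AFM: unify three decoder feature maps, predict
    per-pixel weights with 1x1 convolutions, normalize them by softmax,
    and blend the maps with the resulting weights."""

    def __init__(self, c1, c2, c3):
        super().__init__()
        # 1x1 convolutions that compress F1 and F2 to the channel count of F3.
        self.compress1 = nn.Conv2d(c1, c3, kernel_size=1)
        self.compress2 = nn.Conv2d(c2, c3, kernel_size=1)
        # 1x1 convolutions that predict one spatial weight map per input.
        self.weight1 = nn.Conv2d(c3, 1, kernel_size=1)
        self.weight2 = nn.Conv2d(c3, 1, kernel_size=1)
        self.weight3 = nn.Conv2d(c3, 1, kernel_size=1)

    def forward(self, f1, f2, f3):
        size = f3.shape[-2:]
        # Feature map uniformity: match channels, then upsample to F3's resolution.
        r13 = F.interpolate(self.compress1(f1), size=size, mode="nearest")
        r23 = F.interpolate(self.compress2(f2), size=size, mode="nearest")
        r33 = f3
        # Adaptive fusion: softmax over the three candidate weights at each pixel.
        w = torch.cat([self.weight1(r13), self.weight2(r23), self.weight3(r33)], dim=1)
        w = torch.softmax(w, dim=1)          # (B, 3, H, W), weights sum to 1
        a, b, g = w[:, 0:1], w[:, 1:2], w[:, 2:3]
        return a * r13 + b * r23 + g * r33   # weights shared across channels
```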
Attention Mechanism. Through the attention mechanism, the network can learn to selectively emphasize the informative features using global information and suppress the less useful features. In this network, we adopt SENet [35], a type of channel attention mechanism designed to enhance the network’s representation ability by enabling it to perform dynamic channel feature recalibration.
We add the SENet module to the decoder of the feature extraction network. The SENet module consists of two operations, squeeze and excitation, and is composed of a pooling layer, convolutional layers and activation layers. As shown in Figure 3, the original feature map $X$ is first globally average-pooled to obtain $S$, with the dimension changing from $H \times W \times C$ to $1 \times 1 \times C$; this corresponds to the squeeze operation. Then, $S$ is processed by the convolutional and activation layers to obtain the weighting information $E$, corresponding to the excitation operation. Finally, $E$ is multiplied channel-wise with the original feature map $X$ to obtain the recalibrated output. The squeeze operation strengthens the correlation between channels, while the excitation operation produces a weight coefficient for each channel, amplifying effective features and suppressing ineffective ones. In short, the channel attention mechanism SENet allows the network to rely more on informative channels and suppress relatively uninformative ones.
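A minimal PyTorch sketch of such an SE block is shown below. The reduction ratio of 16 follows the original SENet paper [35] and is an assumption here, as this work does not state the value it uses.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation block: global pooling (squeeze), a small
    bottleneck of 1x1 convolutions (excitation), channel-wise rescaling."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)            # H x W x C -> 1 x 1 x C
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        e = self.excite(self.squeeze(x))   # per-channel weights in (0, 1)
        return x * e                       # channel-wise recalibration of X
```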

3.3. RPN

We have adopted the same RPN network as the AVOD network and do not claim novelty here.
Each anchor box is defined by six parameters, $(c_x, c_y, c_z, l, w, h)$, where $(c_x, c_y, c_z)$ represents the centroid coordinates of the anchor box and $(l, w, h)$ specifies its dimensions. In this work, we leverage the benefits of BEV images to generate anchor boxes that are invariant to occlusion and preserve object sizes. Specifically, we sample the anchor boxes at 0.5 m intervals on the BEV plane, with $(c_x, c_y)$ serving as the center. The vertical coordinate $c_z$ is determined by the height of the Lidar sensor above the ground. We use the K-means clustering method to cluster the labels in the training set to determine the initial anchor sizes. Due to the sparsity of the BEV, many anchor boxes may not contain any points. To eliminate such empty anchor boxes, we utilize an integral image to calculate the point occupancy map.
The anchor boxes are projected onto two feature maps obtained from the BEV and RGB images, resulting in a 7 × 7 feature crop for each box. These crops are down-sampled via a 1 × 1 convolution kernel to reduce the number of parameters in subsequent operations. The resulting feature crops undergo the element-wise mean operation and are then input to a fully connected block that outputs the region proposal parameters, including the object’s confidence and offset. A 2D non-maximum suppression (NMS) algorithm is applied to remove overlapping proposals and retain up to a maximum of 1024 proposals. The fully connected block consists of three fully connected layers with a size of 2048, which output the bounding box regression, direction estimation and object classification.
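The empty-anchor filtering step can be illustrated with the short sketch below. This is a hedged example rather than the exact implementation used here; it assumes the anchors' BEV footprints are given as axis-aligned, half-open cell-coordinate rectangles (x1, y1, x2, y2) and that the occupancy grid stores per-cell point counts.

```python
import numpy as np

def filter_empty_anchors(occupancy, anchors_bev, min_points=1):
    """Remove anchors whose BEV footprint contains no Lidar points,
    using an integral image over the point-occupancy grid."""
    # Integral image with a zero-padded first row/column.
    integral = np.zeros((occupancy.shape[0] + 1, occupancy.shape[1] + 1))
    integral[1:, 1:] = occupancy.cumsum(axis=0).cumsum(axis=1)

    x1, y1, x2, y2 = anchors_bev.T
    # Number of points inside each anchor rectangle, in O(1) per anchor.
    counts = (integral[x2, y2] - integral[x1, y2]
              - integral[x2, y1] + integral[x1, y1])
    return anchors_bev[counts >= min_points]
```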

3.4. D-Fusion

We designed a fusion approach—dense fusion (D-fusion). Compared with the previous early fusion, late fusion and deep fusion, it can not only combine the features from multiple views but also effectively combine the semantic information of each layer in the network to carry out three-dimensional box regression. The network structure is shown in Figure 4d.
Since features from different views often have different resolutions, we use RoI pooling on each view to obtain feature vectors of the same length. The generated 3D proposals can be projected onto any view in 3D space; in our case, we project them onto two views, the BEV and the RGB image.
In order to combine information from different features, previous work usually adopted early fusion, late fusion or deep fusion. Inspired by DenseNet, we adopt a dense fusion method that densely fuses every layer of the network. A comparison of the architecture of our D-fusion network with the early/late/deep fusion networks is shown in Figure 4. For a network with $P$ layers, early fusion combines $F_{\mathrm{BEV}}$ and $F_{\mathrm{RGB}}$ from multiple views at the input stage:

$F_P = H_P\left(H_{P-1}\left(\cdots H_1\left(F_{\mathrm{BEV}} \oplus F_{\mathrm{RGB}}\right)\right)\right)$

where $\{H_p,\ p = 1, \dots, P\}$ are the layer-wise feature transformation functions and $\oplus$ is a join operation (such as concatenation or summation).
In contrast, late fusion uses separate subnetworks to independently learn feature transforms and combine their outputs in the prediction stage:
$F_P = H_P\left(H_{P-1}\left(\cdots H_1\left(F_{\mathrm{BEV}}\right)\right)\right) \oplus H_P\left(H_{P-1}\left(\cdots H_1\left(F_{\mathrm{RGB}}\right)\right)\right)$
In early and late fusion, the operation $\oplus$ is implemented as concatenation. Deep fusion enables more interaction between the intermediate features from the different views:

$F_0 = F_{\mathrm{BEV}} \oplus F_{\mathrm{RGB}}, \qquad F_p = H_p^{\mathrm{BEV}}(F_{p-1}) \oplus H_p^{\mathrm{RGB}}(F_{p-1}), \quad p = 1, 2, \dots, P$

where the operation $\oplus$ in deep fusion is implemented as an element-wise mean.
To further improve the flow of information between layers, we propose a different fusion mode. It works by connecting directly from any layer to all subsequent layers:
$F_0 = F_{\mathrm{BEV}} \oplus F_{\mathrm{RGB}}, \qquad F_1 = H_1\left(F_{\mathrm{BEV}} \oplus F_{\mathrm{RGB}}\right), \qquad F_p = a_1 F_1 \oplus a_2 F_2 \oplus \cdots \oplus a_{p-1} F_{p-1}, \quad p = 2, 3, \dots, P$

In D-fusion, the operation $\oplus$ is a weighted summation, a compromise between concatenation and the element-wise mean, where $a_1, \dots, a_{p-2}, a_{p-1}$ denote the weights applied to the fused features of the preceding layers. The network adopts a three-layer structure with the default settings $a_1 = 1/2$, $a_2 = 1/2$ and $a_3 = 1$. We also use the dropout mechanism to mitigate overfitting, which provides a degree of regularization and saves computing overhead.
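One plausible reading of this scheme is sketched below in PyTorch (an illustrative sketch, not the exact implementation used in this work): the cropped BEV and RGB RoI features are joined by an element-wise mean, each fusion layer is a fully connected transform with dropout, and every layer beyond the first receives the weighted sum of all previous layer outputs. The hidden size and the fully connected form of the layers $H_p$ are assumptions.

```python
import torch
import torch.nn as nn

class DFusion(nn.Module):
    """Sketch of dense weighted fusion over RoI feature vectors."""

    def __init__(self, in_dim, hidden_dim=2048, num_layers=3,
                 weights=(0.5, 0.5, 1.0), dropout=0.5):
        super().__init__()
        self.weights = weights
        dims = [in_dim] + [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(dims[p], hidden_dim),
                          nn.ReLU(inplace=True),
                          nn.Dropout(dropout))
            for p in range(num_layers)
        ])

    def forward(self, f_bev, f_rgb):
        f0 = 0.5 * (f_bev + f_rgb)                 # F0: join the two views
        outputs = [self.layers[0](f0)]             # F1 = H1(F_BEV (+) F_RGB)
        for p in range(1, len(self.layers)):
            # Fp: weighted sum of all previous layer outputs (dense connections).
            fused = sum(self.weights[i] * outputs[i] for i in range(p))
            outputs.append(self.layers[p](fused))
        return outputs[-1]                         # fed to the box/class heads
```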

3.5. Training

We trained two separate networks, one for the car class and the other for the pedestrian and cyclist classes. The RPN and detection networks were jointly trained in an end-to-end approach using mini-batches that contained one image with 512 and 1024 RoIs, respectively. The ADAM optimizer was used with an initial learning rate of 0.0001, which decayed exponentially every 30 K iterations with a decay factor of 0.8. The network was trained for 120 K iterations.
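For illustration, the stated schedule corresponds to a setup along the following lines. This is a minimal PyTorch sketch; the tiny model, random data and squared-output loss are placeholders, and the original implementation may use a different framework.

```python
import torch
import torch.nn as nn

# Adam with an initial learning rate of 1e-4, decayed by a factor of 0.8
# every 30K iterations, for 120K iterations in total.
model = nn.Linear(16, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30_000, gamma=0.8)

for step in range(120_000):
    x = torch.randn(8, 16)                 # stand-in for a mini-batch
    loss = model(x).pow(2).mean()          # stand-in for the detection loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                       # decay the learning rate per iteration
```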

4. Experiment and Results

We tested the performance of the multi-layer fusion 3D object detection network on proposal generation and object detection for the three classes of the KITTI object detection benchmark. From the 7481 training frames provided by the KITTI dataset, we split the data into a training set and a validation set at a ratio of about 1:1. For the evaluation, we followed KITTI's easy, moderate and hard difficulty levels. We evaluated and compared four versions of our implementation: the first using early fusion, the second using late fusion, the third using deep fusion and the fourth using our D-fusion.
The training and testing of the network were run on an NVIDIA GeForce GTX 1080 Ti GPU (NVIDIA, Santa Clara, CA, USA) with 11 GB of memory. The network includes the feature extraction networks based on AFM and SENet, as well as the fusion network based on D-fusion. Figure 5 shows the final output results. A comparison of small object detection results is shown in Figure 6. We only use the car class for this demonstration because the small object problem and occlusion are more common and prominent in the car class. As can be seen in Figure 6, both small objects and occluded small objects are effectively detected.
The detection accuracy during training is shown in Figure 7. We use three indicators to evaluate the performance of the network, namely AP2D, AP3D and APBEV. Figure 7 shows the performance on the car, pedestrian and cyclist classes; as the number of training iterations increases, the detection accuracy continues to improve. For cars, the accuracy of the 2D prediction boxes approaches 90%, while the accuracy of the 3D prediction boxes is about 85%. For the pedestrian class, the accuracy of the 2D and 3D prediction boxes is close to 60%, and for the cyclist class it is close to 65%. As shown in Figure 7, the detection accuracy for cars is much higher than for pedestrians and cyclists, because car samples are far more numerous in the dataset.

4.1. 3D Detection

For the final 3D detection results, we used two metrics to measure the accuracy of 3D localization and 3D bounding box detection. For 3D localization, we projected the 3D boxes onto the ground plane to obtain BEV boxes and calculated their average precision (APBEV). For the 3D bounding boxes, we used the average precision (AP3D) metric to evaluate the complete 3D boxes.
When evaluating with AP3D and APBEV, we set an IoU threshold of 0.7 for the car class and 0.5 for the pedestrian and cyclist classes. We compared our detection results with publicly available state-of-the-art networks on the validation set. As shown in Table 1, our architecture performs best in both car and pedestrian detection; it is worth mentioning that, for these two classes, our architecture is on average 3.19% and 5.55% higher than AVOD-FPN in AP3D and APBEV, respectively.
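For reference, the BEV overlap used in this criterion can be computed for rotated boxes roughly as follows. This is a hedged sketch using the shapely library; KITTI's official evaluation code remains the authoritative implementation, and the (cx, cy, l, w, yaw) box parameterization is an assumption.

```python
import numpy as np
from shapely.geometry import Polygon

def bev_iou(box_a, box_b):
    """IoU of two rotated BEV boxes given as (cx, cy, l, w, yaw)."""
    def corners(cx, cy, l, w, yaw):
        # Box corners in the box frame, then rotated and translated.
        dx, dy = l / 2.0, w / 2.0
        pts = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])
        rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                        [np.sin(yaw),  np.cos(yaw)]])
        return pts @ rot.T + np.array([cx, cy])

    pa, pb = Polygon(corners(*box_a)), Polygon(corners(*box_b))
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter + 1e-9)

# Example: a car detection counts as a true positive in APBEV when
# bev_iou(prediction, ground_truth) >= 0.7.
```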

4.2. The Effect of D-Fusion

To analyze the effectiveness of different fusion methods, we tested four of them: early fusion, late fusion, deep fusion and our D-fusion. Table 2 clearly shows that, when only the fusion method differs, D-fusion exhibits essentially the best performance across object classes and difficulty levels. In AP3D, the network with D-fusion improves on early fusion, late fusion and deep fusion by 8.35%, 8.33% and 9.75%, respectively; in APBEV, the corresponding improvements are 8.94%, 5.94% and 8.35%.

4.3. The Effect of AFM and SENet

For the feature extractors, we compared three configurations: traditional convolution with FPN, traditional convolution with AFM, and traditional convolution with AFM and SENet, as shown in Table 3. For the car class, traditional convolution with AFM and SENet improves AP3D to a certain extent, but on APBEV the difference from traditional convolution with AFM is very small, indicating that the attention mechanism does not play a significant role in the BEV. For the pedestrian class, traditional convolution with AFM and SENet is significantly better than traditional convolution with FPN but also clearly worse than traditional convolution with AFM. We attribute this to the large number of small pedestrian samples and to factors such as human posture, which cause significant variation among samples. For the cyclist class, traditional convolution with AFM and SENet is significantly better than traditional convolution with FPN and also better than traditional convolution with AFM. It is worth noting that, compared to traditional convolution with FPN, traditional convolution with AFM and SENet is about 20% higher at the easy level, at least 10% higher at the moderate level and about 16% higher at the hard level.

4.4. Ablation Experiment

To investigate the contribution of each improvement to the network, we conducted ablation experiments on the three class detection tasks. The data in Table 4 show that D-fusion, AFM and SENet all improve network performance, and in the car detection task, their combination achieves the best overall results. In the pedestrian and cyclist detection tasks, the combination of the three does not achieve the best result on every metric, which we attribute to a certain degree of overfitting caused by the deepening of the network. Network complexity should be considered alongside performance improvement: Table 5 lists the number of parameters in each part of the network and shows that introducing the SENet and AFM modules increases the parameter count by a relatively small amount.

5. Conclusions

This paper presents a 3D object detection network that leverages Lidar point clouds and RGB images, with its effectiveness validated through experiments on the KITTI dataset. Firstly, we propose a new D-fusion method built on the three existing fusion methods, which alleviates the problem of semantic loss in the fusion network. Secondly, we improve the feature extraction network by adding the AFM and an attention mechanism to the traditional convolutional network, which improves detection accuracy; the network also performs well on small object detection tasks. Compared with existing fusion networks of the same type, our network achieved the best overall performance on the KITTI benchmark.
Analysis of the failure cases shows that a large portion of detection errors is attributable to the similarity between the background and the objects in complex environments. In future work, we will introduce data augmentation and instance segmentation to enhance the network's ability to cope with complex environments. In addition, the running speed of the network does not yet meet the requirements for processing video streams; we will improve the detection speed by streamlining the network structure.

Author Contributions

Conceptualization, Y.G. and H.H.; methodology, Y.G.; software, Y.G.; validation, Y.G.; formal analysis, Y.G.; investigation, H.H.; resources, H.H.; data curation, Y.G.; writing—original draft preparation, Y.G.; writing—review and editing, H.H.; visualization, Y.G.; supervision, H.H.; project administration, Y.G.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China under Grant No. 61961020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code is available at https://github.com/JackKu0/MLFOD (accessed on 9 August 2023). We used the open-source data KITTI, which is available at https://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d (accessed on 10 August 2019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  2. Chen, X.; Kundu, K.; Zhang, Z.; Ma, H.; Fidler, S.; Urtasun, R. Monocular 3D Object Detection for Autonomous Driving. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2147–2156. [Google Scholar]
  3. Qin, Z.; Wang, J.; Lu, Y. MonoGRNet: A Geometric Reasoning Network for Monocular 3D Object Localization. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8851–8858. [Google Scholar]
  4. Chabot, F.; Chaouch, M.; Rabarisoa, J.; Teuliere, C.; Chateau, T. Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2040–2049. [Google Scholar]
  5. Liu, Z.; Wu, Z.; Tóth, R. Smoke: Single-stage monocular 3d object detection via keypoint estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 996–997. [Google Scholar]
  6. Chen, X.; Kundu, K.; Zhu, Y.; Ma, H.; Fidler, S.; Urtasun, R. 3d object proposals using stereo imagery for accurate object class detection. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1259–1272. [Google Scholar] [CrossRef] [PubMed]
  7. Xu, B.; Chen, Z. Multi-level fusion based 3d object detection from monocular images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2345–2353. [Google Scholar]
  8. Li, C.; Ku, J.; Waslander, S.L. Confidence guided stereo 3D object detection with split depth estimation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5776–5783. [Google Scholar]
  9. Chen, Y.N.; Dai, H.; Ding, Y. Pseudo-stereo for monocular 3d object detection in autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 887–897. [Google Scholar]
  10. Peng, X.; Zhu, X.; Wang, T.; Ma, Y. SIDE: Center-based stereo 3D detector with structure-aware instance depth estimation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 119–128. [Google Scholar]
  11. Yang, Z.; Sun, Y.; Liu, S.; Jia, J. 3dssd: Point-Based 3D Single Stage Object Detector. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11037–11045. [Google Scholar]
  12. Fan, L.; Xiong, X.; Wang, F.; Wang, N.; Zhang, Z. Rangedet: In defense of range view for lidar-based 3d object detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 2898–2907. [Google Scholar]
  13. Zheng, W.; Tang, W.; Jiang, L.; Fu, C.W. SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14489–14498. [Google Scholar]
  14. Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential fusion for 3d object detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4604–4612. [Google Scholar]
  15. Xie, L.; Xiang, C.; Yu, Z.; Xu, G.; Yang, Z.; Cai, D.; He, X. PI-RCNN: An efficient multi-sensor 3D object detector with point-based attentive cont-conv fusion module. In Proceedings of the 2020 AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12460–12467. [Google Scholar]
  16. Shi, S.; Wang, X.; Li, H. Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 770–779. [Google Scholar]
  17. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
  18. Sindagi, V.A.; Zhou, Y.; Tuzel, O. Mvx-net: Multimodal voxelnet for 3d object detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7276–7282. [Google Scholar]
  19. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D Object Detection from RGB-D Data. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927. [Google Scholar]
  20. Zhang, H.; Yang, D.; Yurtsever, E.; Redmill, K.A.; Özgüner, Ü. Faraway-frustum: Dealing with lidar sparsity for 3D object detection using fusion. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 2646–2652. [Google Scholar]
  21. Wang, Z.; Jia, K. Frustum ConvNet: Sliding frustums to aggregate local point-wise features for amodal 3D object detection. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 1742–1749. [Google Scholar]
  22. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3D Object Detection Network for Autonomous Driving. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6526–6534. [Google Scholar]
  23. Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3d proposal generation and object detection from view aggregation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
  24. Lu, H.; Chen, X.; Zhang, G.; Zhou, Q.; Ma, Y.; Zhao, Y. SCANet: Spatial-channel attention network for 3D object detection. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1992–1996. [Google Scholar]
  25. Liang, M.; Yang, B.; Wang, S.; Urtasun, R. Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 641–656. [Google Scholar]
  26. Hong, D.S.; Chen, H.H.; Hsiao, P.Y.; Fu, L.C.; Siao, S.M. CrossFusion net: Deep 3D object detection based on RGB images and point clouds in autonomous driving. Image Vis. Comput. 2020, 100, 103955. [Google Scholar] [CrossRef]
  27. Zhu, M.; Ma, C.; Ji, P.; Yang, X. Cross-modality 3d object detection. In Proceedings of the 2021 IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2021; pp. 3772–3781. [Google Scholar]
  28. Chen, C.; Fragonara, L.Z.; Tsourdos, A. RoIFusion: 3D object detection from LiDAR and vision. IEEE Access 2021, 9, 51710–51721. [Google Scholar] [CrossRef]
  29. Pang, S.; Morris, D.; Radha, H. CLOCs: Camera-LiDAR object candidates fusion for 3D object detection. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 10386–10393. [Google Scholar]
  30. Fu, C.Y.; Liu, W.; Ranga, A.; Tyagi, A.; Berg, A.C. Dssd: Deconvolutional single shot detector. arXiv 2017, arXiv:1701.06659. [Google Scholar]
  31. Lim, J.S.; Astrid, M.; Yoon, H.J.; Lee, S.I. Small object detection using context and attention. In Proceedings of the 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Jeju Island, Republic of Korea, 13–16 April 2021; pp. 181–186. [Google Scholar]
  32. Kisantal, M.; Wojna, Z.; Murawski, J.; Naruniec, J.; Cho, K. Augmentation for small object detection. arXiv 2019, arXiv:1902.07296. [Google Scholar]
  33. Vedaldi, A.; Zisserman, A. Vgg Convolutional Neural Networks Practical; Department of Engineering Science, University of Oxford: Oxford, UK, 2016; Volume 66. [Google Scholar]
  34. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
  35. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
Figure 1. Overall framework of the multi-layer fusion 3D object detection structure.
Figure 2. Feature extractor structure.
Figure 3. Structure of attention mechanism SENet.
Figure 4. Different types of fusion: (a) represents the early fusion method, (b) represents the late fusion method, (c) represents the deep fusion method, and (d) represents the dense fusion (D-fusion) method.
Figure 5. Test result diagram. Car, pedestrian and cyclist represent the three object classes, with the predicted car shown as a green box in (a), the pedestrian as a turquoise box in (b), and the cyclist as a yellow box in (c). The red boxes represent the ground truth. The first score is the confidence score of the 3D prediction box, and the second score is the intersection over union (IoU) between the 3D prediction box and the ground truth.
Figure 6. Comparison of small object detection results. (a) shows the AVOD network, and (b) shows the network proposed in this paper. The red boxes represent the ground truth and the green boxes the predicted results.
Figure 7. The testing accuracy of the object detection. The three indicators measure the test results for the car, cyclist and pedestrian classes: object_detection, object_detection_3D and object_detection_BEV represent the AP, where 'object' stands for car, cyclist or pedestrian.
Table 1. Average precision of 3D anchor boxes (AP3D, in %) and of BEV anchor boxes (APBEV, in %) on the KITTI validation set. The best scores are highlighted in bold.

Method        Class  AP3D Easy  AP3D Moderate  AP3D Hard  APBEV Easy  APBEV Moderate  APBEV Hard
MV3D          Car    71.09      62.35          55.12      86.02       76.90           68.49
AVOD          Car    73.59      65.78          58.38      86.80       85.44           77.73
AVOD-FPN      Car    81.94      71.88          66.38      88.53       83.79           77.90
F-PointNet    Car    81.20      70.39          62.19      88.70       84.00           75.33
SCANet        Car    83.63      74.47          67.78      -           -               -
MVX-Net       Car    83.20      72.70          65.20      89.20       85.90           78.10
ContFuse      Car    82.54      66.22          64.04      88.81       85.83           77.33
CrossFusion   Car    83.20      74.50          67.01      88.39       86.17           78.23
Ours          Car    85.58      75.37          68.83      89.59       86.62           79.52
MV3D          Ped    39.48      33.69          31.51      46.13       40.74           38.11
AVOD          Ped    38.28      31.51          26.98      42.52       35.24           33.97
AVOD-FPN      Ped    50.80      42.81          40.88      58.75       51.05           47.54
F-PointNet    Ped    51.21      44.89          40.23      58.09       50.22           47.20
Ours          Ped    53.87      51.27          45.99      62.00       56.19           49.97
MV3D          Cyc    61.22      48.36          44.37      66.70       54.76           50.55
AVOD          Cyc    60.11      44.90          38.80      63.66       47.74           46.55
AVOD-FPN      Cyc    64.00      52.18          46.61      68.09       57.48           50.77
F-PointNet    Cyc    71.96      56.77          50.39      75.38       61.96           54.68
Ours          Cyc    68.66      42.23          41.71      68.28       46.48           40.64
Table 2. Average precision comparison of different fusion methods on the KITTI validation set. The best scores are highlighted in bold.

Method           Class  AP3D Easy  AP3D Moderate  AP3D Hard  APBEV Easy  APBEV Moderate  APBEV Hard
Early fusion     Car    82.12      73.87          67.70      88.50       86.07           79.03
Late fusion      Car    70.28      56.48          55.85      86.47       77.17           70.03
Deep fusion      Car    82.86      73.42          67.30      89.07       85.91           78.99
D-fusion (Ours)  Car    85.58      75.37          68.83      89.59       86.62           79.52
Early fusion     Ped    45.60      40.75          35.07      48.71       43.42           37.23
Late fusion      Ped    48.77      43.75          37.20      52.64       46.55           45.16
Deep fusion      Ped    47.16      40.85          35.21      54.43       47.69           41.83
D-fusion (Ours)  Ped    53.87      51.27          45.99      62.00       56.19           49.97
Early fusion     Cyc    49.62      32.10          31.57      50.09       32.49           31.89
Late fusion      Cyc    65.19      40.71          40.32      65.72       41.35           40.73
Deep fusion      Cyc    46.22      29.07          23.66      47.16       29.66           29.37
D-fusion (Ours)  Cyc    68.66      42.23          41.71      68.28       46.48           40.64
Table 3. Average precision comparison of different feature extractors on the KITTI validation set. The best scores are highlighted in bold.

Method         Class  AP3D Easy  AP3D Moderate  AP3D Hard  APBEV Easy  APBEV Moderate  APBEV Hard
FPN            Car    83.25      74.55          67.46      89.24       86.57           78.81
AFM            Car    84.71      74.79          68.17      89.77       86.84           79.34
AFM and SENet  Car    85.12      75.65          68.78      89.74       86.66           79.39
FPN            Ped    47.33      41.23          35.29      55.40       48.57           42.27
AFM            Ped    61.32      54.47          47.62      59.30       52.60           46.17
AFM and SENet  Ped    50.69      45.17          39.88      57.01       50.37           44.09
FPN            Cyc    48.48      31.25          25.66      49.16       31.75           25.92
AFM            Cyc    66.65      40.97          40.46      67.21       40.80           40.51
AFM and SENet  Cyc    68.03      41.88          41.45      69.20       42.71           41.74
Table 4. Ablation study. The best scores are highlighted in bold.

Method                      Class  AP3D Easy  AP3D Moderate  AP3D Hard  APBEV Easy  APBEV Moderate  APBEV Hard
D-fusion                    Car    83.99      74.93          68.08      89.41       80.12           79.21
AFM                         Car    84.52      74.60          68.00      89.33       86.37           79.32
SENet                       Car    82.90      73.40          66.98      88.59       85.73           78.65
D-fusion and AFM            Car    85.11      75.70          68.64      89.19       86.56           79.29
D-fusion and SENet          Car    84.45      74.80          67.64      89.53       80.13           79.05
AFM and SENet               Car    85.12      75.65          68.78      89.74       86.66           79.39
D-fusion and AFM and SENet  Car    85.58      75.37          68.83      89.59       86.72           79.52
D-fusion                    Ped    53.98      48.44          41.90      57.95       51.97           45.09
AFM                         Ped    59.27      52.56          46.14      61.35       54.54           47.64
SENet                       Ped    47.33      41.23          35.29      55.40       48.57           42.27
D-fusion and AFM            Ped    53.98      48.78          42.87      57.13       51.41           45.09
D-fusion and SENet          Ped    53.84      50.65          45.14      58.79       53.42           47.65
AFM and SENet               Ped    54.02      43.79          41.83      54.83       49.23           43.23
D-fusion and AFM and SENet  Ped    53.87      51.27          45.99      62.00       56.19           49.97
D-fusion                    Cyc    68.07      41.84          41.22      58.61       40.46           39.90
AFM                         Cyc    66.65      40.97          40.46      67.21       40.80           40.51
SENet                       Cyc    48.48      31.25          25.66      49.16       31.75           25.92
D-fusion and AFM            Cyc    69.11      43.42          42.40      69.72       43.95           42.95
D-fusion and SENet          Cyc    61.19      41.38          34.57      61.70       41.67           34.81
AFM and SENet               Cyc    67.93      42.11          41.07      68.53       42.31           41.92
D-fusion and AFM and SENet  Cyc    68.66      42.23          41.71      68.28       46.48           40.64
Table 5. Network parameters.

Architecture                Number of Parameters
Base Model                  26,265,899
Backbone (Image and Lidar)  9,366,336 & 9,366,336
AFM                         64,515
SENet                       717,440
D-fusion                    12,589,056
Total                       38,854,955
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
