Article

Spatial Attention Frustum: A 3D Object Detection Method Focusing on Occluded Objects

School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2366; https://doi.org/10.3390/s22062366
Submission received: 21 February 2022 / Revised: 16 March 2022 / Accepted: 17 March 2022 / Published: 18 March 2022

Abstract
Achieving accurate perception of occluded objects is a challenging problem for autonomous vehicles. Human vision can quickly locate important object regions in complex external scenes, while other regions are only roughly analysed or ignored; this capability is known as the visual attention mechanism. However, the perception system of an autonomous vehicle cannot know in advance which part of the point cloud lies in the region of interest. It is therefore worthwhile to explore how the visual attention mechanism can be used in the perception system of autonomous driving. In this paper, we propose the spatial attention frustum to address object occlusion in 3D object detection. The spatial attention frustum suppresses unimportant features and allocates limited neural computing resources to critical parts of the scene, thereby providing greater relevance and easier processing for higher-level perceptual reasoning tasks. To ensure that our method maintains good reasoning ability when faced with occluded objects of which only a partial structure is visible, we propose a local feature aggregation module to capture more complex local features of the point cloud. Finally, we discuss the projection constraint relationship between the 3D bounding box and the 2D bounding box and propose a joint anchor box projection loss function, which helps to improve the overall performance of our method. Results on the KITTI dataset show that the proposed method effectively improves the detection accuracy of occluded objects. Our method achieves 89.46%, 79.91% and 75.53% detection accuracy at the easy, moderate, and hard difficulty levels of the car category, with a 6.97% improvement at the heavily occluded hard level. Our one-stage method does not rely on an additional refinement stage, yet achieves accuracy comparable to two-stage methods.

1. Introduction

With the surging demand for autonomous driving and robotics, 3D object detection has progressed substantially in recent years [1,2,3,4,5]. However, developing reliable autonomous driving remains a very challenging task. In actual driving, handling occlusion in complex road conditions is closely tied to the driving safety of autonomous vehicles, and it is also a key factor restricting the performance of 3D object detection. Several existing 3D object detection methods have been explored to address these challenges. LiDAR-based Bird's Eye View (BEV) methods do not suffer from scale and occlusion problems and have been widely used in current 3D object detection methods [6]. Ku et al. projected 3D proposals onto the corresponding 2D feature maps for 3D object detection, fusing features from BEV maps and RGB images [7]. However, these methods lose critical 3D information during the projection process.
Some other methods extract features directly from the raw point cloud [8,9]. Although LiDAR sensors provide accurate position information, it is challenging to rely on LiDAR alone to identify the same objects across consecutive frames in actual motion scenes. The image provided by the camera has rich semantic features and higher resolution, so it can be used to quickly detect and track objects. Therefore, recent work has fused multi-sensor information to obtain better detection performance. Chen et al. proposed Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both the LiDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes [10]. Qi and Wang utilized a 2D detector to generate a 3D frustum and then regressed the parameters of the 3D box directly from the raw point cloud [11,12]. Compared with projection methods, multi-sensor fusion methods avoid losing information and take advantage of 2D image detectors. However, the performance of F-pointnet [11] was limited because the final estimate relied on too few foreground points, which could themselves be incorrectly segmented. Wang [12] gave up the estimation of foreground points and proposed FconvNet, which segments the frustum by aggregating point features into frustum features and estimates oriented boxes in 3D space. However, the fixed step size gives all point clouds in the frustum the same feature weight during feature extraction, so the occluded objects lack importance in the frustum. The detector does not know which part of the region should be of interest, and the occluded objects in the scene cannot be assigned sufficient feature weights, so features from unimportant regions of the feature map suppress the features of the occluded objects.
This work is dedicated to studying the occlusion problem of 3D object detection. Figure 1 presents a typical occlusion scene. From our current viewpoint, the car in the red region is the occluded object of our interest, and the car in the green region is an unimportant object we do not care about. As shown in Figure 2, we generate a frustum from the 2D detection results of the red occluded object to detect it in 3D space. When the shorter-distance green object occludes the red occluded object, part of the point cloud of the green object appears in the frustum. The model cannot know in advance which part of the point cloud belongs to the occluded object, and the point cloud will be given the same importance because of the static segmentation method. As the distance increases, the point cloud density will become relatively sparse, which will cause the features of the essential region to be insignificant or even suppressed. This is not conducive to the detection of the occluded object.
To overcome the limitations of F-pointnet and FconvNet, we tried to further improve the feature saliency of the occluded object in the frustum. When faced with a complex occluded scene, the human visual system quickly focuses on critical regions and suppresses unimportant information. Inspired by the visual attention mechanism, we propose the spatial attention frustum (SAF). This study assumes that occluded objects are partially visible rather than completely occluded and that the 2D detector performs adequately. The spatial attention module can adaptively suppress unimportant features and allocate valuable neural computational resources to critical parts of the scene, which enables faster and more efficient visual inference. To ensure that the model still has good inference capability for objects with only partial structures, we propose a local feature aggregation (LFA) module to capture more complex local features of point clouds and a joint anchor box projection loss (PL) function based on the projection constraint relationship between 3D bounding boxes and 2D bounding boxes. Our contributions can be summarized in four points:
  • We proposed the SAF, which simulates the human visual attention mechanism to position occluded objects in autonomous driving scenes accurately. The SAF can adaptively suppress unimportant features and allocate valuable computational resources to the occluded objects in the frustum so that the features of the occluded objects can be more effectively represented in the limited feature space.
  • Considering that the occluded objects usually have only the visible part of the point cloud, we proposed a point cloud local feature aggregation module to enhance the model’s ability to infer the whole from the local structure. The local feature aggregation module integrates more neighbourhood features, giving each point a larger perceptual field and allowing the model to learn more complex local features.
  • We propose a joint anchor box PL function that obtains more accurate bounding box predictions by utilizing the projection constraint relationship between the 2D and 3D boxes. The experiments indicate that the joint anchor box PL function helps to improve the overall performance of the model.
  • In the process of 3D object detection, our one-stage method can match the performance of the two-stage method without a refinement stage, which makes our model more suitable for autonomous driving scenes in terms of real-time detection and the number of parameters.

2. Related Works

In this section, we briefly review the existing methods for 3D object detection, including image-based methods, LiDAR-based methods, and multi-sensor-based methods.

2.1. Image-Based 3D Object Detection Methods

There are several existing works on estimating the 3D bounding box from images. Chen proposed estimating 3D boxes using the geometry relations between 2D box edges and 3D box corners [13]. Zhang transformed 3D geometric information constraints into energy functions to correct the estimated 3D bounding boxes and faced the problem of insufficient depth information [14]. Brazil utilized the image grid and the location features of the 2D box centre to establish the relationship between the 2D box and the 3D box centre [15]. Weng converted the input image to the representation of a pseudo-LiDAR point cloud through monocular depth estimation and then used a 3D detection network trained end-to-end [16]. Wang used different optimization objects and decoders to estimate the foreground and background depth [17]. Dewi utilized generative adversarial networks (GAN) to enhance the image dataset and improve the recognition rate of the model [18]. Although these methods have demonstrated the feasibility of image-based methods, they are not effective in meeting the safety requirements of autonomous driving due to the lack of precise location information.

2.2. LiDAR-Based 3D Object Detection Methods

LiDAR can provide accurate location information, and many LiDAR-based methods perform 3D object detection tasks. Zhou divided the point cloud into voxels for classification and position regression [19]. Yan used a sparse convolutional middle extractor instead of a 3D CNN [20]. Lang encoded the point cloud into pillars and transformed it into a pseudo-2D image for bounding box regression [21]. Shi implemented foreground segmentation and coarse prediction of bounding boxes and fused predictions and features to achieve accurate regression of the prediction box [22,23]. Ye proposed a hybrid voxel network, which used the attention mechanism to extract more fine-grained point cloud features to balance speed and accuracy [24]. Wang improved the prediction performance of the model by analysing the distribution of the point cloud to extract the features of the region of interest (ROI) [25]. Meyer used a fully convolutional network to predict a multimodal distribution over 3D boxes for each point and then efficiently fused these distributions to generate a prediction for each object [26]. Wang introduced domain adaptation in transfer learning to achieve cross-range adaptation and achieved better performance in the detection of long-range objects [27].

2.3. Multi-Sensor-Based 3D Object Detection Methods

The image contains rich colour and semantic information, and the point cloud contains precise 3D geometric structure and depth information. Making full use of the advantages of both types of information is beneficial for 3D object detection. Ku projected 3D proposals onto the corresponding 2D feature maps for 3D object detection, which improved detection efficiency and reduced the difficulty of learning the 3D structure [7]. Xu fused RGB features with the original 3D point cloud features [28]. Wang generated a series of frustums and aggregated multi-scale features [12]. Qi borrowed the voting ideas of VoteNet [29] and combined 2D votes in the image with 3D votes in the point cloud [30]. Zhu proposed a two-stage multimodal fusion network for 3D object detection and used pseudo-LiDAR points from stereo matching as a data augmentation method to densify the LiDAR points; experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network learn better representations [31]. Vora proposed a sequential fusion method that projects LiDAR points into the output of an image-only semantic segmentation network and appends the class scores to each point [32].

3. Materials and Methods

In this section, we present our proposed method in detail. Section 3.1 presents the construction method of SAF. Section 3.2 presents the LFA module for the point cloud. Section 3.3 presents the joint 3D-2D anchor box PL function.

3.1. Spatial Attention Frustum (SAF) Module

This study proposes an SAF module based on monocular depth estimation. The segmentation method for spatial attention is guided by object height, and the evaluation metric of spatial attention is closely related to the distance estimation of objects. F-pointnet showed that finding the local point cloud corresponding to the proposed pixels in the 2D region avoids traversing an extensive range of point clouds and improves detection efficiency. We aim to construct a model that resembles the mechanism of human visual attention so as to observe occluded objects more efficiently. Inspired by FconvNet, we reconsidered the sliding-frustum approach. The point cloud density becomes sparser as the distance increases, so the point cloud of unimportant objects close to the sensor is denser than that of the occluded objects farther away. As shown in Figure 3, the fixed frustum sequence step size makes the unimportant point features and the point features of interest indistinguishable in the feature extraction process, which may waste computational cost on unimportant points and affect the feature expression of the occluded object. The occluded object features carry relatively small weight in the limited feature space, which makes their expression insignificant in the subsequent process. The human visual attention mechanism ignores the unimportant object features and increases the weight of the occluded object features in the feature map. Therefore, we designed a frustum structure with spatial attention. As shown in Figure 4, with 2D region proposals and camera parameters, the model can focus more on the occluded object features.

SAF Segmentation Method

We estimated a coarse distance for the model to focus on the features of the occluded object, while the exact regression of the 3D position was performed in the point cloud. Therefore, we chose a relatively lightweight approach to restore depth based on the principle of camera projection.
As shown in Figure 5, H is the true height of the object in the 3D ground truth. The height H of each cuboid is fixed in 3D space, but the projection heights of its four vertical edges on the image plane differ. For H, we used the average height derived from the height statistics of the dataset; for cars in the KITTI dataset, H = 1.56 m. In the image plane, the larger the projected height of a vertical edge, the closer the corresponding 3D edge is to us. Therefore, we chose the side with the larger projected vertical edge height to estimate the closest possible depth. For the relative depth L_{depth} of each vertical edge of the box projected onto the image plane, we assumed a distortion-free camera and solved for it according to Equation (1):
L_{depth} = \frac{f \times H}{h_i}, \quad i \in \{1, 2, 3, 4\}   (1)
where f is the focal length of the camera, L_{depth} is the true distance from the optical centre O to the focused object, and h_1, h_2, h_3 and h_4 are the projected heights on the image plane of the four vertical edges of the 3D target bounding box cuboid.
Given the practical limitations of the 2D detector, it would be complicated to calculate the vertical edge projections of the 3D box on the image plane. Therefore, we ignored the regression of the 3D box orientation angle by the 2D detector and obtained only the class information and position information of the 2D box (x_{max}, y_{max}, x_{min}, y_{min}) from the 2D detector. As shown in Figure 5, we completed our rough depth estimation according to the result of the red box. We calculated the projection height according to Equation (2):
h_0 = y_{max} - y_{min}   (2)
The corresponding depth estimate was:
L_{depth} = \frac{f \times H}{h_0}   (3)
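The following minimal Python sketch illustrates how Equations (1)-(3) can be evaluated for a single 2D detection; the function name, the example pixel values, and the focal length used below are illustrative assumptions rather than values from the paper's implementation.

```python
def estimate_depth(y_max: float, y_min: float, focal_length: float,
                   real_height: float = 1.56) -> float:
    """Coarse monocular depth from the 2D box height (Equations (1)-(3)).

    y_max, y_min : vertical pixel coordinates of the 2D detection box
    focal_length : camera focal length in pixels (a distortion-free camera is assumed)
    real_height  : average real object height H (1.56 m for cars in KITTI)
    """
    h0 = y_max - y_min                       # projected height h_0, Equation (2)
    if h0 <= 0:
        raise ValueError("invalid 2D box height")
    return focal_length * real_height / h0   # L_depth, Equation (3)


# Example: a 2D box 60 px tall seen by a camera with f of roughly 721 px
# (a typical KITTI value) gives a depth of about 18.7 m.
print(estimate_depth(y_max=210.0, y_min=150.0, focal_length=721.0))
```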
FconvNet verified that the multi-resolution frustum feature integration variant is effective. We adopted some of the original settings in the subsequent FCN module to facilitate feature alignment in later operations. For the frustum of each region proposal, we propose the following segmentation scheme, with the division scale and number of frustums listed in Table 1. The division scale is the division size of the frustum, which can also be interpreted as the resolution of the frustum-level features; there are four levels of feature resolution, T, T/2, T/4 and T/8, where T is generally a multiple of 8. The slice step of each sub-frustum is the length parameter along the frustum axis, denoted L_{T_i}^n, where n is the resolution level of the division scale and takes the values 1, 2, 3, 4. Its parameters are determined by the correction factor ω and L_{pc}: the correction coefficient ω corrects the distance estimate, and L_{pc} is the total length of the extracted frustum. Num A is the number of frustums in the non-interest region, and Num B is the number of frustums in the ROI. For a frustum of any scale, the slice length L_{T_1}^n and step length d_{T_1}^n of the non-interest region are given by Equation (4):
L_{T_1}^n = \omega \times L_{depth}, \quad d_{T_1}^n = \omega \times L_{depth}   (4)
The slice length L_{T_i}^n and step length d_{T_i}^n of the ROI frustums at the four scales are given by Equations (5)-(8), respectively:
L_{T_i}^1 = \frac{2 \times (L_{pc} - \omega \times L_{depth})}{T}, \quad d_{T_i}^1 = \frac{L_{pc} - \omega \times L_{depth}}{T}   (5)

L_{T_i}^2 = \frac{2 \times (L_{pc} - \omega \times L_{depth})}{T/2}, \quad d_{T_i}^2 = \frac{L_{pc} - \omega \times L_{depth}}{T/2}   (6)

L_{T_i}^3 = \frac{2 \times (L_{pc} - \omega \times L_{depth})}{T/4}, \quad d_{T_i}^3 = \frac{L_{pc} - \omega \times L_{depth}}{T/4}   (7)

L_{T_i}^4 = \frac{2 \times (L_{pc} - \omega \times L_{depth})}{T/8}, \quad d_{T_i}^4 = \frac{L_{pc} - \omega \times L_{depth}}{T/8}   (8)
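As a rough illustration of Equations (4)-(8), the Python sketch below computes the slice lengths and steps for each resolution level. It assumes that the non-interest region spans the first ω × L_depth of the frustum and that each ROI slice overlaps its neighbour by half its length (length = 2 × step), and all variable names are ours rather than from the paper's code.

```python
def saf_slices(L_pc: float, L_depth: float, omega: float, T: int):
    """Return (slice_length, step) per resolution level T, T/2, T/4, T/8.

    The non-interest region is one slice of length omega * L_depth (Equation (4));
    the ROI is split into overlapping slices per Equations (5)-(8).
    """
    roi_length = L_pc - omega * L_depth                               # remaining frustum length
    slices = {"non_interest": (omega * L_depth, omega * L_depth)}     # Equation (4)
    for resolution in (T, T // 2, T // 4, T // 8):
        step = roi_length / resolution                                # d_{T_i}^n
        slices[resolution] = (2.0 * step, step)                       # L_{T_i}^n = 2 * d_{T_i}^n
    return slices


# Example: a 70 m frustum, estimated depth 20 m, omega = 0.9, T = 240.
for level, (length, step) in saf_slices(70.0, 20.0, 0.9, 240).items():
    print(level, round(length, 3), round(step, 3))
```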

3.2. Local Feature Aggregation (LFA) Module

The occluded object is usually only partially visible to LiDAR, and the lack of some features increases the difficulty of recognition. Enhancing the understanding of the local structure of the object is crucial, because the whole object position must sometimes be inferred from a small number of local points. We argue that each point should have a larger receptive field in the point sampling stage while sampling efficiency is maintained. The common point cloud sampling methods were analysed and compared when selecting a sampling method. Farthest point sampling (FPS) was considered first because it guarantees good coverage of the sampled points. However, when dealing with large-scale point cloud scenes, its quadratic computational complexity degrades the real-time performance of the model. Grid sampling discretizes 3D space with grid points and controls the spacing between points through the grid size, but its uniformity is not as good as FPS. Curvature-based sampling is stable, but the long curvature computation time makes it unsuitable for large-scale datasets. Random sampling (RS) has the most efficient, constant computational complexity and good dataset scalability but inevitably loses some useful information, which adversely affects the feature representation of the model.
This study selected the RS algorithm, which allows the model to work well when facing datasets of any size. Inspired by the work of RandLA-Net and pointnet [33,34], we defined an LFA module to increase the receptive field of each point. The LFA module is based on the K nearest neighbour (KNN) algorithm to find the nearest K neighbour points. Figure 6 illustrates the feature aggregation and the down-sampling process of the LFA module. The red dashed box shows the feature aggregation process for the sampled points, and the number of point clouds after each RS operation is reduced to half of the original number.
For any one of the frustums, assume that it contains M local points, represented in the camera coordinate system as P_{camera} = (x_i, y_i, z_i)_{camera}. Instead of the camera coordinates, we use the coordinates relative to the centre (x_c, y_c, z_c)_{frustum} of the current frustum, calculated as P_i = (x_i - x_c, y_i - y_c, z_i - z_c). The KNN algorithm finds the K nearest neighbour points {p_i^1, p_i^2, p_i^3, ..., p_i^k} of each point P_i in Euclidean space; the Euclidean coordinate feature of each point has dimension d, and the point features of the neighbours are {f_i^1, f_i^2, f_i^3, ..., f_i^k}. Local graph-structure position encoding is then performed on the 3D coordinates of P_i. A multilayer perceptron (MLP) maps the encoded features to a high-dimensional space, concatenates them with the original neighbouring point features, and pools them; the output is used as the new point feature. The vector mapping of the neighbour features is as follows:
G(p_i, p_i^k) = MLP\left(p_i^k \oplus (p_i - p_i^k) \oplus \|p_i - p_i^k\| \oplus p_i\right)   (9)
The new point features are as follows:
\hat{f}_i = MaxPool\left(G(p_i, p_i^k)\right) \oplus MLP(p_i), \quad i \in \{1, 2, \ldots, n\}, \; k \in \{1, 2, \ldots, m\}   (10)
In Equations (9) and (10), MLP represents the multi-layer perceptron, MaxPool represents maximum pooling, p_i is the coordinate of the selected point, p_i^k is the coordinate of the neighbour point, ⊕ is the concatenation operation, p_i - p_i^k is the relative coordinate, and \|p_i - p_i^k\| is the Euclidean distance. The KNN algorithm ensures that neighbouring points can still be extracted in sparse regions of the point cloud. After two down-sampling passes over the aggregated local features, the sampled points can be considered to have a larger receptive field. The local graph structure embeds the coordinates of all neighbouring points and efficiently learns the complex local structure to retain more local features.
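The PyTorch-style sketch below is our own illustration of the neighbourhood encoding in Equations (9) and (10): it gathers the K nearest neighbours of each point, concatenates the neighbour coordinates, relative coordinates, distances and centre coordinates, applies an MLP and max-pools over the neighbourhood. Layer widths and K are placeholders, and the neighbour point features f_i^k are omitted for brevity.

```python
import torch
import torch.nn as nn

class LocalFeatureAggregation(nn.Module):
    """Illustrative LFA step (Equations (9)-(10)): KNN position encoding + max pooling."""

    def __init__(self, out_dim: int = 64, k: int = 16):
        super().__init__()
        self.k = k
        # 3 (neighbour) + 3 (relative) + 1 (distance) + 3 (centre) = 10 input channels
        self.pos_mlp = nn.Sequential(nn.Linear(10, out_dim), nn.ReLU())
        self.point_mlp = nn.Sequential(nn.Linear(3, out_dim), nn.ReLU())

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) frustum-relative coordinates P_i
        dists = torch.cdist(xyz, xyz)                        # (N, N) pairwise distances
        knn_idx = dists.topk(self.k, largest=False).indices  # (N, K) nearest neighbours
        neigh = xyz[knn_idx]                                 # (N, K, 3) neighbour coords p_i^k
        centre = xyz.unsqueeze(1).expand(-1, self.k, -1)     # (N, K, 3) repeated p_i
        rel = centre - neigh                                 # p_i - p_i^k
        dist = rel.norm(dim=-1, keepdim=True)                # ||p_i - p_i^k||
        encoded = torch.cat([neigh, rel, dist, centre], -1)  # concatenation in Equation (9)
        g = self.pos_mlp(encoded)                            # G(p_i, p_i^k)
        pooled = g.max(dim=1).values                         # MaxPool over the K neighbours
        return torch.cat([pooled, self.point_mlp(xyz)], -1)  # Equation (10), (N, 2 * out_dim)


features = LocalFeatureAggregation()(torch.randn(1024, 3))   # -> shape (1024, 128)
```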

3.3. Feature Extractor and Fully Convolutional Network (FCN)

As in FconvNet, we used pointnet with weight sharing for parallel processing and aggregated point features into frustum features. The pointnet module consists of three MLP layers and one max pooling layer. A pointnet with T sets of shared weights aggregates the features of T sub-frustums into frustum-level feature vectors. The T feature vectors are combined into a 2D feature map F of size T × d, which is used as the input of the subsequent FCN. The FCN contains four convolution layers and three deconvolution layers. Each convolutional layer is followed by batch normalization and ReLU nonlinearity. Except for the first convolutional layer, each convolution block uses stride-2 convolution to down-sample the 2D feature maps, so the output feature map of each convolutional block in the FCN has 2-fold lower resolution in the frustum dimension. When the scale is T/2, the feature map is compatible with its corresponding one in the FCN. To maintain the integrity of the FCN, we concatenate the feature vectors extracted in the T-scale down-sampling process with the feature vectors of the T/2 scale and use a fusion convolution layer to keep the size constant. The feature map output by each convolution block is up-sampled by the corresponding deconvolution block, and all deconvolution outputs are concatenated along the feature dimension. Our detection head includes classification (CLS) and regression (REG) branches.
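To make the aggregation step concrete, the following sketch shows a weight-shared pointnet applied to T sub-frustums in parallel, producing the T × d feature map that feeds the FCN. Channel sizes, the number of points per sub-frustum and the input dimension are placeholder assumptions, and the FCN itself is omitted.

```python
import torch
import torch.nn as nn

class FrustumFeatureExtractor(nn.Module):
    """Sketch of the weight-shared pointnet aggregation: a shared MLP over the points of
    each sub-frustum followed by max pooling, producing a T x d frustum-level feature map."""

    def __init__(self, in_dim: int = 131, feat_dim: int = 128):
        super().__init__()
        # in_dim = 3 coordinates + 128 LFA features is an assumption for this example
        self.shared_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1), nn.BatchNorm1d(feat_dim), nn.ReLU(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (T, in_dim, M) -- T sub-frustums with M points each; weights shared over T
        feats = self.shared_mlp(points)          # (T, feat_dim, M)
        frustum_feat = feats.max(dim=2).values   # max-pool over the points -> (T, feat_dim)
        return frustum_feat.t().unsqueeze(0)     # 2D feature map of size (1, feat_dim, T)


feature_map = FrustumFeatureExtractor()(torch.randn(240, 131, 64))
print(feature_map.shape)   # torch.Size([1, 128, 240]) -- ready for the 1D FCN
```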

3.4. Projection Loss Function

To fully exploit the excellent performance of the 2D detector, and inspired by [31], we propose a 3D-2D coupled loss function in the regression stage to obtain a more accurate 3D box estimate. The ideal 2D bounding box corresponds to the projection of the 3D bounding box onto the image plane. Therefore, the constraint of the 2D bounding box on the 3D bounding box should be fully used during the regression of the 3D bounding box. The ground truth of the 3D bounding box is represented as (x_g, y_g, z_g, l_g, w_g, h_g, θ_g) in the LiDAR coordinate system, where (x_g, y_g, z_g) denotes the box centre, (l_g, w_g, h_g) denotes the three side lengths of the box, and θ_g is the object orientation in the BEV. The 2D bounding box is represented as (x'_g, y'_g, l'_g, h'_g), where (x'_g, y'_g) is the 2D bounding box centre and (l'_g, h'_g) is the 2D bounding box size. The projection from a point x in the Velodyne LiDAR coordinate system to the image coordinate y is as follows:
y = P_{rect}^{(i)} R_{rect}^{(0)} T_{velo}^{cam} x   (11)
In Equation (11), x is the homogeneous coordinate of the point, P_{rect}^{(i)} is a 3 × 4 projection matrix containing the camera parameters, R_{rect}^{(0)} is the 4 × 4 rectifying rotation matrix of the reference camera, and T_{velo}^{cam} is the LiDAR-camera extrinsic matrix obtained by calibration, composed of the rotation matrix R_{3×3} and translation vector T as follows:
T_{velo}^{cam} = \begin{bmatrix} R_{3 \times 3} & T \\ 0 & 1 \end{bmatrix}   (12)
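A small NumPy sketch of the projection in Equations (11) and (12) is shown below. It follows the usual KITTI calibration convention; the padding of the 3 × 3 rotation and the 3 × 4 extrinsic matrices to 4 × 4 reflects our assumption about how the matrices are stored.

```python
import numpy as np

def project_velo_to_image(pts_velo, P_rect, R_rect0, Tr_velo_cam):
    """Project Velodyne points into image pixels (Equations (11)-(12)).

    pts_velo: (N, 3); P_rect: (3, 4); R_rect0: (3, 3); Tr_velo_cam: (3, 4).
    """
    pts_h = np.hstack([pts_velo, np.ones((pts_velo.shape[0], 1))])   # homogeneous coords (N, 4)

    R = np.eye(4)
    R[:3, :3] = R_rect0                                              # pad R_rect^(0) to 4 x 4
    T = np.vstack([Tr_velo_cam, [0.0, 0.0, 0.0, 1.0]])               # pad [R_3x3 | T] to 4 x 4

    proj = (P_rect @ R @ T @ pts_h.T).T                              # Equation (11), shape (N, 3)
    return proj[:, :2] / proj[:, 2:3]                                # normalise by depth -> pixels
```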
We followed the existing study [12] for anchor box generation. For any anchor represented as (x_p, y_p, z_p, l_p, w_p, h_p, θ_p), the centre offsets (Δx, Δy, Δz), predefined size offsets (Δl, Δw, Δh) and orientation offset Δθ were computed. For the regression of the projection of the 3D bounding box, we projected the regressed 3D anchors onto the image to generate 2D anchors of size (x'_p, y'_p, l'_p, h'_p) and computed the 2D centre offsets (Δx', Δy') and size offsets (Δl', Δh'). We calculated the offsets using Equation (13):
Δx = x_g − x_p,  Δy = y_g − y_p,  Δz = z_g − z_p
Δl = \log(l_g / l_p),  Δh = \log(h_g / h_p),  Δw = \log(w_g / w_p)
Δθ = \sin(θ_g − θ_p)
Δx' = x'_g − x'_p,  Δy' = y'_g − y'_p
Δl' = \log(l'_g / l'_p),  Δh' = \log(h'_g / h'_p)   (13)
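The following illustrative helper encodes the regression targets of Equation (13) for a single anchor; the tuple layouts, dictionary keys and function name are assumptions made for the example.

```python
import numpy as np

def encode_offsets(gt3d, anchor3d, gt2d, anchor2d):
    """gt3d/anchor3d: (x, y, z, l, w, h, theta); gt2d/anchor2d: (x', y', l', h')."""
    xg, yg, zg, lg, wg, hg, tg = gt3d
    xp, yp, zp, lp, wp, hp, tp = anchor3d
    xg2, yg2, lg2, hg2 = gt2d
    xp2, yp2, lp2, hp2 = anchor2d
    return {
        "center_3d": (xg - xp, yg - yp, zg - zp),                        # (dx, dy, dz)
        "size_3d": (np.log(lg / lp), np.log(hg / hp), np.log(wg / wp)),  # (dl, dh, dw)
        "angle": np.sin(tg - tp),                                        # d_theta
        "center_2d": (xg2 - xp2, yg2 - yp2),                             # (dx', dy')
        "size_2d": (np.log(lg2 / lp2), np.log(hg2 / hp2)),               # (dl', dh')
    }
```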
The regression loss function is as follows:
Loss = L_{cls,3D} + \gamma \left( \sum_{res \in \{x, y, z, l, w, h, \theta\}} L_{smooth\text{-}L1}(\Delta res) + \mu \sum_{pro \in \{x', y', l', h'\}} L_{smooth\text{-}L1}(\Delta pro) + \varphi L_{corner} \right)   (14)
The regression loss (Equation (14)) is based on the Euclidean distance and the smooth-L1 regression loss for the offsets of centre, size and angle, covering both Δres and Δpro; γ, μ, and φ are loss coefficients. The focal loss [35] is used to calculate the point segmentation loss L_{cls,3D} and handle the class imbalance issue:
L_{cls,3D}(P) = -\sigma (1 - P)^{\tau} \log(P)   (15)
where
P = \begin{cases} p, & \text{for foreground points,} \\ 1 - p, & \text{otherwise} \end{cases}   (16)
where p is the predicted foreground probability of a single 3D point. We also use a corner loss L_{corner} [11] to regularize the box regression of all parameters.
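Below is a hedged PyTorch sketch of the classification term (Equations (15)-(16)) and the smooth-L1 regression terms of Equation (14); the corner loss is omitted, γ and the function names are placeholders, and the residuals passed in are assumed to be predicted offsets minus ground-truth offsets.

```python
import torch
import torch.nn.functional as F

def segmentation_focal_loss(p, is_foreground, sigma=0.25, tau=2.0):
    """L_cls,3D of Equations (15)-(16): P = p for foreground points, 1 - p otherwise."""
    P = torch.where(is_foreground, p, 1.0 - p)
    return (-sigma * (1.0 - P).pow(tau) * torch.log(P.clamp(min=1e-6))).mean()

def box_regression_loss(res_residuals, pro_residuals, gamma=1.0, mu=0.1):
    """Smooth-L1 terms of Equation (14); residuals = predicted offsets - target offsets."""
    l_res = F.smooth_l1_loss(res_residuals, torch.zeros_like(res_residuals), reduction="sum")
    l_pro = F.smooth_l1_loss(pro_residuals, torch.zeros_like(pro_residuals), reduction="sum")
    return gamma * (l_res + mu * l_pro)   # the corner loss term (phi * L_corner) is omitted
```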

4. Experiments

This section evaluates our proposed 3D object detector on the public KITTI benchmark [36], and our method will be compared with previous methods in the 3D object detection task. Section 4.1 introduces our dataset and some experimental details. Section 4.2 provides a full ablation experimental study and analysis of the various components of the model. Section 4.3 shows the visualization of the results of the 3D detection model on the KITTI dataset. Section 4.4 shows the results of comparison with other methods.

4.1. Dataset

The KITTI dataset is one of the most popular autonomous driving datasets. As the ground truth of the test set is unavailable, we follow existing work [12] for the dataset division and evaluation approach. We follow convention and use the car category, which contains the most training examples, for the ablation study. The official 3D IoU thresholds for cars, pedestrians and cyclists are 0.7, 0.5 and 0.5, respectively. The mean average precision (mAP) is our evaluation metric, following the official evaluation protocol. KITTI evaluates 3D object detection performance using the PASCAL criteria also used for 2D object detection. Distant objects are thus filtered based on their bounding box height in the image plane, and the three difficulty categories are as follows:
For the easy category, the minimum bounding box height is 40 Px and the max truncation is 15%; for the moderate category, the minimum bounding box height is 25 Px and the max truncation is 30%; for the hard category, the minimum bounding box height is 25 Px and the max truncation is 50%. More details about difficulties are defined in Table 2.
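For clarity, the short helper below assigns the difficulty level from the criteria above and the occlusion levels in Table 2; the integer occlusion codes follow the standard KITTI label convention, and the function itself is only an illustration.

```python
def kitti_difficulty(box_height_px: float, occlusion: int, truncation: float) -> str:
    """Assign the KITTI difficulty level (occlusion: 0 = fully visible,
    1 = partly occluded, 2 = difficult to see, as in the KITTI label files)."""
    if box_height_px >= 40 and occlusion <= 0 and truncation <= 0.15:
        return "easy"
    if box_height_px >= 25 and occlusion <= 1 and truncation <= 0.30:
        return "moderate"
    if box_height_px >= 25 and occlusion <= 2 and truncation <= 0.50:
        return "hard"
    return "ignored"   # objects outside all three categories are not evaluated


print(kitti_difficulty(box_height_px=32.0, occlusion=1, truncation=0.2))   # "moderate"
```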

4.2. Implementation Details

We used the 2D detection results provided by FconvNet. For the LiDAR backbone network, we set the depth range in KITTI to (0, 75) metres. We performed two down-sampling operations on the 3D space corresponding to each region proposal, each halving the number of points of the previous one, for a final number of 1024 sampled points. To prepare the positive and negative training samples, we scaled down the ground truth box by 0.5, counted anchor boxes centred inside the reduced box as foreground, counted the others as background, and ignored anchor boxes centred between the reduced box and the ground truth. We also applied random flipping and shifting to these points, similar to FconvNet. We trained our model with a mini-batch size of 32 for 60 epochs on one NVIDIA Quadro M6000 GPU. We used the ADAM optimizer with an initial learning rate of 0.001, and the weight decayed to 0.001 every 10 epochs. For the car category, the frustum resolution was four groups, and the numbers of frustum groups used in training were (240, 120, 60, 30). We kept σ = 0.25 and τ = 2, in accordance with the original paper on the focal loss, and set the loss weight μ = 0.1. In the evaluation phase, we used an NMS module with a 3D IoU threshold of 0.1 to reduce redundancy. The final 3D detection score was calculated from the 2D detector score and the predicted 3D bounding box score.
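The training-setup sketch below mirrors the hyper-parameters listed above (batch size 32, 60 epochs, Adam with an initial learning rate of 0.001, decay every 10 epochs). The StepLR factor of 0.1 is an assumption, since the exact decay schedule is not fully specified here, and the linear layer is only a placeholder for the full detector.

```python
import torch

model = torch.nn.Linear(128, 7)                       # placeholder for the full network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(60):
    # ... iterate over mini-batches of size 32, compute the loss of Equation (14),
    #     back-propagate and call optimizer.step() ...
    scheduler.step()                                  # decay the learning rate every 10 epochs
```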

4.3. Ablation Study

This section verifies the proposed components and variants by conducting ablation studies on the validation split of KITTI. We used the official training and validation splits and accumulated the evaluation results over the whole training set. We followed convention and used the car category, which contains the most training examples. Although [12] proposed a refining stage to modify the estimation of the bounding box, the refining stage destroys the integrity of the model and slows down detection. Therefore, we used the structure without the final refinement process as the backbone and used the results reproduced under the above data division standard as the baseline. The ablation study results are shown in Table 3 and Figure 7.

4.3.1. Effects of the SAF Module

Table 3 shows the effect of the SAF module. At the easy, moderate, and hard difficulty levels of the 3D detection results, the SAF contributed −0.33%, 2.22%, and 4.18% to the overall accuracy, so the SAF plays a crucial role in improving the detection accuracy at the moderate and hard difficulty levels. At the easy difficulty level, the detection accuracy decreased compared with the baseline. We speculate that, by the KITTI standard, objects at the easy difficulty level are fully visible; there are almost no unimportant points in this case, so the features extracted by the model can be considered to come from the objects we are interested in. The biased depth estimate may lose some features when detecting clearly visible objects, so the SAF does not perform well at the easy difficulty level. However, at the moderate and hard difficulty levels, occlusion makes our model focus on the occluded object and suppress the features of uninteresting objects, making the ROI features more prominent; this is the key to the significant performance improvements at the moderate and hard difficulty levels compared with the backbone.
We further analyse the results of combining the SAF with each of the other modules. As can be seen from Table 3, the structure combining SAF and LFA achieves the highest accuracy at the hard difficulty level and exceeds the sum of the gains of the two modules used alone, which indicates that the two modules reinforce each other at that level. The LFA better retains local structural features, and learning these local features is crucial for objects of interest for which only part of the point cloud is visible. Similarly, most of the detection accuracy improvement at the moderate difficulty level comes from combining the two modules. The effect of the two modules at the easy difficulty level is also improved, indicating that introducing LFA compensates for the SAF performance loss to a certain extent, but the overall improvement is still limited.

4.3.2. Effects of the LFA Module

As shown in Table 3, applying the LFA module brings gains of 0.89%, 1.74%, and 2.77% in 3D detection at the easy, moderate, and hard difficulty levels, respectively. The results are reported for the car class on the validation split of KITTI. The local feature aggregation module improves the overall performance of the model and benefits all three difficulty levels. As shown in Table 2, objects at the easy level are fully visible; the overall information in their point clouds is more abundant, enabling the model to make accurate inferences without relying heavily on neighbour point features. Therefore, the LFA module improves the easy level by only 0.89%. However, LFA brings larger gains for the more severely occluded objects at the moderate and hard difficulty levels. The experimental results show that LFA can effectively aggregate the features of neighbouring points and infer the overall structure of an occluded object for which only a local visible point cloud is available. Finally, considering the varying configurations of cars, the efficient random sampling method allows the model to remain usable when dealing with datasets of any size.

4.3.3. Effects of the PL Loss Function

The impact of the 3D-2D projection loss module on performance is shown in Table 3, where the module contributes 0.96%, 1.11%, and −0.47% to the detection accuracy. At the easy difficulty level, the single PL module gives the best improvement, and its joint use with the SAF compensates for the error introduced by the depth estimation. At the hard difficulty level, introducing the PL module leads to a slight decrease in performance: when the detected object is far away, a slight change in the projected coordinates may cause a significant change in the predicted 3D box, so the PL module performs better over shorter distances than over longer ones. In the subsequent experimental parameter settings, the loss weight is chosen so that the accuracy impact across the three difficulty levels remains within an acceptable range. Considering the overall performance of the model, we believe that it is necessary to utilize the 3D-2D projection constraint.

4.3.4. Effects of Feature Extractor

For the point cloud feature extraction structure, we tested and compared the impact of pointnet [34] and pointnet++ [37] on performance. Before the test, we expected pointnet++, which incorporates local features, to show better performance, but we did not obtain the expected accuracy improvement in actual verification. We believe the reasons are as follows: our model's local feature aggregation module already captures part of the local information, so the sampling module of pointnet++ contributes little. In some sparse point cloud regions, the radius-based sampling of pointnet++ may lead to insufficient sampling points and loss of information. Finally, since the sampling method of pointnet++ is based on FPS, and considering the number of model parameters and dataset scalability, we abandoned pointnet++ and adopted the relatively lightweight pointnet.

4.4. Qualitative Results

The precision–recall (PR) curve measures whether the method performs well over all positive and negative samples and is calculated as Precision = TP / (TP + FP) and Recall = TP / (TP + FN), where TP is the number of true positives, FP the number of false positives, and FN the number of false negatives. Figure 8 shows the PR curves for car 3D detection at the three difficulty levels. Our method is always the best compared with F-pointnet. Compared with FconvNet with a refinement stage, our one-stage method also performs well. We show the proposed detector's qualitative 3D object detection results on the KITTI benchmark in Figure 9, where the upper part is the image and the lower part is the visualization of the 3D point cloud. We use red bounding boxes to represent the predicted boxes and green bounding boxes to represent the ground-truth boxes for better visual comparison. As shown in Figure 9, the 3D boxes of the occluded objects are estimated accurately, which shows that our method has good detection performance for distant and highly occluded cars.
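The snippet below sketches how the precision and recall values behind such a curve can be accumulated from score-sorted detections; the matching of detections to ground truth (e.g., with the 0.7 3D-IoU threshold) is assumed to have been done beforehand.

```python
import numpy as np

def precision_recall(is_tp: np.ndarray, num_gt: int):
    """is_tp: boolean array over score-sorted detections (True = matched a ground-truth box)."""
    tp = np.cumsum(is_tp)                      # cumulative true positives
    fp = np.cumsum(~is_tp)                     # cumulative false positives
    precision = tp / np.maximum(tp + fp, 1)    # TP / (TP + FP)
    recall = tp / max(num_gt, 1)               # TP / (TP + FN)
    return precision, recall


prec, rec = precision_recall(np.array([True, True, False, True, False]), num_gt=4)
print(prec, rec)   # precision and recall after each successive detection
```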

5. Discussion

As shown in Table 4, we compared performance on the KITTI validation set with methods that also rely on a 2D detector. For fairness, the compared methods use the same 2D detection results and are routinely evaluated on the three most numerous categories. The earliest work is F-pointnet; strictly speaking, it is a three-stage approach, because after 3D instance segmentation it needs to train a transformation network to regress the residual between the coordinate origin and the centre of the real object, and experiments show that this stage has a critical impact on performance. As an improvement, FconvNet removed the T-Net structure but added a refinement stage to improve detection accuracy; its ablation experiments showed that this strategy significantly improves object detection accuracy, but it remains a two-stage method. To facilitate the performance comparison, we adopted some settings of FconvNet and repeated the tests on the baseline dataset. We used a one-stage network as the backbone and separately listed the two-stage results obtained by adding the refinement stage. We performed four-fold cross-validation and averaged the experimental results. The algorithm runtime is very important for self-driving vehicles; as runtime is affected by the computational speed of the hardware, for fairness we implemented FconvNet and our proposed model on the same device, because these models all work on top of 2D detectors. Table 4 compares the above methods; our method achieves state-of-the-art performance at the easy and moderate difficulty levels for the car category. The results at the hard difficulty level show that our one-stage method does not require an additional refinement stage and performs comparably to the two-stage method. The one-stage method has superior deployment value in subsequent applications because real-time operation and a small number of parameters are critical for autonomous driving. Moreover, our model has almost half the parameters because the refinement stage is absent. Table 5 and Table 6 compare the results of the above methods on the pedestrian and cyclist categories; the proposed method exceeds F-pointnet and FconvNet without the refinement stage in accuracy, which indicates the effectiveness of our proposed model. However, unlike for the vehicle category, the proposed method does not perform as well as FconvNet with the refinement stage on the pedestrian and cyclist categories. There may be two reasons for this: (1) the value of H in Equation (1) needs to be further optimized; and (2) the numbers of pedestrians and cyclists in the dataset are much smaller than the number of vehicles, so the dataset is imbalanced and our proposed method provides limited improvement in detection accuracy for these categories. Finally, although FconvNet with a refinement stage achieved the highest detection accuracy, it also resulted in the longest runtime; the advantage of our proposed model is that it achieves a compromise between accuracy and runtime.
Table 7 compares the existing methods on the KITTI validation set. Compared with [7,11,12,19,20,22,38,39,40], we obtain significant accuracy improvements at the moderate and hard difficulty levels, which indicates that our method effectively improves the detection performance for occluded objects and performs comparably to two-stage methods. Moreover, compared with the LiDAR-based methods, it is beneficial to utilize RGB and LiDAR multimodal information to improve the overall performance of the model. It is also interesting to explore the effect of the occlusion percentage on detection accuracy. As shown in Table 2, the official KITTI dataset provides three occlusion levels expressed as occlusion percentages, and the results of the three difficulty categories in Table 4, Table 5, Table 6 and Table 7 reflect the impact of the occlusion percentage on detection accuracy. Conservatively, the maximum percentage of partial occlusion that can be detected by the proposed method is 50%. However, as shown in Figure 9, some objects with occlusion above 50% are not officially labelled but were still successfully detected by our model. Therefore, further research on this issue relies on further high-precision labelling of the dataset by KITTI.

6. Conclusions

In this paper, we propose a 3D object detection method with spatial attention, which improves the detection performance for occluded objects. The SAF module achieves a significant representation of the features of occluded objects in a limited feature space. The LFA module enhances the understanding of the local structure of the occluded object and allows better inference of the overall structure from a small number of locally visible points when only part of the object is visible. We explored the feasibility of fully exploiting the 3D-2D constraint relationship, and the experimental results show that the joint 3D-2D anchor box projection loss helps to improve the overall performance of the model. Finally, compared with the baseline, our method significantly improves the detection accuracy of occluded objects without additional stages, making it suitable for autonomous driving scenarios with real-time and parameter constraints. The present limitation of this work is the assumption that the 2D region proposals are accurate enough; when the 2D detector fails to detect the occluded object in the image, no frustum is generated for the occluded object. In addition, the effect of illumination changes on model performance is an interesting topic. We will focus on these problems in our future research.

Author Contributions

Methodology, X.H.; software, X.H. and X.Z.; validation, X.H., Y.W. and H.J.; formal analysis, Y.W.; investigation, X.H. and X.D.; resources, F.G.; data curation, X.Z.; writing—original draft preparation, X.H.; writing—review and editing, X.H. and X.Z.; visualization, X.D.; supervision, H.J.; project administration, H.J.; funding acquisition, F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, Q.; Shen, X. ThickSeg: Efficient semantic segmentation of large-scale 3D point clouds using multi-layer projection. Image Vis. Comput. 2021, 108, 104161. [Google Scholar] [CrossRef]
  2. Qin, P.; Zhang, C.; Dang, M. GVnet: Gaussian model with voxel-based 3D detection network for autonomous driving. Neural Comput. Appl. 2021, 5, 1–9. [Google Scholar] [CrossRef]
  3. Li, D.; Deng, L.; Cai, Z. Design of traffic object recognition system based on machine learning. Neural Comput. Appl. 2021, 33, 8143–8156. [Google Scholar] [CrossRef]
  4. Liang, W.; Xu, P.; Guo, L.; Bai, H.; Zhou, Y.; Chen, F. A survey of 3D object detection. Multimed. Tools Appl. 2021, 80, 29617–29641. [Google Scholar] [CrossRef]
  5. Yang, B.Y.; Du, X.P.; Fang, Y.Q.; Li, P.Y.; Wang, Y. Review of rigid object pose estimation from a single image. J. Image Graph. 2021, 26, 334–354. [Google Scholar]
  6. Zamanakos, G.; Tsochatzidis, L.; Amanatiadis, A.; Pratikakis, I. A comprehensive survey of LIDAR-based 3D object detection methods with deep learning for autonomous driving. Comput. Graph. 2021, 99, 153–181. [Google Scholar] [CrossRef]
  7. Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3d proposal generation and object detection from view aggregation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
  8. Hao, W.; Wang, Y. Structure-based object detection from scene point clouds. Neurocomputing 2016, 191, 148–160. [Google Scholar] [CrossRef]
  9. Ye, Y.; Chen, H.; Zhang, C.; Hao, X.; Zhang, Z. SARPNET: Shape attention regional proposal network for liDAR-based 3D object detection. Neurocomputing 2020, 379, 53–63. [Google Scholar] [CrossRef]
  10. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3D Object Detection Network for Autonomous Driving. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 24–28 September 2017; pp. 6526–6534. [Google Scholar]
  11. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum pointnets for 3D object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 918–927. [Google Scholar]
  12. Wang, Z.; Jia, K. Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3D object detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1742–1749. [Google Scholar]
  13. Chen, X.; Kundu, K.; Zhang, Z.; Ma, H.; Fidler, S.; Urtasun, R. Monocular 3d object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2147–2156. [Google Scholar]
  14. Zhang, J.; Su, Q.; Wang, C.; Gu, H. Monocular 3D vehicle detection with multi-instance depth and geometry reasoning for autonomous driving. Neurocomputing 2020, 403, 182–192. [Google Scholar] [CrossRef]
  15. Brazil, G.; Liu, X. M3d-rpn: Monocular 3d region proposal network for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 9287–9296. [Google Scholar]
  16. Weng, X.; Kitani, K. Monocular 3D object detection with pseudo-lidar point cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019; pp. 857–866. [Google Scholar]
  17. Wang, X.; Yin, W.; Kong, T.; Jiang, Y.; Li, L.; Shen, C. Task-aware monocular depth estimation for 3D object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12257–12264. [Google Scholar]
  18. Dewi, C.; Chen, R.C.; Liu, Y.T.; Jiang, X.; Hartomo, K.D. Yolo V4 for Advanced Traffic Sign Recognition with Synthetic Training Data Generated by Various GAN. IEEE Access 2021, 9, 97228–97242. [Google Scholar] [CrossRef]
  19. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
  20. Yan, Y.; Mao, Y.; Li, B. SECOND: Sparsely Embedded Convolutional Detection. Sensors 2018, 18, 3337. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
  22. Shi, S.; Wang, X.; Li, H. Pointrcnn: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 770–779. [Google Scholar]
  23. Shi, S.; Wang, Z.; Shi, J.; Wang, X.; Li, H. From Points to Parts: 3D Object Detection from Point Cloud with Part-Aware and Part-Aggregation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Ye, M.; Xu, S.; Cao, T. Hvnet: Hybrid voxel network for lidar based 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 1631–1640. [Google Scholar]
  25. Wang, J.; Lan, S.; Gao, M.; Davis, L.S. InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin, Germany, 2020; pp. 405–420. [Google Scholar]
  26. Meyer, G.P.; Laddha, A.; Kee, E.; Vallespi-Gonzalez, C.; Wellington, C.K. Lasernet: An efficient probabilistic 3D object detector for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12677–12686. [Google Scholar]
  27. Wang, Z.; Ding, S.; Li, Y.; Zhao, M.; Roychowdhury, S.; Wallin, A.; Sapiro, G.; Qiu, Q. Range adaptation for 3D object detection in lidar. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019; pp. 2320–2328. [Google Scholar]
  28. Xu, D.; Anguelov, D.; Jain, A. Pointfusion: Deep sensor fusion for 3D bounding box estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 244–253. [Google Scholar]
  29. Ding, Z.; Han, X.; Niethammer, M. VoteNet: A Deep Learning Label Fusion Method for Multi-Atlas Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Berlin, Germany, 2019; pp. 202–210. [Google Scholar]
  30. Qi, C.R.; Chen, X.; Litany, O.; Guibas, L.J. Imvotenet: Boosting 3D object detection in point clouds with image votes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 4404–4413. [Google Scholar]
  31. Zhu, M.; Ma, C.; Ji, P.; Yang, X. Cross-modality 3D object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2021; pp. 3772–3781. [Google Scholar]
  32. Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential fusion for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 4604–4612. [Google Scholar]
  33. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. Randla-net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 11108–11117. [Google Scholar]
  34. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  35. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  36. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  37. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv 2017, arXiv:1706.02413. [Google Scholar]
  38. Liang, M.; Yang, B.; Wang, S.; Urtasun, R. Deep continuous fusion for multi-sensor 3D object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 641–656. [Google Scholar]
  39. Wen, L.; Jo, K.-H. Three-Attention Mechanisms for One-Stage 3-D Object Detection Based on LiDAR and Camera. IEEE Trans. Ind. Inform. 2021, 17, 6655–6663. [Google Scholar] [CrossRef]
  40. Wen, L.-H.; Jo, K.-H. Fast and Accurate 3D Object Detection for Lidar-Camera-Based Autonomous Vehicles Using One Shared Voxel-Based Backbone. IEEE Access 2021, 9, 22080–22089. [Google Scholar] [CrossRef]
Figure 1. Common occlusion scene in autonomous driving.
Figure 2. The frustum with the same length at each scale means that the unimportant point cloud and the attention point cloud cannot be effectively distinguished. The ‘+’ is the concatenation operation.
Figure 3. The feature vector of the unimportant object will seriously affect the expression of the feature vector of the object of interest in the feature map. The ‘+’ is the concatenation operation.
Figure 4. The frustum with spatial attention can improve the feature expression of the focused objects in the feature map. The ‘+’ is the concatenation operation.
Figure 5. Projection relationship between ground truth and image.
Figure 6. The LFA module.
Figure 7. The effect of different modules on performance improvement.
Figure 8. The precision-recall curves for car 3D detection at all levels of difficulty.
Figure 9. Qualitative results on the KITTI.
Table 1. The division scale and number of the frustum.

Scale | Num A | Num B
T     | 1     | T − 1
T/2   | 1     | T/2 − 1
T/4   | 1     | T/4 − 1
T/8   | 1     | T/8 − 1
Table 2. The difficulty level officially provided by the KITTI dataset.

Level    | Min Bounding Box Height | Max Occlusion Level | Max Truncation
Easy     | 40 Px | Fully visible     | 15%
Moderate | 25 Px | Partly occluded   | 30%
Hard     | 25 Px | Difficult to see  | 50%
Table 3. Effects of using different modules.

Backbone | SAF | LFA | PL  | Easy          | Mod           | Hard
Yes      |     |     |     | 87.95         | 76.37         | 68.56
Yes      | Yes |     |     | 87.62 (−0.33) | 78.59 (+2.22) | 72.74 (+4.18)
Yes      |     | Yes |     | 88.84 (+0.89) | 78.11 (+1.74) | 71.33 (+2.77)
Yes      |     |     | Yes | 88.91 (+0.96) | 77.48 (+1.11) | 68.09 (−0.47)
Yes      | Yes | Yes |     | 88.71 (+0.76) | 79.52 (+3.15) | 75.69 (+7.13)
Yes      | Yes |     | Yes | 88.48 (+0.53) | 78.89 (+2.52) | 72.81 (+4.25)
Yes      |     | Yes | Yes | 89.72 (+1.77) | 78.27 (+1.90) | 71.17 (+2.61)
Yes      | Yes | Yes | Yes | 89.46 (+1.51) | 79.91 (+3.54) | 75.53 (+6.97)
Table 4. Performance comparison between our method and the state of the art based on the 2D detector to generate the frustum on the Cars category of the KITTI validation set.

Method            | Stage | Number of Parameters | Runtime (s) | AP3D Easy | AP3D Mod | AP3D Hard | APBEV Easy | APBEV Mod | APBEV Hard
F-pointnet        | Two   | -         | -    | 83.76 | 70.92 | 63.65 | 88.16 | 84.02 | 76.44
Backbone + Refine | Two   | 6,633,554 | 0.49 | 88.98 | 78.66 | 72.23 | 90.08 | 88.84 | 80.10
Backbone          | One   | 3,316,777 | 0.26 | 87.95 | 76.37 | 68.56 | 89.88 | 87.48 | 78.99
Ours              | One   | 3,724,013 | 0.29 | 89.46 | 79.91 | 75.53 | 91.27 | 89.63 | 85.75
Table 5. Performance comparison between our method and the state of the art based on the 2D detector to generate the frustum on the Pedestrians category of the KITTI validation set.

Method            | Stage | Number of Parameters | Runtime (s) | AP3D Easy | AP3D Mod | AP3D Hard | APBEV Easy | APBEV Mod | APBEV Hard
F-pointnet        | Two   | -         | -    | 70.00 | 61.32 | 53.59 | 72.38 | 66.39 | 59.57
Backbone + Refine | Two   | 6,633,554 | 0.49 | 70.88 | 62.24 | 53.37 | 72.59 | 67.05 | 58.68
Backbone          | One   | 3,316,777 | 0.26 | 68.47 | 60.63 | 50.80 | 70.31 | 66.14 | 56.09
Ours              | One   | 3,724,013 | 0.29 | 70.61 | 61.84 | 53.93 | 72.24 | 66.58 | 59.11
Table 6. Performance comparison between our method and the state of the art based on the 2D detector to generate the frustum on the Cyclists category of the KITTI validation set.

Method            | Stage | Number of Parameters | Runtime (s) | AP3D Easy | AP3D Mod | AP3D Hard | APBEV Easy | APBEV Mod | APBEV Hard
F-pointnet        | Two   | -         | -    | 77.15 | 56.49 | 53.37 | 81.82 | 60.03 | 56.32
Backbone + Refine | Two   | 6,633,554 | 0.49 | 81.69 | 69.55 | 59.87 | 83.28 | 70.10 | 61.79
Backbone          | One   | 3,316,777 | 0.26 | 75.88 | 64.63 | 55.74 | 80.37 | 63.24 | 57.52
Ours              | One   | 3,724,013 | 0.29 | 77.24 | 65.21 | 56.15 | 80.79 | 66.47 | 57.86
Table 7. Performance comparison between our method and the state of the art on the KITTI validation set.

Method          | Modality    | AP3D Easy | AP3D Mod | AP3D Hard | APBEV Easy | APBEV Mod | APBEV Hard
VoxelNet [19]   | LiDAR       | 81.97 | 65.46 | 62.85 | 89.60 | 84.81 | 78.57
SECOND [20]     | LiDAR       | 87.43 | 76.48 | 69.10 | 89.96 | 87.07 | 79.66
PointRCNN [22]  | LiDAR       | 88.88 | 78.63 | 77.38 | 90.21 | 87.89 | 85.51
ContFuse [38]   | LiDAR + RGB | 86.32 | 73.25 | 67.81 | 95.44 | 87.34 | 82.43
AVOD-FPN [7]    | LiDAR + RGB | 84.41 | 74.44 | 68.65 | -     | -     | -
F-pointnet [11] | LiDAR + RGB | 83.76 | 70.92 | 63.65 | 88.16 | 84.92 | 76.44
FconvNet [12]   | LiDAR + RGB | 89.02 | 78.80 | 77.09 | 90.23 | 88.79 | 86.84
Ours            | LiDAR + RGB | 89.46 | 79.91 | 75.53 | 91.27 | 89.63 | 85.75