Article

Detector Consistency Research on Remote Sensing Object Detection

Shaanxi Key Laboratory for Network Computing and Security Technology, Department of Computer Science and Engineering, Xi’an University of Technology, No. 5 South Jinhua Road, Xi’an 710048, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(17), 4130; https://doi.org/10.3390/rs15174130
Submission received: 28 June 2023 / Revised: 29 July 2023 / Accepted: 19 August 2023 / Published: 23 August 2023
(This article belongs to the Section AI Remote Sensing)

Abstract

Remote sensing image (RSI) processing is a traditional research field, and RSI object detection is one of its most important directions. This paper focuses on an inherent problem of multi-stage object detection frameworks: the coupling error transmitting problem. In brief, because of the coupling method between the classifier and the regressor, traditional multi-stage detection frameworks tend to be fallible when encountering coarse object proposals. To deal with this problem, this article proposes a novel deep learning-based multi-stage object detection framework. Specifically, a novel network head architecture with a multi-to-one coupling method is proposed to avoid the coupling error of the traditional network head architecture. Moreover, it is found that the traditional network head architecture is more efficient than the novel architecture when encountering fine object proposals. Considering this phenomenon, a proposal-consistent cooperation mechanism between the network heads is proposed. This mechanism lets the traditional network head and the novel network head exploit each other's advantages and avoid each other's disadvantages. Experiments with different backbone networks on three publicly available data sets have shown the effectiveness of the proposed method, which improves the mAP by 0.7% to 12.3% on most models and data sets.

1. Introduction

Remote sensing object detection is the process of determining the location and the category of objects in optical remote sensing images. In recent years, a large number of remote sensing object detection methods [1,2] have been proposed based on the deep learning technique. As an important and practical research field, the object detection task is not only used in the detection of ships [3], airports [4], vehicles [5] and other objects, but is also widely used in object tracking [6], instance segmentation [7], caption generation [8] and many other fields. A detector consists of a position regressor and a category classifier, and the coupling between these two components has always been a concern in the object detection task.
Recently, many object detection methods have been proposed. Among these methods, the mainstream ones are based on deep learning. As shown in Figure 1a, deep learning-based methods can be roughly divided into the following steps: feature extraction, region proposal, Region of Interest Pooling (RoIP), classification and regression. The region proposal is used to pre-generate regions where objects may exist. The RoIP is used to sample features in the pre-generated regions.
The above methods use a single network head (including classification and regression) to detect objects. The singularity of the network head makes the framework lack efficiency in both classification and regression [9]. In order to deal with this problem, many studies have been made on multi-stage object detection, which is defined as using multiple cascaded network heads to improve the accuracy of the bounding boxes. The framework of the multi-stage object detection is shown in Figure 1b. Some examples are shown as follows. Li et al. [9] proposed a group recursive learning network consisting of three cascaded network heads: weakly supervised object segmentation, object proposal generation and recursive detection refinement. Cai et al. [10] proposed cascade R-CNN with different Intersection over Union (IoU) thresholds to deal with the insufficient training samples problem on the later network heads.
However, the current multi-stage methods have an inherent problem: the coupling error is transmitted along multiple network heads [10,11,12]. In the aforementioned methods, the traditional classification-to-regression coupling is a one-for-one exact matching. First, this one-for-one exact matching is likely to transmit classification errors to the regression. As shown in Figure 1a,b, the network finally adopts the regressor corresponding to the classification result. Secondly, a false regression makes the input proposal of the RoIP deviate from the true object. Therefore, the RoIP cannot sample useful features, which further leads to a classification error in the next network head [11]. In other words, the regression error is transmitted to the next network head. Finally, the above two kinds of errors are repeated across multiple network heads, resulting in errors in the final detection result. Apparently, the problem can cause iterative error transmission in the multi-stage detection heads and urgently needs to be addressed.
In order to overcome the coupling error transmitting problem, the Consistent Multi-stage Detection (CMD) framework is proposed. As shown in Figure 1c, the proposed CMD framework consists of the following parts.
First, the proposed method introduces the concepts of a robust coupling head and an efficient coupling head for coarse boxes and fine boxes, respectively. In contrast, the prior works only used the fine coupling head. Second, the proposed method adopts coupling mechanisms that are consistent with the change in boxes during multiple detection stages. The boxes tend to change from coarse to fine, so the adopted coupling mechanisms follow the same trend. Thus, the proposed model is a consistent method.
In summary, the main contributions of our work are as follows.
  • Concepts of a robust head and efficient head are proposed. Through various experiments, the functions of two kinds of coupling methods are validated. The robust coupling method is helpful for avoiding detection errors for coarse proposals. The efficient coupling method usually achieves better performance on fine proposals but worse performance on coarse proposals.
  • Fineness-consistent multi-head cooperation mechanisms are investigated between the robust coupling head and the efficient coupling head. These cooperation mechanisms are designed to be consistent with the coarse-to-fine trend of the object bounding boxes during the multi-stage detection process.
  • A novel network head architecture, consistent multi-stage detection, is proposed to deal with the coupling error transmitting problem by adopting an appropriate multi-head cooperation mechanism. Experiments with different backbone networks on three widely used remote sensing object detection data sets have shown the effectiveness of the proposed framework.
The rest of this paper is organized as follows. In Section 2, some existing object detection methods are illustrated. Section 3 describes the proposed method in detail. Finally, the experimental results and discussion are reported in Section 4, while the conclusion is made in Section 5.

2. Related Works

This section reviews major object detection methods that have made significant contributions to the Remote Sensing Image (RSI) object detection task. Taking the implementation of deep learning as a milestone, these RSI object detection methods are divided into two categories: traditional manual methods and neural network methods.

2.1. Traditional Manual Methods

The traditional manual methods take manually designed features for classification and localization. These traditional manual methods have made great progress and are divided into several series according to the main thoughts.

2.1.1. Low-Level Feature Methods

SIFT-based methods:
The first series is based on the Scale Invariant Feature Transform (SIFT) [13] feature descriptor, which extracts features of several selected points from a given image and compares them with features of selected points from a known object. SIFT-based methods are robust to object rotation, scaling and panning. Sedaghat et al. [14] proposed an improved SIFT to address the feature distribution problem of SIFT in multi-source RSIs. Li et al. [15] proposed scale-orientation join restriction criteria for better feature-matching performance among object key points in RSIs. This method enables SIFT to be robust to the scaling problem that is common in RSIs.
HOG-based methods:
The next series is based on the Histogram of Oriented Gradients (HOG) [16]. HOG features represent objects with orientation distributions and intensity distributions of an object’s spatial region gradient vectors. Tuermer et al. [17] proposed an integrated real-time processing chain using HOG features to classify regions in RSI vehicle detection. Grabner et al. [5] proposed an efficient online boosting algorithm for RSI vehicle detection based on the research on HOG. Cheng et al. [18] proposed an RSI object detection framework using a collection of part detectors (COPD), which uses HOG as low-level features.

2.1.2. Mid-Level Feature Methods

BoVW-based methods:
Another series is based on the Bag of Visual Words (BoVW) [19] model. The BoVW is an unsupervised recognition method that represents images by a collection of local regions. Later, Xu et al. [20] applied the BoVW model to object-based RSI classification. Sun et al. [21] proposed a sparse coding and BoVW-based rotation invariant method to deal with the complex shape problem of RSI object detection. Cheng et al. [22] proposed a BoVW and pLSA-based scene classification method to detect landslides from RSIs. Xia et al. [23] proposed an active clustering method to annotate RSIs with little expert knowledge.
SC-based methods:
The final series is based on Sparse Coding (SC). The main idea of SC is to represent the high-dimensional original data with a low-dimensional manifold that contains several structural primitives. Zhang et al. [24] proposed a Sparse Representation-Based Binary Hypothesis (SRBBH) for RSI object detection. Yokoya et al. [25] integrated local-feature sparse coding into a generalized Hough transform. Zhang et al. [26] proposed Sparse Transfer Manifold Embedding (STME), which can extract discriminative features from limited samples. Han et al. [27] proposed a multi-class RSI object detection method integrating visual saliency modeling and sparse coding.

2.2. Neural Network Methods

The neural network methods refer to the methods based on deep learning features. Compared with traditional manual methods, neural network methods involve little human interference [18]. The recent developments of deep learning-based object detection methods are organized as one-stage methods, two-stage methods and multi-stage methods.

2.2.1. One-Stage Methods

YOLO-v3 [28] is a state-of-the-art method in terms of its lightweight and high-speed characteristics. YOLOF [29] analyzed the success of FPN and proposed an efficient single-level feature method. SSD [30] added a pyramid feature hierarchy to the YOLO-style single-shot design; that is, it predicts targets on feature maps with different receptive fields.

2.2.2. Two-Stage Methods

Girshick et al. [31] first proposed R-CNN, an object detection framework based on convolutional neural networks. To improve the speed of this model, Girshick proposed Fast R-CNN [32] and, together with Ren et al., Faster R-CNN [33]. Cheng et al. [34] proposed a rotation-invariant and Fisher-discriminative CNN model to address object rotation. Liu et al. [35] proposed a network with oriented response modules to deal with small objects. Zhao et al. [36] proposed a Multi-scale Image block-level Fully Convolutional Neural Network (MIF-CNN) to deal with fast detection in RSIs. Lu et al. [37] proposed a Gated and Axis-Concentrated Localization Network (GACL Net) to address small-object localization. Long et al. [38] proposed a Feature Fusion Deep Network (FFDN) to deal with small, partially obscured or out-of-view objects. Zhang et al. [39] proposed a Multi-Scale Feature Fusion Network (MS-FF Net) to address scale variation.

2.2.3. Multi-Stage Methods

Different from previous object detection methods, which mainly use a single network head, multi-stage object detection frameworks with multiple cascaded network heads are investigated. Li et al. [9] proposed a group recursive learning network consisting of three cascaded networks: weakly supervised object segmentation, object proposal generation and recursive detection refinement. Cai et al. [10] proposed a detection framework that consists of multiple cascaded detectors with different IoU thresholds.

3. Proposed Method

To deal with the coupling error transmitting problem, the CMD framework is proposed with a robust network head, keeping the network heads of different stages consistent with the coarseness of their input proposals. In the following subsections, the CMD framework is illustrated from four aspects: the entire framework, the robust network head architecture, the proposal-consistent heads cooperation mechanism and the training loss function.

3.1. Entire Framework

As shown in Figure 2, the proposed CMD framework consists of several parts. First, CNN is used to extract hierarchical deep features of the input image. Secondly, the Feature Pyramid Network (FPN) is used to integrate layers with hierarchical semantic levels and different spatial scales. Thirdly, the Region Proposal Network (RPN) is used to generate proposals, which are expected to contain objects. Finally, several detection stages are implemented iteratively to improve detection accuracy. Each detection stage consists of two parts: the RoIP to sample features of proposals from the FPN features of the input image and the network head to give the detection results. There are two types of network heads, one of which is proposed in this article. Moreover, a proposal-consistent head cooperation mechanism between the two types of network heads is proposed.
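The following is a minimal sketch of the inference flow just described, written in PyTorch style. The module names (backbone, FPN, RPN, RoIP and the head list) mirror the components of Figure 2 and are placeholders rather than the authors' implementation.

```python
# A hedged sketch of the CMD inference flow; each sub-module is assumed to be
# provided elsewhere (e.g., backbone = ResNet, heads = robust/efficient heads).
import torch.nn as nn

class CMDFramework(nn.Module):
    def __init__(self, backbone, fpn, rpn, roi_pool, heads):
        super().__init__()
        self.backbone = backbone            # CNN feature extractor
        self.fpn = fpn                      # multi-scale feature integration
        self.rpn = rpn                      # proposal generation
        self.roi_pool = roi_pool            # RoIP: sample proposal features
        self.heads = nn.ModuleList(heads)   # e.g., [robust, robust, efficient]

    def forward(self, image):
        feats = self.fpn(self.backbone(image))       # hierarchical features
        proposals = self.rpn(feats)                  # coarse candidate boxes
        for head in self.heads:                      # iterative refinement
            roi_feats = self.roi_pool(feats, proposals)
            scores, proposals = head(roi_feats, proposals)
        return scores, proposals                     # final detections
```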

3.1.1. CNN

As shown in Figure 2, CNN is the first part of the CMD framework. At training time, the cascaded convolutional layers of the CNN are trained to learn effective features. At inference time, the learned weights of the convolutional layers are used to extract features of the whole image. In detail, given an input image or a feature map $I = \{I_s\}_{s=1}^{S}$ with $S$ channels, each convolutional layer computes an output $O = \{O_c\}_{c=1}^{C}$ with $C$ channels by
$$O_c = K_c(I) = \sum_{s=1}^{S} \kappa_{cs} \otimes I_s + b_c$$
where $K_c$ represents the convolutional layer with weights $[\kappa_{c1}, \kappa_{c2}, \ldots, \kappa_{cs}, \ldots, \kappa_{cS}]$ and bias $b_c$. The symbol $\otimes$ represents the convolution operation.
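As a concrete illustration, the following minimal sketch performs the same per-output-channel operation with a standard PyTorch convolution layer; the channel counts and kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

S, C = 3, 64                                   # input / output channel counts
conv = nn.Conv2d(in_channels=S, out_channels=C, kernel_size=3, padding=1)

image = torch.randn(1, S, 224, 224)            # dummy input image (batch of 1)
features = conv(image)                         # each O_c sums kappa_cs (*) I_s + b_c
print(features.shape)                          # torch.Size([1, 64, 224, 224])
```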

3.1.2. FPN

FPN is adopted in our framework to ensure feature efficiency for object detection. As shown in Figure 2, the FPN starts with several deep-layer CNN features. The FPN features on the right side of the dashed box are the element-wise addition results of feature maps from two different layers. Before the addition, the shallow-layer feature is transformed through a 1 × 1 convolutional layer, and the deep-layer feature is upsampled to the same size as the shallow layer. Finally, several integrated feature maps with different spatial scales are output for the use of the rest of the framework.
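A minimal sketch of one such merge step follows, assuming 512 channels in the shallow backbone layer and 256 channels in the FPN feature; the 1 × 1 convolution adjusts the shallow feature, the deeper feature is upsampled, and the two are added element-wise.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

lateral = nn.Conv2d(512, 256, kernel_size=1)    # 1x1 conv on the shallow layer

shallow = torch.randn(1, 512, 100, 168)         # higher-resolution backbone feature
deep = torch.randn(1, 256, 50, 84)              # lower-resolution FPN feature

deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
merged = lateral(shallow) + deep_up             # element-wise addition
```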

3.1.3. RPN

Given a feature map of the image, anchors are built on each pixel with different width–height ratios and areas. Then a classifier is used to predict whether there is an object in an anchor. At the same time, a regressor is used to predict the bounding box coordinate regression that brings the anchor closer to the object, if there is one. Since there is more than one feature map generated by the FPN, the CMD framework contains as many RPNs as FPN feature maps.
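The anchor construction described above can be sketched as follows; the anchor areas are illustrative assumptions, while the aspect ratios match those listed later in Section 4.3.3.

```python
import itertools
import torch

def anchors_at(cx, cy, areas=(32**2, 64**2, 128**2), ratios=(0.5, 1.0, 2.0)):
    """Build anchors centred on one feature-map pixel (cx, cy)."""
    boxes = []
    for area, ratio in itertools.product(areas, ratios):
        w = (area / ratio) ** 0.5           # width so that w * h = area
        h = w * ratio                       # and h / w = ratio
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return torch.tensor(boxes)              # (num_anchors, 4) in xyxy format

print(anchors_at(100.0, 100.0).shape)        # torch.Size([9, 4])
```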

3.1.4. RoIP

RoIP receives proposals from the RPNs or the network heads and samples the features of the proposals from the FPN features of the whole image. Features at the appropriate scale are sampled for each single proposal, making the following detection with network heads more accurate.

3.1.5. Network Head

A network head refers to a coupled pair of a regressor and a classifier. The network head receives the feature map of a proposal from the RoIP. The regressor takes this proposal feature map to predict the bounding box coordinate regression, and the classifier takes it to predict the proposal category. Apparently, the proposal may not perfectly contain the object, causing a possible classification error.

3.2. Robust Network Head Architecture

3.2.1. Robust Network Head

As described in Section 1, the traditional network head is likely to cause the coupling error transmitting problem. Therefore, a network head with a novel coupling method that can avoid this error transmission is highly desirable.
As shown in Figure 3a, the traditional efficient network head adopts the category prediction to decide which element of the regression list to use. This coupling method is likely to convey classification errors to the regression branch. In order to deal with this coupling problem, a robust network head is proposed, as shown in Figure 3b, which only has a single general regressor for all categories.
This robust network head is helpful to improve object detection performance for coarse input proposals whose bounding box is not close enough to the true object. The reason is explained as follows. First, during the inference time, the single regressor will not be affected by the classifier of the same stage. The regression results are related only to the convolutional proposal features and the regressor itself. Second, in the robust network head, the single regressor is trained with samples of different classes. Therefore, the network head can robustly improve the location accuracy of different class-bounding boxes. Finally, through blocking both of the detection error broadcasting paths, namely broadcasting through the regression branch and the classification branch, this robust network head is able to improve the model performance.
However, the robust network head does not perform efficiently enough on fine input proposals whose bounding box is close to the true object. The reason is explained as follows. First, with fine input proposals, the traditional regressor of the traditional efficient network head can get less affected by the coupling error. Secondly, because of the multi-to-one coupling method between the general regressor and the classifier, the robust network head cannot regress specifically for the shape of each category.
For relatively coarse input proposals, experiments (see Section 4) proved that the robust network head shows better performance than the traditional network head.

3.2.2. Efficient Network Head

Although the robust network head can avoid the coupling error well, it still has some limitations. Therefore, an analysis of the traditional network head is also needed.
This efficient network head is more efficient than the robust network head for fine input proposals. The reasons are listed as follows. First, under the precondition of relatively fine input proposals, a current stage classifier is relatively reliable. Thus the coupling error, which is caused by classifiers misleading the regressor through a one-to-one coupling method, can be less important for detection performance. Second, considering that different categories of objects have various fine spatial characteristics, the traditional regressor in the efficient network head can specifically adjust to these various fine spatial characteristics of different categories. In other words, the general regressor in the robust network head has to consider fine spatial characteristics of different category objects and thus cannot be sensitive to differences among different categories. In summary, with the fine input proposals, coupling error can be ignored, and the traditional regressor can adjust to different categories. Therefore, the traditional efficient network head is more appropriate for fine input proposals.
For relatively fine input proposals, experiments (see Section 4) proved that the efficient network head shows better performance than the robust network head.
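To make the structural difference concrete, the following is a minimal sketch of the two head types; the layer sizes and number of categories are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class EfficientHead(nn.Module):
    """Traditional head: one box regression per class, coupled to the classifier."""
    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes + 1)    # +1 for background
        self.reg = nn.Linear(feat_dim, 4 * num_classes)    # per-class boxes

    def forward(self, x):
        scores = self.cls(x)
        deltas = self.reg(x).view(x.size(0), -1, 4)        # (N, num_classes, 4)
        picked = scores[:, 1:].argmax(dim=1)               # predicted class index
        return scores, deltas[torch.arange(x.size(0)), picked]  # one-to-one coupling

class RobustHead(nn.Module):
    """Proposed head: a single class-agnostic regressor shared by all classes."""
    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        self.cls = nn.Linear(feat_dim, num_classes + 1)
        self.reg = nn.Linear(feat_dim, 4)                  # one shared box

    def forward(self, x):
        return self.cls(x), self.reg(x)                    # multi-to-one coupling
```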

3.3. Proposal-Consistent Heads Cooperation Mechanism

According to Section 3.2, the robust network head and the efficient network head are respectively suitable for coarse input proposals and fine input proposals. However, as shown in Figure 4a, the traditional multi-stage detection adopts efficient network heads for all stages, neglecting the different roughnesses of different stages’ input proposals. Apparently, there should be a cooperation mechanism between the robust network head and the efficient network head to find the optimal detection performance. Thus, in this section, a proposal-consistent head cooperation mechanism is proposed.

3.3.1. Coarse-to-Fine Proposals

In multi-stage detection frameworks, proposals grow from coarse to fine. Through a detection network head, the majority of input proposals are optimized [10]. Therefore, after several detection network heads, the majority of the positive proposals are closer to the real objects. In summary, there is a coarse-to-fine trend of the proposals among the different stages of multi-stage detection frameworks.

3.3.2. Heads Cooperation Mechanism

As shown in Figure 4b, in accordance with the coarse-to-fine trend of the proposals, robust network heads are implemented before efficient network heads in our framework. At the early stages, the proposals are still too coarse for the efficient network heads, so robust network heads are adopted to avoid the coupling error transmitting problem. Once the proposals are fine enough, efficient network heads are used to provide better performance through category-sensitive regression. In summary, the proposed method adopts a network head cooperation mechanism that is consistent with the coarse-to-fine trend of the proposals across stages.
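A minimal sketch of such a schedule is given below, reusing the RobustHead and EfficientHead sketches from Section 3.2; the "RRF" string is one example configuration.

```python
import torch.nn as nn

def build_cmd_heads(schedule="RRF", feat_dim=1024, num_classes=20):
    """Map a coarse-to-fine schedule string onto a list of detection heads."""
    mapping = {"R": RobustHead, "F": EfficientHead}
    return nn.ModuleList(mapping[s](feat_dim, num_classes) for s in schedule)

heads = build_cmd_heads("RRF")   # stages 1-2: robust heads, stage 3: efficient head
```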

3.3.3. Comparison and Advantages

In the traditional multi-stage detection framework, the coarse proposals fed to the early-stage efficient network heads cause serious coupling errors. As shown in Figure 4a, these coupling errors make a part of the positive proposals falsely regress to new positions that contain no real objects. These falsely regressed proposals are then classified by the next-stage efficient network head as negative samples and are never considered again in the remaining stages. As a result, positive proposals are finally lost.
In the CMD framework, the robust network heads at the early stages effectively avoid coupling errors, although they are not efficient enough at optimizing proposals. As shown in Figure 4b, most of the positive proposals are properly regressed to new positions that contain real objects. The efficient network heads are then adopted to efficiently refine these fine proposals toward the real objects.
Through the comparison, it can be found that the proposal-consistent head cooperation mechanism can make better use of the advantages of the two network head architectures to make up for each other’s disadvantages. With this well-designed detection framework, CMD, the coupling error transmitting problem can be well addressed.

3.4. Training Loss Function

Since the multi-stage detection framework consists of several stages, each of which contains a classifier and a regressor, the loss function of the proposed framework is a hierarchical weighted sum of several losses.

3.4.1. Classification Loss

First, the category label and the category prediction of a bounding box are defined as $p^* = \{p_c^* \mid c = 0, 1, 2, \ldots, C\}$ and $p = \{p_c \mid c = 0, 1, 2, \ldots, C\}$, respectively. Second, $\mathrm{CrossEntropyLoss}_i$ represents the cross-entropy loss for one object proposal out of $I$ object proposals:
$$\mathrm{CrossEntropyLoss}_i(p^*, p) = -\sum_{c=1}^{C} p_c^* \log(p_c)$$
Finally, the classification loss function of the $k$-th stage is represented as:
$$Loss_{cls}^{k} = \frac{1}{I} \sum_{i=1}^{I} \mathrm{CrossEntropyLoss}_i$$

3.4.2. Regression Loss

First, the regression label of a bounding box is defined as $t^* = \{t_l^* \mid l = x, y, w, h\}$. The regression prediction takes two forms: $t^{rb}$ under the robust network head and $T^{ef} = \{t_c \mid c = 0, 1, 2, \ldots, C\}$ under the efficient network head. Second, $\mathrm{SmoothL1Loss}_i$ represents the regression loss, based on the $\mathrm{smooth}_{L1}$ function [32], for one object proposal out of $I$ object proposals:
$$\mathrm{SmoothL1Loss}_i = \begin{cases} \mathrm{smooth}_{L1}(t^* - t^{rb}), & \text{robust network head} \\ \sum_{c=1}^{C} \sigma_c \, \mathrm{smooth}_{L1}(t^* - t_c), & \text{efficient network head} \end{cases}$$
where $\sigma_c$ is formulated as follows:
$$\sigma_c = \begin{cases} 1, & c = \arg\max(p) \\ 0, & \text{otherwise} \end{cases}$$
Finally, the regression loss function of the $k$-th stage is represented as:
$$Loss_{reg}^{k} = \frac{1}{I} \sum_{i=1}^{I} \mathrm{SmoothL1Loss}_i$$
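A minimal sketch of the two regression-loss variants is shown below; the class scores and box predictions are assumed to be already computed, and the arg-max selection plays the role of the $\sigma_c$ indicator.

```python
import torch
import torch.nn.functional as F

def robust_reg_loss(t_star, t_rb):
    # t_star, t_rb: (N, 4) regression targets and shared (class-agnostic) predictions
    return F.smooth_l1_loss(t_rb, t_star, reduction="mean")

def efficient_reg_loss(t_star, t_ef, p):
    # t_ef: (N, C, 4) per-class predictions, p: (N, C) class scores
    picked = p.argmax(dim=1)                          # sigma_c = 1 for the arg-max class
    t_c = t_ef[torch.arange(t_ef.size(0)), picked]    # (N, 4) selected boxes
    return F.smooth_l1_loss(t_c, t_star, reduction="mean")
```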

3.4.3. CMD Loss

Let $\tau$ represent a fixed loss-weighting parameter of the regression loss for all stages, and let $\lambda_k$ represent the loss-weighting parameter of stage $k$; the final loss function is then formulated as:
$$\mathrm{CMD\_Loss} = \sum_{k} \lambda_k \left( Loss_{cls}^{k} + \tau \, Loss_{reg}^{k} \right)$$
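The weighted sum above reduces to a few lines; a minimal sketch follows, with $\lambda_k$ and $\tau$ treated as plain hyperparameters.

```python
def cmd_loss(cls_losses, reg_losses, lambdas, tau=1.0):
    """Hierarchical weighted sum of per-stage classification and regression losses."""
    return sum(lam * (cls_k + tau * reg_k)
               for lam, cls_k, reg_k in zip(lambdas, cls_losses, reg_losses))
```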

4. Results and Analysis

In this section, we sequentially introduce our experiments from four different aspects: data set description, evaluation metrics, implementation details and experimental results. Details of these parts are illustrated as follows.

4.1. Data Set Description

To evaluate our localization model, experiments are implemented on three remote sensing object detection data sets: DIOR [40], HRRSD [41] and NWPU VHR-10 [18]. Details of these data sets are introduced below.

4.1.1. DIOR

The DIOR data set is the most recently proposed remote sensing object detection data set and is a large-scale benchmark. This data set contains 23,463 optical remote sensing images. In total, 190,288 instances of 20 object classes are distributed in these images.
Some examples of DIOR are shown in Figure 5. In this data set, objects of the same class have different sizes, which can increase the difficulty of detection [40]. Moreover, object instance numbers and class numbers are abundant. Therefore, DIOR is a large-scale and hard data set.

4.1.2. HRRSD

The HRRSD data set is a balanced remote sensing object detection data set, which means that the object instances of different classes have similar quantities. Moreover, it is also a large-scale data set containing 21,761 optical remote sensing images. In total, 55,740 instances of 13 object classes are distributed in these images.
Some examples of HRRSD are shown in Figure 6. In this data set, the object numbers are balanced for different classes. This balanced data set makes the training samples of each class sufficient [41]. Moreover, this data set contains a large number of object instances. Therefore, the HRRSD is a large-scale and balanced data set.

4.1.3. NWPU VHR-10

The NWPU VHR-10 data set is a historical remote sensing object detection data set, which was proposed in 2014. This data set contains 800 optical remote sensing images. In total, 800 instances of 10 object classes are distributed in these images.
Some examples of NWPU VHR-10 are shown in Figure 7. NWPU VHR-10 is a small-scale data set, which was one of the earliest remote sensing object detection data sets.

4.2. Evaluation Metrics

In order to quantitatively validate the efficiency of the proposed method, we adopt the most-used evaluation metrics in object detection: Average Precision (AP) for each class and mean AP (mAP) for each data set. Moreover, Intersection over Union (IoU) is used for location evaluation of objects.

4.2.1. AP and mAP

To clearly explain AP, it is necessary to introduce the confusion matrix. The confusion matrix is shown in Table 1. As shown in this table, TP represents the true positive object that is correctly predicted as positive. Similarly, FP represents the false positive object that is falsely predicted as positive, FN represents the false negative object that is falsely predicted as negative and TN represents the true negative object that is correctly predicted as negative. Then formulas for precision and recall are represented as:
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
For a single object category, a Precision–Recall Curve (PRC) is drawn, with precision on the vertical axis and recall on the horizontal axis. Since both precision and recall are no larger than one, the area under the PRC is defined as the Average Precision (AP), a synthetic measurement that considers both precision and recall.
After the illustration of AP, it is easy to understand that the mean AP (mAP) is the mean value of the APs of all categories in a data set.
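A minimal sketch of AP as the area under the precision–recall curve is shown below, using the common all-points interpolation; the inputs are assumed to be precision/recall points sorted by increasing recall.

```python
import numpy as np

def average_precision(recall, precision):
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):       # make the precision envelope monotonic
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]        # recall levels where the curve steps
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# mAP is simply the mean of the per-class AP values over all categories.
```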

4.2.2. IoU

For a single predicted box, both the category and the location are used for the assessment of detection results. In this evaluation process, the location accuracy of an object is measured by IoU.
Let $Int$ and $Uni$ represent the intersection area and the union area between the predicted bounding box and the label bounding box, respectively; then the IoU is represented as
$$IoU = \frac{Int}{Uni}$$
To be noted, if a predicted bounding box does not have an intersection area with any label-bounding boxes, the predicted bounding box is viewed as background.
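A minimal sketch of IoU between two axis-aligned boxes in (x1, y1, x2, y2) format follows; non-overlapping boxes get an IoU of 0 and would be treated as background per the note above.

```python
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))     # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))     # intersection height
    inter = iw * ih                                   # Int
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter  # Uni
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))            # 0.142857...
```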

4.3. Implementation Details

In this part, the implementation details of the experiments are illustrated from four aspects: experimental environment, preprocessing, training parameters and CMD parameters.

4.3.1. Experimental Environment

The following experiments are conducted on a server with seven NVIDIA Titan X GPUs. A comprehensive toolbox named MMDetection [42] is used in our experiments, with a software environment of CUDA 10.1, CUDNN 7.6.3, gcc-4.8, g++-4.9 and PyTorch 1.2.

4.3.2. Preprocessing

The images are first resized to (1333, 800) and padded so that each side is a multiple of 32. Moreover, half of the randomly selected training examples are flipped. Finally, the images are normalized with statistical information, namely the per-channel (red, green and blue) mean values and standard deviations of the current data set.
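For reference, the preprocessing steps above roughly correspond to an MMDetection-style pipeline such as the sketch below; the mean and standard deviation values are placeholders that depend on the current data set.

```python
train_pipeline = [
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),                  # flip half of the samples
    dict(type='Normalize',
         mean=[123.675, 116.28, 103.53],                      # placeholder per-channel stats
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
    dict(type='Pad', size_divisor=32),                        # pad to a multiple of 32
]
```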

4.3.3. Training Parameters

First of all, the models are trained rather than fine-tuned. The training is implemented with a batch size of 16 for 12 epochs. In each epoch, all samples of the training set are used exactly once. Second, the stochastic gradient descent optimizer is set with a learning rate of 0.02, a weight decay of $10^{-4}$ and a momentum of 0.9. Third, the RPNs are trained on anchor samples only; the ground-truth bounding boxes themselves, which would have IoU scores of 100%, are not added as training samples. All the training samples are obtained from anchors generated on each pixel, with aspect ratios of 0.5, 1.0 and 2.0. Finally, 256 anchors are randomly selected for RPN training, and 512 RPN proposals are randomly selected for detector training.
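In MMDetection config style, the optimizer settings listed above would look roughly like the following sketch; the epoch and batch-size lines are written as plain assignments rather than toolbox-specific keys.

```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=1e-4)

total_epochs = 12    # each training sample is seen once per epoch
batch_size = 16      # images per training iteration
```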

4.3.4. CMD Parameters

Since the CMD contains multiple detectors, the training of CMD involves the IoU threshold setting of each detector. In the proposed method, the IoU threshold of each detector is set to 0.5, in accordance with most current detectors.

4.4. Experimental Results

In this part, validation experiments are first implemented to show the function of robust detectors and fine detectors. After that, the CMD-based framework is compared with other methods in the contrast experiments.

4.4.1. Validation Experiments

To validate the function of robust detector and fine detector, CMDs with different structures are investigated on three data sets. In all the validation experiments, each of the different CMDs consists of three cascaded detectors. Each detector can be a robust detector, represented as “R”, or a fine detector, represented as “F”. According to the selection and order of “R” and “F”, four different structures of CMDs are implemented: CMD-RRR, CMD-RRF, CMD-RFF and CMD-FFF.
  • DIOR Validation: As shown in Table 2, the four types of CMDs are implemented on the DIOR data set. CMD-RRR, with three robust detectors, reaches the highest mAP of 60.0%. CMD-FFF, with three fine detectors, achieves the lowest mAP of 58.4%. Moreover, CMDs with more robust detectors tend to have better performance.
  • HRRSD Validation: As shown in Table 3, the four types of CMDs are implemented on the HRRSD data set. CMD-RRF, with two robust detectors and one fine detector, reaches the highest mAP of 88.7%. CMD-RFF, with one robust detector and two fine detectors, reaches the same mAP as CMD-RRF. CMD-FFF, with three fine detectors, reaches the lowest mAP of 88.1%.
  • NWPU VHR-10 Validation: As shown in Table 4, the four types of CMDs are implemented on the NWPU VHR-10 data set. CMD-RRR, with three robust detectors, reaches the highest mAP of 82.3%. CMD-FFF, with three fine detectors, reaches the lowest mAP of 72.3%. Moreover, CMDs with more robust detectors tend to have better performance.
Table 2. Validation Experiments on DIOR (%).

| class | Sp | ST | Sd | Bg | Vc | Ap | TC | WM | Dm | GF | BF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CMD-RRR | 73.3 | 65.1 | 37.7 | 33.8 | 44.7 | 66.5 | 81.3 | 75.6 | 46.2 | 71.1 | 63.6 |
| CMD-RRF | 74.2 | 69.6 | 44.2 | 34.1 | 44.9 | 63.5 | 80.4 | 73.1 | 44.5 | 71.0 | 64.5 |
| CMD-RFF | 73.6 | 64.6 | 42.3 | 34.3 | 44.1 | 62.0 | 80.4 | 75.1 | 43.9 | 69.2 | 64.3 |
| CMD-FFF | 73.4 | 64.1 | 41.1 | 32.2 | 42.6 | 58.4 | 80.9 | 74.0 | 41.7 | 68.4 | 65.3 |

| class | BC | GTF | ESA | Hb | ETS | Op | Cn | Ar | TS | mAP |
|---|---|---|---|---|---|---|---|---|---|---|
| CMD-RRR | 85.4 | 74.6 | 52.5 | 44.6 | 45.2 | 52.5 | 75.9 | 61.6 | 49.7 | 60.0 |
| CMD-RRF | 86.1 | 72.1 | 52.5 | 45.0 | 46.9 | 52.6 | 76.3 | 58.2 | 49.0 | 59.9 |
| CMD-RFF | 86.7 | 72.5 | 51.6 | 43.4 | 46.5 | 52.8 | 75.7 | 59.7 | 48.6 | 59.6 |
| CMD-FFF | 84.9 | 69.6 | 49.0 | 40.8 | 46.7 | 50.7 | 77.0 | 61.9 | 44.8 | 58.4 |

Best performance of each class is in bold. Sp: ship, ST: storage tank, Sd: stadium, Bg: bridge, Vc: vehicle, Ap: airport, TC: tennis court, WM: windmill, Dm: dam, GF: golf field, BF: baseball field, BC: basketball court, GTF: ground track field, ESA: expressway service area, Hb: harbor, ETS: expressway toll station, Op: overpass, Cn: chimney, Ar: airplane, TS: train station.
Table 3. Validation Experiments on HRRSD (%).

| class | Sp | Bg | GTF | ST | BC | TC | Ar | BD | Hb | Vc | CR | TJ | PL | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CMD-RRR | 91.5 | 87.3 | 97.7 | 95.7 | 68.2 | 93.1 | 98.1 | 91.5 | 93.7 | 94.8 | 92.5 | 77.4 | 66.7 | 88.3 |
| CMD-RRF | 92.8 | 88.2 | 97.9 | 95.9 | 69.3 | 93.0 | 98.5 | 90.9 | 94.2 | 92.9 | 92.4 | 77.7 | 67.9 | 88.7 |
| CMD-RFF | 91.6 | 88.3 | 97.7 | 95.5 | 69.3 | 93.7 | 98.5 | 91.0 | 95.1 | 95.0 | 93.2 | 78.1 | 66.8 | 88.7 |
| CMD-FFF | 92.2 | 86.4 | 97.5 | 95.3 | 68.8 | 93.3 | 98.6 | 90.7 | 94.4 | 95.7 | 92.0 | 75.5 | 65.5 | 88.1 |

Best performance of each class is in bold. Sp: ship, Bg: bridge, GTF: ground track field, ST: storage tank, BC: basketball court, TC: tennis court, Ar: airplane, BD: baseball diamond, Hb: harbor, Vc: vehicle, CR: cross-road, TJ: T-junction, PL: parking lot.
Table 4. Validation Experiments on NWPU VHR-10 (%).

| class | Sp | Bg | GTF | ST | BC | TC | Ar | BD | Hb | Vc | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CMD-RRR | 93.3 | 42.8 | 85.3 | 95.4 | 72.4 | 80.6 | 99.4 | 96.8 | 69.7 | 87.0 | 82.3 |
| CMD-RRF | 89.9 | 24.5 | 76.0 | 96.6 | 50.8 | 79.1 | 99.2 | 96.6 | 58.4 | 85.4 | 75.7 |
| CMD-RFF | 90.9 | 37.2 | 70.1 | 96.3 | 71.5 | 77.7 | 99.6 | 97.2 | 60.3 | 86.2 | 78.7 |
| CMD-FFF | 84.2 | 19.2 | 54.3 | 95.7 | 62.2 | 75.3 | 99.1 | 96.4 | 51.4 | 85.5 | 72.3 |

Best performance of each class is in bold. Sp: ship, Bg: bridge, GTF: ground track field, ST: storage tank, BC: basketball court, TC: tennis court, Ar: airplane, BD: baseball diamond, Hb: harbor, Vc: vehicle.
According to these results, we can find two clues for the analysis of the robust detector and the fine detector. Among the four CMDs, CMD-FFF always has the worst performance on different data sets, CMD-RRF and CMD-RFF reach the best performance on HRRSD and CMD-RRR reaches the best performance on DIOR and NWPU VHR-10. Details of the analyses are illustrated as follows.
First, among the four CMDs, CMD-FFF always achieves the worst performance on different data sets. Obviously, CMD-RFF can always surpass CMD-FFF; in other words, replacing the first fine detector in CMD-FFF with a robust detector can steadily improve model performance. This phenomenon implies that the robust detector works better in the earlier stages, with relatively coarse bounding boxes as input.
Second, among the four CMDs, CMD-RRF and CMD-RFF reach the best performance on HRRSD. To be noted, the CMD-FFF mAP on HRRSD reaches a high score of nearly 90%. This means that the RPN trained on HRRSD, a balanced large-scale data set, provides proposals fine enough to use three fine detectors. Under this condition, CMD-RRF or CMD-RFF shows better performance than CMD-RRR. This phenomenon implies that the fine detector works better in the later stages, with relatively fine bounding boxes.
Third, among the four CMDs, CMD-RRR reaches the best performance on DIOR and NWPU VHR-10. To be noted, the CMD-FFF mAPs on DIOR and NWPU VHR-10 cannot surpass 75%. This means that the RPNs trained on DIOR, a hard large-scale data set, and NWPU VHR-10, a small-scale data set, provide proposals too coarse to use three fine detectors. Under this condition, CMD-RRR shows better performance than CMD-RRF or CMD-RFF. Compared with validation experiments on HRRSD, where RPN is trained to provide fine proposals, we can find that more robust detectors are preferred if coarse proposals are provided, and more fine detectors are preferred if fine proposals are provided. This conclusion can also be inferred from the performance (mAP) order of CMDs in Table 2 and Table 4: CMD-RRR > CMD-RRF > CMD-RFF > CMD-FFF.

4.4.2. Ablation Experiments

To evaluate model effectiveness, ablation experiments are conducted. In Exp. 2, the experiment without (w/o) the RFF schedule denotes a normal three-stage network with normal heads, i.e., the FFF schedule. Then, in Exp. 3 and 4, the number of cascaded detection heads is reduced from three. As shown in Table 5, the proposed method shows better performance than the ablated variants in Exp. 2 to 4. This validates the effectiveness of the proposed model and schedule.

4.4.3. Contrast Experiments

To evaluate the proposed CMD-based object detection framework, some other state-of-the-art frameworks are adopted as contrast methods, the majority of which are illustrated as follows:
  • R-CNN [31]: Region-based Convolutional Neural Network (R-CNN) is the first deep feature-based object detection framework, which consists of several stages: proposal generation, region cropping, feature extraction, classification and regression.
  • RICNN [43]: Rotation-Invariant Convolutional Neural Network (RICNN) proposed a new rotation-invariant layer with the AlexNet CNN as the backbone network.
  • FasterR-CNN_r101 [33]: Faster R-CNN is proposed on the base of R-CNN and can simultaneously improve both the speed and accuracy of detection through the optimization and integration of different parts. This framework takes the 101-layer ResNet [44] for feature extraction.
  • FasterR-CNN_r50/_r101+FPN: These are compound object detection frameworks from [42]. These frameworks take the ResNet [44] with 50 or 101 layers for feature extraction and adopt the Feature Pyramid Network (FPN) [45] to enhance the extracted framework.
  • YOLO-v3 [28]: YOLO-v3 is an incremental improvement of YOLO and a state-of-the-art method in terms of its lightweight and high-speed characteristics.
  • YOLOF [29]: The You Only Look One-level Feature analyzed the success of FPN and proposed an efficient single-level feature method.
  • Dynamic R-CNN [46]: Dynamic R-CNN designed a dynamic adjustment scheme for model parameters to achieve high-quality object detection via dynamic training.
After introducing the contrast methods, the proposed methods are represented with the form of FasterR-CNN_r{N} + FPN + CMD - {D}, where N and D are optional from the sets of {50, 101} and {RRR, RFF}, respectively. Contrast experiments between the proposed methods and contrast methods on different data sets are conducted as follows.
  • DIOR Contrast: As shown in Table 6, different methods are implemented on the DIOR data set. The proposed CMD-RRR on FasterR-CNN_r101+FPN reaches the best performance, with a mAP of 61.6%. The proposed CMD-RRR improves the mAP of FasterR-CNN_r50+FPN by 1.9% and of FasterR-CNN_r101+FPN by 1.1%. The proposed CMD-RFF improves the mAP of FasterR-CNN_r50+FPN by 1.5% and of FasterR-CNN_r101+FPN by 0.8%. Apparently, CMD-RRR shows better performance than CMD-RFF.
  • HRRSD Contrast: As shown in Table 7, different methods are implemented on the HRRSD data set. The proposed CMD-RFF on FasterR-CNN_r101+FPN reaches the second-best performance, with a mAP of 90.3%. The proposed CMD-RRR improves the mAP of FasterR-CNN_r50+FPN by 2.3% and of FasterR-CNN_r101+FPN by −1.6%. The proposed CMD-RFF improves the mAP of FasterR-CNN_r50+FPN by 2.7% and of FasterR-CNN_r101+FPN by −0.9%. Apparently, CMD-RFF shows better performance than CMD-RRR.
  • NWPU VHR-10 Contrast: As shown in Table 8, different methods are implemented on the NWPU VHR-10 data set. The proposed CMD-RRR on FasterR-CNN_r101+FPN reaches the best performance, with a mAP of 87.4%. The proposed CMD-RRR improves the mAP of FasterR-CNN_r50+FPN by 12.3% and of FasterR-CNN_r101+FPN by 4.9%. The proposed CMD-RFF improves the mAP of FasterR-CNN_r50+FPN by 8.7% and of FasterR-CNN_r101+FPN by 3.0%. Apparently, CMD-RRR shows better performance than CMD-RFF.
Table 6. Contrast Experiments on DIOR (%).

| class | Sp | ST | Sd | Bg | Vc | Ap | TC | WM | Dm | GF | BF |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R-CNN [31] | 9.1 | 18.0 | 60.8 | 15.6 | 9.1 | 43.0 | 54.0 | 16.4 | 33.7 | 50.1 | 53.8 |
| RICNN [43] | 9.1 | 19.1 | 61.1 | 25.3 | 11.4 | 61.0 | 63.5 | 31.5 | 41.1 | 55.9 | 60.1 |
| FasterR-CNN [33] | 27.7 | 39.8 | 73.0 | 28.0 | 23.6 | 49.3 | 75.2 | 45.4 | 62.3 | 68.0 | 78.8 |
| RIFD-CNN [47] | 31.7 | 41.5 | 73.6 | 29.0 | 28.5 | 53.2 | 79.5 | 46.9 | 63.1 | 68.9 | 79.9 |
| SSD [30] | 59.2 | 46.6 | 61.0 | 29.7 | 27.4 | 72.7 | 76.3 | 65.7 | 56.6 | 65.3 | 72.4 |
| YOLO-v3 [28] | 87.4 | 68.7 | 70.6 | 31.2 | 48.3 | 29.2 | 87.3 | 78.7 | 26.9 | 31.1 | 74.0 |
| FasterR-CNN_r50+FPN [42] | 73.3 | 63.2 | 47.8 | 32.0 | 43.2 | 54.2 | 80.5 | 74.4 | 37.1 | 65.6 | 66.0 |
| FasterR-CNN_r50+FPN+CMD-RRR | 73.3 | 65.1 | 37.7 | 33.8 | 44.7 | 66.5 | 81.3 | 75.6 | 46.2 | 71.1 | 63.6 |
| FasterR-CNN_r50+FPN+CMD-RFF | 73.6 | 64.6 | 42.3 | 34.3 | 44.1 | 62.0 | 80.4 | 75.1 | 43.9 | 69.2 | 64.3 |
| FasterR-CNN_r101+FPN [42] | 72.7 | 61.6 | 51.6 | 34.9 | 43.0 | 61.5 | 80.0 | 75.4 | 48.9 | 71.0 | 64.7 |
| FasterR-CNN_r101+FPN+CMD-RRR | 72.9 | 63.3 | 46.8 | 35.9 | 44.3 | 70.5 | 81.4 | 74.4 | 52.2 | 73.5 | 64.5 |
| FasterR-CNN_r101+FPN+CMD-RFF | 72.9 | 62.4 | 41.5 | 37.1 | 44.2 | 67.4 | 80.3 | 75.6 | 52.7 | 74.4 | 64.3 |

| class | BC | GTF | ESA | Hb | ETS | Op | Cn | Ar | TS | mAP |
|---|---|---|---|---|---|---|---|---|---|---|
| R-CNN [31] | 62.3 | 49.3 | 50.2 | 39.5 | 33.5 | 30.9 | 53.7 | 35.6 | 36.1 | 37.7 |
| RICNN [43] | 66.3 | 58.9 | 51.7 | 43.5 | 36.6 | 39.0 | 63.3 | 39.1 | 46.1 | 44.2 |
| FasterR-CNN [33] | 66.2 | 56.9 | 69.0 | 50.2 | 55.2 | 50.1 | 70.9 | 53.6 | 38.6 | 54.1 |
| RIFD-CNN [47] | 69.0 | 62.4 | 69.0 | 51.2 | 56.0 | 51.1 | 71.5 | 56.6 | 40.1 | 56.1 |
| SSD [30] | 75.7 | 68.6 | 63.5 | 49.4 | 53.1 | 48.1 | 65.8 | 59.5 | 55.1 | 58.6 |
| YOLO-v3 [28] | 78.6 | 61.1 | 48.6 | 44.9 | 54.4 | 49.7 | 69.7 | 72.2 | 29.4 | 57.1 |
| FasterR-CNN_r50+FPN [42] | 85.5 | 70.8 | 49.6 | 38.0 | 46.5 | 49.9 | 76.9 | 62.6 | 44.2 | 58.1 |
| FasterR-CNN_r50+FPN+CMD-RRR | 85.4 | 74.6 | 52.5 | 44.6 | 45.2 | 52.5 | 75.9 | 61.6 | 49.7 | 60.0 |
| FasterR-CNN_r50+FPN+CMD-RFF | 86.7 | 72.5 | 51.6 | 43.4 | 46.5 | 52.8 | 75.7 | 59.7 | 48.6 | 59.6 |
| FasterR-CNN_r101+FPN [42] | 86.0 | 72.0 | 53.6 | 42.8 | 50.0 | 52.5 | 77.8 | 58.0 | 52.0 | 60.5 |
| FasterR-CNN_r101+FPN+CMD-RRR | 85.3 | 73.7 | 55.0 | 46.2 | 48.8 | 54.3 | 76.7 | 55.4 | 57.5 | 61.6 |
| FasterR-CNN_r101+FPN+CMD-RFF | 86.0 | 71.7 | 54.3 | 47.0 | 51.5 | 55.0 | 77.1 | 56.2 | 53.6 | 61.3 |

Best performance of each class is in bold. Sp: ship, ST: storage tank, Sd: stadium, Bg: bridge, Vc: vehicle, Ap: airport, TC: tennis court, WM: windmill, Dm: dam, GF: golf field, BF: baseball field, BC: basketball court, GTF: ground track field, ESA: expressway service area, Hb: harbor, ETS: expressway toll station, Op: overpass, Cn: chimney, Ar: airplane, TS: train station.
Table 7. Contrast Experiments on HRRSD (%).

| class | Sp | Bg | GTF | ST | BC | TC | Ar |
|---|---|---|---|---|---|---|---|
| R-CNN [31] | 49.7 | 20.0 | 76.3 | 79.1 | 18.0 | 70.8 | 77.5 |
| RICNN [43] | 56.5 | 27.4 | 78.0 | 81.0 | 23.0 | 66.4 | 78.1 |
| FasterR-CNN [33] | 88.5 | 85.5 | 90.6 | 88.7 | 47.9 | 80.7 | 90.8 |
| YOLO-v3 [28] | 83.7 | 88.1 | 96.1 | 92.9 | 53.5 | 87.1 | 96.7 |
| YOLOF [29] | 90.1 | 89.2 | 96.7 | 90.6 | 61.0 | 89.9 | 97.3 |
| Dynamic R-CNN [46] | 88.9 | 89.2 | 97.2 | 93.4 | 69.0 | 92.7 | 96.9 |
| FasterR-CNN_r50+FPN [42] | 89.8 | 86.2 | 97.3 | 94.5 | 60.4 | 89.1 | 98.7 |
| FasterR-CNN_r50+FPN+CMD-RRR | 91.5 | 87.3 | 97.7 | 95.7 | 68.2 | 93.1 | 98.1 |
| FasterR-CNN_r50+FPN+CMD-RFF | 91.6 | 88.3 | 97.7 | 95.5 | 69.3 | 93.7 | 98.5 |
| FasterR-CNN_r101+FPN [42] | 94.4 | 89.6 | 97.9 | 96.5 | 77.6 | 96.4 | 97.8 |
| FasterR-CNN_r101+FPN+CMD-RRR | 92.9 | 89.8 | 98.4 | 95.6 | 70.6 | 93.1 | 98.4 |
| FasterR-CNN_r101+FPN+CMD-RFF | 93.2 | 90.4 | 98.4 | 95.9 | 71.6 | 93.7 | 98.1 |

| class | BD | Hb | Vc | CR | TJ | PL | mAP |
|---|---|---|---|---|---|---|---|
| R-CNN [31] | 57.6 | 54.1 | 41.3 | 25.9 | 2.4 | 16.6 | 45.3 |
| RICNN [43] | 59.6 | 47.8 | 52.0 | 26.6 | 9.3 | 20.5 | 48.2 |
| FasterR-CNN [33] | 86.9 | 89.4 | 84.0 | 88.6 | 75.1 | 63.3 | 81.5 |
| YOLO-v3 [28] | 89.1 | 95.0 | 81.1 | 91.5 | 75.0 | 53.1 | 83.3 |
| YOLOF [29] | 91.8 | 95.0 | 87.0 | 93.2 | 79.7 | 63.8 | 86.6 |
| Dynamic R-CNN [46] | 93.1 | 95.4 | 91.4 | 92.7 | 76.6 | 64.0 | 87.7 |
| FasterR-CNN_r50+FPN [42] | 90.4 | 94.3 | 92.6 | 90.2 | 71.9 | 63.0 | 86.0 |
| FasterR-CNN_r50+FPN+CMD-RRR | 91.5 | 93.7 | 94.8 | 92.5 | 77.4 | 66.7 | 88.3 |
| FasterR-CNN_r50+FPN+CMD-RFF | 91.0 | 95.1 | 95.0 | 93.2 | 78.1 | 66.8 | 88.7 |
| FasterR-CNN_r101+FPN [42] | 93.8 | 96.0 | 96.1 | 92.2 | 84.0 | 72.8 | 91.2 |
| FasterR-CNN_r101+FPN+CMD-RRR | 91.8 | 94.3 | 95.3 | 93.1 | 81.1 | 70.3 | 89.6 |
| FasterR-CNN_r101+FPN+CMD-RFF | 93.3 | 95.8 | 95.6 | 93.5 | 83.2 | 71.5 | 90.3 |

Best performance of each class is in bold. Sp: ship, Bg: bridge, GTF: ground track field, ST: storage tank, BC: basketball court, TC: tennis court, Ar: airplane, BD: baseball diamond, Hb: harbor, Vc: vehicle, CR: cross-road, TJ: T-junction, PL: parking lot.
Table 8. Contrast Experiments on NWPU (%).

| class | Sp | Bg | GTF | ST | BC | TC | Ar | BD | Hb | Vc | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| R-CNN [31] | 63.7 | 45.4 | 81.2 | 84.3 | 46.8 | 35.5 | 70.1 | 83.6 | 62.3 | 44.8 | 61.8 |
| RICNN [43] | 77.3 | 61.5 | 86.7 | 85.3 | 58.5 | 40.8 | 88.4 | 88.1 | 68.6 | 71.1 | 72.6 |
| FasterR-CNN [33] | 89.9 | 80.9 | 100.0 | 67.3 | 87.5 | 78.6 | 90.7 | 89.2 | 89.8 | 88.0 | 86.2 |
| OneshotDet-r50 [48] | 90.4 | 53.1 | 97.0 | 96.9 | 41.3 | 9.5 | 7.3 | 2.9 | 74.8 | 56.0 | 52.92 |
| CoAE-r50 [49] | 87.0 | 79.23 | 100.0 | 88.2 | 75.2 | 4.1 | 9.1 | 19.47 | 94.5 | 77.86 | 63.47 |
| SCoDANet-r50 [50] | 90.0 | 86.2 | 99.8 | 90.8 | 79.7 | 6.75 | 18.5 | 16.81 | 87.1 | 77.66 | 65.33 |
| FasterR-CNN_r50+FPN [42] | 84.3 | 23.6 | 50.1 | 96.5 | 61.5 | 73.3 | 97.2 | 95.7 | 37.3 | 83.4 | 70.3 |
| FasterR-CNN_r50+FPN+CMD-RRR | 93.3 | 42.8 | 85.3 | 95.4 | 72.4 | 80.6 | 99.4 | 96.8 | 69.7 | 87.0 | 82.3 |
| FasterR-CNN_r50+FPN+CMD-RFF | 90.9 | 37.2 | 70.1 | 96.3 | 71.5 | 77.7 | 99.6 | 97.2 | 60.3 | 86.2 | 78.7 |
| FasterR-CNN_r101+FPN [42] | 91.4 | 45.4 | 79.1 | 96.1 | 76.0 | 76.1 | 99.7 | 97.5 | 76.0 | 87.3 | 82.5 |
| FasterR-CNN_r101+FPN+CMD-RRR | 94.0 | 53.1 | 92.8 | 96.8 | 80.5 | 81.9 | 99.9 | 97.8 | 85.9 | 91.1 | 87.4 |
| FasterR-CNN_r101+FPN+CMD-RFF | 92.1 | 56.6 | 83.8 | 97.3 | 80.0 | 80.2 | 99.7 | 97.5 | 79.3 | 88.5 | 85.5 |

Best performance of each class is in bold. Sp: ship, Bg: bridge, GTF: ground track field, ST: storage tank, BC: basketball court, TC: tennis court, Ar: airplane, BD: baseball diamond, Hb: harbor, Vc: vehicle.
According to these experiments, it is found that the proposed method can steadily improve the detection performance on different models and most data sets. Moreover, the proposed method reaches state-of-the-art performance on different data sets. Therefore, the proposed method is an effective method for the remote sensing object detection task.

4.5. Failure Case Analysis

As shown in Table 2 and Table 3, the proposed schedules are not always effective. In other words, the FFF schedule sometimes reaches the best performance compared with the other schedules. To be specific, failure cases include the categories {baseball field, chimney, airplane, train station} in DIOR and {airplane, harbor, vehicle} in HRRSD. The robust head "R" handles low-quality proposals more easily, but its accuracy is lower when the proposals are more precise. The input proposals for the failure-case categories are generally of better quality, so the accuracy after using the robust head is lower for these categories. More studies will be conducted in the future.

5. Conclusions

In this article, novel network detector schedules on multi-stage convolutional neural network frameworks are built for object detection in remote sensing images. First, the robust detector and the fine detector are carefully defined to build the CMDs. Then, different network detector schedules are investigated to give suggestions for using robust and fine detectors. The conclusions of this investigation can be easily migrated to various other models. Experiments with different baselines on different data sets are conducted to show that the robust detector and the fine detector are appropriate for coarse and fine proposals, respectively. Finally, the proposed frameworks are compared with several state-of-the-art methods. These abundant and convincing experiments have shown the following: First, the robust detector and the fine detector are, respectively, appropriate for coarse proposals and fine proposals. Second, the order and the numbers of the two types of detectors should be designed in accordance with the coarse–fine degree of the input proposals of different stages. Finally, the CMD-based framework can steadily improve model performance with various backbone networks on different data sets.

Author Contributions

Conceptualization, Y.Z.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z. and H.J.; formal analysis, Y.Z.; investigation, Y.Z.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z. and H.J.; visualization, Y.Z.; supervision, Y.Z.; project administration, Y.Z.; funding acquisition, Y.Z. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grants 62201472 and 62272383.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, H.; Zheng, W.; Liu, F.; Li, P.; Wang, R. Unmanned Aerial Vehicle Perspective Small Target Recognition Algorithm Based on Improved YOLOv5. Remote Sens. 2023, 15, 3583. [Google Scholar] [CrossRef]
  2. Körez, A.; Barışçı, N.; Çetin, A.; Ergün, U. Weighted ensemble object detection with optimized coefficients for remote sensing images. ISPRS Int. J. Geo-Inf. 2020, 9, 370. [Google Scholar] [CrossRef]
  3. Tang, J.; Deng, C.; Huang, G.B.; Zhao, B. Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine. IEEE Trans. Geosci. Remote Sens. (TGRS) 2014, 53, 1174–1185. [Google Scholar] [CrossRef]
  4. Chen, F.; Ren, R.; Van de Voorde, T.; Xu, W.; Zhou, G.; Zhou, Y. Fast automatic airport detection in remote sensing images using convolutional neural networks. Remote Sens. 2018, 10, 443. [Google Scholar] [CrossRef]
  5. Grabner, H.; Nguyen, T.T.; Gruber, B.; Bischof, H. On-line boosting-based car detection from aerial images. ISPRS J. Photogramm. Remote Sens. (P&RS) 2008, 63, 382–396. [Google Scholar]
  6. Keuper, M.; Tang, S.; Andres, B.; Brox, T.; Schiele, B. Motion segmentation & multiple object tracking by correlation co-clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 140–153. [Google Scholar]
  7. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Fei-Fei, L. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
  8. Lu, J.; Yang, J.; Batra, D.; Parikh, D. Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  9. Li, J.; Liang, X.; Li, J.; Wei, Y.; Xu, T.; Feng, J.; Yan, S. Multistage object detection with group recursive learning. IEEE Trans. Multimed. 2017, 20, 1645–1655. [Google Scholar] [CrossRef]
  10. Cai, Z.; Vasconcelos, N. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 6154–6162. [Google Scholar]
  11. Yuan, Y.; Zhang, Y. OLCN: An optimized low coupling network for small objects detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  12. Liu, E.; Zheng, Y.; Pan, B.; Xu, X.; Shi, Z. DCL-Net: Augmenting the Capability of Classification and Localization for Remote Sensing Object Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7933–7944. [Google Scholar] [CrossRef]
  13. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  14. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. (TGRS) 2011, 49, 4516–4527. [Google Scholar] [CrossRef]
  15. Li, Q.; Wang, G.; Liu, J.; Chen, S. Robust scale-invariant feature matching for remote sensing image registration. IEEE Geosci. Remote Sens. Lett. (GRSL) 2009, 6, 287–291. [Google Scholar]
  16. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
  17. Tuermer, S.; Kurz, F.; Reinartz, P.; Stilla, U. Airborne vehicle detection in dense urban areas using HoG features and disparity maps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. (J-STARS) 2013, 6, 2327–2337. [Google Scholar] [CrossRef]
  18. Cheng, G.; Han, J.; Zhou, P.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS J. Photogramm. Remote Sens. (P&RS) 2014, 98, 119–132. [Google Scholar]
Figure 1. Differences among deep learning object detection frameworks. (a) Faster R-CNN, representative of mainstream deep learning object detection. (b) Cascade R-CNN, representative of multi-stage object detection. (c) CMD Net, the method proposed in this article.
Figure 2. Illustration of the CMD framework. The ellipses represent different parts of the framework. The ellipses named “Robust” and “Efficient” represent the (proposed) robust network head and the (traditional) efficient network head, respectively.
Figure 3. Contrast between the traditional efficient network head and the proposed robust network head. (a) The traditional efficient network head, whose class-specific regressors are selected by the classification prediction. (b) The proposed robust network head, whose single regressor is independent of the classifier and is shared by all categories. With this coupling method, the robust network head avoids propagating the classifier's error to the regressor.
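To make the contrast in Figure 3 concrete, the following is a minimal PyTorch-style sketch of the two coupling methods. The class names, feature dimension, and layer layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EfficientHead(nn.Module):
    """Traditional head (Figure 3a): one box regressor per class.
    The regression output is selected by the predicted class, so a
    classification error propagates to the box regression."""
    def __init__(self, in_dim=1024, num_classes=20):
        super().__init__()
        self.cls_fc = nn.Linear(in_dim, num_classes + 1)   # +1 for background (assumed convention)
        self.reg_fc = nn.Linear(in_dim, 4 * num_classes)   # class-specific box deltas

    def forward(self, x):
        scores = self.cls_fc(x)                              # (N, C+1)
        deltas = self.reg_fc(x).view(x.size(0), -1, 4)       # (N, C, 4)
        cls_id = scores[:, :-1].argmax(dim=1)                # predicted foreground class
        picked = deltas[torch.arange(x.size(0)), cls_id]     # coupling: box chosen by the class prediction
        return scores, picked

class RobustHead(nn.Module):
    """Proposed head (Figure 3b): a single class-agnostic regressor,
    independent of the classifier and shared by all categories."""
    def __init__(self, in_dim=1024, num_classes=20):
        super().__init__()
        self.cls_fc = nn.Linear(in_dim, num_classes + 1)
        self.reg_fc = nn.Linear(in_dim, 4)                   # one set of box deltas for any class

    def forward(self, x):
        return self.cls_fc(x), self.reg_fc(x)                # no class-dependent selection
```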
Figure 4. Illustration of the CMD framework. (a) The traditional multi-stage detection framework. (b) The consistent multi-stage detection framework CMD. The ellipses represent different parts of the framework. The ellipses named “Robust” and “Efficient” represent the (proposed) robust network head and the (traditional) efficient network head, respectively.
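The sketch below illustrates only the generic multi-stage refinement loop of Figure 4, in which each stage's head refines the boxes produced by the previous stage. The roi_pool and decode helpers are placeholders, and the proposal-consistent cooperation between the robust and efficient heads of CMD is intentionally not reproduced here.

```python
def cascade_detect(features, proposals, heads, roi_pool, decode):
    """Generic multi-stage refinement: each stage pools features inside the
    current boxes, runs one network head, and passes the refined boxes on to
    the next stage. `heads` may mix robust and efficient heads."""
    boxes, scores = proposals, None
    for head in heads:
        roi_feats = roi_pool(features, boxes)   # sample features for the current boxes
        scores, deltas = head(roi_feats)        # classification + box regression
        boxes = decode(boxes, deltas)           # refined boxes feed the next stage
    return scores, boxes
```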
Figure 5. Examples of the DIOR data set.
Figure 6. Examples of the HRRSD data set.
Figure 7. Examples of the NWPU VHR-10 data set.
Table 1. Confusion matrix.

                      Label Positive    Label Negative
Predicted Positive    TP                FP
Predicted Negative    FN                TN
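For reference, the snippet below shows how the entries of Table 1 are commonly turned into precision and recall; the counts in the example are made up for illustration.

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example with made-up counts:
p, r = precision_recall(tp=80, fp=20, fn=10)   # p = 0.80, r ≈ 0.889
```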
Table 5. Ablation Experiments on HRRSD (%).

class                          Exp.  Sp    Bg    GTF   ST    BC    TC    Ar
FasterR-CNN_r50+FPN+CMD-RFF    1     91.6  88.3  97.7  95.5  69.3  93.7  98.5
w/o RFF schedule               2     92.2  86.4  97.5  95.3  68.8  93.3  98.6
w/o stage3                     3     89.1  87.4  97.0  93.2  67.3  92.2  97.8
w/o stage2&3                   4     87.2  85.4  96.4  93.9  64.8  91.9  97.6

class                          Exp.  BD    Hb    Vc    CR    TJ    PL    mAP
FasterR-CNN_r50+FPN+CMD-RFF    1     91.0  95.1  95.0  93.2  78.1  66.8  88.7
w/o RFF schedule               2     90.7  94.4  95.7  92.0  75.5  65.5  88.1
w/o stage3                     3     91.9  95.1  91.1  92.0  76.8  63.6  87.3
w/o stage2&3                   4     91.7  93.0  90.7  90.8  75.0  60.4  86.1

Best performance of each class is in bold. Sp: ship, Bg: bridge, GTF: ground track field, ST: storage tank, BC: basketball court, TC: tennis court, Ar: airplane, BD: baseball diamond, Hb: harbor, Vc: vehicle, CR: cross-road, TJ: T-junction, PL: parking lot.
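As a rough check of the mAP column, averaging the thirteen rounded per-class values of experiment 1 gives about 88.75%, consistent with the reported 88.7% (the per-class entries are themselves rounded, so the last digit need not match exactly):

```python
# Per-class AP (%) for experiment 1 in Table 5, in the order of the footnote.
ap_exp1 = [91.6, 88.3, 97.7, 95.5, 69.3, 93.7, 98.5,   # Sp, Bg, GTF, ST, BC, TC, Ar
           91.0, 95.1, 95.0, 93.2, 78.1, 66.8]          # BD, Hb, Vc, CR, TJ, PL

mAP = sum(ap_exp1) / len(ap_exp1)
print(f"mAP ≈ {mAP:.2f}")   # ≈ 88.75; reported as 88.7 in Table 5
```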