A Novel Cargo Ship Detection and Directional Discrimination Method for Remote Sensing Image Based on Lightweight Network

Abstract: Recently, cargo ship detection in remote sensing images based on deep learning has become of great significance for cargo ship monitoring. However, existing detection networks not only cannot run autonomously on spaceborne platforms because of computing and storage limitations, but their detection results also lack the directional information of the cargo ship. To address these problems, we propose a novel cargo ship detection and directional discrimination method for remote sensing images based on a lightweight network. Specifically, we design an efficient and lightweight feature extraction network called the one-shot aggregation and depthwise separable network (OSADSNet), which is inspired by one-shot feature aggregation modules and depthwise separable convolutions. Additionally, we combine the RPN with the K-Means++ algorithm to obtain the K-RPN, which produces region proposals better suited to cargo ship detection. Furthermore, without introducing extra parameters, the directional discrimination of the cargo ship is transformed into a classification task and is completed together with the detection task. Experiments on a self-built remote sensing image cargo ship dataset indicate that our model provides relatively accurate and fast detection of cargo ships (mAP of 91.96% and prediction time of 46 ms per image) and discriminates the directions (north, east, south, and west) of cargo ships with fewer parameters (model size of 110 MB), which makes it more suitable for autonomous operation on spaceborne platforms. Therefore, the proposed method can meet the needs of cargo ship detection and directional discrimination in remote sensing images on spaceborne platforms.


Introduction
With the progress of science and technology and the development of world trade, economic globalization has become the trend of world economic development. Trade links between countries are becoming ever closer, and maritime transportation has become the main mode of transportation in foreign trade because of its large capacity and low cost. Cargo ships are the main means of sea transportation. However, with the diversification of cargo ship types, the growing size and speed of cargo ships, and the increase in the number of ports, the navigation environment of cargo ships is becoming more and more complicated, causing cargo ships to deviate from their planned courses. This leads to frequent problems such as channel blockage, cargo ship collisions, and increased navigation costs. Therefore, grasping the navigation information of cargo ships in time is of great significance for improving the navigation environment, ensuring safe navigation, improving navigation efficiency, and shortening navigation time [1][2][3][4].

In recent years, with the continuous development of satellite remote sensing technology and the emergence of a large number of remote sensing images, remote sensing has offered wide coverage of the vast ocean, high spatial resolution, fast update speed, and low cost, which makes the use of remote sensing images for real-time monitoring of ships at sea important [5,6]. However, in the face of massive remote sensing images, relying on manual interpretation alone to obtain cargo ship information can no longer meet the needs of modern society because of the huge workload and low efficiency. Therefore, obtaining cargo ship target information quickly and accurately from massive remote sensing images has become an urgent problem to be solved.
Traditional ship detection algorithms for remote sensing images rely heavily on manually designed features, which require designers to master relevant professional knowledge. They are difficult to scale to the rapid processing of massive amounts of remote sensing data, and their detection accuracy for cargo ships under complex backgrounds is low [7][8][9]. Compared with traditional object detection algorithms, algorithms based on deep learning can extract object features autonomously, avoiding the complexity of manual feature design, and the extracted features are more robust. Therefore, deep learning makes it possible to intellectualize cargo ship detection in remote sensing images.
Recently, with the substantial improvement of computer hardware performance and the emergence of large-scale training samples, deep learning techniques represented by convolutional neural networks have shown strong performance in object detection applications [10][11][12]. At present, object detection algorithms based on convolutional neural networks can be divided into two-stage and one-stage object detection algorithms according to whether region proposals are generated. Two-stage object detection algorithms, such as R-CNN [13], fast-RCNN [14], and faster-RCNN [15], first extract region proposals from the feature map where objects may exist, then classify and locate these regions to obtain the detection results. This type of method achieves high detection accuracy but cannot meet real-time requirements. One-stage object detection algorithms, such as YOLO [16][17][18][19] and SSD [20], use regression to establish the object detection framework, removing the region proposal generation step. This type of method has fast detection speed, but its detection accuracy is lower.
In view of the excellent performance of deep learning in computer vision, deep learning has been widely applied to ship detection in remote sensing images [21][22][23]. However, remote sensing images pose problems such as diversified ship sizes and complex backgrounds, which make ship detection difficult. Many scholars have put forward solutions to these problems. To efficiently detect ships of various scales, Li et al. [24] proposed a hierarchical selective filtering layer based on the faster-RCNN algorithm to map features of different scales into the same scale space. Although this method can simultaneously detect inshore and offshore ships ranging from dozens of pixels to thousands of pixels, the detection results lack the directional information of the cargo ships. Aiming at the high false alarm ratio and the great influence of the sea surface in traditional ship detection methods, Zhang et al. [25] adopted the idea of deep networks and proposed a fast regional-based convolutional neural network (R-CNN) method to detect ships from high-resolution remote sensing imagery. However, this method contains a complicated pre-processing stage, which cannot meet the requirements of real-time detection. Wang et al. [26] proposed a fast-RCNN method based on an adversary strategy, which adopted the adversarial spatial transformer network (ASTN) module to improve the classification ability for ships in remote sensing images under complex backgrounds. However, it is hard to realize autonomous operation on spaceborne platforms. Although automatic identification system (AIS) data can provide some information about a cargo ship, the information is lost if the AIS is not kept in normal operation or fails, and some cargo ships are not even equipped with AIS.
Remote sensing satellites, however, do not suffer from this problem, so remote sensing satellite imagery has become an important data source for obtaining cargo ship information.
In the face of massive satellite remote sensing images, the transmission bandwidth from the satellite to ground is limited. Therefore, the real-time intelligent processing of remote sensing images on spaceborne platforms must be the development trend in the future. However, due to the limitation of computing and storage on the spaceborne platform, complex and huge object detection models have not been widely used. In particular, although the two-stage target detection algorithm has high accuracy, it has high model complexity and low detection efficiency, resulting in poor availability of the algorithm on the spaceborne platform.
The category and position information of cargo ships can be obtained by real-time detection of cargo ships in remote sensing images. However, due to the unique perspective of remote sensing images, the remote sensing images also contain rich directional information of cargo ship targets. If the directional information of cargo ship targets can be obtained at the same time, it not only can ensure the safe navigation of cargo ships but can also help cargo ships navigate accurately along the route and reduce navigation costs.
In this paper, cargo ships in remote sensing images are taken as the research objects, and the faster-RCNN algorithm is used as the basis to propose a novel cargo ship detection and directional discrimination method for remote sensing images based on a lightweight network. This model not only can accurately detect the category and position of cargo ship targets in remote sensing images in real time but can also discriminate the direction of cargo ships. Additionally, its lightweight design improves its availability on spaceborne platforms. The rest of the paper is organized as follows: Section 2 introduces the proposed method, Section 3 describes the experimental dataset and the experimental results, and Section 4 summarizes the whole paper and puts forward some suggestions for future work.

Faster-RCNN Network
Considering the application needs of cargo ship detection and direction recognition in remote sensing images, this paper adopts the faster-RCNN model, which balances detection accuracy and detection speed well, as the algorithm prototype. As shown in Figure 1, faster-RCNN is composed of three parts: the feature extraction network, the region proposal network (RPN), and the region-based convolutional neural network (R-CNN). The feature extraction network uses convolution to extract features from the images, and the extracted features are shared by the following RPN and R-CNN. The RPN is a fully convolutional neural network used to generate region proposals for potential cargo ships in the images, and the following ROI pooling layer is used to unify region proposals of different sizes. The R-CNN performs coordinate regression and classification on the extracted regions of interest to realize the detection of the cargo ships.

Proposed Model

Model Overview
The model of cargo ship detection and directional discrimination in remote sensing images based on a lightweight network proposed in this paper is shown in Figure 2. In this model, first, the one-shot aggregation module [27] and depthwise separable convolution [28] are used to build an efficient and lightweight feature extraction network, namely the one-shot aggregation and depthwise separable network (OSADSNet). Second, the K-Means++ clustering algorithm is introduced into the RPN to generate high-quality candidate boxes. Finally, without introducing extra parameters, the direction recognition problem of the cargo ship in remote sensing images is converted into a classification problem, and the directional discrimination is completed while the detection task is accomplished.



Feature Extraction Network
The feature extraction network is an important module in the object detection model, and its performance directly influences the performance of the whole model. In this paper, we propose a novel and efficient feature extraction network, the one-shot aggregation and depthwise separable network (OSADSNet), to meet the specific needs of cargo ship detection in remote sensing images. The network draws on the idea of the one-shot aggregation module (Figure 3). Each convolutional layer has two connections: one connects directly to the next layer to acquire a larger receptive field, and the other connects to the last layer to aggregate features. Compared with the residual module [29], the network integrates more low-level features, and the connection is not a point-to-point addition of feature maps but concatenation along channels, which reduces the number of parameters between layers. Compared with the dense connection module [30], the connections are optimized: all features are aggregated only before the last layer, which realizes efficient aggregation of different convolutional layers, avoids feature redundancy, and runs faster. At the same time, the network replaces standard convolution (Figure 4a) with depthwise separable convolution (Figure 4b), which reduces the model parameters while still extracting features effectively, finally realizing a lightweight model. Table 1 shows the implementation details of OSADSNet.
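The lightweighting effect of depthwise separable convolution can be sanity-checked by counting weights: a standard k × k convolution needs k·k·C_in·C_out parameters, while its depthwise separable counterpart needs only k·k·C_in (depthwise) plus C_in·C_out (pointwise). The function names below are illustrative, not from the paper:

```python
def conv_params(k, c_in, c_out):
    # Standard k x k convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # Depthwise separable convolution: one k x k depthwise filter per input
    # channel, followed by a 1 x 1 pointwise convolution across channels.
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example: a 3 x 3 convolution with 256 input and 256 output channels.
standard = conv_params(3, 256, 256)      # 589,824 weights
separable = ds_conv_params(3, 256, 256)  # 67,840 weights
print(standard, separable, round(standard / separable, 1))  # roughly 8.7x fewer
```

For the 3 × 3, 256-channel case typical of a backbone stage, the separable variant uses close to nine times fewer parameters, which is the main source of the model-size reduction.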

K-RPN
The faster-RCNN uses a region proposal network (RPN) to effectively generate high-quality region proposals, each of which corresponds to the probability and position information of a cargo ship. Meanwhile, the network shares the feature maps with the feature extraction network to shorten the computing time.
In order to locate objects of different sizes effectively, anchors of different scales are used in the RPN, but the original anchors are designed manually without prior information about cargo ship sizes in remote sensing images. Therefore, in this paper, the K-Means++ algorithm [31] is used to cluster the bounding boxes of cargo ships in the dataset to obtain anchors suitable for cargo ships. The structure of K-RPN is shown in Figure 5.
In the process of clustering, a Euclidean distance metric leads to larger errors for larger bounding boxes than for smaller ones, resulting in a larger error in the intersection over union (IOU) between the anchors and the bounding boxes. In order to make the IOU of the anchors and the bounding boxes larger and obtain better anchors, the IOU between the bounding boxes and the cluster-center bounding boxes is used as the distance function of the K-Means++ algorithm:

d(box, centroid) = 1 − IOU(box, centroid)

where centroid represents the bounding box of a cluster center, box represents the bounding box of a cargo ship, IOU represents the intersection over union of the two, and d represents the distance between them.
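A minimal pure-Python sketch of this anchor clustering under the 1 − IOU distance is given below. It treats boxes as (width, height) pairs aligned at a common corner and uses plain random initialization rather than the K-Means++ seeding the paper adopts; all names are illustrative:

```python
import random

def iou_wh(box, centroid):
    # IOU of two (w, h) boxes aligned at a common corner, as in anchor clustering.
    w = min(box[0], centroid[0])
    h = min(box[1], centroid[1])
    inter = w * h
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    # K-Means on (w, h) pairs with d = 1 - IOU as the distance function.
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # Assign each box to the centroid with the smallest 1 - IOU distance.
            i = min(range(k), key=lambda j: 1 - iou_wh(b, centroids[j]))
            clusters[i].append(b)
        # Recompute each centroid as the mean (w, h) of its cluster.
        centroids = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centroids)

boxes = [(10, 10), (12, 8), (9, 11), (100, 100), (110, 90), (95, 105)]
print(kmeans_anchors(boxes, 2))  # one small centroid and one large centroid
```

Because the distance ignores absolute scale differences less than Euclidean distance does, large and small ships contribute comparably to the clustering, yielding anchors that track the actual size distribution of the dataset.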


Directional Discrimination
Since remote sensing images are taken from above the cargo ships, the images contain not only the category and location information of the cargo ships but also their directional information. Mastering the course information of cargo ships is important for sailing along the expected route: it not only ensures the safe navigation of the cargo ships but also helps to reduce the cost of navigation. In this paper, the problem of directional discrimination of cargo ships is transformed into a classification problem. The direction of the cargo ship is divided into four directions: east, south, west, and north (Figure 6). The object detection model learns the directional information as part of the category information and outputs the category and direction of cargo ships at the same time. In this way, the directional information of cargo ships is obtained without adding extra parameters.
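One way to realize this (a sketch; the paper does not spell out its exact label scheme) is to fold the four directions into the class list, so the three ship types and four directions give 3 × 4 = 12 detection classes handled by the ordinary classification head:

```python
SHIP_TYPES = ["bulk carrier", "container", "tanker"]
DIRECTIONS = ["north", "east", "south", "west"]

# Every (type, direction) pair becomes one detection class, so direction is
# learned by the standard classification branch with no extra parameters.
CLASSES = [f"{t}-{d}" for t in SHIP_TYPES for d in DIRECTIONS]

def encode(ship_type, direction):
    # Map a (type, direction) pair to its integer class id.
    return CLASSES.index(f"{ship_type}-{direction}")

def decode(class_id):
    # Recover the (type, direction) pair from a predicted class id.
    name = CLASSES[class_id]
    t, d = name.rsplit("-", 1)
    return t, d

print(len(CLASSES))                        # 12
print(decode(encode("container", "south")))
```

The detector then outputs, for each box, a single class id from which both the cargo ship category and its heading are read off.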

Experimental Dataset
(1) Data collection. As far as we know, there is no publicly available remote sensing image cargo ship dataset with category and directional information. To detect cargo ships and discriminate their directions, it is necessary to collect corresponding remote sensing images. In this paper, the images are collected from Google Earth. The resolutions of the images are at levels 16, 17, and 18, and the bands are red, green, and blue. The data cover different backgrounds and various positional relationships, which meets the needs of practical tasks. Due to the limitation of computer memory capacity, the collected images are cropped into 800 × 800 pixels, and the images containing the three types of cargo ships (bulk carrier, container, and tanker) are selected, ensuring that each image contains at least one cargo ship. Examples are shown in Figure 7.

(2) Data annotation. In order to establish a cargo ship dataset of remote sensing images, it is necessary to annotate the collected images for the training and testing of the models. In this paper, the labeling software Labelimg [32] is used on the cargo ships in remote sensing images. Labeled objects include bulk carrier, container, and tanker. According to the directions of the cargo ships, they are divided into four categories (east, south, west, and north), and the angle range of each category is 90 degrees. After labeling is completed, a corresponding annotation file is formed, which records the location, category, and direction of each cargo ship. The annotated dataset is uniformly processed into the VOC2007 format [33] to provide a standard dataset for the training of models.

(3) Data augmentation and splitting. In the process of model training, the larger and more comprehensive the dataset, the stronger the model's recognition ability. Therefore, in this paper, the collected remote sensing images are rotated clockwise by 90, 180, and 270 degrees, as well as flipped horizontally and vertically, to expand the data. The dataset is thus expanded six times, with the directional labels adjusted accordingly, and 15,654 remote sensing images are finally obtained. Table 2 shows the details of various cargo ships in the dataset. The dataset is randomly divided into training, validation, and test sets according to the ratio of 8:1:1.
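When the images are rotated or flipped, the direction labels must follow the same geometric transform. A compass-direction remapping like the following (illustrative code, not the paper's implementation) keeps the labels consistent:

```python
DIRECTIONS = ["north", "east", "south", "west"]

def rotate_cw(direction, degrees):
    # Rotating the image clockwise by 90 degrees turns a north-heading
    # ship into an east-heading one, and so on around the compass.
    steps = (degrees // 90) % 4
    return DIRECTIONS[(DIRECTIONS.index(direction) + steps) % 4]

def flip_horizontal(direction):
    # Mirroring left-right swaps east and west; north and south are unchanged.
    return {"east": "west", "west": "east"}.get(direction, direction)

def flip_vertical(direction):
    # Mirroring top-bottom swaps north and south.
    return {"north": "south", "south": "north"}.get(direction, direction)

print(rotate_cw("north", 90))   # east
print(rotate_cw("west", 180))   # east
print(flip_horizontal("east"))  # west
```

Applying these remappings alongside the image transforms is what allows the six-fold augmentation without corrupting the directional ground truth.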


The Anchors Clustering
To obtain anchors suitable for cargo ships, this paper introduces the K-Means++ algorithm into the RPN to form the K-RPN: the K-Means++ algorithm automatically generates appropriate anchors instead of relying on the manual method, since improperly selected anchors degrade the final detection results. Figure 8 shows the length and width distribution of cargo ships in the dataset and the clustering results of K-Means++. Table 3 shows the comparison between the original anchors and the K-Means++ anchors.

Implementation Details
The experiments are conducted on a Dell T3640 workstation. The operating system is Ubuntu 16.04 LTS. The model is written in Python and supported by Torch on the backend.
The model input size is set to 800 × 800. Stochastic gradient descent is used in training to minimize the loss function. The initial learning rate is set to 0.025 and is multiplied by 0.1 at 30,000 and 40,000 iterations. The batch size is two, momentum is set to 0.9, and weight decay is set to 0.0005. The model is trained for 50,000 iterations. Figure 9 shows the change of loss during training.
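This step-decay schedule can be written out explicitly (a small sketch of the stated hyperparameters; the function name is illustrative):

```python
def learning_rate(iteration, base_lr=0.025, steps=(30000, 40000), gamma=0.1):
    # Step decay: the learning rate is multiplied by gamma at each milestone.
    lr = base_lr
    for step in steps:
        if iteration >= step:
            lr *= gamma
    return lr

print(learning_rate(0))       # 0.025
print(learning_rate(35000))   # ~0.0025
print(learning_rate(45000))   # ~0.00025
```

So the network trains at 0.025 for the first 30,000 iterations, at about 0.0025 for the next 10,000, and at about 0.00025 for the final 10,000.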




Evaluation Metrics
This paper uses four evaluation metrics: precision (P), recall (R), average precision (AP), and mean average precision (mAP). Precision indicates the percentage of true positives among all detected cargo ships, and recall indicates the percentage of true positives among the ground-truth cargo ships. AP is the area under the precision-recall curve, which is the popular evaluation metric of object detection and is often used to compare object detection models. mAP is the mean of AP across all classes:

P = T_p / (T_p + F_p)

R = T_p / (T_p + F_n)

AP = ∫_0^1 P(R) dR

mAP = (1/n) Σ_{i=1}^{n} AP_i

where T_p represents the number of cargo ships correctly identified in the detection results, F_p represents the number of cargo ships falsely identified in the detection results, F_n represents the number of cargo ships that have been missed, and n represents the number of cargo ship classes.
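These definitions translate directly into code; the AP helper below approximates the integral with the trapezoidal rule over sampled precision-recall points (an illustrative sketch, not the paper's evaluation script):

```python
def precision(tp, fp):
    # Fraction of detections that are correct: Tp / (Tp + Fp).
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of ground-truth cargo ships that were found: Tp / (Tp + Fn).
    return tp / (tp + fn)

def average_precision(points):
    # Area under a precision-recall curve given as (recall, precision)
    # points sorted by recall, integrated with the trapezoidal rule.
    ap = 0.0
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        ap += (r1 - r0) * (p0 + p1) / 2
    return ap

print(precision(90, 10))  # 0.9
print(recall(90, 30))     # 0.75
print(average_precision([(0.0, 1.0), (0.5, 0.8), (1.0, 0.6)]))
```

mAP is then simply the arithmetic mean of the per-class AP values over the n cargo ship classes.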

Performance of Different Models
To quantitatively analyze the effectiveness of the proposed model, the self-built dataset is used to train and test the model, and the evaluation metrics P, R, and AP are used to evaluate the test results. Table 4 lists the test results of our model on the test set. Table 5 gives statistics on model size, training duration, mAP, and single-image prediction time for different models. As can be seen from the experimental results, in terms of model size, our model has fewer parameters (model size of 110 MB) and a shorter training time (79.5 min), so it is more suitable for deployment on the spaceborne platform. In terms of detection accuracy, our model achieves a detection accuracy (mAP of 91.96%) close to that of the two-stage faster-RCNN (mAP of 94.41%). In terms of detection speed, our model maintains a fast detection speed (prediction time of 46 ms per image) and meets the requirements of real-time detection. Figure 10 shows the detection results of different models on the test set. It can be seen from the figure that our model and faster-RCNN complete the classification, positioning, and directional discrimination of cargo ships in remote sensing images well, while the YOLOv3 model shows different degrees of missed detection.

Performance of Other Remote Sensing Images
Given that Google Earth images are an integration of multiple satellite images and aerial images, this paper conducts tests on high-resolution remote sensing images from multiple satellites, including Skysat 1.0 m/pixel resolution images (Figure 11), Deimos-2 0.5~2 m/pixel resolution images (Figure 12), and QuickBird 0.5~2 m/pixel resolution images (Figure 13), to verify the performance of our model on remote sensing images from a single source.


Discussion
Using remote sensing technology to obtain remote sensing images of cargo ships, combined with deep learning object detection algorithms to obtain the classification, position, and directional information of the cargo ships in real time, is of great significance for cargo ship monitoring. In this paper, to further exploit the directional information of cargo ships in remote sensing images, the unique overhead perspective of remote sensing images is used to transform the directional recognition problem into a classification problem, so that the model completes the directional discrimination while completing the detection task. By extracting the heatmap of the feature map corresponding to the detection box, it is found that the model pays more attention to the head of the cargo ship, which is consistent with how the direction of a cargo ship is actually judged (Table 6). At the same time, real-time processing of remote sensing images on the spaceborne platform is the future trend. However, due to the limited computing and storage resources of the spaceborne platform, complex and huge object detection models have not been widely used there. Therefore, we propose a novel cargo ship detection and directional discrimination method for remote sensing images based on a lightweight network, which has important practical significance.

Table 6. Detection result, region proposal, heatmap, class prediction, and direction prediction for sample ships: bulk carrier (east), container (south), and tanker (north).
Conclusions
This paper proposes a novel cargo ship detection and directional discrimination method for remote sensing images based on a lightweight network, which can efficiently and accurately classify, locate, and determine the direction of cargo ships. To address the fact that complex, large object detection networks are ill-suited to real-time autonomous operation on spaceborne platforms, we use one-shot feature aggregation modules and depthwise separable convolutions to design an efficient, lightweight feature extraction network, the one-shot aggregation and depthwise separable network (OSADSNet). We also introduce the K-Means++ algorithm into the RPN to form the K-RPN, which generates region proposals better suited to cargo ship detection. Existing detectors output only the category and location of a cargo ship and lack its directional information; to remedy this, we transform directional recognition into a classification problem without introducing additional parameters, so that direction is estimated together with detection. Comparative experiments on a self-built dataset show that our model meets the requirements of spaceborne cargo ship detection and directional discrimination in model size (110 MB), detection accuracy (mAP of 91.96%), and speed (prediction time of 46 ms per image). Because Google Earth imagery combines multiple satellite and aerial sources, we additionally test the model on remote sensing images from individual satellites, and the results confirm its effectiveness.
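A plain k-means++ clustering of ground-truth box widths and heights, in the spirit of the K-RPN's anchor selection, might look like the following sketch (the Euclidean distance metric, iteration count, and function names are assumptions for illustration, not the authors' implementation):

```python
# Illustrative sketch: derive RPN anchor shapes by clustering
# ground-truth box (width, height) pairs with k-means++ seeding.
import math
import random

def kmeans_pp_anchors(boxes, k, iters=10, seed=0):
    """boxes: list of (w, h) tuples; returns k (w, h) anchor shapes."""
    rng = random.Random(seed)
    dist = math.dist
    # k-means++ seeding: first center uniform; each further center is
    # sampled with probability proportional to squared distance from
    # its nearest already-chosen center.
    centers = [boxes[rng.randrange(len(boxes))]]
    while len(centers) < k:
        d2 = [min(dist(b, c) ** 2 for c in centers) for b in boxes]
        r, acc, idx = rng.uniform(0, sum(d2)), 0.0, 0
        for i, w in enumerate(d2):
            acc += w
            if acc >= r:
                idx = i
                break
        centers.append(boxes[idx])
    # standard Lloyd iterations to refine the centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            clusters[min(range(k), key=lambda j: dist(b, centers[j]))].append(b)
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(w for w, _ in cl) / len(cl),
                              sum(h for _, h in cl) / len(cl))
    return centers
```

The resulting cluster centers replace hand-picked anchor scales and aspect ratios, so the RPN's priors match the size statistics of cargo ships in the dataset.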

This study has some limitations. Although the proposed method performs the classification, positioning, and directional discrimination of cargo ships well, it divides direction into only four classes: north, east, south, and west. Remote sensing images of cargo ships contain rich information, of which this paper has mined only a part, so the direction estimate can be further refined. In future work, we will consider refining the predicted direction into eight classes (north, northeast, east, southeast, south, southwest, west, and northwest) or even a continuous heading in degrees.
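One way such a refinement could be implemented, assuming heading angles were annotated in degrees (a hypothetical helper for generating finer labels, not part of the proposed model):

```python
# Hypothetical label generator: quantize a heading angle (degrees,
# 0 = north, clockwise) into 4-way or 8-way compass classes.
def heading_to_compass(deg: float, bins: int = 8) -> str:
    names8 = ["north", "northeast", "east", "southeast",
              "south", "southwest", "west", "northwest"]
    names = names8[::2] if bins == 4 else names8  # 4-way keeps N/E/S/W
    step = 360.0 / bins
    # shift by half a bin so each class is centered on its compass point
    return names[int(((deg % 360) + step / 2) // step) % bins]
```

The same classification-based detection head would then simply be trained with 8 direction classes per ship type instead of 4.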

Data Availability Statement:
The data could be available on request from the corresponding author (liujianzhong@zzu.edu.cn).

Acknowledgments:
The authors would like to thank the anonymous reviewers and editors for their useful comments and suggestions, which were of great help in improving this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
AP	Average precision
AIS	Automatic identification system