Article

Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy

1
School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
2
School of Information and Communication Engineering, Communication University of China, Beijing 100024, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(15), 2599; https://doi.org/10.3390/rs17152599
Submission received: 15 June 2025 / Revised: 11 July 2025 / Accepted: 22 July 2025 / Published: 26 July 2025
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)

Abstract

The Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) has great coverage ability, which can significantly improve maritime ship target surveillance performance. However, due to the huge computational load required for imaging processing and the severe defocusing caused by ship motions, traditional ship recognition conducted in the focused image domain cannot process MEO SAR data efficiently. To address this issue, a multi-level focusing-classification strategy for MEO SAR ship recognition is proposed, which is applied to the range-compressed ship data domain. Firstly, global fast coarse-focusing is conducted to compensate for sailing motion errors. Then, a coarse-classification network is designed to realize major target category classification, based on which local region image slices are extracted. Next, fine-focusing is performed to correct high-order motion errors, followed by fine-classification applied to the image slices to realize final ship classification. Equivalent MEO SAR ship images generated from real LEO SAR data are utilized to construct training and testing datasets. Simulated MEO SAR ship data are also used to evaluate the generalization of the whole method. The experimental results demonstrate that the proposed method achieves high classification precision. Since only local region slices are used during the second-level processing step, the complex computations induced by fine-focusing of the full image are avoided, thereby significantly improving overall efficiency.

1. Introduction

The Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) has advantages such as short revisit time and wide coverage, which can significantly improve real-time observation capabilities [1,2]. These features are particularly valuable for wide-swath maritime ship target surveillance in both civilian and military scenarios. Traditional ship target detection and recognition procedures typically consist of two main steps. Firstly, SAR image formation is conducted to acquire the full-scene focused SAR image. Then, target detection [3,4,5] and recognition [6,7,8,9] methods are applied to the focused image to acquire further ship target information. Compared to Low Earth Orbit (LEO) SAR systems, MEO SAR has a higher orbital altitude and a longer synthetic aperture time. This introduces complex range cell migration (RCM) and spatially variant echo parameters, making full-scene imaging processing highly complex [10,11]. In addition, motion errors induced by ship movements leave target ships severely defocused in MEO SAR images [12,13]. These issues cause increased computational complexity and reduced precision for traditional ship target detection and recognition methods when processing MEO SAR data.
Performing ship target detection in the range-compressed domain is an effective way to improve detection efficiency, especially for open-sea scenarios [14,15]. Extracting target ship signals in the range-compressed domain can significantly reduce the amount of data that needs to be processed. In order to achieve further ship classification, it is still necessary to focus the range-compressed data to obtain the ship image [16]. However, moving ship targets may suffer from defocusing in SAR images, as traditional imaging algorithms are designed for stationary scenes. Many refocusing methods for moving ship targets in SAR images have been proposed. The conventional phase-gradient autofocus (PGA) method uses strong scattering points to estimate the phase errors induced by target movements [17,18,19]. Since motion errors cause SAR image defocusing, some algorithms estimate phase errors by optimizing SAR image quality, such as by maximizing image contrast or minimizing image entropy [20,21]. For MEO SAR systems, as the synthetic aperture time increases, the defocusing caused by ship rotations becomes increasingly severe [22]. The above-mentioned refocusing methods cannot estimate and correct space-variant motion errors effectively [23,24]. This problem can be addressed by dividing the SAR image into multiple sub-blocks and refocusing them separately [25,26,27,28], but this strategy substantially increases the computational load. Therefore, achieving both efficient and effective fine recognition of ship targets remains a critical challenge for MEO SAR systems.
To address the above difficulties, a novel multi-level focusing-classification strategy for MEO SAR systems is proposed in this paper to realize ship target fine recognition. The proposed method is applied to detected range-compressed ship data. The main idea is to decompose the complex ship focusing process into fast global coarse-focusing and local fine-focusing and to design corresponding coarse classification and fine classification networks for these two focusing results. Based on the coarse classification results, the target-specific local region image slices are extracted for fine focusing and classification, which greatly reduces the computational load required for global fine-focusing.
A diagram of the proposed MEO SAR ship target recognition strategy is shown in Figure 1. The primary task involves classification of six types of ships, including aircraft carriers, cruisers, destroyers, tankers, container ships, and bulk carriers. The ship target multi-level focusing-classification strategy contains the following steps.
(1)
Coarse focusing: Ship targets are coarsely focused using estimated Doppler parameters. Motion errors induced by the target's sailing velocity are compensated for in this step.
(2)
Coarse classification: Each coarsely focused ship image is input into the coarse classification network to complete the classification of major target categories, including aircraft carriers, other military ships, and civilian ships.
(3)
Fine refocusing: The local region slice of the ship target image is extracted and fine-refocused by estimating and compensating for spatially varying higher-order motion errors.
(4)
Fine classification: The finely-focused image slice is input into the fine classification network to achieve fine recognition of specific target categories.
The inputs of the proposed method are the complex data of potential ship targets in the range-compressed domain, which can be obtained by ship detection using range-compressed data [14,15]. The proposed method can be seen as the subsequent processing of range-compressed domain detection. For open ocean scenarios, extracting potential targets in the range-compressed domain can significantly reduce the amount of data processing needed. Therefore, using range-compressed domain data as input aligns with the future trend of efficient processing for maritime scenarios.
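The four steps above can be sketched as a minimal processing pipeline. This is an illustrative skeleton only: every function name is a hypothetical placeholder (the actual focusing operators and the two classification networks are described in Sections 3 and 4), and the stand-in bodies exist just to make the data flow runnable.

```python
import numpy as np

# Hypothetical sketch of the two-level focusing-classification pipeline; all
# function names and bodies are illustrative placeholders, not the authors' code.

def coarse_focus(rc_data):
    """Global coarse focusing: an azimuth FFT stands in for Doppler-parameter-
    based imaging of the range-compressed data."""
    return np.fft.fft(rc_data, axis=0)

def coarse_classify(image):
    """Stand-in for the coarse-classification network: returns one of the
    three major categories (aircraft carrier / other military / civilian)."""
    return "civilian"  # placeholder decision

def extract_slice(image, category):
    """Extract a local region slice around the category-specific feature area."""
    h, w = image.shape
    return image[: h // 2, : w // 2]

def fine_focus(slice_img):
    """Stand-in for space-variant high-order phase error compensation."""
    return slice_img

def fine_classify(slice_img, category):
    """Stand-in for the category-specific fine-classification network."""
    return {"civilian": "bulk carrier"}.get(category, category)

def recognize(rc_data):
    img = coarse_focus(rc_data)
    cat = coarse_classify(np.abs(img))
    sl = fine_focus(extract_slice(np.abs(img), cat))
    return fine_classify(sl, cat)

result = recognize(np.ones((64, 64), dtype=complex))
```

The key structural point is that `fine_focus` only ever sees the extracted slice, never the full image, which is where the computational saving claimed above comes from.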
The organization of this paper is as follows. In Section 2, the signal model for ship targets in MEO SAR is established. The defocusing characteristics of MEO SAR ship targets are analyzed in detail. Coarse-level focusing and classification and fine-level focusing and classification are described in Section 3 and Section 4, respectively. Section 5 presents the dataset construction and experimental results. Section 6 gives the conclusion.

2. Signal Modeling and Characteristic Analysis of MEO SAR Ships

In this section, the signal models of MEO SAR ship targets are first established based on the geometry of MEO SAR system and a ship motion model. Combined with the SAR imaging principle, the defocusing characteristics of moving ships induced by ship sailing motions and rotational motions are then analyzed in detail.

2.1. Signal Model Establishment

Let $R_p(\eta)$ be the instantaneous slant range between the SAR antenna and point scatterer P. The echo signal can be expressed as [29]
$$S_p(\tau,\eta)=\sigma_0\, w_r\!\left(\tau-\frac{2R_p(\eta)}{c}\right) w_a\!\left(\eta-\eta_p\right)\exp\!\left(-j\frac{4\pi R_p(\eta)}{\lambda}\right)\exp\!\left(j\pi K_r\!\left(\tau-\frac{2R_p(\eta)}{c}\right)^{2}\right)$$
in which $\eta$ is the azimuth slow time, $\tau$ is the fast time along the range direction, $\sigma_0$ is the scattering intensity, $w_r$ is the range signal envelope, $w_a$ is the azimuth signal envelope, $\lambda$ is the wavelength, $c$ is the speed of light, and $K_r$ is the frequency modulation (FM) rate of the transmitted signal. For a static target, $R_p(\eta)$ is a function of the shortest slant range. After range compression, the signal can be expressed as
$$S_{p,rc}(\tau,\eta)=\sigma_0\, p_r\!\left(\tau-\frac{2R_p(\eta)}{c}\right) w_a\!\left(\eta-\eta_p\right)\exp\!\left(-j\frac{4\pi R_p(\eta)}{\lambda}\right)$$
where p r is the signal range envelope after range compression. The movements of a ship include translational motions as well as rotational motions caused by waves. The synthetic aperture time of an MEO SAR system ranges from several seconds to over ten seconds. The long aperture time leads to significant slant range errors caused by target motions. This leads to a mismatch between the signal parameters of the moving ship and the traditional imaging algorithm, consequently resulting in azimuthal defocusing in the final SAR image. Point target simulation experiments are conducted to analyze the impact of ship motions on SAR imaging. The simulation parameters of the MEO SAR and LEO SAR are shown in Table 1. The resolution is set to 1.5 m. The orbit heights of MEO SAR and LEO SAR are 7500 km and 600 km, respectively.

2.2. MEO SAR Image Defocusing Caused by Ship Sailing

The translational motions of a ship mainly refer to its sailing movements. Specifically, the average speed of bulk carriers is about 10.9 knots, while the average speed of container ships is 13.9 knots, and that of tankers is approximately 11.5 knots. In contrast, military ships typically maintain a speed of around 30 knots. The translational motions can be decomposed into velocity along the azimuth direction and velocity along the range direction. The range velocity primarily causes azimuthal displacement in the SAR image, while the azimuth velocity mainly leads to defocusing. The latter has a more significant impact on target recognition. The greater the azimuth velocity of a ship, the more pronounced the defocusing of its image. This section focuses on analyzing the effects of azimuth velocity on the SAR image.
The azimuth FM rate of the SAR signal for a static target is
$$K_a=\frac{2v^{2}}{\lambda R}$$
in which $v$ is the satellite equivalent velocity and $R$ is the shortest slant range. Let $\Delta v_g$ denote the target azimuth velocity. Taking into account the satellite–ground geometry, the azimuth FM rate error induced by $\Delta v_g$ can be written as
$$\Delta K_a=\frac{4v\,\Delta v_g}{\lambda R}\cdot\frac{H+R_e}{R_e}$$
where $H$ is the orbit height, while $R_e$ is the Earth's radius. Let $T_s$ be the synthetic aperture time. The maximum quadratic phase error caused by $\Delta K_a$ is
$$\varphi_{\max}=\frac{\pi\,\Delta K_a\, T_s^{2}}{2}$$
Figure 2 shows the maximum quadratic phase error for different target azimuth velocities. Compared with LEO SAR, the phase errors of MEO SAR are several times larger at the same velocity. The azimuth velocity of the point target is set to 0 m/s, 5 m/s, 10 m/s, and 15 m/s, respectively, with the range velocity fixed at 0 m/s, and echo simulation and imaging processing are performed under the two orbital altitude conditions. The final SAR images are shown in Figure 3.
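The scaling of the quadratic phase error with orbit height can be checked numerically from the expressions for $\Delta K_a$ and $\varphi_{\max}$ above. All orbit and radar numbers here (X-band wavelength, 30° look angle, circular-orbit velocity, 1.5 m azimuth resolution) are illustrative assumptions, not the paper's Table 1 values, so only the orders of magnitude are meaningful.

```python
import math

# Back-of-envelope check of the azimuth FM-rate error and max quadratic phase
# error for a moving target, mirroring the expressions in the text. All numbers
# are assumed illustrative values, not the paper's simulation parameters.

LAM = 0.031    # wavelength [m], X-band (assumed)
RE = 6.371e6   # Earth radius [m]
RHO = 1.5      # azimuth resolution [m]
MU = 3.986e14  # Earth's gravitational parameter [m^3/s^2]

def max_qpe(H, dvg):
    """Max quadratic phase error [rad] for orbit height H and target azimuth
    velocity dvg, under the assumed geometry."""
    v = math.sqrt(MU / (RE + H))        # circular-orbit speed (assumed)
    R = H / math.cos(math.radians(30))  # rough slant range at a 30-deg angle
    Ts = LAM * R / (2 * RHO * v)        # aperture time needed for RHO
    dKa = 4 * v * dvg / (LAM * R) * (H + RE) / RE
    return math.pi * dKa * Ts**2 / 2

meo = max_qpe(7.5e6, 10.0)  # MEO (7500 km), 10 m/s azimuth velocity
leo = max_qpe(0.6e6, 10.0)  # LEO (600 km), same velocity
```

With these assumptions the MEO phase error exceeds the LEO one by well over an order of magnitude, consistent with the much heavier defocusing reported for MEO SAR.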
The range velocity of the point target is then set to 10 m/s, with the azimuth velocity set to 0 m/s, and echo simulation and imaging processing are performed with the MEO SAR and LEO SAR parameters, respectively. The imaging results are shown in Figure 4. It can be seen that, for both the MEO and LEO SAR systems, the range velocity has nearly no influence on the focusing result.
From the imaging results it can be observed that, compared to LEO SAR, the defocusing of moving point targets in MEO SAR is more severe under the same velocity conditions. In the LEO SAR images, targets with velocities below 15 m/s can still be visually distinguished, whereas in the MEO SAR images, the shape characteristics of the point targets are completely lost. Azimuthal defocusing causes the point targets to appear as straight lines, making it difficult to directly identify targets under such conditions.

2.3. MEO SAR Image Defocusing Caused by Ship Rotation

Ocean waves can induce rotational movements for ships, including rolling, pitching, and yawing, as shown in Figure 5. The oscillations of a ship exhibit distinct periodic characteristics. For LEO SAR systems, the synthetic aperture time is relatively short, and the impact of ship rotations on imaging quality is minimal. However, as the orbital altitude increases, to maintain the same resolution, the synthetic aperture time also becomes longer. For MEO SAR, during the synthetic aperture time, a target may experience one or even multiple oscillation cycles. These complex movements can cause spatial and temporal variations in the phase errors of SAR echo signals, leading to significant degradation of SAR image quality. To simplify the analysis, the ship’s rotational motion is modeled as a sinusoidal function. The expression for the rotation angle is as follows:
$$\beta_i=A_i\sin\!\left(\frac{2\pi}{T_i}\eta+\phi_i\right),\quad i=x,y,z$$
in which $\beta_i$ is the instantaneous rotation angle, $A_i$ is the amplitude, $T_i$ is the period, and $\phi_i$ is the initial phase; $i=x$ indicates the roll angle, $i=y$ the pitch angle, and $i=z$ the yaw angle. Unlike the translational motion, the ship's rotational motion is non-uniform, meaning that different scattering points on the ship exhibit different motion patterns.
Taking roll motion as an example to analyze the phase errors caused by target rotation, Figure 6 shows the motion error induced by ship rotation. For simplicity, the rotation axis is assumed parallel to the azimuth direction. Let $\gamma$ be the antenna beam incidence angle. The red rectangle denotes the ship target position in the stationary state, and the black rectangle denotes the ship target position at azimuth time $\eta$, where the rotation angle is $\beta_x$. The motion error of a point scatterer at distance $\Delta L$ from the rotation axis is $\Delta d$. According to the small-angle approximation, the projection of $\Delta d$ onto the SAR line of sight is
$$\Delta R_x=\Delta L\,\beta_x\cos\gamma$$
The above equation indicates that the slant range error caused by ship rotation is linearly related to the distance from the point scatterer to the rotation axis. The phase error caused by $\Delta R_x$ in the azimuth time domain is
$$\frac{4\pi\,\Delta L\, A_x\cos\gamma}{\lambda}\sin\!\left(\frac{2\pi}{T_x}\eta+\phi_x\right)$$
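A quick numerical evaluation of these two expressions illustrates why rotation matters: even a modest roll takes the phase error far beyond the usual fraction-of-a-radian tolerance. The amplitude, period, incidence angle, and scatterer offset below are assumed example values, not the paper's Table 2 parameters.

```python
import math

# Illustrative evaluation of the rotation-induced slant-range error ΔR_x and
# the peak azimuth phase error from the expressions above. All parameter
# values are assumed examples, not the paper's simulation settings.

LAM = 0.031                 # wavelength [m] (assumed X-band)
A_x = math.radians(3.0)     # roll amplitude: 3 degrees (assumed)
T_x = 10.0                  # roll period [s] (assumed)
gamma = math.radians(35.0)  # incidence angle (assumed)
dL = 10.0                   # scatterer distance from the roll axis [m]

def slant_error(eta, phi0=0.0):
    """ΔR_x at azimuth time eta for the sinusoidal roll model."""
    beta = A_x * math.sin(2 * math.pi / T_x * eta + phi0)
    return dL * beta * math.cos(gamma)

# Peak of the azimuth phase error 4*pi*dL*A_x*cos(gamma)/lambda [rad]
peak_phase = 4 * math.pi * dL * A_x * math.cos(gamma) / LAM
```

With these numbers the peak phase error is on the order of a hundred radians, i.e. many full cycles over one roll period, which is what produces the severe, spatially varying defocusing analyzed next.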
According to the principle of stationary phase (POSP), the azimuth signal in the frequency domain can be expressed as
$$S_a(f)=\int w_a(\eta)\exp\!\left[-j\frac{4\pi\Delta L A_x\cos\gamma}{\lambda}\sin\!\left(\frac{2\pi}{T_x}\eta+\phi_x\right)-j\pi K_a\eta^{2}\right]\exp\!\left(-j2\pi f\eta\right)d\eta\approx W_a(f)\exp\!\left(j\frac{\pi f^{2}}{K_a}\right)\exp\!\left(j\frac{4\pi\Delta L A_x\cos\gamma}{\lambda}\sin\!\left(\frac{2\pi}{T_x}\frac{f}{K_a}-\phi_x\right)\right)$$
in which $f$ denotes the azimuth frequency. Let $\Delta r$ be the slant range displacement corresponding to $\Delta L$. Then, the phase errors in the range-Doppler domain can be expressed as
$$\vartheta(\Delta r,f)=\exp\!\left(j\frac{4\pi\,\Delta r\, A_x\cos\gamma}{\lambda\sin\gamma}\sin\!\left(\frac{2\pi}{T_x}\frac{f}{K_a}-\phi_x\right)\right)$$
To further analyze the defocusing characteristics caused by ship rotations, 3 × 3 point targets are set up in the simulation scene, with a spacing of 50 m between adjacent targets. Pitch, roll, and yaw motions are superimposed on the point targets, whose parameters are shown in Table 2.
Figure 7 shows the imaging results of the rotating point targets under the two orbit heights. The LEO SAR results exhibit varying degrees of defocusing among the nine point targets, yet the outline of the point array remains discernible. In contrast, the MEO SAR results demonstrate severe defocusing: the original shape and contour are completely lost, and the defocusing states of the individual point targets vary significantly. This pronounced defocusing brings challenges for subsequent target recognition.
From the analysis above, it is evident that the defocusing of imaging results caused by ship motions is more severe for MEO SAR, with the defocused ship targets being significantly distorted and unrecognizable in their direct form. It is necessary to perform refocusing processing of the images of ship targets to achieve further recognition. However, refocusing of ships with complex motions is computationally intensive. Therefore, this paper integrates the refocusing processing with a target recognition network. Based on the defocusing characteristics of MEO SAR ships, a multi-level focusing-classification strategy is proposed to realize accurate recognition of MEO SAR ship targets.

3. Coarse-Level Focusing and Classification of MEO SAR Ship Targets

3.1. Ship Target Coarse-Focusing Based on Doppler Parameter Estimation

Coarse focusing compensates for motion errors caused by ship sailing. The input for this step is the range-compressed signals of a target, as shown in Equation (2). The Doppler centroid frequency of the target signals is first estimated, and imaging processing is carried out using the target’s Doppler centroid frequency to compensate for the spectral shift induced by the target’s radial velocity. Due to the uncompensated target azimuth velocity, the image still remains defocused. The target image is subsequently transformed into the range-Doppler domain via azimuth Fast Fourier Transform (FFT), where the quadratic phase errors caused by the azimuth velocity are estimated and corrected, yielding the coarsely focused SAR image.
The Doppler centroid frequency error induced by the target's radial velocity $\Delta v_r$ is expressed as
$$\Delta f_d=\frac{2\Delta v_r}{\lambda}$$
The projection of the ship's velocity onto the radar line of sight is within 10 m/s. For an X-band MEO SAR satellite, the Doppler centroid frequency error caused by the ship's radial velocity is therefore within 700 Hz, which usually does not exceed one Pulse Repetition Frequency (PRF). Hence, it is only necessary to estimate the baseband Doppler centroid frequency error, without considering the ambiguity number of the Doppler centroid. The average phase increment method is employed for the estimation. Let $S_{p,rc}(\tau,\eta)$ be the ship signal after range compression and $S_{p,rc}(\tau,\eta+1/PRF)$ the signal at the adjacent azimuth time. By performing average cross-correlation processing and extracting the phase, we obtain [29]
$$\phi_a=\angle\!\left\{\sum_{\eta,\tau} S_{p,rc}^{*}(\tau,\eta)\, S_{p,rc}\!\left(\tau,\eta+\frac{1}{PRF}\right)\right\}$$
The Doppler centroid frequency can then be calculated by
$$f_d=\frac{PRF}{2\pi}\,\phi_a$$
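The average-phase-increment estimator above is simple enough to verify on synthetic data: correlate each azimuth sample with its successor, sum over the block, and scale the phase of the sum by $PRF/2\pi$. The signal parameters below (PRF, centroid, noise level) are assumed test values.

```python
import numpy as np

# Minimal sketch of the average-phase-increment Doppler centroid estimator:
# a range-compressed block with a known baseband centroid is generated, then
# the centroid is recovered from the averaged adjacent-sample correlation.
# All signal parameters are assumed test values.

PRF = 2000.0    # pulse repetition frequency [Hz] (assumed)
fd_true = 430.0 # true baseband Doppler centroid [Hz]

n_az, n_rg = 512, 64
eta = np.arange(n_az) / PRF
rng = np.random.default_rng(0)

# Complex exponential at fd_true across azimuth, replicated over range cells,
# with mild multiplicative amplitude noise
s = np.exp(2j * np.pi * fd_true * eta)[:, None] * np.ones((1, n_rg))
s = s * (1 + 0.1 * rng.standard_normal((n_az, n_rg)))

# Average cross-correlation between adjacent azimuth samples, then the phase
acc = np.sum(np.conj(s[:-1, :]) * s[1:, :])
fd_est = PRF / (2 * np.pi) * np.angle(acc)
```

Because the estimator only measures a baseband phase increment, it works exactly when $|f_d| < PRF/2$, which is why the text first argues that the ship-induced centroid error stays well inside one PRF.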
The estimated Doppler centroid frequency is then utilized to conduct the subsequent range cell migration correction (RCMC) and azimuth compression. In order to process range-compressed SAR data directly, an approximate wavenumber-domain focusing algorithm is used to form the ship SAR image [29,30]. In addition to the Doppler centroid frequency, the Doppler FM rate should also be estimated when focusing a moving ship target. According to Section 2.2, the phase error in the azimuth frequency domain caused by $\Delta K_a$ is
$$\varphi(f)=\frac{\pi\,\Delta K_a}{K_a^{2}}\,f^{2},\quad f\in\left[-\frac{PRF}{2},\frac{PRF}{2}\right]$$
The azimuth FM rate error can lead to a mismatch in the azimuth-matched filter, causing defocusing along the azimuth direction in the SAR image. The degree of defocusing can be measured by the image entropy: the higher the image entropy, the more severe the image defocusing. The image entropy of a SAR image is
$$IE=-\frac{1}{Q}\sum_{l=0}^{L-1}\sum_{n=0}^{N-1}\left|I(l,n)\right|^{2}\ln\left|I(l,n)\right|^{2}+\ln Q$$
where $I$ represents the discrete-domain SAR image, $L$ and $N$ denote the number of azimuth and range pixels, respectively, and $Q$ is the image energy. Therefore, refocusing of the SAR image can be formulated as an optimization problem, with the SAR image entropy serving as the objective function. The goal is to find the optimal azimuth FM rate error $\Delta K_a$ that minimizes the image entropy. The optimization model is constructed as follows:
$$\Delta\hat{K}_a=\arg\min_{\Delta K_a}\, IE\!\left\{\left|\sum_{k=0}^{L-1} e^{\,j\pi\frac{\Delta K_a PRF^{2}}{K_a^{2}}\left(\frac{k}{L}-\frac{1}{2}\right)^{2}}\, G(k,n)\, e^{\,j\frac{2\pi k l}{L}}\right|\right\}$$
in which $k$ denotes the azimuth frequency bin, ranging from 0 to $L-1$,
$$G(k,n)=\sum_{l=0}^{L-1} I_0(l,n)\, e^{-j\frac{2\pi k l}{L}}$$
is the range-Doppler domain SAR image, and $I_0(l,n)$ is the original SAR image. This is a one-dimensional optimization problem, which can be solved using Newton's iteration method.
Ignoring the constant term $\ln Q$, the gradient of the objective function is given by
$$\frac{\partial IE}{\partial\Delta K_a}=-\frac{1}{Q}\sum_{l=0}^{L-1}\sum_{n=0}^{N-1}\frac{\partial\left|I(l,n)\right|^{2}}{\partial\Delta K_a}\left(\ln\left|I(l,n)\right|^{2}+1\right)$$
in which
$$I(l,n)=\sum_{k=0}^{L-1} G(k,n)\, e^{\,j\pi\frac{\Delta K_a PRF^{2}}{K_a^{2}}\left(\frac{k}{L}-\frac{1}{2}\right)^{2}}\, e^{\,j\frac{2\pi k l}{L}}$$
$$\frac{\partial\left|I(l,n)\right|^{2}}{\partial\Delta K_a}=\frac{2\pi\, PRF^{2}}{K_a^{2}}\,\mathrm{Im}\!\left\{I(l,n)\sum_{k=0}^{L-1}\left(\frac{k}{L}-\frac{1}{2}\right)^{2} G^{*}(k,n)\, e^{-j\pi\frac{\Delta K_a PRF^{2}}{K_a^{2}}\left(\frac{k}{L}-\frac{1}{2}\right)^{2}}\, e^{-j\frac{2\pi k l}{L}}\right\}$$
The gradient descent method is employed to iteratively solve the optimization problem above to obtain the azimuth FM rate error. After the quadratic phase errors are corrected, the coarsely focused SAR image is obtained, which can be utilized to conduct the subsequent coarse classification.
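The mechanics of this one-dimensional entropy minimization can be demonstrated on a toy point-target image. For brevity the sketch below replaces the Newton/gradient iteration of the text with a plain grid search over the quadratic phase coefficient, and uses a normalized entropy that shares the same minimizer as the $IE$ defined above; everything else (defocus in the range-Doppler domain, entropy as objective) follows the text.

```python
import numpy as np

# Toy demonstration of azimuth FM-rate error estimation by entropy
# minimization: a point target is defocused with a known quadratic phase in
# the azimuth frequency domain, and a grid search (a stand-in for the
# Newton-type iteration in the text) recovers the coefficient.

L_az, N_rg = 128, 32
img0 = np.zeros((L_az, N_rg), dtype=complex)
img0[L_az // 2, N_rg // 2] = 1.0  # ideal point target

k = np.arange(L_az)
u = (k / L_az - 0.5) ** 2   # the (k/L - 1/2)^2 term of the phase model
c_true = 40.0               # true quadratic phase coefficient (assumed)

G = np.fft.fft(img0, axis=0)  # range-Doppler domain
img_def = np.fft.ifft(G * np.exp(1j * np.pi * c_true * u)[:, None], axis=0)

def entropy(img):
    """Normalized image entropy; same minimizer as the IE in the text."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def refocus(c):
    """Apply a trial compensation with coefficient c to the defocused image."""
    Gd = np.fft.fft(img_def, axis=0)
    return np.fft.ifft(Gd * np.exp(-1j * np.pi * c * u)[:, None], axis=0)

cs = np.linspace(0.0, 80.0, 161)
c_est = cs[np.argmin([entropy(refocus(c)) for c in cs])]
```

At the true coefficient the compensation cancels the defocus exactly, the energy collapses back to one pixel, and the entropy reaches its minimum, which is the property both the grid search here and the Newton iteration in the text exploit.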

3.2. Ship Target Coarse Classification Based on Global Features

In the first-level coarse classification, ship targets are categorized into three major classes: civilian vessels, other military ships, and aircraft carriers. The classification is primarily based on the target's global morphological characteristics. Aircraft carriers, as typical large military vessels, exhibit significant morphological and dimensional distinctions from other ships, and these distinctive features become markedly apparent after coarse focusing. Consequently, aircraft carriers are treated as a separate category during the coarse-classification stage. The other military ships consist of destroyers and cruisers, which exhibit high global similarity, with comparable dimensions, length-to-width ratios, and bow and stern structures; after coarse focusing, they can be classified as military ships based on contour morphology. Civilian ships, including bulk carriers, container ships, and tankers, likewise share typical civilian characteristics, such as rounded bows and length-to-width ratios concentrated at smaller values than those of military ships. In summary, the inter-class differences among these three categories after coarse focusing are large enough to provide adequate features for discrimination, while the intra-class variations within each coarse class remain small enough not to affect the global feature representation of the target.
Classification is achieved based on global feature extraction, which enables rapid pre-classification of ship targets into these broad categories. The ResNet algorithm [31,32,33] is a deep neural network architecture renowned for its significant advantage of enabling cross-layer residual learning. This capability substantially reduces the number of parameters, thereby decreasing the computational load of deep networks. In this paper, the ResNet-101 network is employed as the main component of the coarse-classification model. The specific parameter structure of each layer, the dimensions of the output vectors, and the total number of parameters are detailed in Table 3.
The globally salient morphological features of the ship targets are utilized to refine the coarse classification results produced by the ResNet network. The aspect ratio, a critical feature of ship targets, is widely utilized in target identification tasks. Combined with the grayscale characteristics of targets in SAR imaging results, the aspect ratio and the mean pixel value are employed as the two key features for refining the coarse classification results.
Firstly, Otsu’s thresholding is applied to the target slice to remove background interference and extract the target region data. This step effectively separates the target from the background in the SAR image. Subsequently, statistical analysis is performed on the pixel values of the target to obtain the mean pixel value.
Ship targets are often oriented at arbitrary angles in the image. When a ship target is not aligned horizontally or vertically, a traditional horizontal bounding box fails to fit the target closely, resulting in an inaccurate aspect ratio. Therefore, a minimum bounding rectangle is employed to delineate the target, ensuring a better fit to the target contour and improving the accuracy of the calculated aspect ratio.
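The effect of an oriented box on the aspect ratio can be sketched without image libraries. OpenCV's `cv2.minAreaRect` is the usual tool for a true minimum bounding rectangle; the NumPy-only version below uses a PCA-aligned box, which coincides with it for an elongated rectangular mask and is offered here purely as an illustrative approximation.

```python
import numpy as np

# Sketch of an oriented-box aspect ratio for a rotated target mask, using a
# PCA-aligned bounding box as a NumPy-only approximation of a minimum-area
# rectangle (cv2.minAreaRect would be the standard choice in practice).

def oriented_aspect_ratio(mask):
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Principal axes of the target pixel cloud
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt.T  # coordinates in the principal-axis frame
    ext = proj.max(axis=0) - proj.min(axis=0)
    return max(ext) / min(ext)

# Rasterize a rectangle (length 40, width 10) rotated by 30 degrees
H = W = 128
yy, xx = np.mgrid[0:H, 0:W]
cx = cy = 64
th = np.deg2rad(30)
u = (xx - cx) * np.cos(th) + (yy - cy) * np.sin(th)
v = -(xx - cx) * np.sin(th) + (yy - cy) * np.cos(th)
mask = (np.abs(u) < 20) & (np.abs(v) < 5)

ar = oriented_aspect_ratio(mask)  # close to the true ratio 40/10 = 4
```

An axis-aligned box on the same rotated mask would report a ratio much closer to 1, which is exactly the bias the minimum bounding rectangle avoids.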
The above two features are then quantified by statistically analyzing a portion of the training data during the training process. The analysis results are used to correct biases in the training results, thereby guiding the network’s training outcomes to converge toward the target standard features. This enables feature-guided coarse classification of ship targets.
The number of sample categories in the training set is three. Let $Z$ be the number of samples per category. For the $i$-th image sample of the $j$-th category, the pixel mean value is $PMV_{j,i}$ and the aspect ratio is $AR_{j,i}$. The mean pixel value and mean aspect ratio for the $j$-th category are
$$\overline{PMV_j}=\frac{1}{Z}\sum_{i=1}^{Z} PMV_{j,i}$$
$$\overline{AR_j}=\frac{1}{Z}\sum_{i=1}^{Z} AR_{j,i}$$
Based on the known prior statistical information of the targets, the mean pixel values and mean aspect ratios of the different target categories can be obtained. Let $PMV_{test}$ and $AR_{test}$ be the pixel mean value and aspect ratio of a test sample, respectively. To improve precision, the confidence of the prediction results can be adjusted; the correction coefficient for the sample is
$$Cor_{mean}=\left[cor_{mean}^{1},\, cor_{mean}^{2},\, cor_{mean}^{3}\right]$$
where the component $cor_{mean}^{j}$ for the $j$-th category is given by
$$cor_{mean}^{j}=\frac{1}{\left(\overline{PMV_j}-PMV_{test}\right)\left(\overline{AR_j}-AR_{test}\right)}$$
The output probabilities of the neural network classifier are adjusted using the sample correction coefficient, thereby incorporating the domain-specific characteristics of the training samples into the learning process. The correction formula is given by
$$P'=P+Cor_{mean}$$
in which P’ represents the corrected output probability of the classifier, while P denotes the original output probability before correction. The coarse classification network diagram is shown in Figure 8.
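The correction step can be sketched numerically. Note two deliberate deviations from the formulas above, made so the toy remains well-behaved: absolute differences are used (so the coefficient stays positive regardless of the sign of the feature gap) and the coefficient vector is normalized before being added to the probabilities. All class statistics and network outputs are assumed illustrative numbers.

```python
import numpy as np

# Sketch of the feature-guided correction of the coarse classifier output.
# Deviations from the text, for numerical robustness of the toy: absolute
# differences and a normalization of the correction vector. All statistics
# are assumed illustrative values.

PMV_mean = np.array([0.62, 0.55, 0.48])  # per-class mean pixel value (assumed)
AR_mean = np.array([7.8, 8.6, 5.9])      # per-class mean aspect ratio (assumed)

def correction(pmv_test, ar_test, eps=1e-6):
    """cor_j ~ 1 / (|PMV_j - PMV_test| * |AR_j - AR_test|), normalized."""
    d = np.abs(PMV_mean - pmv_test) * np.abs(AR_mean - ar_test)
    cor = 1.0 / np.maximum(d, eps)  # guard against division by zero
    return cor / cor.sum()          # normalize so it acts as a soft prior

p_net = np.array([0.40, 0.35, 0.25])  # raw classifier probabilities (assumed)
cor = correction(pmv_test=0.56, ar_test=8.5)
p_corrected = p_net + cor             # P' = P + Cor_mean
pred = int(np.argmax(p_corrected))
```

Here the test sample's features sit closest to the second class's statistics, so the correction overrides the network's slight preference for the first class.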

4. Fine-Level Focusing and Classification of MEO SAR Ship Targets

Based on the first-level classification results, target local region slices are extracted according to the spatial distribution of typical features. Target local region slices are then directed to corresponding second-level focusing-classification networks for fine classification. The typical feature locations for different targets are demonstrated in Figure 9.
The extracted local region slice is first processed by the fine-refocusing algorithm to further improve imaging quality. Then the refocused slice is input to the fine-classification network. If the first-level result is a civilian ship, it enters a dedicated civilian ship classification network for further distinction among bulk carriers, container ships, and tankers. If the first-level result indicates other military ships, it proceeds to a specialized network for further classification between destroyers and cruisers. If the first-level result is an aircraft carrier, it enters an aircraft carrier confirmation network for final verification.

4.1. Ship Target Fine-Focusing Based on Local Region Slice

The fine-refocusing process compensates for the defocusing caused by wave-induced ship rotations. The phase errors induced by target rotation exhibit space-variant characteristics. Although the motions of individual scattering points may differ, it is reasonable to assume that scattering points within a small localized region of the target share similar motion errors. Accordingly, the coarsely focused SAR image can be divided into multiple sub-blocks along the range direction based on the structural characteristics of the target. For each sub-image block, high-order phase error estimation and compensation are performed to achieve local fine-refocusing. The finely focused sub-image slices are then fed into the subsequent fine-classification network.
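The range-direction sub-block division is straightforward to sketch; the block width is an assumed parameter here (the text ties it to the target's structural characteristics rather than a fixed size).

```python
import numpy as np

# Minimal sketch of dividing a coarsely focused ship image into sub-blocks
# along the range axis, under the stated assumption that scatterers within a
# small local region share similar motion errors. The block width is an
# assumed illustrative parameter.

def split_range_blocks(img, block_rg=32):
    """Split an (azimuth x range) image into sub-blocks along range."""
    n_rg = img.shape[1]
    return [img[:, s:s + block_rg] for s in range(0, n_rg, block_rg)]

img = np.zeros((128, 100), dtype=complex)
blocks = split_range_blocks(img, block_rg=32)
# 100 range cells -> block widths 32, 32, 32, 4
```

Each block would then receive its own phase-error estimate, which is how the space-variant errors are handled without a single global model.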
To estimate phase errors in arbitrary forms without parametric modeling, the phase errors in every frequency bin are directly treated as independent variables, while the image entropy of sub-blocks serves as the objective function for the optimization model in the fine refocusing. The optimization model can be constructed as
$$\Delta\hat{\varphi}=\arg\min_{\Delta\varphi}\, IE\!\left\{\left|\sum_{k=0}^{L-1} G(k,n)\, e^{\,j\left(\frac{2\pi k l}{L}+\Delta\varphi_k\right)}\right|\right\}$$
in which
$$\Delta\varphi=\left[\Delta\varphi_1\ \cdots\ \Delta\varphi_k\ \cdots\ \Delta\varphi_L\right]^{T}$$
is the azimuth frequency phase error vector to be estimated. Compared to the coarse-focusing process, the fine-focusing procedure requires solving for an L-dimensional parameter space instead of a single dimension, leading to a dramatic increase in computational load for parameter estimation. The high-dimensional optimization problem is solved using the quasi-Newton iteration method. The gradient of the objective function is given by the following equation:
$$\frac{\partial IE}{\partial\Delta\varphi_k}=-\frac{2}{Q}\sum_{l=0}^{L-1}\sum_{n=0}^{N-1}\mathrm{Im}\!\left\{G^{*}(k,n)\, e^{-j\left(\frac{2\pi k l}{L}+\Delta\varphi_k\right)}\, I(l,n)\right\}\left(\ln\left|I(l,n)\right|^{2}+1\right)$$
Frequency-domain phase errors are then estimated via iterative calculation. The DFP (Davidon–Fletcher–Powell) algorithm is utilized to find the optimal solution [34,35]. The DFP algorithm does not require computing the second-order derivatives of the objective function, exhibits superlinear convergence, and is widely applied to large-scale optimization problems. Let $\Delta\varphi_0$ be the initial zero vector and $\Delta\varphi_p$ the vector at the $p$-th iteration, and let $D_0$ be the identity matrix of size $L\times L$. The search direction of the $p$-th iteration is
$$d_p=-D_p J_p$$
in which
$$J_p=\left.\frac{\partial IE}{\partial\Delta\varphi}\right|_{\Delta\varphi=\Delta\varphi_p}$$
The $(p+1)$-th iterate is
$$\Delta\varphi_{p+1}=\Delta\varphi_p+\lambda_p d_p$$
in which $\lambda_p$ is the search step, estimated by minimizing the current objective function $IE\!\left(\Delta\varphi_p+\lambda_p d_p\right)$. The inverse Hessian approximation $D_p$ is updated via [34]
$$D_{p+1}=D_p+\frac{s_p s_p^{T}}{s_p^{T} y_p}-\frac{D_p y_p y_p^{T} D_p}{y_p^{T} D_p y_p}$$
where $s_p=\lambda_p d_p$ and $y_p=J_{p+1}-J_p$. The iteration terminates when $\left\|J_{p+1}\right\|\le\varepsilon$. The final phase error vector is used to obtain the fine-refocused SAR image slice, which is then input to the second-level classification network.
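The DFP recursion above ($d_p$, $s_p$, $y_p$, and the $D_p$ update) can be exercised in isolation on a small convex quadratic, which stands in for the image-entropy objective; on a quadratic an exact line search is available in closed form, so every quantity can be checked directly. The test problem and its size are assumed for illustration.

```python
import numpy as np

# Stand-alone illustration of the DFP quasi-Newton iteration used for the
# phase-error vector, applied here to a small convex quadratic test objective
# 0.5 x^T A x - b^T x (a stand-in for the image entropy). The problem is an
# assumed example.

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)  # symmetric positive definite Hessian
b = rng.standard_normal(5)

grad = lambda x: A @ x - b   # gradient J of the test objective

x = np.zeros(5)              # initial "phase vector": zeros, as in the text
D = np.eye(5)                # D_0: identity inverse-Hessian estimate
J = grad(x)
for _ in range(50):
    if np.linalg.norm(J) < 1e-9:       # termination ||J|| <= eps
        break
    d = -D @ J                         # search direction d_p = -D_p J_p
    lam = -(J @ d) / (d @ A @ d)       # exact line search (quadratic case)
    s = lam * d                        # s_p
    x = x + s
    J_new = grad(x)
    y = J_new - J                      # y_p
    D = D + np.outer(s, s) / (s @ y) \
        - (D @ np.outer(y, y) @ D) / (y @ D @ y)  # DFP update of D_p
    J = J_new

x_star = np.linalg.solve(A, b)  # exact minimizer for comparison
```

With exact line searches on a quadratic, DFP generates conjugate directions and converges in at most $n$ iterations, which makes this a convenient correctness check before applying the same update to the high-dimensional entropy objective.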

4.2. Ship Target Fine Classification Based on Local Features

The second-level classification network is designed to achieve fine target classification through local structural features. The input data consist of fine-refocused local target slices. Under MEO imaging conditions, these slices typically range from tens to hundreds of pixels in size, which does not impose significant computational burdens. Furthermore, to meet integrated processing efficiency requirements, a lightweight deep learning model based on MobileNet-v3 [36,37] is implemented to achieve fine classification of ship targets. MobileNet-v3 employs depthwise separable convolutions combined with residual connections to achieve efficient inference. The detailed architecture is illustrated in Figure 10.
MobileNet-v3 employs a hybrid neural architecture search (NAS) approach combining block-wise and layer-wise search strategies: the macro-architecture is first established through block-wise search, and the micro-architecture is then refined via layer-wise search, achieving complementary global-local optimization of inference efficiency. To enhance classification robustness for localized target features, the model is trained with multi-scale target slices covering the full target region and 1/2, 1/3, and 1/4 sub-regions. Figure 11 illustrates the extraction of local region slices for a bulk carrier.
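The multi-scale slice extraction can be sketched as centered crops at the four fractions named above; the centering policy is an assumption for illustration (the text extracts slices around category-specific feature locations, per Figure 9).

```python
import numpy as np

# Sketch of multi-scale training-slice extraction: centered crops covering
# the full target region and 1/2, 1/3, and 1/4 of it. Centered cropping is
# an assumed policy for this illustration.

def multiscale_slices(img, fractions=(1.0, 1 / 2, 1 / 3, 1 / 4)):
    h, w = img.shape[:2]
    out = []
    for f in fractions:
        # round() avoids floating-point truncation (e.g. 96 * (1/3) -> 32)
        ch, cw = max(1, round(h * f)), max(1, round(w * f))
        top, left = (h - ch) // 2, (w - cw) // 2
        out.append(img[top:top + ch, left:left + cw])
    return out

img = np.arange(96 * 96).reshape(96, 96)
slices = multiscale_slices(img)
shapes = [s.shape for s in slices]  # (96,96), (48,48), (32,32), (24,24)
```

Training on these nested crops exposes the network to partially visible targets, which is the stated goal of the multi-scale augmentation.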
After the network is constructed, three groups of fine-classification network weight parameters corresponding to the three coarse-classification results are trained. For aircraft carriers, the fine-classification network performs aircraft carrier confirmation. For other military ships, the network accomplishes further classification of cruisers and destroyers. For other civilian ships, the network achieves further classification of tankers, bulk carriers, and container ships.
Figure 12 shows the whole flowchart of the proposed multi-level focusing-classification method. After the ship SAR data in the range-compressed domain are obtained, Doppler parameter estimation is conducted. Then, RCMC and azimuth focusing are performed using the estimated Doppler centroid frequency and Doppler FM rate. Next, the coarsely-focused SAR image is input to the coarse-classification network to complete coarse classification. Based on the result, the image local region slice is extracted, and fine focusing is conducted via space-variant high-order phase error compensation. Finally, the fine-focused image slice is input to the fine classification network to achieve the final fine classification.
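The overall flow can be summarized in a runnable skeleton. Every signal-processing step below is a placeholder stub and every name is our own; only the control flow (coarse focusing, coarse classification, region selection, fine focusing, fine classification) mirrors the method.

```python
def coarse_focus(rc_data):
    """Doppler parameter estimation, RCMC, and azimuth focusing (stub)."""
    return rc_data

def coarse_classify(image):
    """First-level network: returns one of three coarse categories (stub)."""
    return image["coarse_truth"]

# local region chosen from the coarse result (bow for military, midship for civilian)
SLICE_REGION = {
    "aircraft_carrier": "bow",
    "other_military": "bow",
    "civilian": "midship",
}

def fine_focus(image, region):
    """Space-variant high-order phase error compensation on the slice (stub)."""
    return {"region": region, "data": image}

def recognize(rc_data, fine_nets):
    image = coarse_focus(rc_data)
    coarse = coarse_classify(image)
    slice_ = fine_focus(image, SLICE_REGION[coarse])
    return fine_nets[coarse](slice_)        # second-level fine classification

fine_nets = {
    "aircraft_carrier": lambda s: "aircraft_carrier",
    "other_military":   lambda s: "destroyer",   # cruiser/destroyer in the paper
    "civilian":         lambda s: "tanker",      # tanker/bulk/container in the paper
}
recognize({"coarse_truth": "civilian"}, fine_nets)   # -> "tanker"
```

The key design point visible here is that only the selected local slice, not the full image, reaches the expensive fine-focusing step.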

5. Dataset Construction and Experimental Results

In this section, datasets are constructed for network training and testing, including equivalent MEO SAR ship data and simulated MEO SAR ship data. The experimental results for coarse focusing-classification and fine focusing-classification are given to show the effectiveness of the proposed method.

5.1. Dataset Construction

Considering the limited availability of real MEO SAR data, LEO SAR ship target data with equivalent resolution are employed to build the dataset for model training and testing. The MEO SAR ship rotational motion errors are added to the LEO SAR data, generating an equivalent coarse-focused MEO SAR ship dataset for the first-level classification network. Local region slice extraction is performed on these real data to build the fine-focused local region slice dataset for the second-level classification network.
Phase errors induced by ship rotations, as analyzed in Section 2.3, can be employed to construct the dataset for the coarse-classification network. Let L and N be the numbers of pixels along the azimuth and range directions of the real SLC SAR image of a ship target, and transform the image into the azimuth frequency domain. Let f_s be the range sample rate. The slant range displacement between two adjacent range pixels is c/(2f_s), and the azimuth frequency interval between two adjacent azimuth pixels is PRF/L. The phase errors to be added in the discrete range-Doppler domain can be expressed as
ϑ_d(n, l) = exp{ j · [4π (n − N/2) (c / (2 f_s)) A cos γ] / (λ sin γ) · sin( (2π/T) · (1/K_a) · (l − L/2) · (PRF/L) + ϕ ) },  n = 0, 1, …, N − 1;  l = 0, 1, …, L − 1
By substituting the MEO SAR system parameters and the ship roll parameters into the above equation, we obtain the equivalent phase errors for the MEO SAR ship image. Figure 13 presents the equivalent MEO SAR coarse-focused ship image, which is generated by introducing these phase errors into the real LEO SAR image in Figure 14.
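The injection of these phase errors in the range-Doppler domain might be sketched as follows. The function name, parameter names, and the illustrative parameter values are our assumptions; the sinusoidal phase term follows the roll-error model given above.

```python
import numpy as np

def add_roll_phase_error(slc, fs, prf, lam, Ka, A, T, gamma, phi):
    """Multiply an azimuth-FFT'd SLC ship chip by the equivalent
    roll-induced phase error, then transform back.

    slc: complex image, azimuth along axis 0 (L pixels), range along axis 1 (N)."""
    c = 299792458.0
    L, N = slc.shape
    n = np.arange(N) - N / 2                    # range pixel offsets
    f_az = (np.arange(L) - L / 2) * prf / L     # azimuth frequency axis
    t = f_az / Ka                               # azimuth frequency -> slow time
    # space-variant sinusoidal phase error with range-dependent amplitude
    amp = 4 * np.pi * n * (c / (2 * fs)) * A * np.cos(gamma) / (lam * np.sin(gamma))
    phase = np.outer(np.sin(2 * np.pi * t / T + phi), amp)      # (L, N)
    spec = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)     # to range-Doppler
    spec = spec * np.exp(1j * phase)
    return np.fft.ifft(np.fft.ifftshift(spec, axes=0), axis=0)  # back to image

# the unit-modulus multiplier defocuses the chip without changing its energy
rng = np.random.default_rng(0)
chip = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
out = add_roll_phase_error(chip, fs=300e6, prf=2000.0, lam=0.03,
                           Ka=50.0, A=np.deg2rad(2), T=10.0,
                           gamma=np.deg2rad(15), phi=np.deg2rad(45))
```

Because the multiplier has unit modulus, the operation only redistributes energy along azimuth, which is exactly the defocusing effect being emulated.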
Simulated MEO SAR ship data are also generated to further evaluate the generalization performance of the proposed multi-level focusing-classification strategy. Using real ship SAR images as the intensity values of the target scattering points, MEO SAR parameters are employed to generate simulated echo signals. Typical ship target motions are also injected during the echo simulation process. Range compression is performed on the simulated echo data, and the range-compressed results are then utilized for coarse focusing and coarse classification. Subsequently, based on the coarse classification results, ship targets' local region slices are extracted to validate the fine-refocusing method and the fine-classification network. The workflow for dataset construction and experimental design is illustrated in Figure 15.

5.2. Coarse Focusing-Classification Experiment

Real spaceborne SAR data from multiple satellites, including Gaofen-3 [38], HISEA-1, Umbra, and Capella, are utilized to construct a coarse classification dataset. The dataset of ship target SAR images is categorized into three classes: aircraft carriers, other military ships, and civilian ships. The dataset for coarse classification is split into training, validation, and testing sets at a ratio of 0.7:0.2:0.1, as shown in Table 4.
The coarse classification network is designed based on the improved ResNet-101 architecture. The training process consists of 200 epochs: 50 epochs with the backbone network frozen, followed by 150 epochs of standard fine-tuning. The stochastic gradient descent (SGD) optimizer is employed with a non-linear decaying learning rate. This strategy uses higher learning rates initially to accelerate model convergence, then lower learning rates in later stages to smooth the fitting process and achieve optimal learning performance. The learning rate ranges over [0.0001, 0.01]. The loss curve during the training process is illustrated in Figure 16.
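The text specifies only the learning-rate bounds, not the exact schedule; a cosine decay between the stated bounds is one plausible realization of such a non-linear descent, sketched here purely for illustration.

```python
import math

def lr_at_epoch(epoch, total_epochs=200, lr_max=1e-2, lr_min=1e-4):
    """Cosine decay from lr_max down to lr_min over the training run:
    fast steps early for convergence, small steps late for smooth fitting."""
    cos = 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
    return lr_min + (lr_max - lr_min) * cos

lrs = [lr_at_epoch(e) for e in range(201)]   # lrs[0] == 0.01, lrs[200] == 0.0001
```

Any monotone non-linear decay over [0.0001, 0.01] would serve the same purpose; the cosine form is merely a common choice.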
While the backbone network is frozen, the training and validation losses decrease smoothly, indicating that the generalization performance of the model is preserved to some degree. After the backbone network is unfrozen, both the training and validation losses converge to a relatively flat range.
The optimal weights retained by the network are used to evaluate the testing set. The recall and precision rates are shown in Figure 17.
The statistical results show that the average recall rate of the coarse-classification network reaches 87.5%, and the average precision reaches 88.2%. The aircraft carrier category has the lowest recall rate (77%), while the other military ship category has the lowest precision rate (80%). These results indicate that the proposed coarse-classification network can effectively distinguish the three types of targets, without overfitting to the validation set.
Simulated MEO SAR ship data for these three categories are employed to evaluate the generalization of the first-level model. In total, 10 aircraft carriers, 20 other military ships, and 30 civilian ships are selected for the classification experiments. Figure 18 shows a simulated range-compressed SAR image of a destroyer. The coarse-focusing method in Section 3.1 is applied to the range-compressed data. Figure 19a,c present two SAR images processed by a traditional imaging algorithm without motion error compensation, and Figure 19b,d show the corresponding SAR images after coarse focusing. The imaging quality improves to some degree after coarse focusing: the outline of the ship is roughly restored, and although fine details remain unclear, the size and shape features are sufficient to support coarse classification.
The well-trained coarse-classification network is directly used to classify the coarsely focused simulated MEO SAR data. The precision of the coarse-classification network is shown in Table 5. The experimental results show that the proposed first-level focusing-classification network can effectively classify the three types of ship targets.

5.3. Fine Focusing-Classification Experiment

Real spaceborne SAR data from Gaofen-3, HISEA-1, Umbra, and Capella are utilized to construct the dataset for ship fine classification. The local region slices for the ship target are extracted as data samples. The target categories include aircraft carriers, destroyers, cruisers, bulk carriers, tankers, and container ships, as shown in Figure 20.
The experimental dataset for fine classification is split into training, validation, and testing sets at a ratio of 0.7:0.2:0.1, as shown in Table 6. The fine-classification network is designed based on the MobileNet-v3 architecture. To prevent the overfitting that may arise from training on small sample sizes for excessive epochs, training is limited to 100 epochs and backbone network freezing is omitted. An SGD optimizer is likewise employed to generate a non-linear descent learning rate over the range [0.0001, 0.01]. The loss curves during the training process are shown in Figure 21; all three fine-classification networks converge to relatively flat regions on both the training and validation sets after a comparatively small number of training epochs.
To enhance the model’s adaptability, multi-scale target slices, including full, 1/2, 1/3, and 1/4 of target regions, are utilized during the network training phase. During the network inference phase, it is necessary to select a suitable local slice extraction parameter. Among the three fine-classification scenarios, the aircraft carrier confirmation network is a binary classifier, the military ship classification network is a three-class classifier, and the civilian ship classification network is a four-class classifier. Therefore, taking the most complex civilian ship classification as an example, we evaluate the fine-classification network performance under different region extraction sizes (1/2, 1/3, and 1/4), as shown in Figure 22, Figure 23 and Figure 24.
To ensure rationality, classification testing is conducted on the same batch of testing sets with different extraction sizes, and the weights and corresponding parameters of the classification model remain unchanged. The classification precision of different slice sizes is shown in Table 7.
The experimental results show that the precision and recall rates of fine classification for civilian ships drop markedly at a 1/4 extraction ratio. This is because the slice cannot preserve the complete target structure at this size and lacks sufficient local features for fine classification.
Figure 25 shows a tanker SAR image slice at a 1/4 extraction ratio. The structural features are destroyed, so the slice cannot meet the requirements of the classification task. Therefore, this extraction ratio is unsuitable for practical target classification.
High classification performance can be achieved under both the 1/2 and 1/3 extraction ratios. Performance is best at the 1/2 extraction ratio, followed by the 1/3 ratio. Under both conditions, the precision and recall rates of the various targets remain stable above 90%, which is sufficient for practical classification tasks.
However, fine focusing incurs a greater computational load at the 1/2 extraction ratio, increasing the computational burden of the whole process. Therefore, balancing classification accuracy against computational efficiency, the 1/2 extraction ratio can be prioritized when sufficient computing resources or time are available. With limited computing resources, a 1/3 extraction ratio can still basically meet the requirements of the classification task, with slightly reduced performance.
In order to better demonstrate the efficiency of the proposed method, in the subsequent experiments, a 1/3 extraction ratio is used in the second-level focusing-classification processing. The recall and precision rates of the three testing sets under the optimal weights are shown in Figure 26 and Figure 27.
The statistical results show that the proposed three fine-classification networks can effectively achieve target recognition based on the local region slices, and there is no overfitting to the validation set.
Simulated MEO SAR ship data are employed to evaluate the generalization of the proposed multi-level recognition strategy. Based on the coarse classification results, local region slices are extracted from the coarsely focused images. Then fine focusing is performed, and the well-trained fine-classification network is used to make the final classification. Figure 28 shows the multi-level focusing-classification results for simulated MEO SAR data corresponding to six types of ships. After coarse classification, the targets are divided into aircraft carriers, other military ships, and civilian ships. For aircraft carriers and other military ships, image slices of the bow are extracted. For civilian ships, image slices of the midship area are extracted. After fine focusing, the quality of these image slices is improved with enhanced structural details. Fine classification is then performed to achieve final recognition of aircraft carriers, destroyers, cruisers, bulk carriers, tankers, and container ships.
The precision of the whole multi-level refocusing-classification network is shown in Table 8. The experimental results show that the proposed network can effectively achieve classification of six types of ship targets from simulated MEO SAR data.
To further demonstrate the efficiency of the proposed method, its computational cost was compared with that of a traditional single-stage strategy of global fine focusing followed by fine classification. Both experiments were conducted on a computer (Dell, Round Rock, TX, USA) with an Intel Core i7-10750H CPU @ 2.6 GHz, 64 GB of RAM, and an NVIDIA RTX 3060 GPU with 12 GB of video memory. The time consumption of the two strategies at each step is shown in Table 9. Fine focusing accounts for the main part of the overall computational cost, and the cost of the single-stage strategy is much higher than that of the proposed two-level strategy. Under MEO SAR imaging conditions, defocusing caused by ship movements is severe due to the long synthetic aperture time. Coarse focusing can quickly estimate and compensate for the errors introduced by the ship sailing velocity; however, the fine-grained features of the target still cannot be accurately restored, making it difficult to directly achieve the expected target classification. On the other hand, performing fine focusing directly on all data incurs an enormous computational load, which would burden the entire process. Therefore, to reduce computational complexity, only local regions exhibiting typical target features are selected for fine focusing, and the coarse classification results from the previous stage help locate the local areas that require fine processing. The experimental and analytical results demonstrate that the proposed method effectively improves the classification efficiency of MEO SAR ship data compared with the conventional method.

6. Conclusions

A multi-level focusing-classification strategy is proposed for MEO SAR ship target recognition in this paper. The complex ship focusing process is decomposed into rapid global coarse focusing and local fine focusing. Two classification networks are designed for each focusing stage. Specifically, coarse ship classification is first performed after global coarse focusing, followed by target local region slice selection based on the coarse classification results. The image slices are utilized for the next fine focusing step, and ultimately for fine classification. Local region extraction significantly reduces the amount of data requiring fine focusing, substantially improving processing efficiency. The equivalent MEO SAR ship dataset is constructed by adding MEO ship motion errors to real LEO SAR ship images. The simulated MEO SAR ship data are also generated by echo simulation. The experimental results validate the effectiveness of the proposed method.

Author Contributions

Conceptualization, Z.L. and W.Y.; methodology, Z.L., W.Y., C.S. and J.G.; software, Z.L.; validation, J.G. and C.S.; formal analysis, H.Z.; investigation, Y.W.; resources, H.X.; data curation, Z.L. and H.X.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and J.G.; visualization, C.S. and J.G.; supervision, W.Y. and H.Z.; project administration, W.Y. and Y.W.; funding acquisition, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2022YFB3901601.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Matar, J.; Rodriguez-Cassola, M.; Krieger, G.; López-Dekker, P.; Moreira, A. MEO SAR: System Concepts and Analysis. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1313–1324. [Google Scholar] [CrossRef]
  2. Matar, J.; Rodriguez-Cassola, M.; Krieger, G.; Zonno, M.; Moreira, A. Potential of MEO SAR for Global Deformation Mapping. In Proceedings of the EUSAR 2021—13th European Conference on Synthetic Aperture Radar 2021, Leipzig, Germany, 29–31 March 2021; pp. 1–5. [Google Scholar]
  3. Dai, D.; Wu, H.; Wang, Y.; Ji, P. LHSDNet: A Lightweight and High-Accuracy SAR Ship Object Detection Algorithm. Remote Sens. 2024, 16, 4527. [Google Scholar] [CrossRef]
  4. Feng, Y.; You, Y.; Tian, J.; Meng, G. OEGR-DETR: A Novel Detection Transformer Based on Orientation Enhancement and Group Relations for SAR Object Detection. Remote Sens. 2024, 16, 106. [Google Scholar] [CrossRef]
  5. Ai, J.; Tian, R.; Luo, Q.; Jin, J.; Tang, B. Multi-scale rotation-invariant Haar-like feature integrated CNN-based ship detection algorithm of multiple-target environment in SAR imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10070–10087. [Google Scholar] [CrossRef]
  6. Shang, Y.; Pu, W.; Liao, D.; Yang, J.; Wu, C.; Huang, Y.; Zhang, Y.; Wu, J.; Yang, J.; Wu, J. SA2 Net: Ship Augmented Attention Network for Ship Recognition in SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 10036–10049. [Google Scholar] [CrossRef]
  7. Shang, Y.; Pu, W.; Wu, C.; Liao, D.; Xu, X.; Wang, C.; Huang, Y.; Zhang, Y.; Wu, J.; Yang, J.; et al. HDSS-Net: A novel hierarchically designed network with spherical space classifier for ship recognition in SAR images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–20. [Google Scholar] [CrossRef]
  8. He, J.; Wang, Y.; Liu, H. Ship classification in medium-resolution SAR images via densely connected triplet CNNs integrating Fisher discrimination regularized metric learning. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3022–3039. [Google Scholar] [CrossRef]
  9. Ma, M.; Chen, J.; Liu, W.; Yang, W. Ship classification and detection based on CNN using GF-3 SAR images. Remote Sens. 2018, 10, 2043. [Google Scholar] [CrossRef]
  10. Liu, W.; Sun, G.C.; Xing, M.; Li, H.; Bao, Z. Focusing of MEO SAR data based on principle of optimal imaging coordinate system. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5477–5489. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Ren, H.; Lu, Z.; Yang, X.; Li, G. Focusing of Highly Squinted Bistatic SAR with MEO Transmitter and High Maneuvering Platform Receiver in Curved Trajectory. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5227522. [Google Scholar] [CrossRef]
  12. Guo, J.; Yang, W.; Chen, J.; Zou, F. Refocusing of Moving Ship Targets in SAR Images with Long Synthetic Aperture Time. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2079–2082. [Google Scholar]
  13. Zhang, Y.; Li, L.; Liao, W.; Qi, X.; Ren, H. Three-Dimensional Reconstruction of Ship Target on MEO SAR/ISAR Hybrid Imaging. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; pp. 3344–3347. [Google Scholar]
  14. Wang, C.; Guo, B.; Song, J.; He, F.; Li, C. A novel CFAR-based ship detection method using range-compressed data for spaceborne SAR system. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5215515. [Google Scholar] [CrossRef]
  15. Tan, X.; Leng, X.; Ji, K.; Kuang, G. RCShip: A dataset dedicated to ship detection in range-compressed SAR data. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4004805. [Google Scholar] [CrossRef]
  16. Tan, X.; Leng, X.; Luo, R.; Sun, Z.; Ji, K.; Kuang, G. YOLO-RC: SAR ship detection guided by characteristics of range-compressed domain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 18834–18851. [Google Scholar] [CrossRef]
  17. Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Phase gradient autofocus—A robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835. [Google Scholar] [CrossRef]
  18. Seo, K.; Kwon, Y.; Kim, C.K. An Enhanced Phase Gradient Autofocus Algorithm for SAR: A Fractional Fourier Transform Approach. Remote Sens. 2025, 17, 1216. [Google Scholar] [CrossRef]
  19. Zhu, D.; Jiang, R.; Mao, X.; Zhu, Z. Multi-subaperture PGA for SAR autofocusing. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 468–488. [Google Scholar] [CrossRef]
  20. Martorella, M.; Berizzi, F.; Haywood, B. Contrast maximization based technique for 2-D ISAR autofocusing. IEE Proc.-Radar Sonar Navig. 2005, 152, 253–262. [Google Scholar] [CrossRef]
  21. Morrison, R.L.; Do, M.N.; Munson, D.C. SAR image autofocus by sharpness optimization: A theoretical study. IEEE Trans. Image Process. 2007, 16, 2309–2321. [Google Scholar] [CrossRef]
  22. Doerry, A.W. Ship Dynamics for Maritime ISAR Imaging; Sandia National Laboratories (SNL): Albuquerque, NM, USA; Livermore, CA, USA, 2008. [Google Scholar]
  23. Zhou, B.; Qi, X.; Zhang, J.; Zhang, H. Effect of 6-DOF Oscillation of Ship Target on SAR Imaging. Remote Sens. 2021, 13, 1821. [Google Scholar] [CrossRef]
  24. Liu, W.; Sun, G.C.; Xia, X.G.; Fu, J.; Xing, M.; Bao, Z. Focusing challenges of ships with oscillatory motions and long coherent processing interval. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6562–6572. [Google Scholar] [CrossRef]
  25. Sommer, A.; Ostermann, J. Backprojection subimage autofocus of moving ships for synthetic aperture radar. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8383–8393. [Google Scholar] [CrossRef]
  26. Yu, J.; Yu, Z.; Li, C. GEO SAR imaging of maneuvering ships based on time–frequency features extraction. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–21. [Google Scholar] [CrossRef]
  27. Xu, X.; Leng, X.; Sun, Z.; Tan, X.; Ji, K.; Kuang, G. Adaptive Fast Refocusing for Ship Targets with Complex Motion in SAR Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 8559–8572. [Google Scholar] [CrossRef]
  28. Guo, J.; Yang, W.; Qi, W.; Deng, J.; Zeng, H.; Chen, J.; Wang, Y.; Ji, W.; Wang, W. Refocusing of Rotating Ships in Spaceborne SAR Imagery based on NLCS Principle. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5207015. [Google Scholar] [CrossRef]
  29. Cumming, I.G.; Wong, F. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Norwood, MA, USA, 2005. [Google Scholar]
  30. Bamler, R. A comparison of range-Doppler and wavenumber domain SAR focusing algorithms. IEEE Trans. Geosci. Remote Sens. 1992, 30, 706–713. [Google Scholar] [CrossRef]
  31. Li, J.; Chen, J.; Cheng, P.; Yu, Z.; Yu, L.; Chi, C. A survey on deep-learning-based real-time SAR ship detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3218–3247. [Google Scholar] [CrossRef]
  32. Qiao, M.; Liu, Y.; Xu, M.; Deng, X.; Li, B.; Hu, W.; Borji, A. Joint learning of audio–visual saliency prediction and sound source localization on multi-face videos. Int. J. Comput. Vis. 2024, 132, 2003–2025. [Google Scholar] [CrossRef]
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the CVPR, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  34. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  35. Shanno, D.F. Conditioning of quasi-Newton methods for function minimization. Math. Comput. 1970, 24, 647–656. [Google Scholar] [CrossRef]
  36. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  37. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  38. Sun, J.; Yu, W.; Deng, Y. The SAR Payload Design and Performance for the GF-3 Mission. Sensors 2017, 17, 2419. [Google Scholar] [CrossRef]
Figure 1. Diagram of the proposed MEO SAR ship target recognition strategy.
Figure 2. Maximum quadratic phase error caused by target azimuth velocity for LEO SAR and MEO SAR.
Figure 3. Imaging results of a point target for LEO SAR and MEO SAR with different azimuth velocities: (a) 0 m/s; (b) 5 m/s; (c) 10 m/s; (d) 15 m/s.
Figure 4. Imaging results of a point target for LEO SAR and MEO SAR with a range velocity of 10 m/s: (a) LEO SAR; (b) MEO SAR.
Figure 5. Three-dimensional rotations of a ship target.
Figure 6. Motion error caused by ship rotation.
Figure 7. Imaging results of rotating point targets: (a) MEO SAR; (b) LEO SAR.
Figure 8. Coarse classification network diagram.
Figure 9. Typical features of and key part distribution for different ship targets.
Figure 10. MobileNet-v3 architecture.
Figure 11. Local region slices for a bulk carrier: (a) 1/4 local region slice; (b) 1/3 local region slice; (c) 1/2 local region slice; (d) full region slice.
Figure 12. Flowchart of the proposed multi-level focusing-classification method.
Figure 13. Equivalent MEO SAR coarsely focused images of ships.
Figure 14. Real LEO SAR images of ships.
Figure 15. Dataset construction and experimental design.
Figure 16. Loss curve of the coarse classification network.
Figure 17. Evaluation results of the testing set for the coarse classification network: (a) recall rate; (b) precision rate.
Figure 18. Simulated MEO SAR data of a ship target in range-compressed domain.
Figure 19. Simulated MEO SAR images with and without coarse focusing: (a) SAR image of a destroyer processed by a traditional image-focusing algorithm without motion error compensation; (b) SAR image of a destroyer with coarse focusing; (c) SAR image of a bulk carrier processed by a traditional image-focusing algorithm without motion error compensation; (d) SAR image of a bulk carrier with coarse focusing.
Figure 20. Examples of fine classification experimental data: (a) aircraft carrier; (b) destroyer; (c) cruiser; (d) bulk carrier; (e) tankers; (f) container ship.
Figure 21. Loss curves of the fine-classification networks: (a) aircraft carrier confirmation network loss; (b) military ship fine-classification network loss; (c) civilian ship fine-classification network loss.
Figure 22. Extracted image slices of a container ship: (a) 1/2 extraction; (b) 1/3 extraction; (c) 1/4 extraction.
Figure 23. Extracted image slices of a bulk carrier: (a) 1/2 extraction; (b) 1/3 extraction; (c) 1/4 extraction.
Figure 24. Extracted image slices of a tanker: (a) 1/2 extraction; (b) 1/3 extraction; (c) 1/4 extraction.
Figure 25. Local region image slice of a tanker. After extraction, obvious structural features are missing.
Figure 26. Evaluation results of the recall rate for the fine-classification network: (a) aircraft carrier confirmation network; (b) military ship fine-classification network; (c) civilian ship fine-classification network.
Figure 27. Evaluation results of the precision rate for the fine-classification network: (a) aircraft carrier confirmation network; (b) military ship fine-classification network; (c) civilian ship fine-classification network.
Figure 28. Multi-level focusing-classification results for simulated MEO SAR data.
Table 1. Simulation parameters of MEO SAR and LEO SAR.
Parameters | MEO SAR | LEO SAR
Wavelength (m) | 0.03 | 0.03
Bandwidth (MHz) | 300 | 300
Orbit height (km) | 7500 | 600
Look angle (°) | 15 | 15
Table 2. Rotation parameters of point targets.

| Parameters | Roll | Pitch | Yaw |
|---|---|---|---|
| Amplitude (°) | 2 | 1 | 0.2 |
| Period (s) | 10 | 12 | 20 |
| Initial phase (°) | 45 | 45 | 45 |
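The three-axis rotation in Table 2 follows a simple sinusoidal model. As an illustrative sketch (the paper does not give the simulation code, so the function below is an assumption), each attitude angle can be generated as θ(t) = A·sin(2πt/T + φ₀):

```python
import math

def rotation_angle(t, amplitude_deg, period_s, init_phase_deg):
    """Sinusoidal rotation angle (degrees) of one axis at time t (seconds)."""
    return amplitude_deg * math.sin(
        2 * math.pi * t / period_s + math.radians(init_phase_deg)
    )

# Parameters from Table 2: (amplitude, period, initial phase) per axis
axes = {
    "roll":  (2.0, 10.0, 45.0),
    "pitch": (1.0, 12.0, 45.0),
    "yaw":   (0.2, 20.0, 45.0),
}
angles_t0 = {name: rotation_angle(0.0, *p) for name, p in axes.items()}
```

At t = 0 the roll angle is 2·sin(45°) ≈ 1.41°; such time-varying rotations are what introduce the high-order phase errors that the fine-focusing step must correct.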
Table 3. Coarse-classification network structure.

| Layer | ResNet101 | Output Size |
|---|---|---|
| Conv1 | 7 × 7, 64, stride 2 | 112 × 112 |
| Conv2_x | 3 × 3 max pool, stride 2; [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3 | 56 × 56 |
| Conv3_x | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4 | 28 × 28 |
| Conv4_x | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 23 | 14 × 14 |
| Conv5_x | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3 | 7 × 7 |
| | Average pool, 1000-d fc, softmax | 1 × 1 |
| FLOPs | 7.6 × 10⁹ | |
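The layer configuration in Table 3 is the standard ResNet101 backbone, and its numbers can be sanity-checked arithmetically: each of the five stages halves the spatial resolution (224 → 112 → … → 7 for a 224 × 224 input, an assumed input size), and the bottleneck block counts [3, 4, 23, 3] give the network its depth of 101 weighted layers:

```python
def stage_sizes(input_size=224):
    """Spatial size after each of the five stages; each stage halves resolution."""
    sizes, s = [], input_size
    for _ in range(5):  # conv1, conv2_x, conv3_x, conv4_x, conv5_x
        s //= 2
        sizes.append(s)
    return sizes

# Bottleneck blocks per stage (Table 3): 3 convolutions per block,
# plus conv1 and the final fully connected layer -> 101 layers.
blocks = {"conv2_x": 3, "conv3_x": 4, "conv4_x": 23, "conv5_x": 3}
depth = 3 * sum(blocks.values()) + 2
```

`stage_sizes()` reproduces the Output Size column (112, 56, 28, 14, 7), and `depth` evaluates to 101, matching the network's name.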
Table 4. Dataset for the coarse-classification network.

| Target Category | Training Set | Validation Set | Testing Set |
|---|---|---|---|
| Aircraft carriers | 190 | 54 | 27 |
| Other military ships | 269 | 77 | 39 |
| Civilian ships | 294 | 84 | 42 |
Table 5. Coarse-classification network evaluation results from the simulated data.

| Target Category | Number of Test Data Points | Number of Correct Data Points | Precision | Average Precision |
|---|---|---|---|---|
| Aircraft carriers | 10 | 8 | 80% | 88.3% |
| Other military ships | 20 | 17 | 85% | |
| Civilian ships | 30 | 28 | 93.3% | |
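The 88.3% average precision in Table 5 is the total number of correct classifications over the total number of test samples (a micro average), rather than the unweighted mean of the three per-class precisions. A quick check of the table's arithmetic:

```python
# Counts from Table 5
correct = {"aircraft carriers": 8, "other military ships": 17, "civilian ships": 28}
tested  = {"aircraft carriers": 10, "other military ships": 20, "civilian ships": 30}

# Per-class precision: 0.80, 0.85, 0.933...
per_class = {k: correct[k] / tested[k] for k in tested}

# Micro-averaged precision: 53 correct out of 60 tested
average_precision = sum(correct.values()) / sum(tested.values())
```

Here `average_precision` is 53/60 ≈ 0.883, matching the table; the unweighted mean of the per-class values would instead be about 86.1%.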
Table 6. Dataset for the fine-classification network.

| Target Category | Training Set | Validation Set | Testing Set |
|---|---|---|---|
| Aircraft carriers | 190 | 54 | 27 |
| Destroyers | 161 | 47 | 23 |
| Cruisers | 108 | 31 | 15 |
| Bulk carriers | 105 | 30 | 15 |
| Tankers | 82 | 26 | 13 |
| Container ships | 97 | 28 | 14 |
Table 7. Classification precision of different slice sizes.

| Extraction Ratio | Ship Category | Recall Rate | Precision Rate |
|---|---|---|---|
| 1/2 | Tankers | 98% | 95% |
| | Bulk carriers | 96% | 99% |
| | Container ships | 99% | 98% |
| | Average | 97.49% | 97.39% |
| 1/3 | Tankers | 90% | 92% |
| | Bulk carriers | 91% | 96% |
| | Container ships | 97% | 88% |
| | Average | 92.12% | 92.72% |
| 1/4 | Tankers | 54% | 19% |
| | Bulk carriers | 53% | 89% |
| | Container ships | 75% | 67% |
| | Average | 60.60% | 58.56% |
Table 8. Evaluation results from the simulated data.

| Target Category | Number of Test Data Points | Number of Correct Data Points | Precision | Average Precision |
|---|---|---|---|---|
| Aircraft carriers | 10 | 7 | 0.7 | 81.7% |
| Destroyers | 10 | 7 | 0.7 | |
| Cruisers | 10 | 7 | 0.7 | |
| Bulk carriers | 10 | 9 | 0.9 | |
| Tankers | 10 | 10 | 1.0 | |
| Container ships | 10 | 9 | 0.9 | |
Table 9. Time consumption of the two strategies at different steps.

| Method | Coarse Focusing (s) | Coarse Classification (s) | Fine Focusing (s) | Fine Classification (s) | Total (s) |
|---|---|---|---|---|---|
| Proposed two-stage strategy | 0.8 | 0.009 | 1.5 | 0.01 | 2.319 |
| Single-stage strategy | / | / | 11.1 | 0.01 | 11.11 |
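The totals in Table 9 are simply the sums of the per-step times, and they imply roughly a 4.8× speedup for the proposed strategy, which fine-focuses only local region slices instead of the full image. A sketch of that arithmetic:

```python
# Per-step times (seconds) from Table 9
two_stage = {
    "coarse_focusing": 0.8,
    "coarse_classification": 0.009,
    "fine_focusing": 1.5,       # local slices only
    "fine_classification": 0.01,
}
single_stage = {
    "fine_focusing": 11.1,      # full image
    "fine_classification": 0.01,
}

total_two_stage = sum(two_stage.values())        # 2.319 s
total_single_stage = sum(single_stage.values())  # 11.11 s
speedup = total_single_stage / total_two_stage   # ~4.8x
```

Nearly all of the single-stage cost sits in full-image fine focusing, which is exactly the step the multi-level strategy shrinks.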
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, Z.; Yang, W.; Su, C.; Zeng, H.; Wang, Y.; Guo, J.; Xu, H. Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy. Remote Sens. 2025, 17, 2599. https://doi.org/10.3390/rs17152599
