Article

Efficient Super-Resolution Method for Targets Observed by Satellite SAR

Korea Aerospace Research Institute, 169-84, Gwahak-ro, Daejeon 34133, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(13), 5893; https://doi.org/10.3390/s23135893
Submission received: 19 May 2023 / Revised: 21 June 2023 / Accepted: 21 June 2023 / Published: 25 June 2023
(This article belongs to the Section Sensing and Imaging)

Abstract:
This study presents an efficient super-resolution (SR) method for targets observed by satellite synthetic aperture radar (SAR). First, a small target image is extracted from a large-scale SAR image and undergoes proper preprocessing. The preprocessing step is adaptively designed depending on the types of movements of targets. Next, the principal scattering centers of targets are extracted using the compressive sensing technique. Subsequently, an impulse response function (IRF) of the satellite SAR system (IRF-S) is generated using a SAR image of a corner reflector located at the calibration site. Then, the spatial resolution of the IRF-S is improved by the spectral estimation technique. Finally, according to the SAR signal model, the super-resolved IRF-S is combined with the extracted scattering centers to generate a super-resolved target image. In our experiments, the SR capabilities for various targets were investigated using quantitative and qualitative analysis. Compared with conventional SAR SR methods, the proposed scheme exhibits greater robustness towards improvement of the spatial resolution of the target image when the degrees of SR are high. Additionally, the proposed scheme has faster computation time (CT) than other SR algorithms, irrespective of the degree of SR. The novelties of this study can be summarized as follows: (1) the practical design of an efficient SAR SR scheme that has robustness at a high SR degree; (2) the application of proper preprocessing considering the types of movements of targets (i.e., stationary, moderate motion, and complex motion) in SAR SR processing; (3) the effective evaluation of SAR SR capability using various metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), focus quality parameters, and CT, as well as qualitative analysis.

1. Introduction

Satellite synthetic aperture radar (SAR) has been the main instrument used to monitor specific targets because it can offer two-dimensional (2D) target scattering information at all times with all-weather imaging capability [1,2,3,4,5,6,7,8]. The scattering information of targets should be clearly recognizable by humans or machines to achieve reliable target monitoring performance because it reflects the physical characteristics (e.g., structure and shape), category (e.g., type and model), and states (e.g., movement and change information) of targets.
In general, the spatial resolution of satellite SAR images plays a crucial role in representing the scattering information of targets in the 2D image domain. As the spatial resolution of satellite SAR images improves, the scattering information of the targets becomes clearer [9,10]. This is because better spatial resolution makes the impulse response functions (IRFs) of the SAR system sharper and reduces interference among backscattered signals from the scatterers. However, the spatial resolution of satellite SAR is predetermined by the development requirements and design process of the satellite SAR system, considering the operational objectives and application field of the corresponding SAR mission. Therefore, it cannot be adjusted by the users.
To overcome this limitation, super-resolution (SR) methods have been proposed to improve the spatial resolution of target images for SAR and inverse SAR (ISAR) [9,10,11,12,13,14,15,16]. The SR methods in [9,10,11,12,13,14,15,16] are mostly based on spectral estimation (SPE) techniques such as multiple signal classification (MUSIC), estimation of signal parameters via rotational invariance techniques (ESPRIT), relaxation (RELAX), and autoregressive (AR) model-based linear prediction (LP). In [17], the compressive sensing (CS) technique was also used to generate super-resolved target images by solving optimization problems based on the radar signal model. The above SR methods all utilize complex radar signals, because phase information is essential for processing radar images.
In [18,19,20], CS theory was utilized to conduct learning-based SR for SAR and optical images. The primary idea of learning-based SR is to learn the correspondence between low-resolution (LR) and high-resolution (HR) image patches from a training database. In [18], the concept of multi-dictionary CS was proposed to jointly train low- and high-resolution dictionaries, generating super-resolved SAR patches. However, the training process may be time-consuming, which is not appropriate for real-time SAR applications. In addition, the method in [18] uses only amplitude information to construct a feature vector from a SAR patch; in this case, the principal information in SAR images may be lost. In [19,20], learning-based SR strategies were presented to enhance the spatial resolution of optical images. The method in [19] extracted similar image patches existing in the same LR remote sensing image, which was called structural self-similarity (SSSIM). Then, pre-HR images obtained by applying an interpolation process to SSSIM were utilized for dictionary training based on K-singular value decomposition (K-SVD). In [20], a blurring matrix was introduced in order to enhance the incoherency between the sparsifying dictionary and the sensing matrices. In addition, the method in [21] proposed an image deblurring method using derivative CS when accurate knowledge of the blurring operator is lacking. In [22], a CS model-based reconstruction method for multi-detector signal acquisition was presented.
It should be noted that a target observed by SAR can be represented by a combination of the target’s scattering information and the IRF of the SAR system; the IRF generally has a sinc-like shape. Naturally, it is desirable that the super-resolved image is also a combination of the target’s scattering information and the sinc-like IRF with improved spatial resolution. However, most of the above SR techniques are limited in terms of their ability to retain the sinc-like IRF in the super-resolved image. The MUSIC method computes the spatial spectral function using predefined direction vectors and the noise subspace of the target image to create a super-resolved image, after which the sinc-like shape of the IRF is completely lost. Additionally, the ESPRIT and RELAX methods estimate the geometric locations (GLs) and radar cross-sections (RCSs) of the main scatterers (i.e., line spectra), leading to multiple points in the resulting image that cannot contain the sinc-like shape of the IRF. In [23], AR model-based LP and CS techniques were used for the SR procedure of satellite SAR images to retain the sinc-like shape of the IRF well in the super-resolved image; it was demonstrated that the methods in [23] could maintain the sinc-like shape of the IRF well at a low degree of SR, yielding reliable SR performances.
However, as the degree of SR increases, the SR performances of AR model-based LP and CS techniques may degrade. In the case of AR model-based LP, the extrapolation errors may grow significantly as the degree of SR increases. This is because a simple AR model cannot effectively handle complex combinations of radar signals from many distributed scatterers in the target response at a high degree of SR. In addition, in the case of CS techniques, a high degree of SR induces only multiple point-like information in 2D images, resulting in the severe destruction of the sinc-like characteristics of the IRF in the resulting image. Consequently, it is difficult to generate reliable super-resolved results with a high degree of SR.
To overcome this problem, we propose an efficient SR method for targets observed using satellite SAR images. In short, we conceptually combine two factors: (1) the GLs and RCSs of dominant scattering centers (SCs) in the target image, and (2) the IRF of the satellite SAR system (IRF-S). First, a small target image extracted from a large-scale SAR image is subjected to proper signal processing. The principal scatterers of the targets are then extracted from the target image using the CS technique. Secondly, a SAR image of a corner reflector (CR) extracted from a large-scale satellite SAR image undergoes clutter signal removal and normalization, generating an IRF-S. Subsequently, the spatial resolution of the IRF-S is improved using AR model-based LP. Finally, the super-resolved IRF-S is convolved with the extracted SCs to generate a clear super-resolved target image. In this study, we used Korea Multi-Purpose SATellite-5 (KOMPSAT-5, K-5) images obtained at a high frequency (X-band) to analyze the SR performance of the proposed method.
The greatest advantage of the proposed method is that it only requires the improvement of the spatial resolution of the IRF-S containing ideal point target information to generate the super-resolved image, instead of considering a large number of scatterers; this can assist in reducing the extrapolation error of AR model-based LP at a high degree of SR. Consequently, the proposed method efficiently produces reliable super-resolved target images with a high degree of SR, even though the target response has a complex spatial distribution of scatterers.
The major objectives of this study can be summarized as follows. The first objective is to practically design an efficient SAR SR scheme that has robustness at a high SR degree. It should be noted that the proposed SR scheme contains proper preprocessing steps to cope with various types of motions of targets occurring in real situations; this can assist in improving the applicability of the proposed scheme to real systems. The second objective is to effectively verify the SR capabilities of the proposed scheme. In our experiments, various metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), focus quality parameters, and computation time (CT) as well as qualitative analysis are used to demonstrate the effectiveness of the proposed SR scheme.

2. Proposed SR Method for Target Image

2.1. Overall Flowchart of the Proposed Method

Figure 1 shows the overall flowchart of the proposed method, which consists of four steps: (1) preprocessing, (2) SC extraction (SCE), (3) generation of the super-resolved IRF-S, and (4) convolution of the SC image and super-resolved IRF-S. Notably, the target image can be extracted from a large-scale K-5 image through manual inspection or using various target detection algorithms [1,2,3,4]. These four steps are described in detail in the following section.

2.2. Radar Signal Model for the Proposed Method

According to high-frequency scattering theory, a backscattered field in the high-frequency region can be represented as the sum of fields from a discrete set of independent scattering centers (SCs) on a target [24]. For simplicity, we adopt an undamped exponential model that omits the angle-dependence and frequency-dependence terms included in the geometrical theory of diffraction (GTD) model. Then, the scattered field signals from $I$ SCs at different frequencies $f$ and look angles $\phi$ can be modeled as [25]:
$$ s(\phi, f) = \sum_{i=1}^{I} a_i \exp\left( j 2 k \sin\phi \, y_i \right) \exp\left( j 2 k \cos\phi \, x_i \right) \tag{1} $$
where $a_i$ represents the amplitude of the $i$-th SC at $(x_i, y_i)$ and $k = 2\pi f / c$ denotes the wavenumber. Let $f_x = f \cos\phi$ and $f_y = f \sin\phi$. Then, Equation (1) can be expressed as follows:
$$ s(n_{az}, n_{sl}) = \sum_{i=1}^{I} a_i \exp\left( j \frac{2\pi n_{az} y_i}{R_y} \right) \exp\left( j \frac{2\pi n_{sl} x_i}{R_x} \right) \tag{2} $$
where $R_y = c/(2\Delta f_y)$ and $R_x = c/(2\Delta f_x)$ represent the maximum unambiguous ranges in the azimuth and slant-range directions, respectively, and $n_{az}$ and $n_{sl}$ denote the azimuth and slant-range frequency indices, respectively. If the 2D target image domain is discretized by a $P \times Q$ grid, with $p$ and $q$ denoting the azimuth and slant-range indices of the grid, Equation (2) can be expressed as follows:
$$ s(n_{az}, n_{sl}) = \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} a_{p,q} \exp\left( j \frac{2\pi}{P} n_{az} p \right) \exp\left( j \frac{2\pi}{Q} n_{sl} q \right) \tag{3} $$
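As an illustration, Equation (3) can be simulated directly; the sketch below (toy sizes and function names assumed by us, not the paper's code) builds the field as a sum of separable 2D exponentials:

```python
import numpy as np

# Hypothetical sketch of the separable exponential signal model in
# Eq. (3): each scattering center on a P x Q image grid contributes a
# 2D complex exponential to the sampled frequency-domain field.
def simulate_field(scatterers, P, Q, N_az, N_sl):
    """scatterers: list of (amplitude a, azimuth index p, slant-range index q)."""
    n_az = np.arange(N_az)[:, None]          # azimuth frequency samples
    n_sl = np.arange(N_sl)[None, :]          # slant-range frequency samples
    s = np.zeros((N_az, N_sl), dtype=complex)
    for a, p, q in scatterers:
        s += a * np.exp(1j * 2 * np.pi / P * n_az * p) \
               * np.exp(1j * 2 * np.pi / Q * n_sl * q)
    return s
```

A single scatterer at the grid origin, for instance, yields a constant field, which is consistent with Equation (3).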

2.3. Preprocessing (Step 1)

In Step 1, the original target image is transformed to be appropriate for the subsequent steps of the proposed SR method. First, the small target image is decompressed using a 2D fast Fourier transform (FFT) along the slant-range and azimuth directions, yielding the 2D frequency spectrum shown in Figure 2. The frequency spectrum contains no-data regions induced by oversampling in the SAR processor (SARP) along the slant-range and azimuth directions (black regions in Figure 2) [23,26].
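A toy sketch of this decompress-and-trim idea follows (hypothetical code; in practice, the no-data regions are located using SARP metadata and the estimated Doppler centroid rather than a simple energy threshold):

```python
import numpy as np

# Illustrative sketch (not the authors' code) of Step 1: transform the
# target chip to the 2D frequency domain and drop no-data rows/columns
# whose total energy is (near) zero, restoring a contiguous spectrum.
def remove_no_data(img, tol=1e-12):
    spec = np.fft.fft2(img)                      # slant-range/azimuth FFT
    keep_az = np.abs(spec).sum(axis=1) > tol     # azimuth rows carrying data
    keep_sl = np.abs(spec).sum(axis=0) > tol     # slant-range columns carrying data
    return spec[np.ix_(keep_az, keep_sl)]
```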
The no-data regions break the continuity of the target information in the 2D frequency spectrum, thereby impeding successful SR processing. Thus, it is desirable to remove the no-data regions from the 2D frequency spectrum. In the case of the slant-range direction, no-data regions are always found in the middle part of the spectra owing to the characteristics of SAR processing. Therefore, no-data regions can be directly removed using the metadata provided by the SARP. Meanwhile, in the azimuthal direction, no-data regions are located in the vicinity of the Doppler centroid. Thus, the Doppler centroid is estimated, and no-data regions are removed using metadata. In this study, we refer to the preprocessed (PR) 2D frequency spectrum whose no-data regions are removed as $s(n_{az}, n_{sl})$.
$$ s(n_{az}, n_{sl}) = \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} a_{p,q} \exp\left( j \frac{2\pi}{P} n_{az} p \right) \exp\left( j \frac{2\pi}{Q} n_{sl} q \right) \tag{4} $$
where $n_{az} = 1, 2, \ldots, N_{az}$, $n_{sl} = 1, 2, \ldots, N_{sl}$, and $N_{az}$ and $N_{sl}$ denote the numbers of pixels in the azimuth and slant-range frequency directions, respectively. As Equation (4) is a well-known FT relationship, it can be rewritten as the following matrix equation:
$$ \underset{N_{az} \times N_{sl}}{\mathbf{S}} = \underset{N_{az} \times P}{\mathbf{F}_{az}} \; \underset{P \times Q}{\mathbf{A}} \; \underset{Q \times N_{sl}}{\mathbf{F}_{sl}^{T}} \tag{5} $$
where $\mathbf{S} = [s(n_{az}, n_{sl})]$ denotes the $N_{az} \times N_{sl}$ matrix, $\mathbf{A} = [a_{p,q}]$ denotes the $P \times Q$ matrix, $\mathbf{F}_{az}$ denotes the $N_{az} \times P$ Fourier dictionary in the azimuthal direction, and $\mathbf{F}_{sl}$ denotes the $N_{sl} \times Q$ Fourier dictionary in the slant-range direction (so that $\mathbf{F}_{sl}^{T}$ is $Q \times N_{sl}$).
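The matrix relationship in Equation (5) can be checked numerically with toy Fourier dictionaries; the sign and normalization conventions below are assumptions consistent with Equations (3) and (4), and all sizes are illustrative:

```python
import numpy as np

# Toy check of the matrix form S = F_az A F_sl^T in Eq. (5): a single
# nonzero entry of A should produce a separable 2D exponential spectrum.
def fourier_dictionary(N, M):
    n = np.arange(N)[:, None]
    m = np.arange(M)[None, :]
    return np.exp(1j * 2 * np.pi * n * m / M)       # N x M

N_az, N_sl, P, Q = 6, 5, 8, 8
F_az = fourier_dictionary(N_az, P)                  # N_az x P
F_sl = fourier_dictionary(N_sl, Q)                  # N_sl x Q  (F_sl^T is Q x N_sl)
A = np.zeros((P, Q), dtype=complex)
A[2, 3] = 1.0                                       # one scattering center
S = F_az @ A @ F_sl.T                               # N_az x N_sl spectrum
```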

2.4. Scattering Center Extraction (Step 2)

In radar signal processing, SCE can be effectively accomplished using various CS or SPE techniques, such as the orthogonal matching pursuit (OMP) [27], root-MUSIC [15], and ESPRIT [14] algorithms, provided that the backscattered field satisfies the signal model in Equation (4). Unlike SPE techniques, CS techniques avoid the need to estimate the number of SCs. This is a significant advantage for SCE because estimating the number of SCs is very difficult for an extended target. It should be noted that the most important consideration for the SCE step in the proposed scheme is the computation time (CT), because the main application of the proposed scheme is target recognition using satellite SAR images, which requires real-time processing. In the area of radar imaging, the OMP algorithm, the most popular greedy pursuit method based on the CS technique, has provided reliable accuracy with very low CTs [27,28]. In addition, when we conducted SCE experiments using several CS algorithms (OMP, MP, Lasso, BP, and BPDN), OMP exhibited the most reliable performance in terms of accuracy and CT. Thus, the OMP algorithm was adopted for SCE in this study.
In the case of a stationary target (ST), its PR 2D frequency spectrum, $s_{ST}(n_{az}, n_{sl})$, can be well matched with Equation (4). Assuming $P > N_{az}$ and $Q > N_{sl}$, the SC image can be obtained by solving the $\ell_0$-norm minimization problem as follows:
$$ (P_0): \quad \min_{\mathbf{A}} \|\mathbf{A}\|_0 \quad \text{subject to} \quad \mathbf{S}_{ST} = \mathbf{F}_{az} \mathbf{A} \mathbf{F}_{sl}^{T} \tag{6} $$
where $\mathbf{S}_{ST} = [s_{ST}(n_{az}, n_{sl})]$ denotes the $N_{az} \times N_{sl}$ matrix. Because the optimization of the nonconvex $(P_0)$ is an NP-hard problem that is extremely complex and difficult to solve, the OMP algorithm suboptimally selects the best solution at every iteration until the convergence criterion is satisfied [27].
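As a sketch of how $(P_0)$ can be attacked greedily, the toy example below implements a minimal OMP on the vectorized separable model, using the Kronecker identity $\mathrm{vec}(\mathbf{F}_{az} \mathbf{A} \mathbf{F}_{sl}^{T}) = (\mathbf{F}_{sl} \otimes \mathbf{F}_{az}) \, \mathrm{vec}(\mathbf{A})$ (column-major vec). All sizes, names, and the stopping rule (a fixed atom count instead of a convergence criterion) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Minimal OMP sketch for the (P0) problem on the vectorized model.
def omp(D, y, n_atoms):
    Dn = D / np.linalg.norm(D, axis=0)           # l2-normalized atoms
    resid, support = y.astype(complex), []
    for _ in range(n_atoms):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Dn.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1], dtype=complex)
    x[support] = coef
    return x

# toy demo: one scattering center at grid cell (p, q) = (2, 3)
P = Q = 8
N_az, N_sl = 6, 5
F_az = np.exp(1j * 2 * np.pi * np.outer(np.arange(N_az), np.arange(P)) / P)
F_sl = np.exp(1j * 2 * np.pi * np.outer(np.arange(N_sl), np.arange(Q)) / Q)
A = np.zeros((P, Q), dtype=complex); A[2, 3] = 1.0
y = (F_az @ A @ F_sl.T).flatten('F')             # vec(S), column-major
x = omp(np.kron(F_sl, F_az), y, n_atoms=1)       # recover vec(A)
```

In production code the explicit Kronecker dictionary would be avoided (it is memory-hungry); the separable structure allows the correlations to be computed with two small matrix products instead.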
Meanwhile, the PR 2D frequency spectrum of a moving target (MT), $s_{MT}(n_{az}, n_{sl})$, differs significantly from Equation (4) due to the target's motion-induced phase, which leads to severe blurring of the target response in the target image. Therefore, in this study, the refocusing technique is applied to $s_{MT}(n_{az}, n_{sl})$ if the corresponding target image contains the blurred target response of a moving target [29,30]. The refocusing technique can be carried out in two different ways: (1) phase adjustment (PA) only, and (2) PA with optimal time windowing (OTW). In the case of moderate target motion, the PA algorithm alone is sufficient to obtain a refocused (REFOC) 2D frequency spectrum $s_{MT,RF1}(n_{az}, n_{sl})$, $n_{az} = 1, 2, \ldots, N_{az}$, $n_{sl} = 1, 2, \ldots, N_{sl}$, whose signals are well matched with Equation (4). A clear target response can be obtained by applying IFFT processing to $s_{MT,RF1}(n_{az}, n_{sl})$. Furthermore, the number of pixels in $s_{MT,RF1}(n_{az}, n_{sl})$ is the same as that of $s_{MT}(n_{az}, n_{sl})$. Meanwhile, if a target has 3D dynamic motion, its effective rotation vector (ERV) varies during the coherent processing interval (CPI). For example, a moving ship can have complex 3D self-motion, such as roll, pitch, and yaw, due to waves and offshore winds. In this case, the PA algorithm alone cannot resolve the mismatch between $s_{MT}(n_{az}, n_{sl})$ and Equation (4). OTW [31] selects an optimal time window in which the ERV of the ship is nearly constant. Thus, the combination of OTW and the PA algorithm can effectively cope with the target's complex motion, yielding $s_{MT,RF2}(n_{az}, n_{sl})$, $n_{az} = 1, 2, \ldots, M_{az}$, $n_{sl} = 1, 2, \ldots, N_{sl}$. Notably, the number of pixels in $s_{MT,RF2}(n_{az}, n_{sl})$ is generally smaller than that in $s_{MT}(n_{az}, n_{sl})$ along the azimuthal direction (i.e., $M_{az} < N_{az}$), because OTW selects only a certain part of the total signals collected in the CPI, as shown in Figure 3.
This implies that the azimuth frequency bandwidth of $s_{MT,RF2}(n_{az}, n_{sl})$ is smaller than that of $s_{MT}(n_{az}, n_{sl})$. After the refocusing technique has been applied to $s_{MT}(n_{az}, n_{sl})$, the SCs are extracted using the same method as that used for $s_{ST}(n_{az}, n_{sl})$:
$$ (P_0): \quad \min_{\mathbf{A}} \|\mathbf{A}\|_0 \quad \text{subject to} \quad \mathbf{S}_{MT} = \mathbf{F}_{az} \mathbf{A} \mathbf{F}_{sl}^{T} \tag{7} $$
where $\mathbf{S}_{MT}$ denotes either $\mathbf{S}_{MT,RF1}$ or $\mathbf{S}_{MT,RF2}$, $\mathbf{S}_{MT,RF1} = [s_{MT,RF1}(n_{az}, n_{sl})]$ is the $N_{az} \times N_{sl}$ matrix, and $\mathbf{S}_{MT,RF2} = [s_{MT,RF2}(n_{az}, n_{sl})]$ is the $M_{az} \times N_{sl}$ matrix.
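To convey the refocusing principle only, the toy sketch below grid-searches a quadratic phase-error coefficient and keeps the correction that minimizes image entropy. The actual PA algorithm of [36] performs full entropy-minimization autofocus over a non-parametric phase-error vector; this hypothetical parametric search merely illustrates the entropy criterion:

```python
import numpy as np

# Toy entropy-minimization sketch: lower image entropy indicates a
# better-focused image, so we keep the phase correction that minimizes it.
def image_entropy(img):
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def quadratic_pa(spec, alphas):
    """spec: 2D frequency data (azimuth on axis 0); alphas: candidate
    quadratic phase-error coefficients to try (an assumed error model)."""
    n = np.arange(spec.shape[0]) - spec.shape[0] / 2
    best = min(alphas, key=lambda a: image_entropy(
        np.fft.ifft2(spec * np.exp(-1j * a * n ** 2)[:, None])))
    return spec * np.exp(-1j * best * n ** 2)[:, None], best
```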

2.5. Generation of Super-Resolved IRF-S (Step 3)

In SAR signal processing, a target observed by satellite SAR can be represented by a combination of the SC image containing the target’s scattering information (i.e., geometric locations and radar cross-sections) and 2D IRF-S, as follows [26,32]:
$$ TI(p, q) = \sum_{i=1}^{I} r_i \, f_{2D}(p - p_i, q - q_i) \tag{8} $$
where $f_{2D}(p - p_i, q - q_i) = f_{az}(p - p_i) \, f_{sl}(q - q_i)$ denotes the 2D IRF-S shifted by $p_i$ and $q_i$; $f_{az}(p - p_i)$ and $f_{sl}(q - q_i)$ denote the 1D IRF-S along the azimuthal and slant-range directions shifted by $p_i$ and $q_i$, respectively; $r_i$ denotes the RCS of the $i$-th scatterer of the target; and $p_i$ and $q_i$ denote the scatterer's position along the azimuth and slant-range directions, respectively. In Equation (8), it is assumed that the same 2D IRF-S is combined with all the scatterers of the target because the scatterers are generally concentrated in a small area. Because the SC image already contains all the geometric locations and radar cross-sections of the target's scatterers (i.e., $p_i$, $q_i$, and $r_i$), Equation (8) can be reformulated in the 2D image domain as follows:
$$ TI(p, q) = I_{SC} \otimes f_{2D}(p, q) \tag{9} $$
where $I_{SC}$ denotes the SC image, $\otimes$ denotes the convolution operation, and $f_{2D}(p, q)$ denotes the 2D IRF-S. Thus, only the 2D IRF-S $f_{2D}(p, q)$ is required.
To obtain the 2D IRF-S, it is desirable to use a SAR image of an isolated point target. This is because it can wholly represent the quality parameters of the satellite SAR system, such as the 3-dB bandwidth (i.e., spatial resolution), peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR), without interference from other scatterers. In this study, a SAR image of a CR was first extracted from a large-scale satellite SAR image. Next, the preprocessing step in Step 1 was applied to the SAR image of the CR to remove the no-data regions in the frequency spectrum, followed by IFFT processing, yielding a PR image of the CR. However, the PR image of the CR cannot be directly regarded as the IRF-S because it contains many clutter signals reflected from the background and is amplified by the RCS of the target. To solve this problem, slant-range and azimuth cuts were obtained by cutting the PR image of the CR at the center pixels in the slant-range and azimuth directions, respectively. Then, the multiplication of the slant-range and azimuth cuts resulted in a clean PR image of the CR, where the clutter signals were almost completely removed. Subsequently, the clean PR image was normalized by the maximum amplitude, yielding a 2D IRF-S $f_{2D}(p, q)$.
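The cut-and-multiply construction described above can be sketched as follows (a hypothetical illustration assuming a preprocessed CR chip whose response peaks inside the image; names are ours):

```python
import numpy as np

# Sketch of the IRF-S construction: take the azimuth and slant-range
# cuts through the peak, form their outer product to suppress off-axis
# clutter, and normalize by the maximum amplitude.
def build_irf(cr_img):
    pk = np.unravel_index(np.argmax(np.abs(cr_img)), cr_img.shape)
    az_cut = cr_img[:, pk[1]]                 # azimuth cut through the peak
    sl_cut = cr_img[pk[0], :]                 # slant-range cut through the peak
    irf = np.outer(az_cut, sl_cut)            # separable 2D response
    return irf / np.abs(irf).max()            # normalize to unit peak
```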
Subsequently, the spatial resolution of the 2D IRF-S was improved using a conventional SAR SR algorithm based on AR model-based LP [16,17]. In this study, the Burg algorithm was adopted because of its efficiency with respect to accuracy and complexity (CT). The Burg algorithm extends the frequency bandwidths of the scattered field signals through extrapolation and generates a new image with improved spatial resolution. Let the 1D scattered field signals along the azimuth or slant-range frequency direction at a specific slant-range or azimuth bin be denoted by $s_{1D}(n)$, $n = 1, 2, \ldots, N$, where $N$ is either $N_{az}$ (for the AR model in the azimuthal frequency direction) or $N_{sl}$ (for the AR model in the slant-range frequency direction). The Burg algorithm utilizes the AR model, which assumes that $s_{1D}(n)$ is a sum of undamped exponentials [11,16,17]. In the AR model, $s_{1D}(n)$ must satisfy the following forward and backward linear prediction conditions:
$$ \hat{s}_{1D}(n) = \begin{cases} \displaystyle\sum_{i=1}^{k} \gamma_i \, s_{1D}(n-i), & n = k+1, k+2, \ldots, N \\[6pt] \displaystyle\sum_{i=1}^{k} \gamma_i^{*} \, s_{1D}(n+i), & n = 1, 2, \ldots, N-k \end{cases} \tag{10} $$
where $*$ denotes the complex conjugate, $\gamma_i$ denotes the coefficients of the AR model, $k$ is the AR model order, and $\hat{s}_{1D}(n)$ is the estimated data using forward or backward prediction. The forward prediction error $e_n^f$ and backward prediction error $e_n^b$ can be defined as follows:
$$ e_n^f = \left| s_{1D}(n) - \hat{s}_{1D}(n) \right|^2 = \left| \sum_{i=0}^{k} \gamma_i \, s_{1D}(n-i) \right|^2, \quad n = k+1, k+2, \ldots, N \tag{11} $$
$$ e_n^b = \left| s_{1D}(n) - \hat{s}_{1D}(n) \right|^2 = \left| \sum_{i=0}^{k} \gamma_i^{*} \, s_{1D}(n+i) \right|^2, \quad n = 1, 2, \ldots, N-k \tag{12} $$
where $\gamma_0 = 1$. The Burg method determines the AR model coefficients $\gamma_i$ by minimizing the sum of the forward and backward prediction errors in Equations (11) and (12). In this study, we chose $k = N/3$ because it provides a robust estimation of $\gamma_i$ [33]. After the $\gamma_i$ had been obtained, the number of additional cells required for extrapolation was determined as follows:
$$ L = \mathrm{Round}\left( N \times \frac{SR_{bef} / SR_{aft} - 1}{2} \right) \tag{13} $$
where $\mathrm{Round}(\cdot)$ denotes the round-off operator, and $SR_{bef}$ and $SR_{aft}$ are the spatial resolutions in the azimuth or slant-range direction before and after the SR procedure, respectively. Next, $L$ cells were appended to both the beginning and the end of $s_{1D}(n)$. Then, the scattered field signals of the $2L$ cells were estimated using $\gamma_i$. The above extrapolation was iterated for all the azimuthal and slant-range bins. Then, a 2D IFFT was applied to the total scattered field signals to generate a 2D IRF-S with improved spatial resolution, referred to as $f_{2D}^{IMP}(p, q)$. In the Burg algorithm, $L$ is linearly proportional to the increment of the azimuth and slant-range frequency bandwidths, which are directly associated with the SAR image resolutions. Thus, the spatial resolutions of $f_{2D}^{IMP}(p, q)$ can be determined by controlling $L$.
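The extrapolation machinery can be sketched compactly. One caveat: for brevity, a least-squares forward-backward linear-prediction fit stands in for the Burg recursion proper (the paper uses Burg), while the two-sided extension and the cell count $L$ of Equation (13) follow the text; all names are illustrative:

```python
import numpy as np

# Least-squares forward-backward LP fit (a stand-in for Burg): find g
# minimizing the combined forward and backward prediction errors.
def fb_lp_coeffs(s, k):
    N = len(s)
    rows, rhs = [], []
    for n in range(k, N):                       # forward rows: s(n) ~ sum g_i s(n-i)
        rows.append(s[n - 1::-1][:k]); rhs.append(s[n])
    for n in range(N - k):                      # backward rows (conjugated form)
        rows.append(np.conj(s[n + 1:n + 1 + k])); rhs.append(np.conj(s[n]))
    g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return g

def extrapolate(s, g, L):
    """Extend s by L cells on each side using the fitted AR coefficients."""
    s = list(np.asarray(s, dtype=complex))
    k = len(g)
    for _ in range(L):                          # forward extension
        s.append(np.dot(g, s[:-k - 1:-1]))      # uses s[-1], ..., s[-k]
    for _ in range(L):                          # backward extension
        s.insert(0, np.conj(np.dot(g, np.conj(np.array(s[:k])))))
    return np.array(s)

def num_extra_cells(N, sr_bef, sr_aft):
    return round(N * (sr_bef / sr_aft - 1) / 2)   # Eq. (13)
```

On a single undamped exponential, an AR(1) fit recovers the generating pole exactly, so the extrapolated signal continues the exponential without error.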

2.6. Convolution of SC Image and Super-Resolved IRF-S (Step 4)

As the last step in generating the super-resolved target image, $f_{2D}^{IMP}(p, q)$ from Section 2.5 is convolved with $I_{SC}$ from Section 2.4 according to Equation (9).
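This step is a single 2D linear convolution; a minimal FFT-based sketch follows (function names are ours, not the authors'):

```python
import numpy as np

# Step 4 as an FFT-based 2D linear convolution of the SC image with the
# super-resolved IRF-S (zero-padded to the full convolution size).
def super_resolve(sc_image, irf_imp):
    pad = [s + k - 1 for s, k in zip(sc_image.shape, irf_imp.shape)]
    out = np.fft.ifft2(np.fft.fft2(sc_image, pad) * np.fft.fft2(irf_imp, pad))
    return out            # full linear convolution result
```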

3. Experimental Results

To investigate the effectiveness of the proposed method, we considered ship targets observed by K-5. We extracted each target image from a large-scale K-5 image. Additionally, we extracted CR images from other large-scale K-5 images of a real CR located at the KOMPSAT calibration site in Mongolia. Notably, the target and CR images were obtained using the same observation mode (spotlight), beam number, and polarization (HH). In this study, we analyzed SR performance from two perspectives: (1) restoration and (2) improvement.
In many studies on the development of SR algorithms, restoration metrics have been widely used to evaluate SR capability. In this study, we used two restoration metrics: the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), which have been used in optical image-based SR algorithms [34,35]. Let the PR and (depending on the ship's motion) REFOC target image be referred to as the reference (REF) target image. When PSNR and SSIM were used to evaluate the SR capability, the spatial resolution of the REF target image was intentionally worsened by reducing the slant-range and azimuth frequency bandwidths, yielding an LR target image. Similarly, the spatial resolution of the IRF-S was degraded, leading to an LR IRF-S. The proposed SR method was then applied to the LR target image and the LR IRF-S to generate a restored target image whose spatial resolution was the same as that of the REF target image. The PSNR and SSIM compute the similarity of the scattering information between two images (i.e., the REF target image and the restored target image), evaluating the SR capability more accurately than the focus quality parameters do. The PSNR is the ratio between the maximum signal and the corrupting noise that affects high-resolution reconstruction:
$$ PSNR = 20 \log_{10}(MAX_I) - 10 \log_{10}\left( MSE(x, y) \right) \tag{14} $$
where $MAX_I$ denotes the maximum possible pixel value of the image, and $MSE(x, y)$ denotes the mean squared error between the two images $x$ and $y$. The SSIM evaluates the similarity between two images by combining brightness, contrast, and structural information:
$$ SSIM(x, y) = \frac{\left( 2 \mu_x \mu_y + c_1 \right) \left( 2 \sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right) \left( \sigma_x^2 + \sigma_y^2 + c_2 \right)} \tag{15} $$
where $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ denote the local means, standard deviations, and cross-covariance of $x$ and $y$, respectively, and $c_1$ and $c_2$ are small constants. Generally, higher PSNR and SSIM values indicate better restoration performance.
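The two metrics can be sketched directly from Equations (14) and (15). For brevity, the SSIM below uses global image statistics, whereas the standard SSIM averages Equation (15) over local windows; constants and names are illustrative:

```python
import numpy as np

# PSNR per Eq. (14) and a global-statistics SSIM per Eq. (15).
def psnr(x, y, max_i=1.0):
    mse = np.mean((x - y) ** 2)
    return 20 * np.log10(max_i) - 10 * np.log10(mse)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()         # cross-covariance
    return ((2 * mx * my + c1) * (2 * sxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (sx ** 2 + sy ** 2 + c2))
```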

3.1. SR Results for Static Ship Target

The ratio of the adjusted spatial resolution to the original spatial resolution is denoted as $d$. Figure 4 shows the REF and LR target images for the stationary ship target with the PR and LR IRF-Ss when $d = 2$.
In Figure 4b,d, the spatial resolutions of the REF target image and PR IRF-S are degraded by reducing the frequency bandwidth along the slant-range and azimuthal directions. Next, Figure 5 shows the SC image and super-resolved IRF-S obtained using the processing steps described in Section 2.4 and Section 2.5, respectively. In Figure 5, the SC image shows the primary scattering information for the target response. Additionally, the super-resolved IRF-S generated from the LR IRF-S was similar to the PR IRF-S.
Figure 6 shows the SR results obtained using the five SR methods (the four SR methods in [23], and the proposed SR method). As shown in Figure 6, the five SR methods generated slightly different SR results, all of which were similar to the REF target image shown in Figure 4. Evidently, the results from Figure 6 show a better spatial resolution than the LR target image in Figure 4b. However, when observed with the naked eye, it is difficult to determine which algorithm has a better SR capability.
To conduct a quantitative analysis of the SR capabilities of the five SR methods, the PSNRs and SSIMs were computed by varying d from 2 to 4 in increments of 0.5, as summarized in Table 1 and Table 2. In our study, the BPDN algorithm induces slightly different results for the same target image at every iteration, because the constraint relaxation parameter needed for the log-barrier algorithm is a random matrix. Thus, in the case of the BPDN algorithm, the results in Table 1 and Table 2 are obtained from the average values of 50 independent realizations to provide reliable performance evaluations.
In Table 1, the MCM method shows a better PSNR only at $d = 2$; for all other values ($d = 2.5, 3, 3.5, 4$), the PSNRs of the proposed method are the highest. Additionally, the proposed method shows the best SSIMs over the entire range of $d$ in Table 2.
In addition, Table 3 shows the standard deviations of the BPDN algorithm for the results in Table 1 and Table 2.
In Table 3, the standard deviations of BPDN are almost all below 0.0001. Thus, the BPDN algorithm is statistically very stable for solving our problem.

3.2. SR Results for Moving Ship Target—Moderate Motion

In this section, we consider another ship target with moderate motion. In this case, although the target response in Figure 7a is blurred owing to the motion of the target, its phase errors can be effectively eliminated using only the PA algorithm. In this study, the entropy minimization method in [36] was selected as the PA algorithm. Figure 7 shows the PR and REF target images of a moving target with moderate motion. As shown in Figure 7b, the PA algorithm successfully removes the blurring effect of the target response resulting from motion-induced phase errors.
Figure 8 shows the LR target image for a moving ship target with moderate motion and the corresponding SR results obtained using the five SR methods when d = 2 .
In Figure 8, all five methods enhance the spatial resolution of the LR target image, generating different super-resolved scattering information results, all of which are similar to the REF image in Figure 7b. Table 4 and Table 5 show the PSNRs and SSIMs for the super-resolved images in Figure 8, respectively. In Table 4 and Table 5, the results of the BPDN algorithm were obtained from the average values of 50 independent realizations to provide reliable performance evaluations. In the case of PSNR, the Burg method shows the highest performance at $d = 2$ and $2.5$; however, the proposed method achieves the best PSNRs at $d = 3$, $3.5$, and $4$. In the case of SSIM, the proposed method outperforms the other SR methods at all values of $d$.

3.3. SR Results for Moving Ship Target—Complex Motion

As mentioned in Section 2, a ship target can have 3D complex motion due to waves and offshore wind. In this case, the ERV varies during the CPI, resulting in a severely degraded target response. Thus, using only the PA algorithm has limitations in dealing with the blurring effect of the target response, leading to a mismatch between s M T n a z , n s l and Equation (4). Figure 9 shows the PR image, REF image obtained using only the PA algorithm [36], and REF image obtained using both the OTW [29] and PA [36] algorithms for moving ship targets with complex motion. Notably, as shown in Figure 9b, the target response still contains many phase errors, which cause blurring of the target response. This is because the PA algorithm cannot handle the 3D motion components of the target. The combination of the OTW and PA algorithms can generate a clear target response, as shown in Figure 9c. Thus, it is necessary to utilize both the OTW and PA algorithms to obtain the correct scattering information for a moving target with complex motion.
Figure 10 shows the LR target image and the corresponding SR results obtained using the five SR methods for a moving target with complex motion when d = 2.5. In Figure 10a, the spatial resolution of the LR target image is considerably degraded compared with that of the REF image in Figure 9c. The five algorithms yield slightly different scattering information for the target, and it is difficult to determine visually which algorithm has better SR capability. Although the super-resolved images are not identical to the REF image in Figure 9c, all five SR algorithms clearly improve the spatial resolution of the LR image in Figure 10a.
Table 6 and Table 7 list the PSNRs and SSIMs, respectively, for the super-resolved images shown in Figure 10. In both tables, the results of the BPDN algorithm were averaged over 50 independent realizations to provide reliable performance evaluations. In terms of PSNR, the proposed method achieves slightly better scores for all values of d. In Table 7, the SSIMs of all the algorithms are similar at d = 2, and those of the proposed method are slightly better than those of the other algorithms for all larger values of d.

3.4. SR Results in the Case of Improvement

In the previous sections, we demonstrated that, from a restoration perspective, the proposed scheme is useful for improving the spatial resolution of target images extracted from large-scale KOMPSAT-5 images. In this section, we examine the SR capability of the proposed method from the perspective of improvement, which is closer to a real operating scenario. Figure 11 shows the PR and REF target images of a moving target with moderate motion. The target motion blurs the target response in the PR image; after refocusing, the REF target image contains a clear target response, as shown in Figure 11b.
The ratio of the original spatial resolution to the adjusted spatial resolution is denoted by r. Figure 12 shows the original IRF-S, the super-resolved IRF-S, the SC image, and the super-resolved target image obtained using the proposed scheme when r = 3. Figure 12b shows that the super-resolved IRF-S has a better spatial resolution than the original IRF-S, and the SC image in Figure 12c captures the principal scattering information of the target response. Consequently, compared with Figure 11b, the proposed method significantly enhances the spatial resolution of the target image, as shown in Figure 12d: the IRFs of the scatterers of the target response become sharper, and the interference among the scatterers is reduced. Although measuring the degree of improvement in spatial resolution is challenging, the SR result in Figure 12d clearly provides more precise information about the principal scattering centers.
To compare the SR capability of the proposed method with that of other SR algorithms, the Burg, MCM, BP, and BPDN algorithms in [23] were also used to generate super-resolved images. Figure 13 and Figure 14 show a comparison of the SR results obtained using the proposed method with those obtained using the four algorithms (Burg, MCM, BP, and BPDN) at r = 4 and r = 7 , respectively.
Figure 13 and Figure 14 demonstrate that all SR methods successfully improve the spatial resolution of the REF image in Figure 11b. However, the super-resolved images obtained using the four methods in [23] cannot effectively reflect the inherent sinc-like shape of the IRF. In particular, they exhibit significantly different scattering mechanisms from the REF image at an extremely high degree of SR (r = 7): Burg and MCM largely smear the scattering information, while BP and BPDN yield multiple point-like responses. Meanwhile, the proposed method maintains the sinc-like shape of the IRF, generating a more realistic super-resolved target image regardless of the degree of SR, because it directly utilizes the super-resolved IRF-S (e.g., Figure 12b) in SR processing. This indicates that the proposed method is robust for generating super-resolved images at extremely high degrees of SR.
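The final step of the proposed scheme, combining the extracted scattering centers with the super-resolved IRF-S, can be illustrated with a toy model. The sketch below is a simplified stand-in, assuming a separable sinc IRF and ideal point scatterers rather than the measured KOMPSAT-5 IRF-S; function names are illustrative.

```python
import numpy as np

def sinc_irf(n, osf):
    """Separable 2D sinc-like IRF sampled at `osf` pixels per resolution
    cell (a larger osf mimics a higher degree of SR)."""
    t = (np.arange(n) - n // 2) / osf
    s = np.sinc(t)                      # np.sinc(x) = sin(pi*x)/(pi*x)
    return np.outer(s, s)

def render_target(scatterers, shape, irf):
    """Target image: a copy of the IRF, scaled by its amplitude, is placed
    at each scattering-center location (2D convolution of a sparse SC map
    with the IRF)."""
    h = irf.shape[0] // 2
    pad = np.zeros((shape[0] + 2 * h, shape[1] + 2 * h))
    for (row, col), amp in scatterers.items():
        pad[row:row + irf.shape[0], col:col + irf.shape[1]] += amp * irf
    return pad[h:h + shape[0], h:h + shape[1]]
```

Because the sinc mainlobe and sidelobes are rendered explicitly, the output keeps the sinc-like IRF shape at any SR degree, which mirrors the behavior reported for the proposed method in Figure 13 and Figure 14.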
Furthermore, the CTs of the five SR methods used for Figure 13 and Figure 14 were measured to investigate their applicability to real systems. MATLAB and a PC with a 3.7 GHz CPU were used (the MATLAB code was not optimized for the best computation speed).
To analyze the CTs versus r, super-resolved target images were generated by varying r from 3 to 7 in increments of 1.
Table 8 lists the CTs required to generate the super-resolved target images for Figure 11b. As shown in Table 8, the Burg, MCM, and proposed methods exhibit reliable CTs, and the proposed method achieves the best CT over the entire range of r. Considering that our equipment and software were not optimized for SR processing, the proposed method has great potential for use in real systems. Meanwhile, the CTs of the BP and BPDN methods increase rapidly as r increases, which means that BP and BPDN are not appropriate for real-time applications (e.g., SAR automatic target recognition (ATR)). In this study, we used the l1-magic software [36] by Candès and Romberg to run BP and BPDN. BP solves a linear program using a generic path-following primal-dual method, whereas BPDN solves a second-order cone program with a generic log-barrier algorithm. Because BP and BPDN spend most of their time solving these optimization problems, their CTs are much worse than those of Burg, MCM, and the proposed scheme.

4. Discussion

In Section 3, we demonstrated that the proposed scheme improves the spatial resolution of target images extracted from large-scale KOMPSAT-5 images and that it works well for various types of targets. In particular, the proposed method exhibits excellent SR performance at both high and extremely high degrees of SR (d ≥ 2.5; r = 4, 7).
In the case of restoration (i.e., Section 3.1, Section 3.2, and Section 3.3), the PSNR and SSIM were sufficient to quantitatively analyze the SR performance. However, in the case of improvement (i.e., Section 3.4), the SR capability is difficult to measure quantitatively, and we had to rely on visual inspection.
Nevertheless, we added an indirect analysis using Shannon entropy (SE) and image contrast (IC), which are widely used to evaluate the focus quality of SAR images [37,38,39]. SE and IC can be expressed as follows [37]:
SE = ∑ (|I_2D|² / S) ln(S / |I_2D|²)

IC = σ{|I_2D|²} / E{|I_2D|²}

where I_2D denotes a 2D image, ∑ denotes the summation of all the elements in a matrix, S = ∑|I_2D|², E{·} denotes the mean, and σ{·} denotes the standard deviation. Generally, a lower SE and a higher IC imply a better focus quality. In addition, an improvement in focus quality can indirectly imply an improvement in the 3-dB bandwidth of the IRF and a reduction in interference among IRFs [23]. Table 9 and Table 10 list the SEs and ICs of the super-resolved images in Figure 13 and Figure 14.
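Under these definitions, SE and IC are straightforward to compute. The sketch below assumes a real- or complex-valued 2D image array and guards the 0 · ln 0 case:

```python
import numpy as np

def shannon_entropy(img):
    """SE = sum over pixels of (|I|^2 / S) * ln(S / |I|^2), S = total power."""
    power = np.abs(np.asarray(img)) ** 2
    p = power / power.sum()
    p = p[p > 0]                      # 0 * ln(0) contributes nothing
    return float(-(p * np.log(p)).sum())

def image_contrast(img):
    """IC = std(|I|^2) / mean(|I|^2)."""
    power = np.abs(np.asarray(img)) ** 2
    return float(power.std() / power.mean())
```

A perfectly focused point (all power in one pixel) gives SE = 0 and a large IC, whereas a uniform image gives the maximum SE (the natural log of the pixel count) and IC = 0, matching the "lower SE, higher IC = better focus" convention.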
In Table 9 and Table 10, the REF images exhibit better focus quality than the PR images at r = 4 and 7, which is natural considering the refocusing process. In addition, all the super-resolved images exhibit better focus quality than the REF images. In particular, the proposed method yields the best SE and IC at r = 4.
The images super-resolved by BP and BPDN yield the best SEs and ICs at r = 7. However, this is because the focus-quality metrics measure only the sharpness of the image and do not consider the scattering information of the target response. In fact, these methods generate only multiple point-like responses in Figure 13 and Figure 14; the sinc-like shape of the IRF is completely lost in the super-resolved image. Thus, SE and IC must be used with caution when evaluating SR performance, with the scattering information of the resulting super-resolved images taken into account.
Furthermore, the proposed method extracts the IRF-S from real KOMPSAT-5 images to generate the super-resolved IRF-S. The advantage of this approach is that the IRF-S can be prepared in advance, because various KOMPSAT-5 images of the CR are already acquired for calibration and validation purposes. Thus, a super-resolved IRF-S can be generated quickly using AR-model-based LP.
As an alternative, point-target simulation can also be used to obtain the super-resolved IRF-S. Once the REF image has been obtained, point-target simulation can generate a super-resolved IRF-S considering the frequency bandwidth of the REF target image and the degree of SR. However, point-target simulation requires complex processing to imitate the satellite SAR geometry, SAR raw signal generation, and SAR processors, resulting in long computation time. Notably, a super-resolved IRF-S cannot be prepared in advance using point-target simulation, which is a critical problem because the main application of the proposed method is SAR target recognition, which requires near-real-time processing. Therefore, it is desirable to utilize real images to generate a super-resolved IRF-S for the proposed method.
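The point-target alternative discussed above rests on a simple relation: the IRF of an ideal point target is the inverse transform of its processed frequency band, so widening that band by the SR factor narrows the sinc mainlobe proportionally. The sketch below is a hedged 1D illustration of this relation only; an actual point-target simulation would additionally have to model the satellite SAR geometry, raw-signal generation, and processor, as noted above.

```python
import numpy as np

def point_target_irf(bandwidth, n=2048, extent=8.0):
    """Magnitude of a 1D ideal point-target IRF: a flat spectrum of the given
    (normalized) bandwidth transforms to a sinc with 3-dB width ~0.886/bandwidth."""
    x = np.linspace(-extent, extent, n)
    return np.abs(np.sinc(bandwidth * x)), x

def mainlobe_3db_width(mag, x):
    """Width of the contiguous region around the peak above the -3 dB level."""
    above = mag >= mag.max() / np.sqrt(2.0)
    lo = hi = int(np.argmax(mag))
    while lo > 0 and above[lo - 1]:
        lo -= 1
    while hi < len(mag) - 1 and above[hi + 1]:
        hi += 1
    return x[hi] - x[lo]

# doubling the processed bandwidth roughly halves the 3-dB resolution cell
w1 = mainlobe_3db_width(*point_target_irf(1.0))
w2 = mainlobe_3db_width(*point_target_irf(2.0))
```

This is why the degree of SR maps directly to a bandwidth-extension factor when generating a super-resolved IRF-S, whether by simulation or by LP-based spectral extrapolation.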

5. Conclusions

In this study, our major objectives were successfully accomplished. The proposed SR scheme was designed to be robust at high SR degrees, and it includes preprocessing steps that handle the various types of target motion. Experiments demonstrated the effectiveness of the proposed SR scheme using various metrics. The major conclusions can be summarized as follows:
(1)
In terms of both restoration and improvement, the proposed scheme led to considerably improved spatial resolution of the target images for various types of targets, leading to clearer information on the principal scatterers.
(2)
In particular, the proposed method exhibited excellent SR capabilities at a high degree of SR in terms of PSNR, SSIM, and CT compared with other SAR SR methods. This implies that the proposed method can extract highly precise and meaningful information regarding the targets represented in satellite SAR images.
(3)
The concept of the proposed scheme can easily be extended to other satellite SAR systems, such as ICEYE, Capella, TerraSAR-X, and KOMPSAT-6, if the preprocessing steps are slightly adjusted according to the characteristics of the SAR system.
(4)
It is expected that the proposed scheme will also be useful for improving target recognition capability using satellite SAR images.

Author Contributions

Conceptualization, S.-J.L.; methodology, S.-J.L.; software, S.-J.L.; validation, S.-J.L.; investigation, S.-J.L.; resources, S.-J.L.; data curation, S.-J.L.; writing—original draft preparation, S.-J.L.; writing—review and editing, S.-J.L.; visualization, S.-J.L.; supervision, S.-G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Korea Aerospace Research Institute, grant number FR23J00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, S.-J.; Lee, K.-J. Efficient generation of artificial training DB for ship detection using satellite SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11764–11774.
  2. Chang, Y.-L.; Anagaw, A.; Chang, L.; Wang, Y.C.; Hsiao, C.-Y.; Lee, W.-H. Ship detection based on YOLOv2 for SAR imagery. Remote Sens. 2019, 11, 786.
  3. Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR dataset of ship detection for deep learning under complex backgrounds. Remote Sens. 2019, 11, 765.
  4. Ma, M.; Chen, J.; Liu, W.; Yang, W. Ship classification and detection based on CNN using GF-3 SAR images. Remote Sens. 2018, 10, 2043.
  5. Wang, C.; Zhang, H.; Wu, F.; Jiang, S.; Zhang, B.; Tang, Y. A novel hierarchical ship classifier for COSMO-SkyMed SAR data. IEEE Geosci. Remote Sens. Lett. 2014, 11, 484–488.
  6. Zhang, H.; Tian, X.; Wang, C.; Wu, F.; Zhang, B. Merchant vessel classification based on scattering component analysis for COSMO-SkyMed SAR images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1275–1279.
  7. Jiang, M.; Yang, X.; Dong, Z.; Fang, S.; Meng, J. Ship classification based on superstructure scattering features in SAR images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 616–620.
  8. Xing, X.; Ji, K.; Zou, H.; Chen, W.; Sun, J. Ship classification in TerraSAR-X images with feature space-based sparse representation. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1562–1566.
  9. Novak, L.M.; Halversen, S.D.; Owirka, G.J.; Hiett, M. Effects of polarization and resolution on the performance of a SAR automatic target recognition system. Lincoln Lab. J. 1995, 8, 49–68.
  10. Moore, T.G.; Zuerndorfer, B.W.; Burt, E.C. Enhanced imagery using spectral-estimation-based techniques. Lincoln Lab. J. 1997, 10, 171–186.
  11. Gupta, I.; Beals, M.; Moghaddar, A. Data extrapolation for high resolution radar imaging. IEEE Trans. Antennas Propag. 1994, 42, 1540–1545.
  12. Li, J.; Stoica, P. Efficient mixed-spectrum estimation with applications to target feature extraction. IEEE Trans. Signal Process. 1996, 44, 281–295.
  13. Odendaal, J.; Barnard, E.; Pistorius, C. Two-dimensional superresolution radar imaging using the MUSIC algorithm. IEEE Trans. Antennas Propag. 1994, 42, 1386–1391.
  14. Roy, R.; Kailath, T. ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 984–995.
  15. Kay, S.M. Modern Spectral Estimation: Theory and Application; Prentice-Hall: Englewood Cliffs, NJ, USA, 1988.
  16. Lee, S.-J.; Lee, M.-J.; Kim, K.-T.; Bae, J.-H. Classification of ISAR images using variable cross-range scaling. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2291–2303.
  17. Ye, F.; Zhang, F.; Zhu, J. ISAR super-resolution imaging based on sparse representation. In Proceedings of the 2010 International Conference on Wireless Communications & Signal Processing (WCSP), Suzhou, China, 21–23 October 2010; pp. 1–6.
  18. He, C.; Liu, L.; Xu, L.; Liu, M.; Liao, M. Learning based compressed sensing for SAR image super-resolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1272–1281.
  19. Pan, Z.; Yu, J.; Huang, H.; Hu, S.; Zhang, A.; Ma, H.; Sun, W. Super-resolution based on compressive sensing and structural self-similarity for remote sensing images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4864–4876.
  20. Deka, B.; Gorain, K.K.; Kalita, N.; Das, B. Single image super-resolution using compressive sensing with learned overcomplete dictionary. In Proceedings of the 2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Jodhpur, India, 18–21 December 2013; pp. 1–5.
  21. Rostami, M.; Michailovich, O.; Wang, Z. Image deblurring using derivative compressed sensing for optical imaging application. IEEE Trans. Image Process. 2012, 21, 3139–3149.
  22. Edeler, T.; Ohliger, K.; Hussmann, S.; Mertins, A. Multi image super resolution using compressed sensing. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2868–2871.
  23. Lee, S.-J.; Lee, S.-G. Super-resolution procedure for target responses in KOMPSAT-5 images. Sensors 2022, 22, 7189.
  24. Potter, L.C.; Chiang, M.; Carriere, R.; Gerry, M.J. A GTD-based parametric model for radar scattering. IEEE Trans. Antennas Propag. 1995, 43, 1058–1067.
  25. Bae, H.; Kang, S.; Kim, T.; Yang, E. Performance of sparse recovery algorithms for the reconstruction of radar images from incomplete RCS data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 860–864.
  26. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data; Artech House: Norwood, MA, USA, 2005.
  27. Elad, M. Sparse and Redundant Representations; Springer-Verlag: Berlin, Germany, 2010.
  28. Bae, J.-H.; Kang, B.-S.; Yang, E.; Kim, K.-T. Compressive sensing-based algorithm for one-dimensional scattering center extraction. Microw. Opt. Technol. Lett. 2016, 58, 1408–1415.
  29. Martorella, M.; Giusti, E.; Berizzi, F.; Bacci, A.; Mese, E.D. ISAR based technique for refocusing non-cooperative targets in SAR images. IET Radar Sonar Navig. 2012, 6, 332–340.
  30. Chen, V.C.; Martorella, M. Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications; SciTech Publishing: Chennai, India, 2014.
  31. Martorella, M.; Berizzi, F. Time windowing for highly focused ISAR image reconstruction. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 992–1007.
  32. Soumekh, M. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 1999.
  33. Cuomo, K.; Pion, J.; Mayhan, J. Ultrawide-band coherent processing. IEEE Trans. Antennas Propag. 1999, 47, 1094–1107.
  34. Symolon, W.; Dagli, C. Single-image super resolution using convolutional neural network. Procedia Comput. Sci. 2021, 185, 213–222.
  35. Choi, Y.-J.; Han, S.-H.; Kim, Y.-W. A no-reference CNN-based super-resolution method for KOMPSAT-3 using adaptive image quality modification. Remote Sens. 2021, 13, 3301.
  36. l_1-MAGIC. Available online: https://candes.su.domains/software/l1magic/ (accessed on 1 January 2022).
  37. Wang, J.; Liu, X.; Zhou, Z. Minimum-entropy phase adjustment for ISAR. IEE Proc. Radar Sonar Navig. 2004, 151, 203–209.
  38. Kang, M.-S.; Lee, S.-J.; Lee, S.-H.; Kim, K.-T. ISAR imaging of high-speed maneuvering target using gapped stepped-frequency waveform and compressive sensing. IEEE Trans. Signal Process. 2017, 26, 5043–5056.
  39. Kang, M.-S.; Kim, K.-T. Compressive sensing based SAR imaging and autofocus using improved Tikhonov regularization. IEEE Sens. J. 2019, 19, 5529–5540.
Figure 1. Overall flowchart of the proposed method, which consists of four steps: (1) preprocessing, (2) SC extraction, (3) generation of the super-resolved IRF-S, (4) convolution of the SC image and super-resolved IRF-S.
Figure 2. 2D frequency spectrum of target image.
Figure 3. ROFOC 2D frequency spectrum for moving target with complex 3D motion.
Figure 4. REF and LR target images for stationary ship target with PR and LR IRF-Ss. (a) REF target image, (b) LR target image, (c) PR IRF-S, (d) LR IRF-S.
Figure 5. SC image and super-resolved IRF-S. (a) SC image, (b) super-resolved IRF-S.
Figure 6. SR results obtained using five SR methods for static ship target. (a) Burg, (b) MCM, (c) BP, (d) BPDN, (e) proposed.
Figure 7. PR and REF target images for moving ship target with moderate motion. (a) PR target image, (b) REF target image.
Figure 8. LR target image and SR results obtained using five SR methods for moving ship target with moderate motion. (a) LR target image, (b) Burg, (c) MCM, (d) BP, (e) BPDN, (f) proposed.
Figure 9. PR and REF target images for moving ship target with complex motion. (a) PR target image, (b) REF target image (PA only), (c) REF target image (OTW + PA).
Figure 10. LR target image and SR results obtained using five SR methods for moving ship target with complex motion. (a) LR target image, (b) Burg, (c) MCM, (d) BP, (e) BPDN, (f) proposed.
Figure 11. PR and REF target images for moving target with moderate motion in the case of improvement. (a) PR target image, (b) REF target image.
Figure 12. Original IRF-S, super-resolved IRF-S, SC image, and super-resolved target image. (a) Original IRF-S, (b) super-resolved IRF-S, (c) SC image, (d) super-resolved target image.
Figure 13. Comparison of SR results obtained using the proposed method with those obtained using the four algorithms in [23] at r = 4. (a) Burg, (b) MCM, (c) BP, (d) BPDN, (e) proposed.
Figure 14. Comparison of SR results obtained using the proposed method with those obtained using the four algorithms in [23] at r = 7. (a) Burg, (b) MCM, (c) BP, (d) BPDN, (e) proposed.
Table 1. PSNRs for super-resolved images in Figure 6.

          d = 2    d = 2.5   d = 3    d = 3.5   d = 4
Burg      30.39    29.08     28.17    27.72     27.65
MCM       30.88    29.01     28.23    27.58     27.44
BP        29.68    28.76     28.13    27.99     27.63
BPDN      29.71    28.76     28.13    27.99     27.63
Proposed  29.64    29.61     28.27    28.62     28.04
Table 2. SSIMs for super-resolved images in Figure 6.

          d = 2   d = 2.5   d = 3   d = 3.5   d = 4
Burg      0.68    0.62      0.51    0.46      0.44
MCM       0.71    0.63      0.54    0.44      0.42
BP        0.72    0.63      0.62    0.56      0.56
BPDN      0.72    0.64      0.62    0.56      0.56
Proposed  0.75    0.74      0.62    0.64      0.58
Table 3. Standard deviations of the BPDN algorithm.

       d = 2         d = 2.5       d = 3         d = 3.5       d = 4
PSNR   5.29 × 10⁻⁴   2.88 × 10⁻⁵   7.27 × 10⁻⁶   1.19 × 10⁻⁵   6.48 × 10⁻⁵
SSIM   6.92 × 10⁻⁶   4.28 × 10⁻⁶   4.2 × 10⁻⁶    3.27 × 10⁻⁶   2.79 × 10⁻⁶
Table 4. PSNRs for super-resolved images in Figure 8.

          d = 2    d = 2.5   d = 3    d = 3.5   d = 4
Burg      29.74    28.78     28.13    27.7      26.77
MCM       29.54    29.06     27.65    26.96     26.84
BP        27.69    28.12     26.14    26.45     26.82
BPDN      27.66    28.13     26.13    26.45     26.83
Proposed  28.93    28.17     28.36    28.6      26.86
Table 5. SSIMs for super-resolved images in Figure 8.

          d = 2   d = 2.5   d = 3   d = 3.5   d = 4
Burg      0.75    0.63      0.62    0.58      0.47
MCM       0.75    0.67      0.64    0.47      0.46
BP        0.71    0.68      0.64    0.61      0.56
BPDN      0.71    0.68      0.64    0.61      0.56
Proposed  0.75    0.74      0.74    0.72      0.69
Table 6. PSNRs for super-resolved images in Figure 10.

          d = 2    d = 2.5   d = 3    d = 3.5   d = 4
Burg      28.74    27.63     27.22    26.68     26.3
MCM       28.81    27.38     27.26    26.71     26.1
BP        28.76    27.77     27.21    26.96     26.31
BPDN      28.77    27.77     27.21    26.96     26.31
Proposed  29.42    28        28.32    27.7      27.2
Table 7. SSIMs for super-resolved images in Figure 10.

          d = 2   d = 2.5   d = 3   d = 3.5   d = 4
Burg      0.77    0.74      0.71    0.67      0.67
MCM       0.79    0.74      0.7     0.68      0.67
BP        0.8     0.78      0.75    0.74      0.71
BPDN      0.8     0.78      0.75    0.74      0.71
Proposed  0.8     0.79      0.76    0.77      0.77
Table 8. CTs for super-resolved images in Figure 13.

              r = 3   r = 4   r = 5    r = 6    r = 7
Burg (s)      0.15    0.22    0.29     0.39     0.48
MCM (s)       0.15    0.21    0.3      0.4      0.52
BP (s)        4.19    8.66    17.15    31.68    45.52
BPDN (s)      29.2    64.45   117.24   195.72   298.25
Proposed (s)  0.12    0.14    0.23     0.37     0.39
Table 9. SEs of super-resolved images in Figure 13 and Figure 14.

r   PR     REF    Burg   MCM    BP     BPDN   Proposed
4   9.74   8.82   7.64   7.94   7.63   7.62   7.13
7   9.74   8.82   7.03   7.75   6.51   6.52   6.69
Table 10. ICs of super-resolved images in Figure 13 and Figure 14.

r   PR     REF     Burg    MCM     BP      BPDN    Proposed
4   8.16   15.53   32.43   26.03   37.79   37.73   47.12
7   8.16   15.53   57.9    30.47   84.23   83.93   69.14