Modulus Stretch-Based Circular SAR Imaging with Contour Thinning

This paper presents a modulus stretch-based circular Synthetic Aperture Radar (SAR) imaging method. The method improves the traditional backprojection algorithm for circular SAR imaging by introducing a modulus stretch transformation function into the imaging process. By performing a modulus stretch transformation on the intermediate results, the target contour in the final imaging result becomes thinner and clearer. A thinner and clearer contour helps to increase the recognizability of the target and provides a basis for subsequent target recognition. The proposed method is demonstrated on line target imaging simulations and on the Gotcha dataset.


Introduction
Synthetic Aperture Radar (SAR) is a type of radar capable of performing high-resolution microwave imaging of target areas under all-weather, day-and-night conditions. Research on SAR includes imaging, target detection, time-frequency analysis and other aspects, among which imaging is one of the most studied fields [1][2][3][4]. The development of SAR has always aimed at improving image resolution. Wide-angle SAR (WSAR) refers to SAR that spans a wide range of azimuth during data acquisition to obtain higher azimuth resolution and more azimuth scattering information. Circular SAR (CSAR) is a special case of wide-angle SAR: the radar platform moves through 360° on a circle centered on the observation scene, with the beam always illuminating the same ground scene to achieve all-round observation.
The imaging mode of circular SAR was first proposed by Soumekh in 1996, who also presented the time-domain model of the CSAR echo signal and a CSAR imaging algorithm based on wavefront reconstruction [5]. Subsequently, many researchers have studied the imaging theory of circular SAR. The Air Force Research Laboratory (AFRL) and other organizations have conducted several experimental tests of WSAR and CSAR and released several test datasets and challenge problems [6][7][8][9][10][11][12][13]. Zhang et al. presented an approximation method using linear spotlight SAR to simulate CSAR [14,15]. Lin, Hong and Teng et al. have carried out a series of studies on the three-dimensional imaging, imaging accuracy and imaging algorithms of CSAR [16][17][18][19][20]. Ponce et al. used the Experimental airborne SAR (E-SAR) system to conduct L-band circular SAR experiments and adopted a fast factorized backprojection algorithm for imaging [21]. Liao et al. presented a modified Omega-K algorithm for accurate focusing: the accuracy can be controlled by keeping enough terms in the two series expansions, so that a well-focused image can be achieved with a proper range approximation [22,23]. Li et al. investigated a fast backprojection algorithm for CSAR imaging; without time-consuming 2D interpolation, it reduces the spectrum loss while significantly improving the computational efficiency compared with direct backprojection [24]. Farhadi et al. proposed a distributed compressed sensing algorithm for circular SAR imaging, which improves the resolution, reduces the side-lobe effect in the full-aperture 3D image, and meanwhile reduces the computation [25]. In recent years, research on circular SAR imaging has mainly focused on deep learning, compressed sensing, 3D imaging and other aspects [26][27][28][29][30][31][32][33][34], and many excellent research results and algorithm theories have emerged.
Starting from the traditional backprojection imaging algorithm, this paper re-examines the imaging algorithm with the aim of improving the recognizability of the target. The idea of combining imaging and target detection has been studied extensively in the field of infrared target detection [35][36][37][38]. This paper presents an imaging method that is more beneficial to target recognition. The traditional backprojection imaging algorithm has high imaging accuracy and a simple implementation, and has been widely used in the radar imaging field. However, in a real scene, a target that has a clear contour under visible-light imaging may become blurred in SAR imaging. The target may have a continuous reflecting surface, which produces a continuous strong signal response in the reflected echo detected by the radar. In the imaging process, the originally distinct contours become connected and cannot be distinguished, which is not conducive to classification and recognition. This paper proposes a contour thinning imaging algorithm based on modulus stretch. Building on the backprojection imaging principle, a modulus stretch transformation function is introduced to suppress the low-modulus areas of the intermediate images. The final processed image has clearer and thinner contour edges than that obtained from the traditional backprojection algorithm, which is more conducive to classification and recognition of targets.

Backprojection Algorithm
According to the principle of SAR detection, the radar transmits a linear frequency-modulated continuous wave toward the target and receives the echo through a matched filter. The radar echo data are a function of the received frequency f and the slow time τ, denoted by S(f, τ). Let r = (x, y, z) denote the target coordinates in the detection scene, and let ∆R(r, τ) denote the difference between the distance from the sensor to the scene origin and the distance from the sensor to the target at coordinate r at time τ. The output of the matched filter, which is the scattering response of the target at coordinate r in the scene, denoted by I(r), is given by [15]

I(r) = ∫∫ S(f, τ) exp(j4πf∆R(r, τ)/c) df dτ, (1)

where c denotes the constant speed of light. Since Equation (1) is calculated point by point and the speed is too slow, it is rewritten to facilitate the use of the Inverse Fast Fourier Transform (IFFT), which yields the backprojection algorithm. Rewriting Equation (1) in discrete form gives [11]

I(r) = Σ_{n=1}^{Np} Σ_{k=1}^{K} S(fk, τn) exp(j4πfk∆R(r, τn)/c), (2)

where Np denotes the number of sampling points of the sensor on the circular orbit and K denotes the number of frequency samples of the echo at each circular orbital position. First consider the inner summation, which can be regarded as the IFFT of S(fk, τn). After the transform, the inner summation can be rewritten as [11]

s(m, τn) = FFTshift(IFFT_Nfft{S(fk, τn)}), m = 1, 2, ..., Nfft, (3)

where Nfft denotes the length of the IFFT. To put m = 1 at the center of the range profile, the function FFTshift(·) is applied to the output of the IFFT. Since m is not a coordinate value in the real scene, it is necessary to interpolate at the real coordinate values in order to obtain, approximately, the IFFT sequence corresponding to the real coordinates. The interpolated expression is denoted by [11]

s_int(r, τn) = interp{s(m, τn); ∆R(r, τn)}, (4)

where r represents the true coordinates in the scene. Interpolating at all the coordinates in the scene gives the overall imaging result of the scene at time τn, S_int(τn):

S_int(τn) = [s_int(r_{p,q}, τn)], p, q = 1, 2, ..., L, (5)

where L represents the side length, in pixels, of the final image.
Finally, all the data at the slow times are superimposed to obtain the coherent backprojection imaging result, represented by Ia, as follows [11]:

Ia = Σ_{n=1}^{Np} S_int(τn). (6)

Since the echo data are complex, the final Ia is also a complex matrix, so its modulus is used to represent the grayscale of the image. Let abs(·) represent the element-wise modulus of a matrix; the grayscale matrix of the image is then

I = abs(Ia) = [ |Ia(1,1)| |Ia(1,2)| ··· |Ia(1,L)| ; |Ia(2,1)| |Ia(2,2)| ··· |Ia(2,L)| ; ... ; |Ia(L,1)| |Ia(L,2)| ··· |Ia(L,L)| ]. (7)
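As an illustration, the backprojection procedure described above can be sketched in a few lines of NumPy. This is not the paper's implementation, only a minimal sketch under simplifying assumptions: uniformly spaced frequency samples, linear interpolation of the range profile, and phase compensation at the lowest sampled frequency.

```python
import numpy as np

def backproject(S, freqs, sensor_pos, grid_xy, nfft=1024):
    """Minimal circular-SAR backprojection sketch (illustrative only).

    S          : (Np, K) phase history, one row per slow-time position
    freqs      : (K,) uniformly spaced sampled frequencies fk
    sensor_pos : (Np, 3) sensor coordinates at each slow time tau_n
    grid_xy    : (L, L, 3) pixel coordinates r in the scene (z = 0)
    """
    c = 299792458.0
    Np, K = S.shape
    df = freqs[1] - freqs[0]
    # Range bins of the IFFT output after FFTshift
    dr = c / (2 * nfft * df)
    rbins = (np.arange(nfft) - nfft // 2) * dr
    image = np.zeros(grid_xy.shape[:2], dtype=complex)
    for n in range(Np):
        # Range profile: Nfft-point IFFT of the echo at slow time tau_n
        s = np.fft.fftshift(np.fft.ifft(S[n], nfft))
        # Differential range dR(r, tau_n): |sensor - origin| - |sensor - r|
        d_origin = np.linalg.norm(sensor_pos[n])
        d_target = np.linalg.norm(grid_xy - sensor_pos[n], axis=-1)
        dR = d_origin - d_target
        # Linear interpolation of the range profile at dR
        s_int = np.interp(dR, rbins, s.real) + 1j * np.interp(dR, rbins, s.imag)
        # Phase compensation at the minimum frequency, then coherent superposition
        image += s_int * np.exp(1j * 4 * np.pi * freqs[0] * dR / c)
    return np.abs(image)  # grayscale image
```

With a point target at the scene origin, the motion-compensated phase history reduces to all ones, and the image should peak at the center pixel.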

Contour Thinning Analysis
Considering the s(m, τn) obtained by Equation (3), when τn is fixed it is essentially a one-dimensional function determined by the slant range difference ∆R(m, τn). At a given time τn, the same ∆R(m, τn) corresponds to the same s(m, τn) and hence to the same interpolated value s_int(r, τn). Therefore, the imaging result of the whole scene is determined entirely by ∆R(m, τn). Imaging the modulus of S_int(τn) yields a series of bright and dark stripes perpendicular to the azimuth of the radar: locations at the same distance from the radar have the same brightness, and the angle of the stripes changes with τn. The scattering properties of the target also produce different echo data as the angle changes, as shown in Figure 1. The final imaging result of the scene is the superposition of this series of striped images. It can be intuitively concluded that the wider the sub-image stripes at each time τn, the larger the area of the final imaging target after superposition.

If the purpose of imaging is not to restore the true scattering properties of the target, but to obtain a clearer and thinner target contour, another approach can be considered: introduce a transformation function ψ(·) into Equation (6). The purpose of the transformation function is to make the contour in the image clearer and thinner while avoiding changing the overall shape of the original contour. In order to satisfy this requirement, ψ(·) should be a monotone function. The transformed image is denoted as

I_ψ = abs( Σ_{n=1}^{Np} ψ(S_int(τn)) ). (8)

In order to evaluate the thinning degree of the contour after transformation, appropriate evaluation indexes should be defined. Since a clearer and thinner contour implies a smaller target area, the target area can be used as one of the evaluation factors. However, if the area decreases and the perimeter also decreases, the result is a reduction of the overall region rather than a thinning of the contour width, which fails to achieve the purpose of contour thinning. Therefore, the perimeter of the region should also be taken into account. Based on these two points, a simple and intuitive evaluation index D(I) of the contour thinning degree is defined: D(I) is the perimeter, in pixels, of all target regions in a binary image divided by the area, in pixels, of all target regions.

D(I) = pixel perimeter of target / pixel area of target. (9)

After introducing the evaluation index of the contour thinning degree, the optimization problem can be described as follows:

ψ* = argmax_ψ D(I_ψ), (10)

where I_ψ denotes the image obtained after applying ψ(·) to the intermediate results. Equation (10) is a compound-function optimization problem, which cannot be solved directly. It probably does not have an optimal solution, but a suboptimal solution can be constructed from given conditions. The problem description is therefore modified to finding any ψ such that

D(I_ψ) > D(I), (11)

where I is the grayscale image obtained by the traditional backprojection algorithm. Intuitively, there are two ways to improve the contour thinning degree: one is to increase the perimeter, the other is to reduce the area. The perimeter can be increased by adding new regions, or by eroding the existing regions to make the edges more complicated; however, neither method obviously makes the contour thinner. Therefore, thinning can only be achieved by reducing the area. If the stripes of the sub-aperture image can be narrowed, it is possible to reduce the target area of the final superimposed image.
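The evaluation index D(I) is straightforward to compute on a binarized image. The sketch below shows one possible implementation; since the exact binarization threshold and perimeter estimator are not specified in the text, a fixed relative threshold and a 4-neighbour boundary count are assumed here.

```python
import numpy as np

def thinning_degree(image, threshold=0.5):
    """Contour thinning degree D(I) = pixel perimeter / pixel area.

    The perimeter estimator is an assumption: a target pixel counts toward
    the perimeter if any of its 4-neighbours is background.
    """
    b = (image >= threshold * image.max()).astype(np.uint8)
    area = int(b.sum())
    if area == 0:
        return 0.0
    # Pad so border target pixels see background neighbours
    p = np.pad(b, 1)
    # Interior pixels: all four neighbours are also target pixels
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]) & b
    perimeter = area - int(interior.sum())
    return perimeter / area
```

A filled 10 × 10 square gives D = 36/100, while a one-pixel-wide line gives D = 1: the thinner the region, the larger the index, which is the behavior the analysis above calls for.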
Reducing the area can be achieved by stretching the modulus of the sub-aperture image, that is, enhancing the regions with high modulus and greatly reducing the regions with low modulus. On the image, this appears as a contrast stretch transformation that retains the highlights and greatly suppresses the lower-brightness areas. Common contrast stretch transformations include the gamma transformation, piecewise linear transformation, histogram equalization, and so on.
Through the above analysis, ψ(·) is constructed as the modulus stretch function, which enhances the high modulus and weakens the low modulus.

Contour Thinning Algorithm
The process of the contour thinning algorithm is as follows:

1. The Nfft-point IFFT is performed on the echo data S(fk, τn) received at each slow time τn to obtain the inverse transform sequence, which is then cyclically shifted with the FFTshift(·) function [11].

2. The real coordinates corresponding to each pixel of the image are calculated, and the ∆R(r, τn) corresponding to each pixel is computed as [11]

∆R(r, τn) = da(r, τn) − da0(r, τn),

where da(r, τn) denotes the distance of the antenna from the origin of the ground scene at time τn, and da0(r, τn) denotes the distance of the antenna from the target on the ground at coordinate r.

3. Linear interpolation is performed on s(m, τn) to obtain the estimated values of the IFFT sequence at all ∆R(r, τn) points, and the sub-image data at each time τn are obtained by multiplying the interpolated data by the phase compensation term [11].

4. Let θ denote the angular size of the synthetic aperture. The sub-images of all τn within one synthetic aperture are superimposed, the modulus of each resulting sub-aperture image is stretched, and finally all the sub-aperture images are superimposed to obtain the final imaging result.
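The fourth step can be sketched as follows, assuming the per-pulse sub-images S_int(τn) have already been computed by the preceding steps. Treating the final superposition of the stretched sub-aperture images as coherent is an assumption of this sketch, not a detail given in the text.

```python
import numpy as np

def contour_thinning_image(sub_images, aperture_size, stretch):
    """Step 4 sketch: group per-pulse sub-images into synthetic apertures,
    stretch the modulus of each sub-aperture image, then superimpose.

    sub_images    : (Np, L, L) complex images S_int(tau_n), one per pulse
    aperture_size : number of pulses per synthetic aperture
    stretch       : modulus-stretch function psi applied per sub-aperture image
    """
    Np = sub_images.shape[0]
    out = np.zeros(sub_images.shape[1:], dtype=complex)
    for start in range(0, Np, aperture_size):
        # Coherent superposition inside one synthetic aperture
        sub = sub_images[start:start + aperture_size].sum(axis=0)
        # Modulus stretch psi(.) on the sub-aperture image
        out += stretch(sub)
    return np.abs(out)
```

With the identity function as `stretch`, this reduces to the grayscale of the plain backprojection superposition, so the transformation is the only difference from the traditional algorithm.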
Among the common contrast stretch transformations (gamma transformation, piecewise linear transformation, histogram equalization, and so on), histogram equalization raises the brightness of dark areas when low-brightness pixels are in the majority, which does not meet our requirement of significantly reducing the low-modulus part. The gamma transformation and the piecewise linear transformation, on the other hand, allow the parameters to be adjusted so that highlights are retained while dark areas are significantly reduced. Therefore, this paper mainly studies the effect of modulus stretch on image contour thinning using the gamma transformation and the piecewise linear transformation.
The gamma transformation in image processing is defined on real numbers, while radar data are complex, so it is slightly modified to act as a modulus stretch. The gamma transformation based on modulus stretch is given by

ψ(z) = |z|^γ · exp(j·arg(z)), (17)

where the modulus is normalized to [0, 1] before the transformation, so that γ > 1 suppresses the low-modulus regions. Similarly, the piecewise linear transformation of complex numbers is given in Equation (18):

ψ(z) = k1·z if |z| ≥ T, ψ(z) = k2·z if |z| < T, (18)

where T is the threshold value, k1 is the enhancement coefficient, and k2 is the inhibition coefficient.
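For complex-valued image data, both transformations can be written compactly. The sketch below assumes the gamma transformation is applied to the modulus normalized by its maximum with the phase preserved, and that the piecewise linear gains k1/k2 switch at the threshold T taken relative to the maximum modulus; these normalization choices are assumptions, not stated in the text.

```python
import numpy as np

def gamma_stretch(z, gamma):
    """Gamma transformation on the normalized modulus, phase preserved (assumed form)."""
    mag = np.abs(z)
    m = mag.max()
    if m == 0:
        return z
    out = np.zeros_like(z)
    nz = mag > 0
    # Normalize modulus to [0, 1], apply gamma, rescale, keep the original phase
    out[nz] = (mag[nz] / m) ** gamma * m * z[nz] / mag[nz]
    return out

def piecewise_stretch(z, T=0.9, k1=1.2, k2=0.1):
    """Piecewise linear modulus stretch (assumed form): moduli at or above the
    relative threshold T are amplified by k1, the rest suppressed by k2."""
    mag = np.abs(z)
    m = mag.max()
    if m == 0:
        return z
    gain = np.where(mag >= T * m, k1, k2)
    return gain * z
```

With γ > 1, or with k1 > 1 > k2, both functions enhance the high-modulus regions and suppress the low-modulus regions, which is exactly the behavior required of ψ(·).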

Simulated Data Imaging
The simulation of CSAR data generation and imaging was carried out in MATLAB. The simulated radar had a carrier frequency of 10 GHz, a bandwidth of 600 MHz, a slant range of 10 km from the ground scene origin, an elevation angle of 30 degrees, 128 frequency samples, a steering angle step of 0.1 degrees, one full rotation, and 3600 sampling points around the circumference. The side length of the imaging area was 20 m, and the image size was 200 × 200 pixels. All targets are assumed to be planar targets with zero height.
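A minimal phase-history generator for point targets, consistent with the differential-range convention used in the backprojection derivation, might look as follows. This is an illustrative sketch rather than the simulator used in the paper; unit reflectivity is assumed for every scatterer.

```python
import numpy as np

def simulate_phase_history(targets, freqs, sensor_pos):
    """Simulate motion-compensated CSAR phase history for point targets.

    targets    : (T, 3) target coordinates, unit reflectivity assumed
    freqs      : (K,) sampled frequencies fk
    sensor_pos : (Np, 3) sensor positions on the circular orbit
    Returns S of shape (Np, K) with S[n, k] = sum_i exp(-j 4 pi fk dR_i / c),
    where dR_i = |sensor| - |sensor - r_i| is the differential range.
    """
    c = 299792458.0
    d_origin = np.linalg.norm(sensor_pos, axis=1)                                     # (Np,)
    d_target = np.linalg.norm(sensor_pos[:, None, :] - targets[None, :, :], axis=-1)  # (Np, T)
    dR = d_origin[:, None] - d_target                                                 # (Np, T)
    # Sum the per-target phase ramps over all scatterers
    phase = -1j * 4 * np.pi / c * freqs[None, None, :] * dR[:, :, None]
    return np.exp(phase).sum(axis=1)                                                  # (Np, K)
```

A line target such as the one in the next subsection is then simply a dense array of such point scatterers along the x axis.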

Single Line Target Imaging
The target was a line consisting of dense points at an interval of 0.1 between x = −5 and x = 5; the y coordinate of the line target was 0. Figure 2a shows the target location map. Figure 2b shows the imaging result of the traditional backprojection algorithm with a synthetic aperture of 5°, and Figure 2c shows the full-aperture imaging result of the traditional backprojection algorithm.

It can be seen from Figures 3 and 4 that the results of synthetic aperture imaging are better than those of full-aperture imaging. The piecewise linear transformation performs better than the gamma transformation in both full-aperture and 5° synthetic aperture imaging. As the gamma transformation parameter and the threshold of the piecewise linear transformation increase, the contour of the imaging result tends to extend outward along the contour line; the higher the parameter value, the more obvious the extension. This helps to strengthen contours, enhance features, and improve the accuracy of identification, but at the cost of increasing imaging noise and worsening imaging accuracy. Since the noise of the gamma transformation was too large, the gamma transformation was not used in the subsequent simulations; the piecewise linear transformation was adopted directly.

Double-Line Target Imaging
The target was two lines consisting of dense points at an interval of 0.1 between x = −5 and x = 5. The y coordinates of the two lines were ±0.2, ±0.15 and ±0.1, that is, the distance between the two lines was 0.4, 0.3 and 0.2, respectively, as shown in Figure 5.


Figure 6 shows the imaging results of the double-line targets after piecewise linear transformation based on a 5° synthetic aperture. It can be seen that the traditional backprojection algorithm can distinguish the two lines when they are 0.4 apart. When the distance is shortened to 0.3, it becomes more difficult to distinguish the two lines, and when the distance is shortened to 0.2, the two lines are completely indistinguishable in the traditional backprojection result. However, the imaging results after piecewise linear transformation can distinguish the two lines in all cases.

The above simulation results show that the imaging result after piecewise linear transformation is better than that of the traditional backprojection algorithm: very close targets that merge together in traditional backprojection imaging can still be distinguished after piecewise linear transformation.

Vehicle Imaging in Gotcha Dataset
The Target Discrimination Research subset of the Gotcha dataset, published by the Air Force Research Laboratory from circular flight observations over a parking lot, was used for the imaging test. The airborne synthetic aperture radar collected data over a 5 km diameter area on 31 orbits at different altitudes, and phase history data of 56 targets were extracted from the large dataset. The targets include 33 civilian vehicles (with repeated models), 22 reflectors, and an open area. The circular synthetic aperture radar provides 360 degrees of azimuth coverage around each target. A Chevrolet Impala LT in the dataset, with data label fcarA1 and orbit number 214, was selected for the modulus stretch imaging; the results are shown in the following figures. Figure 8a shows the imaging result of the traditional backprojection algorithm with a synthetic aperture of 5°, and Figure 8e shows the full-aperture imaging result of the traditional backprojection algorithm.

It can be clearly seen from the figure that the vehicle image obtained by the traditional backprojection algorithm is fuzzy and cannot represent the contour structure and details of the vehicle well. The imaging results after piecewise linear transformation significantly improve the contour information of the vehicle image and can distinguish the double-line fine contour on one side of the vehicle, whereas these two close contours cannot be distinguished in the traditional backprojection image. These fine structures may play an active role in target recognition. At the same time, it is noticed that full-aperture imaging not only thins the contour but also loses part of the contour details, which makes the imaging accuracy worse. The piecewise linear transformation based on a synthetic aperture, however, thins the contour while losing less of the original contour, so its imaging result is better. Figure 9 shows the images of four different models of vehicles after the modulus is stretched by piecewise linear transformation.
Figure 9a-d shows the imaging results of the traditional backprojection algorithm for four models of vehicles, namely Chevrolet Impala LT, Mitsubishi Galant ES, Toyota Highlander and Chevrolet HHR LT. Figure 9e-h shows the corresponding results after the modulus is stretched using the piecewise linear transformation. The parameters of the piecewise linear transformation were k1 = 1.2, k2 = 0.1 and T = 0.9. It can be seen that the images of all models of vehicles after modulus stretch greatly strengthen the contour features of the target and improve its recognizability. Subsequently, target recognition can be carried out by detecting the aspect ratio of the target contour, the relative position of the internal and external contours, the contour angle, and other features.
In order to obtain the optimal threshold parameter of the piecewise linear transformation, different thresholds T were used for imaging different vehicles, and the degree of contour thinning was calculated. The curves are shown in Figure 10. As can be seen from the figure, when the threshold T is between 0.9 and 0.96, the degree of contour thinning reaches a maximum value and then decreases rapidly as T increases. The experiments show that, with k1 = 1.2 and k2 = 0.1, the maximum degree of contour thinning is obtained at a threshold T of about 0.94 for most of the vehicle data in the Gotcha dataset.
Data for 150 vehicles were randomly selected from the Gotcha dataset, and each was imaged both by the traditional backprojection algorithm and by the modulus stretch with piecewise linear transformation. The degree of contour thinning was then calculated; the results are shown in Figure 11.
It can be seen from the figure that the degree of contour thinning after the modulus is stretched by piecewise linear transformation is generally higher than that of the traditional backprojection algorithm. After modulus stretch, the degree of contour thinning is increased by 134% on average, which greatly enhances the recognizability of the target.
We have made a preliminary study of target classification based on contour thinning imaging with piecewise linear transformation. Preliminary experimental results show that high classification accuracy can be achieved by using modulus stretch imaging combined with an appropriate classification method.


Conclusions
This paper presents a contour thinning imaging method. Based on the backprojection algorithm, the method performs a modulus stretch on each sub-aperture image to increase the target's perimeter-to-area ratio, making the target contour thinner and clearer. Experimental results on the Gotcha dataset show that the contour of the image after modulus stretch with piecewise linear transformation is thinner and clearer than the original; the degree of contour thinning is increased by 134% on average. Based on this method, target classification algorithms based on contour thinning imaging will be studied further.