Article

A CSAR 3D Imaging Method Suitable for Edge Computation

1
Department of UAV Engineering, Shijiazhuang Campus, Army Engineering University, Shijiazhuang 050003, China
2
The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050085, China
3
College of Mechanical and Electrical Engineering, Shijiazhuang University, Shijiazhuang 050035, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(9), 2092; https://doi.org/10.3390/electronics12092092
Submission received: 20 March 2023 / Revised: 27 April 2023 / Accepted: 28 April 2023 / Published: 4 May 2023
(This article belongs to the Special Issue Edge AI for 6G and Internet of Things)

Abstract
Due to the large volume of echo data generated by UAV-borne CSAR, the raw echo data must either be transmitted to the ground for processing or processed after the flight, making it difficult to use edge computing resources such as a UAV onboard computer for image processing. The commonly used back projection (BP) algorithm and its improved variants require a large amount of computation and image slowly, which further limits the realization of CSAR 3D imaging on edge nodes. To improve the speed of CSAR 3D imaging, this paper proposes a CSAR 3D imaging method suitable for edge computation. Firstly, an improved Crazy Climber algorithm extracts the sinusoidal track ridges that represent the amplitude changes in the range-compressed echo. Secondly, two-dimensional (2D) profiles of CSAR at different heights are obtained via the inverse Radon transform (IRT). Thirdly, the Hough transform is used to extract the intersection points of the defocused circles along the height slices in the X and Y directions. Finally, 3D point cloud extraction is completed through voting screening. In this method, image detection techniques such as ridge extraction, the IRT, and the Hough transform replace the phase compensation processing of the traditional BP 3D imaging method, which significantly reduces the time required for CSAR 3D imaging. The correctness and effectiveness of the proposed method are verified by 3D imaging results for simulated data of ideal targets and for X-band CSAR outfield flight raw data acquired by a small rotor unmanned aerial vehicle (SRUAV). The proposed method provides a new direction for fast 3D imaging at edge nodes, such as aircraft and small ground terminals. The resulting image can be transmitted directly, improving the information transmission efficiency of the Internet of Things (IoT).

1. Introduction

As a remote sensing modality, airborne CSAR is an important part of the Internet of Things (IoT) sensing system. Making full use of the edge computing power of platforms such as aircraft or small ground stations can reduce the transmission pressure on the IoT and improve the efficiency of information acquisition [1,2]. Compared to linear SAR, CSAR imaging on UAVs requires a long observation time and produces a large amount of echo data, so imaging requires more computing resources. In general, the original data must be transmitted to the ground for processing or post-processed after the flight, and it is challenging to use edge computing power for real-time imaging on a UAV platform. As a result, target information is acquired slowly.
CSAR observation and imaging consume significant computing resources because the back projection (BP) imaging algorithm operates in the time domain [3], so its computational efficiency is low. Taking X-band CSAR imaging carried by a small rotor unmanned aerial vehicle (SRUAV) as an example, when the flight radius is 600 m, there are 180,000 azimuth sampling points and 3000 range sampling points. The BP imaging algorithm must perform phase compensation in the echo signal for every unit of the 3D imaging grid at each sampling point, which requires a huge number of calculations and yields a low imaging speed. The authors in [4] proposed an improved BP algorithm (IBP) that constructs a geometric interpolation kernel to transform 3D interpolation operations into 1D interpolation and range-vector search operations. With this method, the BP 3D imaging time was reduced by two-thirds; however, the imaging speed was still not ideal. Other sub-aperture CSAR 3D imaging methods based on BP imaging [5,6,7] offered no optimization of imaging speed, focusing instead on sub-aperture partitioning according to the scattering characteristics of the target. Overall, the high latency and low speed of current BP 3D imaging algorithms severely limit the realization of CSAR 3D imaging at edge nodes.
In this paper, a CSAR 3D imaging method suitable for edge computation is proposed. The method applies the inverse Radon transform (IRT) and the Hough transform to complete 3D point cloud extraction, replacing the azimuth pulse-by-pulse, grid-point-by-grid-point phase compensation processing of the BP algorithm. It can greatly increase the speed of CSAR 3D imaging and reduce imaging delays. The 3D imaging results for both simulated data and measured CSAR echo data acquired by an SRUAV demonstrate the correctness and effectiveness of the proposed method.
This paper is organized as follows. Section 2 introduces the principle of BP 3D imaging. Then, the processes and key technologies of the CSAR 3D imaging method suitable for edge computation are described in Section 3. Section 4 presents the results and analyses of the proposed method for ideal targets and the X-band CSAR echo dataset carried by an SRUAV. Finally, in Section 5, conclusions are drawn.

2. BP 3D Imaging

2.1. Imaging Geometric Model of CSAR

The CSAR imaging system’s geometric model is shown in Figure 1a. In this system, the SAR is carried by an SRUAV in a circle around the observation scene at a fixed height. The beam center of the SAR antenna always points to the center of the imaging scene. The instantaneous slant range from the SAR platform to the point target P in the movement process is as follows [8]:
$$
R_p(\varphi)=\sqrt{\left(R_{xy}\cos\varphi-r_p\cos\theta_p\right)^2+\left(R_{xy}\sin\varphi-r_p\sin\theta_p\right)^2+\left(H-z_p\right)^2}
=\sqrt{R_{xy}^2+r_p^2+\left(H-z_p\right)^2-2R_{xy}r_p\cos(\varphi-\theta_p)},\qquad \varphi\in[0,2\pi)
\tag{1}
$$
where $R_{xy}$ is the radius of the circular trajectory; $(R_{xy}\cos\varphi,\ R_{xy}\sin\varphi,\ H)$ is the position of the carrier platform at point A, with $\varphi\in[0,2\pi)$ the angle measured from the positive half of the X axis; the incident angle $\psi=\arctan(R_{xy}/H)$ is the angle between the center of the SAR beam and the negative half of the Z axis; and $(r_p\cos\theta_p,\ r_p\sin\theta_p,\ z_p)$ is the coordinate of an arbitrary point target P in the observation scene.
The instantaneous slant range of point target P measured in CSAR mode changes with the position changes in the SAR platform, as shown in Figure 1b.
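As a quick numerical check, Formula (1) can be evaluated directly. The sketch below (Python; the geometry values follow the Section 4 scenario, while the target coordinates are illustrative assumptions) computes the slant range over one circle and the constant range to the scene center:

```python
import numpy as np

# Geometry from the Section 4 scenario; target coordinates are illustrative.
R_xy, H = 600.0, 300.0                       # flight radius and altitude (m)
r_p, theta_p, z_p = 10.0, np.pi / 4, 5.0     # target polar coordinates (m, rad, m)

def slant_range(phi):
    """Instantaneous slant range R_p(phi) of Formula (1)."""
    return np.sqrt(R_xy**2 + r_p**2 + (H - z_p)**2
                   - 2.0 * R_xy * r_p * np.cos(phi - theta_p))

phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
R = slant_range(phi)
R_c = np.hypot(R_xy, H)                      # range to the scene center
```

At $\varphi=\theta_p$ the platform is closest to the target and the slant range reduces to $\sqrt{(R_{xy}-r_p)^2+(H-z_p)^2}$.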

2.2. BP 3D Imaging

Here, the signal emitted by the radar is a frequency-modulated continuous wave (FMCW), and the received echo after de-chirping and range compression can be expressed as follows [9,10]:
$$
s_r(\tau,\varphi)=\sigma_p\,\mathrm{sinc}\!\left[B\!\left(\tau-\frac{2\left(R_p(\varphi)-R_c\right)}{c}\right)\right]\exp\!\left[-j\frac{4\pi f_c}{c}\left(R_p(\varphi)-R_c\right)-j\frac{4\pi K_r}{c^2}\left(R_p(\varphi)-R_c\right)^2\right]
=\sigma_p\,\mathrm{sinc}\!\left[B\!\left(\tau-\frac{2\left(R_p(\varphi)-R_c\right)}{c}\right)\right]\exp\!\left[-j\frac{4\pi f}{c}\left(R_p(\varphi)-R_c\right)\right]
\tag{2}
$$
where $\tau$ is the fast time; $\sigma_p$ is the scattering coefficient of point target P; $\mathrm{sinc}$ is the sinc function, $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$; $B$ denotes the transmitted signal bandwidth; $R_c=\sqrt{R_{xy}^2+H^2}$ is the slant range between the SAR and the center of the scene; $c=3\times 10^8$ m/s is the propagation speed of electromagnetic waves; and $f=f_c+f_r$ is the frequency of the transmitted signal, where $f_c$ is the center frequency, $f_r=K_r\tau/2$ is the range frequency, and $K_r$ is the linear frequency-modulation rate.

2.2.1. Traditional BP 3D Imaging

The traditional BP 3D imaging algorithm obtains 3D images via coherent accumulation over azimuth pulses [11] and can be expressed by the following formula:
$$
I(x,y,z)=\int_{\varphi} s_r(\tau,\varphi)\exp\!\left[j\frac{4\pi}{\lambda}\left(R_p(\varphi)-R_c\right)\right]d\varphi
\tag{3}
$$
where $I(x,y,z)$ is the scattering intensity value of the grid point $(x,y,z)$ in the imaging space region. A BP 3D image is obtained only after every grid point in the 3D imaging space has been processed with Formula (3).
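For concreteness, a minimal discrete sketch of this pulse-by-pulse accumulation is given below (array names are hypothetical, and a nearest-bin lookup stands in for range interpolation):

```python
import numpy as np

def bp_3d(echo, phi, ranges, grid_pts, R_c, lam):
    """Discrete form of Formula (3): for every 3-D grid point, compensate the
    phase at each azimuth pulse and accumulate coherently.
    echo: (N_a, N_r) range-compressed data; phi: (N_a,) azimuth angles;
    ranges: (N_r,) fast-time range axis; grid_pts: (P, 3) grid coordinates."""
    R_xy, H = 600.0, 300.0                    # illustrative platform geometry
    img = np.zeros(len(grid_pts), dtype=complex)
    for a, ph in enumerate(phi):              # one pass per azimuth pulse
        plat = np.array([R_xy * np.cos(ph), R_xy * np.sin(ph), H])
        Rp = np.linalg.norm(grid_pts - plat, axis=1)
        bins = np.clip(np.searchsorted(ranges, Rp), 0, len(ranges) - 1)
        img += echo[a, bins] * np.exp(1j * 4 * np.pi / lam * (Rp - R_c))
    return np.abs(img)
```

Note the nested cost: the inner phase compensation runs once per grid point per pulse, which is exactly the $N_x\times N_y\times N_z\times N_a$ operation count analyzed in Section 3.3.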

2.2.2. IBP 3D Imaging

The IBP 3D imaging method transforms the 3D processing of the BP algorithm into 1D phase compensation and search operations by constructing a geometric interpolation kernel [4]. The geometric interpolation kernel is as follows:
$$
k_{\mathrm{ref}}=s_r(\tau,\varphi)\exp\!\left[j\frac{4\pi}{\lambda}\left(r_{\mathrm{ref}}-R_c\right)\right]
\tag{4}
$$
where r ref is the slant range vector, which can be expressed as follows:
$$
r_{\mathrm{ref}}=\left[R_{\min},\ R_{\min}+\Delta r,\ R_{\min}+2\Delta r,\ \ldots,\ R_{\max}\right]
\tag{5}
$$
where R min and R max are the minimum and maximum values of the slant range, respectively, and Δ r is the slant range interval.
In the IBP 3D imaging method, firstly, the slant range $R_p(\varphi)$ between the coordinate unit $(x,y,z)$ and the platform is calculated. Secondly, the index $l$ of the element of $r_{\mathrm{ref}}$ closest to $R_p(\varphi)$ is computed from $R_p(\varphi)-R_{\min}$. Thirdly, the $l$-th element of the phase-compensated kernel is looked up, and coherent accumulation is performed.
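The three steps above can be sketched as follows (a simplified single-pulse step with hypothetical names; the kernel of Formula (4) is built once per pulse, so each grid point costs only an index computation and a lookup):

```python
import numpy as np

def ibp_accumulate(echo_pulse, r_ref, R_min, dr, Rp, R_c, lam):
    """One-pulse IBP step: pre-compensate the phase on the slant-range vector
    r_ref once (the geometric interpolation kernel), then each grid point
    only needs the index l of its nearest slant-range sample."""
    k_ref = echo_pulse * np.exp(1j * 4 * np.pi / lam * (r_ref - R_c))
    l = np.rint((Rp - R_min) / dr).astype(int)   # nearest-element search
    l = np.clip(l, 0, len(r_ref) - 1)
    return k_ref[l]                              # contributions to accumulate
```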

3. A CSAR 3D Imaging Method Suitable for Edge Computation

3.1. Principle of the Method

Let $R=R_p(\varphi)-R_c$. When the radar and the target satisfy the far-field conditions, i.e., $\left(r_p/R_{xy}\right)^2\to 0$ and $\left(z_p/H\right)^2\to 0$, Formula (1) can be approximated as follows:
$$
R_p(\varphi)-R_c=\sqrt{R_{xy}^2+r_p^2+\left(H-z_p\right)^2-2R_{xy}r_p\cos(\varphi-\theta_p)}-R_c\approx -r_p\sin\psi\cos(\varphi-\theta_p)-z_p\cos\psi
\tag{6}
$$
During the CSAR movement, the incident angle ψ of the beam remains unchanged. Substituting Formula (6) into Formula (2) yields the following:
$$
s_r(\tau,\varphi)\approx\sigma_p\,\mathrm{sinc}\!\left[B\!\left(\tau+\frac{2\left(r_p\sin\psi\cos(\varphi-\theta_p)+z_p\cos\psi\right)}{c}\right)\right]\exp\!\left[j\frac{4\pi f}{c}\left(r_p\sin\psi\cos(\varphi-\theta_p)+z_p\cos\psi\right)\right]
\tag{7}
$$
where the sinc argument $B\left(\tau+2\left(r_p\sin\psi\cos(\varphi-\theta_p)+z_p\cos\psi\right)/c\right)$ describes the position track of target point P in the range-compressed echo. When point P is located at the center of the scene ($r_p=0$), Formula (7) reduces to the following:
$$
s_r(\tau,\varphi)\approx\sigma_p\,\mathrm{sinc}\!\left[B\!\left(\tau+\frac{2z_p\cos\psi}{c}\right)\right]\exp\!\left[j\frac{4\pi f}{c}z_p\cos\psi\right]
\tag{8}
$$
Here, the position track of the target, $B\left(\tau+2z_p\cos\psi/c\right)$, is constant; that is, the track of the scene center point does not change with the position of the SAR platform after range compression. Next, ignoring changes in the target scattering coefficient $\sigma_p$, the tracks of all target points other than the scene center are sinusoidal curves whose period is $2\pi$, consistent with one revolution of the CSAR platform. The oscillation amplitude, $2Br_p\sin\psi/c$ range cells, depends on the radar signal bandwidth $B$, the distance $r_p$ between the target point and the scene center, and the incident angle $\psi$ of the radar beam. Once the CSAR observation geometry is fixed, the amplitude of the sinusoidal track of a stationary point target depends only on $r_p$: the larger $r_p$ is, the greater the amplitude of the sinusoidal oscillation. The initial phase is $-\theta_p$, i.e., the negative of the target's azimuth angle.
When multiple targets are located at different positions and azimuth angles in the scene, the range-compressed echoes form multiple sinusoidal curves with the same period but different amplitudes and initial phases. These sinusoids overlap one another, with the track of the scene center point as the center of symmetry. By detecting the sinusoidal curves of the range-compressed CSAR echo data along the azimuth direction, the position of each target in the scene can be extracted, and 2D imaging of the scene can be realized.
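The sinusoidal-track behaviour described above can be reproduced numerically. The sketch below (illustrative parameters loosely following the Section 4 setup) generates the range-bin track of a scene-center target and of an off-center target:

```python
import numpy as np

# Illustrative parameters: X-band bandwidth and the Section 4 geometry.
c, B = 3e8, 750e6
psi = np.arctan(600.0 / 300.0)                     # incidence angle
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

def track_bins(r_p, theta_p, z_p):
    """Range-bin trajectory of a point target after range compression:
    a sinusoid in phi with amplitude 2*B*r_p*sin(psi)/c and phase -theta_p."""
    return (2 * B / c) * (z_p * np.cos(psi)
                          + r_p * np.sin(psi) * np.cos(phi - theta_p))

centre = track_bins(0.0, 0.0, 0.0)     # scene center: a flat line at bin 0
edge = track_bins(8.0, np.pi / 3, 0.0) # off-center target: a full sinusoid
amplitude = (edge.max() - edge.min()) / 2.0
```

The center track is constant while the off-center track oscillates with exactly the amplitude $2Br_p\sin\psi/c$ predicted above.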
The IRT can accumulate and focus a sinusoidal curve in an image, transforming it into a point on the imaging plane [12]. Ignoring changes in the target scattering coefficient $\sigma_p$, the sinusoidal track of an ideal point target echo after range compression is as follows:
$$
s_{r0}(\tau,\varphi)=\delta\!\left(B\tau+\frac{2Bz_p\cos\psi}{c}+\frac{2Br_p\sin\psi}{c}\cos(\varphi-\theta_p)\right)
\tag{9}
$$
The IRT of s r 0 τ , φ is as follows:
$$
g(x,y)=\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} s_{r0}(\tau,\varphi)\exp\!\left(j2\pi\frac{2Bz_i\cos\psi}{c}\right)\exp\!\left[j2\pi\left(xk_x+yk_y\right)\right]dk_x\,dk_y
\tag{10}
$$
where $k_x=\frac{2B\sin\psi}{c}\cos\varphi$ and $k_y=\frac{2B\sin\psi}{c}\sin\varphi$. When the height of the imaging plane coincides with the actual height of the target, that is, $z_i=z_p$, a sinusoidal curve in the range-compressed echo signal is transformed by the IRT into a point $(x_i,y_i)$ in the imaging plane $x$-$y$, and the amplitude $A_i$ and initial phase $\theta_i$ of the original sinusoid are related to the point coordinates as follows:
$$
\hat{A}_i=\sqrt{x_i^2+y_i^2},\qquad \hat{\theta}_i=\arctan\frac{y_i}{x_i}
\tag{11}
$$
The IRT of the range-compressed echo of $N$ ideal point targets is:
$$
I(x,y)=\sum_{i=1}^{N}\delta\!\left(x-A_i\cos\theta_i\right)\delta\!\left(y-A_i\sin\theta_i\right)
\tag{12}
$$
When the height of the imaging plane is inconsistent with the actual height of the target, that is, $z_i\neq z_p$, the situation differs. According to CSAR confocal 3D imaging theory [13], a target is accurately focused in the 2D image only when the imaging height matches the target height; at any other imaging height, the target is defocused into a circle whose radius is linearly related to the deviation between the imaging height and the target height. In this case, the sinusoidal curve is transformed by the IRT into a circle in the imaging plane $x$-$y$, and the relationship between the radius $\Delta r$ of the circle and the height deviation $\Delta h=z_p-z_i$ can be defined as follows:
$$
\Delta r=\Delta h\tan\psi
\tag{13}
$$
In CSAR imaging mode, the incident angle $\psi$ is constant, so the relationship between the circle radius $\Delta r$ and the height deviation $\Delta h$ is linear. As shown in Figure 1, the target focus point P is the vertex of the cone formed by the circles at different heights, i.e., the intersection point of the cone generatrices passing through the vertex.
The Hough transform is an effective method for detecting the intersections of many straight lines [14]. In the Hough-transformed space $(\rho,\vartheta)$ of a binary image, each of two line segments in the image corresponds to a peak point, and their intersection lies on the sinusoidal curves passing through both peaks. From the coordinates $(\rho_1,\vartheta_1)$ and $(\rho_2,\vartheta_2)$ of the two peak points, the intersection coordinates of the two line segments in the image can be calculated via the Hough transformation formula as follows:
$$
\begin{cases}
\rho_1=x_0\cos\vartheta_1+y_0\sin\vartheta_1\\
\rho_2=x_0\cos\vartheta_2+y_0\sin\vartheta_2
\end{cases}
\;\Rightarrow\;
\begin{cases}
x_0=\dfrac{\rho_2\sin\vartheta_1-\rho_1\sin\vartheta_2}{\sin(\vartheta_1-\vartheta_2)}, & x_0\in[0,x_{\max}]\\[2ex]
y_0=\dfrac{\rho_1\cos\vartheta_2-\rho_2\cos\vartheta_1}{\sin(\vartheta_1-\vartheta_2)}, & y_0\in[0,y_{\max}]
\end{cases}
\tag{14}
$$
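A minimal implementation of this intersection computation (the function name is hypothetical) is:

```python
import numpy as np

def hough_intersection(rho1, th1, rho2, th2):
    """Intersection (x0, y0) of two lines given by their Hough-space peaks
    (rho, theta), solving Formula (14)."""
    s = np.sin(th1 - th2)
    if np.isclose(s, 0.0):
        raise ValueError("parallel lines: no unique intersection")
    x0 = (rho2 * np.sin(th1) - rho1 * np.sin(th2)) / s
    y0 = (rho1 * np.cos(th2) - rho2 * np.cos(th1)) / s
    return x0, y0
```

For example, the line $x=1$ (peak $(\rho,\vartheta)=(1,0)$) and the line $y=2$ (peak $(2,\pi/2)$) intersect at $(1,2)$.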

3.2. Algorithm Flow

A flow chart of the CSAR fast 3D imaging method based on image detection presented in this paper is shown in Figure 2. This method includes three steps: (1) using the Crazy Climber algorithm to extract the sine track ridge [15], (2) using IRT to obtain the CSAR height profile, and (3) applying Hough transform to extract a CSAR 3D point cloud. The details are as follows.
Step 1: Extract the sinusoidal track ridge. Because the sinusoidal trajectories of different targets overlap and cross in the range-compressed echo signal, high-quality CSAR images cannot be obtained directly by the IRT. The improved Crazy Climber algorithm is used to extract the ridge of each target's sinusoidal trajectory [15], which effectively improves the CSAR imaging result. The specific steps of the improved Crazy Climber algorithm are as follows.
(1) Initialization. $s_r(\tau,\varphi)$ is the $M\times N$ observation matrix; the observation matrix $R$ and the measurement matrix $D$ are zero matrices with the same dimensions as $s_r(\tau,\varphi)$; and $K$ climbers are evenly distributed over the observation matrix $R$. Let the number of climber moves be $n$.
(2) Move the climber. The times corresponding to the climber's moves are $t_k$ ($k=1,2,\ldots,n$). If the position of the climber at time $t_k$ is $pos(t_k)=(i,j)$, its position $pos(t_{k+1})=(i',j')$ at time $t_{k+1}$ is estimated by calculating the probability of the climber moving to each of the six adjacent positions:
$$
p_u=\frac{\Delta A_u}{\sum_{u=1}^{6}\Delta A_u}
\tag{15}
$$
where $\Delta A_u=S(A_u)-S(pos(t_k))+\left|\min_u\left[S(A_u)-S(pos(t_k))\right]\right|$, and $S(A_u)-S(pos(t_k))$ is the amplitude increment of the adjacent position $A_u$ ($u=1,2,\ldots,6$) relative to the position $pos(t_k)$. When $pos(t_k)$ lies inside the matrix, that is, $2\le i\le M-1$ and $2\le j\le N-1$, the six adjacent positions are $A_1(i-1,j-1)$, $A_2(i,j-1)$, $A_3(i+1,j-1)$, $A_4(i-1,j+1)$, $A_5(i,j+1)$, and $A_6(i+1,j+1)$. Conversely, when $pos(t_k)$ lies at the edge of the matrix, the ends of the matrix $s_r(\tau,\varphi)$ are first connected so that the indices wrap around. For example, if $i=M$ and $2\le j\le N-1$, the six adjacent positions are $A_1(i-1,j-1)$, $A_2(i,j-1)$, $A_3(1,j-1)$, $A_4(i-1,j+1)$, $A_5(i,j+1)$, and $A_6(1,j+1)$, as shown in Figure 3.
(3) According to the moving result, add 1 to the corresponding position of the measurement matrix D .
(4) Repeat the above steps until the Climber traverses s r τ , φ to obtain the final metric matrix D .
(5) Traverse the measurement matrix $D$ along the slow-time direction to form sinusoidal ridge lines, and eliminate ridge lines that are too short, obtaining the ridge-line matrix $s_{rj}(\tau,\varphi)$.
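The climber move of step (2) can be sketched as follows (a simplified single move with hypothetical names, using the shifted amplitude increments described above and index wrap-around at the range edge):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def climber_step(S, pos):
    """One move of a climber on the amplitude matrix S from pos = (i, j):
    choose one of the six neighbours (previous/next slow-time column,
    wrapping the range index at the matrix edge) with probability
    proportional to the amplitude increment shifted to be non-negative."""
    M, _ = S.shape
    i, j = pos
    neighbours = [((i - 1) % M, j - 1), (i, j - 1), ((i + 1) % M, j - 1),
                  ((i - 1) % M, j + 1), (i, j + 1), ((i + 1) % M, j + 1)]
    inc = np.array([S[n] - S[i, j] for n in neighbours], dtype=float)
    inc -= inc.min()                      # shift so every increment is >= 0
    p = np.full(6, 1 / 6) if inc.sum() == 0 else inc / inc.sum()
    return neighbours[rng.choice(6, p=p)]
```

Repeating this step and incrementing the measurement matrix $D$ at each visited position concentrates the climbers on the high-amplitude sinusoidal ridges.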
Step 2: Obtain the CSAR height profile. Two-dimensional profiles of CSAR with different heights are obtained via IRT.
(1) Divide the sub-apertures. According to the scattering characteristics of the targets in the scene, $s_{rj}(\tau,\varphi)$ is divided along the slow time into sub-apertures $s_{rj\_sub,i}(\tau,\varphi)$, where $i=1,2,\ldots,N$; $N$ is the number of sub-apertures, obtained by dividing the full-aperture slow-time length by the sub-aperture width $B_{sub}$;
(2) Set the heights of the imaging planes $h_k$ ($k=1,2,\ldots,K$). By changing the range center of the data $s_{rj\_sub,i}(\tau,\varphi)$ during the IRT, 2D imaging of the CSAR at different height planes can be realized;
(3) Sub-aperture IRT imaging. The IRT is performed on $s_{rj\_sub,i}(\tau,\varphi)$ along the slow-time direction to obtain $I_{rj\_sub,h_k,i}$, the 2D image of each sub-aperture at height $h_k$:

$$
I_{rj\_sub,h_k,i}=\mathrm{IRT}\left[s_{rj\_sub,i}(\tau,\varphi)\right]
\tag{16}
$$
where $\mathrm{IRT}[\cdot]$ denotes the inverse Radon transform operator.
(4) Sub-aperture image fusion. To reduce the average effect of sub-apertures in incoherent processing and the influence of scattering center intensity in different sub-apertures, sub-aperture images are fused based on the generalized likelihood ratio test (GLRT) [16], and the largest pixel in each sub-image is taken as the pixel of the final 2D image:
$$
I_{final,h_k}(x,y)=\max_{i}\; I_{rj\_sub,h_k,i}(x,y)
\tag{17}
$$
where $I_{final,h_k}$ is the CSAR 2D image at height $h_k$. CSAR 2D images at different heights are then stacked to form the CSAR image volume $I_{final}$.
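As an illustration of sub-steps (3) and (4), the sketch below replaces the full IRT with a simplified unfiltered back-projection (an assumption made to keep the example self-contained; names are hypothetical) and fuses the sub-aperture images with a pixel-wise maximum:

```python
import numpy as np

def irt_backproject(sinogram, theta, n_px):
    """Minimal unfiltered back-projection standing in for the IRT: each
    azimuth column of the (bins x angles) sinogram is smeared back along
    its projection direction onto an n_px x n_px grid."""
    n_bins = sinogram.shape[0]
    half = (n_px - 1) / 2.0
    y, x = np.mgrid[0:n_px, 0:n_px] - half
    img = np.zeros((n_px, n_px))
    for k, th in enumerate(theta):
        t = x * np.cos(th) + y * np.sin(th)          # signed bin coordinate
        b = np.clip(np.rint(t + n_bins // 2).astype(int), 0, n_bins - 1)
        img += sinogram[b, k]
    return img / len(theta)

def fuse_subapertures(sinogram, theta, n_sub, n_px):
    """Sub-aperture imaging with pixel-wise maximum fusion."""
    idx = np.array_split(np.arange(len(theta)), n_sub)
    subs = [irt_backproject(sinogram[:, s], theta[s], n_px) for s in idx]
    return np.max(subs, axis=0)
```

A point target at the scene center (a constant track through the central range bin) back-projects to a focused peak at the image center in every sub-aperture, so the fused image keeps it at full amplitude.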
Step 3: Extract the CSAR 3D point cloud. Hough transform is used to extract the intersection points of the height slices of the defocused circle in the X and Y directions, respectively. Lastly, 3D point cloud extraction is completed after voting fusion.
(1) Layering along the X (Y) direction. Firstly, the CSAR image volume $I_{final}$ obtained in step 2 is layered along the X direction and the Y direction, respectively, as shown in Figure 4a. Each layer's data $I_{Xj}$ ($I_{Yj}$) represent a slice in the Z-Y (Z-X) plane, where $j=1,2,\ldots,J$. Then, the Canny operator is used to binarize each layer $I_{Xj}$ ($I_{Yj}$) to obtain $I_{X0j}$ ($I_{Y0j}$), preparing the data for the Hough transform;
(2) Hough transform along the Z direction. Carry out a one-dimensional Hough transform on $I_{X0j}$ ($I_{Y0j}$) along the Z direction and extract the peak points $(\rho_i,\vartheta_i)$ of the Hough transform in each layer, where $i\ge 2$. The upper limit of $i$ can be set according to the number of targets to be extracted in the layer;
(3) Output intersection points. According to Formula (14), calculate the intersection points of the straight lines in I X 0 j ( I Y 0 j ), as shown in Figure 4b;
(4) Voting fusion to extract the 3D point cloud. In the 3D grid that composes the CSAR image volume $I_{final}$, the intersection coordinates output by the Hough transform are voted into the corresponding grid cells. As seen in Figure 4c, only when the height of the imaging plane coincides with the actual height of the target, i.e., when the target is fully focused, do the intersection coordinates output in the X and Y directions completely coincide. If the voting result of a grid cell exceeds a set threshold, a point of the CSAR 3D point cloud is extracted there.
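The voting fusion of sub-step (4) amounts to a simple accumulate-and-threshold over the shared 3D grid; a minimal sketch (hypothetical names) is:

```python
import numpy as np

def vote_point_cloud(x_hits, y_hits, shape, threshold):
    """Voting fusion: accumulate the intersection points found in the
    X-direction and Y-direction slices into a common 3-D grid and keep
    cells whose vote count reaches the threshold.
    x_hits / y_hits: iterables of integer (ix, iy, iz) grid indices."""
    votes = np.zeros(shape, dtype=int)
    for hits in (x_hits, y_hits):
        for ix, iy, iz in hits:
            votes[ix, iy, iz] += 1
    return np.argwhere(votes >= threshold)
```

With a threshold of 2, a cell confirmed by both slicing directions survives, while a cell seen in only one direction is rejected.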

3.3. Algorithm Complexity Analysis

The traditional 3D BP imaging algorithm must perform phase compensation for each 3D grid cell in each azimuth pulse, so its imaging speed is extremely low. In this paper, the IRT and Hough transforms are used to extract the 3D point cloud, which greatly increases the speed of CSAR 3D imaging. Let the number of azimuth pulses of the full CSAR aperture data be $N_a$, the number of range samples be $N_r$, and the 3D imaging grid size be $N_x\times N_y\times N_z$.
The traditional 3D BP algorithm needs N x × N y × N z × N a phase compensation operations, and the algorithm time is as follows:
$$
T_{BP}=N_x\times N_y\times N_z\times N_a\times t_1
\tag{18}
$$
where t 1 is the time consumption of one phase compensation operation using the traditional 3D BP algorithm.
The IBP 3D imaging method requires one geometric interpolation kernel construction, M × N a interpolation and phase compensation operations, and N x × N y × N z × N a vector search operations. The algorithm time is as follows:
$$
T_{IBP}=M\times N_a\times t_1+t_2+N_x\times N_y\times N_z\times N_a\times t_3
\tag{19}
$$
where t 2 denotes the time consumption of one geometric interpolation kernel construction, and t 3 denotes the time consumption of one vector search operation.
The proposed method must extract the ridge lines of the sinusoidal trajectories once, which involves $N_r\times N_a$ climber movement predictions. For the $N_z$ IRT sub-aperture fusion imaging steps, if the full-aperture data are divided into $N$ sub-apertures, each imaging plane requires $N$ IRTs and one GLRT-based sub-aperture image fusion; then, $N_x+N_y$ Hough transforms are used to extract the intersection points, followed by one voting fusion. The time consumption is as follows:
$$
T_{IDI}=N_r\times N_a\times t_4+N_z\times\left(N\times t_{51}+t_{52}\right)+\left(N_x+N_y\right)\times t_{61}+t_{62}
\tag{20}
$$
where t 4 is the time consumption of one Climber motion prediction, t 51 indicates the time consumption of one IRT, t 52 is the time consumption of one sub-aperture image fusion based on GLRT, t 61 indicates the time consumption of one Hough transform to extract intersection points, and t 62 is the time consumption of one vote for fusion.
The proposed algorithm replaces the $N_x\times N_y\times N_z\times N_a$ phase compensation operations of the traditional 3D BP or IBP algorithm with $N_r\times N_a$ climber motion predictions, $N_z\times N$ IRTs, $N_z$ sub-aperture image fusions, $N_x+N_y$ Hough transforms for intersection extraction, and one voting fusion, which effectively reduces the computational complexity and improves the imaging speed.
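The scale of this reduction can be seen by plugging in the Section 4 simulation sizes (the kernel length M and sub-aperture count N below are assumptions for illustration; these are operation counts only, since the per-operation times t_i differ and would be measured on the target edge hardware):

```python
# Operation counts for the cost models T_BP, T_IBP, and T_IDI above.
N_a, N_r = 179_520, 1_502                 # azimuth / range samples
N_x, N_y, N_z = 110, 110, 23              # 3-D imaging grid
M, N = 3_000, 8                           # assumed kernel length, sub-apertures

bp_ops = N_x * N_y * N_z * N_a                      # phase compensations
ibp_ops = M * N_a + N_x * N_y * N_z * N_a           # interp. + vector searches
new_ops = N_r * N_a + N_z * N + (N_x + N_y)         # climber + IRT + Hough
print(f"BP {bp_ops:.2e}  IBP {ibp_ops:.2e}  proposed {new_ops:.2e}")
```

The dominant term of the proposed method, $N_r\times N_a$, is smaller than the $N_x\times N_y\times N_z\times N_a$ term of BP and IBP by roughly two orders of magnitude for this grid.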

4. Data Processing

4.1. Simulation Data Processing

This section uses simulated data to analyze the performance of the proposed algorithm. Here, the radar transmits FMCW signals with a carrier frequency of 9.6 GHz and a bandwidth of 750 MHz. The flight radius of the SRUAV is 600 m, the flight altitude is 300 m, and the flight speed is 7 m/s. One circle of flight comprises 179,520 slow-time sampling points and 1502 fast-time sampling points. The imaging scene, 20 × 20 × 10 m³ in size, is divided into a 110 × 110 × 23 3D imaging grid based on the range resolution of the echo. Five point targets are placed in the simulation scene, located in two height planes and distributed at the center and periphery of the scene. The distribution of the simulated point targets is shown in Figure 5, and their coordinates are provided in Table 1.
The 2D imaging results of CSAR obtained by the IRT at a height of z = 5 m are shown in Figure 6. Compared to the imaging results of the traditional BP and IBP algorithms, the 2D image produced by the proposed method is precise and accurate. The X slices of points A and B are shown in Figure 7. Point A is located at the center of the scene, and its focus position is the same in both images; however, the resolution of point A under the proposed method is lower than that of the traditional BP and IBP algorithms. Point B is far from the center of the scene; its focus position is also the same in both images, and its resolution under the proposed method is similar to that of the traditional BP and IBP algorithms. Taking the main-lobe width at a normalized amplitude of 0.707 as the target resolution, a comparison of points A and B is shown in Table 2. The resolution of the proposed method (0.36 m for point A and 0.36 m for point B) is slightly lower than that of the traditional BP algorithm (0.2 m for point A and 0.3 m for point B) and the IBP algorithm (0.24 m for point A and 0.32 m for point B). Nevertheless, the target positions are consistent with those of the BP algorithm, so the method can be used for fast CSAR image processing.
The 2D imaging results of CSAR at typical heights obtained by the IRT are shown in Figure 8. When the height is z = 0 m, three points are well focused, and the other two are defocused into circles. When z = 5 m, two points are in good focus, and the other three are defocused into circles. When z = 2.5 m, a height at which no point target exists and which is equidistant from the two target heights (z = 0 m and z = 5 m), all five point targets are defocused into circles. These circles are fully consistent with the point targets actually placed in the scene.
After layering along the X direction by position, the Hough transform is used to extract the intersections of the defocused circles that change with height to form a cone. The extraction results for typical X-position slices are shown in Figure 9. When x = −5 m, the peak points after the Hough transform are extracted as shown in Figure 9a; the endpoints of the extracted lines are marked in red in Figure 9b, and the intersection points of the detected output lines are marked in cyan. Here, the two intersection points are consistent with the two point targets at a height of z = 0 m in the scene. When x = 0 m, the peak points after the Hough transform are extracted as shown in Figure 9c, and the intersection points of the extracted and detected output lines are shown in Figure 9d. When x = 5 m, the peak points after the Hough transform are extracted as shown in Figure 9e, and the intersection points of the extracted and detected output lines are shown in Figure 9f. These two intersection points lie at different heights, consistent with the heights z = 0 m and z = 5 m in the scene.
Taking the point targets (−5, 5, 0) and (−5, −5, 0) as examples, the intersection points of the x = −5 m, x = −6.5 m, and x = −8 m slices are extracted, as shown in Figure 10. As the slice moves away from the target slice x = −5 m, the extracted intersection points shift to the right; that is, their height gradually increases and they move away from the focusing height, consistent with the analysis in Figure 4b.
Extracting the intersection points of the height slices of the defocused circles in the Y direction with the Hough transform proceeds in the same way as in the X direction, so the process is not repeated here.
After voting fusion with the threshold set to 2, the results of extracting the 3D point cloud (Figure 11) demonstrate that the proposed method can effectively extract CSAR 3D point clouds. However, owing to the limited slant-range resolution of the system, the target positions do not fall exactly on the imaging grid, resulting in a deviation of about one grid cell when extracting intersection points via the Hough transform. Thus, the target points far from the imaging center are scattered to some extent.
The above data processing was performed in MATLAB R2019b on an i7-10750H processor. The imaging times of the traditional BP 3D imaging algorithm, the IBP 3D imaging method, and the proposed algorithm are compared in Table 3. The time consumption of the proposed algorithm was 582.1 s, about half that of the IBP 3D imaging method and one-sixth that of the traditional BP 3D imaging algorithm.

4.2. Measured Data Processing

In this section, measured echo data are used to analyze the performance of the proposed algorithm. Here, the radar mounted on an SRUAV transmits FMCW signals with a carrier frequency of 9.6 GHz and a bandwidth of 750 MHz. The flight radius of the SRUAV is 600 m, the flight altitude is 300 m, and the flight speed is 7 m/s. There are 179,520 slow-time sampling points in one flight circle and 1502 fast-time sampling points. The imaging scene, 100 × 50 × 7 m³ in size, is divided into a 401 × 201 × 16 3D imaging grid based on the range resolution of the echo. At the center of the scene is a house composed of scattered containers, as shown in Figure 12a; the flight path is shown in Figure 12b. Owing to system performance limitations, the three-axis self-stabilizing platform of the airborne CSAR was turned off. Therefore, the beam center direction of the CSAR changed with the attitude of the SRUAV platform and could not be guaranteed to point only at the observation center.
The scattering characteristics of the containers change with the observation angle. Using the 360° sinusoidal track ridges, the whole aperture is evenly divided into eight non-overlapping sub-apertures, and the sub-aperture images are obtained by the IRT. The full-aperture image is obtained by GLRT fusion of the sub-aperture images, and the full-aperture images at different heights constitute the CSAR image volume. The Hough transform is used to extract the intersection points of the defocusing-circle height slices at different positions in the X and Y directions, with the voting-fusion threshold set to 10. The results of extracting the CSAR 3D point cloud are shown in Figure 13.
The reconstructed geometric shape of the container house at the center of the scene is shown in Figure 13a. The height of the second floor is 6.36 m, consistent with the actual scene. Because the many containers on the first floor influence one another, the geometric shape of the first-floor point cloud is seriously distorted; however, the method still provides a reasonable estimate of the height. Figure 13b–d shows that, compared to the traditional BP and IBP 3D imaging methods, the proposed method offers a better 3D focusing effect.
The above data processing was performed in MATLAB R2019b on an i7-10750H processor. The imaging times of the traditional BP 3D imaging algorithm, the IBP 3D imaging method, and the proposed algorithm are compared in Table 4. The time consumption of the proposed algorithm was 3809.32 s, about half that of the IBP 3D imaging method and one-sixth that of the traditional BP 3D imaging algorithm.

5. Conclusions

CSAR is a typical SAR 3D imaging mode that uses a small airborne platform to observe targets over 360° and thereby realize 3D imaging. However, the low efficiency of the BP 3D imaging algorithm limits its implementation on edge nodes, which hinders its popularization and application in practical engineering. To fill this gap, a CSAR 3D imaging method suitable for edge computation was proposed in this paper. By imaging the target directly with image processing methods such as track-ridge extraction, the IRT, and the Hough transform, our method replaces the azimuth pulse-by-pulse phase compensation and accumulation of traditional BP imaging and greatly shortens the imaging time. The correctness and effectiveness of the proposed method were verified by the imaging results for simulated and measured data. The proposed method can be applied to aircraft, small ground terminals, and other edge nodes to realize fast CSAR 3D imaging, providing a technical means for the efficient transmission and application of SAR sensing information in the IoT.

Author Contributions

Conceptualization and writing—original draft, L.C.; methodology and funding acquisition, Y.M.; validation, Z.H.; formal analysis, B.L.; resources, Y.S.; writing—review and editing, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Research of Military Internal Scientific Project under grant number KYSZQZL2019.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CSAR	Circular synthetic aperture radar
2D	Two-dimensional
3D	Three-dimensional
BP	Back projection
FMCW	Frequency-modulated continuous wave
GLRT	Generalized likelihood ratio test
IBP	Improved BP
IRT	Inverse Radon transform
IoT	Internet of Things
SRUAV	Small rotor unmanned aerial vehicle

Figure 1. Geometric model of airborne CSAR imaging; (a) CSAR imaging geometry diagram; (b) diagram of the relationship between R p and φ .
Figure 2. Flowchart of the proposed method.
Figure 3. Schematic diagram of climber position prediction: (a) p o s t k at a non-boundary; (b) p o s t k at the boundary.
Figure 4. Schematic diagram of extracting a 3D point cloud via Hough transform; (a) layering along the X (Y) direction; (b) Hough transform output intersection point; and (c) voting fusion of the intersecting points.
Figure 5. Simulated point target distribution diagram.
Figure 6. Two-dimensional imaging comparison of the three methods when height z = 5 m: (a) 2D image using the traditional BP algorithm when z = 5 m; (b) 2D image using the IBP algorithm when z = 5 m; and (c) 2D image using the proposed imaging algorithm when z = 5 m.
Figure 7. Comparison of point target slices: (a) X slices of point A; (b) X slices of point B.
Figure 8. Typical height images based on IRT: (a) image when the height z = 0 m; (b) image when the height z = 2.5 m; and (c) image when the height z = 5 m.
Figure 9. Intersection points extracted from slices at different X positions via Hough transform: (a) peak points detected at x = −5 m; (b) end points and intersection points detected at x = −5 m; (c) peak points detected at x = 0 m; (d) end points and intersection points detected at x = 0 m; (e) peak points detected at x = 5 m; and (f) end points and intersection points detected at x = 5 m.
Figure 10. Intersection points extracted from different X position slices via Hough transform for the same target points: (a) intersection points detected at x = −5 m; (b) intersection points detected at x = −6.5 m; and (c) intersection points detected at x = −8 m.
Figure 11. Result of 3D point cloud extraction.
Figure 12. CSAR experiment scene and flight trajectory: (a) optical image of the scene center; (b) flight trajectory of the SRUAV using the airborne recorder.
Figure 13. Three-dimensional point cloud extraction results of the measured scene: (a) 3D imaging results of the method proposed in this paper; (b) top view of the 3D point cloud using the traditional BP algorithm; (c) top view of the 3D point cloud using the IBP algorithm; and (d) top view of the 3D point cloud using the proposed method.
Table 1. Coordinates of the five point targets.
Serial Number	X (m)	Y (m)	Z (m)
1	0	0	5
2	5	−5	5
3	−5	5	0
4	−5	−5	0
5	5	5	0

Table 2. Comparison of point target resolution.
	BP (m)	IBP (m)	The Proposed Algorithm (m)
Point A	0.2	0.24	0.36
Point B	0.3	0.32	0.36

Table 3. Time consumption comparison.
BP (s)	IBP (s)	The Proposed Algorithm (s)
3787.4	1278.4	582.1

Table 4. Time consumption comparison.
BP (s)	IBP (s)	The Proposed Algorithm (s)
23,974.43	8001.40	3809.32

Share and Cite

Chu, L.; Ma, Y.; Hao, Z.; Li, B.; Shi, Y.; Li, W. A CSAR 3D Imaging Method Suitable for Edge Computation. Electronics 2023, 12, 2092. https://doi.org/10.3390/electronics12092092