Article

3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85721, USA
* Author to whom correspondence should be addressed.
Sensors 2017, 17(6), 1419; https://doi.org/10.3390/s17061419
Submission received: 24 February 2017 / Revised: 20 April 2017 / Accepted: 14 June 2017 / Published: 17 June 2017
(This article belongs to the Section Remote Sensors)

Abstract

In this paper, a new millimeter wave 3D imaging radar is proposed. The user only needs to move the radar along a circular track, and a high resolution 3D image can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions, and it utilizes the inverse Radon transform to reconstruct 3D images. To improve the sensing results, a compressed sensing approach is further investigated. Simulation and experimental results illustrate the design. Because only a single transceiver circuit is needed, this paper demonstrates a light, affordable and high resolution 3D mmWave imaging radar.

1. Introduction

3D imaging radar has become feasible in the field of remote sensing due to advances in solid-state microwave circuits and digital signal processors [1]. However, large-aperture synthesis is usually performed by airborne radar to interrogate the terrain [2] and is not suitable for civilian usage. Considering the wide usage of millimeter wave radar for automobiles, security and surveillance, we propose a new imaging technique [3,4] combining synthetic aperture radar (SAR) and millimeter wave radar. The proposed millimeter wave circular synthetic aperture radar (MMWCSAR) follows a circular trajectory to synthesize a large aperture for 3D imaging. MMWCSAR has four key aspects: high resolution, operation in diverse conditions, portability and low cost.
For high resolution, millimeter wave radar can support hundreds of megahertz of bandwidth for range detection. For example, automotive radar operates at millimeter wave frequencies and can sweep hundreds of megahertz to distinguish cars and even pedestrians [5]. In addition, automotive radar has been applied on smart vehicles for 2D terrain mapping [6], collision detection, parking assistance and blind spot indication [7]. However, current automotive radar uses traditional strip-map SAR imaging to generate a 2D terrain map. The proposed MMWCSAR aims at generating 3D images and can be further applied to automotive radar for a better understanding of the environment.
For operation in diverse conditions, we compare millimeter wave radar with optical technologies. Although optical imaging techniques have better resolution due to their high center frequency, radar imaging has numerous advantages over traditional optical sensors such as cameras and LiDAR. It is underlined in [8] that radar has superior working capacity in any weather condition, including rain, snow and fog. The complex roadway environment for automobiles requires uninterrupted remote sensors performing consistently in inclement weather. Millimeter wave radar is capable of acquiring and tracking all obstacles in its field of view (FOV) under all weather conditions [8]. MMWCSAR is thus an imaging device capable of working in diverse conditions where traditional optical sensors struggle.
For the portable and low-cost features, we compare with traditional millimeter wave imaging techniques, such as security surveillance systems [9,10], concealed weapon detection [11,12,13] and millimeter wave cameras [6,14]. Most of these traditional applications, including the millimeter wave scanner, deploy many transceivers and are bulky, high-cost system designs. On the contrary, MMWCSAR adopts a monostatic radar, which minimizes its size and cost. This ensures that MMWCSAR can provide three-dimensional microwave imaging for security, indoor surveillance and automotive target detection in a wearable and inexpensive manner.
MMWCSAR requires the radar to move along a circular track rather than the straight track of traditional strip-map SAR techniques. Our study found that the movement between the radar and the targets produces projections of the objects onto the different movement directions. One can therefore obtain a range-projection-angle datacube from the circular movement. By applying the inverse Radon transform (IRT) [15,16], the datacube can be converted to the FOV figure. The principle is similar to computed tomography [15,17,18,19,20]. Jia et al. [21] presented a 2D imaging algorithm for circular SAR, in which the SAR moves along a circular trajectory. The sub-spectrums of different angles are matched filtered and summed in the Fourier domain to obtain the 2D Fourier spectrum of the image. By space-invariant matched filtering and a 2D inverse Fourier transform (2D-IFFT), the trajectory deviation is eliminated, and the final aperture view is presented. Additionally, Bao et al. [22] proposed a multi-circular SAR approach that samples on different elevation levels. Different circular rotation angles and down-looking angles are recorded according to the range data. Consequently, their signal processing procedure reconstructs a 3D image from multi-circular SAR. In our approach, we can use hand swinging to obtain a full-circular projection. The circular trajectory plane is parallel to the range bin plane. Range calibration is also implemented before the IRT in order to reduce target mismatch in the volume FOV image.
Traditional spotlight SAR works in the S-band or X-band. It has a low carrier frequency and uses the range information to perform the inverse Radon transform. In this paper, we investigate a W-band 3D imaging technique. We found that as the carrier frequency increases to the W-band, the synthetic aperture becomes much smaller, which allows the radar to use Doppler-angle data to perform the inverse Radon transform. The advantages of the proposed system are manifold: First, the high carrier frequency makes the radar very small. Second, the smaller number of transceiver elements lowers the cost of the radar. Third, further study found that a high resolution result can be obtained with fewer samples if compressed sensing (CS) is applied.
MMWCSAR tested with hand swinging is not accurate enough for high resolution imaging. In order to improve the sensing results, shorten the sensing time and relax the requirement on rotating accuracy, an additional CS signal processing procedure can be applied in our MMWCSAR system. CS was introduced in [23,24]. Many applications with large data sizes and sparse targets have used CS for data reduction and restoration, e.g., single-pixel imaging [25] and magnetic resonance imaging (MRI) [26,27]. It has advantages in situations where sampling is expensive, slow or difficult [28]. The CS method applied in radar signal processing can sample fewer signals at the sensor while keeping the same quality of the generated image. By reducing the number of samples for a portable radar device like MMWCSAR, fast data acquisition can be achieved with the CS method. Meanwhile, the sensing results can be improved.
Radar signal processing with CS is addressed in [2,21,22,29,30,31,32]. In [29], Ender gave a full analysis of applying CS to radar pulse compression. Sevimli [30] introduced range-Doppler compressed sensing and an optimization comparison of different reconstruction algorithms. In addition to the CS used in the range-Doppler response, our method of applying CS to radar signal processing is innovative: it allows not only pulse-Doppler compressed sensing, but also slow-time-angle compressed sensing. Recent work on CS applied to SAR systems is drawing researchers’ attention. For instance, Bao et al. [22] produced a 3D multi-circular SAR image using a “2D + 1D” mode, i.e., 2D focused SAR images are followed by 1D profile estimation in the elevation direction. With CS applied to the 2D ground plane image and the 1D profile of the elevation dimension, the 3D figure can be reproduced. CS theory is also applicable in our MMWCSAR system. First, CS is applied to the 2D slow-time-angle data to form a 2D FOV image on a single range bin. The volume FOV figure can then be reconstructed and analyzed by applying the 1D range profile to the 2D FOV images of the different range bins. Second, to achieve CS in radar signal processing, the sparsity and incoherence properties are discussed. In addition, the 2D transformation from the slow-time-angle to the azimuth-elevation representation matrix is discussed. We also introduce how to choose the sensing matrix, so that the CS algorithm can be realized. Finally, to further improve the performance of the MMWCSAR system, we focus on decreasing the data acquisition time, improving the imaging results and reducing human-induced errors with the CS algorithm applied in the experiment.
The structure of this paper is as follows. In Section 2, the MMWCSAR system configuration, parameters, data acquisition, resolution and the constraints on choosing the MMWCSAR system parameters are introduced. In Section 3, range calibration and radar imaging processing using the IRT to reconstruct the volume FOV image are presented. In Section 4, we propose the CS for the MMWCSAR algorithm. The corresponding simulation and experimental results are shown in Section 5. In Section 6, we discuss the results from the simulation and experiment. Section 7 concludes the paper.

2. MMWCSAR System

The relative movement between a radar and targets can be used to detect and locate targets. Examples include SAR [2,33], inverse SAR (ISAR) [34,35,36,37] and moving target indicators [7]. To introduce the relative movement between the proposed radar and targets, the placement of the MMWCSAR system is presented in Section 2.1. The parameters of the monostatic radar are introduced in Section 2.2. Data acquisition is described in Section 2.3. The resolution of the MMWCSAR system is presented in Section 2.4, and the constraints on the MMWCSAR parameters are studied in Section 2.5.

2.1. MMWCSAR Configuration

The proposed 3D imaging MMWCSAR system uses a single transceiver element and emulates multiple transceivers through the movement of that single transceiver. To simplify the movement of MMWCSAR, we assume that it moves along a circular track whose plane is perpendicular to the range bin axis (Figure 1a). The radar moving along the circular track keeps a constant speed, but its direction changes over time (Figure 1b).
As the radar moves inside the plane perpendicular to the range bin axis, detected targets remain stationary within their range bins while sampling. The movement of the radar produces relative speeds of the detected targets. Within each range bin, detected targets have different relative velocities depending on their azimuth and elevation locations. As the radar moves in different directions, targets within a single range bin project onto different directions. If a target is at the center of its range bin, its relative velocity is zero regardless of the radar’s movement direction. For targets away from the center, the relative velocities vary with the radar’s movement direction. Figure 1c,d shows that targets within a range bin are projected onto different radar moving directions. Consequently, the relative movement produces the targets’ Doppler data, which can be used to distinguish targets within the same range bin.
The geometry of MMWCSAR is presented in Figure 2. H represents the monostatic radar. For different rotation positions, the location of the radar can be written in Cartesian coordinates as $(H_x, 0, H_z)$, with $H_x = r\cos\theta$ and $H_z = r\sin\theta$, where $r$ denotes the rotation radius of the MMWCSAR system and $\theta$ is the rotation angle at which the MMWCSAR takes a frame of 2D range-Doppler data. $\theta$ is related to the angular velocity $\omega$ (expressed in seconds per round) and the acquisition time stamp $t$, i.e.,
$$\theta = \frac{2\pi t}{\omega}. \qquad (1)$$
A and B are point targets ahead of the radar at different positions, with Cartesian representations $(A_x, A_y, A_z)$ and $(B_x, B_y, B_z)$. As spherical coordinates are used for signal processing, A and B have the spherical coordinate profiles $(R_1, \alpha_1, \beta_1)$ and $(R_2, \alpha_2, \beta_2)$. The radar-target vectors $\vec{HA}$ and $\vec{HB}$ relative to the radar in Cartesian coordinates are:
$$\vec{HA} = \left(R_1\sin\beta_1\cos\alpha_1 - r\cos\theta,\; R_1\sin\beta_1\sin\alpha_1,\; R_1\cos\beta_1 - r\sin\theta\right) \qquad (2)$$
and:
$$\vec{HB} = \left(R_2\sin\beta_2\cos\alpha_2 - r\cos\theta,\; R_2\sin\beta_2\sin\alpha_2,\; R_2\cos\beta_2 - r\sin\theta\right), \qquad (3)$$
respectively. Equations (2) and (3) provide a way to describe targets in terms of range, azimuth angle and elevation angle instead of range, azimuth location and elevation location. The targets’ 3D location profiles are independent of the radar movement as long as the range calibration for the displacement from the origin (the radar rotation center) to the radar is applied. The projections are projected onto each range bin and are associated with the rotation angle $\theta$. Therefore, the MMWCSAR addresses a unique approach to the radar remote sensing problem.
The moving direction can be recorded by the angle $\theta$ over time. The geometry for obtaining the projection angle $\gamma$ can be seen in Figure 3. The Doppler velocities of targets are the projections onto the radar rotation plane. $\gamma$ can be represented using the displacement vector $\vec{HA}$ and the velocity vector $\vec{HV}$:
$$\gamma = \arccos\frac{\vec{HA}\cdot\vec{HV}}{|\vec{HA}|\,|\vec{HV}|}. \qquad (4)$$
From the geometry, we know that:
$$\vec{HA}\cdot\vec{HV} = (\vec{OA} - \vec{OH})\cdot\vec{HV} = \vec{OA}\cdot\vec{HV} - \vec{OH}\cdot\vec{HV} = \vec{OA}\cdot\vec{HV}, \qquad (5)$$
where $\vec{OH}\cdot\vec{HV} = 0$ because the velocity is tangent to the circular track and hence perpendicular to the radius vector $\vec{OH}$.
$|\vec{HA}|$ is the actual range between the radar and the target, $R_{revised}$. The difference between the assumed range $R_{assumed} = |\vec{OA}|$ and $R_{revised}$ will be discussed in the range calibration Section 3.1. Thus, Equation (4) can be simplified as:
$$\gamma = \arccos\frac{\vec{OA}\cdot\vec{HV}}{R_{revised}\,|v_{radar}|}. \qquad (6)$$
$|v_{radar}|$ is the magnitude of the velocity of the radar:
$$|v_{radar}| = \frac{2\pi r}{\omega}. \qquad (7)$$
All vectors are derived from coordinate calculations.
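As a concrete illustration of Equations (1)–(7), the following MATLAB sketch computes the rotation angle, the radar position, the tangential velocity and the projection angle $\gamma$ for one target; the target location and time stamp are assumed example values, not taken from the authors’ code.

```matlab
% Geometry sketch for Equations (1)-(7); omega follows the paper's
% convention of seconds per round, so |v| = 2*pi*r/omega (Equation (7)).
r     = 0.2;                 % rotation radius (m)
omega = 0.6;                 % rotation time per round (s/round)
t     = 0.05;                % acquisition time stamp (s)
theta = 2*pi*t/omega;        % rotation angle, Equation (1)

% Assumed target A at spherical coordinates (R1, alpha1, beta1)
R1 = 5; alpha1 = deg2rad(15); beta1 = deg2rad(80);
OA = R1*[sin(beta1)*cos(alpha1), sin(beta1)*sin(alpha1), cos(beta1)];
OH = [r*cos(theta), 0, r*sin(theta)];        % radar position H
HA = OA - OH;                                % Equation (2)

% The velocity is tangent to the circle, so dot(OH,HV) = 0 (Equation (5))
HV = (2*pi*r/omega) * [-sin(theta), 0, cos(theta)];

R_revised = norm(HA);                        % actual radar-target range
gamma = acos(dot(OA, HV)/(R_revised*norm(HV)));  % Equation (6)
```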

2.2. MMWCSAR Parameters

As introduced above, to build a light and low-cost imaging radar, we use a monostatic radar with a single transceiver element.
In our setup, the monostatic radar transmits linear frequency modulated (LFM) pulse waveforms and operates in range-Doppler mode. The intermediate frequency (IF) of the radar is defined by the center frequency $f_c$. The bandwidth (BW) of the radar determines the range resolution; in our millimeter wave design, we use a wide bandwidth chirp. The sampling frequency $f_s$ and the pulse chirp duration $T_P$ define the number of range bins $N_R$. The pulse repetition interval (PRI) determines the blind speed and hence the unambiguous Doppler frequency $f_d$ [38], thus limiting the swinging angular velocity $\omega$ in our MMWCSAR system. The number of Doppler bins $N_D$ defines the velocity resolution and curbs the number of scanning frames $N_{Ch}$. The number of frames collected is fundamental to the final FOV image.

2.3. Data Acquisition

The radar transmission signal $s(t)$ is the LFM signal. The observed radar signal for a single scatterer after de-ramping is:
$$r(t) = k \exp\left[j 2\pi \left(\frac{2R}{c}\cdot\frac{BW}{T_P} + \frac{2 v f_c}{c}\right) t\right] + n(t), \qquad (8)$$
where $k$ is the reflective amplitude related to the target’s radar cross-section (RCS), $n(t)$ is white noise, $R$ is the range of the target, $v$ is the velocity of the target and $c$ is the speed of the electromagnetic wave. The two frequency terms in the exponential correspond to the fast-time and slow-time samples; they are the frequency differences of range and Doppler, respectively.
The fast-time sampling frequency of the radar determines the number of range bins, and the pulse repetition frequency (i.e., the slow-time sampling frequency) determines the number of Doppler bins. The received radar signals form a 1D time series after the analog-to-digital converter (ADC). In Figure 4, the fast-slow-time samples accumulated at each angle $\theta_1, \theta_2, \ldots, \theta_n$ are reshaped by the number of range bins $N_R$ and the number of Doppler bins $N_D$. Hence, the fast-time, slow-time and frame (angle) samples are organized as an $N_R \times N_D \times N_{Ch}$ complex time domain data matrix. In our MMWCSAR system, the acquisition data format is the fast-time-slow-time-angle datacube, as the fast-time and slow-time data are associated with range and Doppler (projections), respectively. This datacube can be processed into range-Doppler-angle data after pulse-Doppler processing. The IRT operates on the Doppler-angle planar data of each range bin. Because the relative velocities caused by the circular movement project targets onto different angle profiles according to their azimuth/elevation locations, the IRT method can be applied to reconstruct the image in each range bin of our imaging geometry, similar to computed tomography [15]. Some IRT-based radar techniques can be found in [34]. Consequently, the Doppler-angle data matrix can be extracted for image restoration of each range bin, and 3D imaging can be obtained by recovering the 2D images of all range bins using the “2D + 1D” model [22].
For the LFM waveform, pulse and Doppler processing are coupled together [39]. However, in our approach, we implement compressed sensing (Section 4) to improve the final image quality of the proposed radar, so the pulse compression is done separately from the Doppler compression.
After forming the 3D datacube, the uncompressed received echo signals are pulse compressed by a discrete Fourier transform (DFT) against the transmitted signal along the range axis. In Section 3, we separate the range profile and process the 2D slices of the datacube to obtain the FOV figure on each range bin.
In the acquisition stage, the fast-time, slow-time and angle samples form a 3D datacube. Pulse compression converts it into the range-slow-time-angle datacube, and applying the IRT yields the range-azimuth-elevation datacube. This last datacube is the volume FOV figure of the actual scene.
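The datacube bookkeeping above can be summarized in a few lines of MATLAB; the sketch uses the Table 1 sizes and a random stand-in for the ADC stream, since the real acquisition interface is hardware-specific.

```matlab
% Datacube formation and pulse/Doppler compression (Section 2.3 sketch).
N_R = 400; N_D = 100; N_Ch = 68;              % sizes from Table 1
adc = complex(randn(N_R*N_D*N_Ch,1), ...
              randn(N_R*N_D*N_Ch,1));         % stand-in ADC stream
cube = reshape(adc, N_R, N_D, N_Ch);          % fast x slow x angle
rngCube = fft(cube, [], 1);                   % pulse compression (range)
rdCube  = fft(rngCube, [], 2);                % Doppler compression
% rdCube(:,:,n) is the range-Doppler map at rotation angle theta_n;
% fixing a range bin and stacking angles gives the Doppler-angle
% sinogram that the IRT of Section 3 inverts.
```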

2.4. MMWCSAR Resolution

The MMWCSAR system has 3D imaging capacity. Therefore, the resolution is an essential topic for high resolution imaging.
For the range resolution, the range profile is independent of the Doppler and angle profiles. Thus, the range resolution is:
$$\Delta R = \frac{c}{2(BW)}. \qquad (9)$$
The azimuth and elevation resolutions depend on the Doppler and angle profiles. Both resolutions are equivalent because the MMWCSAR moves along a circular track with evenly spaced angles. Due to the polar-to-Cartesian interpolation, the azimuth/elevation resolution is higher around the rotation center and lower at the edge. The azimuth/elevation resolution $\Delta l$ is defined as:
$$\Delta l = 2 l \sin\frac{\pi (PRI) N_D}{\omega}, \qquad (10)$$
where $l$ is the projection distance from the center. $l$ is limited from the center of origin to the azimuth/elevation edge:
$$0 \leq l \leq R\tan\frac{\theta_{FOV}}{2} + r, \qquad (11)$$
where $R$ is the range at which we measure the azimuth/elevation resolution and $\theta_{FOV}$ denotes the FOV angle of the radar looking vision. Because the resolution depends on the location within the azimuth/elevation FOV figure, in general, the worst resolution in an azimuth/elevation FOV figure is used to judge the MMWCSAR system’s azimuth/elevation resolution. Therefore, the resolution for the azimuth/elevation is:
$$\Delta l = 2\left(R\tan\frac{\theta_{FOV}}{2} + r\right)\sin\frac{\pi (PRI) N_D}{\omega}. \qquad (12)$$
Note that the azimuth/elevation resolution depends on the range at which it is measured.
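To make Equations (9)–(12) concrete, the MATLAB sketch below evaluates them with the Table 1 simulation parameters; the FOV angle is an assumed example value, since the section does not fix one.

```matlab
% Resolution sketch for Equations (9)-(12) with Table 1 values;
% theta_FOV is an assumed example.
c  = 3e8;  BW = 1e9;                     % 76-77 GHz chirp bandwidth
dR = c/(2*BW);                           % Equation (9): 0.15 m

PRI = 30e-6; N_D = 100; omega = 0.2; r = 0.6;
R = 5; theta_FOV = deg2rad(30);          % assumed FOV angle
l_max = R*tan(theta_FOV/2) + r;          % Equation (11) upper limit
dl = 2*l_max*sin(pi*PRI*N_D/omega);      % Equation (12), worst case
fprintf('dR = %.3f m, dl = %.3f m\n', dR, dl);
```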

2.5. Constraints of Parameters

In order to reconstruct a high quality image, more data should be acquired in the 3D datacube during the acquisition stage. However, the amount of data has some limitations, as described below.

2.5.1. Constraints on the Number of Doppler Bins

Because the radar is moving and the targets are static within the sampling period, targets have relative velocities with respect to the MMWCSAR depending on their azimuth and elevation locations. The farther a target is from the center of its range bin, the larger its relative velocity. The accumulated Doppler bins define the resolution of the Doppler frequency based on the targets’ azimuth and elevation locations. Hence, to better reflect the relative velocities of targets in the datacube, more Doppler bins are needed. We need enough Doppler bins to cover all of the relative velocities within the FOV angle:
$$\frac{N_D + 1}{2}\,\Delta v_D \geq \frac{4\pi r}{\omega}\sin\frac{\theta_{FOV}}{2}, \qquad (13)$$
where $\Delta v_D$ is the velocity interval between two adjacent Doppler bins, defined as:
$$\Delta v_D = \frac{c}{2 f_c (PRI) N_D}, \qquad (14)$$
where c is the speed of the electromagnetic wave.
By choosing an appropriate number of Doppler bins, the system is able to capture all of the data needed for the range-Doppler response within a limited time period. This allows the system to capture enough frames to form a range-Doppler-frame datacube, in which the frame index serves as the rotation angle. Therefore, the 3D datacube consists of range-Doppler-angle cells holding the magnitudes of targets. The PRI also has a limit: it needs to cover all of the relative velocities with respect to the radar to avoid blind speeds:
$$\frac{2\pi r}{\omega} \leq \frac{c}{2 f_c (PRI)}. \qquad (15)$$

2.5.2. Constraints on the Number of Angle Bins

However, choosing too many Doppler bins results in fewer range-Doppler responses per full-round scan. To increase the number of angle bins, one needs to decrease the time used for capturing each fast-time-slow-time sample, so that more angle data (frame data) can be recorded while rotating. The sampling frame time for each fast-time-slow-time sample is:
$$T_{Ch} = (PRI)(N_D). \qquad (16)$$
The radar movement is treated as constant while collecting each fast-time-slow-time sample. To acquire each sample within 10° of rotation, thus allowing at least 36 angle samples per full-round scan, we have another constraint:
$$\frac{2\pi T_{Ch}}{\omega} \leq 2\pi\left(\frac{10}{360}\right). \qquad (17)$$
Simplifying the equation, we get:
$$\omega \geq 36 (PRI)(N_D). \qquad (18)$$
The maximum detectable Doppler frequency is restricted by the Doppler compression:
$$f_{d,\max} = \frac{2\pi r}{\omega}\cdot\frac{2 f_c}{c} \leq \frac{1}{2(PRI)}. \qquad (19)$$
Similarly, the minimum detectable Doppler frequency is restricted by:
$$f_{d,\min} = \frac{2\pi r}{\omega}\cdot\frac{2 f_c}{c}\cdot\frac{1}{N_D} \geq \frac{1}{2(PRI)(N_D)}. \qquad (20)$$
Merging Equations (13)–(20) gives the constraints of our system. A proper choice of parameters is:
$$N_D = 25, \quad PRI = 30\times 10^{-6}\,\mathrm{s}, \quad r = 0.2\,\mathrm{m}, \quad \omega = 0.6\,\mathrm{s/round}.$$
In the later sections, the simulation and experiment follow the constraints discussed in this section. Different parameter choices result in different scanning schemes, depending on the targets and the detection scenario. For example, to see metal objects concealed behind people’s clothes [40], we need to increase the swinging rate and improve the FOV resolution; under our constraints, we reduce the number of frames and the PRI in order to meet the criteria.
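A quick way to sanity-check a parameter set against Equations (13)–(20) is the MATLAB script below; it mirrors the inequalities one-for-one, and the FOV angle is again an assumed example value.

```matlab
% Constraint check for the Section 2.5 parameters; a sketch, not the
% authors' verification code. theta_FOV is assumed.
c = 3e8; f_c = 76.5e9; theta_FOV = deg2rad(30);
N_D = 25; PRI = 30e-6; r = 0.2; omega = 0.6;      % chosen parameters

dv_D = c/(2*f_c*PRI*N_D);                                % Equation (14)
assert((N_D+1)/2*dv_D >= 4*pi*r/omega*sin(theta_FOV/2)); % Equation (13)
assert(2*pi*r/omega <= c/(2*f_c*PRI));                   % Equation (15)
assert(omega >= 36*PRI*N_D);                             % Equation (18)
assert((2*pi*r/omega)*(2*f_c/c) <= 1/(2*PRI));           % Equation (19)
disp('All constraints satisfied.');
```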

3. Radar Imaging Processing

In this section, we discuss range calibration, as well as the imaging processing for the receiving datacube.

3.1. Range Calibration

From Section 2.1, additional range calibration is needed for two reasons: the fan-shaped range bin should be converted into a plane-shaped range bin, and the range profile of the radar should be matched with the rotation position, since we assumed the rotation radius to be zero. From Figure 2, for a single target A, the displacement vector from the radar gives the actual measured range. The assumed range is measured from the origin, $\vec{OA} = (R_1\sin\beta_1\cos\alpha_1,\; R_1\sin\beta_1\sin\alpha_1,\; R_1\cos\beta_1)$, which does not change over time. The radar location at time $t$ is $\vec{OH} = (r\cos(2\pi t/\omega),\; 0,\; r\sin(2\pi t/\omega))$. Thus, the revised range is:
$$R_{revised} = |\vec{HA}| = |\vec{OA} - \vec{OH}| = \sqrt{\left(R_1\sin\beta_1\cos\alpha_1 - r\cos\frac{2\pi t}{\omega}\right)^2 + \left(R_1\sin\beta_1\sin\alpha_1\right)^2 + \left(R_1\cos\beta_1 - r\sin\frac{2\pi t}{\omega}\right)^2}. \qquad (21)$$
The difference between the revised range and the assumed range is:
$$R_{diff} = R_{revised} - R_{assumed} = |\vec{HA}| - |\vec{OA}| = \sqrt{\left(R_1\sin\beta_1\cos\alpha_1 - r\cos\frac{2\pi t}{\omega}\right)^2 + \left(R_1\sin\beta_1\sin\alpha_1\right)^2 + \left(R_1\cos\beta_1 - r\sin\frac{2\pi t}{\omega}\right)^2} - \sqrt{\left(R_1\sin\beta_1\cos\alpha_1\right)^2 + \left(R_1\sin\beta_1\sin\alpha_1\right)^2 + \left(R_1\cos\beta_1\right)^2}. \qquad (22)$$
As $|\vec{HA}|$ depends only on the time of rotation, the range-Doppler-angle 3D datacube requires calibrating the range using $R_{diff}$ for each range-Doppler planar data slice corresponding to its time. Time in our MMWCSAR system is related to the rotation angle $\theta$, which is the angle profile of the datacube. A time-indexed matrix $V^{-1}$ is thus able to perform the range calibration in the compressed sensing framework of Section 4.
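A minimal MATLAB sketch of Equation (22) over one rotation is shown below; the target location is an assumed example, and quantizing R_diff to range bins would populate the time-indexed calibration matrix $V^{-1}$.

```matlab
% Range calibration term R_diff (Equation (22)) across one rotation.
r = 0.2; omega = 0.6;
R1 = 5; alpha1 = deg2rad(10); beta1 = deg2rad(85);   % assumed target
OA = R1*[sin(beta1)*cos(alpha1), sin(beta1)*sin(alpha1), cos(beta1)];

t = linspace(0, omega, 68);              % one time stamp per frame
R_diff = zeros(size(t));
for k = 1:numel(t)
    OH = [r*cos(2*pi*t(k)/omega), 0, r*sin(2*pi*t(k)/omega)];
    R_diff(k) = norm(OA - OH) - norm(OA);   % R_revised - R_assumed
end
plot(rad2deg(2*pi*t/omega), R_diff);
xlabel('\theta (deg)'); ylabel('R_{diff} (m)');
```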

3.2. Radon Transform and 3D Imaging Reconstruction

The Radon transform and 3D image reconstruction are used in many applications, e.g., computed tomography [15] and thermoacoustic tomography [41]. Obtaining a tomographic image from projection data has been introduced in [15,18,19,20,42,43]. The cross-sectional image of targets at each range bin is described by the projection angle $\theta$ and the projection distance $\xi$. The IRT is performed for reconstruction tomography.

3.2.1. Radon Transform and Inverse Radon Transform

The Radon transform is an integral transform converting a 2D image into its projections, $p(\xi, \theta)$. The Radon transform can be defined as [15]:
$$p(\xi, \theta) = \iint f(x, y)\,\delta(\xi - x\cos\theta - y\sin\theta)\,dx\,dy, \qquad (23)$$
in which $f(x, y)$ denotes the original 2D density distribution function indexed by $x$ and $y$; $\xi$ is the projection distance from the center; $\theta$ is the projection angle; and $\delta(\cdot)$ is the Dirac delta function.
Its inverse transform, the IRT, is widely used for image reconstruction. The IRT can reconstruct the image from the projection data by several techniques. Traditionally, the back-projection theorem was used to invert the Radon transform [42]. Recent CT developments popularized the central slice theorem (CST) [15], which is conceptually the simplest method compared to back-projection and iterative algebraic techniques [20]. The theorem states that the 2D Fourier transform (FT) of the original function $f(x, y)$ is composed of the 1D Fourier transforms of the projection slices ordered by angle. The 2D FT of the original function is:
$$F(u, v) = \iint f(x, y) \exp[-j 2\pi (u x + v y)]\,dx\,dy. \qquad (24)$$
$P(\rho, \theta)$ is the series of 1D FTs of the projections $p(\xi, \theta)$, which can be represented by:
$$P(\rho, \theta) = \int p(\xi, \theta) \exp(-j 2\pi \rho \xi)\,d\xi. \qquad (25)$$
Hence, the 2D Fourier domain function $F(u, v)$ can be obtained from the Fourier domain function $P(\rho, \theta)$ by interpolation between polar and Cartesian coordinates:
$$P(\rho, \theta) = F(u, v)\big|_{u = \rho\cos\theta,\; v = \rho\sin\theta}. \qquad (26)$$
Therefore, the spatial image can be obtained from the projection slices by the CST. In the MMWCSAR system, the Doppler-angle data can be converted by this algorithm to reconstruct FOV images of the targets’ scene at different range bins.
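In MATLAB, the per-range-bin reconstruction can be prototyped with the Image Processing Toolbox radon/iradon pair (filtered back-projection); the sinogram below is synthetic, standing in for measured Doppler-angle data.

```matlab
% Per-range-bin reconstruction sketch with MATLAB's radon/iradon.
theta = 0:5:175;                    % projection angles in degrees
scene = phantom(128);               % stand-in for one range bin's scene
sino  = radon(scene, theta);        % projections p(xi, theta)
recon = iradon(sino, theta, 'linear', 'Hann');  % IRT, Hann-filtered
imagesc(recon); axis image; colormap gray;
```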

3.2.2. 3D Imaging Reconstruction and Point Spread Function

The range profile is independent of the Doppler-angle data after the geometric range calibration. The Doppler-angle planar data can then be used to reconstruct the 3D image. For each range bin, the Doppler-angle data are the projections of the 2D azimuth/elevation FOV figure in the back-projection domain [44]. The transformation between the Doppler-angle data and the 2D FOV figure uses the Radon/inverse Radon transform pair, which can be expressed as an invertible linear matrix together with interpolation between Cartesian and polar coordinates. Thus, by using the “2D + 1D” model [22], 2D azimuth-elevation FOV images are resolved at the different range bins. Adding the range bin profile to these images reconstructs a volume FOV figure, which enables 3D imaging.
The point spread function (PSF) defines the response of an imaging system to a point source [45]. The MMWCSAR azimuth-elevation FOV image is formed as the response from point scatterers with the following summation:
$$MMWCSAR(x, y) = \sum_{i=1}^{N} A_i\,\delta(x - x_i, y - y_i) * h(x, y), \qquad (27)$$
where $N$ represents the total number of point scatterers and $A_i$ is the scattered field amplitude. $x$ and $y$ are the azimuth and elevation locations, respectively. $h(x, y)$ is the PSF of the MMWCSAR system and can be regarded as the system’s impulse response to any point scatterer of the target. The image can be expressed as the convolution of the scatterers with the PSF. For our system, the PSF for one scatterer at a range of 5 m at the center of the radar view is shown in Figure 5.
From Figure 5, the side lobe level is −13 dB. The PSF has circular side lobes around the center, caused by the IRT along all directions. The side lobes exhibit a sinc-function shape because of the finite bandwidth in the Doppler and angle domains. The common way to suppress the side lobes is windowing; in our MMWCSAR experiment, we use the Hanning window.
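As a small sketch of that windowing step, the snippet below applies a separable 2D Hanning window to one range bin’s Doppler-angle slice before the IRT; the random slice and the separable form are assumptions for illustration, not necessarily the authors’ exact implementation.

```matlab
% Hanning window applied to one range bin's Doppler-angle data
% (hanning comes with the Signal Processing Toolbox).
dopAng = complex(randn(25,36), randn(25,36));  % N_D x N_angle stand-in
w2d    = hanning(25) * hanning(36).';          % separable 2D window
dopAngWin = dopAng .* w2d;                     % tapered before the IRT
```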

4. Compressed Sensing for 3D Imaging Radar System

To improve the sensing results and tolerate rotation inaccuracy, compressed sensing is used in our MMWCSAR.

4.1. Compressed Sensing Review

The basic CS idea is reviewed below. Suppose that we have a two-dimensional (2D) image of size $A \times B$, which can be reshaped into a one-dimensional (1D) vector $\mathbf{f}$ of length $N = AB$. Any 1D vector can be sampled with the sensing basis $\phi = [\phi_1 | \phi_2 | \ldots | \phi_M]$ [23]. $M$ is the number of measurements of the sensing basis $\phi$ and is smaller than the length of $\mathbf{f}$. Thus, the sampled vector $\mathbf{y}$ can be expressed as:
$$y_k = \langle \mathbf{f}, \phi_k \rangle, \quad k = 1, 2, \ldots, M. \qquad (28)$$
The restriction is that the $K$-sparse signal $\mathbf{f}$ should satisfy $K < M < N$, which allows signal recovery from the $M$ measurements.
CS requires sparsity and incoherent sampling. For sparsity, we have $\mathbf{f} \in \mathbb{R}^N$, i.e., $N = AB$ pixels in the 2D image. Under an orthogonal transform, e.g., the discrete cosine transform (DCT), almost all of the pixels can have a sparse expansion without much perceptual loss. This results in a representation basis $\psi = [\psi_1 | \psi_2 | \ldots | \psi_N]$ [23], which allows the following representation:
$$\mathbf{f} = \sum_{i=1}^{N} \psi_i x_i. \qquad (29)$$
For incoherent sampling, from [23], the sensing basis and the representation basis should have the following incoherence parameter:
$$\mu(\phi, \psi) = \sqrt{n}\,\max_{1 \leq k, j \leq n} |\langle \phi_k, \psi_j \rangle|. \qquad (30)$$
The reconstruction uses $\ell_1$-norm minimization [46,47,48]. The proposed solution $\hat{\mathbf{f}}$ is constructed by $\hat{\mathbf{f}} = \psi\hat{\mathbf{x}}$, where $\hat{\mathbf{x}}$ is the solution of the convex optimization:
$$\hat{\mathbf{x}} = \min\left\{\|\mathbf{x}\|_1 : \mathbf{y} = \phi\psi\mathbf{x}\right\}. \qquad (31)$$
Thus, the measurement $\hat{\mathbf{x}}$ is the sparsest signal consistent with the decoding model.
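The $\ell_1$-minimization of Equation (31) can be tested with the l1eq_pd routine from the l1-magic package cited as [46]; the sketch below recovers a toy $K$-sparse signal from $M$ random measurements, with all sizes chosen for illustration.

```matlab
% l1 recovery sketch using l1eq_pd from l1-magic [46]; toy sizes.
N = 256; M = 64; K = 8;
x = zeros(N,1); x(randperm(N,K)) = randn(K,1);  % K-sparse ground truth
A = randn(M,N)/sqrt(M);                         % plays the role of phi*psi
y = A*x;                                        % condensed measurements
x0   = A.'*y;                                   % least-squares seed
xhat = l1eq_pd(x0, A, [], y, 1e-3);             % min ||x||_1 s.t. Ax = y
fprintf('relative recovery error: %.2e\n', norm(xhat-x)/norm(x));
```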

4.2. Compressed Sensing on Doppler-Angle Data

As the Doppler profile is in the frequency domain and the angle profile is in the spatial domain, the slow-time data are used, and the pulse compression is done apart from the pulse-Doppler compression. Consequently, we have a DFT matrix $U^{-1}$. $U^{-1}$ is a 2D complex matrix that applies a 1D DFT along the fast-time bins, repeated over the slow-time and angle profiles to match the whole 3D data size. $U$ is then the transform from range frequency back to fast-time samples, with the same repetition over the slow-time and angle profiles. The range calibration matrix can be expressed as $V^{-1}$.
We also implement the RT and IRT matrices for the CS. The projections on each range profile are the projection domain data of the 2D azimuth-elevation FOV figure. We use a rectangular IRT matrix $W^{-1}$ to transform the slow-time-angle 2D data into the 2D azimuth-elevation FOV figure, with the range profile repeated to match the whole 3D data size. The IRT matrix $W^{-1}$ is produced together with linear interpolation from polar to Cartesian coordinates. Thus, we have a conversion from the original 1D-reshaped range-slow-time-angle data $\mathbf{b}$ to the 1D-reshaped calibrated range-azimuth-elevation data $\mathbf{x}$:
$$\mathbf{b} = \mathbf{U}\mathbf{V}\mathbf{W}\mathbf{x}. \qquad (32)$$
The Fourier transform and the Radon transform are linear transforms, and both transformation matrices are invertible. In addition, both $U^{-1}$ and $W^{-1}$ work on the 1D-reshaped range-slow-time-angle data. Hence, CS requires solving for $\mathbf{x}$ using the three linear mapping matrices $W^{-1}V^{-1}U^{-1}$:
$$\mathbf{x} = \mathbf{W}^{-1}\mathbf{V}^{-1}\mathbf{U}^{-1}\mathbf{b}. \qquad (33)$$
In this approach, the representation basis $\psi$ is:
$$\psi = \mathbf{U}\mathbf{V}\mathbf{W}. \qquad (34)$$
For any condensed signal $\mathbf{y}$, the sensing of the range-slow-time-angle data $\mathbf{b}$ can be expressed as:
$$\mathbf{y} = \phi\,\mathbf{b}. \qquad (35)$$
Therefore, the reconstruction can be implemented by $\ell_1$-minimization with the sensing basis $\phi$ and the representation basis $\psi$ from Equation (34):
$$\hat{\mathbf{x}} = \min\left\{\|\mathbf{x}\|_1 : \mathbf{y} = \phi\,\mathbf{U}\mathbf{V}\mathbf{W}\mathbf{x}\right\}. \qquad (36)$$
To summarize, $U^{-1}$ converts the 1D-reshaped fast-time-slow-time-angle original data into 1D-reshaped range-slow-time-angle data; $V^{-1}$ is the calibration matrix of the range bins; and $W^{-1}$ transforms the range-slow-time-angle data into the 1D-reshaped range-azimuth-elevation FOV figure.
In our simulation and experiment, we solve the convex optimization with a MATLAB primal-dual interior point method [49,50]. The sparsity of the Doppler-angle planar data is exploited: because the angle data are limited by the PRI and the ADC sampling rate while MMWCSAR accumulates data, the targets’ Doppler-angle data form a sparse signal (multiple sinusoidal shapes). Through the optimization, the sparsity of the original slow-time-angle signal allows recovery from fewer samples than the Nyquist sampling theorem requires [23]. Therefore, compressed sensing can recover the 3D image from the fast-time-slow-time-angle datacube.
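Because radon() is linear, an explicit matrix for the projection operator can be built by probing it with unit impulses; on a small grid, this is one way to realize the $W$ mapping of Equation (32) for a single range bin. This brute-force construction is a sketch only and scales poorly with grid size.

```matlab
% Explicit Radon (projection) matrix by impulse probing; small grids only.
n = 32; theta = 0:10:170;                 % FOV grid and look angles
probe = zeros(n); probe(1) = 1;
m = numel(radon(probe, theta));           % projection vector length
W_fwd = zeros(m, n*n);                    % FOV pixels -> projections
for p = 1:n*n
    probe = zeros(n); probe(p) = 1;
    proj = radon(probe, theta);
    W_fwd(:,p) = proj(:);
end
% W_fwd plays the role of W for one range bin: with b = W_fwd*x,
% Equation (36) becomes a standard l1 recovery problem in x.
```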

4.3. Sensing Basis ϕ Selection

We first define the compressed sensing ratio (CSR), $R_{CS}$, as:
$$R_{CS} = \frac{N}{M}, \qquad (37)$$
where $M$ and $N$ are the numbers of rows and columns of the sensing basis, respectively. The CSR is the ratio of the length of the expected signal $\mathbf{b}$ to the length of the condensed signal $\mathbf{y}$. It specifies the reconstruction quality and the sizes of the sensing and representation bases. The slow-time-angle CS compresses both the slow-time and angle profiles, and the data are transformed on each calibrated range profile. The sensing basis $\phi$ can be expressed as follows (a sketch of all three constructions is given after this list):
(1)
Reduced rotation acquisition matrix:
The matrix is expressed as:
$$\phi_{k+h,k} = \begin{cases} 1, & k = 1, 2, \ldots, M \ \text{and} \ 0 \leq k+h \leq M \\ 0, & \text{otherwise.} \end{cases} \qquad (38)$$
$h$ is an offset depending on the rotation span. This allows sensing the expected signal with reduced rotation angles: the sensing basis reconstructs the condensed signal into a Doppler-angle signal with full angular projections along the angle profile. Different projections provide different IRT responses. Thus, this sensing basis enables sampling at fewer angle bins, which reduces the swinging inaccuracy. This sensing matrix is also applicable to the Doppler profile, which likewise improves the projections along the angle profile.
(2)
Reduced sampling matrix:
This matrix is expressed as:
$$\phi_{k,\lfloor R_{CS}\rfloor(k-1)+1} = \begin{cases} 1, & k = 1, 2, \ldots, M \\ 0, & \text{otherwise.} \end{cases} \qquad (39)$$
$\lfloor R_{CS}\rfloor$ denotes the greatest integer smaller than $R_{CS}$. This allows the sensing basis to sense the expected signal at its higher sampling rate. The sensing basis converts the condensed signal with more projection data along the Doppler or angle bins. This method avoids swinging at an inconsistent velocity and allows signal recovery with more nearly constant, sinusoidal-shaped Doppler-angle data.
(3)
Gaussian or random matrix:
The Gaussian sensing basis is shown below:
$$\phi_k = \exp\left[-(k - \mu)^2 / (2\sigma^2)\right], \quad k = 1, 2, \ldots, M. \qquad (40)$$
$\mu$ is the mean of the Gaussian, and $\sigma$ is its standard deviation. A random matrix is also applicable. This method follows the normal compressed sensing procedure and corresponds to some CS applications in MRI, e.g., [26]. The sensing basis converts the condensed signal in a smoother way, compensating the signal and allowing recovery with much more precise points along the Doppler-angle data.
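For completeness, here is one reading of the three sensing bases of Equations (38)–(40) as explicit $M \times N$ matrices; the toy sizes, the offset h and the Gaussian parameters are assumptions for illustration, and Equation (40) is ambiguous enough that the Gaussian-weighted random rows below are only one interpretation.

```matlab
% Sketches of the sensing bases phi of Equations (38)-(40).
N = 72; R_CS = 2; M = N/R_CS; h = 5;       % toy sizes; h assumed

phi1 = zeros(M, N);                        % (38) reduced rotation
for k = 1:M
    if k + h <= M, phi1(k+h, k) = 1; end
end

phi2 = zeros(M, N);                        % (39) reduced sampling
for k = 1:M
    phi2(k, floor(R_CS)*(k-1) + 1) = 1;
end

kk = (1:M).'; mu = M/2; sigma = M/8;       % (40) one reading: random
g  = exp(-((kk - mu).^2)/(2*sigma^2));     % rows weighted by a Gaussian
phi3 = g .* randn(M, N);                   % profile (implicit expansion)
```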
The following simulation and experiment provide the results of the CS-involved MMWCSAR system, together with a comparison against the IRT method.

5. Simulation and Experiment

5.1. Simulation Setup and Results

MMWCSAR is able to reconstruct the 3D image along each range bin. The following simulation provides an FOV figure at a range of 5 m. The MMWCSAR system parameters can be found in Table 1, and the four targets’ scheme parameters are shown in Table 2.
As the MMWCSAR system produces the radar-transmitted LFM waveforms, the received signal, together with added Gaussian noise, is analyzed with both the IRT method and the CS-involved method. Additionally, in order to process the Doppler-angle data without the influence of the range profile, range calibration is necessary in the simulation. From the 3D datacube, the 3D fan-shaped range bins are accumulated. Following Equation (22), we process every Doppler-angle slice along the range axis with a calibration matrix $V$ to convert the fan-shaped range bins into plane-shaped range bins.
The simulation results are shown in Figure 6 as FOV figures at 5 m seen from the radar panel. Figure 6a shows the IRT method with the perfect setup, without swinging inaccuracies: the velocity of the radar changes direction evenly with a constant magnitude. Figure 6b–g introduces the CS-involved methods, applying the different sensing bases to the slow-time and angle bins separately. For the experiment, we use the Equation (38) matrix on the slow-time profile, as in Figure 6e, since it provides high contrast and better scatterer indication.

5.2. Experiment Setup and Results

With the INRAS MIMO radar [51,52] and three metal ball targets, a basic system can be set up. The INRAS MIMO radar is a four-transmitter, eight-receiver MIMO radar; we use only one transceiver element, which satisfies the MMWCSAR configuration. The radar works at 77 GHz in range-Doppler mode. The ball targets are made from metal with a high RCS, which produces strong reflections compared to the other targets in the scene. The parameters for the experiment can be seen in Table 3. The experimental setup is shown in Figure 7a, and the ball lineup with measurements is given in Figure 7b. The balls are set at ranges of 1.21 m, 1.66 m and 2.16 m from the radar panel, respectively.
The rotation angle per frame is around 15.3°, calculated as 360° divided by the number of frames $N_{Ch}$ accumulated by the signal processing unit. Multiple swinging circles are recorded in order to reduce the impact of inaccuracies caused by the hand. Additionally, in order to minimize the effect of hand swinging inaccuracy on the Doppler-angle data, range calibration is necessary in the experiment; otherwise, ghost targets are observed around the detected targets. The calibration method is similar to that of the simulation.
The results provide a 3D scatter plot of the FOV in front of the MMWCSAR. The axes of the figure are the range in meters and the azimuth and elevation angles in degrees. The figures for the IRT and CS methods can be seen in Figure 8. We apply a threshold to distinguish the metal ball targets from the table, the board and the walls. The metal ball targets’ reconstruction using the IRT method (Figure 8a,b) gives a slightly larger size in the azimuth and elevation directions compared to the actual size. The reconstruction using the CS method (Figure 8c,d) provides a more precise scatterer indication but loses the shape of the metal balls. The range resolution is 37.5 mm for both methods, as the range resolution depends only on the bandwidth. The azimuth/elevation resolution based on Equation (12) is 36.2 mm for the 2.16-m range bin; the measured azimuth/elevation resolution is 44 mm for the worst recognized target at 2.16 m. The CS method provides a better scatterer indication, but the IRT method gives a more precise ball shape. In conclusion, the MMWCSAR system using either method is capable of 3D imaging by moving the radar along a circular track.

6. Discussion

6.1. Millimeter Wave 3D Imaging Radar

For both the IRT and CS methods, the 3D range-azimuth-elevation FOV figure is recovered. The proposed system has the following advantages:
• Fast:
To recover the 3D image, the proposed system needs to collect data while it is moving. A full circular movement track is needed for the IRT method to collect data, while the CS method can greatly decrease the number of samples. It takes only seconds for MMWCSAR to recover an FOV figure.
• Portable:
Due to its size, the proposed system can be worn for 3D imaging. Both the IRT and CS methods produce convincing results in the simulation and experiment.
• High resolution:
Compared with the other SAR imaging devices in [21,22], whose resolution is around 0.1 m, our MMWCSAR working at 77 GHz can theoretically achieve a 37.5-mm range resolution and a 36.2-mm azimuth/elevation resolution at a range of 2.16 m (from Equation (12) and the experiment data). Due to hand swinging inaccuracy and noise, the resolution degrades to around 44 mm in the experimental measurements.
Due to limited access to resources, we rotate the platform manually and record the time taken to finish one rotation. Because of this rough estimate of the rotation, the accuracy and resolution are lower than in the theoretical analysis. To compensate for the errors, we set a calibration target, e.g., a corner reflector at a range bin without any other objects, and track the errors by reading the Doppler-angle profile of the corner reflector. After compensation, the imaging still has a decreased resolution. However, if we could integrate an inertial sensor into the system to extract real-time velocity information, a more precise and accurate figure could be obtained.

6.2. Comparison with the Radon Method and the Compressed Sensing Method

The conversion of the 3D data matrix is linear, and the signal is sparse. The inverse transforms of the matrices are accessible, thus allowing the CS-involved method. Compared to the IRT method, the CS method gives the following advantages:
• Flexible:
From the simulation and experiment, it is clear that the CS-involved MMWCSAR system is more flexible in data reconstruction. The calculations in the signal processing module only involve matrix multiplication and solving the $\ell_1$-minimization, and the acquisition is allowed more freedom. Based on the sampling of targets, the CS method is able to reconstruct a better image than the IRT method. The accumulated data are not limited to a full circle, and mistakes and errors can be eliminated by the CS method as well. $R_{CS} = 1/2$ is used for both the simulation and the experiment; the compressed sensing ratio can be adjusted to improve the MMWCSAR FOV image further.
• High-SNR:
The peaks of targets are more recognizable, with a higher decibel difference from the background, than with the IRT method. For example, in the simulation, CS provides a 43 dB peak identification compared to a 24 dB peak for the IRT method. The peak power is measured as 32.6 dB in the CS method compared to 80.8 dB in the IRT method. Improvements in scatterer indication are also noticeable: in the experiment, the targets’ reflective scatterers appear at more accurate locations, and targets are more easily recognized by the CS-involved MMWCSAR.
• Fast-acquisition:
As matrix transformations are implemented in the CS method, the large data convolutions along different axes are accomplished in one step. Fast acquisition is achieved at the cost of more time spent on signal processing. This is an advanced signal processing method suited to the large data matrices of modern imaging devices.

7. Conclusions

In this paper, we presented a radar system named MMWCSAR that can generate a high resolution 3D image. The resolution and the constraints of the MMWCSAR platform were discussed. We then took a step further with the signal processing module of our system, discussing range calibration and 3D imaging reconstruction. In addition to the IRT method, the CS-involved matrix transformation from the original data to the final volume FOV image was shown.
The proposed radar system is efficient, portable and fast for widespread use. The radar transceiver used in this design is more affordable than MIMO imaging or traditional SAR imaging, avoiding a large antenna as well as complex radar transceivers. A user just needs to move the radar along a circular track, and a high resolution volume FOV figure can be extracted using our algorithms.

Acknowledgments

The authors would like to thank the University of Arizona for supporting our research project. We would also like to thank the anonymous editors and reviewers for their feedback on the final version of this paper.

Author Contributions

R.Z. and S.C. conceived and designed the experiments; R.Z. performed the experiments; R.Z. and S.C. analyzed the data; S.C. contributed reagents/materials/analysis tools; R.Z. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahmed, S.S.; Schiessl, A.; Gumbmann, F.; Tiebout, M.; Methfessel, S.; Schmidt, L.P. Advanced microwave imaging. IEEE Microw. Mag. 2012, 13, 26–43. [Google Scholar] [CrossRef]
  2. Austin, C.D.; Ertin, E.; Moses, R.L. Sparse signal methods for 3-D radar imaging. IEEE J. Sel. Top. Signal Proc. 2011, 5, 408–423. [Google Scholar] [CrossRef]
  3. Zhang, R.; Cao, S. Portable millimeter wave 3D imaging radar. In Proceedings of the 2017 IEEE Radar Conference (RadarConf17), Seattle, WA, USA, 8–12 May 2017; pp. 0298–0303. [Google Scholar]
  4. Zhang, R.; Cao, S. Compressed sensing for portable millimeter wave 3D imaging radar. In Proceedings of the 2017 IEEE Radar Conference (RadarConf17), Seattle, WA, USA, 8–12 May 2017; pp. 0663–0668. [Google Scholar]
  5. Rohling, H.; Heuel, S.; Ritter, H. Pedestrian detection procedure integrated into an 24 GHz automotive radar. In Proceedings of the 2010 IEEE Radar Conference, Arlington, VA, USA, 10–14 May 2010; pp. 1229–1232. [Google Scholar]
  6. Wang, J.; Jiang, Z.; Song, Q.; Zhou, Z. Forward looking InSAR based field terrain mapping for unmanned ground vehicle. In Proceedings of the 2016 Asia-Pacific Conference on Intelligent Robot Systems (ACIRS 2016), Tokyo, Japan, 20–22 July 2016; pp. 168–173. [Google Scholar]
  7. Schneider, M. Automotive radar–status and trends. In Proceedings of the 2005 German Microwave Conference (GeMiC 2005), Ulm, Germany, 5–7 April 2005; pp. 144–147. [Google Scholar]
  8. Russell, M.E.; Crain, A.; Curran, A.; Campbell, R.A.; Drubin, C.A.; Miccioli, W.F. Millimeter-wave radar sensor for automotive intelligent cruise control (ICC). IEEE Trans. Microw. Theory Tech. 1997, 45, 2444–2453. [Google Scholar] [CrossRef]
  9. Cortese, F.; Flynn, T.; Francis, C.; Salloum, H.; Sedunov, A.; Sedunov, N.; Sutin, A.; Yakubovskiy, A. Experimental security surveillance system for an Island-based facility. In Proceedings of the 2016 IEEE Symposium on Technologies for Homeland Security (HST), Waltham, MA, USA, 10–11 May 2016; pp. 1–4. [Google Scholar]
  10. Appleby, R.; Anderton, R.N. Millimeter-wave and submillimeter-wave imaging for security and surveillance. IEEE Proc. 2007, 95, 1683–1690. [Google Scholar] [CrossRef]
  11. Sheen, D.M.; McMakin, D.L.; Hall, T.E. Three-dimensional millimeter-wave imaging for concealed weapon detection. IEEE Trans. Microw. Theory Tech. 2001, 49, 1581–1592. [Google Scholar] [CrossRef]
  12. Agarwal, S.; Kumar, B.; Singh, D. Non-invasive concealed weapon detection and identification using V band millimeter wave imaging radar system. In Proceedings of the 2015 National Conference on Recent Advances in Electronics & Computer Engineering (RAECE), Roorkee, India, 13–15 February 2015; pp. 258–262. [Google Scholar]
  13. Xiao, Z.; Hu, T.; Xu, J.; Wu, L. Millimetre-wave radiometric imaging for concealed contraband detection on personnel. IET Image Proc. 2011, 5, 375–381. [Google Scholar] [CrossRef]
  14. Yujiri, L. Passive millimeter wave imaging. In Proceedings of the 2006 IEEE MTT-S International Microwave Symposium Digest, San Francisco, CA, USA, 11–16 June 2006; pp. 98–101. [Google Scholar]
  15. Macovski, A. Medical Imaging Systems; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983; pp. 113–141. [Google Scholar]
  16. Lin, S.S.; Fuh, C.S. Range Data Reconstruction Using Fourier Slice Theorem. In Proceedings of the 1996 International Conference on Pattern Recognition (ICPR ’96), Vienna, Austria, 25–30 August 1996; Volume 96, p. 874. [Google Scholar]
  17. Miyakawa, M. Tomographic measurement of temperature change in phantoms of the human body by chirp radar-type microwave computed tomography. Med. Biol. Eng. Comput. 1993, 31, S31–S36. [Google Scholar] [CrossRef] [PubMed]
  18. Kak, A.C.; Slaney, M. Principles of Computerized Tomographic Imaging; IEEE Press: New York, NY, USA, 1988. [Google Scholar]
  19. Herman, G.T. Image reconstruction from projections. Real-Time Imaging 1995, 1, 3–18. [Google Scholar] [CrossRef]
  20. Hashemi, S.; Beheshti, S.; Gill, P.R.; Paul, N.S.; Cobbold, R.S. Fast fan/parallel beam CS-based low-dose CT reconstruction. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 1099–1103. [Google Scholar]
  21. Jia, G.; Buchroithner, M.F.; Chang, W.; Liu, Z. Fourier-Based 2-D Imaging Algorithm for Circular Synthetic Aperture Radar: Analysis and Application. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 475–489. [Google Scholar] [CrossRef]
  22. Bao, Q.; Lin, Y.; Hong, W.; Zhang, B. Multi-circular synthetic aperture radar imaging processing procedure based on compressive sensing. In Proceedings of the 2016 4th International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), Aachen, Germany, 19–22 September 2016; pp. 47–50. [Google Scholar]
  23. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Proc. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  24. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  25. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.E.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Proc. Mag. 2008, 25, 83–91. [Google Scholar] [CrossRef]
  26. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195. [Google Scholar] [CrossRef] [PubMed]
  27. Salam, A.A.; Fawzy, F.; Shaker, N.; Kadah, Y.M. K1. High performance compressed sensing MRI image reconstruction. In Proceedings of the 2012 29th National Radio Science Conference (NRSC), Cairo, Egypt, 10–12 April 2012; pp. 627–631. [Google Scholar]
  28. Mahalanobis, A.; Xiao, X.; Rivenson, Y.; Horisaki, R.; Stern, A.; Tanida, J.; Javidi, B. 3D Imaging with Compressive Sensing. In Proceedings of the Imaging Systems and Applications, Arlington, VA, USA, 23–27 June 2013; p. IW1E–1. [Google Scholar]
  29. Ender, J.H. On compressive sensing applied to radar. Signal Proc. 2010, 90, 1402–1414. [Google Scholar] [CrossRef]
  30. Sevimli, R.A.; Tofighi, M.; Cetin, A.E. Range-Doppler radar target detection using denoising within the compressive sensing framework. In Proceedings of the 2014 22nd European Signal Processing Conference (EUSIPCO), Lisbon, Portugal, 1–5 September 2014; pp. 1950–1954. [Google Scholar]
  31. Herman, M.; Strohmer, T. Compressed sensing radar. In Proceedings of the 2008 IEEE Radar Conference, Rome, Italy, 26–30 May 2008; pp. 1–6. [Google Scholar]
  32. Yan, H.; Xu, J.; Zhang, X. Compressed sensing radar imaging of off-grid sparse targets. In Proceedings of the 2015 IEEE International Radar Conference (RADAR 2015), Arlington, VA, USA, 10–15 May 2015; pp. 690–693. [Google Scholar]
  33. Tian, H.; Li, D.; Li, L. Simulation of signal reconstruction based sparse flight downward-looking 3D imaging SAR. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3762–3765. [Google Scholar]
  34. Bao, Z.; Wang, G.; Luo, L. Inverse synthetic aperture radar imaging of maneuvering targets. Opt. Eng. 1998, 37, 1582–1588. [Google Scholar] [CrossRef]
  35. Hu, X.; Tong, N.; Zhang, Y.; Wang, Y. 3D imaging using narrowband MIMO radar and ISAR technique. In Proceedings of the 2015 International Conference on Wireless Communications & Signal Processing (WCSP), Nanjing, China, 5–17 October 2015; pp. 1–5. [Google Scholar]
  36. Berizzi, F.; Corsini, G. Autofocusing of inverse synthetic aperture radar images using contrast optimization. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1185–1191. [Google Scholar] [CrossRef]
  37. Patel, V.M.; Easley, G.R.; Healy, D.M., Jr.; Chellappa, R. Compressed synthetic aperture radar. IEEE J. Sel. Top. Signal Proc. 2010, 4, 244–254. [Google Scholar] [CrossRef]
  38. Richards, M.A. Fundamentals of Radar Signal Processing; McGraw-Hill Education: New York, NY, USA, 2014. [Google Scholar]
  39. Krichene, H.; Pekala, M.; Sharp, M.; Lauritzen, K.; Lucarelli, D.; Wang, I. Compressive sensing and stretch processing. In Proceedings of the 2011 IEEE Radar Conference (RadarCon), Kansas City, MO, USA, 23–27 May 2011; pp. 362–367. [Google Scholar]
  40. Blasch, E.; Ewing, R.; Liu, Z.; Pomrenke, G.; Petkie, D.; Reinhardt, K. Image fusion of the terahertz-visual naecon grand challenge data. In Proceedings of the 2012 IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 25–27 July 2012; pp. 220–227. [Google Scholar]
  41. Haltmeier, M.; Scherzer, O.; Burgholzer, P.; Nuster, R.; Paltauf, G. Thermoacoustic tomography and the circular Radon transform: Exact inversion formula. Math. Models Methods Appl. Sci. 2007, 17, 635–655. [Google Scholar] [CrossRef]
  42. Xu, M.; Wang, L.V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E 2005, 71, 016706. [Google Scholar] [CrossRef] [PubMed]
  43. Stergiopoulos, S. Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real Time Systems; CRC Press: Boca Raton, FL, USA, 2000. [Google Scholar]
  44. Duersch, M.I. Backprojection for Synthetic Aperture Radar. Ph.D. Thesis, Brigham Young University, Provo, UT, USA, 2013. [Google Scholar]
  45. Ozdemir, C. Inverse Synthetic Aperture Radar Imaging with MATLAB Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 210. [Google Scholar]
  46. Candes, E.; Romberg, J. L1-Magic: Recovery of Sparse Signals via Convex Programming. Available online: https://statweb.stanford.edu/~candes/l1magic/downloads/l1magic.pdf (accessed on 14 April 2017).
  47. Romberg, J. Imaging via compressive sampling [introduction to compressive sampling and recovery via convex programming]. IEEE Signal Proc. Mag. 2008, 25, 14–20. [Google Scholar] [CrossRef]
  48. Zhang, Y. Theory of Compressive Sensing via l1-Minimization: A Non-RIP Analysis and Extensions. J. Oper. Res. Soc. China 2013, 1, 79–105. [Google Scholar] [CrossRef]
  49. Gondzio, J. Multiple centrality corrections in a primal-dual method for linear programming. Comput. Optim. Appl. 1996, 6, 137–156. [Google Scholar] [CrossRef]
  50. Mehrotra, S. On the implementation of a primal-dual interior point method. SIAM J. Optim. 1992, 2, 575–601. [Google Scholar] [CrossRef]
  51. Inras Industrial Radar Systems. Available online: http://www.inras.at/uploads/media/Radarbook_02.pdf (accessed on 14 April 2017).
  52. INRAS MIMO-77-TX4RX8 Frontend User Manual. Available online: http://www.inras.at/uploads/media/MIMO-77-TX4RX8_01.pdf (accessed on 14 April 2017).
Figure 1. Using a rotating radar to resolve 3D imaging. (a) Schematic view of resolving range using range bins; (b) the swinging within the rotation plane of the radar generates velocities in different directions; (c,d) each range bin is then projected onto different velocity directions while data are collected.
Figure 2. Geometry of the monostatic radar remote sensing targets (the positive y axis is the boresight direction).
Figure 3. Geometry of the monostatic radar velocity projections (the positive y axis is the boresight direction).
Figure 4. From projections to the radar signal datacube.
Figure 5. 2D point spread function (PSF) for the point scatterer located at the center (0, 0) at a range of 5 m (amplitude in dB).
Figure 6. Simulation results (front view from the radar panel at 5 m; the x axis and y axis are the azimuth and elevation distances from the center, respectively). (a) Inverse Radon transform (IRT) method with the perfect setup; (b) compressed sensing (CS) applying Equation (38) to the angle profile with $R_{CS} = 1/2$; (c) CS applying Equation (39) to the angle profile with $R_{CS} = 1/2$; (d) CS applying Equation (40) to the angle profile with $R_{CS} = 1/2$; (e) CS applying Equation (38) to the slow-time profile with $R_{CS} = 1/2$; (f) CS applying Equation (39) to the slow-time profile with $R_{CS} = 1/2$; (g) CS applying Equation (40) to the slow-time profile with $R_{CS} = 1/2$.
Figure 7. Experiment setup. (a) Radar position with the antenna facing three ball targets; (b) the three ball targets’ lineup.
Figure 8. Experiment results. (a,b) IRT method; (c,d) CS applying Equation (38) to the slow-time profile.
Table 1. MMWCSAR simulation setup. PRI, pulse repetition interval.

Parameters                     Symbol     Value
Center frequency               f_c        76.5 GHz
Chirp starting frequency       f_start    76 GHz
Chirp end frequency            f_stop     77 GHz
PRI                            PRI        30 × 10^-6 s
Chirp duration                 T_P        20 × 10^-6 s
Fast-time samples              N_R        400
Slow-time samples              N_D        100
Frame samples                  N_Ch       68
Sampling frequency             f_s        20 × 10^6 Hz
Angular velocity               ω          0.2 s/round
Rotation radius                r          0.6 m
Signal-to-noise ratio (SNR)    SNR        10 dB
Table 2. Four targets’ scheme setup. RCS, radar cross-section.

Parameters           Target 1     Target 2    Target 3    Target 4
Range                5 m          5 m         5 m         5 m
Azimuth location     −1.2941 m    1.7101 m    0 m         2.1131 m
Elevation location   0.8682 m     1.7101 m    0 m         −0.4358 m
RCS                  1 m^2        1 m^2       1 m^2       1 m^2
Table 3. Three ball targets’ scheme setup.

Parameters              Target 1    Target 2    Target 3
Range                   1.21 m      1.66 m      2.16 m
Azimuth angle α (°)     ≈2          ≈5          ≈5
Elevation angle β (°)   ≈0          ≈0          ≈0
