Article

A Non-Iterative Method Combined with Neural Network Embedded in Physical Model to Solve the Imaging of Electromagnetic Inverse Scattering Problem

College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266000, China
* Authors to whom correspondence should be addressed.
Electronics 2021, 10(24), 3104; https://doi.org/10.3390/electronics10243104
Submission received: 1 November 2021 / Revised: 10 December 2021 / Accepted: 10 December 2021 / Published: 14 December 2021
(This article belongs to the Section Microwave and Wireless Communications)

Abstract

The main purpose of this paper is to solve the electromagnetic inverse scattering problem (ISP). Compared with conventional tomography, it considers the interaction between the internal structure of the scene and the electromagnetic wave in a more realistic manner. However, due to the nonlinearity of ISP, conventional calculation schemes usually suffer from problems such as unsatisfactory imaging quality and high computational cost. To address these problems and improve the imaging quality, this paper presents a simple method, named the diagonal matrix inversion method (DMI), to estimate the distribution of scatterer contrast (DSC), together with a Generative Adversarial Network (GAN) that optimizes the DSC obtained by DMI and brings it closer to the real distribution of scatterer contrast. To make the distribution of scatterer contrast generated by the GAN more accurate, the forward model is embedded in the GAN. Because of this forward model, the DSC produced by the generator is not only similar to the original distribution of scatterer contrast in its overall shape, but the value at each point is also close to the original.

1. Introduction

Electromagnetic inverse scattering is an accurate imaging method in the field of non-destructive testing. It determines the distribution characteristics of the physical parameters of scatterers by measuring the distribution of scattered fields, and it is widely used in science, engineering, and the military and medical fields [1,2,3,4,5,6,7]. Due to the inherent nonlinearity of ISP, regularized iterative optimization methods are usually used to solve it, such as the Born iterative method [8], the distorted Born iterative method [9], the contrast source inversion (CSI) method [10,11], the subspace optimization method (SOM) [12,13,14], and so on. Although these methods have been proven to provide satisfactory results for objects of moderate size and contrast, their computational cost is too high for real-time imaging. Because of this computational limitation, applying them to large scenes remains a great challenge. Moreover, owing to the influence of multiple scattering effects, nonlinear electromagnetic inverse scattering has so far mainly been applied to low-contrast objects.
Over the past few years, the neural network has become one of the most influential methods in computer vision and natural language processing, with applications such as image classification [15,16,17], image segmentation [18], and speech recognition [19,20]. More recently, neural networks have also achieved excellent results in biological imaging, such as magnetic resonance imaging [21] and computational optical imaging [22]. Experiments have shown that neural-network-based methods are superior to traditional image reconstruction techniques in improving image quality and reducing computational cost [21,23,24,25].
The imaging of electromagnetic inverse scattering can also be solved by neural networks. DeepNIS is a network composed of multiple complex-valued cascaded residual convolutional blocks [23]; the complex-valued residual convolutions approximately characterize the multiple-scattering physical mechanism, and the inputs of DeepNIS come from back-propagation images. DeepNIS greatly reduces the computational time compared with traditional iterative methods. CVP2P constructs a complex-valued network with reference to Generative Adversarial Networks (GANs) [24]; its generator adopts a multilayer complex-valued convolutional neural network, and back-propagation images are again used as the inputs. This algorithm is mainly used to reconstruct binary images, and the reconstruction time is significantly reduced. Wei et al. constructed three different imaging schemes based on U-Net [26]: the first directly inputs the scattered field measurements into the neural network to compute the final imaging result; the second obtains an initial image with the BP algorithm and then inputs it into the neural network for imaging; the last obtains an initial contrast with the dominant current scheme and then inputs it into the neural network for imaging. A fully connected artificial neural network has also been used to solve ISP [27]: the scattered field samples collected at the receivers are processed, and the neural network provides an estimate of the scatterer contrast even under strong nonlinearity. In the training process, a random shape generator informed by the statistical distribution of breast biological tissue is used to assist training, and excellent imaging results were obtained. Sanghvi et al. proposed the contrast source network to solve the problem of getting trapped in false local minima when recovering objects with high permittivity [28]. The contrast source network refines the iterative solution by learning the noise-space component of the radiation operator and estimating the signal-space component directly from the data; it obtains correct results where traditional techniques fail, without a significant increase in computational time.
In this paper, we propose a simple but effective method, the diagonal matrix inversion method (DMI), to obtain an initial estimate of the contrast distribution. We also draw on the GAN idea to build a neural network that reduces the error between this initial estimate and the real contrast. The generator of the GAN is similar to a "U-Net"-based architecture [18], and a "PatchGAN"-like classifier is used as the discriminator [29]. The generator extracts deep features through multiple down-sampling layers and then restores them to the real contrast distribution through up-sampling. The features obtained from the down-sampling layers are connected with the features restored by the up-sampling layers through skip connections, so that information lost during down-sampling is recovered and the real contrast can be restored more fully. During training, the physical model placed between the generator and the discriminator provides assistance: theoretical measurements are calculated by the forward model from the contrast distribution produced by the generator, and the discriminator then judges whether these theoretical measurements are close to the real measurements. This allows the generator to eliminate errors more accurately. Obtaining the initial contrast through DMI and using the generator to bring it close to the real contrast is a non-iterative method for solving ISP. Our experiments verify that the GAN combined with DMI is superior to traditional imaging schemes in both imaging quality and speed. The main contributions of this paper are as follows:
  • A non-iterative method (DMI) is proposed to estimate the initial contrast, which is more accurate than the previous schemes;
  • A neural network based on physical model training is constructed, which can obtain a better imaging effect than the traditional scheme.

2. Problem Statement

In this paper, we only consider the two-dimensional scalar electromagnetic field. As shown in Figure 1, a TM-polarized incident plane wave irradiates the target region.
Several TM-polarized incident plane-wave generating devices are evenly distributed at the edge of region D; each can transmit and receive electromagnetic waves. One of them transmits an electromagnetic wave, which forms a scattered field after being scattered by the object, and then all devices receive the scattered field data. The scattered field can therefore be described by the following equations:
$E_{z,sca}(\mathbf{r}) = k_0^2 \int_{\Sigma} G(\mathbf{r},\mathbf{r}')\,\chi(\mathbf{r}')\,E_z(\mathbf{r}')\,dS', \quad \mathbf{r} \in D,\ \mathbf{r}' \in \Sigma,$ (1)
$E_z(\mathbf{r}) = E_{z,in}(\mathbf{r}) + k_0^2 \int_{\Sigma} G(\mathbf{r},\mathbf{r}')\,\chi(\mathbf{r}')\,E_z(\mathbf{r}')\,dS', \quad \mathbf{r} \in \Sigma,\ \mathbf{r}' \in \Sigma,$ (2)
in which $\mathbf{r}$ and $\mathbf{r}'$ represent the field point and the source point, respectively. $E_{z,sca}$, $E_{z,in}$, and $E_z$ are the scattered, incident, and total fields, respectively. $k_0$ is the wavenumber in the background medium. $G(\mathbf{r},\mathbf{r}') = \frac{i}{4} H_0^{(1)}(k_0|\mathbf{r}-\mathbf{r}'|)$ is the two-dimensional Green's function for a homogeneous background, and $H_0^{(1)}(\cdot)$ is the zero-order Hankel function of the first kind. $\chi(\mathbf{r}) = \epsilon_r(\mathbf{r}) - 1 - i\,\sigma(\mathbf{r})/(\epsilon_0 \omega)$ relates the contrast of the object $\chi(\mathbf{r})$ to the relative permittivity $\epsilon_r(\mathbf{r})$, where $\sigma(\mathbf{r})$, $\epsilon_0$, and $\omega$ are the conductivity, vacuum permittivity, and angular frequency, respectively. $dS'$ is an area element on $\Sigma$. To facilitate the numerical experiments, we divide $\Sigma$ into a $K \times K$ square grid using the method of moments, which yields the discretized version of Equations (1) and (2). For a single illumination, one of the electromagnetic wave-generating devices transmits and all of them receive; the received data are the discrete version of the scattered field. Combining Equations (1) and (2), we obtain the following discretized equations for a single illumination:
$E_S^j = G_m \chi E_z^j,$ (3)
$E_z^j = E_{in}^j + G_s \chi E_z^j,$ (4)
where $E_z^j \in \mathbb{C}^{K^2 \times 1}$, $E_{in}^j \in \mathbb{C}^{K^2 \times 1}$, and $E_S^j \in \mathbb{C}^{N \times 1}$ denote the discrete versions of the total field, the incident field, and the scattered field for the $j$-th illumination, respectively. The incident fields can be calculated from the Green's function. $\chi \in \mathbb{C}^{K^2 \times K^2}$ is the discrete version of the contrast in the imaging domain $\Sigma$; $\chi$ is a diagonal matrix whose diagonal elements are the contrast values of the cells of $\Sigma$. $G_s \in \mathbb{C}^{K^2 \times K^2}$ and $G_m \in \mathbb{C}^{N \times K^2}$ denote the Green's object and data matrices, respectively, and $N$ is the number of electromagnetic wave-generating antennas.
When each antenna acts in turn as the transmitter, the vectors $E_S^j$, $E_z^j$, and $E_{in}^j$ are added as columns to the measurement data matrix $E_S$, the total field matrix $E_z$, and the incident field matrix $E_{in}$, respectively. The equations become the following:
$E_S = G_m \chi E_z,$ (5)
$E_z = E_{in} + G_s \chi E_z.$ (6)
The inverse scattering problem is to estimate the contrast distribution $\chi$ on $\Sigma$ from the measured scattered field $E_S$.
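For illustration, the discretized forward problem in Equations (5) and (6) can be solved as in the following NumPy sketch; the assembly of the Green's matrices and the variable names are assumptions not given in the text.

```python
import numpy as np

def forward_solve(chi_diag, G_s, G_m, E_in):
    """Sketch of the forward problem, Eqs. (5)-(6): given the contrast,
    compute the scattered fields at the N antennas for all illuminations.

    chi_diag : (K*K,) complex contrast values (the diagonal of chi).
    G_s      : (K*K, K*K) Green's object matrix.
    G_m      : (N, K*K) Green's data matrix.
    E_in     : (K*K, N) incident fields, one column per illumination.
    """
    K2 = chi_diag.size
    chi = np.diag(chi_diag)                          # chi is diagonal by construction
    # Eq. (6): (I - G_s chi) E_z = E_in  =>  solve for the total fields E_z
    E_z = np.linalg.solve(np.eye(K2) - G_s @ chi, E_in)
    # Eq. (5): scattered fields measured at the antennas
    return G_m @ chi @ E_z
```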

3. Methods

3.1. Motivation

Equations (5) and (6) are nonlinear with respect to the inversion, and the solution may not be unique, especially for high-contrast targets. As a result, directly solving Equations (5) and (6) for the object contrast $\chi$ from the measured data $E_S$ is very difficult. Although previous experiments have shown that neural networks can estimate the distribution of the contrast, most of them cannot estimate the true contrast value at a given point. Accordingly, we propose an easy but effective method, named the diagonal matrix inversion method (DMI), to solve the nonlinear equations, and a neural network with an embedded forward model to estimate both the distribution of the contrast and its true value at each point. Next, we explain the solution process of DMI and the implementation of this neural network.

3.2. Diagonal Matrix Inversion Method

For Equations (5) and (6), solving for the contrast $\chi$ directly is a nonlinear problem. However, solving for the inverse matrix of $\chi$ can be regarded as a linear problem. Combining Equation (5) with Equation (6), we obtain:
$E_S = G_m \left( \chi^{-1} - G_s \right)^{-1} E_{in}.$ (7)
This equation describes the forward process, and it will be embedded in the neural network as the forward model. From Equation (7), it is easy to derive the following equation:
$\chi^{-1} = E_{in} E_S^{\Psi} G_m + G_s,$ (8)
where $E_S^{\Psi}$ is the generalized inverse matrix of $E_S$ and $\chi^{-1}$ is the inverse matrix of $\chi$. $E_S^{\Psi}$ can be calculated by singular value decomposition. Performing a singular value decomposition of $E_S$ gives Equation (9):
$E_S = \sum_n u_n s_n v_n^H,$ (9)
where $u_n$, $v_n$, and $s_n$ represent the left singular vectors, right singular vectors, and singular values of $E_S$, respectively. If $s_i$ ($0 < i \le n$) is less than $\lambda \max(s_n)$, $s_i$ is set to zero, where $\lambda$ is a constant used to filter out small singular values. After discarding the small singular values, $E_S^{\Psi}$ can be calculated by the following equation:
$E_S^{\Psi} = \sum_n \frac{(v_n^H)^T u_n^T}{s_n}.$ (10)
Because $\chi$ is a diagonal matrix, its inverse must also be a diagonal matrix with the following property:
$\chi_{i,i} = \frac{1}{(\chi^{-1})_{i,i}},$ (11)
so it is easy to calculate $\chi$ from $\chi^{-1}$. However, although the scattered field $E_S$ is a square matrix, its condition number is so large that computing the generalized inverse of $E_S$ introduces calculation errors. Because of these errors, $\chi_e^{-1}$ is not an exact diagonal matrix, but the distribution of the elements on its diagonal is similar to that of $\chi_r^{-1}$ ($\chi_e$ is the contrast calculated by DMI, and $\chi_r^{-1}$ is obtained from the real contrast with Equation (11)). After applying Equations (8) and (11), a contrast estimate is obtained, and this serves as the initial guess.
The above method of estimating contrast is called the diagonal matrix inversion method (DMI).
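The DMI procedure of Equations (8)–(11) can be summarized by the following NumPy sketch; the truncated pseudoinverse is written in the standard SVD form, and the value of the truncation constant $\lambda$ is an assumption.

```python
import numpy as np

def dmi(E_s, E_in, G_s, G_m, lam=1e-2):
    """Diagonal matrix inversion (DMI) estimate of the contrast, Eqs. (8)-(11).

    E_s  : (N, N) measured scattered fields.
    E_in : (K*K, N) incident fields.
    G_s  : (K*K, K*K) Green's object matrix.
    G_m  : (N, K*K) Green's data matrix.
    lam  : truncation constant; singular values below lam * s_max are dropped.
    Returns a (K*K,) estimate of the contrast on the grid.
    """
    # Truncated pseudoinverse of E_s via SVD, Eqs. (9)-(10)
    u, s, vh = np.linalg.svd(E_s)
    keep = s >= lam * s.max()
    E_s_pinv = (vh[keep].conj().T / s[keep]) @ u[:, keep].conj().T
    # Eq. (8): chi^{-1} ~= E_in * E_s^Psi * G_m + G_s
    chi_inv = E_in @ E_s_pinv @ G_m + G_s
    # Eq. (11): the contrast is the reciprocal of the diagonal of chi^{-1}
    return 1.0 / np.diag(chi_inv)
```

As described in Section 4.1.2, the returned contrast estimate is then normalized and split into real and imaginary channels before being fed to the generator.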

4. Implementation Details of the GAN

4.1. Structure and Central Idea

The main purpose of the GAN is to eliminate the error introduced by DMI. The GAN refers to the design of pix2pix, which has been proven effective in image translation. Since an image is essentially a matrix, pix2pix can be said to perform well in matrix-to-matrix mapping, so the GAN may also perform well in mapping the initial guess contrast matrix to the real contrast matrix. In terms of the generator, the GAN and pix2pix both adopt a "U-Net"-based architecture, but pix2pix uses seven up-sampling and down-sampling layers, while our GAN uses five. For the discriminator, since the columns of the scattered field measurement matrix do not affect each other, the filter we use is 1 × N, while pix2pix selects N × N. Another difference is that pix2pix directly inputs the generated result into the discriminator, whereas we input the generated result into the discriminator after the forward model calculation.
The GAN mainly contains three parts: the generator, the forward model, and the discriminator. The generator has a "U-Net"-like structure, and the discriminator has a "PatchGAN"-like structure. The generator tries to learn the mapping from the initial guess contrast to the real one, and the forward model helps the discriminator distinguish the contrast generated by the generator more easily.
The generator architecture is shown in Figure 2. The inputs have two channels, one for the real part of the contrast and one for the imaginary part, and so do the outputs. Unlike the conventional U-Net, no pooling layers are used in the generator; a fully convolutional structure is adopted. LeakyReLU is adopted as the activation in the down-sampling path, and ReLU in the up-sampling path. A 4 × 4 kernel size is adopted for all layers, and the strides are set to 2.
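A minimal Keras sketch of such a generator is given below. The text specifies five down-sampling and five up-sampling layers, 4 × 4 kernels, stride 2, and the LeakyReLU/ReLU activations; the filter counts per layer and the absence of an output activation are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_generator(img_size=64, base_filters=64):
    """U-Net-like generator sketch: 5 down-sampling and 5 up-sampling blocks,
    4x4 kernels, stride 2, LeakyReLU on the way down and ReLU on the way up,
    with skip connections between matching resolutions."""
    inputs = layers.Input(shape=(img_size, img_size, 2))    # real + imaginary channels
    skips, x = [], inputs
    filters = [base_filters, 128, 256, 512, 512]            # assumed progression
    for f in filters:                                        # down-sampling path
        x = layers.Conv2D(f, 4, strides=2, padding='same')(x)
        x = layers.LeakyReLU(0.2)(x)
        skips.append(x)
    for f, skip in zip(reversed(filters[:-1]), reversed(skips[:-1])):  # up-sampling path
        x = layers.Conv2DTranspose(f, 4, strides=2, padding='same', activation='relu')(x)
        x = layers.Concatenate()([x, skip])                  # skip connection
    x = layers.Conv2DTranspose(2, 4, strides=2, padding='same')(x)     # back to 2 channels
    return Model(inputs, x, name='generator')
```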
Following the same idea as PatchGAN, the discriminator outputs a matrix as an evaluation of the overall generation result. However, different from the traditional PatchGAN, it considers that each column of the scattered field measurement matrix is the measurement produced by one illumination, so each column is convolved independently: the length of each column of the output matrix is reduced relative to the input, while the number of columns is unchanged. Unlike the generator, the inputs have only one channel, with the imaginary part stacked below the real part. All the parameters of the discriminator are listed in Table 1.
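A corresponding sketch of the discriminator, following the layer parameters in Table 1, is shown below; feeding it only the (real or generated) measurement matrix, without the conditional input of a classic cGAN, is a simplifying assumption.

```python
from tensorflow.keras import layers, Model

def build_discriminator(n_antennas=24):
    """PatchGAN-like discriminator sketch following Table 1: 4x1 kernels with
    stride 2x1, so each column (one illumination) is convolved independently."""
    # One channel: the imaginary part of E_S is stacked below the real part.
    inputs = layers.Input(shape=(2 * n_antennas, n_antennas, 1))   # 48 x 24 x 1
    x = inputs
    for f in [64, 128, 256]:
        x = layers.Conv2D(f, (4, 1), strides=(2, 1), padding='same')(x)
        x = layers.LeakyReLU(0.2)(x)
    outputs = layers.Conv2D(1, (4, 1), strides=(2, 1), padding='same')(x)  # 3 x 24 patch scores
    return Model(inputs, outputs, name='discriminator')
```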
During training, the generator and the discriminator confront each other until they reach a balance. Owing to the forward model, the discriminator judges the quality of the generated contrast by distinguishing the corresponding scattered field measurements from the actual measurements. As a result, the generator reproduces not only the distribution of the contrast but also the contrast value of each pixel. The greatest advantage of this GAN is therefore that the forward model is embedded between the generator and the discriminator, which gives the discriminator a stronger ability to judge the results produced by the generator.

4.1.1. Loss Function

The loss function of the GAN is similar to that of traditional pix2pix. The cGAN loss with an L1 distance term is used for the generator, and the binary cross-entropy loss is used for the discriminator. The cGAN loss and the L1 distance can be expressed as Equations (12) and (13), respectively:
$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,G(x)}\left[\log\left(1 - D(x, G(x))\right)\right]$ (12)
and
$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\left[\left\| y - G(x) \right\|_1\right].$ (13)
Combining Equations (12) and (13), the final loss is:
$G^{*} = \arg\min_G \max_D \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G),$ (14)
where λ controls the relative importance of the two loss functions.
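Under the simplifying assumption introduced above (the discriminator sees only a measurement matrix rather than a conditional pair), Equations (12)–(14) can be sketched in TensorFlow as follows; the weight value lam=100 is an assumption.

```python
import tensorflow as tf

# from_logits=True because the sketched discriminator has no final sigmoid
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(d_fake_output, gen_contrast, true_contrast, lam=100.0):
    """cGAN + L1 generator loss, Eqs. (12)-(14)."""
    adv = bce(tf.ones_like(d_fake_output), d_fake_output)        # fool the discriminator
    l1 = tf.reduce_mean(tf.abs(true_contrast - gen_contrast))    # Eq. (13)
    return adv + lam * l1

def discriminator_loss(d_real_output, d_fake_output):
    """Binary cross-entropy discriminator loss."""
    real = bce(tf.ones_like(d_real_output), d_real_output)
    fake = bce(tf.zeros_like(d_fake_output), d_fake_output)
    return real + fake
```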

4.1.2. Network Training

The GAN and DMI, applied for nonlinear inverse scattering, can be seen in Figure 3. The input data of the generator come from the DMI algorithm after normalization.
The training procedure for the GAN is as follows:
  • In the first step, the real measured data are calculated by the forward model (Equation (7)) with the ground truths;
  • In the second step, the initial contrasts are obtained by DMI from the measurement data. The initial contrasts are divided into real and imaginary parts, which form the two input channels of the generator after normalization. The generator then converts the normalized inputs into a contrast close to the real one;
  • In the third step, the contrast generated in the second step is passed through the forward model, and the resulting measured data are sent to the discriminator to be compared with the real measured data;
  • In the final step, according to the outputs of the generator and the discriminator, both networks are optimized so that they reach a balance.
The experimental results show that the GAN with the forward model has a good effect on eliminating the error introduced by DMI.
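The four steps above can be combined into one TensorFlow training step, sketched below under the assumption that forward_model is a differentiable function implementing Equation (7) that maps a generated contrast to a measurement matrix in the discriminator's input format, and that generator_loss and discriminator_loss are defined as in Section 4.1.1.

```python
import tensorflow as tf

def train_step(initial_contrast, true_contrast, real_measurements,
               generator, discriminator, forward_model, g_opt, d_opt):
    """One GAN training step with the forward model embedded between the
    generator and the discriminator (see Figure 3)."""
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        gen_contrast = generator(initial_contrast, training=True)
        # Embedded physics: theoretical scattered-field measurements via Eq. (7)
        gen_measurements = forward_model(gen_contrast)
        d_real = discriminator(real_measurements, training=True)
        d_fake = discriminator(gen_measurements, training=True)
        g_loss = generator_loss(d_fake, gen_contrast, true_contrast)
        d_loss = discriminator_loss(d_real, d_fake)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    return g_loss, d_loss
```

Following the learning rates reported later in Section 5.1, the optimizers would be created as tf.keras.optimizers.Adam(1e-5) for the generator and tf.keras.optimizers.Adam(2e-4) for the discriminator.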

5. Numerical Simulation and Experimental Results

The performance of the GAN and DMI is assessed through both simulations and experiments. We also compare with the Multiplicative Regularization Contrast Source Inversion (MR-CSI) method [11]; both use the measured data calculated by Equation (7). Electromagnetic inverse scattering imaging is mainly used in the early detection of breast cancer, whose features are ellipses, circles, and long bars. These features happen to be included in handwritten numerals, so we choose MNIST as the training dataset.

5.1. Training and Testing on MNIST Datasets

The experiment uses MNIST handwritten digits to evaluate the GAN. A total of 8000 examples were used to train the GAN, and 2000 examples were used for testing. The pixel values in MNIST lie between 0 and 255. All examples were converted from 28 × 28 to 64 × 64 pixels, converted to float64, and normalized to [0, 1].
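A possible preprocessing sketch, assuming tf.keras.datasets.mnist and bilinear resizing (the interpolation method is not stated in the text):

```python
import numpy as np
import tensorflow as tf

# Load MNIST, keep 8000 training and 2000 test examples, resize from
# 28 x 28 to 64 x 64, convert to float64, and normalize to [0, 1].
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[:8000], x_test[:2000]

def preprocess(images):
    resized = tf.image.resize(images[..., np.newaxis], (64, 64)).numpy()
    return (resized.astype(np.float64) / 255.0).squeeze(-1)

ground_truths = preprocess(x_train)   # used as the real part of the contrast
```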
Referring to Figure 1 and considering the experimental environment, the imaging region Σ is a square of size 0.07 × 0.07 m, evenly divided into 64 × 64 cells. A total of 24 transmitting–receiving antennas were evenly distributed on the circular region D, which encloses the imaging region Σ. The background relative permittivity ϵ_r was set to 1. The radius of the circular region D is R = 0.1 m, and the system works at 4.4 GHz.
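The simulation setup described above could be expressed as in the following sketch; the cell-center and antenna-angle conventions, and the variable names, are assumptions.

```python
import numpy as np

# Assumed setup matching the description above: 64 x 64 cells over a
# 0.07 m x 0.07 m imaging region, 24 antennas on a circle of radius 0.1 m,
# operating at 4.4 GHz with a free-space background (eps_r = 1).
FREQ, C0 = 4.4e9, 3e8
K0 = 2 * np.pi * FREQ / C0                         # background wavenumber k_0
K, N_ANT, SIDE, RADIUS = 64, 24, 0.07, 0.1

cell = SIDE / K
centers = np.linspace(-SIDE / 2 + cell / 2, SIDE / 2 - cell / 2, K)
cell_x, cell_y = np.meshgrid(centers, centers)     # cell centers in Sigma

angles = 2 * np.pi * np.arange(N_ANT) / N_ANT
ant_x, ant_y = RADIUS * np.cos(angles), RADIUS * np.sin(angles)  # antennas on D
```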
The training of the GAN was optimized with the ADAM method [30]. The learning rate of the generator was set to 10^−5, and that of the discriminator to 2 × 10^−4. The filters were initialized randomly. All calculations were performed on a small server with 256 GB of memory, an Intel(R) Xeon(R) CPU E5-2690 v4, and a Tesla K80. The GAN and DMI were implemented with the TensorFlow library [31], and the MR-CSI algorithm was implemented with the TensorFlow and NumPy libraries [31,32]. The DMI algorithm took about 4 s, and the MR-CSI algorithm, with 500 iterations, took about 5 min. Training the GAN took about 24 h, and the reconstruction process took about 5 s. The numerical simulation process is as follows:
  • First, the scattered field measurements can be calculated by Equation (7) with ground truths;
  • Second, the initial contrast images can be calculated by Equations (8)–(11) (DMI) from the scattered field measurements obtained in the first step;
  • Finally, the initial contrast images are the inputs of the GAN, and the outputs are the real images.
In this example, the real parts of the contrast are the ground truths and the imaginary parts are all set to 0. The contrast is used in the forward model to calculate the measurement data. The measurement data are then used by DMI to estimate the initial contrast, which contains errors, and the initial contrast is used by the generator to recover the real contrast. Figure 4 shows the ground truths of the MNIST examples, the contrast distributions calculated by DMI, the contrast distributions generated by the GAN, and the results calculated by MR-CSI with 500 iterations. Figure 5 shows the distributions of the scattered field measurements calculated from the ground truths in Figure 4. In order to compare the effects of the different methods on imaging quality, the scattering field reconstruction error (SRE) and structural similarity (SSIM) are used to evaluate the imaging quality. The formulas for SRE and SSIM are as follows:
$\mathrm{SRE} = \sum_{i=0}^{K} \sum_{j=0}^{K} \left| X_r(i,j) - X_s(i,j) \right|^2,$ (15)
$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_x \sigma_y + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$ (16)
where $X_r$ is the real distribution of the contrast and $X_s$ is the reconstructed distribution of the contrast (including the real and imaginary parts). In Equation (16), $x$ represents the real part of the reconstruction and $y$ the real part of the original. The constants $C_1$ and $C_2$ are included to avoid instability when the denominator is very close to zero. $\mu$ and $\sigma$ represent the luminance and the contrast of the image, respectively, and can be calculated by the following formulas:
$\mu_x = \frac{1}{N} \sum_{i=1}^{N} x_i,$ (17)
$\sigma_x = \left( \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu_x)^2 \right)^{1/2}.$ (18)
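The two metrics can be computed as in the following sketch; the values of c1 and c2 are assumptions (the text does not specify them), and the SSIM follows the form of Equation (16) with the product $\sigma_x \sigma_y$ in the numerator.

```python
import numpy as np

def sre(x_real, x_recon):
    """Scattering-field reconstruction error, Eq. (15): summed squared
    magnitude of the complex (real + imaginary) contrast error."""
    return np.sum(np.abs(x_real - x_recon) ** 2)

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Structural similarity in the form of Eqs. (16)-(18).
    c1 and c2 are assumed stabilizing constants."""
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(ddof=1), y.std(ddof=1)   # sample std, Eq. (18)
    return ((2 * mu_x * mu_y + c1) * (2 * sigma_x * sigma_y + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (sigma_x**2 + sigma_y**2 + c2))
```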
Table 2 and Table 3 show the corresponding SRE and SSIM of different methods, respectively.
For SRE, a smaller value means a better imaging effect, whereas for SSIM a higher value means a better imaging effect. As can be seen from the tables, the GAN reduces the error introduced by DMI to a certain extent, and the imaging effect is better than that of the traditional method. In ISP, the reconstruction result is usually compared only with the real part of the ground truth; however, considering only the real part is incomplete, as the imaginary part is also a very important factor. The SRE considers both the real part and the imaginary part, so it better reflects the reconstruction quality, and in the following experiments we pay more attention to the SRE results. In Table 2, the GAN significantly reduces the SRE of DMI, and compared with the MR-CSI method, the SRE results of the GAN are also smaller.

5.2. Testing on EMNIST Dataset

In order to verify the superiority of this method, another dataset is tested. In this test, the EMNIST dataset [33] is used with the model trained on MNIST. The EMNIST dataset contains digits and uppercase and lowercase handwritten letters, some of which are used for testing. The pixel values are also between 0 and 255. Similarly, all test examples were enlarged to 64 × 64 pixels, converted to float64, and normalized to [0, 1].
In this example, the real parts of the contrast are the ground truths in Figure 6, and the imaginary parts were also set to 0. Figure 6 shows the reconstruction results of the different methods; the SRE is used to evaluate the imaging effects of the three methods, and the results are shown in Table 4. Although the GAN was trained only on the MNIST dataset, excellent reconstruction results are still obtained with the trained model for objects with different characteristics. This shows that the GAN has good generalization ability and can eliminate the calculation error introduced by DMI; DMI combined with the GAN is therefore an excellent method for solving ISP.

5.2.1. Comparing the Imaging Ability with CVP2P

We also wanted to know whether the imaging ability of the GAN is better than that of other neural networks, so we compared its imaging results with CVP2P [24]. CVP2P is mainly used for binary images composed of 0 and 1, so we converted the ground truths from the range [0, 1] to binary values: if a pixel value is greater than 0.5, its real part is set to 1; otherwise it is set to 0. All imaginary parts were set to 0. The ground truths were used to calculate the measurement data with the forward model, the DMI method was used to estimate the initial images, and the initial images were used as the inputs of the GAN. The imaging results of CVP2P and the GAN are shown in Figure 7, and the SRE results are shown in Table 5.
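A minimal sketch of this thresholding step (variable names are illustrative):

```python
import numpy as np

# Threshold the [0, 1] ground truths at 0.5: real parts become 0 or 1,
# and the imaginary parts are set to zero.
binary_real = (ground_truths > 0.5).astype(np.float64)
binary_contrast = binary_real.astype(np.complex128)   # imaginary part is 0
```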

5.2.2. Testing with Lossy Scatterers

In order to verify the effectiveness of the method on lossy scatterers, the contrast range of the real part was set to [0, 1] and that of the imaginary part was set to [1, 2]. In this example, both the real and the imaginary parts of the contrast are ground truths in Figure 8, which shows the imaging results of the methods on lossy scatterers, including the real and imaginary parts. The SRE result of the GAN is 98.12068, and that of MR-CSI is 182.1479.

5.2.3. Testing with More Complex Images

Handwritten letters and numbers have different characteristics from human tissues. In order to verify the feasibility of the method, we use a more complex image composed of multiple circles as a test sample for DMI and the GAN. In this example, the real part of the contrast is the ground truth in Figure 9, and the imaginary part is set to 0. The test results are shown in Figure 9. The SRE result of the GAN is 109.2299 and that of MR-CSI is 127.7074; the SSIM result of the GAN is 0.75476 and that of MR-CSI is 0.7223. This test verifies that the GAN with DMI still has better imaging ability than MR-CSI, even on images with different features.

5.2.4. Testing with Varying Degrees of Noise

In this experiment, white noise is added to the scattered field measurements. The real parts are handwritten digit images, and the imaginary parts are set to 0. The scattered field measurements are calculated by Equation (7), and varying degrees of noise are added to them. Figure 10 shows the experimental results.
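A sketch of how such noise could be added is given below; the text does not define the noise percentage precisely, so it is assumed here to be relative to the Frobenius norm of the clean measurement matrix.

```python
import numpy as np

def add_noise(E_s, level, rng=None):
    """Add complex white Gaussian noise at a given relative level
    (e.g. level=0.05 for 5%), scaled by the norm of the clean data."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(E_s.shape) + 1j * rng.standard_normal(E_s.shape)
    noise *= level * np.linalg.norm(E_s) / np.linalg.norm(noise)
    return E_s + noise
```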
It can be seen from the experimental results that when the noise level is below 10%, the GAN can still complete the reconstruction task well. When the noise level is above 10%, the reconstruction ability of the GAN is greatly weakened, because DMI can no longer provide a good initial result, which biases the GAN reconstruction.

5.2.5. Testing Trained Networks by Experimental Data

In order to test the imaging performance of DMI and the GAN in an actual experiment, a self-made experimental system was used to obtain measured scattered field data for verification.
The whole experimental system works at 3–5 GHz. A total of 24 balanced Vivaldi antennas were evenly distributed on a cylinder with a radius of 0.1 m as electromagnetic wave-generating devices. Each Vivaldi antenna is 7 cm long and 7.3 cm wide. The maximum imaging area is a square with a side length of 7 cm. The vector network analyzer is a KC901V, connected to the antennas through an Agilent coaxial matrix switch that controls the reception and transmission of electromagnetic waves; the analyzer provides port isolation greater than 100 dB over the frequency range of interest. A computer connected to the vector network analyzer via USB automatically collects the measured data by controlling the Agilent coaxial matrix switch. The target is a square wooden block with a side length of 5 cm and a contrast of 2. As in the previous numerical simulation, the imaging area is evenly divided into 64 × 64 small cells. The target is shown in Figure 11.
The measurement data of the target are shown in Figure 12. The GAN trained on the MNIST dataset is used to test the experimental data. Figure 13 shows the results of the ground truth, DMI, MR-CSI, and the GAN at an operating frequency of 4.4 GHz. The SRE result of the GAN is 76.97342, and that of MR-CSI is 140.7981. Although the experimental data are quite different from the simulated data, the results of the GAN are satisfactory and better than MR-CSI in imaging effect. MNIST alone may not be ideal for a practical application; adding actual experimental data, or data whose features are similar to those of the application, to the MNIST training set would further reduce the SREs for experimental measurements.

6. Conclusions

In this paper, by deriving the formula of the forward process, an equation from which an approximate contrast is easier to obtain was derived, and a neural network framework was constructed to convert this approximate contrast into one closer to the real contrast. The quantitative comparison on simulated and experimental data showed that DMI can easily obtain a good initial contrast and that the GAN can further optimize this result. In addition, both DMI and the GAN are non-iterative, and the reconstruction speed is much faster than that of MR-CSI.

Author Contributions

Conceptualization, H.W., X.R. and L.G.; methodology, H.W.; software, H.W.; validation, H.W., X.R., L.G. and Z.L.; formal analysis, H.W.; investigation, H.W.; resources, H.W.; data curation, H.W.; writing—original draft preparation, H.W.; writing—review and editing, X.R. and L.G.; visualization, H.W.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the new thermoacoustic imaging method research project based on modular learning of the Shandong Natural Science Foundation, grant number ZR2021ME093. We also acknowledge their support in carrying out the study.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
DMI     diagonal matrix inversion method
GAN     Generative Adversarial Network
MR-CSI  Multiplicative Regularization Contrast Source Inversion

References

  1. Kofman, W.; Herique, A.; Barbin, Y.; Barriot, J.P.; Ciarletti, V.; Clifford, S.; Edenhofer, P.; Elachi, C.; Eyraud, C.; Goutail, J.P.; et al. Properties of the 67P/Churyumov-Gerasimenko interior revealed by CONSERT radar. Science 2015, 349.
  2. Redo-Sanchez, A.; Heshmat, B.; Aghasi, A.; Naqvi, S.; Zhang, M.; Romberg, J.; Raskar, R. Terahertz time-gated spectral imaging for content extraction through layered structures. Nat. Commun. 2016, 7, 1–7.
  3. Colton, D.L.; Kress, R. Inverse Acoustic and Electromagnetic Scattering Theory; Springer: Berlin/Heidelberg, Germany, 1998; Volume 93.
  4. Maire, G.; Drsek, F.; Girard, J.; Giovannini, H.; Talneau, A.; Konan, D.; Belkebir, K.; Chaumet, P.C.; Sentenac, A. Experimental demonstration of quantitative imaging beyond Abbe's limit with optical diffraction tomography. Phys. Rev. Lett. 2009, 102, 213905.
  5. Ambrosanio, M.; Bevacqua, M.T.; Isernia, T.; Pascazio, V. The tomographic approach to ground-penetrating radar for underground exploration and monitoring: A more user-friendly and unconventional method for subsurface investigation. IEEE Signal Process. Mag. 2019, 36, 62–73.
  6. Meaney, P.M.; Fanning, M.W.; Li, D.; Poplack, S.P.; Paulsen, K.D. A clinical prototype for active microwave imaging of the breast. IEEE Trans. Microw. Theory Tech. 2000, 48, 1841–1853.
  7. Pastorino, M. Microwave Imaging; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 208.
  8. Wang, Y.; Chew, W.C. An iterative solution of the two-dimensional electromagnetic inverse scattering problem. Int. J. Imaging Syst. Technol. 1989, 1, 100–108.
  9. Chew, W.C.; Wang, Y.M. Reconstruction of two-dimensional permittivity distribution using the distorted Born iterative method. IEEE Trans. Med. Imaging 1990, 9, 218–225.
  10. van den Berg, P.M.; Van Broekhoven, A.; Abubakar, A. Extended contrast source inversion. Inverse Probl. 1999, 15, 1325.
  11. Li, L.; Zheng, H.; Li, F. Two-dimensional contrast source inversion method with phaseless data: TM case. IEEE Trans. Geosci. Remote Sens. 2008, 47, 1719–1736.
  12. Chen, X. Subspace-based optimization method for solving inverse-scattering problems. IEEE Trans. Geosci. Remote Sens. 2009, 48, 42–49.
  13. Xu, K.; Zhong, Y.; Wang, G. A hybrid regularization technique for solving highly nonlinear inverse scattering problems. IEEE Trans. Microw. Theory Tech. 2017, 66, 11–21.
  14. Zhong, Y.; Chen, X. An FFT twofold subspace-based optimization method for solving electromagnetic inverse scattering problems. IEEE Trans. Antennas Propag. 2011, 59, 914–927.
  15. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  16. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  17. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  19. Graves, A.; Mohamed, A.R.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649.
  20. Amodei, D.; Ananthanarayanan, S.; Anubhai, R.; Bai, J.; Battenberg, E.; Case, C.; Casper, J.; Catanzaro, B.; Cheng, Q.; Chen, G.; et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 173–182.
  21. Sriram, A.; Zbontar, J.; Murrell, T.; Zitnick, C.L.; Defazio, A.; Sodickson, D.K. GrappaNet: Combining parallel imaging with deep learning for multi-coil MRI reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 14315–14322.
  22. Kamilov, U.S.; Papadopoulos, I.N.; Shoreh, M.H.; Goy, A.; Vonesch, C.; Unser, M.; Psaltis, D. Learning approach to optical tomography. Optica 2015, 2, 517–522.
  23. Li, L.; Wang, L.G.; Teixeira, F.L.; Liu, C.; Nehorai, A.; Cui, T.J. DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering. IEEE Trans. Antennas Propag. 2018, 67, 1819–1825.
  24. Guo, L.; Song, G.; Wu, H. Complex-Valued Pix2pix—Deep Neural Network for Nonlinear Electromagnetic Inverse Scattering. Electronics 2021, 10, 752.
  25. Jin, K.H.; McCann, M.T.; Froustey, E.; Unser, M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 2017, 26, 4509–4522.
  26. Wei, Z.; Chen, X. Deep-learning schemes for full-wave nonlinear inverse scattering problems. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1849–1860.
  27. Ambrosanio, M.; Franceschini, S.; Baselice, F.; Pascazio, V. Machine learning for microwave imaging. In Proceedings of the 2020 14th European Conference on Antennas and Propagation (EuCAP), Copenhagen, Denmark, 15–20 March 2020; pp. 1–4.
  28. Sanghvi, Y.; Kalepu, Y.; Khankhoje, U.K. Embedding deep learning in inverse scattering problems. IEEE Trans. Comput. Imaging 2019, 6, 46–56.
  29. Li, C.; Wand, M. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 702–716.
  30. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  31. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467.
  32. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362.
  33. Cohen, G.; Afshar, S.; Tapson, J.; Van Schaik, A. EMNIST: Extending MNIST to handwritten letters. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2921–2926.
Figure 1. Electromagnetic inverse scattering measurement diagram.
Figure 2. The generator architecture. The inputs come from the DMI algorithm.
Figure 3. The GAN and DMI applied for nonlinear inverse scattering. When training and simulating, the measured data of the scattered field are calculated by Equation (7).
Figure 4. Example 1: Reconstructed images of the contrast distribution calculated by different algorithms on MNIST. (a) Ground truths. (b) DMI results. (c) MR-CSI results after 500 iterations. (d) The GAN results, where the inputs are the DMI results.
Figure 5. The real and imaginary parts of the scattered field measurements, calculated by Equation (7) with the ground truths in Figure 4. (a) Real parts; (b) imaginary parts.
Figure 6. Example 2: Reconstructed images of the contrast distribution calculated by different algorithms on EMNIST. (a) Ground truths. (b) DMI results. (c) MR-CSI results after 500 iterations. (d) The GAN results, where the inputs are the DMI results.
Figure 7. The imaging results calculated by CVP2P and the GAN. (a) Ground truths; (b) CVP2P imaging results; (c) GAN imaging results.
Figure 8. Example 3: Reconstructed images of the contrast distribution calculated by different algorithms with lossy scatterers.
Figure 9. Example 4: Reconstructed images of the contrast distribution calculated by different algorithms with more complex images. (a) Ground truths. (b) DMI results. (c) MR-CSI results after 500 iterations. (d) The GAN results, where the inputs are the DMI results.
Figure 10. Reconstructed images of the contrast distribution calculated by the GAN, where the inputs of the GAN are the DMI results. (a) Ground truths. (b) Reconstructed results without noise. (c) Reconstructed results with 5% noise. (d) Reconstructed results with 10% noise. (e) Reconstructed results with 15% noise.
Figure 11. The imaging area is outlined in red and the target in blue. The side length of the imaging area is 10 cm, and that of the target is 5 cm.
Figure 12. The real and imaginary parts of the measurement data of the target.
Figure 13. Reconstructed images of the contrast distribution calculated by different algorithms with the experimental data shown in Figure 12. (a) Ground truth. (b) DMI result. (c) MR-CSI result after 500 iterations. (d) The GAN result, where the input is the DMI result.
Table 1. Parameters of each layer in the discriminator.

Layer Name   Inputs Shape   Outputs Shape   Filters   Kernel   Stride
Conv1        48 × 24        24 × 24         64        4 × 1    2 × 1
Conv2        24 × 24        12 × 24         128       4 × 1    2 × 1
Conv3        12 × 24        6 × 24          256       4 × 1    2 × 1
Conv4        6 × 24         3 × 24          1         4 × 1    2 × 1
Table 2. SRE results for the reconstructions in Figure 4.

Ground Truth   DMI        GAN        MR-CSI
(image 1)      111.7656   45.28245   61.52228
(image 2)      105.4205   31.68619   48.43685
(image 3)      94.92069   21.66978   22.43685
(image 4)      130.9329   12.30507   19.54651
(image 5)      66.78501   22.64007   23.60733
(image 6)      214.5837   39.78957   42.28401
(image 7)      29.08561   8.097077   11.39325
(image 8)      140.8501   51.77450   72.77665
(image 9)      151.5352   53.31921   72.77665
(image 10)     178.7540   21.95760   43.64304
(image 11)     183.6041   43.37700   53.37468
(image 12)     124.1460   29.11482   32.13470
Table 3. SSIM results for the reconstructions in Figure 4.

Ground Truth   DMI         GAN         MR-CSI
(image 1)      0.7138551   0.8100136   0.7487772
(image 2)      0.7740224   0.8982024   0.8435692
(image 3)      0.7785497   0.8807378   0.8555204
(image 4)      0.7513792   0.9175278   0.9015835
(image 5)      0.8042963   0.8906798   0.8541093
(image 6)      0.6552282   0.8431529   0.8194608
(image 7)      0.9351476   0.9759904   0.9055687
(image 8)      0.6925802   0.7931228   0.7598765
(image 9)      0.6980058   0.8094963   0.7807395
(image 10)     0.6422808   0.8665296   0.7998289
(image 11)     0.6691707   0.8308884   0.7951557
(image 12)     0.8663689   0.7061396   0.8453851
Table 4. SRE results for the reconstructions in Figure 6.

Ground Truth   DMI        GAN        MR-CSI
(image 1)      103.2088   19.04672   29.22322
(image 2)      116.3270   51.63381   71.20323
(image 3)      99.05278   16.26200   20.75824
(image 4)      213.6186   38.95768   82.97401
(image 5)      49.62779   12.97790   26.17604
(image 6)      152.3258   21.10506   35.66177
(image 7)      114.4213   32.43655   32.50585
(image 8)      15.60317   10.26880   11.28294
Table 5. SRE results for the reconstructions in Figure 7.

Ground Truth   CVP2P   GAN
(image 1)      523.0   269.1223
(image 2)      450.0   361.1265
(image 3)      849.0   387.2867
(image 4)      306.0   104.5572
(image 5)      483.0   268.2372
(image 6)      420.0   168.0862
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
