Generation of High-Precision Ground Penetrating Radar Images Using Improved Least Square Generative Adversarial Networks

Abstract: Deep learning models have achieved success in image recognition and have shown great potential for the interpretation of ground penetrating radar (GPR) data. However, training reliable deep learning models requires massive labeled data, which are usually not easy to obtain due to the high costs of data acquisition and field validation. This paper proposes an improved least square generative adversarial network (LSGAN) model, which employs the loss functions of LSGAN and convolutional neural networks (CNN), to generate GPR images. This model can generate high-precision GPR data to address the scarcity of labeled GPR data. We evaluate the proposed model using the Frechet Inception Distance (FID) index, compare it with two existing GAN models, and find that it outperforms both with a lower FID score. In addition, the adaptability of the LSGAN-generated images for GPR data augmentation is investigated with the YOLOv4 model, which is employed to detect rebars in field GPR images. It is verified that including LSGAN-generated images in the training GPR dataset can increase the target diversity and improve the detection precision by 10%, compared with the model trained on the dataset containing 500 field GPR images.


Introduction
Ground penetrating radar (GPR) is a popular geophysical technique and has been widely applied to near-surface investigation [1,2], archaeological prospection [3,4], hydrological investigation [5], lunar exploration [6], and civil engineering [7]. In tunnel detection, GPR is used to detect voids, seepage, and rebar defects [8,9]. In bridge engineering, GPR is commonly used to measure reinforcement position, concrete thickness, and the degree of reinforcement corrosion [10,11]. With the rapid increase of detection requirements in civil engineering, GPR has become a routine method for inspecting reinforced bars (rebars) in concrete, locating subsurface pipelines, evaluating structural performance, etc. [12]. A single scatterer, such as a landmine, rebar, or pipeline, reflects a hyperbolic signature in a recorded GPR B-scan profile [13], which can be used to locate the buried objects in GPR images [14]. However, even for an experienced practitioner, interpretation of GPR data is extremely time- and labor-consuming due to complex field conditions and huge data volumes. For example, field data collected by a car-mounted GPR system in one day may take a week or even longer to be comprehensively interpreted [15]. Therefore, the low efficiency of manual interpretation is a major factor that limits fast decision-making for maintenance and rehabilitation [16,17].

Generative Adversarial Networks
A generative adversarial network (GAN) consists of a generator G and a discriminator D. The generator G maps a random noise matrix z, sampled from a prior distribution p_z(z), to a fake image and tries to confuse the discriminator D, which judges whether its input data are real or generated. The loss function of GAN is defined as follows:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))] (1)

where G(z) is a sample image generated from a random matrix, x is a field sample image, E_{x∼P_data} is the expected value over all field instances, and E_{z∼P_z} is the expected value over all fake instances.

The convergence direction of the network is determined by the min-max objective min_G max_D V(D, G). The loss function in Equation (1) can be divided into two parts which correspond to the discriminative model and the generative model, respectively:

max_D V(D) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))] (2)

min_G V(G) = E_{z∼P_z}[log(1 − D(G(z)))] (3)

However, GAN is unstable and easily results in non-convergence during the training process [29]. In addition, the instability of the GAN makes it prone to under-fitting or over-fitting [35]. Therefore, the parameters of GAN must be carefully adjusted during the training process.
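As a concrete illustration of Equation (1), the two adversarial losses can be sketched in PyTorch as follows. This is a minimal sketch; the function and tensor names and the small epsilon for numerical stability are our assumptions, not from the paper.

```python
import math
import torch

def gan_losses(d_real, d_fake, eps=1e-8):
    """Standard GAN losses from Equation (1).

    d_real: discriminator outputs D(x) on field images, in (0, 1).
    d_fake: discriminator outputs D(G(z)) on generated images, in (0, 1).
    """
    # The discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. it minimizes the negative of that quantity.
    loss_d = -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
    # The generator minimizes log(1 - D(G(z))).
    loss_g = torch.log(1.0 - d_fake + eps).mean()
    return loss_d, loss_g

# With an undecided discriminator (outputs 0.5 everywhere),
# the discriminator loss equals 2*log(2).
d_real = torch.full((4,), 0.5)
d_fake = torch.full((4,), 0.5)
loss_d, loss_g = gan_losses(d_real, d_fake)
```

In practice the generator is often trained with the "non-saturating" variant, maximizing log D(G(z)) instead, to avoid vanishing gradients early in training.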


Improved Least Square Generative Adversarial Networks
LSGAN adopts the least square loss function for the discriminator and can generate images that are closer to field GPR images than the normal GANs [32]. The loss functions of the discriminator and generator of the LSGAN are respectively defined as follows:

min_D V(D) = (1/2) E_{x∼P_data}[(D(x) − b)^2] + (1/2) E_{z∼P_z}[(D(G(z)) − a)^2] (4)

min_G V(G) = (1/2) E_{z∼P_z}[(D(G(z)) − c)^2] (5)

where a is the label of fake data, b is the label of field data, and c is the value set by G for D to determine whether the generated image is real data. However, LSGAN models suffer from the following problems [36]: (a) The generator is susceptible to collapse during the training process; (b) The generator gradient may vanish and learn nothing; (c) The generated images are not diverse.
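The least-squares losses above can be sketched in PyTorch as follows. The label values a = 0, b = 1, c = 1 are an assumption for illustration (a common choice for LSGAN); the paper leaves a, b, and c symbolic.

```python
import torch
import torch.nn.functional as F

# Assumed label values: a = 0 (fake), b = 1 (real), c = 1 (generator target).
A_FAKE, B_REAL, C_TARGET = 0.0, 1.0, 1.0

def lsgan_d_loss(d_real, d_fake):
    # Discriminator pushes D(x) toward b and D(G(z)) toward a.
    return 0.5 * F.mse_loss(d_real, torch.full_like(d_real, B_REAL)) + \
           0.5 * F.mse_loss(d_fake, torch.full_like(d_fake, A_FAKE))

def lsgan_g_loss(d_fake):
    # Generator pushes D(G(z)) toward c.
    return 0.5 * F.mse_loss(d_fake, torch.full_like(d_fake, C_TARGET))

# A perfect discriminator (real -> 1, fake -> 0) has zero loss,
# while the generator loss is then at its maximum of 0.5.
d_loss = lsgan_d_loss(torch.ones(3), torch.zeros(3))
g_loss = lsgan_g_loss(torch.zeros(3))
```

Because the quadratic penalty grows with the distance from the target label, LSGAN keeps penalizing samples that are already on the correct side of the decision boundary but far from it, which yields smoother gradients than the saturating log loss.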
In this paper, we improve the architecture of LSGAN, as shown in Figure 2. The improvement smooths the gradient and improves the stability of the adversarial training, thus reducing the possibility of mode collapse in the training stage and increasing the variety of generated images.
The generator's detailed architecture is presented in Table 1.
The purpose of the generator is to create an image from a random noise matrix. Several up-sampling operations reshape the input matrix to the target size (512 × 512 in this work). The generator uses transposed convolutional blocks to up-sample the input; eight such blocks are used in the improved LSGAN. Each block consists of a transposed convolutional layer followed by batch normalization and a Rectified Linear Unit (ReLU) activation function. In the first block, the kernel of the transposed convolutional layer is 4 × 4 with a stride of one, which resizes the input matrix to 4 × 4 with 4096 channels. In the subsequent blocks, the stride of the transposed convolutional layer is set to two, and the number of features is halved at each block. A residual block module connects to the up-sampling module. A transposed convolution with a size of 4 × 4 and a stride of two, followed by Tanh, is set as the last block to resize the output to 512 × 512. Table 2 shows the composition of the discriminator. The inputs of the discriminator are the fake images from the generator and the real images. During training, the discriminator compares the two kinds of images, and its output is used to adjust the generator so that the generated images resemble real ones. The LeakyReLU activation function is used to avoid mode collapse [37]. Conv1 includes a 4 × 4 convolutional block with a stride of two, followed by LeakyReLU activation. It is followed by seven 4 × 4 convolutional blocks with a stride of two, with the number of features doubled at each stage.
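The up-sampling path of the generator can be sketched in PyTorch as below. This is a simplified illustration under stated assumptions: the residual blocks are omitted, `base_channels` scales the channel widths down for testing, and the exact block count and channel widths of the paper's Table 1 may differ.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the improved-LSGAN generator: a 1x1 noise matrix is
    up-sampled to a 512x512 image by transposed convolutional blocks."""

    def __init__(self, z_dim=100, base_channels=4096, out_channels=1):
        super().__init__()
        layers = [
            # Block 1: 4x4 kernel, stride 1 -> 4x4 feature map.
            nn.ConvTranspose2d(z_dim, base_channels, 4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(base_channels),
            nn.ReLU(inplace=True),
        ]
        ch = base_channels
        # Six stride-2 blocks: 4x4 -> 256x256, halving the channels each time.
        for _ in range(6):
            layers += [
                nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ch // 2),
                nn.ReLU(inplace=True),
            ]
            ch //= 2
        # Final block: stride-2 transposed convolution + Tanh -> 512x512 output.
        layers += [
            nn.ConvTranspose2d(ch, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        ]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

# Shape check with scaled-down channels: (1, 100, 1, 1) noise -> (1, 1, 512, 512).
gen = Generator(z_dim=100, base_channels=64).eval()
with torch.no_grad():
    out = gen(torch.randn(1, 100, 1, 1))
```

Each stride-2 transposed convolution with a 4 × 4 kernel and padding 1 exactly doubles the spatial size, so 4 × 4 grows to 512 × 512 after seven doublings, and the Tanh bounds the output to [-1, 1].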

Evaluation Index
Generally, inception score (IS) [38] and Frechet Inception Distance (FID) [39] are two widely accepted measures for evaluating the performance of GAN models [40]. The IS measure directly evaluates the generated image itself by calculating its entropy. In contrast, the FID measurement calculates the similarity between the generated images and the field images [39] and thus is superior to IS measurement [38]. In this paper, FID score is used to evaluate the results of the improved LSGAN algorithm.
The FID score represents the feature distance between the real and GAN-generated images, which is also known as the Frechet distance between two multivariate Gaussians. A lower FID score means closer proximity between the two distributions, i.e., higher quality and greater diversity of the generated images. The FID score is given by:

FID = ||µ_x − µ_g||^2 + Tr(Σ_x + Σ_g − 2(Σ_x Σ_g)^{1/2}) (6)

where Σ_x and Σ_g are the covariance matrices of the field and generated images, Tr is the trace, i.e., the sum of the elements along the main diagonal of a square matrix, and µ_x and µ_g are the mean feature activations for field images and generated images, respectively.
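The FID computation above can be sketched with NumPy and SciPy (the function and variable names are ours; in practice the means and covariances are computed from Inception-v3 activations of the two image sets):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_score(mu_x, sigma_x, mu_g, sigma_g):
    """Frechet distance between two multivariate Gaussians:
    (mu_x, sigma_x) for field images, (mu_g, sigma_g) for generated images."""
    diff = mu_x - mu_g
    covmean = sqrtm(sigma_x @ sigma_g)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts from numerical error.
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))
```

Identical statistics give a score of zero, and shifting only the mean by a vector d adds exactly ||d||^2, which matches the interpretation that lower scores mean closer distributions.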

Data Collection
Field GPR images were obtained in several residential buildings at two construction sites using a commercial GPR system with a central frequency of 2 GHz (Figure 3). The GPR data were recorded with a distance-measuring odometer, and the acquisition parameters are summarized in Table 3. To enable the dataset to cover as many types of rebars as possible, GPR images were collected on the surfaces of concrete walls, columns, beams, and slabs. Since the dimensions of these concrete components differ between buildings, the GPR survey lines have various lengths. After data processing, each GPR image was set to a size of 512 × 512 pixels. In total, 500 GPR images containing 2856 rebars were collected, as shown in Figure 4. Since the GPR measurements were conducted before building decoration, we can confirm that all the near-surface hyperbolic features are generated by the concrete rebars. A single rebar reflection can be easily identified by its hyperbolic feature. The field GPR images used for training the deep learning model are described and tested in the next section.
Figure 3. (a) The system used for GPR image acquisition and (b) GPR images of the field data collection [17].


Data Augmentation Methods
The recognition sample library needs a large number of GPR images to successfully train the neural network model. Although 500 images have been obtained, labeled field GPR data are still limited. In this work, the improved LSGAN is used to generate more GPR data to supplement the training data. In this section, the obtained field GPR images are used as the training dataset for the improved LSGAN. The training process is conducted on a computer with an NVIDIA GeForce GTX 1660 Ti graphics card (6 GB memory). It takes 8.5 h for 500 epochs to obtain the trained weight file, which is about the same as the training time of other GANs. After obtaining the weight file, each artificial GPR image can be generated within 1 s. Figure 5 shows the visual inspection of the improved LSGAN training process. Starting from an image of random noise, the improved LSGAN adjusts the network parameters to produce GPR images resembling the real ones. At 10 epochs, the generated image contains some prominent features, but details are lacking. At 100 epochs, subtle features resembling rebar reflections emerge. At 200 epochs, the generated image reveals more details and looks similar to a field GPR image. At 500 epochs, the generated image contains almost all the detailed information of the field GPR images. With the improved LSGAN, we generate 500 GPR images containing 2602 rebars to augment the training dataset.
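The adversarial training procedure over epochs can be sketched as an alternating update loop. This is a generic LSGAN-style loop under assumed label values a = 0, b = 1, c = 1 and assumed Adam settings; it is not the paper's exact training script.

```python
import torch
import torch.nn as nn

def train_lsgan(G, D, real_batches, z_dim, lr=2e-4):
    """One epoch of alternating least-squares updates for G and D."""
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    mse = nn.MSELoss()
    for real in real_batches:
        n = real.size(0)
        fake = G(torch.randn(n, z_dim))
        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        opt_d.zero_grad()
        loss_d = 0.5 * (mse(D(real), torch.ones(n, 1)) +
                        mse(D(fake.detach()), torch.zeros(n, 1)))
        loss_d.backward()
        opt_d.step()
        # Generator step: push D(fake) toward 1 to fool the discriminator.
        opt_g.zero_grad()
        loss_g = 0.5 * mse(D(fake), torch.ones(n, 1))
        loss_g.backward()
        opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with tiny stand-in networks on 4-dimensional "images".
torch.manual_seed(0)
G = nn.Linear(8, 4)
D = nn.Linear(4, 1)
ld, lg = train_lsgan(G, D, [torch.randn(16, 4)], z_dim=8)
```

Note `fake.detach()` in the discriminator step: it blocks gradients from the discriminator loss flowing into the generator, so each network is updated only by its own objective.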


Results of Other GANs
In addition to the improved LSGAN, other GAN models, i.e., DCGAN and LSGAN, are used to generate GPR images, and their results are compared with the images generated by the proposed improved LSGAN. As shown in Figure 6, the clarity of the images generated by DCGAN is poor, and the hyperbolic characteristics of rebar reflections are not well learned by LSGAN. In comparison, the GPR images generated by the improved LSGAN reveal more details than those of the other two models. The FID scores calculated using PyTorch show that the improved LSGAN model achieves an FID score of 29.6 (lower scores correspond to better GAN performance). The FID scores of DCGAN and LSGAN are 67.5 and 47.6, respectively, which are much higher than that of the improved LSGAN. This indicates that the improved LSGAN can extract more rebar features from field GPR images than DCGAN and LSGAN. More importantly, the images generated by the improved LSGAN contain more rebar targets than the field GPR images, which improves the diversity of the training dataset.


Pre-Trained YOLOv4
Since GPR data acquisition can be operated at ultra-fast speed, both the detection accuracy and the speed of a deep learning model are important in GPR application scenarios. Therefore, we employ the YOLOv4 model, which improves YOLOv3's average precision and frames per second by 10% and 12%, respectively [41], to test the value of the GPR images generated by the improved LSGAN.
The schematic diagram of YOLOv4's network structure is shown in Figure 7. To reduce the number of required training images and iterations, transfer learning is utilized for training the YOLOv4 model. In this study, the employed YOLOv4 model is first pre-trained on the COCO dataset [42]. Then the pre-trained YOLOv4 model is further trained and fine-tuned using the acquired GPR dataset.

Testing Results
After data augmentation, the training dataset contains 500 field GPR images and 500 GAN-generated GPR images. A total of 5467 rebars are labeled manually as targets.
To test the impact of the generated GPR images in the training dataset on target recognition, three training datasets are created, as shown in Table 4, and the corresponding recognition precisions are investigated. The computing facility for training YOLOv4 is the same as that used for training the improved LSGAN architecture.

Training Dataset I and II
Training dataset I contains 500 field GPR images, and training dataset II contains 500 generated GPR images. All images are resized to 512 × 512 pixels. A total of 2865 rebars are labeled in dataset I, and 2602 rebars are labeled in dataset II. In each training dataset, 350 images (70%) are randomly selected for training, and the remaining 150 images (30%) are used for validation.
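The 70/30 split described above can be reproduced with a simple seeded shuffle. This is a sketch; the file-naming scheme and the seed value are assumptions for illustration.

```python
import random

def split_dataset(image_paths, train_frac=0.7, seed=42):
    """Randomly split image paths into training and validation subsets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n_train = int(len(paths) * train_frac)
    return paths[:n_train], paths[n_train:]

# 500 images -> 350 for training and 150 for validation, with no overlap.
train, val = split_dataset([f"gpr_{i:03d}.png" for i in range(500)])
```

Fixing the seed makes the split reproducible across runs, which matters when comparing models trained on the same dataset.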
The YOLOv4 network is trained using the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.0001, and the average training and validation losses are plotted in Figure 8. The network is improved gradually during the training process and reaches a steady state after 100 training epochs. The loss values for validation are close to those for training, indicating that no overfitting has occurred. Thus, the network is well trained and ready for recognition.

Next, we investigate the recognition precision of the trained network using 100 field GPR images that are not included in the training dataset. Classical evaluation metrics including precision (Pr), recall (Re), and F1 score (F1) are applied to evaluate the performance of the trained YOLOv4 model for rebar detection in GPR images. The Pr, Re, and F1 of the testing results of the validation images are averaged values.

Pr = correctly detected rebars / all detected rebars (7)

Re = correctly detected rebars / all ground-truth rebars (8)

Training results of 100 iterations are then used to evaluate the performance of the trained YOLOv4 algorithm for rebar detection, as shown in Figure 9. The overall Pr, Re, and F1 scores of dataset I are 84.9%, 81.2%, and 83.1%, respectively, while those of dataset II are 84.9%, 76.4%, and 80.3%, respectively. The generated images simulate the field GPR measurement environment with high precision but cannot be identical to field GPR images. Thus, a small number of rebars in the field GPR test dataset are not detected, resulting in a lower recall rate and F1 score. Nevertheless, datasets I and II result in the same detection precision, which means that the GPR images generated by the improved LSGAN can be used to train the detection model.

Training Dataset III
Training dataset III contains the 500 field GPR images and a number of additional generated images. The YOLOv4 network is trained with different proportions of generated images using the same training parameters as those of the previous cases. Figure 10 shows the recognition precision. The highest precision for recognizing rebars reaches 95.9%, which is higher than the precisions using training datasets I and II.

Figure 10. Classification precision for GPR images corresponding to different numbers of field training images and after augmentation with generated data, which are used for training YOLOv4.
Figure 11 shows some examples of GPR images in which the hyperbolic reflections of rebars are identified in different scenarios. Almost all the hyperbolic rebar reflections in these situations, including multiple targets, closely aligned targets, overlapped targets, and blurred targets, are correctly identified with high confidence. In addition, we have further proposed an automatic localization method based on migration and binarization, which can accurately estimate the rebar position information [17].

Discussion
This paper proposes an improved LSGAN model for the generation of high-precision GPR images. The loss function of LSGAN smooths the gradient and improves the stability of the training process, thus decreasing the possibility of mode collapse and increasing the variety of generated images. Eight residual blocks and the Tanh function in the generator are intended to reduce the training error of the deep network. Moreover, the LeakyReLU activation function is applied in the discriminator to improve the learning ability and avoid mode collapse. The results show that the quality of the images produced by the proposed improved LSGAN is better than that of the images generated by DCGAN and LSGAN at the same epochs.
To verify the feasibility of the generated GPR images for data augmentation, three different training datasets (datasets I to III) are used to train YOLOv4 models, and the recognition precisions are compared. The results reveal that datasets I and II yield the same detection precision, but a small number of rebars are missed by the model trained on dataset II, resulting in a lower recall rate and F1 score. As the number of improved-LSGAN GPR images increases (dataset III), the precision of rebar recognition rises to 95.9% when 500 generated images and 400 field images are used for training. In our previous work [17], a training dataset of 3992 GPR images containing 13,026 rebars was established by another simple data augmentation method, i.e., horizontal flipping and scaling, and the precision of rebar recognition reached only 90.9%. In comparison, image augmentation by the improved LSGAN algorithm achieves a higher recognition precision of 95.9% using only 900 images for training. This finding demonstrates that the GPR images generated by the improved LSGAN have abundant diversity and can be used to train the neural network model.


Conclusions
In this paper, we propose an improved LSGAN for the generation of GPR images to address the shortage of labeled GPR images for training deep learning models aimed at automatic subsurface target detection. Compared with other GANs, the improved LSGAN can generate GPR images of rebars with higher precision while ensuring the diversity of images. Furthermore, the improved LSGAN approach is employed for GPR data augmentation. It is found that the generated images can supplement the missing features in the field GPR data, increase the diversity of the dataset, and improve the recognition precision by 10%, compared with the precision of 84.9% achieved by using 500 field GPR images for training.
Future work will apply the improved LSGAN to generate GPR images of other underground targets, such as subsurface pipes, landmines, and cavities. In addition, the improved LSGAN will be trained by combining FDTD simulation and field images to make the generated images more diverse.