Radar-Based Microwave Breast Imaging Using Neurocomputational Models

In this study, neurocomputational models are proposed for the acquisition of radar-based microwave images of breast tumors using deep neural networks (DNNs) and convolutional neural networks (CNNs). The circular synthetic aperture radar (CSAR) technique for radar-based microwave imaging (MWI) was utilized to generate 1000 numerical simulations for randomly generated scenarios. The scenarios contain information such as the number, size, and location of tumors for each simulation. From these scenarios, a dataset of 1000 distinct complex-valued simulations was built. Subsequently, a real-valued DNN (RV-DNN) with five hidden layers, a real-valued CNN (RV-CNN) with seven convolutional layers, and a real-valued combined model (RV-MWINet) consisting of CNN and U-Net sub-models were built and trained to generate the radar-based microwave images. While the proposed RV-DNN, RV-CNN, and RV-MWINet models are real-valued, the MWINet model was also restructured with complex-valued layers (CV-MWINet), resulting in a total of four models. For the RV-DNN model, the training and test errors in terms of mean squared error (MSE) are 103.400 and 96.395, respectively, whereas for the RV-CNN model, the training and test errors are 45.283 and 153.818. Since the RV-MWINet model is a combined U-Net model, the accuracy metric is analyzed instead. The proposed RV-MWINet model has training and testing accuracies of 0.9135 and 0.8635, whereas the CV-MWINet model has training and testing accuracies of 0.991 and 1.000, respectively. The peak signal-to-noise ratio (PSNR), universal quality index (UQI), and structural similarity index (SSIM) metrics were also evaluated for the images generated by the proposed neurocomputational models. The generated images demonstrate that the proposed neurocomputational models can be successfully utilized for radar-based microwave imaging, especially breast imaging.


Introduction
In the health care industry, the diagnosis and treatment of diseases have become increasingly reliant on rapidly advancing technology. Currently, cardiovascular diseases are the leading cause of death, followed by cancer in second place [1,2]. Although cancer is a non-communicable disease with various types, breast cancer is the most prevalent form of cancer among women [1,3]. Although breast cancer can be discovered reasonably quickly and easily owing to the development of medical imaging technologies, if it is not diagnosed at an early stage, it can develop into later stages and be fatal. In addition, it is crucial to detect breast cancer at an early stage since it might metastasize and spread to other tissues, resulting in the development of additional malignancies. Although a variety of modalities are used to identify breast cancer at an early stage, X-ray mammography is the most frequently utilized primary modality [3]. However, the drawbacks of X-ray mammography include the use of ionizing X-rays for imaging, low mobility, low sensitivity, and painful compression of breast tissue between two planes. In addition, X-ray mammography, which is significantly more effective in detecting benign cancers, may necessitate 9 GHz, and 81.82% success was achieved as a malignant finding (MF) performance. Shao et al. [31] developed an auto-encoder-based DL algorithm that transforms 4 GHz data received from a 24 × 24 antenna array into 128 × 128 images. The performance of the model was evaluated by comparing the images using the distorted-Born iterative method (DBIM) and the phase confocal method (PCM) techniques. The developed model [31] utilizes the complex input data as a two-dimensional image in amplitude and phase. Chiu et al. [32] examined the U-Net and object-attentional super-resolution network (OASRN) models for electromagnetic imaging.
Using a setup of 32 transmitting and 32 receiving antennas, scattered field measurements were carried out with the addition of Gaussian noise. The authors [32] concluded that the OASRN model is superior to the U-Net model based on a comparison of the obtained images and results. Khoshdel et al. [22] developed a model based on DL for three-dimensional breast imaging. Three-dimensional CSI images are applied as the input to the proposed U-Net-based DL model, and a three-dimensional dielectric map is generated as the output. It has been demonstrated that the U-Net model, which enhances the CSI images applied to the input, produces superior results compared to the CSI method [22]. Qin et al. [23] developed a breast imaging model based on DL using microwave and ultrasonic data. The proposed model [23] utilizes ultrasound and microwave data as input, combines them, and applies convolutional layers. The output of the model is divided into two branches to provide the segmentation result and regression results, such as the dielectric constant. Considering the studies in the literature [22,33–36], it can be seen that the application of DL models in medical imaging systems is rising. DL models produce faster and higher-quality results than conventional imaging techniques, and they are becoming more popular in imaging systems.
In this study, four models utilizing deep neural networks and convolutional neural networks are proposed for the generation of monostatic radar-based microwave images from backscattered electric field data acquired according to the CSAR principle. The images generated by the models are compared to those obtained by a matching pursuit-based (MP-based) [19,37] algorithm, and the performances of the models are discussed.
The highlights of this study are as follows:
• Conventional imaging was carried out utilizing CSAR-based numerical data and an MP-based algorithm.
• For imaging, both the matching-pursuit-based method and the neurocomputational models utilized raw, unprocessed real-valued and complex-valued numerical data. Computed or measured scattered electric field data can therefore be applied directly to the models without preprocessing.
• RV-DNN and RV-CNN models are proposed, followed by two combined neurocomputational models (RV-MWINet and CV-MWINet) that embed the proposed CNN model structure in a U-Net structure. The images generated by the proposed models are compared to those generated by the matching-pursuit algorithm. The study demonstrates that the processing and generation speeds of the proposed models are faster than those of conventional imaging techniques, and that the resulting images are of higher quality.
• By placing a screw in sand and a tumor phantom in a healthy phantom, a total of 12 measurements were taken in the range of 1 GHz to 10 GHz using the measurement setup. In order to train the CV-MWINet model, the measurement data were added to the dataset obtained from simulated data. The performance of the proposed model on both simulated and measured data is also discussed.

The Forward Problem Based on the Circular Synthetic Aperture Radar (CSAR) Principle
The simulation data used in this study were generated based on the monostatic circular synthetic aperture radar (CSAR) principle [38], and the simulation data acquisition setup is illustrated in Figure 1. In this method, a transceiver antenna is rotated at certain intervals on a concentric circle with a stationary object in the imaging domain (Ω) with a dielectric distribution ε(r), and collects backscattered electric field data from this domain. This method assumes that the imaging domain is entirely encompassed by the radiation pattern of the antenna. Thus, the electric field measurements backscattered from the imaging domain contain information about the target object. The backscattered electric field data obtained in accordance with the structure depicted in Figure 1 comprise information regarding skin and tumors.
Figure 1. Simulation setup for two-dimensional breast tumor imaging (the red arcs from the antenna to the imaging field represent the propagating wave, the gray arrows the scattered field, and the red arrows the backscattered field).
According to the CSAR concept, the backscattered electric field in the frequency domain can be expressed as in Equation (1) [37], where A_0, f, ε_r, µ_r, c, and R(φ) denote the amplitude of the electric field, the frequency, the relative permittivity, the magnetic permeability, the phase velocity of the wave, and the Euclidean distance function between the scatterer and the antenna, respectively. For most common materials, µ_r is considered to be 1. For the sake of simplicity, the imaging field is considered to be homogeneous, and the tumor and skin are supposed to be discrete perfect scatterers. The angle-dependent Euclidean distance in the expression given in Equation (1) is calculated by Equation (2) [38].
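
For reference, the monostatic frequency-domain CSAR response with the parameters listed above is commonly written in the following form (a standard reconstruction; the exact expression of Equation (1) in [37] may differ in notation):

```latex
E_s(f,\phi) = A_0 \, e^{-j \frac{4\pi f \sqrt{\varepsilon_r \mu_r}}{c} R(\phi)}
```

The factor of 4π (rather than 2π) reflects the two-way travel of the wave from the antenna to the scatterer and back.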
As shown by the equation, the distance is calculated using the difference between the antenna position and the projection of the scatterers on the axis. The single transceiver antenna in the imaging system collects the backscattered electric field data from the imaging domain by positioning itself at the measurement positions shown in Figure 1 at predetermined intervals. For each measurement point, the backscattered electric field data from all scattering points within the imaging domain are collected to yield the overall electric field data. This procedure is repeated for all measurement points, resulting in 360-degree data coverage of the imaging region. The measured data contain information up to the maximum range (R_m), which can be determined using Equation (3) [38].
where N represents the number of frequencies and ∆r represents the range resolution, which is calculated using Equation (4) [38].
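
The standard stepped-frequency radar relations consistent with the definitions above are (a reconstruction; the exact forms of Equations (3) and (4) in [38] may differ):

```latex
\Delta r = \frac{c}{2\,\Delta f}, \qquad R_m = N \,\Delta r = \frac{N c}{2\,\Delta f}
```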
∆f in Equation (4) represents the bandwidth used in the measurement system. The parameters and values specified in Table 1 were utilized to acquire the total backscattered electric field data from the imaging plane. Using Equations (1) through (4), between one and three tumor scatterers with diameters between 0.2 cm and 0.9 cm and random positions and shapes in the imaging domain were generated, and numerical data for these scatterers were computed. Consequently, a complex-valued backscattered electric field dataset for 1000 scatterer scenarios was created. Each entry of the dataset consists of backscattered electric field data of size (301 × 90), so the dataset has dimensions (1000, 301, 90) in total (number of data, number of frequencies, number of angles).
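
The dataset-generation procedure above can be sketched as follows. All concrete values here (frequency grid, antenna radius, scatterer placement range) are illustrative assumptions, not the exact entries of Table 1; each scatterer is modeled as an ideal point reflector with the two-way phase delay of Equation (1).

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8                                    # propagation speed (m/s)
freqs = np.linspace(1e9, 10e9, 301)        # 301 frequency points
angles = np.deg2rad(np.arange(0, 360, 4))  # 90 measurement angles
r_ant = 0.25                               # antenna circle radius (m), assumed

def backscatter(scatterers):
    """Sum ideal point-scatterer responses over all (x, y) positions."""
    xa = r_ant * np.cos(angles)            # antenna x positions, shape (90,)
    ya = r_ant * np.sin(angles)
    field = np.zeros((freqs.size, angles.size), dtype=complex)
    for x, y in scatterers:
        R = np.hypot(xa - x, ya - y)       # distance antenna -> scatterer
        # two-way phase delay for every (frequency, angle) pair
        field += np.exp(-1j * 4 * np.pi * freqs[:, None] * R[None, :] / c)
    return field

# build a small example dataset (the paper uses 1000 scenarios)
n_scenarios = 5
data = np.empty((n_scenarios, freqs.size, angles.size), dtype=complex)
for i in range(n_scenarios):
    n_tumors = rng.integers(1, 4)          # 1 to 3 scatterers per scenario
    pts = rng.uniform(-0.05, 0.05, size=(n_tumors, 2))
    data[i] = backscatter(pts)

print(data.shape)  # (5, 301, 90)
```

Stacking 1000 such scenarios yields the (1000, 301, 90) complex-valued array described above.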

Phantom Fabrication and Measurement
In this study, measurements were carried out to be used for model training. To obtain the measurement data, phantoms of both healthy and tumor tissues were fabricated using methods similar to those described by Ortega-Palacios et al. [39]. Figure 2 depicts the phantom fabrication, the dielectric constant measurements, and the microwave imaging measurement setup. Using a dielectric probe, the dielectric constants of the phantoms were measured between 1 GHz and 10 GHz, as shown in Figure 2a. Figure 2b depicts the measurement setup, located in a large, empty space. During the measurements, an ultra-wideband (UWB) horn antenna was employed. For the sake of simplicity, rotating the material was chosen over rotating the antenna in the measurement setup. The computer-controlled turntable was rotated in steps of 4 degrees, and the scattering parameter (S11) was measured at a total of 90 angles covering a full 360 degrees. Figure 3 depicts the dielectric constant measurement graph of the phantoms manufactured as shown in Figure 2a.
When analyzing the dielectric constants presented in Figure 3 for the healthy phantom and the tumor phantom, a dielectric contrast of 4 to 6 is observed. There was a total of 12 measurements performed, including 7 obtained by placing metal screws at 7 distinct locations in the fine sand and 5 obtained by placing the tumor phantom at 5 points on the healthy phantom. The measurement data were added to the dataset used to train the deep learning model along with the simulation data.

Microwave Imaging (MWI) Using Deep Learning (DL) Models
The similarities between DL models and non-linear electromagnetic scattering are initially discussed in this study. Then, the use of three distinct real-valued and one complexvalued DL approaches will be explained. These are real-valued deep neural network-based (RV-DNN), real-valued convolutional network-based (RV-CNN), and combined real-valued and complex-valued DL models consisting of CNN and U-Net-based models (RV-MWINet and CV-MWINet).

Similarities between DL and Non-Linear Electromagnetic Scattering
The relationship between DL and non-linear electromagnetic scattering, as established by Li et al. [20], is considered in this study. For the configuration depicted in Figure 1, the total electric field E^(n)(r), where E_i^(n)(r) is the incident electric field and E_s^(n)(r) is the scattered electric field, can be calculated using Equation (5) [20]. The parameters n, k_0, H_0^(1), and χ represent the index of the scattering, the wavenumber of the background medium, the first-kind zeroth-order Hankel function, and the contrast function, respectively. r = (x, y) and r' = (x', y') indicate the field and source positions, respectively, with r, r' ∈ Ω. In computational imaging, the imaging region surrounded by the antennas, whose content is unknown, is regarded as being divided into pixels. The values of the pixels provide information related to the contrast values. Consequently, the value of the scattered electric field to be used in the imaging process is computed using Equation (6) [20].
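
In the formulation of Li et al. [20], the total field satisfies the two-dimensional Lippmann–Schwinger integral equation; with the symbols defined above, Equation (5) takes approximately the following form (a hedged reconstruction, not verbatim from the source):

```latex
E^{(n)}(\mathbf{r}) = E_i^{(n)}(\mathbf{r})
  + k_0^2 \int_{\Omega} \frac{j}{4}\, H_0^{(1)}\!\left(k_0 \lvert \mathbf{r}-\mathbf{r}'\rvert\right)
    \chi(\mathbf{r}')\, E^{(n)}(\mathbf{r}')\, d\mathbf{r}'
```

Here (j/4) H_0^(1)(k_0 |r − r'|) is the two-dimensional free-space Green's function of the background medium.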
In Equation (9), the parameter D is utilized to describe a sparse transformation process, such as a wavelet transform. The contrast function at time t + 1 can be defined as in Equation (10) [20], in which the remaining operators denote the element-wise soft-threshold and the conjugate transpose, respectively. Equation (10) can be rearranged as Equations (11) and (12) to illustrate the connection between NNs and non-linear electromagnetic scattering [20].
Diagnostics 2023, 13, 930

Equation (12) is conceptually comparable to the definition of a fully connected NN. The parameters P and b in Equation (12) correspond to the weights and bias values of fully connected NNs. The indices (t) of these parameters represent the neural network layers. This similarity and relationship demonstrate that DL models are applicable to non-linear electromagnetic scattering challenges.
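
A plausible compact rendering of Equation (12), consistent with the stated roles of P and b (a reconstruction, not the verbatim source equation), is:

```latex
\chi^{(t+1)} = \sigma\!\left( P^{(t)} \chi^{(t)} + b^{(t)} \right)
```

where σ plays the role of the activation (soft-threshold) function, so each iteration of the inversion mirrors one fully connected layer.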

Deep Neural Network-Based (DNN-Based) Imaging
Given that the dataset built through numerical computations in this study contains backscattered electric field data with complex values, the real-valued DNN (RV-DNN) model is constructed to handle the absolute value of the complex values. Figure 4 illustrates the representative architecture of the proposed RV-DNN model.

The input values supplied to the model at the input layer are passed straight to the first layer by the input elements depicted in Figure 4. Using Equation (15), the outputs of each element in the hidden layers and the output layer are computed.
In Equation (15), the parameters h_i, N, W_ij, x_j, and b_i^h represent the output value of the element, the number of inputs to the element, the weight coefficients at the input of the element, the values at the input of the element, and the bias value, respectively. The parameter σ represents the activation function, and the rectified linear unit (ReLU) activation function used in this study is given in Equation (16).
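
With the parameters defined above, Equations (15) and (16) take the standard forms:

```latex
h_i = \sigma\!\left( \sum_{j=1}^{N} W_{ij}\, x_j + b_i^{h} \right), \qquad \sigma(x) = \max(0, x)
```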
Each of the 1000 data in the dataset comprises magnitude values of the complex-valued backscattered electric field data with a size of (301 × 90). The RV-DNN model was constructed to handle real-valued input and output data in one dimension. Thus, the two-dimensional input and output data were transformed into one-dimensional vectors, and the model was trained using these vectors. The model, which is designed with an input layer consisting of 27,090 elements, contains a total of 5 hidden layers, each with 128 elements. The output layer of the model comprises 16,384 elements, as the size of the image to be generated using the model is 128 × 128. The chosen settings for the training phase of the model include the ReLU function as the activation function, the Adam algorithm as the optimization technique, an epoch number of 1000, a batch size of 32, and the mean squared error (MSE) as the metric to minimize. The 10-fold cross-validation method was applied to evaluate the performance of the model. In addition to cross-validation, the model was trained with 90% of the data and tested with 10%. To compare the performance of the models considered in the study, images were also obtained using the traditional MP-based imaging algorithm.
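
The data flow of the RV-DNN just described can be sketched at the shape level as follows: a (301 × 90) magnitude matrix is flattened to 27,090 inputs, passed through five 128-element ReLU hidden layers, and mapped to 16,384 outputs reshaped to a 128 × 128 image. The weights here are random placeholders; this only illustrates the architecture, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

# input 27,090 -> five hidden layers of 128 -> output 16,384
layer_sizes = [301 * 90, 128, 128, 128, 128, 128, 128 * 128]
weights = [rng.normal(0, 0.01, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def rv_dnn_forward(field_mag):
    h = field_mag.reshape(-1)              # flatten (301, 90) -> (27090,)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                # hidden layers use ReLU
    out = h @ weights[-1] + biases[-1]     # linear output layer
    return out.reshape(128, 128)           # vector -> dielectric-map image

image = rv_dnn_forward(np.abs(rng.normal(size=(301, 90))))
print(image.shape)  # (128, 128)
```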

Convolutional Neural Network-Based (CNN-Based) Imaging
In this study, a sequential real-valued CNN (RV-CNN) model for imaging the backscattered electric field data is proposed. In the convolution process, the filtered output data are obtained by convolving the input data with the filter, also known as the kernel matrix. The filtering allows for the extraction of various attributes of the handled data. The convolution of the input data x with the four-dimensional filter f is calculated using Equation (17) [40]. The output x_{l+1} derived from Equation (17) belongs to the solution set R^(H_{l+1} × W_{l+1} × D_{l+1}). In the equation, x_l represents the input of the l-th layer, while x_{l+1} represents the output of this layer, as well as the input of the (l + 1)-th layer. f represents the kernel function in R^(H × W × D_l × D), while ρ is the activation function.
Also, the remaining parameters are defined as H_{l+1} = H_l − H + 1, W_{l+1} = W_l − W + 1, and D_{l+1} = D. H × W represents the spatial span of each kernel, whereas D indicates the total number of kernels. The RV-CNN model developed in this study also employs the ReLU activation function given in Equation (16). The proposed RV-CNN model is shown in Figure 5; it contains 7 convolutional and 3 fully connected layers. Details of the properties of the layers are given in Table 2.
Table 2. Properties of the proposed CNN-based model layers.
As with the proposed RV-DNN model, the RV-CNN model is designed to produce a one-dimensional dielectric map vector. In order to train the model, 1000 input data consisting of (301 × 90) backscattered electric field values were utilized. At the output of the model, a total of 1000 one-dimensional dielectric map vectors of length 16,384 were obtained through training. The output vector is reshaped into two-dimensional form during the imaging step. The proposed RV-CNN model was trained using the ReLU activation function, the Adam optimization algorithm, 2000 epochs, a batch size of 32, and the mean squared error (MSE) as the metric to minimize. As with the RV-DNN model, a 10-fold cross-validation approach was used to evaluate the performance of the model.
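
The "valid" convolution output-size rule quoted above (H_{l+1} = H_l − H + 1, W_{l+1} = W_l − W + 1, D_{l+1} = D) can be checked with a small helper; the example kernel size is an assumption for illustration, not taken from Table 2.

```python
def conv_output_shape(h_l, w_l, d_l, kernel):
    """kernel is (H, W, D_l, D): spatial span H x W, D_l input and D output channels."""
    h, w, d_in, d_out = kernel
    assert d_in == d_l, "kernel depth must match input depth"
    # valid (no-padding, stride-1) convolution shrinks each spatial dim by kernel - 1
    return (h_l - h + 1, w_l - w + 1, d_out)

# e.g. a (301, 90, 1) input filtered by a 3 x 3 kernel with 16 output channels
print(conv_output_shape(301, 90, 1, (3, 3, 1, 16)))  # (299, 88, 16)
```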

U-Net-Based Combined Neurocomputational Imaging Model
In this study, two neurocomputational models, named MWINet, are proposed for use in microwave imaging by combining the proposed CNN model with the U-Net-based model. For this purpose, a U-Net-based model extends the sequential CNN model. The proposed model utilizes raw scattered electric field data as the input and generates a one-dimensional microwave image. The structure of the proposed MWINet model is given in Figure 6.
As seen from Figure 6, the CNN structure in the initial layers of the proposed MWINet model provides general imaging, while the U-Net section is responsible for image cleaning and tumor structural clarification. For the purposes of this study, the layers of the model depicted in Figure 6 were constructed as RV-MWINet models with real-valued layers and CV-MWINet models with complex-valued layers.
In order to train the RV-MWINet model, 1000 input data consisting of (301 × 90) backscattered electric field values were utilized. At the output of the model, a total of 1000 data consisting of one-dimensional dielectric map vectors of length 16,384 were obtained through training. The output data used to train the model were converted to binary values. The output vector is reshaped into two-dimensional form during the imaging step.

For the proposed RV-MWINet model, the real-valued ReLU function was chosen as the activation function for the inner layers, and the sigmoid activation function was chosen for the output layer. In the layers of the CV-MWINet model, however, the Cartesian ReLU (CReLU) activation function given by Equation (18) is used, while the magnitude of the complex sigmoid function given by Equation (19) is used in the output layer.
In Equations (18) and (19), the parameters x and y represent the real and imaginary components of the input data, respectively. The optimization algorithm selected was Adam, with 500 epochs, a batch size of 32, and accuracy as the metric to be maximized. Similar to the proposed RV-DNN and RV-CNN models, a 10-fold cross-validation approach was used to evaluate the performance of the model. While 1000 real-valued data were used to train and evaluate the performance of the RV-MWINet model, 12 measurement data were added to the data used to train and analyze the performance of the CV-MWINet model.
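
The two complex-valued activations described above can be sketched as follows. Since the bodies of Equations (18) and (19) are not reproduced here, these follow the common definitions: CReLU applies ReLU to the real and imaginary parts separately, and the output activation takes the magnitude of a sigmoid evaluated on the complex input.

```python
import numpy as np

def crelu(z):
    """Cartesian ReLU: clip real and imaginary parts at zero separately."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def magnitude_sigmoid(z):
    """Magnitude of the complex-valued logistic sigmoid (real-valued output)."""
    return np.abs(1.0 / (1.0 + np.exp(-z)))

z = np.array([1 + 2j, -1 + 0.5j, -3 - 4j])
print(crelu(z))              # negative real/imaginary parts are zeroed
print(magnitude_sigmoid(z))  # real-valued activations for the output layer
```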

Evaluation Metrics
In this study, accuracy (ACC), mean squared error (MSE), peak signal-to-noise ratio (PSNR), universal quality image index (UQI), and structural similarity (SSIM) metrics were utilized to examine the images generated by the proposed neurocomputational models. For the MSE metric, the equation given in Equation (20) is used.
The variables x and y in the equation represent the input and output images of size m × n. Although MSE is a significant metric in regression problems, it is more typical to utilize the well-known PSNR, UQI, and SSIM metrics to visually analyze images. The PSNR metric is given in Equation (21), where the M_I parameter represents the maximum value of the pixels. In addition to PSNR, the UQI and SSIM metrics given in Equations (27) and (28) provide significant information about the generated images. The values of the variables used in Equations (27) and (28) are calculated by Equations (22)–(26).
In Equation (27), the dynamic range of the UQI value is [−1, 1]. The optimal value is 1, which can only be achieved when the two images are identical. In the equations, µ represents the mean, and σ represents the variance. In fact, the UQI value is the premise of the SSIM calculation. The SSIM metric is calculated by Equation (28).
Comparing Equations (28) and (27), it can be observed that the difference between the equations is due to the C_1 and C_2 coefficients. The UQI value is obtained when C_1 and C_2 in the SSIM equation are both set to 0.
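
The metrics discussed above can be sketched as follows, using the standard global (whole-image) definitions of MSE, PSNR, and UQI rather than the windowed variants; the default peak value of 255 is an assumption for 8-bit images.

```python
import numpy as np

def mse(x, y):
    return np.mean((x.astype(float) - y.astype(float)) ** 2)

def psnr(x, y, max_i=255.0):
    """PSNR = 10 log10(M_I^2 / MSE); infinite for identical images."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(max_i ** 2 / e)

def uqi(x, y):
    """Universal quality index; equals SSIM with C1 = C2 = 0."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return 4 * cov * mu_x * mu_y / ((var_x + var_y) * (mu_x**2 + mu_y**2))

a = np.random.default_rng(2).random((16, 16))
print(uqi(a, a))  # identical images give the optimal value 1
```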

Numerical Results and Discussion
In this study, 1000 complex-valued backscattered electric field data were generated numerically using the setup in Figure 1 and the parameters and values in Table 1. The magnitudes of these data were used to create a dataset for the real-valued neurocomputational models, while another dataset was generated for the CV-MWINet model using the original complex values along with 12 measured values. To improve the generalizability of the proposed models, the number of data was kept as high as possible. Thus, the input data have the dimensions (1000, 301, 90, 1), whereas the output data have the dimensions (1000, 512, 512, 1). To simplify training and testing of the models, the output images were resized to dimensions of (1000, 128, 128, 1). The 10-fold cross-validation method was used for the performance evaluation of the proposed neurocomputational models. Although different numbers of epochs were used to train the models, 1000 epochs and a batch size of 32 were chosen in the 10-fold cross-validation process for all four models. Table 3 provides a comparison of the evaluation results obtained through 10-fold cross-validation using the training data. The values in Table 3 are expressed as the mean value ± the standard deviation.
MSE and SSIM metrics are presented in Table 3 for the proposed RV-DNN and RV-CNN models, while ACC and SSIM metrics are presented for the MWINet models. This is because the proposed RV-DNN and RV-CNN models use float-valued output images for training, whereas the MWINet models use binary-valued output images. On examining the data in Table 3, it can be seen that the RV-DNN model has a higher MSE error than the RV-CNN model, while the SSIM metrics are greater for the RV-DNN model than for the RV-CNN model. A comparison of the 10-fold cross-validation results of the MWINet models with those of the other models indicates that the MWINet models have superior training performance. Table 4 presents a comparison of the 10-fold cross-validation performance of the proposed neurocomputational models using test data.
In terms of MSE error, the RV-CNN model outperforms the RV-DNN model, although the SSIM values are comparable. The MWINet models are observed to produce superior outcomes compared to the proposed RV-DNN and RV-CNN models. After the 10-fold cross-validation, the dataset was shuffled, and the neurocomputational models were then trained utilizing 90% of the data. The remaining data were utilized for both validation and testing. Figure 7 illustrates the change in the MSE metric during the training and validation of the RV-DNN model.
The MSE metric indicates a dramatic fall in the initial epochs and a monotonic reduction in the subsequent epochs, as depicted in Figure 8. In comparison to the proposed RV-DNN model, the RV-CNN model exhibits a smaller gap between the training and validation errors. The normalization layers used in the model help to keep the validation error close to the training error. The MSE errors of the proposed RV-CNN model are obtained as 45.283 and 153.818 during training and testing, whereas the SSIM metrics are calculated as 0.91000 and 0.92300, respectively. Figure 9 depicts the change in the accuracy metric of the RV-MWINet model during training and validation.
Figure 9 illustrates a slower rise in training accuracy compared to validation accuracy. Due to the chosen batch size and the fact that the solution space has a high number of local minima, the accuracy curves contain numerous ripples. Due to the design of the RV-MWINet model, both the CNN structure in the first model layers and the U-Net-based model layers are trained simultaneously. Since image generation and improvement are performed concurrently, it is acceptable for the number of ripples to increase throughout training and validation. The MSE, SSIM, and accuracy metrics for the training phase of the proposed RV-MWINet model are 0.00083, 0.99996, and 0.91139, while the same metrics for the testing process are 0.00467, 0.99957, and 0.86359. To account for the effect of the phase component of the complex-valued backscattered electric field data, each layer of the MWINet model in Figure 6 was replaced with a complex-valued layer to construct the CV-MWINet model structure. Figure 10 illustrates the evolution of the accuracy metrics of the proposed CV-MWINet model for training and validation over 500 epochs. During the training of the CV-MWINet model, the complex average cross-entropy (CACE) loss function given by Equation (29) was utilized, and the model weights at the iteration with the most effective weight distribution were kept.
Loss_CACE = Loss_CCE(Re(y_pred), y_true) + Loss_CCE(Im(y_pred), y_true)    (29)

In Equation (29), CACE and CCE denote the complex average cross-entropy and the categorical cross-entropy, respectively. The proposed CV-MWINet model was trained for 500 epochs with a batch size of 32 and achieved a training accuracy of 0.991 and a validation accuracy of 1.000.
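The splitting of the loss over real and imaginary parts in Equation (29) can be sketched in pure Python (an illustrative reconstruction, not the authors' implementation; `categorical_cross_entropy` and `cace_loss` are hypothetical names, and `eps` is an assumed numerical-stability constant):

```python
import math

def categorical_cross_entropy(y_pred, y_true, eps=1e-12):
    """Average categorical cross-entropy; each row of y_pred is a probability vector."""
    total = 0.0
    for probs, target in zip(y_pred, y_true):
        total -= sum(t * math.log(max(p, eps)) for p, t in zip(probs, target))
    return total / len(y_pred)

def cace_loss(y_pred_complex, y_true):
    """Equation (29): CCE on the real part plus CCE on the imaginary part."""
    re = [[p.real for p in row] for row in y_pred_complex]
    im = [[p.imag for p in row] for row in y_pred_complex]
    return categorical_cross_entropy(re, y_true) + categorical_cross_entropy(im, y_true)
```

With a perfect complex prediction such as `[[1+1j, 0j]]` against the one-hot target `[[1, 0]]`, both terms vanish and the loss is zero; each departure of either component from the target adds its own cross-entropy penalty.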
In order to compare the performance of the proposed models, the RV-DNN, RV-CNN, and MWINet models are employed to generate images from data samples. Images were also generated from the same data using the conventional MP-based MWI technique. Figure 11 depicts the images generated from randomly selected training data samples. The ground truth images are depicted in Figure 11a,g,m.
Figure 11b,h,n depict radar-based images generated by the MP-based method for data containing one tumor, two tumors, and three tumors, respectively. Even though the backscattered electric field data contains information on a relatively small scatterer, the MP-based method can make this scatterer appear larger than it actually is. In the case involving a single tumor, the image obtained from the RV-DNN model provides limited information regarding the position of the tumor. Although the RV-CNN model produces a clearer image of the same tumor, the RV-MWINet model produces the most accurate image. In the case involving two tumors, the MP-based algorithm generated a substantially larger image for the smaller tumor. In the images generated by the proposed RV-DNN and RV-CNN models, the small tumor is not visible. In this scenario, the RV-MWINet model delivers the most accurate representation of the ground truth. Figure 11k demonstrates that the RV-MWINet model is able to image relatively small tumors. In the scenario involving three tumors, one tumor is positioned far away, while the other two are located quite close to one another. In this case, the MP-based algorithm treats the two nearby tumors as a single tumor, as shown in Figure 11n. The RV-DNN model does not distinguish well between the two nearby tumors, and the resulting image is quite noisy. The image generated by the proposed RV-CNN model is superior to those generated by the conventional method and the RV-DNN model, but it also contains noise.
Figure 11. Comparison of samples of microwave images generated by the proposed neurocomputational models for train data ((a,g,m) are ground truth images).

Figure 11q depicts the image generated by the RV-MWINet model, which is the image most similar to the ground truth. Images obtained with CV-MWINet are given in Figure 11f,l,r.
When these images are analyzed, it can be observed that they are identical to the ground truth images. It may be stated that processing the complex-valued input information in complex-valued layers without losing the imaginary component of the data enhances the image quality at the output of the CV-MWINet. Similar to Figure 11, Figure 12 shows the images generated by the MP-based algorithm, RV-DNN model, RV-CNN model, and MWINet models for test data samples. In the case of a single tumor, the location of the tumor can be detected, albeit imprecisely, using blurry images obtained with the MP-based algorithm, RV-DNN model, and RV-CNN model. As seen in Figure 12e, RV-MWINet provided the cleanest and finest image for this case. In a scenario with two tumors, the MP-based method generates a rather large tumor image for the small tumor, as depicted in Figure 12h. This image also demonstrates that the MP-based algorithm depicts the tumor as being sufficiently massive to extend beyond the skin. The RV-DNN-based image in this scenario is quite noisy, so only the position of the major tumor is recognizable. Even if the image is noisy, the RV-CNN model can generate a better image than other models. In contrast, the MWINet models generated the most precise results in these scenarios. In all test scenarios, the CV-MWINet model achieves the best results compared to the other models, while the RV-MWINet model produces results that are comparable to those of the CV-MWINet model. It is possible to say that the usage of complex-valued data improves the performance of the model. In the final scenario with two large tumors and one small tumor, the MP-based algorithm presents two adjacent tumors as if they were a single tumor. The small tumor is not visible in the image generated by the RV-DNN model. In this case, the RV-CNN model displays two adjacent tumors as a single tumor. 
However, as shown in Figure 12q,r, the RV-MWINet and CV-MWINet models accurately predicted the location and size of the three tumors in this scenario.
In order to analyze the performance of the models on a real-world problem following the simulation studies, a metal screw was placed in fine sand and a tumor phantom was placed in a healthy phantom, and measurement data was collected using a horn antenna and an Agilent vector network analyzer in accordance with the monostatic CSAR principle. The metal screw in fine sand allows the analysis of the effects of PEC material in a homogeneously distributed environment, whereas the tumor phantom placed within a healthy phantom simulates a realistic patient. In the measurement scenarios presented in Table 5, the scatterers were placed at a specific distance and a 45-degree angle to the x-axis relative to the center of the imaging domain. Figure 13 depicts the images generated by the CV-MWINet model using data collected from the measurement scenarios involving metal screws in fine sand. Although measurement results were also utilized to train CV-MWINet, the performance of the model for measurement data was remarkably precise.
Figure 14 illustrates images generated from the CV-MWINet model utilizing measurement data with the tumor phantom located within the healthy phantom. In the scenarios utilizing phantoms, where the radius of the tumor phantom is around 2 cm, the tumors in the images are correspondingly large. Figure 14 shows that in scenarios #5, #7, and #8, the images obtained from the model closely match the ground truth images; however, in scenario #6, the image derived from the CV-MWINet model depicts two adjacent tumors when there should be only one. One of the main reasons for this inaccuracy is the small number of measurement samples in the dataset used to train the model. Increasing the number of measurement samples can be expected to yield more precise results. Table 6 provides PSNR, UQI, and SSIM metrics for the entire dataset, in addition to the simulation data, for the models proposed in this study.
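For reference, two of the Table 6 metrics can be computed from first principles. The sketch below is a minimal pure-Python illustration over flattened grayscale pixels (assuming a peak value of 1.0 for PSNR), not the evaluation code used in this study; UQI follows the standard universal quality index formula combining correlation, luminance, and contrast terms.

```python
import math

def psnr(pred, truth, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max_val^2 / MSE)."""
    err = sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / err)

def uqi(x, y):
    """Universal quality index: 4*cov*mx*my / ((var_x+var_y)*(mx^2+my^2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

UQI equals 1 only when the generated image matches the reference exactly, which is why it complements PSNR: PSNR grows without bound as the MSE shrinks, while UQI is bounded and also penalizes mean and contrast shifts.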
Analyzing the numerical metrics in Table 6 reveals that the neurocomputational models proposed in this study produce images of higher quality than conventional techniques. Even though the metrics of the proposed RV-DNN and RV-CNN models are comparable, the RV-CNN model noticeably outperforms the RV-DNN model when the images themselves are analyzed. The images and metrics provided by the MWINet models demonstrate that these models generate exceptionally high-quality microwave images. Even though their training time is longer, deep learning models are well known to generate images quickly during the testing phase. In contrast, traditional algorithms can require extended periods of time to generate images. The times required to generate the traditional images depicted in Figures 11 and 12 are listed in Table 7, based on the mesh size employed by the MP-based method utilized in this study.
As shown in Table 7, imaging was carried out using an MP-based technique with 9061 and 16,105 mesh points. The imaging time required by the MP-based approach is not dependent on the number of tumors but is heavily reliant on the number of mesh points. The neurocomputational models proposed in this study can generate images of superior quality in less time than conventional techniques.