
Adversarial Resolution Enhancement for Electrical Capacitance Tomography Image Reconstruction

Department of Computer Science in Jamoum, Umm Al-Qura University, Makkah 25371, Saudi Arabia
Computers and Systems Engineering Department, Mansoura University, Mansoura 35516, Egypt
Electrical Engineering Department, Assiut University, Assiut 71516, Egypt
Laboratoire d’Informatique et des Technologies de l’Information d’Oran (LITIO), University of Oran, Oran 31000, Algeria
Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3142;
Received: 8 March 2022 / Revised: 15 April 2022 / Accepted: 17 April 2022 / Published: 20 April 2022
(This article belongs to the Section Sensing and Imaging)


High-quality image reconstruction is essential for many electrical capacitance tomography (ECT) applications. In the literature, raw capacitance measurements are used to generate low-resolution images; however, such low-resolution images are not sufficient for the proper functioning of most systems. In this paper, we propose a novel adversarial resolution enhancement (ARE-ECT) model to reconstruct high-resolution images of inner distributions from low-quality initial images, which are generated from the capacitance measurements. The proposed model uses a UNet as the generator of a conditional generative adversarial network (CGAN). The generator's input is the low-resolution image rather than the typical random input signal, and the CGAN is conditioned by this low-resolution image itself. For evaluation purposes, a massive ECT dataset of 320 K synthetic image–measurement pairs was created and used for training, validating, and testing the proposed model. New flow patterns, not exposed to the model during the training phase, were used to evaluate the feasibility and generalization ability of the ARE-ECT model. The evaluation results prove the superiority of ARE-ECT in efficiently generating more accurate ECT images than traditional and other deep learning-based image reconstruction algorithms. The ARE-ECT model achieved an average image correlation coefficient of more than 98.8% and an average relative image error of about 0.1%.

1. Introduction

During the 1980s, building on the computed tomography (CT) technique of medical imaging, researchers proposed electrical capacitance tomography (ECT) [1]. Because of its low cost and accuracy, ECT has been widely used for industrial process monitoring in reactors, pipelines, and containers, and in any setting where the monitored components are non-conductive dielectrics. Knowing the internal distribution of materials inside an industrial process container or pipe is essential in many applications, and tomography plays a very important role in several industrial fields. Typical examples of the use of this technology include the food industry, industrial tomography, biomedical processes [2], gas–fluid flow [3], chemical and pharmaceutical processes [4,5], and non-destructive evaluation of invisible objects in dams and flood embankments [6].
Electrical capacitance tomography (ECT) can be defined as the use of electrodes to measure capacitance changes that are transformed into two-dimensional images as visual outputs using image reconstruction algorithms [7]. Typically, the number of electrodes in the ECT sensor determines the number of independent capacitance measurements (usually 28 to 496), and the acquisition rate varies from a few up to several thousand images per second [8]. One or more high-performance PCs working together can then process the collected data with mathematical models and dedicated image reconstruction algorithms to make appropriate diagnostic decisions for effective process control and automation [7].
ECT can be implemented both in real-time [9] and offline mode [10]. The choice of the image reconstruction algorithm plays a crucial role in the ECT process since it directly impacts the image quality [11]. ECT image reconstruction can be implemented through iterative algorithms, e.g., the iterative Landweber method (ILM) [12], Newton–Raphson [13], and Tikhonov regularization [14], or through non-iterative methods, such as linear back projection (LBP) [15]. Despite their speed and simplicity, non-iterative methods have not been widely adopted because they suffer from deformations in the reconstructed images [16]. Iterative methods, in comparison, can generate higher quality images; however, they are computationally very expensive and thus more useful for offline processing. The need for tools that balance the trade-off between high-quality reconstructed images and computational efficiency is currently a main interest of machine learning (ML) [17,18], and more specifically of deep neural network (DNN) methods [19]. DNN methods have been utilized in many fields due to their ability to map complex nonlinear functions [20,21], and DNN algorithms have been adapted into image reconstruction methods based on the convolutional neural network (CNN) [22], multi-scale CNNs [23], long short-term memory (LSTM) [24], and autoencoders [25]. To solve the forward problem and estimate the capacitance measurements, Deabes et al. used a capacitance artificial neural network (CANN) system [26,27]. Thanks to its ability to effectively exploit geometric relationships hidden in commonly used unstructured grid models, the graph convolutional network (GCN) was proposed in [28] to increase the quality of the ECT image.
Moreover, a long short-term memory image reconstruction (LSTM-IR) algorithm was implemented to map the capacitance measurements to accurate material distribution images [24].
Generative adversarial networks (GANs) are among the most interesting techniques recently developed in ML [29,30]. These networks have enabled results that were previously thought difficult to achieve: text-to-image generation [31], text generation in different styles [32], generation of and defense against fake news [33], conversion of sketches to images [34], generation of photo-realistic images [35], and even game designs learned by watching videos [36]. The conditional generative adversarial network (CGAN) [37], a particular version of the standard GAN, allows better control over the output of generative adversarial models. Subsequently, this kind of GAN has been applied in medicine to the CT of soft tissues [38], as well as to the tomography of material structures with synchrotron radiation [39,40].
A novel post-processing adversarial resolution enhancement (ARE-ECT) model for improving the quality of ECT reconstructed images is proposed in this paper. The proposed model is inspired by deep learning networks for image super-resolution [41,42]. Principally, we assumed that a CGAN can be trained to enhance low-resolution ECT images reconstructed from a few capacitance measurements. Particularly, the CGAN's generator and discriminator networks are trained so that the generator produces high-resolution images from lower resolution reconstructions. As a result, when trained with pairs of ECT image reconstructions of a simulated phantom and the phantom itself, the CGAN model learns how to enhance the resolution of its inputs. Accordingly, the proposed adversarial model achieves better results than recent complex, time-consuming non-linear ECT image reconstruction methods, and brings the reconstructed images closer to the phantom reference quality.
The contributions of this paper can thus be summarized as follows:
  • The adversarial resolution enhancement (ARE-ECT) model was developed to improve the quality of ECT image reconstruction.
  • The proposed model predicts enhanced ECT image reconstructions from lower quality ones.
  • Our CGAN-based approach produces qualitatively and quantitatively better ECT image resolution than current complex and time-consuming non-linear reconstruction algorithms.
The remainder of this paper is organized as follows: Section 2 covers the ECT image construction problems. Section 3 describes the DNN models, including GAN and CGAN. Section 4 introduces a new ARE-ECT model to enhance the ECT image construction. Section 5 describes the dataset used to train, test, and evaluate the proposed model. Section 6 discusses the experimental results and the validity of the proposed model. Finally, Section 7 presents our conclusions.

2. Problem Statement

The ECT problem is a typical image reconstruction problem: given input data measurements, a higher resolution image is to be reconstructed. The input measurements can be any data correlated to the reconstructed image, and their modality does not necessarily have to match that of the output. In the ECT problem, the input data are a small set of sensor readings fed into the reconstruction algorithm as the input signal. The ECT sensor generates readings via a number of electrodes (n = 12), which are evenly mounted around the imaging area. Figure 1 illustrates the sensor setup. To capture the variations in the permittivity of the inner distribution, the mutual capacitance of each pair of these electrodes is measured independently [43]. This pairwise measurement process results in a total of M = n(n − 1)/2 capacitance measurements. To keep the electric field uniform, decrease external coupling, and eliminate interference, the electrodes are separated by insulating guards [44].
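As a small illustration (our sketch, not the paper's code), the pairwise measurement count M follows directly from the electrode count n:

```python
def num_measurements(n_electrodes: int) -> int:
    # Each unordered electrode pair (u, v) yields one independent
    # mutual-capacitance measurement: M = n(n - 1) / 2.
    return n_electrodes * (n_electrodes - 1) // 2

# For the 12-electrode sensor described here:
print(num_measurements(12))  # 66
```

This matches the 1 × 66 raw measurement vector mentioned in Section 4.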
The distribution of the permittivity of the inner material within the area of interest affects the distribution of the electric field, which is defined according to the Poisson linear partial differential equation, as shown in Equation (1).
∇ · (ε(x, y) ∇ϕ(x, y)) = −ρ(x, y),
where ε(x, y) is the distribution of permittivity, ϕ(x, y) is the potential distribution, and ρ(x, y) denotes the charge distribution.
The mutual capacitance between electrode pairs is given by Equation (2).
C_{uv} = Q_v / V_{uv} = −(1 / V_{uv}) ∮_{Γ_v} ε(x, y) ∇ϕ(x, y) · k̂ dl
where C_{uv} is the mutual capacitance between two electrodes u and v, Q_v denotes the charge on the sensing electrode, defined according to Gauss's law, V_{uv} denotes the potential difference, Γ_v represents a closed path embracing the detection electrode, and k̂ stands for the unit vector normal to Γ_v.
The ECT image reconstruction involves solving two types of problems: the forward and inverse. The forward problem refers to the numerical computation of the capacitance measurements from the sensor reading, according to Equation (3):
C_{M×1} = S_{M×N}(ε_0) · G_{N×1}
where C is the calculated capacitance, S is the sensitivity matrix, N = 16,384 is the number of image pixels, and G is the permittivity distribution. The sensitivity matrix is the Jacobian of the capacitance with respect to the pixels, evaluated at ε_0.
The ECT inverse problem refers to estimating the permittivity distribution, G, given the capacitance measurement, C, and the sensitivity matrix, S. A non-iterative solution can be obtained directly from Equation (3) using non-iterative algorithms, e.g., LBP, as shown in Equation (4).
G = S^T C
However, the obtained images using such a paradigm suffer from poor quality. This shortcoming could be dealt with using iterative algorithms, e.g., the Landweber algorithm (LW), as shown in Equation (5).
G_{k+1} = G_k − λ S^T (S G_k − C)
where λ is the relaxation parameter, S G_k is the forward problem solution, and k is the iteration number. However, despite the significant improvement in the quality of the reconstructed images, this comes with high computational costs.
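The two paradigms above can be sketched in a few lines of NumPy, assuming a dense sensitivity matrix S (the function names are ours, for illustration only):

```python
import numpy as np

def lbp(S, C):
    # Linear back projection, Equation (4): a single matrix product.
    return S.T @ C

def landweber(S, C, n_iter=200, lam=1e-3):
    # Iterative Landweber, Equation (5):
    # G_{k+1} = G_k - lam * S^T (S G_k - C), starting from the LBP image.
    G = lbp(S, C)
    for _ in range(n_iter):
        G = G - lam * S.T @ (S @ G - C)
    return G
```

Convergence requires λ below 2 divided by the largest squared singular value of S; in practice, as noted for the iterative methods in Section 6, the relaxation parameter is tuned empirically.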

3. Deep Neural Network Models

The ECT inverse problem can be viewed as a data generation problem controlled by certain constraints. Specifically, a low-resolution input image is the control input that governs the creation of the higher resolution permittivity solution. Over the years, many models have been developed based on DNNs. One of the most popular, extensively researched and applied in image processing and computer vision, is the generative adversarial network (GAN). In addition, a conditioned version called the CGAN was developed to control the reconstructed image and guarantee high quality outputs [45]. Therefore, we propose using a CGAN model for this purpose. In the following subsections, we provide a brief overview of GANs and CGANs and then describe the proposed ECT image reconstruction model using a CGAN.

3.1. GAN

The GAN [29] was introduced to force two competing learning agents into a performance race during data generation. The first agent, the generative model G, is responsible for capturing the data distribution: it learns how to generate, from scratch, data patterns that follow the same distribution as the input data. The second agent, the discriminative model D, learns how to discriminate between real samples drawn from the input data and the fake samples generated by G. During the training process, each agent simultaneously optimizes its own objective function in a competitive manner. This leads to a state in which the data produced by the generator can hardly be identified as fake.
In the training process, G learns a distribution p_g over the input data. This is accomplished by building a mapping function from a noise distribution to a generative data space G(z, θ_g). The discriminator D learns how to generate a Boolean decision indicating whether its input data come from the training data or were generated by G. The purpose of the training process is to adjust the parameters of the generator to deceive the discriminator by minimizing log(1 − D(G(z))). At the same time, the parameters of the discriminator are adjusted to optimally detect the real data by maximizing log(D(x)). These two competing objectives are aggregated in a combined objective value function V(G, D), as shown in Equation (6).
V(G, D) = arg min_G max_D ( E_{x∼p_data(x)} [log(D(x))] + E_{z∼p_z(z)} [log(1 − D(G(z)))] )
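The two terms of Equation (6) can be written as per-network losses over batches of discriminator outputs; the following is an illustrative sketch of ours, not the paper's training code:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x) + log(1 - D(G(z)));
    # equivalently, it minimizes the negated sum.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # G minimizes log(1 - D(G(z))), i.e., it pushes D(G(z)) toward 1.
    return np.mean(np.log(1.0 - d_fake))
```

At the equilibrium point, where D outputs 0.5 for both real and fake samples, the discriminator loss equals 2 log 2.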

3.2. CGAN

The GAN has been modified and developed into many variants over the last few years, and the CGAN is one of them [37]. The novelty of this model is that the data are labeled during the training process. Table 1 shows the differences between the two models. They may look similar, yet the major difference between them involves adding extra information to control the output [46,47]. The CGAN is thus an extension of the generative adversarial network in which a condition is imposed on both the generator (G) and the discriminator (D) by feeding some extra information, y, into the input layer as an additional constraint. This extra information guides both G and D by incorporating auxiliary data from the same or other modalities. For the objective function of Equation (6), this amounts to conditioning G and D, as shown in Equation (7).
V(G, D) = arg min_G max_D ( E_{x∼p_data(x)} [log(D(x|y))] + E_{z∼p_z(z)} [log(1 − D(G(z|y)))] )

4. ARE-ECT Model

As explained in Section 2, the main objective of the ECT image reconstruction problem is to generate a high quality permittivity distribution image, given a lower resolution distribution input image. Therefore, the first step of the proposed ARE-ECT model is to prepare the input image for the generator in a preprocessing phase, as shown in Figure 2. The input to this phase is the set of capacitance readings: the ECT capacitance sensor produces a 1 × 66 raw data vector, i.e., M = 66. Afterwards, the input image is generated using the traditional LW of Equation (5) with k = 0, so the initial image of the permittivity distribution is provided by a few fast matrix multiplications. The input image resulting from the preprocessing phase is fed to a generator. This generator could be a traditional autoencoder; however, although autoencoders are capable of reconstructing such patterns, the spatial information of the input signals is not modeled with sufficient accuracy. Given that the spatial information of the inner distributions is essential for reconstructing the flow pattern image, a generator that can preserve such spatial representation is mandatory. The UNet is a good candidate to satisfy this requirement [48]. Therefore, we adopted the UNet to construct the flow pattern in the generator module. Figure 3 illustrates the details of the UNet used in ARE-ECT. Four blocks were used on the encoder side and, similarly, four blocks were placed on the decoder side; the latent vector size was eight. The input layer's low resolution image, generated by the preprocessing phase, was concatenated with the image generated by the final layer. Similarly, each input of the hidden layers on the decoder side was concatenated with the output of the corresponding layer from the encoder side.
The UNet generator module produces a flow pattern, which is considered a fake sample for the discriminator training. A synthetic data generator was developed to generate real samples, FP_r, for discriminator training. As shown in Figure 2, the architecture of our UNet generator was designed with two sections: downsampling and upsampling. The main idea of the UNet is to map a low resolution input image of size 128 × 128 to a 1-D vector and then reconstruct it back into a high quality image. Each step of the downsampling (encoder) contraction applies a 3 × 3 convolutional layer, batch normalization, and ReLU activation, followed by a 2 × 2 max pooling. This stage first generates a downsized image of size 64 × 64 with 128 features and continues to the latent vector size of 8 × 8 with 1024 features. The layers in the decoder (upsampling) section employ a 2 × 2 upsampling layer after convolution. During the upsampling process, the corresponding feature maps from the downsampling part are reused to reduce image distortion; they are appended directly after the upsampling layer. The proposed model is designed for a 12-electrode ECT sensor setup. If this setup changes in terms of the number of electrodes, a new dataset must be generated; every generated dataset is therefore valid only for its underlying hardware configuration, because the resolution of the initially generated low-resolution images varies with the number of installed sensors.
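The encoder's shape progression described above (spatial size halving and feature count doubling in each of the four blocks) can be sketched with a small helper; this is an illustrative function of ours, not the authors' implementation:

```python
def encoder_shapes(size=128, features=128, blocks=4):
    # After each 3x3 conv + 2x2 max-pool block, the spatial size halves
    # while the number of feature maps doubles, ending at the 8x8 latent.
    shapes = []
    for _ in range(blocks):
        size //= 2
        shapes.append((size, size, features))
        features *= 2
    return shapes

print(encoder_shapes())
# [(64, 64, 128), (32, 32, 256), (16, 16, 512), (8, 8, 1024)]
```

The first tuple matches the 64 × 64 × 128 stage and the last the 8 × 8 × 1024 latent stated in the text; the decoder mirrors this progression in reverse, concatenating the corresponding encoder feature maps at each level.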

5. ECT Dataset

We implemented a MATLAB GUI software package to build different configurations of ECT sensors. Various flow patterns can be simulated and their forward problems solved to generate the corresponding capacitance measurements. An extensive ECT benchmark dataset was developed for training and testing the proposed ARE-ECT. A traditional image reconstruction algorithm was used to reconstruct the permittivity distributions, which served as the initial images x for the deep learning ARE-ECT model; in this paper, we used the LW algorithm as the inversion algorithm to generate the initial input image. The dataset consisted of 320 k samples, each a pair of an actual permittivity distribution vector as the ground truth and the image reconstructed by the LW algorithm from the corresponding capacitance measurement vector. The sizes of the actual distribution and the LW reconstructed image were 128 × 128 = 16,384 pixels. The ECT sensor was composed of 12 electrodes, as shown in Figure 1. The sensor pipe was made from PVC with a relative permittivity of 2; the diameter and thickness of the pipe were 100 mm and 2 mm, respectively. The electrodes were separated by gaps of 4 degrees, and the span angle of each electrode was 26 degrees. The dataset contained five different flow patterns: 10 k ring patterns, 20 k annular patterns, 10 k stratified patterns, 140 k patterns of 1–3 circular bars, and 140 k patterns of 1–3 square bars. Figure 4 shows some samples of various flow patterns from the generated ECT dataset. The low phase was air with a relative permittivity of 1, and the relative permittivity of the high phase, glass, was 4. Random variables were used in building the dataset. For instance, a uniform random variable ranging from 10% to 95% of the imaging area's radius was applied to the ring width of the annular flow.
The stratified flow height was assigned a uniform random variable in a range of 5–95% of the diameter of the sensing field. The number of circular and square bars varied from 1 to 3. The generated data have some discrepancies in the number of instances within each type to reflect varying degrees of randomness. Additionally, every flow pattern had a different number of attributes determining its geometric specification. For instance, the attributes that characterized a ring flow pattern were just two, the inner and outer radii, while those of the square bar patterns were the number of bars and their lengths, widths, and planar locations. This large variation in attribute dimensionality implies correspondingly large variations in the number of generated instances representing the input data space.
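For illustration, one annular-flow sample with the randomized ring geometry described above could be rasterized as follows; this is a sketch under our own assumptions, not the authors' MATLAB generator:

```python
import numpy as np

def annular_phantom(size=128, eps_low=1.0, eps_high=4.0, rng=None):
    # Ring of high-permittivity material (glass, eps = 4) in air (eps = 1);
    # the inner radius is drawn uniformly from 10% to 95% of the field radius,
    # mirroring the randomization described for the annular flow pattern.
    rng = rng or np.random.default_rng()
    r_outer = size / 2.0
    r_inner = rng.uniform(0.10, 0.95) * r_outer
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(xx - (size - 1) / 2.0, yy - (size - 1) / 2.0)
    img = np.full((size, size), eps_low)
    img[(r >= r_inner) & (r <= r_outer)] = eps_high
    return img
```

Each such 128 × 128 permittivity map would be paired with the capacitance vector produced by solving its forward problem, yielding one image–measurement training pair.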

6. Experimental Results and Analysis

The ARE-ECT model was trained and tested using the developed ECT dataset. The overall performance of the proposed algorithm was verified based on the reconstruction results of the testing dataset. The ARE-ECT model was validated during the training phase to avoid overfitting; 10% of the training samples were randomly chosen as a validation set. The more comprehensive the data simulation, the stronger the generalization performance of the model after training. Therefore, the generalization ability of the proposed model was tested using a testing dataset, generated phantoms that were not included in the training dataset, and practical experimental data.

6.1. Validation Metrics

The relative image error (IE) and the correlation coefficient (CC) between the ground truths and the reconstructed permittivity distributions were applied to evaluate the image quality and the reconstruction algorithm's performance [7]. The relative IE is defined as Equation (8).
IE = ‖G − G*‖_2 / ‖G‖_2
where G* represents the reconstructed image from the ARE-ECT model, and G represents the original distribution.
The similarity between the reconstructed image and the ground truth image was measured by the CC, which is defined in Equation (9).
CC = Σ_{i=1}^{N} (G_i − Ḡ)(G*_i − Ḡ*) / √( Σ_{i=1}^{N} (G_i − Ḡ)² Σ_{i=1}^{N} (G*_i − Ḡ*)² )
where Ḡ and Ḡ* are the mean values of G and G*, respectively, and N = 12,932 is the number of pixels in the imaging area.
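Equations (8) and (9) transcribe directly into NumPy, assuming flattened image vectors (our sketch, not the paper's evaluation code):

```python
import numpy as np

def image_error(G, G_star):
    # Relative image error, Equation (8).
    return np.linalg.norm(G - G_star) / np.linalg.norm(G)

def correlation_coefficient(G, G_star):
    # Pearson correlation coefficient, Equation (9).
    g = G - G.mean()
    gs = G_star - G_star.mean()
    return (g * gs).sum() / np.sqrt((g ** 2).sum() * (gs ** 2).sum())
```

A perfect reconstruction gives IE = 0 and CC = 1; note that CC is invariant to positive affine rescaling of the reconstruction, which is why it is reported alongside IE.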
The ARE-ECT model was designed and trained using the Python TensorFlow machine learning platform [49] and the Keras deep learning API [50]. The testing process was carried out using the image reconstructed by the LW as input to the ARE-ECT model, with the reconstructed permittivity distribution as output. The testing set contained 96 k samples; hence, the ARE-ECT performance was evaluated by the mean values of the IE and CC. The smaller the relative IE and the larger the CC, the better the performance.

6.2. Qualitative Results on Simulation Test Dataset

A simulation testing dataset that had been unseen by the network during the training process was used to validate the reconstruction ability of the proposed ARE-ECT model. The developed ECT dataset, containing 320 k pairs, was divided into a training dataset of 70% (224 k pairs) and a testing dataset of 30% (96 k pairs). The training and testing datasets are quite different since the data for each flow pattern were randomly generated.
The loss curve, shown in Figure 5, declines over 250 epochs on both the training and validation sets. The minimum, maximum, and average values of the relative IE and CC on the testing dataset are stated for each flow type in Table 2. The results prove that the ARE-ECT model can reconstruct images that are very close to the ground truth distributions. The average values of relative IE = 0.1019 and CC = 0.9884 show a significant overall performance of the ARE-ECT model when applying the LW input images.
The IE and CC for all flow types are drawn as box plots in Figure 6a,b, respectively.
Figure 6a,b show the substantial performance of the ARE-ECT model, since 95% of the IE and CC values fall in reasonable intervals. From Table 2, the performance of the ARE-ECT model on the ring flow type is the lowest compared with the other flow types. The single square bar flow type has the best relative IE results, while for CC, the annular, stratified, single circular bar, and single square bar types all exceed 99%.
Reconstructed image instances corresponding to the minimum and maximum CC of each flow group in Table 2 are given in Figure 7. Visually, the reconstructed images with the minimum CC are still very close to the ground truth permittivity distributions, and those with the maximum CC obviously have better visual effects. The reconstructed images shown in Figure 7 are almost the same as their ground truth distributions. For multiple circular and square bars, the reconstructed positions of objects are consistent with the true distributions. In general, our model performs well on the test dataset and has a strong ability to reconstruct images of all typical flow types, with the permittivity values of objects predicted correctly.
The performance and reconstructed image quality of the proposed ARE-ECT algorithm and other state-of-the-art ECT image reconstruction algorithms are compared. An assortment of flow patterns was set up to test the generalization ability of the proposed model. Figure 8 shows the comparison results, where the real phantoms are shown in the first column, and the images reconstructed by the LBP, iterative Tikhonov, ILM, CNN [22,23], LSTM-IR [24], and ARE-ECT algorithms are contained in the other columns, respectively. The hyperparameters of the Tikhonov and ILM algorithms were selected empirically: the optimal regularization parameter was 0.01, while the iteration counts of the Tikhonov and ILM were 200 and 1000, respectively. The CNN algorithm is based on a multi-scale dual-channel convolution kernel composed of a dual-channel frequency division model [23], where each channel has five convolution layers; the CNN model is trained using the results of the LBP as inputs. The results of the ARE-ECT model have high image quality and accuracy with sharp object boundaries compared to the images reconstructed by the LBP, iterative Tikhonov, ILM, and CNN algorithms. Visually, in Figure 8, the ARE-ECT model reconstructs objects in the imaging area with sharp edges, since there is no transition region around the reconstructed objects, whereas in the other algorithms the generated objects have blurred zones around them, which increases the relative IE. Moreover, the results stated in Table 3, the IE and CC of the images reconstructed by the ARE-ECT model compared with the other algorithms, prove that the ARE-ECT model outperforms the other reconstruction algorithms.

6.3. Testing Results of Non-Existing Phantoms in Training Dataset

New two-phase flow patterns, not included in the training dataset, were created to measure the generalization ability of the proposed ARE-ECT model. Four different flow distributions, numbered 1 to 4 and shown in the first column of Figure 9, were input to the trained ARE-ECT model; the relative IE and CC are listed in Table 4. Although none of these patterns exist in the training set, the ARE-ECT model can still reconstruct them with high quality. Moreover, although ECT suffers from an inhomogeneous sensitivity map across its cross-sectional sensing domain, the reconstructed image of the five-bar phantom proves the ability of the ARE-ECT model to reconstruct phantoms located in both the low and high sensitivity areas of the ECT sensor. The results are acceptable, although the reconstructions are not quite sharp: the corners of the square object in the first sample and of the L_Shape in the fourth sample are more rounded.

6.4. Evaluation Using Experimental Data

The generalization ability of the ARE-ECT model was also measured using experimental data. Capacitance measurements of three two-phase flow types, of the same kinds as in the training set, were acquired as real testing inputs. The experiments were carried out using an electrical capacitance volume tomography (ECVT) hardware system [51]. The ECVT had 36 channels to measure the capacitance among the electrodes of a 12-electrode ECT sensor, with an imaging rate of 120 images/s. Static phantoms were placed in an imaging area with a radius of 140 mm surrounded by the 12 electrodes. As shown in the first column of Figure 10, the bubble flow type was created by placing two plastic rods of radius r = 20 mm inside the imaging area, while one-half of the imaging area filled with plastic particles (ϵ = 4) simulated the stratified flow type. Filling a ring shape around the center of the ECT sensor with the plastic particles represented the annular flow type.
Figure 10 demonstrates the real distributions and the images generated by the LBP, iterative Tikhonov, ILM, local ensemble transform Kalman filter (LETKF) [18], CNN, LSTM-IR, and ARE-ECT algorithms. The images reconstructed by the ARE-ECT model have high accuracy, with sharp edges separating the two phases, compared with the other reconstruction algorithms. Moreover, the ARE-ECT reconstructed images have fewer artifacts and much better visual quality than those of the LBP. ARE-ECT is more efficient than traditional iterative algorithms, such as the iterative Tikhonov, ILM, and LETKF, which can obtain good imaging quality but are still slow. Comparing the images reconstructed by the proposed ARE-ECT model with those of other deep learning (DL) models, such as the CNN and LSTM-IR, proves the potential of the proposed method in generating significantly higher quality images with accurate permittivity values and sharp boundaries. The core component of our method is the CGAN, which exhibits stronger enhancement and resolution-increasing capabilities than conventional DL methodologies. As the target problem in this work is image enhancement, it is natural for our method to benefit from the inherent capabilities of the CGAN in this respect. Moreover, since the UNet conditions the output side on the input data, this further strengthens the enhancement capabilities of the proposed method.

6.5. Computational Time Measure

The performance of image reconstruction algorithms is typically also evaluated by imaging speed. For the experimental ECT data, Table 5 contains the imaging costs of the different reconstruction algorithms, run on a PC with an i9 CPU (3.6 GHz) and 32 GB of memory. The reconstruction time of the proposed model was 0.046 s, which was >135×, >115×, and >28× faster than the ILM, the iterative Tikhonov method, and the LETKF, respectively. The ARE-ECT model was also faster than the other DL models, and it constructed more accurate images than all the other methods. The LBP was faster than our proposed method, but its image quality was worse. The imaging speed of the ARE-ECT model can thus satisfy online applications, like the LBP algorithm.

7. Conclusions

In this paper, a new ARE-ECT model based on a CGAN deep neural network was proposed to enhance the resolution of reconstructed ECT images. The generator was built using a UNet. For evaluation purposes, a large dataset of 320 K simulated capacitance measurement–flow image pairs was created for training, validation, and testing. To assess the generalization ability and feasibility of ARE-ECT, the evaluation dataset included data instances to which the model was not exposed during training. The experimental results demonstrated the superiority of the proposed ARE-ECT over the state of the art, both quantitatively and qualitatively. Efficiency evaluations showed that ARE-ECT outperformed existing high-quality methods in execution speed by factors of 28× to 135×. In brief, ARE-ECT achieved better performance than the computationally expensive methods while keeping execution times of the same order as low-resolution reconstruction methods such as the well-known LBP. In terms of overall generalization, ARE-ECT exhibited good capabilities. We hope the work presented herein will inspire researchers in the ECT field to further investigate deep learning-based approaches for reconstructing flow patterns in the sensing field of multi-phase flows.

Author Contributions

Conceptualization, W.D. and A.E.A.-H.; methodology, W.D. and A.E.A.-H.; software, W.D.; validation, K.E.B. and H.A.; formal analysis, W.D.; investigation, A.E.A.-H.; resources, W.D.; data creation, W.D.; writing—original draft preparation, W.D. and A.E.A.-H.; writing—review and editing, K.E.B. and H.A.; visualization, W.D.; supervision, W.D.; project administration, W.D.; funding acquisition, W.D. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Deanship of Scientific Research at Umm Al-Qura University, grant number 22UQU4310447DSR01.


Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by grant code: (22UQU4310447DSR01).

Conflicts of Interest

The authors declare no conflict of interest.


List of nomenclature and abbreviations
CT        Computed Tomography
ECT       Electrical Capacitance Tomography
ARE-ECT   Adversarial Resolution Enhancement
ILM       Iterative Landweber Method
LBP       Linear Back Projection
ML        Machine Learning
DNN       Deep Neural Networks
DL        Deep Learning
CNN       Convolutional Neural Network
LSTM      Long Short-Term Memory
CANN      Capacitance Artificial Neural Network
GCN       Graph Convolutional Networks
GAN       Generative Adversarial Network
CGAN      Conditional Generative Adversarial Network
LW        Landweber Algorithm
IE        Image Error
CC        Correlation Coefficient
LSTM-IR   Long Short-Term Memory Image Reconstruction
LETKF     Local Ensemble Transform Kalman Filter
ECVT      Electrical Capacitance Volume Tomography


  1. Tsai, C.Y.; Feng, Y.C. Real-time multi-scale parallel compressive tracking. J. Real-Time Image Process. 2019, 16, 2073–2091. [Google Scholar] [CrossRef]
  2. Xu, Z.; Yao, J.; Wang, Z.; Liu, Y.; Wang, H.; Chen, B.; Wu, H. Development of a Portable Electrical Impedance Tomography System for Biomedical Applications. IEEE Sens. J. 2018, 18, 8117–8124. [Google Scholar] [CrossRef]
  3. Xia, Z.; Cui, Z.; Chen, Y.; Hu, Y.; Wang, H. Generative adversarial networks for dual-modality electrical tomography in multi-phase flow measurement. Meas. J. Int. Meas. Confed. 2020, 173, 108608. [Google Scholar] [CrossRef]
  4. Wang, M. Industrial Tomography: Systems and Applications; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  5. Wang, H.; Yang, W. Scale-up of an electrical capacitance tomography sensor for imaging pharmaceutical fluidized beds and validation by computational fluid dynamics. Meas. Sci. Technol. 2011, 22, 104015. [Google Scholar] [CrossRef]
  6. Rymarczyk, T.; Kłosowski, G.; Kozłowski, E. A Non-Destructive System Based on Electrical Tomography and Machine Learning to Analyze the Moisture of Buildings. Sensors 2018, 18, 2285. [Google Scholar] [CrossRef][Green Version]
  7. Cui, Z.; Wang, Q.; Xue, Q.; Fan, W.; Zhang, L.; Cao, Z.; Sun, B.; Wang, H. A review on image reconstruction algorithms for electrical capacitance/resistance tomography. Sens. Rev. 2016, 36, 429–445. [Google Scholar] [CrossRef]
  8. Sun, S.; Cao, Z.; Huang, A.; Xu, L.; Yang, W. A high-speed digital electrical capacitance tomography system combining digital recursive demodulation and parallel capacitance measurement. IEEE Sens. J. 2017, 17, 6690–6698. [Google Scholar] [CrossRef][Green Version]
  9. Wang, Q.; Yang, C.; Wang, H.; Cui, Z.; Gao, Z. Online monitoring of gas–solid two-phase flow using projected CG method in ECT image reconstruction. Particuology 2013, 11, 204–215. [Google Scholar] [CrossRef]
  10. Raghavan, R.; Senior, P.; Wang, H.; Yang, W.; Duncan, S. Modelling, measurement and analysis of fluidised bed dryer using an ect sensor. In Proceedings of the 5th World Congress in Industrial Process Tomography. International Society for Industrial Process Tomography, Bergen, Norway, 3–6 September 2007; pp. 334–341. [Google Scholar]
  11. Yulei, Z.; Baolong, G.; Yunyi, Y. Latest development and analysis of electrical capacitance tomography technology. Chin. J. Sci. Instrum. 2012, 33, 1909–1920. [Google Scholar]
  12. Li, Y.; Yang, W. Image reconstruction by nonlinear Landweber iteration for complicated distributions. Meas. Sci. Technol. 2008, 19, 094014. [Google Scholar] [CrossRef]
  13. Chen, D.Y.; Chen, Y.; Wang, L.L.; Yu, X.Y. A Novel Gauss-Newton Image Reconstruction Algorithm for Electrical Capacitance Tomography System. Acta Electron. Sin. 2009, 4, 739–743. [Google Scholar]
  14. Vauhkonen, M.; Vadâsz, D.; Karjalainen, P.A.; Somersalo, E.; Kaipio, J.P. Tikhonov regularization and prior information in electrical impedance tomography. IEEE Trans. Med. Imaging 1998, 17, 285–293. [Google Scholar] [CrossRef] [PubMed]
  15. Gamio, J.; Ortiz-Aleman, C.; Martin, R. Electrical capacitance tomography two-phase oil-gas pipe flow imaging by the linear back-projection algorithm. Geofísica Int. 2005, 44, 265–273. [Google Scholar] [CrossRef]
  16. Zhang, W.; Wang, C.; Yang, W.; Wang, C.H. Application of electrical capacitance tomography in particulate process measurement–A review. Adv. Powder Technol. 2014, 25, 174–188. [Google Scholar] [CrossRef]
  17. Deabes, W.; Amin, H.H. Image Reconstruction Algorithm Based on PSO-Tuned Fuzzy Inference System for Electrical Capacitance Tomography. IEEE Access 2020, 8, 191875–191887. [Google Scholar] [CrossRef]
  18. Deabes, W.; Bouazza, K.E. Efficient Image Reconstruction Algorithm for ECT System Using Local Ensemble Transform Kalman Filter. IEEE Access 2021, 9, 12779–12790. [Google Scholar] [CrossRef]
  19. Xie, D.; Zhang, L.; Bai, L. Deep learning in visual computing and signal processing. Appl. Comput. Intell. Soft Comput. 2017, 2017, 1320780. [Google Scholar] [CrossRef]
  20. Zhu, H.; Sun, J.; Xu, L.; Tian, W.; Sun, S. Permittivity Reconstruction in Electrical Capacitance Tomography Based on Visual Representation of Deep Neural Network. IEEE Sens. J. 2020, 20, 4803–4815. [Google Scholar] [CrossRef]
  21. Yang, X.; Zhao, C.; Chen, B.; Zhang, M.; Li, Y. Big Data driven U-Net based Electrical Capacitance Image Reconstruction Algorithm. In Proceedings of the IST 2019—IEEE International Conference on Imaging Systems and Techniques, Abu Dhabi, United Arab Emirates, 9–10 December 2019. [Google Scholar] [CrossRef]
  22. Zheng, J.; Ma, H.; Peng, L. A CNN-based image reconstruction for electrical capacitance tomography. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques, Abu Dhabi, United Arab Emirates, 9–10 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  23. Lili, W.; Xiao, L.; Deyun, C.; Hailu, Y.; Wang, C. ECT Image Reconstruction Algorithm Based on Multiscale Dual-Channel Convolutional Neural Network. Complexity 2020, 2020, 4918058. [Google Scholar] [CrossRef]
  24. Deabes, W.; Khayyat, K.M.J. Image Reconstruction in Electrical Capacitance Tomography Based on Deep Neural Networks. IEEE Sens. J. 2021, 21, 25818–25830. [Google Scholar] [CrossRef]
  25. Zheng, J.; Peng, L. An autoencoder-based image reconstruction for electrical capacitance tomography. IEEE Sens. J. 2018, 18, 5464–5474. [Google Scholar] [CrossRef]
  26. Deabes, W.; Sheta, A.; Bouazza, K.E.; Abdelrahman, M. Application of Electrical Capacitance Tomography for Imaging Conductive Materials in Industrial Processes. J. Sens. 2019, 2019, 4208349. [Google Scholar] [CrossRef]
  27. Deabes, W.; Sheta, A.; Braik, M. ECT-LSTM-RNN: An Electrical Capacitance Tomography Model-Based Long Short-Term Memory Recurrent Neural Networks for Conductive Materials. IEEE Access 2021, 9, 76325–76339. [Google Scholar] [CrossRef]
  28. Fabijańska, A.; Banasiak, R. Graph convolutional networks for enhanced resolution 3D Electrical Capacitance Tomography image reconstruction. Appl. Soft Comput. 2021, 110, 107608. [Google Scholar] [CrossRef]
  29. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27. [Google Scholar]
  30. Mahdizadehaghdam, S.; Panahi, A.; Krim, H. Sparse generative adversarial network. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  31. Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; He, X. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1316–1324. [Google Scholar]
  32. Subramanian, S.; Mudumba, S.R.; Sordoni, A.; Trischler, A.; Courville, A.C.; Pal, C. Towards text generation with adversarially learned neural outlines. Adv. Neural Inf. Process. Syst. 2018, 31, 1–13. [Google Scholar]
  33. Mirsky, Y.; Lee, W. The creation and detection of deepfakes: A survey. Acm Comput. Surv. (CSUR) 2021, 54, 1–41. [Google Scholar] [CrossRef]
  34. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 5967–5976. [Google Scholar] [CrossRef][Green Version]
  35. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8110–8119. [Google Scholar]
  36. Kim, S.W.; Zhou, Y.; Philion, J.; Torralba, A.; Fidler, S. Learning to simulate dynamic environments with gamegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1231–1240. [Google Scholar]
  37. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  38. Selim, M.; Zhang, J.; Fei, B.; Zhang, G.Q.; Chen, J. STAN-CT: Standardizing CT Image using Generative Adversarial Networks. In AMIA Annual Symposium Proceedings; American Medical Informatics Association: Bethesda, MD, USA, 2020; Volume 2020, p. 1100. [Google Scholar]
  39. Yang, X.; Kahnt, M.; Brückner, D.; Schropp, A.; Fam, Y.; Becher, J.; Grunwaldt, J.D.; Sheppard, T.L.; Schroer, C.G. Tomographic reconstruction with a generative adversarial network. J. Synchrotron Radiat. 2020, 27, 486–493. [Google Scholar] [CrossRef]
  40. Liu, Z.; Bicer, T.; Kettimuthu, R.; Gursoy, D.; De Carlo, F.; Foster, I. TomoGAN: Low-dose synchrotron x-ray tomography with generative adversarial networks: Discussion. JOSA A 2020, 37, 422–434. [Google Scholar] [CrossRef][Green Version]
  41. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  42. Lu, J.; Hu, W.; Sun, Y. A deep learning method for image super-resolution based on geometric similarity. Signal Process. Image Commun. 2019, 70, 210–219. [Google Scholar] [CrossRef]
  43. Ye, J.; Wang, H.; Yang, W. Image Reconstruction for Electrical Capacitance Tomography Based on Sparse Representation. IEEE Trans. Instrum. Meas. 2015, 64, 89–102. [Google Scholar] [CrossRef]
  44. Deabes, W.A.; Abdelrahman, M.A. A nonlinear fuzzy assisted image reconstruction algorithm for electrical capacitance tomography. Isa Trans. 2010, 49, 10–18. [Google Scholar] [CrossRef] [PubMed]
  45. Hitawala, S. Comparative study on generative adversarial networks. arXiv 2018, arXiv:1801.04271. [Google Scholar]
  46. Chakraborty, A.; Ragesh, R.; Shah, M.; Kwatra, N. S2cGAN: Semi-Supervised Training of Conditional GANs with Fewer Labels. arXiv 2020, arXiv:2010.12622. [Google Scholar]
  47. Qin, Z.; Shan, Y. Generation of Handwritten Numbers Using Generative Adversarial Networks. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1827, p. 012070. [Google Scholar]
  48. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  49. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. Available online: (accessed on 8 April 2022).
  50. Chollet, F. Keras; GitHub: San Francisco, CA, USA, 2015; Available online: (accessed on 12 February 2022).
  51. Tech4Imaging. Electrical Capacitance Volume Tomography. Ohio, USA. 2020. Available online: (accessed on 15 April 2022).
Figure 1. ECT system with 12 electrodes.
Figure 2. Architecture of ARE-ECT model.
Figure 3. The architecture of the UNet network used in the generator.
Figure 4. Samples of different flow patterns. (a) Ring, (b) annular, (c) stratified, (d) 1 cir. bar, (e) 2 cir. bars, (f) 3 cir. bars, (g) sq. bar, (h) 2 sq. bars, (i) 3 sq. bars.
Figure 5. Training and validation loss curves.
Figure 6. Box plots of testing criteria. (a) Relative image errors (IE), (b) correlation coefficients (CC).
Figure 7. Examples of maximum and minimum CC image reconstruction results.
Figure 8. Reconstructed images of well-known image reconstruction algorithms.
Figure 9. Image reconstruction results of phantoms not in the training dataset.
Figure 10. Experimental setup and reconstructed frames.
Table 1. GAN vs. CGAN.
          GAN                              CGAN
Input     Latent vector                    Random and auxiliary data
Output    Classify as real or generated    Classify labeled data as real or generated
Data      No control over data             Conditional data
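Table 1's contrast can be made concrete. The sketch below (illustrative shapes and names, not the paper's implementation) builds the two kinds of generator input: a plain GAN draws only a latent vector, whereas a CGAN concatenates the conditioning data, here a flattened low-resolution image, with the noise; ARE-ECT goes further and replaces the random input with the low-resolution image itself.

```python
import numpy as np

def gan_input(batch, latent_dim, rng):
    """Plain GAN: the generator sees only random latent vectors."""
    return rng.standard_normal((batch, latent_dim))

def cgan_input(low_res_images, latent_dim, rng):
    """CGAN: the condition (flattened low-res image) is concatenated
    with the noise. ARE-ECT instead feeds the low-res image alone."""
    batch = low_res_images.shape[0]
    cond = low_res_images.reshape(batch, -1)     # flatten each image
    z = rng.standard_normal((batch, latent_dim))
    return np.concatenate([cond, z], axis=1)

rng = np.random.default_rng(0)
lr = rng.random((4, 8, 8))       # four toy 8x8 low-res images
x = cgan_input(lr, 16, rng)
print(x.shape)                   # (4, 80): 64 condition values + 16 noise
```

Conditioning in this way is what gives the discriminator "labeled" data to judge, the property Table 1 summarizes as control over the generated output.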
Table 2. Minimum and maximum of relative IE and CC of testing results.
Flow Patterns        Min. IE   Max. IE   Average IE   Min. CC   Max. CC   Average CC
Single Cir. Bar      0.0173    0.1276    0.0712       0.9819    1.0000    0.9923
Multiple Cir. Bars   0.0308    0.2178    0.1288       0.9639    0.9985    0.9845
Single Sq. Bar       0.0000    0.1329    0.0415       0.9868    1.0000    0.9965
Multiple Sq. Bars    0.0000    0.2512    0.1086       0.9390    1.0000    0.9803
Total Average        IE: 0.1019                       CC: 0.9884
Table 3. IE and CC values of different ECT image reconstruction algorithms.
Relative Image Error (IE)
Annular       0.2412   0.1950   0.3351   0.1222   0.0561   0.0687
Cir. Bar      0.3923   0.6562   0.6575   0.2224   0.1420   0.0821
2 Cir. Bars   0.4568   0.6638   0.4038   0.3274   0.1445   0.0990
3 Cir. Bars   0.6083   0.7492   0.4275   0.4765   0.2043   0.0940
Sq. Bar       0.3677   0.5841   0.6575   0.2490   0.2122   0.0991
2 Sq. Bars    0.4988   0.3449   0.3294   0.3176   0.2415   0.1653
3 Sq. Bars    0.5112   0.6070   0.6909   0.4999   0.2558   0.0528
Correlation Coefficient (CC)
Annular       0.8701   0.8885   0.9084   0.9590   0.9913   0.9864
Cir. Bar      0.6964   0.7754   0.7974   0.8860   0.9541   0.9850
2 Cir. Bars   0.6681   0.8565   0.7963   0.8060   0.9640   0.9823
3 Cir. Bars   0.5498   0.5652   0.7625   0.7325   0.9363   0.9862
Sq. Bar       0.8442   0.8264   0.6575   0.8997   0.9277   0.9850
2 Sq. Bars    0.7041   0.8326   0.8663   0.8527   0.9161   0.9617
3 Sq. Bars    0.5099   0.6361   0.5688   0.6668   0.8707   0.9951
Table 4. Results of phantoms not in the training dataset.
Table 5. Reconstruction time in sec.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Deabes, W.; Abdel-Hakim, A.E.; Bouazza, K.E.; Althobaiti, H. Adversarial Resolution Enhancement for Electrical Capacitance Tomography Image Reconstruction. Sensors 2022, 22, 3142.

