Article

Comparison of Selected Machine Learning Algorithms for Industrial Electrical Tomography

by Tomasz Rymarczyk, Grzegorz Kłosowski, Edward Kozłowski and Paweł Tchórzewski
1 University of Economics and Innovation in Lublin, 20-209 Lublin, Poland
2 Research & Development Centre Netrix S.A., 20-704 Lublin, Poland
3 Faculty of Management, Lublin University of Technology, 20-618 Lublin, Poland
* Author to whom correspondence should be addressed.
Sensors 2019, 19(7), 1521; https://doi.org/10.3390/s19071521
Submission received: 28 January 2019 / Revised: 15 March 2019 / Accepted: 25 March 2019 / Published: 28 March 2019

Abstract

The main goal of this work was to compare selected machine learning methods with a classic deterministic method in the industrial application of electrical impedance tomography. The research focused on the development and comparison of algorithms and models for the analysis and reconstruction of data obtained with electrical tomography. The novelty lies in the use of original machine learning algorithms whose characteristic feature is the use of many separately trained subsystems, each of which generates a single pixel of the output image. Artificial Neural Network (ANN), LARS and Elastic net methods were used to solve the inverse problem. These algorithms were adapted to electrical impedance tomography by multiplying the number of trained subsystems to match the resolution of the finite element method grid. The Gauss-Newton method was used as a reference for the machine learning methods. The algorithms were trained using learning data obtained through computer simulation based on real models. The results of the experiments showed that, in the considered cases, the best quality of reconstruction was achieved by the ANN. At the same time, the ANN was the slowest in terms of both the training process and the speed of image generation. The other machine learning methods were comparable with the deterministic Gauss-Newton method and with each other.

1. Introduction

This article presents the results of research on the use of tomographic sensors for the analysis of industrial processes with the use of dedicated measuring devices and image reconstruction algorithms.
Electrical impedance tomography (EIT) is a non-invasive imaging method with high application potential. It is suitable for continuous real-time visualization of the dynamic distribution of electrical conductivity inside the tested object [1]. To perform EIT reconstructions, we apply weak alternating currents (1–5 mA) at low frequency (1–100 kHz) and measure the resulting peripheral voltages by means of a set of electrodes attached to the object’s surface [2]. A cross-sectional image of the internal conductivity distribution is obtained from voltage measurements gained from different electrode pairs. Despite its relatively low spatial resolution, EIT is now a widely accepted tomographic imaging technique used in many areas, such as monitoring of industrial processes [3,4,5], geophysical research [6,7,8] and biomedical diagnosis [2,9,10]. Mathematical reconstruction of conductivity maps in EIT requires solving a non-linear and ill-posed inverse problem from noisy data [11]. Regularization techniques can be used to mitigate the instability of the solutions. One of the most commonly used methods is the one-step Gauss-Newton (GN) reconstruction approach [12], which allows the use of sophisticated regularized models that describe the inverse EIT problem through a heuristically determined prior [13]. Landweber iteration is a modification of the steepest gradient descent approach and is also widely used in EIT [14]. The algebraic reconstruction technique (ART) is a well-established method of reconstructing computed tomography images that can also be used in EIT [15]. Other important methods include: regularization using total variation (TV) [16], which allows image reconstruction while preserving edges, the split augmented Lagrangian shrinkage algorithm [17] and the generalized vector sampled pattern matching method (GVSPM) [18].
Because deep learning is well suited to mapping complicated nonlinear functions, attempts are increasingly being made to apply deep learning methods based on convolutional neural networks (CNNs) to EIT/ERT (electrical resistivity tomography) image reconstruction [11]. Alongside CNNs, deep D-bar methods are also used [19]. D-bar methods are based on a rigorous mathematical analysis and provide robust direct reconstructions by low-pass filtering of the associated nonlinear Fourier data [9].
In EIT, algorithms belonging to the machine learning family can be used successfully. Typical examples of this kind of method are: Lasso (least absolute shrinkage and selection operator), Elastic net, least-angle regression (LARS) [6], artificial neural networks [20] and convolutional neural networks [11], multivariate adaptive regression splines (MARS), k-nearest neighbors (KNN), random forest (RF), gradient boosting machine (GBM) [21], and principal component and partial least squares regression [22].
The current development of EIT algorithms is largely focused on the use of machine learning methods [23]. Hence the need to verify whether such algorithms are in fact better than the classical, known deterministic methods to which the Gauss-Newton method belongs [12,24].
In comparison to other imaging methods used in industry [25], electrical impedance tomography (EIT) has a number of advantages, including higher time resolution, lower costs and opportunities for wider use. However, EIT reconstruction may be unstable and has a fundamental disadvantage resulting from the need to solve the inverse problem [26]. The sensitivity of EIT solutions to measurement, numerical and model errors entails the need to adjust the model parameters to specific cases, and many methods for doing so have been developed over the years. These serious constraints on EIT therefore favor the development of more sophisticated algorithms [27,28,29]. It is worth mentioning that most 2D reconstruction methods are also applicable in 3D situations with minor modifications [30].
The authors of the article developed three original variants of known algorithms based on machine learning techniques, and then compared them to the deterministic method as well as to each other. In order to make a precise assessment enabling a reliable comparison, universal evaluation metrics were used: Mean Squared Error (MSE), Relative Image Error (RIE) and Image Correlation Coefficient (ICC).
Advanced automation and control of production processes play a key role in enterprises [31,32]. Technological equipment and production lines can be considered the heart of industrial production, while information technologies and control systems are its brain. Tomographic imaging of objects creates a unique opportunity to discover the complexity of a structure without the need to invade the object. There is a growing need for information on how internal flows behave in process equipment, and this information should be acquired non-invasively by tomographic instrumentation [33].
Sensor technologies are mainly based on electrical tomography (ET) [34,35,36,37,38], which includes electrical capacitance tomography (ECT) [39,40,41,42,43,44,45] and electrical resistance tomography (ERT) [7,46,47]. ET allows reconstruction of the image of the conductivity or permittivity distribution of the object from electrical measurements at its boundary.
The results of the reconstruction obtained by the individual algorithms with different measurement models were compared. The tests were carried out on data obtained from real laboratory measurements. The electronic devices used to measure the material values and to collect data from the measurement sensors were designed and built by the authors.
The main novelty of the presented method is a machine learning approach based on training many separate subsystems (ANN, LARS, Elastic net), where each subsystem is dedicated to a single pixel of the output image (Figure 1). The deterministic method, Gauss-Newton with Laplace regularization, should be treated as a reference, enabling an objective comparison of a standard technique with the machine learning methods.
Figure 1a shows the traditional machine-trained algorithm. It consists of a single predictive (regression) system with many outputs. The input vector includes 96 measurements of voltage drops measured on individual electrode pairs. The predictive system has 2883 outputs, which makes its training difficult. The large number of system outputs is the main reason for the unsatisfactory quality of reconstructed tomographic images.
Figure 1b shows the scheme of the novel multiple system. Its characteristic feature is that, on the basis of the same 96-element vector of predictors, 2883 separate prediction subsystems (S1, S2, …, S2883) were trained. Each of the subsystems generates only one independent output (response), which is the value of a single pixel of the reconstructed image (O1, O2, …, O2883). Thanks to this approach, each pixel of the output image is the result of the operation of a single-output prediction subsystem. Subsystems with one output are easier to train than systems with multiple outputs, so the results obtained using the presented concept (96—S—1) × 2883 are better than those obtained using the traditional 96—S—2883 concept.
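The scheme in Figure 1b can be summarized in a short code sketch. The snippet below is a minimal, hedged illustration rather than the authors' implementation: it assumes a training matrix X of 96-element measurement vectors and a matrix Y of 2883-pixel reference images, and uses a generic single-output regressor from scikit-learn in place of the ANN, LARS and Elastic net subsystems described in Section 2.3.

import numpy as np
from sklearn.linear_model import Ridge  # stand-in for any single-output subsystem

N_MEAS, N_PIXELS = 96, 2883

def train_multiple_system(X, Y, make_subsystem=lambda: Ridge(alpha=1.0)):
    # Train one single-output regressor per image pixel: (96 - S - 1) x 2883
    subsystems = []
    for p in range(N_PIXELS):
        model = make_subsystem()
        model.fit(X, Y[:, p])  # each subsystem sees all 96 predictors and one response
        subsystems.append(model)
    return subsystems

def reconstruct(subsystems, v):
    # Map a single 96-element measurement vector to a 2883-pixel image
    v = np.asarray(v).reshape(1, -1)
    return np.array([m.predict(v)[0] for m in subsystems])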
The article consists of four sections. The measurement models, machine learning methods and descriptions of algorithms were presented in Section 2. The results of the research work in the form of reconstruction of images for measurement data are shown in Section 3. In Section 4, the results obtained are discussed. It also summarizes the presented research.

2. Materials and Methods

This section presents the tomographic methods, process tomography, measuring devices, laboratory systems, mathematical algorithms and measurement models used in image reconstruction based on synthetic data and real measurements. Laboratory equipment, tomography devices designed at Research & Development Centre Netrix SA, the Eidors toolbox [48], Microsoft tools, Matlab, Python and R language were used during the research.

2.1. Electrical Tomography

Electrical tomography is an imaging technique that exploits the different electrical properties of different types of materials, including biological tissues. In this method, a current or voltage source is connected to the object, and the resulting current flows or voltage distribution at the edge of the object are measured. The collected information is processed by an algorithm that reconstructs the image. This type of tomography is characterized by a relatively low image resolution. Difficulties in obtaining high resolution result mainly from the limited number of measurements, the nonlinear current flow through the medium and the low sensitivity of the measured voltages to changes in conductivity inside the area. Electrical tomography has historically been divided into electrical capacitance tomography, for systems dominated by dielectrics, and electrical resistance tomography. The basic theory can be derived from Maxwell’s equations.
A complex admittivity can be defined as follows:
\gamma = \sigma + i \omega \varepsilon
where ε is the permittivity, σ is the electrical conductivity, and ω is the angular frequency.
For an electric field strength E, the current density J in the test area is related to it by Ohm’s law:
J = \gamma E
The electric field is the negative gradient of the potential distribution u:
E = -\nabla u
Since there are no current sources in the studied region, Ampère’s law implies that the current density is divergence-free:
\nabla \cdot J = 0
The potential distribution in a heterogeneous, isotropic area therefore satisfies:
\nabla \cdot ( \gamma \nabla u ) = 0
where u is the potential.
Where the resistance or capacitance dominates, the equation simplifies to:
\nabla \cdot ( \sigma \nabla u ) = 0 \quad \text{for} \quad \omega \varepsilon / \sigma \ll 1 \quad (\text{ERT})
\nabla \cdot ( \varepsilon \nabla u ) = 0 \quad \text{for} \quad \omega \varepsilon / \sigma \gg 1 \quad (\text{ECT})
By solving the inverse problem, we obtain the distribution of material coefficients in the studied area.
Electrical resistance tomography in a process tomography can be interchangeably called electrical impedance tomography (EIT). In the following part of this work, we will mainly use the name, EIT [49,50,51].
The opposite and neighboring methods of collecting data from potential measurements at the edge of an object with 16 electrodes are shown in Figure 2.

2.2. Measurement Models

In order to test the effectiveness of algorithms for the analysis of processes in industrial tomography, three real measuring models were implemented. Electrical tomography was implemented for the analysis. Figure 3a presents the EIT measuring device (hybrid tomograph), which was made by the Netrix S.A. Research and Development Center. A bucket with electrodes was used as the tank or industrial reactor model (Figure 3b,c).
The arrangement of phantoms inside the investigated object is presented in Figure 4. This is a plan view that corresponds with the pictures of the tank shown in Figure 3.
Figure 5 shows a side view of the dimensioned model of the tank tested with EIT. On the left side, a tube immersed in the tank is shown together with its diameter.
Based on the above physical models, a special simulation algorithm was developed to generate the learning cases used during the training of the machine learning systems. Each training case was generated in accordance with the following procedure. First, we assume a homogeneous distribution of electrical conductivity. Then, we randomly select the number of internal inclusions, assuming that the draw yields at most five objects, each with a circular shape. Their radii and conductivities are chosen so that they correspond to the actual tests carried out with EIT. In the next stage of the calculations, the center of each internal object is randomly drawn. For the obtained conductivity distribution, the measurement voltages are determined using the finite element method.
Figure 6 shows one of the 50,000 generated cases used to train the predictive system. The cross-section of the tank contains 5 randomly arranged artifacts, which correspond to the 96-element vector of voltage measurements. Because the polarization of the electrodes changes during individual measurements, the voltage varies within the interval (−0.06; +0.06).
Based on the dimensions of the physical model, output images (reconstructions) for the 3 cases of tube arrangement were also generated (see Figure 4). The background pixels have a value of 1 and are marked in white in the reconstructed images, while the pixels where tubes occur have values close to 0 and are colored dark blue.
Algorithm 1 shows the pseudo code used to generate training cases. The script generating the simulation data of the measurements on the electrodes included artificial noise (line 8 in Algorithm 1). For this purpose, a random number generator with a normal distribution was used, with an expected value of 0 and a corresponding standard deviation. In addition, the voltages determined on the basis of numerical simulation are always subject to a certain error, especially when, as in the described case, we make calculations on a grid consisting of a relatively small number of finite elements.
Algorithm 1. The pseudo code to generate learning cases
1. N = 50000; % the number of cases
2. for 1:N
3.   random selection of the number of objects; % sets the NumberOfObjects variable
4.   for 1:NumberOfObjects
5.     random selection of the object’s location; % center and radius
6.   end
7.   adding an output image to the set of training cases; % saving response data
8.   determination of voltages and adding Gaussian noise; % Gaussian noise = randn(1, 96) × 5 × 10^−5
9.   saving the values of voltages to the training set; % saving input data
10. end
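For readers who prefer executable code, the following Python sketch mirrors the structure of Algorithm 1. It is a simplified, assumption-laden illustration: the forward solver solve_forward (which would wrap a finite element computation, for example from the Eidors toolbox) is only stubbed, the rasterization uses a toy square grid instead of the 2883-pixel FEM mesh, and the inclusion radius range is invented for illustration. The noise term follows line 8 of Algorithm 1 (zero-mean Gaussian with standard deviation 5 × 10^−5).

import numpy as np

rng = np.random.default_rng(0)
N_CASES, N_MEAS, MAX_OBJECTS = 50_000, 96, 5

def solve_forward(objects):
    # Placeholder for the FEM forward solver returning 96 boundary voltages
    raise NotImplementedError("plug in an FEM solver, e.g. via the Eidors toolbox")

def rasterize(objects, grid=54):
    # Toy rasterization onto a square grid (the paper uses a 2883-pixel FEM mesh)
    xs = np.linspace(-1.0, 1.0, grid)
    xx, yy = np.meshgrid(xs, xs)
    img = np.ones(xx.shape)                    # background conductivity value 1 (white)
    for o in objects:
        mask = (xx - o["center"][0]) ** 2 + (yy - o["center"][1]) ** 2 <= o["radius"] ** 2
        img[mask] = 0.0                        # inclusion pixels close to 0 (dark blue)
    return img[xx ** 2 + yy ** 2 <= 1.0]       # keep only pixels inside the circular tank

def generate_cases(n_cases=N_CASES):
    X, Y = [], []
    for _ in range(n_cases):
        n_obj = rng.integers(1, MAX_OBJECTS + 1)                 # random number of inclusions
        objects = [dict(center=rng.uniform(-0.8, 0.8, size=2),   # random center and radius
                        radius=rng.uniform(0.05, 0.2))           # (assumed ranges)
                   for _ in range(n_obj)]
        Y.append(rasterize(objects))                             # reference output image
        v = solve_forward(objects)                               # simulated boundary voltages
        X.append(v + rng.normal(0.0, 5e-5, size=N_MEAS))         # add Gaussian measurement noise
    return np.array(X), np.array(Y)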

2.3. Algorithms and Methods

There are many methods and algorithms used in optimization problems [52,53,54,55,56,57]. In this article, the authors chose a deterministic algorithm based on the Gauss-Newton method as a reference for the machine learning methods. The Gauss-Newton method is often used in electrical tomography because it is quite effective. The remaining algorithms were based on machine learning methods [8,58], through which an innovative approach to tomographic problems is presented.

2.3.1. Image Reconstruction

Process tomography belongs to the class of inverse electromagnetic field problems. The inverse problem is a process of optimization, identification or synthesis in which the parameters describing a given field are determined based on information specific to this field. Such problems are difficult to analyze. They do not have unambiguous solutions and are ill-conditioned due to too little or too much information, which is sometimes contradictory or linearly dependent. Knowledge of the process can make image reconstruction more resistant to incomplete or damaged data. The numerical analysis of the problem was carried out using the finite element method.
The colors of individual pixels in the image correspond to the conductance of the examined cross-section parts. An approach in which each separately trained subsystem generates only one output, that is, the value of a single pixel of the output image, allows for better mapping of the electrical measurement values.
To confirm the above thesis, a number of experiments were carried out using three neural networks differing in structure and number of outputs. Three types of ANN with the following structures were trained: 96—10—1 (96 predictors, 10 hidden neurons, 1 response), 96—10—10 (96 predictors, 10 hidden neurons, 10 responses) and 96—20—10 (96 predictors, 20 hidden neurons, 10 responses). The smaller the Mean Squared Error (MSE) and the larger the regression coefficient (R), the better the quality of the ANN. The responses (output pixels) were chosen randomly. The set of 50,000 cases was randomly divided into 3 subsets: training, validation and testing in the proportion of 70:15:15. The results of the experiments are presented in Table 1.
Only the testing set was used to assess the network quality. To increase the objectivity of the experiments, the indicators reported in Table 1 (mean MSE and mean R) are the arithmetic means over 10 experiments performed for each of the three types of ANN.
As can be seen, the best results were obtained for the ANN with a single output (96—10—1). The more complex 10-output network (96—20—10) was better than the simpler 96—10—10 ANN with 10 neurons in the hidden layer. However, both neural networks with 10 responses turned out to be worse than the ANN with a single response. The abovementioned tests proved that the variant of ANN with one output was the best. For this reason, multiple LARS, Elastic net and ANN systems, in which each subsystem generates only one response value, were used in the research.
Figure 7 presents an outline of a machine learning system that was applied to all 3 methods: LARS, Elastic net and ANN. A distinguishing feature of the presented concept is the separate training of each of the 2883 machine learning subsystems. Their number is equal to the resolution of the image output grid (2883 pixels).

2.3.2. Gauss-Newton Method

The Gauss-Newton method is an effective approach to solving the inverse problem in electrical impedance tomography. It is worth emphasizing that this problem is nonlinear and ill-posed. In difference imaging, the Gauss-Newton method can be used to minimize the differences between reference and inhomogeneous data [12,59].
In the general case, image reconstruction involves determining the global minimum of an objective function, which can be defined as follows:
F(\sigma) = \frac{1}{2} \left\{ \| U_m - U_s(\sigma) \|^2 + \lambda^2 \| L (\sigma - \sigma^*) \|^2 \right\}
where:
  • Um—voltages obtained as a result of the measurements
  • Us(σ)—voltages obtained by numerical calculations (FEM) for a given conductivity σ
  • σ*—a prior conductivity representing known properties
  • λ—regularization parameter (a positive real number)
  • L—regularization matrix.
Using appropriate approximations, it can be shown that the conductivity in the iteration denoted by k + 1 is given by the following formula:
\sigma_{k+1} = \sigma_k + \alpha_k \left( J_k^T W J_k + \lambda^2 L^T L \right)^{-1} \left[ J_k^T W \left( U_m - U_s(\sigma_k) \right) - \lambda^2 L^T L \left( \sigma_k - \sigma^* \right) \right]
where W is a weighting matrix (usually the identity matrix), Jk is the Jacobian matrix calculated in the k-th step, and αk is the step length. The Gauss-Newton method with Laplace regularization was implemented in our research.
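As an illustration, a single regularized Gauss-Newton update of the form above can be written in a few lines. This is a generic sketch, not the implementation used in the research: the Jacobian, the forward model and the regularization matrix L (a discrete Laplacian in the case considered here) are assumed to be supplied by the caller, and W is taken as the identity.

import numpy as np

def gauss_newton_step(sigma_k, U_m, forward, jacobian, L, lam, sigma_star=None, alpha=1.0):
    # One regularized Gauss-Newton update for the EIT inverse problem (W = I)
    if sigma_star is None:
        sigma_star = np.zeros_like(sigma_k)
    J = jacobian(sigma_k)                      # sensitivity (Jacobian) matrix at the current estimate
    r = U_m - forward(sigma_k)                 # residual between measured and simulated voltages
    H = J.T @ J + lam ** 2 * (L.T @ L)         # regularized Gauss-Newton Hessian
    g = J.T @ r - lam ** 2 * (L.T @ L) @ (sigma_k - sigma_star)
    return sigma_k + alpha * np.linalg.solve(H, g)  # solve the linearized normal equations

In difference imaging, a single such step starting from a homogeneous conductivity estimate is often sufficient, which is how the reconstructions in Section 3.1 were obtained.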

2.3.3. LARS

Machine learning is related to the ability of software to generalize based on previous experience. Importantly, these generalizations are meant to answer questions about both previously collected data and new information. The use of statistical methods with different regression models was presented in [60]. This approach enables quick diagnosis, combining low cost and high efficiency. The selection of variables and the detection of data anomalies are not separate problems. To handle variable selection and outliers at the same time, the least angle regression (LARS) algorithm is used. While it is prudent to be cautious about generalizing from a small set of simulation results, it appears that LARS combined with dummy variables or row sampling can provide computationally efficient, robust selection procedures. The proposed multiply LARS algorithm calculates all possible Lasso estimates for a given problem using an order of magnitude less computing time. Another variation of LARS implements forward stagewise linear regression; this connection explains the similar numerical results previously observed for Lasso and Stagewise and helps in understanding the properties of both methods. A simple approximation of the LARS degrees of freedom is available, from which an estimate of the prediction error can be derived.
If the regression data contain only additive outliers, then we can start with a simple regression model:
Y = X \beta + \varepsilon
where Y ∈ ℝ^n and X ∈ ℝ^{n×(k+1)} denote the observation matrices of the response and input variables, respectively, β ∈ ℝ^{k+1} denotes the vector of unknown parameters, and ε ∈ ℝ^n is a sequence of disturbances. The Least Angle Regression algorithm selects a subset of appropriate variables from the entire set of available input variables. The linear model is built by forward stepwise regression, where at each step the best variable is added to the model.
The Least Angle Regression algorithm is applied as follows (a code sketch is given after the list):
  • standardize the input variables;
  • select the input variable most correlated with the output variable and add it to the linear model;
  • determine the residual from the obtained model;
  • add the variable most correlated with the residual to the model;
  • move the coefficient β towards its least-squares value;
Steps 2–5 are repeated for a suitable number of iterations.
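The sketch below shows how one per-pixel LARS subsystem could be fitted with scikit-learn's Lars estimator; it is a hedged illustration and does not reproduce the authors' R implementation. The data shapes follow the paper (96 predictors, a single pixel value as the response), while the cap on the number of LARS steps is an assumed, tunable value.

import numpy as np
from sklearn.linear_model import Lars

def fit_lars_subsystem(X, y_pixel, n_nonzero_coefs=20):
    # Fit one LARS subsystem: 96 standardized predictors -> one pixel value
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # step 1: standardize the input variables
    model = Lars(n_nonzero_coefs=n_nonzero_coefs)  # the remaining steps are performed internally by LARS
    model.fit(X_std, y_pixel)
    return model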

2.3.4. Elastic Net

Elastic net is a regularized regression method that linearly combines the L1 and L2 penalties of the Lasso and ridge methods [61,62,63]. Lasso is a regularization technique. The implemented multiply method can be used to reduce the number of predictors in a regression model or to select among redundant predictors.
The linear regression coefficients are determined by solving:
\min_{(\beta_0, \beta) \in \mathbb{R}^{k+1}} \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - \beta_0 - x_i \beta \right)^2 + \lambda P_\alpha(\beta)
where x_i = (x_{i1}, …, x_{ik}), β = (β_1, …, β_k) for 1 ≤ i ≤ n, and P_α is the Elastic net penalty.
P_α is defined as:
P_\alpha(\beta) = (1 - \alpha) \frac{1}{2} \| \beta \|_{L_2}^2 + \alpha \| \beta \|_{L_1} = \sum_{j=1}^{k} \left( \frac{1 - \alpha}{2} \beta_j^2 + \alpha | \beta_j | \right)
The penalty is thus a linear combination of the L1 and L2 norms of the unknown parameters β. Introducing this parameter-dependent penalty into the objective function shrinks the estimators of the unknown parameters.
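The same penalty is available in scikit-learn's ElasticNet estimator, whose parameters alpha and l1_ratio play the roles of λ and α in the formulas above. The following sketch shows how one per-pixel Elastic net subsystem could be fitted; the particular values of the regularization parameters are illustrative assumptions, not the ones used in the reported experiments.

from sklearn.linear_model import ElasticNet

def fit_elastic_net_subsystem(X, y_pixel, lam=0.01, alpha=0.5):
    # Fit one Elastic net subsystem for a single output pixel.
    # lam maps to scikit-learn's alpha (overall penalty weight lambda),
    # alpha maps to l1_ratio (mix between the L1 and L2 penalties).
    model = ElasticNet(alpha=lam, l1_ratio=alpha, max_iter=10_000)
    model.fit(X, y_pixel)
    return model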

2.3.5. Multiply Neural Network

This section presents the neural model enabling efficient reconstruction of tomographic images. Effective use of multiply artificial neural networks in tomography is possible, but the effectiveness of this tool depends on many conditions. Above all, ANNs (artificial neural networks) are able to effectively visualize objects similar to those already represented in the training data. Each subsystem is one neural network. All neural networks were trained on a set of 50,000 simulation-generated cases.
A serious problem limiting the ability to generalize ANNs is overfitting. A good technique to reduce overfitting is to fundamentally limit the capacity of the model. These approaches are called regularization techniques. Among them, the following techniques can be distinguished: parameter norm penalties, early stopping, dropout, and transfer learning. In the case described, the technique of early stopping was used [64].
This technique tries to stop an estimator’s training phase prematurely, at the point where it has learned to extract all meaningful associations from the data, before beginning to model its noise. This is done by monitoring MSE (Mean Squared Error) of the validation set and terminating the training phase when this metric stops falling. This way, the estimator has enough time to learn the useful information but not enough to learn from the noise.
All cases were randomly divided into 3 sets: training, validation and testing, in 70:15:15 proportions. The training set (35,000 cases) was used to train each of the subsystems. The validation set (7500 cases) was used to determine the moment of stopping the iterative training process; the condition for stopping was a non-decreasing MSE on the validation set for 6 consecutive iterations. The test set (7500 cases) was used for an independent assessment of network quality after the learning process (MSE, R). The structure of each of the neural networks can be described as 96—10—1, meaning that each ANN was a multi-layered perceptron with 96 predictors, one hidden layer with 10 neurons and an output layer with one neuron. Logistic functions were used as the activation functions. All ANNs were trained using the Levenberg-Marquardt algorithm.
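The stopping rule described above can be expressed as a small helper function. This is a schematic sketch of the stated criterion (stop when the validation MSE has not decreased for 6 consecutive iterations), not the Matlab toolbox code that actually enforced it during training.

def should_stop(val_mse_history, patience=6):
    # Return True when the validation MSE has not improved for `patience` consecutive iterations
    if len(val_mse_history) <= patience:
        return False
    best_before = min(val_mse_history[:-patience])
    return min(val_mse_history[-patience:]) >= best_before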
Algorithm 2, in the form of Matlab code, represents the iterative process of training the multiple neural network shown in Figure 7. All 2883 trained neural networks are stored in a single structure variable called nets_for_pixels.
Algorithm 2. The Matlab code for training multiple ANN system
% X - input matrix 96 × 50000 of training cases
% Y - output matrix 2883 × 50000 of training cases
% Choose a Training Function
trainFcn = 'trainlm'; % In this case Levenberg-Marquardt backpropagation was chosen
hiddenLayerSize = 10;  % Number of neurons in the single hidden layer
net = fitnet(hiddenLayerSize,trainFcn);  % Create a fitting network under variable ‘net’
% Choose input and output pre/post-processing functions
% ‘removeconstantrows’ - remove matrix rows with constant values
% ‘mapminmax’ - map matrix row minimum and maximum values to [−1 1]
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};
% Setup division of data for training, validation, testing
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100; % 70% of cases is allocated for training
net.divideParam.valRatio = 15/100; % 15% of cases is allocated for validation
net.divideParam.testRatio = 15/100; % 15% of cases is allocated for testing
net.performFcn = 'mse'; % Mean Squared Error will be used for performance evaluation
x = X';
y = Y';
N=2883; % The resolution of output picture grid
parfor i=1:N  % Start ‘for’ loop with parallel computing
  % Assign an i-th row of reference cases to the variable t. Each of the 2883 lines corresponds
  % to one pixel of the output image
  t = y(i,:);
  % Train the network. The variable ‘nets_for_pixels’ is a structure that consists of 2883
  % separately trained neural networks.
  [nets_for_pixels{i},~] = train(net,x,t);
end % End ‘parfor’ loop
It should be emphasized that the algorithms used for the multiply Elastic net and multiply LARS methods, although created in the R programming language, have an analogous logical structure. Therefore, they are not reproduced in this paper.

3. Results

This section presents the results of image reconstruction based on the designed numerical models and laboratory measurements. Data analysis is an important part of process diagnosis based on tomography. Knowledge of the process can improve image reconstruction. A mesh of finite elements is generated over the cross-section of the tested object, and the calculations yield a reconstructed image. The inverse problem was solved using both deterministic and machine learning methods.
Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15 present the results of image reconstruction based on laboratory measurements of the examined objects. These are not reconstructions based on artificial measurements obtained from a simulation generator. The reconstructions presented below are the result of real measurements taken with the physical model (Figure 3). They contain natural noise and other imperfections caused by disturbances of the EIT system and the measurement process. As a result, the tomographic images presented below constitute an appropriate comparative basis, enabling objective evaluation of the individual EIT reconstruction algorithms.
Systems with 16 and 32 electrodes were used here. Previous research proves that deterministic methods effectively reconstruct the image based on real measurements. The results obtained using multiply neural networks depend primarily on the quality of the training set. In the presented experiments, the data set for the ANN was 10 times larger than for LARS or Elastic net and included more cases both in terms of the number of objects (tubes) and their distribution. It is possible that this fact caused the higher quality of the ANN reconstructions. The multiply LARS method is quite sensitive, while multiply Elastic net is quite universal, because by selecting appropriate regularization parameters, sufficiently good reconstructions can be obtained on the actual data.
All reconstructions presented in this section refer to the three variants of tube arrangement presented in Section 2.2 (Figure 4). The reconstructions were obtained on the basis of test cases generated using the appropriate script. White denotes the background and blue denotes the objects. The colors of the image are correlated with the conductance of the area represented by particular points on the mesh of a given cross-section. None of the reconstructed images were improved by data filtering or denoising.

3.1. Gauss-Newton Method

The Gauss-Newton method with Laplace regularization was used to reconstruct the image in electrical tomography for the 16- and 32-electrode systems (Figure 8 and Figure 9). The numerical algorithm operated on a differential basis, so in this case the inverse problem is solved after the first iteration. The regularization parameter was 0.08. The reconstructed images illustrate variants with 2, 3 and 4 artifacts.
By comparing the reconstructed images from Figure 8 and Figure 9 to Figure 4 from Section 2.2, it can be seen that the visual mapping of the position of the objects is correct, but their diameters are larger than in the reference images. Background noise is also visible, since the background should be uniformly white. There are also significant differences in the quality of the images obtained from the 16- and 32-electrode systems; the use of 32 electrodes gives much better results in this case.

3.2. Multiply Neural Networks

Image reconstruction in the case of multiply neural networks depends largely on the training set. An interesting observation is that the use of 32 electrodes (Figure 11) with respect to 16 (Figure 10) does not affect the visual quality of the imaging.
Comparing the reconstructed images of Figure 10 and Figure 11 to Figure 4, it can be seen that the visual representation of the position, and also the size, of the objects is clearly better than for the Gauss-Newton method. Noise is visible, but it is relatively small and rather point-like.

3.3. Multiply LARS

Another algorithm was based on the multiply LARS method. A training set of 5000 elements was used here. In this case, the obtained results for a system with 16 electrodes (Figure 12) are slightly worse than for a system with 32 electrodes (Figure 13). The key element in this method is the separation of a group of independent measurements. The visual mapping of the position of the objects is correct, but their diameters are larger than in the reference images.

3.4. Multiply Elastic Net

The final algorithm is multiply Elastic net. It is more universal due to its character and gives quite precise results.
The same training set was used as for the previous method. Reconstructions for systems with 16 and 32 electrodes are shown in Figure 14 and Figure 15, respectively. The diameters of the inclusions are larger than the reference ones; however, the two-fold increase in the number of electrodes gives significantly better results. The accuracy of the mapping increases and the amount of background noise decreases.

3.5. Comparison of Image Reconstructions

Visual comparison of the individual methods (Gauss-Newton, multiply Neural Networks, multiply LARS and multiply Elastic net) is not very precise. In order to increase the fairness of the comparison, indicators calculated using mathematical methods were applied. To make this possible, reference images (vectors) were designed by simulation for all 6 cases examined, using the dimensions of the physical model presented in Section 2.2. This is straightforward because in all tested variants the background pixels are white (value 1) and the identified objects (tubes) are dark blue (value 0) on the cross-section.
In order to compare the methods, the following evaluation metrics were used: Mean Squared Error (MSE), Relative Image Error (RIE), Image Correlation Coefficient (ICC) and the Expected Time of Image Reconstruction expressed in seconds. MSE is evaluated according to (13)
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - y_i^* \right)^2
where n is the output image resolution, y_i is the value of the i-th reference pixel, and y_i^* is the value of the i-th reconstructed pixel.
RIE is evaluated according to (14).
\mathrm{RIE} = \frac{\| y - y^* \|}{\| y \|}
ICC is evaluated according to (15).
\mathrm{ICC} = \frac{\sum_{i=1}^{n} (y_i^* - \bar{y}^*)(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (y_i^* - \bar{y}^*)^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}}
where ȳ is the mean value of the reference pixels and ȳ* is the mean value of the reconstructed pixels.
The smaller the MSE and RIE, the better the reconstruction quality; for ICC the opposite holds: the closer it is to 1, the better the reconstruction.
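The three metrics are straightforward to compute. The sketch below is a direct transcription of Equations (13)–(15) into Python/NumPy, assuming the reference image y and the reconstruction y* are given as flat arrays of 2883 pixel values.

import numpy as np

def mse(y, y_star):
    return np.mean((y - y_star) ** 2)                       # Equation (13)

def rie(y, y_star):
    return np.linalg.norm(y - y_star) / np.linalg.norm(y)   # Equation (14)

def icc(y, y_star):
    dy, dys = y - y.mean(), y_star - y_star.mean()
    return np.sum(dys * dy) / np.sqrt(np.sum(dys ** 2) * np.sum(dy ** 2))  # Equation (15)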
Table 2 presents the analysis of the quality of image reconstruction for individual methods. The column headers contain information about the number of electrodes in the measurement system (E16 or E32) and the number of hidden objects (O2, O3, O4). For example, E16_O2 means a measuring system with 16 electrodes applied to a case with 2 objects hidden inside the tested tank.
Analyzing the indicators in Table 2, it can be noticed that for all 6 tested variants and all 3 indicators, the best quality of reconstruction was obtained with the multiply ANN. The remaining methods rank differently depending on the variant and indicator. For example, in the reconstruction of E16_O2, the best MSE was obtained with Elastic net (MSE = 0.0111), the best RIE with LARS (RIE = 0.1053) and the best ICC with Gauss-Newton with Laplace regularization (ICC = 0.5290). Thus, it is impossible to unambiguously rank multiply Elastic net, multiply LARS and Gauss-Newton, but the best results in the tested cases were indisputably obtained using the ANN. At the same time, multiply ANN is the slowest method among all the tested algorithms, while the fastest methods proved to be multiply Elastic net and multiply LARS.
The learning times of the tomographic algorithms based on the analyzed machine learning methods depend on many factors. For example, the training of the multiply ANN for 16 electrodes took about 27 hours on one central processing unit (CPU) core, but with 24 cores it took 4.4 hours. The multiply Elastic net and multiply LARS methods are much faster than multiply ANN: with one core, training took about 90 seconds, and with 24 cores about 25 seconds. In the case of 32 electrodes, the training times are about 13% longer for ANN and 5 times longer for Elastic net and LARS.

4. Conclusions

The monitoring systems are aimed at automation, analysis and optimization of technological processes using industrial tomography, which allows for analysis of processes taking place in a facility without interfering with its interior. Such solutions enable better understanding and monitoring of industrial processes and facilitate process control in real time. The collected information is processed by an algorithm that reconstructs the image. This type of tomography is characterized by a relatively low image resolution. Difficulties in obtaining high resolution result mainly from a limited number of measurements, non-linear current flow through a given medium and too-low sensitivity of measured voltages depending on changes in conductivity in the area. The main challenge in this area is to design precise measuring devices and algorithms for image reconstruction.
Data analysis is an important part of process diagnosis based on tomography. The inverse problem is a process of optimization, identification or synthesis in which the parameters describing the field are determined based on information relevant to this field. Such problems are difficult to analyze. They do not have unambiguous solutions and are ill-conditioned due to too little or too much information. Knowledge of the process can make image reconstruction more resistant to incomplete information. In this article, the authors used the deterministic Gauss-Newton method with Laplace regularization as a reference for the selected machine learning methods.
In processes based on electrical tomography, there is no ideal method for reconstructing and analyzing data. Methods and models need to be properly selected depending on the specifics of the problem to be solved. Deterministic methods usually cope less well when many hidden objects require reconstruction. Multiply Neural Networks give better results, but this depends mostly on the quantity and quality of the training set. With a large training set, especially for smaller objects, they are really effective. Machine learning methods based on multiply LARS, and especially multiply Elastic net, seem to be less accurate, especially for real measurement data, but they are much faster than multiply ANN. The disadvantage of ANN is the long training time and the relatively long reconstruction time. The obtained results were illustrated both graphically, which enables visual analysis of the processes taking place inside the object, and with numerical indicators. The proposed algorithms and the gained knowledge should bring benefits to various economic and industrial sectors.
Further works will be focused on improving the methods of image reconstruction using deep learning and the development of measuring devices for both electrical tomography and ultrasound tomography.

Author Contributions

T.R. developed the system concept, the research methods and the implementation of the presented techniques in industrial tomography. G.K. implemented the neural network method. E.K. carried out research, especially in the field of statistical methods. P.T. worked on the deterministic algorithm and the grids in Matlab.

Acknowledgments

The authors would like to thank the authorities and employees of the Institute of Mathematics, Maria Curie-Skłodowska University, Lublin, Poland for sharing supercomputing resources.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. González, G.; Huttunen, J.M.J.; Kolehmainen, V.; Seppänen, A.; Vauhkonen, M. Experimental evaluation of 3D electrical impedance tomography with total variation prior. Inverse Probl. Sci. Eng. 2016, 24, 1411–1431.
2. Liu, S.; Jia, J.; Zhang, Y.D.; Yang, Y. Image reconstruction in electrical impedance tomography based on structure-aware sparse Bayesian learning. IEEE Trans. Med. Imaging 2018, 37, 2090–2102.
3. Kang, S.I.; Khambampati, A.K.; Kim, B.S.; Kim, K.Y. EIT image reconstruction for two-phase flow monitoring using a sub-domain based regularization method. Flow Meas. Instrum. 2017, 53, 28–38.
4. Ren, S.; Wang, Y.; Liang, G.; Dong, F. A Robust Inclusion Boundary Reconstructor for Electrical Impedance Tomography with Geometric Constraints. IEEE Trans. Instrum. Meas. 2018, 99, 1–12.
5. Yang, Y.; Jia, J. An image reconstruction algorithm for electrical impedance tomography using adaptive group sparsity constraint. IEEE Trans. Instrum. Meas. 2017, 66, 2295–2305.
6. Liu, D.; Zhao, Y.; Khambampati, A.K.; Seppänen, A.; Du, J. A Parametric Level set Method for Imaging Multiphase Conductivity Using Electrical Impedance Tomography. IEEE Trans. Comput. Imaging 2018, 4, 552–561.
7. Rymarczyk, T. Using electrical impedance tomography to monitoring flood banks. Int. J. Appl. Electromagn. Mech. 2014, 45, 489–494.
8. Rymarczyk, T.; Kłosowski, G. Application of neural reconstruction of tomographic images in the problem of reliability of flood protection facilities. Eksploatacja I Niezawodnosc 2018, 20, 425–434.
9. Hamilton, S.J.; Hauptmann, A. Deep D-Bar: Real-Time Electrical Impedance Tomography Imaging with Deep Neural Networks. IEEE Trans. Med. Imaging 2018, 37, 2367–2377.
10. Tavares, R.S.; Sato, A.K.; Martins, T.C.; Lima, R.G.; Tsuzuki, M.S.G. GPU acceleration of absolute EIT image reconstruction using simulated annealing. Biomed. Signal Process. Control 2017.
11. Tan, C.; Lv, S.; Dong, F.; Takei, M. Image Reconstruction Based on Convolutional Neural Network for Electrical Resistance Tomography. IEEE Sens. J. 2019, 19, 196–204.
12. Farha, M. Combined Algorithm of Total Variation and Gauss-Newton for Image Reconstruction in Two-Dimensional Electrical Impedance Tomography (EIT). In Proceedings of the 2017 International Seminar on Sensor, Instrumentation, Measurement and Metrology (ISSIMM), Surabaya, Indonesia, 25–26 August 2017.
13. Yang, Y.; Jia, J.; Polydorides, N.; McCann, H. Effect of structured packing on EIT image reconstruction. In Proceedings of the 2014 IEEE International Conference on Imaging Systems and Techniques (IST) Proceedings, Santorini, Greece, 14–17 October 2014; pp. 53–58.
14. Wang, H.; Wang, C.; Yin, W. A pre-iteration method for the inverse problem in electrical impedance tomography. IEEE Trans. Instrum. Meas. 2004, 53, 1093–1096.
15. Li, T.; Kao, T.J.; Isaacson, D.; Newell, J.C.; Saulnier, G.J. Adaptive Kaczmarz method for image reconstruction in electrical impedance tomography. Physiol. Meas. 2013, 34, 595–608.
16. González, G.; Kolehmainen, V.; Seppänen, A. Isotropic and anisotropic total variation regularization in electrical impedance tomography. Comput. Math. Appl. 2017, 74, 564–576.
17. Zhou, Y.; Li, X. A real-time EIT imaging system based on the split augmented Lagrangian shrinkage algorithm. Measurement 2017, 110, 27–42.
18. Liu, X.; Yao, J.; Zhao, T.; Obara, H.; Cui, Y.; Takei, M. Image reconstruction under contact impedance effect in micro electrical impedance tomography sensors. IEEE Trans. Biomed. Circuits Syst. 2018, 12, 623–631.
19. Alsaker, M.; Hamilton, S.J.; Hauptmann, A. A direct D-bar method for partial boundary data electrical impedance tomography with a priori information. Inverse Probl. Imaging 2017, 11, 427–454.
20. Fernández-Fuentes, X.; Mera, D.; Gómez, A.; Vidal-Franco, I. Towards a Fast and Accurate EIT Inverse Problem Solver: A Machine Learning Approach. Electronics 2018, 7, 422.
21. Brillante, L.; Bois, B.; Mathieu, O.; Lévêque, J. Electrical imaging of soil water availability to grapevine: A benchmark experiment of several machine-learning techniques. Precis. Agric. 2016, 17, 637–658.
22. Rymarczyk, T.; Kozłowski, E. Using Statistical Algorithms for Image Reconstruction in EIT. In Proceedings of the MATEC Web Conferences, Majorca, Spain, 14–17 July 2018; Volume 210, p. 02017.
23. Rymarczyk, T.; Kłosowski, G.; Kozłowski, E. Non-Destructive System Based on Electrical Tomography and Machine Learning to Analyze Moisture of Buildings. Sensors 2018, 18, 2285.
24. Hoyle, B.S. IPT in Industry—Application Need to Technology Design. In Proceedings of the ISIPT 8th World Congress in Industrial Process Tomography, Igaussu Falls, Brazil, 26–29 September 2016; pp. 1–7.
25. Rymarczyk, T.; Adamkiewicz, P.; Polakowski, K.; Sikora, J. Effective ultrasound and radio tomography imaging algorithm for two-dimensional problems. Przegląd Elektrotechniczny 2018, 94, 62–69.
26. Romanowski, A. Contextual Processing of Electrical Capacitance Tomography Measurement Data for Temporal Modeling of Pneumatic Conveying Process. In Proceedings of the 2018 Federated Conference on Computer Science and Information Systems (FedCSIS), Poznan, Poland, 9–12 September 2018; pp. 283–286.
27. Rymarczyk, T.; Kłosowski, G.; Gola, A. The Use of Artificial Neural Networks in Tomographic Reconstruction of Soil Embankments. In Proceedings of the International Symposium on Distributed Computing and Artificial Intelligence, Toledo, Spain, 20–22 June 2018; Springer: Cham, Switzerland, 2018; pp. 104–112.
28. Rymarczyk, T. New Methods to Determine Moisture Areas by Electrical Impedance Tomography. Int. J. Appl. Electromagn. Mech. 2016, 52, 79–87.
29. Szczęsny, A.; Korzeniewska, E. Selection of the method for the earthing resistance measurement. Przegląd Elektrotechniczny 2018, 94, 178–181.
30. Liu, S.; Wu, H.; Huang, Y.; Yang, Y.; Jia, J. Accelerated Structure-Aware Sparse Bayesian Learning for 3D Electrical Impedance Tomography. IEEE Trans. Ind. Inform. 2019.
31. Kozłowski, E.; Mazurkiewicz, D.; Kowalska, B.; Kowalski, D. Binary linear programming as a decision-making aid for water intake operators. In Proceedings of the International Conference on Intelligent Systems in Production Engineering and Maintenance, Wroclaw, Poland, 28–29 September 2017; Springer: Cham, Switzerland, 2017; pp. 199–208.
32. Kłosowski, G.; Kozłowski, E.; Gola, A. Integer linear programming in optimization of waste after cutting in the furniture manufacturing. Adv. Intell. Syst. Comput. 2018, 637, 260–270.
33. Wang, M. Industrial Tomography: Systems and Applications; Woodhead Publishing: Sawston, UK, 2015.
34. Holder, D. Introduction to Biomedical Electrical Impedance Tomography Electrical Impedance Tomography Methods, History and Applications; Institute of Physics: Bristol, UK, 2005.
35. Karhunen, K.; Seppänen, A.; Kaipio, J.P. Adaptive meshing approach to identification of cracks with electrical impedance tomography. Inverse Probl. Imaging 2014, 8, 127–148.
36. Rymarczyk, T.; Adamkiewicz, P.; Duda, K.; Szumowski, J.; Sikora, J. New electrical tomographic method to determine dampness in historical buildings. Arch. Electr. Eng. 2016, 65, 273–283.
37. Al Hosani, E.; Soleimani, M. Multiphase permittivity imaging using absolute value electrical capacitance tomography data and a level set algorithm. Philos. Trans. R. Soc. A 2016, 374, 20150332.
38. Kryszyn, J.; Wanta, D.; Smolik, W. Gain Adjustment for Signal-to-Noise Ratio Improvement in Electrical Capacitance Tomography System EVT4. IEEE Sens. J. 2017, 17, 8107–8116.
39. Banasiak, R.; Wajman, R.; Jaworski, T.; Fiderek, P.; Fidos, H.; Nowakowski, J. Study on two-phase flow regime visualization and identification using 3D electrical capacitance tomography and fuzzy-logic classification. Int. J. Multiphase Flow 2014, 58, 1–14.
40. Garbaa, H.; Jackowska-Strumiłło, L.; Grudzień, K.; Romanowski, A. Application of electrical capacitance tomography and artificial neural networks to rapid estimation of cylindrical shape parameters of industrial flow structure. Arch. Electr. Eng. 2016, 65, 657–669.
41. Kryszyn, J.; Smolik, W. Toolbox for 3d modelling and image reconstruction in electrical capacitance tomography. Informatyka Automatyka Pomiary w Gospodarce i Ochronie Środowiska (IAPGOŚ) 2017, 7, 137–145.
42. Soleimani, M.; Mitchell, C.N.; Banasiak, R.; Wajman, R.; Adler, A. Four-dimensional electrical capacitance tomography imaging using experimental data. Prog. Electromagn. Res. 2009, 90, 171–186.
43. Ye, Z.; Banasiak, R.; Soleimani, M. Planar array 3D electrical capacitance tomography. Insight-Non-Destr. Test. Cond. Monit. 2013, 55, 675–680.
44. Wajman, R.; Fiderek, P.; Fidos, H.; Sankowski, D.; Banasiak, R. Metrological evaluation of a 3D electrical capacitance tomography measurement system for two-phase flow fraction determination. Meas. Sci. Technol. 2013, 24, 065302.
45. Romanowski, A. Big Data-Driven Contextual Processing Methods for Electrical Capacitance Tomography. IEEE Trans. Ind. Inform. 2019, 15, 1609–1618.
46. Kłosowski, G.; Rymarczyk, T.; Gola, A. Increasing the Reliability of Flood Embankments with Neural Imaging Method. Appl. Sci. 2018, 8, 1457.
47. Demidenko, E.; Hartov, A.; Paulsen, K. Statistical estimation of Resistance/Conductance by electrical impedance tomography measurements. IEEE Trans. Med. Imaging 2004, 23, 829–838.
48. Adler, A.; Lionheart, W.R. Uses and abuses of EIDORS: An extensible software base for EIT. Physiol. Meas. 2006, 27, S25.
49. Dušek, J.; Hladký, D.; Mikulka, J. Electrical Impedance Tomography Methods and Algorithms Processed with a GPU. In Proceedings of the 2017 Progress In Electromagnetics Research Symposium-Spring (PIERS), St. Petersburg, Russia, 22–25 May 2017; pp. 1710–1714.
50. Rymarczyk, T.; Sikora, J. Applying industrial tomography to control and optimization flow systems. Open Phys. 2018, 16, 332–345.
51. Voutilainen, A.; Lehikoinen, A.; Vauhkonen, M.; Kaipio, J. Three-dimensional nonstationary electrical impedance tomography with a single electrode layer. Meas. Sci. Technol. 2010, 21, 035107.
52. Babout, L.; Grudzień, K.; Wiącek, J.; Niedostatkiewicz, M.; Karpiński, B.; Szkodo, M. Selection of material for X-ray tomography analysis and DEM simulations: Comparison between granular materials of biological and non-biological origins. Granul. Matter 2018, 20, 20–38.
53. Mikulka, J. GPU—Accelerated Reconstruction of T2 Maps in Magnetic Resonance Imaging. Meas. Sci. Rev. 2015, 4, 210–218.
54. Bartušek, K.; Fiala, P.; Mikulka, J. Numerical Modeling of Magnetic Field Deformation as Related to Susceptibility Measured with an MR System. Radioengineering 2008, 17, 113–118.
55. Lopato, P.; Chady, T.; Sikora, R.; Ziolkowski, M. Full wave numerical modelling of terahertz systems for nondestructive evaluation of dielectric structures. COMPEL 2013, 32, 736–749.
56. Vališ, D.; Mazurkiewicz, D. Application of selected Levy processes for degradation modelling of long range mine belt using real-time data. Arch. Civil Mech. Eng. 2018, 18, 1430–1440.
57. Ziolkowski, M.; Gratkowski, S.; Zywica, A.R. Analytical and numerical models of the magnetoacoustic tomography with magnetic induction. COMPEL 2018, 37, 538–548.
58. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009.
59. Madsen, K.; Nielsen, H.; Tingleff, O. Methods for Non-Linear Least Squares Problems, 2nd ed.; Informatics and Mathematical Modelling, Technical University of Denmark: Lyngby, Denmark, 2004; p. 60.
60. Fonseca, T.; Goliatt, L.; Campos, L.; Bastos, F.; Barra, L.; Santos, R. Machine Learning Approaches to Estimate Simulated Cardiac Ejection Fraction from Electrical Impedance Tomography. In Proceedings of the Ibero-American Conference on Artificial Intelligence (IBERAMIA 2016), LNAI 10022, San José, Costa Rica, 23–25 November 2016; pp. 235–246.
61. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B 2005, 2, 301–320.
62. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
63. Wang, J.; Han, B.; Wang, W. Elastic-net regularization for nonlinear electrical impedance tomography with a splitting approach. Appl. Anal. 2018, 1–17.
64. Raskutti, G.; Wainwright, M.J.; Yu, B. Early stopping and non-parametric regression: An optimal data-dependent stopping rule. J. Mach. Learn. Res. 2014, 15, 335–366.
Figure 1. Comparison of the traditional concept with the improved concept: (a) a single prediction system with 96 predictors and 2883 responses; (b) multiple prediction system composed of 2883 separately trained subsystems, each of which has 96 predictors and 1 response.
Figure 2. Measurement model in electrical impedance tomography: (a) opposite, (b) neighboring method.
Figure 3. The test stand: (a) the measurement device—a hybrid tomograph made by the Netrix S.A. Research and Development Center, (b) tank with 2 phantoms, (c) tank with 4 phantoms.
Figure 4. Three variants of the arrangement of phantoms in the tested tank with 16 electrodes: (a) 2 phantoms, (b) 3 phantoms, (c) 4 phantoms.
Figure 5. Dimensioned model of the EIT tested tank.
Figure 6. A training case generated with the simulation method with a graph showing the voltages.
Figure 7. A mathematical neural model for converting electrical signals into images.
Figure 8. Image reconstruction for 16 measurement electrodes by the Gauss-Newton method with Laplace regularization: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 9. Image reconstruction for 32 measurement electrodes by the Gauss-Newton method with Laplace regularization: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 10. Image reconstruction for 16 measurement electrodes by Multiply Neural Networks: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 11. Image reconstruction for 32 measurement electrodes by Multiply Neural Networks: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 12. Image reconstruction for 16 measurement electrodes by multiply LARS: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 13. Image reconstruction for 32 measurement electrodes by multiply LARS: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 14. Image reconstruction for 16 measurement electrodes by multiply Elastic net: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Figure 15. Image reconstruction for 32 measurement electrodes by multiply Elastic net: (a) 2 objects, (b) 3 objects, (c) 4 objects.
Table 1. Comparison of the neural networks with 1 and 10 responses.
Quality Indicator (Testing Set) | ANN 96—10—1 | ANN 96—10—10 | ANN 96—20—10
Mean MSE | 0.0069 | 0.0087 | 0.0086
Mean R | 0.7548 | 0.6994 | 0.6897
Table 2. Comparison of image reconstruction indicators.
Method | Evaluation Metric | E16_O2 | E16_O3 | E16_O4 | E32_O2 | E32_O3 | E32_O4
ANN | MSE | 0.0074 | 0.0086 | 0.0076 | 0.0060 | 0.0061 | 0.0058
ANN | RIE | 0.0869 | 0.0936 | 0.0886 | 0.0782 | 0.0785 | 0.0771
ANN | ICC | 0.7356 | 0.7371 | 0.8218 | 0.7484 | 0.8163 | 0.7946
ANN | Expected time of image reconstruction [s] | 0.1501 | 0.1578 | 0.1574 | 0.2776 | 0.2785 | 0.2787
Elastic net | MSE | 0.0111 | 0.0148 | 0.0197 | 0.0081 | 0.0131 | 0.0174
Elastic net | RIE | 0.2466 | 0.3499 | 0.3451 | 0.2120 | 0.2661 | 0.3300
Elastic net | ICC | 0.5024 | 0.4651 | 0.4535 | 0.5090 | 0.4785 | 0.4702
Elastic net | Expected time of image reconstruction [s] | 0.00062 | 0.00066 | 0.00071 | 0.0013 | 0.0014 | 0.0014
LARS | MSE | 0.0115 | 0.0153 | 0.0203 | 0.0074 | 0.0121 | 0.0160
LARS | RIE | 0.1053 | 0.1216 | 0.1402 | 0.0871 | 0.1113 | 0.1280
LARS | ICC | 0.4658 | 0.4586 | 0.4438 | 0.5261 | 0.5072 | 0.5082
LARS | Expected time of image reconstruction [s] | 0.00041 | 0.00095 | 0.00092 | 0.0019 | 0.0018 | 0.0018
Gauss-Newton with Laplace regularization | MSE | 0.0199 | 0.0267 | 0.0351 | 0.0110 | 0.0164 | 0.0225
Gauss-Newton with Laplace regularization | RIE | 0.1661 | 0.2524 | 0.3415 | 0.1563 | 0.1755 | 0.2402
Gauss-Newton with Laplace regularization | ICC | 0.5290 | 0.4643 | 0.4181 | 0.5853 | 0.5984 | 0.5412
Gauss-Newton with Laplace regularization | Expected time of image reconstruction [s] | 0.01248 | 0.01010 | 0.00940 | 0.01159 | 0.01229 | 0.01197
