Flow Velocity Computation in Solid–Liquid Two-Phase Flow by a Hybrid Network CNN–RKSVM

Abstract: As an advanced detection technique, electrical resistance tomography (ERT) has been applied to detect the solid–liquid two-phase flow velocity from available ERT measurements. Flow velocity computation by ERT depends on suitable algorithms, typically the cross-correlation (CC) principle and convolutional neural networks (CNNs). However, these two types of algorithms have poor accuracy and generalization under complex measuring conditions and various flow patterns. To address this issue, this paper proposes a hybrid network that combines a CNN with a reproducing kernel-based support vector machine (RKSVM). The features hidden in the ERT measurements are extracted by the CNN, and the flow velocity is then computed by the RKSVM in a high-dimensional feature space. Based on ERT measurements from an actual experimental platform, the results show that the hybrid network achieves higher accuracy and generalization ability for flow velocity computation than the existing CC, RKSVM, and CNN methods.


Introduction
Flow velocity is an important parameter in multiphase flow detection [1,2]. As an advanced detection technique, electrical resistance tomography (ERT) [3] offers real-time, noninvasive operation with a fast response and is widely utilized in industrial detection [4,5]. Flow velocity computation with ERT depends on suitable algorithms, the most typical of which are the cross-correlation (CC) principle [6] and convolutional neural networks (CNNs) [7].
In the past decade, various methods for flow velocity computation have been presented based on the CC principle and available ERT measurements. The CC-based methods compare two groups of correlated measuring series obtained by two adjacent ERT sensors; the flow velocity can then be computed from the transit time and the distance between the two sensors. However, in most cases, these CC methods remain inaccurate due to three problems: the unreasonable "frozen" assumption in CC [7], inherent ERT limitations [8], and the uncertain length of the compared series [9], although some progress has been made [10,11].
In recent years, CNNs have been applied to flow velocity detection. A flow velocity prediction model [12] was established based on a deep neural network that combined CNNs and Bi-LSTMs to extract features from time series, together with a self-attention mechanism to improve prediction performance. In the same direction, two models based on a CNN and an encoder-decoder method were established to predict the flow velocity and heat transfer characteristics around NACA sections [13]. Several research groups have designed various CNNs for flow velocity computation [14]. For example, Liu et al. [15] used a CNN to calculate flow velocity owing to its generalization and feature extraction ability, with transfer learning used to pretrain the network. More reviews can be found in [16]. Despite this progress, existing CNN-based methods have limitations. First, due to the black-box nature of the deep learning process, the generalization of CNNs is unclear and difficult to interpret. Second, the accuracy of CNNs is often too unstable to satisfy the requirements of flow velocity detection when the flow velocity changes quickly. These problems limit the applicability of CNNs in practice.
To address the above problems, we propose a method for flow velocity detection based on the existing least-squares support vector machine (LS-SVM) algorithm. The LS-SVM is an improved support vector machine (SVM) proposed by Suykens et al. [17] that is widely used in pattern recognition and nonlinear regression due to its reduced computational complexity. The LS-SVM uses a reproducing kernel function to map the input to a high-dimensional feature space, which can effectively enhance the generalization of SVM prediction. Following the LS-SVM algorithm, in this paper we design a hybrid network that combines a CNN with a reproducing kernel-based support vector machine (RKSVM). First, the features hidden in the ERT measurements are extracted by the CNN; the feature data are then mapped to a high-dimensional feature space by a selected reproducing kernel function. Finally, the flow velocity is computed by the hybrid network in the high-dimensional feature space. The proposed method is expected to have stronger generalization capabilities and greater accuracy than the typical CC, RKSVM, and CNN methods.


Cross-Correlation Principle
The cross-correlation (CC) computation derives from the theory of stochastic processes and the analysis of random signals [18]. The CC method compares two groups of correlated measuring series obtained by two adjacent ERT sensors, i.e., an upstream sensor x(t) and a downstream sensor y(t) along the flow direction, as shown in Figure 1. Both sensors are mounted in parallel on the pipe. If the fluid in the pipe conforms to the "freezing" assumption [19], the ERT measuring series between the two sensors will be sufficiently similar. Hence, the transit time τ₀ between the two sensors can be measured when the distance between them is L, and the flow velocity can be computed as

v = L/τ₀ (1)

However, the "freezing" assumption does not hold in most cases, which involve complex flow patterns and various measuring conditions in dredging engineering [20]. Noise in the ERT measurements can greatly affect the accuracy of the CC method, and the complementarity between similarity norms in the CC method is seldom exploited. In addition, the CC method cannot determine the optimal series length, which strongly affects the accuracy of the CC estimate. Therefore, the accuracy of the flow velocity computed from the CC principle and ERT measurements cannot be guaranteed.
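The CC estimate of Equation (1) can be sketched as follows. This is a minimal NumPy illustration of the principle, not the paper's implementation; the windowing and series-length selection criticized above are ignored, and the sampling rate and spacing values used in the usage example are illustrative.

```python
import numpy as np

def cc_velocity(x, y, fs, L):
    """Estimate flow velocity v = L / tau0 from two ERT measuring series.

    x, y : upstream and downstream sensor series sampled at fs (Hz).
    L    : axial spacing between the two ERT sensors, in metres.
    """
    x = np.asarray(x, float) - np.mean(x)        # remove DC offsets before
    y = np.asarray(y, float) - np.mean(y)        # correlating
    corr = np.correlate(y, x, mode="full")       # R_xy over all lags
    lag = np.argmax(corr) - (len(x) - 1)         # lag (in samples) of the peak
    tau0 = lag / fs                              # transit time in seconds
    return L / tau0 if tau0 > 0 else float("nan")
```

For example, a downstream series that repeats the upstream series 10 samples later, at fs = 100 Hz and L = 0.3 m, yields τ₀ = 0.1 s and v = 3 m/s.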

Convolutional Neural Network
The convolutional neural network (CNN) is a deep feedforward neural network based on the ideas of local connectivity and parameter sharing [21]. The structure of a typical CNN for flow velocity prediction is shown in Figure 2.

The CNN generally consists of multiple convolutional layers and pooling layers. The convolutional layer effectively captures feature data by applying a series of filters or convolution kernels to the input data. Moreover, the convolution kernel computes only local information from the input data at a single point in time, which allows the CNN to better extract deep features. The pooling layer performs a secondary extraction of the features obtained from the convolutional layer in order to reduce the feature dimensions; common pooling operators include maximum pooling, average pooling, and sum pooling. The two fully connected (FC) layers, FC1 and FC2, convert the extracted features into a one-dimensional (1D) matrix for feature combination and transformation, and FC2 reduces the connections of the CNN in the FC layer to a low dimension (e.g., 10 dimensions in this paper). These are ultimately passed to the output layer to obtain the results.


Reproducing Kernel Function
Let H be a Hilbert function space whose elements are functions defined on a set D. For any two functions u, v ∈ H, their inner product is denoted ⟨u, v⟩. For any r, t ∈ D, if a function K(t, r) ∈ H satisfies u(r) = ⟨u(t), K(t, r)⟩, then H is called a reproducing kernel space, and K(t, r) is the reproducing kernel function on H [22].
A Sobolev Hilbert space on (−∞, +∞) is composed of all continuous complex functions u(x) whose norm is finite,
where a and c > 0. The reproducing kernel of the Sobolev Hilbert space can be expressed in the form of Equation (4), from which the translation-invariant kernel function of Equation (5) is obtained. This can be proved by taking the Fourier transform of Equation (4); similarly, it can be proved for Equation (5). From Equations (6) and (7), the translation-invariant kernel function satisfies the Mercer condition for an SVM kernel function [23].
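The original Equations (2)–(5) did not survive extraction. For reference, a standard first-order Sobolev norm with parameters a, c > 0 and its reproducing kernel take the following form; this reconstruction is an assumption and may differ from the paper's exact expressions:

```latex
\lVert u \rVert_{H}^{2} \;=\; \int_{-\infty}^{+\infty} \left( a^{2}\,|u(x)|^{2} + c^{2}\,|u'(x)|^{2} \right) \mathrm{d}x ,
\qquad
K(t,r) \;=\; \frac{1}{2ac}\, e^{-\frac{a}{c}\,|t-r|} .
```

The kernel above is the Green's function of the operator a² − c² d²/dx², which is what makes the reproducing property u(r) = ⟨u, K(·, r)⟩ hold under this norm.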

Reproducing Kernel-Based Support Vector Machine
The reproducing kernel-based support vector machine (RKSVM) [24] is an improvement on the typical SVM, built on the reproducing kernel of Equation (5) and its induced feature mapping φ(·).
If X is a sample vector in a d-dimensional space and y is its label, X = (x₁, x₂, . . ., x_d) ∈ Rᵈ, then the dataset S of all samples can be represented as S = {(X₁, y₁), . . ., (Xₙ, yₙ)}, where Xₖ ∈ Rᵈ, yₖ ∈ R, and n is the total number of samples.
The RKSVM transforms each sample X in S to a high-dimensional feature space by the nonlinear mapping φ(·), and the linear regression model over all samples is constructed in the high-dimensional feature space as Equation (8), where ϖ is the weight vector of the support vectors and b is the bias term.
The optimization problem for the above regression model is given in Equation (9), where γ is the regularization parameter, ξ is the slack variable, and e is a unit vector. The Lagrange function is defined from Equation (9) as Equation (10), where α is the vector of Lagrange multipliers. Setting the partial derivatives of L with respect to ϖ, b, ξ, and α to 0 reduces the optimization problem to the linear equation group of Equation (11), where b = αᵀe, and Equation (12) is then obtained from Equation (11). Letting the kernel function be φ(X)φ(X)ᵀ = K(X, X), where K(X, X) is the reproducing kernel function of Equation (5), Equation (12) becomes the regression function of the RKSVM, which can be rewritten as Equation (13), where K(X, X) + eeᵀ + γ⁻¹I is a positive definite matrix and α is solved from Equation (11).
Compared with the typical SVM, the RKSVM has three advantages. First, the matrix in the RKSVM is positive definite, so the RKSVM model is solved in a single optimization step rather than by the multiple iterative optimizations required by the SVM. Second, without the need to solve for b separately, the RKSVM halves the number of solved variables compared with the SVM. Finally, the linear regression in the RKSVM has stronger generalization capabilities than that in the SVM. Hence, our design starts from the RKSVM in this paper.
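Since the original equation group did not survive extraction, the following NumPy sketch shows a common LS-SVM-style formulation of the RKSVM regression, assuming the standard bordered linear system for the dual variables. The Sobolev-type kernel, its 1/(2ac) scale, and all parameter values here are illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

def sobolev_kernel(X1, X2, a=1.0, c=1.0):
    # Translation-invariant Sobolev-type kernel K(t, r) ~ exp(-(a/c)|t - r|);
    # the 1/(2ac) scale and the (a, c) values are illustrative assumptions.
    d = np.abs(X1[:, None, :] - X2[None, :, :]).sum(axis=2)   # L1 distances
    return np.exp(-(a / c) * d) / (2.0 * a * c)

def rksvm_fit(X, y, gamma=1e4, a=1.0, c=1.0):
    """Solve the standard LS-SVM dual system for (b, alpha):

        [ 0   1^T          ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma  ] [ alpha ] = [ y ]
    """
    n = len(y)
    K = sobolev_kernel(X, X, a, c)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # alpha, b

def rksvm_predict(X_train, alpha, b, X_new, a=1.0, c=1.0):
    # Regression function: sum_k alpha_k K(X_new, X_k) + b
    return sobolev_kernel(X_new, X_train, a, c) @ alpha + b
```

With weak regularization (large γ), the fitted model reproduces the training targets almost exactly, which reflects the interpolation behavior of the positive definite kernel matrix.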

Structure of Hybrid Network
Given the respective advantages of the CNN and the RKSVM, we propose a hybrid network based on both, termed CNN-RKSVM, as shown in Figure 3. The CNN-RKSVM consists of a CNN block, an FC block, and an RKSVM block; the structure of the CNN block and FC block is adapted from the AlexNet network [25]. The network training process of the CNN-RKSVM comprises the two stages described below. In regular CNN-based models, a small convolution kernel is usually chosen; many researchers have followed this guideline when designing models, and few have used large convolution kernels. However, the ERT measurements are 1D data; thus, the network parameters, such as the number of convolutional and pooling layers, the kernel size, and the step size, are determined based on the quality of the features extracted by the CNN block. The structure of the CNN block and the FC block is shown in Table 1.

Training of CNN-RKSVM
During network training, in the first stage, the ERT measurements are used as the inputs of the CNN block, the real flow velocities are used as the outputs (labels), and the parameters of the CNN block are determined by minimizing a loss function that combines the prediction error with a λ-weighted regularization term. In this stage, the CNN block extracts the key features from the input ERT measurements through the convolutional layers. A batch normalization (BN) layer and a rectified linear unit (ReLU) activation function follow each convolutional layer: the BN layer normalizes the output of the convolutional layer to improve the generalization ability of the network, and the ReLU activation improves feature extraction and network performance. In the convolutional layers, "same" padding is applied to the input ERT measurements so that the input and output of each convolutional layer have the same size; the amount of padding is determined by the kernel size and the step size. In this way, information loss at the boundaries of the input data is avoided, which also improves the performance and stability of the proposed network.
The second stage starts from the training results of the first stage, where the FC2 block generates a set whose elements take the form (w1(t), w2(t), . . ., w10(t), v(t)). In this set, t denotes the t-th iteration, v(t) is the flow velocity label, and t = 1, 2, . . ., T over T iterations in total. This set is passed to the RKSVM block, and {(w1(t), w2(t), . . ., w10(t), v(t)) | t = 1, 2, . . ., T} is used to train the RKSVM block. Finally, the trained RKSVM is taken back into the network and replaces the FC2 layer as the output of the CNN block; the integration of the CNN and the RKSVM is marked by the sign "⊕". Consequently, the learning process is complete, and the trained hybrid network, termed CNN-RKSVM, can predict and compute the flow velocity.
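The two-stage procedure above can be sketched end to end. In this NumPy sketch, a fixed random projection stands in for the trained CNN and FC blocks, and a Gaussian-kernel regressor stands in for the RKSVM head, so every concrete function, dimension, and value below is an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(measurements, W):
    # Stand-in for the trained CNN + FC2 blocks: maps each frame of 240
    # ERT measurements to a 10-dimensional feature vector (w1, ..., w10).
    return np.tanh(measurements @ W)

def fit_rksvm_head(F, v, gamma=1000.0):
    # Stage 2: fit a kernel regressor on (feature, velocity) pairs.
    # A Gaussian kernel is used here as a stand-in for the reproducing kernel.
    K = np.exp(-np.sum((F[:, None] - F[None, :]) ** 2, axis=2))
    return np.linalg.solve(K + np.eye(len(v)) / gamma, v)

def predict(F_train, alpha, F_new):
    K = np.exp(-np.sum((F_new[:, None] - F_train[None, :]) ** 2, axis=2))
    return K @ alpha

# Stage 1 stand-in: a fixed random projection plays the role of the trained CNN.
W = rng.normal(size=(240, 10)) / np.sqrt(240)
X = rng.normal(size=(64, 240))          # 64 frames of 240 ERT measurements
v = rng.uniform(1.0, 6.0, size=64)      # flow velocity labels (m/s)

F = extract_features(X, W)              # FC2 output set {(w1..w10, v)}
alpha = fit_rksvm_head(F, v)            # train the RKSVM block
v_hat = predict(F, alpha, F)            # RKSVM replaces FC2 as the output head
```

The point of the sketch is the data flow: features produced by the network's penultimate layer become the training set of the kernel regressor, which then serves as the output head.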
Table 1. The structure of the CNN block and FC block.

Type | Kernel Size | Step Size | Output Dimension

In the loss function of the first training stage, N is the number of samples, yᵢ is the ith real flow velocity, ŷᵢ is the ith predicted flow velocity, θ represents the parameters of the CNN block, and λ is a hyperparameter that balances the two terms. In this paper, λ is set to 10⁻³ after a large number of performance comparisons.
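The loss-function equation itself did not survive extraction. A form consistent with the description above (a data term plus a λ-weighted term over the CNN parameters θ), assuming the second term is an L2 weight penalty, would be:

```latex
\mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl(y_{i}-\hat{y}_{i}\bigr)^{2} \;+\; \lambda\,\lVert \theta \rVert_{2}^{2} .
```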
In the second stage, the feature data extracted by the CNN block and the FC block are used as the inputs of the RKSVM block. The loss function is that of Equation (9), and the optimal parameters of the RKSVM are determined by minimizing it.
The CNN-RKSVM in this paper was trained on a server running Windows 10 (64-bit) with the following hardware configuration: an Intel(R) Core(TM) i7-9800X CPU at 3.80 GHz, 64 GB of memory, and dual NVIDIA GeForce RTX 2080Ti 11 GB graphics cards. During training, the adaptive moment estimation (Adam) algorithm [26] updates the parameters θ. The initialization parameters of the hybrid network are empirically determined during training: the initial learning rate is set to 0.001, the attenuation coefficient to 0.1, the regularization factor to 0.0001, and the learning rate decay to 0.5. The network is trained for 200 epochs with a batch size of 128.
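A single Adam update of the parameters θ [26] can be sketched as follows. This is the standard Adam rule in NumPy; the β₁, β₂, and ε values are the common defaults, which the text above does not specify:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, scaled step."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction during warm-up
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this step with the gradient of the loss drives θ toward a minimizer; the per-parameter scaling by √v̂ is what distinguishes Adam from plain gradient descent.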

The Sample Dataset
The sample dataset consists of ERT measurements and the corresponding real flow velocities, which were obtained from the Tianjin Dredging Co., Ltd., Tianjin, China. The ERT measurements are obtained by the ERT system, and the real flow velocities are obtained by the electromagnetic flowmeter. Moreover, the ERT system in this paper used single-electrode measurement after single-electrode excitation; thus, for the 16-electrode ERT system, 240 measurements can be obtained as the input of the CNN-RKSVM.
The pipeline of the dredging transportation experimental platform has a diameter of 0.8 m and a length of 358 m. The horizontal pipe section is fitted with measuring devices, including an electromagnetic flowmeter, a densitometer, and an ERT system. The flow direction of the solid-liquid two-phase flow is shown in Figure 4. The distance between the electromagnetic flowmeter and the ERT system is 5 m; the difference in flow velocity between these two locations is negligible under the experimental working conditions. In addition, the distance between the two adjacent ERT sensors is 0.3 m.
As shown in Figure 4, the solid phase and liquid phase were mixed into a solid-liquid two-phase flow in a mixing tank, which was then transferred into the transport pipeline by the dredge pump. The solid phase included coarse sand and gravel (with a conductivity of approximately 0 mS/cm); different types of solid-phase objects were employed to evaluate the applicability of the various methods across different flow patterns. The liquid phase consisted of water (with a conductivity of 1.3 mS/cm). The flow velocities and solid phase fractions (SPFs) were adjustable by means of the dredge pump in the pipeline. The inlet solid-liquid two-phase flow velocity was 1-6 m/s, and the SPF ranged between 5% and 40%.
The accuracy and generalization of the CNN-RKSVM are strongly dependent on the quality of the sample dataset. Therefore, simulating actual working conditions is crucial for obtaining ERT measurements under different SPFs and flow velocities.
The solid phase used in the experiments included coarse sand and gravel. For coarse sand, the experiments involved eight SPFs: 5%, 10%, 15%, 20%, 25%, 30%, 35%, and 40%. For gravel, the experiments involved six SPFs: 5%, 10%, 15%, 20%, 25%, and 30%. The flow velocity ranged from 0 m/s to 6 m/s for both types of solid phases under the different SPFs. To enhance the generalization ability of the CNN-RKSVM, the ERT measurements from the two types of solid-phase objects were combined to form the sample dataset. The sample dataset was split into training and testing datasets according to the 10-fold cross-validation method. The data distribution of the datasets and their sample sizes are shown in Table 2.
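The 10-fold grouping described above can be sketched in plain NumPy. The paper does not specify the shuffling procedure or any stratification over SPFs, so a simple random shuffle is assumed here:

```python
import numpy as np

def ten_fold_splits(n_samples, seed=0):
    """Yield (train_idx, test_idx) index pairs for 10-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)        # shuffle once, then partition
    folds = np.array_split(idx, 10)         # 10 (nearly) equal folds
    for k in range(10):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
        yield train_idx, test_idx
```

Each sample appears in exactly one test fold, so the reported metrics average over ten disjoint test sets.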

Evaluation Parameters
The CNN-RKSVM method is quantitatively assessed against existing flow velocity detection methods using three evaluation metrics: the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the runtime. The RMSE and MAPE describe the error between the real and predicted flow velocities; the runtime is the time taken to obtain one frame of predicted flow velocity.
RMSE is defined as RMSE = √( (1/N) Σᵢ (yᵢ − ŷᵢ)² ), where yᵢ is the real flow velocity from the electromagnetic flowmeter, ŷᵢ is the predicted flow velocity from the method under test, and i indexes the samples. The RMSE quantifies the disparity between the real and predicted flow velocities: a smaller RMSE indicates a more accurate prediction, whereas a larger RMSE signifies a greater prediction error and decreased model efficacy.
MAPE is defined as MAPE = (1/N) Σᵢ |(yᵢ − ŷᵢ)/yᵢ|. MAPE indicates the degree of deviation between the predicted flow velocity of the method under test and the real flow velocity; the closer MAPE is to 0, the more accurate the model.
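Both metrics can be computed directly; the following is a NumPy sketch of the standard RMSE and MAPE definitions given above:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error between real and predicted flow velocities.
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return np.sqrt(np.mean(diff ** 2))

def mape(y_true, y_pred):
    # Mean absolute percentage error; y_true entries must be nonzero.
    y_true = np.asarray(y_true, float)
    return np.mean(np.abs((y_true - np.asarray(y_pred, float)) / y_true))
```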

Results and Discussion
Based on the above sample dataset, the flow velocities of the testing dataset obtained by the CNN-RKSVM are compared with those obtained by the CC, RKSVM, and CNN methods.
The solid-liquid two-phase flow in dredging engineering has a critical flow velocity (V₀), and in this experiment, V₀ is nearly 2.3 m/s. When the flow velocity is greater than V₀, the solid phase inside the pipe is in a suspended state and the flow pattern is turbulent; conversely, when the flow velocity is less than V₀, the solid phase settles at the bottom of the pipe and the flow pattern is laminar. Therefore, to compare these two cases, the flow velocity results are divided into two groups, low flow velocity and high flow velocity, as shown in Figure 5. Figure 5a shows that when the flow velocity is smaller than V₀, the flow velocity obtained by the CC method exhibits a trend similar to the real flow velocity, albeit with a larger error than the CNN and CNN-RKSVM methods. In contrast, the RKSVM shows some error relative to the real flow velocity. Compared with the CNN, the flow velocities computed by the CNN-RKSVM are very close to the real flow velocities, because the CNN-RKSVM uses the RKSVM to map the feature data extracted by the CNN into the high-dimensional space in which the flow velocities are obtained.
Figure 5b shows that when the flow velocity exceeds V₀, the CC method fails to accurately capture the flow velocity due to the non-fulfillment of the "freezing" assumption. In particular, when the flow velocity surpasses 4 m/s, the solid phase within the pipeline exhibits a uniform distribution, resulting in a high similarity of the ERT measurements. The RKSVM and CNN methods show some deviation from the real flow velocity, whereas the CNN-RKSVM provides a closer approximation.
To quantitatively evaluate the CNN-RKSVM, we compared it with the CC, RKSVM, and CNN methods. The RMSEs, MAPEs, and runtimes of the four methods, computed against the real flow velocity, are shown in Table 3. Table 3 shows that the RMSEs of the CC method are always greater than 0.8 and its MAPEs are greater than 0.2. The RKSVM method is relatively stable regardless of whether V > V₀ or V < V₀, but it can produce large errors on some testing samples. Owing to the advantage of deep learning, the RMSEs and MAPEs of both the CNN and CNN-RKSVM methods are much smaller than those of the CC and RKSVM methods. Both the CNN and CNN-RKSVM methods yield a consistent flow velocity when the flow velocity is less than V₀; however, when the flow velocity is greater than V₀, the RMSE and MAPE of the CNN-RKSVM method are better than those of the CNN method. Since the RKSVM uses the reproducing kernel as its kernel function, it offers strong generalization ability and high computational efficiency. Therefore, the CNN-RKSVM, which builds on both the RKSVM and the CNN, improves the accuracy, generalization ability, and robustness of the flow velocity calculation.
Moreover, the CNN-RKSVM method uses reproducing kernel functions to avoid explicitly computing inner products in the high-dimensional space, which greatly reduces the computational load and shortens the model's computation time during learning. The last row of Table 3 shows the runtimes of the four methods (CC, RKSVM, CNN, and CNN-RKSVM) on new ERT measurements. The CNN-RKSVM has the longest runtime and the CC the shortest, since the CNN-RKSVM must use more blocks and parameters to compute the flow velocity. Nevertheless, the runtime of the CNN-RKSVM is less than 1 s, which is an acceptable response time in dredging engineering.

Conclusions
Flow velocity detection plays an increasingly important role in dredging engineering. In this paper, a method combining a CNN and an RKSVM was used to calculate the flow velocity. The proposed method was designed in two stages. In the first stage, ERT measurements for different types of solid phases, various SPFs, and distinct flow velocities were obtained from the dredging engineering experimental platform and constituted the sample dataset. The feature data of the sample dataset were then extracted by the CNN block and passed to the FC block, forming a new learning dataset for the second stage. In the second stage, the RKSVM was built from this new learning dataset; once well trained, it replaced the FC2 layer of the first-stage network. Consequently, the CNN-RKSVM was created for the flow velocity calculation. Our experiments showed that the CNN-RKSVM method overcame the problems of low accuracy and poor generalization exhibited by existing flow velocity detection methods. Therefore, the CNN-RKSVM method proposed in this paper provides a new approach to flow velocity detection in dredging engineering.
However, the sample dataset used in this paper was obtained from the Tianjin Dredging Co., Ltd., and parameters such as the viscosity coefficient and conductivity of the solid-liquid two-phase flow were fixed. Therefore, the sample dataset did not cover all working conditions in dredging engineering. In addition, the input of the CNN-RKSVM method was the 1D ERT measurements, which lose some neighborhood information during feature extraction by the CNN block. In the future, ERT-reconstructed images will be used as the input of the CNN-RKSVM method to improve accuracy. Nevertheless, the quality of the ERT-reconstructed images plays a key role, so choosing a suitable image reconstruction algorithm is important.

Figure 2. The structure of a typical CNN for flow velocity prediction.


Figure 3. Structure of the proposed hybrid network.


Table 1. The structure of the CNN block and FC block.

Table 2. Distribution of the sample dataset.

Table 3. Evaluation parameters for the four methods.