Article

A Deep Parallel Diagnostic Method for Transformer Dissolved Gas Analysis

School of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(4), 1329; https://doi.org/10.3390/app10041329
Submission received: 6 January 2020 / Revised: 17 January 2020 / Accepted: 19 January 2020 / Published: 15 February 2020
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

With the development of Industry 4.0, large-capacity power transformers, as pivotal components of the power system, require fault diagnostic methods with higher intelligence, accuracy and anti-interference ability. Considering the powerful capability of deep learning methods to extract non-linear features and their differing sensitivities to those features, this paper proposes a deep parallel diagnostic method for transformer dissolved gas analysis (DGA). In view of the insufficient and imbalanced transformer dataset, adaptive synthetic oversampling (ADASYN) was implemented to augment the fault dataset. The newly constructed dataset was then normalized and input into the LSTM-based diagnostic framework, and was also converted into images as the input of the CNN-based diagnostic framework. At the same time, the problem of still-insufficient data was compensated for by introducing transfer learning. Finally, the diagnostic models were trained and tested respectively, and the Dempster–Shafer (DS) evidence theory was introduced to fuse the diagnostic confidence matrices of the two models to achieve deep parallel diagnosis. The results show that, without complex feature extraction, the diagnostic accuracy of the proposed method reaches 96.9%. Even when the dataset is superimposed with 3% random noise, the accuracy only decreases by 0.62%.

1. Introduction

As the pivotal equipment of the power system, a power transformer’s health directly affects the safety and stability of the entire power grid. Therefore, it is of great significance to study transformer fault diagnostic methods [1]. With the continuous development of computer storage and sensor technology, the volume of online DGA monitoring data from power transformers is growing explosively [2,3], which places higher requirements on the learning ability, feature extraction ability, and adaptability of transformer diagnostic methods.
Dissolved gas analysis (DGA) is an online monitoring technology that analyzes the composition and content of the gases dissolved in transformer oil [4]. By studying the correlation between the dissolved gases and the fault states of the transformer, the health status of the transformer can be effectively diagnosed and latent faults can be eliminated in time. Many studies have been conducted on traditional diagnostic methods based on DGA data. Traditional DGA interpretation methods such as the IEC Ratio, Rogers Ratio, and Doernenburg Ratio are simple, easy to implement, and widely used in engineering practice, yet they still suffer from problems such as absolute diagnostic boundaries and missing codes [5,6]. In addition, although traditional intelligent methods have achieved certain results, their shortcomings in learning ability, processing efficiency, and feature extraction still limit their popularization and application. For example, the learning ability of fuzzy methods is not satisfactory [7,8]; neural networks (NN) tend to fall into local optima [9,10]; the K-nearest neighbor (KNN) method is inefficient in high-dimensional spaces [11]; and the support vector machine (SVM) is essentially a binary classifier, which makes multi-class problems more troublesome to handle [12,13]. Some studies combine traditional ratio methods with intelligent methods and have achieved certain results [14,15,16]; however, the features extracted by the ratio methods remain limited, and the anti-interference ability of the combined models needs further testing. In recent years, owing to the strong ability of deep learning to extract complex non-linear features, several papers have introduced it into the field of transformer fault diagnosis, and the fault diagnostic accuracy of deep learning methods has improved significantly compared with that of traditional machine learning models. The authors in [17] proposed a DBN-based fault diagnostic method that used the uncoded ratios of DGA data as the model input; compared with traditional methods, the accuracy was significantly improved. The authors in [18] introduced a method for identifying and locating winding faults based on CNN and transformer impulse tests, and the results showed that the method was effective. The authors in [19] studied the internal fault diagnosis of transformers based on CNN to enhance differential protection performance, and verified the effectiveness of the method through different reliability indicators. However, these papers only considered a single deep learning method and did not compare the pros and cons of different deep learning methods, nor did they find a way to integrate different deep learning methods to fully mine the transformer fault characteristics from different perspectives and further improve the diagnostic accuracy.
Convolutional neural networks (CNN) and recurrent neural networks (RNN) are two of the most important frameworks in the deep learning field [20,21,22]. CNN has matured in applications such as visual recognition, image processing, and fault diagnosis [23,24,25]. As an improved RNN model, long short-term memory (LSTM) was introduced to make up for the long-term memory loss and the gradient vanishing or explosion that occur in the feedback process of the RNN model [26,27]. Nowadays, it is widely used in fields such as speech recognition, video classification and stock prediction. At present, there are still few studies on the fusion of deep learning models. Some papers have tried to integrate multiple deep learning frameworks into the same system to improve the final recognition accuracy [28], but these frameworks usually handle data of the same type; for example, [29,30] both concern parallel fusion between CNN frameworks. Most research combining CNN and LSTM for diagnosis connects them in series; for example, [31] used CNN to extract image features, obtained feature sequences, and used LSTM to classify them. However, it is rare to comprehensively consider the complementary relationship between CNN and LSTM in extracting different features, especially in transformer DGA fault diagnosis modeling. DGA-based transformer fault diagnostic data are often single-time measurements [32], so the fault tolerance of the data is poor. Moreover, there may be noise in real application environments, which makes the anti-interference ability and robustness of diagnostic methods worth studying [33]. Data visualization has strong anti-interference performance; therefore, converting numerical data into images and processing them with CNN can improve the fault tolerance of the data. The LSTM network can process not only point data but also sequence data; it can be used in transformer fault diagnosis to support a diversity of input data types, thereby improving the generalization ability of the model.
In addition, the difficulty of acquiring DGA fault data, the uneven distribution of transformer fault types, and the continuous improvement of equipment manufacturing technology and fault monitoring methods may lead to imbalanced and insufficient fault samples [34]. Using a model with many parameters on such an imbalanced, small training dataset increases the risk of overfitting and weakens the model’s ability to generalize beyond the sample dataset. Therefore, the problems of uneven and insufficient samples must be considered in this research. At present, the most common solution to imbalanced data is oversampling, which has become the simplest and most reliable way to address the imbalance and deficiency of a specific dataset [35]. The adaptive synthetic oversampling (ADASYN) algorithm is an improvement on the synthetic minority over-sampling technique (SMOTE). According to the learning difficulty of the minority samples, the algorithm assigns adaptive weights to different minority samples instead of a homogeneous weight; that is, the harder a specific minority sample is to learn, the more synthetic data are generated around it. On the other hand, for the problem of insufficient data, transfer learning can be introduced into the CNN diagnostic framework. Transfer learning maps knowledge from a source domain to a target domain [36,37,38]; with it, a small amount of data can still achieve a high diagnostic accuracy [39,40].
Generally speaking, there still exist the following problems in current transformer DGA fault diagnostic methods:
  • The diagnostic accuracy, adaptability and anti-interference ability of traditional methods still need to be improved;
  • The DGA fault dataset is in fact imbalanced and insufficient, which may have a bad effect on fault diagnosis modeling;
  • The sensitivity differences of deep learning models to fault features, as well as their fusion, have not been taken into consideration in recent research.
In order to further improve the current transformer DGA fault diagnostic effect, this paper proposes a deep parallel diagnostic method for transformers. The main work and innovations of this article are as follows:
  • A deep parallel diagnostic method for power equipment is proposed. It makes full use of the capabilities of different deep learning methods to extract complex nonlinear features. Without a complex feature extraction and selection process, a higher diagnostic accuracy can be achieved, which means the model has high generalizability.
  • This article uses a data visualization method to convert DGA numerical data into images for the CNN-based diagnostic framework, which extracts different features compared with the LSTM-based framework and significantly enhances the anti-interference ability of the proposed method;
  • In view of the imbalance and inadequacy of the DGA fault dataset, this paper uses ADASYN and transfer learning to solve the problems of imbalanced and insufficient data in transformer fault diagnosis.

2. Materials and Methods

CNN and LSTM are two important frameworks in the deep learning field. CNN relies on convolution operations to process two-dimensional images directly, which allows it to learn the corresponding features effectively from a large number of samples and avoid complex feature extraction. LSTM inherits the characteristics of most RNN models and solves the gradient vanishing problem. At present, feed-forward networks represented by CNNs still hold a performance advantage in the diagnosis field, while LSTMs see fewer applications there. In the long run, however, the potential of LSTMs for more complex tasks that CNNs cannot match will gradually be accentuated, because LSTM more realistically characterizes or simulates the cognitive processes of human behavior, logical development, and neural tissue. Therefore, research on the combination of CNN and LSTM is meaningful and worth noting.
The deep parallel diagnostic framework proposed in this paper is shown in Figure 1. The materials used in this paper and the implementation of the ADASYN data augmentation technique are discussed in Section 2.1. The CNN-based diagnostic model is introduced in Section 2.2.1, and the LSTM-based diagnostic model in Section 2.2.2. Finally, the deep parallel fusion (DPF) method is introduced in Section 2.2.3. This paper fully considers the sensitivity differences of deep learning methods to different features, constructs different deep learning methods to model the same diagnostic problem, and introduces the Dempster–Shafer evidence theory (DS evidence theory) to perform DPF and obtain a higher diagnostic accuracy. The whole framework is applied to the field of DGA-based transformer fault diagnosis.

2.1. Dataset Acquisition and Processing

2.1.1. Distribution and Preprocessing of Dataset

In this paper, 528 transformer DGA fault samples were collected from State Grid companies as well as from relevant papers. Each sample contains the contents of five major dissolved gases in transformer oil: H2, CH4, C2H6, C2H4, and C2H2. The data distribution is shown in Table 1. As can be seen from Table 1, the number of samples of serious faults such as high temperature overheating, low energy discharge, and high energy discharge is much larger than that of minor faults such as medium temperature overheating, low temperature overheating, and partial discharge. This seems to contradict the common assumption that serious faults should occur less frequently than minor ones. The reason is that the current sensitivity of sensors and the accuracy of diagnostic methods still need to be strengthened, so latent faults are difficult to diagnose effectively and often remain undetected until they become serious faults. From this perspective, it is necessary to study more accurate transformer diagnostic methods.
Each DGA sample was normalized according to Equation (1) into the fault characteristic gas index vector shown in Equation (2):

$$p_{i,j} = \frac{X_{i,j} - E(X)}{\sqrt{D(X)}} \tag{1}$$

$$p_{i,j} = [a_1^{i,j},\ a_2^{i,j},\ \ldots,\ a_n^{i,j}] \tag{2}$$

where $X_{i,j}$ represents the fault samples; $E(X)$ is the mean; $D(X)$ is the variance; $p_{i,j}$ represents the normalized value of the j-th fault sample of the i-th fault type; and $a_n^{i,j}$ represents the n-th normalized fault characteristic gas index of $p_{i,j}$. In this paper, n = 5.
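As a concrete illustration of Equations (1) and (2), the following Python sketch normalizes a DGA sample matrix column-wise; the array contents and names are illustrative, and it assumes the standard z-score form in which the denominator is the square root of the variance D(X).

```python
# Minimal sketch of Equations (1)-(2): column-wise z-score normalization of the
# five fault characteristic gases (H2, CH4, C2H6, C2H4, C2H2). The sample
# values below are illustrative only.
import numpy as np

def normalize_dga(X):
    mean = X.mean(axis=0)             # E(X) per gas
    std = np.sqrt(X.var(axis=0))      # square root of the variance D(X)
    return (X - mean) / std           # each row becomes p_{i,j} = [a_1, ..., a_5]

X = np.array([[120.0, 35.0, 10.0, 50.0, 2.0],
              [ 80.0, 20.0,  8.0, 30.0, 0.5],
              [200.0, 60.0, 15.0, 90.0, 5.0]])
P = normalize_dga(X)
print(P.shape)  # (3, 5)
```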

2.1.2. ADASYN’s Principle and Implementation

The occurrence frequency of different fault types in the given dataset is significantly different. If the imbalanced dataset is used directly to train a deep learning framework, the results will not be ideal. In view of the adverse effects of the imbalanced dataset, the task of balancing the original dataset should be given priority.
The basic idea of the SMOTE algorithm is to artificially synthesize new minority-class samples from the given dataset in order to balance and expand it [41]. One of its obvious limitations is that the minority-class samples are treated homogeneously. In practice, the importance and impact of different fault samples on the further learning of the diagnostic models are not the same. Moreover, if the fault samples in the original dataset are mixed with noise, assigning the same oversampling weight to all minority-class fault samples may be counterproductive. In addition, many studies have shown that, during training, fault samples closer to the decision boundary contribute more to the establishment of classification boundaries than other fault samples. To address this limitation for the imbalanced transformer DGA dataset, we use an adaptive form of SMOTE, the ADASYN algorithm, to enhance the imbalanced dataset.
The steps of the ADASYN algorithm are as follows:
First, define the data spaces $\Gamma_i$ $(i = 1, 2, \ldots, m-1)$ and $\Gamma_i|_{i=m}$, which represent the spaces for the minority-class fault samples and for the majority-class fault samples, respectively. In this paper, $m = 6$: $\Gamma_1$ is the space for samples with a high temperature overheating fault; $\Gamma_2$ for medium temperature overheating; $\Gamma_3$ for low temperature overheating; $\Gamma_4$ for partial discharge; $\Gamma_5$ for high energy discharge; and $\Gamma_6$, which has the highest number of samples, for low energy discharge. Each space has five dimensions because there are five different fault characteristic indexes; however, for convenience of illustration, the space is shown as a two- or three-dimensional diagram in this paper.
Next, suppose the number of neighbors of the j-th fault sample $p_{i,j}$ in the i-th minority-class data space $\Gamma_i$ is z. Calculate its local reachability density with respect to the majority fault sample cluster and to the minority fault sample cluster, denoted $D_{i,j}^{maj}$ and $D_{i,j}^{min}$ $(j = 1, 2, \ldots, n_i)$, respectively. The local reachability density is defined by Equation (3):
$$D_{i,j} = \left[ \frac{\sum_{q \in Z} \mathrm{Distance}(q, p_{i,j})}{z} \right]^{-1} \tag{3}$$
where $Z$ is the set of the z nearest neighbors of the fault sample $p_{i,j}$, and $q$ ranges over the neighbors of $p_{i,j}$ in $Z$.
The meaning of the local reachability density is illustrated in Figure 2. The pink balls represent fault samples of the i-th minority class and the blue prisms represent fault samples of the majority class. When z equals 5, $D_{i,j}^{min}$ of the selected minority fault sample $p_{i,j}$, which is positioned at the origin of the coordinates, is large while its $D_{i,j}^{maj}$ is small, indicating that there are more fault samples of the same minority class in the neighborhood; thus, learning this selected fault sample is not difficult. The learning difficulty can be quantified by the ratio of $D_{i,j}^{maj}$ to $D_{i,j}^{min}$, as shown in Equation (4). In this way, the learning difficulty of each minority fault sample can be quantified, so that each fault sample is assigned a tailor-made oversampling weight according to its learning difficulty.
$$d_{i,j} = \frac{D_{i,j}^{maj}}{D_{i,j}^{min}} \tag{4}$$
Then, the oversampling weights of the minority-class fault samples are normalized to obtain the difficulty coefficient $\hat{d}_{i,j}$, as shown in Equation (5):

$$\hat{d}_{i,j} = \frac{d_{i,j}}{\sum_{j=1}^{n_i} d_{i,j}} \tag{5}$$

The number of synthetic fault samples generated for each minority-class sample is calculated from the difficulty coefficient, as shown in Equation (6):

$$N_{i,j} = \hat{d}_{i,j} \, N_o \tag{6}$$

where $N_o$ is the expected number of newly synthesized samples.
Finally, new samples were synthesized by the procedures introduced in [41].
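For readers who want to reproduce the augmentation step, the sketch below uses the off-the-shelf ADASYN implementation in the imbalanced-learn package. Note that this implementation weights minority samples by the standard ADASYN majority-neighbor ratio rather than by the reachability-density ratio of Equations (3)-(6), while the synthesis step itself follows [41]; the variable names P and y are assumptions carried over from the normalization sketch above.

```python
# Minimal sketch of the data augmentation step, assuming the normalized sample
# matrix P (shape (528, 5)) and the integer fault labels y (1..6, cf. Table 1)
# are already in memory.
from imblearn.over_sampling import ADASYN

ada = ADASYN(sampling_strategy='not majority',  # oversample every minority class
             n_neighbors=5,                     # z = 5 neighbors, as in Figure 2
             random_state=0)
P_balanced, y_balanced = ada.fit_resample(P, y)
print(P_balanced.shape)                         # roughly balanced, cf. Table 3
```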

2.2. Deep Parallel Diagnosis

2.2.1. CNN-Based Transformer DGA Diagnostic Method

As shown in Table 2, transforming measured data such as waveforms into pictures through data visualization techniques and inputting them to a CNN for diagnosis is a commonly used approach in current diagnostic research [42]. DGA fault data contain rich information about transformer fault states. The reason for the relatively low accuracy of traditional methods is that it is difficult to find accurate mathematical models to characterize the complex nonlinear relationship between DGA data and transformer fault states. In order to fully mine the transformer state information contained in the DGA data, the normalized numerical DGA data are converted into more intuitive images that are convenient for the CNN to process.
The continuous improvement of equipment manufacturing technologies and fault monitoring methods may lead to insufficient DGA fault samples. Transfer learning can map knowledge from a source domain to a target domain. When transfer learning is used, the CNN parameters pre-trained on a public dataset serve as initial values and part of the network is frozen during training. Transfer learning enables the network to capture the fine characteristics of the equipment to be diagnosed, making up for the lack of fault samples of power equipment. With transfer learning, a small amount of data can still achieve a high training accuracy [43].
At present, there are dozens of CNN image recognition models suitable for large image databases. Among them, MobileNet-V2 [44] (its basic module is shown in Figure 3) is a lightweight neural network for environments with limited computing resources, proposed by the Google team in 2018. MobileNet-V2 builds on the basic concepts of MobileNet-V1 [45], which innovatively replaced traditional convolutions with depth-wise separable convolutions. Although this reduces the number of parameters and operations, it also causes a loss of features and a decrease in accuracy. MobileNet-V2 addresses these shortcomings of MobileNet-V1 with two innovative design ideas: inverted residuals and linear bottlenecks.
Inverted residuals: the depth-wise convolution of MobileNet-V2 is preceded by a 1 × 1 “expansion” layer. Its purpose is to expand the number of channels before the data enter the depth-wise convolution, enriching the features and improving accuracy.
Linear bottlenecks: MobileNet-V2 replaces the ReLU activation function with a linear activation function after layers with fewer channels. Because of the “expansion” layer, the large number of features output by the convolution layer must be “compressed” to reduce the amount of calculation. As the number of channels decreases, keeping ReLU as the activation function would destroy features: ReLU outputs zero for all negative inputs, and since the original features have already been “compressed”, passing them through ReLU would lose many of them.
Because accidents in large power transformers are very costly, the efficiency requirements for the fault diagnosis algorithm are demanding. MobileNet-V2 can obtain a high accuracy without excessive computation time or computing resources. Therefore, the MobileNet-V2 model was introduced in this diagnostic scenario as the pre-trained model for transfer learning.
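The paper does not specify the software toolchain, so the following is only a sketch of the transfer-learning setup in PyTorch/torchvision: the ImageNet-pretrained MobileNet-V2 is loaded, its early feature layers are frozen, and the final classifier is replaced by a fully connected layer with six outputs. The learning rate follows Table 4, while mapping the "Frozen layers 10" entry onto torchvision's feature blocks is an assumption.

```python
# Minimal transfer-learning sketch (PyTorch); the training loop and data
# pipeline are omitted, and several settings are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze the early feature-extraction blocks so only the later layers adapt.
for param in model.features[:10].parameters():
    param.requires_grad = False

# Replace the final classifier with a fully connected layer for 6 fault labels.
model.classifier[1] = nn.Linear(model.last_channel, 6)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=3e-4)
criterion = nn.CrossEntropyLoss()
```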

2.2.2. LSTM-Based Transformer DGA Diagnostic Method

Due to length limitations, the principle of LSTM, which can be found in [46], is not described in detail here. Instead, we focus on the LSTM-based deep learning framework built in this paper. It is a five-layer network, as shown in Figure 4, which includes the input layer, the LSTM layer, the fully connected layer, the softmax layer, and the classification output layer. The state activation function of the LSTM network node is ‘tanh’ and the gate activation function is ‘sigmoid’.
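A minimal PyTorch sketch of this five-layer structure is given below; the 150 hidden units follow Table 5, while treating each DGA reading as a length-1 sequence is an assumption about how the point data are fed to the LSTM layer.

```python
# Illustrative sketch of the Figure 4 structure: input -> LSTM -> fully
# connected -> softmax -> classification output.
import torch
import torch.nn as nn

class LSTMDiagnoser(nn.Module):
    def __init__(self, n_features=5, n_hidden=150, n_classes=6):
        super().__init__()
        # nn.LSTM uses tanh state activation and sigmoid gate activation.
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=n_hidden,
                            batch_first=True)
        self.fc = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        # x: (batch, sequence_length, n_features); a single DGA reading is
        # fed as a length-1 sequence.
        _, (h_n, _) = self.lstm(x)
        logits = self.fc(h_n[-1])
        return torch.softmax(logits, dim=1)   # confidence vector used for DPF

model = LSTMDiagnoser()
dummy = torch.randn(4, 1, 5)                  # 4 samples, 5 gas indexes
print(model(dummy).shape)                     # torch.Size([4, 6])
```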

2.2.3. A Deep Parallel Fusion Diagnostic Method

There is usually a softmax layer in CNN-based and LSTM-based deep learning frameworks when they are used for diagnosis. After training is complete and the testing dataset is input into the diagnostic network, the softmax layer outputs a confidence matrix of the fault categories for the testing dataset. The softmax function is shown in Equation (7):
$$\mathrm{Softmax}(x) = \begin{bmatrix} \xi(H=1, x \mid \theta) \\ \xi(H=2, x \mid \theta) \\ \vdots \\ \xi(H=\eta, x \mid \theta) \\ \vdots \\ \xi(H=N, x \mid \theta) \end{bmatrix} = \frac{1}{\sum_{i=1}^{N} e^{\theta_i^T x}} \begin{bmatrix} e^{\theta_1^T x} \\ e^{\theta_2^T x} \\ \vdots \\ e^{\theta_\eta^T x} \\ \vdots \\ e^{\theta_N^T x} \end{bmatrix} \tag{7}$$
where x represents the input of the softmax layer, which is exactly the output of the previous layer; $\theta$ is the weight parameter matrix, with elements $\theta_i$ $(i = 1, 2, \ldots, N)$; H represents the fault label; N is the number of fault labels; and $\xi(H = \eta, x \mid \theta)$ is the probability value of $H = \eta$.
Using the DS evidence theory algorithm to fuse the fault confidence matrices output by the softmax layers of the CNN-based and LSTM-based diagnostic frameworks makes full use of the diagnostic advantages of the two deep learning methods and ultimately improves the diagnostic accuracy.
The calculation process of DPF is as follows:
(1) Obtain the confidence matrices output by the softmax layers of the deep learning diagnostic frameworks, that is, the diagnostic support vectors of each framework for the different fault labels. For each set of fault data, the diagnostic support vector of the k-th framework for the fault labels can be denoted as $\xi_{k,\cdot} = [\xi_{k,1},\ \xi_{k,2},\ \ldots,\ \xi_{k,\gamma},\ \ldots,\ \xi_{k,l}]$, where k indexes the different methods (in this paper, k = 1, 2 for CNN and LSTM) and l is the total number of fault labels, i.e., γ = 1, 2, …, l with l = 6 in this paper;
(2) Use the different methods k as rows and the diagnostic supports $\xi_{k,\gamma}$ as columns to form a support matrix $\{\xi_{k,\gamma},\ k = 1, 2,\ \gamma = 1, \ldots, l\}$. Each element of the support matrix indicates that the k-th diagnostic method’s support for the fault label $H_\gamma$ is $\xi_{k,\gamma}$;
(3) Treat each column of the support matrix as belonging to the identification framework $\Theta$ of DS evidence theory, so $\Theta = \{H\} = \{H_\gamma \mid \gamma = 1, 2, \ldots, l\} = \{H_1, H_2, \ldots, H_l\}$;
(4) Integrate the diagnostic support information of different diagnostic methods into the same recognition framework Θ and calculate the basic probability assignment (BPA):
$$m_{k,\gamma} = m_k(H_\gamma) = \omega_k \xi_{k,\gamma} \tag{8}$$

$$\tilde{m}_{k,H} = \tilde{m}_k(H) = \omega_k \left(1 - \sum_{\gamma=1}^{l} \xi_{k,\gamma}\right) \tag{9}$$

$$\bar{m}_{k,H} = \bar{m}_k(H) = 1 - \omega_k \tag{10}$$

$$m_{k,H} = m_k(H) = \tilde{m}_{k,H} + \bar{m}_{k,H} = 1 - \sum_{\gamma=1}^{l} m_{k,\gamma} = 1 - \omega_k \sum_{\gamma=1}^{l} \xi_{k,\gamma} \tag{11}$$
where $\omega_k$ is the weight value of each diagnostic method k (k = 1, 2); $m_{k,\gamma}$ represents the basic probability assignment of the k-th method to the evaluation target, i.e., the transformer fault state $H_\gamma$; $m_{k,H}$ represents the remaining probability that is assigned to the entire fault set H rather than to a specific fault state; and $\tilde{m}_{k,H}$, $\bar{m}_{k,H}$ are two intermediate variables for calculating the basic probability assignment.
(5) Composite probability assignment (CPA):
$$K = \left[ \sum_{\gamma=1}^{l} \prod_{k=1}^{2} \left( m_{k,\gamma} + \bar{m}_{k,H} + \tilde{m}_{k,H} \right) - (l-1) \prod_{k=1}^{2} \left( \bar{m}_{k,H} + \tilde{m}_{k,H} \right) \right]^{-1} \tag{12}$$

$$\{H\}: \quad \tilde{m}_H = K \left[ \prod_{k=1}^{2} \left( \bar{m}_{k,H} + \tilde{m}_{k,H} \right) - \prod_{k=1}^{2} \bar{m}_{k,H} \right] \tag{13}$$

$$\{H\}: \quad \bar{m}_H = K \prod_{k=1}^{2} \bar{m}_{k,H} \tag{14}$$

$$H_\gamma: \quad m_\gamma = K \left[ \prod_{k=1}^{2} \left( m_{k,\gamma} + \bar{m}_{k,H} + \tilde{m}_{k,H} \right) - \prod_{k=1}^{2} \left( \bar{m}_{k,H} + \tilde{m}_{k,H} \right) \right] \tag{15}$$
where $m_\gamma$ represents the composite probability assignment for the fault label $H_\gamma$; $K$, $\tilde{m}_H$, and $\bar{m}_H$ are three intermediate variables for calculating the composite probability assignment.
Then, they will be normalized to obtain comprehensive diagnostic results:
$$\xi_\gamma = \frac{m_\gamma}{1 - \bar{m}_H}, \quad \gamma = 1, \ldots, l \tag{16}$$

$$\tilde{\xi}_\gamma = \frac{\tilde{m}_H}{1 - \bar{m}_H}, \quad \gamma = 1, \ldots, l \tag{17}$$
where $\xi_\gamma$, $\gamma = 1, \ldots, l$, is the fused and normalized confidence for the different fault labels, and $\tilde{\xi}_\gamma$ is the normalized value of the uncertainty distribution.
Finally, the fault label corresponding to the maximum confidence is output as the final diagnostic result.
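To make the fusion rule concrete, the sketch below implements steps (4) and (5) for a single test sample in Python; the confidence vectors and the equal method weights ω_k = 0.5 are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the DPF calculation in Equations (8)-(17) for one sample.
import numpy as np

def dpf_fuse(xi, w):
    """Fuse two softmax confidence vectors with the DS/evidential-reasoning rule."""
    k_methods, l = xi.shape
    m = w[:, None] * xi                              # Eq. (8): BPA per fault label
    m_tilde = w * (1.0 - xi.sum(axis=1))             # Eq. (9)
    m_bar = 1.0 - w                                  # Eq. (10)
    m_H = m_tilde + m_bar                            # Eq. (11)

    prod_label = np.prod(m + m_H[:, None], axis=0)   # product over k, per label
    prod_H = np.prod(m_H)
    prod_bar = np.prod(m_bar)

    K = 1.0 / (prod_label.sum() - (l - 1) * prod_H)  # Eq. (12)
    m_tilde_H = K * (prod_H - prod_bar)              # Eq. (13)
    m_bar_H = K * prod_bar                           # Eq. (14)
    m_gamma = K * (prod_label - prod_H)              # Eq. (15)

    xi_fused = m_gamma / (1.0 - m_bar_H)             # Eq. (16)
    xi_uncertain = m_tilde_H / (1.0 - m_bar_H)       # Eq. (17)
    return xi_fused, xi_uncertain

xi = np.array([[0.05, 0.02, 0.03, 0.05, 0.80, 0.05],   # CNN confidences (illustrative)
               [0.10, 0.05, 0.05, 0.05, 0.70, 0.05]])  # LSTM confidences (illustrative)
w = np.array([0.5, 0.5])                               # assumed equal method weights
fused, uncertainty = dpf_fuse(xi, w)
print(int(fused.argmax()) + 1)                         # label with maximum confidence
```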

3. Results and Discussion

3.1. Results of the Data Augmentation

In order to intuitively observe the effectiveness of the ADASYN method, the distribution of fault samples before and after the data augmentation was visualized by the t-SNE method, as shown in Figure 5. t-SNE visualizes high-dimensional data by assigning each sample a location in a 2-D or 3-D map [47].
As can be seen from Figure 5, the ADASYN technique does not simply copy the fault samples but generates entirely new ones. It can also be seen that these fault samples are not easy to separate. The new dataset distribution is shown in Table 3.
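A minimal sketch of the t-SNE projection used for Figure 5 is shown below, assuming the balanced matrix P_balanced and labels y_balanced from the augmentation sketch in Section 2.1.2; the scikit-learn defaults stand in for any settings not reported in the paper.

```python
# Project the augmented DGA dataset to 2-D with t-SNE and plot it (cf. Figure 5).
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = TSNE(n_components=2, random_state=0).fit_transform(P_balanced)
plt.scatter(emb[:, 0], emb[:, 1], c=y_balanced, cmap='tab10', s=8)
plt.title('t-SNE of the augmented DGA dataset')
plt.show()
```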
Since the data augmentation was implemented and achieved satisfying results, the new dataset is ready for the study of the deep parallel diagnostic framework.

3.2. Results of Deep Parallel Diagnosis

3.2.1. Data Visualization and Deep Transfer Learning

After the data augmentation and normalization process, the fault characteristic indexes are turned into images by data visualization to provide effective inputs for the CNN framework. This paper uses two methods for data visualization. One uses changes in height to represent the differences in values, yielding dataset A, as shown in Figure 6a; the other uses changes in color, yielding dataset B, as shown in Figure 6b. Each dataset is divided into a training dataset and a testing dataset at a ratio of 4:1. Next, the MobileNet-V2 network is imported and its last layer is replaced with a fully connected layer (FL) that has learnable weights and an output size equal to the number of fault labels. The hyperparameters are set as shown in Table 4, which also lists the computer configuration. Then, datasets A and B are respectively input into the modified MobileNet-V2 network for transfer training and testing.
After training, the diagnostic accuracy based on dataset A reaches 92.1%, while that based on dataset B is only 80.3%. Considering the mechanism of CNN, it mainly discriminates image texture and its ability to distinguish colors is relatively poor. Therefore, it is more appropriate to represent the data differences by height differences in the transformer DGA fault diagnostic scenario.
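A possible realization of the height-based visualization (dataset A) is sketched below: each normalized five-element sample is rendered as a bar chart and saved as an image for the CNN. The figure size, file naming, and bar styling are assumptions, since the paper does not give these details.

```python
# Render one normalized DGA sample as a bar-chart image (height encodes value).
import os
import matplotlib.pyplot as plt

def sample_to_image(p, path):
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)   # roughly 224 x 224 px
    ax.bar(range(len(p)), p, color='black')
    ax.axis('off')                                           # keep only the bars
    fig.savefig(path, bbox_inches='tight', pad_inches=0)
    plt.close(fig)

os.makedirs('dataset_A', exist_ok=True)
sample_to_image(P_balanced[0], 'dataset_A/sample_0000.png')
```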

3.2.2. Construction and Training of the LSTM-Based Network

After the data augmentation and normalization process, a five-layer network is constructed as shown in Figure 4, including the input layer, LSTM layer, fully connected layer, softmax layer, and classification output layer. The state activation function of the LSTM node is ‘tanh’ and the gate activation function is ‘sigmoid’. The hyperparameters of the network and the computer configuration are shown in Table 5. The numerical dataset is then input into the LSTM-based network for training and testing. After training, the diagnostic accuracy is 93.6%. The LSTM-based network thus performs better than the CNN-based network in this transformer DGA diagnosis; however, this is not the case in every diagnostic scenario. Still, it is worth finding a way to combine the advantages of these two widely used deep networks.

3.2.3. Implementation of the DPF

After the CNN-based and LSTM-based deep networks had been trained, the testing dataset was input into the two networks for testing. The output confidence matrices of the softmax layers of the two frameworks were then extracted to implement the DPF according to the procedure described in Section 2.2.3. Because the amount of testing data is too large to illustrate in a single graph, Figure 7 shows only a schematic diagram of the fusion. The output of the softmax layer is numerical; to visualize it intuitively, this paper uses color to distinguish the values. As shown in Figure 7, each column in the output matrix of the softmax layer represents one set of testing data and its confidence levels for the six fault labels. The DPF diagnostic result and the diagnostic results of the single deep learning methods are compared in Table 6.
As can be seen from Table 6, the accuracy of the deep parallel diagnostic method is further improved, which means that it can effectively extract the complex transformer fault information contained in the transformer DGA data.

3.3. Comparison with State-of-the-Art Methods

We used eight types of models, including SVM, KNN, gradient boosting decision tree (GBDT), NN, the fuzzy c-means algorithm (FCM), CNN, LSTM and the deep parallel diagnosis, to diagnose the same dataset before and after the data augmentation in the same computer environment. In the SVM model, the kernel function is Gaussian, the kernel scale is 0.56, and the box constraint level is 1. In the KNN model, the number of neighbors is 1, the distance metric is Euclidean, and the distance weight is equal. In the GBDT model, the maximum number of splits is 20, the number of learners is 30, and the learning rate is 0.1. The NN model contains an input layer of five neurons, a hidden layer of 10 neurons, and an output layer of six neurons; the training epoch is 1000 and the learning rate is 0.01. In the FCM model, the exponent of the fuzzy partition matrix is 2.0, the maximum number of iterations is 1000, and the minimum improvement in the objective function between two consecutive iterations is $1 \times 10^{-5}$.
The comparison results are shown in Table 7. Because the results of some methods vary between runs, the diagnostic accuracy reported in this article is the average of five independent experiments. The diagnostic accuracy of all eight diagnostic methods increased after the data augmentation, which shows that the imbalance of the original dataset limits the effectiveness of the fault diagnostic methods to a certain extent.
Moreover, the deep parallel diagnosis performs better than the traditional intelligent algorithms and single deep learning method in the transformer diagnostic scenario without conducting a complex feature extraction and selection process.

3.4. The Anti-Interference Ability of the Deep Parallel Diagnostic Method

In order to test the anti-interference ability of the models, 3% random noise was superimposed on the augmented dataset. The processed dataset was then used to train and test the same models. The results are shown in Table 8. As can be seen from Table 8, the anti-interference performance of the traditional machine learning methods is worse than that of the deep learning methods; in this application scenario, only KNN can compete with the deep learning methods. Between the two deep learning methods, CNN has the better anti-interference performance. In addition, the deep parallel diagnosis inherits the strong anti-interference ability of CNN and retains a high diagnostic accuracy.
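The paper does not state how the 3% random noise is generated, so the sketch below simply applies uniform multiplicative noise of up to plus or minus 3% to each gas index as one plausible interpretation.

```python
# Superimpose up to +/-3% uniform multiplicative noise on the augmented dataset.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.uniform(-0.03, 0.03, size=P_balanced.shape)
P_noisy = P_balanced * (1.0 + noise)
```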

4. Conclusions

In this paper, a deep parallel diagnostic method was proposed for the field of transformer DGA fault diagnosis. ADASYN and transfer learning were introduced to deal with the problems of data imbalance and insufficiency. The conclusions are as follows:
  • In view of the fact that the actual transformer DGA fault samples are imbalanced and inadequate, which undermines the effectiveness of fault diagnostic methods, ADASYN and transfer learning were used to effectively improve the diagnostic ability, especially the ability to troubleshoot minor latent faults. The accuracy of the deep parallel diagnostic method improved by 3.64% after the data augmentation.
  • The complex non-linear mapping relationship between dissolved gas in transformer oil and transformer fault states was modeled with the help of two important deep learning methods, LSTM and CNN. The diagnostic accuracy of the LSTM network was 93.6%, while that of the CNN network was 92.1%.
  • DPF fully considers the sensitivity differences of deep learning methods to different fault features and largely retains the independence of the different deep learning diagnostic methods. As a result, the final diagnostic accuracy reached 96.9%.
  • The anti-interference ability of the proposed method is better than that of most of the methods compared in this paper. When the dataset was superimposed with 3% random noise, the diagnostic accuracy of the proposed method only decreased by 0.62%, which shows that it has strong adaptability and generalization ability.

Author Contributions

Conceptualization, X.W. and J.D.; methodology, J.D. and X.W.; software, X.W. and J.D.; validation, J.D. and X.W.; formal analysis, Y.H.; investigation, X.W. and Y.H.; resources, Y.H.; data curation, Y.H. and X.W.; writing—original draft preparation, X.W.; writing—review and editing, X.W., J.D. and Y.H.; visualization, X.W. and J.D.; supervision, Y.H.; project administration, Y.H. and J.D.; funding acquisition, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant No. 51977153, 51977161, 51577046, the State Key Program of National Natural Science Foundation of China, grant No. 51637004, the National Key Research and Development Plan of China “Important Scientific Instruments and Equipment Development”, grant No. 2016YFF0102200, Equipment Research Project in Advance of China, grant No. 41402040301.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bustamante, S.; Manana, M.; Arroyo, A.; Castro, P.; Laso, A.; Martinez, R. Dissolved Gas Analysis Equipment for Online Monitoring of Transformer Oil: A Review. Sensors 2019, 19, 4057. [Google Scholar] [CrossRef] [Green Version]
  2. Wang, Y.; Zhang, L.G. A Combined Fault Diagnosis Method for Power Transformer in Big Data Environment. Math. Probl. Eng. 2017, 2017, 9670290. [Google Scholar] [CrossRef]
  3. Gao, X.; Lv, S.; Huang, R.; Zhuang, Y.; Duan, X. Big data evaluation method of transformer based on association rules and fuzzy variable weight model. In Proceedings of the 2018 13th IEEE Conference on Industrial Electronics and Applications (ICIEA), Wuhan, China, 31 May–2 June 2018; pp. 2215–2220. [Google Scholar]
  4. Abu Bakar, N.; Abu-Siada, A.; Islam, S. A Review of Dissolved Gas Analysis Measurement and Interpretation Techniques. IEEE Electr. Insul. Mag. 2014, 30, 39–49. [Google Scholar] [CrossRef]
  5. Gouda, O.E.; El-Hoshy, S.H.; Hassan, H. Proposed three ratios technique for the interpretation of mineral oil transformers based dissolved gas analysis. Transm. Distrib. 2018, 12, 2650–2661. [Google Scholar] [CrossRef]
  6. De Faria, H.; Costa, J.G.S.; Olivas, J.L.M. A review of monitoring methods for predictive maintenance of electric power transformers based on dissolved gas analysis. Renew. Sustain. Energy Rev. 2015, 46, 201–209. [Google Scholar] [CrossRef]
  7. Li, E.; Wang, L.; Song, B.; Jian, S. Improved Fuzzy C-Means Clustering for Transformer Fault Diagnosis Using Dissolved Gas Analysis Data. Energies 2018, 11, 2344. [Google Scholar] [CrossRef] [Green Version]
  8. Tavoosi, J.E.A. Design a New Intelligent Control for a Class of Nonlinear Systems. In Proceedings of the 6th International Conference on Control, Instrumentation, and Automation (ICCIA 2019), Kordestan, Iran, 30–31 October 2019. [Google Scholar]
  9. Zhang, K.; Yuan, F.; Guo, J.; Wang, G. A novel neural network approach to transformer fault diagnosis based on momentum-embedded BP neural network optimized by genetic algorithm and fuzzy c-means. Engineering 2016, 41, 3451–3461. [Google Scholar] [CrossRef]
  10. Karimi, H.; Ghasemi, R.; Mohammadi, F. Adaptive Neural Observer-Based Nonsingular Terminal Sliding Mode Controller Design for a Class of Nonlinear Systems. In Proceedings of the 6th International Conference on Control, Instrumentation, and Automation, Kordestan, Iran, 30–31 October 2019. [Google Scholar]
  11. Benmahamed, Y.; Teguar, M.; Boubakeur, A. Diagnosis of Power Transformer Oil Using PSO-SVM and KNN Classifiers. In Proceedings of the 2018 International Conference on Electrical Sciences and Technologies in Maghreb (CISTEM), Algiers, Algeria, 28–31 October 2018; pp. 1–4. [Google Scholar]
  12. Li, J.; Zhang, Q.; Wang, K.; Wang, J.; Zhou, T.; Zhang, Y. Optimal dissolved gas ratios selected by genetic algorithm for power transformer fault diagnosis based on support vector machine. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 1198–1206. [Google Scholar] [CrossRef]
  13. Mohammadi, F.; Nazri, G.; Saif, M. A Fast Fault Detection and Identification Approach in Power Distribution Systems. In Proceedings of the 2019 International Conference on Power Generation Systems and Renewable Energy Technologies (PGSRET), Istanbul, Turkey, 26–27 August 2019; pp. 1–4. [Google Scholar]
  14. Souahlia, S.; Bacha, K.; Chaari, A. SVM-based decision for power transformers fault diagnosis using Rogers and Doernenburg ratios DGA. In Proceedings of the 10th International Multi-Conferences on Systems, Signals & Devices 2013 (SSD13), Hammamet, Tunisia, 18–21 March 2013; pp. 1–6. [Google Scholar]
  15. Li, S.; Wu, G.; Gao, B.; Hao, C.; Xin, D.; Yin, X. Interpretation of DGA for transformer fault diagnosis with complementary SaE-ELM and arctangent transform. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 586–595. [Google Scholar] [CrossRef]
  16. Yang, X.; Chen, W.; Li, A.; Yang, C.; Xie, Z.; Dong, H. BA-PNN-based methods for power transformer fault diagnosis. Adv. Eng. Inform. 2019, 39, 178–185. [Google Scholar] [CrossRef]
  17. Dai, J.; Song, H.; Sheng, G.; Jiang, X. Dissolved gas analysis of insulating oil for power transformer fault diagnosis with deep belief network. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 2828–2835. [Google Scholar] [CrossRef]
  18. Dey, D.; Chatterjee, B.; Dalai, S.; Munshi, S.; Chakravorti, S. A deep learning framework using convolution neural network for classification of impulse fault patterns in transformers with increased accuracy. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 3894–3897. [Google Scholar] [CrossRef]
  19. Afrasiabi, M.; Afrasiabi, S.; Parang, B.; Mohammadi, M. Power transformers internal fault diagnosis based on deep convolutional neural networks. J. Intell. Fuzzy Syst. 2019, 37, 1165–1179. [Google Scholar] [CrossRef]
  20. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.; Asari, V.K. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
  22. Lin, S.; Han, Z.; Li, D.; Zeng, J.; Yang, X.; Liu, X.; Liu, F. Integrating model-and data-driven methods for synchronous adaptive multi-band image fusion. Inf. Fusion 2020, 54, 145–160. [Google Scholar] [CrossRef]
  23. Alhichri, H.; Bazi, Y.; Alajlan, N.; Bin Jdira, B. Helping the Visually Impaired See via Image Multi-labeling Based on SqueezeNet CNN. Appl. Sci. 2019, 9, 4656. [Google Scholar] [CrossRef] [Green Version]
  24. Xiao, B.; Wei, Y.; Bi, X.; Li, W.; Ma, J. Image splicing forgery detection combining coarse to refined convolutional neural network and adaptive clustering. Inf. Sci. 2020, 511, 172–191. [Google Scholar] [CrossRef]
  25. Liu, Z.; Li, Q.; Li, W. Deep layer guided network for salient object detection. Neurocomputing 2020, 372, 55–63. [Google Scholar] [CrossRef]
  26. Geng, Z.; Chen, G.; Han, Y.; Lu, G.; Li, F. Semantic relation extraction using sequential and tree-structured LSTM with attention. Inf. Sci. 2020, 509, 183–192. [Google Scholar] [CrossRef]
  27. Liu, J.; Wang, Z.; Xu, M. DeepMTT: A deep learning maneuvering target-tracking algorithm based on bidirectional LSTM network. Inf. Fusion 2020, 53, 289–304. [Google Scholar] [CrossRef]
  28. Wei, W.; Dai, Q.; Wong, Y.; Hu, Y.; Kankanhalli, M.; Geng, W. Surface Electromyography-based Gesture Recognition by Multi-view Deep Learning. IEEE Trans. Biomed. Eng. 2019, 66, 2964–2973. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, F.; Li, Z.; Zhang, B.; Du, H.; Wang, B.; Zhang, X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019, 361, 185–195. [Google Scholar] [CrossRef]
  30. Jiao, J.; Zhao, M.; Lin, J.; Ding, C. Deep Coupled Dense Convolutional Network with Complementary Data for Intelligent Fault Diagnosis. IEEE Trans. Ind. Electron. 2019, 66, 9858–9867. [Google Scholar] [CrossRef]
  31. Ullah, A.; Muhammad, K.; Del Ser, J.; Baik, S.W.; Albuquerque, V. Activity recognition using temporal optical flow convolutional features and multi-layer LSTM. IEEE Trans. Ind. Electron. 2018, 66, 9692–9702. [Google Scholar] [CrossRef]
  32. Faiz, J.; Soleimani, M. Dissolved gas analysis evaluation in electric power transformers using conventional methods a review. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 1239–1248. [Google Scholar] [CrossRef]
  33. Mohammadi, F.; Zheng, C. A Precise SVM Classification Model for Predictions with Missing Data. In Proceedings of the 4th National Conference on Applied Research in Electrical, Mechanical Computer and IT Engineering, Tehran, Iran, 4 October 2018; pp. 3594–3606. [Google Scholar]
  34. Tra, V.; Duong, B.P.; Kim, J.M. Improving diagnostic performance of a power transformer using an adaptive over-sampling method for imbalanced data. IEEE Trans. Dielectr. Electr. Insul. 2019, 26, 1325–1333. [Google Scholar] [CrossRef]
  35. Han, H.; Wang, W.Y.; Mao, B.H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In Proceedings of the International Conference on Intelligent Computing, Hefei, China, 23–26 August 2005; pp. 878–887. [Google Scholar]
  36. Matasci, G.; Volpi, M.; Kanevski, M.; Bruzzone, L.; Tuia, D. Semisupervised transfer component analysis for domain adaptation in remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3550–3564. [Google Scholar] [CrossRef]
  37. Kolar, Z.; Chen, H.; Luo, X. Transfer learning and deep convolutional neural networks for safety guardrail detection in 2D images. Autom. Constr. 2018, 89, 58–70. [Google Scholar] [CrossRef]
  38. Sassi, P.; Tripicchio, P.; Avizzano, C.A. A Smart Monitoring System for Automatic Welding Defect Detection. IEEE Trans. Ind. Electron. 2019, 66, 9641–9650. [Google Scholar] [CrossRef]
  39. Wang, J.; Zheng, H.; Huang, Y.; Ding, X. Vehicle type recognition in surveillance images from labeled web-nature data using deep transfer learning. IEEE Trans. Intell. Transp. Syst. 2017, 19, 2913–2922. [Google Scholar] [CrossRef]
  40. Cai, Q.; Liu, X.; Guo, Z. Identifying Architectural Distortion in Mammogram Images Via a SE-DenseNet Model and Twice Transfer Learning. In Proceedings of the 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Beijing, China, 13–15 October 2018; pp. 1–6. [Google Scholar]
  41. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  42. Wang, J.; Mo, Z.; Zhang, H.; Miao, Q. A Deep Learning Method for Bearing Fault Diagnosis Based on Time-Frequency Image. IEEE Access 2019, 7, 42373–42383. [Google Scholar] [CrossRef]
  43. Zheng, H.; Wang, R.; Yang, Y.; Li, Y.; Xu, M. Intelligent Fault Identification Based on Multi-Source Domain Generalization Towards Actual Diagnosis Scenario. IEEE Trans. Ind. Electron. 2019, 67, 1293–1304. [Google Scholar] [CrossRef]
  44. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  45. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  46. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  47. Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. Deep parallel diagnostic framework.
Figure 2. Illustration of the local reachability density when $D_{i,j}^{min}$ is large and $D_{i,j}^{maj}$ is small.
Figure 3. The basic module of MobileNet-V2.
Figure 4. The structure of the long short-term memory (LSTM)-based diagnostic network.
Figure 5. Dimension reduction results of the dataset before and after the data augmentation. (a) Before the data augmentation; (b) after the data augmentation.
Figure 6. Data visualization and transfer learning.
Figure 7. The schematic of deep parallel fusion (DPF).
Table 1. Distribution of the original dataset.

Fault Type Code i | Fault Label | Number of Fault Samples
1 | High temperature overheating | 120
2 | Medium temperature overheating | 22
3 | Low temperature overheating | 14
4 | Partial discharge | 20
5 | Low energy discharge | 239
6 | High energy discharge | 113
Total | | 528
Table 2. Data visualization methods.

Type of Data | Data Visualization Method
Monitoring image | Trim away the extra parts and highlight the detected or pivotal parts.
Waveform graph | Perform decomposition, spectrum analysis, and other processing to convert the curve into a characteristic spectrum.
Parameters | Construct a suitable data graph according to the parameter value size, value range, etc.
Text | Use text visualization technologies such as Generative Adversarial Networks (GAN).
Table 3. Distribution of the new dataset.

Fault Label Code | Fault Label | Total Number of Samples After Data Augmentation
1 | High temperature overheating | 232
2 | Medium temperature overheating | 240
3 | Low temperature overheating | 245
4 | Partial discharge | 237
5 | Low energy discharge | 240
6 | High energy discharge | 243
Total | | 1437
Table 4. The hyperparameters of the CNN-based network and the computer configuration parameters.

Hyperparameters of the Network | Setting
Pre-trained network | MobileNet-V2
Optimizer | Adam
Weight learning rate factor | 10
Bias learning rate factor | 10
Frozen layers | 10
Minibatch size | 30
Max epochs | 60
Initial learning rate | 3 × 10^−4
Validation frequency | 3

Computer Hardware/Software | Setting
RAM | 16 GB
Processor | Intel(R) Core(TM) i7-7700 @ 3.60 GHz
Graphics card | NVIDIA GeForce GT 730
Operating system | Windows 10, 64-bit
Table 5. The hyperparameters of the LSTM-based network and the computer configuration parameters.

Hyperparameters of the Network | Setting
Optimizer | Adam
Gradient threshold | 0.01
Minibatch size | 100
Max epochs | 500
Number of hidden units | 150

Computer Hardware/Software | Setting
RAM | 16 GB
Processor | Intel(R) Core(TM) i7-7700 @ 3.60 GHz
Graphics card | NVIDIA GeForce GT 730
Operating system | Windows 10, 64-bit
Table 6. Comparison of diagnostic results before and after DPF.

Diagnostic Method | Diagnostic Accuracy
CNN diagnostic framework | 92.1%
LSTM diagnostic framework | 93.6%
Deep parallel diagnostic framework | 96.9%
Table 7. Diagnostic effects of different diagnostic methods before and after data augmentation.

Diagnostic Method | Accuracy Before Data Augmentation | Accuracy After Data Augmentation | Increased Percentage
SVM | 79.4% | 87.4% | 10.08%
KNN | 86.0% | 91.5% | 6.40%
GBDT | 85.5% | 86.9% | 1.64%
NN | 83.3% | 85.1% | 2.16%
FCM | 83.4% | 87.5% | 4.92%
CNN | 88.2% | 92.1% | 4.42%
LSTM | 89.7% | 93.6% | 4.35%
Deep parallel diagnosis | 93.5% | 96.9% | 3.64%
Table 8. Diagnostic effects of different diagnostic methods with or without noise.

Diagnostic Method | Accuracy Without Noise | Accuracy With Noise | Decreased Percentage
SVM | 87.4% | 85.9% | 1.72%
KNN | 91.5% | 91.1% | 0.44%
GBDT | 86.9% | 80.1% | 7.83%
NN | 85.1% | 80.9% | 4.94%
FCM | 87.5% | 85.9% | 1.83%
CNN | 92.1% | 91.5% | 0.65%
LSTM | 93.6% | 92.7% | 0.96%
Deep parallel diagnosis | 96.9% | 96.3% | 0.62%
