Article

Gas Turbine Engine Identification Based on a Bank of Self-Tuning Wiener Models Using Fast Kernel Extreme Learning Machine

Jiangsu Province Key Laboratory of Aerospace Power Systems, College of Energy & Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, Jiangsu, China
* Authors to whom correspondence should be addressed.
Energies 2017, 10(9), 1363; https://doi.org/10.3390/en10091363
Submission received: 4 August 2017 / Revised: 28 August 2017 / Accepted: 1 September 2017 / Published: 8 September 2017
(This article belongs to the Section F: Electrical Engineering)

Abstract

In order to simultaneously obtain a globally optimal model structure and coefficients, this paper proposes a novel Wiener model to identify the dynamic and static behavior of a gas turbine engine. An improved kernel extreme learning machine is presented to build up a bank of self-tuning block-oriented Wiener models; the time constants of the linear dynamic elements in the Wiener models are tuned to the engine operating conditions. Reduced-dimension matrix inversion, incorporated with a fast leave-one-out cross-validation strategy, is utilized to decrease the computational time for selecting the engine model feature parameters, so that, unlike the former method, no optimization algorithm is needed. The contribution of this study is a more convenient and appropriate methodology for describing aircraft engine thermodynamic behavior during its static and dynamic operations. The methodology is evaluated in terms of computational effort and dynamic and static estimation accuracy through a case study involving data generated by a general aircraft engine simulation. The results confirm the effectiveness of the proposed approach.

1. Introduction

Gas turbines are complicated thermodynamic machines and are widely used as the power supply for commercial and military planes. The modeling and simulation of aircraft engines is a general methodology for the optimization of design, performance and maintenance [1,2]. The engine operating parameters such as speed, pressure and temperature along the engine flow path are separately measured by the full authority digital electronic control (FADEC) and engine monitoring unit (EMU) for control and health management [3,4]. An accurate on-board model is capable of providing real-time, realistic estimations of aircraft engine operating parameters. The unmeasurable state parameters representing the health condition of engine modules can be calculated from the residuals between the real measurements of these operating parameters and their predicted values from an on-board engine model [5,6,7]. In addition, the engine model provides a redundant channel of operating parameters [8]. For sensor fault-tolerant control, this channel value can replace a faulty sensor signal used for feedback control [9,10]. An engine model with real-time performance and high confidence in both dynamic and static behavior therefore becomes a key factor in aircraft control methodology and is of great interest.
There are two groups of engine models: physics-based and data-based models. The former are established using information about the engine physics, such as characteristic maps, thermodynamic relationships, and mass, momentum and energy balances [11,12]. Numerous studies have been conducted on the development of these aero-thermodynamic engine models, such as the Gas turbine Simulation Program (GSP) [13], GasTurb [14], the Commercial Modular Aero-Propulsion System Simulation (CMAPSS) [15], the Numerical Propulsion System Simulation (NPSS) [16] and GGTS (general aircraft simulation) [17], on software platforms such as C++ 6.0, JAVA 5, MATHEMATICA 8.0 and Simulink/MATLAB 6.0. The physics-based models sufficiently describe the engine static and dynamic behavior, but they have drawbacks: their performance is easily affected by the engine component maps and empirical parameters obtained from extensive testing, and their computational burden is relatively heavy [18]. Linearization is introduced to achieve a real-time engine model because of the computational complexity and time consumption of the aero-thermodynamic model.
The data-based models directly disclose the relationships between engine inputs and outputs derived from the sensed operating data by mathematical tools. The data-based model parameters, or the functions between the inputs and outputs, fit the in-service operating data but do not rely on any physical concepts of the aircraft engine. Machine learning algorithms [19,20] and polynomial regressions [21] are well-known mathematical approaches for depicting the engine static and dynamic operating process in the data-based modeling field. Polynomial regression has a simpler structure and fewer modeling parameters to determine in describing an engine's behavior [5,21]. Machine learning methods such as the neural network (NN) and support vector machine (SVM) can be used for complex multiple-output system modeling [22,23], while the regression method is limited to single-output system modeling.
The block-oriented nonlinear model is one of the data-based modeling methods and has attracted increasing attention due to its simplicity and clear structure [24,25]. These models have both linear and nonlinear parts; the nonlinear static part is usually assumed continuous and differentiable. The block-oriented model parameters are identified by the polynomial regression method [24]. A number of attempts have been made to simplify the structure parameters of the linear/nonlinear elements and to employ appropriate parameterization methods to improve estimation accuracy and reduce computational costs [26]. Recently, an enhanced Wiener model (WM) method integrating an NN and optimization algorithms was developed under a gray-box assumption [25]; it adapts some coefficients of the linear dynamic part to the operating conditions to better describe the engine.
This article proposes a novel block-oriented nonlinear model based on a bank of self-tuning WMs (SWMs) using a fast kernel extreme learning machine (FKELM), and performance comparisons for predicting engine operating parameters are carried out. The proposed engine model consists of four adaptive linear dynamic parts, each related to a nonlinear static element by a proper aero-thermodynamic correlation, and it represents the engine static and dynamic behavior in the form of a single-input, multiple-output (SIMO) system. Each linear dynamic part is reduced to a first-order inertia section, and the time constant values are tuned to the engine operating state. The goal of the innovation in the FKELM is to obtain appropriate model feature parameters, including the KELM kernel parameter and the block-oriented regression factors, with less running time. The research is undertaken by the authors at Nanjing University of Aeronautics and Astronautics, China, in collaboration with the lab at the University of Toronto, Canada. The simulations show that this methodology is superior to the existing methods in terms of representing the static and dynamic behavior of an engine.
The paper is organized as follows: Section 2 presents the self-tuning Wiener model with block structure and the improved KELM algorithm for the SWM. The bank of SWMs is designed and employed for aircraft engine identification in Section 3. In Section 4, simulation and analysis indicate the improved performance of the proposed methodology with regard to estimation accuracy and computational time. Section 5 draws conclusions and discusses future research directions.

2. Self-Tuning Wiener Model

2.1. Block-Oriented Self-Tuning Wiener Model

Block-oriented models were first proposed by Narendra and have been extensively applied to complex nonlinear systems [27]. Linear and nonlinear parts interconnect to form an open-loop, single-branch, block-structured model, which makes the description of the dynamic and static behavior easy [24]. There are three kinds of these block-oriented models: Hammerstein, Wiener and Wiener-Hammerstein models [28].
The number of blocks in a Wiener-Hammerstein model is relatively large, and the data needed to determine the parameters of the superimposed model are hard to produce [29]. A Hammerstein model is often used as an approximate expression for a system whose nonlinearity results only from the change of direct-current gain with input amplitude; such a model cannot depict the aircraft engine operating process because the engine dynamics vary distinctly with the input amplitude. Compared to the Hammerstein model, the Wiener model can show variations of dynamic behavior for different input amplitudes. The Wiener model also has a simpler structure than the Wiener-Hammerstein model and requires fewer model parameters to be computed. Hence, the Wiener model is used to depict engine dynamic behavior varying with operating conditions and is recommended for engine identification.
The identification of the nonlinear static part in a Wiener model can be determined from the steady-state relationship of the aircraft engine. The aero-thermodynamic operating data include the engine physical spool speeds, capacity flow, total temperature and total pressure. The dynamic relationships of these operating data are generally simplified to a first-order lag [6,30]. Therefore, the linear dynamic block of the Wiener model is expressed as a first-order inertial transfer function:
$G_i(s) = \frac{1}{\tau_i s + 1}$ (1)
where the coefficient of this linear dynamic part is the time constant τ of the first-order inertial element, and i indicates the i-th linear dynamic part related to the i-th engine model output. The nonlinear static element of the Wiener model is the nonlinear expression y = f(v); the nonlinear static part, representing the nonlinear relationship between the engine operating input and output, is implemented as a lookup table.
In a conventional Wiener model, the linear dynamic part depicts the system dynamic behavior with a fixed time constant τ in the transfer function of Equation (1). However, previous experimental results on gas turbines show that the time constant of the first-order lag of the operating data varies with the engine operation. Such a model does not have enough tunable parameters to represent the engine dynamic characteristics, and it is difficult to meet the requirements of engine dynamic accuracy. Therefore, a self-tuning Wiener model approach is introduced that adapts the transfer function coefficients of the linear dynamic block to the system operating conditions, improving the flexibility and estimation accuracy of the Wiener model.
The engine spool speeds, the low-pressure spool speed and the high-pressure spool speed, are the important parameters representing the engine operating state and are added as regression variables in the NARX (nonlinear auto-regressive exogenous input) model to describe the nonlinear variation of the time constant:
$\tau(t) = F\big(Y(t-1), \ldots, Y(t-n_b),\ \tau(t-1), \ldots, \tau(t-n_a),\ N_H(t-1),\ N_L(t-1)\big)$ (2)
where Y is the engine operating data to be estimated, and the regression factors na and nb are the numbers of output and input regression terms, respectively. NL and NH are the low-pressure and high-pressure spool speeds, which are defined as the relative regression variables. With Equation (2), the time constant adapts to the engine operation, improving the dynamic description of the engine nonlinearity. In this aspect, a learning machine (LM) is introduced to perform this NARX mapping in the self-tuning WM (SWM). Figure 1 shows the training schematic of the self-tuning Wiener model; UD indicates a unit delay and Wf is the fuel flow.
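To make the structure concrete, a minimal sketch (in Python, rather than the authors' MATLAB implementation) of the linear dynamic block of Equation (1), discretized with a forward-Euler step and driven by a time-varying time constant as in Equation (2), is given below; the 0.02 s sample time is taken from Section 2.1, and `tau_seq` stands for the time constants predicted by the learning machine at each step.

```python
import numpy as np

TS = 0.02  # sample time of the static lookup table, from Section 2.1 (s)

def tuned_first_order_lag(u_seq, tau_seq, v0=0.0, ts=TS):
    """Forward-Euler simulation of the linear block G(s) = 1/(tau*s + 1) of
    Equation (1) with a time-varying time constant tau, as in Equation (2):
    v[k] = v[k-1] + ts * (u[k] - v[k-1]) / tau[k]."""
    v = np.empty(len(u_seq))
    v_prev = v0
    for k, (u, tau) in enumerate(zip(u_seq, tau_seq)):
        v_prev = v_prev + ts * (u - v_prev) / tau
        v[k] = v_prev
    return v

# In a Wiener model the intermediate signal v then passes through the static
# nonlinearity y = f(v), e.g. a lookup table interpolated with np.interp:
# y = np.interp(v, v_table, y_table)
```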
The sample interval of the nonlinear static part, implemented as a lookup table, is 0.02 s. In addition to the type of nonlinear estimator, the regression factors na and nb determining the model structure have an important impact on the performance of engine identification [31]. An enhanced Wiener model (EWM) was developed to identify a two-shaft industrial gas turbine in [25], and the regression factors of the LM estimator embedded in the WM were calculated by trial and error. However, a systematic method to obtain the structure parameters of these Wiener models, especially in aircraft engine applications, has not been presented due to the problem-specific nature.
In this paper, the GA-KELM is first extended from the EWM framework, where the genetic algorithm (GA), a well-known optimization technique, is combined with the KELM. The regressor factors of the GA-KELM are obtained by trial and error, and its training schematic is similar to that of the EWM. It is noted that both invasive weed optimization (IWO) and the GA are population-based optimization algorithms, and owing to their stochastic nature it is hard to reach a stable global optimum in every search; the optimization process needs to be performed several times to generate an appropriate result. In addition, the optimization algorithm used to tune the LM model parameters, such as the weighting vector and biases, consumes considerable time. With regard to computational effort and static and dynamic estimation accuracy, a novel KELM tailored to the SWM is therefore proposed that does not rely on population-based optimization techniques.

2.2. Improved KELM for Self-Tuning Wiener Model

Neural networks play an important role in the machine learning field and have been widely applied in engineering. In the last decade, the ELM algorithm was developed by Huang from the NN [32]. The input weights and biases related to the ELM hidden layer nodes are randomly generated, and thus fewer model parameters need to be calculated, which leads to a much faster learning speed. The kernel function, mapping from the input space to the feature space by a kernel transformation, is then introduced into the ELM to form the KELM, which yields more stable computation [33].
Given N samples $\{(x_i, y_i)\,|\,i = 1, 2, \ldots, N\}$, where $x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in R^n$ is the input and $y_i$ is the output, the learning problem is to search for an optimal function $F: x_i \to y_i$, i.e., $F(x_i) \approx y_i$. Once the number of hidden layer nodes is L, the function F is expressed as:
$F(x_i) = \sum_{j=1}^{L} \beta_j\, g(a_j \cdot x_i + b_j), \quad i = 1, 2, \ldots, N$ (3)
where $a_j = [a_{1j}, a_{2j}, \ldots, a_{nj}]^T$ is the input weighting vector between the input nodes and the j-th hidden node, $b_j$ is the bias of the j-th hidden node, and $\beta_j$ is the output weighting vector between the j-th hidden node and the output nodes. The expression $a_j \cdot x_i$ denotes the vector inner product, and $g(\cdot)$ is the activation function of the hidden layer nodes.
A regularization factor C, borrowed from the SVM algorithm [33], produces a new risk functional $R = \|\beta\|^2 + C\|\varepsilon\|^2$ combining structural and empirical risk, and thus the learning problem becomes:
$\min_{\beta}\ \frac{1}{2}\|\beta\|_2^2 + \frac{C}{2}\sum_{i=1}^{N}\varepsilon_i^2 \quad \text{s.t.}\quad h^T(x_i)\,\beta = y_i - \varepsilon_i,\ \ i = 1, 2, \ldots, N$ (4)
The factor C weights the empirical risk against the structural risk. The slack variable $\varepsilon_i$ is the difference between the predicted and actual values. The optimal output weight of Equation (4) is:
$\beta = H^T\Big(\frac{1}{C}I_N + HH^T\Big)^{-1}y, \qquad H(a_1, \ldots, a_L, b_1, \ldots, b_L) = \begin{bmatrix} g(a_1^T x_1 + b_1) & \cdots & g(a_L^T x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(a_1^T x_N + b_1) & \cdots & g(a_L^T x_N + b_L) \end{bmatrix}_{N \times L}$ (5)
Compared to the ELM, the KELM adds a regularization term $I_N/C$. In addition, the kernel transformation $k(x_i, x_j)$ replaces the explicit feature mapping of the ELM in the input space:
$K = HH^T:\ K_{i,j} = h(x_i)\cdot h(x_j) = k(x_i, x_j), \qquad k(x_i, x_j) = \exp\big(-\gamma \|x_i - x_j\|^2\big)$ (6)
where the kernel parameter γ indicates the kernel distribution width. The output function in Equation (3) can be calculated
$F(x) = h^T(x)\,\beta = h^T(x)H^T\Big(\frac{1}{C}I_N + HH^T\Big)^{-1}y = \big[k(x, x_1), \ldots, k(x, x_N)\big]\Big(\frac{1}{C}I_N + K\Big)^{-1}y = k(x)\,\alpha, \qquad \alpha = \Big(\frac{1}{C}I_N + K\Big)^{-1}y$ (7)
where α is the KELM output weight. The detailed KELM algorithm can be found in [34].
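For illustration, a minimal sketch of KELM training and prediction following Equations (6) and (7) might look as follows; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma):
    """k(x, z) = exp(-gamma * ||x - z||^2), Equation (6)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * d2)

def kelm_train(X, y, C, gamma):
    """Solve alpha = (I_N/C + K)^(-1) y, Equation (7)."""
    N = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    A = np.eye(N) / C + K          # regularized kernel matrix
    alpha = np.linalg.solve(A, y)
    return alpha

def kelm_predict(X_train, alpha, X_new, gamma):
    """F(x) = k(x) alpha, Equation (7)."""
    return gaussian_kernel(X_new, X_train, gamma) @ alpha
```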
The model feature parameters, such as the regressor factors and the KELM kernel parameter γ, must be given before the SWM is implemented in the engine model, and they are the keys to the SWM performance. Cross-validation methods are employed to select these parameters, mainly including the hold-out, K-fold cross-validation (KC) and leave-one-out cross-validation (LOOC) methods. Among these, LOOC gives better generalization performance, but its computational time increases dramatically for large datasets [35].
For this purpose, an FKELM method is put forward in which a fast LOOC improves the computational efficiency within the KELM framework. The methodology relies on block matrix inversion and the LOOC strategy. Provided that the effect of the p-th training sample is neglected, the output estimate $\tilde{y}_p$ and weighting vector $\alpha_p$ can be written as:
$\tilde{y}_p = F_p(x) = \sum_{j=1}^{L}\beta_j\, g(a_j \cdot x + b_j) = k(x)\,\alpha_p, \quad p = 1, \ldots, N, \qquad \alpha_p = A_p^{-1}Y$ (8)
where $A_p$ is a simplified matrix of reduced effective dimension compared with the original matrix A: the elements of the p-th column of A are set to zero except for the diagonal element $a_{pp}$, which is kept unchanged. That is to say, the contribution of the p-th sample to the KELM output weighting vector is almost neglected. The expressions of these two matrices are:
$A = \begin{bmatrix} a_{11} & \cdots & a_{1p} & \cdots & a_{1N} \\ \vdots & & \vdots & & \vdots \\ a_{p1} & \cdots & a_{pp} & \cdots & a_{pN} \\ \vdots & & \vdots & & \vdots \\ a_{N1} & \cdots & a_{Np} & \cdots & a_{NN} \end{bmatrix}, \qquad A_p = \begin{bmatrix} a_{11} & \cdots & 0 & \cdots & a_{1N} \\ \vdots & & \vdots & & \vdots \\ a_{p1} & \cdots & a_{pp} & \cdots & a_{pN} \\ \vdots & & \vdots & & \vdots \\ a_{N1} & \cdots & 0 & \cdots & a_{NN} \end{bmatrix}$ (9)
Define two vectors $u = [a_{1p}, \ldots, a_{(p-1)p}, 0, a_{(p+1)p}, \ldots, a_{Np}]^T$ and $v = [0, \ldots, 0, -1, 0, \ldots, 0]^T$, where the p-th element of v is −1 and the p-th element of u is 0. We then obtain the following expression:
$A_p = A + uv^T$ (10)
The Sherman-Morrison-Woodbury equation is introduced:
$(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^T A^{-1}}{1 + v^T A^{-1}u}$ (11)
Equation (8) can be rewritten as follows using Equations (10) and (11):
$\alpha_p = \Big(A^{-1} - \frac{A^{-1}uv^T A^{-1}}{1 + v^T A^{-1}u}\Big)Y = \alpha - \frac{A^{-1}uv^T A^{-1}Y}{1 + v^T A^{-1}u}$ (12)
Furthermore, we can obtain the following two equations:
$1 + v^T A^{-1}u = \bar{a}_{pp}\,a_{pp}$ (13)
$A^{-1}uv^T A^{-1}Y = L\,a_{pp}\,\alpha(p)$ (14)
where the vector $L = [\bar{a}_{1p}, \bar{a}_{2p}, \ldots, (\bar{a}_{pp}a_{pp} - 1)/a_{pp}, \ldots, \bar{a}_{Np}]^T$, $\bar{a}_{ip}\ (i = 1, 2, \ldots, N)$ is the corresponding element of $A^{-1}$, and $\alpha(p)$ is the p-th element of α. The simplified output weight is then calculated as:
$\alpha_p = \alpha - \frac{L\,\alpha(p)}{\bar{a}_{pp}}$ (15)
Therefore, the simplified output weight vector can be obtained directly from Equation (15), and no additional matrix inversion is needed. This implies that the number of matrix inversions can be reduced from N to 1; the FKELM is thus obtained by combining the KELM with the fast LOOC (FLOOC) validation method. Since the matrix inversion occupies most of the calculation time, reducing the number of inversions saves a large amount of computational time in the training phase and greatly decreases the computational effort of selecting the feature parameters with the FKELM. The detailed steps of this methodology are presented as follows, with a code sketch after the steps:
Step 1:
Initialize the model parameters: let p = 1. Empirically produce a set of M candidate combinations of the regressor factors na, nb and the kernel parameter γ. Select the KELM feature parameter combination (na, nb, γ)k from the candidate set, with k = 1.
Step 2:
Train the KELM using the N samples under the parameter combination (na, nb, γ)k to generate the full-dimension matrices A and A−1.
Step 3:
Calculate the simplified output weight vector $\alpha_p$ by Equation (15) and the estimate $\tilde{y}_p$ by Equation (8).
Step 4:
If p < N, then p = p + 1 and return to step 3; otherwise, compute the generalization performance index of the KELM:
$RMSE_k = \sqrt{\frac{1}{N}\sum_{p=1}^{N}\big(y_p - \tilde{y}_p\big)^2}$ (16)
Step 5:
If k < M, then k = k + 1 and go back to step 2; otherwise, stop the procedure and select the optimal KELM feature parameter combination (na, nb, γ) and the output weight vector α that yield the minimum value of RMSEk.
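The following minimal sketch illustrates Steps 1–5, assuming the `gaussian_kernel` helper sketched earlier; for brevity the candidate grid here sweeps only γ, whereas in the full procedure the regressor factors na and nb would also change how the regression matrix X is assembled from the engine data.

```python
import numpy as np

def flooc_rmse(K, C, y):
    """Fast leave-one-out RMSE of a KELM, following Equations (8)-(16).

    A = I_N/C + K and alpha = A^{-1} y are computed once; for each sample p
    the simplified weight is alpha_p = alpha - L * alpha[p] / Abar[p, p]
    (Equation (15)), with Abar = A^{-1}, L[i] = Abar[i, p] for i != p and
    L[p] = (Abar[p, p] * A[p, p] - 1) / A[p, p]."""
    N = K.shape[0]
    A = np.eye(N) / C + K
    Abar = np.linalg.inv(A)            # a single inversion replaces N of them
    alpha = Abar @ y
    errs = np.empty(N)
    for p in range(N):
        L = Abar[:, p].copy()
        L[p] = (Abar[p, p] * A[p, p] - 1.0) / A[p, p]
        alpha_p = alpha - L * alpha[p] / Abar[p, p]
        errs[p] = y[p] - K[p, :] @ alpha_p   # Equation (8) evaluated at x_p
    return np.sqrt(np.mean(errs**2))         # Equation (16)

def select_kelm_parameters(X, y, C, gamma_candidates):
    """Steps 1-5: keep the candidate with the minimum fast-LOO RMSE."""
    best_gamma, best_rmse = None, np.inf
    for gamma in gamma_candidates:
        K = gaussian_kernel(X, X, gamma)     # helper from the earlier sketch
        rmse = flooc_rmse(K, C, y)
        if rmse < best_rmse:
            best_gamma, best_rmse = gamma, rmse
    return best_gamma, best_rmse
```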
The FKELM is used as the regression estimator that tunes the model coefficients of the linear dynamic part to the engine operation in the SWM. The optimal model feature parameters and output weight in the SWM framework are obtained directly by the FKELM in the training phase, and, unlike the EWM, no population-based optimization algorithm is needed.

3. Aircraft Engine Identification by Self-Tuning Wiener Models

The proposed methodology is tested on a virtual aircraft engine created from GGTS, which is based on thermodynamic physical theory and has the architecture of an engine component-level model [11,36]. GGTS is similar to CMAPSS, which is currently not available in China. GGTS provides various kinds of aircraft engine models, such as turbojet, turbofan and turboshaft engine models, together with their simulation data, and it has been widely used by the Aviation Industry Corporation of China. The engine component characteristic maps and design operation data are loaded into GGTS to create a turbofan engine model (TEM) [14,37], which is coded in the C language and packaged as a dynamic link library (DLL) for simulation in the MATLAB environment.
The TEM produces engine-sensed data comprising snapshot measurements of dynamic behavior collected around the design point. The available instrumentation at the design operation is reported in Table 1, where the first four measurements define the engine operation and the remaining flow-path sensors can be used for the FADEC or EMU. The examined engine operates under International Standard Atmosphere (ISA) conditions.
A bank of self-tuning Wiener models is employed to identify the engine instead of a conventional single stochastic model. There are one input (Wf) and four outputs (NL, NH, W3 and exhaust gas temperature (EGT)) in the examined engine model. Variables such as H, Ma and the atmospheric condition are not included in the model inputs, since the engine is examined under the International Standard Atmosphere condition at ground level. This engine model is therefore a SIMO system consisting of four SWMs related to the four outputs. Each SWM is developed from the block-oriented Wiener model, the time constant of which is self-tuned to the operating conditions by the FKELM. Figure 2 shows the bank of SWMs for a low-bypass-ratio aircraft engine.
The aircraft engine spool speeds NL and NH are the most important parameters representing its operating state [36]. Moreover, the speed sensors installed in the engine have high accuracy and quick response, with far less time lag than the flow and temperature sensors. Hence, the spool speeds at the previous time step act as regression inputs not only for the speed SWMs themselves but also for the SWMs of W3 and EGT. The fuel-air ratio (FAR) is employed as the input of the EGT SWM, which differs from the other SWMs driven by Wf. The FAR serves as the control variable of the speed control loop in the FADEC, which can effectively reduce the engine surge probability and changes the specific exhaust gas emissions; hence, the FAR is used as one of the inputs of the EGT SWM. Quasi-amplitude-modulated pseudo-random binary sequence (Quasi-APRBS) data are loaded to train each SWM [38]; the APRBS is a well-used excitation signal for nonlinear system identification [39].
The procedure schematic for engine identification by the proposed methodology is presented in Figure 3. The TEM, developed from GGTS using the design point data and component maps, generates the data for both training and testing. The training data are obtained by exciting the TEM with the Quasi-APRBS signal, and they are then sorted, with one sample left out at a time to obtain yp, for training each FKELM. The initial combinations of model feature parameters are fed into each FKELM corresponding to its SWM to obtain the optimal model structure, kernel parameter and output weighting vector. The linear dynamic element with tuned time constant and the nonlinear static part together form the SWM related to each model output. The identified engine model, represented by the bank of SWMs, is then created by integrating the four SWMs as shown in Figure 2. In the testing process, the test data from the TEM are used to evaluate the performance of the established engine model.
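As a rough sketch of how the bank in Figure 2 can be assembled, the following composition (with illustrative names and wiring, under the assumptions stated in the comments) combines a trained time-constant regressor, the first-order lag and the static lookup table for each output:

```python
import numpy as np

class SelfTuningWienerModel:
    """One SWM branch: a self-tuned first-order lag followed by a static map."""
    def __init__(self, tau_regressor, static_map, ts=0.02):
        self.tau_regressor = tau_regressor   # e.g., a trained FKELM: features -> tau
        self.static_map = static_map         # lookup table / interpolant: v -> y
        self.ts = ts
        self.v = 0.0                         # state of the linear dynamic block

    def step(self, u, features):
        tau = self.tau_regressor(features)         # Equation (2)
        self.v += self.ts * (u - self.v) / tau     # Equation (1), Euler step
        return self.static_map(self.v)             # y = f(v)

def engine_step(swms, wf, far, nl_prev, nh_prev):
    """One step of the SIMO bank: Wf drives NL, NH and W3; FAR drives EGT;
    the previous spool speeds are shared regressor inputs (illustrative)."""
    feats = np.array([nl_prev, nh_prev])
    return {name: swm.step(far if name == "EGT" else wf, feats)
            for name, swm in swms.items()}
```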

4. Simulation and Analysis

4.1. FKELM Performance Evaluation

The FKELM is compared against the LOOC-KELM and KC-KELM [36] in this section. The involved algorithms run in MATLAB R2016b, and the computer used for the simulation is configured as follows: CPU i3-550 @ 3.20 GHz and 4 GB RAM. The computational efforts of the three cross-validation methods are first assessed using the benchmark dataset "sinc". The expression of the artificial dataset "sinc" is:
$f(x) = \begin{cases} \sin x / x, & x \neq 0 \\ 1, & x = 0 \end{cases}$ (17)
There are M training samples generated in the range (−10, 10), with sample sizes of 500, 1000, 1500, 2000, 2500 and 3000; the regularization factor C is 0.5. Table 2 compares the computational efforts of the three methods. The computational time of LOOC is 0.75 s for a data size of 500 and 277.29 s for a size of 3000, and it increases markedly with the dataset scale. Both FLOOC and KC consume less time than LOOC, especially for large data sizes, while FLOOC takes about half the time of KC in all cases. Hence, the FLOOC method is the fastest choice for large-scale datasets.
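For reference, a minimal sketch of the "sinc" benchmark of Equation (17) and of a naive N-inversion LOOC against the single-inversion FLOOC (reusing the helpers sketched in Section 2.2) could look as follows; the absolute timings on any given machine will of course differ from those in Table 2.

```python
import time
import numpy as np

def sinc_dataset(m, seed=0):
    """Samples of Equation (17) drawn uniformly on (-10, 10)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10.0, 10.0, size=(m, 1))
    y = np.ones(m)
    nz = x[:, 0] != 0.0
    y[nz] = np.sin(x[nz, 0]) / x[nz, 0]
    return x, y

X, y = sinc_dataset(500)
C, gamma = 0.5, 1.0
K = gaussian_kernel(X, X, gamma)       # helper from the Section 2.2 sketch

t0 = time.time()
flooc_rmse(K, C, y)                    # fast LOO: one matrix inversion
print(f"fast LOO:  {time.time() - t0:.3f} s")

t0 = time.time()
errs = []                              # naive LOO: one inversion per fold
for p in range(len(y)):
    idx = np.delete(np.arange(len(y)), p)
    alpha = np.linalg.solve(np.eye(len(idx)) / C + K[np.ix_(idx, idx)], y[idx])
    errs.append(y[p] - K[p, idx] @ alpha)
print(f"naive LOO: {time.time() - t0:.3f} s")
```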
The generalization performance of the proposed FKELM is compared with those of the LOOC-KELM and KC-KELM using two benchmark datasets, "Boston housing" and "Abalone", taken from the UCI Machine Learning Repository. The sample sizes of "Boston housing" and "Abalone" are 455 and 1000, respectively, and the data are normalized to the range [−1, 1]. Two kernel functions, namely the Gaussian kernel in Equation (6) and the asymptotic kernel in Equation (18) [34], are considered:
$k(x, y) = \frac{2}{\pi}\arcsin\frac{1 + \langle x, y\rangle}{\sqrt{\big(1/\gamma^2 + 1 + \langle x, x\rangle\big)\big(1/\gamma^2 + 1 + \langle y, y\rangle\big)}}$ (18)
The kernel parameter γ is empirically selected from 10−2, 10−1, 100, 101 and 102. The RMSE defined by Equation (16) is used as the generalization performance index of these KELM algorithms, and the results are shown in Table 3.
In Table 3, the RMSE of the LOOC-KELM on the "Boston housing" dataset reaches its minimum of 0.07475 when the Gaussian kernel parameter γ is 10. The minimal RMSE on the same dataset is also obtained with the asymptotic kernel at γ = 10, and the KC-KELM and FKELM likewise produce their minimal RMSEs at γ = 10 for both the Gaussian and asymptotic kernels. As mentioned earlier, the LOOC-KELM has the best generalization performance as the number of training samples grows, and the FKELM is an approximation to the LOOC-KELM in which the effect of the p-th sample on the inverted matrix is ignored. The RMSE of the FKELM is much closer to that of the LOOC-KELM than that of the KC-KELM. A similar conclusion is obtained on the "Abalone" benchmark dataset. Consequently, the proposed FKELM is the best choice among the examined KELM algorithms in terms of computational effort and generalization performance, and it is applied to tune the linear dynamic part of the SWM.

4.2. Engine Identification Application of the Methodology

The aircraft engine dynamic and static behavior is identified by the FKELM self-tuning Wiener model (FSWM). The dynamic performance of the FSWM is compared with those of the WM, NN-WM, EWM and GA-SWM, while the static performance of the FSWM is tested against the NN. The basic WM has a fixed time constant in its linear dynamic part. The NN-WM has an architecture similar to that of the EWM; the difference is that the EWM parameters are adjusted by the IWO algorithm, whereas no optimization algorithm is employed to tune those of the NN-WM in the training phase. The design parameters of the involved algorithms are given as in [25]. The FKELM and the basic KELM are used as the regression estimators to yield the time constant of the linear dynamic element in the FSWM and GA-SWM, respectively, and, unlike the GA-SWM, no GA is used to tune the model parameters of the FSWM.
The fuel flow is generated using the Quasi-APRBS and fed into the TEM to produce the I/O data for engine identification shown in Figure 4. The input data are given in the first plot and the output data in the remaining plots of Figure 4. The Wf excitation ranges from 0.55 to 1, and the dwell time is randomly selected between 10 and 24.
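A minimal sketch of generating a Quasi-APRBS-style fuel-flow excitation with these ranges (random hold levels between 0.55 and 1 and random dwell times between 10 and 24) is shown below; it illustrates the idea rather than reproducing the authors' exact signal generator.

```python
import numpy as np

def quasi_aprbs(total_time, ts, amp_range=(0.55, 1.0), dwell_range=(10.0, 24.0), seed=0):
    """Piecewise-constant excitation with random amplitudes and dwell times,
    in the spirit of an amplitude-modulated PRBS for nonlinear identification."""
    rng = np.random.default_rng(seed)
    n = int(round(total_time / ts))
    u = np.empty(n)
    k = 0
    while k < n:
        level = rng.uniform(*amp_range)                      # random hold level
        dwell = int(round(rng.uniform(*dwell_range) / ts))   # random hold length
        u[k:k + dwell] = level
        k += dwell
    return u

wf_excitation = quasi_aprbs(1000.0, 0.02)   # 1000 s at a 0.02 s sample time
```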
The 1000 s of Quasi-APRBS engine data are divided into two subsets: the first 800 s as training samples and the remaining 200 s as testing samples. The performance evaluation function of the engine models used to search for the optimal structure parameters is given by:
$J = \frac{1}{w_1 + w_2}\Big(w_1\frac{(|PC|)^{-1}}{N_1} + w_2\frac{EP_{max}}{N_2}\Big), \qquad PC = \Big(1 - \frac{\|Y - \hat{Y}\|}{\|Y - \bar{Y}\|}\Big)\times 100, \qquad EP_{max} = \max_{1 \le i \le D}\Big(\Big|\frac{Y(i) - \hat{Y}(i)}{Y(i)}\Big|\times 100\Big), \qquad EP_{mean} = \frac{1}{D}\sum_{i=1}^{D}\Big(\Big|\frac{Y(i) - \hat{Y}(i)}{Y(i)}\Big|\times 100\Big)$ (19)
where PC is the percentage of compliance, EPmax is the maximum error percentage, EPmean is the mean error percentage, and D is the dimension of Y. The weighting coefficients wi determine the contributions of the different error indices (PC and EPmax) to the objective function; w1 and w2 are both 0.5 in this study. The vectors Y and $\hat{Y}$ are the desired and estimated engine outputs, respectively, and $\bar{Y}$ is the mean of Y.
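A minimal sketch of these indices for a single output channel (Y and Ŷ as 1-D NumPy arrays) is given below; the helper names are illustrative.

```python
import numpy as np

def percentage_of_compliance(Y, Y_hat):
    """PC = (1 - ||Y - Y_hat|| / ||Y - mean(Y)||) * 100, from Equation (19)."""
    return (1.0 - np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y - Y.mean())) * 100.0

def error_percentages(Y, Y_hat):
    """EPmax and EPmean of Equation (19), in percent."""
    ep = np.abs((Y - Y_hat) / Y) * 100.0
    return ep.max(), ep.mean()
```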
The optimal estimator regressor factors na and nb of the NN-WM, EWM and GA-SWM are determined by several trials as those minimizing the evaluation function of Equation (19). The regressor factors na and nb are varied in steps of 1 over the intervals [1, 5] and [0, 5], respectively. The specifications of the optimal structures of the different Wiener models are reported in Table 4; the basic WM is not included because no regression estimator is used. Note that the FSWM structure parameters are obtained in the FKELM training phase, and the output weighting vector and kernel parameter are gained at the same time. In the FKELM training process, the evaluation function of Equation (19) replaces Equation (16) as the generalization performance index. The kernel parameter varies over the range given in Section 4.1.
The time constant of the linear dynamic part in each SWM varies continuously with the aircraft engine operation, as presented in Figure 5. The time constant variation indicates that the tunable dynamic characteristics in the linear part make the block-oriented Wiener model more flexible.
Table 5 gives the dynamic performances of the five identification methods on the training and testing data with regard to the performance indices (PC, EPmax, EPmean, Ttrain and Ttest).
The Wiener models with tuned time constants have obviously better estimation accuracy than the basic WM according to the first three indices in Table 5. The block-oriented Wiener models using KELM, namely the GA-SWM and FSWM, have superior estimation accuracy to the NN-WM and EWM. The GA-SWM and FSWM have almost the same dynamic estimation performance.
Where computational effort is concerned, the WM has the shortest training time, and the same holds for the testing time, since there are no regression estimator parameters to be tried or trained in the basic WM. The FSWM is the fastest of the four Wiener models with tuned time constants. All FSWM parameters are generated in the FKELM training stage, no optimization algorithm is involved, and the selection of the model feature parameters and the tuning of the FKELM output weight vector are not performed stepwise. These results indicate that the proposed FSWM strategy is an effective way to obtain the system parameters.
The NN performance indices are not presented, since it was already shown in [25] that the dynamic estimation performance of the NN is inferior to that of the EWM. The averages of the engine model outputs in terms of three indices ($\overline{PC}$, EPmax and EPmean) are shown in Figure 6, where $\overline{PC}$ is defined by $\overline{PC} = 1 - PC$. As seen from Figure 6, the Wiener models with tuned time constants have obviously smaller error percentages than the basic WM in both the training and testing modes.
The steady-state estimation error is another key index for evaluating the performance of the proposed modeling approach. Fifteen operating points above the idle condition are used, at which the steady-state errors are calculated. Figure 7 gives the steady-state estimation errors of the NN and the block-structured FSWM. The FSWM yields small steady-state errors, while the NN produces maximum errors close to 3% for the NL, NH, W3 and EGT estimates around idle, as shown in Figure 7. The steady-state error of the NN approaches that of the FSWM at engine operation near the design point.
The basic WM and the other tunable Wiener models are developed on the basis of the block structure, which contains a nonlinear static part to represent the static behavior. Hence, the block-oriented Wiener models have almost the same steady-state performance, and only the FSWM is compared against the NN in Figure 7.
Since the identified engine model is a SIMO system, it runs with the architecture of the bank of SWMs. In order to further evaluate the engine model identified by the FSWM, joint tests of the static and dynamic behavior are carried out, in which the time constants of all linear dynamic elements are simultaneously modified by the regression estimators. Figure 8 gives a more detailed comparison of the FSWM and the EWM for the NL, NH, W3 and EGT estimates over a 200 s joint-test simulation. The engine models identified by both the EWM and the FSWM show acceptable agreement with the target TEM. Compared to the EWM, the model outputs of the FSWM are closer to the TEM outputs in Figure 8, which indicates that the FSWM produces smaller modeling errors and has better dynamic estimation accuracy than the EWM. The FSWM produces steady-state estimation accuracy similar to that of the EWM, which coincides with the previous module-test results.

5. Conclusions

This paper develops a systematic approach comprising model structure selection, parameter tuning and model integration steps that lead to improved model identification. The novelty of this methodology lies in the development of the FKELM algorithm and the bank-of-SWMs architecture; the combination of these techniques for aircraft engine modeling is named the FSWM. One advantage of this methodology is that the dynamic estimation becomes more accurate because the time constant of the linear dynamic element is tunable, in contrast to the fixed value in the basic WM. The steady-state estimation accuracy of the FSWM is superior to that of the traditional machine learning method owing to the nonlinear static element retained in the block-oriented WM. Another advantage is that the globally optimal model feature parameters and weighting vector are obtained directly in the training phase, which evidently reduces the computational time, and, unlike the EWM, no population-based optimization algorithm is employed to tune the model parameters. This indicates that the bank of SWMs could be a promising model-based tool for on-board real-time control and fault diagnosis.
The methodology is tested and validated using measurement data of a low-bypass-ratio aircraft engine generated from GGTS. Both the SWM related to each engine output and the bank of SWMs representing the entire set of engine outputs are numerically evaluated against the engine simulation data. The methodology developed in this paper is not limited to dual-spool engines and can be applied to other engine types. The model feature parameters, including the structure parameters of the regressor and the kernel parameter of the KELM, are the key elements affecting the FSWM performance for aircraft engine system identification. The proposed FKELM provides an effective way to rapidly select these model parameters together with the output weighting vector during the training process.
This research points out a new direction in block-oriented Wiener model identification by proposing an appropriate bank of SWMs using the FKELM technique that is specifically beneficial for aircraft engine modeling applications. Although the FKELM produces smaller dynamic modeling errors than the EWM in the dynamic tracking simulation, the maximum dynamic errors are still more than 0.01 in Figure 8. There are several important topics for further study related to this research. First, the transfer functions for the four outputs are all simplified to first-order inertial expressions, and a second-order function would better approximate a two-spool gas turbine engine. Further work can investigate the performance when the order of the linear dynamic element is increased and when each SWM has a different transfer function order. Second, although this paper focuses on SIMO system identification for a gas turbine engine at ground conditions, extensions to cases with more aero-thermodynamic parameters than examined in this study, or to wider operating conditions across the flight envelope, are worthy of future exploration.

Acknowledgment

We are grateful for the financial support of the National Natural Science Foundation of China (No. 61304113) and the China Outstanding Postdoctoral Science Foundation (No. 2015T80552). Gratitude is also extended to Prof. Viliam Makis for useful suggestions and to the China Scholarship Council for supporting the first author in carrying out collaborative research in the Department of Mechanical and Industrial Engineering at the University of Toronto.

Author Contributions

Feng Lu and Jinquan Huang contributed in developing the ideas of this research, Yu Ye performed this research. All of the authors were involved in preparing this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohammadi, E.; Montazeri-Gh, M. Performance enhancement of global optimization-based gas turbine fault diagnosis systems. J. Propuls. Power 2016, 32, 214–224. [Google Scholar] [CrossRef]
  2. Zhong, S.S.; Luo, H.; Lin, L. An improved correlation-based anomaly detection approach for condition monitoring data of industrial equipment. In Proceedings of the IEEE International Conference on Prognostics and Health Management (ICPHM), Ottawa, ON, Canada, 20–22 June 2016. [Google Scholar]
  3. Dimogianopoulos, D.; Hios, J.; Fassois, S. Aircraft engine health management via stochastic modelling of flight data interrelations. Aerosp. Sci. Technol. 2012, 16, 70–81. [Google Scholar] [CrossRef]
  4. Qi, Y.W.; Bao, W.; Chang, J.T. State-based switching control strategy with application to aeroengine safety protection. J. Aerosp. Eng. 2015, 28, 04014076. [Google Scholar] [CrossRef]
  5. Borguet, S.; Leonard, O.; Dewallef, P. Regression-based modeling of a fleet of gas turbine engines for performance trending. J. Eng. Gas Turbines Power 2016, 138, 021201. [Google Scholar] [CrossRef]
  6. Xue, W.; Guo, Y.Q. Aircraft Engine Sensor Fault Diagnostics Based on Estimation of Engine’s Health Degradation. Chin. J. Aeronaut. 2009, 22, 18–21. [Google Scholar]
  7. Simon, D. A comparison of filtering approaches for aircraft engine health estimation. Aerosp. Sci. Technol. 2008, 12, 276–284. [Google Scholar] [CrossRef]
  8. Wen, H.; Zhu, Z.H.; Jin, D.; Hu, H. Model predictive control with output feedback for a deorbiting electrodynamic tether system. J. Guid. Control Dyn. 2016, 39, 2455–2460. [Google Scholar] [CrossRef]
  9. Pourbabaee, B.; Meskin, N.; Khorasani, K. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties. Mech. Syst. Signal Proc. 2016, 76, 136–156. [Google Scholar] [CrossRef]
  10. Chapman, J.W.; Lavelle, T.M.; May, R.D.; Litt, J.S.; Guo, T.H. Propulsion System Simulation Using the Toolbox for The Modeling and Analysis of Thermodynamic Systems (T-MATS). In Proceedings of the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, Cleveland, OH, USA, 28–30 July 2014. [Google Scholar]
  11. Sun, J.G.; Vasilyev, V.; Ilyasov, B. Advanced Multivariable Control Systems of Aeroengines; Beihang Press: Beijing, China, 2005. [Google Scholar]
  12. Aretakis, N.; Roumeliotis, I.; Alexiou, A.; Romessis, C.; Mathioudakis, K. Turbofan Engine Health Assessment From Flight Data. J. Eng. Gas Turbines Power 2014, 137. [Google Scholar] [CrossRef]
  13. Visser, W.; Broomhead, M. GSP: A generic Object-Oriented Gas Turbine Simulation Environment. In Proceedings of the ASME Turbo Expo 2000: Power for Land, Sea, and Air, Munich, Germany, 8–11 May 2000. [Google Scholar]
  14. Kurzke, J. GASTURB 12 User’s Manual; GasTurb GmbH: Dachau, Germany, 2013. [Google Scholar]
  15. Ruano, A.E.; Fleming, P.J.; Teixeira, C.; RodriGuez-Vazquez, C.K.; Fonseca, C.M. Nonlinear identification of aircraft gas-turbine dynamics. Neurocomputing 2003, 55, 551–579. [Google Scholar] [CrossRef]
  16. Lytle, J.; Follen, G.; Naiman, C.; Evans, A. Numerical Propulsion System Simulation (NPSS) 1999 Industry Review; NASA Glenn Research Center: Cleveland, OH, USA, 6–7 October 1999.
  17. Zhou, W.X. Research on Object-Oriented Modeling and Simulation for Aeroengine and Control System. Ph.D. Thesis, Nanjing University of Aeronautics and Astronautics, Nanjing, China, 2007. [Google Scholar]
  18. Kyprianidis, K.G.; Sethi, V.; Ogaji, S.O.T.; Pilidis, P.; Singh, R.; Kalfas, A.I. Uncertainty in gas turbine thermo-fluid modelling and its impact on performance calculations and emissions predictions at aircraft system level. Proc. Inst. Mech. Eng. Part G 2012, 226, 163–181. [Google Scholar] [CrossRef]
  19. He, K.; Xu, Q.S.; Jia, M.P. Modeling and Predicting Surface Roughness in Hard Turning Using a Bayesian Inference-Based HMM-SVM Model. IEEE Trans. Autom. Sci. Eng. 2015, 12, 1092–1103. [Google Scholar] [CrossRef]
  20. Zaidan, M.A.; Mills, A.R.; Harrison, R.F.; Fleming, P.J. Gas turbine engine prognostics using Bayesian hierarchical models: A variational approach. Mech. Syst. Signal Proc. 2016, 70, 120–140. [Google Scholar] [CrossRef]
  21. Basso, M.; Giarre, L.; Groppi, S.; Zappa, G. NARX models of an industrial power plant gas turbine. IEEE Trans. Control Syst. Technol. 2005, 13, 599–604. [Google Scholar] [CrossRef]
  22. Wang, X.; Li, Q.; Xie, Z.H. New Feature Selection Method Based on SVM-RFE. Adv. Mater. Res. 2014, 926, 3100–3104. [Google Scholar] [CrossRef]
  23. Fei, J.Z.; Zhao, N.B.; Shi, Y.; Feng, Y.M.; Wang, Z.W. Compressor performance prediction using a novel feed-forward neural network based on Gaussian kernel function. Adv. Mech. Eng. 2016, 8. [Google Scholar] [CrossRef]
  24. Schoukens, J.; Pintelon, R.; Rolain, Y.; Schoukens, M.; Tiels, K.; Vanbeylen, L.; van Mulders, A.; Vandersteen, G. Structure discrimination in block-oriented models using linear approximations: A theoretic framework. Automatica 2015, 53, 225–234. [Google Scholar] [CrossRef]
  25. Mohammadi, E.; Montazeri-Gh, M. A new approach to the gray-box identification of Wiener models with the application of gas turbine engine modeling. J. Eng. Gas Turbines Power 2015, 137, 071202. [Google Scholar] [CrossRef]
  26. Luo, M.; Sun, F.; Liu, H. Joint block structure sparse representation for multi-input-multi-output (MIMO) T-S fuzzy system identification. IEEE Trans. Fuzzy Syst. 2014, 22, 1387–1400. [Google Scholar] [CrossRef]
  27. Narendra, K.S.; Gallman, P.G. An iterative method for the identification of nonlinear systems using a Hammerstein model. IEEE Trans. Autom. Control 1966, 11, 546–550. [Google Scholar] [CrossRef]
  28. Wills, A.; Ninness, B. Generalised Hammerstein-Wiener system estimation and a benchmark application. Control Eng. Pract. 2012, 20, 1097–1108. [Google Scholar] [CrossRef]
  29. Paduart, J.; Lauwers, L.; Pintelon, R.; Schoukens, J. Identification of a Wiener- Hammerstein system using the polynomial nonlinear state space approach. Control Eng. Pract. 2012, 20, 1133–1139. [Google Scholar] [CrossRef]
  30. Wakui, T.; Imaizumi, N.; Yokoyama, R. Model-based performance monitoring with dynamic compensation for heat utilization process in distributed energy system. J. Syst. Des. Dyn. 2012, 6, 597–615. [Google Scholar]
  31. Na, J.; Herrmann, G. Online adaptive approximate optimal tracking control with simplified dual approximation structure for continuous-time unknown nonlinear systems. IEEE/CAA J. Autom. Sin. 2014, 1, 412–422. [Google Scholar]
  32. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feed forward neural networks. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, 25–29 July 2004. [Google Scholar]
  33. Deng, W.Y.; Zheng, Q.H.; Zhang, K. Reduced Kernel Extreme Learning Machine. In Proceedings of the 8th International Conference on Computer Recognition Systems CORES, Milkow, Poland, 27–29 May 2013. [Google Scholar]
  34. You, C.X.; Huang, J.Q.; Lu, F. Recursive reduced kernel based extreme learning machine for aero-engine fault pattern recognition. Neurocomputing 2016, 214, 1038–1045. [Google Scholar] [CrossRef]
  35. Seber, G.A.F.; Lee, A.J. Linear Regression Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  36. Kulikov, G.G.; Thompson, H.A. Dynamic Modeling of Gas Turbines: Identification, Simulation, Condition Monitoring and Optimal Control; Springer: London, UK, 2004. [Google Scholar]
  37. Lu, F.; Wang, Y.F.; Huang, J.Q.; Huang, Y.H. Gas Turbine Transient Performance Tracking Using Data Fusion Based on an Adaptive Particle Filter. Energies 2015, 8, 13911–13927. [Google Scholar] [CrossRef]
  38. Gringard, M.; Kroll, A. On the parametrization of APRBS and multisine test signals for the identification of nonlinear dynamic TS-models. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6–9 December 2016. [Google Scholar]
  39. Deflorian, M.; Zaglauer, S. Design of experiments for nonlinear dynamic system identification. In Proceedings of the 18th IFAC World Congress, Milano, Italy, 28 August–2 September 2011. [Google Scholar]
Figure 1. The training schematic of self-tuning Wiener model related to one examined engine output.
Figure 2. The bank of self-tuning Wiener models for the examined engine.
Figure 3. The procedure schematic of the methodology for aircraft engine identification.
Figure 4. The input and output data from TEM for aircraft engine identification.
Figure 5. The time constant variation of linear dynamic block in the FSWM: (a) NL; (b) NH; (c) W3; (d) EGT.
Figure 6. Comparisons of five Wiener models using the average performance indices: (a) Train mode; (b) Test mode.
Figure 7. The steady error comparisons of identified engine model between the NN and FSWM: (a) NL; (b) NH; (c) W3; (d) EGT.
Figure 8. Variations of aircraft engine model outputs in joint test: (a) NL; (b) NH; (c) W3; (d) EGT.
Table 1. Design point specifications of the examined engine.

Label | Description | Value
H | Altitude | 0 m
Ma | Mach number | 0
Wf | Fuel flow | 2.48 kg/s
A8 | Throttle area | 0.2597 m²
NL | Physical low pressure spool speed | 10,302 r/min
NH | Physical high pressure spool speed | 13,340 r/min
W3 | Air flow | 75.6594 kg/s
EGT | Exhaust gas temperature | 1157.34 K
Table 2. The computational efforts of the three cross-validation methods on the benchmark dataset "sinc" (s).

Method | 500 | 1000 | 1500 | 2000 | 2500 | 3000
LOOC | 0.7491 | 7.7227 | 33.8026 | 74.1476 | 151.0138 | 277.2857
KC | 0.1641 | 0.7006 | 3.1656 | 2.9968 | 5.1389 | 8.0324
FLOOC | 0.0851 | 0.2268 | 1.2960 | 1.4037 | 2.5874 | 3.8879
Table 3. The RMSE of various KELMs on the benchmark datasets "Boston housing" and "Abalone".

Dataset | γ | Gaussian: LOOC-KELM | Gaussian: KC-KELM | Gaussian: FKELM | Asymptotic: LOOC-KELM | Asymptotic: KC-KELM | Asymptotic: FKELM
Boston housing | 10^−2 | 0.41620 | 0.43200 | 0.45100 | 0.20877 | 0.19978 | 0.22111
Boston housing | 10^−1 | 0.26079 | 0.37001 | 0.27346 | 0.10276 | 0.11424 | 0.11406
Boston housing | 10^0 | 0.09068 | 0.16412 | 0.09078 | 0.08604 | 0.10308 | 0.07717
Boston housing | 10^1 | 0.07475 | 0.10137 | 0.07676 | 0.07375 | 0.10302 | 0.07170
Boston housing | 10^2 | 0.10673 | 0.11301 | 0.10885 | 0.07493 | 0.10328 | 0.07259
Abalone | 10^−2 | 0.26021 | 0.26153 | 0.26066 | 0.17534 | 0.18383 | 0.18874
Abalone | 10^−1 | 0.12178 | 0.12364 | 0.11851 | 0.09174 | 0.09939 | 0.09623
Abalone | 10^0 | 0.09149 | 0.09912 | 0.09316 | 0.08744 | 0.09720 | 0.09188
Abalone | 10^1 | 0.08934 | 0.09720 | 0.09218 | 0.08961 | 0.09930 | 0.09215
Abalone | 10^2 | 0.09441 | 0.09888 | 0.09493 | 0.09367 | 0.09941 | 0.09308
Table 4. Specifications of the optimal structure for Wiener models.

Model Type | NL (na, nb) | NH (na, nb) | W3 (na, nb) | EGT (na, nb)
NN-WM | (1, 1) | (1, 1) | (1, 2) | (0, 2)
EWM | (1, 1) | (1, 1) | (1, 1) | (0, 2)
GA-SWM | (1, 1) | (1, 2) | (1, 1) | (0, 2)
FSWM | (1, 1) | (1, 2) | (1, 2) | (1, 1)
Table 5. The dynamic performance indices of train and test mode by five Wiener models.

Outputs | Methods | PC % (Train) | EPmean % (Train) | EPmax % (Train) | Ttrain (s) | PC % (Test) | EPmean % (Test) | EPmax % (Test) | Ttest (s)
NL | WM | 89.3425 | 1.2931 | 3.1574 | 6.5430 | 89.1452 | 1.3952 | 4.6521 | 0.0666
NL | NN-WM | 91.6513 | 0.7611 | 3.2982 | 13.6896 | 91.0735 | 0.8211 | 3.3837 | 0.0980
NL | EWM | 91.8945 | 0.6088 | 3.5432 | 25.4344 | 91.5883 | 0.6401 | 3.5615 | 0.1830
NL | GA-SWM | 92.1245 | 0.5042 | 2.3628 | 19.5433 | 91.9654 | 0.5942 | 2.5627 | 0.0790
NL | FSWM | 92.0860 | 0.4948 | 2.4983 | 12.5448 | 92.9837 | 0.5401 | 2.5875 | 0.0772
NH | WM | 91.5461 | 0.1288 | 1.3294 | 6.8910 | 91.5142 | 0.2547 | 1.9017 | 0.0762
NH | NN-WM | 92.8771 | 0.1093 | 1.0712 | 13.8384 | 92.0011 | 0.3223 | 1.4396 | 0.0991
NH | EWM | 93.1091 | 0.0412 | 0.9093 | 26.0196 | 92.7981 | 0.2198 | 1.3015 | 0.1855
NH | GA-SWM | 93.2503 | 0.0461 | 0.8872 | 20.3589 | 93.0218 | 0.2021 | 1.0637 | 0.0903
NH | FSWM | 93.3025 | 0.0485 | 0.9016 | 13.0118 | 92.9824 | 0.1977 | 1.1174 | 0.0896
W3 | WM | 91.3657 | 1.8426 | 3.1154 | 7.0230 | 90.7521 | 2.3594 | 5.4214 | 0.0649
W3 | NN-WM | 94.1293 | 1.4875 | 3.5860 | 14.0188 | 93.3703 | 2.3100 | 4.8191 | 0.0973
W3 | EWM | 94.4033 | 1.2421 | 3.0077 | 25.8054 | 93.1342 | 1.9801 | 4.8994 | 0.1896
W3 | GA-SWM | 95.4259 | 1.2543 | 2.6931 | 19.7997 | 94.6555 | 1.8224 | 4.2287 | 0.0815
W3 | FSWM | 95.3912 | 1.2104 | 2.8706 | 12.3192 | 94.7123 | 1.8753 | 4.0411 | 0.0797
EGT | WM | 89.5422 | 2.5423 | 6.3922 | 6.5970 | 88.2763 | 3.2452 | 7.4682 | 0.0674
EGT | NN-WM | 90.3145 | 1.5593 | 4.7812 | 14.1670 | 89.8872 | 1.8557 | 5.9163 | 0.0986
EGT | EWM | 90.6713 | 1.2431 | 4.0913 | 26.0361 | 90.2201 | 1.7411 | 5.0617 | 0.1879
EGT | GA-SWM | 91.1222 | 1.2933 | 3.7534 | 20.1074 | 90.1244 | 1.8875 | 4.9018 | 0.0833
EGT | FSWM | 91.0479 | 1.2706 | 3.7921 | 13.0536 | 90.1975 | 1.9452 | 4.9533 | 0.0819
