Article

Adaptive Neuro-Fuzzy Inference Systems as a Strategy for Predicting and Controlling the Energy Produced from Renewable Sources

Automation, Computer Science and Electrical Engineering Department, Valahia University of Târgoviște, 2 Carol I Bd., Targoviste 130024, Romania
*
Author to whom correspondence should be addressed.
Energies 2015, 8(11), 13047-13061; https://doi.org/10.3390/en81112355
Submission received: 3 July 2015 / Revised: 6 November 2015 / Accepted: 9 November 2015 / Published: 17 November 2015
(This article belongs to the Special Issue Resilience of Energy Systems)

Abstract

The challenge addressed in this paper is controlling the performance of the future state of a microgrid supplied with energy produced from renewable energy sources. The added value of this proposal consists in identifying the most frequently used criteria, related to each modeling step, that lead to an optimal neural network forecasting tool. In order to underline the effects of users’ decision making on the forecasting performance, in the second part of the article two Adaptive Neuro-Fuzzy Inference System (ANFIS) models are tested and evaluated. Several scenarios are built by changing the prediction time horizon (Scenario 1) and the shape of the membership functions (Scenario 2).

1. Introduction

For renewable energy source management systems it is necessary to know how to optimally schedule or dispatch the power resources. The operations performed for this purpose include forecasting future supplier or consumer behaviour. The accuracy of short-time-horizon predictions has a direct economic impact because it enables the energy provider to plan its resources in advance and to take proactive control actions for the power systems. The quality of the forecast has a significant influence on the economic parameters and the security of the system, enabling optimal management of resources and increased business profitability.
The current energy policy of the European Union (EU) considers the security of supply, competitiveness and sustainability as central goals. Studies conducted by us have highlighted the need for an energy management system from the micro to the macro level, as existing control strategies are not always successfully applied.
The problems in this area are numerous. The main directions of interest in the field, identified by reviewing the scientific literature, are: how to increase conversion efficiency [1,2], how to identify new opportunities complementary to the limitations of existing solutions [3], how to balance the insufficient number of providers against reported market needs [4], how to supply energy to an increasing number of consumers [5], how to provide and promote sustainable buildings and housing projects [6], how to identify new renewable energy sources in order to reduce risks and social and economic costs [7], how to understand the effects associated with the use of RES and the increasing complexity of systems [8], how to identify measures and solutions for ensuring the sustainability of renewable resources [9], how to adjust the impacts of renewable energy for the final beneficiaries in terms of the cost/benefit ratio [10,11], and how to highlight the importance of being efficiently informed in order to make better decisions about energy options [12].
In this area two challenges have to be addressed: the first consists of monitoring, diagnosing and forecasting the functioning of microgrids integrating renewable energy sources, in on-grid or off-grid mode, and the second of optimizing this functioning with an integrated, proactive management system based on a multi-objective decision-making scheme.
To support these interests, we consider the integration of predictive ability a very important function in the command and control of a power system, especially at the microgrid level, which is heavily dependent on the consumer and has reduced storage availability. Thus, we access cloud-stored information for training and testing the prediction tools, based on artificial intelligence techniques, in order to create an open and adaptable tool for optimizing microgrid energy management: economically efficient operation, active control of distributed generation, controlled consumption, and loading of the storage equipment.
Worldwide there are only a few tools which are able to perform dynamic assessments of electrical networks; none of them take into account the characteristics and the specificities of microgrids integrating RES. The efficiency of using Neural Networks (NN) in the area of energy forecasting was demonstrated in our previous works [13,14,15].
The challenge for our paper consists in controlling the performance of the future state of a microgrid with energy produced from renewable energy sources. The added value of this proposal consists in identifying the most used criteria, related to each modeling step, able to lead us to an optimal neural network forecasting tool. In order to underline the effects of the users’ decision making on the forecasting performance, in the second part of the article, two Adaptive Neuro-Fuzzy Inference System (ANFIS) models are tested and evaluated. Several scenarios are built by changing the prediction time horizon (Scenario 1) and the shape of the membership functions (Scenario 2).

2. Exploration and Assessment of Criteria Used for Choosing a Forecasting Tool

In our previous work [13] the forecasting was studied from the industrial and scientific point of view. This was aimed at proposing a novel classification of forecasting tools and guidelines for choosing a suitable prediction technique, that is, a set of criteria to “assess” the applicability of forecasting tools. In this respect, the steps in the forecasting tool modelling, based on NN, are: data pre-processing, determining the forecasting tool architecture, parametrization and implementation.
The added value of this paper consists in identifying the most used criteria for each modeling step, allowing us to implement an optimal forecasting NN architecture. The approach was difficult because of the high number of variables and constraints that condition the parameterization of a NN: the number of hidden layers, the number of neurons in the NN’s input, output and hidden layers, the dimensions of the testing/validation data sets, the learning algorithm to choose, and the method used to measure the forecast made by the tool.

2.1. Data Preprocessing

The input selection methods we selected are experimental methods, intended to find the most important input variables among a large number of candidates [16]. Some authors [17] assume that the input information can be classified into groups (a hierarchical structure). There is no general automatic approach to realize such a grouping, aside from heuristics based on the fusion of physical sensors. Both input selection and hierarchical structuring presume that inputs are independent and assign no priority or importance to the selected input variables.
When the analyzed process is complex it is recommended to normalize the real input and output values into the interval between the maximum and minimum of the transformation function (usually the [0, 1] or [−1, 1] interval). The two most popular methods are the following:
SV = ((0.9 − 0.1) / (MAXVAL − MINVAL)) × (OV − MINVAL)
or:
SV = TFmin + ((TFmax − TFmin) / (MAXVAL − MINVAL)) × (OV − MINVAL)
where:
SV: scaled value;
MAXVAL: maximum value of the data;
MINVAL: minimum value of the data;
TFmax: maximum of the transformation function;
TFmin: minimum of the transformation function;
OV: original value.
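As an illustration only (not code from the paper), the second scaling formula can be implemented directly; the function name and sample values below are hypothetical:

```python
def scale_value(ov, minval, maxval, tf_min=0.1, tf_max=0.9):
    """Map an original value OV from [MINVAL, MAXVAL] into [TFmin, TFmax]."""
    return tf_min + (tf_max - tf_min) / (maxval - minval) * (ov - minval)

# Hypothetical raw power readings, scaled into [0.1, 0.9]:
data = [2.0, 5.0, 8.0, 11.0]
lo, hi = min(data), max(data)
scaled = [scale_value(v, lo, hi) for v in data]
```

With tf_min = 0 and tf_max = 1 this reduces to ordinary min-max normalization into [0, 1].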

2.2. Determining NN Architecture

There are many techniques for determining the forecasting tool architecture. In this section we will cover some of the general “rules of thumb” that can be used. In nearly all cases some additional experimentation will be required to determine the optimal structure.
Every input neuron in the input layer of the NN should represent an independent variable that has an influence on the output. If a pattern is presented to the input layer of a NN then the output layer will generate another pattern. The output layer of the NN presents a pattern to the external environment [18]. Whatever pattern is represented by the output layer can be directly traced back to the input layer. The number of output neurons is related to the type of work that the NN has to perform. If it is to be used to classify items into groups, then it is often preferable to have one output neuron for each group to which an item can be assigned. If the NN is to perform noise reduction on a signal then the number of input neurons will likely match the number of output neurons.
One common mistake made by users is to add a large number of variables and neurons without taking into account the number of parameters to be estimated [19]. The hidden layers are those that do not interact directly with the external environment. The number of hidden layers and the number of neurons in the hidden layers must be considered carefully because they have an important influence on the final output. For many practical problems there is no reason to use more than one hidden layer: a one-hidden-layer NN can approximate arbitrarily well any continuous mapping from one compact space to another. NNs without hidden layers are only capable of representing linearly separable functions or decisions. NNs with two hidden layers can represent functions of any shape. Problems that require two hidden layers are rarely encountered and currently there is no theoretical reason to use neural networks with more than two hidden layers [18].
In [20] the authors proposed three rules as starting points to consider: the number of hidden neurons should be in the range between the size of the input layer and the size of the output layer; the number of hidden neurons should be 2/3 of the input layer size, plus the size of the output layer; the number of hidden neurons should be less than twice the input layer size.
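These three rules of thumb can be expressed as a small helper; the function name and return structure are our own illustration, not part of [20]:

```python
def hidden_size_heuristics(n_in, n_out):
    """The three rule-of-thumb starting points for the hidden layer size."""
    return {
        # Rule 1: somewhere between the input and output layer sizes
        "between_in_and_out": (min(n_in, n_out), max(n_in, n_out)),
        # Rule 2: 2/3 of the input size plus the output size
        "two_thirds_rule": round(2 * n_in / 3) + n_out,
        # Rule 3: hidden neurons should stay below twice the input size
        "upper_bound": 2 * n_in,
    }

# Example: 6 input neurons, 1 output neuron
h = hidden_size_heuristics(6, 1)
```

As the text notes, these are only starting points; additional experimentation is still needed to settle the final architecture.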
One additional method that can be used to reduce the number of hidden neurons is called pruning. It involves evaluating the weighted connections between the layers and if the network contains connections with weights equal to zero they can be removed.

2.3. Parametrization

The test samples must be appropriately selected. Since NNs are “data-driven” methods, they typically require large samples in testing. Input selection reduces the testing data dimension. The NNs are tested with small subsets that include data from only a few past days, selected through statistical measures of similarity. This results in homogeneous, but very small, samples [19].
Usually, the dimension of the test data set must be five times greater than the number of updated parameters. The testing process is usually finished as soon as the testing error is reduced to a specified tolerance level (e.g., 10^−5). The adequate ratio between the number of testing samples and the number of weights in the network has not yet been clearly defined [21,22].
The testing algorithm also has to be established for this task. The most commonly used is the back-propagation algorithm. The goal of back-propagation training is to converge to a near-optimal solution based on the total squared error calculated. Another algorithm, less used in energy applications, is K-nearest-neighbor. It is a learning algorithm simpler than back propagation because there is no model to fit on the data series. Instead, the data series is searched for situations similar to the current one each time a forecast needs to be made [23,24]. In most of the reviewed papers, testing was stopped after a fixed number of iterations or after the error decreased below some specified tolerance.

2.4. Implementation and Performance Testing

The next stage in modelling is the implementation of the NN, which means the estimation of its parameters. The guidelines proposed in [25] for evaluating the effectiveness of the implementation were based on the following question: was the NN properly tested so that its performance was the best it could possibly achieve?
The choice of error measures for comparing forecasting methods has been much discussed, as a consequence of the many competitions started in the 1980s [26]. De Gooijer and Hyndman [27] summarized the most used performance indicators, reproduced in Table 1.
Table 1. The most used performance indicators.

Commonly Used Forecast Accuracy Measures
MSE       Mean Squared Error                       = mean(e_t^2)
RMSE      Root Mean Squared Error                  = sqrt(MSE)
MAE       Mean Absolute Error                      = mean(|e_t|)
MdAE      Median Absolute Error                    = median(|e_t|)
MAPE      Mean Absolute Percentage Error           = mean(|p_t|)
MdAPE     Median Absolute Percentage Error         = median(|p_t|)
MRAE      Mean Relative Absolute Error             = mean(|r_t|)
MdRAE     Median Relative Absolute Error           = median(|r_t|)
GMRAE     Geometric Mean Relative Absolute Error   = gmean(|r_t|)
RelMAE    Relative Mean Absolute Error             = MAE/MAE_b
RelRMSE   Relative Root Mean Squared Error         = RMSE/RMSE_b
LMR       Log Mean squared error Ratio             = log(RelMSE)
PB        Percentage Better                        = 100 × mean(I{|r_t| < 1})
PB (MAE)  Percentage Better (MAE)                  = 100 × mean(I{MAE < MAE_b})
PB (MSE)  Percentage Better (MSE)                  = 100 × mean(I{MSE < MSE_b})

The error e_t is the difference between real and estimated data; p_t is the absolute percentage error; r_t is the relative absolute error; the subscript b denotes a benchmark method; I{u} = 1 if u is true and 0 otherwise.
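The first few measures in Table 1 are straightforward to compute; the following sketch (our own, not from [27]) assumes nonzero actual values so that the percentage errors are defined:

```python
import statistics

def error_measures(actual, forecast):
    """MSE, RMSE, MAE, MdAE and MAPE as defined in Table 1."""
    e = [a - f for a, f in zip(actual, forecast)]          # errors e_t
    p = [100.0 * (a - f) / a for a, f in zip(actual, forecast)]  # percentage errors p_t
    mse = statistics.mean(x * x for x in e)
    return {
        "MSE": mse,
        "RMSE": mse ** 0.5,
        "MAE": statistics.mean(abs(x) for x in e),
        "MdAE": statistics.median(abs(x) for x in e),
        "MAPE": statistics.mean(abs(x) for x in p),
    }

# Hypothetical actual vs. forecast series:
m = error_measures([10.0, 12.0, 14.0, 16.0], [9.0, 13.0, 14.0, 18.0])
```

The relative measures (MRAE, RelMAE, PB, …) would additionally require the errors of a benchmark method, which the sketch omits.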
Most authors agree on the use of the following performance indicators: mean squared error (MSE), root mean squared error (RMSE) and mean absolute error (MAE) for training phase evaluation [27]. Only a few have reported the use of mean absolute percentage errors (MAPE) or the standard deviation of the errors [28,29] (Table 1).
However, recent studies and the experience of system operators indicate that the loss function in the load forecasting problem is clearly nonlinear, and that large errors may have disastrous consequences for a utility [30,31]. Because of this, measures based on squared error are sometimes suggested, as they penalize large errors (RMSE was suggested in [32], MAPE in [33]).
Most researchers test their models by examining their errors in samples other than the one used for parameter estimation, because goodness-of-fit statistics are not enough to predict the actual performance of a method [21].
Also, it is generally recognized that error measures should be easy to understand and closely related to the needs of the decision-makers. Some papers have reported that utilities would rather evaluate forecasting systems by the absolute errors produced, which suggests that MAE could be useful [19]. The shape of the error distribution should also be reported. A few papers included graphs of the cumulative distribution of the errors [34]. Others characterized this distribution by reporting the percentage of errors above some critical values, percentiles [35], or the maximum errors [36,37]. In any case, no single error measure can summarize the efficiency of a forecast.

3. Forecasting Tool Simulation and Performance Evaluation

Considering the bibliographical research presented above, in the next sections we underline the consequences of the choices made during tool modelling on the forecasting performance.

3.1. Adaptive Neuro-Fuzzy Inference System

Despite all their publicized successes, NN-based load forecasting models have been designed relying on time-consuming, empirical, and suboptimal procedures. The potential for applying NN to short term forecasting depends strongly on the extraction of appropriate input variables. This requirement has often been neglected, and many proposals for building the NN input space still use (linear) correlation analysis. Input selection procedures with the capacity to extract high order statistical information from the input-output data must be employed to fully exploit the NN mapping capability.
The advantages and drawbacks of NNs led us to choose a nonlinear network (a neuro-fuzzy system, NFS) as the reference tool for our short-term energy forecasting approach. Thus the case study is performed using an ANFIS network. Neuro-fuzzy systems are a combination of NNs and fuzzy sets and represent a powerful tool for modeling system behavior [38,39]. The NN is used to define the clustering in the solution space, which results in the creation of fuzzy sets [40,41,42].
A particular neuro-fuzzy system architecture is represented by the Adaptive Neuro-Fuzzy Inference System (ANFIS) [43]. ANFIS is a Sugeno-type fuzzy inference system in which the parameters associated with specific membership functions are computed using either a backpropagation gradient descent algorithm alone or in combination with a least squares method. It has been widely applied to random data sequences with highly irregular dynamics [44], e.g., for forecasting non-periodic short-term stock prices [45]. Figure 1 illustrates the ANFIS architecture for two input parameters, where the nodes of the same layer have similar functions, as described next.
Figure 1. ANFIS architecture.
Here we denote the output of the ith node in layer j as O_{j,i}:
Layer 1. Every node i in this layer is an adaptive node with a node function:
O_{1,i} = μ_{Ai}(x),  for i = 1, 2
or:
O_{1,i} = μ_{B(i−2)}(y),  for i = 3, 4
where x (or y) is the input to node i and Ai (or Bi−2) is a linguistic label (such as “small” or “large”) associated with this node. In other words, O1,i is the membership grade of a fuzzy set A (=A1, A2, B1 or B2) and it specifies the degree to which the given input x (or y) satisfies the quantifier A.
The membership function for A can be any appropriate parameterized membership function introduced here, such as the generalized bell function:
μ_A(x) = 1 / (1 + |(x − c_i) / a_i|^(2b_i))
where {a_i, b_i, c_i} is the parameter set. As the values of these parameters change, the bell-shaped function varies accordingly, producing various forms of membership function for fuzzy set A. Parameters in this layer are referred to as premise parameters.
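A minimal, self-contained implementation of the generalized bell function above (illustrative only; parameter values are hypothetical):

```python
def gbell(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# The grade is 1 at the center c and 0.5 at a distance of a from it;
# b controls how sharply the grade falls off beyond that distance.
center_grade = gbell(0.0, a=2.0, b=3.0, c=0.0)
half_grade = gbell(2.0, a=2.0, b=3.0, c=0.0)
```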
Layer 2. Every node in this layer is a fixed node labelled Π, whose output is the product of all the incoming signals:
O_{2,i} = w_i = μ_{Ai}(x) × μ_{Bi}(y),  i = 1, 2
Each node output represents the firing strength of a rule. In general, any other T-norm operator that performs a fuzzy AND can be used as the node function in this layer.
Layer 3. Every node in this layer is a fixed node labelled N. The ith node calculates the ratio of the ith firing strength and the sum of all firing strengths:
O_{3,i} = w̄_i = w_i / (w_1 + w_2),  i = 1, 2
For convenience, outputs of this layer are called normalized firing strengths.
Layer 4. Every node i in this layer is an adaptive node with a node function:
O_{4,i} = w̄_i × f_i = w̄_i (p_i x + q_i y + r_i)
where w̄_i is a normalized firing strength from layer 3 and {p_i, q_i, r_i} is the parameter set of this node. Parameters in this layer are referred to as consequent parameters.
Layer 5. The single node in this layer is a fixed node labelled Σ, which computes the overall output O_{5,1} as the summation of all incoming signals:
O_{5,1} = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)
Thus we have constructed an adaptive network that is functionally equivalent to a Sugeno fuzzy model.
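The five layers above can be sketched as a single forward pass for the two-input, two-rule case. The function below is our own illustration (with hypothetical parameter values), not the authors' implementation:

```python
def gbell(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """One pass through the five ANFIS layers for two rules.
    premise: bell parameters (a, b, c) for A1, A2, B1, B2;
    consequent: (p, q, r) per rule."""
    # Layer 1: membership grades of x and y
    mu_a = [gbell(x, *premise[0]), gbell(x, *premise[1])]
    mu_b = [gbell(y, *premise[2]), gbell(y, *premise[3])]
    # Layer 2: firing strengths (product T-norm)
    w = [mu_a[0] * mu_b[0], mu_a[1] * mu_b[1]]
    # Layer 3: normalized firing strengths
    wbar = [wi / (w[0] + w[1]) for wi in w]
    # Layer 4: first-order rule outputs
    f = [p * x + q * y + r for (p, q, r) in consequent]
    # Layer 5: weighted overall output
    return sum(wb * fi for wb, fi in zip(wbar, f))

# Hypothetical parameters: identical premises make both rules fire equally,
# so the output is the average of the two rule outputs f1 = x and f2 = y.
out = anfis_forward(2.0, 4.0,
                    premise=[(1.0, 1.0, 0.0)] * 4,
                    consequent=[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```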
The ANFIS learning algorithm. When the premise parameters are fixed, the overall output is a linear combination of the consequent parameters. In symbols, the output f can be written as:
f = (w̄_1 x) c_{11} + (w̄_1 y) c_{12} + w̄_1 c_{10} + (w̄_2 x) c_{21} + (w̄_2 y) c_{22} + w̄_2 c_{20}
which is linear in the consequent parameters c i , j (i = 1, 2; j = 0, 1, 2). A hybrid algorithm adjusts the consequent parameters c i , j in a forward pass and the premise parameters { a i ,   b i ,   c i } in a backward pass [40,41].
In the forward pass the network inputs propagate forward until Layer 4, where the consequent parameters are identified by the least-squares method. In the backward pass, the error signals propagate backwards and the premise parameters are updated by gradient descent.
Because the updating rules for the premise and consequent parameters are decoupled in the hybrid learning rule, a computational speedup may be possible by using variants of the gradient method or other optimization techniques on the premise parameters.
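Since the output is linear in the six coefficients c_ij, the forward-pass identification reduces to an ordinary least-squares problem. The following pure-Python illustration (ours, with synthetic data and our own function names) solves the normal equations directly:

```python
import random

def design_row(x, y, w1bar, w2bar):
    """Row of the linear system: coefficients of c11, c12, c10, c21, c22, c20."""
    return [w1bar * x, w1bar * y, w1bar, w2bar * x, w2bar * y, w2bar]

def gauss_solve(a, b):
    """Solve a linear system by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_consequents(rows, targets):
    """Least-squares estimate via the normal equations (A^T A) c = A^T y."""
    n = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(n)]
    return gauss_solve(ata, atb)

# Synthetic check: data generated from known consequent parameters is recovered.
random.seed(1)
true_c = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
rows, targets = [], []
for _ in range(50):
    x, y, w1 = random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.1, 0.9)
    row = design_row(x, y, w1, 1.0 - w1)
    rows.append(row)
    targets.append(sum(c * v for c, v in zip(true_c, row)))
est = fit_consequents(rows, targets)
```

In the real hybrid algorithm the normalized firing strengths w̄_i come from the forward pass with the current premise parameters, rather than being drawn at random as in this synthetic check.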
The success of ANFIS stems from aspects such as the designated distributive inferences stored in the rule base, the effective learning algorithm for adapting the system’s parameters, and its ability to fit irregular or non-periodic time series.
On the other hand, when used alone in applications to non-periodic short-term forecasting, ANFIS predictions can make large residual errors due to high residual variance, consequently degrading prediction accuracy [46,47]. It is also very difficult for a non-expert to interpret the fuzzy rules generated by ANFIS because of the form of the consequents (linear combinations of the inputs).

3.2. Simulation Conditions and Strategy

ANFIS is applied to a database obtained from an experimental photovoltaic amphitheater of minimal dimensions (0.4 kV/10 kW), located in the east-central region of Romania, in the city of Targoviste [14]. The database contains 296 data points {(y(t), u(t)), t = 1, …, 296}.
For ANFIS the input selection method is based on the assumption that the ANFIS model with the smallest RMSE [34] after one epoch of training has a greater potential of achieving a lower RMSE when given more epochs of training.
This assumption is not absolutely true, but it is heuristically reasonable: ANFIS can usually generate satisfactory results right after the first epoch of training, that is, after the first application of the least-squares method [16]. For example, for a problem with 10 inputs, identifying the three most influential ones would require first building C(10,3) = 120 ANFIS models.
Each is trained once and the combination of inputs with the smallest training error is selected. Training these 120 small models involves less computation than training a full 10-input ANFIS model, because for 10 inputs the Sugeno inference system generates 2^10 = 1024 rules and (10 + 1) × 1024 parameters.
This prohibitive computation time determined us to build two groups for the selection of variables: a first set of historic outputs {y(t − 1), y(t − 2), y(t − 3)} and another set {u(t), u(t − 1), u(t − 2), u(t − 3), u(t − 4), u(t − 5)}. Thus, only 6 × 4 = 24 ANFIS models are tested and analyzed.
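The model counts quoted above can be checked directly (illustrative arithmetic only; the 2 MFs-per-input assumption behind the 2^10 rule count is ours):

```python
from math import comb

# Exhaustive search: train every 3-of-10 input combination for one epoch
full_search = comb(10, 3)             # candidate ANFIS models to train once

# Cost of a single 10-input Sugeno system, assuming 2 MFs per input
rules = 2 ** 10                       # rule count
consequent_params = (10 + 1) * rules  # (n_inputs + 1) parameters per rule

# Grouped search actually used: 6 candidate u-lags x 4 candidate y-lags
grouped_search = 6 * 4
```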
The difference between the electricity produced and consumed from renewable energy sources is considered as the output (y(t)) to be forecast by ANFIS, and the exterior temperature as the input (u(t)). The input and output values are not normalized.
The estimation procedure is carried out with two ANFIS models, presented in Figure 2, having the following features:
  • Model 1 represents an ANFIS structure with two inputs: {y(t − a), u(t − b)}
  • Model 2 is an ANFIS structure with three inputs: {y(t − a), u(t − b), u(t − c)}, where a = 1, …, 4 and b, c = 1, …, 6
Figure 2. ANFIS models used for simulation.
Since ANFIS is a particular neuro-fuzzy network architecture, the user cannot change the number of its layers, but we can investigate the influence of the membership function shapes on the output.
The learning algorithm tested is back propagation, the most widely used. In future work we intend to also identify the influence of the learning method. Regarding the dimensions of the training and testing data sets, we split our data following the recommendation that the test data set must be five times greater than the number of updated parameters.
The investigation uses as performance metrics the prediction error, MAPE, RMSE and MAE. In any case, these error measures cannot replace the error distribution: the shape of the probability density function (PDF) is the most suggestive indicator for evaluating the forecasting model’s robustness and stability.
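The error PDF can be approximated with a normalized histogram; the following sketch (our own, not the paper's tooling) is one simple way to obtain it:

```python
def error_pdf(errors, n_bins=10):
    """Approximate the PDF of forecast errors with a normalized histogram.
    Returns bin centers and densities; the densities integrate to 1."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant error series
    counts = [0] * n_bins
    for e in errors:
        idx = min(int((e - lo) / width), n_bins - 1)  # clamp the right edge
        counts[idx] += 1
    centers = [lo + (i + 0.5) * width for i in range(n_bins)]
    density = [c / (len(errors) * width) for c in counts]
    return centers, density

# Hypothetical residual series:
centers, density = error_pdf([float(e) for e in range(10)], n_bins=5)
```

A narrow density concentrated around zero, changing little as the horizon grows, is the kind of "stabilization" discussed for the PDF plots in Section 4.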

4. Simulation Results

The challenge for our paper consists in controlling the performance of the future state of the system. Thus, considering the conditions and the strategy described above, we have performed several simulations. In order to underline the effects of the users’ decisions on the forecasting performance, several scenarios have been built by changing the prediction time horizon (Scenario 1) and the type and number of membership functions (Scenario 2). The training parameters remain the same in each case: number of epochs = 20, ss = 0.01, ss_dec_rate = 0.5, ss_inc_rate = 1.5, training/test data set = 145/145.

4.1. Scenario 1

This scenario identifies the performance of the ANFIS models in relation to the time horizon. In this respect, each ANFIS model is trained and tested for a short-term horizon equal to 20 steps ahead (t + 20), a mid-term horizon (t + 40) and a long-term horizon (t + 100).
The performance of ANFIS models is measured using the error metric (prediction error) and mean absolute percentage error (MAPE).
Figure 3 and Figure 4 compare the performance of “Model 1” (Figure 3a,b) and of “Model 2” (Figure 4a,b) for time horizons equal to 20 and 40. We can see that the ANFIS structure with three inputs, which has additional information on the input, performs better than “Model 1”, especially over the short time horizon. For the medium- and long-term cases, the errors become progressively larger, which affects the forecast in terms of accuracy and confidence.
Figure 3. The performance of ANFIS for “Model 1”, for a time horizon equal to: (a) 20 steps ahead and (b) 40 steps ahead.
Figure 4. The performance of ANFIS for “Model 2”, for a time horizon equal to: (a) 20 steps ahead and (b) 40 steps ahead.
The performance metrics RMSE and MAE, measured on the test set, for “Model 1” and “Model 2” (Table 2, Figure 5a,b) show that these models don’t achieve a satisfactory compromise between short-term accuracy and long-term stability.
Figure 5. (a) MAE for Model 1 and Model 2 on different prediction of test data set; (b) RMSE for Model 1 and Model 2 on different prediction of test data set.
Table 2. RMSE and MAE values computed in case of Scenario 1.

t + pred  ANFIS    RMSE_Test                   MAE_Test
                   Epochs = 20  Epochs = 100   Epochs = 20  Epochs = 100
1         model 1  0.5532       0.5514         0.0809       0.0816
1         model 2  0.4083       0.4432         0.0597       0.0708
4         model 1  1.2228       1.23           0.2384       0.2591
4         model 2  1.1955       1.1959         0.2042       0.2239
10        model 1  3.0388       3.2713         1.1357       1.226
10        model 2  4.5201       7.0661         0.5617       1.0066
20        model 1  4.1209       4.7385         3.1039       3.382
20        model 2  4.374        8.1093         2.5285       2.6426
40        model 1  4.5597       4.622          3.1039       3.0073
40        model 2  5.7804       5.7492         2.7198       2.2013
In any case, error measures are only summaries of the error distribution. This distribution is usually expected to be normal white noise in a forecasting problem, but it will probably not be so in a complex problem like load forecasting. No single error measure can summarize it, so the shape of the distribution should also be examined. Keeping the total error low, therefore, means keeping the model simple.
In this respect, we have computed the probability density function (PDF) for the selected models, for short time (t + 20), medium time (t + 40) and long time (t + 100) prediction horizon (Figure 6a,b).
Figure 6. Probability density function for twenty, forty and a hundred steps ahead forecasting, for (a) “Model 1” (b) “Model 2”.
In the first case, the error dispersion in the medium and long term increases with the forecasting horizon. In our view this is not the expected tendency. In consequence, we trained and tested the second model. The growth trend of the error observed in the medium term for “Model 1” is reduced, which indicates a “stabilization” of the forecasting model with the prediction horizon. Thus, we can observe that the additional information brought to the input of ANFIS contributes to the reduction of the errors and the improvement of the forecast. In the future we intend to link ANFIS models “in series” so that one “forecast” can learn from the others.

4.2. Scenario 2

The second simulation scenario identifies, in terms of forecasting performance (MAE and RMSE), the effect of changing the membership function type and number. The models have been configured with two or three membership functions (MFs), respectively, for each input, using Gaussian and generalized bell MFs. The computational results are presented in Table 3 and Table 4.
Table 3. MAE and RMSE values computed in Scenario 2, for 2 MFs associated with the ANFIS inputs.

t + pred  MF type  No  MAE_Test            RMSE_Test
                       Model 1   Model 2   Model 1   Model 2
1         gbellmf  2   0.0718    0.0579    0.4186    0.5325
4         gbellmf  2   0.2665    0.2863    1.1395    1.228
10        gbellmf  2   1.0757    −0.9151   0.515     3.2758
20        gbellmf  2   3.3012    1.6894    5.3083    4.8206
40        gbellmf  2   3.0616    1.3997    5.5443    4.7185
1         gaussmf  2   0.0718    0.0579    0.4186    0.5325
4         gaussmf  2   0.2665    0.2863    1.1395    1.228
10        gaussmf  2   1.0757    −0.9151   0.515     3.2758
20        gaussmf  2   3.3012    1.6894    5.3083    4.8206
40        gaussmf  2   3.0616    1.3997    5.5443    4.7185
Table 4. MAE and RMSE values computed in Scenario 2, for 3 MFs associated with the ANFIS inputs.

t + pred  MF type  No  MAE_Test            RMSE_Test
                       Model 1   Model 2   Model 1   Model 2
1         gbellmf  3   0.0718    0.0598    0.5739    0.5325
4         gbellmf  3   0.2665    0.4808    1.2949    1.228
10        gbellmf  3   1.0757    −0.9151   0.515     3.2758
20        gbellmf  3   3.0232    1.6526    5.4733    4.4743
40        gbellmf  3   3.2591    1.3997    5.5739    5.1016
1         gaussmf  3   0.0718    0.0598    0.5739    0.5325
4         gaussmf  3   0.2665    0.4808    1.2949    1.228
10        gaussmf  3   1.0757    −0.9151   0.515     3.2758
20        gaussmf  3   3.0232    1.6526    5.4733    4.4743
40        gaussmf  3   3.2591    1.3997    5.5739    5.1016
Changing the type of MFs, for each model case, does not improve the forecasting performance: the error magnitudes measured by the MAE and RMSE metrics remain the same. This could be explained as a consequence of the small number of MFs used in the test phase. A larger number of assigned MFs would refine the partitioning, but would also increase the computational time with no relevant influence on the forecasting precision. We have compared the effects of changing the MF number and type for the two models in Figure 7 and Figure 8.
The MAE and RMSE relate the target and the network’s output on the testing data set and give an overview of the generalization and memorization abilities of ANFIS. As expected, the error values in the case of Model 1 are higher.
Figure 7. MAE for the Gaussian Bell MF typologies.
Figure 8. RMSE for the Gaussian Bell MF typologies.
In future work, we intend to extend the computations by combining different types of MFs in the same input partition. For example, in the case of three MFs, the middle one would be triangular and the outer two Gaussian shaped.
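Such a mixed partition could be sketched as follows (illustrative Python with hypothetical parameter values): three MFs over a normalized input range, with a triangular MF in the middle and Gaussian MFs at the edges.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular MF with feet at a and c and peak 1 at b."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def gaussmf(x, c, sigma):
    """Gaussian MF: peak 1 at center c, width controlled by sigma."""
    return np.exp(-0.5 * ((np.asarray(x, dtype=float) - c) / sigma) ** 2)

# Three-MF mixed partition of a normalized input: Gaussian edges, triangular middle
x = np.linspace(0.0, 1.0, 101)
partition = [gaussmf(x, 0.0, 0.2), trimf(x, 0.2, 0.5, 0.8), gaussmf(x, 1.0, 0.2)]
```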

5. Conclusions

The paper presented here addresses the challenge of controlling the performance of the future state of a microgrid supplied with energy produced from renewable energy sources. The novelty of our proposal consists in identifying the most used criteria, related to each modeling step, that lead to an optimal neural network forecasting tool. The approach was difficult because of the large number of variables and constraints that condition the parameterization of a neuro-fuzzy network.
On the other hand, considering the aspects identified in the bibliographical research, we have proposed two case studies based on ANFIS models, underlining the consequences of the choices made by users during tool modelling on the forecasting performance. Several scenarios were built in this respect by changing the prediction time horizon and the shape of the membership functions. The work is still in progress, and the developments are currently being extended to the design and implementation of an intelligent informatics platform for forecasting and controlling energy generation from renewable energy resources and load in a distributed power system.

Acknowledgments

This work was supported by the Romanian National Authority for Scientific Research, CNDI-UEFISCDI, project code PN-II-PT-PCCA-2011-3.2-1616, “Intelligent Decision Support and Control System for Low Voltage Grids with Distributed Power Generation from Renewable Energy Sources” (INDESEN), Contract nr. 42/2012.

Author Contributions

O.E.D. conceived the idea of the research, provided guidance and supervision, and wrote the paper; F.D., V.S. and E.M. implemented the research and performed the analysis. All authors contributed significantly to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boyle, G. Renewable Energy: Power for a Sustainable Future; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  2. Sorensen, B. Renewable Energy, Fourth Edition: Physics, Engineering, Environmental Impacts, Economics & Planning; Academic Press: Waltham, MA, USA, 2010. [Google Scholar]
  3. Masters, G. Renewable and Efficient Electric Power Systems; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2004. [Google Scholar]
  4. Kohl, H. The Development. In Renewable Energy; Wengenmayr, R., Buhrke, T., Eds.; Wiley-VCH: Weinheim, Germany, 2008; pp. 4–14. [Google Scholar]
  5. Da Rosa, A.V. Fundamentals of Renewable Energy Processes; Academic Press: Oxford, UK, 2012. [Google Scholar]
  6. Kemp, W. The Renewable Energy Handbook, Revised Edition: The Updated Comprehensive Guide to Renewable Energy and Independent Living; Aztext Press: Tamworth, ON, Canada, 2009. [Google Scholar]
  7. MacKay, D. Sustainable Energy—Without the Hot Air; UIT Cambridge Ltd.: Cambridge, UK, 2009. [Google Scholar]
  8. Kaltschmitt, M.S. Renewable Energy: Technology, Economics and Environment; Springer: Berlin, Germany, 2010. [Google Scholar]
  9. Bryce, R. Power Hungry: The Myths of “Green” Energy and the Real Fuels of the Future; Public Affairs: New York, NY, USA, 2011. [Google Scholar]
  10. Chiras, D. The Homeowner’s Guide to Renewable Energy: Achieving Energy Independence through Solar, Wind, Biomass, and Hydropower; New Society Publishing: Gabriola Island, BC, Canada, 2011. [Google Scholar]
  11. Boyle, G. Renewable Electricity and the Grid; Routledge: London, UK, 2007. [Google Scholar]
  12. Rapier, R. Power Plays: Energy Options in the Age of Peak Oil; Apress: New York, NY, USA, 2012. [Google Scholar]
  13. Dragomir, O.; Dragomir, F.; Minca, E. An application oriented guideline for choosing a prognostic tool. In Proceedings of the AIP Conference Proceedings: 2nd Mediterranean Conference on Intelligent Systems and Automation, Zarzis, Tunisia, 23–25 March 2009; Volume 1107, pp. 257–262.
  14. Dragomir, O.; Dragomir, F.; Gouriveau, R.; Minca, E. Medium term load forecasting using ANFIS predictor. In Proceedings of the 18th IEEE Mediterranean Conference on Control and Automation, Marrakech, Morocco, 23–25 June 2010; pp. 551–556.
  15. Dragomir, O.; Dragomir, F.; Minca, E. Forecasting of renewable energy load with radial basis function (RBF) neural networks. In Proceedings of the 8th International Conference on Informatics in Control, Automation and Robotics, Noordwijkerhout, The Netherlands, 28–31 July 2011.
  16. Jang, J.-S.R.; Sun, C.-T.; Mizutan, E. Neuro-Fuzzy and Soft Computing; Prentice Hall: New Jersey, USA, 1997. [Google Scholar]
  17. Lacrose, A.; Tilti, A. Fusion and hierarchy can help fuzzy logic controller designer. In Proceedings of the IEEE International Conference on Fuzzy Systems, Barcelona, Spain, 1–5 July1997.
  18. Heaton, J. Introduction to neural networks with java, Heaton Research, Inc. Available online: http://www.heatonresearch.com (accessed on 21 May 2015).
  19. Hippert, H.S. Neural networks for short-term load forecasting: A review and evaluation. IEEE Trans. Power Syst. 2001, 16, 44–55. [Google Scholar] [CrossRef]
  20. Hong, Y.-Y.; Wei, Y.-H.; Chang, Y.-R.; Lee, Y.-D.; Liu, P.-W. Fault detection and location by static switches in microgrids using wavelet transform and adaptive network-based fuzzy inference system. Energies 2014, 7, 2658–2675. [Google Scholar] [CrossRef]
  21. Maqsood, I.; Khan, M.R.; Abraham, A. Intelligent weather monitoring systems using connectionist neural models. Parallel Sci. Comput. 2002, 10, 157–178. [Google Scholar]
  22. Hernández, L.; Baladrón, C.; Aguiar, J.M.; Calavia, L.; Carro, B.; Sánchez-Esguevillas, A.; Pérez, F.; Lloret, J. Artificial neural network for short-term load forecasting in distribution systems. Energies 2014, 7, 1576–1598. [Google Scholar] [CrossRef]
  23. Elsholberg, A.; Simonovic, S.P.; Panu, U.S. Estimation of missing streamflow data using principles of chaos theory. J. Hydrol. 2002, 255, 123–133. [Google Scholar] [CrossRef]
  24. Cerjan, M.; Matijas, M.; Delimar, M. Dynamic hybrid model for short-term electricity price forecasting. Energies 2014, 7, 3304–3318. [Google Scholar] [CrossRef]
  25. Babuska, R.; Jager, R.; Verbruggen, H.B. Interpolation issues in Sugeno-Takagi Reasoning. In Proceedings of the 3rd IEEE Conference on Fuzzy Systems, IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 26–29 June 1994; pp. 859–863.
  26. Fildes, R. The evaluation of extrapolative fore-casting methods. Int. J. Forecast. 1992, 8, 81–98. [Google Scholar] [CrossRef]
  27. De Gooijer, J.G.; Hyndman, R.J. 25 years of time series forecasting. Int. J. Forecast. 2006, 22, 443–473. [Google Scholar] [CrossRef]
  28. Alfuhaid, A.S.; EL-Sayed, M.A.; Mahmoud, M.S. Cascaded artificial neural networks for short-term load forecasting. IEEE Trans. Power Syst. 1997, 12, 524–1529. [Google Scholar] [CrossRef]
  29. Chow, T.W.S.; Leung, C.T. Neural network based short-term load forecasting using weather compensation. IEEE Trans. Power Syst. 1996, 11, 1736–1742. [Google Scholar] [CrossRef]
  30. Hobbs, B.F.; Jitprapaikulsarn, S.; Konda, S.; Chankong, V.; Loparo, K.A.; Maratukulam, D.J. Analysis of the value for unit commitment of improved load forecasting. IEEE Trans. Power Syst. 1999, 14, 1342–1348. [Google Scholar] [CrossRef]
  31. Karady, G.G.; Farner, G.R. Economic impact analysis of load forecasting. IEEE Trans. Power Syst. 1997, 12, 1388–1392. [Google Scholar] [CrossRef]
  32. Armstrong, J.S.; Collopy, F. Error measures for generalizing about forecasting methods: Empirical comparisons. Int. J. Forecast. 1992, 8, 69–80. [Google Scholar] [CrossRef]
  33. Armstrong, J.S.; Fildes, R. Correspondence on the selection of error measures for comparisons among forecasting methods. J. Forecast. 1995, 14, 67–71. [Google Scholar] [CrossRef]
  34. Mohammed, O.; Park, D.; Merchant, R.; Dinh, T.; Tong, C.; Azeem, A.; Farah, J.; Drake, C. Practical experiences with an adaptive neural net-work short-term load forecasting system. IEEE Trans. Power Syst. 1995, 10, 254–265. [Google Scholar] [CrossRef]
  35. Papalexopoulos, A.D.; Hao, S.; Peng, T.M. An implementation of a neural network based load forecasting model for the EMS. IEEE Trans. Power Syst. 1994, 9, 1956–1962. [Google Scholar] [CrossRef]
  36. Peng, T.M.; Hubele, N.F.; Karady, G.G. Advancement in the application of neural networks for short-term load forecasting. IEEE Trans. Power Syst. 1992, 7, 250–257. [Google Scholar] [CrossRef]
  37. Norouzi, A.; Hamedi, M.; Adineh, V.R. Strength modeling and optimizing ultrasonic welded parts of ABS- PMMA using artificial intelligence methods. Int. J. Adv. Manuf. Technol. 2012, 61, 135–147. [Google Scholar] [CrossRef]
  38. Collotta, M.; Messineo, A.; Nicolosi, G.; Giovanni, P.A. Dynamic Fuzzy Controller to Meet Thermal Comfort by Using Neural Network Forecasted Parameters as the Input. Energies 2014, 7, 4727–4756. [Google Scholar] [CrossRef]
  39. Moon, J.W.; Chang, J.D.; Kim, S. Determining adaptability performance of artificial neural network-based thermal control logics for envelope conditions in residential buildings. Energies 2013, 6, 3548–3570. [Google Scholar] [CrossRef]
  40. Jang, J.S.R.; Suni, C.T.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence; Prentice Hall: New York, NY, USA, 1997. [Google Scholar]
  41. Kusiak, A.; Wei, X. Prediction of methane production in wastewater treatment facility: A data-mining approach. Ann. Oper. Res. 2014, 216, 71–81. [Google Scholar] [CrossRef]
  42. Iranmanesh, H.; Abdollahzade, M.; Miranian, A. Mid-term energy demand forecasting by hybrid neuro-fuzzy models. Energies 2012, 5, 1–21. [Google Scholar] [CrossRef]
  43. Jang, J.S.R. ANFIS: Adaptive network based fuzzy inference systems. IEEE Trans. Syst. Manuf. Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  44. Wang, J.S. An efficient recurrent neuro-fuzzy system for identification and control of dynamic systems. In Proceedings of the IEEE International Conference on Systems, Manand Cybernetics, Washington, DC, USA, 5–8 October 2003.
  45. Chiang, L.H.; Russel, E.; Braatz, R. Fault Detection and Diagnosis in Industrial Systems; Springer Verlag: London, UK, 2001. [Google Scholar]
  46. Gourierou, C. ARCH Models and Financial Applications; Springer: New York, NY, USA, 1997. [Google Scholar]
  47. Chan, K.-Y.; Gu, J.-C. Modeling of turbine cycles using a neuro-fuzzy based approach to predict turbine-generator output for nuclear power plants. Energies 2012, 5, 101–118. [Google Scholar] [CrossRef]
