A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy production from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work correspond to 15 min averaged power measurements collected in 2014. The accuracy of the models is determined using common error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters improve the total solar power generation forecast of the PV system.


Introduction
With the development of photovoltaic (PV) and similar technologies, renewable energy sources have been used more frequently across the U.S. in recent years. Policies such as renewable portfolio standards (RPS) are gaining attention due to the increasing penetration of renewable energy production into the conventional utility grid. As of March 2015, with 46 state legislatures in session across the country and over 100 RPS bills pending, promoting solar energy to compete with other major players in the renewable source market is a priority for many solar power producers, utility companies and independent service operators.
The widespread implementation of solar power systems is so far impeded by many factors, such as weather conditions, seasonal changes, intra-hour variability, topographic elevation and discontinuous production. Operators need to acquire solar energy production information ahead of time to counter the operating costs caused by energy reserve requirements or shortages of electricity supply from PV systems. Therefore, solar power forecasting is the "cornerstone" of a reliable and stable solar energy industry. Intra-hour forecasts are critical for monitoring and dispatching purposes, while intra-day and day-ahead forecasts are important for scheduling the spinning reserve capacity and managing grid operations.
The forecast of power output, for example a day ahead, is a challenging task owing to its dependency on the accuracy of the inputs. In general, there are two types of inputs for PV energy output forecasting: exogenous inputs from meteorological forecasts, and endogenous inputs from direct system energy outputs. Meteorological forecasts, such as solar irradiance, have been studied for a long time [1][2][3]. Researchers have further extended their models to predict power output from PV plants [4][5][6][7]. However, even with cloud imagery from geostationary meteorological satellites, the large variability in key parameters, namely the diffuse component from the sky hemisphere, makes solar irradiance much less predictable than temperature. More difficulties exist in large-scale PV systems installed over a wide area with different tilt and azimuth angles. This situation requires individually parameterized solar models to deal with diverse PV configurations.
Since it is not possible to take all relevant meteorological forecasts into account in a practical situation, many alternative solutions have been developed. Some adopt weather forecasts provided by online meteorological services [8]. Many others have tried to simplify the solar forecast model by exploring nonlinear modeling tools such as artificial neural networks (ANN) [9][10][11]. Two types of network, radial basis function (RBF) and multilayer perceptron (MLP), are commonly utilized to forecast global solar radiation [12][13][14][15], solar radiation on tilted surfaces [15], daily solar radiation [15][16][17], and short-term solar radiation [15,18]. More techniques are being explored to improve current models for solar radiation and PV production, such as the minimal resource allocating network [19], the wavelet neural network [20], fuzzy logic [21], and the least-squares support vector machine [22]. Others have tested PV production forecasts on reduced weather inputs (e.g., no solar irradiance input) or based only on endogenous inputs [9,23,24]. Proposed approaches include isolating PV linear and nonlinear power outputs [24], adjusting the temporal resolution [25], and classifying day types [26][27][28][29].
In this work, a simplified approach using reduced exogenous inputs without solar irradiance is developed for predicting the PV production output 15 min, 1 h and 24 h ahead of time. The forecast models, developed from ANN and support vector regression (SVR) respectively, forecast the power output based on the PV system's historical record and online meteorological services. Moreover, an alternative way to forecast the total PV output, generated from the individual inverter output forecasts, is compared against the baseline performance obtained using the total PV output data. The goals of this study are to: (1) assess the accuracy of common forecast techniques in order to determine a baseline performance for different prediction windows; (2) propose a hierarchical forecast approach using the monitored information at the inverter level; and (3) validate the approach by comparison with the baseline forecasts.
The paper is organized as follows: the next section provides a brief introduction of the methods applied to forecast power plant production; Section 3 defines the performance metrics in terms of common error criteria in the literature; Section 4 describes the data used in this study; and results are discussed in Section 5.

Methodology
The two machine learning methods most commonly applied to forecast power plant output 15 min, 1 h and 24 h ahead of time are used in this work. The methods employed are ANN and SVR. The implementations and parameter selections are discussed in Sections 2.1 and 2.2. A hierarchical approach integrating the machine learning methods is further introduced in Section 2.3.

Artificial Neural Networks
A neural network is heuristically developed from the human brain's learning processes for recognizing visual objects. Similar to the neurons in a biological brain, an ANN operates on many different artificial neurons. The structure is a network model connecting the input, hidden, and output layers constructed with neuron nodes. No assumptions need to be made about the inputs or outputs of the model. The users only define the model structure, such as the number of hidden layers, the number of neurons in each layer, and the appropriate learning algorithms. Inputs to a neuron can be external inputs or outputs from the other neurons. Neurons calculate the weighted sum of the inputs and produce the output via transfer functions:

f(x) = φ( Σ_{j=1}^{N} w_j φ( Σ_{i=1}^{M} w_{ij} x_i + w_{io} ) + w_{jo} )    (1)

where w denotes the weights for the input, hidden, and output layers, x is the training input, N represents the total number of hidden neurons, M represents the total number of inputs, and φ represents the transfer function for each hidden neuron. The weighted sums are adjusted during the training process by minimizing the errors on the training data, namely the mean squared error, sum squared error (SSE) or root mean squared error. Numerical learning algorithms such as back-propagation, quasi-Newton and Levenberg-Marquardt have been developed over the years to effectively optimize the weights.

In this study, feed-forward neural networks (FFNN) with single-layer and double-layer configurations are explored [30]. However, due to an over-fitting problem, good forecasts were not found with the double hidden layer structure, similar to the layer-tuning tests in building load forecasting [31,32]. The performance of the FFNN depends strongly on the parameter settings. For this study, the FFNN is modeled with 1 hidden layer of more than 20 neurons, 1 output neuron (the prediction of power production), and 26 input neurons (several time-lagged values of historical power production, meteorological inputs and time information). A simplified diagram of the FFNN is shown in Figure 1.
The model is implemented using the NeuroLab Library 0.3.5 in Python. The transfer functions are hyperbolic tangent sigmoid functions for each layer. The FFNN is trained with a gradient descent back-propagation algorithm, and its weights are learned by minimizing the SSE. To select the specific number of hidden neurons, the training algorithms are performed on multiple epochs of historical data. For each epoch selected (e.g., one week), 75% of the data are used for training and the remaining 25% are kept for validation. Parameters are optimized with all input data normalized between 0 and 1. At least 50 epochs are simulated until the calculated SSE reaches the error goal of 0.001.
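The training setup described above can be sketched in plain NumPy (the study itself used NeuroLab): one hidden layer with tanh transfer functions, trained by gradient-descent back-propagation to minimize the SSE. Layer sizes, the learning rate, and the toy data below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

def train_ffnn(X, y, n_hidden=20, lr=0.05, sse_goal=1e-3, max_epochs=10000, seed=0):
    """Single-hidden-layer FFNN with tanh transfer functions, trained by
    full-batch gradient-descent back-propagation on the SSE."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1));    b2 = np.zeros(1)
    sse = np.inf
    for _ in range(max_epochs):
        h = np.tanh(X @ W1 + b1)          # hidden-layer activations
        out = np.tanh(h @ W2 + b2)        # output neuron (tanh transfer)
        err = out - y
        sse = float(np.sum(err ** 2))
        if sse < sse_goal:                # stop at the SSE error goal
            break
        # back-propagate the SSE gradient through both layers
        d_out = 2 * err * (1 - out ** 2)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return (W1, b1, W2, b2), sse

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
```

In the study, X would hold the 26 normalized input features and y the normalized power production for the chosen forecast window.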


Support Vector Regression
Support vector machines are statistical learning tools widely used in classification and regression problems [22]. For SVR, a data set is first transformed into a high-dimensional space. Predictions are discriminated from the training data as a "tube" enclosing a desired pattern curve with certain tolerances. The support vectors are the points which mark the margins of the tube. The SVR approximates the inputs and outputs using the following form:

f(x) = w^T φ(x) + b    (2)

where φ(x) represents the transfer function mapping the input data to the high-dimensional feature space. Parameters w and b are estimated by minimizing the regularized risk function:

minimize (1/2) w^T w + C Σ_{i=1}^{n} (ξ_i + ξ_i*)
subject to y_i − w^T φ(x_i) − b ≤ ε + ξ_i,  w^T φ(x_i) + b − y_i ≤ ε + ξ_i*,  ξ_i, ξ_i* ≥ 0    (3)

where n represents the total number of training samples, ξ denotes the error slacks guaranteeing feasible solutions, C is the regularization penalty, and ε defines the desired tolerance range of the "tube". The first term w^T w is a regularization term that flattens the function in Equation (2). The second term is the empirical error measured by the ε-insensitive loss function. This loss function defines the ε tube: if the forecasted values are within the tube, the loss is zero; if the forecasted values are outside the tube, the loss is proportional to the absolute difference between the forecasted value and the radius ε of the tube. The constrained problem is solved by introducing Lagrange multipliers and exploiting the optimality constraints. The equivalent Lagrangian form of Equation (2) is expressed by the following equation:

f(x) = Σ_{i=1}^{n} (a_i − a_i*) K(x_i, x) + b    (4)

where K(x_i, x_j) is defined as the kernel function. In Equation (4), a_i and a_i* are the so-called Lagrange multipliers. They are calculated by solving the dual problem of Equation (3) in Lagrange form. The advantage of using the kernel function is that one can deal with a feature space of arbitrary dimension without explicitly computing the map φ(x). Any function that satisfies Mercer's condition can be used as the kernel function [33]. Commonly used choices are linear, polynomial, and RBF kernels [34]. According to the literature, the RBF kernel with weight parameter γ tends to give good performance due to its smoothness [34,35]. The RBF is defined as follows:

K(x_i, x_j) = exp(−γ ||x_i − x_j||²)    (5)

where γ is the kernel parameter. The Scikit-learn Library 0.15.0 in Python is used in our study with the RBF kernel. We find the SVR model is relatively insensitive to values of ε smaller than 0.01, whereas both C and γ necessitate independent tuning. These parameters were determined with 10-fold cross-validation based on the mean squared error. The grid search scale for C and γ is maintained between 10³ and 10⁻³.
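The tuning procedure described above can be sketched with scikit-learn: an RBF-kernel SVR with ε fixed at 0.01, and C and γ selected by 10-fold cross-validation over a grid spanning 10⁻³ to 10³. The sine-wave data below is a stand-in for the normalized PV production series, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a normalized production series in [0, 1].
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (100, 1))
y = np.sin(2 * np.pi * X).ravel() * 0.4 + 0.5

# C and gamma tuned over 10^-3 .. 10^3 with 10-fold CV on mean squared error,
# epsilon held at 0.01, as described in the text.
grid = {"C": np.logspace(-3, 3, 7), "gamma": np.logspace(-3, 3, 7)}
search = GridSearchCV(SVR(kernel="rbf", epsilon=0.01), grid,
                      cv=10, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```

`search.best_estimator_` is then the tuned forecaster for that prediction window.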

Hierarchical Forecasting
As shown in Figure 1, any historical information on power production can be fed to the machine learning methods as inputs. Traditionally, the forecasting of power plant production uses the total power plant production from the historical record to train the models, and most effort is spent on tuning the parameters of the machine learning models to obtain better predictions. However, different feeds of information from the historian could change the picture, especially for a large PV plant. Many researchers tend to neglect the production information at the micro level of the PV plant, and the difficulty of accessing and measuring such data may be another reason for those who have noticed the issue. However, exploiting the abundant information from the micro level of the PV plant (e.g., inverters) can offer a new hierarchical view in forecasting the power output of a large PV system. The hierarchical forecasting approach, utilizing the machine learning tools at the micro level for each inverter prediction and evaluating the performance at the macro level for the whole plant by summing up the forecasts, is shown in Figure 2. Those concerned about the simulation costs for online applications can also find a solution by adopting advanced computational techniques, namely parallel computing.
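The hierarchical scheme of Figure 2 can be sketched as follows: one model per inverter, trained only on that inverter's own lagged output, with the plant forecast obtained by summing the inverter forecasts. The synthetic data, the SVR hyperparameters, and the in-sample evaluation are illustrative assumptions; in the study each micro-level model would be an ANN or a tuned SVR fed the inputs of Section 4.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic per-inverter production: a shared daytime profile scaled per inverter.
rng = np.random.default_rng(1)
n_steps, n_inverters = 200, 11
inverters = np.abs(np.sin(np.linspace(0, 8, n_steps)))[:, None] * \
            rng.uniform(0.8, 1.2, n_inverters)

def lagged(series, n_lags=5):
    """Markov-order-5 design matrix: predict series[t] from t-1 ... t-5."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

# Micro level: an independent model per inverter; macro level: sum the forecasts.
plant_forecast = np.zeros(n_steps - 5)
for k in range(n_inverters):
    Xk, yk = lagged(inverters[:, k])
    model = SVR(kernel="rbf", C=10, gamma=1, epsilon=0.01).fit(Xk, yk)
    plant_forecast += model.predict(Xk)   # in-sample, for illustration only

# Ground truth for the whole plant over the same steps.
_, y_tot = lagged(inverters.sum(axis=1))
rmse = np.sqrt(np.mean((plant_forecast - y_tot) ** 2))
```

Since the per-inverter models are independent, the loop parallelizes trivially, which is the parallel-computing remedy mentioned above.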

Performance Metrics
The performance of the methods described in the previous section is evaluated by several error calculations. For test samples during night hours, i.e., when solar irradiance is not available, there is no need to evaluate the system performance. Forecast accuracy is mainly evaluated using the following common statistical metrics:

‚ Mean bias error (MBE): MBE = (1/N) Σ_{i=1}^{N} (P̂_i − P_i)
‚ Mean absolute error (MAE): MAE = (1/N) Σ_{i=1}^{N} |P̂_i − P_i|
‚ Root mean square error (RMSE): RMSE = √[(1/N) Σ_{i=1}^{N} (P̂_i − P_i)²]

where P_i is the measured power output, P̂_i is the forecast of P_i, and N represents the number of data points over which the forecast errors are calculated. MBE is used to estimate the bias between the expected value of the predictions and the ground truth. In comparison, MAE provides a view of how close the forecasts are to the measurements on an absolute scale. The last one, RMSE, amplifies and severely penalizes large errors through its squared form. Although these performance metrics are popular and considered standard performance measures, a limitation of MAE, MBE and RMSE is that the relative sizes of the reported errors are not obvious: it is hard to tell whether they are big or small when comparing different series of data on different scales. Hence, relative error measures are further introduced:

‚ Mean percentage error (MPE)


‚ Relative mean bias error (rMBE): rMBE = MBE / P̄
‚ Relative root mean squared error (rRMSE): rRMSE = RMSE / P̄

where M is the maximum power production recorded, and P̄ is the mean value of the solar PV energy production during the daytime in the test period, calculated as P̄ = (1/N) Σ_{i=1}^{N} P_i.
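The metrics above can be sketched in NumPy. P is the measured power, P_hat the forecast; M is the maximum recorded production and P_bar the mean production, following the definitions in the text. The exact normalization of MPE by M is our assumption based on those definitions, since the original formula is not reproduced here.

```python
import numpy as np

def metrics(P, P_hat):
    """Forecast error metrics; P is measured power, P_hat the forecast."""
    e = P_hat - P
    M = P.max()        # maximum recorded production
    P_bar = P.mean()   # mean production over the test period
    return {
        "MBE":   e.mean(),
        "MAE":   np.abs(e).mean(),
        "RMSE":  np.sqrt((e ** 2).mean()),
        "MPE":   100.0 * (e / M).mean(),   # assumed normalization by M
        "rMBE":  e.mean() / P_bar,
        "rRMSE": np.sqrt((e ** 2).mean()) / P_bar,
    }
```

Night-time samples would be filtered out of P and P_hat before calling this, per the note above.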

Data
This work uses data collected from a 6 MW (direct current) peak solar power plant with eleven 500 kW (alternating current) inverters located in Florida. The tested period for this solar farm spans from 1 January to 31 December 2014. The intra-hour ground truth data collected from the PV plant site were converted to 15 min and hourly averages of power output. Before applying the training algorithm, both the input and output data are normalized to the range from 0 to 1 using the following equation:

ȳ_i = (y_i − y_min) / (y_max − y_min)

where y_i is the original data value, ȳ_i is the corresponding normalized value, y_min is the minimum value in the data set, and y_max is the maximum value in the data set.
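The min-max normalization above is a one-liner; y stands for any input or output series from the plant historian.

```python
import numpy as np

def normalize(y):
    """Min-max scale a series to [0, 1]."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min())
```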
To train the models for different forecasting windows, the primary interest is determining which inputs perform best at predicting the PV output. The training process is facilitated by testing different configurations of historical power production information. The input set for the 15 min and 1 h ahead forecasts, defined as H1, is a Markov order 5 sequence:

H1 = {L_{t−1}, L_{t−2}, L_{t−3}, L_{t−4}, L_{t−5}}

where L_{t−1} represents the historical power plant production one time step back, ..., and L_{t−5} represents the production five time steps back. The 24 h ahead input set, defined as H2, is used to forecast the next 24 h of power production. For one future hour of power production L_t in the next 24 h window, the input features selected are the historical outputs at the same time of day from each of the previous seven days:

H2 = {L_{t−1d}, L_{t−2d}, ..., L_{t−7d}}

For a one-time-step-ahead forecast (the 15 min and 1 h ahead cases), the inputs include the historical power production from 1 to 5 time steps back. By comparison, the 24 h ahead case needs the historical outputs at the same time from yesterday, the day before yesterday, and so on, up to the day one week before. These features were selected by an exhaustive search maximizing the coefficient of determination.
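The two input sets can be sketched with pandas. H1 takes the previous five 15 min (or hourly) steps; H2 takes the same time of day from each of the previous seven days, following the description above. The function and column names are ours, and H2 assumes an hourly series (24 steps per day).

```python
import pandas as pd

def h1_features(load: pd.Series) -> pd.DataFrame:
    """H1: production from the previous 1..5 time steps (Markov order 5)."""
    return pd.DataFrame({f"L_t-{k}": load.shift(k) for k in range(1, 6)}).dropna()

def h2_features(load: pd.Series, steps_per_day: int = 24) -> pd.DataFrame:
    """H2: production at the same time of day from the previous 1..7 days."""
    return pd.DataFrame({f"L_t-{d}d": load.shift(d * steps_per_day)
                         for d in range(1, 8)}).dropna()
```

The rows of each frame align with the target L_t at the same index, so they can be passed directly to the ANN or SVR models.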
On the other hand, the reduced exogenous inputs are the following physical variables from weather services: ambient temperature, wind speed, and wind direction. Additional variables are also tried from the National Renewable Energy Laboratory's SOLPOS (Solar Position and Intensity) calculator as an indirect source to enhance the clear-sky PV production forecast. The following variables were found useful: (1) solar zenith angle, no atmospheric correction; (2) solar zenith angle, degrees from zenith, refracted; (3) cosine of the solar incidence angle on the panel; (4) cosine of the refraction-corrected solar zenith angle; (5) solar elevation, no atmospheric correction; and (6) solar elevation angle, degrees from horizon, refracted. Various combinations of these variables can be used to help improve the forecasts in different scenarios, depending on the forecast window; these are all the possible variables that can be used. In essence, these additional inputs are adopted as indicators of the clear-sky solar irradiance without explicitly predicting the irradiance. The assumption is that the machine learning algorithms will learn the nonlinear relationship between clear-sky solar properties and the PV plant production directly, rather than through solar irradiance predictions.

Results and Discussion
Two common forecast techniques, ANN and SVR, were tested on the total power production forecast of the sample PV plant. As introduced in Section 2.3, the traditional baseline forecast uses whole-plant production information: it implements a single machine learning model with the plant data for power production prediction. The hierarchical method predicts the power production of the inverters, where each inverter production forecast uses a different machine learning model trained separately on the information provided by the corresponding inverter only. The summation of these inverter forecasts transfers the micro level predictions to a macro level forecast representing the total plant production. Tables 1-3 present forecast results in 2014 for different prediction windows using both approaches. The best results for the different error metrics are highlighted. The hierarchical approach, whether using ANN or SVR, performs better in the one-step-ahead forecasts than the traditional way. The 24 h ahead cases are fairly even between the two approaches, which is caused by the difficulty of applying the machine learning algorithms to a longer forecast window, as shown by the larger MAE, RMSE and MPE compared to the hour-ahead forecast results. Figures 3 and 4 display the forecast results comparing the traditional and the hierarchical approach using the same machine learning technique, here ANN. In Figure 3, the presented two-week period shows improvements in performance using the hierarchical approach for forecasts 15 min, 1 h and 24 h ahead. In particular, we observe strikingly different results for the 24 h ahead case compared with the consistent performance of the 15 min and 1 h forecasts, which are the one-time-step-ahead cases. The power production forecasting plots in Figure 3a,b, even on the cloudy day (around 1200 time steps in the 15 min ahead plot and 300 time steps in the hour-ahead plot), match the ground truth pattern with a low error. In contrast, the same case in the 24 h forecast clearly over-predicts the true power production. The dampened performance for the 24 h case is strongly influenced by the input features. In the 24 h ahead forecast, the models learned the patterns as a daily profile, i.e., a moving window 24 time steps ahead. The key assumption behind the learning process is that daily power plant production levels are similar without large deviations, which may not be the case for all days. That is why cloudy or rainy weather factors play more important roles than historical power production information in the 24 h ahead cases. However, due to the goals and definitions of the approaches, these factors cannot be easily approximated from the reduced exogenous inputs in our study. Apart from the 24 h tests, the 15 min and 1 h ahead predictions do not depend much on exogenous inputs. Sample plots for rainy and cloudy days' forecasts in Figure 4 indicate that the trained model with endogenous inputs can predict the plant power production one time step ahead with an appropriate error level. On the cloudy day, the energy production level is affected by the cloudy periods, and changes can be observed from 10 am to 3 pm in Figure 4a. In comparison, the rainy day has more irregular production throughout the daytime, and more periods are either over- or under-estimated (e.g., the periods from 2 pm to 4 pm).
Tables 4 and 5 list all the inverter values for the 15 min and 1 h ahead forecast windows using ANN. The classification of the inverter groups is based on the plant configuration and the areas occupied. The similar size of the plant inverters guarantees a comparative view of which areas have the most potential in production prediction and which inverters cause the most trouble in power forecasting. Table 6 further illustrates the effects of the hierarchical approach when predicting power plant production from the micro level up to a macro analysis. In terms of MBE, the unstable error implies similar chances to over-predict or under-predict the power output regardless of the power production level. The evaluations based on absolute changes, such as MAE, MPE and RMSE, represent the evolving process as the power production level increases, with the maximum energy shown as well. The gradually evolving forecasts are measured by rMBE and rRMSE. By comparing the summation of the 11 inverter predictions with the forecast using the historical power production of the whole plant, we can conclude that the hierarchical approach performs better in terms of MBE, MAE, RMSE, rMBE, and rRMSE, as shown in Figure 5. The evolving process of the hierarchical approach shows the potential of multiple-level forecasts of the PV system, which can be used for various purposes such as optimization of production schedules at the inverter level and micro controls for multiple solar modules in a large PV system.

Conclusions
This work compares two common models, ANN and SVR, for 15 min, 1 h and 24 h forecasting of the averaged power output of a 6 MWp PV power plant. No exogenous data such as solar irradiance were used in the forecasting models. A hierarchical approach using micro level information of the power plant, such as the inverters, is further assessed. From the analysis of the error between ground truth and predicted values, it can be concluded that the hierarchical technique, using power production information from the micro level of the plant system, outperforms the traditional models. In addition, we discuss and show the difference between a one-step-ahead forecast, namely 15 min and 1 h ahead, and a 24 h forecast using both approaches. The analysis of the evolving errors, calculated by summing over different numbers of inverters, shows the potential of the hierarchical approach to determine which smaller generation units have the greatest impact on forecasting.



Figure 5 .
Figure 5. The hierarchical approach in terms of evolving errors: (a) rMBE and (b) rRMSE.


Table 2 .
Hour ahead forecast result for the whole PV plant.

Table 3 .
24 h ahead forecast result for the whole PV plant.

Table 4 .
15 min forecast results for each inverter.

Table 5 .
1 h forecast result for each inverter.

Table 6 .
24 h forecast result for different numbers of inverters.