Article

Statistical Methods for Degradation Estimation and Anomaly Detection in Photovoltaic Plants

1 SAL Silicon Austria Labs GmbH, Europastr. 12, 9524 Villach, Austria
2 Fronius International GmbH, Guenter Fronius Straße 1, 4600 Thalheim bei Wels, Austria
3 ENcome Energy Performance GmbH, Lakeside B08b, 9020 Klagenfurt, Austria
4 SAL Silicon Austria Labs GmbH, Inffeldgasse 33, 8010 Graz, Austria
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2021, 21(11), 3733; https://doi.org/10.3390/s21113733
Submission received: 26 February 2021 / Revised: 13 April 2021 / Accepted: 22 May 2021 / Published: 27 May 2021

Abstract
Photovoltaic (PV) plants typically suffer from a significant degradation in performance over time due to multiple factors. Operation and maintenance systems aim at increasing the efficiency and profitability of PV plants by analyzing the monitoring data and by applying data-driven methods for assessing the causes of such performance degradation. Two main classes of performance issues exist: gradual degradation and sudden anomalies in the PV system. This has motivated our work to develop and implement statistical methods that can reliably and accurately detect performance issues in a cost-effective manner. In this paper, we introduce different approaches for both gradual degradation assessment and anomaly detection. Depending on the data available in the PV plant monitoring system, the appropriate method for each degradation class can be selected. The performance of the introduced methods is demonstrated on data from three different PV plants located in Slovenia and Italy, monitored for several years. Our work has led us to conclude that the introduced approaches can contribute to the prompt and accurate identification of both gradual degradation and sudden anomalies in PV plants.

1. Introduction

Evaluating the status of a PV plant is an important task in maintaining a high output performance and low operating costs. Operation and maintenance (O&M) companies aim at detecting any failure in a PV system and taking suitable countermeasures. Considering the cost-effectiveness of the different techniques for failure identification (visual inspection, thermography, electroluminescence, etc.), an efficient procedure for plant evaluation is to first check for any power loss recorded by the monitoring system, followed, if needed, by other on-site techniques for identifying the plant failure [1]. Efficient and reliable methods, appropriate for online monitoring, should be used to detect any failure that causes power losses. A power loss in a PV plant can be correlated to the values of current, voltage, temperature, irradiance, thermal cycling, shading, and others [2]. While shading is difficult to measure and quantify, the other parameters can be measured within the PV plant monitoring system. Failures in a PV plant can be located in the PV modules, inverters, cables and interconnectors, mounting, or other components. Typical failures [1,3] located in PV modules include cracks, potential induced degradation (PID), burn marks and hail damage of the cells, soiling or physical damage affecting the front glass, delamination of the encapsulant, and others. Because of the many different types of failures, identifying one type of failure in a PV system is a challenging task. Nowadays, increasing research effort is devoted to diagnosing specific sets of failures [4,5,6,7].
A PV performance analysis involves the estimation of long-term degradation rates, which quantify the gradual reduction of performance of a PV system over time. In many cases the degradation rates are calculated based on a metric called the performance ratio (PR) [8,9,10], the ratio of measured to nominal power. Variants of the standard PR include a corrected PR that uses a corrected measured power to compensate for the differences in measured irradiance and module temperature with respect to the Standard Test Conditions (STC). For example, a corrected PR is used in [9,11,12]. The PR can be calculated on a yearly, monthly, or daily basis, after which an analysis of the PR time series is done to evaluate the degradation. When a linear degradation over time is assumed, methods based on linear regression models and seasonal decomposition have mostly been used [13]. A simple linear regression model fits a linear model to the raw PR time series [10], or to the trend component extracted after seasonal decomposition [8]. In another approach, the degradation rate is extracted from the distribution of the year-on-year degradation, calculated as the rate of change of the PR between the same days in two subsequent years [9]. For cases of nonlinear degradation rates, change point analysis has been performed to detect the changes in the degradation slopes, after which linear degradation rates are calculated between every two consecutive points of change [14].
Appropriate preprocessing and filtering of the dataset is needed to eliminate outliers and noise and to minimize seasonal oscillations [11]. An investigation of the uncertainty of several different methods for degradation estimation shows that simple linear regression performed on the PR time series has a higher uncertainty than the methods that use seasonal decomposition [8,11]. However, an important requirement for the seasonal decomposition methods is an accurate estimation of the model parameters [11]. On the other hand, using a corrected PR requires valid measurements of irradiance and module temperature, which in some cases are not available in a monitoring system. There is therefore a need for statistical approaches for degradation estimation that can either be used without environmental sensor data, or that do not depend on the accuracy of the seasonal decomposition models.
Besides gradual degradation, the performance of a PV plant can undergo sudden changes caused by localized failures in the system. A variety of statistical methods have been used for failure diagnostics, mostly involving machine learning (ML) regression models [13,15]. ML regression models have been used to monitor the operation of a PV system by estimating the expected output, be it power, current, or voltage, and identifying as anomalies all instances where the measured output deviates from the predicted one. One approach to estimate the expected power output involves deriving the parameters of the standard nonlinear models of the relationship between current and voltage values, which are usually given by the PV module manufacturer but are not always available [16]. Other approaches predict an expected daily power output, taking as input a combination of environmental data and data specific to the PV plant [17,18]. For this purpose, ANN (Artificial Neural Network) [18], SVM (Support Vector Machine) [17], and regression trees have been used as regression models. Some results show a great performance of the ML models, obtaining a high correlation of more than 0.99 between the measured and predicted power output [18]. For real-time optimum voltage and current prediction, recurrent ANNs are investigated in [4], showing a high accuracy of more than 98.2%. An alternative to data-driven models for power prediction is the one-diode model [19,20]. A comparison between the one-diode model and a recursive linear regression model showed a better performance for the regression model [6]. Once a regression model is derived, it can be used by failure detection algorithms. In some studies [16,20], both the measured and predicted outputs are used to perform the fault detection. In one case, upper and lower boundaries of the loss in power, based on which a fault is detected, are set up in advance [20]. In another study, a weighted moving average control chart of the power residuals is used [16]. However, all these methods for output regression and fault detection have so far been tested only on one PV plant. For this reason, finding an approach for robust anomaly detection that can be used on several PV systems is still a great challenge, as different systems may present different features.
The purpose of this work is to develop models for the assessment of the condition of a PV plant by monitoring the variation of its output. Two different sources for a decrease in performance are considered, i.e., progressive degradation and sudden anomalies. For each of these scenarios, multiple approaches for the plant’s assessment are considered and compared. For the detection of progressive degradation, we propose novel methods for degradation estimation that overcome some of the issues of the existing methods. More precisely, one of the methods does not rely on any environmental sensor data, and therefore it can be used in scenarios where these data are not available. The other method aims to provide a reliable degradation evaluation without the use of any seasonal decomposition model, thereby avoiding the problem of an accurate estimation of the model parameters. For anomaly detection, instead, the developed approaches are based on regression models that predict the expected output for each inverter of the PV plant. We propose novel approaches to detect the anomalies using the produced output. Compared to other approaches, our approach uses some of the measured data as training data. All the approaches considered in this work rely on statistical machine learning techniques and are therefore designed to be derived only from the available data, without the need for an in-depth inspection of the plant. The proposed approaches are then validated on data extracted from three different PV plants located in Europe, ranging from 4 to 19 inverters per plant, each monitored for 5–6 years.
In more detail, Section 2.1 and Section 2.2 present the approaches developed for the estimation of the plant’s degradation and for the identification of anomalies, respectively. Section 3.1 and Section 3.2 discuss the applications of these methods to the selected PV plants, comparing the results. Finally, Section 4 draws the conclusions.

2. Methods

In the typical operation of a PV plant, two types of events can cause a decrease in performance:
  • the progressive degradation of the plant due to aging, soiling, PID, or other degradation sources;
  • sudden faults, that can affect a part or the entire plant, and are due to anomalous events, for instance failures or components breakdowns.
Both these types of events need to be properly monitored and recognized, in order for the plant to operate at its maximum efficiency. Because of the very different physical natures of these events, however, the methods that can be used for their assessment are necessarily different from each other.

2.1. Degradation Estimation

In order to obtain an accurate estimate for the plant’s degradation, one needs to monitor in time the value of some quantity that is supposed to remain constant in an ideal scenario, typically the plant’s power output. However, the challenge with monitoring a PV plant is that its output is continuously changing, because of the varying environmental conditions (temperature, irradiation, shadings, etc.). In this section, we then present two approaches for the derivation of a stable measurement for the plant’s output: one based on sampled values and the other on the prediction of the plant’s power in reference conditions.

2.1.1. Sampled Values—Based Score

In [21] we defined an intuitive and computationally simple metric, called the yearly degradation score (YDS), that quantifies the degradation in a PV plant between two or more consecutive years. One special characteristic is that it can be calculated not only for the output power but also for several other data sources, including the Maximum Power Point voltage (MPP-voltage, voltage) and Maximum Power Point current (MPP-current, current). That distinction can narrow down the failure types that could have caused the power loss in the plant. Similar ideas for differentiating between the degradation of the two components of the power (voltage and current) have been used in other methods for PV fault identification [16,20].
Instead of analyzing the whole data series, the idea behind the YDS is to focus the analysis only on a representative set of raw values that reflect the overall data well. Taking this into consideration, the YDS is calculated based on a set of K sampled values per year taken from the previously cleaned and filtered raw data. The K highest values per year of voltage, current, or power form the representative set of values. The YDS is obtained from the slope of a line fitted to the selected points; the slope therefore represents the per-unit reduction in the measured values per year. The final YDS is the percentage of degradation per year, where the reference value used for the percentage calculation is the value of the fitted model in the first year. The whole flowchart of the method is shown in Figure 1a.
The performance of the YDS depends highly on the preconditioning step, where data errors, outliers, and unusual values are filtered out. The score is also affected by the value of the parameter K: a larger value of K could reduce the ability to detect smaller losses, while a smaller value could make the score more sensitive to outliers. The results showed that the best values of K are between 30 and 50 [21], so in the experiments done here K is set to 30. Note that the model presented here makes use of only a single time-variable input, be it current, voltage, or power. Despite this, as will be shown in Section 3.1, the careful choice of input data can compensate for the inherent variability, allowing a prediction performance very close to that of much more complex models that make use of several input variables.
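To make the computation concrete, the following minimal sketch implements the YDS idea under the assumption that the cleaned and filtered measurements (power, current, or voltage) are available as a pandas Series indexed by timestamp; the function name and data layout are ours, not part of the original implementation.

```python
import numpy as np
import pandas as pd

def yearly_degradation_score(values: pd.Series, k: int = 30) -> float:
    """Sketch of the YDS: fit a line through the K highest cleaned values of
    each year and express the slope as a percentage of the fitted value in
    the first year (negative values indicate degradation)."""
    samples = []
    for year, group in values.groupby(values.index.year):
        top_k = group.nlargest(k)
        # represent each sampled point by a fractional-year time coordinate
        t = year + (top_k.index.dayofyear - 1) / 365.0
        samples.append(pd.DataFrame({"t": np.asarray(t), "v": top_k.values}))
    samples = pd.concat(samples)

    slope, intercept = np.polyfit(samples["t"], samples["v"], deg=1)
    reference = slope * samples["t"].min() + intercept  # fitted value in the first year
    return 100.0 * slope / reference                    # percent change per year
```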

2.1.2. Prediction of Reference Power

The previous approach relies only on the availability of the plant’s raw output data (power, current, and voltage). Most modern plants are, however, equipped with multiple sensors that can provide additional information over the operating conditions, most notably about the irradiance and the temperature. Using this additional information can then allow the derivation of a more robust model that can compensate for the variations in operating conditions during the year, with the added benefit of needing much shorter acquisitions to obtain enough data for the degradation estimation.
For these reasons, our second approach for the estimation of the PV plant’s degradation involves estimating the output power in conventional test conditions given the latest observed data. These test conditions are the common ones defined for PV modules [22,23] and are given as follows:
  • Standard Test Conditions (STC): Irradiance = 1000 W/m2, Module temperature = 25 °C;
  • Nominal Operating Cell Temperature (NOCT) conditions: Irradiance = 800 W/m2, Module temperature = 45 ± 3 °C, Ambient temperature = 20 °C.
The STC correspond to the module parameters communicated by the manufacturer; however, they are difficult to realize in the real-world operation of the modules. The NOCT conditions, on the other hand, are much more representative of normal operation.
The proposed approach consists of dividing the data into 6-month bins and deriving for each inverter a model that estimates the power at STC and NOCT conditions in each bin. The inputs to the model are the raw irradiance and module temperature for STC, while for NOCT conditions the ambient temperature is added as well. The chosen model is a Decision Tree [24], because of its high efficiency and ease of training and interpretation, and a different model is trained for each of the 6-month bins. The model is trained to predict the expected power given any values of irradiance and temperature, and the trained model is finally used to estimate the power at STC and NOCT conditions. For better model accuracy, only the data points where the irradiance difference with respect to the test conditions is lower than 150 W/m2 and the temperature difference lower than 5 °C are considered. Because STC are rarely reached in real operation, though, far fewer data points are available in this case than for NOCT conditions. All data are normalized to lie approximately in the interval [0, 1] for better numerical properties, and the whole training procedure is handled in Python using the Scikit-Learn library [25]. The decision tree model uses the Friedman MSE as splitting criterion, while the minimum number of samples to create a tree leaf is set to 10. The whole flowchart of the method is shown in Figure 1b.
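As an illustration, the sketch below trains one decision tree for a single 6-month bin and evaluates it at the NOCT reference point, using the settings described above (Friedman MSE criterion, minimum of 10 samples per leaf, 150 W/m2 and 5 °C filtering windows). The column names and normalisation constants are assumptions made for the example, not the plants’ actual data schema.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def predict_noct_power(df, input_scale, power_scale):
    """Estimate the power at NOCT conditions for one 6-month bin of data.
    `df` is assumed to hold the columns 'irradiance' (W/m2), 'module_temp'
    and 'ambient_temp' (°C), and 'power' (W); `input_scale`/`power_scale`
    bring inputs and output roughly into [0, 1]."""
    # keep only points close to the NOCT conditions (800 W/m2, 45 °C module temperature)
    mask = (df["irradiance"] - 800).abs() < 150
    mask &= (df["module_temp"] - 45).abs() < 5
    train = df[mask]

    x = train[["irradiance", "module_temp", "ambient_temp"]].values / input_scale
    y = train["power"].values / power_scale

    tree = DecisionTreeRegressor(criterion="friedman_mse", min_samples_leaf=10)
    tree.fit(x, y)

    # evaluate the trained model exactly at the NOCT reference point
    noct_point = np.array([[800.0, 45.0, 20.0]]) / input_scale
    return float(tree.predict(noct_point)[0]) * power_scale
```

For STC the same procedure applies, with the reference point (1000 W/m2, 25 °C) and without the ambient temperature input.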

2.2. Anomaly Detection

While a progressive degradation of a PV plant’s performance is inevitable due to the aging of its components, other events can cause a sharp decrease in power output and therefore need to be promptly identified and corrected. As these events are usually very localized in time, the approaches presented in Section 2.1 are not appropriate because they require the collection of data over long time periods, spanning at least a few months. For a prompt fault detection, on the other hand, methods are needed that can immediately signal if any anomaly is occurring. In this section we present two approaches: one based on the real-time prediction of the inverter’s DC current and voltage from environmental information, and the other based on monitoring the deviations of each inverter from the behavior of a reference inverter.

2.2.1. Environmental Model and Control Chart

One way of detecting an anomaly is to build a prediction model for the instantaneous DC current and voltage outputs of each inverter, given the current conditions in terms of irradiance and temperature. This approach shares many similarities with the one presented in Section 2.1.2, however, here, we do not try to predict the power in reference conditions, but rather the instantaneous values of DC current and voltage at each time step at the current environmental conditions. Such a model needs to be trained on a dataset that summarizes the behavior of the inverter in normal operating conditions. The anomaly detection is performed by comparing the measured current and voltage to the predicted ones making use of control charts [26].
In this work we have chosen again Decision Trees for the models, as in Section 2.1.2 and using the same normalization strategy and implementation details. The prediction task involves estimating separately the DC current and voltage at each time-step for each inverter. The inputs to the model are the measured irradiance and ambient temperature at the same time-step where the prediction is calculated. As the module temperature is not always available in every plant (for instance plant B in Section 3), we have decided not to use this measurement in the model.
Once a model has been derived on the training set, the residuals need to be calculated for the derivation of the control chart. To compensate for the daily variability, the residuals r are aggregated per day D:
$$r_D = \frac{1}{N_D} \sum_{t \in D} \frac{X_{pred}(t) - X_{meas}(t)}{\max_{t \in D} X_{meas}(t)} \qquad (1)$$
where t indexes the measurement samples, $N_D$ is the number of samples in day D, and $X_{pred}$ and $X_{meas}$ are the predicted and measured current or voltage, respectively. These residuals are used for the derivation of the control chart, which identifies as anomalies all points in which:
$$r_D > r_0 + 3\sigma_r \qquad (2)$$
where $r_0$ is the average and $\sigma_r$ the standard deviation of $r_D$ on the training set. The whole flowchart of the method is shown in Figure 2a.
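The following sketch expresses Equations (1) and (2) in code, assuming that the predicted and measured values are pandas Series sharing a timestamp index; the variable names are illustrative only.

```python
import pandas as pd

def daily_residuals(pred: pd.Series, meas: pd.Series) -> pd.Series:
    """Equation (1): mean daily prediction error, normalised by the day's
    maximum measured value."""
    daily_max = meas.groupby(meas.index.date).transform("max")
    return ((pred - meas) / daily_max).groupby(meas.index.date).mean()

def control_limit(train_residuals: pd.Series) -> float:
    """Equation (2): upper control limit r_0 + 3*sigma_r on the training set."""
    return train_residuals.mean() + 3.0 * train_residuals.std()

# Usage sketch (pred_train/meas_train and pred_test/meas_test are hypothetical
# series of predicted and measured DC current or voltage):
# limit = control_limit(daily_residuals(pred_train, meas_train))
# anomalous_days = daily_residuals(pred_test, meas_test) > limit
```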

2.2.2. Comparison Model and Clustering

The disadvantage of the previous approach is that it requires identifying, for each inverter, a pool of data that is considered “normal operation” for the training of the models. These data have to include not only the output of each inverter at every time-step (power, current, and voltage), but also the environmental information (irradiance, and temperature), which is not available in every plant. Moreover, these models need to be periodically retrained to account for the gradual degradation due to aging, as discussed in Section 2.1.
For these reasons, we developed a second approach for anomaly detection that aims at detecting unusual daily patterns by comparing the deviations with respect to a reference condition. In order to account for the strong seasonality and dependence on the weather conditions, the approach presented here compares the operation of a chosen inverter, called the reference inverter, to all other inverters. The comparison is done based on the DC current and DC voltage. For this purpose, a statistical ML model predicts the value of one inverter given the value of the reference inverter. The advantage of this approach is that it does not require environmental information and that, assuming the whole plant ages uniformly, the models do not need to be retrained periodically. It does, however, assume that the training data, based on which the prediction ML model is created, are representative of normal operation, without any anomaly.
After the prediction model is created, the daily residuals are calculated as the difference between the modeled and measured values. In the next step, clustering is performed on the daily residuals in the training data. A K-means clustering algorithm specialized for time series data is used for this purpose, implemented with the Python package “tslearn” [27]. The distance metric Dynamic Time Warping (DTW) [28] is selected for clustering, as it can be used to calculate the distance between time series of different lengths. The general idea of the DTW distance is to find the one-to-many and many-to-one matches that minimize the total distance between the two time series. As a result, small shifts in time should not affect the DTW distance, and even short-term missing data or outliers should have a smaller effect on the metric. In order to find the best-fitting number of clusters for the training data, an iterative search between 2 and $N_{max}$ is performed, where $N_{max}$ is the maximal number of clusters. The best-fitting model is the one with the highest Silhouette Coefficient (SC). The parameter $sil_{min}$ is defined as the minimal value of the SC that an accepted cluster model should have; if the best model has an SC lower than $sil_{min}$, the best model is set to the one with only one cluster. The clusters found with the best model in the training data are further inspected, and all clusters with fewer than $count_{min}$ items are discarded as invalid clusters, where $count_{min}$ is the minimal number of items allowed in one cluster.
To detect the unusual daily residuals in the test data that do not fit into the clusters of daily residuals found in the training data, an unsupervised change point detection algorithm is used [29]. We use a variant of the “Model Fitting” (MF) event detection algorithm [30]. According to the original MF algorithm, a change point is detected in a time series if the Euclidean distance between the point and all clusters found in the time series is higher than the radius of each of the clusters. In our implementation, the radius $r_C$ of a cluster C is the maximal DTW distance between the items in the cluster and the cluster center $\mu_C$ calculated by the clustering algorithm. Therefore, if the DTW distance between a daily residual and the center of each cluster is higher than the corresponding radius, an unusual daily pattern is found.
To quantify how a daily residual x differs from the clusters, a distance $d_x$ is calculated using Equation (3), where $C_{set}$ is the set of all clusters found and $d_{x,C}$ is the distance to the cluster C. If x fits into the cluster C, the distance is 0, while otherwise a relative DTW distance limited to 100% is obtained. The whole flowchart of the method is shown in Figure 2b.
$$d_x = \min_{C \in C_{set}} d_{x,C}, \qquad d_{x,C} = \begin{cases} 0, & \text{if } x \in C \\ \min\!\left(\dfrac{DTW(\mu_C, x) - r_C}{r_C} \cdot 100\%,\; 100\%\right), & \text{otherwise} \end{cases} \qquad (3)$$
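A possible implementation of this clustering and distance computation with tslearn is sketched below. The silhouette-based model selection, the radius definition, and the relative distance follow the description above and our reading of Equation (3); the array shapes and parameter defaults are assumptions for the example.

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans, silhouette_score
from tslearn.metrics import dtw

def fit_residual_clusters(X, n_max=4, sil_min=0.5, count_min=5):
    """Select the best-fitting cluster model for the training residuals.
    X has shape (n_days, day_length, 1), as expected by tslearn."""
    best_model, best_score = None, -np.inf
    for n in range(2, n_max + 1):
        model = TimeSeriesKMeans(n_clusters=n, metric="dtw", random_state=0).fit(X)
        score = silhouette_score(X, model.labels_, metric="dtw")
        if score > best_score:
            best_model, best_score = model, score
    if best_score < sil_min:
        best_model = TimeSeriesKMeans(n_clusters=1, metric="dtw", random_state=0).fit(X)

    clusters = []  # (center, radius) pairs of the accepted clusters
    for c, center in enumerate(best_model.cluster_centers_):
        members = X[best_model.labels_ == c]
        if len(members) < count_min:
            continue  # too few items: discard as an invalid cluster
        radius = max(dtw(center, m) for m in members)
        clusters.append((center, radius))
    return clusters

def relative_distance(x, clusters):
    """Relative distance of a daily residual x to the accepted clusters:
    0 if x falls within some cluster radius, otherwise the DTW excess over
    the nearest radius, capped at 100%."""
    d = 100.0
    for center, radius in clusters:
        dist = dtw(center, x)
        if dist <= radius:
            return 0.0
        d = min(d, min((dist - radius) / radius * 100.0, 100.0))
    return d
```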

3. Results

For our investigation we have made use of data coming from three plants. A crystalline silicon technology is used in all plants. For anonymity reasons, we will call them plants A, B, C, and they have the following characteristics:
  • plant A, location Slovenia: data acquired between September 2013 and April 2020, with some long interruptions due to missing sensor data; 19 inverters have been considered. In this plant, global irradiation and ambient and module temperatures are measured. The irradiation sensor is based on a thin-film solar cell, whereas the module temperature sensor is a Pt100. The installed capacity of the plant is approximately 315 kWp, comprising 1313 modules of 240 Wp composed of multicrystalline silicon (multi-c-Si) cells. Regarding the placement of the modules, there are two different orientations: southwest (200°) and southeast (135°). More precisely, 12 inverters have southwest orientation, 6 have southeast orientation, and one inverter has modules connected to both orientations. Moreover, the irradiation sensor shares the same southwest orientation as the first 12 inverters.
  • plant B, location Sardinia, Italy: data acquired between April 2014 and April 2019, 4 inverters have been considered. Around 4/5 of the modules are west-oriented and the others have southwest orientation. This plant has crystalline silicon (c-Si) cells. The measurement sensors consist of an irradiation and ambient temperature sensor. The irradiation sensor is based on a silicon solar cell, while the module temperature is not measured.
  • plant C, location Italy: data acquired between January 2011 and April 2020, 5 inverters have been considered. Each inverter is connected to three strings, each with 16 PV modules. All inverters have modules with south orientation. In this plant, irradiation and ambient and module temperatures are measured. The plant is composed of crystalline silicon technology modules. No specification of the sensor technology is available.
The investigated plants have very different sensor infrastructures and do not always have detailed information about the sensors available. For these reasons, the focus of this paper is not on the condition of the sensors, which we investigated in [31], but rather on the methods for deriving reliable prediction models using a variety of possibly unknown sensors. It can also be observed that, in plant A, the modules and the irradiance sensor come from different technologies. According to the work in [32], using amorphous silicon irradiance sensors, which are a much cheaper technology, in a c-Si plant is not optimal, but this should result only in a fixed offset. However, as the prediction models presented in this work learn the relationship between the irradiance values and DC plant values from the measured data, such an offset is automatically compensated.
Moreover, for some of the plants the status of the investigated strings has been assessed with an on-site inspection. For plant A, the inspection using thermal imaging showed inactive parts in the PV modules, which explains the higher degradation in voltage shown in Section 3.1. For plant B, no on-site inspection could be performed. Finally, for plant C, the on-site inspection using thermal imaging and IV-curve measurements showed only a slight PID behavior, but an overall good operation of the plant with no suspicious behavior, consistent with the results in Section 3.1.

3.1. Degradation Estimation

A well-operating PV plant using crystalline silicon technology has an estimated power degradation due to aging of 0.5–0.6% per year [33]. The estimated degradation of the inspected plants, using the proposed approaches, is presented next. These results are also compared to a popular method where the degradation is calculated based on a standard linear least-squares regression applied to the temperature-corrected PR [10]. The data are first filtered using appropriate irradiance, outlier, and stability filters, as suggested in [10,12]. The reference PR-based degradation rate is calculated only for plants A and C, for which the module temperature and the coefficients needed to calculate the PR are known. Because of the large differences between these datasets, the outlier filter was customised for each dataset.
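For reference, the benchmark can be sketched as follows: a temperature-corrected PR is computed and a linear least-squares fit gives its rate of change per year. The series names, the daily aggregation, and the default temperature coefficient are assumptions for the example, and the irradiance, outlier, and stability filters of [10,12] are omitted for brevity.

```python
import numpy as np
import pandas as pd

def pr_degradation_rate(power, irradiance, module_temp, p_nominal, gamma=-0.004):
    """Sketch of the reference method: temperature-corrected performance ratio
    followed by an ordinary least-squares linear fit. `power`, `irradiance`
    and `module_temp` are assumed to be Series with a DatetimeIndex; `gamma`
    is the power temperature coefficient (1/°C), assumed known."""
    # expected power at the measured irradiance, corrected to 25 °C (STC)
    expected = p_nominal * (irradiance / 1000.0) * (1.0 + gamma * (module_temp - 25.0))
    pr = (power / expected).resample("D").median().dropna()

    t_years = (pr.index - pr.index[0]).days / 365.25
    slope, intercept = np.polyfit(t_years, pr.values, deg=1)
    return 100.0 * slope / intercept  # percent change of the PR per year
```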

3.1.1. Sampled Values—Based Score

The estimation of the plant’s degradation using the sampled values-based method on the DC power is performed for all plants (Figure 3, Table 1 and Table 2). For each inverter, the sampled values taken for each year are shown in the plots, together with a linear fit showing the degradation. In Figure 3c, relative values in percentage are given for better clarity, since the range of power values for inverters 97 and 98 is about 5 times higher than that of the other inverters. For the other plots in Figure 3, the absolute values expressed in Watts are given. The yearly degradation of the DC power for all inverters in plant C is on average 50 W per year, or 0.5% per year. Plants A and B show higher degradation: on average there is a yearly degradation of 1.9% for plant B and 2.5% for plant A.
By comparing the degradation rates per inverter within one plant, interesting results can be observed. First, inverter 99 (Figure 3c) has a higher degradation in power than the other inverters. Next, there is an unusual drop in the sampled values in 2016, shown in Figure 3d, that has no significant effect on the linear degradation fit. Finally, there is a higher degradation in power for some of the inverters in plant A, such as 1U02, 1U04, 1U09, and 2U02 (Figure 3a,b). A significant drop in DC power is seen for inverter 2U04 in 2015, where the selected points deviate highly from all others. Including these sampled points in the degradation analysis strongly affects the YDS; hence, for better accuracy, these selected points are omitted from the degradation analysis.
One valuable feature of the sampled values-based approach is that the degradation in DC current and DC voltage can also be obtained. Consequently, the power degradation can be correlated to the degradation in DC current or DC voltage. One can observe that the higher degradation in power for some of the inverters in plant A is related to a higher degradation in voltage (Figure 4). A detailed comparison of the degradation rates, expressed in percentage, is shown in Table 3. Although a loss in voltage is theoretically not expected, several inverters, such as 1U02, 1U09, and 2U02, show a loss in voltage of above 1% per year. The average degradation in DC current for all inverters is 1.2%, with no significant difference between the inverters. One explanation of such uniform degradation between the inverters is that it is a result of accelerated aging or soiling.
Similar analyses on the other datasets bring additional observations. First, the degradation in power for inverter 99 in plant B is related to a loss in current. Next, the slight power degradation in a few of the inverters in plant C can be correlated to a higher degradation in DC voltage (Figure 3d, Table 2): lower selected values for DC voltage of approximately 460 V are seen in the period 2015–2017, compared to approximately 480 V in 2011. The obtained degradation rates agree well with the reference degradation method based on the PR (Table 2). The advantage of the sampling method is that it depends on neither the temperature coefficient for the power nor the nominal power needed for a reliable PR calculation. During the on-site inspection of this system, it was found that a slight PID effect is distributed across the system, which has different effects on the various inverters.

3.1.2. Prediction of Reference Power

The second approach for estimating the plant’s degradation (Section 2.1.2) has been applied only to plants A and C, due to the lack of measured module temperature in plant B. Figure 5 shows the predicted power in STC or NOCT conditions for each of the considered inverters in plant C. All curves are collectively fitted with a linear model to obtain an estimate of the overall decreasing trend. Moreover, the dashed horizontal line shows the nominal power as communicated by the module manufacturer. The vertical lines at each point estimate the uncertainty of the power estimates, which is always much higher at STC because of the lack of one prediction input (the ambient temperature) and the lower amount of data available for training the models. Additionally, Figure 5c shows the prediction of the STC power obtained using the standard power temperature coefficient model [34] for the dependency of current and voltage on irradiance and temperature, which makes use of the coefficients communicated by the module manufacturer. Also in this case, the results are in accordance with the previous estimates of power and degradation based on the decision tree model, therefore validating our approach. However, the uncertainty indicated by the error bars is even higher in Figure 5b than in Figure 5c, showing that the power temperature coefficient model can provide good prediction results only on average, but it is not suitable for precise point-wise estimations.
For a more precise estimation of the degradation, Table 2 shows the linear fit for the degradation obtained separately for each inverter and each operating condition. As already observed, the estimates obtained at STC and NOCT conditions are usually rather different from each other. However, as the power estimation for NOCT is more reliable (as seen from the smaller uncertainty), we believe that these are the conditions to be preferred, and we will only consider this case in the remainder of this work. Note also that the degradation estimation obtained from NOCT conditions is in good agreement with the one obtained using the sampled values model.
Moving then to plant A, Figure 6 shows the predicted NOCT powers for all considered inverters, together with their collective linear fit. Unfortunately, as is immediately evident, the missing data prevent us from obtaining continuous curves; however, the plant’s operational time is still well covered. It can also be observed that the yearly degradation of this plant is much higher than in the previous case.
A more detailed comparison is shown in Figure 7, where only four inverters are considered and individually fitted for linear degradation. The linear model fits the data very well, which reassures us about the validity of the proposed approach. The calculated yearly degradations are also relatively consistent with each other, showing that all these inverters are affected by the same phenomena. For a more detailed comparison, Table 1 also shows the calculated degradation coefficients for all considered inverters of this plant. Note again the good agreement between this model and the one based on sampled values. Comparing the degradation scores with the reference score obtained from the PR-based method, one can observe that the degradation rates fall within a similar range of around 1.5% to 3% per year. This high degradation rate can be explained by problems in the plant that were discovered during an on-site inspection. More precisely, disconnected cell failures [2] were found, distributed throughout the system and affecting all inverters, each on a different scale. These findings were confirmed with thermal images. Although the PR method produces similar results, our methods are better adapted to the data typically available for online monitoring, where the information on the nominal power per inverter is normally absent or difficult to obtain. For instance, different inverters in plant A have different nominal powers that need to be considered in the PR calculation, and this information might not always be available. Additionally, because the PR-based degradation rate was highly affected by the increasing trend present in the module temperature data from plant A, a recalculated module temperature obtained from the measured ambient temperature using a correction formula [35] was used instead. On the other hand, our method is less affected by the problems in the module temperature data, because the model learns to predict the reference power from 6-month data blocks, where this increasing trend does not have a high impact, resulting therefore in a more robust model.

3.2. Anomaly Detection

As discussed in Section 2.2, we have developed two approaches for anomaly detection, which both require as a first step the derivation of a regression model (Section 3.2.1). The anomaly detection algorithms are then developed and compared between each other (Section 3.2.2 and Section 3.2.3).

3.2.1. Regression Models

Environmental Model

The first algorithm for the prediction of the inverters’ DC current and voltage uses the approach presented in Section 2.2.1. In this case only plants A and B are considered, and the recorded data have been divided into two parts: the first one, composed of all data acquired before the 1st of January 2018, constitutes the training set for our models, while the second one, composed of all data acquired after this date, constitutes the test set on which the models’ performance is assessed.
Figure 8 shows the cumulative distributions of the relative errors in the predicted voltage and current for each of the two plants and each inverter on the test set. For this plot, only the points where the current is higher than 5% of the maximum measured inverter current are considered, in order to focus only on times of operation. As immediately apparent, the error on the voltage prediction is usually much lower than the one on the current, which is therefore the most important contribution to the error in the predicted power. Note that there is a relatively large difference in errors between the different inverters, which needs to be investigated.
For this reason, Figure 9 and Figure 10 show comparisons between measured and predicted DC currents for some inverters of the two plants. For plant A (Figure 9), it is apparent that inverters 1U01 and 1U03 have conserved the same behavior between the training and test sets, and for this reason the predicted current is always very close to the measured one. On the other hand, inverters 2U04 and 2U10 have deviated much more from this behavior, exhibiting both a small shift in time, due to the slightly different orientation between these modules and the irradiance sensor, and a higher measured current for inverter 2U04, probably due to improvements in the PV panels or in the inverter. Note that the time shift for inverters 2U04 and 2U10 is just a systematic error, which can in principle be compensated, but it does not affect the results of anomaly detection, because the derivation of the control chart limits already takes into account and compensates for any systematic error.
For plant B (Figure 10), instead, the differences between the inverters are much smaller. However, the time shift of inverter 99 is also evident in this case, leading to a higher prediction error.

Comparison Model

The second approach for deriving a regression model of the inverter’s DC current and voltage involves the usage of a reference inverter (as explained in Section 2.2.2). The easiest method for choosing a reference inverter is to select any one that does not show any evident anomaly in the recorded data, and this is the choice made in this work. For an application of this method to online monitoring, however, methods for checking whether the reference inverter is still operating normally need to be implemented. Such methods can make use, for instance, of a second reference inverter that could promptly signal if any anomaly occurred on the reference inverters. Another possibility would be to monitor whether suddenly all inverters signal an anomaly at the same time, indicating a possible failure on the reference inverter. These investigations would however require data where the anomalies are precisely characterized, and are therefore left for future work.
In the case presented here, the reference inverters chosen for the plants are 1U01 for plant A, 100 for plant B, and 244 for plant C. Approximately one year of data, starting from the first day of plant operation, is used for training. The starting dates of the test data are the following: 1 January 2015 for plant A, 1 July 2015 for plant B, and 1 January 2012 for plant C. The implementation of the prediction models is done using the Scikit-Learn library [25]. The following algorithms were tested: Linear Regression (LR), Support Vector Regressor (SVR), Random Forest Regressor (RFR), and Decision Tree (DT). The parameters were set to their default values, except the maximal depth of the trees used in the RFR, which was set to 5, and the parameters of the DT models, which were the same as used in the approach in Section 2.2.1. For training, 70% of the data is randomly chosen, while the other 30% is used for evaluation of the prediction model. The evaluation showed that a simple LR model can predict the DC current with high performance, featuring an r² coefficient of around 0.97. On the other hand, r² is only 0.47 for the models that predict the DC voltage, showing a much lower performance. This result was expected, since there is a strong linear dependency between the irradiance and DC current that causes the DC currents of two different inverters to be linearly dependent. On the other hand, this is not valid for the DC voltage.
To overcome the limitations of linear models for DC voltage prediction, experiments were conducted to evaluate the SVR, RFR, and DT models. For a better performance, the input data for the SVR were standardized, while for the RFR and DT the data were normalized to [0, 1]; afterwards, the reverse transformation was applied to obtain the predictions in the same range as the measurements. Adding the temporal features “time in the day”, expressed in hours, and “day in year”, expressed as the index of the date, is also evaluated. The average value of the root mean square error (RMSE) on the evaluation data, for all combinations of models and input data, is shown in Table 4. The results suggest that the best performance is achieved when using the SVR model with the additional temporal data included in the input. Therefore, for further investigations, the models for DC voltage prediction make use of this method.
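The sketch below shows how the two comparison models could be assembled with Scikit-Learn: a plain linear regression for the DC current, and an SVR with standardized inputs plus the two temporal features for the DC voltage. The column names and the timestamp index are assumptions made for the example, not the plants’ actual data schema.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_comparison_models(ref: pd.DataFrame, target: pd.DataFrame):
    """Fit the per-inverter comparison models from the reference inverter.
    `ref` and `target` are assumed to hold 'dc_current' and 'dc_voltage'
    columns, aligned on a common DatetimeIndex."""
    # DC current: a simple linear model is sufficient (r2 around 0.97)
    current_model = LinearRegression().fit(ref[["dc_current"]], target["dc_current"])

    # DC voltage: SVR on standardized inputs, with the temporal features
    # "time in the day" (hours) and "day in year"
    voltage_inputs = pd.DataFrame({
        "ref_voltage": ref["dc_voltage"],
        "hour": ref.index.hour + ref.index.minute / 60.0,
        "day_of_year": ref.index.dayofyear,
    })
    voltage_model = make_pipeline(StandardScaler(), SVR()).fit(
        voltage_inputs, target["dc_voltage"]
    )
    return current_model, voltage_model
```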

3.2.2. Clustering

The daily residuals calculated from the comparison model, which employs a reference inverter (Section 2.2.2), are used to run the clustering algorithm and find the meaningful clusters in the training data. The daily residuals are represented as multidimensional vectors, where each dimension matches a time of the day when a measurement is taken. Only the dimensions with enough valid data are considered in the vector. Before clustering, missing data at the beginning and at the end of the day are replaced by 0-values, while the other missing data points are interpolated from the surrounding values. Daily residual vectors with more than two consecutive missing points are discarded from the training process. The parameters used in the clustering (Section 2.2.2) are set to $N_{max} = 4$, $sil_{min} = 0.5$, and $count_{min} = 5$.
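A possible preprocessing of the daily residuals into fixed-length vectors, following the rules above, is sketched here; it assumes a regular sampling grid so that each valid day yields the same number of samples, and the function name is illustrative.

```python
import numpy as np
import pandas as pd

def build_daily_residual_vectors(residuals: pd.Series, max_gap: int = 2) -> np.ndarray:
    """One vector per day: interior gaps are interpolated, gaps at the start
    and end of the day are filled with 0, and days with more than `max_gap`
    consecutive missing samples are discarded."""
    days = []
    for _, day in residuals.groupby(residuals.index.date):
        # length of the longest run of consecutive missing samples in this day
        runs = day.isna().astype(int).groupby(day.notna().cumsum()).sum()
        if runs.max() > max_gap:
            continue
        filled = day.interpolate(limit_area="inside").fillna(0.0)
        days.append(filled.values)
    # shape (n_days, day_length, 1), as expected by tslearn
    return np.asarray(days)[..., np.newaxis]
```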
In many cases, as expected, only one meaningful cluster is found. In fact, if valid data from a well-operating plant are available, the daily patterns of the residuals should be close to 0. A visualization of the daily residuals in a case where one cluster is identified, together with the cluster center, is shown in Figure 11a. If different states of operation are present in the training data, we expect more clusters to show up. One such example is inverter 1U07, for which erroneous data are present in the training set and two clusters are found (Figure 11b). This is one drawback of the approach, as there is no validation of the training data and we assume that they do not include failures. Therefore, the final event detection for inverter 1U07 should be interpreted with caution. Another observation can be made for the inverters in plant A that have a different orientation than the reference inverter 1U01. For all inverters 2U04–2U10, two clusters are found in the DC current daily residuals, which correspond to days in the summer and winter seasons. The LR model cannot capture the shift in DC current seen for inverters with a different orientation; these shifts, which differ between the seasons, are therefore reflected in the clusters (Figure 11c). The final observation is that, in a few cases of the DC voltage daily residuals, an additional cluster is found besides the expected cluster around the 0-residuals. One explanation is that natural shadows cause the modeled and reference inverters to start and end the daily operation at different times of the day. Hence, higher residuals are seen at the start or end of the day, which are later identified as a separate cluster. It can then be concluded that such a second cluster also shows a normal pattern, since it represents a particular feature of the inverter.
The final stage of the approach is to find the daily events that do not fit into any cluster. The relative distance of the daily residuals, for all days in the test data, is shown in Figure 12, Figure 13 and Figure 14. Days with a distance higher than 0 are considered to show an unusual daily pattern. On average, over all inverters, 4% (4%) of the days in plant A, 11% (7%) in plant B, and only 0.7% (0.7%) in plant C are identified as DC current (DC voltage) unusual events. With the distinction between unusual events in the residuals of the DC current and the DC voltage, one can find failures connected to DC voltage or DC current issues.
As the ground truth information of failures in the systems is not available, the evaluation of the proposed approach to detect unusual daily events is done qualitatively. The investigation of the daily events suggests different scenarios:
  • one-day events specific to one inverter;
  • long-term events specific to one inverter;
  • events occurring on all inverters, indicating either a plant-wide failure or a problem on the reference inverter;
  • events detected on both DC current and DC voltage.
For many of the detected one-day events, the relative distance to the nearest cluster is less than 50%. In these cases, the residuals show only a slight deviation with respect to the cluster centers (Figure 13b). On the other hand, events with a higher distance usually represent more severe issues. Several events with high distances are detected in December of multiple years for inverter 2U03 (Figure 12d), which are related to short-term increases or decreases in voltage during several hours in the afternoon. In another example, current-related events are detected for inverter 99, where the current measured for short periods in the afternoon has lower values (Figure 15).
In some cases, events are detected on multiple days over a longer period of time. One such case is seen in mid-2015 (Figure 12a) for inverter 1U09. A scatter plot of the measured values of DC current and the values predicted with the ML model is shown in Figure 15a. The points in red show the values in the days detected as unusual events, where the measured values are lower than the predicted ones by about 5 A. Similar scenarios are seen in mid-2016 and during most of 2019 and 2020. The events seen in DC voltage in 2016 for the same inverter are caused by a slight increase in voltage during part of or the whole day (Figure 15b). A lower current is also behind the events in 2020 for inverter 1U02, the events in 2019 for 1U03 and 2U01, and finally the events in 2015 for 2U04 (Figure 12). The dependency between the measured and predicted values in the case of inverter 2U04 is not linear, since the orientation of its modules differs from that of inverter 1U01 (Figure 15c). Most of the events detected in the DC voltage daily residuals are caused by a high drop in voltage at approximately 19:00 for a short period of time (Figure 12d). On the other hand, for inverter 2U08, the events in 2019 are related to an unusual increase in voltage seen in the morning.
The third scenario, where events are seen in several inverters at the same time, can be observed in a few examples, and these are a probable indication of an anomaly on the reference inverter. In one case, a deviation of the DC current of the reference inverter caused the detection of events for all other inverters in plant C in 2017 (Figure 14a). In another case, this time not indicating anomalies on the reference inverter but rather problems in the data collection, the DC current of all inverters in plant A goes to 0 at some times of the day, and many missing data within the day are also seen in mid-2019 (Figure 12).
Finally, one example of the fourth scenario can be seen for inverter 99 in plant B. In the first half of 2018, for the days detected as events for both DC current and DC voltage, a lower DC current and a higher DC voltage are observed. The measured and predicted DC voltage in 2018 is shown in Figure 15d. The events in the DC current residuals of inverters 757 and 750 in plant C, detected in 2012, are connected to a drop of the current to 0, while at the same time the measured voltage is higher (Figure 14). A similar scenario is detected for inverters 2U01–2U10 in the periods 8–12 August 2016 and 25 July–10 August 2017 (Figure 12).
Overall, the analysis of the detected events shows that the method successfully captures many truly unusual patterns, especially in the cases where a high distance to the closest cluster is obtained (more than 50%). One limitation is that some events are detected in cases where only a slight deviation from the clusters exists. The sensitivity of the distance metric should be further investigated, and if necessary a different metric could be proposed in future work. Another limitation is that, for online monitoring, the interpretation of the events requires special care unless a method for checking whether the reference inverter itself is operating normally is implemented.

3.2.3. Control Chart

The second method for anomaly detection presented here makes use of the environmental model (Section 2.2.1) to build a control chart on the test set. The model’s performance, as assessed in Section 3.2.1, can be highly variable depending on the plant and the inverter, and therefore the limits for the control chart (Equation (2)) need to be derived on a per-inverter basis. Figure 16 and Figure 17 show the derived control charts for DC current and voltage for the most representative inverters of plants A and B. The dashed horizontal line is the limit defined by Equation (2), while the dots are the points in which the clustering approach from Section 3.2.2 detects an anomaly. Unfortunately, due to missing data, the control chart for plant A could not be derived for a non-negligible time period. It can be noted, however, that the two methods for anomaly detection are in good agreement in identifying long periods of anomalous behavior. More localized anomaly peaks, in one method or in the other, are instead most probably outliers that need to be filtered out.

4. Conclusions

In this work, we have presented different data-driven approaches for the assessment of performance degradation in PV plants due to various conditions. The approaches target different data availability and operating conditions and show a substantial agreement when a comparison is possible. Such methods can be extremely valuable for an efficient operation of a photovoltaic plant, allowing the prompt identification and correction of problems affecting the performance. We have shown that the great degree of variability in PV plants does not negatively affect the accuracy of the algorithms, provided that data of sufficient quality are available for the training phase. Our methods have been validated against some of the most popular methods in the literature, showing comparable performance. Our approaches, however, being data-driven, have the advantage of requiring neither in-depth knowledge of the plant nor specific and accurate physical measurements on-site, but rather only the monitoring of the plant with high-level sensors for an adequate amount of time.
The next logical step with respect to anomaly detection would be to allow not just the identification of a failure, but also its characterization in terms of root causes. This, however, would require the collection of much more detailed datasets, where examples of many different kinds of failures would need to be recorded and manually characterized. Our results on the degradation estimation also pave the way for the derivation of predictive models, which can estimate the remaining useful life of all components before replacement becomes necessary due to an unacceptable decrease in performance. For this application, however, datasets with more specific and accurate information about each inverter are mandatory. These considerations reiterate the need for promoting the acquisition of increasingly accurate and detailed datasets monitoring the operation of photovoltaic plants.

Author Contributions

Conceptualization, V.D., F.P. and W.M.; Formal analysis, V.D. and F.P.; Funding acquisition, C.H.; Investigation, V.D. and F.P.; Methodology, V.D. and F.P.; Project administration, W.M.; Resources, N.D. and M.H.; Software, V.D. and F.P.; Supervision, A.M. and C.H.; Visualization, V.D. and F.P.; Writing—original draft, V.D. and F.P.; Writing—review & editing, W.M., N.D., M.H., A.M. and C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Austrian Climate and Energy Funds and this study was carried out as part of the Energy Research Program 2018 within the framework of the ”OptPV4.0” project (FFG number 871684, Energieforschung (e!MISSION), 5. Ausschreibung Energieforschung 2018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are not publicly available due to data protection reasons.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mühleisen, W.; Hirschl, C.; Brantegger, G.; Neumaier, L.; Spielberger, M.; Sonnleitner, H.; Kubicek, B.; Ujvari, G.; Ebner, R.; Schwark, M.; et al. Scientific and economic comparison of outdoor characterisation methods for photovoltaic power plants. Renew. Energy 2019, 134, 321–329.
2. Köntges, M.; Kurtz, S.; Packard, C.; Jahn, U.; Berger, K.A.; Kato, K.; Friesen, T.; Liu, H.; Van Iseghem, M.; Wohlgemuth, J.; et al. Review of Failures of Photovoltaic Modules; Technical Report IEA-PVPS T13-01:2014; IEA International Energy Agency, 2014. Available online: https://www.iea.org/about/membership (accessed on 26 February 2021).
3. Halwachs, M.; Neumaier, L.; Vollert, N.; Maul, L.; Dimitriadis, S.; Voronko, Y.; Eder, G.; Omazic, A.; Muehleisen, W.; Hirschl, C.; et al. Statistical evaluation of PV system performance and failure data among different climate zones. Renew. Energy 2019, 139, 1040–1060.
4. Samara, S.; Natsheh, E. Intelligent PV Panels Fault Diagnosis Method Based on NARX Network and Linguistic Fuzzy Rule-Based Systems. Sustainability 2020, 12, 2011.
5. Pei, T.; Hao, X. A Fault Detection Method for Photovoltaic Systems Based on Voltage and Current Observation and Evaluation. Energies 2019, 12, 1712.
6. Lazzaretti, A.; Costa, C.; Paludetto, M.; Yamada, G.; Lexinoski, G.; Moritz, G.; Oroski, E.; Goes, R.; Linhares, R.; Stadzisz, P.; et al. A Monitoring System for Online Fault Detection and Classification in Photovoltaic Plants. Sensors 2020, 20, 4688.
7. Basnet, B.; Chun, H.; Bang, J. An Intelligent Fault Detection Model for Fault Detection in Photovoltaic Systems. J. Sens. 2020, 2020, 1–11.
8. Ingenhoven, P.; Belluardo, G.; Moser, D. Comparison of Statistical and Deterministic Smoothing Methods to Reduce the Uncertainty of Performance Loss Rate Estimates. IEEE J. Photovolt. 2018, 8, 224–232.
9. Dhimish, M.; Alrashidi, A. Photovoltaic Degradation Rate Affected by Different Weather Conditions: A Case Study Based on PV Systems in the UK and Australia. Electronics 2020, 9, 650.
10. Jordan, D.C.; Deceglie, M.G.; Kurtz, S.R. PV degradation methodology comparison—A basis for a standard. In Proceedings of the 2016 IEEE 43rd Photovoltaic Specialists Conference (PVSC), Portland, OR, USA, 5–10 June 2016; pp. 0273–0278.
11. Lindig, S.; Ismail, K.; Weiß, K.A.; Moser, D.; Topic, M. Review of Statistical and Analytical Degradation Models for Photovoltaic Modules and Systems as Well as Related Improvements. IEEE J. Photovolt. 2018, 8, 1773–1786.
12. Jordan, D.C.; Kurtz, S.R. The Dark Horse of Evaluating Long-Term Field Performance—Data Filtering. IEEE J. Photovolt. 2014, 4, 317–323.
13. Zinger, D.S. Review on Methods of Fault Diagnosis in Photovoltaic System Applications. J. Eng. Sci. Technol. Rev. 2019, 12, 53–66.
14. Theristis, M.; Livera, A.; Jones, C.B.; Makrides, G.; Georghiou, G.E.; Stein, J.S. Nonlinear Photovoltaic Degradation Rates: Modeling and Comparison Against Conventional Methods. IEEE J. Photovolt. 2020, 10, 1112–1118.
15. Rodrigues, S.; Ramos, H.; Morgado-Dias, F. Machine Learning in PV Fault Detection, Diagnostics and Prognostics: A Review. In Proceedings of the 2017 IEEE 44th Photovoltaic Specialist Conference (PVSC), Washington, DC, USA, 25–30 June 2017; pp. 3178–3183.
16. Chouder, A.; Silvestre, S. Automatic supervision and fault detection of PV systems based on power losses analysis. Energy Convers. Manag. 2010, 51, 1929–1937.
17. Theocharides, S.; Makrides, G.; Georghiou, G.E.; Kyprianou, A. Machine learning algorithms for photovoltaic system power output prediction. In Proceedings of the 2018 IEEE International Energy Conference (ENERGYCON), Limassol, Cyprus, 3–7 June 2018; pp. 1–6.
18. Saberian, A.; Hizam, H.; Mohd Radzi, M.A.; Kadir, Z.; Mirzaei, M. Modelling and Prediction of Photovoltaic Power Output Using Artificial Neural Networks. Int. J. Photoenergy 2014, 2014, 1–10.
19. Villalva, M.G.; Gazoli, J.R.; Filho, E.R. Comprehensive Approach to Modeling and Simulation of Photovoltaic Arrays. IEEE Trans. Power Electron. 2009, 24, 1198–1208.
20. Harrou, F.; Sun, Y.; Taghezouit, B.; Ahmed, S.; Hamlati, M.E. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches. Renew. Energy 2017, 116.
21. Dimitrievska, V.; Mühleisen, W.; Pittino, F.; Diewald, N.; Makula, M.; Kosel, J.; Hirschl, C. Statistical evaluation approach of PV plant for O&M. In Proceedings of the 37th European Photovoltaic Solar Energy Conference and Exhibition, Online, 7–11 September 2020; pp. 1536–1541.
22. Terrestrial Photovoltaic (PV) Modules—Design Qualification and Type Approval, 2019 (E DIN EN IEC 61215-1-1 VDE 0126-31-1-1:2019-06). Available online: https://www.vde-verlag.de/standards/1100557/e-din-en-iec-61215-1-1-vde-0126-31-1-1-2019-06.html (accessed on 26 February 2021).
23. Photovoltaic Devices, 2020 (DIN EN IEC 60904-3 VDE 0126-4-3:2020-01). Available online: https://www.vde-verlag.de/standards/0100547/din-en-iec-60904-3-vde-0126-4-3-2020-01.html (accessed on 26 February 2021).
24. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer Series in Statistics; Springer: New York, NY, USA, 2009.
25. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
26. Roberts, S. Control chart tests based on geometric moving averages. Technometrics 1959, 1, 239–250.
27. Tavenard, R.; Faouzi, J.; Vandewiele, G.; Divo, F.; Androz, G.; Holtz, C.; Payne, M.; Yurchak, R.; Rußwurm, M.; Kolar, K.; et al. Tslearn, A Machine Learning Toolkit for Time Series Data. J. Mach. Learn. Res. 2020, 21, 1–6.
28. Berndt, D.J.; Clifford, J. Using dynamic time warping to find patterns in time series. KDD Workshop 1994, 10, 359–370.
29. Aminikhanghahi, S.; Cook, D.J. A Survey of Methods for Time Series Change Point Detection. Knowl. Inf. Syst. 2017, 51, 339–367.
30. Madicar, N.; Sivaraks, H.; Rodpongpun, S.; Ratanamahatana, C.A. An Enhanced Parameter-Free Subsequence Time Series Clustering for High-Variability-Width Data. In Recent Advances on Soft Computing and Data Mining; Herawan, T., Ghazali, R., Deris, M.M., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 419–429.
31. Mühleisen, W.; Neumaier, L.; Taverna, F.; Makula, M.; Streit, B.; Graefe, M.; Gradwohl, C.; Kosel, C.H.J. The Need for an Accuracy Check of Irradiation Sensors for Photovoltaic Power Plants. In Proceedings of the 37th European Photovoltaic Solar Energy Conference and Exhibition, Online, 7–11 September 2020; pp. 1553–1556.
32. Kirn, B.; Jankovec, M.; Brecl, K.; Topic, M. Performance of Different Types of ETSC Solar Irradiance Sensors. In Proceedings of the 28th European Photovoltaic Solar Energy Conference and Exhibition, Paris, France, 30 September–4 October 2013; pp. 3196–3199.
33. Jordan, D.; Kurtz, S.; VanSant, K.; Newmiller, J. Compendium of photovoltaic degradation rates. Prog. Photovolt. Res. Appl. 2016, 24.
34. Marion, B. Comparison of Predictive Models for Photovoltaic Module Performance. In Proceedings of the 2008 IEEE 33rd Photovoltaic Specialist Conference (PVSC), San Diego, CA, USA, 11–16 May 2008; pp. 1–6.
35. Ross, R.G. Flat-Plate Photovoltaic Array Design Optimization. In Proceedings of the 1980 IEEE 14th Photovoltaic Specialist Conference (PVSC), San Diego, CA, USA, 7–10 January 1980; pp. 1126–1132.
Figure 1. Flowcharts describing the two approaches for degradation estimation.
Figure 2. Flowcharts describing the two approaches for anomaly detection.
Figure 3. DC power degradation for all inverters in the plants calculated using the sampled values-based approach.
Figure 4. Estimated degradation in DC current and DC voltage for plant A.
Figure 5. Predicted reference power for plant C.
Figure 6. Predicted NOCT power for all inverters of plant A.
Figure 7. Predicted NOCT power for selected inverters of plant A.
Figure 8. Empirical cumulative distributions of the relative errors in the predicted voltage and current for each of the two plants and each inverter on the test set. Only the points where the current is higher than 5% of the inverter's maximum measured current are considered.
Figure 9. Detail for two inverters from plant A showing both the measured and predicted DC currents. The residuals are also shown (green dash-dotted line, right axis).
Figure 10. Detail for four inverters from plant B showing both the measured and predicted DC currents. The residuals are also shown (green dash-dotted line, right axis).
Figure 11. Clusters obtained from the daily residuals of DC current predictions for several inverters in plant A.
Figure 12. Unusual day event detection in plant A.
Figure 13. Unusual day event detection in plant B.
Figure 14. Unusual day event detection in plant C.
Figure 15. Modeled vs. predicted values, where the points for the days detected as unusual events are shown in red.
Figure 16. Control charts for anomaly detection on selected inverters from plant A. The dashed horizontal lines are the control chart limits, while the circles mark the points at which the clustering algorithm detects an anomaly. Some time periods are not shown in the control chart due to missing data.
Figure 17. Control charts for anomaly detection on selected inverters from plant B. The dashed horizontal lines are the control chart limits, while the circles mark the points at which the clustering algorithm detects an anomaly.
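The control charts of Figures 16 and 17 are built on the residuals between measured and predicted inverter quantities, with anomalies flagged when the monitored statistic leaves the control limits. As a rough illustration of this idea (not the implementation used in the paper), the sketch below applies an EWMA control chart in the spirit of Roberts [26] to a synthetic series of daily residuals; the smoothing factor, the in-control estimation window, and the 3-sigma limits are illustrative assumptions.

```python
# Minimal EWMA control chart sketch over daily prediction residuals (illustrative only).
# The smoothing factor and control limits are assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, 200)     # stand-in for daily-mean DC current residuals
residuals[150:] += 2.5                    # synthetic shift emulating an anomaly

lam = 0.2                                 # EWMA smoothing factor (assumed)
mu, sigma = residuals[:100].mean(), residuals[:100].std()  # in-control estimate

ewma = np.empty_like(residuals)
ewma[0] = mu
for t in range(1, len(residuals)):
    ewma[t] = lam * residuals[t] + (1 - lam) * ewma[t - 1]

# Steady-state control limits for the EWMA statistic
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
alarms = np.where(np.abs(ewma - mu) > limit)[0]
print(f"First alarm at day index: {alarms[0] if alarms.size else 'none'}")
```

In practice, the residuals would come from the plant's trained prediction models, and the limits would be tuned to the desired false-alarm rate of the monitoring system.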
Table 1. Yearly degradations for plant A.

            From PR          From NOCT Power                  From Sampled Values
Inverter    Relative deg.    Absolute deg.   Relative deg.    Absolute deg.   Relative deg.
            [%/Year]         [Wp/Year]       [%/Year]         [W/Year]        [%/Year]
1U01        2.27             490             3.5              390             2.6
1U02        3.91             700             5.1              640             4.4
1U03        2.93             370             2.7              320             2.2
1U04        2.86             300             2.5              560             4.0
1U05        1.83             410             3.0              330             2.3
1U06        2.16             320             2.4              410             2.8
1U07        1.90             180             1.4              350             2.4
1U08        2.17             330             2.5              410             2.9
1U09        2.16             260             2.2              290             2.3
2U01        2.90             200             1.5              310             2.1
2U02        3.26             360             3.0              560             4.0
2U03        1.23             200             1.6              260             1.8
2U04        1.26             430             3.2              230             1.6
2U05        1.84             520             3.8              340             2.3
2U06        1.39             410             3.1              280             1.9
2U07        1.60             500             3.7              280             1.9
2U08        1.18             350             2.7              220             1.5
2U09        1.28             460             3.5              250             1.7
2U10        2.19             640             4.7              390             2.7
Table 2. Yearly degradations for plant C.

            From PR                        From NOCT/STC Power              From Sampled Values
Inverter    Relative deg.   Conditions     Absolute deg.   Relative deg.    Absolute deg.   Relative deg.
            [%/Year]                       [Wp/Year]       [%/Year]         [W/Year]        [%/Year]
244         0.30            NOCT           14              0.19             29              0.29
                            STC            50              0.49
248         0.61            NOCT           40              0.54             61              0.61
                            STC            96              0.97
242         0.64            NOCT           44              0.59             59              0.58
                            STC            85              0.85
757         0.43            NOCT           18              0.25             37              0.36
                            STC            54              0.54
750         0.75            NOCT           61              0.8              65              0.64
                            STC            76              0.75
Table 3. DC current and DC voltage yearly degradation for plant A using the sampled values-based approach.

Inverter    DC current deg. [%/Year]    DC voltage deg. [%/Year]
1U01        1.4                         0.46
1U02        1.5                         1.7
1U03        1.4                         0.22
1U04        1.5                         0.71
1U05        1.4                         0.14
1U06        1.5                         0.34
1U07        1.3                         −0.16
1U08        1.4                         0.56
1U09        1.1                         3.6
2U01        1.4                         −0.07
2U02        1.4                         1.9
2U03        1.2                         −0.05
2U04        0.88                        −0.02
2U05        1.0                         0.11
2U06        0.71                        0.19
2U07        1.1                         −0.31
2U08        0.83                        −0.26
2U09        1.2                         −0.31
2U10        1.2                         0.39
Table 4. Average root mean square error (RMSE) of the evaluation data of all prediction models trained to output DC voltage, using different models and different input data (without or with the added data).

Algorithm      LR              RFR             SVR             DT
Added Data     False   True    False   True    False   True    False   True
Plant A        26.2    25.7    20.3    17.3    20.6    15.5    21.7    15.9
Plant B        9.8     9.4     9.1     7.42    9.2     6.99    9.1     7.2
Plant C        9.9     9.9     9.7     9.5     9.8     9.2     9.8     9.7
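Table 4 compares linear regression (LR), random forest regression (RFR), support vector regression (SVR), and decision tree (DT) regressors for DC voltage prediction. As a rough sketch of how such an RMSE comparison can be set up with scikit-learn [25], the snippet below uses synthetic irradiance and module-temperature features and default hyper-parameters as stand-ins for the plant datasets and the paper's actual configuration.

```python
# Minimal sketch of an RMSE comparison across LR, RFR, SVR and DT regressors
# (illustrative only; data, features and hyper-parameters are assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform([0, 0], [1000, 60], size=(500, 2))                 # [irradiance W/m2, module temp degC]
y = 600 - 0.7 * X[:, 1] + 0.02 * X[:, 0] + rng.normal(0, 5, 500)   # synthetic DC voltage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LinearRegression(),
    "RFR": RandomForestRegressor(random_state=0),
    "SVR": SVR(),
    "DT": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))  # RMSE on the held-out set
    print(f"{name}: RMSE = {rmse:.2f} V")
```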
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
