A Data-Driven Approach to Extend Failure Analysis: A Framework Development and a Case Study on a Hydroelectric Power Plant

: Power plants are required to supply the electric demand efficiently, and appropriate failure analysis is necessary for ensuring their reliability. This paper proposes a framework to extend the failure analysis: the outcomes traditionally obtained through techniques such as the Failure Mode and Effects Analysis (FMEA) are elaborated through data-driven methods. In detail, Association Rule Mining (ARM) is applied in order to define the relationships among failure modes and related characteristics that are likely to occur concurrently. Social Network Analysis (SNA) is then used to represent and analyze these relationships. The main novelty of this work lies in supporting the maintenance management process based not only on the traditional failure analysis but also on a data-driven approach. Moreover, the visual representation of the results provides valuable support in terms of comprehension of the context, enabling the implementation of appropriate actions. The proposed approach is applied to the case study of a hydroelectric power plant, using real-life data.


Introduction
Power plants aim to efficiently supply the electric demand, considering the economic, reliability, and environmental aspects [1][2][3]. The implementation of an accurate maintenance strategy represents a critical issue from many points of view since, for example, power plants are characterized by complex structures [4]. Additionally, an inadequate maintenance strategy may result in energy losses and unpredictable operating conditions [5]. Considering the renewable energy field, hydroelectric sources globally provide the broadest supply [6]; thus, it is fundamental that the maintenance management ensures a smooth operation deployment [7]. This aspect can be critical due to the complex nature of hydroelectric power plants, which requires the analysis of several variables, items, and operating conditions [8] to evaluate how a single failure can trigger a series of cascade effects penalizing the entire production system.

Data-Driven Failure Analysis Approaches
In the existing literature, several contributions involve the implementation of data-driven approaches to failure analysis. Specifically, the main focus regards the joint implementation of data-driven techniques with the Failure Mode and Effects Analysis or the Failure Mode Effects and Criticality Analysis. For example, the FMEA and fuzzy inference can be applied to perform a thorough criticality evaluation, considering safety issues and production performance [23]. Other authors, instead, propose applications for the automation of the failure analysis: in [24], for example, failure mode identification is automated through knowledge-based fault models, so that the experience derived from previous projects can be included in new ones, while in [11] behavior trees are applied for the same objective. Text mining applications, instead, can be found to determine all the potential failure modes related to the components [25]. Bayesian networks are often applied to improve the performance of the failure analysis, as testified by [26], who use them to improve the risk and reliability assessment, and by [12], which integrates the opinion of the experts in the risk assessment process. In [14], the remaining useful life of components is predicted through data mining techniques, and the outcomes of the analysis are used to update the risk of failure. In [15], the remaining useful life is predicted in case of the occurrence of multiple failure modes, comparing a methodology based on logical data analysis and non-parametric cumulative incidence functions with traditional techniques (e.g., neural networks, support vector machines).
Improvements of the traditional failure analysis include prioritizing components based on their risk of failure: multi-criteria decision-making approaches are frequently used to synthesize the experts' judgments and include all the perspectives in the analysis [27,28]. The fuzzy ordered weighted method and DEMATEL are implemented in [29] to calculate each component's risk level and rank them. Risk evaluation and ranking of the risk factors can also be performed by defining the fuzzy digraph and matrix [30] or through the identification of a synthetic failure index that guides the selection of improvement actions to maximize the reliability of the system [31].
Data-driven failure analysis applications are frequently implemented on power plants, as testified by several works on the topic. Early works mainly involve the implementation of hybrid approaches, namely the integration of model-driven and data-driven techniques, for the prognostic health management of power systems [32,33]. Some papers, instead, focus on the fault diagnosis aspects: in [34,35], principal component analysis and independent component analysis are respectively applied to diagnose failures in thermal power plant components, while in [36] a dynamic, uncertain causality graph is optimized through a genetic algorithm to diagnose failures in a nuclear power plant. Artificial neural networks can also be applied to detect failures in thermal plants (e.g., [37,38]) or in wind turbines [39]. Finally, the remaining useful life estimation is widely addressed by combining different algorithms to improve overall performance [40].
Notably, the existing contributions mainly deal with the implementation of techniques and approaches for diagnosing or detecting failures. Two interesting contributions can be compared to the one proposed in this work: specifically, in [16], early fault detection on a photovoltaic power plant is performed by analyzing complex networks of sensors to discover hidden dynamics that are not observable from a single sensor. Additionally, in [17], the implementation of different data-driven techniques is proposed to detect faults and cluster them in order to analyze the interactions among faults. In the approach proposed here, the interactions among failures, items, and failure modes are investigated through the Association Rule Mining and then further explored by analyzing a network. Moreover, the network is built based on the data of the failure analysis and does not reflect the plant's physical structure.

Materials and Methods
In this section, the proposed approach is detailed. Specifically, this work aims to provide a framework for improving the analysis of potential failures in order to extract previously unknown information and adapt the maintenance strategies accordingly. From this perspective, three main steps can be highlighted:

1. Data collection and understanding;
2. Determination of the relevant associations;
3. Social network analysis and insights definition.

Data Collection and Understanding
The procedure relies on the results of the analysis of past or potential failures; thus, the first step is collecting all the possible data on this matter. Failure Modes and Effects Analysis [41] represents a valid starting point for applying the present framework. The system under investigation is broken down to identify its elementary items (sub-systems or parts), which are then analyzed separately. The objective of the decomposition is to anticipate all the potential failure modes and effects. The failure analysis is, in this way, carried out collaboratively, involving interdisciplinary groups (e.g., Operations and Maintenance engineers, managers, technicians, on-field personnel) in the discussion of the main features of the system.
The main advantage of taking the FMEA as a starting point is that several perspectives are questioned, so that a complete understanding of the potential failures and effects is achieved. Additionally, due to the multi-disciplinary team's contributions to the FMEA, it is possible to limit the subjective bias related to each role and avoid the related uncertainty. Finally, in carrying out the FMEA, a dataset containing the system's equipment under investigation, the potential failure modes, and the associated effects is created and can be analyzed through the association rule mining. Additional information can be added, such as the mean time to repair (MTTR) or the failure modes' criticalities. Starting from the FMEA has different advantages: on the one hand, it allows the company to further improve the plant's knowledge. On the other hand, the data-driven analysis is carried out based on the expertise of the multi-functional team that is usually charged with deploying the FMEA, thus benefiting from different and wide-ranging perspectives.

Determination of the Relevant Associations
The second step of the procedure requires defining the relevant associations among the events extracted from the FMEA dataset. Specifically, relevant information may regard the failure modes frequently occurring on different items or the same effects deriving from different failure modes. This exploratory analysis aims to extend the existing knowledge of the analyzed system. The larger the dataset, the more complex the data analysis is: in this sense, data-driven techniques overcome the traditional statistical ones, which are no longer able to provide useful insights alone; moreover, they do not require the formulation of hypotheses. Hence, the selection of the ARM represents a valid alternative [42], since it allows both the simultaneous analysis of a large amount of data and an intuitive interpretation of the results [43] due to the structure of the outcomes. In this sense, it is also easier to involve non-experts in data analytics in understanding and implementing the insights obtained in the data-driven analysis.
The applications of the ARM are widespread and can be found in different fields, such as operations and production; however, the first application regarded the extraction of hidden patterns from large datasets for marketing purposes [44]. In the following, a formal definition of ARs and ARM is provided.
Let K = {k₁, k₂, . . . , kₙ} be a set of data, called items, and T = {t₁, t₂, . . . , tₘ} the set of transactions; each transaction is composed of a set of items, namely an itemset, taken from K. An Association Rule (AR) is an implication I→J such that I and J are itemsets (I, J ⊆ K) having no items in common (I ∩ J = ∅). The itemset I is called the body or left-hand side of the rule, while J is the head or right-hand side. The two principal metrics for evaluating the quality of a rule are support (1) and confidence (2) [26]:

support(I→J) = #{t ∈ T : I ∪ J ⊆ t} / #{T} (1)

confidence(I→J) = #{t ∈ T : I ∪ J ⊆ t} / #{t ∈ T : I ⊆ t} (2)

The function #{·} represents the cardinality of a set. The support can thus be defined as the fraction of transactions in T containing both I and J, i.e., the probability of finding both I and J in the transaction set. The confidence, instead, is the fraction of transactions containing both I and J among those containing I. Hence, it is a measure of the conditional probability of finding J, given that a transaction contains I.
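These two metrics can be computed directly. The following minimal sketch implements the support and confidence definitions above over a toy transaction set; the item labels are invented for illustration and are not taken from the paper's dataset:

```python
# Toy transaction set: each transaction is a set of items
# (hypothetical labels, not the case-study data).
transactions = [
    {"leak", "pump"},
    {"leak", "valve"},
    {"leak", "pump", "valve"},
    {"vibration", "bearing"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(body, head, transactions):
    """Conditional probability of `head` given `body`:
    support(body ∪ head) / support(body)."""
    return support(body | head, transactions) / support(body, transactions)

print(support({"leak", "pump"}, transactions))       # 0.5
print(confidence({"leak"}, {"pump"}, transactions))  # ≈ 0.667
```

Here the rule leak→pump holds in 2 of the 3 transactions containing "leak", hence the 2/3 confidence.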
The ARM can be performed through several algorithms: in this application, the Frequent Pattern-growth (FP-growth) [45] is selected due to its higher efficiency [46]. This algorithm requires scanning the transaction set T = {t₁, t₂, . . . , tₘ} to identify the items appearing more frequently than a user-defined threshold, i.e., the minimum support (min_sup). Those that do not meet the min_sup requirement are excluded from the scan: in this way, itemsets composed of several items are considered only if each of the single items has a support higher than min_sup. Starting from the selected itemsets, the rules meeting a minimum confidence threshold (min_conf) are generated according to the following procedure:

1. Define min_sup: the minimum support threshold required to consider a rule;
2. Define min_conf: the minimum confidence threshold required to consider a rule;
3. Use the FP-growth algorithm [27] to determine the frequent itemsets;
4. Combine pairs of frequent itemsets to create the association rules, deleting the rules having confidence lower than min_conf.
The association rules mined are used as input for the third step of the research approach.
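As a rough illustration of the four steps above, the sketch below enumerates frequent itemsets by brute force (standing in for FP-growth, which avoids this exhaustive enumeration but is longer to implement) and then derives the rules meeting min_conf. The transactions and labels are invented for the example:

```python
from itertools import combinations

# Toy transactions (hypothetical labels, not the case-study data).
transactions = [
    {"leak", "pump"},
    {"leak", "valve"},
    {"leak", "pump", "valve"},
    {"vibration", "bearing"},
]
# Steps 1-2: user-defined thresholds.
min_sup, min_conf = 0.25, 0.6

def support(itemset):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

# Step 3: frequent itemsets (brute-force stand-in for FP-growth).
items = sorted(set().union(*transactions))
frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(frozenset(c)) >= min_sup]

# Step 4: split each frequent itemset into body -> head and keep the
# rules whose confidence meets min_conf.
rules = []
for fs in frequent:
    for r in range(1, len(fs)):
        for body in map(frozenset, combinations(fs, r)):
            head = fs - body
            conf = support(fs) / support(body)
            if conf >= min_conf:
                rules.append((set(body), set(head), round(conf, 3)))

print(len(rules))  # 7
```

On this toy set, the mined rules include leak→pump (confidence 0.667) and pump→leak (confidence 1.0), which then feed the network-building step.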

Social Network Analysis and Insights Definition
The SNA is usually applied to investigate social structures relying on network and graph theory [47]. A network is defined as an ordered pair G = (N, E) of nodes (N) connected by edges (E). The traditional application of SNA is the study of the interactions among a set of actors, where the actors are represented by nodes and their interactions by edges. In the current work, the frequent itemsets identified through the FP-growth algorithm are the social network actors, while the ARs describe their interactions. Indeed, in this framework, the aim is to deploy an SNA to display failure modes, effects, and criticalities frequently occurring concurrently, in order to clarify the interpretation of the association rules extracted. In this way, an overview of the patterns to take into account is provided, and proper insights can be defined based on the nature of the network structure.
For example, if the rule a→b is extracted, a and b will be nodes of the network, and they will be connected. The confidence of the rule a→b is the weight of the edge connecting them. If the rule b→a is defined too, then the connection between the two nodes will be double-arrowed; however, the weight of the edge from b to a (i.e., the confidence of the rule b→a) can be different from the other one. For each node j, the Out-Degree (OD) [48] is determined as the weighted sum of the n edges outgoing from j:

OD_j = Σ_{i=1}^{n} w_{j,i}

where w_{j,i} is the weight (confidence) of the edge from j to its i-th successor. A high OD indicates a strong influence of a node on its successors, highlighting the need to control that node. Based on the OD metric, and having a complete visualization of the interrelations among the items identified during the failure analysis, it is possible to define useful insights to extend the plant's knowledge.
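A minimal computation of the weighted OD from a list of mined rules might look as follows; the edge labels echo the case study but are illustrative, not taken from its actual rule set:

```python
from collections import defaultdict

# Each mined rule body -> head becomes a directed edge weighted by its
# confidence (illustrative labels, not the paper's mined rules).
edges = [
    ("external leak", "oil pump", 0.333),
    ("external leak", "control valves", 0.333),
    ("external leak", "pump drainage system", 0.333),
    ("oil pump", "external leak", 1.0),
]

def out_degree(edges):
    """Weighted out-degree: sum of confidences over outgoing edges."""
    od = defaultdict(float)
    for src, _dst, weight in edges:
        od[src] += weight
    return dict(od)

print(out_degree(edges))  # external leak ≈ 0.999, oil pump = 1.0
```

A node such as "external leak", with three outgoing edges, accumulates a higher OD and is flagged as influential on its successors.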
During the analysis of the SNs, it is noteworthy to consider an additional metric, the Betweenness Centrality (BC) [49]: the shortest weighted paths between all pairs of nodes are determined, and BC_j equals the number of such shortest weighted paths on which node j appears. This metric measures the influence of node j across the network [50], since a node having a high BC value can be considered a bridge among separate portions of the network.
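The bridge role that BC captures can be sketched on a toy graph. For brevity, this version treats the network as unweighted and undirected (whereas the paper uses shortest weighted paths), so it illustrates the idea rather than reproducing the exact metric; node labels are invented:

```python
from itertools import permutations

# Small undirected graph: the "bridge" node B connects two clusters
# (illustrative labels, not the case-study network).
adj = {
    "A1": {"A2", "B"}, "A2": {"A1", "B"},
    "B":  {"A1", "A2", "C1", "C2"},
    "C1": {"B", "C2"}, "C2": {"B", "C1"},
}

def all_shortest_paths(src, dst):
    """Enumerate every shortest path from src to dst (BFS over paths)."""
    paths, frontier = [], [[src]]
    while frontier and not paths:
        nxt = []
        for path in frontier:
            for nb in adj[path[-1]]:
                if nb in path:
                    continue
                if nb == dst:
                    paths.append(path + [nb])
                else:
                    nxt.append(path + [nb])
        frontier = nxt
    return paths

def betweenness(node):
    """Sum, over all ordered node pairs, of the fraction of shortest
    paths passing through `node` as an intermediate vertex."""
    score = 0.0
    for s, t in permutations(adj, 2):
        if node in (s, t):
            continue
        paths = all_shortest_paths(s, t)
        score += sum(node in p[1:-1] for p in paths) / len(paths)
    return score

print(betweenness("B"), betweenness("A1"))  # 8.0 0.0
```

Node B lies on every shortest path between the A-cluster and the C-cluster, so its BC dominates, exactly the "bridge" behavior discussed above.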

Case Study
The proposed approach is applied to a Brazilian hydroelectric power plant (HPP). It is equipped with three Kaplan-type hydro generator units, which operate at 166.25 MW. Kaplan units can work where a small head of water is involved; the turbines are applied in sites with a head range of 2-40 m. Since the angles of their blades can be modified to adapt to the water flow, Kaplan turbines can also work efficiently over a wider range of water heads, allowing for variations in the dam's water level. Three principal systems compose each Kaplan hydro generator unit: the speed governor, the turbine system, and the axis. In all, 152 components have been identified during the FMEA of the HPP and are thus treated in the failure analysis.

Data-Driven Framework Application
The hydroelectric industry requires a high level of availability and reliability. The FMEA is regularly carried out on the system to identify components' criticality and prioritize their maintenance. In this way, the risk involved in the production process is monitored; however, further knowledge of the HPP can be extracted through the implementation of the proposed approach. The FMEA is performed following the US Military Standard's recommendation, adopting a bottom-up approach: the system under investigation is broken down to analyze its elementary components separately. Through this breakdown, the objective is to provide an accurate description of the failure modes, their effects, and their impact on safety, environment, and assets. A collaborative approach is adopted to deploy the FMEA, so that the HPP's main features are discussed by interdisciplinary groups of people involved in the system's operations at different levels (e.g., maintenance engineers, managers, on-field technical personnel).
The dataset structure used as a starting point for the data-driven analysis is reported in Table 1. Specifically, data refer to the FMEA traditionally carried out by the company and regard:

1. System: one of the three main systems composing the HPP;
2. Name: one of the 152 components relevant for the study;
3. PFM: potential failure mode occurring on the component;
4. Main functions: effect of the PFM on the main functionality of the component;
5. FR: the failure rate of the component (actual if the FM has already occurred, theoretical if the FM is potential);
6. MTTR: the mean time to repair, expressed in hours;
7. SAI: the impact of the FM occurrence on the availability of the system;
8. IOP: the impact of the FM occurrence on people;
9. EI: the impact of the FM occurrence on the environment.

Attributes 7-9 are evaluated on a 1:9 scale by the multi-disciplinary team members responsible for performing the FMEA.
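Before the ARM can run, each FMEA row must be encoded as a transaction of attribute=value items. A possible encoding is sketched below; the attribute names follow the paper's Table 1, while the values are invented for illustration:

```python
# Hypothetical FMEA rows shaped like Table 1 (attribute names follow the
# paper; the values are invented for illustration).
fmea_rows = [
    {"System": "turbine", "Name": "oil pump", "PFM": "external leak",
     "Function": "promote the flow of fluid", "SAI": 1, "IOP": 3, "EI": 3},
    {"System": "turbine", "Name": "control valves", "PFM": "external leak",
     "Function": "check the oil flow", "SAI": 1, "IOP": 3, "EI": 1},
]

def row_to_transaction(row):
    """Encode each attribute=value pair as a distinct item, so the ARM
    can associate failure modes, items, functions and impact scores."""
    return {f"{key}={value}" for key, value in row.items()}

transactions = [row_to_transaction(r) for r in fmea_rows]
print(sorted(transactions[0]))
```

With this encoding, a mined rule such as PFM=external leak → Item=oil pump relates values of different columns, which is exactly the kind of association explored in the case study.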
The second step of the approach regards the determination of the relevant associations through the ARM. The dataset, whose structure is presented in Table 1, comprises 432 transactions (rows of the dataset). The components analyzed are 152, while the distinct PFMs are 113: this means that the same failure mode can affect different components. The software selected for this case study is RapidMiner Studio: its main strength is the graphical interface, which does not require any programming language knowledge, making it easier to adopt in an industrial context.
First, in order to identify the association rules worthy of investigation without limiting their extraction, null support and confidence thresholds are set (min_sup = 0; min_conf = 0), and the ARs among all the nine attributes described in Table 1 are mined. Indeed, the min_sup and min_conf thresholds have to be set based on the specific case study, considering the dimensions of the dataset and relying on the decision-maker's expertise, since there is no absolute value suitable for all cases [51].
In all, 4147 associations among 362 itemsets are extracted and are represented using the open-source software Gephi. To limit the study to the relevant associations and to be able to analyze them properly, the following procedure is applied:

1. Create the SN using all the ARs;
2. Determine the most interesting node based on the OD;
3. Filter the ARs and create more specific SNs, limiting the analysis to the nodes considered more relevant;
4. Formalize the information extracted.
The turbine node has the highest OD (4.645) compared to the axis (4.301) and the speed governor (4.419); hence, the ARs referring to this portion of the HPP are extracted, so as to focus the analysis primarily on this branch of the system. This filter leads to the mining of 1248 ARs (127 itemsets). To focus on the most relevant portions of the network, the attributes Item, PFM, and Functions are taken into account, creating an SN composed of 102 nodes and 308 arcs.
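The filtering of step 3 above amounts to keeping the rules that mention the node of interest in either the body or the head. A possible sketch, with toy rules rather than the actual mined set:

```python
# Rules as (body, head, confidence) triples; we keep those that mention
# the turbine system (toy rules, not the actual mined set).
rules = [
    ({"System=turbine"}, {"PFM=external leak"}, 0.4),
    ({"System=axis"}, {"PFM=vibration"}, 0.5),
    ({"PFM=external leak"}, {"System=turbine", "Item=oil pump"}, 0.333),
]

def refers_to(rule, item):
    """True if `item` appears in the rule's body or head."""
    body, head, _conf = rule
    return item in body | head

turbine_rules = [r for r in rules if refers_to(r, "System=turbine")]
print(len(turbine_rules))  # 2
```

The surviving rules then define the nodes and weighted edges of the more specific SN built in step 3.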
Interestingly, as reported in Figure 1, 13 communities of nodes originate from these ARs. This structure indicates that not all the nodes are connected to one another, thus limiting the potential for spreading their occurrence across the network: if two nodes are not connected, there is no relation between the events they represent. This aspect limits the attention that the maintenance managers have to pay to the so-called domino effect. In particular, 8 networks simply represent the connection among the item, the related function, and the failure modes: this information is not new, since it can be derived from the FMEA, with no reason for extending the analysis through the data-driven framework. Indeed, the proposed approach aims to extend the current body of knowledge on the existing plant by extracting previously unknown relationships. On the contrary, there are 3 networks (Figure 1d,e,i) in which relevant and previously unknown relationships are displayed. These relationships involve more than one item and several PFMs, supporting the maintenance managers in identifying potential combined inspections and actions to anticipate potential failures across the plant.
For example, in Figure 2a, which depicts Figure 1i in detail, it can be noticed that the node PFM = External leak acts as a bridge between the two portions of the network: indeed, its BC is the highest in the SN (74.67). In this sense, the occurrence of an external leak may have an impact on the control valves, the oil pump, and the pump drainage system, as evidenced in Table 2. The confidence associated with the three rules (PFM = external leak → Item = pump drainage system; PFM = external leak → Item = oil pump; PFM = external leak → Item = control valves) is 0.333, since it is equiprobable that, when an external leak occurs, the item involved is one of those listed.
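The equiprobable 0.333 confidence can be reproduced on a toy extract in which "external leak" occurs once on each of the three items (hypothetical transactions mirroring the split in Table 2, not the actual dataset):

```python
# Three hypothetical FMEA transactions: "external leak" occurs once on
# each of three different items, as in the equiprobable split of Table 2.
transactions = [
    {"PFM=external leak", "Item=oil pump"},
    {"PFM=external leak", "Item=control valves"},
    {"PFM=external leak", "Item=pump drainage system"},
]

def confidence(body, head):
    """Fraction of body-containing transactions that also contain head."""
    with_body = [t for t in transactions if body <= t]
    return sum(1 for t in with_body if head <= t) / len(with_body)

for item in ("oil pump", "control valves", "pump drainage system"):
    conf = confidence({"PFM=external leak"}, {f"Item={item}"})
    print(item, round(conf, 3))  # each rule: 1/3 ≈ 0.333
```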
These connections highlight the need for establishing a protocol for the inspection of the items when an external leak occurs. Specifically, such a protocol should require verifying the normal functioning of the items, e.g., the flow of the fluid at the desired pressure (Function = promote the flow of fluid at the desired pressure → Item = Oil pump), the drainage of the water (Function = Drain the water that eventually passes through the inner cover seal → Item = Pump drainage system), and the control of the oil flow (Function = Check the oil flow for actuating the gate → Item = Control valves). The confidence is 100% in the three cases, since each function is associated with a single item.
Similarly, Figure 2b reports the community of nodes shown in Figure 1e. The considerations drawn for Figure 2a can be extended to this community too. Indeed, the two items in this network (i.e., gate and adduction grid) share a common potential failure mode (PFM = deterioration of concrete) that acts as a bridge between the two portions of the network. When this failure mode occurs, it is then essential to check whether both items are functioning normally or if an intervention is needed. As noticeable from Table 3, when the potential failure mode "deterioration of concrete" occurs, the confidence of 50% indicates that it regards either the gate or the adduction grid (see the first two rules reported in Table 3). On the contrary, when a malfunctioning of the gate occurs, only in 25% of cases is the failure mode the deterioration of concrete: indeed, other PFMs are related to this item, as reported in Figure 2b. The same consideration holds for the adduction grid, but with a rule confidence of 33.3%. At the same time, when the gate experiences a malfunctioning, the compromised function is indeed "Allow the intake of water", as testified by a confidence value of 1 for the rule Item = Gate → Function = Allow the intake of water. Accordingly, when the maintenance department members notice a lack in this function, they should immediately check the gate, since it is surely damaged.

It is noteworthy to evaluate the impact of a failure on the related items, taking Figure 2a as a reference: the ARs involving the item and the measures of the impacts on people, system availability, and environment are taken into consideration to create the SN reported in Figure 3. According to the experts' opinion, failures on the three items cause a low impact on system availability (Item = Control Valves → System_Availability_Impact = 1; Item = Oil pump → System_Availability_Impact = 1; Item = Pump Drainage System → System_Availability_Impact = 1) in all cases, since the confidence associated with these rules is 100%. At the environmental level, instead, the pump drainage system and the oil pump are associated with a value of 3 on the 1:9 scale, while the control valves are less critical (1 out of 9). At the people level, a score of 3 is assigned to the pump drainage system and the control valves, while the oil pump is less critical.

These evaluations support the decision-makers in defining which areas should be monitored first after the occurrence of a malfunctioning, prioritizing the interventions in the area where the impact is higher: referring to Figure 3, for example, people's safety is the primary concern (hence the first aspect to be investigated) in case of a failure of the control valves, while both people and environment have priority over the impact on system availability in case of a failure of the pump drainage system. In this way, the areas characterized by a higher risk are controlled and repaired first.

Discussion
The approach proposed in this work aims to extend the failure analysis usually carried out through the FMEA by introducing data-driven techniques. Some theoretical and practical contributions can be extracted from the implementation of the proposed data-driven framework.

Theoretical Implications
From a theoretical point of view, a comprehensive analysis of large systems' failures can be critical, since traditional statistical techniques are not suitable to deal with a large amount of data effectively. Indeed, the ARM implementation allows the definition of the relationships among data, highlighting both the patterns that were already known and the unknown ones, i.e., those that are the object of this study. An important feature characterizing the ARM is that there is no need for hypothesis formulation, since the whole dataset is explored and the possible connections among items are made [44]. The definition of all the possible itemsets requires the combination of 2^k − 1 itemsets (k being the total number of items), making the dimension of the dataset a possible critical issue. However, selecting the FP-growth algorithm supports the approach in this sense, since the scanning of the dataset is only necessary twice during the whole procedure [45].
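The combinatorial growth just mentioned can be checked directly: for k distinct items there are 2^k − 1 non-empty candidate itemsets, which is why exhaustive enumeration quickly becomes intractable and a two-scan algorithm like FP-growth matters. A small verification, with placeholder item names:

```python
from itertools import combinations

# For k items there are 2**k - 1 non-empty candidate itemsets
# (placeholder item names for the count check).
k_items = ["a", "b", "c", "d"]
candidates = [c for r in range(1, len(k_items) + 1)
              for c in combinations(k_items, r)]
print(len(candidates), 2 ** len(k_items) - 1)  # 15 15
```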
The creation of the networks through the SNA, instead, helps in the visualization of all the connections extracted through the ARM and allows the identification of the communities of nodes, facilitating the understanding of the interactions by making them more intuitive. Thanks to this characteristic, it is also easier for the analysts to detect whether there are missing connections traceable to the first phases of the failure analysis (e.g., during the deployment of the FMEA). This step is also strategic from a managerial point of view, as it allows determining whether the analysis of possible failures is performed accurately or whether amendments to the process are necessary.

Practical Implications
Practical implications, instead, regard the possibility of a closer control of the maintenance management. First of all, this approach extends the system's knowledge by using as a starting point the failure analysis usually carried out by the company. In this way, the resources employed are further capitalized without requiring additional investments (i.e., open-source tools are widely available for developing the proposed approach), with a positive impact from an economic point of view. Additionally, the plant on which the analysis is carried out benefits from a more complete knowledge of the potential failure modes and a better response to failure occurrence, as it is possible to prioritize the interventions based on the impact of the failure itself. Moreover, making the maintenance processes more controllable and predictable brings benefits from a resource allocation perspective.
From an engineering point of view, the visualization obtained with the SNA is useful to identify the critical chain possibly triggered by the occurrence of a failure mode. In this way, it is easier to understand which items have to be monitored and which areas are most affected by the failure. It is also easier to define the items that should be inspected when a failure mode occurs or a malfunctioning is noticed, since the network structure presented by the SNA is clear and covers the whole plant. Specific resources can be allocated to maintaining and controlling the critical areas, or interventions can be defined for their structural change. In addition, this framework supports the definition of item criticality from a classical risk assessment perspective, while also considering the unexpected relationships extracted by the ARM and visualized through the SNA.
A further aspect that should be pointed out regards the acceptance of the proposed approach by the plant personnel: indeed, when introducing new methods, there is a risk of resistance to change that can compromise their adoption. Starting from the FMEA, as presented in the case study, introduces only a partial change in the maintenance management procedures, facilitating the acceptance of the new approach.

Conclusions
In this work, a framework for extending the failure analysis through data-driven techniques is proposed. Specifically, the approach proposes to consider the failure analysis carried out by companies (e.g., the Failure Modes and Effects Analysis) as a starting point for the application of Association Rule Mining and Social Network Analysis. The former aims to identify the co-occurrence of events, such as potential failure modes on specific items, compromised functions, or impacts on the process. The latter, instead, is used to represent these co-occurrences in a network structure, making them more understandable and intuitive. The failure modes, items, and all the attributes analyzed through the Association Rule Mining are the nodes of the network, while the Association Rules are the edges. Using these techniques jointly, it is easier to extend the plant's knowledge and capitalize more extensively on the information produced by the failure analysis. Moreover, the two data-driven techniques enable the exploration of a large amount of data without the need for formulating a research hypothesis.
The proposed approach is applied to the case study of a hydroelectric power plant, using the real failure modes and effects analysis as a starting point. The implementation of the framework helps understand the process, identifying critical nodes of the network that are worth monitoring in case of failure occurrence, as well as communities of interacting nodes. Indeed, starting from the communities of nodes allows the maintenance managers to define which nodes interact with one another and have to be monitored jointly. Thus, support to the maintenance management process is provided by implementing the data-driven failure analysis, giving the maintenance managers both the chance of enhancing their knowledge of the system and of capitalizing on the traditional analysis usually carried out.
Future research directions may involve developing further case studies so that the results of different applications in the same research area can be compared and useful insights can be extended to the hydroelectric power plant industry.