Article

A Novel Approach to Defining Maintenance Significant Items: A Hydro Generator Case Study

by Adherbal Caminada Netto 1, Arthur Henrique de Andrade Melani 1, Carlos Alberto Murad 1, Miguel Angelo de Carvalho Michalski 1, Gilberto Francisco Martha de Souza 1,* and Silvio Ikuyo Nabeta 2
1 Department of Mechatronics and Mechanical Systems, University of São Paulo, Avenida Professor Mello de Moraes 2231, São Paulo 05508-030, SP, Brazil
2 Department of Energy and Electrical Automation, University of São Paulo, Avenida Professor Luciano Gualberto, Travessa 3, 158, São Paulo 05508-010, SP, Brazil
* Author to whom correspondence should be addressed.
Energies 2020, 13(23), 6273; https://doi.org/10.3390/en13236273
Submission received: 29 October 2020 / Revised: 18 November 2020 / Accepted: 23 November 2020 / Published: 28 November 2020
(This article belongs to the Section F: Electrical Engineering)

Abstract

The operation and maintenance of industrial plants constitute a major concern among managers, scientists, and engineers. To allow the prioritization of maintenance actions for improved continuous equipment operation, this paper presents a novel method focused on identifying the most critical components of a plant’s systems. In a complex engineering approach, this method uses tools and techniques borrowed from several fields of knowledge, including an innovative technique devised by the authors, as well as a multi-criteria decision method in the criticality determination process. To illustrate the method, a case study is presented considering a 150 MW Kaplan hydro generator from a hydroelectric power plant located in the north of Brazil. An uncertainty analysis is presented, considering variations in the scores of each criterion, to verify the robustness of the proposed method. The results show that the method responds in a solid and structured way, allowing decision making from qualitative and quantitative data and eliciting the opinion of experts on the factors that can affect the degradation of the performance of a system. In addition, the method’s robustness is confirmed since the uncertainty propagation related to the criteria scores is small for the items classified as most significant from its application.

1. Introduction

Maintenance is undoubtedly an important sector for the achievement of companies’ organizational objectives. Evidence shows that poor maintenance can result in process failures, putting people, the environment, and assets at risk, generating revenue losses, and making operations unfeasible [1]. In turn, a maintenance strategy is a systematic approach to keeping the facilities and equipment of a given industrial plant operating. It involves many activities and is concerned with formulating the best plan for each piece of equipment in the plant. It describes what types of maintenance actions are suitable for the critical pieces of equipment whose failure may jeopardize plant production and safety. Thus, selecting the most appropriate maintenance strategy depends on several factors, such as the goals of maintenance, the nature of the facility or equipment to be maintained, and the work environment. In addition, the selection of critical equipment is very important to successfully implement a maintenance strategy [2].
The twentieth century brought about, as one of the consequences of an unprecedented evolution of science and technology, a considerable change in both maintenance thought and practice. Two industries, electronics and airlines, were particularly responsible for that change by pointing out the decisive role of reliability engineering. Within such industries, it soon became clear that it would be possible to use more accurate methods in maintenance instead of relying on experience and judgment alone.
An important initiative was the creation of the airline industry task force in 1960 to investigate the capabilities of preventive maintenance. Later, in 1974, the US Department of Defense commissioned United Airlines to write a report on the processes used so far in the civil aviation industry. The objective of such a study was the development of maintenance programs for aircraft [3]. This report, entitled Reliability-Centered Maintenance (RCM), has become a basic reference for all subsequent RCM approaches to this day. From these niche beginnings, RCM and its variations gained widespread acceptance among maintenance practitioners as well as researchers for their ability to bring savings, efficiency, reliability, and focus on what really matters when applied properly in an organization [4,5,6,7,8,9,10].
RCM is a qualitative approach used to determine the maintenance requirements of any physical asset in its operational context, associating criticality with the effect of equipment failure on plant performance [11]. The main goal is to keep plant performance during mission time. RCM focuses on predictive maintenance and Condition Based Maintenance (CBM) as the most suitable maintenance practices for critical equipment, given that their degradation failure modes can be monitored during mission time [12]. In addition, CBM is considered the most cost-effective maintenance strategy [13]. The implementation of maintenance routines, such as inspections, lubrication, and small adjustments, for fault diagnosis in conjunction with predictive maintenance techniques can lead to a significant reduction in repair costs and production losses [14]. Figure 1 presents a conceptual graph correlating maintenance costs and maintenance commitment.
Considered by many authors [4,15,16,17,18,19] as one of the most important phases in the implementation of RCM, Maintenance Significant Items (MSI) selection is essentially a screening phase in which the number of items for analysis can be reduced [15,16,17,18,19]. However, even today, there are few systematic and convenient methods for such identification [4,15]. There is no consensus among authors on how such a selection should be made [20]. Furthermore, although Nowlan and Heap [11] define a “significant item” as “the item whose failure could affect the safety or have major economic consequences”, indicating a simple and conservative approach to MSI identification, some studies follow more structured approaches based on ranking methodologies and even standards to define MSI [18,21].
From this perspective, a failure’s direct and indirect impacts, such as loss of availability; loss of revenue; contractual fines; maintenance costs; difficulty of repair; availability of spare parts, proper equipment, tools, and trained teams for maintenance execution; environmental issues; personnel safety issues; criticality of the function lost upon failure; failure rate; and failure detection likelihood, among others, can be considered as ranking criteria [20]. However, it is important to bear in mind that the choice of such criteria will have a direct impact on the methodology results and the MSI selection.
It is observed that, in most of the works found in the literature on this subject, the criteria used for MSI identification are generally based on concepts that consider the occurrence of the failure, and not the possibility of preventing or anticipating it by using monitoring data according to Industry 4.0 concepts [22], in the face of a higher demand for productivity and maintenance efficiency. Thus, to fill this gap, the objective of this work is to present a novel approach to defining MSI. It is based on techniques and tools that allow the criticality of components to be analyzed with respect to the development of failure modes, allowing a more focused look at predictive maintenance and the prioritization of maintenance actions for improved continuous equipment operation, avoiding unnecessary machine stops and breakdowns. The proposed method aims at eliciting expert opinion regarding factors that may affect the degradation of the performance of a system and, to control those factors, proposes the selection of maintenance practices. The method is a collection of techniques available in the literature, arranged in an innovative way to increase the possibility of reducing the uncertainty associated with maintenance planning decision-making.
These tools and techniques are borrowed from several fields of knowledge, such as Functional Tree (FT), SysML Internal Block Diagrams (IBD), Hazard and Operability Study (HAZOP), and Failure Mode Symptoms Analysis (FMSA). In addition, an innovative Operation & Maintenance Causal Tree Analysis (OMCTA) devised by the authors [23] is introduced in the criticality determination process. Finally, the method also uses a multi-criteria decision method, the Analytic Hierarchy Process (AHP), for ranking the most critical components [24]. To illustrate the proposed method, a case study considering a 150 MW Kaplan hydro generator from a hydroelectric power plant located in Brazil is presented.

2. Literature Review

Despite being considered by many authors to be one of the key steps that precede the implementation of RCM, defining a control volume for the analysis and reducing the number of items to be analyzed, few studies address the selection of a system’s MSI in a structured, systematic, and convenient operational way. Tang et al. [9], for example, define MSI through three stages of screening. In the first two qualitative stages, the non-MSI are defined by applying 2nd and 5th order risk matrices and Failure Mode and Effects Analysis (FMEA). Then, a quantitative analysis is carried out to rank the MSI based on assessment indexes (influence on personnel safety, on environmental safety, and on system functions, average failure rate, maintenance costs, downtime loss, failure monitoring availability, downtime, and complexity of maintenance) and the application of the AHP and Fuzzy Borda Count (FBC) mathematical methods.
Gupta and Mishra [6] also consider identifying and prioritizing critical components for implementing maintenance an important task in industry. In their work, the authors sought to identify the key factors associated with component criticality, suggesting five major criteria (cost, functional dependency, complexity, maintainability, and safety impact) for the criticality analysis. In addition, to identify the critical components, a hierarchical network was proposed based on the Analytic Network Process (ANP).
Melani et al. [7] present a thorough and sound method that identifies the most critical components of a system, contributing to the prioritization of maintenance actions. The method uses reliability and Risk Analysis (RA) techniques, such as HAZOP, Fault Tree Analysis (FTA), and Failure Modes and Effects Criticality Analysis (FMECA). The criteria considered to define MSI were severity, occurrence, and detection (from FMECA), MTTR, FTA cut set order, repair cost (related to maintenance), and environmental, generation, and patrimonial impacts. The work also uses the ANP, a multi-criteria decision method, for ranking the most critical components.
Silva et al. [21] propose a novel framework for the determination of the MSI based on ISO 55000 standard aspects and using AHP. After studying the system and building its FT and FMEA, nine criteria (safety impact, history of security events, environmental impact, history of environmental events, reliability, maintainability, health assessment capacity, maintenance compliance, and impact on availability) and their respective scales were defined according to the asset criticality in the ISO 55000 standard. The authors expected that the work results could contribute to maintenance and asset management research and the diffusion of multicriteria prioritization methods, aiming to increase reliability and optimize maintenance planning.
Yuan et al. [4] employ a two-level analysis method for MSI identification. First, a discernibility matrix is constructed for pre-screening. At the second level, FMEA and RA are employed for MSI selection. The prioritization of the MSI is then determined by a quantitative analysis based on AHP and Monte Carlo Simulation (MCS). The method is applied to a train control system, aiming to improve maintenance management and reduce its cost.
Santos et al. [5] developed and applied a practical and structured method of equipment criticality classification based on the equipment’s importance to the production process. The method culminated in the classification of equipment into three levels considering several criteria (effect of the asset’s failure on the quality of the products developed, effect of equipment failures on the production process, analysis of potential risks to people and the environment, and cost and technological complexity of repairing equipment failures).
Hartini, Dibyo, and Pujiarta [25] presented a proposal to prioritize the reliability of systems and equipment for safety systems using System & Equipment Reliability Prioritization (SERP). During the process, the identification of MSI is performed through a combination of quantitative and qualitative analysis. Non-MSI items are filtered out in a first screening that defines the system boundary, the system hierarchy, and the RA of the risk matrix. The determination of the component criticality level for reliability management was based on category rankings of frequency data and duration of interference with certain criteria, as well as system, safety, quality, and cost aspects.
Summing up, the following aspects can be inferred from the reviewed works:
  • There are two fundamental phases to identify the MSI of a system in a systematic and structured way: the first phase is based on the system study and analysis, where the criteria and scales are established for further evaluation; and the second, in which the criticality evaluation is performed, defining the ranking of the system’s MSI;
  • Several methods, with different approaches, can be used to establish the criteria that are used in the evaluation phase, with FMEA and risk analysis matrices being the most usual;
  • Despite this diversity of methods, the criteria end up converging to the same fundamental criteria: safety impact, system availability impact, reliability, costs, failure detection likelihood, and maintainability;
  • Generally, a multicriteria decision method is used to rank the most critical items, with AHP being the most used among the reviewed articles.
Table 1 presents a summary of this bibliographic review, highlighting the methods and criteria considered in each analyzed article, as well as those considered in the approach proposed in this work. The table is divided into three parts: the first relating to the methods used to define the criteria, the second to the fundamental criteria, and the third to the decision methods used for the ranking of MSI.
The present method not only incorporates a greater number of methods but also diversifies the foundations of the analysis by including criteria that have not been explored in other works. Concerning the multicriteria decision method used to rank the MSI, the authors sought a more traditional approach with AHP.

3. The Proposed Method

The objective of the proposed method is to identify the MSI of a system to prioritize maintenance and improvement actions. The method is divided into two phases, the system study and the MSI definition, and represents a complex engineering approach, drawing methods and techniques from a broad spectrum of scientific fields.
Figure 2 shows the method’s flowchart, highlighting the information flow between the applied methods and techniques. The novelty of the method lies in the integration of the techniques and not in the techniques themselves. Besides, the developed approach is focused on predictive maintenance aspects, since techniques that consider aspects related to the origin and progression of the analyzed system’s failure modes are incorporated into the method. The next subsections describe each step of the method, highlighting its purpose.

3.1. System Study

The main objective of the system study is to identify all the information necessary to define significant maintenance items. Such a phase consists of five steps: FT, IBD, HAZOP, FMSA, and OMCTA (presented in Figure 2).

3.1.1. Functional Tree (FT)

The first step of the system study consists of describing the plant’s system and sub-systems using FT, a well-known and established technique designed to show both the interrelation of a system’s components and the individual description of each component’s functions [18].

3.1.2. SysML Internal Block Diagrams (IBD)

Although very useful for the system’s description, FT diagrams do not show the interactions and interfaces between system components. So, in the second step of the system study, the SysML IBD [26,27], a technique borrowed from Systems Engineering, is used to expand knowledge of how the system works and enable the use of HAZOP in step three since the IBD highlights the process parameters and their relation with the operating condition of the system under analysis.

3.1.3. Hazard and Operability Analysis (HAZOP)

HAZOP [28,29] is a detailed hazard analysis of a system to determine what would happen if any component were to operate outside its normal intended mode. In the present work, HAZOP is the third step of the system study. The nodes presented in the HAZOP table are those identified in the SysML IBDs.

3.1.4. Failure Mode Symptoms Analysis (FMSA)

FMSA [30,31,32] is a relatively new and still not widespread variation of FMECA, a popular technique among reliability, safety, and quality engineers. FMECA is generally used when criticality analysis is required, being an extension of the well-known FMEA, whereas FMSA focuses on the symptoms produced by each considered failure mode and on the selection of the most appropriate techniques and strategies for detection and monitoring, seeking to maximize the level of confidence in diagnosis and prognosis for maintenance planning.
In the FMSA worksheet, each column represents different information in the evaluation process. The considered items must have at least one specified failure mode associated with a single effect and one or more possible root causes, which in turn can have several symptoms, which can be detected by more than one technique associated with a monitoring location and frequency. Then, grades are assigned to the likelihood of detection (DET), severity (SEV), and confidence levels of diagnoses (DGN) and prognoses (PGN) for each row resulting from the combination of failure modes, root causes, symptoms, and detection methods. The outcome is the Monitoring Priority Number (MPN), obtained by the multiplication of these four factors. The lower the MPN, the more critical the analyzed case.
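As a minimal illustration of this computation, the following Python sketch derives the MPN of each FMSA row as the product of the four grades and flags the lowest, i.e., most critical, result. The rows and grades are illustrative assumptions, not the case-study values.

```python
from dataclasses import dataclass

@dataclass
class FMSARow:
    """One FMSA worksheet row: a failure mode / root cause / symptom / detection combination."""
    description: str
    det: int  # likelihood of detection (1 = worst, 5 = best)
    sev: int  # severity (1 = worst, 4 = best)
    dgn: int  # diagnosis confidence level (1 = worst, 5 = best)
    pgn: int  # prognosis confidence level (1 = worst, 5 = best)

    @property
    def mpn(self) -> int:
        """Monitoring Priority Number: product of the four grades (1 to 500)."""
        return self.det * self.sev * self.dgn * self.pgn

# Hypothetical rows for one component (illustrative grades only).
rows = [
    FMSARow("fracture / crack propagation / active power at phase bus", det=1, sev=1, dgn=4, pgn=5),
    FMSARow("fracture / crack propagation / noise on inspection round", det=3, sev=1, dgn=4, pgn=5),
    FMSARow("excessive vibration / lubrication failure / noise on inspection round", det=2, sev=2, dgn=3, pgn=2),
]

for row in rows:
    print(f"MPN = {row.mpn:3d}  ({row.description})")

most_critical = min(rows, key=lambda r: r.mpn)  # the lower the MPN, the more critical the case
print("Most critical row:", most_critical.description)
```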

3.1.5. Operation & Maintenance Causal Tree Analysis (OMCTA)

OMCTA [23] results from the introduction of “OR gates” while retaining the basic characteristics of a Causal Tree Analysis (CTA), which is the breaking down of activities, as well as some aspects of an FTA, in that the failure of a system is decomposed into its constituents’ failures. However, the operation and the maintenance of individual constituents are considered as distinct activities, each subdivided into four components: individual (I), task (T), material (M), and work medium (WM).
The ranking figures for each constituent’s components in the OMCTA diagram may then be multiplied to supply Severity Figures relating separately to the operation and maintenance aspects: the Operation Derived Figure (ODF) and the Maintenance Derived Figure (MDF). This discrimination capability is a very important and useful feature of this new technique. The tree branches created with the OMCTA analysis represent the failure modes that cause the deviations for each node analyzed in HAZOP. The association of the ODF and MDF scores present in these branches with each component is made considering the worst-case scenario found in all analyses, i.e., the failure mode that results in the highest ODF and MDF values for that component. This approach penalizes the component, as only the worst condition is considered. However, this conservative posture was adopted considering the importance of choosing MSI for maintenance planning.

3.2. Defining Maintenance Significant Items through AHP

AHP, developed by Saaty [33], is a multicriteria decision method that relies on experts’ judgment to obtain priority scales through pairwise comparisons of both criteria and alternatives. In this paper, AHP is used to rank the MSI of the analyzed system. To implement AHP, it is first necessary to define the components to be ranked (i.e., which system items will be considered most critical to maintenance), as well as the criteria used for ranking. The components, therefore, are defined through the FT and the criteria (and the justification for each choice) are presented in the following subsections. Once the components and criteria are defined, AHP can be implemented through five steps, as described by Silva et al. [21].
  • Step 1: Build a judgment matrix A through pairwise comparisons between the criteria,
A = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{bmatrix},
where k is the total number of criteria considered and a_{ij} represents the relative importance of criterion i over criterion j, according to the relative importance level scale in Table 2. Thus, a_{ji} is the reciprocal of a_{ij}.
  • Step 2: Apply a prioritization method to derive a priority vector w from the judgment matrix A,
w = \begin{bmatrix} w_1 \\ \vdots \\ w_k \end{bmatrix},
where w_i \geq 0 and \sum_{i=1}^{k} w_i = 1. The priority vector provides a weight w_i,
w_i = \frac{\left( \prod_{j=1}^{k} a_{ij} \right)^{1/k}}{\sum_{i=1}^{k} \left( \prod_{j=1}^{k} a_{ij} \right)^{1/k}},
for each considered criterion. In this work, the considered prioritization method is the Row Geometric Mean Method (RGMM), as presented by Dong et al. [35].
  • Step 3: Check the consistency of the judgment matrix by calculating the Geometric Individual Consistency Level (GICL),
\mathrm{GICL} = \frac{2}{(k-1)(k-2)} \sum_{i<j} \left( \log a_{ij} - \log w_i + \log w_j \right)^2,
developed by Crawford and Williams [36] for the RGMM method.
Aguarón and Moreno-Jiménez [37] provided the thresholds for the GICL: when \mathrm{GICL} < 0.37 for k > 4, that is, more than 4 criteria, the judgment matrix is considered to have acceptable consistency.
  • Step 4: If the obtained GICL is greater than the thresholds, the judgment matrix should be adjusted by repeating steps 1 through 3 until the required consistency is satisfied.
  • Step 5: Calculate the final score of every system’s component from the expression [4],
\text{Component Final Score} = \sum_{i=1}^{k} m_i w_i,
where m_i is the normalized score given to the component for each criterion, based on the previously presented scales.
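As a minimal sketch of steps 2, 3, and 5, the following Python/NumPy functions compute the RGMM priority vector, the GICL, and the component final scores, assuming the pairwise judgments and the components’ normalized criterion scores are supplied as arrays; the function names are illustrative only.

```python
import numpy as np

def rgmm_priority_vector(A: np.ndarray) -> np.ndarray:
    """Step 2 (RGMM): normalized geometric means of the judgment-matrix rows."""
    geometric_means = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return geometric_means / geometric_means.sum()

def gicl(A: np.ndarray, w: np.ndarray) -> float:
    """Step 3: Geometric Individual Consistency Level for the RGMM (Crawford and Williams)."""
    k = A.shape[0]
    e = np.log(A) - np.log(w)[:, None] + np.log(w)[None, :]
    upper = np.triu_indices(k, 1)  # index pairs with i < j
    return 2.0 / ((k - 1) * (k - 2)) * float(np.sum(e[upper] ** 2))

def component_final_scores(M: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Step 5: weighted sum of normalized scores; rows of M are components, columns are criteria."""
    return M @ w
```

Step 4 then amounts to recomputing the GICL after each adjustment of the judgment matrix until it falls below the applicable threshold.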

3.2.1. Monitoring Priority Number (MPN)

Murad et al. [32] present the four-criteria evaluation of the MPN in detail, with DET, DGN, and PGN ranging from 1 to 5 and SEV from 1 to 4. In all criteria, the lowest number represents the worst-case scenario and the highest number the best-case scenario. Therefore, the MPN can vary between 1 and 500, with 1 being the worst case and 500 the best-case scenario. It should be noted that the MPN scale is thus reversed vis-à-vis the generally found pattern, in which the lowest criterion value is associated with the least critical scenario and the highest value with the worst-case scenario. Thus, to match the MPN scale with the scales of the other criteria, the following transformation must be applied to the results obtained from the FMSA, resulting in the new MPN*:
\mathrm{MPN}^{*} = \max(\mathrm{MPN}) + 1 - \mathrm{MPN}.
Since the maximum value of the MPN scale is 500, in practice, to find the new MPN* value, one just subtracts the measured MPN value from 501. Thus, the MPN* scale continues to range from 1 to 500, now with 1 for the least critical case and 500 for the most critical case. As mentioned earlier, more than one MPN can result for the same component. Thus, in this application, the worst case, i.e., the highest MPN* value, will be considered as input to the AHP.
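A short sketch of this reversal and of the per-component aggregation, assuming a maximum attainable MPN of 500 and using the MPN values of an illustrative component:

```python
MAX_MPN = 500  # 5 (DET) x 4 (SEV) x 5 (DGN) x 5 (PGN)

def mpn_star(mpn: int) -> int:
    """Reverse the MPN scale so that higher values indicate more critical cases."""
    return MAX_MPN + 1 - mpn  # in practice, 501 - MPN

# Illustrative MPN values obtained from several FMSA rows of the same component.
component_mpns = [20, 60, 24, 36, 48]

# The worst case (highest MPN*, i.e., lowest MPN) is the one fed into the AHP.
component_mpn_star = max(mpn_star(m) for m in component_mpns)
print(component_mpn_star)  # 481, coming from the row with MPN = 20
```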

3.2.2. Operation and Maintenance Derived Figures

Both ODF and MDF are figures derived through OMCTA and both, as mentioned before, result from the product of four criteria: I, T, M, and WM. The choice of using OMCTA in this work also stems from a more predictive design, since in this case items are ranked considering the likelihood of system failures originated by either operational or maintenance actions [23].
Both the likelihood of predicting and of preventing failure are thus considered by including the FMSA and OMCTA results as AHP criteria. The grades of each activity component (I, T, M, and WM) are given according to the ranking criteria presented in Murad et al. [23]. Considering the range of values for each of the four criteria that constitute both ODF and MDF, these figures can vary between 1 and 625, with 1 being the best case and 625 the worst-case scenario.
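As an illustrative sketch of how these figures can be composed, the code below assumes hypothetical OMCTA branch grades, each ranging from 1 to 5, and applies the worst-case association described in Section 3.1.5, i.e., the branch with the highest derived figure defines the component’s ODF or MDF.

```python
from math import prod

def derived_figure(individual: int, task: int, material: int, work_medium: int) -> int:
    """Product of the four activity-component grades (each assumed to range from 1 to 5)."""
    return prod((individual, task, material, work_medium))

# Hypothetical OMCTA branch grades (I, T, M, WM) for one component: one tuple per
# operation-related branch and one per maintenance-related branch.
operation_branches = [(2, 3, 1, 2), (4, 2, 2, 3)]
maintenance_branches = [(3, 3, 2, 2), (2, 4, 3, 1)]

# Worst-case association: the branch with the highest figure defines the component's ODF/MDF.
odf = max(derived_figure(*branch) for branch in operation_branches)
mdf = max(derived_figure(*branch) for branch in maintenance_branches)
print(odf, mdf)  # 48 and 36 for these hypothetical grades
```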

3.2.3. System Availability Impact (SAI)

The system availability impact represents the reduction or interruption in the generation of electric energy caused by a certain failure. The unavailability of a hydro generator in the Brazilian electricity system generates, besides possible direct revenue losses, severe contractual fines. Thus, the costs of an unscheduled shutdown become much higher than that of simply repairing the hydro generator, requiring better planning of maintenance actions and greater anticipation of any event that could cause equipment failure. The values assigned to this criterion are set on a scale of 1 to 10, as shown in Table 3.

3.2.4. Mean Time to Repair

The Mean Time to Repair (MTTR) represents the average time consumed for the repair of a certain piece of equipment, having a direct impact on equipment availability [18], as shown in Table 4.

3.2.5. Mean Time Between Failures

The Mean Time Between Failures (MTBF) represents the average time elapsed between consecutive failures of a certain piece of equipment, and also has a direct impact on equipment availability, as shown in Table 5.

3.2.6. Impact on People

The Impact on People (IoP) represents the detrimental effects of failures on human well-being, health, and life. Scores of this criterion are assigned according to Table 6.

3.2.7. Environmental Impact

Environmental Impact (EI) represents the detrimental effects of failures on the environment. Its scores are assigned on a scale according to Table 7.

4. Case Study: A Hydroelectric Power Plant

According to the British Petroleum Statistical Review of World Energy [38], hydroelectricity ranks third in the share of global electricity generation by fuel. In Brazil, the share of hydroelectricity in the national energy matrix is even greater than the world average: almost 70% of the electricity consumed in Brazil comes from hydropower plants [39]. Accordingly, the proposed method was developed for application in a study carried out for a run-of-river baseload hydroelectric power plant located in northern Brazil. The plant has three identical Kaplan turbine generating units with an installed capacity of approximately 450 MW. Following the model proposed in this work, the present example is subdivided into two phases: the study of the system and the definition of the MSI.

4.1. Studying the System

The first step consisted of developing a detailed functional description of one of the hydro generators of the aforementioned power plant. FTs were used to identify the several components and then specify their main functions, as in the example of Figure 3, which presents a breakdown of the shaft system and several other subsystems of the hydro generator considered in this work, with some components numbered sequentially and logically.
Once all the functional trees had been carefully established, it was possible to proceed to the second step. IBDs could be developed to provide a representation of the interfaces between the previously defined components and display the kind of information they exchange with each other, as presented in Figure 4 for the hydro generator shaft, previously presented in the second level of the FT. The nodes presented in several parameter flows in the IBD, represented sequentially by numbers and letters, are the basis of the HAZOP analysis developed next.
HAZOP identifies monitoring points in the system, in addition to the parameters to be monitored, for the detection of component failures, allowing for predictive maintenance actions to be taken. It also highlights the consequences of these failures. Beyond the usual listing of causes of an observed deviation in the HAZOP diagram, as presented in Table 8 (considering the shaft IBD diagram, previously presented in Figure 4), these causes are considered in the proposed method using OMCTA diagrams, as shown in Figure 5 (considering node 3.I, presented in Table 8).
Parallel to the combined use of HAZOP and OMCTA, the FMSA technique was applied to determine the MPN for each failure mode of components. Table 9 shows the FMSA worksheet with the information needed to conduct the analysis process, considering in this example the turbine and generator shafts from the case study FT.
Once the results of OMCTA and FMSA have been obtained (ODF, MDF, and MPN*) and the values of their respective resulting criteria are known, it is necessary to define the values of the other criteria considered (SAI, MTTR, MTBF, IoP, and EI) for all analyzed components. A total of 206 subsystems and components were initially considered in the system study phase, resulting in 161 evaluated items in the second phase, since in this phase only FT leaves (components that are not subdivided) are considered in the analysis.

4.2. Defining Maintenance Significant Items

Once all components have been defined and scores for the eight criteria considered have been established, AHP can be implemented. The scores for all considered components in this work can be found in Table S1 of the supplementary material. The scores chosen for each criterion were acquired via brainstorming with system experts. The judgment matrix, built from pairwise comparisons between the criteria (step 1), is presented in Table 10, which also shows the computed priority vector. To check the consistency of the judgment matrix, a Geometric Individual Consistency Level (GICL) of 0.0745 was computed. Since its value is lower than 0.37, it can be concluded that the judgment matrix has acceptable consistency.
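For reference, feeding the judgment matrix of Table 10 to the RGMM and GICL expressions of Section 3.2 should closely reproduce these figures. The self-contained sketch below does so; since the published reciprocal entries are rounded to two decimals, the computed priority vector and GICL only approximate the reported values.

```python
import numpy as np

# Judgment matrix transcribed from Table 10
# (criteria order: MPN*, ODF, MDF, SAI, MTTR, MTBF, IoP, EI; reciprocals rounded as published).
A = np.array([
    [1.00, 0.50, 0.25, 0.17, 3.00, 0.14, 0.13, 0.17],
    [2.00, 1.00, 2.00, 0.17, 2.00, 0.20, 0.17, 0.25],
    [4.00, 0.50, 1.00, 0.14, 2.00, 0.33, 0.14, 0.17],
    [6.00, 6.00, 7.00, 1.00, 7.00, 5.00, 0.25, 0.50],
    [0.33, 0.50, 0.50, 0.14, 1.00, 0.17, 0.11, 0.13],
    [7.00, 5.00, 3.00, 0.20, 6.00, 1.00, 0.20, 0.33],
    [8.00, 6.00, 7.00, 4.00, 9.00, 5.00, 1.00, 2.00],
    [6.00, 4.00, 6.00, 2.00, 8.00, 3.00, 0.50, 1.00],
])

k = A.shape[0]
gm = np.prod(A, axis=1) ** (1.0 / k)            # row geometric means (RGMM)
w = gm / gm.sum()                                # priority vector
e = np.log(A) - np.log(w)[:, None] + np.log(w)[None, :]
i, j = np.triu_indices(k, 1)
gicl = 2.0 / ((k - 1) * (k - 2)) * float(np.sum(e[i, j] ** 2))

print(np.round(w, 4))   # compare with the priority vector column of Table 10
print(round(gicl, 4))   # close to the reported 0.0745, well below the 0.37 threshold
```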
The next step is to calculate the final score for all 161 components. These results can be found in Table S2 of the supplementary material. The ten most critical maintenance items, as presented in Table 11, are Control Gate, Air/Oil Accumulator, Thrust Bearing Collar, Thrust Bearing Pad, Thrust Bearing Leveling Link, Trash Rack, Turbine Guide Bearing Pivot, Turbine Guide Bearing Pad, Turbine Guide Bearing Housing, and Turbine Blades.
Note that such a criticality assessment is consistent with the fundamental functional issues of the generating unit. The Control Gate and the Trash Rack, for example, are the two adduction components most subject to wear and tear as well as to maintenance or operation errors. The Air/Oil Accumulator, in turn, can be considered the “heart” of the speed governor, essential to sustaining the hydro generator synchronization. Finally, the Thrust and the Turbine Guide Bearings can be considered the most critical bearings, as they have, respectively, the functions of sustaining the entire generating unit and maintaining turbine shaft alignment, which constantly suffers disturbances from the water flow through the turbine blades.

5. Uncertainty Analysis

To verify the robustness of the presented method, an uncertainty analysis was performed considering all the grades given by the experts for each of the considered criteria. As the MPN*, ODF, and MDF criteria scores are not given directly by the experts but obtained from the product of other criteria, the uncertainty analysis was extended to the latter. Thus, the uncertainties of 17 criteria were considered: DET, SEV, DGN, and PGN (related to MPN); I, T, M, and WM (related to ODF); I, T, M, and WM (related to MDF); and SAI, MTTR, MTBF, IoP, and EI. Since all grades are represented by integers, a variation of one point above or below the original scores (besides the original grade) was considered for the first 12 criteria and of up to two points above or below the original scores (also besides the original grade) for the last five criteria. This difference is due to the grade range of each criterion, the variation corresponding to approximately 20% of the maximum scale value in all cases.
A total of 10,000,000 combinations were simulated, ensuring convergence by the mean for all analyzed parameters (mode, mean, standard deviation, maximum position, and minimum position). For each simulation, a list of MSI was created, with the items sorted from AHP results. For each considered item in this paper, its position in every generated list was evaluated. In addition, a measurement of distance (or error) between the original rank position and the mode of each item was obtained. Table 12 presents the first twenty-five results for this analysis. The table was sorted in increasing order considering the original value obtained for the rank of each item. The complete list with the result of this analysis is available in Table S3 of the supplementary material.
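The following compact sketch reproduces the spirit of this perturbation loop. It is illustrative only: the item scores are synthetic, the number of simulations is far smaller than the 10,000,000 used here, and the perturbation is applied directly to the eight AHP criterion scores rather than to the 17 underlying grades.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_criteria, n_sims = 20, 8, 10_000
weights = np.array([0.0274, 0.0456, 0.0408, 0.1923, 0.0216, 0.1045, 0.3460, 0.2218])  # Table 10
scores = rng.integers(1, 10, size=(n_items, n_criteria)).astype(float)  # synthetic item scores

def rank(score_matrix: np.ndarray) -> np.ndarray:
    """Rank position of each item (1 = most critical) from the weighted final scores."""
    return np.argsort(np.argsort(-(score_matrix @ weights))) + 1

base_rank = rank(scores)
positions = np.empty((n_sims, n_items), dtype=int)
for s in range(n_sims):
    # Integer perturbation of up to +/-2 points on each score, clipped to the grade range.
    perturbed = np.clip(scores + rng.integers(-2, 3, size=scores.shape), 1, 9)
    positions[s] = rank(perturbed)

for item in range(n_items):
    counts = np.bincount(positions[:, item], minlength=n_items + 1)
    mode = int(counts.argmax())
    print(f"item {item:2d}: rank {base_rank[item]:2d}  mode {mode:2d}  "
          f"mean {positions[:, item].mean():5.1f}  sd {positions[:, item].std():4.1f}  "
          f"max {positions[:, item].max():2d}  min {positions[:, item].min():2d}  "
          f"error {abs(int(base_rank[item]) - mode)}")
```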
Analyzing the table, it can be noted that for the first five items there was an excellent consistency in the results. The difference between the mode and the original rank reached zero for the first two items, with a small measure of dispersion (standard deviation) as well. From the fifth to the tenth listed item, the results can also be considered very good, and between the tenth and the twentieth item, despite some variations, there is still consistency in the results. Summarizing, the overall result is extremely satisfactory since the position in the table of originally considered most significant items tends not to change significantly, despite the uncertainties considered.

6. Conclusions

This paper presented a new method for defining the MSI of a system. The method is based on several techniques, in a complex engineering approach. Although most inputs are knowledge-based, the method response is obtained in a solid and structured manner from an analytical process, the AHP, allowing decisions to be made from qualitative and quantitative data and eliciting the opinion of experts regarding the factors that can affect the degradation of the performance of a system.
It is noteworthy that the experts’ knowledge of the analyzed system is crucial for the success of the method, especially considering the various ways to describe such a system, as well as their knowledge of the various techniques used in the method. For example, the FT’s level of detail, regarding both the number of analyzed levels and which systems and subsystems are detailed, is a decision of the experts involved in the process. Such a decision will eventually have a bearing on the whole process and, accordingly, on the result. Thus, it can be said that good initial planning is the key point for the best use of the method. In this regard, a pre-screening process for the initially considered items could be added to the method to systematize this phase in future works.
On the other hand, it is noted that the method is robust, since the propagation of the uncertainty related to the criteria grades is small for the items classified as most significant by its application. Nevertheless, due to the qualitative nature of many of the applied techniques, the method is susceptible to epistemic uncertainties. Thus, the assessment of the method’s sensitivity to such uncertainty, as well as techniques that could be incorporated into the method to mitigate the effect of these uncertainties, may be the subject of future work.
Another advantage of the proposed method is that, by using AHP, it can be easily automated and replicated for many systems, which is very interesting for its application in Maintenance 4.0. Likewise, since techniques such as FMSA and OMCTA are applied, the method incorporates criteria that assess the failure in advance of its occurrence, such as the likelihood of failure due to human error and the level of confidence in the diagnosis and prognosis of each failure, making the method response useful for applications that require a higher level of reliability and availability of the analyzed system.
The method was applied to a hydro generator unit of a Brazilian plant and the obtained result was quite satisfactory. Due to the choice of decision criteria applied to AHP, it is noted that the method response is clearly directed to the analyzed systems’ functionality, i.e., aligned with RCM, CBM, and other modern maintenance techniques. Considering its application in hydropower plants, RCM and CBM practices can extend the duration between maintenance routines, improve unit availability, and eliminate some forced outages. However, despite these benefits, justifying the required initial investment to change from traditional maintenance practices, and the higher budgets required for these preventive efforts, can be a challenge. Thus, knowing the MSI becomes essential to direct the efforts and investments required to develop maintenance under these conditions, showing a great application potential for the developed method.
For future work, it may also be interesting to apply the method to other complex systems to confirm its viability and ability to define MSI. In addition, it would be desirable to analyze the method’s sensitivity to each criterion considered and the uncertainties related not only to the scores of such criteria, but also to the judgment matrix and the pairwise comparisons between the criteria.

Supplementary Materials

The following materials are available online at https://www.mdpi.com/1996-1073/13/23/6273/s1, Table S1: Items, IDs and AHP Scores.pdf, Table S2: MSI Ranking Matrix.pdf, Table S3: Statistical results of uncertainty analysis.pdf.

Author Contributions

Conceptualization, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; data curation, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; visualization, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; methodology, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; validation, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; formal analysis, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; investigation, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; writing—original draft, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., and G.F.M.d.S.; writing—review and editing, A.C.N., A.H.d.A.M., C.A.M., M.A.d.C.M., G.F.M.d.S., and S.I.N.; supervision, G.F.M.d.S. and S.I.N.; project administration, G.F.M.d.S. and S.I.N.; funding acquisition, G.F.M.d.S. and S.I.N.; resources, G.F.M.d.S. and S.I.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by Fundação para o Desenvolvimento Tecnológico da Engenharia (FDTE), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and Energia de Portugal (EDP) through an ANEEL R&D project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Da Silva, R.F.; de Andrade Melani, A.H.; de Carvalho Michalski, M.A.; de Souza, G.F.; Nabeta, S.I.; Hamaji, F.H. Defining Maintenance Performance Indicators for Asset Management Based on ISO 55000 and Balanced Scorecard: A Hydropower Plant Case Study. In Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference, Venice, Italy, 1–5 November 2020; Baraldi, P., Maio, F., Di Zio, E., Eds.; Research Publishing: Singapore; Venice, Italy, 2020; p. 8. [Google Scholar]
  2. Velmurugan, R.S.; Dhingra, T. Maintenance strategy selection and its impact in maintenance function: A conceptual framework. Int. J. Oper. Prod. Manag. 2015, 35, 1622–1661. [Google Scholar] [CrossRef]
  3. Rose, A. Reliability Centered Maintenance; Technical Bulletin 014; Cadick Corporation: Garland, TX, USA, 2002. [Google Scholar]
  4. Tang, Y.; Liu, Q.; Jing, J.; Yang, Y.; Zou, Z. A framework for identification of maintenance significant items in reliability centered maintenance. Energy 2017, 118, 1295–1303. [Google Scholar] [CrossRef]
  5. Gupta, G.; Mishra, R.P. A SWOT analysis of reliability centered maintenance framework. J. Qual. Maint. Eng. 2016, 22, 130–145. [Google Scholar] [CrossRef]
  6. Yssaad, B.; Khiat, M.; Chaker, A. Reliability centered maintenance optimization for power distribution systems. Int. J. Electr. Power Energy Syst. 2014, 55, 108–115. [Google Scholar] [CrossRef]
  7. Sainz, J.A.; Sebastián, M.A. Methodology for the maintenance centered on the reliability on facilities of low accessibility. Procedia Eng. 2013, 63, 852–860. [Google Scholar] [CrossRef] [Green Version]
  8. Igba, J.; Alemzadeh, K.; Anyanwu-Ebo, I.; Gibbons, P.; Friis, J. A Systems Approach Towards Reliability-Centred Maintenance (RCM) of Wind Turbines. Procedia Comput. Sci. 2013, 16, 814–823. [Google Scholar] [CrossRef] [Green Version]
  9. Cheng, Z.; Jia, X.; Gao, P.; Wu, S.; Wang, J. A framework for intelligent reliability centered maintenance analysis. Reliab. Eng. Syst. Saf. 2008, 93, 806–814. [Google Scholar] [CrossRef]
  10. Legutko, S. Trendy rozwoju utrzymania ruchu urządzeń i maszyn. Eksploat. Niezawodn. Maint. Reliab. 2009, nr 2, 8–16. [Google Scholar]
  11. Nowlan, F.S.; Heap, H.F. Reliability-Centered-Maintenance; Dolby Access Press: San Francisco, CA, USA, 1978. [Google Scholar]
  12. Niu, G.; Yang, B.-S.; Pecht, M. Development of an optimized condition-based maintenance system by data fusion and reliability-centered maintenance. Reliab. Eng. Syst. Saf. 2010, 95, 786–796. [Google Scholar] [CrossRef]
  13. Arno, R.; Dowling, N.; Schuerger, R. Equipment Failure Characteristics and RCM for Optimizing Maintenance Cost. IEEE Trans. Ind. Appl. 2016, 52, 1257–1264. [Google Scholar]
  14. Eti, M.C.; Ogaji, S.O.T.; Probert, S.D. Reducing the cost of preventive maintenance (PM) through adopting a proactive reliability-focused culture. Appl. Energy 2006, 83, 1235–1248. [Google Scholar] [CrossRef] [Green Version]
  15. Yuan, Y.; Wang, J.; ShangGuan, W.; Cai, B.; Song, H. Research on Identification of Maintenance Significant Items in Reliability Centered Maintenance for Train Control System. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 2817–2822. [Google Scholar]
  16. Santos, T.; Silva, F.J.G.; Ramos, S.F.; Campilho, R.D.S.G.; Ferreira, L.P. Asset Priority Setting for Maintenance Management in the Food Industry. Procedia Manuf. 2019, 38, 1623–1633. [Google Scholar] [CrossRef]
  17. Gupta, G.; Mishra, R.P. Identification of Critical Components Using ANP for Implementation of Reliability Centered Maintenance. Procedia CIRP 2018, 69, 905–909. [Google Scholar] [CrossRef]
  18. Melani, A.H.A.; Murad, C.A.; Caminada Netto, A.; de Souza, G.F.M.; Nabeta, S.I. Criticality-based maintenance of a coal-fired power plant. Energy 2018, 147, 767–781. [Google Scholar] [CrossRef]
  19. Gonçalves, F.C.C.; Trabasso, L.G. Aircraft preventive maintenance data evaluation applied in integrated product development process. J. Aerosp. Technol. Manag. 2018, 10, 1–15. [Google Scholar] [CrossRef]
  20. Ahmadi, A.; Söderholm, P.; Kumar, U. On aircraft scheduled maintenance program development. J. Qual. Maint. Eng. 2010, 16, 229–255. [Google Scholar] [CrossRef]
  21. da Silva, R.F.; de Andrade Melani, A.H.; de Carvalho Michalski, M.A.; de Martha Souza, G.F.; Nabeta, S.I. Defining Maintenance Significant Items Based on ISO 55000 and AHP: A Hydropower Plant Case Study. In Proceedings of the 29th European Safety and Reliability Conference (ESREL), Hannover, Germany, 22–26 September 2019; Research Publishing Services: Singapore, 2019; pp. 3437–3444. [Google Scholar]
  22. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  23. Murad, C.A.; Melani, A.H.A.; Caminada Netto, A.; Michalski, M.A.C.; Souza, G.F.M.; Nabeta, S.I. OMCTA: A novel knowledge-based technique to evaluate the influence of O&M actions on maintenance management. Int. J. Ind. Eng. Manag. 2020, 11, 91–92. [Google Scholar]
  24. Dehghanian, P.; Fotuhi-Firuzabad, M.; Bagheri-Shouraki, S.; Razi Kazemi, A.A. Critical component identification in reliability centered asset management of power distribution systems via fuzzy AHP. IEEE Syst. J. 2012, 6, 593–602. [Google Scholar] [CrossRef]
  25. Hartini, E.; Dibyo, S.; Pujiarta, S. Determination of Maintenance Priority Index (MPI) for Components on RSG-Gas Safety System. J. Teknol. Reakt. Nukl. Tri Dasa Mega 2020, 20, 77–88. [Google Scholar] [CrossRef]
  26. Delligatti, L. SysML Distilled: A Brief Guide to the Systems Modeling Language, 1st ed.; Pearson Education: Saddle River, NJ, USA, 2013; ISBN 978-0321927866. [Google Scholar]
  27. Melani, A.H.A.; Souza, G.F.M. Mapping SysML Diagrams into Bayesian Networks: A Systems Engineering Approach for Fault Diagnosis. ASCE-ASME J. Risk Uncertain. Eng. Syst. Part B Mech. Eng. 2020, 6, 20. [Google Scholar] [CrossRef]
  28. IEC. 61882: 2016 Hazard and Operability Studies (HAZOP Studies)—Application Guide; International Electrotechnical Commission: Geneva, Switzerland, 2016. [Google Scholar]
  29. Baybutt, P. Design intent for hazard and operability studies. Process Saf. Prog. 2016, 35, 36–40. [Google Scholar] [CrossRef]
  30. Vogl, G.W.; Weiss, B.A.; Donmez, M.A. Standards for prognostics and health management (PHM) techniques within manufacturing operations. In Proceedings of the Annual Conference of the Prognostics and Health Management Society 2014, Fort Worth, TX, USA, 29 September–2 October 2014; Volume 1, pp. 576–588. [Google Scholar]
  31. ISO 13379-1:2012(E) Condition Monitoring and Diagnostics of Machines—Data Interpretation and Diagnostics Techniques—Part 1: General Guidelines; ISO: Geneva, Switzerland, 2012.
  32. Murad, C.; Melani, A.H.A.; Michalski, M.A.C.; Caminada Netto, A.; de Souza, G.; Nabeta, S.I. Fuzzy-FMSA: Evaluating Fault Monitoring and Detection Strategies Based on Failure Mode and Symptom Analysis and Fuzzy Logic. ASCE-ASME J. Risk Uncert Engrg Sys. Part B Mech. Engrg 2020, 6, 12. [Google Scholar] [CrossRef]
  33. Saaty, T.L. How to make a decision: The analytic hierarchy process. Eur. J. Oper. Res. 1990, 48, 9–26. [Google Scholar] [CrossRef]
  34. Saaty, T.L. Decision making with the analytic hierarchy process. Int. J. Serv. Sci. 2008, 1, 83–98. [Google Scholar] [CrossRef] [Green Version]
  35. Dong, Y.; Zhang, G.; Hong, W.-C.; Xu, Y. Consensus models for AHP group decision making under row geometric mean prioritization method. Decis. Support Syst. 2010, 49, 281–289. [Google Scholar] [CrossRef]
  36. Crawford, G.; Williams, C. A note on the analysis of subjective judgment matrices. J. Math. Psychol. 1985, 29, 387–405. [Google Scholar] [CrossRef]
  37. Aguarón, J.; Moreno-Jiménez, J.M. The geometric consistency index: Approximated thresholds. Eur. J. Oper. Res. 2003, 147, 137–145. [Google Scholar] [CrossRef]
  38. British Petroleum. BP Statistical Review of World Energy; British Petroleum: London, UK, 2018. [Google Scholar]
  39. MME. Balanço Energético Nacional 2018; Ministério de Minas e Energia: Brasília, Brazil, 2018.
Figure 1. Maintenance Cost vs. Maintenance Commitment indicating the optimum point of Total Maintenance Cost.
Figure 2. The proposed method’s flowchart with the presentation of each applied technique—(A): Functional Tree (FT); (B): Internal Block Diagram (IBD); (C): Hazard and Operability Study (HAZOP); (D): Failure Mode Symptoms Analysis (FMSA); (E): Operation & Maintenance Causal Tree Analysis (OMCTA); and (F): Analytic Hierarchy Process (AHP)—and the information flows between them.
Figure 3. System functional tree, as considered in the case study, representing its subsystems and components up to the sixth level.
Figure 4. Shaft Internal Block Diagrams (IBD) with the representation of interfaces and information flow between the components of this subsystem.
Figure 5. Operation & Maintenance Tree Analysis (OMCTA) example diagram considering node 3.I previously presented in the HAZOP table.
Table 1. Comparison of the Methods and Criteria.
Methods and Criteria[4][17][18][21][15][16]Proposed Approach
FT XX X
IBD X
HAZOP X X
FTA X
FMEA/FMECAX XXX
FMSA X
OMCTA X
Risk MatrixX XX
ISO 55000 standard X
Safety Impacts (Personnel and Environmental)XXXXXXX
System Functionality/Availability ImpactsXXXXXXX
Failure Rate/ReliabilityXXXXXXX
Costs (Maintenance Costs and Downtime Losses)XXX XX
Failure Detectability/Monitoring CapabilityXXXXXXX
Downtime/MaintainabilityXXXXXXX
Maintenance Compliance X
Failure Likelihood due to Operation X
Failure Likelihood due to Maintenance X
Diagnose Confidence Level X
Prognose Confidence Level X
AHPX XX X
ANP XX
FBCX
MCS X
Table 2. The fundamental Scale of Absolute Numbers [34].
Intensity of Importance | Definition | Explanation
1 | Equal importance | Two activities contribute equally to the objective.
3 | Moderate importance | Experience and judgment slightly favor one activity over another.
5 | Strong importance | Experience and judgment strongly favor one activity over another.
7 | Very strong or demonstrated importance | An activity is favored very strongly over another; its dominance is demonstrated in practice.
9 | Extreme importance | The evidence favoring one activity over another is of the highest possible order of affirmation.
Reciprocals of above | If activity i has one of the above non-zero numbers assigned to it when compared to activity j, then j has the reciprocal value when compared with i | A reasonable assumption.
1.1–1.9 | If the activities are very close | It may be difficult to assign the best value, but when compared with other contrasting activities the size of the small number would not be too noticeable, yet it can still indicate the relative importance of the activities.
Table 3. System Availability Impact Scores and Description.
Score | Description | Impact
1 | None | Failure does not impact unit availability or generation
3 | Minor | Failure does not cause downtime but affects the operating condition
5 | Major | Failure does not cause downtime but affects power generation
7 | Critical | Failure does not cause downtime, but critically affects power generation
9 | Catastrophic | Failure causes generator unit outage impact
Table 4. Mean Time to Repair (MTTR) Impact Scores and Description.
Score | Description | Impact
1 | Very low | Repair time is very short (up to one shift)
3 | Low | Repair time is short (up to one day)
5 | Moderate | Repair time is moderate (up to one week)
7 | High | Repair time is long (up to one month)
9 | Very high | Repair time is very long (several months)
Table 5. Mean Time Between Failures (MTBF) Impact Scores and Description.
Score | Description | Impact
1 | Very low | Failure rate is very low—up to 1 fault within 60 months (λ < 0.016)
3 | Low | Failure rate is low—1 fault every 36 to 48 months (0.020 < λ < 0.027)
5 | Moderate | Failure rate is moderate—1 fault every 12 to 24 months (0.041 < λ < 0.083)
7 | High | Failure rate is high—1 fault every 3 to 6 months (0.166 < λ < 0.333)
9 | Very high | Failure rate is very high—1 fault every month or more (λ > 1)
Table 6. Impact on People Scores and Description.
Score | Description | Impact
1 | Low | Does not cause injuries or health problems to personnel
3 | Minor | May cause minor injuries or minor health problems to personnel
5 | Major | May cause major injuries or serious health problems to personnel
7 | Critical | May cause serious injuries or critical health problems to personnel
9 | Catastrophic | May cause death or extreme health problems to personnel
Table 7. Environmental Impact Scores and Description.
Score | Description | Impact
1 | Low | Not enough to cause environmental impacts
3 | Minor | May cause minor environmental impacts
5 | Major | May cause medium-sized or large environmental impacts
7 | Critical | May cause severe environmental impacts
9 | Catastrophic | May cause catastrophic environmental impacts
Table 8. Shaft Hazard and Operability Analysis (HAZOP) Table.
Node | Parameter | Guide Word | Deviation | Potential Causes
3.A | Pressure | below | Hydrodynamic pressure below nominal | Bearing oil shortage; overheated oil; wrong specification oil; excessive bearing wear
3.B | Pressure | below | Hydrodynamic pressure below nominal | Bearing oil shortage; overheated oil; wrong specification oil; excessive bearing wear
3.C | Pressure | below | Lubricating oil pressure below nominal | Lubrication system failure
3.D | Pressure | below | Lubricating oil pressure below nominal | Lubrication system failure
3.E | Pressure | below | Lubricating oil pressure below nominal | Lubrication system failure
3.F | Level | below | Bearing oil level below nominal | Oil leak in turbine guide bearing; lubrication system failure
3.F | Temperature | above | Bearing oil temperature above nominal | Excessive dynamic loads
3.G | Level | below | Bearing oil level below nominal | Oil leak in turbine guide bearing; lubrication system failure
3.G | Temperature | above | Bearing oil temperature above nominal | Excessive dynamic loads
3.H | Level | below | Bearing oil level below nominal | Oil leak in turbine guide bearing; lubrication system failure
3.H | Temperature | above | Bearing oil temperature above nominal | Excessive dynamic loads
3.I | Current | not | No electric power supply to the oil pump | Power system failure
3.J | Rotation | not | No rotation on the generator shaft | Coupling elements breakage
Table 9. Failure Mode Symptoms Analysis (FMSA) Worksheet Example.
Item | Failure Mode | Root Causes | Failure Symptoms | Primary Technique | Monitoring Location | Monitoring Frequency | DET | SEV | DGN | PGN | MPN
Turbine and Generator shafts (items 3.1 and 3.2) | Fracture | Cracks propagation due to mechanical fatigue or overload | Noise, excessive vibration, and loss of synchronism | Active power | Phase bus | Continuous | 1 | 1 | 4 | 5 | 20
| | | | Noise | Local | Inspection round | 3 | 1 | 4 | 5 | 60
| Excessive vibration | Bearings lubrication failure (lack or loss of oil properties) | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 2 | 3 | 2 | 24
| | Mechanical unbalance | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 3 | 3 | 2 | 36
| | Electromagnetic imbalance | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 3 | 3 | 2 | 36
| | Shaft misalignment | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 4 | 3 | 2 | 48
| | Misalignment between bearings | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 4 | 3 | 2 | 48
| | Shaft bow | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 4 | 3 | 2 | 48
| | Cracks propagation in the shaft | Excessive bearing vibration and temperature, noise | Noise | Local | Inspection round | 2 | 2 | 3 | 2 | 24
Table 10. Judgment Matrix with the Scores for Each Considered Criterion.
Criterion | MPN* | ODF | MDF | SAI | MTTR | MTBF | IoP | EI | Priority Vector
MPN* | 1 | 0.5 | 0.25 | 0.17 | 3 | 0.14 | 0.13 | 0.17 | 0.0274
ODF | 2 | 1 | 2 | 0.17 | 2 | 0.2 | 0.17 | 0.25 | 0.0456
MDF | 4 | 0.5 | 1 | 0.14 | 2 | 0.33 | 0.14 | 0.17 | 0.0408
SAI | 6 | 6 | 7 | 1 | 7 | 5 | 0.25 | 0.5 | 0.1923
MTTR | 0.33 | 0.5 | 0.5 | 0.14 | 1 | 0.17 | 0.11 | 0.13 | 0.0216
MTBF | 7 | 5 | 3 | 0.2 | 6 | 1 | 0.2 | 0.33 | 0.1045
IoP | 8 | 6 | 7 | 4 | 9 | 5 | 1 | 2 | 0.3460
EI | 6 | 4 | 6 | 2 | 8 | 3 | 0.5 | 1 | 0.2218
Table 11. Item Classification Matrix with the Top Ten Maintenance Significant Items (MSI)
ID | Item | Rank | Score
4.1.2 | Control Gate | 1 | 0.0152
5.4.2.4.2 | Air/Oil Accumulator | 2 | 0.0139
3.5.1 | Thrust Bearing Collar | 3 | 0.0128
3.5.3 | Thrust Bearing Pad | 4 | 0.0128
3.5.4 | Thrust Bearing Leveling Link | 5 | 0.0126
4.1.1 | Trash Rack | 6 | 0.0120
3.4.2.3 | Turbine Guide Bearing Pad | 7 | 0.0117
3.4.2.1 | Turbine Guide Bearing Housing | 8 | 0.0117
3.4.2.2 | Turbine Guide Bearing Pivot | 9 | 0.0117
4.6.2 | Turbine Blades | 10 | 0.0111
Table 12. Uncertainty Analysis Results.
ID | Item | Rank | Mode | Mean | SD | Max | Min | Error
4.1.2 | Control Gate | 1 | 1 | 3 | 3 | 30 | 1 | 0
5.4.2.4.2 | Air/Oil Accumulator | 2 | 2 | 4 | 4 | 35 | 1 | 0
3.5.1 | Thrust Bearing Collar | 3 | 2 | 8 | 7 | 50 | 1 | 1
3.5.3 | Thrust Bearing Pad | 4 | 2 | 8 | 7 | 50 | 1 | 2
3.5.4 | Thrust Bearing Leveling Link | 5 | 3 | 9 | 7 | 51 | 1 | 2
4.1.1 | Trash Rack | 6 | 4 | 10 | 7 | 49 | 1 | 2
3.4.2.3 | Turbine Guide Bearing Pad | 7 | 6 | 14 | 10 | 64 | 1 | 1
3.4.2.1 | Turbine Guide Bearing Housing | 8 | 6 | 14 | 10 | 63 | 1 | 2
3.4.2.2 | Turbine Guide Bearing Pivot | 9 | 6 | 14 | 10 | 65 | 1 | 3
4.6.2 | Turbine Blades | 10 | 10 | 18 | 11 | 75 | 1 | 0
3.1 | Generator Shaft | 11 | 9 | 15 | 9 | 53 | 1 | 2
3.2 | Turbine Shaft | 12 | 8 | 15 | 9 | 53 | 1 | 4
4.2 | Penstock | 13 | 8 | 15 | 9 | 53 | 1 | 5
3.4.1.1.2 | Generator Upper Guide Bearing Pivot | 14 | 11 | 20 | 12 | 77 | 1 | 3
3.4.1.2.2 | Generator Lower Guide Bearing Pivot | 15 | 12 | 20 | 12 | 76 | 1 | 3
3.4.1.1.3 | Generator Upper Guide Bearing Pad | 16 | 11 | 20 | 12 | 76 | 1 | 5
3.4.1.2.3 | Generator Lower Guide Bearing Pad | 17 | 11 | 20 | 12 | 76 | 1 | 6
3.4.1.1.1 | Generator Upper Guide Bearing Housing | 18 | 11 | 20 | 12 | 77 | 1 | 7
3.4.1.2.1 | Generator Lower Guide Bearing Housing | 19 | 11 | 20 | 12 | 77 | 1 | 8
3.5.2 | Thrust Bearing Housing | 20 | 12 | 20 | 12 | 77 | 1 | 8
5.2.3.1 | Center guideline | 21 | 10 | 16 | 9 | 54 | 1 | 11
3.6.3.1.4 | Relief Valve | 22 | 22 | 29 | 14 | 92 | 1 | 0
4.6.1 | Hub | 23 | 19 | 26 | 12 | 74 | 1 | 4
5.2.2 | Kaplan Head Bushing | 24 | 24 | 31 | 15 | 100 | 1 | 0
5.2.1 | Kaplan Head | 25 | 24 | 31 | 15 | 100 | 1 | 1