Article

Ranking Public Infrastructure Project Success Using Multi-Criteria Analysis

Department of Environmental Engineering, International Hellenic University, Sindos, 57400 Thessaloniki, Greece
*
Author to whom correspondence should be addressed.
Buildings 2025, 15(16), 2807; https://doi.org/10.3390/buildings15162807
Submission received: 5 June 2025 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 8 August 2025

Abstract

Project success is a complex and debated concept in construction project management, even more so for public-sector infrastructure projects. This study proposes a new, data-driven methodology to assess the success of public infrastructure projects using a multi-criteria decision-making framework. Utilizing empirical data from 30 completed road infrastructure projects, the study applies the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method to evaluate performance across four key success criteria: cost, time, quality, and project management. An integrated Success Index (SI) was then calculated using the Simple Additive Weighting (SAW) method under two different weighting scenarios. Results show that projects with shorter durations and simpler scopes consistently achieved higher SI scores, while larger, more complex projects were more prone to delays, cost overruns, and quality issues. This study contributes to scientific research by utilizing real, archival project data rather than expert opinion to quantify project success from the perspective of the client contracting authority rather than that of the contractor. The proposed model thus serves as a practical, adaptable tool for public contracting authorities seeking to benchmark and improve project performance.

1. Introduction

Project success is one of the most studied topics in project management because of the difficulty of determining the factors that contribute to its achievement and of defining how to measure it. Measuring project success is a genuinely complex task. Nevertheless, it is essential for all project-oriented organizations, because if success cannot be measured, it cannot be improved [1]. Success in the construction sector is vital, and it is therefore important for all stakeholders, including clients, construction contractors, project management teams, and consultants, to understand the concept of project success. According to Ingle and Mahesh [2], success is described as meeting the expectations of stakeholders and achieving the intended goal.
Project success is evaluated through success criteria, which are the principles and measures used to determine whether a project has met its objectives, traditionally focusing on time, cost, quality, and customer satisfaction [3]. To track progress towards meeting these success criteria, key performance indicators (KPIs) are specific, measurable metrics used to monitor and evaluate project performance [4]. In contrast, critical success factors (CSFs) are the essential elements and conditions that significantly contribute to achieving these success criteria [5]. Due to the large number of different CSFs reported in the literature, most researchers classify them into broad categories. These categories include project characteristics, organizational factors, project manager and project team capacity, project management and the external environment. These factors are the enablers that drive the project towards its defined success.
An integrated framework for project performance measurement is required to formalize the way contractors and contracting authorities (CAs) evaluate the performance of construction projects. Therefore, the purpose of this paper is to develop a project success evaluation tool that can potentially be useful in the public sector. To do so, it examines the public CA Egnatia Odos S.A. (EOSA) as a case study. EOSA was the CA for the design, management, supervision, construction, maintenance, and operation of the Egnatia Motorway and other road infrastructure projects in Greece and abroad. This research collected contractual data from 30 road construction projects with construction completion dates between 2002 and 2020. The project types included in this study were new motorway construction, motorway completion works, existing motorway improvements, rural road improvements, emergency road works, and construction of operating facilities, such as toll stations and parking areas.
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making (MCDM) method was applied to evaluate each project’s success against each of the four success criteria examined (cost, time, quality, and management). This MCDM method was chosen because it requires minimal data input, and its results can be easily obtained using spreadsheets. The results of the method are objective, since the data used are real and sufficient in number to produce a reliable evaluation tool. The only subjective element was the definition of the weights assigned to each success criterion, which are directly related to the requirements of the evaluator. Subsequently, the Simple Additive Weighting (SAW) method enabled the ranking of the projects in terms of their overall performance. Thus, an overall success index (SI) was calculated that incorporated and quantified the performance of each project against all success criteria.
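The SAW aggregation described above can be sketched in a few lines of code. This is a minimal illustration of the weighted-sum idea only; the weights and KPI values below are illustrative and are not taken from the study.

```python
# Minimal sketch: combining per-criterion KPI scores into an overall
# Success Index (SI) with Simple Additive Weighting (SAW).
# All numeric values below are illustrative, not data from the study.

def success_index(kpis: dict[str, float], weights: dict[str, float]) -> float:
    """SI = sum over criteria of (criterion weight * KPI score in [0, 1])."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * kpis[c] for c in kpis)

# Example: an equal-weight scenario for the four success criteria.
kpis = {"cost": 0.72, "time": 0.55, "quality": 0.90, "management": 0.61}
weights = {"cost": 0.25, "time": 0.25, "quality": 0.25, "management": 0.25}
si = success_index(kpis, weights)  # 0.25 * (0.72 + 0.55 + 0.90 + 0.61) = 0.695
```

Changing the weighting scenario (e.g., emphasizing cost over quality) changes only the `weights` dictionary, which is how the two weighting scenarios mentioned in the abstract can be compared.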
The following section includes a detailed literature review that identifies a significant gap in the literature regarding proposals for a tool to define project success indicators and measure project success as perceived by public CAs. Section 3 describes the methods used and how they were applied to the research problem, including data collection, calculation of KPIs for each success criterion, and analysis. Section 4 presents the results, while the final section includes the conclusions, proposals, limitations, and suggestions for future research.

2. Literature Review

While project success is one of the most explored topics in project management, its meaning remains vague, often depending on the interpretation given by each observer [6]. Early definitions focused primarily on the “iron triangle” of time, cost, and quality, where a project was considered successful if it was delivered on time, within budget, and to the specified standards [7]. Since the 1980s, however, this view has evolved to recognize additional dimensions like stakeholder satisfaction and project management success.
For construction projects, success is defined as the extent to which project objectives and quality are achieved, which includes not only completing the project within budget, on time, and to the required quality standards but also meeting the needs and expectations of stakeholders [8]. Similarly, according to Ingle and Mahesh [2] success is described as meeting the expectations of stakeholders and achieving the intended goal.
Bahadori Zare et al. [9] emphasize that project success is among the most important objectives and a major concern for project managers and stakeholders alike. From this perspective, researchers agree that success hinges on satisfying the needs of stakeholders, clients, and end users [10,11,12,13]. Furthermore, Zidane et al. [14] propose a dual categorization of success: (1) project management success, which concerns the efficiency of execution and the achievement of business objectives, and (2) project success, which reflects the importance of long-term success factors and criteria that influence overall outcomes.
Most studies on CSFs in the construction industry have employed questionnaires or interviews as their primary research methods [12,13,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30], with a smaller number relying on literature reviews [25,27,31,32] or case studies and real project data [3,21,26]. These studies predominantly focus on general construction [12,13,16,19,20,22,23,24,25,32] or building projects [15,18,26,29,33] while only a few address public infrastructure projects [3,17,21,28,30,31]. Even fewer published in recent years have specifically identified CSFs relevant to public-sector road projects. These factors are essential to ensure project success, particularly in terms of on-time delivery, cost management and overall project quality. For example, a study focusing on the timely delivery of road projects highlighted the importance of engaging competent contractors, timely payment and promoting research and development [34]. It also stressed that the CA should consider alternative funding mechanisms to prevent delays. Similarly, Sohu et al. [27] identified the top five CSFs for highway projects in Pakistan: the experience level of the project management team, effective site management, commitment from all stakeholders, involvement of an experienced design team, and robust project planning.
Most research into the success of public infrastructure projects, primarily in developing countries, focuses on cost overruns and the factors that cause them. For example, a survey of public construction projects in Ghana identified early contractor involvement in project planning, adequate funding, good project team relationships, competent management, and incentives for project participants as the main CSFs for reducing cost overruns [35]. Another study identified contract changes as the most important factor leading to cost overruns in Indonesia [36]. Finally, an article examining public-sector road projects in developing economies found that stakeholder collaboration and strong institutional support are critical: effective communication and collaboration among all project stakeholders, including government agencies, contractors, and local communities, were seen as key to project success [37].
In general, most studies on road projects have focused on the success of public–private partnerships (PPPs) [38,39,40,41]. In this paper, the project data used are derived from contracts awarded to contractors through the traditional design-bid-build procurement process rather than PPP contracts. While sixty percent of the overall length of Greek motorways was constructed under PPP schemes, the remaining 40% corresponds to the Egnatia Odos Motorway and its perpendicular axes, which were constructed through a series of public works design-bid-build contracts [42]. This article therefore examines project success from the perspective of the CA rather than that of the contractor or concessionaire. Previous studies have primarily employed quantitative evaluation using statistical and simplified methods, which are relatively straightforward to apply. Moreover, most studies determine success KPIs primarily through literature surveys, questionnaires, and interviews, ranking the indicators by their importance in evaluating project performance. However, few studies were found that calculate KPIs to measure overall project success using an analytical mathematical method.
In fact, a non-exhaustive literature review identified 74 related studies dating from 2001 to 2025, of which only six provided measurement methods for project success. A detailed content analysis of these six studies (Table 1) revealed that they proposed methods for calculating between 5 and 14 different KPIs, and five of them employed a method for integrating these evaluations into an overall SI. Only one of these studies, by Langston [43], could be utilized by client organizations such as CAs for road infrastructure, as it developed generic key performance indicators but left the specific evaluation methods for each KPI to the organization. In contrast, the other studies primarily assess project success from the contractor’s perspective, focusing on KPIs such as profitability, billing efficiency, costs arising from safety and quality rework issues, client satisfaction, and return on investment. These indicators, while important, reflect the priorities and performance metrics relevant to contractors rather than public stakeholders. All used the SAW method or the Relative Importance Index (RII) for overall SI calculation, and only one employed the Analytic Hierarchy Process (AHP) MCDM method for weight calculation [1]. Therefore, the approach presented in this article is unique, as it proposes a novel, data-driven methodology for assessing the success of public infrastructure projects from the perspective of public client organizations using TOPSIS, a robust and easy-to-apply MCDM framework.

3. Materials and Methods

3.1. Data Collection

Through its long history of delivering major infrastructure works, primarily the construction and operation of the Egnatia Motorway and its vertical axes, EOSA has developed extensive expertise in managing public infrastructure projects. The data used in this study were extracted directly from official contract files. This is the first time such data have been systematically collected for the purpose of evaluating project success.
Data were collected from thirty (30) projects, a small number compared to the total number of projects implemented by EOSA but sufficient given the size and complexity of public infrastructure projects in Greece. The selection of the thirty (30) projects was based on the availability of complete and reliable data, which was a prerequisite for inclusion in the analysis. Data access was facilitated through collaboration with personnel responsible for a specific geographic section of the Egnatia Motorway, where full documentation was available. Although the number of projects represents a small portion of EOSA’s total portfolio, care was taken to include a diverse range of project types. These include eleven (11) new motorway section construction projects, seven (7) motorway completion works, four (4) existing motorway improvements, two (2) rural road improvements, two (2) emergency road works, and four (4) operating facility construction projects, such as toll stations. It is important to note that the scope of this study is not to draw statistical conclusions about infrastructure project success at the national level, but rather to present a practical evaluation model that public CAs can use to assess and improve project outcomes within their own operations.
A total of 57 data items were collected for each project, as outlined in Table 2, from EOSA’s archives. This data collection was conducted following a comprehensive review of both digital and physical project files under the supervision of experienced company engineers to ensure the accuracy and relevance of the information for subsequent analysis. The data were provided in writing by EOSA upon official request. Of the 57 data items, 27 were utilized in the calculation of performance indicators. The remaining data—such as contract titles, contract numbers, executing contractors, reasons for extensions, special orders, and objections—were excluded from indicator calculations. While some of these items had limited variability across the sample (e.g., minimal claims or work accidents) or were subject to GDPR constraints, others were excluded due to time limitations that prevented further correlation analysis with the Success Index (SI). Nonetheless, several of the unused items were considered valuable for context and interpretation and are acknowledged in the conclusions as promising areas for future model refinement and evaluation.

3.2. Project Success Indicators

The primary objective of the proposed model is to provide an analytical framework using MCDM methods to assess and interpret the overall success of a road project. This process aims to calculate an overall success index (SI) for each project based on the definition of four success criteria. The KPIs, which serve as the success criteria of this evaluation, are as follows:
  • Cost success indicator (KPIc);
  • Time success indicator (KPIt);
  • Quality success indicator (KPIq);
  • Project management success indicator (KPIm).
Each KPI consisted of sub-indicators that served as selection criteria for the application of the TOPSIS method in calculating the particular KPI value. The data collected for each project, as described in Table 2, were used to quantify the sub-indicators. The KPIs, their sub-indicators, and a detailed description of how the individual sub-indicators were calculated are presented in Figure 1.
Table 2. List of data collected for each project.
  1. Contract number
  2. Title of contract
  3. Project category
  4. Contractor
  5. Tender budget
  6. Tender system
  7. Bidding system
  8. Project procurement system
  9. Tender date
  10. Date of signing of the contract
  11. Duration of contract (days)
  12. Initial contractual deadline (date)
  13. Contract completion (date)
  14. Total extension of time (days)
  15. Source of funding
  16. Initial contractual cost (including CO&P and excluding contingencies and VAT)
  17. Initial amount for contingency
  18. Final contractual cost (including CO&P)
  19. Final design cost
  20. Final price revision amount
  21. Number of change orders
  22. Total number of new work items
  23. Total cost of new work items
  24. Supplementary contract (Yes/No)
  25. Final cost of supplementary contract
  26. Final price revision amount for SC
  27. Change in contractual scope of work (Yes/No)
  28. Design modifications (Yes/No)
  29. Existence of land acquisition problem (Yes/No)
  30. Number of interim work quantity take-offs
  31. Number of requests for payment
  32. Final work quantity take-off: date of submission
  33. Final work quantity take-off: date of approval
  34. Project registry: date of submission
  35. Project registry: date of approval
  36. Number of special orders
  37. Subject matter of special orders
  38. Special invitation issued (Yes/No)
  39. Termination of contract before completion (Yes/No)
  40. Number of claims
  41. Causes of claims
  42. Number of extensions of time
  43. Duration of each time extension (days)
  44. Reasons for time extensions
  45. Number of work accidents
  46. Type of accidents
  47. Size of the supervision team
  48. Provisional acceptance protocol: date of appointment
  49. Provisional acceptance protocol: date of signing
  50. Provisional acceptance protocol: date of approval
  51. Provisional acceptance protocol: quantitative observations
  52. Provisional acceptance protocol: qualitative observations
  53. Mandatory maintenance period
  54. Final acceptance protocol: date of appointment
  55. Final acceptance protocol: date of signing
  56. Final acceptance protocol: date of approval
  57. Final acceptance protocol: quality observations

3.2.1. Cost KPI

In Greece, the cost overruns considered in this study reflect legally approved increases to contractor compensation. These are limited to either a 9% or 15% contingency included in the original contract or a supplementary contract of up to 50% of the original value, which requires a new award process. Contracts are typically awarded to the lowest bidder. If a contractor cannot complete the project due to an unsustainably low bid, the CA is required to terminate the contract, impose a financial penalty, and claim the 5% bank guarantee. Therefore, as shown in Figure 1, four (4) sub-indicators of the cost KPI (KPIc) were defined.
The first sub-indicator (C1) calculates the proportion of the contractual budget overrun. A positive value signifies an increase in the final cost of the works compared to the original contractual budget, which includes the contractual cost of all work items plus 18% for the contractor’s overhead and profit (CO&P). Cost overruns are widely recognized as a key metric for assessing project performance. In the cases analyzed, most overruns were attributed to change orders issued by the CA. Contractual budgets typically include either 9% or 15% of the total cost (including CO&P) to account for change orders that do not significantly alter the original scope of work (SoW). Such changes are generally necessitated by design errors, quantity take-off discrepancies, or other legally permitted contingencies related to unforeseen or miscalculated work item quantities.
Sub-indicator C2 measures the difference between the final contractual cost of the project to the CA and the original tender budget expressed as a proportion of the original tender budget. The original tender budget includes the standard value of all work items plus 18% for CO&P. Ideally, this index should yield a negative or zero value, signifying that the awarded bid was competitive and that any cost increases during construction were contained. A zero or negative value, therefore, indicates a financially successful project outcome for the CA.
The third sub-indicator (C3) assesses the cost of any supplementary contract (SC), expressed as a proportion of the original contract budget. Under Greek Public Works Law, supplementary contracts are permitted only for unforeseen works resulting from force majeure events that are essential for completing the original contractual scope. The total value of such supplementary contracts must not exceed 50% of the original contract budget.
Finally, the fourth sub-indicator (C4) evaluates the extent of changes to the scope of work (SoW) during project execution. This is rated on a qualitative scale from 1 to 5, as follows:
  • 1 = no changes to SoW;
  • 2 = additions to the SoW covered by a SC;
  • 3 = additions and removals made without a SC;
  • 4 = partial removal of the original SoW;
  • 5 = major removal from the original SoW.
To allow comparability with other sub-indicators that are all measured as a percentage, the value of C4 was normalized by dividing the rating by 5. This simple linear transformation expresses each score as a proportion of the maximum possible deviation from the original scope, resulting in normalized values ranging from 0.2 to 1. This approach was selected for its clarity and consistency, and it enables the integration of C4 into an aggregated performance index without introducing scale-related distortions. C4 captures the impact of scope modifications, which often significantly influence project costs, either through additions or reductions, depending on their scale and complexity.
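The four cost sub-indicators can be sketched as follows. The formulas are inferred from the prose descriptions of C1–C4 above (they are not reproduced from the paper's equations), and all numeric values and variable names are illustrative.

```python
# Hedged sketch of the four cost sub-indicators described in the text.
# Formulas are inferred from the prose; inputs and names are illustrative.

def cost_subindicators(original_budget: float,  # contractual budget incl. CO&P
                       tender_budget: float,    # original tender budget incl. CO&P
                       final_cost: float,       # final contractual cost incl. CO&P
                       sc_cost: float,          # supplementary contract cost (0 if none)
                       sow_rating: int          # SoW change rating on the 1..5 scale
                       ) -> dict:
    c1 = (final_cost - original_budget) / original_budget  # budget overrun ratio
    c2 = (final_cost - tender_budget) / tender_budget      # vs. tender budget (<= 0 is good)
    c3 = sc_cost / original_budget                         # SC share (<= 0.5 under Greek law)
    c4 = sow_rating / 5                                    # normalized to 0.2..1.0
    return {"C1": c1, "C2": c2, "C3": c3, "C4": c4}

vals = cost_subindicators(1_000_000, 1_200_000, 1_080_000, 0, 1)
# C1 = 0.08 (8% overrun), C2 = -0.10 (below tender budget), C3 = 0.0, C4 = 0.2
```

Note how a project can show a positive C1 (overrun against its contractual budget) while still having a negative C2, because a deep tender discount left the final cost below the original tender budget.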

3.2.2. Time KPI

Two (2) sub-indicators were defined for consideration as criteria for the calculation of the time KPI (KPIt).
The first sub-indicator (T1) calculates the percentage increase in the project duration compared to the original contract duration.
The second sub-indicator (T2) calculates the percentage of the final contractual cost attributable to price revisions for work items resulting from project delays. According to the Greek legislation, if a delay occurs and it is not solely attributable to the contractor, the extension of time is granted “with price revision”. In such cases, the revision coefficients applied are those valid at the end of the original construction schedule, and they are used to calculate additional payments to account for cost fluctuations in materials and labor due to inflation. Conversely, if the delay is deemed the exclusive fault of the contractor, the extension is granted “without price revision”, and no additional compensation is provided.
In the case of the projects included in this study, the extensions of time were predominantly approved with price revision, as the delays were not solely due to the contractor. Therefore, the resulting cost increases, which are paid as a separate item under the title “total price revision”, are attributable only to the extended project duration. T2 is calculated by dividing the total price revision by the original contractual cost of the works (including general and administrative expenses), thereby expressing the impact of time-related cost increases as a percentage of the baseline contract value.
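The two time sub-indicators can be sketched in the same style. Again, the formulas follow the prose definitions of T1 and T2, and the inputs are illustrative.

```python
# Hedged sketch of the two time sub-indicators (formulas inferred from the text;
# all input values are illustrative).

def time_subindicators(original_duration_days: int,  # initial contract duration
                       total_extension_days: int,    # sum of all time extensions
                       total_price_revision: float,  # "total price revision" payment
                       original_cost: float          # baseline contractual cost
                       ) -> dict:
    t1 = total_extension_days / original_duration_days  # duration increase ratio
    t2 = total_price_revision / original_cost           # time-related cost ratio
    return {"T1": t1, "T2": t2}

vals = time_subindicators(540, 270, 45_000.0, 900_000.0)
# T1 = 0.5 (duration grew by 50%), T2 = 0.05 (price revisions = 5% of baseline)
```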

3.2.3. Quality KPI

Two (2) sub-indicators were defined for the calculation of the quality KPI (KPIq). The first sub-indicator (Q1) measures the number of “special orders” issued during the execution of each project, recorded as whole numbers. These orders are formal notifications issued when construction defects are identified and not corrected by the contractor within the expected timeframe. Each special order outlines the nature and severity of the defect, the required corrective actions, and any possible financial penalties or reductions in payment. As such, the frequency of these orders serves as a key metric for assessing the quality and reliability of a contractor’s performance while also reflecting the level of oversight and quality assurance exercised by the CA.
The second sub-indicator (Q2) captures the presence of qualitative observations noted in the provisional and final acceptance protocols of each project. It is a binary indicator, taking the value of zero (0) when no such observations are recorded and one (1) when at least one observation is present. This metric helps assess the overall quality and compliance of the completed works, indicating whether any issues were identified during formal acceptance procedures.
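The quality sub-indicators are the simplest of the four groups: a count and a binary flag, as a short sketch shows (names are illustrative).

```python
# Hedged sketch of the quality sub-indicators: Q1 counts special orders issued
# during execution; Q2 is a binary flag for qualitative observations recorded
# in the provisional or final acceptance protocols.

def quality_subindicators(num_special_orders: int,
                          has_quality_observations: bool) -> dict:
    return {"Q1": num_special_orders,
            "Q2": 1 if has_quality_observations else 0}

vals = quality_subindicators(2, False)  # {"Q1": 2, "Q2": 0}
```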

3.2.4. Management KPI

The management KPI (KPIm) assesses key aspects of project execution and oversight, focusing on administrative complexity, contract modifications, and procedural delays. It is composed of eleven sub-indicators (M1–M11), each addressing a different facet of project management performance.
The first sub-indicator (M1) records the number of change orders issued during project execution. These orders reflect adjustments to quantities or the inclusion of unforeseen works and typically signal shortcomings in contract planning or initial project design. A high number of change orders often indicates disruptions to workflow and added administrative burdens, as their preparation and approval follow formal legal procedures that can delay project completion.
The second sub-indicator (M2) tracks the total number of time extensions granted for a project, i.e., how many extensions were required in total. Timely completion is crucial for the effective coordination and operation of public infrastructure. Multiple extensions, unless caused by exceptional circumstances, may suggest weaknesses in project scheduling, tender preparation, or administrative handling, whether from the contractor, the CA, or third parties. In any case, a high number of time extensions and repeated delay announcements by the CA can undermine public trust in the effectiveness and reliability of public administration.
The third sub-indicator (M3) measures the combined final cost of additional designs and cost-plus works, which refer to tasks that emerged during construction but could not be foreseen or quantified at the design stage. These costs arise when critical elements are omitted or inadequately defined in the original designs and tender documents, indicating an insufficient level of project maturity. High expenditure under this indicator implies a reliance on reactive rather than proactive project design and planning.
The fourth sub-indicator (M4) calculates the value of work with new unit prices after the discount as a percentage of the original contract cost. This quantifies the cost impact of unforeseen works and reflects how much the contract value was altered due to design gaps or evolving project needs. The need to define new work items during a project is again an indicator of insufficient project maturity at the tender stage.
The fifth sub-indicator (M5) assesses, on a scale from 1 to 5, the extent and nature of changes to the contractual scope. It is identical to cost sub-indicator C4 but is also used as a management sub-indicator, as it quantifies the significance of changes in the Scope of Work (SoW) and captures the overall deviation from the initially planned scope. A high value here also indicates poor project management.
The sixth sub-indicator (M6) identifies whether the project encountered issues related to expropriation procedures necessary for land acquisition, using a binary value (0 if no issues were encountered, 1 if at least one was). Expropriation delays have historically been a significant source of project disruption. These delays often stem from protracted legal and administrative processes that can only begin after project financing is secured. In cases where disputes over land valuation arise, court proceedings can further prolong the timeline, sometimes by several years.
The seventh sub-indicator (M7) records the delay in days for submitting the project as-built drawings and the final measurement documents. These submissions are required two months after the certified completion of the project and are necessary for verifying and closing out the contract. Delays may reflect inefficiencies in contractor documentation or administrative follow-up.
The eighth sub-indicator (M8) counts the number of objections or formal disputes raised during or after project execution. Such objections can arise from contractors or third parties and typically reflect procedural conflicts, contractual disagreements, or design issues, often affecting the smooth delivery of the project.
The ninth sub-indicator (M9) measures the delay in days for completing the provisional acceptance of the project, starting from the submission of the final measurement and as-built drawings. It assesses whether the CA fulfills its obligation to verify and formally accept the project within the legally prescribed timeframe.
The tenth sub-indicator (M10) calculates the delay in days for the final acceptance of the project based on the legally defined period following the end of the contractor’s maintenance obligation. This stage ensures that any issues arising during the guarantee period are addressed, and delays may signal administrative inaction or lack of capacity.
Finally, the eleventh sub-indicator (M11) counts the number of SCs issued. These contracts typically arise from significant, unforeseen circumstances during execution and often result in substantial increases in project cost and scope. Their frequency is indicative of how well the original tender documents anticipated actual project conditions.

3.3. Data Analysis

The TOPSIS method was originally proposed by Hwang and Yoon in 1981 [47] as an MCDM method for ranking alternatives based on their relative proximity to the ideal solution. The best option is the one with the smallest distance from the ideal solution and the largest distance from the anti-ideal solution. The ideal solution (A+) is defined by selecting the maximum weighted normalized score for each criterion, and the anti-ideal solution (A−) by selecting the lowest. The distances from these two solutions are then used to compute the closeness coefficient (C), which takes values from 0 to 1.
The TOPSIS method was selected for this study to calculate the KPIs for each success criterion (cost, time, quality, project management) for each project due to its practicality, simplicity, and strong alignment with decision-making logic. It requires minimal input data and produces clear, interpretable results with limited subjectivity—confined primarily to the weighting of criteria, which can be adapted as needed. Its mathematical foundation mirrors human reasoning by simultaneously considering both the ideal and anti-ideal choices, making it particularly suitable for ranking project alternatives [48]. Furthermore, TOPSIS can easily be implemented using standard spreadsheet software, which makes it practical for contracting authorities (CAs) that may lack personnel experienced in complex technical applications. Detailed mathematical formulations of the TOPSIS method are presented in the work of Ishizaka and Nemery [48], and its practical applications are further demonstrated by Antoniou et al. [50], Jozi et al. [51], and Srdjevic et al. [52]. In essence, the TOPSIS method involves the following five computational steps.
Step 1:
Normalization. This step is required when the selection criteria are evaluated in different units of measurement (e.g., monetary, time, rating scales). It can be achieved by various methods, such as distributive and ideal normalization [53] and the additive method [52]. The ideal normalization method, which divides each value by the ideal value, is not applicable when the ideal value is zero. Similarly, the additive normalization method, which divides each value by the sum of all other values, can result in a negative denominator, distorting the results by changing the sign of the weighted scores. Therefore, the distributive (or vector) normalization method was applied in this study: each value in the matrix is divided by the square root of the sum of squares of all values in its column.
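The vector normalization described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's implementation; the matrix values are loosely inspired by the cost-overrun percentages and delay days reported later in the paper and are purely illustrative.

```python
import numpy as np

def vector_normalize(matrix):
    """Distributive (vector) normalization: divide each entry by the
    square root of the sum of squares of its column."""
    matrix = np.asarray(matrix, dtype=float)
    return matrix / np.sqrt((matrix ** 2).sum(axis=0))

# Hypothetical decision matrix: rows = projects, columns = criteria
# (e.g., cost overrun % and delay in days; illustrative values only).
X = [[6.85, 0.0],
     [44.28, 460.0],
     [23.89, 2055.0]]
R = vector_normalize(X)
```

After normalization, every column of R has unit Euclidean length, so criteria measured in different units become directly comparable.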
Step 2:
Weighted Normalized Decision Matrix. In this step, the importance of each criterion is integrated by multiplying the normalized values by the criteria weights. Weights reflect the relative importance of each criterion and are set by the decision-maker intuitively or can be calculated using methods such as the AHP [53,54,55], Simos’ Method [53,56], goal programming [53], or Shannon’s entropy [49,51], among others.
Step 3:
Finding the ideal and anti-ideal alternative per criterion. The weighted normalized values are evaluated to determine the ideal and anti-ideal alternative for each criterion. When the objective is to maximize a criterion, the ideal value is the highest weighted normalized score in that column, while the anti-ideal value is the lowest. Conversely, if the goal is to minimize the criterion, the ideal value becomes the lowest score and the anti-ideal value the highest [49].
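Step 3 can be expressed compactly by distinguishing benefit (maximized) from cost (minimized) criteria. The sketch below is an illustrative implementation under that assumption; the matrix values are toy numbers, not project data.

```python
import numpy as np

def ideal_anti_ideal(V, benefit):
    """Per-criterion ideal (A+) and anti-ideal (A-) values from a
    weighted normalized matrix V; benefit[j] is True when criterion j
    is to be maximized, False when it is to be minimized."""
    V = np.asarray(V, dtype=float)
    benefit = np.asarray(benefit)
    a_plus = np.where(benefit, V.max(axis=0), V.min(axis=0))
    a_minus = np.where(benefit, V.min(axis=0), V.max(axis=0))
    return a_plus, a_minus

# Toy weighted normalized matrix: column 0 is a benefit criterion,
# column 1 (e.g., delay) is a cost criterion to be minimized.
V = np.array([[0.4, 0.1],
              [0.2, 0.5]])
a_plus, a_minus = ideal_anti_ideal(V, [True, False])
# a_plus -> [0.4, 0.1]; a_minus -> [0.2, 0.5]
```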
Step 4:
Calculating the distances from the ideal and the anti-ideal solutions. This step calculates how far each alternative is from the ideal (d+) and the anti-ideal (d−) solution using the Euclidean distance formula (Equations (1) and (2)).
$d_a^+ = \sqrt{\sum_i \left( v_i^+ - v_{ai} \right)^2}$ (1)
$d_a^- = \sqrt{\sum_i \left( v_i^- - v_{ai} \right)^2}$ (2)
where
  • $v_i^+$ = the ideal value for criterion i;
  • $v_i^-$ = the anti-ideal value for criterion i;
  • $v_{ai}$ = the value of alternative a against criterion i.
Step 5:
Calculating the closeness coefficient. The closeness coefficient Ca takes values between 0 and 1 (Equation (3)). An alternative close to the ideal solution has a Ca value near 1, while one closer to the anti-ideal solution has a Ca value approaching 0.
$C_a = \dfrac{d_a^-}{d_a^+ + d_a^-}$ (3)
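Steps 4 and 5 together reduce to a few array operations. The sketch below uses toy weighted normalized values (not the study's data) and assumes the ideal and anti-ideal vectors have already been determined as in Step 3.

```python
import numpy as np

def closeness_coefficients(V, a_plus, a_minus):
    """Steps 4-5: Euclidean distances of each alternative to A+ and A-
    (Equations (1)-(2)) and the closeness coefficient
    Ca = d- / (d+ + d-) (Equation (3))."""
    V = np.asarray(V, dtype=float)
    d_plus = np.sqrt(((V - a_plus) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - a_minus) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

V = np.array([[0.4, 0.1],   # coincides with A+ -> Ca = 1
              [0.2, 0.5],   # coincides with A- -> Ca = 0
              [0.3, 0.3]])  # equidistant from both -> Ca = 0.5
Ca = closeness_coefficients(V, a_plus=[0.4, 0.1], a_minus=[0.2, 0.5])
```

The boundary cases confirm the interpretation in Step 5: an alternative identical to the ideal solution scores 1, and one identical to the anti-ideal solution scores 0.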
The TOPSIS method was applied four times to calculate the KPIs for each of the defined success criteria (cost, time, quality, project management) for each project. Each KPI consisted of several sub-indicators (Figure 1), which served as selection criteria for applying the TOPSIS method. The resulting C corresponded to the values of the KPIs for cost (KPIc), time (KPIt), quality (KPIq), and project management (KPIm).
To combine the results and calculate the overall Success Index (SI), the Simple Additive Weighting (SAW) model was used. As noted by Tzeng and Huang [57] and Ishizaka and Nemery [48], the SAW model is a special case of the Multi-Attribute Utility Theory (MAUT), where all marginal utility functions are linear. Under this condition, the global utility of a given alternative (in this case, the SI of each project) is calculated as follows (Equation (4)).
$SI = \sum_{i=1}^{4} w_i \cdot KPI_i$ (4)
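The SAW aggregation is a single weighted sum per project, which can be sketched as a matrix-vector product. The KPI rows below are hypothetical, illustrative values only.

```python
import numpy as np

def success_index(kpis, weights):
    """SAW aggregation of the four KPI closeness coefficients into an
    overall Success Index per project (Equation (4))."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    return np.asarray(kpis, dtype=float) @ weights

# Hypothetical KPI vectors (cost, time, quality, management) for two projects
kpis = [[0.9, 1.0, 1.0, 0.9],
        [0.3, 0.4, 0.0, 0.5]]
si = success_index(kpis, [0.25, 0.25, 0.25, 0.25])
# si -> [0.95, 0.30]
```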
By applying the TOPSIS and SAW methods to the data from the 30 projects, an overall SI was calculated for each project, with values ranging from 0 to 1, where 1 represents the optimal outcome. Based on these SI values, the projects were ranked into performance categories as follows:
  • Excellent: SI ≥ 0.9;
  • Good: 0.9 > SI ≥ 0.7;
  • Medium: 0.7 > SI ≥ 0.5;
  • Poor: SI < 0.5.
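The four performance categories above map directly to threshold checks; a minimal sketch:

```python
def classify(si):
    """Map a Success Index (0 to 1) to the study's four performance
    categories using the stated thresholds."""
    if si >= 0.9:
        return "Excellent"
    if si >= 0.7:
        return "Good"
    if si >= 0.5:
        return "Medium"
    return "Poor"

labels = [classify(s) for s in (0.95, 0.75, 0.55, 0.42)]
# labels -> ['Excellent', 'Good', 'Medium', 'Poor']
```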
Categorizing the projects into four performance levels allowed for a more intuitive interpretation of their success. This classification facilitated the visual representation of project performance using bar charts, showing the distribution of projects across these categories. It also enabled straightforward comparisons of results under different weighting scenarios.

3.3.1. Application of the TOPSIS Method

To evaluate project performance across the four key success criteria, namely cost (KPIc), time (KPIt), quality (KPIq), and management (KPIm), the TOPSIS method was applied. To calculate each KPI, its sub-indicators were treated as selection criteria, all equally weighted, with the ideal value for each being the minimum observed value. The resulting values of sub-indicators C1–C4, T1–T2, and Q1–Q2 are presented in Table 3, while those for M1–M11 are presented in Table 4. For each KPI, the standard steps of the TOPSIS method were followed, i.e., normalization of sub-indicator values, application of equal weighting, calculation of the Euclidean distances from the ideal (d+) and anti-ideal (d−) solutions, and derivation of the closeness coefficient (C), which represents the final KPI value. The resulting closeness coefficients (KPIc, KPIt, KPIq, and KPIm) for each project are summarized in Table 5.

3.3.2. Application of Simple Additive Weighting (SAW) Method

To calculate the overall success index (SI) for each project, the SAW method, outlined in Section 3.1, was applied. The input criteria were the Cs for the four KPIs (KPIc, KPIt, KPIq, and KPIm), which had been previously calculated using the TOPSIS method. Initially, equal weights of 25% were assigned to each KPI.
$SI = w_c \cdot KPI_c + w_t \cdot KPI_t + w_q \cdot KPI_q + w_m \cdot KPI_m$ (5)
where
  • wc, wt, wq, wm are the respective weights of each KPI;
  • The sum of the weights equals 1.
Table 6 presents the KPI values, computed SI, and resulting project ranks under equal weighting. Subsequently, a sensitivity analysis was conducted using an alternative set of weights: wc = 30%; wt = 30%; wq = 15%; and wm = 25%. This weighting scheme reflects the common practice among many contracting authorities (CAs) to place greater emphasis on cost and time outcomes. While weighting preferences can vary across CAs, the selected values are based on the authors’ professional experience with EOSA, a major infrastructure agency. The purpose of this analysis is to assess the robustness of the project rankings under a plausible alternative prioritization framework. The recalculated rankings using these adjusted weights are presented in Table 7.
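The sensitivity analysis amounts to recomputing the SI under the alternative weight vector and re-ranking. The sketch below uses hypothetical KPI rows (not the values in Tables 6 and 7) to illustrate how a project near a category boundary can shift when cost and time are emphasized.

```python
import numpy as np

# Hypothetical KPI matrix (cost, time, quality, management) for 4 projects.
kpis = np.array([[0.92, 1.00, 1.00, 0.97],
                 [0.60, 0.70, 1.00, 0.80],
                 [0.18, 0.24, 1.00, 0.60],
                 [0.30, 0.40, 0.00, 0.52]])

w_equal = np.array([0.25, 0.25, 0.25, 0.25])
w_adjusted = np.array([0.30, 0.30, 0.15, 0.25])  # cost/time emphasized

si_equal = kpis @ w_equal
si_adjusted = kpis @ w_adjusted

# Rank 1 = best under each weighting scheme
rank_equal = si_equal.argsort()[::-1].argsort() + 1
rank_adjusted = si_adjusted.argsort()[::-1].argsort() + 1
```

In this toy example, the third project sits just above the Medium threshold (SI = 0.505) under equal weights but drops below 0.5 under the adjusted weights, mirroring the category shifts observed between Figures 2 and 3.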
After calculating the overall success index (SI) for each project using the SAW method, the results were categorized into four performance categories: Excellent (SI ≥ 0.9), Good (0.9 > SI ≥ 0.7), Medium (0.7 > SI ≥ 0.5), and Poor (SI < 0.5). Using equal weighting factors for all four KPIs (25% each), the majority of projects (23 out of 30) were classified as Good, reflecting a strong overall performance across the evaluated dimensions. Four projects (P25, P27, P28, and P29) achieved Excellent ratings, with SI values exceeding 0.95, indicating top-tier performance. Three projects fell into the Medium category, while only one project (P21) was classified as Poor, with an SI below 0.5 (Figure 2).
When the weighting factors were adjusted to better reflect the common priorities of CAs—namely, wc = 30%, wt = 30%, wq = 15%, and wm = 25%—the distribution shifted slightly. The number of Excellent projects remained the same (P25, P27, P28, and P29), but the number of Good projects decreased to 22. Correspondingly, Medium-performing projects increased to six, and the Poor category still included only P21 (Figure 3).
This classification approach provides a practical means of comparing overall project success while accounting for both uniform and context-specific priorities in project evaluation.

4. Discussion

4.1. Discussion of Results by Index

4.1.1. Cost Index

Using the TOPSIS method, Project P25 achieved the highest cost KPI of 0.917. This construction project for operating facilities had a short contract duration of six months and was completed without any time extension. Tendered through an open procedure, the initial tender budget was EUR 5 million, including CO&P. Despite a substantial discount during the tender, resulting in a contract value of approximately EUR 2 million, the final cost overrun was limited to 6.85%. Although 20 new work items were required, there was no change in the contractual scope, and their additional cost was covered by the contingency amount allocated in the original contract.
In contrast, Project P14 had the lowest cost performance, with a KPI of 0.181. This new motorway section construction project had a contract duration of 2.5 years but required two extensions of time, resulting in a total delay of 460 days. The final cost of works increased by 44.28%, a significant escalation approaching the original pre-tender budget. An SC accounting for 35.35% of the original contractual cost was required while the SoW was significantly modified.

4.1.2. Time Index

Projects P27, P28, and P29 recorded the highest time performance (KPIt = 1.0) under the TOPSIS method. These were all short-duration projects, ranging from four to ten months. P27 and P28 were both construction projects for operating facilities, and P29 was a motorway completion works contract. All three were completed in 2018 without the need for extensions of time. Since no revision rates were issued that year, the time index was unaffected by price revisions.
On the other hand, Project P2, also a motorway completion works contract, had the lowest time performance (KPIt = 0.242). Although initially assigned a four-month contract duration, the project faced delays due to interference from a third party and archeological findings, resulting in a 533-day delay. These delays also led to significantly increased contractual cost due to price adjustments for work items.

4.1.3. Quality Index

The highest quality performance index (KPIq = 1) was achieved by 21 projects (including P1–P4, P6–P7, P10–P13, P16–P18, P20, P22–P25, P27–P29). These projects recorded no special orders or quality observations in either the provisional or final acceptance protocol.
Conversely, Project P21 received KPIq = 0. It was a new motorway section construction project that faced six “special orders” related to acceleration, unfinished work, and repair of ongoing works. The contract was ultimately terminated before completion, while the provisional acceptance protocol recorded quality observations that were resolved by the time of final acceptance.

4.1.4. Management Index

Project P29 recorded the highest management performance (KPIm = 0.971). This motorway completion works project was completed on time, within scope, and without the need for new unit prices. Both the provisional and final acceptance protocols were concluded faster than the timeframes set by Greek public works law.
Project P21 again recorded the lowest management performance (KPIm = 0.516), confirming the broader challenges it faced across multiple KPIs.

4.2. Discussion of Overall Success Index Results [SI]

As explained in Section 3.3.2, Equation (5) was used to derive the overall SI from the KPI scores (KPIc, KPIt, KPIq, KPIm). The SI was initially calculated using equal weights (25% per KPI), followed by a sensitivity analysis using adjusted weights: wc = 30%; wt = 30%; wq = 15%; and wm = 25%. From the results obtained, the three best-performing projects (those with the highest overall SI) are presented below, followed by the three with the lowest.
The three best-performing projects under both weighting schemes were P28, P29, and P25, all rated as Excellent. Projects P25 and P28 are construction of operating facilities projects, while P29 was a motorway completion works contract. These projects were characterized by the following:
  • Simple technical SoW and short durations;
  • Tender budgets below EUR 5 million;
  • Minimal cost overruns, with no need for supplementary contracts;
  • Smooth project management and contractual execution;
  • Absence of quality or quantitative deviations at acceptance stages.
Their consistent performance across all metrics highlights their exemplary project delivery.
The lowest-ranking projects under both weighting schemes were P21 and P14, while the third-lowest (28th place) was P15 under equal weighting and P5 under the sensitivity analysis (wc = 30%; wt = 30%; wq = 15%; wm = 25%). All four were large-scale new motorway section construction projects with long durations and tender budgets exceeding EUR 50 million. Key issues included the following:
Project P21:
  • A 2.5-year duration extended by 1107 days (99% overrun);
  • Contract terminated after scope removal and multiple special orders;
  • Quality observations at provisional acceptance;
  • Five change orders issued.
Project P14:
  • A 460-day delay after two extensions of time;
  • A 44.28% increase in cost;
  • Supplementary contract = 35.35% of original contract;
  • Major scope removal (Category 5).
Project P15:
  • A 460-day delay after two extensions of time;
  • A 51.2% cost increase and significant scope addition (Category 2);
  • Four change orders and one special order;
  • Three contractor objections.
Project P5:
  • Originally 3 years and extended by 2055 days (187.5% overrun);
  • Archeological issues, expropriations, and utility conflicts;
  • A 23.89% cost increase and seven change orders;
  • Supplementary contract = 16.34% of contract value;
  • Despite no final acceptance issues, the prolonged duration and complexity weighed down overall performance.
The analysis confirms that small-scale projects for the construction of operating facilities and motorway completion works achieved consistently higher performance due to their simpler scope, shorter durations, and fewer stakeholders and coordination complexities.
In contrast, large-scale motorway projects exhibited more varied outcomes, often impacted by delays, scope changes, contractual disputes, and higher budgetary and managerial complexity. These results reinforce the established understanding that project scale and complexity are inversely proportional to performance consistency, particularly under constrained public sector conditions.

4.3. Implications for Theory and Practice

The findings of this study contribute to the broader theoretical understanding of infrastructure project success by supporting the long-standing theoretical view that project scale, complexity, and duration are inversely related to project success, particularly in the public sector. Large-scale public infrastructure projects have consistently proven to be susceptible to cost overruns and time delays, leading researchers both domestically and internationally to study the factors causing these unfavorable outcomes [49,58]. In addition, the results are consistent with complexity theory in project management, which emphasizes the challenges of coordination, stakeholder management, and risk in large, multi-dimensional projects [59,60]. By contrast, smaller-scale projects with clearly defined scopes demonstrated consistently high performance across all KPIs. This pattern aligns with bounded complexity theory [61] and the Iron Triangle framework [62], which recognize the trade-offs between cost, time, and quality, and the increased likelihood of success when project complexity is reduced.
From a practical standpoint, the evaluation model developed in this study provides a replicable and data-driven tool for CAs to assess the performance of a group of completed projects. The model’s structure, which includes cost, time, quality, and management performance indicators, supports both internal benchmarking and external reporting. Specifically, it can assist CAs in identifying performance patterns and root causes across project types or delivery methods, flagging high-risk projects early based on indicators such as scope changes or delays in provisional acceptance, setting more realistic cost and time contingencies during procurement planning, and structuring lessons-learned databases to guide future project design and contract strategy. In addition, the inclusion of a sensitivity analysis demonstrates how the model can be tailored to reflect changing institutional priorities (e.g., prioritizing cost efficiency during times of fiscal constraint or management quality in capacity-building initiatives). Overall, this approach supports a shift toward evidence-based project governance and performance management in public infrastructure, aligning with international recommendations for data-informed decision-making in public procurement [63,64].

5. Conclusions

This study presents a novel project success evaluation model that, for the first time, calculates KPIs based on actual project data from the perspective of a public infrastructure CA. Unlike prior research, which typically relied on indicators reflecting the priorities and performance metrics of contractors rather than public stakeholders, this work applies the TOPSIS and SAW methods to quantify the performance of 30 road infrastructure projects. Four core KPIs (cost, time, quality, and management) were constructed from 19 objective and measurable sub-indicators to generate an objective SI for each project.
The analysis revealed that smaller-scale projects with simple scopes and shorter durations performed significantly better across all dimensions. In contrast, large-scale motorway projects experienced more frequent delays, cost overruns, and scope changes, resulting in lower SI scores. These findings confirm the strong influence of project complexity and management burden on overall success and warrant further investigation into understanding the underlying causes, such as differences in complexity, administrative burden, risk exposure, and stakeholder coordination, providing valuable insights for improving the planning and management of large-scale infrastructure projects.
This study makes a valuable contribution since the proposed model can serve as a practical, adaptable tool for Greek public CAs seeking to benchmark and improve project performance. Although the model has been developed within the context of Greece’s public highway infrastructure, its overall structure is flexible and could be applied to other national or project settings. Such an application would require appropriate adjustments to account for differences in contractual regulatory frameworks, procurement systems, and performance evaluation practices. Exploring these adaptations represents a meaningful avenue for future research.
Nevertheless, the study has certain limitations, most notably the relatively small sample size and the absence of post-completion road-user satisfaction data. Future research should aim to broaden the dataset and explore how additional factors, such as procurement systems, supervision team size, and tender budgets, correlate with project success as evaluated within the proposed model. The role of the supervision team is particularly worth deeper investigation: larger teams may ensure stricter regulatory compliance and collective decision-making but face coordination challenges, while smaller teams, though more agile, may lack the breadth of expertise needed for complex situations. Another interesting avenue is to examine correlations between project success, as measured by the SI, and the project manager’s personality traits. Finally, a further promising direction would be to examine how increased investment in project preparation (such as design quality) or in workforce capabilities may influence construction effectiveness and, consequently, overall project success.
Overall, this research provides a robust, data-driven, and replicable framework that empowers public CAs to systematically evaluate, compare, and enhance the effectiveness of infrastructure project delivery. By anchoring project assessment in objective data and public sector priorities, this model lays the groundwork for smarter, more accountable infrastructure development in the future.

Author Contributions

Conceptualization, F.A.; formal analysis, E.T.; investigation, E.T.; methodology, F.A.; resources, F.A.; supervision, F.A.; validation, F.A. and E.T.; writing—original draft, E.T.; writing—review and editing, F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available on request with privacy restrictions. The data are not publicly available due to contractual and GDPR constraints.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

TOPSIS: Technique for Order Preference by Similarity to Ideal Solution
SI: Success Index
SAW: Simple Additive Weighting
KPI: Key Performance Indicator
CSF: Critical Success Factor
CA: Contracting Authority
EOSA: Egnatia Odos S.A.
MCDM: Multi-Criteria Decision-Making
PPP: Public–Private Partnership
RII: Relative Importance Index
AHP: Analytic Hierarchy Process
CO&P: Contractor’s Overhead and Profit
SoW: Scope of Work
SC: Supplementary Contract
KPIc: Cost KPI
KPIt: Time KPI
KPIq: Quality KPI
KPIm: Management KPI
MAUT: Multi-Attribute Utility Theory
C: Closeness Coefficient

References

  1. Nassar, N.K. An Integrated Framework for Evaluation of Performance of Construction Projects. In Proceedings of the PMI® Global Congress 2009, Orlando, FL, USA, 10 October 2009; Project Management Institute: Newtown Square, PA, USA, 2009; pp. 1–9. [Google Scholar]
  2. Ingle, P.V.; Mahesh, G. Construction Project Performance Areas for Indian Construction Projects. Int. J. Constr. Manag. 2022, 22, 1443–1454. [Google Scholar] [CrossRef]
  3. Cooke-Davies, T. The “Real” Success Factors on Projects. Int. J. Proj. Manag. 2002, 20, 185–190. [Google Scholar] [CrossRef]
  4. Bryde, D.J. Methods for Managing Different Perspectives of Project Success. Br. J. Manag. 2005, 16, 119–131. [Google Scholar] [CrossRef]
  5. Shenhar, A.J.; Dvir, D.; Levy, O.; Maltz, A.C. Project Success: A Multidimensional Strategic Concept. Long. Range Plann 2001, 34, 699–725. [Google Scholar] [CrossRef]
  6. Jugdev, K.; Müller, R. A Retrospective Look at Our Evolving Understanding of Project Success. Proj. Manag. J. 2005, 36, 19–31. [Google Scholar] [CrossRef]
  7. Silvius, A.J.G.; Schipper, R. Exploring the Relationship between Sustainability and Project Success-Conceptual Model and Expected Relationships. Int. J. Inf. Syst. Proj. Manag. 2016, 4, 5–22. [Google Scholar] [CrossRef]
  8. Chan, A.P.C.; Chan, A.P.L. Key Performance Indicators for Measuring Construction Success. Benchmarking Int. J. 2004, 11, 203–221. [Google Scholar] [CrossRef]
  9. Bahadori Zare, M.; Mirjalili, A.; Mirabi, M. Ranking and Evaluating the Factors Affecting the Success of Management Team in Construction Projects. J. Fundam. Appl. Sci. 2016, 8, 614–630. [Google Scholar] [CrossRef]
  10. Youneszadeh, H.; Ardeshir, A.; Sebt, M.H. Predicting Project Success in Residential Building Projects (RBPs) Using Artificial Neural Networks (ANNs). Civ. Eng. J. 2020, 6, 2203–2219. [Google Scholar] [CrossRef]
  11. Unegbu, H.C.O.; Yawas, D.S.; Dan-asabe, B. An Investigation of the Relationship between Project Performance Measures and Project Management Practices of Construction Projects for the Construction Industry in Nigeria. J. King Saud. Univ.-Eng. Sci. 2022, 34, 240–249. [Google Scholar] [CrossRef]
  12. Tsiga, Z.; Emes, M.; Smith, A. Critical Success Factors for the Construction Industry. PM World J. 2016, 5, 1–12. [Google Scholar]
  13. Bakr, G.A. Ranking the Factors That Influence the Construction Project Success: The Jordanian Perspective. Int. J. Eng. Technol. 2018, 7, 97–102. [Google Scholar] [CrossRef]
  14. Zidane, Y.J.T.; Johansen, A.; Ekambaram, A. Project Evaluation Holistic Framework–Application on Megaproject Case. Procedia Comput. Sci. 2015, 64, 409–416. [Google Scholar] [CrossRef]
  15. Yang, J.; Shen, G.Q.; Ho, M.; Drew, D.S.; Chan, A.P.C. Exploring Critical Success Factors for Stakeholder Management in Construction Projects. J. Civ. Eng. Manag. 2009, 15, 337–348. [Google Scholar] [CrossRef]
  16. Gunduz, M.; Almuajebh, M. Critical Success Factors for Sustainable Construction Project Management. Sustainability 2020, 12, 1990. [Google Scholar] [CrossRef]
  17. Abdul-Kareem, H.I. Factors That Influence the Megaprojects in Iraq. IOP Conf. Ser. Mater. Sci. Eng. 2020, 870, 012067. [Google Scholar] [CrossRef]
  18. Mathar, H.; Assaf, S.; Hassanain, M.A.; Abdallah, A.; Sayed, A.M.Z. Critical Success Factors for Large Building Construction Projects: Perception of Consultants and Contractors. Built Environ. Proj. Asset Manag. 2020, 10, 349–367. [Google Scholar] [CrossRef]
  19. Gunduz, M.; Yahya, A.M.A. Analysis of Project Success Factors in Construction Industry. Technol. Econ. Dev. Econ. 2015, 24, 67–80. [Google Scholar] [CrossRef]
  20. Alzahrani, J.I.; Emsley, M.W. The Impact of Contractors’ Attributes on Construction Project Success: A Post Construction Evaluation. Int. J. Proj. Manag. 2013, 31, 313–322. [Google Scholar] [CrossRef]
  21. Ullah, F.; Thaheem, M.J.; Siddiqui, S.Q.; Khurshid, M.B. Influence of Six Sigma on Project Success in Construction Industry of Pakistan. TQM J. 2017, 29, 276–309. [Google Scholar] [CrossRef]
  22. Asgari, M.; Kheyroddin, A.; Naderpour, H. Evaluation of Project Critical Success Factors for Key Construction Players and Objectives. Int. J. Eng. 2018, 31, 228–240. [Google Scholar] [CrossRef]
  23. Gudiene, N.; Banaitis, A.; Banaitiene, N. Evaluation of Critical Success Factors for Construction Projects–an Empirical Study in Lithuania. Int. J. Strateg. Prop. Manag. 2013, 17, 21–31. [Google Scholar] [CrossRef]
  24. Tabish, S.Z.S.; Jha, K.N. Identification and Evaluation of Success Factors for Public Construction Projects. Constr. Manag. Econ. 2011, 29, 809–823. [Google Scholar] [CrossRef]
  25. Naji, K.; Gunduz, M.; Salat, F. Assessment of Preconstruction Factors in Sustainable Project Management Performance. Eng. Constr. Archit. Manag. 2021, 28, 3060–3077. [Google Scholar] [CrossRef]
  26. Mohammed, A.J. Evaluating the Management of Critical Success Factors of Residential Complex’s Projects and Their Impact on Cost, Time, and Quality in Erbil Governorate. Open Civ. Eng. J. 2022, 16, 1–13. [Google Scholar] [CrossRef]
  27. Sohu, S.; Jhatial, A.A.; Ullah, K.; Lakhiar, M.T.; Shahzaib, J. Determining the Critical Success Factors for Highway Construction Projects in Pakistan. Eng. Technol. Appl. Sci. Res. 2018, 8, 2685–2688. [Google Scholar] [CrossRef]
  28. Shrestha, P.P.; Shrestha, S.; Basnet, P. Budget and Schedule-Related Critical Success Factors for Design-Build Water and Wastewater Projects: Principal Component Analysis. Buildings 2025, 15, 1653. [Google Scholar] [CrossRef]
  29. Dong, R.R.; Muhammad, A.; Nauman, U. The Influence of Weather Conditions on Time, Cost, and Quality in Successful Construction Project Delivery. Buildings 2025, 15, 474. [Google Scholar] [CrossRef]
  30. Rasebotsa, A.R.; Agumba, J.N.; Adebowale, O.J.; Edwards, D.J.; Posillico, J. A Critical Success Factors Framework for the Improved Delivery of Social Infrastructure Projects in South Africa. Buildings 2024, 15, 92. [Google Scholar] [CrossRef]
  31. Fahri, J.; Biesenthal, C.; Pollack, J.; Sankaran, S. Understanding Megaproject Success beyond the Project Close-Out Stage. Constr. Econ. Build. 2015, 15, 48–58. [Google Scholar] [CrossRef]
  32. Esmaeili, B.; Pellicer, E.; Molenaar, K.R. Critical Success Factors For Construction Projects. In Project Management and Engineering Research, 2014; Ayuso Muñoz, J., Yagüe Blanco, J., Capuz-Rizo, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–14. [Google Scholar] [CrossRef]
  33. Cleary, J.P.; Lamanna, A.J. Correlation of Construction Performance Indicators and Project Success in a Portfolio of Building Projects. Buildings 2022, 12, 957. [Google Scholar] [CrossRef]
  34. Baporikar, N. Critical Success Factors for Timely Delivery of Road Construction Projects. Int. J. Appl. Logist. 2022, 12, 24. [Google Scholar] [CrossRef]
  35. Nuako, F.; Ghansah, F.A.; Adusei, T. Critical Success Factors for Cost Overrun Minimization in Public Construction Projects in Developing Countries: The Case of Ghana. Constr. Innov. 2024. early access. [Google Scholar] [CrossRef]
  36. Wibowo, A.; Santoso, S.R. Cost Overruns Arising From Government-Led Risks in Indonesian Toll Roads. Public Work. Manag. Policy 2024, 29, 446–470. [Google Scholar] [CrossRef]
  37. Damoah, I.S.; Ayakwah, A.; Twum, P. Assessing Public Sector Road Construction Projects’ Critical Success Factors in a Developing Economy: Definitive Stakeholders’ Perspective. J. Proj. Manag. 2022, 7, 23–34. [Google Scholar] [CrossRef]
  38. Kandawinna, N.; Mallawaarachchi, H.; Vijerathne, D. Successful Delivery of Public-Private Partnership (PPP) in the Construction Projects of Sri Lankan Higher Education Sector. In Proceedings of the 10th World Construction Symposium 2022, Colombo, Sri Lanka, 24–26 June 2022; pp. 782–793. [Google Scholar]
  39. Chileshe, N.; Njau, C.W.; Kibichii, B.K.; Macharia, L.N.; Kavishe, N. Critical Success Factors for Public-Private Partnership (PPP) Infrastructure and Housing Projects in Kenya. Int. J. Constr. Manag. 2022, 22, 1606–1617. [Google Scholar] [CrossRef]
  40. Mwelu, N.; Davis, P.R.; Ke, Y.; Watundu, S.; Jefferies, M. Success Factors for Implementing Uganda’s Public Road Construction Projects. Int. J. Constr. Manag. 2021, 21, 598–614. [Google Scholar] [CrossRef]
  41. Babatunde, S.O.; Perera, S. Barriers to Bond Financing for Public-Private Partnership Infrastructure Projects in Emerging Markets. J. Financ. Manag. Prop. Constr. 2017, 22, 2–19. [Google Scholar] [CrossRef]
  42. Kalogeraki, M.; Antoniou, F. Improving Risk Assessment for Transporting Dangerous Goods through European Road Tunnels: A Delphi Study. Systems 2021, 9, 80. [Google Scholar] [CrossRef]
  43. Langston, C. Development of Generic Key Performance Indicators for PMBOK® Using a 3D Project Integration Model. Constr. Econ. Build. 2013, 13, 78–91. [Google Scholar] [CrossRef]
  44. Heravi, G.; Ilbeigi, M. Development of a Comprehensive Model for Construction Project Success Evaluation by Contractors. Eng. Constr. Archit. Manag. 2012, 19, 526–542. [Google Scholar] [CrossRef]
  45. Zavadskas, E.K.; Vilutienė, T.; Turskis, Z.; Šaparauskas, J. Multi-Criteria Analysis of Projects’ Performance in Construction. Arch. Civ. Mech. Eng. 2014, 14, 114–121. [Google Scholar] [CrossRef]
  46. Papanikolaou, M.; Xenidis, Y. Risk-Informed Performance Assessment of Construction Projects. Sustainability 2020, 12, 5321. [Google Scholar] [CrossRef]
  47. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications, A State-of-the-Art Survey; Springer: New York, NY, USA, 1981. [Google Scholar]
  48. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis Methods and Software; John Wiley and Sons: Chichester, UK, 2013. [Google Scholar]
  49. Antoniou, F. Delay Risk Assessment Models for Road Projects. Systems 2021, 9, 70. [Google Scholar] [CrossRef]
  50. Antoniou, F.; Marinelli, M.; Petroutsatou, K. Exploring the Mechanisms for Value-for-Money Diffusion in the Design and Procurement of EU Public Infrastructure Projects. In Financial Evaluation and Risk Management of Infrastructure Projects; IGI Global: Hershey, PA, USA, 2023; pp. 1–31. ISBN 9781668477878. [Google Scholar]
  51. Jozi, S.A.; Shafiee, M.; MoradiMajd, N.; Saffarian, S. An Integrated Shannon’s Entropy–TOPSIS Methodology for Environmental Risk Assessment of Helleh Protected Area in Iran. Env. Monit. Assess. 2012, 184, 6913–6922. [Google Scholar] [CrossRef]
  52. Srdjevic, B.; Medeiros, Y.D.P.; Faria, A.S. An Objective Multi-Criteria Evaluation of Water Management Scenarios. Water Resour. Manag. 2004, 18, 35–54. [Google Scholar] [CrossRef]
  53. Antoniou, F.; Aretoulis, G. A Multi-Criteria Decision-Making Support System for Choice of Method of Compensation for Highway Construction Contractors in Greece. Int. J. Constr. Manag. 2019, 19, 492–508. [Google Scholar] [CrossRef]
  54. Wakchaure, S.S.; Jha, K.N. Determination of Bridge Health Index Using Analytical Hierarchy Process. Constr. Manag. Econ. 2012, 30, 133–149. [Google Scholar] [CrossRef]
  55. Antoniou, F.; Aretoulis, G.N. Comparative Analysis of Multi-Criteria Decision Making Methods in Choosing Contract Type for Highway Construction in Greece. Int. J. Manag. Decis. Mak. 2018, 17, 1–28. [Google Scholar] [CrossRef]
  56. Marzouk, M.M.; Gaid, E.F. Assessing Egyptian Construction Projects Performance Using Principal Component Analysis. Int. J. Product. Perform. Manag. 2018, 67, 1727–1744. [Google Scholar] [CrossRef]
  57. Tzeng, G.; Huang, J. Multiple Attribute Decision Making: Methods and Applications; Taylor and Francis Group: Boca Raton, FL, USA, 2011. [Google Scholar]
  58. Abderisak, A.; Josephson, P.E.B.; Lindahl, G. Aggregation of Factors Causing Cost Overruns and Time Delays in Large Public Construction Projects: Trends and Implications. Eng. Constr. Archit. Manag. 2017, 24, 393–406. [Google Scholar]
  59. Williams, T.M. The Need for New Paradigms for Complex Projects. Int. J. Proj. Manag. 1999, 17, 269–273. [Google Scholar] [CrossRef]
  60. Geraldi, J.; Maylor, H.; Williams, T. Now, Let’s Make It Really Complex (Complicated): A Systematic Review of the Complexities of Projects. Int. J. Oper. Prod. Manag. 2011, 31, 966–990. [Google Scholar] [CrossRef]
  61. Baccarini, D. The Concept of Project Complexity—A Review. Int. J. Proj. Manag. 1996, 14, 201–204. [Google Scholar] [CrossRef]
  62. Atkinson, R. Project Management: Cost, Time and Quality, Two Best Guesses and a Phenomenon, Its Time to Accept Other Success Criteria. Int. J. Proj. Manag. 1999, 17, 337–342. [Google Scholar] [CrossRef]
  63. World Bank. Benchmarking Public Procurement 2016: Assessing Public Procurement Regulatory Systems in 77 Economies; World Bank: Washington, DC, USA, 2016. [Google Scholar]
  64. OECD. Reforming Public Procurement: Progress in Implementing the 2015 OECD Recommendation; OECD: Paris, France, 2019; ISBN 9789264891609. [Google Scholar]
Figure 1. Performance indicators of the proposed project success evaluation model/mechanism.
Figure 2. Project performance with equal KPI weights.
Figure 3. Project performance with varying KPI weights.
Table 1. Content analysis of studies calculating KPIs.
Chan and Chan (2004) [8]. Type: Building. Objectives: Proposes a set of both objective and subjective KPIs for both the client and the contractor; does not provide an overall success index. Data: 3 projects and questionnaires. SI calculation method: none. KPIs: Objective (provides formulae for calculation of numerical values): 1. Construction time; 2. Construction speed; 3. Time variation; 4. Unit cost; 5. Net variation over final cost (%); 6. Net present value; 7. Accident rate; 8. Environmental impact assessment scores. Subjective (7-point scale): 9. Quality; 10. Functionality; 11. End-user’s satisfaction; 12. Client’s satisfaction; 13. Design team’s satisfaction; 14. Construction team’s satisfaction.

Nassar (2009) [1]. Type: Any. Objectives: Proposes a set of both objective and subjective KPIs and a method for calculating an overall SI, from the contractor’s perspective. Data: Example. SI calculation method: AHP for weight calculation/SAW. KPIs: Objective (provides formulae for calculation of numerical values): 1. Cost; 2. Schedule; 3. Billing; 4. Profitability; 5. Safety; 6. Quality. Subjective (10-point scale): 7. Team satisfaction; 8. Client satisfaction.

Heravi and Ilbeigi (2012) [44]. Type: Power transmission line. Objectives: Proposes both objective and subjective KPIs for project success, objective KPIs for project management success, and a method for calculating an overall SI, from the contractor’s perspective. Data: 1 case study. SI calculation method: SAW. KPIs: Project success, objective (provides formulae for calculation of numerical values): 1. Profit; 2. Quality; 3. Investment. Subjective: 4. Client satisfaction (10-point scale); 5. Contractor profit satisfaction (5-point scale). Management success, objective: 1. Cost; 2. Billing; 3. Scheduling; 4. Safety; 5. Quality; 6. Environmental.

Langston (2013) [43]. Type: Buildings. Objectives: Proposes a set of objective KPIs and a method for calculating an overall SI, for both client and contractor. Data: Example. SI calculation method: SI = scope³/(cost × time × risk). KPIs: 1. Value (scope/cost); 2. Efficiency (cost/time); 3. Speed (cost/time); 4. Innovation (risk/cost); 5. Complexity (risk/time); 6. Impact (risk/cost).

Zavadskas et al. (2014) [45]. Type: Not specified. Objectives: Proposes a set of objective KPIs and a method for calculating an overall SI, from the contractor’s perspective. Data: 6 projects. SI calculation method: Logarithmic normalization of values/AHP for weight calculation/RII. KPIs: Objective: 1. Profit/income; 2. Cost/income; 3. Income per team member; 4. Number of accidents; 5. Project delay (months); 6. Process documentation indicator; 7. Project risk management indicator; 8. Project cost management indicator; 9. Project team performance indicator; 10. Project budget compliance indicator.

Papanikolaou and Xenidis (2020) [46]. Type: Any. Objectives: Proposes a set of both objective and subjective KPIs and a method for calculating a risk-informed overall SI, from the contractor’s perspective. Data: Example. SI calculation method: SAW with the incorporation of risk factors for each KPI. KPIs: Objective (provides formulae for calculation of numerical values): 1. Cost; 2. Schedule; 3. Billing; 4. Safety; 5. Profitability; 6. Quality. Subjective (10-point scale): 7. Team satisfaction; 8. Client satisfaction.
Table 3. Values of KPIc, KPIt, and KPIq sub-indicators per project.
| Project | C1 | C2 | C3 | C4 | T1 | T2 | Q1 | Q2 |
|---|---|---|---|---|---|---|---|---|
| P1 | 0.14 | 0.06 | 0.00 | 0.20 | 3.12 | 0.02 | 0.00 | 0.00 |
| P2 | 0.08 | −0.06 | 0.00 | 0.20 | 4.16 | 0.13 | 0.00 | 0.00 |
| P3 | −0.03 | −0.11 | 0.00 | 0.40 | 3.63 | 0.01 | 0.00 | 0.00 |
| P4 | 0.17 | 0.17 | 0.00 | 0.20 | 0.99 | 0.15 | 0.00 | 0.00 |
| P5 | 0.24 | −0.04 | 0.16 | 0.40 | 1.88 | 0.23 | 1.00 | 0.00 |
| P6 | 0.11 | 0.11 | 0.03 | 0.80 | 0.11 | 0.12 | 0.00 | 0.00 |
| P7 | −0.01 | −0.07 | 0.00 | 0.40 | 0.57 | 0.19 | 0.00 | 0.00 |
| P8 | 0.08 | −0.33 | 0.00 | 0.20 | 0.62 | 0.12 | 1.00 | 0.00 |
| P9 | −0.02 | −0.24 | 0.00 | 0.20 | 0.46 | 0.09 | 1.00 | 0.00 |
| P10 | 0.15 | 0.07 | 0.00 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 |
| P11 | 0.25 | −0.17 | 0.16 | 0.40 | 0.41 | 0.09 | 0.00 | 0.00 |
| P12 | 0.14 | 0.06 | 0.00 | 0.20 | 0.00 | 0.07 | 0.00 | 0.00 |
| P13 | 0.07 | 0.00 | 0.00 | 0.20 | 0.00 | 0.13 | 0.00 | 0.00 |
| P14 | 0.44 | −0.03 | 0.35 | 1.00 | 0.50 | 0.12 | 0.00 | 1.00 |
| P15 | 0.51 | 0.00 | 0.42 | 0.40 | 0.50 | 0.10 | 1.00 | 1.00 |
| P16 | 0.14 | 0.06 | 0.00 | 0.40 | 0.55 | 0.15 | 0.00 | 0.00 |
| P17 | 0.11 | 0.03 | 0.00 | 0.40 | 0.30 | 0.02 | 0.00 | 0.00 |
| P18 | 0.15 | 0.09 | 0.00 | 0.20 | 0.00 | 0.09 | 0.00 | 0.00 |
| P19 | 0.34 | −0.03 | 0.25 | 0.40 | 0.88 | 0.12 | 1.00 | 0.00 |
| P20 | 0.15 | 0.07 | 0.00 | 0.20 | 0.92 | 0.01 | 0.00 | 0.00 |
| P21 | 0.06 | −0.27 | 0.00 | 1.00 | 1.10 | 0.15 | 6.00 | 1.00 |
| P22 | 0.15 | 0.07 | 0.00 | 0.20 | 1.01 | 0.00 | 0.00 | 0.00 |
| P23 | 0.04 | −0.23 | 0.00 | 0.20 | 4.41 | 0.00 | 0.00 | 0.00 |
| P24 | 0.15 | 0.07 | 0.00 | 0.20 | 1.01 | 0.00 | 0.00 | 0.00 |
| P25 | 0.07 | −0.59 | 0.00 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 |
| P26 | 0.09 | −0.27 | 0.03 | 0.40 | 0.62 | 0.09 | 2.00 | 0.00 |
| P27 | 0.08 | −0.53 | 0.00 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 |
| P28 | 0.11 | −0.51 | 0.00 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 |
| P29 | 0.15 | −0.58 | 0.00 | 0.20 | 0.00 | 0.00 | 0.00 | 0.00 |
| P30 | 0.15 | −0.24 | 0.06 | 0.40 | 2.11 | 0.04 | 1.00 | 0.00 |
Table 4. Values of KPIm sub-indicators per project.
ProjectM1M2M3M4M5M6M7M8M9M10M11
P123100,473.290.111137502525150
P2120.000.17101.49103291.0860
P32433,149.340.0620632050−520
P42184,469.910.0710170330−140
P5791,780,732.590.1321−2084561221
P651282,817.780.0240591216−431
P763707,512.300.0321−109071160
P852199,951.770.0711−1002479−410
P952234,627.600.18116391−24−140
P10200.000.0210102.0431.3100
P1132107,621.380.0420109582251
P12100.000.0110−24070380
P131047,979.400.001000722−100
P1452100,000.000.135112592−281
P1542125,351.810.122123591−471
P161117,574.000.0021−30176650
P171120,000.000.0020−40626−350
P181010,000.000.021000106−20
P1954109,972.940.2120−91179111
P202170,000.000.1210−12086−220
P2155350,000.000.015135822−1181.6390
P22320.000.0110−80−59−270
P2346120,117.210.0310−3506−290
P24310.000.00100094910
P2530147,810.800.1010−40−54360
P2643344,391.770.0820−3144291
P272023,965.970.061000−634010
P281039,349.940.021010−135−140
P291049,243.870.001000−75−430
P3095609.556.150.0320−14123−162
Table 5. Calculation of closeness coefficients (C).
| Project | KPIc d+ | KPIc d− | KPIc C | KPIt d+ | KPIt d− | KPIt C | KPIq d+ | KPIq d− | KPIq C | KPIm d+ | KPIm d− | KPIm C |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 0.1289 | 0.2087 | 0.6183 | 0.1802 | 0.1996 | 0.5255 | 0.0000 | 0.5282 | 1.0000 | 0.0531 | 0.1620 | 0.7532 |
| P2 | 0.1018 | 0.2199 | 0.6835 | 0.2682 | 0.0857 | 0.2422 | 0.0000 | 0.5282 | 1.0000 | 0.0972 | 0.1534 | 0.6122 |
| P3 | 0.0920 | 0.2277 | 0.7123 | 0.2085 | 0.2061 | 0.4971 | 0.0000 | 0.5282 | 1.0000 | 0.0458 | 0.1697 | 0.7876 |
| P4 | 0.1518 | 0.2041 | 0.5735 | 0.1522 | 0.2069 | 0.5761 | 0.0000 | 0.5282 | 1.0000 | 0.0219 | 0.1803 | 0.8915 |
| P5 | 0.1388 | 0.1444 | 0.5099 | 0.2329 | 0.1456 | 0.3847 | 0.0737 | 0.4682 | 0.8640 | 0.1126 | 0.1246 | 0.5252 |
| P6 | 0.1525 | 0.1813 | 0.5431 | 0.1137 | 0.2637 | 0.6987 | 0.0000 | 0.5282 | 1.0000 | 0.0453 | 0.1662 | 0.7859 |
| P7 | 0.1006 | 0.2229 | 0.6890 | 0.1777 | 0.2229 | 0.5564 | 0.0000 | 0.5282 | 1.0000 | 0.0612 | 0.1579 | 0.7206 |
| P8 | 0.0555 | 0.2346 | 0.8086 | 0.1189 | 0.2367 | 0.6656 | 0.0737 | 0.4682 | 0.8640 | 0.0450 | 0.1708 | 0.7914 |
| P9 | 0.0656 | 0.2414 | 0.7863 | 0.0854 | 0.2591 | 0.7522 | 0.0737 | 0.4682 | 0.8640 | 0.0639 | 0.1611 | 0.7160 |
| P10 | 0.1305 | 0.2080 | 0.6145 | 0.0007 | 0.3263 | 0.9977 | 0.0000 | 0.5282 | 1.0000 | 0.0833 | 0.1677 | 0.6680 |
| P11 | 0.1233 | 0.1525 | 0.5531 | 0.0894 | 0.2592 | 0.7436 | 0.0000 | 0.5282 | 1.0000 | 0.0483 | 0.1636 | 0.7719 |
| P12 | 0.1287 | 0.2088 | 0.6188 | 0.0607 | 0.2922 | 0.8279 | 0.0000 | 0.5282 | 1.0000 | 0.0259 | 0.1840 | 0.8767 |
| P13 | 0.1124 | 0.2186 | 0.6605 | 0.1179 | 0.2682 | 0.6946 | 0.0000 | 0.5282 | 1.0000 | 0.0267 | 0.1829 | 0.8728 |
| P14 | 0.2262 | 0.0500 | 0.1810 | 0.1103 | 0.2456 | 0.6901 | 0.2887 | 0.4423 | 0.6051 | 0.0667 | 0.1575 | 0.7026 |
| P15 | 0.2393 | 0.0748 | 0.2382 | 0.0928 | 0.2536 | 0.7320 | 0.2979 | 0.3686 | 0.5530 | 0.0568 | 0.1583 | 0.7359 |
| P16 | 0.1304 | 0.2005 | 0.6061 | 0.1389 | 0.2326 | 0.6261 | 0.0000 | 0.5282 | 1.0000 | 0.0339 | 0.1816 | 0.8426 |
| P17 | 0.1234 | 0.2045 | 0.6236 | 0.0242 | 0.3026 | 0.9259 | 0.0000 | 0.5282 | 1.0000 | 0.0257 | 0.1817 | 0.8763 |
| P18 | 0.1353 | 0.2073 | 0.6050 | 0.0849 | 0.2809 | 0.7679 | 0.0000 | 0.5282 | 1.0000 | 0.0100 | 0.1877 | 0.9496 |
| P19 | 0.1687 | 0.1123 | 0.3997 | 0.1249 | 0.2225 | 0.6405 | 0.0737 | 0.4682 | 0.8640 | 0.0581 | 0.1645 | 0.7388 |
| P20 | 0.1312 | 0.2076 | 0.6127 | 0.0536 | 0.2818 | 0.8401 | 0.0000 | 0.5282 | 1.0000 | 0.0265 | 0.1823 | 0.8730 |
| P21 | 0.1105 | 0.2143 | 0.6597 | 0.1513 | 0.2021 | 0.5718 | 0.5282 | 0.0000 | 0.0000 | 0.1206 | 0.1286 | 0.5160 |
| P22 | 0.1303 | 0.2081 | 0.6149 | 0.0577 | 0.2844 | 0.8314 | 0.0000 | 0.5282 | 1.0000 | 0.0154 | 0.1861 | 0.9234 |
| P23 | 0.0691 | 0.2326 | 0.7709 | 0.2532 | 0.2066 | 0.4493 | 0.0000 | 0.5282 | 1.0000 | 0.0374 | 0.1795 | 0.8277 |
| P24 | 0.1312 | 0.2076 | 0.6127 | 0.0578 | 0.2843 | 0.8310 | 0.0000 | 0.5282 | 1.0000 | 0.0145 | 0.1848 | 0.9274 |
| P25 | 0.0234 | 0.2595 | 0.9174 | 0.0000 | 0.3267 | 1.0000 | 0.0000 | 0.5282 | 1.0000 | 0.0236 | 0.1828 | 0.8857 |
| P26 | 0.0710 | 0.2123 | 0.7494 | 0.0864 | 0.2524 | 0.7450 | 0.1474 | 0.4127 | 0.7368 | 0.0422 | 0.1651 | 0.7964 |
| P27 | 0.0284 | 0.2520 | 0.8987 | 0.0000 | 0.3267 | 1.0000 | 0.0000 | 0.5282 | 1.0000 | 0.0211 | 0.1830 | 0.8966 |
| P28 | 0.0368 | 0.2470 | 0.8704 | 0.0000 | 0.3267 | 1.0000 | 0.0000 | 0.5282 | 1.0000 | 0.0071 | 0.1897 | 0.9640 |
| P29 | 0.0437 | 0.2500 | 0.8513 | 0.0000 | 0.3267 | 1.0000 | 0.0000 | 0.5282 | 1.0000 | 0.0057 | 0.1901 | 0.9708 |
| P30 | 0.0841 | 0.1973 | 0.7011 | 0.1254 | 0.2183 | 0.6353 | 0.0737 | 0.4682 | 0.8640 | 0.0782 | 0.1511 | 0.6589 |
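Each closeness coefficient in Table 5 follows the standard TOPSIS definition C = d− / (d+ + d−), where d+ and d− are the project's distances from the ideal and anti-ideal solutions. A minimal sketch (function name is illustrative, not from the paper), reproducing P1's cost-criterion coefficient from its tabulated distances:

```python
def closeness_coefficient(d_plus: float, d_minus: float) -> float:
    """TOPSIS relative closeness to the ideal solution: C = d- / (d+ + d-)."""
    return d_minus / (d_plus + d_minus)

# Distances of P1 under the cost criterion (KPIc), taken from Table 5.
c_p1 = closeness_coefficient(d_plus=0.1289, d_minus=0.2087)
print(round(c_p1, 3))  # 0.618 (Table 5 reports 0.6183, computed before the distances were rounded)
```

Note that a project sitting exactly at the anti-ideal point (d− = 0, as for P21 under KPIq) gets C = 0, while one at the ideal point gets C = 1.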
Table 6. Calculation of the overall success index with equal weights.
| Project | KPIc | KPIt | KPIq | KPIm | SI | Rank |
|---|---|---|---|---|---|---|
| wj | 0.250 | 0.250 | 0.250 | 0.250 | | |
| P1 | 0.618 | 0.525 | 1.000 | 0.753 | 0.724 | 23 |
| P2 | 0.684 | 0.242 | 1.000 | 0.612 | 0.634 | 26 |
| P3 | 0.712 | 0.497 | 1.000 | 0.788 | 0.749 | 21 |
| P4 | 0.574 | 0.576 | 1.000 | 0.892 | 0.760 | 18 |
| P5 | 0.510 | 0.385 | 0.864 | 0.525 | 0.571 | 27 |
| P6 | 0.543 | 0.699 | 1.000 | 0.786 | 0.757 | 19 |
| P7 | 0.689 | 0.556 | 1.000 | 0.721 | 0.742 | 22 |
| P8 | 0.809 | 0.666 | 0.864 | 0.791 | 0.782 | 13 |
| P9 | 0.786 | 0.752 | 0.864 | 0.716 | 0.780 | 14 |
| P10 | 0.615 | 0.998 | 1.000 | 0.668 | 0.820 | 11 |
| P11 | 0.553 | 0.744 | 1.000 | 0.772 | 0.767 | 16 |
| P12 | 0.619 | 0.828 | 1.000 | 0.877 | 0.831 | 9 |
| P13 | 0.661 | 0.695 | 1.000 | 0.873 | 0.807 | 12 |
| P14 | 0.181 | 0.690 | 0.605 | 0.703 | 0.545 | 29 |
| P15 | 0.238 | 0.732 | 0.553 | 0.736 | 0.565 | 28 |
| P16 | 0.606 | 0.626 | 1.000 | 0.843 | 0.769 | 15 |
| P17 | 0.624 | 0.926 | 1.000 | 0.876 | 0.856 | 5 |
| P18 | 0.605 | 0.768 | 1.000 | 0.950 | 0.831 | 10 |
| P19 | 0.400 | 0.640 | 0.864 | 0.739 | 0.661 | 25 |
| P20 | 0.613 | 0.840 | 1.000 | 0.873 | 0.831 | 8 |
| P21 | 0.660 | 0.572 | 0.000 | 0.516 | 0.437 | 30 |
| P22 | 0.615 | 0.831 | 1.000 | 0.923 | 0.842 | 7 |
| P23 | 0.771 | 0.449 | 1.000 | 0.828 | 0.762 | 17 |
| P24 | 0.613 | 0.831 | 1.000 | 0.927 | 0.843 | 6 |
| P25 | 0.917 | 1.000 | 1.000 | 0.886 | 0.951 | 3 |
| P26 | 0.749 | 0.745 | 0.737 | 0.796 | 0.757 | 20 |
| P27 | 0.899 | 1.000 | 1.000 | 0.897 | 0.949 | 4 |
| P28 | 0.870 | 1.000 | 1.000 | 0.964 | 0.959 | 1 |
| P29 | 0.851 | 1.000 | 1.000 | 0.971 | 0.956 | 2 |
| P30 | 0.701 | 0.635 | 0.864 | 0.659 | 0.715 | 24 |
Table 7. Calculation of the overall success index with adjusted weights.
| Project | KPIc | KPIt | KPIq | KPIm | SI | Rank |
|---|---|---|---|---|---|---|
| wj | 30% | 30% | 15% | 25% | | |
| P1 | 0.618 | 0.525 | 1.000 | 0.753 | 0.681 | 24 |
| P2 | 0.684 | 0.242 | 1.000 | 0.612 | 0.581 | 26 |
| P3 | 0.712 | 0.497 | 1.000 | 0.788 | 0.710 | 21 |
| P4 | 0.574 | 0.576 | 1.000 | 0.892 | 0.718 | 20 |
| P5 | 0.510 | 0.385 | 0.864 | 0.525 | 0.529 | 28 |
| P6 | 0.543 | 0.699 | 1.000 | 0.786 | 0.719 | 19 |
| P7 | 0.689 | 0.556 | 1.000 | 0.721 | 0.704 | 22 |
| P8 | 0.809 | 0.666 | 0.864 | 0.791 | 0.770 | 14 |
| P9 | 0.786 | 0.752 | 0.864 | 0.716 | 0.770 | 13 |
| P10 | 0.615 | 0.998 | 1.000 | 0.668 | 0.801 | 10 |
| P11 | 0.553 | 0.744 | 1.000 | 0.772 | 0.732 | 16 |
| P12 | 0.619 | 0.828 | 1.000 | 0.877 | 0.803 | 9 |
| P13 | 0.661 | 0.695 | 1.000 | 0.873 | 0.775 | 12 |
| P14 | 0.181 | 0.690 | 0.605 | 0.703 | 0.528 | 29 |
| P15 | 0.238 | 0.732 | 0.553 | 0.736 | 0.558 | 27 |
| P16 | 0.606 | 0.626 | 1.000 | 0.843 | 0.730 | 17 |
| P17 | 0.624 | 0.926 | 1.000 | 0.876 | 0.834 | 5 |
| P18 | 0.605 | 0.768 | 1.000 | 0.950 | 0.799 | 11 |
| P19 | 0.400 | 0.640 | 0.864 | 0.739 | 0.626 | 25 |
| P20 | 0.613 | 0.840 | 1.000 | 0.873 | 0.804 | 8 |
| P21 | 0.660 | 0.572 | 0.000 | 0.516 | 0.498 | 30 |
| P22 | 0.615 | 0.831 | 1.000 | 0.923 | 0.815 | 7 |
| P23 | 0.771 | 0.449 | 1.000 | 0.828 | 0.723 | 18 |
| P24 | 0.613 | 0.831 | 1.000 | 0.927 | 0.815 | 6 |
| P25 | 0.917 | 1.000 | 1.000 | 0.886 | 0.947 | 3 |
| P26 | 0.749 | 0.745 | 0.737 | 0.796 | 0.758 | 15 |
| P27 | 0.899 | 1.000 | 1.000 | 0.897 | 0.944 | 4 |
| P28 | 0.870 | 1.000 | 1.000 | 0.964 | 0.952 | 1 |
| P29 | 0.851 | 1.000 | 1.000 | 0.971 | 0.948 | 2 |
| P30 | 0.701 | 0.635 | 0.864 | 0.659 | 0.695 | 23 |
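The SI column in Tables 6 and 7 is a SAW score: the weighted sum of a project's four per-KPI closeness coefficients under the chosen weighting scenario. A minimal sketch (names are illustrative), reproducing P1's SI under both the equal-weight and adjusted-weight scenarios:

```python
def success_index(scores, weights):
    """SAW: overall Success Index as the weighted sum of per-KPI closeness coefficients."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

p1 = [0.618, 0.525, 1.000, 0.753]  # KPIc, KPIt, KPIq, KPIm for P1

si_equal = success_index(p1, [0.25, 0.25, 0.25, 0.25])     # Table 6 scenario
si_adjusted = success_index(p1, [0.30, 0.30, 0.15, 0.25])  # Table 7 scenario
print(round(si_equal, 3), round(si_adjusted, 3))  # 0.724 0.681
```

Shifting weight from quality (where most projects score 1.000) toward cost and time is what lowers P1's SI from 0.724 to 0.681 and reshuffles the mid-table ranks between the two scenarios.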
Antoniou, F.; Tsavlidou, E. Ranking Public Infrastructure Project Success Using Multi-Criteria Analysis. Buildings 2025, 15, 2807. https://doi.org/10.3390/buildings15162807