3.1. Key Input Parameters
For the analysis of turbine performance and economic viability, input variables have been categorised into two main groups: productivity parameters and cash flow parameters. Productivity parameters are defined by technical, financial and geographical factors that influence turbine performance, while cash flow parameters are used to assess the financial feasibility of wind power investments. In order to identify these parameters and facilitate investment decisions for wind farms, data were collected from both operating wind farms and authorised institutions. These data provided a comprehensive basis for assessing the potential of the turbine in terms of both efficiency and profitability. Productivity parameters include various conditions and factors that affect turbine efficiency and energy output. These include technical specifications, maintenance requirements, financial costs associated with turbine operation and geographical factors such as wind conditions and site characteristics, which are known to have a significant impact on energy production.
The cash flow parameters, on the other hand, are essential for carrying out feasibility analyses to assess the financial soundness of the investment. These parameters include the real interest rate for its impact on financing costs, the inflation rate for its impact on long-term cost projections and the unit price of electricity as a critical determinant of revenue. Annual operating costs are included to reflect the recurring costs associated with the wind farm, and the initial investment cost of the turbine is included as it plays a key role in determining capital requirements. By analysing efficiency and cash flow parameters together, a detailed and data-driven feasibility assessment was made.
To ensure that the Monte Carlo simulation accurately represented the variability in the collected data, statistical models that best approximated the underlying data distribution were first identified through a curve fitting analysis in EasyFit V.6.0. EasyFit automatically evaluates a library of over 60 predefined theoretical distributions (e.g., normal, log-normal, exponential, Weibull and gamma) against the empirical data, estimating model parameters with statistical methods such as maximum likelihood estimation. For each variable in the dataset, the best-fitting probability distribution model is then selected on the basis of goodness-of-fit measures, including the Kolmogorov–Smirnov, Anderson–Darling and chi-square tests.
Additionally, EasyFit supports nonlinear curve fitting, enabling users to fit custom mathematical models, such as exponential or logarithmic functions, using optimization algorithms like least squares. The software also features Monte Carlo simulations for estimating uncertainty in fitted parameters. Overall, EasyFit offers a versatile set of tools for fitting distributions and custom curves to data while ensuring statistical validation.
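As an illustration, the fit-compare-select workflow described above can be sketched with scipy.stats as a stand-in for EasyFit; the data below are synthetic, and the candidate shortlist mirrors the families named in the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.weibull(2.0, size=240) * 6.0  # synthetic stand-in for observed data

# Candidate families, mirroring the shortlist named in the text
candidates = {
    "normal": stats.norm,
    "log-normal": stats.lognorm,
    "weibull": stats.weibull_min,
    "exponential": stats.expon,
    "gamma": stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data)                          # maximum likelihood fit
    ks = stats.kstest(data, dist.cdf, args=params)   # Kolmogorov-Smirnov test
    results[name] = ks.statistic

# The family with the smallest KS statistic fits the data best
best = min(results, key=results.get)
print(best, round(results[best], 3))
```

The same loop extends naturally to the Anderson–Darling or chi-square statistics when those criteria are preferred.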
Once the optimal distribution for each parameter was determined, the associated parameters, such as mean, standard deviation, shape and scale, were extracted. These parameters served as inputs to the Monte Carlo simulation model, ensuring that the simulated scenarios reflected the true statistical properties of the observed data, thereby increasing the reliability and accuracy of the simulation results.
To determine the appropriate sample size for a Monte Carlo power analysis, it is essential to first define the key parameters of the study, including the expected effect size, the type of statistical test, the significance level (typically set at 0.05) and the desired power (commonly set at 0.80). An initial sample size estimate is then selected, often informed by prior research or domain-specific guidelines. Following this, a series of simulations are conducted, typically involving thousands of iterations, with data generated based on the predefined parameters. The statistical test of interest is applied to each simulated dataset, and the outcome is recorded to determine whether the null hypothesis is rejected (i.e., whether a statistically significant result is obtained). Once the simulations are completed, empirical power is calculated as the proportion of simulations in which the null hypothesis was rejected. If the resulting power is lower than the target, the sample size is increased, and the process is repeated. Conversely, if the power exceeds the desired threshold, the sample size is reduced. This iterative procedure continues until the sample size produces the desired statistical power, ensuring that the study is adequately powered to detect the specified effect size.
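The iterative procedure above can be sketched in Python; the two-sample t-test, effect size and step size below are illustrative choices, not values taken from the study:

```python
import numpy as np
from scipy import stats

def empirical_power(n, effect=0.5, alpha=0.05, sims=2000, seed=0):
    """Estimate two-sample t-test power by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)        # control group
        b = rng.normal(effect, 1.0, n)     # treatment group, shifted by `effect` SDs
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha            # count significant results
    return rejections / sims

# Grow the per-group sample size until the target power (0.80) is reached
n = 20
while empirical_power(n) < 0.80:
    n += 10
print(n)
```

Each pass through the loop is one round of the "simulate, test, compare to target" cycle described in the text.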
The number of simulations to run in a Monte Carlo analysis is contingent upon the desired accuracy and precision of the power estimate. A common recommendation is to conduct a minimum of 1000 to 5000 simulations to obtain a reliable estimate of statistical power. However, for more complex models or when detecting smaller effect sizes, a larger number of simulations, such as 10,000 or more, may be necessary to ensure greater stability and reduce variability in the results. Increasing the number of simulations generally enhances the accuracy of the power estimate by minimizing random fluctuations across runs. Nevertheless, this improvement in precision comes at the cost of greater computational resources. Thus, a balance must be found between the computational feasibility and the required level of precision for the analysis. In most cases, 5000 to 10,000 simulations provide an adequate level of accuracy, although convergence of the power estimate can be monitored as additional simulations are conducted to ensure reliable results.
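The precision argument can be made concrete: an empirical power estimate is a binomial proportion, so its standard error shrinks with the square root of the number of runs. A small sketch using the standard binomial formula (not study data):

```python
import math

def power_se(p_hat, sims):
    """Binomial standard error of an empirical power estimate."""
    return math.sqrt(p_hat * (1.0 - p_hat) / sims)

# Near the 0.80 target, precision improves with the square root of the run count
for sims in (1000, 5000, 10000):
    print(sims, round(power_se(0.80, sims), 4))
```

Quadrupling the number of simulations only halves the standard error, which is why 5000 to 10,000 runs is usually a reasonable compromise.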
Monte Carlo simulation can be effectively applied to multi-criteria decision-making (MCDM) by incorporating uncertainty and variability into the decision-making process, thereby enhancing the robustness and reliability of decisions in complex and uncertain environments. MCDM involves the evaluation and comparison of multiple alternatives based on several conflicting criteria, yet real-world decision problems often involve various forms of uncertainty, such as imprecise data, ambiguous preferences or unpredictable outcomes. Monte Carlo simulation addresses these uncertainties by generating a range of possible outcomes based on probabilistic inputs.
In the context of MCDM, the process typically unfolds in several key stages. First, decision alternatives and evaluation criteria must be identified, with each alternative assessed across multiple criteria that may be assigned different weights to reflect their relative importance. Second, the uncertainty associated with each criterion is modelled. This involves assigning probability distributions (e.g., normal, triangular or uniform) to reflect variability in the parameters such as costs, risks or performance outcomes for each alternative.
Next, Monte Carlo simulation involves running a large number of simulations, typically thousands or more, in which the uncertain parameters for each criterion are randomly sampled from their specified probability distributions. For each simulation run, a decision matrix is constructed, and the alternatives are evaluated based on the weighted criteria. The results of these simulations are then aggregated to provide a distribution of possible outcomes for each alternative, allowing for the calculation of expected values, variances and probabilities of achieving favourable results.
Finally, the decision-maker can analyse these aggregated results to assess the likelihood of each alternative’s success across a range of possible scenarios. Sensitivity analysis can also be performed to examine how changes in uncertain parameters influence the decision outcomes, helping to identify alternatives that are more robust under varying conditions. The primary advantage of applying Monte Carlo simulation in MCDM is its ability to provide a comprehensive view of potential outcomes, as opposed to relying on single-point estimates. This approach enables decision-makers to better understand the risks and uncertainties inherent in each alternative, facilitating more informed and resilient decision-making.
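The stages above can be sketched end-to-end in Python; the alternatives, criterion weights and triangular ranges below are hypothetical placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
sims = 10_000
weights = np.array([0.6, 0.4])  # criterion weights (hypothetical)

# Per alternative: one (low, mode, high) triangular range per criterion (hypothetical)
alternatives = {
    "A": [(0.5, 0.7, 0.9), (0.3, 0.5, 0.8)],
    "B": [(0.4, 0.6, 1.0), (0.5, 0.6, 0.7)],
    "C": [(0.6, 0.65, 0.7), (0.2, 0.6, 0.9)],
}

# Sample the uncertain criteria and score each alternative in every run
scores = {}
for name, criteria in alternatives.items():
    samples = np.column_stack(
        [rng.triangular(lo, mode, hi, sims) for lo, mode, hi in criteria]
    )
    scores[name] = samples @ weights          # weighted-sum score per run

# Aggregate: probability that each alternative scores highest
names = list(scores)
stacked = np.column_stack([scores[n] for n in names])
win_prob = {n: float((stacked.argmax(axis=1) == i).mean())
            for i, n in enumerate(names)}
print(win_prob)
```

The resulting win probabilities replace a single-point ranking with a distributional view of how often each alternative comes out on top.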
Integrating Monte Carlo simulations and grid computing in finance for corporate performance management (CPM) offers significant benefits but also entails various costs. Monte Carlo simulations enhance risk analysis by modelling uncertainty, enabling better decision-making in areas like investment strategy and portfolio management. Grid computing improves the accuracy and efficiency of these simulations by distributing computational tasks across multiple systems, allowing for faster processing and more reliable results. This combination supports real-time analysis and scenario planning, helping organisations adapt to market changes and optimise long-term strategic planning.
However, the implementation of these technologies comes with high initial costs, including investment in hardware, software and networking infrastructure, especially when not leveraging cloud-based systems. Operational complexity also arises from managing distributed computing tasks, which requires specialised expertise. Ongoing maintenance and the need for high-quality data further increase costs. Additionally, grid computing can lead to substantial energy consumption, which may offset some of the efficiency gains. Ultimately, while the integration of Monte Carlo simulations and grid computing offers valuable enhancements to CPM, organisations must carefully weigh the associated costs against the long-term benefits to ensure a positive return on investment.
3.1.1. Real Interest Rate (ir)
The real interest rate is the interest rate with the effect of inflation removed, i.e., the nominal interest rate adjusted for inflation. For this purpose, nominal interest rate and inflation data covering 20 years in Turkey were collected, and the real interest rate was calculated from them using Equation (1):
ir = (1 + in)/(1 + r) − 1 = (in − r)/(1 + r) (1)
where
ir = real interest rate
in = nominal interest rate
r = inflation rate
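A minimal sketch of this calculation, assuming the standard (exact) Fisher relation between the nominal rate, inflation and the real rate; the rates below are illustrative, not the study's data:

```python
def real_interest_rate(nominal, inflation):
    """Exact Fisher relation; rates as fractions (0.24 means 24%)."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

# e.g. a 24% nominal rate under 15% inflation
print(round(real_interest_rate(0.24, 0.15), 4))  # ~0.0783, i.e. about 7.8%
```

Note that the common approximation in − r overstates the real rate when inflation is high, which matters for a high-inflation series like this one.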
Nominal Interest Rate (in)
The nominal interest rate is the interest rate quoted in the market. It is the rate banks apply to loans and deposits, and it is not adjusted for the effect of inflation. In this study, the nominal interest rates for the last 20 years were collected, and a log-logistic curve was fitted, as shown in
Figure 6. Statistical information such as the mean, minimum and maximum values of these parameters are shown in
Table 1. The parameters and coefficients of these curves are given in
Table 2. The equation used for the log-logistic distribution model is
f(x) = (α/β)·((x − γ)/β)^(α−1) / [1 + ((x − γ)/β)^α]²
where
γ: continuous location parameter
β: continuous scale parameter (β > 0)
α: continuous shape parameter (α > 0)
Figure 6. Fitted curve for nominal interest rate data.
Table 1. Statistical information on the nominal interest rate data collected.
Cash Flow Parameter (%) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
in | 237 | 0.11 | 2.55 | 24.97 | 2.47 |
Table 2. Characteristics of the fitted curve for nominal interest rate data.
Cash Flow Parameter (%) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
in | LogLogistic | 2.87 | 2.16 | −0.16 | - | - | - | - | - |
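For the simulation step, samples can be drawn from the fitted model. scipy's fisk distribution is a three-parameter log-logistic, so the Table 2 parameters are assumed to map to shape c = 2.87, scale = 2.16 and loc = −0.16 (this mapping is our assumption, not stated in the source):

```python
import numpy as np
from scipy import stats

# Three-parameter log-logistic from Table 2: shape 2.87, scale 2.16, location -0.16
dist = stats.fisk(c=2.87, loc=-0.16, scale=2.16)

# Draw Monte Carlo inputs and check the model mean against the data
samples = dist.rvs(size=5000, random_state=np.random.default_rng(7))
print(round(float(dist.mean()), 2))  # fitted mean, close to the observed 2.55
```

Comparing the fitted distribution's mean with the empirical mean in Table 1 is a quick sanity check on the parameter mapping.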
3.1.2. Inflation Rate (r)
Inflation is a sustained rise in the general level of prices of goods and services. The price of an individual good or service may rise or fall over time; inflation refers not to a change in a single price but to a continuous increase in the general price level. In this study, the inflation rates for the last 21 years were collected, and an inverse Gaussian (InvGauss) curve was fitted, as shown in
Figure 7. Statistical information such as the mean, minimum and maximum values of these parameters are shown in
Table 3. The parameters and coefficients of these curves are given in
Table 4. The equation used for the InvGauss distribution model is
f(x) = √(λ/(2πx³)) · exp(−λ(x − μ)² / (2μ²x))
where
μ: continuous parameter (μ > 0)
λ: continuous parameter (λ > 0)
Figure 7. Fitted curve for inflation rate data.
Table 3. Statistical information on the inflation data collected.
Cash Flow Parameter (%) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
r | 21 | 6.16 | 15.19 | 68.53 | 14.73 |
Table 4. Characteristics of the fitted curve for inflation data.
Cash Flow Parameter (%) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
r | InvGauss | - | - | - | 9.59 | 3.48 | - | - | 5.60 |
3.1.3. Annual Income (AI)
The annual income (AI) is the yearly revenue of a wind turbine. It is obtained by multiplying the unit price of electricity (UPE) by the produced energy (PE).
Unit Price of Electricity
The unit price of electricity (UPE) is the unit selling price of electricity generated by wind farms. Data were collected for the last 22 years, and an ExtValue curve was fitted as shown in
Figure 8. Statistical information such as mean, minimum and maximum values of these parameters is shown in
Table 5. The parameters and coefficients of these curves are given in
Table 6. The equation used for the ExtValue (Gumbel) distribution model is
f(x) = (1/b)·exp(−(x − a)/b)·exp(−exp(−(x − a)/b))
where
a (alpha): continuous location parameter
b (beta): continuous scale parameter (b > 0)
Figure 8. Fitted curve for UPE data.
Table 5. Statistical information on the UPE data collected.
Cash Flow Parameter ($/kWh) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
UPE | 22 | 0.07 | 0.10 | 0.13 | 0.01 |
Table 6. Characteristics of the fitted curve for the UPE.
Cash Flow Parameter ($/kWh) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
UPE | ExtValue | - | - | - | - | - | 0.09 | 0.01 | - |
Produced Energy (PE)
Produced energy (PE) is the energy generated by the wind turbines. The wind power available per unit area swept by the turbine blades is given by the following equation:
P/A = ½ · Cp · ρ · Σᵢ fᵢ · v̄ᵢ³
where ρ is the air density, v̄ᵢ is the mean wind speed for the i-th time interval, fᵢ is the number of hours in the i-th time interval divided by the total number of hours and Cp is the power coefficient. As the turbines are all located in the same region and share the same characteristics, all parameters except the wind speed are treated as constant. Thus, in Figure 9, a linear correlation has been established between the monthly mean wind speed data and the energy obtained for each turbine.
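A minimal sketch of the power-per-area equation; the air density, power coefficient and wind-speed bins below are illustrative values, not the study's measurements:

```python
# P/A = 0.5 * Cp * rho * sum_i f_i * v_i**3, symbols as in the text
rho = 1.225                          # air density, kg/m^3
cp = 0.45                            # power coefficient (illustrative)
speeds = [4.0, 6.0, 8.0, 10.0]       # mean wind speed per interval, m/s
fractions = [0.3, 0.4, 0.2, 0.1]     # share of hours in each interval (sums to 1)

power_per_area = 0.5 * cp * rho * sum(f * v**3 for v, f in zip(speeds, fractions))
print(round(power_per_area, 1))      # W/m^2
```

Because power scales with the cube of wind speed, the high-speed bins dominate the sum even when they cover few hours.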
Wind Speed (WS)
Wind speed is the parameter with the greatest effect on the energy produced. In this study, one year of wind speed data was collected from 20 turbines with similar characteristics in the Eastern Thrace region, and the distribution of monthly mean values for each turbine is presented in
Figure 10. Although the wind speed distribution often encountered in literature and practice is Weibull [
18], the curve that best fit the data collected in this study was ExtValue. Statistical information such as mean, minimum and maximum values of these parameters is given in
Table 7. The parameters and coefficients of these curves are presented in
Table 8. The equation used for the ExtValue (Gumbel) distribution model is
f(x) = (1/b)·exp(−(x − a)/b)·exp(−exp(−(x − a)/b))
where
a (alpha): continuous location parameter
b (beta): continuous scale parameter (b > 0)
Figure 10. Fitted curve for WS data.
Table 7. Statistical information of the WS.
Parameter (m/s) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
WS | 240 | 4.33 | 6.61 | 10.19 | 1.36 |
Table 8. Properties of the fitted curve for the WS.
Parameter (m/s) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
WS | ExtValue | - | - | - | - | - | 5.98 | 1.08 | - |
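The fitted ExtValue model corresponds to scipy's gumbel_r with loc = a and scale = b (an assumed mapping); using the Table 8 parameters, the model's theoretical mean can be checked against the observed mean before sampling wind speeds for the simulation:

```python
import numpy as np
from scipy import stats

# Gumbel (ExtValue) model for monthly mean wind speed: a = 5.98, b = 1.08 (Table 8)
ws = stats.gumbel_r(loc=5.98, scale=1.08)

print(round(float(ws.mean()), 2))  # theoretical mean, close to the observed 6.61 m/s
samples = ws.rvs(size=1000, random_state=np.random.default_rng(3))
```

The Gumbel mean is a + 0.5772·b, so the fitted parameters reproduce the empirical mean in Table 7 almost exactly.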
3.1.4. Annual Expenses (AE)
The annual cost of a wind turbine consists of variable costs such as maintenance per turbine and fixed costs such as personnel, security and insurance. The historical annual cost data (60 values) for each turbine were obtained from the relevant wind turbine databases. The data obtained and the InvGauss curve fitted to these data are shown in
Figure 11. Statistical information such as mean, minimum and maximum values of these parameters can be seen in
Table 9. The parameters and coefficients of these curves are given in
Table 10. The equation used for the InvGauss distribution model is
f(x) = √(λ/(2πx³)) · exp(−λ(x − μ)² / (2μ²x))
where
μ: continuous parameter (μ > 0)
λ: continuous parameter (λ > 0)
Figure 11. Fitted curve for AE data.
Table 9. Statistical information on AE.
Cash Flow Parameter ($) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
AE | 60 | −128,426.00 | −109,943.18 | −85,288.00 | 11,393.83 |
Table 10. Characteristics of the fitted curve for AE.
Cash Flow Parameter ($) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
AE | InvGauss | - | - | - | 62,014 | 1,810,110 | - | - | −171,957 |
3.1.5. Investment Cost of a Wind Turbine (ICWT)
The investment cost of a wind turbine consists of the land cost and the installation cost. The investment cost values of 20 turbines considered in this study were obtained from the relevant wind turbine database. The obtained data and the corresponding Weibull curve are shown in
Figure 12. Statistical information such as mean, minimum and maximum values of these parameters is shown in
Table 11. The parameters and coefficients of these curves are given in
Table 12. The equation used for the Weibull distribution model is
f(x) = (α/β)·((x − γ)/β)^(α−1)·exp(−((x − γ)/β)^α)
where
α: continuous shape parameter (α > 0)
β: continuous scale parameter (β > 0)
γ: continuous location parameter (γ = 0 for the two-parameter Weibull distribution)
Figure 12. Fitted curve for ICWT data.
Table 11. Statistical information from the ICWT.
Cash Flow Parameter ($) | Values | Minimum | Mean | Maximum | Standard Deviation |
---|---|---|---|---|---|
ICWT | 20 | −3,494,451.00 | −3,143,038.55 | −2,506,463.00 | 274,387.96 |
Table 12. Characteristics of the fitted curve for the ICWT.
Cash Flow Parameter ($) | Distribution Model | α | β | γ | μ | λ | a | b | Shift |
---|---|---|---|---|---|---|---|---|---|
ICWT | Weibull | 2.83 | 793,701 | - | - | - | - | - | −3,848,563 |
3.3. Conducting Simulation
To assess the financial outcomes of the project, a simulation model was built by first performing curve fitting on key input parameters. Historical and forecast data were analysed to accurately represent the behaviour of each input and to capture a realistic range of potential outcomes. These inputs were then incorporated into a net present value (NPV) equation, which was set as the primary metric for assessing the long-term viability of the project over a 20-year period.
The simulation model was run using @RISK software (version 5.5) with 5000 iterations. In each iteration, a unique NPV result was generated by randomly sampling from the fitted distributions of the input parameters. This stochastic approach allowed for uncertainty and variability in key parameters, resulting in a robust distribution of potential NPV outcomes. @RISK uses Monte Carlo simulation as its core algorithm, performing simulations by generating random values for uncertain input variables based on assigned probability distributions. It runs multiple trials to model possible outcomes, helping users estimate the likelihood of different results. The software employs various sampling methods, including Latin hypercube sampling, random sampling and quasi-random sampling, to improve the accuracy and efficiency of the simulations.
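The iteration loop can be sketched in Python. For brevity, the sketch below replaces the study's fitted LogLogistic/InvGauss/ExtValue/Weibull inputs with simple normal stand-ins of roughly comparable location and spread, so its outputs are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(11)
sims, years = 5000, 20

# Simplified input models (normal stand-ins; magnitudes loosely echo the tables)
ai = rng.normal(600_000, 60_000, (sims, years))    # annual income, $
ae = rng.normal(-110_000, 11_000, (sims, years))   # annual expenses, $
ir = rng.normal(0.05, 0.01, (sims, 1))             # real interest rate
icwt = rng.normal(-3_143_000, 274_000, sims)       # investment cost, $

# One NPV per iteration: discount each year's net cash flow, add the investment
t = np.arange(1, years + 1)
npv = icwt + ((ai + ae) / (1.0 + ir) ** t).sum(axis=1)

print(round(float((npv > 0).mean()), 3))           # share of profitable iterations
```

Each row of the arrays is one iteration, so the 5000 NPV values form the output distribution that @RISK would report.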
Additionally, @RISK offers sensitivity analysis to determine how changes in input variables impact the outcome, using tools like tornado diagrams and sensitivity charts. It also incorporates optimisation algorithms, such as genetic and evolutionary algorithms, to identify optimal solutions under uncertainty. Analytical methods and regression tools are available for specific problems, allowing for efficient modelling in cases where closed-form solutions are applicable. Overall, @RISK combines Monte Carlo simulation with advanced techniques for risk analysis and decision-making.
On completion of the simulation, a sensitivity analysis was performed on the NPV results to identify the parameters with the greatest impact on project profitability. By identifying the most influential factors, greater insight was gained into the inputs that drive variability in NPV, providing valuable information on areas of risk and opportunity. These insights are critical for guiding future decisions and adjusting project strategies to improve profitability under uncertain conditions.
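A rank-correlation sensitivity ranking of the kind behind tornado diagrams can be sketched as follows; the sampled inputs and the toy output model are hypothetical stand-ins for the study's NPV simulation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sims = 5000

# Hypothetical sampled inputs (stand-ins for the study's parameters)
inputs = {
    "wind_speed": rng.normal(6.6, 1.4, sims),       # m/s
    "elec_price": rng.normal(0.10, 0.01, sims),     # $/kWh
    "expenses": rng.normal(-110_000, 11_000, sims), # $/year
}
# Toy output: a revenue-like combination of the inputs
npv = inputs["wind_speed"] ** 3 * inputs["elec_price"] * 1000 + inputs["expenses"]

# Spearman rank correlations, the basis of tornado-style sensitivity rankings
ranking = sorted(
    ((name, abs(stats.spearmanr(x, npv)[0])) for name, x in inputs.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, rho in ranking:
    print(name, round(rho, 2))
```

Because output varies with the cube of wind speed in this toy model, wind speed dominates the ranking, mirroring the kind of insight the sensitivity analysis in the study is meant to surface.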