The Bayesian Approach to Capital Allocation at Operational Risk: A Combination of Statistical Data and Expert Opinion

Abstract: Operational risk management remains a major concern for financial institutions. Indeed, institutions are bound to manage their own funds to hedge this risk. In this paper, we propose an approach to the allocation of own funds based on a combination of historical data and expert opinion, using the loss distribution approach (LDA) and Bayesian logic. The results show that internal models are of great importance in the own-funds allocation process, and that the use of the Delphi method for modelling expert opinion is very useful in ensuring the reliability of estimates.


Introduction
Since the 1990s, the Basel Committee and researchers have tried to define an incontestable framework for modelling and managing operational risk. However, the efforts made have shown that theoretical and practical mastery of this risk is far from being achieved.
Operational risk management practice is based on an approach composed of four steps: Identification, assessment of impact, classification of risks, and implementation of action plans. Indeed, the risk management process must be able to ensure perfect knowledge and control of operational risk at the level of the various activities exercised.
With regard to the minimum capital requirement, under Basel II the regulator offered banks several approaches and methods for calculating operational risk capital, depending on the degree of control and the availability of the information required for internal modelling. As a result, the regulator proposes, on the one hand, simple, unified, and standardized approaches whose characteristics are set by the regulator and, on the other hand, complex and sophisticated approaches whose characteristics are determined by the banks themselves.
In terms of quantification, the committee presented some methods that could be used in the Advanced Measurement Approach (AMA) in a document published in 2001 entitled the "Working Paper on the Regulatory Treatment of Operational Risk".
The AMA is an approach that allows banks to use internal models for risk measurement. Indeed, three approaches have been proposed: the Scorecard approach, the IMA (internal measurement approach), and the LDA (loss distribution approach). The use of the AMA for the calculation of capital requirements has become very complex given the multitude of theories and models used, such as the probabilistic approach, the Bayesian approach, the Markov chain Monte Carlo approach, and the use of copulas to model correlation. This situation has generated a significant model risk because it has become impossible to compare and benchmark between banks and to assess the evolution of the risk profile.
Following the financial crisis, the minimum capital requirements for operational risk were reviewed by the Basel Committee (BCBS). Indeed, the publication in December 2017 of the document entitled "Basel III: Finalizing post-crisis reforms" revealed the orientation of banking regulation after 2022, which consists of replacing the existing operational risk measurement approaches with a single approach known as the "Standardized Measurement Approach" (SMA), which will enter into effect in January 2022.
The Basel Committee justified its decision to abandon internal models for the calculation of capital requirements by the complexity of the models used and proposed a simple standardized approach instead.
Until the Basel III reform enters into effect, banks will continue to use their own models for calculating minimum capital requirements. Indeed, banks opt for two types of modelling approaches: the Top-Down approach or the Bottom-Up approach.
The Top-Down approach quantifies operational risk without attempting to identify events or causes of losses. Operational losses, under this approach, are measured based on overall historical data. The Bottom-Up approach, by contrast, quantifies operational risk based on knowledge of events, identifying internal events and their generating factors in detail at the level of each task and entity. The information collected is included in the overall calculation of the capital charge.
Despite the Basel Committee's decision to abandon the AMA approach, the use of internal models remains essential for operational risk management, notably for the risk appetite process and the capital allocation process.
In this study, we show the interest of internal models in the allocation of equity capital based on the LDA approach. We propose a practical approach based on the Delphi method to adjust historical data with expert opinions, using Bayesian logic to determine the risk measure to be used in the capital allocation, and we apply the proposed approach to the allocation of capital for the retail banking business line of a Moroccan bank.
Therefore, in this article, the second section is reserved for the literature review, the third for the methodology, and the fourth for the empirical study.

Literature Review
The modelling and management of operational risk remains a major concern for financial institutions, particularly in the absence of a total consensus on the approach to be followed between BCBS and the academicians and professionals in the domain. Indeed, research has focused on the approaches to be used for loss modelling, severity modelling, frequency modelling, correlation between losses, correlation between losses and total income, capital allocation, etc.
The approaches for quantifying operational risk are multiple; the most well-known are: (1) The IMA approach, based on a proportionality assumption between the expected loss and the unexpected loss, presented by Akkizidis and Bouchereau (2005) and Cruz et al. (2015); (2) The Scorecard approach, based on calculating a score for the risks measured by an entity and acting on its changing values, presented by Niven (2006), Akkizidis and Bouchereau (2005), Giudici (2013), and Facchinetti et al. (2019); (3) The LDA approach, based on the distributions of the frequency and the severity of losses.
The latter approach comes in three forms. The first is the classic LDA approach, which consists of determining the distributions that fit the loss data and their parameters. In this case, the parameters are estimated by the method of moments or by maximum likelihood. This technique has been studied by a large number of researchers, such as Frachot et al. (2001, 2003), King (2001), Cruz (2002), Alexander (2003), Chernobai et al. (2005), Bee (2006), and Shevchenko (2010). The second is the Bayesian LDA approach with conjugate distributions, which considers that the parameters of the frequency and severity distributions are random variables distributed according to prior laws. This approach has been the subject of various studies, such as Giudici and Bilotta (2004), Shevchenko (2011), Dalla Valle (2009), Figini et al. (2014), and Benbachir and Habachi (2018). The last is the LDA approach by Markov chain Monte Carlo (MCMC), which uses non-informative laws and Markov chain properties. This method has been studied by Peters and Sisson (2006), Dalla Valle and Giudici (2008), and Shevchenko and Temnov (2009).
The dependence between operational losses using mathematical copulas has been studied by various researchers, such as Cope and Antonini (2008), Brechmann et al. (2013), Groenewald (2014), and Abdymomunov and Ergen (2017). Opinions diverge on this point because some consider the dependence to be weak or inconclusive (Cope and Antonini 2008; Groenewald 2014).
The Basel III reform abandoned the AMA approach in favor of a new standardized approach (SMA). The latter has been the subject of several critical studies that have shown the importance of the associated model risk, including the studies by Mignola et al. (2016), Peters et al. (2016), and McConnell (2017). As a result, some researchers have proposed other types of models based on historical losses (Cohen 2016, 2018).
Capital allocation is an important area of risk management. Indeed, various studies have addressed this subject for different categories of risks, including studies by Denault (2001), Tasche (2007), Dhaene et al. (2012), and Boonen (2019). In terms of operational risk, this issue is treated by Urbina and Guillén (2014).

The Risk Appetite Process
Risk appetite is defined as the maximum loss that the bank supports in order to achieve its profitability objectives. Indeed, the Board of Directors must define the risks that shareholders accept in order to achieve the objectives defined for the Senior Management.
Risk appetite must be defined by the Senior Management at the level of each business line and activity, by defining risk tolerance at the intermediate level and risk limits at the operational level.
Risk appetite is directly related to the current risk profile and its evolution in correlation with the evolution of the bank's activity. As a result, the bank must determine its risk profile at the date of preparing its risk appetite policy and must estimate the evolution of its profile in accordance with the progress of its development and expansion plan.
The risk profile is determined internally by the bank and may differ from its regulatory profile, as determined by the regulatory capital. Indeed, the actual profile is determined by the bank's economic capital, while the regulatory profile is defined by the minimum capital requirement according to the standard approach of Basel III.
For the deployment of a risk appetite framework, Shang and Chen (2012) identified seven steps: (1) A Bottom-up analysis of the company's current risk profile; (2) Interviews with the board of directors regarding the level of risk tolerance; (3) Alignment of risk appetite with the company's goal and strategy; (4) Formalization of the risk appetite statement with approval from the board of directors; (5) Establishment of risk policies, risk limits, and risk-monitoring processes consistent with risk appetite; (6) Design and implementation of a risk-mitigation plan consistent with risk appetite; (7) Communication with local senior management for their buy in.
Indeed, this approach should be able to define three components: (1) The risk profile; (2) The risk tolerance process; (3) The process for defining operational risk limits.

The Process of Capital Allocation
Capital allocation is the process that defines the capital allocated by the bank to a given entity to achieve the intended profitability objective. Indeed, the capital allocated to a unit is defined according to the risk incurred by that unit.
The definition of a risk measure is an essential component of the capital allocation process. For operational risk, two measures can be used: the Value at Risk (VaR), which is not a coherent risk measure, and the Expected Shortfall (ES), which is a coherent risk measure. The Expected Shortfall at confidence level α is defined by ES_α(L) = E[L | L ≥ VaR_α(L)], where F is the cumulative distribution function of operational losses. Let X_i, i = 1, …, n, be the random variables representing the individual losses of the business units, and K_i, i = 1, …, n, the allocation of capital for each probable individual loss X_i. The total operational loss L and total risk capital K are expressed as L = X_1 + … + X_n and K = K_1 + … + K_n. For the allocation of risk capital for operational risk, several methods can be used, such as the proportionality allocation method (Hamlen et al. 1977), the beta method (Panjer 2002), the incremental method (Jorion 2001), the cost gap method (Driessen and Tijs 1985), the Shapley method (Shapley 1953), and Euler allocation (Aumann and Shapley 1974).
In operational risk, Urbina and Guillén (2014) used the proportionality allocation method using the VaR for capital allocation in the case of fraud.
In our study, we will use the same method for the allocation of capital at the retail banking business line level, using a Bayesian risk measure to integrate expert estimates.
This method is based on an assumption of proportionality between the capital allocated to a unit and its standalone risk: K_i = K · ρ(X_i) / (ρ(X_1) + … + ρ(X_n)), where ρ = VaR_α or ρ = ES_α.
The capital allocated by this principle neglects the dependence of the losses of the different business lines and risk categories. The Haircut allocation method considers that the correlation between risk categories and business lines is weak or insignificant. Indeed, the studies of Cope and Antonini (2008) and Groenewald (2014), cited above, encourage the use of this method.
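As a minimal sketch, the proportional (haircut) principle above can be implemented in a few lines. The loss samples, LogNormal parameters, and total capital K below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical standalone annual losses for three business units; in practice
# these samples would come from each unit's own LDA model.
unit_losses = [rng.lognormal(mean=m, sigma=1.0, size=100_000)
               for m in (10.0, 10.5, 11.0)]

# Standalone risk measure per unit: here rho = VaR at the 99.9% level.
alpha = 0.999
standalone_var = np.array([np.quantile(losses, alpha) for losses in unit_losses])

# Total risk capital K to be allocated (assumed given).
K = 1_000_000.0

# Haircut / proportional allocation: K_i = K * rho(X_i) / sum_j rho(X_j)
allocation = K * standalone_var / standalone_var.sum()
print(allocation.round(0))
```

By construction the allocations sum to K; dependence between units is ignored, which is precisely the assumption the haircut method makes.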
Under the second pillar, the allocation of capital and the implementation of the risk appetite process strengthen the case for internal models, despite the suppression of their use for the calculation of the minimum capital requirement under the first pillar. Indeed, risk-based steering of the activity requires individual monitoring of the risk by business line in order to guarantee adequacy between the risk incurred and the capital allocated. Consequently, the bank must develop its own models for estimating the economic capital needed to develop its business, independently of the regulatory constraint of measuring the solvency ratio based on the standardized approach of Basel III.

The Risk Mapping
Operational risk mapping is an inventory of the probable risks incurred by a bank at a given date. This type of mapping represents all operational risk situations broken down by business line and risk category. An operational risk situation is composed of three elements: (1) The generating factor of the risk (hazard), which constitutes the factors that favor the occurrence of the risk incident, such as inexperienced personnel or the malfunction of the control device; (2) The operational risk event (incident), which constitutes the single incident whose occurrence can generate losses for the bank, such as internal fraud or external fraud; (3) The impact (loss), which constitutes the amount of financial damage resulting from an event.
To normalize the identification of an operational risk situation, the BCBS (2006) defines the generic mapping of operational risks within credit institutions, comprising eight business lines and seven categories of operational risks.

The Operational Risk Categories
The operational risk categories (RT_i, 1 ≤ i ≤ 7) are: RT_1, execution, delivery, and process management; RT_2, business disruption and system failures; RT_3, damage to physical assets; RT_4, clients, products, and business practices; RT_5, employment practices and workplace safety; RT_6, external fraud; and RT_7, internal fraud.

Capital Requirements
The quantification of operational risk remains a major problem for the Basel Committee. Indeed, several approaches have been adopted in the Basel II framework, including the approach based on internal models, which is considered the most important.
The use of internal models has been strongly criticized by the Basel Committee. Indeed, a new orientation of the Basel Committee has emerged; this orientation consists of abandoning all Basel II approaches and adopting a new standardized approach, the SMA, which will replace all previous approaches.
The standardized approach (SMA) defined by the BCBS (2016, 2017) is based on the Business Indicator (BI), defined as BI = ILDC + SC + FC, where ILDC is the interest, leases, and dividends component, SC is the services component, and FC is the financial component, each calculated as a three-year average of the corresponding income statement items.

The Loss Distribution Approach (LDA)
The LDA approach uses the distributions of the frequency and the severity of operational losses to determine the operational losses over a time horizon T.
The Classical Model
i. Mathematical formulation of the model
In the LDA approach, the operational loss over a horizon T is considered a random variable P defined by P = X_1 + X_2 + … + X_N, where X_i is the random variable that represents the individual impact of operational risk incidents, and N is the random variable that represents the number of occurrences over the horizon T.
The random variables X_i are independent and identically distributed, and the random variable N is independent of the variables X_i.
The mathematical expectation and variance of the compound random variable P are then E(P) = E(N)·E(X) and Var(P) = E(N)·Var(X) + Var(N)·E(X)².
ii. Presentation of the classical approach.
The classical approach considers that severity and frequency can be modelled by the usual theoretical laws, whose parameters are estimated from the loss data.
For modelling the individual severity of losses, several distributions can be used to represent the severity random variable, such as the LogNormal distribution, the Beta distribution, the Weibull distribution, or other distributions detailed in Chernobai et al. (2007). In our study, we limit ourselves to the LogNormal distribution: X_i ↝ LN(μ, σ²), i ≥ 1, i.e., ln(X_i) ↝ N(μ, σ²).
With regard to the modelling of the loss frequency N, we use the Poisson distribution P(λ) or the Negative Binomial distribution NB(r, p).
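Under these assumptions (Poisson frequency, LogNormal severity), the compound model and its moment formulas E(P) = E(N)·E(X) and Var(P) = E(N)·Var(X) + Var(N)·E(X)² can be checked by simulation; all parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: N ~ Poisson(lam), ln(X) ~ N(mu, sigma^2).
lam, mu, sigma = 20.0, 8.0, 1.0

# Simulate the compound annual loss P = X_1 + ... + X_N for many years.
n_years = 50_000
counts = rng.poisson(lam, size=n_years)
total = np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])

# Theoretical moments of the compound Poisson sum (Var(N) = E(N) = lam).
ex = np.exp(mu + sigma**2 / 2)                           # E[X] for a LogNormal
vx = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)  # Var[X]
mean_theory = lam * ex
var_theory = lam * (vx + ex**2)

print(total.mean() / mean_theory, total.var() / var_theory)  # both close to 1
```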

The Pure Bayesian Approach
In the pure Bayesian approach, the parameters of the distribution of the frequency and the individual loss are considered as random variables with a probability density function.
The pure Bayesian approach considers the parameters μ, σ, and λ of the density functions of X and N as random variables whose densities are, respectively, π(μ), π(σ), and π(λ).
i. Description of the pure Bayesian approach
Let X = (X_1, …, X_n) be a vector of independent and identically distributed (i.i.d.) random variables.

Let x = (x_1, …, x_n) be a realization of the vector X, and let θ = (θ_1, …, θ_K) be the vector of random variables of the parameters of the density of X.
The joint density h(x, θ) of the couple (x, θ) is defined by h(x, θ) = f(x|θ)·π(θ) = π(θ|x)·m(x), where:
- π(θ) is the probability density of the parameter θ, called the "prior density function";
- π(θ|x) is the conditional probability density function of the parameter θ given x, called the "posterior density";
- h(x, θ) is the probability density function of the couple (x, θ);
- f(x|θ) is the conditional density function of x given θ; this is the likelihood function f(x|θ) = ∏ f(x_i|θ), with f(x_i|θ) the conditional probability density function of x_i;
- m(x) is the marginal density of x, which can be written as m(x) = ∫ f(x|θ)·π(θ) dθ.
Hence π(θ|x) = f(x|θ)·π(θ)/m(x), where m(x) is a normalization constant, and the posterior distribution π(θ|x) can be viewed as a combination of a priori knowledge π(θ) with the likelihood function f(x|θ) for the observed data.
Since m(x) is a normalization constant, the posterior distribution is often written in the form (13), π(θ|x) ∝ f(x|θ)·π(θ), where the symbol ∝ signifies "is proportional to", with a constant of proportionality independent of the parameter θ.
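The proportionality relation can be made concrete with a small grid approximation. Here a Gamma prior is combined with a Poisson likelihood (the pairing used later in this paper), and the numerical posterior mean is compared with the known conjugate result; the counts and prior parameters are illustrative:

```python
import numpy as np

# Illustrative observed annual loss counts and Gamma(a, scale=b) prior.
n_obs = np.array([18, 22, 19, 25, 21])
T = len(n_obs)
a, b = 10.0, 2.0

# Evaluate prior * likelihood on a grid of lambda values (log scale for stability).
lam = np.linspace(1e-3, 80.0, 20_000)
log_prior = (a - 1) * np.log(lam) - lam / b       # Gamma prior, up to a constant
log_lik = n_obs.sum() * np.log(lam) - T * lam     # Poisson likelihood, up to a constant
log_post = log_prior + log_lik

# Normalize numerically: the unknown constant m(x) drops out here.
post = np.exp(log_post - log_post.max())
dlam = lam[1] - lam[0]
post /= post.sum() * dlam

grid_mean = (lam * post).sum() * dlam
conjugate_mean = (a + n_obs.sum()) * b / (1 + T * b)  # mean of the conjugate Gamma posterior
print(grid_mean, conjugate_mean)
```

The two means agree to numerical precision, illustrating that normalizing f(x|θ)·π(θ) recovers the posterior without ever computing m(x) analytically.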

ii. The Bayesian Estimator
The parameter θ can be univariate or multivariate. The estimate of the Bayesian posterior mean of θ is defined as follows:
iii. If the parameter θ is univariate, the estimate of the Bayesian posterior mean of θ, denoted θ̂, is the conditional expectation of θ given x: θ̂ = E(θ|x) = ∫ θ·π(θ|x) dθ.
iv. In a multidimensional context, where θ = (θ_1, …, θ_K), the estimate of the Bayesian posterior mean of θ_i, denoted θ̂_i, is the conditional expectation of θ_i given x: θ̂_i = E(θ_i|x).
a. Calculation of the estimate of the Bayesian posterior mean
To determine the estimate of the Bayesian posterior mean defined by formulas (13) and (14), we must determine the prior and posterior laws of the random variable θ. We limit our study to a LogNormal distribution for the loss severity, X_i ↝ LN(μ, σ²), i ≥ 1, and to the Poisson distribution for the frequency of the losses, N ↝ P(λ). The parameters μ, σ, and λ are considered random variables. Therefore, we must determine the estimates of the Bayesian posterior means of λ and μ given the observed data.
b. Determination of the prior law of the parameters
The Bayesian approach depends on the accuracy of the information provided by experts on the parameters of the prior law. Below, we present the approach adopted:


The prior law of the parameter λ, with N ↝ P(λ): the choice of the prior distribution of the parameter λ depends on the description of the characteristics of the random variable N given by the experts. In our study, we consider that the prior law π(λ) is a Gamma distribution Γ(a, b) with parameters (a, b) to be determined by the experts.


The prior laws of μ and σ, with ln(X) ↝ N(μ, σ²): in this paper, we limit ourselves to the case where μ is a Gaussian random variable, μ ↝ N(μ_0, σ_0²), and σ is a known constant. However, Shevchenko (2011) also treats the case where σ² is random, represented by the inverse Chi-square distribution (Inv.Chi.Sq).

c. Determination of the posterior laws of the parameters λ and μ
The posterior distribution is determined from the likelihood function and the prior distribution by formula (12). Thus, we calculate the posterior laws of the frequency and severity parameters:


The posterior law of the parameter λ, with N ↝ P(λ): let N = (N_1, …, N_T) be a vector of random variables of the frequency and n = (n_1, …, n_T) a realization of the vector N. We suppose that N_t ↝ P(λ), and we consider that λ ↝ Γ(a, b). The posterior law conjugate to the prior law of λ is defined by π(λ|n) ∝ f(n|λ)·π(λ), and we have π(λ|n) ∝ [∏ e^(−λ)·λ^(n_t)/n_t!] · λ^(a−1)·e^(−λ/b) ∝ λ^(a + n_1 + … + n_T − 1) · e^(−λ·(T + 1/b)). From formula (17), we deduce that the posterior law is a Gamma law Γ(a_T, b_T), with a_T = a + Σ n_t and b_T = b/(1 + T·b).
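This conjugate update reduces to two lines of arithmetic; the expert prior (a, b) and the counts below are hypothetical:

```python
def poisson_gamma_update(counts, a, b):
    """Conjugate update for N_t ~ P(lambda) with prior lambda ~ Gamma(a, scale=b).

    Returns the posterior parameters (a_T, b_T), with a_T = a + sum(n_t)
    and b_T = b / (1 + T*b).
    """
    T = len(counts)
    return a + sum(counts), b / (1 + T * b)

# Hypothetical expert prior: E[lambda] = a*b = 20 incidents per year.
a, b = 10.0, 2.0
counts = [18, 22, 19, 25, 21]

a_T, b_T = poisson_gamma_update(counts, a, b)
lam_hat = a_T * b_T    # posterior mean, used as the Bayesian estimator
print(lam_hat)         # ≈ 20.909: between the prior mean 20 and the sample mean 21
```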


The posterior law of the parameter μ ↝ N(μ_0, σ_0²), with σ a constant: let y = (y_1, …, y_n) be the realizations of the random variables Y_1, …, Y_n representing the collected losses. We suppose here, for the Bayesian modelling of the severity, that μ ↝ N(μ_0, σ_0²) and that σ is a constant, which we estimate from the sample by the maximum likelihood method. We pose Z_i = ln(Y_i); thus, Z_i ↝ N(μ, σ²). We then consider the random vector Z = (Z_1, …, Z_n). The prior density of μ is given by π(μ) = (1/(σ_0·√(2π)))·exp(−(μ − μ_0)²/(2σ_0²)), and the conditional density of the random vector Z given μ is f(z|μ) = ∏ (1/(σ·√(2π)))·exp(−(z_i − μ)²/(2σ²)). Hence, the posterior law of μ: π(μ|z) ∝ exp(−(μ − μ_n)²/(2σ_n²)), where σ_n² = 1/(1/σ_0² + n/σ²), μ_n = (μ_0/σ_0² + n·z̄/σ²)·σ_n², and z̄ = (1/n)·Σ z_i. Formula (18) shows that the posterior law of μ is a Gaussian law N(μ_n, σ_n²).
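A minimal sketch of this Normal posterior update, with an illustrative expert prior (μ_0, σ_0) and simulated log-losses; the internal check confirms an equivalent credibility-weighted form of the posterior mean:

```python
import numpy as np

def normal_posterior(z, mu0, sig0, sigma):
    """Posterior of mu for z_i ~ N(mu, sigma^2) with prior mu ~ N(mu0, sig0^2).

    Returns (mu_n, sig_n): the posterior is N(mu_n, sig_n^2), a
    precision-weighted average of the prior mean and the sample mean.
    """
    n = len(z)
    precision = 1 / sig0**2 + n / sigma**2
    mu_n = (mu0 / sig0**2 + n * np.mean(z) / sigma**2) / precision
    return mu_n, np.sqrt(1 / precision)

rng = np.random.default_rng(1)
sigma = 1.2                     # treated as known (estimated by MLE in the paper)
z = np.log(rng.lognormal(mean=8.0, sigma=sigma, size=500))  # simulated log-losses
mu0, sig0 = 8.5, 0.5            # hypothetical expert prior on mu

mu_n, sig_n = normal_posterior(z, mu0, sig0, sigma)

# Equivalent credibility form: mu_n = w * mean(z) + (1 - w) * mu0
w = len(z) * sig0**2 / (len(z) * sig0**2 + sigma**2)
print(mu_n, sig_n, w)
```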
d. Calculation of the Bayesian estimators μ̂ and λ̂
The Bayesian estimator λ̂ of the parameter λ is given by λ̂ = E(λ|n_1, …, n_T).
Result (17) shows that the posterior law of λ is a Γ(a_T, b_T) distribution, with a_T = a + Σ n_t and b_T = b/(1 + T·b). Consequently, the estimator λ̂ is the mathematical expectation of the posterior law: λ̂ = a_T·b_T = (a + Σ n_t)·b/(1 + T·b). The parameters (a, b) are estimated by the experts.
The Bayesian estimator μ̂ of the parameter μ is given by μ̂ = E(μ|z_1, …, z_n), where σ is a constant and z_1, …, z_n are realizations of the random variables Z_1, …, Z_n. Result (18) shows that the posterior law of μ is a Gaussian distribution N(μ_n, σ_n²). Consequently, the estimator μ̂ is the mathematical expectation of the posterior law of μ. Thus, μ̂ = μ_n, which can be written as μ̂ = w·z̄ + (1 − w)·μ_0, with w = n·σ_0²/(n·σ_0² + σ²), where the parameter μ_0 is estimated by the experts. Consequently, the parameters of the LogNormal law used in the simulation are μ̂ and σ.

Value at Risk of Operational Risk
Value at Risk (VaR) is a measure adopted by the Basel Committee on Banking Supervision under Basel II to measure credit risk, market risk, and operational risk in the framework of advanced approaches based on internal models. Indeed, the committee requires that the internal model be very robust and meet very high requirements by fixing the threshold for the VaR of operational risk at 99.9%.
In terms of operational risk, the internal model is the main component of the calculation of capital requirements by the LDA approach, which is based on the determination of the distribution of aggregate operational losses and the determination of the 99.9% percentile of this distribution.
The determination of the VaR depends on the determination of the aggregate operational loss distribution because it can be calculated analytically, determined by numerical algorithms, or computed by Monte Carlo simulation.

Presentation of Value at Risk (VaR)
Let X_t, t = 1, …, n, be a series of stationary data with cumulative distribution function F. The value at risk (VaR) for a given probability α is defined mathematically as the quantile VaR_α = F⁻¹(α) = inf{x : F(x) ≥ α}.

Definition of the Capital at Operational Risk
We consider the aggregated loss P = X_1 + … + X_N over a given horizon T. We fix the level of confidence at 1 − α = 99.9%.
The requirement of capital to cover the operational risk is measured by the Value at Risk (VaR). The VaR is the quantile of order 1 − α of the aggregated loss, defined by F_P(VaR) = 1 − α, where F_P is the cumulative distribution function of P. The VaR is thus given by VaR = F_P⁻¹(0.999). The simulation of the VaR is presented in Appendix A.
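A Monte Carlo sketch of the 99.9% VaR of the aggregated loss P, in the spirit of the simulation described in Appendix A; the (λ, μ, σ) values are illustrative stand-ins for the estimated parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (in practice the Bayesian estimates lambda_hat, mu_hat).
lam, mu, sigma = 20.0, 8.0, 1.2
q = 0.999                         # quantile of order 1 - alpha = 99.9%

# Simulate the aggregated loss P = X_1 + ... + X_N for many scenario-years.
n_sims = 100_000
counts = rng.poisson(lam, size=n_sims)
losses = np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])

var_999 = np.quantile(losses, q)              # VaR = F_P^{-1}(0.999)
es_999 = losses[losses >= var_999].mean()     # Expected Shortfall beyond the VaR
print(f"VaR(99.9%) = {var_999:,.0f}  ES = {es_999:,.0f}")
```

The empirical quantile of the simulated aggregate losses is the capital charge; the Expected Shortfall alongside it illustrates the coherent alternative mentioned earlier.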

Collecting and Modelling the Expert Opinion
Organization of the Process for Collecting the Expert Opinion
Obtaining an expert opinion can be defined as the process of collecting information and data or answering questions about problems to be solved. In this study, we must define the parameters of the frequency and severity of operational risk events. Therefore, the approach adopted must ensure a high level of accuracy and reliability of the expert opinion in order to reduce the impact of this data on the bank's risk profile.
The modelling of expert opinions has been the subject of various studies that have used various techniques for collecting expert opinions, such as the Delphi technique defined by Helmer (1968) and the practical guides proposed by Ayyub (2001).
In our study, we use the Delphi technique after adapting it to the specificities of collecting information from experts in the field of operational risk.

Presentation of the Delphi Method
The Delphi method includes eight steps according to Ayyub (2001), which are defined as follows: (1) Selection of issues or questions and development of questionnaires; (2) Selection of experts who are most knowledgeable about issues or questions of concern; (3) Issue familiarization of experts by providing sufficient details on the issues via questionnaires; (4) Elicitation of experts about the pertinent issues. The experts might not know who the other respondents are; (5) Aggregation and presentation of the results in the form of median values and using an interquartile range (i.e., 25% and 75% values); (6) Review of results and revision of the initial answers by experts. This iterative re-examination of issues sometimes increases the accuracy of results. Respondents who provide answers outside the inter-quartile range need to provide written justifications or arguments during the second cycle of completing the questionnaires; (7) Revision of results and re-review for another cycle. The process should be repeated until a complete consensus is achieved. Typically, the Delphi method requires two to four cycles or iterations; (8) A summary of the results is prepared with an argument summary for out of inter-quartile range values.

Summary Presentation of the Process for Collecting Expert Opinions
The approach for collecting expert opinion is based on that defined by Ayyub (2001), with readjustments to better adapt the process to the area of operational risk: (1) Definition of the information requested; (2) Definition of the interveners in the data collecting process; (3) Identification of problems, information sources, and insufficiencies; (4) Analysis and collecting of pertinent information; (5) Choice of the interveners in the data collecting process; (6) Familiarization of the experts with the operation's objectives and training on those objectives; (7) Soliciting and collecting opinions; (8) Simulation, revision of assumptions, and estimates: if the expert gives his consent, we pass to the next step; otherwise, we repeat steps 6, 7, and 8; (9) Aggregation of estimates and overall validation; (10) Preparation of reporting and determination of results.

Definition of the Information Requested
The collecting of information from experts has two objectives: (1) The first consists of modelling the a priori laws of the frequency and the severity of the data by risk category. Indeed, the expert must provide the forms of the a priori laws of frequency and severity and an estimation of their parameters; (2) The second objective is the weighting of the expert opinion with the control functions (internal audit and permanent control).
i. Modelling the a priori law.
In this case, the expert must provide: (1) The estimation of the parameter μ_0 of the prior law of μ for the LogNormal law LN(μ, σ²), which models the severity by risk category, knowing that σ is a constant and μ ↝ N(μ_0, σ_0²); (2) The estimation of the parameters (a, b) of the prior law of λ for Poisson's law P(λ), which models the frequency by risk category over a horizon T, knowing that λ ↝ Γ(a, b).
ii. Weighting of the expert opinion.
The objective of weighting the expert opinion is to determine the parameters of the a posteriori laws. Indeed, for the frequency, this weighting permits one to determine the parameter λ̂ of Poisson's law relating to the frequency of losses by risk category. For the severity, this weighting allows one to determine the parameter μ̂ of the LogNormal law relating to the severity of losses by risk category.

Definition of Interveners in the Data Collecting Process
The evaluation of the parameters of the a priori law involves all the operational entities concerned, as well as the risk management function:
i. The risk managers.
The risk managers have the status of evaluators because they must conduct the evaluation process with the various experts.
ii. Person in charge of incident reporting (risk correspondents) and their managers. This is an essential population with great added value, given their experience in collecting incidents and their contributions to correcting collection biases.
iii. Experts from the operating entities and the business lines.
The operational losses are dependent on the business line and the activity exercised. Indeed, the severity and frequency generally reflect the risk profile of each activity and business line because they depend on the size of the transactions concluded by the business line (or activity exercised) and on their frequencies.
Consequently, the use of experienced and well-qualified experts is the first step in the evaluation process, which will be followed by a phase of estimation and an aggregation of the collected data that takes into account the specificities of the activity targeted by the evaluation.
iv. Internal auditors and permanent controllers.
The internal audit and permanent control functions have the right to supervise all activities and executions on a permanent or periodic basis, as well as audit and control missions for the various business lines and operational entities. Their verification approaches are based on a risk identification approach using risk mapping and the database of events collected. Therefore, recourse to the service of this category for the weighting of experts' opinions is necessary.

Identification of Problems, Information Sources, and Insufficiencies
The main reason for using expert opinion modelling is to reduce the uncertainty due to changes in the bank's risk profile caused by changes at the organization level and in the control and risk management processes, given that the distributions of the observed historical losses follow Poisson's law in frequency and the LogNormal law in severity. Indeed, the uncertainty is linked to a change in the parameters of these two laws, because the use of historical data alone can bias the estimation of the risk capital.
Consequently, the expert opinion makes it possible to define the a priori law on the one hand and to weight the experts' estimates on the other. To do this, we will estimate, with the business experts, the average loss defined by formula (4), which will allow us to determine the corresponding prior parameters of the frequency and severity laws.

Analysis and Collecting of Pertinent Information
In order to carry out the evaluation mission and ensure the acceptable reliability of the expert opinion, we collected a series of relevant information, such as: (1) The evolution of the bank's size in terms of net banking income, the number of transactions, the number of incidents, the size of the banking network, and the number of customer claims; (2) The organizational and business changes, such as the introduction of new products, the industrialization of sales, control and treatment processes, external audits, control activities, and the outsourcing of activities.
In our study, we weighted the expert opinion at 25%. However, the approach used is valid for any desired weighting. Therefore, we carried out an estimation with experts who can be weighted at 25%. To choose them, we drew up a list of experts from the operating entities and the persons in charge of incident reporting at the level of each business line, and we scaled the estimate that each of them can provide with a scoring system that we constructed. Then, we selected only those whose estimates can be weighted at 25%.
The determination of the score is made with the hierarchical managers and validated with internal audit and permanent control functions on the basis of the following elements: (1) Relevant expertise, academic and professional formation as well as professional experience; (2) The number of risk incidents declared and treated; (3) Knowledge and mastery of the control device; (4) The level of formation and, the knowledge of operational risk; (5) The level of knowledge of descriptive and inferential statistics; (6) Excellent communication abilities, flexibility, impartiality, and a capacity to generalize and simplify.
The score must yield a value that corresponds to a grid of 10%, 25%, 40%, 50%, and 75%. Each criterion is given a qualification of low, medium, or high. To calculate the score, a rating was assigned to each qualification. The ratings assigned are presented in Table 1 as follows:

Qualification   Low   Medium   High
Rating          1     2        3

The expert score function that we retained for our study is equal to the sum of the ratings assigned to all criteria, and the weighting is defined according to the score obtained. Expert weighting according to the score function is presented in Table 2 as follows:
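As an illustration, the scoring function can be sketched in code. This is a hypothetical sketch: the ratings follow Table 1 (low = 1, medium = 2, high = 3), but since Table 2 is not reproduced here, the score-to-weight thresholds below are illustrative assumptions, not the bank's actual grid.

```python
# Hypothetical sketch of the expert scoring function. Ratings follow
# Table 1; the score-to-weight thresholds are illustrative assumptions.

RATING = {"low": 1, "medium": 2, "high": 3}

# Assumed thresholds mapping a six-criterion score (range 6-18)
# to the 10/25/40/50/75% weighting grid.
WEIGHT_GRID = [(8, 0.10), (11, 0.25), (13, 0.40), (15, 0.50), (18, 0.75)]

def expert_score(qualifications):
    """Score = sum of the ratings assigned to all criteria (Table 1)."""
    return sum(RATING[q] for q in qualifications)

def expert_weight(score):
    """Map the score to the weighting grid (assumed thresholds)."""
    for upper_bound, weight in WEIGHT_GRID:
        if score <= upper_bound:
            return weight
    return WEIGHT_GRID[-1][1]

quals = ["high", "medium", "medium", "high", "low", "medium"]
print(expert_score(quals))                  # 13
print(expert_weight(expert_score(quals)))   # 0.4
```

An expert scoring 13 under these assumed thresholds would be weighted at 40%; only experts falling in the 25% band would be retained for the study.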
To choose the evaluators for permanent control and internal audit, we based our choice on the following elements: (1) Relevant expertise, academic and professional training, as well as professional experience; (2) The number of control and audit missions conducted annually; (3) The level of training in, and knowledge of, operational risk; (4) The level of knowledge of descriptive and inferential statistics; (5) Excellent communication abilities, flexibility, impartiality, and a capacity to generalize and simplify.
The designation of evaluators is made by consensus with the audit function and the permanent control function.
iii. Knowledge of the operation's objectives by the experts and training on those objectives.
After we selected the experts and evaluators, we organized an introductory session on the evaluation mission by presenting the main lines of the mission, the objectives, the speakers, and the implementation schedule. Then, the following elements were sent to the participants before launching the evaluation meetings and workshops: (1) The description of the operation's objectives; (2) The list of experts from the operating entities and persons in charge of incident reporting, as well as the hierarchical managers and the evaluators for internal audit and permanent control; (3) A summary description of the risks, tools, and operating system, as well as the organization and controls; (4) Basic terminology and definitions, including probability density, arithmetic and weighted mean, standard deviation, mode, median, etc.; (5) A detailed description of the process by which the meetings and workshops collecting expert opinions are conducted and their average duration; (6) Methods for aggregating expert opinions.
iv. Simulation, revision of assumptions, and estimates.
To obtain the expert's consent to the estimates, we proceeded as follows: (1) The expert estimates the average loss per risk category, which is used to determine the parameter λ of the frequency law and the parameter μ̂ of the severity law, knowing that σ is fixed at the value determined by maximum likelihood. These parameters are used to simulate, via Monte Carlo, three samples of realizations concerning, respectively, the individual loss X, the frequency N, and the annual loss S = ∑ X_i. Then, we analyze the characteristics of these samples with the expert, particularly the mean, median, minimum, and maximum values, etc.; (2) If the expert accepts the simulations and their characteristics, the estimates of the parameters λ, μ̂, and σ are validated; (3) If the expert rejects the simulations, we eliminate the outliers rejected by the expert and revise the expert's initial estimates and the proposed simulations in an iterative manner until the expert's consent is obtained.
v. Aggregation of estimates and validation.
In our study, the expert's estimate concerns the parameters λ and μ̂. Therefore, we need to aggregate the historical and expert estimates to determine the Bayesian estimator.

Determination of the Bayesian Estimator
In the theoretical study, we showed that the Bayesian estimators of the parameters of the severity and frequency distributions of losses are defined as follows: (1) For frequency, formula (19) defines the Bayesian estimator of λ by λ_B = ω λ_expert + (1 − ω) λ̂, where ω is the weight given to the expert estimate; (2) for severity, the Bayesian estimator of μ is defined analogously.
In our study, the weights for the frequency and severity estimators are fixed at 25%, which corresponds to the scores of the selected experts.
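The weighted combination can be sketched as follows. This is a minimal sketch assuming formula (19) takes the linear credibility form ω·(expert estimate) + (1 − ω)·(historical estimate), with ω = 25%; the numeric values are illustrative, not the paper's data.

```python
def bayesian_estimate(expert_est, hist_est, omega=0.25):
    """Linear credibility combination of expert and historical estimates
    (assumed form of formula (19)): omega*expert + (1 - omega)*historical."""
    return omega * expert_est + (1.0 - omega) * hist_est

# Illustrative values: expert lowers the frequency estimate slightly.
lam_b = bayesian_estimate(expert_est=10.0, hist_est=14.0)  # 13.0
mu_b = bayesian_estimate(expert_est=8.2, hist_est=8.6)
```

With ω = 0.25 the historical (maximum likelihood) estimate dominates, which matches the 25% weighting retained for the selected experts.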

Data Description
In this study, we used a database of loss incidents concerning the retail banking business line of a Moroccan banking institution. The database was constituted from the losses registered by the bank since the 1990s, as well as from audit reports and missions.
The database is composed of 3581 individual losses, i.e., 2069 distinct amounts. The descriptive statistics of the losses are summarized in Table 3 as follows. The distribution of the database by risk category shows that the losses of the category represent 45%, followed by those of , which represent 19%. In third position is , which represents 12%, followed by with 10%; the other categories represent 15%. The statistical characteristics of the individual losses by risk category are summarized in Table 4 (statistical characteristics of individual losses by risk category, in amounts) as follows. To determine the frequency of losses, we segment the database according to a semi-annual horizon. The choice of horizon is based on the data available for modelling, which must exceed 30 observations. The statistical characteristics of the frequency by risk category are presented in Table 5 as follows. The estimation of the parameters of the severity and frequency laws based on the observed data by risk category is presented as follows.

The Parameters of the Severity
The adjustment test of the data against the lognormal law LN(μ, σ) is based on the Kolmogorov-Smirnov test. The estimation of the parameters and the results of the adjustment tests by risk category are presented in Table 6 as follows. The Kolmogorov-Smirnov fit test shows that the data of all categories fit the lognormal law except the category .

The Parameters of the Frequency
The test for adjusting the frequency data against the Poisson law and the negative binomial law is based on the chi-square test. The estimation of the parameters and the results of the adjustment tests by risk category are presented in Table 7 as follows. The fit test shows that, at the 5% threshold, the data fit neither the Poisson law nor the negative binomial law, except for the category , which fits the negative binomial law. At the 1% threshold, no category fits the Poisson law, but all categories fit the negative binomial law, except the category , which does not.

Experts' Estimates
The mean annual loss is determined by risk category while maintaining the same allocation structure for the mean annual losses. The mean loss for the business line is defined as a percentage of the activity level of the business line. In our research, the activity level is deduced from the activity indicator presented above. For the bank studied, the semi-annual activity level of the retail banking line is equal to 4.5 million MAD. The experts' estimate of the mean loss for the business line is set at 1.5%, i.e., a mean loss of 67.5 million MAD, allocated by risk category in Table 8 as follows. The estimation by the experts is made in two steps: we first estimate the mean semi-annual frequency (λ) and then estimate the parameter μ from formula (9) using the mean loss per risk category.
Expert Estimation of Parameter λ.
The experts' estimate of parameter λ is based on the approach defined above. The expert gives a first estimate based on historical data. This estimate is used to simulate realizations of the Poisson law according to the algorithm presented in Appendix A. Then, the values judged to be outliers by the expert are deleted. We determine the new mean of the simulated sample after deleting these values, which is then confirmed by the expert. This simulation is repeated until the expert validates the mean frequency by risk category.
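The iterative validation loop can be sketched as below. This is a minimal sketch: the `accept` callback stands in for the expert's judgment on outliers, and the convergence tolerance and round limit are assumptions; in practice each round ends with the expert's explicit confirmation.

```python
import random

def poisson_draw(lam, rng):
    """One Poisson(lam) draw via sums of exponentials (see Appendix A)."""
    n, total = 0, rng.expovariate(lam)
    while total <= 1.0:
        n += 1
        total += rng.expovariate(lam)
    return n

def delphi_lambda(initial_lam, accept, n_sim=5000, max_rounds=20, seed=0):
    """Iteratively simulate Poisson frequencies, drop values the expert
    rejects as outliers, and re-estimate lambda until it stabilizes.
    `accept` stands in for the expert: returns True for values kept."""
    rng = random.Random(seed)
    lam = initial_lam
    for _ in range(max_rounds):
        sample = [poisson_draw(lam, rng) for _ in range(n_sim)]
        kept = [n for n in sample if accept(n)] or sample
        new_lam = sum(kept) / len(kept)
        if abs(new_lam - lam) < 1e-2:  # proxy for expert consent
            return new_lam
        lam = new_lam
    return lam

# Example: the "expert" rejects simulated half-years with > 12 incidents.
lam_validated = delphi_lambda(6.0, accept=lambda n: n <= 12)
```

Truncating the upper tail pulls the mean slightly below the initial estimate, mirroring how experts readjust historical frequencies downward after control improvements.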
The results of this approach are presented in Table 9 as follows.

Estimation of the Parameter μ
The experts' estimate of parameter μ from the estimates of the mean loss and the mean frequency is made using the following formulas, determined from formulas (7) and (9). As a result, the estimate of parameter μ by risk category is presented in Table 10 as follows. The Bayesian estimators of frequency and severity are determined by the relationships given above. As a result, the Bayesian estimators of severity and frequency by risk category, knowing that the variance σ is a constant determined by likelihood, are presented in Table 11 as follows. The determination of the VaR is made according to the approach presented above. The breakdown of the VaR based on the historical data and the Bayesian VaR by risk category is presented in Table 12 as follows. The use of expert opinion has permitted us to reduce the VaR by risk category. Indeed, the experts readjusted the parameters of the severity and frequency distributions for all categories in order to take into account organizational changes and the strengthening of the control framework.

Capital Allocation
The capital is allocated in accordance with formula (1). Each category benefits from a percentage of the capital allocated to the retail banking business line equal to the ratio of its VaR to the sum of the VaRs of all categories (VaR_i / ∑_j VaR_j). The capital share of each class is presented in Table 13 as follows. The allocation of capital in retail banking shows that the integration of expert opinion increases the capital assigned to certain types of risk, in particular categories and .
Indeed, the experts consider that the losses recorded do not represent the bank's actual exposure to these two risks because:

1. For , the database only includes proven losses, while risk events are generally adjusted without an accounting impact; however, they can have consequences if the recorded losses are not recovered; 2. For , the experts believe that fraud attempts target large amounts of money, especially those attempts that have not succeeded; had they succeeded, the impact would have been great.
On the other hand, our approach is sensitive to several factors: 1. The bank studied is a medium-sized bank whose main activity is the granting of bank loans. Therefore, the use of simple, easy-to-implement approaches is its principal concern; however, other allocation approaches can be used to refine the allocation process. 2. The approach we propose is based on the average loss per risk category, which favors the category . However, the collection approach used by the bank may bias the results because the bank accounts for losses per fraud file even if a fraud is composed of different amounts distributed over several years. 3. We have defined a list of criteria to score the experts and define their weighting, which makes the process very sensitive to the choice of scoring tool.

Discussion and Conclusions
Internal models permit us to determine the economic capital independently of the regulatory capital, and to assess the impact of the occurrence of risk events at the level of the different entities and at the aggregate level, under either the bottom-up or the top-down approach.
For the risk identification process, banks are free to use their own models to achieve the objective of risk supervision in accordance with the second pillar relating to prudential risk management. This situation encourages the use of internal models that can be based on historical data, expert opinion, or a combination of historical data and expert opinions.
The use of expert opinion is essential in risk management given the recurrent changes in organization, business size, and control framework. Indeed, expert opinions permit one to readjust estimates and assumptions based on historical data by considering the changes that have taken place.
The reliability of models incorporating expert opinions depends on the approach used to collect the requested information. Indeed, it is necessary to adopt rigorous procedures and approaches at the theoretical and practical levels in order to avoid model risk.
In this context, we have presented in this paper a process for collecting information from experts specific to operational risk, based on the Delphi method, which we believe will give relevant results for risk measurement if correctly administered.
For the prospects of internal models for quantifying operational risk, banks must separate regulatory capital requirements from internal requirements for managing the return/risk trade-off. Indeed, they must develop internal risk measures allowing them to manage their activities through risks and allocate the necessary equity capital for their business plans.
Author Contributions: All authors contributed to the entire process of writing this paper. Habachi Mohamed wrote the original draft and carried out the statistical studies; all authors reviewed and edited the draft; all authors have read and agreed to the published version of the manuscript.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Simulation of Aggregate Operational Losses
To simulate the losses, we use the appropriate estimators. For the classical approach, we use the maximum likelihood estimators λ̂, μ̂, and σ̂ of (λ, μ, σ), respectively the parameters of P(λ) and LN(μ, σ). For the Bayesian approach, we use the Bayesian estimators λ_B and μ_B.

Appendix A.1. Presentation of the Simulation by the Inverse Cumulative Distribution Function
The Monte Carlo method consists of simulating a large sample of realizations, of size 100,000, in the following manner. For j = 1, …, 100,000: (1) simulate a realization n_j of the frequency from the chosen frequency law P(λ); (2) simulate n_j realizations x_{i,j}, 1 ≤ i ≤ n_j, of the severity from the chosen severity law LN(μ, σ); (3) calculate s_j = ∑_{i=1}^{n_j} x_{i,j}, which constitutes a realization of the loss S = ∑_{i=1}^{N} X_i.
We first cite the theorem of the inverse cumulative distribution function, which allows the simulation of continuous random variables.
Theorem 1. Suppose U is a uniform random variable on the interval (0,1), and F is a cumulative distribution function that is continuous and strictly increasing. Let X be the random variable defined from the inverse cumulative distribution function by X = F⁻¹(U). Then, the cumulative distribution function of X is F.
Consequently, to simulate a realization x of the random variable X which has F as a cumulative distribution function, it suffices to:
 Simulate a realization u of the uniform distribution U(0,1);
 Calculate the inverse cumulative distribution function x = F⁻¹(u). Then, x is considered to be a realization of X.

Appendix A.1.1. Simulation of the Realizations n_j for 1 ≤ j ≤ 100,000
To simulate the realizations n_j of the frequency N, we use Poisson's distribution or the gamma distribution.
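The inverse-cdf recipe above has a closed form for the exponential law E(λ), which underpins the Poisson algorithm below; a minimal sketch:

```python
import math
import random

def exp_inverse_cdf(u, lam):
    """Inverse of the exponential cdf F(x) = 1 - exp(-lam*x),
    i.e. F^{-1}(u) = -ln(1 - u) / lam."""
    return -math.log(1.0 - u) / lam

# Simulate one realization of E(lam): u ~ U(0,1), then x = F^{-1}(u).
u = random.random()
x = exp_inverse_cdf(u, lam=2.0)
```

The same two steps apply to any continuous law whose cdf can be inverted, numerically if no closed form exists (as for the lognormal below).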

Property:
Let (E_i) be a sequence of independent exponential random variables of parameter λ. Then, the random variable N defined by N = max{n ∈ ℕ* : ∑_{i=1}^{n} E_i ≤ 1}, with N = 0 if E_1 > 1, is a Poisson random variable of parameter λ.
To simulate the realizations of Poisson's law of parameter λ, we use the following algorithm.
Step 1: Simulation of n_1. To simulate the first realization of the frequency, we proceed as follows.
1. We simulate a realization e_1 of the exponential law E(λ) by the inverse cumulative distribution function. For that, we must:  simulate a realization u of the uniform law U(0,1);  define the cumulative distribution function of the exponential law by F(x) = 1 − e^(−λx). We then deduce e_1 = −ln(1 − u)/λ.
2. If e_1 > 1, then n_1 = 0. If not, we simulate a second realization e_2 of the exponential law according to procedure 1. If e_1 + e_2 > 1, then n_1 = 1 is a realization of the Poisson law of parameter λ; otherwise, we simulate realizations e_i, i ≥ 1, until ∑_{i=1}^{k} e_i ≤ 1 and ∑_{i=1}^{k+1} e_i > 1. The value k that verifies these last two inequalities is the realization n_1 of the frequency.
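The steps above can be rendered directly in code; a minimal sketch:

```python
import math
import random

def simulate_poisson(lam, rng=random):
    """One realization of P(lam): accumulate exponential draws e_i,
    obtained by the inverse cdf, until their sum exceeds 1. The count k
    of draws whose cumulative sum stays <= 1 is the Poisson realization."""
    k, total = 0, 0.0
    while True:
        u = rng.random()
        total += -math.log(1.0 - u) / lam   # e_i = F^{-1}(u) for E(lam)
        if total > 1.0:
            return k
        k += 1

# The sample mean over many draws should be close to lam.
rng = random.Random(42)
draws = [simulate_poisson(4.0, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
```

Repeating this 100,000 times, as in Step j, yields the sample n_1, …, n_100,000 of frequencies used in the loss simulation.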
Step j: Simulation of n_2, …, n_100,000. We repeat step 1 100,000 times and thereby obtain 100,000 realizations of N.
To simulate the severity law LN(μ, σ), we use the inverse cumulative distribution function method as follows: 1. Simulate a realization u of the uniform law U(0,1); 2. Calculate x = F⁻¹_{μ,σ}(u), where F_{μ,σ} is the cumulative distribution function of the law LN(μ, σ). As F⁻¹_{μ,σ} has no analytical expression, we evaluate it numerically.

Determination of Operating Losses
For each realization n_j of the frequency law, we have to simulate n_j realizations of the severity law. The simulated loss is the sum of the simulated realizations: s_j = ∑_{i=1}^{n_j} x_{i,j}.
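Putting the pieces together, the full simulation of Appendix A can be sketched as follows. Poisson frequencies use the exponential-sum method above; for the lognormal severities this sketch uses Python's built-in generator rather than numerical cdf inversion, and the parameter values are illustrative.

```python
import math
import random

def simulate_annual_losses(lam, mu, sigma, n_sims=100_000, seed=1):
    """Simulate n_sims realizations of S = sum_{i=1}^{N} X_i,
    with N ~ P(lam) and X_i ~ LN(mu, sigma)."""
    rng = random.Random(seed)

    def poisson():
        # Exponential-sum method: count draws whose cumulative sum <= 1.
        n, total = 0, rng.expovariate(lam)
        while total <= 1.0:
            n += 1
            total += rng.expovariate(lam)
        return n

    return [sum(rng.lognormvariate(mu, sigma) for _ in range(poisson()))
            for _ in range(n_sims)]

# Illustrative parameters; E[S] = lam * exp(mu + sigma^2 / 2).
losses = simulate_annual_losses(lam=5.0, mu=1.0, sigma=0.5, n_sims=20_000)
```

The sample mean of the simulated losses should be close to λ·e^(μ + σ²/2), which provides a quick sanity check on the simulation before computing the percentile.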

Appendix A.2. Calculation of the Capital at Operational Risk (VaR)
The capital at operational risk is calculated by determining the 99.9% percentile of the empirical distribution of the losses s_j = ∑_{i=1}^{n_j} x_{i,j}, for 1 ≤ j ≤ 100,000, as simulated by Monte Carlo.
Let F_S be the empirical cumulative distribution function of the loss S determined from the simulated realizations s_j, given by F_S(x) = (1/100,000) ∑_j 1{s_j ≤ x}. The value at risk is expressed by the following formula: VaR_{99.9%} = inf{x : F_S(x) ≥ 99.9%}. In this paper, the frequency is modelled for a horizon of one year (h = 12 months) or by dividing the year into sub-horizons h = 12/k for an integer 2 ≤ k ≤ 12.
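The empirical 99.9% percentile is simply an order statistic of the simulated losses: the smallest value at which the empirical cdf reaches 99.9%. A minimal sketch:

```python
import math

def var_999(losses):
    """VaR at 99.9%: the smallest simulated loss x such that the
    empirical cdf F_S(x) >= 0.999, i.e. the ceil(0.999*n)-th order
    statistic of the sample."""
    ordered = sorted(losses)
    idx = math.ceil(0.999 * len(ordered)) - 1   # 0-based index
    return ordered[idx]

# With 100,000 simulated losses this picks the 99,900th order statistic.
```

Applied per risk category, the same function yields the VaR_i summed in Appendix A.2.1.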
Appendix A.2.1. The Annual VaR with Segmentation of the Database by Risk Category
The operational loss of risk category i is a random variable defined by S_i = ∑_{k=1}^{N_i} X_{i,k}, where:  N_i is the random variable that represents the frequency of losses of risk category i;  X_{i,k} is the random variable, for 1 ≤ k ≤ N_i, that represents the severity of the losses of risk category i.
Let n_{i,j}, 1 ≤ j ≤ 100,000, be the simulated annual frequencies of the losses for risk category i, and let x_{i,k,j} be the simulated realizations of the losses of risk category i. The realizations s_{i,j} = ∑_{k=1}^{n_{i,j}} x_{i,k,j}, 1 ≤ j ≤ 100,000, make it possible to calculate the capital at risk VaR_i for each risk category i. The annual VaR is the sum of the VaR_i because it is assumed that the risk categories are independent. The modelling of the loss frequency is made for a horizon of one year (h = 12 months) or by dividing the year into sub-horizons h = 12/k for an integer 2 ≤ k ≤ 12.
Appendix A.2.2. The Modelling of the Loss Frequency for the Annual Horizon
The horizon chosen is one year (h = 12 months) and the confidence level is 1 − α = 99.9%.
Let F_{S_i} be the empirical cumulative distribution function of the losses for risk category i. The capital at operational risk for risk category i is VaR_i = inf{x : F_{S_i}(x) ≥ 99.9%}. The capital at risk on the annual horizon is the sum of the VaR_i: VaR = ∑_i VaR_i.

Appendix A.2.3. The Modelling of the Loss Frequency for the Sub-Horizon h = 12/k, 2 ≤ k ≤ 12
Let F^h_{S_i} be the empirical cumulative distribution function of the operational loss of a given risk category for the horizon h = 12/k, determined from the simulated realizations s_{i,j}, with n_{i,j} as a realization of the frequency of losses on horizon h. The cumulative distribution function is simulated k times on the horizon h. Let F^{h,m} be the mth simulation and VaR^{h,m} be the mth capital at operational risk determined from the mth simulation of the losses. The capital at risk on the annual horizon is the sum of the VaR^{h,m}, 1 ≤ m ≤ k.