Article

An Automated Decision Support System for Portfolio Allocation Based on Mutual Information and Financial Criteria

by Massimiliano Kaucic 1,2,*, Renato Pelessoni 1,2 and Filippo Piccotto 1,2

1 Department of Economic, Business, Mathematical and Statistical Sciences, University of Trieste, 34127 Trieste, Italy
2 SOFI Lab, Soft Computing Laboratory for Finance and Insurance, University of Trieste, 34127 Trieste, Italy
* Author to whom correspondence should be addressed.
Entropy 2025, 27(5), 480; https://doi.org/10.3390/e27050480
Submission received: 11 March 2025 / Revised: 19 April 2025 / Accepted: 28 April 2025 / Published: 29 April 2025
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)

Abstract: This paper introduces a two-phase decision support system based on information theory and financial practice to assist investors in solving cardinality-constrained portfolio optimization problems. First, the approach employs a stock-picking procedure based on an interactive multi-criteria decision-making method (the so-called TODIM method). More precisely, the best-performing assets from the investable universe are identified using three financial criteria. The first criterion is based on mutual information and is employed to capture the microstructure of the stock market. The second is the momentum, and the third is the upside-to-downside beta ratio. To calculate the preference weights used in the chosen multi-criteria decision-making procedure, two methods are compared, namely equal and entropy weighting. In the second stage, this work considers a portfolio optimization model where the objective function is a modified version of the Sharpe ratio, consistent with the choices of a rational agent even when faced with negative risk premiums. Additionally, the portfolio design incorporates a set of bound, budget, and cardinality constraints, together with a set of risk budgeting restrictions. To solve the resulting non-smooth programming problem with non-convex constraints, this paper proposes a variant of the distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX) algorithm equipped with a hybrid constraint-handling approach. Numerical experiments on the US and European stock markets over the past ten years are conducted, and the results show that the flexibility of the proposed portfolio model allows better control of losses, particularly during market downturns, thereby providing superior or at least comparable ex post performance with respect to several benchmark investment strategies.

1. Introduction

The portfolio selection process typically involves two stages. The first phase comprises the selection of the most promising stocks to be included in the optimization, while the second concerns the optimal wealth allocation among the portfolio constituents.
Ranking and selecting stocks from an investment basket is a challenge that has been addressed in the literature in several ways, such as traditional stock-picking techniques based on factor models [1,2] or novel approaches based on machine learning [3,4]. When the stock-picking process involves many different and conflicting financial criteria, it falls into the realm of multi-criteria decision-making (MCDM) problems. This family of methods strongly supports the portfolio selection practice, since it provides a comprehensive range of techniques that tackle issues arising in the stock-picking phase. In recent times, various multi-criteria decision-making methods have been employed to rank superior securities and construct optimal portfolios, such as the preference ranking organization method for enrichment evaluation [5,6], the technique for order of preference by similarity to ideal solution [7,8], and the multi-criteria optimization and compromise solution [9], to mention some of the most often used. For a comparison of the performance of several multi-criteria decision-making techniques, we refer the reader to [10,11]. To account for decision-makers' behavioral traits when making decisions in the presence of risk and uncertainty, ref. [12] introduced the TODIM method (the Portuguese acronym for interactive multi-criteria decision-making). Specifically, this approach incorporates the principles of prospect theory [13] to define the value function that ranks alternatives across criteria, taking investors' behavioral characteristics into account. Recently, the TODIM method has gained increasing interest in the literature. In [14], the authors applied this procedure to rank 462 equities from the constituents of the S&P 500 Index by adopting nine financial criteria. They considered several portfolio cardinalities and constructed equally weighted or ranking-based weighted portfolios, finding that investments built through this technique yielded better results in terms of the Sharpe ratio. Ref. [15] extended this framework by embedding the TODIM method in a multi-objective portfolio selection model based on the mean–variance paradigm to optimize the portfolio constituents, with promising results; their model was tested on a limited number of assets from the Chinese stock market.
Although still highly influential, Markowitz’s pioneering mean–variance framework [16] faces numerous practical challenges when applied in real-world scenarios. Moreover, the need to meet the demands of practitioners and institutional investors, who are increasingly engaged in the portfolio design process, has resulted in considerable research aimed at developing alternative optimization approaches. In particular, the so-called risk parity framework has become a mainstream asset allocation approach, gaining widespread relevance in both industry and academia [17,18]. This strategy allocates wealth in such a way that the risk contribution per asset to the portfolio risk is equalized, focusing on managing the different sources of risk involved in the investment process and introducing the idea of risk diversification. However, the existence and uniqueness of a solution to the risk parity portfolio problem is guaranteed only in some particular cases [19,20]. Furthermore, optimizing a portfolio to reach risk parity compliance neglects the performance dimension of an investment, which represents a necessity for most categories of investors. Consequently, some authors have proposed asset allocation problems with different performance objectives while imposing the parity condition as a portfolio constraint [21,22]. It is worth noting that risk parity is a special case of the more general risk budgeting approach, where assets are given a predetermined weight, called a budget, as a percentage of the total portfolio risk [23,24].
This paper presents a novel automated decision support system built upon two interconnected modules tailored to solving portfolio optimization models that maximize a financial performance measure subject to real-world constraints, particularly cardinality constraints. This knowledge-based structure is illustrated in Figure 1. The first module employs the TODIM procedure to develop a ranking of the stocks with respect to several financial criteria identified by the end user. With this step, we can bypass the explicit management of the cardinality constraint within the optimization process. In the second module, we determine the optimal portfolio weights using a version of the distance-based parameter adaptation for success-history-based differential evolution algorithm with double crossover (DISH-XX [25]). The synergy between these two modules can address the necessities of various types of end users who wish to be directly involved in the design of the portfolio strategy. The system developed in this paper parallels the one proposed in [26], with the differences being (i) a more practical approach aimed at supporting practitioners, (ii) its application in a single- and not a multi-objective setting, and (iii) the consideration of information theory—and, in particular, entropy—in choosing the portfolio constituents.
Furthermore, the capabilities of our automated financial management system are evaluated by examining two instances of a portfolio optimization problem, where the objective function to optimize is a modified version of the Sharpe ratio [27]. In this refined performance measure, when the portfolio excess return is negative, it is multiplied by the standard deviation instead of being divided by it. This version aligns with the risk–return preferences of a rational investor, even if the risk premiums are negative [28]. We consider the outlook of an institutional investor or a portfolio manager who seeks to operate in large equity markets by selecting a limited pool of stocks to form a portfolio with a suitable performance while maintaining risk diversification. To achieve this goal, we introduce the following real-world constraints. Firstly, a cardinality constraint limits the number of assets to a reasonable portfolio size. Then, a budget constraint ensures that we allocate all of the available capital, while box constraints prescribe lower and upper bounds on the fraction of capital invested in each asset. The resulting portfolio model is similar to the one analyzed in [29], where the authors empirically demonstrated notable financial profitability across many ex post performance metrics. In the second instance, along with the previous restrictions, this work introduces the direct control of the portfolio risk in the optimization phase using a set of risk budgeting constraints. More precisely, a tolerance threshold allowing for minor upper and lower deviations from the parity is adopted, resulting in a formulation similar to the one developed in [30]. This approach relaxes the risk parity conditions and provides greater flexibility in determining the risk contributions of the portfolio constituents. The resulting formulations are mixed-integer optimization problems that belong to the family of cardinality-constrained portfolio optimization problems, which are NP-hard [31]. To address this challenging class of optimization problems, a number of researchers in the fields of finance and computer science have directed their attention towards metaheuristics, given their demonstrated simplicity and effectiveness [32]. For instance, the beetle antennae search algorithm has attracted significant interest due to its computational efficiency and global convergence properties, rendering it suitable for solving portfolio optimization problems subject to real-world constraints such as transaction costs and cardinality limits [33]. Furthermore, the integration of artificial neural dynamics into portfolio optimization models has demonstrated substantial enhancements in computational efficiency and solution accuracy when compared to traditional methodologies [34,35]. The present work falls within this line of research, focusing on the differential evolution algorithm (DE [36]), a metaheuristic extensively used to solve single- and multi-objective asset allocation problems [37,38]. Specifically, it considers a recently developed augmented version of DE, called distance-based parameter adaptation for success-history-based differential evolution with double crossover (DISH-XX), which has shown very competitive results on several benchmark functions from real-world engineering problems [25]. To the best of our knowledge, this paper is the first to apply this algorithm to portfolio optimization.
Section 2.1 provides a detailed description of the progression from DE to DISH-XX. Since the original version of this solver is blind to non-bound constraints, it is equipped with an ensemble of constraint-handling techniques. Specifically, a repair mechanism is used to manage box and budget constraints, as in [39]. Subsequently, for risk budgeting constraints, the proposed constraint-handling approach accelerates the convergence process towards the feasible region by applying a gradient-based mutation [40] and uses the ε-constrained method [41] to transform the constrained optimization problem into an equivalent unconstrained one.
Note that selecting the alternative criteria for the preliminary stock-picking phase and establishing their relative preferences constitutes a crucial step in the TODIM method. Specifically, this work considers a complementary set of three financial criteria, each providing a unique perspective. Firstly, it adopts a peripherality measure for the stocks based on mutual information, similar to the approach discussed in [42]. The mutual information dimension focuses on the microstructure of the stock market to capture the full spectrum of assets' dependencies. Recently, many scholars have investigated the capabilities of mutual information and entropy as investors' tools to make choices under uncertain conditions. Section 2.2 provides an overview of the most recent literature contributions in this field. Then, the second and third criteria are the momentum based on the most recent monthly returns and the upside-to-downside beta ratio, respectively. The former highlights the ability of stocks to generate value over time, and the latter assesses the responsiveness of a stock to market upswings and downswings. Furthermore, to calculate the preference weights associated with the three aforementioned criteria, this work adopts two approaches. The first consists of a static method where the three criteria have the same importance, without introducing any relative preference. The second technique uses the entropic information carried by the criteria to evaluate their contributions, dynamically adjusting the relative preferences over the investment period.
The following are the main literature contributions of this paper.
  • We develop a knowledge-based financial management system to solve cardinality-constrained portfolio optimization problems. This expert system is built upon two interconnected modules. On the one hand, a multi-criteria decision analysis technique called TODIM handles the cardinality constraint. On the other hand, the DISH-XX algorithm is extended with an ensemble of constraint-handling techniques and a gradient-based mutation.
  • This study introduces two portfolio selection models where the objective function to maximize is a modified version of the Sharpe ratio under some real-world constraints. The first instance considers cardinality, box, and budget constraints. The second one introduces a set of risk budgeting constraints to provide explicit control of risk.
  • When running the TODIM procedure for the preliminary ranking, we use three complementary financial criteria, namely the peripherality measure based on mutual information, the momentum measure, and the upside-to-downside beta ratio.
  • To set up the relative preference weights of the three criteria, an equally weighted method and an entropy-based method are adopted.
  • An extensive experimental analysis is conducted considering the two most significant indices of the American and European stock markets, namely the S&P 500 and the STOXX Europe 600.
  • The empirical part validates the profitability of our investment strategy considering several ex post performance metrics and compares the two portfolio models described above against some alternatives that pre-select the stocks using the criteria individually, as well as the market benchmark.
The remainder of this paper is structured as follows. Section 2 presents some related works regarding the differential evolution algorithm and the use of information theory for portfolio optimization. Section 3 illustrates the two instances of the portfolio model. Section 4 and Section 5 describe the two modules that compose the developed decision support system. Section 6 presents the experimental analysis, discussing the data and investment setup and showing the ex post performance results. Finally, Section 7 concludes the paper, summarizing the main findings, illustrating potential research limitations, and suggesting some future research directions.

2. Related Works

2.1. From DE to DISH-XX

The differential evolution algorithm works according to the following steps. The algorithm starts by randomly sampling an initial population of candidate solutions. Then, it iteratively produces new trial vectors through mutation and crossover phases. If a new individual outperforms the original one, it survives and progresses to the next generation. This iterative process continues until it satisfies some stopping conditions, and the algorithm returns the best-found solution to the optimization problem. The original algorithm, developed in [36], includes three user-defined control parameters: the population size $NP$, the scaling factor $F$, and the crossover rate $Cr$. Over recent decades, scholars have proposed several enhanced versions of this algorithm. These advancements typically involve using refined mutation schemes, introducing external archives to store the most promising solutions, and adaptively determining the parameters $NP$, $F$, and $Cr$. For a comprehensive survey of the latest advancements in the field of differential evolution-based algorithms, see [43]. In [44], the authors introduced an influential variant that uses a control parameter adaptation strategy. Specifically, this version samples the parameters $F$ and $Cr$ from a probability distribution and stores successful values in an external archive. An improved version of the latter, called success-history-based differential evolution, was proposed in [45]. Instead of sampling $F$ and $Cr$ from gradually adapted probability distributions, the authors proposed to use historical archives ($M_F$ and $M_{Cr}$) to store effective parameter values from recent generations. The algorithm then generates new $F$ and $Cr$ parameters by sampling near these stored pairs. Next, the same authors introduced a linearly decreasing function that adaptively reduces the population size over the generations [46]. In [47], the authors proposed an update to the scaling factor and a crossover rate adaptation that exploits information from the Euclidean distance between the trial and the original individual. They called this new algorithm distance-based success history differential evolution (DISH), proving its superior performance over several versions of DE. The DISH-XX algorithm [25] used as a solver in this paper is a refined version of DISH that introduces a secondary crossover between the trial vector and one of the historically best-found solutions randomly selected from the archive.
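For concreteness, the following minimal Python sketch implements the classical DE/rand/1/bin scheme with fixed $NP$, $F$, and $Cr$ on a box-constrained minimization problem; all names are illustrative, and none of the adaptive components of the variants discussed above are included.

```python
import numpy as np

def de_rand_1_bin(f, lb, ub, NP=50, F=0.5, Cr=0.9, max_gen=500, seed=0):
    """Minimal DE/rand/1/bin for minimizing f over the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    K = lb.size
    pop = rng.uniform(lb, ub, size=(NP, K))        # random initial population
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for p in range(NP):
            # mutation: three mutually distinct indices, all different from p
            r1, r2, r3 = rng.choice([i for i in range(NP) if i != p], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lb, ub)
            # binomial crossover with one guaranteed component j_rand
            j_rand = rng.integers(K)
            mask = (rng.random(K) <= Cr) | (np.arange(K) == j_rand)
            u = np.where(mask, v, pop[p])
            # one-to-one greedy selection
            fu = f(u)
            if fu <= fit[p]:
                pop[p], fit[p] = u, fu
    best = fit.argmin()
    return pop[best], fit[best]
```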

2.2. Information Theory in Portfolio Optimization

Metrics from information theory, especially entropy, have led to significant literature contributions in portfolio theory. For instance, several authors have used entropy as a proxy for portfolio risk, starting with the seminal paper [48]. To mention some recent works, ref. [49] proposed a return–entropy portfolio model and compared it with the classical Markowitz strategy. The work in [50] prescribes a setup for portfolio optimization where entropy and mutual information are used instead of variance and covariance as risk measurements. In this approach, the mutual information measures the statistical independence between two random variables, and it is used as a more general approach to capture nonlinear relationships [51]. Along with using entropy as a risk measure, several contributions have used this metric to quantify portfolio diversification. In [52], the authors proposed a model that aims to maximize the entropy of the portfolio weight vector, extending the classical Markowitz framework by adding a control on diversification. In [53], this approach has been broadened by suggesting a mean–variance–skewness–entropy multi-objective optimization model. More recently, in [54], the authors managed the mean, variance, and entropy objectives using a self-adapting parameter λ that adjusts to the market conditions.
In recent times, researchers in finance have considered markets as networks in which stocks correspond to nodes and the links are related to the correlations of returns. In [55], the authors used network theory to select stocks from the peripheral regions of the financial filtered networks, finding that they performed better than stocks belonging to the networks’ central zones. The work in [56] bridges the gap between the mean–variance and network theories. In particular, a negative relationship between optimal portfolio weights and the centrality of assets in the financial market network has been evidenced. In [57], the authors tested various dependence measures, such as the Pearson and Kendall correlations and lower tail dependence, to construct interconnected graphs and build optimal mean–variance portfolios. Moreover, a trend has emerged in the literature where, instead of using canonical correlation measures, researchers employ mutual information to capture nonlinear dependencies among stocks and describe the microstructure of the financial market. The foundational work in this area is the paper [58], where the authors constructed minimum spanning trees based on the mutual information between stocks in the Chinese stock market. By applying this methodology and combining it with the approach suggested in [55], some authors have considered a measure of asset centrality based on mutual information and have proposed various stock-picking techniques for portfolio construction [42]. This paper follows the latter approach to establish one of the three criteria employed within the multi-criteria decision-making module.

3. Portfolio Models

3.1. Investment Strategy Setup

This paper considers a frictionless market that does not allow for short selling, in which all investors act as price takers. The investable universe consists of $n \geq 2$ risky assets, and a portfolio is denoted by the vector of weights $\mathbf{w} = (w_1, \ldots, w_n) \in \mathbb{R}^n$, where $w_i$ represents the proportion of capital invested in asset $i$, with $i = 1, \ldots, n$. $R_i$ indicates the random variable representing the rate of return of asset $i$, and $\mu_i$ is its expected value. Hence, the random variable $R_p(\mathbf{w}) = \sum_{i=1}^n w_i R_i$ expresses the portfolio rate of return, while the expected rate of return of portfolio $\mathbf{w}$ is defined as
$$\mu_p(\mathbf{w}) = \sum_{i=1}^n w_i \mu_i$$
and its volatility is given by
$$\sigma_p(\mathbf{w}) = \sqrt{\sum_{i=1}^n \sum_{j=1}^n \sigma_{ij} w_i w_j} = \sqrt{\mathbf{w}' \Sigma \mathbf{w}}$$
where $(\Sigma)_{ij} = \sigma_{ij}$ is the covariance between assets $i$ and $j$, with $i, j = 1, \ldots, n$, and the covariance matrix $\Sigma$ is assumed to be positive definite. Since investors perceive large deviations from the portfolio mean value as damaging, Equation (2) represents the so-called portfolio risk.
Given this framework, a portfolio that provides the maximum return for a given level of risk or, equivalently, has the minimum risk for a given level of return is called efficient. This decision-making approach is widely known as mean–variance analysis, and the set of optimal mean–variance trade-offs in the risk–return space forms the efficient frontier [16]. In this setting, the so-called Sharpe ratio identifies the best investment among efficient portfolios. This performance measure is defined as
$$SR(\mathbf{w}) = \frac{\mu_p(\mathbf{w}) - r_f}{\sigma_p(\mathbf{w})}$$
and expresses the net compensation, with respect to a risk-free rate $r_f$, earned by the investor per unit of risk. However, the reliability of this performance measure decreases when the portfolio excess return $\mu_p(\mathbf{w}) - r_f$ is negative, since (in some cases) an investor would select a higher-risk portfolio using the Sharpe ratio. To overcome this issue, the proposed portfolio selection model considers the so-called modified Sharpe ratio [28], defined as
$$MSR(\mathbf{w}) = \frac{\mu_p(\mathbf{w}) - r_f}{\sigma_p(\mathbf{w})^{\operatorname{sign}(\mu_p(\mathbf{w}) - r_f)}}$$
where $\operatorname{sign}(z)$ is the sign function of $z \in \mathbb{R}$. Observe that, if the portfolio excess return is non-negative, the modified Sharpe ratio is equal to the Sharpe ratio. Otherwise, it multiplies the portfolio excess return by the standard deviation. In this manner, even in periods of market downturn, portfolios with lower risk and a higher excess return will be preferred.
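For illustration, given estimates of $\boldsymbol{\mu}$ and $\Sigma$, the modified Sharpe ratio of Equation (4) can be computed as in the following Python sketch (variable names are ours); for a negative excess return, the exponent $\operatorname{sign}(\cdot) = -1$ turns the division by $\sigma_p$ into a multiplication.

```python
import numpy as np

def modified_sharpe_ratio(w, mu, Sigma, rf=0.0):
    """Modified Sharpe ratio of Eq. (4): the excess return is divided by
    sigma_p when non-negative and multiplied by sigma_p when negative."""
    excess = float(w @ mu) - rf              # portfolio excess return
    sigma_p = float(np.sqrt(w @ Sigma @ w))  # portfolio volatility
    return excess / sigma_p ** np.sign(excess)
```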

3.2. First Proposed Model

This paper considers a portfolio model that is similar to the one inspected in [29]. The aim is to maximize the modified Sharpe ratio illustrated in Equation (4) subject to the following real-world constraints.
  • Budget. Since all available capital needs to be invested at each investment window, the following holds:
    $$\sum_{i=1}^n w_i = 1.$$
  • Cardinality. The portfolio includes exactly $K$ assets, where $K < n$. To model the inclusion or exclusion of the $i$th asset in the portfolio, a binary variable $\delta_i$ is introduced as
    $$\delta_i = \begin{cases} 0 & \text{if asset } i \text{ is excluded} \\ 1 & \text{if asset } i \text{ is included} \end{cases}$$
    for $i = 1, \ldots, n$, where $\boldsymbol{\delta} = (\delta_1, \ldots, \delta_n) \in \{0, 1\}^n$, and the cardinality constraint can be written as
    $$\sum_{i=1}^n \delta_i = K.$$
    Then, $I_K = \{ i \in \{1, \ldots, n\} : w_i > 0 \}$ denotes the set of active portfolio weights, with $|I_K| = K$.
  • Box. A balanced portfolio should avoid extreme positions and foster diversification. Hence, maximum and minimum limits for the portfolio weights are imposed, expressed by
    $$\delta_i\, lb_i \leq w_i \leq \delta_i\, ub_i, \quad i = 1, \ldots, n,$$
    where $lb_i$ and $ub_i$ are the lower and upper bounds for the weight of the $i$th asset, respectively, with $0 < lb_i < ub_i \leq 1$ to exclude short sales.
The result is a mixed-integer optimization model that requires some ad hoc techniques to be practically handled. In this paper, instead of directly handling the cardinality constraint in the optimization process, as in [29,31], the TODIM procedure described in Section 4 is used to perform preliminary stock selection and bypass the cardinality issue. Then, a metaheuristic is employed to search for optimal solutions for the reduced portfolio allocation problem.

3.3. Risk Budgeting Approach

To control the degree of risk-based diversification between the portfolio constituents, the risk budgeting portfolio [23] is introduced. This approach allocates the risk according to the profile described by the vector $\mathbf{b} = (b_1, \ldots, b_n)$, with $0 < b_i < 1$, $i = 1, \ldots, n$, and $\sum_{i=1}^n b_i = 1$, such that
$$RC_i(\mathbf{w}) = b_i\, \sigma_p(\mathbf{w}) \quad \forall i,$$
where $RC_i(\mathbf{w}) = \dfrac{w_i (\Sigma \mathbf{w})_i}{\sqrt{\mathbf{w}' \Sigma \mathbf{w}}}$ denotes the risk contribution of the $i$th stock to the portfolio risk.
Notice that the risk budgeting approach represents a relaxation of the more restrictive risk parity conditions, since the risk parity portfolio occurs when b i = 1 / n for all i. Appendix A.1 illustrates in detail the basics of the risk parity framework.

3.4. Proposed Risk Budgeting Formulation for the Second Portfolio Model

Inspired by the risk budgeting setup, this paper designs an investment strategy in which the admissible deviations from risk parity are bounded by the investor's risk profile according to the following set of inequalities:
$$(1 - \underline{\nu})\, \frac{\sigma_p(\mathbf{w})}{n} \leq RC_i(\mathbf{w}) \leq (1 + \overline{\nu})\, \frac{\sigma_p(\mathbf{w})}{n} \quad \forall i$$
where $0 \leq \underline{\nu} \leq \overline{\nu} < 1$. Note that, for $\underline{\nu} = \overline{\nu} = 0$, Equation (9) reduces to the risk parity condition, while increasing values of these parameters admit portfolios with increasing deviations of the risk contributions from the parity condition and thus greater risk concentrations. It can be proven that any optimization problem that involves the use of Equation (9) is non-convex. More details regarding the non-convexity of the risk budgeting formulation are given in Appendix A.2.
Summing up, the second instance of the portfolio model that optimizes the MSR measure of Equation (4) considers budget, bound, and cardinality constraints as specified in Equations (5)–(7), while introducing the direct control of the portfolio risk according to the set of constraints outlined in Equation (9).
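As an illustrative sanity check on the band in Equation (9), the risk contributions of a candidate portfolio can be computed directly; the helper below (names are ours) verifies the admissible interval.

```python
import numpy as np

def risk_contributions(w, Sigma):
    """RC_i = w_i (Sigma w)_i / sqrt(w' Sigma w); the RC_i sum to sigma_p(w)."""
    return w * (Sigma @ w) / np.sqrt(w @ Sigma @ w)

def within_risk_budget_band(w, Sigma, nu_lo, nu_hi):
    """Check (1 - nu_lo) sigma_p/n <= RC_i <= (1 + nu_hi) sigma_p/n for all i."""
    rc, n = risk_contributions(w, Sigma), w.size
    sigma_p = np.sqrt(w @ Sigma @ w)
    return bool(np.all(rc >= (1 - nu_lo) * sigma_p / n)
                and np.all(rc <= (1 + nu_hi) * sigma_p / n))
```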

4. Multi-Criteria Decision Analysis Module

To address the cardinality constraint (6) and develop a portfolio model involving only real variables, the proposed approach selects assets with higher rankings based on a set of criteria, using the TODIM method.

4.1. TODIM Generalities

The TODIM method facilitates decision-making by evaluating the importance of each criterion according to the subjective preferences of each investor. It consists of the following sequential steps; a compact sketch of the full procedure is given after the list.
  • Constructing the multi-criteria decision-making matrix between criteria and alternatives. Given $m$ alternatives $\{a_1, \ldots, a_m\}$ and $s$ criteria $\{c_1, \ldots, c_s\}$, the decision matrix $A = (a_{i,j})_{m \times s}$ is expressed as
    $$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,s} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,s} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,s} \end{pmatrix}$$
    where $a_{i,j}$ is the performance evaluation of $a_i$ under criterion $c_j$.
  • Determining the criteria weights. In this step, the criteria weighting vector $\beta = (\beta_1, \ldots, \beta_s)$, which satisfies $0 \leq \beta_j \leq 1$ and $\sum_j \beta_j = 1$, needs to be determined. This vector defines the relative preference degree of the procedure toward the $s$ criteria. Two weighting schemes are analyzed in this paper. The first assigns the same weight to each criterion to avoid any prior preference for a specific criterion in the TODIM structure. The second utilizes the entropy weight method [59]. The contribution of alternative $a_i$ to criterion $c_j$ is calculated as
    $$\lambda_{i,j} = \frac{a_{i,j}}{\sum_{i=1}^m a_{i,j}}, \quad i = 1, \ldots, m \text{ and } j = 1, \ldots, s.$$
    Next, the entropy value $en_j$ for the $j$th criterion is given by
    $$en_j = -\frac{1}{\ln(m)} \sum_{i=1}^m \lambda_{i,j} \ln(\lambda_{i,j}), \quad j = 1, \ldots, s$$
    where $en_j$ summarizes the total contribution of all alternatives to criterion $c_j$. If $\lambda_{i,j} = 0$, it is assumed that $\lambda_{i,j} \ln(\lambda_{i,j}) = 0$. After obtaining the entropy values, the entropy weight $\beta_j$ is
    $$\beta_j = \frac{1 - en_j}{s - \sum_{j=1}^s en_j}, \quad j = 1, \ldots, s.$$
  • Binning and normalizing the criteria matrix. The third step transforms the raw criteria matrix $A$ into a transformed matrix $A'$ by binning each element into 10 bins. Specifically, if a criterion is considered a benefit, a value of 10 is assigned to the alternatives in the top 10% for that criterion. Conversely, if the criterion is a cost, a value of 10 is assigned to the alternatives in the bottom 10%. Then, to make the scores comparable, a normalization procedure is used to obtain the normalized values $N_{i,j}$.
  • Computing alternative comparisons. Through the normalized scores, the alternatives can be compared based on their overall scores across the criteria. For criterion $c_j$, the criteria score of alternative $a_i$ against alternative $a_k$ is defined as in [60]:
    $$CS_j(a_i, a_k) = \begin{cases} \beta_j \left( N_{i,j} - N_{k,j} \right)^{\eta_1} & \text{if } N_{i,j} \geq N_{k,j} \\ -\xi\, \beta_j \left( N_{k,j} - N_{i,j} \right)^{\eta_2} & \text{if } N_{i,j} < N_{k,j} \end{cases}$$
    where $\beta_j$ is the objective weight of criterion $c_j$; $\eta_1, \eta_2 \in [0, 1]$ are the two risk parameters of the value function in the domains of gains and losses, respectively; and $\xi > 0$ is the loss aversion coefficient in the loss domain.
    After calculating the dominance degree with respect to criterion $c_j$ between any two alternatives $a_i$ and $a_k$ using Equation (13), the final comparison score concerning each criterion is
    $$FS_j(a_i) = \sum_{k=1}^m CS_j(a_i, a_k), \quad i = 1, \ldots, m \text{ and } j = 1, \ldots, s.$$
  • Determining the final ranking between alternatives. In the last step, the rank of each alternative $a_i$ is obtained as
    $$R(a_i) = \sum_{j=1}^s FS_j(a_i).$$
    The procedure then concludes with the normalization of the final ranks, which range between 0 and 1, with the most preferred alternative having a value of 1 and the least preferred a value of 0.
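A compact Python sketch of the whole procedure is reported below. It assumes a raw criteria matrix with strictly positive entries in which all criteria are benefits, and it uses a simple rescaling of the decile bins as the (unspecified) normalization step; all names are illustrative.

```python
import numpy as np

def todim_ranking(A, eta1=0.88, eta2=0.88, xi=2.25, weights=None):
    """TODIM ranking of m alternatives (rows) under s benefit criteria (columns).
    A must have strictly positive entries for the entropy weighting scheme."""
    m, s = A.shape
    if weights is None:                                 # entropy weighting scheme
        lam = A / A.sum(axis=0)                         # contribution of a_i to c_j
        en = -(lam * np.log(lam)).sum(axis=0) / np.log(m)
        weights = (1.0 - en) / (s - en.sum())
    # decile binning of a benefit criterion: top 10% of alternatives score 10
    ranks = A.argsort(axis=0).argsort(axis=0)           # 0 = worst, m-1 = best
    N = np.ceil(10.0 * (ranks + 1) / m) / 10.0          # binned, then scaled to (0, 1]
    R = np.zeros(m)
    for j in range(s):                                  # dominance over all pairs
        diff = N[:, j][:, None] - N[:, j][None, :]      # N_ij - N_kj
        gain = weights[j] * np.abs(diff) ** eta1        # value function, gain branch
        loss = -xi * weights[j] * np.abs(diff) ** eta2  # loss branch
        R += np.where(diff >= 0, gain, loss).sum(axis=1)
    return (R - R.min()) / (R.max() - R.min())          # normalized ranks in [0, 1]
```

The $K$ assets with the highest normalized ranks are then retained for the allocation stage.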

4.2. Application of TODIM to Investable Universe

The previously described multi-criteria decision-making method is applied to the assets in the investable universe, which represent the alternatives $a_i$, using three criteria based on the financial performance of stocks, described in the experimental section. With this procedure, the cardinality constraint can be tackled by picking the $K$ assets with the highest rankings, and the portfolio optimization problem can be expressed without the auxiliary vector $\boldsymbol{\delta}$. To avoid any confusion, the notation of this paper uses a vector $\mathbf{x} \in \mathbb{R}^K$ of $K$ components instead of $\mathbf{w}$ to express weights, and the two inspected models are written as follows:
$$\begin{aligned} \max_{\mathbf{x} \in \mathbb{R}^K} \quad & f(\mathbf{x}) = MSR(\mathbf{x}) \\ \text{s.t.} \quad & \sum_{i=1}^K x_i = 1 \\ & lb_i \leq x_i \leq ub_i, \quad i = 1, \ldots, K \end{aligned}$$
and, for the risk budgeting model,
$$\begin{aligned} \max_{\mathbf{x} \in \mathbb{R}^K} \quad & f(\mathbf{x}) = MSR(\mathbf{x}) \\ \text{s.t.} \quad & \sum_{i=1}^K x_i = 1 \\ & lb_i \leq x_i \leq ub_i, \quad i = 1, \ldots, K \\ & g_j(\mathbf{x}) \leq 0, \quad j = 1, \ldots, 2K \end{aligned}$$
where $g_j(\mathbf{x}) = \mathbf{x}' \left[ \frac{1 - \underline{\nu}}{K} \Sigma^{(K)} - E_j^{(K)} \right] \mathbf{x}$ for $j = 1, \ldots, K$ and $g_j(\mathbf{x}) = \mathbf{x}' \left[ E_{j-K}^{(K)} - \frac{1 + \overline{\nu}}{K} \Sigma^{(K)} \right] \mathbf{x}$ for $j = K+1, \ldots, 2K$, with $\Sigma^{(K)} \in \mathbb{R}^{K \times K}$ being the covariance matrix of the $K$ assets selected by TODIM, $E_j^{(K)} = \frac{1}{2} \left( \mathbf{e}_j^{(K)} \mathbf{e}_j^{(K)\prime} \Sigma^{(K)} + \Sigma^{(K)} \mathbf{e}_j^{(K)} \mathbf{e}_j^{(K)\prime} \right)$, and $\mathbf{e}_j^{(K)} \in \mathbb{R}^K$ denoting the $j$th column of the identity matrix.
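Since each $g_j$ is a quadratic form, the $2K$ constraint matrices can be assembled once per rebalancing date; the following sketch (illustrative names) mirrors the definitions above.

```python
import numpy as np

def risk_budget_constraint_matrices(Sigma, nu_lo, nu_hi):
    """Matrices A_j such that g_j(x) = x' A_j x for j = 1, ..., 2K;
    note that x' E_j x equals x_j (Sigma x)_j."""
    K = Sigma.shape[0]
    lower, upper = [], []
    for j in range(K):
        e = np.zeros((K, 1)); e[j, 0] = 1.0
        E_j = 0.5 * (e @ e.T @ Sigma + Sigma @ e @ e.T)
        lower.append((1.0 - nu_lo) / K * Sigma - E_j)   # lower band: g_j(x) <= 0
        upper.append(E_j - (1.0 + nu_hi) / K * Sigma)   # upper band: g_{K+j}(x) <= 0
    return lower + upper

def evaluate_constraints(x, mats):
    """Evaluate g_j(x) = x' A_j x for all 2K constraint matrices."""
    return np.array([x @ A @ x for A in mats])
```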

5. Optimization Module

This section introduces the developed version of the DISH-XX algorithm specifically designed to solve Problems (16) and (17).

5.1. DISH-XX Algorithm

The following steps outline the core components of the DISH-XX algorithm as presented in [25] for the problem of optimizing a generic function $f : \mathbb{R}^K \to \mathbb{R}$.
  • Initialization. At iteration $t = 0$, the algorithm commences with the initialization of a random population $P$ consisting of $NP^{init}$ solutions. During this step, additional parameters are configured: the final population size ($NP^f$), the maximum number of objective function evaluations ($FES_{max}$), and two parameters utilized in the mutation operator ($\alpha_{max}$ and $\alpha_{min}$). Moreover, two external archives are introduced: the first, denoted as $A$, stores solutions that have been improved by the corresponding trial vectors; the second, $A_{best}$, contains the most promising solutions. Based on the prescriptions given in [25], two historical memory arrays of size $H$, $M_F$ and $M_{Cr}$, are defined component-wise as
    $$M_{F,h} = 0.5 \;\text{ for } h = 1, \ldots, H-1, \quad \text{and} \quad M_{F,H} = 0.9$$
    and
    $$M_{Cr,h} = 0.8 \;\text{ for } h = 1, \ldots, H-1, \quad \text{and} \quad M_{Cr,H} = 0.9$$
    which will be used to define the values for the scaling factor $F$ and the crossover rate $Cr$.
  • Mutation. For each generation $t \geq 0$, the mutation operator used in DISH-XX is the current-to-$pbest$-w/1 strategy. Let $FES_{ratio}$ be the ratio between the current number of objective function evaluations $FES$ and $FES_{max}$. The mutation vector $\mathbf{v}^p$ for each individual $p$ is then generated as follows:
    $$\mathbf{v}^p = \mathbf{x}^p + F_w^p (\mathbf{x}^{pbest} - \mathbf{x}^p) + F^p (\mathbf{x}^{r_1} - \mathbf{x}^{r_2})$$
    where $\mathbf{x}^{pbest}$ is one of the $100\alpha\%$ best solutions in the archive $A_{best}$, with $\alpha = FES_{ratio} (\alpha_{max} - \alpha_{min}) + \alpha_{min}$; $\mathbf{x}^{r_1}$ is randomly selected from the current population $P$ and $\mathbf{x}^{r_2}$ from $P \cup A$. It is worth noting that the indices $p$, $pbest$, $r_1$, and $r_2$ are mutually distinct. The scaling factor $F^p$ is generated from a Cauchy distribution with location parameter $M_{F,\tilde{r}}$, randomly selected from the historical memory array $M_F$, and a scale parameter value of 0.1. If the generated value $F^p$ is non-positive, it is drawn again, and, if it is greater than 1, it is set to 1. In addition, to bound its value in the exploration phase, we set $F^p = 0.7$ whenever $FES_{ratio} < 0.6$ and $F^p > 0.7$. The weighted scaling factor $F_w^p$ depends on $F^p$ and $FES_{ratio}$ as follows:
    $$F_w^p = \begin{cases} 0.7\, F^p & \text{if } FES_{ratio} < 0.2 \\ 0.8\, F^p & \text{if } FES_{ratio} < 0.4 \\ 1.2\, F^p & \text{otherwise.} \end{cases}$$
    This mutation strategy combines a greedy approach in the first difference and an exploratory factor in the second difference.
  • Double Crossover. The DISH-XX algorithm employs a double crossover mechanism. The first crossover is the standard binomial crossover as in [36], which combines the mutation vector $\mathbf{v}^p$ with the target vector $\mathbf{x}^p$ to produce a temporary trial vector $\mathbf{u}'^p = (u_1'^p, \ldots, u_K'^p)$. This process is based on the crossover rate value $Cr^p$, which is randomly generated using a normal distribution with mean value $M_{Cr,\tilde{\tilde{r}}}$, randomly selected from the memory array $M_{Cr}$, and a standard deviation of 0.1. The $Cr^p$ value is then bounded between 0 and 1, with values outside this range truncated to the nearest bound. Similarly to the scaling factor, the crossover rate depends on $FES_{ratio}$ as follows:
    $$Cr^p = \begin{cases} \max\{Cr^p, 0.7\} & \text{if } FES_{ratio} < 0.25 \\ \max\{Cr^p, 0.6\} & \text{if } FES_{ratio} < 0.5 \\ Cr^p & \text{otherwise.} \end{cases}$$
    The second crossover involves the archive of historically best-found solutions $A_{best}$, enhancing the diversity and exploration capabilities of the algorithm. Using the same value $Cr^p$ as in the first crossover, the trial vector $\mathbf{u}^p = (u_1^p, \ldots, u_K^p)$ is generated component-wise as follows:
    $$u_j^p = \begin{cases} u_j'^p & \text{if } rand_j \leq Cr^p \text{ or } j = j_{rand} \\ x_j^{rbest} & \text{otherwise} \end{cases}$$
    where $rand_j$ is a uniformly distributed random number, $j_{rand}$ is a randomly chosen index in $\{1, \ldots, K\}$, and $\mathbf{x}^{rbest} = (x_1^{rbest}, \ldots, x_K^{rbest})$ is a solution randomly selected from $A_{best}$.
  • Selection. The selection process in DISH-XX is based on the comparison of the trial vector u p and the target vector x p . The objective function values of both vectors are evaluated, and the one with the better fitness value is selected for the next generation. This ensures that the population evolves toward better solutions over time.
  • Adaptation of Control Parameters. DISH-XX incorporates adaptive mechanisms for control parameters, such as the scaling factor and the crossover rate. These parameters are adjusted based on the success history of previous generations, allowing the algorithm to dynamically adapt to the problem landscape and enhance its performance. After each generation, one cell in both memory arrays is updated. DISH-XX uses an index $k$ to track which cell will be updated. The index is initialized to 1, so, after the first generation, the first memory cell is updated. The index is incremented by one after each update, and, when it exceeds the value of $H$, it resets to 1. There is one exception to this update process: the last cell in both arrays is never updated and retains a value of 0.9 for both control parameters. Let $S_F$ and $S_{Cr}$ be arrays storing successful $F^p$ and $Cr^p$ values, respectively. A pair $(F^p, Cr^p)$ is considered successful if it generates a trial vector $\mathbf{u}^p$ that outperforms the target vector $\mathbf{x}^p$. The size of $S_F$ and $S_{Cr}$ is a random number between 0 (indicating that no trial vector is better than the target) and $NP$ (indicating that all trial vectors are better than their targets). Consequently, the value stored in the $k$th cell of the memory arrays after a given generation is
    $$M_{F,k} = \begin{cases} \text{mean}_{WL}(S_F) & \text{if } S_F \neq \emptyset \text{ and } k \neq H \\ M_{F,k} & \text{otherwise} \end{cases}$$
    and
    $$M_{Cr,k} = \begin{cases} \text{mean}_{WL}(S_{Cr}) & \text{if } S_{Cr} \neq \emptyset \text{ and } k \neq H \\ M_{Cr,k} & \text{otherwise} \end{cases}$$
    where $\text{mean}_{WL}$ is the weighted Lehmer mean of the corresponding control parameter array, defined as
    $$\text{mean}_{WL}(S) = \frac{\sum_{n=1}^{|S|} \omega_n S_n^2}{\sum_{n=1}^{|S|} \omega_n S_n}$$
    for $S \in \{S_F, S_{Cr}\}$. The weights $\omega_n$ are computed from the Euclidean distances between the trial vectors $\mathbf{u}^n$ and the corresponding target individuals $\mathbf{x}^n$; specifically,
    $$\omega_n = \frac{\sqrt{\sum_{i=1}^K (u_i^n - x_i^n)^2}}{\sum_{m=1}^{|S|} \sqrt{\sum_{i=1}^K (u_i^m - x_i^m)^2}}.$$
    This weighting scheme encourages exploitation while aiming to prevent the premature convergence of the algorithm to local optima (a compact sketch of this memory update follows the list).
  • Decrease in the Population Size. The population size is dynamically reduced during the execution of the algorithm to allocate more resources to exploitation in the later stages of optimization. Specifically, at the end of each generation, the population size is updated using the following formula:
    $$NP = \operatorname{round}\left( NP^{init} - FES_{ratio} \left( NP^{init} - NP^f \right) \right).$$
  • Population and Archive Management. The archive of historically best-found solutions $A_{best}$ is maintained throughout the optimization process. The archive is periodically updated with the best solutions available, ensuring that it remains relevant and effective. The population $P$ and the archive $A$ adjust their sizes in response to changes in (25) by removing the worst-ranking individuals.
  • Termination. The algorithm iterates through the above steps until a termination criterion is met. Common termination criteria include reaching a maximum number of generations, achieving a satisfactory fitness level, or observing no significant improvement over a predefined number of iterations.
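As an illustration of the memory update described above, the following Python sketch (with illustrative names and 0-based indexing) computes the weighted Lehmer mean and advances the update index while leaving the last cell untouched.

```python
import numpy as np

def weighted_lehmer_mean(S, dists):
    """Weighted Lehmer mean of successful parameter values S, with weights
    proportional to the Euclidean distances between trial and target vectors."""
    S, d = np.asarray(S, float), np.asarray(dists, float)
    w = d / d.sum()
    return (w * S**2).sum() / (w * S).sum()

def update_memory(M, k, S, dists, H):
    """Update the k-th memory cell (0-based); the last cell is never modified."""
    if len(S) > 0 and k != H - 1:
        M[k] = weighted_lehmer_mean(S, dists)
    return (k + 1) % H          # index of the next cell to update
```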

5.2. Dealing with Budget and Box Constraints

In the construction phase of the portfolio models, admissible solutions have to satisfy budget and buy-in threshold constraints. However, DISH-XX is blind to these constraints. To overcome this issue, the algorithm is equipped with a hybrid constraint-handling procedure. At first, to guarantee feasibility with respect to the bound constraints (7), this study introduces the following random combination [61]:
$$x_i^p = \begin{cases} (1 - r)\, ub_i + r\, x_i^p & \text{if } x_i^p > ub_i \\ (1 - r)\, lb_i + r\, x_i^p & \text{if } x_i^p < lb_i \end{cases}$$
where $r$ is a uniformly distributed random number, $p = 1, \ldots, NP$, and $i = 1, \ldots, K$. Then, it uses the repair transformations developed in [39] to also satisfy the budget constraint (5). The assumptions needed to apply this method are the following:
  • $lb_i \leq x_i^p \leq ub_i \;\; \forall i$;
  • $\sum_{i=1}^K lb_i < 1$;
  • $\sum_{i=1}^K ub_i > 1$.
Then, for each $p = 1, \ldots, NP$, the candidate solution $\mathbf{x}^p$ is adjusted component-wise as
$$x_i^p = \begin{cases} lb_i + (x_i^p - lb_i) \dfrac{1 - \sum_{j=1}^K lb_j}{\sum_{j=1}^K (x_j^p - lb_j)} & \text{if } \sum_{j=1}^K x_j^p > 1 \\ x_i^p & \text{if } \sum_{j=1}^K x_j^p = 1 \\ ub_i - (ub_i - x_i^p) \dfrac{\sum_{j=1}^K ub_j - 1}{\sum_{j=1}^K (ub_j - x_j^p)} & \text{if } \sum_{j=1}^K x_j^p < 1 \end{cases}$$
for all $i \in \{1, \ldots, K\}$. As proven in [39], solutions transformed through Equation (27) fulfill, at the same time, the budget and box constraints.
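A possible implementation of this two-step repair is sketched below in Python. We read Equation (26) as combining the violated bound with the corresponding feasible component of the target (parent) vector, which guarantees that the repaired component lies within the bounds; names are illustrative.

```python
import numpy as np

def repair_bounds(u, parent, lb, ub, rng):
    """Random-combination bound repair (our reading of Eq. (26)): a violated
    component is replaced by a random convex combination of the violated
    bound and the parent's feasible value."""
    r = rng.random(len(u))
    u = np.where(u > ub, (1 - r) * ub + r * parent, u)
    u = np.where(u < lb, (1 - r) * lb + r * parent, u)
    return u

def repair_budget(x, lb, ub):
    """Budget repair of Eq. (27): rescales toward the lower (upper) bounds so
    that the weights sum to one while staying inside [lb, ub]."""
    s = x.sum()
    if s > 1.0:
        return lb + (x - lb) * (1.0 - lb.sum()) / (x - lb).sum()
    if s < 1.0:
        return ub - (ub - x) * (ub.sum() - 1.0) / (ub - x).sum()
    return x
```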

5.3. Dealing with Risk Budgeting Constraints

Dealing with the risk budgeting constraints in Problem (17) requires the definition of a proper constraint violation function. Given a candidate solution $\mathbf{x}$, this quantity is defined as
$$\phi(\mathbf{x}) = \sum_{j=1}^{2K} \Delta g_j(\mathbf{x})$$
where $\Delta g_j(\mathbf{x}) = \max\{0, g_j(\mathbf{x})\}$ represents the constraint violation for the $j$th inequality constraint, with $j = 1, \ldots, 2K$.
Then, the $\varepsilon$-constrained method proposed in [41] transforms the constrained optimization model into an unconstrained one. More specifically, let $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^K$ be two candidate solutions with objective function values $f(\mathbf{x}_1), f(\mathbf{x}_2)$ and constraint violations $\phi(\mathbf{x}_1)$ and $\phi(\mathbf{x}_2)$, respectively. Then, the $\varepsilon$-comparison of the two solutions is defined as
$$(f(\mathbf{x}_1), \phi(\mathbf{x}_1)) <_\varepsilon (f(\mathbf{x}_2), \phi(\mathbf{x}_2)) \iff \begin{cases} f(\mathbf{x}_1) < f(\mathbf{x}_2) & \text{if } \phi(\mathbf{x}_1), \phi(\mathbf{x}_2) \leq \varepsilon \\ f(\mathbf{x}_1) < f(\mathbf{x}_2) & \text{if } \phi(\mathbf{x}_1) = \phi(\mathbf{x}_2) \\ \phi(\mathbf{x}_1) < \phi(\mathbf{x}_2) & \text{otherwise} \end{cases}$$
and
$$(f(\mathbf{x}_1), \phi(\mathbf{x}_1)) \leq_\varepsilon (f(\mathbf{x}_2), \phi(\mathbf{x}_2)) \iff \begin{cases} f(\mathbf{x}_1) \leq f(\mathbf{x}_2) & \text{if } \phi(\mathbf{x}_1), \phi(\mathbf{x}_2) \leq \varepsilon \\ f(\mathbf{x}_1) \leq f(\mathbf{x}_2) & \text{if } \phi(\mathbf{x}_1) = \phi(\mathbf{x}_2) \\ \phi(\mathbf{x}_1) < \phi(\mathbf{x}_2) & \text{otherwise.} \end{cases}$$
It is worth noting that if both compared solutions are feasible or slightly infeasible (as determined by the $\varepsilon$ value in the first parts of Equations (29) and (30)), or if they have the same sum of constraint violations, they are compared using the values of the objective function. Conversely, if both solutions are infeasible, they are compared using the sum of their constraint violations. Observe that, if $\varepsilon = \infty$, the $\varepsilon$-level comparison uses only the objective function values as the comparison criterion. If $\varepsilon = 0$, the $\varepsilon$-level comparison is equivalent to a lexicographic ordering in which the minimization of the sum of the constraint violations precedes the minimization of the objective function.
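The $\varepsilon$-comparison of Equation (29) translates directly into code; the sketch below is stated for minimization, and the objective test is simply reversed for the maximization problems (16) and (17).

```python
def eps_less(f1, phi1, f2, phi2, eps):
    """Epsilon-comparison of Eq. (29), stated for minimization of f."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2      # both (nearly) feasible: compare objective values
    return phi1 < phi2      # otherwise: compare total constraint violations
```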

5.3.1. Controlling the ε-Level

This study uses the following scheme based on [41,62] to control the $\varepsilon$ parameter:
$$\varepsilon(0) = \text{mean}_\phi(P_{\text{best half}}), \qquad \varepsilon(t) = \begin{cases} \varepsilon(0) \left( 1 - \dfrac{t}{T_c} \right)^{cp} & 0 < t < T_c \\ 0 & t \geq T_c \end{cases}$$
where the initial $\varepsilon$-level is equal to the mean constraint violation of the best half of the initial population, $\text{mean}_\phi(P_{\text{best half}})$. The level is then updated until the iteration counter $t$ reaches a maximum $T_c$; after this, the $\varepsilon$-level is set to 0. To maintain the stability and efficiency of the algorithm, $cp$ is set equal to 5 and $T_c$ is given by
$$T_c = \begin{cases} 0.2\, T_{max} & \text{if } NP^{init} = 18K \\ \dfrac{6\, FES_{max}}{5K} & \text{otherwise} \end{cases}$$
with $T_{max}$ representing the maximum number of iterations corresponding to $FES_{max}$.

5.3.2. Gradient-Based Mutation

The gradient-based mutation is an operator that was first developed in [41], following the seminal work presented in [40]. The main idea of the method is to utilize the gradient information of the constraints to repair the infeasible candidate solutions, moving them toward the feasible region.
Given a candidate solution $\mathbf{x}$, the vector of the values of the inequality constraint functions is $C(\mathbf{x}) = (g_1(\mathbf{x}), \ldots, g_{2K}(\mathbf{x}))'$, and $\Delta C(\mathbf{x}) = (\Delta g_1(\mathbf{x}), \ldots, \Delta g_{2K}(\mathbf{x}))'$ denotes the vector of the constraint violations. Next, the aim is to solve the following system of linear equations:
$$\nabla C(\mathbf{x})\, \Delta \mathbf{x} = -\Delta C(\mathbf{x})$$
where the increments $\Delta \mathbf{x}$ are the variables and $\nabla C(\mathbf{x})$ is the gradient matrix of $C(\mathbf{x})$,
$$\nabla C(\mathbf{x}) = \begin{pmatrix} \dfrac{\partial g_1(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial g_1(\mathbf{x})}{\partial x_K} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial g_{2K}(\mathbf{x})}{\partial x_1} & \cdots & \dfrac{\partial g_{2K}(\mathbf{x})}{\partial x_K} \end{pmatrix}.$$
The Moore–Penrose pseudoinverse $\nabla C(\mathbf{x})^+$ gives an approximate solution as follows:
$$\Delta \mathbf{x} = -\nabla C(\mathbf{x})^+\, \Delta C(\mathbf{x}).$$
Thus, the new mutated solution can be written as
$$\mathbf{x}^{new} = \mathbf{x} + \Delta \mathbf{x}.$$
This repair operation is executed with probability $P_g$ every $K$ iterations and is repeated for a maximum of $R_g$ times while the point is not feasible. In the numerical experiments, $P_g = 0.2$ and $R_g = 1$. Notice that only the non-zero elements of $\Delta C(\mathbf{x})$ are repaired using this mutation.
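A sketch of one repair step is given below. Only the violated constraints enter the linear system, in line with the remark above; as an implementation choice on our part, the gradients are approximated by forward finite differences rather than computed analytically.

```python
import numpy as np

def gradient_repair(x, constraints, h=1e-8):
    """One gradient-based repair step (Eqs. (32)-(33)): solve the linear system
    for the violated constraints in the least-squares sense via the
    Moore-Penrose pseudoinverse. Gradients use forward finite differences."""
    C = np.array([g(x) for g in constraints])
    viol = C > 0.0                              # only non-zero violations repaired
    if not viol.any():
        return x                                # already feasible
    active = [g for g, v in zip(constraints, viol) if v]
    grad = np.zeros((len(active), len(x)))      # numerical gradient matrix
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        grad[:, i] = (np.array([g(xp) for g in active]) - C[viol]) / h
    return x + np.linalg.pinv(grad) @ (-C[viol])
```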

5.4. The Proposed DISH-XX-εg Algorithm

In summary, the developed solver involves the following steps. When individuals are subject to mutation and crossover, the repair operator defined by Equations (26) and (27) manages the budget and bound constraints (5) and (7) simultaneously. The risk budgeting inequality constraints, when incorporated into the portfolio design, are addressed using the ε-constrained method, with the ε-comparison (29) and the update rule (31) for the parameter ε. Then, the gradient-based mutation operator defined by Equations (32) and (33) leverages the gradient information of the risk budgeting constraints to accelerate the convergence of solutions toward the feasible region, as suggested in [40]. Finally, a pair $(F^p, Cr^p)$ is considered successful if it generates a trial vector $\mathbf{u}^p$ that outperforms the target vector $\mathbf{x}^p$ in terms of the ε-comparison (29). The same order relation is also used to update the population and the archives. The resulting enhanced DISH-XX algorithm with gradient-based mutation, denoted DISH-XX-εg, is illustrated in Appendix B in terms of addressing Problem (17). The termination criterion employed is the maximum number of objective function evaluations.
The parameter setup of the DISH-XX-εg algorithm is based on the recommendations of [25,62]. The maximum number of objective function evaluations depends on the problem dimension according to the following formula:
$$FES_{max} = \begin{cases} 1 \times 10^5 & \text{if } K \leq 10 \\ 2 \times 10^5 & \text{if } 10 < K \leq 30 \\ 4 \times 10^5 & \text{if } 30 < K \leq 50 \\ 8 \times 10^5 & \text{if } 50 < K \leq 150 \\ 1 \times 10^6 & \text{if } K > 150. \end{cases}$$
Similarly, the initial population size is $NP^{init} = 50 \ln(K) \sqrt{K}$ and the final population size is $NP^f = 4$. For the mutation parameters, $\alpha_{max} = 0.25$ and $\alpha_{min} = \alpha_{max}/2 = 0.125$. The external archive $A$ and the archive of the historical best solutions $A_{best}$ are initialized as empty. The historical memory size $H$ is set to 5. As stated above, the termination criterion is the maximum number of objective function evaluations; however, to avoid unnecessary computational costs in financial applications, the algorithm also terminates if the objective function value of the best solution, $\mathbf{x}^{best}$, does not show a significant improvement over 10 consecutive iterations, indicating convergence.

6. Experimental Analysis

This section provides a detailed description of the numerical analysis aimed at evaluating the flexibility and effectiveness of the proposed automated expert system in managing the two instances of the modified Sharpe ratio-based portfolio model.

6.1. Data Set Description and Experimental Setup

The empirical analysis conducted in this work focuses on the American and European stock markets. Specifically, for the former, the daily closing prices of the constituents of the S&P 500 index for the period from 31 December 2014 to 31 October 2024 are considered. The latter case study refers to the securities listed in the STOXX Europe 600 index for the same period. Assets presenting missing data within the observation window have been discarded. As a result, the American dataset comprises 470 stocks, while the European investment basket includes 535 stocks.
The two case studies consider a rolling window investment plan with monthly portfolio rebalancing, with an out-of-sample window consisting of 94 months, covering the period from 31 January 2017 to 31 October 2024. For each month in this window, a historical approach based on the last two years of daily observations is adopted to calculate the expected rates of return and the covariance matrix. For each month of the investment phase, the DISH-XX-εg solver is used to find the optimal wealth allocation in terms of portfolio weights. Appendix C illustrates the solving capabilities of the proposed algorithm. Table 1 recaps the data set structure and the experimental setup.
Regarding the portfolio designs, the risk-free rate of return $r_f$ in the objective function (4) is set to zero, as in [63], and the buy-in thresholds $lb_i$ and $ub_i$ are equal to 0.005 and 0.1, respectively. The cardinality parameter $K$ is expressed as a fraction $K\%$ of the number of assets in a given data set, i.e., $K = K\% \cdot n$, and $K\%$ is set equal to 5%, 10%, and 15%. Furthermore, in the risk budgeting model, $\nu = \overline{\nu} = \underline{\nu}$ in Equation (9), considering symmetrical ranges of deviation from the risk parity level. Three entries for the parameter $\nu$ are studied, namely 0.01, 0.05, and 0.10, where a higher value indicates more flexibility in the management of risk budgets. Note that this paper does not compare the introduced risk budgeting approach with the classical risk parity portfolio model; however, the choice of $\nu = 0.01$ represents a scenario with minimal deviations from the parity, which indirectly relates the results to it. Finally, regarding the practical implementation of the TODIM method, this study follows the suggestions in [60] by setting the parameters $\eta_1 = \eta_2 = 0.88$ and the loss aversion coefficient $\xi = 2.25$ in Equation (13).

6.2. Criteria Used for the Screening of Assets

The preliminary stock-picking phase considers three complementary criteria to implement in the TODIM procedure. The first focuses on the microstructure of the stock market to capture the full spectrum of assets’ dependencies based on mutual information (MI). The so-called momentum measure (MOM) is the second criterion, which exploits the ability of individual stocks to generate value over time. The third metric consists of the upside-to-downside beta ratio (U/D ratio), which assesses the responsiveness of a stock with respect to upward and downward market movements. In the following, a description of the methodology employed to define and compute these three measures is provided.

6.2.1. Eigenvector Centrality Measure Based on Mutual Information

The definition of the first criterion needs some preliminary notions about the Shannon entropy measure. Given a continuous random variable $X$ with probability density function $p(x)$, its entropy is defined as
$$H(X) = -\int_x p(x) \log p(x)\, dx.$$
Similarly, if one considers two continuous random variables $X$ and $Y$, their joint entropy is given by
$$H(X, Y) = -\int_x \int_y p(x, y) \log p(x, y)\, dx\, dy$$
where $p(x, y)$ is the joint probability density function of $X$ and $Y$. The mutual information between two random variables captures the mutual dependence between them. For continuous variables, it is expressed as
$$MI(X, Y) = \int_x \int_y p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}\, dx\, dy$$
and it is zero if and only if the two variables are independent.
Then, the dissimilarity between the rates of return of two stocks, namely $R_j$ and $R_k$, with $j, k = 1, \ldots, n$ and $j \neq k$, is quantified by the so-called normalized distance metric, defined as
$$d(R_j, R_k) = 1 - \frac{MI(R_j, R_k)}{H(R_j, R_k)}.$$
Notice that this distance ranges from 0 (perfect dependence) to 1 (independence), making it particularly useful in building networks. The last two years of daily observations are employed for the estimation of the distance $d(R_j, R_k)$, and, following the approach described in [42], we consider a minimum spanning tree (MST), defined as a connected subgraph that spans all nodes of a graph with the minimum total edge weight and no cycles. To construct the MST based on (35), Prim's algorithm is used—a well-known method that, starting from an arbitrary node, iteratively grows the tree by adding the shortest edge connecting it to a node not yet included, until all nodes are covered.
Once the MST is constructed, to identify key nodes within the network, we exploit eigenvector centrality, a measure that assigns an importance score to each node based on its connections. This centrality measure is obtained by computing the Perron eigenvector of the adjacency matrix of the MST, which corresponds to the principal eigenvalue. A high score of centrality characterizes influential stocks that are important nodes in their respective clusters, facilitating the transfer of information. However, because of their importance in the dynamics of the market, these stocks are more susceptible to market volatility. In contrast, nodes with low centrality scores are located on the periphery of the network, making them less susceptible to market risk and thus representing effective candidates for portfolio selection [55].
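The following Python sketch outlines the full pipeline, with a plug-in histogram estimator for the entropies (a simple choice of ours; the paper does not prescribe an estimator here) and SciPy's MST routine in place of an explicit Prim implementation, which yields the same tree when the edge weights are distinct.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mi_distance_matrix(returns, bins=10):
    """Normalized MI distance d = 1 - MI/H_joint between all pairs of return
    series (columns of `returns`), using a plug-in histogram estimator."""
    n = returns.shape[1]
    D = np.zeros((n, n))
    for j in range(n):
        for k in range(j + 1, n):
            pxy, _, _ = np.histogram2d(returns[:, j], returns[:, k], bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            h_joint = -(pxy[nz] * np.log(pxy[nz])).sum()
            mi = (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()
            D[j, k] = D[k, j] = 1.0 - mi / h_joint
    return D

def mst_eigenvector_centrality(D):
    """Build the MST of the distance matrix and return eigenvector centrality
    scores (Perron eigenvector of the MST adjacency matrix); low scores mark
    peripheral stocks."""
    T = minimum_spanning_tree(D).toarray()
    A = ((T + T.T) > 0).astype(float)      # symmetric binary adjacency matrix
    vals, vecs = np.linalg.eigh(A)
    v = np.abs(vecs[:, -1])                # eigenvector of the largest eigenvalue
    return v / v.sum()
```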

6.2.2. Momentum Measure

Let $R_i$ be the random variable expressing the stochastic rate of return of stock $i$ for a given period. The momentum of a stock is typically defined as the observed compound rate of return over a specified observation window of $N$ periods that begins at $t_1$ and ends at $t_N$:
$$MOM_i(t_1, t_N) = \prod_{t=t_1}^{t_N} (1 + r_{i,t}) - 1$$
where $r_{i,t_1}, \ldots, r_{i,t_N}$ are $N$ consecutive observations of $R_i$. Stocks with higher momentum values are preferred. The momentum of a stock is calculated considering the last two years of monthly observations.

6.2.3. Upside-to-Downside Beta Ratio

In financial analysis, the downside beta ($\beta^-$) measures the sensitivity of an asset to market returns when they are below a certain threshold. This beta component is particularly useful in evaluating the risk of an asset in adverse market conditions, and allocating a percentage of the portfolio to stocks with low downside betas provides protection against market downturns [64]. Conversely, the upside beta ($\beta^+$) refers to periods when the market returns are higher than a threshold and reflects the potential gain capability of an asset during favorable market conditions. By considering this measure, investors can identify growth opportunities in order to construct portfolios that capitalize on market upswings. Let $R_B$ be a random variable expressing the benchmark rate of return, where the benchmark is the equally weighted portfolio constructed on the considered market [65]. Then, $\beta^-$ and $\beta^+$ can be defined as in [66]:
$$\beta^- = \frac{\mathrm{Cov}(R_i, R_B \mid R_B < \tau)}{\mathrm{Var}(R_B \mid R_B < \tau)}$$
and
$$\beta^+ = \frac{\mathrm{Cov}(R_i, R_B \mid R_B > \tau)}{\mathrm{Var}(R_B \mid R_B > \tau)}$$
where $\tau$ is the target threshold for the benchmark rate of return. Together, the downside and upside betas can be combined by introducing the so-called upside-to-downside beta ratio:
$$U\!/\!D\ \text{Ratio}_i = \frac{\beta_i^+}{\beta_i^-}.$$
The larger the ratio, the more effectively an asset increases the returns during market upswings, without significantly amplifying the losses during downturns. To compute β and β + , this study considers the last two years of daily observations, and the threshold τ is set to zero.
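Both the momentum and the U/D ratio reduce to a few lines of code; the sketch below (illustrative names) follows the definitions above, with sample covariances and variances computed with matching degrees of freedom.

```python
import numpy as np

def momentum(returns):
    """Compound rate of return over the observation window (Sect. 6.2.2)."""
    return np.prod(1.0 + returns) - 1.0

def up_down_beta_ratio(asset, benchmark, tau=0.0):
    """Upside-to-downside beta ratio with threshold tau on benchmark returns."""
    up, down = benchmark > tau, benchmark < tau
    beta_up = np.cov(asset[up], benchmark[up])[0, 1] / np.var(benchmark[up], ddof=1)
    beta_down = np.cov(asset[down], benchmark[down])[0, 1] / np.var(benchmark[down], ddof=1)
    return beta_up / beta_down
```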

6.3. Ex Post Performance Metrics

The experimental analysis of this paper has a twofold objective. On the one hand, it investigates whether the inclusion of the risk budgeting constraints improves the control of portfolio risk, by comparing the two proposed asset allocation models. On the other hand, the aim is to analyze the strengths and differences of the two weighting schemes described in Section 4.1 for the TODIM procedure, namely the equal weighting and entropy weighting methods. Several ex post metrics are used to evaluate the financial performance of the compared portfolio strategies, and they are divided into two groups, namely risk measures and performance measures. Let $r_{p,t}^{out} = \mathbf{r}_t^{\top} \mathbf{x}_t$ be the realized portfolio rate of return of a given strategy at the end of month $t$, with $t = 1, \ldots, T$ (in our case, $T = 94$). Given the initial capital $W_0$, the wealth at the end of investing period $t$ is $W_t = W_{t-1}\left(1 + r_{p,t}^{out}\right)$, for $t = 1, \ldots, T$, where $W_{t-1}$ is its amount in the previous month.
Given these quantities, the capacity of an investment strategy to avoid large losses is assessed through the drawdown risk measure,
$$DD_t = \min\left\{0, \frac{W_t - W_{peak}}{W_{peak}}\right\},$$
where $W_{peak}$ is the maximum amount of wealth reached by the strategy up to the end of month $t$. Two ex post risk measures linked to drawdowns are then considered, namely the maximum drawdown,
$$maxDD = \max_{t = 1, \ldots, T} \left| DD_t \right|,$$
and the Ulcer index,
$$UI = \sqrt{\frac{\sum_{t=1}^{T} DD_t^2}{T}}.$$
In particular, the latter evaluates the depth and the duration of drawdowns in wealth over the out-of-sample period [67]. Note that smaller values of these metrics indicate better control of drawdown risk.
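The drawdown-based risk measures can be computed directly from the realized monthly returns, as in the following sketch (the function name is illustrative).

```python
# Minimal sketch: drawdown path, maximum drawdown, and Ulcer index.
import numpy as np

def drawdown_metrics(r_out: np.ndarray, w0: float = 1.0):
    wealth = w0 * np.cumprod(1.0 + r_out)                        # W_t, t = 1,...,T
    peak = np.maximum.accumulate(np.concatenate(([w0], wealth)))[1:]  # running W_peak
    dd = np.minimum(0.0, (wealth - peak) / peak)                 # DD_t (non-positive)
    max_dd = float(np.max(np.abs(dd)))                           # maxDD
    ulcer = float(np.sqrt(np.mean(dd ** 2)))                     # UI
    return dd, max_dd, ulcer
```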
Regarding the performance metrics, to evaluate the attractiveness of the proposed investment strategies, the so-called compound annual growth rate ($CAGR$ for short) is introduced, defined as follows:
$$CAGR = \left(\frac{W_T}{W_0}\right)^{\frac{12}{T}} - 1.$$
Moreover, we introduce the out-of-sample monthly mean rate of return and standard deviation, $\mu^{out}$ and $\sigma^{out}$,
$$\mu^{out} = \frac{1}{T} \sum_{t=1}^{T} r_{p,t}^{out},$$
$$\sigma^{out} = \sqrt{\frac{1}{T-1} \sum_{t=1}^{T} \left(r_{p,t}^{out} - \mu^{out}\right)^2},$$
and calculate the ex post Sharpe ratio, defined as the reward per unit of risk, where the standard deviation is used to quantify the risk:
$$SR^{out} = \frac{\mu^{out} - r_f}{\sigma^{out}},$$
with $r_f$ being the risk-free rate of return, which we set equal to zero. The second considered performance metric is the Sortino–Satchell ratio [68], which is based on the idea that investors are only concerned about the downside part of the risk, captured by the so-called (negative) semi-standard deviation:
$$\sigma_{-}^{out} = \sqrt{\frac{\sum_{t=1}^{T} \left[\left(r_{p,t}^{out} - \mu^{out}\right)^{-}\right]^2}{\sum_{t=1}^{T} \mathbb{1}_{\{r_{p,t}^{out} - \mu^{out} < 0\}}}},$$
where $\left(r_{p,t}^{out} - \mu^{out}\right)^{-} = \min\left\{r_{p,t}^{out} - \mu^{out}, 0\right\}$ and $\mathbb{1}_A$ denotes the indicator function of $A$. Thus, the Sortino–Satchell ratio is as follows:
$$SSR^{out} = \frac{\mu^{out} - r_f}{\sigma_{-}^{out}}.$$
The third risk-adjusted performance measure employed is the Omega ratio [69], a practical tool to establish whether an investment is more likely to be profitable than loss-making. This quantity is calculated as the ratio between out-of-sample profits and losses:
$$\Omega^{out} = \frac{\sum_{t=1}^{T} r_{p,t}^{out}\, \mathbb{1}_{\{r_{p,t}^{out} > 0\}}}{\left| \sum_{t=1}^{T} r_{p,t}^{out}\, \mathbb{1}_{\{r_{p,t}^{out} < 0\}} \right|}.$$
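These performance measures admit an equally compact implementation; the following sketch computes CAGR, SR, SSR, and Omega from the vector of realized monthly returns, with $r_f = 0$ as in the paper (names are illustrative).

```python
# Minimal sketch: ex post performance measures from monthly returns.
import numpy as np

def performance_metrics(r_out: np.ndarray, w0: float = 1.0, rf: float = 0.0) -> dict:
    T = len(r_out)
    wT = w0 * np.prod(1.0 + r_out)
    cagr = (wT / w0) ** (12.0 / T) - 1.0
    mu = r_out.mean()
    sr = (mu - rf) / r_out.std(ddof=1)
    neg_dev = np.minimum(r_out - mu, 0.0)                    # (r - mu)^-
    semi_sd = np.sqrt((neg_dev ** 2).sum() / (r_out < mu).sum())
    ssr = (mu - rf) / semi_sd
    omega = r_out[r_out > 0].sum() / abs(r_out[r_out < 0].sum())
    return {"CAGR": cagr, "SR": sr, "SSR": ssr, "Omega": omega}
```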

6.4. Compared Strategies and Benchmark Portfolios

This section introduces the compared investment strategies according to the following notation, which depends on the cardinality size K and the risk parity deviation ν; MSR stands for the modified Sharpe ratio.
  • MSR-Equi-TODIMK: the portfolio model (16) that maximizes the modified Sharpe ratio with cardinality K, using the equal weighting method.
  • MSR-Entr-TODIMK: the portfolio model (16) that maximizes the modified Sharpe ratio with cardinality K, using the entropy weighting method.
  • MSR-RB-Equi-TODIMK,ν: the proposed risk budgeting portfolio model (17) with cardinality K and risk parity deviation ν, adopting the equal weighting method.
  • MSR-RB-Entr-TODIMK,ν: the proposed risk budgeting portfolio model (17) with cardinality K and risk parity deviation ν, adopting the entropy weighting method.
Moreover, the following two benchmark strategies are considered.
  • BenchEW: the equally weighted portfolio constructed using all assets in the investable universe.
  • BenchMI,K: an equally weighted strategy that adopts a preliminary stock-picking technique based only on the mutual information criterion, for each of the three choices of K.

6.5. Discussion of the Ex Post Investment Results

Table 2 presents the ex post results of the proposed automated decision support system for the two problem instances, compared with the benchmark strategies introduced in Section 6.4, using the US data set. Observe that, for K = 5%, the models employing the entropy weighting method in the TODIM procedure generally display higher risk-adjusted performance ratios. Among these strategies, those based on risk budgeting perform better than their counterparts without risk control. Conversely, better results in terms of risk measures are obtained when employing the equal weighting method in TODIM. It is worth noting that the three risk budgeting models display better control of portfolio volatility and limited drawdowns. This evidence suggests that adapting the criteria weights according to market signals is beneficial for performance, albeit at the cost of higher volatility and greater loss exposure during market downswings. Lastly, note that the mutual information benchmark excels in terms of risk control and also demonstrates competitive risk-adjusted performance.

Furthermore, expanding the number of portfolio constituents enhances the ex post results across many of the examined models. For the case K = 10%, portfolio models with risk budgeting restrictions that employ equal criteria weighting display significantly better results than in the previous case. In detail, these three models show higher ex post risk-adjusted performance ratios, diminished ex post standard deviations, and lower maximum drawdowns and Ulcer index values. Strategies that use the entropy weighting method yield outcomes analogous to the aforementioned case. For the cardinality K = 15%, there is solid evidence of improvement for the entropy-based investments. More precisely, the MSR-Entr-TODIM15% strategy shows very high risk-adjusted performance and good results in terms of risk measures, being the only tested model capable of outperforming the equally weighted benchmark.

In order to assess whether the differences in performance are statistically significant, a robustness check on these results is conducted. The idea is to test the hypothesis that the out-of-sample Sharpe ratios of the compared strategies are equal. To do this, this work considers the approach introduced in [70], which is based on a circular block bootstrap with 5000 bootstrap resamples and automatic optimal block length determination. Due to their size, the tables reporting these comparisons are included in Appendix D, where Table A1 covers the US case study. The results are displayed in terms of the p-values for tests in which the null hypothesis is that the Sharpe ratio difference between two compared models is zero. Specifically, in instances where the null hypothesis is rejected (the observed p-value is lower than 0.05), the alternative hypothesis considered is consistent with the observed difference between the two Sharpe ratios. A ‘+’ sign denotes a positive difference, indicating that the model in the rows outperforms the one in the columns, while a ‘−’ sign denotes a negative difference, indicating that the model in the columns outperforms the one in the rows. Moreover, the false discovery rate approach proposed in [71] is used to determine the proportions of over-, equal-, and under-performing methods, in terms of the Sharpe ratio, among all compared ones.
To perform these analyses, the R package PeerPerformance [72] (https://CRAN.R-project.org/package=PeerPerformance, accessed on 17 April 2025) has been exploited. Table A1 points out that the MSR-Entr-TODIM15% strategy is statistically superior to the majority of the other analyzed strategies in terms of the Sharpe ratio. The other portfolio models that, according to the adopted method, are more likely to be among the over-performing ones are the equally weighted benchmark and MSR-RB-Entr-TODIM15%,ν. In all other cases, the test for differences in the Sharpe ratios does not show statistically significant results.
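The study relies on the R package PeerPerformance for these tests; purely as an illustration of the underlying idea, the next sketch implements a circular block bootstrap for the difference between two Sharpe ratios in the spirit of [70]. The fixed block length b = 6 is an assumption made here for brevity, whereas the paper uses the automatic optimal block length determination.

```python
# Illustrative sketch (not the paper's implementation): circular block
# bootstrap p-value for the null of equal Sharpe ratios, in the spirit of [70].
import numpy as np

def sharpe(r: np.ndarray) -> float:
    return r.mean() / r.std(ddof=1)

def sharpe_diff_pvalue(r1: np.ndarray, r2: np.ndarray,
                       n_boot: int = 5000, b: int = 6, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    T = len(r1)
    d_obs = sharpe(r1) - sharpe(r2)
    diffs = np.empty(n_boot)
    for m in range(n_boot):
        starts = rng.integers(0, T, size=-(-T // b))   # ceil(T / b) circular blocks
        idx = np.concatenate([(s + np.arange(b)) % T for s in starts])[:T]
        diffs[m] = sharpe(r1[idx]) - sharpe(r2[idx])
    # Two-sided p-value from the centered bootstrap distribution.
    return float(np.mean(np.abs(diffs - d_obs) >= np.abs(d_obs)))
```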
Given these considerations, Figure 2 displays the equity lines of the strategies MSR-Entr-TODIMK and MSR-RB-Entr-TODIMK,0.1 during the investment phase for the US case study, for the three values of K. At first, we observe that the equally weighted US market benchmark shows competitive results, almost tripling the initial wealth. In particular, this strategy exhibits elevated performance spikes after the COVID-19 outbreak and during the last two years. Panels (a) and (b) display the results for the K = 5% and K = 10% cases. Notice that the proposed portfolio models undergo a marked drawdown phase during the first quarter of 2022, after being the best-performing investment plans during the first five years of allocation. This issue is due to the high volatility of the markets at that time, caused by the aftermath of the COVID-19 pandemic and the international tension brought about by the Russian invasion of Ukraine. Conversely, this drawdown is much less pronounced in the two benchmarks, owing to the higher diversification of the EW benchmark and the more loss-mitigating stock selection made by the mutual information-based model. Moreover, after this downturn, the two portfolios that maximize the modified Sharpe ratio struggle to regain ground, while the EW benchmark shows positive momentum until the conclusion of the investment period. In the 15% cardinality case, the proposed portfolio strategies curtail losses during market downturns more efficiently. This assertion is further substantiated by Table 2, which shows that the two strategies under consideration display lower maximum drawdowns and Ulcer index values within this cardinality configuration. To conclude, despite its underperformance in terms of produced wealth, the mutual information-based benchmark strategy demonstrates superior loss and volatility control in all analyzed cases.
The results for the EU case study are displayed in Table 3. The European market findings deviate considerably from the American market case. Firstly, strategies based on the entropic weighting method to determine the criteria preferences demonstrate markedly inferior results compared to their counterparts that utilize the equal weighting method. In contrast to the observations made in the US case study, adapting the relative preferences on the three criteria based on market information results in a deterioration in both the risk-adjusted performance ratios and risk control measures.
To assess these considerations in terms of out-of-sample Sharpe ratios, statistical tests on the differences are performed, as reported in Table A2. The test results indicate that MSR-Entr-TODIM5% exhibits the least efficacy, as evidenced by its significantly lower Sharpe ratio in comparison to numerous alternative strategies. In addition, as commented previously, strategies that adopt entropic weights demonstrate marked discrepancies (in negative terms) in comparison to their counterparts that employ the equal weighting method.
The mutual information-based benchmark strategies yield highly competitive outcomes regarding financial performance and show a good capacity to control the ex post standard deviation and drawdown measures. Among all the analyzed allocation plans, the MSR-Equi-TODIMK configurations without risk budgeting constraints are the most successful in every respect, producing results aligned with those of the MI-based and the equally weighted benchmarks.
Regarding the statistical tests on the Sharpe ratio differences, the results in Table A2 confirm these insights. Observe that the MI-based benchmarks are the ones with the greatest over-performance, together with the BenchEW strategy. Moreover, the MSR-Equi-TODIM5% configurations are characterized by high $p^+$ values, meaning that they are likely to be in the group of the best-performing strategies. Finally, it is worth noting that (i) there are no significant differences within the group of entropic portfolio strategies and (ii) despite being the strategy with the highest detected value of $p^+$, BenchMI,5% shows statistically significant differences only against a few of the alternatives in the pairwise comparisons. In this data set, incorporating the risk budgeting constraints within the model during the portfolio construction phase does not enhance the financial performance or the portfolio loss control. In the cases of K = 5% and K = 10%, the optimal value for the parameter ν appears to be 0.05, while, when the cardinality is 15%, the most favorable outcomes are obtained with ν = 0.1.
Figure 3 shows the equity lines of the best-performing strategies and the benchmarks for the EU case study. Panels (a) and (b) illustrate the strategies MSR-Equi-TODIMK and MSR-RB-Equi-TODIMK,0.05 for the cardinalities K = 5 % and K = 10 % . Panel (c) shows the evolution of MSR-Equi-TODIMK and MSR-RB-Equi-TODIMK,0.1 for K = 15 % . In this case study, it is evident that the mutual information-based benchmark emerges as the optimal investment strategy, particularly in the context of K = 10 % . In the K = 5 % scenario, the MSR-Equi-TODIM5% portfolio demonstrates the highest equity line at the end of the investment period, despite a turbulent phase during the year 2023. Conversely, the risk budgeting portfolio struggles to recover after periods of market decline, as observed in the post-pandemic era and during the final two years of the investment period. In the K = 10 % setting, the modified Sharpe-based portfolios prove ineffective in surpassing the MI-based benchmark, yielding outcomes comparable to those of the equally weighted European market benchmark. In the last case ( K = 15 % ), the MSR-Equi-TODIM15% strategy demonstrates results aligned with the two benchmarks, while the risk budgeting model shows marginally diminished performance in terms of wealth generation capabilities.

7. Conclusions

This study proposes a novel knowledge-based system built upon two interconnected modules, grounded in information theory and financial practices, to assist investors in their financial decision-making. In this context, two instances of a constrained portfolio selection model have been addressed, in which the objective function to optimize is a modified version of the Sharpe ratio. The first instance considers several standard constraints encompassing cardinality, budget, and bound limitations. The second introduces a relaxed risk parity constraint to explicitly control the portfolio volatility in the construction phase. Moreover, the cardinality requirement is handled by a stock-picking procedure based on TODIM, a multi-criteria decision-making method that has gained widespread popularity over the years. This technique exploits information from a complementary set of three financial criteria: the mutual information-based peripherality measure, momentum, and the upside-to-downside beta ratio. Finally, a version of the recently proposed distance-based success-history differential evolution with double crossover (DISH-XX) algorithm, equipped with an ensemble of constraint-handling techniques, solves the proposed portfolio selection models.
The following summarizes the main experimental findings of this paper. Firstly, the proposed automated decision support system has proven capable of supporting the investment choices of an end user whose preferences are outlined by the two portfolio models analyzed in this paper. Implementing the TODIM module to screen stocks based on three complementary criteria improves the adaptability to the two market scenarios considered. The mutual information-based stock-picking strategy achieves excellent results, especially in a resilient market like the European one, efficiently capturing non-linear relationships between stocks and the market microstructure. This allows potential investors to contain losses during market downswings. Additionally, incorporating criteria such as momentum and the upside-to-downside beta ratio enhances the model’s adaptability to market phases and leverages potential upswings in a thriving market like the American one. Moreover, regarding the optimization module, the solving capabilities of the proposed evolutionary algorithm have been analyzed, demonstrating the convergence of the provided solutions toward the feasible region and the efficient exploration of the search space.
In the subsequent phase of the experimental analysis, this study assessed the financial performance of the proposed investment models, implementing an investment plan with monthly rebalancing from January 2017 to October 2024. Specifically, we compared our strategies against an equally weighted benchmark in both the American and European markets, as well as a mutual information-based stock-picking strategy. The results varied significantly between the two case studies. In the American market, considering a portfolio with 15 % of the investable universe resulted in an enhancement in the risk-adjusted performance. In particular, the model without risk budgeting constraints is the only one that can outperform the equally weighted benchmark. In the European case study, the proposed strategy mimics more efficiently the behavior of the equally weighted benchmark and achieves results comparable to those of the mutual information-based benchmark, which is the best-performing portfolio model in the EU case.
A possible limitation of this research is that it adopts a limited number of features for the stock selection phase. Indeed, it would be beneficial to incorporate additional types of information as criteria, such as technical indicators or fundamental analysis metrics extracted from firms’ balance sheets. To extend the topic in the field of sustainability (especially relevant in the European context), non-financial disclosure information could also be included as a discriminant. Furthermore, another limitation lies in the choice of the weighting methods, as we compared the performance of only two techniques.
This paper lays the groundwork for several possible future research directions. The first possibility is to consider additional techniques that account not only for agent preferences but also for the predictive capacity of individual criteria over time. Moreover, the flexibility of the proposed knowledge-based financial management system can be tested by considering alternative portfolio models that maximize different objective functions, thereby outlining various investment profiles. To further enhance the practical relevance of the model, an additional constraint to control transaction costs during the investment phase can be introduced. Finally, a third possible extension involves applying different metaheuristics to identify the most suitable algorithm for our cardinality-constrained portfolio optimization models.

Author Contributions

Conceptualization: M.K., R.P. and F.P.; methodology: M.K., R.P. and F.P.; software: M.K.; performance analysis: M.K., R.P. and F.P.; validation: M.K., R.P. and F.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data sets presented in this article are not readily available because there are technical limitations imposed by the data provider.

Acknowledgments

The authors are affiliated with the GNAMPA-INdAM research group.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Risk Parity and Risk Budgeting

Appendix A.1. Details of Risk Parity

The modern risk parity asset allocation framework has recently gained traction in both academia and industry. In this setting, wealth is assigned in such a way that the individual assets’ contributions to portfolio risk are equalized. As shown in [17], this goal can be achieved through the Euler decomposition of the portfolio risk measure, under the condition that the risk measure must be a homogeneous function of degree one. It is readily shown that the portfolio standard deviation meets this requirement. Moreover, this quantity can be decomposed as follows:
$$\sigma_p(w) = \sum_{i=1}^{n} w_i \frac{\partial \sigma_p(w)}{\partial w_i} = \sum_{i=1}^{n} w_i \frac{(\Sigma w)_i}{\sqrt{w^{\top} \Sigma w}}.$$
The marginal risk contribution of asset $i$ is $\frac{\partial \sigma_p(w)}{\partial w_i} = \frac{(\Sigma w)_i}{\sqrt{w^{\top} \Sigma w}}$, so that $RC_i(w) = w_i \frac{(\Sigma w)_i}{\sqrt{w^{\top} \Sigma w}}$ denotes its risk contribution. The corresponding relative risk contribution is defined as
$$RRC_i(w) = \frac{RC_i(w)}{\sqrt{w^{\top} \Sigma w}} = \frac{w_i (\Sigma w)_i}{w^{\top} \Sigma w}.$$
The risk parity framework seeks to compute the portfolio that equalizes all risk contributions, satisfying the condition
$$w_i (\Sigma w)_i = w_j (\Sigma w)_j \quad \forall\, i, j.$$
Moreover, in the risk parity portfolio, combining the decomposition introduced in Equation (A1) and the requirement expressed in Equation (A3), it follows that
$$RC_i(w) = \frac{\sqrt{w^{\top} \Sigma w}}{n} = \frac{\sigma_p(w)}{n} \quad \forall\, i.$$
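Numerically, the Euler decomposition and the risk parity condition above can be checked as in the following sketch (function names are illustrative).

```python
# Minimal sketch: risk contributions RC_i(w) and the risk parity check.
import numpy as np

def risk_contributions(w: np.ndarray, Sigma: np.ndarray) -> np.ndarray:
    sigma_p = np.sqrt(w @ Sigma @ w)
    return w * (Sigma @ w) / sigma_p   # RC_i(w); by Euler, these sum to sigma_p

def is_risk_parity(w: np.ndarray, Sigma: np.ndarray, tol: float = 1e-8) -> bool:
    rc = risk_contributions(w, Sigma)
    sigma_p = np.sqrt(w @ Sigma @ w)
    return bool(np.allclose(rc, sigma_p / len(w), atol=tol))  # RC_i = sigma_p / n
```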

Appendix A.2. Non-Convexity of the Proposed Risk Budgeting Formulation

Following [30], $RC_i(w)$ can be recast using standard matrix notation as
$$RC_i(w) = \frac{w^{\top} E_i w}{\sqrt{w^{\top} \Sigma w}} \quad \forall\, i,$$
where $E_i \in \mathbb{R}^{n \times n}$ captures the individual risk contribution of asset $i$. Each symmetric matrix $E_i$ is composed of the superposition of row $i$ and column $i$ of the original covariance matrix $\Sigma$, multiplied by one half, with all other elements equal to zero, according to the following formula:
$$E_i = \frac{1}{2}\left(e_i e_i^{\top} \Sigma + \Sigma\, e_i e_i^{\top}\right),$$
where $e_i \in \mathbb{R}^n$ denotes the $i$th column of the identity matrix. Notice that the set of inequalities introduced in Equation (9) can be rewritten as
$$(1 - \underline{\nu})\, \frac{\sqrt{w^{\top} \Sigma w}}{n} - \frac{w_i (\Sigma w)_i}{\sqrt{w^{\top} \Sigma w}} \le 0, \quad i = 1, \ldots, n,$$
$$\frac{w_i (\Sigma w)_i}{\sqrt{w^{\top} \Sigma w}} - (1 + \bar{\nu})\, \frac{\sqrt{w^{\top} \Sigma w}}{n} \le 0, \quad i = 1, \ldots, n,$$
and, with some algebra, the following set of conditions follows:
$$w^{\top} \left(\frac{1 - \underline{\nu}}{n}\, \Sigma - E_i\right) w \le 0, \quad i = 1, \ldots, n,$$
$$w^{\top} \left(E_i - \frac{1 + \bar{\nu}}{n}\, \Sigma\right) w \le 0, \quad i = 1, \ldots, n.$$
Since $E_i$ is symmetric, the difference $E_i - \frac{1 + \bar{\nu}}{n}\, \Sigma$ is still a symmetric matrix. Setting $\bar{\gamma} = \frac{1 + \bar{\nu}}{n}\; (< 1)$, the generic difference reads
$$E_i - \bar{\gamma}\,\Sigma = \begin{pmatrix} 0 & \cdots & \frac{\sigma_{1i}}{2} & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ \frac{\sigma_{i1}}{2} & \cdots & \sigma_i^2 & \cdots & \frac{\sigma_{in}}{2} \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & \frac{\sigma_{ni}}{2} & \cdots & 0 \end{pmatrix} - \bar{\gamma} \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1i} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2i} & \cdots & \sigma_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ \sigma_{i1} & \sigma_{i2} & \cdots & \sigma_i^2 & \cdots & \sigma_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_{ni} & \cdots & \sigma_n^2 \end{pmatrix}.$$
This results in the matrix
$$\bar{\Gamma}_i = \begin{pmatrix} -\bar{\gamma}\sigma_1^2 & -\bar{\gamma}\sigma_{12} & \cdots & \frac{\sigma_{1i}}{2} - \bar{\gamma}\sigma_{1i} & \cdots & -\bar{\gamma}\sigma_{1n} \\ -\bar{\gamma}\sigma_{21} & -\bar{\gamma}\sigma_2^2 & \cdots & \frac{\sigma_{2i}}{2} - \bar{\gamma}\sigma_{2i} & \cdots & -\bar{\gamma}\sigma_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ \frac{\sigma_{i1}}{2} - \bar{\gamma}\sigma_{i1} & \frac{\sigma_{i2}}{2} - \bar{\gamma}\sigma_{i2} & \cdots & (1 - \bar{\gamma})\sigma_i^2 & \cdots & \frac{\sigma_{in}}{2} - \bar{\gamma}\sigma_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ -\bar{\gamma}\sigma_{n1} & -\bar{\gamma}\sigma_{n2} & \cdots & \frac{\sigma_{ni}}{2} - \bar{\gamma}\sigma_{ni} & \cdots & -\bar{\gamma}\sigma_n^2 \end{pmatrix}.$$
Inspecting the matrices $\bar{\Gamma}_i$ reveals that they are indefinite: each has both positive and negative elements on its diagonal (the $(i,i)$ entry $(1 - \bar{\gamma})\sigma_i^2$ is positive, while the remaining diagonal entries $-\bar{\gamma}\sigma_j^2$ are negative), and a symmetric matrix with diagonal entries of both signs cannot be positive or negative semidefinite [73]. Thus, any optimization problem involving the matrices $\bar{\Gamma}_i$ is non-convex. Similar conclusions can be drawn for the matrix obtained from the difference $\frac{1 - \underline{\nu}}{n}\, \Sigma - E_i$.
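This indefiniteness is easy to verify numerically, as in the following sketch, which builds $E_i$ and $\bar{\Gamma}_i$ from a covariance matrix and inspects the signs of the eigenvalues (function names are illustrative).

```python
# Minimal sketch: construct Gamma_bar_i = E_i - gamma_bar * Sigma and verify
# that it is indefinite (eigenvalues of both signs).
import numpy as np

def gamma_bar_matrix(Sigma: np.ndarray, i: int, nu_bar: float) -> np.ndarray:
    n = Sigma.shape[0]
    e_i = np.zeros((n, 1))
    e_i[i] = 1.0
    E_i = 0.5 * (e_i @ e_i.T @ Sigma + Sigma @ e_i @ e_i.T)
    return E_i - (1.0 + nu_bar) / n * Sigma

def is_indefinite(M: np.ndarray) -> bool:
    eig = np.linalg.eigvalsh(M)        # M is symmetric, so eigenvalues are real
    return bool(eig.min() < 0.0 < eig.max())
```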

Appendix B. Pseudocode of DISH-XX-εg

The pseudocode of the proposed DISH-XX-εg is given in Algorithm A1 for the solution of Problem (17).
Algorithm A1 DISH-XX-$\varepsilon g$
  • Input: $K$, $\mu$, $\Sigma$, $lb_i$ and $ub_i$ for $i = 1, \ldots, K$, $g_j$ for $j = 1, \ldots, 2K$
  • Set $H = 5$, $\alpha_{max} = 0.25$, $\alpha_{min} = 0.125$, $A = \emptyset$, $A_{best} = \emptyset$
  • Initialize the historical memory arrays $M_F$ and $M_{Cr}$ using Equations (18) and (19)
  • Calculate $NP^{init} = 50\,\ln(K)\,K$ and $FES^{max}$ using Equation (34)
  • Set $NP = NP^{init}$
  • $FES := 0$, $t := 0$, $k := 1$
  • Generate and evaluate the initial population $P$
  • Update $FES$
  • Initialize the $\varepsilon$ level using Equation (31)
  • Set $x_{best}$ as the best solution in $P$ based on the $\varepsilon$-comparison (29)
  • while $FES < FES^{max}$ do
  •     $t := t + 1$
  •     Calculate $\varepsilon$ using Equation (31)
  •     Apply mutation, double crossover, and the repair operator using Equations (20), (21), (26) and (27)
  •     Evaluate the new trial population $U$ and update $FES$
  •     for $p = 1$ to $NP$ do
  •         if $\phi(u_p) > 0$ and $\mathrm{mod}(t, K) = 0$ and $\mathrm{rand} < 0.2$ then
  •             Apply the gradient-based mutation to $u_p$ using Equations (32) and (33)
  •         end if
  •         Apply the selection phase using Equation (29)
  •         Store the best individuals in archive $A_{best}$ and the worst individuals in archive $A$
  •         Update $S_F$, $S_{Cr}$
  •         if $S_F \neq \emptyset$ and $S_{Cr} \neq \emptyset$ then
  •             Update $M_{F,k}$, $M_{Cr,k}$
  •             $k := k + 1$
  •             if $k > H$ then
  •                 $k := 1$
  •             end if
  •         end if
  •     end for
  •     Update the population size using Equation (25)
  •     if $|P| > NP$ then
  •         Sort the individuals in $P$ according to (29) and delete the $|P| - NP$ worst ones
  •     end if
  •     if $|A| > NP$ then
  •         Randomly delete $|A| - NP$ individuals from $A$
  •     end if
  •     Update $x_{best}$
  • end while
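For reference, the ε-comparison used in the selection and sorting steps above follows the standard ε-constrained rule of [41]: solutions whose constraint violation φ does not exceed the current ε level are ranked by objective value, and by violation otherwise. A minimal sketch follows (for a minimization problem; names are illustrative).

```python
# Minimal sketch of the standard epsilon-comparison of [41] (minimization).
def eps_better(f1: float, phi1: float, f2: float, phi2: float, eps: float) -> bool:
    """True if solution 1 is epsilon-preferred to solution 2."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2      # both epsilon-feasible (or tied): compare objectives
    return phi1 < phi2      # otherwise: compare constraint violations
```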

Appendix C. Assessment of the Algorithm’s Efficiency

The aim of this section is to evaluate whether the proposed algorithm is capable of finding feasible solutions to the risk budgeting portfolio optimization problem. To do this, a date is randomly selected from the 94 available in the ex post window, and the nine portfolio configurations presented in Section 6.4 are considered. Then, to probe the algorithm’s search capabilities, 30 different random initial portfolios, already satisfying the cardinality constraint, are sampled and optimized. In this process, two features are monitored: feasibility and diversity. The former is quantified by the so-called feasibility ratio, i.e., the number of feasible solutions relative to the population size at a given iteration; the latter is evaluated by the population diversity [74], which is defined as
$$div(t) = \frac{1}{N_{pop}(t)} \sum_{j=1}^{N_{pop}(t)} \sqrt{\sum_{i=1}^{d} \left(y_j^i(t) - \bar{y}^i(t)\right)^2},$$
where $t$ is the generation index, $y_j^i(t)$ is the $i$th component of the $j$th candidate solution, and $\bar{y}^i(t)$ is the mean position of the population in coordinate $i$ at generation $t$. Note that $div$ can be interpreted as the average Euclidean distance between a candidate solution and the barycenter of the population of candidate solutions at time $t$.
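A direct implementation of this diversity measure is sketched below, where the population is stored as an array with one candidate solution per row (names are illustrative).

```python
# Minimal sketch: population diversity div(t) as the average Euclidean
# distance of the candidate solutions from the population barycenter.
import numpy as np

def population_diversity(P: np.ndarray) -> float:
    """P has shape (N_pop, d): one candidate solution per row."""
    barycenter = P.mean(axis=0)                          # mean position
    return float(np.linalg.norm(P - barycenter, axis=1).mean())
```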
Figure A1 illustrates the evolution of the average values of the feasibility ratio and population diversity over the generations in the US case study. From panels (a), (c), and (e), observe that the algorithm reaches a 100% feasibility ratio of the population after approximately 400 iterations in all but one case (the figures plot only the initial 400 iterations to improve readability). The diversity plots (b), (d), and (f) point out how the population, at the final stages, narrows down to a very small portion of the search space, confirming the algorithm’s capability to find near-optimal solutions for the optimization problem.
Figure A1. Average results for 30 random initial portfolio configurations of the feasibility ratio and the diversity measure for the US case study by varying the deviation parameter ν in {0.01, 0.05, 0.10}. Plots (a,b) show the results for K = 23 (5%); plots (c,d) show the results for K = 47 (10%). Finally, results for K = 70 (15%) are displayed in charts (e,f).
Figure A2 illustrates the same plots for the EU case study, showing similar findings. The results for other random dates from the out-of-sample window are very close to the ones presented in Figure A1 and Figure A2 and thus are omitted.
Figure A2. Average results for 30 random initial portfolio configurations of the feasibility ratio and the diversity measure for the EU case study by varying the deviation parameter ν in {0.01, 0.05, 0.10}. Plots (a,b) show the results for K = 26 (5%); plots (c,d) show the results for K = 53 (10%). Finally, results for K = 80 (15%) are displayed in charts (e,f).

Appendix D. Statistical Significance of Differences Among the Sharpe Ratios of Portfolios

This section analyzes the statistical significance of the differences among the Sharpe ratios of the compared portfolio strategies, according to the Sharpe ratio robustness test introduced in [70], for each pair of competing portfolios. Table A1 reports the results for the US case study, while Table A2 shows the respective counterparts for the European stock market.
Table A1. Assessment of the statistical significance of differences among the Sharpe ratios of the competing portfolios according to the Sharpe ratio robustness test for the US data set [70]. The differences are calculated by comparing the model in the rows against the one in the columns. The p-values are reported, and values equal to or lower than 5% are highlighted in bold. The alternative hypothesis is consistent with the sign of the observed difference (‘+’ denotes a positive difference, ‘−’ a negative one). In the bottom part of the table, the proportions of over- ( p + ), equal ( p 0 ), and under-performing ( p − ) strategies, in terms of the Sharpe ratio, are computed following the approach of [71].
Model | BenchEW | BenchMI,5% | BenchMI,10% | BenchMI,15% | Equi-NoRB5% | Equi-NoRB10% | Equi-NoRB15% | Entr-NoRB5% | Entr-NoRB10% | Entr-NoRB15%
BenchMI,5% | 0.4094
BenchMI,10% | 0.2394 | 0.9758
BenchMI,15% | 0.0665 | 0.6023 | 0.3479
Equi-NoRB5% | 0.1915 | 0.6093 | 0.5634 | 0.7518
Equi-NoRB10% | 0.3769 | 0.9258 | 0.9178 | 0.8843 | 0.3474
Equi-NoRB15% | 0.6828 | 0.6943 | 0.6488 | 0.4919 | 0.1680 | 0.3219
Entr-NoRB5% | 0.3074 | 0.8718 | 0.8818 | 0.9383 | 0.5559 | 0.9123 | 0.4594
Entr-NoRB10% | 0.7013 | 0.7608 | 0.7198 | 0.5759 | 0.1880 | 0.4649 | 0.9903 | 0.2569
Entr-NoRB15% | 0.5504 | 0.2644 | 0.1585 | 0.0995 | 0.0175 + | 0.0385 + | 0.1680 | 0.0045 + | 0.0500 +
Equi-RB5%,0.01 | 0.1910 | 0.6478 | 0.5794 | 0.8238 | 0.7603 | 0.5949 | 0.2639 | 0.7608 | 0.3404 | 0.0315 −
Equi-RB5%,0.05 | 0.1910 | 0.6483 | 0.5749 | 0.8208 | 0.7583 | 0.5859 | 0.2564 | 0.7588 | 0.3354 | 0.0315 −
Equi-RB5%,0.10 | 0.1935 | 0.6448 | 0.5784 | 0.8223 | 0.7433 | 0.5934 | 0.2539 | 0.7553 | 0.3334 | 0.0320 −
Equi-RB10%,0.01 | 0.3634 | 0.9818 | 0.9683 | 0.7818 | 0.2964 | 0.8713 | 0.4799 | 0.8388 | 0.5664 | 0.0575
Equi-RB10%,0.05 | 0.3579 | 0.9743 | 0.9583 | 0.7973 | 0.3074 | 0.8853 | 0.4674 | 0.8548 | 0.5544 | 0.0580
Equi-RB10%,0.10 | 0.3719 | 0.9858 | 0.9868 | 0.7763 | 0.2819 | 0.8513 | 0.4899 | 0.8323 | 0.5754 | 0.0600
Equi-RB15%,0.01 | 0.2514 | 0.9793 | 0.9633 | 0.6853 | 0.3414 | 0.8348 | 0.5019 | 0.7893 | 0.6128 | 0.0350 −
Equi-RB15%,0.05 | 0.2514 | 0.9848 | 0.9663 | 0.6838 | 0.3394 | 0.8338 | 0.4974 | 0.7888 | 0.6103 | 0.0350 −
Equi-RB15%,0.10 | 0.2584 | 0.9778 | 0.9573 | 0.6828 | 0.3349 | 0.8283 | 0.5029 | 0.7838 | 0.6168 | 0.0345 −
Entr-RB5%,0.01 | 0.4014 | 0.9773 | 0.9788 | 0.7748 | 0.3944 | 0.8633 | 0.6063 | 0.6648 | 0.5584 | 0.0395 −
Entr-RB5%,0.05 | 0.3964 | 0.9848 | 0.9853 | 0.7828 | 0.4034 | 0.8763 | 0.5959 | 0.6838 | 0.5469 | 0.0380 −
Entr-RB5%,0.10 | 0.4114 | 0.9678 | 0.9628 | 0.7648 | 0.3819 | 0.8503 | 0.6168 | 0.6368 | 0.5694 | 0.0440 −
Entr-RB10%,0.01 | 0.4254 | 0.9418 | 0.9203 | 0.7233 | 0.3229 | 0.7533 | 0.6668 | 0.5779 | 0.5544 | 0.0140 −
Entr-RB10%,0.05 | 0.4214 | 0.9468 | 0.9243 | 0.7293 | 0.3264 | 0.7593 | 0.6593 | 0.5839 | 0.5464 | 0.0140 −
Entr-RB10%,0.10 | 0.4289 | 0.9448 | 0.9198 | 0.7253 | 0.3239 | 0.7508 | 0.6643 | 0.5779 | 0.5474 | 0.0130 −
Entr-RB15%,0.01 | 0.6988 | 0.6088 | 0.5154 | 0.3649 | 0.1310 | 0.3604 | 0.8828 | 0.2089 | 0.8828 | 0.1100
Entr-RB15%,0.05 | 0.7093 | 0.6033 | 0.5129 | 0.3604 | 0.1270 | 0.3544 | 0.8758 | 0.2024 | 0.8743 | 0.1125
Entr-RB15%,0.10 | 0.7198 | 0.5994 | 0.5034 | 0.3549 | 0.1270 | 0.3479 | 0.8658 | 0.1980 | 0.8568 | 0.1175
p0 | 0.4444 | 1.0000 | 1.0000 | 1.0000 | 0.4321 | 1.0000 | 0.6240 | 1.0000 | 0.6522 | 0.0000
p+ | 0.5556 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.3760 | 0.0000 | 0.3478 | 1.0000
p− | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.5679 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Model | Equi-RB5%,0.01 | Equi-RB5%,0.05 | Equi-RB5%,0.10 | Equi-RB10%,0.01 | Equi-RB10%,0.05 | Equi-RB10%,0.10 | Equi-RB15%,0.01 | Equi-RB15%,0.05 | Equi-RB15%,0.10
Equi-RB5%,0.05 | 0.8633
Equi-RB5%,0.10 | 0.9353 | 0.9768
Equi-RB10%,0.01 | 0.1985 | 0.1910 | 0.1895
Equi-RB10%,0.05 | 0.2159 | 0.2119 | 0.2104 | 0.3524
Equi-RB10%,0.10 | 0.1850 | 0.1815 | 0.1775 | 0.5739 | 0.2729
Equi-RB15%,0.01 | 0.3134 | 0.3164 | 0.3149 | 0.8973 | 0.8738 | 0.9283
Equi-RB15%,0.05 | 0.3164 | 0.3184 | 0.3199 | 0.9053 | 0.8818 | 0.9353 | 0.8453
Equi-RB15%,0.10 | 0.3044 | 0.3014 | 0.3069 | 0.8838 | 0.8573 | 0.9108 | 0.7473 | 0.7063
Entr-RB5%,0.01 | 0.4429 | 0.4414 | 0.4374 | 0.9418 | 0.9278 | 0.9528 | 0.9968 | 0.9988 | 0.9853
Entr-RB5%,0.05 | 0.4494 | 0.4489 | 0.4484 | 0.9513 | 0.9338 | 0.9673 | 0.9853 | 0.9853 | 0.9753
Entr-RB5%,0.10 | 0.4309 | 0.4279 | 0.4274 | 0.9203 | 0.9008 | 0.9363 | 0.9863 | 0.9843 | 0.9938
Entr-RB10%,0.01 | 0.3829 | 0.3744 | 0.3784 | 0.8063 | 0.7888 | 0.8238 | 0.8853 | 0.8773 | 0.8938
Entr-RB10%,0.05 | 0.3889 | 0.3869 | 0.3869 | 0.8158 | 0.7978 | 0.8323 | 0.8978 | 0.8903 | 0.9088
Entr-RB10%,0.10 | 0.3804 | 0.3754 | 0.3774 | 0.8013 | 0.7853 | 0.8173 | 0.8848 | 0.8813 | 0.8908
Entr-RB15%,0.01 | 0.1090 | 0.1100 | 0.1075 | 0.2459 | 0.2379 | 0.2584 | 0.2674 | 0.2664 | 0.2674
Entr-RB15%,0.05 | 0.1085 | 0.1065 | 0.1055 | 0.2389 | 0.2324 | 0.2539 | 0.2604 | 0.2544 | 0.2594
Entr-RB15%,0.10 | 0.1065 | 0.1055 | 0.1060 | 0.2349 | 0.2299 | 0.2479 | 0.2509 | 0.2494 | 0.2489
p0 | 0.5931 | 0.5931 | 0.5931 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
p+ | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
p− | 0.4069 | 0.4069 | 0.4069 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Model | Entr-RB5%,0.01 | Entr-RB5%,0.05 | Entr-RB5%,0.10 | Entr-RB10%,0.01 | Entr-RB10%,0.05 | Entr-RB10%,0.10 | Entr-RB15%,0.01 | Entr-RB15%,0.05 | Entr-RB15%,0.10
Entr-RB5%,0.05 | 0.5784
Entr-RB5%,0.10 | 0.6583 | 0.5294
Entr-RB10%,0.01 | 0.8238 | 0.7973 | 0.8568
Entr-RB10%,0.05 | 0.8373 | 0.8103 | 0.8643 | 0.6548
Entr-RB10%,0.10 | 0.8153 | 0.7948 | 0.8523 | 0.9558 | 0.7328
Entr-RB15%,0.01 | 0.1935 | 0.1865 | 0.2124 | 0.0280 + | 0.0275 + | 0.0260 +
Entr-RB15%,0.05 | 0.1825 | 0.1785 | 0.1995 | 0.0240 − | 0.0235 + | 0.0220 + | 0.4324
Entr-RB15%,0.10 | 0.1775 | 0.1750 | 0.1960 | 0.0230 + | 0.0225 + | 0.0205 + | 0.2064 | 0.5064
p0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.2963 | 0.2778 | 0.2778
p+ | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.6741 | 0.7222 | 0.7222
p− | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0296 | 0.0000 | 0.0000
Table A2. Assessment of the statistical significance of differences among the Sharpe ratios of the competing portfolios according to the Sharpe ratio robustness test for the EU data set [70]. The differences are calculated by comparing the model in the rows against the one in the columns. The p-values are reported, and values equal to or lower than 5% are highlighted in bold. The alternative hypothesis is consistent with the sign of the observed difference (‘+’ denotes a positive difference, ‘−’ a negative one). In the bottom part of the table, the proportions of over- ( p + ), equal ( p 0 ), and under-performing ( p − ) strategies, in terms of the Sharpe ratio, are computed following the approach of [71].
Model | BenchEW | BenchMI,5% | BenchMI,10% | BenchMI,15% | Equi-NoRB5% | Equi-NoRB10% | Equi-NoRB15% | Entr-NoRB5% | Entr-NoRB10% | Entr-NoRB15%
BenchMI,5% | 0.9568
BenchMI,10% | 0.6213 | 0.4944
BenchMI,15% | 0.7688 | 0.7833 | 0.5334
Equi-NoRB5% | 0.9823 | 0.9403 | 0.6898 | 0.7998
Equi-NoRB10% | 0.8613 | 0.8463 | 0.5959 | 0.7188 | 0.8678
Equi-NoRB15% | 0.9423 | 0.9293 | 0.6873 | 0.7923 | 0.9778 | 0.8448
Entr-NoRB5% | 0.0645 | 0.1220 | 0.0400 − | 0.0470 − | 0.0170 − | 0.0180 − | 0.0225 −
Entr-NoRB10% | 0.2994 | 0.4014 | 0.2039 | 0.2629 | 0.2399 | 0.2574 | 0.2089 | 0.0825
Entr-NoRB15% | 0.1115 | 0.3034 | 0.1540 | 0.1830 | 0.1710 | 0.0935 | 0.0420 − | 0.3109 | 0.6698
Equi-RB5%,0.01 | 0.1990 | 0.3114 | 0.1535 | 0.1700 | 0.1100 | 0.2119 | 0.1415 | 0.1630 | 0.7458 | 0.9348
Equi-RB5%,0.05 | 0.2059 | 0.3144 | 0.1580 | 0.1730 | 0.1145 | 0.2154 | 0.1475 | 0.1485 | 0.7638 | 0.9143
Equi-RB5%,0.10 | 0.2034 | 0.3114 | 0.1525 | 0.1705 | 0.1075 | 0.2089 | 0.1425 | 0.1575 | 0.7503 | 0.9293
Equi-RB10%,0.01 | 0.3914 | 0.5784 | 0.3249 | 0.3844 | 0.4699 | 0.5069 | 0.4094 | 0.0150 + | 0.5529 | 0.2809
Equi-RB10%,0.05 | 0.4099 | 0.5919 | 0.3329 | 0.3949 | 0.4854 | 0.5364 | 0.4304 | 0.0145 + | 0.5249 | 0.2589
Equi-RB10%,0.10 | 0.3969 | 0.5804 | 0.3284 | 0.3889 | 0.4744 | 0.5114 | 0.4104 | 0.0155 + | 0.5404 | 0.2659
Equi-RB15%,0.01 | 0.3459 | 0.5484 | 0.3154 | 0.3799 | 0.4654 | 0.4934 | 0.2954 | 0.0265 + | 0.6028 | 0.2634
Equi-RB15%,0.05 | 0.3469 | 0.5504 | 0.3169 | 0.3819 | 0.4629 | 0.4914 | 0.2924 | 0.0270 + | 0.6033 | 0.2619
Equi-RB15%,0.10 | 0.3594 | 0.5589 | 0.3259 | 0.3889 | 0.4769 | 0.5119 | 0.3084 | 0.0255 + | 0.5809 | 0.2449
Entr-RB5%,0.01 | 0.0485 − | 0.1335 | 0.0480 − | 0.0565 | 0.0325 − | 0.0510 | 0.0365 − | 0.8658 | 0.1265 | 0.3034
Entr-RB5%,0.05 | 0.0480 − | 0.1310 | 0.0465 − | 0.0555 | 0.0315 − | 0.0470 − | 0.0355 − | 0.8448 | 0.1180 | 0.2939
Entr-RB5%,0.10 | 0.0460 − | 0.1320 | 0.0455 − | 0.0550 | 0.0315 − | 0.0470 − | 0.0325 − | 0.8478 | 0.1185 | 0.2924
Entr-RB10%,0.01 | 0.1005 | 0.2669 | 0.1210 | 0.1435 | 0.1470 | 0.1315 | 0.0930 | 0.2644 | 0.4269 | 0.7913
Entr-RB10%,0.05 | 0.0995 | 0.2644 | 0.1185 | 0.1410 | 0.1410 | 0.1290 | 0.0905 | 0.2699 | 0.4154 | 0.7813
Entr-RB10%,0.10 | 0.1010 | 0.2644 | 0.1185 | 0.1410 | 0.1435 | 0.1260 | 0.0900 | 0.2679 | 0.4254 | 0.7743
Entr-RB15%,0.01 | 0.1095 | 0.3064 | 0.1420 | 0.1685 | 0.1820 | 0.1785 | 0.1095 | 0.1915 | 0.6678 | 0.9873
Entr-RB15%,0.05 | 0.1095 | 0.3049 | 0.1405 | 0.1695 | 0.1805 | 0.1775 | 0.1095 | 0.1935 | 0.6668 | 0.9883
Entr-RB15%,0.10 | 0.1140 | 0.3134 | 0.1440 | 0.1750 | 0.1900 | 0.1900 | 0.1165 | 0.1720 | 0.7003 | 0.9453
p0 | 0.4444 | 0.4631 | 0.0000 | 0.3703 | 0.6351 | 0.6351 | 0.4444 | 0.1852 | 0.3704 | 0.6351
p+ | 0.5556 | 0.5369 | 1.0000 | 0.6297 | 0.3649 | 0.3649 | 0.5556 | 0.0000 | 0.2963 | 0.0000
p− | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.8148 | 0.3333 | 0.3649
Model | Equi-RB5%,0.01 | Equi-RB5%,0.05 | Equi-RB5%,0.10 | Equi-RB10%,0.01 | Equi-RB10%,0.05 | Equi-RB10%,0.10 | Equi-RB15%,0.01 | Equi-RB15%,0.05 | Equi-RB15%,0.10
Equi-RB5%,0.05 | 0.2959
Equi-RB5%,0.10 | 0.8273 | 0.6783
Equi-RB10%,0.01 | 0.1015 | 0.1090 | 0.1100
Equi-RB10%,0.05 | 0.0895 | 0.0975 | 0.0915 | 0.0250 −
Equi-RB10%,0.10 | 0.1000 | 0.1090 | 0.1030 | 0.6078 | 0.4554
Equi-RB15%,0.01 | 0.2299 | 0.2414 | 0.2354 | 0.8818 | 0.7778 | 0.8298
Equi-RB15%,0.05 | 0.2259 | 0.2389 | 0.2334 | 0.8933 | 0.7848 | 0.8408 | 0.8873
Equi-RB15%,0.10 | 0.2114 | 0.2284 | 0.2214 | 0.9563 | 0.8598 | 0.9183 | 0.1545 | 0.1685
Entr-RB5%,0.01 | 0.0260 − | 0.0230 − | 0.0250 − | 0.0035 − | 0.0035 − | 0.0040 − | 0.0145 − | 0.0145 − | 0.0140 −
Entr-RB5%,0.05 | 0.0235 − | 0.0220 − | 0.0230 − | 0.0035 − | 0.0035 − | 0.0035 − | 0.0135 − | 0.0140 − | 0.0130 −
Entr-RB5%,0.10 | 0.0230 − | 0.0195 − | 0.0225 − | 0.0030 − | 0.0030 − | 0.0030 − | 0.0125 − | 0.0130 − | 0.0130 −
Entr-RB10%,0.01 | 0.6448 | 0.6178 | 0.6413 | 0.0345 − | 0.0280 − | 0.0315 − | 0.0525 | 0.0510 | 0.0470 −
Entr-RB10%,0.05 | 0.6293 | 0.5979 | 0.6273 | 0.0320 − | 0.0260 − | 0.0285 − | 0.0490 − | 0.0480 − | 0.0450 −
Entr-RB10%,0.10 | 0.6343 | 0.6068 | 0.6298 | 0.0325 − | 0.0245 − | 0.0280 − | 0.0480 − | 0.0475 − | 0.0450 −
Entr-RB15%,0.01 | 0.9043 | 0.8698 | 0.8958 | 0.0865 | 0.0750 | 0.0800 | 0.1100 | 0.1085 | 0.1015
Entr-RB15%,0.05 | 0.9033 | 0.8743 | 0.8978 | 0.0870 | 0.0735 | 0.0790 | 0.1080 | 0.1055 | 0.0995
Entr-RB15%,0.10 | 0.9438 | 0.9153 | 0.9353 | 0.1020 | 0.0875 | 0.0960 | 0.1280 | 0.1265 | 0.1180
p0 | 0.5291 | 0.5291 | 0.5291 | 0.3704 | 0.2778 | 0.3704 | 0.4938 | 0.4938 | 0.2778
p+ | 0.0476 | 0.1217 | 0.0476 | 0.4444 | 0.5741 | 0.4815 | 0.4074 | 0.4074 | 0.5741
p− | 0.4233 | 0.3492 | 0.4233 | 0.1852 | 0.1481 | 0.1481 | 0.0988 | 0.0988 | 0.1481
Model | Entr-RB5%,0.01 | Entr-RB5%,0.05 | Entr-RB5%,0.10 | Entr-RB10%,0.01 | Entr-RB10%,0.05 | Entr-RB10%,0.10 | Entr-RB15%,0.01 | Entr-RB15%,0.05 | Entr-RB15%,0.10
Entr-RB5%,0.05 | 0.3854
Entr-RB5%,0.10 | 0.7443 | 0.7783
Entr-RB10%,0.01 | 0.0710 | 0.0630 | 0.0635
Entr-RB10%,0.05 | 0.0815 | 0.0675 | 0.0685 | 0.5289
Entr-RB10%,0.10 | 0.0830 | 0.0700 | 0.0695 | 0.7128 | 0.9618
Entr-RB15%,0.01 | 0.0670 | 0.0600 | 0.0580 | 0.4659 | 0.4424 | 0.4554
Entr-RB15%,0.05 | 0.0660 | 0.0590 | 0.0585 | 0.4684 | 0.4444 | 0.4599 | 0.9243
Entr-RB15%,0.10 | 0.0575 | 0.0530 | 0.0530 | 0.3994 | 0.3714 | 0.3824 | 0.0880 | 0.0330 +
p0 | 0.1481 | 0.1587 | 0.1587 | 0.2469 | 0.2469 | 0.4444 | 0.4762 | 0.4762 | 0.4233
p+ | 0.0148 | 0.0000 | 0.0000 | 0.0864 | 0.0494 | 0.0000 | 0.0688 | 0.0688 | 0.1640
p− | 0.8370 | 0.8413 | 0.8413 | 0.6666 | 0.7037 | 0.5556 | 0.4550 | 0.4550 | 0.4127

References

  1. Morel, C. Stock selection using a multi-factor model: Empirical evidence from the French stock market. Eur. J. Financ. 2001, 7, 312–334.
  2. De Franco, C. Stock picking in the US market and the effect of passive investments. J. Asset Manag. 2021, 22, 1–10.
  3. Breitung, C. Automated stock picking using random forests. J. Empir. Financ. 2023, 72, 532–556.
  4. Wolff, D.; Echterling, F. Stock picking with machine learning. J. Forecast. 2023, 43, 81–102.
  5. Albadvi, A.; Chaharsooghi, S.K.; Esfahanipour, A. Decision making in stock trading: An application of PROMETHEE. Eur. J. Oper. Res. 2007, 177, 673–683.
  6. Vetschera, R.; De Almeida, A.T. A PROMETHEE-based approach to portfolio selection problems. Comput. Oper. Res. 2012, 39, 1010–1020.
  7. Tavana, M.; Keramatpour, M.; Santos-Arteaga, F.J.; Ghorbaniane, E. A fuzzy hybrid project portfolio selection method using data envelopment analysis, TOPSIS and integer programming. Expert Syst. Appl. 2015, 42, 8432–8444.
  8. Vásquez, J.A.; Escobar, J.W.; Manotas, D.F. AHP–TOPSIS methodology for stock portfolio investments. Risks 2021, 10, 4.
  9. Ho, W.R.J.; Tsai, C.L.; Tzeng, G.H.; Fang, S.K. Combined DEMATEL technique with a novel MCDM model for exploring portfolio selection based on CAPM. Expert Syst. Appl. 2011, 38, 16–25.
  10. Pätäri, E.; Karell, V.; Luukka, P.; Yeomans, J.S. Comparison of the multicriteria decision-making methods for equity portfolio selection: The US evidence. Eur. J. Oper. Res. 2018, 265, 655–672.
  11. Jing, D.; Imeni, M.; Edalatpanah, S.A.; Alburaikan, A.; Khalifa, H.A.E.-W. Optimal selection of stock portfolios using multi-criteria decision-making methods. Mathematics 2023, 11, 415.
  12. Gomes, L.F.A.M.; Lima, M.M.P.P. TODIM: Basics and application to multicriteria ranking of projects with environmental impacts. Found. Comput. Decis. Sci. 1991, 16, 113–127.
  13. Kahneman, D.; Tversky, A. Prospect theory: An analysis of decision under risk. Econometrica 1979, 47, 263–291.
  14. Alali, F.; Tolga, A.C. Portfolio allocation with the TODIM method. Expert Syst. Appl. 2019, 124, 341–348.
  15. Wu, Q.; Liu, X.; Qin, J.; Zhou, L.; Mardani, A.; Deveci, M. An integrated generalized TODIM model for portfolio selection based on financial performance of firms. Knowl.-Based Syst. 2022, 249, 108794.
  16. Markowitz, H.M. Portfolio selection. J. Financ. 1952, 7, 77–91.
  17. Maillard, S.; Roncalli, T.; Teïletche, J. The properties of equally weighted risk contribution portfolios. J. Portf. Manag. 2010, 36, 60–70.
  18. Fabozzi, F.A.; Simonian, J.; Fabozzi, F.J. Risk parity: The democratization of risk in asset allocation. J. Portf. Manag. 2021, 47, 41–50.
  19. Feng, Y.; Palomar, D.P. SCRIP: Successive convex optimization methods for risk parity portfolio design. IEEE Trans. Signal Process. 2015, 63, 5285–5300.
  20. Bai, X.; Scheinberg, K.; Tutuncu, R. Least-squares approach to risk parity in portfolio selection. Quant. Financ. 2016, 16, 357–376.
  21. Feng, Y.; Palomar, D.P. Portfolio optimization with asset selection and risk parity control. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1–8.
  22. Kaucic, M. Equity portfolio management with cardinality constraints and risk parity control using multi-objective particle swarm optimization. Comput. Oper. Res. 2019, 109, 300–316.
  23. Qian, E. On the financial interpretation of risk contribution: Risk budgets do add up. J. Investig. Manag. 2006, 4, 41–51.
  24. Bruder, B.; Roncalli, T. Managing risk exposures using the risk budgeting approach. SSRN 2012. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2009778 (accessed on 10 March 2025).
  25. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. DISH-XX solving CEC2020 single objective bound constrained numerical optimization benchmark. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  26. Banerjee, A.K.; Pradhan, H.K.; Sensoy, A.; Fabozzi, F.; Mahapatra, B. Robust portfolio optimization with fuzzy TODIM, genetic algorithm and multi-criteria constraints. Ann. Oper. Res. 2024, 337, 1–22.
  27. Sharpe, W.F. Mutual fund performance. J. Bus. 1966, 39, 119–138.
  28. Israelsen, C. A refinement to the Sharpe ratio and information ratio. J. Asset Manag. 2005, 5, 423–427.
  29. Kaucic, M.; Piccotto, F.; Sbaiz, G.; Valentinuz, G. A hybrid level-based learning swarm algorithm with mutation operator for solving large-scale cardinality-constrained portfolio optimisation problems. Inf. Sci. 2023, 634, 321–339.
  30. Costa, G.; Kwon, R. Generalized risk parity portfolio optimization: An ADMM approach. J. Glob. Optim. 2020, 78, 207–238.
  31. Kalayci, C.B.; Ertenlice, O.; Akbay, M.A. A comprehensive review of deterministic models and applications for mean-variance portfolio optimization. Expert Syst. Appl. 2019, 125, 345–368.
  32. Erwin, K.; Engelbrecht, A. Meta-heuristics for portfolio optimization. Soft Comput. 2023, 27, 19045–19073.
  33. Khan, A.T.; Cao, X.; Brajevic, I.; Stanimirovic, P.S.; Katsikis, V.N.; Li, S. A non-linear activated beetle antennae search: A novel technique for non-convex tax-aware portfolio optimization problem. Expert Syst. Appl. 2022, 197, 116631.
  34. Cao, X.; Francis, A.; Pu, X.; Zhang, Z.; Katsikis, V.N.; Stanimirovic, P.S.; Brajevic, I.; Li, S. A novel recurrent neural network based online portfolio analysis for high frequency trading. Expert Syst. Appl. 2023, 233, 120934.
  35. Cao, X.; Yang, S.; Stanimirovic, P.S.; Katsikis, V.N. Artificial neural dynamics for portfolio allocation: An optimization perspective. IEEE Trans. Syst. Man Cybern. Syst. 2025, 55, 1960–1971.
  36. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  37. Krink, T.; Paterlini, S. Multiobjective optimization using differential evolution for real-world portfolio optimization. Comput. Manag. Sci. 2011, 8, 157–179.
  38. Vijayalakshmi Pai, G.A.; Michel, T. Metaheuristic optimization of risk budgeted global asset allocation portfolios. In Proceedings of the 2011 World Congress on Information and Communication Technologies, Mumbai, India, 11–14 December 2011; pp. 154–159.
  39. Meghwani, S.S.; Thakur, M. Multi-criteria algorithms for portfolio optimization under practical constraints. Swarm Evol. Comput. 2017, 37, 104–125.
  40. Chootinan, P.; Chen, A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput. Oper. Res. 2006, 33, 2263–2281.
  41. Takahama, T.; Sakai, S. Constrained optimization by the ε-constrained differential evolution with gradient-based mutation and feasible elites. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 1–8.
  42. Sharma, C.; Habib, A. Mutual information based stock networks and portfolio selection for intraday traders using high frequency data: An Indian market case study. PLoS ONE 2019, 14, e0221910.
  43. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
  44. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
  45. Tanabe, R.; Fukunaga, A.S. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 71–78.
  46. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
  47. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for success-history based differential evolution. Swarm Evol. Comput. 2019, 50, 100462.
  48. Philippatos, G.C.; Wilson, C.J. Entropy, market risk, and the selection of efficient portfolios. Appl. Econ. 1972, 4, 209–220.
  49. Mercurio, P.J.; Wu, Y.; Xie, H. An entropy-based approach to portfolio optimization. Entropy 2020, 22, 332.
  50. Novais, R.G.; Wanke, P.; Antunes, J.; Tan, Y. Portfolio optimization with a mean-entropy-mutual information model. Entropy 2022, 24, 369.
  51. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
  52. Bera, A.K.; Park, S.Y. Optimal portfolio diversification using the maximum entropy principle. Econom. Rev. 2008, 27, 484–512.
  53. Usta, I.; Kantar, Y.M. Mean-variance-skewness-entropy measures: A multi-objective approach for portfolio selection. Entropy 2011, 13, 117–133.
  54. Song, R.; Chan, Y. A new adaptive entropy portfolio selection model. Entropy 2020, 22, 951.
  55. Pozzi, F.; Di Matteo, T.; Aste, T. Spread of risk across financial markets: Better to invest in the peripheries. Sci. Rep. 2013, 3, 1665.
  56. Peralta, G.; Zareei, A. A network approach to portfolio selection. J. Empir. Financ. 2016, 38, 157–180.
  57. Clemente, G.P.; Grassi, R.; Hitaj, A. Asset allocation: New evidence through network approaches. Ann. Oper. Res. 2019, 299, 61–80.
  58. Guo, X.; Zhang, H.; Tian, T. Development of stock correlation networks using mutual information and financial big data. PLoS ONE 2018, 13, e0195941.
  59. Zhao, D.; Li, C.; Wang, J.; Yuan, J. Comprehensive evaluation of national electric power development based on cloud model and entropy method and TOPSIS: A case study in 11 countries. J. Clean. Prod. 2020, 277, 123190.
  60. Llamazares, B. An analysis of the generalized TODIM method. Eur. J. Oper. Res. 2018, 269, 1041–1049.
  61. Gurrola-Ramos, J.; Hernández-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for real-world single-objective constrained optimization problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  62. Tang, H.; Lee, J. Adaptive initialization LSHADE algorithm enhanced with gradient-based repair for real-world constrained optimization. Knowl.-Based Syst. 2022, 246, 108696.
  63. Kaucic, M.; Piccotto, F.; Sbaiz, G. A constrained swarm optimization algorithm for large-scale long-run investments using Sharpe ratio-based performance measures. Comput. Manag. Sci. 2024, 21, 6.
  64. Guy, A. Upside and downside beta portfolio construction: A different approach to risk measurement and portfolio construction. Risk Gov. Control Financ. Mark. Inst. 2015, 5, 243–251.
  65. DeMiguel, V.; Garlappi, L.; Uppal, R. Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? Rev. Financ. Stud. 2009, 22, 1915–1953.
  66. Fisher, J.D.; D’Alessandro, J. Portfolio upside and downside risk—Both matter! J. Portf. Manag. 2021, 47, 158–171.
  67. Martin, P.G.; McCann, B.B. The Investor’s Guide to Fidelity Funds, 2nd ed.; Venture Catalyst, Inc.: Redmond, WA, USA, 1998.
  68. Sortino, F.A.; Satchell, S. Managing Downside Risk in Financial Markets: Theory, Practice and Implementation; Butterworth-Heinemann: Oxford, UK, 2001.
  69. Keating, C.; Shadwick, W.F. A universal performance measure. J. Perform. Meas. 2002, 6, 59–84.
  70. Ledoit, O.; Wolf, M. Robust performance hypothesis testing with the Sharpe ratio. J. Empir. Financ. 2008, 15, 850–859.
  71. Storey, J.D. A direct approach to false discovery rates. J. R. Stat. Soc. B 2002, 64, 479–498.
  72. Ardia, D.; Boudt, K. The peer performance ratios of hedge funds. J. Bank. Financ. 2018, 87, 351–368.
  73. Beck, A. Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB, 1st ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2014.
  74. Olorunda, O.; Engelbrecht, A.P. Measuring exploration/exploitation in particle swarms using swarm diversity. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC), Hong Kong, China, 1–6 June 2008; pp. 1128–1134.
Figure 1. Structure of the knowledge-based financial management system.
Figure 2. Wealth evolution for the best selected strategies corresponding to the best ex post models (MSR-Entr-TODIMK and MSR-RB-Entr-TODIMK,0.1) and the benchmarks for the US case study. Panel (a) shows results for K = 5%, while panels (b) and (c) display K = 10% and K = 15%, respectively.
Figure 3. Wealth evolution for the best selected strategies corresponding to the best ex post models and the benchmarks for the EU case study. Panel (a) shows the cardinality K = 5% and the strategies MSR-Equi-TODIM5% and MSR-RB-Equi-TODIM5%,0.05. Panel (b) displays the results for the strategies MSR-Equi-TODIM10% and MSR-RB-Equi-TODIM10%,0.05. Finally, graph (c) shows the evolution of MSR-Equi-TODIM15% and MSR-RB-Equi-TODIM15%,0.1.
Table 1. Summary of the considered data sets and experimental design.

Data Set | n | Time Window | Estimation Window (Months) | Ex Post Months
S&P 500 (US) | 470 stocks | 31/12/2014–31/10/2024 | 24 | 94
STOXX Europe 600 (EU) | 535 stocks | 31/12/2014–31/10/2024 | 24 | 94
Table 2. Ex post results for the two Sharpe ratio-based optimization strategies in the US data set, compared with the benchmark strategies. The first two columns report the fraction K% of assets comprising the portfolio and the portfolio strategy, respectively. For notational simplicity, the strategies' dependence on the parameter K is omitted. The remaining columns report the ex post metrics presented in Section 6.3.

US Data Set

| K | Strategy | CAGR | SR | SSR_out | Ω_out | σ_out (×100) | maxDD | UI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5% | MSR-Equi-TODIM | 0.08 | 0.15 | 0.21 | 1.49 | 5.00 | 0.35 | 0.14 |
| 5% | MSR-RB-Equi-TODIM_0.01 | 0.08 | 0.16 | 0.23 | 1.53 | 4.94 | 0.30 | 0.12 |
| 5% | MSR-RB-Equi-TODIM_0.05 | 0.08 | 0.16 | 0.23 | 1.53 | 4.94 | 0.30 | 0.12 |
| 5% | MSR-RB-Equi-TODIM_0.1 | 0.08 | 0.16 | 0.23 | 1.53 | 4.95 | 0.30 | 0.12 |
| 5% | MSR-Entr-TODIM | 0.10 | 0.18 | 0.22 | 1.61 | 4.99 | 0.33 | 0.13 |
| 5% | MSR-RB-Entr-TODIM_0.01 | 0.10 | 0.19 | 0.25 | 1.64 | 5.03 | 0.34 | 0.13 |
| 5% | MSR-RB-Entr-TODIM_0.05 | 0.10 | 0.19 | 0.25 | 1.63 | 5.03 | 0.34 | 0.13 |
| 5% | MSR-RB-Entr-TODIM_0.1 | 0.11 | 0.19 | 0.25 | 1.64 | 5.02 | 0.33 | 0.13 |
| 5% | BenchMI | 0.09 | 0.19 | 0.27 | 1.62 | 4.50 | 0.20 | 0.06 |
| 10% | MSR-Equi-TODIM | 0.09 | 0.18 | 0.23 | 1.60 | 4.49 | 0.29 | 0.12 |
| 10% | MSR-RB-Equi-TODIM_0.01 | 0.10 | 0.19 | 0.25 | 1.61 | 4.64 | 0.27 | 0.10 |
| 10% | MSR-RB-Equi-TODIM_0.05 | 0.10 | 0.19 | 0.24 | 1.61 | 4.64 | 0.27 | 0.10 |
| 10% | MSR-RB-Equi-TODIM_0.1 | 0.10 | 0.19 | 0.25 | 1.62 | 4.64 | 0.27 | 0.10 |
| 10% | MSR-Entr-TODIM | 0.12 | 0.22 | 0.27 | 1.79 | 4.97 | 0.34 | 0.15 |
| 10% | MSR-RB-Entr-TODIM_0.01 | 0.11 | 0.20 | 0.25 | 1.68 | 4.99 | 0.34 | 0.14 |
| 10% | MSR-RB-Entr-TODIM_0.05 | 0.11 | 0.20 | 0.25 | 1.67 | 4.99 | 0.34 | 0.14 |
| 10% | MSR-RB-Entr-TODIM_0.1 | 0.11 | 0.20 | 0.25 | 1.67 | 4.99 | 0.34 | 0.14 |
| 10% | BenchMI | 0.09 | 0.19 | 0.27 | 1.65 | 4.54 | 0.19 | 0.06 |
| 15% | MSR-Equi-TODIM | 0.12 | 0.22 | 0.29 | 1.76 | 4.91 | 0.33 | 0.12 |
| 15% | MSR-RB-Equi-TODIM_0.01 | 0.10 | 0.19 | 0.25 | 1.64 | 4.71 | 0.25 | 0.09 |
| 15% | MSR-RB-Equi-TODIM_0.05 | 0.10 | 0.19 | 0.25 | 1.64 | 4.70 | 0.25 | 0.09 |
| 15% | MSR-RB-Equi-TODIM_0.1 | 0.10 | 0.19 | 0.25 | 1.64 | 4.70 | 0.25 | 0.09 |
| 15% | MSR-Entr-TODIM | 0.16 | 0.28 | 0.34 | 2.05 | 4.92 | 0.28 | 0.10 |
| 15% | MSR-RB-Entr-TODIM_0.01 | 0.12 | 0.23 | 0.28 | 1.80 | 4.82 | 0.30 | 0.11 |
| 15% | MSR-RB-Entr-TODIM_0.05 | 0.12 | 0.23 | 0.28 | 1.81 | 4.81 | 0.30 | 0.11 |
| 15% | MSR-RB-Entr-TODIM_0.1 | 0.12 | 0.23 | 0.28 | 1.81 | 4.81 | 0.30 | 0.11 |
| 15% | BenchMI | 0.09 | 0.17 | 0.25 | 1.58 | 4.65 | 0.22 | 0.06 |
| – | BenchEW | 0.14 | 0.24 | 0.33 | 1.89 | 5.06 | 0.25 | 0.06 |
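The columns of Tables 2 and 3 can be recomputed from each strategy's vector of ex post monthly returns. The sketch below uses standard textbook definitions (per-period Sharpe ratio, a Sortino-type downside ratio for SSR_out, the Omega ratio with a zero threshold, maximum drawdown, and the Ulcer Index); the paper's exact formulas are those of Section 6.3, so these are hedged stand-ins, not the authors' implementation.

```python
import numpy as np

def ex_post_metrics(r, periods_per_year=12, target=0.0):
    """Recompute the Table 2/3 columns from monthly ex post returns r.

    Assumption: standard definitions are used throughout; sigma is the
    per-period standard deviation (reported x100 in the tables).
    """
    r = np.asarray(r, dtype=float)
    n = r.size
    wealth = np.cumprod(1.0 + r)                       # compounded wealth path
    cagr = wealth[-1] ** (periods_per_year / n) - 1.0  # annualized growth rate
    sr = r.mean() / r.std(ddof=1)                      # per-period Sharpe ratio
    downside = np.minimum(r - target, 0.0)
    ssr = r.mean() / np.sqrt((downside ** 2).mean())   # Sortino-type ratio
    omega = np.maximum(r - target, 0.0).sum() / -downside.sum()  # Omega ratio
    peak = np.maximum.accumulate(wealth)
    dd = 1.0 - wealth / peak                           # drawdown series
    return dict(CAGR=cagr, SR=sr, SSR=ssr, Omega=omega,
                sigma=r.std(ddof=1), maxDD=dd.max(),
                UI=np.sqrt((dd ** 2).mean()))          # Ulcer Index
```

For instance, ex_post_metrics(r)["Omega"] should reproduce the Ω_out column up to the definitional choices stated above.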
Table 3. Ex post results for the two Sharpe ratio-based optimization strategies in the EU data set, compared with the benchmark strategies. The first two columns report the fraction K% of assets comprising the portfolio and the portfolio strategy, respectively. For notational simplicity, the strategies' dependence on the parameter K is omitted. The remaining columns report the ex post metrics presented in Section 6.3.

EU Data Set

| K | Strategy | CAGR | SR | SSR_out | Ω_out | σ_out (×100) | maxDD | UI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5% | MSR-Equi-TODIM | 0.08 | 0.16 | 0.23 | 1.58 | 4.93 | 0.24 | 0.07 |
| 5% | MSR-RB-Equi-TODIM_0.01 | 0.05 | 0.10 | 0.13 | 1.37 | 5.69 | 0.33 | 0.11 |
| 5% | MSR-RB-Equi-TODIM_0.05 | 0.05 | 0.10 | 0.13 | 1.38 | 5.68 | 0.33 | 0.11 |
| 5% | MSR-RB-Equi-TODIM_0.1 | 0.05 | 0.10 | 0.13 | 1.37 | 5.66 | 0.33 | 0.11 |
| 5% | MSR-Entr-TODIM | 0.02 | 0.06 | 0.08 | 1.20 | 5.97 | 0.43 | 0.15 |
| 5% | MSR-RB-Entr-TODIM_0.01 | 0.02 | 0.05 | 0.07 | 1.19 | 6.43 | 0.43 | 0.15 |
| 5% | MSR-RB-Entr-TODIM_0.05 | 0.02 | 0.05 | 0.07 | 1.19 | 6.43 | 0.43 | 0.15 |
| 5% | MSR-RB-Entr-TODIM_0.1 | 0.02 | 0.05 | 0.07 | 1.19 | 6.40 | 0.43 | 0.15 |
| 5% | BenchMI | 0.09 | 0.17 | 0.23 | 1.54 | 4.79 | 0.30 | 0.10 |
| 10% | MSR-Equi-TODIM | 0.08 | 0.15 | 0.21 | 1.55 | 4.79 | 0.30 | 0.08 |
| 10% | MSR-RB-Equi-TODIM_0.01 | 0.07 | 0.13 | 0.16 | 1.50 | 5.45 | 0.32 | 0.09 |
| 10% | MSR-RB-Equi-TODIM_0.05 | 0.07 | 0.13 | 0.16 | 1.51 | 5.44 | 0.32 | 0.09 |
| 10% | MSR-RB-Equi-TODIM_0.1 | 0.07 | 0.13 | 0.16 | 1.51 | 5.43 | 0.32 | 0.09 |
| 10% | MSR-Entr-TODIM | 0.06 | 0.11 | 0.13 | 1.37 | 5.35 | 0.39 | 0.13 |
| 10% | MSR-RB-Entr-TODIM_0.01 | 0.04 | 0.09 | 0.11 | 1.32 | 6.09 | 0.40 | 0.12 |
| 10% | MSR-RB-Entr-TODIM_0.05 | 0.04 | 0.09 | 0.11 | 1.32 | 6.08 | 0.39 | 0.12 |
| 10% | MSR-RB-Entr-TODIM_0.1 | 0.04 | 0.09 | 0.11 | 1.32 | 6.08 | 0.40 | 0.12 |
| 10% | BenchMI | 0.09 | 0.19 | 0.25 | 1.65 | 4.63 | 0.26 | 0.09 |
| 15% | MSR-Equi-TODIM | 0.08 | 0.16 | 0.22 | 1.58 | 4.84 | 0.27 | 0.07 |
| 15% | MSR-RB-Equi-TODIM_0.01 | 0.07 | 0.13 | 0.16 | 1.49 | 5.28 | 0.31 | 0.08 |
| 15% | MSR-RB-Equi-TODIM_0.05 | 0.07 | 0.13 | 0.16 | 1.49 | 5.28 | 0.31 | 0.08 |
| 15% | MSR-RB-Equi-TODIM_0.1 | 0.07 | 0.13 | 0.16 | 1.49 | 5.26 | 0.31 | 0.08 |
| 15% | MSR-Entr-TODIM | 0.05 | 0.10 | 0.12 | 1.33 | 5.38 | 0.37 | 0.11 |
| 15% | MSR-RB-Entr-TODIM_0.01 | 0.05 | 0.10 | 0.11 | 1.34 | 5.81 | 0.36 | 0.10 |
| 15% | MSR-RB-Entr-TODIM_0.05 | 0.05 | 0.10 | 0.11 | 1.34 | 5.81 | 0.36 | 0.10 |
| 15% | MSR-RB-Entr-TODIM_0.1 | 0.05 | 0.10 | 0.11 | 1.35 | 5.81 | 0.36 | 0.10 |
| 15% | BenchMI | 0.09 | 0.18 | 0.23 | 1.60 | 4.52 | 0.28 | 0.09 |
| – | BenchEW | 0.08 | 0.16 | 0.22 | 1.54 | 4.80 | 0.26 | 0.08 |