Article

Two-Stage Distributionally Robust Optimization for an Asymmetric Loss-Aversion Portfolio via Deep Learning

School of Economics and Management, Beihang University, No. 37, Xueyuan Road, Beijing 100191, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1236; https://doi.org/10.3390/sym17081236
Submission received: 25 June 2025 / Revised: 26 July 2025 / Accepted: 30 July 2025 / Published: 4 August 2025
(This article belongs to the Section Computer)

Abstract

In portfolio optimization, investors often overlook asymmetric preferences for gains and losses. We propose a distributionally robust two-stage portfolio optimization (DR-TSPO) model, which is suitable for scenarios where the loss reference point is adaptively updated based on prior decisions. For analytical convenience, we further reformulate the DR-TSPO model as an equivalent second-order cone programming counterpart. Additionally, we develop a deep learning-based constraint correction algorithm (DL-CCA) trained directly on problem descriptions, which enhances computational efficiency for large-scale non-convex distributionally robust portfolio optimization. Our empirical results obtained using global market data demonstrate that during COVID-19, the DR-TSPO model outperformed traditional two-stage optimization in reducing conservatism and avoiding extreme losses.

1. Introduction

In recent years, the increasing frequency of geopolitical conflicts, global supply chain disruptions, and macroeconomic fluctuations has significantly heightened the uncertainty and complexity of financial markets. On the one hand, the complexity of investor behavior complicates the construction of quantitative portfolios; on the other, market uncertainty severely impacts asset price stability and amplifies investors’ sensitivity to risk. Researchers have observed that during periods of intense market volatility, traditional portfolio theories often overlook the asymmetric sensitivity of investors to gains and losses. Kahneman and Tversky [1] provided an alternative theoretical framework for decision making under such uncertainty. The decision model based on prospect theory incorporates new features, such as (i) reference dependence, where decision-makers evaluate outcomes as gains or losses relative to a reference point that fluctuates with market conditions and wealth [2], (ii) asymmetric utility, as Zhang and Semmler [3] demonstrated that previous gains and losses in the stock market have an asymmetric impact on investment behavior, and (iii) different probability weights for evaluating gains and losses. These features lead to highly nonlinear investor utility, posing significant challenges in modeling and solving portfolio optimization problems from this perspective.
In modeling loss aversion for portfolio optimization, the focus is on characterizing the updating mechanism of the loss reference point. The application of a static reference point, set subjectively by the decision-maker, is quite limited [4,5] and struggles to adapt to dynamic market conditions. A widely accepted view is that the loss reference point updates in an adaptive manner [6,7]. Our work builds on the literature on behavioral portfolio selection based on prospect theory, which includes notable studies such as [4,8,9,10,11,12,13,14], among others. However, we further extend this framework by incorporating a dynamic, decision-dependent reference point, and we model the uncertainty of the loss reference point through ambiguity sets in the distributionally robust optimization (DRO) framework. In recent years, DRO has been widely applied in uncertainty modeling, as it avoids unreasonable assumptions about the distributional form of random variables and provides robust solutions through worst-case analysis when precise distribution information is unavailable. Within the DRO framework, we propose an update mechanism for the decision-dependent loss aversion reference point, allowing the reference point to adapt dynamically to market performance and investor behavior. We assume that the distribution of the random loss reference point depends solely on the investor’s prior decisions and is independent of asset returns, without requiring additional commitments to the specific distribution form. Specifically, the difference between prior decision returns and market returns influences the expected value of the second-stage random loss reference point, while the difference between market weights and prior decisions affects the variance of the loss reference point, as illustrated in Figure 1. When a loss occurs, investors compare their performance to that of the market or other investors, which in turn affects their risk preferences, decision making, and expectations for future returns. This “social comparison effect” stems from a common psychological mechanism in human social behavior, whereby individuals tend to assess their own performance by comparing it to others, a tendency that is particularly evident in financial decision making [15].
We present the following three main contributions to the distributionally robust two-stage optimization portfolio (DR-TSPO) problem under loss aversion:
  • We propose an update mechanism for the loss reference point based on prior decisions, which adapts to market fluctuations and investor behavior. This mechanism captures how investors dynamically respond to market changes. We also derive the equivalent dual of the DR-TSPO problem, transforming the original problem into a second-order cone programming problem that is easier to implement, providing a solid theoretical foundation for algorithm design and practical applications.
  • We develop a deep learning-based constraint correction algorithm (DL-CCA) to solve complex optimization problems with nonlinear and non-convex constraints. Specifically, the innovation of this method lies in training the neural network directly from the problem’s specifications, rather than from an existing supervised dataset, effectively handling complex non-convex constraints. Experimental results show that the DL-CCA algorithm, leveraging fully connected neural networks, outperforms Trust-Constr, HO, and LSTM/CNN-based variants in solving large-scale constrained non-convex problems, achieving superior average optimal objective values (0.0029) and faster solution times (76.78 s).
  • We validate the advantages of the loss aversion Distributionally Robust Two-Stage Portfolio Optimization (DR-TSPO) model in dealing with loss and uncertainty using global key stock index component data. The experimental results show that the DR-TSPO model exhibits strong robustness and lower drawdown under extreme market conditions (such as the 2020 COVID-19 pandemic). For instance, in the Chinese market, the DR-TSPO’s annual return is 0.4635, significantly higher than the TSPO’s 0.3020, with lower volatility (DR-TSPO: 0.4430 vs. TSPO: 0.5846), demonstrating stronger capital protection ability.
This study closely integrates behavioral finance theory with modern optimization methods. On the one hand, the decision-dependent loss reference point mechanism enriches the application scenarios of prospect theory in asset allocation. On the other hand, the DL-CCA algorithm provides a general solution framework for complex optimization problems in financial engineering, offering insights for technological upgrades in robo-advisory and risk management. In the future, the framework can be extended to more complex scenarios such as multi-stage investment decisions and cross-border asset allocation.
The remainder of the paper is organized as follows. Section 2 reviews related work on loss aversion and distributionally robust optimization. Section 3 introduces the two-stage portfolio optimization (TSPO) model with stochastic loss reference points. Section 4 develops the loss aversion-based DR-TSPO model and derives its tractable reformulation. Section 5 proposes the DL-CCA algorithm for solving large-scale DR-TSPO problems. Section 6 presents the algorithm comparison experiments, including an efficiency comparison and an ablation study. Section 7 designs empirical experiments using real data from global key index constituents and analyzes the results. The experimental findings demonstrate that, compared to conventional two-stage optimization models, the loss aversion-based DR-TSPO exhibits higher robustness and adaptability. Finally, Section 8 concludes the paper and discusses future research directions.

2. Related Work

Since the introduction of Markowitz’s classical portfolio theory [16], portfolio construction methods based on mathematical statistics have developed rapidly. However, these models rely on the assumption that all investors are well informed and rational, free of sentiment bias. In the 1980s, scholars began to focus on the impact of loss aversion on portfolios. This section provides a comprehensive review of loss aversion in financial investments and the DRO method.

2.1. Loss Aversion in Financial Investments

Loss aversion, as a crucial component of prospect theory, provides a novel perspective for investigating investor preferences and portfolio construction [2]. By incorporating loss aversion into portfolio construction, we can account for investors’ loss preferences, explain the asymmetric impact of prior gains and losses in the stock market on investment behavior, and thereby mitigate decision-making errors caused by cognitive biases [3]. Regarding investment strategies for loss-averse investors, Berkelaar et al. [8] demonstrated that over shorter investment horizons (e.g., less than five years), loss-averse investors significantly reduce the initial portfolio weight allocated to stocks compared to investors with smooth power utility. However, loss aversion and risk aversion are empirically difficult to distinguish clearly, necessitating the examination of individual investor trading behavior through the lens of utility derived from realized gains and losses [4]. Based on prospect theory, Jin and Zhou [9] developed and analyzed a portfolio selection model featuring an S-shaped utility (value) function and probability weighting. The reference point for losses serves as a key element in this model for measuring loss utility, functioning as a standard or benchmark to delineate gains from losses. However, the specific mechanisms by which decision-makers form and update reference points remain insufficiently understood. Baucells et al. [17] demonstrated through experiments that this process is not a simple recursive procedure. Shi et al. [13] integrated the adaptive process of reference points with investors’ perception of past gains and losses, constructing a dynamic trading model featuring reference point adaptation and loss aversion, and derived its semi-analytical solution.
Notably, existing research on portfolio optimization with reference point updating typically assumes that decision-makers can anticipate the evolution of reference points and thereby resolve the time inconsistency issue in dynamic optimization. Strub and Li [18] compared optimal investment strategies under different reference point updating rules within time-consistent and time-inconsistent frameworks, providing empirical evidence that decision-makers often struggle to foresee the updating process of reference points. van Bilsen and Laeven [19] further highlighted that loss-averse individuals endogenously update their reference levels over time and distort probabilities, with experimental findings showing that investors with prospect-theoretic preferences tend to adopt more conservative portfolio strategies and exhibit lower sensitivity of optimal consumption strategies to economic shocks. He and Strub [20] examined the impact of different partially endogenous reference point generation models on optimal portfolio decisions under loss aversion. Gao et al. [7] explored the behavioral characteristics of loss-averse investors with dynamically adjusted reference points in market environments with serially correlated returns, offering new insights into investor decision making in complex market settings.
Nevertheless, current research on the application of loss aversion in portfolio construction still exhibits notable limitations. For example, scholars frequently adopt static loss aversion parameters in their models, failing to dynamically capture the inherent ambiguity in loss reference point distributions. Additionally, the fixed structure of ambiguity sets cannot adequately reflect investors’ differential dependence on realized returns versus target returns. These methodological constraints motivate us to refine the quantitative modeling framework for loss reference points while simultaneously developing more robust portfolio optimization approaches.

2.2. Distributionally Robust Portfolio Optimization

Distributionally robust optimization (DRO) methods have been widely applied to uncertainty analysis in portfolio selection, focusing on the uncertainty of random variables rather than assuming known variables or their distributions [21,22,23]. Furthermore, DRO can effectively mitigate the impact of outliers and reduce the interference of ambiguity in portfolio construction, thereby enhancing robustness and flexibility. Building upon this optimization framework, we aim to relax the stringent requirements regarding loss aversion reference points in portfolio modeling, such as regulator-prescribed reference points or uncertainty considerations under known distributions. Generally, distributionally robust optimization assumes that the distribution of random variables belongs to a well-defined ambiguity set. Garlappi et al. [24] posited that investors’ ambiguity aversion manifests as mean returns belonging to an ellipsoidal uncertainty set, studying robust mean-variance optimization under parameter and model uncertainty. Under ambiguity sets such as mixture distributions, box-type, and ellipsoidal sets, researchers have proposed robust portfolios based on minimizing worst-case conditional value-at-risk (CVaR) [25,26,27]. Empirical results demonstrate that portfolios constructed via this method exhibit superior diversification, stability, expected returns, and turnover compared to non-robust approaches.
However, traditional ambiguity sets often include too many implausible discrete distributions. To address this, we consider data-driven ambiguity sets (e.g., Wasserstein, φ-divergence), which leverage data adaptiveness, statistical rigor, and tail risk control to overcome the over-conservatism of conventional ambiguity sets. In this regard, Pflug and Wozabal [28] introduced the Wasserstein distance to describe portfolio ambiguity sets, which was later extended by Wozabal [29] to more general cases. Gao et al. [30] derived a worst-case expectation representation for Wasserstein-based ambiguity sets, applied to portfolio construction centered on empirical measures with different risk metrics. Blanchet et al. [31] proposed a DRO model incorporating return and variance uncertainty distributions using the Wasserstein metric. Existing studies primarily focus on single-stage distributionally robust portfolio construction, motivating us to extend this research trajectory—particularly by incorporating the decision-dependent nature of loss aversion introduced in this study. The first-stage decision must be made before observing the realization of random parameters, while the second-stage decision involves adjustments or compensations based on the first-stage outcome after observing the actual realizations. Our proposed two-stage distributionally robust portfolio optimization framework with loss aversion not only integrates behavioral finance into quantitative portfolio research but also relaxes the constraint of regulator-imposed reference points by leveraging DRO properties.
Additionally, solving DRO problems requires mapping the primal problem to the dual space via duality theory. Since uncertainty sets (e.g., moment-based, Wasserstein distance, or φ-divergence) may introduce non-convex or even non-smooth constraints, large-scale DRO problems often face severe computational challenges. Even with convex relaxation or stochastic optimization techniques, existing methods struggle to obtain high-quality feasible solutions within reasonable timeframes, especially in high-dimensional or dynamic settings where computational complexity grows exponentially, severely limiting DRO’s practical deployment. To address this, we incorporate deep learning to design heuristic algorithms that enhance the solution efficiency of non-convex robust models in behavioral portfolio optimization.

3. Basic Two-Stage Portfolio Optimization Model with Decision-Dependent Loss Aversion

In the investment process, assume that the investor allocates all assets to the stock market, and the current portfolio weight vector is $\mathbf{x} = (x_1, x_2, \ldots, x_n)^{\top}$, where $\mathbf{x} \in X$ is a feasible portfolio weight vector. The stock return vector $\mathbf{r}$ follows a normal distribution, $\mathbf{r} \sim N(\boldsymbol{\mu}, \Sigma)$, so the portfolio return $R(\mathbf{x})$ can be expressed as:
$$R(\mathbf{x}) = \mathbf{r}^{\top}\mathbf{x}.$$
The expected return and risk of the portfolio are, respectively, given by
$$\mathbb{E}[R(\mathbf{x})] = \boldsymbol{\mu}^{\top}\mathbf{x}, \qquad \mathrm{Cov}[R(\mathbf{x})] = \mathbf{x}^{\top}\Sigma\,\mathbf{x}.$$
Here, $\boldsymbol{\mu}$ denotes the vector of expected stock returns, and $\Sigma$ represents the covariance matrix of the returns.
For an investor with loss aversion preferences, assume that the loss aversion coefficient is $\varphi$ and that there is a psychological reference point $\ell$ (the loss reference point) used to evaluate the portfolio’s gains and losses. When the portfolio return falls below the reference point $\ell$, the investor’s utility decreases. Specifically, the $\tau$-order loss aversion utility is defined as:
$$\varphi\,\mathbb{E}_{\ell \sim P}\left[\ell - \boldsymbol{\mu}^{\top}\mathbf{x},\ 0\right]_{+}^{\tau},$$
where $[\,\cdot\,, 0]_{+}$ denotes the non-negative part (i.e., when a loss occurs, the investor experiences loss aversion; when the portfolio return exceeds the reference point, the loss aversion effect is zero). In practice, the assessment of losses is closely related to the prior investment decision $\mathbf{y} \in X$. Therefore, we treat the loss reference point as a random variable, $\ell \sim P(\mathbf{y})$.
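For intuition, the following minimal NumPy sketch evaluates this $\tau$-order loss-aversion utility by Monte Carlo over sampled reference points; the asset data, $\varphi$, and $\tau$ values are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def loss_aversion_utility(mu, x, ref_samples, phi=2.25, tau=2):
    """Monte Carlo estimate of phi * E[ (ell - mu^T x)_+ ** tau ]."""
    shortfall = np.maximum(ref_samples - mu @ x, 0.0)   # (ell - mu^T x)_+ per sample
    return phi * np.mean(shortfall ** tau)

# illustrative data: 3 assets, 5 sampled loss reference points
mu = np.array([0.08, 0.05, 0.03])
x = np.array([0.5, 0.3, 0.2])
ref_samples = np.array([-0.02, 0.00, 0.03, 0.06, 0.10])
print(loss_aversion_utility(mu, x, ref_samples))
```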
Two-stage portfolio optimization is used in various financial and investment scenarios, aiming to enhance portfolio performance, reduce risk, and adapt to changing market conditions. It is applicable to optimization scenarios where decision making depends on random variables. Assume the decision set $X$ is related to random market events, and the investor’s prior decisions reflect potential losses the investor may face under specific circumstances. In this framework, for each random event $k$, the decision-maker has a determined investment decision $\mathbf{y} \in X(k)$, $\mathbf{y} = (y_1, y_2, \ldots, y_n)^{\top}$, and a loss reference point $\ell$. Consider the two-stage portfolio optimization problem (TSPO) under a quadratic loss utility function. Let the investor’s utility function be defined as $U(\mathbf{x}, \ell) = \boldsymbol{\mu}^{\top}\mathbf{x} - c\sum_{i \in [N]}|x_i - y_i| - \varphi\cdot\left[\ell - \boldsymbol{\mu}^{\top}\mathbf{x},\ 0\right]_{+}^{2}$, where $c$ is the transaction cost coefficient. Under the constraint of no short-selling, the two-stage portfolio optimization problem for a loss-averse investor is:
$$(\mathrm{TSPO}):\quad \max_{\mathbf{y} \in X}\ \boldsymbol{\mu}^{\top}\mathbf{y} + \mathbb{E}_{\ell \sim P(\mathbf{y})}\left[\max_{\mathbf{x} \in X} U(\mathbf{x}, \ell)\right],$$
$$\text{s.t.}\quad \sum_{i \in [N]} x_i = 1,\quad \sum_{i \in [N]} y_i = 1,$$
$$\ell \in L(\mathbf{y}),\quad x_i, y_i \in [0, 1],\ \forall i \in [N].$$
After the first-stage decision $\mathbf{y}$ is made, the investor determines $\ell$ based on the realized gains and losses. The variable $\mathbf{x}$ represents the second-stage decision, which is made after the random loss reference point is realized. Under the decision $\mathbf{y}$, the realization of $\ell$ can be described by a discrete probability distribution $\hat{P}(\mathbf{y})$ containing $S$ loss reference point samples. This is defined as:
$$\hat{P}(\mathbf{y})(\ell) = \sum_{s=1}^{S} \hat{p}_s(\mathbf{y}) \cdot \delta\big(\ell - \hat{\ell}_s(\mathbf{y})\big),$$
where:
- $\hat{p}_s(\mathbf{y})$ is the probability of the $s$-th sample, satisfying $\hat{p}_s(\mathbf{y}) > 0$ and $\sum_{s \in [S]} \hat{p}_s(\mathbf{y}) = 1$, with equal probabilities generally assumed;
- $\hat{\ell}_s(\mathbf{y})$ is the $s$-th loss reference point sample in the set determined by decision $\mathbf{y}$;
- $\delta\big(\ell - \hat{\ell}_s(\mathbf{y})\big)$ is the Dirac delta function, which places a probability mass at each point of the discrete distribution.
The discrete loss reference point samples l ^ s ( y ) can be obtained from historical data, expert knowledge, or by extracting reference distribution samples based on prior decision characteristics. This study primarily focuses on adaptive optimization methods, so the samples will be directly extracted based on the characteristics of the decision y .
We assume that for a finite set of events $k$, there is a unique decision $\mathbf{y}$ and a corresponding set of random loss reference point samples $\hat{\ell} \in \hat{L}_k$. The expected loss values of these distributions should be adjusted based on the nature of the events and the psychological expectations of the investors, ensuring that the expected values are ordered from largest to smallest, i.e., $\mathbb{E}[\tilde{\ell}_1] \geq \mathbb{E}[\tilde{\ell}_2] \geq \cdots \geq \mathbb{E}[\tilde{\ell}_K]$. This structure ensures that the event partitions not only reflect the loss aversion sentiment of market participants but also provide a clear, stepwise basis for the expected loss of the reference points. For example, according to expert opinions and historical statistics, we classify random market states into five categories of events according to the varying levels of loss aversion among market participants. Four thresholds $T_1 < T_2 < T_3 < T_4$ are set, which define the event classification criteria. Specifically, we define the event set $X_k = \{\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3, \mathbf{y}_4, \mathbf{y}_5\}$, where each event $\mathbf{y}_i$ ($i \in \{1, 2, 3, 4, 5\}$) corresponds to the following:
  • Event $\mathbf{y} \in X_1$: High loss aversion, satisfying $\mathbb{E}[\hat{\ell}] \leq T_1$. Investors have suffered significant losses in the past or experienced a large gap in returns compared to the benchmark portfolio. This leads to high loss aversion, causing investors to set a lower reference loss $\hat{\ell}$ for the new decision round. Such events are often accompanied by sharp market declines, where some investors may underestimate the market’s recovery potential, resulting in overly pessimistic expectations. Extreme outliers cause higher sample variance.
  • Event $\mathbf{y} \in X_2$: Moderate-high loss aversion, satisfying $T_1 < \mathbb{E}[\hat{\ell}] \leq T_2$. Investors may have experienced some losses, but the overall loss is smaller or the return difference with the benchmark portfolio is less significant, resulting in lower loss aversion. The sample variance is smaller.
  • Event $\mathbf{y} \in X_3$: Moderate loss aversion, satisfying $T_2 < \mathbb{E}[\hat{\ell}] \leq T_3$. Investors may have followed a benchmark-tracking strategy, with returns similar to or nearly identical to the benchmark, resulting in little additional loss.
  • Event $\mathbf{y} \in X_4$: Moderate-low loss aversion, satisfying $T_3 < \mathbb{E}[\hat{\ell}] \leq T_4$. Investors have achieved some excess returns compared to the benchmark portfolio and, due to a small deviation from market strategies, exhibit some degree of risk aversion. The new reference loss $\hat{\ell}$ is positive but relatively low. The sample variance for the current market state is also low.
  • Event $\mathbf{y} \in X_5$: Low loss aversion, satisfying $\mathbb{E}[\hat{\ell}] > T_4$. Investors have made significant portfolio adjustments or earned returns higher than the market. The new reference loss $\hat{\ell}$ is high. Some investors may exhibit overconfidence, where overestimating their own abilities influences their decisions and expectations, causing outlier sample variance among those pursuing higher returns.
However, a finite set of random events is insufficient to explain the complex market states. When the thresholds $T$ are infinitely subdivided (or the intervals $(T_{k-1}, T_k]$ are sufficiently small), market uncertainty is modeled as an infinite number of events ($k \to \infty$) and an infinite number of feasible decisions $\mathbf{y}$. By representing the sample set $\hat{L}$ as a continuous functional relationship of random events, $\hat{L}_k = f(\mathbf{y})$, we introduce a mapping function $f: \mathbb{R}^n \to M_2 \subseteq \mathbb{R}^2$. This maps the decision under random events to the reference distribution information space of loss reference points, $M_2 = \big\{\hat{P} : \mathbb{E}[\hat{\ell}] = \mu_{\hat{\ell}},\ \mathrm{Var}[\hat{\ell}] = \sigma_{\hat{\ell}}^{2}\big\}$, and then generates discrete samples $\hat{\ell}_s$, $s = 1, \ldots, S$, according to the specified distribution moments. For each random event, the following relationship is set for the reference distribution mean $\mathbb{E}[\hat{\ell}]$ and variance $\mathrm{Var}[\hat{\ell}]$:
$$\mathbb{E}[\hat{\ell}] = \beta \cdot \big(\boldsymbol{\mu}^{\top}\mathbf{y} - \boldsymbol{\mu}^{\top}\mathbf{y}_{mkt}\big),$$
$$\mathrm{Var}[\hat{\ell}] = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2},$$
where $\beta$ and $\gamma$ serve as adjustment coefficients, and $\mathbf{y}_{mkt}$ represents the market portfolio weights. The term $\big(\boldsymbol{\mu}^{\top}\mathbf{y} - \boldsymbol{\mu}^{\top}\mathbf{y}_{mkt}\big)$ reflects the investor’s prior gains or losses: when a loss occurs, $\mathbb{E}[\hat{\ell}] < 0$; otherwise, $\mathbb{E}[\hat{\ell}] \geq 0$. The norm distance $\|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}$ measures the degree of deviation between the investor’s portfolio and the market portfolio, indicating the level of active management. In an efficient market, where information disseminates rapidly and prices adjust swiftly to all available information, investors tend to adopt passive investment strategies that track market indices; in this environment, active management struggles to generate consistent excess returns. Additionally, in low-volatility markets, investors are more inclined toward passive management ($\|\mathbf{y} - \mathbf{y}_{mkt}\|_{2} \to 0$), aiming for relatively stable returns. Conversely, in an inefficient market, information asymmetry and the market’s failure to fully reflect fundamentals make active management strategies more attractive, as investors can exploit these inefficiencies to achieve excess returns. In high-volatility markets, where uncertainty and price fluctuations are significant, investors also prefer active management ($\|\mathbf{y} - \mathbf{y}_{mkt}\|_{2} \gg 0$) to seize short-term investment opportunities.
Specifically, let the sample vector $\hat{\boldsymbol{\ell}}$ be:
$$\hat{\boldsymbol{\ell}} = \boldsymbol{\mu}^{\top}\big[\beta \cdot (\mathbf{y} - \mathbf{y}_{mkt}) \otimes \mathbf{1}_{S} + \mathbf{1}_{N} \otimes \boldsymbol{\varepsilon}\big],$$
$$\boldsymbol{\varepsilon} \overset{\mathrm{i.i.d.}}{\sim} F_{\varepsilon}:\quad \mathbb{E}[\boldsymbol{\varepsilon}] = \mathbf{0},\quad \mathrm{Cov}[\boldsymbol{\varepsilon}] = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2} \cdot I_{S}.$$
Here, “$\otimes$” represents the tensor (outer) product of vectors, and $\mathbf{1}_{N}$ is a vector of size $N$ with all elements equal to 1. The noise vector $\boldsymbol{\varepsilon}$ of dimension $S$ consists of independent and identically distributed (i.i.d.) elements with $\mathbb{E}[\boldsymbol{\varepsilon}] = \mathbf{0}$ and $\mathrm{Cov}[\boldsymbol{\varepsilon}] = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2} \cdot I_{S}$, where $I_{S}$ is the $S$-dimensional identity matrix. The larger the prior loss, the higher the overall expected value of the samples. The greater the deviation of the decision from the market, i.e., the larger $\|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}$, the higher the uncertainty regarding the loss reference, resulting in a larger overall variance. Define the sample set as:
$$\hat{L}(\mathbf{y}) = \Big\{\hat{\ell}_{s}:\ \hat{\ell}_{s} = \beta \cdot \boldsymbol{\mu}^{\top}(\mathbf{y} - \mathbf{y}_{mkt}) + \varepsilon_{s} \cdot \mathbf{1}_{N}^{\top}\boldsymbol{\mu},\ s \in [S]\Big\}.$$
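As an illustration of this sampling scheme, the following NumPy sketch draws reference-point samples whose mean follows the prior over- or under-performance and whose variance grows with the deviation from the market portfolio; the Gaussian noise and the direct addition of $\varepsilon$ (without the return-scaling factor in the vector form above) are simplifying assumptions.

```python
import numpy as np

def draw_reference_samples(mu, y, y_mkt, beta, gamma, S, seed=None):
    """Draw S loss-reference-point samples whose mean tracks prior over/under-performance
    and whose variance grows with the deviation from the market portfolio."""
    rng = np.random.default_rng(seed)
    mean = beta * mu @ (y - y_mkt)                 # E[l_hat] = beta * (mu'y - mu'y_mkt)
    var = gamma * np.sum((y - y_mkt) ** 2)         # Var[l_hat] = gamma * ||y - y_mkt||^2
    eps = rng.normal(0.0, np.sqrt(var), size=S)    # assumed Gaussian noise with E[eps] = 0
    eps -= eps.mean()                              # enforce sum_s eps_s = 0 as in the model
    return mean + eps

mu = np.array([0.08, 0.05, 0.03])
y = np.array([0.6, 0.2, 0.2]); y_mkt = np.array([0.4, 0.4, 0.2])
print(draw_reference_samples(mu, y, y_mkt, beta=0.5, gamma=0.1, S=5, seed=0))
```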
According to the sample estimate, the TSPO can be approximated as a single-stage optimization problem.
$$(\mathrm{TSPO}):\quad \max_{\mathbf{x}, \mathbf{y} \in X,\ \boldsymbol{\varepsilon}}\ \boldsymbol{\mu}^{\top}(\mathbf{y} + \mathbf{x}) - c\sum_{i \in [N]}|x_i - y_i| - \varphi \cdot \sum_{s \in [S]}\left[\beta \cdot \boldsymbol{\mu}^{\top}(\mathbf{y} - \mathbf{y}_{mkt}) + \varepsilon_{s}\cdot\mathbf{1}_{N}^{\top}\boldsymbol{\mu} - \boldsymbol{\mu}^{\top}\mathbf{x},\ 0\right]_{+}^{2},$$
$$\text{s.t.}\quad \frac{1}{S-1}\sum_{s \in [S]}\varepsilon_{s}^{2} = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2},$$
$$\sum_{i \in [N]} x_i = 1,\quad \sum_{i \in [N]} y_i = 1,\quad \sum_{s \in [S]}\varepsilon_{s} = 0,$$
$$x_i, y_i \in [0, 1],\ \forall i \in [N].$$
By introducing auxiliary variables $\zeta_{s}$, $s \in [S]$, the problem becomes equivalent to a second-order cone programming (SOCP) formulation:
$$(\mathrm{TSPO}):\quad \max_{\mathbf{x}, \mathbf{y} \in X,\ \boldsymbol{\varepsilon},\ \boldsymbol{\zeta}}\ \boldsymbol{\mu}^{\top}(\mathbf{y} + \mathbf{x}) - c\sum_{i \in [N]}|x_i - y_i| - \varphi \cdot \sum_{s \in [S]}\zeta_{s}^{2},$$
$$\text{s.t.}\quad \frac{1}{S-1}\sum_{s \in [S]}\varepsilon_{s}^{2} = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2},$$
$$\sum_{i \in [N]} x_i = 1,\quad \sum_{i \in [N]} y_i = 1,\quad \sum_{s \in [S]}\varepsilon_{s} = 0,$$
$$\zeta_{s} \geq \beta \cdot \boldsymbol{\mu}^{\top}(\mathbf{y} - \mathbf{y}_{mkt}) + \varepsilon_{s}\cdot\mathbf{1}_{N}^{\top}\boldsymbol{\mu} - \boldsymbol{\mu}^{\top}\mathbf{x},\quad \forall s \in [S],$$
$$x_i, y_i \in [0, 1],\ \forall i \in [N];\quad \zeta_{s} \geq 0,\ \forall s \in [S].$$
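For intuition, once the first-stage weights $\mathbf{y}$ and the reference-point samples $\hat{\ell}_s$ are fixed, the second-stage problem is convex in $\mathbf{x}$; a minimal CVXPY sketch with illustrative data (not the paper’s solver setup) is:

```python
import cvxpy as cp
import numpy as np

np.random.seed(0)
N, S = 5, 20
mu = np.random.uniform(0.02, 0.10, N)
y = np.full(N, 1.0 / N)                        # fixed first-stage weights
l_hat = 0.01 + 0.02 * np.random.randn(S)       # pre-drawn loss reference samples
c, phi = 0.001, 2.0

x = cp.Variable(N, nonneg=True)
zeta = cp.Variable(S, nonneg=True)             # auxiliary shortfall variables
objective = cp.Maximize(mu @ x - c * cp.sum(cp.abs(x - y)) - phi * cp.sum_squares(zeta))
constraints = [cp.sum(x) == 1, x <= 1,
               zeta >= l_hat - mu @ x]          # zeta_s >= l_hat_s - mu^T x
prob = cp.Problem(objective, constraints)
prob.solve()
print(np.round(x.value, 4), prob.value)
```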
The definitions of the notations are presented in Appendix A.

4. DR-TSPO Model

In fact, the reference distribution P ^ may not accurately reflect the real situation, especially in scenarios with sparse data or noise. This bias can lead to suboptimal decisions in practical applications, increasing potential risks. To address the uncertainty in real-world distributions, two-stage distributionally robust optimization (DRO) adopts a more conservative approach. By constructing a distributional ambiguity set that encompasses all possible true distributions, the DRO optimization scheme targets the worst-case distribution for optimization. This “worst-case” approach effectively mitigates the impact of the reference distribution deviating from the true distribution, providing more robust decisions. Additionally, in high-uncertainty situations, it better balances risk and return, offering more reliable support for real-world decision making.
To account for the uncertainty of the loss reference point $\ell$, we introduce an ambiguity set based on the Wasserstein distance to characterize the distribution of the loss reference point. Specifically, the Wasserstein distance is used to measure the gap between the true distribution and the reference distribution. The uncertainty set $\mathcal{B}(\hat{P}(\mathbf{y}))$ for the true distribution is defined as:
$$\mathcal{B}\big(\hat{P}(\mathbf{y})\big) = \Big\{P \in \mathcal{P}_{0}(L)\ \big|\ \ell \sim P,\ \hat{\ell} \sim \hat{P}(\mathbf{y}),\ d_{W}\big(P, \hat{P}(\mathbf{y})\big) \leq \epsilon\Big\},\quad \epsilon \in \mathbb{R}_{+},$$
where $\mathcal{P}_{0}(L)$ represents the collection of all Borel probability distributions on $L \subseteq \mathbb{R}^{P}$, with $L$ a prescribed conic-representable set. $\hat{P}(\mathbf{y})$ is the reference distribution of the loss reference point determined by the prior decision $\mathbf{y}$. Although this reference distribution is discrete, the ambiguity set can encompass both discrete and continuous distributions. $d_{W}(\cdot,\cdot)$ is the Wasserstein distance between two distributions, which measures the minimum cost required to transform one distribution into another, typically understood as the “transportation cost” in geographical space. The Type-2 Wasserstein distance, through its quadratic penalty mechanism, enables more refined control over higher-order distributional characteristics. This proves particularly advantageous when balancing mean–variance trade-offs, handling extreme events, or addressing high-dimensional correlations. However, its computational cost may be significantly higher, necessitating careful selection of the order based on specific problem requirements.
To construct a concrete optimization model and link the ambiguity set to the prior decision $\mathbf{y}$, we use the Wasserstein-based ambiguity set to capture the distributional uncertainty of $\ell$. The size of this ambiguity set is adjusted by the parameter $\epsilon$, which represents the radius of the ambiguity ball and controls the degree of uncertainty. The Wasserstein distance is defined as:
$$d_{W}\big(P, \hat{P}(\mathbf{y})\big) = \inf_{\Pi \in \Gamma(P, \hat{P}(\mathbf{y}))} \int_{L \times L} \|\ell - \hat{\ell}\|_{2}^{2}\ \mathrm{d}\Pi(\ell, \hat{\ell}),$$
where $\Gamma(P, \hat{P}(\mathbf{y}))$ denotes the set of all possible joint distributions with marginals $P$ and $\hat{P}(\mathbf{y})$, and $\ell$ and $\hat{\ell}$ are samples drawn from these distributions. The ambiguity set requires that the Wasserstein distance between any admissible distribution $P$ and the reference distribution $\hat{P}(\mathbf{y})$ does not exceed $\epsilon$, thereby introducing uncertainty management.
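In the one-dimensional, equally weighted empirical case relevant here, the optimal coupling in this definition simply matches sorted samples, so the transport cost can be computed directly; a small sketch (an illustration of the metric, not the paper’s implementation):

```python
import numpy as np

def w2_squared_empirical(a, b):
    """Squared type-2 Wasserstein distance between two equal-size, equally weighted
    1-D empirical distributions: the optimal coupling matches sorted samples."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    assert a.shape == b.shape, "sketch assumes equally many, equally weighted samples"
    return np.mean((a - b) ** 2)            # take sqrt for the conventional W2 distance

ref = np.array([-0.02, 0.00, 0.01, 0.03])   # reference samples l_hat_s
alt = np.array([-0.05, -0.01, 0.02, 0.04])  # samples from a candidate distribution P
print(w2_squared_empirical(ref, alt))
```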
When considering transaction costs, the investor’s objective is to minimize both the loss and the necessary costs. Specifically, the investor needs to consider not only the expected return $\boldsymbol{\mu}^{\top}\mathbf{x}$, but also loss aversion, ambiguity losses, and upper bounds on transaction costs. In this case, the investor’s objective function can be expressed as:
$$\max_{\mathbf{y} \in X,\ \hat{\boldsymbol{\ell}} \in \hat{L}(\mathbf{y}),\ \boldsymbol{\varepsilon} \in \mathbb{R}^{S}}\ \boldsymbol{\mu}^{\top}\mathbf{y} - \min_{\mathbf{x} \in X}\left\{ c\sum_{i \in [N]}|x_i - y_i| - \boldsymbol{\mu}^{\top}\mathbf{x} + \sup_{\ell \sim P,\ P \in \mathcal{B}(\hat{P}(\mathbf{y}))} \varphi \cdot \mathbb{E}_{P}\left[\ell - \boldsymbol{\mu}^{\top}\mathbf{x},\ 0\right]_{+}^{2} \right\}.$$
This objective function combines the prior decision, loss aversion preferences, and market transaction costs, aiming for effective asset allocation by maximizing the net returns of the two-stage portfolio under the worst-case scenario. Considering the two-stage distributionally robust portfolio optimization problem (DR-TSPO) under a quadratic loss utility function, the optimization problem can be expressed as:
$$(\mathrm{DR\text{-}TSPO}):\quad \max_{\mathbf{y} \in X,\ \hat{\boldsymbol{\ell}} \in \hat{L}(\mathbf{y}),\ \boldsymbol{\varepsilon} \in \mathbb{R}^{S}}\ \boldsymbol{\mu}^{\top}\mathbf{y} - \min_{\mathbf{x} \in X}\left\{ c\sum_{i \in [N]}|x_i - y_i| - \boldsymbol{\mu}^{\top}\mathbf{x} + \sup_{\ell \sim P,\ P \in \mathcal{B}(\hat{P}(\mathbf{y}))} \varphi \cdot \mathbb{E}_{P}\left[\ell - \boldsymbol{\mu}^{\top}\mathbf{x},\ 0\right]_{+}^{2} \right\},$$
$$\text{s.t.}\quad \hat{\boldsymbol{\ell}} = \boldsymbol{\mu}^{\top}\big[\beta \cdot (\mathbf{y} - \mathbf{y}_{mkt}) \otimes \mathbf{1}_{S} + \mathbf{1}_{N} \otimes \boldsymbol{\varepsilon}\big],$$
$$\frac{1}{S-1}\sum_{s \in [S]}\varepsilon_{s}^{2} = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2},$$
$$\sum_{i \in [N]} x_i = 1,\quad \sum_{i \in [N]} y_i = 1,\quad \sum_{s \in [S]}\varepsilon_{s} = 0,$$
$$x_i, y_i \in [0, 1],\ \forall i \in [N].$$
Here, c is the cost coefficient for buying and selling stocks. The utility model objective in Equation (25) considers the possible true distribution of the loss reference point based on the prior decision and aims to maximize the net returns of the two-stage portfolio in the worst-case scenario. Unlike two-stage robust optimization, the distributionally robust optimization framework effectively avoids “over-conservatism,” achieving a more balanced result in practical applications. It is particularly suitable for situations where uncertainty is high or difficult to quantify directly. However, DR-TSPO has a higher computational complexity because it requires handling probability distributions, optimizing distributional uncertainty, and possibly estimating distribution parameters. These challenges necessitate the design of appropriate algorithms to address the distributional uncertainties.
According to hierarchical optimization and dual theory, we reformulate the DR-TSPO into a more manageable deterministic two-stage second-order cone programming.
Theorem 1. 
The DR-TSPO is equivalent to solving a deterministic two-stage nonlinear constrained optimization problem.
$$\max_{\mathbf{x}, \mathbf{y}, \boldsymbol{\omega}, \nu, \theta, \boldsymbol{\varepsilon}}\ \boldsymbol{\mu}^{\top}\mathbf{y} - \theta,$$
$$\text{s.t.}\quad \frac{1}{S}\sum_{s \in [S]}\omega_{s} + \nu\epsilon - \boldsymbol{\mu}^{\top}\mathbf{x} + c\sum_{i \in [N]}|x_i - y_i| \leq \theta,$$
$$\left(\frac{1}{\varphi} - \frac{1}{\nu}\right)\cdot \omega_{s} \geq \left(\beta \cdot \boldsymbol{\mu}^{\top}(\mathbf{y} - \mathbf{y}_{mkt}) + \varepsilon_{s}\cdot\mathbf{1}_{N}^{\top}\boldsymbol{\mu} - \boldsymbol{\mu}^{\top}\mathbf{x}\right)^{2},\quad \forall s \in [S],$$
$$\frac{1}{S-1}\sum_{s \in [S]}\varepsilon_{s}^{2} = \gamma \cdot \|\mathbf{y} - \mathbf{y}_{mkt}\|_{2}^{2},$$
$$\sum_{i \in [N]} x_i = 1,\quad \sum_{i \in [N]} y_i = 1,\quad \sum_{s \in [S]}\varepsilon_{s} = 0,$$
$$x_i, y_i \geq 0,\ \forall i \in [N];\quad \omega_{s} \geq 0,\ \forall s \in [S];\quad 0 \leq \varphi \leq \nu.$$
The proof is shown in Appendix B.
The DR-TSPO optimization problem is a multi-variable, multi-constraint non-convex optimization problem. The non-convexity primarily arises from the quadratic nonlinear term in constraint (32) and the complex coupling of variables (such as the nonlinear dependence on $1/\nu$). Additionally, the variance equality constraint (33) for the scenario variable $\boldsymbol{\varepsilon}$ and the absolute value term $c\sum_{i \in [N]}|x_i - y_i|$ further complicate the solution process. In practical applications, due to the large number of stocks $N$ and scenarios $S$, the problem’s scale grows significantly, resulting in high computational costs. Therefore, it is necessary to design appropriate algorithms that reduce computational complexity by relaxing constraints, decomposing the problem, or introducing penalty terms, while ensuring feasibility and obtaining high-quality approximate solutions quickly.

5. Deep Learning-Based Constraint Correction Algorithm

To reduce the resource usage and time cost associated with solving non-convex constraints in large-scale optimization problems, we design a deep learning-based constraint correction algorithm (DL-CCA) for non-convex constrained optimization. Beyond the traditional optimization literature, substantial research efforts in deep learning have focused on developing approximations or acceleration techniques for optimization models. As evidenced by comprehensive reviews in fields like combinatorial optimization [32] and optimal power flow [33], current machine learning applications for optimization acceleration primarily follow two distinct methodologies.
One methodology, conceptually similar to surrogate modeling techniques [34], trains machine learning models to directly predict complete solutions from optimization inputs. Nevertheless, these methods frequently encounter challenges in generating solutions that simultaneously satisfy feasibility and near-optimality conditions. Alternatively, a second methodology integrates machine learning within optimization frameworks, either in conjunction with or embedded in the solution process. Examples include learning effective warm-start initializations [35,36] or employing predictive models to identify active constraints, thereby enabling constraint reduction strategies [37].
In this work, we consider solving a series of optimization problems whose objective or constraints differ across instances. Formally, let $\mathbf{z} \in \mathbb{R}^{m}$ represent the solution to the corresponding optimization problem. For any given parameters, our goal is to find the optimal solution $\mathbf{z}^{\star}$ of
$$\min_{\mathbf{z} \in \mathbb{R}^{m}} f(\mathbf{z}),\quad \text{s.t.}\quad g(\mathbf{z}) \leq 0,\quad h(\mathbf{z}) = 0.$$
Here, $f$, $g$, and $h$ may be nonlinear and non-convex. We consider using deep learning methods to solve this task—specifically, training a neural network $N_{\vartheta}$, parameterized by $\vartheta$, to adjust a multi-dimensional random solution $\hat{\mathbf{z}}$ into an approximate optimal solution that satisfies the constraints $g(\mathbf{z})$ and $h(\mathbf{z})$. This approach allows difficult non-convex constraints to be integrated into the neural network training process. The method enables training directly from the problem specification (rather than from a supervised dataset). Additionally, we incorporate equality constraint adjustment layers at both the input and output of the neural network model to adjust partial solutions so that the equality constraints are satisfied. The algorithm learns to minimize a composite loss that includes the objective and two “soft loss” terms, which penalize violations of the inequality and equality constraints ($\lambda_{g}, \lambda_{h} \geq 0$):
$$\mathcal{L}_{\mathrm{soft}}\big(N_{\vartheta}(\mathbf{z})\big) = f\big(N_{\vartheta}(\mathbf{z})\big) + \lambda_{g}\cdot\big\|\mathrm{ReLU}\big(g(N_{\vartheta}(\mathbf{z}))\big)\big\|_{2}^{2} + \lambda_{h}\cdot\big\|h\big(N_{\vartheta}(\mathbf{z})\big)\big\|_{2}^{2}.$$
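A minimal PyTorch sketch of this composite soft loss, with placeholder functions f, g, and h standing in for the DR-TSPO objective and constraints:

```python
import torch

def soft_loss(z, f, g, h, lam_g=10.0, lam_h=10.0):
    """L_soft = f(z) + lam_g * ||ReLU(g(z))||^2 + lam_h * ||h(z)||^2.
    f: objective to minimize; g: inequality constraints g(z) <= 0; h: equalities h(z) = 0."""
    ineq_viol = torch.relu(g(z))                 # only positive parts violate g(z) <= 0
    return f(z) + lam_g * ineq_viol.pow(2).sum() + lam_h * h(z).pow(2).sum()

# toy placeholders: minimize ||z||^2 subject to z >= 0 elementwise and sum(z) = 1
f = lambda z: z.pow(2).sum()
g = lambda z: -z                                 # -z <= 0  <=>  z >= 0
h = lambda z: z.sum() - 1.0
z = torch.randn(4, requires_grad=True)
print(soft_loss(z, f, g, h))
```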
First, the training set is constructed by generating random solution samples $\hat{\mathbf{z}}$ that satisfy explicit equality constraint conditions. Specifically, we design an EqCompletion_Layer to handle these constraints, which normalizes the portfolio weights $\mathbf{x}, \mathbf{y}$ and adjusts the mean of the loss samples $\boldsymbol{\varepsilon}$ so that the specific equality constraint in Equation (33) is satisfied. At the same time, this layer optimizes the relationship between the constraints through adjustment factors, ensuring feasibility during the optimization process. During the solving process, our model needs to satisfy both inequality and equality constraints. To this end, the penalty term in the loss function includes constraint (31) (which relates to rebalancing costs, the second-stage portfolio returns, and other variables), constraint (32) (used to adjust the differences between portfolio loss reference point samples), and a series of non-negativity constraints (35). These constraints are explicitly incorporated into the loss function penalty term using the L2 norm, which adjusts the variables so that each solution is as close as possible to the feasible region.
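A sketch of what such an equality-completion step could look like in PyTorch: weights are pushed onto the simplex and the noise samples are re-centered and re-scaled so that the zero-mean and variance equalities hold by construction; the specific projections used here are assumptions for illustration.

```python
import torch

def eq_completion(x_raw, y_raw, eps_raw, y_mkt, gamma):
    """Map raw network outputs to a point satisfying the equality constraints:
    sum(x)=1, sum(y)=1, sum(eps)=0, and (1/(S-1)) * sum(eps^2) = gamma * ||y - y_mkt||^2."""
    x = torch.softmax(x_raw, dim=-1)                               # weights sum to 1, in (0, 1)
    y = torch.softmax(y_raw, dim=-1)
    eps = eps_raw - eps_raw.mean(dim=-1, keepdim=True)             # zero-mean noise samples
    S = eps.shape[-1]
    target_var = gamma * (y - y_mkt).pow(2).sum(dim=-1, keepdim=True)
    cur_var = eps.pow(2).sum(dim=-1, keepdim=True) / (S - 1)
    eps = eps * torch.sqrt(target_var / (cur_var + 1e-12))         # rescale to the target variance
    return x, y, eps
```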
We use a neural network model with fully connected structure, where the input layer size matches the dimensionality of the training data. The hidden layers employ ReLU activation functions, and the final output corresponds to the decision variables of the optimization problem. The optimization process utilizes the Adam optimizer in combination with an exponentially decaying learning rate scheduler to gradually reduce the learning rate, improving convergence and stability. The loss function includes penalty terms for multiple constraints, with the weight of each term adjusted using penalty factors ( λ g , λ h 0 ) to ensure that the model can effectively balance the objective function and the constraints during training. Figure 2 illustrates the DL-CCA framework, and Algorithm 1 provides the corresponding pseudocode.
Algorithm 1: Deep learning-based constraint correction algorithm (DL-CCA)
1: Assume: an equality completion procedure $\psi(\cdot): \mathcal{Z} \to \tilde{\mathcal{Z}}$ that resolves the equality constraints, where $\mathcal{Z}, \tilde{\mathcal{Z}} \subseteq \mathbb{R}^{m}$
2: Initialize a random sample solution: $\hat{\mathbf{z}} = [\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\boldsymbol{\varepsilon}}, \hat{\boldsymbol{\omega}}, \hat{\nu}, \hat{\theta}]$; $\hat{\mathbf{x}}, \hat{\mathbf{y}} \in \mathbb{R}^{N}$; $\hat{\boldsymbol{\varepsilon}}, \hat{\boldsymbol{\omega}} \in \mathbb{R}^{S}$; $\hat{\nu}, \hat{\theta} \in \mathbb{R}$
3: Input: training set of solutions $\tilde{\mathbf{z}} = [\tilde{\mathbf{x}}, \tilde{\mathbf{y}}, \tilde{\boldsymbol{\varepsilon}}, \hat{\boldsymbol{\omega}}, \hat{\nu}, \hat{\theta}] = \psi(\hat{\mathbf{z}})$, learning rate (LR) $\iota = \iota_{0}$
4: Initialize neural network $N_{\vartheta}: \mathbb{R}^{d \times m} \to \mathbb{R}^{m}$
5: for epoch = 1 to epochs do
6:     Compute the output of the neural network layer $N_{\vartheta}(\tilde{\mathbf{z}})$
7:     Sample averaging layer: $\mathrm{Mean\_Layer}: \mathbb{R}^{d \times m} \to \mathbb{R}^{m}$, $\bar{\mathbf{z}} = \mathrm{Mean}(N_{\vartheta}(\tilde{\mathbf{z}}))$
8:     Equality constraint correction: $\mathrm{EqC\_Layer}: N_{\vartheta}(\mathbf{z}) = \psi(\bar{\mathbf{z}})$
9:     Compute the constraint-regularized loss:
        $\mathcal{L}_{\mathrm{soft}}(N_{\vartheta}(\mathbf{z})) = f(N_{\vartheta}(\mathbf{z})) + \lambda_{g}\cdot\|\mathrm{ReLU}(g(N_{\vartheta}(\mathbf{z})))\|_{2}^{2} + \lambda_{h}\cdot\|h(N_{\vartheta}(\mathbf{z}))\|_{2}^{2}$
10:    Update $\vartheta$ using $\nabla_{\vartheta}\mathcal{L}_{\mathrm{soft}}(N_{\vartheta}(\mathbf{z}))$
11:    if epoch % 100 == 0 then
12:        Update LR: $\iota \leftarrow 0.9\,\iota$
13:    end if
14: end for
15: Decode the optimal solution $\mathbf{z}^{\star} = N_{\vartheta}(\tilde{\mathbf{z}})$
16: return $\mathbf{x}^{\star}, \mathbf{y}^{\star}, \boldsymbol{\varepsilon}^{\star}, \boldsymbol{\omega}^{\star}, \nu^{\star}, \theta^{\star}$
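Putting the pieces together, a condensed PyTorch training loop in the spirit of Algorithm 1 (fully connected network, Adam, learning rate decayed by 0.9 every 100 epochs); the layer sizes, problem dimensions, and the placeholder loss are illustrative assumptions, with the equality-completion and soft-loss steps indicated in comments.

```python
import torch
import torch.nn as nn

m = 2 * 50 + 2 * 20 + 2                  # solution dim for N=50 assets, S=20 samples (example)
net = nn.Sequential(nn.Linear(m, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, m))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.9)  # LR *= 0.9 every 100 epochs

z_tilde = torch.randn(512, m)            # training set of (equality-completed) candidate solutions

for epoch in range(2000):
    out = net(z_tilde)
    z_bar = out.mean(dim=0)              # sample-averaging layer: one pooled candidate solution
    # equality-constraint correction and the soft loss would be applied here, e.g.
    # x, y, eps = eq_completion(...); loss = soft_loss(...)
    loss = z_bar.pow(2).sum()            # placeholder loss standing in for L_soft
    opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```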

6. Algorithm Experiments

6.1. Analysis of Optimal Network Parameters

We experimentally investigated the impact of three key parameters—neural network depth (num_Layer), hidden layer dimension (hidden_Size), and learning rate (learn_Rate)—on the average loss value (Avg.loss_Value) and average solution time (Avg.times) of the DL-CCA algorithm. A systematic analysis of the experimental results was conducted using heatmaps and three-way analysis of variance (ANOVA), as illustrated in Figure 3. The experiments addressed the DR-TSPO problem with a decision variable dimension of 200, and repeated trials were performed over the parameter grid num_Layer ∈ {4, 5, 6}, hidden_Size ∈ {64, 128, 256, 512}, learn_Rate ∈ {0.0005, 0.001, 0.005} to obtain average values.
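As a sketch of how such a grid experiment and the three-way ANOVA can be organized (using pandas and statsmodels on records produced by a stand-in runner, not the paper’s raw data):

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

def run_dlcca(num_layer, hidden_size, learn_rate):
    """Stand-in for one DL-CCA training run; returns (avg loss value, solution time)."""
    return rng.uniform(0.002, 0.01), rng.uniform(50, 500)

records = []
for num_layer, hidden_size, learn_rate in itertools.product(
        [4, 5, 6], [64, 128, 256, 512], [0.0005, 0.001, 0.005]):
    for rep in range(3):                         # repeated trials per parameter combination
        loss_value, sol_time = run_dlcca(num_layer, hidden_size, learn_rate)
        records.append(dict(num_Layer=num_layer, hidden_Size=hidden_size,
                            learn_Rate=learn_rate, loss=loss_value, time=sol_time))
df = pd.DataFrame(records)

# three-way ANOVA with all interaction terms on the average loss value
model = ols("loss ~ C(num_Layer) * C(hidden_Size) * C(learn_Rate)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```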

6.2. Results of ANOVA

Through heatmap analysis, this study observed significant variations in model performance under different parameter combinations. In terms of solution accuracy, network complexity—particularly the hidden layer dimension (hidden_Size) and network depth (num_Layer)—emerged as critical factors. The experimental results demonstrated that when the hidden layer dimension was set to 256, which is close to the problem’s solution dimension, the model exhibited superior solution accuracy, as evidenced by a significant reduction in the average loss value across multiple trials. However, it is noteworthy that increasing network complexity, especially the hidden layer dimension, significantly prolonged the model’s solution time, potentially leading to higher computational costs when tackling large-scale optimization problems. Additionally, the learning rate setting played a crucial role in both the search accuracy for optimal solutions and computational efficiency. Experimental data indicated that a learning rate of approximately 0.001 yielded optimal performance.
Furthermore, through a 3-way ANOVA, we have statistically elucidated the significance of the individual parameters and their interaction effects on model performance. Initially, from the perspective of target loss as presented in Table 1, the learning rate (learn_Rate) exerts the most significant influence on the loss (F = 9.0533, p < 0.001), underscoring its pivotal role in determining model performance. The number of layers (num_Layer) also significantly affects the loss (F = 3.2942, p = 0.040), albeit with a relatively smaller effect size, suggesting that increasing the number of layers may optimize the loss to some extent, but the improvement is limited. In contrast, the size of the hidden layer (hidden_Size) does not significantly impact the loss (F = 0.0966, p = 0.962), indicating a weaker direct influence on model performance. Additionally, the interaction among the three factors is significant (p = 0.038), revealing that the combination of these parameters may exert complex nonlinear effects on the loss.
From the perspective of computational time, network complexity is the primary factor influencing the speed of solution. As shown in Table 2, the size of the hidden layer (hidden_Size) has the most significant impact on the solution time (F = 46815.854, p < 0.001), with an extremely high effect size, indicating that an increase in hidden layer size significantly escalates computational complexity. Furthermore, the number of hidden layers (num_Layer) also significantly affects the solution time (F = 822.7933, p < 0.001), although its effect size is slightly lower than that of the hidden layer size, suggesting that increasing the number of layers also adds to the computational burden, albeit to a lesser extent. Although the learning rate (learn_Rate) significantly influences the solution time as well (F = 55.8716, p < 0.001), its effect size is relatively low, indicating a limited impact on computational efficiency, with its setting primarily aimed at ensuring solution accuracy. A larger learning rate accelerates gradient descent but compromises solution precision, necessitating a judicious setting of the learning rate. Interaction analysis further reveals significant interactions between the number of layers and hidden layer size (p < 0.001), as well as between hidden layer size and learning rate (p < 0.001), with these parameter combinations further affecting solution time. Additionally, the three-way interaction among these factors also reaches a significant level (p = 0.029), further corroborating the intricate relationships among the parameters.
In summary, the learning rate is a pivotal parameter that ensures the accuracy of the solution, while the size and number of hidden layers significantly influence the computation time. The impact of the three key parameters (num_Layer, hidden_Size, and learn_Rate) on the algorithm’s application is not independent; their interactions are also crucial, particularly in terms of computation time, where the combined effects of multiple factors can substantially increase computational complexity. Therefore, in practical model optimization, it is essential to consider both the individual effects of each parameter and their interactions to achieve a balance between model performance and computational efficiency.

6.3. Comparison of Algorithms for Solving Large-Scale DR-TSPO

To validate the efficiency of the DL-CCA algorithm, which is based on deep neural networks, in solving large-scale complex constrained non-convex problems, we designed a series of comparative experiments. These included the Trust-Constr algorithm from the Scipy library, the heuristic Hippopotamus Optimization (HO) algorithm, and ablation studies on the neural network architectures embedded within the DL-CCA algorithm (fully connected neural networks, LSTM, and CNN). The experimental results demonstrate that the DL-CCA algorithm exhibits significant advantages in terms of solution accuracy, computation time, and constraint violation, with its performance benefits being particularly pronounced in high-dimensional problems.
Trust-Constr is a modern variant based on the trust-region method, which integrates interior-point and trust-region methods to construct an efficient algorithm for solving optimization problems with nonlinear constraints. It is capable of handling general nonlinear constraints while ensuring the feasibility of constraints at each iteration and stabilizing the optimization process through dynamic adjustment of the trust-region size [38]. The Hippopotamus Optimization Algorithm (HO) is a novel metaheuristic algorithm (intelligent optimization algorithm) inspired by the inherent behaviors of hippopotamuses. Research indicates that the HO algorithm outperforms the SSA algorithm on most functions [39]. According to real return data from S&P 500 constituent stocks, we evaluated the effectiveness of various algorithms in solving the non-convex optimization problem DR-TSPO. Specifically, as the scale of the problem increases (portfolio expansion), we assessed the solving efficiency of the DL-CCA algorithm compared to different algorithms and network architectures.
  • Speed: The time or number of iterations required for an algorithm to find the optimal solution when solving optimization problems with a large number of variables and constraints. Solution speed is influenced by several factors, including problem size, algorithm complexity, problem structure (such as sparsity or nonlinearity), and available hardware and computational resources.
  • Feasibility: Feasibility refers to whether the obtained solution satisfies all the given constraints. For constrained optimization problems, feasibility can be measured by how well the constraints are satisfied, particularly with respect to equality and inequality constraints. The average constraint violation (see the sketch after this list) is defined as:
    $$\mathrm{Constr\_vio} = \frac{\sum_{i=1}^{n\_\mathrm{Constr}} \max\{\mathrm{Bias\_Constr}(i),\ 0\}}{n\_\mathrm{Constr}},$$
    where $n\_\mathrm{Constr}$ is the number of constraints and $\mathrm{Bias\_Constr}(i)$ is the violation of the $i$-th constraint.
  • Optimality: This refers to whether the algorithm can converge within a finite number of iterations. The iteration limit is set to 2000, and the convergence condition is defined by a gradient tolerance of $\mathrm{Grad.tol} \leq 10^{-6}$.
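A small sketch of the average-constraint-violation metric from the Feasibility item above; treating equality residuals by absolute value is an assumption about how the biases are collected:

```python
import numpy as np

def avg_constraint_violation(ineq_residuals, eq_residuals):
    """Average violation over all constraints: inequality residuals g(z) count only
    when positive; equality residuals h(z) count by absolute value."""
    bias = np.concatenate([np.maximum(np.asarray(ineq_residuals, float), 0.0),
                           np.abs(np.asarray(eq_residuals, float))])
    return bias.sum() / bias.size

print(avg_constraint_violation(ineq_residuals=[-0.1, 0.02, 0.0], eq_residuals=[0.001, -0.003]))
```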

6.4. Result of Algorithm Comparison

Table 3 presents the results of solving the DR-TSPO problem with variable dimensions Dim_Var = 50, 100, …, 500. We compared the average violation of equality/inequality constraints between the DL-CCA and Trust-Constr algorithms under the task of seeking the optimal objective value Optimal_Obj. Additionally, as the problem scale increased, we compared the total runtime required by both algorithms on test instances, assuming full parallelization.
First, in terms of solution accuracy (Optimal_Obj (Min.)), the DL-CCA algorithm achieved significantly lower optimal objective values across all dimensions compared to the HO and Trust-Constr algorithms. For instance, when Dim_Var = 50, the optimal objective value for DL-CCA was 0.0024, while those for Trust-Constr and HO were 0.0047 and 0.0720, respectively. When Dim_Var = 500, the optimal objective value for DL-CCA was 0.0030, compared to 0.0037 and 0.0134 for Trust-Constr and HO, respectively. This indicates that the DL-CCA algorithm is more effective in approximating the global optimal solution, particularly in high-dimensional problems, where its precision advantage becomes more pronounced.
In terms of computation time (Sol_time (Sec.)), the DL-CCA algorithm also outperformed the comparative algorithms. Specifically, when solving high-dimensional, large-scale optimization problems (e.g., Dim_Var ≥ 300), the experimental results demonstrate that DL-CCA achieves faster solution times. Even in low-dimensional problems, DL-CCA’s computation time was comparable to that of the HO algorithm (12.25 s versus 11.22 s for HO at Dim_Var = 50), while DL-CCA’s solution accuracy was substantially higher. Furthermore, as the problem dimension increased, the computation time for exact algorithms rose sharply, whereas DL-CCA’s computation time grew more gradually, indicating its superior computational efficiency in high-dimensional problems.
Although the Trust-Constr algorithm exhibits slightly better performance in terms of inequality constraint violations, its equality constraint violations are comparable to those of DL-CCA, while the HO algorithm shows significantly higher equality constraint violations than DL-CCA. This indicates that the DL-CCA algorithm is better at satisfying constraint conditions during the optimization process, thereby ensuring solution feasibility. In the ablation experiments of the DL-CCA algorithm, we observed that the DL-CCA algorithm employing a fully connected neural network (FCNN) outperformed those using LSTM and CNN in both solution accuracy and computation time. Specifically, the average optimal objective value for DL-CCA-FCNN was 0.0029, compared to 0.0405 and 0.0279 for DL-CCA-LSTM and DL-CCA-CNN, respectively. Additionally, the average computation time for DL-CCA-FCNN was 76.78 s, significantly lower than the 224.38 s for DL-CCA-LSTM and 413.26 s for DL-CCA-CNN.
Figure 4 illustrates the trends in computation time and optimal values as the problem scale increases for various algorithms. The DL-CCA algorithm demonstrates significant advantages over the exact solver Trust-Constr and the heuristic algorithm HO in terms of solution accuracy and computation time when addressing large-scale complex constrained non-convex problems. Furthermore, the DL-CCA algorithm utilizing FCNN outperforms those employing LSTM and CNN, further validating the effectiveness of FCNN in handling high-dimensional nonlinear optimization problems. These experimental results robustly demonstrate the efficiency and robustness of the DL-CCA algorithm in practical applications.

7. DR-TSPO vs. TSPO Empirical Validation

To validate the advantages of the DR-TSPO model based on loss aversion, we design a comparative experiment to benchmark it against the traditional two-stage robust optimization model. The core of the experiment is to examine the model’s performance under different market distributions when the loss reference point is treated as a random variable. The experiment will optimize a set of real market data using both models and assess their robustness and loss aversion capabilities in practical decision making. By comparing the portfolio’s average drawdown, return, volatility, and risk-adjusted performance metrics, we aim to demonstrate the adaptability and advantages of the loss aversion-based DR-TSPO model in markets with unknown loss reference distributions.

7.1. Experimental Data and Evaluation Metrics

Between 2019 and 2020, global financial markets experienced significant volatility and high uncertainty. The market trend in 2019 was relatively stable, while the outbreak of the COVID-19 pandemic in 2020 triggered a global financial crisis, leading to sharp market fluctuations, a significant drop in stock prices, and a further intensification of the economic recession. Against this backdrop, the experiment can more effectively test the risk transmission of portfolios in both normal market conditions (2019) and extreme market scenarios (2020), thereby assessing their robustness and resilience during financial crises, as shown in Figure 5. To achieve this, we selected constituent stocks from eight global market indices with different distribution characteristics, including the SSE50, Hang Seng Index (HSI), FTSE Index, French CAC40 Index (FCHI), German DAX Index (GDAXI), Russian RTS Index (RTS), Nikkei 225 Index (N225), and Nasdaq 100 Index (NDX). These indices represent the economic conditions and market volatility of different regions globally, providing high distribution diversity and representativeness. Specific information and distribution characteristics are presented in Table 4.
The experiment uses data from 1 January 2019 to 31 December 2019 as the in-sample data for constructing the first-stage portfolio; the second stage is based on 2020 data to assess the portfolio’s out-of-sample performance. The evaluation focuses on two main aspects. (1) First, the portfolio’s ability to avoid losses during the financial crisis (the global pandemic and economic recession in 2020) is assessed. Losses are characterized by comparing the average drawdown and the portfolio rebalancing magnitude in both stages. If the out-of-sample average drawdown decreases as a result, the model is considered to have better robustness and to effectively reduce the average loss in the second stage. (2) Second, traditional metrics are used to evaluate the portfolio’s return and volatility; a computation sketch for these metrics follows the list below.
  • Annualized Return: Mean annualized portfolio return.
  • Standard deviation: Standard deviation of annualized portfolio return.
  • Maximum.Drawdown: Maximum portfolio drawdown.
  • Sharpe.Ratio: the excess return per unit of total risk taken, $\mathrm{Sharpe} = \frac{E(R_p) - R_f}{\sigma_p}$.
  • Beta: the relationship between portfolio return volatility and market return volatility, a measure of systematic risk, $\beta = \frac{\mathrm{Cov}(R_p, R_m)}{\sigma_m^2} = \frac{\rho_{pm}\,\sigma_p}{\sigma_m}$.
  • Sortino Ratio: the risk-adjusted return of a portfolio after accounting for downside volatility, $\mathrm{Sortino} = \frac{R_p - R_f}{\sqrt{\frac{1}{T}\sum_{i=1}^{T}\min(0,\ R_i - R_f)^2}}$.
  • Autocorrelation: a statistic measuring the correlation between a time series and itself at different lags, $\rho_k = \frac{\sum_{t=k+1}^{n}(R_{p,t} - \bar{R}_p)(R_{p,t-k} - \bar{R}_p)}{\sum_{t=1}^{n}(R_{p,t} - \bar{R}_p)^2}$.
  • Rolling Returns: An indicator that calculates the return of a portfolio or asset over a moving window.
  • Mean Wealth: Average wealth in the second stage.
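A compact sketch computing several of the listed metrics from daily return series; the 252-day annualization, the zero risk-free rate, and the simulated returns are assumptions for illustration:

```python
import numpy as np

def performance_metrics(rp, rm, rf=0.0, periods=252):
    """rp, rm: daily portfolio and market returns (1-D arrays)."""
    ann_ret = rp.mean() * periods
    ann_vol = rp.std(ddof=1) * np.sqrt(periods)
    wealth = np.cumprod(1 + rp)
    max_dd = np.max(1 - wealth / np.maximum.accumulate(wealth))      # maximum drawdown
    sharpe = (ann_ret - rf) / ann_vol
    downside = np.sqrt(np.mean(np.minimum(0.0, rp - rf / periods) ** 2) * periods)
    sortino = (ann_ret - rf) / downside
    beta = np.cov(rp, rm, ddof=1)[0, 1] / np.var(rm, ddof=1)
    rc = rp - rp.mean()
    autocorr1 = np.sum(rc[1:] * rc[:-1]) / np.sum(rc ** 2)           # lag-1 autocorrelation
    return dict(AnnRet=ann_ret, AnnVol=ann_vol, MaxDD=max_dd, Sharpe=sharpe,
                Sortino=sortino, Beta=beta, AC1=autocorr1, MeanWealth=wealth.mean())

rng = np.random.default_rng(0)
rm = rng.normal(0.0004, 0.01, 252)
rp = 0.8 * rm + rng.normal(0.0002, 0.005, 252)
print(performance_metrics(rp, rm))
```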

7.2. Results and Discussions

First, the experiment compares the mean drawdown and the two-stage rebalancing magnitude of the portfolio on out-of-sample data. The results in Figure 6 show significant regional differences under different market conditions (i.e., different distribution scenarios), especially between the European markets represented by FTSE/FCHI/GDAXI and the U.S. and Japanese indices represented by N225/NDX. Specifically, compared to the TSPO model, the DR-TSPO model under the distributionally robust optimization framework demonstrates a clear advantage in robustness: its out-of-sample average drawdown is generally lower than that of the TSPO model, and never worse.
In the European markets (the UK, France, and Germany), the advantages of DR-TSPO are not as pronounced, likely due to the high interconnectivity of market information and relatively clear market distributions (low uncertainty regarding market distributions for investors). In these markets, the loss aversion optimization model demonstrates better drawdown control than the market index, but the DR-TSPO model performs similarly to the standard TSPO in terms of ambiguity aversion and employs similar rebalancing strategies. In contrast, in other markets, the DR-TSPO model generally outperforms the non-robust model. Particularly in the Hong Kong and Russian stock markets (represented by HSI and RTS), the DR-TSPO model shows better adaptability through effective rebalancing compared to TSPO. For example, in the HSI index experiment in Hong Kong, the two-stage loss aversion model outperforms the market index, and DR-TSPO further strengthens loss control. In the Russian market, due to its high volatility, the TSPO model underperforms the market index, whereas DR-TSPO achieves better drawdown levels through larger rebalancing. While in markets such as China and the U.S. the two-stage loss aversion portfolio's drawdown performance is inferior to the market index, DR-TSPO still shows better adaptability and robustness than the non-robust TSPO model when facing unknown market distributions.
In subsequent experiments, we further analyze the performance of the TSPO and DR-TSPO models across several major stock indices in 2020, aiming to compare their risk and return characteristics in market environments with different distributional features. In 2020, global markets faced extreme volatility triggered by the COVID-19 pandemic and sharp changes in monetary policies by governments, resulting in a significant increase in market uncertainty. Against this backdrop, the main objective of portfolio optimization models is to achieve higher returns while maintaining a conservative approach.
Table 5 reports the portfolio performance of the TSPO and DR-TSPO models across the different markets.
On the SSE50 index, the DR-TSPO model achieves an annualized return of 0.4635, which is significantly higher than the TSPO’s 0.3020. Additionally, the DR-TSPO model has a lower volatility of 0.4430 compared to TSPO’s 0.5846, and it also performs better in terms of maximum drawdown, with DR-TSPO at 0.3405 versus TSPO’s 0.4528, indicating better capital protection. In 2020, the Chinese market experienced severe volatility during the early stages of the COVID-19 pandemic. The DR-TSPO model, using a distributionally robust optimization framework, was able to capture rebound opportunities under extreme market conditions while effectively reducing risk. This robust return characteristic highlights the advantage of DR-TSPO in highly volatile market environments. Especially during the pandemic and its aftermath, the traditional TSPO model may struggle to cope with such high market uncertainty and rapid changes, while the DR-TSPO effectively mitigates risk through its optimization strategy. Similarly, in the N225-based experiment, the DR-TSPO model shows an annualized return of 0.7257, slightly higher than TSPO’s 0.7189, with a volatility of 0.3119, compared to TSPO’s 0.4139, indicating better stability. In terms of maximum drawdown, DR-TSPO also outperforms TSPO, with 0.3533 versus 0.3830. In the experiments in the Chinese Shanghai and Japanese markets, the DR-TSPO model significantly outperforms the traditional two-stage loss aversion model in both volatility and profitability. The risk-adjusted Sharpe ratio and Sortino ratio further validate that the distributionally robust optimization method can maintain a certain level of robustness while overcoming over-conservatism, aiming for higher portfolio returns.
Moreover, the DR-TSPO model also performs well in the European markets, including the FTSE, FCHI, and GDAXI indices. The differences in the various metrics are small: for example, the annualized return on the FTSE is 0.1641 for DR-TSPO, slightly higher than TSPO's 0.1632, and the volatility of DR-TSPO is slightly lower than that of TSPO (0.3444 versus 0.3459 for the FTSE, and 0.3460 versus 0.3466 for the FCHI). For the German market (GDAXI), the annualized return for DR-TSPO is 1.1657, slightly higher than TSPO's 1.1646, and the maximum drawdown is marginally lower (0.2495 versus 0.2496). The small differences can mainly be attributed to the influence of common monetary policies (such as European Central Bank regulation), close economic interconnections, synchronized impacts from the global economic environment, and similar industry structures and capital market linkages in these markets. Because the market distribution characteristics are similar, the adaptive advantage of the robust optimization framework in addressing unknown distributions is less pronounced. Nevertheless, DR-TSPO still edges out TSPO in controlling volatility and maximum drawdown, consistent with its advantage in more volatile markets.
A particularly notable performance is observed in the RTS market, where the Russian economy faced multiple challenges such as a sharp drop in oil prices, the outbreak of the COVID-19 pandemic, and international sanctions, leading to extremely high financial market uncertainty. This high-volatility market, dominated by downward trends, posed a significant test for the risk control capabilities of portfolio optimization methods. The experimental results in Table 5 indicate that TSPO performed relatively poorly, with an annualized return of −0.1207 (versus −0.0905 for DR-TSPO), higher volatility (0.3759 vs. 0.3101), and a deeper maximum drawdown (0.5532 vs. 0.4093), suggesting that investors could face substantial losses. In contrast, DR-TSPO, with its robust optimization model, effectively reduced volatility in such a high-risk environment, protecting investors from excessive losses. Even though all return-related metrics were negative, DR-TSPO's ability to control volatility and manage risk remained a significant advantage, especially in a market characterized by high uncertainty and volatility. This allowed DR-TSPO to outperform traditional TSPO in avoiding extreme losses.
For the HSI (Hong Kong Hang Seng Index), although the annualized return of DR-TSPO (1.2015) is lower than that of TSPO (1.7232), its volatility (0.4516) is significantly lower than TSPO's (0.5479), and its maximum drawdown (0.3579 vs. 0.4087) is also better. Similarly, in the NDX index experiment, despite the lower profitability of DR-TSPO, it still demonstrates strong risk control, effectively smoothing market fluctuations and providing more stable returns. Looking at the wealth growth across different markets (as shown in Figure 7), the extreme declines in the RTS and N225 markets were the most severe, and there the DR-TSPO model exhibited superior performance compared to the traditional TSPO model: it not only excelled in controlling volatility and maximum drawdown but also helped avoid sudden losses from unknown factors, reflecting its conservative advantage. Overall, DR-TSPO, through its distributionally robust optimization framework, is better at balancing risk and return in high-uncertainty, high-volatility markets, offering better capital protection and delivering more robust investment performance.
In conclusion, the DR-TSPO model, through extensive application and in-depth comparative testing across various international markets, has demonstrated superior risk management and capital protection relative to the traditional TSPO model, especially under extreme market volatility, high uncertainty, and multiple external shocks such as the COVID-19 pandemic, oil price fluctuations, and international sanctions. With its distributionally robust optimization framework, DR-TSPO controls volatility and maximum drawdown effectively in markets ranging from China and Japan to Europe and Russia, and even where annualized returns do not show a clear advantage, its risk-adjusted performance stands out. This comprehensive and balanced investment strategy helps investors limit potential losses in high-volatility environments and supports more sustainable, steady wealth growth over the long term. The DR-TSPO model therefore provides a new and more reliable methodology for portfolio optimization, whose application value and practical significance are particularly prominent in the current, increasingly complex and volatile global economic climate.

8. Conclusions

Unlike existing studies that rely on predetermined loss reference points [4,5], our work integrates loss aversion theory with distributionally robust optimization to develop an adaptive loss reference point mechanism. This mechanism dynamically adjusts based on investors’ historical decisions and prevailing market conditions, thereby addressing the limitations of static reference point approaches. By explicitly capturing the path-dependent nature of loss aversion, our framework enhances decision-making flexibility across diverse market regimes. The proposed DR-TSPO model employs uncertainty sets to address distributional ambiguity, relaxing the conventional reliance on strict prior distribution assumptions [26,27]. This approach optimizes worst-case expected utility while maintaining robustness across different market regimes. The solution methodology demonstrates that the dual problem can be reformulated as a tractable second-order cone program. To enhance computational efficiency, we propose DL-CCA, a deep learning-based optimization algorithm that embeds constraint penalties within neural networks. Experimental comparisons demonstrate DL-CCA’s superior performance in solving large-scale optimization problems; it achieves an average optimal objective value of 0.0029 with merely 76.78 s of computation time, significantly outperforming traditional algorithms like Trust-Constr [38] and HO [39], as well as LSTM/CNN-based variants.
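To make the penalty-embedding idea concrete, the following schematic sketch (our own simplification in PyTorch; it does not reproduce the exact DL-CCA architecture, problem encoding, or hyperparameters used in the experiments) illustrates how a fully connected network can map a problem description to a portfolio that satisfies the simplex constraints by construction, while the remaining equality and inequality constraints are penalized in the training loss.

```python
# Schematic sketch of a penalty-based constraint correction network (not the released DL-CCA code).
import torch
import torch.nn as nn

class ConstraintCorrectionNet(nn.Module):
    def __init__(self, desc_dim: int, n_assets: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_assets),
        )

    def forward(self, problem_desc: torch.Tensor) -> torch.Tensor:
        # softmax keeps the portfolio weights non-negative and summing to one
        return torch.softmax(self.body(problem_desc), dim=-1)

def penalized_loss(objective: torch.Tensor,
                   eq_residual: torch.Tensor,
                   ineq_residual: torch.Tensor,
                   rho: float = 10.0) -> torch.Tensor:
    """Objective to be minimized plus quadratic penalties on constraint violations."""
    eq_pen = eq_residual.pow(2).sum()                            # residuals of g(x) = 0
    ineq_pen = torch.clamp(ineq_residual, min=0.0).pow(2).sum()  # violations of h(x) <= 0
    return objective + rho * (eq_pen + ineq_pen)
```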
Comprehensive backtesting using global equity data reveals that DR-TSPO delivers stronger performance in volatile markets compared to traditional models. For instance, in China’s market, it achieves higher annualized returns (0.4635 vs. 0.3020) with lower volatility (0.4430 vs. 0.5846), demonstrating improved capital protection. The model particularly excels during extreme market conditions, such as the 2019-2020 period, where it provides more effective risk mitigation. The multidimensional implications of this research offer valuable insights for various financial market participants. For investors, the dynamic reference point mechanism optimizes decision-making processes and reduces irrational trading behaviors. Asset management institutions can leverage the DL-CCA algorithm’s efficiency to enable real-time portfolio rebalancing of complex strategies, thereby enhancing robo-advisory systems. Regulators may consider incorporating the model’s stress-testing performance into systemic risk monitoring frameworks. For policymakers, the findings support the development of algorithmic transparency standards and cross-border regulatory coordination to address emerging challenges in financial technology.
Future research directions could explore multi-asset extensions, macroeconomic factor integration, and reinforcement learning applications to further advance investment decision paradigms toward more intelligent and market-adaptive approaches.

Author Contributions

Conceptualization, X.Z. and S.L.; methodology, X.Z.; software, X.Z.; validation, S.L.; formal analysis, J.P.; investigation, S.L.; resources, X.Z.; data curation, X.Z. and S.L.; writing—original draft preparation, X.Z.; writing—review and editing, S.L.; visualization, X.Z.; supervision, S.L.; project administration, X.Z.; funding acquisition, S.L. and J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grants 71771008 and 72271013.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
DRO  Distributionally Robust Optimization
DR-TSPO  Distributionally Robust Two-Stage Portfolio Optimization
TSPO  Two-Stage Portfolio Optimization
LPM  Lower Partial Moment
WCEL  Worst-Case Scenario Expected Loss
DL-CCA  Deep Learning-Based Constraint Correction Algorithm
HO  Hippopotamus Optimization
FCNN  Fully Connected Neural Network
LSTM  Long Short-Term Memory
CNN  Convolutional Neural Network
Trust-Constr  Trust-Region Constrained Optimization

Appendix A

Table A1. Notation.
$[N]$: The set of stock assets available for allocation; the number of stocks is $N$, and each individual stock is indexed by $j$, for all $j\in[N]$.
$l, \hat{l}$: $l$ denotes the random loss reference point of the loss-averse investor, which is related to previous returns and measures whether the portfolio's return meets the investor's psychological expectations. $\hat{l}$ is the prior estimate of the random variable $l$. The bold notation $\hat{\boldsymbol{l}} = (\hat{l}_1, \hat{l}_2, \ldots, \hat{l}_S)$, with $s\in[S]$, denotes the sample vector.
$\mu_{\hat{l}}, \sigma_{\hat{l}}^2$: The expected value and variance of the generated sample reference points, with $\Sigma_{\hat{l}} = \sigma_{\hat{l}}^2 \cdot I_S$ an invertible diagonal matrix and $\Sigma_{\hat{l}} \in \mathcal{S}_{++}$, where $\mathcal{S}_{++}$ denotes the set of positive definite matrices.
$\boldsymbol{\varepsilon}$: The noise vector used for sample generation.
$[S]$: The set of scenarios (or the number of samples) for the loss aversion reference, obtained from expert opinions or historical statistical datasets and dependent on the portfolio decision $\boldsymbol{x}$.
$\boldsymbol{\mu}, \Sigma$: The expected return vector and covariance matrix of the stocks' random returns.
$\mu_{\hat{l}}, \sigma_{\hat{l}}^2$: The sample expected value and variance of the loss aversion reference point samples $\hat{l}$.
$\beta, \gamma > 0$: Adjustment coefficients that scale the function's range to a reasonable interval of values.
$c$: The stock transaction cost coefficient.
$\varphi$: The investor's loss aversion coefficient, $\varphi > 0$, an indicator used in behavioral economics to measure the investor's sensitivity to losses relative to gains.
$\boldsymbol{y}^{mkt}$: The market portfolio weights, serving as the reference decision target for the investor.
$\epsilon$: The given radius of the Wasserstein distance ball, used to adjust the size of the distributional ambiguity set.
Decision variables:
$\boldsymbol{x}$: The continuous decision vector $\boldsymbol{x} = (x_1, x_2, \ldots, x_N)$, the weight vector of the current portfolio, with $x_j \in [0,1]$, $j\in[N]$.
$\boldsymbol{y}$: The previous decision vector $\boldsymbol{y} = (y_1, y_2, \ldots, y_N)$, related to the realization of the current loss aversion reference point. Under the robust ambiguity set assumption, the true distribution of the random loss reference point lies within the $\epsilon$-neighborhood of the scenario distribution generated by the previous decision $\boldsymbol{y}$.
$\boldsymbol{\varepsilon}$: The noise vector of the unknown distribution, $\boldsymbol{\varepsilon} = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_S)$, whose distributional moments are related to the previous decision.
Bold symbols represent vectors.

Appendix B. Proof of Theorem 1

We adopt a hierarchical optimization approach and first solve the worst-case scenario expected loss (WCEL) subproblem. For any realization of the prior decision $\boldsymbol{y}$, the loss reference point follows some distribution in the reference ambiguity set, $l \sim \mathbb{P}$, $\mathbb{P} \in \mathcal{B}_{\epsilon}(\hat{\mathbb{P}}(\boldsymbol{y}))$, that is,
\[ (\mathrm{WCEL}):\quad \sup_{\Pi(\mathrm{d}l,\hat{l}) \geq 0} \; \int_{\mathcal{L}} \varphi \cdot \sum_{s\in[S]} \big[\,l - \boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} \, \Pi(\mathrm{d}l,\hat{l}_{s}) \]
\[ \text{s.t.}\quad \int_{\mathcal{L}} \Pi(\mathrm{d}l,\hat{l}_{s}) = \frac{1}{S}, \quad \forall s\in[S] \]
\[ \int_{\mathcal{L}} \sum_{s\in[S]} \big\| l - \hat{l}_{s} \big\|_{2}^{2} \, \Pi(\mathrm{d}l,\hat{l}_{s}) \leq \epsilon, \]
where $[z]_{+} = \max(z,0)$.
This is a second-order lower partial moment (LPM) optimization problem under an unknown distribution. Here $\Pi$ denotes the joint distribution of the true loss reference $l$ and the estimated loss $\hat{l}_{s}$. Constraint (A2) requires the marginals $\Pi(\mathrm{d}l,\hat{l}_{s})$ of the joint distribution to satisfy the normalization condition, so that each estimated sample $\hat{l}_{s}$ carries total weight $\frac{1}{S}$. In the scenario set $\{\hat{l}_{s}\}_{s\in[S]}$, each sample is represented by a Dirac distribution $\delta_{\hat{l}_{s}}$: in scenario $s$ the sample $\hat{l}_{s}$ is deterministic, and all probability mass is concentrated at that point. Constraint (A3) requires that the Wasserstein distance between the true distribution and the reference distribution not exceed the threshold $\epsilon$.
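Equivalently, in the standard Wasserstein-DRO form (a compact restatement of what (A2) and (A3) imply, not an additional assumption), the reference distribution and the ambiguity ball can be written as
\[
\hat{\mathbb{P}}(\boldsymbol{y}) = \frac{1}{S}\sum_{s\in[S]} \delta_{\hat{l}_{s}},
\qquad
\mathcal{B}_{\epsilon}\big(\hat{\mathbb{P}}(\boldsymbol{y})\big)
= \Big\{ \mathbb{P} :\; \inf_{\Pi \in \mathcal{C}(\mathbb{P},\hat{\mathbb{P}}(\boldsymbol{y}))} \mathbb{E}_{\Pi}\big[\|l-\hat{l}\|_{2}^{2}\big] \leq \epsilon \Big\},
\]
where $\mathcal{C}(\mathbb{P},\hat{\mathbb{P}}(\boldsymbol{y}))$ denotes the set of couplings of $\mathbb{P}$ and $\hat{\mathbb{P}}(\boldsymbol{y})$; that is, the squared type-2 Wasserstein distance to the empirical reference distribution is at most $\epsilon$.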
To construct the Lagrangian function of the optimization problem and its corresponding dual, we incorporate the constraints into the objective and apply the method of Lagrange multipliers. Introducing the multipliers $\omega$ and $\nu \geq 0$ for constraints (A2) and (A3), respectively, the Lagrangian of the inner subproblem (WCEL) is expressed as:
\[
\mathcal{L}(\Pi,\omega,\nu) = \varphi \cdot \int_{\mathcal{L}} \sum_{s\in[S]} \big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2}\,\Pi(\mathrm{d}l,\hat{l}_{s})
- \int_{\mathcal{L}} \sum_{s\in[S]} \omega_{s}\,\Pi(\mathrm{d}l,\hat{l}_{s})
- \int_{\mathcal{L}} \sum_{s\in[S]} \nu\,\|l-\hat{l}_{s}\|_{2}^{2}\,\Pi(\mathrm{d}l,\hat{l}_{s})
+ \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon.
\]
Therefore, the Lagrangian dual function can be expressed as:
\[
\inf_{\omega,\nu} g(\omega,\nu) = \inf_{\omega,\nu}\,\sup_{\Pi(\mathrm{d}l,\hat{l})\geq 0} \mathcal{L}(\Pi,\omega,\nu)
= \inf_{\omega,\nu}\,\sup_{\Pi(\mathrm{d}l,\hat{l})\geq 0} \int_{\mathcal{L}} \sum_{s\in[S]} \Big( \varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2} \Big)\Pi(\mathrm{d}l,\hat{l}_{s})
+ \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon.
\]
Note that if $\varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2} > \omega_{s}$ for some $l$ and $s$, then $\Pi(\mathrm{d}l,\hat{l}_{s}) \geq 0$ can be scaled up so that $\mathcal{L}(\Pi,\omega,\nu) \to +\infty$. Hence it must hold that $\varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2} \leq \omega_{s}$, and under this constraint the inner supremum is attained at $\Pi(\mathrm{d}l,\hat{l}_{s}) = 0$. Strong duality holds, leading to the equivalent dual problem of WCEL (DWCEL).
\[
(\mathrm{DWCEL}):\quad \min_{\omega,\nu}\; \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon
\]
\[
\text{s.t.}\quad \varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2} \leq \omega_{s}, \quad \forall l\in\mathcal{L}_{s},\; s\in[S]
\]
\[
\nu \geq 0.
\]
Let $h(l) = \varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2}$. We next consider how to ensure that $h(l)$ is always non-positive, which can be achieved by requiring
\[
\sup_{l\in\mathcal{L}} h(l) = \sup_{l\in\mathcal{L}} \; \varphi\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} - \omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2} \leq 0.
\]
For $l\in\mathcal{L}_{s}$, the range of values of $l$ can be divided into two cases, $l > \boldsymbol{\mu}^{\top}\boldsymbol{x}$ and $l \leq \boldsymbol{\mu}^{\top}\boldsymbol{x}$, which we discuss separately.
PART 1. When $l > \boldsymbol{\mu}^{\top}\boldsymbol{x}$, the loss aversion utility is quadratic, and we have $h(l) = \varphi\,(l-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2}$.
Let $\Delta u_{s} = l - \hat{l}_{s}$, $\Delta u_{s}\in\mathbb{R}$; then
\[
\sup_{l\in\mathcal{L}} \; \varphi\,(l-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2}
\]
\[
= \sup_{\Delta u_{s}\in\mathbb{R}} \; \varphi\,\big[\Delta u_{s} + (\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})\big]^{2} - \omega_{s} - \nu\,\|\Delta u_{s}\|_{2}^{2}
\]
\[
= \sup_{t\geq 0}\;\sup_{\|\Delta u_{s}\|_{2}^{2}=t} \; \varphi\,\big[\Delta u_{s}^{2} + (\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} + 2(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})\Delta u_{s}\big] - \omega_{s} - \nu t
\]
\[
= \sup_{\|\Delta u_{s}\|_{2}^{2}\geq 0} \; (\varphi-\nu)\,\Delta u_{s}^{2} + 2\varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})\Delta u_{s} + \varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s},
\]
where the term $\varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s}$ can be treated as a constant that does not depend on $\Delta u_{s}$. If $\varphi-\nu \geq 0$, this function is linear or convex in $\Delta u_{s}$, and since $\Delta u_{s}$ ranges over the whole real line $\mathbb{R}$, its value tends to infinity as $\Delta u_{s}\to\pm\infty$. When $\varphi-\nu < 0$, however, the function is concave, and its maximum is found by setting the gradient to zero: $2(\varphi-\nu)\Delta u_{s} + 2\varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x}) = 0$.
We have
\[
\Delta u_{s}^{*} = \frac{\varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})}{\nu-\varphi}.
\]
Substituting $\Delta u_{s}^{*}$ back into Equation (A12), when $\nu-\varphi > 0$ the supremum $\sup_{l\in\mathcal{L}} h(l)$ equals
\[
\sup_{\|\Delta u_{s}\|_{2}^{2}\geq 0} \; (\varphi-\nu)\,\Delta u_{s}^{2} + 2\varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})\Delta u_{s} + \varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s}
= \frac{\varphi^{2}(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}}{\nu-\varphi} + \varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2} - \omega_{s}.
\]
Therefore, constraint (A8) is equivalent to
\[
\omega_{s} \geq
\begin{cases}
\dfrac{\varphi^{2}(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}}{\nu-\varphi} + \varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}, & \varphi-\nu < 0,\\[2mm]
+\infty, & \varphi-\nu \geq 0.
\end{cases}
\]
PART 2. If $l \leq \boldsymbol{\mu}^{\top}\boldsymbol{x}$, then $\big[\,l-\boldsymbol{\mu}^{\top}\boldsymbol{x}\,\big]_{+}^{2} = 0$, meaning that no loss aversion disutility is generated in this case, and
\[
\sup_{l\in\mathcal{L}} h(l) = \sup_{l\in\mathcal{L}} \; -\,\omega_{s} - \nu\,\|l-\hat{l}_{s}\|_{2}^{2}.
\]
Since $\nu \geq 0$, constraint (A8) is in this case equivalent to $\omega_{s}\geq 0$. In conclusion, the DWCEL problem is equivalent to:
\[
\min_{\omega,\nu}\; \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon
\]
\[
\text{s.t.}\quad \omega_{s} \geq \frac{\varphi^{2}(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}}{\nu-\varphi} + \varphi(\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}, \quad \forall s\in[S]
\]
\[
0 \leq \varphi \leq \nu, \quad \omega_{s}\geq 0, \quad \forall s\in[S].
\]
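For completeness, and as one standard way (our own restatement, not an additional result of the derivation above) to see why this program is second-order cone representable when $\boldsymbol{x}$ is held fixed in the inner minimization: writing $a_{s} = \hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x}$, the first constraint can be rearranged as a rotated second-order cone constraint,
\[
\omega_{s} \geq \frac{\varphi^{2}a_{s}^{2}}{\nu-\varphi} + \varphi a_{s}^{2}
\;\Longleftrightarrow\;
(\omega_{s}-\varphi a_{s}^{2})(\nu-\varphi) \geq (\varphi a_{s})^{2},\;\; \omega_{s}-\varphi a_{s}^{2}\geq 0,\;\; \nu-\varphi\geq 0
\;\Longleftrightarrow\;
\left\| \begin{pmatrix} 2\varphi a_{s} \\ (\omega_{s}-\varphi a_{s}^{2})-(\nu-\varphi) \end{pmatrix} \right\|_{2} \leq (\omega_{s}-\varphi a_{s}^{2})+(\nu-\varphi).
\]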
Noting further that $\frac{\varphi^{2}a_{s}^{2}}{\nu-\varphi}+\varphi a_{s}^{2} = \frac{\varphi\nu\,a_{s}^{2}}{\nu-\varphi}$, the first constraint can equivalently be written as $\big(\tfrac{1}{\varphi}-\tfrac{1}{\nu}\big)\omega_{s} \geq a_{s}^{2}$. By substituting the dual problem of the subproblem (DWCEL) into the original problem, the DR-TSPO model is equivalent to solving:
\[
\max_{\boldsymbol{y}\in X,\; \hat{\boldsymbol{l}}\in\hat{\mathcal{L}}(\boldsymbol{y}),\; \boldsymbol{\varepsilon}\in\mathbb{R}^{S}} \; \boldsymbol{\mu}^{\top}\boldsymbol{y}
\;-\; \min_{\boldsymbol{x}\in X,\,\omega,\,\nu} \; \bigg( \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon - \boldsymbol{\mu}^{\top}\boldsymbol{x} + c\sum_{i\in[N]}\big|x_{i}-y_{i}\big| \bigg)
\]
\[
\text{s.t.}\quad \Big(\frac{1}{\varphi}-\frac{1}{\nu}\Big)\,\omega_{s} \geq (\hat{l}_{s}-\boldsymbol{\mu}^{\top}\boldsymbol{x})^{2}, \quad \forall \hat{l}\in\hat{\mathcal{L}}_{s},\; s\in[S]
\]
\[
\hat{l}_{s} = \beta\cdot\boldsymbol{\mu}^{\top}(\boldsymbol{y}-\boldsymbol{y}^{mkt}) + \varepsilon_{s}\cdot\frac{1}{N}\mathbf{1}^{\top}\boldsymbol{\mu}, \quad \forall s\in[S]
\]
\[
\frac{1}{S-1}\sum_{s\in[S]}\varepsilon_{s}^{2} = \gamma\cdot\big\|\boldsymbol{y}-\boldsymbol{y}^{mkt}\big\|_{2}^{2}
\]
\[
\sum_{i\in[N]} x_{i} = 1, \quad \sum_{i\in[N]} y_{i} = 1, \quad \sum_{s\in[S]}\varepsilon_{s} = 0,
\]
\[
x_{i}, y_{i} \geq 0, \;\forall i\in[N], \quad \omega_{s}\geq 0, \quad 0 \leq \varphi \leq \nu.
\]
By substituting constraint (A18) into (A17) and introducing the upper bound parameter $\theta$ for further simplification, we get:
\[
\max_{\boldsymbol{x},\boldsymbol{y},\omega,\nu,\theta,\boldsymbol{\varepsilon}} \; \boldsymbol{\mu}^{\top}\boldsymbol{y} - \theta
\]
\[
\text{s.t.}\quad \frac{1}{S}\sum_{s\in[S]}\omega_{s} + \nu\epsilon - \boldsymbol{\mu}^{\top}\boldsymbol{x} + c\sum_{i\in[N]}\big|x_{i}-y_{i}\big| \leq \theta
\]
\[
\Big(\frac{1}{\varphi}-\frac{1}{\nu}\Big)\,\omega_{s} \geq \Big(\beta\cdot\boldsymbol{\mu}^{\top}(\boldsymbol{y}-\boldsymbol{y}^{mkt}) + \varepsilon_{s}\cdot\frac{1}{N}\mathbf{1}^{\top}\boldsymbol{\mu} - \boldsymbol{\mu}^{\top}\boldsymbol{x}\Big)^{2}, \quad \forall s\in[S]
\]
\[
\frac{1}{S-1}\sum_{s\in[S]}\varepsilon_{s}^{2} = \gamma\cdot\big\|\boldsymbol{y}-\boldsymbol{y}^{mkt}\big\|_{2}^{2}
\]
\[
\sum_{i\in[N]} x_{i} = 1, \quad \sum_{i\in[N]} y_{i} = 1, \quad \sum_{s\in[S]}\varepsilon_{s} = 0,
\]
\[
x_{i}, y_{i} \geq 0, \;\forall i\in[N], \quad \omega_{s}\geq 0, \quad 0 \leq \varphi \leq \nu.
\]
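As a quick numerical sanity check of the inner dual derived above, the sketch below (entirely illustrative data and our own code, not the authors' implementation) solves the DWCEL dual for a fixed first-stage decision $\boldsymbol{x}$ with CVXPY, using the quad_over_lin atom for the term $\varphi^{2}a_{s}^{2}/(\nu-\varphi)$. Under these assumptions, the optimal value approximates the worst-case expected loss-aversion disutility that enters the outer objective through $\theta$.

```python
# Minimal sanity check of the DWCEL dual for a fixed first-stage decision x (illustrative data).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, S = 10, 50                          # assets and reference-point scenarios (assumed)
mu = rng.normal(0.001, 0.01, N)        # hypothetical expected daily returns
x = np.full(N, 1.0 / N)                # fixed equal-weight portfolio
l_hat = rng.normal(0.0, 0.005, S)      # hypothetical loss-reference samples
phi, eps = 2.25, 1e-4                  # loss-aversion coefficient and Wasserstein radius

a = l_hat - mu @ x                     # a_s = l_hat_s - mu'x
omega = cp.Variable(S, nonneg=True)
nu = cp.Variable(nonneg=True)

constraints = [nu >= phi]
for s in range(S):
    # omega_s >= phi^2 a_s^2 / (nu - phi) + phi a_s^2, convex in nu on nu > phi
    constraints.append(omega[s] >= cp.quad_over_lin(phi * a[s], nu - phi) + phi * a[s] ** 2)

problem = cp.Problem(cp.Minimize(cp.sum(omega) / S + nu * eps), constraints)
problem.solve()
print("worst-case expected loss-aversion disutility:", problem.value)
```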

References

  1. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk. Econometrica 1979, 47, 263–291. [Google Scholar] [CrossRef]
  2. Barberis, N.; Huang, M.; Santos, T. Prospect theory and asset prices. Q. J. Econ. 2001, 116, 1–53. [Google Scholar] [CrossRef]
  3. Zhang, W.; Semmler, W. Prospect theory for stock markets: Empirical evidence with time-series data. J. Econ. Behav. Organ. 2009, 72, 835–849. [Google Scholar] [CrossRef]
  4. Barberis, N.; Xiong, W. What drives the disposition effect? An analysis of a long-standing preference-based explanation. J. Financ. 2009, 64, 751–784. [Google Scholar] [CrossRef]
  5. Shi, Y.; Cui, X.; Li, D. Discrete-time behavioral portfolio selection under cumulative prospect theory. J. Econ. Dyn. Control 2015, 61, 283–302. [Google Scholar] [CrossRef]
  6. Van Bilsen, S.; Laeven, R.J.; Nijman, T.E. Consumption and portfolio choice under loss aversion and endogenous updating of the reference level. Manag. Sci. 2020, 66, 3927–3955. [Google Scholar] [CrossRef]
  7. Gao, J.; Li, Y.; Shi, Y.; Xie, J. Multi-period portfolio choice under loss aversion with dynamic reference point in serially correlated market. Omega 2024, 127, 103103. [Google Scholar] [CrossRef]
  8. Berkelaar, A.B.; Kouwenberg, R.; Post, T. Optimal portfolio choice under loss aversion. Rev. Econ. Stat. 2004, 86, 973–987. [Google Scholar] [CrossRef]
  9. Jin, H.; Zhou, X.Y. Behavioral portfolio selection in continuous time. Math. Financ. Int. J. Math. Stat. Financ. Econ. 2008, 18, 385–426. [Google Scholar]
  10. He, X.D.; Zhou, X.Y. Myopic loss aversion, reference point, and money illusion. Quant. Financ. 2014, 14, 1541–1554. [Google Scholar] [CrossRef]
  11. He, X.D.; Zhou, X.Y. Portfolio choice under cumulative prospect theory: An analytical treatment. Manag. Sci. 2011, 57, 315–331. [Google Scholar] [CrossRef]
  12. De Giorgi, E.G.; Legg, S. Dynamic portfolio choice and asset pricing with narrow framing and probability weighting. J. Econ. Dyn. Control 2012, 36, 951–972. [Google Scholar] [CrossRef]
  13. Shi, Y.; Cui, X.; Yao, J.; Li, D. Dynamic trading with reference point adaptation and loss aversion. Oper. Res. 2015, 63, 789–806. [Google Scholar] [CrossRef]
  14. Zou, B.; Zagst, R. Optimal investment with transaction costs under cumulative prospect theory in discrete time. Math. Financ. Econ. 2017, 11, 393–421. [Google Scholar] [CrossRef]
  15. Zou, Y.; Guo, J. Two important improvements of prospect theory—Study of the loss aversion coefficient λ and the reference point. Oper. Res. Manag. Sci. 2007, 16, 87–89. [Google Scholar]
  16. Markowitz, H. Portfolio Selection. J. Financ. 1952, 7, 71–91. [Google Scholar]
  17. Baucells, M.; Weber, M.; Welfens, F. Reference-point formation and updating. Manag. Sci. 2011, 57, 506–519. [Google Scholar] [CrossRef]
  18. Strub, M.S.; Li, D. Failing to foresee the updating of the reference point leads to time-inconsistent investment. Oper. Res. 2020, 68, 199–213. [Google Scholar] [CrossRef]
  19. van Bilsen, S.; Laeven, R.J. Dynamic consumption and portfolio choice under prospect theory. Insur. Math. Econ. 2020, 91, 224–237. [Google Scholar] [CrossRef]
  20. He, X.D.; Strub, M.S. How endogenization of the reference point affects loss aversion: A study of portfolio selection. Oper. Res. 2022, 70, 3035–3053. [Google Scholar] [CrossRef]
  21. Chow, V.T.F.; Cui, Z.; Long, D.Z. Target-Oriented Distributionally Robust Optimization and Its Applications to Surgery Allocation. INFORMS J. Comput. 2022, 34, 2058–2072. [Google Scholar] [CrossRef]
  22. Noyan, N.; Rudolf, G.; Lejeune, M. Distributionally robust optimization under a decision-dependent ambiguity set with applications to machine scheduling and humanitarian logistics. INFORMS J. Comput. 2022, 34, 729–751. [Google Scholar] [CrossRef]
  23. Zhao, Y.; Chen, Z.; Zhang, Z. Distributionally Robust Chance-Constrained p-Hub Center Problem. INFORMS J. Comput. 2023, 35, 1361–1382. [Google Scholar] [CrossRef]
  24. Garlappi, L.; Uppal, R.; Wang, T. Portfolio selection with parameter and model uncertainty: A multi-prior approach. Rev. Financ. Stud. 2007, 20, 41–81. [Google Scholar] [CrossRef]
  25. Zhu, S.; Fukushima, M. Worst-case conditional value-at-risk with application to robust portfolio management. Oper. Res. 2009, 57, 1155–1168. [Google Scholar] [CrossRef]
  26. Sun, Y.; Aw, E.L.G.; Li, B.; Teo, K.L.; Sun, J. CVaR-based robust models for portfolio selection. J. Ind. Manag. Optim. 2020, 16, 1861–1871. [Google Scholar] [CrossRef]
  27. Kang, Z.; Li, X.; Li, Z.; Zhu, S. Data-driven robust mean-CVaR portfolio selection under distribution ambiguity. Quant. Financ. 2019, 19, 105–121. [Google Scholar] [CrossRef]
  28. Pflug, G.; Wozabal, D. Ambiguity in portfolio selection. Quant. Financ. 2007, 7, 435–442. [Google Scholar] [CrossRef]
  29. Wozabal, D. A framework for optimization under ambiguity. Ann. Oper. Res. 2012, 193, 21–47. [Google Scholar] [CrossRef]
  30. Gao, R.; Chen, X.; Kleywegt, A.J. Wasserstein distributionally robust optimization and variation regularization. Oper. Res. 2024, 72, 1177–1191. [Google Scholar] [CrossRef]
  31. Blanchet, J.; Chen, L.; Zhou, X.Y. Distributionally robust mean-variance portfolio selection with Wasserstein distances. Manag. Sci. 2022, 68, 6382–6410. [Google Scholar] [CrossRef]
  32. Bengio, Y.; Lodi, A.; Prouvost, A. Machine learning for combinatorial optimization: A methodological tour d’horizon. Eur. J. Oper. Res. 2021, 290, 405–421. [Google Scholar] [CrossRef]
  33. Hasan, F.; Kargarian, A.; Mohammadi, A. A survey on applications of machine learning for optimal power flow. In Proceedings of the 2020 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 6–7 February 2020; pp. 1–6. [Google Scholar]
  34. Koziel, S.; Leifsson, L. Surrogate-Based Modeling and Optimization; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  35. Baker, K. Learning warm-start points for AC optimal power flow. In Proceedings of the 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), Pittsburgh, PA, USA, 13–16 October 2019; pp. 1–6. [Google Scholar]
  36. Dong, W.; Xie, Z.; Kestor, G.; Li, D. Smart-PGSim: Using neural network to accelerate AC-OPF power grid simulation. In Proceedings of the SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, Atlanta, GA, USA, 9–19 November 2020; pp. 1–15. [Google Scholar]
  37. Misra, S.; Roald, L.; Ng, Y. Learning for constrained optimization: Identifying optimal active constraint sets. INFORMS J. Comput. 2022, 34, 463–480. [Google Scholar] [CrossRef]
  38. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57. [Google Scholar] [CrossRef]
  39. Amiri, M.H.; Mehrabi Hashjin, N.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Decision-dependent loss aversion.
Figure 2. DL-CCA.
Figure 3. DL-CCA network parameter analysis heat map. Darker colors indicate lower optimal values and faster solving times.
Figure 4. Comparison of algorithms: (a) algorithm solution time as the problem scale increases; (b) optimal objective value obtained by each algorithm as the problem scale increases.
Figure 5. Stock index prices from 1 January 2019 to 31 December 2020.
Figure 6. Comparison of mean drawdown and rebalancing.
Figure 7. Cumulative wealth growth.
Table 1. Three-way ANOVA table for loss.
Source | Sum_Sq | df | F | PR(>F)
C(num_Layer) | 7.64 × 10^5 | 2.0 | 3.2942 | 0.040
C(hidden_Size) | 3.36 × 10^4 | 3.0 | 0.0966 | 0.962
C(learn_Rate) | 2.10 × 10^6 | 2.0 | 9.0533 | 0.000
C(num_Layer):C(hidden_Size) | 1.50 × 10^6 | 6.0 | 2.1559 | 0.051
C(num_Layer):C(learn_Rate) | 5.01 × 10^5 | 4.0 | 1.0797 | 0.369
C(hidden_Size):C(learn_Rate) | 6.02 × 10^5 | 6.0 | 0.8650 | 0.522
C(num_Layer):C(hidden_Size):C(learn_Rate) | 2.65 × 10^6 | 12.0 | 1.9037 | 0.038
Residual | 1.67 × 10^7 | 144.0 | — | —
Table 2. Three-way ANOVA table for times.
Source | Sum_Sq | df | F | PR(>F)
C(num_Layer) | 1.29 × 10^4 | 2.0 | 822.7933 | 0.000
C(hidden_Size) | 1.10 × 10^6 | 3.0 | 46815.854 | 0.000
C(learn_Rate) | 8.74 × 10^2 | 2.0 | 55.8716 | 0.000
C(num_Layer):C(hidden_Size) | 1.83 × 10^4 | 6.0 | 389.5377 | 0.000
C(num_Layer):C(learn_Rate) | 5.57 × 10^1 | 4.0 | 1.7811 | 0.136
C(hidden_Size):C(learn_Rate) | 6.69 × 10^2 | 6.0 | 14.2643 | 0.000
C(num_Layer):C(hidden_Size):C(learn_Rate) | 1.86 × 10^2 | 12.0 | 1.9867 | 0.029
Residual | 1.13 × 10^3 | 144.0 | — | —
Table 3. Comparison of algorithms.
Algorithm | Dim_Var | Iter | IEqConstr_vio | EqConstr_vio | Sol_time (Sec.) | Opt_Obj (Min.)
Trust-Constr | 50 | 600 | — | 2.11 × 10^-7 | * 1.67 | 0.0047
Trust-Constr | 100 | 2990 | — | 1.07 × 10^-7 | * 16.60 | * 0.0038
Trust-Constr | 150 | 1280 | — | 1.42 × 10^-7 | * 13.75 | 0.0041
Trust-Constr | 200 | 3100 | — | 8.46 × 10^-8 | * 50.34 | 0.0036
Trust-Constr | 250 | 640 | — | 4.94 × 10^-8 | * 20.71 | 0.0036
Trust-Constr | 300 | 2170 | — | 2.20 × 10^-7 | 107.20 | 0.0043
Trust-Constr | 350 | 3210 | — | 1.27 × 10^-7 | 211.05 | 0.0035
Trust-Constr | 400 | 3550 | — | 9.40 × 10^-8 | 309.54 | 0.0043
Trust-Constr | 450 | 3120 | — | 2.28 × 10^-7 | 480.63 | * 0.0011
Trust-Constr | 500 | 3670 | — | 1.78 × 10^-7 | 699.49 | 0.0037
Trust-Constr | Avg. | — | * 0 | * 1.44 × 10^-7 | 191.10 | 0.0037
HO | 50 | 200 | 6.09 × 10^-6 | 3.18 × 10^-1 | 11.22 | 0.0720
HO | 100 | 400 | 0 | 3.30 × 10^-1 | 26.97 | 0.0700
HO | 150 | 600 | 8.24 × 10^-7 | 8.25 × 10^-2 | 47.10 | 0.0173
HO | 200 | 800 | 9.29 × 10^-6 | 4.69 × 10^-1 | 77.05 | 0.0273
HO | 250 | 1000 | 4.29 × 10^-6 | 4.32 × 10^-1 | 107.98 | 0.0485
HO | 300 | 1200 | 2.78 × 10^-6 | 1.34 × 10^-1 | 148.43 | 0.0114
HO | 350 | 1400 | 4.97 × 10^-6 | 2.83 × 10^-1 | 191.57 | 0.0049
HO | 400 | 1600 | 1.24 × 10^-5 | 1.01 × 10^-1 | 260.53 | 0.0228
HO | 450 | 1800 | 1.89 × 10^-6 | 9.55 × 10^-2 | 296.35 | 0.0043
HO | 500 | 2000 | 1.06 × 10^-6 | 4.84 × 10^-2 | 385.91 | 0.0134
HO | Avg. | — | 4.36 × 10^-6 | 2.29 × 10^-1 | 155.31 | 0.0292
DL-CCA | 50 | 2000 | 1.98 × 10^-6 | 4.72 × 10^-6 | 12.25 | * 0.0024
DL-CCA | 100 | 2000 | 6.15 × 10^-6 | 1.86 × 10^-6 | 22.05 | 0.0072
DL-CCA | 150 | 2000 | 8.33 × 10^-7 | 8.46 × 10^-6 | 27.87 | * 0.0027
DL-CCA | 200 | 2000 | 9.08 × 10^-6 | 6.68 × 10^-6 | 52.74 | * 0.0029
DL-CCA | 250 | 2000 | 4.53 × 10^-7 | 3.47 × 10^-7 | 62.47 | * 0.0027
DL-CCA | 300 | 2000 | 1.13 × 10^-8 | 7.87 × 10^-8 | * 77.57 | * 0.0026
DL-CCA | 350 | 2000 | 9.13 × 10^-7 | 2.36 × 10^-6 | * 80.61 | * 0.0029
DL-CCA | 400 | 2000 | 3.63 × 10^-7 | 1.04 × 10^-4 | * 126.44 | * 0.0009
DL-CCA | 450 | 2000 | 8.04 × 10^-7 | 1.97 × 10^-6 | * 141.79 | 0.0017
DL-CCA | 500 | 2000 | 2.31 × 10^-8 | 1.96 × 10^-7 | * 164.05 | * 0.0030
DL-CCA | Avg. | — | 2.06 × 10^-6 | 1.30 × 10^-5 | * 76.78 | * 0.0029
DLCC-LSTM | 50 | 2000 | 2.37 × 10^-6 | 2.00 × 10^-5 | 57.14 | 0.0350
DLCC-LSTM | 100 | 2000 | 1.98 × 10^-6 | 6.11 × 10^-6 | 93.47 | 0.0392
DLCC-LSTM | 150 | 2000 | 2.31 × 10^-5 | 4.56 × 10^-5 | 121.64 | 0.0286
DLCC-LSTM | 200 | 2000 | 5.38 × 10^-4 | 6.64 × 10^-2 | 198.92 | 0.0111
DLCC-LSTM | 250 | 2000 | 5.61 × 10^-7 | 2.08 × 10^-5 | 275.13 | 0.0380
DLCC-LSTM | 300 | 2000 | 6.65 × 10^-7 | 2.95 × 10^-5 | 300.92 | 0.0576
DLCC-LSTM | 350 | 2000 | 4.86 × 10^-7 | 7.01 × 10^-5 | 309.55 | 0.0526
DLCC-LSTM | 400 | 2000 | 3.26 × 10^-5 | 5.30 × 10^-4 | 298.91 | 0.0652
DLCC-LSTM | 450 | 2000 | 4.65 × 10^-7 | 1.47 × 10^-5 | 295.58 | 0.0233
DLCC-LSTM | 500 | 2000 | 8.87 × 10^-8 | 5.46 × 10^-6 | 292.55 | 0.0547
DLCC-LSTM | Avg. | — | 6.00 × 10^-5 | 6.71 × 10^-3 | 224.38 | 0.0405
DLCC-CNN | 50 | 2000 | 2.05 × 10^-6 | 1.10 × 10^-6 | 116.69 | 0.0095
DLCC-CNN | 100 | 2000 | 1.52 × 10^-6 | 4.70 × 10^-6 | 211.00 | * 0.0060
DLCC-CNN | 150 | 2000 | 8.60 × 10^-7 | 1.92 × 10^-4 | 288.77 | 0.0114
DLCC-CNN | 200 | 2000 | 1.23 × 10^-6 | 4.40 × 10^-5 | 286.63 | 0.0158
DLCC-CNN | 250 | 2000 | 2.80 × 10^-6 | 3.42 × 10^-4 | 345.42 | 0.0219
DLCC-CNN | 300 | 2000 | 1.47 × 10^-5 | 2.94 × 10^-4 | 253.91 | 0.0518
DLCC-CNN | 350 | 2000 | 4.89 × 10^-7 | 2.71 × 10^-6 | 499.29 | 0.0446
DLCC-CNN | 400 | 2000 | 1.06 × 10^-5 | 9.82 × 10^-5 | 640.41 | 0.0067
DLCC-CNN | 450 | 2000 | 3.32 × 10^-7 | 1.27 × 10^-2 | 710.76 | 0.0501
DLCC-CNN | Avg. | — | 3.54 × 10^-6 | 1.39 × 10^-3 | 413.26 | 0.0279
"*" indicates the optimal indicator.
Table 4. Distribution characteristics and moment information of different market indices.
Index | No | Region | Mean | StdDev | Skewness | Kurtosis | Min | Max | Range | Median
SSE50.GI | 43 | CHN | 0.0013 | 0.0121 | 0.4471 | 6.7585 | −0.0476 | 0.0628 | 0.1104 | 0.0005
HSI.HI | 58 | CHN | 0.0005 | 0.0098 | 0.0831 | 4.1764 | −0.0293 | 0.0388 | 0.0680 | 0.0008
N225.GI | 222 | JPN | 0.0008 | 0.0078 | −0.1710 | 4.6227 | −0.0303 | 0.0251 | 0.0554 | 0.0010
NDX.GI | 88 | US | 0.0013 | 0.0101 | −0.4328 | 5.9353 | −0.0360 | 0.0448 | 0.0808 | 0.0016
FTSE.GI | 36 | EU | 0.0007 | 0.0091 | −0.1730 | 4.8791 | −0.0357 | 0.0308 | 0.0665 | 0.0009
FCHI.GI | 36 | EU | 0.0010 | 0.0092 | −0.6072 | 5.5408 | −0.0363 | 0.0324 | 0.0687 | 0.0016
GDAXI.GI | 32 | EU | 0.0009 | 0.0095 | −0.3146 | 5.0212 | −0.0336 | 0.0389 | 0.0725 | 0.0020
RTS.GI | 35 | RUS | 0.0016 | 0.0096 | −0.3046 | 4.5572 | −0.0395 | 0.0287 | 0.0683 | 0.0013
Table 5. Performance metrics across different indices using TSPO and DR-TSPO.
 | SSE50 * | SSE50 * | HSI | HSI | N225 * | N225 * | NDX | NDX
Metric | TSPO | DR-TSPO | TSPO | DR-TSPO | TSPO | DR-TSPO | TSPO | DR-TSPO
Annualized Return | 0.3020 | 0.4635 | 1.7232 | 1.2015 | 0.7189 | 0.7257 | 5.2504 | 2.5646
Std Dev | 0.5846 | 0.4430 | 0.5479 | 0.4516 | 0.4139 | 0.3119 | 0.8182 | 0.5849
Max Drawdown | 0.4528 | 0.3405 | 0.4087 | 0.3579 | 0.3830 | 0.3533 | 0.5187 | 0.4320
Sharpe Ratio | 0.5166 | 1.0463 | 3.1453 | 2.6604 | 1.7368 | 2.3268 | 6.4174 | 4.3850
Beta | 1.1594 | 1.5259 | 1.3258 | 1.2473 | 1.2584 | 1.0399 | −0.2894 | −0.3315
Sortino Ratio | 0.8020 | 1.6079 | 5.1512 | 4.0696 | 2.5592 | 3.7678 | 8.5639 | 5.4795
Autocorrelation | 0.1036 | 0.1036 | −0.1152 | −0.1207 | −0.0417 | 0.0341 | −0.0523 | −0.1131
Rolling Returns | 0.0009 | 0.0014 | 0.0039 | 0.0031 | 0.0022 | 0.0022 | 0.0077 | 0.0053
Mean Wealth | 1.2321 | 0.9923 | 1.3306 | 1.2706 | 1.0768 | 1.0866 | 2.1383 | 1.7254
 | FTSE * | FTSE * | FCHI * | FCHI * | GDAXI * | GDAXI * | RTS * | RTS *
Metric | TSPO | DR-TSPO | TSPO | DR-TSPO | TSPO | DR-TSPO | TSPO | DR-TSPO
Annualized Return | 0.1632 | 0.1641 | 0.1634 | 0.1657 | 1.1646 | 1.1657 | −0.1207 | −0.0905
Std Dev | 0.3459 | 0.3444 | 0.3466 | 0.3460 | 0.4586 | 0.4579 | 0.3759 | 0.3101
Max Drawdown | 0.3794 | 0.3792 | 0.3802 | 0.3802 | 0.2496 | 0.2495 | 0.5532 | 0.4093
Sharpe Ratio | 0.4719 | 0.4739 | 0.4715 | 0.4775 | 2.5395 | 2.5401 | −0.3210 | −0.2917
Beta | 1.0025 | 1.0035 | 1.0356 | 1.0363 | 0.3834 | 0.3828 | 0.6713 | 0.4427
Sortino Ratio | 0.5748 | 0.5770 | 0.5741 | 0.5814 | 3.3276 | 3.3282 | −0.3572 | −0.3266
Autocorrelation | 0.0219 | 0.0217 | 0.0223 | 0.0223 | −0.0538 | −0.0538 | 0.1902 | 0.0411
Rolling Returns | 0.0006 | 0.0007 | 0.0006 | 0.0007 | 0.0032 | 0.0032 | −0.0005 | −0.0004
Mean Wealth | 0.9154 | 0.9159 | 0.9152 | 0.9163 | 1.5821 | 1.5828 | 0.7768 | 0.8857
"*" indicates a market where DR-TSPO is performing better, and "—" indicates a better-performing indicator.
