Article

Data-Driven Optimal Treatment Combination Regimes for Multiple Stressors Controlling for Multiple Adverse Effects

1 Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, VA 23284, USA
2 Liberal Arts and Sciences, Virginia Commonwealth University School of the Arts in Qatar, Doha 8095, Qatar
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3542; https://doi.org/10.3390/math13213542
Submission received: 15 September 2025 / Revised: 27 October 2025 / Accepted: 30 October 2025 / Published: 4 November 2025
(This article belongs to the Section D: Statistics and Operational Research)

Abstract

Combination drug treatment plays a central role in addressing complex diseases by enhancing the therapeutic benefit while mitigating adverse effects. However, determining optimal dose levels remains challenging due to additive drug effects, competing safety constraints, and the scarcity of reliable data in clinical and experimental settings. This paper develops a data-driven robust optimization framework for combination dose selection under uncertainty. The proposed approach integrates posterior sampling via Markov Chain Monte Carlo with convex hull-based and mean-based filtration methods to generate, evaluate, and refine candidate optimal solutions. By embedding uncertainty quantification into the optimization process, the framework systematically balances therapeutic efficacy against the risk of adverse effects, yielding risk-averse yet effective dose strategies. Numerical experiments using exponential dose–response models and the ED50 criterion demonstrate that convex hull-based methods consistently produce feasible solutions, while mean-based approaches are prone to infeasibility except in limited cases. Among hull methods, balance-oriented filtration (BOF) achieves the best balance between performance and conservativeness, closely approximating the benchmark solution under moderate levels of uncertainty for models with additive drug effects. These findings highlight the advantages of robust optimization for dose selection in settings where data are limited, variability is high, and risk management is essential.

1. Introduction

Combination drug treatment has become a cornerstone in modern medicine, particularly in the management of complex and life-threatening diseases. Conditions such as AIDS [1], cancer [2], epilepsy [3], and Alzheimer’s disease [4] are often refractory to monotherapy, making combination strategies essential. By acting on multiple biological pathways simultaneously, drug combinations can achieve greater therapeutic efficacy while limiting adverse events [5]. Notably, synergistic drug interactions allow clinicians to maintain or improve treatment outcomes at lower individual doses, thereby reducing toxicity and side effects more effectively than dose escalation of a single agent [6]. The concept of synergy in pharmacology has long been recognized [7], and computational models for assessing drug–drug interactions have expanded rapidly in recent decades [8,9]. Since the approval of the first combination therapy by the U.S. Food and Drug Administration in the 1940s, such treatments have become increasingly prevalent in both research and clinical practice.
A central challenge in this field is the optimization of drug dosing. Despite the clinical promise of combination therapies, determining the appropriate dose levels remains nontrivial due to nonlinear interactions between drugs and the competing objectives of maximizing benefits while minimizing harm. Numerous methodologies have been proposed to address this challenge. For example, ref. [10] developed a combinatorial screening method that identifies effective dose pairs without requiring mechanistic insight into disease biology. Ref. [11] employed pharmacokinetic/pharmacodynamic (PK/PD) modeling to better quantify and balance risk–benefit trade-offs. Ref. [12] argued for modeling dose–response curves directly rather than relying on traditional ANOVA and multiple testing approaches, thereby capturing more nuanced treatment effects. Ref. [13], focusing on clinical development, emphasized optimizing tolerability while preserving therapeutic benefits across different stages of treatment evaluation. Additional approaches have included stochastic search and optimization algorithms for drug synergy prediction [14], network-based models of drug interactions [15], and machine learning-driven frameworks for combinatorial therapy discovery [16]. Collectively, these studies demonstrate the breadth of tools available for dose optimization, ranging from purely empirical methods to mechanism-based and computationally intensive modeling frameworks.
While these contributions have advanced the field, they share a common limitation: most approaches either rely on large experimental datasets, which are often costly and impractical to obtain, or they prioritize average treatment effects without explicitly accounting for risk under uncertainty. As highlighted in the literature on adaptive clinical trial design [17,18], failing to properly incorporate uncertainty into dose recommendations can result in unsafe or suboptimal treatment regimens. Moreover, risk–benefit optimization in drug development has typically been approached through expected utility maximization [19] or PK/PD-based safety margins [20], but these methods are not always directly suited for small-sample, data-limited settings. Consequently, existing dose optimization methods may yield recommendations that are either overly aggressive, exposing patients to elevated risks of adverse outcomes, or excessively conservative, thereby reducing therapeutic benefits. In contrast to existing work, this paper develops a data-driven dose optimization framework that explicitly integrates risk control into the optimization process. Our approach seeks to maximize therapeutic efficacy while ensuring that the probability of adverse effects does not exceed a pre-specified threshold. By formulating the problem as a constrained optimization task, the method balances reward and risk in a principled manner, making it suitable for applications where both efficacy and safety are critical. In doing so, this study contributes a novel perspective to dose optimization in combination drug treatment—bridging data-driven analysis with risk-averse decision-making under uncertainty.
In the proposed dose optimization framework, the objective is to maximize clinical benefit, represented as a linear function of drug doses [21]. As anticipated, the therapeutic benefit increases monotonically as doses rise. However, feasible dosing is constrained by safety considerations, since excessive doses can trigger severe or even lethal adverse effects [20]. Unlike clinical benefit, which progresses in an approximately linear manner, adverse effects typically escalate nonlinearly, often deteriorating suddenly once doses exceed critical thresholds [7,8]. To account for this behavior, adverse effects are modeled as nonlinear functions of linear combinations of drug doses, and constraints are imposed to ensure that all such effects remain below pre-specified safety levels [22]. In principle, the optimization problem could be solved if the exact functional forms and associated parameters, such as the coefficients in both the objective and the constraints, were known. In practice, however, these parameters are unknown and must be inferred from patient responses obtained through experiments or clinical trials. Acquiring sufficient clinical data is often infeasible: trials are costly, time-intensive, and constrained by limited resources, typically resulting in small datasets [17,18]. Moreover, the nonlinear structure of the constraints necessitates iterative numerical estimation procedures even for relatively simple parameter estimation problems [23,24]. When data availability is limited, such point estimates exhibit high variability and limited accuracy, thereby undermining the reliability of dose recommendations. To address these challenges, this study develops a data-driven robust optimization approach for dose selection. Rather than substituting uncertain point estimates directly into the optimization problem, the proposed framework explicitly incorporates estimation uncertainty into the decision-making process. By balancing clinical benefit with the risk of constraint violation, the robust approach mitigates the shortcomings of small-sample estimation and enhances the reliability of treatment recommendations in combination drug therapy [25,26,27].
The collection of large-scale datasets is often infeasible in clinical and experimental settings due to constraints on cost, time, and patient availability. As a result, only limited observational data are typically available for parameter estimation. To address this challenge, we adopt a Bayesian inference framework, which allows the systematic integration of prior knowledge into the modeling process [28]. Prior distributions for each model parameter are specified based on available biological and experimental knowledge. Under conditions of data scarcity, such prior information plays a crucial role by stabilizing estimation procedures and improving the robustness of the resulting model inferences [29,30]. To approximate the posterior distributions of the model parameters, we employ the Markov Chain Monte Carlo (MCMC) method, a widely used algorithm for sampling from high-dimensional distributions [31]. As additional data are observed, the posterior distribution is obtained by updating the prior with the likelihood of the new observations, thereby incorporating information in a principled and coherent manner. In this study, thousands of MCMC samples were generated for the model parameters, effectively enlarging the sample space and enhancing inference stability. Unlike methods that rely exclusively on point estimates, MCMC provides a distributional characterization of uncertainty across a range of plausible parameter values, together with their probabilities [32]. As the number of samples increases, the approximation converges toward the true posterior distribution, thereby strengthening the reliability of all potential candidate solutions [33]. This probabilistic characterization enables a more realistic assessment of uncertainty in both inference and prediction, particularly in the presence of small datasets. Similar approaches have been applied in related contexts. For example, ref. [34] used MCMC-based inference to estimate benchmark dose-tolerable regions under multiple stressors, demonstrating the value of Bayesian sampling in toxicological risk assessment. Likewise, Bayesian hierarchical models combined with MCMC have been successfully used in clinical dose–response modeling, offering robust inference in small-sample designs [18,35]. In our framework, each parameter set generated via MCMC is embedded within a linear programming (LP) formulation of the dose optimization problem. Samples that yield feasible and solvable LP instances are retained for further analysis, while infeasible samples are systematically discarded. This filtering mechanism ensures that only parameter configurations consistent with structural and mathematical constraints are incorporated into the optimization analysis, thereby preserving both the validity and interpretability of the results. By combining Bayesian inference, MCMC sampling, and robust optimization, our methodology offers a principled and computationally tractable approach for dose optimization under uncertainty.
Our proposed framework is both generic and computationally efficient, making it applicable across a wide range of problem domains without reliance on restrictive assumptions. Within this framework, random values for the model parameters are generated from their current estimates and prior distributions. This sampling-based design directly addresses dose optimization problems in the presence of real-world challenges such as uncertainty, data variability, and measurement noise. By embedding uncertainty quantification into the optimization process, the framework improves robustness and provides a principled basis for identifying risk-averse dosage strategies [36,37]. The central objective is to estimate tolerable dose levels, denoted as $X^*$, which achieve the maximum permissible reduction in dosage while preserving therapeutic efficacy and maintaining normal physiological function. To this end, we develop a family of robust optimization methods in which candidate solutions are first generated and then systematically filtered using algorithms tailored to each method. From these refined sets, a single optimal solution is selected. The resulting solutions are compared across methods, enabling a comprehensive evaluation of their relative performance. This comparative analysis yields insights into the conditions under which each method is most effective, thereby informing decision-making in dose selection and risk management [27,38].
The remainder of this paper is organized as follows. Section 2 presents the mathematical model, consisting of a linear objective function subject to nonlinear constraints expressed as exponential functions with additive drug effects. Section 3 describes the methodology for parameter sampling and details the algorithms employed in the proposed optimization approaches. Section 4 reports numerical experiments that demonstrate the performance of the methods and provides a comparative analysis of their effectiveness under varying conditions. Section 5 concludes by summarizing the main contributions and outlining directions for future research.

2. Background

Dose Optimization

In dose optimization, our goal is to determine the optimal dose combination of multiple stressors (e.g., drugs) such that the desired therapeutic effect is maximized while the adverse effects are controlled under acceptable tolerance levels. Let $X = (x_1, x_2, \ldots, x_K)^\top \in \mathbb{R}_+^K$ be the dose combination of $K$ stressors, where $x_k$ is the dose of the $k$th stressor for $k = 1, \ldots, K$. While the desired therapeutic effect of dose combination $X$ is typically modeled as a linear combination of all $x_k$ [21], there may be several types of adverse effects to consider, each arising from a particular interaction mechanism between the stressors. We therefore consider the following dose optimization problem:
$$\max_{X \in \mathbb{R}_+^K} Z = \alpha^\top X \quad \text{s.t.} \quad f(\beta X) := \begin{pmatrix} f_1(\beta_1^\top X) \\ f_2(\beta_2^\top X) \\ \vdots \\ f_N(\beta_N^\top X) \end{pmatrix} \geq \begin{pmatrix} \tau_1 \\ \tau_2 \\ \vdots \\ \tau_N \end{pmatrix}, \tag{1}$$
where $\alpha = (\alpha_1, \ldots, \alpha_K)^\top$ is the vector of coefficients indicating the therapeutic effect of each unit of dose, $f = (f_1, \ldots, f_N): \mathbb{R}_+^K \to \mathbb{R}_+^N$ represents $N$ different types of adverse responses and is known but nonlinear, $\beta = (\beta_1, \ldots, \beta_N)$ with $\beta_i \in \mathbb{R}^K$ is the matrix of regression coefficients of $X$ in the $i$th adverse effect, and $\tau_1, \tau_2, \ldots, \tau_N \in [0,1]$ are known and represent the acceptable tolerance levels for the adverse effects. Note that the level at which an adverse response is acceptable is specified by a clinician who understands the impact of that response on the patient. For notational convenience, denote $Y = (Y_1, \ldots, Y_N) = f(\beta X)$, i.e., $Y_j = f_j(\beta_j^\top X)$ for all $j = 1, \ldots, N$. Suppose the optimal solution of (1) exists and denote it by $X^*$. Note that (1) is generally a nonlinear program (NLP) due to the presence of $f$. Nonetheless, since the adverse response typically intensifies as $\beta_i^\top X$ increases [7], for simplicity we assume that each $f_i$ is strictly monotonically decreasing and thus invertible: the higher the dose given, the lower the response $Y_j$, which we require not to fall below the predefined adverse-effect level $\tau_j$. Then, by applying the inverse $f_j^{-1}$ to the $j$th constraint for all $j = 1, \ldots, N$, we can convert (1) into a linear program (LP):
$$\max_{X \in \mathbb{R}_+^K} Z = \alpha^\top X \quad \text{s.t.} \quad \begin{pmatrix} \beta_1^\top X \\ \beta_2^\top X \\ \vdots \\ \beta_N^\top X \end{pmatrix} \leq \begin{pmatrix} f_1^{-1}(\tau_1) \\ f_2^{-1}(\tau_2) \\ \vdots \\ f_N^{-1}(\tau_N) \end{pmatrix}. \tag{2}$$
However, even though (2) is an LP, we cannot solve it directly because the parameters $\alpha$ and $\beta$ are unknown. That said, we can collect noisy observations of the therapeutic effect and the adverse responses when we apply dose combination treatments. Denote by $\tilde{X} = \{\tilde{X}^{(1)}, \ldots, \tilde{X}^{(\tilde{M})}\}$ the set of $\tilde{M}$ applied dose combination treatments, and by $\tilde{Y} = \{\tilde{Y}^{(1)}, \ldots, \tilde{Y}^{(\tilde{M})}\}$ and $\tilde{Z} = \{\tilde{Z}^{(1)}, \ldots, \tilde{Z}^{(\tilde{M})}\}$ the collected noisy observations of the corresponding adverse response and therapeutic effect, respectively. Specifically, for $i = 1, \ldots, \tilde{M}$,
$$\tilde{Y}^{(i)} = f\big(\beta \tilde{X}^{(i)}\big) + \epsilon^{(i)}, \qquad \tilde{Z}^{(i)} = \alpha^\top \tilde{X}^{(i)} + \eta^{(i)},$$
where $\epsilon^{(i)} = (\epsilon_1^{(i)}, \ldots, \epsilon_N^{(i)})$ and $\eta^{(i)}$ are i.i.d. Gaussian noises with variances $\sigma_\epsilon^2$ and $\sigma_\eta^2$, respectively. One may of course seek to construct point estimators for $\alpha$ and $\beta$ from $(\tilde{X}, \tilde{Y}, \tilde{Z})$ and then solve (2) with those point estimators plugged in. Since $f$ is nonlinear, numerical methods are typically needed to obtain such estimators, for example maximum likelihood estimation via the Newton–Raphson method. However, these point estimators may not be very accurate due to the uncertainty in $(\tilde{X}, \tilde{Y}, \tilde{Z})$, especially when the available data size $\tilde{M}$ is small because of practical limits on resources and time. Inaccurate point estimators in turn lead to poor decision-making when estimating $X^*$; for example, (2) may not even be solvable once $\alpha$ and $\beta$ are replaced with inaccurate estimates. Therefore, to address the uncertainty in $(\tilde{X}, \tilde{Y}, \tilde{Z})$, we establish a robust approach for estimating $X^*$: we first use posterior sampling to enlarge the set of plausible parameter values extensively, and then use the generated samples to evaluate the robustness of promising candidate points. To proceed, we adopt Markov Chain Monte Carlo (MCMC) [28,31] in Section 3.1; MCMC methods generate random samples via a Markov chain that converges to the desired posterior distribution of the parameters of interest.
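To make the reduction from (1) to (2) concrete, the following minimal sketch solves the LP with SciPy for a hypothetical instance. The decreasing exponential responses $f_j(u) = e^{-\lambda_j u}$ and all numerical values are illustrative assumptions, not quantities estimated from data.

```python
# A minimal sketch of solving the LP in (2), assuming illustrative decreasing
# exponential responses f_j(u) = exp(-lam_j * u), so f_j^{-1}(tau) = -log(tau)/lam_j.
# All parameter values below are hypothetical.
import numpy as np
from scipy.optimize import linprog

K, N = 2, 3
alpha = np.array([4.0, 3.0])                 # therapeutic effect per unit dose
beta = np.array([[0.10, 0.12],               # row j: coefficients beta_j of the
                 [0.20, 0.05],               # j-th adverse-effect constraint
                 [0.04, 0.18]])
lam = np.array([1.0, 0.8, 1.2])              # decay rates of the f_j
tau = np.array([0.5, 0.5, 0.5])              # clinician-specified tolerances

rhs = -np.log(tau) / lam                     # f_j^{-1}(tau_j)

# Maximize alpha' X subject to beta_j' X <= f_j^{-1}(tau_j) and X >= 0;
# linprog minimizes, so the objective vector is negated.
res = linprog(c=-alpha, A_ub=beta, b_ub=rhs, bounds=[(0, None)] * K)
print("X* =", res.x, " Z* =", -res.fun)
```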

3. Methodology

In this section, we first describe the posterior sampling step and then introduce methodologies to filter the resulting candidate optimal solutions and select the unique optimal risk-averse solution estimating $X^*$ under each method. We propose three such filtration methods, which we discuss in detail in the subsections below.

3.1. Posterior Sampling with MCMC

Note that in the collected dataset $(\tilde{Y}, \tilde{Z})$, for all $i = 1, \ldots, \tilde{M}$, the noises $\epsilon^{(i)} = (\epsilon_1^{(i)}, \ldots, \epsilon_N^{(i)})$ and $\eta^{(i)}$ are i.i.d. Gaussian. Thus, to perform posterior sampling using MCMC, we assume the following prior distributions for the unknown parameters $(\alpha, \beta)$: for all $i = 1, \ldots, N$ and $j = 1, \ldots, K$,
$$\alpha_i \mid \sigma_\eta \sim N(\mu_i, \kappa_i \sigma_\eta^2), \quad \sigma_\eta^2 \sim \mathrm{Gamma}(\nu, \zeta), \quad \beta_{ij} \mid \sigma_\epsilon \sim \log N(\mu_i, \kappa_i \sigma_\epsilon^2), \quad \sigma_\epsilon^2 \sim \mathrm{Gamma}(\gamma, \rho), \tag{3}$$
where $\beta_{ij}$ is the $j$th component of $\beta_i$, $\log N(\mu, \sigma^2)$ denotes the log-normal distribution with parameters $\mu$ and $\sigma^2$, and $\mathrm{Gamma}(\gamma, \rho)$ denotes the Gamma distribution with shape $\gamma$ and rate $\rho$ [28,31]. Note that for $\beta_i$ the prior is chosen to be log-normal to ensure the monotonically decreasing nature of $f_j$, while the prior for $\alpha$ is chosen to be normal because we a priori have no information about the nature of each compound's effect on $Z$; this allows both positive and negative effects on the objective function. Recall that $\epsilon^{(i)} = (\epsilon_1^{(i)}, \ldots, \epsilon_N^{(i)})$ and $\eta^{(i)}$ are i.i.d. Gaussian noises; thus, the joint density of $\tilde{Z} = \{\tilde{Z}^{(1)}, \ldots, \tilde{Z}^{(\tilde{M})}\}$ given $\alpha$ and $\sigma_\eta$ is
$$d(\tilde{z} \mid \alpha, \sigma_\eta) = \prod_{i=1}^{\tilde{M}} \frac{1}{\sigma_\eta}\, \phi\!\left(\frac{\tilde{z}^{(i)} - \alpha^\top \tilde{X}^{(i)}}{\sigma_\eta}\right), \tag{4}$$
where $\phi$ represents the standard normal PDF. Similarly, the joint density of $\tilde{Y} = \{\tilde{Y}^{(1)}, \ldots, \tilde{Y}^{(\tilde{M})}\}$ given $\beta$ and $\sigma_\epsilon$ is
$$d(\tilde{y} \mid \beta, \sigma_\epsilon) = \prod_{i=1}^{\tilde{M}} \prod_{j=1}^{N} \frac{1}{\sigma_\epsilon}\, \phi\!\left(\frac{\tilde{y}_j^{(i)} - f_j\big(\beta_j^\top \tilde{X}^{(i)}\big)}{\sigma_\epsilon}\right). \tag{5}$$
MCMC can be performed using various packages such as WinBUGS [39], JAGS [40], OpenBUGS [41], NIMBLE, and Stan [42]. In this paper, we use JAGS to generate posterior samples based on the Bayesian model given in (3)–(5). To obtain the desired samples from the posterior distribution, we discard the initial samples and draw $\bar{M}$ samples once the Markov chain has reached its equilibrium, which is verified by checking the trace plots of the drawn parameter values. Consequently, by choosing $\bar{M} \gg \tilde{M}$, we can draw as many posterior samples for $(\alpha, \beta)$ as needed based on the distributional information carried in $(\tilde{Y}, \tilde{Z})$. Based on such a large set of posterior samples for the unknown parameters, we can construct a measure of the risk incurred by uncertainty and design a robust approach for estimating $X^*$, as shown in Section 3.2, Section 3.3 and Section 3.4.
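For illustration, the sketch below samples the posterior of $\alpha$ under the likelihood (4) and the normal prior in (3) with a minimal random-walk Metropolis sampler, written as a schematic stand-in for the JAGS machinery actually used. Fixing $\sigma_\eta$, the synthetic data, and all tuning settings are simplifying assumptions; the paper's implementation also samples the variance parameters.

```python
# A schematic random-walk Metropolis sampler for alpha under likelihood (4) and
# the normal prior in (3). sigma_eta is held fixed for brevity; all numerical
# settings are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
K, M_tilde = 2, 121
alpha_true = np.array([6.0, 2.0])
X = rng.uniform(0, 5, size=(M_tilde, K))      # applied dose combinations
sigma_eta = 0.5
Z = X @ alpha_true + rng.normal(0, sigma_eta, M_tilde)

mu0, kappa0 = 0.0, 100.0                      # weakly informative N(mu0, kappa0*sigma^2)

def log_post(a):
    # log-likelihood from (4) plus log-prior from (3), up to constants
    ll = -0.5 * np.sum((Z - X @ a) ** 2) / sigma_eta**2
    lp = -0.5 * np.sum((a - mu0) ** 2) / (kappa0 * sigma_eta**2)
    return ll + lp

n_iter, step = 20000, 0.05
draws = np.empty((n_iter, K))
a = np.zeros(K)
for t in range(n_iter):
    prop = a + step * rng.normal(size=K)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop                              # accept with Metropolis probability
    draws[t] = a

burn = n_iter // 2                            # discard burn-in, keep M_bar draws
print("posterior mean of alpha:", draws[burn:].mean(axis=0))
```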

3.2. PEOF: Positive Effect-Oriented Filtration

We generate $\bar{M}$ samples of the unknown true parameters $(\alpha, \beta)$ using MCMC and replace $(\alpha, \beta)$ in (2) with each of these samples in turn. If the LP resulting from plugging in an MCMC sample is solvable, we keep that sample; otherwise, we discard it. Denote by $(\hat{\alpha}_m, \hat{\beta}_m)$ the $m$th MCMC sample that leads to a solvable LP, where $m = 1, \ldots, M$ and $M \leq \bar{M}$. When $(\alpha, \beta)$ is replaced by $(\hat{\alpha}_m, \hat{\beta}_m)$ in (2), denote by $\Psi_m$ and $\hat{x}_m$ the feasible set and an optimal solution of the resulting LP, respectively. We establish a robust approach to filter the candidate optimal solutions $\hat{x}_1, \ldots, \hat{x}_M$ and select a subset of them for further analysis.
  • Step 1: For each $q = 1, \ldots, M$, let
$$T_q \triangleq \sum_{m=1}^{M} \mathbb{1}\{\hat{x}_m \in \Psi_q\},$$
where $\mathbb{1}\{\hat{x}_m \in \Psi_q\}$ is a binary indicator equal to 1 if $\hat{x}_m \in \Psi_q$ and 0 otherwise. For each $q$, $T_q$ indicates how many candidate optimal solutions the feasible set $\Psi_q$ contains. Intuitively, the more candidate optimal solutions a feasible set contains, the more tolerable it is considered.
  • Step 2: For a pre-determined threshold $\delta$, where $0 < \delta < 1$, find
$$A_\delta \triangleq \left\{ m : \frac{1}{M} \sum_{q=1}^{M} \mathbb{1}\{T_m \geq T_q\} \geq 1 - \delta \right\}.$$
Intuitively, $A_\delta$ is the index set whose corresponding feasible sets are more tolerable than at least $100(1-\delta)\%$ of all $\Psi_q$, $q = 1, \ldots, M$. Then, by controlling $\delta$, $U_\delta = \{\hat{x}_m : m \in A_\delta\}$ filters the candidate optimal solutions and keeps only those associated with the $100\delta\%$ most tolerable feasible sets. The value of $\delta$ depends on how conservative the researcher wishes to be. A value near 1 includes almost all points in the convex hull and thus produces a more conservative hull and ultimately a more conservative dose combination (closer to 0), whereas a value near 0 produces more liberal solutions (dose combinations farther from zero). In pilot studies, a value near 0, say 0.05, may be more appropriate to ensure the sampling procedure in the next stage covers a large range of dosages. For a confirmatory study, a value near 1, say 0.95, produces a very conservative dosage combination that would be recommended to physicians as appropriate.
  • Step 3: Construct the convex hull $\mathrm{conv}(U_\delta)$ of $U_\delta$. Find
$$\hat{x}_{T,\delta}^{*} = \arg\min_{x \in \mathrm{conv}(U_\delta)} \lVert x - O \rVert_2,$$
where $O$ is the origin in $\mathbb{R}^K$ and $\lVert x - O \rVert_2$ is the Euclidean distance between $x$ and $O$. In other words, $\hat{x}_{T,\delta}^{*}$ attains the minimum distance between $O$ and $\mathrm{conv}(U_\delta)$, and is thus the nearest point to $O$ in $\mathrm{conv}(U_\delta)$. Therefore, $\hat{x}_{T,\delta}^{*}$ is the most risk-averse dosage treatment in $\mathrm{conv}(U_\delta)$.
This convex hull approach leverages ideas from robust optimization [26,43], where worst-case or risk-averse solutions are identified by considering uncertainty sets and their geometric structures. The use of convex hulls as feasible aggregation mechanisms is well established in computational geometry and optimization [44,45], making them particularly suitable for filtering candidate solutions under uncertainty.
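A minimal sketch of the three PEOF steps follows. The candidate optima and sampled constraint sets are synthetic placeholders for the MCMC output described above, and the hull projection in Step 3 is posed as a small quadratic program over convex-combination weights.

```python
# A minimal PEOF sketch, assuming the MCMC step has already produced, for each
# retained sample m, a candidate optimum x_hat[m] and constraint data (B[m], r[m])
# defining the feasible set Psi_m = {x >= 0 : B[m] @ x <= r[m]}. The synthetic
# inputs below are placeholders for those posterior quantities.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
M, K, N = 200, 2, 3
B = 0.15 + 0.02 * rng.normal(size=(M, N, K))   # sampled constraint matrices
r = 3.0 + 0.3 * rng.normal(size=(M, N))        # sampled right-hand sides
x_hat = rng.uniform(5, 16, size=(M, K))        # stand-in candidate optima

# Step 1: tolerability T_q = number of candidates contained in Psi_q.
feas = np.array([[np.all(B[q] @ x_hat[m] <= r[q]) for q in range(M)]
                 for m in range(M)])           # feas[m, q] = 1{x_hat_m in Psi_q}
T = feas.sum(axis=0)

# Step 2: keep candidates attached to the 100*delta% most tolerable feasible sets.
delta = 0.5
A_delta = [m for m in range(M) if np.mean(T[m] >= T) >= 1 - delta]
U_delta = x_hat[A_delta]

# Step 3: nearest point of conv(U_delta) to the origin, as a small QP over
# convex-combination weights w >= 0 with sum(w) = 1.
def nearest_to_origin(points):
    n = len(points)
    res = minimize(lambda w: np.sum((w @ points) ** 2),
                   np.full(n, 1.0 / n),
                   bounds=[(0, 1)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
                   method="SLSQP")
    return res.x @ points

x_T = nearest_to_origin(U_delta)
print("PEOF risk-averse solution:", x_T)
```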

3.3. NEOF: Negative Effect-Oriented Filtration

Now we seek to measure the robustness of each candidate optimal solution $\hat{x}_q$ by evaluating its feasibility over all $\Psi_m$, $m = 1, \ldots, M$. Specifically, for each $q = 1, \ldots, M$, find
$$H_q \triangleq \sum_{m=1}^{M} \mathbb{1}\{\hat{x}_q \in \Psi_m\},$$
which indicates the number of feasible sets that $\hat{x}_q$ lies in. The larger $H_q$ is, the more robust $\hat{x}_q$ is considered. We then filter the candidate optimal solutions by finding
$$B_\delta \triangleq \left\{ m : \frac{1}{M} \sum_{q=1}^{M} \mathbb{1}\{H_m \geq H_q\} \geq 1 - \delta \right\}.$$
Intuitively, $B_\delta$ is the index set whose corresponding candidate optimal solutions are more robust than at least $100(1-\delta)\%$ of all $\hat{x}_q$, $q = 1, \ldots, M$. Note that the filtration is similar to Step 2 in Section 3.2 but is based on the robustness of the candidate optimal solutions themselves. By controlling $\delta$, $V_\delta = \{\hat{x}_m : m \in B_\delta\}$ filters the candidate optimal solutions and keeps only the $100\delta\%$ most robust ones. Then, as in Section 3.2, we find the most risk-averse dosage treatment by
$$\hat{x}_{H,\delta}^{*} = \arg\min_{x \in \mathrm{conv}(V_\delta)} \lVert x - O \rVert_2.$$
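Continuing the synthetic PEOF sketch above, the NEOF filtration only changes which margin of the feasibility matrix is ranked; the projection step is unchanged.

```python
# NEOF in the same running example: H counts how many sampled feasible sets
# contain each candidate; the hull projection reuses nearest_to_origin above.
H = feas.sum(axis=1)                           # H[q] = #{m : x_hat_q in Psi_m}
B_delta = [q for q in range(M) if np.mean(H[q] >= H) >= 1 - delta]
V_delta = x_hat[B_delta]
x_H = nearest_to_origin(V_delta)
print("NEOF risk-averse solution:", x_H)
```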

3.4. BOF: Balance-Oriented Filtration

Section 3.2 and Section 3.3 have established two different ways of filtering the candidate optimal solutions. On the one hand, PEOF filters the candidate optimal solutions based on the tolerability of their associated feasible sets; since the more tolerable a feasible set is, the larger it is, $U_\delta$ selects candidates that tend to return better objective values but are likely to be riskier. On the other hand, NEOF filters the candidate optimal solutions based on their robustness, so $V_\delta$ selects more conservative candidates that tend to be less risky but are more likely to return only subpar objective values. We therefore seek a set of candidate optimal solutions that balances risk and conservativeness. Specifically, we take $W_\delta = U_\delta \cap V_\delta$ and find the most risk-averse dosage treatment by
$$\hat{x}_{W,\delta}^{*} = \arg\min_{x \in \mathrm{conv}(W_\delta)} \lVert x - O \rVert_2.$$
This approach integrates the strengths of both tolerability- and robustness-oriented criteria, yielding solutions that are neither overly risky nor excessively conservative. Conceptually, BOF reflects the principle of balancing risk and conservativeness, which is central to risk measures such as Conditional Value-at-Risk (CVaR) [46] and compromise programming in multi-objective optimization [47]. By combining filtration criteria before constructing the convex hull, BOF provides a structured mechanism for identifying candidate solutions that trade off between objective performance and reliability.
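In the same running example, BOF reduces to an index-set intersection followed by the same projection:

```python
# BOF in the running example: intersect the PEOF and NEOF index sets, then
# project the origin onto the hull of the surviving candidates.
W_idx = sorted(set(A_delta) & set(B_delta))
W_delta = x_hat[W_idx]
x_W = nearest_to_origin(W_delta)
print("BOF risk-averse solution:", x_W)
```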

3.5. Mean-Based Estimation

As a complementary procedure to the convex hull-based filtering methods introduced in Section 3.2, Section 3.3 and Section 3.4, we also apply a mean-based estimation strategy. The objective of this method is to capture the central tendency of the candidate optimal solutions generated under posterior sampling, thereby providing a statistical benchmark for comparison. While convex hull-based methods emphasize risk aversion by selecting solutions that minimize the distance to the origin, mean-based estimation instead emphasizes average performance across all candidate solutions. This approach has been frequently employed in stochastic optimization and adaptive clinical trial designs, where averages are used to approximate the expected utility or treatment effect [17,18,38].
Formally, let $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_M$ denote the set of candidate optimal solutions obtained by embedding posterior samples into the optimization model. The mean-based estimator is defined as
$$\breve{x} = \frac{1}{M} \sum_{m=1}^{M} \hat{x}_m,$$
with corresponding objective value
$$\breve{Z} = \alpha^\top \breve{x}.$$
This procedure smooths variability across candidate solutions, producing an “expected” dose vector that highlights the central location of the solution space. However, because the mean-based solution does not necessarily lie within the feasible region defined by nonlinear adverse-effect constraints, infeasibility is common. Such limitations are well recognized in robust optimization, where averaging across uncertain solutions often ignores worst-case scenarios [25,26].
To allow comparability with the hull-based solutions, we further extend mean-based estimation to the filtered subsets defined by PEOF, NEOF, and BOF. Specifically, letting $U_\delta$, $V_\delta$, and $W_\delta$ denote the subsets of candidate solutions identified by these filters, the corresponding optimal risk-averse mean-based estimators $\breve{x}_{T,\delta}^{*}$, $\breve{x}_{H,\delta}^{*}$, and $\breve{x}_{W,\delta}^{*}$ are defined as
$$\breve{x}_{T,\delta}^{*} = \frac{1}{|U_\delta|} \sum_{\hat{x}_m \in U_\delta} \hat{x}_m, \qquad \breve{x}_{H,\delta}^{*} = \frac{1}{|V_\delta|} \sum_{\hat{x}_m \in V_\delta} \hat{x}_m, \qquad \breve{x}_{W,\delta}^{*} = \frac{1}{|W_\delta|} \sum_{\hat{x}_m \in W_\delta} \hat{x}_m,$$
with corresponding objective values $\breve{Z}_{T,\delta}^{*}$, $\breve{Z}_{H,\delta}^{*}$, and $\breve{Z}_{W,\delta}^{*}$. For a comparative analysis, these mean-based estimators are evaluated alongside the unfiltered mean of all candidate solutions. This allows us to assess whether filtering combined with averaging improves robustness or whether infeasibility persists. As shown in Section 4, mean-based estimation often produces higher objective values but at the cost of frequent feasibility violations, underscoring the fundamental trade-off between risk aversion and statistical averaging.
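Continuing the running synthetic example, the mean-based estimators and a simple empirical feasibility check (a stand-in for the feasibility flags reported in Section 4) can be computed as follows:

```python
# Mean-based counterparts in the running example: average the candidates in each
# filtered subset and report how often the resulting dose vector satisfies the
# sampled constraint sets.
def feasible_frac(x):
    return np.mean([np.all(B[m] @ x <= r[m]) for m in range(M)])

for name, subset in [("unfiltered", x_hat), ("PEOF", U_delta),
                     ("NEOF", V_delta), ("BOF", W_delta)]:
    x_mean = subset.mean(axis=0)
    print(f"{name}: mean solution {x_mean}, feasible in "
          f"{feasible_frac(x_mean):.0%} of sampled sets")
```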

4. Numerical Experiments

In this section, we evaluate the unique optimal risk-averse solution, denoted by $X^*$, estimated under each of the proposed methods, and benchmark these solutions against the corresponding solution obtained from the true model. This comparative analysis provides a direct measure of the effectiveness of the methods in approximating the true optimal decision. Both the true model and the test models are defined in terms of the $ED_\tau$ criterion, which represents the dosage level that achieves a $100\tau\%$ reduction in the mean of the adverse response. By quantifying the proximity of each estimated $X^*$ to the $ED_\tau$ benchmark, we assess not only the accuracy of the methods but also their robustness under uncertainty. This evaluation emphasizes the inherent trade-offs between risk aversion and conservativeness in the estimation process, thereby clarifying the practical implications of the proposed optimization strategies.

4.1. True Model Formulation

The test model for our numerical experiments is based on a true model that employs an exponential function to describe the dose–response relationship and a linear objective function to evaluate the optimized outcome. The true model is formulated as follows:
$$\max_{x_1, x_2 \geq 0} Z = 6x_1 + 2x_2 \quad \text{s.t.} \quad \frac{x_1 + x_2}{7} \leq \frac{\log(\tau_1)}{\log(0.8)}, \quad \frac{x_1}{5} \leq \frac{\log(\tau_2)}{\log(0.8)}, \quad \frac{x_2}{5} \leq \frac{\log(\tau_3)}{\log(0.8)}.$$
Adopting the $ED_{50}$ criterion as the tolerance threshold for side effects, i.e., setting $\tau_1 = \tau_2 = \tau_3 = 0.5$, yields the unique optimal solution
$$x_1^* = 15.53, \qquad x_2^* = 6.21, \qquad Z^* = 105.61.$$
This benchmark solution provides the reference point for assessing the performance of the proposed methods, serving as the standard against which the estimated risk-averse solutions are evaluated. In particular, the proximity of the estimated optimal solutions produced by the proposed methods to this true benchmark provides a direct measure of their accuracy and robustness. Methods that yield solutions close to $Z^* = 105.61$ while maintaining feasibility under the side-effect tolerance constraints can be regarded as more reliable, whereas deviations from this benchmark indicate increased conservativeness or loss of efficiency. In the following section, we present a detailed comparison between the estimated risk-averse solutions and the true model outcome, focusing on both feasibility and objective performance across varying levels of uncertainty.
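As a quick sanity check, the benchmark can be reproduced analytically from the binding constraints of the true model above:

```python
# With tau = 0.5, every right-hand side equals log(0.5)/log(0.8) ~ 3.106.
# The constraint x1/5 <= c binds first (x1 carries the larger objective
# coefficient), and (x1 + x2)/7 <= c then fixes x2.
import numpy as np

c = np.log(0.5) / np.log(0.8)   # ~ 3.1063
x1 = 5 * c                      # binding: x1/5 <= c
x2 = 7 * c - x1                 # binding: (x1 + x2)/7 <= c
print(round(x1, 2), round(x2, 2), round(6 * x1 + 2 * x2, 2))  # 15.53 6.21 105.61
```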

4.2. Test Model

In the numerical experiments, we considered $K = 2$ stressors and $N = 3$ adverse responses arising from their combined effects. The available data size, $\tilde{M} = 121$, was insufficient to provide stable and reliable inference on its own. To address this limitation, we adopted a Bayesian framework and employed the MCMC method to perform posterior sampling of the unknown parameters $(\alpha_i, \beta_{ij})$. For the prior specification for the unknown parameters $\alpha_i$ and $\beta_{ij}$, we assumed independent priors of the form
$$\alpha_i \mid \sigma_\eta \sim N(\mu_i, \kappa_i \sigma_\eta^2), \quad \sigma_\eta^2 \sim \mathrm{Gamma}(\nu, \zeta), \quad \beta_{ij} \mid \sigma_\epsilon \sim \log N(\mu_i, \kappa_i \sigma_\epsilon^2), \quad \sigma_\epsilon^2 \sim \mathrm{Gamma}(\gamma, \rho),$$
where $\mu_i = 0$ and $\nu = \zeta = \gamma = \rho = 1$. This specification reflects a weakly informative prior structure: the normal priors are centered at zero, expressing no strong directional preference for either main or additive effects, while the gamma priors on the variance components impose mild regularization to ensure numerical stability without unduly constraining posterior learning. All methods described in Section 3 were implemented under this prior specification using the $ED_\tau$ criterion with $\tau = 0.5$, the dosage level that produces a $50\%$ reduction in the mean of the adverse response. Posterior samples generated through the MCMC procedure were subsequently employed to construct the candidate solutions, and the resulting outcomes were plotted as representative subsets of the feasible region. These plots illustrate the $50\%$ most tolerable solutions that satisfy the side-effect threshold while simultaneously preserving competitive objective values, and they are compared against the benchmark solution derived from the true model. This comparison enables a direct assessment of the accuracy, robustness, and practical utility of the proposed methods under limited data availability.
Figure 1 displays the various filtering approaches. Panel (a) shows 10,000 unfiltered candidate optimal solutions, obtained from the individual LP instances under low variability ($\sigma = 0.01$), with the selected solution located at $X_U^* = (12.33, 6.99)$ and an objective value of $Z_U^* = 87.97$. To evaluate performance under varying levels of uncertainty, data were generated using different values of the noise parameter $\sigma$, ranging from low to high; note that the noise enters multiplicatively on the side effects, whereas the error on the objective function is additive. Panels (b), (c), and (d) present the filtered candidate optimal solutions corresponding to the $50\%$ most tolerable feasible sets for PEOF, NEOF, and BOF, respectively. The resulting optimal solutions are $\hat{X}_P^* = (15.44, 5.39)$ for PEOF, $\hat{X}_N^* = (12.33, 6.99)$ for NEOF, and $\hat{X}_B^* = (15.44, 5.39)$ for BOF, corresponding to objective values of $\hat{Z}_P^* = 103.44$, $\hat{Z}_N^* = 87.97$, and $\hat{Z}_B^* = 103.44$, respectively.
From Figure 1, one can see the relative level of conservativeness of each filter. The cluster of solutions in the upper-right corner of Panel (b) for PEOF reflects non-conservative, high-risk decisions, whereas the cluster in the lower-left corner of Panel (c) for NEOF reflects more conservative, low-risk decisions. By contrast, the filtered solutions in Panel (d) for BOF occupy an intermediate region, representing a balance between conservative and non-conservative decision-making. In each panel, the red star marks the benchmark solution $X^* = (15.53, 6.21)$, the green circle represents the mean-based solution ($\breve{X}^*$), and the blue square denotes the most risk-averse convex hull-based solution ($\hat{X}^*$).
Figure 2 displays the filtered candidate optimal solutions across the three methods. The pink squares represent PEOF, the green triangles represent NEOF, and the yellow circles represent BOF. As shown, PEOF produces solutions that extend toward the upper-right region, reflecting more aggressive, non-conservative decisions with a higher associated risk. In contrast, NEOF generates clusters concentrated toward the lower-left region, representing conservative, low-risk solutions that sacrifice objective performance. BOF occupies an intermediate band (yellow), lying between the clusters of PEOF and NEOF, thereby demonstrating a balance between conservativeness and risk-taking. To further investigate the effect of increasing uncertainty, additional experiments were conducted with larger values of the noise parameter σ , varied from 0.01 to 0.1 in increments of 0.01 and from 0.2 to 1 in increments of 0.1 .
Figure 3, Figure 4, Figure 5 and Figure 6 illustrate the corresponding results for σ = 0.1 , and Figure 7, Figure 8, Figure 9 and Figure 10 depict the solutions when σ = 0.25 . Comparing the graphs across different values of σ reveals a clear pattern in the distribution of candidate optimal solutions.
At low uncertainty ($\sigma = 0.01$), the solution sets are relatively well spread across the feasible region, with candidate solutions clustering around points that remain close to the benchmark solution $(x_1^*, x_2^*, Z^*) = (15.53, 6.21, 105.61)$. As $\sigma$ increases to moderate levels ($\sigma = 0.1$), the dispersion of solutions decreases, and the clusters shift away from the benchmark, reflecting the growing influence of noise in constraining feasible outcomes. At higher uncertainty ($\sigma = 0.25$ and above), this effect becomes more pronounced. The feasible region collapses significantly, and a large portion of the candidate solutions drift toward the lower boundary of the space, with many points approaching the origin. This trend indicates that as the noise parameter grows, the optimization becomes increasingly dominated by conservative feasible sets, driving the solutions toward trivial or near-zero allocations.
In summary, the comparative analysis shows that increasing σ systematically reduces the size of the feasible region and pulls the candidate optimal solutions closer to the origin. This collapse underscores the importance of method selection: while PEOF and BOF preserve proximity to the benchmark under moderate noise, NEOF quickly degenerates into overly conservative solutions, and all methods ultimately converge toward near-zero outcomes as σ becomes large. Table 1, Table 2 and Table 3 report the estimated optimal solutions for the unfiltered, PEOF, NEOF, and BOF when σ = 0.01 , σ = 0.1 , and σ = 0.25 .
For each case, both the convex hull solution ($\hat{x}^*$) and the mean solution ($\breve{x}^*$) are presented, along with their corresponding objective values ($\hat{Z}^*$, $\breve{Z}^*$). Feasibility indicators ($\hat{F}$, $\breve{F}$) are provided, where Y denotes feasibility and N denotes infeasibility. These results enable a systematic comparison of solution quality, robustness, and feasibility across varying levels of the noise parameter $\sigma$. The findings indicate that convex hull-based methods generally produce solutions that remain closer to the benchmark objective value, even under increasing levels of uncertainty. By contrast, mean-based methods are substantially more prone to infeasibility, with feasible solutions arising only under relatively small $\sigma$. This divergence reflects the ability of convex hull constructions to preserve feasibility across parameter uncertainty [26,43], while mean-based aggregation often fails to capture the structural variability in the feasible region. Furthermore, Figure 11, Figure 12 and Figure 13 display the estimated optimal solutions under increasing levels of the noise parameter $\sigma$ for PEOF, NEOF, and BOF, respectively.
Each subplot corresponds to a specific σ value, ranging from σ = 0.04 to σ = 1 . For PEOF (Figure 11), the solution clusters initially align well with the benchmark at low σ , but as uncertainty grows ( σ 0.4 ) , the feasible region expands irregularly, producing scattered solutions and greater instability. This instability highlights the method’s sensitivity to uncertainty, consistent with its bias toward higher-risk feasible sets. NEOF (Figure 12) consistently yields conservative solutions that shift toward lower values of x 2 * as σ increases, with many points collapsing toward the origin at high noise levels ( σ 0.6 ) . BOF (Figure 13), in contrast, maintains a more balanced distribution across varying σ , with clusters that remain closer to the benchmark solution for moderate noise levels ( σ 0.5 ) demonstrating its ability to balance risk and conservativeness. Nevertheless, under extreme uncertainty ( σ = 1 ) , BOF also degenerates toward trivial solutions, underscoring the inherent limitations of convex hull-based approaches under high noise.
The results of our numerical experiments discussed in Section 4 are summarized in Table 4 and Table 5. Table 4 presents a comparative analysis of four convex hull-based methods, labeled unfiltered ($\hat{x}_U^*$), PEOF ($\hat{x}_P^*$), NEOF ($\hat{x}_N^*$), and BOF ($\hat{x}_B^*$); Table 5 presents their mean-based counterparts, labeled unfiltered ($\breve{x}_U^*$), PEOF ($\breve{x}_P^*$), NEOF ($\breve{x}_N^*$), and BOF ($\breve{x}_B^*$). Each convex hull-based and mean-based method generates estimates of the decision variables, denoted by $\hat{x}^*$ and $\breve{x}^*$ respectively, along with their corresponding objective values, $\hat{Z}^*$ and $\breve{Z}^*$. These estimates are evaluated across values of the parameter $\sigma$ ranging from 0.02 to 1. The parameter $\sigma$ serves as a perturbation factor that influences the feasible region and thus tests the robustness of each hull-based and mean-based method. At very small $\sigma$ values (0.01–0.05), hull-based and mean-based methods yield solutions that remain close to the true benchmark, although hull-based methods tend to underestimate the objective value, while mean-based approaches overestimate it.
When evaluated against the benchmark solution of the true model, $(x_1^*, x_2^*) = (15.53, 6.21)$ with $Z^* = 105.61$, the trade-offs among the three hull-based methods become apparent. NEOF, although feasible, produces an optimal value of $\hat{Z}_N^* = 87$ at $\sigma = 0.03$, which lies significantly below the benchmark and reflects its conservative bias toward minimizing risk. In contrast, PEOF and BOF yield objective values of $\hat{Z}_P^* = \hat{Z}_B^* = 103.60$, which are much closer to the true optimum. However, their behaviors differ: PEOF tends to cluster toward more aggressive, high-risk regions, while BOF provides a more balanced distribution of solutions, capturing both feasibility and robustness. This positioning enables BOF to approximate the benchmark more faithfully than NEOF without incurring the instability associated with overly aggressive solutions. Taken together, these findings highlight BOF as the most effective compromise between conservativeness and performance when measured against the true model.
The findings indicate that all convex hull-based methods generate feasible solutions, while the mean-based methods are classified as infeasible, with the sole exception of NEOF, which exhibited occasional feasibility. Consistent with its conservative design, the sample mean of the candidate optimal solutions returned by NEOF remained within the feasible region. These results align with Section 3: PEOF is non-conservative and higher-risk yet delivers stronger objective values, whereas NEOF is conservative and lower-risk but tends to produce subpar objectives. The mean-based methods display larger variability in objective values $\breve{Z}^*$, ranging from 55.98 to 113.36, which are uniformly classified as infeasible. These results highlight a fundamental difference between the two approaches: while hull methods ensure feasibility and, in the case of PEOF and BOF, maintain proximity to the true solution, mean methods fail to preserve feasibility despite occasionally approximating the correct magnitude of the objective function. As $\sigma$ increases, a clear divergence emerges: mean-based methods continue to produce numerically stable approximations of both the decision variables and objective values, remaining close to the true solution (albeit typically infeasible), whereas convex hull-based methods become increasingly unstable, with some collapsing to infeasible or zero solutions once $\sigma \geq 0.6$. With further increases in $\sigma$, the estimated solutions tend to converge toward the origin, so that the convex hull eventually encloses the origin, leading to trivial and uninformative results. This divergence highlights a fundamental limitation of the convex hull-based methods under high-variance conditions, where the mean-based approach demonstrates greater numerical stability.
This structured comparison allows us to identify which convex hull approximation method best aligns with the true solution under different levels of parameter perturbation. The inclusion of both the variable values and the objective function makes it possible to evaluate not only accuracy but also the stability of each approach relative to the true benchmark. Taken together, these findings suggest that convex hull-based solutions, particularly PEOF and BOF, are more robust and structurally valid, whereas mean-based methods lack the reliability required for practical application. Among the convex hull-based methods, BOF marginally outperforms PEOF, yielding slightly larger objective values once $\sigma \geq 0.3$, in line with the Section 3.4 characterization of a more favorable risk–conservatism balance. At lower noise levels, BOF also tends to exceed PEOF for $\sigma \leq 0.1$, with a notable exception at $\sigma = 0.2$. PEOF tends toward more aggressive, high-risk solutions, while NEOF consistently favors safer, low-risk but less efficient solutions. BOF, by balancing the spread of its candidate solutions, captures both feasibility and robustness, resulting in estimates that remain close to the benchmark without excessive risk. These results underscore the advantage of BOF in approximating the true optimum, as it effectively reconciles the trade-off between risk aversion and performance.

5. Conclusions

This study developed a data-driven framework for dose optimization in combination drug treatment under uncertainty, with particular emphasis on balancing therapeutic benefits and adverse effects when only limited observational data are available. By embedding Bayesian inference and posterior sampling within the optimization process, the proposed approach effectively addressed the challenge of parameter uncertainty that arises when only small datasets are available. The use of MCMC sampling enabled a richer characterization of parameter distributions, expanding the effective sample size and allowing for a principled quantification of robustness across candidate solutions.
Through the design of three convex hull-based filtering methods and their mean-based counterparts, we systematically investigated the trade-offs among risk aversion, conservativeness, and objective performance. Numerical experiments showed that hull-based approaches consistently yielded feasible solutions, while mean-based approaches often failed to do so, with the notable exception of the conservative mean-based NEOF. Among the hull-based methods, PEOF favored higher-risk but stronger objective values, NEOF emphasized conservativeness at the cost of suboptimal outcomes, and BOF offered the most balanced strategy, closely approximating the benchmark solution under moderate levels of uncertainty. These findings suggest that convex hull-based approaches, particularly BOF, provide a promising pathway for robust and risk-averse dose optimization in noisy, data-limited environments. At the same time, the observed collapse of all hull-based methods under high noise levels highlights a critical limitation that warrants further methodological development.
Overall, this work advances the methodological toolkit for dose optimization under uncertainty by offering a principled and computationally efficient framework. Beyond methodological innovation, it lays the groundwork for more informed clinical decision-making, where carefully balancing efficacy and safety is essential. Looking ahead, future research may extend this framework in several directions. Incorporating informative priors could improve parameter stability in even smaller datasets, while adaptive sampling strategies may help overcome the collapse of hull-based methods at higher noise levels. Moreover, extending the methodology to higher-dimensional treatment combinations, nonlinear objectives, or patient-specific modeling could broaden its applicability to more complex clinical scenarios. Such extensions would further enhance the reliability and translational potential of data-driven dose optimization in practice.

Author Contributions

Conceptualization, E.L.B. and R.G.; Methodology, K.S., E.L.B., and R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Ryad Ghanam and Edward Boone would like to thank VCU Qatar and Qatar Foundation for supporting this project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Merigan, T.C. Treatment of AIDS with combinations of antiretroviral agents. Am. J. Med. 1991, 90, S8–S17. [Google Scholar] [CrossRef]
  2. Weiss, A.; Ding, X.; van Beijnum, J.R.; Wong, I.; Wong, T.J.; Berndsen, R.H.; Dormond, O.; Dallinga, M.; Shen, L.; Schlingemann, R.O.; et al. Rapid optimization of drug combinations for the optimal angiostatic treatment of cancer. Angiogenesis 2015, 18, 233–244. [Google Scholar] [CrossRef]
  3. Sarhan, E.; Walker, M.; Selai, C. Evidence for Efficacy of Combination of Antiepileptic Drugs in Treatment of Epilepsy. J. Neurol. Res. 2016, 5, 269–276. [Google Scholar] [CrossRef]
  4. Kabir, M.T.; Uddin, M.S.; Mamun, A.A.; Jeandet, P.; Aleya, L.; Mansouri, R.A.; Ashraf, G.M.; Mathew, B.; Bin-Jumah, M.N.; Abdel-Daim, M.M. Combination Drug Therapy for the Management of Alzheimer’s Disease. Int. J. Mol. Sci. 2020, 21, 3272. [Google Scholar] [CrossRef]
  5. Zimmermann, G.R.; Lehár, J.; Keith, C.T. Multi-target therapeutics: When the whole is greater than the sum of the parts. Drug Discov. Today 2007, 12, 34–42. [Google Scholar] [CrossRef]
  6. Lehár, J.; Krueger, A.S.; Avery, W.; Heilbut, A.M.; Johansen, L.M.; Price, E.R.; Rickles, R.J.; Short, G.F., 3rd; Staunton, J.E.; Jin, X.; et al. Synergistic drug combinations tend to improve therapeutically relevant selectivity. Nat. Biotechnol. 2009, 27, 659–666. [Google Scholar] [CrossRef] [PubMed]
  7. Chou, T.C. Theoretical basis, experimental design, and computerized simulation of synergism and antagonism in drug combination studies. Pharmacol. Rev. 2006, 58, 621–681. [Google Scholar] [CrossRef] [PubMed]
  8. Greco, W.R.; Bravo, G.; Parsons, J.C. The search for synergy: A critical review from a response surface perspective. Pharmacol. Rev. 1995, 47, 331–385. [Google Scholar] [CrossRef]
  9. Geary, N. Understanding synergy. Am. J. Physiol.-Endocrinol. Metab. 2013, 304, E237–E253. [Google Scholar] [CrossRef] [PubMed]
  10. Nowak-Sliwinska, P.; Weiss, A.; Ding, X.; Dyson, P.J.; van den Bergh, H.; Griffioen, A.W.; Ho, C.M. Optimization of drug combinations using Feedback System Control. Nat. Protoc. 2016, 11, 302–315. [Google Scholar] [CrossRef]
  11. Derendorf, H.; Hochhaus, G. (Eds.) Handbook of Pharmacokinetic/Pharmacodynamic Correlation, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2019; pp. 67–72. [Google Scholar]
  12. Das, P.; Delost, M.D.; Qureshi, M.H.; Smith, D.T.; Njardarson, J.T. A Survey of the Structures of US FDA Approved Combination Drugs. J. Med. Chem. 2019, 62, 4265–4311. [Google Scholar] [CrossRef] [PubMed]
  13. Korn, E.L.; Moscow, J.A.; Freidlin, B. Dose optimization during drug development: Whether and when to optimize. JNCI J. Natl. Cancer Inst. 2022, 115, 492–497. [Google Scholar] [CrossRef]
  14. Pan, Y.; Ren, H.; Lan, L.; Li, Y.; Huang, T. Review of Predicting Synergistic Drug Combinations. Life 2023, 13, 1878. [Google Scholar] [CrossRef] [PubMed]
  15. Cheng, F.; Kovács, I.A.; Barabási, A.L. Network-based prediction of drug combinations. Nat. Commun. 2019, 10, 1197. [Google Scholar] [CrossRef]
  16. Zhou, W.; Wang, Y.; Lu, A.; Zhang, G. Machine learning approaches for synergistic drug discovery. Trends Pharmacol. Sci. 2020, 41, 482–494. [Google Scholar] [CrossRef]
  17. Berry, D.A. Adaptive clinical trials: The promise and the caution. J. Clin. Oncol. 2011, 29, 606–609. [Google Scholar] [CrossRef] [PubMed]
  18. Thall, P.F.; Wathen, J.K.; Bekele, B.N. Hierarchical Bayesian approaches to adaptive dosing in clinical trials. Stat. Med. 2014, 33, 2425–2443. [Google Scholar] [CrossRef]
  19. Parmigiani, G.; Inoue, L. Decision Theory: Principles and Approaches; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2009; pp. 35–41. [Google Scholar] [CrossRef]
  20. Mager, D.E.; Jusko, W.J. Development of translational pharmacokinetic-pharmacodynamic models. Clin. Pharmacol. Ther. 2008, 83, 909–912. [Google Scholar] [CrossRef]
  21. Derendorf, H.; Hochhaus, G. Handbook of Pharmacokinetic/Pharmacodynamic Correlation; CRC Press: Boca Raton, FL, USA, 1995; pp. 45–52. [Google Scholar] [CrossRef]
  22. Korn, E.L.; Freidlin, B. Adaptive clinical trials: Advantages and disadvantages of various adaptive design elements. J. Natl. Cancer Inst. 2017, 109, djx013. [Google Scholar] [CrossRef]
  23. Walter, E.; Pronzato, L. Identification of Parametric Models; Communications and Control Engineering; Springer: Berlin/Heidelberg, Germany, 1997; pp. 55–70. [Google Scholar] [CrossRef]
  24. van der Vaart, A.W. Asymptotic Statistics; Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 1998; pp. 15–25. [Google Scholar] [CrossRef]
  25. Ben-Tal, A.; Nemirovski, A. Robust optimization–methodology and applications. Math. Program. 2002, 92, 453–480. [Google Scholar] [CrossRef]
  26. Bertsimas, D.; Brown, D.B.; Caramanis, C. Theory and applications of robust optimization. SIAM Rev. 2011, 53, 464–501. [Google Scholar] [CrossRef]
  27. Delage, E.; Ye, Y. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 2010, 58, 595–612. [Google Scholar] [CrossRef]
  28. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis, 3rd ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2013; pp. 3–5. [Google Scholar] [CrossRef]
  29. Spiegelhalter, D.J.; Abrams, K.R.; Myles, J.P. Bayesian Approaches to Clinical Trials and Health-Care Evaluation; Statistics in Practice; Wiley: Hoboken, NJ, USA, 2004; pp. 45–52. [Google Scholar] [CrossRef]
  30. Ibrahim, J.G.; Chen, M.H.; Sinha, D. Bayesian Survival Analysis; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 25–32. [Google Scholar] [CrossRef]
  31. Robert, C.P.; Casella, G. Monte Carlo Statistical Methods, 2nd ed.; Springer Texts in Statistics; Springer: Berlin/Heidelberg, Germany, 2004; pp. 145–160. [Google Scholar] [CrossRef]
  32. Annis, J.; Miller, B.J.; Palmeri, T.J. Bayesian inference with Stan: A tutorial on adding custom distributions. Behav. Res. Methods 2017, 49, 863–886. [Google Scholar] [CrossRef]
  33. Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. Handbook of Markov Chain Monte Carlo; Chapman & Hall/CRC Handbooks of Modern Statistical Methods; Chapman & Hall/CRC: Boca Raton, FL, USA, 2011; pp. 45–60. [Google Scholar] [CrossRef]
  34. Farhat, N.J.; Boone, E.L.; Edwards, D.J. A new method for determining the benchmark dose tolerable region and endpoint probabilities for toxicology experiments. J. Appl. Stat. 2020, 47, 775–803. [Google Scholar] [CrossRef] [PubMed]
  35. Neuenschwander, B.; Branson, M.; Gsponer, T. Critical aspects of the Bayesian approach to phase I cancer trials. Stat. Med. 2008, 27, 2420–2439. [Google Scholar] [CrossRef] [PubMed]
  36. Ben-Tal, A.; Nemirovski, A. Robust convex optimization. Math. Oper. Res. 1998, 23, 769–805. [Google Scholar] [CrossRef]
  37. Bertsimas, D.; Sim, M. The price of robustness. Oper. Res. 2004, 52, 35–53. [Google Scholar] [CrossRef]
  38. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory, 2nd ed.; SIAM: Bangkok, Thailand, 2014; pp. 55–60. [Google Scholar] [CrossRef]
39. Lunn, D.J.; Thomas, A.; Best, N.; Spiegelhalter, D. WinBUGS—A Bayesian modelling framework: Concepts, structure, and extensibility. Stat. Comput. 2000, 10, 325–337.
40. Plummer, M. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, Austria, 20–22 March 2003; Hornik, K., Leisch, F., Zeileis, A., Eds.; pp. 1–16. Available online: https://www.r-project.org/conferences/DSC-2003/Proceedings/Plummer.pdf (accessed on 20 October 2025).
41. Thomas, A.; O’Hara, B.; Ligges, U.; Sturtz, S. Making BUGS open. R News 2006, 6, 12–17.
42. Hoffman, M.D.; Gelman, A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 2014, 15, 1593–1623.
43. Ben-Tal, A.; El Ghaoui, L.; Nemirovski, A. Robust Optimization; Princeton Series in Applied Mathematics; Princeton University Press: Princeton, NJ, USA, 2009; pp. 95–150.
44. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; pp. 27–29.
45. Preparata, F.P.; Shamos, M.I. Computational Geometry: An Introduction; Springer: New York, NY, USA, 1985; pp. 95–128.
46. Rockafellar, R.T.; Uryasev, S. Optimization of conditional value-at-risk. J. Risk 2000, 2, 21–42.
47. Miettinen, K. Nonlinear Multiobjective Optimization; International Series in Operations Research & Management Science; Springer: Berlin/Heidelberg, Germany, 1999; Volume 12, pp. 109–113.
Figure 1. Optimal solutions under various filtering conditions.
Figure 2. Filtered candidate optimal solutions under PEOF, NEOF, and BOF. The pink cluster (PEOF) reflects non-conservative, high-risk decisions, the green cluster (NEOF) reflects conservative, low-risk decisions, and the yellow cluster (BOF) represents a balanced trade-off between risk and conservativeness.
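The clusters in Figure 2 arise from filtering a common pool of candidate optima. As a rough illustration of the filtering idea only, the following sketch retains candidates by quantiles of hypothetical positive- and negative-effect scores and summarizes a retained cluster by its convex hull; every score function, cutoff, and dose range below is an assumption made for this example and not the dose–response model or filtration rule defined earlier in the paper.

```python
# Illustrative sketch only: quantile-based filtration of candidate optimal
# doses (x1, x2). The effect scores and cutoffs are hypothetical stand-ins,
# not the exponential dose-response models used in the paper.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Hypothetical candidate optima, e.g., one per posterior draw.
candidates = rng.uniform(low=[8.0, 0.0], high=[18.0, 7.0], size=(500, 2))
pos = candidates @ np.array([5.0, 3.0])        # stand-in therapeutic score
neg = 0.02 * (candidates ** 2).sum(axis=1)     # stand-in adverse-effect score

# PEOF-style: favor high positive effect (risk-tolerant cluster).
peof = candidates[pos >= np.quantile(pos, 0.75)]
# NEOF-style: favor low negative effect (conservative cluster).
neof = candidates[neg <= np.quantile(neg, 0.25)]
# BOF-style: favor a balance of the two normalized scores.
score = pos / pos.max() - neg / neg.max()
bof = candidates[score >= np.quantile(score, 0.75)]

# The convex hull of a retained cluster bounds its region of candidate optima.
hull = ConvexHull(bof)
print("BOF cluster size:", len(bof), "hull vertices:", len(hull.vertices))
```

In this toy version, the PEOF and NEOF criteria pull the retained clusters toward opposite corners of the dose region, mirroring the qualitative pattern of the pink and green clusters in Figure 2.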
Figure 3. Unfiltered candidate optimal solutions for σ = 0.1.
Figure 4. PEOF for σ = 0.1.
Figure 5. NEOF for σ = 0.1.
Figure 6. BOF for σ = 0.1.
Figure 7. Unfiltered candidate optimal solutions for σ = 0.25.
Figure 8. PEOF for σ = 0.25.
Figure 9. NEOF for σ = 0.25.
Figure 10. BOF for σ = 0.25.
Figure 11. Estimated optimal solutions under positive effect-oriented filtration (PEOF) for varying σ.
Figure 12. Estimated optimal solutions under negative effect-oriented filtration (NEOF) for varying σ.
Figure 13. Estimated optimal solutions under balance-oriented filtration (BOF) for varying σ.
Table 1. Estimated optimal solutions using the convex hull (x̂*) and mean (x̆*) approaches when σ = 0.01. Ẑ* and Z̆* are the corresponding objective values; F̂ and F̆ indicate whether each solution is feasible (Y/N).

Filtration | x̂* | x̆* | Ẑ* | Z̆* | F̂ | F̆
Unfiltered | (12.33, 6.99) | (15.73, 5.77) | 87.97 | 105.93 | Y | N
PEOF | (15.44, 5.39) | (16.31, 6.08) | 103.44 | 110.03 | Y | N
NEOF | (12.33, 6.99) | (15.11, 5.56) | 87.97 | 101.78 | Y | N
BOF | (15.44, 5.39) | (15.61, 5.97) | 103.44 | 105.61 | Y | N
Table 2. Estimated optimal solutions using the convex hull (x̂*) and mean (x̆*) approaches when σ = 0.1. Columns as in Table 1.

Filtration | x̂* | x̆* | Ẑ* | Z̆* | F̂ | F̆
Unfiltered | (11.80, 3.36) | (16.55, 3.40) | 77.50 | 106.13 | Y | N
PEOF | (15.35, 5.27) | (17.63, 3.41) | 102.67 | 112.60 | Y | N
NEOF | (12.69, 0.88) | (15.71, 2.77) | 77.92 | 99.82 | Y | N
BOF | (15.35, 5.29) | (16.65, 2.88) | 102.67 | 105.63 | Y | N
Table 3. Estimated optimal solutions using the convex hull (x̂*) and mean (x̆*) approaches when σ = 0.25. Columns as in Table 1.

Filtration | x̂* | x̆* | Ẑ* | Z̆* | F̂ | F̆
Unfiltered | (9.21, 2.98) | (15.94, 2.33) | 61.21 | 100.32 | Y | N
PEOF | (14.30, 4.19) | (18.15, 1.58) | 94.17 | 112.08 | Y | N
NEOF | (9.21, 2.98) | (14.74, 0.72) | 61.21 | 89.90 | Y | Y
BOF | (15.21, 2.18) | (16.48, 0.18) | 95.64 | 99.24 | Y | N
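One way to read Tables 1–3 together is as a price-of-robustness calculation: the hull-based solution gives up some objective value in exchange for feasibility. The short sketch below, added here purely as a reading aid, computes the relative gap between the mean-based and hull-based BOF objective values reported above.

```python
# Relative objective gap (Z_mean - Z_hull) / Z_mean for the BOF rows of
# Tables 1-3. Values are copied from the tables; computation is a reading aid.
bof_rows = {0.01: (103.44, 105.61), 0.10: (102.67, 105.63), 0.25: (95.64, 99.24)}
for sigma, (z_hull, z_mean) in sorted(bof_rows.items()):
    gap = (z_mean - z_hull) / z_mean
    print(f"sigma = {sigma:.2f}: hull gives up {gap:.1%} of the mean objective")
```

The gap grows from roughly 2% at σ = 0.01 to under 4% at σ = 0.25, while the hull-based BOF solution remains feasible throughout, which is the sense in which BOF balances performance against conservativeness.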
Table 4. Estimated optimal solutions and objective values across hull methods as a function of the noise parameter σ. Subscripts U, P, N, and B denote the unfiltered, PEOF, NEOF, and BOF variants, respectively.

σ | x̂*_U | Ẑ*_U | x̂*_P | Ẑ*_P | x̂*_N | Ẑ*_N | x̂*_B | Ẑ*_B
0.02 | (13.20, 3.80) | 86.80 | (15.17, 6.00) | 103.02 | (13.20, 3.80) | 86.80 | (15.17, 6.00) | 103.02
0.03 | (13.41, 3.28) | 87.00 | (15.35, 5.74) | 103.60 | (13.41, 3.28) | 87.00 | (15.35, 5.74) | 103.60
0.04 | (12.82, 3.67) | 84.27 | (15.22, 5.77) | 102.85 | (13.01, 3.36) | 84.75 | (15.22, 5.77) | 102.85
0.05 | (13.63, 2.33) | 86.47 | (15.31, 5.77) | 103.40 | (13.63, 2.33) | 86.47 | (15.31, 5.77) | 103.40
0.06 | (11.96, 3.48) | 78.71 | (15.49, 5.29) | 103.51 | (12.59, 2.18) | 79.92 | (15.49, 5.29) | 103.51
0.07 | (12.51, 3.19) | 81.44 | (15.19, 6.17) | 103.48 | (12.98, 2.09) | 82.09 | (15.19, 6.17) | 103.48
0.08 | (12.22, 4.20) | 81.73 | (15.33, 5.35) | 102.70 | (12.99, 1.98) | 81.92 | (15.33, 5.35) | 102.70
0.09 | (11.82, 3.76) | 78.43 | (15.50, 4.74) | 102.47 | (12.68, 2.06) | 80.20 | (15.50, 4.74) | 102.47
0.20 | (10.11, 3.37) | 67.40 | (14.79, 4.55) | 97.84 | (11.24, 0.00) | 67.44 | (15.28, 2.96) | 97.60
0.30 | (8.48, 3.24) | 57.34 | (13.44, 4.53) | 89.71 | (8.52, 3.19) | 57.51 | (14.86, 1.29) | 91.72
0.40 | (7.50, 2.39) | 49.78 | (12.46, 4.18) | 83.12 | (7.50, 2.39) | 49.78 | (13.80, 0.90) | 84.62
0.50 | (6.64, 2.11) | 44.03 | (11.41, 3.87) | 76.21 | (6.64, 2.11) | 44.03 | (12.72, 0.17) | 76.67
Table 5. Estimated optimal solutions and objective values across mean methods as a function of the noise parameter σ. Subscripts as in Table 4.

σ | x̆*_U | Z̆*_U | x̆*_P | Z̆*_P | x̆*_N | Z̆*_N | x̆*_B | Z̆*_B
0.02 | (15.81, 5.53) | 105.94 | (16.41, 5.86) | 110.16 | (15.15, 5.35) | 101.63 | (15.69, 5.73) | 105.57
0.03 | (15.96, 5.19) | 106.13 | (16.61, 5.45) | 110.55 | (15.29, 4.96) | 101.68 | (15.85, 5.33) | 105.76
0.04 | (16.01, 4.97) | 106.02 | (16.68, 5.27) | 110.61 | (15.34, 4.69) | 101.41 | (15.93, 5.07) | 105.73
0.05 | (16.17, 4.70) | 106.44 | (16.90, 4.93) | 111.25 | (15.48, 4.39) | 101.66 | (16.08, 4.79) | 106.09
0.06 | (16.28, 4.35) | 106.38 | (17.06, 4.55) | 111.48 | (15.55, 4.00) | 101.28 | (16.18, 4.44) | 105.96
0.07 | (16.33, 4.10) | 106.17 | (17.18, 4.26) | 111.57 | (15.58, 3.66) | 100.84 | (16.27, 4.11) | 105.83
0.08 | (16.42, 3.95) | 106.40 | (17.31, 4.07) | 112.03 | (15.65, 3.50) | 100.88 | (16.39, 3.85) | 106.06
0.09 | (16.49, 3.63) | 106.21 | (17.50, 3.66) | 112.35 | (15.68, 3.09) | 100.26 | (16.54, 3.32) | 105.86
0.20 | (16.44, 2.20) | 103.02 | (18.29, 1.80) | 113.36 | (15.25, 0.93) | 93.35 | (16.92, 0.34) | 102.20
0.30 | (15.54, 2.69) | 98.64 | (18.18, 1.52) | 112.12 | (14.32, 0.63) | 87.17 | (16.14, 0.16) | 97.18
0.40 | (14.58, 3.75) | 94.98 | (17.84, 1.88) | 110.77 | (13.35, 1.06) | 82.24 | (15.52, 0.13) | 93.39
0.50 | (13.72, 4.84) | 91.97 | (17.38, 2.56) | 109.41 | (12.38, 1.70) | 77.71 | (14.92, 0.12) | 89.74