Article

Gradient-Based, Post-Optimality Sensitivity Analysis with Respect to Parameters of State Equations

Department of Mechanical and Aerospace Engineering, Old Dominion University, Norfolk, VA 23454, USA
*
Authors to whom correspondence should be addressed.
Designs 2026, 10(1), 11; https://doi.org/10.3390/designs10010011
Submission received: 18 December 2025 / Revised: 23 January 2026 / Accepted: 23 January 2026 / Published: 27 January 2026

Abstract

Design optimization is a computational tool that can enable a designer to investigate the effectiveness of a design concept in an organized format. However, this design process requires the design variables, constraints, and objective function to be properly defined and expressed in mathematical forms. Post-optimality analysis thus becomes a necessary step to investigate different variations in the problem formulation and parameters to ensure that optimization produces a stable and trustworthy outcome. One efficient way to achieve this aim is to compute the local derivative of the optimized objective function with respect to the optimization problem parameters, such as bounds on the constraints and the material properties in the state equation. This method is referred to as post-optimality sensitivity analysis. In this study, we derived the post-optimal sensitivity equation to explicitly include the derivatives of state variables with respect to problem parameters and to broaden its applications to minimax and goal attainment design optimization problems.

1. Introduction

The responsibility of an engineering designer is primarily to produce a quality product that can meet performance requirements with high confidence. This design task could form a lengthy process that may require steps such as data collection, trial and error, and validation. This task could be improved, however, by taking advantage of computer technology and numerical methods for modeling, analysis, and design optimization. To achieve this aim, post-optimality analysis becomes necessary to ensure that the related design optimization problems are effectively formulated, which includes the selection of design variables, the criteria to measure product quality, and the constraints on product performance.
One specific task within post-optimality analysis is to investigate the impact of the values of problem parameters upon the results of design optimization. This can be achieved with the aid of post-optimality sensitivity analysis, which involves investigating variation in the optimization solution due to changes in problem parameters by conducting direct differentiation of the optimized objective function with respect to the parameters of concern [1].
The focus of post-optimality analysis differs from that of post-optimality sensitivity analysis investigated herein. The former places greater emphasis on the quality of the design optimization formulation and its outcome. In particular, post-optimality analysis, as stated by Gero [2], benefits the stability analysis of the optimal solution, the impact of different design decisions, and the weighing of different objective functions. Venkat et al. [3] used the Greedy Reduction algorithm to select the most preferable subset of the Pareto optimal points. Arias et al. [4] conducted post-optimality analysis to compare the operational costs of two different CO2 capture plants. Cherif [5] introduced a new behavior penalty method to conduct post-optimality analysis to improve the efficiency of goal programming. Moreover, Wang et al. [6] conducted post-optimality analysis in a multi-objective design optimization problem to measure the stability of an optimal solution under variations in the design and environmental variables of wind turbines, based on the distances between Pareto points.
Post-optimality sensitivity analysis, in comparison, involves analytically producing the design derivatives of the optimization results with respect to problem parameters, which is achieved once the optimization formulation is fully specified and solved. These derivatives are evaluated at the optimized design variables. Specifically, the optimized design variables at which these derivatives are evaluated must satisfy the Kuhn–Tucker necessary conditions. Vanderplaats and Yoshida [1] classified post-optimality sensitivity analysis methods into three major approaches: the optimality objective-based approach, the feasible search direction approach, and the necessary condition-based approach. Both Approaches 1 and 3 are based on the Kuhn–Tucker necessary conditions. However, the first approach focuses only on the derivative of the optimal objective function, whereas the latter includes the derivatives of the objective function and the constraints, as well as the derivatives of the optimal design variables and the Lagrange multipliers. Numerous authors have discussed some or all of these approaches in their published engineering optimization textbooks. Onwubiko [7] presented only the first approach in his book; Haug detailed the first and third approaches for post-optimality sensitivity analysis [8], as did Belegundu and Chandrupatla [9]. Conversely, Rao [10] presented all three approaches.
Fiacco and Ghaemi [11] derived the post-optimality sensitivity analysis of a nonlinear structure design problem. The bounds of constraints and the structural properties involved in the state equation are considered as problem parameters. However, the state variables are not explicitly included in the post-optimality sensitivity equation. Barthelemy and Sobieszczanski-Sobieski [12] presented the optimum sensitivity derivatives of the objective function in nonlinear programming. Diewert [13] used the first approach to derive the first-order and second-order post-optimality sensitivity analysis of a nonlinear programming problem related to economics that does not involve engineering state variables. In a similar manner, Braun and Kroo [14] used Approach 1 to investigate the sensitivity of an optimized direct operating cost of a DC-9 class aerospace vehicle with respect to parameters including cruise range, takeoff field length, etc. Hart and van Bloemen Waanders [15] improved the data collected from a limited number of optimization solutions of a high-fidelity model by taking advantage of the post-optimality sensitivities of the optimized low-fidelity models with respect to model discrepancy. Enevoldsen [16] also formulated the design optimization problem into two levels. The post-optimal sensitivity analysis of the upper-level optimization problem requires the sensitivity of the reliability index presented in its constraint with respect to the design variables. Post-optimality sensitivity analysis is also widely used in reliability index-based design optimization, which is structured as a two-level optimization [17,18]. The first-order reliability method (FORM) is applied at the first level. Its objective is the reliability index of a response function, and its design variables are the random variables.
The post-optimality sensitivity analysis of the reliability index is then computed with respect to the design variables of the upper level, which are treated as the problem parameters in the low-level FORM.
Jereza and colleagues [19] used the feasible search direction method, Approach 2, for optimum design sensitivity of failure probabilities with respect to filter parameters of a concrete structure modeled by linear dynamic equations with random loads. Baldomir et al. [20] also used Approach 2 to update the optimal design variables to more effectively match the targeted values of the constraints. This process is often used in the tradeoff design process [21].
Koltai and Tatay [22] applied Approach 3 in linear programming to avoid degeneracy of optimal solutions within the ranges of the coefficients of the objective and the right-hand sides of the constraints. This optimality sensitivity analysis is achieved with finite differencing of repeated optimization solutions with respect to different problem parameters. Vakilifard et al. [23] analytically derived the derivatives of the optimal objective with respect to problem parameters in the constraint equations. The derivatives of the optimal objectives for the applications presented in their paper are equal to the Lagrange multipliers of the associated constraints.
Bonnans and Shapiro [24] mathematically provided the necessary and sufficient conditions to ensure stable first- and second-order optimal sensitivity analyses of general programming problems based on Approach 3. The problem parameters in the example presented in their paper are the bounds of the constraints. Similarly, Pirnay et al. [25] differentiated the Kuhn–Tucker necessary condition twice with respect to problem parameters to produce the second-order derivatives of the optimized design variables and Lagrange multipliers. Sobieszczanski-Sobieski and colleagues [26] applied Approach 3 of optimal sensitivity analysis to decouple a large optimization problem into multiple levels to improve computational efficiency. The upper-level design variables are treated as problem parameters in a lower-level design optimization problem. The optimal sensitivities of the objective function and the design variables of a lower-level problem with respect to these problem parameters can be employed to approximate the lower-level objective function linearly in terms of the upper-level design variables presented in the upper-level design optimization formulation. This process improves computational efficiency by eliminating the lower-level design variables from the upper-level formulation.
Approach 1, as discussed above, is the most efficient of the three methods for post-optimality sensitivity analysis in terms of the number and order of derivatives required. However, it is often limited to an objective function expressed in terms of problem parameters and design variables. To broaden the engineering applications of Approach 1, we explicitly account for state equations in the post-optimality sensitivity analysis. The newly derived equation is then further extended to two common multi-objective problems, minimax and goal attainment. We also methodically review the search direction method and the most comprehensive approach, Approach 3. The remainder of the paper is organized as follows: In Section 2, Materials and Methods, we review all three approaches, with the focus placed on Approach 1. The detailed derivation of the post-optimality sensitivity analysis equation for a single-objective problem is first presented, whereby the problem parameter appears as part of the state equation. The same post-optimality sensitivity equation is then revised to handle minimax and goal attainment problems. In Section 3, Validation and Application, we first use an illustrative example to validate the derived equations in Approaches 1 and 2 and demonstrate their applications. The second example is a design problem of a finned heat sink for a circuit board cooling system, which minimizes its weight while maximizing the heat loss rate and hydraulic diameter. The heat sink is modeled using a 2D finite element heat transfer problem. Research results and future research directions are summarized in the final section, Section 4, Concluding Remarks and Discussion.

2. Methodology

All three approaches for the post-optimality sensitivity analysis are reviewed in this section. The layout of this section is divided into four subsections. The most efficient approach, Approach 1, is presented in Section 2.1 and Section 2.2. In the first subsection, we focus on a standard optimization method with one single-objective function; in the second subsection, we address the minimax and goal attainment problems, which are multi-objective. The related post-optimality sensitivity equation of a single objective function with respect to a problem parameter is derived in Section 2.1, which takes state variables into consideration in the formulation.
The post-optimality sensitivity analysis of the minimax and goal attainment problems is discussed in Section 2.2. The multi-objective functions of these problems are converted into single-objective functions. These recast problems treat the new objective function as an additional design variable, which is also presented as part of the constraints. This uniqueness produces a post-optimal sensitivity equation differing from the one derived from the single-objective optimization problem presented in Section 2.1.
In the third subsection, we will review the search direction method, in which the search direction is formulated and computed based on a quadratic programming problem rather than the linear programming problem presented in Reference [1]. The search direction method is suitable for tradeoff design but not for post-optimality sensitivity analysis, as it does not involve the Kuhn–Tucker necessary conditions. The most comprehensive method for post-optimality sensitivity analysis is Approach 3, which will be presented in Section 2.4. This method directly differentiates the Kuhn–Tucker necessary conditions to construct the equations that are solved for the design derivatives of the optimal design variables and the Lagrange multipliers. The derived equations involve second-order derivatives, which increase the computational cost.

2.1. Approach 1: Direct Differentiation of the Augmented Objective Function

Setting ϕ, x, b, and p, respectively, as the cost function, the state variable vector, the design variable vector, and a problem parameter, a single-objective design optimization problem is expressed mathematically as follows:
$$\min_{b \in \mathbb{R}^n} \phi(x(b,p), b, p),$$
subjected to the constraints,
$$h(x(b,p), b, p) = 0, \qquad g(x(b,p), b, p) \le 0,$$
and the state equation,
$$Q(x(b,p), b, p) = 0$$
where the state variable vector, x ∈ R^k, is the solution of the state equation, Q ∈ R^k, expressed in Equation (3), which itself is a function of the design variable vector and the problem parameters. As a result, the state variable vector, x(b, p), is also a function of the design variables, b, and the problem parameter, p.
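To make the nested structure of Equations (1)–(3) concrete, the sketch below solves a toy compliance-minimization problem in which a hypothetical two-spring state equation, K(b, p)x = f, is solved inside the objective. SciPy's SLSQP is used here in place of the paper's fmincon, and the stiffness matrix, load vector, and material-budget constraint are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

f = np.array([1.0, 2.0])  # external load vector (hypothetical)

def solve_state(b, p):
    # State equation Q(x, b, p) = K(b, p) x - f = 0 for a toy two-spring model;
    # the problem parameter p adds stiffness to the first diagonal entry.
    K = np.array([[b[0] + b[1] + p, -b[1]],
                  [-b[1],           b[1]]])
    return np.linalg.solve(K, f)

def phi(b, p):
    # Objective phi = f^T x(b, p): the state solve is nested inside the objective.
    return f @ solve_state(b, p)

p0 = 0.5
res = minimize(lambda b: phi(b, p0), x0=[1.0, 1.0],
               bounds=[(0.1, 10.0)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda b: 3.0 - b[0] - b[1]}])  # material budget
```

Because compliance decreases monotonically with stiffness in this toy model, the material constraint b1 + b2 ≤ 3 should be tight at the optimum, which makes it a natural candidate for the post-optimality analysis discussed next.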
Assume that all functions in Equations (1)–(3) meet the sufficient continuity requirements and, for a given problem parameter, p 0 , b * ( p 0 ) is an isolated minimum of the above design problem, which satisfies the Kuhn–Tucker necessary condition at the optimal design, b * ,
$$\frac{\partial \phi^*}{\partial b} + \frac{\partial \phi^*}{\partial x}\frac{\partial x}{\partial b} + \lambda^T\left(\frac{\partial g^*}{\partial b} + \frac{\partial g^*}{\partial x}\frac{\partial x}{\partial b}\right) = 0.$$
The optimal objective function, ϕ * ( x ( b * , p 0 ) , b * , p 0 ) is expanded as
$$\phi^*(x(b^*, p_0), b^*, p_0) = \phi^* + \lambda^T g^*$$
where g* represents all tight constraints defined by Equation (2), evaluated at b* and p0, and the vector, λ, collects all associated Lagrange multipliers, with λ ≥ 0. Furthermore, by taking advantage of the Kuhn–Tucker necessary condition, the derivative of the optimal objective function, φ(x(b*, p0), b*, p0), with respect to the problem parameter, p, at p0 can now be derived as
$$\frac{d\phi^*}{dp} = \frac{\partial \phi^*}{\partial p} + \lambda^T \frac{\partial g^*}{\partial p} + \left(\frac{\partial \phi^*}{\partial x} + \lambda^T \frac{\partial g^*}{\partial x}\right)\frac{\partial x}{\partial p}$$
where ∂g*/∂p is an m × 1 column vector, ∂φ*/∂x is a 1 × k row vector, and ∂g*/∂x is an m × k Jacobian matrix. Note that an asterisk marks a function evaluated at the optimal design. By partially differentiating the state equation, Equation (3), evaluated at b* with respect to the problem parameter, p, one can construct the following equation, which is solved directly for ∂x/∂p:
$$\left(\frac{\partial Q^*}{\partial x}\right)\frac{\partial x}{\partial p} = -\frac{\partial Q^*}{\partial p}.$$
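This linear system can be assembled directly whenever the state equation is available in residual form. The sketch below assumes a hypothetical parameterized stiffness matrix K(p) = K0 + pD with residual Q = K(p)x − f, solves K(∂x/∂p) = −(∂K/∂p)x, and checks the result against central finite differencing; all matrices are invented for illustration.

```python
import numpy as np

K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
D  = np.array([[1.0,  0.0], [0.0,  0.5]])   # dK/dp (hypothetical)
f  = np.array([1.0, 2.0])

def solve_x(p):
    # State solve for Q = (K0 + p D) x - f = 0
    return np.linalg.solve(K0 + p * D, f)

p0 = 2.0
x  = solve_x(p0)

# Direct differentiation of the state equation:
# (dQ/dx) dx/dp = -dQ/dp  =>  K dx/dp = -(dK/dp) x
dx_dp = np.linalg.solve(K0 + p0 * D, -(D @ x))

# Central finite-difference check of the analytic sensitivity
h = 1e-6
fd = (solve_x(p0 + h) - solve_x(p0 - h)) / (2 * h)
```

The analytic and finite-difference sensitivities should agree to the truncation error of the central difference.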
Alternatively, the adjoint variable method can be employed to rederive Equation (4) as
$$\frac{d\phi^*}{dp} = \frac{\partial \phi^*}{\partial p} + \lambda^T \frac{\partial g^*}{\partial p} - \eta^T \frac{\partial Q^*}{\partial p}$$
where the adjoint variable vector, η , is the solution of the associated adjoint equation,
$$\left(\frac{\partial Q^*}{\partial x}\right)^T \eta = \left(\frac{\partial \phi^*}{\partial x} + \lambda^T \frac{\partial g^*}{\partial x}\right)^T.$$
The advantage of the adjoint variable method is that the same adjoint variable vector, η , in Equation (7) can be repeatedly used in Equation (6) for post-optimality sensitivity analysis with respect to different problem parameters.
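The reuse pattern can be sketched as follows for an assumed objective φ = cᵀx, with the constraint contributions dropped for clarity (λ = 0): a single adjoint solve serves every parameter entering the stiffness matrix. With the adjoint defined by Kᵀη = (∂φ/∂x)ᵀ, the sensitivity is −ηᵀ(∂Q/∂p_i). The matrices D1 and D2 standing in for ∂K/∂p_i are hypothetical.

```python
import numpy as np

c  = np.array([1.0, 2.0])                 # phi = c^T x (hypothetical objective)
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
D1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # dK/dp1 (hypothetical)
D2 = np.array([[0.0, 0.0], [0.0, 1.0]])   # dK/dp2 (hypothetical)
f  = np.array([1.0, 2.0])
p  = np.array([2.0, 1.0])

K = K0 + p[0] * D1 + p[1] * D2
x = np.linalg.solve(K, f)                 # state solve, Q = K x - f = 0

# One adjoint solve, (dQ/dx)^T eta = (dphi/dx)^T, reused for every parameter:
eta = np.linalg.solve(K.T, c)

# dphi/dp_i = -eta^T (dQ/dp_i), where dQ/dp_i = (dK/dp_i) x:
grad = np.array([-eta @ (D1 @ x), -eta @ (D2 @ x)])
```

Only one linear solve with Kᵀ is needed, regardless of how many problem parameters are screened.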
The sensitivity equations derived by Approach 1 published in the literature usually include only the first two terms of Equation (4), but not the term associated with the state variables, ∂x/∂p. Detailed derivations of Equations (5)–(8) can be found in Appendix A.

2.2. Approach 1: Minimax and Goal Attainment

Some optimization problems with multi-objective functions can be revised to fit into a single-objective format. Specifically, the goal of minimax is, among all objective functions, to minimize the maximal one. Its mathematical format can be expressed as follows:
$$\min_{b \in \mathbb{R}^n} \max_{i=1,2,\dots,m} \left\{ \phi_i(x(b,p), b, p) \right\}$$
subjected to the same constraints and the state equation stated in Equations (2) and (3). The objective function described in Equation (9) can be converted into a single-objective format by introducing an additional design variable, Z , as
$$\min_{b \in \mathbb{R}^n,\, Z} Z$$
subjected to the following m constraints in addition to those presented in Equations (2) and (3),
$$g_{Z,i} \equiv \phi_i(x(b,p), b, p) - Z \le 0, \qquad i = 1, \dots, m.$$
Conversely, the goal attainment problem is defined with a single objective to minimize the gap between the optimized objective functions and its targeted values, f 0 . The outcome is expected to be a design that can achieve the targeted goals, f 0 . The problem can be described as follows:
$$\min_{b,\, \gamma} \gamma$$
subjected to the following constraint set in addition to Equations (2) and (3),
$$g_Z \equiv f(x, b) - \gamma \times w_t = f_0.$$
The vector, w_t, is the pre-determined weighting vector, whereas the attainment factor, γ, measures the gap between the objective functions and the targeted goals.
The objective functions presented in these two optimization formulations differ considerably from the one presented in Equations (1)–(3). First, the objective function in either Equation (10) or (12) is formulated as a design variable; secondly, it is also presented as part of the constraints. Therefore, the post-optimal sensitivity equation presented in Equation (5) is now revised as
$$\frac{d\phi^*}{dp} = \lambda^T \frac{\partial g^*}{\partial p} + \lambda_Z^T \frac{\partial g_Z^*}{\partial p} + \left(\lambda^T \frac{\partial g^*}{\partial x} + \lambda_Z^T \frac{\partial g_Z^*}{\partial x}\right)\frac{\partial x}{\partial p}$$
where the Lagrange multiplier, λ , is associated with the traditional format of constraints as described in Equations (2) and (3), and λ Z is the Lagrange multiplier associated with Equation (11) for the minimax problem and Equation (13) for the goal attainment one. The above equation can also be reformulated by introducing the adjoint variable, η , as
$$\frac{d\phi^*}{dp} = \lambda^T \frac{\partial g^*}{\partial p} + \lambda_Z^T \frac{\partial g_Z^*}{\partial p} - \eta^T \frac{\partial Q^*}{\partial p}$$
with the adjoint equation,
$$\left(\frac{\partial Q^*}{\partial x}\right)^T \eta = \left(\lambda^T \frac{\partial g^*}{\partial x} + \lambda_Z^T \frac{\partial g_Z^*}{\partial x}\right)^T$$
where the vector Q * is the state equation of Equation (3) evaluated at the optimal design point; i.e., Q * = 0 = Q ( x ( b * , p ) , b * , p ) .
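The minimax recast above can be demonstrated on a toy problem with two hypothetical scalar objectives: the auxiliary variable Z becomes both the objective and a bound inside the constraints. SciPy's SLSQP stands in for the paper's solver, and the two quadratic objectives are invented for illustration.

```python
from scipy.optimize import minimize

# Two hypothetical scalar objectives of one design variable b.
phis = [lambda b: (b - 1.0) ** 2,
        lambda b: (b + 1.0) ** 2]

# Recast min-max as:  min Z  subject to  phi_i(b) - Z <= 0.
# Design vector v = [b, Z]; SciPy's "ineq" convention requires fun(v) >= 0.
cons = [{"type": "ineq", "fun": lambda v, f_=f_: v[1] - f_(v[0])} for f_ in phis]

res = minimize(lambda v: v[1], x0=[0.5, 5.0], constraints=cons, method="SLSQP")
b_opt, Z_opt = res.x  # expected near b = 0, Z = 1, where phi_1 = phi_2
```

At the solution both recast constraints are tight, which is exactly the situation in which the multipliers λ_Z of Equations (14) and (15) are nonzero.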

2.3. Approach 2: The Feasible Search Direction Method

The aim of the feasible search direction approach, proposed by Vanderplaats and Yoshida [1], is to find the best way to improve the existing optimal design, b * , without violating the tight constraints, g * . This is achieved by expanding the design space to include the problem parameter, p , as a new design variable. Consequently, both b and p are now treated as independent design variables.
Using the existing optimal design, b*, as the starting point, a new design along the steepest descent direction is then found in the expanded design space by solving a linear programming problem [1], which maximizes the reduction in the objective function without violating the constraints. In this study, however, the search direction, s = (s_b, s_p), is found in the expanded design space by solving the quadratic programming problem stated below:
$$\min_{s_b \in \mathbb{R}^n,\, s_p} \nabla\phi^{*T} s + \frac{1}{2} s^T s = \left(\phi_b^{*T} + \phi_x^{*T} \frac{dx}{db}\right) s_b + \left(\frac{\partial \phi^*}{\partial p} + \phi_x^{*T} \frac{dx}{dp}\right) s_p + \frac{1}{2}\left(s_b^T s_b + s_p^2\right)$$
subject to
$$\left(g_b^* + g_x^* \frac{dx}{db}\right) s_b + \left(\frac{\partial g^*}{\partial p} + g_x^* \frac{dx}{dp}\right) s_p + g^* = 0.$$
This feasible search direction resulting from the above equation has been widely used in gradient-based design optimization and in tradeoff design [21]. Its solution, s , comprises two parts. The first part, s 1 , aims to reduce the objective function while maintaining the same constraint values. The second one, s 2 , is mainly responsible for correcting constraint violations. As in Reference [1], in this study, we will focus on the first part of the solution, s 1 = ( s 1 b , s 1 p ) , which is computed by
$$s_1 = -P \nabla\phi^*$$
where the matrix, P , is computed by
$$P = I - \nabla g^* \left[(\nabla g^*)^T (\nabla g^*)\right]^{-1} (\nabla g^*)^T.$$
Note that both gradients, ∇φ* and ∇g*, are computed in the expanded design space.
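A minimal numeric sketch of this projection step, using invented gradient values in a three-dimensional expanded design space: P projects the negative objective gradient onto the null space of the tight-constraint gradients, so the resulting direction reduces φ while keeping the tight constraints tight to first order.

```python
import numpy as np

# Hypothetical gradients in the expanded (b, p) design space at the optimum:
grad_phi = np.array([2.0, -1.0, 0.5])   # gradient of phi; last entry is d(phi)/dp
N = np.array([[1.0], [1.0], [0.0]])     # columns: gradients of the tight constraints

# Projection matrix onto the null space of the tight-constraint gradients:
# P = I - N (N^T N)^{-1} N^T
P = np.eye(3) - N @ np.linalg.inv(N.T @ N) @ N.T

# Feasible direction that reduces phi while holding the tight constraints tight:
s1 = -P @ grad_phi
```

By construction, Nᵀs1 = 0 (first-order feasibility) and ∇φᵀs1 < 0 (descent), which can be checked directly on these numbers.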
Once s 1 becomes available, one can compute the reduction in the objective function at the current optimal design, b * , in the most efficient way as
$$\Delta\phi^* = \nabla\phi^{*T} s_1 = \left(\phi_b^{*T} + \phi_x^{*T} \frac{dx}{db}\right) s_{1b} + \left(\frac{\partial \phi^*}{\partial p} + \phi_x^{*T} \frac{dx}{dp}\right) s_{1p}.$$
Furthermore, one can divide the above equation by s 1 p to obtain the below equation:
$$\frac{\Delta\phi^*}{s_{1p}} = \left(\phi_b^{*T} + \phi_x^{*T} \frac{dx}{db}\right) \frac{s_{1b}}{s_{1p}} + \left(\frac{\partial \phi^*}{\partial p} + \phi_x^{*T} \frac{dx}{dp}\right).$$
Next, taking the limit as s_{1p} approaches zero, the above equation becomes the following, which can be used to estimate the change in the objective function due to the changes in the design variables and the problem parameter:
$$\frac{d\phi^*}{dp} = \left(\phi_b^{*T} + \phi_x^{*T} \frac{dx}{db}\right) \frac{db^*}{dp} + \left(\frac{\partial \phi^*}{\partial p} + \phi_x^{*T} \frac{dx}{dp}\right)$$
where d b * / d p is the limit of s 1 b / s 1 p , and d ϕ * / d p is the derivative of the objective function with respect to p .

2.4. Approach 3: Direct Differentiation of the Kuhn–Tucker Necessary Condition

The most comprehensive approach to conducting post-optimality sensitivity analysis is to directly differentiate the Kuhn–Tucker necessary conditions, Equation (4), and the tight constraints, g * = 0 , with respect to the problem parameter, p , at the optimal design point, b * . This differentiation builds an equation that can be solved for d b * / d p and d λ / d p together, which are the targeted outcome of Approach 3 [24,25,26].
$$\begin{bmatrix} \frac{\partial^2 \phi^*}{\partial b^2} + \lambda^T \frac{\partial^2 g^*}{\partial b^2} & \left(\frac{\partial g^*}{\partial b}\right)^T \\ \frac{\partial g^*}{\partial b} & 0 \end{bmatrix} \begin{Bmatrix} \frac{db^*}{dp} \\ \frac{d\lambda}{dp} \end{Bmatrix} = -\begin{Bmatrix} \frac{\partial^2 \phi^*}{\partial b\,\partial p} + \lambda^T \frac{\partial^2 g^*}{\partial b\,\partial p} \\ \frac{\partial g^*}{\partial p} \end{Bmatrix}.$$
Once the problem parameter, p, becomes a part of the state equation, differentiating the Kuhn–Tucker necessary condition with respect to p can be complicated, as it involves second-order derivatives. Once the state variable, x, is engaged in the problem formulation, the coefficient matrix and the loading terms of the assembled equation, Equation (20), become a collection of various second-order derivatives. For example, the differentiations of the first and second terms of Equation (4) lead to
$$\frac{d}{dp}\left(\frac{\partial \phi^*}{\partial b}\right)^T = \frac{\partial^2 \phi^*}{\partial b\,\partial p} + \left[\frac{\partial^2 \phi^*}{\partial b^2} + \frac{\partial^2 \phi^*}{\partial b\,\partial x} \frac{\partial x}{\partial b}\right] \frac{db^*}{dp} + \frac{\partial^2 \phi^*}{\partial b\,\partial x} \frac{\partial x}{\partial p}$$
and
$$\frac{d}{dp}\left(\frac{\partial x}{\partial b}\right)^T = \frac{\partial^2 x}{\partial b\,\partial p} + \frac{\partial^2 x}{\partial b^2} \frac{db^*}{dp},$$
which include the second-order derivatives ∂²x/∂b∂p and ∂²x/∂b². The need for these second-order derivative terms increases the computational cost of applying Approach 3 in general engineering applications.
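The bordered system above can be assembled and solved for a toy problem whose answer is known in closed form. The problem min (b1 − p)² + b2² subject to 1 − b1 ≤ 0 is invented for illustration; at p0 = 0.5 the constraint is active with b* = (1, 0) and λ = 2(1 − p0), so the exact sensitivities are db*/dp = 0 and dλ/dp = −2.

```python
import numpy as np

# Toy problem (invented): min phi = (b1 - p)^2 + b2^2  s.t.  g = 1 - b1 <= 0.
# At p0 = 0.5 the constraint is active: b* = (1, 0) and lambda = 2(1 - p0) = 1.
p0  = 0.5
lam = 2.0 * (1.0 - p0)

# Second derivatives evaluated at the optimum:
d2phi_db2  = np.diag([2.0, 2.0])    # Hessian of phi w.r.t. b
d2g_db2    = np.zeros((2, 2))       # g is linear in b
d2g_dbdp   = np.zeros(2)            # g does not couple b and p
dg_db      = np.array([-1.0, 0.0])  # gradient of the tight constraint
d2phi_dbdp = np.array([-2.0, 0.0])  # cross derivative of phi
dg_dp      = 0.0

# Bordered system from differentiating the Kuhn-Tucker conditions:
A = np.block([[d2phi_db2 + lam * d2g_db2, dg_db.reshape(2, 1)],
              [dg_db.reshape(1, 2),       np.zeros((1, 1))]])
rhs = -np.concatenate([d2phi_dbdp + lam * d2g_dbdp, [dg_dp]])

sol = np.linalg.solve(A, rhs)
db_dp, dlam_dp = sol[:2], sol[2]
# Closed form: b* stays at (1, 0), so db*/dp = 0; lambda(p) = 2(1 - p), so dlam/dp = -2.
```

Even for this trivial case, every entry of the coefficient matrix is a second derivative, which illustrates why Approach 3 is the most expensive of the three.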
The following summary explains the common procedure for implementing the post-optimality sensitivity analysis. Based on the equations derived in Section 2, one can conduct post-optimality sensitivity analysis of the optimization design problems with respect to p at p 0 . First, the optimized design variables, b * , and the associated Lagrange multipliers, λ , are solved at the specified value of the problem parameter, p 0 . The state variables, x , must also be available at b * and p 0 . While MATLAB R2023b was used in the examples in this paper, there are many other computational software applications, such as Mathematica, that could be used effectively to accomplish the post-optimality sensitivity analysis.

3. Validation Examples

Four validation examples are presented in this section. The first three examples concern illustrative state equations, and the fourth example is built on a finite element model of a 2D heat sink problem. The first example is set not only to validate the post-optimality sensitivity equations of Approach 1 derived in Section 2, but also to demonstrate the use of the derived equations to investigate the effects of the problem parameters on the optimal formulation and solution. The second example extends Approach 1 to solve a problem with a minimax objective in addition to goal attainment. Example 3 will apply Approach 2, the feasible search direction method, to resolve the problem investigated in the first example. The last example involves the use of Approach 1 to conduct post-optimality sensitivity analysis of a circuit board cooling problem involving a finned heat sink. This engineering application example is analyzed using a 2D finite element model. Its optimization design problem is formulated with multiple objectives.

3.1. Example 1: A Structural Problem

The example presented in this section is an academic example which covers broad applications of engineering problems. It has two state equations which model the static problem as well as the eigenvalue problem. Both state equations share the same stiffness matrix, which is a function of the problem parameter, p . Consequently, this example problem demonstrates the post-optimality sensitivity analysis of an optimization design problem with respect to problem parameters involved in the state equations. Furthermore, it provides an opportunity to form a multi-objective design optimization problem; one is associated with the static equation and the other with the eigenvalue problem.
The design problem of concern is formulated with four design variables, b1, b2, b3, and b4; three state variables, x1, x2, and x3; and the fundamental eigenvalue, μ1. The optimization problem aims to minimize the work performed by the external force, f, subjected to three inequality constraints on deflection, stress, and the fundamental eigenvalue. The design optimization problem is mathematically cast as follows:
$$\min_{b \in \mathbb{R}^4} \phi = f^T x(b, p)$$
subject to the inequality constraints,
$$g_1(b, x(b,p)) = b_1 - 2x_1 + x_2 - 2 \le 0, \quad g_2(b, x(b,p)) = 4 b_2 \sin(x_3) - 3 x_1 \le 0, \quad g_3(b, \mu(b,p)) = 0.8 - 4\mu_1 \le 0,$$
and the bounds on the design variables,
$$0.1 \le b_1, b_2, b_3, b_4 \le 20$$
where the state variables, x = (x1, x2, x3)^T, and the fundamental eigenvalue, μ1, are the solutions of the following static matrix equation,
$$K(p)x - f = 0 = \begin{bmatrix} 5b_1 & -5b_1 & 0 \\ -5b_1 & 5b_1 + p\,b_2 + 5b_3 & -5b_3 \\ 0 & -5b_3 & 5b_3^2 + 5b_4^2 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} - \begin{Bmatrix} 20 \\ 1/b_2 \\ 15 \end{Bmatrix},$$
and the eigenvalue matrix equation,
$$K(p)y - \mu M y = 0 = \begin{bmatrix} 5b_1 & -5b_1 & 0 \\ -5b_1 & 5b_1 + p\,b_2 + 5b_3 & -5b_3 \\ 0 & -5b_3 & 5b_3^2 + 5b_4^2 \end{bmatrix} \begin{Bmatrix} y_1 \\ y_2 \\ y_3 \end{Bmatrix} - \mu \begin{bmatrix} b_1^2 + 5b_3 & 0 & 0 \\ 0 & 3b_2^2 + b_4^2 & 0 \\ 0 & 0 & b_3^2 + 5b_1 \end{bmatrix} \begin{Bmatrix} y_1 \\ y_2 \\ y_3 \end{Bmatrix}.$$
The problem parameter, p , appears in K ( 2,2 ) in both Equations (26) and (27). Its initial value, p 0 , is set to be 10.
The design optimization run starts with the initial design variables, b_0^T = [1, 1, 1, 1], which generate the initial objective, φ_0 = 163, and no constraint violations, with constraint values g^T = [−1.2053, −16.7501, −1.3372]. With user-provided gradients, the MATLAB built-in function, fmincon, reaches the optimal solution after 38 function evaluations. The objective function and the design variables at the optimal design solution are φ* = 8.8881 and b*^T = [10.2718, 11.3447, 11.1417, 0.1001], combined with two Lagrange multipliers, λ_2 = 0.0184 and λ_3 = 18.9916, which correspond to the tight constraints, g_2* = g_3* = 0. Note that the value of λ_2 is about 10³ times smaller than that of λ_3. The design derivative of the optimal objective with respect to the parameter, p, is −0.6065. It is calculated at p_0 = 10 from the following Equation (28), which was derived based on Equation (5):
$$\frac{d\phi^*}{dp} = \begin{Bmatrix} 20 - 3\lambda_2 \\ 1/b_2 \\ 15 + 4\lambda_2 b_2 \cos(x_3) \end{Bmatrix}^T \frac{\partial x}{\partial p} - 4\lambda_3 \frac{\partial \mu_1}{\partial p}.$$
Its accuracy can be validated with the results obtained via finite differencing:
$$\Delta\phi^* \approx \frac{d\phi^*}{dp} \Delta p.$$
The above sensitivity equation can be used to estimate the change in the optimal objective function due to the change in parameter, p , as
$$\Delta\phi^* = \phi^*(p_0 + \Delta p) - \phi^*(p_0).$$
The results of such comparison are listed in Table 1. This comparison is performed with a perturbation of the problem parameter, p , falling into a range of ± 5 % of p 0 .
Note that Equation (5) for dφ*/dp is derived under the assumption that the constraints involved remain active as the value of p changes. However, the bottom two rows of Table 1 present cases where Equation (5) still achieves satisfactory results even though the constraint, g_2, is not active. For this example, both λ_2 and its coefficient terms in Equation (5) are much smaller than the rest, which may explain why Equation (5) remains effective in estimating the change in the objective function due to the change in p, even though the constraint, g_2, switches from active at the base design, p_0 = 10, to inactive at the perturbed designs, p = 9.75 and p = 9.5. These two cases are presented in the last two rows of Table 1, with constraint values g_2 = −0.0244 and g_2 = −0.0717, respectively.
To further clarify this matter, one may add an additional parameter, α , to investigate the impact of the feasibility of the second constraint in Equation (2) as
$$g_2 = 4 b_2 \sin(x_3) - 3 x_1 + \alpha \le 0$$
to make g_2 active at the optimal design for a positive value of α. For example, with the parameter values p = 9.5 and α = 0.1, the second and third constraints remain active at the optimal design, with an objective function of 9.206274, and the total perturbation of the optimal objective function can be computed as follows:
$$\Delta\phi^* \approx \frac{d\phi^*}{dp} \Delta p + \frac{d\phi^*}{d\alpha} \Delta\alpha$$
where dφ*/dp can be computed according to Equation (28), whereas dφ*/dα is simply equal to λ_2 based on Equation (5). By setting the changes in the parameters, Δp = −0.5 and Δα = 0.1, one may estimate the change in the optimal objective function as
$$\Delta\phi^* \approx 0.3031 + 0.00184 = 0.30494,$$
which is close to the result obtained via finite differencing between two optimization runs, 0.3182. The numerical results of Equation (30) demonstrate that whether constraint g 2 is active or not has little influence on the change in optimal objective functions, Δ ϕ * .
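The finite-difference validation pattern used in this example, comparing the analytic derivative with the change in φ* measured between two optimization runs, can be sketched on a toy problem. The problem below is invented for illustration and is not the structural problem of this section; SciPy's SLSQP stands in for fmincon.

```python
from scipy.optimize import minimize

def phi_star(p):
    # Re-optimize  min (b1 - p)^2 + b2^2  s.t.  b1 >= 1  and return phi*.
    res = minimize(lambda b: (b[0] - p) ** 2 + b[1] ** 2, x0=[2.0, 1.0],
                   constraints=[{"type": "ineq", "fun": lambda b: b[0] - 1.0}],
                   method="SLSQP")
    return res.fun

p0, dp = 0.5, 0.05

# Analytic post-optimality sensitivity at p0 (constraint active, b1* = 1):
# dphi*/dp = partial phi/partial p at b* = -2(b1* - p0) = -2(1 - p0)
dphi_dp = -2.0 * (1.0 - p0)

# Predicted change vs. change measured from two optimization runs:
predicted = dphi_dp * dp
actual = phi_star(p0 + dp) - phi_star(p0)
# The two differ only by the second-order term, here dp^2 = 0.0025.
```

As in Table 1, the first-order prediction tracks the re-optimized change to within a second-order error in the perturbation size.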

3.2. Example 2: Minimax and Goal Attainment

The example problem presented herein comprises two objectives: the first is for the work performed by the external force, f^T x, to reach the value of 10, whereas the second is to make the second eigenvalue seven times higher than the first one. Based on the required format for minimax optimization, Equations (12) and (13), the objective function can be expressed mathematically as
$$\min_{b \in \mathbb{R}^4} \max_{i=1,2} \left\{ \phi_1 = f^T x - 10,\;\; \phi_2 = \frac{\mu_2}{\mu_1} - 7 \right\}.$$
The constraints and the state equations are identical to those employed by the single-objective optimization problem as stated in Equations (24)–(27). By introducing a new design variable, Z, the multi-objective function in Equation (31) can now be converted into a single objective, Z , as
$$\min_{b \in \mathbb{R}^n,\, Z} Z(b(p), p)$$
together with the following two additional constraints, beyond those presented in Equation (24):
$$g_{Z1} \equiv f^T x - 10 - Z(b(p), p) \le 0, \qquad g_{Z2} \equiv \frac{\mu_2}{\mu_1} - 7 - Z(b(p), p) \le 0.$$
As indicated in the above Equations (32) and (33), the objective function, Z , plays not only the role of a design variable but also a bound on the constraints.
The two design objectives stated in Equation (31) can also be formulated as the targeted objectives in a goal attainment problem such as
$$\min_{b\in R^4,\ \gamma}\ \gamma(b(p),\,p)$$
subjected to the additional constraint set as
$$\left.\begin{aligned} g_{Z1} &= f^T x - \gamma\, w_1 = 10\\ g_{Z2} &= \frac{\mu_2}{\mu_1} - \gamma\, w_2 = 7 \end{aligned}\right\}$$
where γ is the attainment factor, treated as the objective function, and w 1 and w 2 are the weighting coefficients provided by the users to measure the gap between the goals and the results of the optimal objective functions.
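The goal attainment reformulation can be sketched the same way; the two responses below are illustrative surrogates, not the paper's f^T x and μ_2/μ_1, and the bound γ ≥ 0 follows the paper's setting:

```python
import numpy as np
from scipy.optimize import minimize

goals = np.array([10.0, 7.0])   # targets for the two responses
w = np.array([1.0, 1.0])        # user-supplied attainment weights

# Surrogate responses standing in for f^T x and mu_2/mu_1 (illustrative).
def r1(b):
    return b[0] ** 2 + 1.0

def r2(b):
    return 3.0 * b[1]

# v = [b_1, b_2, gamma]; minimize gamma subject to r_i - gamma*w_i = goal_i.
cons = [
    {"type": "eq", "fun": lambda v: r1(v[:2]) - v[2] * w[0] - goals[0]},
    {"type": "eq", "fun": lambda v: r2(v[:2]) - v[2] * w[1] - goals[1]},
]
bounds = [(0.1, 20.0), (0.1, 20.0), (0.0, 20.0)]  # gamma bounded below by 0
res = minimize(lambda v: v[2], x0=[1.0, 1.0, 5.0],
               bounds=bounds, constraints=cons, method="SLSQP")
```

With γ bounded below by zero, the attainment factor is driven to zero exactly when both goals can be met, which is the behavior reported in Table 2.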
The optimization solution processes of these two problems, Equations (31)–(35), start with the same initial design variables as before, b_0 = [1., 1., 1., 1.], combined with the same bounds, 0.1 ≤ b ≤ 20. Meanwhile, the bounds on the new design variables, Z and γ, are set between 0 and 20. Setting the lowest values of Z and γ to zero ensures that the optimal objectives can match the targeted values.
To achieve this aim, the optimization process is performed repeatedly by adjusting the value of the problem parameter, p, from 10 to 10.0215, with the aid of the post-optimality sensitivity equation, Equation (14). The results are listed in Table 2. Both minimax and goal attainment achieve the targeted values of 10 and 7 for f^T x and μ_2/μ_1, respectively. Note that the column Δφ* listed in Table 2 is the targeted amount of reduction in the optimal objective function, evaluated as below based on the known values of dφ*/dp and Δp:
$$\Delta\phi^* = \frac{d\phi^*}{dp}\,\Delta p.$$
In this example, both the minimax and goal attainment problems produce very close optimal design variables, b * , and are subjected to the same form of the post-optimality sensitivity equation as below:
$$\frac{d\phi^*}{dp} = 4\lambda_3\,\frac{d\mu_1}{dp} + \lambda_4\,\frac{\partial (f^T x)}{\partial p} + \lambda_5\,\frac{\partial}{\partial p}\!\left(\frac{\mu_2}{\mu_1}\right).$$
However, they generate different values of d ϕ * / d p at b * , as they have different values of Lagrange multipliers. The detailed results are summarized in Table 2.

3.3. Example 3: The Feasible Search Direction Method

The post-optimality sensitivity analysis of the optimal solution of the structural problem described in Section 3.1 is revisited herein based on the feasible search direction method. The investigation starts with a problem parameter, p = 10, which leads to the optimal design, b*ᵀ = [10.2718, 11.3447, 11.1417, 0.1001], together with the post-optimality sensitivity, dφ*/dp = −0.6065. The sensitivity of the optimal design variables with respect to p at b* is estimated by central differencing with two more design optimization runs of the structural problem, at p = 10.01 and p = 9.99. The result is computed as
$$\frac{db^*}{dp} \approx [\,0.4950,\ 0.6800,\ 0.5600,\ 0.0050\,].$$
Based on the quadratic programming formulation, Equations (15) and (16), the search direction s_1 is solved at b* and p = 10 as
$$s_1 = [\,s_{1b}\ \ s_{1p}\,] = [\,0.2563,\ 0.0541,\ 0.1401,\ 0.0019,\ 0.3390\,].$$
The above value of s_1 can be substituted into Equation (18) to estimate the sensitivity of the optimal design variables as
$$\frac{db}{dp} \approx \frac{s_{1b}}{s_{1p}} = [\,0.7562,\ 0.1596,\ 0.4132,\ 0.0055\,],$$
which is different from the result of Equation (36), solved via central difference. Furthermore, the sensitivity of the objective function at b * with respect to the problem parameter, p, is now computed based on Equation (19) as
$$\frac{d\phi}{dp} \approx \left(\frac{\partial\phi}{\partial b} + \frac{\partial\phi}{\partial x}\frac{dx}{db}\right)\frac{s_{1b}}{s_{1p}} + \left(\frac{\partial\phi}{\partial p} + \frac{\partial\phi}{\partial x}\frac{dx}{dp}\right) = -0.5993.$$
Its value is close to the value, −0.6065, obtained via the post-optimality sensitivity of Approach 1, Equation (28). Nevertheless, the goal of the feasible search direction method is to find the minimum design point in an expanded design space which includes the parameter p as an additional design variable. This new minimum design point, however, is not an optimal design point, as it does not satisfy the Kuhn–Tucker necessary conditions defined in the original design space. In short, the feasible search direction method can be viewed as a method for post-optimality analysis, but not for post-optimality sensitivity analysis.
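The ratio prescribed by Equation (18) can be checked numerically from the reported search direction:

```python
import numpy as np

# Combined search direction from the quadratic programming step,
# s1 = [s1_b, s1_p], with the values reported in the example.
s1 = np.array([0.2563, 0.0541, 0.1401, 0.0019, 0.3390])
s1_b, s1_p = s1[:4], s1[4]

# Equation (18): db/dp is approximated by the ratio s1_b / s1_p.
db_dp = s1_b / s1_p
# db_dp ~ [0.7562, 0.1596, 0.4132, 0.0055], matching the text
```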

3.4. Example 4: Finned Heat Sink Design Optimization

A CPU for a computer with a heat transfer conduction surface area of 90 mm by 88.50 mm must be cooled to ensure that its surface contact temperature does not exceed T = 71 °C. The heat flux is constant across the contact area with the base of the heat sink. For simplicity, the thermal contact resistance is assumed to be negligible. The heat sink is composed of 6061-T651 aluminum with a specific heat capacity of C_p = 0.896 J/(g·K), a thermal conductivity of k = 167 W/(m·K), a coefficient of thermal expansion of α = 23.6 × 10⁻⁶ K⁻¹, and a modulus of elasticity of E = 68.9 GPa.
A graphical representation of the heat sink, upon which this example is based, is shown in Figure 1. The dimensions of the heat sink are also provided in Figure 2 below. The simplification of the geometry neglects the screw interference and assumes that all fins are the same length.
A single fan is used for forced cooling of the heat sink. The orientation of the fan is assumed to be such that the air is pushed lengthwise through the fins. The hydraulic diameter, D h , is provided with the formula for an open channel, with airflow moving in the direction of the arrows in Figure 3. The hydraulic diameter of a single channel of the heat sink is given as
$$D_h = \frac{4\,b\,(a - 2t)}{2(a - 2t) + b}$$
where t_b is the thickness of the base of the heat sink, and t is the thickness of one fin of the heat sink. The hydraulic diameter is used to measure the hydraulic volumetric efficiency. The variable a is the distance from the centerline of one fin to the centerline of the fin on the opposite side of the channel, which is approximately equal to the width of the channel. The variable b is the height of the fins.
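A small helper makes the geometry concrete; it is sketched under the assumption that the clear channel width is w = a − 2t, as in the formula above, and the sample values are illustrative, not taken from the paper:

```python
def hydraulic_diameter(a, b, t):
    """Hydraulic diameter D_h = 4*A/P of one open channel, with clear
    channel width w = a - 2*t and fin height b, per the formula above."""
    w = a - 2.0 * t          # clear width between adjacent fins
    return 4.0 * b * w / (2.0 * w + b)

# Illustrative values (mm): a = 5, b = 24, t = 1 gives w = 3 and
# D_h = 4*24*3 / (2*3 + 24) = 288/30 = 9.6 mm.
d_h = hydraulic_diameter(5.0, 24.0, 1.0)
```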
The fan moves air over the heat sink at 58 cfm at static pressure (Arctic P9 Max—ARCTIC GMBH, Braunschweig, Germany). At zero pressure differential, air will be pushed at a velocity of approximately v = 3.24   m / s while assuming a flow area of 92 × 92 mm2, which is the cross-sectional area of the fan. However, due to friction, the air velocity is zero at the bottom convection-facing surface of the heat sink and varies linearly with respect to the height above that zero-velocity boundary, until it reaches the maximum air velocity of 3.24   m / s . Additionally, the flow in the heat sink is assumed to be fully developed to reach a constant speed.
The design problem of concern is a 2D section of the model shown in Figure 3, which represents a typical channel of the heat sink where b and t model the height and the width of the fin, and t b and L model the thickness and length of the base plate. These four dimensions, b , t , t b , and L , will be considered as design variables. Three types of boundary conditions are applied to this 2D heat sink model. The first is a temperature boundary condition applied on the bottom edge of the base of the heat sink. The center point is subjected to the highest temperature, 71 °C, which will be linearly reduced to 55 °C at the edge points of the base. The second is a zero-heat flux boundary condition applied to the left and right edges of the heat sink base, and the third is a convection boundary condition applied to the top edges of the heat sink base and all three edges of the fins. Specifically, the thermal convection condition is formulated using the following equation:
$$-k\,\frac{\partial T}{\partial n} = h\,(T - T_\infty)\qquad \text{on } \Gamma_h$$
where the ambient temperature, T_∞, was set at a constant 25 °C, and the convection coefficient, h_0, was assumed to be 0.002924 W/(m²·K). The 2D finite element equation of this heat sink model is formulated as
$$(K + K_h)\,T = f_h + f$$
which is subject to the prescribed temperature boundary conditions along the base plate.
The optimization problem is formulated as a minimax problem. The objectives are to deliver the most efficient heat sink by maximizing the cooling rate, Q_h, and the hydraulic diameter, D_h, while minimizing the weight, W_t. The cooling rate is measured by the heat flux extracted through convection over the surface exposed to the cooling flow. It is computed using the following equation:
$$Q_h = \int_{\Gamma_h} h\,(T - T_\infty)\,dA = L_h\, h\,(T - T_\infty)$$
where h is the heat convection coefficient, and T and L_h are the surface temperature and the area of the cooling fin exposed to the air flow, respectively. The hydraulic diameter in our case is computed as
$$D_h = \frac{10\,b}{2.5 - 2t + 2t_b}.$$
The weight of the cooling unit is
$$W_t = 5\,t_b + 2\,b\,t.$$
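These measures can be collected into small helper functions; note that the operator placement in the D_h expression above is our reading of the printed text, so treat that helper as a sketch:

```python
def hydraulic_diameter(b, t, t_b):
    """D_h for this design: 10*b / (2.5 - 2*t + 2*t_b).
    Operator placement reconstructed from the printed text (a sketch)."""
    return 10.0 * b / (2.5 - 2.0 * t + 2.0 * t_b)

def weight(b, t, t_b):
    """Weight measure W_t = 5*t_b + 2*b*t (a 2D area proxy, mm^2)."""
    return 5.0 * t_b + 2.0 * b * t

def cooling_rate(L_h, h, T_surf, T_inf=25.0):
    """Cooling rate Q_h = L_h * h * (T - T_inf) over the exposed surface."""
    return L_h * h * (T_surf - T_inf)
```

With the initial design b = 24, t = 1, t_b = 2.5, the weight measure evaluates to 5(2.5) + 2(24)(1) = 60.5 mm².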
Specifically, the mathematical formulation of the multi-objective design problem is stated as follows
$$\max_{b\in R^4}\ \min_{i=1,2,3}\ \left\{\ Q_h(b, T(b)),\quad D_h(b),\quad 1/W_t(b)\ \right\}$$
subjected to a maximal thermal stress constraint
$$\sigma \le \sigma_{all}(b) = 110\ \text{N/mm}^2.$$
The minimax problem defined in Equation (40) can be recast into a single objective one as
$$\max_{b\in R^4,\ Z}\ Z$$
subjected to three constraints
$$\left.\begin{aligned} g_1 &= Q_h - Z \ge 0\\ g_2 &= D_h - Z \ge 0\\ g_3 &= 1/W_t - Z \ge 0 \end{aligned}\right\}$$
together with the stress constraint stated in Equation (41).
The problem begins with the design variables b = 24 mm, t = 1 mm, t_b = 2.5 mm, and Z = 1, which satisfy all constraints. The MATLAB built-in function fmincon is then applied to solve the above optimization problem. The optimal design reaches b = 22.0002 mm, t = 1.0217 mm, t_b = 2 mm, and Z = 1.1009. The optimal design raises the cooling rate by 10% from the initial 0.5829 watts to 0.6417 watts. It is, however, under the control of two tight constraints, g_1 = g_3 = 0, with associated Lagrange multipliers λ_1 = 0.9071 and λ_3 = 0.0929. This result implies that the optimal design achieves two objectives, maximizing cooling while minimizing weight. The comparison between the optimized dimensions and the initial ones is displayed in Figure 4, and the temperature distribution of the optimal design is displayed in Figure 5. The lowest temperature, 23.8862 °C, is found along the top edge of the fin, and the highest temperature, 71 °C, is at the center of the baseline.
Post-optimality sensitivity analysis is performed hereafter to investigate the impact of the air velocity on the maximal cooling rate. The air velocity, v, is related to the heat convection coefficient through h(v) = 1.16 [10.45 − v + 10√v] [27], with units of W/(m²·K). The post-optimality sensitivity derivative of the maximal cooling rate with respect to the convection coefficient, h, can be computed analytically based upon Equation (14), as
$$\frac{dQ_h^*}{dh} = \lambda_1\,\frac{\partial}{\partial h}\!\left(L_h^*\,h\,(T^* - T_\infty)\right) = \lambda_1\, L_h^*\,(T^* - T_\infty),$$
which produces a result of 77.5564 mm²·K, a value close to the result obtained by conducting finite differencing between two optimization runs, 78.6594 mm²·K. Estimating the change in the maximal cooling rate due to a change in air velocity can then be performed conveniently via the chain rule as
$$\frac{dQ_h^*}{dv} = \frac{dh}{dv}\,\frac{dQ_h^*}{dh} = 1.16\left[-1 + \frac{5}{\sqrt{v}}\right]\frac{dQ_h^*}{dh}.$$
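The correlation and the chain rule can be checked numerically; `h_conv` and `dh_dv` are our helper names, and the sensitivity value is the one quoted above:

```python
import math

def h_conv(v):
    """Empirical air-convection correlation from ref. [27]:
    h(v) = 1.16 * (10.45 - v + 10*sqrt(v)), in W/(m^2*K)."""
    return 1.16 * (10.45 - v + 10.0 * math.sqrt(v))

def dh_dv(v):
    """Analytic derivative of the correlation: 1.16 * (-1 + 5/sqrt(v))."""
    return 1.16 * (-1.0 + 5.0 / math.sqrt(v))

# Chain rule dQ*/dv = (dh/dv) * (dQ*/dh) at the fan speed v = 3.24 m/s,
# using the post-optimality sensitivity quoted in the text.
dQ_dh = 77.5564      # mm^2 * K
v = 3.24
dQ_dv = dh_dv(v) * dQ_dh
```

A two-sided finite difference of `h_conv` reproduces the analytic derivative, confirming the sign pattern of the bracketed term.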

4. Conclusions: Remarks and Discussion

The goal of post-optimality sensitivity analysis is to investigate the impact of a problem parameter on the value of the optimized design objective. This aim is achieved by computing the derivative of the objective function with respect to the problem parameter at the optimized design point. The concept and application of post-optimality sensitivity analysis are not new, having been developed in the early 1980s.
The published methods can be grouped into three approaches. Approach 1 differentiates the optimized objective function with respect to the parameter p first; the derived equation is then simplified by taking the Kuhn–Tucker necessary condition into consideration. This is the most efficient of the three, as it involves only first-order derivatives. Approach 2 is developed based on the feasible search direction method to find the steepest descent direction that updates the design variables at the optimal design point, maximizing the reduction in the objective function without violating constraints. Approach 2 is in fact a method for post-optimality analysis, not for post-optimality sensitivity analysis. Approach 3 is the most comprehensive: it differentiates the Kuhn–Tucker necessary condition directly with respect to the problem parameter, p, to find the post-optimality sensitivities of the optimal design variables, in addition to those of the associated Lagrange multipliers. Approach 3 is computationally expensive, as it requires solving a linear system with the first-order derivatives of the optimal design variables and the Lagrange multipliers as unknowns.
However, most published papers did not include the state variable in their derived post-optimality sensitivity equations. In order to broaden the engineering applications of post-optimality sensitivity analysis, we derived new post-optimality sensitivity equations based on Approaches 1 and 3, which explicitly include the state variables and their first-order derivatives with respect to the problem parameter. The feasible search direction method is also reformulated to account for the state variables. The newly derived post-optimality sensitivity equations are validated for both single- and multi-objective problems by using an illustrative example with static and eigenvalue state equations. Lastly, an engineering application, the design optimization of a finned heat sink for a circuit board cooling problem, is formulated and solved as a multi-objective problem. Post-optimality sensitivity analysis is performed thereafter to investigate the impact of air velocity on the maximal cooling rate.
Sensitivity analysis can be categorized into two groups, local vs. global [28]. The scope of this paper is limited to the local sensitivity of the optimal objective at an isolated optimal design point. Global sensitivity analysis, in contrast, is mostly conducted from a statistical point of view to investigate the impact on the output of wide variations in the input parameters and their interactions. In other words, global sensitivity analysis is associated with uncertainty. The current methods deal with deterministic problems and variations in deterministic optimal solutions. It is thus desirable to extend the current research in the near future to include time-dependent and stochastic state equations in post-optimality global sensitivity analysis.

Author Contributions

Conceptualization, G.H. and J.D.; methodology, G.H.; software, G.H. and J.D.; validation, G.H. and J.D.; formal analysis, G.H. and J.D.; investigation, G.H. and J.D.; resources, G.H. and J.D.; data curation, G.H. and J.D.; writing—original draft preparation, G.H. and J.D.; writing—review and editing, G.H. and J.D.; visualization, G.H. and J.D.; supervision, G.H.; project administration, G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data and detailed equation derivations are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The post-optimality sensitivity equations of the augmented objective function with respect to the problem parameter, p, presented in Equations (5)–(8) in Section 2.1, are derived in detail in this Appendix. Equations (5) and (6) result from the direct differentiation method for sensitivity analysis, while Equations (7) and (8) are associated with the adjoint variable method. The presentation is grouped into two parts accordingly.

Appendix A.1. Direct Differentiation Method

The objective function is augmented by the product of Lagrange multipliers and the tight constraints as
$$\bar{\phi}^*\big(x(b^*(p),p),\,b^*(p),\,p\big) = \phi^*\big(x(b^*(p),p),\,b^*(p),\,p\big) + \lambda^T(p)\,g^*\big(x(b^*(p),p),\,b^*(p),\,p\big)$$
where p is the problem parameter and x(b*(p), p) is the state variable vector, the solution of the state equation, Equation (3), Q(x(b*(p), p), b, p) = 0. Furthermore, g*(x(b*(p), p), b*(p), p) = 0 is the collection of the tight constraints at the optimal design point. Note that the value of the optimized design variable, b*, is now a function of the problem parameter, p. Assume that the function b*(p) is continuous and differentiable. Differentiating Equation (A1) with respect to p yields
$$\begin{aligned}\frac{d\phi^*}{dp} &= \left(\frac{\partial\phi^*}{\partial p} + \lambda^T\frac{\partial g^*}{\partial p}\right) + \left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right)\frac{\partial x}{\partial p} + \frac{d\lambda^T}{dp}\,g^* + \left(\left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right)\frac{\partial x}{\partial b} + \frac{\partial\phi^*}{\partial b} + \lambda^T\frac{\partial g^*}{\partial b}\right)\frac{db^*}{dp}\\ &= \left(\frac{\partial\phi^*}{\partial p} + \lambda^T\frac{\partial g^*}{\partial p}\right) + \left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right)\frac{\partial x}{\partial p}.\end{aligned}$$
The above equation reduces to Equation (5) because g* = 0 and the last bracketed term vanishes due to the Kuhn–Tucker necessary condition stated in Equation (4). The derivative of the state variable, ∂x/∂p, can be obtained by solving Equation (6), which results from differentiating the state equation,
$$\left(\frac{\partial Q^*}{\partial x}\right)\frac{\partial x}{\partial p} = -\,\frac{\partial Q^*}{\partial p}.$$

Appendix A.2. Adjoint Variable Method

The objective function is now augmented with two more terms: one is the product of the Lagrange multipliers and the tight constraints, while the other is the product of the adjoint variable vector, η, and the state equation, Equation (3). Differentiating with respect to p yields
$$\begin{aligned}\frac{d\phi^*}{dp} &= \frac{\partial\phi^*}{\partial p} + \lambda^T\frac{\partial g^*}{\partial p} + \left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right)\frac{\partial x}{\partial p} + \frac{d\eta^T}{dp}\,Q^* + \eta^T\left(\frac{\partial Q^*}{\partial x}\left(\frac{\partial x}{\partial p} + \frac{\partial x}{\partial b}\frac{db}{dp}\right) + \frac{\partial Q^*}{\partial b}\frac{db}{dp} + \frac{\partial Q^*}{\partial p}\right)\\ &= \frac{\partial\phi^*}{\partial p} + \lambda^T\frac{\partial g^*}{\partial p} + \eta^T\frac{\partial Q^*}{\partial p} + \left[\left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right) + \eta^T\frac{\partial Q^*}{\partial x}\right]\frac{\partial x}{\partial p}.\end{aligned}$$
Note that the second line of the above equation is simplified with the help of the state equation, Q* = 0, and the identity obtained by differentiating the state equation with respect to the design variables,
$$\left(\frac{\partial Q^*}{\partial x}\right)\frac{\partial x}{\partial b} = -\,\frac{\partial Q^*}{\partial b}.$$
Next, by setting the coefficient row vector of ∂x/∂p in the last term on the right of Equation (A3) to zero, one has the following equation:
$$\left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right) + \eta^T\left(\frac{\partial Q^*}{\partial x}\right) = 0^T.$$
By taking its transpose, the above equation results in the adjoint variable equation, which can be solved for the adjoint variable, η ,
$$\left(\frac{\partial Q^*}{\partial x}\right)^T \eta = -\left(\frac{\partial\phi^*}{\partial x} + \lambda^T\frac{\partial g^*}{\partial x}\right)^T.$$
Equation (A3) can then be further simplified to obtain Equation (7) as
$$\frac{d\phi^*}{dp} = \frac{\partial\phi^*}{\partial p} + \lambda^T\frac{\partial g^*}{\partial p} + \eta^T\frac{\partial Q^*}{\partial p}.$$

References

  1. Vanderplaats, G.N.; Yoshida, N. Efficient Calculation of Optimum Design Sensitivity. AIAA J. 1985, 23, 1798–1803. [Google Scholar] [CrossRef]
  2. Academic Press, Inc. Design Optimization, Notes and Reports in Mathematics in Science and Engineering, 1st ed.; Gero, J.S., Ed.; Academic Press, Inc.: St. Louis, MO, USA, 1985; Volume 1, pp. 267–268. ISBN 978-012-280-910-1. [Google Scholar]
  3. Venkat, V.; Jacobson, S.H.; Stori, J.A. A Post-Optimality Analysis Algorithm for Multi-Objective Optimization. Comput. Optim. Appl. 2004, 28, 357–372. [Google Scholar] [CrossRef]
  4. Arias, A.M.; Mores, P.L.; Scenna, N.J.; Mussati, S.F. Optimal design and sensitivity analysis of post-combustion CO2 capture process by chemical absorption with amines. J. Clean. Prod. 2016, 115, 315–331. [Google Scholar] [CrossRef]
  5. Cherif, M.S. A Novel Behavioral Penalty Function for Interval Goal Programming with Post-Optimality Analysis. Decis. Anal. J. 2024, 12, 100511. [Google Scholar] [CrossRef]
  6. Wang, W.; Caro, S.; Bennis, F.; Soto, R.; Crawford, B. Multi-objective Robust Optimization using a Post-optimality Sensitivity Analysis Technique: Application to a Wind Turbine Design. J. Mech. Des. 2014, 137, 011403. [Google Scholar] [CrossRef]
  7. Onwubiko, C. Introduction to Engineering Design Optimization; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 2000; pp. 203–205. ISBN 978-020-147-673-6. [Google Scholar]
  8. Haug, E.J.; Arora, J.S. Applied Optimal Design: Mechanical and Structural Systems; John Wiley & Sons, Inc.: New York, NY, USA, 1979; pp. 160–161. ISBN 978-047-104-170-2. [Google Scholar]
  9. Belegundu, A.D.; Chandrupatla, T.R. Optimization Concepts and Applications in Engineering, 3rd ed.; Cambridge University Press: New York, NY, USA, 2019; pp. 205–208. ISBN 978-110-842-488-2. [Google Scholar]
  10. Rao, S.S. Engineering Optimization: Theory and Practice, 4th ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2009; pp. 751–755. ISBN 978-111-945-471-7. [Google Scholar]
  11. Fiacco, A.V.; Ghaemi, A. Sensitivity Analysis of a Nonlinear Structural Design Problem. Comput. Oper. Res. 1982, 9, 29–55. [Google Scholar] [CrossRef]
  12. Barthelemy, J.-F.M.; Sobieszczanski-Sobieski, J. Optimum Sensitivity Derivatives of Objective Functions in Nonlinear Programming. AIAA J. 1983, 21, 913–915. [Google Scholar] [CrossRef]
  13. Diewert, W.E. Sensitivity Analysis in Economics. Comput. Oper. Res. 1984, 11, 141–156. [Google Scholar] [CrossRef]
  14. Braun, R.D.; Kroo, I.M. Post-Optimality Analysis in Aerospace Vehicle Design. In Proceedings of the AIAA Aircraft Design, Systems and Operations Meeting, Monterey, CA, USA, 11–13 August 1993. [Google Scholar] [CrossRef]
  15. Hart, J.; Van Bloemen Waanders, B. Hyper-Differential Sensitivity Analysis with Respect to Model Discrepancy: Optimal Solution Updating. Comput. Methods Appl. Mech. Eng. 2023, 412, 116082. [Google Scholar] [CrossRef]
  16. Enevoldsen, I. Sensitivity Analysis of Reliability Based-Optimization Solution. J. Eng. Mech. 1994, 120, 198–205. [Google Scholar] [CrossRef]
  17. Youn, B.D.; Choi, K.K. Hybrid Analysis Method for Reliability-Based Design Optimization. J. Mech. Des. 2003, 125, 221–232. [Google Scholar] [CrossRef]
  18. Hou, G.J.-W. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization; NASA/CR-2004-213002; National Aeronautics and Space Administration: Langley Research Center: Hampton, VA, USA, 2004. [Google Scholar]
  19. Jerez, D.J.; Jensen, H.A.; Valdebenito, M.A.; Misraji, M.A.; Mayorga, F.; Beer, M. On the Use of Directional Importance Sampling for Reliability-Based Design and Optimum Design Sensitivity of Linear Stochastic Structures. Probabilistic Eng. Mech. 2022, 70, 103368. [Google Scholar] [CrossRef]
  20. Baldomir, A.; Hernandez, S.; Diaz, J.; Fontan, A. Sensitivity analysis of optimum solutions by updating active constraints: Application in aircraft structural design. Struct. Multidiscip. Optim. 2011, 44, 797–814. [Google Scholar] [CrossRef]
  21. Royster, L.A.; Hou, G. Gradient-Based Trade-Off Design for Engineering Applications. Designs 2023, 7, 81. [Google Scholar] [CrossRef]
  22. Koltai, T.; Tatay, V. A Practical Approach to Sensitivity Analysis in Linear Programming under Degeneracy for Management Decision Making. Int. J. Prod. Econ. 2011, 131, 392–398. [Google Scholar] [CrossRef]
  23. Vakilifard, H.; Esmalifalak, H.; Behzadpoor, M. Profit Optimization with Post Optimality Analysis Using Linear Programming. World J. Soc. Sci. 2013, 3, 127–137. [Google Scholar]
  24. Bonnans, J.F.; Shapiro, A. Optimization Problems with Perturbations: A Guided Tour. SIAM Rev. 1998, 40, 228–264. [Google Scholar] [CrossRef]
  25. Pirnay, H.; López-Negrete, R.; Biegler, L.T. Optimal sensitivity based on IPOPT. Math. Program. Comput. 2012, 4, 307–331. [Google Scholar] [CrossRef]
  26. Sobieszczanski-Sobieski, J.; Barthelemy, J.-F.M.; Riley, K.M. Sensitivity of Optimum Solutions to Problem Parameters. AIAA J. 1982, 20, 1291–1299. [Google Scholar] [CrossRef]
  27. The Engineering ToolBox. Understanding Convective Heat Transfer: Coefficients, Formulas & Examples. 2003. Available online: https://www.engineeringtoolbox.com/convective-heat-transfer-d_430.html (accessed on 9 August 2025).
  28. Li, D.; Jiang, P.; Hu, C.; Yan, T. Comparison of Local and Global Sensitivity Analysis Methods and Application to Thermal Hydraulic Phenomena. Prog. Nucl. Energy 2023, 158, 104612. [Google Scholar] [CrossRef]
Figure 1. Typical heat sink for a circuit card cooling system.
Figure 2. Detailed view of the dimensions of the finned heat sink base and fins.
Figure 3. Open channel from the finned heat sink with the direction of airflow depicted by arrows.
Figure 4. Dimension comparison between the initial and optimized designs with the solid line representing the original profile and the dashed line representing the optimized profile.
Figure 5. Temperature distribution in a single channel of the finned heat sink.
Table 1. Post-optimal sensitivity analysis of Example 1 at p_0 = 10.0.

| Perturbation Δp/p | Initial Obj. φ0 | Optimal Obj. φ* | Active Constraints at b* | Finite Diff. Δφ* | Estimated (dφ*/dp) Δp |
|---|---|---|---|---|---|
| 0.50/10.0 | 160.673 | 8.6001 | g2, g3 | −0.2880 | −0.3031 |
| 0.25/10.0 | 161.813 | 8.7403 | g2, g3 | −0.1478 | −0.1515 |
| 0.10/10.0 | 162.519 | 8.8280 | g2, g3 | −0.0601 | −0.0606 |
| 0.00/10.0 | 163.000 | 8.8881 | g2, g3 | 0.0 | 0.0 |
| −0.10/10.0 | 163.487 | 8.9494 | g2, g3 | 0.0613 | 0.0606 |
| −0.25/10.0 | 164.234 | 9.0435 | g3 | 0.1554 | 0.1515 |
| −0.50/10.0 | 165.520 | 9.2059 | g3 | 0.3178 | 0.3031 |
Table 2. Post-optimality sensitivity analysis for minimax and goal attainment problems.

| Method | p | b*ᵀ | fᵀx | μ2/μ1 | dφ*/dp | Δφ* |
|---|---|---|---|---|---|---|
| Minimax | 10.0000 | [10.1882, 6.7067, 12.9216, 0.1010] | 10.0071 | 7.0050 | −0.0331 | 7.1089 × 10⁻⁴ |
| Minimax | 10.0215 | [10.1952, 6.7055, 12.9201, 0.2441] | 10.0000 | 7.0000 | −0.0044 | 1.2008 × 10⁻⁴ |
| Goal Attain. | 10.0000 | [10.1858, 6.6953, 12.9263, 0.1000] | 10.0128 | 7.0128 | −0.5958 | 1.2785 × 10⁻² |
| Goal Attain. | 10.0215 | [10.1955, 6.7054, 12.9189, 0.2463] | 10.0000 | 7.0000 | 8.1003 × 10⁻⁸ | 8.1003 × 10⁻⁸ |

Share and Cite

MDPI and ACS Style

Hou, G.; DeGroff, J. Gradient-Based, Post-Optimality Sensitivity Analysis with Respect to Parameters of State Equations. Designs 2026, 10, 11. https://doi.org/10.3390/designs10010011

