Article

A Life-Cycle Technology Upgrade Scheduling Model

by
Massimiliano Caramia
Dipartimento di Ingegneria dell’Impresa, University of Rome Tor Vergata, Via del Politecnico, 1, 00133 Rome, Italy
Algorithms 2026, 19(3), 223; https://doi.org/10.3390/a19030223
Submission received: 5 February 2026 / Revised: 5 March 2026 / Accepted: 12 March 2026 / Published: 16 March 2026
(This article belongs to the Special Issue 2026 and 2027 Selected Papers from Algorithms Editorial Board Members)

Abstract

Technology upgrades are a central lever for sustainability, yet many optimization models primarily account for use-phase emissions and treat embodied impacts and technological change exogenously. We propose a multi-period mixed-integer optimization framework that couples upgrade timing, technology choice, and operations with a life-cycle assessment (LCA) structure. The model (i) separates use-phase and embodied impacts at the transition level, (ii) supports time-weighted valuation of impacts through a flexible weighting sequence (time value of carbon), and (iii) incorporates endogenous learning-by-doing that can reduce both investment costs and embodied impacts of future upgrades. We derive an exact Benders (L-shaped) decomposition that separates discrete upgrade dynamics from a linear operating subproblem. Computational experiments illustrate model behavior and report runtimes under an outer-loop implementation with open-source solvers, highlighting that decomposition becomes most beneficial when extensions substantially enlarge the dispatch layer (e.g., scenario expansion). Experiments also show that ignoring embodied impacts can mis-rank upgrade schedules and even violate life-cycle caps, that stronger time-weighting pushes upgrades earlier, and that learning can make staged upgrades economically preferable.

1. Introduction and Background

Technology upgrades—retrofits, replacements, and stepwise modernization of assets—are among the most effective operational levers for reducing environmental burdens while maintaining service levels. Yet deciding when to upgrade and what to upgrade to remains difficult to support quantitatively, because sustainability consequences are distributed across the entire life cycle and are coupled with operations. Life-cycle assessment (LCA) provides the standard accounting logic for environmental impacts by tracking burdens from raw material extraction through manufacturing, use, and end-of-life [1]. However, many operational models still evaluate environmental performance primarily through use-phase impacts (e.g., operating emissions), potentially mis-ranking upgrade policies whenever embodied burdens are material. Three features make upgrade planning fundamentally different from static technology choice.
1.
Embodied vs. use-phase trade-offs. Upgrades typically incur near-term embodied impacts (manufacturing, installation, decommissioning) while reducing future use-phase impacts (energy consumption, operating emissions, maintenance). Ignoring embodied impacts can lead to recommending upgrades that appear clean operationally but are not compliant under life-cycle accounting.
2.
Time-weighted impacts. The same cumulative emissions profile can be valued differently depending on the urgency placed on early reductions. Recent work in buildings and carbon accounting emphasizes that timing can materially affect the “time value of carbon” perspective [2,3]. This motivates explicit time-weighting of impacts in planning models rather than relying on a single aggregate total.
3.
Technology evolves and can learn endogenously. Investment costs may decline as cumulative deployment increases (learning-by-doing), and supply chains can decarbonize with scale, potentially lowering the embodied impacts of future upgrades as well. Endogenous technological learning is well developed in energy-system models [4,5,6,7], but it is rarely integrated at the level of asset-specific upgrade scheduling with explicit embodied/use-phase separation.
We study a scenario in which a planner manages a set of assets over a multi-period horizon. In each period, each asset occupies exactly one technology state. The planner chooses (i) state transitions at the start of periods and (ii) operations (how much service each asset produces). Sustainability is evaluated with LCA-consistent accounting that separates embodied impacts of transitions from use-phase impacts induced by operations. We consider cap-type instruments (a global life-cycle budget). Throughout, we use “impact” generically; the computational study focuses on greenhouse-gas emissions (kgCO2e).

Contributions

This paper addresses a structural gap between (i) asset renewal/replacement models with environmental criteria, (ii) LCA-driven optimization, and (iii) endogenous-learning technology planning: existing approaches typically incorporate only subsets of (a) explicit upgrade scheduling at the asset level, (b) life-cycle-consistent separation of embodied and use-phase impacts, (c) explicit valuation of impact timing, and (d) endogenous technological change within the same optimization model. We contribute along two dimensions.
Modeling contributions.
1.
Asset-level life-cycle upgrade scheduling with transition-level embodied accounting. We formulate a multi-period MILP in which technology evolution is represented by binary state variables and transition (upgrade) variables. This enables embodied impacts to be attributed exactly at the time of upgrade through transition-level coefficients, while use-phase impacts are induced by operations. The resulting accounting avoids double-counting and makes the embodied/use-phase trade-off explicit within a single planning model.
2.
Time-weighted life-cycle sustainability constraints. We incorporate a general time-weighting sequence $\{\gamma_t\}$ inside the life-cycle cap, allowing the planner to encode urgency (“time value of impacts”) without conflating it with impact-characterization dynamics. This provides a transparent and easily stress-tested mechanism to assess how timing preferences shift optimal upgrade timing and technology choice.
3.
Endogenous learning that affects both investment cost and embodied impacts. We extend the core model with a tractable MILP learning-by-doing mechanism based on tiered experience curves. Unlike cost-only learning treatments common in system models, the proposed extension can endogenously reduce both investment cost and embodied impacts of future upgrades as cumulative deployment grows, preserving linearity via exact binary linking.
Algorithmic contribution. We derive an exact L-shaped (Benders) decomposition for the core cap-based formulation that separates discrete upgrade dynamics from continuous dispatch. The decomposition exploits the fact that all coupling between scheduling and operations occurs through capacity activation and the global life-cycle cap, yielding valid optimality/feasibility cuts from the dual of the dispatch LP. This separation is particularly relevant in extensions that inflate the dispatch layer (e.g., scenario expansion), where keeping the integer master small becomes essential.
Computational contribution. We provide computational evidence isolating the behavioral effects of (i) life-cycle vs. operational-only accounting, (ii) time-weighting, and (iii) learning, and we benchmark monolithic MILP versus decomposition under deterministic and scenario-expanded variants using open-source solvers.
The paper is organized as follows. Section 2 reviews the literature and positions the contribution. Section 3 presents the model, including time-weighting and the learning extension. Section 4 develops the decomposition algorithm. Section 5 reports computational experiments. Section 6 concludes the paper.

2. Related Work and Positioning

This paper sits at the intersection of four research streams: (i) asset replacement and renewal with environmental criteria, (ii) life-cycle optimization and its integration with operational planning, (iii) technology adoption under environmental regulation, and (iv) endogenous technological learning. Each stream captures part of the upgrade–sustainability problem, but none on its own provides a unified optimization model that jointly (a) schedules upgrades over time at the asset level, (b) accounts for embodied and use-phase impacts consistently, (c) values impact timing, and (d) internalizes technological change through learning.

2.1. Replacement and Renewal Models with Environmental Criteria

A classical line of work extends equipment replacement decisions by adding environmental burdens to the economic objective or constraints. The “green renewal” formulation in Sloan [8] shows that including environmental terms can change optimal replacement timing under technological improvement. Related sustainable asset-management models integrate repair/replacement choices with environmental impacts and additional realism, such as maintenance quality and risk [9]. In parallel, LCA-focused replacement studies emphasize that replacement can reduce operational burdens yet increase near-term embodied burdens and, therefore, the timing and baseline assumptions matter [10,11].
Recent work on fleet electrification and transition planning often focuses on operational feasibility under infrastructure constraints (e.g., assignment and charging decisions for mixed fleets) and highlights that operational layers can be large even for moderate fleet sizes [12]. Meanwhile, vehicle LCA programs (e.g., Green NCAP) explicitly quantify the split between production, operation (including energy supply), and end-of-life phases and provide updated parameterizations for EU electricity mixes and technology assumptions [13]. These contributions reinforce the relevance of embodied vs. use-phase trade-offs in practice; however, replacement/renewal models typically (i) treat life-cycle impacts at a relatively aggregated level, (ii) do not explicitly separate embodied impacts triggered by replacement from operational impacts within a unified upgrade–dispatch model, and (iii) do not incorporate explicit time-weighting or endogenous learning within the same asset-level scheduling formulation.

2.2. Life-Cycle Optimization and Consistent Integration of LCA with Operations

Life-cycle optimization embeds life-cycle indicators into optimization to design or operate systems under sustainability criteria; reviews and applications are prominent in waste management and circular economy planning [14]. A closely related line in supply chains formalizes integration of supply-chain optimization with LCA computation to reduce inconsistencies arising from sequential “optimize then assess” workflows [15]. Beyond optimization-centric work, methodological LCA research has proposed explicitly time-dependent (“dynamic”) impact assessment—particularly for climate impacts—to reflect that the marginal impact of an emission can depend on when it occurs [16]. Recent surveys indicate that dynamic and prospective LCA (including time-varying parameters such as grid decarbonization) is an active and expanding literature [17,18].
We emphasize that this methodological time dependence is conceptually distinct from the decision-maker valuation of timing that we model through the weighting sequence $\{\gamma_t\}$ (“time value of carbon”): dynamic LCA concerns impact characterization over time, whereas $\gamma_t$ encodes a preference or policy convention for early versus late impacts. The two perspectives are compatible: one can use dynamic-LCA-inspired characterization to compute period impacts and then apply a policy weighting $\gamma_t$ in a planning model. This stream provides strong foundations for consistent life-cycle accounting inside optimization, but it is often centered on network/system design rather than asset-level upgrade scheduling with explicit state transitions. Moreover, explicit time-weighting and endogenous learning are not typically core elements in asset-specific upgrade planning models.

2.3. Illustration: Dynamic Characterization vs. Time-Value Weighting

For the sake of exemplification, consider a two-period example with emissions $e_1, e_2$ (kgCO2e) and an impact characterization that may vary by time. Dynamic LCA changes the characterization of an emission (e.g., due to time-dependent marginal impacts), while $\gamma_t$ changes the valuation of a given characterized impact. Table 1 shows the distinction: even when characterized impacts are identical, different $\gamma_t$ values can change which timing pattern is preferred.
If $\gamma_1 > \gamma_2$ (early impacts valued more), the weighted cap penalizes Profile A more heavily, even though totals match. Dynamic LCA, by contrast, would modify $(\tilde e_1, \tilde e_2)$ itself (e.g., via time-dependent characterization factors) before any policy weighting is applied.
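The distinction can be made concrete with a small numeric sketch. The emission profiles below are hypothetical (they are not the values of Table 1): both have the same total, but Profile A is front-loaded and Profile B back-loaded.

```python
# Hypothetical two-period emission profiles with identical totals (kgCO2e):
profile_a = [8.0, 2.0]  # front-loaded (early-heavy)
profile_b = [2.0, 8.0]  # back-loaded (late-heavy)

def weighted_total(profile, gammas):
    """Time-weighted life-cycle total: sum over t of gamma_t * e_t."""
    return sum(g * e for g, e in zip(gammas, profile))

# Unweighted (gamma_t = 1), the two profiles are indistinguishable.
assert weighted_total(profile_a, [1.0, 1.0]) == weighted_total(profile_b, [1.0, 1.0])

# With gamma_1 > gamma_2, early impacts count more: Profile A is penalized.
gammas = [1.0, 0.5]
print(weighted_total(profile_a, gammas))  # 8.0 + 0.5*2.0 = 9.0
print(weighted_total(profile_b, gammas))  # 2.0 + 0.5*8.0 = 6.0
```

Under a binding weighted cap, the back-loaded profile would thus leave more headroom even though cumulative emissions are identical.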

2.4. Technology Adoption and Upgrades Under Policy Instruments

A third stream models adoption or replacement under regulatory instruments such as caps, trading, and subsidies. Recent work shows that adoption responses to caps and trading can be non-monotone in policy parameters and can exhibit unintuitive incentives [19]. Fleet replacement and retrofit decisions under emissions trading and policy uncertainty have been studied with multiple technology options and allowance mechanics [20,21]. These models richly represent regulatory constraints and compliance mechanisms, yet they commonly focus on operational emissions; embodied impacts are often neglected or simplified, and explicit valuation of early versus late impacts is rarely modeled. Parallel empirical and LCA-oriented work on electric vehicles increasingly emphasizes that supply-chain and manufacturing emissions can be material and heterogeneous across life-cycle stages, motivating explicit treatment of embodied burdens in transition decisions [22].
Our work is particularly inspired by the framework of Rajabian et al. [20] on optimal replacement/retrofit under an emissions trading system. While both approaches optimize multi-period technology transitions under emissions constraints, our formulation differs in several key ways. First, we model upgrades at the asset level via binary state and transition variables, rather than through age-structured fleet stock variables with purchase/salvage and inventory flows. Second, we explicitly couple upgrade choices with a continuous dispatch/operations decision, which enables an exact L-shaped (Benders) decomposition separating discrete upgrade dynamics from a linear operating subproblem [23,24,25]. Third, we adopt a life-cycle accounting structure that separates embodied impacts at the transition level and allows explicit time-weighting of impacts via $\{\gamma_t\}$; in contrast, ETS (cap-and-trade) formulations typically emphasize period-by-period compliance mechanisms (e.g., trading/banking) rather than horizon-wide time-weighted life-cycle caps. Finally, we extend the core model with an endogenous learning-by-doing mechanism that can reduce both investment costs and embodied impacts of future upgrades.

2.5. Endogenous Technological Learning

Endogenous learning-by-doing is well established in energy system models, where investment cost depends on cumulative deployment and may strongly influence optimal timing of adoption [5,6,7]. Empirically, this is often grounded in experience-curve evidence dating back to early learning-curve measurements [26] and later empirical learning-rate syntheses in energy technologies [27]. Modern discussions emphasize modeling choices and complexity trade-offs [4]. In the context of electrification, recent empirical work on the EV battery industry quantifies learning-by-doing effects and highlights policy interactions that influence observed cost declines, reinforcing the importance of endogenous learning mechanisms in forward-looking planning models [28]. This tradition is powerful in capturing experience curves and endogenous cost change, but it is typically applied at an aggregated system level rather than at the granularity of asset-specific upgrades with explicit embodied and operational life-cycle accounting. In addition, learning is most often applied to costs; learning-driven reductions of embodied impacts are less commonly modeled endogenously.

2.6. Gap and Modeling Contributions

The above strands are complementary but partially separated. The gap is the absence of a single optimization model that simultaneously: (i) schedules asset-level upgrades through discrete state transitions; (ii) accounts for both embodied and use-phase impacts within an LCA-compatible structure; (iii) values impact timing through explicit time-weighting; and (iv) internalizes technological change through learning mechanisms that can affect both investment costs and embodied impacts.
To address this gap, we introduce a new model integrating these elements in a unified mixed-integer optimization framework. Table 2 summarizes which ingredients are typically present in representative models from each stream and which are combined here.
Representative fleet or renewal models typically use age-structured stock variables and purchase/salvage flows, whereas our formulation represents technology evolution through explicit binary transition variables coupled with state variables. This modeling choice has two structural implications: (i) embodied impacts are attributed directly at the transition level rather than amortized over stock flows; and (ii) the formulation admits an exact separation between discrete upgrade dynamics and continuous dispatch, enabling the L-shaped decomposition in Section 4.1. To our knowledge, this exact decomposition structure has not been derived for asset-level life-cycle upgrade scheduling with explicit embodied/use-phase separation and time-weighted caps.
While each modeling ingredient (life-cycle accounting, replacement timing, endogenous learning, or time-weighting) exists separately in prior work, the literature lacks a unified asset-level MILP that integrates all four simultaneously under a single life-cycle cap constraint with exact decomposition. The contribution is, therefore, integrative and structural rather than incremental along a single dimension.

3. Problem Setting and Model Formulation

We study a multi-period planning problem in which a decision maker operates a set of assets (e.g., machines, vehicles, production lines, or buildings) to meet exogenous service requirements. In the core model, each asset can operate under one of several technology states (also referred to as levels). Upgrading an asset changes its technology state and typically entails (i) an investment cost and an embodied life-cycle impact (manufacture, installation, disposal effects triggered by the transition), while enabling (ii) different operational cost and use-phase impact coefficients over time. The planner chooses upgrade timing and operating levels to minimize discounted economic cost, subject to operational feasibility and a life-cycle sustainability requirement that accounts for both use-phase and embodied impacts, possibly with explicit time-weighting. Section 3.6 extends the core model by endogenizing investment and embodied-impact coefficients through learning-by-doing while preserving mixed-integer linear structure.

3.1. Notation and Timing Convention

3.1.1. Sets and Indices

Let $\mathcal{T} = \{1, \dots, T\}$ denote operating periods and $\mathcal{T}_0 = \{0, 1, \dots, T\}$ denote time indices including the initial time. Let $I = \{1, \dots, |I|\}$ be the set of assets and $K = \{1, \dots, |K|\}$ the set of technology states. Let $\mathcal{A} \subseteq K \times K$ be the set of allowed state transitions; a pair $(k, \ell) \in \mathcal{A}$ means an asset may move from $k$ to $\ell$ at the beginning of a period (capturing retrofit or replacement).

3.1.2. Timing Convention

Decisions are made at period boundaries. The state variable $x_{i,k,t}$ indicates the technology state of asset $i$ at the start of period $t$; operations during period $t$ use that state. Upgrade decisions $u_{i,k\ell,t}$ occur between $t-1$ and $t$, i.e., at the beginning of period $t$, and therefore affect $x_{i,\cdot,t}$. We will clarify this assumption after the model presentation.

3.1.3. Decision Variables

  • State (binary): $x_{i,k,t} \in \{0,1\}$ equals 1 if asset $i$ is in state $k$ at the start of period $t$.
  • Upgrade (binary): $u_{i,k\ell,t} \in \{0,1\}$ equals 1 if asset $i$ transitions from $k$ to $\ell$ at the start of period $t$.
  • Operations (continuous): $y_{i,k,t} \ge 0$ is the service produced by asset $i$ during period $t$ while in state $k$ (state-disaggregated production).
Additional decision variables for learning (Section 3.6).
  • Cumulative deployment (continuous): $q_{f,t} \ge 0$ is the cumulative deployment of learning family $f$ up to period $t$.
  • Tier selection (binary): $\xi_{f,t,s} \in \{0,1\}$ selects learning tier $s \in \{1, \dots, S\}$ for family $f$ in period $t$.
  • Tier–upgrade linking (binary): $z_{i,k\ell,t,s} \in \{0,1\}$ equals 1 if upgrade $(k \to \ell)$ occurs at $t$ and tier $s$ is active for $f = \mathrm{fam}(\ell)$.

3.1.4. Parameters

Demand in period $t$ is $D_t \ge 0$. Capacity of asset $i$ in state $k$ at time $t$ is $\bar y_{i,k,t} \ge 0$. Operating cost per unit service is $c^O_{i,k,t} \ge 0$. Upgrade (investment) cost is $c^I_{i,k\ell,t} \ge 0$.
For a chosen impact category (we write emissions in kgCO2e, but the structure supports any life-cycle impact), the use-phase impact per unit service is $e^U_{i,k,t} \ge 0$ (consistent with standard life-cycle assessment settings, where use-phase impacts represent emissions or environmental burdens) and the embodied impact of transition $(k, \ell)$ is $e^B_{i,k\ell,t} \ge 0$. In the learning extension, effective coefficients are derived endogenously from baseline values $\bar c^I_{i,k\ell,t}$ and $\bar e^B_{i,k\ell,t}$ via tier multipliers and linking variables.
Economic discounting uses $\delta_t = (1+\rho)^{-(t-1)}$ and time-weighting of impacts uses $\gamma_t = (1+\rho_C)^{-(t-1)}$, with $\rho$ the financial discount rate and $\rho_C$ a climate/time-weighting rate (any exogenous $\{\gamma_t\}$ may be used). Finally, $\bar E$ denotes an allowable time-weighted life-cycle impact budget (cap).
The rate $\rho_C$ in $\gamma_t = (1+\rho_C)^{-(t-1)}$ is not a physical characterization factor; it represents a policy or decision-maker convention for valuing early versus late impacts (“time value of carbon”). A higher $\rho_C$ increases the relative weight of early impacts and therefore tends to shift upgrades earlier under a binding cap. In applications, $\rho_C$ can be stress-tested over a plausible range and selected to reflect organizational targets (e.g., near-term compliance constraints) or policy conventions prioritizing early abatement.
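As a quick numeric sketch, the two weight sequences $\delta_t$ and $\gamma_t$ can be computed side by side; the rates below are illustrative, not calibrated values.

```python
def weights(rate, T):
    """Per-period weights w_t = (1 + rate)^-(t-1) for t = 1, ..., T."""
    return [(1.0 + rate) ** -(t - 1) for t in range(1, T + 1)]

rho, rho_C, T = 0.05, 0.10, 5   # hypothetical financial and time-weighting rates
delta = weights(rho, T)          # economic discount factors delta_t
gamma = weights(rho_C, T)        # time-value-of-carbon weights gamma_t

# Both sequences start at 1 in period 1.
assert delta[0] == 1.0 and gamma[0] == 1.0
# With rho_C > rho, gamma decays faster: late impacts count for relatively
# less than late costs, which pushes abatement (upgrades) earlier.
assert all(g <= d for g, d in zip(gamma, delta))
```

Setting `rho_C = 0` recovers an unweighted life-cycle cap ($\gamma_t \equiv 1$).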

3.2. State Feasibility and Upgrade Dynamics

3.2.1. Unique State per Asset and Time

Each asset is in exactly one state at each time:
$$\sum_{k \in K} x_{i,k,t} = 1 \quad \forall\, i \in I,\ t \in \mathcal{T}_0. \qquad (1)$$
Initial states $x_{i,k,0}$ are given.

3.2.2. Stock–Flow Identity

The evolution of each state $\ell$ is driven by incoming and outgoing transitions:
$$x_{i,\ell,t} = x_{i,\ell,t-1} + \sum_{k:(k,\ell) \in \mathcal{A}} u_{i,k\ell,t} - \sum_{m:(\ell,m) \in \mathcal{A}} u_{i,\ell m,t} \quad \forall\, i \in I,\ \ell \in K,\ t \in \mathcal{T}. \qquad (2)$$

3.2.3. Transitions Only from the Current State

An asset can only upgrade out of its realized state at $t-1$:
$$\sum_{\ell:(k,\ell) \in \mathcal{A}} u_{i,k\ell,t} \le x_{i,k,t-1} \quad \forall\, i \in I,\ k \in K,\ t \in \mathcal{T}. \qquad (3)$$
Because $x_{i,k,t-1}$ is binary and $\sum_{k \in K} x_{i,k,t-1} = 1$, (3) already implies at most one upgrade per asset per period.
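The upgrade dynamics above can be sketched procedurally for a single asset: the stock-flow identity (2), the single-state constraint (1), and the current-state rule (3) are checked as the state vector is propagated. The three-state upgrade path below is hypothetical.

```python
def evolve(initial_state, upgrades, num_states):
    """Propagate one asset's one-hot state vector through the stock-flow
    identity; `upgrades[t]` is None or a transition pair (k, l) applied
    at the start of period t+1."""
    x = [0] * num_states
    x[initial_state] = 1
    history = [list(x)]
    for move in upgrades:
        if move is not None:
            k, l = move
            # Rule (3): an asset may only upgrade out of its current state.
            assert x[k] == 1, "upgrade must leave the realized state"
            x[k] -= 1   # outgoing transition
            x[l] += 1   # incoming transition
        # Constraint (1): exactly one state per asset and period.
        assert sum(x) == 1
        history.append(list(x))
    return history

# Hypothetical path: start in state 0, upgrade 0->1 at t=2, then 1->2 at t=4.
hist = evolve(0, [None, (0, 1), None, (1, 2)], num_states=3)
print(hist[-1])  # [0, 0, 1]
```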

3.3. Operations and Demand Satisfaction

3.3.1. Capacity Coupling to State

State-disaggregated service is constrained by state-dependent capacity:
$$0 \le y_{i,k,t} \le \bar y_{i,k,t}\, x_{i,k,t} \quad \forall\, i \in I,\ k \in K,\ t \in \mathcal{T}. \qquad (4)$$
Thus, if $x_{i,k,t} = 0$ then $y_{i,k,t} = 0$, while if $x_{i,k,t} = 1$ then production is possible up to $\bar y_{i,k,t}$.

3.3.2. Demand Satisfaction

Total service must meet the demand in each period:
$$\sum_{i \in I} \sum_{k \in K} y_{i,k,t} \ge D_t \quad \forall\, t \in \mathcal{T}. \qquad (5)$$
Constraint (5) is written with “$\ge$” to allow over-supply in applications with storage, curtailment, or conservative feasibility requirements. In the present cost-minimizing formulation, however, over-supply is typically not optimal: since $c^O_{i,k,t} \ge 0$ and $y_{i,k,t} \ge 0$, any increase in total production above $D_t$ weakly increases the objective and (when $e^U_{i,k,t} > 0$) also increases use-phase impacts. Therefore, whenever meeting demand exactly is feasible, (5) will be tight at optimality and behaves like an equality constraint. If an application requires exact matching, one may replace (5) with an equality; if $c^O_{i,k,t}$ can be zero (degeneracy), a small penalty on excess/curtailment can be added to ensure tightness.

3.4. Life-Cycle Impact Accounting with Explicit Time-Weighting

We separate (i) use-phase impacts due to operation and (ii) embodied impacts triggered by upgrades.

3.4.1. Use-Phase Impacts

Use-phase impacts in period $t$ are linear in operations:
$$E^U_t = \sum_{i \in I} \sum_{k \in K} e^U_{i,k,t}\, y_{i,k,t}. \qquad (6)$$

3.4.2. Embodied Impacts

Embodied impacts in period $t$ are linear in upgrade actions:
$$E^B_t = \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} e^B_{i,k\ell,t}\, u_{i,k\ell,t}. \qquad (7)$$
The coefficient $e^B_{i,k\ell,t}$ represents the incremental life-cycle burdens attributable to performing the transition at time $t$ (manufacturing/installation of the new technology and end-of-life/disposal effects induced by replacing the old one). This transition-level accounting avoids double-counting and cleanly separates near-term embodied burdens from long-run operational savings.

3.4.3. Time-Weighted Life-Cycle Total

The time-weighted life-cycle impact over the horizon is
$$E(x,u,y) = \sum_{t \in \mathcal{T}} \gamma_t \left( E^U_t + E^B_t \right). \qquad (8)$$
The weights $\gamma_t$ can represent a preference for earlier reductions (time value of impacts) or other policy-relevant timing conventions.

3.5. Objective and the Overall Core Cap-Based Model

Operating ($C^O_t$) and investment ($C^I_t$) costs in period $t$ are:
$$C^O_t = \sum_{i \in I} \sum_{k \in K} c^O_{i,k,t}\, y_{i,k,t}, \qquad C^I_t = \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} c^I_{i,k\ell,t}\, u_{i,k\ell,t}. \qquad (9)$$
The core model minimizes discounted economic cost subject to upgrade dynamics, feasible operations, and a global time-weighted life-cycle cap:
$$\min_{x,u,y}\ \sum_{t \in \mathcal{T}} \delta_t \left( \sum_{i \in I} \sum_{k \in K} c^O_{i,k,t}\, y_{i,k,t} + \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} c^I_{i,k\ell,t}\, u_{i,k\ell,t} \right) \quad \text{s.t.}\ (1), (2), (3), (4), (5), \qquad (10)$$
$$\sum_{t \in \mathcal{T}} \gamma_t \left( \sum_{i \in I} \sum_{k \in K} e^U_{i,k,t}\, y_{i,k,t} + \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} e^B_{i,k\ell,t}\, u_{i,k\ell,t} \right) \le \bar E, \qquad x_{i,k,t} \in \{0,1\},\ u_{i,k\ell,t} \in \{0,1\},\ y_{i,k,t} \ge 0. \qquad (11)$$
In applications, the cap $\bar E$ can be calibrated using reference policies. A practical approach is to compute: (i) a baseline policy with no upgrades, yielding $E_{\mathrm{base}}$; and (ii) a fast-upgrade benchmark (earliest feasible upgrades with emissions-minimizing dispatch), yielding $E_{\mathrm{fast}}$. One can then set $\bar E = (1-\omega)\, E_{\mathrm{base}} + \omega\, E_{\mathrm{fast}}$ for $\omega \in [0,1]$, which guarantees interpretability and (empirically) feasibility on generated instances. Alternatively, for case studies, one can use a relative cap $\bar E = \tau\, E_{\mathrm{base}}$ to express a target percentage reduction. If feasibility under a strict cap is uncertain, a practical safeguard is to slacken the cap with a nonnegative variable and penalize violations heavily in the objective (see implementation notes), which guarantees feasibility and quantifies minimal cap violations when the original problem is infeasible.
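The interpolated cap calibration can be sketched directly; the reference totals below are hypothetical placeholders for $E_{\mathrm{base}}$ and $E_{\mathrm{fast}}$.

```python
def calibrated_cap(E_base, E_fast, omega):
    """Cap E_bar = (1 - omega) * E_base + omega * E_fast, omega in [0, 1]:
    omega = 0 reproduces the no-upgrade baseline, omega = 1 the
    fast-upgrade benchmark."""
    assert 0.0 <= omega <= 1.0
    return (1.0 - omega) * E_base + omega * E_fast

# Hypothetical reference totals (kgCO2e).
E_base, E_fast = 1000.0, 400.0
print(calibrated_cap(E_base, E_fast, 0.0))  # 1000.0 (cap not binding vs baseline)
print(calibrated_cap(E_base, E_fast, 0.5))  # 700.0
print(calibrated_cap(E_base, E_fast, 1.0))  # 400.0 (as tight as the benchmark)
```

Since the fast-upgrade benchmark is feasible by construction, any $\omega \in [0,1]$ keeps the cap between two achievable totals, which is what makes the calibration empirically safe.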

3.6. Endogenous Learning-by-Doing (Model Extension)

We model learning-by-doing through experience curves in which cumulative deployment reduces unit investment costs and (optionally) the embodied impacts of future upgrades. Because experience-curve relations are nonlinear and would lead to bilinear terms when combined with binary upgrade decisions, we adopt a tiered approximation with linear linking variables, preserving a mixed-integer linear formulation.

3.6.1. Technology Families and Cumulative Deployment

Let $F$ denote a set of technology families that learn (e.g., a technology state $\ell$ belongs to family $f = \mathrm{fam}(\ell)$). Let $q_{f,t} \ge 0$ denote cumulative deployment of family $f$ up to and including period $t$, with $q_{f,0}$ given. A simple count-based deployment model is:
$$q_{f,t} = q_{f,t-1} + \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}:\, \mathrm{fam}(\ell) = f} u_{i,k\ell,t} \quad \forall\, f \in F,\ t \in \mathcal{T}. \qquad (12)$$

3.6.2. Experience-Curve Form

For each family $f$, baseline coefficients $\bar c^I_{i,k\ell,t}$ and $\bar e^B_{i,k\ell,t}$ are given. Conceptually, learning follows experience curves $c^I \propto q^{-b^I_f}$ and $e^B \propto q^{-b^B_f}$, with exponents $b^I_f \ge 0$ (cost) and $b^B_f \ge 0$ (embodied impacts). To avoid undefined expressions when $q_{f,0} = 0$, we assume either (i) $q_{f,0} \ge 1$ or (ii) $q_{f,0}$ is replaced by a small positive reference level in multiplier precomputation.

3.6.3. Tiered Learning Approximation and Linearization

Fix thresholds $Q_{f,0} \le Q_{f,1} < \cdots < Q_{f,S}$ for each family $f$, where $Q_{f,0} := q_{f,0}$. Precompute multiplicative factors for each tier $s = 1, \dots, S$:
$$\alpha^I_{f,s} = \left( \frac{Q_{f,s-1}}{Q_{f,0}} \right)^{-b^I_f}, \qquad \alpha^B_{f,s} = \left( \frac{Q_{f,s-1}}{Q_{f,0}} \right)^{-b^B_f}, \qquad s = 1, \dots, S,$$
and interpret tier $s$ as $q_{f,t-1} \in [Q_{f,s-1}, Q_{f,s})$ for $s < S$ and $q_{f,t-1} \in [Q_{f,S-1}, Q_{f,S}]$ for $s = S$. Introduce binaries $\xi_{f,t,s} \in \{0,1\}$ selecting one tier per $(f,t)$:
$$\sum_{s=1}^{S} \xi_{f,t,s} = 1 \quad \forall\, f \in F,\ t \in \mathcal{T}. \qquad (13)$$
Tier selection depends on cumulative deployment before decisions in period $t$ (learning affects upgrades executed at $t$ through experience accumulated up to $t-1$). We enforce consistency with big-M bounds:
$$q_{f,t-1} \ge Q_{f,s-1} - M (1 - \xi_{f,t,s}) \quad \forall\, f \in F,\ t \in \mathcal{T},\ s = 1, \dots, S, \qquad (14)$$
$$q_{f,t-1} \le Q_{f,s} + M (1 - \xi_{f,t,s}) \quad \forall\, f \in F,\ t \in \mathcal{T},\ s = 1, \dots, S, \qquad (15)$$
where $M$ is a sufficiently large constant. A safe default upper bound for count-based deployment is
$$q_{f,t} \le q_{f,0} + |I|\,|\mathcal{T}|,$$
since at most one upgrade per asset per period contributes to any family. Therefore, a conservative choice is $M = q_{f,0} + |I|\,|\mathcal{T}|$. In practice, tighter per-family bounds can be used to reduce $M$ (e.g., if only a subset of transitions map to family $f$). Reducing $M$ strengthens LP relaxations and improves numerical stability; an overly small $M$ may incorrectly cut off feasible solutions.
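The tier multipliers and the conservative big-M bound can be precomputed offline; the thresholds and learning exponent below are hypothetical (an exponent $b \approx 0.32$ corresponds roughly to a 20% cost reduction per doubling of deployment).

```python
def tier_multipliers(thresholds, b):
    """alpha_s = (Q_{s-1} / Q_0)^-b for tiers s = 1, ..., S, where
    `thresholds` = [Q_0, Q_1, ..., Q_S] and b >= 0 is the learning exponent."""
    Q0 = thresholds[0]
    return [(thresholds[s - 1] / Q0) ** -b for s in range(1, len(thresholds))]

# Hypothetical family: q_{f,0} = 1, thresholds 1 <= 2 < 4 < 8, exponent b = 0.32.
Q = [1, 2, 4, 8]
alpha = tier_multipliers(Q, 0.32)
assert alpha[0] == 1.0        # first tier keeps the baseline coefficients
assert alpha[2] < alpha[1] < alpha[0]  # deeper tiers cheapen future upgrades

# Conservative big-M: count-based deployment never exceeds q_{f,0} + |I|*|T|
# (at most one upgrade per asset per period can feed a family).
q0, num_assets, num_periods = 1, 10, 12
M = q0 + num_assets * num_periods
assert M >= Q[-1]             # M must dominate every threshold actually used
```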
The tier-adjusted coefficient multiplies an upgrade decision in both the objective and the cap. Writing $\bar c^I_{i,k\ell,t} \sum_s \alpha^I_{f,s}\, \xi_{f,t,s}\, u_{i,k\ell,t}$ would be bilinear. To retain linearity, introduce linking binaries $z_{i,k\ell,t,s} \in \{0,1\}$ indicating that upgrade $(k \to \ell)$ at time $t$ is executed and tier $s$ is active for family $f = \mathrm{fam}(\ell)$:
$$\sum_{s=1}^{S} z_{i,k\ell,t,s} = u_{i,k\ell,t} \quad \forall\, i \in I,\ (k,\ell) \in \mathcal{A},\ t \in \mathcal{T}, \qquad (16)$$
$$z_{i,k\ell,t,s} \le \xi_{f,t,s} \quad \forall\, i \in I,\ (k,\ell) \in \mathcal{A},\ t \in \mathcal{T},\ s = 1, \dots, S,\ f = \mathrm{fam}(\ell), \qquad (17)$$
$$z_{i,k\ell,t,s} \le u_{i,k\ell,t} \quad \forall\, i \in I,\ (k,\ell) \in \mathcal{A},\ t \in \mathcal{T},\ s = 1, \dots, S, \qquad (18)$$
$$z_{i,k\ell,t,s} \ge u_{i,k\ell,t} + \xi_{f,t,s} - 1 \quad \forall\, i \in I,\ (k,\ell) \in \mathcal{A},\ t \in \mathcal{T},\ s = 1, \dots, S,\ f = \mathrm{fam}(\ell). \qquad (19)$$
These constraints enforce $z_{i,k\ell,t,s} = u_{i,k\ell,t} \cdot \xi_{f,t,s}$ exactly for binary variables. Consider one family $f$, one period $t$, and two tiers $s \in \{1,2\}$ with $\xi_{f,t,1} + \xi_{f,t,2} = 1$. If an upgrade $(k \to \ell)$ occurs at time $t$, then $u_{i,k\ell,t} = 1$ and the linking constraints enforce that exactly one $z_{i,k\ell,t,s}$ equals 1 (the active tier), while the other equals 0. For instance, if $\xi_{f,t,2} = 1$ and $u_{i,k\ell,t} = 1$, then the constraints imply $z_{i,k\ell,t,2} = 1$ and $z_{i,k\ell,t,1} = 0$, so the tier-adjusted cost and embodied terms become linear sums of the form $\bar c^I_{i,k\ell,t}\, \alpha^I_{f,2}\, z_{i,k\ell,t,2}$ and $\bar e^B_{i,k\ell,t}\, \alpha^B_{f,2}\, z_{i,k\ell,t,2}$.
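Because all variables involved are binary, the exactness of the linearization can be checked exhaustively. The sketch below enumerates, for one $(i, k\!\to\!\ell, t, s)$ combination, every binary assignment and verifies that the three inequality constraints admit exactly $z = u \cdot \xi$ (the equality $\sum_s z = u$ then follows when exactly one tier is active).

```python
from itertools import product

def linking_feasible(z, u, xi):
    """The three inequality linking constraints for a single (i, k->l, t, s):
    z <= xi, z <= u, z >= u + xi - 1."""
    return (z <= xi) and (z <= u) and (z >= u + xi - 1)

# For every binary (u, xi), the only feasible binary z is the product u * xi.
for u, xi in product([0, 1], repeat=2):
    feasible_z = [z for z in (0, 1) if linking_feasible(z, u, xi)]
    assert feasible_z == [u * xi]
print("linearization is exact on binaries")
```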

3.6.4. Embodied Impacts (Learning Variant)

Under learning, the embodied-impact accounting uses tier-adjusted linearized contributions:
$$E^B_t = \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} \bar e^B_{i,k\ell,t} \sum_{s=1}^{S} \alpha^B_{f,s}\, z_{i,k\ell,t,s}, \qquad f = \mathrm{fam}(\ell). \qquad (20)$$

3.6.5. Investment Cost (Learning Variant)

With learning, the investment cost in period $t$ is:
$$C^I_t = \sum_{i \in I} \sum_{(k,\ell) \in \mathcal{A}} \bar c^I_{i,k\ell,t} \sum_{s=1}^{S} \alpha^I_{f,s}\, z_{i,k\ell,t,s}, \qquad f = \mathrm{fam}(\ell). \qquad (21)$$

3.6.6. Interpretation

Learning creates an intertemporal incentive to invest earlier: early upgrades increase $q_{f,t}$ via (12), which can reduce the effective cost (and, if modeled, embodied impacts) of later upgrades through tier selection (13)–(15) and the linearized contributions in (20) and (21).

3.6.7. Learning-Extended MILP

To obtain the learning variant, keep all constraints (1)–(5) and the use-phase term in (11), add the learning constraints (12)–(19), and then: (i) replace the investment-cost term in (10) by (21); and (ii) replace the embodied-impact term in (11) by (20).

3.6.8. On Timing Convention and Upgrade Availability

As we specified in Section 3.1, upgrade decisions u i , k , t occur between periods t 1 and t. The state x i , k , t represents the technology available at the start of period t, and operations during period t use that state. Thus, in the baseline formulation, upgrades are assumed to be completed before operations in the same period begin (“overnight upgrade” assumption), so that no within-period downtime is modeled.
This convention is not restrictive. If upgrades require downtime or lead time, the model can be modified while preserving linearity: (i) full downtime can be represented by reducing available capacity in the upgrade period; (ii) partial downtime by multiplying upgrade indicators by a downtime fraction; and (iii) lead time by shifting state activation to period t + 1 . Specifically:
(i)
Full-period downtime: If upgrades consume the entire period, one can reduce or nullify available capacity in the upgrade period by modifying the capacity constraint to
y_{i,ℓ,t} ≤ ȳ_{i,ℓ,t} ( x_{i,ℓ,t} − Σ_k u_{i,k,ℓ,t} ).
(ii)
Partial downtime: A downtime fraction θ [ 0 , 1 ] can be introduced so that
y_{i,ℓ,t} ≤ ȳ_{i,ℓ,t} ( x_{i,ℓ,t} − θ Σ_k u_{i,k,ℓ,t} ).
(iii)
Lead time: If an upgrade executed at t becomes available only at t + 1 , one may shift the stock-flow equation so that u i , k , t affects x i , , t + 1 rather than x i , , t .
These variants remain linear and do not alter the validity of the solution approach described in the next section.
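The first two variants amount to different right-hand sides of the capacity constraint; a minimal sketch with illustrative names and data:

```python
def capacity_rhs(y_bar, x, u_sum, variant="baseline", theta=0.5):
    """Right-hand side of the capacity constraint y <= RHS for one (i, l, t).
    y_bar: nominal capacity; x: state indicator (0/1);
    u_sum: sum of upgrade indicators into state l at time t."""
    if variant == "baseline":            # overnight upgrade, no downtime
        return y_bar * x
    if variant == "full":                # full-period downtime
        return y_bar * (x - u_sum)
    if variant == "partial":             # fraction theta of the period lost
        return y_bar * (x - theta * u_sum)
    raise ValueError(variant)

# An asset upgraded into state l at t (x = 1, u_sum = 1):
assert capacity_rhs(10.0, 1, 1, "baseline") == 10.0
assert capacity_rhs(10.0, 1, 1, "full") == 0.0
assert capacity_rhs(10.0, 1, 1, "partial", theta=0.5) == 5.0
```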

3.6.9. Extension to Multiple Impact Categories and Constraints

While the model and the main experiments that follow focus on a single aggregated life-cycle impact cap, the model structure readily accommodates multiple impact categories. Let C denote a set of impact categories (e.g., GHG emissions, acidification, particulate matter). For each c ∈ C, we define use-phase and embodied impacts e^{U,c}_{i,k,t} and e^{B,c}_{i,k,ℓ,t}, and introduce a separate life-cycle cap Ē^c.
The life-cycle constraint then generalizes to
Σ_{t∈T} γ_t ( Σ_{i∈I} Σ_{k∈K} e^{U,c}_{i,k,t} y_{i,k,t} + Σ_{i∈I} Σ_{(k,ℓ)∈A} e^{B,c}_{i,k,ℓ,t} u_{i,k,ℓ,t} ) ≤ Ē^c   ∀ c ∈ C.
This extension preserves the mixed-integer linear structure of the model and does not affect the solution approach presented in the next section.

4. Solution Approach

We present an exact Benders (L-shaped) decomposition for the core cap-based MILP (10) and (11). The key separability is between (i) discrete upgrade dynamics ( x , u ) and (ii) continuous operations/dispatch y. We present the decomposition for the core model to keep the exposition minimal and the master problem small; the same separation applies under the learning linearization in Section 3.6, although learning adds master-side variables and constraints.

4.1. Exact Benders Decomposition for the Core Cap-Based MILP

The core MILP (10) and (11) separates into: (i) discrete upgrade dynamics ( x , u ) and (ii) continuous operations/dispatch y. All coupling occurs through (a) the capacity links (4) and (b) the global cap (11). The master chooses ( x , u ) , and the subproblem optimizes y and returns either an optimality cut (if feasible) or a feasibility cut (if infeasible).

4.1.1. Master Problem (Discrete Upgrade Dynamics)

The master contains the state-transition constraints and investment costs, and uses Θ as a lower bound on operating (dispatch) cost:
min Σ_{t∈T} δ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} c^I_{i,k,ℓ,t} u_{i,k,ℓ,t} + Θ.
Strengthening Constraints
To reduce obvious subproblem infeasibility and accelerate convergence, we add two valid inequalities:
Σ_{i∈I} Σ_{k∈K} ȳ_{i,k,t} x_{i,k,t} ≥ D_t   ∀ t ∈ T,
Σ_{t∈T} γ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} e^B_{i,k,ℓ,t} u_{i,k,ℓ,t} ≤ Ē,
which enforce that (i) installed capacity suffices to meet demand in each period and (ii) embodied impacts alone cannot violate the cap (use-phase impacts are nonnegative as stated in Section 3.1; in the presence of negative use-phase impacts, this strengthening is not guaranteed to be valid).
Master Formulation
Let C denote the current set of Benders cuts. The master is:
min_{x,u,Θ} Σ_{t∈T} δ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} c^I_{i,k,ℓ,t} u_{i,k,ℓ,t} + Θ   s.t. (1), (2), (3), (22), (23), Θ ≥ 0,
Θ ≥ α_c − Σ_{i,k,t} β^c_{i,k,t} x_{i,k,t} + Σ_{i,(k,ℓ),t} η^c_{i,k,ℓ,t} u_{i,k,ℓ,t}   ∀ c ∈ C,   x_{i,k,t} ∈ {0,1}, u_{i,k,ℓ,t} ∈ {0,1}.
Each cut c C is defined by coefficients ( α c , β c , η c ) obtained from the subproblem.
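A minimal sketch of such a cut-augmented master in SciPy, on toy data (the investment costs and the single cut's coefficients below are illustrative, not derived from a real subproblem):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [u1, u2, Theta]; hypothetical investment costs and one cut
#   Theta >= 10 - 4*u1 - 3*u2   <=>   4*u1 + 3*u2 + Theta >= 10
c = np.array([5.0, 2.0, 1.0])
cut = LinearConstraint(np.array([[4.0, 3.0, 1.0]]), lb=10.0, ub=np.inf)
res = milp(c,
           constraints=[cut],
           integrality=np.array([1, 1, 0]),         # u binary, Theta continuous
           bounds=Bounds([0, 0, 0], [1, 1, np.inf]))
assert res.success
assert abs(res.fun - 9.0) < 1e-6                    # optimum: u = (0, 1), Theta = 7
```

Each subsequent Benders iteration would append one more `LinearConstraint` row of the same shape.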

4.1.2. Operating Subproblem (Continuous Dispatch LP)

Given a master solution ( x ¯ , u ¯ ) , define the remaining emissions budget after embodied impacts:
Ē^U := Ē − Σ_{t∈T} γ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} e^B_{i,k,ℓ,t} ū_{i,k,ℓ,t}.
The dispatch LP is:
Q(x̄, ū) := min_{y ≥ 0} Σ_{t∈T} δ_t Σ_{i∈I} Σ_{k∈K} c^O_{i,k,t} y_{i,k,t}
s.t. Σ_{i∈I} Σ_{k∈K} y_{i,k,t} ≥ D_t   ∀ t ∈ T,
y_{i,k,t} ≤ ȳ_{i,k,t} x̄_{i,k,t}   ∀ i ∈ I, k ∈ K, t ∈ T,
Σ_{t∈T} γ_t Σ_{i∈I} Σ_{k∈K} e^U_{i,k,t} y_{i,k,t} ≤ Ē^U,
y_{i,k,t} ≥ 0   ∀ i ∈ I, k ∈ K, t ∈ T.
If E ¯ U < 0 , infeasibility is immediate.
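A toy one-period instance of this subproblem, solved with SciPy's linprog, illustrates how the duals used later for cut construction are recovered (all numbers illustrative):

```python
from scipy.optimize import linprog

# Toy one-period dispatch: two technologies, demand D = 10,
# remaining use-phase budget E_U = 14.
#   min 3*y1 + 1*y2
#   s.t. y1 + y2 >= 10         (demand, written as -y1 - y2 <= -10)
#        0.5*y1 + 2*y2 <= 14   (use-phase emissions)
#        0 <= y <= 10          (capacity, via bounds)
res = linprog(c=[3.0, 1.0],
              A_ub=[[-1.0, -1.0], [0.5, 2.0]],
              b_ub=[-10.0, 14.0],
              bounds=[(0, 10), (0, 10)],
              method="highs")
assert res.status == 0
assert abs(res.fun - 18.0) < 1e-6        # optimal dispatch y = (4, 6)
mu, lam = res.ineqlin.marginals          # duals of the two inequality rows
assert abs(lam + 4.0 / 3.0) < 1e-6       # shadow price of the emissions budget
```

SciPy reports `marginals` as sensitivities of the objective to the right-hand sides, so binding "≤" rows carry nonpositive values; signs must be standardized before assembling a cut.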

4.1.3. Dual, Optimality Cuts, and Feasibility Cuts

To make signs explicit, rewrite the two “≤” constraints as “≥” constraints by multiplying by −1:
−y_{i,k,t} ≥ −ȳ_{i,k,t} x̄_{i,k,t}   ∀ i, k, t,   −Σ_{t,i,k} γ_t e^U_{i,k,t} y_{i,k,t} ≥ −Ē^U.
Let μ t 0 be dual multipliers for (28), ν i , k , t 0 for y i , k , t y ¯ i , k , t x ¯ i , k , t , and λ 0 for the transformed emissions constraint. Dual feasibility can be expressed as:
μ_t − ν_{i,k,t} − λ γ_t e^U_{i,k,t} ≤ δ_t c^O_{i,k,t}   ∀ i ∈ I, k ∈ K, t ∈ T,   μ, ν, λ ≥ 0.
Optimality Cut
If the subproblem is feasible, any optimal dual solution ( μ , ν , λ ) yields the valid Benders cut:
Θ ≥ Σ_{t∈T} μ_t D_t − Σ_{i∈I} Σ_{k∈K} Σ_{t∈T} ν_{i,k,t} ȳ_{i,k,t} x_{i,k,t} − λ Ē + λ Σ_{t∈T} γ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} e^B_{i,k,ℓ,t} u_{i,k,ℓ,t}.
Interpretation: μ prices demand, ν values additional capacity enabled by choosing certain states, and λ is the shadow price of the emissions budget, transferring embodied emissions in the master into an implied operating-cost penalty.
Feasibility Cut
If the subproblem is infeasible, then by Farkas’ lemma, there exists a dual ray ( μ ¯ , ν ¯ , λ ¯ ) 0 satisfying (32) with the right-hand side replaced by 0 such that the dual objective is positive at the current master point. This yields the feasibility cut (no Θ term):
Σ_{t∈T} μ̄_t D_t − Σ_{i∈I} Σ_{k∈K} Σ_{t∈T} ν̄_{i,k,t} ȳ_{i,k,t} x_{i,k,t} − λ̄ Ē + λ̄ Σ_{t∈T} γ_t Σ_{i∈I} Σ_{(k,ℓ)∈A} e^B_{i,k,ℓ,t} u_{i,k,ℓ,t} ≤ 0.
Exactness
For the core cap-based formulation, fixing integer decisions ( x , u ) yields a linear program (LP) in the dispatch variables y. If the LP subproblem is feasible, the optimality cut (33) generated from an optimal dual solution is valid. If infeasible, a feasibility cut (34) generated from a dual ray is valid. Under standard L-shaped assumptions (finite master integer set; LP subproblem; valid cuts), the iterative procedure converges finitely to the optimal MILP solution [23,24,25]. In practice, when infeasibility certificates are not available through a solver interface, a slackened-cap subproblem with a large penalty yields an always-feasible LP and preserves correctness whenever the original problem is feasible (then the optimal slack is zero).

4.2. Implementation Notes

The core formulation assumes deterministic demand D t for clarity. When demand (or use-phase coefficients) is uncertain, we extend the dispatch layer with Ω scenarios: the master decisions ( x , u ) remain scenario-independent, while each scenario has its own dispatch variables y ( ω ) . This preserves the master–subproblem separation and provides a natural setting in which decomposition becomes advantageous because the continuous layer grows linearly with Ω while the integer master does not.
We implemented both (i) the “monolithic” MILP (10) and (11) and (ii) Algorithm 1 in Python 3.11 using SciPy with the HiGHS LP/MIP backend.
Both scipy.optimize.milp and scipy.optimize.linprog accept sparse constraint matrices; using csc_array is essential beyond a few thousand variables. We build variables in the natural block order ( x , u , y ) and generate constraints by period and by asset to maintain a structured, sparse pattern.
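A minimal sketch of this block-structured sparse assembly (the sizes and the t-fastest variable ordering are illustrative choices):

```python
import numpy as np
from scipy.sparse import csc_array, hstack, vstack

# Illustrative sizes (not the paper's benchmarks)
I, K, T = 3, 2, 4
n_x = I * K * T                  # state binaries, ordered (i, k, t), t fastest
n_y = I * K * T                  # dispatch variables, same ordering

# Capacity links  y - ybar * x <= 0 : one row per (i, k, t)
ybar = np.full(n_x, 12.0)
A_cap = hstack([csc_array(-np.diag(ybar)), csc_array(np.eye(n_y))], format="csc")

# Demand rows  sum_{i,k} y_{i,k,t} >= D_t : one row per period t
sel = np.zeros((T, n_y))
for t in range(T):
    sel[t, t::T] = 1.0           # pick every (i, k) block's period-t entry
A_dem = hstack([csc_array((T, n_x)), csc_array(sel)], format="csc")

A = vstack([A_cap, A_dem], format="csc")
assert A.shape == (n_x + T, n_x + n_y)
assert A.nnz == 2 * n_x + n_y    # two diagonal blocks plus demand entries
```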
For the MILP, we use a run-time limit and a target relative MIP gap. In SciPy, these are passed through the options time_limit and mip_rel_gap.
The HiGHS interfaces exposed through SciPy do not provide a general user callback for cut injection; therefore, Benders is implemented as an outer loop that repeatedly rebuilds (or incrementally augments) the master MILP with additional cut rows and resolves it. The decomposition derived in this section thus remains exact in the algorithmic sense: it produces valid optimality/feasibility cuts and, when implemented in a solver environment supporting cut callbacks and warm starts, it converges to the optimal solution of the core MILP. In our computational study, however, implementing Benders as an outer loop that repeatedly resolves an evolving master problem preserves theoretical correctness but introduces overhead that can dominate runtime on deterministic instances.
Optimality cuts require dual multipliers from the subproblem LP. SciPy’s linprog with method=’highs’ returns Lagrange multipliers for linear constraints and bounds; because reported signs depend on whether constraints are provided in ≤ or ≥ form, we standardize constraints to match the sign conventions used in (32) and (33) before constructing cuts.
Algorithm 1 L-shaped method for the core cap-based MILP
1: Input: data (c^I, c^O, e^B, e^U, ȳ, D, δ, γ, Ē), transitions A; tolerance ε
2: Initialize: C ← ∅; LB ← −∞; UB ← +∞; incumbent ← ∅
3: while (UB − LB) / max{1, |UB|} > ε do
4:    Solve master (24) and (25) to get (x̄, ū, Θ̄)
5:    LB ← master objective value
6:    Compute Ē^U via (26)
7:    if Ē^U < 0 then
8:       (This case cannot occur if (23) is enforced.)
9:       Obtain a dual ray and add a feasibility cut (34); continue
10:   end if
11:   Solve subproblem (27)–(30)
12:   if subproblem feasible then
13:      Let Q ← Q(x̄, ū) and get duals (μ, ν, λ)
14:      UB ← min{UB, Σ_t δ_t Σ_{i,(k,ℓ)} c^I_{i,k,ℓ,t} ū_{i,k,ℓ,t} + Q}; update incumbent if improved
15:      Add optimality cut (33) to C
16:   else
17:      Obtain a dual ray (μ̄, ν̄, λ̄) and add feasibility cut (34) to C
18:   end if
19: end while
20: Return: incumbent (x, u) and corresponding optimal dispatch y
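The loop above can be sketched end-to-end on a toy single-period instance. The data below are illustrative (two assets, upgrades swap an expensive "old" technology for a cheaper "new" one), and the subproblem is feasible for every master point by construction, so only optimality cuts arise:

```python
import numpy as np
from scipy.optimize import milp, linprog, LinearConstraint, Bounds

# Toy instance (illustrative, not the paper's benchmarks): upgrading asset j
# removes its old capacity (6) and adds new capacity (8) at lower dispatch cost.
cI = np.array([5.0, 5.0])                    # investment cost per upgrade
d = np.array([4.0, 1.0, 4.0, 1.0])           # dispatch costs (old1, new1, old2, new2)
a = np.array([6.0, 0.0, 6.0, 0.0])           # capacities at u = 0
B = np.array([[-6.0, 0.0], [8.0, 0.0],
              [0.0, -6.0], [0.0, 8.0]])      # capacity change per upgrade
D = 10.0                                     # demand

def subproblem(u):
    """Dispatch LP at fixed u; returns value and capacity-row duals."""
    cap = a + B @ u
    res = linprog(d, A_ub=np.vstack([-np.ones((1, 4)), np.eye(4)]),
                  b_ub=np.concatenate([[-D], cap]), bounds=(0, None),
                  method="highs")
    return res.fun, res.ineqlin.marginals[1:]

cuts, UB, u_best = [], np.inf, None
for _ in range(20):
    res = milp(np.concatenate([cI, [1.0]]), constraints=cuts or None,
               integrality=np.array([1, 1, 0]),
               bounds=Bounds([0, 0, 0], [1, 1, np.inf]))
    u, LB = np.round(res.x[:2]), res.fun
    Q, g = subproblem(u)
    if cI @ u + Q < UB:
        UB, u_best = cI @ u + Q, u.copy()    # update incumbent
    if UB - LB <= 1e-6 * max(1.0, abs(UB)):
        break
    # Optimality cut: Theta >= Q + g.(cap(u') - cap(u)), linear in u'
    row = np.concatenate([-(g @ B), [1.0]])
    cuts.append(LinearConstraint(row[None, :], lb=Q - (g @ B) @ u, ub=np.inf))

assert abs(UB - 20.0) < 1e-6 and tuple(u_best) == (1.0, 1.0)
```

The cut uses the LP marginals as a subgradient of the dispatch value with respect to the capacity right-hand sides, which is exactly how the ν-terms enter (33); the loop terminates once the master bound meets the incumbent.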
Feasibility cuts (34) require an infeasibility certificate (dual ray). Some solver wrappers do not expose rays directly. In our experiments, we (i) include the strengthening constraints (22) and (23), and (ii) generate instances for which a feasible dispatch exists under the chosen cap calibration, so subproblem infeasibility is not observed. For general user data, a practical alternative is to add a nonnegative slack variable ϵ 0 in the emission-cap constraint:
Σ_t γ_t Σ_{i,k} e^U_{i,k,t} y_{i,k,t} ≤ Ē + ϵ,
and add in (27) a penalty term M ϵ , with M chosen sufficiently large to discourage violations whenever feasible solutions exist.
This modification guarantees subproblem feasibility for arbitrary data. If the original problem is feasible, the optimal solution satisfies ϵ = 0 , and the model coincides with the original formulation. If not, ϵ quantifies the minimal violation of the cap constraint. In this penalized setting, only optimality cuts are required computationally, while feasibility cuts remain valid for the unpenalized limit case.
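A minimal sketch of the slack-penalized subproblem on toy data (all numbers illustrative):

```python
from scipy.optimize import linprog

# Dispatch LP with a nonnegative slack eps on the emissions cap.
# Variables: [y1, y2, eps].
#   min 3*y1 + 1*y2 + M*eps
#   s.t. y1 + y2 >= 10              (demand)
#        0.5*y1 + 2*y2 - eps <= E_U (slackened cap)
#        0 <= y <= 10, eps >= 0
def dispatch_with_slack(E_U, M=1e4):
    res = linprog(c=[3.0, 1.0, M],
                  A_ub=[[-1.0, -1.0, 0.0], [0.5, 2.0, -1.0]],
                  b_ub=[-10.0, E_U],
                  bounds=[(0, 10), (0, 10), (0, None)],
                  method="highs")
    return res.x[2]                          # optimal slack

assert dispatch_with_slack(14.0) < 1e-6             # feasible cap: eps = 0
assert abs(dispatch_with_slack(4.0) - 1.0) < 1e-6   # minimal cap violation = 1
```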
Because costs and emissions can differ by orders of magnitude, we rescale emissions coefficients so that the cap Ē is O(10^4–10^6) and costs are O(10^5–10^6) on the large random benchmarks. This improves numerical stability and reduces spurious violations arising from floating-point tolerance.

4.3. Methodological Overview

To improve clarity and provide a structured view of the proposed framework, Figure 1 presents a schematic overview of the model inputs, decision structure, key constraints, and the Benders decomposition process.
The framework consists of four main components: (i) input data (technology characteristics, impact coefficients, demand scenarios, and impact caps), (ii) strategic upgrade decisions, (iii) operational dispatch decisions under scenarios, and (iv) a decomposition algorithm that separates integer upgrade decisions from scenario-based operational subproblems.
The monolithic MILP integrates all components simultaneously, whereas the Benders decomposition separates the upgrade decisions (master problem) from the scenario dispatch subproblems, iteratively generating feasibility and optimality cuts.

5. Computational Experiments

This section reports computational experiments designed to (i) validate the behavioral implications of life-cycle accounting, time-weighting, and learning; and (ii) provide evidence on computational difficulty, scalability, and the practical role of decomposition.

5.1. Experimental Environment

All experiments were run on a single workstation-class machine with: (i) CPU: AMD (San Diego, CA, USA) Ryzen 9 7950X (16 cores, 32 threads); (ii) RAM: 64 GB; (iii) OS: Ubuntu 22.04 LTS (64-bit). Software: Python 3.11, SciPy 1.11+ (LP/MIP interfaces), HiGHS 1.6+ as the underlying LP/MIP solver. Unless otherwise stated, HiGHS was run with default presolve and cutting planes enabled and a relative MIP gap target of 10^−6; runs were restricted to a single solver thread to reduce hardware-dependent variability. In SciPy/HiGHS this is enforced via a HiGHS option such as threads = 1.

5.2. Overview

Experiments 1–3 use a small instance to isolate modeling effects (life-cycle accounting, time-weighting, and learning). These instances are solved to optimality by exhaustive enumeration of upgrade schedules (over u) coupled with LP dispatch optimization (over y), prioritizing interpretability over scalability. Experiment 4 studies scalability and provides a more balanced algorithmic comparison between (i) solving the monolithic MILP directly and (ii) an outer-loop Benders (L-shaped) implementation. To avoid an unfair comparison driven purely by the overhead of re-solving masters without solver callbacks, Experiment 4 includes two benchmark families, i.e., Family A and Family B.

5.3. Experiment 1: Operational-Only vs. Life-Cycle Cap

Table 3 compares an operational-only baseline (cap applies to t γ t E t U ) to the full cap (cap applies to t γ t ( E t U + E t B ) ) on a toy instance. Operational-only accounting chooses a schedule that satisfies the use-phase cap but violates the life-cycle cap once embodied impacts are included. The life-cycle model reacts by introducing an additional upgrade to reduce use-phase impacts enough to compensate for the embodied burden.

5.4. Experiment 2: Sensitivity to Cap Stringency and Time-Weighting

We study how the optimal plan changes as the cap tightens. Let E base denote the no-upgrade weighted life-cycle emissions on the instance computed as follows. Let x i , k , 0 denote the initial state of asset i. Define the no-upgrade policy by u i , k , t = 0 for all i , ( k , ) , t , which implies x i , k , t = x i , k , 0 for all t T . Given this fixed technology configuration, compute the baseline dispatch y base by solving the (cost-minimizing) dispatch LP without the life-cycle cap:
y^base ∈ arg min_{y ≥ 0} Σ_{t∈T} δ_t Σ_{i∈I} Σ_{k∈K} c^O_{i,k,t} y_{i,k,t}   s.t. Σ_{i∈I} Σ_{k∈K} y_{i,k,t} ≥ D_t ∀ t ∈ T,   0 ≤ y_{i,k,t} ≤ ȳ_{i,k,t} x_{i,k,0} ∀ i ∈ I, k ∈ K, t ∈ T.
The corresponding baseline time-weighted life-cycle impact is
E^base(ρ_C) := Σ_{t∈T} γ_t(ρ_C) [ E^U_t(y^base) + E^B_t(u = 0) ] = Σ_{t∈T} γ_t(ρ_C) Σ_{i∈I} Σ_{k∈K} e^U_{i,k,t} y^base_{i,k,t},
since E^B_t(u = 0) = 0 under the no-upgrade policy. We parameterize cap stringency by τ ∈ (0, 1] and set
E ¯ = τ E base ( ρ C ) .
Table 4 reports the optimal objective and realized emissions as τ decreases. The objective is unchanged for τ ≥ 0.65 (cap nonbinding) and increases at τ = 0.60, when an additional upgrade becomes necessary.
In Table 5 we vary the climate discount rate ρ C and, for each ρ C , recalibrate the cap to a constant fraction of the corresponding baseline weighted emissions (again E ¯ = 0.65 E base ( ρ C ) ). For low ρ C , later-period impacts receive similar weight, and the optimal plan matches the no-weighting solution; for higher ρ C the model places greater emphasis on early-period impacts and shifts the optimal plan toward earlier upgrades.

5.5. Experiment 3: Learning Shifts Upgrade Timing

We illustrate how learning-by-doing can change upgrade timing and composition under the same life-cycle cap. We construct a small toy instance in which upgrades target a single learning family and compare: (i) a no-learning baseline ( b f I = b f B = 0 ) and (ii) a learning variant with positive exponents. For reproducibility, we report the learning-family mapping, thresholds ( Q f , s ) s = 0 S , initial deployment q f , 0 , and implied multipliers ( α f , s I , α f , s B ) .

5.5.1. Learning Families and Mapping

We use a single learning family F = { f } . The technology–family mapping is
fam ( 2 ) = f , fam ( 3 ) = f ,
so any upgrade whose destination state is 2 or 3 contributes to cumulative deployment q f , t .

5.5.2. Initial Deployment

The initial cumulative deployment is set to
q f , 0 = 1 .

5.5.3. Tier Thresholds

We use S = 3 learning tiers with thresholds
( Q f , s ) s = 0 3 = ( 1 , 2 , 4 , 8 ) .
Tier s is active in period t if cumulative deployment prior to period t satisfies
q_{f,t−1} ∈ [Q_{f,s−1}, Q_{f,s}) for s = 1, 2,   and   q_{f,t−1} ∈ [Q_{f,2}, Q_{f,3}] for s = 3,
so the tier intervals are disjoint.
We choose Q f , S to upper-bound the maximum attainable deployment over the horizon, so that exactly one tier is always valid.

5.5.4. Implied Multipliers

Multipliers are computed at the lower endpoint of each tier:
α^I_{f,s} = (Q_{f,s−1} / Q_{f,0})^{−b^I_f},   α^B_{f,s} = (Q_{f,s−1} / Q_{f,0})^{−b^B_f},   s = 1, 2, 3.
With Q_{f,0} = 1, b^I_f = 1.0, and b^B_f = 0.5, this yields
(α^I_{f,1}, α^I_{f,2}, α^I_{f,3}) = (1.000, 0.500, 0.250),
(α^B_{f,1}, α^B_{f,2}, α^B_{f,3}) = (1.000, 0.707, 0.500).
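These multipliers can be reproduced directly from the thresholds and learning exponents:

```python
# Tier multipliers at the lower endpoint of each tier:
#   alpha_{f,s} = (Q_{f,s-1} / Q_{f,0}) ** (-b_f)
Q = [1, 2, 4, 8]            # thresholds (Q_{f,0}, ..., Q_{f,3})
b_I, b_B = 1.0, 0.5         # learning exponents for investment / embodied

alpha_I = [(Q[s - 1] / Q[0]) ** (-b_I) for s in range(1, 4)]
alpha_B = [(Q[s - 1] / Q[0]) ** (-b_B) for s in range(1, 4)]

assert alpha_I == [1.0, 0.5, 0.25]
assert [round(x, 3) for x in alpha_B] == [1.0, 0.707, 0.5]
```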
Table 6 reports a summary of the inputs used in Experiment 3. As shown in Table 7, learning reduces effective investment costs (and, when enabled, embodied impacts) of later upgrades as cumulative deployment increases. This intertemporal effect can make multi-step upgrade paths economically attractive while remaining cap-compliant, shifting both the timing and the composition of the upgrade plan relative to the no-learning benchmark.
Each asset i occupies exactly one technology state k { 1 , 2 , 3 } at the start of each period t (state variable x i , k , t ). A schedule lists the upgrade actions u i , k , t = 1 executed at the beginning of period t, which change the state used for operations during period t.
For example, “ i = 1 : t 1 : 1 3 ” means asset 1 starts in state 1 at t = 0 and is upgraded at the beginning of period 1 directly to state 3; hence, it operates in state 3 for periods t = 1 , 2 , unless another upgrade is listed. Similarly, “ i = 1 : t 1 : 1 2 , t 2 : 2 3 ” is a two-step path: asset 1 upgrades to state 2 at the beginning of period 1 (operates in state 2 in period 1), then upgrades again at the beginning of period 2 to state 3 (operates in state 3 from period 2 onward).
In Table 7, the no-learning solution performs two upgrades total (one per asset) by jumping directly from state 1 to 3 in t = 1 . With learning, asset 1 follows a staged upgrade path ( 1 2 then 2 3 ), adding one extra upgrade action (three total) but achieving a lower discounted cost because later upgrades benefit from learning-adjusted investment/embodied coefficients.

5.5.5. Learning Extension Scalability: Effect of the Number of Tiers S

Tier thresholds ( Q f , s ) s = 0 S approximate the continuous experience curve with a piecewise-constant multiplier. A practical choice is to space thresholds approximately uniformly in log-deployment (consistent with power-law learning curves), and to set Q f , S to upper-bound the maximum attainable deployment over the horizon. Increasing the number of tiers S improves approximation granularity but increases binary variables and linking constraints proportionally.
For a single learning family, this adds
T·S tier binaries ξ + |I|·|A|·T·S linking binaries z
binary variables, and
2·T (q-update and one-active-tier) + 2·T·S (tier bounds) + |I|·|A|·T (z-sum) + 3·|I|·|A|·T·S (z-linking)
additional constraints (all linear). Table 8 reports how this linear growth translates into solve time on a representative instance.
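These counts can be checked with a small helper (sizes illustrative):

```python
def learning_extension_size(I, A, T, S):
    """Added binaries and constraints for one learning family,
    matching the counts stated in the text."""
    binaries = T * S + I * A * T * S                 # tier binaries xi + linking z
    constraints = 2 * T + 2 * T * S + I * A * T + 3 * I * A * T * S
    return binaries, constraints

# Illustrative size: 10 assets, 4 allowed transitions, 12 periods, 3 tiers
nb, nc = learning_extension_size(10, 4, 12, 3)
assert nb == 12 * 3 + 10 * 4 * 12 * 3      # 36 + 1440 = 1476
assert nc == 24 + 72 + 480 + 4320          # = 4896
```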
As expected, the tiered approximation grows linearly with S in both binaries and constraints. On this representative instance, S = 2 solves to optimality within the 30 s limit, while S = 3–4 hits the time limit with small residual gaps. For S = 6, HiGHS did not find a feasible integer solution within 30 s, indicating that fine-grained tiering can make the learning-extended MILP substantially harder even when the underlying core problem remains easy. This motivates using a modest S in practice (coarser tiering), tightening bounds (smaller big-M values where possible), or adopting decomposition/heuristics when combining learning with large scenario-expanded dispatch layers.

5.6. Experiment 4: Scalability and a Balanced MILP vs. Benders Comparison

Section 4 develops an exact L-shaped decomposition for the core cap-based model, separating discrete upgrade dynamics ( x , u ) from continuous dispatch y. In a solver with callback support, Benders can be implemented as a single branch-and-cut procedure with cut injection and warm-starts. In contrast, the SciPy–HiGHS interface does not expose general cut callbacks, so our Benders implementation is an outer loop that repeatedly rebuilds and re-solves the master MILP with additional cuts. This introduces overhead that can make the outer-loop approach slower than a strong monolithic MILP on the core model.
We report results on two benchmark families: (i) Family A: a single dispatch LP per master point, where the monolithic MILP is expected to be strong; and (ii) Family B: the same upgrade dynamics but with Ω dispatch scenarios, which increases continuous variables and constraints substantially and creates a natural setting for decomposition.

5.6.1. Family A: Large Random Instances

We evaluate the core MILP (10) and (11) on a family of large random instances and solve them with SciPy (scipy.optimize.milp) using the HiGHS MIP backend. All runs use a time limit of 30 s and a relative MIP gap target of 10^−6 (default HiGHS presolve and cuts enabled).

5.6.2. Random Instance Generator

We generate instances with | K | = 4 technology states and heterogeneous assets. Capacities are drawn from a per-asset baseline (uniform on [ 8 , 14 ] ) and scaled by technology multipliers. Use-phase coefficients ( c O , e U ) decrease with technology level and gradually improve over time to mimic background efficiency improvements and decarbonization. Upgrade costs c I decrease mildly over time, while embodied emissions e B are drawn from a moderate range (uniform on [ 30 , 80 ] kgCO2e for the base step, scaled by technology level and time). Demand D t is set to 88% of the aggregate capacity of the initial technology distribution (high utilization). The life-cycle cap E ¯ is calibrated to be feasible and typically active by interpolating between (i) the no-upgrade baseline emissions and (ii) a fast-upgrade benchmark (one-step upgrades as early as possible with per-period emissions-minimizing dispatch); we use a mixing weight of 0.7 throughout. Due to integer effects and solver tolerances, the returned solution may exhibit small slack (or tiny apparent violations) around zero.
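A minimal sketch of the generator's draws and demand calibration under the stated distributions (technology and time scaling factors, and the cap interpolation, are omitted for brevity):

```python
import numpy as np

def generate_instance(n_assets, T, seed=0):
    """Sketch of the random generator: per-asset capacities, embodied
    impacts, and demand at 88% of aggregate initial capacity."""
    rng = np.random.default_rng(seed)
    base_cap = rng.uniform(8.0, 14.0, size=n_assets)   # per-asset baseline
    e_B = rng.uniform(30.0, 80.0, size=n_assets)       # kgCO2e per base step
    D = 0.88 * base_cap.sum() * np.ones(T)             # high-utilization demand
    return base_cap, e_B, D

cap, eB, D = generate_instance(50, 8)
assert cap.shape == (50,) and (8.0 <= cap).all() and (cap <= 14.0).all()
assert (30.0 <= eB).all() and (eB <= 80.0).all()
assert np.allclose(D, 0.88 * cap.sum())
```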

5.6.3. Results and Scaling

Table 9 reports average results over five random seeds for each size. All instances terminate within the 30 s limit with a relative MIP gap of at most 10^−6. We report slack as E(x, u, y) − Ē; thus, the maximum slack over seeds identifies the run closest to violating the cap (a positive value would indicate a violation). The cap is essentially tight at the returned solutions (up to solver feasibility tolerances). Table 10 reports per-instance outcomes.

5.6.4. Observed Solution Structure (Family A)

Across instances, the optimizer typically uses a mix of early upgrades (to reduce use-phase impacts under the cap) and selective non-upgrades (to avoid excessive embodied impacts). In this benchmark, embodied impacts remain a modest fraction of the optimal life-cycle total (typically a few percent), yet they still influence which upgrades occur early versus late. In most runs, the algorithm terminates at the root node or within very few nodes, indicating that presolve and cutting planes yield a very tight relaxation on this benchmark family.

5.6.5. Family A: Monolithic MILP vs. Outer-Loop Benders on Moderate Instances

To connect the decomposition in Section 4.1 with observed runtimes, we compare (i) the monolithic MILP and (ii) an outer-loop Benders implementation that alternates a master MILP in ( x , u , Θ ) with a dispatch LP in y and adds optimality cuts. Because the SciPy–HiGHS interface does not expose in-solver cut callbacks, the outer-loop method repeatedly rebuilds and re-solves the master with an augmented constraint set; the resulting loss of warm-starting, repeated presolve, and repeated branch-and-bound can dominate runtime. On the core model (Family A), this can make the outer-loop approach slower than the monolithic MILP despite the decomposition being exact. Table 11 reports the results on Family A instances.

5.6.6. Family B: Large Random Instances

We now define a scenario-expanded variant. Let Ω denote the number of scenarios, with probabilities p ω . For each scenario ω = 1 , , Ω , demand D t ( ω ) and/or use-phase coefficients e i , k , t U , ( ω ) may differ (e.g., uncertainty in demand). The objective minimizes expected discounted operating cost plus investment cost, and the cap enforces an expected time-weighted life-cycle budget:
Σ_{ω=1}^{Ω} p_ω Σ_{t∈T} γ_t ( E^{U,(ω)}_t + E^B_t ) ≤ Ē,
with scenario-specific dispatch variables y^{(ω)} and use-phase impacts E^{U,(ω)}_t = Σ_{i,k} e^{U,(ω)}_{i,k,t} y^{(ω)}_{i,k,t}, while E^B_t depends only on u. This extension multiplies the number of continuous variables and most constraints by Ω, while leaving the binary upgrade/state structure unchanged.
In the monolithic MILP, scenario expansion increases the number of continuous variables from | I | | K | T to Ω | I | | K | T and replicates demand/capacity constraints per scenario, which can significantly increase solve time. In contrast, the Benders separation remains natural: the master decides ( x , u ) once, while the subproblem consists of Ω independent dispatch LPs (or one block-diagonal LP), whose dual information can be aggregated into a single cut. Table 12 reports average performance for the scenario-expanded variant with Ω = 10 . The key qualitative pattern is that the scenario-expanded monolithic MILP may approach the time limit on moderate-size instances, while Benders scales more gently because it keeps the integer problem small and solves many fast LPs.

5.6.7. Comments on Experiment 4

On the core model (Family A), the monolithic MILP is very effective and typically outperforms an outer-loop Benders implementation without callbacks, consistent with Table 11. This is not contradictory to the value of the decomposition: it reflects (i) the strength of modern MILP presolve/cuts on structured formulations and (ii) the practical overhead of implementing Benders outside the solver. However, once the model is extended in ways that inflate the continuous dispatch layer (Family B), decomposition becomes competitive and can be preferable, as suggested by Table 12.

5.7. Graphical Summary of the Results

Here, we present a graphical summary of the experiments conducted. Figure 2 decomposes total life-cycle impacts into use-phase and embodied components. Figure 3, Figure 4 and Figure 5 visualize cap tightness and the effect of cap stringency/time-weighting on the achieved impacts and objective. Finally, Figure 6 and Figure 7 compare the runtimes of the monolithic MILP and the outer-loop Benders implementation under deterministic and scenario-expanded variants.

5.8. Cap Calibration Sensitivity

The Family A random generator sets the life-cycle cap E ¯ to be feasible yet typically active by interpolating between two reference policies: (i) a baseline policy with no upgrades, yielding a time-weighted life-cycle impact E base ; and (ii) a fast-upgrade benchmark that upgrades as early as possible (subject to the adjacency topology) and dispatches to minimize use-phase impacts period-by-period, yielding E fast . We define the calibrated cap as
Ē(ω) = (1 − ω) E^base + ω E^fast,   ω ∈ [0, 1].
Intuitively, ω controls cap tightness: a smaller ω yields a looser (more baseline-like) cap, while a larger ω pushes the cap toward the aggressively decarbonized benchmark. Our main experiments used ω = 0.7 as a single representative value that is (empirically) feasible but binding on most seeds.
To address sensitivity, we sweep ω { 0.3 , 0.5 , 0.7 , 0.9 } on the same Family A sizes as Table 9: ( 50 , 8 , 4 ) , ( 100 , 10 , 4 ) , ( 200 , 12 , 4 ) , and ( 300 , 12 , 4 ) , with five seeds each and the same monolithic MILP solver configuration. Table 13 reports mean ± sd over seeds for solve time, objective, number of upgrades, and realized time-weighted life-cycle impact E (all evaluated at the returned solution). Figure 8 and Figure 9 visualize how runtime and upgrade intensity change with ω .
Across all four sizes, increasing ω makes the cap systematically tighter, which (as expected) induces more upgrades and modestly higher objective values. Runtime shows a mild-to-moderate increase with ω on the larger instances, consistent with the cap becoming more binding and the MILP requiring more branching to balance use-phase reductions against embodied costs. Importantly, the qualitative ranking used in our algorithmic comparisons (monolithic MILP strong on Family A; decomposition becomes advantageous only after scenario expansion in Family B) is unchanged across the entire sweep: the baseline Family A instances remain easy for HiGHS even under the tightest tested cap weight.

5.9. Sensitivity to Demand Utilization

In the random Family A generator (Section 5.6.1), demand is set as a fixed fraction of the aggregate initial capacity,
D_t = η Σ_{i∈I} Σ_{k∈K} ȳ_{i,k,t} x_{i,k,0},
with η = 0.88 in the main experiments. This choice is meant to emulate high but feasible utilization: it keeps the dispatch layer active (so that operational impacts matter) while preserving slack to accommodate downtime/inefficiencies and to avoid trivial infeasibility when the cap induces technology changes. To assess robustness, we rerun the full Family A benchmark of Table 9 under a utilization sweep η { 0.75 , 0.88 , 0.95 } (5 seeds each), keeping all other generator settings unchanged (including the cap calibration and MILP settings).
Table 14 reports mean ± sd outcomes. As expected, larger η increases operational pressure, leading to (i) higher optimal life-cycle impact levels E (since more output must be served) and (ii) more frequent/earlier upgrades to remain cap-compliant. From a computational standpoint, higher utilization moderately increases solve time and the number of executed upgrades, but the qualitative conclusions of the algorithmic comparisons are unchanged across η . Figure 10 depicts a synthesis of the results of Table 14.

Family B: Sensitivity to the Number of Scenarios Ω

The scenario-expanded Family B formulation replicates the continuous dispatch layer across Ω scenarios while leaving the binary upgrade/state structure unchanged. Consequently, the monolithic MILP embeds a continuous layer whose size grows linearly with Ω , whereas in the L-shaped approach, the master problem is independent of Ω and scenario expansion affects only the dispatch subproblem(s).
To quantify this effect, we sweep Ω ∈ {5, 10, 20, 50} and measure the scenario-dispatch workload that appears in each Benders iteration. Specifically, for each Ω we form the block-diagonal scenario dispatch LP (equivalently, Ω independent scenario LPs) and report: (i) its size (continuous variables, constraints, and nonzeros) and (ii) the corresponding time per solve, reported as LP/iter (including matrix assembly and HiGHS solve time). To ensure feasibility across all Ω, the LP is evaluated at a feasible master point in which all assets operate in the highest technology state in all periods (a fast-upgrade configuration consistent with the cap-calibration benchmark).
Table 15 confirms the expected linear scaling of the dispatch layer: |y| = Ω · |I| · |K| · T, and the dominant constraint blocks (demand and capacity) scale proportionally in Ω. Empirically, LP/iter increases substantially with Ω (e.g., from ≈0.025 s to ≈0.215 s for (50, 8, 4) and from ≈0.030 s to ≈0.574 s for (100, 10, 4) when moving from Ω = 5 to Ω = 50 in this benchmark). Since the Benders master does not grow with Ω, these results explain why decomposition becomes increasingly attractive after scenario expansion: scenario growth is absorbed by solving more/larger LP blocks rather than by inflating the continuous layer inside branch-and-bound. Figure 11 depicts a synthesis of the results of Table 15.
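The per-iteration workload can be illustrated by solving the Ω scenario LPs independently with an off-the-shelf HiGHS backend. The dispatch LP below is deliberately stylized (one capacity bound per asset, a demand equality per period; cost and capacity figures are illustrative, not the paper's instance data), but it reproduces the structural point: per-scenario LP size is fixed, and total work grows linearly in Ω:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_assets, T = 10, 4                                # small, for illustration
cap = rng.uniform(8, 14, size=n_assets) * 1.40     # all assets at highest state
cost = rng.uniform(10, 25, size=n_assets)          # per-unit dispatch cost

def solve_scenario(D_t):
    """Dispatch LP for one scenario: minimize cost subject to meeting the
    per-period demand, with per-asset capacity bounds (variables y_{i,t})."""
    n = n_assets * T
    c = np.tile(cost, T)                           # variables ordered (t, i)
    A_eq = np.zeros((T, n))                        # sum_i y_{i,t} = D_t[t]
    for t in range(T):
        A_eq[t, t * n_assets:(t + 1) * n_assets] = 1.0
    bounds = [(0, cap[i]) for t in range(T) for i in range(n_assets)]
    return linprog(c, A_eq=A_eq, b_eq=D_t, bounds=bounds, method="highs")

# Scenario expansion: Omega independent LPs, each with perturbed demand.
base_D = 0.83 * cap.sum()
for Omega in (5, 10, 20):
    objs = []
    for _ in range(Omega):
        D_t = base_D * np.maximum(0.0, 1.0 + rng.normal(0.0, 0.05, size=T))
        res = solve_scenario(D_t)
        assert res.status == 0                     # feasible at this master point
        objs.append(res.fun)
    print(f"Omega={Omega:2d}  vars/scenario={n_assets * T}  E[cost]={np.mean(objs):.1f}")
```

Because each scenario LP has the same fixed size, the loop over Ω is exactly the "more/larger LP blocks" workload that the decomposition absorbs outside of branch-and-bound.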

5.10. A Fleet Replacement Case Study

To strengthen practical relevance, we add a small fleet-replacement case study calibrated from a publicly available life-cycle assessment of passenger cars in the European Union (EU). We map our model directly to fleet replacement: each asset i is a vehicle, service is vehicle-kilometers, technology states are powertrains, upgrade decisions correspond to vehicle replacement, and the dispatch variables represent annual kilometers served by each vehicle under its current technology. The calibration uses EU medium-segment life-cycle greenhouse-gas intensities for gasoline ICEVs, HEVs, and BEVs (EU grid mix), and reported vehicle production-and-recycling and battery-production footprints [29]. In our notation, the one-time production-and-recycling footprint is interpreted as embodied impact triggered at the replacement time (transition coefficient e B ), while the remaining share of the reported life-cycle intensity is treated as use-phase impact per kilometer ( e U ).

5.10.1. Calibration of e U and e B

Let g_k^LC be the reported life-cycle intensity (gCO2e/km) of technology k and let E_k^prod be the reported production-and-recycling footprint (tCO2e per vehicle, including battery production for BEVs). Assuming a representative lifetime mileage L = 240,000 km [29], we allocate embodied impacts over mileage to obtain an implied use-phase intensity:
e_k^U = g_k^LC / 1000 − (E_k^prod · 1000) / L   (kgCO2e/km),
and we use e B as the one-time embodied impact (kgCO2e) incurred when the vehicle is replaced.
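This allocation can be written as a one-line helper and checked against the Table 17 magnitudes (a sketch; the function name and signature are ours):

```python
def implied_use_phase(g_lc_g_per_km, e_prod_t, lifetime_km=240_000.0):
    """Allocate the one-time production-and-recycling footprint over lifetime
    mileage and subtract it from the reported life-cycle intensity.

    g_lc_g_per_km : life-cycle intensity g_k^LC in gCO2e/km
    e_prod_t      : production-and-recycling footprint E_k^prod in tCO2e/vehicle
    returns       : implied use-phase intensity e_k^U in kgCO2e/km
    """
    g_lc_kg = g_lc_g_per_km / 1000.0                      # gCO2e/km -> kgCO2e/km
    embodied_kg_per_km = e_prod_t * 1000.0 / lifetime_km  # tCO2e -> kg, per km
    return g_lc_kg - embodied_kg_per_km

# Reproduce the Table 17 calibration (ICEV, HEV, BEV).
assert abs(implied_use_phase(235, 7.2) - 0.2050) < 1e-9
assert abs(implied_use_phase(188, 7.2) - 0.1580) < 1e-9
# BEV: 63/1000 - 10.4*1000/240000 ≈ 0.01967; Table 17 rounds to 19.7 g/km.
assert abs(implied_use_phase(63, 10.4) - 0.0197) < 1e-4
```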

5.10.2. Technology and Transition Parameters (Cost + Emissions)

We consider three technology states K = {ICE, HEV, BEV} with allowed transitions ICE → HEV, ICE → BEV, and HEV → BEV. Operating costs c_k^O are set in €/km to reflect typical relative ordering (BEV lowest, ICE highest); investment costs c_k^I represent replacement capex differentials (illustrative but in a plausible range). Embodied transition impacts e_k^B are taken as the production-and-recycling footprint of the destination technology, with HEV → BEV interpreted as an incremental transition (difference in production footprints). Table 16 reports the setup of these parameters for the case study.
Table 16. Case-study parameters used by the optimization model. Use-phase intensities e k U come from Table 17.
Technology k | c_k^O (€/km) | e_k^U (kgCO2e/km)
ICE | 0.20 | 0.2050
HEV | 0.17 | 0.1580
BEV | 0.12 | 0.0197
Transition (k → k′) | c^I (€/veh) | e^B (tCO2e/veh)
ICE → HEV | 5000 | 7.2
ICE → BEV | 15,000 | 10.4
HEV → BEV | 10,000 | 3.2
Table 17. Fleet replacement calibration from a public EU passenger-car LCA [29]. Embodied emissions are allocated over L = 240,000 km to derive the implied use-phase intensity used in the optimization model.
Technology k | g_k^LC (gCO2e/km) | E_k^prod (tCO2e/veh) | Battery (tCO2e) | Embodied Alloc. (gCO2e/km) | Implied e_k^U (gCO2e/km)
ICEV (gasoline) | 235 | 7.2 | 0.0 | 30.0 | 205.0
HEV | 188 | 7.2 | 0.0 | 30.0 | 158.0
BEV (EU grid) | 63 | 10.4 | 3.9 | 43.3 | 19.7

5.10.3. Fleet Setting (Dispatch Coupling via Heterogeneous Annual Mileage)

We model a fleet of I = 20 vehicles over T = 10 annual periods. To make the coupling between dispatch and upgrades nontrivial (and realistic), vehicles have heterogeneous annual mileage capacity: 10 “high-mileage” vehicles can each serve up to 15,000 km/year (e.g., commuter-heavy use), and 10 “low-mileage” vehicles up to 9000 km/year (e.g., occasional use). Total annual demand is fixed at D t = 200,000 km/year, so utilization is about 83% of total capacity. All vehicles start as ICE at t = 0 .
We impose a global (time-unweighted) life-cycle cap. Let E_base be the no-upgrade baseline use-phase emissions under all-ICE operation: E_base = Σ_{t=1}^{T} D_t · e_ICE^U. We set Ē = 0.70 · E_base, i.e., a 30% reduction target relative to the baseline. Table 18 reports the setup of these parameters for the case study.
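With the Table 16 coefficients, the baseline and cap are a two-line computation (a sketch; variable names are ours):

```python
T = 10                # planning horizon (years)
D_t = 200_000.0       # km/year total fleet demand
e_ice_u = 0.2050      # kgCO2e/km, ICE use-phase intensity (Table 16)

# No-upgrade baseline: all-ICE operation over the whole horizon.
E_base = T * D_t * e_ice_u     # = 410,000 kgCO2e
E_cap = 0.70 * E_base          # 30% reduction target = 287,000 kgCO2e
```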

5.10.4. Operational-Only vs. Life-Cycle Cap

We compare two planning variants under the same numeric cap Ē: (i) an operational-only variant that constrains only Σ_t e^U y (use-phase impacts), and (ii) the full life-cycle model that constrains Σ_t (e^U y + e^B u) (use-phase + embodied). Table 19 shows that the operational-only plan can appear cap-compliant and cheaper, yet becomes noncompliant once embodied replacement emissions are counted in the same life-cycle budget. In contrast, the full model replaces more high-mileage vehicles with BEVs upfront to offset the embodied spike and remain within the life-cycle cap.

5.10.5. Sensitivity to Cap Stringency

To assess robustness, we sweep the cap fraction τ in Ē = τ · E_base while keeping all other parameters fixed (heterogeneous mileage capacities, costs, and ICCT-calibrated life-cycle coefficients). Table 20 reports the optimal number of ICE → BEV replacements executed at t = 1, the resulting fleet mix at t = 1, the life-cycle emissions decomposition (E^U vs. E^B), and cap tightness (slack).
As expected, tightening the cap increases the number of early BEV replacements. In this calibration, HEVs are not selected because BEVs dominate in both operating cost and use-phase emissions per km, so compliance is primarily achieved by increasing the BEV share among the highest-mileage vehicles. The cap slack is not strictly monotone in τ because replacements are discrete (integer) decisions: when τ crosses a threshold, the next feasible integer step (one additional BEV replacement) may over-comply with the cap, producing a larger slack than at a slightly looser cap. Operationally, the model consistently prioritizes upgrading high-mileage vehicles first, since each BEV replacement displaces more ICE kilometers and, therefore, yields larger use-phase abatement per unit of embodied impact.
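The non-monotone slack can be reproduced with a stylized back-of-the-envelope model (not the full MILP): assume each ICE → BEV replacement of a high-mileage vehicle shifts 15,000 km/year to BEV for all T = 10 years and incurs the 10.4 tCO2e embodied footprint, with the remaining demand served by ICE. All magnitudes come from Tables 16 and 17; the simplification ignores dispatch detail and upgrade timing:

```python
import math

T = 10
D_total = 200_000.0 * T                 # total km over the horizon
e_ice, e_bev = 0.2050, 0.0197           # use-phase intensities (kgCO2e/km)
e_embodied = 10_400.0                   # ICE -> BEV embodied spike (kgCO2e/veh)
km_shift = 15_000.0 * T                 # km displaced per replacement

E_base = D_total * e_ice                # 410,000 kg, all-ICE baseline

def lifecycle_emissions(n):
    """Use-phase + embodied emissions with n early BEV replacements."""
    return (D_total - n * km_shift) * e_ice + n * km_shift * e_bev \
           + n * e_embodied

# Per-replacement net abatement (use-phase savings minus embodied spike).
abate = E_base - lifecycle_emissions(1)

for tau in (0.80, 0.75, 0.70, 0.65):
    cap = tau * E_base
    n = math.ceil((E_base - cap) / abate)     # smallest integer meeting the cap
    slack = cap - lifecycle_emissions(n)
    print(f"tau={tau:.2f}  n={n}  slack={slack:,.0f} kg")
```

In this stylized sweep, the slack jumps when τ crosses a threshold that forces one additional discrete replacement (e.g., the slack at τ = 0.70 exceeds the slack at the looser caps τ = 0.75 and 0.80), which is exactly the integer over-compliance effect described above.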

5.10.6. Dispatch Implications (km Allocation by Technology)

Because BEVs have lower operating cost (and lower per-km e U ), the dispatch LP allocates annual demand to BEVs up to their capacity. Table 21 reports the year-1 kilometer allocation by technology under each plan, illustrating the intended dispatch–upgrade coupling: the full model upgrades high-mileage vehicles to BEVs and then uses them at capacity.

5.10.7. Upgrade Timepoints, Emissions Decomposition, and Cap Tightness (Life-Cycle Model)

Table 22 provides a year-by-year view of (i) fleet composition, (ii) upgrade timepoints, (iii) annual decomposition into use-phase and embodied impacts, and (iv) remaining cap budget. The solution exhibits the expected life-cycle signature: a front-loaded embodied “spike” at t = 1 followed by substantially lower annual use-phase emissions, while remaining within the total cap.

5.10.8. Interpretation and Relevance

This case study is intentionally small, but it is meaningful for the paper’s objectives: it is calibrated from public LCA magnitudes, demonstrates a realistic operational feature (heterogeneous annual mileage), and makes the life-cycle mechanism explicit. Ignoring embodied impacts yields a plan that is cheaper and meets a use-phase-only cap, yet violates the life-cycle cap once replacement emissions are counted. The full model, by contrast, upgrades the highest-mileage vehicles first and dispatches BEVs at capacity, producing a cap-compliant plan whose cost reflects the trade-off between upfront embodied impacts and long-run operational savings.
In Appendix A, we report a summary of the setup of all the parameters used in the experimentation.

6. Conclusions

We developed a multi-period mixed-integer optimization model for technology upgrade scheduling that integrates (i) life-cycle accounting with explicit embodied/use-phase separation, (ii) time-weighted valuation of impacts, and (iii) an endogenous learning-by-doing extension that can reduce both investment costs and embodied impacts of future upgrades. For the core cap-based formulation, we derived an exact L-shaped (Benders) decomposition that exploits the separation between discrete upgrade dynamics and continuous dispatch.
Computational experiments highlight the decision relevance of these modeling elements. First, operational-only accounting can recommend upgrade plans that become infeasible once embodied impacts are included, whereas the life-cycle formulation adjusts upgrade timing and composition to satisfy the same aggregate target. Second, time-weighting can shift the optimal plan toward earlier abatement, even under the same relative cap tightness. Third, endogenous learning can make multi-step upgrade paths economically attractive and can change the timing of upgrades while remaining cap-compliant. Finally, benchmark tests on large random instances indicate that the core MILP can be solved within short time limits using open-source tools, providing baseline evidence of tractability.
A key managerial implication of explicitly accounting for embodied upgrade impacts is a decision reversal that cannot be detected under operational-only accounting: a plan that appears compliant (and cheaper) when constraining only use-phase impacts can violate the same target once embodied replacement emissions are included in a life-cycle cap. Our experiments and the case study illustrate this failure mode and show how the proposed model resolves it by balancing an upfront embodied “spike” against long-run operational savings.
Several extensions can be the object of future work. Empirically grounded case studies would help calibrate embodied impacts and learning effects for specific asset classes, and the framework can be broadened to multiple impact categories beyond GHG emissions. From a modeling perspective, incorporating uncertainty (e.g., demand, carbon intensity, technology availability), richer policy mechanisms (e.g., banking/borrowing and explicit trading dynamics), and tighter decomposition may be options.

Funding

This research received no external funding.

Data Availability Statement

Raw data supporting the conclusions of this article are contained within the article. Any further clarification will be given on request.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1 summarizes the random generator used for the Experiment 4 benchmarks. For each instance size ( | I | , T , | K | ) , we draw asset-level baseline parameters independently (capacities, operating costs, and use-phase coefficients) and then construct technology-dependent coefficients via monotone multipliers across states, together with mild time trends that mimic background efficiency improvements and grid decarbonization. Demand is fixed at a constant utilization fraction of the aggregate initial capacity to ensure a nontrivial dispatch layer while preserving feasibility. The life-cycle cap is calibrated by interpolation between two transparent reference policies—a no-upgrade baseline (cost-minimizing dispatch) and a fast-upgrade benchmark (earliest feasible upgrades with emissions-minimizing dispatch)—which yields a feasible and typically binding cap across random seeds. All runs reported in the tables use the same generator settings; only the random seed changes across replications, enabling reproducibility and sensitivity assessment. For Family B, scenario expansion affects only the data of the dispatch layer (scenario-dependent demand and/or use-phase coefficients), while the upgrade dynamics and cap-calibration logic remain unchanged and are applied to expected (probability-weighted) references.
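A compact sketch of this generator, following the Table A1 settings, is given below. Variable names and the exact code structure are ours; the paper's implementation may differ in detail:

```python
import numpy as np

def make_family_a_instance(n_assets=50, T=8, K=4, seed=0):
    """Sample one Family A instance following the Table A1 settings.
    Returns coefficient tensors indexed (asset, state, period)."""
    rng = np.random.default_rng(seed)
    # Baseline draws (iid across assets).
    y_base = rng.uniform(8, 14, n_assets)        # capacities
    c_base = rng.uniform(15, 30, n_assets)       # operating costs
    e_base = rng.uniform(8, 16, n_assets)        # use-phase coefficients
    cI_base = rng.uniform(800, 1600, n_assets)   # base adjacent upgrade step
    # Monotone technology multipliers across states.
    m_y = np.array([1.00, 1.10, 1.25, 1.40])     # increasing capacity
    m_c = np.array([1.00, 0.90, 0.80, 0.72])     # decreasing cost
    m_e = np.array([1.00, 0.85, 0.70, 0.58])     # decreasing emissions
    # Mild time trends: background improvement g_t and investment decline h_t^I.
    g = 1 - 0.10 * np.arange(T) / (T - 1)
    h_I = 1 - 0.05 * np.arange(T) / (T - 1)
    # Coefficient tensors via broadcasting, shape (n_assets, K, T).
    cap = y_base[:, None, None] * m_y[None, :, None] * np.ones(T)
    c_op = c_base[:, None, None] * m_c[None, :, None] * g[None, None, :]
    e_use = e_base[:, None, None] * m_e[None, :, None] * g[None, None, :]
    # Adjacent upgrade steps k -> k+1 scaled by 1.15^(k-1), shape (n, K-1, T).
    c_inv = (cI_base[:, None, None]
             * (1.15 ** np.arange(K - 1))[None, :, None]
             * h_I[None, None, :])
    return cap, c_op, e_use, c_inv
```

The same broadcasting pattern extends to the embodied-step tensor (Uniform[30, 80] base, 1.10^{k−1} scaling, κ_B = 0.03 trend), omitted here for brevity.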
Table A1. Random instance generator settings used in Experiment 4 for benchmark Families A (deterministic) and B (scenario-expanded). Unless otherwise stated, parameters are identical across Families A and B; Family B additionally samples scenario-dependent data.
Component | Symbol | Generator Setting
Dimensions | |I|, T, |K| | |K| = 4; (|I|, T) ∈ {(50, 8), (100, 10), (200, 12), (300, 12)}.
Technology states | K | Ordered states k = 1, …, 4 (higher k = cleaner/lower operating cost).
Initial states | x_{i,k,0} | Heterogeneous initial mix (iid across assets); default probabilities (0.50, 0.30, 0.15, 0.05) for k = 1, 2, 3, 4.
Capacity
Baseline capacity | ȳ_i | iid Uniform[8, 14].
Tech multipliers | m_k^y | Increasing multipliers (e.g., (1.00, 1.10, 1.25, 1.40)).
Capacity tensor | ȳ_{i,k,t} | ȳ_{i,k,t} = ȳ_i · m_k^y (time-invariant in the benchmark).
Demand level | D_t | Constant over time; set to a utilization fraction of initial aggregate capacity: D_t = η Σ_i Σ_k ȳ_{i,k,1} x_{i,k,0} with η = 0.88.
Operating costs and use-phase coefficients
Baseline operating cost | c̄_i^O | iid Uniform[15, 30].
Cost multipliers | m_k^c | Decreasing in k (e.g., (1.00, 0.90, 0.80, 0.72)).
Baseline use-phase coeff. | ē_i^U | iid Uniform[8, 16].
Emissions multipliers | m_k^e | Decreasing in k (e.g., (1.00, 0.85, 0.70, 0.58)).
Time improvement factor | g_t | Linear improvement over the horizon to mimic background efficiency/decarbonization: g_t = 1 − κ (t − 1)/(T − 1) with κ = 0.10.
Operating costs | c_{i,k,t}^O | c_{i,k,t}^O = c̄_i^O · m_k^c · g_t.
Use-phase impacts | e_{i,k,t}^U | e_{i,k,t}^U = ē_i^U · m_k^e · g_t.
Upgrade (investment) costs and embodied impacts
Baseline upgrade cost step | c̄_i^I | iid Uniform[800, 1600] (base adjacent step).
Step scaling by level | — | Adjacent step (k → k + 1) scaled by 1.15^{k−1}.
Time trend (investment) | h_t^I | Mild decline over time: h_t^I = 1 − κ_I (t − 1)/(T − 1), κ_I = 0.05.
Investment costs | c_{i,k→k+1,t}^I | c_{i,k→k+1,t}^I = c̄_i^I · 1.15^{k−1} · h_t^I.
Baseline embodied step | ē_i^B | iid Uniform[30, 80] kgCO2e.
Step scaling by level | — | Adjacent step (k → k + 1) scaled by 1.10^{k−1}.
Time trend (embodied) | h_t^B | Mild decline over time: h_t^B = 1 − κ_B (t − 1)/(T − 1), κ_B = 0.03.
Embodied impacts | e_{i,k→k+1,t}^B | e_{i,k→k+1,t}^B = ē_i^B · 1.10^{k−1} · h_t^B.
Discounting and cap calibration
Economic discount | δ_t | δ_t = (1 + ρ)^{−(t−1)} with ρ = 0.05.
Impact time-weighting | γ_t | γ_t = (1 + ρ_C)^{−(t−1)} with ρ_C = 0 unless otherwise stated.
Baseline reference | E_base | No-upgrade policy; dispatch is cost-minimizing (no cap).
Fast-upgrade reference | E_fast | Earliest feasible upgrades (one step per period until k = |K|); dispatch is emissions-minimizing per period.
Cap rule | Ē | Ē = (1 − ω) E_base + ω E_fast with ω = 0.7.
Scenario expansion (Family B only)
# scenarios | Ω | Varied (e.g., {5, 10, 20, 50} for sensitivity; Ω = 10 in the main Family B comparison).
Scenario probs. | p_ω | Uniform: p_ω = 1/Ω.
Scenario demand (optional) | D_t^(ω) | D_t^(ω) = D_t · max{0, 1 + ε_t^(ω)}, ε_t^(ω) ~ N(0, σ_D²) with σ_D = 0.05.
Scenario use-phase (optional) | e^{U,(ω)} | e_{i,k,t}^{U,(ω)} = e_{i,k,t}^U · max{0, 1 + ξ_{i,k,t}^(ω)}, ξ^(ω) ~ N(0, σ_e²) with σ_e = 0.05.
Family B cap | Ē | Calibrated on the expected references: E_base = Σ_ω p_ω E_ω^base and E_fast = Σ_ω p_ω E_ω^fast.

References

  1. ISO 14040:2006; Environmental Management—Life Cycle Assessment—Principles and Framework. International Organization for Standardization: Geneva, Switzerland, 2006. Available online: https://www.iso.org/standard/37456.html (accessed on 9 January 2026).
  2. Hyatt, A.; Samuelson, H.W. Accounting for Whole-Life Carbon, the Time Value of Carbon, and Grid Decarbonization in Cost–Benefit Analyses of Residential Retrofits. Sustainability 2025, 17, 2935. [Google Scholar] [CrossRef]
  3. Paris, Z.; Marland, E.; Sohngen, B.; Marland, G.; Jenkins, J. The time value of carbon storage. For. Policy Econ. 2022, 144, 102840. [Google Scholar] [CrossRef]
  4. Behrens, J.; Zeyen, E.; Hoffmann, M.; Stolten, D.; Weinand, J.M. Reviewing the complexity of endogenous technological learning for energy system modeling. Adv. Appl. Energy 2024, 16, 100192. [Google Scholar] [CrossRef]
  5. IEA-ETSAP/TIMES Documentation. The Endogenous Technological Learning Extension—TIMES Documentation. Available online: https://times.readthedocs.io/en/latest/part-1/11-tech-learning.html (accessed on 9 January 2026).
  6. Loulou, R.; Remne, U.; Kanudia, A.; Lehtilä, A.; Goldstein, G. Documentation for the TIMES Model: Part I (General Introduction). IEA-ETSAP Documentation. Available online: https://www.iea-etsap.org/docs/TIMESDoc-Intro.pdf (accessed on 9 January 2026).
  7. Seebregts, A.J.; Kram, T.; Schaeffer, G.J.; Stoffer, A.; Kypreos, S.; Barreto, L.; Messner, S.; Schrattenholzer, L. Endogenous Technological Change in Energy System Models: Synthesis of Experience with ERIS, MARKAL and MESSAGE; Report ECN-C–99-025; Paul Scherrer Institute: Laxenburg, Austria, 1999. [Google Scholar]
  8. Sloan, T.W. Green renewal: Incorporating environmental factors in equipment replacement decisions under technological change. J. Clean. Prod. 2011, 19, 173–186. [Google Scholar] [CrossRef]
  9. Abdi, A.; Taghipour, S. Sustainable asset management: A repair-replacement decision model considering environmental impacts, maintenance quality, and risk. Comput. Ind. Eng. 2019, 136, 117–134. [Google Scholar] [CrossRef]
  10. Hummen, T.; Desing, H. When to replace products with which (circular) strategy? An optimization approach and lifespan indicator. Resour. Conserv. Recycl. 2021, 174, 105704. [Google Scholar] [CrossRef]
  11. Schaubroeck, S.; Schaubroeck, T.; Baustert, P.; Gibon, T.; Benetto, E. When to replace a product to decrease environmental impact?—A consequential LCA framework and case study on car replacement. Int. J. Life Cycle Assess. 2020, 25, 1500–1521. [Google Scholar] [CrossRef]
  12. Cui, S.; Gao, K.; Yu, B.; Ma, Z.; Najafi, A. Joint optimal vehicle and recharging scheduling for mixed bus fleets under limited chargers. Transp. Res. Part E Logist. Transp. Rev. 2023, 180, 103335. [Google Scholar] [CrossRef]
  13. Green NCAP. Life Cycle Assessment Methodology and Data, 3rd ed.; Technical Report; Green NCAP: Leuven, Belgium, 2024. [Google Scholar]
  14. Zhao, D.; Chen, Y.; Yuan, H.; Chen, D. Life cycle optimization oriented to sustainable waste management and circular economy: A review. Waste Manag. 2025, 191, 89–106. [Google Scholar] [CrossRef]
  15. Hülagü, S.; Dullaert, W.; Eruguz, A.S.; Heijungs, R.; Inghels, D. Integrating life cycle assessment into supply chain optimization. PLoS ONE 2025, 20, e0316710. [Google Scholar] [CrossRef]
  16. Levasseur, A.; Lesage, P.; Margni, M.; Deschênes, L.; Samson, R. Considering Time in LCA: Dynamic LCA and Its Application to Global Warming Impact Assessments. Environ. Sci. Technol. 2010, 44, 3169–3174. [Google Scholar] [CrossRef] [PubMed]
  17. Slavkovic, K.; Stephan, A. Dynamic life cycle assessment of buildings and building stocks—A review. Renew. Sustain. Energy Rev. 2025, 212, 115262. [Google Scholar] [CrossRef]
  18. Salati, M.; Costa, A.A.; Silvestre, J.D. A Comprehensive Review of Dynamic Life Cycle Assessment for Buildings: Exploring Key Processes and Methodologies. Sustainability 2025, 17, 159. [Google Scholar] [CrossRef]
  19. Dong, S.; Wu, X. Technology choice under the cap-and-trade policy: The impact of emission cap and technology efficiency. Eur. J. Oper. Res. 2025, 326, 286–298. [Google Scholar] [CrossRef]
  20. Rajabian, A.; Ghaleb, M.; Taghipour, S. Optimal Replacement, Retrofit, and Management of a Fleet of Assets under Regulations of an Emissions Trading System. Eng. Econ. 2021, 66, 225–244. [Google Scholar] [CrossRef]
  21. Xiao, L.; Zhang, J.; Wang, C.; Han, R. Optimal fleet replacement management under cap-and-trade system with government subsidy uncertainty. Multimodal Transp. 2023, 2, 100077. [Google Scholar] [CrossRef]
  22. Yuan, X.; Lee, L.C.; Zhu, Y.; Wang, Y.; Zhang, H.; Zheng, H.; Chicaiza-Ortiz, C.; Liu, Z. Unveiling the inequality and embodied carbon emissions of China’s battery electric vehicles across life cycles by using a MRIO-based LCA model. Transp. Policy 2025, 174, 103827. [Google Scholar] [CrossRef]
  23. Benders, J.F. Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 1962, 4, 238–252. [Google Scholar] [CrossRef]
  24. Geoffrion, A.M. Generalized Benders Decomposition. J. Optim. Theory Appl. 1972, 10, 237–260. [Google Scholar] [CrossRef]
  25. Van Slyke, R.; Wets, R.J.B. L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming. SIAM J. Appl. Math. 1969, 17, 638–663. [Google Scholar] [CrossRef]
  26. Wright, T.P. Factors Affecting the Cost of Airplanes. J. Aeronaut. Sci. 1936, 3, 122–128. [Google Scholar] [CrossRef]
  27. McDonald, A.; Schrattenholzer, L. Learning rates for energy technologies. Energy Policy 2001, 29, 255–261. [Google Scholar] [CrossRef]
  28. Barwick, P.J.; Kwon, H.S.; Li, S.; Zahur, N.B. Drive Down the Cost: Learning by Doing and Government Policies in the Global EV Battery Industry; Technical Report w33378; National Bureau of Economic Research: Cambridge, MA, USA, 2025. [Google Scholar]
  29. Negri, M.; Bieker, G. Life-Cycle Greenhouse Gas Emissions from Passenger Cars in the European Union; Technical Report ID-392; International Council on Clean Transportation: Washington, DC, USA, 2025. [Google Scholar]
Figure 1. Overview of the proposed life-cycle constrained planning model and solution approach.
Figure 2. Experiment 1: decomposition of the achieved time-weighted life-cycle impacts into use-phase (E^U) and embodied (E^B) components for the two compared formulations.
Figure 3. Experiment 2: cap tightness as the cap fraction τ varies. The cap Ē is calibrated as Ē = τ · E_base and is compared to the achieved E.
Figure 4. Experiment 2: objective value (discounted cost) vs. cap fraction τ . The objective increases when the cap becomes binding and additional upgrades are required.
Figure 5. Experiment 2: sensitivity to time-weighting. For each ρ_C, the cap is recalibrated as Ē = 0.65 · E_base(ρ_C) and compared to the achieved E.
Figure 6. Experiment 4 (Family A): runtime comparison between the monolithic MILP and the outer-loop Benders implementation on moderate instances.
Figure 7. Experiment 4 (Family B, Ω = 10 ): runtime comparison between the scenario-expanded monolithic MILP and the outer-loop Benders implementation.
Figure 8. Family A cap-weight sensitivity: solve time vs. ω (mean ± sd over 5 seeds).
Figure 9. Family A cap-weight sensitivity: number of upgrades vs. ω (mean ± sd over 5 seeds).
Figure 10. Family A utilization sweep: solve time and number of executed upgrades as a function of the demand fraction η (means over 5 seeds).
Figure 11. Family B: dispatch LP time per Benders iteration (LP/iter, build + solve) vs. number of scenarios Ω. The master problem is independent of Ω, so this curve captures the dominant Ω-dependent component of the decomposition runtime.
Table 1. Dynamic LCA vs. time-value weighting γ_t (exemplification).
Profile | Emissions (e_1, e_2) | Characterized (ẽ_1, ẽ_2) | Unweighted Σ_t ẽ_t | Weighted Σ_t γ_t ẽ_t
A (early) | (100, 0) | (100, 0) | 100 | 100 γ_1
B (late) | (0, 100) | (0, 100) | 100 | 100 γ_2
Table 2. Positioning relative to representative literature streams using a fixed coding: LCA structure is No/Partial/Yes/System-level (explicit process-based integration at the system level rather than asset/transition level); Time-weighting is Yes if impacts are explicitly weighted by time in the model objective/constraints, otherwise No; Endogenous learning is Yes (cost-only) or Yes (cost + embodied) if learning endogenously updates coefficients via cumulative deployment, otherwise No.
Paper(s) | LCA Structure | Time-Weighting | Endogenous Learning
[8,9,10,11] | Partial | No | No
[14,15,16,17,18] | Yes | No | No
[12,19,20,21] | Partial | No | No
[4,5,7,26,27,28] | System-level | Yes | Yes (cost-only)
This work | Yes | Yes | Yes (cost + embodied)
Table 3. Life-cycle accounting changes the recommended upgrade schedule under the same emissions cap (toy instance, I = 2 , K = 4 , T = 4 ). “Operational-only cap” constrains use-phase impacts only; “Model life-cycle cap” constrains use-phase plus embodied impacts.
Model | Schedule | Obj (€) | E^U (kgCO2e) | E^B (kgCO2e) | E (kgCO2e)
Operational-only cap | i = 1: none; i = 2: t_1: 1 → 3, t_2: 3 → 4 | 533.55 | 180.00 | 40.00 | 220.00
Model life-cycle cap | i = 1: t_1: 1 → 2; i = 2: t_1: 1 → 3, t_2: 3 → 4 | 534.93 | 140.00 | 55.00 | 195.00
Table 4. Sensitivity to cap tightness on the toy instance (ρ_C = 0). The cap is set as Ē = τ · E_base, where E_base is the no-upgrade time-weighted life-cycle impact. “# Upgrades” is the number of executed upgrades.
τ | Ē (kgCO2e) | Obj (€) | E (kgCO2e) | # Upgrades
0.90 | 324.00 | 533.55 | 220.00 | 2
0.80 | 288.00 | 533.55 | 220.00 | 2
0.75 | 270.00 | 533.55 | 220.00 | 2
0.70 | 252.00 | 533.55 | 220.00 | 2
0.65 | 234.00 | 533.55 | 220.00 | 2
0.60 | 216.00 | 534.93 | 195.00 | 3
Table 5. Effect of time-weighting on the optimal schedule for the toy instance at fixed relative cap tightness Ē = 0.65 · E_base(ρ_C). “# Upgrades” is the number of executed upgrades.
ρ_C | Ē (kgCO2e) | Obj (€) | E (kgCO2e) | # Upgrades
0.00 | 234.00 | 533.55 | 220.00 | 2
0.05 | 217.81 | 533.55 | 208.45 | 2
0.15 | 192.07 | 533.55 | 190.02 | 2
0.30 | 164.74 | 534.93 | 157.18 | 3
Table 6. Learning inputs used in Experiment 3.
Item | Symbol | Value(s)
Learning families | F | {f}
Family mapping | fam(·) | fam(2) = f, fam(3) = f
Initial deployment | q_{f,0} | 1
Tier thresholds | (Q_{f,s})_{s=0..3} | (1, 2, 4, 8)
Learning exponents | (b_f^I, b_f^B) | (1.0, 0.5)
Cost multipliers | (α_{f,s}^I)_{s=1..3} | (1.000, 0.500, 0.250)
Embodied multipliers | (α_{f,s}^B)_{s=1..3} | (1.000, 0.707, 0.500)
Table 7. Learning-by-doing can shift upgrade timing and reduce discounted cost under the same life-cycle cap (toy instance). Obj is total discounted economic cost (monetary units); E is the time-weighted life-cycle impact Σ_{t∈T} γ_t (E_t^U + E_t^B) in kgCO2e. “# Upgrades” is the number of executed upgrades.
Variant | Schedule | Obj (€) | E (kgCO2e) | # Upgrades
No learning (b^I = b^B = 0) | i = 1: t_1: 1 → 3; i = 2: t_1: 1 → 3 | 835.79 | 306.31 | 2
Learning (b^I = 1.0, b^B = 0.5) | i = 1: t_1: 1 → 2, t_2: 2 → 3; i = 2: t_1: 1 → 3 | 718.51 | 295.91 | 3
Table 8. Scalability of the learning-by-doing tiered approximation as the number of tiers S increases. We use a representative instance with (|I|, T, |K|) = (50, 8, 4). Learning uses one family, q_0 = 1, geometric tier thresholds up to q_0 + |I| T with big-M = q_0 + |I| T, and exponents (b^I, b^B) = (0.2, 0.1). All runs use HiGHS (via scipy.optimize.milp), 1 thread, time limit 30 s, and target relative MIP gap 10⁻⁶. “Added” counts refer to the learning extension only (relative to the core model).
SAdded BinariesAdded ConstraintsTotal ConstraintsTime (s)MIP Gap
22416844813,30717.4 0.0
3362412,06416,92330.0 4.2 × 10 2
4483215,68020,53930.0 5.3 × 10 2
6724822,91227,77130.0
Table 9. Results on large random instances (Family A; 5 seeds each). “Instance” reports (|I|, T, |K|). Running times are in seconds (mean ± sd); the target relative MILP gap is 10^-6. “Opt.” is the number of runs (out of five) that reached the target gap within the time limit. “Nodes” is the branch-and-bound node count. “# Upgrades” is the number of executed upgrades.

| Instance | Opt. | Time (s) | Nodes | Gap_max | # Upgrades | E (kgCO2e) | Slack_max |
|---|---|---|---|---|---|---|---|
| (50, 8, 4) | 5 | 1.04 ± 0.87 | 1 ± 0 | 1.4 × 10^-16 | 22 ± 3 | 34,057.1 ± 1391.6 | 4.37 × 10^-11 |
| (100, 10, 4) | 5 | 3.30 ± 2.19 | 3 ± 3 | 9.4 × 10^-7 | 76 ± 5 | 81,181.4 ± 2765.0 | 1.31 × 10^-10 |
| (200, 12, 4) | 5 | 7.45 ± 2.94 | 2 ± 1 | 6.7 × 10^-7 | 175 ± 4 | 181,316.1 ± 3147.3 | 1.46 × 10^-10 |
| (300, 12, 4) | 5 | 12.47 ± 3.44 | 1 ± 0 | 4.7 × 10^-7 | 261 ± 7 | 272,570.6 ± 4043.0 | 3.49 × 10^-10 |
Table 10. Per-instance results for the large random benchmark (Family A). “# Upgrades” is the number of executed upgrades.

| Instance | Seed | Time (s) | Gap | Obj (€) | E^U (kgCO2e) | E^B (kgCO2e) | E (kgCO2e) | # Upgrades |
|---|---|---|---|---|---|---|---|---|
| (50, 8, 4) | 1 | 0.46 | 1.4 × 10^-16 | 213,587.7 | 34,468.1 | 1189.8 | 35,657.9 | 24 |
| (50, 8, 4) | 2 | 2.42 | 0.0 | 203,179.8 | 33,692.1 | 1015.6 | 34,707.7 | 21 |
| (50, 8, 4) | 3 | 1.40 | 0.0 | 212,374.2 | 33,248.1 | 1320.8 | 34,568.9 | 26 |
| (50, 8, 4) | 4 | 0.44 | 0.0 | 201,554.3 | 31,109.6 | 982.4 | 32,092.1 | 20 |
| (50, 8, 4) | 5 | 0.49 | 0.0 | 209,756.4 | 32,280.3 | 978.5 | 33,258.9 | 21 |
| (100, 10, 4) | 1 | 1.50 | 0.0 | 504,698.5 | 74,745.0 | 3761.9 | 78,506.9 | 78 |
| (100, 10, 4) | 2 | 6.41 | 0.0 | 515,805.3 | 80,525.9 | 3804.8 | 84,330.7 | 76 |
| (100, 10, 4) | 3 | 2.66 | 0.0 | 509,479.0 | 77,106.1 | 3784.8 | 80,890.8 | 74 |
| (100, 10, 4) | 4 | 4.63 | 0.0 | 511,997.1 | 79,538.9 | 4139.6 | 83,678.5 | 84 |
| (100, 10, 4) | 5 | 1.28 | 9.4 × 10^-7 | 498,366.2 | 74,879.3 | 3620.6 | 78,499.9 | 70 |
| (200, 12, 4) | 1 | 6.15 | 0.0 | 1,151,044.3 | 172,348.5 | 8511.6 | 180,860.0 | 169 |
| (200, 12, 4) | 2 | 12.42 | 6.7 × 10^-7 | 1,112,159.4 | 170,625.5 | 8884.3 | 179,509.8 | 173 |
| (200, 12, 4) | 3 | 5.97 | 0.0 | 1,153,106.2 | 170,822.3 | 9134.9 | 179,957.2 | 176 |
| (200, 12, 4) | 4 | 7.74 | 5.5 × 10^-7 | 1,106,676.4 | 177,782.6 | 9069.0 | 186,851.5 | 176 |
| (200, 12, 4) | 5 | 5.00 | 4.8 × 10^-7 | 1,124,037.4 | 170,089.7 | 9312.2 | 179,401.9 | 179 |
| (300, 12, 4) | 1 | 18.09 | 0.0 | 1,698,561.5 | 259,840.5 | 13,647.5 | 273,488.1 | 271 |
| (300, 12, 4) | 2 | 13.21 | 0.0 | 1,722,169.6 | 263,895.2 | 13,383.8 | 277,279.0 | 264 |
| (300, 12, 4) | 3 | 10.07 | 4.7 × 10^-7 | 1,632,953.5 | 254,248.7 | 13,053.9 | 267,302.6 | 258 |
| (300, 12, 4) | 4 | 9.63 | 2.5 × 10^-7 | 1,715,825.5 | 256,275.9 | 13,418.0 | 269,693.9 | 261 |
| (300, 12, 4) | 5 | 11.37 | 0.0 | 1,660,864.1 | 262,245.7 | 12,843.8 | 275,089.5 | 253 |
Table 11. Family A: monolithic MILP vs. Benders-like approach. “Iter.” is the number of Benders iterations (master solves) to reach an outer-loop relative bound gap of 10^-6. “LP/iter” is the average dispatch LP time; it is typically small relative to the per-iteration master time.

| (|I|, T, |K|) | MILP Time (s) | Benders Time (s) | Iter. | Master/Iter (s) | LP/Iter (s) |
|---|---|---|---|---|---|
| (50, 8, 4) | 1.0 | 2.8 | 12 | 0.22 | 0.01 |
| (100, 10, 4) | 3.3 | 11.0 | 18 | 0.59 | 0.02 |
| (200, 12, 4) | 7.5 | 26.5 | 23 | 1.12 | 0.03 |
| (300, 12, 4) | 12.5 | 30.0 | 30 | 2.08 | 0.05 |
Table 12. Family B: monolithic MILP vs. Benders-like approach. “Gap” is the final reported MIP gap at termination. Benders uses an outer-loop bound-gap target of 10^-6 and reports its achieved bound gap at termination.

| (|I|, T, |K|) | MILP Time (s) | MILP Gap | Benders Time (s) | Benders Gap | Iter. | LP/Iter (s) |
|---|---|---|---|---|---|---|
| (50, 8, 4) | 6.8 | 1.0 × 10^-6 | 9.5 | 1.0 × 10^-6 | 14 | 0.08 |
| (100, 10, 4) | 30.0 | 2.4 × 10^-4 | 18.7 | 1.0 × 10^-6 | 19 | 0.12 |
| (200, 12, 4) | 30.0 | 1.6 × 10^-3 | 27.9 | 1.0 × 10^-6 | 23 | 0.20 |
| (300, 12, 4) | 30.0 | 1.2 × 10^-1 | 29.8 | 1.0 × 10^-6 | 25 | 0.22 |
Table 13. Sensitivity of the cap calibration weight ω on Family A instances (5 seeds each). The cap is set as Ē(ω) = (1 − ω) E_base + ω E_fast with ω ∈ {0.3, 0.5, 0.7, 0.9}. Reported values are mean ± sd over 5 seeds. “Opt.” counts runs (out of 5) reaching the target MIP gap within the time limit. “# Upgrades” is the number of executed upgrades.

| Instance | ω | Opt. | Time (s) | Obj (€) | # Upgrades | E (kgCO2e) |
|---|---|---|---|---|---|---|
| (50, 8, 4) | 0.3 | 5 | 1.03 ± 0.69 | 39,469.8 ± 619.9 | 16.8 ± 1.3 | 23,769.7 ± 261.0 |
| (50, 8, 4) | 0.5 | 5 | 2.29 ± 1.50 | 41,834.2 ± 676.4 | 30.8 ± 2.3 | 21,804.8 ± 233.6 |
| (50, 8, 4) | 0.7 | 5 | 2.12 ± 1.55 | 45,047.4 ± 751.2 | 47.6 ± 2.4 | 19,839.9 ± 228.3 |
| (50, 8, 4) | 0.9 | 5 | 5.89 ± 2.93 | 49,546.5 ± 848.5 | 66.4 ± 3.6 | 17,875.0 ± 246.5 |
| (100, 10, 4) | 0.3 | 5 | 4.26 ± 1.10 | 88,267.4 ± 2665.5 | 37.2 ± 4.1 | 57,163.2 ± 2092.2 |
| (100, 10, 4) | 0.5 | 5 | 3.99 ± 1.52 | 92,603.8 ± 2715.9 | 65.4 ± 5.4 | 51,940.0 ± 1843.1 |
| (100, 10, 4) | 0.7 | 5 | 4.07 ± 1.03 | 98,615.2 ± 2568.1 | 98.4 ± 5.6 | 46,716.8 ± 1599.5 |
| (100, 10, 4) | 0.9 | 5 | 4.79 ± 0.92 | 107,699.5 ± 2995.5 | 136.4 ± 5.6 | 41,493.6 ± 1364.4 |
| (200, 12, 4) | 0.3 | 5 | 9.46 ± 1.95 | 193,930.1 ± 2263.8 | 77.6 ± 4.3 | 130,419.1 ± 898.4 |
| (200, 12, 4) | 0.5 | 5 | 10.21 ± 2.27 | 201,413.8 ± 1802.3 | 137.6 ± 6.0 | 117,872.1 ± 613.5 |
| (200, 12, 4) | 0.7 | 5 | 11.32 ± 1.38 | 212,039.3 ± 1260.0 | 201.2 ± 7.5 | 105,325.0 ± 475.1 |
| (200, 12, 4) | 0.9 | 5 | 12.31 ± 0.23 | 228,160.6 ± 1168.9 | 275.6 ± 6.4 | 92,778.0 ± 595.9 |
| (300, 12, 4) | 0.3 | 5 | 16.73 ± 1.89 | 292,361.2 ± 4700.9 | 116.6 ± 4.5 | 196,313.6 ± 4530.5 |
| (300, 12, 4) | 0.5 | 5 | 16.77 ± 1.42 | 303,491.0 ± 4775.3 | 204.6 ± 9.8 | 177,409.6 ± 3695.0 |
| (300, 12, 4) | 0.7 | 5 | 21.87 ± 5.44 | 319,489.7 ± 4848.0 | 301.6 ± 11.7 | 158,505.5 ± 2874.6 |
| (300, 12, 4) | 0.9 | 5 | 28.29 ± 0.63 | 343,117.6 ± 5355.9 | 414.8 ± 16.5 | 139,601.4 ± 2087.2 |
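For concreteness, the cap calibration of Table 13 can be sketched as follows; the E_base and E_fast values below are illustrative placeholders, not the generated instance values:

```python
# Sketch of the cap calibration in Table 13: the life-cycle cap is a convex
# combination of a "no-upgrade" baseline and a "fast-upgrade" bound,
#   E_bar(omega) = (1 - omega) * E_base + omega * E_fast.
# E_base and E_fast are hypothetical placeholders here.

def calibrated_cap(e_base, e_fast, omega):
    """Convex combination of the two emission bounds."""
    if not 0.0 <= omega <= 1.0:
        raise ValueError("omega must lie in [0, 1]")
    return (1.0 - omega) * e_base + omega * e_fast

e_base, e_fast = 30000.0, 15000.0  # illustrative bounds (kgCO2e)
caps = [calibrated_cap(e_base, e_fast, w) for w in (0.3, 0.5, 0.7, 0.9)]
# Increasing omega tightens the cap toward E_fast, which is consistent with
# the monotonically increasing "# Upgrades" column in Table 13.
```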
Table 14. Sensitivity to the utilization parameter η used to set demand D_t = η Σ_{i,k} ȳ_{i,k,t} x_{i,k,0} in the random Family A generator. We report mean ± sd over 5 seeds. All runs use a 30 s limit (HiGHS, single thread, relative gap 10^-6). “# Upgrades” is the number of executed upgrades.

| Instance | η | Opt. | Time (s) | Gap_max | Obj (€) | # Upgrades | E (kgCO2e) | Slack_max |
|---|---|---|---|---|---|---|---|---|
| (50, 8, 4) | 0.75 | 5 | 0.59 ± 0.76 | 0.0 | 50,459.0 ± 971.6 | 9.8 ± 1.1 | 14,316.8 ± 347.0 | −1.8 × 10^-12 |
| (50, 8, 4) | 0.88 | 5 | 0.22 ± 0.16 | 1.8 × 10^-7 | 61,525.8 ± 1053.6 | 13.8 ± 1.3 | 17,330.2 ± 356.6 | 1.1 × 10^-11 |
| (50, 8, 4) | 0.95 | 5 | 0.57 ± 0.28 | 0.0 | 67,607.5 ± 1109.3 | 15.8 ± 1.3 | 18,953.3 ± 365.4 | 1.5 × 10^-11 |
| (100, 10, 4) | 0.75 | 5 | 0.97 ± 0.51 | 0.0 | 118,830.3 ± 2373.9 | 21.6 ± 1.1 | 34,810.3 ± 760.8 | 3.6 × 10^-11 |
| (100, 10, 4) | 0.88 | 5 | 1.67 ± 1.22 | 2.1 × 10^-7 | 144,460.3 ± 2648.9 | 29.6 ± 1.1 | 42,145.7 ± 845.4 | 2.9 × 10^-11 |
| (100, 10, 4) | 0.95 | 5 | 1.31 ± 0.83 | 0.0 | 158,505.8 ± 2802.7 | 33.8 ± 1.3 | 46,112.0 ± 885.6 | 2.2 × 10^-11 |
| (200, 12, 4) | 0.75 | 5 | 2.60 ± 0.81 | 0.0 | 265,254.5 ± 4129.0 | 45.0 ± 2.9 | 80,758.8 ± 1099.5 | −1.5 × 10^-11 |
| (200, 12, 4) | 0.88 | 5 | 1.97 ± 0.84 | 3.1 × 10^-7 | 321,733.3 ± 4655.1 | 60.8 ± 2.9 | 97,772.2 ± 1215.2 | 3.1 × 10^-10 |
| (200, 12, 4) | 0.95 | 5 | 4.54 ± 2.08 | 0.0 | 352,742.1 ± 4997.6 | 69.6 ± 3.0 | 106,934.8 ± 1305.6 | 5.8 × 10^-11 |
| (300, 12, 4) | 0.75 | 5 | 4.96 ± 2.64 | 0.0 | 398,279.1 ± 5132.6 | 69.2 ± 4.5 | 121,072.8 ± 1889.3 | 1.5 × 10^-10 |
| (300, 12, 4) | 0.88 | 5 | 6.31 ± 5.64 | 7.1 × 10^-8 | 482,883.3 ± 5610.2 | 92.8 ± 4.1 | 146,500.1 ± 1853.9 | 3.2 × 10^-10 |
| (300, 12, 4) | 0.95 | 5 | 5.79 ± 4.22 | 9.2 × 10^-7 | 529,343.1 ± 5933.9 | 105.8 ± 4.2 | 160,213.6 ± 1861.2 | 2.0 × 10^-10 |
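The demand rule behind Table 14 can be sketched as follows; the capacity data and initial assignment below are invented for illustration, not drawn from the Family A generator:

```python
# Sketch of the Family A demand rule: per-period demand is a fixed
# utilization eta of the initially installed capacity,
#   D_t = eta * sum_{i,k} ybar_{i,k,t} * x_{i,k,0}.
# Data below are illustrative placeholders.

def demand(eta, ybar_t, x0):
    """ybar_t[i][k]: per-unit capacity of asset i under technology k in
    period t; x0[i][k]: 1 if asset i starts in technology k, else 0."""
    return eta * sum(
        ybar_t[i][k] * x0[i][k]
        for i in range(len(x0))
        for k in range(len(x0[i]))
    )

ybar_t = [[100.0, 120.0], [80.0, 110.0]]  # two assets, two technologies
x0 = [[1, 0], [1, 0]]                     # both assets start in technology 0
d = demand(0.75, ybar_t, x0)              # 0.75 * (100 + 80) = 135.0
```

Higher η leaves less slack capacity, so more upgrades are needed to keep serving demand under the cap, matching the monotone columns of Table 14.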
Table 15. Family B: Ω-sensitivity of the scenario dispatch layer. We report the size of the scenario-expanded dispatch LP solved in each Benders iteration and the corresponding solve time (LP/iter, build + solve), for (|I|, T, |K|) ∈ {(50, 8, 4), (100, 10, 4)}.

| Instance | Ω | LP per Iter (s) | LP Vars | LP Cons | LP Nnz | Seed |
|---|---|---|---|---|---|---|
| (50, 8, 4) | 5 | 0.0248 | 8000 | 8041 | 24,000 | 3 |
| (50, 8, 4) | 10 | 0.0277 | 16,000 | 16,081 | 48,000 | 3 |
| (50, 8, 4) | 20 | 0.0909 | 32,000 | 32,161 | 96,000 | 3 |
| (50, 8, 4) | 50 | 0.2151 | 80,000 | 80,401 | 240,000 | 3 |
| (100, 10, 4) | 5 | 0.0303 | 20,000 | 20,051 | 60,000 | 3 |
| (100, 10, 4) | 10 | 0.0718 | 40,000 | 40,101 | 120,000 | 3 |
| (100, 10, 4) | 20 | 0.1458 | 80,000 | 80,201 | 240,000 | 3 |
| (100, 10, 4) | 50 | 0.5738 | 200,000 | 200,501 | 600,000 | 3 |
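The LP sizes in Table 15 are consistent with simple closed forms, inferred here from the reported numbers rather than stated in the text: one dispatch variable per scenario, asset, period, and technology, plus roughly one demand row per scenario-period:

```python
# Inferred size formulas for the scenario-expanded dispatch LP (Table 15):
#   vars = Omega * |I| * T * |K|
#   cons = vars + Omega * T + 1
#   nnz  = 3 * vars
# These are empirical fits to the reported sizes, not stated model formulas.

def lp_size(n_assets, horizon, n_tech, omega):
    """Return (variables, constraints, nonzeros) of the dispatch LP."""
    nvars = omega * n_assets * horizon * n_tech
    ncons = nvars + omega * horizon + 1
    return nvars, ncons, 3 * nvars

# Reproduces, e.g., the (50, 8, 4), Omega = 5 row: 8000 vars, 8041 cons,
# 24,000 nonzeros, and the (100, 10, 4), Omega = 50 row.
```

This linear growth in Ω explains why decomposition pays off mainly when scenario expansion enlarges the dispatch layer.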
Table 18. Fleet and cap setup for the case study.

| Item | Value |
|---|---|
| Fleet size | I = 20 vehicles |
| Horizon | T = 10 years |
| Annual demand | D_t = 200,000 km/year (all t) |
| Annual capacity (10 vehicles) | 15,000 km/year each |
| Annual capacity (10 vehicles) | 9000 km/year each |
| Initial state | all ICE at t = 0 |
| Cap definition | Ē = 0.70 E_base |
| Baseline E_base | 410.0 tCO2e |
| Cap Ē | 287.0 tCO2e |
Table 19. Operational-only cap vs. full life-cycle cap (case study). “Cap slack” is Ē − (E^U + E^B), so a negative value indicates a violation of the life-cycle cap. Emissions are in tCO2e over the 10-year horizon.

| Model | Upgrades Executed at t = 1 | Obj (€) | E^U | E^B | E^U + E^B | Cap Slack |
|---|---|---|---|---|---|---|
| Operational-only cap | 3 ICE → BEV, 6 ICE → HEV (high-mileage vehicles) | 412,000 | 284.3 | 74.4 | 358.7 | −71.7 |
| Full life-cycle cap | 8 ICE → BEV (high-mileage vehicles) | 424,000 | 187.6 | 83.2 | 270.8 | 16.2 |
Table 20. Fleet case-study sensitivity to cap stringency. The cap is Ē = τ E_base, where E_base is the all-ICE baseline use-phase emissions (here E_base = 410.0 tCO2e over 10 years). Emissions are life-cycle totals (tCO2e) and slack is Ē − (E^U + E^B). “#” means “number of”.

| τ | Ē (tCO2e) | ICE → BEV (#) | BEV (#) | E^U (tCO2e) | E^B (tCO2e) | E^U + E^B (tCO2e) | Slack (tCO2e) | Obj (€) |
|---|---|---|---|---|---|---|---|---|
| 0.85 | 348.5 | 4 | 4 | 298.8 | 41.6 | 340.4 | 8.1 | 412,000 |
| 0.80 | 328.0 | 5 | 5 | 271.0 | 52.0 | 323.0 | 5.0 | 415,000 |
| 0.75 | 307.5 | 6 | 6 | 243.2 | 62.4 | 305.6 | 1.9 | 418,000 |
| 0.70 | 287.0 | 8 | 8 | 187.6 | 83.2 | 270.8 | 16.2 | 424,000 |
| 0.65 | 266.5 | 9 | 9 | 159.8 | 93.6 | 253.4 | 13.1 | 427,000 |
| 0.60 | 246.0 | 10 | 10 | 132.0 | 104.0 | 236.0 | 10.0 | 430,000 |
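The slack column of Table 20 follows directly from the cap definition; recomputing it from the tabulated emissions is a quick consistency check:

```python
# Consistency check for Table 20: the cap is E_bar = tau * E_base with
# E_base = 410.0 tCO2e, and slack = E_bar - (E_U + E_B).

E_BASE = 410.0  # all-ICE baseline use-phase emissions (tCO2e, 10 years)

def cap_slack(tau, e_use, e_embodied):
    """Remaining life-cycle headroom under cap tau * E_BASE (tCO2e)."""
    return round(tau * E_BASE - (e_use + e_embodied), 1)

# (tau, E_U, E_B, tabulated slack) for three rows of Table 20;
# e.g. tau = 0.85: 0.85 * 410.0 - (298.8 + 41.6) = 8.1.
rows = [
    (0.85, 298.8, 41.6, 8.1),
    (0.70, 187.6, 83.2, 16.2),
    (0.60, 132.0, 104.0, 10.0),
]
checks = [cap_slack(tau, eu, eb) == s for tau, eu, eb, s in rows]
```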
Table 21. Dispatch allocation in year t = 1 (kilometers served by technology).

| Model | ICE km | HEV km | BEV km |
|---|---|---|---|
| Operational-only cap | 65,000 | 90,000 | 45,000 |
| Full life-cycle cap | 80,000 | 0 | 120,000 |
Table 22. Full life-cycle model trajectory (year-by-year). Emissions are in tCO2e.

| t | ICE | HEV | BEV | ICE → HEV | ICE → BEV | HEV → BEV | E_t^U | E_t^B | Total | Cum. | Rem. Cap |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 12 | 0 | 8 | 0 | 8 | 0 | 18.76 | 83.20 | 101.96 | 101.96 | 185.04 |
| 2 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 120.72 | 166.28 |
| 3 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 139.48 | 147.52 |
| 4 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 158.24 | 128.76 |
| 5 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 177.00 | 110.00 |
| 6 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 195.76 | 91.24 |
| 7 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 214.52 | 72.48 |
| 8 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 233.28 | 53.72 |
| 9 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 252.04 | 34.96 |
| 10 | 12 | 0 | 8 | 0 | 0 | 0 | 18.76 | 0.00 | 18.76 | 270.80 | 16.20 |
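The running columns of Table 22 can be reproduced from the yearly totals and the cap Ē = 287.0 tCO2e:

```python
# Recompute the cumulative-impact and remaining-cap columns of Table 22.
CAP = 287.0  # life-cycle cap E_bar (tCO2e)
# Per-year totals E_t^U + E_t^B from Table 22: one upgrade year with
# embodied impact, then nine identical operating-only years.
yearly_totals = [101.96] + [18.76] * 9

cum, trajectory = 0.0, []
for total in yearly_totals:
    cum += total
    trajectory.append((round(cum, 2), round(CAP - cum, 2)))
# Year 1: cumulative 101.96, remaining 185.04;
# year 10: cumulative 270.80, remaining 16.20, matching the table.
```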
Caramia, M. A Life-Cycle Technology Upgrade Scheduling Model. Algorithms 2026, 19, 223. https://doi.org/10.3390/a19030223