1. Introduction
According to the current state of scientific knowledge, one can assume with a high level of confidence that (i) global warming of the Earth is happening, (ii) anthropogenic greenhouse gas (GHG) emissions are to a large extent responsible for this warming [1] and, therefore, (iii) GHG emissions from human activities must be mitigated to prevent severe damage to ecosystems [2], if possible limiting warming to 1.5 °C [3].
To support the design of climate mitigation targets and policies, and especially to analyze energy transition pathways ensuring a strong abatement of GHG emissions, one may rely on an integrated assessment (IA) approach. The latter typically combines the socio-economic elements that drive GHG emissions with the geophysical and environmental elements that determine climate changes and their impacts. Integrated assessment models (IAMs) are computational tools to perform IA. Examples of such models include BaHaMa [4,5,6], DICE [7], FUND [8], MERGE [9], PAGE [10] and TIAM-World [11]. IAMs operate under different paradigms (e.g., bottom-up or top-down, optimization or simulation). Furthermore, they vary with respect to the level of modelling detail for the mitigation options. At the two ends of the spectrum, the top-down model DICE aggregates all mitigation options into a single cost function, whereas the bottom-up TIAM-World model (following the TIMES paradigm of the International Energy Agency [12]) offers a technology-rich representation of the energy sector with thousands of energy technologies. This large variety of IAMs, along with our imperfect knowledge of climate change mechanisms, yields very different GHG emission abatement pathways (Figure 1). As a consequence, climate policy recommendations may vary widely across studies. For instance, Stern (2007, [13]), relying on PAGE, has advocated immediate action to abate GHG emissions. Conversely, Nordhaus (2008, [14]), with his DICE model, has reached the conclusion that immediate and massive actions are not necessary.
This large variance in outputs across models led some economists to consider the use of current IAMs with caution [17,18,19]. Indeed, the long-term energy–economy–climate outlook provided by current IAMs is clouded by a great degree of uncertainty that may deeply affect the relevance of the policy analyses performed and the validity of the policy recommendations formulated. This is mostly due to the multiple sources of uncertainty (see for instance [20,21]), ranging from cross-model structural uncertainty (the modelling paradigms and their underlying simplifications) to within-model parametric uncertainty (because models are calibrated with imperfectly known data, i.e., measurement uncertainty, and because of numerical assumptions about the future, i.e., radical uncertainty).
Therefore, dealing with risk and uncertainty in IAMs is a crucial question, both for scientists and for policy makers [22]. This general question has a long history and has led to a substantial body of literature. Several approaches have been followed so far, with applications to different sectors of IAMs. Deterministic multi-scenario analysis, sensitivity analysis and Monte-Carlo simulations, stochastic programming, and stochastic control have been applied, e.g., to uncertainty surrounding technology pathways [23] or economic growth [24]. In the specific case of climate modelling within IAMs, Refs. [25,26] identify the following main streams to handle uncertainty: discrete scenario-based modelling [27], real options analysis [28] and stochastic dynamic programming/control [29,30]. As a subset of this last category, authors have also developed dedicated closed-form IAMs to integrate uncertainty [31]. To these techniques, we can add multi-ensemble uncertainty analysis, where a large number of deterministic model outcomes are treated statistically [32].
These approaches prove useful, but all have drawbacks, and their implementation largely depends on the structure and size of the model at stake. Sensitivity analysis and Monte-Carlo simulations allow investigating the impact of particular parameters, but do not provide unambiguous hedging strategies; Crost and Traeger [33] demonstrate that, for this reason, there is no equivalence between the two approaches. The results of deterministic multi-scenario analysis are also difficult to interpret, as models are run in a deterministic way with little possibility of applying any probability distribution to the set of scenarios. One drawback of stochastic programming is that probability distributions have to be defined over the whole event tree, so conclusions might be (extremely) sensitive to the choice of scenarios and branching scheme. Moreover, stochastic programming may considerably increase the size of the problem to be solved, quickly leading to excessive computational times. The computational burden also typically limits the use of stochastic control approaches in IAMs.
The topic of uncertainty in integrated assessment remains on the scientific agenda, since it questions the relevance of the outcomes [34]. In this paper, we aim to contribute to this literature stream, which analyzes climate uncertainty in IAMs, by introducing robust optimization (RO) into a large-scale, surplus-maximization IAM.
Early developments of RO date back to Soyster [35], who initiated an approach to obtain relevant (feasible) solutions of linear optimization problems even when matrix coefficients are inexact. This work was motivated by a simple observation: even small variations in the data can affect the feasibility or optimality properties of a solution [36]. This idea has since been largely explored with different formalisms [37,38] or by generalizing the Soyster approach [39]. RO makes it possible to solve decision-making problems under uncertainty even when the underlying probabilities are not known; only assumptions on the bounds of the support are required. It consists in immunizing a solution against adverse realizations of the uncertain parameters within given uncertainty sets. The basic requirement for a robust solution is that the constraints of the problem are not violated regardless of the realization of the parameters in the set. The challenge then consists in identifying computable robust counterparts of the initial optimization program. Ben-Tal et al. [40] and Bertsimas et al. [41] review techniques for building such robust counterparts in general cases.
Up until now, RO has rarely been used in energy models [42], and even less so in IAMs, with the exceptions of Babonneau et al. [43] and Andrey et al. [44]. On the other hand, the uncertainty of some climate parameters has already been studied; see, for instance, [45] for an application of stochastic programming to the analysis of climate sensitivity, or [46] for research focusing on atmospheric CO2 concentration and using a maximin regret criterion in a linear model. Therefore, to the best of our knowledge, we propose the first application of robust optimization to a systematic analysis of uncertainty in the climate model parameters of a large-scale IAM. Although both the TIAM-World model and the robust optimization approach are well established, the novelty of this work lies in the application of RO to analyze the impact of climate uncertainty on IAM results. Our hope is that, in the future, such techniques become part of the modelling toolbox accessible to decision-makers, so as to incorporate uncertainty in investment or policy decisions in a more systematic way.
A first contribution of our paper is to propose a general robust approach to account for uncertainty in the simple climate models (SCMs) typically used by IAMs to represent climate evolution. Our approach relies on Bertsimas and Sim [39]. It consists in defining an uncertainty budget to control the degree of pessimism; in short, to limit the number of climate parameters allowed to deviate from their nominal values. We then obtain robust strategies by using a decomposition scheme that involves solving a series of slightly modified versions of the deterministic IAM. In comparison, Babonneau et al. [43] use robust optimization to protect the total future energy supply from possible perturbations of technological efficiencies. Their methodology exploits independence and first-moment information about underlying efficiency factors that have linear effects on the total available capacity in each period. Their solution scheme relies on second-order cone programming, which might limit the size of the problems that can be efficiently addressed. More recently, Andrey et al. [44] also choose to robustify total future energy supply, but make use of the budgeted uncertainty set of Bertsimas and Sim [39]. In contrast with these two approaches, our research focuses on how to robustify the world’s capacity to meet its targets regarding future temperature levels, given the currently available knowledge of the climate parameters. Moreover, our analysis accounts for plausible perturbations of these parameters that have strongly non-linear (instead of linear) effects on the temperature that will be reached. Finally, the solution scheme we propose shares an advantage with the method of Andrey et al. [44]: through a constraint generation algorithm, it preserves the linear structure of the IAM that needs to be solved.
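The interplay between a budgeted uncertainty set and constraint generation can be sketched on a generic linear constraint. The toy code below is purely illustrative (it is not the TIAM-World implementation, and all problem data are hypothetical): the master LP is solved with only the cuts generated so far, an adversarial subproblem picks the `gamma` deviations most harmful to the current solution, and any violated worst-case scenario is added back as a new linear cut, so the problem stays an LP throughout:

```python
import numpy as np
from scipy.optimize import linprog

def robust_lp_constraint_generation(c, a, a_hat, b, gamma, tol=1e-8):
    """Maximize c^T x (x >= 0) so that a^T x <= b holds whenever at most
    `gamma` coefficients a_i deviate upward by up to a_hat_i (a budgeted
    uncertainty set in the spirit of Bertsimas and Sim). Illustrative sketch."""
    n = len(c)
    cuts_A, cuts_b = [list(a)], [b]  # start from the nominal constraint only
    while True:
        res = linprog(-c, A_ub=cuts_A, b_ub=cuts_b, bounds=[(0, None)] * n)
        x = res.x
        # Adversarial subproblem: for x >= 0, the worst deviation pattern
        # simply picks the gamma indices with the largest impact a_hat_i * x_i.
        impact = a_hat * x
        worst = np.argsort(impact)[::-1][:gamma]
        a_worst = a.copy()
        a_worst[worst] += a_hat[worst]
        if a_worst @ x <= b + tol:      # robust feasible: done
            return x, -res.fun
        cuts_A.append(list(a_worst))    # add the violated scenario as a cut
        cuts_b.append(b)

# Hypothetical data for illustration: with gamma = 1, one coefficient
# at a time may jump to its upper bound.
c = np.array([3.0, 2.0, 1.0])
a = np.array([1.0, 1.0, 1.0])
a_hat = np.array([0.5, 0.2, 0.1])
x, obj = robust_lp_constraint_generation(c, a, a_hat, b=10.0, gamma=1)
print("robust objective:", obj)
```

Since there are finitely many deviation patterns, the loop terminates after finitely many cuts, and each master problem remains a slightly modified version of the original LP, which is the property that makes this scheme attractive for a large-scale linear IAM.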
The second contribution of this work is on the quantitative side. Our approach is implemented in the TIAM-World [47] integrated assessment model, which also relies on an SCM. We first define plausible uncertainty ranges for the climate parameters of the TIAM-World model and then calibrate these ranges, using the existing literature [48], against climate simulations from the MAGICC model [49]. Then, using a robust counterpart of TIAM-World, we enrich the climate debate by defining robust energy transition pathways for different global warming targets. In other words, we identify economic transition pathways under climate constraints for which the outcome scenarios remain relevant for any realization of the climate parameters. Moreover, we can assess which climate parameter, or which combination of climate parameters, is the most sensitive in our model, and we can quantify the cost of uncertainty. The originality of our results is that (i) unlike other studies (e.g., [50,51]), we consider uncertainty on all the climate system parameters of our IAM and (ii) we assess the cost of different protection levels and their impact on energy transition pathways.
The remainder of this paper is organized as follows. In Section 2, we first present the approach in the general case (for all IAMs). We then describe briefly, in Section 3, how we implement our RO approach in the TIAM-World model. Finally, in Section 4, we present numerical results for selected scenarios and review the different insights brought by the RO approach and how it can inform policy makers.
5. Conclusions
Climate modelling is hampered by a considerable amount of uncertainty owing to our incomplete knowledge of the climate system. As this uncertainty significantly impacts climate policy making, the need for tools to evaluate robust transition pathways is increasingly urgent. In this paper, we have presented a robust approach to handling climate uncertainty in Integrated Assessment Models (IAMs).
We find that the climate module’s most sensitive parameter is the climate sensitivity, which is consistent with the existing literature on the subject. Climate sensitivity is the most studied parameter, and estimates of its value are numerous; hence, the determination of its uncertainty range is quite straightforward. Another important point is that this range relies on a large information set, unlike those of the other parameters, for which data are scarce. Indeed, information on the carbon cycle parameters is scarce (there are few studies in the literature on IAM climate modules), and yet the global climate system behaviour is very sensitive to them. Moreover, the climate parameters affect the timing of adaptation in different ways: the radiative forcing sensitivity directly multiplies the CO2 concentration, so even a small variation of this parameter has a strong impact on the timing of CO2 abatement. We therefore believe that a stronger focus should be put on these other climate model parameters.
To ensure compliance with a 3 °C constraint, the temperature trajectories we should aim at with the nominal parameters should not exceed 2.4 °C, leading to zero net carbon emissions at the end of the century. With a 2 °C constraint, we should aim at 1.6 °C, with negative carbon emissions as soon as 2050. While the insurance cost is quite reasonable for the looser constraint (from 1.5% to 4% of the total discounted system cost), this is less the case with a 2 °C objective. In the latter situation, the total discounted system cost increases by 7% when the protection level is low and by up to 14% when it is high. This is because, to comply with a stringent target, sectors with high abatement costs have to participate in the global reduction effort. For example, transport is little affected by the 3 °C target (although, as the protection level increases, the vehicle fleet is slightly modified), whereas the introduction of uncertainty leads to major fuel consumption changes under the 2 °C constraint.
Abatement strategies are quite different between the two temperature targets. For the 3 °C target, both the carbon intensity and the primary energy intensity of the economy decrease with the protection level, whereas for the 2 °C target, the energy intensity increases and the carbon intensity decreases. This more stringent goal is reached by investing massively in carbon removal technologies such as bioenergy with carbon capture and storage (BECCS), whose yields are much lower than those of traditional fossil-fuelled technologies. Another notable feature of the 2 °C hedging trajectories is the drastic increase in nuclear electricity production. The massive use of nuclear or carbon removal technology is highly uncertain, as BECCS is a very expensive technology that is not competitive in the absence of a high CO2 price, while the development of the nuclear industry could be hampered by social acceptance issues. The 1.5 °C objective mentioned during the COP21 is obviously very ambitious, and reaching it would require strong political and societal ambition and action (much stronger than what was decided during the COP21).
By taking a robust approach to study ways of complying with ambitious climate targets, we were able to bring to light hedging technological trajectories without excessive computational issues. Since the method presented is quite generic, it could be interesting to perform similar exercises with other IAMs. This would help strengthen our knowledge of technological transition pathways under uncertainty and would allow a better understanding and awareness of the costs of the risks linked to our partial knowledge of the climate system.