How Much Human-Caused Global Warming Should We Expect with Business-As-Usual (BAU) Climate Policies? A Semi-Empirical Assessment

In order to assess the merits of national climate change mitigation policies, it is important to have a reasonable benchmark for how much human-caused global warming would occur over the coming century under “Business-As-Usual” (BAU) conditions. However, currently, policymakers are limited to making assessments by comparing the Global Climate Model (GCM) projections of future climate change under various “scenarios”, none of which are explicitly defined as BAU. Moreover, all of these estimates are ab initio computer model projections, and policymakers do not currently have equivalent empirically derived estimates for comparison. Therefore, estimates of the total future human-caused global warming from the three main greenhouse gases of concern (CO2, CH4, and N2O) up to 2100 are here derived for BAU conditions. A semi-empirical approach is used that allows direct comparisons between GCM-based estimates and empirically derived estimates. If the climate sensitivity to greenhouse gases implies a Transient Climate Response (TCR) of ≥2.5 °C or an Equilibrium Climate Sensitivity (ECS) of ≥5.0 °C, then the 2015 Paris Agreement’s target of keeping human-caused global warming below 2.0 °C will have been broken by the middle of the century under BAU. However, for a TCR < 1.5 °C or an ECS < 2.0 °C, the target would not be broken under BAU until the 22nd century or later. Therefore, the current Intergovernmental Panel on Climate Change (IPCC) “likely” range estimates for TCR of 1.0 to 2.5 °C and ECS of 1.5 to 4.5 °C have not yet established whether human-caused global warming is a 21st century problem.


Introduction
Since the late-1960s and early-1970s, computer model simulations of the Earth's climate have been predicting that increasing concentrations of "greenhouse gases" (chiefly, carbon dioxide or CO2) in the atmosphere from human activity should be causing substantial global warming at the Earth's surface and in the lower atmosphere [1,2]. In the 1960s [3] and 1970s [4], estimates of global surface temperature trends (which were mostly confined to the Northern Hemisphere since there is less long-term data for the Southern Hemisphere) suggested, if anything, a global cooling trend. However, during the 1980s, the cooling trend reversed and, by the late-1980s, the long-term linear trend since the (relatively cold) late-19th century was warming [5][6][7]. This prompted several researchers to argue that the long-term warming trend was in fact the "enhanced greenhouse warming" originally predicted by the computer models, e.g., [8][9][10]. To distinguish this predicted "enhanced greenhouse warming" due to increasing greenhouse gas concentrations from a naturally occurring global warming trend which might have occurred anyway, the term "Anthropogenic Global Warming" (AGW) is often used. These claims garnered a lot of media attention and public concern, e.g., [11].
Ultimately, this led the United Nations to set up the United Nations Framework Convention on Climate Change (UNFCCC) with the goal of facilitating international negotiations to achieve the "(…) stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system" [12]. To work in parallel with the political work of the UNFCCC, the United Nations also co-founded, with the World Meteorological Organization, a separate body called the Intergovernmental Panel on Climate Change (IPCC) to provide amongst other things "a comprehensive review (… on) the state of knowledge of the science of climate and climatic change" [13].
In the ensuing years, the computer models have continued to predict that increasing greenhouse gas concentrations should be causing substantial global warming. Indeed, based on the results of simulations with one of the NASA Goddard Institute for Space Studies (GISS) computer models, Lacis et al. (2010) concluded that "atmospheric CO2 (… is the) principal control knob governing Earth's temperature" [14]. Largely on the basis of comparing the results of such computer models to global temperature trends, the IPCC's most recent complete Assessment Report (2013) concluded that, "It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century" [15] (Emphasis in original). Although a competing series of reports has been published by the Nongovernmental International Panel on Climate Change (NIPCC), which contradicts many of the IPCC's findings, e.g., NIPCC (2013) [16] and (2019) [17], the IPCC reports are widely cited and have been highly influential among both the scientific community and policymakers.
Meanwhile, the efforts of the UNFCCC have led to a series of major international treaties and agreements to try to reduce greenhouse gas emissions, from the Kyoto Protocol (1996) [18] to the Paris Agreement (2015) [19]. In particular, the Paris Agreement specifically aims to encourage national and international policies to reduce greenhouse gas emissions with the view to, "Holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C above pre-industrial levels" [19]. Although the United States has decided to withdraw from the Paris Agreement [20], most nations are currently signed up to the voluntary Paris Agreement.
The selection of a specific "global average temperature" in °C as an international "goal" is a remarkably arbitrary and subjective process, e.g., see Mahoney (2015) [21]. Hence, there has been some debate over whether a target of 1.5 or 2 °C is better, e.g., [22][23][24][25], and whether or not such targets in terms of global temperature are helpful, e.g., [26][27][28][29][30]. However, there is a more fundamental question: what specifically should an individual nation do differently in order to "(keep) the global average temperature to well below 2 °C above pre-industrial levels (or even) 1.5 °C"? If the motivation of the Paris Agreement was to encourage individual nations to significantly alter their national policies relative to "Business-As-Usual", then it is important to know how much Anthropogenic Global Warming to expect under "Business-As-Usual" conditions.
In other words, what is the "human-caused global warming baseline" against which efforts to meet the Paris Agreement are to be assessed? This is the question we will attempt to answer in this paper. However, while the question might initially seem fairly reasonable and straightforward, as we will discuss, it is remarkably challenging to answer satisfactorily. In essence, it depends on the answers to four separate questions:
• Question 1. What would future greenhouse gas emissions be over the coming century under "Business-As-Usual" conditions?
• Question 2. For each of the greenhouse gases, what is the relationship between emissions and actual changes in atmospheric concentrations?
• Question 3. How different would global average temperatures be at present if greenhouse gases were still at "pre-industrial concentrations"? In other words, how do we define the "pre-industrial levels" of global average temperatures to which the Paris Agreement refers?
• Question 4. How "sensitive" are global average temperatures to increases in the atmospheric concentrations of greenhouse gases?
With each of these questions, there is considerable debate in the scientific literature. However, the relevant literature for each subject comes from quite different academic disciplines. The first question is typically addressed by economists, political scientists, environmental governance researchers, etc. The second question is mostly the realm of biologists, ecologists, geochemists, oceanographers, etc. The third and fourth questions are both climate science problems, but even within these topics, there are separate bodies of literature from, e.g., computer modelling research groups, groups evaluating climate records, statisticians evaluating results from a meta-analysis perspective, etc.
In other words, many researchers who might be familiar with the debates in one relevant branch of the literature are often completely unaware of the debates in other relevant branches. In our personal experience in dealing with these four questions, we have found that researchers with expertise on one aspect are frequently delighted to have a robust scientific discussion on the controversies within that specific aspect. However, as soon as the discussion shifts to one of the other aspects, the researcher will typically tell us that they are "getting outside of their comfort zone" or that they are "not very familiar with that subject", and that they "would have to spend more time reading the literature before continuing this discussion".
We totally appreciate the discomfort felt by those researchers and recognize that many readers might share this discomfort. However, we suggest that unless the scientific debates over all four of these questions are considered simultaneously, it is unlikely that truly satisfactory answers will be achieved to the over-arching question, "How much human-caused global warming should we expect with Business-As-Usual (BAU) climate policies?" With that in mind, we will discuss all four of these sub-questions in turn, including a brief review of the key relevant literature, before attempting to answer the main question. Specifically, we will consider Question 1 in Section 2; Question 2 in Section 3; Question 3 in Section 4; and Question 4 in Section 5. In Section 6, we will combine the answers to all four questions to develop a set of Business-As-Usual projections.

What Would Future Greenhouse Gas Emissions Be Under "Business-As-Usual" Conditions?
As can be seen from Table 1, the Earth's atmosphere is largely composed of nitrogen (~78%), oxygen (~21%), argon (~1%) and some water vapor (~0.33% by mass, ~0.53% by volume). However, when Tyndall (1861) [31] studied the infrared activity of the atmospheric gases, he noted that nitrogen and oxygen were largely inactive with respect to infrared activity (argon was not discovered until 1894, but also is infrared-inactive). On the other hand, he noted that water vapor, and also some of the trace gases (e.g., CO2 and CH4), were strongly infrared-active, i.e., they are what we now call "greenhouse gases". On this basis, he argued that if the dramatic climatic changes between glacial and interglacial periods were due to changes in atmospheric composition (a theory that was popular at the time), then changes in water (rather than CO2) would be a plausible candidate [31].

Table 1. Key statistics on the Earth's atmospheric composition. The fraction by volume and total mass figures are adapted from Table 1 of Hartmann (1994) [32]. Calculated "Global Warming Potential (GWP)" values are taken from Table 8.7 of the Intergovernmental Panel on Climate Change (IPCC) Working Group 1's 5th Assessment Report (2013) [33]. One ppmv is one part per million (by volume), i.e., 0.0001%, and one ppbv is one part per billion (by volume), i.e., 0.0000001%.

[Table 1 columns: Constituent; Chemical Formula; Fraction by Volume; Total Mass; Infrared-Active; Calculated "GWP". The table body is not reproduced here.]

Several decades later, Arrhenius (1896) argued that because atmospheric water vapor concentrations were more immediately responsive to air temperatures (e.g., warmer temperatures lead to more evaporation from water-enriched surfaces and oceans), carbon dioxide was a better candidate, even though by volume the average concentration of water vapor is more than 10 times that of carbon dioxide [34]. Although Arrhenius' arguments were disputed by, e.g., Ångström (1901) [35] and Simpson (1929) [36], the theory has remained popular. Moreover, Callendar (1938) [37] argued that human activities since the Industrial Revolution (chiefly the use of fossil fuels) were probably increasing the atmospheric concentration of CO2 and thereby leading to human-caused global warming. We will briefly discuss the debate which this provoked in later sections. However, for here it is sufficient to note that this theory was incorporated into the early computer climate models, e.g., refs. [1,2], and as described in Section 1, this ultimately led the UN to argue that the international community needed to dramatically reduce emissions of greenhouse gases in order to minimize any future (human-caused) global warming that would occur if the world continued business-as-usual.
In this paper, we are trying to quantify what the expected (human-caused) global warming from "business-as-usual" would be. In this section, we deal with the first step of this process, which is to quantify what future greenhouse gas emissions would be under business-as-usual. As can be seen from Table 1, the most common of the greenhouse gases in the Earth's atmosphere (by an order of magnitude) is water vapor. However, like Arrhenius (1896) [34], the current computer climate models assume that water vapor concentrations respond to changes in climate rather than drive climate change, e.g., Lacis et al. (2010) [14]. Moreover, the emissions of water vapor from human activity are a relatively minor contributor to the hydrological cycle.
Therefore, in terms of the expected (human-caused) global warming, the chief "anthropogenic greenhouse gas" of concern is CO2. However, various human activities can also lead to emissions of other infrared active gases, and so the UNFCCC's Kyoto Protocol (1996) [18] also included several other greenhouse gases as being of potential concern: methane (CH4), nitrous oxide (N2O, also known as "laughing gas") and three halogenated gases that are collectively known as the "F-gases" (as they all are fluorine-containing compounds).
There are many other trace gases which could also be regarded as "greenhouse gases". However, according to the latest IPCC assessment reports [15] and computer models (e.g., Ref. [14]), three greenhouse gases-CO2, CH4 and N2O-together account for more than 90% of the expected anthropogenic global warming under business-as-usual [38]. With that in mind, we will confine our analysis to just these three gases.
As we will discuss in Section 3, all three of these gases are naturally occurring gases that are essential for life as we know it on this planet. Therefore, there are many natural sources and sinks of these gases, e.g., photosynthesis requires CO2 and aerobic respiration releases CO2, while anaerobic respiration releases CH4. However, various human activities are known to contribute to additional emissions of these gases. In this section, we will look in turn at various estimates and projections of the magnitude of the annual "anthropogenic emissions" of the three gases, and extrapolate from these estimates how these annual emissions would be expected to change if things continued business-as-usual. The estimates and projections used are listed in Table 2.

Some Notes on Units and Acronyms
In order to minimize the use of large and cumbersome numbers, emissions are usually reported in units of, e.g., gigatons of carbon dioxide (Gt CO2), teragrams of methane (Tg CH4), etc. Moreover, since much of the focus is on carbon dioxide emissions, for brevity, carbon dioxide emissions are frequently reported in terms of the carbon component of carbon dioxide, i.e., gigatons of "carbon" (Gt C) instead of Gt CO2. This is a simple arithmetic conversion of dividing the mass of CO2 by 3.6675 (i.e., the molecular weight of CO2 divided by the molecular weight of C = 44.01/12 = 3.6675). For convenience, here is a summary of the main relevant relationships for the three main anthropogenic greenhouse gases:
• 1 Gt C = 1 Pg C = 1000 Tg C = 3.6675 Gt CO2 = 3.6675 × 10^12 kg of CO2 = 3.6675 × 10^15 g of CO2
• 1 Mt CH4 = 1 Tg CH4 = 1 × 10^9 kg of CH4 = 1 × 10^12 g of CH4
• 1 Mt N2O = 1 Tg N2O = 1 × 10^9 kg of N2O = 1 × 10^12 g of N2O
Although emissions are usually described in terms of the mass of the gas, measurements of the atmospheric concentrations are usually reported in fractions by volume. Therefore, in order to consider the relationship between the emissions of a particular gas and the actual changes in atmospheric concentrations of that gas (as we will do in Section 3), it is useful to convert the emissions into an equivalent atmospheric concentration or vice versa. This can be approximated by using the values of the atmospheric composition in 1990 listed in Table 1. For instance, given that the atmospheric concentration of carbon dioxide of 353 ppmv (0.0353% by volume) in c. 1990 represents a total mass of ~2.76 × 10^18 g CO2 = ~2760 Gt CO2 = ~752.6 Gt C, 1 ppmv of CO2 is equivalent to ~7.8 Gt CO2 or ~2.13 Gt C. The relationships for CH4 and N2O can be approximated similarly (a sketch of these conversions is given below). We appreciate that there are many different acronyms, abbreviations, and shorthands repeatedly used in this article. Therefore, we have listed them in Table 3 for quick reference.
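To make these conversions concrete, the following short sketch (in Python) reproduces the kind of arithmetic described above. The total atmospheric mass and molar masses used here are standard round figures rather than the exact values of Table 1, so the CH4 and N2O factors it prints should be read as illustrative approximations only.

```python
# Approximate conversions between emissions (mass) and atmospheric concentrations
# (fractions by volume). Assumed constants: mean molar mass of dry air and total
# atmospheric mass are standard round values, not taken from Table 1 of this paper.

M_AIR = 28.97   # g/mol, mean molar mass of dry air (assumed)
M_ATM = 5.1e21  # g, approximate total mass of the atmosphere (assumed)

def tg_per_ppbv(molar_mass):
    """Mass (in Tg) of a trace gas corresponding to 1 ppbv, assuming it is well mixed."""
    grams = (molar_mass / M_AIR) * 1e-9 * M_ATM
    return grams / 1e12  # 1 Tg = 1e12 g

# CO2: use the c. 1990 benchmark quoted in the text (353 ppmv ≈ 752.6 Gt C)
gt_c_per_ppmv = 752.6 / 353               # ≈ 2.13 Gt C per ppmv
gt_co2_per_ppmv = gt_c_per_ppmv * 3.6675  # ≈ 7.8 Gt CO2 per ppmv

print(f"1 ppmv CO2 ≈ {gt_co2_per_ppmv:.1f} Gt CO2 ≈ {gt_c_per_ppmv:.2f} Gt C")
print(f"1 ppbv CH4 ≈ {tg_per_ppbv(16.04):.1f} Tg CH4")
print(f"1 ppbv N2O ≈ {tg_per_ppbv(44.01):.1f} Tg N2O")
```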

Carbon Dioxide (CO2) Emissions
The largest source of human-caused CO2 emissions comes from fossil fuel use and industrial processes (e.g., cement production). Boden et al. (2017) have compiled estimates of the national and global emissions from these processes from 1751 to the near present [40], and this is the main dataset used by most researchers-either directly or indirectly (through using a dataset which is based on the Boden et al. dataset). For convenience we use Friedlingstein et al. (2019)'s updated version of this dataset, which was compiled as part of the "Global Carbon Budget" project [39].
The annual emissions from 1850 to the near present (2018) are plotted in Figure 1a. We can see that since the end of World War 2, i.e., post-1945, there has been a substantial and almost continuous growth in annual emissions. The solid black line represents a linear extrapolation of the data from 1946-2018 up to 2100, and therefore represents one estimate of "business-as-usual" growth in emissions. However, as discussed in the Introduction, since the late-1980s, there has been international interest in reducing CO2 emissions due to the concerns raised by the UNFCCC and IPCC. Therefore, the post-1990 period (dashed line) is arguably a better basis for estimating "business-as-usual" growth. Ironically, the extrapolation from 1990-2018 implies a slightly higher rate of growth. At any rate, we will use the linear extrapolations from both periods as upper and lower bounds for business-as-usual growth in CO2 emissions from fossil fuels and industrial usage.
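As a rough illustration of this step, the following sketch fits a straight line to an annual emissions series over a chosen baseline period and extends it to 2100. The array names are placeholders; in practice the inputs would be the Global Carbon Budget fossil-fuel and industry series described above.

```python
import numpy as np

def bau_extrapolation(years, emissions_gt_c, start_year, end_year=2100):
    """Linear fit to annual emissions for years >= start_year, extended to end_year.

    years          : array of calendar years
    emissions_gt_c : annual emissions (Gt C/yr) for those years
    Returns the future years and the extrapolated emissions along the fitted line.
    """
    years = np.asarray(years)
    emissions = np.asarray(emissions_gt_c)
    mask = years >= start_year
    slope, intercept = np.polyfit(years[mask], emissions[mask], 1)
    future_years = np.arange(start_year, end_year + 1)
    return future_years, slope * future_years + intercept

# Two extrapolations, as in Figure 1a: one from 1946 and one from 1990, giving
# the lower and upper bounds on BAU growth discussed in the text, e.g.:
# yrs, fit_1946 = bau_extrapolation(years, emissions, 1946)
# yrs, fit_1990 = bau_extrapolation(years, emissions, 1990)
```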
However, there is a second, indirect source of human-caused CO2 emissions. These are emissions which occur from changes in land use or land cover, e.g., deforestation. Unfortunately, these indirect emissions are rather hard to quantify, and there are multiple different estimates, e.g., refs. [41,[49][50][51][52]. We had initially considered just using one of the more widely cited of these estimates for extrapolating future emissions from changes in land use/land cover under BAU. However, these estimates often imply opposing trends, e.g., the Houghton and Nassikas (2017) [52] estimate implies that there has been a slight decreasing trend in annual emissions since 1997, while Hansis et al. (2015)'s "BLUE" estimate implies a slight increasing trend over the same period [50]. We note, though, that Friedlingstein et al. (2019) have compiled both these estimates and 15 different model-based estimates for the post-1958 period as part of the "Global Carbon Budget" project [39]. Therefore, for our analysis in this paper, we have calculated the mean and standard errors of all 17 of these estimates. We then use a linear extrapolation of the upper and lower bounds of the error bars as our projection of BAU emissions from changes in land use/land cover up to 2100 (see Figure 1b).
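A minimal sketch of how such an ensemble of land-use/land-cover estimates might be combined is given below, assuming a hypothetical array of 17 annual series; the resulting upper and lower bounds would then be extrapolated to 2100 with the same linear-fit approach sketched above.

```python
import numpy as np

def luc_emission_bounds(estimates):
    """Mean ± one standard error across land-use/land-cover emission estimates.

    estimates : array of shape (n_estimates, n_years), annual emissions (Gt C/yr)
    Returns (lower, upper) bound series, one value per year.
    """
    estimates = np.asarray(estimates)
    mean = estimates.mean(axis=0)
    stderr = estimates.std(axis=0, ddof=1) / np.sqrt(estimates.shape[0])
    return mean - stderr, mean + stderr

# The lower and upper series returned here correspond to the error bars shown
# in Figure 1b, which are each extrapolated linearly to 2100 in the analysis.
```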
In Figure 1c, we have summed the two separate components together to develop a BAU projection of the total annual human-caused CO2 emissions up to 2100. In Figure 2, we then compare this projection to the various emission scenarios that have been considered by each of the IPCC reports. Figure 3 shows the equivalent comparisons for the two separate components from the IPCC's 2nd Assessment Report (1995), but such a breakdown was not available for the 1st Assessment Report (1990)'s emissions scenarios. However, for the analysis in this paper we will use the mean ± twice the Standard Error of all these estimates, which is shown with a thick solid red line (with light red envelope).
Comparing our BAU projection to the various IPCC scenarios, we see from Figure 2a that Scenario A from the 1st Assessment Report (1990) [53] was actually a fairly good match for BAU. However, all the other scenarios implied a continual decrease in emissions post-1990 which did not occur. For the 2nd Assessment Report (1995) [54], our BAU projection is intermediate between scenarios IS92-F and IS92-A/IS92-B (see Figure 2b). However, that report also considered a much higher-growth scenario (IS92-E), as well as two scenarios that assume a more modest emissions rate from 1995 up to the present than has been observed and that imply a steady decrease in emissions from about 2025 until the end of the century.
The "SRES" projections that were developed in a special IPCC (2000) [46] report were used for both the 3rd (2001) [55] and 4th (2007) [56] Assessment Reports. These included more than 40 different scenarios, but six of these ("A1", "A1G", "A1T", "A2", "B1", and "B2") were chosen as being representative of the full range of available scenarios. We compare them to our BAU projection in Figure 2c. None of these six "SRES" scenarios match well with our BAU projection. "A1G" and "A2" both imply a growth in emissions that is considerably higher than BAU. This is broadly consistent with the findings of McKitrick et al. (2013) [57] as well as the earlier critique by   [58], which led to considerable debate, e.g., [59][60][61][62]. Meanwhile, the rest of the scenarios imply changes that are considerably less than BAU. "A1" and "A2" match our BAU projection reasonably well up to 2050, but then they diverge in opposite directions ("A1" increasing faster than BAU and "A2" decreasing). Figure 2d compares our BAU projections to the four Representative Concentrations Projections (RCP) scenarios [47] considered by the most recent 5th Assessment Report (2013) [33]. As for the SRES projections, none of them match well. The RCP 8.5 scenario implies a growth in emissions much greater than BAU. This agrees with several recent articles pointing out that RCP 8.5 substantially overestimates CO2 emissions relative to BAU, e.g., [63][64][65][66]. Meanwhile, all of the other scenarios imply that emissions are substantially decreasing relative to BAU over the 21st century.
In preparation for the upcoming 6th Assessment Report, the RCP scenarios have been updated and elaborated on with a series of "Shared Socioeconomic Pathways" (SSP) [48,67,68]. Nine of these combined SSP/RCP scenarios have been recommended to the CMIP6 modelling groups, and we compare these to our BAU projection in Figure 2e. Two of these recommended scenarios actually match quite well to BAU, i.e., the two "SSP3-70" scenarios. However, the "SSP5-85 (Baseline)" scenario implies much higher growth in emissions over the 21st century than BAU, while the other scenarios imply much lower emissions than BAU.

Methane (CH4) Emissions
Historical estimates of past CH4 and N2O emissions appear to have been much less studied so far. The estimates of annual anthropogenic CH4 emissions that we use, together with the extrapolations derived from them, are plotted in Figure 4a. We use the lowest and highest of these four extrapolations as the upper and lower bounds for BAU annual emissions up to 2100.
In Figure 4b-f, we again compare our BAU projections to the scenarios used by each of the IPCC reports. We note that all the scenarios for the 1st Assessment Report start at a much higher annual emission rate than observed (see Figure 4b), thereby implying much greater emissions over the 21st century than our BAU projection. However, this seems to be because they apparently mistakenly included several natural sources of CH4 emissions in their total annual human-caused emissions.
For subsequent reports, these naturally occurring emissions appear to have been separated out. As a result, the starting point for the later IPCC report scenarios matches quite well with the historical human-caused emission estimates (and therefore also with our BAU projection). However, for the 2nd Assessment Report scenarios and the SRES projections used in the 3rd and 4th Assessment Reports, CH4 emissions were projected to increase at a much higher rate than was observed up to the present and to continue increasing at a rate much higher than BAU for much of the 21st century (see Figure 4c,d). Some of the projections imply lower annual emissions than BAU by 2100, but this appears to be because it is assumed that there will be active policy-driven reductions in CH4 emissions in the second half of the 21st century.
On the other hand, several of the scenarios among both the RCP scenarios (Figure 4e) and the new SSP/RCP scenarios (Figure 4f) imply that CH4 emissions will be much less than our BAU projection. That said, the RCP 8.5 scenario again overestimates BAU CH4 emissions, as do several of the SSP/RCP scenarios over most of the century, i.e., "SSP3-70 (Baseline)", "SSP5-85 (Baseline)" and "SSP4-60".

Nitrous Oxide (N2O) Emissions

Figure 5 presents the equivalent analysis for annual N2O emissions. For brevity, we will not comment in too much detail on the comparisons, but we will note a key difference. As with CH4, the starting point for annual N2O emissions in some of the earlier scenarios seems to have been higher than the current estimates of historical emissions. However, there appears to have been much more inconsistency in the estimates of current emissions between reports. The scenarios used by both the 1st and 5th Assessment Reports imply higher annual emissions at the start of their projections than the current estimates, while those used by the 2nd, 3rd, and 4th Assessment Reports imply lower annual emissions. On the other hand, the starting annual emissions implied by the latest SSP/RCP scenarios match well with the current estimates of historical emissions.

What Is the Relationship between Greenhouse Gas Emissions and Actual Changes in Atmospheric Concentrations?
It is the atmospheric concentrations of these gases that the computer model simulations predict should be influencing global temperatures [1,2,[8][9][10], rather than the rates of anthropogenic emissions. However, CO2, CH4, and N2O are all naturally occurring gases. They also each play important roles in many biological processes. In particular, CO2 is consumed by photosynthesis and released by (aerobic) respiration, and therefore is a key component for all life on this planet. Hence, there are many natural fluxes into and out of the atmosphere for these gases. For convenience, a flux out of the atmosphere is usually referred to as a "sink" and a flux into the atmosphere as a "source".
Therefore, in order to estimate how much human-caused global warming the projected BAU emissions in Section 2 could potentially cause, we need to estimate what fraction of the emitted gases will remain in the atmosphere, i.e., what will the "airborne fraction" of the emitted gases be? Given the many uncertainties associated with the natural sources and sinks for each of the three gases, we argue that currently the best way to estimate this is to consider how the airborne fractions have behaved since continuous records of atmospheric concentrations began (in 1958/59 for CO2; 1978/79 for CH4 and N2O). In this section, we will calculate the airborne fractions by comparing the anthropogenic emissions described in the previous section to the observed changes in atmospheric concentrations using the datasets listed in Table 4.

The Airborne Fraction of Carbon Dioxide (CO2) Emissions
Figure 6a shows the observed annual atmospheric CO2 concentrations as recorded at Mauna Loa observatory in Hawai'i (solid green line) since systematic measurements began in early-1958, and also globally averaged estimates from multiple observatories around the world since 1979 (dashed blue line). Although the globally averaged curve is slightly below the Mauna Loa curve, the two curves almost overlap. Therefore, since the Mauna Loa record is longer, for the rest of this paper we will treat it as being representative of "global atmospheric CO2 concentrations". It can easily be seen that, as discussed earlier, atmospheric CO2 concentrations have been steadily rising at a rate of roughly 1.5 ppmv per year since at least 1959, i.e., the start of the record. Antarctic ice core estimates of pre-historic atmospheric CO2 concentrations suggest that up until the 19th/20th centuries, concentrations had remained fairly constant since the end of the last glacial period more than 10,000 years ago [72], only fluctuating within the range 271-285 ppmv [38]. This small range of "pre-industrial variability" is shown by the grey band in Figure 6a, with the dashed black line corresponding to the mean value of 280 ppmv.

We will discuss below some of the scientific debate over exactly how reliable the Antarctic ice core estimates are. Nonetheless, if we assume for now that the Antarctic ice core estimates are reliable, we can see why it is widely believed that the rise in atmospheric CO2 from ~280 ppmv to ~410 ppmv today (a 46% increase) is largely human-caused and due to human-driven CO2 emissions. However, as can be seen from Figure 6b, there is a challenging complication. The lower green curve represents the annual change in atmospheric CO2, i.e., the increase in atmospheric CO2 from each year to the next. As discussed in Section 2.1, we can convert the anthropogenic (i.e., human-caused) annual CO2 emissions determined in Section 2.2 from Gt C/year into the equivalent annual change in atmospheric concentrations. This is the upper curve (solid red line with a surrounding envelope) of Figure 6b. The two vertical axes are scaled so that they are directly interchangeable, i.e., an annual change of 1 ppmv on the left-hand vertical axis is equivalent to 2.13 Gt C/yr on the right-hand axis. Although both curves have generally increased over time, the lower curve corresponding to the observed atmospheric changes has been consistently below the curve corresponding to anthropogenic emissions. In other words, only a fraction of the anthropogenic CO2 emissions remains in the atmosphere from year to year. This so-called "airborne fraction", i.e., the ratio of atmospheric change to anthropogenic emissions, is plotted in Figure 6c.
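For clarity, the airborne-fraction calculation just described can be sketched as follows; the array names are placeholders for the Mauna Loa concentration record and the emission estimates of Section 2, and the conversion factor is the one quoted in Section 2.1.

```python
import numpy as np

GT_C_PER_PPMV = 2.13  # conversion factor from Section 2.1

def airborne_fraction(co2_ppmv, emissions_gt_c):
    """Annual airborne fraction: observed CO2 change divided by emissions (both in ppmv).

    co2_ppmv       : annual mean atmospheric CO2 concentrations (ppmv), length N
    emissions_gt_c : annual anthropogenic emissions (Gt C/yr), length N-1,
                     aligned with the year-to-year changes in co2_ppmv
    """
    delta_ppmv = np.diff(np.asarray(co2_ppmv))
    emissions_ppmv = np.asarray(emissions_gt_c) / GT_C_PER_PPMV
    return delta_ppmv / emissions_ppmv

# Illustrative usage with made-up numbers (not the actual datasets):
af = airborne_fraction([410.0, 412.5, 414.9], [11.5, 11.8])
print(af.mean())  # the long-term mean reported in the text is ~0.44
```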
The reason that there is not an exact 1:1 relationship between anthropogenic CO2 emissions and atmospheric CO2 concentrations is that CO2 is a naturally occurring gas. Moreover, it is one of the most important gases biologically speaking. Life as we know it on Earth is carbon-based, and this carbon in living organisms largely comes from the photosynthesis of atmospheric CO2 by plants and other photosynthetic organisms. As a result, as well as the anthropogenic sources of CO2, there are also natural sinks and sources of CO2. In fact, current estimates for the annual anthropogenic CO2 emissions are ~12 Gt C/year, but the current estimates for the emissions from natural sources are ~200 Gt C/year, and it is also estimated that natural sinks absorb ~200 Gt C/year of CO2 from the atmosphere [73]. Therefore, the exact relationship between anthropogenic CO2 emissions and actual atmospheric CO2 concentrations depends on the many complex interactions between the various natural and anthropogenic CO2 sources and sinks, known collectively as "the carbon cycle".
The annual changes in atmospheric concentrations show quite a lot of variability from year to year, which is not apparent from the annual anthropogenic emissions, and the airborne fraction actually varies quite a bit from year to year, as has been noted by others (e.g., refs. [74][75][76][77][78][79][80][81][82]). This suggests that natural variability actually plays quite a substantial role in atmospheric CO2 concentration trends. On the other hand, if the Antarctic ice core estimates are reliable, then they imply that atmospheric CO2 was almost constant for more than 10,000 years, making it very difficult to argue that the steady increase since at least 1959 is anything other than a recent, human-caused phenomenon. Moreover, if it were to transpire that all (or even most) of the recent increase in atmospheric CO2 were a natural phenomenon, then this would completely undermine the entire basis for claiming that society's CO2 emissions are causing "human-caused global warming". Therefore, whenever a researcher publishes an analysis suggesting that some or all of the recent increase could be natural in origin, such as the references cited above [76,[78][79][80][81][82] and also refs. [74,75,77,[83][84][85][86][87][88][89], their arguments are attacked with a particular vehemence, e.g., [90][91][92][93][94][95][96][97][98][99].
In Kuhn's 1962 book, "The Structure of Scientific Revolutions" [100], he argued that for the vast majority of what he called "normal science", i.e., the day-to-day work of most researchers, scientists carry out their research implicitly relying on one or more paradigms that are assumed to be indisputable. Although the specific paradigms within one discipline might be different from those of a separate discipline, and although they change over time, he argued that "normal science" implicitly relies upon these paradigms being correct. As a result, Kuhn proposed that whenever a researcher questions a key paradigm within a discipline, they are immediately attacked or ridiculed by their peers. On the other hand, he noted that over time an increasing number of "anomalies" that are difficult to explain may arise within that paradigm, and during "revolutionary science", the community may undergo a shift to a replacement paradigm. Ultimately, Kuhn argued that science progresses over time through both processes.
We refer to this Kuhnian approach to viewing science here because, in our opinion, a lot of the (often acrimonious) debate between researchers on this particular topic, as well as several of the topics we will discuss later (Sections 4.1, 4.2 and 5), can be best understood in terms of competing paradigms. With regards to the debate over the relationships between natural and human-caused sinks and sources of CO2, we have identified four distinct paradigms:
• Paradigm 1, the "anthropocentric" approach. It is assumed that any natural sinks and sources of CO2 are effectively balanced, and that all human-caused CO2 emissions will contribute to human-caused global warming. This was originally proposed by studies before the Mauna Loa observations had begun or had only recently begun, e.g., refs. [37,[101][102][103]. It also seems to be implicit among researchers who consider carbon dioxide (CO2) to be a "pollutant", even though it is a naturally occurring gas, e.g., the US EPA's 2009 so-called "Endangerment finding" [104].
• Paradigm 2, the "airborne fraction" approach. Like Paradigm 1, it is assumed that any natural sinks and sources are roughly balanced from year to year. However, given that the airborne fraction is <1, it is acknowledged that the natural sinks and sources are not exactly balancing each other. Instead, it is assumed that some of the natural sinks (chiefly, the oceans and terrestrial vegetation) are absorbing some of the anthropogenic emissions. Within this paradigm, there is ongoing debate over whether these sinks will continue to take up a fraction of this anthropogenic CO2 at the same rate they have been since 1959 (see the discussion of possible sink "saturation" below).
• Paradigm 3, the "sinks and sources" approach. Within this paradigm, it is recognized that anthropogenic emissions are a new source of CO2, but that there is also significant variability in the magnitudes of the natural sinks and sources. In particular, it is widely acknowledged that increasing temperatures should increase the natural CO2 emissions from soil respiration [110,111] as well as reduce the solubility of CO2 in the upper oceans [112] (which could potentially lead to net outgassing of CO2 into the atmosphere). For this reason, several researchers have argued that some component of the observed increase in atmospheric CO2 since 1959 could be a result of a natural global warming trend (i.e., the opposite of the human-caused global warming theory), e.g., refs. [16,17,74,75,77,[79][80][81]85,113]. Importantly, this paradigm does not rule out a contribution from anthropogenic emissions to the recent increase; rather, anthropogenic emissions are treated as an additional source that needs to be taken into account.
• Paradigm 4, the "resilient Earth" approach. This is similar to Paradigm 3, except that it is disputed whether there is anything unusual about the increase since 1959. Within this paradigm, it is argued that the Antarctic ice core estimates are unreliable and that CO2 concentrations similar to the present may well have occurred in the decades and centuries before the Mauna Loa record began. Instead, it is argued that most (if not all) of the rise in CO2 over the Mauna Loa record was natural in origin (due to natural global warming), e.g., refs. [82][83][84][87][88][89]114].
Notice that the four paradigms cannot all be correct. As a result, proponents of one paradigm may consider proponents of competing paradigms to be not just wrong, but scientifically incompetent. However, once the existence of different paradigms is appreciated and respected, it may be possible for fruitful discussions to take place between competing paradigms. Indeed, it is worth noting that the Mauna Loa observations were largely initiated in order to resolve the debate [115] between proponents of Paradigm 1 (e.g., refs. [37,[101][102][103]) and Paradigm 4 (e.g., Revelle and Suess (1957) [116]). Yet, ironically, as explained below, the "airborne fraction" results implied by the Mauna Loa record, i.e., Figure 6c, actually point towards either Paradigm 2 or 3.
Let us consider the evidence for and against each of these paradigms. First, if Paradigm 1 were correct, then we would expect the airborne fraction to be 1 (or at least nearly 1). That is, we would be expecting most of the emitted anthropogenic CO2 to remain in the atmosphere. On the other hand, if Paradigm 4 were correct, then we would expect the airborne fraction to be near-zero on average, i.e., we would be expecting the changes in atmospheric CO2 to be largely independent of anthropogenic emissions. However, as can be seen from Figure 6c, the airborne fraction has been fairly constant with a mean of 0.44 ± 0.04 over the entire record. This appears to rule out Paradigms 1 and 4, leaving either Paradigm 2 or 3.

Now, let us consider the reliability of the Antarctic ice core estimates. If these estimates are correct and atmospheric CO2 was almost constant for nearly 10,000 years up to the 19th century [38,72], then that seems to rule out much room for naturally occurring trends of more than a dozen ppmv. In other words, it appears to rule out Paradigms 3 and 4. However, in this context it is worth noting that several estimates of past CO2 concentrations derived from the stomata of fossilised leaves imply considerably more variability during the pre-industrial era than that implied by the Antarctic ice cores, see refs. [117][118][119][120][121][122][123][124][125][126][127]. Moreover, Greenland ice cores imply increases of ~20-30 ppmv more than the Antarctic ice cores during relatively warm periods in the pre-industrial era, although it has been argued that the Antarctic ice cores are more reliable, e.g., refs. [128][129][130].
If the greater variability implied by the stomata-based estimates (or even the Greenland ice cores) is accurate, then this could provide support for Paradigm 3. For instance, Kouwenberg et al. (2005) suggest that atmospheric CO2 could have dropped as low as 260 ppmv and risen as high as 320 ppmv several times over the last millennium [123], while the Antarctic ice cores suggest that CO2 remained within the range 271-285 ppmv [38]. However, it should be stressed that the current concentrations of ~410 ppmv are still higher than those stomata-based estimates, which suggests that anthropogenic emissions have significantly increased concentrations above natural variability, i.e., disagreeing with Paradigm 4. Moreover, the stomata-based estimates have been disputed by supporters of the Antarctic ice core estimates [129,131].
Others have also suggested that the Antarctic ice core estimates are problematic and unreliable, e.g., refs. [83,84,[87][88][89], and Beck has suggested that early measurements of atmospheric CO2 before the Mauna Loa observations began implied atmospheric concentrations above 400 ppmv in the 1940s as well as in the early 19th century [87][88][89]. This would be very consistent with Paradigm 4, but, again, all of these studies have been vehemently disputed by advocates for Paradigm 2 [98,99].
What does this tell us about the relationship between anthropogenic CO2 emissions and changes in atmospheric CO2 up to 2100? Well, if Paradigm 4 were correct, then there should be no relationship (or at best a weak one). In that case, arguably the rest of the analysis in this paper would be largely redundant, and we could say that there should be no (or very little) human-caused global warming up to 2100 [82][83][84][87][88][89]114], although it would not preclude the possibility of natural global warming. However, as explained above, this would also imply an average airborne fraction of 0 or close to 0. Similarly, we can rule out Paradigm 1, as this would imply an average airborne fraction close to 1.
If Paradigm 3 is correct, then at least some of the observed increase in atmospheric CO2 over the Mauna Loa record is anthropogenic in origin, and therefore we argue that "business-as-usual" conditions imply that the airborne fraction will remain fairly constant. Within Paradigm 2, there is some debate over whether some of the sinks that are currently reducing the airborne fraction might become "saturated", e.g., refs. [39,[105][106][107][108][109]. However, given that the annual airborne fraction has remained fairly constant since the Mauna Loa record began in 1959, we will define "business-as-usual" conditions to mean that it will remain constant with a mean of 0.44 ± 0.04. In Section 3.4, we will compare this assumption to the airborne fractions implicit in the IPCC RCP scenarios. But, first, let us consider the other two relevant gases.

The Airborne Fraction of Methane (CH4) Emissions
Figure 7 shows the equivalent results for CH4. Although the CH4 observational records are not as long as for CO2, there are more than 40 years of measurements (compilations of various flask measurements beginning in 1978, and more systematic measurements beginning in 1984 [132]). Unlike for CO2, the airborne fraction of anthropogenic emissions has been almost zero (0.07 ± 0.02) over the entire record, and for a few years even went slightly below zero, i.e., atmospheric concentrations decreased even though anthropogenic emissions continued.

Antarctic ice core estimates of atmospheric CH4 before the instrumental record [38] suggest pre-industrial concentrations of less than half the current concentrations, i.e., 624-737 ppbv (0.624-0.737 ppmv) compared to ~1850 ppbv today [133]. Furthermore, during the 1980s, atmospheric concentrations were still rising. This led many researchers to suggest that anthropogenic CH4 emissions were significantly altering atmospheric concentrations of CH4 as well as CO2. Therefore, CH4 was included as one of the six greenhouse gases considered under the 1996 Kyoto Protocol [18]. Indeed, as Ganesan et al. (2019) [134] note, attempting to reduce anthropogenic CH4 emissions is one of the main goals of the 2015 Paris Agreement [19]. However, during the 1990s and early 2000s, the rise in CH4 concentrations began to slow down and even plateaued for several years. It is only after 2006 that concentrations began to rise again. Over this entire period, anthropogenic emissions have continued and even increased (see Figure 7b).
This has been a puzzle for researchers who had been assuming something like the Paradigm 2 we described in the previous section applied to CH4 emissions, e.g., refs. [133,135]. As a result, researchers working on the "Global Methane Budget" are recognizing that there is significant variability in many of the natural sinks and sources of methane, and that this natural variability may be a major factor for the unexpected trends in atmospheric CH4-see Saunois et al. (2016) [136], which builds on the work of Kirschke et al. (2013) [133].
Unlike CO2, where most of the anthropogenic emissions have tended to come from developed nations with relatively high GDPs [137,138], most of the anthropogenic CH4 and N2O emissions apparently come from agricultural processes (e.g., rice paddies and cattle production) in developing nations-especially in Asia and South America, e.g., Tian et al. (2015) [139]. However, Zhang et al. (2020) have recently shown that the contribution of rice paddies in monsoon Asia has declined since 2007 [140], suggesting that other changes in sources and sinks are probably involved in the post-2006 increase.
Although Saunois et al. estimate that anthropogenic emissions comprise about 60% of the total annual CH4 emissions (approximately 540-568 Tg CH4/year in total), they caution that the uncertainties over the natural sources appear to be much larger than those for the anthropogenic sources [136]. The variability in the natural sources and sinks remains poorly understood, e.g., Bastviken et al. (2011) estimated that freshwater lakes, reservoirs, streams, and rivers could be contributing at least 103 Tg CH4/year [141]. If the Antarctic ice core estimates of past atmospheric CH4 are inaccurate, then on the basis of the very low airborne fraction of 0.07 ± 0.02 over the instrumental record, it is quite plausible that most of the apparent trends in atmospheric CH4 are due to variability in the natural sinks and sources. In that case, anthropogenic CH4 emissions might not be appreciably influencing atmospheric concentrations, and they could probably be discounted as a potential source of human-caused global warming. We suggest that this possibility should be considered, and we encourage more research into identifying and quantifying the variability in the various natural sources and sinks, perhaps building on the work of Saunois et al. (2016) [136], but explicitly considering Paradigm 3 or even 4 as relevant for CH4. However, for the purposes of this analysis, we will define BAU to imply that anthropogenic emissions are increasing the atmospheric concentration, but with an airborne fraction of only 0.07 ± 0.02 up to at least 2100. As we will discuss in Section 3.4, this actually assumes a higher airborne fraction for CH4 than most of the IPCC RCP scenarios.

The Airborne Fraction of Nitrous Oxide (N2O) Emissions
Figure 8 shows the equivalent results for N2O. The N2O observational record is of a similar length to that of CH4. However, unlike CH4, the airborne fraction has been quite high, although there seems to have been quite a bit of interannual variability, with three years being above 1 (implying atmospheric concentrations increased by more than was emitted anthropogenically) and one year being below 0 (i.e., the atmospheric concentration slightly decreased in spite of anthropogenic emissions). Averaged over the entire record, the airborne fraction has remained fairly constant at 0.65 ± 0.09. This suggests that anthropogenic N2O emissions are responsible for much (if not all) of the fairly steady increase in atmospheric concentration (at a rate of +0.75 ppbv/year).

Figure 8 (caption excerpt). The "airborne fraction" for N2O, i.e., the fraction of anthropogenic N2O emissions that remained in the atmosphere for each year since 1980. Note that the airborne fraction was below 0 in one year (1987) and above 1 in three years (1982, 1986 and 1989).

Having said that, the airborne fraction has been quite variable, and well below 1. Therefore, there does seem to have been quite a bit of variability in the natural sinks and sources. Davidson and Kanter (2014) [142] estimate that 65%-69% of the annual N2O emissions are natural in origin. However, there is still a lot of ongoing research into quantifying the different natural sources, e.g., refs. [143,144]. We suggest that an effort to quantify the natural sinks and sources of N2O, similar to what Saunois et al. (2016) [136] have been doing for the Global Methane Budget project, would be helpful (perhaps building on studies such as Davidson and Kanter (2014) [142]). We would recommend following the lead of Saunois et al. (2016), who appear to have approached the Global Methane Budget from Paradigm 3, rather than the Paradigm 2 approach used by Friedlingstein et al. (2019) [39] for the Global Carbon Budget. In the meantime, as for the other two gases, we will define BAU to imply that the observed airborne fraction for N2O of 0.65 ± 0.09 will continue up to at least 2100.

Comparison of "Business-As-Usual" AirborneFfractions with the RCP Scenarios
In Figure 9, we compare the historical airborne fractions up to the present and our projected constant airborne fractions up to 2100 with the equivalent airborne fractions used in the IPCC's four RCP scenarios for (a) CO2, (b) CH4, and (c) N2O.
For CO2, one of the scenarios (RCP 2.6) predicts a rapid decline in the airborne fraction becoming negative after 2050. This scenario predicts that society will develop and implement technologies to allow substantial carbon sequestration, leading to "negative CO2 emissions", such as those considered by Fuss et al. (2018) [145]. On the other hand, for RCP 8.5, it is predicted that the airborne fraction will gradually increase over the century (which would thereby increase the rate at which atmospheric CO2 concentrations would increase). The other two scenarios imply a fairly constant airborne fraction up to the middle of the century, but RCP 6.0 predicts a slight increase from 2040-2070, followed by a slight decrease to 2100, while RCP 4.5 predicts a rather sharp drop in airborne fraction (associated with "negative CO2 emissions") from 2060 to 2080 followed by an increase to 2100. In comparison, our BAU projection lies in between the two intermediate scenarios (RCP 4.5 and 6.0) for the entire 21st century.
For CH4, all of the RCP scenarios except 8.5 predict that the airborne fraction will be lower than our BAU projection. Even with RCP 8.5, the airborne fraction is predicted to decrease towards 0 from 2050 onwards. As a result, by 2100, our BAU projection, which is already very modest (as discussed in Section 3.2), is higher than all four RCP scenarios. In other words, our BAU projection predicts that a slightly greater fraction of anthropogenic CH4 emissions will remain in the atmosphere than the RCP scenarios do.
As for N2O, all four of the RCP scenarios assume a starting airborne fraction of 0.45 in 2000, which is lower than the historical airborne fraction of 0.65 ± 0.09. As a result, even though RCP 8.5 predicts a relative increase from 2010 to 2040 and RCP 6.0 predicts a slight increase from 2030 to 2050, all four scenarios predict a lower airborne fraction than our BAU projection for the entire 21st century.

Projected Greenhouse Gas Concentrations up to 2100 under "Business-As-Usual" Airborne Fractions
Let us now combine the results of the previous sections in order to estimate future greenhouse gas concentrations up to 2100 for our three gases, assuming (a) that anthropogenic emissions continue to grow "business-as-usual" and (b) that the observed airborne fractions for the gases remain constant "business-as-usual". In that case, the increase in atmospheric concentration of each gas in each year is equal to the projected emissions for that year multiplied by the airborne fraction. The results are plotted with black dashed lines in Figure 10, along with the equivalent projections for the four RCP scenarios (colored dashed lines) [47], the historical values (solid red line), and the range of "pre-industrial" variability implied by the Antarctic ice cores (black dotted line) [38]. Since both our projected emissions and the average airborne fractions have uncertainty ranges associated with them (see previous sections), the resulting concentration projections have a combined uncertainty range shown with a gray bounding envelope.
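A minimal sketch of this accumulation step, under the constant-airborne-fraction assumption, is given below. The emissions series and starting concentration in the usage example are hypothetical placeholders; the conversion factor and airborne fraction are the values quoted in the text for CO2.

```python
import numpy as np

def project_concentration(c_start, annual_emissions, airborne_fraction, mass_per_unit):
    """Cumulate emissions × airborne fraction, converted to a concentration change.

    c_start           : concentration in the start year (ppmv or ppbv)
    annual_emissions  : projected annual emissions (e.g., Gt C/yr or Tg/yr)
    airborne_fraction : assumed constant airborne fraction (e.g., 0.44 for CO2)
    mass_per_unit     : emitted mass corresponding to 1 concentration unit
                        (e.g., ~2.13 Gt C per ppmv of CO2)
    """
    increments = airborne_fraction * np.asarray(annual_emissions) / mass_per_unit
    return c_start + np.cumsum(increments)

# Illustrative usage (hypothetical BAU emissions held constant at 12 Gt C/yr):
co2_bau = project_concentration(410.0, [12.0] * 80, 0.44, 2.13)
print(co2_bau[-1])  # projected CO2 concentration (ppmv) after 80 years
```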
Although the concentrations of both CH4 and N2O are still projected to be orders of magnitude lower than that of CO2 by 2100 (CH4 and N2O being reported in parts per billion by volume and CO2 in parts per million by volume), according to the current computer models (e.g., [14]) and the IPCC assessment reports (e.g., [33]), they are both expected to cause much more global warming on a molecule-by-molecule basis than CO2. Specifically, it is argued that these gases have a much greater "Global Warming Potential" (GWP) than CO2. The exact values of these GWP calculations have changed between IPCC reports and depend on the timescale used (e.g., 20 years, 100 years or 500 years). However, if we use the 100-year GWP values from the more recent IPCC Assessment Reports, i.e., Table 8.7 of the IPCC Working Group 1's 5th Assessment Report (2013) [33], CH4 apparently has a "Global Warming Potential (GWP)" 28 times that of CO2 and N2O has a GWP 265 times that of CO2 (see Table 1). This means that 100 ppbv of CH4 is "equivalent to" 1.02 ppmv of CO2 in terms of GWP and that 100 ppbv of N2O is equivalent to 25.21 ppmv of CO2. We have shown these "CO2 equivalent" scales on the right-hand vertical axes of Figure 10b,c. For interested readers, in the Supplementary Materials, we compare the total projected greenhouse gas concentrations under BAU (in CO2 equivalent concentrations) to alternative projections that used the projected airborne fractions implied by the RCP scenarios.
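As an illustration of the "CO2 equivalent" scales just described, the following sketch applies the equivalences quoted above (100 ppbv CH4 ≈ 1.02 ppmv CO2-eq; 100 ppbv N2O ≈ 25.21 ppmv CO2-eq). The present-day concentrations in the usage line are approximate round values, not taken from the datasets used in the paper.

```python
# GWP-based conversion factors, taken directly from the equivalences in the text
CH4_PPBV_TO_CO2EQ_PPMV = 1.02 / 100
N2O_PPBV_TO_CO2EQ_PPMV = 25.21 / 100

def co2_equivalent_ppmv(co2_ppmv, ch4_ppbv, n2o_ppbv):
    """Total greenhouse gas concentration expressed as CO2-equivalent ppmv."""
    return (co2_ppmv
            + ch4_ppbv * CH4_PPBV_TO_CO2EQ_PPMV
            + n2o_ppbv * N2O_PPBV_TO_CO2EQ_PPMV)

# e.g., approximate present-day values (~410 ppmv CO2, ~1850 ppbv CH4, ~330 ppbv N2O)
print(co2_equivalent_ppmv(410.0, 1850.0, 330.0))
```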
Comparing our projected CO2 concentrations to those of the RCP scenarios, we find that they are actually quite similar to the RCP 6.0 scenario; indeed, the RCP 6.0 scenario curve just about fits within the gray bounding envelope of our projection up to 2100. This suggests that the CO2 concentrations of the RCP 6.0 scenario are actually a reasonable estimate of "Business-As-Usual" for the 21st century. This is consistent with several recent articles that argue that RCP 8.5 implies a dramatic increase in CO2 emissions relative to "Business-As-Usual", e.g., [63][64][65][66]. Readers might wonder why the RCP 6.0 CO2 concentrations match so well with our BAU projection when the RCP 6.0 emissions projection curve in Figure 2d was lower than our BAU projection. This seems to be a consequence of the slight temporary increase in the airborne fraction of the RCP 6.0 scenario from 2040-2070 that can be seen in Figure 9a. With regards to CH4, our projected CH4 concentrations are higher than all of the RCP scenarios except RCP 8.5 (see Figure 10b). RCP 8.5 projects a rapid acceleration in atmospheric CH4 over the coming decades, which is not predicted under BAU growth.
Meanwhile, our BAU projections for N2O imply a growth rate that is higher than all of the RCP scenarios, even though our projected emissions in Figure 5d implied a projection intermediate between the two middle RCP scenarios (RCP 4.5 and 6.0). This is due to the relatively low airborne fraction for N2O projected by the RCP scenarios for the 21st century.

How Much of the Recent Global Warming Is Human-Caused vs. Natural?
Several widely cited papers have argued that 90%-95% (or more) of scientists agree on global warming and climate change, e.g., Doran and Zimmerman (2009) [146] and Cook et al. (2013) [147]. Separately, the IPCC 5th Assessment Report has stated that "warming of the climate system is unequivocal", and that "it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century" [15]. For this reason, many readers may wonder about the title of this subsection. With that in mind, let us briefly dissect what exactly was found by the studies just mentioned.
There have indeed been many surveys of the scientific community that have confirmed that 90%-95% (or more) of scientists agree that it is probably warmer today than during the late-19th century and/or that the climate is changing. However, as we have written elsewhere, the climate is always changing, and global temperatures have changed on timescales varying from decades and centuries, e.g., refs. [150,151] and Soon et al. (2015) [152], to millennia, e.g., Carter and Gammon (2004) [153], to millions of years, e.g., Carter (2010) [113]. Therefore, agreeing that the climate has changed and/or that there has been long-term global warming since the late-19th century does not in itself say anything about whether this is natural, human-caused, or a mixture of both. That said, it is often implied that these surveys have also shown that 90%-95% (or more) of scientists agree that this climate change and global warming is human-caused. However, a close inspection of the survey results reveals that this is not true.
For instance, Doran and Zimmerman (2009) [146] sent a survey to over 10,000 Earth scientists and received more than 3000 responses. Of the 3146 respondents, 90% answered "risen" to the question, "When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?", i.e., they agreed that there has been global warming since "pre-1800s". However, the question they were asked on the causes of this warming was remarkably ambiguous: "Do you think human activity is a significant contributing factor in changing mean global temperatures?". While 82% of the respondents answered "yes", Doran and Zimmerman neglected to ask them what percentage they considered to be "a significant contributing factor". In our experience, among the general public, "significant" is synonymous with "large" or "substantial", but among the scientific community it generally means "not insignificant", i.e., more than, say, 5%. In other words, many of the respondents may still have felt that the recent global warming was mostly natural (or perhaps a mixture of human and natural factors).
On the other hand, Cook et al. (2013) [147] purported to be a survey of the scientific literature rather than the scientific community. The authors examined more than 10,000 abstracts from papers that matched the search phrases of either "global climate change" or "global warming". In total, 2/3 of the papers apparently expressed no position on the causes of global warming. Of the remaining abstracts, only 2.1% explicitly disputed that global warming was mostly human-caused and only 0.9% explicitly stated uncertainty over the causes of global warming. Therefore, they concluded that 97% of published scientific articles agreed that global warming is human-caused. However, as one of us has pointed out in Legates et al. (2015) [154], Cook et al. had neglected to mention that only 8% of the 11,944 abstracts had actually made any explicit claim on the causes of global warming. It is true that, of this 8%, very few abstracts explicitly disputed the claim that recent global warming was mostly human-caused (3%). However, similarly, very few abstracts explicitly endorsed the claim (6%). The vast majority (91%) of the abstracts that made an explicit claim on the causes of global warming merely implied that human activity was a factor, i.e., they did not state whether global warming was mostly human-caused or mostly natural.
Verheggen et al. (2014)'s survey of nearly 2000 climate scientists [149] was a bit more nuanced. However, a close inspection of the survey results reveals that only 38% of the respondents believed that recent global warming was entirely human-caused, and only 27% believed that it was mostly human-caused. Meanwhile, 12% of respondents believed that it was mostly or entirely natural, 12% believed it was a mixture of both, and 10% did not know. Meanwhile, Stenhouse et al. (2013)'s survey of nearly 2000 meteorologists [148] revealed that 52% believed that it was mostly or entirely human-caused, but that 15% believed it was mostly natural or a mixture of both, and 21% were unsure. The rest were not convinced that global warming would increase in the future.
In other words, there are actually several different perspectives among the scientific community as to the causes of recent global warming:
• Paradigm 1. Recent global warming was mostly or entirely human-caused, and future climate change is going to be increasingly dominated by human-caused global warming.
• Paradigm 2. Recent global warming was a mixture of human and natural causes. This means that the current climate models are probably underestimating the role of natural factors in the recent warming and are therefore probably overestimating the magnitude of human-caused global warming that we should expect.
• Paradigm 3. Recent global warming was mostly or entirely natural, and not human-caused. This implies that there is something fundamentally wrong with the computer models, and their projections of future human-caused global warming should be treated with skepticism.
This has important implications for our assessment of how much human-caused global warming we should expect under BAU. So, it is important to understand why there could be such disagreement among the scientific community on this fundamental point. To get an idea of why, in Figure 11, we consider two competing perspectives. The left-hand side, i.e., Figure 11a-f, illustrates a common take on the "mostly human-caused" paradigm as argued by the IPCC 5th Assessment Report [33]. The right-hand side, i.e., Figure 11g-l, summarizes the counter-arguments made by some of us in [152] and presents a perspective from the "mostly natural" paradigm. For a detailed discussion of each perspective, we refer readers to Chapter 10 of the IPCC 5th Assessment Report [33] and Soon et al. (2015) [152], respectively. However, in brief, the key differences are as follows:
• The IPCC argued that automated statistical homogenization techniques, such as Menne and Williams (2009) [155], are able to remove any non-climatic biases, such as the growth of urban heat islands, and therefore used as many stations as possible to estimate global temperature trends, regardless of whether they have been affected by urbanization bias or not. Soon et al. argued that those automated homogenization techniques are inadequate for that purpose, and therefore estimated global temperature trends using only rural (or mostly rural) stations.
• The IPCC argued that solar variability has been very low since the 19th century, and that solar activity has been, if anything, declining since the mid-20th century. Therefore, they only considered solar variability estimates that fit that narrative, e.g., Wang [160]. Soon et al. instead used the updated Hoyt and Schatten (1993) [231,232] estimate for solar variability, which implies a larger role for changes in solar output.
• The IPCC were therefore unable to explain any of the post-1950s global temperature trends in terms of natural factors and concluded that human-caused factors (chiefly increasing greenhouse gas concentrations) were needed to explain the warming since then. They therefore concluded that recent global warming was mostly (or entirely) human-caused. On the other hand, Soon et al. were able to explain almost all of the temperature trends since 1881 in terms of changes in solar output. They therefore concluded that recent global warming was probably mostly (or entirely) natural.
Figure 11. Example of two narratives from the competing (a-f) "recent global warming is mostly human-caused" and (g-l) "recent global warming is mostly natural" paradigms. See text for a detailed discussion. The horizontal axes correspond to years.

When Exactly Was "Pre-industrial"?
The UNFCCC-organized 2015 Paris Agreement set out an international agreement to, "(hold) the increase in the global average temperature to well below 2 °C above pre-industrial levels and (pursue) efforts to limit the temperature increase to 1.5 °C above pre-industrial levels" [19]. However, while this initially sounds like a very specific and definite agreement, a careful parsing of the wording reveals a remarkable degree of ambiguity about what exactly the agreement is agreeing to. Implicit in the above agreement is the "recent global warming is mostly human-caused" paradigm described in Section 4.1. Pielke Jr. (2005) [161] has noted that the UNFCCC explicitly defines "climate change" as being entirely human-caused, and so it is therefore not surprising that their agreement does not consider the possibility that much (or even all) of this warming is natural. However, even if we ignore the ongoing debate over how much of the warming since the late-19th century is human-caused vs. natural, and assume for the sake of argument that the "mostly human-caused" argument is correct, what exactly is meant by "pre-industrial levels" [22,162,163]?
As noted by Hawkins et al. (2017) [162], "in the absence of a formal definition for preindustrial, the IPCC AR5 made a pragmatic choice to reference global temperatures to the mean of 1850-1900 when assessing the time at which particular temperature levels would be crossed". This decision seems to have been because several of the standard instrumentally based global temperature time series used by the IPCC began in 1850 or 1880. In the penultimate draft of the report, this period was apparently explicitly referred to as "preindustrial", but during the ensuing governmental approval session, this reference was removed.
Therefore, although the IPCC AR5 reports seem to have been considered during the drafting of the Paris Agreement, it is unclear which baseline was meant. Hawkins et al. (2017) argue that 1720-1800 would make a more suitable baseline for "pre-industrial" global average temperatures than 1850-1900, while Lüning and Vahrenholt (2017) [163] argue that 1940-1970 is the most suitable baseline. The choice of baseline is quite significant since paleoclimate reconstructions suggest that global average temperatures have varied substantially over the last millennium and earlier, e.g., refs. [164][165][166][167][168][169][170]. In particular, it seems that, coincidentally, the 18th and 19th centuries corresponded to a relatively cool period known as the "Little Ice Age". On this basis, Akasofu (2010) argued that much of the warming since the 19th century corresponded to a natural "recovery from the Little Ice Age" [171]. This would imply that using an 18th or 19th century baseline for "pre-industrial" would be too cold. By comparing the instrumental time series to various paleoclimate reconstructions, Lüning and Vahrenholt (2017) [163] argue that the 1940-1970 period was closer to the long-term global temperature average of the last few millennia. However, clearly, 1940-1970 is long after the start of the actual Industrial Revolution. Hawkins et al. (2017), on the other hand, argue that "pre-industrial" should be defined much earlier, e.g., 1720-1800 [162]. Alternatively, if the context of the baseline is to indicate a period before human activity had a significant influence on the climate, then some researchers have argued that humans were already significantly influencing the climate before the Industrial Revolution. For example, Koch et al. (2019) [172] argue that depopulation from the disease epidemics in the Americas initiated by the arrival of Europeans in 1492 indirectly led to a substantial reforestation of the Americas, and that this might have caused a significant global cooling. On this basis, they argue that "the Great Dying of the Indigenous Peoples of the Americas resulted in a human-driven global impact on the Earth Systems in the two centuries prior to the Industrial Revolution" [172]. Ruddiman et al. (2016) argue that human influence on global average temperatures began even earlier, with the development of agriculture thousands of years ago [173].
At any rate, the question of which temperature baseline defines "pre-industrial levels" depends on how the global average temperature has varied over the last millennium or so. This turns out to be yet another topic of ongoing debate in the scientific literature. Up until the mid-1990s, it was generally accepted that before the Little Ice Age, sometime around 1000-1200, there was a Medieval Warm Period when global average temperatures were at least as warm as present, if not warmer. However, in the late-1990s, a series of paleoclimate reconstructions were published which implied that global average temperatures had been fairly constant for at least the last thousand years up until the end of the 19th century before rising sharply, e.g., [164]. In particular, the main figure of a highly cited paper, Mann et al. (1999) [164], was dubbed "the hockey stick graph" because its estimate of Northern Hemisphere temperatures since 1000 AD apparently looked similar to an ice hockey stick lying on the ground with the "blade" sticking up in the air. This graph featured prominently in the IPCC's 3rd Assessment Report (2001) [55], and appeared to confirm the claims of that report that global average temperature changes are currently dominated by human activities, and that the 20th century global warming was unprecedented in at least 1000 years. However, the striking claims of Mann et al. (1999) have been very controversial and contentious, both in the scientific literature and in the wider public sphere. One important unanswered puzzle is the blending of proxy temperatures with instrumental thermometer data in the 20th century, which seems to have been a major component of the apparent "blade", as cautioned by Soon et al. (2004) [174]. Readers interested in a detailed discussion of the debate might find the books by Montford (2010) [175] and Mann (2013) [176] useful for seeing two opposing perspectives. Each of us has also written extensively on this debate elsewhere, e.g., [113,150,151,[177][178][179].
For readers who are interested in learning more about these debates and the various other controversies that have arisen over this topic, we recommend reading Connolly and Connolly (2014) [178]. However, it should already be apparent that this is another topic that includes several competing scientific paradigms. We can get an idea of these paradigms by considering Figure 12. Essentially, there appear to be four main paradigms when it comes to current views on how the Current Warm Period compares to the Little Ice Age and Medieval Warm Period:
• Paradigm 4. There are still too many inconsistencies between the various reconstructions, and too many uncertainties and poorly justified assumptions associated with many of the underlying proxy series, for us to establish how the globally representative averaged temperature changes during the current period compare to those over the last millennium or longer. Some of these problems and uncertainties have been described by, e.g., [178,[183][184][185][190][191][192],198,199].
Figure 12. Selected paleoclimate temperature reconstructions, e.g., [165]. The time series were downloaded from NOAA NCEI's Paleoclimatology public data repository, https://www.ncdc.noaa.gov/data-access/paleoclimatology-data (Accessed: January 2020). Highlighted dates correspond to different proposed baseline periods for the "pre-industrial" era, as discussed in the text. The horizontal axes correspond to years.
The debate over the relative warmth of the Medieval Warm Period, Little Ice Age, and Current Warm Period has continued, and appears to be ongoing. For instance, while the PAGES-2K (2019) reconstruction [170] appears to support Paradigm 1 or 2, the Ljungqvist (2010) [168] reconstruction appears to support Paradigm 3. Meanwhile, Shi et al. (2013) [193] provide three different reconstructions, all using the same data but different methods for processing the data. Each of the three Shi et al. (2013) reconstructions appears to support a different one of the first three paradigms. Some might argue that this in itself supports Paradigm 4. Another issue that should be noted is that we did not include any thermometer-based time series in Figure 12. Many groups have chosen to superimpose the instrumental record on top of the proxy-based series, e.g., refs. [164][165][166][167][168][169][170]. As noted by, e.g., Soon et al. (2004) [174] and Ljungqvist (2010) [168], because the proxy-based series tend to show less variability than the direct instrumental record, this has the visual effect of artificially making the Current Warm Period seem warmer than it otherwise would.
At any rate, it should be apparent that there are many plausible baseline periods for defining "pre-industrial temperatures", but because climate change was also occurring during the pre-industrial era, you could end up with different answers depending on whether you chose a relatively cool period or a relatively warm period. However, we suggest that the underlying motivation behind the Paris Agreement was not to define any particular pre-industrial period as having a supposedly ideal global temperature. Rather, to us, the motivation seems to have been to attempt to minimize the magnitude of future human-caused global warming that was specifically due to increasing greenhouse gas concentrations.
With this in mind, we argue that a better approach for defining "pre-industrial levels" is to assume that the Paris Agreement is referring only to the changes in global temperature that are due to human-caused global warming from greenhouse gas concentrations increasing above their pre-industrial levels. Therefore, for our analysis, we will calculate the estimated human-caused global warming up to 2100 under BAU in terms of the increases in greenhouse gases relative to "pre-industrial levels". We will define "pre-industrial levels" as the concentrations implied by the Antarctic ice cores, although we remind the reader of the controversies over the reliability of these estimates, which we discussed in Section 3.

How "Sensitive" Is the Global Average Temperature to Changes in Greenhouse Gas Concentrations?
If we assume that the changes in atmospheric greenhouse gas concentrations up to 2100 under Business-As-Usual conditions are indeed as proposed in Section 3.5, that still does not tell us how much Anthropogenic Global Warming this would cause. In order to establish this, we need to know what the "climate sensitivity" is, i.e., how much human-caused global warming is generated by a given increase in greenhouse gases (typically defined in terms of a doubling of CO2). Unfortunately, there is as yet no consensus on what the actual "climate sensitivity" is, e.g., see Knutti et al. (2017) for a list of several hundred estimates [200].

Climate Sensitivity Paradigms
Part of the reason for the ongoing debate over the "climate sensitivity" is that nobody has been able to directly measure it. Although it has been well established experimentally that: (i) each of the greenhouse gases is infrared-active, e.g., Tyndall (1861) [31]; (ii) the presence of greenhouse gases in the Earth's atmosphere alters the shape of the outgoing infrared spectrum of the Earth, e.g., Harries et al. (2001) [201]; and (iii) globally-averaged surface temperatures have on average increased since the late-19th century, e.g., see the discussion in Section 4, there have (as yet!) been no experimental measurements that have directly demonstrated and quantified the proposed increase in atmospheric temperatures specifically from increasing greenhouse gases. We urge readers to note the nuance in this statement. As we will discuss in this section, there have indeed been many computer model-based, theoretical, or semi-empirical studies that have attempted to quantify the influence of increasing greenhouse gases on atmospheric temperature (and particularly surface temperatures). However, none of these studies have been able to directly demonstrate and quantify experimentally the proposed increase in atmospheric temperatures from an increase in atmospheric greenhouse gas concentrations.
In other words, it has not been directly established experimentally how "sensitive" the global average temperature is to changes in greenhouse gas concentrations. Instead, attempts have either been indirect, by fitting the changes in greenhouse gas concentrations to global temperature trends (e.g., refs. [37,152,[202][203][204][205]), computer model-based (e.g., refs. [2,8,10,206]), or reliant on theoretical and/or semi-empirical models or assumptions (e.g., refs. [207][208][209]). We stress that this does not invalidate the attempts, but it means that the estimates that are obtained often depend heavily on the theoretical assumptions explicit (or implicit) in the approach taken. As in previous sections, we have identified several distinct paradigms within the literature on this topic, and each of them appears to involve different assumptions (implicit or explicit):
• Paradigm 1: "Global warming is mostly or entirely human-caused". Changes in greenhouse gas concentrations are the primary driver of global temperature change, especially in recent decades, and the long-term warming since the late-19th century is mostly (if not entirely) due to human-caused greenhouse gas emissions. Within this paradigm, there is generally less interest in trying to understand the causes of recent climate change, and instead the focus is largely on quantifying future climate change from increasing greenhouse gas concentrations, e.g., Andronova and Schlesinger (2001) [217].
• Paradigm 2: "Global warming is a mixture of human-caused and natural factors". It is assumed that human-caused greenhouse gas emissions are a significant driver of recent global temperature change (as in Paradigm 1), but that natural climate change has probably also been a significant driver. Within this paradigm, satisfactorily establishing the relative roles of natural and human-caused factors in recent climate change is a primary focus, since this strongly influences both our understanding of recent climate change and our expectations for future climate change. For instance, if 50% of the warming since the late-19th century was due to natural climate change, then this suggests that the future warming from increasing greenhouse gases would probably only be at most half of what might be expected if 100% of the warming was due to greenhouse gases, e.g., Idso (1998) [228].
• Paradigm 3: "Global warming is mostly or entirely natural". Greenhouse gases are not necessarily a major driver of global temperature change, and most (or all) of the warming since the late-19th century is due to the same natural climatic changes that have been occurring since long before the Industrial Revolution. Within this paradigm, there is generally less interest in describing future climate change (which is typically assumed to be comparable to the climate changes experienced over the last few millennia). Instead, the primary focus tends to be on better quantifying the magnitudes and causes of past climate changes.
Within Paradigm 1, there are actually several competing philosophies and approaches that largely boil down to the fundamental question of whether or not the results from computer model simulations of potential future climate change are more relevant than observations of recent climate change.
Within Paradigms 2 and 3, the fact that this question is debated can seem largely incomprehensible as it is usually assumed that empirical observations automatically take precedence over computer model results (the four of us can vouch for this since most of our climate change research has been based on either Paradigm 2 or 3 and it has been difficult for us to appreciate the unquestioned credibility afforded to computer model results by many in the scientific community). This is even the case for some working within Paradigm 1, e.g., Schwartz (2008) [236]. However, within Paradigm 1, the debate is considered non-trivial, e.g., see the debate over the Schwartz (2007) [237] study between Knutti et al. (2008) [238] and Schwartz (2008) [236], or that over Monckton et al. (2015a) [216] between Richardson et al. (2015) [239] and Monckton et al. (2015b) [240].
Given that Paradigm 1 assumes that greenhouse gas concentrations are the primary driver of climate change and is largely concerned with estimating future climate change if carbon dioxide doubles or trebles (along with other increasing greenhouse gases), the recent climate changes that have been experienced up to present are considered to be just the beginning of increasingly substantial human-caused global warming. As a result, many researchers within this paradigm argue that the computer model projections of future global warming under substantially increased greenhouse gas concentrations provide more insight into future global warming than the experimental observations up to present, e.g., refs. [8,206,238,239]. That is, in terms of understanding the climate changes to be expected from increasing CO2, the computer model projections consider concentrations two, four, or more times that of pre-industrial CO2, while the historical observations only cover a period during which CO2 has increased by less than 45% relative to the pre-industrial concentrations implied by the Antarctic ice cores. Moreover, because the computer model simulations can provide continuous and complete values for every aspect of the model's climate system, the time series and results that can be extracted from the model simulations are very tidy, comprehensive, and precise, while experimental measurements of the real climate system are often based on an incomplete sampling network, and may be affected by various non-climatic biases and instrument errors [238].
On the other hand, other researchers argue that the computer model projections are only describing climate change in a computer model world. They argue that if you want to understand how the climate changes in the real world, you will get more realistic answers if you base them on actual experimental observations [236].
Meanwhile, many researchers argue that running a typical Global Climate Model simulation with the latest code requires a large computational expense (a typical run can take a few weeks or even months to complete for a climate modelling group even using high-end supercomputers). Furthermore, the simulations arguably report far more information than is necessary for most studies. As a result, a lot of climate sensitivity studies are based on relatively simple analytical models or theoretical frameworks that do not require the computational expense of a full Global Climate Model simulation.
With this in mind, it can be helpful to divide Paradigm 1 into three sub-paradigms:
• Paradigm 1a: For estimating the future climate change that would occur if CO2 doubles or quadruples, the latest simulations from the most up-to-date Global Climate Models are probably more reliable than extrapolating from historical observations, e.g., Knutti

Different Climate Sensitivity Definitions and Estimates
The computational power available for the first climate model simulations in the 1960s, 1970s, and 1980s was very limited compared to today. As a result, most of the early attempts to model the effects that increasing CO2 would have on the climate tended to focus on idealized hypothetical scenarios in which the atmospheric CO2 concentration was doubled relative to the concentration of the time. Modelers would run these "2 × CO2" simulations until the climate system in the model world had equilibrated. This model climate was then compared with the results from a similar model world where the atmospheric CO2 was kept the same as present (i.e., "1 × CO2"), e.g., Manabe and Wetherald (1975) [2]. The difference between the globally averaged temperatures of the two simulations came to be known as the "Equilibrium Climate Sensitivity" (ECS).
In a well-cited National Research Council report in 1979 [246], led by Jule Charney (and hence commonly referred to as "the Charney report"), the results of such ECS simulations from several computer modelling groups were used to conclude that a doubling of atmospheric CO2 would probably lead to 1.5-4.5 °C of human-caused global warming. Schlesinger (1986) noted that although the simulations from these "general circulation models" did indeed imply a similar range of climate sensitivity estimates (1.3-4.2 °C, taken from seven separate studies), alternative approaches led to different ranges [8]. He found three different studies that used "surface energy balance models", which implied the climate sensitivity could be anywhere in the range 0.24-9.6 °C. He also found estimates from 17 studies using "radiative-convective models", and these implied that the range was 0.48-4.20 °C.
Nonetheless, van der Sluijs et al. (1998) noted that, "in international assessments of the climate issue, the consensus-estimate of 1.5 to 4.5 °C for climate sensitivity has remained unchanged for two decades" [247]. More recently, Knutti et al. (2017) confirmed that, still, "the consensus on the 'likely' range for climate sensitivity of 1.5 to 4.5 °C today is the same as given by Jule Charney in 1979" [200]. We will return to this question later, but for now we note that the most recent IPCC 5th Assessment Report (2013) also argued that the "likely" value for the Equilibrium Climate Sensitivity was probably in the range 1.5-4.5 °C [15].
At any rate, by the mid-1980s, it was already apparent that the long-term global warming since 1880 was at most half of what would have been expected from the ECS values, given the increase in atmospheric CO2 that had already occurred. However, Schlesinger (1986) argued that this did not mean the climate models were wrong. Instead, he argued that the problem was that the ECS values were for equilibrium conditions and that, "the actual response of the climate system lags the equilibrium response because of the thermal inertia of the ocean" [8]. Bryan et al. (1982) had referred to this apparent lag as the "Transient Climate Response to increasing atmospheric carbon dioxide" [248], and the term Transient Climate Response (TCR) is now generally used to refer to the climate sensitivity that would be observed as carbon dioxide gradually doubles over a multidecadal period (typically defined as a 1%/annum increase over 70 years).
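As a simple check on this definition, a compound increase of 1% per annum does indeed correspond to an approximate doubling of the concentration after 70 years:

(1.01)^{70} \approx 2.01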
Computational power has improved dramatically over the decades, and since the 2000s, it has become increasingly standard for climate modelling groups to carry out Transient Climate Response simulations where CO2 gradually increases over time as well as the original "2 × CO2" Equilibrium Climate Sensitivity simulations (and more recently, "4 × CO2" simulations [249]), e.g., refs. [217,244,[250][251][252][253][254][255][256]. The climate sensitivity estimates implied by the Transient Climate Response simulations are typically a good bit lower than those from the Equilibrium Climate Sensitivity simulations. That is, the expected human-caused global warming for a doubling of CO2 is typically a good bit lower for the TCR estimates than the ECS estimates, e.g., Forster et al. (2013) estimate the ECS to be in the range 1.90-4.54 °C with a most-likely value of 3.22 °C, while they estimate the TCR to be in the range 1.19-2.45 °C with a most-likely value of 1.82 °C [254].
Within Paradigm 1, it is now generally accepted that the reason why the climate sensitivities implied by the TCR simulations are lower than those from the ECS simulations is that proposed by Bryan et al. (1982) [248] and Schlesinger (1986) [8]. That is, in the computer model world, some of the extra "greenhouse heating" from increasing greenhouse gas concentrations is temporarily absorbed by the oceans, but because the model oceans have a large heat capacity and relatively slow circulation rates, it can take decades or even centuries (in model "years") before the climate system has fully equilibrated, see, e.g., Gregory and Mitchell (1997) [250]. According to this theory, even if CO2 concentrations stop increasing once they have doubled, the human-caused global warming will continue to rise over time until the oceans have equilibrated. At that point, the expected warming will be that of the ECS rather than the initial TCR. However, the models also predict that this could potentially take centuries, meaning that the lower TCR estimates are the ones that are most relevant for the coming century. Hope (2015) has even argued that finding out a more accurate value for the TCR has a "$10 trillion value" [259]. Hansen et al. (2005) argue that the exact length of this proposed "lag" should increase with ECS, e.g., they argue that for an ECS of ~1 °C, the lag could be as short as a decade, but for an ECS of ~4 °C or greater, the lag could be a century or longer [257].
On the other hand, while these arguments carry a lot of weight within Paradigm 1a, many researchers from outside that paradigm are unimpressed by claims that we should base our policies solely on computer model predictions of how the climate would hypothetically change over multiple centuries into the future. Partly for that reason, a lot of researchers have tried to estimate the true climate sensitivity to greenhouse gases guided by observations of how the climate has changed already. As discussed in the previous section, different researchers have attempted this from within different paradigms.
Some of those working from within Paradigm 1b or 1c have obtained similar estimates to those of the climate models, e.g., Otto et al. (2013) estimated that the ECS is in the range 1.2-3.9 °C and that the TCR is in the range 0.9-2.0 °C [207]. However, others find that the climate sensitivity is significantly smaller for both TCR and ECS, e.g., Lewis and Curry (2018) estimate that the ECS is 1.05-2.45 °C and the TCR is 0.9-1.7 °C [208]; while Bates (2016) [209] estimates the ECS at 0.85-1.30 °C and Lindzen and Choi (2011) [260] estimate it at 0.5-1.3 °C.
Many of the estimates based on Paradigm 2, i.e., assuming that some of the warming since the 19th century may have been natural, suggest climate sensitivities that are even lower still, e.g., Idso (1998) calculated a maximum climate sensitivity (equivalent to either ECS or TCR) of 0.4 °C, while Ziskin and Shaviv (2012) calculated a climate sensitivity range that is equivalent to an ECS of 0.69-1.26 °C.
We have found very few studies working from within Paradigm 3 that provide a climate sensitivity value. This seems to be because, if you are finding (or assuming) that the global warming since the late 19th century is mostly or entirely natural (and therefore not a result of increasing CO2), then there is less motivation for estimating these values. However, as part of our analysis in [152] (which we summarized in Section 4.1), we argued that, after accounting for urbanization bias and using the updated Hoyt and Schatten (1993) [231,232] estimate for solar variability, the residuals implied a maximum climate sensitivity of 0.44 °C for a doubling of CO2. Although we did not define it there in terms of ECS or TCR, in this case it would probably be most equivalent to an upper bound for TCR. However, as discussed in Section 4.1, even adding this small role for CO2 did not substantially improve the fit to the observed temperature trends over 1881-2014, i.e., it suggested that the climate sensitivity to greenhouse gases was very small or even zero [152].
At any rate, for the purposes of the final stage of our analysis, regardless of whether we use the TCR or ECS estimates, what value should we assume for the climate sensitivity? The IPCC 5th Assessment Report (2013)'s "likely" estimate for the TCR is in the range 1.0 °C to 2.5 °C. They consider it "extremely unlikely" to be higher than 3.0 °C. They argue that the ECS "is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence)" [33]. However, as can be seen from the list of estimates in Table 5, there is a wide range of estimates for both the ECS and TCR, and several of these estimates include values that are outside the IPCC's ranges. This is actually only a partial sample of the various estimates available in the literature. For a more complete list, Knutti et al. (2017) [200] provide a summary of hundreds of estimates for both TCR and ECS.

Table 5. Illustrative sample of published climate sensitivity estimates, with examples taken from each of the paradigms described in the text (columns: Study; Paradigm; ECS (or equivalent); TCR (or equivalent)). The values listed are not a comprehensive list (see Knutti et al. (2017) [200] for a summary of several hundred estimates in the literature). The values accompanied by a † correspond to the most-likely, mean, or median value if provided by the study.

Given that the 2015 Paris Agreement has set an international, but voluntary, target of keeping human-caused global warming below 2 °C, it should be apparent that establishing whether the actual values are at the high end or at the low end of the IPCC's ranges (or outside their ranges) has huge implications for what exactly the Paris Agreement has agreed to. More specifically for this study, in order to estimate how much human-caused global warming we should expect for our projected BAU increases in atmospheric greenhouse gas concentrations, we need to establish what the actual climate sensitivity is. However, as can be seen from Table 5, there are many different estimates of both the TCR and ECS published in the literature.
Therefore, rather than considering just one value for the climate sensitivity, for the rest of our analysis, we will consider a range of six different values for TCR: 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 °C. This covers the IPCC's current "likely" range of 1.0-2.5 °C, but also considers a lower value of 0.5 °C, recognizing that several recent studies have argued that the TCR could be less than 1.0 °C, e.g., refs. [152,207,208,215,216,219,221,227], as well as a higher value of 3.0 °C. Similarly, we consider a range of six different values for ECS (1, 2, 3, 4, 5, and 6 °C), which encompasses the IPCC's current "likely" range of 1.5-4.5 °C, but also considers the possibility that the ECS might be lower than 1.5 °C, e.g., refs. [202,[207][208][209]211,212,215,216,[218][219][220]222,226,236,237,260,263,265,266], or that it might be higher than 4.5 °C, e.g., refs. [202,210,262,264,265]. We also stress that if Soon et al. (2015) are correct, then the TCR is less than 0.44 °C [152], i.e., less than the lowest value of 0.5 °C that we will consider in this analysis. In that case, the expected human-caused global warming under BAU will be even smaller.

Converting Projected Greenhouse Gas Concentrations into Projected Human-Caused Global Warming for Different Transient Climate Response and Equilibrium Climate Sensitivity Estimates
Both the TCR and ECS metrics are typically defined in terms of a doubling of atmospheric CO2 concentrations. However, our BAU projections of future greenhouse gas concentrations from Section 3.5 describe annual changes. Therefore, we need to come up with a suitable approach for translating a metric defined in terms of a doubling into the warming expected from these annual changes. Moreover, as discussed in the previous section, according to Paradigm 1, the equilibration time required for the ECS values to be reached is on the order of centuries, e.g., refs. [244,[250][251][252][253][255][256][257][258]. Hence, since our projections only cover the ~80-year period up to the end of the 21st century, we will need to convert the ECS values into estimates of the shorter-term warming that would have occurred up to 2100. Finally, since we are also considering the potential contributions of CH4 and N2O, we will need to translate these CO2-defined metrics into metrics that are also relevant for CH4 and N2O.
Let us consider the final problem first. Myhre et al. (1998) [267] and others (e.g., refs. [10,[268][269][270][271]) argue that the expected relationship between increasing concentrations and global temperatures should be different for CO2, CH4, and N2O, since they each have different infrared activities as well as different calculated atmospheric lifespans. Therefore, in order to convert an increase in concentration of a non-CO2 greenhouse gas into a CO2-equivalent concentration change, it has become standard practice to multiply the concentration change by a metric called the "Global Warming Potential" (GWP). The value of this calculated metric depends on the timescale being considered, and the IPCC reports offer different estimates for timescales of 20 years and 100 years. Since the timescale we are considering (up to 2100) is ~80 years, we use the 100-year GWP figures from Table 8.7 of the IPCC Working Group 1's 5th Assessment Report [33], as described in Section 3.5. We then sum the combined greenhouse gas concentrations for each year in CO2-equivalent concentrations, i.e., we sum the time series plotted in Figure 10a-c into one time series.
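As an illustration of this step, the following minimal sketch sums CO2, CH4, and N2O into a single CO2-equivalent series. It assumes that the ppbv-to-CO2-equivalent-ppmv conversion is weighted by the ratio of molar masses (GWP values are defined per unit mass), which reproduces the CH4 equivalence quoted in Section 3.5; the function names and the example concentration values are purely illustrative.

```python
# Minimal sketch: converting CH4 and N2O concentrations (ppbv) into CO2-equivalent
# concentrations (ppmv) using the 100-year GWP values from Table 8.7 of IPCC AR5 WG1,
# and summing the three gases into a single CO2-equivalent value. Because GWP is
# defined per unit mass, the conversion is assumed here to be weighted by molar mass.

MOLAR_MASS = {"CO2": 44.01, "CH4": 16.04, "N2O": 44.01}   # g/mol
GWP_100 = {"CH4": 28.0, "N2O": 265.0}                      # IPCC AR5 WG1, Table 8.7

def ppbv_to_co2_eq_ppmv(gas, ppbv):
    """Convert a trace-gas concentration (ppbv) into CO2-equivalent ppmv."""
    mass_ratio = MOLAR_MASS[gas] / MOLAR_MASS["CO2"]
    return (ppbv / 1000.0) * mass_ratio * GWP_100[gas]

def total_co2_eq_ppmv(co2_ppmv, ch4_ppbv, n2o_ppbv):
    """Sum the three greenhouse gases into one CO2-equivalent concentration (ppmv)."""
    return (co2_ppmv
            + ppbv_to_co2_eq_ppmv("CH4", ch4_ppbv)
            + ppbv_to_co2_eq_ppmv("N2O", n2o_ppbv))

if __name__ == "__main__":
    # 100 ppbv of CH4 corresponds to ~1.02 ppmv CO2-eq under this convention,
    # in line with the equivalence quoted in Section 3.5.
    print(round(ppbv_to_co2_eq_ppmv("CH4", 100.0), 2))
    # Illustrative (hypothetical) present-day-like concentrations:
    print(round(total_co2_eq_ppmv(co2_ppmv=410.0, ch4_ppbv=1870.0, n2o_ppbv=332.0), 1))
```

Applying this conversion to each year of the projections yields the single combined CO2-equivalent time series used in the rest of the analysis.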
In order to describe the modelled global temperature response to an increase in greenhouse gas concentrations, it has become standard practice to express the modelled relationships in terms of a metric called the "Radiative Forcing" (RF). This is a calculated metric with units of W m−2, and so can be easily compared with, for instance, measured changes in incoming solar irradiance. Calculated values of the RF associated with a change in atmospheric CO2 are so prevalent within the literature that some readers might initially have assumed that these values are somehow experimentally derived. Therefore, we stress that the RF of CO2 is a calculated metric. Some widely cited values are calculated from radiative transfer models, e.g., Shi (1992) [271]. However, the RF for a given Global Climate Model can also be inferred from the computer model output, e.g., Forster et al. (2013) [254].
At any rate, the RF of CO2 is typically assumed to increase logarithmically with concentration according to the following equation, taken from Myhre et al. (1998) [267]:

\Delta F = \alpha \ln\left(\frac{C}{C_0}\right) \quad (1)

where ΔF is the change in RF (in W m−2), C is the new concentration, C0 is the reference concentration (in our case, the pre-industrial concentrations implied by the Antarctic ice core estimates), and α is a constant. The value of α varies from study to study, e.g., in the IPCC's 1st Assessment Report (1990) [53], it was assumed that α = 6.3, but following Myhre et al. (1998) [267], a value of α = 5.35 has generally been adopted in more recent assessments, implying ΔF ≈ 3.71 W m−2 for a doubling of CO2 (although the effective value implied by individual Global Climate Models varies, e.g., [254]). Gregory and Mitchell (1997) [250] argued that this increase in radiative forcing can be related to the temperature response, ΔT, for a well-equilibrated system via the following equation:

\Delta T = \frac{\Delta F}{\lambda} \quad (2)

where λ is a constant "climate response factor", sometimes called the "feedback" factor, meaning that for a doubling of CO2, ΔT in Equation (2) would correspond to the ECS. Meanwhile, for the Transient Climate Response, the transient temperature response is reduced by a term to account for the proposed lag due to ocean heat uptake, which is expressed as follows:

\Delta T = \frac{\Delta F}{\lambda + \kappa} \quad (3)

where κ is also treated as a constant, which is known as the "ocean heat uptake efficiency". (Note: Gregory and Mitchell (1997) actually used a different label of Q for ΔF.) This relatively simple framework for relating the Transient Climate Response and Equilibrium Climate Sensitivity has become quite popular, and many studies have included estimates of κ, e.g., refs. [252,255,258,261]. For example, Dufresne and Bony (2008) calculated that κ varied from 0.53 to 0.92 W m−2 K−1 with a mean of 0.69 W m−2 K−1 across 12 of the CMIP3 Global Climate Models [258]. However, implicit in this framework is the paradox that the κ "constant" is not actually a constant: it must tend towards zero as the oceans equilibrate, so that Equations (2) and (3) converge. Partly to address this, a more sophisticated "two-layer model" framework has been developed, in which the ocean is represented by a shallow, rapidly responding upper layer coupled to a deep, slowly responding lower layer.

Rohrschneider et al. (2019) compared the results from this two-layer model (and also "two-region models" similar to those used by Bates (2016) [209]) to those of the complete Global Climate Models. They found that the two-layer models did indeed provide a good approximation of the Global Climate Models (and that in terms of globally averaged results they were equivalent to the two-region models), but they recommended that the Global Climate Models were more reliable for detailed studies [242]. Gregory et al. (2015) [252] also carried out a detailed comparison of the two-layer model approach to the equivalent results from CMIP5 Global Climate Models. They also compared the results of the Gregory and Mitchell (1997) [250] framework of Equations (2) and (3) described above, which they call the "zero-layer model". They found that the two-layer model was a much better approximation of the CMIP5 model results than the zero-layer model when considering long timescales of several centuries at high CO2 concentrations. Specifically, they found that in the CMIP5 models, the implied value of κ gradually decreased from a range of 0.73 ± 0.11 W m−2 K−1 to 0.54 ± 0.11 W m−2 K−1 after 120-140 years, during which CO2 quadrupled.
Therefore, if we were to extend our BAU projections to, e.g., the middle of the 22nd century, then we would probably need to use a more sophisticated approach than the zero-layer model. However, since we are only extending our projections ~80 years, i.e., to 2100, and greenhouse gas concentrations are only projected to have slightly more than doubled by then (see Figure 10), we argue that the zero-layer model is a reasonable approximation for estimating the transient temperature response to increasing greenhouse gas concentrations over the next ~80 years for a given climate sensitivity.
With this in mind, we will take the following approach to converting the projected BAU greenhouse gas increases into the expected human-caused global warming for a given climate sensitivity. For the TCR values, we will assume that λ and κ are both constant, and that the increase in temperature, ΔT, for a given year is therefore proportional to the increase in ΔF (Equation (3)). The TCR value corresponds to the ΔT when C = 2 × C0 in Equation (1). Therefore, the expected ΔT for a given year is related to the concentration of greenhouse gases in that year (in CO2-equivalent), C, by:

\Delta T = \mathrm{TCR} \times \frac{\ln(C/C_0)}{\ln 2} \quad (4)

The calculations are a little more complex for a given ECS value. If we want to calculate the transient temperature response for a given year, we need to use Equation (3). However, the ECS only tells us what the expected temperature response would be for Equation (2), i.e., when κ = 0.
Our first step is to decide on a suitable value of κ. As mentioned above, Gregory et al. (2015) [252] calculated that the mean value of κ for the CMIP5 models after a doubling of CO2 was 0.73 W m−2 K−1. Therefore, for our analysis in this paper, we will assume that κ = 0.73 W m−2 K−1. However, in the Supplementary Materials, we also provide equivalent analyses using either the highest or lowest values for a doubling of CO2 of the CMIP5 models from Gregory et al. (2015)'s Table 1.
We then need to decide on a value for λ. We do this by assuming ΔF = 3.71 W m−2 for a doubling of CO2, i.e., the value used by Myhre et al. (1998) [267] and also the IPCC 5th Assessment Report [33]. However, in the Supplementary Materials, we also provide equivalent analyses using the highest and lowest estimates of ΔF calculated by Forster et al. (2013) [254] for the CMIP5 models, i.e., 2.59 and 4.31 W m−2. We can then calculate the corresponding value of λ for each of our ECS values by rearranging Equation (2):

\lambda = \frac{\Delta F_{(2\times \mathrm{CO_2})}}{\Delta T_{(2\times \mathrm{CO_2})}} \quad (5)

where ΔT(2×CO2) is the ECS value. We then use Equation (3) to calculate ΔT for each year, by calculating ΔF for that year from the corresponding concentration, C, using Equation (1).

Figure 13a shows the results of our analysis for a TCR of 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 °C, while Figure 13b shows the results for an ECS of 1, 2, 3, 4, 5, and 6 °C. One point which might initially seem surprising is that the results are already different for the historic period, 1980-2019. That is, the magnitude of "human-caused global warming" which is presumed to have already occurred over the historic period increases with the climate sensitivity which is assumed. Some readers may wonder why there should be uncertainty over this, given that we have reasonable estimates of the global warming which occurred over this period. However, as discussed in Section 4.1, the magnitude of "global warming" over a given period does not automatically tell us the magnitude of "human-caused global warming" from increasing greenhouse gases. Some (or even all) of the observed global warming may have been due to natural factors and/or other non-greenhouse gas-related factors. On the other hand, there may have been additional "global cooling" factors, either natural (e.g., decreases in solar activity) or human-caused (e.g., increases in aerosols), that led to a reduction in human-caused global warming. That is, the amount of human-caused global warming that should have already occurred might be less than or greater than the amount of observed global warming. Indeed, this is a major part of the reason why there is still such uncertainty over the actual climate sensitivity to greenhouse gases.
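For concreteness, the full conversion described above can be sketched as follows, using α = 5.35 W m−2, ΔF(2×CO2) ≈ 3.71 W m−2, and κ = 0.73 W m−2 K−1. The concentration trajectory and the pre-industrial CO2-equivalent baseline used in the example below are purely illustrative placeholders, not the BAU projection derived in Section 3.5 or the ice-core values used in our calculations.

```python
import numpy as np

# Minimal sketch of the "zero-layer" conversion from CO2-equivalent concentrations
# to projected human-caused warming, following Equations (1)-(5) in the text.
ALPHA = 5.35                   # W m-2; Myhre et al. (1998) logarithmic coefficient
F_2XCO2 = ALPHA * np.log(2.0)  # ~3.71 W m-2 for a doubling of CO2
KAPPA = 0.73                   # W m-2 K-1; ocean heat uptake efficiency (Gregory et al., 2015)
C0 = 280.0                     # ppmv; nominal pre-industrial CO2-eq baseline (placeholder)

def forcing(c_ppmv):
    """Equation (1): radiative forcing relative to the pre-industrial baseline."""
    return ALPHA * np.log(np.asarray(c_ppmv) / C0)

def warming_from_tcr(c_ppmv, tcr):
    """Equation (4): transient warming scaled directly by the TCR."""
    return tcr * np.log(np.asarray(c_ppmv) / C0) / np.log(2.0)

def warming_from_ecs(c_ppmv, ecs):
    """Equations (2), (3), and (5): transient warming for a given ECS,
    with lambda = F_2XCO2 / ECS and a constant ocean heat uptake term kappa."""
    lam = F_2XCO2 / ecs
    return forcing(c_ppmv) / (lam + KAPPA)

if __name__ == "__main__":
    # Purely illustrative CO2-equivalent trajectory (NOT the paper's BAU projection):
    years = np.arange(2019, 2101)
    c_eq = 500.0 * 1.006 ** (years - 2019)   # hypothetical ~0.6%/yr growth in CO2-eq

    for tcr in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
        dT = warming_from_tcr(c_eq, tcr)
        crossed = years[dT >= 2.0]
        when = str(crossed[0]) if crossed.size else "not before 2100"
        print(f"TCR = {tcr:.1f} C: warming in 2100 = {dT[-1]:.2f} C; 2 C first exceeded: {when}")

    for ecs in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0):
        dT = warming_from_ecs(c_eq, ecs)
        print(f"ECS = {ecs:.1f} C: transient warming in 2100 = {dT[-1]:.2f} C")
```

Note that, by construction, warming_from_tcr returns exactly the TCR value when the concentration reaches 2 × C0, while warming_from_ecs returns a transient value below the ECS, consistent with the lag argument discussed in the previous section.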

How Much Human-Caused Global Warming Should We Expect with Business-As-Usual (BAU) Climate Policies?
At any rate, for us, probably the most striking result is the sheer range of possible values by the end of our BAU projections in 2100. This is quite problematic given that, recently, international climate policies have been framed within the context of limiting the magnitude of future human-caused global warming to within a specific value. In particular, the 2015 Paris Agreement involved a voluntary international agreement for, "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C above pre-industrial levels" [19]. More recently, in 2018, the IPCC issued an intermediate Special Report entitled, "Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty." [272]. For readers who are more interested in future human-caused global warming relative to present, in the Supplementary Materials we provide an equivalent figure using 2018 greenhouse gas concentrations as the starting baseline.
We will not get into the debate here, which we referred to in the introduction, e.g., refs. [21][22][23][24][25][26], over whether those specific targets of 1.5 and 2.0 °C are useful. Although, we do note that Lang and Gregory (2019) [273] have calculated that, "3.0 °C of global warming from 2000 would increase global economic growth", and Dayaratna et al. (2020) [274] have recently argued that some recent research suggests that increasing CO2 could be a net positive, "at least through the mid-twenty-first century". See also NIPCC (2019) [17]. Rather, we will explicitly assume here that these targets of keeping human-caused global warming well below 2.0 °C and ideally below 1.5 °C are indeed worthy. With that in mind, what implications do the results in Figure 13 have for these targets?
Figure 13. Projected human-caused global warming (from CO2, CH4, and N2O greenhouse gases) up to 2100 under Business-As-Usual conditions for various estimates of (a) the Transient Climate Response and (b) Equilibrium Climate Sensitivity. For comparison, the 2.0 and 1.5 °C targets described under the Paris Agreement (2015) [19] are shown. The horizontal axes correspond to years.
If the ECS is 5 °C or higher, or the TCR is 2.5 °C or higher, then under Business-As-Usual, we are projected to have broken the 1.5 °C target by 2026-2028, and the 2 °C target by 2045-2053. On the other hand, if the ECS is 2 °C, we are not projected to break the 1.5 °C target until 2069-2082, and we are not projected to break the 2 °C target until the 22nd century. Similarly, if the TCR is 1.5 °C, then we are not projected to break the 1.5 °C target until 2065-2077, and we would probably not break the 2 °C target until the 22nd century (or 2095 at the earliest). Meanwhile, if the ECS or TCR is 1 °C or less, then we are not projected to break either of the two targets in the 21st century under BAU.
In other words, the urgency (or otherwise) of the Paris Agreement depends critically on what the actual value of the climate sensitivity is. According to the IPCC's 5th Assessment Report, the ECS is "likely" to be any value in the range 1.5-4.5 °C and the TCR is "likely" to be any value in the range 1.0-2.5 °C [33]. That is, the results of the latest IPCC Assessment Report still do not tell us whether the Paris Agreement is trying to solve a problem for the next few decades or the 22nd century.
Moreover, as can be seen from Table 5, several studies have suggested that the climate sensitivity may be either higher or lower than the IPCC's "likely" ranges. For instance, Zelinka et al. (2020) have noted that several of the latest CMIP6 Global Climate Models imply an ECS that is greater than 4.5 °C. On the other hand, Lindzen and Choi (2011) argue that the ECS is in the range 0.5-1.3 °C, with a most likely value of 0.7 °C [260], and both Monckton et al. (2015a) [216] and Bates (2016) [209] argue that the ECS is in the range 0.8-1.3 °C, with a most likely value of 1.05 °C. All of these estimates are below the IPCC's "likely" range. Meanwhile, some of us have argued in Soon et al. (2015) that the TCR is less than 0.44 °C, and that it is possible to explain all of the observed warming since at least 1881 in terms of natural climate change [152].
Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1, Figure S1: Effects on projected future greenhouse gas concentrations (from CO2, CH4 and N2O greenhouse gases) up to 2100 of using the implied projected airborne fractions of the IPCC RCP scenarios (as used for IPCC AR5) compared to using the fixed, empirically-derived estimates as in the paper; Figure S2: Effects of changing the ocean heat uptake efficiency constant, κ, on projected human-caused global warming (from CO2, CH4 and N2O greenhouse gases) up to 2100 under Business-As-Usual conditions for various estimates of Equilibrium Climate Sensitivity. For comparison, the 2.0 °C and 1.5 °C targets described under the Paris Agreement (2015) are shown; Figure S3: Effects of changing the estimated Radiative Forcing for a doubling of CO2, F(2×CO2), on projected human-caused global warming (from CO2, CH4 and N2O greenhouse gases) up to 2100 under Business-As-Usual conditions for various estimates of Equilibrium Climate Sensitivity. For comparison, the 2.0 °C and 1.5 °C targets described under the Paris Agreement (2015) are shown; Figure S4: As for Figure 13 in the main article, except only projecting future warming (relative to 2018 values). Projected human-caused global warming (from CO2, CH4 and N2O greenhouse gases) up to 2100 under Business-As-Usual conditions for various estimates of (a) Transient Climate Response and (b) Equilibrium Climate Sensitivity. The horizontal axes correspond to years.

Funding: Two of us (RC and WS) received financial support from the Center for Environmental Research and Earth Sciences (CERES), http://ceres-science.com/, while carrying out the research for this paper. The aim of CERES is to promote open-minded and independent scientific inquiry. For this reason, donors to CERES are strictly required not to attempt to influence either the research directions or the findings of CERES.
Acknowledgments: As mentioned above, our co-author, Robert "Bob" M. Carter, passed away during the preparation of an earlier version of this article. His calm, measured, yet passionate love of science and his good sense of humour were inspiring, and he is deeply missed by his friends and colleagues. We would like to thank Bob's family for encouraging us to finish the article without him. After the 2016 version was submitted for peer review, two reviewers provided several critical comments which prompted us to take a much more comprehensive approach than in the original version as well as to substantially rewrite and revise much of the paper. We would like to thank the two reviewers of that early manuscript for these comments, as well as the editor and four reviewers of this revised manuscript whose collective feedback substantially improved our analysis and manuscript.

Conflicts of Interest: The authors declare no conflict of interest.