1. Introduction
Global climate change represents one of the most pressing challenges of our time, driven primarily by increasing concentrations of greenhouse gases in the atmosphere. To understand and predict the evolution of Earth’s climate system, scientists employ a hierarchy of models ranging from computationally intensive General Circulation Models (GCMs) [1,2] to simpler, yet conceptually powerful, reduced-order models [3]. Among the latter, Energy Balance Models (EBMs) have proven particularly valuable for capturing the fundamental thermodynamics of global climate through a parsimonious representation of radiative forcing and thermal response [4,5,6].
The classical EBM, pioneered by Budyko [7] and Sellers [8], describes the change in global mean surface temperature as a balance between incoming solar radiation and outgoing terrestrial radiation. While these models have provided foundational insights into climate sensitivity and equilibrium states, they are fundamentally limited by their assumption of instantaneous thermal adjustment. This simplification fails to account for the long-term, delayed effects of processes such as heat absorption and diffusion by the deep ocean, which acts as a massive thermal reservoir with multi-decadal memory [9,10,11]. This “memory effect” is a critical component of the real climate system’s transient response but remains inadequately captured by integer-order differential equations, which imply exponential relaxation with a single characteristic timescale [12]. Recent studies have highlighted this challenge explicitly, noting that “the Mori–Kubo overdamped generalized Langevin equation suggests the form of a relatively simple stochastic EBM with memory” and that such approaches naturally connect to fractional energy balance formulations [13].
To overcome this limitation, we turn to fractional calculus, which generalizes differentiation and integration to non-integer orders [14,15,16,17]. Unlike integer-order derivatives, fractional derivatives are inherently non-local operators, meaning their value at any time depends on the entire history of the function rather than just its immediate neighborhood [18,19,20,21,22]. This mathematical property makes fractional calculus a natural framework for modeling systems with long-range temporal dependencies, including viscoelastic materials, anomalous diffusion processes, and—as increasingly recognized—climate dynamics [23,24]. The application of fractional operators to energy balance modeling has gained traction in recent years, with Lovejoy and collaborators pioneering the Fractional Energy Balance Equation (FEBE) framework [25,26,27]. Their work demonstrates that fractional-order models can capture the power-law relaxation observed in climate responses, offering a continuum of timescales between the limiting cases of perfect memory (α → 0) and no memory (α = 1).
Concurrently, fractional calculus and advanced numerical methods have shown remarkable utility across diverse scientific domains, reinforcing their applicability to complex dynamical systems. In computational mathematics, recent developments in numerical schemes for generalized tempered-type integrodifferential equations provide robust tools for handling non-local operators similar to those appearing in fractional climate models [28]. In power systems engineering, second-order normal form methods have been successfully applied to capture nonlinear dynamics in differential–algebraic equation systems, demonstrating the importance of higher-order approximations for accurate transient response prediction [29]. Even in machine learning, optimization strategies for energy-efficient transformer inference in time series classification highlight the growing emphasis on parsimonious yet accurate modeling in data-intensive applications [30]. These interdisciplinary advances underscore the value of sophisticated mathematical frameworks—like fractional calculus—for systems where memory, non-locality, and computational efficiency are paramount.
In this work, we develop and validate a fractional Energy Balance Model (fEBM) by replacing the classical first-order time derivative in the standard EBM with a Caputo fractional derivative of order α (0 < α ≤ 1). While the conceptual foundation of fractional EBMs has been established previously [25,26,27], our contribution provides a comprehensive mathematical framework that integrates rigorous theoretical analysis, robust numerical implementation, and thorough empirical validation using the most recent climate data. Specifically, our study makes several key advances: First, we establish complete mathematical proofs for the existence, uniqueness, and asymptotic stability of solutions to the fEBM, ensuring a solid theoretical foundation. Second, we implement and analyze the Adams–Bashforth–Moulton predictor–corrector method for fractional differential equations with explicit convergence and stability guarantees. Third, we calibrate and validate the model against updated observational datasets (NASA GISTEMP v4, IPCC AR6 forcing) and perform out-of-sample forecasting for the period 2011–2023. Fourth, we employ Monte Carlo methods and sensitivity analysis to quantify parameter uncertainties—a cornerstone of responsible climate forecasting. Finally, we derive and interpret the optimized fractional order α as a quantitative measure of the climate system’s aggregate memory, linking it to physical processes like deep-ocean heat uptake.
The remainder of this paper is organized as follows.
Section 2 describes the methodological framework, including the numerical scheme, data sources, parameter optimization strategy, and uncertainty quantification.
Section 3 presents the mathematical formulation of the fractional Energy Balance Model, together with rigorous proofs of existence, uniqueness, and stability.
Section 4 details the numerical implementation and validation against observational data, including residual diagnostics and robustness analysis.
Section 5 discusses the results, emphasizing physical interpretation, methodological implications, and relevance for climate sensitivity and projections. Finally, Section 6 summarizes the main findings and outlines directions for future research, including extensions to spatially explicit models and enhanced feedback representations within the fractional framework (Figure 1).
2. Methodology
This section details the comprehensive methodology employed to develop, solve, and validate the fractional Energy Balance Model (fEBM). The process encompasses the numerical scheme for solving the fractional differential equation, the data sources used for calibration and validation, the parameter optimization framework, and the robust uncertainty quantification techniques applied to ensure the reliability and physical relevance of the results.
2.1. Numerical Scheme: Adams–Bashforth–Moulton Predictor–Corrector Method
Given the non-local nature of the Caputo fractional derivative in the fEBM (Equation (1)), an analytical solution is intractable for arbitrary forcing functions F(t). Consequently, we employ a numerical method to obtain an approximate solution. The chosen scheme is the Adams–Bashforth–Moulton (ABM) predictor–corrector method, a widely adopted and stable algorithm for solving fractional differential equations [31].
The fEBM can be expressed in the general form of a fractional initial value problem:

  D^α T(t) = f(t, T(t)),   T(0) = T₀,

where D^α denotes the Caputo fractional derivative of order α and, for our specific model, f(t, T) = [F(t) − λT]/C. The initial condition T(0) = T₀ represents the temperature anomaly at time t = 0, which we set to match the observed temperature anomaly in 1880 from the GISTEMP dataset.
The numerical solution is computed on a uniform temporal grid tₙ = nh (n = 0, 1, …, N), with step size h = 1 year to match the annual resolution of the observational datasets. The ABM method provides an approximation Tₙ ≈ T(tₙ) through a two-step process:
2.1.1. Predictor Step (Fractional Adams–Bashforth)
An initial approximation T^P_{n+1} is calculated using the explicit formula:

  T^P_{n+1} = T₀ + (h^α / Γ(α + 1)) Σ_{j=0}^{n} b_{j,n+1} f(t_j, T_j),

where the weights are given by:

  b_{j,n+1} = (n + 1 − j)^α − (n − j)^α.
2.1.2. Corrector Step (Fractional Adams–Moulton)
The prediction is refined using the implicit formula:

  T_{n+1} = T₀ + (h^α / Γ(α + 2)) [ f(t_{n+1}, T^P_{n+1}) + Σ_{j=0}^{n} a_{j,n+1} f(t_j, T_j) ],

where the weights are:

  a_{0,n+1} = n^{α+1} − (n − α)(n + 1)^α,
  a_{j,n+1} = (n − j + 2)^{α+1} + (n − j)^{α+1} − 2(n − j + 1)^{α+1},   1 ≤ j ≤ n.

This method is chosen for its proven stability and convergence properties. For α = 1 (the classical integer-order case), the method reduces to the standard Adams–Bashforth–Moulton scheme with convergence rate O(h²). For fractional orders 0 < α < 1, the method maintains a convergence rate of O(h^{1+α}) [31]. The numerical stability and convergence analysis of this scheme for our fEBM is presented in Section 3.3 (Theorem 2).
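As a concrete illustration, the predictor–corrector recursion above can be sketched in a few lines of Python. The weights follow the standard Diethelm–Ford–Freed formulation; the linear forcing ramp and the parameter values at the bottom are illustrative placeholders, not the calibrated ones.

```python
import numpy as np
from math import gamma

def abm_fractional(f, alpha, T0, t_end, h):
    """Adams-Bashforth-Moulton predictor-corrector for the Caputo FDE
    D^alpha T = f(t, T), T(0) = T0, on a uniform grid of step h."""
    N = int(round(t_end / h))
    t = np.arange(N + 1) * h
    T = np.zeros(N + 1)
    T[0] = T0
    F = np.zeros(N + 1)              # stored f(t_j, T_j) history
    F[0] = f(t[0], T[0])
    c_p = h**alpha / gamma(alpha + 1)   # predictor prefactor
    c_c = h**alpha / gamma(alpha + 2)   # corrector prefactor
    for n in range(N):
        j = np.arange(n + 1)
        # predictor weights b_{j,n+1} = (n+1-j)^a - (n-j)^a
        b = (n + 1 - j)**alpha - (n - j)**alpha
        Tp = T0 + c_p * np.dot(b, F[:n + 1])
        # corrector weights a_{j,n+1}
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        if n >= 1:
            jj = j[1:]
            a[1:] = ((n - jj + 2)**(alpha + 1) + (n - jj)**(alpha + 1)
                     - 2 * (n - jj + 1)**(alpha + 1))
        T[n + 1] = T0 + c_c * (np.dot(a, F[:n + 1]) + f(t[n + 1], Tp))
        F[n + 1] = f(t[n + 1], T[n + 1])
    return t, T

# fEBM right-hand side: D^alpha T = (F(t) - lam * T) / C
# (illustrative parameter values, not the calibrated ones)
lam, C = 1.2, 8.0
forcing = lambda t: 0.03 * t                 # toy linear forcing ramp
rhs = lambda t, T: (forcing(t) - lam * T) / C
t, T = abm_fractional(rhs, alpha=0.8, T0=0.0, t_end=50.0, h=1.0)
```

For α = 1 the weights collapse to the rectangle (predictor) and trapezoidal (corrector) rules, so the sketch can be sanity-checked against the classical exponential solution.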
The proposed fEBM is constructed through a direct reformulation of the classical linear EBM by replacing the integer-order time derivative with a Caputo fractional derivative while preserving the original physical balance structure between radiative forcing and climate feedback. This formulation maintains the physical interpretability of the classical model while introducing the essential memory effects through the fractional order parameter α.
For clarity and reference throughout the subsequent analysis,
Table 1 summarizes all variables and parameters of the fEBM, including their physical interpretations and typical ranges.
2.2. Data Sources
Model calibration and validation require high-quality, publicly available historical climate data with well-characterized uncertainties. We utilize two primary datasets that represent the current state-of-the-art in climate data compilation:
2.2.1. Global Mean Surface Temperature (GMST)
The observed temperature data are taken from the NASA Goddard Institute for Space Studies (GISS) Surface Temperature Analysis (GISTEMP version 4) [32]. This dataset provides monthly and annual global surface temperature anomalies relative to a 1951–1980 baseline from 1880 to the present, with an estimated uncertainty of a few hundredths of a kelvin for the global annual mean in recent decades. The dataset combines land surface air temperature data from meteorological stations with sea surface temperature data from ships, buoys, and satellites, employing advanced homogenization and interpolation techniques to ensure spatial and temporal consistency.
2.2.2. Radiative Forcing
The time-evolving external radiative forcing F(t) is sourced from the historical dataset compiled for the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6) [33]. This comprehensive time series (1750–present) incorporates contributions from well-mixed greenhouse gases (CO₂, CH₄, N₂O, and halocarbons), aerosols (sulfate, nitrate, organic carbon, black carbon, and mineral dust), ozone (tropospheric and stratospheric), land use change (albedo effects), and solar irradiance variations. The forcing estimates are provided annually with associated uncertainty ranges that account for both measurement errors and modeling uncertainties in the radiative transfer calculations.
Both datasets are publicly accessible and have been widely validated in peer-reviewed literature, making them appropriate benchmarks for climate model evaluation. For the period 1880–2023, the data are available at annual resolution, which aligns with our numerical time step h = 1 year and allows for direct comparison between model outputs and observations.
2.3. Parameter Optimization and Uncertainty Quantification
The model parameters—the fractional order α, the climate feedback parameter λ, and the effective heat capacity C—are determined through an inverse modeling approach. The objective is to find the parameter set that minimizes the discrepancy between the model output T_mod(tᵢ) and the observed data T_obs(tᵢ).
The primary discrepancy measure is the Root Mean Square Error (RMSE), which serves as our main objective function due to its differentiability and sensitivity to large errors:

  RMSE = √( (1/N) Σ_{i=1}^{N} [T_obs(tᵢ) − T_mod(tᵢ)]² ).

For comprehensive model evaluation following best practices in climate model validation, we also compute additional performance metrics:

  MAE = (1/N) Σ_{i=1}^{N} |T_obs(tᵢ) − T_mod(tᵢ)|,
  Bias = (1/N) Σ_{i=1}^{N} [T_mod(tᵢ) − T_obs(tᵢ)],
  NSE = 1 − Σ_{i=1}^{N} [T_obs(tᵢ) − T_mod(tᵢ)]² / Σ_{i=1}^{N} [T_obs(tᵢ) − T̄_obs]²,

where T̄_obs is the mean of observed temperatures. The Nash–Sutcliffe Efficiency (NSE) ranges from −∞ to 1, with values closer to 1 indicating better model performance, while R² represents the proportion of variance explained by the model. For linear models with an intercept evaluated against the observational mean, NSE and R² become mathematically equivalent. Nevertheless, both metrics are reported here to maintain consistency with common practice in climate model evaluation and to facilitate direct comparison with existing studies that may report either metric.
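The evaluation metrics above are straightforward to compute from the two annual series; a minimal sketch (reporting NSE and R² identically, consistent with the equivalence noted above):

```python
import numpy as np

def performance_metrics(T_obs, T_mod):
    """RMSE, MAE, bias, NSE and R^2 between observed and modelled series.
    R^2 is computed about the observational mean, so it coincides with NSE."""
    T_obs = np.asarray(T_obs, dtype=float)
    T_mod = np.asarray(T_mod, dtype=float)
    resid = T_obs - T_mod
    rmse = np.sqrt(np.mean(resid**2))
    mae = np.mean(np.abs(resid))
    bias = np.mean(T_mod - T_obs)          # positive bias = model too warm
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((T_obs - T_obs.mean())**2)
    nse = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "Bias": bias, "NSE": nse, "R2": nse}
```

A perfect prediction yields RMSE = MAE = Bias = 0 and NSE = R² = 1, while a model no better than the observational mean yields NSE = 0.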
Due to the potential non-convexity of the optimization landscape with multiple local minima, we employ a Genetic Algorithm (GA) to find the global minimum of the RMSE. The GA efficiently explores the bounded parameter space through operations of selection, crossover, and mutation, avoiding entrapment in local minima. The algorithm uses a population size of 100, crossover probability of 0.8, mutation probability of 0.1, and runs for 200 generations—parameters determined through preliminary convergence tests to ensure robust optimization without excessive computational cost.
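A compact sketch of such a real-coded GA is given below. The operator choices (tournament selection, uniform crossover, Gaussian mutation at 5% of each parameter range, elitist replacement) are illustrative assumptions, since the text specifies only the population size, crossover and mutation probabilities, and generation count.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(obj, bounds, pop_size=100, p_cross=0.8, p_mut=0.1, n_gen=200):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation clipped to the bounds,
    and elitist replacement."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(x) for x in pop])
    for _ in range(n_gen):
        # binary tournament selection of parents
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] < fit[j], i, j)]
        # uniform crossover between consecutive parent pairs
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                mask = rng.random(dim) < 0.5
                children[k, mask], children[k + 1, mask] = \
                    parents[k + 1, mask], parents[k, mask]
        # Gaussian mutation, scaled to 5% of each parameter range
        mut = rng.random((pop_size, dim)) < p_mut
        children = np.clip(
            children + mut * rng.normal(0.0, 0.05 * (hi - lo), (pop_size, dim)),
            lo, hi)
        child_fit = np.array([obj(x) for x in children])
        # elitism: keep the best pop_size of parents + children
        merged = np.vstack([pop, children])
        merged_fit = np.concatenate([fit, child_fit])
        keep = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[keep], merged_fit[keep]
    return pop[np.argmin(fit)], float(fit.min())
```

In the actual calibration, `obj` would be the RMSE between the ABM-integrated fEBM trajectory and the observed GISTEMP series over 1880–2010.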
To quantify the uncertainty in the optimized parameters—a cornerstone of responsible forecasting—we perform a Monte Carlo uncertainty analysis. We generate an ensemble of synthetic realizations of the observed temperature series by adding Gaussian noise ε ~ N(0, σ²) to T_obs, where σ is the published measurement uncertainty of the GISTEMP dataset [32]. The GA optimization is repeated for each perturbed dataset. The resulting distribution of optimal parameter values allows us to report our final parameters with 95% confidence intervals, rigorously capturing the uncertainty propagated from observational errors.
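The perturbation loop can be summarized as follows. Here `calibrate` stands in for the full GA optimization (it returns a parameter vector for a given temperature series), and the default `sigma` is a placeholder for the published GISTEMP uncertainty rather than its actual value.

```python
import numpy as np

def monte_carlo_ci(calibrate, T_obs, sigma=0.05, n_draws=200, seed=0):
    """Propagate observational uncertainty into parameter uncertainty:
    re-run the calibration on noise-perturbed copies of the observed
    series and report 95% percentile intervals for each parameter."""
    rng = np.random.default_rng(seed)
    T_obs = np.asarray(T_obs, dtype=float)
    draws = np.array([
        calibrate(T_obs + rng.normal(0.0, sigma, len(T_obs)))
        for _ in range(n_draws)
    ])                                   # shape: (n_draws, n_params)
    lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)
    return draws.mean(axis=0), lower, upper
```

The width of the resulting intervals directly measures how well each parameter (α, λ, C) is constrained by the observational record.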
2.4. Model Validation and Forecasting Skill
The model’s performance is rigorously evaluated through a structured two-phase protocol designed to prevent overfitting and comprehensively assess both explanatory power and predictive capability:
2.4.1. Calibration Phase (1880–2010)
The model is calibrated against historical data from the period 1880–2010. The optimized parameter set obtained from this calibration period is fixed and used for all subsequent analyses, ensuring that no information from the validation period influences parameter estimation.
2.4.2. Validation/Forecasting Phase (2011–2023)
The calibrated model is run forward from 2011 to 2023 without any further parameter adjustment. The performance in this out-of-sample period serves as the primary test of the model’s predictive skill and generalization ability. Beyond the primary RMSE metric, we evaluate multiple complementary measures including MAE, Bias, NSE, and R² (defined in Section 2.3) to provide a comprehensive assessment of forecast accuracy. The numerical values of these metrics for both the fractional and classical models are reported and comparatively analyzed in Section 5.
2.4.3. Residual Analysis
To diagnose potential model deficiencies and validate statistical assumptions, we conduct a thorough residual analysis following calibration. This includes: (i) residuals-versus-time plots to detect temporal patterns or heteroscedasticity; (ii) histograms of residuals to assess normality assumptions; and (iii) autocorrelation functions (ACFs) of residuals to identify unaccounted temporal dependencies. These diagnostics are presented in Section 4.3. In particular, the absence of significant residual autocorrelation would indicate that the fractional memory term effectively captures long-range dependencies in the temperature response that are missed by classical integer-order formulations.
2.4.4. Comparison Framework
The performance of the fractional EBM (fEBM) is systematically compared against the classical integer-order EBM (α = 1) using identical calibration and validation periods, data sources, and numerical settings. This direct comparison isolates the effect of introducing fractional memory while controlling for all other modeling choices. The classical EBM serves as a baseline, with improvements in the fEBM quantified through relative reduction in error metrics and information criteria (AIC; see Section 5 for details).
2.4.5. Addressing Methodological Skepticism
Recognizing that fractional calculus approaches in climate modeling may face skepticism regarding physical interpretability and parameter identifiability, our validation protocol explicitly addresses these concerns. The physical meaningfulness of the optimized fractional order α is assessed through its consistency with known climate timescales (e.g., deep ocean heat uptake). Parameter identifiability is verified through the sensitivity analysis in Section 5.3, and forecast uncertainty is rigorously quantified via the Monte Carlo approach described in Section 2.3. This comprehensive validation strategy ensures that the fEBM’s advantages are demonstrated through robust, reproducible statistical evidence rather than methodological novelty alone.
5. Results and Discussion
5.1. Optimal Parameters and Their Physical Interpretation
The optimal parameter set identified through the calibration procedure (the fractional order α, the feedback parameter λ in W m⁻² K⁻¹, and the effective heat capacity C in W yr m⁻² K⁻¹; see Section 4.2) carries specific physical meanings that illuminate the memory characteristics and feedback structure of the climate system as represented by the fEBM.
5.1.1. The Fractional Order as a Memory Metric
The optimized value of α quantifies the deviation from exponential (memoryless) relaxation toward a slower, power-law decay. In terms of characteristic timescales, this corresponds to an effective continuum of relaxation times rather than a single dominant timescale. For comparison, Lovejoy et al. [25] reported a sub-unity fractional order for the global temperature response in a purely scaling-based fractional model, while Procyk et al. [27] obtained a comparable estimate when including a radiative-feedback term. Our estimate lies within the range spanned by these studies and reflects a balance between the fast land–atmosphere response and the slow, diffusive heat uptake of the deep ocean [9]. The narrow uncertainty interval obtained from the Monte Carlo analysis indicates that the memory strength is well-constrained by the observational record.
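To visualize what a sub-unity order means for relaxation, one can compare the classical exponential decay against the Mittag-Leffler relaxation characteristic of Caputo-type models. The sketch below uses nondimensional time and an assumed α = 0.8 for illustration, not the calibrated value; the truncated power series is adequate only for the moderate arguments used here.

```python
import numpy as np
from math import gamma

def mittag_leffler(alpha, z, kmax=80):
    """Truncated power series of the one-parameter Mittag-Leffler
    function E_alpha(z); adequate for moderate |z| (here |z| <= 5)."""
    return sum(z**k / gamma(alpha * k + 1) for k in range(kmax))

# Relaxation of a unit temperature perturbation in nondimensional time:
# classical exponential (alpha = 1) vs. Mittag-Leffler (fractional).
alpha = 0.8                               # illustrative memory exponent
t = np.linspace(0.0, 5.0, 51)
classical = np.exp(-t)
fractional = np.array([mittag_leffler(alpha, -tau**alpha) for tau in t])
```

For large times the Mittag-Leffler relaxation decays like a power law (roughly t^(−α) up to a constant), so the fractional curve sits well above the exponential at the end of the window: anomalies persist far longer than any single-timescale model would predict.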
5.1.2. Climate Feedback Parameter
The optimal feedback strength λ aligns with canonical estimates from energy balance and general circulation models, which typically place λ between 1.0 and 1.5 W m⁻² K⁻¹ for the modern climate [4]. This value encapsulates the net effect of various fast feedbacks (water vapor, lapse rate, surface albedo) but, in the fEBM framework, does not include the slower feedbacks associated with ice-sheet dynamics or biogeochemical cycles—those are effectively absorbed into the memory term through α. The close agreement with classical estimates confirms that the fractional reformulation preserves the basic radiative balance while redistributing the temporal response.
5.1.3. Effective Heat Capacity C
The inferred effective heat capacity C (in W yr m⁻² K⁻¹) is substantially larger than the atmospheric heat capacity but smaller than the often-cited value for a well-mixed ocean layer. This intermediate value emerges because the fractional derivative already accounts for part of the system’s thermal inertia through its non-local kernel; C thus represents the “instantaneous” capacity, while the long-tail storage is captured by α. In a classical EBM, a single C must embody both effects, leading to values that depend strongly on the chosen forcing scenario [12]. The fEBM decouples these aspects, yielding a more stable and physically interpretable C.
5.1.4. Consistency with Observed Climate Variability
Together, the three parameters describe a system that relaxes from perturbations with a power-law tail, proportional to t^(−α) at long times, rather than an exponential decay. This slower relaxation is consistent with the observed persistence of temperature anomalies following major volcanic eruptions [9] and with the multi-decadal memory seen in ocean heat-content records. The parameter combination also yields an equilibrium climate sensitivity (ECS) that falls within the likely range of the IPCC AR6 assessment [33]. The key advance is that the fEBM produces this ECS while simultaneously reproducing the transient response on annual-to-centennial timescales through the memory parameter α, whereas classical EBMs often require separate treatments of fast and slow components.
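The ECS value follows from the steady state of the fEBM: for a constant equilibrium temperature the Caputo derivative vanishes, so the memory term drops out of the long-run balance and the sensitivity is set by the feedback parameter alone. A short derivation, writing F₂ₓ for the forcing from a CO₂ doubling (commonly taken as roughly 3.7 W m⁻²):

```latex
\[
  C \, {}^{C}\!D_t^{\alpha} T_{\mathrm{eq}}
  \;=\; F_{2\times} - \lambda\, T_{\mathrm{eq}} \;=\; 0
  \qquad\Longrightarrow\qquad
  \mathrm{ECS} \;=\; T_{\mathrm{eq}} \;=\; \frac{F_{2\times}}{\lambda}.
\]
```

This makes explicit why the fractional order α affects the transient trajectory but not the equilibrium sensitivity itself.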
The physical coherence of the optimized parameters reinforces the fEBM as a structurally sound, parsimonious representation of global climate dynamics, bridging the gap between simple exponential models and computationally expensive Earth-system models.
5.2. Addressing Methodological Skepticism
The introduction of fractional calculus into climate modeling may raise legitimate skepticism regarding physical interpretability and parameter identifiability. We address these concerns pointwise.
First, the fractional order α serves as an effective memory parameter that integrates multiple slow climate processes (deep-ocean diffusion, cryospheric adjustments) without requiring explicit resolution of each. This parsimonious representation is analogous to effective parameters widely used in continuum mechanics.
Second, the risk of overfitting is mitigated by (i) tight uncertainty intervals from the Monte Carlo analysis, (ii) physical consistency of λ and C with independent estimates, and (iii) superior out-of-sample validation (47% RMSE reduction).
Third, compared to multi-box alternatives, the fractional approach provides a continuous relaxation spectrum through a single parameter, avoiding arbitrary discretization while maintaining mathematical tractability. In this sense, the fractional formulation may be viewed as the continuous limit of increasingly fine-grained multi-box models, retaining physical interpretability while reducing parametric arbitrariness.
Ultimately, the fEBM does not replace process-detailed models but offers a mathematically rigorous, empirically validated tool for exploring the implications of climate memory in long-term projections.
5.3. Implications for Climate Sensitivity and Projections
The memory-aware formulation of the fractional Energy Balance Model (fEBM) carries specific consequences for estimating climate sensitivity and interpreting projected warming pathways, extending beyond mere curve-fitting improvement.
5.3.1. Memory-Adjusted Transient Climate Response
While the equilibrium climate sensitivity (ECS) depends primarily on the feedback parameter λ, the transient climate response (TCR) becomes explicitly memory-dependent. The fractional order α < 1 implies that the system approaches equilibrium more slowly than the exponential relaxation predicted by the classical EBM, effectively stretching the warming trajectory over time. This distinction is particularly relevant on policy-relevant horizons (50–100 years), where the committed warming from past emissions can be underestimated by memoryless reduced-order models.
5.3.2. Warming Trajectories Under Standard Forcing Pathways
Under smoothly increasing forcing trajectories—such as standard IPCC AR6 scenario pathways—the fEBM is expected to exhibit a slightly delayed initial response due to the integrative effect of fractional memory, followed by a more persistent “tail” after forcing stabilizes. This slow relaxation reflects the continuous spectrum of adjustment timescales encoded by α and is qualitatively consistent with the prolonged ocean heat uptake seen in comprehensive Earth-system models. Although quantitative scenario projections fall outside the scope of the present validation study, the qualitative behavior suggests that fractional-order EBMs can serve as useful intermediate tools for exploring scenario uncertainty and response inertia.
5.3.3. Practical Recommendations for Climate Assessment
Two practical uses of the fEBM framework follow directly from these findings. First, the fractional order α may be reported alongside traditional climate sensitivity metrics (ECS, TCR) as a concise diagnostic of aggregate climate-system memory and inertia. Second, simple fractional models such as the fEBM can be used as interpretable emulators for complex models, helping attribute differences in projected warming to alternative assumptions about memory structure and relaxation spectra. These applications leverage the mathematical rigor established in Section 3.2, Section 3.3 and Section 3.4 while addressing practical needs in climate-risk assessment.
Ultimately, the fEBM does not replace process-detailed models; rather, it enriches the reduced-order modeling toolbox by providing an empirically grounded and mathematically sound mechanism to incorporate climate memory into conceptual analyses and long-term projection workflows.
6. Conclusions and Future Work
This study introduced and rigorously analyzed a fractional Energy Balance Model (fEBM) as a parsimonious extension of the classical EBM, aimed at capturing the long-range memory inherent in the climate system. By replacing the integer-order time derivative with a Caputo fractional derivative, the proposed framework preserves the fundamental radiative balance structure while generalizing the temporal response through a single, physically interpretable memory parameter.
From a mathematical perspective, the fEBM was shown to be well posed, numerically stable, and convergent under a fractional Adams–Bashforth–Moulton predictor–corrector scheme. These properties provide a solid theoretical foundation for its numerical implementation and ensure that the improved empirical performance is not an artifact of numerical instability. The convergence and stability guarantees further support the reproducibility and robustness of the proposed approach.
Empirically, calibration against historical global mean surface temperature anomalies demonstrated that the fractional formulation yields a substantial improvement over the classical EBM, both in-sample and out-of-sample. The optimal fractional order consistently emerged as a robust estimate, indicating that the climate system exhibits a persistent, non-Markovian memory extending beyond a single characteristic timescale. Importantly, the associated parameters governing radiative feedback and effective heat capacity remained consistent with independent physical estimates, confirming that the fractional extension enhances temporal realism without distorting the underlying energy balance.
Beyond numerical accuracy, the principal contribution of the fEBM lies in its conceptual clarity. The fractional order provides a compact descriptor of aggregate climate memory, effectively integrating fast atmospheric processes and slow oceanic heat uptake within a unified mathematical framework. This allows the model to reconcile equilibrium climate sensitivity with transient climate response behavior in a manner that classical single-timescale EBMs cannot achieve without additional structural complexity.
The present work also delineates the scope and limitations of the proposed approach. The fEBM is not intended to replace process-based Earth system models, nor does it explicitly resolve spatial heterogeneity or individual feedback mechanisms. Rather, it serves as an intermediate-complexity tool that bridges simple conceptual models and high-dimensional simulations, offering insight into how memory effects shape long-term climate dynamics.
Several avenues for future research naturally follow from this framework. The fractional formulation can be extended to spatially resolved energy balance models, enabling the investigation of regional climate memory and teleconnection effects. Additional slow feedbacks, such as cryospheric or biogeochemical processes, could be incorporated explicitly and examined in relation to the effective memory parameter. Finally, coupling the fEBM with standard emissions or forcing scenarios would allow systematic exploration of how fractional memory influences projected warming trajectories and committed climate change.
In summary, the fractional Energy Balance Model provides a mathematically rigorous, physically interpretable, and empirically validated representation of climate memory. By quantifying long-range temporal dependence through a single parameter, it enriches the modeling toolbox available for climate sensitivity assessment and long-term projection studies, while maintaining transparency and computational efficiency.