1. Introduction
Utility customer-funded electricity efficiency programs provide a low-cost electricity resource in 42 states, with investments in 2018 totaling more than USD 5.8 billion and savings exceeding 34 terawatt hours (TWh) [
1]. Understanding the cost drivers of customer-funded energy efficiency portfolios can help facilitate regulatory oversight of these programs, help utilities better utilize energy efficiency as a resource, and inform building energy decarbonization policies.
Insight into differences between the cost of saved electricity across states and utilities can help regulators benchmark performance and set appropriate spending levels for utilities in their jurisdictions. Similarly, utilities can use this information to develop least-cost investments in efficiency. In particular, regulators and utilities can benefit from understanding how the cost of saved electricity relates to the scale of the efficiency resource portfolios. For example, increased savings levels may introduce operational efficiencies that reduce the cost of saved electricity. However, historical investments in certain efficiency measures may reduce the availability of low-cost efficiency savings and could counteract this effect. Quantification of these economies and diseconomies of scale, therefore, would be valuable to regulators and utilities estimating the remaining efficiency potential and forecasting spending.
Policymakers can leverage an understanding of drivers of the cost of saved electricity to inform long-term decarbonization strategies. First, insight into the cost of saved electricity can help grid operators assess the role of energy efficiency as a demand-side resource alongside traditional supply-side generation resources. For example, the New England Independent System Operator (ISO New England) uses the cost of saved electricity reported by efficiency program administrators (the organizations that implement energy efficiency programs funded by electricity and gas customers) as part of its efficiency forecasting methodology [
2]. Second, when the cost of saved electricity is coupled with information on the timing of efficiency savings, policymakers can better evaluate efficiency as a capacity resource that can reduce peak demand. For example, energy efficiency measures can provide significant value to the grid during peak demand periods, but their role as a peak resource depends on whether they are cost competitive with generation and other distributed energy resources [
3].
The economics of customer-funded energy efficiency programs can also inform policies that promote building electrification and demand flexibility and that reduce risks from extreme heat. Electrification efforts may exhibit economies of scale similar to those of efficiency programs, particularly for residential buildings, given that they incentivize similar measures (e.g., heat pumps) and employ similar implementation strategies (e.g., upstream incentives) [
4]. Similarly, programs that seek to mitigate the impacts of extreme heat may draw on measures generally associated with energy efficiency, such as building envelope improvements [
5]. Recent state-level efforts at managing extreme heat risk have also identified existing efficiency programs as implementation mechanisms [
6]. Finally, recent research into the technical potential for demand flexibility in U.S. buildings finds synergies in the co-deployment of efficiency and flexibility measures [
7]. Estimates of the economic potential for demand flexibility would, therefore, likely benefit from an understanding of the economics of large-scale efficiency program investments.
Several studies have documented the cost of saved electricity from energy efficiency programs. For example, research has characterized how the cost of saved electricity in programs administered by American investor-owned utilities varies geographically, temporally, and by market sector [
8,
9]. Similar work has described the cost of saved electricity for publicly owned American utilities [
10]. Internationally, researchers have examined the cost of saved electricity for both state and utility-administered energy efficiency programs in Korea [
11,
12].
Additionally, the literature includes studies that employ econometric methods to explain variation in the observed cost of saved electricity. Arimura et al., 2012 estimates the effect of past and current energy efficiency program spending on changes in electricity demand using a parametric function and non-linear model as well as utility fixed effects [
13]. Arimura et al., 2012 then predicts the lifetime savings of efficiency investments made in the study period 1992–2006 and calculates an average cost of saved electricity under a range of discount rates [
13].
Hoffman et al., 2017 considers both a linear and quadratic relationship between the program administrator levelized cost of saved electricity and program year using program administrator fixed effects for programs implemented between 2009 and 2013 [
14]. Overall, the authors find mixed results on the significance of program year for the cost of saved electricity. The study does not find statistically significant results at a 95% confidence interval when the model includes only the linear program year term. With the addition of a quadratic program year variable, the quadratic term is significant at a 95% confidence interval and the linear term is marginally insignificant (p-value of 0.059). The negative linear term and positive quadratic term suggest that the cost of saved electricity declines, reaches a minimum, and then increases. Hoffman et al., 2017 also considers models of the same structure for the cost of saved electricity for different types of programs (e.g., residential lighting) [
14]. Using data reported in Energy Information Administration (EIA) Form 861 for the period 2010–2019, Knight et al., 2022 explores economies of scale in energy efficiency programs by regressing the program administrator levelized cost of saved electricity on each program administrator’s first year savings as a percent of its retail sales in a given year [
15]. The authors find a downward linear trend, suggesting that energy efficiency programs have economies of scale. Their model, however, explains only 12% of the observed variation in the levelized costs and does not include any fixed effects.
This study continues a line of research that uses econometric techniques to estimate the cost of saved electricity in customer-funded energy efficiency programs. We build on these analytic efforts by considering economies of scale, measured by annual savings as a percent of sales, among a broader set of covariates, including construction labor costs and both time and program administrator fixed effects. As
Table 1 summarizes, no model in the literature includes all of these covariates. We address a gap in the econometric literature on the cost of saved electricity by introducing a variable for historical efficiency achievement: cumulative efficiency savings as a percent of sales. We include this variable to explore potential diseconomies of scale that result from the increasing costs of achieving remaining energy efficiency potential. Past research has conceptualized energy efficiency investments as a supply curve in which different technologies and approaches deliver different levels of savings at different costs [
16]. If historical energy efficiency efforts are focused on the less costly sections of the efficiency supply curve, the remaining potential would be weighted towards the more expensive end of the supply curve. In this case, we would expect increases in cumulative savings as a percent of sales to increase the cost of saved electricity. We incorporate this variable into our regression model and explore how it varies across states and between customer segments. We employ a weighted least squares model to account for differences in the size of the efficiency programs.
2. Materials and Methods
2.1. Energy Efficiency Program Data
We constructed a dataset of program costs, energy savings, and savings lifetimes for gas and electric energy efficiency programs administered by investor-owned utilities, publicly owned utilities, government agencies, and third parties. We primarily source these data from annual regulatory reports that utilities file with state energy regulators. In some cases, we acquired program data directly from utilities and from databases and dashboards maintained by regulatory agencies.
The format of the reported program data was typically in tables embedded in PDFs, which we transcribed into a collection template. We assigned each program to a standard program type developed by the Lawrence Berkeley National Laboratory [
17]. To prepare the data for analysis, we implemented a two-stage review process. First, we manually reviewed the transcription of the reported data. Second, we identified potential data-entry errors by flagging any program with unexpectedly low or high first-year costs of saved electricity (below USD 0.01/kWh or above USD 50/kWh). We then manually reviewed the data for any flagged program and compared it with its source. Therefore, we did not reject outliers simply because of high or low costs of saved electricity; we based our analysis on what utilities reported and conducted data-cleaning procedures to ensure that we accurately represented the reported data. We drew on more than 5500 program-years of these data to build a panel dataset of 32 program administrators in 14 states that operated electric efficiency programs between 2010 and 2018. These program administrators accounted for 55% of the savings reported by investor-owned utilities to the EIA in 2018 (EIA 2019).
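The outlier-flagging step described above can be sketched in a few lines. The function, record fields, and program figures below are illustrative assumptions, not the authors' actual collection template or pipeline.

```python
def flag_for_review(records, low=0.01, high=50.0):
    """Flag program-years whose first-year cost of saved electricity
    (USD/kWh) is implausibly low or high, for manual comparison
    against the source filing."""
    flagged = []
    for rec in records:
        if rec["first_year_kwh"] <= 0:
            # Zero or negative reported savings also warrant a manual look.
            flagged.append(rec)
            continue
        cse = rec["cost_usd"] / rec["first_year_kwh"]
        if cse < low or cse > high:
            flagged.append(rec)
    return flagged


# Illustrative records; field names and values are ours, not the source's.
programs = [
    {"name": "Res Lighting", "cost_usd": 1_000_000, "first_year_kwh": 5_000_000},   # USD 0.20/kWh
    {"name": "Custom C&I", "cost_usd": 2_000_000, "first_year_kwh": 250_000_000},   # USD 0.008/kWh
    {"name": "Behavioral", "cost_usd": 600_000, "first_year_kwh": 10_000},          # USD 60/kWh
]
suspect = flag_for_review(programs)
print([p["name"] for p in suspect])  # ['Custom C&I', 'Behavioral']
```

Only the flagged records would then be checked by hand against the regulatory filing, consistent with the review process described above.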
In our analysis, we used ex ante estimates of gross savings for each program as reported by program administrators to state regulators in annual filings. These estimates did not include free-rider or spillover effects, nor did they account for evaluation, measurement, and verification (EM&V) studies that occurred in the reporting year. For costs, we included all of the costs that an energy efficiency program administrator would incur, including costs for administration, marketing, evaluation, and customer incentives. We did not include costs incurred by participants, nor the cost of any shareholder incentive that may have resulted from program achievement. For each program administrator and year, we calculated the levelized cost of saved electricity, which amortizes the cost of an efficiency program over the lifetime of the program's savings [
18]. We define three market sectors: commercial and industrial (C&I); residential, which also includes programmatic offerings for low-income customers; and cross-cutting, which includes programs that serve multiple customer classes.
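The levelized cost of saved electricity is commonly computed by annualizing program cost with a capital recovery factor and dividing by annual savings; whether reference [18] uses exactly this form is our assumption, and the discount rate and program figures below are illustrative.

```python
def capital_recovery_factor(rate, lifetime_years):
    """Factor that spreads an up-front cost into equal annual payments
    over a lifetime at a given real discount rate."""
    if rate == 0:
        return 1.0 / lifetime_years
    return rate / (1.0 - (1.0 + rate) ** -lifetime_years)


def levelized_cost_of_saved_electricity(total_cost_usd, annual_kwh_saved,
                                        lifetime_years, discount_rate=0.06):
    """USD per kWh: total program cost amortized over the lifetime of
    the program's savings, divided by the annual savings it produces."""
    annualized = total_cost_usd * capital_recovery_factor(discount_rate,
                                                          lifetime_years)
    return annualized / annual_kwh_saved


# A hypothetical program: USD 1M spent, 4 GWh/yr saved for 10 years, 6% rate.
lcse = levelized_cost_of_saved_electricity(1_000_000, 4_000_000, 10)
print(round(lcse, 3))  # 0.034
```

Longer savings lifetimes lower the levelized cost for a given spend, which is why the savings-weighted portfolio lifetime appears later as a regression covariate.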
In
Table 2 and
Table 3, we show the composition of the panel dataset in terms of states and market sectors. Across the nine years of program performance in our dataset, three states (California, Massachusetts, and Pennsylvania) account for 55% of spending and nearly 50% of savings. Combined, their 2900 program records represent 52% of all of the programs in the dataset. C&I programs account for about half of savings and spending, with 40% resulting from residential programs and the remainder from cross-cutting programs. The five most common program types in our dataset, in terms of overall gross electricity savings, were residential lighting, codes and standards, general prescriptive C&I, custom C&I, and residential behavioral programs. These program categorizations are consistent with Hoffman et al., 2013 [
17]. Together, these programs account for 48% of the reported electricity savings in our dataset.
2.2. Annual and Cumulative Savings as a Percent of Sales
We measured program scale relative to a utility's sales base using two metrics: annual savings as a percent of annual retail sales and cumulative savings as a percent of annual retail sales. We calculated annual savings as a percent of sales by dividing annual electric efficiency savings by annual retail sales as reported in EIA Form 861. Where a program administrator's impacts were broader than one utility jurisdiction, we aggregated the savings and sales from the applicable utilities. To characterize the composition of each program administrator's sales base, we calculated the share of retail sales from residential and from C&I customers in each year. We also measured program scale by customer class by calculating annual residential and C&I savings as a percent of retail sales for each program administrator.
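The first metric, including the aggregation across utility jurisdictions, can be sketched as follows; the data structure and figures are hypothetical, not drawn from EIA Form 861.

```python
def annual_savings_pct_of_sales(jurisdictions):
    """Aggregate efficiency savings and retail sales across the utility
    jurisdictions a program administrator serves, then express annual
    savings as a percent of annual retail sales."""
    total_savings = sum(j["annual_savings_mwh"] for j in jurisdictions)
    total_sales = sum(j["retail_sales_mwh"] for j in jurisdictions)
    return 100.0 * total_savings / total_sales


# A program administrator spanning two utility jurisdictions (illustrative).
territory = [
    {"retail_sales_mwh": 8_000_000, "annual_savings_mwh": 80_000},
    {"retail_sales_mwh": 2_000_000, "annual_savings_mwh": 40_000},
]
print(annual_savings_pct_of_sales(territory))  # 1.2
```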
We used cumulative savings as a percent of sales to measure the scale of savings from past investments in energy efficiency that have not reached the end of their useful life. We used reported savings lifetimes to estimate when program savings expire. Program administrators reported lifetimes for half of the programs in our dataset. For the remaining programs, we estimated lifetimes based on the lifetimes reported for the same program type. For a given year, we aggregated the savings from prior years that have not expired, regardless of when they began, and divided them by retail sales in that year. For example, a program administrator's cumulative savings in 2018 would reflect all of the savings from program investments made between 2010 and 2017 that are still active. If a program administrator reported in 2012 that savings from an efficiency program in that year would last 7 years, we would include those savings in the cumulative savings for each year from 2013 to 2018. Cumulative savings as a percent of sales, therefore, can exceed annual savings as a percent of sales, particularly in later years. We did not include savings from before 2010 in this calculation. While some savings from years prior to 2010 likely continue through our analysis period, we lack the data to quantify their contribution. Program administrator fixed effects in the regression model address this issue because cumulative savings from before 2010 are time-invariant characteristics of administrators.
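The aggregation above can be sketched as follows. The mapping of reported lifetimes to calendar years reflects our reading of the 2012/7-year example in the text, and the record fields and figures are hypothetical.

```python
def cumulative_savings_pct(programs, year, retail_sales_mwh):
    """Cumulative savings as a percent of sales: sum savings from
    prior-year programs whose measures have not expired, divided by the
    given year's retail sales. A program first reported in year y with
    an L-year lifetime persists through year y + L - 1, so it counts
    toward the cumulative total for years y + 1 through y + L - 1."""
    active = sum(
        p["annual_savings_mwh"]
        for p in programs
        if p["year"] < year <= p["year"] + p["lifetime_years"] - 1
    )
    return 100.0 * active / retail_sales_mwh


# Hypothetical program history, measured against 10 TWh of retail sales:
history = [
    {"year": 2010, "annual_savings_mwh": 30_000, "lifetime_years": 5},  # persists through 2014
    {"year": 2012, "annual_savings_mwh": 50_000, "lifetime_years": 7},  # persists through 2018
]
print(cumulative_savings_pct(history, 2014, 10_000_000))  # 0.8 (both programs active)
print(cumulative_savings_pct(history, 2018, 10_000_000))  # 0.5 (only the 2012 program)
```

As the two calls show, the metric grows as savings accumulate and falls as older measures expire, so it can move independently of annual savings as a percent of sales.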
2.3. Regression Analysis
2.3.1. Base Model
We quantified drivers of the cost of saved electricity through a regression analysis and built on past econometric efforts by accounting for the impact of cumulative energy efficiency savings. Given that our sample includes both small and large investor-owned utilities (the largest annual electricity sales for a single utility in our dataset is 26 times the smallest), we employed a weighted least squares model with annual retail sales as weights. We specify the regression model in Equation (1). The subscripts on the covariates indicate the dimensions in which the covariates may vary: time (t) and program administrator (PA). The main covariates of interest are annual savings as a percent of sales (S) and cumulative savings as a percent of sales (C). Furthermore, we used two types of fixed effects in our model. Calendar year fixed effects (μ_t) account for factors that vary in time but are common to each administrator, such as macro-economic conditions or the impact of federal appliance efficiency standards. Program administrator fixed effects (η_PA) account for factors that are static in time but vary with each administrator, such as differences in program administration, service territory, or regulatory environment. We also addressed how state-level differences and changes in labor costs may affect the cost of saved electricity with an annual Employment Cost Index (γ_t) for construction workers published by the Bureau of Labor Statistics [19]. We also included the savings-weighted lifetime (λ_PA,t) of each program administrator's efficiency portfolio in each year of the study period, as it is an input into the formula for the levelized cost of saved electricity [18]. Finally, we included an intercept (α) to capture baseline costs of energy efficiency program administration.
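Equation (1) itself is not reproduced in this excerpt. Given the covariates defined above, a specification consistent with the text would take a form such as the following; the slope coefficients β and the error term are our labels, not necessarily the authors' notation:

```latex
\mathrm{LCSE}_{\mathrm{PA},t}
  = \alpha
  + \beta_{1}\, S_{\mathrm{PA},t}
  + \beta_{2}\, C_{\mathrm{PA},t}
  + \beta_{3}\, \gamma_{t}
  + \beta_{4}\, \lambda_{\mathrm{PA},t}
  + \mu_{t}
  + \eta_{\mathrm{PA}}
  + \varepsilon_{\mathrm{PA},t}
```

Under weighted least squares, each observation would be weighted by the program administrator's annual retail sales in that year, as described above.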
2.3.2. Additional Covariates
We explored the impact of several covariates in addition to those in the base model. First, we considered whether differences in the customer composition of a utility jurisdiction affect the cost of saved electricity. Given that C&I programs are generally less costly than residential programs, a larger C&I customer base could provide a reliable source of low-cost efficiency savings [
20]. We tested this hypothesis by including the share of total sales from C&I customers as a covariate. To account for differences in the cost of saved electricity between customer segments, we also included the share of each program administrator’s annual savings from C&I, residential, and low-income programs.
2.3.3. Alternative Dependent Variable
Next, we tested sector-specific versions of our dependent variable (e.g., the levelized cost of saved electricity for C&I programs only) and adjusted the program scale covariates accordingly. A regression of the levelized cost of saved electricity for C&I programs only, then, would be on annual and cumulative C&I savings as a percent of sales. Since EIA Form 861 does not separate residential sales into low-income and non-low-income, as many program administrators do for their programs, we included low-income programs in the residential levelized cost of saved electricity.
2.3.4. Alternative Sample
As a robustness check against compositional effects, we apply the base model to two alternative samples: a panel that covers a shorter time period (2011–2018) but includes six more program administrators (38 total) than our base sample, and a non-panel dataset that covers 2010–2018 and includes 117 program administrators.
4. Discussion
Our regression analysis generally aligns with the results of similar work and provides new evidence for other drivers of the cost of saved electricity. First, we found strong economies of scale: as annual savings as a percent of sales increases, the cost of saved electricity decreases at the portfolio level and for residential and C&I programs separately, all else being equal. In contrast, Knight et al., 2022 concludes that annual savings as a percent of sales and the cost of saved electricity display only a weak relationship [
Additionally, we found some evidence for diseconomies of scale in customer-funded efficiency programs through our estimate of cumulative efficiency savings, though the result was not robust to our alternative model specifications and warrants additional research. Together with employment costs and program administrator and year fixed effects, these measures of program scale explain the majority of the observed variation in the cost of saved electricity (R² = 0.637), much higher than the 12% explained by the model in Knight et al. [
15]. Future research could build on this work by exploring the impact of various policy variables on the cost of saved electricity. For example, higher avoided costs would allow more costly programs to pass cost-effectiveness tests, thereby putting upward pressure on the cost of saved electricity. Conversely, higher discount rates in screening tests could select for less costly programs.
Our regression results demonstrate the importance of program scale, as measured by annual savings as a percent of sales, on the cost of saved electricity. Energy efficiency forecasting and potential study methodologies, however, do not always reflect this relationship. For example, in its energy efficiency forecasts, ISO New England uses a deemed cost escalation factor that reflects measure penetration and technology costs. The escalation factor is graduated, beginning at 0% in the first year of the forecast and increasing by 1.25% each year thereafter [
2,
21]. These factors linearly increase the cost of saved electricity in each year of the forecast, regardless of any economies of scale. Since ISO New England forecasts savings as a function of budgets and the cost of saved electricity, these escalation factors put downward pressure on savings forecasts. The results of our regression analysis, however, suggest a more nuanced approach to estimating the cost of saved electricity is warranted. In the context of ISO New England’s forecasts, annual and cumulative savings as a percent of sales, program administrator effects, and labor costs may all contribute to more accurate cost projections.
Simplistic assumptions about the cost of saved electricity also affect energy efficiency potential studies. As part of its 2020 integrated resource plan, Duke Energy commissioned a study of its energy efficiency savings potential. The study included an ‘Enhanced Scenario’ in which incentives double but energy savings increase by only 6.3% on average relative to a base case scenario [
22]. The economies of scale that we found imply that such significant increases in incentive levels are not necessary for higher levels of savings. Similarly, a 2020 efficiency potential study for Evergy assumes that the first year cost of saved electricity declines as savings levels increase for most commercial programs but not for residential programs [
23]. Our results, however, suggest that economies of scale apply to both residential and commercial programs. In general, models of energy efficiency savings potential should account for economies of scale. However, we are less confident in applying the diseconomies of scale from our regressions to efficiency potential models since this result was weaker than that for the economies of scale.
These examples of energy efficiency forecasts and potential studies are illustrative but not necessarily representative. Still, they show how empirical estimates of the cost of saved electricity are relevant for utilities, regulators, and policymakers. Additional research into energy efficiency forecasting and potential study methodologies would help characterize industry trends in projecting the cost of saved electricity. Programmatic efforts for demand flexibility, building electrification, and extreme heat mitigation may also find these results relevant given potential similarities in implementation to efficiency programs.
The differences we observed in state-level cumulative savings may reflect the existence of an efficiency achievement gap. Importantly, energy efficiency savings do occur outside of customer-funded programs, including through building codes, appliance and equipment standards, and private investment. Higher rates of investment in energy service company (ESCO) projects may compensate for lower customer-funded savings levels. For example, New York is in the bottom half of cumulative savings achievement in our sample but has relatively high rates of ESCO investment [
24]. In contrast, Texas has the lowest cumulative savings in our sample and only moderate levels of ESCO investment, which suggests that the state may have significant remaining efficiency potential [
24]. The economies of scale in our regression model suggest that Texas, or a similar state with lower levels of overall energy efficiency achievement, could increase efficiency savings goals without significant increases, and possibly with decreases, in the cost of saved electricity.
Our analysis produces quantitative results that are specific to energy efficiency programs administered by investor-owned utilities in the U.S. However, some qualitative results are relevant to policymakers in other countries that have utility- or state-led energy efficiency programs [
11,
12,
25]. In particular, our evidence for economies of scale may be informative to policy conversations on levels of investment and be relevant to jurisdictions developing energy efficiency forecasts or potential estimates. While our dataset includes program administrators that account for more than half of energy efficiency savings reported by investor-owned utilities to the EIA in 2018, it does not include administrators from every state. Additionally, our dataset covers the years 2010–2018 and does not include data from efficiency programs operating in the wake of the COVID-19 pandemic, during which lockdowns shifted electricity usage towards residential buildings [
26] and the energy efficiency sector experienced significant job losses [
27]. Analysis of an expanded panel dataset that includes data for the years 2019–2022 would shed light on whether the effects we observed for 2010–2018 have continued.