The proposed robust counterpart DTL DEA model (18) was implemented for China’s 31 provincial power grids in Python, producing the following results: (1) overall and period-specific efficiency scores under various lag lengths. In addition, two scenarios were implemented for comparative analysis: (2) a baseline efficiency evaluation without time-lag effects, and (3) an efficiency assessment under deterministic data conditions.
4.3.1. Efficiency Patterns Under Varying Lag Periods
To assess the influence of time lag effects on power grid investment performance, we examined two distinct lag specifications. The first excluded time lags (LP = 0), positing that grid investments exhibit no temporal latency, so that annual output depends exclusively on contemporaneous investment. The second explicitly incorporated investment time lags, acknowledging that investment in a given year affects output both in that year and in subsequent periods. We calculated six lag-period scenarios (LP = 1–6) and a no-time-lag scenario (LP = 0), obtaining seven sets of overall efficiency values, as presented in Figure 5.
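Since model (18) itself is not reproduced in this section, the following minimal sketch only illustrates the lag mechanism described above: under a lag period LP, the effective input attributed to an evaluation year draws on the investments of the preceding LP years as well. The equal-weight allocation and the `effective_input` helper are illustrative assumptions, not the paper's actual allocation scheme.

```python
# Illustrative sketch only: the uniform weighting below is a simplifying
# assumption, not the allocation used in model (18).

def effective_input(investments, t, lp):
    """Spread investment over lp + 1 years with equal (hypothetical) weights.

    investments: dict mapping year -> investment amount
    t: evaluation year
    lp: lag period (LP = 0 means only year t's investment counts)
    """
    years = [t - k for k in range(lp + 1)]
    weight = 1.0 / (lp + 1)  # uniform weights -- an illustrative assumption
    return sum(weight * investments.get(y, 0.0) for y in years)

inv = {2018: 100.0, 2019: 120.0, 2020: 140.0}
print(effective_input(inv, 2020, 0))  # LP = 0: only 2020's investment, 140.0
print(effective_input(inv, 2020, 2))  # LP = 2: mean of 2018-2020, 120.0
```

With LP = 0 the evaluation reduces to the static case; larger LP values let earlier investments contribute to current output, which is the intuition behind the rising efficiency scores reported above.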
The results demonstrate that the overall efficiency scores of all DMUs remain below 1: no DMU attains production frontier optimality, indicating systemic inefficiency manifested through both input excesses and suboptimal output generation. Efficiency disparities across DMUs are substantial, with scores ranging from 0.3942 (Xinjiang, LP = 0) to 0.8188 (Tibet, LP = 6).
Figure 6 depicts the average overall efficiency scores of all DMUs and their growth rates across lag periods, from no time lag (LP = 0) to a 6-year lag (LP = 6). Comparison of the seven lag-period scenarios reveals a consistent upward trend in efficiency, with the average rising from 0.6404 (LP = 0) to 0.7174 (LP = 6) as the lag period extends, a 12.0% overall improvement. Shandong and Xinjiang exhibited the most substantial gains, increasing by 30.2% and 24.7%, respectively, while Yunnan and Hainan improved the least, by merely 0.9% and 1.3%. The analysis also reveals a statistically significant divergence between dynamic models that incorporate time-lag adjustments and traditional static models: each region's efficiency value with time-lag effects consistently exceeds its value without them. Overall, the investment efficiency of China's provincial power grid enterprises exhibits a pronounced positive time-lag effect, and as the lag period (LP) extends, the efficiency values of most DMUs progressively improve.
Turning to the growth rate of the average overall efficiency, incorporating time lags yields its peak improvement at the first step (a 3.7798% gain from LP = 0 to LP = 1), followed by progressively diminishing returns (1.1378% at LP = 6). For 3-year to 5-year lags, the gains stabilize within 1.5–1.6% (1.5863%, 1.5887%, and 1.5006%, respectively). The static model (LP = 0) systematically underestimates efficiency by 7.42% (LP = 3), 9.12% (LP = 4), and 10.76% (LP = 5) relative to its dynamic counterparts. This pattern strongly suggests that 3–5 years constitutes the optimal latency window for power grid investment returns.
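The headline arithmetic above can be reproduced directly from the reported averages. Only the LP = 0 and LP = 6 means appear explicitly in the text, so the intermediate dynamic means below are back-calculated from the stated underestimation percentages for illustration.

```python
# Reported average overall efficiencies at the two endpoints.
e_lp0, e_lp6 = 0.6404, 0.7174

# Overall improvement from the static to the 6-year-lag model.
overall_gain = (e_lp6 - e_lp0) / e_lp0 * 100
print(f"overall improvement: {overall_gain:.1f}%")  # ~12.0%

# Dynamic means implied by the reported static-model underestimation.
for lp, under_pct in [(3, 7.42), (4, 9.12), (5, 10.76)]:
    implied_mean = e_lp0 * (1 + under_pct / 100)
    print(f"LP = {lp}: implied dynamic mean ~ {implied_mean:.4f}")
```

The implied means (about 0.6879, 0.6988, and 0.7093) are consistent with the 1.5–1.6% step-wise growth rates quoted for the 3-to-5-year lag range, which supports the internal coherence of the reported figures.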
Two key conclusions can be drawn from these results. Firstly, excluding time lags systematically underestimates power grid investment efficiency, so lag-period adjustment is essential for assessment accuracy because it mitigates significant biases. This systematic underestimation highlights the limitations of static DEA models in reflecting the true efficiency of power grid investments, particularly in systems where inputs yield outputs over extended periods. Without such consideration, policymakers and grid operators may misallocate resources and miss opportunities to optimize long-term investment strategies. The inclusion of time-lag effects in efficiency assessments therefore provides a more comprehensive and reliable framework for decision-making in the power grid sector. Secondly, the stabilized growth-rate gains during 3–5-year lags point to an archetypal 3–5-year investment lag, comprising power grid infrastructure development periods (2–3 years) and operational impact realization periods (1–2 years). When the lag period exceeds 5 years (LP > 5), the growth rate falls off markedly, indicating that extending the lag period further adds no practical value to performance evaluation.
4.3.2. Temporal and Spatial Efficiency Dynamics
Subsequently, we conducted a systematic temporal and spatial analysis of power grid investment performance based on three lag-period specifications: (1) a three-period lag (LP = 3), measuring overall and period efficiency scores for 2018–2023; (2) a four-period lag (LP = 4), deriving efficiency scores for 2019–2023; and (3) a five-period lag (LP = 5), calculating efficiency scores for 2020–2023.
Table 3, Table 4 and Table 5 present detailed results for both overall and period efficiency values under the three lag periods.
- (1)
Temporal efficiency dynamics analysis
The analysis of individual period efficiency scores reveals a consistent pattern: all efficiency values fall below 1, ranging from 0.4251 to 0.8203. Furthermore, in accordance with Definitions 3 and 4, the average efficiency score of each period closely corresponds to the overall efficiency, confirming that each period contributes comparably to the overall score, in line with our initial hypothesis. This consistency provides a useful validation check on the calculations.
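The consistency property just described can be checked mechanically: under equal period weights, the overall score should equal the mean of the period scores. The sketch below illustrates such a check; the period values used are hypothetical, not taken from Tables 3–5, and the equal-weight assumption is only one reading of Definitions 3 and 4.

```python
# A simple consistency check in the spirit of the text: under equal period
# weights, the overall score equals the mean of the period scores.
# The period values below are hypothetical, not taken from Tables 3-5.

def check_equal_weight_consistency(period_scores, overall, tol=1e-6):
    """True if the overall score matches the mean of period scores."""
    mean_score = sum(period_scores) / len(period_scores)
    return abs(mean_score - overall) <= tol

periods = [0.68, 0.70, 0.71, 0.70]        # hypothetical period scores
overall = sum(periods) / len(periods)      # equal-weight aggregate
print(check_equal_weight_consistency(periods, overall))  # True
```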
The temporal efficiency dynamics of China’s provincial power grid investment from 2018 to 2023 reveal a clear upward trend with notable fluctuations. The efficiency scores started at 0.6823 in 2018, gradually improving to 0.6979 in 2020, peaking at 0.7022 in 2021, and slightly declining to 0.6982 in 2023. The initial years (2019–2021) witnessed moderate growth primarily driven by early-stage regulatory reforms and policy harmonization. This period further benefited from accelerated technological advancements, enhanced renewable energy grid integration, and the comprehensive implementation of transmission and distribution tariff mechanisms. The decline in 2022–2023 was attributed to economic slowdowns, policy adjustments, and environmental constraints, though overall efficiency remained robust compared to the baseline. This temporal analysis highlights the dynamic nature of grid investment efficiency, emphasizing the importance of continuous policy refinement and technological innovation to sustain high performance and support long-term development goals.
- (2)
Spatial Efficiency Dynamics Analysis
We analyzed regional disparities based on the average overall efficiency across LP = 3, 4, and 5. The results reveal significant regional disparities in the investment efficiency of provincial power grids across China. Tibet was the top-performing region with an efficiency score of 0.8048, in sharp contrast to Xinjiang, which ranked lowest at 0.4520. The efficiency distribution can be stratified into four distinct tiers. By comparing the input–output data of high-efficiency provinces (e.g., Tibet, Shanghai, Qinghai) with those of low-efficiency regions, this study focuses on input redundancy and output shortfalls in underperforming areas.
The first tier (average overall efficiency < 0.6) includes five inland provinces: Xinjiang, Henan, Anhui, Hunan, and Hubei. Xinjiang exhibited the lowest efficiency. Although one of its output indicators is nearly 50% higher than that of Qinghai, and other outputs are similar, its inputs and undesirable outputs are approximately twice those of Qinghai. This indicates that Qinghai achieves higher efficiency through a “low-input, low-output” paradigm, a pattern also observed in Tibet. When compared with Shanghai, Xinjiang shows comparable output levels, yet its input indicators, such as power capacity, are nine times higher, revealing significant input redundancy. This is largely attributable to Xinjiang’s geographical conditions, including its vast territory and sparse population, which lead to considerably higher grid construction and operational costs. Among other provinces, Anhui exhibits a relatively low level of renewable energy absorption, resulting in underperformance in output. In contrast, Henan, Hubei, and Hunan have made substantial investments, particularly in transformer capacity, which exceeds that of Shanghai by more than sevenfold. However, the resulting outputs in these provinces remain largely comparable to those of Shanghai and Qinghai, indicating disproportionately low returns relative to the scale of investment. The findings suggest that inefficient regions generally suffer from excessive resource input, resulting in resource misallocation and over-investment.
The second tier (average overall efficiency between 0.6 and 0.7) comprises Jiangxi, Shandong, Guangxi, Shanxi, and Liaoning provinces. The efficiency scores of these provinces are generally lower, mainly attributable to their inadequate renewable energy accommodation, which falls significantly behind that of high-performing regions such as Qinghai and Guangdong. Moreover, these provinces exhibit notably higher line loss rates compared to the latter. To illustrate, provinces like Shandong and Liaoning have invested substantially in grid infrastructure, including transmission line length and transformer capacity, yet their electricity sales volume remains comparatively low, resulting in underutilization of resources. Most of these less efficient provinces are traditional fossil fuel bases, including Shandong, Shanxi, and Liaoning. Their energy systems remain heavily dependent on coal power, characterized by limited flexibility, thereby constraining overall operational efficiency.
The third tier (average overall efficiency between 0.7 and 0.75) encompasses the largest number of provinces (14), representing the intermediate efficiency level of China’s power grid system. These provinces exhibit certain deficiencies in input–output allocation, resulting in moderate efficiency performance. For instance, Jilin and Heilongjiang experience line loss rates that exceed the provincial average by more than 20%. Meanwhile, Beijing and Tianjin demonstrate notably inadequate clean energy integration capacity, with renewable energy absorption levels reaching only 20.41% and 27.88% of the average, respectively. Conversely, Jiangsu has made substantial investments in grid infrastructure, such as transmission line length and transformer capacity, yet its electricity sales volume remains relatively limited, leading to suboptimal resource utilization.
The fourth tier (average overall efficiency > 0.75) includes seven top-performing provinces, which are Sichuan, Hainan, Guangdong, Qinghai, Ningxia, Shanghai, and Tibet. Among these, Sichuan and Guangdong exhibit exceptional performance in renewable energy integration, with annual renewable energy accommodation exceeding 210,000 GWh, significantly surpassing the national average of 66,500 GWh. Nevertheless, Sichuan is constrained by a relatively high line loss rate, and Guangdong has invested more substantially in both physical infrastructures (length of transmission lines and transformer capacity) and financial resources (annual investment). In contrast, Ningxia and Hainan have achieved high efficiency scores through relatively low investment coupled with comparatively high output. To further enhance efficiency, Ningxia could focus on expanding its electricity sales volume, while Hainan has potential for improvement in both sales volume and renewable energy accommodation. Despite exhibiting the highest line loss rate and the lowest electricity sales volume, Tibet achieved the highest efficiency score of 0.8048. This may be explained by its relatively high efficiency within a low-input–low-output operational model.
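The four-tier stratification above is a simple threshold binning, sketched below. The tier boundaries come from the section itself; the two province scores are the extremes reported in the text (Tibet 0.8048, Xinjiang 0.4520), and the handling of the exact 0.75 boundary is an assumption, since the text does not specify it.

```python
# Four-tier stratification by average overall efficiency, with thresholds
# taken from the section; boundary handling at exactly 0.75 is assumed.

def tier(score):
    """Assign a province's average overall efficiency to a tier (1-4)."""
    if score < 0.60:
        return 1  # lowest tier (e.g., Xinjiang at 0.4520)
    if score < 0.70:
        return 2
    if score <= 0.75:
        return 3  # the largest tier, with 14 provinces
    return 4      # top tier (e.g., Tibet at 0.8048)

print(tier(0.4520))  # Xinjiang -> 1
print(tier(0.8048))  # Tibet    -> 4
```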
Furthermore, analysis across varying time lags reveals distinct provincial trajectories. Anhui and Shandong demonstrate significant efficiency gains of 9.3% and 11.3%, respectively, potentially attributable to recent large-scale infrastructure investments. Beijing and Jiangsu maintain remarkable stability in their efficiency metrics, reflecting the maturity of their established grid systems, while Xinjiang exhibits the most marginal improvement (4.52%), suggesting exceptionally long investment return periods in this region. These results indicate that the lag period for power grid investment efficiency requires region-specific calibration, accounting for disparities in regional resource endowments and development constraints. For instance, when evaluating Xinjiang, the lag period can be moderately extended because its investment payback period is longer than those of other provinces.
Based on these findings, we conclude that the efficiency performance of provincial power grids in China exhibits notable heterogeneity. Accordingly, future policy-making should be tailored to local conditions. High-efficiency provinces should focus on addressing specific deficiencies, such as reducing line losses and optimizing returns on investment. Meanwhile, other provinces can draw lessons from various high-efficiency models to seek breakthroughs in expanding renewable energy integration and enhancing operational leanness.
4.3.4. Comparative Analysis of Deterministic Scenarios
In this subsection, we implemented the model under deterministic conditions, specifically with the robust coefficients and , while considering time-lag effects for LP = 3, 4, 5, and without time-lag effects. The results are illustrated in Figure 8.
The observed patterns in efficiency dynamics align with those under uncertainty, with the average overall efficiency increasing from 0.8765 to 0.9450 across the three lag scenarios. This analysis underscores that the deterministic models, when time-lag effects are excluded, systematically underestimate efficiency. Specifically, the exclusion of time-lag considerations leads to an underestimation of efficiency by an average of 8.9%, highlighting the critical role of accounting for the inherent time-lagged nature of grid investments. These findings further reinforce the necessity of incorporating time-lag effects in efficiency assessments to accurately reflect the true performance of power grid investments.
The empirical results present compelling evidence of the critical importance of accounting for parameter uncertainty in efficiency evaluation. As illustrated in Figure 6, the conventional deterministic approach that ignores input–output uncertainty systematically overestimates efficiency scores across all temporal scenarios. Particularly striking is that mean efficiency values under the various time-lag conditions consistently exceed 0.9, which clearly demonstrates the methodological bias inherent in traditional deterministic DEA models.
This overestimation phenomenon is further substantiated by the comparative analysis in Table 6, which reveals that all DMUs exhibit inflated efficiency values in deterministic settings, with relative overestimation ranging from 12.7% to 28.3% compared with robust measurements. The number of DMUs reaching the efficiency frontier also varies dramatically: six frontier DMUs when LP = 0, versus 16, 17, and 18 when LP = 3, 4, and 5, respectively. These findings carry significant theoretical and practical implications. The increase of up to 200% in frontier-reaching DMUs suggests that traditional models fail to capture approximately 60–75% of achievable efficiency potential because of their inability to account for temporal dynamics and data uncertainty. The consistently exaggerated efficiency scores (mean > 0.9) in deterministic models indicate that they may produce mathematically feasible but economically meaningless results, potentially leading to overoptimistic investment planning and inaccurate regulatory benchmarking.
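The frontier-count comparison can be quantified directly from the figures reported for Table 6. The counts below are those stated in the text; the percentage arithmetic is a straightforward restatement, included to make the magnitude of the shift explicit.

```python
# Frontier DMU counts reported for the deterministic model (Table 6),
# relative to the LP = 0 baseline of six frontier DMUs.
baseline = 6
frontier = {0: 6, 3: 16, 4: 17, 5: 18}

for lp in (3, 4, 5):
    increase = (frontier[lp] - baseline) / baseline * 100
    print(f"LP = {lp}: +{increase:.0f}% frontier DMUs vs LP = 0")
```

The relative increases run from roughly 167% (LP = 3) up to 200% (LP = 5), which is the basis for the "up to 200%" figure cited above.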
4.3.5. Managerial Insights
The empirical findings of this study provide actionable managerial insights for policymakers, grid operators, and investors in the power sector, addressing critical factors that influence grid investment efficiency.
(1) It is imperative to consider time-lagged effects when evaluating the efficiency of large-scale infrastructure investments such as power grids. Ignoring these effects can lead to underestimating true efficiency, since investment benefits typically materialize through multi-period accumulation, thereby distorting policy decisions and resource allocation. For accurate efficiency evaluation of grid investment, a dynamic DEA model incorporating time-lag effects with LP = 3/4/5 parameters is recommended to capture temporal evolution. In region-specific contexts, evidence-based adjustments to the lag period may further enhance evaluation robustness.
(2) The investment strategy for the power grid should be deeply integrated with the regional resources situation and development status. Policymakers need to coordinate regional differences and strategic priorities, and develop differentiated investment strategies and resource allocation plans. For instance, coastal regions can leverage their geographical advantages to foster technological innovation; provinces abundant in clean energy resources should accelerate their green transition while enhancing grid integration and absorption capacity; and in regions constrained by resources, technology, or geography, precise resource allocation becomes essential to optimize investment returns, thus providing a replicable framework for similar regions.
(3) Uncertainty in input–output parameters must be incorporated into the efficiency assessment process. Evaluation models typically rely on predefined parameters and assumptions; in reality, however, inherent uncertainties abound, such as measurement errors and data gaps during collection. Neglecting parameter uncertainty leads to overoptimistic efficiency assessments, and this bias can severely misguide decision-making. On the one hand, it may overestimate project benefits or underestimate implementation risks, fostering unsustainable investment planning; on the other, operational decisions based on distorted assessments may fail when confronted with real-world complexity and volatility, potentially triggering cascading systemic risks. Accounting for uncertainty in efficiency evaluations therefore enhances the robustness and resilience of assessment outcomes, laying a solid foundation for more adaptive and forward-looking policies and investment strategies.
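To make the overestimation mechanism concrete, the sketch below shows how interval uncertainty on inputs and outputs widens the range of a simple output/input efficiency ratio. This is purely illustrative and is not model (18): the ±5% half-width, the nominal values, and the `efficiency_range` helper are all hypothetical.

```python
# Illustrative only (not model (18)): interval uncertainty on both the
# output and the input widens the range of a simple efficiency ratio.
# The +/-5% half-width and the nominal values are hypothetical.

def efficiency_range(output, inp, rel_err=0.05):
    """Worst/best-case ratio when both sides vary within +/-rel_err."""
    worst = output * (1 - rel_err) / (inp * (1 + rel_err))
    best = output * (1 + rel_err) / (inp * (1 - rel_err))
    return worst, best

nominal = 0.80  # hypothetical deterministic score (output 0.8, input 1.0)
worst, best = efficiency_range(0.80, 1.00)
print(f"nominal {nominal:.3f}, robust worst case {worst:.3f}")
```

A deterministic evaluation reports only the nominal ratio, while a robust evaluation guards against the worst case within the uncertainty set, which is why robust scores in this study sit systematically below their deterministic counterparts.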