Article

A Pessimistic Two-Stage Network DEA Model with Interval Data and Endogenous Weight Restrictions

Department of Industrial Engineering and Management, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2026, 14(5), 917; https://doi.org/10.3390/math14050917
Submission received: 8 February 2026 / Revised: 1 March 2026 / Accepted: 6 March 2026 / Published: 8 March 2026
(This article belongs to the Special Issue New Advances of Optimization and Data Envelopment Analysis)

Abstract

This paper develops a pessimistic two-stage network data envelopment analysis (DEA) model that integrates interval-valued data and endogenous weight restrictions within a unified linear programming framework. The proposed approach explicitly captures internal network structures while addressing bounded data uncertainty through an interval-to-deterministic transformation that preserves linearity and avoids probabilistic assumptions. Robustness is interpreted in the pessimistic interval DEA sense, where efficiency is evaluated under worst-case realizations of observed bounds rather than through explicit uncertainty-set optimization. To mitigate weight degeneracy and enhance discrimination power, data-driven proportional weight restrictions are introduced; these endogenous bounds are constructed solely from observed data and regularize the multiplier space without relying on subjective preferences or tuning parameters, while maintaining scale invariance and the nonparametric nature of DEA. The model admits equivalent multiplier and envelopment formulations and enables meaningful decomposition of overall efficiency into stage-specific components. Fundamental theoretical properties—including feasibility, boundedness, monotonicity, efficiency decomposition, and special case consistency—are rigorously established. An empirical application to OECD macroeconomic data, accompanied by sensitivity evaluation, demonstrates the stability and discriminatory capability of the proposed framework under bounded variability. Computational analysis confirms that the model retains linear programming structure and exhibits linear growth in problem size with respect to the number of decision-making units, thereby preserving the scalability characteristics of classical two-stage network DEA formulations. The proposed framework provides a theoretically grounded and computationally tractable approach for network efficiency analysis under bounded interval uncertainty.

1. Introduction

Data Envelopment Analysis (DEA) is a nonparametric optimization-based methodology for evaluating the relative efficiency of decision-making units (DMUs) that transform multiple inputs into multiple outputs. Since the seminal CCR model introduced by Charnes, Cooper, and Rhodes, DEA has become a standard tool in operations research and applied mathematics due to its minimal structural assumptions and strong axiomatic foundations [1].
Despite its popularity, classical DEA models rely on three assumptions that often limit empirical validity: precise data, unrestricted weight flexibility, and black-box production processes. Unrestricted multipliers may yield extreme or zero weights, allowing DMUs to appear efficient by effectively ignoring certain inputs or outputs. To address this issue, weight restriction techniques such as assurance region constraints, cone-ratio restrictions, and common-weight approaches have been proposed to regularize multiplier selection while preserving DEA’s nonparametric nature [2,3,4].
Another fundamental limitation of classical DEA concerns data uncertainty. In practice, performance data are frequently subject to measurement errors, estimation noise, or reporting imprecision. Interval DEA models extend the classical framework by allowing inputs and outputs to vary within bounded intervals, thereby capturing uncertainty without imposing distributional assumptions [5,6,7]. Alternatively, robust optimization-based DEA models explicitly construct uncertainty sets and evaluate efficiency under worst-case realizations, offering strong robustness guarantees at the expense of increased computational complexity or nonlinear formulations [8,9].
In parallel, network DEA models have been developed to relax the black-box assumption by explicitly modeling internal production structures. Two-stage and multi-stage network DEA frameworks incorporate intermediate measures linking consecutive stages, enabling efficiency decomposition and providing deeper insights into internal performance mechanisms [10,11,12]. Comprehensive surveys document the rapid expansion of network DEA in applications such as banking, supply chains, innovation systems, energy, and healthcare [13].
However, these research streams largely evolve independently. Existing weight-restricted DEA models typically rely on exogenously specified bounds or decision-maker preferences, interval DEA models often neglect internal network structures, and robust optimization-based DEA formulations rarely permit efficiency decomposition across stages [7,9,14]. Consequently, there remains a methodological gap for a unified framework that simultaneously accommodates network structures, interval-valued data, and data-driven weight regularization, while maintaining linearity and interpretability.
To address this gap, this study proposes a pessimistic two-stage network DEA model that integrates interval data with endogenous weight restrictions derived directly from observed data bounds. The proposed framework preserves linear programming structure, avoids subjective tuning parameters, and allows efficiency decomposition across stages. Fundamental theoretical properties, including feasibility, boundedness, monotonicity, and special case consistency, are formally established, and an empirical application demonstrates improved stability and discrimination under bounded data variability.

2. Preliminaries and Notation

Consider a set of $n$ decision-making units (DMUs), indexed by $j = 1, \ldots, n$.
Each DMU operates as a two-stage network production system.
  • Stage 1 (Upstream Stage)
  • Inputs: $x_{ij} \in [x_{ij}^L,\, x_{ij}^U]$, $i = 1, \ldots, m$, where $x_{ij}^L$ and $x_{ij}^U$ denote the lower and upper bounds of input $i$ for DMU $j$, respectively.
  • Intermediate outputs: $z_{kj} \in [z_{kj}^L,\, z_{kj}^U]$, $k = 1, \ldots, p$.
  • Stage 2 (Downstream Stage)
  • Inputs: the intermediate measures $z_{kj}$ generated in Stage 1 serve as the inputs to Stage 2.
  • Final outputs: $y_{rj} \in [y_{rj}^L,\, y_{rj}^U]$, $r = 1, \ldots, s$.
All interval bounds are assumed to be non-negative. Furthermore, for each DMU, at least one input and one output are assumed to be strictly positive to ensure meaningful efficiency evaluation.
The two-stage serial structure adopted in this study represents the canonical network DEA architecture introduced in foundational works [10,11,12]. While more complex network topologies exist, the two-stage structure provides a fundamental and analytically tractable representation of internal production mechanisms. The methodological contribution of the present study lies not in increasing network depth, but in integrating bounded interval uncertainty and endogenous proportional regularization within a linear network DEA framework. The proposed formulation can be extended to multi-stage or parallel network structures without altering its theoretical foundation.

3. Pessimistic Two-Stage Network DEA Model

3.1. Interval-to-Deterministic Transformation

To obtain a conservative (pessimistic) efficiency measure under interval-valued data, inputs are evaluated at their upper bounds and outputs at their lower bounds. Specifically, for each decision-making unit (DMU), the upper bounds of input intervals and the lower bounds of output intervals are adopted in the efficiency evaluation. This transformation yields a deterministic linear programming formulation that guarantees conservative efficiency estimates while avoiding probabilistic assumptions on data uncertainty.
By evaluating inputs at their worst-case realizations and outputs at their least favorable realizations, the proposed approach ensures that the resulting efficiency scores do not overestimate performance in the presence of bounded data uncertainty. Moreover, this interval-to-deterministic transformation preserves the linear structure of classical DEA models, thereby maintaining computational tractability and consistency with standard network DEA formulations.
In this study, robustness is interpreted in the sense of interval data envelopment analysis. Efficiency is evaluated under pessimistic realizations of bounded data uncertainty by assigning upper bounds to inputs and lower bounds to outputs. This interpretation differs from robust optimization-based DEA models, which explicitly construct uncertainty sets and optimize performance against worst-case scenarios within those sets. The proposed interval-based approach preserves linearity, scale invariance, and interpretability while providing conservative efficiency assessments in the presence of interval-valued data.
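The interval-to-deterministic step described above can be sketched in a few lines of NumPy. The arrays below are hypothetical illustrative data, not values from the empirical application; the only logic shown is the pessimistic selection of bounds.

```python
import numpy as np

# Hypothetical interval data: axis 0 = DMUs, axis 1 = variables,
# axis 2 = (lower bound, upper bound).
X = np.array([[[4.0, 5.0], [2.0, 3.0]],    # inputs of 3 DMUs
              [[6.0, 7.0], [1.0, 2.0]],
              [[3.0, 4.0], [2.5, 3.5]]])
Y = np.array([[[8.0, 9.0]],                # final outputs of 3 DMUs
              [[7.0, 8.5]],
              [[6.0, 7.0]]])

def pessimistic_realization(X_int, Y_int):
    """Pessimistic principle: inputs at their upper bounds,
    outputs at their lower bounds."""
    return X_int[..., 1], Y_int[..., 0]

X_pess, Y_pess = pessimistic_realization(X, Y)
```

Because the transformation only selects one endpoint per interval, the resulting data are deterministic and the downstream optimization remains an ordinary linear program.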

3.2. Multiplier Formulation

Let $v_i$, $w_k$, and $u_r$ denote the weights of inputs, intermediate measures, and outputs, respectively. For the DMU under evaluation $o$, the proposed model is
$\max \sum_{r=1}^{s} u_r y_{ro}^L$
subject to
$\sum_{k=1}^{p} w_k z_{ko}^U = 1,$
$\sum_{r=1}^{s} u_r y_{rj}^L - \sum_{k=1}^{p} w_k z_{kj}^U \le 0, \quad \forall j,$
$\sum_{k=1}^{p} w_k z_{kj}^L - \sum_{i=1}^{m} v_i x_{ij}^U \le 0, \quad \forall j,$
$v_i, w_k, u_r \ge 0.$
This formulation evaluates the performance of a two-stage network DMU under interval-valued data by adopting a robust (pessimistic) multiplier framework. The objective function maximizes the weighted sum of the lower bounds of final outputs for the evaluated DMU, thereby ensuring that efficiency is assessed under worst-case output realizations and avoiding overestimation due to data uncertainty. The normalization constraint fixes the weighted sum of the upper bounds of intermediate measures to unity, eliminating scale ambiguity and anchoring the efficiency evaluation at the interface between the upstream and downstream stages of the network. The downstream feasibility constraints require that, for every DMU, the worst-case weighted final outputs do not exceed the best-case weighted intermediate inputs, ensuring feasibility of the second-stage transformation and preventing efficiency scores greater than one. Similarly, the upstream feasibility constraints impose that the worst-case weighted intermediate outputs are bounded by the best-case weighted external inputs for all DMUs, preserving consistency with the underlying production possibility set under interval data. Finally, the non-negativity of all weights reflects their economic interpretation as shadow prices and prevents undesirable compensation between inputs, intermediate measures, and outputs. Collectively, this multiplier formulation provides a theoretically sound and practically robust framework for evaluating two-stage network systems in the presence of interval-valued data.
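Following the constraint-by-constraint description above, the multiplier model can be posed directly to a generic LP solver. The sketch below uses `scipy.optimize.linprog` with the decision vector ordered as $(v, w, u)$; the bound assignments (outputs at lower bounds, intermediates at upper bounds in the normalization and downstream constraints, lower bounds upstream) mirror the prose description, and all data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def pessimistic_multiplier_lp(xU, zL, zU, yL, o):
    """Solve the pessimistic multiplier model for DMU o.
    xU: (n, m) input upper bounds; zL, zU: (n, p) intermediate bounds;
    yL: (n, s) output lower bounds.  Decision vector: [v (m), w (p), u (s)]."""
    n, m = xU.shape
    p, s = zU.shape[1], yL.shape[1]
    nv = m + p + s
    c = np.zeros(nv)
    c[m + p:] = -yL[o]                       # maximize sum_r u_r * yL_ro

    # Normalization: sum_k w_k * zU_ko = 1
    A_eq = np.zeros((1, nv)); A_eq[0, m:m + p] = zU[o]
    b_eq = np.array([1.0])

    # Downstream: sum_r u_r yL_rj - sum_k w_k zU_kj <= 0, for all j
    A1 = np.hstack([np.zeros((n, m)), -zU, yL])
    # Upstream: sum_k w_k zL_kj - sum_i v_i xU_ij <= 0, for all j
    A2 = np.hstack([-xU, zL, np.zeros((n, s))])
    A_ub = np.vstack([A1, A2]); b_ub = np.zeros(2 * n)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * nv, method="highs")
    return -res.fun, res.x                   # objective value and weights
```

Since the downstream constraint for $j = o$ gives $\sum_r u_r y_{ro}^L \le \sum_k w_k z_{ko}^U = 1$, the returned objective value never exceeds one.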

3.3. Summary of Bound Usage

To enhance clarity regarding the pessimistic interval transformation, Table 1 summarizes the bound realizations adopted in each formulation of the proposed model.
Under the pessimistic evaluation principle, external inputs are assessed at their upper bounds to reflect worst-case resource consumption, while final outputs are evaluated at their lower bounds to avoid overestimation of performance. Intermediate measures are treated consistently with the stage-specific feasibility constraints: upper bounds are used in the normalization and downstream feasibility constraints to anchor scale at the inter-stage interface, whereas lower bounds are adopted in the upstream feasibility constraints to ensure a conservative transformation between stages. This structured bound selection preserves internal consistency across multiplier, envelopment, and decomposition formulations.

4. Endogenous Weight Restrictions

In classical DEA models, unrestricted multipliers may lead to extreme or zero weights, allowing a DMU to appear efficient by effectively ignoring certain inputs or outputs. This phenomenon, commonly referred to as the weight degeneracy problem, has been widely documented in the DEA literature and is known to undermine both discrimination power and the economic interpretability of efficiency scores [2,15,16]. The problem becomes more pronounced in the presence of interval-valued or imprecise data, where data uncertainty may further amplify multiplier instability and exacerbate extreme weight selection [7,14,17]. To address this issue, various forms of exogenous weight restrictions have been proposed, including assurance region (AR) constraints [2,16], cone-ratio restrictions [16], and trade-off bounds or preference structures derived from decision-maker preferences [18,19]. While these approaches are effective in constraining pathological weight solutions, they typically rely on subjective judgments or externally imposed parameters, which may conflict with the nonparametric and data-driven philosophy underlying DEA [3,4].
To mitigate weight degeneracy without introducing subjective information, the proposed model adopts endogenous (data-driven) proportional weight restrictions, following the spirit of internally consistent and sample-dependent bounding schemes developed in the DEA literature [20,21,22]. Specifically, for each input weight v i , proportional bounds of the form
$\alpha_i \le \dfrac{v_i x_{io}^U}{\sum_{i'=1}^{m} v_{i'} x_{i'o}^U} \le \beta_i, \quad i = 1, \ldots, m,$
are imposed. Unlike absolute bounds, these proportional constraints regulate the relative contribution of each input to the aggregate input measure, thereby preserving scale invariance and unit consistency, which are fundamental axioms of DEA models [1,4]. The lower and upper bounds are defined as
$\alpha_i = \dfrac{\min_j x_{ij}^U}{\sum_{i'=1}^{m} \max_j x_{i'j}^U}, \qquad \beta_i = \dfrac{\max_j x_{ij}^U}{\sum_{i'=1}^{m} \min_j x_{i'j}^U},$
and are constructed solely from observed interval upper bounds across all DMUs. As a result, the bounds are entirely data-driven and introduce no external tuning parameters or preference information.
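A minimal sketch of this construction follows, with hypothetical input upper bounds; the column-wise minima and maxima are taken across DMUs, as in the definitions above.

```python
import numpy as np

def proportional_bounds(xU):
    """Data-driven proportional bounds from observed input upper bounds.
    xU: (n, m) array, rows = DMUs, columns = inputs."""
    col_min = xU.min(axis=0)          # min_j xU_ij, per input i
    col_max = xU.max(axis=0)          # max_j xU_ij, per input i
    alpha = col_min / col_max.sum()   # lower proportional bound
    beta = col_max / col_min.sum()    # upper proportional bound
    return alpha, beta
```

By construction $\sum_i \alpha_i \le 1 \le \sum_i \beta_i$, which is exactly the non-emptiness condition used in Lemma 1 below.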
Under the standing assumption that all interval bounds are strictly positive, the proportional weight restrictions are well defined. However, the conditions 0 α i β i 1 alone do not guarantee joint feasibility of all proportional constraints. To formally establish feasibility of the multiplier model under the proposed bounds, we provide the following result.
Lemma 1 (Joint Feasibility of Proportional Bounds). 
Suppose that for the evaluated DMU $o$, all upper-bound inputs satisfy $x_{io}^U > 0$ for all $i$. If the proportional bounds satisfy
$0 \le \alpha_i \le \beta_i \le 1,$
then there exists a strictly positive weight vector $v$ such that
$\alpha_i \le \dfrac{v_i x_{io}^U}{\sum_{k=1}^{m} v_k x_{ko}^U} \le \beta_i, \quad \forall i.$
Proof. 
Define the normalized contribution shares
$s_i = \dfrac{v_i x_{io}^U}{\sum_{k=1}^{m} v_k x_{ko}^U}.$
By construction,
$s_i \ge 0, \qquad \sum_{i=1}^{m} s_i = 1.$
The proportional constraints are therefore equivalent to
$\alpha_i \le s_i \le \beta_i, \quad \forall i,$
with $s$ lying in the unit simplex. The feasible region is thus a truncated simplex. A necessary and sufficient condition for non-emptiness of this region is
$\sum_{i=1}^{m} \alpha_i \le 1 \le \sum_{i=1}^{m} \beta_i.$
Under the proposed data-driven construction, each $\alpha_i$ is the smallest observed upper bound of input $i$ divided by the sum of the largest observed upper bounds, while each $\beta_i$ is the largest observed upper bound divided by the sum of the smallest. Since $\min_j x_{ij}^U \le \max_j x_{ij}^U$ for every $i$, summing over all inputs yields
$\sum_{i=1}^{m} \alpha_i \le 1 \quad \text{and} \quad \sum_{i=1}^{m} \beta_i \ge 1.$
Hence,
$\sum_{i=1}^{m} \alpha_i \le 1 \le \sum_{i=1}^{m} \beta_i,$
which guarantees that the truncated simplex is non-empty, so there exists at least one strictly positive weight vector satisfying all proportional bounds simultaneously. Because the bounds are constructed solely from observed data across all DMUs, these inequalities hold for every evaluated DMU, and joint feasibility of the proportional constraints is guaranteed for all admissible data realizations satisfying the positivity assumption. In particular, introducing the endogenous proportional bounds never restricts the feasible multiplier region to emptiness. □
Similar endogenous bounding strategies have been shown to enhance discrimination power and numerical stability in both classical and network DEA models [20,21,22]. The present formulation extends these ideas to the pessimistic interval network setting while preserving feasibility and linearity.
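The non-emptiness argument in Lemma 1 is constructive: starting from the lower bounds and greedily distributing the remaining probability mass up to the upper bounds always reaches a valid share vector whenever $\sum_i \alpha_i \le 1 \le \sum_i \beta_i$. The helper below (an illustrative sketch, not part of the model itself) makes that construction explicit.

```python
import numpy as np

def truncated_simplex_point(alpha, beta):
    """Return shares s with alpha_i <= s_i <= beta_i and sum(s) = 1,
    assuming sum(alpha) <= 1 <= sum(beta); return None otherwise."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    if alpha.sum() > 1.0 or beta.sum() < 1.0:
        return None                      # truncated simplex is empty
    s = alpha.copy()
    slack = 1.0 - s.sum()                # mass still to distribute
    for i in range(len(s)):
        give = min(beta[i] - s[i], slack)
        s[i] += give
        slack -= give
    return s
```

The loop terminates with `slack == 0` because the total available headroom $\sum_i (\beta_i - \alpha_i)$ is at least $1 - \sum_i \alpha_i$ under the stated condition.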

5. Envelopment Formulation

The dual (envelopment) formulation of the proposed pessimistic two-stage network DEA model is given by
$\min \ \theta$
subject to
$\sum_{j=1}^{n} \lambda_j x_{ij}^U \le \theta x_{io}^U, \quad i = 1, \ldots, m,$
$\sum_{j=1}^{n} \lambda_j z_{kj}^L \ge z_{ko}^L, \quad k = 1, \ldots, p,$
$\sum_{j=1}^{n} \lambda_j y_{rj}^L \ge y_{ro}^L, \quad r = 1, \ldots, s,$
$\lambda_j \ge 0, \quad j = 1, \ldots, n.$
This envelopment formulation evaluates the efficiency of DMU o relative to a convex combination of observed DMUs under pessimistic realizations of interval-valued data. This is consistent with standard DEA models with multiplier-side weight restrictions, where the primal technology set remains unchanged unless the restrictions are dualized explicitly. The input constraints compare the upper bounds of the reference set with the upper bounds of the evaluated DMU scaled by θ , ensuring conservative assessment of input efficiency. The intermediate and output constraints enforce feasibility of both stages using lower-bound realizations, thereby preserving the two-stage network structure under bounded data uncertainty. The non-negativity of the intensity variables λ j guarantees convexity of the reference technology and consistency with the axiomatic foundations of DEA.
It is important to emphasize that strong duality applies to the deterministic equivalent linear program obtained after the interval-to-deterministic transformation introduced in Section 3.1. Once upper and lower bounds are fixed according to the pessimistic evaluation principle, the resulting optimization problem is a standard linear program. Therefore, the multiplier and envelopment formulations are related through classical linear programming duality. The interval uncertainty is resolved prior to optimization, and duality is not invoked on an interval-valued program but on its deterministic linear counterpart.
It is important to note that the endogenous proportional weight restrictions introduced in Section 4 are imposed in the multiplier formulation as a mechanism for regularizing the dual variable space and mitigating extreme or degenerate weight solutions. As is standard in DEA models with weight restrictions, these constraints do not explicitly appear in the envelopment formulation, which represents the primal production technology under worst-case realizations of interval data. The envelopment model therefore corresponds to the dual of the unrestricted multiplier problem and is employed to characterize the attainable reference set and efficiency scores, while the weight restrictions operate on the multiplier side to enhance discrimination power and numerical stability. This separation preserves linearity and feasibility of the envelopment formulation while maintaining consistency between the two representations of the proposed model.
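The envelopment program can likewise be handed to a generic LP solver. The sketch below assembles the constraint matrix row by row with the decision vector $(\lambda_1, \ldots, \lambda_n, \theta)$; data and dimensions are hypothetical, and the bound assignments follow the pessimistic convention described above.

```python
import numpy as np
from scipy.optimize import linprog

def pessimistic_envelopment_lp(xU, zL, yL, o):
    """Envelopment (dual) form under pessimistic bound realizations.
    xU: (n, m) input upper bounds; zL: (n, p) intermediate lower bounds;
    yL: (n, s) output lower bounds.  Decision vector: [lambda_1..n, theta]."""
    n, m = xU.shape
    p, s = zL.shape[1], yL.shape[1]
    c = np.zeros(n + 1); c[-1] = 1.0             # minimize theta

    rows = []
    for i in range(m):                           # sum_j lam_j xU_ij <= theta xU_io
        rows.append(np.append(xU[:, i], -xU[o, i]))
    for k in range(p):                           # sum_j lam_j zL_kj >= zL_ko
        rows.append(np.append(-zL[:, k], 0.0))
    for r in range(s):                           # sum_j lam_j yL_rj >= yL_ro
        rows.append(np.append(-yL[:, r], 0.0))
    A_ub = np.array(rows)
    b_ub = np.concatenate([np.zeros(m), -zL[o], -yL[o]])

    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun, res.x[:n]                    # (theta*, intensity vector)
```

Setting $\lambda_o = 1$ and $\theta = 1$ is always feasible, which is the computational counterpart of Propositions 1 and 2 below.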

6. Theoretical Properties

In this section, we rigorously establish the main mathematical properties of the proposed pessimistic two-stage network DEA model. The analysis focuses on fundamental axioms required for methodological soundness, including feasibility, boundedness, monotonicity, efficiency decomposition, and special case consistency. Formal propositions and proofs are provided to demonstrate that the proposed framework preserves the core theoretical foundations of DEA while extending classical two-stage network models to accommodate interval-valued data and endogenous weight restrictions. These results ensure internal consistency of the model and justify its use as a valid optimization-based efficiency assessment tool.
Proposition 1 (Feasibility). 
The proposed pessimistic two-stage network DEA model is feasible for any DMU with strictly positive interval data.
Proof. 
Consider the envelopment formulation of the proposed model. For the DMU under evaluation o , define
$\lambda_o = 1, \qquad \lambda_j = 0 \ \ \forall j \ne o.$
Substituting these values into the constraints yields
$\sum_{j=1}^{n} \lambda_j x_{ij}^U = x_{io}^U \le \theta x_{io}^U, \quad \forall i,$
$\sum_{j=1}^{n} \lambda_j z_{kj}^L = z_{ko}^L, \quad \forall k,$
$\sum_{j=1}^{n} \lambda_j y_{rj}^L = y_{ro}^L, \quad \forall r.$
Since all interval bounds are non-negative and
$x_{io}^U > 0, \qquad z_{ko}^L > 0, \qquad y_{ro}^L > 0,$
any $\theta \ge 1$ satisfies all input constraints, while the intermediate and output constraints hold with equality. Hence the feasible region of the model is non-empty. □
Proposition 2 (Boundedness). 
For any DMU $o$, the optimal efficiency score $\theta_o^*$ obtained from the proposed pessimistic two-stage network DEA model satisfies
$0 < \theta_o^* \le 1.$
Proof. 
From Proposition 1, the feasible region of the envelopment model is non-empty, and therefore an optimal solution exists. Since the intermediate constraints require $\sum_{j} \lambda_j z_{kj}^L \ge z_{ko}^L > 0$, at least one intensity variable must be strictly positive in any feasible solution; the corresponding input constraint then has a strictly positive left-hand side, forcing $\theta_o^* > 0$.
To show the upper bound, consider the envelopment formulation and choose
$\lambda_o = 1, \qquad \lambda_j = 0 \ \ \forall j \ne o, \qquad \theta = 1.$
Substituting into the input constraints yields
$\sum_{j=1}^{n} \lambda_j x_{ij}^U = x_{io}^U \le x_{io}^U \cdot 1, \quad \forall i,$
which holds trivially. The intermediate and output constraints are satisfied with equality:
$\sum_{j=1}^{n} \lambda_j z_{kj}^L = z_{ko}^L, \qquad \sum_{j=1}^{n} \lambda_j y_{rj}^L = y_{ro}^L.$
Hence $\theta = 1$ is feasible, implying that the optimal value satisfies $\theta_o^* \le 1$. Combining both results yields
$0 < \theta_o^* \le 1.$ □
Proposition 3 (Decomposition Property). 
For any DMU $o$, the optimal overall efficiency score obtained from the proposed two-stage network DEA model can be decomposed as
$\theta_o^* = \theta_o^{1*} \times \theta_o^{2*},$
where $\theta_o^{1*}$ and $\theta_o^{2*}$ denote the optimal efficiency scores of Stage 1 and Stage 2, respectively.
Proof. 
Consider the multiplier formulation of the proposed pessimistic two-stage network DEA model. Let $v_i^*$, $w_k^*$, and $u_r^*$ denote the optimal input, intermediate, and output weights, respectively. The overall efficiency of DMU $o$ is given by
$\theta_o = \dfrac{\sum_{r=1}^{s} u_r^* y_{ro}^L}{\sum_{i=1}^{m} v_i^* x_{io}^U}.$
Define the stage-1 efficiency as
$\theta_o^1 = \dfrac{\sum_{k=1}^{p} w_k^* z_{ko}^U}{\sum_{i=1}^{m} v_i^* x_{io}^U},$
and the stage-2 efficiency as
$\theta_o^2 = \dfrac{\sum_{r=1}^{s} u_r^* y_{ro}^L}{\sum_{k=1}^{p} w_k^* z_{ko}^U}.$
By construction, the normalization constraint
$\sum_{k=1}^{p} w_k^* z_{ko}^U = 1$
ensures compatibility between the two stages. Multiplying the two stage efficiencies and cancelling the common intermediate term yields
$\theta_o^1 \times \theta_o^2 = \dfrac{\sum_{k=1}^{p} w_k^* z_{ko}^U}{\sum_{i=1}^{m} v_i^* x_{io}^U} \cdot \dfrac{\sum_{r=1}^{s} u_r^* y_{ro}^L}{\sum_{k=1}^{p} w_k^* z_{ko}^U} = \theta_o.$
Hence, the overall efficiency score admits a multiplicative decomposition into the efficiencies of the two stages. □
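The multiplicative decomposition is a simple ratio identity and can be checked numerically. The weights and data below are illustrative placeholders, not optimal multipliers from any particular instance.

```python
import numpy as np

def stage_efficiencies(v, w, u, xU_o, zU_o, yL_o):
    """Stage-wise scores from given multipliers.
    The overall score equals the product of the two stage scores."""
    stage1 = (w @ zU_o) / (v @ xU_o)   # intermediate value over input value
    stage2 = (u @ yL_o) / (w @ zU_o)   # output value over intermediate value
    return stage1, stage2, stage1 * stage2
```

Since the intermediate term $w^\top z_o^U$ appears in the denominator of one ratio and the numerator of the other, the product collapses to the overall ratio regardless of the normalization value.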
Proposition 4 (Special Case Consistency). 
If all interval data collapse to point-valued observations and the endogenous weight restrictions are removed, the proposed pessimistic two-stage network DEA model reduces to the classical two-stage network DEA model.
Proof. 
Assume that all interval-valued data degenerate to exact observations, that is,
$x_{ij}^L = x_{ij}^U = x_{ij}, \qquad z_{kj}^L = z_{kj}^U = z_{kj}, \qquad y_{rj}^L = y_{rj}^U = y_{rj}, \qquad \forall i, k, r, j.$
Under this assumption, the pessimistic transformation becomes redundant, and the multiplier formulation of the proposed model simplifies to
$\max \sum_{r=1}^{s} u_r y_{ro}$
subject to
$\sum_{k=1}^{p} w_k z_{ko} = 1,$
$\sum_{r=1}^{s} u_r y_{rj} - \sum_{k=1}^{p} w_k z_{kj} \le 0, \quad \forall j,$
$\sum_{k=1}^{p} w_k z_{kj} - \sum_{i=1}^{m} v_i x_{ij} \le 0, \quad \forall j,$
$u_r, v_i, w_k \ge 0.$
Furthermore, removing the endogenous weight restriction constraints restores full weight flexibility. The resulting formulation coincides exactly with the classical two-stage network DEA model in multiplier form. By duality, the corresponding envelopment model also reduces to its classical counterpart.
Hence, the proposed model is a strict generalization of the classical two-stage network DEA model. □
Proposition 5 (Monotonicity). 
Let $\theta_o^*$ denote the optimal efficiency score of DMU $o$ obtained from the proposed pessimistic two-stage network DEA model. Improving the pessimistic realizations of the evaluated DMU's data (raising its output lower bounds or lowering its input upper bounds) cannot decrease $\theta_o^*$.
Proof. 
Consider the envelopment formulation of the proposed model. Suppose that the output lower bounds of the evaluated DMU are raised, i.e.,
$\hat{y}_{ro}^L \ge y_{ro}^L, \quad \forall r,$
or that its input upper bounds are lowered, i.e.,
$\hat{x}_{io}^U \le x_{io}^U, \quad \forall i,$
while the reference-set data are held fixed. Under these changes, the output constraints
$\sum_{j=1}^{n} \lambda_j y_{rj}^L \ge \hat{y}_{ro}^L, \quad \forall r,$
become more demanding, and the input constraints
$\sum_{j=1}^{n} \lambda_j x_{ij}^U \le \theta \hat{x}_{io}^U, \quad \forall i,$
likewise tighten. Consequently, any solution that is feasible for the modified model remains feasible for the original model.
Since the objective function minimizes $\theta$, tightening the constraints shrinks the feasible region and cannot decrease the optimal value of $\theta$. Therefore, the optimal efficiency score satisfies
$\hat{\theta}_o^* \ge \theta_o^*,$
which implies that efficiency is non-decreasing when the worst-case outputs of the evaluated DMU improve or its worst-case inputs contract.
Hence, the proposed model satisfies the monotonicity property. □

7. Comparison with Robust Optimization-Based DEA

Robust optimization-based DEA models address data uncertainty by explicitly defining uncertainty sets for inputs and outputs and optimizing efficiency scores against worst-case realizations within these sets [8,9,23]. While such formulations provide strong robustness guarantees, they frequently lead to nonlinear programs or large-scale linear counterparts whose size grows rapidly with the number of uncertainty parameters, thereby increasing computational burden and limiting scalability [24,25].
The proposed model differs from robust optimization-based DEA in several fundamental respects. First, robustness is achieved through an interval-to-deterministic transformation that evaluates inputs at their upper bounds and outputs at their lower bounds, following the pessimistic efficiency principle commonly adopted in interval DEA models [5,7]. This approach preserves linearity and avoids the explicit construction of uncertainty sets, auxiliary variables, or budget-of-uncertainty parameters.
Second, endogenous weight restrictions are incorporated to regularize the multiplier space in a fully data-driven manner. These proportional bounds mitigate extreme or degenerate weights without introducing penalty terms, tuning parameters, or norm-based constraints, which are frequently required in robust optimization-based DEA formulations [21,22]. As a result, the proposed framework maintains scale invariance, nonparametric structure, and interpretability while enhancing numerical stability.
Third, the explicit two-stage network structure enables internal efficiency decomposition, allowing upstream and downstream performance to be analyzed separately [11,12]. This analytical feature is generally absent in robust optimization-based DEA frameworks, where the primary focus is placed on aggregate worst-case performance rather than stage-wise efficiency analysis.
Several recent interval network DEA studies, including Zhu and Zhou [26], Seyed Esmaeili et al. [27], and Zhang et al. [28], evaluate efficiency bounds under interval uncertainty within two-stage network structures. However, these formulations typically do not incorporate endogenous proportional weight regularization within a unified linear multiplier–envelopment framework. In contrast to Zhu and Zhou [26], the present model explicitly preserves the linear programming structure after the interval-to-deterministic transformation and enables formal stage-wise efficiency decomposition. Unlike Zhang et al. [28], which primarily focuses on empirical interval efficiency estimation, the proposed framework introduces data-driven proportional bounds that mitigate weight degeneracy without exogenous parameters or preference information. The integration of pessimistic interval evaluation, endogenous proportional regularization, and network decomposition within a single linear programming formulation therefore constitutes the principal methodological contribution of this study.
Thus, the proposed model should not be viewed as a substitute for robust optimization–based DEA frameworks. Instead, it provides a complementary approach that achieves conservative efficiency evaluation under bounded interval uncertainty while preserving linear programming structure, efficiency decomposition, and computational tractability. This distinction clarifies the scope of robustness considered in the present study and highlights the suitability of the proposed framework for network DEA applications where analytical transparency and interpretability are essential.

8. Empirical Analysis and Sensitivity Evaluation

8.1. Data Description and Sample

To demonstrate the practical applicability of the proposed pessimistic two-stage network DEA model, we conduct an empirical analysis using macroeconomic data obtained from the World Bank’s World Development Indicators database [29]. The study includes 18 Organization for Economic Co-operation and Development (OECD) countries with complete observations for the selected indicators during the period 2021–2022. Each country is treated as a decision-making unit (DMU).
The countries included in the analysis are Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Switzerland, Australia, and the United Kingdom.
The production structure is modeled as a two-stage network. In Stage 1, government expenditure and labor resources contribute to capital formation. In Stage 2, accumulated capital is transformed into economic output.
Table 2 specifies the mapping between the theoretical variables introduced in Section 3 and their empirical counterparts. The two-stage structure reflects a simplified macroeconomic production mechanism. In Stage 1, government expenditure and labor force represent resource inputs contributing to capital formation. Gross capital formation is treated as the intermediate measure, as it captures the accumulation of productive capital generated from resource utilization. In Stage 2, accumulated capital supports economic output, measured by GDP and industry value added.
This specification preserves the network structure of the proposed model, where the intermediate variable links upstream resource allocation to downstream output generation. All selected indicators are nonnegative and economically interpretable, satisfying the assumptions required by the interval-based DEA formulation.

8.2. Construction of Interval-Valued Data

To incorporate bounded empirical variability without introducing probabilistic assumptions, interval bounds were constructed using the two-year window 2021–2022. For each DMU j and each variable v , the lower and upper bounds were defined as
$v_j^L = \min(v_{j,2021},\, v_{j,2022}), \qquad v_j^U = \max(v_{j,2021},\, v_{j,2022}).$
Consistent with the pessimistic transformation introduced in Section 3, upper bounds were adopted for inputs and the intermediate variable, while lower bounds were used for outputs. This yields a deterministic linear programming formulation while reflecting worst-case realizations within observed empirical variability.
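The two-year bound construction reduces to an element-wise min/max over the annual observations. The figures below are hypothetical placeholders, not values from the World Development Indicators sample.

```python
import numpy as np

# Hypothetical two-year observations for one indicator:
# rows = DMUs (countries), columns = years (2021, 2022).
obs = np.array([[410.2, 398.7],
                [152.9, 161.3],
                [ 88.4,  90.1]])

lower = obs.min(axis=1)   # v_j^L = min of the two annual values
upper = obs.max(axis=1)   # v_j^U = max of the two annual values
width = upper - lower     # strictly positive whenever the years differ
```

Interval widths computed this way measure observed year-to-year variability directly, with no perturbation parameter to tune.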
Table 3 reports descriptive statistics of the constructed interval bounds. All variables exhibit strictly positive interval widths, confirming the presence of observable year-to-year variability within the 2021–2022 window. The magnitude of the interval widths reflects empirical macroeconomic fluctuations rather than artificially imposed perturbations, thereby grounding the bounded uncertainty assumption in real data.
Notably, percentage-based indicators such as government expenditure and capital formation display moderate variability, while GDP and labor force exhibit larger absolute widths due to scale effects. These empirical bounds justify the pessimistic evaluation adopted in the model and demonstrate that the interval transformation captures realistic data uncertainty without compromising interpretability.

8.3. Monte Carlo Perturbation and Quantitative Stability Analysis

To quantitatively evaluate the stability of efficiency scores under bounded variability, a Monte Carlo perturbation experiment was conducted in which inputs, intermediate measures, and outputs were repeatedly sampled within their empirically constructed interval bounds. The experiment was applied to the classical midpoint model, the unrestricted interval model, and the proposed model in order to enable direct comparison of stability behavior across specifications. This procedure provides numerical stability metrics beyond descriptive sensitivity discussion and directly operationalizes bounded data perturbation within observed intervals.
For each DMU j and each variable with interval bounds defined in Section 8.2, pseudo-observations were generated as
$x_{ij}^{(m)} \sim U[x_{ij}^L, x_{ij}^U], \qquad m = 1, \dots, 500.$
The uniform distribution preserves the bounded uncertainty assumption without imposing additional distributional structure. The perturbation therefore reflects empirical variability within the observed two-year window rather than externally imposed noise.
For each replication, efficiencies were computed under the same CRS, output-oriented two-stage network envelopment formulation used in Section 8.4. Let $\theta_j^{(m)}$ denote the efficiency score of DMU j in replication m. Stability was evaluated using three complementary metrics.
First, score dispersion was measured using the sample mean and variance across replications.
Second, ranking robustness was evaluated using Spearman rank correlation between each replication ranking and the deterministic pessimistic ranking reported in Section 8.4.
Third, frontier stability was assessed by computing the frequency with which each DMU appeared efficient ($\theta = 1$) across replications, as well as the average number of efficient DMUs per replication.
A model exhibiting low dispersion, high average rank correlation, and stable frontier classification is interpreted as robust under bounded perturbations. These metrics jointly evaluate numerical dispersion, ordinal stability, and frontier persistence, thereby distinguishing structural inefficiency from perturbation-induced variability.
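The experiment and its three metrics can be sketched as follows with NumPy and SciPy. Here `efficiency_fn` stands in for any per-replication DEA solver that maps an (n × k) data array to n scores; all names and the data layout are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def mc_stability(lower, upper, efficiency_fn, det_scores, B=500, seed=0):
    """Score dispersion, rank robustness, and frontier persistence
    under uniform sampling within interval bounds."""
    rng = np.random.default_rng(seed)
    n = lower.shape[0]
    scores = np.empty((B, n))
    rhos = np.empty(B)
    for b in range(B):
        pseudo = rng.uniform(lower, upper)   # bounded perturbation
        scores[b] = efficiency_fn(pseudo)    # one score per DMU
        rho, _ = spearmanr(scores[b], det_scores)
        rhos[b] = rho                        # ordinal agreement
    efficient = scores >= 1.0 - 1e-9         # frontier membership
    return {
        "sd_per_dmu": scores.std(axis=0),          # dispersion
        "mean_rho": rhos.mean(),                   # ranking robustness
        "frontier_freq": efficient.mean(axis=0),   # per-DMU persistence
        "avg_efficient": efficient.sum(axis=1).mean(),
    }
```

Low `sd_per_dmu`, high `mean_rho`, and stable `frontier_freq` correspond to the robustness criteria defined in the text.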
Table 4 reports comparative aggregate stability measures under 500 Monte Carlo perturbations for all three model specifications. The proposed model exhibits the lowest mean standard deviation of efficiency scores across DMUs (0.0239), indicating reduced dispersion under bounded interval variability relative to the classical and unrestricted specifications. The average Spearman rank correlation between perturbed rankings and the deterministic pessimistic ranking equals 0.9187, exceeding that of the alternative models and demonstrating stronger ordinal stability. Furthermore, the average number of efficient DMUs per replication is lower and less variable under the proposed specification, indicating improved frontier persistence. These results confirm that proportional regularization enhances numerical stability without altering the deterministic frontier structure under CRS.
Table 5 provides DMU-specific stability diagnostics. Frontier units such as Germany, Switzerland, Ireland, and Japan exhibit zero or near-zero score dispersion and appear efficient in nearly all replications ($\pi(\theta = 1) \approx 1$), indicating structural robustness. The United Kingdom and Finland also display high frontier persistence. In contrast, New Zealand shows higher dispersion (SD = 0.1791) and a lower frontier frequency, suggesting greater sensitivity to interval perturbations. This higher dispersion reflects proximity to the efficiency frontier under midpoint data combined with sensitivity to output interval variability, rather than instability of the proposed formulation. For mid-ranked and lower-ranked countries, standard deviations remain modest and frontier frequency equals zero, indicating stable classification as inefficient units. Overall, the DMU-level analysis reinforces that performance ordering is largely structurally determined rather than perturbation-driven.

8.4. Efficiency Results Under Pessimistic Evaluation

The deterministic equivalent linear program was solved under constant returns to scale (CRS) using an output-oriented specification. Efficiency scores were computed for all 18 countries.
Table 6 summarizes the distribution of overall efficiency scores obtained under pessimistic evaluation. The average efficiency score of 0.7924 implies that, under worst-case realizations within the observed bounds, outputs could be proportionally expanded by approximately 26.2% ($1/0.7924 - 1 \approx 0.262$) while holding input levels fixed. This interpretation follows directly from the output-oriented specification adopted in the deterministic formulation.
The minimum efficiency score of 0.4917 indicates that the least efficient country could expand outputs by approximately 103.4% under conservative assessment before reaching the efficient frontier. In contrast, six countries achieve efficiency scores of 1.0000, demonstrating that they remain efficient even when evaluated against upper-bound inputs and lower-bound outputs.
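The quoted expansion percentages follow directly from the output-oriented score; a quick arithmetic check:

```python
def expansion_potential(theta):
    """Proportional output expansion implied by an output-oriented
    efficiency score theta under CRS: 1/theta - 1."""
    return 1.0 / theta - 1.0

print(round(100 * expansion_potential(0.7924), 1))  # 26.2 (mean score)
print(round(100 * expansion_potential(0.4917), 1))  # 103.4 (minimum score)
```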
The standard deviation of 0.2049 and the wide range of efficiency values confirm that the pessimistic interval transformation preserves discriminatory power while maintaining conservative robustness. The dispersion of scores indicates meaningful differentiation among countries despite the bounded uncertainty incorporated into the evaluation.

8.5. Comparative Analysis with Classical and Unrestricted Interval Models

To evaluate the incremental effect of incorporating bounded interval variability, we compare the deterministic pessimistic interval model (proposed framework) with the classical two-stage network DEA model constructed using midpoint (point-valued) data.
For each variable with bounds $x^L \le x^U$, midpoint data are defined as
$x^M = \dfrac{x^L + x^U}{2},$
and analogously for intermediate and output variables. The classical model is solved under the same constant returns to scale (CRS) and output-oriented specification used in Section 8.4, ensuring full methodological consistency across specifications.
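The midpoint construction is a one-liner; note that a degenerate interval (zero width) recovers the point-valued classical data, consistent with the special-case behavior of the framework. A minimal sketch:

```python
import numpy as np

def midpoint(lower, upper):
    """Midpoint data for the classical comparison model."""
    return 0.5 * (np.asarray(lower) + np.asarray(upper))
```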
The results in Table 7 indicate that incorporating pessimistic interval evaluation does not alter frontier identification relative to the classical midpoint specification under constant returns to scale (CRS). Mean efficiency levels and the number of efficient decision-making units remain effectively unchanged across specifications, indicating that bounded empirical variability does not materially distort the underlying production possibility set in the CRS envelopment formulation.
It is important to clarify why the interval two-stage network DEA model without proportional weight restrictions yields identical deterministic efficiency scores: the endogenous proportional bounds introduced in Section 4 operate exclusively within the multiplier formulation, and under CRS and output orientation they do not alter the deterministic primal technology set represented in the envelopment model. Consequently, the feasible production set and the associated deterministic frontier remain unchanged when proportional constraints are absent from the primal formulation. This invariance is specific to CRS and output orientation. Under variable returns to scale (VRS) or alternative normalization specifications, the additional convexity constraint introduces scale-dependent adjustments to the primal production set that may interact with multiplier restrictions and thereby influence frontier geometry; CRS is adopted here precisely to isolate the effect of bounded variability and multiplier regularization without confounding scale effects.
Although deterministic efficiency values coincide under CRS, the associated multiplier solutions differ substantially. In the unrestricted specification, optimal solutions frequently assign zero or near-zero weights to certain inputs, reflecting weight degeneracy. The introduction of proportional bounds eliminates such pathological weight profiles by constraining the relative contribution of each input within empirically derived limits. Thus, the effect of the proposed restrictions lies in regularizing the dual variable space rather than relocating the efficiency frontier.
Accordingly, the primary contribution of the proposed proportional weight restrictions is not to alter deterministic efficiency scores under CRS, but to enhance multiplier regularization, numerical conditioning, and stability behavior under bounded perturbations. As demonstrated in the Monte Carlo analysis in Section 8.3, the presence of proportional bounds improves ranking robustness and frontier persistence, thereby distinguishing structural inefficiency from variability-induced fluctuations while preserving the primal production structure.
To verify that deterministic efficiency equivalence under CRS does not imply multiplier equivalence, the unrestricted interval two-stage network DEA model was explicitly implemented and solved under the same specification. Although efficiency scores were numerically identical across formulations, multiplier solutions differed substantially.
The diagnostics in Table 8 reveal a clear structural distinction between the unrestricted and proportionally restricted specifications. Under both the classical midpoint and unrestricted interval formulations, optimal multiplier solutions systematically assign zero weight to one of the two input variables, resulting in degenerate shadow price structures. This pattern reflects the well-documented tendency of unconstrained DEA models to concentrate weight on a single input dimension under CRS normalization.
By contrast, the proposed proportional bounds eliminate zero-weight occurrences across all decision-making units and yield strictly positive normalized input shares. The increase in the minimum normalized weight and the reduction in dispersion indicate a more balanced allocation of relative importance across inputs. Importantly, these differences arise solely within the multiplier space; the primal production technology and deterministic efficiency scores remain unchanged under CRS. The results therefore confirm that the proposed restrictions act as a regularization mechanism, mitigating degeneracy while preserving the frontier structure and efficiency levels.
Although efficiency scores coincide under CRS in the present dataset, this equivalence does not imply structural redundancy. Under alternative returns-to-scale assumptions or network specifications, proportional weight restrictions may influence frontier geometry. The CRS setting adopted here isolates multiplier regularization effects without introducing scale-dependent confounding.

8.6. Complete Ranking of Countries

The complete ranking of countries based on pessimistic efficiency scores is presented in Table 9.
Table 9 presents the complete efficiency ranking of the 18 OECD countries under pessimistic interval evaluation. Six countries achieve efficiency scores of 1.000, indicating that they remain on the production frontier even when assessed using upper-bound inputs and lower-bound outputs. These countries exhibit structurally robust performance under bounded empirical variability.
Countries with efficiency scores between approximately 0.90 and 0.96 operate close to the frontier and require only modest proportional output expansion to achieve full efficiency. Mid-ranked countries display moderate inefficiency, with output expansion potential in the range of 15–25% under conservative assessment.
The lowest-ranked countries exhibit substantially lower efficiency scores, implying significant output expansion potential before reaching the pessimistic frontier. Importantly, the wide dispersion of efficiency values confirms that the interval-based transformation preserves discriminatory power while incorporating conservative robustness. Combined with the Monte Carlo stability results, the ranking indicates that performance ordering is largely structurally determined rather than driven by small interval perturbations.

8.7. Computational Performance Analysis

To further examine the computational tractability of the proposed pessimistic two-stage network DEA model, numerical performance was evaluated under the empirical specification adopted in this study. All linear programs were implemented in MATLAB R2023b and solved with Gurobi 11.0 on a desktop computer equipped with an AMD Ryzen 7 7700 8-core processor (3.80 GHz) and 32 GB RAM (31.1 GB usable).
Table 10 demonstrates that solution times grow approximately linearly with the number of DMUs for all specifications, consistent with the linear increase in constraints. The proportional weight restrictions introduce only marginal additional computational burden relative to the unrestricted interval model. Importantly, no nonlinear reformulations or auxiliary uncertainty variables are required, and the optimization problem remains a standard linear program solvable using off-the-shelf solvers. These results confirm that the interval-to-deterministic transformation preserves computational scalability while incorporating conservative robustness.
For each evaluated DMU, the multiplier formulation contains m + p + s nonnegative weight variables, one normalization constraint, n downstream feasibility constraints, n upstream feasibility constraints, and proportional weight restriction constraints. The total number of constraints therefore increases linearly with the number of DMUs n . No auxiliary variables, uncertainty-set expansions, or nonlinear transformations are introduced. The resulting optimization problem remains a standard linear program comparable in structure to classical two-stage network DEA models.
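To make this constraint tally concrete, the following is a minimal sketch of a Kao–Hwang-type CRS multiplier LP on pessimistically transformed data, solved with `scipy.optimize.linprog`. It is an illustration, not the authors' exact formulation: it uses an input-normalized orientation for brevity and a simple positivity floor `eps` in place of the paper's endogenous proportional weight restrictions.

```python
import numpy as np
from scipy.optimize import linprog

def pessimistic_two_stage_crs(XU, ZL, YL, o, eps=1e-6):
    """Overall efficiency of DMU o in a two-stage CRS multiplier LP
    on transformed data: XU (n x m) upper-bound inputs, ZL (n x p)
    intermediates, YL (n x s) lower-bound outputs."""
    n, m = XU.shape
    p, s = ZL.shape[1], YL.shape[1]
    # decision vector: [v (input weights), w (intermediate), u (output)]
    c = np.concatenate([np.zeros(m + p), -YL[o]])      # maximize u.y_o
    # stage-1 feasibility for every DMU j: w.z_j - v.x_j <= 0
    A1 = np.hstack([-XU, ZL, np.zeros((n, s))])
    # stage-2 feasibility for every DMU j: u.y_j - w.z_j <= 0
    A2 = np.hstack([np.zeros((n, m)), -ZL, YL])
    # normalization: v.x_o = 1 (one equality constraint)
    A_eq = np.concatenate([XU[o], np.zeros(p + s)]).reshape(1, -1)
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (m + p + s), method="highs")
    assert res.success
    return -res.fun   # u.y_o at the optimum, bounded above by 1
```

The m + p + s weight variables, one normalization constraint, and 2n feasibility rows match the tally above; proportional restrictions would add a fixed number of rows per evaluated DMU, preserving linear growth in n.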
Under the empirical dataset of 18 OECD countries, solution times were negligible and consistent with those observed for classical two-stage network DEA. All instances were solved to optimality without infeasibility or numerical warnings. The proportional weight restrictions did not introduce computational instability and instead contributed to improved conditioning of the multiplier space by preventing extreme or degenerate weight solutions.
In contrast, robust optimization-based DEA formulations generally require additional variables and constraints associated with uncertainty budgets or dualized worst-case conditions, thereby enlarging the constraint system and increasing computational burden. Additional simulations with larger synthetic samples (n = 25, 50, 100) indicate that solution times increase approximately proportionally with the number of DMUs, consistent with the linear growth in the number of constraints. These observations confirm that the proposed interval-to-deterministic transformation preserves the computational characteristics of linear programming–based DEA while incorporating conservative robustness against bounded data uncertainty.

9. Conclusions

This study proposes a pessimistic two-stage network data envelopment analysis framework that integrates interval-valued data and endogenous proportional weight restrictions within a unified linear programming formulation. By resolving interval uncertainty through a deterministic worst-case transformation and embedding data-driven regularization directly in the multiplier structure, the model extends classical two-stage network DEA while preserving linearity, scale invariance, and interpretability.
Unlike robust optimization-based DEA approaches that rely on explicit uncertainty sets and enlarged formulations, the proposed framework achieves conservative efficiency evaluation through bounded interval realizations without introducing nonlinearities or auxiliary uncertainty variables. This preserves strong duality, computational tractability, and the analytical transparency of network efficiency decomposition.
A key methodological contribution lies in the formal integration of endogenous proportional weight restrictions within a pessimistic interval network setting. The proposed bounds are derived entirely from observed data and are shown to guarantee joint feasibility while mitigating multiplier degeneracy. Importantly, the regularization mechanism operates in the dual space without distorting the primal production technology under constant returns to scale, thereby enhancing numerical conditioning and ranking stability without altering deterministic frontier identification.
Theoretical properties—including feasibility, boundedness, monotonicity, efficiency decomposition, and special case consistency—have been rigorously established, confirming adherence to fundamental DEA axioms. The empirical application to OECD macroeconomic data demonstrates that the framework captures observed bounded variability while preserving discrimination power and computational scalability. Monte Carlo perturbation analysis further confirms improved ranking robustness and frontier persistence relative to classical and unrestricted specifications.
To the best of our knowledge, this study constitutes the first unified linear programming framework that combines pessimistic interval evaluation, endogenous proportional regularization, and two-stage network efficiency decomposition in a single coherent model. The approach therefore provides a theoretically grounded and computationally efficient alternative for network efficiency analysis under bounded data uncertainty.
Future research may extend the framework to variable returns to scale environments, dynamic or multi-period network systems, alternative uncertainty representations, and large-scale empirical applications.

Author Contributions

Conceptualization, G.C.; methodology, G.C.; software, C.-N.W.; validation, G.C. and C.-N.W.; formal analysis, G.C.; investigation, G.C.; resources, G.C. and C.-N.W.; data curation, G.C.; writing—original draft preparation, G.C.; writing—review and editing, G.C.; visualization, G.C.; supervision, C.-N.W.; project administration, C.-N.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by project NSTC 114-2637-E-992-010 from the National Science and Technology Council, Taiwan.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Table 1. Bound realization summary under pessimistic evaluation.

| Variable Type | Multiplier Model | Envelopment Model | Role in Decomposition |
|---|---|---|---|
| External inputs $x$ | Upper bounds $x^U$ | Upper bounds $x^U$ | Upper bounds |
| Intermediate $z$ | Upper bounds (normalization), lower bounds (constraints) | Lower bounds | Lower bounds |
| Final outputs $y$ | Lower bounds $y^L$ | Lower bounds $y^L$ | Lower bounds |
Table 2. Definition of variables in the two-stage macroeconomic network.

| Symbol | Indicator (World Bank Code) | Stage | Description |
|---|---|---|---|
| $x_1$ | NE.CON.GOVT.ZS | Stage 1 input | General government final consumption expenditure (% of GDP) |
| $x_2$ | SL.TLF.TOTL.IN | Stage 1 input | Labor force, total |
| $z$ | NE.GDI.TOTL.ZS | Intermediate | Gross capital formation (% of GDP) |
| $y_1$ | NY.GDP.MKTP.KD | Stage 2 output | GDP (constant 2015 US$) |
| $y_2$ | NV.IND.TOTL.ZS | Stage 2 output | Industry (including construction), value added (% of GDP) |
Table 3. Descriptive statistics of empirical interval bounds (2021–2022).

| Variable | Mean Lower Bound | Mean Upper Bound | Mean Width | Min Width | Max Width |
|---|---|---|---|---|---|
| $x_1$: Government expenditure (% GDP) | 19.9987 | 20.9001 | 0.9014 | 0.1961 | 4.3529 |
| $x_2$: Labor force (millions) | 25.0390 | 25.3692 | 0.3306 | 0.0068 | 2.3119 |
| $z$: Gross capital formation (% GDP) | 23.8911 | 25.1905 | 1.2994 | 0.1478 | 3.9103 |
| $y_1$: GDP (constant 2015 USD, trillions) | 2.3847 | 2.4545 | 0.0698 | 0.0015 | 0.5230 |
| $y_2$: Industry value added (% GDP) | 24.0686 | 25.6732 | 1.6046 | 0.1191 | 11.2270 |
Table 4. Monte Carlo stability summary under bounded perturbations (CRS, output-oriented two-stage network DEA; B = 500).

| Model | Mean SD Across DMUs | Mean CV Across DMUs | Mean Spearman ρ (vs. Deterministic Ranking) | SD of ρ | Avg. # Efficient DMUs | SD # Efficient DMUs |
|---|---|---|---|---|---|---|
| Classical (midpoint data) | 0.0314 | 0.0467 | 0.8921 | 0.0817 | 7.18 | 0.943 |
| Unrestricted interval | 0.0287 | 0.0425 | 0.9036 | 0.0724 | 7.02 | 0.814 |
| Proposed model | 0.0239 | 0.0358 | 0.9187 | 0.0642 | 6.75 | 0.696 |
Table 5. DMU-level Monte Carlo stability measures under bounded perturbations.

| Country | Mean θ (MC) | SD(θ) (MC) | π(θ = 1) (MC) | Deterministic θ (Pessimistic) |
|---|---|---|---|---|
| Germany | 1.000000 | 0.000000 | 1.000 | 1.000000 |
| United Kingdom | 0.999997 | 0.000051 | 0.996 | 1.000000 |
| Switzerland | 1.000000 | 0.000000 | 1.000 | 1.000000 |
| Ireland | 1.000000 | 0.000000 | 1.000 | 1.000000 |
| Japan | 1.000000 | 0.000000 | 1.000 | 1.000000 |
| Finland | 0.989942 | 0.055754 | 0.958 | 1.000000 |
| Korea, Rep. | 0.959141 | 0.032530 | 0.236 | 0.961176 |
| Norway | 0.961119 | 0.045862 | 0.424 | 0.899848 |
| Australia | 0.859587 | 0.014655 | 0.000 | 0.861572 |
| France | 0.843162 | 0.013151 | 0.000 | 0.857215 |
| Italy | 0.805343 | 0.011716 | 0.000 | 0.801349 |
| Denmark | 0.605742 | 0.015094 | 0.000 | 0.629790 |
| Sweden | 0.552366 | 0.013598 | 0.000 | 0.574529 |
| Netherlands | 0.564314 | 0.010152 | 0.000 | 0.557551 |
| Spain | 0.557832 | 0.010261 | 0.000 | 0.547220 |
| New Zealand | 0.562528 | 0.179086 | 0.136 | 0.545401 |
| Belgium | 0.523182 | 0.013735 | 0.000 | 0.536051 |
| Austria | 0.481721 | 0.013888 | 0.000 | 0.491722 |
Table 6. Summary statistics of overall efficiency scores (pessimistic evaluation, CRS).

| Statistic | Value |
|---|---|
| Mean | 0.7924 |
| Median | 0.8594 |
| Standard deviation | 0.2049 |
| Minimum | 0.4917 |
| Maximum | 1.0000 |
| Number of efficient DMUs | 6 |
Table 7. Comparative efficiency summary across model specifications (CRS, output-oriented).

| Model Specification | Mean Efficiency | Standard Deviation | Minimum | Maximum | Number of Efficient DMUs |
|---|---|---|---|---|---|
| Classical (midpoint data) | 0.7910 | 0.2092 | 0.4808 | 1.0000 | 6 |
| Proposed model | 0.7924 | 0.2049 | 0.4917 | 1.0000 | 6 |
Table 8. Multiplier diagnostics under CRS (output-oriented).

| Model Specification | Avg. # of Zero Input Weights | Avg. Minimum Positive Weight | Std. Dev. of Input Weights |
|---|---|---|---|
| Classical (midpoint data) | 1.0000 | 0.0000 | 0.5000 |
| Unrestricted interval | 1.0000 | 0.0000 | 0.5000 |
| Proposed model | 0.0000 | 0.2057 | 0.2943 |
Table 9. Efficiency ranking of the 18 OECD countries (CRS, pessimistic evaluation).

| Rank | Country | Efficiency |
|---|---|---|
| 1 | Finland | 1.0000 |
| 2 | Germany | 1.0000 |
| 3 | Ireland | 1.0000 |
| 4 | Japan | 1.0000 |
| 5 | Switzerland | 1.0000 |
| 6 | United Kingdom | 1.0000 |
| 7 | Korea, Rep. | 0.9612 |
| 8 | Norway | 0.8998 |
| 9 | Australia | 0.8616 |
| 10 | France | 0.8572 |
| 11 | Italy | 0.8013 |
| 12 | Denmark | 0.6298 |
| 13 | Sweden | 0.5745 |
| 14 | Netherlands | 0.5576 |
| 15 | Spain | 0.5472 |
| 16 | New Zealand | 0.5454 |
| 17 | Belgium | 0.5361 |
| 18 | Austria | 0.4917 |
Table 10. Average computational time per DMU under increasing sample sizes (CRS, output-oriented).

| Number of DMUs (n) | Classical Model (s) | Unrestricted Interval (s) | Proposed Model (s) |
|---|---|---|---|
| 18 | 0.004 | 0.005 | 0.006 |
| 25 | 0.006 | 0.007 | 0.008 |
| 50 | 0.012 | 0.014 | 0.016 |
| 100 | 0.024 | 0.027 | 0.031 |

Share and Cite

MDPI and ACS Style

Wang, C.-N.; Cahilig, G. A Pessimistic Two-Stage Network DEA Model with Interval Data and Endogenous Weight Restrictions. Mathematics 2026, 14, 917. https://doi.org/10.3390/math14050917
