Article

Crisis Response Modes in Collaborative Business Ecosystems: A Mathematical Framework from Plasticity to Antifragility

1 NOVA School of Science and Technology, UNINOVA-CTS, NOVA University of Lisbon, 2829-516 Caparica, Portugal
2 NOVA School of Science and Technology, UNINOVA-CTS and LASI, NOVA University of Lisbon, 2829-516 Caparica, Portugal
3 Instituto Superior de Engenharia de Lisboa, UnIRE-Unit for Innovation and Research in Engineering, Instituto Politécnico de Lisboa, 1959-007 Lisbon, Portugal
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2421; https://doi.org/10.3390/math13152421
Submission received: 1 July 2025 / Revised: 18 July 2025 / Accepted: 25 July 2025 / Published: 27 July 2025
(This article belongs to the Special Issue Optimization Models for Supply Chain, Planning and Scheduling)

Abstract

Collaborative business ecosystems (CBEs) are increasingly exposed to disruptive events (e.g., pandemics, supply chain breakdowns, cyberattacks) that challenge organizational adaptability and value creation. Traditional approaches to resilience and robustness often fail to capture the full range of systemic responses. This study introduces a unified mathematical framework to evaluate four crisis response modes—plasticity, resilience, transformative resilience, and antifragility—within complex adaptive networks. Grounded in complex systems and collaborative network theory, our model formalizes both internal organizational capabilities (e.g., adaptability, learning, innovation, structural flexibility) and strategic interventions (e.g., optionality, buffering, information sharing, fault-injection protocols), linking them to pre- and post-crisis performance via dynamic adjustment functions. A composite performance score is defined across four dimensions (Innovation, Contribution, Prestige, and Responsiveness to Business Opportunities), using capability–strategy interaction matrices, weighted performance change functions, and structural transformation modifiers. The sensitivity analysis and scenario simulations enable a comparative evaluation of organizational configurations, strategy impacts, and phase-transition thresholds under crisis. This indicator-based formulation provides a quantitative bridge between resilience theory and practice, facilitating evidence-based crisis management in networked business environments.

1. Introduction

Contemporary business ecosystems operate in a volatile, uncertain, complex, and ambiguous (VUCA) environment [1]. Disruptive events, from pandemics and supply chain breakdowns to cyber-attacks and natural disasters, can undermine an organization’s stability, adaptive capacity, and value creation potential. Understanding how systems respond to crises has become a critical concern across multiple disciplines, including systems engineering, strategic management, and network science [2,3].
In this context, resilience is widely recognized as the ability to withstand shocks and recover, combining robustness with adaptivity [4]. Accordingly, research across the aforementioned fields emphasizes not only “bouncing back” but also learning and adaptation after crises. For example, ref. [5] defines resilience as a dynamic capability that enables firms to anticipate threats, respond effectively, and learn from disruptions.
One of the most influential frameworks in this area is Taleb’s typology of fragility–robustness–resilience–antifragility [6]. Extending this framework, we previously introduced transformative resilience to describe systems that recover through structural change [2]. In the present work, we further formalize a new response mode, plasticity, to describe organizations that neither fully recover (as in resilience) nor improve (as in antifragility) but instead stabilize at a lower yet still sustainable performance level. Together with collapsing failure, a total breakdown scenario considered outside the scope of adaptive responses, these four modes (plasticity, resilience, transformative resilience, and antifragility) provide a comprehensive taxonomy of adaptive outcomes in CBEs.
Insights from multiple research streams further inform the understanding of organizational crisis responses. While the literature specifically reviewing resilience and antifragility in CBEs remains scarce, existing studies are mostly conceptual or empirically descriptive [2,7]. In supply chain management, for instance, efforts to quantify resilience are still limited. Most surveys emphasize qualitative frameworks, and relatively few provide robust quantitative metrics for assessing resilience [8]. Moreover, existing quantitative models often fall short in capturing the dynamic and adaptive nature of supply chains, which are shaped by external disruptions and feedback loops. While informative, such metrics typically overlook critical dimensions such as organizational learning, structural adaptation, and long-term capacity development [9].
Likewise, although antifragility has been conceptually defined in terms of convex responses and asymmetric gains under stress, it remains vague in practical application and largely confined to financial or engineered systems, limiting its relevance to complex organizations [10].
Social network analysis (SNA) offers valuable structural indicators—such as centrality, density, and cohesion—for analyzing inter-organizational connectivity. However, its conventional use tends to be static and fails to reflect the evolving, performance-driven nature of organizational responses to crises [11]. Recent studies emphasize the importance of modeling adaptive and time-evolving network behaviors in real-time. For example, agent-based simulations have been used to explore panic-driven behaviors and cascading effects in crisis networks, showing how network structures can amplify behavioral responses under uncertainty [12].
Similarly, relational and temporal analyses of inter-organizational communication during crises such as the COVID-19 pandemic reveal how networks evolve rapidly under stress, requiring more flexible and time-sensitive models [13].
Value Network Analysis (VNA) complements SNA by introducing value-oriented indicators to interpret exchange relationships. However, it often lacks integration with internal organizational capabilities and adaptive behaviors [11]. Moreover, in supply chain contexts, although SNA has shown promise in identifying vulnerabilities, it still needs to be integrated with models of internal capabilities to fully support strategic decision-making [14].
Building on these diverse insights, our previous work contributed to the classification of crisis response strategies and the modeling of underlying adaptive capabilities in CBEs [2,3]. However, those efforts stopped short of proposing a quantitative, indicator-based framework for comparing systemic responses across heterogeneous CBEs.
To address this gap, the present study develops such a framework, enabling a comparative analysis by translating qualitative attributes (e.g., adaptability, diversity, or optionality) into measurable constructs and response scores. This study makes several key contributions:
  • We introduce a hierarchical structure for selecting and organizing capability indicators linked to the four systemic response modes, offering a quantitative extension to earlier qualitative taxonomies.
  • We apply an AHP-based weighting system to incorporate expert judgment into a unified framework tailored for crisis response. This adds methodological rigor and addresses the challenge of prioritizing diverse organizational capabilities within the context of CBE resilience and antifragility.
  • We develop a nonlinear performance scoring model that maps capability levels to dynamic recovery trajectories, thereby capturing the temporal dimension of adaptations—something lacking in static metrics used in prior works.
  • We offer a unified mathematical formulation that integrates internal organizational capabilities with external strategic responses.
  • We extend Taleb’s fragility–resilience–antifragility typology into a more comprehensive framework and embed it within a practical evaluation tool.
  • Finally, we validate the framework using multiple approaches—including consistency checks, sensitivity analyses, and scenario-based simulations—to demonstrate its robustness and applicability.
Collectively, these contributions bridge the gap between theoretical typologies of crisis response and measurable, comparative evaluation tools suitable for complex, networked environments.
Importantly, the model is platform-agnostic and domain-independent, making it adaptable for diverse real-world contexts—such as digital ecosystems, manufacturing alliances, and regional innovation networks—where high interdependence and rapid coordination under uncertainty are crucial.
By synthesizing theoretical foundations and methodological innovations into an integrated framework, this study contributes to both the academic understanding and practical tools for assessing systemic adaptation. The following sections present the framework’s conceptual foundations, indicator structuring, mathematical formulation, and validation strategy.

2. Materials and Methods

2.1. Systemic Response Modes in Collaborative Business Ecosystems

Organizational responses to crises can manifest in various forms of performance, depending on the embedded capabilities within them and the strategies they adopt in the face of the crisis. Based on our extended framework, we define five systemic response modes that reflect distinct post-crisis performance trajectories (Figure 1):
  • Collapse: The organization fails to recover and experiences a long-term or irreversible breakdown of structure and function. This outcome reflects extreme fragility, where the system lacks sufficient robustness or adaptive capacity to survive a disruptive event.
  • Plasticity: The organization stabilizes at a lower, yet sustainable, performance level after a disruption. This represents a partial recovery without a return to the original baseline, often because of (1) a structural rigidity that prevents full reconfiguration, (2) resource constraints that limit the recovery capacity, and (3) hysteresis effects in which past disruptions create lasting organizational memories that hamper future responses. Plasticity therefore differs from resilience (full recovery), adaptive decline (gradual, continuous loss without stabilization), and collapse (catastrophic failure). Example: A factory that, after a major fire or flood, resumes operations at half of its former output, remaining viable but never regaining its previous capacity.
  • Resilience: The organization restores its pre-crisis performance while preserving its core functions and structure. This outcome reflects effective internal capabilities and coordinated coping strategies. During the COVID-19 pandemic, Unilever rapidly reconfigured production lines—for example, converting deodorant facilities to hand sanitizer production—maintained service, and regained its market position through agile adaptation [15].
  • Transformative Resilience: The organization recovers or improves its performance while simultaneously undergoing positive structural change, enabling long-term adaptability to dynamic environments. Many educational systems leveraged the COVID-19 crisis to accelerate digital transformation, adopting blended and hybrid learning models that permanently altered instructional methods and increased flexibility and resilience.
  • Antifragility: The organization benefits from disorder and emerges stronger than before, ultimately exceeding its prior performance. This mode relies on mechanisms such as optionality and the ability to exploit volatility through convex strategies. Example: During the COVID-19 pandemic, Amazon experienced explosive growth—expanding logistics, investing in new technologies, and increasing profits by nearly 200 percent—thus emerging even stronger from the crisis.
Figure 1. Response modes. Source: author’s composition.
Table 1 clarifies these concepts: antifragility, plasticity, and transformative resilience. These response modes correspond to distinct configurations of capabilities and coping strategies. Building on our earlier conceptual model [2], we classify capabilities as inherent internal organizational attributes and group them into domains such as flexibility, learning capacity, diversity, adaptability, and creative potential. Coping strategies include mechanisms or deliberate approaches like information sharing, risk pooling, buffering, fault injection, insurance, and infrastructure investment. For example, plasticity relies on structural stability, with capability indicators like fault tolerance and hysteresis, enabling systems to absorb shocks without returning to the original equilibrium. In contrast, resilience depends more on adaptability and cohesiveness, enabling functional recovery. These relationships form the foundation for the weighting and modeling approaches introduced in the next sections.
The concept of an elastic limit provides a useful lens for distinguishing between different systemic responses. This threshold represents the boundary between reversible and irreversible organizational adaptation. When the post-crisis performance remains above the elastic limit, the organization can recover (resilience) or even improve through transformation (transformative resilience or antifragility). However, once performance drops below the elastic limit, full recovery becomes structurally impossible. In such cases, the system may still stabilize at a diminished level (plasticity) or suffer total breakdown (collapse). Plasticity thus occurs just below the elastic limit—where a constrained recovery is possible but a full reversion to baseline is no longer achievable. Collapse, on the other hand, reflects a condition well below the elastic limit, where viability itself is lost.

2.2. Indicator Selection Methodology

We used a structured, multi-stage methodology to identify organizational capability indicators relevant to each crisis response mode. First, a literature review surveyed existing frameworks on resilience, antifragility, and related concepts in business ecosystems, supply chains, and organization science [17,18]. This generated a preliminary set of ~23 candidate indicators (e.g., adaptability, diversity, fault tolerance, modularity, learning, optionality). Next, an expert evaluation was conducted by five domain specialists (systems engineering, risk management, industrial engineering, collaborative networks). The five experts were selected from diverse relevant domains to ensure a broad perspective and mitigate biases—for example, no single industry or sector dominated the panel. However, we acknowledge that the specific composition of experts could influence the weights, and future studies might expand the expert pool to further generalize the results. Through interviews and a structured discussion, these experts assessed each indicator’s relevance, clarity, measurability, and applicability to CBEs. Finally, a Delphi-style process refined the indicator set over two iterative rounds. Redundant or ambiguous items were removed, and overlapping concepts were consolidated. The result was a final set of 16 indicators, each mapped to one or more of the four modes listed below. Note that although five response modes are conceptually defined, the collapsing mode—representing total organizational failure—is excluded from the quantitative analysis, as it lies outside the scope of adaptive responses.
- Resilience: Diversity, adaptability, efficiency, cohesiveness, structural capability, fault tolerance, and learning.
- Antifragility: Diversity, adaptability, structural capability, fault tolerance, learning, convexity, and creativity.
- Transformative Resilience: Adaptability, structural capability, transformability, creativity, and learning.
- Plasticity: Adaptability, cohesiveness, fault tolerance, and hysteresis.
This final set represents capabilities that are both broadly applicable and analytically tractable within the context of cross-boundary ecosystems.

2.3. AHP Structuring and Weighting

To quantify the relative importance of each selected indicator, we employed the Analytic Hierarchy Process (AHP) [19]. The AHP is a well-established multicriteria decision-making method that relies on pairwise comparisons to derive indicator weights. Notably, the AHP was chosen over alternative weighting approaches because of its suitability for qualitative criteria. It provides a structured means to transform expert subjective judgments into quantitative priority weights [20], and it includes a consistency verification step to ensure reliable comparisons. In contrast, simpler weighting methods (e.g., direct ranking or equal weighting) lack such rigorous consistency checks, which can undermine reliability. These advantages made the AHP particularly appropriate for deriving indicator weights in our framework [21]. We organized our evaluation in a three-level hierarchy: (1) the overall objective (assess systemic response capability), (2) response modes (plasticity, resilience, transformative resilience, antifragility), and (3) capability indicators (mode-specific). For each mode, experts compared indicators pairwise using Saaty’s 1–9 scale [19]. The principal eigenvector of each comparison matrix yielded normalized weights. We checked the consistency via the Consistency Ratio (CR) and retained only matrices with a CR < 0.10. This process produced prioritized weights for the capability indicators within each mode. A sensitivity analysis then tested the robustness: systematically varying key weights had only a moderate effect on mode scores, confirming stable rankings. These AHP-derived weights ensure that each indicator’s contribution to its mode’s score is grounded in expert judgment and consistent criteria.
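A minimal sketch of this weighting step in Python (using NumPy; the random index values are Saaty’s standard table) shows how normalized priority weights are obtained from a reciprocal pairwise comparison matrix via the principal eigenvector, together with the Consistency Ratio check:

```python
import numpy as np

# Saaty's random consistency index (RI) for matrix orders 1..9
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(matrix: np.ndarray) -> tuple[np.ndarray, float]:
    """Return (normalized priority weights, Consistency Ratio) for a
    reciprocal pairwise comparison matrix on Saaty's 1-9 scale."""
    n = matrix.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(matrix)
    k = np.argmax(eigenvalues.real)                 # principal eigenvalue index
    weights = np.abs(eigenvectors[:, k].real)
    weights /= weights.sum()                        # normalize to sum to one
    ci = (eigenvalues[k].real - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n, 0) > 0 else 0.0
    return weights, cr                              # retain the matrix only if cr < 0.10
```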

3. Results and Discussions

This section presents the results of the AHP-based weighting process applied to the capability indicators associated with each crisis response mode. For each mode (resilience, antifragility, transformative resilience, and plasticity), a distinct set of internal capability indicators was evaluated through pairwise comparisons (as described in Section 2.3) to derive their relative importance. The resulting weights represent the proportional contribution of each capability to the system’s ability to exhibit the corresponding response behavior. All matrices were found to be consistent (see Section 4.1), ensuring the reliability of the derived weights.

3.1. AHP-Derived Weights for Capability Indicators

To establish a robust quantitative foundation, the AHP was used to capture expert judgments regarding the importance of capability indicators. Experts conducted pairwise comparisons of indicators for each response mode. The resulting matrices were analyzed using the principal eigenvector method to compute priority weights, which were then normalized to sum to one. These weights serve as the basis for calculating mode-specific capability scores in Section 3.3.
It is important to note that the AHP was applied only to internal capabilities. External strategic factors (e.g., information sharing, buffering) were excluded from the AHP and are instead incorporated later through scenario-based parameters.

3.1.1. Performance Indicators

In this context, performance refers to the organization’s capacity to generate value and maintain relevance under dynamic and stressful conditions. To operationalize this construct, we adopt a four-dimensional indicator system comprising: Innovation (Inv), Contribution (Cnt), Prestige (Prs), and Responsiveness to Business Opportunity (RBO). Innovation (Inv) reflects the organization’s ability to generate novel products, services, or technological solutions. Drawing on the innovation capability literature, it includes metrics such as R&D investment intensity, the number of new offerings, and technology exploitation success [22]. Contribution (Cnt) captures the organization’s role in co-creating value within the collaborative ecosystem. It includes metrics such as successful project involvement, knowledge-sharing, and role centrality within alliances, inspired by constructs from organizational collaboration and cooperative behavior studies [23]. Prestige (Prs) denotes the organization’s social capital and perceived standing in the ecosystem. It draws from social network analysis and reputation theory, using proxies such as citation/reputation metrics, centrality, and recognition from peers [22]. The Responsiveness to Business Opportunity (RBO) is introduced in this work to capture how effectively and rapidly organizations identify, assess, and engage with emerging business opportunities post-crisis. This indicator is operationalized through the number of successfully evaluated and exploited opportunities, normalized by availability and timing. It reflects both the opportunity recognition capacity and execution agility—which are key in volatile environments [7,24]. Detailed metrics for these indicators are provided in Section 4.3 (see performance indicator tables). These four indicators are rooted in distinct yet complementary theoretical domains (innovation studies, collaboration theory, social capital theory, and strategic opportunity recognition). Together, they provide a multi-faceted view of the organizational performance and form the basis for the composite scoring system used in the simulation and evaluation stages.
Table 2 presents the pairwise comparison matrix, where experts assessed the relative importance of each indicator. For instance, Inv was judged to be twice as important as Cnt, three times more important than Prs, and twice as important as RBO. Reciprocal values were used for the inverse comparisons. The matrix was highly consistent, with a Consistency Ratio (CR) of approximately 0.004, which is well below the standard 0.1 threshold.
The normalized weights derived from the matrix are shown in Table 3. These results highlight the central role of Inv in shaping organizational adaptability. The equal weighting of Cnt and RBO reflects their comparable relevance to ecosystem engagement and opportunity-driven behavior. In contrast, Prs, which represents past achievements or reputations, was deemed less critical in the context of responding to dynamic and disruptive environments. These findings align with the broader resilience literature, which emphasizes Innovation and responsiveness as key drivers of adaptive capacity, while recognizing that Prestige, though valuable, may have a limited influence on immediate performances under crisis conditions [7,23].
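As a usage illustration of the eigenvector procedure from Section 2.3, the snippet below encodes the judgments reported for Table 2 (Inv vs. Cnt = 2, Inv vs. Prs = 3, Inv vs. RBO = 2); the remaining off-diagonal entries are illustrative assumptions, so the resulting weights only approximate Table 3.

```python
import numpy as np

# Pairwise comparisons for (Inv, Cnt, Prs, RBO). The first row follows the
# judgments stated in the text; the other rows are illustrative assumptions.
A = np.array([
    [1.0, 2.0, 3.0, 2.0],   # Inv
    [1/2, 1.0, 2.0, 1.0],   # Cnt (assumed)
    [1/3, 1/2, 1.0, 1/2],   # Prs (assumed)
    [1/2, 1.0, 2.0, 1.0],   # RBO (assumed)
])

vals, vecs = np.linalg.eig(A)          # principal eigenvector -> priority weights
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()
print(dict(zip(["Inv", "Cnt", "Prs", "RBO"], w.round(3))))
```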

3.1.2. Plasticity Capability Indicators

Plasticity refers to an organization’s ability to absorb disruptions through adaptive failure reconfiguration, stabilizing at a new (potentially lower) equilibrium without experiencing systemic collapse. This mode reflects a system’s ductile behavior: it does not fully recover but avoids failure by adjusting its internal structure and operations. Based on the expert input, four capability indicators were identified as central to plasticity:
- Adaptability: The capacity to reconfigure processes and structures in response to disruptions.
- Fault Tolerance: The ability to absorb failures without a cascading collapse, through built-in redundancies and safeguards.
- Cohesiveness: The degree of internal alignment and coordination among subunits.
- Hysteresis: The system’s inertia or tendency to resist abrupt change and preserve its prior operational state.
A pairwise comparison matrix (Table 4) was constructed using expert judgments to prioritize these indicators. The matrix exhibited a high internal consistency (CR ≈ 0.011). Table 5 reports the resulting normalized weights. The results show that adaptability is by far the most critical capability for enabling plasticity, receiving nearly half of the total weight (~0.47). This highlights the central role of a flexible and rapid reconfiguration in absorbing external shocks. Fault tolerance received the second-highest weight (~0.28), underscoring the importance of structural safeguards in maintaining basic operations during stress. Cohesiveness and hysteresis received lower weights (~0.16 and ~0.10, respectively), indicating that while internal coordination and change resistance can support structural stability, they play a secondary role in shaping a plastic response. The relatively modest importance of cohesiveness suggests that high internal unity is not a strict prerequisite for plastic behavior, provided the system can adapt and isolate failures effectively. These findings align with conceptual expectations of plasticity: it does not necessitate a full recovery or systemwide coherence but rather reflects the capacity to deform under stress without collapsing.

3.1.3. Resilience Capability Indicators

Resilience is defined as the organization’s ability to recover from disruptions and restore core functions. Based on expert consensus, seven capability indicators were evaluated: adaptability, diversity, efficiency, cohesiveness, structural capability, fault tolerance, and learning. Among these, diversity (availability of multiple pathways to absorb shocks) and learning (the ability to integrate lessons from past disruptions) were introduced as particularly important in this context. As shown in Table 6, experts conducted pairwise comparisons of these indicators. The resulting priority weights (Table 7) indicate that structural capability received the highest weight (~0.283), followed closely by adaptability, diversity, and learning (each around 0.16). Efficiency and fault tolerance were assigned moderate weights (~0.088), while cohesiveness was deemed the least critical (~0.055). These results suggest that resilience depends heavily on preserving core operational structures while enabling flexible, multi-path recovery and learning. Tight internal cohesion and operational efficiency, while supportive, appear to play a secondary role in achieving effective recovery.

3.1.4. Transformative Resilience Capability Indicators

Transformative resilience reflects an organization’s ability not only to recover from a crisis but to do so through structural or strategic transformation—resulting in either a return to its previous performance level or an improvement beyond it. This mode emphasizes forward-looking innovation and adaptive restructuring in response to disruption. Based on the expert input, five capability indicators were evaluated: transformability, adaptability, structural capability, creativity, and learning. Among these, transformability—the ability to deliberately shift organizational structures or strategies—was identified as the most critical factor for this mode.
As shown in Table 8 and Table 9, pairwise comparisons revealed a strong emphasis on transformability (~0.40 weight), followed by adaptability and structural capability (each ~0.20). These results indicate that a successful transformation requires both a strategic reorientation capacity and a foundation of flexibility and structural stability. Creativity, defined as the ability to generate novel solutions in novel contexts, contributed a moderate weight (~0.11), underscoring its importance in reimagining systems during a crisis. Learning, though still positive, received the smallest weight (~0.07), possibly reflecting its more retrospective nature, which plays a supporting rather than leading role in transformative change. The pairwise comparison matrix demonstrated an excellent consistency (CR ≈ 0.004), further validating the reliability of the derived weights.
In summary, transformative resilience relies primarily on the capacity to lead deliberate change, supported by adaptive responsiveness, robust infrastructure, and creative exploration, while learning from past events provides secondary value.

3.1.5. Antifragility Capability Indicators

Antifragility refers to an organization’s ability to benefit from volatility and stress, improving its performance as a direct result of an exposure to disruption. Unlike resilience, which aims to preserve or restore prior conditions, antifragility thrives on instability and fluctuation. Seven capability indicators were evaluated for this mode: convexity, structural capability, adaptability, diversity, learning, fault tolerance, and creativity. Among these, convexity—defined here as the presence of mechanisms that produce disproportionate gains from variability (e.g., strategic options or hedged investments)—was identified as the most distinctive and important feature of antifragile systems.
The AHP-based pairwise comparison matrix (Table 10) showed a high consistency (CR ≈ 0.027). The resulting normalized weights (Table 11) emphasize that antifragility is fundamentally driven by convex strategies, those that turn volatility into opportunity. Structural capability plays a supporting role, enabling the system to remain intact long enough to realize those gains. Adaptability, diversity, and learning form a secondary tier of enabling factors, reflecting the need for flexibility, multiple options, and a capacity to extract value from experience in uncertain environments. Fault tolerance and creativity were rated lowest in this context. Unlike in resilience or transformative resilience, the antifragile approach does not merely aim to survive shocks or reinvent itself but to exploit them, making robustness and novelty less central than asymmetry in outcomes.
In sum, while antifragility shares several enabling traits with other modes (notably adaptability and learning), it is uniquely defined by its emphasis on convex response mechanisms that convert stress into an advantage. Figure 2 illustrates the hierarchical structure of capability indicators across all four adaptive response modes. Each mode draws on a distinct combination of capabilities, with some overlapping attributes (e.g., adaptability, structural capability) appearing in multiple modes but with varying significance.

3.2. Calculation of Performance Score

We measure each organization’s baseline performance through four core indicators: Inv, Cnt, Prs, and RBO. Each indicator combines multiple normalized metrics at both the organizational and ecosystem (CBE) levels. For example, Inv includes the ratio of new products to the total portfolio, the patent output, and the share of opportunities realized via innovation. Each performance indicator X ∈ {Inv, Cnt, Prs, RBO} is calculated as a weighted average of its component metrics. For instance, for organization i
X_i = W_1 F_1 + W_2 F_2 + W_3 F_3 + W_4 F_4,
where each F is a normalized factor score (e.g., patent ratio, collaboration intensity), and the W values are AHP-derived weights that sum to one. Once each core indicator is computed, the overall pre-crisis performance score Pi for the organization i is calculated as a weighted combination of these indicators:
P_i = W_{Inv} \cdot Inv + W_{Prs} \cdot Prs + W_{Cnt} \cdot Cnt + W_{RBO} \cdot RBO.
In this study, the selected weights are
P_i = 0.40 \cdot Inv + 0.15 \cdot Prs + 0.30 \cdot Cnt + 0.15 \cdot RBO.
An analogous formula is used to compute the CBE-level performance index, reflecting the average ecosystem capacity. Because all metrics and weights are normalized, Pi ranges between 0 and 1. This composite score reflects an organization’s pre-crisis adaptive capacity (innovation potential, collaborative engagement, network reputation, opportunity responsiveness) and serves as the baseline for the subsequent mode-specific scoring. Table 12 summarizes the representative metrics and notation used for each performance indicator at both the organization and ecosystem levels.
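A minimal sketch of this composite score, assuming the four indicator values are already normalized to [0, 1] and using the weights selected above:

```python
def performance_score(inv: float, cnt: float, prs: float, rbo: float,
                      w_inv: float = 0.40, w_cnt: float = 0.30,
                      w_prs: float = 0.15, w_rbo: float = 0.15) -> float:
    """Composite pre-crisis performance P_i; inputs are normalized to [0, 1]
    and the default weights follow the values selected in this study."""
    return w_inv * inv + w_cnt * cnt + w_prs * prs + w_rbo * rbo

# Example: a moderately innovative, well-connected organization (illustrative values)
p_base = performance_score(inv=0.7, cnt=0.6, prs=0.5, rbo=0.4)   # -> 0.595
```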

3.3. Mathematical Formulation of Response Scores

With the capability weights established in Section 3.1 and the performance indicators defined in Section 3.2, we now formalize the computation of each response mode score: plasticity, resilience, transformative resilience, and antifragility. Each mode’s score integrates three key components: (1) a capability score (Cmode)—the weighted sum of the organization’s relevant capability indicators for that mode; (2) a strategic factor (Smode)—a scenario-based input capturing external conditions or strategies that influence that mode (e.g., information sharing, buffering, or other interventions relevant to the crisis context); (3) the capability-adjusted performance (ΔP′)—a recovery term representing the actual change in performance under disruption, adjusted by the organization’s capabilities. We describe each of these elements and their relationships below and then present the overall scoring formula.
Capability Scores (Cmode): This component represents the organization’s internal capacity for the given response mode. Using the AHP-derived indicator weights from Section 3.1, we compute Cmode as the weighted sum of the organization’s normalized capability indicator values:
C_{mode} = \sum_{i \in I_{mode}} W_i X_i,
where Imode is the set of relevant capability indicators for that mode, Xi in [0,1] are the normalized scores for each indicator, and Wi are their weights (which sum to one). Because all Xi ∈ [0, 1] and the weights sum to one, the resulting Cmode also lies within [0, 1]. Each capability indicator is itself an aggregate of empirical metrics measured at both the organizational and ecosystem (CBE) levels. This multi-level design allows micro-level observations (e.g., number of network ties, resource slack, recovery speed) to be systematically combined into macro-level behavioral capacities such as resilience or antifragility. (For example, an indicator like “adaptability” incorporates firm-level metrics like process flexibility and ecosystem-level metrics like the presence of alternative partners; see below.)
The strategic factor (Smode) is an external parameter (not part of the AHP weighting) that captures scenario-specific interventions or conditions relevant to the mode. This could be a binary or scaled input reflecting, for instance, the degree of information sharing, the availability of buffer resources, or the application of stress-testing protocols in the scenario. A high Smode would indicate strongly favorable external support for that response mode (e.g., extensive buffers or knowledge-sharing in place), whereas a low Smode might reflect an absence of such strategic supports.
To operationalize the response mode scoring, we break down the computation into three sequential steps: (1) quantifying the baseline performance recovery after the disruption (ΔP); (2) adjusting this recovery value by the internal capability (ΔP′); and (3) integrating the capability, strategy, and performance into a final response mode score. These steps are detailed below.
  • Step 1: Baseline Performance Recovery (ΔP)
We first quantify how much performance is regained after the crisis using the baseline recovery fraction ΔP, defined as
\Delta P = \frac{P_{recovery} - P_{disruption}}{P_{base} - P_{disruption}},
where
- Pbase is the pre-crisis performance level;
- Pdisruption is the performance immediately after the disruption;
- Precovery is the performance after a recovery period.
ΔP captures the fraction of the lost performance that is regained, where 0 means no recovery, 1 means full recovery back to the baseline, and values greater than 1 indicate performance that exceeds the pre-crisis baseline. Each performance level (Pbase, Pdisruption, Precovery) is calculated as the weighted average of the four performance indicators (Inv, Cnt, Prs, and RBO), defined in Section 3.2. By defining ΔP in terms of this composite performance score, we ensure that the recovery metric reflects changes across all key dimensions of the organizational performance (e.g., Innovation and Contribution), rather than focusing on a single aspect.
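A minimal sketch of the baseline recovery fraction, with illustrative performance levels (the handling of the no-loss case is a modeling assumption, not specified in the text):

```python
def baseline_recovery(p_base: float, p_disruption: float, p_recovery: float) -> float:
    """Fraction of lost performance regained after the crisis (Delta P).
    0 = no recovery, 1 = full recovery, >1 = better than the pre-crisis baseline."""
    lost = p_base - p_disruption
    if lost <= 0:
        return 1.0   # no measurable loss; treated as fully recovered (assumption)
    return (p_recovery - p_disruption) / lost

# Illustrative values: performance drops from 0.60 to 0.35, then climbs back to 0.55
delta_p = baseline_recovery(0.60, 0.35, 0.55)   # -> 0.8 (80% of the loss regained)
```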
  • Step 2: Capability-Adjusted Performance (ΔP′)
Next, we adjust ΔP by the organization’s capability score for the mode. We introduce two parameters, α and β, to model how internal capabilities amplify performance recovery. Formally,
\Delta P' = (\alpha + \beta \cdot C_{mode}) \cdot \Delta P,
where
- α is the baseline elasticity (the recovery capacity achievable without any adaptive capability);
- β controls the sensitivity of the recovery to the capability score.
This formulation allows ΔP′ to reflect not just the basic recovery but also the organization’s potential to exceed its baseline performance due to strong internal capacities. For example, with α = 1 and β = 0.1, a high-capability organization (Cmode = 1.0) could recover 110% of the lost performance, whereas with a very low capability (Cmode ≈ 0), ΔP′ ≈ α⋅ΔP (i.e., essentially no amplified gain).
For transformative resilience, we allow α > 1 (for example, α = 1.2) to reflect an inherent performance uplift following structural changes.
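The adjustment in Step 2 can be sketched as follows; the defaults reproduce the worked example above (α = 1, β = 0.1), and the second call uses the illustrative α = 1.2 mentioned for transformative resilience:

```python
def capability_adjusted_recovery(delta_p: float, c_mode: float,
                                 alpha: float = 1.0, beta: float = 0.1) -> float:
    """Delta P' = (alpha + beta * C_mode) * Delta P, where alpha is the baseline
    elasticity and beta the sensitivity of recovery to the capability score."""
    return (alpha + beta * c_mode) * delta_p

# Full baseline recovery (Delta P = 1.0) amplified by a maximal capability score:
print(capability_adjusted_recovery(delta_p=1.0, c_mode=1.0))               # -> 1.1 (110%)
# Transformative resilience with the illustrative structural uplift alpha = 1.2:
print(capability_adjusted_recovery(delta_p=1.0, c_mode=0.8, alpha=1.2))    # -> 1.28
```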
  • Step 3: Response Mode Score
In the final step, we calculate the overall score for each response mode as a weighted aggregate of three components: the capability score, the strategic factor, and the adjusted performance outcome. For a given mode—plasticity (Pl), resilience (Re), transformative resilience (Tr), or antifragility (An)—the score is computed as
Score_{mode} = W_c \cdot C_{mode} + W_s \cdot S_{mode} + W_p \cdot \Delta P', \qquad (6)
with Wc + Ws + Wp = 1. The weights Wc, Ws, and Wp reflect the conceptual emphasis specific to each response mode. In this study, we assign illustrative values to these weights based on conceptual reasoning, rather than empirical optimization, as our primary aim is methodological.
For plasticity, we assign balanced weights to the capability and strategy (approximately 0.4 each) and a smaller weight to performance (around 0.2). This reflects the notion that plasticity relies primarily on the internal adaptive capacity and buffering strategies, while an immediate performance recovery is less critical.
For resilience, the highest weight is placed on performance (Wp = 0.40) to reflect the focus on the output recovery. The capability is weighted equally (Wc = 0.40), while the strategy receives a smaller weight (Ws = 0.20). This allocation highlights the centrality of a rapid performance rebound in resilience, supported by underlying structural and adaptive capacities. In contrast, external strategies play a somewhat smaller role in this mode.
For both transformative resilience and antifragility, we assign half of the total weight to the performance (Wp = 0.50), a substantial portion to the capability (Wc = 0.40), and a minimal weight to the strategy (Ws = 0.10). This reflects the idea that these response modes are driven more by strong internal capacities and emergent, opportunistic adaptations than by predefined strategic plans. Consequently, the strategic factor is down-weighted in these cases.
The selected weight values are summarized in Table 13, which also presents the specific scoring formulas for each response mode based on the illustrative configuration.
It is important to note that these weights can be adapted for different industries or crisis contexts. In practice, the weight calibration may be guided by the historical data or expert input. For example, one could apply a regression analysis or machine learning techniques to past crisis response data to estimate the relative influence of the capability, strategy, and performance on outcomes—thereby enabling an evidence-based weight selection.
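One possible sketch of such an evidence-based calibration (hypothetical episode data, ordinary least squares via NumPy; not an approach applied in this study) estimates the relative influence of capability, strategy, and adjusted performance and renormalizes the coefficients to sum to one:

```python
import numpy as np

# Hypothetical historical records: columns = (C_mode, S_mode, Delta P'),
# target = observed response outcome score for each past crisis episode.
X = np.array([[0.8, 0.6, 1.05],
              [0.5, 0.9, 0.70],
              [0.3, 0.4, 0.40],
              [0.9, 0.7, 1.20],
              [0.6, 0.2, 0.65]])
y = np.array([0.85, 0.62, 0.38, 0.95, 0.58])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit without intercept
weights = np.clip(coef, 0, None)
weights /= weights.sum()                       # renormalize so Wc + Ws + Wp = 1
print(dict(zip(["Wc", "Ws", "Wp"], weights.round(2))))
```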
Following the general scoring model in Equation (6), the next step is to derive mode-specific formulas—corresponding to plasticity, resilience, transformative resilience, and antifragility—by inserting the weight values from Table 13.
These formulations (Equations (7)–(10)) incorporate the unique indicator sets and weight allocations for each response mode, providing a coherent transition from the general model to the specialized scoring structure. Based on the previously defined capability indicators, each mode’s capability score (Cmode) is computed accordingly.
C_{Pl} = Ad \cdot W_{Ad} + Coh \cdot W_{Coh} + FT \cdot W_{FT} + Hys \cdot W_{Hys}, \qquad (7)
C_{Re} = Di \cdot W_{Di} + Ad \cdot W_{Ad} + Ef \cdot W_{Ef} + Coh \cdot W_{Coh} + StC \cdot W_{StC} + FT \cdot W_{FT} + Lrn \cdot W_{Lrn}, \qquad (8)
C_{Tr} = Ad \cdot W_{Ad} + StC \cdot W_{StC} + Tr \cdot W_{Tr} + Crt \cdot W_{Crt} + Lrn \cdot W_{Lrn}, \qquad (9)
C_{An} = Di \cdot W_{Di} + Ad \cdot W_{Ad} + StC \cdot W_{StC} + FT \cdot W_{FT} + Lrn \cdot W_{Lrn} + Cnv \cdot W_{Cnv} + Crt \cdot W_{Crt}. \qquad (10)
By incorporating Equations (7)–(10) into the model, we explicitly tailor the general scoring formula (Equation (6)) to each specific crisis response mode. These formulations provide clear, mode-specific interpretations of how capabilities and strategies contribute to performance outcomes.
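The sketch below ties Equations (6)–(10) together for one mode; the indicator values are hypothetical, the plasticity weights approximate those reported in Section 3.1.2, and the (Wc, Ws, Wp) triplets follow the illustrative configuration discussed for Table 13.

```python
# Approximate AHP weights for plasticity (Section 3.1.2); sums to ~1 after rounding
W_PL = {"Ad": 0.47, "FT": 0.28, "Coh": 0.16, "Hys": 0.10}

# Weight triplets (Wc, Ws, Wp) per response mode, as discussed for Table 13
MODE_WEIGHTS = {"Pl": (0.40, 0.40, 0.20),
                "Re": (0.40, 0.20, 0.40),
                "Tr": (0.40, 0.10, 0.50),
                "An": (0.40, 0.10, 0.50)}

def capability_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """C_mode: weighted sum of normalized capability indicators (Equations (7)-(10))."""
    return sum(weights[name] * indicators[name] for name in weights)

def response_mode_score(mode: str, c_mode: float, s_mode: float, delta_p_adj: float) -> float:
    """Score_mode = Wc*C_mode + Ws*S_mode + Wp*Delta P' (Equation (6))."""
    w_c, w_s, w_p = MODE_WEIGHTS[mode]
    return w_c * c_mode + w_s * s_mode + w_p * delta_p_adj

# Example: hypothetical organization evaluated under the plasticity mode
x = {"Ad": 0.7, "FT": 0.6, "Coh": 0.5, "Hys": 0.4}    # normalized indicator values
c_pl = capability_score(x, W_PL)                       # -> 0.617
print(response_mode_score("Pl", c_pl, s_mode=0.5, delta_p_adj=0.6))
```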
With the scoring model now established, we proceed to illustrate how the capability indicators themselves are defined within our multi-level framework. Each indicator—introduced in Section 3.1—is computed from empirical metrics at both the organizational and ecosystem (CBE) levels.
Table 14 (presented next) maps each capability indicator to its corresponding metrics and shows how these relate to specific response modes. The table also provides the mathematical formulation for each indicator. In the following section, we highlight the meaning and measurement of key indicators (along with their abbreviations), illustrating how micro- and macro-level components are integrated into each indicator’s computation.
Adaptability (Ad) reflects an organization’s capacity to reconfigure its structures and processes in response to disruptions [2]. It is measured through three metrics:
  • Ad1ᵢ—Structural Flexibility: Quantifies the organization’s network reachability under disruption, accounting for both direct connections and weighted two-hop (indirect) links. This metric is grounded in social network analysis (SNA) concepts of closeness centrality and reachability, where indirect paths (with reduced weights) contribute to overall resilience [25,26].
  • Ad2ᵢ—Technology Adoption and Integration: Assesses the organization’s uptake of new technologies relative to the ecosystem average. It is calculated as a normalized ratio of the firm’s technology adoption level (scaled for deployment or scalability) to the network-wide mean. This relative metric aligns with innovation capability measures in the literature that use normalized performance ratios for cross-firm comparisons [27].
  • Ad3ᵢ—Successful Market Entry and Competition: Reflects the ability to adapt the strategy by entering new markets. It is given by the success rate of market entry attempts (successful entries divided by total attempts), following the approach of gradual internationalization metrics in strategic management [28].
Cohesiveness (Coh) denotes the degree of the internal alignment, mutual trust, and inter-organizational connectedness within the CBE [2]. It is quantified by two metrics:
  • Coh1ᵢ—Reciprocity of Collaboration Opportunities: Measures the proportion of mutual (bidirectional) collaboration ties that the organization maintains, normalized by the total number of potential partners (N–1). This captures network reciprocity, as used in SNA to gauge bilateral engagement in collaborative structures [29,30].
  • Coh2ᵢ—Density of Direct Inter-Organizational Ties: Captures the density of the organization’s direct connections by taking the fraction of actual ties to all possible ties. This serves as a proxy for structural cohesion—a higher network density enables quicker communication, coordinated action, and collective resilience. The metric draws on graph theoretic concepts linking a high density to robust coordination and stability in networks [25,26].
Convexity (Cnv) signifies post-crisis performance gains exceeding the pre-crisis baseline—a defining feature that characterizes antifragility [2]. It is measured via three metrics:
  • Cnv1ᵢ—Relative Performance Improvement: The normalized increase in the performance after a crisis relative to the pre-crisis level (e.g., post-crisis productivity gain). This metric captures a “convex” response, where the exposure to stress yields net positive performance improvements beyond a simple recovery [31].
  • Cnv2ᵢ—Impact of Strategies on Antifragile Gains: Calculates the average benefit obtained from specific stress-responsive strategies deployed during or after the crisis. It reflects the contribution of strategic initiatives (such as hedging or flexible investments) to upside outcomes, drawing on real options theory and the strategic flexibility literature emphasizing gains under uncertainty [32].
  • Cnv3ᵢ—Opportunity Capitalization Rate: Measures the organization’s effectiveness in capitalizing on new opportunities arising from the disruption. Formally, it is the average contribution of crisis time strategic actions to post-crisis performance gains, capturing the upside potential of optionality. This metric is inspired by the real options theory’s asymmetric payoff (convex return) concept and aligns with strategic management studies on seizing opportunities under volatility [33].
Creativity (Crt) reflects an organization’s ability to generate novel and valuable solutions in the face of a crisis [2]. Two metrics are used:
  • Crt1ᵢ—R&D Intensity: Defined as the ratio of the R&D expenditure to operating expenses, indicating the level of investment in innovation relative to operations. This normalized input–effort ratio is a standard metric in innovation and resilience studies, reflecting the firm’s exploratory orientation and self-adaptive capacity under stress [34].
  • Crt2ᵢ—Network-Based Innovation Potential (Betweenness Centrality): Uses the betweenness centrality to gauge the organization’s structural position in the knowledge-sharing network. A higher betweenness value indicates that the organization serves as a key broker or bridge connecting otherwise disconnected partners, which enhances access to diverse knowledge and cross-domain innovation. This metric is grounded in the network theory of brokerage and information diffusion, where central brokers are associated with a greater creative performance and idea recombination [30,35].
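A minimal sketch of the Crt2ᵢ metric with NetworkX, assuming the knowledge-sharing network is available as an undirected graph (the edge list here is illustrative):

```python
import networkx as nx

# Illustrative knowledge-sharing network; node "B" bridges two partner clusters
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")])

# Normalized betweenness centrality: share of shortest paths passing through each node
crt2 = nx.betweenness_centrality(G, normalized=True)
print(crt2["B"])   # the broker node receives the highest score
```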
Diversity (Di) represents the breadth of the organization’s product offerings, internal competencies, and market reach—a breadth that helps buffer against localized shocks [2]. It is quantified through three complementary metrics:
  • Di1ᵢ—Product/Service Portfolio Diversity: The number of distinct products or services the organization offers, normalized by the ecosystem average. This indicates the firm’s ability to pivot across offerings and spread risk. The metric applies a normalized product count ratio grounded in the portfolio diversification theory, wherein a broader portfolio (relative to peers) reduces the concentration risk [36].
  • Di2ᵢ—Competency Diversity: The share of distinct internal competencies (skills, capabilities, resources) that the organization possesses relative to the total variety of competencies present in the ecosystem. This is essentially a specialization versus diversification measure of the firm’s knowledge and resource base. It mirrors metrics in the resource-based view of the firm, which is used to gauge internal adaptability through a balanced competency profile [34].
  • Di3ᵢ—Market Entropy (Diversification Index): The evenness of the organization’s activity distribution across different markets, quantified using the Shannon entropy formula. A higher entropy value means the firm’s operations are more evenly spread across markets or segments, indicating a well-diversified market portfolio that minimizes vulnerability to any single market disruption. Entropy-based diversification measures are well-established in economics, ecology, and information theory for evaluating robustness through balance [37].
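A minimal sketch of the Di3ᵢ metric, assuming market activity shares are available; dividing by ln(m) is an added normalization assumption so the value stays in [0, 1] like the other indicators.

```python
import math

def market_entropy(shares: list[float]) -> float:
    """Shannon entropy of market activity shares, normalized by ln(m) so that
    1 means a perfectly even spread across markets (normalization assumed here)."""
    shares = [s for s in shares if s > 0]
    m = len(shares)
    if m <= 1:
        return 0.0
    h = -sum(s * math.log(s) for s in shares)
    return h / math.log(m)

print(market_entropy([0.25, 0.25, 0.25, 0.25]))   # -> 1.0 (fully diversified)
print(market_entropy([0.85, 0.05, 0.05, 0.05]))   # -> ~0.42 (concentrated)
```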
Efficiency (Ef) indicates how effectively an organization converts collaborative efforts into tangible, value-generating outcomes, even under dynamic or uncertain conditions [2]. It is evaluated by three metrics:
  • Ef1ᵢ—Revenue Proportion from New Products/Services: The proportion of total revenue derived from recently launched products or services. This revenue share metric is widely used to assess how efficiently a firm’s innovation activities translate into market output and profits. A higher value signifies the effective commercialization of innovation, which is consistent with frameworks for innovation performance (e.g., innovation success rates) [38].
  • Ef2ᵢ—Collaboration Success Rate: The fraction of the organization’s initiated collaborations that result in successful outcomes (e.g., completed projects or implemented solutions). This is a standard success rate indicator used in studies of strategic alliances and R&D partnerships to reflect execution effectiveness. A higher success rate denotes a strong capability in managing inter-organizational projects and realizing collaborative value [39].
  • Ef3ᵢ—Network Utilization Efficiency: The ratio of actively utilized network ties (ongoing collaborative relationships) to the total number of available ties. This utilization metric distinguishes mere connectivity from productive collaboration, highlighting that it is the activation of connections—not just their existence—that generates value. The concept is rooted in social capital and structural embeddedness theory, emphasizing the efficient use of one’s network for tangible outcomes [40].
Fault tolerance (FT) reflects an organization’s robustness under acute stress—essentially its capacity to absorb shocks and continue functioning without catastrophic failure [2]. It is quantified by the following metrics:
  • FT1ᵢ—Productivity Loss During Disruption: The proportional drop in productivity during a crisis, relative to the normal (pre-disruption) level. It is expressed as a normalized loss ratio, where a smaller value indicates a higher tolerance to shocks. This metric captures the system’s absorptive capacity and corresponds to loss-of-function measures used in resilience engineering and critical infrastructure protection [41].
  • FT2ᵢ—Employee Retention Rate During Disruption: The ratio of employees retained throughout the disruption period to the total workforce. This indicates workforce continuity and the stability of human capital under stress. It draws on organizational studies showing that retaining talent during crises is linked to adaptive capacity and resilience (e.g., leadership and memory sustaining the organization) [42].
  • FT3ᵢ—Partial Recovery Time: The time required for the organization to stabilize operations after a disruption, even if the full pre-crisis performance is not yet restored. It is measured as a normalized recovery time to enable comparisons across organizations of different sizes. Shorter partial recovery times signify a faster rebound to a stable state, paralleling the time-to-recovery metrics used in resilience assessment frameworks (with a focus here on regaining a functional baseline rather than complete recovery) [43].
Hysteresis (Hys) refers to the persistent or residual effects of a disruption, often seen in a delayed or incomplete recovery even after external conditions normalize [2]. It is evaluated through three complementary metrics:
  • Hys1ᵢ—Immediate Performance Drop: The immediate loss in performance caused by a disruption, quantified as the normalized difference between the pre-crisis performance and the post-shock level. This indicates the severity of the initial impact and the organization’s short-term vulnerability. It parallels shock impact measures in resilience engineering and disaster economics that assess abrupt performance degradation [44].
  • Hys2ᵢ—Time to Full Recovery: The duration required for the organization to fully return to its pre-disruption performance level. This metric reflects the delay in recovery and is especially pertinent in scenarios where hysteresis leads to prolonged stagnation. It builds on infrastructure and supply chain resilience models in which the time to recovery is a core indicator of system resilience [45,46].
  • Hys3ᵢ—Residual Effects of Past Disruptions: The persistent impact of historical disruptions on the current performance, calculated as the average influence of past shock events on present recovery outcomes. This metric embodies the idea of path dependence and cumulative effects: past crises leave lasting “scars” on organizational structures or behaviors. The approach is supported by studies of organizational memory and legacy effects that impede a full recovery over the long term [46,47].
Learning (Lrn) represents an organization’s capacity to internalize lessons from past disruptions and translate them into an enhanced future performance [2]. It reflects dynamic adaptation and institutional memory and is captured by the following:
  • Lrn1ᵢ—Performance Improvement Post-Disruption: The relative increase in performance observed after a disruption compared to the pre-crisis baseline, expressed as a normalized gain. A positive value indicates that the organization not only recovered but improved due to learning from the event. This metric aligns with learning curve models and observations of post-crisis performance enhancement in resilience research [47,48].
  • Lrn2ᵢ—Frequency of Knowledge-Sharing Events: The frequency with which the organization engages in knowledge-sharing activities (both formal and informal) in a given time frame, normalized to a standard period. This metric reflects a culture of continuous learning and collaboration; it is supported by knowledge management and social learning theories, wherein regular information exchange and joint problem-solving strengthen resilience [49,50].
  • Lrn3ᵢ—Investment in Crisis-Relevant Training: The percentage of total revenue invested in training programs focused on crisis management and recovery skills. This budget allocation ratio serves as a proxy for the organization’s commitment to learning and preparedness. Using financial investment in training as an indicator is common in the resilience literature to gauge proactive capacity-building efforts [51].
Structural capability (StC) represents the resilience afforded by an organization’s structural position in the network [2]. It encompasses the quality of the firm’s connectivity, the balance of information or resource flows, and the presence of built-in redundancies. This capability is assessed by three metrics:
  • Stc1ᵢ—Degree Centrality: The organization’s normalized degree centrality in the ecosystem’s network, a standard SNA measure of how well-connected a node is. Higher degree centrality implies greater embeddedness and visibility in the network, leading to faster information diffusion and better access to collaborative resources. This metric is widely used in modeling networked systems’ resilience, as highly connected nodes can more readily mobilize support or alternate pathways during disruptions [52,53].
  • Stc2ᵢ—Flow Asymmetry: The degree of imbalance between incoming and outgoing connections, computed as the normalized absolute difference between the in-degree and out-degree (or weighted link flows). This metric, derived from a directed network analysis, identifies structural imbalances—an extreme asymmetry suggests an over-reliance on either incoming or outgoing flows (e.g., dependence on a single source or bottlenecked outflow), whereas a balanced flow structure supports more resilient, reciprocal interactions. The concept is rooted in network stability and information flow optimization studies, where symmetry in exchange is linked to robustness [54,55].
  • Stc3ᵢ—Redundancy and Backup System Preparedness: The proportion of the organization’s critical systems or nodes (e.g., key suppliers, IT infrastructure, logistics channels) that have backup or failover alternatives. It is calculated as a redundancy ratio (number of systems with backups divided by total critical systems). This metric is common in reliability engineering, supply chain design, and business continuity planning; a higher value indicates greater structural robustness and an ability to maintain operations by switching to backups if primary systems fail [56].
Transformability (Tr) gauges an organization’s ability to undergo significant structural or strategic change in response to disruption [2]. It reflects a deep adaptive capacity for reinvention (beyond incremental flexibility) and is evaluated through three metrics:
  • Tr1ᵢ—Structural Flexibility via Edge Modifications: The extent of the network reconfiguration post-disruption, measured as the ratio of modified connections (edges added or removed in the organizational network) to the total pre-disruption connections. This network reconfiguration ratio indicates the responsiveness to shock: a higher value means the organization substantially restructured its partnerships or linkages in reaction to the crisis. The metric is conceptually rooted in dynamic network adaptation models, where the timely reorganization of ties is a sign of resilience and flexibility [57,58].
  • Tr2ᵢ—Redundancy via Backup Nodes: The availability of alternative nodes to replace or supplement critical partners/nodes during disruptions. It is quantified as the ratio of backup or secondary nodes (e.g., substitute suppliers, backup facilities) to all primary critical nodes. This redundancy coverage metric is widely used in supply chain and IT resilience planning to evaluate preparedness for rapid substitution; a higher ratio indicates that the organization can quickly reroute or maintain functions under stress [45,59].
  • Tr3ᵢ—Decentralization of Decision-Making: The degree to which the decision-making power is distributed across the network, proxied by the normalized dispersion (variance or entropy) of node degrees in the collaboration network. A more uniform degree distribution (high entropy across nodes’ connectivity) suggests that the control and influence are not concentrated in a few nodes but are rather spread out. This indicates a decentralized structure, which reduces the vulnerability to any single point of failure in a crisis. The metric is inspired by entropy-like measures of heterogeneity and formal decentralization indices in organizational network design [60,61].
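As a complement, the sketch below illustrates two of these transformability metrics, the edge modification ratio (Tr1) and an entropy-based decentralization proxy (Tr3), on hypothetical pre- and post-disruption edge lists; the inputs and normalization choices are assumptions made purely for illustration.

```python
import math

# Illustrative sketch of two transformability metrics under assumed inputs
# (not the paper's reference code).

def edge_modification_ratio(edges_before, edges_after):
    # Tr1: (edges added + edges removed) / edges present before the disruption.
    before, after = set(edges_before), set(edges_after)
    modified = len(after - before) + len(before - after)
    return modified / len(before)

def degree_entropy(degrees):
    # Tr3 proxy: Shannon entropy of the degree distribution, normalized to [0, 1].
    total = sum(degrees)
    probs = [d / total for d in degrees if d > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(degrees))  # 1.0 = perfectly even connectivity (decentralized)

pre  = [("A", "B"), ("B", "C"), ("C", "D")]
post = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "E")]
print(edge_modification_ratio(pre, post))   # 3 modified edges / 3 original edges = 1.0
print(degree_entropy([3, 3, 3, 3]))         # even degrees -> 1.0
print(degree_entropy([9, 1, 1, 1]))         # hub-dominated -> well below 1.0
```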
Table 14 highlights how specific indicators (e.g., adaptability, fault tolerance) contribute to multiple modes with varying weights, while others (e.g., convexity or transformability) are unique to antifragility and transformative resilience, respectively. This structured mapping supports traceability from raw organizational metrics to aggregated response mode scores and enables a targeted diagnosis of strengths and gaps within each organization.

4. Model Validation and Sensitivity Analysis

We conducted a multi-faceted validation of the proposed response scoring framework to ensure its robustness and scientific soundness. Following best practices in complex systems modeling, we performed the following: (1) internal consistency checks of the AHP-derived weights, (2) a sensitivity analysis of the weight parameters, (3) a correlation analysis to verify whether capability scores relate to performance outcomes as expected, and (4) scenario-based simulations to examine the model’s holistic behavior. All indicator values and performance measures were normalized to [0, 1] ranges to allow comparison. The validation results, presented in this section, support the model’s reliability and provide confidence in its theoretical underpinnings. We also discuss limitations and avenues for further empirical validation.

4.1. AHP Consistency Ratio

To ensure the reliability of the expert-derived capability weights, we assessed the internal consistency of the pairwise comparison matrices using the standard AHP methodology proposed by [19]. For each matrix, we computed the principal eigenvalue λmax, the Consistency Index (CI), and the Consistency Ratio (CR) as follows:
CI = (λmax − n)/(n − 1),
CR = CI/RI,
where
- n is the number of indicators in the matrix;
- RI is the Random Index from Saaty’s scale (e.g., RI = 0.90 for n = 4, 1.12 for n = 5, 1.32 for n = 7).
A CR below 0.10 indicates acceptable consistency; values closer to zero reflect greater coherence in the expert judgments.
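For reproducibility, the following sketch computes λmax, CI, and CR with numpy for a pairwise comparison matrix, using the performance indicator matrix of Table 2 as input; the helper name ahp_consistency and the partial RI lookup table are ours, not part of the original toolchain.

```python
import numpy as np

# Sketch of the AHP consistency check: principal eigenvalue, CI, and CR.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's Random Index (excerpt)

def ahp_consistency(matrix):
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    lam_max = max(np.linalg.eigvals(a).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)              # Consistency Index
    cr = ci / RI[n]                           # Consistency Ratio
    return lam_max, ci, cr

# Pairwise comparison matrix for the performance indicators (Table 2).
m = [[1,   2,   3,   2],
     [1/2, 1,   2,   1],
     [1/3, 1/2, 1,   1/2],
     [1/2, 1,   2,   1]]
lam, ci, cr = ahp_consistency(m)
print(f"lambda_max={lam:.3f}, CI={ci:.4f}, CR={cr:.4f}")  # CR < 0.10 -> acceptable
```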
A summary of consistency results by the response mode is as follows:
  • Plasticity: For n = 4 (4 indicators), RI = 0.90. We obtained λmax = 4.031, so CI = (4.031 − 4)/3 = 0.0103 and CR = 0.0103/0.90 ≈ 0.0115 (1.15%).
  • Resilience: For n = 7 (7 indicators), RI = 1.32. λmax = 7.0700, giving CI = (7.0700 − 7)/6 = 0.0117 and CR = 0.0117/1.32 ≈ 0.0088 (0.88%).
  • Transformative resilience: For n = 5 (5 indicators), RI = 1.12. λmax = 5.0183, so CI = (5.0183 − 5)/4 = 0.0046 and CR = 0.0046/1.12 ≈ 0.0041 (~0.41%).
  • Antifragility: For n = 7 (7 indicators), RI = 1.32. λmax = 7.2148, yielding CI = (7.2148 − 7)/6 = 0.0358 and CR = 0.0358/1.32 ≈ 0.0271 (~2.71%).
All calculated CR values fall well below the 0.10 threshold, with three of the four matrices exhibiting less than 1% inconsistency. These results indicate that the expert judgments were highly consistent and reflect a coherent understanding of the relative importance of the indicators for each response mode. The high internal consistency of the AHP weights provides a solid foundation for the subsequent modeling and analysis.
Note: Minor rounding adjustments were made for clarity in tabulated weights (e.g., Contribution and RBO were assigned 0.227 each). Earlier draft versions incorrectly reported these as 0.30 and 0.15, which would have introduced an artificial inconsistency. All reported weights now correspond precisely to the normalized AHP results.

4.2. Sensitivity Analysis

To assess the robustness of the response mode scoring framework, we conducted a one-factor-at-a-time (OFAT) sensitivity analysis [62]. Since the AHP-derived weights come from expert judgments and carry some uncertainty, we tested how small changes in those weights would affect the outcome. This approach evaluates the impact of small perturbations in the most influential AHP-derived capability weights on the final mode scores. For each response mode, the highest weighted capability indicator was adjusted by ±10%, with remaining weights re-normalized to preserve ∑Wi = 1, and the mode’s performance score was recomputed under a fixed test scenario. The results showed minimal changes in the composite scores:
  • Plasticity: The top indicator is adaptability (WAd = 0.4658). We decreased it by 10% (to 0.4192) and increased it by 10% (to 0.5124), adjusting the other three plasticity weights so the total = 1. The plasticity score of the test profile changed by less than 0.1% (it remained essentially 1.000 in our normalized scenario). Such a negligible change indicates that the plasticity score is not overly sensitive to a small error in the adaptability weight.
  • Resilience: The top indicator is the structural capability (WStC = 0.2828). We varied this to 0.2545 and 0.3111 (±10%) and recomputed the resilience score. The result changed by less than 0.01% (from 1.0001 to 1.0001 in normalized terms, i.e., effectively no change).
  • Transformative Resilience: The highest weight indicator is transformability (WTr = 0.4028). Adjusting it by ±10% (to 0.3625 and 0.4431) led to a <0.01% change in the transformative resilience score (approximately 0.9999 in both cases, essentially unchanged).
  • Antifragility: The highest indicator is convexity (WCnv = 0.3464). We varied this to 0.3118 and 0.3810 (±10%) and observed the effect on the antifragility score. This had a slightly larger impact than the other modes: the antifragility score changed from a baseline of about 0.78 to 0.73 (when the convexity weight was −10%) or 0.82 (+10%). This is roughly a 5% change in the score for a 10% change in the weight (small, but noticeable compared to the virtually zero change in other modes).
The results showed that for plasticity, resilience, and transformative resilience, such perturbations caused virtually no change in scores, and even for antifragility the effect was minor (~5% change for a 10% weight shift). This is consistent with the design of the model, as antifragility relies heavily on convexity to reflect the asymmetric gain under the disruption. The stability of the outputs under these perturbations confirms the framework’s reliability, suggesting that our AHP-weighted findings would hold even if the input weights were slightly imprecise.
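A minimal sketch of this OFAT procedure is given below: the largest weight is perturbed, the remaining weights are re-normalized to sum to one, and a weighted indicator score is recomputed. The plasticity weights are taken from Table 5, while the test indicator values are hypothetical.

```python
import numpy as np

# One-factor-at-a-time (OFAT) perturbation sketch for the AHP capability weights.
# Test indicator values are hypothetical inputs, not empirical data.

def perturb_top_weight(weights, factor):
    w = np.array(weights, dtype=float)
    top = int(np.argmax(w))
    w[top] *= factor                               # e.g., 0.9 (-10%) or 1.1 (+10%)
    others = [i for i in range(len(w)) if i != top]
    w[others] *= (1.0 - w[top]) / w[others].sum()  # re-normalize the remaining weights
    return w

def weighted_score(weights, indicators):
    return float(np.dot(weights, indicators))

w_plasticity = [0.4660, 0.2771, 0.1611, 0.0960]  # Ad, FT, Coh, Hys (Table 5)
x_test = [0.8, 0.7, 0.6, 0.5]                     # hypothetical indicator values

base = weighted_score(w_plasticity, x_test)
low  = weighted_score(perturb_top_weight(w_plasticity, 0.9), x_test)
high = weighted_score(perturb_top_weight(w_plasticity, 1.1), x_test)
print(base, low, high)  # the spread indicates sensitivity to the adaptability weight
```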

4.3. Correlation Analysis: Capabilities vs. Performance Outcomes

A central theoretical assumption of our model is that organizations with higher internal capability scores should exhibit a stronger recovery from disruptions [63]. To validate this assumption, we conducted a correlation analysis between capability scores and the simulated performance recovery (ΔP).
Using synthetic data, we generated a population of hypothetical organizations with varying capability profiles. We then applied a simulated disruption event and computed their ΔP values using an inverse application of the response model equations (along with controlled random noise) to approximate the observed recovery. Each organization’s capability scores were then calculated using the AHP-weighted indicators, with a focus on capabilities associated with plasticity and resilience, such as adaptability, diversity, fault tolerance, and cohesiveness.
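The sketch below illustrates the spirit of this synthetic check: capability profiles are drawn at random, a recovery value is generated from an assumed linear combination plus noise, and Pearson correlations are computed. The coefficients and noise level are illustrative assumptions, not the generator used in this study, so the resulting r values will differ from those reported next.

```python
import numpy as np

# Illustrative reconstruction of the synthetic-data correlation check (assumed
# functional form and noise level).
rng = np.random.default_rng(42)
n_orgs = 500

# Hypothetical capability profiles in [0, 1].
adaptability = rng.uniform(0, 1, n_orgs)
diversity    = rng.uniform(0, 1, n_orgs)
fault_tol    = rng.uniform(0, 1, n_orgs)
cohesiveness = rng.uniform(0, 1, n_orgs)

# Simulated recovery: capabilities drive recovery with decreasing strength plus noise.
noise = rng.normal(0, 0.15, n_orgs)
delta_p = (0.45 * adaptability + 0.30 * diversity
           + 0.20 * fault_tol + 0.15 * cohesiveness + noise)

for name, cap in [("Adaptability", adaptability), ("Diversity", diversity),
                  ("Fault tolerance", fault_tol), ("Cohesiveness", cohesiveness)]:
    r = np.corrcoef(cap, delta_p)[0, 1]
    print(f"{name}: r = {r:.2f}")
```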
The results of the correlation analysis are as follows:
  • Adaptability: r ≈ 0.72. This was the highest correlation, indicating that organizations scoring high in adaptability tended to recover a much larger fraction of the lost performance (often even exceeding the baseline, i.e., ΔP′ > 1 in some cases). This aligns with the broader resilience literature, which positions adaptability as crucial for effective responses.
  • Diversity: r ≈ 0.58. This is a moderate positive correlation, suggesting that having a diverse set of resources or options contributes to a better recovery, though not as strongly as adaptability. Diversity provides alternative pathways when primary ones fail, hence aiding recovery.
  • Fault Tolerance: r ≈ 0.45. This is a weaker positive correlation. Organizations with a high fault tolerance did tend to recover more, but the relationship was less pronounced. Fault tolerance helps prevent catastrophic drops, but by itself it does not guarantee a full rebound, hence a modest correlation.
  • Cohesiveness: r ≈ 0.39. This was the weakest correlation among the tested capabilities. While more cohesive organizations recovered slightly better on average, the effect was relatively small. This makes sense: cohesiveness (strong internal coordination) is beneficial, but if an organization lacks adaptability or resources, being cohesive alone will not regenerate performance quickly.
All observed correlations were positive and consistent with theoretical expectations, supporting the construct validity of the capability indicators. However, we acknowledge that the correlation analysis presented here represents a form of internal consistency validation rather than an external empirical validation. The synthetic data used in this analysis were generated using the same theoretical assumptions embedded in our model equations, which introduces a risk of circular validation: essentially confirming the model’s own assumptions. As the data were simulated based on model assumptions, these results are confirmatory rather than indicative of causal relationships.
The observed correlations between capabilities and performance recovery, though theoretically consistent, may be influenced by unmeasured confounding variables, such as leadership quality, industry-specific dynamics, organizational culture, or external environmental factors, that are not captured in our current framework. In real-world settings, further empirical validation is needed to rule out confounding variables (e.g., industry type) that might independently affect both adaptability and recovery. Furthermore, as noted in the broader literature on correlation versus causation, these statistical relationships do not establish causal mechanisms between capabilities and performance outcomes.
Nevertheless, as a sanity check within a controlled modeling context, this correlation analysis provides meaningful evidence that the capability constructs and weightings reflect their intended influence on performance. While these results provide valuable evidence for the construct validity and internal theoretical coherence, they should be interpreted as confirmatory rather than causal evidence. This is in line with existing empirical studies (e.g.,) that report similar relationships between capabilities such as adaptability and diversity and resilience outcomes.
For a robust empirical validation, future research should apply this framework to real-world crisis data from diverse organizational contexts, incorporating techniques such as natural experiments, instrumental variables, or longitudinal case studies to establish causal relationships. Such approaches would help address the potential omitted variable bias and provide stronger evidence for the practical utility of the proposed indicators. Nevertheless, the current validation approach serves its intended purpose as a methodological soundness check, confirming that the theoretical relationships embedded in the model produce logically consistent and theoretically expected outcomes.

4.4. Scenario-Based Simulation and Calibration

To evaluate the overall behavior and internal logic of the model, we conducted a scenario-based simulation involving three hypothetical organizational profiles (A, B, and C), representing, respectively, a fragile/low-capability organization, a moderately capable (resilient) organization, and a highly capable (possibly antifragile) organization.
Each profile was defined using four key capability indicators common across response modes, adaptability (Ad), fault tolerance (FT), diversity (Di), and structural capability (StC), while other indicators were held at average levels to isolate the effects of these core factors.
  • Organization A: With low capability across the board (e.g., Ad = 0.2, FT = 0.3, Di = 0.1, StC = 0.3 on a 0–1 scale), this profile showed minimal recovery (ΔP ≈ 0.05), which was further dampened after the adjustment (ΔP′ ≈ 0.042). Its low plasticity (0.18) and resilience (0.22) scores reflect a fragile profile with poor adaptability and limited structural capacity.
  • Organization B: With moderate capability (Ad = 0.5, FT = 0.6, Di = 0.4, StC = 0.5), this mid-level profile demonstrated partial recovery (~10%), with moderate capability scores yielding ΔP′ ≈ 0.095. The resulting mode scores (Ψ = 0.33, R = 0.39) align with expectations for a moderately prepared organization: resilient but non-transformative, able to bounce back with some delay or loss but not to innovate or benefit from the disruption.
  • Organization C: With high capability (Ad = 0.8, FT = 0.7, Di = 0.6, StC = 0.7), this profile exhibited the strongest response: ΔP′ ≈ 0.165 indicates that not only was about 15% of performance recovered, but the post-crisis level slightly exceeded the baseline. The resilience score (0.61) was notably higher than the plasticity score (0.48), suggesting a proactive rebound rather than simple adaptation, a hallmark of antifragile behavior.
These results align with theoretical expectations and support the face validity of the model. An increasing capability consistently yielded better outcomes across modes, and the score differentials between plasticity and resilience (for brevity, we focus on those two modes which were most illustrative; transformative resilience and antifragility would show similar trends given their dependency on the same four capabilities in this setup) were logically interpretable based on the profile type (Table 15).
This controlled simulation served as a sanity check for internal model coherence. It also informed calibration aspects, for instance ensuring that the chosen (α + βC) values produce ΔP in a reasonable range and adjusting the W weights so that scores fall in sensible ranges for typical inputs.
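A compact sketch of this scenario exercise is shown below. It uses the three capability profiles and the approximate raw ΔP values quoted above, an equal weighting of the four indicators, and a resilience-style adjustment term with the strategy value set to zero; because the AHP weights and scenario values are omitted, the outputs are only directionally comparable with the reported figures.

```python
# Sketch of the three-profile scenario. Equal indicator weighting and a zero
# strategy term are simplifying assumptions; the raw delta_p values are the
# approximate figures quoted in the text.

profiles = {
    "A (fragile)":   ({"Ad": 0.2, "FT": 0.3, "Di": 0.1, "StC": 0.3}, 0.05),
    "B (resilient)": ({"Ad": 0.5, "FT": 0.6, "Di": 0.4, "StC": 0.5}, 0.10),
    "C (capable)":   ({"Ad": 0.8, "FT": 0.7, "Di": 0.6, "StC": 0.7}, 0.15),
}

def capability_score(indicators):
    # Equal weighting of the four core indicators (assumption).
    return sum(indicators.values()) / len(indicators)

def adjusted_recovery(delta_p, capability, strategy=0.0):
    # Resilience-style adjustment (reconstruction; grouping assumed):
    # dP' = dP * (0.5 + 0.5 * (C + S)).
    return delta_p * (0.5 + 0.5 * (capability + strategy))

for name, (indicators, delta_p) in profiles.items():
    c = capability_score(indicators)
    print(name, "capability:", round(c, 2),
          "adjusted recovery:", round(adjusted_recovery(delta_p, c), 3))
```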
In conclusion, the multi-pronged evaluation of the model indicates the following:
  • The weighting structure is internally consistent and credible (Section 4.1);
  • Small changes in weights do not upset the model (Section 4.2);
  • The capability indicators relate to performance as expected (Section 4.3);
  • The integrated model produces logical outcomes for different scenarios (Section 4.4).
These checks suggest the model is scientifically sound and suitable for use as a decision support or analysis tool in understanding organizational resilience and antifragility. Future work should apply the model to real-world organizations and assess whether computed scores correspond to observed recovery behaviors. This could inform refinements such as the following:
  • Re-tuning weight coefficients based on industry-specific data;
  • Introducing new indicators (e.g., digital resilience, supply chain agility);
  • Validating the generalizability of the score behavior across disruption types.

5. Conclusions

This study develops an indicator-based quantitative framework for assessing crisis response modes in collaborative business ecosystems. The model operationalizes four systemic response modes—plasticity, resilience, transformative resilience, and antifragility—by linking internal capabilities and strategic factors to post-crisis performance outcomes. Key findings and contributions include the following:
  • Distinct Capability Configurations: Each response mode is characterized by a specific profile of capabilities. For example, plasticity relies on adaptability and fault tolerance to stabilize performance at a lower but sustainable level, resilience depends on robustness to recover to the baseline, transformative resilience involves structural reconfiguration, and antifragility leverages optionality and convex mechanisms to gain from stress. This taxonomy extends Taleb’s fragility–resilience typology into a structured, operational model for decision support and comparative analysis.
  • Mathematical Rigor: We derived all indicator weights using the AHP and constructed a multi-level scoring model that links micro-level data (such as product portfolios or network centralities) to macro-level performance outcomes. Consistency checks and sensitivity analyses confirmed the model’s numerical soundness and robustness.
  • Predictive Insights: Simulation tests showed that capabilities like adaptability and diversity strongly predict recovery and performance gains, aligning with the model’s theoretical assumptions. The framework thus distinguishes mere recovery from constrained adaptation, strategic reinvention, or performance improvements under disruption.
  • Integration of Response Modes: Unlike existing resilience models that focus on static properties or basic recovery, our framework uniquely integrates four differentiated crisis response modes with nonlinear, time-dependent scoring. To our knowledge, this is the first unified, indicator-driven mathematical model that captures plasticity and antifragility (alongside resilience modes) in networked organizations.
  • Managerial Implications: The framework offers actionable guidance for practitioners. Managers can use it to diagnose capability gaps, compare investment strategies (e.g., buffering versus learning), and simulate the effects of decisions (e.g., network restructuring) under crisis scenarios. For example, leaders may prioritize investing in learning and flexibility to foster antifragility or in structural safeguards to enhance resilience. In sum, the model provides a transparent bridge between theory and practice, supporting evidence-based resilience planning, system reconfiguration, and opportunistic adaptation in business ecosystems.
The framework also establishes a theoretical foundation for future research. An empirical validation with real-world crisis data would help refine indicator definitions and calibrate weights to specific contexts. Dynamic extensions (such as incorporating feedback effects over time) could capture evolving network structures or learning curves. Exploring interactions among response modes—for instance, how pursuing antifragility might complement or trade off with resilience—would deepen our understanding of systemic trade-offs. Integrating cost–benefit analyses of various strategies and testing the model against adversarial disturbances (like cyberattacks or market shocks) are further promising avenues to enhance the practical relevance.
To improve the operational viability, the model can be tailored to the organizational context. For instance, it can be simplified by reducing the dimensionality or omitting hard-to-measure indicators. Developing practitioner-oriented tools—such as guidelines, data-gathering toolkits, or digital analytics platforms—would help organizations estimate the required inputs. Embedding the framework in simulation platforms could enable experimenting with policies for robust and adaptive designs.
However, certain limitations must be acknowledged. First, the reliance on expert-elicited weights introduces subjectivity: although consistency checks mitigate random bias, results may still reflect the choice of experts. Second, some indicators (e.g., “convexity” or “transformability”) require detailed, non-standardized data (such as records of strategic options or network reconfigurations) that may not be readily available in practice. Future work should address these issues by developing practical data collection methods (such as leveraging process mining or organizational analytics) and by applying the framework to diverse real-world cases.
Finally, future methodological improvements could further strengthen the approach. Alternative weighting techniques (e.g., the Best–Worst Method) and more comprehensive robustness checks (such as Monte Carlo simulations or global sensitivity analysis) might enhance the generalizability and ease of use across settings. Crucially, a robust empirical validation will require real crisis data. Longitudinal case studies of documented disruptions (e.g., supply chain breakdowns or health emergencies) and public data sources (such as industry recovery reports or resilience repositories) can be used to calibrate and test the model. Incorporating causal inference techniques (e.g., instrumental variable analysis, natural experiments or controlled studies) will help establish whether higher capability scores truly cause better recovery, beyond mere correlations. Such efforts will not only bolster confidence in the framework but also refine its thresholds, weighting schemes, and applicability across contexts.
In conclusion, our quantitative framework contributes practical relevance by unifying plasticity, resilience, and antifragility in a single model for collaborative ecosystems. It provides managers with a transparent tool for crisis preparedness and offers researchers a basis for further empirical investigation. With additional validation and refinement, this model can support more resilient and adaptive business ecosystems in the face of uncertainty.

Author Contributions

Conceptualization, J.R.; methodology, J.R.; validation, J.R.; formal analysis, J.R.; investigation, J.R.; writing—original draft preparation, J.R.; writing—review and editing, J.R., L.G. and P.G.; visualization, J.R.; supervision, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Minciu, M.; Berar, F.A.; Dima, C. The Opportunities and Threats in the Context of the V.U.C.A World. Proc. Int. Manag. Conf. 2019, 13, 1142–1150. [Google Scholar]
  2. Ramezani, J.; Camarinha-Matos, L.M. Approaches for resilience and antifragility in collaborative business ecosystems. Technol. Forecast. Soc. Change 2020, 151, 119846. [Google Scholar] [CrossRef]
  3. Ramezani, J.; Camarinha-Matos, L.M. Novel Approaches to Handle Disruptions in Business Ecosystems; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
  4. Schweitzer, F.; Casiraghi, G.; Tomasello, M.V.; Garcia, D. Fragile, Yet Resilient: Adaptive Decline in a Collaboration Network of Firms. Front. Appl. Math. Stat. 2021, 7, 6. [Google Scholar] [CrossRef]
  5. Chen, R.; Xie, Y.; Liu, Y. Defining, conceptualizing, and measuring organizational resilience: A multiple case study. Sustainability 2021, 13, 2517. [Google Scholar] [CrossRef]
  6. Taleb, N.N. Antifragile: Things That Gain from Disorder; Allen Lane: London, UK, 2012. [Google Scholar]
  7. Ramezani, J.; Camarinha-Matos, L.M. A collaborative approach to resilient and antifragile business ecosystems. Procedia Comput. Sci. 2019, 162, 604–613. [Google Scholar] [CrossRef]
  8. Yan, F.; Song, X. A Review of Research on Supply Chain Resilience Evaluation Indicator System and Evaluation Methods. Ind. Sci. Eng. 2024, 1, 22–30. [Google Scholar] [CrossRef]
  9. Hosseini, S.; Ivanov, D.; Dolgui, A. Review of quantitative methods for supply chain resilience analysis. Transp. Res. E Logist. Transp. Rev. 2019, 125, 285–307. [Google Scholar] [CrossRef]
  10. Axenie, C. Antifragility as a Complex System’s Response to Perturbations, Volatility, and Time. arXiv 2023, arXiv:2312.13991. [Google Scholar]
  11. Graça, P.; Camarinha-Matos, L.M. Influencing collaboration in sustainable business ecosystems. In Collaborative Networks in Digitalization and Society 5.0; Camarinha-Matos, L.M., Boucher, X., Ortiz, A., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 3–20. [Google Scholar] [CrossRef]
  12. Fan, R.; Yao, Q.; Chen, R.; Qian, R. Agent-based simulation model of panic buying behavior in urban public crisis events: A social network perspective. Sustain. Cities Soc. 2024, 100, 105002. [Google Scholar] [CrossRef]
  13. Lu, Y.; Liu, T.; Wang, T. Dynamic analysis of emergency inter-organizational communication network under public health emergency: A case study of COVID-19 in Hubei Province of China. Nat. Hazards 2021, 109, 2003–2026. [Google Scholar] [CrossRef]
  14. Bravo-Laguna, C. Crisis management from a relational perspective: An analysis of interorganizational transboundary crisis networks. J. Public Policy 2024, 44, 720–746. [Google Scholar] [CrossRef]
  15. van Hoek, R. Responding to COVID-19 Supply Chain Risks—Insights from Supply Chain Change Management, Total Cost of Ownership and Supplier Segmentation Theory. Logistics 2020, 4, 23. [Google Scholar] [CrossRef]
  16. Tripsas, M.; Gavetti, G. Capabilities, Cognition, and Inertia: Evidence from Digital Imaging. Strateg. Manag. J. 2000, 21, 1147–1161. [Google Scholar] [CrossRef]
  17. Behzadi, G.; O’Sullivan, M.J.; Olsen, T.L. On metrics for supply chain resilience. Eur. J. Oper. Res. 2020, 287, 145–158. [Google Scholar] [CrossRef]
  18. Bruckler, M.; Wietschel, L.; Messmann, L.; Thorenz, A.; Tuma, A. Review of metrics to assess resilience capacities and actions for supply chain resilience. Comput. Ind. Eng. 2024, 192, 110176. [Google Scholar] [CrossRef]
  19. Saaty, T.L. Relative Measurement and Its Generalization in Decision Making Why Pairwise Comparisons are Central in Mathematics for the Measurement of Intangible Factors the Analytic Hierarchy/Network Process (To the Memory of my Beloved Friend Professor Sixto Rios Garcia). Rev. de la Real Acad. de Cienc. Exactas Fis. y Naturales. Ser. A. Mat. 2008, 102, 251–318. [Google Scholar] [CrossRef]
  20. Canco, I.; Kruja, D.; Iancu, T. Ahp, a reliable method for quality decision making: A case study in business. Sustainability 2021, 13, 3932. [Google Scholar] [CrossRef]
  21. Vinogradova-Zinkevič, I.; Podvezko, V.; Zavadskas, E.K. Comparative assessment of the stability of AHP and FAHP methods. Symmetry 2021, 13, 479. [Google Scholar] [CrossRef]
  22. Graça, P.; Camarinha-Matos, L.M. A proposal of performance indicators for collaborative business ecosystems. In Collaboration in a Hyperconnected World; PRO-VE, 2016, Afsarmanesh, H., Camarinha-Matos, L., Soares, A.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 480, pp. 253–264. [Google Scholar] [CrossRef]
  23. Graça, P.; Camarinha-Matos, L.M. A Human-AI Centric Performance Evaluation System for Collaborative Business Ecosystems. In Technological Innovation for Human-Centric Systems (DoCEIS 2024); Camarinha-Matos, L.M., Ferrada, F., Eds.; IFIP Advances in Information and Communication Technology; Springer: Cham, Switzerland, 2024; Volume 716, pp. 3–27. [Google Scholar] [CrossRef]
  24. Yikilmaz, I. A Dynamic Capability to Determine Business Performance in the Post-COVID-19 Era. In Change Management During Unprecedented Times; IGI Global Scientific Publishing: New York, NY, USA, 2023; p. 22. [Google Scholar] [CrossRef]
  25. Newman, M. Networks: An Introduction, 1st ed.; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  26. Jackson, M.O. Social and Economic Networks, illustrated ed.; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  27. Guan, J.; Ma, N. Innovative capability and export performance of Chinese firms. Technovation 2003, 23, 737–747. [Google Scholar] [CrossRef]
  28. Emeterio, M.C.S.; Fernández-Ortiz, R.; Arteaga-Ortiz, J.; Dorta-González, P. Measuring the gradualist approach to internationalization: Empirical evidence from the wine sector. PLoS ONE 2018, 13, e0196804. [Google Scholar] [CrossRef]
  29. Hanneman, R.A.; Riddle, M. Concepts and Measures for Basic Network Analysis. In The SAGE Handbook of Social Network Analysis; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2014. [Google Scholar] [CrossRef]
  30. Borgatti, S.P.; Everett, M.G.; Johnson, J.C.; Agneessens, F. Analyzing Social Networks, 3rd ed.; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2024. [Google Scholar]
  31. Taleb, N.N. Mathematical Definition, Mapping, and Detection of (Anti)Fragility. Quant. Finance 2012, 13. [Google Scholar] [CrossRef]
  32. Taleb, N.N.; West, J. Working with Convex Responses: Antifragility from Finance to Oncology. Entropy 2023, 25, 343. [Google Scholar] [CrossRef]
  33. Axenie, C.; López-Corona, O.; Makridis, M.A.; Akbarzadeh, M.; Saveriano, M.; Stancu, A.; West, J. Antifragility in complex dynamical systems. NPJ Complex. 2024, 1, 12. [Google Scholar] [CrossRef]
  34. Prenzel, L.; Steinhorst, S. Towards Resilience by Self-Adaptation of Industrial Control Systems. In Proceedings of the 27th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Stuttgart, Germany, 6–9 September 2022; pp. 1–8. [Google Scholar]
  35. Lutter, M.; Weidner, L. Newcomers, betweenness centrality, and creative success: A study of teams in the board game industry from 1951 to 2017. Poetics 2021, 87, 101535. [Google Scholar] [CrossRef]
  36. Carmichael, B.; Koumou, G.B.; Moran, K. Unifying Portfolio Diversification Measures Using Rao’s Quadratic Entropy. J. Quant. Econ. 2023, 21, 769–802. [Google Scholar] [CrossRef]
  37. Olbryś, J.; Ostrowski, K. An entropy-based approach to measurement of stock market depth. Entropy 2021, 23, 568. [Google Scholar] [CrossRef]
  38. OECD, Eurostat. Oslo Manual. The Measurement of Scientific, Technological and Innovation Activities; OECD Publishing: Paris, France, 2018. [Google Scholar] [CrossRef]
  39. Kale, P.; Singh, H. Managing Strategic Alliances: What Do We Know Now, and Where Do We Go from Here? Academy of Management Perspectives. 2009. Available online: https://www.jstor.org/stable/27747525 (accessed on 21 December 2023).
  40. Abreu, A.; Camarinha-Matos, L.M. An Approach to Measure Social Capital in Collaborative Networks. In Proceedings of the 12th Working Conference on Virtual Enterprises; IFIP AICT 362; Springer: Berlin/Heidelberg, Germany, 2011; pp. 29–40. [Google Scholar] [CrossRef]
  41. Hosseini, S.; Barker, K.; Ramirez-Marquez, J.E. A review of definitions and measures of system resilience. Reliab. Eng. Syst. Saf. 2016, 145, 47–61. [Google Scholar] [CrossRef]
  42. Somers, S. Measuring resilience potential: An adaptive strategy for organizational crisis planning. J. Contingencies Crisis Manag. 2009, 17, 12–23. [Google Scholar] [CrossRef]
  43. Valiyev, I. Strategic Management in the Anti-Crisis Regulation of the Organization’s Activities. Anc. Land 2024, 6, 137–140. [Google Scholar] [CrossRef]
  44. Panteli, M.; Mancarella, P. Modeling and evaluating the resilience of critical electrical power infrastructure to extreme weather events. IEEE Syst. J. 2017, 11, 1733–1742. [Google Scholar] [CrossRef]
  45. Ivanov, D.; Dolgui, A. A digital supply chain twin for managing the disruption risks and resilience in the era of Industry 4.0. Prod. Plan. Control 2021, 32, 775–788. [Google Scholar] [CrossRef]
  46. Chen, Q.; Sun, T.; Wang, T. Network centrality, support organizations, exploratory innovation: Empirical analysis of China’s integrated circuit industry. Heliyon 2023, 9, e17709. [Google Scholar] [CrossRef]
  47. Duchek, S. Organizational resilience: A capability-based conceptualization. Bus. Res. 2020, 13, 215–246. [Google Scholar] [CrossRef]
  48. Williams, T.A.; Gruber, D.A.; Sutcliffe, K.M.; Shepherd, D.A.; Zhao, E.Y. Organizational response to adversity: Fusing crisis management and resilience research streams. Acad. Manag. Ann. 2017, 11, 733–769. [Google Scholar] [CrossRef]
  49. Argote, L. Organizational learning research: Past, present and future. Manag. Learn. 2011, 42, 439–446. [Google Scholar] [CrossRef]
  50. Zhang, Y.; Shao, C.; He, S.; Gao, J. Resilience centrality in complex networks. Physical Review E 2020, 101, 022304. [Google Scholar] [CrossRef]
  51. Oh, I.S.; Han, J.H. Will investments in human resources during the COVID-19 pandemic crisis pay off after the crisis? SSRN Electron. J. 2021, 14, 98–100. [Google Scholar] [CrossRef]
  52. Borgatti, S.P.; Everett, M.G. A Graph-theoretic perspective on centrality. Soc. Netw. 2006, 28, 466–484. [Google Scholar] [CrossRef]
  53. Li, L.; Wang, J.; Yuan, J.; Gu, T.; Ling, S.; Zhan, H. Unlocking Physical Resilience Capacities of Building Systems: An Enhanced Network Analysis Approach. Buildings 2025, 15, 641. [Google Scholar] [CrossRef]
  54. Opsahl, T.; Agneessens, F.; Skvoretz, J. Node centrality in weighted networks: Generalizing degree and shortest paths. Soc. Netw. 2010, 32, 245–251. [Google Scholar] [CrossRef]
  55. Centola, D. The spread of behavior in an online social network experiment. Science 2010, 329, 1194–1197. [Google Scholar] [CrossRef]
  56. Ivanov, D.; Sokolov, B. Adaptive Supply Chain Management; Springer: London, UK, 2010. [Google Scholar] [CrossRef]
  57. Panzarasa, P.; Opsahl, T.; Carley, K.M. Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community. J. Am. Soc. Inf. Sci. Technol. 2009, 60, 911–932. [Google Scholar] [CrossRef]
  58. Guimerà, R.; Danon, L.; Díaz-Guilera, A.; Giralt, F.; Arenas, A. Self-similar community structure in a network of human interactions. Phys. Rev. E 2003, 68, 065103. [Google Scholar] [CrossRef]
  59. Yang, Z.; Wu, M.; Sun, J.; Zhang, Y. Aligning redundancy and flexibility for supply chain resilience: A literature synthesis. J. Risk Res. 2024, 27, 313–335. [Google Scholar] [CrossRef]
  60. Gochhayat, S.P.; Shetty, S.; Mukkamala, R.; Foytik, P.; Kamhoua, G.A.; Njilla, L. Measuring decentrality in blockchain based systems. IEEE Access 2020, 8, 178372–178390. [Google Scholar] [CrossRef]
  61. Erdmenger, J.; Grosvenor, K.T.; Jefferson, R. Towards quantifying information flows: Relative entropy in deep neural networks and the renormalization group. SciPost Phys. 2022, 12. [Google Scholar] [CrossRef]
  62. Erkut, E.; Tarimcilar, M. On sensitivity analysis in the analytic hierarchy process. IMA J. Manag. Math. 1991, 3, 61–83. [Google Scholar] [CrossRef]
  63. Hu, C.; Yun, K.H.; Su, Z.; Xi, C. Effective Crisis Management during Adversity: Organizing Resilience Capabilities of Firms and Sustainable Performance during COVID-19. Sustainability 2022, 14, 3664. [Google Scholar] [CrossRef]
Figure 2. A hierarchical organization of capability indicators for the four response modes. Each mode is defined by a unique combination of capabilities. Source: author’s composition.
Table 1. Concept clarification. Source: author’s composition.
  • Performance level after shock. Plasticity: Stabilizes at a lower, yet sustainable, performance level (e.g., 60–85% of baseline). Resilience: Returns to pre-shock performance (95–100% of baseline). Transformative resilience: Recovers or exceeds baseline performance with positive structural change. Antifragility: Surpasses pre-shock performance (greater than 100% of baseline).
  • Nature of change. Plasticity: Permanent structural or functional adaptation; constrained flexibility. Resilience: Temporary or reversible adjustments; core structure preserved. Transformative resilience: Positive, evolutionary transformation; fundamental reconfiguration. Antifragility: Innovative, opportunistic transformation; benefits directly from stress.
  • Recovery capability. Plasticity: Full recovery is not possible; only partial, stable adaptation. Resilience: Full recovery of function and structure. Transformative resilience: Recovery with systemic improvement and enhanced adaptability. Antifragility: Increased capability and performance as a result of the shock.
  • Long-term outcome. Plasticity: New, lower equilibrium; reduced growth potential. Resilience: Restoration of normalcy and stability. Transformative resilience: Enhanced resilience and adaptability for future challenges. Antifragility: Enhanced growth trajectory and ability to exploit volatility.
  • Learning and growth. Plasticity: Limited learning; focus on survival and coping. Resilience: Learning aimed at restoring the previous state. Transformative resilience: Deep learning, innovation, and positive transformation. Antifragility: Proactive learning; accelerated innovation and growth.
  • Response to future challenges. Plasticity: Increased vulnerability; reduced adaptive capacity. Resilience: Maintained or improved capacity to respond. Transformative resilience: Greater flexibility and preparedness for future disruptions. Antifragility: Improved ability to thrive and capitalize on future shocks.
  • Example. Plasticity: Kodak, after failing to adapt to digital photography, filed for bankruptcy in 2012 but re-emerged as a smaller, restructured company focused on commercial printing and imaging, stabilizing at a much lower scale than its historical dominance [16]. Resilience: Walt Disney, after facing severe financial distress and near-bankruptcy in the early 2000s, restructured, refocused on core brands, invested in new attractions, and rebounded to become one of the world’s most valuable media companies. Transformative resilience: IBM, in response to industry disruption and declining hardware sales, shifted its business model to focus on cloud computing and hybrid cloud services, transforming its core operations and regaining industry leadership. Antifragility: Netflix, through its “Simian Army” chaos engineering tools, continuously tests and strengthens its systems by intentionally introducing failures, enabling it to evolve and thrive amid ongoing technological challenges [2].
Table 2. Pairwise comparison matrix for performance indicators.
                Innovation   Contribution   Prestige   RBO
Innovation          1             2             3       2
Contribution       1/2            1             2       1
Prestige           1/3           1/2            1      1/2
RBO                1/2            1             2       1
Table 3. Final weights of performance indicators.
Indicator      Weight
Innovation     0.424
Contribution   0.227
RBO            0.227
Prestige       0.122
Table 4. Pairwise comparison matrix for plasticity indicators.
                  Adaptability   Cohesiveness   Fault Tolerance   Hysteresis
Adaptability           1              3                2               4
Cohesiveness          1/3             1               1/2              2
Fault Tolerance       1/2             2                1               3
Hysteresis            1/4            1/2              1/3              1
Table 5. Final weights of plasticity capability indicators.
Indicator         Weight
Adaptability      0.4660
Fault Tolerance   0.2771
Cohesiveness      0.1611
Hysteresis        0.0960
Table 6. Pairwise comparison matrix for resilience indicators.
                        Diversity   Adaptability   Efficiency   Cohesiveness   Structural Capability   Fault Tolerance   Learning
Diversity                   1            1             2             3                 1/2                   2              1
Adaptability                1            1             2             3                 1/2                   2              1
Efficiency                 1/2          1/2            1             2                 1/3                   1             1/2
Cohesiveness               1/3          1/3           1/2            1                 1/4                  1/3            1/2
Structural Capability       2            2             3             4                  1                    3              2
Fault Tolerance            1/2          1/2            1             2                 1/3                   1             1/2
Learning                    1            1             2             3                 1/2                   2              1
Table 7. Final weights of resilience capability indicators.
Indicator               Weight
Adaptability            0.1619
Diversity               0.1619
Efficiency              0.0884
Cohesiveness            0.0548
Structural Capability   0.2828
Fault Tolerance         0.0884
Learning                0.1619
Table 8. Pairwise comparison matrix for transformative resilience indicators.
                        Adaptability   Structural Capability   Transformability   Creativity   Learning
Adaptability                 1                  1                    1/2               2           3
Structural Capability        1                  1                    1/2               2           3
Transformability             2                  2                     1                4           5
Creativity                  1/2                1/2                   1/4               1           2
Learning                    1/3                1/3                   1/5              1/2          1
Table 9. Final weights of transformative resilience indicators.
Indicator               Weight
Adaptability            0.2085
Structural Capability   0.2085
Transformability        0.4028
Creativity              0.1114
Learning                0.0687
Table 10. Pairwise comparison matrix for antifragility indicators.
                        Diversity   Adaptability   Structural Capability   Fault Tolerance   Learning   Convexity   Creativity
Diversity                   1            1                  1/2                   2              1          1/3          2
Adaptability                1            1                  1/2                   2              1          1/3          2
Structural Capability       2            2                   1                    3              2          1/2          3
Fault Tolerance            1/2          1/2                 1/3                   1             1/2         1/4          1
Learning                    1            1                  1/2                   2              1          1/3          2
Convexity                   3            3                   4                    4              3           1           4
Creativity                 1/2          1/2                 1/3                   1             1/2         1/4          1
Table 11. Final weights of antifragility capability indicators.
Indicator               Weight
Diversity               0.1116
Adaptability            0.1116
Structural Capability   0.1961
Fault Tolerance         0.0614
Learning                0.1116
Convexity               0.3464
Creativity              0.0614
Table 12. Metrics for performance indicators at organizational and CBE levels. Source: [7,22,23].
Calculation Formulas | Description
Innovation Indicator
Organization Level:
Inv_i = (Inv1_i·W_{1i} + Inv2_i·W_{2i} + Inv3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Inv1_i = #NewPd_i / (#PortPd_i + ε)
(ε, a small constant, e.g., 1, prevents division by zero and avoids extreme values when the portfolio size is small.)
Inv2_i = (#PtnApp_i + #TechDisc_i) / #RnDPrj_i
Inv3_i = #OppInno_i / #OppUtil_i

CBE Level:
Inv_CBE = (Inv1_CBE·W_{1,CBE} + Inv2_CBE·W_{2,CBE} + Inv3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Inv1_CBE = Σ_{i=1..N} #NewPd_i / Σ_{i=1..N} (#PortPd_i + ε)
Inv2_CBE = (Σ_{i=1..N} Inv2_i) / N
Inv3_CBE = (Σ_{i=1..N} Inv3_i) / N
Innovation (Inv) measures the innovation potential of the organization/CBE. Inv is measured as a weighted combination of three factors:
1. The ratio of new products or services to the existing portfolio at the organizational and CBE levels.
2. Patent application ratio to R&D Projects.
3. Proportion of utilized opportunities that were innovation-driven (Inv3i, Inv3CBE).
Variable definitions:
#NewPdi: Number of new products or services generated by organization i.
#PortPdi: Total portfolio of products or services of organization i.
#OppInnoi: Number of opportunities utilized with innovative solutions by organization i.
#OppUtili: Number of opportunities utilized.
#PtnAppi: Number of patent applications submitted by organization i.
#TechDisci: Number of technological discoveries (software, prototypes, trade secrets) by organization i.
#RnDPrji: Total number of R&D projects conducted by organization i.
Contribution Indicator
Organization Level:
Cnt_i = (Cnt1_i·W_{1i} + Cnt2_i·W_{2i} + Cnt3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Cnt1_i = #ColPrj_i / #TotPrj_i
Cnt2_i = (#Act_in,i + #Act_out,i) / 2
Cnt3_i = #ShRes_i / #TotRes_i

CBE Level:
Cnt_CBE = (Cnt1_CBE·W_{1,CBE} + Cnt2_CBE·W_{2,CBE} + Cnt3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Cnt1_CBE = Σ_{i=1..N} #ColPrj_i / Σ_{i=1..N} #TotPrj_i
Cnt2_CBE = Σ_{i=1..N} (#Act_in,i + #Act_out,i) / (2N)
Cnt3_CBE = Σ_{i=1..N} #ShRes_i / Σ_{i=1..N} #TotRes_i
Contribution (Cnt) is measured as a weighted combination of three key factors:
1. The proportion of collaborative projects to total projects at the organizational (Cnt1) and CBE (Cnt1CBE) levels.
2. The degree of incoming and outgoing collaborative activities (Cnt2, Cnt2CBE).
3. The level of shared resources relative to total resources (Cnt3, Cnt3CBE).
Variable definitions:
#ColPrji: Number of collaborative projects involving organization i.
#TotPrji: Total projects involving organization i.
#Act_in,i: Incoming collaborative activities for organization i.
#Act_out,i: Outgoing collaborative activities from organization i.
#ShResi: Resources shared by organization i.
#TotResi: Total resources owned by organization i.
Prestige Indicator
Organization Level:
Prs_i = (Prs1_i·W_{1i} + Prs2_i·W_{2i} + Prs3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Prs1_i = EigCnt(O_i)
Prs2_i = #OppComt_i / #OppUtil_i
Prs3_i = #Recog_i / #Achv_i

CBE Level:
Prs_CBE = (Prs1_CBE·W_{1,CBE} + Prs2_CBE·W_{2,CBE} + Prs3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Prs1_CBE = Σ_{i=1..N} EigCnt(O_i) / N
Prs2_CBE = Σ_{i=1..N} #OppComt_i / Σ_{i=1..N} #OppUtil_i
Prs3_CBE = Σ_{i=1..N} #Recog_i / Σ_{i=1..N} #Achv_i
Prestige (Prs) is measured as a weighted combination of three key factors:
1. Network influence through eigenvector centrality at the organizational (Prs1) and CBE (Prs1CBE) levels.
2. Commitment to Opportunities: Organizations that commit to opportunities over time build stronger reputations (Prs2, Prs2CBE).
3. Recognized Achievements: A higher proportion of recognized accomplishments increases Prestige.
Variable definitions:
EigCnt (Oi): Eigenvector centrality for organization i, measuring how well it is connected to other influential organizations.
#OppComti: Number of committed opportunities.
#OppUtili: Number of opportunities utilized.
#Recogi: Number of recognized achievements (including publications, awards, and certifications).
#Achvi: Total number of achievements claimed by organization i (publications + awards + certifications).
Opportunity Responsiveness (RBO) Indicator
Organization Level:
RBO_i = (RBO1_i·W_{1i} + RBO2_i·W_{2i} + RBO3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
RBO1_i = #OppUtil_i / #TotOpp_i
RBO2_i = #OppOnTime_i / #OppUtil_i
RBO3_i = CloseCnt(O_i) = 1 / Σ_{j≠i} d(O_i, O_j)
where d(O_i, O_j) is the shortest path distance between organizations O_i and O_j. (A higher closeness centrality indicates better accessibility to opportunities.)

CBE Level:
RBO_CBE = (RBO1_CBE·W_{1,CBE} + RBO2_CBE·W_{2,CBE} + RBO3_CBE·W_{3,CBE}) / (W_1 + W_2 + W_3)
RBO1_CBE = Σ_{i=1..N} #OppUtil_i / Σ_{i=1..N} #TotOpp_i
RBO2_CBE = Σ_{i=1..N} #OppOnTime_i / Σ_{i=1..N} #OppUtil_i
RBO3_CBE = Σ_{i=1..N} CloseCnt(O_i) / N
Responsiveness to Business Opportunities (RBO) is measured as a weighted combination of three key factors:
1. Overall business opportunity utilization rate at the organizational (RBO 1i) and CBE (RBO 1CBE) levels.
2. Timeliness in completing business opportunities (RBO 2i, RBO 2CBE).
3. Average closeness centrality, reflecting accessibility to opportunities (RBO 3i, RBO 3CBE).
Variable Definitions:
#OppUtili: Number of opportunities utilized by organization i.
#TotOppi: Total number of opportunities identified by organization i.
#OppOnTimei: Number of utilized business opportunities that were completed within the required deadline for organization i.
CloseCnt (Oi): Closeness centrality for organization i. A higher value indicates better accessibility to opportunities.
W1, W2, W3: Weights assigned to metrics, allowing adjustment of their relative importance.
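As a usage illustration of Table 12, the sketch below evaluates the organization-level Innovation indicator for hypothetical counts with equal metric weights; all input values and the function name are assumptions made for illustration.

```python
# Minimal sketch of the organization-level Innovation indicator (Table 12),
# with hypothetical counts and equal metric weights (W1 = W2 = W3 = 1).

def innovation_indicator(new_pd, port_pd, ptn_app, tech_disc, rnd_prj,
                         opp_inno, opp_util, w=(1.0, 1.0, 1.0), eps=1.0):
    inv1 = new_pd / (port_pd + eps)          # new products vs. existing portfolio
    inv2 = (ptn_app + tech_disc) / rnd_prj   # patents + discoveries per R&D project
    inv3 = opp_inno / opp_util               # innovation-driven opportunity share
    w1, w2, w3 = w
    return (inv1 * w1 + inv2 * w2 + inv3 * w3) / (w1 + w2 + w3)

print(innovation_indicator(new_pd=4, port_pd=20, ptn_app=2, tech_disc=1,
                           rnd_prj=6, opp_inno=3, opp_util=10))
```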
Table 13. Summary of response mode score formulas and weights. Source: author’s composition.
Plasticity (Ψ): W_C = 0.4, W_S = 0.4, W_P = 0.2
ΔP = (P_rec − P_dis) / P_base
ΔP′ = ΔP · [0.3 + 0.7(C_Pl + S_Pl)]
Ψ = W_C·C_Pl + W_S·S_Pl + W_P·ΔP′
Emphasizes internal adaptability and strategic buffering, with partial recovery. C_Pl = f(Ad, FT, Coh, Hys). S_Pl: scenario value (e.g., buffering, redundancy).
Resilience (R): W_C = 0.4, W_S = 0.2, W_P = 0.4
ΔP = (P_rec − P_dis) / P_base
ΔP′ = ΔP · [0.5 + 0.5(C_Re + S_Re)]
R = W_C·C_Re + W_S·S_Re + W_P·ΔP′
Emphasizes full recovery driven by structural and adaptive capabilities. C_Re = f(StC, Ad, Di, Lrn, Ef, FT, Coh). S_Re: scenario value (e.g., information/risk sharing).
Transformative Resilience (T): W_C = 0.4, W_S = 0.1, W_P = 0.5
ΔP = (P_rec − P_dis) / P_base
ΔP′ = ΔP · [0.5 + 0.5(C_Tr + S_Tr)] · α
T = W_C·C_Tr + W_S·S_Tr + W_P·ΔP′
Includes a structural transformation multiplier. C_Tr = f(Tr, Ad, StC, Crt, Lrn). S_Tr: scenario value (e.g., innovation, network reconfiguration). α = 1.2 if structural change occurred, otherwise 1.
Antifragility (A): W_C = 0.4, W_S = 0.1, W_P = 0.5
ΔP = (P_rec − P_dis) / max(P_base, ε)
ΔP′ = ΔP · [0.5 + 0.5(C_An + S_An)]
A = W_C·C_An + W_S·S_An + W_P·ΔP′
Emphasizes performance gain from stress, with ε ≈ 10⁻⁶ to avoid division by zero. C_An = f(Cnv, StC, Di, Ad, Lrn, Crt, FT). S_An: scenario value (e.g., optionality, fault injection).
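The following sketch turns the Table 13 formulas into a small calculator for the four mode scores, taking already-aggregated capability scores (C), scenario values (S), and a performance trajectory as inputs; the bracketing of the adjustment term follows the reconstruction above and all numeric inputs are hypothetical.

```python
# Sketch of the four response mode scores (Table 13), given already-aggregated
# capability scores (C), scenario/strategy values (S), and performance levels.
# The grouping of the adjustment term is an assumption of this reconstruction.

def delta_p(p_rec, p_dis, p_base, eps=1e-6):
    # Relative performance change; eps guards against division by zero.
    return (p_rec - p_dis) / max(p_base, eps)

def mode_score(c, s, dp, wc, ws, wp, base, scale, alpha=1.0):
    dp_adj = dp * (base + scale * (c + s)) * alpha   # performance modifier
    return wc * c + ws * s + wp * dp_adj

dp = delta_p(p_rec=0.95, p_dis=0.60, p_base=1.00)    # hypothetical trajectory
print("Plasticity:",     mode_score(0.5, 0.4, dp, 0.4, 0.4, 0.2, base=0.3, scale=0.7))
print("Resilience:",     mode_score(0.6, 0.3, dp, 0.4, 0.2, 0.4, base=0.5, scale=0.5))
print("Transformative:", mode_score(0.6, 0.3, dp, 0.4, 0.1, 0.5, base=0.5, scale=0.5, alpha=1.2))
print("Antifragility:",  mode_score(0.7, 0.4, dp, 0.4, 0.1, 0.5, base=0.5, scale=0.5))
```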
Table 14. Metrics for response mode indicators at organizational and CBE levels. Source: author’s composition.
Calculation Formulas | Description
Adaptability Indicator
Organization Level:
Ad_i = (Ad1_i·W_{1i} + Ad2_i·W_{2i} + Ad3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Ad1_i = Σ_{j=1..N} (p_ij + W·Σ_{k≠i,j} p_ik·p_kj)
Ad2_i = (#NewTech_i·ScfRate) / ((1/N)·Σ_{i=1..N} (#NewTech_i·ScfRate) + ε)
Ad3_i = #ScfMktEntries_i / #TotMktEntries_i

CBE Level:
Ad_CBE = (Ad1_CBE·W_{1,CBE} + Ad2_CBE·W_{2,CBE} + Ad3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Ad1_CBE = (1/N)·Σ_{i=1..N} Σ_{j=1..N} (p_ij + W·Σ_{k≠i,j} p_ik·p_kj)
Ad2_CBE = Σ_{i=1..N} (#NewTech_i·ScfRate) / N
Ad3_CBE = Σ_{i=1..N} #ScfMktEntries_i / Σ_{i=1..N} #TotMktEntries_i
Adaptability (Ad) is measured as a weighted combination of three key factors:
1. Structural adaptability through indirect linkages at the organizational (Ad1i) and CBE (Ad1CBE) levels.
2. Technology adoption and integration (Ad2i, Ad2CBE).
3. Successful market entry and competition (Ad3i, Ad3CBE).
Variable Definitions:
pij: Probability of direct interaction between organizations i and j. Σ_{k≠i,j} p_ik·p_kj: Probability of indirect interaction between i and j via an intermediary k.
W: Weight factor for indirect interactions (0 < W < 1).
#NewTechi: Number of new technologies adopted by organization i.
ScfRate: Success rate of new technology implementation for organization i.
#ScfMktEntriesi: Number of successful market entries by organization i.
#TotMktEntriesi: Total number of market entry attempts by organization i.
W1, W2, W3: Weights assigned to each adaptability metric.
Cohesiveness Indicator
Organization Level:
Coh_i = (Coh1_i·W_{1i} + Coh2_i·W_{2i}) / (W_{1i} + W_{2i})
Coh1_i = (#ColOp_in,i + #ColOp_out,i) / (N − 1)
Coh2_i = #Relations_i / (N − 1)

CBE Level:
Coh_CBE = (Coh1_CBE·W_{1,CBE} + Coh2_CBE·W_{2,CBE}) / (W_{1,CBE} + W_{2,CBE})
Coh1_CBE = Σ_{i=1..N} (#ColOp_in,i + #ColOp_out,i) / (N(N − 1))
Coh2_CBE = Σ_{i=1..N} #Relations_i / (N(N − 1))
Cohesiveness (Coh) is measured as a weighted combination of two key factors:
1. Reciprocity of collaboration opportunities at the organizational (Coh1i) and CBE (Coh1CBE) levels.
2. Density of direct connections (Coh2i, Coh2CBE).
Variable Definitions:
#ColOp_in,i: Number of collaboration opportunities received by organization i.
#ColOp_out,i: Number of collaboration opportunities initiated by organization i.
N: Total number of organizations in CBE.
#Relationsi: Total number of direct connections of organization i.
W1, W2: Weights assigned to metrics, allowing for adjustment of their relative importance.
Convexity Indicator
Organization Level:
Cnv_i = (Cnv1_i·W_{1i} + Cnv2_i·W_{2i} + Cnv3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Cnv1_i = (P_post,i − P_pre,i) / Prod_normal,i
Cnv2_i = (Σ_{s∈S} E_s) / |S|
Cnv3_i = #NewBo_post,i / T_post-recovery

CBE Level:
Cnv_CBE = (Cnv1_CBE·W_{1,CBE} + Cnv2_CBE·W_{2,CBE} + Cnv3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Cnv1_CBE = Σ_{i=1..N} Cnv1_i / N
Cnv2_CBE = Σ_{i=1..N} Cnv2_i / N
Cnv3_CBE = Σ_{i=1..N} Cnv3_i / N
Convexity (Cnv) measures how much an organization’s performance exceeds pre-disruption levels after recovery. It incorporates three components:
1. The relative performance improvement at the organizational (Cnv1i) and CBE (Cnv1CBE) levels.
2. The impact of strategies (Cnv2i, Cnv2CBE).
3. The ability to capitalize on new business opportunities (Cnv3i, Cnv3CBE).
Variable Definitions:
Ppre,i: Pre-disruption performance of organization i.
Ppost,i: Post-disruption performance of organization i.
Prodnormal,i: Normal productivity of organization i.
Es: The impact of strategy s taken by the organization.
S: The set of all strategies implemented by the organization.
#NewBopost,i: Number of new business opportunities gained by organization i after disruption.
Tpost-recovery: The time period after recovery in which new opportunities are measured.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights assigned to each convexity metric, adjusted for their importance at both levels.
Creativity Indicator
Organization Level:
Crt_i = (Crt1_i·W_{1i} + Crt2_i·W_{2i}) / (W_{1i} + W_{2i})
Crt1_i = RnDExp_i / #OpExp_i
Crt2_i = BetCnt(O_i), the betweenness centrality of organization O_i in the ecosystem network

CBE Level:
Crt_CBE = (Crt1_CBE·W_{1,CBE} + Crt2_CBE·W_{2,CBE}) / (W_{1,CBE} + W_{2,CBE})
Crt1_CBE = Σ_{i=1..N} Crt1_i / N
Crt2_CBE = Σ_{i=1..N} BetCnt(O_i) / N
Creativity (Crt) is measured as a weighted combination of two factors:
1. R&D investment ratio to operating costs at the organizational (Crt1i) and CBE (Crt1CBE) levels.
2. The collaborative potential of an organization based on its network centrality (Crt2i, Crt2CBE).
Variable definitions:
RnDExpi: Total research and development expenditure for organization i.
#OpExpi: Total operating expenses for organization i.
BetCnti: Betweenness centrality of organization i in the ecosystem network.
N: Total number of organizations in the CBE.
W1, W2: Weights assigned to each creativity metric based on its importance.
Diversity Indicator
Organization Level:
Di_i = (Di1_i·W_{1i} + Di2_i·W_{2i} + Di3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Di1_i = #PortPd_i / ((1/N)·Σ_{i=1..N} #PortPd_i + ε)
Di2_i = #PortCmt_i / Σ_{i=1..N} #PortCmt_i
Di3_i = −Σ_{j=1..n} P_ij·log(P_ij)

CBE Level:
Di_CBE = (Di1_CBE·W_{1,CBE} + Di2_CBE·W_{2,CBE} + Di3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Di1_CBE = Σ_{i=1..N} #PortPd_i / N
Di2_CBE = Σ_{i=1..N} #PortCmt_i / N
Di3_CBE = Σ_{i=1..N} (−Σ_{j=1..n} P_ij·log(P_ij)) / N
Diversity (Di) is measured as a weighted combination of three factors:
1. Measures the product/service diversity of an organization, normalized by the highest product count in the ecosystem at the organizational (Di1i) and CBE (Di1CBE) levels.
2. Measures the competency diversity of an organization relative to the entire ecosystem’s total competencies (Di2i, Di2CBE).
3. Captures market diversification using Shannon entropy to quantify how evenly an organization’s connections are spread across different markets.
Variable Definitions:
#PortPdi: Portfolio of products or services of organization i.
#PortCmti: Portfolio of competencies (skills/expertise) of organization i.
Pij: Proportion of organization i’s connections linked to organization j.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights assigned to each diversity metric, adjusted for their importance at both levels.
Efficiency Indicator
Organization Level:
Ef_i = (Ef1_i·W_{1i} + Ef2_i·W_{2i} + Ef3_i·W_{3i}) / (W_{1i} + W_{2i} + W_{3i})
Ef1_i = Rev_new,i / Rev_total,i
Ef2_i = #ScCol_i / #TotCol_i
Ef3_i = #EdgCol_i / #TotEdg_i

CBE Level:
Ef_CBE = (Ef1_CBE·W_{1,CBE} + Ef2_CBE·W_{2,CBE} + Ef3_CBE·W_{3,CBE}) / (W_{1,CBE} + W_{2,CBE} + W_{3,CBE})
Ef1_CBE = Σ_{i=1..N} Ef1_i / N
Ef2_CBE = Σ_{i=1..N} #ScCol_i / Σ_{i=1..N} #TotCol_i
Ef3_CBE = Σ_{i=1..N} #EdgCol_i / Σ_{i=1..N} #TotEdg_i
Efficiency (Ef) measures how well an organization or ecosystem does the following:
1. Generates revenue from newly introduced products/services relative to total revenue, at the organizational (Ef1i) and CBE (Ef1CBE) levels.
2. Converts collaboration opportunities into successful outcomes (Ef2i, Ef2CBE).
3. Effectively utilizes its network connections for collaboration (Ef3i, Ef3CBE).
Variable Definitions:
Revnew,i: Revenue from newly introduced products/services for organization i.
Revtotal,i: Total revenue for organization i.
#ScColi: Number of successful collaborations in organization i.
#TotColi: Total number of collaboration attempts in organization i.
#EdgColi: Number of network edges actively used in collaborations by organization i.
#TotEdgi: Total number of potential edges in the organization’s collaboration network.
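As a quick numerical illustration of the Efficiency formulas above, the following sketch combines the three ratios with equal weights; all revenue, collaboration, and edge counts are hypothetical.

```python
# Minimal sketch of the organization-level Efficiency indicator (Ef_i).
# Revenue, collaboration, and edge counts are hypothetical; weights are equal.
def efficiency(rev_new, rev_total, sc_col, tot_col, edg_col, tot_edg,
               w=(1.0, 1.0, 1.0)):
    ef1 = rev_new / rev_total  # Ef1_i: share of revenue from new products/services
    ef2 = sc_col / tot_col     # Ef2_i: collaboration success rate
    ef3 = edg_col / tot_edg    # Ef3_i: share of network edges actively used
    return (ef1 * w[0] + ef2 * w[1] + ef3 * w[2]) / sum(w)

print(round(efficiency(rev_new=2.0e6, rev_total=10.0e6,
                       sc_col=7, tot_col=10, edg_col=12, tot_edg=20), 3))
```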
Fault Tolerance Indicator
Organization Level:
$\mathrm{FT}_i = \frac{\mathrm{FT1}_i\,W1_i + \mathrm{FT2}_i\,W2_i + \mathrm{FT3}_i\,W3_i}{W1_i + W2_i + W3_i}$
$\mathrm{FT1}_i = \frac{\mathrm{Prod}_{normal,i} - \mathrm{Prod}_{stress,i}}{\mathrm{Prod}_{normal,i}}$
$\mathrm{FT2}_i = \frac{\mathrm{Emp}_{retained,i}}{\mathrm{Emp}_{total,i}}$
$\mathrm{FT3}_i = \frac{T_{recovery,i}}{T_{total,i}}$

CBE Level:
$\mathrm{FT}_{CBE} = \frac{\mathrm{FT1}_{CBE}\,W1_{CBE} + \mathrm{FT2}_{CBE}\,W2_{CBE} + \mathrm{FT3}_{CBE}\,W3_{CBE}}{W1_{CBE} + W2_{CBE} + W3_{CBE}}$
$\mathrm{FT1}_{CBE} = \frac{\sum_{i=1}^{N} \left( \mathrm{Prod}_{normal,i} - \mathrm{Prod}_{stress,i} \right)}{\sum_{i=1}^{N} \mathrm{Prod}_{normal,i}}$
$\mathrm{FT2}_{CBE} = \frac{\sum_{i=1}^{N} \mathrm{Emp}_{retained,i}}{\sum_{i=1}^{N} \mathrm{Emp}_{total,i}}$
$\mathrm{FT3}_{CBE} = \frac{\sum_{i=1}^{N} T_{recovery,i}}{N \cdot T_{total}}$
Fault Tolerance (FT) is measured as a weighted combination of three key factors:
1. The percentage loss in productivity under stress at the organizational (FT1i) and CBE (FT1CBE) levels.
2. The proportion of employees retained during stress.
3. The time taken to recover from disruption.
Variable Definitions:
Prodnormal,i: Productivity of organization i under normal conditions.
Prodstress,i: Productivity of organization i during or after stress/disruption.
Empretained,i: Number of employees retained during disruption in organization i.
Emptotal,i: Total number of employees in organization i.
Trecovery,i: Time taken for organization i to recover performance post-disruption.
Ttotal,i: Total reference time period for performance assessment.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights representing the importance of each fault tolerance metric.
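The sketch below illustrates the organization-level Fault Tolerance computation; the productivity, headcount, and timing figures, as well as the equal weights, are hypothetical.

```python
# Minimal sketch of the organization-level Fault Tolerance indicator (FT_i).
# Productivity, headcount, and timing figures are hypothetical; weights are equal.
def fault_tolerance(prod_normal, prod_stress, emp_retained, emp_total,
                    t_recovery, t_total, w=(1.0, 1.0, 1.0)):
    ft1 = (prod_normal - prod_stress) / prod_normal  # FT1_i: relative productivity loss under stress
    ft2 = emp_retained / emp_total                   # FT2_i: employee retention ratio
    ft3 = t_recovery / t_total                       # FT3_i: recovery time relative to the reference period
    return (ft1 * w[0] + ft2 * w[1] + ft3 * w[2]) / sum(w)

print(round(fault_tolerance(prod_normal=100, prod_stress=70,
                            emp_retained=90, emp_total=100,
                            t_recovery=3, t_total=12), 3))
```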
Hysteresis Indicator
Organization Level:
$\mathrm{Hys}_i = \frac{\mathrm{Hys1}_i\,W1_i + \mathrm{Hys2}_i\,W2_i + \mathrm{Hys3}_i\,W3_i}{W1_i + W2_i + W3_i}$
$\mathrm{Hys1}_i = \frac{P_{pre,i} - P_{drop,i}}{P_{pre,i}}$
$\mathrm{Hys2}_i = T_{recovery,i}$
$\mathrm{Hys3}_i = \frac{\sum_{\mathrm{PastEvents}} D_{impact}}{\#\mathrm{PastEvents}}$

CBE Level:
$\mathrm{Hys}_{CBE} = \frac{\mathrm{Hys1}_{CBE}\,W1_{CBE} + \mathrm{Hys2}_{CBE}\,W2_{CBE} + \mathrm{Hys3}_{CBE}\,W3_{CBE}}{W1_{CBE} + W2_{CBE} + W3_{CBE}}$
$\mathrm{Hys1}_{CBE} = \frac{\sum_{i=1}^{N} \mathrm{Hys1}_i}{N}$
$\mathrm{Hys2}_{CBE} = \frac{\sum_{i=1}^{N} \mathrm{Hys2}_i}{N}$
$\mathrm{Hys3}_{CBE} = \frac{\sum_{i=1}^{N} \mathrm{Hys3}_i}{N}$
Hysteresis (Hys) measures how past disruptions affect the speed and trajectory of recovery. It accounts for the following:
1. Immediate performance drops after disruption at the organizational (Hys1i) and CBE (Hys1CBE) levels.
2. Time required to recover performance to pre-disruption level (Hys2i, Hys2CBE).
3. Impact of past disruptions on current recovery speed (Hys3i, Hys3CBE).
Variable Definitions:
Ppre,i: Pre-disruption performance of organization i.
Pdrop,i: Minimum performance level reached after disruption for organization i.
Trecovery,i: Time required for organization i to recover to pre-disruption performance.
Dimpact: The impact of each past disruption on the organization’s ability to recover.
#PastEvents: The total number of past disruptions that affected the organization.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights for adjusting the relative importance of each hysteresis metric.
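For illustration, the following sketch evaluates the organization-level Hysteresis indicator; the performance levels, recovery time, and past-disruption impacts are hypothetical, and the raw recovery time would presumably be normalized to a comparable scale before weighting in practice.

```python
# Minimal sketch of the organization-level Hysteresis indicator (Hys_i).
# Performance levels, recovery time, and past-disruption impacts are hypothetical.
def hysteresis(p_pre, p_drop, t_recovery, past_impacts, w=(1.0, 1.0, 1.0)):
    hys1 = (p_pre - p_drop) / p_pre               # Hys1_i: immediate performance drop
    hys2 = t_recovery                             # Hys2_i: recovery time (raw; normalize if scales differ)
    hys3 = sum(past_impacts) / len(past_impacts)  # Hys3_i: average impact of past disruptions
    return (hys1 * w[0] + hys2 * w[1] + hys3 * w[2]) / sum(w)

print(round(hysteresis(p_pre=1.0, p_drop=0.6, t_recovery=0.25,
                       past_impacts=[0.2, 0.1, 0.3]), 3))
```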
Learning Capability Indicator
Organization Level:
$\mathrm{Lrn}_i = \frac{\mathrm{Lrn1}_i\,W1_i + \mathrm{Lrn2}_i\,W2_i + \mathrm{Lrn3}_i\,W3_i}{W1_i + W2_i + W3_i}$
$\mathrm{Lrn1}_i = \frac{P_{after,i} - P_{before,i}}{P_{before,i}}$
$\mathrm{Lrn2}_i = \frac{\#\mathrm{knShEv}_i}{T}$
$\mathrm{Lrn3}_i = \%\mathrm{TrInv}_i$

CBE Level:
$\mathrm{Lrn}_{CBE} = \frac{\mathrm{Lrn1}_{CBE}\,W1_{CBE} + \mathrm{Lrn2}_{CBE}\,W2_{CBE} + \mathrm{Lrn3}_{CBE}\,W3_{CBE}}{W1_{CBE} + W2_{CBE} + W3_{CBE}}$
$\mathrm{Lrn1}_{CBE} = \frac{\sum_{i=1}^{N} \left( P_{after,i} - P_{before,i} \right)}{\sum_{i=1}^{N} P_{before,i}}$
$\mathrm{Lrn2}_{CBE} = \frac{\sum_{i=1}^{N} \#\mathrm{knShEv}_i}{T \cdot N}$
$\mathrm{Lrn3}_{CBE} = \frac{\sum_{i=1}^{N} \%\mathrm{TrInv}_i}{N}$
Learning Capability (Lrn) is measured as a weighted combination of three factors:
1. Relative performance improvement after disruption at the organizational (Lrn1i) and CBE (Lrn1CBE) levels.
2. Knowledge-sharing events conducted over time (Lrn2i, Lrn2CBE).
3. Percentage of total revenue invested in training (self-reported) (Lrn3i, Lrn3CBE).
Variable definitions:
Pbefore,i: Pre-disruption performance of organization i.
Pafter,i: Post-disruption performance of organization i.
#knShEvi: Number of knowledge-sharing events conducted by organization i.
T: Time period (e.g., months, years).
%TrInvi: Self-reported percentage of total revenue invested in training.
N: Total number of organizations in the CBE.
W1, W2, W3: Weights assigned to each learning metric based on strategic priorities.
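The sketch below shows the organization-level Learning Capability computation; the performance values, event counts, and training share are hypothetical, and equal weights are assumed.

```python
# Minimal sketch of the organization-level Learning Capability indicator (Lrn_i).
# Performance values, event counts, and the training share are hypothetical.
def learning_capability(p_before, p_after, kn_sh_events, period, training_share,
                        w=(1.0, 1.0, 1.0)):
    lrn1 = (p_after - p_before) / p_before  # Lrn1_i: relative performance improvement after disruption
    lrn2 = kn_sh_events / period            # Lrn2_i: knowledge-sharing events per time unit
    lrn3 = training_share                   # Lrn3_i: self-reported share of revenue invested in training
    return (lrn1 * w[0] + lrn2 * w[1] + lrn3 * w[2]) / sum(w)

print(round(learning_capability(p_before=0.8, p_after=0.9,
                                kn_sh_events=6, period=12,
                                training_share=0.03), 3))
```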
Structural Capability Indicator
Organization Level:
$\mathrm{Stc}_i = \frac{\mathrm{Stc1}_i\,W1_i + \mathrm{Stc2}_i\,W2_i + \mathrm{Stc3}_i\,W3_i}{W1_i + W2_i + W3_i}$
$\mathrm{Stc1}_i = \frac{\mathrm{Deg}_i}{\overline{\mathrm{Deg}}}$
$\mathrm{Stc2}_i = \frac{\mathrm{InDeg}_i - \mathrm{OutDeg}_i}{\mathrm{InDeg}_i + \mathrm{OutDeg}_i}$
$\mathrm{Stc3}_i = \frac{\#\mathrm{BcSys}_i}{\#\mathrm{TotSys}_i}$

CBE Level:
$\mathrm{Stc}_{CBE} = \frac{\mathrm{Stc1}_{CBE}\,W1_{CBE} + \mathrm{Stc2}_{CBE}\,W2_{CBE} + \mathrm{Stc3}_{CBE}\,W3_{CBE}}{W1_{CBE} + W2_{CBE} + W3_{CBE}}$
$\mathrm{Stc1}_{CBE} = \frac{\sum_{i=1}^{N} \mathrm{Deg}_i}{N \cdot \overline{\mathrm{Deg}}}$
$\mathrm{Stc2}_{CBE} = \frac{\sum_{i=1}^{N} \left( \mathrm{InDeg}_i - \mathrm{OutDeg}_i \right)}{\sum_{i=1}^{N} \left( \mathrm{InDeg}_i + \mathrm{OutDeg}_i \right)}$
$\mathrm{Stc3}_{CBE} = \frac{\sum_{i=1}^{N} \#\mathrm{BcSys}_i}{\sum_{i=1}^{N} \#\mathrm{TotSys}_i}$
Structural Capability (Stc) is measured as a weighted combination of three key factors:
1. Direct connectivity (degree centrality) at the organizational (Stc1i) and CBE (Stc1CBE) levels.
2. Flow asymmetry (Stc2i, Stc2CBE).
3. Redundancy and backup system preparedness (Stc3i, Stc3CBE).
Variable Definitions:
Degi: Degree of organization i (number of direct connections in the network).
$\overline{\mathrm{Deg}}$: Average degree of all organizations in the ecosystem.
InDegi: Number of incoming connections to organization i.
OutDegi: Number of outgoing connections from organization i.
#BcSysi: Number of backup systems in organization i.
#TotSysi: Total number of systems in organization i.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights assigned to each structural capability metric.
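The following sketch illustrates the organization-level Structural Capability computation on a toy directed network; the network, backup-system counts, and equal weights are hypothetical assumptions.

```python
# Minimal sketch of the organization-level Structural Capability indicator (Stc_i).
# The directed network, backup-system counts, and equal weights are hypothetical.
import networkx as nx

def structural_capability(G, org, bc_sys, tot_sys, w=(1.0, 1.0, 1.0)):
    avg_deg = sum(d for _, d in G.degree()) / G.number_of_nodes()
    stc1 = G.degree(org) / avg_deg                                     # Stc1_i: connectivity relative to the average degree
    in_d, out_d = G.in_degree(org), G.out_degree(org)
    stc2 = (in_d - out_d) / (in_d + out_d) if (in_d + out_d) else 0.0  # Stc2_i: flow asymmetry
    stc3 = bc_sys / tot_sys                                            # Stc3_i: backup-system coverage
    return (stc1 * w[0] + stc2 * w[1] + stc3 * w[2]) / sum(w)

# Hypothetical directed collaboration network.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("B", "D")])
print(round(structural_capability(G, "B", bc_sys=3, tot_sys=10), 3))
```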
Transformability Indicator
Organization Level:
$\mathrm{Tr}_i = \frac{\mathrm{Tr1}_i\,W1_i + \mathrm{Tr2}_i\,W2_i + \mathrm{Tr3}_i\,W3_i}{W1_i + W2_i + W3_i}$
$\mathrm{Tr1}_i = \frac{\#\mathrm{ChEdg}_i}{\#\mathrm{TotEdg}_i}$
$\mathrm{Tr2}_i = \frac{\#\mathrm{BcNod}_i}{\#\mathrm{TotNod}_i}$
$\mathrm{Tr3}_i = 1 - \frac{\sum_{j=1}^{N} \left| \mathrm{deg}_j - \overline{\mathrm{deg}} \right|}{(N-1)(N-2)}$

CBE Level:
$\mathrm{Tr}_{CBE} = \frac{\mathrm{Tr1}_{CBE}\,W1_{CBE} + \mathrm{Tr2}_{CBE}\,W2_{CBE} + \mathrm{Tr3}_{CBE}\,W3_{CBE}}{W1_{CBE} + W2_{CBE} + W3_{CBE}}$
$\mathrm{Tr1}_{CBE} = \frac{\sum_{i=1}^{N} \#\mathrm{ChEdg}_i}{\sum_{i=1}^{N} \#\mathrm{TotEdg}_i}$
$\mathrm{Tr2}_{CBE} = \frac{\sum_{i=1}^{N} \#\mathrm{BcNod}_i}{\sum_{i=1}^{N} \#\mathrm{TotNod}_i}$
$\mathrm{Tr3}_{CBE} = 1 - \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} \left| \mathrm{deg}_j - \overline{\mathrm{deg}} \right|}{(N-1)(N-2)}$
Transformability (Tr) is measured as a weighted combination of three key factors:
1. Structural flexibility through edge modifications at the organizational (Tr1i) and CBE (Tr1CBE) levels.
2. Redundancy via backup nodes (Tr2i, Tr2CBE).
3. Decentralization of decision-making (Tr3i, Tr3CBE).
Variable Definitions:
#ChEdgi: Number of changed edges in organization i.
#TotEdgi: Total number of edges in organization i.
#BcNodi: Number of backup nodes in organization i.
#TotNodi: Total number of nodes in organization i.
degj: Degree of node j in the network.
$\overline{\mathrm{deg}}$: Average degree of all nodes in the ecosystem.
N: Total number of organizations in the ecosystem.
W1, W2, W3: Weights for adjusting the relative importance of each metric.
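The sketch below illustrates the organization-level Transformability computation. The edge/node counts and degree sequence are hypothetical, and the decentralization term is one possible reading of the Tr3i formula, using absolute deviations from the average degree.

```python
# Minimal sketch of the organization-level Transformability indicator (Tr_i).
# Edge/node counts and the degree sequence are hypothetical; the decentralization
# term uses absolute deviations from the average degree as one reading of Tr3_i.
def transformability(ch_edg, tot_edg, bc_nod, tot_nod, degrees, w=(1.0, 1.0, 1.0)):
    n = len(degrees)
    avg_deg = sum(degrees) / n
    tr1 = ch_edg / tot_edg  # Tr1_i: share of edges that were reconfigured
    tr2 = bc_nod / tot_nod  # Tr2_i: redundancy via backup nodes
    tr3 = 1 - sum(abs(d - avg_deg) for d in degrees) / ((n - 1) * (n - 2))  # Tr3_i: decentralization
    return (tr1 * w[0] + tr2 * w[1] + tr3 * w[2]) / sum(w)

print(round(transformability(ch_edg=4, tot_edg=20, bc_nod=2, tot_nod=10,
                             degrees=[3, 2, 2, 4, 1]), 3))
```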
Table 15. Hypothetical scenario outcomes for low- (A), medium- (B), and high- (C) capability organizations.

Organization   Ad    FT    Di    Stc   ΔP     ΔP′     Ψ (Plasticity)   R (Resilience)
A              0.2   0.3   0.1   0.3   0.05   0.042   0.18             0.22
B              0.5   0.6   0.4   0.5   0.10   0.095   0.33             0.39
C              0.8   0.7   0.6   0.7   0.15   0.165   0.48             0.61