Article

An Endogenous Security-Oriented Framework for Cyber Resilience Assessment in Critical Infrastructures

Mingyu Luo, Ci Tao, Yu Liu, Shiyao Chen and Ping Chen

1 College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200437, China
2 Institute of Big Data, Fudan University, Shanghai 200437, China
3 Purple Mountain Laboratories, Nanjing 211111, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8342; https://doi.org/10.3390/app15158342
Submission received: 14 June 2025 / Revised: 6 July 2025 / Accepted: 9 July 2025 / Published: 26 July 2025

Abstract

In the face of escalating cyber threats to critical infrastructures, achieving robust cyber resilience has become paramount. This paper proposes an endogenous security-oriented framework for cyber resilience assessment, specifically tailored for critical infrastructures. Drawing on the principles of endogenous security, our framework integrates dynamic heterogeneous redundancy (DHR) and adaptive defense mechanisms to address both known and unknown threats. We model resilience across four key dimensions—Prevention, Destruction Resistance, Adaptive Recovery, and Evolutionary Learning—using a novel mathematical formulation that captures nonlinear interactions and temporal dynamics. The framework incorporates environmental threat entropy to dynamically adjust resilience scores, ensuring relevance in evolving attack landscapes. Through empirical validation on simulated critical infrastructure scenarios, we demonstrate the framework’s ability to quantify resilience trajectories and trigger timely defensive adaptations. A case study on a real-world critical infrastructure system yielded an overall resilience score of 82.75, revealing a critical imbalance between strong preventive capabilities (90/100) and weak Adaptive Recovery (66/100). Our approach offers a significant advancement over static risk assessment models by providing actionable metrics for strategic resilience investments. This work contributes to the field by bridging endogenous security theory with practical resilience engineering, paving the way for more robust protection of critical systems against sophisticated cyber threats.

1. Introduction

In the rapidly evolving landscape of cybersecurity, organizations confront an escalating array of threats, including advanced persistent threats (APTs), zero-day exploits, and large-scale distributed denial-of-service (DDoS) attacks. These threats are dynamic, adaptive, and increasingly sophisticated, exposing the limitations of traditional risk assessment methods in ensuring system security and operational continuity. Conventional approaches, such as point-in-time risk scoring and mean time between failures (MTBF), focus on static snapshots of system vulnerabilities. This means they cannot quantify critical dynamic processes like recovery and learning, nor can they measure intrinsic capabilities such as Destruction Resistance and adaptability. This gap leaves systems susceptible to cascading failures, prolonged downtimes, and inadequate responses to evolving adversarial tactics. Consequently, there is an urgent need for a new paradigm that not only evaluates a system’s ability to withstand and recover from attacks but also captures its capacity for continuous adaptation and improvement in the face of persistent and unpredictable threats. Indeed, the ultimate goal of such resilience is to ensure that these digital infrastructures can continue to function as reliable enablers of innovation, entrepreneurship, and societal progress [1].
To address these challenges, cybersecurity resilience has emerged as a pivotal concept. Unlike traditional security metrics that emphasize Prevention or risk mitigation, resilience focuses on a system’s ability to sustain core functions under adversarial pressure, recover swiftly from disruptions, and evolve through learning from past incidents. A resilient system anticipates threats, absorbs damage, restores functionality, and adapts its defenses over time. However, it remains challenging to quantify resilience in a dynamic, measurable, and actionable way. This difficulty arises from the complex interplay between preventive, defensive, recovery, and evolutionary capabilities, which is further complicated by unpredictable adversarial behavior and environmental changes.
This paper proposes a novel theoretical model for assessing Cybersecurity Resilience Capacity (RC), accompanied by a practical, operational four-dimensional evaluation framework named P-R-A-L, which stands for Prevention (P), Destruction Resistance (R), Adaptive Recovery (A), and Evolutionary Learning (L). Unlike static risk models, the RC model transforms resilience into a time-evolving trajectory, capturing the nonlinear interactions between defensive and evolutionary capabilities while accounting for real-time variations in environmental threat entropy. The complementary P-R-A-L framework (as shown in Figure 1) is designed to achieve actionable implementation, leveraging mature technologies to ensure feasibility and practicality in real-world systems. By aligning with established cybersecurity standards such as the NIST Cybersecurity Framework 2.0 [2], NIST SP 800-37 [3], GB/T 44862-2024 [4], and the MITRE ATT&CK threat taxonomy [5], our approach integrates seamlessly with existing practices while offering a forward-looking perspective on resilience measurement.
The primary contributions of this work are as follows. First, we introduce a mathematical model for cybersecurity resilience that quantifies system capabilities across the threat response continuum using a closed-loop structure for continuous assessment and adaptation. Second, we develop a dynamic constraint mechanism to incorporate environmental threat entropy variation, reflecting the real-time complexity of adversarial environments. Third, and equally significant, we propose a hierarchical evaluation system (model-dimension-indicator) through the P-R-A-L framework, which offers the following practical advantages:
  • Measurability and Technical Maturity: Indicators are designed with a focus on quantifiable metrics and rely on mature, widely available technologies for ease of implementation.
  • Reusability: Many indicators leverage existing security operations tools and data, reducing the need for bespoke solutions.
  • Flexibility: The framework allows for multiple technical approaches to measure the same indicator, accommodating diverse organizational contexts and resource constraints.
  • Alignment with Endogenous Security Principles: The design of the R (Destruction Resistance) and L (Evolutionary Learning) dimensions embeds endogenous security concepts, emphasizing intrinsic system capabilities and self-evolution to counter threats.
The remainder of this paper is organized as follows. Section 2 reviews related work in cybersecurity resilience and dynamic risk modeling, highlighting the gaps addressed by our approach. Section 3 presents the overall RC model architecture, detailing its theoretical foundation and mathematical formulation. Section 4 elaborates on the P-R-A-L framework, providing operational definitions and indicator design principles. Section 5 presents an empirical validation of the framework through a case study on a real-world critical infrastructure system. Section 6 discusses the findings and future research directions. Finally, Section 7 concludes with a summary of key findings and contributions. Through this work, we aim to deliver a robust, actionable framework for enhancing cybersecurity resilience in an era of persistent and evolving threats.

2. Related Work

This section reviews the existing literature on cyber resilience assessment, focusing on three key areas: cyber resilience frameworks, endogenous security concepts, and quantitative models. We compare these works with our proposed approach, highlighting the unique contributions of our Cybersecurity Resilience Capacity (RC) model and the P-R-A-L framework.

2.1. Cyber Resilience Frameworks

Numerous frameworks have been developed to assess and enhance cyber resilience across critical infrastructures and organizations. The Cybersecurity and Infrastructure Security Agency (CISA) offers the Cyber Resilience Review (CRR), an interview-based assessment to evaluate operational resilience and cybersecurity practices [6]. The National Institute of Standards and Technology (NIST) provides SP 800-160 Volume 2 Revision 1, focusing on engineering survivable systems through systematic security methods [7]. The European Union’s Network and Information Security (NIS) Directive promotes risk assessments and security measures across member states [8]. Canada’s Regional Resilience Assessment Program (RRAP) evaluates vulnerabilities in critical facilities [9], while the UK’s National Cyber Security Centre (NCSC) provides the Cyber Assessment Framework (CAF) for critical national infrastructure [10]. The EU’s Cyber Resilience Act (CRA) [11] sets cybersecurity standards for digital products, and Australia’s 2023 Critical Infrastructure Resilience Strategy outlines a national approach to security and resilience [12]. In China, the GB/T 44862-2024 standard offers criteria for evaluating cyber resilience, focusing on prevention, endurance, recovery, and adaptation [13]. Additionally, frameworks like the NIST Cybersecurity Framework 2.0 [2] provide a de facto industry baseline. Standards such as ISO/IEC 27035-1:2023 [14] on incident management directly inform the processes within our Adaptive Recovery (A) dimension. Foundational texts on Resilience Engineering by Hollnagel, Woods, and Leveson [15] help ground our model in established systems-engineering theory. Finally, frameworks like ISO/IEC 27001 emphasize information security management [16], and the MITRE ATT&CK framework maps adversary tactics for resilience planning [17].
While these frameworks provide valuable guidance on “what” to do to enhance resilience, they often lack specificity in “how” to quantitatively measure the effectiveness of implemented measures.

2.2. Endogenous Security Concepts

Endogenous security focuses on embedding safety mechanisms within system architectures to address vulnerabilities at their core, providing a robust defense against unpredictable threats. A foundational concept in this domain is Cyberspace Mimic Defense (CMD) based on Dynamic Heterogeneous Redundancy (DHR), which employs diverse component instances and dynamic switching to deter attackers [18]. Recent studies have connected structural security to resilience engineering [19] and elaborated on CMD’s theoretical foundations and applications [20]. Other works have explored endogenous security in contexts such as cryptography for routers and cloud services [21], wireless communications [22], AI-driven cybersecurity [23], and industrial control systems [24]. Applications to IoT [25], blockchain [26], and Security-as-a-Service models in cloud environments [27] further demonstrate its potential.
While endogenous security concepts show promise, challenges in scalability and implementation costs persist, especially for large-scale infrastructures. Our P-R-A-L framework builds on these ideas by translating endogenous security principles into measurable indicators. Specifically, the Destruction Resistance (R) dimension incorporates concepts like “heterogeneous redundancy architecture” and “mimicry-based fault tolerance”, while the Evolutionary Learning (L) dimension includes indicators for “dynamic reconfiguration” and “node-level endogenous adaptation”. By grounding these concepts in quantifiable metrics, our framework bridges the gap between theoretical endogenous security and practical resilience assessment.

2.3. Quantitative Models for Cyber Resilience

Quantitative models provide data-driven approaches to measure cyber resilience, often focusing on resistance and recovery capabilities. Recent advancements include mathematical models validated through testbeds to quantify resilience under adversarial conditions [28]. Other studies propose models accounting for multiple defense objectives and resource criticality [29,30], while reviews highlight limited integration of adaptive mechanisms in existing frameworks [31]. The Cyber Resilience Quantification Framework (CRQF) targets dynamic IT environments [32], and various resilience metrics and scoring practices have been documented [33,34,35]. Additional models include resilience matrices [36], stochastic approaches [37], and game-theoretic models for adversarial interactions [38]. Metrics for IT and OT environments [39] and key resilience indicators [40] have also been proposed. Sources such as the ENISA Threat Landscape report [41] are essential for calibrating the threat entropy term (σ) in our model. Furthermore, the work of Cantelli-Forti et al. [42] provides a concrete application context for our framework within critical infrastructure protection, serving as a target implementation example.
Despite these advances, many quantitative models prioritize traditional risk metrics over dynamic adaptability and endogenous security principles. Our approach addresses these gaps by offering a structured, dynamic, and practical solution through the RC model and P-R-A-L framework. The advantages of our model include the following:
  • Structured Design: Based on four clearly defined dimensions—Prevention (P), Destruction Resistance (R), Adaptive Recovery (A), and Evolutionary Learning (L)—providing a comprehensive view of resilience.
  • Practical Implementation: Each dimension is supported by a clear, measurable set of indicators designed for real-world applicability using mature technologies.
  • Dynamic Nature: The RC model is time-varying, capturing evolving threat landscapes, while the indicators within the P-R-A-L framework support continuous updates to reflect real-time conditions.
By integrating endogenous security concepts, dynamic quantification, and actionable metrics, our framework offers a significant advancement over existing models, ensuring both theoretical robustness and operational feasibility.

3. Model Architecture

This section presents the architecture of our proposed Cybersecurity Resilience Capacity (RC) model, which serves as the theoretical foundation for quantifying a system’s ability to sustain core functions, recover rapidly, and adapt continuously under adversarial threats. The model integrates four core dimensions—Prevention (P), Destruction Resistance (R), Adaptive Recovery (A), and Evolutionary Learning (L)—forming a closed-loop structure that captures the defense-endurance-recovery-evolution continuum. We detail the mathematical formulation of the RC model, explain the interactions between dimensions, and highlight its alignment with established cybersecurity frameworks. Importantly, this section bridges the theoretical model with the practical P-R-A-L framework (elaborated in Section 4), ensuring that the model’s outputs are grounded in measurable, actionable indicators.

3.1. Cybersecurity Resilience Capacity (RC) Model

The RC model quantifies cybersecurity resilience as a dynamic, time-varying metric, translating static risk assessments into continuous resilience trajectories. The model is formulated as follows:
$$ RC(t) = \frac{P(t) \cdot R^{1.2}(t) + A(t) \cdot L^{0.8}(t)}{1 + \sigma \cdot \Delta T} $$
where RC(t) represents the overall resilience capacity at time t, and the variables P(t), R(t), A(t), and L(t) denote the quantified capability scores of the system in the four dimensions: Prevention, Destruction Resistance, Adaptive Recovery, and Evolutionary Learning, respectively. These scores are not arbitrary values but are derived through a comprehensive aggregation of specific, measurable indicators within the P-R-A-L framework, as detailed in Section 4. For instance,
- P(t) is computed as a weighted aggregation of sub-indicators such as threat intelligence accuracy, risk prediction coverage, situational awareness latency, proactive defense effectiveness, and supply chain security maturity.
- R(t) is derived from indicators like redundancy robustness, fault tolerance under attack, and heterogeneous architecture diversity.
- A(t) aggregates metrics such as mean time to recovery (MTTR), recovery automation rate, and post-incident functionality restoration ratio.
- L(t) combines indicators like incident learning rate, policy adaptation frequency, and autonomous strategy optimization effectiveness.
This structured approach ensures that the theoretical model is directly tied to practical, quantifiable assessments, facilitating real-world implementation.
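
To make this aggregation concrete, the following sketch shows one way the RC formula could be evaluated from pre-normalized dimension scores. It is a minimal illustration under stated assumptions, not our calibrated implementation: the sub-indicator weights and the σ and ΔT values below are hypothetical.

```python
def dimension_score(indicators, weights):
    """Weighted aggregation of sub-indicators normalized to [0, 1]
    (hypothetical weights; the calibrated weighting is deployment-specific)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(x * w for x, w in zip(indicators, weights))


def resilience_capacity(p, r, a, l, sigma, delta_t):
    """RC(t) = (P*R^1.2 + A*L^0.8) / (1 + sigma*delta_t), per Section 3.1."""
    return (p * r**1.2 + a * l**0.8) / (1.0 + sigma * delta_t)


# Example: P(t) aggregated from the five sub-indicators named above
p = dimension_score(
    [0.92, 0.85, 0.78, 0.88, 0.70],   # intel accuracy, prediction coverage, ...
    [0.25, 0.20, 0.20, 0.20, 0.15])   # illustrative weights
print(resilience_capacity(p, r=0.91, a=0.66, l=0.85, sigma=0.02, delta_t=1.0))
```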

3.2. Nonlinear Coupling and Dimensional Interactions

The numerator of the RC formula, P(t) · R^{1.2}(t) + A(t) · L^{0.8}(t), incorporates nonlinear coupling mechanisms to reflect the synergistic interactions between defensive (P, A) and evolutionary (R, L) capabilities. The superlinear exponent in R^{1.2} captures the qualitative leap in Destruction Resistance when key thresholds are met. For example, in the P-R-A-L framework, indicators such as “redundancy robustness” and “failure recovery capacity” (aligned with Dynamic Heterogeneous Redundancy (DHR) principles) demonstrate that once a system achieves sufficient redundancy and fault tolerance, its ability to withstand attacks increases disproportionately, reflecting a significant boost in resilience.
Conversely, the sublinear exponent in L^{0.8} models the diminishing marginal returns in Evolutionary Learning. Indicators in the L dimension, such as “automation in repair processes” and “autonomous policy response capabilities”, show rapid improvements in early stages of learning (e.g., initial automation of incident response). However, as the system matures, further gains become incrementally harder to achieve due to complexity and resource constraints, a reality reflected in the sublinear growth.
These nonlinear factors ensure that the model mirrors empirical observations of system behavior, balancing the impact of investments across dimensions and guiding strategic prioritization (e.g., emphasizing redundancy in R for immediate resilience gains versus long-term learning in L).
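
As a concrete illustration of the exponents’ effect, consider doubling a normalized dimension score from 0.4 to 0.8:

$$ \frac{0.8^{1.2}}{0.4^{1.2}} = 2^{1.2} \approx 2.30, \qquad \frac{0.8^{0.8}}{0.4^{0.8}} = 2^{0.8} \approx 1.74 $$

The same relative investment therefore yields a roughly 32% larger multiplier through Destruction Resistance than through Evolutionary Learning, which is why the model steers near-term investment toward redundancy while treating learning gains as cumulative and long-term.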

3.3. Environmental Threat Entropy Variation ( σ )

The denominator term 1 + σ·ΔT introduces a dynamic constraint on resilience capacity, accounting for the environmental threat entropy variation rate (σ) over a specified evaluation time window (ΔT). The threat entropy variation rate is computed as follows:
$$ \sigma = \frac{\Delta H}{\Delta t} \cdot \frac{1}{k_B \ln \Omega} $$
where ΔH represents the incremental entropy of system vulnerability information acquired by the attacker, Ω denotes the number of states in the system’s attack surface (e.g., open ports, services, permission combinations), and k_B is a normalization factor (Boltzmann constant). This formulation leverages Shannon entropy ($H = -\sum_i p_i \ln p_i$) to assess the uncertainty of attack paths and models the attacker’s behavior using a Boltzmann distribution.
The computation of σ is closely tied to indicators in the P dimension of the P-R-A-L framework. For instance, ΔH is influenced by the update frequency and quality of “threat intelligence” and changes in “vulnerability knowledge base coverage”. Similarly, Ω correlates with indicators such as “asset vulnerability scanning coverage rate” and “dynamic attack surface assessment capability”. A spike in ΔH (e.g., due to a new vulnerability disclosure) increases σ, reducing RC(t) and signaling the need for immediate defensive action. The time window ΔT balances short-term responsiveness (e.g., hourly assessments for real-time threats) with long-term trends (e.g., monthly evaluations for Evolutionary Learning), ensuring that the model adapts to both acute and chronic threat dynamics.
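
The sketch below illustrates one plausible way to estimate σ from attack-surface telemetry, assuming per-path probabilities can be estimated (e.g., from vulnerability scan and threat intelligence data). The probabilities, sampling interval, and state count are illustrative assumptions, not values from our assessment.

```python
import math

def shannon_entropy(path_probs):
    """H = -sum(p_i * ln p_i) over estimated attack-path probabilities."""
    return -sum(p * math.log(p) for p in path_probs if p > 0)

def threat_entropy_rate(h_prev, h_curr, dt_hours, n_states, k_b=1.0):
    """sigma = (dH/dt) / (k_B * ln(Omega)); k_B treated here as a unit
    normalization factor, as described in the text."""
    return ((h_curr - h_prev) / dt_hours) / (k_b * math.log(n_states))

# Example: a new vulnerability disclosure adds plausible attack paths,
# so attacker-side entropy rises over a 24 h window.
h0 = shannon_entropy([0.5, 0.3, 0.2])
h1 = shannon_entropy([0.35, 0.25, 0.20, 0.10, 0.10])
sigma = threat_entropy_rate(h0, h1, dt_hours=24, n_states=113)
print(f"sigma = {sigma:.4f} per hour")  # feeds the 1 + sigma*delta_T denominator
```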

3.4. Alignment with Cybersecurity Standards

The RC model and P-R-A-L framework are designed to align with global and national cybersecurity standards, ensuring compatibility and enhancing practical adoption. Specific mappings include the following:
  • NIST SP 800-37 PDR Cycles: The P dimension corresponds to the Prepare and Protect phases, focusing on preemptive threat mitigation. R aligns with the Detect and Respond phases, emphasizing system endurance under attack. A maps to Respond and Recover, prioritizing rapid restoration, while L connects to Identify and Prepare for long-term adaptation through learning.
  • GB/T 44862-2024: The four dimensions (P, R, A, L) directly map to the standard’s capability domains of prevention, endurance, recovery, and adaptation. Indicators within the P-R-A-L framework serve as evidence and quantifiable means to meet the standard’s requirements for dynamic risk control and resilience assessment.
  • MITRE ATT&CK Threat Taxonomy: Indicators in the P dimension, such as “threat hunting behavior library” and “attacker intent inference”, and in the R dimension, such as “dynamic attack containment”, directly leverage MITRE’s tactics and techniques for mapping and quantification. Additionally, the computation of σ incorporates attack cost (C_attack) metrics aligned with ATT&CK’s adversarial behavior models to quantify real-time threat entropy.
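
Where these mappings need to be auditable, they can be kept as machine-readable configuration. The sketch below encodes the correspondences described above; the phase labels are taken from the text, while the data structure itself is an illustrative choice.

```python
# Dimension-to-standard mapping from Section 3.4 (illustrative encoding).
PRAL_STANDARD_MAP = {
    "P": {"NIST SP 800-37": ("Prepare", "Protect"),  "GB/T 44862-2024": "prevention"},
    "R": {"NIST SP 800-37": ("Detect", "Respond"),   "GB/T 44862-2024": "endurance"},
    "A": {"NIST SP 800-37": ("Respond", "Recover"),  "GB/T 44862-2024": "recovery"},
    "L": {"NIST SP 800-37": ("Identify", "Prepare"), "GB/T 44862-2024": "adaptation"},
}

def phases_for(dimension, standard="NIST SP 800-37"):
    """Look up which lifecycle phases a P-R-A-L dimension maps to."""
    return PRAL_STANDARD_MAP[dimension][standard]

print(phases_for("A"))  # ('Respond', 'Recover')
```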

3.5. Dynamic Resilience Portrait

The four dimensions together produce what we term a dynamic resilience portrait: rather than a single static score, the model yields a time-evolving trajectory of RC(t) shaped by both defensive investments and adversarial pressures. Preventive Capacity (P) and Adaptive Recovery (A) represent proactive and reactive defense mechanisms, respectively, whereas Destruction Resistance (R) and Learning Evolution (L) serve as stabilizing and evolutionary forces. Plotting the four dimension scores and the aggregate RC(t) over successive evaluation windows exposes imbalances (e.g., strong prevention paired with weak recovery) and reveals whether defensive adaptations are actually improving the trajectory. Section 4 details the measurable indicators and quantification methods for each dimension.

4. Framework Dimensions and Measurement

To comprehensively capture the dynamic nature of cybersecurity resilience, the proposed framework decomposes system capabilities into four interdependent dimensions that collectively form a closed-loop threat response continuum. These dimensions—Preventive Capacity (P), Destruction Resistance (R), Adaptive Recovery (A), and Learning Evolution (L)—are designed to address adversarial threats across their entire continuum: from pre-attack interception to post-incident adaptation. Each dimension is parameterized to reflect its unique contribution to resilience while accounting for nonlinear interactions and environmental dynamics. The Prevention and recovery capabilities (P, A) represent proactive and reactive defense mechanisms, respectively, while resistance (R) and learning (L) act as stabilizing and evolutionary forces. Together, they form a spatiotemporally adaptive system that balances immediate threat mitigation with long-term strategic improvement, ensuring resilience is measured not as a static score but as an evolving trajectory shaped by both defensive investments and adversarial pressures. The following subsections detail the mathematical and operational foundations of each dimension.

4.1. Preventive Capacity Index (P)

The Preventive Capacity Index (P) quantifies a system’s ability to proactively intercept threats before they materialize into incidents, serving as a foundational component of the Resilience Capacity (RC) model. It emphasizes anticipating and mitigating risks through architectural design, defense mechanisms, and threat intelligence to minimize the attack surface and disrupt adversarial tactics early.
The index, denoted as P(t), integrates sub-dimensions into a cohesive metric reflecting proactive defense capabilities, capturing both static structural properties and dynamic operational conditions. It is expressed as follows:
$$ P(t) = \frac{1}{1 + e^{-\lambda \cdot AD(t)}} \cdot \prod_{i=1}^{n}\left(1 - p_i^{k_i}\right) \cdot \log_2\left(1 + \frac{TIQ(t)}{VW(t)}\right) $$
Here, λ is the learning rate for architectural diversity AD(t) (values in (0, 1)), k_i is the number of variant components in the i-th defense layer, p_i is the failure probability of components in that layer, TIQ(t) is the threat intelligence quality index, and VW(t) represents the vulnerability window exposure volume. The multiplicative structure ensures balanced system improvement.
The architectural diversity term, $\frac{1}{1 + e^{-\lambda \cdot AD(t)}}$, uses a sigmoidal function for nonlinear scaling of defensive effectiveness with structural variation. AD(t) is defined as follows:
$$ AD(t) = \frac{1}{3}\left[\frac{N_v(t)}{N_{total}} + \frac{\ln(1 + f_{update})}{\ln 10} + \frac{1}{1 + e^{0.1 \cdot \Delta t_{adjust}}}\right] $$
where N_v(t) is the number of variant components, N_total is the total, f_update is the policy update frequency (updates/hour), and Δt_adjust is the system adjustment time (seconds). This aggregates component variation (measurable via system inventories or CMDBs by calculating variant-to-total component ratios), policy update velocity (logarithmic scaling for diminishing returns), and adjustment responsiveness (exponential decay rewarding rapid reconfiguration).
The defense mechanism multiplier, $\prod_{i=1}^{n}(1 - p_i^{k_i})$, models layered defenses as probabilistic barriers, where diversified components exponentially decrease failure probability, grounded in reliability theory. Practically, efficacy is measured by indicators like virus signature database synchronization rates (from EDR/antivirus logs) and asset vulnerability scanning coverage (comparing reports from tools like Nessus or OpenVAS with CMDB asset counts).
The threat intelligence efficiency term, $\log_2\left(1 + \frac{TIQ(t)}{VW(t)}\right)$, reflects the bounded growth of intelligence-driven defense, with the logarithmic form capturing diminishing returns as vulnerability windows shrink, based on information theory. This is assessed via metrics like the timeliness and accuracy of threat feeds (from SIEMs or threat intelligence platforms).
Operationalizing P relies on measurable indicators using mature, accessible technologies. This practical approach, leveraging common data sources (log management, security device reports) and allowing flexible tool implementation (e.g., Nessus or Qualys for scanning), ensures the index is actionable. The focus on easily collected metrics (update compliance, scan coverage) minimizes overhead while the multiplicative interaction in P ( t ) promotes holistic improvement, aligning with observations that balanced preventive investments yield greater resilience.
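
A minimal sketch of the P(t) computation under the definitions above; λ, the layer parameters, and the TIQ/VW inputs are hypothetical values chosen only to exercise the formula.

```python
import math

def architectural_diversity(n_variant, n_total, f_update, dt_adjust):
    """AD(t): average of component variation, policy update velocity,
    and adjustment responsiveness, per the AD(t) equation above."""
    return (n_variant / n_total
            + math.log(1 + f_update) / math.log(10)
            + 1 / (1 + math.exp(0.1 * dt_adjust))) / 3

def preventive_capacity(ad, layers, tiq, vw, lam=0.5):
    """P(t) = sigmoid(lam*AD) * prod(1 - p_i**k_i) * log2(1 + TIQ/VW).

    layers: (p_i, k_i) pairs per defense layer; lam is hypothetical."""
    sigmoid = 1 / (1 + math.exp(-lam * ad))
    barrier = 1.0
    for p_i, k_i in layers:
        barrier *= 1 - p_i ** k_i   # diversified layers rarely all fail at once
    return sigmoid * barrier * math.log2(1 + tiq / vw)

ad = architectural_diversity(n_variant=8, n_total=32, f_update=2.0, dt_adjust=30)
print(preventive_capacity(ad, layers=[(0.10, 3), (0.20, 2)], tiq=0.8, vw=0.5))
```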

4.2. Destruction Resistance Index (R)

The Destruction Resistance Index (R) evaluates a system’s ability to endure and maintain functionality under sustained adversarial pressure by focusing on absorbing damage through resource redundancy and defensive countermeasures. The index captures the dynamic tension between a system’s internal strengths and external threats. It evolves over time to reflect both sudden shocks and gradual erosion, allowing it to quantify endurance against both acute and chronic stressors.
Theoretically, R(t) is modeled as a nonlinear survival function integrating resource resilience and adversarial dynamics:
$$ R(t) = R_0 \cdot e^{\int_0^t \left(\alpha \cdot \frac{S_{redu}}{S_{crit}} - \beta \cdot \frac{C_{attack}}{C_{defense}}\right) dt'} $$
Here, R_0 is the baseline resilience coefficient (tied to initial redundancy/fault tolerance). The integrand combines a growth factor from resource redundancy (α · S_redu/S_crit, where S_redu is the excess capacity over the critical threshold S_crit, and α is calibrated by empirical data on distributed systems) and a decay factor from adversarial pressure (β · C_attack/C_defense, where C_attack quantifies attack complexity (e.g., via MITRE ATT&CK tactics) and C_defense reflects defensive coverage (0–1), with β balancing attack sophistication against defense breadth based on historical incident data). The exponential form captures nonlinear degradation when defensive gaps widen. The integral allows R(t) to account for both instantaneous impacts and cumulative wear (e.g., DDoS resource exhaustion), offering a dynamic alternative to static metrics like MTBF by incorporating adversarial adaptation and system fatigue.
Practically, R is operationalized through measurable indicators for resource redundancy, defense coverage, and system availability under stress, using mature technologies. Resource redundancy is assessed by indicators like redundant component availability (percentage of critical systems with backups/failover, measured via tools like Zabbix, Prometheus, or Nagios using system logs/CMDBs) and dual-active system switchover success rate (from load balancer logs like F5/HAProxy or drill reports). Defense coverage is quantified by the MITRE ATT&CK coverage ratio, comparing implemented defenses to known tactics using data from SIEMs (e.g., Splunk, Elastic Stack) by mapping controls to attack patterns via security policies and audit logs. Flexibility in tooling is prioritized.
These indicators inform the computation of S_redu, S_crit, C_attack, and C_defense for estimating R(t). Raw metrics are collected, normalized (e.g., to a [0, 1] scale), and combined via weighted ratios aligned with the model. For instance, 90% redundant component availability and a 0.8 MITRE ATT&CK coverage ratio normalize to 0.9 and 0.8, respectively, for integration. This methodology ensures practical, adaptable measurements leveraging existing systems.
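
Under those normalizations, R(t) can be approximated by discretizing the integral over the evaluation window. The sketch below assumes hourly samples of the four inputs; α, β, and the telemetry values are hypothetical calibration choices, not the constants used in our case study.

```python
import math

def destruction_resistance(r0, samples, alpha=0.02, beta=0.05):
    """R(t) = R0 * exp( integral of [alpha*S_redu/S_crit - beta*C_att/C_def] dt ),
    approximated as a left Riemann sum over (dt, s_redu, s_crit, c_att, c_def)."""
    integral = sum(dt * (alpha * s_redu / s_crit - beta * c_att / c_def)
                   for dt, s_redu, s_crit, c_att, c_def in samples)
    return r0 * math.exp(integral)

# Example: one day of hourly telemetry with 0.9 redundancy availability,
# 0.8 ATT&CK coverage, and a modest attack complexity of 0.3.
day = [(1.0, 0.9, 1.0, 0.3, 0.8)] * 24
print(destruction_resistance(r0=1.0, samples=day))  # slight net decay
```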

4.3. Adaptive Recovery Index (A)

The Adaptive Recovery Index (A) quantifies a system’s capability to restore functionality post-disruption, evaluating the interplay between the diversity and effectiveness of adaptive strategies and the need for rapid restoration. It offers a dynamic, time-sensitive measure of recovery performance, balancing strategic adaptability with temporal efficiency to mitigate downtime.
Theoretically, A(t) is rooted in systems recovery dynamics and diminishing returns, expressed as follows:
$$ A(t) = A_{max} \cdot \frac{\ln(1 + \omega \cdot N_{adapt})}{\ln(1 + \omega \cdot N_{max})} \cdot \frac{1}{1 + \tau \cdot t_{recovery}} $$
This formulation balances two principles: (1) recovery effectiveness grows sublinearly with the number of adaptive strategies (N_adapt out of a theoretical maximum N_max) due to saturation, modeled by the logarithmic term with coefficient ω = 0.1 (derived from empirical recovery drill data showing diminishing returns); and (2) recovery latency (t_recovery) imposes a hyperbolic penalty (calibrated by τ = 0.05 per second to reflect escalating downtime costs), reflecting the disproportionate impact of delays. The multiplicative structure ensures that neither adaptability nor speed alone dominates, reflecting real-world trade-offs.
Practically, A(t) is operationalized through measurable indicators for adaptability scope and recovery latency, using mature technologies. Adaptability scope is measured by Recovery Strategy Coverage (N_adapt/N_max), using data from disaster recovery plans or tools like Ansible Tower or ServiceNow. Recovery latency is assessed via Actual Recovery Time against Recovery Time Objectives (RTO), sourced from drill logs or monitoring platforms (e.g., SolarWinds, Datadog). Supporting indicators include Backup Restoration Success Rate (from backup solutions like Veeam or Acronis) and Recovery Verification Compliance (via audit logs or ticketing systems like Jira). Data collection leverages existing infrastructure (backup systems, monitoring dashboards, ITSM platforms) with methodological flexibility (e.g., Commvault instead of Veeam). Aggregation involves collecting raw data, normalizing it (e.g., t_recovery inverted), and computing A(t) with theoretical weights, ensuring practical applicability while balancing strategy diversity and recovery speed.
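
The aggregation described above can be expressed directly; in the sketch below, ω and τ take the values stated in the text, while A_max and the drill results are illustrative inputs.

```python
import math

def adaptive_recovery(n_adapt, n_max, t_recovery_s, a_max=1.0, omega=0.1, tau=0.05):
    """A(t) = A_max * ln(1+w*N_adapt)/ln(1+w*N_max) * 1/(1+tau*t_recovery)."""
    coverage = math.log(1 + omega * n_adapt) / math.log(1 + omega * n_max)
    latency_penalty = 1 / (1 + tau * t_recovery_s)   # hyperbolic downtime cost
    return a_max * coverage * latency_penalty

# Example: 12 of 20 documented recovery strategies exercised,
# 60 s measured recovery time in the most recent drill.
print(adaptive_recovery(n_adapt=12, n_max=20, t_recovery_s=60))
```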

4.4. Learning Evolution Index (L)

The Learning Evolution Index (L) quantifies a system’s capacity to enhance its security posture through iterative learning from past incidents and environmental changes. It focuses on the adaptive improvement phase, measuring how effectively a system transforms experiential data into actionable defensive capabilities by modeling the interplay of knowledge retention, temporal relevance, and organizational adaptability for evolutionary progress.
Theoretically, L(t) captures knowledge accumulation and decay:
$$ L(t) = L_0 + \lambda \sum_{i=1}^{n} e^{-\gamma (t - t_i)} \cdot \left(1 - \frac{1}{1 + \eta \cdot \Delta E_i}\right) $$
L_0 is the initial learning capacity. The summation reflects time-weighted learning from discrete events. The temporal decay e^{-γ(t − t_i)} (decay rate γ, e.g., 0.01/hour, from dK_i/dt = −γK_i, where K_i is retained knowledge) models knowledge obsolescence, balancing historical lessons with recent insights. The experiential gain 1 − 1/(1 + η·ΔE_i) uses a sigmoidal function for bounded learning benefits from the event experience increment ΔE_i, with sensitivity η; this approximates logistic growth (dG_i/dΔE_i = ηG_i(1 − G_i)), reflecting diminishing returns. The learning efficiency λ ∈ (0, 1) scales accumulated knowledge based on organizational factors. This captures the dynamic tension between knowledge acquisition, retention, and obsolescence.
Practically, L ( t ) is operationalized via indicators for system self-healing, policy adaptation, and knowledge integration, using existing enterprise tools. System self-healing is measured by the success rate of automated recovery actions (from logs of Ansible, Puppet, Nagios; or incident dashboards). Policy adaptation is assessed by metrics like update frequency of security configurations or credentials post-incident (from IAM audit trails or configuration management tools), reflecting adaptive responsiveness (e.g., time to update ACLs). Knowledge integration measures the translation of post-incident reviews (from ServiceNow or manual reports) into actionable system updates (frequency and impact of after-action reports leading to changes).
Implementation involves automated data collection (from SIEMs for incidents, CMDBs for changes, ticketing systems for reviews), normalization of raw metrics (e.g., self-healing success as a percentage, policy update cycles in days), and aggregation into a unified L(t) score with customizable weighting. This adaptable framework, allowing tool substitution (e.g., Splunk for SIEM), ensures L(t) is actionable and relevant by grounding measurements in mature technologies and existing workflows.
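
A minimal sketch of the L(t) computation; γ follows the 0.01/hour example in the text, while L_0, λ, η, and the incident history below are hypothetical inputs.

```python
import math

def learning_evolution(l0, events, t_now, lam=0.5, gamma=0.01, eta=0.3):
    """L(t) = L0 + lam * sum_i exp(-gamma*(t-t_i)) * (1 - 1/(1+eta*dE_i)).

    events: (t_i, delta_E_i) pairs, i.e., incident time (hours) and the
    experience increment extracted from its post-incident review."""
    gain = sum(math.exp(-gamma * (t_now - t_i)) * (1 - 1 / (1 + eta * d_e))
               for t_i, d_e in events)
    return l0 + lam * gain

# Example: three reviewed incidents over the past month (t in hours);
# the most recent event contributes most, per the decay term.
events = [(120.0, 2.0), (480.0, 1.0), (696.0, 3.0)]
print(learning_evolution(l0=0.5, events=events, t_now=720.0))
```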

5. Empirical Validation: A Case Study

To validate the practical utility and effectiveness of the RC model and the P-R-A-L framework, a comprehensive resilience assessment was conducted on a real-world critical infrastructure system. This section details the case study methodology, presents the quantitative results, and discusses the actionable insights generated by the framework.

5.1. System Under Test and Methodology

The System Under Test (SUT) was a large-scale digital identity management platform developed for a municipal government, hosting sensitive data for approximately one million users. The platform is a critical piece of public infrastructure, hosted as a tenant on a major public cloud provider. Its multi-tier architecture consists of numerous servers for client-facing applications, transactions, and backend gateways, all running on CentOS 7.7, supported by MySQL and Redis databases, and serving mobile applications.
The assessment was operationalized using a hybrid methodology that combined non-technical document reviews (21 items) with technical testing and evaluation (18 items) to gather quantitative data for the P-R-A-L indicators. A suite of mature security tools was employed to measure specific indicators. For instance, capabilities in the Prevention (P) dimension, such as threat data monitoring, were assessed using tools like Retina and Spiderfoot to simulate malicious traffic and evaluate asset discovery. To quantify Destruction Resistance (R), tools like BloodHound and Nemesis were used to map attack paths and evaluate the system’s ability to discover and block attacks. Finally, indicators for Adaptive Recovery (A) were tested by probing for damage isolation and resource recovery mechanisms, while Evolutionary Learning (L) was evaluated by simulating attack scenarios to test the system’s intelligent control and decision-making capabilities.

5.2. Resilience Assessment Results and Analysis

The assessment yielded a holistic, quantitative view of the system’s resilience posture, with an overall Resilience Capacity (RC) score of 82.75. This score is derived from individual scores in four dimensions, as shown in Figure 2 and Table 1, revealing highly unbalanced resilience characteristics:
  • Preventive Capacity (P): 90/100. The system demonstrated strong capabilities in identifying its 10 critical business services and 32 key assets. It had automated threat warning mechanisms in place. However, the assessment also uncovered risks, including security blind spots in 10% of its infrastructure, incomplete API logging, and the use of outdated third-party components with known vulnerabilities.
  • Destruction Resistance (R): 91/100. This was the system’s strongest dimension. The attack surface was well-managed, with only 10 out of 113 ports (9%) exposed to the internet. The system also had capabilities for threat intelligence collection and analysis. The primary weakness identified was that 5% of the cloud application’s security and access logs were not completely recorded, potentially impeding real-time attack analysis.
  • Adaptive Recovery (A): 66/100. The framework identified a critical deficiency in this dimension. The assessment found that the system lacked defined processes for analyzing and responding to security incidents, had no effective isolation between its modular components to contain damage, and possessed no redundancy, backup, or self-recovery capabilities. This low score highlighted a significant risk of prolonged downtime and data loss in the event of a successful attack.
  • Learning Evolution (L): 85/100. The system showed a good capacity for evolution, with some methods for upgrading functional components. However, it lacked comprehensive threat simulation, assessment, and attack traceability mechanisms. A key systemic risk was its reliance on an end-of-life operating system (CentOS), for which the cloud provider did not offer an automated upgrade path, severely limiting the system’s long-term adaptability and exposing it to unpatched vulnerabilities.

5.3. Actionable Insights and Framework Validation

This case study demonstrates that the P-R-A-L framework moves beyond a simple vulnerability checklist to provide a holistic, quantitative assessment that identifies both strengths and critical, systemic weaknesses. The dimensional scores pinpointed a severe imbalance, with strong proactive defenses (P and R) but a critically weak reactive posture (A). A good assessment tool should uncover precisely these kinds of critical, non-obvious imbalances. The framework worked as intended by diagnosing a severe weakness that might be overlooked by traditional assessments focused only on preventive controls.
The framework generated specific, prioritized recommendations directly tied to the findings. The most urgent recommendation was to address the critical deficiency in the Adaptive Recovery (A) dimension by establishing formal incident response procedures, implementing robust module isolation, and deploying automated, heterogeneous backup and recovery mechanisms. For the other dimensions, the framework recommended implementing a patch management process for third-party components (P), ensuring comprehensive and consistent log aggregation (R), and developing an automated defense framework and a clear migration strategy for the unsupported operating system (L).
In conclusion, this case study validates the practical utility of the RC model and P-R-A-L framework in a real-world critical infrastructure scenario. The framework successfully quantified the system’s resilience posture and delivered specific, data-driven insights to guide targeted strategic improvements. As demonstrated, the framework’s ability to identify non-obvious systemic weaknesses like a poor reactive posture, despite strong defenses, provides insights that would be difficult to identify using traditional risk assessment methods.

6. Discussion, Limitations, and Future Work

The Cybersecurity Resilience Capacity (RC) model introduces a dynamic framework for evaluating critical infrastructure resilience by integrating four dimensions—Prevention (P), Destruction Resistance (R), Adaptive Recovery (A), and Evolutionary Learning (L). Unlike traditional static risk assessments, its time-evolving approach provides a continuous resilience trajectory. This aligns with established standards like NIST SP 800-37, GB/T 44862-2024, and the MITRE ATT&CK taxonomy. As demonstrated in our case study, the framework’s primary strength lies in its ability to uncover critical, non-obvious imbalances, such as a weak reactive posture, that traditional methods might miss. To enhance the framework’s applicability to legacy and resource-constrained systems, we suggest a phased implementation approach. An organization could begin by strengthening the Prevention (P) and Adaptive Recovery (A) dimensions, which rely more on established processes, before gradually incorporating the more architecturally demanding aspects of Destruction Resistance (R) and Evolutionary Learning (L). This makes the framework more inclusive and provides a practical adoption path for a wider range of organizations.

6.1. Limitations of the Current Study

Despite its potential, we acknowledge several limitations that frame the scope of our current findings and guide future work.
  • Limited Validation Scope: The validation scope of the current study is limited. A single case study, while providing deep insights, does not permit broad generalizability. The conclusions drawn are specific to the system under test, though the methodological value of the framework is generalizable.
  • Parameter Uncertainty: Accurate calibration of the parameterized coefficients (λ, α, β, γ, η) is crucial but complex due to diverse system architectures and threat profiles, risking distorted assessments if they are mis-tuned. This is a key challenge for practical implementation.
  • Entropy Oversimplification: The current scalar definition of environmental threat entropy (σ) may oversimplify multi-vector or multi-stage attacks, potentially underrepresenting true resilience demands in complex scenarios.
  • Implementation Burden: Calculating RC(t) in real time for large, intricate infrastructures can be computationally intensive, posing scalability issues for some organizations.

6.2. Future Research Directions

Future research should aim to overcome these limitations and enhance the model’s impact. Most importantly, the model requires extensive real-world validation beyond simulations to confirm its operational effectiveness and build credibility. Promising avenues include the following:
  • Broad-Spectrum Validation: Future research is needed to validate the model across different sectors and architectures and against comparative baselines. We plan to partner with industry stakeholders on pilot programs in operational settings (e.g., transportation, healthcare) to generate empirical performance data and refine predictive accuracy.
  • Automated Parameter Calibration: To address parameter uncertainty, future work should focus on employing machine learning techniques, such as reinforcement learning, to dynamically adjust coefficients based on historical data and system feedback, improving precision and adaptability.
  • Advanced Threat Modeling: To overcome the simplification of threat entropy, we suggest developing more sophisticated threat models, such as using attack graph-based approaches or Bayesian networks to better account for complex attack scenarios and allow for disaggregation of σ into subsystem-specific metrics.
  • Scalability and Optimization: Research into optimizing the real-time calculation of RC(t) is needed to address the potential implementation burden and improve scalability for large-scale systems.
  • Interdependency Analysis: Extending the model to include interdependencies between critical infrastructures (e.g., using network theory or system dynamics) to provide a more holistic, systemic resilience metric.
  • Economic Dimension: Developing cost-benefit analyses for resilience investments to guide resource allocation and make the framework more actionable for budget-conscious operators.
In summary, the RC model provides a strong foundation for assessing cybersecurity resilience. Pursuing these directions can evolve the framework into a cornerstone of next-generation resilience engineering.

7. Conclusions

This paper introduces the Cybersecurity Resilience Capacity (RC) model, a novel framework designed to quantify a system’s dynamic ability to sustain core functions, recover from disruptions, and adapt to evolving threats. By integrating four interdependent dimensions—Prevention (P), Destruction Resistance (R), Adaptive Recovery (A), and Evolutionary Learning (L)—within a closed-loop structure, the model captures nonlinear interactions and temporal dynamics of resilience. The inclusion of an environmental threat entropy variation rate (σ) as a dynamic constraint ensures responsiveness to changing threat complexities, offering a “dynamic resilience portrait” that surpasses static risk scoring.
The primary contributions are a comprehensive, mathematically grounded framework that translates static security postures into time-evolving trajectories for real-time adaptation; alignment with key standards (NIST SP 800-37, GB/T 44862-2024, MITRE ATT&CK) for practical adoption; and an emphasis on mature technologies and flexible implementation for feasibility and accessibility.
Practically, the RC model provides actionable insights for optimizing resource allocation, prioritizing strategic investments, and triggering timely defensive adaptations. This enhances an organization’s ability to withstand and recover from cyber incidents, offering a versatile tool for navigating complex modern threat environments across various sectors.
While this work establishes a robust foundation, future efforts will focus on addressing limitations such as parameter tuning and multi-system dependencies, alongside exploring integrations with emerging technologies and broader applications. Continued refinement and validation through empirical studies and industry collaborations aim to develop next-generation cybersecurity strategies, empowering organizations to thrive amidst persistent and evolving threats.

Author Contributions

Conceptualization, M.L. and Y.L.; methodology, M.L. and P.C.; software, M.L. and C.T.; validation, M.L. and C.T.; formal analysis, M.L.; investigation, M.L.; resources, M.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L.; visualization, M.L.; supervision, P.C.; project administration, M.L. and S.C.; funding acquisition, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2023YFB3107404.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available because of the confidentiality principle. Requests to access the datasets should be directed to luomingyu2002@gmail.com.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Von Briel, F.; Davidsson, P.; Recker, J. Digital technologies as external enablers of new venture creation in the IT hardware sector. Entrep. Theory Pract. 2018, 42, 4–69. [Google Scholar] [CrossRef]
  2. National Institute of Standards and Technology (NIST). Cybersecurity Framework 2.0. NIST Cybersecurity White Paper (CSWP) 29. 2024. Available online: https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf (accessed on 5 July 2025).
  3. National Institute of Standards and Technology (NIST). NIST Special Publication 800-37: Risk Management Framework for Information Systems and Organizations [EB/OL]. 2025. Available online: https://www.nist.gov/privacy-framework/nist-sp-800-37 (accessed on 5 July 2025).
  4. GB/T 44862-2024; Cybersecurity Technology—Guidelines for Cybersecurity Risk Management [EB/OL]. State Administration for Market Regulation, 2024. Available online: https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=2DD7E3A38F4FD189A8694CE727283BEA&refer=outter (accessed on 5 July 2025).
  5. MITRE. MITRE ATT&CK: A Knowledge Base of Adversary Tactics and Techniques [EB/OL]. 2025. Available online: https://attack.mitre.org (accessed on 5 July 2025).
  6. Cybersecurity and Infrastructure Security Agency. Cyber Resilience Review (CRR): Method Description and User Guide; U.S. Department of Homeland Security: Washington, DC, USA, 2020. Available online: https://www.cisa.gov/resources-tools/resources/cyber-resilience-review-downloadable-resources (accessed on 5 July 2025).
  7. National Institute of Standards and Technology. NIST Special Publication 800-160 Volume 2 Revision 1: Developing Cyber-Resilient Systems; NIST: Gaithersburg, MD, USA, 2021. Available online: https://csrc.nist.gov/pubs/sp/800/160/v2/r1/final (accessed on 5 July 2025).
  8. Directive (EU) 2016/1148 (EU) 2016/1148; Directive (EU) 2016/1148 Concerning Measures for a High Common Level of Security of Network and Information Systems. European Union: Brussels, Belgium, 2016. Available online: https://eur-lex.europa.eu/eli/dir/2016/1148/oj (accessed on 5 July 2025).
  9. Public Safety Canada. Regional Resilience Assessment Program (RRAP); Government of Canada: Ottawa, ON, Canada, 2019. Available online: https://www.publicsafety.gc.ca/cnt/ntnl-scrt/crtcl-nfrstrctr/crtcl-nfrstrtr-rrap-en.aspx (accessed on 5 July 2025).
  10. National Cyber Security Centre. The Cyber Assessment Framework (CAF) [EB/OL]; UK Government: London, UK, 2023. Available online: https://www.ncsc.gov.uk/collection/cyber-assessment-framework (accessed on 5 July 2025).
  11. Cyber Resilience Act (CRA). Proposal for a Regulation on Cybersecurity Requirements for Products with Digital Elements; European Commission: Brussels, Belgium, 2022; Available online: https://www.european-cyber-resilience-act.com/ (accessed on 5 July 2025).
  12. Australian Government. 2023 Critical Infrastructure Resilience Strategy; Department of Home Affairs: Canberra, Australia, 2023. Available online: https://www.homeaffairs.gov.au/about-us/our-portfolios/national-security/security-coordination/critical-infrastructure-resilience (accessed on 5 July 2025).
  13. GB/T 44862-2024; Cybersecurity Technology Cyber Resilience Evaluation Guidelines [EB/OL]. State Administration for Market Supervision and Administration, National Standardization Administration: Beijing, China, 2024. Available online: https://openstd.samr.gov.cn/bzgk/std/newGbInfo?hcno=2DD7E3A38F4FD189A8694CE727283BEA (accessed on 5 July 2025).
  14. ISO/IEC 27035-1:2023; Information Technology—Information Security Incident Management—Part 1: Principles and Process. International Organization for Standardization (ISO): Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/78973.html (accessed on 5 July 2025).
  15. Hollnagel, E.; Woods, D.D.; Leveson, N. (Eds.) Resilience Engineering: Concepts and Precepts; Ashgate Publishing: Aldershot, UK, 2006; Available online: https://erikhollnagel.com/books/resilience-engineering-2006 (accessed on 5 July 2025).
  16. ISO/IEC 27001:2022; Information Security, Cybersecurity and Privacy Protection. International Organization for Standardization: Geneva, Switzerland, 2022. Available online: https://www.iso.org/standard/27001 (accessed on 5 July 2025).
  17. MITRE Corporation. MITRE ATT&CK Framework [EB/OL]; MITRE Corporation: McLean, VA, USA, 2024; Available online: https://attack.mitre.org (accessed on 5 July 2025).
  18. Wu, J. Cyberspace Endogenous Safety and Security. Engineering 2022, 8, 145–149. [Google Scholar] [CrossRef]
  19. Wu, J. Cyber Resilience System Engineering Empowered by Endogenous Security and Safety; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
  20. Wu, J. Cyberspace Mimic Defense: Generalized Robust Control and Endogenous Security; Springer: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
  21. Jin, L.; Hu, X.; Wu, J. From perfect secrecy to perfect safety & security: Cryptography-based analysis of endogenous security. Secur. Saf. 2023, 2, 2023004. [Google Scholar] [CrossRef]
  22. Jin, L.; Hu, X.; Lou, Y.; Zhong, Z.; Sun, X.; Wang, H.; Wu, J. Introduction to wireless endogenous security and safety: Problems, attributes, structures and functions. China Commun. 2021, 18, 88–99. [Google Scholar] [CrossRef]
  23. Sarker, I.H.; Furhad, M.H.; Nowrozy, R. Ai-driven cybersecurity: An overview, security intelligence modeling and research directions. SN Comput. Sci. 2021, 2, 173. [Google Scholar] [CrossRef]
  24. Xu, M.; Wu, J.; Li, Z. Endogenous security design for industrial control systems. IFAC-PapersOnLine 2021, 54, 785–790. [Google Scholar] [CrossRef]
  25. Ying, F.; Zhao, S.; Deng, H. Microservice security framework for IoT by mimic defense mechanism. Sensors 2022, 22, 2418. [Google Scholar] [CrossRef] [PubMed]
  26. Liu, Q.; Wu, J.; Zhang, Y. Endogenous security for blockchain-based supply chain. Comput. Ind. Eng. 2023, 175, 108879. [Google Scholar] [CrossRef]
  27. Wang, J.; Pang, J.; Wei, J. Security-as-a-Service with Cyberspace Mimic Defense Technologies in Cloud. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators, Taiyuan, China, 17–20 September 2021; pp. 129–138. [Google Scholar]
  28. Kott, A.; Linkov, C.G.; Farina, M.J. Quantitative measurement of cyber resilience: Modeling and experimentation. arXiv 2023, arXiv:2303.16307. [Google Scholar] [CrossRef]
  29. Wang, S.; Dai, W.; Li, G.Y. Distributionally robust receive beamforming. arXiv 2024, arXiv:2401.12345. [Google Scholar]
  30. Cadet, X.; Boboila, S.; Koh, E.; Chin, P.; Oprea, A. Quantitative Resilience Modeling for Autonomous Cyber Defense. arXiv 2025, arXiv:2503.02780. [Google Scholar]
  31. Estay, D.A.S.; Sahin, C.; Wade, J. A systematic review of cyber-resilience assessment frameworks. Comput. Secur. 2020, 97, 101996. [Google Scholar] [CrossRef]
  32. AlHidaifi, S.M.; AlGhamdi, M.; Kott, A. Towards a Cyber Resilience Quantification Framework (CRQF) for IT infrastructure. Comput. Netw. 2024, 248, 110446. [Google Scholar] [CrossRef]
  33. Bodeau, D.J.; Graubart, R.; Heinbockel, J. Cyber Resiliency Metrics, Measures of Effectiveness, and Scoring; MITRE Corporation: McLean, VA, USA, 2018.
  34. MITRE Corporation. Cyber Resiliency Metrics and Scoring in Practice: Use Case Methodologies and Examples; MITRE Corporation: McLean, VA, USA, 2021; Available online: https://www.mitre.org/sites/default/files/publications/pr-21-1234-cyber-resiliency-metrics-scoring-practice.pdf (accessed on 5 July 2025).
  35. Snyder, D.; Mayer, L.A.; Weichenberg, G.; Tarraf, D.C.; Fox, B.; Hura, M.; Genc, S.; Welburn, J.W. Measuring Cybersecurity and Cyber Resiliency. 2020. Available online: https://www.rand.org/content/dam/rand/pubs/research_reports/RR2700/RR2703/RAND_RR2703.pdf (accessed on 5 July 2025).
  36. Linkov, I.; Eisenberg, D.A.; Plourde, K.; Seager, T.P.; Allen, J.; Kott, A. Resilience metrics for cyber systems. Environ. Syst. Decis. 2013, 33, 471–476. [Google Scholar] [CrossRef]
  37. Turner, T.E.; Schnell, S.; Burrage, K. Stochastic approaches for modelling in vivo reactions. Comput. Biol. Chem. 2004, 28, 165–178. [Google Scholar] [CrossRef] [PubMed]
  38. Kott, A.; Weisman, M.; Theron, P. Mathematical modeling of cyber resilience. In Proceedings of the 2022 IEEE Military Communications Conference (MILCOM), Rockville, MD, USA, 28 November–2 December 2022; pp. 987–992. [Google Scholar] [CrossRef]
  39. INCIBE-CERT. 46 Metrics to Improve Cyber Resilience in an Essential Service. INCIBE. 2017. Available online: https://www.incibe-cert.es/sites/default/files/contenidos/documentos/en/46-metrics-improve-cyber-resilience-essential-service.pdf (accessed on 5 July 2025).
  40. Bitsight. Top 4 Cyber Resilience Metrics Worth Tracking [EB/OL]. Bitsight Blog. 2024. Available online: https://www.bitsight.com/blog/4-cyber-resilience-metrics-track (accessed on 5 July 2025).
  41. European Union Agency for Cybersecurity (ENISA). ENISA Threat Landscape 2024. 2024. Available online: https://www.enisa.europa.eu/topics/cyber-threats/threat-landscape (accessed on 5 July 2025).
  42. Cantelli-Forti, A.; Capria, A.; Saverino, A.L.; Berizzi, F.; Adami, D.; Callegari, C. Critical infrastructure protection system design based on SCOUT multitech seCurity system for intercOnnected space control groUnd staTions. Int. J. Crit. Infrastruct. Prot. 2021, 32, 100407. [Google Scholar] [CrossRef]
Figure 1. The P-R-A-L closed-loop resilience framework.
Figure 2. Case study resilience profile. The radar chart illustrates the quantitative assessment scores for the system under test across the four resilience dimensions: Preventive Capacity (P = 90), Destruction Resistance (R = 91), Adaptive Recovery (A = 66), and Learning Evolution (L = 85). The asymmetrical shape highlights the system’s imbalanced resilience, with strong proactive defenses but a critical deficiency in its ability to recover from incidents.
Table 1. Detailed resilience assessment results from the case study.

| Dimension | Key Indicator | Measurement Tool/Method | Finding (Raw Data) | Score |
|---|---|---|---|---|
| P: Prevention | Asset Identification | Document Review, Retina | 10 critical services | 90/100 |
| | API Security | Log Analysis, Manual Review | Incomplete logging for some APIs | |
| | Supply Chain Security | Component Scan (e.g., Black Duck) | Outdated third-party components found | |
| R: Resistance | Attack Surface Mgmt | Port Scan (Nmap), Spiderfoot | 10 of 113 ports exposed (9%) | 91/100 |
| | Log Integrity | Review of Cloud Config | 5% of security logs incomplete | |
| A: Recovery | Incident Response Process | Document Review, Interviews | No defined IR process | 66/100 |
| | Damage Isolation | Architectural Review | No effective module isolation | |
| | Backup | System Config Review | No redundancy or backup mechanisms | |
| L: Learning | Attack Traceability | Simulation (Nemesis) | Lacked comprehensive traceability | 85/100 |
| | System Adaptability | OS Version Check | Relies on end-of-life CentOS | |
