Article

Reconciling Tensions in Security Operations Centers: A Paradox Theory Approach

Department of Management & Organisation, School of Business and Economics (SBE), Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(11), 278; https://doi.org/10.3390/bdcc9110278
Submission received: 25 September 2025 / Revised: 20 October 2025 / Accepted: 30 October 2025 / Published: 4 November 2025

Abstract

Security operations centers (SOCs) in both public and private organizations are under pressure as they cope with a surge of cyberattacks, making the reconciliation of inherent organizational tensions a priority. This study surfaces two persistent tensions that have remained underexplored in the cybersecurity literature: (1) expediency versus authority and (2) adaptability versus consistency. The research is based on empirical data collected across three organizational settings: an international consumer packaged goods company, a non-departmental public body based in the Netherlands, and a global managed security service provider. We reveal these tensions not as isolated trade-offs but as paradoxes that must be continuously navigated within SOC operations. Building on both the empirical analysis and Paradox Theory, we develop a conceptual model that explains how SOCs reconcile these tensions through the strategic integration of artificial intelligence (AI), automation, and human expertise. Our model emphasizes that AI and automation do not replace human analysts; rather, they enable a new form of organizational balance through mechanisms such as Dynamic Equilibrium and Iterative Integration. The model demonstrates how SOCs embed technological and human capabilities to simultaneously sustain agility, consistency, authority, and speed. By reframing AI integration as a process of paradox reconciliation rather than as resistance or automation alone, this study contributes new theoretical insight into the sociotechnical dynamics shaping the future of cybersecurity operations.

1. Introduction

Security Operations Centers (SOCs) play a key role in organizations’ ability to manage cyber risks in real time. They act as operational hubs where threats are detected, analyzed, and addressed as they occur [1]. Traditionally, research on SOCs has focused on technical optimization and process design, with emphasis on detection tools, incident workflows, and the work of expert analysts [2,3]. Although this line of research has improved our understanding of how SOCs protect digital systems and clarified performance bottlenecks, it tends to fragment the sociotechnical nature of SOC operations by treating technology, processes, and people as discrete components rather than as interdependent elements [4,5].
More recent studies have started to explore organizational and sociotechnical factors that affect SOC performance [4]. One stream of research examines how AI and automation influence incident detection and response [5,6,7]. For example, explainable AI has been commonly used to increase transparency in alerts and to support trust in automated decisions [8]. These developments demonstrate that technology and human expertise increasingly co-evolve rather than substitute for each other. Together, these studies suggest that SOC performance depends not on any single element, whether AI, automation, or human expertise, but on their continuous interaction as an integrated sociotechnical system. Another stream of research investigates how SOC teams work under pressure using complex tools [9,10]. This includes studies on burnout and alert fatigue [11], as well as research introducing new models of human-AI teaming that combine automation with expert judgment in more flexible ways [11,12].
While we know much about how SOCs deal with complexity and resource limitations [3,4,5,13,14,15], there is far less understanding of how they manage ongoing internal organizational tensions. These tensions include the need for expediency versus the need for authority, or the push for consistent processes versus the requirement to adapt to specific situations. Most research treats these as design problems that can be addressed through improved tools or procedures [15]. In contrast, this study treats such contradictions as enduring paradoxes that must be continuously reconciled rather than permanently resolved.
Following Smith and Lewis, we define paradox as “contradictory yet interrelated elements that exist simultaneously and persist over time” [16]. This view helps explain why problems in SOC coordination cannot simply be solved through automation or additional resources. This aligns with earlier work in organizational theory, which sees paradoxes as potential sources of innovation and learning, not just obstacles [17]. Paradox Theory thus provides a dynamic lens to understand how SOC teams cope with simultaneous pressures for speed, control, flexibility, and reliability in high-velocity environments.
Compared to alternative perspectives such as Ambidexterity and Contingency Theories, Paradox Theory is uniquely suited to this context. Ambidexterity frameworks emphasize sequential or structural separation of conflicting demands (e.g., exploration versus exploitation), while Contingency Theory assumes that optimal structures can be designed to fit specific environments. However, SOC operations require simultaneous management of competing logics, acting fast while maintaining oversight, adapting responses while enforcing standardization. Paradox Theory explicitly addresses this coexistence of contradictions and thus provides a richer lens for examining how SOC professionals navigate these enduring tensions in real time.
This study addresses the following two interrelated research questions:
  • How do Security Operations Centers (SOCs) experience and navigate paradoxical tensions in incident response?
  • What role do AI, automation, and human expertise play in reconciling these tensions in practice?
To answer these questions, we draw on data from a multi-case study of SOCs in both public and private organizations. Our analysis surfaced two key tensions shaping incident response: (1) expediency versus authority and (2) adaptability versus consistency. These tensions are not temporary trade-offs, but persistent structural conditions that professionals must continuously manage.
In this paper, we make several contributions to research on cybersecurity and organizational paradox. First, by bringing Paradox Theory into the SOC domain, we demonstrate how ongoing tensions, not one-time trade-offs, shape daily operations. Second, we propose a five-layer model that explains how human routines, AI, and automation work together to manage these tensions. Third, we show that traditional ambidexterity solutions, such as separating tasks between teams or over time, are often not feasible in SOCs because the conflicting demands must be addressed at once. Lastly, we introduce Ambidextrous Integration, a new concept that describes how SOCs handle competing demands within the same workflows in real time.
Finally, this study anticipates two forms of contribution. From a theoretical standpoint, it extends Paradox Theory to AI-enabled organizational routines by illustrating how paradox reconciliation occurs through sociotechnical integration rather than structural separation. From a managerial standpoint, it offers actionable insight into how CISOs and SOC leaders can embed AI-driven processes that enhance responsiveness without eroding oversight or compliance. Together, these implications establish the relevance of paradox-based thinking for both academic inquiry and cybersecurity practice.

2. Theoretical Background

Research on cybersecurity incident response and Security Operations Centers (SOCs) has traditionally focused on two leading perspectives: technical optimization and organizational implementation [2,3]. The first perspective emphasizes automation, threat detection, and response efficiency, and frames SOC performance primarily as a problem of architectural design and computational improvement. This includes work on intrusion detection systems (IDS), SIEM platforms, and threat intelligence protocols designed to enhance detection accuracy and reduce false positives [5,6,7,8]. The second stream investigates the socio-organizational dynamics of adopting new technologies, highlighting resistance to change, knowledge fragmentation, and analyst fatigue [9,10,11,14]. Together, these perspectives explain how detection and response tools operate, but they rarely show how the human and technological dimensions interact as part of an integrated system.
However, both perspectives overlook a fundamental aspect of SOC operations: the contradictory demands that security teams must manage simultaneously. These tensions are not occasional design flaws but enduring organizational features [18]. For example, SOCs must respond to incidents with speed and agility (expediency and ability to adapt) while simultaneously following predefined protocols and governance structures (authority and consistency). Likewise, they must tailor responses to clients’ or business units’ needs, yet maintain standardized, repeatable practices that ensure efficiency and compliance. Recent empirical work shows that such contradictions persist even in mature SOCs, suggesting that they are not temporary inefficiencies but embedded design features [4,15]. Hence, SOC operations are shaped by paradoxical tensions that cannot be permanently resolved but must be continually managed. This change in understanding is aligned with organizational theory, where paradoxes are perceived as sources of learning and adaptive capability, especially in fast-paced, high-stakes environments like cybersecurity [16,17,19].
To clarify the theoretical rationale, this study adopts Paradox Theory because it explicitly theorizes the simultaneous coexistence of opposing organizational demands rather than their temporal or structural separation. Alternative frameworks, such as Organizational Ambidexterity or Contingency Theory, describe sequential or structural differentiation but rarely capture the concurrent, real-time negotiation of contradictions that defines SOC work. Paradox Theory is therefore better suited to our phenomenon, as SOC analysts must uphold procedural authority while improvising rapid responses under uncertainty, managing both sides at once, rather than alternating between them over time.
Within this framing, organizations do not eliminate tensions once and for all but continuously navigate and regulate them through balancing mechanisms. This balancing is operationalized through two reconciliation strategies based on the paradox literature and our empirical findings: (1) Dynamic Equilibrium and (2) Iterative Integration [16,17]. Dynamic Equilibrium captures the ability of SOCs to address divergent demands simultaneously without leaning towards one side (both/and balance perspective), while Iterative Integration reflects the need for constant refinement and adjustment, especially in variable technical settings (continuous adjustment perspective).
However, while Dynamic Equilibrium emerges directly from Paradox Theory, Iterative Integration is an empirically derived insight from this study. Our case-study data revealed that SOCs manage paradoxes not only through balance but also through repeated learning, adjustment, and procedural refinement over time. Iterative Integration therefore extends Paradox Theory by capturing how paradox navigation unfolds through recursive operational practices in AI-enabled SOC routines.
To complement this perspective, we also draw on the concept of Organizational Ambidexterity, which offers models describing how organizations manage contradictory demands over time. Ambidexterity has been classically operationalized through structural separation (e.g., exploration in R&D, exploitation in operations), temporal cycling (e.g., alternating priorities gradually), or contextual balancing (e.g., empowering people to switch between competing logics) [20,21]. While widely applied in innovation and strategy research, these forms of ambidexterity are less explored in cybersecurity operations and prove limited for capturing simultaneous tensions.
Our empirical study of SOCs reveals that such classic modes of ambidexterity do not fully address the simultaneity and entanglement of tensions in real-time cyber defense. As a result, we use these foundations to develop the notion of Ambidextrous Integration: a configuration in which automation tools, analyst routines, and decision structures are not separated but combined in fluid, co-performing ways. This concept emerges from our data and contributes to both paradox and ambidexterity literature by showing that integration, rather than alternation or separation, becomes the dominant mode of tension management in AI-supported SOCs.
To consolidate these theoretical distinctions and clarify conceptual boundaries, Table 1 compares Organizational Ambidexterity, Paradox Theory, and our proposed construct of Ambidextrous Integration. This comparative synthesis shows that while ambidexterity emphasizes alternation between conflicting goals and Paradox Theory emphasizes their coexistence, Ambidextrous Integration theorizes how these contradictions are enacted and reconciled in real time through the joint agency of humans and AI within SOC operations.
This positioning clarifies that our contribution is not to reassert alternation (ambidexterity) or remain purely interpretive (paradox), but to theorize how SOCs enact reconciliation through integrated, real-time coupling of human expertise and AI-enabled automation.
In this study, AI and automation are not treated as sources of tension but as enablers of paradox reconciliation. AI-powered detection systems and auto-mitigation protocols help sustain Dynamic Equilibrium by accelerating incident response while protecting decision integrity. Meanwhile, adaptive machine learning supports Iterative Integration, allowing SOCs to align with leaders’ priorities and regulatory changes. Thus, SOC analysts, AI systems, and automation function as co-performers within a sociotechnical system that unites human judgment and computational precision at the point of execution.

3. Methods

We conducted an inductive qualitative multi-case study to investigate the mechanisms by which Security Operations Centers (SOCs) reconcile operational paradoxes in incident response. Our aim was to theorize how competing demands are experienced and managed, rather than to test predefined hypotheses. Our approach, grounded in the principles of constructivist grounded theory [26], followed an iterative process of constant comparison and theoretical sampling until conceptual saturation was reached. It enables theory to emerge from systematically gathered empirical data rather than from a priori assumptions and is particularly suited for studying under-theorized phenomena where processual understanding is essential.
Grounded theory has been useful in generating context-specific descriptions and explanations of phenomena within information systems. In our study, it guided coding, memo writing, and progressive comparison between cases to refine emerging categories. This interpretive stance allowed us to capture the lived experiences of cybersecurity professionals and to build mid-range theory grounded in their accounts. By combining Gioia’s inductive coding with Paradox Theory as a sensitizing lens, we bridge empirical emergence with theoretical interpretation. We applied this approach to explore how the persistent tensions identified in SOC environments, particularly expediency versus authority and adaptability versus consistency, are surfaced, interpreted, and reconciled.

3.1. Research Design

Qualitative research allows us to investigate diverse SOC environments in different types of organizations, capturing variation across contexts and enhancing theoretical transferability rather than statistical representativeness. This design follows Yin’s logic of analytic generalization, where findings are extended theoretically rather than statistically [27]. Each case represents a distinct organizational setting and level of technological maturity, allowing both intra-case depth and cross-case comparison. Interview sessions lasted between 32 and 90 min and were conducted either online or onsite, depending on participant availability. All interviews were recorded and transcribed verbatim. Field notes from observations were systematically integrated with interview summaries to ensure convergence between reported and observed behaviors. Ethical approval was obtained from the Vrije Universiteit Amsterdam research board, and all participants provided informed consent. The research team maintained reflexive journals throughout data collection to monitor positionality and potential bias [28].
We purposefully selected cases that varied in sector, size, and SOC model to allow theoretical replication and contrast. This heterogeneity enabled the identification of context-specific paradox manifestations. For instance, authority tensions were more salient in public and regulated contexts, while expediency tensions dominated private-sector SOCs dealing with global operations. This research design allows us to move further beyond descriptive studies of task automation and toward an understanding of how SOCs manage contradictions embedded in organizational roles, accountability, and coordination.

3.2. Research Context

This study includes both local and multinational organizations that operate across a diverse range of industries but share comparable concerns about their security operations. To avoid unnecessary operational details, only key contextual elements are summarized here. The first organization, a multinational consumer packaged goods company operating in 28 countries, integrates IT and OT environments within a unified SOC model supported by an MSSP. The second case, a Dutch non-departmental public body, operates a centralized SOC managing inter-agency coordination for labor market data. The third case is a global MSSP managing 16 SOCs under a distributed 24 × 7 model. Table 2 summarizes these organizations, participant roles, and data sources.
These three cases were selected to maximize contextual variation rather than representativeness. Hence, the insights derived are analytically transferable to organizations facing similar operational paradoxes, rather than statistically generalizable [29].

3.3. Data Collection Process

This study included a combination of data collection methods to ensure a comprehensive dataset that supports the research question. Observations covered daily alert-triage meetings, weekly incident-review boards, and real-time crisis calls. This operational immersion allowed the researchers to capture paradoxes as they unfolded in practice and to trace their recurrence over time, rather than relying solely on retrospective accounts. These longitudinal observations [30] provided real-time evidence of paradox navigation and supported data triangulation.
Organizational documents, such as incident tickets, playbooks, post-incident reports, and dashboards, were analyzed to complement interviews and observations, enabling methodological triangulation.
We followed a hybrid purposive and theoretical sampling strategy [31], combining stratified selection with snowball identification to ensure diversity of organizational roles and contexts. Sampling continued until theoretical saturation was reached, when no new second-order themes emerged.
We conducted 37 interviews with 25 participants, supplemented by 12 follow-ups to deepen insights on paradox reconciliation. All participants provided informed consent before each interview. Data were anonymized and securely stored following the Vrije Universiteit Amsterdam ethical research policy.
This triangulated strategy, combining observation, interviews, and document analysis, allowed paradoxes to be observed in situ, verified through participant interpretation, and contextualized with organizational evidence, thereby increasing construct validity and empirical depth.

3.4. Data Analysis Methods

For data coding and analysis, we used the Gioia methodology [32]. We summarized each interview and extracted relevant quotes; to avoid bias, none of these quotations were rephrased. Initial codes (first-order concepts) were inductively generated from the data, grouped into second-order themes, and further abstracted into aggregate dimensions.
Coding was conducted by the first author and reviewed with co-authors to reach interpretive consensus; data saturation was achieved once no new second-order themes emerged across cases. To ensure coding reliability, two co-authors independently re-coded approximately 15% of transcripts, achieving an intercoder agreement of κ = 0.82 before consensus discussion. Following the conventional interpretation by Landis and Koch [33], this level of agreement indicates “almost perfect” consistency among coders, confirming that the application of first- and second-order codes was stable and replicable. Such reliability checks are widely recommended in qualitative research [34] to enhance transparency and analytical rigor. This iterative and inductive approach facilitated the identification of emergent themes that reflect complex paradoxical tensions in the SOC. The emerging themes were used to construct a conceptual model based on Paradox Theory, highlighting the two core tensions surfaced in all cases: Expediency versus Authority and Adaptability versus Consistency. Each tension was mapped to specific organizational situations, actor narratives, and response strategies, which then informed the theoretical development of reconciliation principles.
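For readers who wish to reproduce the reliability check described above, the agreement statistic can be computed with a few lines of Python; the coder label sequences shown here are hypothetical placeholders standing in for second-order theme assignments, not the study’s data.

# Illustrative only: Cohen's kappa for two coders' theme assignments (hypothetical labels).
from sklearn.metrics import cohen_kappa_score

coder_a = ["E1", "E2", "E1", "C3", "C6", "E4", "C1", "E5", "C4", "E1"]
coder_b = ["E1", "E2", "E1", "C3", "C6", "E4", "C2", "E5", "C4", "E1"]

kappa = cohen_kappa_score(coder_a, coder_b)
# Values between 0.81 and 1.00 are conventionally read as "almost perfect" (Landis and Koch).
print(f"Cohen's kappa = {kappa:.2f}")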
Several steps were taken to ensure that the findings are valid and reliable. First, data triangulation was achieved during the analysis through the convergence of three sources: interviews, direct observations, and organizational documents, which strengthened construct validity [35]. Second, investigator triangulation was applied by involving multiple authors during data interpretation. This allowed for an extensive understanding of the subject matter and reduced the risk of bias. Third, member checks were carried out across all three organizations with a representative subset of participants, allowing validation of interpretations and correction of factual inaccuracies. Verbatim quotes and findings were discussed with the participants after each interview to ensure that their responses were not misinterpreted. Peer reviews and consultations with experts were also conducted to validate the outcomes of the study.
Throughout the analysis, the authors maintained reflexive memos documenting how prior professional experience with SOC operations could influence interpretation. Regular memo reviews and peer debriefings helped separate practitioner knowledge from empirical evidence, thereby reducing researcher bias.
Lastly, we maintained an audit trail linking data excerpts to thematic codes to ensure traceability of interpretation. Interviews provided us with information from practitioners, emphasizing their work experiences and perceptions, while observations enabled us to witness paradoxical tensions within the operational environment, providing a practical perspective that complemented the interview data. Document reviews offered a historical and policy-oriented view that helped corroborate the relevance of the measures highlighted in the conceptual model.
Given the interpretive and context-bound nature of qualitative inquiry, the aim was not statistical generalization but theoretical transferability [28]. The study’s boundaries lie in its focus on mature SOC environments; future research could extend to emerging or SME contexts where paradox navigation may differ.

4. Findings

Analyzing data from security operations centers across public, private, and managed-service contexts, we identified two recurring and deeply rooted tensions in the management of incident response: the Response Expediency–Authority paradox and the Adaptability–Consistency paradox. Both reflect opposing demands that SOC professionals face in their daily work. Rather than episodic problems, these tensions are enduring features of SOC practice that require continuous management.
To structure the analysis, we first identified recurring first-order codes and second-order themes that converged into these two paradoxes. Figure 1 and Figure 2 illustrate the underlying data structure for each paradox, showing how analyst accounts, observed routines, and organizational conditions interlock. For readability, we number themes and use those numbers in the text (Expediency–Authority: E1–E5; Adaptability–Consistency: C1–C6). Each paradox is presented below through an integrated sequence that moves from empirical observation (participant evidence) to analytical interpretation (paradox mechanisms). Each subsection begins with empirical patterns, followed by a short analytical interpretation that links to the Paradox Theory.
Cross-case analysis revealed that while these tensions appeared across all settings, their expression varied with organizational characteristics. In the multinational CPG organization, pressure for rapid resolution was magnified by production-line dependencies and global coordination requirements. In the public-sector case, authority demands dominated due to regulation and multi-agency oversight. The MSSP exhibited both tensions concurrently: service-level agreements required tailored, time-sensitive responses, while standardization ensured consistency across heterogeneous client environments.
We defer theory building on “Ambidextrous Integration” to Section 5 and focus here strictly on empirical patterns and brief paradox interpretations.

4.1. Response Expediency Versus Authority Paradox

The Response Expediency vs. Authority paradox is illustrated in Figure 1. In all cases, incidents unfolded under a double clock: technical time compressed by automation and organizational time paced by approvals. We observed five interdependent patterns (E1–E5): incident-detection urgency, swift remediation requirements, rapid incident resolution, decision-making constraints, and operational dependency [36,37,38].
E1. Incident-detection urgency (tempo compression): AI and automation accelerated anomaly surfacing and alert triage, magnifying this tension: they enhance detection and remediation speed but do not automatically grant the SOC the authority to act. As one responder noted, “Removing any human error decision-making, managing … high-volume alerts received … monitoring … 24/7 … reaction time … an AI can react much faster than a human analyst” (MST, page 8, 19:52). SIEM correlations and behavioral models flagged patterns that would otherwise exceed human attention.
E2. Swift remediation requirements (orchestration load): This urgency is further visible in swift remediation requirements. A CISO explained the breadth of coordination across identities, servers, and application owners: “These activities are communicated to the relevant stakeholders to do specific actions. It might be an administrator of active directory … responsible for the user access management … the team that is managing a specific server that required patches … each case is quite unique” (FPA, page 3, 06:49). Automation pre-generated tasks and deadlines; yet automation and AI do not eliminate the complexity of human coordination, and orchestration still depended on agreement across silos.
E3. Rapid incident resolution (analysis bottlenecks): One analyst reflected that, “Identifying an incident is quite effective … the most difficult part is addressing the incident … time is against you … the attacker continues escalating the privileges … lateral movement … you need to link whether this is the same incident or the same attack … to address it, it takes much more effort” (DMA, page 6, 13:07). Tools reconstructed attack paths and suggested root causes; yet, resolving incidents often depends on real-time analysis, confirmation of the root cause, and strategic choices. Final decisions hinged on confirmation, containment scope, and business risk.
E4. Decision-making constraints (authorization topology): Speed is met with the reality of decision-making constraints. Escalation ladders, risk validation, and regulatory checks slowed execution, especially in the public sector with multi-agency oversight; similar delays arose in the CPG case through plant-level approvals.
E5. Operational dependency (business primacy): The head of platforms described production risks from isolating SAP servers: “For example, you have a server that is sharing their SAP on one production plant. Maybe these guys, they feel that isolate one server instead of all of them and search for them if something has happened to the rest of the servers is not the right way, because this will turn off the production of the plant—so the business is going to challenge the decision to do a deep dive across all of their servers and they would like to accept only the partial system downtime and not full system downtime” (TST, page 4, 09:12). Authority for impactful remediation often sat with operations rather than the SOC. As another expert summarized, fully automatic response remained rare; the CISO explained, “The majority of the response … implementation … remediation activities is taken by the admins … engineers” (FPA, page 5, 9:18). Another expert added “For the response, … it is very risky to provide directly to an assistant the approval to … isolate something, except if it is … 100% confirmed that the threat actor is there and needs to be isolated. So currently, I haven’t seen any organization … given the authority to respond to incidents automatically, except from the ones that the Endpoint Detection Response (EDRs) are blocking them by default, … taking actions afterwards. It is something that the EDR has not detected on the first place and it goes as an action after detecting specific indicators of compromise. I haven’t seen to allow automated actions in any organization” (TST, page 7, 23:04). EDRs act on known patterns; ambiguous cases still require human approval.
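To make the decision boundary described by these interviewees concrete, the following minimal Python sketch illustrates the pattern; the function, fields, and thresholds are hypothetical and do not represent any case organization’s tooling.

# Hedged sketch of an authority gate in automated containment (illustrative, hypothetical names).
from dataclasses import dataclass

@dataclass
class Alert:
    asset: str
    known_ioc: bool          # matched a known indicator of compromise (EDR default-block territory)
    confidence: float        # detection confidence from correlation/ML models
    business_critical: bool  # e.g., a production SAP server shared with a plant

def route_containment(alert: Alert) -> str:
    # Auto-contain only unambiguous, low-impact detections; everything else needs human approval.
    if alert.known_ioc and not alert.business_critical:
        return "auto_isolate"                        # expediency: EDR-style default action
    if alert.confidence >= 0.95 and not alert.business_critical:
        return "auto_isolate_with_notification"
    return "queue_for_human_approval"                # authority: business or plant owners decide scope

print(route_containment(Alert("sap-prod-07", known_ioc=False, confidence=0.88, business_critical=True)))
# -> queue_for_human_approval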
Having outlined these five recurring empirical patterns, we next interpret how they collectively express the core paradox between Expediency and Authority in SOC operations. This misalignment between technical tempo and institutional rhythm exemplifies a core paradoxical dynamic described in Paradox Theory. Across cases, automation compressed “signal time,” but authorization kept “action time” sequential. The paradox thus stems from a tempo mismatch coupled with distributed authority: the SOC is accountable for speed while control over impactful actions is shared. Paradox Theory suggests that such enduring contradictions cannot be resolved outright; instead, they must be managed through what is referred to as “Dynamic Equilibrium”. In Paradox Theory terms, actors continuously hold both demands, such as rapid response and proper oversight, through situated balancing rather than resolution.
The following example from the CPG case illustrates how this paradox materializes during real-time incident response. During lateral movement on a production asset, automated correlation raised priority within minutes. Containment required plant approval; the SOC proposed targeted isolation, whereas operations opted for partial shutdown to preserve output. The incident was contained, but only after a negotiated scope.
From these empirical and analytical insights, we derive two propositions that capture the mechanisms through which expediency and authority are balanced in practice. Proposition 1 (Tempo Mismatch): When automation increases detection speed without commensurate authorization redesign, decision latency becomes the principal bottleneck. Proposition 2 (Distributed Authority): The greater the operational dependency on business systems, the more remediation rights shift away from the SOC, reinforcing the paradox.
These observations show the limitations of classical ambidexterity models. To explain how SOCs navigate this, we propose the concept of Ambidextrous Integration. A fuller account of reconciliation mechanisms is developed in Section 5 (Ambidextrous Integration).

4.2. Incident Response Adaptability-Consistency Paradox

The Incident Response Adaptability–Consistency paradox is illustrated in Figure 2. Across all three cases, SOCs faced a persistent contradiction between the need for standardized, repeatable processes and the simultaneous requirement to adapt to context-specific client environments. The paradox emerged from the coexistence of two opposing yet interdependent logics: the demand for consistency in incident-handling quality and the pressure for adaptability to heterogeneous infrastructures, policies, and risk appetites. We observed six interlinked patterns (C1–C6): consistency in diverse environments, centralized control, scalability of standardized solutions, client-specific adaptation, localized decision-making, and structured customization.
C1. Consistency in diverse environments (baseline discipline): MSSPs relied on uniform operational playbooks to guarantee service reliability across clients. As one of the Microsoft Sentinel cloud security architects mentioned, “SOC processes are really fixed... little flexibility would be needed.” (EAL, Section 5, 14:46). Standard classifications and common reporting practices created a stable foundation for auditability and control.
C2. Centralized control (stability under churn): Turnover and rotating staff challenged continuity. Central triage models, standardized templates, and AI-suggested investigation steps helped maintain baseline quality despite personnel changes. As one incident coordinator reflected, “when the incident is open … we are gathering all the information, building exactly the whole structure of the investigation...” (BMI, page 3, 04:13) “Main challenges usually are, if it happens, is the retention of the people within the organization. So this is the one thing, when the peoples there are leaving and joining the company” (BMI, page, 05:57).
C3. Scalability of standardized solutions (growth pressure): Expanding client portfolios revealed limits in one-size-fits-all playbooks. As an engineer noted, “The first thing that we always do is to make sure that the customer is following common standard procedure” (BMI, page 15:09). Yet predefined standards often require adjustment to new infrastructures or SLAs. Automation and AI mitigated this tension by adapting response thresholds to client parameters and enforcing consistent policy application across heterogeneous environments.
C4. Client-specific adaptation (bounded tailoring): SOCs personalized containment and remediation procedures according to client-specific policies and contractual obligations. “There is a standard process in our company but … for each individual customer the process is changed” (EAL, page 4, 10:59). AI-enabled playbooks facilitated such differentiation, allowing analysts to activate or skip predefined steps depending on regulatory and business contexts.
C5. Localized decision-making (dual ownership): While escalation remained centrally orchestrated, the authority to act often resided with business owners. One participant described, “When we are working with incidents … the incident owners or the business owners are the ones that sometimes need to be convinced that we should be setting down parts of operations in order to have the containment and eradication steps” (DMA, page 5, 11:50). Even when automation provided structured recommendations or projected impact scores, final approvals depended on operational leaders.
C6. Structured customization (bending rules without breaking them): Managers described their approach as flexible yet disciplined: “For all customers … our starting point … is our standard … whenever customers would like to customize … process, way of working … we are making sure … it will not affect … quality … effort and costs” (BMI, page 8, 28:56). Configurable templates and AI-driven orchestration tools embedded this flexibility within guardrails, ensuring that local variations remained traceable and controlled.
Having outlined the six recurring empirical patterns, we now turn to interpret how these data exemplify the underlying paradoxical mechanism. The coexistence of standardization and adaptation exemplifies a core paradoxical dynamic in SOC practice. Across cases, automation served as a mediator that stabilized uniformity while enabling bounded flexibility. Configurable systems, parameterized thresholds, and modular playbooks allowed SOCs to operate within a dynamic equilibrium, where variation occurred inside controlled limits. In Paradox Theory terms, SOCs did not alternate between stability and flexibility but held both demands simultaneously through ongoing recalibration. The paradox thus became a governable tension rather than a problem to solve.
The following brief MSSP vignette illustrates how this paradox materializes in daily SOC operations and how automation mediates its reconciliation. A phishing triage engine applied the same classifier across multiple clients but routed containment actions differently according to SLA tiers and regulatory requirements. Senior analysts overrode automated defaults for high-exposure accounts, preserving both the common baseline and client-specific obligations.
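This vignette can be read as the following kind of routing logic; the tier names, threshold, and function are hypothetical illustrations of the pattern rather than the MSSP’s actual engine.

# Hedged sketch: one shared phishing classifier score, per-client (SLA-tier) containment routing.
SLA_POLICIES = {
    "tier1_regulated": {"auto_contain": False, "notify": ["client_ciso", "soc_lead"]},
    "tier2_standard":  {"auto_contain": True,  "notify": ["soc_lead"]},
}

def triage_phishing(score: float, client_tier: str, high_exposure_account: bool) -> dict:
    # The classifier score is computed the same way everywhere; only the action routing varies.
    policy = SLA_POLICIES[client_tier]
    if score < 0.8:
        return {"action": "monitor", "notify": []}
    if high_exposure_account or not policy["auto_contain"]:
        # Senior analysts override automated defaults for high-exposure accounts or regulated clients.
        return {"action": "escalate_to_analyst", "notify": policy["notify"]}
    return {"action": "quarantine_mailbox", "notify": policy["notify"]}

print(triage_phishing(0.93, "tier2_standard", high_exposure_account=True))
# -> {'action': 'escalate_to_analyst', 'notify': ['soc_lead']}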
Based on this analysis, we derive two theoretical propositions that capture how SOCs balance adaptability and consistency through automation and distributed decision-making. Proposition 3 (Embedded Flexibility): When automation defines boundaries for permissible variation, SOCs can achieve simultaneous standardization and adaptation through configurable processes. Proposition 4 (Dual Accountability): Where decision authority remains distributed between the SOC and business units, localized judgment endures, making adaptation structurally necessary even under strong standardization.
These findings further demonstrate that SOCs manage paradoxes through continuous reconciliation rather than resolution. A fuller explanation of the mechanisms sustaining this equilibrium is developed in Section 5 (Ambidextrous Integration).

5. Navigating Paradox and Enabling Ambidextrous Integration in Security Operations Centers

This section presents the conceptual model (delineated in Figure 3) developed from the empirical data and grounded in Paradox Theory [16,39]. The conceptual model describes how SOCs navigate two interdependent tensions, (1) expediency versus authority and (2) adaptability versus consistency, which emerged as recurrent organizational paradoxes across the three case studies. Whereas prior SOC literature tends to separate technological, procedural, and human challenges [3,14], this study reconceptualizes these as persistent organizational contradictions requiring continuous reconciliation rather than discrete technical fixes [13].
This model does not aim to solve the tensions but to explain how SOCs manage them over time by integrating competing demands into the same operational space. The model builds layer by layer on second- and third-order dimensions derived from empirical data and advances theoretical understanding of how paradoxes are not eliminated but continuously balanced in practice.
To ensure analytical transparency and align terminology across the paper, we use the exact second-order theme labels from Figure 1 and Figure 2 in the mapping displays in Table 3 (Panels A and B) and in the cross-case matrix (Table 4). This provides one-to-one traceability from data (first-order evidence), themes (second-order), and aggregate paradoxes to the conceptual model. Additionally, Table 5 anchors each conceptual layer in concrete empirical examples drawn from the three cases, illustrating how the theoretical mechanisms materialize in practice.
To strengthen the analytic transparency of the model, we explicitly link the Gioia data structure to five theoretical layers representing how empirical observations evolve into theoretical abstraction. The mapping between first-order concepts, second-order themes, and aggregate dimensions was iteratively refined through constant comparison across the three cases until no new themes emerged. Table 3 summarizes this bridge from empirical context (Layer 1) to reconciliation mechanisms (Layers 4–5), demonstrating that the model was progressively abstracted rather than preconceived.
To demonstrate how the conceptual layers are grounded in the data, Table 3 (Panel A) maps the five second-order themes from Figure 1 to their aggregate dimension (Response Expediency vs. Authority) and to the specific model mechanisms each theme activates. This makes explicit how case evidence informs Layer 2 (tensions) and triggers Layer 3 levers and Layer 4 reconciliation.
Complementing Panel A, Table 3 (Panel B) presents the second-order themes for the Adaptability vs. Consistency paradox matching Figure 2 (e.g., “Consistency in diverse environments,” “Centralized control of incident response”). The right-hand columns show how these themes anchor Layer 2 while channeling toward Iterative Integration (Layer 4) or Ambidextrous Integration (Layer 5).
To assess the robustness of these patterns across organizational settings, Table 4 provides a cross-case presence/intensity matrix. Using a descriptive coding legend (●, ●●, ●●●) and source tags (INT/OBS/DOC), the matrix supports analytic replication by showing where each second-order theme is most salient (CPG SOC, MSSP, public body) and which data streams underpin it.
Across the three cases, the expression of paradoxical tensions varied according to organizational mandate and structure. In the MSSP SOC, the Expediency–Authority paradox was most salient because multi-client engagements required rapid triage under contractual time constraints while maintaining layered approval chains. In contrast, the public body SOC emphasized consistency through centralized oversight, which ensured compliance but slowed incident containment. The CPG SOC occupied a middle position, balancing automation-enabled agility with strong business-risk governance. These contextual variations confirm that paradoxes are systemic yet contingent, requiring reconciliation mechanisms that are adaptive to institutional context.
Together, the mappings in Table 3 and the cross-case profile in Table 4 establish the empirical spine of the conceptual model, justifying a five-layer account that proceeds from operational context (Layer 1) to paradoxical tensions (Layer 2), to technology levers (Layer 3), to reconciliation principles (Layer 4), and, finally, to Ambidextrous Integration (Layer 5).
To anchor Figure 3 in practice before detailing each layer, Table 5 provides one empirical instantiation per layer and case (CPG, MSSP, Public), with source anchors (INT/OBS/DOC).
As summarized across Table 3, Table 4 and Table 5, the conceptual model integrates data-derived themes, cross-case variation, and case-specific instantiations. This triangulation establishes the foundation for the five-layer exposition that follows, detailing how SOCs transform paradoxical tensions into a dynamic capability of Ambidextrous Integration.
Layer 1—SOC Operational Context
Each layer of the model is grounded in the collected empirical data presented in our Gioia tables and Findings Sections. The first layer, “SOC operational context”, identifies three foundational elements that shape how tensions emerge and escalate within Security Operations Centers: baseline security controls; the complexity of digital dynamics; and the evolving threat landscape.
These components play a dual role. They provide the necessary structure for stability, such as standardized rules and control mechanisms, while also intensifying the pressure for real-time response. For example, baseline security controls guide procedural integrity but can slow down action in fast-moving situations. In the public-body SOC, rigid baseline controls ensured compliance with governmental audit requirements but delayed escalation when incidents spanned multiple agencies, demonstrating how structural stability heightened the need for flexibility. This illustrates that stability and agility co-evolve rather than substitute for each other, an early manifestation of paradoxical coupling.
The complexity of digital dynamics, such as multi-vendor environments, interconnected systems, and cloud infrastructure, makes incident diagnosis and containment more difficult. The MSSP SOC, for instance, faced continuous client onboarding, which required analysts to reconcile standardized templates with diverse customer infrastructures, exemplifying how operational complexity amplifies paradoxical demands. The evolving threat landscape, with increasingly sophisticated and fast-moving attacks, places constant urgency on SOC performance. Together, these conditions form the contextual layer that sustains rather than causes paradoxical tension.
Across cases, this environment both supports and challenges operations, amplifying internal contradictions that set the stage for the paradoxes described in the next layer of the model.
Layer 2—Paradoxical Tensions
The second layer, “Paradoxical tensions”, introduces the core paradoxical tensions that structure SOC decision-making. These tensions do not arise as isolated problems but are systemic contradictions embedded in SOC workflows. They build the theoretical basis of the conceptual model as they clarify the dual demands that must be continuously balanced rather than resolved.
The first tension, Expediency versus Authority, captures the contradiction between the need for rapid action and the requirement for oversight. In the MSSP SOC, analysts described situations where automated triage recommended containment within seconds, yet escalation procedures demanded managerial validation, creating friction that vividly embodied the Expediency–Authority paradox.
The second tension, Adaptability versus Consistency, emerges from the need to customize incident response actions to specific client environments while keeping processes stable and auditable. In the CPG SOC, global playbooks had to be locally adapted for plants subject to strict operational-technology constraints, showing how flexibility and consistency must coexist. Likewise, the public-body SOC faced continuous adjustments between national standards and department-specific protocols, further illustrating this persistent duality.
In the model, these paradoxes represent the engine of SOC functioning, continually generating the need for reconciliation rather than resolution. They serve as the reference point for evaluating whether AI, automation, and organizational routines facilitate or intensify the paradoxical dynamics that follow. Having outlined the central tensions that energize SOC activity, the subsequent layer examines how AI and automation mechanisms mediate these opposing demands rather than resolve them.
Layer 3—AI and Automation Levers
The third layer, “AI and automation levers”, in the conceptual model introduces the technological mechanisms that influence how paradoxical tensions are managed in SOCs. Rather than eliminating contradictions, these tools act as enablers that help organizations live with and respond to paradoxes more effectively.
The first lever, AI-driven automation, enhances the speed and scale of SOC operations while introducing new oversight challenges. Tools such as automated alert triage, AI-guided playbooks, and real-time isolation mechanisms increase responsiveness and minimize manual effort. However, the same capabilities also raise concerns about authority and validation, as automated decisions may outpace traditional escalation paths. In this way, AI-driven automation contributes to both sides of the Expediency–Authority paradox. At the MSSP SOC, automated triage substantially reduced detection time but simultaneously created validation bottlenecks, as analysts waited for managerial sign-off before containment, making the expediency–authority tension visible in practice.
The second lever, adaptive machine learning systems, enables continuous learning as these systems tune detection and response strategies from emerging data. They modify rules, surface patterns from different incidents, and support analysts in refining actions over time. In the CPG SOC, for instance, adaptive correlation models were retrained after each malware outbreak, improving detection accuracy yet challenging consistency with global playbooks. Such adaptive mechanisms exemplify the iterative learning component of paradox navigation.
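One schematic way to picture this lever, under purely hypothetical names and values, is an update step in which a detector is retuned on recent incidents while its operating threshold is clamped to a band mandated by the global playbook, so that local learning never drifts outside the shared standard.

# Hedged sketch: adaptive tuning constrained by a global consistency band (hypothetical values).
GLOBAL_MIN_THRESHOLD = 0.60   # floor mandated by the global playbook
GLOBAL_MAX_THRESHOLD = 0.90   # ceiling mandated by the global playbook

def retune_threshold(current: float, recent_false_positive_rate: float) -> float:
    # Raise the detection threshold after noisy periods, lower it after quiet ones,
    # but always stay inside the band required for cross-site consistency.
    proposed = current + 0.05 if recent_false_positive_rate > 0.10 else current - 0.05
    return min(max(proposed, GLOBAL_MIN_THRESHOLD), GLOBAL_MAX_THRESHOLD)

print(retune_threshold(0.88, recent_false_positive_rate=0.15))  # clamped to 0.90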
In the conceptual model, this layer serves as a pivot: it connects the tensions from Layer 2 with the reconciliation practices in Layer 4. These AI and automation levers do not offer resolution, but they expand the operational bandwidth of SOCs, making it possible for the teams to work within paradox rather than eliminate it. Their effectiveness is determined not only by their technical design but also by how they are embedded into human workflows, decision rights, and structures that reflect accountability.
Layer 4—Reconciliation Principles
The fourth layer of the model explains how SOCs manage the tensions identified in Layer 2. It introduces two core reconciliation principles: Dynamic Equilibrium and Iterative Integration. These principles represent how organizations cope with opposing demands without resolving them, but rather by keeping both active over the course of time.
Dynamic Equilibrium comes directly from Paradox Theory [16,39]. It describes the capacity of organizations to sustain contradictory goals simultaneously. In SOC operations, this means that responses to incidents must be quick (expediency) while decisions still follow proper supervision and governance (authority). Rather than favoring one side over the other or rotating between them, SOCs build joint processes in which speed and control are both maintained. This requires ongoing adjustment and mutual coordination between automation tools and human actors.
In contrast, Iterative Integration is an insight grounded in our empirical findings. It refers to the constant fine-tuning of routines over time, especially in response to evolving threats, changing technologies, or client-specific needs. Unlike Dynamic Equilibrium, which emphasizes holding tensions together in the moment, Iterative Integration captures how SOCs adjust playbooks, escalation matrices, and detection rules over multiple incidents. This learning process allows SOCs to balance adaptability and consistency by embedding change into routines without losing structure. Together, these principles translate the abstract notion of paradox management into observable organizational practice.
These two mechanisms provide a more realistic path to paradox navigation than traditional models of ambidexterity. Classical ambidexterity, as defined in organizational theory [21], relies on either structural separation (delegating opposing objectives to different teams or units) or temporal separation (alternating between goals over time). However, our findings suggest that SOCs face both sets of demands simultaneously, within the same time frame and operational space. Therefore, separating tasks by team or time does not address the persistent and interrelated nature of these tensions adequately. SOCs need mechanisms working within a unified system, not by division.
In the Expediency versus Authority paradox, for example, analysts are expected to take immediate containment action, such as isolating compromised assets, while awaiting formal approval from business units or risk managers. Such steps cannot be temporally sequenced, because delays in containment increase risk, nor can they be structurally assigned to different units without creating decision bottlenecks. The analyst must act fast and defer to oversight at the same time. This concurrent demand highlights why traditional separation models fall short in the SOC environment and why integration within the same workflow is essential.
Similarly, in the paradox of adaptability versus consistency, SOC teams often customize playbooks to accommodate specific customer infrastructures or compliance requirements. Yet these customizations must still follow a common baseline of operational quality and reporting. Temporally separating these tasks (e.g., customizing first, standardizing later) risks causing errors or delays, while structurally assigning standardization and customization to separate teams leads to misalignment and rework. Only by embedding both goals, tailoring and standardizing, within a single coherent response process can SOCs maintain both flexibility and procedural reliability.
Together, Dynamic Equilibrium and Iterative Integration can explain how SOCs actively manage paradoxical tensions not by eliminating contradictions, but by enabling systems that can operate under both logics at once. These principles lay the groundwork for Ambidextrous Integration, the fifth layer of our model, which captures how organizations embed these reconciliation mechanisms into the core structure of incident response [16,19].
Layer 5—Ambidextrous Integration
Ambidextrous Integration (Layer 5) is the central contribution of this conceptual model and represents our theoretical synthesis. Empirically, this mechanism was evident across all three cases. In the CPG SOC, containment procedures were pre-integrated with business-approval logic so that analysts could isolate assets while automatically triggering managerial notification, an illustration of real-time reconciliation between speed and control. In the public-body SOC, customized playbooks were embedded within standardized national frameworks, ensuring both contextual adaptation and procedural consistency. At the MSSP SOC, automation thresholds were configured collaboratively with clients, institutionalizing flexibility within governance boundaries. This new construct, reflecting how contradictory demands are embedded and reconciled in SOC workflows, is defined here as the simultaneous embedding of dual logics within day-to-day SOC workflows: both demands are fulfilled at once (e.g., being fast and thorough during incident response rather than choosing between them) within the same workflow. In the organizational theory literature, ambidexterity describes an organization’s ability to manage conflicting demands, such as exploration and exploitation [21]. This is typically achieved through temporal separation (performing one task first, then the other) or structural separation (creating different teams or units to handle each logic). However, such methods may not be effective in time-sensitive, high-risk environments like SOCs.
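As a hedged sketch of what such pre-integration could look like in orchestration code (the function, scope labels, and approver role are illustrative assumptions, not the CPG SOC’s implementation), containment and managerial notification are issued in the same routine rather than sequenced.

# Hedged sketch: containment pre-integrated with approval and notification logic (hypothetical names).
def contain_with_embedded_approval(asset: str, scope: str, approver: str) -> dict:
    # Isolate within a pre-authorized, reversible scope while opening the oversight path in the same step.
    actions = []
    if scope == "targeted":                              # pre-authorized boundary: single asset, reversible
        actions.append(f"isolate:{asset}")               # expediency: containment starts immediately
    actions.append(f"notify:{approver}")                 # authority: oversight is triggered simultaneously
    actions.append("open_ticket:pending_scope_review")   # any wider shutdown still requires explicit approval
    return {"asset": asset, "actions": actions, "reversible": scope == "targeted"}

print(contain_with_embedded_approval("plant-ws-113", scope="targeted", approver="plant_ops_manager"))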
Ambidextrous Integration, as introduced in this study, builds on these separation-based strategies but moves beyond them. It captures the real-time coexistence of conflicting demands, such as expediency and authority, or adaptability and consistency, within the same operational process.
Ambidextrous Integration requires that SOC teams internalize paradoxical demands into a shared, cohesive workflow such that competing targets like speed and control, or adaptability and consistency, are no longer treated as sequential phases or distributed tasks but are instead managed simultaneously. This model of working relies on embedded routines, automation, and decision structures that allow these contradictions to be held and managed in the moment, rather than resolved in advance or deferred across teams.
This layer goes further than traditional views of ambidexterity as it proposes that in SOC environments, competing demands are not only managed side by side but structurally embedded into core operational routines. Rather than toggling between customization and standardization, SOCs intentionally design modular workflows that embed both. Adaptation is not external to the standard; it is modularized and designed into it.
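To make this modularization concrete, the brief sketch below illustrates one possible encoding, under our own assumptions rather than drawn from the studied SOCs: a standardized baseline playbook exposes named extension points into which client-specific steps are registered, so that tailoring is designed into the standard rather than bolted on afterwards. All identifiers (Playbook, register_adaptation, compile) are hypothetical.

# Illustrative sketch: client-specific adaptation modularized inside a standard baseline.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Playbook:
    """A standardized response baseline with named extension points."""
    baseline_steps: List[str]
    extension_points: Dict[str, List[str]] = field(default_factory=dict)

    def register_adaptation(self, point: str, steps: List[str]) -> None:
        # Client- or compliance-specific steps live inside the standard, not outside it.
        self.extension_points.setdefault(point, []).extend(steps)

    def compile(self) -> List[str]:
        """Produce one coherent workflow in which baseline and adaptations interleave."""
        workflow: List[str] = []
        for step in self.baseline_steps:
            workflow.append(step)
            workflow.extend(self.extension_points.get(step, []))
        return workflow

standard = Playbook(baseline_steps=["triage", "contain", "eradicate", "report"])
standard.register_adaptation("contain", ["notify_client_ot_team"])     # client-specific tailoring
standard.register_adaptation("report", ["attach_regulator_template"])  # compliance-specific tailoring
print(standard.compile())
# ['triage', 'contain', 'notify_client_ot_team', 'eradicate', 'report', 'attach_regulator_template']

Because every client compiles from the same baseline, reporting and quality remain uniform while adaptation stays local to the extension points, which is the sense in which adaptation is modularized and designed into the standard.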
Similarly, incident containment actions are not treated as isolated technical interventions. They are embedded within predefined authority boundaries that allow rapid response while safeguarding governance. This ensures that decisions balancing speed and risk are made without compromising accountability. As a result, decision rights, escalation thresholds, and automation triggers are all pre-integrated into response routines.
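A second hypothetical sketch, again not drawn from the case organizations, indicates how such pre-integration might be expressed: containment proceeds immediately when the action falls within a delegated authority boundary, the business owner is notified concurrently rather than afterwards, and anything beyond the boundary escalates for approval. The names AuthorityBoundary, isolate, notify_business_owner, and request_approval are stand-ins for orchestration and ticketing integrations.

from dataclasses import dataclass

@dataclass
class AuthorityBoundary:
    pre_approved_actions: set   # containment actions delegated to the SOC in advance
    max_business_impact: int    # impact score above which human approval is required

# Stubs standing in for orchestration and ticketing integrations (hypothetical).
def isolate(asset: str) -> None:
    print(f"isolating {asset}")

def notify_business_owner(asset: str, action: str) -> None:
    print(f"notify owner: {action} proposed on {asset}")

def request_approval(asset: str, action: str) -> None:
    print(f"approval requested: {action} on {asset}")

def contain(asset: str, action: str, impact: int, boundary: AuthorityBoundary) -> str:
    """Act fast and defer to oversight within the same workflow, not sequentially."""
    notify_business_owner(asset, action)  # the oversight path runs concurrently with containment
    if action in boundary.pre_approved_actions and impact <= boundary.max_business_impact:
        isolate(asset)                    # expediency: contain within delegated authority
        return "contained_within_delegated_authority"
    request_approval(asset, action)       # authority: escalate beyond the boundary
    return "awaiting_approval"

boundary = AuthorityBoundary(pre_approved_actions={"network_isolation"}, max_business_impact=3)
print(contain("hr-laptop-042", "network_isolation", impact=2, boundary=boundary))

The point of the sketch is that the approval logic is not a later phase: it is encoded in the same routine that performs containment, so speed and control are exercised together.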
The model also emphasizes a feedback loop between Ambidextrous Integration and Paradoxical Tensions. As new threats emerge, technologies evolve, or organizational roles shift, the reconciliation mechanisms must be revisited. In several cases, ambiguity in responsibilities, unclear escalation matrices, or the absence of continuous standby support led to renewed tension. These examples highlight that paradox resolution is never final; ongoing assessment is needed, as is the redesign of routines and structures.
This recurrent nature reinforces the idea that Ambidextrous Integration is not a frozen state but a dynamic capability. It grows via constant refinement, whereby SOC teams evaluate whether the embedded processes still effectively manage contradictory demands. When they do not, the system is adapted—not to resolve the paradox, but to better hold its opposing forces in place under new conditions.
Outcome Layer—Sustained Paradox Navigation as Operational Capability
The bottom of the model presents the outcomes: optimized SOC and incident response capabilities. These are not static achievements, but emergent properties of continuously navigating paradoxical demands in a dynamic, sociotechnical environment.
Rather than viewing performance outcomes as final deliverables, the model conceptualizes them as recurring effects of sustained paradox reconciliation. This reframing positions SOC agility, trustworthiness, and balance as dynamic capabilities achieved through the continuous orchestration of competing demands, not through their elimination.
In contrast to traditional models that assume fixed configurations of technology, roles, and policies (e.g., defining automation rules once or standardizing escalation procedures), this approach indicates a temporal and adaptive logic. Drawing from Paradox Theory [16,39], the model emphasizes the need to revisit and recalibrate SOC practices as threats evolve, technologies advance, and organizational structures change.
This outcome layer thus closes the loop; it shows that successful SOC performance stems not from resolving contradictions, but from developing the routines and structures to hold them in productive tension. Paradox navigation becomes an embedded organizational capability, enabling SOCs to adapt without losing control and to scale without sacrificing contextual sensitivity.
In this way, the model moves beyond a descriptive mapping of tensions to propose a theoretical mechanism of Ambidextrous Integration as the foundation for ongoing effectiveness. What emerges is a self-renewing system where performance depends on the institutionalization of paradox reconciliation, not its one-time resolution.
In summary, the conceptual model illustrates how Security Operations Centers navigate enduring organizational paradoxes through layered mechanisms that couple speed with control and flexibility with structure. Rather than resolving contradictions, SOCs institutionalize them as part of daily routines, transforming paradox management into an operational capability. This insight extends Paradox Theory into the cybersecurity domain by showing how Dynamic Equilibrium and Ambidextrous Integration function in real time within sociotechnical systems. Building on this foundation, the following Discussion section elaborates how these mechanisms advance theoretical understanding of paradox navigation and inform practical design principles for resilient, AI-enabled security operations.

6. Discussion and Conclusions

6.1. Contributions

This study makes a novel academic contribution by revealing, theorizing, and reconciling the persistent organizational tensions present in SOCs’ incident response practices. While prior SOC research has focused on technical tool deployment or on mitigating human capacity issues, this study brings to light the deeper interdependence between technology (AI, automation), human judgment, and governance context. For instance, Tilbury and Flowerday [14] show that automated detection and response tools in SOCs are often introduced with limited attention to their impact on human analysts. Similarly, Patterson et al. [13] highlight that organizations tend to conduct superficial post-incident learning, rarely embedding lessons learned into operational routines. These examples show that prior literature treated tensions among technology, institutions, and professionals as trade-offs to manage, such as speed versus caution or efficiency versus control. In contrast, our findings show that these tensions are persistent contradictions that must be continuously balanced over time.
We apply Paradox Theory [16,39] to reframe these tensions not as implementation problems but as paradoxes that require ongoing, iterative reconciliation. Specifically, the study identifies two paradoxes in SOC operations: Expediency versus Authority and Adaptability versus Consistency. These tensions, not previously theorized as enduring organizational paradoxes in cybersecurity, move the discussion beyond describing SOC complexity toward understanding how operations are dynamically maintained in real time.
By introducing these paradoxes into high-velocity, high-risk digital environments, this study extends the theory’s applicability beyond traditional corporate or innovation contexts. These paradoxes emerge not from technical processes only, but also from institutional expectations around who has decision-making authority, how procedures are followed, and how accurate action must be across diverse environments. AI and automation tools often speed up detection and response, but can simultaneously dilute accountability. As an example, when automated decisions escalate an incident without human validation, SOCs must decide who is responsible. Similarly, automation workflows can standardize operations but may conflict with client-specific adaptation needs. These dynamics bring paradoxes to the surface, forcing SOCs to tackle them directly, rather than treat them as secondary concerns.
To explain how SOCs manage competing demands, we introduce the concept of Ambidextrous Integration. Unlike traditional views of ambidexterity [21], which separate opposing goals across different teams or phases, Ambidextrous Integration embeds both sides of the paradox, such as rapid response and oversight, into the same workflow. This approach reflects the reality of the SOC ecosystem, where expediency and precision must coexist. It also extends the application of Paradox Theory [16] by demonstrating how contradictions can be managed through integration rather than separation.
The conceptual model developed in this study offers a framework for understanding how SOCs achieve resilience by navigating tensions without eliminating them. AI, automation, and SOC analyst expertise are not presented as competing forces, but as balancing mechanisms that help reconcile conflicting goals. Empirically, this study extensively analyzes SOC operations across three distinct organizational contexts: a leading CPG company, a global MSSP, and a governmental agency. The findings show how SOCs use layered authority, flexible procedures, and hybrid governance to reconcile operational paradoxes in practice.
Theoretically, this study contributes to three domains. First, it advances Paradox Theory by situating paradox navigation within sociotechnical systems, emphasizing that technological acceleration and institutional rigidity can coexist as mutually enabling forces. Second, it extends organizational studies of cybersecurity by shifting focus from technical or human limitations to systemic tensions as drivers of learning and adaptation. Third, it introduces Ambidextrous Integration as a novel construct that captures how SOCs embed contradictory demands into unified routines, thereby operationalizing Paradox Theory in high-velocity, high-stakes environments.
Practically, this study provides actionable guidance for SOC leaders, CISOs, and cybersecurity practitioners. It demonstrates that paradoxes such as speed versus control or standardization versus flexibility cannot be eliminated through new tools or restructuring but must be managed through deliberate design. SOCs can strengthen resilience by embedding Ambidextrous Integration into their workflows: aligning automation triggers with human approval paths, modularizing playbooks to balance client specificity with procedural consistency, and codifying accountability for algorithmic actions. This perspective shifts managerial focus from implementing discrete technologies to cultivating dynamic coordination capabilities across human and machine actors. Such an approach informs how organizations can sustain operational agility, regulatory compliance, and trust in increasingly automated environments, turning paradox navigation into a strategic competency rather than a recurring problem.
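One way to make this guidance tangible is sketched below; it is illustrative only, not a prescription derived from the case data. Automation triggers are mapped to human approval paths, and every algorithmic action is written to an auditable record that names the accountable human role. The trigger names, roles, and fields are assumptions introduced for this example.

import json
from datetime import datetime, timezone

# Each automation trigger is aligned with an approval path and an accountable human role.
TRIGGER_POLICY = {
    "auto_block_ip":     {"approval_path": "post_hoc_review", "accountable_role": "soc_shift_lead"},
    "auto_isolate_host": {"approval_path": "pre_approval",    "accountable_role": "business_owner"},
    "auto_disable_user": {"approval_path": "dual_control",    "accountable_role": "ciso_delegate"},
}

def record_algorithmic_action(trigger: str, target: str, model_version: str) -> str:
    """Write an audit entry so responsibility never rests on 'the AI' alone."""
    policy = TRIGGER_POLICY[trigger]
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "trigger": trigger,
        "target": target,
        "model_version": model_version,
        "approval_path": policy["approval_path"],
        "accountable_role": policy["accountable_role"],
    }
    return json.dumps(entry)

print(record_algorithmic_action("auto_isolate_host", "plant-hmi-07", model_version="triage-v2.3"))

Codifying the accountable role alongside the trigger is one concrete way to keep algorithmic speed from diluting accountability, a concern that resurfaces in the limitations discussed below.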

6.2. Limitations and Further Research

As with any interpretive study, several boundary conditions delimit the scope of these findings and indicate fruitful directions for future research. Despite the rigorous steps taken to ensure reliable and valid findings, this study’s inductive qualitative methods offer depth of insight but may limit broader generalizability. This interpretive orientation is consistent with paradox research traditions that privilege depth of meaning over breadth of prediction [16,17].
The three organizations examined in this study, two operational SOCs and one MSSP, capture diverse but not exhaustive manifestations of SOC and incident-response dynamics. While these cases reflect significant heterogeneity in ownership model, scale, and governance (public, private, and hybrid service models), they do not encompass all possible SOC archetypes or industrial environments. Expanding the scope to companies of different sizes, with fewer than 20,000 or more than 120,000 employees, and to different industries, for instance small and medium-sized enterprises or large multinationals, would enhance generalizability. Future comparative work could also integrate cross-regional cases to account for cultural, regulatory, and infrastructural variations in paradox navigation, as well as cross-sectoral comparisons (e.g., finance, healthcare, or critical infrastructure) to explore how institutional environments shape paradox enactment.
We recommend that future studies apply quantitative or mixed-methods designs to complement and validate our qualitative insights. Such work could operationalize constructs like expediency, authority, adaptability, and consistency through measurable indicators and test their relationships statistically. Longitudinal designs could also trace how reconciliation mechanisms evolve with AI maturity and trust calibration over time.
Additionally, while this study identified paradoxical tensions as persistent organizational features, their manifestation may evolve as technology matures. The long-term impact of AI and automation integration in SOC operations requires exploration, particularly with respect to how dynamic reconciliation mechanisms evolve and scale. Future studies should examine whether Ambidextrous Integration stabilizes as a durable capability or generates new paradoxes as SOCs mature technologically.
As automation reshapes the human–machine interface, it may introduce novel paradoxes or reconfigure existing ones, altering the balance between control, accountability, and trust. Such dynamics may, for example, shift the distribution of decision authority or accountability when automated agents act semi-autonomously. Future research could therefore investigate the applicability of the identified paradoxical tensions and the proposed conceptual model in other high-reliability sectors, such as energy, critical infrastructure, or national-security operations, to assess their transferability across organizational contexts.
Additionally, it would be valuable for scholars to examine the potential challenges associated with AI and automation, as highlighted by the head of network security: “fault cannot be thrown on the AI, you cannot sue it … You cannot ask for penalties because AI did something wrong … deleting the configuration of 50 sites … gonna be a person needs … who sits behind it” (TDO, page 7, 09:40). This empirical insight underscores the accountability gap created by autonomous systems, revealing an under-theorized paradox between human liability and machine agency. Hence, future research could further explore the trust–accountability paradox in AI-enabled environments and develop governance mechanisms to mitigate the “responsibility vacuum” that arises when algorithmic actions cause harm.
Moreover, future studies could also explore how paradoxical tensions manifest in different governance models (e.g., decentralized vs. centralized SOCs), and how reconciliation mechanisms may vary by organizational maturity, sector, or threat exposure. Such comparative inquiries would refine the boundary conditions of Paradox Theory and extend its relevance to high-reliability, technology-intensive domains.
Overall, these future directions would advance both Paradox Theory and cybersecurity practice by clarifying how paradox-navigation capabilities can be institutionalized, scaled, and governed in increasingly automated environments. By treating paradox not as a problem to solve but as a capability to cultivate, future research can further illuminate how organizations sustain resilience amid technological acceleration and institutional complexity.

6.3. Conclusions

The systematic analysis of three real-world organizational case studies reveals how Security Operations Centers (SOCs) sustain resilience by learning to live with, rather than eliminate, contradictory demands. Through the joint orchestration of artificial intelligence (AI), automation, and human expertise, SOCs transform persistent tensions into productive sources of coordination and learning. Rather than resolving tensions, this study demonstrates how SOCs continuously balance competing logics, acting swiftly while preserving oversight and tailoring responses while maintaining standardization, through the dynamic embedding of automation and AI with SOC operator judgment.
Most participants viewed AI as a catalyst that enhances speed and scalability while reaffirming the irreplaceable value of human judgment in context-sensitive decisions. This interplay between automation and expertise positions paradox navigation as an organizational capability rather than a technical trade-off.
Practically, the findings highlight the need for closer collaboration among SOC leaders, CISOs, and researchers to design governance mechanisms, playbooks, and training systems that institutionalize paradox navigation. By aligning automation triggers with human oversight and clarifying accountability in AI-driven actions, organizations can preserve both agility and integrity in high-velocity threat environments.
Theoretically, the study advances Paradox Theory within sociotechnical systems by showing how technological acceleration and institutional control can coexist as mutually enabling forces. The construct of Ambidextrous Integration synthesizes this dynamic, demonstrating how SOCs embed speed with oversight and adaptability with consistency within unified operational routines.
Although most participants expressed enthusiasm about AI, some underscored the emerging risks surrounding accountability and trust when automated systems act semi-autonomously. Future work should investigate how these paradoxes of control, liability, and explainability evolve across sectors as AI autonomy increases.
Overall, this synthesis underscores that SOC effectiveness arises not from resolving contradictions but from institutionalizing their productive balance. By embedding paradox navigation into sociotechnical routines, organizations can transform tension into a continual source of resilience, adaptability, and learning in cybersecurity operations. This integrative perspective reframes paradox not as a constraint but as a dynamic capability that sustains performance under technological and institutional complexity.

Author Contributions

Conceptualization, M.S. and A.S.; methodology, M.S.; writing—original draft preparation, M.S.; writing—review and editing, M.S., A.S. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Perera, U.N.A.; Rathnayaka, S.; Perera, N.D.; Madushanka, W.W.; Senarathne, A.N. The Next Gen Security Operation Center. In Proceedings of the 6th International Conference for Convergence in Technology (I2CT), Pune, India, 2–4 April 2021; pp. 1–9. [Google Scholar]
  2. Demertzis, K.; Tziritas, N.; Kikiras, P.; Sanchez, S.L.; Iliadis, L. The Next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks. Big Data Cogn. Comput. 2019, 3, 6. [Google Scholar] [CrossRef]
  3. Vielberth, M.; Böhm, F.; Fichtinger, I.; Pernul, G. Security operations center: A systematic study and open challenges. IEEE Access 2020, 8, 227756–227779. [Google Scholar] [CrossRef]
  4. Chamkar, S.A.; Maleh, Y.; Gherabi, N. The human factor capabilities in security operation center (SOC). Edpacs 2022, 66, 1–14. [Google Scholar] [CrossRef]
  5. Khayat, M.; Barka, E.; Serhani, M.A.; Sallabi, F.; Shuaib, K.; Khater, H.M. Empowering Security Operation Center with Artificial Intelligence and Machine Learning—A Systematic Literature Review. IEEE Access 2025, 13, 19162–19197. [Google Scholar] [CrossRef]
  6. Zhong, C.; Yen, J.; Liu, P.; Erbacher, R.F. Learning from experts’ experience: Toward automated cyber security data triage. IEEE Syst. J. 2018, 13, 603–614. [Google Scholar] [CrossRef]
  7. Lee, J.; Kim, J.; Kim, I.; Han, K. Cyber threat detection based on artificial neural networks using event profiles. IEEE Access 2019, 7, 165607–165626. [Google Scholar] [CrossRef]
  8. Mutalib, N.H.A.; Sabri, A.Q.M.; Wahab, A.W.A.; Abdullah, E.R.M.F.; AlDahoul, N. Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: A review. Artif. Intell. Rev. 2024, 57, 297. [Google Scholar] [CrossRef]
  9. Sundaramurthy, S.C.; Wesch, M.; Ou, X.; McHugh, J.; Rajagopalan, S.R.; Bardas, A.G. Humans are dynamic-our tools should be too. IEEE Internet Comput. 2017, 21, 40–46. [Google Scholar] [CrossRef]
  10. Zeadally, S.; Adi, E.; Baig, Z.; Khan, I.A. Harnessing artificial intelligence capabilities to improve cybersecurity. IEEE Access 2020, 8, 23817–23837. [Google Scholar] [CrossRef]
  11. Baruwal Chhetri, M.; Tariq, S.; Singh, R.; Jalalvand, F.; Paris, C.; Nepal, S. Towards human-AI teaming to mitigate alert fatigue in security operations centres. ACM Trans. Internet Technol. 2024, 24, 1–22. [Google Scholar] [CrossRef]
  12. Tilbury, J.; Flowerday, S. Humans and automation: Augmenting security operation centers. J. Cybersecur. Priv. 2024, 4, 388–409. [Google Scholar] [CrossRef]
  13. Patterson, C.M.; Nurse, J.R.; Franqueira, V.N. “I don’t think we’re there yet”: The practices and challenges of organisational learning from cyber security incidents. Comput. Secur. 2024, 139, 103699. [Google Scholar] [CrossRef]
  14. Tilbury, J.; Flowerday, S. Automation Bias and Complacency in Security Operation Centers. Computers 2024, 13, 165. [Google Scholar] [CrossRef]
  15. Abd Majid, M.; Zainol Ariffin, K.A. Model for successful development and implementation of Cyber Security Operations Centre (SOC). PLoS ONE 2021, 16, e0260157. [Google Scholar] [CrossRef]
  16. Smith, W.K.; Lewis, M.W. Toward a theory of paradox: A dynamic equilibrium model of organizing. Acad. Manag. Rev. 2011, 36, 381–403. [Google Scholar]
  17. Schad, J.; Lewis, M.W.; Raisch, S.; Smith, W.K. Paradox research in management science: Looking back to move forward. Acad. Manag. Ann. 2016, 10, 5–64. [Google Scholar] [CrossRef]
  18. Schinagl, S.; Shahim, A.; Khapova, S. Paradoxical tensions in the implementation of digital security governance: Toward an ambidextrous approach to governing digital security. Comput. Secur. 2022, 122, 102903. [Google Scholar] [CrossRef]
  19. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Acad. Manag. Rev. 2021, 46, 192–210. [Google Scholar] [CrossRef]
  20. O’Reilly III, C.A.; Tushman, M.L. Ambidexterity as a dynamic capability: Resolving the innovator’s dilemma. Res. Organ. Behav. 2008, 28, 185–206. [Google Scholar] [CrossRef]
  21. Andriopoulos, C.; Lewis, M.W. Exploitation-exploration tensions and organizational ambidexterity: Managing paradoxes of innovation. Organ. Sci. 2009, 20, 696–717. [Google Scholar] [CrossRef]
  22. Raisch, S.; Birkinshaw, J. Organizational ambidexterity: Antecedents, outcomes, and moderators. J. Manag. 2008, 34, 375–409. [Google Scholar] [CrossRef]
  23. Andriopoulos, C.; Lewis, M.W. Managing innovation paradoxes: Ambidexterity lessons from leading product design companies. Long Range Plan. 2010, 43, 104–122. [Google Scholar] [CrossRef]
  24. Lewis, M.W.; Smith, W.K. Paradox as a metatheoretical perspective: Sharpening the focus and widening the scope. J. Appl. Behav. Sci. 2014, 50, 127–149. [Google Scholar] [CrossRef]
  25. Papachroni, A.; Heracleous, L.; Paroutis, S. Organizational ambidexterity through the lens of paradox theory: Building a novel research agenda. J. Appl. Behav. Sci. 2015, 51, 71–93. [Google Scholar] [CrossRef]
  26. Glaser, B.; Strauss, A. Discovery of Grounded Theory: Strategies for Qualitative Research; Routledge: New York, NY, USA, 2017. [Google Scholar]
  27. Yin, R.K. Case Study Research and Applications; Sage Publication, Inc.: Thousand Oaks, CA, USA, 2018; Volume 6. [Google Scholar]
  28. Lincoln, Y.S. Naturalistic Inquiry; Sage Publication, Inc.: Thousand Oaks, CA, USA, 1985; Volume 75. [Google Scholar]
  29. Eisenhardt, K.M. Building theories from case study research. Acad. Manag. Rev. 1989, 14, 532–550. [Google Scholar] [CrossRef]
  30. Balmer, D.F.; Richards, B.F. Conducting qualitative research through time: How might theory be useful in longitudinal qualitative research? Adv. Health Sci. Educ. 2022, 27, 277–288. [Google Scholar] [CrossRef]
  31. Young, J.C.; Rose, D.C.; Mumby, H.S.; Benitez-Capistros, F.; Derrick, C.J.; Finch, T.; Garcia, C.; Home, C.; Marwaha, E.; Morgans, C.; et al. A methodological guide to using and reporting on interviews in conservation science research. Methods Ecol. Evol. 2018, 9, 10–19. [Google Scholar] [CrossRef]
  32. Gioia, D.A.; Corley, K.G.; Hamilton, A.L. Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organ. Res. Methods 2013, 16, 15–31. [Google Scholar] [CrossRef]
  33. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef]
  34. Miles, M.B. Qualitative Data Analysis: An Expanded Sourcebook; Sage Publication, Inc.: Thousand Oaks, CA, USA, 1994. [Google Scholar]
  35. Carter, N. The use of triangulation in qualitative research. Oncol. Nurs. Forum 2014, 41, 545–547. [Google Scholar] [CrossRef]
  36. Panagopoulos, G. Aspects of Organizational Ambidexterity. J. Glob. Strateg. Manag. 2016, 1, 5. Available online: https://isma.info/uploads/files/005-aspects-of-organizational-ambidexterity-george-panagopoulos.pdf (accessed on 1 September 2025). [CrossRef]
  37. Lee, O.-K.D.; Sambamurthy, V.; Lim, K.H.; Wei, K.K. How Does IT Ambidexterity Impact Organizational Agility? Inf. Syst. Res. 2015, 26, 398–417. [Google Scholar] [CrossRef]
  38. Martin, A.; Keller, A.; Fortwengel, J. Introducing conflict as the microfoundation of organizational ambidexterity. Strateg. Organ. 2019, 17, 38–61. [Google Scholar] [CrossRef]
  39. Raisch, S.; Hargrave, T.J.; Van De Ven, A.H. The learning spiral: A process perspective on paradox. J. Manag. Stud. 2018, 55, 1507–1526. [Google Scholar] [CrossRef]
Figure 1. Gioia data structure for the Expediency–Authority paradox: first-order concepts, second-order themes (E1–E5), and aggregate poles (Expediency vs. Authority). For optimal clarity, please zoom in or enlarge the figure by 400% when viewing the digital version of this paper.
Figure 2. Gioia data structure for the Adaptability–Consistency paradox: first-order concepts, second-order themes (C1–C6), and aggregate poles (Consistency vs. Adaptability). For optimal clarity, please zoom in or enlarge the figure by 400% when viewing the digital version of this paper.
Figure 3. Layered Model of Paradox Navigation in SOCs: From Tensions to Ambidextrous Integration. The reconciliation principles in Layer 4 draw on [16,22]. For optimal clarity, please zoom in or enlarge the figure by 400% when viewing the digital version of this paper.
Table 1. Comparison of frameworks for managing contradictory demands in SOCs.
Dimension | Organizational Ambidexterity | Paradox Theory | Ambidextrous Integration (This Study)
Core assumption | Tension between exploration and exploitation can be managed by separation or cycling. | Contradictory demands persist concurrently and must be held in tension. | Human–AI co-performance reconciles persistent tensions in real time.
Temporal logic | Sequential/alternating (structural or temporal solutions). | Simultaneous (dynamic equilibrium; ongoing balancing). | Concurrent and recursive (continuous micro-adjustments during operations).
Primary mechanisms | Structural separation, temporal cycling, and contextual ambidexterity (design, incentives). | Acceptance, confrontation, and iterative learning lead to dynamic equilibrium. | Sociotechnical orchestration of analysts, automation, and AI; Iterative Integration with feedback from incidents.
Level of analysis | Mostly unit/firm (macro) design choices. | Process/practice (meso/micro); sensemaking and local trade-offs. | Operational routines (micro) within SOC workflows and incident playbooks.
Boundary conditions/typical use | Effective when temporal slack or separable subunits exist; weaker when simultaneity dominates. | Suited to high-velocity contexts where opposing logics co-exist in action. | Suited to always-on SOCs where automation and human judgment co-act under time pressure.
Strengths | Clear design prescriptions; broad strategy evidence base. | Captures the coexistence of poles; explains ongoing tension in navigation. | Explains real-time reconciliation via human–AI integration; bridges design and practice.
Limitations | Under-specifies simultaneity; technological mediation is often implicit. | Less guidance on technology-enabled coordination. | Emergent construct; requires cross-industry validation.
Canonical references | [22,23] | [16,24] | This study builds on the integration perspective [25].
Table 2. Overview of Case Organizations and Participants.
Industry/Organization Type | SOC Model | Participants (n) | Roles Covered | Interview Duration (min) | Data Sources | Observation Period
Case 1: Consumer Packaged Goods (CPG) multinational operating in 28 countries | Unified IT/OT SOC model with MSSP support | 7 | CISO, Head of Security Operations, Head of Security Platforms, Head of Governance Risk and Compliance, Cyber Defense Center Lead, Network and Communication Leader, Service Line Manager Infra | 46–70 | Interviews, observations, internal tickets, and incident records | 2022–2024
Case 2: Global Managed Security Service Provider (16 SOCs worldwide) | Distributed 24 × 7 Follow-the-Sun SOC model | 15 | Threat Intelligence Leads, SOC Team Managers, Security Platform Architects, Incident Coordinators, CERT Engineers, Response Leads | 32–90 | Interviews, observations, and service playbooks and runbooks | 2022–2024
Case 3: Non-departmental public body (Netherlands) handling labor-market data (>23 billion records) | Centralized SOC with inter-agency coordination | 3 | SOC Lead, Senior Security Analyst, CERT Process Manager | 57–92 | Interviews, policy documents, and incident playbooks | 2023–2024
3 Organizations | Centralized, distributed, and unified | 25 | C-Level to Analyst (Strategic → Operational) | 32–92 | Triangulated: Interviews + observations + documents | 2022–2024
Table 3. Data-to-Model Mapping—Panel A: Response Expediency vs. Authority Paradox. Data-to-Model Mapping—Panel B: Adaptability vs. Consistency Paradox.
Panel A
Second-Order Theme | Definition | Aggregate Dimension | Model Layer and Mechanism
E1: Incident detection urgency | Near-real-time alerting elevates the need for immediate assessment | Expediency | Layer 2 (tension) activates AI-driven automation (Layer 3)
E2: Swift remediation requirements | Fast cross-team actions required to contain spread | Expediency | Layer 2 balanced via Dynamic Equilibrium (Layer 4)
E3: Rapid incident resolution | End-to-end closure under time pressure and uncertainty | Expediency | Layer 2 informs Ambidextrous Integration (Layer 5)
E4: Decision-making constraints | Role-based approvals and governance slow action | Authority | Layer 2 requires Dynamic Equilibrium (Layer 4)
E5: Operational dependency | Execution depends on external owners/silos and business risk choices | Authority | Layer 2 embedded into Ambidextrous Integration (Layer 5)
Panel B
Second-Order Theme | Definition | Aggregate Dimension Pole | Model Layer and Mechanism
C1: Consistency in diverse environments | Uniform SOPs/playbooks maintain quality across heterogeneous clients | Consistency | Layer 2 (tension) anchors the consistency pole
C2: Centralized control of incident response | Control-tower routines and governance stabilize variance/turnover | Consistency | Layer 2 Iterative Integration (Layer 4)
C3: Scalability of standardized solutions | Portfolio growth and heterogeneity stress the standard baseline | Consistency | Layer 2 activates AI and automation levers (Layer 3)
C4: Client-specific incident response adaptation | Guard-railed tailoring of playbooks/policies for local context | Adaptability | Layer 2 to Iterative Integration (Layer 4)
C5: Localized decision-making in incident handling | Business-owner approvals/risk thresholds shape action | Adaptability | Layer 2 informs Ambidextrous Integration (Layer 5)
C6: Balancing customization with standardized offerings | Structured customization within defined global guardrails | Adaptability | Layer 2 to Iterative Integration (Layer 4)
Table 4. Salience of each theme per case (●/●●/●●●) and primary sources (INT/OBS/DOC).
Second-Order ThemeCPG SOCMSSP SOCPublic Body SOCPrimary Sources
Incident detection urgency●●●●●●●INT/OBS/DOC
Swift remediation requirements●●●●●●●INT/OBS
Rapid incident resolution●●●●●●●INT/OBS
Decision-making constraints●●●●●●●INT/DOC
Operational dependency●●●●●●●INT/OBS
Consistency in diverse environments●●●●●●●●OBS/DOC
Centralized control of incident response●●●●●●●●INT/OBS
Scalability of standardized solutions●●●●●●●INT/DOC
Client-specific incident response adaptation●●●●●●●INT/OBS
Localized decision-making in incident handling●●●●●●●INT
Balancing customization with standardized offerings●●●●●●●INT/OBS
Legend: ● = presence (1–2 coded instances); ●● = frequent (3–5); ●●● = very frequent (>5). Sources: INT = interviews; OBS = observations; DOC = documents.
Table 5. Empirical manifestation of each model layer across the three SOC cases.
Model Layer | CPG SOC (Manufacturing IT/OT) | MSSP SOC (Multi-Client) | Public SOC (Regulated)
L1 Context | OT production dependencies and global IT policies constrained rapid response; baseline controls slowed cross-site action (OBS; INT DMA p.6). | Continuous client onboarding increased heterogeneity; templates had to span diverse stacks (OBS; INT EAL p.4). | National audit controls and inter-agency workflows ensured compliance but elongated escalations (DOC; INT BMI p.3).
L2 Tensions | Automated correlation raised priority within minutes, but plant approvals delayed isolation (INT TST p.4). Global playbooks needed plant adaptation (OBS). | SLA-driven speed vs. layered approvals and standardization vs. client tailoring (INT MST p.8; EAL p.4). | Authority and consistency dominated under multi-agency oversight (INT BMI p.5; DOC).
L3 AI and Automation Levers | Retrained correlation models improved accuracy yet pressured approval cadence (INT DMA p.6). | Automated triage and classifier routing cut detection time but met validation gates (INT MST p.8). | AI-assisted templates enforced standard steps and flagged exceptions for review (DOC; INT BMI p.3).
L4 Reconciliation Principles | Dynamic equilibrium: Targeted containment with parallel business notification (OBS). Iterative integration: Post-incident playbook refinement (DOC). | Iterative parameter updates and crisis-time balancing (OBS/INT). | Policy harmonization with required sign-offs (DOC/INT).
L5 Ambidextrous Integration | Pre-approved containment auto-triggers managerial alerts—speed with control (OBS; INT FPA p.5). | Client-negotiated thresholds embedded in orchestration tools—adaptation within guardrails (INT EAL p.4). | Customized playbooks nested in national frameworks—local fit with system integrity (DOC; INT BMI p.3).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
