Article

Authentic Intelligence in Digital Strategy Systems: A Socio-Technical Analysis of Human-Accountable Decision Governance

1 Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
2 School of Computer Science, University of Wollongong in Dubai, Dubai Knowledge Park, Dubai 20183, United Arab Emirates
3 Business School, University of Roehampton, Roehampton Lane, London SW15 5PJ, UK
* Author to whom correspondence should be addressed.
Systems 2026, 14(3), 259; https://doi.org/10.3390/systems14030259
Submission received: 28 December 2025 / Revised: 20 January 2026 / Accepted: 29 January 2026 / Published: 28 February 2026

Abstract

Background: Digital strategy increasingly relies on algorithmic decision systems, yet the mechanisms by which human judgement remains embedded within these systems are poorly theorised. Existing frameworks treat digital tools as either neutral instruments or autonomous agents, overlooking the systems-level conditions under which human accountability is maintained. Methods: This study employs a novel three-stage system-oriented analytical protocol: (1) mechanism-revealing thematic analysis of 50 semi-structured interviews with senior managers across multinational organisations; (2) configurational cross-case mapping against 685 cases from the European Commission’s JRC AI implementation catalogue; and (3) failure mode triangulation comparing interview-reported barriers with 37 documented implementation discontinuations. Results: We introduce Authentic Intelligence as a systems-level construct and develop a socio-technical architecture specifying six primary system functions, three decision loci, four governance mechanisms, and twelve empirically derived failure modes. Triangulation reveals high correspondence (≥20% JRC citation rate) for six failure modes, moderate correspondence for five, and low correspondence for one. Conclusions: The contribution is a reusable systems architecture and diagnostic framework for maintaining human-accountable decision governance in digital strategy implementation, with direct application to EU AI Act Article 14 compliance requirements.

1. Introduction

The proliferation of algorithmic systems within organisational strategy presents a fundamental problem: how can digital strategy retain human-accountable decision capability when significant portions of sensing, processing, and even acting are delegated to computational processes? This question sits at the intersection of digital strategy research [1,2], socio-technical systems theory [3,4,5], and emerging concerns about algorithmic governance [6,7,8]. Socio-technical systems theory establishes joint optimisation as a foundational design principle [3,9], but it does not specify how accountability is preserved once algorithmic systems actively participate in decision execution; Authentic Intelligence addresses this gap by translating joint optimisation into explicit decision governance requirements [10,11]. Existing frameworks, meanwhile, tend to treat digital tools as either neutral instruments or autonomous agents, overlooking the systems-level conditions under which human judgement remains meaningfully embedded within technology-mediated decision processes. Although socio-technical systems and digital strategy research acknowledge human–technology interdependence [3,12,13], they offer limited explanation of how accountability is preserved once algorithms shape decision outcomes rather than merely support analysis.
This paper addresses that gap by developing the construct of Authentic Intelligence. Unlike existing governance concepts that focus on transparency, oversight, or ethical accountability as discrete properties of algorithmic systems [8,14,15], Authentic Intelligence specifies the system-level conditions under which accountability is enacted during decision execution rather than merely asserted as a design principle. We define Authentic Intelligence as the capacity of a digital strategy system to maintain human-accountable decision governance across sensing, interpretation, decision, execution, monitoring, and adaptation functions. The term “authentic” signals that intelligence in organisational settings requires more than computational accuracy; it requires traceable accountability, contextual judgement, and the capacity to override, escalate, or modify algorithmic recommendations based on human expertise and ethical consideration [16,17,18].
The research question guiding this study is as follows: What socio-technical system structures enable digital strategies to maintain human-accountable decision governance? We pursue this question through qualitative analysis of 50 semi-structured interviews with senior managers responsible for digital strategy implementation across consumer goods, financial services, professional services, energy, and technology sectors. We triangulate these findings against 685 documented AI implementations from the European Commission’s Joint Research Centre catalogue of public sector AI systems [19,20]. The study is further motivated by emerging regulatory requirements, particularly Article 14 of the EU Artificial Intelligence Act, which formalises human oversight as a legal obligation rather than a design preference [21,22].
This paper makes three contributions that extend beyond synthesising existing governance concepts by introducing a systems construct that operationalises how accountability is preserved, challenged, and degraded in algorithmically mediated decision environments. First, we specify Authentic Intelligence as a theoretically grounded systems construct, explicitly distinguishing it from adjacent concepts, including human-in-the-loop design (a technical configuration), algorithmic transparency (an information property), algorithmic accountability (a legal and ethical framing), and organisational intelligence (a collective sense-making tradition) [14,15,23]. This construct extends recent socio-technical analyses of digital transformation in Systems and related journals [24,25] by foregrounding decision governance as the organising principle. Second, we develop a configurational systems architecture that maps interview-derived dimensions to system functions, decision loci, and governance mechanisms, comparable to existing STS-based digital transformation frameworks but adding explicit decision loci specification and failure-mode diagnostics [26,27]. Third, we derive an empirically grounded failure mode typology validated through corroborative triangulation against JRC AI implementation discontinuations, providing a diagnostic framework for assessing Authentic Intelligence degradation in operational systems.
The remainder of the paper proceeds as follows. Section 2 develops the theoretical framing, situating Authentic Intelligence within socio-technical systems theory and digital strategy literature. Section 3 describes our system-oriented analytical protocol. Section 4 presents findings organised around system functions and governance mechanisms. Section 5 develops the systems model and walks through the governance framework derivation. Section 6 discusses theoretical contributions and practical implications. Section 7 addresses limitations and boundary conditions. Section 8 concludes.

2. Theoretical Framing

2.1. Socio-Technical Systems and Digital Strategy

Socio-technical systems theory emerged from the Tavistock Institute’s studies of coal mining, which demonstrated that technical and social subsystems must be jointly optimised [3,9]. The theory’s core insight is that system performance depends not on technical efficiency alone but on the fit between technical components and the social organisation of work [4,28]. This framing proves especially relevant for digital strategy, where algorithmic tools are deployed within organisational contexts shaped by culture, power relations, and existing decision processes [12,29,30].
Recent extensions of socio-technical theory address information systems specifically. Baxter and Sommerville [5] argue that socio-technical design principles must guide enterprise system implementation, emphasising user participation, minimal critical specification, and support for boundary activities. Mumford [31] traces how socio-technical principles evolved to address computer-based systems, noting persistent tensions between efficiency-oriented technical design and human-centred social design. These tensions intensify when algorithmic systems take on decision-making roles previously reserved for human judgement [32,33]. Recent work in Systems has applied STS principles to digital transformation contexts, demonstrating the continued relevance of joint optimisation for contemporary technology governance [24,34].
Digital strategy research has developed somewhat independently. Bharadwaj et al. [1] define digital business strategy as “organisational strategy formulated and executed by leveraging digital resources to create differential value.” This definition positions digital resources as instrumental to strategy but does not specify the governance structures that determine how human judgement interacts with digital capabilities [35,36]. Matt et al. [2] propose four dimensions of digital transformation strategy: use of technologies, changes in value creation, structural changes, and financial aspects. Again, the relationship between human decision-making and technological systems remains underspecified [37,38].
We propose that socio-technical systems theory provides the missing conceptual infrastructure. Specifically, the joint optimisation principle suggests that digital strategy systems must be designed to support both computational efficiency and human judgement [13,39]. The minimal critical specification principle suggests that digital systems should enable rather than prescribe decision processes. The boundary management principle suggests that interfaces between human and algorithmic components require explicit design attention [40,41].

2.2. Authentic Intelligence: Construct Definition and Boundaries

The term “intelligence” carries multiple meanings in organisational and technological contexts [42,43]. Artificial intelligence research emphasises computational capabilities: pattern recognition, natural language processing, optimisation, and prediction [47]. Business intelligence research emphasises data-driven decision support: dashboards, analytics, and reporting systems [48,49]. Organisational intelligence research emphasises collective sense-making: how organisations interpret environments and adapt [50,51].
We use “authentic” to signal a distinct quality of intelligence that is irreducible to any single tradition: responsibility-bearing human judgement rather than nominal human presence in algorithmic workflows. Drawing on philosophical accounts of authenticity as alignment between action, values, and responsibility [44], and on governance accounts of accountability as the capacity to give account and bear consequences [45,46,52], we propose a formal definition:
Authentic Intelligence is the systems-level capacity of a digital strategy implementation to maintain human-accountable decision governance, characterised by four necessary properties: (1) traceability—intelligence outputs can be traced to identifiable accountable decision-makers; (2) contextual judgement—human expertise shapes how algorithmic outputs are interpreted and applied; (3) intervention capacity—human agents retain meaningful ability to override, modify, or escalate decisions; and (4) consequence bearing—responsibility for decision outcomes rests with human actors who can be held to account.
This definition explicitly distinguishes Authentic Intelligence from adjacent constructs:
Human-in-the-loop (HITL) refers to a technical system configuration where humans are positioned within an algorithmic workflow [14,53]. HITL is a design pattern; Authentic Intelligence is a governance property. A system may include humans in the loop while lacking traceability, contextual judgement, or consequence bearing (e.g., when human operators rubber-stamp algorithmic recommendations without meaningful review).
Algorithmic transparency refers to the accessibility of information about how algorithms function [23,54]. Transparency is an information property; Authentic Intelligence is a governance capacity. A system may be fully transparent yet lack intervention capacity or accountability structures.
Algorithmic accountability refers to legal and ethical frameworks for attributing responsibility for algorithmic outcomes [8,46]. Accountability is a normative and legal framing; Authentic Intelligence is a systems architecture principle that enables accountability to be enacted in practice.
Organisational intelligence refers to collective sense-making processes through which organisations interpret environments [50,51]. Organisational intelligence is a cognitive and social phenomenon; Authentic Intelligence specifies the governance structures that preserve human cognition within technology-mediated systems.
The construct thus occupies a distinct conceptual space: it is neither purely technical (like HITL), nor purely informational (like transparency), nor purely normative (like accountability), nor purely cognitive (like organisational intelligence). Rather, it integrates these dimensions into a systems-level design principle.

2.3. Decision Governance in Digital Systems

Decision governance refers to the structures, processes, and norms that determine how decisions are made, by whom, and under what conditions [55,56]. In traditional organisational settings, decision governance is addressed through authority hierarchies, approval processes, and accountability structures. Digital systems complicate this picture by introducing algorithmic components that may generate recommendations, filter information, or take automated actions [57,58].
Faraj et al. [10] argue that digital systems introduce “algorithmic management” that reshapes labour processes. Kellogg et al. [11] extend this analysis, identifying six algorithmic control mechanisms: restricting, recommending, recording, rating, replacing, and rewarding. These mechanisms redistribute decision authority between humans and algorithms, often in ways that are not explicitly designed or governed. Research on integrative leadership in complex adaptive systems demonstrates how strategic decision-making processes must account for multi-modal analysis across organisational levels [59].
Research on human-in-the-loop systems addresses how human oversight can be maintained in algorithmic processes [14,53]. Rahwan [23] proposes “machine behaviour” as a field that studies AI systems as social actors, emphasising the need for human interpretability and control. Zerilli et al. [15] examine algorithmic transparency, arguing that meaningful human oversight requires not just access to algorithmic code but understanding of how algorithms shape decision environments. Recent regulatory developments, particularly the EU AI Act’s Article 14 requirements for human oversight of high-risk systems [21,22], have intensified scholarly attention to governance mechanisms that preserve human control [60,61].
We build on these foundations to propose that Authentic Intelligence functions as a design principle for digital strategy systems [62,63]. Rather than treating human oversight as an add-on to algorithmic efficiency, Authentic Intelligence positions human-accountable decision governance as a primary system requirement. The systems architecture must be designed to support this requirement across all system functions.

3. Methods

3.1. Research Design and System-Oriented Analytical Protocol

This study employs a novel multi-method qualitative design that advances beyond conventional thematic analysis. We adopt an interpretive stance [64,65], recognising that the phenomena of interest (decision governance, accountability, and judgement) are constituted through human meaning-making. The methodological contribution lies in our three-stage system-oriented analytical protocol that combines mechanism-revealing thematic analysis, configurational cross-case mapping, and failure mode triangulation.
The protocol addresses a significant gap in qualitative research on digital systems: the tendency to treat interview data as descriptive accounts rather than as evidence of underlying system mechanisms [66,67]. By combining interview analysis with structured comparison against the JRC implementation catalogue, we move beyond description toward mechanism identification and boundary condition specification. The three stages are individually well-established; what the protocol contributes is their systematic combination for systems-level theory building, enabling both interpretive depth and cross-case pattern validation.

3.2. Primary Data: Semi-Structured Interviews

We conducted 50 semi-structured interviews with senior managers responsible for digital strategy implementation. Participants were recruited through purposive sampling to ensure representation across: industry sectors (consumer goods, n = 18; financial services, n = 12; professional services, n = 8; energy, n = 7; technology, n = 5), geographical regions (Europe, n = 22; Middle East and Africa, n = 15; North America, n = 8; Asia-Pacific, n = 5), organisational sizes (from mid-sized firms to multinational corporations with revenues exceeding USD 50 billion), and functional areas (general management, n = 14; operations, n = 12; marketing, n = 9; finance, n = 7; human resources, n = 5; technology, n = 3). The objective of purposive sampling was theoretical saturation and diversity of governance perspectives rather than statistical representativeness, supporting analytical rather than demographic generalisation consistent with interpretive research standards [64,65,68].
Interview participants held titles including Director, Vice President, General Manager, and Chief Officer roles. All participants had responsibility for strategic decisions involving digital technologies and direct experience implementing or governing algorithmic systems. Interviews were conducted between 2021 and 2023, primarily via video conference with some face-to-face sessions. Interview duration ranged from 45 to 90 min, with a median of 65 min.
The interview protocol was designed to elicit mechanism-level accounts rather than general opinions. Questions addressed: (a) specific decisions where algorithmic and human components interact, (b) concrete governance structures shaping technology-mediated decisions, (c) instances where human judgement overrode or modified algorithmic outputs, (d) barriers encountered in maintaining human accountability, and (e) adaptations made when initial governance arrangements proved inadequate. Questions were open-ended to allow participants to describe their experiences in their own terms, with probes seeking specific examples and counter-examples.
Interviews were recorded and transcribed verbatim. Transcription used a combination of automated tools and manual review to ensure accuracy. Participant quotes are presented verbatim, preserving original phrasing to maintain authenticity of voice. Participants are identified by code (Res1 through Res50) to protect confidentiality.

3.3. Secondary Data: JRC AI Implementation Catalogue

To triangulate interview findings, we analysed the European Commission Joint Research Centre’s catalogue of AI implementations in European public services [19,20]. This dataset contains 685 documented cases of AI deployment across 27 EU member states, with structured fields for application domain (n = 23 categories), AI technology type (n = 14 categories), process type (n = 8 categories), organisational context (n = 6 categories), implementation status (planned, n = 89; pilot, n = 147; implemented, n = 412; discontinued, n = 37), governance arrangements (documented in free text and categorical fields), and outcomes (documented where available).
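To make the catalogue’s structure concrete for the analyses that follow, the sketch below shows one way a single case record could be represented. This is a minimal illustration under a simplified schema: the field names and types are our own conveniences, not the JRC’s published data model; only the category counts and status frequencies come from the catalogue description above.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Status values and their frequencies as documented in the catalogue:
# planned n=89, pilot n=147, implemented n=412, discontinued n=37 (total 685).
ImplementationStatus = Literal["planned", "pilot", "implemented", "discontinued"]

@dataclass
class JRCCase:
    """Illustrative (non-official) representation of one catalogue entry."""
    case_id: str
    application_domain: str        # one of 23 documented domain categories
    ai_technology_type: str        # one of 14 documented technology categories
    process_type: str              # one of 8 documented process categories
    organisational_context: str    # one of 6 documented context categories
    status: ImplementationStatus
    governance_notes: Optional[str] = None  # free-text governance arrangements
    outcome_notes: Optional[str] = None     # outcomes, where available
```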
The JRC dataset offers specific advantages for our corroborative triangulation strategy. First, it provides systematic documentation using a consistent taxonomy, enabling pattern identification across cases. Second, it covers public sector contexts, complementing the predominantly private sector focus of our interviews and enabling examination of whether governance mechanisms operate differently across sectoral contexts. Third, it includes implementation status with 37 discontinued cases, enabling comparison between interview-reported failure modes and documented implementation failures. Fourth, it documents governance arrangements using categorical fields (human oversight type, decision scope, and escalation provisions) that map directly to our interview-derived constructs.
We emphasise that the JRC analysis serves a corroborative triangulation function rather than hypothesis testing. The comparison examines whether patterns identified in interview data also appear in documented implementation outcomes, strengthening confidence in the failure mode typology where correspondence is observed and identifying potential boundary conditions where correspondence is limited.

3.4. Stage 1: Mechanism-Revealing Thematic Analysis

Interview data were analysed using an enhanced thematic analysis protocol designed to reveal underlying system mechanisms rather than surface-level descriptions [69,70]. The analysis proceeded through six phases adapted from Braun and Clarke [71] with specific modifications for mechanism identification.
Phase 1 (Familiarisation): Repeated reading of transcripts with attention to decision sequences, governance structures, and points where human and algorithmic components interact. Memos documented initial observations about potential mechanisms. Extended coding trees, memo excerpts, and codebook documentation are provided in the Supplementary Materials, consistent with best practice in qualitative transparency while preserving narrative accessibility in the main text [71,72].
Phase 2 (Mechanism-Oriented Coding): Generation of initial codes capturing not just what participants said but the underlying decision pathways and governance structures their accounts implied. Codes were classified into three types: descriptive codes capturing what happened (e.g., “algorithm recommended candidate ranking”), processual codes capturing how it happened (e.g., “manager reviewed top five before final selection”), and mechanistic codes capturing why it happened that way (e.g., “escalation triggered by confidence score below threshold”). This three-level coding structure enabled systematic identification of governance mechanisms rather than mere description of events.
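To illustrate how the three-level structure organises codes in practice, the following minimal sketch represents each code as a typed record. The example labels are taken from the coding illustrations above; the transcript reference field is a hypothetical placeholder.

```python
from dataclasses import dataclass
from enum import Enum

class CodeLevel(Enum):
    DESCRIPTIVE = "what happened"
    PROCESSUAL = "how it happened"
    MECHANISTIC = "why it happened that way"

@dataclass
class ThematicCode:
    level: CodeLevel
    label: str
    transcript_ref: str  # placeholder for a participant/segment reference

# The three example codes given in the text, one per level.
examples = [
    ThematicCode(CodeLevel.DESCRIPTIVE,
                 "algorithm recommended candidate ranking", "<segment>"),
    ThematicCode(CodeLevel.PROCESSUAL,
                 "manager reviewed top five before final selection", "<segment>"),
    ThematicCode(CodeLevel.MECHANISTIC,
                 "escalation triggered by confidence score below threshold", "<segment>"),
]
```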
Phase 3 (Axial Coding for System Functions): Codes were organised using axial coding to identify relationships between governance structures and outcomes. This phase identified six system functions (sensing, interpreting, deciding, executing, monitoring, and adapting) as organising constructs that emerged from the data rather than being imposed a priori.
Phase 4 (Pattern Specification): Themes were specified in terms of necessary conditions, sufficient conditions, and boundary conditions. For each theme, we documented what governance structures were present, what outcomes resulted, and under what conditions the relationship held or broke down.
Phase 5 (Failure Mode Identification): Special attention was given to accounts of governance breakdown, decision errors, and accountability failures. These accounts were coded separately to identify failure modes and their antecedent conditions.
Phase 6 (Model Construction): Themes were synthesised into a systems model specifying relationships between system functions, decision loci, governance mechanisms, and failure modes.
Coding was conducted using NVivo 12 qualitative analysis software. Inter-coder reliability was assessed through independent coding of 20% of transcripts by two researchers, with Cohen’s kappa of 0.78 indicating substantial agreement [73]. Disagreements were resolved through discussion until consensus was reached.
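For readers unfamiliar with the statistic, Cohen’s kappa corrects observed agreement for the agreement two coders would reach by chance given their marginal label frequencies: κ = (p_o − p_e)/(1 − p_e). The sketch below computes it directly from two label sequences; equivalent functions in standard statistics packages yield the same result.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from marginal label frequencies."""
    assert len(coder_a) == len(coder_b) and coder_a, "need paired, non-empty codings"
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)  # undefined when p_e == 1 (degenerate case)
```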

3.5. Stage 2: Configurational Cross-Case Mapping

The second analytical stage mapped interview-derived constructs to the JRC catalogue’s structured taxonomy. This configurational analysis [74,75] examined whether governance mechanisms identified in interviews appeared systematically across the 685 documented implementations.
The mapping proceeded in three steps:
Step 1 (Construct Operationalisation): Each interview-derived governance mechanism was operationalised using JRC catalogue fields. The mapping is detailed in Table 1 and sketched in code after the list.
  • Visibility mechanisms mapped to “Human Oversight Type” (coded as: none, logging only, real-time dashboard, and explanation interface) and “Transparency Provisions” (coded as: not documented, internal only, and public disclosure).
  • Intervention mechanisms mapped to “Override Capability” (coded as: none, batch override, case-level override, and real-time override) and “Escalation Pathway” (coded as: not documented, informal, formal single-level, and formal multi-level).
  • Accountability mechanisms mapped to “Responsible Organisation” (coded as: not specified, department-level, named unit, and named individual) and “Decision Authority” (coded as: algorithmic, advisory, joint, and human-final).
  • Feedback mechanisms mapped to “Monitoring Arrangements” (coded as: none, periodic review, continuous automated, and continuous with human review) and “Adaptation Provisions” (coded as: none, error correction only, parameter adjustment, and full retraining capability).
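A compact encoding of this mapping, as referenced above, might look as follows. The dictionary transcribes the Table 1 categories verbatim; the assumption that each category list runs from weakest to strongest provision reflects our reading of the field definitions rather than an explicit ordering in the JRC documentation.

```python
# Interview-derived governance mechanisms -> JRC catalogue fields and categories.
GOVERNANCE_FIELD_MAP = {
    "visibility": {
        "Human Oversight Type": ["none", "logging only", "real-time dashboard",
                                 "explanation interface"],
        "Transparency Provisions": ["not documented", "internal only",
                                    "public disclosure"],
    },
    "intervention": {
        "Override Capability": ["none", "batch override", "case-level override",
                                "real-time override"],
        "Escalation Pathway": ["not documented", "informal", "formal single-level",
                               "formal multi-level"],
    },
    "accountability": {
        "Responsible Organisation": ["not specified", "department-level",
                                     "named unit", "named individual"],
        "Decision Authority": ["algorithmic", "advisory", "joint", "human-final"],
    },
    "feedback": {
        "Monitoring Arrangements": ["none", "periodic review", "continuous automated",
                                    "continuous with human review"],
        "Adaptation Provisions": ["none", "error correction only",
                                  "parameter adjustment", "full retraining capability"],
    },
}
```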
Step 2 (Pattern Analysis): We examined frequency distributions of governance arrangements across implementation status categories (planned, pilot, implemented, and discontinued). Given the exploratory nature of this analysis and the modest sample of discontinued cases (n = 37), we report descriptive percentages rather than inferential statistics. The pattern analysis is intended to identify associations warranting further investigation rather than to establish causal relationships.
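A minimal sketch of this descriptive analysis, assuming case records are held in a tabular structure with illustrative column names (a per-case “status” column and one column per governance field):

```python
import pandas as pd

def governance_by_status(cases: pd.DataFrame, mechanism_col: str) -> pd.DataFrame:
    """Percentage distribution of one governance arrangement within each
    implementation status category (planned, pilot, implemented, discontinued).
    Descriptive only, matching the exploratory intent of Stage 2."""
    return (pd.crosstab(cases[mechanism_col], cases["status"], normalize="columns")
            .mul(100)
            .round(1))

# Example (column name is hypothetical): governance_by_status(cases, "override_capability")
```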
Step 3 (Configuration Identification): We identified configurations of governance mechanisms associated with successful implementation versus discontinuation using Qualitative Comparative Analysis logic [74,75], examining which combinations of conditions appeared necessary or sufficient for particular outcomes. This configurational approach aligns with recent research demonstrating how project management capability and resistance patterns combine in technology transformation contexts [76].
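The configurational logic reduces to two crisp-set consistency measures standard in QCA: a condition approaches necessity when (nearly) all cases showing the outcome also show the condition, and sufficiency when (nearly) all cases showing the condition also show the outcome. A minimal sketch, assuming dichotomised condition and outcome indicators per case:

```python
def necessity_consistency(condition: list[bool], outcome: list[bool]) -> float:
    """Share of outcome cases that also exhibit the condition;
    values near 1.0 suggest the condition may be necessary."""
    with_outcome = [c for c, o in zip(condition, outcome) if o]
    return sum(with_outcome) / len(with_outcome) if with_outcome else float("nan")

def sufficiency_consistency(condition: list[bool], outcome: list[bool]) -> float:
    """Share of condition cases that also exhibit the outcome;
    values near 1.0 suggest the condition may be sufficient."""
    with_condition = [o for c, o in zip(condition, outcome) if c]
    return sum(with_condition) / len(with_condition) if with_condition else float("nan")
```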

3.6. Stage 3: Failure Mode Triangulation

The third analytical stage directly compared interview-reported barriers and failure modes with documented discontinuations in the JRC catalogue. This triangulation strategy [77,78] assessed whether failure modes identified through interview analysis corresponded to observed implementation failures.
For each of the 37 discontinued implementations in the JRC catalogue, we
  • Extracted documented reasons for discontinuation from catalogue fields and linked source documents (where available, including press reports, audit documents, and government statements).
  • Coded discontinuation reasons using the failure mode typology developed from interview analysis.
  • Assessed correspondence between interview-derived failure modes and documented discontinuation patterns.
  • Identified additional failure modes present in catalogue data but absent from interview accounts.
Correspondence was operationalised using explicit thresholds: High correspondence indicates that the failure mode was mentioned in ≥20 interviews AND cited in ≥20% of JRC discontinuations. Moderate correspondence indicates that the failure mode was mentioned in ≥10 interviews AND cited in 10–19% of JRC discontinuations. Low correspondence indicates citation in <10% of discontinuations regardless of interview frequency.
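These thresholds translate directly into a classification rule, sketched below exactly as stated. Note that combinations falling outside the stated bands (for example, a JRC citation rate of 20% or more with fewer than twenty interview mentions) are left unclassified here, because the typology does not specify their treatment.

```python
def correspondence_level(interview_mentions: int, jrc_citation_pct: float) -> str:
    """Apply the stated correspondence thresholds for one failure mode."""
    if jrc_citation_pct < 10:
        return "low"            # low regardless of interview frequency
    if interview_mentions >= 20 and jrc_citation_pct >= 20:
        return "high"
    if interview_mentions >= 10 and 10 <= jrc_citation_pct < 20:
        return "moderate"
    return "unclassified"       # band not specified in the typology
```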
We acknowledge that the 37 discontinued cases represent a modest base for triangulation analysis. The correspondence findings should therefore be interpreted as exploratory pattern identification rather than definitive validation. High correspondence strengthens confidence that the failure mode operates across private and public sector contexts; low correspondence may reflect sectoral differences, measurement challenges, or genuinely context-specific failure patterns.

3.7. Research Quality and Reflexivity

We address research quality through several strategies aligned with qualitative research standards [63,72]. Prolonged engagement with data occurred through multiple rounds of analysis over 18 months. Triangulation across data sources enabled comparison of interview findings with documented implementation cases. Thick description in the findings section provides sufficient detail for readers to assess transferability. Peer debriefing occurred through presentation at academic conferences and seminars.
Supplementary Materials available from the authors include: the full codebook structure with examples of descriptive, processual, and mechanistic codes; the complete mapping table showing governance construct operationalisation against JRC fields; and interview protocol documentation.
To support readability and provide an orienting overview of the empirical findings, Table 2 summarises how the six system functions identified in the analysis map to their core governance challenges and dominant failure risks. The subsequent sections present rich qualitative evidence and detailed quotations; the table provides a system-level synthesis aligned with socio-technical systems principles, which emphasise understanding interdependencies across technical and social subsystems rather than isolated components [3,5,28].
The table highlights that accountability risk is not evenly distributed across system functions. In particular, sensing, deciding, monitoring, and adapting emerge as structurally vulnerable points where governance breakdowns can displace responsibility or obscure human judgement, consistent with prior work on algorithmic governance and decision control in digital systems [6,10,11]. Interpreting and executing, by contrast, primarily face risks of over-reliance or rigidity, reflecting tensions previously identified in socio-material and digital work research [12,29]. This overview is intended as a navigational aid rather than a substitute for the detailed analysis that follows.

4. Findings

Building on this system function overview, Table 3 specifies how governance requirements vary across different types of strategic decisions. The analysis indicates that accountability cannot be ensured through uniform governance intensity; instead, the appropriate configuration of visibility, intervention, accountability, and feedback mechanisms depends on the nature and consequences of the decision being made, echoing established insights from IT and digital governance research [62,63].
By distinguishing between routine, consequential, and contested decisions, the table clarifies when lightweight governance arrangements are sufficient and when intensive human oversight becomes critical. This decision-contingent configuration aligns with research on algorithmic management and human oversight, which shows that accountability failures frequently arise not from automation per se, but from misalignment between decision risk and governance design [10,14,15]. This logic underpins the concept of Authentic Intelligence, demonstrating how human accountability is preserved through calibrated governance rather than constant human intervention.

4.1. System Function 1: Sensing

Sensing refers to how digital strategy systems gather information from internal and external environments. Interview participants consistently emphasised that effective sensing requires both computational data collection and human interpretation of what data mean.
Participants described extensive use of automated data collection systems. Management information systems, customer relationship management platforms, and enterprise resource planning tools continuously gather operational data. As Res37 explained: “The awareness and clarity of organisational objectives with the employees and with people directly in charge of the units, teams and the department basically” depends on having information systems that surface relevant data.
Digital platforms extend sensing beyond organisational boundaries. Res22 noted: “It’s about democratizing information, you can then understand when you see YouTube, you can then understand when you see workspace, the Gmail, Docs, everything collaborative sharing.” External data sources, from social media to market research databases, augment internal data collection.
Yet participants consistently emphasised that data collection does not equal understanding. Res42 articulated this distinction: “Leaders need to intentionally step aside from day-to-day pressures to relook at the business with new lenses.” Automated systems may surface patterns, but human judgement determines what patterns matter.
The consumer goods participants were especially emphatic about grounding sensing in human insight. Res44 explained that while “focus groups used to identify consumer needs may not always be successful as there is often pressure on consumers to follow the crowd,” direct observation enables richer understanding. The organisation pursues ethnographic approaches: “meeting consumers in their own homes, beyond just focusing on traditional focus groups.”
Res43 emphasised the role of research and development: “Research and development are essential in exploring consumer needs and wants. The Organisation starts with understanding what the consumer wants and tries to get insights on consumer needs per region and per demographic, based on the research.” The sensing function thus requires integration of algorithmic data collection with human interpretive capability.
The JRC catalogue reveals extensive deployment of sensing technologies in public services. Computer vision systems monitor traffic and public spaces (142 cases). Natural language processing systems analyse citizen communications (187 cases). Machine learning systems detect patterns in administrative data (203 cases).
Notably, many JRC cases document implementation challenges associated with sensing limitations. The Austrian Public Employment Service algorithm used historical data to predict job seeker outcomes, but the system was discontinued after the data protection agency ruled that the historical data embedded discriminatory patterns. This case illustrates how automated sensing, without human interpretation of data meaning and context, may produce systematically biased outputs. Of the 37 discontinued implementations, 11 (30%) cited data quality or data bias as contributing factors.

4.2. System Function 2: Interpreting

Interpretation transforms sensed data into actionable understanding. Our analysis identified three interpretation mechanisms: pattern recognition, contextual judgement, and meaning negotiation.
Algorithmic systems excel at identifying patterns across large datasets. Res3 described how “the organisation relies heavily on innovation and technology integration” for pattern recognition in medical diagnostics. Res45 explained how “automating solutions to free up human hours for individuals to focus on other opportunities” involves delegating routine pattern recognition to computational systems.
Yet pattern recognition is insufficient for interpretation. Patterns must be judged in context. Res32 emphasised: “A good understanding of the consumer is embedded in the purpose of their organisation, which is to create superior branded products and services to meet the needs of consumers for now and generations to come.” Algorithmic patterns require human judgement about consumer needs, market conditions, and organisational capabilities.
Res23 provided a vivid example of contextual judgement: “We do a lot of market visits, even for those whose work are not directly related to commercial or sales. And I think it helps us see in real-time what the problems are. So we could be in meetings, and everybody has been theorizing and all of that. And then we go into the markets, you meet 10 customers in one day, and your perspective changes totally from what you have been speaking about. And then you’re able to make decisions quite faster.” This account illustrates how direct experience provides interpretive context that algorithmic analysis cannot replicate.
Interpretation also involves collective sense-making. Res11 described how “senior leadership teams gather insights from the market to generate new ideas, which are then developed by a committee and implemented.” Interpretation is not individual cognition but organisational process.
Res29 emphasised the governance structures enabling collective interpretation: “There needs to be a proper either strategy or communication from top management. We need to have the right culture that drives innovation in the workplace internally itself before we’re actually asking our people to innovate for new products, and obviously have the right set of people with the skills and capabilities and the mindset.” Meaning negotiation requires explicit structures that enable collective interpretation.

4.3. System Function 3: Deciding

Decision-making is where Authentic Intelligence faces its most critical test. Our analysis identified three decision types: routine decisions amenable to automation, consequential decisions requiring human judgement, and contested decisions requiring escalation.
Participants supported automation of routine decisions. Res27 explained that “automation of the key processes we have today” enables “employees or the entire organisation to actually focus on other things and let systems and processes run part of the business.”
The JRC catalogue documents extensive automation of routine decisions. Chatbots handle standard citizen inquiries (89 cases). Automated systems process straightforward applications (67 cases). Rules-based systems route documents to appropriate handlers (45 cases). Implementation success rates are high for these routine applications (94% implemented or pilot status).
Consequential decisions require human accountability. Res21 argued for “empowering the right people at different levels to act on information without going through unnecessary hoops and loops of approval” while maintaining clear accountability. Res12 emphasised autonomy with accountability: “Once you’ve obviously got the right individuals, and in giving them an enabling environment, in order to grow them, the last thing you want to do is restrict and unduly govern their ability to perform in their roles.” Yet this autonomy operates within governance structures: “It has to be a level of autonomy you grant your people in order to get that exploration and exploitation at the same time and to do it effectively.”
Complex or contested decisions require escalation pathways. Res43 noted that “the speed of decision-making can limit the ability to innovate and bring new products or ideas to market. Sometimes, organisations may wait for all the data or analysis to be completed, which can delay decision-making.” Research on sustaining follower trust during leadership transitions demonstrates how trust maintenance requires explicit phase-mapping and governance playbooks [79].
Yet premature decision closure is equally problematic. Res39 emphasised that “the final say of senior management in decision-making” must be preserved for consequential choices. Res38 identified structural barriers: “the lack of full authority to change things at the senior management level and the presence of many management levels” impedes appropriate escalation.
The JRC data reveal that escalation provisions are documented in only 23% of implemented cases. Among discontinued implementations, absence of escalation pathways was cited in eight cases (22%), suggesting an association between escalation failure and implementation discontinuation.

4.4. System Function 4: Executing

Execution translates decisions into action. Our analysis identified three execution mechanisms: automated execution, human-mediated execution, and hybrid execution.
Res4 described automated execution through “a ticketing system where if there is a ticket, then you can involve the subject matter expert. And that subject matter expert gets notified immediately.” Automated execution reduces latency and ensures consistency.
Other decisions require human execution. Res15 emphasised that “the organisation needs to have a clear plan that defines success and links it to the goals set. The organisation should visualize what success looks like after every set time.” Human execution enables adaptation to local conditions.
Res41 stressed stakeholder-centricity in execution: “The customer has to be central to your execution plans. You have to reassess your assumptions from when you started because if there is not going to be anyone at the end of the exploitation, then all of that work was wasted.” This finding highlights that execution governance must include mechanisms for validating assumptions against stakeholder needs.
Res14 explained hybrid execution: “For me, there has to be seamless transition of information. This is very important because exploitation is on one hand, exploration is on another hand. After gathering data, are you able to translate this data into useful facts that help you exploit the decision at that point in time.” The transition of information across execution phases requires explicit governance.

4.5. System Function 5: Monitoring

Monitoring tracks execution outcomes and environmental changes. Participants distinguished between operational monitoring (tracking process execution) and strategic monitoring (assessing whether the system achieves intended purposes).
Participants described extensive automated monitoring. Res4 mentioned “systems for performance monitoring, systems to report risks, identify risks, service quality.” The JRC catalogue documents 156 cases involving automated monitoring of policy implementation.
Res18 described how monitoring data circulates: “There’s ongoing research now even at every like in different silos. So it means that because it’s a multinational, what is happening in Asia, I can also learn, okay, this is best practice. I may not have been the ones who may have taken up the research, but I can say, Okay, this is working.”
Yet automated monitoring requires human oversight. Res49 emphasised: “Constantly aware that markets change and develop. New trends are emerging, new competitors are emerging. So I think there’s that continual ability to monitor the market and recognize the value of that.”
Res42 described institutionalised monitoring: “The critical thing is, how do we institutionalise these practices? These processes are not left to chance or left to when a GM is under pressure to deliver. Either the business is doing super well or not, the processes of innovation is already ingrained and integrated into the integrated business process such that irrespective of what the external environment is saying, or irrespective of what the numbers are or what the stats are saying, this process continues anyway.”
Monitoring must feed back into sensing and interpretation. Res49 concluded: “The feedback process will continue to be the main forces behind the change. Continuous feedback is necessary to identify the most important areas for improvement over the following three to six months and to continue to promote both efficiency and innovation.”
Res17 described how feedback triggers adaptation: “Sometimes you bring in new innovations and they’re not even growing shares, they’re not having any impact of the value. But if you’re also sharing widely, a lot of these initiatives, and people are like okay, I think we should stop here, I think this project we’ve been trying to push it, it doesn’t seem to work. We need to pivot and bring in something else.”
The JRC data reveal that feedback loop implementation varies significantly. Of 685 documented AI implementations, only 89 (13%) include explicit feedback mechanisms for modifying system behaviour based on outcomes. Among discontinued cases, 14 (38%) cited inadequate feedback mechanisms as contributing to failure.

4.6. System Function 6: Adapting

Adaptation modifies system behaviour based on monitoring feedback. Participants distinguished between incremental adaptation and transformational adaptation.
Consequential adaptation requires human direction. Res2 emphasised: “Having a working formula must meet preparation. And that particular thing you’re good at, could change, or you could get pressed out of the market. As organisation, this should be at the back of your mind, and also be nimble, such that you are still profitable.”
Res35 highlighted capability development: “How to continuously improve the capacity of your staff to do more. Words like double hatching become used more frequently in the corporate space. Senior managers have now learnt that they need to continuously invest in the capacity of their staff to ensure they can double hatch and do a lot more across board and functions.”
Res38 specified organisational mechanisms: “Well, first of all, you need to train them. There’s no doubt about that. The team needs to understand the culture from the onboarding process. Also, putting systems in place in terms of incentives attached, and also making it part of what forms appraisals results.”
Participants identified barriers to effective adaptation. Res1 described organisational rigidity: “Being inflexible and believing that it’s your way or the highway can demoralize the organisation and lead to its downfall.”
Res50 described what we term the “comfort paradox”: “People don’t like to get outside of their comfort zone because they themselves also do not want to risk their equity or risk the profitability of the enterprise.” Successful performance may inhibit the risk-taking necessary for adaptation.
Res46 noted structural barriers: “The natural habit of an organisation is often determined by its size. This affects the decision-making process, which needs to be quicker for effective exploration. Breaking down decision-making to different niches or departments and giving them autonomy in decision-making would be the best way to facilitate exploration in larger organisations.”
Culture emerged as a critical enabler of adaptation. Res11 stated: “Culture plays a big role. The culture of an organisation can determine how much of the strategy you get executed.” Res20 specified how cultural mechanisms operate: “As leaders, it’s also worth understanding what all the systems and processes in the organisation that either helps or hinders getting things done.” Adaptation requires cultural support for experimentation and tolerance for failure.

5. Systems Model Development

5.1. Architecture of Authentic Intelligence

Synthesising the findings, we propose a systems architecture for Authentic Intelligence in digital strategy. The architecture specifies relationships between system functions, decision governance mechanisms, and accountability structures.
The model positions human-accountable decision governance as the organising principle. Each system function (sensing, interpreting, deciding, executing, monitoring, and adapting) involves interaction between algorithmic and human components. Authentic Intelligence is maintained when governance mechanisms ensure visibility (humans can see what algorithms are doing), intervention capacity (humans can override or modify algorithmic outputs), accountability (consequences of decisions can be traced to responsible parties), and feedback (outcomes inform system learning).

5.2. Decision Loci

The architecture specifies three types of decision loci (an allocation sketch follows the list):
Algorithmic Loci: Decisions delegated entirely to computational processes. Appropriate for routine, repeatable decisions with clear criteria and low consequences. JRC data show 94% implementation success for algorithmic loci applications.
Human Loci: Decisions reserved for human judgement. Appropriate for consequential decisions involving ethical considerations, stakeholder relationships, or strategic direction.
Hybrid Loci: Decisions involving algorithmic recommendation and human approval. Appropriate for decisions where algorithmic pattern recognition adds value but human judgement is required for contextual interpretation. JRC data show the highest discontinuation rates (8%) for hybrid applications, suggesting governance complexity.
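As noted above, the allocation criteria can be summarised heuristically. The sketch below is a deliberate simplification: the boolean inputs stand in for qualitative judgements that in practice require the governance mechanisms described in Section 5.3.

```python
from enum import Enum

class DecisionLocus(Enum):
    ALGORITHMIC = "delegated entirely to computational processes"
    HUMAN = "reserved for human judgement"
    HYBRID = "algorithmic recommendation with human approval"

def allocate_locus(routine: bool, low_consequence: bool,
                   ethical_or_strategic: bool,
                   pattern_recognition_adds_value: bool) -> DecisionLocus:
    """Heuristic allocation following the Section 5.2 criteria."""
    if ethical_or_strategic:
        return DecisionLocus.HUMAN          # consequential decisions stay human
    if routine and low_consequence:
        return DecisionLocus.ALGORITHMIC    # clear criteria, low consequences
    if pattern_recognition_adds_value:
        return DecisionLocus.HYBRID         # recommendation plus human approval
    return DecisionLocus.HUMAN              # default to human when ambiguous
```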
The system architecture (Figure 1) positions human-accountable decision governance at the centre of six interconnected functions (sensing, interpreting, deciding, executing, monitoring, and adapting). Four governance mechanisms operate across all functions to maintain Authentic Intelligence. Decision loci indicate appropriate authority allocation between algorithmic and human components. The architecture is based on 50 semi-structured interviews triangulated with 685 JRC AI implementation cases.

5.3. Governance Framework Derivation

The Decision Governance Framework (Figure 2) specifies requirement levels for each governance mechanism across decision types. The framework was derived through systematic analysis of interview accounts and JRC patterns as follows:
Matrix Construction Process: For each cell in the governance matrix, we examined (a) interview accounts describing governance arrangements for that decision type, (b) JRC cases documenting governance configurations and their outcomes, and (c) failure mode accounts identifying governance gaps. Requirement levels (Low, Medium, High, and Critical) were assigned based on convergent evidence across sources. A compact encoding of the resulting matrix follows the cell-by-cell derivation below.
Cell-by-Cell Derivation:
Routine Decisions × Visibility: Participants consistently described audit trail requirements but not real-time visibility needs for automated routine processes (Res4 and Res27). JRC implemented cases show 78% have logging but only 12% have real-time dashboards for routine applications. Requirement level: LOW.
Routine Decisions × Intervention: Exception handling rather than case-by-case override was the dominant pattern. Res4 described batch override capability for ticketing systems. JRC cases show bulk override sufficient for 89% of routine implementations. Requirement level: LOW.
Routine Decisions × Accountability: System-level rather than individual accountability was described for routine decisions. However, participants noted importance of clear ownership (Res15). Requirement level: MEDIUM.
Routine Decisions × Feedback: Aggregate performance metrics described as adequate (Res18). JRC data show periodic review sufficient for routine applications. Requirement level: MEDIUM.
Consequential Decisions × Visibility: Real-time visibility into recommendation basis consistently emphasised (Res21 and Res39). JRC discontinued cases show 45% lacked adequate visibility for consequential decisions. Requirement level: HIGH.
Consequential Decisions × Intervention: Veto authority described as essential (Res12 and Res39). JRC patterns show intervention gaps associated with discontinuation in eight cases (22%). Requirement level: CRITICAL.
Consequential Decisions × Accountability: Named decision-maker with documented rationale consistently required (Res21 and Res38). JRC cases show accountability gaps in 12 of 37 discontinuations (32%). Requirement level: CRITICAL.
Consequential Decisions × Feedback: Case-level outcome tracking described as necessary for learning (Res17 and Res49). Requirement level: HIGH.
Contested Decisions × All Mechanisms: Multi-stakeholder decisions require maximum governance intensity across all mechanisms. Interview accounts (Res38 and Res43) and JRC patterns consistently indicate that contested decisions lacking any governance mechanism face elevated discontinuation risk. Requirement level: CRITICAL across all four mechanisms.
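As referenced before the derivation, the resulting requirement levels can be transcribed into a simple lookup structure, for instance for use in the decision audits discussed in Section 6.3:

```python
# Requirement levels transcribed from the cell-by-cell derivation above.
GOVERNANCE_MATRIX = {
    "routine": {"visibility": "LOW", "intervention": "LOW",
                "accountability": "MEDIUM", "feedback": "MEDIUM"},
    "consequential": {"visibility": "HIGH", "intervention": "CRITICAL",
                      "accountability": "CRITICAL", "feedback": "HIGH"},
    "contested": {"visibility": "CRITICAL", "intervention": "CRITICAL",
                  "accountability": "CRITICAL", "feedback": "CRITICAL"},
}

def required_level(decision_type: str, mechanism: str) -> str:
    """Look up the governance requirement for a decision type and mechanism."""
    return GOVERNANCE_MATRIX[decision_type][mechanism]
```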

5.4. Failure Modes

The analysis identifies twelve failure modes where Authentic Intelligence degrades. Table 4 presents the complete failure mode typology with operationalisation, interview evidence, JRC correspondence, and diagnostic indicators.
As Table 4 shows, the high correspondence for data opacity, decision drift, accountability diffusion, monitoring gaps, feedback disconnection, and escalation failure suggests that these failure modes operate across private and public sector contexts. Moderate correspondence for pattern fetishism, execution rigidity, override inhibition, transparency theatre, and adaptation paralysis may reflect measurement differences or context-specific manifestation. Low correspondence for capability atrophy (8% JRC citation) likely reflects the longer time horizon required for this failure mode to manifest and the difficulty of documenting gradual capability erosion in implementation catalogues.

6. Discussion

6.1. Contributions to Socio-Technical Systems Theory

This study extends socio-technical systems theory in three ways [3,4,5,9,28]. First, we specify Authentic Intelligence as a design principle for digital strategy systems. Where traditional socio-technical theory addressed joint optimisation of technical and social subsystems, Authentic Intelligence addresses the specific challenge of maintaining human-accountable decision governance when computational processes mediate organisational action. This extends recent STS applications to digital transformation [24,25] by foregrounding decision governance as the central design concern. Unlike recent studies emphasising transparency, explainability, or ethical oversight as discrete interventions [15,54,60], Authentic Intelligence integrates governance mechanisms into a unified systems architecture focused on decision ownership and accountability in practice.
The construct differs from adjacent concepts in important ways. Unlike human-in-the-loop, which specifies a technical configuration, Authentic Intelligence specifies governance properties that must be present regardless of technical architecture. Unlike algorithmic accountability, which establishes normative requirements, Authentic Intelligence identifies the system structures that enable accountability to be enacted. Unlike organisational intelligence, which describes cognitive processes, Authentic Intelligence specifies governance mechanisms that preserve human cognition within automated systems.
Second, we develop a systems architecture that operationalises Authentic Intelligence through system functions, decision loci, governance mechanisms, and failure modes. This architecture provides theoretical vocabulary for analysing digital strategy systems and practical guidance for system design, extending existing STS-based digital transformation frameworks [34,60] by adding explicit decision loci specification.
Third, we identify conditions under which socio-technical systems degrade toward “inauthentic intelligence.” The twelve failure modes, corroborated through triangulation with the JRC catalogue, constitute a diagnostic framework for assessing Authentic Intelligence in operational systems. This empirically grounded typology advances beyond conceptual taxonomies by providing operationalised indicators validated against documented implementation outcomes.

6.2. Contributions to Digital Strategy Research

The study contributes to digital strategy research by foregrounding decision governance as a central concern [1,2,35,36,37,38]. The construct of decision loci provides vocabulary for specifying decision governance arrangements. The governance mechanisms translate abstract principles into system design requirements.
The connection to organisational ambidexterity research [50,80] is notable. March’s exploration–exploitation tension manifests in digital strategy contexts where algorithmic systems excel at exploitation tasks but require human direction for exploration. Authentic Intelligence provides a governance framework for managing this tension, specifying when algorithmic loci are appropriate (exploitation of known patterns) versus when human loci are required (exploration of novel possibilities).

6.3. Practical Implications

The findings offer practical guidance for organisations implementing digital strategies. A practical first step is a decision audit: mapping existing strategic decisions against the decision loci typology and governance mechanisms, consistent with established approaches to IT and digital governance assessment [62,63]. Second, organisations should assess governance mechanisms against the four requirements of visibility, intervention, accountability, and feedback, using the governance matrix (Table 3) to determine appropriate intensity levels. Third, organisations should monitor for the twelve failure modes, using the diagnostic indicators in Table 4 for early identification of Authentic Intelligence degradation. Fourth, organisations should invest in human capability maintenance, addressing the capability atrophy risk identified in both the interview and JRC data.
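As an illustration of such an audit, the sketch below compares a decision's implemented governance intensities against the requirements in Table 3. The record format, ordinal level scale, and field names are our own illustrative assumptions rather than instruments from the study:

```python
# Minimal sketch of a decision audit. Required intensities follow Table 3;
# the record format and level ordering are illustrative assumptions.

REQUIRED_INTENSITY = {
    "routine":       {"visibility": "low", "intervention": "low",
                      "accountability": "medium", "feedback": "medium"},
    "consequential": {"visibility": "high", "intervention": "critical",
                      "accountability": "critical", "feedback": "high"},
    "contested":     {"visibility": "critical", "intervention": "critical",
                      "accountability": "critical", "feedback": "critical"},
}

LEVELS = ["none", "low", "medium", "high", "critical"]

def audit_decision(decision_type: str, implemented: dict) -> list[str]:
    """Return governance mechanisms whose implemented intensity falls below
    the level Table 3 requires for this decision type."""
    gaps = []
    for mechanism, needed in REQUIRED_INTENSITY[decision_type].items():
        have = implemented.get(mechanism, "none")
        if LEVELS.index(have) < LEVELS.index(needed):
            gaps.append(f"{mechanism}: have {have}, need {needed}")
    return gaps

# Example: a consequential pricing decision with logging but no override path.
print(audit_decision("consequential",
                     {"visibility": "high", "intervention": "low",
                      "accountability": "medium", "feedback": "medium"}))
```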

6.4. Implications for Regulatory Compliance

The European Union’s AI Act (2024) establishes legal requirements for human oversight of high-risk AI systems [21,22]. Article 14 requirements correspond directly to our governance mechanisms. The Authentic Intelligence framework provides operational guidance for compliance:
  • Article 14 (1)—Human oversight design requirements map to visibility and intervention mechanisms across all system functions;
  • Article 14 (2)—Appropriate understanding requirements map to the interpreting function governance and contextual judgement integration;
  • Article 14 (3)—Intervention capacity requirements map to override protocols, escalation pathways, and decision loci allocation;
  • Article 14 (4)—Override and correction requirements map to feedback mechanisms, adaptation function, and failure mode monitoring.
This alignment enables organisations to translate regulatory requirements into system design specifications, with the failure mode typology providing risk indicators for compliance monitoring [61].
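For illustration, the mapping above can be expressed as a simple compliance check. The paragraph keys and mechanism names below are our shorthand for the correspondences just listed; this is a sketch under those assumptions, not legal text or a validated compliance tool:

```python
# Illustrative mapping of EU AI Act Article 14 paragraphs to the framework's
# governance mechanisms, as listed above. Keys and mechanism names are our
# own labels for exposition.

ARTICLE_14_MAP = {
    "14(1) oversight design":          {"visibility", "intervention"},
    "14(2) appropriate understanding": {"visibility"},    # interpreting-function governance
    "14(3) intervention capacity":     {"intervention"},  # override, escalation, loci allocation
    "14(4) override and correction":   {"intervention", "feedback"},
}

def compliance_gaps(mechanisms_in_place: set[str]) -> dict[str, set[str]]:
    """For each Article 14 requirement, report mapped mechanisms that are
    not yet implemented in the system under review."""
    return {req: needed - mechanisms_in_place
            for req, needed in ARTICLE_14_MAP.items()
            if needed - mechanisms_in_place}

print(compliance_gaps({"visibility", "feedback"}))  # flags 14(1), 14(3), 14(4)
```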

7. Limitations and Boundary Conditions

7.1. Methodological Limitations

Interview participants were predominantly from large multinational organisations; the model may require modification for smaller organisations with different governance capacities [81]. The JRC catalogue covers European public sector implementations; triangulation would be strengthened by comparison with private sector implementations and non-European contexts. The qualitative methodology enables theory development but not theory testing; future research should test propositions using quantitative methods.
The triangulation analysis is based on 37 discontinued cases, which represents a modest base for pattern identification. The correspondence findings should be interpreted as exploratory rather than definitive. High correspondence strengthens confidence that failure modes operate across contexts; low correspondence warrants further investigation rather than rejection of the failure mode concept. The framework assumes organisational capacity for governance design, monitoring, and intervention, which may limit transferability to resource-constrained settings or loosely structured digital environments [81,82].

7.2. Boundary Conditions

The Authentic Intelligence construct assumes that human accountability is desirable and achievable. This assumption may not hold in three contexts:
Fully autonomous systems: Where systems are designed to operate without human stakeholders (e.g., certain automated trading systems and autonomous research agents), the construct’s accountability requirement may be inapplicable. The model explicitly does not apply to systems where human accountability is architecturally excluded.
Resource-constrained organisations: The governance mechanisms specified require organisational capacity for visibility infrastructure, intervention protocols, accountability documentation, and feedback systems. Organisations lacking such capacity may be unable to implement Authentic Intelligence even where it is desirable.
Emergent human-AI integration: As human–AI collaboration intensifies, the distinction between algorithmic and human components may blur. The current model assumes these components can be meaningfully distinguished; future systems may require revised conceptualisation.

7.3. Future Research

Quantitative research should test relationships between governance mechanisms and Authentic Intelligence outcomes using survey instruments and implementation outcome data. Longitudinal research should examine how Authentic Intelligence evolves as digital systems mature, particularly tracking capability atrophy over extended time horizons. Design research should develop practical tools for implementing governance mechanisms [82]. Comparative research should examine Authentic Intelligence across cultural and institutional contexts, testing whether the governance mechanisms operate similarly in different regulatory and organisational environments [83,84].

8. Conclusions

Digital strategy increasingly depends on algorithmic systems, yet the mechanisms by which human judgement remains embedded within these systems have been poorly theorised. This paper introduced Authentic Intelligence as a systems-level construct that positions human-accountable decision governance at the centre of digital strategy implementation.
The methodological contribution of combining mechanism-revealing thematic analysis with configurational cross-case mapping and failure mode triangulation enabled both depth of insight and corroborative validation. The correspondence between interview-derived failure modes and JRC catalogue discontinuations strengthens confidence in the systems model, while acknowledging the exploratory nature of the triangulation given the modest discontinuation sample.
The theoretical contribution is to articulate intelligence authenticity as a design principle for digital strategy architectures, explicitly distinguished from adjacent concepts of human-in-the-loop, algorithmic transparency, algorithmic accountability, and organisational intelligence. The practical contribution is a reusable framework for designing, assessing, and improving digital strategy systems, with direct application to regulatory compliance under EU AI Act requirements.
Digital transformation need not entail the displacement of human judgement. With appropriate systems design incorporating the governance mechanisms specified here, organisations can deploy algorithmic capabilities while maintaining the authentic intelligence that responsible decision-making requires.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/systems14030259/s1, Supplementary Materials: Export-JRC-Data-Catalogue.

Author Contributions

Conceptualisation, I.E.; Methodology, I.E. and P.M.; Formal Analysis, I.E.; Investigation, I.E. and U.N.; Data Curation, I.E.; Writing—Original Draft, I.E.; Writing—Review and Editing, P.M. and U.N.; Supervision, P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Warwick Business School Humanities and Social Sciences Research Ethics Committee (HSSREC) (protocol code: HSSREC/DBA/2021/Enang and date of approval: 9 July 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data underlying this study consist of (i) qualitative interview transcripts from 50 senior managers involved in digital strategy implementation across multiple sectors and regions, and (ii) secondary documentary data drawn from the European Commission Joint Research Centre (JRC) AI Watch catalogue of public sector AI implementations. Due to the confidential nature of the interview data, which include potentially identifiable organisational and professional information, the raw interview transcripts cannot be made publicly available in accordance with the terms of ethical approval and informed consent. Anonymised excerpts supporting the findings are included within the article. The secondary dataset used for triangulation is publicly available via the European Commission JRC AI Watch repository. Additional methodological materials, including coding structures and analytical protocols, may be made available by the corresponding author upon reasonable request, subject to ethical and confidentiality constraints.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Bharadwaj, A.; El Sawy, O.A.; Pavlou, P.A.; Venkatraman, N. Digital business strategy: Toward a next generation of insights. MIS Q. 2013, 37, 471–482.
2. Matt, C.; Hess, T.; Benlian, A. Digital transformation strategies. Bus. Inf. Syst. Eng. 2015, 57, 339–343.
3. Trist, E.L.; Bamforth, K.W. Some social and psychological consequences of the longwall method of coal-getting. Hum. Relat. 1951, 4, 3–38.
4. Cherns, A. The principles of sociotechnical design. Hum. Relat. 1976, 29, 783–792.
5. Baxter, G.; Sommerville, I. Socio-technical systems: From design methods to systems engineering. Interact. Comput. 2011, 23, 4–17.
6. Danaher, J.; Hogan, M.J.; Noone, C.; Kennedy, R.; Behan, A.; De Paor, A.; Felzmann, H.; Haklay, M.; Khoo, S.M.; Morison, J.; et al. Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data Soc. 2017, 4, 2053951717726554.
7. Yeung, K. Algorithmic regulation: A critical interrogation. Regul. Gov. 2018, 12, 505–523.
8. Binns, R. Algorithmic accountability and public reason. Philos. Technol. 2018, 31, 543–556.
9. Emery, F.E.; Trist, E.L. Socio-technical systems. In Management Science, Models and Techniques; Churchman, C.W., Verhulst, M., Eds.; Pergamon: Oxford, UK, 1960; pp. 83–97.
10. Faraj, S.; Pachidi, S.; Sayegh, K. Working and organizing in the age of the learning algorithm. Inf. Organ. 2018, 28, 62–70.
11. Kellogg, K.C.; Valentine, M.A.; Christin, A. Algorithms at work: The new contested terrain of control. Acad. Manag. Ann. 2020, 14, 366–410.
12. Orlikowski, W.J. Sociomaterial practices: Exploring technology at work. Organ. Stud. 2007, 28, 1435–1448.
13. Pasmore, W.; Winby, S.; Mohrman, S.A.; Vanasse, R. Reflections: Sociotechnical systems design and organization change. J. Change Manag. 2019, 19, 67–85.
14. Rahwan, I. Society-in-the-loop: Programming the algorithmic social contract. Ethics Inf. Technol. 2018, 20, 5–14.
15. Zerilli, J.; Knott, A.; Maclaurin, J.; Gavaghan, C. Transparency in algorithmic and human decision-making. Philos. Technol. 2019, 32, 661–683.
16. Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data Soc. 2016, 3, 2053951716679679.
17. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society. Minds Mach. 2018, 28, 689–707.
18. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399.
19. European Commission Joint Research Centre. AI Watch: Artificial Intelligence in Public Services; Publications Office of the European Union: Luxembourg, 2020.
20. Misuraca, G.; van Noordt, C. AI Watch: Artificial Intelligence in Public Services—Overview of the Use and Impact of AI in Public Services in the EU; JRC Research Reports; Publications Office of the European Union: Luxembourg, 2020.
21. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 12 July 2024).
22. Veale, M.; Borgesius, F.Z. Demystifying the draft EU Artificial Intelligence Act. Comput. Law Secur. Rev. 2021, 43, 105573.
23. Rahwan, I.; Cebrian, M.; Obradovich, N.; Bongard, J.; Bonnefon, J.F.; Breazeal, C.; Crandall, J.W.; Christakis, N.A.; Couzin, I.D.; Jackson, M.O.; et al. Machine behaviour. Nature 2019, 568, 477–486.
24. Sarker, S.; Chatterjee, S.; Xiao, X.; Elbanna, A. The sociotechnical axis of cohesion for the IS discipline: Its historical legacy and its continued relevance. MIS Q. 2019, 43, 695–719.
25. Mikalef, P.; Gupta, M. Artificial intelligence capability: Conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance. Inf. Manag. 2021, 58, 103434.
26. Teece, D.J. Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strateg. Manag. J. 2007, 28, 1319–1350.
27. Teece, D.J.; Pisano, G.; Shuen, A. Dynamic capabilities and strategic management. Strateg. Manag. J. 1997, 18, 509–533.
28. Clegg, C.W. Sociotechnical principles for system design. Appl. Ergon. 2000, 31, 463–477.
29. Leonardi, P.M. When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies. MIS Q. 2011, 35, 147–167.
30. Cecez-Kecmanovic, D.; Galliers, R.D.; Henfridsson, O.; Newell, S.; Vidgen, R. The sociomateriality of information systems: Current status, future directions. MIS Q. 2014, 38, 809–830.
31. Mumford, E. The story of socio-technical design: Reflections on its successes, failures and potential. Inf. Syst. J. 2006, 16, 317–342.
32. Yoo, Y.; Henfridsson, O.; Lyytinen, K. Research commentary—The new organizing logic of digital innovation. Inf. Syst. Res. 2010, 21, 724–735.
33. Nambisan, S.; Lyytinen, K.; Majchrzak, A.; Song, M. Digital innovation management. MIS Q. 2017, 41, 223–238.
34. Hanelt, A.; Bohnsack, R.; Marz, D.; Antunes Marante, C. A systematic review of the literature on digital transformation: Insights and implications for strategy and organizational change. J. Manag. Stud. 2021, 58, 1159–1197.
35. Ross, J.W.; Sebastian, I.M.; Beath, C.M. How to develop a great digital strategy. MIT Sloan Manag. Rev. 2017, 58, 7–9.
36. Sebastian, I.M.; Ross, J.W.; Beath, C.; Mocker, M.; Moloney, K.G.; Fonstad, N.O. How big old companies navigate digital transformation. MIS Q. Exec. 2017, 16, 197–213.
37. Vial, G. Understanding digital transformation: A review and a research agenda. J. Strateg. Inf. Syst. 2019, 28, 118–144.
38. Verhoef, P.C.; Broekhuizen, T.; Bart, Y.; Bhattacharya, A.; Dong, J.Q.; Faber, N.; Haenlein, M. Digital transformation: A multidisciplinary reflection and research agenda. J. Bus. Res. 2021, 122, 889–901.
39. Walker, G.H.; Stanton, N.A.; Salmon, P.M.; Jenkins, D.P. A review of sociotechnical systems theory. Theor. Issues Ergon. Sci. 2008, 9, 479–499.
40. Midgley, G. Systemic Intervention: Philosophy, Methodology, and Practice; Kluwer/Plenum: New York, NY, USA, 2000.
41. Jackson, M.C. Critical Systems Thinking and the Management of Complexity; Wiley: Chichester, UK, 2019.
42. Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harv. Bus. Rev. 2018, 96, 108–116.
43. Ransbotham, S.; Khodabandeh, S.; Fehling, R.; LaFountain, B.; Kiron, D. Winning with AI. MIT Sloan Manag. Rev. 2019, 61, 1–17.
44. Taylor, C. The Ethics of Authenticity; Harvard University Press: Cambridge, MA, USA, 1991.
45. Bovens, M. Analysing and assessing accountability: A conceptual framework. Eur. Law J. 2007, 13, 447–468.
46. Mulgan, R. Holding Power to Account: Accountability in Modern Democracies; Palgrave Macmillan: Basingstoke, UK, 2003.
47. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: Hoboken, NJ, USA, 2020.
48. Chen, H.; Chiang, R.H.L.; Storey, V.C. Business intelligence and analytics: From big data to big impact. MIS Q. 2012, 36, 1165–1188.
49. Davenport, T.H.; Harris, J.G. Competing on Analytics: The New Science of Winning; Harvard Business Press: Boston, MA, USA, 2007.
50. March, J.G. Exploration and exploitation in organizational learning. Organ. Sci. 1991, 2, 71–87.
51. Huber, G.P. Organizational learning: The contributing processes and the literatures. Organ. Sci. 1991, 2, 88–115.
52. Dubnick, M.J. Accountability and the promise of performance. Public Perform. Manag. Rev. 2005, 28, 376–417.
53. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum.-Comput. Interact. 2020, 36, 495–504.
54. Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608.
55. Sambamurthy, V.; Zmud, R.W. Arrangements for information technology governance: A theory of multiple contingencies. MIS Q. 1999, 23, 261–290.
56. Weill, P.; Ross, J.W. IT Governance: How Top Performers Manage IT Decision Rights for Superior Results; Harvard Business Press: Boston, MA, USA, 2004.
57. Zuboff, S. The Age of Surveillance Capitalism; Public Affairs: New York, NY, USA, 2019.
58. Kitchin, R. Thinking critically about and researching algorithms. Inf. Commun. Soc. 2017, 20, 14–29.
59. Enang, I.; Omeihe, K.; Omeihe, I.; Enang, I.; Enang, U. Integrative leadership in complex adaptive systems: A multi-modal analysis of strategic decision-making processes. Strategy Leadersh. 2026, 54, 88–119.
60. Schneider, J.; Abraham, R.; Meske, C.; Vom Brocke, J. Artificial intelligence governance for businesses. Inf. Syst. Manag. 2023, 40, 229–249.
61. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2024, 39, 1871–1882.
62. Helfat, C.E.; Finkelstein, S.; Mitchell, W.; Peteraf, M.A.; Singh, H.; Teece, D.J.; Winter, S.G. Dynamic Capabilities: Understanding Strategic Change in Organizations; Blackwell: Malden, MA, USA, 2007.
63. Eisenhardt, K.M.; Martin, J.A. Dynamic capabilities: What are they? Strateg. Manag. J. 2000, 21, 1105–1121.
64. Walsham, G. Interpretive case studies in IS research: Nature and method. Eur. J. Inf. Syst. 1995, 4, 74–81.
65. Klein, H.K.; Myers, M.D. A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Q. 1999, 23, 67–93.
66. Gioia, D.A.; Corley, K.G.; Hamilton, A.L. Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organ. Res. Methods 2013, 16, 15–31.
67. Gehman, J.; Glaser, V.L.; Eisenhardt, K.M.; Gioia, D.; Langley, A.; Corley, K.G. Finding theory–method fit: A comparison of three qualitative approaches to theory building. J. Manag. Inq. 2018, 27, 284–300.
68. Lincoln, Y.S.; Guba, E.G. Naturalistic Inquiry; Sage: Newbury Park, CA, USA, 1985.
69. Strauss, A.; Corbin, J. Basics of Qualitative Research, 2nd ed.; Sage: Thousand Oaks, CA, USA, 1998.
70. Charmaz, K. Constructing Grounded Theory, 2nd ed.; Sage: London, UK, 2014.
71. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101.
72. Tracy, S.J. Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qual. Inq. 2010, 16, 837–851.
73. Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
74. Rihoux, B.; Ragin, C.C. Configurational Comparative Methods; Sage: Thousand Oaks, CA, USA, 2009.
75. Greckhamer, T.; Misangyi, V.F.; Elms, H.; Lacey, R. Using qualitative comparative analysis in strategic management research. Organ. Res. Methods 2008, 11, 695–726.
76. Enang, I.; Mukala, P.; Okpanum, I.; Ahmad, A.; Kiplagat, P. Project management capability and resistance in cloud transformation: Configurational evidence from African e-commerce. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 329.
77. Jick, T.D. Mixing qualitative and quantitative methods: Triangulation in action. Adm. Sci. Q. 1979, 24, 602–611.
78. Denzin, N.K. The Research Act, 3rd ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1989.
79. Enang, I.; Omeihe, K.; Okpanum, I.; Omeihe, I. Keeping trust when leaders go hybrid: A phase map and playbook for sustaining follower trust. Strategy Leadersh. 2025, 53.
80. O’Reilly, C.A.; Tushman, M.L. Organizational ambidexterity: Past, present, and future. Acad. Manag. Perspect. 2013, 27, 324–338.
81. Eisenhardt, K.M.; Graebner, M.E. Theory building from cases: Opportunities and challenges. Acad. Manag. J. 2007, 50, 25–32.
82. Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design science in information systems research. MIS Q. 2004, 28, 75–105.
83. Hofstede, G. Culture’s Consequences, 2nd ed.; Sage: Thousand Oaks, CA, USA, 2001.
84. House, R.J.; Hanges, P.J.; Javidan, M.; Dorfman, P.W.; Gupta, V. Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies; Sage: Thousand Oaks, CA, USA, 2004.
Figure 1. System Architecture of Authentic Intelligence in Digital Strategy.
Figure 2. Decision Governance Framework for Authentic Intelligence.
Table 1. Governance construct operationalisation against JRC catalogue fields.
| Governance Mechanism | Construct Dimension | JRC Catalogue Field | Operationalisation Categories | Interview Evidence Basis |
|---|---|---|---|---|
| Visibility | Algorithmic process observability | Human Oversight Type | None; Logging only; Real-time dashboard; Explanation interface | Res4, Res21, Res42 described visibility requirements varying by decision consequentiality |
| Visibility | Information accessibility | Transparency Provisions | Not documented; Internal only; Public disclosure | Res22, Res37 emphasised information democratisation needs |
| Visibility | Decision rationale traceability | Audit Trail Completeness | None; Inputs only; Inputs + outputs; Full decision pathway | Res15, Res39 specified rationale documentation requirements |
| Intervention | Override capability scope | Override Capability | None; Batch override; Case-level override; Real-time override | Res12, Res27 described override needs by decision type |
| Intervention | Escalation pathway formalisation | Escalation Pathway | Not documented; Informal; Formal single-level; Formal multi-level | Res38, Res43 identified escalation barriers |
| Intervention | Human veto authority | Decision Authority Distribution | Algorithmic final; Advisory to human; Joint human-algorithm; Human final | Res39 emphasised preserved human authority for consequential decisions |
| Accountability | Responsibility attribution clarity | Responsible Organisation | Not specified; Department-level; Named unit; Named individual | Res21, Res38 described accountability structure requirements |
| Accountability | Consequence bearing assignment | Decision Authority | Algorithmic; Advisory; Joint; Human-final | Res12 specified autonomy-with-accountability principles |
| Accountability | Documentation completeness | Rationale Recording | None; Outcome only; Decision + rationale; Full audit trail | Res15 emphasised success visualisation and documentation |
| Feedback | Outcome monitoring scope | Monitoring Arrangements | None; Periodic review; Continuous automated; Continuous with human review | Res4, Res49 described monitoring system requirements |
| Feedback | Learning loop formalisation | Adaptation Provisions | None; Error correction only; Parameter adjustment; Full retraining capability | Res17, Res35 described feedback-driven adaptation |
| Feedback | Stakeholder input integration | Feedback Channel Availability | None; Internal only; External formal; Continuous multi-stakeholder | Res41 emphasised customer centricity in execution feedback |
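To illustrate how Table 1's operationalisation categories function as ordinal scales, the sketch below codes a hypothetical JRC catalogue record into visibility and intervention scores. The field names and sample record are invented for illustration; only the category scales follow the table:

```python
# Illustrative coding of a JRC catalogue case into the ordinal categories of
# Table 1. Field names and the sample record are hypothetical assumptions;
# the ordinal scales follow the table's operationalisation categories.

VISIBILITY_SCALE = ["None", "Logging only", "Real-time dashboard",
                    "Explanation interface"]
OVERRIDE_SCALE = ["None", "Batch override", "Case-level override",
                  "Real-time override"]

def code_case(record: dict) -> dict:
    """Map raw catalogue fields onto 0-based ordinal scores per Table 1."""
    return {
        "visibility": VISIBILITY_SCALE.index(record["human_oversight_type"]),
        "intervention": OVERRIDE_SCALE.index(record["override_capability"]),
    }

sample = {"human_oversight_type": "Logging only",
          "override_capability": "None"}
print(code_case(sample))  # {'visibility': 1, 'intervention': 0}
```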
Table 2. System functions, governance challenges, and failure risks.
| System Function | Core Governance Challenge | Dominant Failure Mode | Accountability Risk Level |
|---|---|---|---|
| Sensing | Ensuring data relevance, provenance, and interpretability when automated data collection replaces human observation | Data opacity | High |
| Interpreting | Preventing over-reliance on algorithmic pattern recognition without contextual judgement | Pattern fetishism | Medium |
| Deciding | Preserving human authority and responsibility when algorithms recommend or pre-empt decisions | Accountability diffusion; Decision drift | High |
| Executing | Maintaining flexibility and override capacity during automated or semi-automated action | Execution rigidity; Override inhibition | Medium |
| Monitoring | Detecting performance degradation, drift, or unintended consequences over time | Monitoring gaps | High |
| Adapting | Enabling timely system modification and organisational learning based on feedback | Feedback disconnection; Adaptation paralysis | High |
Table 3. Governance mechanisms across decision types.
| Decision Type | Visibility Requirement | Intervention Requirement | Accountability Requirement | Feedback Requirement |
|---|---|---|---|---|
| Routine Decisions | Low: logging and audit trails sufficient | Low: batch-level exception handling | Medium: clear system ownership | Medium: periodic performance review |
| Consequential Decisions | High: real-time visibility into recommendation basis | Critical: human veto and override authority | Critical: named decision owner with documented rationale | High: case-level outcome tracking |
| Contested Decisions | Critical: shared visibility across stakeholders | Critical: formal escalation pathways | Critical: multi-actor accountability clarity | Critical: continuous feedback and review |
Table 4. Failure mode typology with empirical validation.
| Failure Mode | Operational Definition | System Function Affected | Interview Frequency (n = 50) | Representative Interview Evidence | JRC Discontinuation Citation Rate (n = 37) | Correspondence Level | Diagnostic Indicators |
|---|---|---|---|---|---|---|---|
| Data opacity | Inability to trace how input data shapes algorithmic outputs; data provenance undocumented | Sensing | 23 mentions (46%) | Res37: “awareness and clarity” dependent on information systems that surface relevant data; Res44: historical data may embed biases invisible to operators | 11 cases (30%) | HIGH | Undocumented data sources; no data lineage tracking; inability to explain output–input relationships |
| Pattern fetishism | Over-reliance on algorithmic pattern detection without contextual validation; treating correlations as causal | Interpreting | 18 mentions (36%) | Res23: “you meet 10 customers in one day, and your perspective changes totally from what you have been speaking about”; Res32: patterns require judgement about “consumer needs, market conditions” | 7 cases (19%) | MODERATE | Decisions based solely on algorithmic scores; no human review of pattern validity; context factors ignored |
| Decision drift | Gradual expansion of algorithmic decision scope beyond original design parameters without governance review | Deciding | 31 mentions (62%) | Res27: automation enabling focus on “other things” may lead to unchecked scope expansion; Res39: “final say of senior management” must be preserved | 9 cases (24%) | HIGH | Scope creep without authorisation; decisions originally flagged for human review now automated; no periodic scope audits |
| Accountability diffusion | Unclear responsibility attribution when decisions involve both algorithmic and human components | Deciding | 27 mentions (54%) | Res21: “empowering the right people” while maintaining accountability; Res12: “autonomy you grant your people” must coexist with clear accountability | 12 cases (32%) | HIGH | No named decision owner; responsibility attributed to “the system”; inability to identify who approved specific decisions |
| Execution rigidity | Inability to modify automated execution when contextual factors warrant deviation | Executing | 14 mentions (28%) | Res14: “seamless transition of information” requires flexibility; Res41: execution must allow “reassess your assumptions” | 4 cases (11%) | MODERATE | No exception handling capability; rigid workflow with no deviation path; local adaptation impossible |
| Override inhibition | Technical or cultural barriers preventing human override of algorithmic recommendations | Executing | 11 mentions (22%) | Res12: avoiding “restrict and unduly govern” while Res39 notes authority barriers at senior levels | 5 cases (14%) | MODERATE | Override function available but unused; cultural pressure to accept algorithmic recommendations; override requires excessive justification |
| Monitoring gaps | Insufficient tracking of system outputs and outcomes to detect performance degradation or drift | Monitoring | 22 mentions (44%) | Res4: need for “systems for performance monitoring, systems to report risks”; Res49: “continual ability to monitor” essential | 8 cases (22%) | HIGH | No outcome tracking; performance metrics not collected; drift detection absent |
| Feedback disconnection | Failure to incorporate outcome data back into system modification and learning | Monitoring/Adapting | 19 mentions (38%) | Res17: recognising when to “stop here” and “pivot”; Res49: “feedback process will continue to be the main forces behind the change” | 14 cases (38%) | HIGH | Outcomes collected but not analysed; no mechanism to modify system based on results; learning loops absent |
| Capability atrophy | Degradation of human expertise and judgement capacity through disuse as algorithmic systems take over | Adapting | 16 mentions (32%) | Res35: need to “continuously improve the capacity of your staff”; Res42: leaders must “intentionally step aside” to maintain perspective | 3 cases (8%) | LOW | Declining human expertise in domain; staff unable to evaluate algorithmic outputs; institutional knowledge loss |
| Escalation failure | Absence or dysfunction of pathways for elevating decisions beyond algorithmic processing | Deciding | 24 mentions (48%) | Res38: “many management levels” impeding escalation; Res43: waiting for “all the data” delays decisions | 8 cases (22%) | HIGH | No escalation protocol; unclear escalation triggers; escalated decisions returned to algorithmic processing |
| Transparency theatre | Nominal compliance with transparency requirements without meaningful accessibility or comprehensibility | Interpreting | 9 mentions (18%) | Implicit in Res22’s emphasis on genuine “democratizing information” versus nominal access | 6 cases (16%) | MODERATE | Documentation exists but incomprehensible; transparency reports not read; explanation interfaces unused |
| Adaptation paralysis | Inability to modify system behaviour despite clear evidence of performance problems | Adapting | 21 mentions (42%) | Res1: “being inflexible” leading to “downfall”; Res50: “comfort paradox” inhibiting necessary risk-taking | 4 cases (11%) | MODERATE | Known problems not addressed; change requests rejected; system ossification despite environmental change |
Note: High correspondence = ≥20 interview mentions AND ≥20% JRC discontinuation citation. Moderate correspondence = ≥10 interview mentions AND 10–19% JRC citation. Low correspondence = <10% JRC citation regardless of interview frequency. JRC discontinuation citation rate based on n = 37 discontinued implementations where discontinuation reasons were documented.
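The note's classification rule can be read as a simple decision procedure, sketched below. Combinations the note does not cover (e.g., a JRC citation rate above 20% with fewer than 20 interview mentions) are left unclassified rather than guessed:

```python
# Sketch of the correspondence rule stated in the note to Table 4. Inputs are
# the interview mention count (of n = 50) and the JRC citation rate (share of
# the 37 documented discontinuations citing the mode).

def correspondence(mentions: int, jrc_rate: float) -> str:
    if jrc_rate < 0.10:                      # low regardless of mention count
        return "LOW"
    if mentions >= 20 and jrc_rate >= 0.20:
        return "HIGH"
    if mentions >= 10 and 0.10 <= jrc_rate < 0.20:
        return "MODERATE"
    return "UNCLASSIFIED"                    # combinations the note leaves open

print(correspondence(23, 11 / 37))  # data opacity -> HIGH
print(correspondence(16, 3 / 37))   # capability atrophy -> LOW
```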