Authentic Intelligence in Digital Strategy Systems: A Socio-Technical Analysis of Human-Accountable Decision Governance
Abstract
1. Introduction
2. Theoretical Framing
2.1. Socio-Technical Systems and Digital Strategy
2.2. Authentic Intelligence: Construct Definition and Boundaries
2.3. Decision Governance in Digital Systems
3. Methods
3.1. Research Design and System-Oriented Analytical Protocol
3.2. Primary Data: Semi-Structured Interviews
3.3. Secondary Data: JRC AI Implementation Catalogue
3.4. Stage 1: Mechanism-Revealing Thematic Analysis
3.5. Stage 2: Configurational Cross-Case Mapping
3.6. Stage 3: Failure Mode Triangulation
- Extracted documented reasons for discontinuation from catalogue fields and, where available, linked source documents, including press reports, audit documents, and government statements.
- Coded each discontinuation reason against the failure mode typology developed from the interview analysis (a minimal sketch of this coding-and-correspondence step follows the list).
- Assessed correspondence between interview-derived failure modes and documented discontinuation patterns.
- Identified additional failure modes present in the catalogue data but absent from interview accounts.
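For concreteness, the coding-and-correspondence step can be expressed as a small routine. This is a minimal sketch only: the record shape, the `coded_reasons` field, and the case identifiers are illustrative assumptions, not the JRC catalogue schema.

```python
# Sketch of Stage 3 triangulation: count coded discontinuation reasons and
# flag failure modes that appear in the catalogue but not in interviews.
from collections import Counter

# Failure-mode typology derived from the Stage 1 interview analysis.
INTERVIEW_FAILURE_MODES = {
    "data_opacity", "pattern_fetishism", "decision_drift",
    "accountability_diffusion", "execution_rigidity", "override_inhibition",
    "monitoring_gaps", "feedback_disconnection", "capability_atrophy",
    "escalation_failure", "transparency_theatre", "adaptation_paralysis",
}

def triangulate(discontinued_cases: list[dict]) -> tuple[Counter, set]:
    """Tally coded discontinuation reasons; collect catalogue-only modes."""
    coded = Counter()
    novel_modes = set()
    for case in discontinued_cases:
        for reason in case.get("coded_reasons", []):
            coded[reason] += 1
            if reason not in INTERVIEW_FAILURE_MODES:
                novel_modes.add(reason)  # absent from interview accounts
    return coded, novel_modes

# Hypothetical catalogue entries with coded discontinuation reasons.
cases = [
    {"case_id": "JRC-014", "coded_reasons": ["data_opacity", "monitoring_gaps"]},
    {"case_id": "JRC-029", "coded_reasons": ["vendor_lock_in"]},  # not in typology
]
counts, novel = triangulate(cases)
print(counts.most_common())  # input for the correspondence assessment
print(novel)                 # {'vendor_lock_in'} -> candidate typology extension
```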
3.7. Research Quality and Reflexivity
4. Findings
4.1. System Function 1: Sensing
4.2. System Function 2: Interpreting
4.3. System Function 3: Deciding
4.4. System Function 4: Executing
4.5. System Function 5: Monitoring
4.6. System Function 6: Adapting
5. Systems Model Development
5.1. Architecture of Authentic Intelligence
5.2. Decision Loci
5.3. Governance Framework Derivation
5.4. Failure Modes
6. Discussion
6.1. Contributions to Socio-Technical Systems Theory
6.2. Contributions to Digital Strategy Research
6.3. Practical Implications
6.4. Implications for Regulatory Compliance
- Article 14(1): human oversight design requirements map to visibility and intervention mechanisms across all system functions;
- Article 14(2): appropriate understanding requirements map to interpreting-function governance and contextual judgement integration;
- Article 14(3): intervention capacity requirements map to override protocols, escalation pathways, and decision loci allocation;
- Article 14(4): override and correction requirements map to feedback mechanisms, the adaptation function, and failure mode monitoring (the full mapping is sketched as a data structure after this list).
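For traceability audits, the mapping above can be carried as a simple lookup table. The sketch below restates the four bullets as data; the names and structure are illustrative assumptions, not part of the Act's text or any existing compliance tooling.

```python
# Article 14 traceability map: each paragraph points at the governance
# elements of the framework that evidence compliance with it.
ART14_TRACEABILITY = {
    "Article 14(1)": ["visibility mechanisms (all six system functions)",
                      "intervention mechanisms (all six system functions)"],
    "Article 14(2)": ["interpreting-function governance",
                      "contextual judgement integration"],
    "Article 14(3)": ["override protocols", "escalation pathways",
                      "decision loci allocation"],
    "Article 14(4)": ["feedback mechanisms", "adaptation function",
                      "failure mode monitoring"],
}

def audit_targets(paragraph: str) -> list[str]:
    """Return the governance elements to evidence for a given paragraph."""
    return ART14_TRACEABILITY.get(paragraph, [])

print(audit_targets("Article 14(3)"))
```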
7. Limitations and Boundary Conditions
7.1. Methodological Limitations
7.2. Boundary Conditions
7.3. Future Research
8. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References


| Governance Mechanism | Construct Dimension | JRC Catalogue Field | Operationalisation Categories | Interview Evidence Basis |
|---|---|---|---|---|
| VISIBILITY | Algorithmic process observability | Human Oversight Type | None; Logging only; Real-time dashboard; Explanation interface | Res4, Res21, Res42 described visibility requirements varying by decision consequentiality |
| VISIBILITY | Information accessibility | Transparency Provisions | Not documented; Internal only; Public disclosure | Res22, Res37 emphasised information democratisation needs |
| VISIBILITY | Decision rationale traceability | Audit Trail Completeness | None; Inputs only; Inputs + outputs; Full decision pathway | Res15, Res39 specified rationale documentation requirements |
| INTERVENTION | Override capability scope | Override Capability | None; Batch override; Case-level override; Real-time override | Res12, Res27 described override needs by decision type |
| INTERVENTION | Escalation pathway formalisation | Escalation Pathway | Not documented; Informal; Formal single-level; Formal multi-level | Res38, Res43 identified escalation barriers |
| INTERVENTION | Human veto authority | Decision Authority Distribution | Algorithmic final; Advisory to human; Joint human-algorithm; Human final | Res39 emphasised preserved human authority for consequential decisions |
| ACCOUNTABILITY | Responsibility attribution clarity | Responsible Organisation | Not specified; Department-level; Named unit; Named individual | Res21, Res38 described accountability structure requirements |
| ACCOUNTABILITY | Consequence bearing assignment | Decision Authority | Algorithmic; Advisory; Joint; Human-final | Res12 specified autonomy-with-accountability principles |
| ACCOUNTABILITY | Documentation completeness | Rationale Recording | None; Outcome only; Decision + rationale; Full audit trail | Res15 emphasised success visualisation and documentation |
| FEEDBACK | Outcome monitoring scope | Monitoring Arrangements | None; Periodic review; Continuous automated; Continuous with human review | Res4, Res49 described monitoring system requirements |
| FEEDBACK | Learning loop formalisation | Adaptation Provisions | None; Error correction only; Parameter adjustment; Full retraining capability | Res17, Res35 described feedback-driven adaptation |
| FEEDBACK | Stakeholder input integration | Feedback Channel Availability | None; Internal only; External formal; Continuous multi-stakeholder | Res41 emphasised customer centricity in execution feedback |
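The operationalisation categories in each row are ordinal, which makes them straightforward to encode when scoring a catalogued deployment per mechanism. A minimal sketch follows, assuming one representative scale per mechanism; the enum names and the profile fields are illustrative, not the JRC field encodings.

```python
# Ordered scales for the four governance mechanisms, taken from the
# "Operationalisation Categories" column above.
from enum import IntEnum

class Visibility(IntEnum):
    NONE = 0; LOGGING_ONLY = 1; REALTIME_DASHBOARD = 2; EXPLANATION_INTERFACE = 3

class Intervention(IntEnum):
    NONE = 0; BATCH_OVERRIDE = 1; CASE_LEVEL_OVERRIDE = 2; REALTIME_OVERRIDE = 3

class Accountability(IntEnum):
    NOT_SPECIFIED = 0; DEPARTMENT_LEVEL = 1; NAMED_UNIT = 2; NAMED_INDIVIDUAL = 3

class Feedback(IntEnum):
    NONE = 0; PERIODIC_REVIEW = 1; CONTINUOUS_AUTOMATED = 2; CONTINUOUS_WITH_HUMAN_REVIEW = 3

# A hypothetical deployment scored on all four mechanisms:
profile = {
    "visibility": Visibility.LOGGING_ONLY,
    "intervention": Intervention.CASE_LEVEL_OVERRIDE,
    "accountability": Accountability.NAMED_UNIT,
    "feedback": Feedback.PERIODIC_REVIEW,
}
print({k: int(v) for k, v in profile.items()})
```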
| System Function | Core Governance Challenge | Dominant Failure Mode | Accountability Risk Level |
|---|---|---|---|
| Sensing | Ensuring data relevance, provenance, and interpretability when automated data collection replaces human observation | Data opacity | High |
| Interpreting | Preventing over-reliance on algorithmic pattern recognition without contextual judgement | Pattern fetishism | Medium |
| Deciding | Preserving human authority and responsibility when algorithms recommend or pre-empt decisions | Accountability diffusion; Decision drift | High |
| Executing | Maintaining flexibility and override capacity during automated or semi-automated action | Execution rigidity; Override inhibition | Medium |
| Monitoring | Detecting performance degradation, drift, or unintended consequences over time | Monitoring gaps | High |
| Adapting | Enabling timely system modification and organisational learning based on feedback | Feedback disconnection; Adaptation paralysis | High |
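Read as data, the table above is a review-checklist generator: given a system function, it returns the dominant failure modes to probe and the accountability risk level. A compact encoding, with labels copied from the table:

```python
# Function-to-failure-mode map for structuring governance reviews.
FUNCTION_GOVERNANCE = {
    "sensing":      {"failure_modes": ["data opacity"], "risk": "high"},
    "interpreting": {"failure_modes": ["pattern fetishism"], "risk": "medium"},
    "deciding":     {"failure_modes": ["accountability diffusion", "decision drift"], "risk": "high"},
    "executing":    {"failure_modes": ["execution rigidity", "override inhibition"], "risk": "medium"},
    "monitoring":   {"failure_modes": ["monitoring gaps"], "risk": "high"},
    "adapting":     {"failure_modes": ["feedback disconnection", "adaptation paralysis"], "risk": "high"},
}

high_risk = [f for f, v in FUNCTION_GOVERNANCE.items() if v["risk"] == "high"]
print(high_risk)  # ['sensing', 'deciding', 'monitoring', 'adapting']
```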
| Decision Type | Visibility Requirement | Intervention Requirement | Accountability Requirement | Feedback Requirement |
|---|---|---|---|---|
| Routine Decisions | Low: logging and audit trails sufficient | Low: batch-level exception handling | Medium: clear system ownership | Medium: periodic performance review |
| Consequential Decisions | High: real-time visibility into recommendation basis | Critical: human veto and override authority | Critical: named decision owner with documented rationale | High: case-level outcome tracking |
| Contested Decisions | Critical: shared visibility across stakeholders | Critical: formal escalation pathways | Critical: multi-actor accountability clarity | Critical: continuous feedback and review |
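The decision-loci table implies a gating rule: classify the decision, then apply the minimum governance configuration for its type. The sketch below follows that reading; the levels mirror the table, while the default-to-contested behaviour for unclassified decisions is an added assumption, not a finding of the study.

```python
# Minimum governance configuration per decision type (levels from the table).
REQUIREMENTS = {
    "routine":       {"visibility": "low", "intervention": "low",
                      "accountability": "medium", "feedback": "medium"},
    "consequential": {"visibility": "high", "intervention": "critical",
                      "accountability": "critical", "feedback": "high"},
    "contested":     {"visibility": "critical", "intervention": "critical",
                      "accountability": "critical", "feedback": "critical"},
}

def minimum_governance(decision_type: str) -> dict:
    """Look up the minimum governance configuration for a decision type."""
    try:
        return REQUIREMENTS[decision_type]
    except KeyError:
        # Assumption: unclassified decisions get the most demanding profile.
        return REQUIREMENTS["contested"]

print(minimum_governance("consequential"))
```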
| Failure Mode | Operational Definition | System Function Affected | Interview Frequency (n = 50) | Representative Interview Evidence | JRC Discontinuation Citation Rate (n = 37) | Correspondence Level | Diagnostic Indicators |
|---|---|---|---|---|---|---|---|
| Data opacity | Inability to trace how input data shapes algorithmic outputs; data provenance undocumented | Sensing | 23 mentions (46%) | Res37: “awareness and clarity” dependent on information systems that surface relevant data; Res44: historical data may embed biases invisible to operators | 11 cases (30%) | HIGH | Undocumented data sources; no data lineage tracking; inability to explain output–input relationships |
| Pattern fetishism | Over-reliance on algorithmic pattern detection without contextual validation; treating correlations as causal | Interpreting | 18 mentions (36%) | Res23: “you meet 10 customers in one day, and your perspective changes totally from what you have been speaking about”; Res32: patterns require judgement about “consumer needs, market conditions” | 7 cases (19%) | MODERATE | Decisions based solely on algorithmic scores; no human review of pattern validity; context factors ignored |
| Decision drift | Gradual expansion of algorithmic decision scope beyond original design parameters without governance review | Deciding | 31 mentions (62%) | Res27: automation enabling focus on “other things” may lead to unchecked scope expansion; Res39: “final say of senior management” must be preserved | 9 cases (24%) | HIGH | Scope creep without authorisation; decisions originally flagged for human review now automated; no periodic scope audits |
| Accountability diffusion | Unclear responsibility attribution when decisions involve both algorithmic and human components | Deciding | 27 mentions (54%) | Res21: “empowering the right people” while maintaining accountability; Res12: “autonomy you grant your people” must coexist with clear accountability | 12 cases (32%) | HIGH | No named decision owner; responsibility attributed to “the system”; inability to identify who approved specific decisions |
| Execution rigidity | Inability to modify automated execution when contextual factors warrant deviation | Executing | 14 mentions (28%) | Res14: “seamless transition of information” requires flexibility; Res41: execution must allow “reassess your assumptions” | 4 cases (11%) | MODERATE | No exception handling capability; rigid workflow with no deviation path; local adaptation impossible |
| Override inhibition | Technical or cultural barriers preventing human override of algorithmic recommendations | Executing | 11 mentions (22%) | Res12: avoiding “restrict and unduly govern” while Res39 notes authority barriers at senior levels | 5 cases (14%) | MODERATE | Override function available but unused; cultural pressure to accept algorithmic recommendations; override requires excessive justification |
| Monitoring gaps | Insufficient tracking of system outputs and outcomes to detect performance degradation or drift | Monitoring | 22 mentions (44%) | Res4: need for “systems for performance monitoring, systems to report risks”; Res49: “continual ability to monitor” essential | 8 cases (22%) | HIGH | No outcome tracking; performance metrics not collected; drift detection absent |
| Feedback disconnection | Failure to incorporate outcome data back into system modification and learning | Monitoring/Adapting | 19 mentions (38%) | Res17: recognising when to “stop here” and “pivot”; Res49: “feedback process will continue to be the main forces behind the change” | 14 cases (38%) | HIGH | Outcomes collected but not analysed; no mechanism to modify system based on results; learning loops absent |
| Capability atrophy | Degradation of human expertise and judgement capacity through disuse as algorithmic systems take over | Adapting | 16 mentions (32%) | Res35: need to “continuously improve the capacity of your staff”; Res42: leaders must “intentionally step aside” to maintain perspective | 3 cases (8%) | LOW | Declining human expertise in domain; staff unable to evaluate algorithmic outputs; institutional knowledge loss |
| Escalation failure | Absence or dysfunction of pathways for elevating decisions beyond algorithmic processing | Deciding | 24 mentions (48%) | Res38: “many management levels” impeding escalation; Res43: waiting for “all the data” delays decisions | 8 cases (22%) | HIGH | No escalation protocol; unclear escalation triggers; escalated decisions returned to algorithmic processing |
| Transparency theatre | Nominal compliance with transparency requirements without meaningful accessibility or comprehensibility | Interpreting | 9 mentions (18%) | Implicit in Res22’s emphasis on genuine “democratizing information” versus nominal access | 6 cases (16%) | MODERATE | Documentation exists but incomprehensible; transparency reports not read; explanation interfaces unused |
| Adaptation paralysis | Inability to modify system behaviour despite clear evidence of performance problems | Adapting | 21 mentions (42%) | Res1: “being inflexible” leading to “downfall”; Res50: “comfort paradox” inhibiting necessary risk-taking | 4 cases (11%) | MODERATE | Known problems not addressed; change requests rejected; system ossification despite environmental change |
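The diagnostic indicators in the final column lend themselves to automated screening. The sketch below turns a handful of them into predicates over a hypothetical system-audit record; all field names and thresholds are illustrative assumptions rather than an operational audit schema.

```python
# Diagnostic predicates for a subset of the failure modes above, evaluated
# against a system-audit record (a plain dict in this sketch).
AUDIT_RULES = {
    "data_opacity": lambda a: not a.get("data_lineage_tracked", False),
    "decision_drift": lambda a: a.get("automated_scope", 0) > a.get("authorised_scope", 0),
    "accountability_diffusion": lambda a: a.get("decision_owner") is None,
    "monitoring_gaps": lambda a: not a.get("outcome_metrics_collected", False),
    "feedback_disconnection": lambda a: a.get("outcome_metrics_collected", False)
                                        and not a.get("learning_loop_active", False),
}

def screen(audit_record: dict) -> list[str]:
    """Return the failure modes whose diagnostic predicate fires."""
    return [mode for mode, test in AUDIT_RULES.items() if test(audit_record)]

# Hypothetical audit record for a deployed scoring system:
record = {"data_lineage_tracked": True, "automated_scope": 5,
          "authorised_scope": 3, "decision_owner": "Head of Credit Risk",
          "outcome_metrics_collected": True, "learning_loop_active": False}
print(screen(record))  # ['decision_drift', 'feedback_disconnection']
```

Screening of this kind would only flag candidates for human review; consistent with the paper's human-accountable stance, it cannot substitute for the judgement it is meant to protect.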