Systematic Review

AI-Driven Leadership: Decision-Making, Competencies, and Ethical Challenges—A Systematic Review

by
António Sacavém
1,2,3,*,
Andreia de Bem Machado
4,
João Rodrigues dos Santos
1,2,3,5,
Ana Palma-Moreira
1 and
Manuel Au-Yong-Oliveira
6,7
1
Faculty of Social Sciences and Technology, Universidade Europeia, 1500-210 Lisboa, Portugal
2
CETRAD-EUROPEIA, 1500-210 Lisboa, Portugal
3
CETRAD-UTAD, 5000-801 Vila Real, Portugal
4
Department of Engineering and Knowledge Management, Universidade Federal de Santa Catarina, Florianópolis 88040-900, Brazil
5
CESOP/UCP, 1649-023 Lisboa, Portugal
6
Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), 4200-465 Porto, Portugal
7
Research Unit on Governance, Competitiveness and Public Policies (GOVCOPP), Department of Economics, Management, Industrial Engineering and Tourism (DEGEIT), University of Aveiro, 3810-193 Aveiro, Portugal
*
Author to whom correspondence should be addressed.
Adm. Sci. 2026, 16(4), 173; https://doi.org/10.3390/admsci16040173
Submission received: 9 February 2026 / Revised: 2 March 2026 / Accepted: 24 March 2026 / Published: 31 March 2026

Abstract

Background: Artificial intelligence (AI) is transforming leadership and raising critical questions about decision-making, leadership capabilities, and ethical accountability in increasingly digitalized organizations. Objective: This systematic review synthesizes peer-reviewed evidence to answer: How does AI integration transform leadership and decision-making in organizations? Methods: A PRISMA 2020-compliant systematic review was conducted using structured Boolean searches in Scopus and Web of Science Core Collection on 26 February 2026. Eligibility was restricted to English-language, peer-reviewed, open-access journal articles with an explicit AI–leadership integration signal. Records were deduplicated and screened by two reviewers, with full-text assessment conducted against predefined criteria. A qualitative, narrative (conceptual) synthesis integrated heterogeneous empirical and conceptual contributions. Results: From 452 records, 84 studies met inclusion criteria. The synthesis identified three recurring analytical dimensions: (i) AI-augmented decision-making, (ii) leadership competencies and role shifts, and (iii) ethical challenges (accountability, transparency/opacity, fairness, privacy, and human agency). Integrating these dimensions, the review conceptualizes AI-driven leadership as a hybrid decision phenomenon in which AI accelerates and expands decision cycles, leaders reconfigure roles toward decision architecture and orchestration, and ethical conditions shape legitimacy, adoption, and authority dynamics. Conclusions: The review advances theory by specifying a mechanism-oriented model of AI-driven leadership and proposing testable propositions linking AI modality, role reconfiguration, and ethically conditioned legitimacy under key boundary conditions (e.g., sectoral stakes, governance capacity, and data/infrastructure readiness). 
Practically, it outlines an implementation pathway emphasizing decision criticality assessment, formalized human–AI task allocation, and institutionalized oversight mechanisms. Limitations: Findings are bounded by database selection and the open-access full-text constraint, which may under-represent paywalled scholarship.

1. Introduction

Artificial intelligence (AI) has significant power to transform, refine, and improve processes, products, and business models. The incorporation of AI into organizational leadership is reshaping the competencies required in technology-enhanced environments and transforming conventional decision-making frameworks. Beyond process automation, AI now supports predictive analytics, risk assessment, and complex decision-making (Dasborough, 2023), and leaders are called to adapt while upholding ethical, strategic, and human-centric values (Hossain et al., 2025b). The literature mostly examines how artificial intelligence can optimize managerial tasks such as data-driven decision-making and HR automation (Bankins et al., 2024); however, leadership theories have lagged behind AI’s rapid evolution within organizations (Quaquebeke & Gerpott, 2023). Most current models ignore the growing synergy between human leaders and AI-powered decision-support systems, presuming that leadership is an exclusively human capacity. To improve conceptual precision, this review distinguishes three analytically separable configurations through which AI enters leadership work. First, AI-augmented leadership refers to human leaders who retain decision authority while using AI tools to enhance sensing, analysis, and coordination. Second, AI-generated leadership insights refer to algorithmically produced recommendations, forecasts, or rankings that shape decisions by supplying evaluative or prescriptive inputs. Third, hybrid leadership systems refer to structured human–AI decision architectures in which authority, task allocation, and oversight are explicitly configured across human and algorithmic agents. These distinctions are used throughout the manuscript to clarify how AI can alter leadership work without conflating tool use, algorithmic advice, and decision-authority delegation. 
A multidimensional study on AI in the organizational field revealed that its impact extends far beyond individual managerial responsibilities, encompassing team collaboration, strategic leadership, and governance structures (Bankins et al., 2024). Furthermore, it is worth noting that AI-driven technologies shift power dynamics between leaders and followers (J. Liu et al., 2025), raising questions about leadership influence, accountability, and ethical oversight within organizations (Tsai et al., 2022). Researchers found that AI fundamentally changes decision-making and may distort authority and the dissemination of knowledge (Harari, 2024; Kissinger, 2022). Inappropriate management of AI integration may strengthen elite decision-making power at the cost of worker agency, worsening inequality rather than making decision-making more democratic (Acemoglu & Johnson, 2023). Even though artificial intelligence is becoming increasingly relevant in leadership, there are still research gaps. The first of these gaps concerns how leadership frameworks might be revised to incorporate AI-driven decision-making while maintaining ethical oversight (Hossain et al., 2025b). Only a few studies have investigated leadership change in response to AI’s growing influence on decision-making processes. Existing research focuses either on AI’s technical capabilities or on employees’ behavioral responses to AI adoption (Quaquebeke & Gerpott, 2023). Furthermore, the establishment of a holistic framework connecting AI-driven leadership adaptation with ethical governance remains inadequately investigated. Systematic reviews have revealed that, although various principles and governance considerations for AI exist, there is still a lack of alignment in their implementation within organizational leadership models and decision-making frameworks (Batool et al., 2023; Madanchian & Taherdoost, 2025). 
Research on the influence of artificial intelligence on strategic adaptability, human oversight, and leadership authority is fragmented across various disciplines, highlighting both ethical and practical gaps in current frameworks. Unlike earlier research that primarily focused on the technological developments or operational impacts of artificial intelligence, this review pursues three main objectives. First, it reconceptualizes leadership adaptation in AI-augmented environments, challenging the traditional notion of leadership as an exclusively human capability. Second, it examines how AI transforms leadership structures, decision-making authority, and organizational governance. Finally, it develops a conceptual foundation that integrates AI-driven leadership adaptation with ethical governance to enhance responsible leadership competencies and practices. Indeed, this review aims to provide a theoretical understanding of the leadership process in the technological era by positioning AI as a transformative element of leadership. Moreover, it sets the stage for future empirical research and provides policymakers and organizational leaders with useful information to help them address the changes AI is bringing to leadership and governance. This article is a PRISMA 2020–compliant systematic review that employs a qualitative, narrative (conceptual) synthesis for theory-building rather than effect-size estimation. Its theoretical contribution is to (i) conceptualize AI-driven leadership as a hybrid decision phenomenon, (ii) specify how AI-augmented decision-making, leadership competency/role reconfiguration, and ethical challenges jointly shape leadership outcomes, and (iii) identify boundary conditions and testable propositions that clarify when AI integration yields performance and legitimacy gains versus new vulnerabilities. 
The synthesis yields a compact thematic mapping of inductively derived subthemes across the three analytical dimensions and integrates them into a single pathway model derived from the narrative integrative synthesis. This article is structured as follows. Section 2 situates AI-driven leadership within the broader context of digital transformation and outlines the theoretical lenses that frame the review. Section 3 describes the systematic review methodology in line with PRISMA 2020, including eligibility criteria, information sources, search strategy, record management, screening and eligibility procedures, data extraction, and the qualitative narrative (conceptual) synthesis approach. Section 4 presents the results and discussion, structured around three analytical dimensions—AI-augmented decision-making, leadership competencies and role shifts, and ethical challenges—and culminates in an integrated conceptual framework that connects these dimensions. Section 5 discusses theoretical and practical implications, clarifies the boundary conditions of the synthesis, and outlines limitations and directions for future research. Finally, Section 6 concludes by summarizing the review’s main contributions.

2. Digital Transformation as the Structural Context for AI-Driven Leadership

It is increasingly evident that AI-driven leadership needs a solid theoretical foundation. While this study focuses on AI-oriented leadership, digital transformation provides the broader organizational context in which AI-enabled leadership emerges; AI-driven leadership is thus embedded in wider processes of digital organizational change. Approaching AI-driven leadership as an organizational phenomenon rather than solely a technological one is essential to capture its strategic and ethical implications. AI represents one of the most advanced manifestations of digital transformation (DT) because it reshapes decision-making authority, governance structures, and leadership skills, intervening directly in leadership activities (e.g., Verhoef et al., 2021). DT continues to dominate discussions in both the academic and business worlds. It is frequently defined as the integration of modern digital technologies across all organizational levels, leading to fundamental changes in how value is created and delivered (e.g., Verhoef et al., 2021). Indeed, the process is far more than just adopting new resources. As Rogers (2016) emphasizes, DT is fundamentally about strategy, implying that organizations must reconsider their entire approach, from business models to processes and organizational culture (Fitzgerald et al., 2014; Vial, 2021). These strategic transformations increasingly call for new models of leadership that can govern and benefit from AI-based systems (e.g., Quaquebeke & Gerpott, 2023). To develop an understanding of DT, particularly in relation to AI-driven leadership, it is important to draw on a range of theoretical perspectives that explain both the technological advancements and the organizational changes that accompany them. One of the most important theories for exploring DT is the Resource-Based View (RBV) (Barney, 1991). 
It explains that organizations can only maintain a competitive advantage if they possess and manage resources that are valuable, rare, difficult to imitate, and irreplaceable (Barney, 1991). According to Nevo and Wade (2010), information technology resources may improve organizational performance, but only when they are complemented with other elements such as (1) an effective organizational structure, (2) a productive culture, and (3) adequate capabilities to utilize IT assets for business goals. Therefore, organizations are invited to strategically leverage these capabilities to succeed in digital transformation (Bharadwaj, 2000). In AI-oriented leadership, these resources also include algorithmic systems that actively participate in decision-making processes (e.g., Jarrahi, 2018). This direct involvement of algorithmic systems in decision-making redefines the role of the leader, who becomes an orchestrator of human–AI interactions (Jarrahi, 2018; Raisch & Krakowski, 2021). Moreover, RBV suggests that the success of the DT process depends on the capacity to develop and exploit digital assets, which will allow for future sustainable benefits (Barney, 1991; Bharadwaj, 2000). While RBV focuses on the resources organizations need to attain competitive advantage, theoretical developments such as the dynamic capabilities concept (Teece et al., 1997) invite a refocusing on how organizations adapt these resources in environments characterized by continuous change. Dynamic capabilities denote an organization’s capacity to integrate, reconfigure, and renew its resources continuously in response to technological and market changes (Teece et al., 1997). This perspective is particularly relevant when an organization deals with digital disruption. 
Teece (2007) delineates three fundamental components of dynamic capabilities: (1) the capacity to perceive emerging opportunities or threats; (2) the capacity to capitalize on them through judicious decision-making; and (3) the capacity to adapt the organization’s resource base to align with the evolving external context. These capabilities become central to leaders who operate in organizational environments augmented by AI (Hossain et al., 2025a). In the era of digital disruption, these capabilities are particularly important, as they help organizations identify and leverage new technologies, such as artificial intelligence and blockchain (Warner & Wäger, 2019; Held et al., 2025). Dynamic capabilities enable organizations to adapt their structures to integrate new technologies, promoting a more resilient and focused business model. Empirical research reveals that organizations that achieve digital transformation often depend on agility and continuous learning, which are central to dynamic capabilities (Vial, 2021; Warner & Wäger, 2019; Held et al., 2025). Along with dynamic capabilities, organizational ambidexterity (O’Reilly & Tushman, 2013) is another fundamental perspective for understanding DT. The core idea of ambidexterity is that organizations need to balance their current strengths with the search for new opportunities, a tension March (1991) described as the exploration–exploitation dilemma. Companies need to use their current strengths to secure short-term benefits (exploitation), but they also need to pursue new ideas to remain viable in the long term (exploration). In the digital era, this balance becomes even more critical, since organizations need to continue strengthening their central processes while adopting new digital technologies with disruptive potential. However, in practice, this equilibrium is rarely stable and is likely to generate internal tensions. 
Organizational ambidexterity has been shown to contribute to innovation processes and global performance (O’Reilly & Tushman, 2013). Frequently, this means creating separate units or projects dedicated exclusively to the development of new ideas while existing operations continue to run efficiently. Research has revealed that organizations distinguish themselves in the DT domain when they adopt an ambidexterity mindset, reconciling the exploration of new opportunities with the refinement of established processes (Svahn et al., 2017; Rizana et al., 2025). Therefore, ambidexterity is a crucial element of the global theoretical framework, illustrating how organizations may adapt and reorganize to attain DT efficiently while preserving fundamental competencies (O’Reilly & Tushman, 2013). Likewise, it is important to consider external factors (e.g., competition), since consumer and institutional expectations may encourage organizations to accelerate digitalization processes (Li et al., 2025). Institutional Theory (IT) is therefore relevant, as it provides a valuable framework for understanding organizational change as a response to external pressures (DiMaggio & Powell, 1983). These pressures require organizations to modernize to stay relevant. IT explains that organizations adapt partly in response to economic imperatives and partly in response to coercive pressures (e.g., regulatory obligations), mimetic pressures (e.g., imitation of successful competitors), and normative pressures (e.g., digital practices becoming industry standards) (Xu et al., 2025). These external factors help explain why organizations adopt DT for reasons beyond strictly financial ones. Another theoretical perspective that provides a better understanding of the AI adoption process is the Technology-Organization-Environment (TOE) framework (Tornatzky & Fleischer, 1990). 
Within these adoption processes, leadership plays a critical role in interpreting algorithmic outputs, governing human–AI interactions, and strategically integrating AI into organizational decision-making structures (Jarrahi, 2018; Raisch & Krakowski, 2021; Hossain et al., 2025a). This model offers a more holistic view of the adoption process, highlighting that technological readiness, organizational characteristics (e.g., size, culture, and leadership support), and environmental conditions (e.g., market volatility) are essential factors (Mpanza, 2025). Hence, this framework integrates these elements, offering a clearer view of how internal capabilities and external pressures interact to shape DT in organizations (Namatovu & Kyambade, 2025). As DT unfolds, the ethical implications of using AI in leadership become increasingly relevant. Even though AI has the potential to improve decision-making, it raises questions about transparency, accountability, and equity (e.g., Machado et al., 2024). Ethical governance is therefore crucial to ensure that AI systems are used responsibly in leadership contexts, and leaders are invited to ensure that AI adoption is both effective and morally responsible (Madanchian & Taherdoost, 2025). Moreover, as AI systems become more central to organizational decision-making, businesses face challenges related to algorithmic bias and the need for human oversight to ensure leadership accountability (Binns et al., 2018; Floridi, 2023; Romeo & Lacko, 2025). To clarify how this review advances leadership theory: the integrative purpose of this theoretical background is not only to “situate” AI-driven leadership within DT, but also to support a theory-building synthesis of how leadership is reconfigured when algorithmic systems become active participants in organizational decision architectures. 
By connecting DT perspectives (RBV, dynamic capabilities, ambidexterity, IT, and TOE) with the review’s three analytical dimensions—AI-augmented decision-making, leadership competencies and role shifts, and ethical governance—the review advances leadership theory by shifting the focal unit of analysis from leaders as individual decision-makers to leaders as designers and stewards of hybrid human–AI decision systems. This framing supports a more mechanism-oriented understanding of leadership under AI integration, including the boundary conditions under which AI-enabled decision acceleration, role reconfiguration, and ethical accountability become mutually reinforcing or tension-generating dynamics in organizations. In synthesis, this theoretical background integrates various conceptual perspectives that, together, offer a more comprehensive view of DT. RBV offers a perspective on how organizations use digital resources to gain a competitive advantage, while dynamic capabilities explain the processes by which organizations adapt and renew these resources in response to change. Organizational ambidexterity adds to this by explaining how organizations balance exploration and exploitation during transformation. Institutional Theory and the TOE framework emphasize that DT is shaped not only by external pressures but also by how organizations mobilize their internal capabilities in response to them. Finally, the ethical dimensions of AI governance highlight the challenges and responsibilities organizations face when deploying AI in leadership. Together, these perspectives constitute a solid, integrated theoretical framework that informs this review, enabling a structured understanding of how AI reconfigures leadership, decision-making authority, and ethical governance in modern organizations.

3. Methodology

This study adopts a systematic review design and reports the review in conformity with PRISMA 2020 guidance (Page et al., 2021a). Reporting is aligned with the PRISMA 2020 statement paper, including the expanded checklist, and is read in conjunction with the PRISMA 2020 Explanation and Elaboration guidance (Page et al., 2021a, 2021b). The review addresses the following research question: How does AI integration transform leadership and decision-making in organizations? The purpose of the review is to consolidate and conceptually integrate peer-reviewed research on AI in organizational leadership, mapping recurring patterns across heterogeneous empirical and conceptual contributions and developing an integrative understanding of AI-driven leadership as a hybrid decision phenomenon involving human judgment, algorithmic capability, and ethical accountability. Because the objective is conceptual integration and theory development rather than effect estimation, the review adopts a qualitative, narrative (conceptual) synthesis approach. A systematic review design is appropriate because AI–leadership evidence is interdisciplinary and terminologically dispersed across management, information systems, and adjacent organizational literatures, requiring transparent and reproducible identification and selection procedures; narrative (conceptual) synthesis is aligned with integrating heterogeneous empirical and conceptual contributions for theory development rather than effect-size aggregation (Grant & Booth, 2009; Snyder, 2019; Tranfield et al., 2003). Although the field remains methodologically heterogeneous, the corpus is sufficiently developed to warrant systematic identification and structured qualitative synthesis aimed at conceptual integration rather than statistical comparability.

3.1. Eligibility Criteria

Eligibility criteria were specified a priori. Studies were eligible if they: (a) were peer-reviewed open-access journal articles published in English; (b) were situated in organizational or workplace leadership contexts; (c) substantively examined AI/algorithmic systems within leadership roles and/or managerial decision-making, including ethical accountability and oversight considerations; (d) demonstrated an explicit AI–leadership integration signal at the title or abstract level. Both empirical and conceptual/theoretical studies were eligible, reflecting the field’s emergent and heterogeneous character. No publication-year limits were applied (database inception to the search date), and no subject-area/category filters were imposed to avoid excluding relevant interdisciplinary work. Open-access availability was treated as an a priori eligibility constraint to ensure that all included evidence could be examined in full and that extraction and interpretation could be independently verified. This methodological choice is grounded in Open Science principles emphasizing accessibility, scrutiny, and reproducibility of scientific knowledge (UNESCO, 2021) and in established guidance that openness and reproducibility practices strengthen the reliability and auditability of research claims (Munafò et al., 2017; National Academies of Sciences, Engineering, and Medicine, 2018, 2019; Nosek et al., 2015). In evidence synthesis, access to full texts supports transparent appraisal and traceable interpretation, particularly when conceptual integration depends on full-text context (Page et al., 2021b). At the same time, restricting inclusion to open-access full texts may introduce availability bias by under-representing paywalled scholarship; this limitation is acknowledged and should be considered when interpreting the scope of the synthesis (Langham-Putrow et al., 2021; Piwowar et al., 2018). 
Secondary studies (e.g., systematic/scoping/narrative reviews, bibliometric/science mapping studies, and meta-analyses) were excluded to avoid double-counting and to ensure the synthesis is grounded in original empirical and conceptual contributions.

3.2. Information Sources and Search Strategy

Structured searches were conducted on 26 February 2026 in Scopus and Web of Science Core Collection. These databases were selected for their broad interdisciplinary coverage of peer-reviewed scholarship in leadership, management, organizational behavior, and information systems, and for their suitability for reproducible systematic retrieval using structured Boolean logic. Using both Scopus and Web of Science Core Collection also mitigates database-specific indexing bias by cross-validating retrieval across two curated sources. This selection aligns with established systematic review guidance in management and organizational research (Tranfield et al., 2003). A structured Boolean search strategy combined two primary concept blocks (leadership-related terminology; AI-related terminology) together with an organizational/decision context component to improve precision. Database-specific topical discovery fields were applied as follows, reported verbatim line-by-line: Scopus (Document Search; TITLE-ABS): TITLE-ABS ((“leadership”) AND (“artificial intelligence”) AND (“organization” OR “organizational” OR “decision-making”)); Web of Science Core Collection (Title/Abstract fields; TI and AB): ((TI = (leadership) OR AB = (leadership)) AND (TI = (“artificial intelligence”) OR AB = (“artificial intelligence”)) AND (TI = (organization OR organizational OR “decision-making” OR “decision making”) OR AB = (organization OR organizational OR “decision-making” OR “decision making”))). Interface filters applied at the search stage in both databases were: Open Access enabled; Document type: Article; Language: English; Source type: Journal (where applicable); and no publication-year restrictions (database inception to the search date). An AI-assisted relevance ranking was used solely to prioritize the order of title/abstract screening; inclusion and exclusion decisions were made exclusively by applying the predefined eligibility criteria.
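Because the Boolean strategy is fully specified by its concept blocks, the query strings can be generated programmatically, which helps keep the two database searches term-for-term consistent. The sketch below is our illustration, not tooling used by the authors; it composes the reported Scopus TITLE-ABS query from the two concept blocks and the organizational/decision context block:

```python
# Illustrative only: rebuild the Scopus TITLE-ABS query from the three
# concept blocks reported in the search strategy above.
leadership_block = ['"leadership"']
ai_block = ['"artificial intelligence"']
context_block = ['"organization"', '"organizational"', '"decision-making"']

def scopus_title_abs(*blocks):
    """OR terms within each block, AND the blocks, wrap in TITLE-ABS."""
    joined = " AND ".join("(" + " OR ".join(terms) + ")" for terms in blocks)
    return f"TITLE-ABS ({joined})"

query = scopus_title_abs(leadership_block, ai_block, context_block)
# Reproduces the verbatim Scopus string reported in Section 3.2.
```

Keeping the blocks as data also makes it straightforward to render the same logic in Web of Science TI/AB field syntax from a single source of truth.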

3.3. Record Management, Cleaning, and Deduplication

Records were exported in RIS format and consolidated into a unified review workspace. Prior to screening, records were cleaned and prepared for traceable counting. Deduplication was performed using DOI and bibliographic matching, with manual resolution of ambiguous duplicate cases; when duplicates were identified, the most complete bibliographic record was retained. Duplicate detection included DOI comparison where available, and otherwise titles, authors, journal information, publication year, and bibliographic completeness. Deduplication procedures were designed to preserve PRISMA traceability by ensuring that record counts reflect unique records rather than database overlap (Page et al., 2021b). AI-enabled workflow support was used only to assist with operational tasks (e.g., organizing records and supporting screening prioritization). Importantly, this support was strictly assistive: the authors retained full responsibility for the identification log, all record-management decisions (including deduplication choices), all screening and eligibility judgments, and the interpretive synthesis. AI outputs did not constitute eligibility decisions and were not used to generate the synthesis; instead, they served to facilitate consistent documentation and traceability of the author-led review process in line with PRISMA 2020 reporting expectations (Page et al., 2021a) and established guidance on decision-support workflows in evidence synthesis (Marshall & Wallace, 2019).
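The deduplication rule described above (DOI comparison where available, otherwise bibliographic matching, retaining the most complete record) can be sketched as follows. This is a hypothetical illustration of the logic, not the authors' actual workflow, and the record field names are assumptions:

```python
# Hypothetical sketch: deduplicate by DOI where present, otherwise by a
# normalized title/year/journal key, keeping the most complete record.
def dedupe(records):
    """records: list of dicts with optional keys doi, title, year, journal, authors."""
    def key(r):
        if r.get("doi"):
            return ("doi", r["doi"].lower().strip())
        return ("biblio", r.get("title", "").lower().strip(),
                r.get("year"), r.get("journal", "").lower().strip())

    def completeness(r):
        # Simple completeness score: number of filled bibliographic fields.
        return sum(1 for f in ("doi", "title", "year", "journal", "authors") if r.get(f))

    best = {}
    for r in records:
        k = key(r)
        if k not in best or completeness(r) > completeness(best[k]):
            best[k] = r
    return list(best.values())
```

Ambiguous cases (e.g., near-identical titles with conflicting years) would still be routed to manual resolution, as described above.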

3.4. Study Selection (Screening and Eligibility)

Title/abstract screening was intentionally conservative to reduce premature exclusion given terminological dispersion across leadership, management, and organizational literatures. Screening followed two sequential decision criteria. Step 1 (core scope-fit) excluded records that did not reflect a workplace/organizational leadership context and/or did not substantively address AI or algorithmic systems in relation to leadership or decision-making. Step 2 (AI–leadership integration signal) excluded records passing Step 1 if the abstract did not indicate a meaningful link between AI integration and leadership transformation (e.g., AI-augmented decision-making, hybrid human–AI systems, accountability/oversight mechanisms, leadership competency evolution). Screening was conducted record-by-record using a structured decision log; each excluded record was assigned a primary reason code corresponding to the first unmet criterion to enable PRISMA-compliant accounting. Title/abstract screening and full-text eligibility assessment were conducted independently by two reviewers. Disagreements were resolved through discussion until consensus was reached. Title/abstract screening was piloted on a randomly selected subset of records to calibrate consistent application of the screening criteria prior to full screening; no changes were made to the predefined eligibility criteria following the pilot. Full-text reports retained after title/abstract screening were assessed for eligibility against the predefined inclusion criteria. Full-text assessment also verified study type; reports identified as secondary studies (e.g., systematic/scoping/narrative reviews or bibliometric/science mapping studies) were excluded at this stage to avoid double-counting. 
Full texts were sought through open repositories and publicly accessible archives consistent with the open-access retrieval constraint; records indexed as open access but not practically retrievable despite reasonable attempts were documented as ‘reports not retrieved’ in PRISMA reporting. Formal inter-rater reliability statistics (e.g., Cohen’s Kappa) were not computed because screening and eligibility decisions were finalized through consensus discussion rather than retained as independent parallel ratings for adjudication, consistent with standard systematic review practice described in the Cochrane Handbook (Higgins et al., 2023).
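The "first unmet criterion" rule for assigning primary reason codes can be expressed compactly. The sketch below is illustrative only; the reason-code labels (R1 to R3) and the record fields are hypothetical stand-ins for the criteria described above:

```python
# Hypothetical sketch of the two-step screening log: criteria are checked
# in order, and an excluded record receives the code of the FIRST failure.
CRITERIA = [
    ("R1_scope", lambda rec: rec["org_leadership_context"]),           # Step 1: scope fit
    ("R2_ai_substantive", lambda rec: rec["addresses_ai"]),            # Step 1: AI substantively addressed
    ("R3_integration_signal", lambda rec: rec["integration_signal"]),  # Step 2: AI-leadership link
]

def screen(record):
    """Return ('include', None) or ('exclude', primary_reason_code)."""
    for code, passes in CRITERIA:
        if not passes(record):
            return ("exclude", code)
    return ("include", None)
```

Because each exclusion carries exactly one primary code, the per-code counts sum to the total number of exclusions, which is what makes the PRISMA flow accounting traceable.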

3.5. Data Extraction (Data Items and Process)

Data extraction was designed to support structured qualitative synthesis aligned with the research question. Because this review is theory-building and non-quantitative, no outcome domains or effect measures were specified; extraction focused on conceptual variables, mechanisms, and reported leadership implications rather than effect-size metrics. For each included study, extraction captured: (i) AI-augmented decision-making (AI modality and its role in leadership decision processes); (ii) leadership competencies and role shifts (skills, roles, and capability changes under AI integration); (iii) ethical challenges (e.g., accountability, transparency, fairness, responsibility allocation, and human oversight); (iv) leadership functions affected (strategic, operational, developmental); and (v) the core theoretical or empirical contribution. Extracted fields were manually reviewed and refined to ensure conceptual fidelity to full-text meaning rather than metadata-only interpretations. The extraction template/codebook was piloted on a subset of included studies and refined prior to full extraction to improve field clarity and consistency. Data extraction was conducted by one reviewer and verified by a second reviewer. Discrepancies were resolved through discussion until consensus was reached. No study investigators were contacted to obtain or confirm information. Piloting and refinement of extraction templates is recommended in systematic review methods to reduce ambiguity in field definitions and improve consistency of extraction across heterogeneous evidence (Popay et al., 2006; J. Thomas & Harden, 2008).
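The five extraction fields (i)–(v) listed above can be pictured as a single structured record per included study. The sketch below is purely illustrative (field names are hypothetical and do not reproduce the authors' codebook):

```python
# Illustrative structure of the per-study extraction template described above.
from dataclasses import dataclass, fields

@dataclass
class ExtractionRecord:
    study_id: str
    ai_decision_making: str        # (i) AI modality and role in decision processes
    competencies_role_shifts: str  # (ii) skills, roles, capability changes
    ethical_challenges: str        # (iii) accountability, transparency, fairness, oversight
    leadership_functions: str      # (iv) strategic / operational / developmental
    core_contribution: str         # (v) core theoretical or empirical contribution

# Example entry (hypothetical content for illustration only).
example = ExtractionRecord(
    study_id="S001",
    ai_decision_making="predictive analytics as decision support",
    competencies_role_shifts="leaders as decision architects",
    ethical_challenges="accountability allocation under delegation",
    leadership_functions="strategic",
    core_contribution="conceptual framework",
)
```

Keeping the template flat and field-complete in this way is what allows the second reviewer's verification pass to proceed field by field rather than study by study.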

3.6. Synthesis Approach

Synthesis followed iterative thematic coding and narrative integration consistent with established qualitative synthesis methods (e.g., Barnett-Page & Thomas, 2009; J. Thomas & Harden, 2008; Elo & Kyngäs, 2008). Full texts were coded inductively, consolidated into higher-order themes, and integrated into an explanatory narrative. Constant comparison across study types and contexts refined theme boundaries and minimized over-generalization. Given heterogeneity in theoretical frameworks, operationalizations, and designs, statistical pooling was not methodologically appropriate. Theme development followed a hybrid deductive–inductive logic. The extraction framework specified a priori three analytical domains aligned with the research question (AI-augmented decision-making, leadership competencies and role shifts, and ethical challenges) to support systematic cross-study comparability. However, these domains were not treated as pre-specified conclusions: within and across domains, subthemes and integrative relationships were derived inductively through iterative coding of full-text evidence, constant comparison, and refinement of theme boundaries based on cross-study coverage and deviant cases. Accordingly, the retention of the three overarching themes reflects recurring structure in the included evidence base, while the internal composition of each theme (subthemes, mechanisms, and linkages) emerged from the corpus. This hybrid deductive–inductive logic is consistent with qualitative synthesis guidance that uses an a priori analytic scaffold for comparability while deriving thematic content and integrative relationships inductively from the evidence base (Barnett-Page & Thomas, 2009; Popay et al., 2006; J. Thomas & Harden, 2008).

3.7. Quality Appraisal, Reporting Bias, and Certainty of Evidence

Consistent with narrative (conceptual) synthesis guidance and evidence-informed systematic review methodology in management and organizational research, formal risk-of-bias tools are primarily intended to appraise threats to causal inference and effect estimation in quantitative synthesis. As this review aims to integrate conceptually heterogeneous evidence to generate mechanism-oriented and theory-building insights rather than pooled effects, no formal risk-of-bias tool, reporting-bias assessment, or certainty-of-evidence system (e.g., GRADE) was applied. Instead, the review operationalized interpretive adequacy safeguards embedded in the eligibility, screening, and extraction logic: (i) peer-reviewed journal status; (ii) explicit organizational leadership context; (iii) meaningful AI–leadership integration; and (iv) sufficient conceptual centrality and interpretive richness to support qualitative narrative (conceptual) synthesis. Potential retrieval and publication biases (database-only searching, English-language restriction, open-access-only inclusion) are acknowledged as limitations. In non-quantitative, theory-building reviews, risk-of-bias tools designed for causal effect estimation are often not methodologically aligned with mechanism-oriented conceptual integration; accordingly, transparency of selection/extraction procedures and interpretive adequacy safeguards are recommended as fit-for-purpose quality protections (Grant & Booth, 2009; Popay et al., 2006; Snyder, 2019).

3.8. Protocol and Registration

The review protocol was not prospectively registered (e.g., PROSPERO/OSF), and no standalone protocol document was publicly deposited. However, core methodological decisions (research question, eligibility criteria, search logic, staged screening rules, extraction framework, and synthesis approach) were specified a priori and applied consistently throughout the review process. No amendments were made to the review methods after initiation. The completed PRISMA 2020 expanded checklist, the PRISMA 2020 flow diagram (also presented as Figure 1 in the main manuscript), and the full list of reports excluded at full-text assessment with reasons are provided in the Supplementary Materials (Page et al., 2021a, 2021b).

4. Results and Discussion

In line with PRISMA 2020, the outcomes of study selection (identification, screening, eligibility, and inclusion) are reported in Section 4, while the procedures that produced these results are detailed in Section 3 (Methodology). Figure 1 summarizes the flow of records through identification, deduplication, title/abstract screening, full-text assessment, and final inclusion. Because this review is open-access-only by design, the evidence base reflects peer-reviewed studies with openly accessible full texts and should be interpreted accordingly. For transparency, the synthesis distinguishes between (i) the a priori analytical domains used to structure extraction and cross-study comparison (AI-augmented decision-making, leadership competencies and role shifts, and ethical challenges) and (ii) the inductively derived thematic content reported in the subsequent subsections. The three themes are retained because they consistently recur across the included studies; theme labels and boundaries were finalized only after verifying cross-study coverage and internal coherence within each theme.

4.1. Identification

Database searching returned 452 records in total (Scopus: n = 275; Web of Science Core Collection: n = 177). Records were exported in RIS format and consolidated into a unified review workspace. Following deduplication, 155 duplicates were removed, resulting in 297 unique records retained for title and abstract screening (see Figure 1).

4.2. Screening and Full-Text Eligibility Assessment

The 297 unique records were screened at the title/abstract level. At this stage, 189 records were excluded and 108 reports were sought for retrieval. Of these, 107 reports were retrieved in full text, while one report could not be retrieved despite reasonable attempts to locate an open-access full text from the Scopus record (reports not retrieved = 1). The 107 retrieved full texts were assessed for eligibility. During full-text assessment, 23 reports were excluded because they were identified as secondary studies (systematic/scoping/narrative reviews or bibliometric/science mapping studies). The full list of excluded full-text reports, with reasons, is provided in the Supplementary Materials to support PRISMA-compliant auditability. The final synthesis set therefore comprised 84 studies.
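As a simple cross-check (an illustrative arithmetic sketch, not part of the review workflow), the record flow reported in Sections 4.1 and 4.2 is internally consistent:

```python
# PRISMA 2020 flow accounting for the counts reported above.
identified = 275 + 177     # Scopus + Web of Science Core Collection
unique = identified - 155  # after removal of duplicates
sought = unique - 189      # after exclusions at title/abstract screening
retrieved = sought - 1     # one report not retrievable in full text
included = retrieved - 23  # secondary studies excluded at full-text assessment

print(identified, unique, sought, retrieved, included)  # 452 297 108 107 84
```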

4.3. Inclusion and Descriptive Characteristics of the Final Corpus

The final included corpus comprised 84 peer-reviewed open-access journal articles published in English and situated in organizational or workplace leadership contexts, with substantive AI/algorithmic relevance to leadership roles and/or managerial decision-making and an explicit AI–leadership integration signal at the title/abstract level. Secondary studies were excluded to avoid double-counting and to preserve a primary evidence base for synthesis. The included corpus (n = 84) reflects methodological and theoretical heterogeneity, comprising empirical studies (quantitative, qualitative, and mixed-methods) as well as conceptual/theoretical contributions (Table 1, below). Empirical investigations span multiple sectors (e.g., healthcare, education, public administration, and private-sector organizations), indicating the cross-sectoral relevance of AI-driven leadership research. Table 1 provides the full list of included studies to ensure transparency and traceability of the evidence base. The thematic synthesis is reported in the following subsections, structured around three analytical domains used to organize extraction and cross-study comparison (AI-augmented decision-making; leadership competencies and role shifts; ethical challenges), with subthemes derived inductively from the corpus.
Based on comparative analysis and cross-study synthesis of the 84 included studies, three overarching themes were identified to structure the qualitative narrative integration and to clarify the main ways in which AI is reshaping leadership in organizations. These themes capture recurrent patterns across heterogeneous empirical and conceptual contributions while preserving sensitivity to contextual variation across sectors and study designs. The three-theme structure reflects stable higher-order regularities in the corpus, although theme labels and boundaries were finalized only after iterative full-text coding, constant comparison, and verification of cross-study coverage and internal coherence. Accordingly, the sections that follow present the synthesis organized around: (i) AI-augmented decision-making, (ii) leadership competencies and role shifts, and (iii) ethical challenges (including accountability and oversight considerations), with the specific subthemes and linkages within and across these themes derived inductively from the extracted evidence rather than assumed in advance. The methodological spread of the final corpus (n = 84)—quantitative, qualitative, mixed-methods, and conceptual/theoretical work—suggests that AI–leadership scholarship is developing through multiple methodological logics rather than converging on a single empirical template, which reinforces the suitability of mechanism-oriented integration across diverse study designs. This methodological mix also has direct implications for interpretation within each analytical dimension.
Quantitative studies most often contribute empirical signals about associations between leadership-related factors and AI-related outcomes (e.g., readiness, engagement, performance, or adoption-related variables), whereas qualitative studies typically illuminate implementation processes, sensemaking dynamics, and contextual constraints that condition leadership work under AI integration. Conceptual/theoretical contributions primarily advance organizing frameworks and normative or design-oriented arguments that clarify role reconfiguration, accountability allocation, and interpretive requirements, while mixed-methods studies frequently bridge these perspectives by combining empirical patterns with contextual explanation. Accordingly, convergent patterns observed across designs are treated as more robust regularities, while design-specific insights are used to articulate mechanisms and boundary conditions in a way that remains faithful to the heterogeneity of the evidence base. Table 2 provides a compact synthesis of inductively derived subthemes within each analytical dimension and identifies representative studies to support transparent traceability across the final corpus.

4.3.1. AI-Augmented Decision-Making

Analytical Enhancement and Predictive Decision Support
Across the corpus, AI-augmented decision-making is the clearest mechanism for translating “AI integration” into observable changes in leadership and managerial work. Where modalities are specified, AI most often functions as decision support (e.g., assistants, dashboards, structured recommendation systems) and/or predictive analytics (e.g., forecasting, risk profiling, early warning), deployed at operational, strategic, or combined decision levels (e.g., healthcare command systems that shape both frontline allocation and system-wide planning). Operational augmentation is particularly visible where AI structures routine judgments and reduces subjective variability. For instance, algorithmic evaluation systems (e.g., neuro-fuzzy profiling) are presented as increasing objectivity and transparency in performance appraisal and as guiding development and advancement decisions (Escolar-Jimenez et al., 2019). Decision-support assistants in leadership contexts also foreground operational gains via more frequent, real-time interaction and feedback loops, with reported improvements extending beyond speed to perceived fairness and trust in HR-relevant processes (Dutta & Mishra, 2021). In education administration, AI is framed as a decision-support layer to streamline administrative work and improve resource distribution; simultaneously, ethical considerations and trust are positioned as core readiness conditions for leaders adopting such tools (Alshamsi, 2025; Arrooqi & Miqad Alruqi, 2025; Dai et al., 2025). In other operational domains, AI-enabled decision support and predictive techniques are positioned as compressing analysis cycles and improving accuracy (e.g., public-sector investigative decision support via data mining; administrative risk processing in government settings) (Brillianto et al., 2024; Ding, 2021). 
At strategic and cross-level interfaces, predictive analytics is repeatedly tied to forecasting, risk profiling, and resource orchestration—framing AI as an enabler of faster, more data-grounded strategic choices under uncertainty. Supply-chain work explicitly describes AI-powered decision support and predictive techniques as reducing response time and augmenting managerial decision-making through data-driven recommendations and automated reasoning/forecasting for uncertain choices (Dey et al., 2024). In healthcare system leadership, AI-enabled infrastructures are described as enabling rapid resource assessment and coordinated decision-making across units, supporting triage and placement, as well as proactive capacity management (Alharbi et al., 2025). Related healthcare settings similarly position AI as decision support for triage/history-taking and predictive risk assessment, while noting operational limitations (e.g., challenges in prioritizing complex cases) that constrain the reliability of AI outputs without careful calibration (Lindberg et al., 2020; Siira et al., 2024). Strategic augmentation is also foregrounded in digital transformation and leadership capability perspectives, where AI (often via analytics and predictive models) is framed as increasing the volume and quality of decision-making and enabling strategic innovation and business model adaptation (Black et al., 2024; Gaffley & Pelser, 2021, 2025; Hossain et al., 2025a; Hyiamang & Liu, 2025; Philippart, 2022).
Automation, Generative AI, and Partial Delegation of Authority
Generative AI and algorithmic decision-making mark the boundaries of the augmentation spectrum. Generative AI is treated as augmenting decision work enacted through communication, ideation, and rapid synthesis: in crisis communication, it generates executive-style messaging and solution proposals to enable real-time communication and preparedness via predictive risk analysis (Ülkü & Erol, 2025). In micro-firms, generative AI is linked to operational resilience and strategic agility, but its effectiveness is constrained by data quality and cost, underscoring that “augmentation” requires a minimum capability and infrastructure foundation (Shahzad et al., 2026). In value-laden organizational environments, leaders emphasize preserving human agency and authority while using generative tools to improve efficiency (Cheong & Liu, 2025). In contrast, algorithmic decision-making contributions explicitly discuss partial delegation of decision authority to AI systems (e.g., “algorithmic leadership” constructs and automation of leadership functions) (Flak & Pyszka, 2022; Riti et al., 2025; Verganti et al., 2020). Finally, explainability-oriented contributions position XAI-enabled decision support as strengthening transparency and trust calibration—particularly relevant when decision legitimacy depends on understanding the model’s rationale (Johannssen & Chukhrova, 2025; Kashyap et al., 2025).
Human–AI Decision Cycles: Sensing, Sensemaking, and Seizing
Mechanistically, the corpus supports interpreting AI-augmented decision-making as reshaping leadership decision cycles through sensing, sensemaking, and seizing. Sensing is strengthened when AI expands monitoring and forecasting capacity through real-time data aggregation, predictive modeling, and early warning systems, enabling earlier detection of risks/opportunities and reducing response times (Alharbi et al., 2025; Dey et al., 2024; Kesim, 2026; S. Kim et al., 2022; Trim & Lee, 2022). Sensemaking is reconfigured when AI translates complex and high-volume information into interpretable patterns and ranked alternatives—such as analytics insights used for managerial problem-solving and opportunity identification (Hossain et al., 2025a), decision-support reports used in clinical triage/history-taking (Siira et al., 2024), or AI systems that generate structured recommendations in administrative decision processes (Dai et al., 2025; Jia et al., 2022). Seizing occurs when leaders and organizations convert these model-mediated interpretations into allocation decisions, workflow redesign, or communication actions—especially salient in crisis contexts and in system coordination settings (Alharbi et al., 2025; Satish et al., 2025; S. Kim et al., 2022). Across these mechanisms, the most frequent reported impacts are speed/timeliness gains and accuracy/objectivity improvements, with more selective but important effects on coordination (e.g., cross-unit alignment), authority/power (e.g., delegation and changing reliance structures), and oversight (e.g., governance requirements) (Dey et al., 2024; S. Kim et al., 2022; J. Liu et al., 2025; Rožman et al., 2023a).
Ethical Moderation and Boundary Conditions Shaping Augmentation Outcomes
Boundary conditions consistently moderate whether these mechanisms yield performance benefits or generate new vulnerabilities. First, data quality and infrastructure constrain augmentation: generative AI benefits are explicitly limited by data quality and implementation cost in micro-firms (Shahzad et al., 2026), and clinician–leader evidence highlights bias, accuracy, and insufficient IT infrastructure as barriers to safe adoption of ML decision support in emergency workflows (Leonard et al., 2026). Second, governance and accountability design shape whether AI-driven speed/accuracy becomes institutionally legitimate. Public administration research highlights tensions between AI-driven efficiency and accountability norms, with institutionalization remaining uneven where accountability is unclear or integration mechanisms are weak (E. Kim, 2026). Military and public-sector transformation work similarly foregrounds ethical oversight challenges associated with delegation to non-human agents, emphasizing that decision architectures must be “ethical by design” if they are to be used responsibly at speed (Hovd, 2025; Vasilescu, 2025). Complementarily, work on XAI and governance frames explains how explainability, validation, and continuous evaluation serve as safeguards enabling the responsible use of AI recommendations in management and high-stakes decision settings (Hassanien et al., 2025; Johannssen & Chukhrova, 2025; Jongen, 2023; Kashyap et al., 2025; Wichtmann et al., 2026). Third, leadership readiness and capability alignment determine whether AI augmentation improves decisions or amplifies error risk. 
Leadership readiness is associated with perceived benefits and trust (Arrooqi & Miqad Alruqi, 2025; Iannello, 2026; Kotp et al., 2025), while AI-savvy or transformational leadership is repeatedly positioned as enabling this translation of AI potential into actual decision improvements (Abositta et al., 2024; Quttainah et al., 2025; Rožman et al., 2022; Somanathan et al., 2025). For leadership practice and theory, the cumulative implication is a shift from leaders as primary analysts to leaders as decision architects: those who design the human–AI division of labor, calibrate autonomy thresholds, and institutionalize oversight and explainability to sustain trust and accountability (Frimpong, 2025; Kashyap et al., 2025; Riti et al., 2025). This includes disciplined choices about what to automate and what to retain for human judgment, how to validate and monitor model outputs, and how to align AI-augmented decisions with organizational goals and stakeholder expectations. In parallel, several studies suggest the potential for shifts in authority and power relations as AI becomes an alternative source of informational and cognitive resources (e.g., reducing informational asymmetry and changing leader–follower reliance) (J. Liu et al., 2025; Petrat et al., 2022).

4.3.2. Evolution of Leadership Competencies

Emergent Competency Clusters Under AI Integration
A dominant pattern is the growing salience of AI literacy and data literacy as baseline leadership capabilities. Multiple studies depict leaders as increasingly required to interpret and act on algorithmic insights, whether through predictive models, analytics dashboards, or AI-mediated administrative tools (Alshamsi, 2025; Gaffley & Pelser, 2021, 2025; Kesim, 2026). In education settings, AI is positioned as streamlining administrative processes via data analysis and machine learning, with leader readiness shaped by perceived benefits, trust, and ethical concerns—implying that competent adoption requires a working understanding of AI’s informational value, limits, and institutional consequences (Ali, 2025; Arrooqi & Miqad Alruqi, 2025; Dai et al., 2025). In healthcare and clinical domains, leadership competence is repeatedly framed as spanning both technological and organizational fluency: leaders’ capacity to implement AI depends on integrating tools into workflows and supporting professional autonomy while addressing operational constraints (Iannello, 2026; Määttä et al., 2026; Siira et al., 2024; Stogiannos et al., 2025). A second competency domain is strategic sensemaking and systems thinking: AI shifts decision work from isolated judgments to ongoing interpretation of dynamic data streams and scenario outputs. In manufacturing transformation frameworks, leaders are portrayed as needing to understand how AI-enabled technologies (e.g., industrial AI, digital twins, infrastructure layers) reshape operational systems and workforce alignment, pushing leadership toward holistic digital transformation capability rather than episodic “tool adoption” (Gaffley & Pelser, 2021, 2025).
In tourism and hospitality, AI is explicitly framed as a conduit that operationalizes leadership intent through data-driven sensing and predictive/automation technologies, implying that leaders must translate strategic aims into reconfigured workflows and resource allocation (Seraj et al., 2025). These demands are echoed in work emphasizing proactive leadership and strategic discipline as conditions for AI-enabled transformation, rather than passive “support” alone (Black et al., 2024; Hyiamang & Liu, 2025; Philippart, 2022). A further competency domain to consider is ethical judgment and governance capability, which is repeatedly cited as the critical complement to technical proficiency. Several studies foreground ethical stewardship and accountability design as necessary to sustain legitimacy when AI mediates or partly automates decisions (Borkovich et al., 2024; Frimpong, 2025; Hovd, 2025; Riti et al., 2025). In government and public systems, AI integration is shown to create tensions between efficiency and accountability norms, requiring leaders to establish integration mechanisms and responsibility allocation that institutionalize rather than rely on ad hoc use (S. Kim et al., 2022; E. Kim, 2026; Brillianto et al., 2024). Explaining and auditing AI becomes important where decision consequences are high-stakes: XAI-oriented contributions emphasize interpretability, bias detection, and governance safeguards as enabling oversight, coordination, and shared understanding of recommendations (Hassanien et al., 2025; Johannssen & Chukhrova, 2025; Jongen, 2023; Kashyap et al., 2025; Wichtmann et al., 2026). Related work stresses leaders’ responsibility for “terminology governance” and sensemaking—naming and communicating AI tools in coordinated ways to maintain agency and ethical clarity during adoption (Haskell & Clark, 2025). 
Additionally, human-centric leadership capabilities—communication, trust-building, emotional intelligence, and relational/mentorship work—become more salient as AI absorbs routine analytical tasks. Studies depict leaders as needing to cultivate trust climates and reduce resistance through training, communication, and attention to workforce concerns (Borkovich et al., 2024; Cheng et al., 2024). AI-savvy leadership is framed as bridging technical tools with workforce needs and engagement, aligning AI initiatives with core goals and fostering data-driven innovation cultures (Fengkuo et al., 2025; González-Mohíno et al., 2024; Quttainah et al., 2025). In healthcare leadership contexts, nursing informatics and nurse leader perspectives highlight education, empowerment, and organizational culture as enabling conditions for AI integration, with perceived benefits and readiness linked to training and professional experience (Göktepe & Sarıköse, 2025; Kotp et al., 2025; Turchioe et al., 2025). Cybersecurity-focused work similarly frames competence as integrative: combining AI-driven analytical capability with emotional and ethical awareness to make value-aligned decisions in high-pressure contexts (Abidin et al., 2025).
Role Shifts and Practice Impacts
Collectively, the corpus supports three recurring role reconfigurations. First, leaders shift from analysts to decision architects, where the core contribution is designing the human–AI division of labor, configuring autonomy thresholds, and institutionalizing oversight. This is explicit in algorithmic leadership models that emphasize adjustable autonomy and built-in feedback loops (Riti et al., 2025), and in innovation/design work where automation pushes human authority upstream to problem framing and direction-setting (Verganti et al., 2020). Responsible adoption frameworks similarly emphasize disciplined automation choices and “judgment retention” to preserve moral reasoning when AI’s precision tempts maximal automation (Frimpong, 2025). In parallel, XAI and governance contributions depict leaders as stewards of legitimacy: ensuring algorithm evaluation, deployment, maintenance, and interpretability so that AI-driven decisions remain understandable and accountable (Johannssen & Chukhrova, 2025; Kashyap et al., 2025; Wichtmann et al., 2026). Second, leaders shift from controllers to coordinators, as AI integration increases interdependence across functions and amplifies the need for collaborative implementation. Evidence from system-level healthcare transformation underscores that AI-enabled command structures facilitate multi-actor coordination among clinicians, leaders, and units, suggesting leadership centered on orchestration and alignment rather than solely hierarchical control (Alharbi et al., 2025). Similar coordination demands appear in clinical AI implementation (Stogiannos et al., 2025) and in public-sector AI assimilation, where leadership attention allocation and communication mechanisms shape implementation success (Alshahrani et al., 2025; Brillianto et al., 2024).
In networked leadership environments, AI tools are not automatically transformative; rather, outcomes depend on socio-digital engagement and leaders’ ability to mobilize peer and collective networks, highlighting coordination as a core leadership function in AI-enabled collaboration (Basilio et al., 2025). Third, leaders shift from individual experts to boundary spanners who connect technical specialists, operational users, and governance stakeholders. This dynamic is explicit: operational leaders depend on technical staff for situational awareness and ethically sound decisions, especially in high-speed AI-enabled systems (Hovd, 2025). It also appears in health contexts where specialized roles (e.g., ICT doctors) are positioned to validate, verify, and audit AI models, requiring leaders to integrate expertise across disciplines rather than rely on single-domain authority (Jongen, 2023). The skill profiles of “multi-level connector” and “ethics risk management” identified in strategic leadership research further reinforce boundary spanning as central to AI transformation leadership (Bevilacqua et al., 2026). These competency and role shifts are reported to have practice impacts across the corpus: improved decision quality and timeliness where leaders effectively integrate AI into workflows (Alharbi et al., 2025; Satish et al., 2025); enhanced employee climate and engagement where AI-enabled leadership supports fairness, trust, and participation (Dutta & Mishra, 2021; Quttainah et al., 2025; Rožman et al., 2022; J. Liu et al., 2025); and altered authority dynamics as AI reduces informational asymmetry and creates alternative sources of guidance beyond human supervisors (J. Liu et al., 2025; Petrat et al., 2022).
At the same time, several studies caution that these benefits are contingent: resource constraints, data quality, infrastructure, and regulatory demands can limit adoption or amplify risks, particularly in SMEs and high-stakes public/clinical domains (Hassanien et al., 2025; E. Kim, 2026; Leonard et al., 2026; Shahzad et al., 2026).

4.3.3. Ethical and Strategic Challenges

Accountability, Responsibility Allocation, and Contestability Under Delegation
Across the corpus, ethical challenges are primarily articulated as problems of accountability and contestability that emerge when AI systems augment—or partially supplant—human judgment, especially where decision speed and scale increase faster than interpretive and governance capacity (Hovd, 2025; E. Kim, 2026; Riti et al., 2025; Vasilescu, 2025). Multiple studies highlight that as AI becomes embedded in operational and strategic routines, who is responsible for decisions can become ambiguous, particularly when leaders rely on algorithmic outputs that are difficult to interrogate or when decision authority is partially delegated to automated systems (E. Kim, 2026; Riti et al., 2025; Vasilescu, 2025). This “responsibility allocation” concern is reinforced in organizational settings where leaders face mounting pressure to integrate AI into decision-making structures while also managing the risks of misjudgment and error amid AI-era complexity (Iordache et al., 2025; Hovd, 2025; E. Kim, 2026). In high-stakes environments (defense, public administration, and healthcare), the ethical risk is framed not only as abstract accountability but also as operational harm from decisions taken at speed without sufficient situational awareness, validation, or human oversight (Hovd, 2025; Johannssen & Chukhrova, 2025; Vasilescu, 2025; Wichtmann et al., 2026).
Opacity, Transparency, Explainability, and Legitimacy
Equally important is the ethical domain of opacity, transparency, and explainability, frequently framed as “black box” risks that reduce decision contestability, weaken accountability, and erode stakeholder trust when AI recommendations cannot be understood or justified (Johannssen & Chukhrova, 2025; Kashyap et al., 2025; S. Kim et al., 2022; Ülkü & Erol, 2025). In crisis communication contexts, the literature explicitly warns that opaque AI may decrease accountability and make it harder to question decisions, particularly when generative outputs appear authoritative but remain difficult to trace to a defensible rationale (Ülkü & Erol, 2025). In public-sector digital transformation and algorithmic bureaucracy settings, transparency is linked to legitimacy and integrity norms, with leadership framed as responsible for establishing data-sharing cultures and explainability practices that sustain public trust (S. Kim et al., 2022; Brillianto et al., 2024; Alshahrani et al., 2025). In organizational change communication, explainable intelligence is positioned as enhancing accountability by exposing the logic behind AI suggestions and aligning machine-generated recommendations with organizational objectives, while explicitly recognizing cultural resistance and integration frictions as governance barriers (Kashyap et al., 2025). In healthcare management and radiology leadership, XAI and governance structures are positioned as safeguards to ensure that decisions are not merely accurate but also interpretable, fair, and continuously monitored as systems evolve (Johannssen & Chukhrova, 2025; Wichtmann et al., 2026).
Bias, Fairness, Privacy, Surveillance, and Human Agency
Furthermore, the corpus foregrounds bias, discrimination, and distributive fairness, often presented as risks arising from biased training data, unequal model performance across groups, or organizational incentives that prioritize efficiency over equity (Arrooqi & Miqad Alruqi, 2025; Johannssen & Chukhrova, 2025; S. Kim et al., 2022; Petrat et al., 2022; Ülkü & Erol, 2025). Studies in organizational leadership acceptance emphasize that discriminatory outcomes can occur when AI is trained on deficient data sets, producing biased recommendations (e.g., skewed hiring signals), and the evidence frames such risks as requiring explicit ethical evaluation rather than being treated as technical “noise” (Petrat et al., 2022; Hassanien et al., 2025). In education administration, bias and transparency concerns are positioned as central to leadership readiness to adopt AI, implying that leaders’ willingness to scale AI is highly contingent on perceived fairness and the credibility of ethical safeguards (Arrooqi & Miqad Alruqi, 2025; Alshamsi, 2025). In cybersecurity and public-sector applications, bias, discrimination, and concerns about public power are similarly emphasized, reflecting how algorithmic decisions can reproduce inequities at scale when governance is not designed for fairness (S. Kim et al., 2022; Abidin et al., 2025). Complementary contributions argue that explainability can enhance fairness by helping detect bias in model logic and supporting the evaluation of whether recommendations align with evidence-based and equity-oriented guidelines (Johannssen & Chukhrova, 2025; Kashyap et al., 2025). Additionally, concerns regarding privacy, data protection, and surveillance-related risks are foregrounded, particularly where AI relies on large-scale personal data and sensitive operational information (Gaffley & Pelser, 2025; Johannssen & Chukhrova, 2025; Leonard et al., 2026; Satish et al., 2025).
Within the domain of digitally transforming manufacturing systems, privacy risks are discussed alongside security concerns in human–cyber–physical systems, with privacy protections (e.g., personal-data protection mechanisms) presented as governance-relevant design requirements rather than downstream compliance tasks (Gaffley & Pelser, 2025). In healthcare leadership and predictive analytics contexts, data privacy and regulatory compliance are repeatedly framed as non-negotiable constraints that condition whether AI-enabled decision systems can be implemented ethically and sustainably (Johannssen & Chukhrova, 2025; Kotp et al., 2025; Satish et al., 2025). Regarding clinical ML decision-support adoption, leaders highlight infrastructure and governance as prerequisites for safe implementation, including addressing concerns about bias, accuracy, and the robustness of information systems (Leonard et al., 2026). Moreover, across public-sector and multi-sector AI adoption contexts, the discourse likewise links privacy to accountable governance mechanisms (e.g., monitoring tools and transparency measures) that enable responsible decision-making while meeting regulatory expectations (Hassanien et al., 2025; S. Kim et al., 2022). Finally, the corpus surfaces ethical tensions around human agency, dignity, and role integrity, emphasizing that AI integration can unintentionally devalue human judgment or reshape authority relations in ways that undermine ethical legitimacy (Borkovich et al., 2024; Cheong & Liu, 2025; Frimpong, 2025; Haskell & Clark, 2025). In education contexts, leaders stress the continuing need for human interaction and monitoring even when AI “makes decisions,” implying that legitimate leadership requires maintaining human accountability and moral discernment (Ali, 2025; Kaan et al., 2025). 
In institutional value-alignment settings, leaders explicitly frame AI use as acceptable only insofar as it remains aligned with humane ends and does not compromise human-centered mission logic, thereby positioning value alignment as an ethical constraint on adoption (Cheong & Liu, 2025). In workplace ethics and integrity discussions, AI decisions are framed as capable of producing cultural shock, job displacement anxieties, and trust erosion unless leaders proactively support transparency and training as ethical practices (Borkovich et al., 2024; Cheng et al., 2024). Complementary governance-oriented perspectives argue for disciplined restraint regarding what should be automated, including explicit decision protocols or “judgment retention” approaches to preserve moral reasoning and interpretive discretion in high-stakes decisions (Frimpong, 2025).

4.3.4. Integrated Synthesis of the Three Analytical Dimensions

Taken together, the three analytical dimensions—AI-augmented decision-making, leadership competencies and role shifts, and ethical challenges—cohere into a single integrative explanation of AI-driven leadership transformation as a hybrid decision phenomenon: AI expands and accelerates decision processes, leaders reconfigure competencies and roles to translate AI outputs into coordinated organizational action, and ethical pressures condition legitimacy, acceptance, and authority dynamics across the human–AI decision interface (Abositta et al., 2024; Alharbi et al., 2025; Borkovich et al., 2024; Dey et al., 2024; Dutta & Mishra, 2021; Frimpong, 2025; E. Kim, 2026; Petrat et al., 2022; Quttainah et al., 2025; Riti et al., 2025; Ülkü & Erol, 2025). This integrative synthesis emphasizes leadership—as distinct from “governance” as a standalone focus—by foregrounding how leaders design decision architectures, orchestrate task allocation, and safeguard ethical accountability as AI becomes embedded in organizational routines (Frimpong, 2025; Hovd, 2025; Riti et al., 2025; Vasilescu, 2025). Crucially, the synthesis indicates a dynamic, mutually reinforcing interaction among the three dimensions rather than three parallel “topics”. AI-augmented decision-making accelerates and expands decision cycles, which increases the need for leaders to reconfigure roles and competencies (e.g., decision architecture, coordination, boundary spanning) to translate model outputs into coordinated organizational action. These shifts, in turn, raise the salience of ethical challenges—accountability, contestability, transparency, fairness, and human agency—because speed and partial delegation can outpace validation and justification capacity.
Ethical conditions then feed back into adoption and use: perceived legitimacy, trust, and fairness shape whether AI is relied upon, resisted, or constrained, thereby moderating realized performance benefits and authority dynamics across the human–AI interface. The resulting theoretical insight is that leadership outcomes under AI integration depend on the co-evolution of decision acceleration, role reconfiguration, and ethically grounded legitimacy—not on AI capability in isolation.
A Unified Pathway: From AI-Enabled Decision Acceleration to Role Reconfiguration
Across contexts, AI-augmented decision-making functions as the proximal mechanism translating AI integration into observable leadership change by compressing decision cycles and increasing the feasibility of real-time or near-real-time analysis and response (Alharbi et al., 2025; Dey et al., 2024; Ding, 2021; Kesim, 2026; Rožman et al., 2023a; Trim & Lee, 2022). These decision-process shifts alter leadership work in two linked ways: first, they reallocate attention from routine analytical processing toward higher-level judgment, coordination, and human-centric leadership labor; second, they reconfigure authority and dependency as algorithmic systems become alternative sources of informational and evaluative guidance for employees and managers (Dutta & Mishra, 2021; J. Liu et al., 2025; Petrat et al., 2022; Rožman et al., 2022; Rožman et al., 2023b). Illustratively, decision-support assistants and algorithmic evaluation systems are described as producing more frequent, data-grounded feedback loops and more objective evaluation logics, reducing subjective variability and enabling leaders to redirect decision time to more strategic and relational priorities (Dutta & Mishra, 2021; Escolar-Jimenez et al., 2019; Petrat et al., 2022). In system-wide crisis and healthcare coordination, AI-enabled infrastructures expand leaders’ ability to assess resources rapidly and coordinate across units, intensifying the leadership demand for orchestration rather than isolated problem-solving (Alharbi et al., 2025; Satish et al., 2025; Stogiannos et al., 2025). The second dimension—leadership competencies and role shifts—explains why decision augmentation does not automatically yield superior leadership outcomes. 
The corpus consistently suggests that AI augmentation raises the premium on AI and data literacy (interpreting predictive outputs, understanding tool limitations), strategic sensemaking (deciding which signals matter and which questions to ask), and change leadership (integrating tools into workflows while maintaining engagement) (Alshamsi, 2025; Bevilacqua et al., 2026; Gaffley & Pelser, 2021, 2025; Hossain et al., 2025a; Quttainah et al., 2025). These competencies underpin recurring role shifts: leaders move from analysts to decision architects (designing human–AI task allocation and autonomy thresholds), from controllers to coordinators (aligning multi-actor workflows and collective action), and from individual experts to boundary spanners (bridging technical specialists, operational users, and ethical responsibilities) (Bevilacqua et al., 2026; Jongen, 2023; Riti et al., 2025; Verganti et al., 2020). In education and healthcare settings, leaders are portrayed as responsible for ensuring tool–workflow fit and for facilitating professional autonomy within AI-supported processes, reinforcing that leadership effectiveness is contingent on interpretive competence and implementation capability rather than tool presence alone (Ali, 2025; Määttä et al., 2026; Siira et al., 2024).
Ethical Challenges as Conditioning Forces—Not an Add-On
The third dimension—ethical challenges—integrates with the first two by functioning as a conditioning layer that shapes whether AI-augmented decision systems are accepted, trusted, and sustainable in practice. Ethical issues arise most sharply when speed, scale, or automation outpace leaders’ capacity to preserve accountability, transparency, fairness, privacy, and human agency (Borkovich et al., 2024; Frimpong, 2025; Hovd, 2025; E. Kim, 2026; Vasilescu, 2025). In high-stakes contexts, the problem is framed as more than abstract ethics: leaders risk operational harm and legitimacy loss if AI-enabled decisions cannot be interrogated, if accountability is unclear, or if the system incentivizes efficiency at the expense of moral responsibility (Hovd, 2025; E. Kim, 2026; Vasilescu, 2025). In organizational settings such as recruitment, supervision, and performance evaluation, acceptance of AI in leadership roles is explicitly linked to concerns about monitoring and discrimination risks, suggesting that ethical trust is a practical precondition for delegating decision-making authority to algorithms (Petrat et al., 2022). Similarly, the climate effects of employee-facing virtual assistants highlight how perceived fairness and trust can improve when AI reduces interpersonal bias and enables employee voice, suggesting that ethical qualities (fairness, responsiveness, procedural trust) can be experienced as direct leadership outcomes rather than compliance constraints (Dutta & Mishra, 2021). Importantly, the corpus suggests ethical challenges are amplified by the very role shifts AI demands: when leaders shift toward decision architecture and orchestration, they also inherit responsibility for ensuring that human judgment remains engaged where needed and that algorithmic outputs are used in ethical ways consistent with organizational and stakeholder values (Cheong & Liu, 2025; Frimpong, 2025; Haskell & Clark, 2025; Riti et al., 2025). 
This is visible in contexts where leaders explicitly defend human dignity and agency while leveraging generative AI for efficiency, or where the ethical restraint stance emphasizes that not all automatable decisions should be automated and that leaders must retain judgment in high-stakes situations (Cheong & Liu, 2025; Frimpong, 2025). Even where studies center on explainability and transparency as mechanisms to strengthen trust in AI-informed actions, the leadership implication is consistent: leaders must cultivate interpretability and contestability as part of maintaining decision legitimacy and ensuring responsible leadership practice (Johannssen & Chukhrova, 2025; Kashyap et al., 2025; Talaei et al., 2024; Wichtmann et al., 2026).
Boundary Conditions That Integrate All Three Dimensions
Across the corpus, the integrated pathway is conditioned by boundary factors that simultaneously affect decision augmentation, competency demands, and ethical risk exposure. Data quality and infrastructure constrain AI benefits and increase ethical vulnerability where model accuracy, bias, or reliability is uncertain, as evidenced in micro-firm adoption constraints and clinician-leader concerns about bias, accuracy, and IT readiness for ML decision support (Leonard et al., 2026; Shahzad et al., 2026). Sectoral stakes and regulatory intensity shape the salience of ethical responsibility and the urgency of competence development: military and public administration contexts emphasize ethical adequacy under speed and delegation, while healthcare emphasizes patient safety and workflow integration, and education emphasizes human interaction and monitoring even when AI supports decisions (Ali, 2025; Hovd, 2025; E. Kim, 2026; Siira et al., 2024; Vasilescu, 2025). Organizational culture and leadership readiness condition adoption and outcomes; empirical work repeatedly indicates that trust, perceived benefits, and leadership engagement mediate whether AI is integrated into daily practice and whether it produces productivity, engagement, or innovation improvements (Arrooqi & Miqad Alruqi, 2025; Ismail & Karamanlıoğlu, 2026; Kotp et al., 2025; Quttainah et al., 2025; Rožman et al., 2022; Somanathan et al., 2025). Figure 2 visualizes this integrated pathway by summarizing how the three analytical dimensions jointly explain AI-driven leadership transformation under the boundary conditions identified across the corpus.
Research Propositions for Future Empirical Testing
The integrated synthesis supports treating AI-driven leadership as a hybrid decision phenomenon in which algorithmic capability expands sensing, sensemaking, and seizing, while leadership shifts toward the design and stewardship of the human–AI decision architecture. Across the corpus, the most robust regularities emerge where decision acceleration and analytic augmentation are paired with explicit role reconfiguration (e.g., decision architecture, coordination, boundary spanning) and with governance practices that preserve legitimacy. The conceptual model (Figure 2) therefore implies that leadership outcomes under AI integration depend not only on what AI can do, but on how leaders configure the division of labor, autonomy thresholds, and oversight structures that translate AI outputs into organizational action. In addition, a second recurring implication concerns ethical conditioning. As AI moves from descriptive and supportive functions toward more prescriptive recommendation logics, the practical question of “who is responsible” becomes more salient, and contestability and explainability become central to trust calibration. The synthesis suggests that the speed and scale gains associated with AI can outpace interpretive and governance capacity, making ethical risks more likely when validation, transparency, and accountability mechanisms are underdeveloped. This motivates propositions that link AI-enabled acceleration and delegation to heightened requirements for explainability, monitoring, and responsibility allocation. Finally, the evidence indicates that outcomes are shaped by boundary conditions and adoption dynamics. Data quality, infrastructure readiness, and governance arrangements repeatedly appear as prerequisites for reliable augmentation, particularly in high-stakes contexts where error costs are severe. 
In parallel, leaders’ AI/data literacy and change-oriented, human-centric capabilities condition whether AI outputs are used appropriately and whether employees perceive AI-mediated leadership as fair and trustworthy. Accordingly, the following propositions formalize testable relationships connecting AI modality and maturity, leadership role shifts, ethical governance requirements, and the contextual conditions under which AI-driven leadership generates performance gains versus new vulnerabilities.
Proposition 1.
As AI shifts from decision-support to prescriptive recommendations, the need for formal accountability and contestability mechanisms increases.
Proposition 2.
AI-driven acceleration of decision cycles improves efficiency, but ethical risks rise when explainability and validation capacity does not keep pace with speed and scale.
Proposition 3.
Leaders with higher AI and data literacy more effectively translate algorithmic outputs into decisions aligned with organizational goals and stakeholder expectations, reducing miscalibration-related error.
Proposition 4.
As AI becomes integrated into strategic leadership functions, leadership work increasingly shifts toward “decision architect” roles, including human–AI task allocation, autonomy thresholds, and governance design.
Proposition 5.
In high-stakes contexts, the performance and safety benefits of AI depend more on infrastructure, data quality, and governance arrangements than on algorithmic sophistication alone.
Proposition 6.
Organizational acceptance of AI in leadership is mediated by perceived trust, fairness, and transparency, shaping power dynamics and dependence relationships in the leader–follower interface.

5. Implications, Limitations, and Future Research

5.1. Theoretical and Practical Implications

Based on the integrative synthesis reported above, these implications translate the review’s insights into actionable guidance for leadership development and AI governance design. This systematic review offers relevant contributions at both theoretical and practical levels to understanding the emerging role of AI in organizational leadership. References cited in this section serve solely to contextualize and extend the discussion; they do not form part of the systematic review corpus defined in Table 1, from which the review’s themes, propositions, Table 2, and Figure 2 are exclusively derived (n = 84). Consistent with the integrated pathway (Section 4.3.4), the corpus indicates that leadership outcomes under AI integration depend on the joint configuration of decision augmentation, role/competency reconfiguration, and ethically grounded legitimacy. Practically, the synthesis suggests an implementation pathway that is transferable across sectors: (1) diagnose decision criticality and AI maturity (supportive vs. more prescriptive uses) to define where AI can augment versus where human judgment must remain primary; (2) formalize decision rights through human–AI task allocation and autonomy thresholds (i.e., when AI informs, recommends, or triggers action); (3) institutionalize oversight mechanisms—validation, monitoring, explainability/contestability, and responsibility allocation—supported by leader AI/data literacy and change leadership to sustain trust and adoption.
Finally, (4) leadership development should prioritize AI/data literacy and human-centric change capability to translate AI outputs into coordinated action while maintaining trust, fairness, and accountability. Across these steps, context matters: in high-stakes settings (e.g., healthcare and public administration), validation capacity and accountability clarity become primary prerequisites for safe deployment, whereas in people-management settings (e.g., evaluation, hiring), fairness, privacy, and transparency are central to acceptance and legitimacy. Leaders who treat AI as merely a technical add-on risk creating faster but less legitimate decision cycles, whereas leaders who cultivate AI-savvy, interpretive, and human-centric competencies can translate augmented decisions into coordinated action while stabilizing trust and acceptance (Borkovich et al., 2024; Dutta & Mishra, 2021; Quttainah et al., 2025; Riti et al., 2025). The evidence also implies that AI can redistribute power and authority by reducing informational asymmetry and providing alternative leadership resources to employees, making leadership effectiveness depend more on orchestration, communication, and ethical credibility than on exclusive control of information (J. Liu et al., 2025; Petrat et al., 2022). Sectorally, the corpus suggests that as the stakes and speed of decision-making rise, leadership must increasingly integrate AI-augmented sensing and prediction with capability development and ethically grounded judgment to avoid brittle, over-automated decision regimes (Alharbi et al., 2025; Hovd, 2025; Satish et al., 2025; Vasilescu, 2025). By integrating perspectives from leadership theory, technology adoption, and organizational behavior, the study contributes to consolidating a conceptual synthesis that addresses gaps identified in the existing literature on leadership in AI-driven contexts.
In particular, the analysis shows that leadership in AI-mediated environments is increasingly taking the form of a human–machine symbiosis, in which human judgment is not replaced but reconfigured and renegotiated in light of intelligent systems’ analytical capabilities (Jarrahi, 2018). In this context, leadership styles are evolving towards more hybrid and adaptive approaches, requiring leaders to integrate algorithmic recommendations into their decision-making processes without relinquishing critical oversight and strategic responsibility (Sahoo & Sahoo, 2025). At the same time, AI-supported decision-making implies a clear redefinition of accountability and reporting mechanisms, as the legitimacy and ultimate responsibility for decisions remain anchored in human judgment (Scoggins, 2025). From a theoretical perspective, this article challenges traditional leadership models by arguing that AI is not only a decision-support tool but also a structural element that transforms the very nature of leadership. The literature reviewed suggests a transition from paradigms centered exclusively on the human leader to hybrid approaches, in which decision-making emerges from the interaction between human capabilities and intelligent systems, reinforcing the notion of leadership as a distributed process mediated by cognitive technologies (Joshi, 2025a; Pandey, 2025). This perspective contributes to the evolution of leadership theories by introducing the concept of AI-augmented leadership, in which authority, responsibility, and cognition are dynamically shared between humans and technologies, promoting emerging forms of collective intelligence and hybrid collaboration (Suri, 2025a). Additionally, the study helps clarify the so-called “responsibility gap”, highlighting the need for theoretical frameworks that explicitly address ethical accountability in algorithm-mediated decisions.
By integrating ethical and strategic dimensions into the analysis of AI-driven leadership, the article contributes to the development of more comprehensive theories capable of explaining emerging phenomena such as algorithmic opacity, automated bias, and the redefinition of human judgment in complex organizational contexts, where leadership legitimacy is no longer associated solely with individual authority (Staniszewska & Galindo, 2024). Another relevant theoretical contribution lies in identifying the evolution of leadership competencies as a central construct. The review demonstrates that classic competencies, such as strategic vision or emotional intelligence, remain relevant but are insufficient without complementary competencies, such as data literacy, algorithmic understanding, and ethical sensitivity applied to technology. This conceptual repositioning is in line with recent approaches that emphasize the need for leaders capable of interpreting, questioning, and critically framing AI systems in organizational processes, particularly in contexts characterized by high complexity and moral hazard, paving the way for future empirical research on new types of competencies and leadership styles in digitally transformed environments (Joshi, 2025b; Pandey, 2025). On a practical level, the study’s findings offer clear guidance for organizations, leaders, and policymakers facing the challenges of digital transformation. First, the evidence suggests that the effective adoption of AI in leadership processes requires more than technological investment; it requires the deliberate development of human capabilities that enable the interpretation, questioning, and oversight of algorithmic recommendations, preventing situations in which AI systems exceed their supporting role and unduly replace human judgment (Suri, 2025b; Joshi, 2025a). 
Thus, leadership development programs should integrate training in AI literacy, digital ethics, and collaborative human–machine decision-making, strengthening leaders’ ability to manage highly complex technological contexts. Second, the study highlights the importance of implementing robust AI ethical governance systems. Organizations that use AI in strategic processes should establish clear mechanisms for transparency, auditing, and accountability to ensure that decisions remain aligned with organizational values and broader social norms. Leadership plays a key role here in promoting an organizational culture based on trust, fairness, and accountability, supported by governance frameworks that articulate ethical principles, organizational practices, and ongoing oversight of AI systems (Knight et al., 2025). Finally, the practical implications extend to the design of organizational and public policies. Recent empirical evidence shows that successful integration of AI into leadership depends on adaptive regulatory and organizational approaches that can keep pace with rapid technological evolution without compromising fundamental ethical principles, particularly in highly digitized sectors such as finance (Awasthi, 2025). In this context, leaders are called upon to act as mediators between innovation and social responsibility, promoting positive and inclusive work cultures that reconcile algorithmic efficiency, human judgment, and decision legitimacy (Suri, 2025c). In objective and scientific terms, this article contributes to a deeper understanding of AI-driven leadership by offering a structured qualitative, narrative (conceptual) synthesis of the evidence included in the review that supports both theoretical advancement and practical action.
By recognizing the dual nature of AI, which is simultaneously empowering and challenging, the study provides a solid foundation for future research and for the development of more adaptive, ethical, and sustainable leadership practices in increasingly digital organizational contexts.

5.2. Limitations and Future Research

This study focused on peer-reviewed publications with source traceability verified through two curated academic databases (Scopus and Web of Science Core Collection). Although these databases provide broad interdisciplinary coverage and reduce indexing bias through cross-validation, limiting retrieval to two sources may still omit relevant studies indexed elsewhere. Future research could include additional databases in the search, such as PubMed, IEEE Xplore, and the Directory of Open Access Journals (DOAJ), to further broaden retrieval and reduce residual database selection bias. A second limitation concerns the open-access full-text eligibility constraint, which was applied to ensure that all included evidence could be examined in full and that extraction and interpretation remain verifiable. While this strengthens transparency and auditability, it may introduce availability bias by under-representing paywalled scholarship. Future reviews could compare open-access and non-open-access corpora to assess whether substantive thematic differences emerge and to evaluate the sensitivity of the synthesis to access constraints. Future empirical research could strengthen cumulative knowledge by developing more consistent operationalizations of AI integration in leadership work, clarifying outcome constructs, and testing the propositions derived from the integrated synthesis across contexts and sectors. Finally, the synthesis may be bounded by contextual concentration and reporting patterns in the available literature. Sectoral differences (e.g., public administration, healthcare, education, private-sector organizations) shape both the perceived value of AI augmentation and the salience of ethical risks, which may condition the transferability of specific mechanisms across settings.
Future studies should therefore test boundary conditions explicitly, including data quality and infrastructure readiness, sectoral stakes and regulation, and organizational culture and leadership readiness, to clarify when AI-driven leadership produces performance gains versus new vulnerabilities. Given how rapidly the technology is advancing, additional ethical concerns may well materialize in the near term. As media coverage already registers, AI is transforming how we operate, and it remains an open question whether it will be fully embraced or heavily regulated. Future research may hence also weigh the pros and cons of AI, namely (a) the benefits of its use and (b) the benefits of its regulation. We are amid a tidal wave, and between excessive use of AI and its outright denial, society will have to decide what it accepts, and not necessarily universally.

6. Conclusions

AI has already changed how we lead and make decisions. We are moving further away from solely human-centered decision-making, as AI increasingly supports business and management decisions in a new hybrid decision-making environment. Yet the pathway forward must be a structured one, as roles and skills evolve and ethical questions are posed. Indeed, these themes spark a new debate on leadership and ethics. As more people use AI, and become more dependent on it with each passing day, we are in the midst of a paradigm shift. Just as the Internet slowly became part of our lives and then a central part of our existence, so will AI’s continuous presence change how we live. These radical innovations have gone beyond improving current market offerings: they have changed how we operate, consume, travel, decide, and think, among other things. AI, for example, offers leaders unprecedented support and answers to questions that were previously available only through a superior coaching or mentoring relationship, at considerable cost in money and time. This has now changed. Individuals possessing advanced expertise in AI prompting, combined with the financial resources required to access high-tier artificial intelligence platforms (extending beyond freely available tools or standard subscription services and potentially involving monthly investments comparable to a regular salary), may acquire capabilities that significantly augment their cognitive and productive performance, approaching what could be described as capability-augmented actors: actors able to decide quickly and decisively without consulting another human being. Collaboration has hence gained a new meaning. It is no longer a term reserved for human beings; a leader’s mentor, coach, or even “best friend” may now be a machine or AI platform.
This presents new challenges, as machines and AI platforms are also subject to bias; it is simply a different type of bias that is now presented to us. Decades ago, Geert Hofstede (1928–2020) offered a novel perspective on how human beings function. By studying people in their work settings worldwide, Hofstede concluded that work practices vary along distinct cultural dimensions, even where shared work norms might be expected to make work environments around the world quite uniform. They do not. Similarly, AI platforms may give dissimilar advice and provide dissimilar data in response to similar prompts, which may be misleading, to say the very least. We are now facing a different type of bias that needs to be studied and researched as thoroughly as human beings and culture have been. The notions of organizational culture and national culture became central aspects of management and business, and as AI permeates society, we must know what cultural norms dominate certain types of platforms; we need to address the origins and evolution of AI platforms. AI will make suggestions and give advice shaped by its origin, which will be mainly Chinese or North American (Harari, 2026). Furthermore, AI has reportedly learned how to manipulate and lie, a prerequisite for survival for any organism (Harari, 2026). In whose interest will AI make future decisions? It is naive to assume that AI will always act in the best interest of whoever seeks its support, as AI is able to think for itself like the agent it is. This is a new era we are witnessing, not without its challenges and setbacks. Only time will tell where we are going and where AI, with proven creative capabilities, will take us in an ever-more technology-dependent world.
Thus, the following issue may warrant further research: what training in AI and critical thinking should be provided to leaders in powerful positions whose decisions may affect large numbers of people? Dividing AI platforms into groups, segments, or AI cultures may be a future development. We hope our discussion provides a basis on which to build future societies led by ethical and humane leaders, even as we run the risk that AI makes us less aware of where our decisions come from, in what is revealing itself as a powerful "black hole", or at least a foggy future horizon where much remains to be defined. Governance and capability development are new concerns that organizations must address in view of increasing AI adoption. Responsible leadership training and socially aligned AI decisions will only occur with transparent mechanisms and ethical frameworks in place.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/admsci16040173/s1.

Author Contributions

Conceptualization, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; methodology, A.S. and A.d.B.M.; validation, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; investigation, A.S. and A.d.B.M.; resources, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; writing—original draft preparation, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; writing—review and editing, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; visualization, A.S., A.d.B.M., J.R.d.S., A.P.-M. and M.A.-Y.-O.; supervision, A.S. and A.d.B.M.; project administration, A.S.; funding acquisition, M.A.-Y.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed by the European Union/Erasmus+ Programme—TourX international project funds [official project name: CoVEs for the Tourism Industry] grant number 101056184.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors acknowledge the TourX project, which “envisions to create excellence in Tourism through a bottom-up approach where the education providers of the partnership enhance their ability to adapt skills provision to ever-changing economic and social needs.” Additional thanks to the TourX Hospitality Labs trainees and inmates at the Regional Prison in Aveiro for their interest in further AI training and knowledge. We also sincerely thank the Guest Editors and the four anonymous reviewers for their constructive comments and valuable guidance in improving this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdelfattah, F., Salah, M., Dahleez, K., & Al Halbusi, H. (2025). Psychology of leadership: Understanding AI adoption, self-efficacy, green creativity, and risk perception among Oman’s business bosses. Changing Societies & Personalities, 9(2), 353–380. [Google Scholar]
  2. Abidin, A. W. Z., Ariffin, N. H. M., Nasruddin, Z. A., Habidin, N. F., & Yusoff, M. (2025). Evaluating and modelling artificial intelligence and emotional intelligence to improve cybersecurity employee ethical competence model. Journal of Advanced Research Design, 130(1), 13–25. [Google Scholar] [CrossRef]
  3. Abositta, A., Adedokun, M. W., & Berberoğlu, A. (2024). Influence of artificial intelligence on engineering management decision-making with mediating role of transformational leadership. Systems, 12(12), 570. [Google Scholar] [CrossRef]
  4. Acemoglu, D., & Johnson, S. (2023). Power and progress—Our thousand-year struggle over technology and prosperity. Basic Books. [Google Scholar]
  5. Aldrich, K., Chipps, E., & Mook, P. J. (2025). Driving innovations: Nursing leadership think tank explores AI solutions. Nurse Leader, 23(3), 236–238. [Google Scholar] [CrossRef]
  6. Alharbi, M. F., Senitan, M., Mominkhan, D., Smith, S., ALOtaibi, M., Siwek, M., Ohanlon, T., Alqablan, F., Alqahtani, S., & Alabdulaali, M. K. (2025). Digital innovative healthcare during a pandemic and beyond: A showcase of the large-scale and integrated Saudi smart national health command centre. BMJ Leader, 9(1), e000890. [Google Scholar] [CrossRef]
  7. Ali, B. M. (2025). Implementation of artificial intelligence and the roles of educational leadership: Investigating the expectations of kindergartens’ principals. International Journal of Instruction, 18(4), 269–282. [Google Scholar] [CrossRef]
  8. Alshahrani, A., Griva, A., Dennehy, D., & Mäntymäki, M. (2025). The role of leadership and communication in AI assimilation: Case studies from Saudi Arabia’s public sector organizations. Transforming Government: People, Process and Policy, 19(4), 875–894. [Google Scholar] [CrossRef]
  9. Alshamsi, A. S. (2025). Integration of transformative leadership, artificial intelligence, and the tpack framework for efficient pedagogy: A documentary analysis. International Journal of Learning, Teaching and Educational Research, 24(9), 995–1019. [Google Scholar] [CrossRef]
  10. Arrooqi, S., & Miqad Alruqi, M. (2025). Academic leadership attitudes toward employing artificial intelligence applications in developing administrative processes. Humanities & Social Sciences Communications, 12, 1342. [Google Scholar]
  11. Awasthi, V. (2025). Leadership in the age of artificial intelligence: Fintech world case study. SSRN. Available online: https://ssrn.com/abstract=5272710 (accessed on 1 February 2026).
  12. Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–182. [Google Scholar] [CrossRef]
  13. Barnett-Page, E., & Thomas, J. (2009). Methods for the synthesis of qualitative research: A critical review. BMC Medical Research Methodology, 9, 59. [Google Scholar] [CrossRef]
  14. Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120. [Google Scholar] [CrossRef]
  15. Basilio, O., Montes, V. J., & Moreno-Brieva, F. (2025). Non-hierarchic leadership collaboration: Exploring the adoption of AI-driven social networking for addressing social challenges in an extra-organizational environment. Technology in Society, 81, 102809. Available online: https://www.sciencedirect.com/science/article/pii/S0160791X24003579 (accessed on 1 February 2026). [CrossRef]
  16. Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review. arXiv, arXiv:2401.10896. [Google Scholar] [CrossRef]
  17. Bean, E., Burleigh, C., Haskell, C., Burris-Melville, T., Payne, J., & Pathak, B. (2025). Eavesdropping on UNESCO AI policy, leadership, and ethics. Journal of Leadership Studies, 18(4), 98–110. [Google Scholar] [CrossRef]
  18. Berkovich, I. (2025). The rise of AI-assisted instructional leadership: Empirical survey of generative AI integration in school leadership and management work. Frontiers in Education, 10, 1643023. [Google Scholar] [CrossRef]
  19. Bevilacqua, S., Ferraris, A., Matzler, K., & Kuděj, M. (2026). Strategic leadership at high altitude: Investigating how AI affects the required skills of top managers. Journal of Business Research, 205, 115878. [Google Scholar] [CrossRef]
  20. Bharadwaj, A. S. (2000). A resource-based perspective on information technology capability and firm performance: An empirical investigation. MIS Quarterly, 24(1), 169–196. [Google Scholar] [CrossRef]
  21. Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). ‘It’s reducing a human being to a percentage’ perceptions of justice in algorithmic decisions. The 2018 Chi Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. [Google Scholar]
  22. Black, S., Samson, D., & Ellis, A. (2024). Moving beyond ‘proof points’: Factors underpinning AI-enabled business model transformation. International Journal of Information Management, 77, 102796. [Google Scholar] [CrossRef]
  23. Borkovich, D. J., Adams, K. S., & Doss, J. A. (2024). Artificial intelligence in the workplace: A philosophical approach to ethics and integrity. Issues in Information Systems, 25(1), 311–326. [Google Scholar]
  24. Brillianto, B., Ruldeviyani, Y., & Sidiq, D. (2024). Making AI work for government: Critical success factors analysis using R-SWARA. Jurnal RESTI, 8(3), 438–446. [Google Scholar] [CrossRef]
  25. Cheng, Z. M., Bonetti, F., de Regt, A., Ribeiro, J. L., & Plangger, K. (2024). Principles of responsible digital implementation: Developing operational business resilience to reduce resistance to digital innovations. Organizational Dynamics, 53(2), 101043. [Google Scholar] [CrossRef]
  26. Cheong, P. H., & Liu, L. (2025). Faithful innovation: Negotiating institutional logics for AI value alignment among Christian churches in America. Religions, 16(3), 302. [Google Scholar] [CrossRef]
  27. Dai, R., Thomas, M. K. E., & Rawolle, S. (2025). The roles of AI and educational leaders in AI-assisted administrative decision-making: A proposed framework for symbiotic collaboration. The Australian Educational Researcher, 52(2), 1471–1487. [Google Scholar] [CrossRef]
  28. Dasborough, M. T. (2023). Awe-inspiring advancements in AI: The impact of ChatGPT on the field of organizational behavior. Journal of Organizational Behavior, 44(2), 177–179. [Google Scholar]
  29. Dey, P. K., Chowdhury, S., Abadie, A., Vann Yaroson, E., & Sarkar, S. (2024). Artificial intelligence-driven supply chain resilience in Vietnamese manufacturing small-and medium-sized enterprises. International Journal of Production Research, 62(15), 5417–5456. [Google Scholar]
  30. DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160. [Google Scholar] [CrossRef]
  31. Ding, X. (2021). Case investigation technology based on artificial intelligence data processing. Journal of Sensors, 4942657. [Google Scholar] [CrossRef]
  32. Dutta, S., & Mishra, A. (2021). Chatting with the CEO’s virtual assistant: Impact on climate for trust, fairness, employee satisfaction, and engagement. AIS Transactions on Human-Computer Interaction, 13(4), 379–401. [Google Scholar] [CrossRef]
  33. Ellis, R. A. (2025). The education leadership challenges for universities in a postdigital age. Postdigital Science and Education, 7(2), 430–447. [Google Scholar] [CrossRef]
  34. Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107–115. [Google Scholar] [CrossRef] [PubMed]
  35. Escolar-Jimenez, C. C., Matsuzaki, K., & Gustilo, R. C. (2019). A neural-fuzzy network approach to employee performance evaluation. International Journal of Advanced Trends in Computer Science and Engineering, 8(3), 573. [Google Scholar] [CrossRef]
  36. Faria, P., Alves, V., Neves, J., & Vicente, H. (2025). Data science in the management of healthcare organizations. Algorithms, 18(3), 173. [Google Scholar] [CrossRef]
  37. Fengkuo, S., Yijia, Z., Comite, U., Badulescu, A., & Badulescu, D. (2025). Artificial intelligence-supported leadership: A catalyst for team excellence in China’s fast-moving consumer goods industry. Journal of Organizational and End User Computing (JOEUC), 37(1), 1–29. [Google Scholar]
  38. Fitzgerald, M., Kruschwitz, N., Bonnet, D., & Welch, M. (2014). Embracing digital technology: A new strategic imperative. MIT Sloan Management Review, 55(2), 1–12. [Google Scholar]
  39. Flak, O., & Pyszka, A. (2022). Evolution from human virtual teams to artificial virtual teams supported by artificial intelligence. Results of literature analysis and empirical research. Problemy Zarządzania, 2(96), 48–69. [Google Scholar] [CrossRef]
  40. Floridi, L. (2023). The ethics of artificial intelligence: Principles, challenges, and opportunities (online ed.). Oxford Academic. [Google Scholar] [CrossRef]
  41. Frimpong, V. (2025). Not all that can be automated should be automated—Strategic minimalism as a disciplined and ethically grounded approach to AI adoption. Business Ethics and Leadership, 9(4), 57–66. [Google Scholar] [CrossRef]
  42. Gaffley, K., & Pelser, T. (2021). Developing a digital transformation model to enhance the strategy development process for leadership in the South African manufacturing sector. South African Journal of Business Management, 52(1), a2454. [Google Scholar] [CrossRef]
  43. Gaffley, K., & Pelser, T. (2025). A digital transformation strategy model for leadership in manufacturing: Considering the technological innovations to advance industry 5.0 in smart manufacturing. South African Journal of Business Management, 56(1), a5449. [Google Scholar] [CrossRef]
  44. González-Mohíno, M., Donate, M. J., Muñoz-Fernández, G. A., & Cabeza-Ramírez, L. J. (2024). Robotic digitalization and business success: The central role of trust and leadership in operational efficiency—A hybrid approach using PLS-SEM and fsQCA. IEEE Access, 12, 192113–192126. [Google Scholar] [CrossRef]
  45. Göktepe, N., & Sarıköse, S. (2025). Perspectives and experiences of nurse managers on the impact of artificial intelligence on nursing work environments and managerial processes: A qualitative study. International Nursing Review, 72(2), e70043. [Google Scholar] [CrossRef]
  46. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. [Google Scholar]
  47. Gregory, G., Li, Y., & Solanki, V. (2026). Executive insights in the age of AI and global disruption: Navigating change, technology, and strategy. Journal of International Marketing, 34(1), 34–46. [Google Scholar] [CrossRef]
  48. Harari, Y. N. (2024). Nexus: A brief history of information networks from the stone age to AI. Fern Press, Penguin Books. [Google Scholar]
  49. Harari, Y. N. (2026, January 20). An honest conversation on AI and humanity. The World economic forum—Davos. Available online: https://www.youtube.com/watch?v=oJB7JNWo58w (accessed on 1 February 2026).
  50. Haskell, C., & Clark, S. J. (2025). Leadership in AI terminology governance: From anomia to agency. Journal of Leadership Studies, 18(4), 55–66. [Google Scholar] [CrossRef]
  51. Hassanien, A. R. M., Patwa, N., Bagheri, N., & Tabash, M. I. (2025). Ethical implications of artificial intelligence accessibility in the United Arab Emirates: Bridging the digital divide. International Review of Management and Marketing, 15(6), 22–31. [Google Scholar] [CrossRef]
  52. Held, P., Heubeck, T., & Meckl, R. (2025). Boosting SMEs’ digital transformation: The role of dynamic capabilities in cultivating digital leadership and digital culture. Review of Managerial Science, 19, 1–29. [Google Scholar] [CrossRef]
  53. Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.). (2023). Cochrane handbook for Systematic reviews of interventions (version 6.4, updated August 2023). Cochrane. [Google Scholar]
  54. Hossain, S., Fernando, M., & Akter, S. (2025a). Digital leadership: Towards a dynamic managerial capability perspective of artificial intelligence-driven leader capabilities. Journal of Leadership & Organizational Studies, 32(2), 189–208. [Google Scholar] [CrossRef]
  55. Hossain, S., Fernando, M., & Akter, S. (2025b). The influence of artificial intelligence-driven capabilities on responsible leadership: A future research agenda. Journal of Management & Organization, 31(5), 2360–2384. [Google Scholar]
  56. Hovd, S. (2025). Military prudence and technological disruption—The ethics of change management in the military. Journal of Military Ethics, 24(3–4), 315–334. [Google Scholar] [CrossRef]
  57. Hyiamang, O., & Liu, X. (2025). Artificial Intelligence (AI) strategies for organizational innovation, growth, and productivity: A multi-case study approach. Issues in Information Systems, 26(1), 20–36. [Google Scholar]
  58. Iannello, J. (2026). Healthcare leadership in the modern age of artificial intelligence: Are we organizationally ready? Artificial Intelligence in Health, 3(1), 71–76. [Google Scholar]
  59. Ingaldi, M., & Ulewicz, R. (2025). The role of AI in digital leadership-new competencies of leaders. Polish Journal of Management Studies, 31(2), 106–125. [Google Scholar] [CrossRef]
  60. Iordache, R. M., Cioca, V. R., Mihaila, D., Štreimikienė, D., & Ionescu, Ș. E. (2025). An analysis on leadership and decision making errors in the new artificial intelligence influenced organizational environment. Polish Journal of Management Studies, 31(2), 123–140. [Google Scholar] [CrossRef]
  61. Ismail, R. T., & Karamanlıoğlu, A. U. (2026). AI capabilities and its impact on organisational innovation in Malaysian SMEs: The role of transformational leadership and digital organisational culture. Sustainability, 18(3), 1473. [Google Scholar] [CrossRef]
  62. Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. [Google Scholar] [CrossRef]
  63. Jia, T., Wang, C., Tian, Z., Wang, B., & Tian, F. (2022). Design of digital and intelligent financial decision support system based on artificial intelligence. Computational Intelligence and Neuroscience, 2022(1), 1962937. [Google Scholar] [CrossRef]
  64. Johannssen, A., & Chukhrova, N. (2025). The crucial role of explainable artificial intelligence (XAI) in improving health care management. Health Care Management Science, 28(3), 565–570. [Google Scholar] [CrossRef]
  65. Jongen, P. J. (2023). Information and communication technology medicine: Integrative specialty for the future of medicine. Interactive Journal of Medical Research, 12(1), e42831. [Google Scholar] [CrossRef]
  66. Joshi, S. (2025a). Artificial intelligence in leadership and management: Current trends and future directions. World Journal of Advanced Research and Reviews, 26(1), 2773–2791. [Google Scholar] [CrossRef]
  67. Joshi, S. (2025b). Comprehensive review of artificial intelligence in management, leadership, decision-making and collaboration. SSRN. [Google Scholar] [CrossRef]
  68. Kaan, I. A., Daniels, M., & Tainton, J. (2025). Relational leadership in the age of AI: Rethinking pedagogy for medical affairs. Journal of Leadership Studies, 19(2), e70018. [Google Scholar] [CrossRef]
  69. Kashyap, S., Purohit, S., Kumar, D. A., Jawaid, F. I., Kumar, J. R., & Ajani, S. N. (2025). Visual storytelling and explainable intelligence in organizational change communication. ShodhKosh: Journal of Visual and Performing Arts, 6, 696–707. [Google Scholar] [CrossRef]
  70. Kesim, E. (2026). Changing aspects of the management of distance education institutions during AI era: A case study. Turkish Online Journal of Distance Education, 27(1), 133–152. [Google Scholar] [CrossRef]
  71. Kim, E. (2026). Institutionalizing predictive AI in public administration: Algorithmic governance and the case of a wildfire forecasting system. Policy & Internet, 18(1), e70029. [Google Scholar] [CrossRef]
  72. Kim, S., Andersen, K. N., & Lee, J. (2022). Platform government in the era of smart technology. Public Administration Review, 82(2), 362–368. [Google Scholar]
  73. Kissinger, H. (2022). Leadership—Six studies in world strategy. Allen Lane, Penguin Books. [Google Scholar]
  74. Knight, S., Shibani, A., & Vincent, N. (2025). Ethical AI governance: Mapping a research ecosystem. AI Ethics, 5, 841–862. [Google Scholar] [CrossRef]
  75. Kotp, M. H., Ismail, H. A., Basyouny, H. A. A., Aly, M. A., Hendy, A., Nashwan, A. J., Hendy, A., & Abd Elmoaty, A. E. E. (2025). Empowering nurse leaders: Readiness for AI integration and the perceived benefits of predictive analytics. BMC Nursing, 24(1), 56. [Google Scholar] [CrossRef]
  76. Langham-Putrow, A., Bakker, C., & Riegelman, A. (2021). Is the open access citation advantage real? A systematic review of the citation advantage of open access articles. PLoS ONE, 16(6), e0253129. [Google Scholar]
  77. Leonard, F., Lyttle, M. D., O’Sullivan, D., Gilligan, J., Roland, D., Barrett, M., & PERUKI. (2026). Perceptions and knowledge of machine learning for paediatric related decision support in emergency care—A UK and Ireland network survey study of clinician leaders. PLoS Digital Health, 5(2), e0001213. [Google Scholar] [CrossRef] [PubMed]
  78. Li, T., Ni, L., & Xu, Y. (2025). Enterprise digital transformation drivers: Market or government? A case study from China. Journal of Theoretical and Applied Electronic Commerce Research, 20(2), 131. [Google Scholar] [CrossRef]
  79. Lindberget, D. S., Prosperi, M., Bjarnadottir, R. I., Thomas, J., Crane, M., Chen, Z., Shear, K., Solberg, L. M., Snigurska, U. A., Wu, Y., Xia, Y., & Lucero, R. J. (2020). Identification of important factors in an inpatient fall risk prediction model to improve the quality of care using EHR and electronic administrative data: A machine-learning approach. International Journal of Medical Informatics, 143, 104272. [Google Scholar] [CrossRef] [PubMed]
  80. Liu, J., Huang, M., Cui, M., Tian, G., & Li, X. (2025). The positive effects of employee AI dependence on voice behavior—Based on power dependence theory. Behavioral Sciences, 15(12), 1709. [Google Scholar] [CrossRef]
  81. Liu, Y., & Song, J. (2022). Predictive analysis of the psychological state of charismatic leaders on employees’ work attitudes based on artificial intelligence affective computing. Frontiers in Psychology, 13, 965658. [Google Scholar] [CrossRef]
  82. Lu, L., & Currie, G. (2026). How task-AI fit influences hotel employees’ job crafting and self-esteem threat: The moderating effect of leader AI crafting. Journal of Hospitality and Tourism Management, 66, 101368. [Google Scholar] [CrossRef]
  83. Machado, J., Sousa, R., Peixoto, H., & Abelha, A. (2024). Ethical decision-making in artificial intelligence: A logic programming approach. AI, 5(4), 2707–2724. [Google Scholar] [CrossRef]
  84. Madanchian, M., & Taherdoost, H. (2025). Ethical theories, governance models, and strategic frameworks for responsible AI adoption and organizational success. Frontiers in Artificial Intelligence, 8, 1619029. [Google Scholar] [CrossRef]
  85. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87. [Google Scholar] [CrossRef]
  86. Marshall, I. J., & Wallace, B. C. (2019). Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8, 163. [Google Scholar] [CrossRef] [PubMed]
  87. Määttä, M., Hammarén, M., Kuha, S., & Kanste, O. (2026). Healthcare professionals’ perceptions of future leadership in digital healthcare: A qualitative study. Journal of Advanced Nursing, 82(2), 1482–1497. [Google Scholar] [CrossRef]
  88. Mpanza, S. S. (2025). Revisiting the technological-organizational-environmental (TOE) framework and diffusion of innovation (DOI): A theoretical review for artificial intelligence (AI) adoption. International Journal of Applied Research in Business and Management, 6(5). [Google Scholar] [CrossRef]
  89. Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., du Sert, N. P., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 21. [Google Scholar] [CrossRef]
  90. Namatovu, A., & Kyambade, M. (2025). Assessing the impact of digital leadership on public sector performance: The mediation role of digital transformation in developing economies. Sage Open, 15(3), 21582440251367585. [Google Scholar] [CrossRef]
  91. National Academies of Sciences, Engineering, and Medicine. (2018). Open science by design: Realizing a vision for 21st century research. The National Academies Press. [Google Scholar]
  92. National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. The National Academies Press. [Google Scholar]
  93. Nevo, S., & Wade, M. R. (2010). The formation and value of IT-enabled resources: Antecedents and consequences of synergistic relationships. MIS Quarterly, 34(1), 163–183. [Google Scholar] [CrossRef]
  94. Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. [Google Scholar] [CrossRef]
  95. O’Reilly, C. A., & Tushman, M. L. (2013). Organizational ambidexterity: Past, present, and future. Academy of Management Perspectives, 27(4), 324–338. [Google Scholar] [CrossRef]
  96. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021a). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [Google Scholar] [CrossRef]
  97. Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021b). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ, 372, n160. [Google Scholar] [CrossRef]
  98. Pandey, V. (2025). Leadership in the AI era: Navigating and shaping the future of organizational guidance. SSRN. [Google Scholar] [CrossRef]
  99. Petrat, D., Yenice, I., Bier, L., & Subtil, I. (2022). Acceptance of artificial intelligence as organizational leadership: A survey. TATuP-Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis/Journal for Technology Assessment in Theory and Practice, 31(2), 64–69. [Google Scholar]
  100. Philippart, M. H. (2022). Success factors to deliver organizational digital transformation: A framework for transformation leadership. Journal of Global Information Management, 30(8), 1–17. [Google Scholar] [CrossRef]
  101. Piwowar, H., Priem, J., Larivière, V., Alperin, J. P., Matthias, L., Norlander, B., Farley, A., West, J., & Haustein, S. (2018). The state of OA: A large-scale analysis of the prevalence and impact of open access articles. PeerJ, 6, e4375. [Google Scholar] [CrossRef]
  102. Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., Britten, N., Roen, K., & Duffy, S. (2006). Guidance on the conduct of narrative synthesis in systematic reviews: A product from the ESRC methods programme. ESRC Methods Programme. Available online: https://www.academia.edu/download/39246301/02e7e5231e8f3a6183000000.pdf (accessed on 1 February 2026).
  103. Quaquebeke, N. V., & Gerpott, F. H. (2023). The now, new, and next of digital leadership: How Artificial Intelligence (AI) will take over and change leadership as we know it. Journal of Leadership & Organizational Studies, 30(3), 265–275. [Google Scholar] [CrossRef]
  104. Quttainah, M. A., Sadhna, P., Aggarwal, A., Daipuria, P., Bhardwaj, B., & Sharma, I. (2025). AI-Savvy leadership for enhancing AI utilization and employee engagement among digital natives in the EdTech sector. Scientific Reports, 15, 45549. [Google Scholar] [CrossRef]
  105. Rais, M. I., Singh, V. K., Sivashankar, D., Singh, P., Nagesh, I. R., & Nayak, P. P. (2025). Empowering organizational management with artificial intelligence-enhanced data analytics solutions. Multidisciplinary Science Journal, 7, e2025ss0201. [Google Scholar] [CrossRef]
  106. Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. [Google Scholar] [CrossRef]
  107. Riti, R. I., Abrudan, C. I., Bacali, L., & Bâlc, N. (2025). Command redefined: Neural-adaptive leadership in the age of autonomous intelligence. AI, 6(8), 176. [Google Scholar] [CrossRef]
  108. Rizana, A. F., Wiratmadja, I. I., & Akbar, M. (2025). Exploring capabilities for digital transformation in the business context: Insight from a systematic literature review. Sustainability, 17(9), 4222. [Google Scholar] [CrossRef]
  109. Rogers, D. L. (2016). The digital transformation playbook: Rethink your business for the digital age. Columbia University Press. [Google Scholar]
  110. Romeo, E., & Lacko, J. (2025). Adoption and integration of AI in organizations: A systematic review of challenges and drivers towards future directions of research. Kybernetes. ahead-of-print. [Google Scholar] [CrossRef]
  111. Rožman, M., Oreški, D., & Tominc, P. (2022). Integrating artificial intelligence into a talent management model to increase the work engagement and performance of enterprises. Frontiers in Psychology, 13, 1014434. [Google Scholar] [CrossRef]
  112. Rožman, M., Oreški, D., & Tominc, P. (2023a). Artificial-intelligence-supported reduction of employees’ workload to increase the company’s performance in today’s VUCA environment. Sustainability, 15(6), 5019. [Google Scholar] [CrossRef]
  113. Rožman, M., Tominc, P., & Milfelner, B. (2023b). Maximizing employee engagement through artificial intelligent organizational culture in the context of leadership and training of employees: Testing linear and non-linear relationships. Cogent Business & Management, 10(2), 2248732. [Google Scholar]
  114. Sahoo, S., & Sahoo, C. (2025). Managing with AI: How leadership styles evolve in the age of artificial intelligence. SSRN. [Google Scholar] [CrossRef]
  115. Satish, D., Gangadharan, D., Chandrasekaran, D., Roy, J., & Sharma, R. (2025). AI-enabled transformational leadership for improving healthcare workforce and performance improvement. International Journal of Accounting and Economics Studies, 12, 137–141. [Google Scholar]
  116. Scoggins, J. (2025). Negotiating judgment and accountability in AI-supported leadership decision-making. SSRN. [Google Scholar] [CrossRef]
  117. Seraj, A. H. A., Hasanein, A. M., Al-Romeedy, B. S., & Elziny, M. N. (2025). Redefining the digital frontier: Digital leadership, AI, and innovation driving next-generation tourism and hospitality. Administrative Sciences, 15(9), 369. [Google Scholar] [CrossRef]
  118. Shahzad, F., Hoque, M. T., Khan, I. S., & Arslan, A. (2026). AI for the underdogs: Navigating risk and growth in high-tech micro-firms through generative artificial intelligence. Journal of Strategy & Innovation, 37(1), 200566. [Google Scholar]
  119. Shannaq, B., Sriram, V. P., Alrawahi, S., Muniyanayaka, D. K., & Ali, O. (2025). An AI and NLP framework for extracting leadership competencies and mapping personalized training paths: A strategic approach for human resource development. Bangladesh Journal of Multidisciplinary Scientific Research, 10(5), 1–11. [Google Scholar] [CrossRef]
  120. Siira, E., Tyskbo, D., & Nygren, J. (2024). Healthcare leaders’ experiences of implementing artificial intelligence for medical history-taking and triage in Swedish primary care: An interview study. BMC Primary Care, 25, 268. [Google Scholar] [CrossRef]
  121. Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. [Google Scholar] [CrossRef]
  122. Somanathan, S., Harsha, R., Khalilov, S., Khudoyarov, A., Makhmudov, S., Kumar, P. R., & Kumar, R. (2025). Driving SCM and HR transformation with AI through the role of leadership and innovation as mediators. Archives for Technical Sciences, 3(34), 1048–1059. [Google Scholar] [CrossRef]
  123. Staniszewska, Z., & Galindo, G. (2024). The future of leadership in the era of AI: Do we still need “leaders”? ESCP business school impact paper. SSRN. [Google Scholar] [CrossRef]
  124. Stogiannos, N., O’Regan, T., Scurr, E., Litosseliti, L., Pogose, M., Harvey, H., Kumar, A., Malik, R., Barnes, A., McEntee, M. F., & Malamateniou, C. (2025). Lessons on AI implementation from senior clinical practitioners: An exploratory qualitative study in medical imaging and radiotherapy in the UK. Journal of Medical Imaging and Radiation Sciences, 56(1), 101797. [Google Scholar] [CrossRef] [PubMed]
  125. Suri, K. (2025a). Augmented leadership in a hybrid intelligence world: Human-centered AI and the rise of collective intelligence. SSRN. [Google Scholar] [CrossRef]
  126. Suri, K. (2025b). From decision support to decision substitution: A behavioural framework for AI overreach in leadership. SSRN. [Google Scholar] [CrossRef]
  127. Suri, K. (2025c). Role of immersive leadership in the age of artificial intelligence: Fostering positive work culture. SSRN. [Google Scholar] [CrossRef]
  128. Svahn, F., Mathiassen, L., & Lindgren, R. (2017). Embracing digital innovation in incumbent firms. MIS Quarterly, 41(1), 239–254. [Google Scholar] [CrossRef]
  129. Tabata, M., Wildermuth, C., Bottomley, K., & Jenkins, D. (2025). Generative AI integration in leadership practice: Foundations, challenges, and opportunities. Journal of Leadership Studies, 18(4), 41–54. [Google Scholar] [CrossRef]
  130. Talaei, J., Yang, A., Takishova, T., & Masialeti, M. (2024). How does cost leadership strategy suppress the performance benefits of explainability of AI applications in organizations? Journal of Global Information Management, 32(1), 1–23. [Google Scholar] [CrossRef]
  131. Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. [Google Scholar] [CrossRef]
  132. Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533. [Google Scholar] [CrossRef]
  133. Thomas, A., Duggal, H. K., Khatri, P., & Corvello, V. (2024). ChatGPT appropriation: A catalyst for creative performance, innovation orientation, and agile leadership. Technology in Society, 78, 102619. [Google Scholar] [CrossRef]
  134. Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8, 45. [Google Scholar] [CrossRef]
  135. Tornatzky, L. G., & Fleischer, M. (1990). The processes of technological innovation. Lexington Books. [Google Scholar]
  136. Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207–222. [Google Scholar] [CrossRef]
  137. Trim, P. R., & Lee, Y. I. (2022). Combining sociocultural intelligence with Artificial Intelligence to increase organizational cyber security provision through enhanced resilience. Big Data and Cognitive Computing, 6(4), 110. [Google Scholar] [CrossRef]
  138. Tsai, C. Y., Marshall, J. D., Choudhury, A., Serban, A., Hou, Y. T. Y., Jung, M. F., & Yammarino, F. J. (2022). Human-robot collaboration: A multilevel and integrated leadership framework. The Leadership Quarterly, 33(1), 101594. [Google Scholar] [CrossRef]
  139. Turchioe, M. R., Pepingco, C., Ronquillo, C., Ferrara, S. A., Topaz, M., Austin, R., & Lytle, K. (2025). Education, empowerment, and elevating nursing voices: Nursing informatics leaders’ perspectives on the path forward with artificial intelligence in nursing. Nursing Outlook, 73(5), 102484. [Google Scholar] [CrossRef]
  140. Ul Haq, F., Suki, N. M., Setini, M., Masood, A., & Khan, T. A. (2025). Adopting green AI for SME sustainability: Mediating role of green investment and moderation by green servant leadership. Sustainable Futures, 10, 101002. [Google Scholar] [CrossRef]
  141. UNESCO. (2021). UNESCO recommendation on open science. United Nations Educational, Scientific and Cultural Organization. [Google Scholar]
  142. Ülkü, G., & Erol, G. (2025). A new approach to crisis communication in tourism: Artificial intelligence-based CEO (AI-CEO). Tourism & Management Studies, 21(3), 17–31. [Google Scholar] [CrossRef]
  143. Vasilescu, C. (2025). Digital transformation of military organisations. Obrana a Strategie, 25(2), 25–47. [Google Scholar]
  144. Verganti, R., Vendraminelli, L., & Iansiti, M. (2020). Innovation and design in the age of artificial intelligence. Journal of Product Innovation Management, 37(3), 212–227. [Google Scholar] [CrossRef]
  145. Verhoef, P. C., Broekhuizen, T., Bart, Y., Bhattacharya, A., Dong, J. Q., Fabian, N., & Haenlein, M. (2021). Digital transformation: A multidisciplinary reflection and research agenda. Journal of Business Research, 122, 889–901. [Google Scholar] [CrossRef]
  146. Vial, G. (2021). Understanding digital transformation: A review and a research agenda. In A. Hinterhuber, T. Vescovi, & F. Checchinato (Eds.), Managing digital transformation (pp. 13–66). Routledge. [Google Scholar] [CrossRef]
  147. Vinod, N., Subramani, A. K., Abirami, A., & Bijumon, R. (2025). An efficient approach to innovation healthcare leadership and artificial intelligence practical applications. International Journal of Basic and Applied Sciences, 14(1), 371–376. [Google Scholar] [CrossRef]
  148. Warner, K. S. R., & Wäger, M. (2019). Building dynamic capabilities for digital transformation: An ongoing process of strategic renewal. Long Range Planning, 52(3), 326–349. [Google Scholar] [CrossRef]
  149. Wichtmann, B. D., Paech, D., Pianykh, O. S., Huang, S. Y., Seltzer, S. E., Brink, J., & Fennessy, F. M. (2026). Leadership in radiology in the era of technological advancements and artificial intelligence. European Radiology, 36(1), 548–552. [Google Scholar] [CrossRef]
  150. Xu, J., Li, R., & Peng, Z. (2025). Digital ripples in industries: An institutional theory perspective on how peer transformation dismantles greenwashing behavior. Journal of Theoretical and Applied Electronic Commerce Research, 20(4), 351. [Google Scholar] [CrossRef]
Figure 1. PRISMA 2020 flow diagram of the study selection process.
Figure 2. Integrated conceptual framework of AI-driven leadership as a hybrid decision phenomenon (derived from the narrative integrative synthesis).
Table 1. Included studies (n = 84). Note. Indexation indicates database coverage and is not mutually exclusive across databases.
# | Title | Authors | Indexation | Study Type
1 | A digital transformation strategy model for leadership in manufacturing: Considering the technological innovations to advance industry 5.0 in smart manufacturing | (Gaffley & Pelser, 2025) | Scopus + Web of Science | Primary Mixed-Methods
2 | A neural-fuzzy network approach to employee performance evaluation | (Escolar-Jimenez et al., 2019) | Scopus | Primary Quantitative
3 | A new approach to crisis communication in tourism: Artificial intelligence-based CEO (AI-CEO) | (Ülkü & Erol, 2025) | Scopus + Web of Science | Primary Quantitative
4 | Academic leadership attitudes toward employing artificial intelligence applications in developing administrative processes | (Arrooqi & Miqad Alruqi, 2025) | Scopus + Web of Science | Primary Quantitative
5 | Acceptance of artificial intelligence as organizational leadership: A survey | (Petrat et al., 2022) | Scopus + Web of Science | Primary Quantitative
6 | Adopting green AI for SME sustainability: Mediating role of green investment and moderation by green servant leadership | (Ul Haq et al., 2025) | Scopus + Web of Science | Primary Quantitative
7 | AI capabilities and its impact on organisational innovation in Malaysian SMEs: The role of transformational leadership and digital organisational culture | (Ismail & Karamanlıoğlu, 2026) | Scopus + Web of Science | Primary Quantitative
8 | AI for the underdogs: Navigating risk and growth in high-tech micro-firms through generative artificial intelligence | (Shahzad et al., 2026) | Scopus | Primary Qualitative
9 | AI-enabled transformational leadership for improving healthcare workforce and performance improvement | (Satish et al., 2025) | Scopus | Primary Mixed-Methods
10 | AI-savvy leadership for enhancing AI utilization and employee engagement among digital natives in the EdTech sector | (Quttainah et al., 2025) | Scopus + Web of Science | Primary Quantitative
11 | An AI and NLP framework for extracting leadership competencies and mapping personalized training paths: A strategic approach for human resource development | (Shannaq et al., 2025) | Scopus | Conceptual/Theoretical
12 | An analysis on leadership and decision making errors in the new artificial intelligence influenced organizational environment | (Iordache et al., 2025) | Scopus + Web of Science | Conceptual/Theoretical
13 | An efficient approach to innovation healthcare leadership and artificial intelligence practical applications | (Vinod et al., 2025) | Scopus | Conceptual/Theoretical
14 | Artificial intelligence (AI) strategies for organizational innovation, growth, and productivity: A multi-case study approach | (Hyiamang & Liu, 2025) | Scopus | Primary Qualitative
15 | Artificial intelligence in the workplace: A philosophical approach to ethics and integrity | (Borkovich et al., 2024) | Scopus | Conceptual/Theoretical
16 | Artificial intelligence-driven supply chain resilience in Vietnamese manufacturing small- and medium-sized enterprises | (Dey et al., 2024) | Scopus + Web of Science | Primary Quantitative
17 | Artificial intelligence-supported leadership: A catalyst for team excellence in China’s fast-moving consumer goods industry | (Fengkuo et al., 2025) | Scopus + Web of Science | Primary Quantitative
18 | Artificial-intelligence-supported reduction of employees’ workload to increase the company’s performance in today’s VUCA environment | (Rožman et al., 2023a) | Scopus + Web of Science | Primary Quantitative
19 | Case investigation technology based on artificial intelligence data processing | (Ding, 2021) | Scopus + Web of Science | Conceptual/Theoretical
20 | Changing aspects of the management of distance education institutions during AI era: A case study | (Kesim, 2026) | Scopus + Web of Science | Primary Qualitative
21 | ChatGPT appropriation: A catalyst for creative performance, innovation orientation, and agile leadership | (A. Thomas et al., 2024) | Scopus + Web of Science | Primary Quantitative
22 | Chatting with the CEO’s virtual assistant: Impact on climate for trust, fairness, employee satisfaction, and engagement | (Dutta & Mishra, 2021) | Scopus | Primary Quantitative
23 | Combining sociocultural intelligence with artificial intelligence to increase organizational cyber security provision through enhanced resilience | (Trim & Lee, 2022) | Scopus + Web of Science | Primary Qualitative
24 | Command redefined: Neural-adaptive leadership in the age of autonomous intelligence | (Riti et al., 2025) | Scopus + Web of Science | Primary Mixed-Methods
25 | Data science in the management of healthcare organizations | (Faria et al., 2025) | Scopus + Web of Science | Primary Quantitative
26 | Design of digital and intelligent financial decision support system based on artificial intelligence | (Jia et al., 2022) | Scopus + Web of Science | Conceptual/Theoretical
27 | Developing a digital transformation model to enhance the strategy development process for leadership in the South African manufacturing sector | (Gaffley & Pelser, 2021) | Scopus + Web of Science | Primary Quantitative
28 | Digital innovative healthcare during a pandemic and beyond: A showcase of the large-scale and integrated Saudi smart national health command centre | (Alharbi et al., 2025) | Scopus + Web of Science | Primary Qualitative
29 | Digital leadership: Towards a dynamic managerial capability perspective of artificial intelligence-driven leader capabilities | (Hossain et al., 2025a) | Scopus + Web of Science | Primary Qualitative
30 | Digital transformation of military organisations | (Vasilescu, 2025) | Web of Science | Primary Qualitative
31 | Driving innovations: Nursing leadership think tank explores AI solutions | (Aldrich et al., 2025) | Scopus + Web of Science | Primary Qualitative
32 | Driving SCM and HR transformation with AI through the role of leadership and innovation as mediators | (Somanathan et al., 2025) | Scopus | Primary Quantitative
33 | Eavesdropping on UNESCO AI policy, leadership, and ethics | (Bean et al., 2025) | Scopus + Web of Science | Primary Qualitative
34 | Education, empowerment, and elevating nursing voices: Nursing informatics leaders’ perspectives on the path forward with artificial intelligence in nursing | (Turchioe et al., 2025) | Scopus + Web of Science | Primary Qualitative
35 | Empowering nurse leaders: Readiness for AI integration and the perceived benefits of predictive analytics | (Kotp et al., 2025) | Scopus + Web of Science | Primary Quantitative
36 | Empowering organizational management with artificial intelligence-enhanced data analytics solutions | (Rais et al., 2025) | Scopus | Conceptual/Theoretical
37 | Ethical implications of artificial intelligence accessibility in the United Arab Emirates: Bridging the digital divide | (Hassanien et al., 2025) | Scopus | Primary Quantitative
38 | Evaluating and modelling artificial intelligence and emotional intelligence to improve cybersecurity employee ethical competence model | (Abidin et al., 2025) | Scopus | Primary Quantitative
39 | Evolution from human virtual teams to artificial virtual teams supported by artificial intelligence: Results of literature analysis and empirical research | (Flak & Pyszka, 2022) | Web of Science | Primary Qualitative
40 | Executive insights in the age of AI and global disruption: Navigating change, technology, and strategy | (Gregory et al., 2026) | Scopus + Web of Science | Primary Qualitative
41 | Faithful innovation: Negotiating institutional logics for AI value alignment among Christian churches in America | (Cheong & Liu, 2025) | Scopus + Web of Science | Primary Qualitative
42 | Generative AI integration in leadership practice: Foundations, challenges, and opportunities | (Tabata et al., 2025) | Scopus + Web of Science | Conceptual/Theoretical
43 | Healthcare leaders’ experiences of implementing artificial intelligence for medical history-taking and triage in Swedish primary care: An interview study | (Siira et al., 2024) | Scopus + Web of Science | Primary Qualitative
44 | Healthcare leadership in the modern age of artificial intelligence: Are we organizationally ready? | (Iannello, 2026) | Scopus | Conceptual/Theoretical
45 | Healthcare professionals’ perceptions of future leadership in digital healthcare: A qualitative study | (Määttä et al., 2026) | Scopus + Web of Science | Primary Qualitative
46 | How does cost leadership strategy suppress the performance benefits of explainability of AI applications in organizations? | (Talaei et al., 2024) | Web of Science | Primary Quantitative
47 | How task-AI fit influences hotel employees’ job crafting and self-esteem threat: The moderating effect of leader AI crafting | (Lu & Currie, 2026) | Scopus + Web of Science | Primary Quantitative
48 | Identification of important factors in an inpatient fall risk prediction model to improve the quality of care using EHR and electronic administrative data: A machine-learning approach | (Lindberg et al., 2020) | Scopus + Web of Science | Primary Quantitative
49 | Implementation of artificial intelligence and the roles of educational leadership: Investigating the expectations of kindergartens’ principals | (Ali, 2025) | Web of Science | Primary Qualitative
50 | Influence of artificial intelligence on engineering management decision-making with mediating role of transformational leadership | (Abositta et al., 2024) | Scopus + Web of Science | Primary Quantitative
51 | Information and communication technology medicine: Integrative specialty for the future of medicine | (Jongen, 2023) | Web of Science | Conceptual/Theoretical
52 | Innovation and design in the age of artificial intelligence | (Verganti et al., 2020) | Scopus + Web of Science | Conceptual/Theoretical
53 | Institutionalizing predictive AI in public administration: Algorithmic governance and the case of a wildfire forecasting system | (E. Kim, 2026) | Scopus | Primary Mixed-Methods
54 | Integrating artificial intelligence into a talent management model to increase the work engagement and performance of enterprises | (Rožman et al., 2022) | Scopus + Web of Science | Primary Quantitative
55 | Integration of transformative leadership, artificial intelligence, and the TPACK framework for efficient pedagogy: A documentary analysis | (Alshamsi, 2025) | Scopus | Primary Qualitative
56 | Leadership in AI terminology governance: From anomia to agency | (Haskell & Clark, 2025) | Scopus + Web of Science | Primary Qualitative
57 | Leadership in radiology in the era of technological advancements and artificial intelligence | (Wichtmann et al., 2026) | Scopus + Web of Science | Conceptual/Theoretical
58 | Lessons on AI implementation from senior clinical practitioners: An exploratory qualitative study in medical imaging and radiotherapy in the UK | (Stogiannos et al., 2025) | Scopus + Web of Science | Primary Qualitative
59 | Making AI work for government: Critical success factors analysis using R-SWARA | (Brillianto et al., 2024) | Scopus | Primary Quantitative
60 | Maximizing employee engagement through artificial intelligent organizational culture in the context of leadership and training of employees: Testing linear and non-linear relationships | (Rožman et al., 2023b) | Scopus + Web of Science | Primary Quantitative
61 | Military prudence and technological disruption–the ethics of change management in the military | (Hovd, 2025) | Scopus | Conceptual/Theoretical
62 | Moving beyond ‘proof points’: Factors underpinning AI-enabled business model transformation | (Black et al., 2024) | Scopus + Web of Science | Primary Mixed-Methods
63 | Non-hierarchic leadership collaboration: Exploring the adoption of AI-driven social networking for addressing social challenges in an extra-organizational environment | (Basilio et al., 2025) | Scopus + Web of Science | Primary Mixed-Methods
64 | Not all that can be automated should be automated: Strategic minimalism as a disciplined and ethically grounded approach to AI adoption | (Frimpong, 2025) | Scopus | Conceptual/Theoretical
65 | Perceptions and knowledge of machine learning for paediatric related decision support in emergency care: A UK and Ireland network survey study of clinician leaders | (Leonard et al., 2026) | Scopus + Web of Science | Primary Quantitative
66 | Perspectives and experiences of nurse managers on the impact of artificial intelligence on nursing work environments and managerial processes: A qualitative study | (Göktepe & Sarıköse, 2025) | Scopus + Web of Science | Primary Qualitative
67 | Platform government in the era of smart technology | (S. Kim et al., 2022) | Scopus + Web of Science | Conceptual/Theoretical
68 | Predictive analysis of the psychological state of charismatic leaders on employees’ work attitudes based on artificial intelligence affective computing | (Y. Liu & Song, 2022) | Scopus + Web of Science | Primary Quantitative
69 | Principles of responsible digital implementation: Developing operational business resilience to reduce resistance to digital innovations | (Cheng et al., 2024) | Scopus + Web of Science | Conceptual/Theoretical
70 | Psychology of leadership: Understanding AI adoption, self-efficacy, green creativity, and risk perception among Oman’s business bosses | (Abdelfattah et al., 2025) | Scopus + Web of Science | Primary Quantitative
71 | Redefining the digital frontier: Digital leadership, AI, and innovation driving next-generation tourism and hospitality | (Seraj et al., 2025) | Scopus + Web of Science | Primary Quantitative
72 | Relational leadership in the age of AI: Rethinking pedagogy for medical affairs | (Kaan et al., 2025) | Scopus + Web of Science | Conceptual/Theoretical
73 | Robotic digitalization and business success: The central role of trust and leadership in operational efficiency—A hybrid approach using PLS-SEM and fsQCA | (González-Mohíno et al., 2024) | Scopus + Web of Science | Primary Mixed-Methods
74 | Strategic leadership at high altitude: Investigating how AI affects the required skills of top managers | (Bevilacqua et al., 2026) | Scopus + Web of Science | Primary Qualitative
75 | Success factors to deliver organizational digital transformation: A framework for transformation leadership | (Philippart, 2022) | Web of Science | Conceptual/Theoretical
76 | The crucial role of explainable artificial intelligence (XAI) in improving health care management | (Johannssen & Chukhrova, 2025) | Scopus + Web of Science | Conceptual/Theoretical
77 | The education leadership challenges for universities in a postdigital age | (Ellis, 2025) | Scopus | Conceptual/Theoretical
78 | The influence of artificial intelligence-driven capabilities on responsible leadership: A future research agenda | (Hossain et al., 2025b) | Scopus + Web of Science | Conceptual/Theoretical
79 | The positive effects of employee AI dependence on voice behavior: Based on power dependence theory | (J. Liu et al., 2025) | Scopus + Web of Science | Primary Mixed-Methods
80 | The rise of AI-assisted instructional leadership: Empirical survey of generative AI integration in school leadership and management work | (Berkovich, 2025) | Scopus + Web of Science | Primary Quantitative
81 | The role of AI in digital leadership: New competencies of leaders | (Ingaldi & Ulewicz, 2025) | Scopus + Web of Science | Conceptual/Theoretical
82 | The role of leadership and communication in AI assimilation: Case studies from Saudi Arabia’s public sector organizations | (Alshahrani et al., 2025) | Scopus + Web of Science | Primary Qualitative
83 | The roles of AI and educational leaders in AI-assisted administrative decision-making: A proposed framework for symbiotic collaboration | (Dai et al., 2025) | Scopus + Web of Science | Conceptual/Theoretical
84 | Visual storytelling and explainable intelligence in organizational change communication | (Kashyap et al., 2025) | Scopus | Conceptual/Theoretical
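As the note to Table 1 states, indexation is not mutually exclusive: a study indexed in both Scopus and Web of Science counts toward the coverage of each database. A minimal illustrative sketch (not the authors' code; the sample entries and field format are assumptions based on the "Indexation" column above) shows how such overlapping coverage can be tallied:

```python
# Illustrative sketch: tallying non-mutually-exclusive database coverage
# from an "Indexation" column like the one in Table 1.
from collections import Counter

# Hypothetical subset of the Indexation column (four included studies).
indexation = [
    "Scopus + Web of Science",
    "Scopus",
    "Web of Science",
    "Scopus + Web of Science",
]

coverage = Counter()
for entry in indexation:
    for db in entry.split(" + "):
        # A study indexed in both databases contributes to each count.
        coverage[db] += 1

print(dict(coverage))  # per-database counts sum to more than the number of studies
```

Because of the overlap, per-database counts (here 3 + 3 = 6) exceed the number of included studies (4), which is why the table note cautions against summing the Indexation column.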
Table 2. Compact thematic synthesis table: analytical dimensions, subthemes, integrative insights, and representative studies (final corpus: n = 84). Representative studies are illustrative rather than exhaustive.
Analytical Dimension | Subtheme | Integrative Synthesis (Compact) | Representative Studies (APA In-Text)
AI-augmented decision-making | Decision support & predictive analytics | AI most commonly augments leaders via decision-support systems and predictive analytics that structure routine judgments and enable faster, data-grounded choices across operational and strategic levels. | (Alharbi et al., 2025; Dey et al., 2024; Dutta & Mishra, 2021; Lindberg et al., 2020; Satish et al., 2025; Siira et al., 2024)
AI-augmented decision-making | Strategic transformation & innovation enablement | AI-enabled augmentation is linked to digital transformation, innovation, and business-model adaptation, positioning leaders to reconfigure strategy and operating models. | (Black et al., 2024; Gaffley & Pelser, 2021, 2025; Hossain et al., 2025a; Hyiamang & Liu, 2025; Philippart, 2022)
AI-augmented decision-making | Generative AI and partial delegation of authority | Generative and autonomous systems extend augmentation into communication, ideation, and partially delegated decision authority, intensifying the need for calibrated autonomy and human oversight. | (Cheong & Liu, 2025; Flak & Pyszka, 2022; Riti et al., 2025; Shahzad et al., 2026; Ülkü & Erol, 2025; Verganti et al., 2020)
Leadership competencies & role shifts | AI/data literacy as baseline competence | Leaders increasingly require AI and data literacy to interpret algorithmic outputs, recognize limits, and translate insights into decisions and workflow integration. | (Alshamsi, 2025; Gaffley & Pelser, 2021, 2025; Kesim, 2026; Quttainah et al., 2025)
Leadership competencies & role shifts | Decision architect, coordinator, boundary spanner | Role reconfiguration shifts leaders toward decision architecture (human–AI task allocation), cross-unit coordination, and boundary spanning across technical and operational stakeholders. | (Alharbi et al., 2025; Bevilacqua et al., 2026; Jongen, 2023; Riti et al., 2025; Verganti et al., 2020)
Leadership competencies & role shifts | Human-centric change leadership | As AI absorbs routine analytics, leaders’ relational work, including communication, trust-building, empowerment, and engagement, becomes central to effective implementation and workforce acceptance. | (Borkovich et al., 2024; Cheng et al., 2024; Dutta & Mishra, 2021; Göktepe & Sarıköse, 2025; Turchioe et al., 2025)
Ethical challenges | Accountability & contestability under delegation | Embedding AI in decisions raises responsibility-allocation and contestability challenges, especially where authority is partially delegated and decisions operate at speed. | (Hovd, 2025; E. Kim, 2026; Riti et al., 2025; Vasilescu, 2025)
Ethical challenges | Opacity, explainability, and legitimacy | Opacity (“black box”) undermines legitimacy and trust; explainability and interpretability are positioned as safeguards that support justification, oversight, and calibrated reliance. | (Johannssen & Chukhrova, 2025; Kashyap et al., 2025; S. Kim et al., 2022; Talaei et al., 2024; Ülkü & Erol, 2025; Wichtmann et al., 2026)
Ethical challenges | Bias, fairness, privacy, and regulatory constraints | Bias and distributive fairness risks, alongside privacy and compliance constraints, condition acceptable adoption in HR, public-sector, and high-stakes healthcare contexts. | (Arrooqi & Miqad Alruqi, 2025; Hassanien et al., 2025; Leonard et al., 2026; Petrat et al., 2022; Satish et al., 2025)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sacavém, A.; Machado, A.d.B.; Rodrigues dos Santos, J.; Palma-Moreira, A.; Au-Yong-Oliveira, M. AI-Driven Leadership: Decision-Making, Competencies, and Ethical Challenges—A Systematic Review. Adm. Sci. 2026, 16, 173. https://doi.org/10.3390/admsci16040173
