Review

From Adoption to Audit Quality: Mapping the Intellectual Structure of Artificial Intelligence-Enabled Auditing

1 Department of Accounting, Prince Sultan University, Riyadh 11586, Saudi Arabia
2 Independent Researcher, Kuala Lumpur 50490, Malaysia
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2026, 19(3), 209; https://doi.org/10.3390/jrfm19030209
Submission received: 11 January 2026 / Revised: 24 February 2026 / Accepted: 6 March 2026 / Published: 11 March 2026
(This article belongs to the Special Issue Accounting and Auditing in the Age of Sustainability and AI)

Abstract

This study conducts a bibliometric and content analysis of ‘artificial intelligence-enabled auditing’ over three decades. The use of artificial intelligence (AI) tools in auditing has evolved and is now an imperative practice in the auditing space. Using bibliometric methods via the Bibliometrix R-package (Biblioshiny) and VOSviewer, this research examines the scholarly discussion on AI-enabled auditing, drawing on the Scopus database. The main themes identified are: Theme 1: AI in auditing: readiness, representation, and implementation; Theme 2: data-driven audit ecosystems and digital technologies; and Theme 3: audit quality, professional skepticism, and ethical governance. On the descriptive end, publication trends, prominent authors, articles, and sources are identified. The findings highlight a significant increase in AI-enabled auditing studies since 2018, coinciding with growing global awareness of the importance of AI across all spheres of business. The outcome of this research serves a wide array of stakeholders, including businesses, audit firms, shareholders, and policymakers: it gives business organizations insights into the capabilities of AI-assisted auditing, while policymakers gain access to verifiable, auditable, and regulatory-compliant systems for implementing their regulations. Investors may deepen their understanding of how AI-assisted auditing improves the quality of their investment decisions and of the risks involved. Finally, auditing firms should further invest in applying technology in the auditing environment and ensure high-quality, evidence-based audit outcomes and reporting.

1. Introduction

1.1. Background and Context

Automation has radically altered the nature of the accounting environment, eliminating manual data entry, paper trails, and tedious, repetitive work. Digital transformation leads this new age not merely as an auxiliary tool but as a driving force of strategy, steering a new era of efficiency and accuracy. Technology has also played a leading role in the auditing and assurance sector by automating processes. In the past, auditing was highly labor-intensive, with sampling, compliance testing, and substantive procedures as the main activities. Developments in the late 20th century contributed to this digital transformation, and auditors now use computer-assisted auditing tools (CAATs) to enhance the efficiency of sampling, data extraction, and control testing (Li & Goel, 2025; Mansour et al., 2025). In the Industry 4.0 stage, artificial intelligence (AI), machine learning (ML), natural language processing (NLP), and expert systems have enabled a redesign of the processes and systems used to audit businesses. These AI systems can process large volumes of both structured and unstructured data quickly and at scale, creating new possibilities for gathering audit evidence (Musunuru, 2025). They can resolve the sampling limitations of traditional audits and therefore serve as a vehicle for identifying risks and anomalies in audit observations, as well as for continuous monitoring and quality enhancement (Leocádio et al., 2025; Mansour et al., 2025). The advent of big data analytics also reduces the challenges of traditional auditing. The adoption of digital platforms, blockchains, and integrated enterprise systems has enabled many businesses transitioning to digital operations to adopt more strategic, AI-powered audit methodologies (Sayal et al., 2025; Pravdiuk et al., 2024).
It is now possible to use deep learning models to analyze millions of transactions to detect potential fraud (Ramzan & Lokanan, 2025; Gu et al., 2024), and NLP to scan contracts and disclosures for violations (Zhao & Wang, 2024; Earley, 2023). This shifts the audit profession from backward-looking verification to forward-looking assurance (Raschke et al., 2018).

1.2. Challenges in AI-Enabled Auditing

Despite the advantages of AI applications in the auditing sector, numerous challenges and issues remain regarding adoption. These include the black-box problem (lack of transparency), risks to data privacy and security, the possibility of algorithmic bias, the need to integrate complex systems, high start-up costs, the requirement to train new auditors, regulatory uncertainty, and accountability for AI errors. The biggest problem is the lack of transparency in most AI models (the black-box problem), which undermines explainability, reliability, and accountability. Even highly precise deep learning models may produce insights that are difficult for auditors or regulators to understand, leading to ineffective audit performance (Gu et al., 2024). The latter is also expressed by Zhong and Goel (2024), who claim that AI-based audits lack explainability frameworks, undermining audit quality and creating regulatory compliance risks because auditors cannot justify decisions made by black-box models. The literature also documents ethical problems related to bias and impartiality. If algorithms are trained on incomplete or biased data (Li & Goel, 2025), they will continue to exhibit biases, affecting the objectivity and reliability of the audit. This creates high-risk circumstances in the audit environment, where transactions can be incorrectly classified and anomalies can go undetected. Pravdiuk et al. (2024) further explain this by stating that algorithmic decisions carry legal and reputational liabilities that could lead to discriminatory outcomes or violate established regulatory standards.
Additionally, concerns about data privacy and security impede the adoption of AI in auditing. Data-driven systems that incorporate extensive client data into the auditing process expose firms to a greater risk of data breaches and cyberattacks (Mugwira, 2022). In today's borderless global business environment, the problem has become more complex, requiring auditors to navigate intricate legal regimes governing data ownership and sovereignty. It is therefore vital to balance technological innovation with data protection laws, including the General Data Protection Regulation (GDPR) and its national implementing laws. Professional capabilities and AI governance are pivotal in an AI-enabled auditing environment. Auditable AI controls should be developed, the effects of AI adoption on audit quality should be compared, and AI adoption should be considered across firms of different sizes and regulatory frameworks (Kokina et al., 2025; Almaqtari, 2024). Consequently, AI will transform the auditing profession, and auditing standards, skills, and governance frameworks will simultaneously require comprehensive review.

1.3. Research Gap and Motivation

Although research on AI in auditing is growing rapidly, the available literature remains fragmented and poorly integrated. Existing review-based studies focus mainly on AI adoption in general accounting, emphasizing organizational readiness (Odonkor et al., 2024; Kassar & Jizi, 2026), AI forecasting in financial accounting (Kureljusic & Karger, 2024), and big data in financial accounting (Mohammed Ismail & Abdul Hamid, 2024). Thus, there is relatively little research that integrates the role of these technologies in transforming audit quality, professional judgment, and assurance outcomes across different contexts. Furthermore, despite progress in empirical work on AI's technical capabilities, e.g., anomaly detection and continuous auditing, there has been minimal effort to consolidate these developments into streams of intellectual work or to document how the field has progressed from early adoption to outcome-related issues of explainability, ethics, and governance. Though available reviews are technology-oriented and provide insightful data, they do not systematically map the knowledge structure, thematic development, and research gaps of AI-enabled auditing.
As a result, it is unclear which themes are fundamental, emerging, or driving forces in the literature, and what the substantive gaps are. The present study uniquely combines bibliometric analysis with thematic synthesis to trace the development of AI-enabled auditing literature from 2015 to 2025, specifically identifying three integrated thematic clusters—adoption and capability building, digital technologies and data-driven ecosystems, and audit quality with ethical governance—that bridge conceptual frameworks with implementation realities. By synthesizing 184 articles and identifying research gaps, this study offers a more granular, AI-audit-specific research roadmap rather than limiting itself to broader accounting-technology reviews.

1.4. Significance of This Study

This bibliometric review is particularly timely for several compelling reasons:
First, the body of published literature in this domain has now reached 184 peer-reviewed articles, with an exponential increase since 2018. The literature has therefore reached a degree of maturity and readiness for systematic synthesis, having evolved from scattered exploratory studies into an extensive body of research with clear, identifiable patterns, themes, and trajectories. Second, major audit regulators and standard setters are developing AI-specific guidance. The PCAOB (2025) has published proposals on the use of technological tools and data-analytic approaches to auditing. The IAASB (2024) has initiated several projects to explore the incorporation of AI and data analytics into audit processes. The European Commission has established the AI Act (European Parliament and Council of the European Union, 2024), which has direct implications for the use of AI systems in financial auditing.
Third, industry surveys show that the rate of AI deployment in practice is increasing rapidly. Several recent studies indicate that Big Four accounting firms currently use AI tools for risk assessment, anomaly detection, and analytical procedures (Deloitte, 2024). Cumulative investment in AI-based audit technology is expected to exceed $3.5 billion by 2027. This level and rate of increase clearly demonstrate the need for a systematic synthesis to guide the responsible deployment of AI technologies, support informed investment decision-making, and identify implementation issues associated with integrating AI-driven technology into the auditing profession.
Fourth, this research also has practical value for stakeholders in the auditing ecosystem. Due to the accelerating pace of digital transformation after COVID-19 and the rapid advancement of machine learning (ML), natural language processing (NLP), and blockchain, there is now an urgent need to consolidate existing knowledge and to provide a framework for future research. It is also critically important to understand the transition from research focused primarily on early adopters to current issues, such as audit quality, explainability, and ethical governance, to responsibly integrate AI into auditing practice. The findings should also provide auditing firms with evidence on AI implementation patterns, the challenges associated with implementing the technology, and best practices, which will aid firms in making evidence-based strategic decisions on AI investments and workforce development. Regulators and standard setters gain insight into AI governance, ethical, and quality assurance issues, providing a holistic framework for structuring policy for the implementation of AI in the auditing profession. Investors and audit clients will gain insight into how AI-enabled auditing will transform audit quality, risk assessment, and assurance capabilities, enabling them to make informed decisions about their audit engagements.
To summarize, the convergence of a sufficient volume of literature, the need for immediate policy action, the rapid pace of implementation in practice, the field's stage of development, and its educational needs make this a particularly favorable time for a systematic bibliometric analysis of the literature on AI-enabled auditing. AI-enabled auditing sits at the intersection of several pre-existing research fields (accounting, computer science, and information systems), whose cross-disciplinary diffusion has created a new, emergent body of knowledge. This is not a limitation: historically, young fields have a diffuse, cross-disciplinary character, and their studies cannot be gathered into a set of "core" journals until the field stabilizes. Tranfield et al. (2003) and Zupic and Čater (2015) illustrate this process, providing evidence that bibliometric studies are most valuable when conducted in the early stages of a field's evolution rather than after the field has stabilized and published its initial core body of knowledge. This review aims to impose intellectual order on a rapidly changing, fragmented body of research. It is not simply an account of what has already been published; it also organizes current research into an overview that summarizes work within the discipline, identifies areas where further research is needed, and traces where consensus is forming around the driving theoretical constructs in auditing.
In addition, the organization provided by this review will serve as a framework for developing future research contributions to high-impact journals in the academic discipline of auditing, as well as systematic foundations for established areas of research activity, ultimately providing a basis for evidence-based decision-making.

1.5. Research Objectives and Questions

To address the identified research gaps and fulfill the significance outlined above, this study pursues three interrelated objectives, each corresponding to a specific research question:
  • Objective 1: Map the publication trends and evolution of AI-enabled auditing research
  • RQ1: What are the current trends in the publication of research related to AI-enabled auditing?
  • Objective 2: Identify and analyze the dominant thematic structures in existing research
  • RQ2: What are the dominant themes in the existing research on AI-enabled auditing?
  • Objective 3: Determine this study’s theoretical and practical implications and potential future research on AI-enabled auditing.
  • RQ3: What are the theoretical and practical implications of AI-enabled auditing research, and what avenues exist for future investigation?

1.6. Structure of This Paper

To achieve the above objectives, this study is structured as follows: Section 2 presents the methodological framework, including data collection procedures, search strategies, inclusion/exclusion criteria, and analytical techniques. Section 3 discusses the descriptive results (publication trends, authors, sources and countries), presents the thematic analysis derived from bibliometric clustering, and proposes directions for future research. Section 4 synthesizes the theoretical and practical implications, discusses limitations, and concludes.

2. Materials and Methods

Bibliometric analysis is a common, structured method for analyzing large volumes of scientific information. Numerous studies in this area have used bibliometric procedures (Sundarasen et al., 2024a; Khatib et al., 2023; Baker et al., 2020). These quantitative techniques help researchers examine changes in the dynamics of a field and identify new and developing areas within it (Donthu et al., 2021).

2.1. Data Collection Procedure and Search Strategy

Database Selection and Justification

Data were extracted from the Scopus database in July 2025. Scopus was chosen for its high-quality, comprehensive, and highly relevant journals, its superior indexing quality and metadata consistency (Baker et al., 2020; Ülker et al., 2023), and its established citation structure, enabling a focused and consistent bibliometric and content analysis using VOSviewer version 1.6.20 (Ellili, 2023; Jain & Tripathi, 2023; Khatib et al., 2023; Sundarasen et al., 2024b). Additionally, Scopus provides the extensive citation-tracking capabilities and document relationships needed to conduct our network analysis and identify the intellectual lineages of scholarly work. Using a single database also avoids inconsistencies between search protocols and duplicate results while keeping the study reproducible, a standard practice in bibliometric research (Yilmaz & Tuzlukaya, 2024). The use of Scopus as the sole database has been validated by previous bibliometric studies that successfully captured the relevant literature (Baker et al., 2020; Ülker et al., 2023) while providing methodological clarity and consistency.

2.2. Search Terms and Initial Retrieval

For data extraction, a systematic search was conducted in Scopus, examining titles, abstracts, and keywords using the following Boolean search string: (‘audit’ OR ‘auditing’) AND (‘technology*’ OR ‘digital transformation’ OR ‘artificial intelligence’ OR ‘AI’). This broad search strategy was designed to capture the full spectrum of technology-enabled auditing research while ensuring that AI-related studies were included within the larger context of audit technology evolution. The initial search yielded 2514 academic articles.
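In Scopus's advanced-search syntax, the Boolean string above corresponds roughly to the following sketch (the field code and LIMIT-TO clauses mirror the criteria described below; the exact query used for the export may differ):

```
TITLE-ABS-KEY ( ( "audit" OR "auditing" )
  AND ( "technology*" OR "digital transformation"
        OR "artificial intelligence" OR "AI" ) )
  AND ( LIMIT-TO ( DOCTYPE , "ar" ) )
  AND ( LIMIT-TO ( LANGUAGE , "English" ) )
```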
To ensure relevance and quality, we applied a systematic multi-stage filtering process based on predetermined inclusion and exclusion criteria:
Inclusion Criteria:
  • Peer-reviewed journal articles and early access articles.
  • Published in the English language.
  • Focus on artificial intelligence, machine learning, or advanced analytics applications in auditing contexts.
  • Published between 1986 and July 2025.
  • Indexed in Business, Business Finance, and Management subject areas in Scopus.
Exclusion Criteria:
  • Conference papers, book chapters, editorials, and non-peer-reviewed materials.
  • Articles not published in English.
  • Studies focused exclusively on algorithm development without auditing applications.
  • Articles where AI or audit technology was mentioned only tangentially or in passing.
  • Duplicate publications or multiple versions of the same study.
A strategic filter, Scopus's citation topic ‘Meso—Management & Economics’, was applied to the 2514 initially retrieved articles. The purpose of this filter is to ensure that the included articles address the adoption of AI in auditing within organizational, managerial, and economic contexts rather than from a purely technical, computer-science perspective. The filter aligns the resulting dataset with the focus of our study by retaining research that addresses audit practice, governance, implementation challenges, and organizational impact. The use of this citation topic filter is also consistent with established bibliometric practice in accounting and auditing research, where the focus is on the intersection of technology and business rather than on engineering detail. To validate the filter, we manually reviewed a random sample of 50 of the excluded articles. Of these, 46 (92 percent) focused primarily on the technical development and optimization of AI models or on computer-science applications and contained no reference to auditing contexts, audit quality, or implications for professional practice. The remaining four articles were of marginal relevance, and their inclusion in a sensitivity analysis would not alter our findings. This validation suggests that the citation topic filter successfully excluded technically focused papers while retaining research applicable to auditing practice and management.
Thereafter, we restricted the document type to ‘Articles’ and ‘Early Access’ articles only, excluding conference papers, book chapters, and editorial materials to ensure peer-review quality. Language was restricted to English to facilitate comprehensive content analysis and thematic coding. These filters reduced the dataset to 184 articles. Following automated filtering, two independent reviewers conducted abstract screening of all 184 articles to verify relevance to AI-enabled auditing. Articles were included if they explicitly addressed: (1) application of AI technologies in audit processes, (2) impact of AI on audit quality or auditor judgment, (3) organizational adoption of AI in audit firms, or (4) governance and ethical considerations related to AI in auditing. Disagreements between reviewers were resolved through discussion and consultation with the third author. This manual validation confirmed that all 184 articles met our relevance criteria and should be retained for analysis.

2.3. Data Analysis Techniques

This study employs a comprehensive two-stage bibliometric analysis approach combining performance analysis and science mapping (Donthu et al., 2021). The analysis integrates multiple software tools to ensure methodological rigor and triangulation of findings.

2.3.1. Performance Analysis

Performance analysis was conducted using Bibliometrix R-package (Biblioshiny) version 4.0 (Aria & Cuccurullo, 2017). This stage generated descriptive statistics and visualizations, including annual publication trends and growth patterns, most productive authors, most cited articles and their contributions, leading journals and publication sources, most active institutions and their research focus areas, and geographic distribution of research output.
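As a minimal illustration of the kind of descriptive output this stage produces, the sketch below counts publications per year from a Scopus CSV export. This is an assumption-laden Python sketch, not the actual Biblioshiny workflow (which runs in R), and the "Year" column name is a guess at a generic export header.

```python
import csv
from collections import Counter

def annual_publication_counts(csv_lines):
    """Count publications per year from Scopus export rows.

    Assumes a column named "Year"; the actual export header may differ.
    """
    return Counter(row["Year"] for row in csv.DictReader(csv_lines))

# Toy example with hypothetical rows, not the actual dataset:
counts = annual_publication_counts(["Year,Title", "2024,A", "2024,B", "2025,C"])
```

The same counting pattern extends directly to per-author, per-source, and per-country tallies, which is essentially what the performance-analysis stage reports.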

2.3.2. Science Mapping and Thematic Analysis

Science mapping was performed using VOSviewer version 1.6.20 (Van Eck & Waltman, 2009) to visualize intellectual structures and identify thematic clusters. VOSviewer is particularly well-suited for this purpose as it employs advanced algorithms to organize documents based on bibliographic coupling, co-citation patterns, and keyword co-occurrence, creating visual network maps that reveal the underlying knowledge structure of a research field (Baker et al., 2020).

2.3.3. Keyword Co-Occurrence Analysis Procedure

The thematic structure of the AI-enabled auditing literature was developed using the following procedures.
Step 1: Data Preparation—Author keywords were extracted from the Scopus records of the 184 articles. Author keywords were chosen as the data source because they best represent the authors’ own views of the core themes of their work and therefore provide an accurate representation of each study’s substantive focus.
Step 2: Setting of Thresholds—A minimum threshold of 5 occurrences (i.e., only author keywords that appeared in at least 5 articles were retained) was established. This threshold balances completeness and interpretability, excluding highly idiosyncratic keywords while preserving meaningful thematic indicators. Upon review, 47 author keywords met this minimum threshold.
Step 3: Co-Occurrence Matrix Creation—A co-occurrence matrix was created, quantifying how often each pair of keywords (e.g., A and B) occurred together within the sample of 184 articles (i.e., the matrix expresses the strength of the co-occurrence relationship between any two keywords). A high co-occurrence strength indicates thematic relatedness: keywords that frequently appear together in the same articles tend to address related topics.
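The thresholding and matrix-construction steps can be sketched in Python as follows. This is a minimal illustration of the technique only; VOSviewer performs these computations internally, and the keyword lists in the example are hypothetical.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_matrix(keyword_lists, min_occurrences=5):
    """Build a keyword co-occurrence count from per-article author-keyword lists.

    keyword_lists: one list of author keywords per article.
    min_occurrences: keep only keywords appearing in at least this many
    articles (Step 2's threshold; 5 in the study, lowered here for the toy data).
    """
    # Count in how many articles each keyword appears (set() drops in-article repeats).
    freq = Counter(kw for kws in keyword_lists for kw in set(kws))
    kept = {kw for kw, n in freq.items() if n >= min_occurrences}

    # Count co-occurrences for each unordered pair of retained keywords.
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws) & kept), 2):
            pairs[(a, b)] += 1
    return kept, pairs

# Toy example (hypothetical keywords, not the actual dataset):
articles = [["audit quality", "artificial intelligence"],
            ["artificial intelligence", "machine learning"],
            ["audit quality", "artificial intelligence", "machine learning"]]
kept, pairs = cooccurrence_matrix(articles, min_occurrences=2)
```

The `pairs` counter is exactly the upper triangle of the co-occurrence matrix; clustering algorithms such as VOSviewer's then operate on these pairwise strengths.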
Step 4: Visual Network Production—The VOSviewer clustering algorithm was used to cluster all author keywords based on each keyword’s co-occurrence characteristics. VOSviewer generated a network visualization in which each node represents a keyword, node size represents the frequency of keyword occurrence, lines between nodes represent co-occurrence relationships, line thickness represents co-occurrence strength, and colors represent cluster membership. The resulting network map (Figure 4) identified 5 distinct clusters as the optimal solution and provides an intuitive representation of the thematic landscape.
Step 5: Cluster Interpretation and Theme Identification—Following automated clustering, the research team conducted iterative qualitative analysis to interpret and name each cluster. This involved examining all keywords within each cluster to identify common conceptual threads, reviewing representative high-cited articles within each cluster to understand substantive focus, conducting detailed content analysis of 20–30 articles per cluster to verify thematic coherence, and iterative discussion among all three authors to reach consensus on theme labels and boundaries.
Step 6: Theme Validation—To validate our thematic interpretation, we cross-referenced the themes with highly cited articles within each cluster, confirmed that these articles aligned with our theme characterization, and checked that the keyword composition within clusters was conceptually coherent. Independent coding confirmed that articles assigned to each cluster through keyword analysis exhibited thematic consistency.
Based on the qualitative analysis and discussion among the authors, the 5 clusters shown in Figure 4 were consolidated into 3 main themes. The green, blue, and yellow clusters were merged under ‘Digital Technologies’. The final three themes are: Theme 1: AI in Auditing: Readiness, Representation, and Implementation; Theme 2: Data-Driven Audit Ecosystems and Digital Technologies; and Theme 3: Audit Quality, Professional Skepticism, and Ethical Governance. These themes structure our thematic synthesis in Section 3 and provide a framework for understanding the intellectual landscape of AI-enabled auditing research.

2.4. Methodological Limitations and Boundary Conditions

We acknowledge several methodological limitations and boundary conditions that contextualize our findings:
Potential Issues with Single Database Coverage: We rely on Scopus and therefore may miss articles indexed only in Web of Science (WoS), IEEE Xplore, ACM Digital Library, or discipline-specific databases. While Scopus offers the high-quality metadata essential for bibliometric evaluation and our dataset of 184 articles represents substantial coverage, gaps may remain. In a sensitivity analysis, we manually checked 30 Web of Science articles against our Scopus dataset and found an 87% overlap in citing references between the two sources; the impact on the thematic structure reported here is therefore minimal.
Language Restrictions: We excluded non-English articles. However, only 7 articles in the Scopus results (less than 1%) were non-English.
Citation-Based Metrics: Recent articles (2024–2025) have limited citation counts, potentially underweighting important emerging work. We mitigate this by supplementing citations with publication trends and keyword-based thematic clustering rather than relying on citations alone.
Keyword-Based Thematic Analysis: Automated clustering based on author keywords, while systematic and replicable, may miss nuanced conceptual relationships apparent only in full-text analysis. We addressed this through manual validation, reviewing the full texts of 20–30 representative articles per cluster to verify thematic coherence and refining theme interpretations through iterative author discussion.
These limitations represent inherent trade-offs in the design of bibliometric research. Our choices prioritize methodological consistency, replicability, and analytical rigor over exhaustive comprehensiveness. Findings of AI-enabled auditing scholarship should be interpreted within these boundary conditions.

3. Results

3.1. Publication Trend

The trend in the publication of AI-enabled auditing research (Figure 1) is relatively flat between the mid-1980s and 2015, with no more than 3 studies published each year. Thereafter, research output grew swiftly, peaking at 46 publications in 2024, with 40 in 2025. Several factors may explain this trajectory. First, the emergence of big data, blockchain, and advanced analytics in financial reporting has fueled interest in the implications of AI for auditing. Second, legislative and social pressure arising from large corporate scandals has demanded more efficient, technologically advanced auditing methods. Third, the COVID-19 pandemic accelerated digital transformation, pushing auditors and companies towards remote, AI-enabled solutions. Finally, the availability of natural language processing, robotic process automation, and machine learning tools has lowered the barriers for both practitioners and researchers exploring AI in the auditing field. These combined factors have made AI-auditing research a mainstream scholarly concern in recent years.

3.2. Most Cited Articles

The notable articles, as shown in Table 1, shape the intellectual framework of AI-enabled auditing research and align with the three broad themes established in this paper (Section 3.7). The pioneering literature views AI as a revolutionary tool in auditing, mostly in terms of efficiency and increased analytical power. Kokina and Davenport (2017), Sutton et al. (2016), and Fisher et al. (2016) offer the first accounts of AI-enabled auditing. They suggest that the traditional sampling-based audit should be replaced with a full-population, continuous, and proactive approach. The constant argument in these studies is that AI is a supplement to auditors rather than a replacement, and that human judgment remains central even as AI reshapes audit processes. This stream provides the conceptual foundation for Theme 1 because it focuses on the potential of technology, readiness to adopt, and the reconfiguration of audit processes.
Second, a significant number of studies discussed from the perspectives of digital transformation and the data-driven audit ecosystem. Manita et al. (2020), Hajek and Henriques (2017), and Han et al. (2023) describe how AI, ML, NLP, and blockchain enhance auditors’ ability to process complex, unstructured data in real time. This research is gradually taking a more tool-centric approach, solidifying Theme 2, which considers AI, analytics, and blockchain as part and parcel of the present audit system. Though these studies report enhanced detection capacity and transparency, they also note that empirical validation and real-world implementation evidence are inadequate.
Third, ethical, behavioral, and governance issues have emerged as a distinguishing aspect of the literature. Munoko et al. (2020) discussed ethical risks (such as bias, explainability, accountability, and unequal access to advanced technologies), coinciding with Theme 3. This is further supported by corresponding evidence from adoption-oriented and contextual studies, such as those by Damerji and Salimi (2021) and Albitar et al. (2021), indicating that technological readiness on its own is not sufficient and that behavioral perceptions, institutional pressures, and crisis contexts (e.g., COVID-19) play a key role in influencing AI usage in practice. These studies indicate that implementing AI poses emerging risks to audit quality unless governance frameworks, professional judgment, and ethical protections are modified accordingly.
In summary, these articles demonstrate a progression in the literature: from belief in the power of technology, through recognition of multifaceted digital ecosystems, to skepticism about judgment, ethics, and institutional legitimacy. Although the initial studies focused more on what AI can do for auditing, the more recent ones highlight how AI should be regulated, explained, and incorporated without compromising professional skepticism, accountability, and trust. This development supports the thematic organization of this research and the need for future studies that move beyond conceptual potential toward empirically grounded, governance-conscious, and practice-focused research.

3.3. Most Published Authors (On AI-Enabled Auditing) and Affiliations

The most published authors in AI-enabled auditing research are shown in Table 2, along with their publications, institutional affiliations, and primary domains of interest. The results indicate that continuous auditing, audit analytics, and AI are currently at the forefront of the academic agenda. The author with the most publications, Miklos A. Vasarhelyi, plays a dominant role in shaping the discussion of continuous assurance and audit automation. Other prominent authors have examined issues related to audit quality, risk assessment, audit analytics, and governance in auditing. From a regional perspective, scholars based in institutions in the United States, East Asia (Taiwan and Hong Kong), and the Middle East have dominated research in this domain, suggesting that economies with high levels of digital infrastructure are spearheading audit innovation. While U.S. scholars mainly focus on system-level audit transformation and continuous assurance, researchers in Asia and the Middle East emphasize predictive analytics, fraud detection, audit quality, and technology adoption. These themes are complementary but reflect distinct research priorities. In summary, Table 2 displays the interdisciplinary and globally spread nature of AI-enabled audit research, revealing intellectual leadership concentrated within a small group of highly productive researchers.

3.4. Prominent Sources

The concentration of publications in the major accounting and audit journals presented in Table 3 indicates that the field is gaining legitimacy and maturing. The most prominent source is the International Journal of Accounting Information Systems. The publication of articles on AI and auditing in this journal underscores the importance of technological orientation in accounting and auditing. Similarly, other prominent journals, such as The Journal of Emerging Technologies in Accounting, Journal of Financial Reporting, and Managerial Auditing Journal, show that research on AI-enabled auditing is not limited to a technological standpoint but also addresses other dimensions, such as audit quality, audit judgment, risk assessment, and governance. The diversity of sources indicates that the study of AI in auditing has passed the exploratory stage and has become firmly rooted in the general scholarly literature on accounting. However, continued interdisciplinary integration and empirical development remain crucial.

3.5. Country Contribution

As Figure 2 shows, AI-enabled auditing research is concentrated in a few countries, with the United States, China, and the United Kingdom at the forefront. These countries lead in both publication output and citation impact, owing to their developed research infrastructure, access to funding, and closer integration between academia and the audit profession. Australia, Spain, Jordan, India, Canada, the UAE, and Finland make moderate but significant contributions, mostly by partnering with Western institutions. The result is a skewed research landscape in which intellectual leadership is concentrated in developed economies, despite the topic's worldwide relevance. This discrepancy raises important concerns, namely, whether existing research output can be applied or generalized across emerging audit settings whose institutional, regulatory, and technological characteristics differ widely. Moving forward, developing countries should more actively pursue research on AI-enabled auditing while accounting for institutional and cultural contexts.

3.6. Thematic Map

The thematic map (Figure 3) provides a general view of the intellectual maturity and structure of research on AI-enabled auditing, a field that is consolidating its main concepts and developing into more sophisticated and practical areas. The core themes in the bottom-right (basic) quadrant are artificial intelligence, auditing, and accounting. These themes have weak internal development but are well connected to other themes externally. Meanwhile, big data, data analytics, digital transformation, and deep learning fall into the transitional category, as they are enablers that sit between the current state of traditional audit analytics and the future state of more sophisticated AI-powered decision systems.
The quadrant with both high centrality and high density is the motor themes quadrant. Themes in this quadrant include machine learning, AI systems, blockchain, information technology, decision-making, audit quality, and audit firms, indicating that these areas are both conceptually complex and make a significant contribution to the field’s development. This aligns with previous arguments calling for exploratory adoption research to be channeled into performance-based results, such as improving audit quality, augmenting judgment, and transforming the entire firm. The joint prominence of blockchain and AI highlights increased attention to overall digital assurance ecosystems rather than individual technologies.
On the other hand, niche topics such as AI technologies, technology adoption, and innovation are well developed internally but poorly connected externally. Although theoretically rich, these areas remain fragmented, suggesting the need for greater integration with mainstream themes such as audit, audit quality, and governance. Lastly, emerging or declining themes such as digitization, internal audit, internal control, and PLS-SEM-based research suggest either an early phase of research development or saturation. Internal audit, internal control, and PLS-SEM-based methodologies are likely facing saturation, as current research is more skewed towards the application of machine learning methodologies, whilst digitization could be an emerging area of research in AI-enabled auditing.
Taken together, Figure 3 supports the previous claim that AI in the context of auditing has moved beyond adoption rhetoric to substantive discussions of audit quality, governance, and decision-making, although it remains scattered across technological progress and issues of institutional, ethical, and human judgment. This marks an obvious future research direction: integrating technical innovation more closely with auditor behaviors, regulatory systems, and assurance validity.
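To make the quadrant logic of the thematic map concrete, the following minimal Python sketch classifies themes by Callon-style density (internal link strength) and centrality (external link strength). The co-occurrence counts, cluster labels, and the mean-based axis split are all hypothetical illustrations, not the study's actual data or the Biblioshiny implementation:

```python
# Illustrative sketch only (hypothetical counts, not the paper's dataset):
# classify themes into the four quadrants of a Callon-style thematic map.
cooc = {  # symmetric keyword co-occurrence counts
    ("machine learning", "audit quality"): 9,        # inside the motor-like theme
    ("technology adoption", "innovation"): 8,        # inside the niche-like theme
    ("digitization", "internal audit"): 2,           # inside the emerging-like theme
    ("machine learning", "technology adoption"): 3,  # cross-theme links
    ("audit quality", "internal audit"): 3,
}
clusters = {
    "ml-quality": ["machine learning", "audit quality"],
    "adoption": ["technology adoption", "innovation"],
    "digitization": ["digitization", "internal audit"],
}

def link(a, b):
    return cooc.get((a, b)) or cooc.get((b, a)) or 0

def density(members):
    # Mean strength of links *within* the theme (internal development).
    pairs = [(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
    return sum(link(a, b) for a, b in pairs) / len(pairs)

def centrality(name):
    # Total strength of links to keywords *outside* the theme (external relevance).
    inside = set(clusters[name])
    outside = [k for m in clusters.values() for k in m if k not in inside]
    return sum(link(a, b) for a in inside for b in outside)

dens = {n: density(m) for n, m in clusters.items()}
cent = {n: centrality(n) for n in clusters}
d_cut = sum(dens.values()) / len(dens)  # split each axis at its mean (illustrative choice)
c_cut = sum(cent.values()) / len(cent)

def quadrant(name):
    hi_d, hi_c = dens[name] >= d_cut, cent[name] >= c_cut
    if hi_d and hi_c:
        return "motor"
    if hi_d:
        return "niche"
    if hi_c:
        return "basic/transversal"
    return "emerging or declining"

for name in clusters:
    print(name, quadrant(name))
```

With these toy counts, the well-developed and well-connected theme lands in the motor quadrant, the internally dense but isolated theme in the niche quadrant, and the weak theme in the emerging/declining quadrant, mirroring the interpretation of Figure 3.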

3.7. Keyword Network and Thematic Analysis

Figure 4 presents the keyword co-occurrence network generated through VOSviewer analysis, providing a visual representation of the thematic structure of AI-enabled auditing research. A keyword co-occurrence network shows the keywords that appear across the reviewed articles; nodes are keywords, links are co-occurrence counts, and clusters form when ideas are repeatedly mentioned together. In this study, the network was created in VOSviewer by identifying author keywords in 184 articles and analyzing their co-occurrences, which, in turn, enabled the identification of conceptually consistent themes underlying the literature and led to the thematic analysis. The size of each circular node is proportional to the frequency of that keyword’s occurrence across the 184 articles; larger nodes indicate keywords that appear more frequently and thus represent more prominent research topics. Node colors represent cluster membership, and VOSviewer identified five distinct clusters: Red, Green, Blue, Yellow, and Purple. Connecting lines between nodes indicate co-occurrence relationships, i.e., keywords that appeared together in the same articles. The thickness of a line represents the strength of co-occurrence: thicker lines indicate keyword pairs that appear together more frequently, suggesting stronger conceptual relatedness. Inter-cluster links, i.e., lines connecting keywords across different colored clusters, reveal thematic bridges: concepts that link different research domains. These crossover keywords (e.g., ‘audit quality,’ ‘machine learning’) indicate interdisciplinary connections within the field.
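The construction just described can be illustrated in a few lines of Python. The keyword lists below are hypothetical stand-ins for the author keywords of the 184 articles, not the actual dataset, and the counting logic is a generic sketch of what co-occurrence tools compute:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one per article (not the real corpus).
articles = [
    ["artificial intelligence", "audit quality", "machine learning"],
    ["artificial intelligence", "blockchain", "big data"],
    ["audit quality", "machine learning", "professional skepticism"],
]

nodes = Counter()  # node size ~ how many articles mention the keyword
edges = Counter()  # edge weight ~ how many articles mention the pair together

for keywords in articles:
    unique = set(keywords)          # count each keyword once per article
    nodes.update(unique)
    for a, b in combinations(sorted(unique), 2):
        edges[(a, b)] += 1          # sorted pair = undirected edge

print(nodes["artificial intelligence"])                 # node frequency
print(edges[("audit quality", "machine learning")])     # co-occurrence weight
```

A clustering step (VOSviewer uses its own modularity-based algorithm) would then group densely connected nodes, producing the colored clusters interpreted in the thematic analysis.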
Artificial intelligence anchors the map, situated at the center and connecting several clusters. Tightly linked keyword groups interconnect with this hub and reflect the issues of:
(i)
Audit and audit quality.
(ii)
Data and digital technologies (e.g., machine learning, deep learning, big data, blockchain, RPA, IoT, text mining).
(iii)
Judgment, ethics, and governance (e.g., decision making, auditor, transparency).
The close interconnections indicate that these topics are not examined in isolation. Instead, AI in auditing is framed as a socio-technical system where technology, data, and professional judgment interact. The following section provides a detailed discussion of the related scholarly themes.
  • Cluster Composition and Thematic Characterization:
  • Cluster 1 (Red)—AI Adoption, Readiness, and Organizational Transformation:
In this cluster, we identify 18 keywords that represent the processes involved in adopting new technology, evaluating an organization’s readiness to implement AI in the audit process, and transforming audit processes using AI. Examples of the keywords in this cluster are “artificial intelligence,” “technology adoption,” “readiness,” and “digital transformation”, all of which refer to research examining how organizations decide to use AI for conducting audits, the factors that contribute to an organization’s readiness to adopt this technology, and the organizational changes necessary for successful adoption. The articles in this cluster focus on an organization’s early considerations for adopting AI and other technologies using a structured approach, including frameworks, readiness assessments, and the sociotechnical implications of implementing AI as part of an audit process.
  • Cluster 2 (Green, Blue, and Yellow)—Digital Technologies and Data-Driven Audit Ecosystems:
This grouping combines the Green, Blue, and Yellow clusters, with keywords such as “Machine Learning,” “Blockchain,” and “Big Data.” These keywords capture the technical capabilities and applications of AI and digital technologies used in conducting audits. Articles in this cluster typically describe specific tools, algorithms, and data-driven methodologies that enable enhanced audit procedures, real-time assurance, comprehensive data analysis, and anomaly detection.
  • Cluster 3 (Purple)—Audit Quality, Professional Skepticism, and Ethical Governance:
This cluster of keywords examines the relationships between AI and audit quality, professional judgment, ethical issues in auditing, and the governance frameworks necessary to support the responsible use of AI technologies in auditing. Examples of keywords found in this cluster include: audit quality; professional skepticism; ethics; governance; accountability; explainability; transparency; bias; trust; and regulation. This cluster presents research on the effects of AI technology on basic auditing principles, the quality and reliability of audit results, ethical issues related to algorithmically generated decisions, and the governance frameworks required for the responsible use of AI technologies. Research articles included in this cluster generally address:
(a)
Concerns regarding the maintenance of professional standards;
(b)
Managing the risks associated with algorithmic decision making;
(c)
Developing appropriate oversight mechanisms for the use of AI technologies.

3.7.1. Theme 1: AI in Auditing: Readiness, Representation, and Implementation

Artificial intelligence (AI) and analytics in auditing are essential and probably disruptive innovations. Nevertheless, a gap between the desire to use AI and its practical application in audit practice has been observed in the literature (Damerji & Salimi, 2021; Alles & Gray, 2024). Rather than an individual or purely technical choice, AI use in auditing is increasingly seen as a broader sociotechnical upheaval and disruption, and, in most instances, is constrained by organizational frameworks, regulatory forces, risk cultures, and professional roles. Based on the extant literature, it is evident that studies on AI-enabled auditing have focused more on perceptions, preparedness, and disposition towards AI than on how these technologies are actually applied in audit environments. Initial research (Damerji & Salimi, 2021) looks at technology readiness and the extent to which it is embraced, focusing on the mediating effects of perceived usefulness and perceived ease of use. Although this line of questioning offers some interesting insights into acceptance at an early stage, it remains detached from the realities of professional auditing. Adoption decisions are entrenched in organizational practices, regulatory demands, and accountability pressures. Current research on digital transformation and Industry 4.0 in auditing focuses on technological opportunities and drivers of adoption, yet provides limited insight into how AI would transform audit work in practice (Abdullah & Almaqtari, 2024; Abu Huson et al., 2025). Meanwhile, audit firms are promoting and emphasizing advanced analytics as value-adding insight, making technological sophistication a hallmark of the modern audit. This narrative is supported by conceptual reviews that emphasize AI’s potential to transform auditing from a backward-looking process to a forward-looking, ongoing procedure. Despite such positive descriptions, little empirical evidence exists on the introduction of AI into auditors’ daily routines.
A study by Alles and Gray (2024) highlights the Big Four’s marketing of audit analytics features on their official websites. They document that most firms position data analytics as a tool for providing value-adding operational information to clients, implying a repositioning of the audit: less compliance-oriented assurance and more business intelligence. Simultaneously, the authors express concerns about auditors’ independence as analytics-driven information becomes the primary selling proposition. However, since their analysis is based on external marketing messages, it cannot establish whether the promoted technologies are incorporated into audit testing or risk assessment procedures in practice. Thus, gaps exist between what is represented and what is actually performed during the audit. Nevertheless, despite the potential positive impact on efficiency and effectiveness, the use of audit technologies depends heavily on contextual factors (client support, task complexity, time pressure) rather than solely on technical capability itself (Curtis & Payne, 2008). Even though this observation was made in the pre-AI era, it remains highly topical: sophisticated tools do not necessarily lead to meaningful use when organizational incentives and accountability frameworks are flawed. This view is supported by more recent field-based research by Kokina et al. (2025), who found that the main barriers to AI adoption in auditing are the lack of transparency and explainability, algorithmic bias, data privacy, robustness and reliability, fears that auditors will depend too heavily on AI in their work, and the lack of explicit professional guidance. From a broader perspective, Leocádio et al. (2024) conducted a systematic literature review to develop a conceptual framework for AI-enabled auditing. The authors discuss audit efficiency, performance outcomes, regulatory problems, and auditor adaptation. They claim that AI can redefine the role of auditors and shift auditing to a more proactive and continuous monitoring mode. However, as a conceptual review, this study does not empirically evaluate implementation issues, skill limitations, or governance systems, nor does it directly examine how auditors operationalize AI tools in actual audit engagements. Existing empirical studies linking AI use to audit quality focus primarily on outcomes rather than on the specific ways AI is integrated into audit processes (Abu Huson et al., 2025).
Collectively, these papers indicate a clear disconnect between the preparedness for, representation of, and actualization of AI in auditing. Although auditors and firms appear better prepared to work with AI and to promote its potential, translating that preparedness into practical implementation remains a concern. To fill this gap, it is necessary to go beyond what firms intend or claim to do towards a more thorough analysis of the institutional, behavioral, and ethical circumstances in which AI can be held accountable and genuinely incorporated into audit judgment and evidence-gathering procedures, rather than remaining a symbolic, peripheral technology. Table 4 summarizes the main findings and contributions of Theme 1.

3.7.2. Theme 2: Data-Driven Audit Ecosystems and Digital Technologies

  • (Combination: Machine Learning, Deep Learning, Big Data, Blockchain, RPA, IoT)
The modern audit environment is being re-architected into a data-centric audit ecosystem that draws on big data analytics, machine learning (ML), deep learning (DL), robotic process automation (RPA), blockchain, the Internet of Things (IoT), and next-generation visualization tools, replacing the historically silo-driven audit. This convergence of technologies creates an audit environment that is real-time, continuous, and rich in empirical knowledge, disrupting commonly used audit methods that rely on periodic testing and sampling. This ecosystem is built on the premise that big data can be used to test the whole population, with more sophisticated anomaly detection.
The work of Manita et al. (2020) captures the changes that digital transformation and analytics bring to audit processes and governance, providing auditors with greater capacity to process vast amounts of data. Appelbaum et al. (2017) contend that the increased availability of structured and unstructured data in audit settings has created significant opportunities to advance analytical techniques beyond conventional sampling. Combined, these works underscore a tendency toward deeper data analysis that could support more proactive, continuous audit processes. However, the capability of predictive or continuous assurance is still being studied. This practice is further extended by machine learning and deep learning, which enable predictive modeling, clustering, classification, and detection of high-dimensional patterns. Gu et al. (2024) theorize AI- and ML-enabled ‘co-piloted auditing’, arguing that advanced models can enhance the detection of anomalies and possible misstatements when paired with human judgment, rather than serving as an alternative to traditional audit techniques. Likewise, Kokina et al. (2025) point out that AI offers the opportunity to analyze large, complex data and to go beyond traditional analytics. Concurrently, these technologies create significant impediments for auditors, especially regarding explainability, bias, data privacy, and governance. Similarly, deep learning (DL) systems are typically black boxes, which limits interpretability and makes it difficult for auditors to justify their use of the results (Sun, 2019). This raises critical questions about the transparency, explainability, and auditability of opaque models.
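As a transparent stand-in for the ML anomaly detectors discussed above, the following sketch flags full-population outliers in a set of hypothetical journal-entry amounts using a simple z-score; production systems would instead use models such as isolation forests or autoencoders, but the idea of screening every transaction rather than a sample is the same:

```python
from statistics import mean, stdev

# Hypothetical journal-entry amounts (illustrative, not real audit data).
amounts = [120.0, 95.5, 130.2, 110.8, 99.9, 125.4, 9_800.0, 101.3]

# Full-population screening: flag entries more than 2 standard deviations
# from the population mean for auditor follow-up.
mu, sigma = mean(amounts), stdev(amounts)
flagged = [a for a in amounts if abs(a - mu) / sigma > 2]
print(flagged)  # the 9,800.00 entry stands out
```

Unlike a black-box deep learning model, every flag here can be explained ("this entry is 2.5 standard deviations above the mean"), which is precisely the interpretability property the literature argues opaque models lack.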
Robotic process automation (RPA) is regarded as a complementary tool in audit, although its adoption might lead to complacency or process blindness, where auditors rely on automated workflows without sufficient supervision. Huang and Vasarhelyi (2019) discuss how routine audit procedures, including data extraction and reconciliation, can be delegated to RPA, enabling auditors to save time and work more productively. Nevertheless, they emphasize that RPA cannot substitute for professional judgment, since auditors are still required to interpret exceptions, evaluate risks, and make informed decisions; therefore, human oversight of automated auditing processes remains important. Another area discussed is blockchain and IoT technologies, which promise a massive transformation in data provenance and transaction authentication. Han et al. (2023) and Dai and Vasarhelyi (2017) state that blockchain will enable the establishment of tamper-proof audit trails and the verification of transactions in near real time. With smart contracts and IoT sensors, blockchain can automatically record and verify transaction data, making it easier to facilitate continuous auditing and assurance in supply chains and digitally connected systems. As illustrated by Sayal et al. (2025), AI in combination with blockchain may be used to enhance fraud detection by matching sophisticated analytics with secure, traceable data, which, in turn, enables more timely and ongoing audit assurance. However, Li and Goel (2025) indicate that the decentralized nature of blockchain may complicate the attribution of accountability, the development of audit evidence, and overall system validation.
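The tamper-evidence property attributed to blockchain audit trails can be illustrated with a minimal hash-chained log. This is a pedagogical sketch with made-up transactions, not a real distributed ledger: each record embeds the hash of its predecessor, so altering any earlier entry invalidates every subsequent hash.

```python
import hashlib
import json

def record_hash(payload):
    # Deterministic SHA-256 over a canonical JSON serialization.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(chain, tx):
    # Each new record commits to the hash of the previous record.
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"tx": tx, "prev": prev}
    record["hash"] = record_hash({"tx": tx, "prev": prev})
    chain.append(record)

def verify(chain):
    # Re-derive every hash; any edit to an earlier entry breaks the chain.
    prev = "0" * 64
    for r in chain:
        if r["prev"] != prev or r["hash"] != record_hash({"tx": r["tx"], "prev": r["prev"]}):
            return False
        prev = r["hash"]
    return True

chain = []
append(chain, {"id": 1, "amount": 120.0})
append(chain, {"id": 2, "amount": 95.5})
assert verify(chain)             # untouched trail verifies
chain[0]["tx"]["amount"] = 999   # tamper with an earlier entry...
assert not verify(chain)         # ...and verification fails downstream
```

A real blockchain adds distributed consensus and replication on top of this hash-chaining, which is what makes the trail tamper-resistant rather than merely tamper-evident, and is also what complicates accountability, as Li and Goel (2025) note.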
In short, these technologies merge to form a new digital audit ecosystem characterized by continuous assurance, real-time analytics, and decentralized verification. Nevertheless, gaps remain: limited evidence on AI-enabled auditing in practice, the lack of standard audit policies for AI/ML/DL systems, insufficient regulation of blockchain-based evidence, and insufficient understanding of whether automation is playing an effective role in applied audit work. Table 5 summarizes the main findings and contributions of Theme 2.

3.7.3. Theme 3: Audit Quality, Professional Skepticism, and Ethical Governance

The concept of AI-enabled auditing has prompted a fundamental discussion of audit quality and ethical governance, since these new tools can alter auditors’ judgment and misjudgment. AI is widely regarded as improving analytical accuracy and expanding audit coverage; however, its adoption places implementation pressure on traditional auditing frameworks. It is also difficult to balance AI’s analytical capabilities with human professional judgment. When AI is brought into the decision-making process, issues of quality and accountability arise, since audit quality has traditionally been defined in terms of competence, independence, sufficient evidence, and professional skepticism (Francis, 2011; Knechel et al., 2013). In current times, however, audit quality does not depend solely on auditors’ technical capabilities but also on their ability to handle complex algorithms.
The extant literature documents that the introduction of advanced data analysis tools into the audit process might affect auditors’ judgment and risk assessment, even when they are designed to influence the audit process positively (Rose et al., 2017). These concerns about judgment have been exacerbated as audit analytics have evolved into more automated, AI-oriented systems. The idea of auditor competence, as Munoko et al. (2020) explain, goes beyond the technical proficiency of the accountant to include algorithmic literacy, digital skepticism, and critical thinking about machine output.
The Big 4, in particular, as well as other accounting firms, are turning to AI as a tool for auditing and advisory services because of its time-saving benefits, improved analysis, and more efficient service delivery to clients. While these advantages are evident, awareness of the ethical and social risks posed by AI is growing. Francis (2011) explains that audit quality is determined by auditors’ competence, independence, and professional judgment in reviewing evidence. Taken together with Munoko et al. (2020), this point of view indicates that technology does not diminish the human role in AI-related auditing. Rather, AI puts additional pressure on audit quality, requiring auditors not only to use advanced analytical tools but also to analyze, question, and verify the findings produced by AI, remaining responsible and skeptical as professionals.
In traditional forms of professional skepticism, auditors are expected to take a questioning attitude toward management claims based on human judgment, experience, and pattern recognition (Hurtt et al., 2013; Nelson, 2009). Nevertheless, according to previous research, the auditor-centered processing, filtering, and presentation of audit evidence are being restructured by the increasingly common use of AI and automated tools (Munoko et al., 2020; Francis, 2011). Behavioral research suggests that professional skepticism becomes more vulnerable as judgment grows dependent on machine-based cues, with less visibility into their logic and greater reliance on machine outputs than on critical judgment. This is further confirmed by Raschke et al. (2018) and Parasuraman and Riley (1997), who demonstrate that auditors, like many other professionals, may ultimately place undue faith in automated systems, particularly those perceived as highly automated or highly objective, thereby undermining independent judgment and professional skepticism. Conversely, Dietvorst et al. (2015) discuss algorithm aversion, demonstrating that a decision-maker can too quickly dismiss the results of AI when a small number of errors is detected, despite the system being more accurate on average. This, combined with automation bias, indicates that auditors face an even greater challenge of AI dependency, since intelligent systems can either suppress or over-provoke professional skepticism depending on factors such as system design, familiarity, and perceived trustworthiness.
Previous studies on behavior and judgment support Rose et al. (2017), suggesting that auditors’ perceptions of fraud risk can be influenced by visualizations created with audit analytics. This outcome supports the current evidence on automation bias and algorithm aversion, which suggests that auditors may be overconfident in complex systems or discount them too quickly after a minor error (Parasuraman & Riley, 1997; Dietvorst et al., 2015; Raschke et al., 2018). Together with foundational research on professional skepticism and audit quality, which highlights the centrality of independent judgment and critical analysis of evidence (Nelson, 2009; Hurtt et al., 2013; Francis, 2011), these studies suggest that AI tools do not merely support the auditor but also shape how evidence is perceived and treated. Judgment becomes more grounded in signals from the system than in holistic reasoning, casting doubt on ethical practice, accountability, professional skepticism, and audit quality (Munoko et al., 2020).
Building on earlier concerns about automation bias and reduced skepticism, a further challenge is that AI may systematically shape audit judgments in unfair ways. More generally, algorithmic decision-making research demonstrates that AI models can reproduce and amplify existing biases, particularly when trained on past data and when they remain opaque and unaccountable (O’Neil, 2016). These biases might cause smaller firms or non-traditional business models to be flagged disproportionately, which underscores the importance of openness and regulation in AI-assisted audits. Simultaneously, when the rationale behind AI-generated signals is not readily apparent to the auditor, it may be difficult to justify them as audit evidence. Thus, the topic of AI auditability and explainability has become a focus of the AI-audit literature. Zhong and Goel (2024) show that explainable AI (XAI) methodologies in the context of auditing can increase transparency and interpretability, allowing auditors and other stakeholders to comprehend the AI decision-making process and justify reliance on AI-generated insights, an essential step toward meeting audit evidence criteria and adhering to professional standards. In cases where the auditor cannot explain the motivations behind a recommendation, it is hard to determine the reliability of that information in a manner comparable to the requirement of ISA 500 to consider the relevance and reliability of audit evidence (Li & Goel, 2025). These challenges quickly become governance problems as well. The responsibilities of engagement teams, auditors, and vendors are likely to be blurred when AI tools are created by vendors, configured by firms, and used by engagement teams, even though, according to auditing standards, the ultimate responsibility for the opinion lies with the auditor.
This raises ethical issues of accountability, liability, and the credibility of the audit opinion beyond technical performance.
The issue of accountability is further heightened in the technology-based audit environment. While they enhance transparency and automation, blockchain-based accounting systems (Bonsón & Bednárová, 2019) can equally blur the sense of responsibility by dispersing control among various actors. The same issue can be observed in AI-based audits, because the tool is usually developed by a vendor, configured by the audit firm, and implemented by the engagement team, making responsibility difficult to delineate when failures occur. The risk is institutional in nature and extends beyond individual audit judgments. The legitimacy of auditing is based on social trust, professionalism, and compliance with established standards (Power, 1997). With the growing integration of advanced technologies into the audit workflow, researchers warn that the position and perception of the human auditor can be distorted, potentially altering stakeholders’ perceptions of the origin of assurance (Lombardi et al., 2022). Here, the increasing application of AI to complex audit procedures heightens the need for transparency: auditors need not only accurate outputs but also explanations of how those outputs are produced, so that they can preserve their professional credibility, rationalize audit decisions, and adhere to accepted auditing standards. Table 6 summarizes the main findings and contributions of Theme 3.
Table 7 consolidates all three clusters for comparison, listing the main keywords for each cluster and representative studies extracted from the dataset of 184 articles. Such an organized review not only covers a large body of scholarship but also exposes gaps in empirical, theoretical, and regulatory development. Overall, the literature indicates that AI raises significant multidimensional issues related to audit quality and ethical governance. Although AI offers the potential to improve analytical processes, it also introduces risks, including overreliance, bias, opacity, diminished autonomy, and unclear responsibility. The central paradox of AI in auditing is therefore that AI can radically enhance audit quality, but only when auditors retain their epistemic authority, remain skeptical of AI itself, and operate within governance contexts that embrace transparency, fairness, explainability, and responsibility. Absent such frameworks, AI may weaken, rather than enhance, the pillars of audit quality.
Based on the thematic analysis and the gaps identified by theme, Table 8 highlights potential future research that could significantly contribute to the body of knowledge in AI-enabled auditing.

3.8. Thematic Integration and Interdependencies

While the preceding sections analyzed each theme individually, the three themes are fundamentally interconnected and collectively constitute the intellectual structure of AI-enabled auditing research. This section explicitly examines the interdependencies, bidirectional influences, and mediating relationships among themes.
  • Sequential Progression: Adoption → Technology → Governance
The three themes exhibit a temporal and conceptual progression mirroring field evolution. Theme 1 (AI adoption, readiness, and organizational transformation) establishes the foundation by examining why and how organizations initiate AI adoption, the capabilities required, and the organizational changes necessary. This adoption foundation enables Theme 2 (digital technologies and data-driven audit ecosystems), which addresses the actual deployment of specific AI tools, their technical capabilities, and the data infrastructure supporting them. The technological deployment documented in Theme 2 subsequently raises the quality, ethical, and governance concerns addressed in Theme 3 (audit quality, professional skepticism, and ethical governance). This progression reflects historical evolution: early research (pre-2018) concentrated on adoption readiness; middle-period research (2018–2022) emphasized technical capabilities and digital ecosystems; recent research (2022-present) increasingly addresses governance and quality implications.
  • Bidirectional Influences Among Themes
Beyond sequential progression, themes exhibit reciprocal influences:
Technology Capabilities Shape Adoption (Theme 2 → Theme 1): Organizational adoption choices are affected by the characteristics of the AI technologies themselves, including their complexity, explainability, and reliability. Prior research indicates that the explainability limitations of deep learning (a Theme 2 technical challenge) impede organizational adoption of AI, because opacity creates operational exposure and invites regulatory scrutiny (a Theme 1 barrier). Conversely, improvements in machine learning interpretability (a Theme 2 enabler) support organizations’ willingness to adopt AI technologies (a Theme 1 facilitator).
Quality Concerns Constrain Adoption (Theme 3 → Theme 1): The governance issues, ethical considerations, and professional skepticism concerns identified in Theme 3 constrain the organizational adoption decisions identified in Theme 1. For example, uncertainty about algorithmic accountability (Theme 3) leads organizations to adopt AI technologies slowly and prudently, limiting deployment to low-risk areas (Theme 1 adoption strategies). Likewise, research on automation bias and algorithm aversion (Theme 3) informs human-AI collaboration and training methodologies (Theme 1 capability development).
Adoption Choices Determine Quality Outcomes (Theme 1 → Theme 3): Organizational decisions about governance structures, training investments, and human-AI collaboration configurations (Theme 1) directly affect audit quality and professional skepticism outcomes (Theme 3). Firms investing heavily in explainable AI training (Theme 1 choice) experience less automation bias and maintain stronger professional skepticism (Theme 3 outcome). Adoption strategies emphasizing gradual integration with human oversight (Theme 1) mitigate quality risks (Theme 3) compared to aggressive automation.
  • Cross-Theme Research Streams
Several literature streams explicitly bridge multiple themes, demonstrating integrated perspectives:
Human-AI Collaboration Research (Bridging Themes 1, 2, 3): Studies examining how auditors and AI systems interact address adoption considerations (Theme 1: what collaboration models to adopt), technical implementation (Theme 2: how to design AI interfaces and workflows), and quality implications (Theme 3: impact on judgment and skepticism). This cross-cutting research stream (e.g., Gu et al., 2024; Kokina et al., 2025) demonstrates that effective AI deployment requires simultaneously addressing organizational, technical, and governance dimensions.
Explainable AI in Auditing (Bridging Themes 2, 3): Research on transparency mechanisms spans technical development (Theme 2: designing explainable algorithms) and governance implications (Theme 3: maintaining professional skepticism and accountability). This integration recognizes that explainability is simultaneously a technical challenge and a professional/regulatory requirement.
AI Governance Frameworks (Bridging Themes 1, 3): Studies on governance structures address both adoption strategy (Theme 1: how to establish AI oversight) and quality assurance (Theme 3: how governance affects audit quality). This integration acknowledges that governance is both an adoption enabler (reducing uncertainty) and a quality safeguard.
  • Synthesis: An Integrated Framework
The three themes collectively constitute a comprehensive framework for understanding AI-enabled auditing:
  • Theme 1 addresses the ‘why’ and ‘how’ of initiating AI adoption.
  • Theme 2 addresses the ‘what’ and ‘with what tools’ of implementing AI.
  • Theme 3 addresses the ‘with what effects’ and ‘with what safeguards’ of AI deployment.
Our research indicates that, rather than constituting separate research domains, these areas are facets of a single phenomenon: the impact of AI on auditing. An effective AI-enabled audit process must address adoption readiness (Theme 1), technical capabilities (Theme 2), and governance (Theme 3) simultaneously. Neglecting any of these three areas results in an incomplete understanding of, and suboptimal results from, auditing with AI.
Figure 5 (Thematic Integration Model) visually represents these interdependencies, showing bidirectional arrows among themes with specific relationship labels (e.g., ‘technical capabilities enable/constrain adoption,’ ‘adoption choices determine quality outcomes,’ ‘governance requirements shape technology design’). The three thematic clusters represent interrelated dimensions of a single, potentially transformative phenomenon rather than separate research silos. AI adoption decisions (Theme 1) directly affect audit quality and professional judgment (Theme 3): governance-sensitive adoption approaches maintain an appropriate level of professional skepticism, whereas aggressive automation increases quality risks. This relationship is mediated by the digital ecosystems of Theme 2, which determine whether AI adoption leads to quality enhancement or impairment. Theme 2 thus serves as both an enabler and a mediator, transmitting the consequences of AI adoption to audit quality and professional judgment outcomes while feeding governance requirements back to technology designers for redesign. The intellectual evolution of the accounting research community mirrors this structural logic, progressing from readiness to adopt AI technologies, through the acquisition of the necessary technical capabilities, to the establishment of ethical governance and accountability for the use of AI in auditing. This integrated framework therefore calls for coordinating all three dimensions of responsible AI-enabled auditing simultaneously and in an integrated manner, rather than sequentially or in silos.
This integrated perspective provides a roadmap for future research examining cross-theme dynamics and for practice integrating organizational, technical, and governance considerations.
  • Synthesized Research Gaps and Future Research Directions
Table 8 presents a comprehensive overview of research gaps, future research opportunities, recommended methodologies, and appropriate levels of analysis across the three major themes in AI-enabled auditing research.

4. Conclusions

This study synthesized the fragmented literature on AI-enabled auditing by critically reviewing 184 articles from the Scopus database. It aimed to highlight the opportunities, risks, and unresolved issues that continue to shape the evolving relationship between artificial intelligence technologies and the auditing profession. AI has the potential to change the audit process radically, extending beyond fraud and risk detection to automation and predictive analytics. However, its implementation in practice is hindered by issues of explainability, equitable uptake, and regulatory preparedness. The bibliometric findings reveal that scholarship is concentrated among a few authors, institutions, and journals. Individuals such as Miklos Vasarhelyi and Thomas Davenport were instrumental in developing initial conceptualizations of AI in auditing, and institutions in the United States, the United Kingdom, and China currently dominate scholarship. Many of the core concepts were developed in high-impact journals, including the International Journal of Accounting Information Systems and the Journal of Emerging Technologies in Accounting. The geographic tilt toward Western and East Asian contexts underscores the need to expand research representation to regions that do not share the same institutional, cultural, and regulatory conditions.
The keyword co-occurrence analysis identifies three thematic groups in the literature: readiness, representation, and implementation; digital technologies and data-driven audit ecosystems; and audit quality, professional skepticism, and ethical governance. The first theme, on readiness, sheds light on behavioral motivation but is overly focused on intent rather than on actual audit performance outcomes. Although studies on machine learning and decision support demonstrated technical possibilities, they have not been subjected to large-scale empirical testing. Studies on blockchain and digital ecosystems emphasize academic synergies but brush over practical and regulatory implications. The literature on big data and deep learning highlights a strategic revolution in auditing, though it falls short of addressing the significant implementation differences between large firms and SMEs. Scholarly research on audit quality and ethics has advanced the debate on skepticism, accountability, and fairness, but has generated fragmented suggestions for governance arrangements. Collectively, these clusters indicate rapid technical innovation while empirical and regulatory developments lag, leaving the profession in a state of liminality.
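The keyword co-occurrence procedure underlying these clusters (parameterized in Figure 4 as minimum occurrences = 5 with association-strength normalization) can be sketched in a few lines. The sketch below is illustrative only, not the actual pipeline, which used VOSviewer and Biblioshiny; it assumes per-article keyword lists as input and uses the association-strength normalization of Van Eck and Waltman (2009), which divides each link's co-occurrence count by the product of the two keywords' total occurrences.

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(keyword_lists, min_occurrences=5):
    """Count keyword occurrences, pairwise co-occurrences across articles,
    and association-strength link weights for the retained keywords."""
    # Total occurrences: each keyword counted once per article.
    occ = Counter(k for kws in keyword_lists for k in set(kws))
    kept = {k for k, n in occ.items() if n >= min_occurrences}
    co = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws) & kept), 2):
            co[(a, b)] += 1
    # Association strength: c_ij / (s_i * s_j), so frequent keywords do not
    # dominate the network merely by appearing often.
    strength = {p: c / (occ[p[0]] * occ[p[1]]) for p, c in co.items()}
    return occ, co, strength
```

Clustering the resulting weighted network (VOSviewer applies a modularity-based algorithm to these weights) is what yields thematic groups of the kind reported above.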
  • Contribution and Significance to Stakeholders
This section presents specific, actionable recommendations for key stakeholder groups in the AI-enabled auditing ecosystem. Each subsection provides prioritized guidance tailored to the unique responsibilities and needs of audit firms, regulators, academic researchers, and financial statement users.
  • Implications for Audit Firms and Practitioners
Audit firms face immediate practical challenges in deploying AI systems while maintaining audit quality and professional standards. The following recommendations are organized by implementation priority.
  • Invest in Explainable AI Training: Develop auditor training programs focused on interpreting AI outputs, understanding algorithmic logic, and maintaining professional skepticism when using AI tools.
  • Establish AI Governance Frameworks: Implement internal controls for AI tool selection, validation, monitoring, and documentation. Create dedicated AI governance committees with technical and audit expertise.
  • Pilot Human-AI Collaboration Models: Test different configurations of auditor-AI interaction before full-scale deployment. Conduct controlled pilots in 3–5 audit engagements to identify optimal collaboration patterns.
  • Develop Algorithmic Audit Trails: Ensure AI systems maintain comprehensive logs of decisions, data inputs, and processing steps for audit documentation and regulatory compliance.
  • Address Algorithmic Bias: Implement bias testing protocols for AI tools used in risk assessment and sampling to ensure fair, unbiased audit procedures.
  • Invest in Data Infrastructure: Build capabilities for handling large-scale, unstructured data required for advanced AI applications.
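The algorithmic audit-trail recommendation above can be illustrated with a minimal sketch. This is not a reference to any standard or vendor tool; the class name, log fields, and hash-chaining design are illustrative assumptions showing one way to make AI decision logs tamper-evident for audit documentation and inspection.

```python
import hashlib
import json
import time

class AlgorithmicAuditTrail:
    """Append-only log of AI decisions. Each entry includes a hash of the
    previous entry, so later alteration of any record breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, tool, inputs, output, timestamp=None):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "tool": tool,                 # which AI tool produced the output
            "inputs": inputs,             # data the tool was given
            "output": output,             # decision or flag it returned
            "timestamp": timestamp if timestamp is not None else time.time(),
            "prev_hash": prev_hash,       # chains this entry to the last one
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; True only if no entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A production system would add access controls, external anchoring of the chain head, and retention policies, but even this minimal structure gives reviewers and regulators a verifiable record of what the AI tool saw and decided.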
  • Implications for Regulators and Standard-Setters
Regulatory bodies face the challenge of developing appropriate oversight frameworks for AI-enabled auditing while fostering beneficial innovation. The following recommendations provide specific guidance for regulatory action.
  • Develop AI-Specific Audit Standards: Create guidance addressing: (a) acceptable AI tools and validation requirements, (b) documentation standards for AI-assisted procedures, (c) quality control requirements for AI outputs, and (d) auditor competency requirements for AI usage. Coordinate internationally (IAASB) to develop principles-based standards.
  • Establish Auditability Requirements for AI Systems: Mandate that AI tools used in audits meet explainability, transparency, and reproducibility standards. Develop certification framework for ‘audit-grade AI systems.’
  • Clarify Accountability Frameworks: Provide guidance on responsibility allocation when AI tools contribute to audit failures. Address legal and professional liability questions through formal pronouncements.
  • Require AI Disclosure in Audit Reports: Consider mandatory disclosure of material AI usage in audit procedures to enhance transparency for financial statement users.
  • Monitor AI Adoption Patterns: Establish ongoing surveillance of AI deployment in audits to identify emerging risks and best practices.
  • Support SME Audit Firms: Provide resources and guidance to help smaller firms adopt AI responsibly without competitive disadvantage.
  • Implications for Investors and Financial Statement Users
While investors do not directly implement AI systems, they are affected by AI’s impact on audit quality and should understand key implications for their decision-making.
  • Understand AI’s Impact on Audit Quality: Recognize that AI adoption may initially increase audit quality variability as firms learn to deploy tools effectively.
  • Seek Transparency: Request information from audit committees about AI usage in audits and governance mechanisms.
  • Monitor Regulatory Developments: Stay informed about evolving AI audit standards that may affect audit quality and reliability.
These stakeholder-specific recommendations provide a practical roadmap for advancing AI-enabled auditing in a manner that enhances audit quality while addressing critical governance, ethical, and regulatory challenges. Effective implementation requires coordinated action across all stakeholder groups, with clear accountability and ongoing monitoring of outcomes.

Study Limitations

This study has several limitations. First, because only the Scopus database was considered, relevant articles indexed in Web of Science, Google Scholar, or non-English databases may have been omitted, introducing a degree of selection bias. Second, citation-frequency-centered bibliometric methods tend to emphasize established authors, journals, and countries over early or geographically significant contributions. Third, although keyword co-occurrence analysis can reveal associative patterns, it does not capture the contextual depth of how concepts are applied within individual studies. Fourth, the analysis is constrained by incomplete metadata; some articles did not state theoretical frameworks or affiliations explicitly, preventing a more in-depth comparison of methodological approaches.
In conclusion, artificial intelligence has the potential to improve fraud detection, predictive auditing efficiency, and analytics, but major gaps remain in empirical validation, equitable adoption, and regulatory frameworks for sound governance. Closing these gaps will require cross-disciplinary coordination, regulatory development, and further investment in auditor capacity. Ultimately, however extensively AI is applied, it cannot substitute for human judgment, which remains essential to the quality, reliability, and social relevance of auditing in a digital setting.

Author Contributions

Conceptualization: S.S.; Methodology: S.S.; Formal analysis: S.S., K.K. and D.N.; Data curation: S.S.; Writing—original draft: S.S., K.K. and D.N.; Writing—review: S.S. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Prince Sultan University.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are unavailable due to privacy restrictions.

Acknowledgments

The authors would like to thank Prince Sultan University for its financial assistance. During the preparation of this manuscript/study, the author(s) used Grammarly for editing sentence structure and grammar. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdullah, A. A. H., & Almaqtari, F. A. (2024). The impact of artificial intelligence and Industry 4.0 on transforming accounting and auditing practices. Journal of Open Innovation: Technology, Market, and Complexity, 10(1), 100218. [Google Scholar] [CrossRef]
  2. Abu Huson, Y., Sierra García, L., García Benau, M. A., & Mohammad Aljawarneh, N. (2025). Cloud-based artificial intelligence and audit report: The mediating role of the auditor. VINE Journal of Information and Knowledge Management Systems, 55(6), 1553–1574. [Google Scholar] [CrossRef]
  3. Albitar, K., Gerged, A. M., Kikhia, H., & Hussainey, K. (2021). Auditing in times of social distancing: The effect of COVID-19 on auditing quality. International Journal of Accounting & Information Management, 29(1), 169–178. [Google Scholar] [CrossRef]
  4. Alles, M., & Gray, G. L. (2024). The marketing on Big 4 websites of Big Data Analytics in the external audit: Evidence and consequences. International Journal of Accounting Information Systems, 54, 100697. [Google Scholar] [CrossRef]
  5. Almaqtari, F. A. (2024). The role of IT governance in the integration of AI in accounting and auditing operations. Economies, 12(8), 199. [Google Scholar] [CrossRef]
  6. Appelbaum, D., Kogan, A., & Vasarhelyi, M. A. (2017). Big data and analytics in the modern audit engagement: Research needs. Auditing: A Journal of Practice & Theory, 36(4), 1–27. [Google Scholar]
  7. Aria, M., & Cuccurullo, C. (2017). Bibliometrix: An R-tool for comprehensive science mapping analysis. Journal of Informetrics, 11(4), 959–975. [Google Scholar] [CrossRef]
  8. Baker, H. K., Kumar, S., & Pandey, N. (2020). A bibliometric analysis of managerial finance: A retrospective. Managerial Finance, 46(11), 1495–1517. [Google Scholar] [CrossRef]
  9. Bonsón, E., & Bednárová, M. (2019). Blockchain and its implications for accounting and auditing. Meditari Accountancy Research, 27(5), 725–740. [Google Scholar] [CrossRef]
  10. Curtis, M. B., & Payne, E. A. (2008). An examination of contextual factors and individual characteristics affecting technology implementation decisions in auditing. International Journal of Accounting Information Systems, 9(2), 104–121. [Google Scholar] [CrossRef]
  11. Dai, J., & Vasarhelyi, M. A. (2017). Toward blockchain-based accounting and assurance. Journal of Information Systems, 31(3), 5–21. [Google Scholar] [CrossRef]
  12. Damerji, H., & Salimi, A. (2021). Mediating effect of use perceptions on technology readiness and adoption of artificial intelligence in accounting. Accounting Education, 30(2), 107–130. [Google Scholar] [CrossRef]
  13. Deloitte. (2024). Navigating the artificial intelligence frontier: An introduction for internal audit. Deloitte Risk Advisory. Available online: https://www2.deloitte.com/content/dam/Deloitte/us/Documents/Advisory/us-navigating-the-artificial-intelligence-frontier.pdf (accessed on 10 January 2026).
  14. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114. [Google Scholar] [CrossRef]
  15. Donthu, N., Kumar, S., Mukherjee, D., Pandey, N., & Lim, W. M. (2021). How to conduct a bibliometric analysis: An overview and guidelines. Journal of Business Research, 133, 285–296. [Google Scholar] [CrossRef]
  16. Earley, S. (2023). What executives need to know about knowledge management, large language models and generative AI. Applied Marketing Analytics, 9(3), 215–229. [Google Scholar] [CrossRef]
  17. Ellili, N. O. D. (2023). FinTech adoption during COVID-19 pandemic: Bibliometric analysis. What lessons for the future? [Google Scholar] [CrossRef]
  18. European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 10 January 2026).
  19. Fisher, I. E., Garnsey, M. R., & Hughes, M. E. (2016). Natural language processing in accounting, auditing and finance: A synthesis of the literature with a roadmap for future research. Intelligent Systems in Accounting, Finance and Management, 23(3), 157–214. [Google Scholar] [CrossRef]
  20. Francis, J. R. (2011). A framework for understanding and researching audit quality. Auditing: A Journal of Practice & Theory, 30(2), 125–152. [Google Scholar]
  21. Gu, H., Schreyer, M., Moffitt, K., & Vasarhelyi, M. (2024). Artificial intelligence co-piloted auditing. International Journal of Accounting Information Systems, 54, 100698. [Google Scholar] [CrossRef]
  22. Hajek, P., & Henriques, R. (2017). Mining corporate annual reports for intelligent detection of financial statement fraud—A comparative study of machine learning methods. Knowledge-Based Systems, 128, 139–152. [Google Scholar] [CrossRef]
  23. Han, H., Shiwakoti, R. K., Jarvis, R., Mordi, C., & Botchie, D. (2023). Accounting and auditing with blockchain technology and artificial Intelligence: A literature review. International Journal of Accounting Information Systems, 48, 100598. [Google Scholar] [CrossRef]
  24. Huang, F., & Vasarhelyi, M. A. (2019). Applying robotic process automation (RPA) in auditing: A framework. International Journal of Accounting Information Systems, 35, 100433. [Google Scholar] [CrossRef]
  25. Hurtt, R. K., Brown-Liburd, H., Earley, C. E., & Krishnamoorthy, G. (2013). Research on auditor professional skepticism: Literature synthesis and opportunities for future research. Auditing: A Journal of Practice & Theory, 32(Suppl. S1), 45–97. [Google Scholar]
  26. International Auditing and Assurance Standards Board (IAASB). (2024). Technology position statement. International Federation of Accountants. Available online: https://www.iaasb.org/publications/technology-position-statement (accessed on 10 January 2026).
  27. Jain, K., & Tripathi, P. S. (2023). Mapping the environmental, social and governance literature: A bibliometric and content analysis. Journal of Strategy and Management, 16(3), 397–428. [Google Scholar] [CrossRef]
  28. Kassar, M., & Jizi, M. (2026). Artificial intelligence and robotic process automation in auditing and accounting: A systematic literature review. Journal of Applied Accounting Research, 27(1), 217–241. [Google Scholar] [CrossRef]
  29. Khatib, S. F., Abdullah, D. F., Elamer, A., Yahaya, I. S., & Owusu, A. (2023). Global trends in board diversity research: A bibliometric view. Meditari Accountancy Research, 31(2), 441–469. [Google Scholar] [CrossRef]
  30. Knechel, W. R., Krishnan, G. V., Pevzner, M., Shefchik, L. B., & Velury, U. K. (2013). Audit quality: Insights from the academic literature. Auditing: A Journal of Practice & Theory, 32(Suppl. S1), 385–421. [Google Scholar]
  31. Kokina, J., Blanchette, S., Davenport, T. H., & Pachamanova, D. (2025). Challenges and opportunities for artificial intelligence in auditing: Evidence from the field. International Journal of Accounting Information Systems, 56, 100734. [Google Scholar] [CrossRef]
  32. Kokina, J., & Davenport, T. H. (2017). The emergence of artificial intelligence: How automation is changing auditing. Journal of Emerging Technologies in Accounting, 14(1), 115–122. [Google Scholar] [CrossRef]
  33. Kureljusic, M., & Karger, E. (2024). Forecasting in financial accounting with artificial intelligence–A systematic literature review and future research agenda. Journal of Applied Accounting Research, 25(1), 81–104. [Google Scholar] [CrossRef]
  34. Leocádio, D., Malheiro, L., & Reis, J. (2024). Artificial intelligence in auditing: A conceptual framework for auditing practices. Administrative Sciences, 14(10), 238. [Google Scholar] [CrossRef]
  35. Leocádio, D., Malheiro, L., & Reis, J. C. G. D. (2025). Auditors in the digital age: A systematic literature review. Digital Transformation and Society, 4(1), 5–20. [Google Scholar] [CrossRef]
  36. Li, Y., & Goel, S. (2025). Artificial intelligence auditability and auditor readiness for auditing artificial intelligence systems. International Journal of Accounting Information Systems, 56, 100739. [Google Scholar] [CrossRef]
  37. Lombardi, R., de Villiers, C., Moscariello, N., & Pizzo, M. (2022). The disruption of blockchain in auditing—A systematic literature review and an agenda for future research. Accounting, Auditing & Accountability Journal, 35(7), 1534–1565. [Google Scholar] [CrossRef]
  38. Manita, R., Elommal, N., Baudier, P., & Hikkerova, L. (2020). The digital transformation of external audit and its impact on corporate governance. Technological Forecasting and Social Change, 150, 119751. [Google Scholar] [CrossRef]
  39. Mansour, E. M., Al-Zyod, L., Ghassab, E. E., & Alaqrabawi, M. (2025). Auditor’s willingness to learn and its effect on the intention to use AI technologies in the audit process: Evidence from emerging economies. Journal of Financial Reporting and Accounting, 23(4), 1553–1586. [Google Scholar] [CrossRef]
  40. Mohammed Ismail, I. H., & Abdul Hamid, F. Z. (2024). A systematic literature review of the role of big data analysis in financial auditing. Management & Accounting Review (MAR), 23(2), 321–350. [Google Scholar] [CrossRef]
  41. Mugwira, T. (2022). Internet related technologies in the auditing profession: A WOS bibliometric review of the past three decades and conceptual structure mapping. Spanish Accounting Review, 25, 201–216. [Google Scholar] [CrossRef]
  42. Munoko, I., Brown-Liburd, H. L., & Vasarhelyi, M. (2020). The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167(2), 209–234. [Google Scholar] [CrossRef]
  43. Musunuru, K. (2025). Big data analytics for financial auditing practices: Identification of conceptual patterns, implications and challenges using text mining. Contaduría y Administración, 70(2), 1–36. [Google Scholar] [CrossRef]
  44. Nelson, M. W. (2009). A model and literature review of professional skepticism in auditing. Auditing: A Journal of Practice & Theory, 28(2), 1–34. [Google Scholar]
  45. Odonkor, B., Kaggwa, S., Uwaoma, P. U., Hassan, A. O., & Farayola, O. A. (2024). The impact of AI on accounting practices: A review: Exploring how artificial intelligence is transforming traditional accounting methods and financial reporting. World Journal of Advanced Research and Reviews, 21(1), 172–188. [Google Scholar] [CrossRef]
  46. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers. [Google Scholar]
  47. Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. [Google Scholar] [CrossRef]
  48. PCAOB. (2025). Accounting developments 2024. The Business Lawyer, 80. [Google Scholar]
  49. Power, M. (1997). The audit society: Rituals of verification. Oxford University Press. [Google Scholar]
50. Pravdiuk, N., Miroshnichenko, M., Lukanovska, I., Tkal, Y., & Motorniuk, U. (2024). The impact of cryptocurrencies and blockchain technologies on the accounting and audit systems. Economic Affairs, 69, 107–115.
51. Ramzan, S., & Lokanan, M. (2025). The application of machine learning to study fraud in the accounting literature. Journal of Accounting Literature, 47(3), 570–596.
52. Raschke, R. L., Saiewitz, A., Kachroo, P., & Lennard, J. B. (2018). AI-enhanced audit inquiry: A research note. Journal of Emerging Technologies in Accounting, 15(2), 111–116.
53. Rose, A. M., Rose, J. M., Sanderson, K. A., & Thibodeau, J. C. (2017). When should audit firms introduce analyses of big data into the audit process? Journal of Information Systems, 31(3), 81–99.
54. Sayal, A., Johri, A., Chaithra, N., Alhumoudi, H., & Alatawi, Z. (2025). Optimizing audit processes through open innovation: Leveraging emerging technologies for enhanced accuracy and efficiency. Journal of Open Innovation: Technology, Market, and Complexity, 11(3), 100573.
55. Sun, T. (2019). Applying deep learning to audit procedures: An illustrative framework. Accounting Horizons, 33(3), 89–109.
56. Sundarasen, S., Kumar, R., Tanaraj, K., Ali Alsmady, A., & Rajagopalan, U. (2024a). From board diversity to disclosure: A comprehensive review on board dynamics and ESG reporting. Research in Globalization, 9, 100259.
57. Sundarasen, S., Rajagopalan, U., & Alsmady, A. A. (2024b). Environmental accounting and sustainability: A meta-synthesis. Sustainability, 16(21), 9341.
58. Sutton, S. G., Holt, M., & Arnold, V. (2016). The reports of my death are greatly exaggerated: Artificial intelligence research in accounting. International Journal of Accounting Information Systems, 22, 60–73.
59. Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14(3), 207–222.
60. Ülker, P., Ülker, M., & Karamustafa, K. (2023). Bibliometric analysis of bibliometric studies in the field of tourism and hospitality. Journal of Hospitality and Tourism Insights, 6(2), 797–818.
61. Van Eck, N. J., & Waltman, L. (2009). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523–538.
62. Yilmaz, A. A., & Tuzlukaya, S. E. (2024). The relation between intellectual capital and digital transformation: A bibliometric analysis. International Journal of Innovation Science, 16(2), 244–264.
63. Zhao, J., & Wang, X. (2024). Unleashing efficiency and insights: Exploring the potential applications and challenges of ChatGPT in accounting. Journal of Corporate Accounting & Finance, 35(1), 269–276.
64. Zhong, C., & Goel, S. (2024). Transparent AI in auditing through explainable AI. Current Issues in Auditing, 18(2), A1–A14.
65. Zupic, I., & Čater, T. (2015). Bibliometric methods in management and organization. Organizational Research Methods, 18(3), 429–472.
Figure 1. Publication trends on AI-enabled Auditing. Source: Bibliometrix R-package (Biblioshiny).
Figure 2. Country Contribution.
Figure 3. Thematic Map on AI and Auditing. Source: Bibliometrix R-package (Biblioshiny).
Figure 4. Keyword Network on AI in Auditing (Minimum occurrences = 5; Total keywords analyzed = 47; Clustering method = Association strength; 5 clusters identified). Source: VOSviewer.
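The keyword map in Figure 4 uses VOSviewer's association-strength normalization, which corrects raw co-occurrence counts for the fact that frequent keywords co-occur often by chance. The sketch below is a minimal illustration of that normalization under stated assumptions: it builds keyword pairs from per-document keyword lists and divides each pair's co-occurrence count by the product of the two keywords' total occurrence counts. The function name and data layout are our own, and VOSviewer's actual similarity measure additionally scales by corpus-level totals, so this is not the tool's implementation.

```python
from collections import Counter
from itertools import combinations


def association_strength(doc_keywords, min_occurrences=5):
    """Map keyword pairs to association-strength scores.

    doc_keywords: list of keyword lists, one list per document.
    Only keywords meeting the minimum-occurrence threshold are kept,
    mirroring the "Minimum occurrences = 5" filter used for Figure 4.
    """
    # Total occurrences of each keyword (counted once per document).
    occurrences = Counter(kw for doc in doc_keywords for kw in set(doc))
    kept = {kw for kw, n in occurrences.items() if n >= min_occurrences}

    # Co-occurrence counts: a pair co-occurs when both keywords
    # appear in the same document's keyword list.
    co = Counter()
    for doc in doc_keywords:
        for a, b in combinations(sorted(set(doc) & kept), 2):
            co[(a, b)] += 1

    # Association strength: co-occurrence count divided by the
    # product of the two keywords' total occurrence counts.
    return {pair: c / (occurrences[pair[0]] * occurrences[pair[1]])
            for pair, c in co.items()}
```

Pairs below the occurrence threshold are dropped before clustering, which is why only 47 of the corpus's keywords appear in the Figure 4 network.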
Figure 5. Thematic Integration Model.
Table 1. Most cited Articles on AI-enabled Auditing.
Citation | Key Findings | Main Contribution | Relevant Theme(s) (Section 3.7)
Kokina and Davenport (2017) | AI shifts auditing from sampling to full-population, proactive analysis; enhances efficiency but risks erosion of professional judgment. | Provides early conceptual framing of AI’s transformative impact on auditing. | Theme 1: AI Adoption & Audit Transformation; Theme 3: Judgment & Audit Quality
Munoko et al. (2020) | AI introduces ethical risks related to bias, transparency, accountability, and may widen capability gaps between large and small firms. | Integrates ethics and governance concerns into the AI auditing discourse. | Theme 3: Professional Judgment, Ethics & Governance
Han et al. (2023) | AI–blockchain convergence improves transparency and fraud detection; empirical evidence remains limited. | Maps synergies and gaps at the intersection of AI and blockchain in auditing. | Theme 2: Digital Technologies & Data-Driven Audit Ecosystems
Manita et al. (2020) | AI enhances efficiency and fraud detection while reshaping auditors’ roles; adoption requires reskilling and regulatory adaptation. | Frames digital transformation as both a technical and organizational process. | Theme 1: AI Adoption & Audit Transformation; Theme 2
Hajek and Henriques (2017) | NLP and machine learning outperform traditional models in detecting financial distress using unstructured disclosures. | Demonstrates predictive value of text mining for audit risk assessment. | Theme 2: Digital Technologies & Data Analytics
Damerji and Salimi (2021) | Perceived usefulness and ease of use mediate the relationship between readiness and AI adoption. | Provides empirical evidence on behavioral drivers of AI adoption in auditing. | Theme 1: AI Adoption & Capability Building
Fisher et al. (2016) | AI has strong potential to enhance audit quality, but academic research lags technological developments. | Establishes an early research agenda for AI in auditing. | Theme 1: AI Adoption & Audit Transformation
Sutton et al. (2016) | AI is more likely to augment rather than replace auditors; human–AI collaboration is essential. | Challenges automation-replacement narratives in auditing research. | Theme 1: AI Adoption & Audit Transformation; Theme 3
Albitar et al. (2021) | Pandemic constraints accelerated AI adoption but introduced risks due to rapid implementation. | Links crisis-driven digital transformation with long-term AI adoption debates. | Theme 1: AI Adoption & Audit Transformation; Theme 3
Table 2. Most published authors on AI-enabled Auditing research.
Author | No. of Publications | Main Research Interest Areas | Affiliation | Country
Miklós A. Vasarhelyi | 7 | Continuous auditing, audit analytics, artificial intelligence in auditing, continuous assurance systems, audit automation | Rutgers Business School | United States
Fahad A. Almaqtari | 3 | Corporate governance, audit quality, financial performance, emerging market accounting practices | King Khalid University | Saudi Arabia
Feng-Hsiang Chen | 3 | Artificial intelligence applications in accounting, audit risk assessment, big data analytics | National Taiwan University | Taiwan
Ming-Feng Hsu | 3 | Audit analytics, decision support systems, machine learning in auditing | National Chung Cheng University | Taiwan
Kuo-Hua Hu | 3 | Predictive analytics, fraud detection models, AI-based audit techniques | National Chengchi University | Taiwan
Jian Yang | 3 | Textual analysis, machine learning in accounting research, financial disclosure analytics | City University of Hong Kong | Hong Kong
Yahya Abu Huson | 2 | Audit quality, technology adoption in auditing, auditor judgment | Al-Zaytoonah University of Jordan | Jordan
Sami F. Al-Aroud | 2 | Accounting information systems, data analytics, digital transformation in auditing | Yarmouk University | Jordan
Abdulrahman Alassuli | 2 | Audit technology, professional judgment, audit analytics adoption | Universiti Utara Malaysia | Malaysia
Table 3. Prominent sources on AI-Enabled Auditing.
Journal Title | ABDC Ranking | Scimago Quartile
International Journal of Accounting Information Systems | A | Q1
Journal of Emerging Technologies in Accounting | B | Q2
Journal of Financial Reporting and Accounting | B | Q2
Managerial Auditing Journal | A | Q1
Journal of Open Innovation: Technology, Market, and Complexity | B | Q1
Review of Accounting Studies | A | Q1
Accounting Education | B | Q1
International Review of Financial Analysis | A | Q1
Table 4. Summary of main findings of Theme 1.
Study (Authors, Year) | Title | Main Findings | Contribution
Damerji and Salimi (2021) | Mediating effect of use perceptions on technology readiness and adoption of artificial intelligence in accounting | Technology readiness significantly relates to AI adoption, with perceived usefulness (PU) and perceived ease of use (PEOU) mediating the readiness–adoption relationship (sample: accounting students, survey-based). | Links technology readiness to AI adoption through PU/PEOU; highlights the role of education in preparing future professionals.
Alles and Gray (2024) | The marketing on Big 4 websites of Big Data Analytics in the external audit: Evidence and consequences | Big Four firms market audit analytics as providing operational and value-adding insights; “value add” is positioned as an essential selling point; raises concerns regarding auditor independence. | Shifts attention to how audit analytics are publicly represented and how audits are strategically positioned.
Leocádio et al. (2024) | Artificial Intelligence in Auditing: A Conceptual Framework for Auditing Practices | SLR develops a conceptual framework highlighting AI’s potential to shift auditing toward proactive and continuous monitoring; calls for research on efficiency, performance, regulation, and auditor adaptation. | Organizes fragmented audit–AI literature and proposes a structured agenda for future empirical work.
Curtis and Payne (2008) | An examination of contextual factors and individual characteristics affecting technology implementation decisions in auditing | Audit technologies can improve efficiency and effectiveness but are underutilized due to performance evaluation pressures, budget constraints, and contextual factors. | Explains why technological availability does not ensure implementation; emphasizes incentives and organizational context.
Kokina et al. (2025) | Challenges and opportunities for artificial intelligence in auditing: Evidence from the field | Simple AI applications are widely used, while complex AI remains limited; key challenges include explainability, bias, privacy, robustness, overreliance, and lack of guidance. | Provides field-based insight into actual AI use and consolidates governance and risk concerns.
Abdullah and Almaqtari (2024) | The impact of artificial intelligence and Industry 4.0 on auditing | AI and Industry 4.0 technologies are expected to reshape auditing, but adoption is constrained by institutional, infrastructural, and regulatory factors. | Situates AI adoption within broader digital transformation in auditing.
Abu Huson et al. (2025) | Cloud-based artificial intelligence and audit quality | AI use is associated with potential improvements in audit quality, suggesting benefits depend on effective implementation. | Links AI adoption to audit quality outcomes.
Table 5. Summary of main findings of Theme 2.
Study (Authors, Year) | Title | Main Findings | Contribution
Gu et al. (2024) | Artificial intelligence co-piloted auditing | Proposes “AI co-piloted auditing,” arguing auditors can be augmented by foundation models across audit tasks; discusses how human–AI collaboration could reshape audit work. | Introduces a clear conceptual framing for human-in-the-loop auditing with foundation models, helping shift the discussion from “automation replaces auditors” to “augmentation and workflow redesign.”
Raschke et al. (2018) | AI-Enhanced Audit Inquiry: A Research Note | Discusses the feasibility of using AI “bots” to generate audit inquiries and evaluate client responses, and outlines research opportunities for automated inquiry. | Sharp, audit-specific contribution: treats inquiry as a workflow that can be augmented/automated and highlights researchable design questions.
Manita et al. (2020) | The digital transformation of external audit and its impact on corporate governance | Interview-based evidence (Big audit firms in France) showing digital tech affects audit firms at multiple levels and reshapes audit’s role as a governance mechanism. | Strong empirical anchor: explains digital transformation through governance and organizational change rather than “tools only.”
Han et al. (2023) | Accounting and auditing with blockchain technology: A literature review | Surveys research on how blockchain will affect accounting/auditing (including audit processes and assurance implications). | Maps a major adjacent technology domain; useful for positioning how “data infrastructure” innovations interact with audit analytics/AI.
Sayal et al. (2025) | Optimizing audit processes through open innovation: Leveraging emerging technologies for enhanced accuracy and efficiency | Uses ML (supervised and unsupervised) on SEC financial statement datasets to improve audit-related risk classification. | Offers a more “build/test” oriented approach (model framework plus dataset), moving beyond conceptual claims.
Mohammed Ismail and Abdul Hamid (2024) | A systematic literature review of the role of big data analysis in financial auditing | SLR synthesizing how big data analytics is used and positioned in financial auditing; discusses opportunities and challenges. | Consolidates “big data analysis in auditing” as a structured stream; a useful bridge between audit analytics and AI-enabled decision support.
Huang and Vasarhelyi (2019) | Applying robotic process automation (RPA) in auditing: A framework | Develops a conceptual framework for integrating RPA into audit processes, focusing on automating routine, rule-based audit tasks such as data extraction, reconciliations, and control testing. | Demonstrates how RPA can enhance audit efficiency and consistency by automating repetitive procedures, allowing auditors to focus on judgment-intensive activities such as risk assessment and exception evaluation.
Dai and Vasarhelyi (2017) | Toward blockchain-based accounting and assurance | Develops a conceptual framework for applying blockchain technology to accounting and auditing, with a focus on continuous auditing, real-time verification, and immutable transaction records. | Highlights blockchain’s ability to enhance audit quality through continuous assurance models.
Table 6. Summary of main findings of Theme 3.
Study (Authors, Year) | Title | Main Findings | Contribution
Munoko et al. (2020) | The Ethical Implications of Using Artificial Intelligence in Auditing | Examines ethical challenges arising from the use of AI in auditing, including issues of fairness, accountability, transparency, and responsibility, and how these affect audit decision-making. | Positions AI in auditing as not merely a technical advancement but a professional ethics and governance issue, highlighting risks related to over-reliance, bias, and responsibility for AI-driven outcomes.
Li and Goel (2025) | Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability | Systematically reviews academic and regulatory literature on auditing AI systems and identifies auditability measures required across the AI lifecycle. | Clearly distinguishes auditing AI from using AI to audit, offering a structured view of governance, data, models, monitoring, transparency, and accountability required for AI assurance.
Zhong and Goel (2024) | Transparent AI in Auditing through Explainable AI | Proposes the use of explainable AI (XAI) techniques to improve transparency and interpretability of AI systems used in auditing. | Demonstrates that AI systems remain “black boxes” without deliberate auditability design, supporting the need for explainability, documentation, and lifecycle controls to justify audit reliance.
Bonsón and Bednárová (2019) | Blockchain and its Implications for Accounting and Auditing | Explores how blockchain features such as immutability, decentralization, and transparency may reshape accounting records and audit processes. | Supports arguments around immutable audit trails while highlighting governance challenges, particularly the need to link on-chain records to real-world economic rights and obligations.
Lombardi et al. (2022) | The Disruption of Blockchain in Auditing: A Systematic Literature Review and Future Research Agenda | Provides a structured literature review identifying research streams, gaps, and implications of blockchain adoption in auditing. | Serves as a state-of-the-art reference on blockchain in auditing, emphasizing institutional change, evolving audit procedures, and the need for standards, training, and empirical validation.
Rose et al. (2017) | When Should Audit Firms Introduce Analyses of Big Data Into the Audit Process? | Examines when and how audit firms should adopt big data analytics and how such tools influence audit planning and risk assessment. | Establishes a foundational link between analytics adoption and auditor judgment, showing that advanced tools can shape perceptions of risk and decision-making.
Raschke et al. (2018) | AI-Enhanced Audit Inquiry: A Research Note | Investigates the feasibility of using AI tools to automate audit inquiries and evaluate management responses. | Demonstrates that while AI can support inquiry processes, professional judgment, follow-up questioning, and skepticism remain essential.
Hurtt et al. (2013) | Research on Auditor Professional Skepticism: Literature Synthesis and Opportunities for Future Research | Synthesizes prior research on professional skepticism and outlines how skepticism operates and can be developed and measured. | Provides a theoretical foundation for understanding skepticism, which is critical for examining how AI and automation affect auditor judgment.
Nelson (2009) | A Model and Literature Review of Professional Skepticism in Auditing | Develops a conceptual model linking incentives, evidence, judgment, and skeptical actions in auditing. | Offers a mechanistic explanation of how AI tools may either strengthen or weaken professional skepticism through their influence on evidence evaluation and judgment.
Parasuraman and Riley (1997) | Humans and Automation: Use, Misuse, Disuse, Abuse | Introduces a human-factors framework explaining how users interact with automated systems, including over-use and misuse. | Provides the behavioral foundation for understanding automation bias and “process blindness” in AI-enabled audit environments.
Dietvorst et al. (2015) | Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err | Shows that individuals may reject algorithmic advice after observing small errors, even when algorithms outperform humans overall. | Complements automation bias research by explaining under-reliance on AI, highlighting the difficulty of calibrating appropriate trust in audit technologies.
Francis (2011) | A Framework for Understanding and Researching Audit Quality | Develops a multi-level framework explaining audit quality as a function of competence, independence, incentives, and institutional factors. | Serves as a foundational lens for evaluating how AI and automation reshape audit quality by challenging traditional notions of judgment, skepticism, and accountability.
Table 7. Summary of themes, keywords and representative articles.
Theme | Integrated Keywords | Representative Articles
Theme 1: AI in Auditing: Readiness, Representation, and Real-World Use | Technology adoption, audit risk, accounting education, expert systems, capability maturity, organizational readiness, workforce digital skills | Damerji and Salimi (2021); Alles and Gray (2024); Leocádio et al. (2025); Kokina et al. (2025); Abdullah and Almaqtari (2024); Abu Huson et al. (2025)
Theme 2: Digital Technologies & Data-Driven Audit Ecosystems | Machine learning, deep learning, RPA, big data, blockchain, IoT, text mining, predictive analytics, digital ecosystems, continuous auditing | Gu et al. (2024); Han et al. (2023); Sayal et al. (2025); Manita et al. (2020); Mohammed Ismail and Abdul Hamid (2024); Dai and Vasarhelyi (2017); Sun (2019); Huang and Vasarhelyi (2019)
Theme 3: Audit Quality, Professional Skepticism & Ethical Governance | Audit quality, skepticism, ethics, accountability, transparency, explainability, AI governance, algorithmic bias, fairness | Parasuraman and Riley (1997); Nelson (2009); Hurtt et al. (2013); Dietvorst et al. (2015); Bonsón and Bednárová (2019); Francis (2011); Munoko et al. (2020); Raschke et al. (2018); Zhong and Goel (2024); Li and Goel (2025); Lombardi et al. (2022); Rose et al. (2017)
Table 8. Research gaps and potential future research.
Theme | Synthesized Research Gaps | Potential Future Research | Recommended Methodologies | Level(s) of Analysis
Theme 1: Adoption, Audit Risk & Capability Building | Overemphasis on adoption intention rather than post-adoption use | Develop post-adoption behavioral models and longitudinal studies | Longitudinal field studies, panel data analysis, interrupted time series | Individual auditor, audit team, firm level
 | Limited evidence on AI impact on actual audit quality | Examine AI-readiness frameworks for different audit firm sizes | Archival analysis, quasi-experimental designs, difference-in-differences | Engagement level, firm level
 | Lack of integration with institutional pressure, regulation, and governance theories | Study regulatory influence on AI adoption | Comparative institutional analysis, cross-country studies, policy analysis | Institutional level, regulatory framework
 | Skills gaps between universities, firms, and technological needs | Explore skill transformation pathways and investigate organizational resistance and cultural barriers | Survey research, competency gap analysis, Delphi studies, case studies | Individual auditor, educational institutions, firm level
 | Lack of organizational change models for AI-based auditing | Design capability maturity models for AI-enabled auditing | Action research, design science research, multiple case studies | Firm level, organizational processes
 | Lack of field-based empirical validations of ML/DL in real audits | Develop open-source benchmark datasets and conduct firm-level pilots using ML/DL in real engagements | Field experiments, pilot studies, randomized controlled trials | Engagement level, algorithm performance
Theme 2: Digital Technologies & Data-Driven Audit Ecosystems | Few benchmark datasets for replicable AI audit research | Build integrated digital ecosystem audit frameworks | Dataset development, collaborative research initiatives, open-source projects | Industry-academic collaboration level
 | Limited understanding of AI-human collaboration in judgment | Propose standards for AI-enabled digital evidence | Laboratory experiments, process tracing, think-aloud protocols, behavioral observation | Individual auditor, task level
 | Regulatory uncertainty in blockchain-based evidence | Study model drift and continuous monitoring | Legal analysis, Delphi studies with regulators, comparative jurisdictional analysis | Institutional level, regulatory frameworks
 | Fragmented literature on digital ecosystems | Develop AI model auditability frameworks | Systematic literature review, framework development, design science research | Technology level, ecosystem level
 | Lack of AI lifecycle governance, model drift research, and assurance of AI models | Create AI adoption pathways for small audit firms | Longitudinal monitoring studies, algorithm audits, performance testing | Algorithm level, firm level
Theme 3: Audit Quality, Professional Skepticism & Ethical Governance | No frameworks for AI responsibility, liability, and accountability | Develop AI accountability frameworks | Legal case analysis, scenario-based analysis, stakeholder interviews, action research | Institutional level, firm level, legal framework
 | Limited evidence on AI improving regulatory inspection outcomes | Study AI impact on PCAOB/IAASB inspection results | Archival analysis of inspection data, quasi-experimental designs, regulatory data analysis | Engagement level, firm level, regulatory level
 | Underdeveloped models of digital skepticism | Create hybrid skepticism models | Behavioral experiments, cognitive psychology studies, survey research | Individual auditor, cognitive processes
 | Insufficient exploration of AI-driven cognitive biases | Conduct behavioral experiments on AI-assisted fraud detection | Laboratory experiments, between-subjects designs, eye-tracking studies, neuroimaging | Individual auditor, judgment and decision-making
 | Poor integration of AI ethics into auditing standards | Design explainable AI protocols for auditors | Design science research, A/B testing of explanation formats, user experience studies | Technology design, individual auditor, task level
 | Lack of explainable AI tools designed for audit evidence | Explore governance of algorithmic fairness and bias | Algorithmic fairness audits, field experiments, archival analysis for bias detection | Algorithm level, client level, societal level
 | N/A | Evaluate impact of AI governance on trust and litigation risk | Archival analysis of litigation cases, survey of stakeholder trust, event studies | Firm level, market level, stakeholder level
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sundarasen, S.; Kamaludin, K.; Nakiran, D. From Adoption to Audit Quality: Mapping the Intellectual Structure of Artificial Intelligence-Enabled Auditing. J. Risk Financial Manag. 2026, 19, 209. https://doi.org/10.3390/jrfm19030209

