Article

Driving Strategic Innovation Through AI Adoption in Government Financial Regulators: A Case Study

by Carlos Andrés Merlano Porras 1,*, Luis Arregoces Castillo 2, Lisa Bosman 1 and Monica Gamez-Djokic 1
1 Purdue Polytechnic Institute, Purdue University, West Lafayette, IN 47907, USA
2 Department of Mathematics, University of Houston, Houston, TX 77204, USA
* Author to whom correspondence should be addressed.
Platforms 2025, 3(4), 20; https://doi.org/10.3390/platforms3040020
Submission received: 15 October 2025 / Revised: 3 December 2025 / Accepted: 15 December 2025 / Published: 16 December 2025

Abstract

Public institutions are experiencing increased dynamism due to rapid technological development and digitalization, which are creating novel opportunities for innovation. This reality is particularly salient in high-accountability contexts, such as financial regulation, where the adoption of Artificial Intelligence (AI) drives new forms of governance. Orchestrating this technological shift can offer a path to enhanced effectiveness; however, it requires new capabilities to sense, seize, and reconfigure opportunities in a complex public-interest environment. Yet prior research offers little insight into the specific dynamic capabilities and routines required for responsible AI adoption in the public sector. This study therefore investigates how a government institution develops dynamic capabilities to govern AI innovation. Through a single, in-depth case study of a national financial regulator, it identifies the specific micro-routines that underlie the regulator’s sensing, seizing, and reconfiguring capabilities. We develop a capability-based framework that demonstrates that responsible adoption depends on a dual set of capabilities operating at both an internal (organizational) and an ecosystem (market-facing) level. The findings carry implications for the literature on public sector innovation, dynamic capabilities, and platform governance, as well as for leaders managing technological change in government.

1. Introduction

In an era of digitalization, public sector innovativeness becomes a key issue that transcends the walls of a single agency [1]. Instead, it expands into a myriad of partnerships, commonly known as innovation ecosystems and collaborative networks [2]. Government organizations face mounting pressure to modernize as rapid technological advancements create both new opportunities for service delivery and new governance risks [3]. When oversight arrives only after a new technology has been deployed, the result is fragmented pilots, unclear ownership, duplicated vendor spending, slower risk detection, and legitimacy risks. These frictions increase total technology costs (through duplication and rework), lengthen pilot-to-scale cycle times, and delay the detection of emerging risks for both supervisors and supervised firms. For example, a multi-stakeholder market-surveillance database, run in collaboration with exchanges and vendors, faced ballooning costs, unclear cost ownership, legal setbacks, and delays due to poor governance and cost allocation [4].
The longer reforms are deferred, the more legacy constraints compound, skill gaps widen, and public trust erodes. Notably, Artificial Intelligence (AI) exhibits significant potential for enhancing government effectiveness, such as lowering costs, broadening access to services, and improving efficiency [5,6]. However, it also presents profound challenges to government entities regarding oversight activities, ethics, and public trust [7]. Moreover, current knowledge about how public organizations, especially those in high-accountability contexts, can effectively orchestrate this transformation remains limited [8].
A pivotal challenge is that public agencies often lack established routines and capabilities for managing disruptive innovation in a way that aligns with their core mandates of stability and public trust [9]. The literature on dynamic capabilities (DCs), which examines how an organization purposefully adapts to changing environments, provides a compelling lens for exploring these challenges [10]. According to the DC view [10], sustainable adaptation depends on the ability to sense opportunities and threats, to exploit them through timely action, and to realign organizational resources to remain effective. Arguably, such capabilities act as the backbone of successful public sector transformation. Nevertheless, insights into the formation and use of dynamic capabilities in a public sector context remain scarce.
First, there is a need to understand the specific “distinct skills, processes, procedures, organizational structures, decision rules, and disciplines” [10] that underlie dynamic capabilities in a non-market, high-accountability environment. While prior studies have described the barriers to GovTech adoption [11,12,13,14], the micro-foundational routines that enable public agencies to build innovation capacity remain underexplored.
Second, an interesting domain for further inquiry is how these institutions utilize dynamic capabilities to govern not only their own transformation but also that of the broader ecosystem they oversee. It would be beneficial to investigate how they use sensing, seizing, and reconfiguring in combination to manage relationships with diverse actors, i.e., from technology vendors to regulated firms and civil society.
This study examines the “XYZ Government Financial Regulator” to address the limited empirical evidence on dynamic capabilities in high-stakes public settings [9]. The guiding research question asks how internal routines and governance arrangements facilitate AI adoption and how they align with the sensing, seizing, and reconfiguring dynamic capabilities. Building on this guiding question, this study poses two research questions: RQ1—How do internal routines and governance arrangements in a high-accountability financial regulator enable responsible AI adoption from intake to scale? and RQ2—How do these routines and arrangements operationalize the dynamic capabilities of sensing, seizing, and reconfiguring?
Against this background, this study examines how a national financial regulator (anonymized as the “XYZ Government Financial Regulator”) develops the dynamic capabilities to govern responsible AI adoption. To move beyond ad hoc trials, the regulator implemented three design moves: a governed sandbox, a formal use-case intake/triage process, and model risk co-governance, creating gated decisions that balance learning with accountability. Drawing on in-depth documentary and archival data, the findings indicate that dynamic capabilities—specifically, sensing, seizing, and reconfiguring—are crucial for orchestrating both internal- and ecosystem-level adaptation. This study proposes a framework for these capabilities and their microfoundations, illustrating how they enable a balance between innovation and governance. It also shows how platform governance (shared data services, standardized interfaces, and gatekeeping rules) enables the repeatable, accountable scale-up of AI solutions in public institutions [15,16].
The aims of this study are as follows: (i) to identify micro-routines for sensing (readiness auditing; intake/triage), (ii) to explain seizing mechanisms (governed sandboxes; iterative feedback; common interfaces/contracts), and (iii) to specify reconfiguring routines (roles, policies, metrics, multi-party governance; open artifacts) that institutionalize learning and scale.
The remainder of this paper is organized as follows: Section 2 outlines the literature on dynamic capabilities and the specific challenges of AI adoption in the public sector. Section 3 presents our research methods. Section 4 presents the study results, detailing the internal and ecosystem capabilities. Section 5 presents a discussion of the study results. Section 6 concludes this paper with a discussion of the theoretical and managerial implications.

2. Literature Review

2.1. Dynamic Capabilities in the Public Sector Context

Organizations can remain effective in an era of increased dynamism and change by developing “dynamic capabilities” [17,18]. The dynamic capabilities perspective focuses on how firms adapt to changing environments by purposefully creating, extending, or modifying their resource base. In brief, dynamic capabilities are “the firm’s processes that use resources, specifically the process to integrate, reconfigure, gain, and release resources, to match and even create market change” [18].
For analytical purposes, Teece [10] classifies dynamic capabilities into three core clusters of routines. First, sensing capabilities involve scanning, interpreting, and learning about the external environment to identify opportunities and threats. Second, seizing capabilities consist of mobilizing resources to address an identified opportunity, including making investment decisions and designing business models. Finally, reconfiguring capabilities involve the continuous renewal of the organization through redesigning structures, processes, and assets to maintain relevance over time.
Although this framework was designed for competitive markets, it remains a powerful lens for examining public sector adaptation. The primary distinction lies in context. Public institutions pursue not profit but public trust, operating under legal mandates and political oversight [19,20]. This high-accountability setting imposes unique constraints that shape the development and deployment of capabilities [1,21]. For a public agency, sensing extends beyond market trends to emerging social risks; seizing is constrained by procurement rules and low risk tolerance; and reconfiguring must satisfy the principles of transparency and due process.

2.2. AI Adoption in Organizations

Artificial Intelligence (AI) is a technology that simulates cognitive skills in humans, such as learning, reasoning, and decision-making [22]. It encompasses machine learning, natural language processing, and robotics to automate tasks that require human knowledge and skills. AI has numerous applications, including process automation, predictive analytics, customer service, and compliance [23,24].
AI can help transform organizations by analyzing data in real-time, identifying anomalies, and making informed decisions [25,26]. Interestingly, AI technologies have the potential to automate reporting and facilitate the classification of compliance filings [27,28]. In fact, AI can handle the operational and strategic tasks of modern companies, including complex operations, problematic datasets, and decision-making [29].
Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) systems have opened new horizons in content generation and strategic decision-making [30]. Organizational applications of AI are wide-ranging, and its capacity to process large amounts of data in real time, unearth patterns, and present actionable information supports more accurate and efficient decision-making. AI-driven predictive analytics can also identify and evaluate fraud and market risks across industries [31].
To a significant extent, AI has enhanced various operations, primarily through machine learning (ML) and natural language processing (NLP). Bostrom [32] describes how AI may simplify the decision-making process, lower expenses, and streamline processes for authorities to detect and respond to market changes. Indeed, machine learning tools help compare various financial documents and identify potential suspicious activity associated with fraudulent activities and criminal operations [33].
Another advantage of AI is automation, which reduces the amount of repetitive work and enables employees to focus on tasks that are more valuable and meaningful. For instance, AI-based chatbots for customer service reduce response times and operate 24/7 [34,35]. In addition, AI applications can prevent non-compliance and risk management issues by monitoring, reporting, and detecting irregularities in financial transactions [36]. Governmental agencies have also been interested in these developments and have found them to be helpful in creating efficiency and control, as well as responding to emerging risks in dynamic economies.

2.3. AI as a Governance Challenge for Public Institutions

The rise of Artificial Intelligence (AI) provides a critical test for public sector dynamic capabilities. On the one hand, AI can be revolutionary in terms of improving efficiency and delivering services in the public sector [37,38], as digital infrastructures or platforms enable AI tools to control resources in dynamic settings [39]. On the other hand, AI presents major control issues that challenge core public administration principles [40,41]. These include the risk of algorithmic discrimination perpetuating social disparities [42]; insufficient transparency and explainability [43], which are incompatible with the requirement to justify decisions [44]; and threats to the privacy of citizens’ sensitive data [45].
Such challenges are particularly difficult to overcome for financial regulatory and supervisory institutions [46]. For these institutions, adopting AI is not merely a technical upgrade, but an intricate attempt to balance the promise of better market regulation against the risk of undermining systemic stability or public confidence [30,47,48,49]. These challenges have been addressed in the literature, and in many cases, existing sources present rich descriptive or normative accounts of AI pilots or of what governments ought to do [50,51]. However, the academic literature still lacks insight into the underlying organizational processes required to navigate this landscape. Consequently, new insights are required to understand the dynamic capabilities that enable a public institution to manage the dual challenge of AI innovation and governance.

3. Materials and Methods

This study employs an exploratory, single-case study approach to gain insights into how a regulatory institution develops dynamic capabilities to govern AI innovation. The case study method is suitable for addressing “how” questions and building theory from complex, real-world phenomena [52,53].

3.1. Case Selection

This case involves a national government entity, anonymized as the “XYZ Government Financial Regulator”. We used convenience and theoretical sampling to select a setting with (i) high accountability, (ii) explicit AI governance structures (model risk co-governance; DPIA pre-screen), (iii) a formal experimentation and intake apparatus (governed sandbox; use-case intake/triage), and (iv) comprehensive, traceable documentation spanning planning, execution, and assurance (PP, TS/DAS, MIN, PRG, ER, MRC/POL/GOV, LOG). The organization’s active AI program and document trail (March–July 2024) provided a transparent view of the routines that instantiate sensing, seizing, and reconfiguring under real-world public sector constraints, making it an information-rich instance suitable for theory elaboration [52,53].

3.2. Data Collection

Data were gathered through an unobtrusive review of internal archival documents generated by the “XYZ Government Financial Regulator” between March and July 2024, facilitating a close analysis of formal processes, decisions, and outcomes within the regulatory entity. A case study protocol and database cataloged project plans (PP, n = 5), technical specifications/data architecture documents (TS/DAS, n = 5), meeting minutes (MIN, n = 20), progress reports (PRG, n = 10), evaluation reports (ER, n = 5), system logs (LOG, ≈300 entries), and end-user feedback forms (UX-Eval, n = 40). To strengthen reliability and triangulate findings, we incorporated organization-wide assessments (CultureSurvey-2024, IT-Assessment-2024, and TNA-2024-05), together with policies and governance artifacts (MRC-2024-05; POL-2024-06; GOV-2024-05) and ecosystem-facing signals (RFI-2024-04; TechSprint-Call-2024; CSC-2024-06; CAR-2024-03). The design choices we observed (standardized APIs, shared data layers, and interface conventions) mirror platform governance patterns in which interface management reduces rework and accelerates integration, as seen in public digital platforms [54]. Artifact tags (e.g., PP-2024-04, MIN-2024-05-22) are used consistently in the Results and tables for verbatim traceability. Tagged artifacts are cited in the Results and evidence tables when used in the coded analysis; background materials are logged as context. All tagged sources are cataloged in Appendix A: Source Log.
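For readers who wish to replicate the traceability discipline described above, the tag-to-source-log mapping can be sketched as a simple catalog with a consistency check. The catalog entries and tags below are hypothetical illustrations, not the study’s actual source log:

```python
# Minimal sketch of an artifact catalog keyed by tag (all entries hypothetical)
CATALOG = {
    "PP-2024-04": {"type": "project plan", "month": "2024-04"},
    "MIN-2024-05-22": {"type": "meeting minutes", "month": "2024-05"},
    "LOG-2024-06": {"type": "system log", "month": "2024-06"},
}

def resolve(tag):
    """Return the cataloged metadata for an artifact tag, or None if untracked."""
    return CATALOG.get(tag)

def cited_but_uncataloged(cited_tags):
    """Flag tags cited in the Results that are missing from the source log."""
    return [t for t in cited_tags if t not in CATALOG]

print(cited_but_uncataloged(["PP-2024-04", "ER-2024-07"]))  # ['ER-2024-07']
```

A check of this kind supports the verbatim traceability claim: every tag cited in the Results can be mechanically verified against the source log.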

3.3. Data Analysis

The data were subjected to thematic analysis to understand how the regulator develops dynamic capabilities. A strength of this method is that it allows for the practical identification of patterns and analytical themes within a large and complex dataset [55,56]. The process was iterative and followed a theory-guided logic. Initially, we drew on insights from the prior literature to inform the development of theoretically grounded, overarching themes. To help us understand the process of adaptation, we adopted Teece’s [10] division of dynamic capabilities—i.e., sensing, seizing, and reconfiguring—as synthesizing concepts to create three overarching analytical themes.
We employed a theory-building, single-case embedded design. As detailed in Section 3.2 and Table 1, we analyzed PP, TS/DAS, MIN, PRG, ER, LOG, and UX-Eval, supplemented by CultureSurvey-2024, IT-Assessment-2024, TNA-2024-05, policy/governance artifacts (MRC/POL/GOV), and ecosystem signals (RFI/TechSprint-Call/CSC/CAR). We conducted iterative, theory-guided thematic coding (open → axial) with a preregistered coding protocol. Two coders independently applied open codes to a 20% stratified sample (by source type), reconciled discrepancies, and refined the codebook (version 1 to version 3). Inter-rater reliability on the confirmation sample (n = 60 excerpts) yielded Cohen’s κ = 0.78 (substantial agreement). We maintained an audit trail of code changes and decision memos, triangulated document coding with meeting-minute context checks, and pattern-matched codes for sensing, seizing, and reconfiguring. We enhanced credibility through temporal bracketing of events and rival-explanation testing (e.g., staffing changes, policy deadlines), and we triangulated interpretations across all tagged sources. This study analyzed existing, non-identifiable records only; per institutional policy, it qualifies as secondary research and does not involve human subjects (IRB determination on file). The aim of this study is analytic generalization, i.e., proposing internal- and ecosystem-level micro-routines that extend dynamic capabilities theory in high-accountability public settings.
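The inter-rater reliability check reported above can be reproduced with a short computation. The following sketch computes Cohen’s κ from two coders’ categorical labels; the excerpt codes shown are hypothetical, not the study’s data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical excerpt codes from two independent coders
a = ["sensing", "seizing", "sensing", "reconfiguring", "seizing"]
b = ["sensing", "seizing", "sensing", "seizing", "seizing"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Values above 0.60 are conventionally read as substantial agreement, which is the interpretation applied to the reported κ = 0.78.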
We then followed an inductive process to identify the specific micro-routines within each theme. Using DELVE, we read every document several times and coded phrases related to the research purpose. The choice of DELVE aligns with recent research advocating the use of AI as an aid for reflexive thematic analysis, expanding the scope and detail of this study [57].
Through a series of iterations, we discovered links and patterns within the codes, which enabled us to group them into the specific internal- and ecosystem-level routines presented in the findings. Throughout the analysis, the authors discussed preliminary findings to arrive at valid results. These steps enabled us to develop an empirically driven theoretical framework linking capabilities and their underlying micro-routines.

4. Results: The Dynamic Capabilities for Responsible AI Adoption

The analysis revealed that the effective integration of AI in a government institution, such as the XYZ Government Financial Regulator, is underpinned by a dual set of dynamic capabilities. These capabilities operate at both the internal level, focusing on the organization’s processes and resources, and the ecosystem level, involving the strategic orchestration of external partners, regulated entities, and the public; each level is expressed through sensing, seizing, and reconfiguring routines. The analysis identified a series of micro-routines supported by direct documentary evidence (as shown in Table 2, Table 3 and Table 4), and Figure 1 collates these results into a framework that emphasizes vertical maturation (sensing → seizing → reconfiguring) and horizontal alignment between the internal and ecosystem levels at each stage.

4.1. Sensing: Identifying Opportunities in a Regulatory Environment

For a financial regulator, the “sensing” capability extends beyond merely identifying market or technological opportunities. It depends on the mandate to manage systemic risk, maintain public trust, and uphold legal and ethical standards. This context reframes the sensing capability as a risk- and legitimacy-driven process rather than a profit-seeking activity.

4.1.1. Internal Routines: Internal Readiness Auditing and Use-Case Intake and Triage

The regulator developed two primary internal routines to sense and prepare for AI integration.
The first routine was a systematic process of internal readiness auditing, which involved a holistic assessment of technology, skills, and culture to create a baseline for AI adoption. The analysis of the culture survey revealed a supportive culture, with 82% of employees expressing confidence in implementing AI [CultureSurvey-2024]. However, this routine also exposed critical gaps. A technological review highlighted that only 60% of legacy systems were ready for real-time data integration [IT-Assessment-2024], prompting a project plan that noted the need to “map the current digital and technological structure... [and] propose a clear structure for the development and maintenance of analytical products” [PP-2024-04]. Similarly, skills assessments revealed that fewer than 30% of staff were proficient in AI tools [TNA-2024-05], leading one employee to emphasize that “this training is essential for developing skills and updating knowledge” [UX-Eval-2024-07].
The second routine was a formalized use-case intake and triage process. Rather than adopting AI ad hoc, the regulator established a collaborative process with 80 participants to identify and prioritize AI applications [MIN-2024-05-22]. Meeting minutes show this process evaluated use cases against strategic objectives, such as optimizing “the detection of irregularities in financial transactions” [STRAT-2024] and considering “money laundering and terrorism financing risks” [AML-2024-05]. Crucially, this triage included ethical pre-screening that checked proposals against the organization’s integrity framework [DPIA-2024-05]. As one respondent noted, this ethical alignment was a prerequisite for action: “We are awaiting the updates to the XYZ Government Financial Regulator’s code of ethics and integrity so that the aspects of ethics and transparency in Artificial Intelligence (AI) can be integrated” [HR-EthicsMemo-2024-05].
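To make the intake-and-triage logic concrete, the following sketch illustrates a scoring procedure in which the ethical pre-screen acts as a hard gate rather than a weighted criterion. All field names, scores, and the threshold are hypothetical illustrations, not values from the case documents:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    strategic_fit: int          # 1-5: alignment with supervisory objectives
    aml_relevance: int          # 1-5: relevance to AML/CFT priorities
    passes_ethics_screen: bool  # outcome of a DPIA-style pre-screen

def triage(cases, threshold=6):
    """Ethics pre-screening is a hard gate; scoring ranks only eligible cases."""
    eligible = [c for c in cases if c.passes_ethics_screen]
    scored = [(c.strategic_fit + c.aml_relevance, c.name) for c in eligible]
    return [name for score, name in sorted(scored, reverse=True) if score >= threshold]

cases = [
    UseCase("transaction-anomaly detection", 5, 5, True),
    UseCase("chatbot for filings support", 4, 2, True),
    UseCase("social-media scoring", 5, 4, False),  # high score, but fails the ethics gate
]
print(triage(cases))  # ['transaction-anomaly detection', 'chatbot for filings support']
```

The design point is that ethical alignment is not traded off against strategic fit: a use case that fails the pre-screen never enters the ranking, mirroring the “prerequisite for action” stance in the quoted memo.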
The internal audit of skills, culture, and technology is critical, as it enables organizations to identify and address hidden gaps before they derail implementation.

4.1.2. Ecosystem Routines: Convening Stakeholder Meetings and Public Problem Signaling

The regulator’s sensing capability extended beyond its walls through routines designed to engage the broader ecosystem. One routine involved convening cross-agency and industry meetings to collaboratively map technological roadmaps and identify shared challenges [CAR-2024-03]. Furthermore, the regulator established a routine of public problem signaling, using Requests for Information (RFIs) and tech sprints to signal its priorities to the market, thereby inviting innovative solutions from vendors and academia [RFI-2024-04]; [TechSprint-Call-2024]. The April 2024 RFI drew diverse responses across open-source groups, COTS vendors, hybrid providers, and civil society/academic actors, illustrating the benefits and trade-offs of cross-sector collaboration in digital government [1]. Ecosystem-level sensing also included proactively mapping citizen risks and equity concerns by soliciting input from civil society organizations, ensuring that the opportunities being “sensed” aligned with broader public values [CSC-2024-06]; [CFS-2024-06]. Indeed, entities should proactively convene ecosystem partners (industry, academia, other agencies) to build a shared understanding of technological trends and risks, rather than relying exclusively on their internal knowledge. Table 2 presents the sensing capability, linking observable activities to representative evidence/quotations.

4.2. Seizing: Mobilizing Resources Through Controlled Experimentation

In the public sector, the “seizing” capability does not depend on the “fail fast” mantra of private industry but on a more cautious principle of “safe-to-fail” experimentation. The regulator’s routines sought to mobilize resources and test AI solutions in a manner that maximized learning while minimizing public risk and protecting institutional legitimacy.

4.2.1. Internal Routines: Governed Sandbox Environments and Iterative Feedback and Refinement

The central seizing routine used governed sandbox environments. These were not merely technical testbeds but formalized governance structures. Progress reports described them as “secure and flexible platforms that allow for testing solutions while safeguarding sensitive data,” often using “anonymized and synthetic data” [PRG-2024-06]. This routine enabled the controlled testing of selected use cases, from AI-powered chatbots to risk assessment models, before any exposure to live operational environments. The value of this routine was evident in system logs that stated the following: “Detected anomalies during sandbox testing reduced data discrepancies by 15% and improved response times for AI model outputs by 25%, ensuring readiness for production deployment” [LOG-2024-06].
In parallel, the regulator institutionalized routines for iterative feedback and refinement, collecting user feedback from focus groups and structured forms to improve AI algorithms and interfaces. For instance, an evaluation report noted that feedback on one tool led to a redesign that “improved its usability rate from 40% to 80% across the organization” [UX-Eval-2024-07]. This continuous loop ensured that the solutions were technically sound, operationally relevant, and user-centric. Organizations should recognize the value of governed sandboxes with precise entry/exit criteria for de-risking innovation and building the institutional confidence needed for full-scale deployment.
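The gated promotion logic implied by the sandbox’s entry/exit criteria can be sketched as a checklist evaluation. The criteria names and thresholds below are hypothetical; only the figures in the example call (15% discrepancy reduction, 25% response-time improvement, 80% usability) come from the reported case evidence:

```python
# Hypothetical exit criteria for promoting a model out of the governed sandbox
EXIT_CRITERIA = {
    "discrepancy_reduction_pct": 10,  # min % reduction in data discrepancies
    "response_time_gain_pct": 20,     # min % improvement in model response time
    "usability_rate_pct": 70,         # min usability rate from user evaluations
}

def sandbox_exit_ready(metrics, criteria=EXIT_CRITERIA):
    """All exit criteria must be met before production deployment is approved."""
    return all(metrics.get(name, 0) >= floor for name, floor in criteria.items())

# Figures reported in the case: 15% discrepancy reduction, 25% faster responses,
# usability improved to 80% after redesign
print(sandbox_exit_ready({"discrepancy_reduction_pct": 15,
                          "response_time_gain_pct": 25,
                          "usability_rate_pct": 80}))  # True
```

Treating every criterion as mandatory (all must pass) reflects the “safe-to-fail” principle: a model that excels on one dimension cannot compensate for a shortfall on another.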

4.2.2. Ecosystem Routines: Establishing Common Interfaces and Standardizing Collaboration Contracts

Seizing at the ecosystem level involved creating the shared infrastructure necessary for collaborative innovation. A key routine was the establishment of standard interfaces and shared datasets. Technical specifications indicate that the regulator collaborated with partners to establish API standards and schemas, enabling secure data exchange among the involved parties [DAS-2024-06]. It also developed shared reference datasets to enable partners to test solutions against a standard benchmark [RDG-2024-06].
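The value of agreed schemas can be illustrated with a minimal conformance check of the kind a common interface standard makes possible. The field names and types below are hypothetical, not taken from the regulator’s actual API specifications:

```python
# Hypothetical shared schema for records exchanged over the common interface
SCHEMA = {"entity_id": str, "report_date": str, "amount": float, "flagged": bool}

def validate(record, schema=SCHEMA):
    """A record conforms if every agreed field is present with the agreed type."""
    missing = [name for name in schema if name not in record]
    mistyped = [name for name, typ in schema.items()
                if name in record and not isinstance(record[name], typ)]
    return not missing and not mistyped

print(validate({"entity_id": "B-0042", "report_date": "2024-06-01",
                "amount": 1250.0, "flagged": False}))  # True
print(validate({"entity_id": "B-0042", "amount": "1250"}))  # False
```

Because every partner validates against the same schema, integration errors surface at the interface rather than deep inside downstream analytics, which is precisely how standardized interfaces reduce rework.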
Furthermore, the regulator developed standardized collaboration contracts. These legal instruments went beyond typical procurement to include specific clauses on data use, transparency, reproducibility, and audit rights, ensuring alignment with public accountability standards [CCT-2024-06]. The ongoing routine established a trusted framework for public–private collaboration, which is essential for mobilizing resources from technology providers and academic institutions to the public regulator. Creating standardized collaboration contracts and shared technical interfaces (APIs) reduces friction and accelerates secure co-innovation between entities and external partners.
Table 3 summarizes seizing routines by mapping observable activities to representative evidence/quotations.

4.3. Reconfiguring: Institutionalizing Innovation for Enduring Governance

The final capability, “reconfiguring,” involves embedding successful AI initiatives into the organization’s structure, policies, and culture to ensure lasting change. For the regulator, this meant reconfiguring not just for efficiency, but for enhanced governance and institutional legitimacy.

4.3.1. Internal Routines: Embedding New Governance Roles and Updating Policies and Metrics

A primary reconfiguring routine involved the establishment of new governance roles and forums. The organization appointed formal roles such as an “AI Product Owner” and “Data Steward” and established a Model Risk Committee to provide ongoing oversight [ORG-2024-04]; [MRC-2024-05]. This structural change embedded accountability for AI systems within the organization.
Another routine involved updating policies and performance metrics. The regulator moved beyond measuring only model accuracy. Evaluation reports show the adoption of a broader set of metrics, including “decision cycle time, explainability acceptance by end-users, [and] complaint rate.” Performance on these metrics, such as achieving an “F1 score [that] averaged 92% across tested applications,” was used to determine readiness for full-scale deployment [MLOps-Metrics-2024-05].
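For reference, the F1 score cited above is the harmonic mean of precision and recall and can be computed from confusion-matrix counts. The counts in the example are illustrative only, chosen to yield the reported 92% average:

```python
def f1_from_counts(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)  # share of flagged items that were correct
    recall = tp / (tp + fn)     # share of true items that were flagged
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only; the case reports an average F1 of 92%
print(round(f1_from_counts(tp=92, fp=8, fn=8), 2))  # 0.92
```

Pairing such technical metrics with the broader indicators the regulator adopted (decision cycle time, explainability acceptance, complaint rate) keeps deployment decisions from resting on model accuracy alone.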
Finally, the organization developed an AI implementation roadmap. This strategic document, detailed in project plans, outlined a long-term vision, including workforce development targets (e.g., “90% of employees trained within two years”) and plans for continuous investment [AIR-2024-07]; [TRN-2024-07]. This routine ensured that AI adoption was not a one-off project but a sustained strategic commitment by the public entity. AI accountability is paramount, and organizations can strengthen it by creating new roles (e.g., AI Product Owner) and updating performance metrics to include measures of trust and efficiency in addition to technical accuracy.

4.3.2. Ecosystem Routines: Creating Multi-Party Governance and Publishing Open Artifacts

The “XYZ Government Financial Regulator” implemented its most advanced reconfiguring routines at the ecosystem level, aiming to shape the ecosystem itself: it established multi-party governance boards that include representatives from other agencies, the industry, and the public, thereby providing ongoing strategic direction [GOV-2024-05].
A second powerful routine was the publication of open artifacts. By releasing templates, checklists, and model cards, the regulator worked to stabilize market expectations and promote best practices across the financial sector [OAR-2024-07]. The ultimate reconfiguring routine involved creating pathways to codify learnings, where successful practices from AI pilots were formalized in supervisory handbooks and, where appropriate, new regulations [HBK-2024-07]; [MLT-2024-07]. This final step closes the loop, using the “XYZ Government Financial Regulator” innovation to evolve the very rules of the ecosystem it governs.
Table 4 maps the reconfiguring routines from observable activities to representative evidence/quotations.
The interplay of these internal- and ecosystem-level capabilities forms a comprehensive framework for responsible AI integration in the public sector, as depicted in Figure 1.
Based on the inductive analysis of this case study, we propose a capability framework (Figure 1) to explain how a high-accountability public institution applies dynamic capabilities to govern the adoption of Artificial Intelligence (AI) and achieve benefits in public value. The proposed framework is grounded in the micro-routines identified in the empirical analysis. It draws on the three core components of dynamic capabilities—sensing, seizing, and reconfiguring [10]—and illuminates the specifics of these components in a public sector regulatory context.
Crucially, the framework depicts the relationships among capabilities operating at two distinct but interdependent levels: internal (within the regulator itself) and ecosystem (engaging with external market and civil society actors). Each capability exercises a critical function in ensuring the successful and responsible integration of AI.
Internal sensing involved the “XYZ Government Financial Regulator” conducting internal readiness auditing (technology, skills, and culture) and running a use-case intake and triage process that filtered proposals through strategic-fit and ethical pre-screening. For ecosystem sensing, the “XYZ Government Financial Regulator” convened stakeholder meetings to surface shared risks and opportunities and used public problem signaling (e.g., RFIs/tech sprints) to attract fit-for-purpose solutions from the market.
Through internal seizing, the “XYZ Government Financial Regulator” pursued identified opportunities in governed sandbox environments, where it tested candidate solutions. It also established iterative feedback and refinement routines, incorporating structured input from users and supervisors to improve models and interfaces before production. With ecosystem seizing, the “XYZ Government Financial Regulator” enabled joint execution by establishing common interfaces (standard APIs, schemas, and shared reference datasets) and by standardizing collaboration contracts that specified data use, transparency, auditability, and risk allocation, thereby lowering transaction costs and enabling secure co-innovation.
Finally, internal reconfiguration enabled the “XYZ Government Financial Regulator” to consolidate gains by embedding new governance roles, such as AI product ownership, data stewardship, and model risk oversight, and by updating policies and metrics to track explainability acceptance, complaint rates, and decision cycle time alongside accuracy. During ecosystem reconfiguring, the regulator maintained alignment by establishing multi-party governance with peer agencies, industry, and civil society, and by publishing open artifacts (templates, checklists, model cards, and guidance) to codify lessons and stabilize expectations across the market.
A key insight from the analysis, visualized by the connections in the framework, is that achieving the full benefits of responsible AI adoption requires these two streams of capabilities to be developed in concert. The arrows illustrate two types of relationships:
  • Sequential Flow (vertical solid arrows): The down-pointing arrows show the maturation path (sensing → seizing → reconfiguring). The “XYZ Government Financial Regulator” must first sense and then seize an opportunity before it can successfully reconfigure its operations around a new solution.
  • Cross-Boundary Alignment (horizontal double-headed arrows): The two-way arrows between the internal and ecosystem panels at each stage indicate required, concurrent coordination (e.g., internal sensing ↔ ecosystem sensing). Effective internal sensing (e.g., identifying a use case) must align with ecosystem sensing (e.g., understanding citizen input on that use case). Successful internal seizing (e.g., running a sandbox) depends on robust ecosystem seizing (e.g., having clear collaboration contracts with partners).
When these dynamic capabilities are present and aligned, they enable the “XYZ Government Financial Regulator” to achieve two overarching governance objectives: first, to drive responsible internal innovation by safely and effectively deploying new technologies; and second, to exercise proactive ecosystem governance by shaping the standards and collaborative structures that ensure AI is used ethically and safely across the market it oversees.
Unlike prior dynamic capabilities work centered on private firms, this framework treats the ecosystem as co-equal with internal routines for a public regulator, detailing the linking mechanisms needed to govern AI with respect to safety, ethics, and legitimacy.

5. Discussion

This study aimed to investigate how a high-accountability public institution develops the dynamic capabilities necessary to govern the adoption of transformative technologies, such as AI. Our findings, which reveal a dual framework of capabilities operating at both an internal and ecosystem level, offer several important contributions to the literature on dynamic capabilities and public sector innovation.

5.1. Answer to the Research Question/Objective

AI adoption advanced when three capability-aligned routine sets operated in concert. Sensing routines synthesized cultural, technical, and skill baselines through internal readiness auditing and a formal intake and triage of use cases (see §2.2; Table 1; Appendix A). Institutionalized seizing routines enabled lightweight, gated experiments with defined data privileges and model risk co-governance, allowing for value checks before scale decisions (see Results §§3.1–3.3 and Table 2, Table 3 and Table 4). Reconfiguring routines consolidated shared data services, clarified product ownership, and delivered targeted upskilling, ultimately reshaping workflows within the “XYZ Government Financial Regulator” without large, monolithic programs (see Results §§3.1–3.3 and Table 2, Table 3 and Table 4). Acting together, these routines shortened cycle times and increased the proportion of proposals that progressed beyond pilots, while maintaining the accountability expected of a public regulator.

5.2. Positioning Relative to the Literature

Our results align with research that links dynamic capabilities to digital transformation [10,58]. Capability-aligned sensing, seizing, and reconfiguring routines advanced AI initiatives from intake to gated pilots and, where warranted, to scaled institutional use. In public settings, the portfolio-style intake and staged experimentation documented in this case align with accounts of agile public innovation and data-driven supervision [1,46,59]. The progression through formal gates also resonates with stage-model perspectives on government transformation [60]. Governance moves that pair product ownership with model risk oversight align with digital innovation guidance that stresses role clarity and platform thinking [46,59].
At the same time, our findings diverge from studies that portray innovation labs as struggling to scale beyond pilots [61]. In this case, scale decisions were taken by permanent committees using value- and risk-based criteria embedded in model risk governance and product ownership, shifting authority from peripheral lab enthusiasm to durable, institutional forums. Prior research also highlights ethics and accountability as barriers to deployment [50,61]. Here, co-governance reduced delay by integrating model risk review into the seizing gates, turning ethics into a concurrent constraint rather than an ex-post veto and enabling earlier value checks and predictable escalation paths. At the same time, equity risks remain central in public sector AI, as shared datasets can encode historical disparities unless they are audited for representativeness and disparate impact, a concern emphasized in algorithmic governance and AI ethics work [62,63]. Our findings therefore complement this literature by situating fairness review inside the gates rather than after deployment. Finally, while some public sector accounts emphasize skill shortages and procurement frictions as dominant bottlenecks [64], and private sector research often ties dynamic capabilities to performance through firm resources rather than institutionalization under governance [65,66,67,68], this case shows that targeted training, shared components, and reusable integration patterns mitigate these frictions. Platformization (e.g., investments in shared platform services) and clear ownership reduced one-off integration costs and created repeatable paths to scale [54], suggesting that capability investments at the platform/governance level can convert hard constraints into manageable ones in regulatory contexts.

5.3. Theoretical Contribution

This study refines dynamic capabilities theory for public regulators by specifying actionable micro-routines and governance gates that operationalize the processes of sensing, seizing, and reconfiguring. Readiness auditing converts diffuse cultural, technical, and skill conditions into investment-grade sensing signals. An explicit intake and triage process, with scored criteria, sequences options and balances expected value against change effort and risk, carrying the organization from sensing to seizing. Meanwhile, model risk co-governance embeds accountability within seizing, reducing delay and rework. Finally, shared data services and clear product ownership enable reconfiguring without large, monolithic programs. Mechanistically, cyclical readiness auditing, which incorporates data provenance and representativeness checks, contributes to sensing through structured environmental scanning and adaptive learning routines, while scored intake and triage links sensing to seizing via selection and prioritization rules. Model risk gates instantiate seizing by codifying decision and risk-mitigation rules for bias, drift, and explainability. Shared data services and clarified ownership support reconfiguration through asset orchestration, auditable reuse, and co-specialization [10], which in turn inform the next cycle of sensing.
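To make the drift rule concrete, the following sketch shows how a model risk gate might codify a drift check using the population stability index (PSI). The thresholds (0.1/0.25) are common industry conventions, and the score-bucket shares are invented for illustration; neither is drawn from the case evidence.

```python
import math

# Hypothetical sketch of a model risk "gate" codifying a drift rule.
# PSI thresholds (0.1 / 0.25) follow common industry conventions and are
# illustrative assumptions, not the regulator's documented criteria.

def population_stability_index(expected: list, actual: list) -> float:
    """PSI between baseline and current score distributions (same bins)."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

def drift_gate(expected: list, actual: list) -> str:
    """Map measured drift to an auditable gate decision."""
    psi = population_stability_index(expected, actual)
    if psi < 0.1:
        return "pass"    # stable: proceed
    if psi < 0.25:
        return "review"  # moderate drift: escalate to the model risk committee
    return "block"       # significant drift: halt scale-up

baseline = [0.25, 0.25, 0.25, 0.25]  # score-bucket shares at approval
current = [0.22, 0.27, 0.26, 0.25]   # shares observed in production
print(drift_gate(baseline, current))  # prints "pass"
```

Encoding the thresholds as an explicit decision rule gives the gate a predictable escalation path and an auditable decision trail, in line with the embedded-oversight argument above.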
These refinements elaborate on the microfoundations emphasized in foundational dynamic capabilities work and adapt them to the procedural and accountability constraints of public organizations [9,10,69]. They also situate assurance within delivery, aligning with public innovation and algorithmic governance arguments for embedded oversight rather than post hoc review [70,71,72], and with government-focused accounts that connect capabilities to staged transformation and IT governance by showing how gates function as design features that coordinate roles and decisions across the portfolio [29,31,60,73]. Finally, we respond to recent calls to specify how public bodies mobilize and assess dynamic capabilities for digital/AI programs by linking governance arrangements and platform choices [15] to observable institutionalization outcomes, not just performance correlations [74]. Taken together, these refinements recast governance as an enabling architecture that integrates accountability into delivery and accelerates responsible scale-up in regulatory contexts.

5.4. Practical Contribution

Managers can translate these insights into concrete actions. At the strategic tier, regulatory executives should establish a Model Risk Committee with defined gates and pair it with product ownership to clarify decisions, giving leadership a repeatable path from pilot to scale while maintaining accountability. In portfolio management, program and product leads should conduct quarterly use-case triage, scoring value, data readiness, and change effort, and then publish a ranked pipeline so that operational teams can focus their scarce capacity on the highest-value, most-ready opportunities. On the platform side, data and IT leaders should prioritize shared data services, access controls, and representativeness checks, and document reusable integration patterns to reduce pilot-to-production rework and shorten cycle times. To build capacity, HR and training units should deliver targeted, role-based upskilling tied to the pipeline, tracking capability growth over time to ensure skills advance in line with portfolio needs. Policy, legal, and compliance teams should mandate and supervise ongoing drift/fairness monitoring with audit trails accessible to oversight bodies. They should also update AI standard operating procedures (SOPs) and model risk policies early and iteratively, aligning criteria with the gating process to prevent late-stage stalls and increase predictability for delivery teams. At the policy layer, model risk co-governance should be formalized through inter-agency MOUs, standardized DPIA templates, and gate criteria harmonized with risk-tiered provisions in emerging AI regulation (e.g., EU AI Act risk classes). Finally, following staged governance [60], organizations should codify the gates, pilot to limited production, and then scale to full production, with role-specific accountabilities, evidence requirements, and auditable decision trails creating a durable, repeatable path from pilot to scale.
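The quarterly triage described above can be sketched as a simple weighted-scoring routine that produces a ranked pipeline. The weights, scoring scales, and example use cases below are illustrative assumptions, not practices documented in the case.

```python
# Hypothetical sketch of quarterly use-case triage: score each proposal on
# value, data readiness, and change effort, then publish a ranked pipeline.
# Weights and example use cases are illustrative assumptions.

WEIGHTS = {"value": 0.5, "data_readiness": 0.3, "change_effort": 0.2}

def triage_score(use_case: dict) -> float:
    """Weighted score on a 0-1 scale; lower change effort scores higher."""
    return (
        WEIGHTS["value"] * use_case["value"]
        + WEIGHTS["data_readiness"] * use_case["data_readiness"]
        + WEIGHTS["change_effort"] * (1 - use_case["change_effort"])
    )

proposals = [
    {"name": "AML transaction screening", "value": 0.9,
     "data_readiness": 0.7, "change_effort": 0.4},
    {"name": "Complaint-text classification", "value": 0.6,
     "data_readiness": 0.9, "change_effort": 0.2},
    {"name": "Legacy report automation", "value": 0.5,
     "data_readiness": 0.4, "change_effort": 0.7},
]

# Rank and publish the pipeline, highest score first.
pipeline = sorted(proposals, key=triage_score, reverse=True)
for rank, p in enumerate(pipeline, start=1):
    print(f"{rank}. {p['name']}: {triage_score(p):.2f}")
```

In practice, the weights themselves would be governed artifacts, reviewed alongside the gate criteria so that the published ranking remains defensible to oversight bodies.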

6. Conclusions

6.1. Summary

Public agencies face rapid technological change under the constraints of accountability. Fragmented pilots and late oversight raise costs and erode trust; a governed capability sequence helps reverse these effects. This study aimed to explain how a national financial regulator (anonymized as the “XYZ Government Financial Regulator”) organized the responsible adoption of AI. This study’s objective was to identify the micro-routines and governance gates that enable adoption and to demonstrate how they align with the dynamic capabilities of sensing, seizing, and reconfiguring. Accordingly, this study posed the following research questions: RQ1—How do internal routines and governance arrangements in a high-accountability financial regulator enable responsible AI adoption from intake to scale? RQ2—How do these routines and arrangements operationalize the dynamic capabilities of sensing, seizing, and reconfiguring?
Through an in-depth case study of a national financial regulator’s adoption of AI, this study found that success hinges on a dual set of sensing, seizing, and reconfiguring capabilities operating at both an internal and an ecosystem level. It also identified the specific micro-routines that underpin these capabilities and proposed a framework that illustrates how they work in concert to balance innovation with governance. The combined approach shortens cycle time while preserving prudential goals.
This study adapts dynamic capabilities theory for the public sector by unpacking the micro-mechanisms that underpin such capabilities and by underlining the essential function of ecosystem orchestration. For public managers and policymakers, the results offer a practical framework for developing the organizational capacity necessary to navigate technological disruption effectively. Practitioners can stage the work as follows: triage and readiness auditing; safe-to-learn pilots under joint model risk review; and scaling through shared services, standardized contracts, and clear product ownership.

6.2. Limitations

While this study offers rich, context-specific insights into the dynamic capabilities underpinning responsible AI adoption in the public sector, several limitations should be acknowledged. First, as a single case study, its external validity is constrained; we emphasize analytic generalization to mechanisms and specify boundary conditions (prudential mandate, risk appetite, platform maturity, governance structure). The routines and capabilities identified may reflect institutional or national characteristics that differ in other governance settings. Second, as with most qualitative research, the analysis is interpretive and may be influenced by researcher bias, despite efforts to ensure rigor through triangulation and transparency. Finally, this study captures a specific moment in an evolving technological and regulatory landscape; future research could adopt comparative or longitudinal designs to explore how these capabilities develop across different institutions and over time. Transferability is strongest for regulators with similar mandates and platform maturity; primary data and interviews with role-holders (e.g., product owners, model risk reviewers) are a priority for future research.

6.3. Future Research

First, to address the single-case limitation, multi-case and comparative designs across regulators and jurisdictions should be considered to examine whether the same micro-routines and governance gates apply beyond this setting. Studies could purposively sample agencies that differ in statutory mandates, risk appetites, resources, and platform maturity to identify boundary conditions. Additionally, triangulation through interviews or primary data would improve methodological robustness and the prospects for theoretical generalization.
Second, longitudinal studies should track how sensing, seizing, and reconfiguring capabilities evolve and how shifts in routines or gate criteria impact outcomes. Panels combining administrative logs (minutes, progress reports, model risk memos) with governance records can estimate maturation trajectories; event history models can analyze time to scale.
Third, to move from association to stronger causal inference, quasi-experimental and experimental designs, such as policy changes that alter gate thresholds or reviewer composition, can be used to identify which governance choices yield the most significant improvements in cycle time, escalation predictability, or quality.
Fourth, building on this study, future research should refine and validate public-value metrics beyond cycle time and progression rates, operationalizing the predictability of escalation, rework cost, and audit outcomes, and link these indicators to qualitative evidence through mixed-methods designs.

Author Contributions

Conceptualization, C.A.M.P. and L.A.C.; methodology, C.A.M.P. and L.A.C.; formal analysis, C.A.M.P. and L.A.C.; investigation, C.A.M.P. and L.A.C.; writing—original draft preparation, C.A.M.P. and L.A.C.; writing—review and editing, C.A.M.P., L.A.C., L.B. and M.G.-D.; supervision, L.B.; project administration, C.A.M.P. and L.A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This project entailed secondary analysis of anonymized, non-identifiable organizational documents produced in the normal course of agency operations. In accordance with institutional practice, Purdue University did not conduct an IRB review because the project did not meet the definition of human subjects research. All procedures adhered to applicable Colombian regulations, including the supervisory-confidentiality provisions of the Colombian Financial Statute (Estatuto Orgánico del Sistema Financiero, EOSF) governing information obtained through inspection and supervision, as well as Law 1581 of 2012 and Decree 1377 of 2013 (personal-data protection) and Law 1266 of 2008 (financial habeas data).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to institutional confidentiality and ethical restrictions. The data include internal documents and communications from a governmental financial regulator that are not publicly accessible.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Source Log

  • [IT-Assessment-2024, p. 3]—IT & Legacy Systems Assessment (April 2024), p. 3;
  • [IT-Assessment-2024, p. 6]—same doc, p. 6;
  • [CultureSurvey-2024, Q14]—Employee Culture Survey (July 2024), item Q14;
  • [TNA-2024-05, p. 2]—Training Needs Assessment (May 2024), p. 2;
  • [HR-EthicsMemo-2024-05, p. 2]—HR/Ethics Memo on AI & Transparency (May 2024), p. 2;
  • [MIN-2024-05-22]—Use-Case Intake Workshop Minutes (22 May 2024);
  • [STRAT-2024, §2.3]—Supervisory Strategy 2024–2026, §2.3 (Analytics Priorities);
  • [AML-2024-05, §1.2]—AML/CFT Risk Memo (May 2024), §1.2;
  • [DPIA-2024-05, Checklist]—DPIA Pre-Screen Checklist (May 2024);
  • [CAR-2024-03, Slide 9]—Cross-Agency AI Roadmap Deck (March 2024), slide 9;
  • [RFI-2024-04]—Public RFI: AI for Supervisory Risk Signals (April 2024);
  • [TechSprint-Call-2024]—Tech Sprint Call for Participation (March 2024);
  • [CSC-2024-06, p. 2]—Civil Society Consultation Summary (June 2024), p. 2;
  • [PRG-2024-06, p. 4]—AI Sandbox Progress Report (June 2024), p. 4;
  • [LOG-2024-06, §4.1]—System Logs & Model QA Summary (June 2024), §4.1;
  • [UX-Eval-2024-07, p. 2]—UX Evaluation Report (July 2024), p. 2;
  • [DAS-2024-06, §3.1]—Data Architecture Spec (APIs/Schemas) (June 2024), §3.1;
  • [RDG-2024-06, v1.2]—Reference Dataset Guide v1.2 (June 2024);
  • [CCT-2024-06, §4–§7]—Collaboration Contract Template (Data/Audit/Transparency) (June 2024), §§4–7;
  • [ORG-2024-04, p. 1]—Org Announcement: AI Roles & Committees (April 2024), p. 1;
  • [MRC-2024-05, Charter]—Model Risk Committee Charter (May 2024);
  • [POL-2024-06, §5]—Policy Update: AI Use & SOPs (June 2024), §5;
  • [MLOps-Metrics-2024-05, Table S3]—MLOps Metrics Compendium (May 2024), Table S3;
  • [KPI-2024-06, Dashboard]—AI KPI Dashboard (June 2024);
  • [AIR-2024-07, p. 5]—AI Implementation Roadmap (July 2024), p. 5;
  • [TRN-2024-07, p. 3]—Workforce Training Plan (July 2024), p. 3;
  • [GOV-2024-05, p. 2]—Multi-Party Governance Board Charter (May 2024), p. 2;
  • [OAR-2024-07, Index]—Open Artifacts Repository Index (July 2024);
  • [HBK-2024-07, Draft §3]—Supervisory Handbook Draft (July 2024), §3;
  • [PP-2024-04, §1.1]—AI Program Project Plan (April 2024), §1.1;
  • [MIN-2024-03-12, Item 4]—Steering Committee Minutes (12 March 2024), Item 4;
  • [TS-2024-06, §2.2]—Technical Specifications: Data Types & Flows (June 2024), §2.2;
  • [ER-2024-07, p. 6]—Evaluation Report: AI Pilots (July 2024), p. 6;
  • [MLT-2024-07, p. 1]—Multilateral Benchmark Note (July 2024), p. 1;
  • [PROC-2024-06, p. 2]—Procurement Pattern Analysis (June 2024), p. 2;
  • [LEG-2024-07, p. 4]—Legacy System Retirement Plan (July 2024), p. 4;
  • [VPQ-2024-04, p. 1]—Vendor Pre-Qualification List (April 2024), p. 1;
  • [EX-2024-07, p. 3]—Incident Exercise Playbook (July 2024), p. 3;
  • [CFS-2024-06, p. 2]—Citizen Feedback Summary (June 2024), p. 2.

References

  1. Mergel, I.; Edelmann, N.; Haug, N. Defining Digital Transformation: Results from Expert Interviews. Gov. Inf. Q. 2019, 36, 101385. [Google Scholar] [CrossRef]
  2. Sousa, T.B.D.; Guerrini, F.M.; Oliveira, M.R.D.; Cantorani, J.R.H. Industry 4.0 and Collaborative Networks: A Goals- and Rules-Oriented Approach Using the 4EM Method. Platforms 2025, 3, 14. [Google Scholar] [CrossRef]
  3. Di Giulio, M.; Vecchi, G. Implementing Digitalization in the Public Sector. Technologies, Agency, and Governance. Public Policy Adm. 2023, 38, 133–158. [Google Scholar] [CrossRef]
  4. Reuters. US Appeals Court Strikes down SEC Rule on ‘Audit Trail’ Funding. Reuters. 25 July 2025. Available online: https://www.reuters.com/legal/government/us-appeals-court-strikes-down-sec-rule-audit-trail-funding-2025-07-25/ (accessed on 1 December 2025).
  5. Al-Besher, A.; Kumar, K. Use of Artificial Intelligence to Enhance E-Government Services. Meas. Sens. 2022, 24, 100484. [Google Scholar] [CrossRef]
  6. Alhosani, K.; Alhashmi, S.M. Opportunities, Challenges, and Benefits of AI Innovation in Government Services: A Review. Discov. Artif. Intell. 2024, 4, 18. [Google Scholar] [CrossRef]
  7. Agbese, M.; Mohanani, R.; Khan, A.; Abrahamsson, P. Implementing AI Ethics: Making Sense of the Ethical Requirements. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; ACM: Oulu, Finland, 2023; pp. 62–71. [Google Scholar]
  8. Rane, N.L.; Desai, P.; Choudhary, S. Challenges of Implementing Artificial Intelligence for Smart and Sustainable Industry: Technological, Economic, and Regulatory Barriers. In Artificial Intelligence and Industry in Society 5.0; Deep Science Publishing: San Francisco, CA, USA, 2024. [Google Scholar]
  9. Miller, M.; Ghaffarzadegan, N. Dynamic Capabilities in the Public Sector: A Systematic Literature Review. Int. J. Public Sect. Manag. 2025, 38, 717–734. [Google Scholar] [CrossRef]
  10. Teece, D.J. Explicating Dynamic Capabilities: The Nature and Microfoundations of (Sustainable) Enterprise Performance. Strateg. Manag. J. 2007, 28, 1319–1350. [Google Scholar] [CrossRef]
  11. Kothandapani, H.P. Automating Financial Compliance with AI: A New Era in Regulatory Technology (RegTech). Int. J. Sci. Res. Arch. 2024, 11, 2646–2659. [Google Scholar] [CrossRef]
  12. Bolton, M.; Mintrom, M. RegTech and Creating Public Value: Opportunities and Challenges. Policy Des. Pract. 2023, 6, 266–282. [Google Scholar] [CrossRef]
  13. El Khoury, R.; Alshater, M.M.; Joshipura, M. RegTech Advancements-a Comprehensive Review of Its Evolution, Challenges, and Implications for Financial Regulation and Compliance. J. Financ. Report. Account. 2025, 23, 1450–1485. [Google Scholar] [CrossRef]
  14. Grassi, L.; Lanfranchi, D. RegTech in Public and Private Sectors: The Nexus between Data, Technology and Regulation. J. Ind. Bus. Econ. 2022, 49, 441–479. [Google Scholar] [CrossRef]
  15. Ben Youssef, A. Introducing Platforms: A Transdisciplinary Journal on Platform Management, Services and Policy and All Related Research. Platforms 2022, 1, 1–4. [Google Scholar] [CrossRef]
  16. Su, R.; Li, N. Environmental, Social, and Governance Performance, Platform Governance, and Value Creation of Platform Enterprises. Sustainability 2024, 16, 7251. [Google Scholar] [CrossRef]
  17. Teece, D.J.; Pisano, G.; Shuen, A. Dynamic Capabilities and Strategic Management. Strateg. Manag. J. 1997, 18, 509–533. [Google Scholar] [CrossRef]
  18. Eisenhardt, K.M.; Martin, J.A. Dynamic Capabilities: What Are They? Strateg. Manag. J. 2000, 21, 1105–1121. [Google Scholar] [CrossRef]
  19. Cath, C. Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2018, 376, 20180080. [Google Scholar] [CrossRef]
  20. Cath, C.J.N. Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach. SSRN Electron. J. 2016. [Google Scholar] [CrossRef]
  21. Mergel, I. Digital Service Teams in Government. Gov. Inf. Q. 2019, 36, 101389. [Google Scholar] [CrossRef]
  22. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson Series in Artificial Intelligence; Pearson: Hoboken, NJ, USA, 2021; ISBN 978-0-13-461099-3. [Google Scholar]
  23. Muntanyola-Saura, D. Book Review: Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Int. Sociol. 2016, 31, 626–628. [Google Scholar] [CrossRef]
  24. Weber, M.; Engert, M.; Schaffer, N.; Weking, J.; Krcmar, H. Organizational Capabilities for AI Implementation—Coping with Inscrutability and Data Dependency in AI. Inf. Syst. Front. 2023, 25, 1549–1569. [Google Scholar] [CrossRef]
  25. De Santis, F. Artificial Intelligence to Support Business Decisions. In Artificial Intelligence in Accounting and Auditing; Springer Nature: Cham, Switzerland, 2024; pp. 107–137. ISBN 978-3-031-71370-5. [Google Scholar]
  26. Kim, J. Case Study about Efficient AI(Artificial Intelligence) Implementation Strategy. Int. J. Adv. Res. Big Data Manag. Syst. 2019, 3, 1–6. [Google Scholar] [CrossRef]
  27. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. In Artificial Intelligence Safety and Security, 1st ed.; Yampolskiy, R.V., Ed.; Chapman and Hall/CRC: Boca Raton, FL, USA; CRC Press/Taylor & Francis Group: Boca Raton, FL, USA, 2018; pp. 57–69. ISBN 978-1-351-25138-9. [Google Scholar]
  28. Jarrahi, M.H.; Askay, D.; Eshraghi, A.; Smith, P. Artificial Intelligence and Knowledge Management: A Partnership between Human and AI. Bus. Horiz. 2023, 66, 87–99. [Google Scholar] [CrossRef]
  29. Lichtenthaler, U. Five Maturity Levels of Managing AI: From Isolated Ignorance to Integrated Intelligence. J. Innov. Manag. 2020, 8, 39–50. [Google Scholar] [CrossRef]
  30. Chen, Z.; Balan, M.M.; Brown, K. Language Models Are Few-Shot Learners for Prognostic Prediction. arXiv 2023, arXiv:2302.12692. [Google Scholar] [CrossRef]
  31. Javaid, H.A. How Artificial Intelligence Is Revolutionizing Fraud Detection in Financial Services. Innov. Eng. Sci. J. 2024, 4, 1–7. [Google Scholar]
  32. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  33. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2022, 54, 1–35. [Google Scholar] [CrossRef]
  34. Adam, M.; Wessel, M.; Benlian, A. AI-Based Chatbots in Customer Service and Their Effects on User Compliance. Electron. Mark. 2021, 31, 427–445. [Google Scholar] [CrossRef]
  35. Peng, C.; Van Doorn, J.; Eggers, F.; Wieringa, J.E. The Effect of Required Warmth on Consumer Acceptance of Artificial Intelligence in Service: The Moderating Role of AI-Human Collaboration. Int. J. Inf. Manag. 2022, 66, 102533. [Google Scholar] [CrossRef]
  36. Fares, O.H.; Butt, I.; Lee, S.H.M. Utilization of Artificial Intelligence in the Banking Sector: A Systematic Literature Review. J. Financ. Serv. Mark. 2023, 28, 835–852. [Google Scholar] [CrossRef]
Figure 1. A dynamic capabilities framework for AI integration in the public sector. Note: Dynamic capabilities for regulatory AI orchestration. Each band represents a capability stage—sensing, seizing, and reconfiguring—and is split into internal routines (left) and ecosystem routines (right). The vertical solid arrows show the maturation path of sensing → seizing → reconfiguring. The horizontal double-headed arrows indicate required alignment between internal and ecosystem routines at each stage. “DPIA pre-screen” refers to a Data Protection Impact Assessment pre-screen, a preliminary checklist used to determine if a complete privacy risk assessment is required.
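The caption defines a DPIA pre-screen as a preliminary checklist that determines whether a complete privacy risk assessment is required. As a minimal sketch of how such a triage gate might be encoded — the checklist items and the escalation rule below are illustrative assumptions, not the regulator's actual criteria:

```python
from dataclasses import dataclass


@dataclass
class DpiaPreScreen:
    """Answers to a hypothetical DPIA pre-screen checklist (illustrative only)."""
    uses_personal_data: bool
    automated_decision_making: bool
    large_scale_processing: bool
    novel_technology: bool  # e.g., an AI/ML component

    def full_dpia_required(self) -> bool:
        # A common heuristic: any high-risk trigger escalates to a full assessment.
        return any([
            self.uses_personal_data and self.automated_decision_making,
            self.large_scale_processing,
            self.novel_technology and self.uses_personal_data,
        ])


screen = DpiaPreScreen(
    uses_personal_data=True,
    automated_decision_making=True,
    large_scale_processing=False,
    novel_technology=True,
)
print(screen.full_dpia_required())  # True: automated decisions on personal data
```

In practice the trigger list would mirror the regulator's data-protection framework; the point is that a pre-screen is a cheap boolean gate in front of the expensive full assessment.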
Table 1. Case study context and corpus summary.
Panel A: Case and Period

| Item | Description |
| --- | --- |
| Case | XYZ Government Financial Regulator — national public institution overseeing financial markets; actively integrating AI. |
| Period covered | March–July 2024 |

Panel B: Corpus Summary

| Item/Category | Description/Tag Prefix | Count | Notes |
| --- | --- | --- | --- |
| Project plans | PP-YYYY-MM (e.g., PP-2024-04) | 5 | Program scope, milestones |
| Technical specs/data architecture | TS/DAS-YYYY-MM (e.g., DAS-2024-06) | 5 | Schemas, APIs, data flows |
| Meeting minutes | MIN-YYYY-MM-DD | 20 | Steering, intake, governance |
| Progress reports | PRG-YYYY-MM | 10 | Sandbox/pilot status |
| Evaluation reports | ER-YYYY-MM | 5 | Usability, outcomes |
| System logs | LOG-YYYY-MM, §… | ≈300 | Model QA, latency, errors |
| End-user feedback/UX | UX-Eval-YYYY-MM, p… | 40 | Forms, focus notes |
| Surveys and assessments | CultureSurvey-2024; IT-Assessment-2024; TNA-2024-05 | 3 (1 each) | Org readiness baselines |
| Policy/governance artifacts | MRC-2024-05; POL-2024-06; GOV-2024-05 | 3 (1 each) | Charters, SOP updates |
| Ecosystem signals | RFI-2024-04; TechSprint-Call-2024; CSC-2024-06; CAR-2024-03 | 4 (1 each) | Market/civil society inputs |
To maintain traceability, we use the artifact tags above in the Results and evidence tables; the full catalog appears in Appendix A: Source Log.
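The tag scheme in Table 1 (a prefix plus YYYY-MM, with meeting minutes additionally carrying a day component) lends itself to automated validation when compiling a source log. A minimal sketch, assuming only the patterns visible in the table — the function name and returned fields are hypothetical:

```python
import re

# Illustrative patterns for the corpus tag scheme in Table 1.
MIN_PATTERN = re.compile(r"^MIN-(\d{4})-(\d{2})-(\d{2})$")          # meeting minutes
GENERIC_PATTERN = re.compile(r"^([A-Za-z-]+)-(\d{4})-(\d{2})$")      # PP, DAS, PRG, ER, ...


def parse_artifact_tag(tag: str) -> dict:
    """Return the prefix and period encoded in an artifact tag, or raise ValueError."""
    m = MIN_PATTERN.match(tag)
    if m:
        return {"prefix": "MIN", "year": int(m.group(1)),
                "month": int(m.group(2)), "day": int(m.group(3))}
    m = GENERIC_PATTERN.match(tag)
    if m:
        return {"prefix": m.group(1), "year": int(m.group(2)), "month": int(m.group(3))}
    raise ValueError(f"Unrecognized artifact tag: {tag!r}")


print(parse_artifact_tag("PP-2024-04"))      # {'prefix': 'PP', 'year': 2024, 'month': 4}
print(parse_artifact_tag("MIN-2024-05-22"))  # includes the day component
```

Validating tags at intake time keeps the evidence tables and the Appendix A source log mechanically consistent.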
Table 2. Observable activities and evidence/quotations for sensing.
| Internal or Ecosystem | Micro-Routine | Observable Activities | Representative Evidence/Quotation |
| --- | --- | --- | --- |
| Internal | Readiness Auditing | Assessing legacy systems and skills | "Only 60% of core systems support real-time data integration." [IT-Assessment-2024, p. 3] |
| Internal | Readiness Auditing | Analyzing employee culture surveys | "82% of respondents feel confident adopting AI with training." [CultureSurvey-2024, Q14] |
| Internal | Use-Case Intake and Triage | Triaging use-cases for ethics and legality | "We are awaiting the updates to the XYZ Government Financial Regulator's code of ethics and integrity so that the aspects of ethics and transparency in Artificial Intelligence (AI) can be integrated…" — DPIA Pre-Screen Checklist/Minutes [DPIA-2024-05, Checklist]; [MIN-2024-05-22] |
| Ecosystem | Convening Stakeholder Meetings | Hosting cross-agency workshops | "Workshop agreed on a shared roadmap for data sharing and model audits." [CAR-2024-03, Slide 9] |
| Ecosystem | Public Problem Signaling | Issuing public RFIs for tech solutions | "migrating all its data to a secure cloud environment in partnership with a recognized technological company." — Meeting Minutes/Public RFI [MIN-2024-03-12, Item 4]; "RFI seeks proposals for secure cloud migration and auditability." [RFI-2024-04] |
| Ecosystem | Public Problem Signaling | Soliciting input from stakeholders | "Shared challenges collaboratively." [CSC-2024-06, p. 1] |
Table 3. Observable activities and evidence/quotations for seizing.
| Internal or Ecosystem | Micro-Routine | Observable Activities | Representative Evidence/Quotation |
| --- | --- | --- | --- |
| Internal | Governed Sandbox Environments | Using sandboxes with anonymized data | "The secure and flexible sandbox platform allowed for the testing of solutions using anonymized and synthetic data, ensuring data protection while fostering public-private collaboration and innovation." — Progress Report [PRG-2024-06, p. 4] |
| Internal | Iterative Feedback and Refinement | Tracking model performance metrics | "updating performance metrics to include measures of trust and efficiency." [MLOps-Metrics-2024-05, Table S3] |
| Internal | Iterative Feedback and Refinement | Systematically collecting user feedback | "Redesigning the legal assistance interface improved its usability rate from 40% to 80% across the organization." — Evaluation Report [UX-Eval-2024-07, p. 2] |
| Ecosystem | Establishing Common Interfaces | Defining standard API specifications | "including structured data… unstructured data… and real-time data captured via application programming interfaces (APIs)." — Defining Standard API and Technical Specifications [DAS-2024-06, §3.1; TS-2024-06, §2.2] |
| Ecosystem | Standardizing Collaboration Contracts | Creating shared reference datasets | "To develop shared datasets under collaboration contracts…" — Reference Dataset Guide [RDG-2024-06, v1.2] |
| Ecosystem | Standardizing Collaboration Contracts | Drafting data use and audit agreements | "The Authority retains audit rights over training data, code artifacts, and logs." — Collaboration Contract Template [CCT-2024-06, §4–§7]; "promoting public-private collaboration." — Evaluation Report [ER-2024-07, p. 6] |
Table 4. Observable activities and evidence/quotations for reconfiguring.
| Internal or Ecosystem | Micro-Routine | Observable Activities | Representative Evidence/Quotation |
| --- | --- | --- | --- |
| Internal | Embedding New Governance Roles | Appointing new roles (e.g., AI Product Owner) | "establishing new governance roles and responsibilities." — Org. Announcement and Model Risk Committee Charter [ORG-2024-04, p. 1]; [MRC-2024-05, Charter] |
| Internal | Updating Policies and Metrics | Updating SOPs and policies | "The organization established centers of innovation and excellence, where the staff is free to experiment with AI technologies… [to empower] staff to experiment with advanced technologies and methodologies." — Policies [POL-2024-06, §5]; [ORG-2024-04, p. 1] |
| Internal | Updating Policies and Metrics | Creating new performance dashboards | "Key performance metrics, including accuracy, precision rate, recall rate, and F1 scores, were utilized to benchmark the effectiveness of AI tools… the F1 score averaged 92% across tested applications." [KPI-2024-06, Dashboard; MLOps-Metrics-2024-05, Table S3] |
| Ecosystem | Creating Multi-Party Governance | Chartering multi-party governance boards | "signing [of] collaboration agreements between the… Regulator and other public sector agencies, advanced learning institutions, and local industries…" — Governance Charter and Project Plan [GOV-2024-05, p. 2; PP-2024-04, §1.1] |
| Ecosystem | Publishing Open Artifacts | Publishing open templates and checklists | "releasing templates and codifying learning within the industry." — Open Artifacts, Handbooks, and Progress Report [OAR-2024-07, Index] |
| Ecosystem | Publishing Open Artifacts | Codifying learnings into handbooks | "collaboration with multilateral organizations to benchmark practices and align with international standards." — Supervisory Handbook Draft [HBK-2024-07, Draft §3; Multilateral Benchmark Note MLT-2024-07, p. 1] |
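Table 4 reports accuracy, precision, recall, and F1 as dashboard KPIs. For reference, all four follow directly from a binary confusion matrix; the sketch below uses hypothetical counts, not data from the study:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, and F1 from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged cases, how many were real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real cases, how many were flagged
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


# Hypothetical counts for an AI supervisory alerting tool:
m = classification_metrics(tp=90, fp=10, fn=10, tn=890)
print(round(m["f1"], 2))  # 0.9
```

Because supervisory datasets are typically imbalanced (few true alerts among many cases), F1 is a more informative benchmark than raw accuracy, which is likely why the dashboard tracks it alongside precision and recall.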